Solaris provides multiple ways to manage storage:
- Solaris Volume Manager (SVM) – Traditional software volume manager
- UFS – Traditional Unix file system
- ZFS – Modern filesystem + volume manager combined
- VTOC / Disk Slices – Disk partitioning method
Understanding how these components work together is important for real-world system administration.
1. Solaris Volume Manager (SVM)
SVM is Solaris’ built-in logical volume manager. It allows combining disks or slices into logical volumes called metadevices.
SVM works at the block level and sits between the physical disk and the filesystem (usually UFS).
It is mainly used in:
- Solaris 8/9/10 environments
- Legacy systems using UFS
- Systems that require software RAID without a hardware RAID controller
1.1 SVM Architecture
SVM consists of:
- State Database Replicas
- Disksets
- Metadevices
- Hot spares
All SVM metadata is stored in special reserved disk areas.
1.2 State Database Replicas
SVM requires at least 3 replicas of its state database.
Purpose:
- Stores configuration of mirrors, stripes, RAID-5
- Tracks disk health
- Maintains mirror synchronization info
Create replicas:
# metadb -a -f -c 3 c1t0d0s7
Check replicas:
# metadb
Best practice:
- Minimum 3 replicas
- Spread across different disks
- Odd number (3, 5, 7)
If a majority of the replicas is lost, SVM will not function.
1.3 Disksets
A diskset is a group of disks managed together.
Used in:
- Sun Cluster
- Multi-host configurations
Commands:
Take ownership:
# metaset -s diskset1 -t
Release:
# metaset -s diskset1 -r
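A diskset must be created and populated before it can be taken or released. A minimal sketch (the hostnames and disk name below are illustrative, not from the source):

```
# Create a diskset shared by two hosts, then add a disk to it:
metaset -s diskset1 -a -h host1 host2
metaset -s diskset1 -a c2t0d0

# Show membership and current ownership:
metaset -s diskset1
```

Note that disks added to a diskset are repartitioned by SVM, so they should not contain data you want to keep.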
1.4 Metadevices
Metadevices are logical block devices created from slices.
Naming convention:
d0, d10, d100
Access paths:
/dev/md/dsk/d10
/dev/md/rdsk/d10
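Once a metadevice exists, it is used like any other block device. A typical sketch, assuming a metadevice d10 and a mount point /u01 (both illustrative):

```
# Build a UFS filesystem on the raw metadevice:
newfs /dev/md/rdsk/d10

# Mount the block device:
mkdir -p /u01
mount /dev/md/dsk/d10 /u01
```

For a permanent mount, the same device pair would go into /etc/vfstab.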
1.5 Types of Metadevices
A. Concatenation (RAID 0 without striping)
Combines slices sequentially.
# metainit d0 2 1 c1t0d0s0 1 c1t1d0s0
No performance gain, no redundancy.
B. Stripe (RAID 0)
Data distributed across disks.
# metainit d1 1 2 c1t0d0s0 c1t1d0s0 -i 32k
- -i sets the interlace (stripe unit size)
- Improves performance
- No redundancy
C. Mirror (RAID 1)
Create submirrors:
# metainit d11 1 1 c1t0d0s0
# metainit d12 1 1 c1t1d0s0
Create mirror:
# metainit d10 -m d11
# metattach d10 d12
Mirror status:
# metastat d10
D. RAID-5
Striping with distributed parity.
# metainit d20 -r c1t0d0s0 c1t1d0s0 c1t2d0s0
Provides fault tolerance for one disk failure.
1.6 Hot Spares
Create hot spare pool:
# metahs -a hsp001 c1t3d0s0
Associate with a submirror:
# metaparam -h hsp001 d11
If disk fails, hot spare replaces it automatically.
1.7 Growing SVM Volumes
Attach new slice:
# metattach d0 c1t2d0s0
Grow the filesystem (can be done while mounted):
# growfs -M /mountpoint /dev/md/rdsk/d0
2. UFS (Unix File System)
UFS is the traditional filesystem in Solaris.
It works on:
- Raw slices
- SVM metadevices
2.1 UFS Structure
UFS contains:
- Superblock
- Cylinder groups
- Inodes
- Data blocks
Each file has:
- Inode number
- Permissions
- Owner
- Size
- Timestamps
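The per-file inode metadata listed above can be inspected on any Unix system with `ls`; a quick illustration (the file name is arbitrary):

```shell
# Create a scratch file and show its inode metadata:
# inode number, permissions, owner, size, timestamp.
cd "$(mktemp -d)"
touch demo.txt
ls -li demo.txt
```

The first column is the inode number; the remaining columns are the attributes stored in the inode itself.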
2.2 UFS Logging
Enable logging for faster crash recovery:
# mount -o logging /dev/dsk/c1t0d0s0 /u01
Or enable permanently in /etc/vfstab.
Logging improves recovery time but adds slight write overhead.
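To make logging permanent, add it to the mount options field of the filesystem's /etc/vfstab entry. A sketch, assuming the same device and a /u01 mount point:

```
#device to mount    device to fsck       mount point  FS type  fsck pass  mount at boot  options
/dev/dsk/c1t0d0s0   /dev/rdsk/c1t0d0s0   /u01         ufs      2          yes            logging
```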
2.3 Tuning UFS
Use tunefs:
# tunefs -a 4 /dev/rdsk/c1t0d0s0
Parameters:
- maxcontig
- rotdelay
- minfree
UFS performance heavily depends on correct tuning.
2.4 UFS Repair
Check filesystem:
# fsck -F ufs /dev/rdsk/c1t0d0s0
If the root filesystem is corrupted, boot to single-user mode (or from installation media) and run fsck from there.
3. ZFS – Modern Storage
ZFS combines:
- Volume manager
- RAID
- Filesystem
- Data integrity system
ZFS replaces SVM + UFS completely.
3.1 ZFS Architecture
ZFS uses:
- Storage pools (zpool)
- Vdevs (virtual devices)
- Datasets (filesystems, volumes, snapshots)
Structure:
Disk → Vdev → Pool → Filesystem
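The layering above can be walked through in a few commands; a sketch using an illustrative pool name and device:

```
# Disk → vdev → pool: a single-disk vdev backing a pool named "tank"
zpool create tank c1t0d0

# Pool → filesystems: datasets form a hierarchy inside the pool
zfs create tank/home
zfs create tank/home/alice

# List the datasets; each is mounted automatically by default
zfs list -r tank
```

Unlike SVM + UFS, there is no separate newfs or vfstab step: creating a dataset makes it immediately usable.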
3.2 Vdev Types
- Single disk
- Mirror
- RAIDZ1 (single parity)
- RAIDZ2 (double parity)
- RAIDZ3 (triple parity)
Example:
# zpool create tank mirror c1t0d0 c1t1d0
3.3 Copy-on-Write (COW)
ZFS never overwrites live data.
Process:
- Writes the new data to free blocks
- Updates metadata to point at the new blocks
- Commits the transaction group atomically
This prevents on-disk corruption if the system crashes mid-write: the old blocks remain valid until the transaction commits.
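One practical consequence of copy-on-write is that snapshots are nearly free: a snapshot just preserves the old block tree instead of copying data. A sketch with illustrative dataset and snapshot names:

```
# Take a snapshot before a risky change:
zfs snapshot tank/data@before_upgrade

# List snapshots; note the near-zero space used initially:
zfs list -t snapshot

# Roll back if the change goes wrong:
zfs rollback tank/data@before_upgrade
```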
3.4 ZFS ARC Cache
ZFS uses RAM for caching (ARC).
Check ARC stats:
# kstat -p zfs:0:arcstats
ARC improves read performance.
3.5 Scrubbing
Checks data integrity:
# zpool scrub tank
ZFS verifies checksums and repairs if mirror/RAIDZ available.
3.6 ZFS Performance Tuning
Set recordsize (important for DB):
# zfs set recordsize=8K tank/db
Enable compression:
# zfs set compression=lz4 tank/data
Disable atime:
# zfs set atime=off tank/data
4. Disk Naming in Solaris
Format:
cXtYdZsS
Example:
c1t0d0s0
Meaning:
- c1 → Controller 1
- t0 → Target 0
- d0 → Disk 0
- s0 → Slice 0
5. Slices (Partitions)
Solaris traditionally supports slices 0–7 (and slice 8 reserved).
Important slices:
- s0 → boot or root
- s1 → swap
- s2 → backup slice (by convention represents the entire disk on VTOC-labeled disks)
- s3–s7 → other filesystems
6. VTOC (Volume Table of Contents)
VTOC stores:
- Slice number
- Tag (root, swap, usr, backup)
- Flag (wm, wu)
- Start sector
- Size
View VTOC:
# prtvtoc /dev/rdsk/c1t0d0s2
VTOC labels are limited to disks of at most 2 TB.
For larger disks, Solaris uses an EFI label.
7. EFI Partition Table
Used for disks larger than 2TB.
Create EFI label:
# format
format> label
EFI removes slice limitations and supports large storage.
8. Swap Management
View swap:
# swap -l
Add swap:
# swap -a /dev/dsk/c1t0d0s1
ZFS swap:
# zfs create -V 4G tank/swap
# swap -a /dev/zvol/dsk/tank/swap
9. Boot Disk Requirements
The boot disk must contain:
- Boot block
- Solaris kernel
- Root filesystem
Install boot block:
# installboot /usr/platform/.../lib/fs/ufs/bootblk /dev/rdsk/c1t0d0s0
ZFS boot uses:
# bootadm install-bootloader
10. Whole Disk vs Slice Usage
UFS + SVM:
- Requires manual slice management.
ZFS:
- Can use whole disks directly.
- Automatically writes an EFI label when given a whole disk.
- Easier administration.
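The difference is visible at pool-creation time; a sketch with illustrative device names:

```
# Whole disk: ZFS labels and manages the entire device itself.
zpool create tank c1t0d0

# Single slice: also works, but the slice must already exist
# (partitioned via format), and ZFS manages only that slice.
zpool create tank2 c1t1d0s0
```

Whole-disk usage is generally preferred for ZFS data pools, since it avoids manual partitioning entirely.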
11. SVM vs ZFS Comparison (Conceptual)
| Feature | SVM + UFS | ZFS |
|---|---|---|
| Volume Manager | Separate | Built-in |
| Snapshots | Limited (UFS fssnap) | Yes |
| Checksums | No | Yes |
| RAID | Yes | Yes |
| Auto-healing | No | Yes |
| Compression | No | Yes |
| Max Disk Size | Limited (VTOC) | Very Large |
12. Recommended Usage Today
- Use ZFS for new systems.
- Use SVM only for legacy environments.
- Avoid mixing SVM and ZFS on the same disks.
- Use mirrors for the root filesystem.
- Run regular ZFS scrubs.
- Keep multiple SVM state database replicas.
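Regular scrubs can be scheduled from root's crontab; an illustrative entry (pool name and schedule are examples, not from the source):

```
# Scrub the pool "tank" at 03:00 on the first day of each month:
0 3 1 * * /usr/sbin/zpool scrub tank
```

Check the result afterwards with `zpool status tank`, which reports scrub progress and any repaired errors.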