
RHEL Linux: Redundant Array of Independent Disks (RAID)

RAID (Redundant Array of Independent Disks) transforms multiple physical drives into a single logical unit, boosting redundancy (data protection from failures), performance (faster reads/writes via striping), or capacity (pooling storage). In Linux, software RAID, powered by the kernel's md driver and the mdadm tool, offers a cost-effective alternative to pricey hardware controllers. It's ideal for home servers, NAS builds, or cloud VMs where you control the disks.

Whether you're setting up a mirrored backup array or a high-speed striped volume, this guide walks you through RAID levels, configuration, monitoring, and recovery. Warning: Always back up data before experimenting—RAID isn't a backup!

Key RAID Levels in Linux: Which One Fits Your Needs?
Linux supports these common levels via mdadm. Choose based on your priorities:

RAID 0 (Striping): Splits data across disks for max speed.
Pros: Doubles read/write throughput (e.g., great for video editing).
Cons: No redundancy—one failure wipes everything.
Min disks: 2. Use case: Temporary high-perf scratch space.

RAID 1 (Mirroring): Duplicates data across disks.
Pros: Simple, fast reads, survives one failure per mirror set.
Cons: 50% capacity loss.
Min disks: 2. Use case: Critical boot/OS drives.

RAID 5 (Striping + Parity): Distributes parity for fault tolerance.
Pros: Balances speed/redundancy; usable capacity = (n-1) disks.
Cons: Writes slow down due to parity math; one failure max.
Min disks: 3. Use case: File servers with moderate I/O.

RAID 6: RAID 5 with double parity.
Pros: Survives two failures; safer for large drives (e.g., 10TB+ HDDs).
Cons: Higher write overhead; usable = (n-2) disks.
Min disks: 4. Use case: Enterprise NAS.

RAID 10 (1+0: Mirror + Stripe): Stripes data across mirrored pairs.
Pros: Excellent speed + redundancy (survives one failure per mirror pair, up to half the disks).
Cons: 50% capacity loss; requires an even number of disks.
Min disks: 4. Use case: Databases needing low latency.

RAID Level | Min Disks | Redundancy | Perf Focus | Capacity Efficiency | Best For
0          | 2         | None       | High       | 100%                | Speed only
1          | 2         | 1 failure  | Read-heavy | 50%                 | Reliability
5          | 3         | 1 failure  | Balanced   | ~67-80%             | Cost-effective storage
6          | 4         | 2 failures | Balanced   | ~50-75%             | Large arrays
10         | 4         | Multiple   | Very high  | 50%                 | Perf + safety

For example, six 4TB disks yield 24TB usable in RAID 0, 20TB in RAID 5 ((6-1) x 4TB), 16TB in RAID 6 ((6-2) x 4TB), and 12TB in RAID 10.

Pro Tip: For bootable RAID, create the array during installation, then make sure /etc/mdadm.conf, the initramfs, and GRUB all know about it so the array assembles at boot.

Prerequisites Before Starting
  • Hardware: 2+ identical disks (e.g., /dev/sdb, /dev/sdc). Check with lsblk or fdisk -l, then wipe them with wipefs -a /dev/sdX (see the prep sketch after this list).
  • Software: mdadm (install via sudo dnf install mdadm on RHEL/Fedora or sudo apt install mdadm on Debian/Ubuntu).
  • Backup: RAID protects against disk failure, not deletion, ransomware, or accidental wipes.
  • Kernel Support: Enabled by default in modern distros (CentOS 7+, Ubuntu 20.04+).
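
A quick prep pass for two spare disks; a sketch, assuming /dev/sdb and /dev/sdc are empty and expendable, matching the examples below:
# lsblk -o NAME,SIZE,TYPE,MOUNTPOINT  # confirm which disks are unused
# wipefs -a /dev/sdb /dev/sdc         # destroy old filesystem/RAID signatures -- irreversible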
Creating a Software RAID Array

1. Create the Array
Assemble with mdadm --create. Example: RAID 1 mirror.
# mdadm --create --verbose /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
For RAID 5 (3 disks):
# mdadm --create --verbose /dev/md0 --level=5 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd
The --verbose flag shows progress. Arrays appear as /dev/md0, /dev/md1, etc.
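The same pattern extends to other levels; for example, a RAID 10 array over four disks (assuming /dev/sdb through /dev/sde are free):
# mdadm --create --verbose /dev/md0 --level=10 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde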

2. Verify Creation
# cat /proc/mdstat
# mdadm --detail /dev/md0
Look for "clean" or "active sync" status. Initial sync can take hours for large drives.
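To follow the sync without retyping, or to make a script block until it finishes, a sketch using standard mdadm flags:
# watch -n 5 cat /proc/mdstat  # refresh the status every 5 seconds
# mdadm --wait /dev/md0        # return only once resync/recovery completes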

3. Format and Mount
# mkfs.ext4 -F /dev/md0  # Or xfs, btrfs for advanced features
# mkdir /mnt/raid
# mount /dev/md0 /mnt/raid
# df -h  # Confirm it's mounted
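
For striped levels, aligning ext4 to the array geometry can help throughput. A sketch for the 3-disk RAID 5 above, assuming the default 512 KiB chunk and 4 KiB blocks (stride = 512/4 = 128; stripe-width = stride x 2 data disks = 256); recent mkfs.ext4 usually detects this itself:
# mkfs.ext4 -F -E stride=128,stripe-width=256 /dev/md0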

4. Make It Persistent
Scan and update configs:
# mdadm --detail --scan >> /etc/mdadm.conf
# update-initramfs -u  # Debian/Ubuntu; on RHEL/Fedora run: dracut -f
Add to /etc/fstab (use UUID for safety):
UUID=your-uuid-here /mnt/raid ext4 defaults 0 0
Get UUID: blkid /dev/md0.
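To write the fstab line without copy/paste mistakes, a one-liner sketch (assumes a root shell and ext4 as above):
# echo "UUID=$(blkid -s UUID -o value /dev/md0) /mnt/raid ext4 defaults 0 0" >> /etc/fstab
# mount -a  # verify fstab parses and the array mounts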

Monitoring and Daily Management
  • Quick Status: watch cat /proc/mdstat (updates every 2s).
  • Details: mdadm --detail /dev/md0 or mdadm --examine /dev/sdX.
  • Logs: journalctl -u mdmonitor (the mdadm monitor service) or tail -f /var/log/messages | grep md.
  • Email Alerts: Set MAILADDR you@example.com in /etc/mdadm.conf, or run mdadm --monitor --scan --mail=you@example.com --delay=60 as a daemon.
Best Practice: Set up mdadm --monitor for proactive alerts on degradation.
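On RHEL-family systems the mdadm package ships an mdmonitor service that runs this for you; a minimal sketch, assuming MAILADDR is set in /etc/mdadm.conf:
# systemctl enable --now mdmonitor         # start mdadm --monitor at boot
# mdadm --monitor --scan --oneshot --test  # send one test alert per array, then exit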

Handling Failures: Recovery and Rebuild
RAID shines in recovery—automated rebuilds minimize downtime.

Mark Faulty: mdadm --fail /dev/md0 /dev/sdb.
Remove: mdadm --remove /dev/md0 /dev/sdb.
Replace Disk: Insert new one, then mdadm --add /dev/md0 /dev/sdb.
Monitor Rebuild: Watch /proc/mdstat for the degraded marker (e.g., "[U_]" on a two-disk mirror) returning to full sync ("[UU]"). Abridged example output:
md0 : active raid1 sdc[1] sdb[0]
      [=====>........] recovery = 25.3% (500GB/2TB)
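
The fail/remove/add steps can also be chained in one manage-mode call; a sketch assuming /dev/sdb is the failing member:
# mdadm /dev/md0 --fail /dev/sdb --remove /dev/sdb  # mark faulty and detach in one call
# mdadm /dev/md0 --add /dev/sdb                     # after the physical swap, rebuild starts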

Troubleshooting:
  • Degraded Array? Find the missing member with mdadm --detail /dev/md0, then re-add it: mdadm /dev/md0 --re-add /dev/sdX.
  • Stuck Rebuild? Raise the speed floor (in KB/s): echo 500000 > /proc/sys/dev/raid/speed_limit_min.
  • Common Error: "No arrays found"—run mdadm --assemble --scan and update /etc/mdadm.conf.
Stopping or Removing RAID
# umount /mnt/raid
# mdadm --stop /dev/md0
# mdadm --zero-superblock /dev/sdb /dev/sdc  # Wipe RAID metadata from each member
Finally, delete the array's lines from /etc/mdadm.conf and /etc/fstab.

Final Tips and Caveats
  • Performance Tuning: Use mq-deadline or none for SSDs and bfq for HDDs (cfq was removed in kernel 5.0); set via /sys/block/sdX/queue/scheduler.
  • Alternatives: ZFS (better checksums) or Btrfs (native RAID) for modern setups.
  • Scalability: Add a hot spare with mdadm --add /dev/md0 /dev/sde; on a healthy array the new disk sits idle as a spare and takes over automatically on failure.
  • Testing: Simulate failures with mdadm --fail in a VM first; see the drill below.
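
A throwaway drill for a two-disk mirror; assumes /dev/md0 is a test RAID 1 of /dev/sdb and /dev/sdc holding no real data:
# mdadm /dev/md0 --fail /dev/sdc  # simulate a disk failure
# cat /proc/mdstat                # shows the degraded [U_] state
# mdadm /dev/md0 --remove /dev/sdc
# mdadm /dev/md0 --add /dev/sdc   # re-add and watch the rebuild
# mdadm --wait /dev/md0           # blocks until the array is clean again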
