This guide provides a step-by-step procedure for safely adding a new VG to an active HACMP cluster.
1. Introduction
This procedure explains how to add a new volume group to an existing Resource Group (RG) in an active PowerHA/HACMP cluster.
The procedure covers:
- Disk identification
- VG creation
- Filesystem setup
- VG import on other nodes
- Cluster configuration and synchronization
2. Identify and Configure the Disks
Before creating the volume group, identify unused disks on each cluster node and configure them as physical volumes (PVs).
On NodeA
# cfgmgr
Find the unused disk:
# lspv | grep -i none
# chdev -l hdiskX -a pv=yes -a reserve_policy=no_reserve
On NodeB
# cfgmgr
Find the unused disk:
# lspv | grep -i none
# chdev -l hdiskX -a pv=yes -a reserve_policy=no_reserve
Note:
pv=yes marks the disk as a physical volume.
reserve_policy=no_reserve -- disables disk reservation, which is required for concurrent access in HACMP environments.
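The disk-identification step above can be scripted. A minimal sketch, assuming the usual four-column `lspv` output (disk, PVID, VG, state); the simulated output below stands in for a real AIX box, and the generated `chdev` commands are printed rather than executed:

```shell
# Simulated lspv output (placeholder disks; on AIX you would run lspv itself).
simulated_lspv() {
  printf 'hdisk0          00f6db9a1c2d3e4f            rootvg          active\n'
  printf 'hdisk4          none                        None\n'
  printf 'hdisk5          none                        None\n'
}

# Emit one chdev command per unused disk (VG column == "None").
# Piping the result into `sh` would actually apply the settings.
simulated_lspv | awk '$3 == "None" {
  print "chdev -l " $1 " -a pv=yes -a reserve_policy=no_reserve"
}'
```

For hdisk4 and hdisk5 this prints the two corresponding `chdev` commands; the same pattern is used later when preparing the secondary node.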
3. Create a Concurrent-Capable Volume Group
PowerHA requires volume groups to support concurrent access across cluster nodes.
Check Available Major Numbers
Run this command on each node to find a major number that is free on both:
# lvlstmajor
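Because the major number must be free on every node, it helps to intersect the free lists. A minimal sketch, assuming you have expanded each node's `lvlstmajor` output to one candidate number per line (file names and values below are placeholders):

```shell
# Placeholder free lists, one candidate major number per line, sorted.
printf '200\n201\n203\n' > /tmp/major_nodea.txt
printf '202\n203\n204\n' > /tmp/major_nodeb.txt

# comm -12 keeps only lines present in both sorted files;
# the first common entry is a safe choice for mkvg -V.
comm -12 /tmp/major_nodea.txt /tmp/major_nodeb.txt | head -1   # prints 203

rm -f /tmp/major_nodea.txt /tmp/major_nodeb.txt
```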
Create the VG on NodeA
# mkvg -C -V 203 -S -s 32 -y <vgname> hdiskX
# varyonvg <vgname>
# chvg -Qn <vgname>
# mkvg -C -V 203 -S -s 32 -y <vgname> hdiskX
# varyonvg <vgname>
# chvg -Qn <vgname>
Explanation:
-C → Creates a concurrent-capable VG
-V 203 → Assigns a major number (must be available on both nodes)
-S → Creates a scalable-type volume group
-s 32 → Sets physical partition size to 32 MB
-Qn → Disables quorum checking (applied with chvg after creation), so losing quorum does not vary the VG off
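As a quick sanity check on the `-s 32` choice, the physical-partition arithmetic for a hypothetical 100 GB LUN (the disk size is an assumption for illustration):

```shell
# Hypothetical 100 GB disk carved into 32 MB physical partitions.
disk_mb=$((100 * 1024))   # 102400 MB
pp_size_mb=32
echo $((disk_mb / pp_size_mb))   # prints 3200 (PPs the disk contributes to the VG)
```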
4. Create Logical Volumes and Filesystems
After the VG is created, define logical volumes (LVs), mirrors, and filesystems.
# mklv -e x -u 4 -s s -t jfs2 -y <lvname> <vgname> 4 hdiskX
# lslv -m <lvname>
# mklvcopy -e x <lvname> 2 hdiskY hdiskZ
# varyonvg <vgname>
# crfs -v jfs2 -m /<mountpoint> -d /dev/<lvname> -A no -p rw -a logname=INLINE
# mount /<mountpoint>
# chown user:group /<mountpoint>
# umount /<mountpoint>
# varyoffvg <vgname>
Explanation:
mklv → Creates the logical volume
mklvcopy → Creates mirrored copies of the LV
crfs → Creates a JFS2 filesystem
Ownership and permissions are set on the mount point; the filesystem is then unmounted and the VG varied off for cluster integration.
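The sizes implied by the commands above can be checked with simple arithmetic (a sketch: the 32 MB PP size comes from the `mkvg -s 32` step, `4` is the LP count passed to `mklv`, and `2` is the copy count given to `mklvcopy`):

```shell
lps=4          # logical partitions requested by mklv
pp_size_mb=32  # PP size set at VG creation (-s 32)
copies=2       # total copies after mklvcopy <lvname> 2

echo $((lps * pp_size_mb))            # prints 128 (usable LV size in MB)
echo $((lps * pp_size_mb * copies))   # prints 256 (physical MB consumed when mirrored)
```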
5. Import the VG on the Secondary Node
To make the VG available on the other cluster node:
On NodeA (to get PVID)
# lspv | grep -w <vgname> | head -1
On NodeB
# cfgmgr
# lspv | awk '$3 ~ /^None$/ {print "chdev -l "$1" -a reserve_policy=no_reserve"}' | sh
# importvg -n -V 203 -y <vgname> <PVID>
Explanation:
importvg -n → Imports the VG without automatically varying it on
-V 203 → Ensures the same major number is used as on NodeA
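One gotcha in the import step: `lspv | grep -w <vgname> | head -1` returns a whole line, while `importvg` takes only the PVID (the second field). A sketch with a simulated line (`hdisk4`, `datavg`, and the PVID are placeholders):

```shell
# Simulated lspv line for the first disk of the new VG.
line='hdisk4          00f6db9a1c2d3e4f            datavg          active'

# Extract just the PVID for use as the importvg argument.
pvid=$(echo "$line" | awk '{print $2}')
echo "$pvid"   # prints 00f6db9a1c2d3e4f
```

On NodeB you would then pass that value as the final argument to `importvg -n -V 203 -y <vgname>`.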
6. Add the VG to the HACMP Resource Group
Once the VG is imported on both nodes, integrate it into the existing Resource Group (RG) configuration.
Verify HACMP Configuration
# smitty hacmp
→ Problem Determination Tools
→ HACMP Verification
→ Verify HACMP Configuration
Discover HACMP Information
# smitty hacmp
→ Extended Configuration
→ Discover HACMP-related Information from Configured Nodes
Add VG to an Existing Resource Group (e.g., RG01)
# smitty hacmp
→ Extended Configuration
→ Extended Resource Configuration
→ HACMP Extended Resource Group Configuration
→ Change/Show Resources and Attributes for a Resource Group
Select the desired Resource Group and add the new VG.
7. Synchronize the Cluster Configuration
After updating the RG, synchronize the cluster configuration across all nodes:
# smitty hacmp
→ Extended Configuration
→ Extended Verification and Synchronization