PowerHA NFS Cluster Installation & Configuration

Ensuring high availability (HA) for NFS services in an AIX environment requires a robust clustering solution. IBM PowerHA SystemMirror 7.2 provides a reliable platform for clustering nodes, managing resources, and enabling automatic failover. This guide walks you through installing PowerHA, configuring NFS services, creating an Enhanced Concurrent Volume Group (ECVG), and adding it as a cluster resource.

1. Steps to Configure Passwordless SSH Between Nodes:

Step 1: Generate SSH Key Pair
On Node1 (repeat for Node2 if you want both directions):
# su - root
# ssh-keygen -t rsa -f /root/.ssh/id_rsa
Press Enter for default file location.
Leave passphrase empty (important for passwordless access).

Step 2: Copy Public Key to Other Node
Use ssh-copy-id (if available) or manual copy:
On Node1, view public key:
# cat /root/.ssh/id_rsa.pub
On Node2, append it to /root/.ssh/authorized_keys:
# vi /root/.ssh/authorized_keys
# (paste the key from Node1)
# chmod 600 /root/.ssh/authorized_keys
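The manual paste in Step 2 can be wrapped in a small helper that is safe to run repeatedly. `append_key` is a hypothetical name, not a standard tool; it only appends the key line if it is not already present:

```shell
#!/bin/sh
# append_key: add a public key line to an authorized_keys file only if it
# is not already there, so re-running Step 2 never duplicates entries.
# Usage: append_key <pubkey-file> <authorized_keys-file>
append_key() {
    key=$(cat "$1")
    touch "$2"
    if grep -qxF "$key" "$2"; then     # -x exact line, -F literal match
        echo "key already present"
    else
        echo "$key" >> "$2"
        chmod 600 "$2"
        echo "key added"
    fi
}
```

Copy id_rsa.pub to the other node first (e.g. with scp), then run `append_key /tmp/id_rsa.pub /root/.ssh/authorized_keys` there.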

Step 3: Set Correct Permissions
On all nodes:
# chmod 700 /root/.ssh
# chmod 600 /root/.ssh/authorized_keys
# chmod 600 /root/.ssh/id_rsa
# chmod 644 /root/.ssh/id_rsa.pub

Step 4: Test SSH Connection
From Node1:
# ssh root@node2
You should log in without a password.
Repeat from Node2 → Node1.

2. Install PowerHA Software on Both Nodes:

Step 1: Mount the installation media
# mount nimserver01:/software/hacmp /mnt

Step 2: Install PowerHA filesets
# installp -acgXd /mnt bos.cluster.rte cluster.adt.es cluster.doc.en_US.es cluster.es.client cluster.es.cspoc cluster.es.nfs cluster.es.server cluster.license cluster.man.en_US.es

Step 3: Verify installation
# lslpp -l | grep cluster
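Since `lslpp -l | grep cluster` prints many lines, a small check like the following can confirm that each required fileset is listed in COMMITTED state. The function name and the short fileset list are illustrative; pipe the real `lslpp -l` output into it:

```shell
#!/bin/sh
# check_filesets: read `lslpp -l` output on stdin and report whether each
# required PowerHA fileset appears in COMMITTED state.
check_filesets() {
    required="cluster.es.server cluster.es.client cluster.es.nfs cluster.license"
    input=$(cat)
    rc=0
    for f in $required; do
        if printf '%s\n' "$input" | grep "$f" | grep -q COMMITTED; then
            echo "$f: OK"
        else
            echo "$f: MISSING"
            rc=1
        fi
    done
    return $rc
}
```

Run it as `lslpp -l | check_filesets` on each node; a nonzero exit status means at least one fileset is missing.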

Step 4: Reboot all nodes
# shutdown -Fr

3. Configure Network and Repository Disk:

Step 1: Configure internal cluster (boot) IPs
On node1:
# chdev -l en0 -a netaddr=192.168.10.101 -a netmask=255.255.255.0 -a state=up
On node2:
# chdev -l en0 -a netaddr=192.168.10.102 -a netmask=255.255.255.0 -a state=up

Step 2: Update /etc/hosts
192.168.10.101 node1_boot
192.168.10.102 node2_boot
192.168.10.201 nfs_service_ip

Step 3: Configure /etc/cluster/rhosts
192.168.10.101
192.168.10.102

Step 4: Restart cluster communication daemon
# stopsrc -s clcomd
# startsrc -s clcomd

Step 5: Set up repository disk
# lspv
# chdev -l hdisk2 -a pv=yes
# lspv | grep hdisk2

4. Create Cluster Configuration:

Step 1: Create the cluster
clmgr takes ATTRIBUTE=value arguments; the nodes and the repository disk can be defined in a single command:
# /usr/es/sbin/cluster/utilities/clmgr add cluster nfs_cluster NODES=node1,node2 REPOSITORY=hdisk2

Step 2: Define the cluster network
# /usr/es/sbin/cluster/utilities/clmgr add network net_ether_01 TYPE=ether

Step 3: Add network interfaces
Interfaces are usually auto-discovered when the cluster is created; add them explicitly only if discovery missed them:
# /usr/es/sbin/cluster/utilities/clmgr add interface node1_boot NETWORK=net_ether_01 NODE=node1 INTERFACE=en0
# /usr/es/sbin/cluster/utilities/clmgr add interface node2_boot NETWORK=net_ether_01 NODE=node2 INTERFACE=en0

Step 4: Verify and synchronize the cluster
# /usr/es/sbin/cluster/utilities/clmgr verify cluster
# /usr/es/sbin/cluster/utilities/clmgr sync cluster

Step 5: Start cluster services
# /usr/es/sbin/cluster/utilities/clmgr start cluster
# clstat -o

5. Configure NFS Filesystem and Service

Step 1: Create shared volume group and logical volume (on one node)
# mkvg -S -y nfs_vg hdisk3
# chvg -a n nfs_vg
# mklv -t jfs2 -y nfs_lv nfs_vg 1024
# crfs -v jfs2 -d nfs_lv -m /nfs_share -A no -p rw -a logname=INLINE
# mkdir -p /nfs_share
# mount /nfs_share
Disabling automatic varyon (chvg -a n) matters for cluster-managed volume groups: PowerHA activates the volume group when the resource group comes online, not the node at boot.

Step 2: Export NFS share (on both nodes)
Edit /etc/exports:
/nfs_share -rw -secure -root=client1,client2 -access=client1,client2
Apply export:
# exportfs -a
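To double-check that the export took effect, the active export list can be scanned for the share. `export_check` is a hypothetical helper that does a simple prefix match on the output of `exportfs`:

```shell
#!/bin/sh
# export_check: read the active export list (e.g. from `exportfs`) on stdin
# and confirm the given directory is exported. Matches the directory at the
# start of a line followed by whitespace -- a quick check, not a full parser.
export_check() {
    if grep -q "^$1[[:space:]]"; then
        echo "$1 is exported"
    else
        echo "$1 is NOT exported"
        return 1
    fi
}
```

Run it as `exportfs | export_check /nfs_share` on each node.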

Step 3: Add the service IP label to PowerHA
The label must resolve via /etc/hosts (it was added there in section 3):
# /usr/es/sbin/cluster/utilities/clmgr add service_ip nfs_service_ip NETWORK=net_ether_01

Step 4: Create the resource group for NFS
In PowerHA, the NFS export is an attribute of the resource group rather than a separate object:
# /usr/es/sbin/cluster/utilities/clmgr add resource_group rg_nfs NODES=node1,node2 STARTUP=OHN FALLBACK=NFB SERVICE_LABEL=nfs_service_ip VOLUME_GROUP=nfs_vg EXPORT_FILESYSTEM=/nfs_share
# /usr/es/sbin/cluster/utilities/clmgr verify cluster
# /usr/es/sbin/cluster/utilities/clmgr sync cluster
# /usr/es/sbin/cluster/utilities/clmgr online resource_group rg_nfs
# clRGinfo
STARTUP=OHN starts the group on its home node (node1, the first in NODES); FALLBACK=NFB keeps it on the takeover node after a failover (a "never fall back" policy).
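The clRGinfo output can be filtered to find where the group is online. The column layout assumed below (group name, state, node) is an approximation of clRGinfo's tabular output; adjust the awk field numbers to match your version:

```shell
#!/bin/sh
# rg_state: given clRGinfo-style output on stdin, print the node(s) where
# the named resource group is ONLINE.
# Assumed columns: <group> <state> <node> (adjust $1/$2/$3 if yours differ).
rg_state() {
    awk -v rg="$1" '$1 == rg && $2 == "ONLINE" { print $3 }'
}
```

Run it as `clRGinfo | rg_state rg_nfs`; empty output means the group is not online anywhere.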

6. Creating Enhanced Concurrent Volume Group (ECVG)

Step 1: Verify Shared Disks
# lspv
# lspv | grep hdisk4

Step 2: Create ECVG
# mkvg -S -C -y ecvg_vg hdisk4
-S → creates a scalable volume group
-C → makes the volume group enhanced concurrent capable
-y ecvg_vg → volume group name
Then disable automatic varyon; the cluster, not the boot process, must activate the volume group:
# chvg -a n ecvg_vg

Step 3: Create Logical Volumes and Filesystem
# mklv -t jfs2 -y ecvg_lv ecvg_vg 1024
# crfs -v jfs2 -d ecvg_lv -m /ecvg_share -A no -p rw
# mkdir -p /ecvg_share
# mount /ecvg_share

Step 4: Add ECVG as a Cluster Resource
# /usr/es/sbin/cluster/utilities/clmgr add resource_group rg_ecvg NODES=node1,node2 STARTUP=OHN FALLBACK=NFB VOLUME_GROUP=ecvg_vg
# /usr/es/sbin/cluster/utilities/clmgr verify cluster
# /usr/es/sbin/cluster/utilities/clmgr sync cluster
# /usr/es/sbin/cluster/utilities/clmgr online resource_group rg_ecvg
# clRGinfo

7. Validate NFS and ECVG
From an NFS client:
# showmount -e 192.168.10.201
# mount 192.168.10.201:/nfs_share /mnt
# touch /mnt/testfile
# ls -l /mnt
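A quick write-and-read-back smoke test on the mounted share catches permission and read-only problems early. `rw_check` is a hypothetical helper, not part of PowerHA:

```shell
#!/bin/sh
# rw_check: write a file into a directory, read it back, and clean up.
# A quick smoke test for a freshly mounted NFS share.
# Usage: rw_check <mount-point>
rw_check() {
    f="$1/ha_test.$$"
    echo "powerha-test" > "$f" || { echo "write FAILED"; return 1; }
    [ "$(cat "$f")" = "powerha-test" ] || { echo "read FAILED"; return 1; }
    rm -f "$f"
    echo "read/write OK"
}
```

Run `rw_check /mnt` on the client after mounting the share.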

Test failover by shutting down node1. The NFS service and the ECVG resource group should automatically fail over to node2.
