High Availability with Redundant HMC Configuration

Introduction:
Ensuring high availability in IBM Power environments depends heavily on the reliability of the Hardware Management Console (HMC).

Although production LPARs continue running if an HMC fails, administrative control, monitoring, firmware management, and service operations become unavailable. A redundant HMC configuration removes this single point of failure by deploying two HMCs that manage the same Power systems.

A properly configured redundant HMC environment provides:
  • Continuous system management access
  • Automatic Primary / Secondary role negotiation
  • Redundant service event management
  • Zero administrative downtime during HMC failure

Redundant HMC Overview
In a redundant configuration:
  • Two HMCs manage the same Power System
  • Both connect to the system’s FSP (Flexible Service Processor)
  • One HMC becomes Primary
  • The other becomes Secondary
  • Roles are automatically negotiated
  • If Primary fails, Secondary assumes control
Both HMCs:
  • See the same managed systems
  • Receive hardware events
  • Maintain synchronized awareness of the system

Design Options for Redundant HMC
There are two supported high-availability designs:

Option A — Dual Private Network (Recommended)
Architecture Concept
  • Each HMC connects to a separate FSP port
  • Each uses its own isolated private network
  • HMC acts as DHCP server for FSP
  • Networks do not overlap

Option B — Private + Static (Open) Network
Architecture Concept
  • Primary HMC uses private DHCP network
  • Secondary HMC connects remotely via open/static IP network
  • FSP configured with static IP
  • Used for remote HMC
  • Used when isolated private network not possible

Requirements
Before configuring redundancy:
  • Two installed HMCs (HMC-A and HMC-B)
  • Both running compatible HMC versions
  • Available FSP HMC1 and HMC2 ports
  • Network cables and switch ports
  • Admin credentials for FSP / ASMI
  • IP addressing plan prepared

Option A — Dual Private Network (Recommended)
Assumptions
  • Existing HMC = HMC-A
  • New HMC = HMC-B
  • HMC-A private interface = eth0
  • HMC-A open/service interface = eth1
  • New DHCP range for HMC-B = 172.16.0.3 – 172.16.255.254
  • FSP HMC2 port available

Step-by-Step Configuration:

Step 1 — Record HMC-A Network Configuration
On HMC-A:
lshmc -n
Document:
eth0 IP / subnet
eth1 IP / subnet
Gateway
DNS settings

Step 2 — Configure HMC-B Private Interface (eth0)
On HMC-B:
Navigate:
HMC Management → HMC Configuration → Customize Network Settings
Select:
eth0 → Details
Set:
Network Type = Private
Enable DHCP Server
DHCP Range = 172.16.0.3–172.16.255.254
Ensure no IP conflict with HMC-A.
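The two private DHCP ranges must never overlap. One quick way to verify this is to compare the ranges numerically before committing the configuration; a minimal sketch (the example ranges are hypothetical):

```shell
#!/bin/sh
# Convert a dotted-quad IPv4 address to an integer for comparison.
ip_to_int() {
  IFS=. read -r a b c d <<EOF
$1
EOF
  echo $(( a * 16777216 + b * 65536 + c * 256 + d ))
}

# Report whether two DHCP ranges (start1 end1 start2 end2) overlap.
ranges_overlap() {
  s1=$(ip_to_int "$1"); e1=$(ip_to_int "$2")
  s2=$(ip_to_int "$3"); e2=$(ip_to_int "$4")
  # Two ranges overlap unless one ends before the other starts.
  if [ "$e1" -lt "$s2" ] || [ "$e2" -lt "$s1" ]; then
    echo "no overlap"
  else
    echo "OVERLAP"
  fi
}

# Hypothetical example: HMC-A on 172.16.0.0/16, HMC-B on a distinct 172.17.0.0/16 range.
ranges_overlap 172.16.0.3 172.16.255.254 172.17.0.3 172.17.255.254   # → no overlap
ranges_overlap 172.16.0.3 172.16.255.254 172.16.100.1 172.16.200.1   # → OVERLAP
```

Run the check from either HMC's shell with the actual ranges of HMC-A and HMC-B before rebooting.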

Step 3 — Configure HMC-B Open Interface (eth1)
Select:
eth1 → Details
Set:
Network Type = Open
Static IP (same subnet as HMC-A eth1)
Gateway
DNS
Reboot HMC-B to apply changes.

Step 4 — Cable Connections
Connect:
HMC-B eth0 → Server FSP HMC2 port
Important:
Bring the HMC network up before connecting the FSP port
Ensure the two private networks do not overlap

Step 5 — Accept Managed System on HMC-B
Wait for system discovery.
Navigate:
Server Management
Right-click detected system:
Enter/Update Managed System Password
Enter FSP credentials.

Step 6 — Verify Redundancy
Run:
lshmc -n
Confirm:
Both HMCs show valid IPs
Managed system appears in both GUIs
One HMC = Primary
One HMC = Secondary

Validation Checklist (Option A)
  • ping <fsp-ip> works from both HMCs
  • Managed system visible on both HMCs
  • Roles show Primary / Secondary
  • Shutdown HMC-A → HMC-B becomes Primary
  • No event management interruption
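The first checklist item can be scripted so the same test runs identically from both HMCs; a minimal sketch (the FSP addresses are placeholders for the real HMC1/HMC2 port IPs):

```shell
#!/bin/sh
# Ping each FSP address and report reachability; run the same
# invocation from HMC-A and HMC-B and compare the results.
check_fsp() {
  for fsp in "$@"; do
    if ping -c 2 "$fsp" > /dev/null 2>&1; then
      echo "$fsp reachable"
    else
      echo "$fsp NOT reachable"
    fi
  done
}

# Placeholder FSP addresses; substitute the real private-network IPs:
# check_fsp 172.16.0.1 172.17.0.1
```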

Option B — Private + Static (Open Network Remote HMC)
Used when HMC-B is remote and connects over open network.
Assumptions
  • HMC-A uses private DHCP (eth0)
  • HMC-B connects remotely via static IP (eth1)
  • FSP must be configured with static IP
Step-by-Step Configuration

Step 1 — Obtain Static IP Information
From Network Team:
FSP static IP
Subnet mask
Gateway
DNS
If dual FSP system, obtain both IPs.

Step 2 — Configure HMC-B Open Interface
Navigate:
HMC Management → HMC Configuration → Customize Network Settings
Select:
eth1 → Details
Set:
Network Type = Open
Static IP
Subnet
Gateway
DNS
Reboot HMC-B.

Step 3 — Configure FSP Static IP (via ASMI)
Access ASMI:
Service Applications → Service Focal Point → Service Utilities → Launch ASM
Navigate:
Network Services → Network Configuration
Configure unused FSP interface as:
  • Static IP
  • Subnet mask
  • Gateway
Save and exit.

Step 4 — Connect FSP to Open Network
Connect FSP port to production/open network.
Verify:
ping <fsp-static-ip>

Step 5 — Add Managed System to HMC-B
Navigate:
Server and Partition → Server Management → Add Managed Systems
Enter:
FSP IP
Credentials
System should appear.

Validation Checklist (Option B)
  • FSP static IP responds to ping
  • Managed systems visible on HMC-B
  • Dual FSP connections visible (if applicable)
  • Events synchronized between HMCs
  • Primary / Secondary roles visible

Role Management Behavior
  • Primary HMC handles service events
  • Secondary HMC remains synchronized
  • Automatic failover occurs if Primary becomes unavailable
  • No impact to running LPAR workloads

AIX NFS Server & Client

The AIX NFS (Network File System) Server provides a powerful distributed file-system service that lets users and applications access files and directories on remote systems as though they were stored locally. It supports NFS protocol versions 2, 3, and 4, ensuring both backward compatibility and modern NFSv4 features such as stateful connections and enhanced security.

On AIX, NFS is built on a set of coordinated server and client daemons that use Remote Procedure Calls (RPC) to manage file sharing, mounting, and communication between systems. This design enables transparent file access, centralized storage management, and efficient resource sharing across networked AIX environments.

1. Client sends MOUNT request → rpc.mountd
2. rpc.mountd authenticates → checks /etc/exports
3. rpc.mountd grants handle → returns to client
4. Client sends file requests → rpc.nfsd
5. rpc.nfsd performs I/O → local FS
6. rpc.lockd/statd manage locks/recovery
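Each stage of this flow can be observed from a client with the standard RPC tools that appear later in this guide (the server name is a placeholder):

```shell
# 1-2. Confirm rpc.mountd and rpc.nfsd are registered with the portmapper:
# rpcinfo -p server1

# 2-3. List the exports that rpc.mountd checks against /etc/exports:
# showmount -e server1

# 4-5. Mount and exercise the share, then confirm the protocol in use:
# mount server1:/data/projects /mnt/projects
# nfsstat -m
```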

AIX NFS Port Control (Fixing Dynamic Ports)
By default, mountd, statd, and lockd use dynamically assigned ports.
You can assign fixed ports in AIX to simplify firewall configuration.
Edit /etc/services and assign static ports, for example:
mountd          635/tcp
mountd          635/udp
statd           662/tcp
statd           662/udp
lockd           4045/tcp
lockd           4045/udp
rquotad         875/udp

Commonly Fixed NFS Ports
Service             TCP Port   UDP Port
portmap / rpcbind   111        111
nfsd                2049       2049
mountd              635        635
statd               662        662
lockd               4045       4045
rquotad             875        875
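When fixing ports for firewall rules, it helps to confirm that the services file actually pins each daemon before restarting NFS. A small grep-based check, sketched here against a temporary copy (on a real system, pass /etc/services):

```shell
#!/bin/sh
# Check that a services-format file pins each NFS helper daemon
# to its expected port; returns non-zero if any entry is missing.
check_ports() {
  svcfile=$1
  rc=0
  for entry in "mountd 635" "statd 662" "lockd 4045"; do
    set -- $entry
    svc=$1; port=$2
    if grep -q "^$svc[[:space:]]*$port/" "$svcfile"; then
      echo "$svc pinned to $port"
    else
      echo "$svc NOT pinned to $port"
      rc=1
    fi
  done
  return $rc
}

# Demonstrate against a temporary file mirroring the entries above.
tmp=$(mktemp)
printf 'mountd\t635/tcp\nmountd\t635/udp\nstatd\t662/tcp\nstatd\t662/udp\nlockd\t4045/tcp\nlockd\t4045/udp\n' > "$tmp"
check_ports "$tmp"
rm -f "$tmp"
```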

HARD Mount (Default and Recommended):
  • If the NFS server does not respond, all read/write operations are retried indefinitely.
  • The client process appears to hang (waits for the server to come back).
  • When the server is restored, the client continues automatically.
Advantages:
  • Ensures data integrity — no partial writes or file corruption.
  • Automatically recovers when the server returns.
  • Best for critical or write-heavy data.
Disadvantages:
  • Applications may appear frozen until the server is back.
  • If the network is unstable, the system can hang temporarily during NFS calls.
SOFT Mount (Optional, Non-Blocking):
  • If the NFS server doesn’t respond within timeout, the client returns an I/O error to the application.
  • The process does not hang, but the operation fails.
Advantages:
  • The client does not hang — user or process can continue.
  • Useful for read-only or non-critical mounts (e.g., logs, config files).
Disadvantages:
  • Data corruption risk — if a write fails mid-operation.
  • Application may see I/O or stale file handle errors.
  • Some applications may crash on failed I/O.
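The trade-off above maps directly onto mount options; these invocations are illustrative (server names and paths are placeholders):

```shell
# Hard, interruptible mount for critical data: retries forever,
# but Ctrl+C can break out of a hang.
# mount -o hard,intr,vers=3 server1:/data/projects /mnt/projects

# Soft, read-only mount for non-critical data: gives up after
# retrans retries of timeo tenths of a second each.
# mount -o soft,ro,timeo=10,retrans=3 server1:/logs /mnt/logs
```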
Server Export Configuration:
The NFS server administrator defines which directories to share and with which clients in:
/etc/exports

Basic Syntax:
<directory>  -option1,option2,...,access=<client_list>
Where:
<directory> — local path to export
option — NFS export options (e.g., rw, ro, root=, access=)
access= — specifies which clients (hosts or subnets) can mount it

Example: Export Multiple Directories
Here’s how to share multiple directories from the same AIX NFS server.
/data/projects   -rw,access=client1:client2
/data/backups    -ro,access=backup01
/home/shared     -rw,access=client1:client2:client3
Note: AIX maps root to nobody by default; add -root=<host> to grant a client root access.

Start the NFS subsystem using SMIT or command line:
# smitty mknfsexp
# startsrc -g nfs
This starts daemons:
portmap → statd → lockd → mountd → nfsd

Client Mount Request:
# mount -o rw servername:/data/projects /mnt/projects
Client contacts rpc.mountd on the server using RPC.

In /etc/filesystems (Persistent Mount on AIX)

/mnt/data:
        dev             = server1:/export/data
        vfs             = nfs
        nodename        = server1
        mount           = true
        options         = rw,vers=3,soft
        account         = false

Then mount it using:
# mount /mnt/data

Server rpc.mountd:
Checks /etc/exports for permission.
Grants or denies access.
Returns a file handle (reference to the directory).

Once mounted:
Client’s VFS (Virtual File System) routes file operations (open/read/write) through NFS.
Each NFS request (like read() or write()) is sent to rpc.nfsd on the server.
nfsd performs the requested operation on the local filesystem and sends results back to the client.

If the client locks a file:
The rpc.lockd and rpc.statd daemons coordinate file lock requests and recover locks if the server or client reboots.

You can verify running NFS daemons:
# lssrc -g nfs
Subsystem         Group            PID     Status
 portmap          portmap          12345   active
 biod             nfs              12346   active
 rpc.statd        nfs              12347   active
 rpc.lockd        nfs              12348   active
 rpc.mountd       nfs              12349   active
 nfsd             nfs              12350   active


Important NFS Configuration Files
/etc/exports → Lists shared directories and access permissions.
/etc/filesystems → Defines NFS mounts (can include type=nfs entries).
/etc/rc.nfs → Script used at startup to initialize NFS.

Command Reference:
# startsrc -g nfs  → Start NFS services 
# stopsrc -g nfs  → Stop NFS services 
# exportfs -i <dir>  → Export a directory (ignoring options in /etc/exports)
# showmount -e  → Show exported directories 
# exportfs -u <dir> then exportfs -i <dir>  → Refresh exports
# mount -o vers=3 server:/export /mnt   → Mount with NFSv3
# mount -o vers=4 server:/export /mnt   → Mount with NFSv4
# nfsstat -m             → Check version used
# rpcinfo -p server   → Check server NFS versions

Common NFS Mount Flags (and Meanings):
vers=2/3/4 → NFS protocol version → Specifies which NFS version the client is using (2, 3, or 4).
proto=tcp / udp → Transport protocol → NFS can use either TCP (reliable) or UDP (legacy). TCP is default for v3/v4.
hard → Hard mount → Retries indefinitely if the server is unavailable — ensures data integrity.
soft → Soft mount → Fails after timeout — avoids hangs, but risks I/O errors.
intr → Interruptible → Allows users to interrupt hung NFS operations (Ctrl+C). Used with hard.
bg → Background mount → Retries the mount in background if the server is down during boot.
rw / ro → Read/write or Read-only → Determines access mode for the mount.
sec=sys → Security flavor → Uses traditional UNIX UID/GID authentication. Other options: krb5, krb5i, krb5p.
rsize= → Read buffer size → Max bytes per read request (e.g., 32768 or 65536).
wsize= → Write buffer size → Max bytes per write request.
timeo= → Timeout (in tenths of a second) → How long to wait before retrying. timeo=7 = 0.7 seconds.
retrans= → Retry count → Number of times to retry before failing (used with soft).
mountvers=3 → Mount protocol version → Version of the rpc.mountd protocol used.
namlen=255 → Max file name length supported by server.
acdirmin/acdirmax → Attribute cache min/max timeout for directories.
acregmin/acregmax → Attribute cache min/max timeout for files.
cto/nocto → Close-to-open consistency → Controls attribute cache revalidation.
noac → No attribute caching → Disables caching (slower, but consistent).
nolock → No file locking → Disables lockd (useful for read-only or stateless mounts).
nointr → Non-interruptible → Prevents user from interrupting hung NFS calls.
lookupcache=positive/none → Cache lookup results → Improves performance by caching name lookups.
port=2049 → Port used for NFS communication.
retry= → Retry count for initial mount attempts.
root_squash / no_root_squash → Server-side export option (Linux syntax) → Maps root to an anonymous user for security; on AIX, root is squashed by default unless -root= is set in /etc/exports.

PowerHA NFS Cluster Installation & Configuration

Ensuring high availability (HA) for NFS services in an AIX environment requires a robust clustering solution. IBM PowerHA SystemMirror 7.2 provides a reliable platform for clustering nodes, managing resources, and enabling automatic failover. This guide walks you through installing PowerHA, configuring NFS services, creating an Enhanced Concurrent Volume Group (ECVG), and adding it as a cluster resource.

1. Steps to Configure Passwordless SSH Between Nodes:

Step 1: Generate SSH Key Pair
On Node1 (repeat for Node2 if you want both directions):
# su - root
# ssh-keygen -t rsa -f /root/.ssh/id_rsa
Press Enter for default file location.
Leave passphrase empty (important for passwordless access).

Step 2: Copy Public Key to Other Node
Use ssh-copy-id (if available) or manual copy:
On Node1, view public key:
# cat /root/.ssh/id_rsa.pub
On Node2, append it to /root/.ssh/authorized_keys:
# vi /root/.ssh/authorized_keys
# (paste the key from Node1)
# chmod 600 /root/.ssh/authorized_keys

Step 3: Set Correct Permissions
On all nodes:
# chmod 700 /root/.ssh
# chmod 600 /root/.ssh/authorized_keys
# chmod 600 /root/.ssh/id_rsa
# chmod 644 /root/.ssh/id_rsa.pub

Step 4: Test SSH Connection
From Node1:
# ssh root@node2
You should log in without a password.
Repeat from Node2 → Node1.
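Steps 2–3 can be scripted once the public key text has been transferred to the other node; this sketch operates on an arbitrary home directory so it can be adapted per node (the path and key string are placeholders):

```shell
#!/bin/sh
# Append a public key to authorized_keys and enforce the
# permissions sshd requires.
# $1 = home directory, $2 = public key string.
install_pubkey() {
  home=$1; key=$2
  mkdir -p "$home/.ssh"
  chmod 700 "$home/.ssh"
  # Avoid duplicate entries when the script is re-run.
  grep -qF "$key" "$home/.ssh/authorized_keys" 2>/dev/null || \
    echo "$key" >> "$home/.ssh/authorized_keys"
  chmod 600 "$home/.ssh/authorized_keys"
}

# Hypothetical usage with a placeholder key:
install_pubkey /tmp/demo_home "ssh-rsa AAAAB3... root@node1"
ls -ld /tmp/demo_home/.ssh
```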

2. Install PowerHA Software on Both Nodes:

Step 1: Mount the installation media
# mount nimserver01:/software/hacmp /mnt

Step 2: Install PowerHA filesets
# installp -acgXd /mnt bos.cluster.rte cluster.adt.es cluster.doc.en_US.es cluster.es.client cluster.es.cspoc cluster.es.nfs cluster.es.server cluster.license cluster.man.en_US.es

Step 3: Verify installation
# lslpp -l | grep cluster

Step 4: Reboot all nodes
# shutdown -Fr

3. Configure Network and Repository Disk:

Step 1: Configure internal cluster (boot) IPs
On node1:
# chdev -l en0 -a netaddr=192.168.10.101 -a netmask=255.255.255.0 -a state=up
On node2:
# chdev -l en0 -a netaddr=192.168.10.102 -a netmask=255.255.255.0 -a state=up

Step 2: Update /etc/hosts
192.168.10.101 node1_boot
192.168.10.102 node2_boot
192.168.10.201 nfs_service_ip

Step 3: Configure /etc/cluster/rhosts
192.168.10.101
192.168.10.102

Step 4: Restart cluster communication daemon
# stopsrc -s clcomd
# startsrc -s clcomd

Step 5: Set up repository disk
# lspv
# chdev -l hdisk2 -a pv=yes
# lspv | grep hdisk2

4. Create Cluster Configuration:

Step 1: Create a new cluster with both nodes
# /usr/es/sbin/cluster/utilities/clmgr add cluster nfs_cluster NODES="node1 node2"

Step 2: Define cluster network
# /usr/es/sbin/cluster/utilities/clmgr add network net_ether_01 TYPE=ether

Step 3: Add network interfaces
# /usr/es/sbin/cluster/utilities/clmgr add interface en0 NETWORK=net_ether_01 NODE=node1
# /usr/es/sbin/cluster/utilities/clmgr add interface en0 NETWORK=net_ether_01 NODE=node2

Step 4: Add repository disk
# /usr/es/sbin/cluster/utilities/clmgr add repository hdisk2

Step 5: Verify and synchronize cluster
# /usr/es/sbin/cluster/utilities/clmgr verify cluster
# /usr/es/sbin/cluster/utilities/clmgr sync cluster

Step 6: Start the cluster
# /usr/es/sbin/cluster/utilities/clmgr start cluster
# clstat -o

5. Configure NFS Filesystem and Service

Step 1: Create shared volume group and logical volume (on one node)
# mkvg -S -y nfs_vg hdisk3
# mklv -t jfs2 -y nfs_lv nfs_vg 1024
# crfs -v jfs2 -d nfs_lv -m /nfs_share -A no -p rw -a logname=INLINE
# mkdir -p /nfs_share
# mount /nfs_share

Step 2: Export NFS share (on both nodes)
Edit /etc/exports:
/nfs_share -rw,root=client1:client2,access=client1:client2
Apply export:
# exportfs -a

Step 3: Add the service IP to PowerHA
# /usr/es/sbin/cluster/utilities/clmgr add service_ip nfs_svc_ip NETWORK=net_ether_01

Step 4: Create Resource Group for NFS with the export
# /usr/es/sbin/cluster/utilities/clmgr add resource_group rg_nfs NODES=node1,node2 STARTUP=OHN FALLBACK=NFB SERVICE_LABEL=nfs_svc_ip VOLUME_GROUP=nfs_vg EXPORT_FILESYSTEM=/nfs_share
# /usr/es/sbin/cluster/utilities/clmgr verify cluster
# /usr/es/sbin/cluster/utilities/clmgr sync cluster
# /usr/es/sbin/cluster/utilities/clmgr online resource_group rg_nfs
# clRGinfo

6. Creating Enhanced Concurrent Volume Group (ECVG)

Step 1: Verify Shared Disks
# lspv
# lspv | grep hdisk4

Step 2: Create ECVG
# mkvg -C -S -n -y ecvg_vg hdisk4
-C → creates an Enhanced Concurrent Capable volume group
-S → creates a scalable volume group
-n → do not varyon automatically at boot (the cluster controls varyon)
-y ecvg_vg → volume group name

Step 3: Create Logical Volumes and Filesystem
# mklv -t jfs2 -y ecvg_lv ecvg_vg 1024
# crfs -v jfs2 -d ecvg_lv -m /ecvg_share -A no -p rw
# mkdir -p /ecvg_share
# mount /ecvg_share

Step 4: Add ECVG as a Cluster Resource
# /usr/es/sbin/cluster/utilities/clmgr add resource_group rg_ecvg NODES=node1,node2 STARTUP=OHN FALLBACK=NFB VOLUME_GROUP=ecvg_vg
# /usr/es/sbin/cluster/utilities/clmgr verify cluster
# /usr/es/sbin/cluster/utilities/clmgr sync cluster
# /usr/es/sbin/cluster/utilities/clmgr online resource_group rg_ecvg
# clRGinfo

7. Validate NFS and ECVG
From an NFS client:
# showmount -e 192.168.10.201
# mount 192.168.10.201:/nfs_share /mnt
# touch /mnt/testfile
# ls -l /mnt

Test failover by shutting down node1. The NFS service and the ECVG resource group should automatically fail over to node2.
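A simple way to measure the client-visible outage window during that failover test is to write one timestamped line per second into the mount and look for gaps afterwards; the loop below works against any directory, so the mount point in the usage note is a placeholder:

```shell
#!/bin/sh
# Append one timestamped line per second to a probe file under the
# target directory; gaps in the timestamps show the outage window.
# $1 = directory (e.g. the NFS mount), $2 = number of iterations.
# Prints the number of lines successfully written.
probe_writes() {
  dir=$1; count=$2; i=0
  while [ "$i" -lt "$count" ]; do
    date "+%H:%M:%S" >> "$dir/failover_probe.log" 2>/dev/null
    i=$((i + 1))
    sleep 1
  done
  wc -l < "$dir/failover_probe.log"
}

# Hypothetical usage, started just before shutting down node1:
# probe_writes /mnt 120
```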

PowerHA Cluster Installation & Configuration

Setting up a PowerHA SystemMirror (formerly HACMP) cluster requires careful planning and execution to ensure high availability for critical applications. This guide walks through the steps to configure a two-node AIX cluster using shared storage and CAA (Cluster Aware AIX).

Note: In this guide, the primary (active) node is referred to as node1, and the secondary (passive) node is node2.

Step 1: Generate SSH Key Pair
On Node1 (repeat for Node2 if you want both directions):
# su - root
# ssh-keygen -t rsa -f /root/.ssh/id_rsa
Press Enter for default file location.
Leave passphrase empty (important for passwordless access).

Step 2: Copy Public Key to Other node
Use ssh-copy-id (if available) or manual copy:
On node1, view public key:
# cat /root/.ssh/id_rsa.pub
On node2, append it to /root/.ssh/authorized_keys:
# vi /root/.ssh/authorized_keys
# (paste the key from node1)
# chmod 600 /root/.ssh/authorized_keys

Step 3: Set Correct Permissions
On all nodes:
# chmod 700 /root/.ssh
# chmod 600 /root/.ssh/authorized_keys
# chmod 600 /root/.ssh/id_rsa
# chmod 644 /root/.ssh/id_rsa.pub

Step 4: Test Node-to-Node Communication
Test remote shell communication using clrsh:
# /usr/sbin/clrsh node1 hostname   # Output: node1
# /usr/sbin/clrsh node2 hostname   # Output: node2
If tests fail, the cluster cannot be created until the communication issue is resolved.

Step 5: Verify Shared Storage
Before creating a cluster, ensure all shared disks are visible on both nodes:
# lspv
Check all shared datavg hdisks and the hdisk intended for the CAA repository.
Ensure all shared hdisks have reserve_policy = no_reserve:
# lsattr -El hdiskXX
# chdev -l hdiskXX -a reserve_policy=no_reserve
The hdisk used for the CAA repository should be new and never part of a VG.

Step 6: Verify Network Interfaces
Run ifconfig -a on both nodes to identify boot IP addresses for all adapters.
At this stage, there should be no alias IPs configured.

Step 7: Configure rhosts for Cluster Communication
Add all boot IPs to /etc/cluster/rhosts on both nodes (one IP per line, no comments).
Ensure /etc/hosts resolves all IPs locally.
Configure /etc/netsvc.conf to prioritize local hostname resolution:
hosts=local4,bind4

Step 8: Restart Cluster Communication Daemon
# stopsrc -s clcomd; sleep 5; startsrc -s clcomd

Step 9: Install PowerHA
Mount the NFS share on both nodes:
# mount nimserver1:/export/powerHA  /mnt
Change to the PowerHA installation directory:
# cd /mnt/HA7.x/
Install all packages:
# installp -acgXYd . all

Step 10: Create the Cluster
On node1, add the cluster and nodes:
# /usr/es/sbin/cluster/utilities/clmgr add cluster <cluster_name> NODES="node1 node2"
Add a repository disk (disable validation for a new disk):
# /usr/es/sbin/cluster/utilities/clmgr add repository <Repository_Disk_PVID> DISABLE_VALIDATION=true
Synchronize the cluster configuration:
# /usr/es/sbin/cluster/utilities/clmgr sync cluster
Verify cluster status on both nodes:
# lscluster -m       # Should show each node as UP
# lssrc -a | grep cthags  # Should show cthags active

Step 11: Create Resource Group and Service IP
Create a resource group:
# /usr/es/sbin/cluster/utilities/clmgr add resource_group <RG_Name> NODES=node1,node2
Create a service IP:
# /usr/es/sbin/cluster/utilities/clmgr add service_ip <Service_IP_Label> NETWORK=<Network_Name>

Step 12: Create Shared Volume Group
# /usr/es/sbin/cluster/sbin/cl_mkvg -f -n -cspoc -n 'node1,node2' -r '<RG_Name>' -y '<VG_Name>' -V '<VG_Major_Number>' -E <hdiskx_PVID>

Step 13: Add Resources to Resource Group
# /usr/es/sbin/cluster/utilities/clmgr modify resource_group '<RG_Name>' SERVICE_LABEL='<Service_IP_Label>'

Step 14: Synchronize and Start Cluster
Synchronize the cluster:
# /usr/es/sbin/cluster/utilities/clmgr sync cluster
Start PowerHA and bring the cluster online:
# /usr/es/sbin/cluster/utilities/clmgr online cluster WHEN=now START_CAA=yes

The GPFS Cluster filesystem extension (AIX & Linux)

The GPFS (General Parallel File System) filesystem extension is the core component that integrates IBM Spectrum Scale with the operating system’s Virtual File System (VFS) layer, allowing GPFS to function as a native filesystem similar to ext3 or xfs.

The GPFS kernel extension registers support at the OS vnode and VFS levels, enabling seamless recognition and handling of GPFS operations. It works closely with GPFS daemons that manage cluster-wide I/O operations, including read-ahead and write-behind optimizations for high-performance data access.

GPFS Filesystem Extension Steps for AIX:

1. Scan the LUNs all the GPFS nodes
# cfgmgr -v
2. Set reserve_policy on each disk on each node
# chdev -l <hdisk#> -a reserve_policy=no_reserve
3. Create the file /tmp/nsdhdiskX.txt
# vi /tmp/nsdhdiskX.txt
%nsd:
         device=/dev/<hdiskX>
         servers=server1,server2,server3,server4
         nsd=<nsd_name>
         usage=dataAndMetadata
         failureGroup=1
         pool=system

4. Create NSD from the file /tmp/nsdhdiskX.txt
# mmcrnsd -F /tmp/nsdhdiskX.txt

5. Verify that the NSD names correspond to the disks in the lspv output.
# lspv

6. Verify with the mmlsnsd command.
# mmlsnsd

7. Add disks to an existing filesystem
# mmadddisk <gpfs_filesystem> "<nsd_disk_name1>;<nsd_disk_name2>"
Example (quote the list so the shell does not interpret the semicolon):
# mmadddisk gpfs01 "nsd1;nsd2"

8. Validate the NSD disks attached to the GPFS filesystem.
# mmlsdisk <gpfs_filesystem>
Example:
# mmlsdisk gpfs01

9. Check the new filesystem size.
# df -g | grep <filesystem_name>

GPFS Filesystem Extension Steps for Linux:

1. Scan for new LUNs and identify the shared LUNs on all GPFS nodes
# for host in /sys/class/scsi_host/host*; do
echo "- - -" > "$host/scan"
done
# lsblk

2. Create the file /tmp/disk1.txt
# vi /tmp/disk1.txt
%nsd:
       device=</dev/sdX>
       servers=server1,server2,server3,server4
       nsd=<nsd_name>
       usage=dataAndMetadata
       failureGroup=1
       pool=system

3. Create NSD from the file /tmp/disk1.txt
# mmcrnsd -F /tmp/disk1.txt

4. Verify that the NSD names correspond to the disks in the lsblk output.
# lsblk

5. Verify with the mmlsnsd command.
# mmlsnsd

6. Add disks to an existing filesystem
# mmadddisk <gpfs_filesystem> "<nsd_disk_name1>;<nsd_disk_name2>"
Example (quote the list so the shell does not interpret the semicolon):
# mmadddisk gpfs01 "nsd1;nsd2"

7. Validate the NSD disks attached to the GPFS filesystem.
# mmlsdisk <gpfs_filesystem>
Example:
# mmlsdisk gpfs01

8. Check the new filesystem size.
# df -h <filesystem_name>