Renew GPFS (IBM Spectrum Scale) Certificates

IBM Spectrum Scale (GPFS) uses internal SSL certificates to secure communication among cluster nodes. When these certificates are close to expiration—or have already expired—you must renew them to restore healthy cluster communication.

This article provides step-by-step instructions for renewing GPFS certificates using both the online (normal) and offline (expired certificate) methods.

Renewing GPFS Certificate – Online Method (Recommended)
Use this method when the certificates have NOT yet expired.
This method does not require shutting down the cluster.

1. Check the current certificate expiry date
Run on any cluster node:
# mmcommon run mmgskkm print --cert /var/mmfs/ssl/id_rsa_committed.cert | grep Valid
Or:
# /usr/lpp/mmfs/bin/mmcommon run mmgskkm print --cert /var/mmfs/ssl/id_rsa_committed.cert | grep Valid

2. Generate new authentication keys
# mmauth genkey new

3. Commit the new keys
# mmauth genkey commit

4. Validate the updated certificate on all nodes
# mmcommon run mmgskkm print --cert /var/mmfs/ssl/id_rsa_committed.cert | grep Valid
Or:
# /usr/lpp/mmfs/bin/mmcommon run mmgskkm print --cert /var/mmfs/ssl/id_rsa_committed.cert | grep Valid

Renewing GPFS Certificate – Offline Method (Certificates Already Expired)
If the cluster fails to start or nodes cannot communicate due to an expired certificate, use this offline method.
This requires a temporary cluster shutdown and manual time adjustment.

1. Verify certificate expiration
# mmdsh -N all 'openssl x509 -in /var/mmfs/ssl/id_rsa_committed.pub -dates -noout'
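If you want to compare expiry dates programmatically, the notAfter= line can be pulled out of the openssl output. A minimal sketch, run here against a hypothetical captured sample rather than a live cluster:

```shell
# Hypothetical captured output of the openssl command above
dates_output="notBefore=Jul 21 10:00:00 2022 GMT
notAfter=Jul 21 10:00:00 2025 GMT"

# Keep only the timestamp after "notAfter="
expiry=$(printf '%s\n' "$dates_output" | sed -n 's/^notAfter=//p')
echo "Certificate expires: $expiry"
```

On a live node, pipe the real openssl output into the same sed expression.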

2. Stop NTP service (important for manual time rollback)
# lssrc -s xntpd
# stopsrc -s xntpd

3. Shut down GPFS on all nodes
# mmshutdown -a

4. Stop CCR monitoring on quorum nodes
# mmdsh -N quorumNodes "/usr/lpp/mmfs/bin/mmcommon killCcrMonitor"

5. Roll back the system time on ALL nodes
Set the clock to a point just before the certificate expiry time.
Example:
# date 072019542025
Format is MMddhhmmCCYY: 07 = month (July), 20 = day, 19:54 = time, 2025 = year.
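The date string can also be built from a certificate's expiry timestamp. A hedged helper sketch: it assumes an openssl-style expiry ("Jul 21 10:30:00 2025"), subtracts 10 from the minute field naively (so it only works when the minute is 10 or more), and prints the matching AIX date command:

```shell
# Hypothetical expiry taken from the certificate check in step 1
expiry="Jul 21 10:30:00 2025"

# Build the MMddhhmmCCYY string for 10 minutes before expiry
# (naive minute subtraction: assumes the minute field is >= 10)
aix_date=$(printf '%s\n' "$expiry" | awk '{
    split("Jan Feb Mar Apr May Jun Jul Aug Sep Oct Nov Dec", m, " ")
    for (i = 1; i <= 12; i++) mon[m[i]] = sprintf("%02d", i)
    split($3, t, ":")                     # hh:mm:ss
    printf "%s%02d%s%02d%s\n", mon[$1], $2, t[1], t[2] - 10, $4
}')
echo "date $aix_date"
```

Review the generated string before running it; the clock rollback must happen on every node.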

6. Restart CCR monitor
# mmdsh -N quorumNodes "/usr/lpp/mmfs/bin/mmcommon startCcrMonitor"

7. Generate & commit new keys
# mmauth genkey new
# mmauth genkey commit

8. Restore correct date and restart NTP
# date <current_correct_time>
# startsrc -s xntpd

9. Verify the new certificate
# mmdsh -N all 'openssl x509 -in /var/mmfs/ssl/id_rsa_committed.pub -dates -noout'

10. Restart GPFS on all nodes
# mmstartup -a

Extracting Disk Details (Size, LUN ID, and WWPN) on IBM AIX

Managing storage on IBM AIX systems often requires gathering detailed information about disks — including their size, LUN ID, and WWPN (World Wide Port Name) of the Fibre Channel adapters they connect through.

This information is especially useful for SAN teams and system administrators when verifying storage mappings, troubleshooting, or documenting configurations.

In this post, we’ll look at a simple shell script that automates this task.

The script:
  • Loops through all disks known to AIX (lspv output).
  • Extracts each disk’s LUN ID from lscfg.
  • Gets its size in GB using bootinfo.
  • Finds all FC adapters (fcsX) and displays their WWPNs.
  • Prints a consolidated, easy-to-read summary.
The Script

#!/bin/ksh
for i in $(lspv | awk '{print $1}')
do

# Get LUN ID
LUNID=$(lscfg -vpl "$i" | grep -i "LIC" | awk -F. '{print $NF}')

# Get size in GB
DiskSizeMB=$(bootinfo -s "$i")
DiskSizeGB=$(echo "scale=2; $DiskSizeMB/1024" | bc)

# Loop over all FC adapters
for j in $(lsdev -Cc adapter | grep fcs | awk '{print $1}')
do
WWPN=$(lscfg -vpl "$j" | grep -i "Network Address" | sed 's/.*Address[ .]*//')
echo "Disk: $i Size: ${DiskSizeGB}GB LUN ID: $LUNID WWPN: $WWPN"
done
done


How It Works:
  • lspv lists all disks managed by AIX (e.g., hdisk0, hdisk1).
  • lscfg -vpl hdiskX displays detailed configuration information for each disk, including the LUN ID.
  • bootinfo -s hdiskX returns the disk size in megabytes.
  • lsdev -Cc adapter | grep fcs lists all Fibre Channel adapters (fcs0, fcs1, etc.).
  • lscfg -vpl fcsX | grep "Network Address" shows the adapter’s WWPN.
  • sed 's/.*Address[ .]*//' cleans the output, leaving only the WWPN value.
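The sed cleanup can be checked in isolation by feeding it a sample lscfg line (the address below is illustrative, not from a real adapter):

```shell
# Sample lscfg line (illustrative address)
line="        Network Address.............C0507601D8123456"

# Strip everything up to and including "Address" and the dot padding
wwpn=$(printf '%s\n' "$line" | sed 's/.*Address[ .]*//')
echo "$wwpn"
```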
Example Output:
Disk: hdisk0 Size: 100.00GB LUN ID: 500507680240C567 WWPN: C0507601D8123456
Disk: hdisk0 Size: 100.00GB LUN ID: 500507680240C567 WWPN: C0507601D8123457
Disk: hdisk1 Size: 200.00GB LUN ID: 500507680240C568 WWPN: C0507601D8123456
Disk: hdisk1 Size: 200.00GB LUN ID: 500507680240C568 WWPN: C0507601D8123457


This shows each disk (hdiskX) with its size, LUN ID, and all connected FC adapter WWPNs.

Presenting Fibre-Channel Storage to AIX LPARs with Dual VIOS (NPIV / vFC)

Present SAN LUNs to AIX LPARs using NPIV / virtual Fibre Channel (vFC) so each LPAR has redundant SAN paths through two VIOS servers (VIOS1 = primary, VIOS2 = backup) and can use multipathing (native MPIO or PowerPath).

NPIV (N_Port ID Virtualization) lets an LPAR present its own virtual WWPNs to the SAN while physical Fibre Channel hardware is on the VIOS. With two VIOS nodes and dual SAN fabrics, you get end-to-end redundancy:
  • VIOS1 and VIOS2 each present vFC adapters to the LPAR via the HMC.
  • Each VIOS has physical FC ports connected to redundant SAN switches/fabrics.
  • LUNs are zoned and masked to VIOS WWPNs. AIX LPARs discover LUNs, use multipathing, and survive single-path failures.
Prerequisites & Assumptions:
  • HMC admin, VIOS (padmin/root), and AIX root access available.
  • VIOS1 & VIOS2 installed, registered with HMC and reachable.
  • Each VIOS has at least one physical FC port (e.g., fcs0, fcs1).
  • SAN team will perform zoning & LUN masking.
  • Backups of VIOS and HMC configs completed.
  • You know which LPARs should receive which LUNs.
High-Level Flow:
  • Collect physical FC adapter names & WWPNs from VIOS1 and VIOS2.
  • Provide WWPNs to SAN admin for zoning & LUN masking.
  • Create vFC adapters for each AIX LPAR on the HMC and map them across VIOS1/VIOS2.
  • Verify mappings on HMC and VIOS (lsmap).
  • Ensure VIOS physical FC ports are logged into fabric.
  • On AIX LPARs: run cfgmgr, enable multipathing, create PVs/VGs/LVs as required.
  • Test failover by disabling a path and verifying I/O continues.
  • Document and monitor.
Step-by-Step Configuration

Step 1 — Verify VIOS Physical Fibre Channel Adapters
On VIOS1 and VIOS2, log in as padmin and identify FC adapters:
$ lsdev -type adapter
Expected output snippet:
VIOS1:
fcs0 Available 00-00 Fibre Channel Adapter
fcs1 Available 00-01 Fibre Channel Adapter
VIOS2:
fcs0 Available 00-00 Fibre Channel Adapter
fcs1 Available 00-01 Fibre Channel Adapter
Retrieve WWPNs for each adapter:
$ lsdev -dev fcs0 -vpd | grep -i "network address"
Record the results:
VIOS    Adapter   WWPN
VIOS1   fcs0      20:00:00:AA:AA:AA
VIOS2   fcs0      20:00:00:CC:CC:CC
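SAN teams often want WWPNs without colons and in uppercase. A small normalizer sketch (pure text processing, so it is safe to run anywhere; the WWPN is the example value from the table):

```shell
# Strip colons and uppercase a WWPN for SAN-team handover
normalize_wwpn() {
    printf '%s\n' "$1" | tr -d ':' | tr '[:lower:]' '[:upper:]'
}

normalize_wwpn "20:00:00:aa:aa:aa"
```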

Step 2 — SAN Zoning & LUN Presentation
Provide the recorded VIOS WWPNs to the SAN Administrator.
Request:
  • Zoning between each VIOS WWPN and Storage Controller ports.
  • LUN masking to present LUN-100 to both VIOS WWPNs.
  • Confirmation that both VIOS ports see the LUNs across both fabrics.
Tip: Ensure both fabrics (A & B) are zoned independently for redundancy.

Step 3 — Create Virtual Fibre Channel (vFC) Adapters via HMC

On the HMC:
Select AIX-LPAR1 → Configuration → Virtual Adapters.
Click Add → Virtual Fibre Channel Adapter.
Create two vFC adapters:
vfc0 mapped to VIOS1
vfc1 mapped to VIOS2
Save configuration and activate (Dynamic LPAR operation if supported).
Expected vFC mapping:
Adapter     Client LPAR    Server VIOS       Mapping Status
vfc0           AIX-LPAR1     VIOS1                Mapped OK
vfc1           AIX-LPAR1     VIOS2                Mapped OK

Step 4 — Verify vFC Mapping on VIOS
Log in to each VIOS (padmin):
$ lsmap -all -npiv

Example output:
On VIOS1:
Name          Physloc                            ClntID ClntName    ClntOS
------------- ---------------------------------- ------ ----------- --------
vfchost0      U9105.22A.XXXXXX-V1-C5             5      AIX-LPAR1   AIX
Status:LOGGED_IN
FC name:fcs0
Ports logged in: 2
VFC client name: fcs0
VFC client WWPN: 10:00:00:11:22:33:44:55

On VIOS2:
Name          Physloc                            ClntID ClntName    ClntOS
------------- ---------------------------------- ------ ----------- --------
vfchost0      U9105.22A.XXXXXX-V2-C6             5      AIX-LPAR1   AIX
Status:LOGGED_IN
FC name:fcs0
Ports logged in: 2
VFC client name: fcs1
VFC client WWPN: 10:00:00:55:66:77:88:99

Confirm each VIOS vFC host maps to the correct AIX vFC client.
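Checking many vfchost entries by eye is error-prone; the Status lines can be counted instead. A sketch run against a shortened, hypothetical sample of the lsmap output (a healthy mapping shows zero NOT_LOGGED_IN entries):

```shell
# Shortened sample of lsmap output (hypothetical values)
lsmap_out="vfchost0 U9105.22A.XXXXXX-V1-C5 5 AIX-LPAR1 AIX
Status:LOGGED_IN
FC name:fcs0"

# Count adapters that are NOT logged in; 0 means all mappings are healthy
not_logged_in=$(printf '%s\n' "$lsmap_out" | awk '/^Status:NOT_LOGGED_IN/ { n++ } END { print n + 0 }')
echo "adapters not logged in: $not_logged_in"
```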

Step 5 — Verify VIOS FC Port Fabric Login
On each VIOS:
$ fcstat fcs0
Verify:
Port is online.
Logged into fabric.
No link errors.

Step 6 — Discover Devices on AIX LPAR

Boot or activate AIX-LPAR1 (use SMS mode if you need to select the boot device):
  • Open HMC → Open vterm/console for AIX-LPAR1.
  • HMC GUI: Tasks → Operations → Activate → Advanced → Boot Mode = SMS → Activate.
  • In SMS console: 5 (Select Boot Options) → Select Install/Boot Device → List all Devices → pick device → Normal Boot Mode → Yes to exit and boot from that device.
Verify Fibre Channel adapters:
# lsdev -Cc adapter | grep fcs
fcs0 Available Fibre Channel Adapter
fcs1 Available Fibre Channel Adapter
List discovered disks:
# lsdev -Cc disk
# lspv
Expected:
hdisk12 Available 00-08-00-4,0 16 Bit LUNZ Disk Drive

Step 7 — Configure Multipathing
If using native AIX MPIO, verify:
# lspath
Enabled hdisk12 fscsi0
Enabled hdisk12 fscsi1
If using EMC PowerPath:
# powermt display dev=all
Confirm both paths active.
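Path counts can also be verified mechanically from lspath output. A sketch using sample text (on a live system, feed in real `lspath` output instead); the hdisk13 line below is a hypothetical failed disk added for contrast:

```shell
# Sample lspath output (hdisk13 line is hypothetical, for contrast)
lspath_out="Enabled hdisk12 fscsi0
Enabled hdisk12 fscsi1
Failed hdisk13 fscsi0"

# Count Enabled paths for hdisk12; dual-VIOS NPIV should give at least 2
enabled=$(printf '%s\n' "$lspath_out" | awk '$1 == "Enabled" && $2 == "hdisk12"' | wc -l)
echo "hdisk12 enabled paths: $enabled"
```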

Step 8 — Test Redundancy / Failover
To validate multipathing:
On VIOS1, disable the FC port temporarily:
$ rmdev -dev fcs0 -recursive -ucfg
On AIX LPAR, verify disk is still accessible:
# lspath -l hdisk12
Expected:
Enabled hdisk12 fscsi1
Failed hdisk12 fscsi0
Re-enable path:
$ cfgdev
Confirm path restoration:
Enabled hdisk12 fscsi0
Enabled hdisk12 fscsi1

Step 9 — Post-Deployment Checks
Verify all paths:
# lspath
Check VIOS logs for FC errors:
$ errlog -ls
Save configuration backups:
$ backupios -file /home/padmin/vios1_bkup
$ backupios -file /home/padmin/vios2_bkup

SEA Failover on Dual VIOS with VLAN Tagged Ethernet Adapters

This article explains how to configure Shared Ethernet Adapter (SEA) failover on a dual Virtual I/O Server (VIOS) setup using IEEE 802.1Q VLAN tagging. Follow these steps to provide highly available network connectivity for AIX client LPARs on VLAN-tagged networks.

Prerequisites:
  • Two VIOS instances (VIOS1 and VIOS2) managed in an HMC-managed system.
  • Physical NIC(s) available on each VIOS and the physical switch configured to support required VLANs.
  • HMC access (for DLPAR adapter creation) and root access to each VIOS.
  • Ensure PowerVM hypervisor supports VLAN tagging (IEEE 802.1Q).
  • Decide on VLAN IDs:
      - PVID (trunk PVID) for SEA trunk adapters (example: 1).
      - Control channel VLAN for SEA heartbeat (example: 100).
      - External (tagged) VLAN(s) to carry client traffic (example: 1000).
  • If using EtherChannel, configure switch ports for EtherChannel before creating EtherChannel on VIOS.
SEA (Shared Ethernet Adapter): Layer‑2 bridge on VIOS that connects virtual and physical networks.

ha_mode=auto (Failover): Configures two SEAs (one active, one standby) with a control channel for heartbeat.

Trunk (Access external network): Virtual adapters used by the SEA to bridge to external network; must share the same PVID on both VIOSes but use different priorities (lower value = higher priority).

Control channel: Dedicated virtual adapter on a unique VLAN (not exposed to external network) used for SEA heartbeats; must exist on both VIOSes and be specified in the SEA configuration.

High-level:

  1. Create trunk virtual adapter on VIOS1 (PVID=1) and mark "access external network".
  2. Create control-channel virtual adapter on VIOS1 (PVID=100); do not mark external access.
  3. Create SEA on VIOS1 with ha_mode=auto and set ctl_chan to the control adapter.
  4. Create VLAN subinterface on the SEA for the tagged VLAN (tag 1000) and assign an IP to it.
  5. Repeat steps 1–4 on VIOS2 using the same PVID values but a higher trunk priority number (so VIOS1 remains primary).
  6. On client LPAR(s), create virtual adapter(s) with the same PVID as the SEAs and add VLAN subinterface(s) with the same tag(s).
  7. Verify failover and connectivity.

Detailed Steps — VIOS1 (Primary):

1. Create trunk virtual Ethernet adapter (via HMC DLPAR):
Tasks → Dynamic Logical Partitioning → Virtual Adapters → Actions → Create → Ethernet Adapter.
Adapter ID: choose slot (e.g., ent2).
VLAN ID (PVID): 1 (example).
Select IEEE 802.1Q compatible adapter and enter VLAN tags you will use (e.g., 1000).
Check Access external network.
Set trunk priority: 1 (lower = higher priority).
Save.

2. Create control-channel virtual adapter (via HMC DLPAR):
Adapter ID: another slot (e.g., ent3).
VLAN ID (PVID): 100 (example control VLAN).
Do NOT check Access external network.
Save and if using DLPAR you may need to cfgdev or reboot to make adapters available.

3. Create SEA on VIOS1 (ha_mode auto):
On VIOS1 shell run:
# mkvdev -sea ent0 -vadapter ent2 -default ent2 -defaultid 1 -attr ha_mode=auto ctl_chan=ent3
ent0 is the physical adapter or the target SEA device name; adjust according to your system.
-vadapter ent2 uses the trunk virtual adapter created earlier.
-default ent2 -defaultid 1 sets the default trunk PVID for SEA traffic.
-attr ha_mode=auto ctl_chan=ent3 enables failover and points to control channel adapter.

4. Create VLAN subinterface on the SEA for the external tagged VLAN (e.g., 1000):
On VIOS1 shell run:
# mkvdev -vlan ent4 -tagid 1000
Note: The device name ent4 is an example (typically the SEA created in the previous step); the actual name on your system may differ, and the command prints the name of the VLAN device it creates.

5. Assign IP and start TCP/IP on the SEA VLAN interface:
Example using mktcpip:
# mktcpip -hostname vio1 -interface en5 -inetaddr 9.3.5.136 -netmask 255.255.255.0 -gateway 9.3.5.41 -nsrvaddr 9.3.4.2 -nsrvdomain itsc.austin.ibm.com -start
Replace the interface (en5) and addresses with your network specifics; the IP address must be on the same subnet as the gateway.

Detailed Steps — VIOS2 (Backup)

1. Create trunk virtual adapter (match PVID):

Create a virtual adapter (e.g., ent2) with VLAN ID 1 and Access external network checked.
Set trunk priority: 2 (higher value than primary so it's secondary).
Enable IEEE 802.1Q and include VLAN tags (e.g., 1000).

2. Create control-channel virtual adapter (match control VLAN):
Create adapter (e.g., ent3) with VLAN ID 100 and do NOT check Access external network.
Activate adapter (cfgdev or reboot if needed).

3. Create SEA on VIOS2 with the failover attribute:
SEA failover requires the SEA to exist on both VIOSes, so run the same mkvdev command on VIOS2:
# mkvdev -sea ent0 -vadapter ent2 -default ent2 -defaultid 1 -attr ha_mode=auto ctl_chan=ent3
This SEA comes up in standby mode because its trunk priority is higher.

4. Create VLAN subinterface on VIOS2 SEA for tagged VLAN (e.g., 1000):
Run:
# mkvdev -vlan ent4 -tagid 1000

5. Assign IP to the SEA VLAN adapter on VIOS2:
Example:
# mktcpip -hostname vio2 -interface en5 -inetaddr 9.3.5.137 -netmask 255.255.255.0 -gateway 9.3.5.41 -nsrvaddr 9.3.4.2 -nsrvdomain itsc.austin.ibm.com -start

Client LPAR Steps (AIX)

1. Create virtual adapter on client LPAR (via HMC DLPAR):

Tasks → Dynamic Logical Partitioning → Virtual Adapters → Create → Ethernet Adapter.
Adapter ID: choose slot (e.g., ent0).
VLAN ID (PVID): 1 (must match SEA trunk PVID).
Select IEEE 802.1Q compatible adapter and include VLAN tags (e.g., 1000).
Do NOT check Access external network on client adapters; trunk priority applies only to trunk adapters and is not set here.
Save.

2. Create VLAN subinterface on client LPAR (AIX):
On AIX, run smitty vlan → Add a VLAN → Select ent0 → Specify VLAN ID 1000.
The VLAN device (e.g., ent1) becomes available.

3. Add TCP/IP configuration for VLAN interface on client:
# smitty mktcpip → Select the VLAN device (e.g., en1) → Enter hostname, IP address, netmask, gateway, nameserver, start TCP/IP daemons.

Verification & Testing:

1. Check SEA status on VIOS:

# lsdev -dev entX -attr
# entstat -d entX
Confirm which SEA is active and which is standby.
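The entstat output for a SEA includes a State line (PRIMARY for the active bridge, BACKUP for standby). A sketch that extracts it from a hypothetical sample snippet; verify the exact field names against your own entstat output:

```shell
# Sample snippet of SEA statistics from entstat (illustrative)
entstat_out="High Availability Mode: Auto
Priority: 1
State: PRIMARY"

# Pull out the State value
sea_state=$(printf '%s\n' "$entstat_out" | sed -n 's/^State: //p')
echo "SEA state: $sea_state"
```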

2. Validate VLAN tagging:
From client LPAR, ping gateway and external hosts on VLAN 1000.

3. Force failover test:
Shut down the active VIOS SEA interface or simulate failure and ensure the standby SEA becomes active and client LPARs retain network access.
Quick reference:
Create SEA on VIOS (example):
# mkvdev -sea ent0 -vadapter ent2 -default ent2 -defaultid 1 -attr ha_mode=auto ctl_chan=ent3
Create VLAN subinterface for tagged VLAN:
# mkvdev -vlan ent4 -tagid 1000
Assign TCP/IP to the SEA VLAN interface:
# mktcpip -hostname vio1 -interface en5 -inetaddr 9.3.5.136 -netmask 255.255.255.0 -gateway 9.3.5.41 -nsrvaddr 9.3.4.2 -nsrvdomain itsc.austin.ibm.com -start

Deploying VIOS and Configuring IBM Power Systems

Setting up a Virtual I/O Server (VIOS) environment correctly is essential for achieving performance, scalability, and high availability on IBM Power Systems. This step-by-step guide walks you through the complete process — from installing VIOS (single or dual) to configuring networking, SEA failover, and client LPAR connectivity using HMC or IVM.
  • Install and register VIOS partitions for virtualized network and storage access.
  • Configure Shared Ethernet Adapters (SEA) with VLAN tagging for resilient networking.
  • Map virtual storage to clients via vSCSI or NPIV.
  • Create and configure AIX or Linux client LPARs, attach virtual adapters, and test network failover.
This practical workflow aligns with IBM Redbooks best practices and helps administrators build a robust PowerVM virtualization layer that supports flexible, high-availability enterprise workloads.

1. Obtain VIOS Installation ISO
Download the VIOS DVD ISO for the desired version from IBM Entitled Systems Support: https://www.ibm.com/servers/eserver/ess/landing/landing-page

2. Upload ISO to HMC
On the HMC, navigate to:
HMC Management → Templates and OS Images → Add/Upload OS Image
Upload the downloaded ISO. The HMC will store it for VIO installations.

3. Activate VIO LPAR and Start Installation
Power on the Virtual I/O Server LPAR from HMC.
When prompted, choose "VIO Install".
Installation may take over an hour.
The progress bar may hang near the end for ~20 minutes — this is normal.

4. First Login and Initial Setup
# Log in as padmin
# Set the padmin password when prompted
# Accept the license while still in the padmin shell
$ license -accept
# Optional: switch to a root shell for later steps
$ oem_setup_env

5. Configure Network and Hostname
Configure primary interface
Example: en0
chdev -l en0 -a netaddr=172.16.10.100 -a netmask=255.255.255.0 -a state=up
Set default route
chdev -l inet0 -a route=0,172.16.10.1
Set hostname interactively
smitty hostname
Update /etc/hosts and /etc/netsvc.conf if needed.
Update /etc/resolv.conf with domain, search list, and nameservers.

6. Creating new paging space (if you want a dedicated paging LV):
mkps -s <number_of_LPs> -n -a <volume_group>
Note: -s takes the size in logical partitions (LPs), not MB; -n activates the paging space immediately and -a activates it at every restart. The LV name (paging00, paging01, ...) is assigned automatically.
Example (24 LPs = 6 GB, assuming a 256 MB physical partition size):
mkps -s 24 -n -a rootvg
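Because mkps sizes paging space in logical partitions, it helps to derive the LP count from the size you actually want. A sketch assuming a 256 MB PP size (check yours with `lsvg rootvg`); it only builds the command string, it does not run mkps:

```shell
# Desired size and assumed PP size (verify PP size with: lsvg rootvg)
want_gb=6
pp_size_mb=256

# LPs = size in MB / PP size
lps=$(( want_gb * 1024 / pp_size_mb ))
mkps_cmd="mkps -s $lps -n -a rootvg"
echo "$mkps_cmd"
```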
Activate Paging Space
swapon -a
Activates all paging spaces listed in /etc/swapspaces.
Verify Paging Space
lsps -a
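The lsps output can be screened for paging spaces that are filling up. A sketch over sample text (hd6 with 81% used is a hypothetical value for illustration):

```shell
# Sample lsps -a output (the 81% figure is hypothetical)
lsps_out="Page Space Physical Volume Volume Group Size %Used Active Auto Type
paging01 hdisk0 rootvg 6144MB 12 yes yes lv
hd6 hdisk0 rootvg 512MB 81 yes yes lv"

# Print paging spaces above 70% used (skip the header row)
busy=$(printf '%s\n' "$lsps_out" | awk 'NR > 1 && $5 + 0 > 70 { print $1 }')
echo "paging spaces over 70% used: $busy"
```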

7. Configure Dump Devices
Create lv_dump01 on primary disk
mklv -t sysdump -y lv_dump01 rootvg 8 hdisk0
sysdumpdev -Pp /dev/lv_dump01
Create lv_dump02 on second disk
mklv -t sysdump -y lv_dump02 rootvg 8 hdisk1
sysdumpdev -Ps /dev/lv_dump02

8. NTP and Time Zone
Create missing NTP files
touch /home/padmin/config/ntp.conf /home/padmin/config/ntp.drift /home/padmin/config/ntp.log /home/padmin/config/ntp.trace
Sample ntp.conf
cat > /home/padmin/config/ntp.conf <<EOF
server ntp.mydomain.com
driftfile /home/padmin/config/ntp.drift
tracefile /home/padmin/config/ntp.trace
logfile /home/padmin/config/ntp.log
EOF
Enable NTP at startup
vi /etc/rc.tcpip
Uncomment (remove the leading #) the xntpd line so it runs at startup:
start /usr/sbin/xntpd -a '-c /home/padmin/config/ntp.conf' "$src_running"
Set timezone
vi /etc/environment
TZ=Europe/Vienna

9. Mirror Root Volume
extendvg rootvg hdisk1
mirrorios -defer hdisk1
With -defer, the VIOS is not rebooted automatically; reboot it during a maintenance window to complete the mirror.
bosboot -ad hdisk0
bosboot -ad hdisk1

10. Configure syslog
vi /etc/syslog.conf
Example:
*.debug  /var/log/messages  rotate size 1m files 10
auth.debug /var/log/auth.log  rotate size 1m files 10
Create log files
touch /var/log/messages /var/log/auth.log
refresh -s syslogd

11. SSH Configuration
  • Add needed public key to authorized_keys
12. Adding Adapters
  • Add network cards or FC adapters as needed.
  • Verify device names.
13. Apply Recommended VIOS Rules
View differences
rules -o diff -s -d
Deploy recommended settings
rules -o deploy -d

14. Network Configuration (Optional: Dual VIOS / SEA / LACP)
HMC GUI
  • Configure link aggregation and SEA.
  • Create virtual networks and VLANs via GUI.
Command Line:
Create virtual Ethernet adapters
chhwres -r virtualio -m p950 -p vio01 -o a -s 100 --rsubtype eth -a "ieee_virtual_eth=1,port_vlan_id=4000,\"addl_vlan_ids=3030,1871\",is_trunk=1,trunk_priority=1"
chhwres -r virtualio -m p950 -p vio02 -o a -s 100 --rsubtype eth -a "ieee_virtual_eth=1,port_vlan_id=4000,\"addl_vlan_ids=3030,1871\",is_trunk=1,trunk_priority=2"
Save profile
mksyscfg -r prof -m p950 -p vio01 -o save -n default --force
mksyscfg -r prof -m p950 -p vio02 -o save -n default --force

Create LACP Etherchannel
mkvdev -lnagg ent4 ent5 -attr mode=8023ad hash_mode=src_dst_port

Create SEA
mkvdev -sea ent18 -vadapter ent12 ent14 ent16 ent17 -default ent12 -defaultid 4000
chdev -dev ent20 -attr ha_mode=sharing
Here ent20 is the SEA device created by the mkvdev command above; substitute the device name reported on your system.

15. Performance Tuning
Increase SEA buffer sizes:
chdev -l ent9 -a max_buf_huge=128 -a min_buf_huge=64 -a max_buf_large=128 -a min_buf_large=64 -a max_buf_medium=512 -a min_buf_medium=256 -a max_buf_small=4096 -a min_buf_small=2048 -a max_buf_tiny=4096 -a min_buf_tiny=2048 -P
Enable large_receive:
chdev -l ent10 -a large_receive=yes
Adjust FC queue_depth if needed:
chdev -l hdiskX -a queue_depth=32 -P
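When the same queue_depth should be applied to many disks, the chdev commands can be generated from lspv output and reviewed before running. A sketch over sample text (disk names and PVIDs are illustrative); it prints the commands rather than executing them:

```shell
# Sample lspv output (disk names and PVIDs are illustrative)
lspv_out="hdisk0 00f6db9a11111111 rootvg active
hdisk1 00f6db9a22222222 datavg active"

# Emit one chdev per disk; review, then run the ones you want
cmds=$(printf '%s\n' "$lspv_out" | awk '{ printf "chdev -l %s -a queue_depth=32 -P\n", $1 }')
printf '%s\n' "$cmds"
```

On a live system, replace the sample variable with real `lspv` output and exclude rootvg disks as appropriate.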