
AIX varyonvg & mount filesystems

The script checks whether the volume group (VG) exists, varies it on automatically if it is inactive, and mounts all filesystems that belong to it, running fsck when a mount fails.
=========================================================================
#!/bin/ksh
# =========================================================================
# mountvgfs.ksh - Varyon VG if not active, then mount all filesystems
# for an AIX LVM volume group.
# Usage : ./mountvgfs.ksh <vgname>
# ./mountvgfs.ksh -f <vgname>
# Author : Tasleem A Khan
# =========================================================================
usage() {
echo
echo "Usage: $0 [-f] <vgname>"
echo "Mounts all filesystems for the given AIX LVM volume group."
echo
exit 1
}
# --- Parse command-line options ---
opt_force=0   # -f flag (parsed here; no forced action is currently tied to it)
if [[ "$1" = "-f" ]]; then
opt_force=1
shift
fi

vg="$1"
[[ -z "$vg" ]] && usage

logfile="/mountvgfs.log"
fsck="/usr/sbin/fsck"

# --- Initialize log ---
: > "$logfile"
exec > >(tee -a "$logfile") 2>&1

echo "============================================================"
echo "Starting filesystem mount for VG: $vg"
echo "Logfile: $logfile"
echo "Start time: $(date '+%Y-%m-%d %H:%M:%S')"
echo "============================================================"

# --- Verify VG existence ---
if ! lsvg "$vg" >/dev/null 2>&1; then
echo "Error: Volume group '$vg' does not exist."
exit 1
fi

# --- Check if VG is active ---
# "VG STATE:" shares its line with "PP SIZE:" in lsvg output, so take field 3
vg_state=$(lsvg "$vg" | awk '/VG STATE/ {print $3}')

if [[ "$vg_state" != "active" ]]; then
echo "Volume group '$vg' is not active. Attempting to varyon..."
varyonvg "$vg" >/dev/null 2>&1

if [[ $? -ne 0 ]]; then
echo "Error: Failed to varyon volume group '$vg'."
exit 1
else
echo "Successfully varied on volume group '$vg'."
fi
else
echo "Volume group '$vg' is already active."
fi

# --- Build filesystem list ---
typeset -A fs_to_dev
fslist=""

# lsvg -l output: skip the two header lines; the mount point is the 7th column.
# Run the pipeline via command substitution (a pipeline stored in a string
# variable is not executed as a pipeline when expanded).
while read -r line; do
set -A arr -- $line
dev=${arr[0]}
mp=${arr[6]}
[[ -z "$mp" || "$mp" = "N/A" ]] && continue
fs_to_dev["$mp"]="/dev/$dev"
fslist="$fslist $mp"
done <<EOF
$(lsvg -l "$vg" | sed '1,2d' | grep -v 'N/A')
EOF

# --- Sort filesystem list so parent mount points mount before nested ones ---
fslist=$(echo "$fslist" | tr ' ' '\n' | sort)

if [[ -z "$fslist" ]]; then
echo "No filesystems found in VG '$vg'."
exit 0
fi

echo
echo "Filesystem list to mount:"
for fs in $fslist; do
echo " $fs"
done
echo

# --- Function: Check if filesystem is mounted ---
is_fs_mounted() {
fs="$1"
df -k "$fs" 2>/dev/null | sed 1d | awk '{print $NF}' | grep -qx "$fs"
}

# --- Mount filesystems ---
for fs in $fslist; do
echo "------------------------------------------------------------"
echo "Processing filesystem: $fs"

if is_fs_mounted "$fs"; then
echo "Already mounted."
continue
fi

if [[ ! -d "$fs" ]]; then
echo "Creating mount point: $fs"
mkdir -p "$fs"
fi

echo "Mounting: $fs"
mount "$fs" >/dev/null 2>&1

if ! is_fs_mounted "$fs"; then
echo "Mount failed. Running fsck..."
echo "Executing: $fsck -y $fs"
"$fsck" -y "$fs"

echo "Retrying mount for $fs..."
mount "$fs" >/dev/null 2>&1

if is_fs_mounted "$fs"; then
echo "Successfully mounted after fsck."
else
echo "Error: Failed to mount $fs after fsck."
fi
else
echo "Mounted successfully."
fi
done

echo "------------------------------------------------------------"
echo "Finished mounting all filesystems."
echo "End time: $(date '+%Y-%m-%d %H:%M:%S')"
echo "============================================================"

=========================================================================

Example Outputs:

Case 1 — VG already active

Command:
# ./mountvgfs.ksh datavg
Example Output:
============================================================
Starting filesystem mount for VG: datavg
Logfile: /mountvgfs.log
Start time: 2025-11-08 12:42:01
============================================================
Volume group 'datavg' is already active.
Filesystem list to mount:
/backup
/data
/logs
------------------------------------------------------------
Processing filesystem: /backup
Already mounted.
------------------------------------------------------------
Processing filesystem: /data
Mounting: /data
Mounted successfully.
------------------------------------------------------------
Processing filesystem: /logs
Mounting: /logs
Mounted successfully.
------------------------------------------------------------
Finished mounting all filesystems.
End time: 2025-11-08 12:42:05
============================================================

Case 2 — VG not active (auto varyon)
Command:

# ./mountvgfs.ksh testvg
Example Output:
============================================================
Starting filesystem mount for VG: testvg
Logfile: /mountvgfs.log
Start time: 2025-11-08 12:50:18
============================================================
Volume group 'testvg' is not active. Attempting to varyon...
Successfully varied on volume group 'testvg'.
Filesystem list to mount:
/test1
/test2
------------------------------------------------------------
Processing filesystem: /test1
Mounting: /test1
Mounted successfully.
------------------------------------------------------------
Processing filesystem: /test2
Mounting: /test2
Mounted successfully.
------------------------------------------------------------
Finished mounting all filesystems.
End time: 2025-11-08 12:50:22
============================================================

Case 3 — VG doesn’t exist
Command:

# ./mountvgfs.ksh invalidvg
Example Output:
============================================================
Starting filesystem mount for VG: invalidvg
Logfile: /mountvgfs.log
Start time: 2025-11-08 12:55:11
============================================================
Error: Volume group 'invalidvg' does not exist.
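The is_fs_mounted helper used in the script can be exercised on its own; a minimal sketch in plain sh, relying only on df -k reporting the mount point in the last column (as on AIX and Linux):

```shell
#!/bin/sh
# Standalone copy of the script's mount check: succeeds only when the given
# path is itself a mount point reported by df.
is_fs_mounted() {
    fs="$1"
    df -k "$fs" 2>/dev/null | sed 1d | awk '{print $NF}' | grep -qx "$fs"
}

is_fs_mounted / && echo "/ is mounted"
is_fs_mounted /no/such/mountpoint || echo "/no/such/mountpoint is not mounted"
```

A nonexistent path makes df fail, so the whole pipeline (and the function) returns nonzero, which is exactly what the mount/fsck retry logic in the script keys off.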

IBM Power System Firmware Upgrade

Introduction:
IBM Power System firmware controls low-level hardware functions of the server frame (CEC – Central Electronics Complex). Keeping firmware up to date ensures:
  • Hardware stability
  • Security fixes
  • Performance improvements
  • Compatibility with new HMC levels
  • Support for new adapters and features
Firmware updates are performed through the HMC and must follow IBM-supported upgrade paths and compatibility matrices.
There are two types of firmware updates:
  • Concurrent Firmware Update
  • Disruptive Firmware Update
Choosing the correct method is critical to avoid unexpected downtime.

IBM Frame Firmware Update Types:

Concurrent Firmware Update
Description:
Firmware is updated while the system remains powered on and operational.
When to Use
  • Supported by current firmware level
  • Supported by HMC version
  • Minimal downtime is required
  • Production workloads must remain online
Benefits
  • No need to shut down LPARs
  • No need to stop Virtual I/O Servers
  • Minimal production impact
Limitations
  • Not supported on all firmware levels
  • Some updates may require later reboot
  • Certain hardware adapters may block concurrent update
Disruptive Firmware Update
Description:
The system must be powered off or rebooted to complete the firmware update.
When Required
  • Current firmware level does not support concurrent update
  • Major firmware change
  • IBM release notes specify disruptive update only
Impact
  • All LPARs must be shut down
  • Virtual I/O Servers must be shut down
  • Full system downtime required

How to Decide Which Update Type to Use
Before starting:
Run HMC Readiness Check
HMC → Select System → Actions → Update Firmware → Check Readiness
If status shows Ready for Concurrent, you may use concurrent update.
If not, disruptive update is required.
Check IBM compatibility matrix:
https://www.ibm.com/support/pages/hmc-and-system-firmware-supported-combinations
Review release notes from IBM Fix Central.
Always prefer Concurrent when supported and safe.
Use Disruptive when required or recommended.

IBM Power System Firmware Upgrade Checklist
Step 1 — Pre-Check Compatibility
1.1 Verify HMC & Firmware Combination
Check IBM supported combinations page:
https://www.ibm.com/support/pages/hmc-and-system-firmware-supported-combinations
Example for model 9080-M9S:
https://www.ibm.com/support/pages/node/7080068
Confirm:
HMC level supported
Target firmware level supported
Adapter compatibility confirmed
1.2 Confirm Firmware Repository
Ensure firmware image is available locally or via NIM:
Example:
nimserver01:/software/firmware/9080-M9S

Step 2 — Run Firmware Readiness Check
On HMC:
Select target system
Navigate:
Actions → Update Firmware → Check Readiness
Confirm status = Ready
If not ready:
Review blocking issues
Resolve adapter or firmware mismatch

Step 3 — Prepare for Firmware Upgrade (Disruptive Case)
Skip shutdown steps if performing Concurrent Update.
For disruptive update:
  • Stop Application
  • Stop Database
  • Shutdown all LPARs
  • Shutdown Virtual I/O Servers (VIO A & B)
Verify all partitions are in Not Activated state.
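The partition states can be confirmed from the HMC command line with lssyscfg (the same command used in the post-upgrade checks). The sketch below parses output of the shape produced by lssyscfg -r lpar -m <Frame_Name> -F name,state and flags anything still running; the sample lines and partition names are made up, since a live HMC is needed for the real command:

```shell
#!/bin/sh
# Sketch: flag partitions that are not yet in "Not Activated" state.
# The sample data stands in for real lssyscfg -F name,state output.
states='vios1,Not Activated
lpar01,Not Activated
lpar02,Running'

still_up=$(printf '%s\n' "$states" | awk -F, '$2 != "Not Activated" { print $1 }')

if [ -n "$still_up" ]; then
    echo "Still running: $still_up"
else
    echo "All partitions are in Not Activated state."
fi
```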

Step 4 — Perform Firmware Upgrade
On HMC:
Select Target System
Navigate:
Actions → Update Firmware → Update Firmware
Choose:
System Firmware → Update
Select Update Type:
Concurrent (if supported)
Disruptive (if required)
Select firmware image source (local repository, NFS, etc.)
Confirm and start update
Monitor progress
Wait until system state returns to:
Operating
or
Standby

Step 5 — Post Firmware Upgrade Tasks
5.1 Start Virtual I/O Servers
Login to HMC (SSH or PuTTY):
chsysstate -m <Frame_Name> -r lpar -n <VIO-ServerA> -o on
chsysstate -m <Frame_Name> -r lpar -n <VIO-ServerB> -o on
Wait until both are fully operational.
5.2 Start Application LPARs
chsysstate -m <Frame_Name> -r lpar -n <LPAR-A> -o on
chsysstate -m <Frame_Name> -r lpar -n <LPAR-B> -o on
5.3 Validate System Health
Check Filesystems
# df -g
Validate PowerHA Cluster (if used)
# clRGinfo
Validate GPFS (if used)
# mmgetstate -a
Validate Services
# lssrc -s sendmail
5.4 Start Database & Applications
  • Start Database
  • Start Application services
  • Confirm user access
  • Validate monitoring alerts

Post-Upgrade Validation Checklist
  • Firmware level verified on HMC
  • System status = Operating
  • VIO A & B running
  • All LPARs running
  • Cluster healthy (if applicable)
  • Storage paths active
  • No hardware errors
  • Applications accessible
Best Practices
  • Always review release notes before updating
  • Prefer concurrent update when supported
  • Test in non-production first
  • Schedule maintenance window for disruptive updates
  • Verify cluster and storage health after update
  • Keep HMC level compatible with firmware

SSH to xxx.xxx.xxx.xx port 22: no matching host key type found (connection closed)

You are getting this SSH error when trying to connect:

Unable to negotiate with xxx.xxx.xxx.xx port 22: no matching host key type found. Their offer: ssh-rsa,ssh-dss [connection closed]

This means the SSH client and server cannot agree on a common host key algorithm to use during connection. The server offers ssh-rsa and ssh-dss, but your SSH client doesn't accept those by default anymore because of security changes in newer OpenSSH versions.

Option 1: 

1.Backup current SSH configs
# cp /etc/ssh/sshd_config /etc/ssh/sshd_config.06022025
# cp /etc/ssh/ssh_config /etc/ssh/ssh_config.06022025

You copy the SSH daemon config file (sshd_config) and the SSH client config file (ssh_config) to backups named with today's date (06022025).
This is good practice before making changes.

2.Modify SSH config files to allow ssh-rsa algorithm

echo "HostKeyAlgorithms +ssh-rsa" >> /etc/ssh/sshd_config
echo "PubkeyAcceptedAlgorithms +ssh-rsa" >> /etc/ssh/sshd_config
echo "HostKeyAlgorithms +ssh-rsa" >> /etc/ssh/ssh_config
echo "PubkeyAcceptedAlgorithms +ssh-rsa" >> /etc/ssh/ssh_config

You append lines to both SSH daemon and client config files to explicitly add ssh-rsa as an allowed algorithm.

3.Restart SSH daemon
stopsrc -s sshd;startsrc -s sshd;lssrc -s sshd

These commands stop, start, and check the status of the SSH daemon (sshd).

Option 2:

1. Step-by-step to create/edit .ssh/config
Open (or create) the file ~/.ssh/config in your user’s home directory:
vim ~/.ssh/config

2.Add the following configuration to allow ssh-rsa for a specific host:
Host myserver
HostName xxx.xxx.xxx.xx
User your_username
HostKeyAlgorithms +ssh-rsa
PubkeyAcceptedAlgorithms +ssh-rsa


  • Replace myserver with any alias you want.
  • Replace xxx.xxx.xxx.xx with the server IP or hostname.
  • Replace your_username with your SSH login username.

If you want to enable this for all hosts, you can do:

Host *
HostKeyAlgorithms +ssh-rsa
PubkeyAcceptedAlgorithms +ssh-rsa


3.Save and exit the editor.
4.Set the correct permissions for the file:
chmod 600 ~/.ssh/config

Since ~/.ssh/config is a client-side file, no sshd restart is needed; the new settings take effect on the next connection.
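The Option 2 config block can also be staged and permission-checked with a short sketch; the target path below defaults to a throwaway file so it can be tried safely before pointing it at ~/.ssh/config:

```shell
#!/bin/sh
# Sketch: write the per-host ssh-rsa override from Option 2 into a config
# file and lock down its permissions. CFG defaults to a scratch path;
# set CFG="$HOME/.ssh/config" for real use.
CFG="${CFG:-/tmp/ssh_config_sketch}"

cat >> "$CFG" <<'EOF'
Host myserver
    HostName xxx.xxx.xxx.xx
    User your_username
    HostKeyAlgorithms +ssh-rsa
    PubkeyAcceptedAlgorithms +ssh-rsa
EOF

chmod 600 "$CFG"
```

A one-off alternative that touches no files at all is to pass the options on the command line: ssh -o HostKeyAlgorithms=+ssh-rsa -o PubkeyAcceptedAlgorithms=+ssh-rsa your_username@xxx.xxx.xxx.xx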

IBM PowerVM Basics

At its core, IBM PowerVM is a virtualization layer designed for IBM Power Systems servers, from POWER5 to POWER10. It allows multiple logical partitions (LPARs) to run independently on the same hardware, securely sharing CPUs, memory, and I/O. Each LPAR functions as a standalone server, giving you the flexibility to mix and match workloads.


Core Components and Architecture
PowerVM relies on the POWER Hypervisor, a firmware-based layer that slices physical resources into logical partitions. Here’s how it works:

CPU Options
  • Dedicated Processors – Reserve whole CPUs for a single LPAR. Ideal for latency-sensitive workloads (dedicated mode).
  • Micro-Partitioning – Share CPU capacity in fractions, down to 0.01 processing units. LPARs can draw from a shared pool, with capped or uncapped options for flexibility.
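To make the micro-partitioning numbers concrete, here is a small sketch that sums LPAR entitlements against a shared pool; all names and values are illustrative, not from a real system:

```shell
#!/bin/sh
# Sketch: check that the summed entitled capacity of micro-partitioned
# LPARs fits within the shared processor pool.
pool=4.00   # physical cores in the shared pool (illustrative)

# one "name entitlement" pair per line (illustrative values)
total=$(printf '%s\n' \
    'lpar01 0.50' \
    'lpar02 1.25' \
    'lpar03 0.75' |
  awk '{ t += $2 } END { printf "%.2f", t }')

echo "total entitlement: $total of $pool units"
```

Uncapped partitions may consume beyond their entitlement when spare pool capacity exists, but the summed entitlements themselves must always fit in the pool.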
Management Tools
  • Integrated Virtualization Manager (IVM) – A web/CLI tool for entry-level servers, letting you manage LPARs, storage pools, and Ethernet without a full HMC.
  • Hardware Management Console (HMC) – Needed for advanced environments, offering more control over LPARs, storage, and networking.
PowerVM Editions
Standard Edition – Covers the basics (POWER6+).
Enterprise Edition – Adds Live Partition Mobility and Active Memory Sharing (POWER7+).

Key Limits to Plan Your Infrastructure:
Planning ahead ensures you avoid bottlenecks. Here’s a snapshot of POWER9/10 limits with VIOS 3.1.4+:
Resource                                | Max per LPAR   | System Max | Notes
Virtual Ethernet Adapters               | 256            | 4096       | Up to 20 VLANs per adapter
vSCSI Adapters                          | 256            | N/A        | Client/server sides count separately
vFC Adapters                            | 64             | N/A        | For NPIV; pairs with VIOS
Total Virtual Adapters                  | 1024           | N/A        | Ethernet + SCSI + FC combined
Shared Ethernet Adapters (SEA) per VIOS | N/A            | 16         | Bridges virtual/physical networks; up to 4096 VLANs via trunking
VLAN IDs                                | 20 per adapter | 4096       | IDs 1-4094; 0 & 4095 reserved
These limits ensure a high-performance virtual infrastructure without bottlenecks.

Storage Smarts: Pools, Groups, and SSP

VIOS makes I/O efficient through virtualized storage:
Feature              | Storage Pool (IVM)              | Volume Group (HMC/LVM)
Data Distribution    | Random across all drives        | Same set of drives
Drive Failure Impact | Low; fast rebuild               | High; slower rebuild
Spares               | Built-in preservation           | Needs hot spares
Scalability          | Clusters; many drives           | Local; fewer drives
Best For             | Thin provisioning, multi-tenant | Traditional local storage

Shared Storage Pool (SSP) takes it further by clustering VIOS across servers for pooled SAN disks, thin provisioning, and redundancy.

Memory Magic: Expand, Share, and Protect
PowerVM has powerful memory features to optimize your workloads:
  • Active Memory Expansion (AME) – Compresses LPAR memory (e.g., 20GB → 30GB effective) with minimal CPU overhead depending on workload.
  • Active Memory Sharing (AMS) – Creates a dynamic memory pool for multiple LPARs, allowing overcommitment and paging to VIOS.
  • Active Memory Deduplication (AMD) – Merges identical pages in AMS for transparent RAM savings.
  • Active Memory Mirroring (AMM) – Mirrors hypervisor memory for failover (requires double the hypervisor RAM).
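The AME example above (20GB physical appearing as 30GB effective) is simply physical memory times the expansion factor; as a quick sketch using the same illustrative values:

```shell
#!/bin/sh
# AME arithmetic: effective memory = physical memory x expansion factor.
# The 20 GB and 1.5 values mirror the illustrative example in the text.
physical=20
effective=$(awk -v p="$physical" -v f=1.5 'BEGIN { printf "%.0f", p * f }')
echo "effective memory: ${effective} GB"
```

The actual CPU cost of reaching a given factor depends on how compressible the workload's memory is, which is why AME is tuned per LPAR.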
Networking Deep Dive
Virtual networking keeps LPARs connected with high performance:
  • Virtual Ethernet Adapters – Up to 256 per LPAR, memory-based, VLAN-aware.
  • Virtual Switches – Handle Layer 2 switching in hypervisor memory; add more for isolation.
  • Virtual Network Bridges + SEAs on VIOS – Connect virtual adapters to physical NICs with failover/load balancing.
  • VLANs – Segment traffic securely, supporting up to 4096 system-wide IDs.
Pro tip: Tag frames with IEEE 802.1Q for secure, broadcast-limited network domains.

Why PowerVM Still Rules
From migrating legacy AIX workloads to enabling AI applications on POWER10, PowerVM delivers:
  • High utilization with micro-partitioning and memory sharing.
  • No-downtime mobility via Live Partition Mobility.
  • Granular control over CPU, memory, and I/O resources.
  • Secure, multi-tenant environments for enterprise workloads.
PowerVM is the backbone for efficient, resilient, and scalable Power environments.

HMC Update/Upgrade

Introduction:
IBM HMC is the central management interface for IBM Power Systems. Upgrading or updating the HMC ensures:
  • Compatibility with new server firmware
  • Access to new features and bug fixes
  • Secure and stable system administration
This guide provides step-by-step instructions for safely backing up, updating, and upgrading HMC using GUI, CLI, or NFS-based methods, including best practices and verification checks.

Step 1 — Pre-Upgrade Preparation
Read Documentation
Download Release Notes, Readme, Upgrade Path, Known Issues from IBM Fix Central
Review "Recommended Fixes – HMC Code Upgrades"
Verify Compatibility
Check managed servers’ firmware
Confirm HMC hardware meets disk/RAM requirements
Confirm network services: DNS, NTP, FTP/NFS availability
Check any custom scripts or external dependencies
Plan Downtime
Notify stakeholders
Schedule maintenance window
Ensure rollback plan is ready
Ensure Recovery Access
Physical or remote console (KVM) access
Media access for USB/DVD
SSH access to HMC

Step 2 — Backup HMC Data
Backups are mandatory before any update or upgrade.
2.1 Backup Methods
Option A — GUI
Login:
https://<hmc_server>
User: hscroot
Navigate:
HMC Management → HMC Action → Backup HMC Data

Choose Network File System (NFS)
Enter:
Server: <nfs_server_ip>
Directory: /export/hmc_backup
Click Save Backup

Option B — CLI
# Backup Critical Console Data (CCD)
:~> bkupccs -r nfs -h <nfs_server_ip> -l /export/hmc_backup
# Save Upgrade Data (for major upgrades)
:~> saveupgdata -r nfs -h <nfs_server_ip> -l /export/hmc_backup
# Backup Profile Data (LPAR/system profiles)
:~> bkprofdata -r nfs -h <nfs_server_ip> -l /export/hmc_backup
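The restore procedure in Step 7 expects an archive named like backupccs_YYYY-MM-DD_HHMMSS.tar.gz. When archiving backups on the NFS server yourself, a matching timestamped name can be generated like this (the naming pattern is taken from the restore step; everything else is a sketch):

```shell
#!/bin/sh
# Sketch: build an archive name in the backupccs_YYYY-MM-DD_HHMMSS.tar.gz
# form that the restore step expects.
stamp=$(date '+%Y-%m-%d_%H%M%S')
name="backupccs_${stamp}.tar.gz"
echo "$name"
```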

Step 3 — Prepare HMC Update/Upgrade Files
Download HMC update or upgrade package from IBM Fix Central.
Copy files to NFS server:
/export/hmc_update
Test NFS mount on HMC:
:~> mount -o vers=3 <nfs_server_ip>:/export/hmc_update /mnt
:~> ls /mnt   # verify files are visible
:~> umount /mnt

Step 4 — HMC Update / Corrective Service (Same Version)
GUI Method
Login to HMC GUI
Navigate:
HMC Management → HMC Action → Update HMC

Select the update source:
  • SFTP server (hostname/IP, user credentials, ISO file name)
  • FTP server (hostname/IP, user credentials, ISO file name)
  • CD/DVD (location and ISO file name)
  • Virtual media (location)
  • USB
  • IBM website
  • NFS server (hostname/IP, mount location, ISO file name)

Source: NFS, enter server & directory
Select update package → Install → Confirm
Reboot HMC:
:~> hmcshutdown -r -t now

CLI Method
Use same NFS mount, then apply updates via HMC commands (advanced users).

Step 5 — Full Version Upgrade (Major Release)
5.1 Download & Stage Upgrade
:~> getupgfiles -r nfs -h <nfs_server_ip> -d /export/hmc_update
5.2 Enable Alternate Disk Boot
:~> chhmc -c altdiskboot -s enable --mode upgrade
5.3 Reboot HMC
:~> hmcshutdown -r -t now
HMC boots into upgrade mode
Upgrade wizard starts
Saved upgrade data is restored automatically

Step 6 — Post-Upgrade Verification
Login to HMC GUI https://<hmc_server>
HMC version: HMC Management → HMC Settings
Verify:
HMC version (lshmc -V)
Managed systems are visible
LPARs and profiles are intact
Users and network configuration correct
NTP and time/date correct
Optional: Run CLI checks:
:~> lssyscfg -r sys          # List managed systems
:~> lssyscfg -r lpar -m <system>   # List LPARs

Step 7 — Restore Procedure:
Login to HMC GUI https://<hmc_server>
Navigate:
HMC Management → Restore Management Console Data
Source: NFS, enter server & directory
Select backup file:
backupccs_YYYY-MM-DD_HHMMSS.tar.gz
Click Restore → Wait for completion → Reboot if prompted