Pages

HMC

IBM Hardware Management Console (HMC) is a centralized management platform used to control and administer:
  • IBM Power Systems servers
  • Logical Partitions (LPARs)
  • System firmware and hardware resources
  • Virtualized CPU, memory, and I/O
  • Power operations and system status
HMC is used by system administrators to manage enterprise Power environments securely and efficiently.

Core Capabilities
  • Create, modify, and delete LPARs
  • Dynamic LPAR (DLPAR) resource changes
  • Manage processor pools
  • Firmware updates
  • Monitor hardware health
  • View reference codes (LED)
  • Control power (on/off/reset)
  • Manage users and access roles
  • Backup and restore configuration
HMC can be deployed as:
  • A physical appliance
  • A virtual HMC (vHMC) running on supported platforms
It connects to managed Power Systems via:
  • Management Ethernet ports
  • Service processors (FSP)

Dual HMC Setup (High Availability)
Dual HMC provides redundancy for management access.
How It Works
  • Two HMCs connect to the same Power System.
  • Both can manage the same server.
  • One acts as primary (operational).
  • Second acts as backup.
  • No automatic configuration sync (profiles must be manually synchronized).
  • Failover is manual.
Requirements
  • Two installed HMCs
  • Compatible HMC versions
  • Network connectivity to Power server
  • Valid credentials
  • Power System online
Basic Dual HMC Configuration Steps:

Step 1 – Configure Primary HMC
Log in to the HMC GUI or CLI: https://<hmc-ip-or-hostname>
Navigate to:
Systems Management → Add Managed System
Enter:
System IP:
Credentials:
Confirm the system appears under Systems

Step 2 – Configure Secondary HMC
Repeat the same discovery process using identical system IP and credentials.

Step 3 – Verify
Ensure:
Both HMCs show the managed system
LPAR list matches

Step 4 – Test Failover
Disconnect Primary
Access Secondary
Confirm system management works
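The Step 3 and Step 4 checks can be scripted so they are repeatable. A minimal sketch, assuming the LPAR name lists have already been collected from each HMC (in production via `ssh hscroot@<hmc> lssyscfg -r lpar -m <ManagedSystem> -F name`; sample data is used here so the script is self-contained):

```shell
#!/bin/sh
# Compare the LPAR lists seen by two HMCs for the same managed system.
# In production each list would come from:
#   ssh hscroot@<hmc> lssyscfg -r lpar -m <ManagedSystem> -F name
# Sample data is used below so this sketch runs anywhere.

lpar_lists_match() {
    # $1 and $2 are newline-separated LPAR name lists; order is ignored.
    a=$(printf '%s\n' "$1" | sort)
    b=$(printf '%s\n' "$2" | sort)
    [ "$a" = "$b" ]
}

primary_list="lpar01
lpar02
lpar03"

secondary_list="lpar03
lpar01
lpar02"

if lpar_lists_match "$primary_list" "$secondary_list"; then
    echo "OK: both HMCs report the same LPARs"
else
    echo "MISMATCH: resync profiles between the HMCs"
fi
```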

5. Daily Administration Tasks
Check HMC Version
:~> lshmc -V
Check HMC Network
:~> lshmc -n
Reboot HMC
:~> hmcshutdown -t now -r

6. User Management
List Users
:~> lshmcusr
Create User
:~> mkhmcusr -u userID -a ROLE -d "Description" --passwd PASSWORD
Change Password
:~> chhmcusr -u username -t passwd -v NewPassword
Remove User
:~> rmhmcusr -u username

7. LPAR Management
List All Systems
:~> lssyscfg -r sys
List LPARs in a System
:~> lssyscfg -r lpar -m ManagedSystem
Start LPAR
:~> chsysstate -r lpar -m ManagedSystem -o on -n LPAR_Name
Shutdown LPAR
:~> chsysstate -r lpar -m ManagedSystem -o shutdown -n LPAR_Name
Hard Power Off
:~> chsysstate -r lpar -m ManagedSystem -o off -n LPAR_Name

8. Dynamic LPAR (DLPAR) Operations
Add Memory (1GB)
:~> chhwres -r mem -m ManagedSystem -o a -p LPAR_Name -q 1024
Remove Memory
:~> chhwres -r mem -m ManagedSystem -o r -p LPAR_Name -q 1024
Add CPU (Dedicated)
:~> chhwres -r proc -m ManagedSystem -o a -p LPAR_Name -procs 1
Add Processing Units (Shared)
:~> chhwres -r proc -m ManagedSystem -o a -p LPAR_Name -procunits 0.5
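Before a DLPAR memory add it is worth checking that the new total stays within the partition's configured maximum, otherwise chhwres fails. A small guard sketch, with sample numbers standing in for a live lshwres query:

```shell
#!/bin/sh
# Guard a DLPAR memory add: the new total must not exceed the profile's
# maximum. Real values would come from something like:
#   lshwres -r mem -m <ManagedSystem> --level lpar \
#       --filter "lpar_names=<LPAR>" -F curr_mem,curr_max_mem
# The numbers below are sample data.

can_add_mem() {
    curr=$1; max=$2; add=$3          # all in MB
    [ $((curr + add)) -le "$max" ]
}

curr_mem=6144; max_mem=16384; add_mb=1024

if can_add_mem "$curr_mem" "$max_mem" "$add_mb"; then
    echo "OK to run: chhwres -r mem -m <ManagedSystem> -o a -p <LPAR> -q $add_mb"
else
    echo "Refusing: $((curr_mem + add_mb)) MB would exceed max of $max_mem MB"
fi
```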

9. Profile Management
List Profiles
:~> lssyscfg -r prof -m ManagedSystem
Modify Memory in Profile
:~> chsyscfg -r prof -m ManagedSystem -i "name=Profile,lpar_name=LPAR,min_mem=512,desired_mem=8192,max_mem=16384"
Modify CPU Units
:~> chsyscfg -r prof -m ManagedSystem -i "name=Profile,lpar_name=LPAR,min_proc_units=0.2,desired_proc_units=1.0,max_proc_units=2.0"

10. Backup and Restore
Backup HMC Data (NFS)
:~> bkconsdata -r nfs -n ServerName -l /mountpoint
Backup Profiles
:~> bkprofdata -m ManagedSystem -f backupfile
Restore Profiles
:~> rstprofdata -m ManagedSystem -l restore_type -f backupfile
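For scheduled backups it helps to date-stamp the file name so older copies are not overwritten. A sketch (the `hmcprof` prefix is just an example name):

```shell
#!/bin/sh
# Build a dated file name for a scheduled bkprofdata/bkconsdata run so
# old backups are kept. The prefix and managed system are placeholders.

backup_name() {
    # $1 = prefix, $2 = YYYYMMDD date (defaults to today)
    printf '%s_%s\n' "$1" "${2:-$(date +%Y%m%d)}"
}

file=$(backup_name hmcprof 20250101)
echo "bkprofdata -m <ManagedSystem> -f $file"
```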

11. Reference Codes (LED)
Show System Reference Code
:~> lsrefcode -r sys -m ManagedSystem
Show LPAR Reference Code
:~> lsrefcode -r lpar -m ManagedSystem

12. Virtual Console
Open Console
:~> mkvterm -m ManagedSystem -p LPAR_Name
Close Console
:~> rmvterm -m ManagedSystem -p LPAR_Name

13. I/O and WWPN Management
List I/O Slots
:~> lshwres -r io -m ManagedSystem --rsubtype slot
Show Virtual WWPNs
:~> lsnportlogin -m ManagedSystem --filter "lpar_ids=12"
Login WWPN to SAN
:~> chnportlogin -o login -m ManagedSystem --id 12

14. SSH Key Setup
Add public key:
:~> mkauthkeys -a "ssh-rsa AAAA..."
Add for specific user:
:~> mkauthkeys -u username -a "ssh-rsa AAAA..."

15. System Power Policy
Set power-off policy:
:~> chsyscfg -r sys -m ManagedSystem -i "power_off_policy=0"
Values:
0 → Power off after all partitions shutdown
1 → Keep system powered on

16. Best Practices
Keep both HMCs on same firmware level
Schedule regular configuration backups
Test Dual HMC failover quarterly
Restrict hscroot usage
Use role-based access control
Monitor disk usage:
:~> monhmc -r disk -n 0
Keep SSH secured with keys instead of passwords

Conclusion:
IBM HMC is the central control point for managing IBM Power Systems environments.

With proper configuration, backup strategy, and Dual HMC setup, it provides:
  • High availability
  • Secure access
  • Flexible resource management
  • Enterprise-level control
This guide provides a practical system administrator–focused reference for daily operations.

Live Partition Mobility (LPM)

Live Partition Mobility (LPM) allows you to move a running logical partition (LPAR) from one physical IBM Power system (frame) to another without shutting it down.

The migration transfers the LPAR’s active memory, CPU, and virtual I/O connections to a target system while keeping the workload running.

LPM is often used for:
  • Hardware maintenance without downtime
  • Load balancing between systems
  • Energy optimization
Requirements for Live Partition Mobility
Virtualization
  • The LPAR must be fully virtualized — no dedicated I/O hardware.
  • Managed through a Virtual I/O Server (VIOS).
Network Requirements
  • Only virtual network adapters can be used.
  • Dedicated network adapters and IVE/LHEA adapters are not supported.
  • All VLANs used by the source LPAR must also be available on the target frame.
  • Non-essential adapters (like for admin networks) can be temporarily removed before migration.
Storage Requirements
  • All disks must be shared storage accessed through VSCSI or NPIV.
  • Dedicated HBAs and internal disks are not supported.
  • Storage controllers (VSCSI or NPIV) must be consistent between the source and target systems.
  • If the LPAR uses VSCSI, it must connect to VIO servers on both source and target.
For NPIV, the same applies — WWPN mappings must exist on both sides.

Inactive Partition Migration
  • When the LPAR is shut down.
  • Only the LPAR profile and configuration are migrated — not live memory.
  • Faster, because it skips memory synchronization.
Storage Configuration Details
  • All LUNs must be masked to the inactive WWPNs (WWPN+1).
  • If this is already configured, no new storage request is needed.
  • LPM automatically validates LUN access using the standby WWPN.
  • VSCSI-backed LPARs: all LUNs must be masked to both source and destination VIO servers.
Identify:
  • Correct WWPNs for each VIO server
  • Storage array and device ID for each LUN
  • Run cfgmgr on the target VIOs to discover and validate the new disks.
  • Disk configurations must match on source and target.
Performing IBM Live Partition Mobility (LPM) using the HMC GUI

Prerequisites Before Starting LPM
Before using the GUI, make sure the following are complete and validated:
LPAR Requirements
  • The LPAR uses only virtual network and virtual storage adapters (VSCSI or NPIV).
  • No dedicated I/O, IVE, or LHEA adapters.
  • Internal disks are not used.
Storage Requirements
  • All LUNs are shared and zoned/masked to both source and target VIO servers.
  • For NPIV: Ensure LUNs are mapped to inactive WWPNs (WWPN+1).
  • For VSCSI: Ensure LUNs are mapped to both source and target VIO servers.
Network Requirements
  • All VLANs used by the LPAR on the source frame must exist on the target frame.
HMC Connectivity
If migrating between systems managed by different HMCs, SSH trust must exist between them:

# ssh hscroot@indhmctst01
Password:
Last login: Wed Feb 25 15:49:41 2026 from 192.168.10.201
hscroot@indhmctst01:~>

If prompted for a password, set up trust with:
hscroot@indhmctst01:~> mkauthkeys -g --ip <target HMC> -u hscroot

Resource Availability
  • Ensure the target system has enough CPU, memory, and virtual resources.
Steps for IBM LPM using the HMC GUI

1: Open the HMC GUI
Log in to the HMC that manages the source system.
Navigate to:
System resources → Partition
Locate and select the LPAR you want to migrate.

2: Open the LPM Wizard
From the Actions menu (or right-click the LPAR), choose:
Select LPAR → Partition actions → Validate and Migrate → Next
The Migrate Partition wizard will open.

3: Select the Destination System
Select target systems → Click Move to a system remote HMC / Select Target System for Migration → Edit configuration settings (optional): Storage / Network / Labels / MSP Setting / SR-IOV and vNIC / Other → Start Validation

If the target system is managed by another HMC:
  • Enter the target HMC’s IP address or hostname.
  • Enter the user ID (typically hscroot).
  • Click Refresh Destination System to retrieve the system list.
4: Choose Migration Type
  • If the LPAR is running, this will perform a Live Partition Mobility (LPM).
  • If the LPAR is shut down, it will perform an Inactive Partition Migration automatically.
5: Configure Destination Resources
Processor Pool: Choose the correct shared processor pool on the destination (important for licensing and performance).
Virtual I/O Mappings: Validate that all virtual adapters have mappings on the target VIOS.
Confirm memory and processor settings match or are compatible.

6: Validate Migration
System resources → Partition → Select LPAR → Partition actions → Validate and Migrate → Next → Select target systems → Click Move to a system remote HMC / Select Target System for Migration → Edit configuration settings (optional): Storage / Network / Labels / MSP Setting / SR-IOV and vNIC / Other → Start Validation

The system checks:
  • Shared storage access
  • Network adapter configuration
  • VIOS mappings
  • Target resource availability
If validation passes, the Migrate button becomes active.
If validation fails, review the error messages, correct the issues, and revalidate.
7: Start the Migration
Once validation is successful, click Migrate.

System resources → Partition → Select LPAR → Partition actions → Validate and Migrate → Next → Select target systems → Click Move to a system remote HMC / Select Target System for Migration → Edit configuration settings (optional): Storage / Network / Labels / MSP Setting / SR-IOV and vNIC / Other → Start Migration
A progress window will show:
  • Memory synchronization progress (for live migrations)
  • Status of storage and network connections
The process can take from a few seconds to several minutes, depending on LPAR size and workload.
8: Monitor Migration Progress
The migration progress can be monitored in:
  • The Mobility Operations window (from the GUI).
Or from the CLI using:
hscroot@indhmctst01:~> lslparmigr -r lpar -m <source_system>
Once migration completes:
  • The LPAR appears under the target managed system.
  • The source system no longer lists the migrated LPAR.
9: Post-Migration Validation
After the migration completes:
  • Verify the LPAR is running on the target system.
Check:
  • Network connectivity (ping test, application check)
  • Storage access (lsdev, lsmap, or df -h inside the LPAR)
  • Processor and memory allocation (using HMC or lsattr commands)
  • Clean up unused virtual adapters or temporary storage if any.

Performing IBM Live Partition Mobility (LPM) using the HMC Command Line Interface (CLI).

Prerequisites :
Ensure the following are configured properly before using the CLI:

LPAR / System Requirements

  • The LPAR must be fully virtualized (no dedicated adapters, IVE, or LHEA).
  • All network adapters are virtual, and VLANs exist on both source and target systems.
  • All storage is on shared SAN using VSCSI or NPIV (no internal disks).
  • Target system has sufficient CPU, memory, and virtual slots available.

Storage Connectivity
  • VSCSI: LUNs must be masked to both source and target VIOS WWPNs.
  • NPIV: LUNs must be masked to the inactive WWPNs (WWPN+1) on both frames.

1: HMC Connectivity
If migrating between two systems managed by different HMCs, establish SSH trust:
ssh hscroot@indhmctst02
Password:
Last login: Wed Feb 25 15:49:41 2026 from 192.168.10.201
hscroot@indhmctst02:~>

If a password is prompted, set up trust:
:~> mkauthkeys -g --ip <target_HMC> -u hscroot

2: Validate Live Partition Mobility
Before performing the migration, always validate to confirm configuration readiness.
Basic Validation Command
:~> migrlpar -o v -m <source_system> -t <target_system> --ip <target_HMC_IP> -u hscroot -p <LPAR_name> --mpio 1 -i "shared_proc_pool_name=<Pool_Name>"

Example (Basic Validation)
:~> migrlpar -o v -m <source_system> -t <target_system> --ip 192.168.10.101 -u hscroot -p LPAR1 --mpio 1 -i "shared_proc_pool_name=SHARED_POOL"

If the LPAR uses NPIV adapters, define virtual Fibre Channel mappings:
:~> migrlpar -o v -m <source_system> -t <target_system> --ip hmcserver02 -u hscroot -p LPAR1 --mpio 1 -i "virtual_fc_mappings=\"401//402//2\",shared_proc_pool_name=PROD_POOL,source_msp_id=3,dest_msp_id=12"

For LPARs using VSCSI disks, specify SCSI mappings:
:~> migrlpar -o v -m <source_system> -t <target_system> --ip hmcserver02 -u hscroot -p LPAR1 --mpio 1 -i "virtual_scsi_mappings=\"148//1,248//2\",shared_proc_pool_name=TEST_POOL,source_msp_id=12,dest_msp_id=12"
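The quoted -i mapping strings are easy to get wrong by hand. A small helper sketch that assembles the escaped virtual_scsi_mappings value from individual slot pairs (the slot numbers below are the sample values from the command above):

```shell
#!/bin/sh
# Assemble the -i virtual_scsi_mappings value for migrlpar from a list of
# "client_slot//dest_vios_id" pairs. Slot numbers are sample data; the
# exact mapping field layout varies by configuration.

build_scsi_mappings() {
    # joins all arguments with commas and wraps in the escaped quotes
    # that migrlpar's -i string expects
    out=""
    for m in "$@"; do
        [ -n "$out" ] && out="$out,"
        out="$out$m"
    done
    printf 'virtual_scsi_mappings=\\"%s\\"' "$out"
}

build_scsi_mappings '148//1' '248//2'
echo
```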

The validation checks:
  • LUN accessibility
  • Adapter and VLAN consistency
  • Target system resource availability
If validation fails, review the error messages and correct:
  • Storage zoning/masking
  • VLAN mismatches
  • Profile or resource issues

3: Start the Live Migration
Once validation succeeds, repeat the command but change -o v to -o m (for migrate):
:~> migrlpar -o m -m <source_system> -t <target_system> --ip <target_HMC_IP> -u hscroot -p <LPAR_name> --mpio 1 -i "shared_proc_pool_name=<Pool_Name>"

Example:
hscroot@indhmctst01:~> migrlpar -o m -m <source_system> -t <target_system> --ip 192.168.10.101 -u hscroot -p LPAR1 --mpio 1 -i "shared_proc_pool_name=SHARED_POOL"

The migration process starts and memory synchronization begins.
The command will return once migration completes successfully or fails.

4: Monitor Migration Progress
To check the status of a migration in progress:

:~> lslparmigr -r lpar -m <source_system> --filter "lpar_ids=<LPAR_ID>"

Example:
:~> lslparmigr -r lpar -m <source_system> --filter "lpar_ids=12"
Output shows:
  • Migration state (Running / Completed / Failed)
  • Memory transferred
  • Percent complete
You can also monitor from the target HMC.
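The state field can be pulled out of the lslparmigr output for use in a polling loop. A sketch using a sample attribute=value line (the field name and values are assumptions; check the output of your HMC level):

```shell
#!/bin/sh
# Extract the migration_state field from one line of lslparmigr output,
# e.g. for a polling loop. The sample line below mimics the
# attribute=value format; verify field names on your HMC level.

migration_state() {
    printf '%s\n' "$1" | sed -n 's/.*migration_state=\([^,]*\).*/\1/p'
}

sample='name=LPAR1,lpar_id=12,migration_state=Migration In Progress'
state=$(migration_state "$sample")
echo "state: $state"

# A real poll could look like (not run here):
#   while [ "$state" = "Migration In Progress" ]; do sleep 30; ...; done
```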

5: Recovery and Cleanup
If a migration fails or hangs, use the following commands to recover:

Stop an Ongoing Migration
:~> migrlpar -o s -m <source_system> -p <LPAR_name>

Recover a Failed Migration (Non-Force)
:~> migrlpar -o r -m <source_system> -p <LPAR_name>

Force Recovery
:~> migrlpar -o r -m <source_system> -p <LPAR_name> --force

After recovery, fix the issue (e.g., storage or network) and retry validation/migration.

6: Post-Migration Verification
After successful migration:
Verify the LPAR appears on the target system.
Log in to the LPAR and check:
Network connectivity:
# ping <hostname>
# ifconfig -a
# lsdev -Cc adapter

Storage access:
# lspath
# df -h
# lsdev -Cc disk

CPU/Memory allocation
# lsattr -El mem0

Save the LPAR profile to sync it with the running configuration:
# mksyscfg -r lpar -m <target_system> -o save -n <LPAR_name>

IBM VIOS 2.2 To 4.1 Migration

Upgrading IBM VIOS Using the Alt Disk Method (viosupgrade Command)

Upgrading a Virtual I/O Server (VIOS) can be tricky — especially when you want to minimize downtime and keep rollback options open. Fortunately, IBM provides a reliable method using the viosupgrade command, introduced in VIOS 2.2.6.30 and later, which lets you install the new version on an alternate disk while your current system keeps running.

This post walks through the alt_disk upgrade method step-by-step — the same process I’ve used in production environments for seamless VIOS upgrades.

What Is the Alt Disk Update Method?
The alt_disk update uses the viosupgrade command to install a new VIOS version on a separate disk, while keeping the existing rootvg intact.

How it works:
Preserves the existing system: Your current rootvg remains untouched.
Installs on a new disk: The new image is installed on a free disk (e.g. hdisk1).
Quick rollback: If anything goes wrong, just change the bootlist back to the old disk and reboot.

Pre-Upgrade Preparation:

1. Verify the Current Environment

# lspv
hdisk0          00c1d778e1720166    rootvg    active
hdisk1          00c1d77814bbcfb8     rootvg    active

Check for active iFixes and note them down.
# emgr -l

2. Backup Configurations
  • Always take a full backup before the migration.
Backup VIOS configuration:
# su - padmin -c "ioscli viosbr -backup -file /usr/local/backup/`hostname`_viosbr_`date +%Y%m%d`"

Backup mappings and devices
# su - padmin -c "ioscli backupios -file /usr/local/backup/`hostname`.mksys_`date +%Y%m%d` -mksysb -nosvg"

Backup /etc directory
# mkdir /backup/etc_before_migrate
# cd / ; tar cf - etc | (cd /backup/etc_before_migrate; tar xvpf -)

3. Remove iFixes
for i in $(/usr/sbin/emgr -l | grep "^[0-9]" | awk '{print $3}')
do
  echo "----- Removing ifix $i -----"
  /usr/sbin/emgr -r -L "$i"
done

4. Prepare Custom Files to Restore
Create /home/padmin/file_restore.txt listing the files to carry over:
/etc/hosts
/etc/resolv.conf
/home/padmin/config/ntp.conf
/home/padmin/.ssh/authorized_keys

Disk Preparation and Mirroring Steps:

Check Boot Devices and Rootvg Mirroring

# lspv | grep root
# bootlist -m normal -o

Ensure hdisk0 is your boot device.

Unmirror and Prepare hdisk1
We’ll use hdisk1 for migration.
# unmirrorvg rootvg hdisk1
# reducevg rootvg hdisk1
# bosboot -a -d /dev/hdisk0
# bootlist -m normal hdisk0
# chpv -c hdisk1

Now confirm that hdisk1 is free:
# lspv | grep hdisk1
hdisk1   00c1d77814bbcfb8   None

Getting the New VIOS Image

On the Jump Server, mount the VIOS ISO and extract the mksysb image.
# loopmount -i Virtual_IO_Server_Base_Install_4.1.1.0_Flash_122024_LCD8292402.iso -o "-V udfs -o ro" -m /tmp/dvd

# cp /tmp/dvd/mksysb_image /export/software/VIOS_Patches/4110_vios.mksysb

Then copy it to the VIOS server vioserver01:
# scp /export/software/VIOS_Patches/4110_vios.mksysb vioserver01:/home/padmin

Performing the Upgrade:

Switch to the padmin shell and run:

$ viosupgrade -l -u -X ROOTUSRFS -i 4110_vios.mksysb -a hdisk1 -g file_restore.txt

Command Breakdown:
-l --> Local upgrade (runs directly on VIOS)
-u --> Manual reboot required after upgrade
-X ROOTUSRFS --> Exclude user file systems in rootvg
-i --> Specifies the mksysb image
-a --> Target alternate disk
-g --> File list to restore after upgrade
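Before running viosupgrade it is worth double-checking that the target disk really is free, i.e. that lspv reports None for its volume group. A sketch that parses sample lspv output:

```shell
#!/bin/sh
# Pre-flight check before viosupgrade: the alternate disk must not belong
# to any volume group (lspv shows "None"). Sample lspv output is used
# here; in production feed in "$(lspv)".

disk_is_free() {
    # $1 = disk name, $2 = full lspv output
    printf '%s\n' "$2" | awk -v d="$1" '$1 == d { print $3 }' | grep -qx None
}

lspv_out='hdisk0  00c1d778e1720166  rootvg  active
hdisk1  00c1d77814bbcfb8  None'

if disk_is_free hdisk1 "$lspv_out"; then
    echo "hdisk1 is free: safe to use as viosupgrade target"
else
    echo "hdisk1 still belongs to a VG: run unmirrorvg/reducevg first"
fi
```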

During the process, you’ll see:

WARNING!!! VIOS Upgrade operation is in progress. Kindly refrain from making any configuration changes...................................................................................................

Monitor progress:
$ viosupgrade -l -q

Post-Upgrade Steps:

Once the upgrade completes:

Change the boot device to the new disk:

# bootlist -m normal hdisk1
# bootlist -m normal -o

Reboot:
# reboot

After the system reboots, VIOS will restore saved configurations and reboot again automatically.

You can monitor the restore logs:
$ viosupgrade -l -q

Validation and Rollback:

After the final reboot, verify:

The new VIOS version:
$ ioslevel
  • Network, SEA, and virtual device mappings
  • HMC connectivity and profiles
If you face issues, simply boot from the old rootvg:
# bootlist -m normal hdisk0
# reboot

Building an AIX LPAR

This definitive guide covers end-to-end LPAR creation using mksysb cloning or fresh lpp_source installs. From HMC configuration to production-ready post-install tuning.

AIX LPAR BUILD CHECKLIST
  • HMC: LPAR created (CPU/Mem/VLAN/VFC)
  • Storage: WWPN logged, LUNs zoned (80GB rootvg, 140GB appvg)
  • NIM: mksysb/lpp_source → aixtest01 allocated
  • SMS: Network boot + ping test PASSED
  • BOS: Install complete (40-50 min), hdisk0 detected
  • Network: vlan101(backup) + vlan102(app) configured
  • System: hostname, /etc/hosts, cleanup complete
  • Storage: appvg created (128MB PP), /appfs mounted
  • Validated: lspv, df -g, netstat -rn clean
  • HMC: Profile saved, LPAR Running
BUILD SHEET:
• CPU: Uncapped 0.1 | 1 vCPU (min/max:1/1)
• RAM: 2048MB (min:2048MB, max:4096MB)
• Net: vlan101(192.168.10.101), vlan102(192.168.20.101)
• Storage: rootvg(2×40GB), appvg(2×70GB)

mksysb vs lpp_source
Method        Perfect For     Contains
mksysb        Cloning, DR     OS + Apps + Data
lpp_source    Fresh builds    Base OS + LPPs

HMC: Create LPAR
https://hmc.ppc.com → hscroot
Systems → [Managed System] → Create Partition
Partition Details:
Name: aixtest01
CPU: Uncapped | Entitlement: 0.1 | vCPU: 1 (min/max: 1/1)
Memory: 2048MB online (min: 2048MB, max: 4096MB)

Network + Virtual Fibre Channel
VLANs (from build sheet):
vlan101 → 192.168.10.101/24 gw 192.168.10.1 (backup)
vlan102 → 192.168.20.101/24 gw 192.168.20.1 (app)

VFC Mapping:
VIO-A: fcs0,fcs2,fcs4,fcs6
VIO-B: fcs1,fcs3,fcs5,fcs7
CRITICAL: Save Partition Profile

Storage Team Request

Subject: RITM12345 | Zoning Request | aixtest01 WWPNs

SAN A WWPNs:
c050760b0beb0227
c050760b0beb0218

SAN B WWPNs:
c050760b0beb021a
c050760b0beb023b

Requirements:
rootvg: 2×40GB = 80GB total
appvg: 2×70GB = 140GB total

Validate WWPN Login:
# chnportlogin -m MGTSYS01 -o login -p aixtest01
# lsnportlogin -m MGTSYS01 --filter "lpar_names=aixtest01"
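The WWPNs needed for the zoning request can be extracted from the lsnportlogin output rather than copied by hand. A sketch against sample attribute=value lines (exact field names depend on the HMC level):

```shell
#!/bin/sh
# Pull the WWPNs out of lsnportlogin output so they can be pasted into a
# zoning request. The attribute=value lines below are sample data.

list_wwpns() {
    printf '%s\n' "$1" | sed -n 's/.*wwpn=\([^,]*\).*/\1/p'
}

sample='lpar_name=aixtest01,wwpn=c050760b0beb0227
lpar_name=aixtest01,wwpn=c050760b0beb0218'

list_wwpns "$sample"
```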

NIM Master: Allocate Resources
mksysb Clone (Recommended):
# nim -o define -t mksysb -a server=master \
    -a location=/exports/software/aixtest02_7300.mksysb \
    aixtest01_mksysb

# nim -o bos_inst -a source=mksysb -a spot=spot_7300-03-00 \
    -a mksysb=aixtest01_mksysb -a boot_client=no aixtest01

Fresh lpp_source:
# nim -o bos_inst -a source=rte -a lpp_source=lpp_7300-03-00 \
    -a spot=spot_7300-03-00 -a accept_licenses=yes aixtest01
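After the bos_inst allocation, the client's readiness shows up in the Cstate field of `lsnim -l aixtest01`. A sketch that extracts it from sample output (Cstate wording varies by operation):

```shell
#!/bin/sh
# Extract the Cstate field from lsnim -l output to confirm the client is
# ready for network boot. Sample output is used here; in production feed
# in "$(lsnim -l aixtest01)".

nim_cstate() {
    printf '%s\n' "$1" | awk -F'= ' '$1 ~ /Cstate/ { print $2 }'
}

lsnim_l_out='aixtest01:
   class          = machines
   type           = standalone
   Cstate         = BOS installation has been enabled
   Mstate         = not running'

nim_cstate "$lsnim_l_out"
```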

SMS Network Boot
HMC → aixtest01 → Activate(SMS mode) → Open Console

SMS MENU:
2 → 2(Port2-Mgmt) → 1(IPv4) → 1(IP Parameters)
  Client: 192.168.10.101
  NIM:    192.168.22.100
  GW:     192.168.10.1
  Mask:   255.255.255.0
→ 3(Ping Test) ← MUST PASS
→ M → 5(Boot) → 1(Install) → 4(Network) → 2(LAN) → 3(Service) → 1(Yes)

BOS Console:
1 (console) → 2 (show settings) → 0 (install)
*Wait 40-50 minutes*

Post-Install Production Config

6.1 Network (AIX 7.3 optimized)
# entstat -d en0 | grep VLAN  # Verify: 101,102
# smitty tcpip → Network Interfaces → en1(vlan102 app)
  IP: 192.168.20.101  Netmask: 255.255.255.0  GW: 192.168.20.1

6.2 System Basics
# smitty hostname → aixtest01
# echo "192.168.10.101 aixtest01" >> /etc/hosts

6.3 Cleanup Legacy
# cp /etc/{filesystems,inittab}{,.old}
# vi /etc/filesystems  # Remove boot-time FS
# vi /etc/inittab      # Remove legacy app service entries
# mount -a

6.4 AIX 7.3 Network Tuning
# smitty tcpip → Tuning → Network Parameters
rfc1323: 1 | tcp_recvspace: 262144 | tcp_sendspace: 262144

6.5 Storage (Build Sheet)
# mkvg -f -y appvg -s 128 hdisk1 hdisk2      # 128MB PP size
# mklv -t jfs2 -y applv01 appvg 560          # 70GB = 560×128MB
# crfs -v jfs2 -A yes -d applv01 -m /appfs01
# chown appuser:appgroup /appfs01
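The LP count passed to mklv is just the filesystem size divided by the PP size, rounded up. A quick calculator sketch:

```shell
#!/bin/sh
# Work out how many logical partitions (LPs) mklv needs for a filesystem
# of a given size: size_in_MB / PP_size_in_MB, rounded up.

lp_count() {
    size_gb=$1; pp_mb=$2
    echo $(( (size_gb * 1024 + pp_mb - 1) / pp_mb ))
}

echo "70 GB at 128 MB PPs -> $(lp_count 70 128) LPs"
```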

Production Validation
# One-command health check
echo "=== STORAGE ==="; lspv
echo "=== FS ==="; df -g | grep app
echo "=== NET ==="; ifconfig -a | grep -E "(vlan|UP)"
echo "=== ROUTING ==="; netstat -rn | grep default
echo "=== ERRORS ==="; errpt | tail -5

Expected Output:
hdisk0 00c123f56789b0c1  rootvg  active
hdisk1 00c123f56789b0c2  appvg   active  
/appfs01  jfs2   68G   5%   64288/860000
en0: flags=...UP...vlan101
en1: flags=...UP...vlan102
default  192.168.20.1  UG

Troubleshooting Matrix
Problem          Symptoms                Fix
No VLANs         entstat -d en0 empty    HMC VLAN ID mismatch → cfgmgr -v
SMS Ping Fail    Timeout                 Wrong NIM IP/VLAN → verify build sheet
No Disks         lspv empty              WWPN not logged → chnportlogin
NIM 40% Hang     Boot stalls             SPOT mismatch → check the SPOT with lsnim on the NIM master
Slow Network     >100ms latency          AIX 7.3 tuning → smitty tcpip

This guide produces production-ready AIX 7.3 LPARs consistently. Print the checklist and follow the steps in order.

AIX OS Migration Issue

This section covers managing NIM (Network Installation Manager) clients and servers on AIX, with a focus on troubleshooting an OS migration from AIX 7.2 to 7.3.

1. Checking communication between master and client
From master to client:
# rsh nimclient date
  • This tests remote shell (rsh) communication from the NIM master to the client machine (nimclient). If it doesn’t work:
  • Check the .rhosts file on the client to ensure the master is allowed.
  • Check if firewall ports are open for rsh communication.
  • Alternatively, use nimsh which is NIM's preferred secure communication method.
From client to master:
# telnet nimserver 1058
This tests the connection from the client to the NIM master on port 1058 (the default NIM port). If nimsh is used instead of rsh, port 3901 may also need to be open for communication.

2. Managing the root volume group (rootvg) on the NIM client server
If the rootvg is mirrored, you may need to unmirror it before further operations:

# unmirrorvg rootvg hdiskX
This removes the mirror on hdiskX from rootvg.
Then, reduce the volume group by removing the physical volume:
# reducevg -df rootvg hdiskX
-d removes all logical volumes on hdiskX
-f forces the removal
Then, mark the physical volume as clean so it can be reused or removed:
# chpv -c hdiskX
If you are dealing with altinst_rootvg (alternate installation rootvg):
# alt_rootvg_op -X altinst_rootvg
# chpv -c hdiskX
This removes the alternate rootvg.

3. Remove the NIM client machine from the NIM server
# nim -o remove <MachineName>
This deletes the client machine's configuration from the NIM server.

4. Check /etc/hosts file on both NIM server and client
Ensure both machines know each other’s IP address and hostname for proper name resolution.
On NIM server:
# vi /etc/hosts
Add something like:
192.168.10.11 nimclient
On NIM client:
# vi /etc/hosts
Add:
192.168.10.11 nimserver

5. Define the NIM client machine on the NIM server
# smit nim_mkmac
This opens an interactive menu to create a NIM machine definition on the NIM server.

You enter parameters like:
NIM Machine Name (nimclient)
Machine Type (standalone)
Hardware Platform (chrp)
Kernel for Network Boot (64)
Communication Protocol (nimsh)
Network Interface details (ent-NetworkX)

This step registers the client in the NIM server’s database for further management.

6. Check the settings/attributes of the NIM client on the NIM master
# lsnim | grep nimclient
This lists the client’s configuration details stored on the NIM master.

7. Initialize the NIM client server (nimclient in your case)
Backup existing NIM info:
# cp -r /etc/niminfo /etc/niminfo.bkp
Run NIM client initialization:
# smitty niminit
This prompts for:
Machine name (nimclient)
Primary/Secondary network interface (enX)
Hostname of NIM master (nimserver)
Communication protocol (nimsh)

niminit sets up the client to be managed by the NIM master.

8. Verify the NIM config file on the client
# cat /etc/niminfo
This file contains the NIM client configuration (e.g., master hostname, communication protocol, interface used, etc.). Verify it matches the intended setup.

9. Run migration from AIX 7.2 to 7.3 from the NIM server
# nimadm -j <nimvgname> -c <nimclient> -s spot_7300-03-01 -l lpp_7300-03-01 -d hdiskX -Y

Explanation of options:
-j <nimvgname>: Name of the NIM volume group to be used.
-c <nimclient>: Client machine to be upgraded.
-s spot_7300-03-01: The spot resource, which is a boot image with the AIX 7.3 kernel for network booting.
-l lpp_7300-03-01: The lpp_source resource, which contains AIX 7.3 installation files.
-d hdiskX: Target disk on the client where AIX will be installed.
-Y: Runs the migration without interactive prompts (auto confirm).

This command performs the actual migration of the client OS from AIX 7.2 to 7.3 using NIM resources.
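Before kicking off nimadm it is cheap to confirm that the spot and lpp_source resources are actually defined on the master. A sketch against sample `lsnim` output (in production feed in the real listing):

```shell
#!/bin/sh
# Confirm the spot and lpp_source resources exist on the NIM master
# before running nimadm. Sample lsnim output is used here.

resource_defined() {
    # $1 = resource name, $2 = lsnim output
    printf '%s\n' "$2" | awk -v r="$1" '$1 == r' | grep -q .
}

lsnim_out='spot_7300-03-01   resources   spot
lpp_7300-03-01    resources   lpp_source'

for res in spot_7300-03-01 lpp_7300-03-01; do
    if resource_defined "$res" "$lsnim_out"; then
        echo "$res: defined"
    else
        echo "$res: MISSING - define it before running nimadm"
    fi
done
```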