
Live Partition Mobility (LPM)

Live Partition Mobility (LPM) allows you to move a running logical partition (LPAR) from one physical IBM Power system (frame) to another without shutting it down.

The migration transfers the LPAR’s active memory, CPU, and virtual I/O connections to a target system while keeping the workload running.

LPM is often used for:
  • Hardware maintenance without downtime
  • Load balancing between systems
  • Energy optimization
Requirements for Live Partition Mobility
Virtualization
  • The LPAR must be fully virtualized — no dedicated I/O hardware.
  • Managed through a Virtual I/O Server (VIOS).
Network Requirements
  • Only virtual network adapters can be used.
  • Dedicated network adapters and IVE/LHEA adapters are not supported.
  • All VLANs used by the source LPAR must also be available on the target frame.
  • Non-essential adapters (like for admin networks) can be temporarily removed before migration.
Storage Requirements
  • All disks must be shared storage accessed through VSCSI or NPIV.
  • Dedicated HBAs and internal disks are not supported.
  • Storage controllers (VSCSI or NPIV) must be consistent between the source and target systems.
  • If the LPAR uses VSCSI, it must connect to VIO servers on both source and target.
  • For NPIV, the same applies: WWPN mappings must exist on both source and target.
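Each NPIV client adapter carries a pair of WWPNs, and the inactive one is the active value plus one. As a rough sketch (the WWPN below is a made-up example, not a real adapter), the paired value can be derived in the shell:

```shell
# Derive the inactive (WWPN+1) value from an active NPIV WWPN.
# The WWPN used in the call below is a hypothetical example.
inactive_wwpn() {
    # Treat the 16-digit WWPN as a 64-bit hex number and add one.
    printf '%016X\n' $(( 0x$1 + 1 ))
}

inactive_wwpn C0507606D5680010
```

Mask the LUNs to both values so the migrated adapter can log in on the target frame.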

Inactive Partition Migration
  • Performed when the LPAR is shut down.
  • Only the LPAR profile and configuration are migrated — not live memory.
  • Faster, because it skips memory synchronization.
Storage Configuration Details
  • All LUNs must be masked to the inactive WWPNs (WWPN+1).
  • If this is already configured, no new storage request is needed.
  • LPM automatically validates LUN access using the standby WWPN.
  • For VSCSI-backed LPARs, all LUNs must be masked to both source and destination VIO servers.
Identify:
  • The correct WWPNs for each VIO server
  • The storage array and device ID for each LUN
Then:
  • Run cfgmgr on the target VIOS to discover and validate the new disks.
  • Confirm disk configurations match on source and target.
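Before running cfgmgr, one way to verify the masking is to capture the LUN IDs seen on each VIOS and diff the lists; anything visible on the source but not the target needs a storage change first. A minimal sketch with fabricated LUN IDs:

```shell
# LUN IDs as captured on the source and target VIOS (fabricated placeholders).
src_luns="60050768018087345000000000000A10
60050768018087345000000000000A11
60050768018087345000000000000A12"

tgt_luns="60050768018087345000000000000A10
60050768018087345000000000000A12"

# Collect LUNs visible on the source but missing on the target.
missing=""
for lun in $src_luns; do
    echo "$tgt_luns" | grep -q -x "$lun" || missing="$missing$lun "
done
echo "Missing on target: $missing"
```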
Performing IBM Live Partition Mobility (LPM) using the HMC GUI

Prerequisites Before Starting LPM
Before using the GUI, make sure the following are complete and validated:
LPAR Requirements
  • The LPAR uses only virtual network and virtual storage adapters (VSCSI or NPIV).
  • No dedicated I/O, IVE, or LHEA adapters.
  • Internal disks are not used.
Storage Requirements
  • All LUNs are shared and zoned/masked to both source and target VIO servers.
  • For NPIV: Ensure LUNs are mapped to inactive WWPNs (WWPN+1).
  • For VSCSI: Ensure LUNs are mapped to both source and target VIO servers.
Network Requirements
  • All VLANs used by the LPAR on the source frame must exist on the target frame.
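A simple pre-check is to compare the LPAR's VLAN list against what the target frame bridges; the VLAN IDs below are placeholders:

```shell
# VLANs the LPAR uses on the source frame and VLANs bridged on the
# target frame (both lists are hypothetical placeholders).
required_vlans="100 200 300"
target_vlans="100 300 400"

missing_vlans=""
for v in $required_vlans; do
    case " $target_vlans " in
        *" $v "*) ;;                          # VLAN exists on the target
        *) missing_vlans="$missing_vlans$v " ;;
    esac
done
echo "VLANs missing on target: $missing_vlans"
```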
HMC Connectivity
If migrating between systems managed by different HMCs, SSH trust must exist between them:

# ssh hscroot@indhmctst01
Password:
Last login: Wed Feb 25 15:49:41 2026 from 192.168.10.201
hscroot@indhmctst01:~>

If prompted for a password, set up trust with:
hscroot@indhmctst01:~> mkauthkeys -g --ip <target HMC> -u hscroot
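A quick way to confirm trust without typing anything interactively is a BatchMode connection test; indhmctst01 is the example HMC hostname from above, so outside that environment the probe will simply report that trust is absent:

```shell
# Probe key-based SSH trust to the target HMC without any password prompt.
# indhmctst01 is the example hostname used in this article.
if ssh -o BatchMode=yes -o ConnectTimeout=5 hscroot@indhmctst01 true 2>/dev/null; then
    trusted=yes
else
    trusted=no
fi
echo "SSH trust in place: $trusted"
```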

Resource Availability
  • Ensure the target system has enough CPU, memory, and virtual resources.
Steps for IBM LPM using the HMC GUI

1: Open the HMC GUI
Log in to the HMC that manages the source system.
Navigate to:
System resources --> Partition
Locate and select the LPAR you want to migrate.

2: Open the LPM Wizard
From the Actions menu (or right-click the LPAR), choose:
Select LPAR --> Partition actions --> Validate and Migrate --> Next
The Migrate Partition wizard will open.

3: Select the Destination System
Select target systems --> Click Move to a system (remote HMC) / Select Target System for Migration --> Edit configuration settings (optional): Storage / Network / Labels / MSP Setting / SR-IOV and vNIC / Other --> Start Validation

If the target system is managed by another HMC:
  • Enter the target HMC’s IP address or hostname.
  • Enter the user ID (typically hscroot).
  • Click Refresh Destination System to retrieve the system list.
4: Choose Migration Type
  • If the LPAR is running, this will perform a Live Partition Mobility (LPM).
  • If the LPAR is shut down, it will perform an Inactive Partition Migration automatically.
5: Configure Destination Resources
Processor Pool: Choose the correct shared processor pool on the destination (important for licensing and performance).
Virtual I/O Mappings: Validate that all virtual adapters have mappings on the target VIOS.
Confirm memory and processor settings match or are compatible.

6: Validate Migration
System resources --> Partition --> Select LPAR --> Partition actions --> Validate and Migrate --> Next --> Select target systems --> Click Move to a system (remote HMC) / Select Target System for Migration --> Edit configuration settings (optional): Storage / Network / Labels / MSP Setting / SR-IOV and vNIC / Other --> Start Validation

The system checks:
  • Shared storage access
  • Network adapter configuration
  • VIOS mappings
  • Target resource availability
If validation passes, the Migrate button becomes active.
If validation fails, review the error messages, correct the issues, and revalidate.
7: Start the Migration
Once validation is successful, click Migrate.

System resources --> Partition --> Select LPAR --> Partition actions --> Validate and Migrate --> Next --> Select target systems --> Click Move to a system (remote HMC) / Select Target System for Migration --> Edit configuration settings (optional): Storage / Network / Labels / MSP Setting / SR-IOV and vNIC / Other --> Start Migration
A progress window will show:
  • Memory synchronization progress (for live migrations)
  • Status of storage and network connections
The process can take from a few seconds to several minutes depending on LPAR size and workload.
8: Monitor Migration Progress
The migration progress can be monitored in:
  • The Mobility Operations window (from the GUI).
Or from the CLI using:
hscroot@indhmctst01:~> lslparmigr -r lpar -m <source_system>
Once migration completes:
  • The LPAR appears under the target managed system.
  • The source system no longer lists the migrated LPAR.
9: Post-Migration Validation
After the migration completes:
  • Verify the LPAR is running on the target system.
Check:
  • Network connectivity (ping test, application check)
  • Storage access (lsdev, lspath, or df -h inside the LPAR; lsmap on the VIOS)
  • Processor and memory allocation (using HMC or lsattr commands)
  • Clean up unused virtual adapters or temporary storage if any.

Performing IBM Live Partition Mobility (LPM) using the HMC Command Line Interface (CLI)

Prerequisites:
Ensure the following are configured properly before using the CLI:

LPAR / System Requirements

  • The LPAR must be fully virtualized (no dedicated adapters, IVE, or LHEA).
  • All network adapters are virtual, and VLANs exist on both source and target systems.
  • All storage is on shared SAN using VSCSI or NPIV (no internal disks).
  • Target system has sufficient CPU, memory, and virtual slots available.

Storage Connectivity
  • VSCSI: LUNs must be masked to both source and target VIOS WWPNs.
  • NPIV: LUNs must be masked to the inactive WWPNs (WWPN+1) on both frames.

1: HMC Connectivity
If migrating between two systems managed by different HMCs, establish SSH trust:
ssh hscroot@indhmctst02
Password:
Last login: Wed Feb 25 15:49:41 2026 from 192.168.10.201
hscroot@indhmctst02:~>

If a password is prompted, set up trust:
:~> mkauthkeys -g --ip <target_HMC> -u hscroot

2: Validate Live Partition Mobility
Before performing the migration, always validate to confirm configuration readiness.
Basic Validation Command
:~> migrlpar -o v -m <source_system> -t <target_system> --ip <target_HMC_IP> -u hscroot -p <LPAR_name> --mpio 1 -i "shared_proc_pool_name=<Pool_Name>"

Example (Basic Validation)
:~> migrlpar -o v -m <source_system> -t <target_system> --ip 192.168.10.101 -u hscroot -p LPAR1 --mpio 1 -i "shared_proc_pool_name=SHARED_POOL"

If the LPAR uses NPIV adapters, define virtual Fibre Channel mappings:
:~> migrlpar -o v -m <source_system> -t <target_system> --ip hmcserver02 -u hscroot -p LPAR1 --mpio 1 -i "virtual_fc_mappings=\"401//402//2\",shared_proc_pool_name=PROD_POOL,source_msp_id=3,dest_msp_id=12"

For LPARs using VSCSI disks, specify SCSI mappings:
:~> migrlpar -o v -m <source_system> -t <target_system> --ip hmcserver02 -u hscroot -p LPAR1 --mpio 1 -i "virtual_scsi_mappings=\"148//1,248//2\",shared_proc_pool_name=TEST_POOL,source_msp_id=12,dest_msp_id=12"
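Because the -i attribute string gets long, it can help to assemble the command in a variable and review it before pasting it into the HMC session. A sketch with placeholder frame, LPAR, pool, and MSP values:

```shell
# Build a migrlpar validation command from parts (all values are placeholders).
op="v"                 # v = validate; switch to m to run the actual migration
src="SRC_FRAME"
tgt="TGT_FRAME"
lpar="LPAR1"
attrs="shared_proc_pool_name=PROD_POOL,source_msp_id=3,dest_msp_id=12"

cmd="migrlpar -o $op -m $src -t $tgt -p $lpar -i '$attrs'"
echo "$cmd"            # review, then run on the HMC
```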

The validation checks:
  • LUN accessibility
  • Adapter and VLAN consistency
  • Target system resource availability
If validation fails, review the error messages and correct:
  • Storage zoning/masking
  • VLAN mismatches
  • Profile or resource issues

3: Start the Live Migration
Once validation succeeds, repeat the command but change -o v to -o m (for migrate):
:~> migrlpar -o m -m <source_system> -t <target_system> --ip <target_HMC_IP> -u hscroot -p <LPAR_name> --mpio 1 -i "shared_proc_pool_name=<Pool_Name>"

Example:
hscroot@indhmctst01:~> migrlpar -o m -m <source_system> -t <target_system> --ip 192.168.10.101 -u hscroot -p LPAR1 --mpio 1 -i "shared_proc_pool_name=SHARED_POOL"

The migration process starts and memory synchronization begins.
The command returns once the migration completes successfully or fails.

4: Monitor Migration Progress
To check the status of a migration in progress:

:~> lslparmigr -r lpar -m <source_system> --filter "lpar_ids=<LPAR_ID>"

Example:
:~> lslparmigr -r lpar -m <source_system> --filter "lpar_ids=12"
The output shows:
  • Migration state (Running / Completed / Failed)
  • Memory transferred
  • Percent complete
You can also monitor the migration from the target HMC.
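For scripting, the HMC list commands accept -F to emit comma-separated fields. As a sketch, parsing a captured line (the sample line, its values, and the field order are illustrative and depend on the HMC level):

```shell
# A sample line as lslparmigr might return with
#   -F name,migration_state,bytes_transmitted
# (the values and field order here are illustrative).
sample="LPAR1,Migration In Progress,2147483648"

state=$(echo "$sample" | cut -d, -f2)
echo "Migration state: $state"
```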

5: Recovery and Cleanup
If a migration fails or hangs, use the following commands to recover:

Stop an Ongoing Migration
:~> migrlpar -o s -m <source_system> -p <LPAR_name>

Recover a Failed Migration (Non-Force)
:~> migrlpar -o r -m <source_system> -p <LPAR_name>

Force Recovery
:~> migrlpar -o r -m <source_system> -p <LPAR_name> --force

After recovery, fix the issue (e.g., storage or network) and retry validation/migration.

6: Post-Migration Verification
After successful migration:
Verify the LPAR appears on the target system.
Log in to the LPAR and check:
Network connectivity:
# ping <hostname>
# ifconfig -a
# lsdev -Cc adapter

Storage access:
# lspath
# df -h
# lsdev -Cc disk
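Beyond df -h, it is worth confirming that every MPIO path came back Enabled after the move. A sketch that flags unhealthy paths from captured lspath output (the sample output below is fabricated):

```shell
# Fabricated AIX lspath output: "status device parent" per line.
sample="Enabled hdisk0 fscsi0
Enabled hdisk0 fscsi1
Failed hdisk1 fscsi0
Enabled hdisk1 fscsi1"

# Count paths that are not healthy.
failed=$(echo "$sample" | grep -c '^Failed')
echo "Paths not Enabled: $failed"
```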

CPU/memory allocation:
# lsattr -El mem0

Save the LPAR profile to sync it with the running configuration (run this on the HMC, not inside the LPAR):
:~> mksyscfg -r prof -m <target_system> -o save -p <LPAR_name> -n <profile_name>
