IBM MQ 9.2.1 Installation & Configuration Guide on Linux

This document provides a comprehensive technical guide for installing, configuring, and verifying IBM MQ Developer Edition 9.2.1 on Linux systems. It covers all steps required to prepare the OS, install MQ binaries, create queue managers, configure channels and listeners, secure access, perform message testing, troubleshoot issues, and cleanly uninstall MQ if needed.

It is intended for system administrators, middleware engineers, and developers who require a precise, command-driven reference for deploying IBM MQ in a development or test environment. By following this guide, you will have a fully functional queue manager (QMLAB1) ready for application connectivity and testing.

1. System Prerequisites & Validation
1.1 OS & Hardware Requirements
Requirement         Recommended
OS          RHEL / CentOS 7 or 8 (64-bit)
Kernel          3.10+
RAM          8 GB minimum
Disk          5 GB free (more for logs)
Filesystem          XFS / EXT4
Time Sync          chronyd / ntpd

Verify OS and kernel:
# cat /etc/redhat-release
# uname -r
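The checks in the requirements table can be scripted in one pass. A minimal sketch (thresholds taken from the table above; adjust them to your own build standard):

```shell
#!/bin/bash
# Quick prerequisite probe for the MQ host (sketch; thresholds from the table above).
version_ge() {   # true if $1 >= $2, comparing dotted numeric versions
    [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

KERNEL=$(uname -r | cut -d- -f1)
version_ge "$KERNEL" "3.10" && echo "kernel OK ($KERNEL)" || echo "kernel too old ($KERNEL)"

RAM_GB=$(awk '/MemTotal/ {printf "%d", $2/1024/1024}' /proc/meminfo)
[ "$RAM_GB" -ge 8 ] && echo "RAM OK (${RAM_GB} GB)" || echo "RAM below 8 GB (${RAM_GB} GB)"

AVAIL_GB=$(df -BG --output=avail / | tail -1 | tr -dc '0-9')
[ "$AVAIL_GB" -ge 5 ] && echo "disk OK (${AVAIL_GB} GB free)" || echo "need 5 GB free (${AVAIL_GB} GB)"
```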
1.2 System Update
# yum update -y
# reboot
Reboot ensures kernel libraries are aligned before MQ installation.

1.3 Check for Existing MQ Installations
# rpm -qa | grep MQSeries
If any MQ packages exist, remove them first to avoid conflicts.

1.4 Set SELinux to Permissive
In production, write proper SELinux policies instead of relaxing enforcement.
# setenforce 0
# vi /etc/selinux/config
SELINUX=permissive
Verify:
# getenforce

2. MQ User, Groups & System Limits
2.1 Create MQ User
# groupadd mqm
# useradd -g mqm -m -s /bin/bash mqm
# passwd mqm
Confirm:
# id mqm

2.2 Configure File Descriptor Limits
IBM MQ opens many files and sockets under load.
# vi /etc/security/limits.d/30-ibmmq.conf
Add:
mqm     -   nofile  65536
root    -   nofile  65536
Apply to the current shell (new login sessions pick up the limits file automatically):
# ulimit -n 65536
Verify:
# ulimit -n

2.3 Create MQ Data Directories
# mkdir -p /ibmmq/{logs,data}
# chown -R mqm:mqm /ibmmq
# chmod -R 775 /ibmmq

3. Dependency Installation & MQ Download
3.1 Install Required Packages
# yum -y install \
bash bc ca-certificates file findutils gawk \
glibc-common grep passwd procps-ng sed \
shadow-utils tar util-linux which wget

3.2 Download IBM MQ Developer Edition
# wget -T5 -q -O mqadv_dev921_linux_x86-64.tar.gz \
https://public.dhe.ibm.com/ibmdl/export/pub/software/websphere/messaging/mqadv/mqadv_dev921_linux_x86-64.tar.gz
Verify download:
# ls -lh mqadv_dev921_linux_x86-64.tar.gz
Verify checksum if IBM provides SHA values.
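IBM publishes SHA-256 sums for many downloads, and the verification pattern is the same for any file. The sketch below uses a scratch file whose digest is known (the digest of the literal string "abc") purely so the comparison logic is visible end to end; substitute the real tarball and IBM's published value:

```shell
# Verify a download against a published SHA-256 value.
# Demo uses a scratch file with a known digest; swap in the real tarball
# and the sum published by IBM.
FILE=/tmp/sha_demo.bin
printf 'abc' > "$FILE"
EXPECTED="ba7816bf8f01cfea414140de5dae2223b00361a396177a9cb410ff61f20015ad"
ACTUAL=$(sha256sum "$FILE" | awk '{print $1}')
if [ "$ACTUAL" = "$EXPECTED" ]; then
    echo "checksum OK"
else
    echo "checksum MISMATCH - do not install"
fi
```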

3.3 Extract & Accept License
# tar -zxvf mqadv_dev921_linux_x86-64.tar.gz
# cd MQServer
# ./mqlicense.sh -text_only -accept

4. Install IBM MQ RPMs
Install all MQ components:
# rpm -Uvh MQSeries*.rpm
Verify:
# rpm -qa | grep MQSeries
Source MQ environment:
# . /opt/mqm/bin/setmqenv -s
Verify binaries:
# dspmqver

5. Queue Manager Creation
5.1 Create a Test Queue Manager
# crtmqm -ld /ibmmq/logs -md /ibmmq/data qm1
# strmqm qm1
# endmqm -i qm1

5.2 Create Production-Style Queue Manager (QMLAB1)
# crtmqm \
-lc \
-lf 65535 \
-lp 3 \
-ls 2 \
-u SYSTEM.DEAD.LETTER.QUEUE \
QMLAB1
Explanation:
-lc → Circular logging
-lf 65535 → Large log files (better throughput)
-lp/-ls → Balanced log allocation
DLQ defined for message recovery
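These flags translate into a concrete disk footprint: log file size is given in 4 KB pages, so each file is 65535 × 4 KB ≈ 256 MB, and 3 primary + 2 secondary files reserve roughly 1.25 GB. Quick arithmetic check:

```shell
# Log space implied by -lf 65535 -lp 3 -ls 2 (log pages are 4 KB each).
PER_FILE_MB=$(( 65535 * 4 / 1024 ))           # ~255 MB per log file
TOTAL_MB=$(( 65535 * 4 * (3 + 2) / 1024 ))    # ~1279 MB across all five files
echo "per file: ${PER_FILE_MB} MB, total: ${TOTAL_MB} MB"
```

Make sure the filesystem behind /ibmmq (or /var/mqm) has headroom beyond this figure.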
Start QM:
# strmqm QMLAB1
Verify:
# dspmq
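dspmq prints one `QMNAME(...) STATUS(...)` line per queue manager, which makes it easy to check from scripts. A sketch using a captured sample line (on a live host, substitute `LINE=$(dspmq -m QMLAB1)`):

```shell
# Extract the status field from a dspmq-style output line.
LINE='QMNAME(QMLAB1)                                           STATUS(Running)'
STATUS=$(printf '%s' "$LINE" | sed -n 's/.*STATUS(\([^)]*\)).*/\1/p')
if [ "$STATUS" = "Running" ]; then
    echo "QMLAB1 is running"
else
    echo "QMLAB1 not running (status: $STATUS)"
fi
```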

6. Configure MQ Objects
6.1 MQSC Configuration
# . /opt/mqm/bin/setmqenv -n Installation1
# runmqsc QMLAB1 << EOF
ALTER QMGR MAXMSGL(4194304)
ALTER QL(SYSTEM.CLUSTER.TRANSMIT.QUEUE) MAXMSGL(4194304)
DEFINE LISTENER(QMLAB1.LISTENER) TRPTYPE(TCP) PORT(1414) CONTROL(QMGR)
START LISTENER(QMLAB1.LISTENER)
DEFINE CHANNEL(QMLAB1.SVRCONN) CHLTYPE(SVRCONN) +
       MCAUSER('mqm') MAXMSGL(4194304) +
       DESCR('Application SVRCONN')
DEFINE QL(ORDER.INPUT) DEFPSIST(YES)
END
EOF
Note: MQSC continues a command with a trailing "+", not a backslash. Also be aware that MCAUSER('mqm') grants full administrative rights and the default CHLAUTH rules block privileged users such as mqm on all channels; section 8 replaces this with a mapped, least-privilege user.
Verify:
# runmqsc QMLAB1
DISPLAY LS(QMLAB1.LISTENER)
DISPLAY CHANNEL(QMLAB1.SVRCONN)
6.2 Firewall Configuration
# firewall-cmd --add-port=1414/tcp --permanent
# firewall-cmd --reload
Confirm:
# firewall-cmd --list-ports

7. Message Testing
7.1 Put a Message
# printf "%s\n\n" TestMessage1 | \
/opt/mqm/samp/bin/amqsput ORDER.INPUT QMLAB1

7.2 Get the Message
# /opt/mqm/samp/bin/amqsget ORDER.INPUT QMLAB1
Message should display successfully.
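amqsput reads stdin until it sees a blank line, so batches of test messages can be generated and piped in. A sketch (the amqsput invocation in the comment assumes the queue and queue manager defined above):

```shell
# Generate N test messages terminated by the blank line amqsput expects.
# On a live host: gen_msgs 5 | /opt/mqm/samp/bin/amqsput ORDER.INPUT QMLAB1
gen_msgs() {
    local n=$1 i
    for i in $(seq 1 "$n"); do
        printf 'TestMessage%s\n' "$i"
    done
    printf '\n'    # blank line ends amqsput input
}
gen_msgs 3
```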

8. Security Configuration
8.1 Application User Setup
# groupadd mqgroup
# useradd -m -g mqm -G mqgroup -s /bin/bash mqadm
Grant MQ permissions:
# setmqaut -m QMLAB1 -t qmgr -g mqgroup +connect +inq +dsp
# setmqaut -m QMLAB1 -n ORDER.** -t queue -g mqgroup +allmqi +dsp
Verify:
# dspmqaut -m QMLAB1 -g mqgroup

8.2 Channel Authentication
# runmqsc QMLAB1 << EOF
SET CHLAUTH(QMLAB1.SVRCONN) TYPE(ADDRESSMAP) ADDRESS('*') USERSRC(NOACCESS)
SET CHLAUTH(QMLAB1.SVRCONN) TYPE(USERMAP) CLNTUSER('order') +
    USERSRC(MAP) MCAUSER('order') ADDRESS('*')
EOF
Validate rule matching (from within runmqsc):
DISPLAY CHLAUTH(QMLAB1.SVRCONN) MATCH(RUNCHECK) +
    CLNTUSER('order') ADDRESS('1.2.3.4')

9. Optional Admin User Setup
# useradd mqadmin
# usermod -aG mqm mqadmin
Members of the mqm group already hold full MQ administrative authority, so no setmqaut grant is required. To grant full queue manager access to a specific user instead, use -p (principal), not -g:
# setmqaut -m QMLAB1 -t qmgr -p mqadmin +all
Then refresh the security cache from within runmqsc:
REFRESH SECURITY

10. Uninstallation & Cleanup
# endmqm -c QMLAB1
# dltmqm QMLAB1
# yum -y erase MQSeries*
# rm -rf /opt/mqm /var/mqm /ibmmq MQServer mqadv_dev921_linux_x86-64.tar.gz
# rpm -qa | grep MQSeries

11. Final Notes
  • Developer Edition is not licensed for production
  • Always enable CHLAUTH and AUTHREC
  • Monitor /var/mqm/errors
  • Consider multi-instance QM for HA in real environments

Rebuilding an AIX LPAR for Oracle RAC with Pure Storage FlashArray

Rebuilding an AIX LPAR in an Oracle RAC environment isn't just an OS reinstall—it's a coordinated dance across Clusterware, ASM, storage multipathing, volume groups, and HMC. One wrong move corrupts OCR, breaks ASM discovery, or leaves you with an unbootable node.

This guide delivers a safe, repeatable, production-grade process for Oracle RAC on Pure Storage FlashArray (FCP, MPIO). We cover everything from clean Clusterware shutdown to Grid Infrastructure validation on the new LPAR.

Goal: Minimize risk, preserve RAC integrity, and keep a rollback path.

Environment Overview
Component           Details
OS                  IBM AIX
Database            Oracle RAC
Storage             Pure Storage FlashArray (FCP, MPIO)
Disk Management     ASM
Backup              mksysb
High Availability   Mirrored Oracle VG + Alternate rootvg

1: Cluster Shutdown & System Backup
Stop Oracle Clusterware cleanly and back up the system before any storage changes.
# ./crsctl stop crs
# /usr/local/bin/backup/mksysb.sh

Why it matters:
  • Prevents OCR/Voting disk corruption
  • Creates a known-good rollback point
  • Essential before destructive storage ops
mksysb Backup Script
This script validates rootvg, checks space, creates/verifies the image, and enforces retention.

#!/bin/ksh
########################################################################
# Script Name: mksysb.sh
# Purpose: Create filesystem-based mksysb backup for AIX
# Author: adminCtrlX
# Location: /usr/local/bin/backup
########################################################################
BACKUP_DIR="/backup/mksysb"
LOG_DIR="/var/log/mksysb"
RETENTION_DAYS=7
DATE=$(date +%Y%m%d_%H%M)
HOSTNAME=$(hostname)

MKSYSB_FILE="${BACKUP_DIR}/${HOSTNAME}_rootvg_${DATE}.mksysb"
LOG_FILE="${LOG_DIR}/mksysb_${DATE}.log"

mkdir -p ${BACKUP_DIR} ${LOG_DIR}
exec >> "${LOG_FILE}" 2>&1   # append all output to the log (portable; AIX /bin/ksh lacks bash-style process substitution)

lsvg rootvg || exit 1

REQUIRED_MB=8192
AVAILABLE_MB=$(df -m ${BACKUP_DIR} | tail -1 | awk '{print $3}')

if [ ${AVAILABLE_MB} -lt ${REQUIRED_MB} ]; then
  echo "Insufficient space for mksysb"
  exit 1
fi

/usr/bin/mksysb -i -m -X ${MKSYSB_FILE} || exit 1
/usr/bin/lsmksysb -l ${MKSYSB_FILE} || exit 1

find ${BACKUP_DIR} -name "*.mksysb" -mtime +${RETENTION_DAYS} -exec rm -f {} \;

2: Pure Storage FCP Driver Installation
Mount the NIM image and install the MPIO driver for proper pathing.
# mount ibm-nim-001:/images/aix /mnt
# cd /mnt/purestorage
# installp -acXgd . devices.fcp.disk.pure.flasharray.mpio.rte
# cd / && umount /mnt
# shutdown -Fr

Why it matters: Generic AIX paths lead to inconsistent failover and poor performance.

3: Service Startup & Disk Discovery
Post-reboot:
# startsrc -s collectd
# startsrc -s xntpd
# df -gt

Capture disk inventory:
# for i in `lspv | awk '{print $1}'`; do
    SIZE=`bootinfo -s $i`
    SERIAL=`lscfg -vl $i | grep Z1 | awk '{print $2 " " $3}'`
    echo $i $SIZE $SERIAL
done
Share WWPNs (e.g., from FCS0/FCS1) with storage team for LUN allocation.

4: Pure Storage MPIO Configuration
Set failover algorithm on Pure disks:
# for i in `lsdev -Cc disk | grep PURE | awk '{print $1}'`; do
    chdev -l $i -a algorithm=fail_over
    lsattr -El $i -a algorithm
    sleep 2
done

5: ASM Disk Renaming & Ownership
Rename and chown disks per ASM standards (e.g., ASMDATA_Disk, ASMOCR_Disk, ASMVOT_Disk):
# rendev -l <disk_name> -n <asm_disk_name>
# for i in `cat /tmp/asm-disk`; do
    chown grid:asmadmin $i
    lkdev -l $i -a
    ls -lrt $i
done
DB team then discovers and resyncs in ASM.

6: Oracle VG Operations (Source LPAR)
Clone rootvg, mirror Oracle VG, and wait for STALE PPs: 0.
# alt_disk_copy -B -d <disk_name>
# extendvg oraclevg <disk_name>
# mirrorvg -S -m oraclevg <disk_name>
# lsvg oraclevg

7: Create Target LPAR (HMC)
In HMC GUI/CLI:
  • Launch Create Partition wizard.
  • Map Virtual Ethernet (VIOS network, VLAN if needed).
  • Map Virtual FC (NPIV): Pair client/server adapters, auto-generate WWPNs.
  • Activate to Pending state.
  • Extract WWPNs: lshwres -r virtualio --rsubtype fc -m <sys> --filter "lpar_names=<lpar>".
Share WWPNs with storage for zoning.

8: Migration Window Operations
Split/export Oracle/Data VG for safe migration:
# cp -p /etc/filesystems /etc/filesystems_Backup_$(date +'%d_%m_%Y')
# splitvg -y <new_vg> -i <old_vg>
# varyoffvg <vg_name>
# exportvg <vg_name>

9: Boot Target LPAR
SMS boot > Set rootvg disk > Configure networking (mktcpip, /etc/hosts).

10: Storage Cleanup (Target LPAR)
# for i in `lsdev -Cc disk | grep Defined | awk '{print $1}'`; do
    rmdev -Rdl $i
done

11: Oracle Filesystem Recovery
# importvg -y <vg_name> <disk_name>
# chlv -n <new_lv> <old_lv>
# chfs -m <new_fs> <old_fs>
# mount <fs_name>

12: Clusterware Startup & Validation
# cd /opt/oracle/grid/home/bin
# ./crsctl start crs
# ./crsctl check crs
# ./crsctl status crs
# bootlist -m normal <rootvg_disk>

Final Checklist:
# lspv
# df -gt
# ifconfig -a
# netstat -nr
# lssrc -s sshd
# lssrc -s rubrik_backup

Key Takeaways
  • Pure Storage MPIO is mandatory.
  • Standardize ASM disk names.
  • Mirror Oracle VG for binaries.
  • Use alternate rootvg for rollback/DR.
  • Always clean shutdown/startup.
Pro Tip: Document serials in /tmp/asm-disk. Recovery becomes a checklist, not guesswork.

Automating VM Deployment on ESXi with OVF Templates Using CSV Input on RHEL

Deploying multiple VMs manually on an ESXi host can be time-consuming, especially when you need to configure hostnames, network interfaces, and DNS settings. In this tutorial, we’ll walk through an automated process using a CSV-driven script, ovftool, and sshpass on RHEL Linux. This allows parallel deployment of VMs with customized network configurations.

Prerequisites:
Before we start, make sure you have the following:
  • RHEL/CentOS 8+ machine with network access to the ESXi host.
  • ESXi credentials with permissions to deploy VMs.
  • OVA template for the VM.
  • sshpass and ovftool installed on your RHEL system.
Step 1: Installing sshpass on RHEL
sshpass allows automated SSH login using a password (necessary for our script).
Enable EPEL repository (if not already installed)
# sudo dnf install -y epel-release
Install sshpass
# sudo dnf install -y sshpass
Verify installation
# sshpass -V

Step 2: Installing ovftool on RHEL
ovftool is VMware's OVF deployment utility.
Download VMware-ovftool for Linux from VMware’s official site "https://developer.broadcom.com/tools/open-virtualization-format-ovf-tool/latest"
Extract and install:
# dnf install libnsl libxcrypt-compat -y
# unzip VMware-ovftool-5.0.0-24781994-lin.x86_64.zip -d /opt/
# chmod +x /opt/ovftool/ovftool /opt/ovftool/ovftool.bin
# ln -s /opt/ovftool/ovftool /usr/local/bin/ovftool

Verify installation:
# ovftool --version

Step 3: Passwordless Authentication from the RHEL Server to the ESXi 8 Server

Generate the Key on RHEL
Log in to your RHEL server and generate the 4096-bit RSA key.
# ssh-keygen -t rsa -b 4096
Press Enter to save to the default location (/root/.ssh/id_rsa).
Enter a passphrase for extra security.
Display and copy the key:
# cat ~/.ssh/id_rsa.pub
Highlight and copy the entire string starting with ssh-rsa.
To fix the error "Unable to negotiate with 192.168.10.103 port 22: no matching host key type found. Their offer: ssh-rsa,ssh-dss", allow the legacy algorithms in your SSH client configuration:
# vi ~/.ssh/config
Host *
    HostKeyAlgorithms +ssh-rsa
    PubkeyAcceptedAlgorithms +ssh-rsa
To fix the error "ssh_dispatch_run_fatal: Connection to 192.168.10.103 port 22: error in libcrypto" on a system that uses crypto-policies, set the policy to LEGACY to restore compatibility with older hosts:
# update-crypto-policies --set LEGACY

Prepare the ESXi 8 Server
Enable SSH: Log into the ESXi Host Client (Web UI) -> Manage -> Services -> Start TSM-SSH.
Login via SSH using your root password.
Install the Key on ESXi
In ESXi 8, the default location for the root user's authorized keys is /etc/ssh/keys-root/authorized_keys.
Open the file:
# vi /etc/ssh/keys-root/authorized_keys
Paste the key: Press i for Insert mode, paste your key, then press Esc and type :wq! to save and exit.
Set Strict Permissions: ESXi will ignore the key if the permissions are too open.
# chmod 600 /etc/ssh/keys-root/authorized_keys

Test the Connection from RHEL
Go back to your RHEL terminal and attempt to connect.
# ssh root@<ESXi_IP_Address>

Note: If it still asks for a password, check the ESXi /etc/ssh/sshd_config file to ensure PubkeyAuthentication yes and AuthorizedKeysFile points to /etc/ssh/keys-root/authorized_keys.
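For reference, the two sshd_config directives in question look like this (fragment; defaults can differ between ESXi builds, so verify against your host):

```
PubkeyAuthentication yes
AuthorizedKeysFile /etc/ssh/keys-root/authorized_keys
```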

Step 4: Prepare Your CSV Input
Your VM deployment details are stored in a CSV file. Example:

#vm_name,hostname,primary_ip,primary_gateway,primary_dns,secondary_ip,secondary_gateway,secondary_dns
INDRXLTST11,indrxltst11.ppc.com,192.168.10.50,192.168.10.1,192.168.10.100,192.168.20.50,192.168.20.1,192.168.20.100
INDRXLTST12,indrxltst12.ppc.com,192.168.10.51,192.168.10.1,192.168.10.100,192.168.20.51,192.168.20.1,192.168.20.100
INDRXLTST13,indrxltst13.ppc.com,192.168.10.52,192.168.10.1,192.168.10.100,192.168.20.52,192.168.20.1,192.168.20.100

Save this file as deploy-configure-vms.csv.
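Malformed rows are easier to catch before ovftool runs than after. A small pre-flight validator (sketch; field order taken from the CSV header above, and the helper names are illustrative):

```shell
# Validate one CSV row: 8 fields, and the two IP fields look like IPv4.
valid_ipv4() {
    local IFS=. o
    set -- $1
    [ $# -eq 4 ] || return 1
    for o; do
        case $o in (''|*[!0-9]*) return 1 ;; esac
        [ "$o" -le 255 ] || return 1
    done
}
check_row() {
    local row=$1 IFS=,
    set -- $row
    [ $# -eq 8 ] || { echo "expected 8 fields, got $#: $row"; return 1; }
    valid_ipv4 "$3" && valid_ipv4 "$6" || { echo "bad IP in row: $row"; return 1; }
    echo "row OK: $1"
}
check_row 'INDRXLTST11,indrxltst11.ppc.com,192.168.10.50,192.168.10.1,192.168.10.100,192.168.20.50,192.168.20.1,192.168.20.100'
```

Run every row through check_row before handing the file to the deployment script.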

Step 5: Deploying VMs with the Script
We use a Bash script to read the CSV file and deploy each VM using ovftool. The script also configures network settings and hostnames on the VM using sshpass for password-based SSH.

Key features:
  • Deploy VMs from an OVA template.
  • Set hostname, primary, and secondary NIC IPs.
  • Reboot the VM and wait for SSH availability.
  • Run post-deployment checks (hostname, IP, disk, and network).
Script: deploy-configure-vms.sh
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
#!/bin/bash
# Author: adminCtrlX
set -e

ESXI_IP="192.168.10.101"
ESXI_USER="root"
ESXI_PASS="Welcome@123"
OVA_PATH="/ppcdata/servers-template/redhat10-template.ova"
DATASTORE="STG-DC01-MNG"
PRIMARY_NETWORK="Private Network"
SECONDARY_NETWORK="Local Network"
VM_ROOT_PASS="root123"

# Max concurrent deployments
MAX_PARALLEL=3

# Check if sshpass is installed
if ! command -v sshpass &>/dev/null; then
    echo "sshpass is required but not installed. Install it first."
    exit 1
fi

# =========================================
# Read CSV file
# =========================================
read -p "Enter CSV file path: " CSV_FILE
[[ ! -f "$CSV_FILE" ]] && { echo "CSV file not found!"; exit 1; }

# Function to deploy/configure a single VM
deploy_vm() {
    local VM_NAME="$1"
    local HOSTNAME="$2"
    local P_IP="$3"
    local P_GW="$4"
    local P_DNS="$5"
    local S_IP="$6"
    local S_GW="$7"
    local S_DNS="$8"

    local LOGFILE="deploy_${VM_NAME}.log"
    echo "==== Deploying $VM_NAME ====" | tee -a "$LOGFILE"

    # Deploy OVA if VM doesn't exist
    VMID=$(sshpass -p "$ESXI_PASS" ssh -o StrictHostKeyChecking=no ${ESXI_USER}@${ESXI_IP} \
           "vim-cmd vmsvc/getallvms" | awk -v vm="$VM_NAME" '$2 == vm {print $1}')
    if [[ -z "$VMID" ]]; then
        echo "Deploying OVA template..." | tee -a "$LOGFILE"
        ovftool --acceptAllEulas --skipManifestCheck \
            --name="$VM_NAME" \
            --datastore="$DATASTORE" \
            --network="$PRIMARY_NETWORK" \
            "$OVA_PATH" \
            "vi://${ESXI_USER}:${ESXI_PASS}@${ESXI_IP}/" &>> "$LOGFILE"

        sleep 10
        VMID=$(sshpass -p "$ESXI_PASS" ssh -o StrictHostKeyChecking=no ${ESXI_USER}@${ESXI_IP} \
               "vim-cmd vmsvc/getallvms" | awk -v vm="$VM_NAME" '$2 == vm {print $1}')
    fi

    [[ -z "$VMID" ]] && { echo "ERROR: VM deployment failed for $VM_NAME" | tee -a "$LOGFILE"; return; }
    echo "VMID: $VMID" | tee -a "$LOGFILE"

    # Power on VM
    STATE=$(sshpass -p "$ESXI_PASS" ssh -o StrictHostKeyChecking=no ${ESXI_USER}@${ESXI_IP} \
            "vim-cmd vmsvc/power.getstate $VMID" | tail -n1)
    if [[ "$STATE" != "Powered on" ]]; then
        echo "Powering on VM..." | tee -a "$LOGFILE"
        sshpass -p "$ESXI_PASS" ssh -o StrictHostKeyChecking=no ${ESXI_USER}@${ESXI_IP} \
            "vim-cmd vmsvc/power.on $VMID" &>> "$LOGFILE"
    else
        echo "VM already powered on." | tee -a "$LOGFILE"
    fi

    # Wait for initial VMware Tools IP (optional logging)
    VM_IP=""
    for i in {1..30}; do
        VM_IP=$(sshpass -p "$ESXI_PASS" ssh -o StrictHostKeyChecking=no ${ESXI_USER}@${ESXI_IP} \
                 "vim-cmd vmsvc/get.guest $VMID" | awk -F\" '/ipAddress/ {print $2; exit}')
        [[ -n "$VM_IP" && "$VM_IP" != "0.0.0.0" ]] && break
        sleep 5
    done
    if [[ -z "$VM_IP" ]]; then
        echo "ERROR: $VM_NAME never reported an IP via VMware Tools" | tee -a "$LOGFILE"
        return
    fi
    echo "VM initially reports IP: $VM_IP" | tee -a "$LOGFILE"

    # Configure hostname & network
    sshpass -p "$VM_ROOT_PASS" ssh -o StrictHostKeyChecking=no root@$VM_IP <<EOF &>> "$LOGFILE"
set -e

HN_SHORT=\$(echo "$HOSTNAME" | cut -d. -f1)
hostnamectl set-hostname "\$HN_SHORT"

cat > /etc/hosts <<EOL
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

$P_IP   $HOSTNAME \$HN_SHORT
EOL

# Primary NIC (ens192)
CON1=\$(nmcli -t -f NAME,DEVICE con show | grep ens192 | cut -d: -f1)
nmcli con mod "\$CON1" ipv4.addresses $P_IP/24 ipv4.gateway $P_GW ipv4.dns $P_DNS ipv4.method manual

# Secondary NIC (ens224)
CON2=\$(nmcli -t -f NAME,DEVICE con show | grep ens224 | cut -d: -f1)
nmcli con mod "\$CON2" ipv4.addresses $S_IP/24 ipv4.gateway $S_GW ipv4.dns $S_DNS ipv4.method manual

reboot
EOF

    echo "Waiting for VM $VM_NAME to come back..." | tee -a "$LOGFILE"

    # Wait for SSH on primary IP
    for i in {1..60}; do
        if nc -z -w5 "$P_IP" 22 &>/dev/null; then
            echo "SSH available on $P_IP" | tee -a "$LOGFILE"
            break
        fi
        echo "SSH not ready yet... retry $i/60" | tee -a "$LOGFILE"
        sleep 10
    done
    if ! nc -z -w5 "$P_IP" 22 &>/dev/null; then
        echo "ERROR: SSH never became available on $P_IP" | tee -a "$LOGFILE"
        return
    fi

    # Run post-checks
    sshpass -p "$VM_ROOT_PASS" ssh -o StrictHostKeyChecking=no root@$P_IP <<'POSTEOF' &>> "$LOGFILE"
set -e
echo "====== Post-deployment checks ======"
echo "Hostname: $(hostname)"
echo "IP Addresses:"; ip addr show
echo "Disk Usage:"; df -h
echo "Network connectivity test:"; ping -c 2 8.8.8.8 || echo "Ping failed"
echo "Services status (example: sshd):"; systemctl status sshd | head -20
echo "Post-deployment checks completed successfully."
POSTEOF

    echo "==== VM $VM_NAME deployment complete! ====" | tee -a "$LOGFILE"
}

# =========================================
# Read CSV and deploy VMs in parallel
# =========================================
PIDS=()
while IFS=, read -r VM_NAME HOSTNAME P_IP P_GW P_DNS S_IP S_GW S_DNS; do
    [[ -z "$VM_NAME" || "$VM_NAME" =~ ^# ]] && continue

    deploy_vm "$VM_NAME" "$HOSTNAME" "$P_IP" "$P_GW" "$P_DNS" "$S_IP" "$S_GW" "$S_DNS" &

    PIDS+=($!)

    # Limit parallel jobs
    while [[ $(jobs -r -p | wc -l) -ge $MAX_PARALLEL ]]; do
        sleep 5
    done

done < "$CSV_FILE"

# Wait for all background jobs to finish
wait

echo "All VMs processed successfully ..............................................."

-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
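The `jobs -r -p | wc -l` loop is what caps concurrency at MAX_PARALLEL. Isolated, the throttle pattern looks like this (sketch, with short sleep jobs standing in for deploy_vm):

```shell
# Run 6 dummy jobs, never more than 3 at once - same throttle as the script.
MAX_PARALLEL=3
RESULTS=$(mktemp)
for i in 1 2 3 4 5 6; do
    ( sleep 0.2; echo "job $i done" >> "$RESULTS" ) &
    while [ "$(jobs -r -p | wc -l)" -ge "$MAX_PARALLEL" ]; do
        sleep 0.1
    done
done
wait   # block until every background job has finished
echo "$(wc -l < "$RESULTS") jobs completed"
```

Because `wait` with no arguments waits on all children, the final summary line only prints after every job has written its result.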

Step 6: Running the Deployment
[root@inddcpppz01 scripts]# chmod +x deploy-configure-vms.sh
[root@inddcpppz01 scripts]# ./deploy-configure-vms.sh
Enter CSV file path: /root/scripts/deploy-configure-vms.csv
==== Deploying INDRXLTST11 ====
==== Deploying INDRXLTST12 ====
==== Deploying INDRXLTST13 ====
Deploying OVA template...
Deploying OVA template...
Deploying OVA template...
VMID: 4
VMID: 5
VMID: 6
Powering on VM...
Powering on VM...
Powering on VM...
VM initially reports IP: 192.168.10.38
VM initially reports IP: 192.168.10.48
VM initially reports IP: 192.168.10.47
Waiting for VM INDRXLTST13 to come back...
Waiting for VM INDRXLTST12 to come back...
Waiting for VM INDRXLTST11 to come back...
SSH not ready yet... retry 1/60
SSH not ready yet... retry 1/60
SSH not ready yet... retry 1/60
SSH not ready yet... retry 2/60
SSH not ready yet... retry 2/60
SSH not ready yet... retry 2/60
SSH available on 192.168.10.51
SSH available on 192.168.10.50
SSH available on 192.168.10.52
==== VM INDRXLTST12 deployment complete! ====
==== VM INDRXLTST13 deployment complete! ====
==== VM INDRXLTST11 deployment complete! ====
All VMs processed successfully ...............................................
[root@inddcpppz01 scripts]#

The script will:
  • Check if VM exists; if not, deploy using OVA.
  • Power on the VM.
  • Configure hostnames and network interfaces.
  • Wait for SSH to become available.
  • Run post-deployment checks.
Step 7: Example Deployment Log
After deployment, logs like deploy_INDRXLTST11.log are generated:

[root@inddcpppz01 scripts]# cat deploy_INDRXLTST11.log
==== Deploying INDRXLTST11 ====
Deploying OVA template...
Opening OVA source: /ppcdata/servers-template/redhat10-template.ova
Opening VI target: vi://root@192.168.10.101:443/
Deploying to VI: vi://root@192.168.10.101:443/
Transfer Completed
The manifest does not validate
Warning:
 - The manifest is present but user flag causing to skip it
Completed successfully
VMID: 53
Powering on VM...
Powering on VM:
VM initially reports IP: 192.168.10.39
Pseudo-terminal will not be allocated because stdin is not a terminal.
*********************************************************
*    !!!! WELCOME TO PPC.COM TEST LAB SERVER'S !!!!     *
* This server is meant for testing Linux commands and   *
* Tools. If you are not associated with ppc.com and     *
* Not authorized. Please dis-connect immediately.       *
*********************************************************
Waiting for VM INDRXLTST11 to come back...
SSH not ready yet... retry 1/60
SSH not ready yet... retry 2/60
SSH available on 192.168.10.50
Pseudo-terminal will not be allocated because stdin is not a terminal.
*********************************************************
*    !!!! WELCOME TO PPC.COM TEST LAB SERVER'S !!!!     *
* This server is meant for testing Linux commands and   *
* Tools. If you are not associated with ppc.com and     *
* Not authorized. Please dis-connect immediately.       *
*********************************************************
====== Post-deployment checks ======
Hostname: indrxltst11
IP Addresses:
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host noprefixroute
       valid_lft forever preferred_lft forever
2: ens192: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:29:ca:5b:29 brd ff:ff:ff:ff:ff:ff
    altname enp11s0
    altname enx000c29ca5b29
    inet 192.168.10.50/24 brd 192.168.10.255 scope global noprefixroute ens192
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:feca:5b29/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
3: ens224: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:29:ca:5b:33 brd ff:ff:ff:ff:ff:ff
    altname enp19s0
    altname enx000c29ca5b33
    inet 192.168.20.50/24 brd 192.168.20.255 scope global noprefixroute ens224
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:feca:5b33/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
Disk Usage:
Filesystem               Size  Used Avail Use% Mounted on
/dev/mapper/rootvg-root   17G  2.7G   15G  16% /
devtmpfs                 4.0M     0  4.0M   0% /dev
tmpfs                    478M     0  478M   0% /dev/shm
tmpfs                    192M  3.8M  188M   2% /run
tmpfs                    1.0M     0  1.0M   0% /run/credentials/systemd-journald.service
/dev/sda2                960M  279M  682M  29% /boot
tmpfs                    1.0M     0  1.0M   0% /run/credentials/getty@tty1.service
tmpfs                     96M  4.0K   96M   1% /run/user/0
Network connectivity test:
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_seq=1 ttl=128 time=51.7 ms
64 bytes from 8.8.8.8: icmp_seq=2 ttl=128 time=51.8 ms

--- 8.8.8.8 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 51.660/51.710/51.761/0.050 ms
Services status (example: sshd):
● sshd.service - OpenSSH server daemon
     Loaded: loaded (/usr/lib/systemd/system/sshd.service; enabled; preset: enabled)
     Active: active (running) since Fri 2026-02-06 21:13:27 IST; 5s ago
 Invocation: 1c42fed46327449098fb26c3255a7190
       Docs: man:sshd(8)
             man:sshd_config(5)
   Main PID: 1003 (sshd)
      Tasks: 1 (limit: 5893)
     Memory: 7.6M (peak: 24.1M)
        CPU: 127ms
     CGroup: /system.slice/sshd.service
             └─1003 "sshd: /usr/sbin/sshd -D [listener] 0 of 10-100 startups"

Feb 06 21:13:27 indrxltst11 systemd[1]: Starting sshd.service - OpenSSH server daemon...
Feb 06 21:13:27 indrxltst11 (sshd)[1003]: sshd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS
Feb 06 21:13:27 indrxltst11 sshd[1003]: Server listening on 0.0.0.0 port 22.
Feb 06 21:13:27 indrxltst11 systemd[1]: Started sshd.service - OpenSSH server daemon.
Feb 06 21:13:27 indrxltst11 sshd[1003]: Server listening on :: port 22.
Feb 06 21:13:30 indrxltst11 sshd-session[1330]: Accepted password for root from 192.168.10.104 port 40404 ssh2
Feb 06 21:13:31 indrxltst11 sshd-session[1330]: pam_unix(sshd:session): session opened for user root(uid=0) by root(uid=0)
Post-deployment checks completed successfully.
==== VM INDRXLTST11 deployment complete! ====
[root@inddcpppz01 scripts]# 

Step 8: Summary
By combining ovftool, sshpass, and a CSV-driven Bash script, you can:
  • Rapidly deploy multiple VMs in parallel.
  • Automatically configure hostnames and network settings.
  • Perform initial health checks post-deployment.
  • Maintain logs for auditing and troubleshooting.
This approach is perfect for labs, test environments, and repetitive deployment scenarios.

Bulk Hostname and IP Updates in RHEL 7, 8, 9 & 10

Managing hostnames and IP addresses across multiple Linux servers is a routine task for system administrators—but doing it manually doesn’t scale. With RHEL 7, 8, 9 & 10, Red Hat standardizes network management through NetworkManager and nmcli, making automation the preferred and safest approach.

In this guide, we’ll build a production-ready Bash script that automates hostname and static IP changes across multiple RHEL 7, 8, 9 & 10 servers using a CSV file and SSH.

Why Automate Hostname and IP Changes?

Common use cases include:

  • VM cloning and re-IP addressing
  • Environment refreshes (DEV / STAGE / PROD)
  • Data center migrations
  • DR and failover testing
Automation helps you:
  • Eliminate human error
  • Apply consistent configuration
  • Save time on repetitive tasks
  • Follow Red Hat best practices
Prerequisites:

Before running the script, ensure:

  • RHEL 7, 8, 9 & 10 servers
  • SSH key-based authentication
  • Root access (or passwordless sudo)
  • NetworkManager is running
  • Network interfaces:
          ens192 – primary
          ens224 – secondary

Adjust interface names or subnet sizes as required for your environment.
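If your interface names differ from ens192/ens224, they can be discovered rather than hard-coded. A sketch (picking interfaces by sorted name order is an assumption; confirm the mapping with `nmcli device status` before pushing changes):

```shell
# List non-loopback interfaces from sysfs and pick the first two by name.
mapfile -t IFACES < <(ls /sys/class/net | grep -v '^lo$' | sort)
PRIMARY=${IFACES[0]:-ens192}      # fall back to the documented defaults
SECONDARY=${IFACES[1]:-ens224}
echo "primary=$PRIMARY secondary=$SECONDARY"
```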

CSV Input File Format
The script reads server configuration from a CSV file and supports comments.

Example: host_ip_change.txt
#remote,hostname,primary_ip,primary_gateway,primary_dns,secondary_ip,secondary_gateway,secondary_dns
192.168.10.244,indrxltst11,192.168.10.224,192.168.10.1,192.168.10.100,192.168.20.224,192.168.20.1,192.168.20.100

Field Description

Field              Description
remote             Current IP or hostname to SSH into
hostname           New hostname
primary_ip         Static IP for ens192
primary_gateway    Gateway for ens192
primary_dns        DNS server for ens192
secondary_ip       Static IP for ens224
secondary_gateway  Gateway for ens224
secondary_dns      DNS server for ens224


Final Bash Script (RHEL 7, 8, 9 & 10 Compatible)

Script Features
  • CSV-driven automation
  • Skips empty and commented lines
  • Uses nmcli (NetworkManager-compliant)
  • Updates hostname and two interfaces
  • Reloads connections and reboots
Script: rhel-hostname-ip-change.sh

#!/bin/bash
# Automate hostname and IP changes on RHEL 7, 8, 9 & 10 servers using CSV input
# Interfaces: ens192 (primary), ens224 (secondary)
echo "Enter the path to the CSV file:"
read csv_file

if [[ ! -f "$csv_file" ]]; then
echo "CSV file not found!"
exit 1
fi

while IFS=, read -r remote hostname primary_ip primary_gateway primary_dns secondary_ip secondary_gateway secondary_dns; do

# Skip empty lines and comments
[[ -z "$remote" || "$remote" =~ ^# ]] && continue

echo "---------------------------------------------------------"
echo "Processing server: $remote"
echo "---------------------------------------------------------"

ssh -T root@"$remote" << EOF

echo "Updating hostname and network configuration..."

hostnamectl set-hostname "$hostname"

nmcli con mod ens192 \
ipv4.method manual \
ipv4.addresses $primary_ip/24 \
ipv4.gateway $primary_gateway \
ipv4.dns $primary_dns

nmcli con mod ens224 \
ipv4.method manual \
ipv4.addresses $secondary_ip/24 \
ipv4.gateway $secondary_gateway \
ipv4.dns $secondary_dns

nmcli con reload

echo "---------------- Hostname ----------------"
hostnamectl

echo "---------------- ens192 ------------------"
nmcli con show ens192

echo "---------------- ens224 ------------------"
nmcli con show ens224

echo "Rebooting server..."
reboot
EOF

echo "Changes applied to $remote. Waiting for reboot..."
sleep 10
done < "$csv_file"
echo "All servers processed successfully."
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
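The fixed `sleep 10` after triggering the reboot is optimistic. A more robust pattern polls the SSH port until it answers, using bash's built-in /dev/tcp so no extra tools are needed (sketch; tune tries and interval to your typical reboot times):

```shell
# Poll host:port until a TCP connect succeeds, or give up after N tries.
wait_for_ssh() {
    local host=$1 port=${2:-22} tries=${3:-60} i
    for ((i = 1; i <= tries; i++)); do
        if timeout 2 bash -c "exec 3<>/dev/tcp/$host/$port" 2>/dev/null; then
            echo "SSH reachable on $host:$port"
            return 0
        fi
        sleep 2
    done
    echo "gave up waiting for $host:$port"
    return 1
}
```

In the loop above, `wait_for_ssh "$remote"` after the reboot line gives a positive confirmation instead of a blind pause.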
Sample Script Output

[root@inddcpppz01 scripts]# ./rhel-hostname-ip-change.sh
Enter the path to the CSV file:
/root/scripts/host_ip_change.txt
---------------------------------------------------------
Processing server: 192.168.10.244
---------------------------------------------------------
*********************************************************
* !!!! WELCOME TO PPC.COM TEST LAB SERVER'S !!!! *
* This server is meant for testing Linux commands and *
* Tools. If you are not associated with ppc.com and *
* Not authorized. Please dis-connect immediately. *
*********************************************************
Updating hostname and network configuration...
---------------- Hostname ----------------
Static hostname: indrxltst11
Icon name: computer-vm
Chassis: vm 🖴
Machine ID: 3b223da7ca514758839fb91ce7f319a9
Boot ID: d20afc3459524d93807b0ba9d9df55eb
Product UUID: 564d79af-6b4e-b4f6-3c9e-9075e3ccb935
AF_VSOCK CID: 3821844789
Virtualization: vmware
Operating System: Red Hat Enterprise Linux 10.0 (Coughlan)
CPE OS Name: cpe:/o:redhat:enterprise_linux:10::baseos
Kernel: Linux 6.12.0-55.9.1.el10_0.x86_64
Architecture: x86-64
Hardware Vendor: VMware, Inc.
Hardware Model: VMware Virtual Platform
Hardware Serial: VMware-56 4d 79 af 6b 4e b4 f6-3c 9e 90 75 e3 cc b9 35
Firmware Version: 6.00
Firmware Date: Mon 2015-09-21
Firmware Age: 10y 4month 1w 5d
---------------- ens192 ------------------
connection.id: ens192
connection.uuid: bf5c3e39-1f0e-3378-8dd7-fbb41fa659af
connection.stable-id: --
connection.type: 802-3-ethernet
connection.interface-name: ens192
connection.autoconnect: yes
connection.autoconnect-priority: -999
connection.autoconnect-retries: -1 (default)
connection.multi-connect: 0 (default)
connection.auth-retries: -1
connection.timestamp: 1769957773
connection.permissions: --
connection.zone: --
connection.controller: --
connection.master: --
connection.slave-type: --
connection.port-type: --
connection.autoconnect-slaves: -1 (default)
connection.autoconnect-ports: -1 (default)
connection.down-on-poweroff: -1 (default)
connection.secondaries: --
connection.gateway-ping-timeout: 0
connection.ip-ping-timeout: 0
connection.ip-ping-addresses: --
connection.ip-ping-addresses-require-all:-1 (default)
connection.metered: unknown
connection.lldp: default
connection.mdns: -1 (default)
connection.llmnr: -1 (default)
connection.dns-over-tls: -1 (default)
connection.mptcp-flags: 0x0 (default)
connection.wait-device-timeout: -1
connection.wait-activation-delay: -1
802-3-ethernet.port: --
802-3-ethernet.speed: 0
802-3-ethernet.duplex: --
802-3-ethernet.auto-negotiate: no
802-3-ethernet.mac-address: --
802-3-ethernet.cloned-mac-address: --
802-3-ethernet.generate-mac-address-mask:--
802-3-ethernet.mac-address-denylist: --
802-3-ethernet.mtu: auto
802-3-ethernet.s390-subchannels: --
802-3-ethernet.s390-nettype: --
802-3-ethernet.s390-options: --
802-3-ethernet.wake-on-lan: default
802-3-ethernet.wake-on-lan-password: --
802-3-ethernet.accept-all-mac-addresses:-1 (default)
ipv4.method: manual
ipv4.dns: 192.168.10.100
ipv4.dns-search: --
ipv4.dns-options: --
ipv4.dns-priority: 0
ipv4.addresses: 192.168.10.224/24
ipv4.gateway: 192.168.10.1
ipv4.routes: --
ipv4.route-metric: -1
ipv4.route-table: 0 (unspec)
ipv4.routing-rules: --
ipv4.replace-local-rule: -1 (default)
ipv4.dhcp-send-release: -1 (default)
ipv4.routed-dns: -1 (default)
ipv4.ignore-auto-routes: no
ipv4.ignore-auto-dns: no
ipv4.dhcp-client-id: --
ipv4.dhcp-iaid: --
ipv4.dhcp-dscp: --
ipv4.dhcp-timeout: 0 (default)
ipv4.dhcp-send-hostname-deprecated: yes
ipv4.dhcp-send-hostname: -1 (default)
ipv4.dhcp-hostname: --
ipv4.dhcp-fqdn: --
ipv4.dhcp-hostname-flags: 0x0 (none)
ipv4.never-default: no
ipv4.may-fail: yes
ipv4.required-timeout: -1 (default)
ipv4.dad-timeout: -1 (default)
ipv4.dhcp-vendor-class-identifier: --
ipv4.dhcp-ipv6-only-preferred: -1 (default)
ipv4.link-local: 0 (default)
ipv4.dhcp-reject-servers: --
ipv4.auto-route-ext-gw: -1 (default)
ipv4.shared-dhcp-range: --
ipv4.shared-dhcp-lease-time: 0 (default)
ipv6.method: auto
ipv6.dns: --
ipv6.dns-search: --
ipv6.dns-options: --
ipv6.dns-priority: 0
ipv6.addresses: --
ipv6.gateway: --
ipv6.routes: --
ipv6.route-metric: -1
ipv6.route-table: 0 (unspec)
ipv6.routing-rules: --
ipv6.replace-local-rule: -1 (default)
ipv6.dhcp-send-release: -1 (default)
ipv6.routed-dns: -1 (default)
ipv6.ignore-auto-routes: no
ipv6.ignore-auto-dns: no
ipv6.never-default: no
ipv6.may-fail: yes
ipv6.required-timeout: -1 (default)
ipv6.ip6-privacy: -1 (default)
ipv6.temp-valid-lifetime: 0 (default)
ipv6.temp-preferred-lifetime: 0 (default)
ipv6.addr-gen-mode: eui64
ipv6.ra-timeout: 0 (default)
ipv6.mtu: auto
ipv6.dhcp-pd-hint: --
ipv6.dhcp-duid: --
ipv6.dhcp-iaid: --
ipv6.dhcp-timeout: 0 (default)
ipv6.dhcp-send-hostname-deprecated: yes
ipv6.dhcp-send-hostname: -1 (default)
ipv6.dhcp-hostname: --
ipv6.dhcp-hostname-flags: 0x0 (none)
ipv6.auto-route-ext-gw: -1 (default)
ipv6.token: --
proxy.method: none
proxy.browser-only: no
proxy.pac-url: --
proxy.pac-script: --
GENERAL.NAME: ens192
GENERAL.UUID: bf5c3e39-1f0e-3378-8dd7-fbb41fa659af
GENERAL.DEVICES: ens192
GENERAL.IP-IFACE: ens192
GENERAL.STATE: activated
GENERAL.DEFAULT: yes
GENERAL.DEFAULT6: no
GENERAL.SPEC-OBJECT: --
GENERAL.VPN: no
GENERAL.DBUS-PATH: /org/freedesktop/NetworkManager/ActiveConnection/2
GENERAL.CON-PATH: /org/freedesktop/NetworkManager/Settings/1
GENERAL.ZONE: --
GENERAL.MASTER-PATH: --
IP4.ADDRESS[1]: 192.168.10.244/24
IP4.GATEWAY: 192.168.10.1
IP4.ROUTE[1]: dst = 0.0.0.0/0, nh = 192.168.10.1, mt = 100
IP4.ROUTE[2]: dst = 192.168.10.0/24, nh = 0.0.0.0, mt = 100
IP4.DNS[1]: 192.168.10.100
IP6.ADDRESS[1]: fe80::20c:29ff:fecc:b935/64
IP6.GATEWAY: --
IP6.ROUTE[1]: dst = fe80::/64, nh = ::, mt = 1024
---------------- ens224 ------------------
connection.id: ens224
connection.uuid: 4242c0f6-b9ba-39f4-b11e-fb965a79d709
connection.stable-id: --
connection.type: 802-3-ethernet
connection.interface-name: ens224
connection.autoconnect: yes
connection.autoconnect-priority: -999
connection.autoconnect-retries: -1 (default)
connection.multi-connect: 0 (default)
connection.auth-retries: -1
connection.timestamp: 1769957774
connection.permissions: --
connection.zone: --
connection.controller: --
connection.master: --
connection.slave-type: --
connection.port-type: --
connection.autoconnect-slaves: -1 (default)
connection.autoconnect-ports: -1 (default)
connection.down-on-poweroff: -1 (default)
connection.secondaries: --
connection.gateway-ping-timeout: 0
connection.ip-ping-timeout: 0
connection.ip-ping-addresses: --
connection.ip-ping-addresses-require-all:-1 (default)
connection.metered: unknown
connection.lldp: default
connection.mdns: -1 (default)
connection.llmnr: -1 (default)
connection.dns-over-tls: -1 (default)
connection.mptcp-flags: 0x0 (default)
connection.wait-device-timeout: -1
connection.wait-activation-delay: -1
802-3-ethernet.port: --
802-3-ethernet.speed: 0
802-3-ethernet.duplex: --
802-3-ethernet.auto-negotiate: no
802-3-ethernet.mac-address: --
802-3-ethernet.cloned-mac-address: --
802-3-ethernet.generate-mac-address-mask:--
802-3-ethernet.mac-address-denylist: --
802-3-ethernet.mtu: auto
802-3-ethernet.s390-subchannels: --
802-3-ethernet.s390-nettype: --
802-3-ethernet.s390-options: --
802-3-ethernet.wake-on-lan: default
802-3-ethernet.wake-on-lan-password: --
802-3-ethernet.accept-all-mac-addresses:-1 (default)
ipv4.method: manual
ipv4.dns: 192.168.20.100
ipv4.dns-search: --
ipv4.dns-options: --
ipv4.dns-priority: 0
ipv4.addresses: 192.168.20.224/24
ipv4.gateway: 192.168.20.1
ipv4.routes: --
ipv4.route-metric: -1
ipv4.route-table: 0 (unspec)
ipv4.routing-rules: --
ipv4.replace-local-rule: -1 (default)
ipv4.dhcp-send-release: -1 (default)
ipv4.routed-dns: -1 (default)
ipv4.ignore-auto-routes: no
ipv4.ignore-auto-dns: no
ipv4.dhcp-client-id: --
ipv4.dhcp-iaid: --
ipv4.dhcp-dscp: --
ipv4.dhcp-timeout: 0 (default)
ipv4.dhcp-send-hostname-deprecated: yes
ipv4.dhcp-send-hostname: -1 (default)
ipv4.dhcp-hostname: --
ipv4.dhcp-fqdn: --
ipv4.dhcp-hostname-flags: 0x0 (none)
ipv4.never-default: no
ipv4.may-fail: yes
ipv4.required-timeout: -1 (default)
ipv4.dad-timeout: -1 (default)
ipv4.dhcp-vendor-class-identifier: --
ipv4.dhcp-ipv6-only-preferred: -1 (default)
ipv4.link-local: 0 (default)
ipv4.dhcp-reject-servers: --
ipv4.auto-route-ext-gw: -1 (default)
ipv4.shared-dhcp-range: --
ipv4.shared-dhcp-lease-time: 0 (default)
ipv6.method: auto
ipv6.dns: --
ipv6.dns-search: --
ipv6.dns-options: --
ipv6.dns-priority: 0
ipv6.addresses: --
ipv6.gateway: --
ipv6.routes: --
ipv6.route-metric: -1
ipv6.route-table: 0 (unspec)
ipv6.routing-rules: --
ipv6.replace-local-rule: -1 (default)
ipv6.dhcp-send-release: -1 (default)
ipv6.routed-dns: -1 (default)
ipv6.ignore-auto-routes: no
ipv6.ignore-auto-dns: no
ipv6.never-default: no
ipv6.may-fail: yes
ipv6.required-timeout: -1 (default)
ipv6.ip6-privacy: -1 (default)
ipv6.temp-valid-lifetime: 0 (default)
ipv6.temp-preferred-lifetime: 0 (default)
ipv6.addr-gen-mode: eui64
ipv6.ra-timeout: 0 (default)
ipv6.mtu: auto
ipv6.dhcp-pd-hint: --
ipv6.dhcp-duid: --
ipv6.dhcp-iaid: --
ipv6.dhcp-timeout: 0 (default)
ipv6.dhcp-send-hostname-deprecated: yes
ipv6.dhcp-send-hostname: -1 (default)
ipv6.dhcp-hostname: --
ipv6.dhcp-hostname-flags: 0x0 (none)
ipv6.auto-route-ext-gw: -1 (default)
ipv6.token: --
proxy.method: none
proxy.browser-only: no
proxy.pac-url: --
proxy.pac-script: --
GENERAL.NAME: ens224
GENERAL.UUID: 4242c0f6-b9ba-39f4-b11e-fb965a79d709
GENERAL.DEVICES: ens224
GENERAL.IP-IFACE: ens224
GENERAL.STATE: activated
GENERAL.DEFAULT: no
GENERAL.DEFAULT6: no
GENERAL.SPEC-OBJECT: --
GENERAL.VPN: no
GENERAL.DBUS-PATH: /org/freedesktop/NetworkManager/ActiveConnection/3
GENERAL.CON-PATH: /org/freedesktop/NetworkManager/Settings/2
GENERAL.ZONE: --
GENERAL.MASTER-PATH: --
IP4.ADDRESS[1]: 192.168.20.244/24
IP4.GATEWAY: 192.168.20.1
IP4.ROUTE[1]: dst = 0.0.0.0/0, nh = 192.168.20.1, mt = 101
IP4.ROUTE[2]: dst = 192.168.20.0/24, nh = 0.0.0.0, mt = 101
IP4.DNS[1]: 192.168.20.100
IP6.ADDRESS[1]: fe80::20c:29ff:fecc:b93f/64
IP6.GATEWAY: --
IP6.ROUTE[1]: dst = fe80::/64, nh = ::, mt = 1024
Rebooting server...
Changes applied to 192.168.10.244. Waiting for reboot...
All servers processed successfully.
[root@inddcpppz01 scripts]#

---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Log in to the server indrxltst11 & check the IP address and hostname to confirm the changes.

Why This Approach Is Recommended for RHEL 9 & 10
  • Fully NetworkManager-compliant
  • No deprecated network-scripts
  • Changes persist across reboots
  • CSV-based = scalable and repeatable
  • Minimal dependencies
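A detail worth understanding is why the CSV values reach the remote host at all: the script's unquoted heredoc delimiter (<< EOF, not << 'EOF') makes the local shell expand the variables before anything is sent over SSH. A minimal, self-contained sketch with hypothetical values:

```shell
#!/bin/bash
# Hypothetical CSV fields, as plain shell variables on the control node.
hostname="indrxltst99"
primary_ip="192.168.10.250"

# Because the delimiter is unquoted, $hostname and $primary_ip are
# expanded locally; the remote side would receive literal commands.
generated=$(cat << EOF
hostnamectl set-hostname "$hostname"
nmcli con mod ens192 ipv4.addresses $primary_ip/24
EOF
)
echo "$generated"
```

This prints the fully resolved commands — exactly what ssh would feed to the remote shell. Quoting the delimiter (<< 'EOF') would instead send the literal strings $hostname and $primary_ip, which would be empty on the remote host.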
Best Practices
  • Test on a non-production server first
  • Keep console or hypervisor access available
  • Avoid changing the IP of your active SSH session
  • Maintain a rollback plan
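One way to act on these practices is to validate the CSV before touching any server. The sketch below is a hypothetical pre-flight check (the helper names validate_row and valid_ipv4 are illustrative, not part of the main script): it verifies that each row has eight fields and that the address fields are well-formed dotted-quad IPv4 values.

```shell
#!/bin/bash
# Hypothetical pre-flight validator for the host_ip_change CSV.

valid_ipv4() {
    local ip=$1 octet
    local -a parts
    IFS=. read -r -a parts <<< "$ip"
    [[ ${#parts[@]} -eq 4 ]] || return 1
    for octet in "${parts[@]}"; do
        # Each octet must be 1-3 digits and <= 255 (10# avoids octal parsing).
        [[ $octet =~ ^[0-9]{1,3}$ ]] && (( 10#$octet <= 255 )) || return 1
    done
}

validate_row() {
    local -a f
    IFS=, read -r -a f <<< "$1"
    [[ ${#f[@]} -eq 8 ]] || { echo "bad field count"; return 1; }
    # remote, primary_ip and secondary_ip must all be valid IPv4 addresses.
    valid_ipv4 "${f[0]}" && valid_ipv4 "${f[2]}" && valid_ipv4 "${f[5]}" \
        || { echo "bad IP"; return 1; }
    echo "ok"
}

# Example: the row from the sample run validates cleanly.
validate_row "192.168.10.244,indrxltst11,192.168.10.224,192.168.10.1,192.168.10.100,192.168.20.224,192.168.20.1,192.168.20.100"
```

Running this over every non-comment line of the CSV before the main loop gives you a cheap safety net: a typo in an address fails fast on the control node instead of half-configuring a remote server.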
Conclusion
This Bash script provides a safe, scalable, and enterprise-ready solution for managing hostnames and IP addresses across RHEL 7, 8, 9 & 10 servers. By leveraging nmcli and CSV-driven automation, you can eliminate manual configuration errors and standardize network changes across environments.

For sysadmins managing multiple Linux servers, this approach is simple, powerful, and highly effective.