Jenkins Installation

Jenkins Installation on RHEL 9: A Complete Guide
This guide walks through the steps required to install and configure Jenkins on RHEL 9, including setting up Java, dependencies, Jenkins repository, system service configuration, reverse proxy with Apache, Python tooling, Terraform, and PowerShell.

1. Install Required Dependencies
Begin by installing all essential packages, including Java 17, Git, compiler tools, Node.js, Python pip, Docker-related libraries, and others.
# dnf install -y fontconfig java-17-openjdk git gcc gcc-c++ nodejs gettext device-mapper-persistent-data lvm2 bzip2 python3-pip wget libseccomp
# java --version

2. Configure Jenkins Repository
Download and add the Jenkins repository for RHEL-based systems.
# wget -O /etc/yum.repos.d/jenkins.repo https://pkg.jenkins.io/redhat-stable/jenkins.repo
# rpm --import https://pkg.jenkins.io/redhat-stable/jenkins.io.key

3. Install and Start Jenkins
Install Jenkins and configure it as a service.
# dnf install jenkins -y
# systemctl start jenkins
# systemctl enable jenkins
# systemctl status jenkins

Jenkins will now be available on port 8080, for example: http://192.168.10.106:8080
# cat /var/lib/jenkins/secrets/initialAdminPassword
Enter the initial admin password shown (here, 6bedde9c71eb4d999a5cfdfe43f0d052) and click Continue.

Install the suggested plugins and wait for the plugin installation to complete.
Once the plugins are installed, set the admin username, password, and email address, then click Save and Continue.

Set the Jenkins URL (your IP address or FQDN hostname with port 8080), then click Save and Finish.

Now Jenkins is ready to use.


Install additional plugins as needed: Ansible, Terraform, PowerShell, GitHub, GitLab, AWS, GCP, and Azure.

4. Configure Apache as a Reverse Proxy for Jenkins
Install and enable Apache HTTP Server.
# dnf install httpd -y  
# systemctl start httpd 
# systemctl enable httpd 
# systemctl status httpd

Navigate to the Apache configuration directory:
# cd /etc/httpd/conf.d/ 
# mv welcome.conf welcome.conf.bkp 
# vi jenkins.conf
ProxyRequests Off
ProxyPreserveHost On
AllowEncodedSlashes NoDecode

<Proxy http://localhost:8080/*>
    Require all granted
</Proxy>

ProxyPass / http://localhost:8080/ nocanon
ProxyPassReverse / http://localhost:8080/
ProxyPassReverse / http://www.jenkins.ppc.com/

Restart Apache: 
# systemctl restart httpd

Now Jenkins will be accessible using your domain or server IP via port 80.
http://www.jenkins.ppc.com
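As a quick smoke test of the proxy from the server itself, check the HTTP status code Apache returns; a 200, or a 403 before the initial setup wizard is finished, means Apache is forwarding to Jenkins:

```shell
# Ask Apache (port 80) for the Jenkins front page and print only the status code.
curl -s -o /dev/null -w "%{http_code}\n" http://localhost/
```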


5. Install and Configure Python Tools
Upgrade pip and install commonly used DevOps/Cloud SDKs. 
# python3 -m pip install --upgrade pip 
# pip3 install ansible
# pip3 install gcloud 
# pip3 install awscli 
# pip3 install azure-cli 
# pip3 install --upgrade pyvmomi 
# pip3 install vmware-vcenter 
# pip3 install --upgrade git+https://github.com/vmware/vsphere-automation-sdk-python.git

Create and Activate Python Virtual Environment
# python3 -m venv venv_name 
# source venv_name/bin/activate 
# pip install --upgrade pip setuptools
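A minimal sketch of the venv workflow, using a throwaway directory so it can be run anywhere (venv_name matches the example above):

```shell
# Create a virtual environment in a scratch directory.
tmpdir=$(mktemp -d)
python3 -m venv "$tmpdir/venv_name"

# Activation prepends the venv's bin/ to PATH, so pip and python
# now resolve inside the environment instead of the system copies.
. "$tmpdir/venv_name/bin/activate"
which pip
echo "$VIRTUAL_ENV"

# Leave the environment and clean up.
deactivate
rm -rf "$tmpdir"
```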

6. Install Terraform on RHEL 9
Add the HashiCorp repo and install Terraform.
# yum install -y yum-utils 
# yum-config-manager --add-repo https://rpm.releases.hashicorp.com/RHEL/hashicorp.repo 
# yum -y install terraform 
# terraform version

7. Install PowerShell on RHEL 9
Install PowerShell from the official RPM package. 
# dnf install https://github.com/PowerShell/PowerShell/releases/download/v7.5.4/powershell-7.5.4-1.rh.x86_64.rpm 
# pwsh --version

Conclusion
You've successfully installed Jenkins, configured Apache reverse proxy, set up Python cloud tooling, installed Terraform, and enabled PowerShell on RHEL 9. This setup prepares your server for end-to-end DevOps automation, CI/CD pipelines, cloud provisioning, and infrastructure management.
Feel free to extend Jenkins further using plugins and pipeline automation.

Ansible AWX

Install Ansible AWX on CentOS/RHEL8/9
If you want to manage automation at scale, Ansible AWX (the open-source version of Ansible Tower) is a powerful solution. This guide walks you through installing AWX 17.1.0 on a CentOS/RHEL-based system using Docker and Docker Compose.

Prerequisites:

Before starting, ensure you have:
  • A fresh CentOS/RHEL system (8/9 preferred)
  • Root or sudo access
  • Internet connectivity
Step 1: Install Required Packages
# dnf -y install git gcc gcc-c++ nodejs gettext device-mapper-persistent-data lvm2 bzip2 python3-pip wget libseccomp

Step 2: Remove Old Docker Installation
# dnf remove docker* -y

Step 3: Configure Docker Repository
# dnf config-manager --add-repo=https://download.docker.com/linux/centos/docker-ce.repo

Step 4: Install Docker CE
# dnf -y install docker-ce
# systemctl enable docker
# systemctl start docker 
# systemctl status docker

Step 5: Install Python Build Dependencies
# python3 -m pip install --upgrade pip
# pip3 install setuptools_rust 
# pip3 install wheel 
# pip3 install ansible
# pip3 install docker-compose

Step 6: Download AWX Installer
# git clone -b 17.1.0 https://github.com/ansible/awx.git
# cd awx/installer

Step 7: Create Required Directories
# mkdir -p /opt/awx/pgdocker /opt/awx/awxcompose /opt/awx/projects

Step 8: Generate a Secret Key
# openssl rand -base64 30
Copy this key for later use.

Step 9: Edit the AWX Inventory File
Open the inventory file:
# vi inventory
Update the following parameters as needed:
admin_password=Welcome@123
awx_official=true
pg_database=awx
pg_password=Welcome@123
awx_alternate_dns_servers="192.168.10.100,192.168.20.100"
postgres_data_dir="/opt/awx/pgdocker"
docker_compose_dir="/opt/awx/awxcompose"
project_data_dir="/opt/awx/projects"
secret_key=XXXXXXXXXX   # openssl rand -base64 30 command output
Save and exit the file.
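The same edits can be scripted instead of made by hand. A sketch that assumes the parameters are present and uncommented in the inventory file, as in the listing above, and uses the same example values:

```shell
# Run from awx/installer. Generate the secret key and patch the inventory in place.
SECRET_KEY=$(openssl rand -base64 30)

sed -i "s|^admin_password=.*|admin_password=Welcome@123|" inventory
sed -i "s|^secret_key=.*|secret_key=${SECRET_KEY}|" inventory
sed -i "s|^postgres_data_dir=.*|postgres_data_dir=\"/opt/awx/pgdocker\"|" inventory
sed -i "s|^project_data_dir=.*|project_data_dir=\"/opt/awx/projects\"|" inventory

# Show the result for a final review before running the installer.
grep -E "^(admin_password|secret_key|postgres_data_dir|project_data_dir)=" inventory
```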

Step 10: Run the AWX Installer

Once the inventory file is updated, run:
# ansible-playbook -i inventory install.yml
This may take several minutes.

Step 11: Access AWX Web Interface

After the installation completes, open a browser and go to:
http://<server-ip>
http://<server_hostname or FQDN>

Log in using:
Username: admin
Password: Welcome@123 (or the password you set)

Conclusion
Installing Ansible AWX on CentOS/RHEL 8/9 provides a powerful and centralized way to manage automation across your infrastructure. By following this guide, you’ve prepared your system with all required dependencies, deployed Docker and Docker Compose, configured AWX using the official installer, and successfully launched the AWX web interface.
With AWX now running, you can:
  • Create and manage projects, inventories, and credentials
  • Build and schedule playbook automation workflows
  • Monitor job executions in real time
  • Integrate AWX with Git, cloud providers, and external systems
  • Scale automation across teams and environments
This setup forms the foundation for enterprise-grade automation and can be expanded further with clustering, HTTPS/SSL configuration, LDAP/AD integration, and backup strategies.

Renew GPFS (IBM Spectrum Scale) Certificates

IBM Spectrum Scale (GPFS) uses internal SSL certificates to secure communication among cluster nodes. When these certificates are close to expiration—or have already expired—you must renew them to restore healthy cluster communication.

This article provides step-by-step instructions for renewing GPFS certificates using both the online (normal) and offline (expired certificate) methods.

Renewing GPFS Certificate – Online Method (Recommended)
Use this method when the certificates have NOT yet expired.
This method does not require shutting down the cluster.

1. Check the current certificate expiry date
Run on any cluster node:
# mmcommon run mmgskkm print --cert /var/mmfs/ssl/id_rsa_committed.cert | grep Valid
Or:
# /usr/lpp/mmfs/bin/mmcommon run mmgskkm print --cert /var/mmfs/ssl/id_rsa_committed.cert | grep Valid
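Plain openssl can answer the practical question "does this certificate expire within N days?". A sketch using a throwaway self-signed certificate for illustration; on a real cluster, point openssl at the committed certificate file instead:

```shell
# Create a throwaway self-signed certificate valid for 30 days.
cert=$(mktemp)
key=$(mktemp)
openssl req -x509 -newkey rsa:2048 -keyout "$key" -out "$cert" \
    -days 30 -nodes -subj "/CN=gpfs-demo" 2>/dev/null

# Print the validity window, as the mmgskkm check above does.
openssl x509 -in "$cert" -dates -noout

# -checkend returns exit status 0 if the cert is still valid N seconds
# from now, 1 if it will have expired. Here N = 7 days.
if openssl x509 -in "$cert" -checkend $((7*24*3600)) -noout; then
    echo "certificate valid for at least 7 more days"
else
    echo "certificate expires within 7 days - renew now"
fi
rm -f "$cert" "$key"
```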

2. Generate new authentication keys
# mmauth genkey new

3. Commit the new keys
# mmauth genkey commit

4. Validate the updated certificate on all nodes
# mmcommon run mmgskkm print --cert /var/mmfs/ssl/id_rsa_committed.cert | grep Valid
Or:
# /usr/lpp/mmfs/bin/mmcommon run mmgskkm print --cert /var/mmfs/ssl/id_rsa_committed.cert | grep Valid

Renewing GPFS Certificate – Offline Method (Certificates Already Expired)
If the cluster fails to start or nodes cannot communicate due to an expired certificate, use this offline method.
This requires a temporary cluster shutdown and manual time adjustment.

1. Verify certificate expiration
# mmdsh -N all 'openssl x509 -in /var/mmfs/ssl/id_rsa_committed.pub -dates -noout'

2. Stop NTP service (important for manual time rollback)
# lssrc -s xntpd
# stopsrc -s xntpd

3. Shut down GPFS on all nodes
# mmshutdown -a

4. Stop CCR monitoring on quorum nodes
# mmdsh -N quorumNodes "/usr/lpp/mmfs/bin/mmcommon killCcrMonitor"

5. Roll back the system time on ALL nodes
Set the clock just before the certificate expiry time.
Example:
date 072019542025
Explanation:
07 = Month (July)
20 = Day
19:54 = Time
2025 = Year
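The MMDDhhmmYYYY string can also be generated from a target timestamp instead of assembled by hand. A sketch using GNU date on a Linux workstation (the formatting flags on AIX date differ, so treat this as illustrative):

```shell
# Build the AIX-style MMDDhhmmYYYY argument for a given target time.
# Here: 20 July 2025, 19:54 UTC, which yields 072019542025 as above.
target="2025-07-20 19:54 UTC"
date -u -d "$target" +"%m%d%H%M%Y"
```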

6. Restart CCR monitor
# mmdsh -N quorumNodes "/usr/lpp/mmfs/bin/mmcommon startCcrMonitor"

7. Generate & commit new keys
# mmauth genkey new
# mmauth genkey commit

8. Restore correct date and restart NTP
# date <current_correct_time>
# startsrc -s xntpd

9. Verify the new certificate
# mmdsh -N all 'openssl x509 -in /var/mmfs/ssl/id_rsa_committed.pub -dates -noout'

10. Restart GPFS on all nodes
# mmstartup -a

Extracting Disk Details (Size, LUN ID, and WWPN) on IBM AIX

Managing storage on IBM AIX systems often requires gathering detailed information about disks — including their size, LUN ID, and WWPN (World Wide Port Name) of the Fibre Channel adapters they connect through.

This information is especially useful for SAN teams and system administrators when verifying storage mappings, troubleshooting, or documenting configurations.

In this post, we’ll look at a simple shell script that automates this task.

The script:
  • Loops through all disks known to AIX (lspv output).
  • Extracts each disk’s LUN ID from lscfg.
  • Gets its size in GB using bootinfo.
  • Finds all FC adapters (fcsX) and displays their WWPNs.
  • Prints a consolidated, easy-to-read summary.
The Script

#!/bin/ksh
# For every physical volume known to AIX, print its size, LUN ID,
# and the WWPN of each Fibre Channel adapter on the system.
for i in $(lspv | awk '{print $1}')
do
    # Get the LUN ID from the disk VPD
    LUNID=$(lscfg -vpl "$i" | grep -i "LIC" | awk -F. '{print $NF}')

    # Get the size in MB and convert to GB
    DiskSizeMB=$(bootinfo -s "$i")
    DiskSizeGB=$(echo "scale=2; $DiskSizeMB/1024" | bc)

    # Print one line per FC adapter, with its WWPN
    for j in $(lsdev -Cc adapter | grep fcs | awk '{print $1}')
    do
        WWPN=$(lscfg -vpl "$j" | grep -i "Network Address" | sed 's/.*Address[ .]*//')
        echo "Disk: $i Size: ${DiskSizeGB}GB LUN ID: $LUNID WWPN: $WWPN"
    done
done


How It Works:
  • lspv lists all disks managed by AIX (e.g., hdisk0, hdisk1).
  • lscfg -vpl hdiskX displays detailed configuration information for each disk, including the LUN ID.
  • bootinfo -s hdiskX returns the disk size in megabytes.
  • lsdev -Cc adapter | grep fcs lists all Fibre Channel adapters (fcs0, fcs1, etc.).
  • lscfg -vpl fcsX | grep "Network Address" shows the adapter’s WWPN.
  • sed 's/.*Address[ .]*//' cleans the output, leaving only the WWPN value.
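The cleanup step can be checked against a captured line of lscfg output (the sample value below is made up):

```shell
# A typical "Network Address" line as printed by lscfg -vpl fcsX.
line="        Network Address.............C0507601D8123456"

# Strip everything up to and including the label and its dot padding,
# leaving only the WWPN value.
echo "$line" | sed 's/.*Address[ .]*//'
# Prints: C0507601D8123456
```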
Example Output:
Disk: hdisk0 Size: 100.00GB LUN ID: 500507680240C567 WWPN: C0507601D8123456
Disk: hdisk0 Size: 100.00GB LUN ID: 500507680240C567 WWPN: C0507601D8123457
Disk: hdisk1 Size: 200.00GB LUN ID: 500507680240C568 WWPN: C0507601D8123456
Disk: hdisk1 Size: 200.00GB LUN ID: 500507680240C568 WWPN: C0507601D8123457


This shows each disk (hdiskX) with its size, LUN ID, and all connected FC adapter WWPNs.

Presenting Fibre-Channel Storage to AIX LPARs with Dual VIOS (NPIV / vFC)

Present SAN LUNs to AIX LPARs using NPIV / virtual Fibre Channel (vFC) so each LPAR has redundant SAN paths through two VIOS servers (VIOS1 = primary, VIOS2 = backup) and can use multipathing (native MPIO or PowerPath).

NPIV (N_Port ID Virtualization) lets an LPAR present its own virtual WWPNs to the SAN while physical Fibre Channel hardware is on the VIOS. With two VIOS nodes and dual SAN fabrics, you get end-to-end redundancy:
  • VIOS1 and VIOS2 each present vFC adapters to the LPAR via the HMC.
  • Each VIOS has physical FC ports connected to redundant SAN switches/fabrics.
  • LUNs are zoned and masked to VIOS WWPNs. AIX LPARs discover LUNs, use multipathing, and survive single-path failures.
Prerequisites & Assumptions:
  • HMC admin, VIOS (padmin/root), and AIX root access available.
  • VIOS1 & VIOS2 installed, registered with HMC and reachable.
  • Each VIOS has at least one physical FC port (e.g., fcs0, fcs1).
  • SAN team will perform zoning & LUN masking.
  • Backups of VIOS and HMC configs completed.
  • You know which LPARs should receive which LUNs.
High-Level Flow:
  • Collect physical FC adapter names & WWPNs from VIOS1 and VIOS2.
  • Provide WWPNs to SAN admin for zoning & LUN masking.
  • Create vFC adapters for each AIX LPAR on the HMC and map them across VIOS1/VIOS2.
  • Verify mappings on HMC and VIOS (lsmap).
  • Ensure VIOS physical FC ports are logged into fabric.
  • On AIX LPARs: run cfgmgr, enable multipathing, create PVs/VGs/LVs as required.
  • Test failover by disabling a path and verifying I/O continues.
  • Document and monitor.
Step-by-Step Configuration

Step 1 — Verify VIOS Physical Fibre Channel Adapters
On VIOS1 and VIOS2, log in as padmin and identify FC adapters:
$ lsdev -type adapter
Expected output snippet:
VIOS1:
fcs0 Available 00-00 Fibre Channel Adapter
fcs1 Available 00-01 Fibre Channel Adapter
VIOS2:
fcs0 Available 00-00 Fibre Channel Adapter
fcs1 Available 00-01 Fibre Channel Adapter
Retrieve WWPNs for each adapter:
$ lsdev -dev fcs0 -vpd | grep -i "Network Address"
Record the results (WWPNs below are placeholders):
VIOS    Adapter   WWPN
VIOS1   fcs0      20:00:00:00:C9:AA:AA:AA
VIOS2   fcs0      20:00:00:00:C9:CC:CC:CC
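The collection can be scripted so the table fills itself, reusing the same lscfg/sed extraction as the AIX disk script earlier. A sketch to run in a root shell on each VIOS in turn (from the padmin restricted shell, enter oem_setup_env first):

```shell
# Print "host adapter WWPN" for every Fibre Channel adapter on this node.
host=$(hostname)
for a in $(lsdev -Cc adapter | awk '/^fcs/ {print $1}')
do
    wwpn=$(lscfg -vpl "$a" | grep -i "Network Address" | sed 's/.*Address[ .]*//')
    printf "%-8s %-8s %s\n" "$host" "$a" "$wwpn"
done
```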

Step 2 — SAN Zoning & LUN Presentation
Provide the recorded VIOS WWPNs to the SAN Administrator.
Request:
  • Zoning between each VIOS WWPN and Storage Controller ports.
  • LUN masking to present LUN-100 to both VIOS WWPNs.
  • Confirmation that both VIOS ports see the LUNs across both fabrics.
Tip: Ensure both fabrics (A & B) are zoned independently for redundancy.

Step 3 — Create Virtual Fibre Channel (vFC) Adapters via HMC

On the HMC:
Select AIX-LPAR1 → Configuration → Virtual Adapters.
Click Add → Virtual Fibre Channel Adapter.
Create two vFC adapters:
vfc0 mapped to VIOS1
vfc1 mapped to VIOS2
Save configuration and activate (Dynamic LPAR operation if supported).
Expected vFC mapping:
Adapter   Client LPAR   Server VIOS   Mapping Status
vfc0      AIX-LPAR1     VIOS1         Mapped OK
vfc1      AIX-LPAR1     VIOS2         Mapped OK

Step 4 — Verify vFC Mapping on VIOS
Log in to each VIOS (padmin):
$ lsmap -all -type fcs

Example output:
On VIOS1:
Name Physloc ClntID ClntName ClntOS
------------- ---------------------------------- ------ ----------- --------
vfchost0 U9105.22A.XXXXXX-V1-C5 5 AIX-LPAR1 AIX
Status:LOGGED_IN
FC name:fcs0
Ports logged in: 2
VFC client name: fcs0
VFC client WWPN: 10:00:00:11:22:33:44:55

On VIOS2:
Name Physloc ClntID ClntName ClntOS
------------- ---------------------------------- ------ ----------- --------
vfchost0 U9105.22A.XXXXXX-V2-C6 5 AIX-LPAR1 AIX
Status:LOGGED_IN
FC name:fcs0
Ports logged in: 2
VFC client name: fcs1
VFC client WWPN: 10:00:00:55:66:77:88:99

Confirm each VIOS vFC host maps to the correct AIX vFC client.

Step 5 — Verify VIOS FC Port Fabric Login
On each VIOS:
$ fcstat fcs0
Verify:
  • Port is online
  • Logged into the fabric
  • No link errors

Step 6 — Discover Devices on AIX LPAR

Boot or activate AIX-LPAR1 (use SMS mode if you need to select the boot device):
  • Open HMC → Open vterm/console for AIX-LPAR1.
  • HMC GUI: Tasks → Operations → Activate → Advanced → Boot Mode = SMS → Activate. 
  • In SMS console: 5 (Select Boot Options) → Select Install/Boot Device → List all Devices → pick device → Normal Boot Mode → Yes to exit and boot from that device.
Verify Fibre Channel adapters:
# lsdev -Cc adapter | grep fcs
fcs0 Available Fibre Channel Adapter
fcs1 Available Fibre Channel Adapter
List discovered disks:
# lsdev -Cc disk
# lspv
Expected:
hdisk12 Available 00-08-00-4,0 16 Bit LUNZ Disk Drive

Step 7 — Configure Multipathing
If using native AIX MPIO, verify:
# lspath
Enabled hdisk12 fscsi0
Enabled hdisk12 fscsi1
If using EMC PowerPath:
# powermt display dev=all
Confirm both paths active.

Step 8 — Test Redundancy / Failover
To validate multipathing:
On VIOS1, disable the FC port temporarily:
$ rmdev -l fcs0 -R
On AIX LPAR, verify disk is still accessible:
# lspath -l hdisk12
Expected:
Enabled hdisk12 fscsi1
Failed hdisk12 fscsi0
Re-enable path:
$ cfgdev
Confirm path restoration:
Enabled hdisk12 fscsi0
Enabled hdisk12 fscsi1

Step 9 — Post-Deployment Checks
Verify all paths:
# lspath
Check VIOS logs for FC errors:
$ errlog -ls
Save configuration backups:
$ backupios -file /home/padmin/vios1_bkup
$ backupios -file /home/padmin/vios2_bkup