GitLab CE Installation

GitLab CE Installation on RHEL 9 / CentOS 9
GitLab Community Edition (CE) is a powerful, self-hosted DevOps platform that provides Git repository management, CI/CD pipelines, artifact storage, container registry, issue tracking, and more. This guide walks you through installing GitLab CE on RHEL 9 / CentOS 9, configuring a custom external URL, and implementing SSL/TLS using Apache (httpd) as a reverse proxy.

1. Install Required Dependencies

Before installing GitLab, ensure your system has the required packages.
# dnf install -y curl policycoreutils openssh-server openssh-clients

2. Add GitLab CE Repository
Use GitLab’s official repository installation script.
# curl -sS https://packages.gitlab.com/install/repositories/gitlab/gitlab-ce/script.rpm.sh | sudo bash

3. Install GitLab CE
# dnf install -y gitlab-ce
This installs all required GitLab components, including NGINX (bundled), Redis, and PostgreSQL.

4. Configure GitLab URL
Edit the primary GitLab configuration file:
# vim /etc/gitlab/gitlab.rb
Add or modify the external URL:
external_url 'http://www.gitlab.ppc.com'
Save and exit.

5. Reconfigure GitLab
Run the reconfiguration command to generate configurations and start services.
# gitlab-ctl reconfigure
GitLab will now be accessible at:
http://server-hostname
http://server-IP-address

SSL/TLS Implementation Using Apache (httpd)
GitLab comes with a built-in NGINX server, but many enterprises prefer using Apache for SSL termination and reverse proxying.
Below is how to configure Apache with SSL for GitLab.
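Because the omnibus package binds its bundled NGINX to port 80 by default, it will conflict with Apache on the same host. One common approach (a sketch using standard omnibus settings; verify against your GitLab version) is to move the bundled NGINX to 127.0.0.1:8080 in /etc/gitlab/gitlab.rb so Apache can proxy to it:

```ruby
# /etc/gitlab/gitlab.rb — keep the bundled NGINX local and off port 80
external_url 'http://www.gitlab.ppc.com'
nginx['listen_addresses'] = ['127.0.0.1']
nginx['listen_port'] = 8080
nginx['listen_https'] = false   # Apache terminates TLS instead
```

Run gitlab-ctl reconfigure after editing for the change to take effect.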

6. Install Apache HTTP Server
# dnf install -y httpd mod_ssl
# systemctl enable httpd
# systemctl start httpd

7. Generate or Install SSL Certificates
You can use:
  • Self-signed certificates (testing)
  • Let's Encrypt (production)
  • CA-signed certificates (enterprise)

To generate a self-signed certificate:
# openssl req -newkey rsa:2048 -nodes -keyout /etc/pki/tls/private/gitlab.key -x509 -days 365 -out /etc/pki/tls/certs/gitlab.crt
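Whichever certificate type you choose, it helps to confirm the certificate and key are a matching pair before wiring them into Apache. A minimal sketch (the helper name cert_key_match is illustrative) compares the RSA modulus digests:

```shell
# Hypothetical helper: succeed only when the cert and key share the same modulus
cert_key_match() {
    a=$(openssl x509 -noout -modulus -in "$1" 2>/dev/null | sha256sum)
    b=$(openssl rsa  -noout -modulus -in "$2" 2>/dev/null | sha256sum)
    [ "$a" = "$b" ]
}

# Example (paths from the step above):
# cert_key_match /etc/pki/tls/certs/gitlab.crt /etc/pki/tls/private/gitlab.key && echo OK
```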

8. Configure Apache Reverse Proxy for GitLab
Create a new Apache configuration file:
# vim /etc/httpd/conf.d/gitlab.conf
Add the following configuration. The proxy target assumes GitLab's bundled NGINX is listening on 127.0.0.1:8080; adjust the port if yours differs:
<VirtualHost *:443>
    ServerName www.gitlab.ppc.com

    SSLEngine on
    SSLCertificateFile /etc/pki/tls/certs/gitlab.crt
    SSLCertificateKeyFile /etc/pki/tls/private/gitlab.key

    ProxyPreserveHost On

    <Location />
        ProxyPass http://127.0.0.1:8080/
        ProxyPassReverse http://127.0.0.1:8080/
    </Location>
</VirtualHost>

<VirtualHost *:80>
    ServerName www.gitlab.ppc.com
    Redirect permanent / https://www.gitlab.ppc.com/
</VirtualHost>

Save and exit.

9. Adjust SELinux Policies (if enabled)
# setsebool -P httpd_can_network_connect 1

10. Restart Apache
# systemctl restart httpd
You can now access GitLab using HTTPS:
https://www.gitlab.ppc.com

Conclusion

You have successfully installed GitLab CE on RHEL 9 / CentOS 9, configured the external URL, and set up SSL/TLS security using Apache as a reverse proxy. With GitLab now running securely, you can begin creating repositories, configuring CI/CD pipelines, managing runners, and integrating GitLab with your DevOps ecosystem.

Jenkins Installation

Jenkins Installation on RHEL 9: A Complete Guide
This guide walks through the steps required to install and configure Jenkins on RHEL 9, including setting up Java, dependencies, Jenkins repository, system service configuration, reverse proxy with Apache, Python tooling, Terraform, and PowerShell.

1. Install Required Dependencies
Begin by installing all essential packages, including Java 17, Git, compiler tools, Node.js, Python pip, Docker-related libraries, and others.
# dnf install -y fontconfig java-17-openjdk git gcc gcc-c++ nodejs gettext device-mapper-persistent-data lvm2 bzip2 python3-pip wget libseccomp
# java --version

2. Configure Jenkins Repository
Download and add the Jenkins repository for RHEL-based systems.
# wget -O /etc/yum.repos.d/jenkins.repo https://pkg.jenkins.io/redhat-stable/jenkins.repo
# rpm --import https://pkg.jenkins.io/redhat-stable/jenkins.io.key

3. Install and Start Jenkins
Install Jenkins and configure it as a service.
# dnf install jenkins -y
# systemctl start jenkins
# systemctl enable jenkins
# systemctl status jenkins

Jenkins will now be available on port 8080, for example: http://192.168.10.106:8080
# cat /var/lib/jenkins/secrets/initialAdminPassword
6bedde9c71eb4d999a5cfdfe43f0d052
Enter this password on the unlock screen and continue.

Choose "Install suggested plugins" and wait while the plugin installation runs.
Once the plugins are installed, set the admin password and email address, then click Save and Continue.

For the Jenkins URL, enter the IP address or FQDN hostname with port 8080, then click Save and Finish.

Jenkins is now ready to use.


Optionally, install additional plugins such as Ansible, Terraform, PowerShell, GitHub, GitLab, AWS, GCP, and Azure from the plugin manager.
4. Configure Apache as a Reverse Proxy for Jenkins
Install and enable Apache HTTP Server.
# dnf install httpd -y  
# systemctl start httpd 
# systemctl enable httpd 
# systemctl status httpd

Navigate to the Apache configuration directory:
# cd /etc/httpd/conf.d/ 
# mv welcome.conf welcome.conf.bkp 
# vi jenkins.conf
ProxyRequests Off
ProxyPreserveHost On
AllowEncodedSlashes NoDecode

<Proxy http://localhost:8080/*>
    Require all granted
</Proxy>

ProxyPass / http://localhost:8080/ nocanon
ProxyPassReverse / http://localhost:8080/
ProxyPassReverse / http://www.jenkins.ppc.com/

Note: RHEL 9 ships Apache 2.4, which uses "Require all granted" instead of the older "Order deny,allow" / "Allow from all" directives.

Restart Apache: 
# systemctl restart httpd

Jenkins will now be accessible on port 80 using your domain or server IP:
http://www.jenkins.ppc.com


5. Install and Configure Python Tools
Upgrade pip and install commonly used DevOps/Cloud SDKs. 
# python3 -m pip install --upgrade pip 
# pip3 install ansible
# pip3 install gcloud 
# pip3 install awscli 
# pip3 install azure-cli 
# pip3 install --upgrade pyvmomi 
# pip3 install vmware-vcenter 
# pip3 install --upgrade git+https://github.com/vmware/vsphere-automation-sdk-python.git

Create and Activate Python Virtual Environment
# python3 -m venv venv_name 
# source venv_name/bin/activate 
# pip install --upgrade pip setuptools
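The virtual environment keeps these SDKs isolated from the system Python. A quick sketch of the sequence above (venv_name is just an example directory):

```shell
# Create the environment; its own interpreter reports an isolated prefix
venv_dir=${VENV_DIR:-venv_name}
python3 -m venv "$venv_dir"
"$venv_dir/bin/python" -c 'import sys; print(sys.prefix != sys.base_prefix)'   # → True
# For interactive work, activate it: . "$venv_dir/bin/activate"
```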

6. Install Terraform on RHEL 9
Add the HashiCorp repo and install Terraform.
# yum install -y yum-utils 
# yum-config-manager --add-repo https://rpm.releases.hashicorp.com/RHEL/hashicorp.repo 
# yum -y install terraform 
# terraform version

7. Install PowerShell on RHEL 9
Install PowerShell from the official RPM package. 
# dnf install https://github.com/PowerShell/PowerShell/releases/download/v7.5.4/powershell-7.5.4-1.rh.x86_64.rpm 
# pwsh --version

Conclusion
You've successfully installed Jenkins, configured Apache reverse proxy, set up Python cloud tooling, installed Terraform, and enabled PowerShell on RHEL 9. This setup prepares your server for end-to-end DevOps automation, CI/CD pipelines, cloud provisioning, and infrastructure management.
Feel free to extend Jenkins further using plugins and pipeline automation.

Ansible AWX

Install Ansible AWX on CentOS/RHEL8/9
If you want to manage automation at scale, Ansible AWX (the open-source version of Ansible Tower) is a powerful solution. This guide walks you through installing AWX 17.1.0 on a CentOS/RHEL-based system using Docker and Docker Compose.

Prerequisites:

Before starting, ensure you have:
  • A fresh CentOS/RHEL system (8/9 preferred)
  • Root or sudo access
  • Internet connectivity
Step 1: Install Required Packages
# dnf -y install git gcc gcc-c++ nodejs gettext device-mapper-persistent-data lvm2 bzip2 python3-pip wget libseccomp

Step 2: Remove Old Docker Installation
# dnf remove docker* -y

Step 3: Configure Docker Repository
# dnf config-manager --add-repo=https://download.docker.com/linux/centos/docker-ce.repo

Step 4: Install Docker CE
# dnf -y install docker-ce
# systemctl enable docker
# systemctl start docker 
# systemctl status docker

Step 5: Install Python Build Dependencies
# python3 -m pip install --upgrade pip
# pip3 install setuptools_rust 
# pip3 install wheel 
# pip3 install ansible
# pip3 install docker-compose

Step 6: Download AWX Installer
# git clone -b 17.1.0 https://github.com/ansible/awx.git
# cd awx/installer

Step 7: Create Required Directories
# mkdir -p /opt/awx/pgdocker /opt/awx/awxcompose /opt/awx/projects

Step 8: Generate a Secret Key
# openssl rand -base64 30
Copy this key for later use.
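If you prefer to capture the key in a shell variable for the inventory edit in the next step, a small sketch (the variable name is illustrative):

```shell
# 30 random bytes encode to exactly 40 base64 characters (no padding)
secret_key=$(openssl rand -base64 30)
echo "$secret_key"
echo "${#secret_key}"   # → 40
```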

Step 9: Edit the AWX Inventory File
Open the inventory file:
# vi inventory
Update the following parameters as needed:
admin_password=Welcome@123
awx_official=true
pg_database=awx
pg_password=Welcome@123
awx_alternate_dns_servers="192.168.10.100,192.168.20.100"
postgres_data_dir="/opt/awx/pgdocker"
docker_compose_dir="/opt/awx/awxcompose"
project_data_dir="/opt/awx/projects"
secret_key=XXXXXXXXXX   # openssl rand -base64 30 command output
Save and exit the file.
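Before running the installer, a quick grep can confirm the mandatory settings made it into the file; a hedged sketch (the helper name check_inventory is illustrative):

```shell
# Count the must-have settings in an AWX inventory file
check_inventory() {
    grep -cE '^(admin_password|pg_password|secret_key)=' "$1"
}
# Expect 3 from: check_inventory inventory
```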

Step 10: Run the AWX Installer

Once the inventory file is updated, run:
# ansible-playbook -i inventory install.yml
This may take several minutes.

Step 11: Access AWX Web Interface

After the installation completes, open a browser and go to:
http://<server-ip>
http://<server_hostname or FQDN>

Log in using:
Username: admin
Password: Welcome@123 (or the password you set)

Conclusion
Installing Ansible AWX on CentOS/RHEL 8/9 provides a powerful and centralized way to manage automation across your infrastructure. By following this guide, you’ve prepared your system with all required dependencies, deployed Docker and Docker Compose, configured AWX using the official installer, and successfully launched the AWX web interface.
With AWX now running, you can:
  • Create and manage projects, inventories, and credentials
  • Build and schedule playbook automation workflows
  • Monitor job executions in real time
  • Integrate AWX with Git, cloud providers, and external systems
  • Scale automation across teams and environments
This setup forms the foundation for enterprise-grade automation and can be expanded further with clustering, HTTPS/SSL configuration, LDAP/AD integration, and backup strategies.

Renew GPFS (IBM Spectrum Scale) Certificates

IBM Spectrum Scale (GPFS) uses internal SSL certificates to secure communication among cluster nodes. When these certificates are close to expiration—or have already expired—you must renew them to restore healthy cluster communication.

This article provides step-by-step instructions for renewing GPFS certificates using both the online (normal) and offline (expired certificate) methods.

Renewing GPFS Certificate – Online Method (Recommended)
Use this method when the certificates have NOT yet expired.
This method does not require shutting down the cluster.

1. Check the current certificate expiry date
Run on any cluster node:
# mmcommon run mmgskkm print --cert /var/mmfs/ssl/id_rsa_committed.cert | grep Valid
Or:
# /usr/lpp/mmfs/bin/mmcommon run mmgskkm print --cert /var/mmfs/ssl/id_rsa_committed.cert | grep Valid
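To decide whether the online method still applies, it can help to turn the Valid line into a day count. A sketch using GNU date (the days_left helper is illustrative; AIX's own date lacks -d, so run this on a Linux admin node or adapt it):

```shell
# Days until an X.509 certificate expires (GNU date required for -d)
days_left() {
    end=$(openssl x509 -in "$1" -noout -enddate | cut -d= -f2)
    echo $(( ( $(date -d "$end" +%s) - $(date +%s) ) / 86400 ))
}

# Example: days_left /var/mmfs/ssl/id_rsa_committed.cert
```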

2. Generate new authentication keys
# mmauth genkey new

3. Commit the new keys
# mmauth genkey commit

4. Validate the updated certificate on all nodes
# mmcommon run mmgskkm print --cert /var/mmfs/ssl/id_rsa_committed.cert | grep Valid
Or:
# /usr/lpp/mmfs/bin/mmcommon run mmgskkm print --cert /var/mmfs/ssl/id_rsa_committed.cert | grep Valid

Renewing GPFS Certificate – Offline Method (Certificates Already Expired)
If the cluster fails to start or nodes cannot communicate due to an expired certificate, use this offline method.
This requires a temporary cluster shutdown and manual time adjustment.

1. Verify certificate expiration
# mmdsh -N all 'openssl x509 -in /var/mmfs/ssl/id_rsa_committed.pub -dates -noout'

2. Stop NTP service (important for manual time rollback)
# lssrc -s xntpd
# stopsrc -s xntpd

3. Shut down GPFS on all nodes
# mmshutdown -a

4. Stop CCR monitoring on quorum nodes
# mmdsh -N quorumNodes "/usr/lpp/mmfs/bin/mmcommon killCcrMonitor"

5. Roll back the system time on ALL nodes
Set the clock just before the certificate expiry time.
Example:
# date 072019542025
Explanation:
07 = Month (July)
20 = Day
19:54 = Time
2025 = Year
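The argument format is MMDDhhmmYYYY. On a Linux workstation with GNU date, the string can be derived from a human-readable expiry time rather than assembled by hand (the AIX date command itself takes the assembled string, not the -d form):

```shell
date -d "2025-07-20 19:54" +%m%d%H%M%Y   # → 072019542025
```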

6. Restart CCR monitor
# mmdsh -N quorumNodes "/usr/lpp/mmfs/bin/mmcommon startCcrMonitor"

7. Generate & commit new keys
# mmauth genkey new
# mmauth genkey commit

8. Restore correct date and restart NTP
# date <current_correct_time>
# startsrc -s xntpd

9. Verify the new certificate
# mmdsh -N all 'openssl x509 -in /var/mmfs/ssl/id_rsa_committed.pub -dates -noout'

10. Restart GPFS on all nodes
# mmstartup -a

Extracting Disk Details (Size, LUN ID, and WWPN) on IBM AIX

Managing storage on IBM AIX systems often requires gathering detailed information about disks — including their size, LUN ID, and WWPN (World Wide Port Name) of the Fibre Channel adapters they connect through.

This information is especially useful for SAN teams and system administrators when verifying storage mappings, troubleshooting, or documenting configurations.

In this post, we’ll look at a simple shell script that automates this task.

The script:
  • Loops through all disks known to AIX (lspv output).
  • Extracts each disk’s LUN ID from lscfg.
  • Gets its size in GB using bootinfo.
  • Finds all FC adapters (fcsX) and displays their WWPNs.
  • Prints a consolidated, easy-to-read summary.
The Script

#!/bin/ksh
# List every disk with its size, LUN ID, and the WWPN of each FC adapter.

for i in $(lspv | awk '{print $1}')
do
    # Get the LUN ID from the disk's VPD (dotted lscfg format)
    LUNID=$(lscfg -vpl "$i" | grep -i "LIC" | awk -F. '{print $NF}')

    # Get the size: bootinfo reports MB, so convert to GB
    DiskSizeMB=$(bootinfo -s "$i")
    DiskSizeGB=$(echo "scale=2; $DiskSizeMB/1024" | bc)

    # Loop over all FC adapters and print each adapter's WWPN
    for j in $(lsdev -Cc adapter | grep fcs | awk '{print $1}')
    do
        WWPN=$(lscfg -vpl "$j" | grep -i "Network Address" | sed 's/.*Address[ .]*//')
        echo "Disk: $i Size: ${DiskSizeGB}GB LUN ID: $LUNID WWPN: $WWPN"
    done
done


How It Works:
  • lspv lists all disks managed by AIX (e.g., hdisk0, hdisk1).
  • lscfg -vpl hdiskX displays detailed configuration information for each disk, including the LUN ID.
  • bootinfo -s hdiskX returns the disk size in megabytes.
  • lsdev -Cc adapter | grep fcs lists all Fibre Channel adapters (fcs0, fcs1, etc.).
  • lscfg -vpl fcsX | grep "Network Address" shows the adapter’s WWPN.
  • sed 's/.*Address[ .]*//' cleans the output, leaving only the WWPN value.
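The sed expression can be sanity-checked off-host against a sample lscfg-style line (the value shown is made up):

```shell
# Everything up to "Address" plus the dot padding is stripped, leaving the WWPN
line="        Network Address.............C0507601D8123456"
echo "$line" | sed 's/.*Address[ .]*//'   # → C0507601D8123456
```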
Example Output:
Disk: hdisk0 Size: 100.00GB LUN ID: 500507680240C567 WWPN: C0507601D8123456
Disk: hdisk0 Size: 100.00GB LUN ID: 500507680240C567 WWPN: C0507601D8123457
Disk: hdisk1 Size: 200.00GB LUN ID: 500507680240C568 WWPN: C0507601D8123456
Disk: hdisk1 Size: 200.00GB LUN ID: 500507680240C568 WWPN: C0507601D8123457


This shows each disk (hdiskX) with its size, LUN ID, and all connected FC adapter WWPNs.