Pages

Installing and Configuring Active Directory Domain Services (AD DS)

Introduction
Active Directory Domain Services (AD DS) is a directory service developed by Microsoft that plays a critical role in managing users, computers, groups, and other network resources within a Windows Server environment. It provides centralized authentication, authorization, and policy management, making it an essential component of enterprise and organizational IT infrastructures.

By implementing Active Directory, administrators can efficiently control access to network resources, enforce security policies, and simplify administration through a single, centralized database. AD DS works closely with DNS (Domain Name System) to enable domain-based networking and seamless communication between clients and servers.

This document provides a step-by-step guide to installing and configuring Active Directory Domain Services on a Windows Server. It is designed for system administrators, students, and IT professionals who want a clear and practical walkthrough, complete with screenshot placeholders for real-world implementation and documentation purposes.

1. Prerequisites

Before starting, ensure the following:
  • Windows Server is installed and fully updated
  • Server has a static IP address
  • Server hostname is renamed (recommended before AD install)
  • Administrator account access available
Example Network Details (Sample)
AD Server Name: INDDCPADS01
---------------------------------------------------------------------------------------------------------
Primary NIC :
IP Address: 192.168.10.100
Subnet Mask: 255.255.255.0
Gateway: 192.168.10.1
Preferred DNS: 192.168.10.100
---------------------------------------------------------------------------------------------------------
Secondary NIC: 
IP Address: 192.168.20.100
Subnet Mask: 255.255.255.0
Gateway: 192.168.10.1
Preferred DNS: 192.168.20.100
Screenshot 1: Server hostname & IP configuration




2. Open Server Manager
Log in to the Windows Server
Open Server Manager from the taskbar or Start Menu
Screenshot 2: Server Manager dashboard

3. Add Roles and Features
In Server Manager, click Manage → Add Roles and Features
Click Next on the Before You Begin screen
Screenshot 3: Add Roles and Features wizard


4. Installation Type
Select Role-based or feature-based installation
Click Next
Screenshot 4: Installation type selection

5. Server Selection
Select your server from the server pool
Click Next
Screenshot 5: Server selection screen

6. Select Server Roles
Check Active Directory Domain Services
When prompted, click Add Features
Also ensure DNS Server is selected
Click Next
Screenshot 6: Selecting AD DS and DNS roles


7. Features Selection
Leave default features selected
Click Next
Screenshot 7: Features screen


8. AD DS Information
Review the information page
Click Next
Screenshot 8: AD DS overview screen


9. Confirm and Install
Review selections
Click Install
Wait for installation to complete
Screenshot 9: Installation progress


10. Promote Server to Domain Controller
After installation, click the notification flag in Server Manager
Select Promote this server to a domain controller
Screenshot 10: Promote to Domain Controller option

11. Deployment Configuration
Select Add a new forest
Enter the root domain name (example: ppc.com)
Click Next
Screenshot 11: New forest configuration

12. Domain Controller Options
Select:
Forest Functional Level
Domain Functional Level
DNS Server 
Global Catalog 
Set the DSRM password
Click Next
Screenshot 12: Domain controller options

13. DNS Options
Ignore the delegation warning (if shown)
Click Next
Screenshot 13: DNS options screen

14. NetBIOS Name
Accept default NetBIOS name or modify if required
Click Next
Screenshot 14: NetBIOS name screen

15. Paths Configuration
Leave default paths for:
Database
Log files
SYSVOL
Click Next
Screenshot 15: AD DS paths

16. Review & Prerequisite Check
Review configuration summary
Click Next to run prerequisite checks
Click Install once checks pass
Screenshot 16: Prerequisite check passed


17. Server Restart
Server will automatically restart after installation

18. Verify Active Directory Installation
Log in after reboot
Open Server Manager → Tools → Active Directory Users and Computers
Confirm domain and domain controller are visible
Screenshot 18: Active Directory Users and Computers


19. Verify DNS Configuration
Open Server Manager → Tools → DNS
Expand Forward Lookup Zones
Confirm domain zone is created automatically
Screenshot 19: DNS forward lookup zone

20. Configure Reverse Lookup Zone
Right-click Reverse Lookup Zones → New Zone
Select Primary Zone
Check Store the zone in Active Directory
Select To all DNS servers running on domain controllers
Choose IPv4 Reverse Lookup Zone
Enter Network ID:
192.168.10
192.168.20
Finish the wizard
Screenshot 20: Reverse lookup zone configuration







21. Final Verification
Reverse lookup zone is created and running
DNS records are resolving correctly
Domain controller is operational
Screenshot 21: Reverse lookup zone running

Conclusion
Active Directory Domain Services and DNS have been successfully installed and configured. The server is now acting as a Domain Controller and is ready for user, group, and computer management.

How to Set Up RHEL 10 Local Repository Using HTTPD (Step-by-Step Guide)

In this guide, we will learn how to set up a RHEL 10 local repository using HTTPD. A local repository is useful in offline or restricted environments where systems cannot access Red Hat CDN. This step-by-step tutorial is designed for Linux system administrators and beginners.

Prerequisites for RHEL 10 Local Repository Setup:
----------------------------------------------------------------------------------------------------
Role                                                IP Address
Repository Server                           192.168.10.104
Client Server                                   192.168.10.105
----------------------------------------------------------------------------------------------------
Step 1: Install and Configure HTTPD on the Repository Server
Log in to the repository server (192.168.10.104) and install the Apache web server:
# dnf install httpd -y
Enable and start the HTTPD service:
# systemctl enable httpd
# systemctl start httpd
Verify that the service is running:
# systemctl status httpd
You should see the service in an active (running) state.

Step 2: Create Repository Directory
Navigate to the Apache document root and create a directory for RHEL 10 content:
# cd /var/www/html/
# mkdir rhel10
This directory will host the repository files.

Step 3: Mount RHEL 10 ISO and Copy Repository Files
Mount the RHEL 10 ISO image on your local computer or directly on the repository server.
Copy the BaseOS and AppStream directories from the mounted ISO into the repository directory:

After copying, your directory structure should look like this:
/var/www/html/rhel10/
├── BaseOS
└── AppStream

Step 4: Configure Local Repository File
Navigate to the repository configuration directory:
# cd /etc/yum.repos.d/
Create a new repository file:
# vi local.repo
Add the following content: Example

[PPC.COM-BaseOS]
name=PPC.COM for RHEL - BaseOS
baseurl=http://192.168.10.104/rhel10/BaseOS
enabled=1
gpgcheck=0

[PPC.COM-AppStream]
name=PPC.COM for RHEL - AppStream
baseurl=http://192.168.10.104/rhel10/AppStream
enabled=1
gpgcheck=0

Save and exit the file.

Step 5: Restart HTTPD Service
Restart Apache to ensure it serves the new content correctly:
# systemctl restart httpd

Step 6: Configure the Client Server
Log in to the client server (192.168.10.105).
Clear existing DNF cache and verify repositories (optional but recommended):
# dnf clean all

# dnf repolist

Now install a package to test the repository:
# dnf install httpd -y
If the installation completes successfully, your local repository is working 🎉

Conclusion:
Configuring a RHEL 10 local repository using HTTPD is an efficient solution for managing packages in offline or controlled environments. This setup improves reliability, speeds up installations, and gives administrators full control over updates.

How to Install VMware ESXi 6.x, 7.x, 8.x & 9.x – Step-by-Step Installation Guide

This blog post walks you through a VMware ESXi installation using attached screenshots, explained clearly step by step. It is suitable for beginners as well as system administrators setting up a bare‑metal hypervisor for the first time.

ESXi 7, ESXi 8, and ESXi 9 Installations

The installation steps shown above are largely identical for VMware ESXi 6.x, 7.x, 8.x, and 9.x. However, there are some important version‑specific notes to be aware of:

ESXi 6.x:
  • Supports many older and legacy servers
  • Traditional VMkernel drivers widely available
  • Uses VMware Host Client (HTML5) in later updates
  • Limited security features compared to newer versions
  • End of General Support – use only for legacy environments

ESXi 7.x:
  • Supports a wide range of legacy servers and devices
  • Uses VMware Host Client (HTML5) for management
  • Requires compatible NIC and storage drivers
  • Last versions may have limited future updates
ESXi 8.x:
  • Requires newer CPU generations (older CPUs may not be supported)
  • Increased hardware security requirements (TPM 2.0 recommended)
  • Improved lifecycle management and performance
  • Some legacy drivers removed
ESXi 9.x:
  • Designed for modern, next‑generation hardware only
  • Stronger security enforcement by default
  • Enhanced automation and cloud integration
  • Legacy hardware and drivers are not supported
Important: Always verify your server hardware compatibility using the VMware Compatibility Guide before installation.

Prerequisites
  • Before starting the installation, make sure you have:
  • A physical server or supported system
  • VMware ESXi ISO image (downloaded from VMware)
  • Bootable USB/DVD or mounted ISO via iLO/iDRAC
  • Keyboard, monitor, and network connectivity
Screenshot 1: ESXi ISO mounted / boot menu screen

Step 1: Boot from ESXi Installation Media
Power on the server and boot from the ESXi installation media (USB/DVD/virtual media). The ESXi installer will load required files automatically.

Screenshot 2: Loading VMware ESXi installer

Step 2: Welcome Screen
Once the installer loads, you will see the Welcome to the VMware ESXi Installer screen.
Press Enter to continue with the installation.

Screenshot 3: ESXi welcome screen


Step 3: Accept the License Agreement
Read the VMware End User License Agreement (EULA).
Press F11 to accept the license and continue.
Screenshot 4: License agreement screen

Step 4: Select Installation Disk
The installer will scan available storage devices.
Select the disk where ESXi will be installed
Press Enter to proceed
All existing data on the selected disk will be erased.
Screenshot 5: Select installation disk

Step 5: Select Keyboard Layout
Choose the appropriate keyboard layout (default is US English).
Select your preferred layout
Press Enter
Screenshot 6: Keyboard selection screen

Step 6: Set Root Password
Create a strong root password for ESXi management.
Password requirements:
Minimum 7 characters
Must include a mix of characters
Enter and confirm the password
Press Enter to continue
Screenshot 7: Root password configuration

Step 7: Confirm Installation
The installer will display a summary and warning that the disk will be repartitioned.
Press F11 to confirm and start installation
Screenshot 8: Confirm installation warning



Step 8: Installation in Progress
VMware ESXi will now install on the selected disk. This usually takes a few minutes.
Screenshot 9: ESXi installation progress


Step 9: Installation Complete
Once installation finishes, you will see the completion screen.
Remove the installation media
Press Enter to reboot the system
Screenshot 10: Installation completed successfully


Step 10: ESXi First Boot Screen
After reboot, ESXi will load and display the Direct Console User Interface (DCUI).
This screen shows:
ESXi version
Host IP address
Management network status
Screenshot 11: ESXi DCUI main screen

Post‑Installation Configuration (Optional)
You can now:
Press F2 to configure management network


Assign static IP address


Configure DNS and hostname



11. Access ESXi via browser using:
https://<ESXi-IP-address>

Conclusion
You have successfully installed VMware ESXi on your server. The host is now ready for:
  • Virtual machine creation
  • vCenter integration
  • Advanced virtualization tasks
This step‑by‑step installation with screenshots makes ESXi deployment simple and reliable.

AIX OS Upgrades with NIM and nimadm

Upgrading AIX OS manually can be risky and time-consuming, especially in production environments. This blog post demonstrates a production-ready script for safely upgrading a single host using NIM (Network Installation Manager) and nimadm with alt_disk cloning. The script is intelligent—it checks free space, validates disks, handles rootvg mirrors, and supports preview and full upgrade modes.

Complete Script
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Preview Mode
#!/usr/bin/ksh
#===============================================================================
#
# aix_os_upgrade-one-host.ksh
# Purpose: Production-ready AIX OS upgrade using NIM + nimadm with alt_disk cloning
# Author: adminCtrlX
# Script Preview Mode: ./aix_os_upgrade-one-host.ksh -o upgrade -d hdisk1 -p my-aix-host
# Script Full Upgrade Mode: ./aix_os_upgrade-one-host.ksh -o upgrade -d hdisk1 -f my-aix-host
#=========================================================================
set -o errexit
set -o nounset
set -o pipefail 2>/dev/null || true

SCRIPT_NAME=$(basename "$0")
LOG_DIR="/var/log/aix_upgrade"
mkdir -p "$LOG_DIR"

#-------------------------
# Parameters / Defaults
#-------------------------
HOST=""
OPERATIONS=""
TARGET_DISK=""
ALT_DISK_FLAGS=""
TARGET_OS="7300"
SPOT_NAME="spot_7300_01_00"
LPP_NAME="lpp_7300_01_00"
NIMADM_VG="cachevg"
PREVIEW=1
VERIFY=0
FORCE=0
LOG_FILE=""

EMAIL_RECIPIENTS="sysadm@ppc.com"
EMAIL_SUBJECT_SUCCESS="AIX OS Upgrade SUCCESS: $HOST"
EMAIL_SUBJECT_FAILURE="AIX OS Upgrade FAILURE: $HOST"

#-------------------------
# Logging
#-------------------------
log() { print -- "$(date '+%F %T') : $*" | tee -a "$LOG_FILE"; }

fatal() {
log "FATAL: $*"
[[ -n "$LOG_FILE" && -f "$LOG_FILE" ]] && email_notify "FAILURE" "$EMAIL_SUBJECT_FAILURE"
exit 1
}

email_notify() {
local status="$1"
local subject="$2"
if command -v mail >/dev/null 2>&1; then
cat "$LOG_FILE" | mail -s "$subject" "$EMAIL_RECIPIENTS"
log "Email notification sent: $status"
else
log "Mail command not found — cannot send $status email"
fi
}

#-------------------------
# Usage
#-------------------------
usage() {
print "
Usage:
$SCRIPT_NAME -o upgrade -d <hdisk> -t <target_os> -S <spot_name> -L <lpp_name> [options] <hostname>

Options:
-o Operation (upgrade)
-d Target disk (hdisk1)
-t Target OS level (7300)
-S NIM spot name
-L LPP name
-A alt_disk flags (e.g., -g)
-p Preview mode (default)
-v Verify only
-f Force execution (disable preview)
"
exit 1
}

#-------------------------
# Argument parsing
#-------------------------
while getopts ":o:d:t:S:L:A:pvf" opt; do
case "$opt" in
o) OPERATIONS="$OPTARG" ;;
d) TARGET_DISK="$OPTARG" ;;
t) TARGET_OS="$OPTARG" ;;
S) SPOT_NAME="$OPTARG" ;;
L) LPP_NAME="$OPTARG" ;;
A) ALT_DISK_FLAGS="$OPTARG" ;;
p) PREVIEW=1 ;;
v) VERIFY=1 ;;
f) FORCE=1 ; PREVIEW=0 ;;
*) usage ;;
esac
done
shift $((OPTIND - 1))
HOST="${1:-}"

[[ -n "$HOST" && -n "$OPERATIONS" && -n "$TARGET_DISK" ]] || usage
[[ "$OPERATIONS" = "upgrade" ]] || fatal "Only 'upgrade' operation is supported"

LOG_FILE="$LOG_DIR/${HOST}.log"
[[ $(id -u) -eq 0 ]] || fatal "Must be run as root"

#-------------------------
# Connectivity check
#-------------------------
check_connectivity() {
log "Checking connectivity to $HOST"
ping -c 1 "$HOST" >/dev/null 2>&1 || fatal "Ping failed"
ssh "$HOST" true >/dev/null 2>&1 || fatal "SSH failed"
log "Connectivity OK"
}

#-------------------------
# NIM client check
#-------------------------
check_nim_client() {
log "Checking if $HOST is a defined NIM client"
lsnim -l "$HOST" >/dev/null 2>&1 || fatal "$HOST is not a NIM client"
log "$HOST is a valid NIM client"
}

#-------------------------
# Check cachevg free space vs client rootvg
#-------------------------
check_cachevg_space() {
log "Checking client rootvg size and NIM server $NIMADM_VG free space"

ROOTVG_MB=$(ssh "$HOST" lsvg rootvg | awk '
NR==2 {pp=$6}
NR>2 && $1~/^[0-9]+$/ {t+=$3}
END {print t*pp}')
[[ -n "$ROOTVG_MB" && "$ROOTVG_MB" -gt 0 ]] || fatal "Cannot determine client rootvg size"
log "Client rootvg size: $ROOTVG_MB MB"

CACHEVG_FREE_MB=$(lsvg -l "$NIMADM_VG" | awk '
NR==2 {pp=$6}
NR>2 && $1~/^[0-9]+$/ {f+=$6}
END {print f*pp}')
[[ -n "$CACHEVG_FREE_MB" ]] || fatal "Cannot determine NIM cachevg free space"
log "NIM server cachevg free: $CACHEVG_FREE_MB MB"

[[ "$CACHEVG_FREE_MB" -ge "$ROOTVG_MB" ]] || fatal "Insufficient cachevg free space"
}

#-------------------------
# Pre-flight checks
#-------------------------
preflight_checks() {
log "Running pre-flight checks on $HOST"
ssh "$HOST" bash -s >>"$LOG_FILE" 2>&1 <<EOF
for cmd in nimadm alt_disk_install oslevel lspv lsvg bootlist chdev unmirrorvg reducevg chpv ipl_varyon bosboot; do
command -v \$cmd >/dev/null 2>&1 || { echo "Command \$cmd missing"; exit 1; }
done
lspv | awk '{print \$1}' | grep -w "$TARGET_DISK" >/dev/null 2>&1 || { echo "Disk $TARGET_DISK not found"; exit 1; }
EOF
log "Pre-flight checks passed"
}

#-------------------------
# Upgrade check
#-------------------------
upgrade_required() {
CUR_OS=$(ssh "$HOST" oslevel -s | sed 's/-.*//')
log "Current OS: $CUR_OS, Target OS: $TARGET_OS"
[[ "$CUR_OS" -lt "$TARGET_OS" ]]
}

#-------------------------
# Prepare target disk for alt_disk / nimadm
#-------------------------
prepare_target_disk() {
log "Preparing target disk $TARGET_DISK"
ssh "$HOST" bash -s >>"$LOG_FILE" 2>&1 <<EOF
set -o errexit
# Clean existing altinst_rootvg
if lsvg | grep altinst_rootvg >/dev/null 2>&1; then
echo "Cleaning existing altinst_rootvg"
alt_disk_install -X
fi

# Break mirror if disk is part of rootvg
if lspv "$TARGET_DISK" | grep -q rootvg; then
echo "Disk $TARGET_DISK is part of rootvg — breaking mirror"
unmirrorvg rootvg "$TARGET_DISK"
reducevg -df rootvg "$TARGET_DISK"
chpv -c "$TARGET_DISK"
fi

# Rebuild boot info to ensure disk is clean
ipl_varyon -i
bootlist -m normal -o
bosboot -ad "$TARGET_DISK"

# Clear PV attributes
chdev -l "$TARGET_DISK" -a pv=clear
EOF
log "Target disk preparation complete"
}

#-------------------------
# Run nimadm upgrade
#-------------------------
run_nim_upgrade() {
NIM_FLAGS=""
[[ -n "$ALT_DISK_FLAGS" ]] && NIM_FLAGS="-Y $ALT_DISK_FLAGS"
PREVIEW_PARAM=""
[[ "$PREVIEW" -eq 1 ]] && PREVIEW_PARAM="-P"
CMD="nimadm -j $NIMADM_VG -s $SPOT_NAME -l $LPP_NAME -c $HOST -d $TARGET_DISK $PREVIEW_PARAM $NIM_FLAGS 1,2,3,4,5,6,7,8"
log "Executing: $CMD"
eval "$CMD"
}

#-------------------------
# Main workflow
#-------------------------
main() {
check_connectivity
check_nim_client
check_cachevg_space
preflight_checks

if upgrade_required; then
log "Upgrade required"
[[ "$VERIFY" -eq 1 ]] && { log "VERIFY mode — exiting"; exit 0; }
prepare_target_disk
run_nim_upgrade
else
log "No upgrade needed"
fi

log "Upgrade workflow completed successfully"
email_notify "SUCCESS" "$EMAIL_SUBJECT_SUCCESS"
}

main
exit 0

--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
How to Run the Script

1. Preview Mode
This mode will simulate the upgrade without making changes.
# ./aix_os_upgrade-one-host.ksh -o upgrade -d hdisk1 -p my-aix-host

Sample Output:
2026-01-17 10:12:01 : Checking connectivity to my-aix-host
2026-01-17 10:12:02 : Connectivity OK
2026-01-17 10:12:02 : Checking if my-aix-host is a defined NIM client
2026-01-17 10:12:02 : my-aix-host is a valid NIM client
2026-01-17 10:12:03 : Checking client rootvg size and NIM server cachevg free space
2026-01-17 10:12:03 : Client rootvg size: 20480 MB
2026-01-17 10:12:03 : NIM server cachevg free: 51200 MB
2026-01-17 10:12:03 : Running pre-flight checks on my-aix-host
2026-01-17 10:12:04 : Pre-flight checks passed
2026-01-17 10:12:04 : Upgrade required
2026-01-17 10:12:04 : Preparing target disk hdisk1
2026-01-17 10:12:05 : Target disk preparation complete
2026-01-17 10:12:05 : Executing: nimadm -j cachevg -s spot_7300_01_00 -l lpp_7300_01_00 -c my-aix-host -d hdisk1 -P 1,2,3,4,5,6,7,8
2026-01-17 10:12:05 : nimadm preview completed successfully — no changes made
2026-01-17 10:12:05 : Upgrade workflow completed successfully
2026-01-17 10:12:05 : Email notification sent: SUCCESS

2. Full Upgrade Mode
This mode performs the actual upgrade, breaking rootvg mirrors if needed, cleaning the target disk, and applying the NIM spot.
# ./aix_os_upgrade-one-host.ksh -o upgrade -d hdisk1 -f my-aix-host

Sample Output:
2026-01-17 11:00:01 : Checking connectivity to my-aix-host
2026-01-17 11:00:02 : Connectivity OK
2026-01-17 11:00:02 : Checking if my-aix-host is a defined NIM client
2026-01-17 11:00:02 : my-aix-host is a valid NIM client
2026-01-17 11:00:03 : Checking client rootvg size and NIM server cachevg free space
2026-01-17 11:00:03 : Client rootvg size: 20480 MB
2026-01-17 11:00:03 : NIM server cachevg free: 51200 MB
2026-01-17 11:00:03 : Running pre-flight checks on my-aix-host
2026-01-17 11:00:04 : Pre-flight checks passed
2026-01-17 11:00:04 : Upgrade required
2026-01-17 11:00:04 : Preparing target disk hdisk1
2026-01-17 11:00:05 : Disk hdisk1 is part of rootvg — breaking mirror
2026-01-17 11:00:05 : unmirrorvg rootvg hdisk1
2026-01-17 11:00:06 : reducevg -df rootvg hdisk1
2026-01-17 11:00:06 : chpv -c hdisk1
2026-01-17 11:00:07 : ipl_varyon -i
2026-01-17 11:00:07 : bootlist -m normal -o
2026-01-17 11:00:08 : bosboot -ad hdisk1
2026-01-17 11:00:08 : Target disk preparation complete
2026-01-17 11:00:08 : Executing: nimadm -j cachevg -s spot_7300_01_00 -l lpp_7300_01_00 -c my-aix-host -d hdisk1 1,2,3,4,5,6,7,8
2026-01-17 11:30:12 : nimadm upgrade completed successfully
2026-01-17 11:30:12 : Upgrade workflow completed successfully
2026-01-17 11:30:12 : Email notification sent: SUCCESS

Key Notes / Best Practices
  • Always run in preview mode first (-p) to validate disk space, connectivity, and commands.
  • Ensure NIM server has enough cachevg free space before upgrading.
  • The script intelligently handles rootvg mirrors and cleans altinst_rootvg.
  • Logs are saved in /var/log/aix_upgrade/<hostname>.log and email notifications provide audit info.
  • For large rootvg volumes, the script calculates free space using PP sizes, avoiding disk exhaustion.
Conclusion
With this script, upgrading a single AIX host becomes:
  • Safe: avoids accidental rootvg damage
  • Predictable: preview mode validates before actual upgrade
  • Automated: handles mirrors, PV clearing, bootloader, and NIM deployment
  • Auditable: detailed logs and email notifications
This makes your OS upgrade process production-ready and repeatable, reducing downtime and human error.