
MQ GPFS Cluster LUN Setup Using iSCSI and LVM

Enterprise Design for Shared Storage–Based MQ Multi-Instance Clusters

High availability (HA) for IBM MQ multi-instance queue managers depends on shared, consistent, low-latency storage. While SAN or enterprise NAS is common, iSCSI-backed LUNs combined with LVM and IBM Spectrum Scale (GPFS) provide a flexible, software-defined alternative that is production-proven when designed correctly.

This guide walks through a full-stack HA storage architecture:
  • LVM-backed iSCSI LUNs
  • Secure export using targetcli
  • Multipath iSCSI initiators
  • GPFS clustered filesystems
  • IBM MQ HA compatibility and failover guarantees
Architecture Overview
                 ┌──────────────────────┐
                 │  iSCSI Target Server │
                 │  RHEL / CentOS       │
                 │                      │
                 │  LVM (datavg)        │
                 │  ├─ mqsdisk01 (LUN)  │
                 │  └─ mqsdisk02 (LUN)  │
                 │                      │
                 │  targetcli / LIO     │
                 └──────────┬───────────┘
                            │ iSCSI (TCP 3260)
           ┌────────────────┴────────────────┐
           │                                 │
┌────────────────────┐        ┌────────────────────┐
│ MQ Node 1          │        │ MQ Node 2          │
│ indrxlmqs01        │        │ indrxlmqs02        │
│                    │        │                    │
│ iSCSI Initiator    │        │ iSCSI Initiator    │
│ Multipath          │        │ Multipath          │
│ GPFS Node          │        │ GPFS Node          │
│ MQ Instance        │        │ MQ Instance        │
└────────────────────┘        └────────────────────┘

Why GPFS for IBM MQ?
IBM MQ multi-instance queue managers require:
  • Shared filesystem
  • POSIX locking
  • Fast failover
  • Guaranteed write ordering
Why GPFS (Spectrum Scale)?
Feature                        Benefit
Distributed lock manager     → Safe concurrent access
Quorum + tiebreaker          → Split-brain prevention
High-performance journaling  → MQ log safety
Fast mount failover          → <30s recovery
Certified by IBM             → Supported configuration

GPFS is explicitly supported for MQ multi-instance HA. ext4 and XFS are single-node filesystems and cannot be safely shared between nodes.

Environment Specifications
Component             Value
iSCSI Target Server → RHEL/CentOS, IP: 192.168.20.20
Volume Group        → datavg (/dev/sdb)
Logical Volumes     → mqsdisk01 (5G), mqsdisk02 (5G)
iSCSI Target IQN    → iqn.2025-08.ppc.com:mqsservers
Initiator Nodes     → indrxlmqs01, indrxlmqs02
OS                  → RHEL 8/9 or CentOS 8 Stream
GPFS Version        → IBM Spectrum Scale 5.1+

Why LVM Under iSCSI?
  • Online resize (future MQ growth)
  • Snapshot capability
  • Alignment control
  • Striping across RAID

GPFS Optimal Alignment
  • PE size: 64KB
  • Stripe size: 64KB
  • Avoid 4MB defaults
Misalignment = silent performance loss.

Step 1: Prepare Storage on iSCSI Target
Disk Partitioning
# parted /dev/sdb mklabel gpt
# parted /dev/sdb mkpart primary 1MiB 100%

Physical Volume & VG Creation
# pvcreate --dataalignment 1m /dev/sdb1
# vgcreate -s 64K datavg /dev/sdb1

Create GPFS-Optimized Logical Volumes
# lvcreate -L 5G -n mqsdisk01 datavg
# lvcreate -L 5G -n mqsdisk02 datavg

Note: striping options such as -i 4 -I 64K require at least as many PVs in the VG as stripes; with the single /dev/sdb1 PV created above, linear LVs are the only valid layout. Add the striping flags only if the VG spans four or more PVs.

Why striping?
  • GPFS issues parallel I/O
  • MQ log writes benefit from stripe width
  • The underlying RAID must support it
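If striping is used, the resulting layout can be verified with lvs. A small sketch, assuming the datavg VG from this guide; the check_stripe helper and the 64K expectation are illustrative, not part of LVM:

```shell
# Verify LV stripe geometry against the 64K target (sketch).
expected="64.00k"
check_stripe() {
  # Compare a reported stripe size against the expected value.
  if [ "$1" = "$expected" ]; then echo "aligned"; else echo "MISALIGNED: $1"; fi
}
# lvs reports stripe count and size per LV (requires LVM and the datavg VG):
if command -v lvs >/dev/null 2>&1; then
  lvs --noheadings -o lv_name,stripes,stripe_size --units k datavg 2>/dev/null |
  while read -r name stripes size; do
    printf '%s: ' "$name"; check_stripe "$size"
  done
fi
```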

Step 2: iSCSI Target Configuration
Create Block Backstores
/backstores/block create MQS_LUN01 /dev/datavg/mqsdisk01
/backstores/block create MQS_LUN02 /dev/datavg/mqsdisk02

Create Target
/iscsi create iqn.2025-08.ppc.com:mqsservers

Map LUNs
/iscsi/iqn.2025-08.ppc.com:mqsservers/tpg1/luns create /backstores/block/MQS_LUN01
/iscsi/iqn.2025-08.ppc.com:mqsservers/tpg1/luns create /backstores/block/MQS_LUN02

ACL Configuration
Create one ACL per initiator IQN under the target's TPG:
/iscsi/iqn.2025-08.ppc.com:mqsservers/tpg1/acls create iqn.2025-08.ppc.com:indrxlmqs01
/iscsi/iqn.2025-08.ppc.com:mqsservers/tpg1/acls create iqn.2025-08.ppc.com:indrxlmqs02

Security Hardening
Enable CHAP (run inside tpg1, then set credentials on each ACL):
set attribute authentication=1
set auth userid=mqchap password=StrongSecret!

Never run the target in demo mode (generate_node_acls=1) in production.
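targetcli changes live only in configfs until saved. A sketch, using standard targetcli/systemd behavior, to persist the configuration across reboots:

```shell
# Persist the LIO configuration; saveconfig writes
# /etc/target/saveconfig.json, which target.service restores at boot.
persist_target_config() {
  targetcli saveconfig
  systemctl enable --now target
}
```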

Step 3: iSCSI Initiators (MQ Nodes)
Install Required Packages
# dnf install iscsi-initiator-utils device-mapper-multipath -y

Configure IQN (Unique per Node)
# echo "InitiatorName=iqn.2025-08.ppc.com:indrxlmqs01" \
> /etc/iscsi/initiatorname.iscsi

Persistent Discovery
# iscsiadm -m discoverydb -t sendtargets \
  -p 192.168.20.20 --discover

Login
# iscsiadm -m node --login
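After login, it is worth confirming that the session is established and the exported LUNs actually arrived. A sketch; device names vary per host, and the function name is illustrative:

```shell
# Post-login check: session details plus the new SCSI disks.
verify_iscsi_login() {
  iscsiadm -m session -P 3 | grep -E 'Target:|Attached scsi disk'
  lsblk --scsi
}
```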

Multipath Configuration (Strongly Recommended)
/etc/multipath.conf
defaults {
  user_friendly_names yes
  path_grouping_policy multibus
  rr_min_io 100
  rr_weight uniform
  failback immediate
}

# systemctl enable --now multipathd
# multipath -ll

GPFS must use multipath devices.

Step 4: GPFS Cluster Setup
Create Cluster
# mmcrcluster -C mqsgpfscluster \
  -N indrxlmqs01:quorum-manager,indrxlmqs02:quorum-manager \
  -r /usr/bin/ssh -R /usr/bin/scp

Add Tiebreaker Disk (Required for 2-Node)
GPFS disks must first be registered as NSDs from a stanza file (example path shown):
# mmcrnsd -F /tmp/tiebreaker.stanza

Create GPFS Filesystems
mmcrfs takes an NSD stanza file via -F (not a raw device path):
# mmcrfs gpfsdata -F /tmp/mqdata.stanza \
  -A yes -Q no -T /ibm/mqdata
# mmcrfs gpfslogs -F /tmp/mqlogs.stanza \
  -A yes -Q no -T /ibm/mqlogs
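GPFS defines disks through NSD stanza files. A minimal example, assuming the multipath device names used in this guide; the NSD name mqdata01 is illustrative:

```
# /tmp/mqdata.stanza — example NSD stanza (Spectrum Scale stanza format)
%nsd: device=/dev/mapper/mpathb
      nsd=mqdata01
      servers=indrxlmqs01,indrxlmqs02
      usage=dataAndMetadata
```

Register it with mmcrnsd -F /tmp/mqdata.stanza before creating the filesystem.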
Mount
# mmmount all -a
Quorum & Fencing Logic
# mmchconfig tiebreakerDisks=tbnsd01   (use the NSD name, not a /dev/mapper path)
# mmchnode --quorum -N indrxlmqs01,indrxlmqs02

Component       Vote
Node 1          1
Node 2          1
Data disk       1
Tiebreaker      1

Prevents split-brain if one node loses storage or network.

IBM MQ Integration Notes
Recommended Layout
MQ Component    GPFS FS
QM data       → gpfsdata
Logs          → gpfslogs
Error logs    → gpfsdata
Trace         → gpfsdata

MQ HA Behavior
  • Only one instance active
  • Standby monitors lock files
  • GPFS ensures fast lock transfer
  • Typical failover: 15–30s
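The failover behavior above should be exercised with a controlled drill before go-live. A sketch using standard MQ multi-instance commands; QM1 is a placeholder queue manager name:

```shell
# Controlled failover drill for a multi-instance queue manager (sketch).
failover_drill() {
  # On each node: the first strmqm -x becomes active, the second standby.
  strmqm -x QM1
  # Show instance roles; look for "Active" and "Standby".
  dspmq -x -m QM1
  # On the active node: end immediately (-i) and switch to standby (-s).
  endmqm -is QM1
  # On the surviving node, confirm it became active:
  dspmq -x -m QM1
}
```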
Performance Tuning
iSCSI
# echo 128 > /sys/block/sdX/queue/nr_requests
GPFS
# mmchconfig pagepool=8G
# mmchconfig maxFilesToCache=500000
pagepool takes an absolute size, not a percentage; size it generously but leave enough RAM for MQ itself.
MQ
  • Use linear logging
  • Separate data and logs
  • Avoid filesystem compression
Failure Scenarios & Behavior
Failure                   Outcome
MQ active node crash    → Standby takes over
iSCSI path loss         → Multipath reroutes
Storage server reboot   → GPFS retries I/O
Network partition       → Quorum prevents split-brain

Validation Checklist
  • LUNs visible on both nodes
  • Multipath active
  • GPFS quorum healthy
  • MQ starts on either node
  • Forced failover succeeds
  • No split-brain warnings
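The checklist can be scripted so it is run the same way every time. A sketch in which check() is a small illustrative helper; the underlying commands are the ones used throughout this guide:

```shell
# Run each validation and report PASS/FAIL without aborting the script.
check() {
  if "$@" >/dev/null 2>&1; then echo "PASS: $*"; else echo "FAIL: $*"; fi
}
validate_cluster() {
  check iscsiadm -m session      # LUNs reachable
  check multipath -ll            # multipath active
  check mmgetstate -a            # GPFS quorum healthy
  check dspmq -x                 # MQ instances visible
}
```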
Operational Commands Cheat Sheet
# iscsiadm -m session -P 3
# multipath -ll
# mmgetstate -a
# mmlscluster
# strmqcsv QM1
# dspmq -x

Final Thoughts
This architecture delivers:
  • Enterprise-grade HA
  • SAN-like behavior with software-defined flexibility
  • IBM-supported MQ configuration
  • Predictable failover
The key is discipline:
  • Quorum
  • Fencing
  • Multipath
  • Testing
Done right, this design rivals expensive SAN solutions at a fraction of the cost.

RHEL 7, 8, 9, 10 – Storage Issues

Storage issues in Red Hat Enterprise Linux (RHEL) are among the most critical problems administrators face. They can cause boot failures, application downtime, data corruption, or performance degradation.

This guide provides a structured troubleshooting approach that works consistently across RHEL 7 through RHEL 10.

1. Identify the Storage Problem
Start by understanding what type of storage issue you are facing.
Symptom                    Likely Cause
Filesystem full          → Disk usage or log growth
Mount fails at boot      → /etc/fstab error
Disk not detected        → Hardware or driver issue
LVM volumes missing      → VG/LV not activated
Read-only filesystem     → Filesystem corruption
Slow I/O                 → Disk or SAN performance
iSCSI/NFS not mounting   → Network or auth issue

2. Check Disk Detection and Hardware Status
List Block Devices

# lsblk
Check Disk Details
# blkid
# fdisk -l
Check Kernel Disk Messages
# dmesg | grep -i sd
If disks are missing, verify:
  • SAN mapping
  • VM disk attachment
  • Hardware health
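If a newly mapped disk is still missing, the SCSI bus can be rescanned without a reboot. A sketch using the standard sysfs rescan interface (requires root); the function name is illustrative:

```shell
# Ask every SCSI host adapter to rescan (the "- - -" wildcards mean
# all channels, targets, and LUNs), then list what the kernel now sees.
rescan_scsi_bus() {
  for host in /sys/class/scsi_host/host*; do
    [ -e "$host/scan" ] && echo "- - -" > "$host/scan"
  done
  lsblk
}
```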
3. Filesystem Full Issues
Check Disk Usage

# df -h
Find Large Files
# du -sh /* 2>/dev/null
Clear Logs Safely
# journalctl --vacuum-time=7d

4. Read-Only Filesystem Issues
This usually indicates filesystem corruption.
Verify Mount Status
# mount | grep ro
Remount (Temporary)
# mount -o remount,rw /
Permanent Fix
Boot into rescue mode
Run:
# fsck -y /dev/mapper/rhel-root
Never run fsck on mounted filesystems.

5. Fix /etc/fstab Mount Failures
Incorrect entries cause boot into emergency mode.
Check fstab
# vi /etc/fstab
Verify UUIDs
# blkid
Test fstab
# mount -a
Comment out invalid entries if necessary.

6. LVM Issues (Most Common in RHEL)
Check LVM Status

# pvs
# vgs
# lvs
Activate Volume Groups
# vgchange -ay
Scan for Missing Volumes
# pvscan
# vgscan
# lvscan

7. Extend LVM Filesystem (Low Space)
Extend Logical Volume

# lvextend -L +10G /dev/rhel/root
Resize Filesystem
# xfs_growfs /                 (XFS)
# resize2fs /dev/rhel/root     (ext4)
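The two steps can be combined: lvextend's -r (--resizefs) flag calls fsadm to grow the filesystem (XFS or ext4) in the same operation. A sketch using the example LV path from this section:

```shell
# Grow LV and filesystem together; -r resizes the FS automatically.
grow_root_lv() {
  lvextend -r -L +10G /dev/rhel/root
}
```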

8. Recover Missing or Corrupt LVM
Rebuild LVM Metadata

# vgcfgrestore vg_name
List backups:
# ls /etc/lvm/archive/

9. Boot Fails Due to Storage Issues
Check initramfs

# lsinitrd
Rebuild initramfs
# dracut -f
Verify Root Device
# blkid

10. NFS Storage Issues
Check Mount Status

# mount | grep nfs
Test Connectivity
# showmount -e server_ip
Restart Services
# systemctl restart nfs-client.target

11. iSCSI Storage Issues
Check iSCSI Sessions

# iscsiadm -m session
Discover Targets
# iscsiadm -m discovery -t sendtargets -p target_ip
Login to Target
# iscsiadm -m node -l

12. Multipath Issues (SAN Storage)
Check Multipath Status
# multipath -ll
Restart Multipath
# systemctl restart multipathd

13. Storage Performance Issues
Check Disk I/O

# iostat -xm 5
Identify Slow Processes
# iotop

14. SELinux Storage-Related Issues
SELinux may block access to mounted volumes.
Check Denials
# ausearch -m avc -ts recent
Fix Context
# restorecon -Rv /mount_point

15. Backup and Data Safety (Before Fixes)
Always verify backups before major storage changes.

# rsync -av /data /backup

16. Best Practices to Prevent Storage Issues
  • Monitor disk usage proactively
  • Validate /etc/fstab changes
  • Use LVM snapshots
  • Keep rescue media available
  • Monitor SAN/NAS health
  • Perform regular filesystem checks
Conclusion
Storage troubleshooting in RHEL 7, 8, 9, and 10 follows consistent principles:
  • Verify hardware and detection
  • Fix filesystem and LVM issues
  • Validate mounts and network storage
  • Monitor performance and prevent recurrence
Using this step-by-step approach ensures data integrity, stability, and minimal downtime in enterprise Linux environments.

RHEL 7, 8, 9, 10 – Network Issues

Network issues in Red Hat Enterprise Linux (RHEL) can cause service outages, application failures, storage disconnects, and cluster instability.

This guide provides a systematic, version-aware approach to diagnosing and fixing network problems across RHEL 7 through RHEL 10.

1. Identify the Network Problem
Start by identifying what exactly is failing.
Symptom                        Possible Cause
No network connectivity      → Interface down, cable, driver
Cannot reach gateway         → Routing issue
DNS not resolving            → DNS configuration
Network slow                 → Duplex / MTU / congestion
Interface missing            → Driver or udev issue
Network fails after reboot   → NetworkManager config
Services unreachable         → Firewall or SELinux

2. Check Network Interface Status
List Interfaces
# ip link show
Check Interface IP
# ip addr show
Bring Interface Up
# ip link set eth0 up

3. Verify Network Services (RHEL Differences)
RHEL Version      Network Service
RHEL 7          → NetworkManager / network
RHEL 8+         → NetworkManager only
Check NetworkManager
# systemctl status NetworkManager
Restart if needed:
# systemctl restart NetworkManager

4. Test Basic Connectivity
Test Loopback
# ping 127.0.0.1
Test Gateway
# ping <gateway-ip>
Test External IP
# ping 8.8.8.8
If IP works but hostname fails → DNS issue.

5. Check Routing Table
# ip route show
Ensure a default route exists:
default via <gateway> dev eth0
Add route (temporary):
# ip route add default via <gateway>
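A route added with ip route is lost at reboot. To make the default gateway persistent, set it on the NetworkManager connection instead. A sketch; <conn-name> and the gateway are placeholders as elsewhere in this guide, and the function name is illustrative:

```shell
# Persist the default gateway via NetworkManager's ipv4.gateway property.
set_persistent_gateway() {
  conn="$1"; gw="$2"
  nmcli connection modify "$conn" ipv4.gateway "$gw"
  nmcli connection up "$conn"   # re-activate to apply
}
# Usage: set_persistent_gateway "<conn-name>" 192.168.1.1
```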

6. DNS Troubleshooting
Check DNS Configuration

# cat /etc/resolv.conf
Test DNS Resolution
# nslookup google.com
# dig google.com
NetworkManager DNS
# nmcli dev show | grep DNS

7. NetworkManager (nmcli) Troubleshooting
Show Connections

# nmcli connection show
Check Active Connection
# nmcli device status
Restart Connection
# nmcli connection down <conn-name>
# nmcli connection up <conn-name>

8. Fix Network Issues After Reboot
Check auto-connect:

# nmcli connection show <conn-name> | grep autoconnect
Enable:
# nmcli connection modify <conn-name> connection.autoconnect yes

9. Firewall Issues (firewalld)
Check Firewall Status

# firewall-cmd --state
List Rules
# firewall-cmd --list-all
Allow Service or Port
# firewall-cmd --add-service=ssh --permanent
# firewall-cmd --add-port=8080/tcp --permanent
# firewall-cmd --reload

10. SELinux Network-Related Issues
SELinux can block network connections.
Check SELinux Status
# getenforce
Identify Denials
# ausearch -m avc -ts recent
Enable Required Boolean
# setsebool -P httpd_can_network_connect on

11. Interface Missing or Renamed
List NICs

# lspci | grep -i ethernet
Check Drivers
# lsmod | grep <driver>
Predictable Interface Names
# ip link
Example: ens192 instead of eth0

12. MTU and Performance Issues
Check MTU

# ip link show eth0
Set MTU (Temporary)
# ip link set dev eth0 mtu 9000
Make Permanent
# nmcli connection modify <conn-name> 802-3-ethernet.mtu 9000

13. Bonding / Teaming Issues
Check Bond Status

# cat /proc/net/bonding/bond0
Restart Bond
# nmcli connection down bond0
# nmcli connection up bond0

14. Network Logs and Debugging
Kernel Messages
# dmesg | grep -i network
NetworkManager Logs
# journalctl -u NetworkManager

15. Network Storage Impact (NFS / iSCSI)
Network failures may affect storage mounts.
# showmount -e server_ip
# iscsiadm -m session

16. Best Practices to Prevent Network Issues
  • Use NetworkManager consistently
  • Validate firewall rules
  • Document static IP settings
  • Monitor network latency
  • Test changes before reboot
  • Keep NIC drivers updated
Conclusion
Network troubleshooting in RHEL 7, 8, 9, and 10 follows the same fundamentals:
  • Verify interfaces and IPs
  • Check routing and DNS
  • Validate NetworkManager
  • Review firewall and SELinux
Using this step-by-step approach ensures quick resolution and stable connectivity in enterprise Linux environments.

Splunk Server and Forwarder Installation

In any enterprise environment, log collection and analysis are critical for security monitoring, performance troubleshooting, and threat detection. Splunk is a market-leading platform that helps organizations collect, index, and visualize machine data.

However, manually installing Splunk Enterprise Server and configuring forwarders on several client machines can become time-consuming. In this blog post, we will walk through the process end to end.

Understanding the Components

Splunk Enterprise Server
This is the main Splunk system that stores, indexes, and searches all logs. It provides:
  • Web UI
  • Indexing database
  • Search head
  • User management
  • Dashboard visualization
Splunk Universal Forwarder
This is a lightweight agent installed on client machines. It:
  • Sends logs to the Splunk server
  • Runs silently as a background service
  • Consumes minimal CPU & memory
Prerequisites:

Server Requirements

  • OS: Linux (Ubuntu/RHEL/CentOS/Amazon Linux)
  • 4+ GB RAM
  • 20+ GB Disk
  • Port 8000 (Web), 8089 (mgmt), 9997 (data input) open
Client Requirements
  • Linux-based client machines
  • sudo access
  • Network reachability to server port: 9997
Download Splunk Installer Links
Component Download URL
Splunk Enterprise https://www.splunk.com/en_us/download/splunk-enterprise.html
Splunk Forwarder https://www.splunk.com/en_us/download/universal-forwarder.html

SPLUNK ENTERPRISE SERVER INSTALLATION 

Step 1: Update system

# dnf update -y

Step 2: Create Splunk OS User

Splunk should never run as root.
# useradd -m splunk
Verify:
# id splunk
uid=1001(splunk) gid=1001(splunk) groups=1001(splunk)

Step 3: Download Splunk Enterprise
Download Splunk from the official Splunk website and copy it to your server (example path used below):
/root/splunk-9.0.2-17e00c557dc1-linux-2.6-x86_64.rpm

Step 4: Install Splunk Enterprise
Install the RPM package:
# rpm -ivh splunk-9.0.2-17e00c557dc1-linux-2.6-x86_64.rpm
By default, Splunk installs to:
# ls -ld /opt/splunk/
drwx------ 12 splunk splunk 4096 Dec 20 03:04 /opt/splunk/

Step 5: Set Correct Ownership
Give ownership of Splunk files to the splunk user:
# chown -R splunk:splunk /opt/splunk

Step 6: First Start of Splunk (Create Admin User)
This step is critical
The admin user is created only on the first successful start.
Run the following command as the splunk user:
# sudo -u splunk /opt/splunk/bin/splunk start
Type q
Do you agree with this license? [y/n]: y
Please enter an administrator username: admin
Please enter a new password: Welcome@123
Please confirm new password: Welcome@123

Step 7: Verify Admin User Creation
Check the password file:
# ls -l /opt/splunk/etc/passwd
# cat /opt/splunk/etc/passwd
You should see:
:admin:$6$5SYFmoISyswPtUPt$AXKb2n0RD7mL8UAz1wyZkgTdHkHWFIes/9DMz.4gw3.xnVyLyxpzj1mADGt8HTVJ.ky7f8tay1.bg.7osl7ci1::Administrator:admin:changeme@example.com:::20441
If this file exists, the admin user is created successfully.

Step 8: Enable Splunk at Boot
Install chkconfig if your system complains it is missing (e.g., on RHEL/CentOS 9):
# dnf install chkconfig
$ sudo -u splunk /opt/splunk/bin/splunk stop
# /opt/splunk/bin/splunk enable boot-start -user splunk
Init script installed at /etc/init.d/splunk.
Init script is configured to run at boot.
$ sudo -u splunk /opt/splunk/bin/splunk start

Step 9: Start / Stop Splunk
Start Splunk
$ sudo -u splunk /opt/splunk/bin/splunk start
Stop Splunk
$ sudo -u splunk /opt/splunk/bin/splunk stop
Check Status
$ sudo -u splunk /opt/splunk/bin/splunk status

Step 10: Access Splunk Web UI
Open a browser and go to:
http://<server-ip>:8000 or http://<Server FQDN>:8000

Login with:
Username: admin
Password: Welcome@123

Step 11: (Optional) Firewall Configuration

Allow Splunk Web port:
# firewall-cmd --add-port=8000/tcp --permanent
# firewall-cmd --reload

Common Issues & Fixes
Admin password not working — likely causes:
  • Splunk was started before --seed-passwd
  • /opt/splunk/etc/passwd not created
  • Wrong user used to start Splunk
Fix: Stop Splunk, remove the init files, and start again with --seed-passwd.
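A cleaner way to avoid the problem entirely is Splunk's documented user-seed.conf mechanism: create this file before the first start and the admin account is seeded automatically (password shown is the example used throughout this post):

```
# /opt/splunk/etc/system/local/user-seed.conf — create BEFORE first start
[user_info]
USERNAME = admin
PASSWORD = Welcome@123
```

Delete the file after the first successful start; the credentials are then stored in /opt/splunk/etc/passwd.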

ENABLE SPLUNK DATA INPUT (TCP 9997)
Log into Splunk Web UI:
URL:
http://<server-ip>:8000 or http://<Server FQDN>:8000
Then:
Enable Receiving Port
Go to Settings → Forwarding and Receiving
Click Configure Receiving
Click New Receiving Port
Enter:
Port: 9997

Save
Verify Receiving Port
# netstat -tulnp | grep 9997
tcp        0      0 0.0.0.0:9997            0.0.0.0:*               LISTEN      42402/splunkd
# ss -tulnp | grep 9997
tcp   LISTEN 0      128          0.0.0.0:9997      0.0.0.0:*    users:(("splunkd",pid=42402,fd=197))
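The receiving port can also be enabled from the CLI instead of the Web UI, using the standard splunk enable listen command and the admin credentials created above. A sketch:

```shell
# Enable the indexer to receive forwarder traffic on TCP 9997.
enable_receiving_port() {
  sudo -u splunk /opt/splunk/bin/splunk enable listen 9997 -auth admin:Welcome@123
}
```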

SPLUNK FORWARDER INSTALLATION ON CLIENT 

Step 1: Create Splunk User on Client
Splunk services should not run as root.
# useradd -m splunk
Verify:
# id splunk
uid=1001(splunk) gid=1001(splunk) groups=1001(splunk)

Step 2: Download Splunk Universal Forwarder
Download the Universal Forwarder package from the Splunk website and copy it to the client server.
Example RPM file:
splunkforwarder-9.0.2-17e00c557dc1-linux-2.6-x86_64.rpm

Step 3: Install Splunk Universal Forwarder
Install the RPM:
# rpm -ivh splunkforwarder-9.0.2-17e00c557dc1-linux-2.6-x86_64.rpm
Default installation path:
# ls -ld /opt/splunkforwarder
drwxr-xr-x 9 splunk splunk 4096 Dec 19 22:02 /opt/splunkforwarder

Step 4: Set Correct Ownership
# chown -R splunk:splunk /opt/splunkforwarder

Step 5: First Start of Splunk Forwarder
Start the Universal Forwarder for the first time:
$ sudo -u splunk /opt/splunkforwarder/bin/splunk start 
Type q
Do you agree with this license? [y/n]: y
Please enter an administrator username: admin
Please enter a new password: Welcome@123
Please confirm new password: Welcome@123

Step 6: Enable Forwarder to Start at Boot
Install chkconfig using dnf or yum. 
# dnf install chkconfig
$ sudo -u splunk /opt/splunkforwarder/bin/splunk stop
# /opt/splunkforwarder/bin/splunk enable boot-start -user splunk
Systemd unit file installed by user at /etc/systemd/system/SplunkForwarder.service.
Configured as systemd managed service.
$ sudo -u splunk /opt/splunkforwarder/bin/splunk start

Step 7: Configure Forwarder to Send Data to Indexer
Add Splunk Indexer as Receiving Destination
$ sudo -u splunk /opt/splunkforwarder/bin/splunk add forward-server 192.168.10.109:9997
Warning: Attempting to revert the SPLUNK_HOME ownership
Warning: Executing "chown -R splunk /opt/splunkforwarder"
egrep: warning: egrep is obsolescent; using grep -E
egrep: warning: egrep is obsolescent; using grep -E
WARNING: Server Certificate Hostname Validation is disabled. Please see server.conf/[sslConfig]/cliVerifyServerName for details.
Splunk username: admin
Password:
Added forwarding to: 192.168.10.109:9997.

Verify:
$ sudo -u splunk /opt/splunkforwarder/bin/splunk list forward-server
Warning: Attempting to revert the SPLUNK_HOME ownership
Warning: Executing "chown -R splunk /opt/splunkforwarder"
egrep: warning: egrep is obsolescent; using grep -E
egrep: warning: egrep is obsolescent; using grep -E
WARNING: Server Certificate Hostname Validation is disabled. Please see server.conf/[sslConfig]/cliVerifyServerName for details.
Active forwards:
        192.168.10.109:9997
Configured but inactive forwards:
        None

Step 8: Add Log Files to Monitor
Example: Monitor Linux system logs
$ sudo -u splunk /opt/splunkforwarder/bin/splunk add monitor /var/log/messages
Warning: Attempting to revert the SPLUNK_HOME ownership
Warning: Executing "chown -R splunk /opt/splunkforwarder"
egrep: warning: egrep is obsolescent; using grep -E
egrep: warning: egrep is obsolescent; using grep -E
WARNING: Server Certificate Hostname Validation is disabled. Please see server.conf/[sslConfig]/cliVerifyServerName for details.
Added monitor of '/var/log/messages'.

For Ubuntu:
$ sudo -u splunk /opt/splunkforwarder/bin/splunk add monitor /var/log/syslog
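Each add monitor call writes a stanza into the forwarder's inputs.conf; the same input can be declared directly in the file, which is easier to manage at scale. A sketch — the sourcetype value here is an illustrative choice, not something the add monitor command sets:

```
# /opt/splunkforwarder/etc/system/local/inputs.conf
[monitor:///var/log/messages]
disabled = false
sourcetype = syslog
```

Restart the forwarder after editing the file for the change to take effect.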

Step 9: Restart Splunk Forwarder
$ sudo -u splunk /opt/splunkforwarder/bin/splunk restart

Step 10: Verify Data on Splunk Server
On the Splunk Enterprise server:
Login to Splunk Web : http://<server-ip>:8000 or http://<Server FQDN>:8000
Go to Search & Reporting
Run:
index=_internal | stats count by host
You should see the client hostname.

Firewall Configuration (Optional)
On the Splunk server, allow inbound traffic on the receiving port (firewalld filters incoming connections; no rule is needed on the client for outgoing traffic by default):
# firewall-cmd --add-port=9997/tcp --permanent
# firewall-cmd --reload

Common Issues & Troubleshooting
  • Forwarder not sending data
  • Indexer port 9997 not enabled
  • Firewall blocking traffic
  • Incorrect indexer IP
Check forwarder status
$ sudo -u splunk /opt/splunkforwarder/bin/splunk status
Warning: Attempting to revert the SPLUNK_HOME ownership
Warning: Executing "chown -R splunk /opt/splunkforwarder"
egrep: warning: egrep is obsolescent; using grep -E
splunkd is running (PID: 1768).
splunk helpers are running (PIDs: 1794).
egrep: warning: egrep is obsolescent; using grep -E

Check logs
$ tail -f /opt/splunkforwarder/var/log/splunk/splunkd.log

Conclusion
You have successfully installed and configured the Splunk Enterprise Server and the Splunk Universal Forwarder on the Splunk server and Splunk client machine. The Splunk client is now actively forwarding log data to the Splunk Enterprise server, enabling centralized log collection, monitoring, and analysis across the environment.

This setup provides better visibility into system activity, faster troubleshooting, and a scalable foundation for enterprise-level monitoring and observability.

RHEL 7, 8, 9, 10 – Security Issues

Security issues in Red Hat Enterprise Linux (RHEL) can surface as login failures, service denials, SELinux blocks, firewall problems, authentication errors, or compliance violations.

This guide provides a structured troubleshooting methodology applicable to RHEL 7 through RHEL 10.

1. Identify the Type of Security Issue
Before making changes, determine what is being blocked.
User cannot log in → PAM / SSH / SELinux
Service not accessible → Firewall / SELinux
Permission denied → SELinux / file context
SSH connection refused → sshd / firewall
Application fails after reboot → SELinux labeling
Compliance scan failures → OpenSCAP / crypto policy

2. Check System Logs First (Golden Rule)
Authentication and Security Logs

/var/log/secure
systemd Journal (All Versions)
# journalctl -xe
# journalctl -u sshd

3. SELinux Troubleshooting (Most Common Issue)
SELinux is enabled by default in all RHEL versions.
Check SELinux Status
# getenforce
# sestatus
Identify SELinux Denials
# ausearch -m avc -ts recent
Or:
# journalctl | grep AVC
Interpret SELinux Alerts
# sealert -a /var/log/audit/audit.log
Fix SELinux Issues (Recommended Approach)
Restore File Contexts
# restorecon -Rv /path
Enable Required Booleans
# getsebool -a | grep httpd
# setsebool -P httpd_can_network_connect on
Temporary Disable (For Testing Only)
# setenforce 0
Permanent disable (NOT recommended):
# vi /etc/selinux/config
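When restorecon and booleans are not enough, a targeted local policy module can be generated from the recorded denials using the standard audit2allow workflow. A sketch; the module name mylocalpol is illustrative, and the generated .te file should always be reviewed before loading:

```shell
# Build and load a local SELinux policy module from recent AVC denials.
build_local_policy() {
  ausearch -m avc -ts recent | audit2allow -M mylocalpol
  # Review mylocalpol.te before loading!
  semodule -i mylocalpol.pp
}
```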

4. Firewall Issues (firewalld)
Check Firewall Status

# systemctl status firewalld
# firewall-cmd --state
List Active Rules
# firewall-cmd --list-all
Allow a Service or Port
# firewall-cmd --add-service=http --permanent
# firewall-cmd --add-port=8080/tcp --permanent
# firewall-cmd --reload
Verify Zones
# firewall-cmd --get-active-zones

5. SSH Security Issues
Check SSH Service

# systemctl status sshd
Verify SSH Configuration
# sshd -t
# vi /etc/ssh/sshd_config
Common issues:
  • PermitRootLogin no
  • PasswordAuthentication no
  • Wrong SSH port
Restart SSH Safely
# sshd -t && systemctl restart sshd

6. User Authentication & PAM Issues
Verify User Account

# id username
# passwd -S username
Check Account Lockout
# faillog -u username
# pam_tally2 --user username    # RHEL 7
# faillock --user username          # RHEL 8+
Reset Failed Login Count
# faillock --user username --reset
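On RHEL 8+ the lockout policy itself lives in /etc/security/faillock.conf; a typical sketch (the values are illustrative, not defaults):

```
# /etc/security/faillock.conf (RHEL 8+)
deny = 3            # lock the account after 3 consecutive failures
unlock_time = 600   # auto-unlock after 10 minutes
silent              # do not tell the user about failed attempts
```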

7. File and Directory Permission Issues
Check Ownership

# ls -ld /path
Fix Permissions
# chmod 755 /path
# chown user:group /path
Permissions alone may not fix SELinux issues.

8. sudo Issues
Check sudo Access

# sudo -l
Validate sudoers File
# visudo
Check:
username ALL=(ALL) ALL

9. Security Updates and Patch Issues
Check Installed Security Updates
# yum updateinfo list security # RHEL 7
# dnf updateinfo list security # RHEL 8+
Apply Security Updates
# yum update --security
# dnf update --security

10. OpenSCAP & Compliance Failures
Scan System

# oscap xccdf eval --profile standard --results scan.xml /usr/share/xml/scap/ssg/content/ssg-rhel*.xml
Common Compliance Failures
  • Password complexity
  • SSH hardening
  • File permissions
  • Crypto policies
11. Crypto Policy Issues (RHEL 8+)
Check Current Policy
# update-crypto-policies --show
Set Default Policy
# update-crypto-policies --set DEFAULT

12. Auditd Issues
Check Audit Service

# systemctl status auditd
Search Audit Logs
# ausearch -k ssh

13. Container Security Issues (RHEL 8+)
SELinux + Containers

# podman inspect container_name | grep SELinux
Fix volume labels by appending :Z (private to one container) or :z (shared between containers) to the volume mount.
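For example (a sketch; the image and host path are illustrative):

```shell
# :Z relabels the host directory for exclusive use by this container;
# use :z instead when several containers must share the volume.
run_labeled_volume() {
  podman run -d --name web -v /srv/webdata:/usr/share/nginx/html:Z nginx
}
```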

14. Kernel & Security Module Issues
Check Loaded Modules

# lsmod
Rebuild SELinux Labels
# touch /.autorelabel
# reboot

15. Best Practices to Prevent Security Issues
  • Keep SELinux enabled
  • Monitor /var/log/secure
  • Apply security patches regularly
  • Use firewalld zones properly
  • Test changes in non-production
  • Enable audit logging
Conclusion
Security troubleshooting in RHEL 7, 8, 9, and 10 follows a consistent methodology:
  • Identify blocked access
  • Review logs
  • Check SELinux and firewall
  • Validate authentication and permissions
  • Apply fixes systematically
Following these steps ensures secure, compliant, and stable systems in enterprise environments.