Enterprise Design for Shared Storage–Based MQ Multi-Instance Clusters
High availability (HA) for IBM MQ multi-instance queue managers depends on shared, consistent, low-latency storage. While SAN or enterprise NAS is common, iSCSI-backed LUNs combined with LVM and IBM Spectrum Scale (GPFS) provide a flexible, software-defined alternative that is production-proven when designed correctly.
This guide walks through a full-stack HA storage architecture:
- LVM-backed iSCSI LUNs
- Secure export using targetcli
- Multipath iSCSI initiators
- GPFS clustered filesystems
- IBM MQ HA compatibility and failover guarantees
Architecture Overview
┌──────────────────────┐
│ iSCSI Target Server │
│ RHEL / CentOS │
│ │
│ LVM (datavg) │
│ ├─ mqsdisk01 (LUN) │
│ └─ mqsdisk02 (LUN) │
│ │
│ targetcli / LIO │
└──────────┬───────────┘
│ iSCSI (TCP 3260)
┌────────────────┴────────────────┐
│ │
┌────────────────────┐ ┌────────────────────┐
│ MQ Node 1 │ │ MQ Node 2 │
│ indrxlmqs01 │ │ indrxlmqs02 │
│ │ │ │
│ iSCSI Initiator │ │ iSCSI Initiator │
│ Multipath │ │ Multipath │
│ GPFS Node │ │ GPFS Node │
│ MQ Instance │ │ MQ Instance │
└────────────────────┘ └────────────────────┘
Why GPFS for IBM MQ?
IBM MQ multi-instance queue managers require:
- Shared filesystem
- POSIX locking
- Fast failover
- Guaranteed write ordering
Why GPFS (Spectrum Scale)?
| Feature | Benefit |
|---|---|
| Distributed lock manager | Safe concurrent access |
| Quorum + tiebreaker | Split-brain prevention |
| High-performance journaling | MQ log safety |
| Fast mount failover | <30s recovery |
| Certified by IBM | Supported configuration |
GPFS is explicitly supported for MQ multi-instance HA. Local filesystems such as ext4/XFS cannot be shared between nodes and do not qualify.
Environment Specifications
| Component | Value |
|---|---|
| iSCSI Target Server | RHEL/CentOS, IP: 192.168.20.20 |
| Volume Group | datavg (/dev/sdb) |
| Logical Volumes | mqsdisk01 (5G), mqsdisk02 (5G) |
| iSCSI Target IQN | iqn.2025-08.ppc.com:mqsservers |
| Initiator Nodes | indrxlmqs01, indrxlmqs02 |
| OS | RHEL 8/9 or CentOS 8 Stream |
| GPFS Version | IBM Spectrum Scale 5.1+ |
Why LVM Under iSCSI?
- Online resize (future MQ growth)
- Snapshot capability
- Alignment control
- Striping across RAID
GPFS Optimal Alignment
- PE size: 64KB
- Stripe size: 64KB
- Avoid the 4MB default PE size
Misalignment = silent performance loss.
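The alignment rule above is plain modular arithmetic: each layer's unit must be a whole multiple of the unit beneath it. A small sketch with the values from this guide (the GPFS block size is an assumed example, not from the original text):

```shell
#!/usr/bin/env bash
# Alignment check sketch: an upper layer's I/O unit should be an exact
# multiple of the unit beneath it. Sizes in KiB.
aligned() {  # $1 = lower-layer unit, $2 = upper-layer unit
  [ $(( $2 % $1 )) -eq 0 ] && echo aligned || echo MISALIGNED
}

pe_kib=64            # vgcreate -s 64K
stripe_kib=64        # lvcreate -I 64K
gpfs_block_kib=256   # assumed GPFS filesystem block size

echo "stripe on PE:         $(aligned "$pe_kib" "$stripe_kib")"
echo "GPFS block on stripe: $(aligned "$stripe_kib" "$gpfs_block_kib")"
```

Anything that prints MISALIGNED here will still work, but every write may straddle two lower-layer units.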
Step 1: Prepare Storage on iSCSI Target
Disk Partitioning
# parted /dev/sdb mklabel gpt
# parted /dev/sdb mkpart primary 1MiB 100%
Physical Volume & VG Creation
# pvcreate --dataalignment 1m /dev/sdb1
# vgcreate -s 64K datavg /dev/sdb1
Create GPFS-Optimized Logical Volumes
Note: `-i 4` stripes across four physical volumes, so it requires at least four PVs in the VG. With the single /dev/sdb1 created above, create linear volumes:
# lvcreate -L 5G -n mqsdisk01 datavg
# lvcreate -L 5G -n mqsdisk02 datavg
If datavg spans four or more PVs, stripe instead:
# lvcreate -L 5G -i 4 -I 64K -n mqsdisk01 datavg
Why striping?
- GPFS issues parallel I/O
- MQ log writes benefit from matching the stripe width
- The underlying RAID layout must support it
Step 2: iSCSI Target Configuration
Create Block Backstores
/backstores/block create MQS_LUN01 /dev/datavg/mqsdisk01
/backstores/block create MQS_LUN02 /dev/datavg/mqsdisk02
Create Target
/iscsi create iqn.2025-08.ppc.com:mqsservers
Map LUNs
/iscsi/iqn.2025-08.ppc.com:mqsservers/tpg1/luns create /backstores/block/MQS_LUN01
/iscsi/iqn.2025-08.ppc.com:mqsservers/tpg1/luns create /backstores/block/MQS_LUN02
ACL Configuration
/iscsi/iqn.2025-08.ppc.com:mqsservers/tpg1/acls create iqn.2025-08.ppc.com:indrxlmqs01
/iscsi/iqn.2025-08.ppc.com:mqsservers/tpg1/acls create iqn.2025-08.ppc.com:indrxlmqs02
Security Hardening
Enable CHAP at the TPG level, then set credentials under each ACL:
set attribute authentication=1
set auth userid=mqchap password=StrongSecret!
Never rely on demo mode (generate_node_acls=1) in production; per-initiator ACLs as above are mandatory.
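The targetcli steps above can be collected into one non-interactive batch, which is easier to version-control and replay. This is a sketch using the names from this guide; verify each path against your targetcli version before running:

```shell
# Sketch: targetcli batch for the LUNs, target, and ACLs defined above.
targetcli <<'EOF'
/backstores/block create MQS_LUN01 /dev/datavg/mqsdisk01
/backstores/block create MQS_LUN02 /dev/datavg/mqsdisk02
/iscsi create iqn.2025-08.ppc.com:mqsservers
/iscsi/iqn.2025-08.ppc.com:mqsservers/tpg1/luns create /backstores/block/MQS_LUN01
/iscsi/iqn.2025-08.ppc.com:mqsservers/tpg1/luns create /backstores/block/MQS_LUN02
/iscsi/iqn.2025-08.ppc.com:mqsservers/tpg1/acls create iqn.2025-08.ppc.com:indrxlmqs01
/iscsi/iqn.2025-08.ppc.com:mqsservers/tpg1/acls create iqn.2025-08.ppc.com:indrxlmqs02
saveconfig
EOF
```

The final `saveconfig` persists the configuration so it survives a target server reboot.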
Step 3: iSCSI Initiators (MQ Nodes)
Install Required Packages
# dnf install iscsi-initiator-utils device-mapper-multipath -y
Configure IQN (Unique per Node)
# echo "InitiatorName=iqn.2025-08.ppc.com:indrxlmqs01" \
> /etc/iscsi/initiatorname.iscsi
Persistent Discovery
# iscsiadm -m discoverydb -t sendtargets \
-p 192.168.20.20 --discover
Login
# iscsiadm -m node --login
Multipath Configuration (Strongly Recommended)
/etc/multipath.conf
defaults {
user_friendly_names yes
path_grouping_policy multibus
rr_min_io 100
rr_weight uniform
failback immediate
}
# systemctl enable --now multipathd
# multipath -ll
GPFS must use multipath devices.
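Before handing a device to GPFS, confirm every map really has all its paths. A sketch that counts healthy paths in `multipath -ll` output; the sample output below is illustrative, not captured from a real system:

```shell
#!/usr/bin/env bash
# Sketch: count "active ready running" paths per multipath map.
count_active() { printf '%s\n' "$1" | grep -c 'active ready running'; }

# Illustrative sample of `multipath -ll` output (not from a real host):
sample='mpatha (36001405abcdef0000000000000000001) dm-2 LIO-ORG,MQS_LUN01
size=5.0G features=0 hwhandler=1 alua wp=rw
`-+- policy=service-time 0 prio=50 status=active
  |- 3:0:0:0 sdc 8:32 active ready running
  `- 4:0:0:0 sdd 8:48 active ready running'

count_active "$sample"    # one line per healthy path
```

In production you would pipe real output: `multipath -ll mpatha | grep -c 'active ready running'`.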
Step 4: GPFS Cluster Setup
Create Cluster
# mmcrcluster -N indrxlmqs01:quorum-manager,indrxlmqs02:quorum-manager \
    -C mqsgpfscluster \
    -r /usr/bin/ssh -R /usr/bin/scp
Note: mmcrcluster has no --admin-interface flag; designate quorum nodes in the node descriptors and set the remote shell/copy commands with -r/-R.
Create NSDs, Including a Tiebreaker (Required for 2-Node)
GPFS addresses disks as NSDs defined in stanza files, not as raw /dev/mapper paths:
# mmcrnsd -F /tmp/nsd.stanza
Create GPFS Filesystems
# mmcrfs gpfsdata -F /tmp/data.stanza \
    -A yes -Q no -T /ibm/mqdata
# mmcrfs gpfslogs -F /tmp/logs.stanza \
    -A yes -Q no -T /ibm/mqlogs
(The stanza file names are placeholders; each lists the NSDs built on the mpatha/mpathb/mpathc devices.)
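The `-F` arguments to mmcrnsd/mmcrfs are NSD stanza files rather than device paths. A minimal example for the data disk; the file name, NSD name, and failure group are placeholders for illustration:

```shell
# /tmp/data.stanza -- placeholder stanza describing one NSD
%nsd: device=/dev/mapper/mpathb
  nsd=datansd01
  servers=indrxlmqs01,indrxlmqs02
  usage=dataAndMetadata
  failureGroup=1
```

One stanza block per disk; the NSD name (`datansd01` here) is what later commands such as `mmchconfig tiebreakerDisks=` refer to.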
Mount
# mmmount all -a
Quorum & Fencing Logic
# mmchconfig tiebreakerDisks=tbnsd01
tiebreakerDisks takes an NSD name (tbnsd01 here stands for your tiebreaker NSD), not a /dev/mapper path. There is no mmchconfig quorum=N option; node quorum comes from the quorum designations on the nodes themselves.
| Component | Vote |
|---|---|
| Node1 (quorum node) | 1 |
| Node2 (quorum node) | 1 |
| Tiebreaker disk | 1 |
With three votes, a surviving node plus the tiebreaker disk holds the majority, preventing split-brain if one node loses storage or network.
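The voting logic reduces to majority arithmetic. A sketch assuming the common two-quorum-node-plus-tiebreaker scheme (three votes total, majority of two):

```shell
#!/usr/bin/env bash
# Majority math for 2 quorum nodes + 1 tiebreaker disk (3 votes total).
has_quorum() {  # $1 = reachable node votes, $2 = tiebreaker reachable (0 or 1)
  total=3
  [ $(( $1 + $2 )) -ge $(( total / 2 + 1 )) ] && echo UP || echo DOWN
}

has_quorum 2 0   # both nodes up, tiebreaker lost
has_quorum 1 1   # one node survives but holds the tiebreaker
has_quorum 1 0   # isolated node without the tiebreaker: fenced
```

The third case is the fencing guarantee: a node cut off from both its peer and the tiebreaker disk can never believe it holds quorum.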
IBM MQ Integration Notes
Recommended Layout
| MQ Component | GPFS FS |
|---|---|
| QM data | gpfsdata |
| Logs | gpfslogs |
| Error logs | gpfsdata |
| Trace | gpfsdata |
MQ HA Behavior
- Only one instance active
- Standby monitors lock files
- GPFS ensures fast lock transfer
- Typical failover: 15–30s
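The standby mechanism rests on exclusive advisory file locks taken on the shared filesystem. A minimal sketch of that pattern using util-linux `flock`; this is not MQ's internal code, and /tmp stands in for a GPFS path such as the queue manager's data directory:

```shell
#!/usr/bin/env bash
# Sketch of the active/standby file-lock pattern (NOT MQ's actual code).
# The instance that wins the exclusive lock runs; the loser goes standby.
LOCK=/tmp/qm_sketch.lock

exec 8>"$LOCK"                         # instance 1 opens the lock file
flock -n 8 && echo "instance 1: ACTIVE"

# Instance 2 (normally on the other node) tries the same lock and loses:
( exec 9>"$LOCK"
  flock -n 9 && echo "instance 2: ACTIVE" || echo "instance 2: STANDBY" )
```

When the active holder dies, its file descriptor closes, the lock is released, and the standby's next attempt succeeds; this is why correct lock semantics on the shared filesystem are non-negotiable.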
Performance Tuning
iSCSI
# echo 128 > /sys/block/sdX/queue/nr_requests
GPFS
# mmchconfig pagepool=8G
(pagepool takes an absolute size, not a percentage; size it generously, e.g. a quarter to a third of RAM.)
# mmchconfig maxFilesToCache=500000
MQ
- Use linear logging
- Separate data and logs
- Avoid filesystem compression
Failure Scenarios & Behavior
| Failure | Outcome |
|---|---|
| MQ active node crash | Standby takes over |
| iSCSI path loss | Multipath reroutes |
| Storage server reboot | GPFS retries |
| Network partition | Quorum prevents split-brain |
Validation Checklist
- LUNs visible on both nodes
- Multipath active
- GPFS quorum healthy
- MQ starts on either node
- Forced failover succeeds
- No split-brain warnings
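The checklist can be partially automated. A sketch that runs the read-only status commands from the cheat sheet below and skips any tool absent on the host, so it also runs harmlessly on a workstation:

```shell
#!/usr/bin/env bash
# Sketch: run read-only HA status checks, skipping missing tools.
checks="multipath -ll
mmgetstate -a
dspmq -x"

run_checks() {
  printf '%s\n' "$checks" | while read -r cmd; do
    tool=${cmd%% *}
    if command -v "$tool" >/dev/null 2>&1; then
      printf '== %s\n' "$cmd"
      $cmd || printf '   (command failed)\n'
    else
      printf '== %s (SKIPPED: %s not installed)\n' "$cmd" "$tool"
    fi
  done
}
run_checks
```

This covers only the observable state; the forced-failover and split-brain items still require a manual, scheduled test.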
Operational Commands Cheat Sheet
# iscsiadm -m session -P 3
# multipath -ll
# mmgetstate -a
# mmlscluster
# strmqm -x QM1
# dspmq -x
Final Thoughts
This architecture delivers:
- Enterprise-grade HA
- SAN-like behavior with software-defined flexibility
- IBM-supported MQ configuration
- Predictable failover
The key is discipline:
- Quorum
- Fencing
- Multipath
- Testing
Done right, this design rivals expensive SAN solutions at a fraction of the cost.