End-to-End High Availability Storage & Database Architecture
IBM DB2 high availability deployments require consistent shared storage, predictable fencing, and cluster-aware filesystems. When SAN infrastructure is unavailable or cost-prohibitive, iSCSI + GPFS (IBM Spectrum Scale) provides a fully supported, enterprise-grade alternative.
This guide covers the complete stack:
- iSCSI shared LUNs (targetcli + LVM)
- Multipath-secured access
- GPFS cluster installation and quorum design
- DB2 installation on GPFS
- Performance tuning and HA validation
Reference Architecture
┌───────────────────────────────────┐
│        iSCSI Target Server        │
│            RHEL/CentOS            │
│           LVM VG: datavg          │
│  ├─ db2disk01 (3G) → Data         │
│  ├─ db2disk02 (7G) → Indexes      │
│  ├─ db2disk03 (3G) → Logs         │
│  └─ db2disk04 (7G) → Temp         │
│          targetcli / LIO          │
└─────────────────┬─────────────────┘
                  │ iSCSI (TCP 3260, MTU 9000)
        ┌─────────┴─────────┐
        │                   │
┌───────┴────────┐  ┌───────┴────────┐
│   DB2 Node 1   │  │   DB2 Node 2   │
│  indrxldb201   │  │  indrxldb202   │
│   GPFS Node    │  │   GPFS Node    │
│  DB2 Instance  │  │  DB2 Instance  │
└────────────────┘  └────────────────┘
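The iSCSI VLAN runs jumbo frames, so verify MTU 9000 end to end before building anything on top of it. A quick check from each node (eth1 and the target IP are placeholders; an 8972-byte DF-bit ping plus 28 bytes of IP/ICMP headers exactly fills a 9000-byte frame):
# ip link set dev eth1 mtu 9000
# ping -M do -s 8972 -c 3 <target-ip>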
Why GPFS for DB2?
| Requirement | GPFS Capability |
|---|---|
| Concurrent access | Distributed lock manager |
| HA fencing | Quorum + tiebreaker disks |
| Write ordering | Journaled metadata |
| Fast failover | Sub-30s remount |
| IBM support | Fully certified |
Note: ext4/XFS are NOT supported for DB2 HA shared-disk clusters; they are single-node filesystems with no distributed lock manager, so concurrent mounts corrupt data.
Environment Summary
| Component | Value |
|---|---|
| OS | RHEL 8/9 or CentOS Stream |
| GPFS | IBM Spectrum Scale 5.1+ |
| DB2 | 11.5.x |
| Network | 10Gb iSCSI VLAN (MTU 9000) |
| Cluster Nodes | indrxldb201, indrxldb202 |
| Quorum | Tiebreaker disk required |
Part 1 – GPFS Installation (Both Nodes)
1.1 Prerequisites
# dnf install -y \
kernel-devel \
gcc \
cpp \
elfutils-libelf-devel \
numactl \
chrony \
net-tools
Ensure time synchronization:
# systemctl enable --now chronyd
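Confirm the nodes actually converged (GPFS is sensitive to clock skew between cluster members):
# chronyc tracking
# chronyc sources -v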
1.2 Install IBM Spectrum Scale Packages
Upload Spectrum Scale RPMs to both nodes.
# rpm -ivh gpfs.base*.rpm gpfs.gpl*.rpm gpfs.msg*.rpm
Build kernel module:
# /usr/lpp/mmfs/bin/mmbuildgpl
Verify:
# lsmod | grep mmfs
# echo 'export PATH=$PATH:/usr/lpp/mmfs/bin' >> /root/.bash_profile
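The mm* commands administer both nodes over ssh/scp as root, so passwordless root SSH must work in both directions before the cluster is created. A minimal sketch (run on each node, then test both hostnames):
# ssh-keygen -t ed25519 -N '' -f /root/.ssh/id_ed25519
# ssh-copy-id root@indrxldb201
# ssh-copy-id root@indrxldb202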
1.3 Create GPFS Cluster
Run once from primary node:
# mmcrcluster -N indrxldb201:quorum-manager,indrxldb202:quorum-manager \
    -p indrxldb201 -s indrxldb202 \
    -r /usr/bin/ssh -R /usr/bin/scp -C GPFS_DB2
Accept the server licenses before starting GPFS:
# mmchlicense server --accept -N indrxldb201,indrxldb202
Start GPFS:
# mmstartup -a
Verify:
# mmgetstate -a
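Both nodes should report active; the output looks roughly like this:
 Node number  Node name     GPFS state
------------------------------------------
       1      indrxldb201   active
       2      indrxldb202   active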
1.4 Configure Quorum & Tiebreaker
In a two-node cluster, neither node alone is a majority, so a tiebreaker disk is mandatory.
The tiebreaker must be a GPFS NSD, not a raw multipath device. Define a small dedicated LUN as an NSD first (same stanza format as in Part 2; nsd_tb is an illustrative name), then register it:
# mmchconfig tiebreakerDisks="nsd_tb"
There is no quorum=N tunable; the vote count follows from the node designations plus the tiebreaker:
| Vote Source | Count |
|---|---|
| Node 1 | 1 |
| Node 2 | 1 |
| Tiebreaker disk | 1 |
A surviving node that can still reach the tiebreaker holds 2 of 3 votes and keeps the cluster up.
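Confirm what the cluster actually computed (the -L output includes the quorum value and node counts):
# mmlsconfig tiebreakerDisks
# mmgetstate -aL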
Part 2 – GPFS Filesystem Design for DB2
| Filesystem | Purpose |
|---|---|
| gpfs_db2data | Tablespaces |
| gpfs_db2index | Indexes |
| gpfs_db2logs | Transaction logs |
| gpfs_db2temp | Temp tables |
Create File /tmp/nsdlist
With multipath in the stack, point the NSDs at the multipath devices, not the raw /dev/sdX paths (nsd_data is an illustrative name; repeat the stanza for the index, log, and temp LUNs on their own devices):
# vi /tmp/nsdlist
%nsd:
device=/dev/mapper/mpathb
nsd=nsd_data
servers=indrxldb201,indrxldb202
usage=dataAndMetadata
failureGroup=-1
pool=system
Create NSDs
# mmcrnsd -F /tmp/nsdlist
Create Filesystems
mmcrfs takes the NSD stanza file (as rewritten by mmcrnsd), not a block device. Split /tmp/nsdlist into one stanza file per filesystem, then:
# mmcrfs gpfs_db2data -F /tmp/nsd_data -A yes -Q yes -T /gpfs_db2data
# mmcrfs gpfs_db2index -F /tmp/nsd_index -A yes -Q yes -T /gpfs_db2index
# mmcrfs gpfs_db2logs -F /tmp/nsd_logs -A yes -Q no -T /gpfs_db2logs
# mmcrfs gpfs_db2temp -F /tmp/nsd_temp -A yes -Q no -T /gpfs_db2temp
Mount:
# mmmount all -a
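Check that every filesystem is mounted on both nodes:
# mmlsmount all -L
# df -h /gpfs_db2*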
Part 3 – DB2 Installation (Both Nodes)
3.1 OS Kernel Tuning
# sysctl -w kernel.shmmni=8192
# sysctl -w kernel.shmmax=$(($(getconf _PHYS_PAGES) * 4096))
# sysctl -w kernel.shmall=$(getconf _PHYS_PAGES)
# sysctl -w vm.swappiness=1
Persist in /etc/sysctl.conf.
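One way to persist them is a drop-in file (a sketch; with an unquoted heredoc delimiter the shell arithmetic is evaluated once, when the file is written):
# cat > /etc/sysctl.d/99-db2.conf <<EOF
kernel.shmmni = 8192
kernel.shmmax = $(($(getconf _PHYS_PAGES) * $(getconf PAGE_SIZE)))
kernel.shmall = $(getconf _PHYS_PAGES)
vm.swappiness = 1
EOF
# sysctl --system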
3.2 Create DB2 User & Groups
db2icrt needs both an instance owner and a fenced user:
# groupadd -g 1001 db2iadm1
# groupadd -g 1002 db2fadm1
# useradd -u 1001 -g db2iadm1 -G db2fadm1 db2inst1
# useradd -u 1002 -g db2fadm1 db2fenc1
# passwd db2inst1
# passwd db2fenc1
3.3 Install DB2 Software
# tar -xvf DB2_Svr_11.5_Linux_x86-64.tar.gz
# cd server_dec
# ./db2_install
Select:
Server Edition
Typical install
3.4 Create DB2 Instance
# /opt/ibm/db2/V11.5/instance/db2icrt \
    -u db2fenc1 db2inst1
The -u flag takes the fenced user created in 3.2, not the db2fadm1 group.
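Verify the instance on both nodes:
# su - db2inst1 -c db2level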
Part 4 – DB2 Database on GPFS
Directory Structure
# mkdir -p /gpfs_db2data/db2inst1
# mkdir -p /gpfs_db2index/db2inst1
# mkdir -p /gpfs_db2logs/db2inst1
# mkdir -p /gpfs_db2temp/db2inst1
# chown -R db2inst1:db2iadm1 /gpfs_db2*
Create Database
# su - db2inst1
$ db2 create database PRODDB \
    on /gpfs_db2data \
    dbpath on /gpfs_db2data \
    using codeset UTF-8 territory US
Move logs:
$ db2 update db cfg for PRODDB using NEWLOGPATH /gpfs_db2logs/db2inst1
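NEWLOGPATH only takes effect on the next activation; bounce the database and confirm:
$ db2 deactivate db PRODDB
$ db2 activate db PRODDB
$ db2 get db cfg for PRODDB | grep -i 'Path to log files'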
Part 5 – High Availability Behavior
| Event | Result |
|---|---|
| Node crash | Filesystems stay mounted on the survivor; DB2 restarts there |
| Network split | Quorum + tiebreaker fence the minority side |
| iSCSI path loss | Multipath reroutes I/O |
| Storage restart | DB2 crash recovery replays the logs |
Typical DB2 recovery time: 20–45 seconds
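A simple drill to validate those numbers (a sketch assuming manual failover of the instance; adjust node, instance, and database names to your environment):
# mmshutdown -N indrxldb201
# ssh indrxldb202 'su - db2inst1 -c "db2start; db2 activate db PRODDB"'
# mmstartup -N indrxldb201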
Performance Tuning
GPFS
pagepool takes an absolute size, not a percentage; keep it modest so RAM stays available for DB2's own buffer pools (4G here is illustrative):
# mmchconfig pagepool=4G
# mmchconfig maxFilesToCache=1000000
DB2
$ db2 update db cfg for PRODDB using LOGFILSIZ 8192
$ db2 update db cfg for PRODDB using LOGPRIMARY 20
$ db2 update db cfg for PRODDB using LOGSECOND 100
iSCSI
# echo 256 > /sys/block/sdX/queue/nr_requests
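An echo into /sys does not survive a reboot; a udev rule makes it stick (a sketch that matches all sd* devices; narrow the match to your iSCSI LUNs):
# cat > /etc/udev/rules.d/99-iscsi-queue.rules <<'EOF'
ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/nr_requests}="256"
EOF
# udevadm control --reload && udevadm trigger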
Validation Checklist
- iSCSI sessions persistent
- Multipath active
- GPFS quorum healthy
- DB2 database starts on either node
- Forced node failure recovers cleanly
- No GPFS fencing events
Operational Commands Cheat Sheet
# mmgetstate -a
# mmlscluster
# mmlsdisk gpfs_db2data
# db2pd -db PRODDB -logs
# iscsiadm -m session -P 3
# multipath -ll
Final Thoughts
This architecture delivers:
- IBM-supported DB2 HA
- SAN-like behavior using software-defined storage
- Strong fencing and split-brain prevention
- Predictable performance at scale
The success factors are discipline and testing:
- Quorum
- Multipath
- Dedicated network
- Regular failover drills