
DB2 GPFS Cluster LUN Setup Using iSCSI and LVM on RHEL/CentOS

End-to-End High Availability Storage & Database Architecture

IBM DB2 high availability deployments require consistent shared storage, predictable fencing, and cluster-aware filesystems. When SAN infrastructure is unavailable or cost-prohibitive, iSCSI + GPFS (IBM Spectrum Scale) provides a fully supported, enterprise-grade alternative.

This guide covers the complete stack:
  • iSCSI shared LUNs (targetcli + LVM)
  • Multipath-secured access
  • GPFS cluster installation and quorum design
  • DB2 installation on GPFS
  • Performance tuning and HA validation
Reference Architecture
┌───────────────────────────────┐
│  iSCSI Target Server          │
│  RHEL/CentOS                  │
│  LVM VG: datavg               │
│  ├─ db2disk01 (3G)  → Data    │
│  ├─ db2disk02 (7G)  → Indexes │
│  ├─ db2disk03 (3G)  → Logs    │
│  └─ db2disk04 (7G)  → Temp    │
│  targetcli / LIO              │
└──────────────┬────────────────┘
               │ iSCSI (TCP 3260, MTU 9000)
 ┌─────────────┴─────────────┐
 │                           │
┌────────────────────┐   ┌────────────────────┐
│ DB2 Node 1         │   │ DB2 Node 2         │
│ indrxldb201        │   │ indrxldb202        │
│ GPFS Node          │   │ GPFS Node          │
│ DB2 Instance       │   │ DB2 Instance       │
└────────────────────┘   └────────────────────┘

Why GPFS for DB2?
Requirement         GPFS Capability
Concurrent access   Distributed lock manager
HA fencing          Quorum + tiebreaker disks
Write ordering      Journaled metadata
Fast failover       Sub-30s remount
IBM support         Fully certified

ext4 and XFS are not cluster-aware and are NOT supported for DB2 HA shared-disk clusters.

Environment Summary
Component       Value
OS              RHEL 8/9 or CentOS Stream
GPFS            IBM Spectrum Scale 5.1+
DB2             11.5.x
Network         10Gb iSCSI VLAN (MTU 9000)
Cluster Nodes   indrxldb201, indrxldb202
Quorum          Tiebreaker disk required
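
Before the cluster work begins, the iSCSI target server (the top box in the diagram) needs its LUNs carved out of LVM and exported with targetcli/LIO. A minimal sketch, assuming a spare disk /dev/sdb on the target server; the IQNs are example values you must replace with your own:

```shell
# Carve the four DB2 LUNs from an LVM volume group (sizes from the diagram)
pvcreate /dev/sdb
vgcreate datavg /dev/sdb
lvcreate -L 3G -n db2disk01 datavg   # Data
lvcreate -L 7G -n db2disk02 datavg   # Indexes
lvcreate -L 3G -n db2disk03 datavg   # Logs
lvcreate -L 7G -n db2disk04 datavg   # Temp

# Export them over iSCSI with targetcli (IQNs are examples)
for lun in db2disk01 db2disk02 db2disk03 db2disk04; do
  targetcli /backstores/block create $lun /dev/datavg/$lun
done
targetcli /iscsi create iqn.2025-01.com.example:db2target
for lun in db2disk01 db2disk02 db2disk03 db2disk04; do
  targetcli /iscsi/iqn.2025-01.com.example:db2target/tpg1/luns \
    create /backstores/block/$lun
done
# Allow both DB2 nodes' initiators (match each node's /etc/iscsi/initiatorname.iscsi)
targetcli /iscsi/iqn.2025-01.com.example:db2target/tpg1/acls \
  create iqn.2025-01.com.example:indrxldb201
targetcli /iscsi/iqn.2025-01.com.example:db2target/tpg1/acls \
  create iqn.2025-01.com.example:indrxldb202
targetcli saveconfig
```

Both DB2 nodes then log in with iscsiadm and see the LUNs as /dev/mapper/mpathX devices via multipathd.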

Part 1 – GPFS Installation (Both Nodes)

1.1 Prerequisites
# dnf install -y \
  kernel-devel \
  gcc \
  cpp \
  elfutils-libelf-devel \
  numactl \
  chrony \
  net-tools
Ensure time synchronization:
# systemctl enable --now chronyd
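
GPFS administration commands (mmcrcluster, mmstartup, …) also require passwordless root SSH between the nodes. A quick setup sketch, run on each node:

```shell
# Generate a root key if absent, then trust it on both nodes (including self)
[ -f /root/.ssh/id_rsa ] || ssh-keygen -t rsa -b 4096 -N '' -f /root/.ssh/id_rsa
ssh-copy-id root@indrxldb201
ssh-copy-id root@indrxldb202

# Verify both directions work without a password prompt
ssh indrxldb201 hostname
ssh indrxldb202 hostname
```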

1.2 Install IBM Spectrum Scale Packages
Upload Spectrum Scale RPMs to both nodes.
# rpm -ivh gpfs.base*.rpm gpfs.gpl*.rpm gpfs.msg*.rpm
Build kernel module:
/usr/lpp/mmfs/bin/mmbuildgpl
Verify:
# lsmod | grep mmfs
# echo "export PATH=$PATH:/usr/lpp/mmfs/bin" >> /root/.bash_profile

1.3 Create GPFS Cluster
Run once from primary node:
# mmcrcluster -N indrxldb201:manager-quorum,indrxldb202:manager-quorum -p indrxldb201 -s indrxldb202 -r /usr/bin/ssh -R /usr/bin/scp -C GPFS_DB2

Start GPFS on all nodes:
# mmstartup -a
To start a single node instead, use mmstartup -N <nodename>.
Verify:
# mmgetstate -a

1.4 Configure Quorum & Tiebreaker
For 2-node clusters a tiebreaker disk is mandatory: with only two quorum nodes, losing either one would otherwise lose quorum. The tiebreaker must be an NSD visible to both nodes (create it with mmcrnsd; tb_nsd below is an example NSD name), not a raw device path:
# mmchconfig tiebreakerDisks=tb_nsd
Verify:
# mmlsconfig tiebreakerDisks

Vote Source   Count
Node 1        1
Node 2        1
Tiebreaker    1
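
The tiebreaker NSD itself is created like any other NSD. A sketch, assuming a small dedicated LUN at /dev/mapper/mpatha (stanza file and NSD names are illustrative):

```shell
# Stanza for a small descriptor-only tiebreaker NSD
cat > /tmp/nsd_tb <<'EOF'
%nsd:
device=/dev/mapper/mpatha
nsd=tb_nsd
servers=indrxldb201,indrxldb202
usage=descOnly
failureGroup=-1
EOF

mmcrnsd -F /tmp/nsd_tb
# Note: on older Spectrum Scale levels the daemon must be down cluster-wide
# (mmshutdown -a) before changing tiebreaker disks
mmchconfig tiebreakerDisks=tb_nsd
mmlsconfig tiebreakerDisks
```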

Part 2 – GPFS Filesystem Design for DB2
Filesystem      Purpose
gpfs_db2data    Tablespaces
gpfs_db2index   Indexes
gpfs_db2logs    Transaction logs
gpfs_db2temp    Temp tables
Create File /tmp/nsdlist
# vi /tmp/nsdlist
%nsd:
device=/dev/sdb
nsd=nsd_01
servers=indrxldb201,indrxldb202
usage=dataAndMetadata 
failureGroup=-1 
pool=system 
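
The single stanza above covers one LUN. A sketch extending it to one stanza file per planned filesystem (device paths, NSD names, and file names are assumptions; adjust them to your multipath layout):

```shell
# One NSD stanza file per planned filesystem; names are examples
cat > /tmp/nsd_data <<'EOF'
%nsd:
device=/dev/mapper/mpathb
nsd=nsd_data
servers=indrxldb201,indrxldb202
usage=dataAndMetadata
failureGroup=-1
pool=system
EOF

cat > /tmp/nsd_index <<'EOF'
%nsd:
device=/dev/mapper/mpathc
nsd=nsd_index
servers=indrxldb201,indrxldb202
usage=dataAndMetadata
failureGroup=-1
pool=system
EOF
# ...repeat for /tmp/nsd_logs (mpathd) and /tmp/nsd_temp (mpathe),
# then register each file with: mmcrnsd -F /tmp/nsd_data  (and so on)
```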

Create NSDs
# mmcrnsd -F /tmp/nsdlist
Create Filesystems
mmcrfs takes an NSD stanza file with -F, not a raw multipath device. Assuming one stanza file per filesystem (file names illustrative) and -T for the mount point:
# mmcrfs gpfs_db2data  -F /tmp/nsd_data  -A yes -Q yes -T /gpfs_db2data
# mmcrfs gpfs_db2index -F /tmp/nsd_index -A yes -Q yes -T /gpfs_db2index
# mmcrfs gpfs_db2logs  -F /tmp/nsd_logs  -A yes -Q no  -T /gpfs_db2logs
# mmcrfs gpfs_db2temp  -F /tmp/nsd_temp  -A yes -Q no  -T /gpfs_db2temp
Mount:
# mmmount all -a

Part 3 – DB2 Installation (Both Nodes)

3.1 OS Kernel Tuning
# sysctl -w kernel.shmmni=8192
# sysctl -w kernel.shmmax=$(($(getconf _PHYS_PAGES) * 4096))
# sysctl -w kernel.shmall=$(($(getconf _PHYS_PAGES) * 4096 / 4096))
# sysctl -w vm.swappiness=1
Persist in /etc/sysctl.conf.
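
The four settings can be persisted in one drop-in file. A sketch that derives the values the same way and stages the file in /tmp (copy it to /etc/sysctl.d/ as root and apply with sysctl --system):

```shell
# Derive shared-memory limits from physical RAM, matching the sysctl -w lines
PAGES=$(getconf _PHYS_PAGES)
PAGE_SIZE=$(getconf PAGE_SIZE)
SHMMAX=$((PAGES * PAGE_SIZE))   # whole RAM, in bytes
SHMALL=$PAGES                   # whole RAM, in pages

# Stage the drop-in; install with:
#   cp /tmp/99-db2.conf /etc/sysctl.d/ && sysctl --system
cat > /tmp/99-db2.conf <<EOF
kernel.shmmni = 8192
kernel.shmmax = ${SHMMAX}
kernel.shmall = ${SHMALL}
vm.swappiness = 1
EOF
```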

3.2 Create DB2 User & Groups
# groupadd -g 1001 db2iadm1
# groupadd -g 1002 db2fadm1
# useradd -u 1001 -g db2iadm1 -G db2fadm1 db2inst1
# passwd db2inst1
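
DB2 also expects raised resource limits for the instance owner. A sketch staging a limits.conf drop-in (the values are common starting points, not IBM-mandated figures; check the DB2 installation prerequisites for your level):

```shell
# Stage limits for db2inst1; install with:
#   cp /tmp/99-db2inst1.conf /etc/security/limits.d/
cat > /tmp/99-db2inst1.conf <<'EOF'
db2inst1  soft  nofile  65536
db2inst1  hard  nofile  65536
db2inst1  soft  nproc   16384
db2inst1  hard  nproc   16384
db2inst1  soft  data    unlimited
db2inst1  hard  data    unlimited
EOF
```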

3.3 Install DB2 Software
# tar -xvf DB2_Svr_11.5_Linux_x86-64.tar.gz
# cd server_dec
# ./db2_install
Select:
  • Server Edition
  • Typical install

3.4 Create DB2 Instance
# /opt/ibm/db2/V11.5/instance/db2icrt \
  -u db2fadm1 db2inst1

Part 4 – DB2 Database on GPFS
Directory Structure
# mkdir -p /gpfs_db2data/db2inst1
# mkdir -p /gpfs_db2logs/db2inst1
# chown -R db2inst1:db2iadm1 /gpfs*

Create Database
# su - db2inst1
$ db2 create database PRODDB \
  on /gpfs_db2data \
  dbpath on /gpfs_db2data \
  using codeset UTF-8 territory us

Move logs (as db2inst1):
$ db2 update db cfg for PRODDB using NEWLOGPATH /gpfs_db2logs/db2inst1

Part 5 – High Availability Behavior
Event             Result
Node crash        GPFS remounts
Network split     Quorum prevents corruption
iSCSI path loss   Multipath reroutes
Storage restart   DB2 recovers logs

Typical DB2 recovery time: 20–45 seconds
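
A failover drill that exercises the behaviors above, run during a change window (a sketch; adapt the node names and timing checks to your environment):

```shell
# 1. Simulate losing node 1 (clean shutdown; for a harsher test, power-cycle it)
mmshutdown -N indrxldb201

# 2. On indrxldb202: confirm quorum held and the filesystems stayed mounted
mmgetstate -a
mmlsmount all -L

# 3. Bring the database up on the surviving node and time the recovery
time su - db2inst1 -c "db2start; db2 activate db PRODDB"

# 4. Restore node 1 and verify it rejoins the cluster
mmstartup -N indrxldb201
mmgetstate -a
```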

Performance Tuning
GPFS
Size the pagepool as an absolute value and leave most of RAM for DB2's own bufferpools (4G is a conservative starting point; tune per workload):
# mmchconfig pagepool=4G -i
# mmchconfig maxFilesToCache=1000000
DB2
$ db2 update db cfg for PRODDB using LOGFILSIZ 8192
$ db2 update db cfg for PRODDB using LOGPRIMARY 20
$ db2 update db cfg for PRODDB using LOGSECOND 100
iSCSI
# echo 256 > /sys/block/sdX/queue/nr_requests
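
The echo into /sys does not survive a reboot. A udev rule sketch that persists the queue depth (staged in /tmp; the KERNEL match is a broad example, narrow it to your iSCSI-backed devices):

```shell
# Stage the rule; install with:
#   cp /tmp/99-iscsi-tuning.rules /etc/udev/rules.d/ && udevadm control --reload
cat > /tmp/99-iscsi-tuning.rules <<'EOF'
# Raise the request queue depth on SCSI disks (narrow the match as needed)
ACTION=="add|change", KERNEL=="sd[a-z]", SUBSYSTEM=="block", \
  ATTR{queue/nr_requests}="256"
EOF
```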

Validation Checklist
  • iSCSI sessions persistent
  • Multipath active
  • GPFS quorum healthy
  • DB2 database starts on either node
  • Forced node failure recovers cleanly
  • No GPFS fencing events
Operational Commands Cheat Sheet
# mmgetstate -a
# mmlscluster
# mmlsdisk gpfs_db2data
# db2pd -db PRODDB -logs
# iscsiadm -m session -P 3
# multipath -ll

Final Thoughts
This architecture delivers:
  • IBM-supported DB2 HA
  • SAN-like behavior using software-defined storage
  • Strong fencing and split-brain prevention
  • Predictable performance at scale
The success factors are discipline and testing:
  • Quorum
  • Multipath
  • Dedicated network
  • Regular failover drills
