Oracle Linux 8 | Run 19c + 23c Together | Production RAC Setup
Want to run a stable 19c production database alongside 23c for testing? Add a solid 2-node RAC on top? This guide has you covered: multiple Oracle homes, Grid Infrastructure, SCAN, rolling patches, everything.
Why Multi-Home Rocks
/u01/app/oracle/
├── product/19.0.0/dbhome_1 ← LTS Production
├── product/23.0.0/dbhome_1 ← Innovation Testing
└── oraInventory ← Shared
Benefits:
- Zero-downtime feature testing
- Safe rolling migration path
- Independent patching cycles
- Separate listeners (1521 vs 1522)
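With two homes on one server, /etc/oratab ties each SID to its home so tools like oraenv and dbstart pick the right binaries. A sketch, assuming the SIDs cdb1 and cdb23 used later in this guide:

```
# /etc/oratab — one line per database: SID:ORACLE_HOME:autostart(Y/N)
cdb1:/u01/app/oracle/product/19.0.0/dbhome_1:Y
cdb23:/u01/app/oracle/product/23.0.0/dbhome_1:N
```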
1. Quick OS Check
ulimit -n # Should be ≥ 65536
ulimit -u # Should be ≥ 16384
# Fix if needed (takes effect on next login)
echo "oracle soft nofile 65536" | sudo tee -a /etc/security/limits.conf
echo "oracle hard nofile 65536" | sudo tee -a /etc/security/limits.conf
echo "oracle soft nproc 16384" | sudo tee -a /etc/security/limits.conf
echo "oracle hard nproc 16384" | sudo tee -a /etc/security/limits.conf
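The two checks above can be wrapped in a tiny helper so the pass/fail comparison is explicit (a sketch; `check_limit` is a hypothetical name, not an Oracle tool):

```shell
#!/bin/sh
# check_limit NAME CURRENT MINIMUM — prints OK or LOW for one ulimit value
check_limit() {
  # "unlimited" always passes
  if [ "$2" = unlimited ]; then
    echo "$1 OK (unlimited)"
    return 0
  fi
  if [ "$2" -ge "$3" ]; then
    echo "$1 OK ($2)"
  else
    echo "$1 LOW ($2 < $3)"
  fi
}
check_limit nofile "$(ulimit -n)" 65536
check_limit nproc  "$(ulimit -u)" 16384
```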
2. Enterprise Groups
sudo groupadd -g 20011 backupdba
sudo groupadd -g 20012 dgdba
sudo groupadd -g 20013 kmdba
sudo groupadd -g 20014 asmdba
sudo groupadd -g 20015 asmoper
sudo groupadd -g 20016 asmadmin
sudo usermod -a -G backupdba,dgdba,kmdba,asmdba,asmoper,asmadmin oracle
3. Install 23c Alongside 19c (Silent)
# Create 23c home
sudo mkdir -p /u01/app/oracle/product/23.0.0/dbhome_1
sudo chown -R oracle:oinstall /u01/app/oracle/product/23.0.0
sudo chmod -R 775 /u01/app/oracle/product/23.0.0
su - oracle
cd /u01/app/oracle/product/23.0.0/dbhome_1
unzip LINUX.X64_230000_db_home.zip
export ORACLE_HOME=/u01/app/oracle/product/23.0.0/dbhome_1
export ORACLE_BASE=/u01/app/oracle
export CV_ASSUME_DISTID=OEL8.1 # only needed if the installer's OS prereq check complains
./runInstaller -silent -waitforcompletion \
-responseFile $ORACLE_HOME/install/response/db_install.rsp \
oracle.install.option=INSTALL_DB_SWONLY \
ORACLE_HOME=$ORACLE_HOME \
ORACLE_BASE=$ORACLE_BASE \
oracle.install.db.InstallEdition=EE \
oracle.install.db.OSDBA_GROUP=dba \
oracle.install.db.OSBACKUPDBA_GROUP=backupdba \
oracle.install.db.OSDGDBA_GROUP=dgdba \
oracle.install.db.OSKMDBA_GROUP=kmdba \
DECLINE_SECURITY_UPDATES=true
# Root script (full path, since sudo doesn't inherit oracle's $ORACLE_HOME)
sudo /u01/app/oracle/product/23.0.0/dbhome_1/root.sh
4. Smart Listener Strategy
Separate listeners = no conflicts:
19c: LISTENER on port 1521
23c: LISTENER23 on port 1522
# 23c listener.ora
cat > $ORACLE_HOME/network/admin/listener.ora <<EOF
LISTENER23 =
(DESCRIPTION_LIST =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = $(hostname))(PORT = 1522))
)
)
EOF
lsnrctl start LISTENER23
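On the client side, one tnsnames.ora alias per stack keeps connections unambiguous. A sketch, assuming the SIDs used in this guide; replace db-host with your server's hostname:

```
# tnsnames.ora — one alias per home
CDB1 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = db-host)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = cdb1))
  )
CDB23 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = db-host)(PORT = 1522))
    (CONNECT_DATA = (SERVICE_NAME = cdb23))
  )
```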
5. Create 23c CDB+PDB
export ORACLE_SID=cdb23
dbca -silent -createDatabase \
-templateName General_Purpose.dbc \
-gdbname cdb23 -sid cdb23 \
-createAsContainerDatabase true \
-numberOfPDBs 1 -pdbName pdb23 \
-characterSet AL32UTF8 \
-totalMemory 4096 \
-storageType FS \
-datafileDestination "/u02/oradata23"
# dbca prompts for the SYS/SYSTEM/PDB admin passwords when they
# are not passed on the command line
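Once dbca finishes, a quick sanity check from the new home confirms the PDB is open, and SAVE STATE keeps it opening automatically after instance restarts:

```sql
-- run as SYSDBA against cdb23
SELECT name, open_mode FROM v$pdbs;
-- reopen pdb23 automatically after restarts
ALTER PLUGGABLE DATABASE pdb23 SAVE STATE;
```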
6. RAC Network Setup (Both Nodes)
/etc/hosts:
# Public
192.168.10.11 node1.localdomain node1
192.168.10.12 node2.localdomain node2
# Private interconnect (a separate, non-routed subnet, e.g. 192.168.20.x)
192.168.20.11 node1-priv.localdomain node1-priv
192.168.20.12 node2-priv.localdomain node2-priv
# Virtual IPs
192.168.10.13 node1-vip.localdomain node1-vip
192.168.10.14 node2-vip.localdomain node2-vip
# SCAN (lab only: in production, define it in DNS, round-robin across 3 IPs)
192.168.10.15 cluster-scan.localdomain cluster-scan
Public + private + VIP + SCAN = 4 networks needed.
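Before moving on to Grid, it's worth confirming every cluster name actually resolves. A small helper using getent (`resolve_check` is a hypothetical name):

```shell
#!/bin/sh
# resolve_check NAME... — report whether each hostname resolves
resolve_check() {
  for h in "$@"; do
    if getent hosts "$h" >/dev/null; then
      echo "$h resolves"
    else
      echo "$h UNRESOLVED"
    fi
  done
}
resolve_check node1 node2 node1-vip node2-vip cluster-scan
```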
7. ASM Shared Storage (Both Nodes)
# Install ASMLib support packages (the kernel driver ships with UEK on OL8)
sudo dnf install -y oracleasm-support oracleasmlib
# Configure (answers: user oracle, group asmadmin, start on boot)
sudo oracleasm configure -i
sudo oracleasm init
# Create disks on ONE node only
sudo oracleasm createdisk DATA01 /dev/sdb1
sudo oracleasm createdisk FRA01 /dev/sdc1
# Pick them up on the other node
sudo oracleasm scandisks
oracleasm listdisks
8. Grid Infrastructure (Node 1 Only)
sudo mkdir -p /u01/app/19.0.0/grid
sudo chown oracle:oinstall /u01/app/19.0.0/grid
sudo chmod 775 /u01/app/19.0.0/grid
su - oracle
cd /u01/app/19.0.0/grid
unzip LINUX.X64_193000_grid_home.zip
# 19c Grid installs via gridSetup.sh, not runInstaller. Passwordless SSH
# between node1 and node2 must already be configured for oracle.
./gridSetup.sh -silent -waitforcompletion \
-responseFile /u01/app/19.0.0/grid/install/response/gridsetup.rsp \
oracle.install.option=CRS_CONFIG \
oracle.install.crs.config.clusterName=cluster1 \
oracle.install.crs.config.gpnp.scanName=cluster-scan \
oracle.install.crs.config.gpnp.scanPort=1521 \
oracle.install.crs.config.clusterNodes=node1:node1-vip,node2:node2-vip \
oracle.install.asm.OSASM=asmadmin \
oracle.install.asm.OSDBA=asmdba \
oracle.install.asm.OSOPER=asmoper \
oracle.install.asm.diskGroup.name=DATA \
oracle.install.asm.diskGroup.diskDiscoveryString=/dev/oracleasm/disks/*
# Trimmed to the essentials; fill in the remaining mandatory fields
# (ASM passwords, disk list) in gridsetup.rsp.
Silent mode never prompts: when the installer finishes, it prints the root scripts (orainstRoot.sh and root.sh). Run them as root on node1 first, then on node2.
9. Create RAC Database
dbca -silent -createDatabase \
-templateName General_Purpose.dbc \
-gdbname racdb -sid racdb \
-databaseConfigType RAC \
-nodelist node1,node2 \
-storageType ASM \
-diskGroupName +DATA \
-createAsContainerDatabase true \
-numberOfPDBs 1 -pdbName pdb1 \
-totalMemory 4096
10. RAC Services (Load Balancing)
# Apps service (node1 preferred, node2 available)
srvctl add service -db racdb \
-service appsvc \
-preferred racdb1 \
-available racdb2
srvctl start service -db racdb -service appsvc
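Clients should connect to the service through the SCAN, never to a specific instance, so failover from racdb1 to racdb2 stays transparent. A sketch of the client-side alias:

```
# tnsnames.ora on the application server
APPSVC =
  (DESCRIPTION =
    (CONNECT_TIMEOUT = 5)(RETRY_COUNT = 3)
    (ADDRESS = (PROTOCOL = TCP)(HOST = cluster-scan)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = appsvc))
  )
```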
11. Rolling Patch Magic
# Check cluster health
crsctl check cluster -all
# Patch node1 only (run opatch from inside the unzipped patch directory)
srvctl stop instance -db racdb -instance racdb1
$ORACLE_HOME/OPatch/opatch apply
srvctl start instance -db racdb -instance racdb1
# Repeat stop/apply/start on node2, then load the SQL changes once
$ORACLE_HOME/OPatch/datapatch -verbose
# Verify
srvctl status database -d racdb
Zero downtime = happy users.
12. RAC Health Dashboard
# One-liner cluster status
echo "=== CLUSTER ==="; crsctl stat res -t | grep racdb
echo "=== DATABASE ==="; srvctl status database -d racdb -v
echo "=== NODES ==="; olsnodes -n
echo "=== SCAN ==="; srvctl config scan
# Global Cache (should be <5% DB time)
sqlplus / as sysdba <<EOF
SELECT inst_id, name, value FROM gv\$sysstat WHERE name LIKE 'gc%';
SELECT * FROM gv\$cluster_interconnects;
EOF
13. Multi-Home Switching
Use oraenv magic:
. oraenv # Prompts: ORACLE_SID = [cdb1] ? cdb23
# Switches PATH, LD_LIBRARY_PATH automatically
Or manual:
# 19c
export ORACLE_HOME=/u01/app/oracle/product/19.0.0/dbhome_1
export ORACLE_SID=cdb1
# 23c
export ORACLE_HOME=/u01/app/oracle/product/23.0.0/dbhome_1
export ORACLE_SID=cdb23
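The two manual blocks above can be folded into one helper that also keeps PATH clean when hopping between homes (`setdb` is a hypothetical name; adjust the paths to yours):

```shell
#!/bin/sh
# setdb SID HOME — point the current shell at one Oracle home
setdb() {
  export ORACLE_SID=$1
  export ORACLE_HOME=$2
  # drop any previously added home's bin dir from PATH, then prepend the new one
  PATH=$(printf '%s\n' "$PATH" | tr ':' '\n' | grep -v '/dbhome_[0-9]*/bin$' | paste -sd:)
  export PATH="$ORACLE_HOME/bin:$PATH"
  export LD_LIBRARY_PATH="$ORACLE_HOME/lib"
}
setdb cdb23 /u01/app/oracle/product/23.0.0/dbhome_1
```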
14. Production Validation Matrix
| Environment | Expected | Command |
|---|---|---|
| 19c Single | cdb1 - OPEN | srvctl status database -d cdb1 |
| 23c Single | cdb23 - OPEN | srvctl status database -d cdb23 |
| RAC | 2 instances | srvctl status database -d racdb |
| ASM | DATA + FRA MOUNTED | asmcmd lsdg |
| SCAN | 3 IPs resolving | nslookup cluster-scan |
15. Security Hardening
# RAC-specific
sqlplus / as sysdba
ALTER SYSTEM SET REMOTE_LISTENER='cluster-scan:1521';
ALTER SYSTEM SET LOCAL_LISTENER='(ADDRESS=(PROTOCOL=TCP)(HOST=node1-vip)(PORT=1521))';
# Valid node checking (listener passwords were removed in 12c;
# restrict which clients may connect via sqlnet.ora instead)
cat >> $ORACLE_HOME/network/admin/sqlnet.ora <<EOF
TCP.VALIDNODE_CHECKING = YES
TCP.INVITED_NODES = (node1, node2, 192.168.10.0/24)
EOF