
AWS Landscape 7: Subnet Allocation

Proper subnet allocation ensures high availability (HA), scalability, and security for SAP workloads across multiple AWS accounts and regions. By carefully planning public and private subnets across Availability Zones (AZs), the architecture supports multi-AZ deployments, SAP HANA replication, and future growth.

Key benefits:
  • Isolation: Public and private subnets separate external-facing services from internal SAP workloads.
  • HA and Fault Tolerance: Multi-AZ subnets prevent downtime from single-AZ failures.
  • Scalability: Sufficient CIDR allocation allows EC2 and HANA growth without IP conflicts.
  • Automation: Subnet tagging enables monitoring, compliance, and DevOps automation.
Objective:
Design and allocate subnets within each VPC to:
  • Support multi-AZ HA deployments for SAP workloads.
  • Separate public-facing services from private SAP workloads.
  • Reserve IP addresses for future growth and additional workloads.
  • Enable cross-VPC communication via Transit Gateway.
Design Overview
Each VPC (per account) contains multiple subnets distributed across two or more AZs.

Subnet Types:
Subnet Type    Purpose
---------------------------------------------------------------
Public         NAT Gateways, Bastion Hosts, Internet-facing services
Private        SAP Application Servers, SAP HANA DB, internal services

Subnet sizing accounts for:
  • Current EC2 and HANA workloads
  • Future growth for SAP HANA DB and additional EC2 instances
  • Potential new environments
Technical Steps for Subnet Allocation

Step 1: Define Subnet CIDR per AZ
Example for Production VPC (10.0.0.0/16):
AZ    Subnet Type    CIDR          Purpose
------------------------------------------------------------
AZ1   Public         10.0.0.0/24   NAT Gateways, Bastion Hosts
AZ1   Private        10.0.1.0/24   SAP Application, HANA DB
AZ2   Public         10.0.2.0/24   NAT Gateways, Bastion Hosts
AZ2   Private        10.0.3.0/24   SAP Application, HANA DB
Reserve additional IPs for elasticity and future scaling.
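The Step 1 plan can be sanity-checked with Python's standard `ipaddress` module — a minimal sketch (subnet labels are illustrative):

```python
import ipaddress

# Carve the four /24 subnets from the table above out of the
# Production VPC CIDR (10.0.0.0/16).
vpc = ipaddress.ip_network("10.0.0.0/16")
plan = {
    "az1-public":  ipaddress.ip_network("10.0.0.0/24"),
    "az1-private": ipaddress.ip_network("10.0.1.0/24"),
    "az2-public":  ipaddress.ip_network("10.0.2.0/24"),
    "az2-private": ipaddress.ip_network("10.0.3.0/24"),
}

# Every subnet must fall inside the VPC CIDR, and no two may overlap.
assert all(s.subnet_of(vpc) for s in plan.values())
nets = list(plan.values())
assert not any(a.overlaps(b) for i, a in enumerate(nets) for b in nets[i + 1:])

# Everything from 10.0.4.0 upward stays reserved for future growth.
free = vpc.num_addresses - sum(s.num_addresses for s in plan.values())
print(f"Addresses reserved for growth: {free}")
```

With four /24s allocated, roughly 98% of the /16 remains free for new subnets.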

Step 2: Repeat for Other Environments
Example for Development VPC (10.1.0.0/16):
AZ    Subnet Type    CIDR
---------------------------------
AZ1   Public         10.1.0.0/24
AZ1   Private        10.1.1.0/24
AZ2   Public         10.1.2.0/24
AZ2   Private        10.1.3.0/24
Apply similar logic for Pre-Prod, QA, Sandbox, and Shared Services accounts.
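The per-environment layout follows a simple convention (the second octet identifies the environment), so the plan can be generated rather than typed. A sketch, assuming the first four /24s of each VPC map to AZ1/AZ2 public/private as in the tables above:

```python
import ipaddress

# Second-octet convention from the design: 10.0.x Prod, 10.1.x Dev, etc.
ENVIRONMENTS = {"Production": 0, "Development": 1, "Pre-Prod": 2,
                "QA": 3, "Sandbox": 4}

def subnet_plan(second_octet: int) -> dict:
    """First four /24s of the environment VPC, labelled by AZ and type."""
    vpc = ipaddress.ip_network(f"10.{second_octet}.0.0/16")
    labels = ["az1-public", "az1-private", "az2-public", "az2-private"]
    return dict(zip(labels, vpc.subnets(new_prefix=24)))

for env, octet in ENVIRONMENTS.items():
    print(env, {k: str(v) for k, v in subnet_plan(octet).items()})
```

For example, `subnet_plan(1)` reproduces the Development table above, ending with 10.1.3.0/24 for the AZ2 private subnet.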

Step 3: Tagging Subnets
Apply consistent tags for automation, monitoring, and compliance:
Key            Value
----------------------------------------
Environment    Production / Dev / QA / Sandbox
AZ             ap-southeast-1a / ap-southeast-1b
SubnetType     Public / Private
Project        SAP Migration
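Because automation depends on these tags, it is worth checking compliance before anything consumes them. A minimal sketch (the helper below is illustrative, not an AWS API):

```python
# Mandatory tag keys from the table above.
REQUIRED_KEYS = {"Environment", "AZ", "SubnetType", "Project"}

def missing_tags(tags: dict) -> set:
    """Return the mandatory tag keys absent from a subnet's tag set."""
    return REQUIRED_KEYS - tags.keys()

subnet_tags = {
    "Environment": "Production",
    "AZ": "ap-southeast-1a",
    "SubnetType": "Private",
    "Project": "SAP Migration",
}
assert not missing_tags(subnet_tags)   # fully tagged subnet passes
print(missing_tags({"Project": "SAP Migration"}))  # flags the gaps
```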

Step 4: Associate Subnets with Route Tables
  • Public Subnets → Internet Gateway
  • Private Subnets → NAT Gateway for outbound internet traffic
  • Private Subnets → Transit Gateway for cross-VPC connectivity
Step 5: Consider HA & Future Growth
  • Reserve 2x IP addresses per instance for elasticity.
  • Leave extra CIDR space in each VPC for new subnets or services.
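The 2x rule above translates into a concrete capacity per subnet. AWS reserves five addresses in every subnet (network address, VPC router, DNS, one reserved for future use, broadcast), so the math works out as follows:

```python
import ipaddress

AWS_RESERVED = 5  # addresses AWS reserves in every subnet

def instance_capacity(cidr: str, ips_per_instance: int = 2) -> int:
    """Instances a subnet supports when each reserves 2x its IPs."""
    usable = ipaddress.ip_network(cidr).num_addresses - AWS_RESERVED
    return usable // ips_per_instance

# A /24 has 256 - 5 = 251 usable addresses -> 125 instances at 2x headroom.
print(instance_capacity("10.0.1.0/24"))
```

If a single AZ needs more than ~125 instances with this headroom, size the subnet at /23 or larger from the start.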
Diagram – Subnet Layout per VPC




Routing and Connectivity Overview:

Public Subnets
──────────────
 • Route → Internet Gateway (IGW)
 • Host NAT Gateway
 • Bastion Hosts for admin access

Private Subnets
───────────────
 • Route → NAT Gateway (Outbound Internet)
 • Route → Transit Gateway (Cross-VPC communication)
 • Host SAP Application Servers and SAP HANA DB


AWS Landscape 6: VPC Design

The Virtual Private Cloud (VPC) design establishes isolated and secure networking for SAP workloads while enabling cross-account and cross-environment connectivity. Each AWS account has a dedicated VPC, ensuring security boundaries, simplified routing, and support for high availability (HA) deployments across Availability Zones (AZs).
Key objectives of the VPC design:
  • Ensure network isolation per account.
  • Support HA across multiple AZs.
  • Enable cross-VPC communication using a centralized Transit Gateway.
  • Plan non-overlapping CIDR blocks for future expansion and on-prem connectivity.
Objective
Design VPCs for each AWS account to:
  • Provide isolated, secure networking for SAP workloads.
  • Maintain connectivity between accounts via Transit Gateway.
  • Support multi-AZ deployment for high availability.
  • Coordinate IP addressing with on-premises networks for Direct Connect or VPN.
Overview:
  • A single VPC per account simplifies management and improves security.
  • Subnets are distributed across multiple AZs for HA.
  • A Transit Gateway in the Network Service account centralizes connectivity.

Proposed VPC CIDR Allocation per Environment/Account:
Environment        Account Type      VPC CIDR
------------------------------------------------
Production         Workload          10.0.0.0/16
Development        Workload          10.1.0.0/16
Pre-Production     Workload          10.2.0.0/16
Quality            Workload          10.3.0.0/16
Sandbox            Workload          10.4.0.0/16
Network Services   Shared Service    10.10.0.0/16
Shared Services    Shared Service    10.20.0.0/16
Security           Security          10.30.0.0/16
Logging            Security/Shared   10.40.0.0/16
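Since every VPC attaches to the same Transit Gateway, these CIDRs must never overlap or routing becomes ambiguous. A quick verification sketch over the allocation table above:

```python
import ipaddress

# VPC CIDRs from the allocation table above.
VPC_CIDRS = {
    "Production": "10.0.0.0/16",      "Development": "10.1.0.0/16",
    "Pre-Production": "10.2.0.0/16",  "Quality": "10.3.0.0/16",
    "Sandbox": "10.4.0.0/16",         "Network Services": "10.10.0.0/16",
    "Shared Services": "10.20.0.0/16", "Security": "10.30.0.0/16",
    "Logging": "10.40.0.0/16",
}

def overlapping_pairs(cidrs: dict) -> list:
    """Return every pair of names whose CIDR ranges overlap."""
    nets = {name: ipaddress.ip_network(c) for name, c in cidrs.items()}
    names = list(nets)
    return [(a, b) for i, a in enumerate(names) for b in names[i + 1:]
            if nets[a].overlaps(nets[b])]

assert overlapping_pairs(VPC_CIDRS) == []   # the plan is conflict-free
```

Run the same check whenever a new account's CIDR is proposed, including any on-premises ranges that will be reachable over Direct Connect or VPN.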

Technical Steps to Implement VPCs

Step 1: Create VPC
Log in to the AWS account (repeat for each environment: Production, Dev, QA, Sandbox, Shared, Network, Security, Logging).
Navigate to VPC → Create VPC.
Provide the following:
Name tag: e.g., Prod-VPC, Dev-VPC
IPv4 CIDR block: Follow your design table (e.g., 10.0.0.0/16 for Production, 10.1.0.0/16 for Dev, etc.)
Tenancy: Default (choose Dedicated only if required)
Click Create VPC.

Step 2: Create Subnets per AZ
Create subnets in at least two AZs for high availability.
Example for Production VPC (10.0.0.0/16):
Subnet     AZ    CIDR          Purpose
---------------------------------------------------
Subnet-1   AZ1   10.0.0.0/24   App EC2 + DB Primary
Subnet-2   AZ2   10.0.1.0/24   App EC2 + DB Secondary
Repeat subnet creation for:
  • Dev, QA, Sandbox
  • Shared Services
  • Network Account
Best practice: Use consistent subnet naming conventions across accounts for clarity.

Step 3: Internet & NAT Gateways
Public Subnets
Attach Internet Gateway (IGW) to the VPC.
Associate public subnets with route tables that include a default route to IGW.
Private Subnets
Deploy NAT Gateway in a public subnet.
Update private subnet route tables to route internet-bound traffic through the NAT Gateway.

Step 4: Configure Route Tables
Public Subnet Route Table:
Route 0.0.0.0/0 → IGW
Private Subnet Route Table:
Route 0.0.0.0/0 → NAT Gateway
Transit Gateway Routing:
Configure route tables to allow cross-account VPC connectivity via Transit Gateway.

Step 5: Apply Security
Security Groups:
Create SGs per application, database, or service tier.
Use least privilege rules (allow only necessary ports and IP ranges).
Network ACLs (NACLs):
Apply subnet-level controls for additional security segmentation.
Segmentation:
Separate workloads into:
  • Application subnets
  • Database subnets
  • Management / bastion hosts
  • Shared services (DNS, AD, monitoring)
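The four-way segmentation above can be carved directly out of a VPC CIDR. A sketch, assuming one /20 per tier (the tier sizes are an illustrative choice, not a prescribed standard):

```python
import ipaddress

# Workload tiers from the segmentation list above.
TIERS = ["application", "database", "management", "shared-services"]

def tier_subnets(vpc_cidr: str, new_prefix: int = 20) -> dict:
    """Assign the first len(TIERS) subnets of the VPC, one per tier."""
    vpc = ipaddress.ip_network(vpc_cidr)
    return dict(zip(TIERS, vpc.subnets(new_prefix=new_prefix)))

for tier, net in tier_subnets("10.0.0.0/16").items():
    print(f"{tier:16} {net}")
```

Each /20 holds about 4,000 addresses; security groups and NACLs can then be scoped to these tier ranges instead of individual hosts.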
Step 6: Connect VPCs via Transit Gateway
Deploy Transit Gateway in the Network Service Account.
Attach all VPCs: Production, Dev, QA, Sandbox, Shared Services, Security, Logging.
Update Route Tables:
Add Transit Gateway routes in each VPC to enable cross-VPC communication.
Verify Connectivity:
Test routing between accounts while maintaining segmentation policies.

Diagram – VPC Connectivity Across Accounts:


Network Service VPC
  • Contains Transit Gateway for cross-VPC connectivity.
  • Acts as the central routing hub for all workload and shared VPCs.
Workload VPCs
  • Prod, Dev, and QA VPCs each have AZ1 and AZ2 subnets for high availability.
  • Connect to Transit Gateway for cross-account / cross-environment communication.
Central VPCs for Shared Functions
  • Shared Services VPC: CI/CD, automation, common tools
  • Security VPC: Threat detection, monitoring, security tooling
  • Logging VPC: Centralized audit logs
Scalability & Governance
  • Transit Gateway allows centralized management of routes and security.
  • Adding new workload VPCs only requires attaching to the TGW.
Notes and Best Practices
  • Single VPC per account simplifies isolation and security management.
  • CIDRs are planned for future expansion and on-prem integration.
  • Transit Gateway ensures scalable and centralized connectivity between accounts and environments.
  • Subnet distribution across AZs supports HA deployments for SAP workloads.

AWS Landscape 5: High Availability (HA)

High availability (HA) is critical for SAP workloads to ensure minimal downtime and maintain business continuity. AWS achieves HA by distributing resources across multiple Availability Zones (AZs) within a region.
Key benefits of HA deployment:
  • Fault tolerance: Isolated AZs prevent single points of failure.
  • Scalability: Auto Scaling ensures workloads can adapt to demand.
  • Reliability: Load balancers distribute traffic and maintain application availability.
  • Consistency: Monitoring and automated failover maintain uptime SLA > 99.9%.
This design assumes that HA decisions may vary by environment, with production always leveraging multi-AZ, while non-production accounts may use single-AZ based on cost and business priorities.
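The >99.9% uptime target can be made concrete by converting availability percentages into an allowed-downtime budget:

```python
# Translate an availability target into permitted downtime per year.
MINUTES_PER_YEAR = 365 * 24 * 60   # 525,600

def allowed_downtime_minutes(availability_pct: float,
                             period_minutes: int = MINUTES_PER_YEAR) -> float:
    """Minutes of downtime the target tolerates over the period."""
    return period_minutes * (1 - availability_pct / 100)

print(allowed_downtime_minutes(99.9))    # ~525.6 min/year (~8.8 hours)
print(allowed_downtime_minutes(99.99))   # ~52.6 min/year
```

At 99.9% the budget is under nine hours per year, which is why production relies on multi-AZ failover rather than manual recovery.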

Objective
Design the AWS infrastructure to achieve high availability for SAP workloads by:
  1. Distributing workloads across 2 or more AZs per region
  2. Ensuring database and application tier fault tolerance
  3. Leveraging AWS-native services (ALB/NLB, Auto Scaling, HANA System Replication)
  4. Maintaining secure and monitored cross-AZ networking
Design Overview
High Availability Strategy:
  • Deploy SAP workloads across 2 or more AZs per region.
  • Use AZ-level isolation with independent subnets per AZ.
  • Distribute application servers (EC2) and database replicas (HANA) across AZs.
  • Implement Elastic Load Balancers to distribute traffic and remove unhealthy instances automatically.
  • Enable synchronous replication for database HA within a region.
Technical Steps to Implement HA

Step 1: Identify Availability Zones (AZs) per Region
Primary Region (Production): ap-southeast-1 (Singapore)
Choose 2 AZs for high availability, e.g., ap-southeast-1a and ap-southeast-1b.
Secondary Region (DR): ap-northeast-1 (Tokyo)
Select AZs for disaster recovery, e.g., ap-northeast-1a and ap-northeast-1b.
Best practice: Use at least 2 AZs per region for synchronous replication and failover.

Step 2: Configure VPC and Subnets
Create Subnets per AZ within each VPC. Example Production VPC:
Subnet     AZ    CIDR          Purpose
---------------------------------------------------
Subnet-1   AZ1   10.0.0.0/24   App EC2 + DB Primary
Subnet-2   AZ2   10.0.1.0/24   App EC2 + DB Secondary
Repeat subnet creation for all environment accounts: Dev, QA, Pre-Prod, Sandbox.
Assign public/private subnets based on workload requirements.

Step 3: Deploy Application & Database Across AZs
Application Tier (SAP AS/NetWeaver):
Launch EC2 instances in both AZs.
Use Auto Scaling Groups (ASG) to automatically scale based on traffic.
Database Tier (SAP HANA):
Implement HANA System Replication (HSR) for synchronous replication.
If using RDS, enable Multi-AZ deployment for high availability.
Ensure replication is synchronous within the region for HA.

Step 4: Load Balancer Configuration
Deploy AWS Elastic Load Balancer (ALB or NLB).
Distribute traffic evenly across both AZs.
Configure health checks to automatically remove unhealthy instances.
Integrate the Load Balancer with Auto Scaling Groups.

Step 5: Cross-AZ Networking
Configure route tables for communication between subnets in different AZs.
Use Security Groups and Network ACLs (NACLs) to control traffic flow.
Inter-AZ latency within a region is already low (typically single-digit milliseconds); keep CIDRs contiguous so routes can be summarized cleanly.
Optional: Implement VPC Peering or Transit Gateway for cross-account connectivity.

Step 6: Monitoring & Failover
CloudWatch
Enable alarms for instance health, CPU, memory, and AZ-level metrics.
Auto Scaling
Automatically replace failed instances in any AZ.
AWS Systems Manager
Centralized operational visibility, automation scripts, and patch management.
Failover Testing
Periodically simulate AZ failures to ensure SAP applications and databases fail over correctly.

Diagram – Multi-AZ High Availability

High Availability Across AZs
  • App EC2 instances and DB replicas are distributed across AZ1 and AZ2.
  • Ensures fault tolerance if one AZ fails.
Elastic Load Balancer
  • Distributes traffic evenly across both AZs for high availability.
Subnet Design
  • Separate subnets per AZ maintain isolation and improve resiliency.
Fault-Tolerant Database Setup
  • Primary DB in AZ1, replica in AZ2 for automatic failover.
Notes and Best Practices
  • Multi-AZ deployment ensures uptime SLA > 99.9%.
  • Production workloads always leverage multi-AZ HA; non-prod environments may use single-AZ for cost efficiency.
  • Apply consistent security, monitoring, and logging policies per subnet/AZ.
  • Regularly test failover scenarios to ensure reliability.
  • Use CloudWatch, Auto Scaling, and Route53 health checks to maintain HA.

AWS Landscape 4: AWS Regions

For mission-critical SAP workloads, deploying across multiple AWS Regions ensures high availability (HA), disaster recovery (DR), and business continuity. By separating workloads geographically, organizations reduce the risk of service disruption due to regional outages while maintaining consistent operational and security controls.

This design leverages primary and secondary regions for active workloads and DR replication while maintaining network isolation, replication consistency, and failover readiness.

Objective:
Define the AWS Regions for hosting the SAP landscape to achieve:
  • High Availability: Primary region handles active workloads.
  • Disaster Recovery: Secondary region maintains replicated SAP workloads.
  • Operational Consistency: VPCs, subnets, and shared services are replicated for seamless failover.
  • Compliance & Business Continuity: Cross-region backup and replication using AWS-native services.
Design Overview
Region Type        Region Name & Code           Purpose
----------------------------------------------------------------------------
Primary Region     Singapore (ap-southeast-1)   Hosts active SAP workloads
Secondary Region   Tokyo (ap-northeast-1)       DR region for business continuity; replicates SAP workloads

Key Principles:
  • Regions are geographically isolated to minimize outage impact.
  • All VPCs, subnets, and shared services are mirrored between primary and secondary regions.
  • EC2 instances are replicated using AWS Elastic Disaster Recovery (AWS DRS); SAP HANA is protected with HANA System Replication or AWS Backint Agent backups.
Technical Steps to Implement Multi-Region SAP Landscape

Step 1: Configure AWS Regions
Log in to AWS Management Console with the Master Payer Account (MPA) or appropriate environment account.
Select Primary Region (Production):
ap-southeast-1 (Singapore) for production workloads.
Select DR/Secondary Region:
ap-northeast-1 (Tokyo) for disaster recovery and backup.
Enable necessary services in both regions:
  • VPC
  • IAM
  • S3
  • AWS Elastic Disaster Recovery (AWS DRS)
  • CloudTrail, CloudWatch, Route53
Step 2: Deploy Core Infrastructure in Both Regions
VPC Replication per Account
  • Create VPCs in Tokyo with the same design as Singapore (public/private subnets).
  • Ensure non-overlapping CIDRs for both regions, especially if using Direct Connect or VPN.
Shared Services Deployment
  • Deploy critical shared services in both regions:
  • Active Directory (AD)
  • DNS (Route53 private hosted zones)
  • Monitoring (CloudWatch, CloudTrail, Security Hub)
Transit Gateway Deployment
  • Deploy Transit Gateway in Network Service Account.
  • Connect all workload accounts in both regions.
  • Configure route tables to ensure traffic isolation and cross-account connectivity.
Step 3: Enable Disaster Recovery
EC2 / SAP Workload Replication
  • Use AWS Elastic Disaster Recovery (AWS DRS) to replicate EC2 instances from Singapore → Tokyo.
  • Configure continuous replication for near real-time DR.
SAP HANA Database Replication
  • Use SAP HANA System Replication (asynchronous across regions) or AWS Backint Agent backups to S3 to maintain the DR copy.
  • Ensure the DR site has sufficient compute and storage capacity for SAP workloads.
DR Testing
  • Periodically perform failover drills to validate recovery procedures.
  • Test SAP application, database, and networking failover workflows.
Step 4: Route53 DNS Failover
Configure Health Checks
Set up Route53 health checks for primary production endpoints.
Set Traffic Routing
Default traffic → Primary region (Singapore).
Failover traffic → Secondary region (Tokyo) on health check failure.
Failover Automation
Verify that Route53 automatically switches endpoints during simulated outages.

Step 5: Cross-Region Backup
AWS Backup Configuration
Enable cross-region backup for EBS, RDS, S3, and other critical resources.
Store snapshots in both Singapore and Tokyo regions.
Encryption
Use KMS keys per region for backup encryption.
Backup Scheduling
Define schedules according to SAP RPO/RTO requirements.
Ensure compliance with internal and regulatory policies.
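The RPO constraint above has a simple shape: the backup interval bounds the worst-case data loss, since everything written after the last completed backup is at risk. A sketch of that rule:

```python
def max_data_loss_hours(backup_interval_hours: float) -> float:
    """Worst-case data loss is the time elapsed since the last backup."""
    return backup_interval_hours

def interval_meets_rpo(backup_interval_hours: float, rpo_hours: float) -> bool:
    """A schedule is compliant when its interval does not exceed the RPO."""
    return max_data_loss_hours(backup_interval_hours) <= rpo_hours

assert interval_meets_rpo(4, rpo_hours=4)        # 4-hourly backups meet a 4h RPO
assert not interval_meets_rpo(24, rpo_hours=4)   # daily backups do not
```

In practice, add a margin for backup duration and cross-region copy time: a backup that takes an hour to complete and replicate effectively lengthens the exposure window.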

Diagram – Multi-Region Architecture

Primary Region – Singapore (ap-southeast-1):
Prod Account: Live SAP workloads
Dev Account: Development & experimentation
QA Account: Testing & validation

Network Services:
Transit Gateway connecting all primary workloads
Centralized VPC, routing, and security controls

Shared Services / Security / Logging:
CI/CD, identity services, audit logging, security monitoring

Secondary Region – Tokyo (ap-northeast-1):
Disaster Recovery (DR) SAP workloads
Continuous replication from primary region using AWS DRS / SAP HANA replication

Notes and Best Practices
  • Maintain consistent VPC and subnet CIDR strategy across regions.
  • Regularly test DR failover scenarios to ensure operational readiness.
  • Use Cross-Region replication and encrypted backups to meet compliance and RTO/RPO requirements.
  • Multi-region deployment ensures resilience, business continuity, and high availability for SAP workloads.

AWS Landscape 3: AWS Accounts

A well-architected SAP landscape in AWS requires a structured multi-account approach. Each account provides isolation, security, governance, and scalability, enabling independent lifecycle management for SAP workloads, shared services, and core infrastructure.
Key benefits of this multi-account approach:
  • Security: Limits blast radius by isolating production from non-production environments.
  • Governance: Allows centralized logging, compliance monitoring, and service control policies (SCPs).
  • Operational Management: Supports independent teams managing workloads, networking, and security.
  • Scalability: New accounts can be added with minimal disruption.
Objective:
Define all AWS accounts required for the SAP landscape to ensure:
  • Isolation for production, development, and sandbox workloads
  • Centralized management for security, logging, and shared services
  • Clear governance boundaries with tagging and SCPs
  • Scalable architecture for future environments or projects
Design Overview
AWS accounts are categorized into three main types:
Account Type            Account Name                 Purpose
--------------------------------------------------------------------------------
Master & Governance     Master Payer Account (MPA)   Centralized billing, AWS Organizations management
Network & Security      Network Service Account      Centralized VPC, Transit Gateway, and networking services
                        Shared Services Account      Shared infrastructure (AD, DNS, monitoring)
                        Security Account             Security tooling, IAM policies, compliance monitoring
                        Logging Account              Centralized logging (CloudTrail, CloudWatch Logs, S3)
SAP Workload Accounts   Production Account           Production SAP workloads
                        Development Account          SAP development workloads
                        Pre-Production Account       Staging/pre-release SAP workloads
                        Quality Account              QA and integration testing workloads
                        Sandbox Account              Ad-hoc testing, PoC, learning environments
Principles:
  • Each account is isolated for security, operational management, and billing.
  • Shared and security accounts centralize governance.
  • Network connectivity is managed via Transit Gateway in the Network Service Account.
Technical Steps to Create & Configure AWS Accounts

Step 1: Create Accounts via AWS Organizations
Log in to the Master Payer Account (MPA).
Navigate to AWS Organizations → Accounts → Add account → Create account.
Enter the account details:
  • Account Name: e.g., Production, Development, Security
  • Email: Use internal aliases (e.g., prod-team@example.com)
  • Select OU: Choose appropriate OU (Production, Non-Production, Security)
Click Create account.
Repeat for all required accounts (Production, Development, QA, Pre-Prod, Sandbox, Security, Shared Services).
Tip: Keep a spreadsheet to track accounts, emails, and OU assignments for governance.

Step 2: Enable IAM & Security Baseline
Enable IAM Identity Center (SSO):
  • Centralizes user access across all AWS accounts.
  • Assign users/groups per environment and role.
Create IAM Roles per Account:
  • Admin: Cross-account administrative access.
  • Read-only: Monitoring and auditing.
  • SAP Ops: Workload-specific operational tasks.
Enforce Security Controls:
Enable MFA for all users and roles.
Apply Service Control Policies (SCPs) per OU:
  • Production OU: strict policies.
  • Non-Production OU: flexible policies.
  • Security OU: restricted to security-related services.
Step 3: Networking & Peering
Single VPC per account (as per VPC Design guidelines).
AWS Transit Gateway:
  • Deploy in Network Services Account.
  • Connect all environment accounts: Production, Development, QA, Pre-Production, Sandbox.
  • Connect Shared Services and Security accounts.
Route Tables & Subnets:
  • Use private and public subnets per account.
  • Maintain routing isolation: Production cannot directly access non-production networks.
  • Configure peering or Transit Gateway routes only where necessary.
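The isolation rule above ("Production cannot directly access non-production networks") can be checked mechanically if route tables are modeled as lists of destination CIDRs. A sketch using an assumed data model, not an AWS API:

```python
import ipaddress

# Non-production VPC CIDRs from the design (Dev, Pre-Prod, QA, Sandbox).
NON_PROD = ["10.1.0.0/16", "10.2.0.0/16", "10.3.0.0/16", "10.4.0.0/16"]

def violates_isolation(route_destinations, forbidden=NON_PROD) -> list:
    """Return (route, forbidden_cidr) pairs where a route covers a
    forbidden range; an empty list means the table is compliant."""
    routes = [ipaddress.ip_network(d) for d in route_destinations]
    bad = [ipaddress.ip_network(c) for c in forbidden]
    return [(str(r), str(b)) for r in routes for b in bad if r.overlaps(b)]

# Production's table: local VPC plus Network Services and Shared Services.
prod_routes = ["10.0.0.0/16", "10.10.0.0/16", "10.20.0.0/16"]
assert violates_isolation(prod_routes) == []     # compliant
print(violates_isolation(["10.1.0.0/16"]))       # a direct Dev route is flagged
```

Note that a broad route such as 10.0.0.0/8 would also be flagged, which is exactly why overlap (not equality) is the right test.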
Step 4: Tagging
Apply tags to all accounts for automation, cost tracking, and governance:
Tag Key        Example Value                      Purpose
----------------------------------------------------------------
Environment    Production / Dev / QA / Sandbox    Identify environment
Project        SAP Migration                      Track project costs
AccountType    Workload / Security / Shared       Categorize account type
Tip: Use AWS Tag Policies to enforce tag standardization across accounts.

Step 5: Logging & Monitoring Integration
CloudTrail:
  • Enable in each account.
  • Aggregate logs into Logging Account for centralized visibility.
CloudWatch Logs:
  • Forward application and infrastructure logs to the Logging Account.
Security Account Monitoring:
  • Monitor IAM activity, compliance events, and guardrails.
  • Optionally, integrate with Security Hub or SIEM for consolidated alerting.
Diagram – AWS Account Connectivity

Master Payer Account (Root)
  • Centralized billing & governance
  • Service Control Policies (SCPs) enforcement
  • No workloads
Platform Services Accounts
  • Network Services: Hub VPC, Transit Gateway, DNS, firewalls
  • Shared Services: CI/CD, automation, identity management
  • Security: Threat detection, monitoring, vulnerability management
  • Logging: Centralized audit logs, immutable storage
Workload Environment Accounts
  • Production: Live workloads, strict security & availability
  • Development: Feature development & experimentation
  • Pre-Prod: Production-like staging environment
  • Quality (QA): Testing & validation
  • Sandbox: Safe experimentation & temporary workloads
Notes:
  • This multi-account structure follows AWS best practices for SAP workloads.
  • Future accounts can be added under the appropriate OU.
  • Security, logging, and shared services accounts provide centralized governance.
  • Transit Gateway ensures secure, scalable VPC-to-VPC connectivity.
  • Enforce naming conventions, tagging standards, and SCPs for consistent governance.