Oracle Exadata SQL Tuning

ora-base

Advanced Oracle Database Engineering, Exadata Optimization, and SQL Performance Tuning.

Oracle Interval and Reference Partitioning: Automated Partition Management for Parent-Child Tables

Interval partitioning extends range partitioning with automatic partition creation — the database creates new partitions as data arrives outside the defined range. Reference partitioning allows a child table to inherit the partitioning of its parent via a foreign key, ensuring rows in related tables are always co-located in the same partition.

Interval Partitioning

How Automatic Partition Creation Works

An interval-partitioned table defines a seed partition (at minimum one VALUES LESS THAN partition) and an interval clause. When an INSERT or MERGE places a row in a value range that has no existing partition, Oracle automatically creates the partition. ...
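The mechanics described above can be sketched in DDL. This is a minimal example, not taken from the article itself; the table and constraint names are hypothetical:

```sql
-- Parent: range-partitioned by month with an interval clause.
-- Only the seed partition is declared; Oracle creates monthly
-- partitions automatically as rows arrive beyond the seed range.
CREATE TABLE orders (
  order_id   NUMBER      PRIMARY KEY,
  order_date DATE        NOT NULL
)
PARTITION BY RANGE (order_date)
INTERVAL (NUMTOYMINTERVAL(1, 'MONTH'))
(
  PARTITION p_seed VALUES LESS THAN (DATE '2026-01-01')
);

-- Child: inherits the parent's partitioning through the foreign key,
-- so order_items rows are always co-located with their parent orders.
-- The FK column must be NOT NULL for reference partitioning.
CREATE TABLE order_items (
  item_id  NUMBER PRIMARY KEY,
  order_id NUMBER NOT NULL,
  CONSTRAINT fk_oi_order FOREIGN KEY (order_id)
    REFERENCES orders (order_id)
)
PARTITION BY REFERENCE (fk_oi_order);
```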

March 1, 2026 · 5 min · mardaff

Oracle Multitenant Architecture: CDB and PDB Internals for Architects

Oracle Multitenant was introduced in 12c and is now the default deployment model. Despite broad adoption, the internal mechanics of how CDBs and PDBs share and isolate resources are frequently misunderstood. This article is for architects designing multitenant deployments who need to reason about performance isolation, resource governance, and operational boundaries. CDB vs Non-CDB: The Core Difference In a non-CDB, the database owns the entire Oracle instance. In a CDB: ...
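As a quick orientation to the container model the article dissects, the following queries show where a session lives and what containers a CDB holds. A sketch; the PDB name is hypothetical:

```sql
-- Which container is this session connected to?
SELECT sys_context('USERENV', 'CON_NAME') AS con_name FROM dual;

-- List all containers in the CDB (run as a common user from CDB$ROOT)
SELECT con_id, name, open_mode FROM v$containers ORDER BY con_id;

-- Move a common-user session into a PDB
ALTER SESSION SET CONTAINER = PDB_OLTP;
```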

March 1, 2026 · 5 min · mardaff

Oracle Partition Pruning: Mechanics, Gotchas, and VLDB Design Patterns

Partition pruning is the optimizer’s ability to eliminate partitions from a query’s access path based on predicate analysis. When it works, a query against a 10 TB table might access only 50 GB of relevant partitions. When it fails, you get a full scan across every partition — often with no visible indication in the execution plan that pruning is not occurring.

How Partition Pruning Works

The optimizer compares the predicates in the WHERE clause against the partition key definition — at parse time for literal predicates (static pruning), or deferred to execution time for bind variables (dynamic pruning). If a predicate can be resolved to a specific partition range, all partitions outside that range are excluded from the access path. ...
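Whether pruning occurred is visible in the Pstart/Pstop columns of the execution plan. A sketch; the sales table is hypothetical:

```sql
-- Literal predicates on the partition key: candidate for static pruning
EXPLAIN PLAN FOR
  SELECT COUNT(*)
  FROM   sales
  WHERE  sale_date >= DATE '2026-01-01'
  AND    sale_date <  DATE '2026-02-01';

-- Pstart/Pstop show which partition numbers survive pruning.
-- The value KEY in those columns means pruning was deferred to
-- execution time (e.g. bind variables).
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
```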

March 1, 2026 · 6 min · mardaff

Oracle Performance Tuning Scripts: Real-World Scenarios for CDB and Single Instance

This article is a practical tuning script library. Each script targets a specific, common real-world performance scenario — not abstract examples, but the situations that actually appear at 2am during an incident. Scripts are annotated with where they run, what the output means, and what action to take.

Reference Environment

Single Instance:
  Host: ora19c01.prod.example.com
  DB Name: ORCL19C
  Oracle: 19.22.0.0 (Oracle Linux 8)
  Memory: 256 GB RAM, 32 vCPU
  Storage: Local NVMe (non-Exadata)

CDB Environment:
  Host: cdb01.prod.example.com (Exadata VM)
  CDB Name: PRODCDB
  PDBs: PDB_OLTP, PDB_DWH, PDB_REPORTING
  Oracle: 19.22.0.0
  Memory: 512 GB RAM (Exadata X8M-2 node)

Scenario 1: “The Database Was Slow from 09:00 to 09:30 This Morning”

Situation: Application team reports intermittent slowness on cdb01. You need to identify what happened between 09:00 and 09:30. ...
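The standard starting point for a bounded time window like this is Active Session History. A minimal sketch, assuming the window is still in memory (the date is illustrative; for older windows, query DBA_HIST_ACTIVE_SESS_HISTORY instead — both require the Diagnostics Pack licence):

```sql
-- Top activity between 09:00 and 09:30 from in-memory ASH.
-- Sampled sessions on CPU have a NULL event; label them explicitly.
SELECT NVL(event, 'ON CPU') AS activity,
       COUNT(*)             AS samples
FROM   v$active_session_history
WHERE  sample_time BETWEEN TIMESTAMP '2026-03-01 09:00:00'
                       AND TIMESTAMP '2026-03-01 09:30:00'
GROUP  BY NVL(event, 'ON CPU')
ORDER  BY samples DESC
FETCH FIRST 10 ROWS ONLY;
```

Each sample represents roughly one second of session activity, so the sample counts approximate DB time per wait event across the window.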

March 1, 2026 · 12 min · mardaff

Oracle RAC Cache Fusion Deep Dive: How Blocks Travel Between Instances

Cache Fusion is the technology that allows Oracle RAC nodes to share a single consistent view of the database buffer cache across all instances. Without it, every cross-instance read or write would require a disk I/O. With it, dirty blocks travel directly from one instance’s buffer cache to another across the private interconnect. Understanding how this works is the foundation for diagnosing RAC-specific performance problems.

The Problem Cache Fusion Solves

In a traditional single-instance Oracle database, the buffer cache is authoritative. When an instance needs a block, it either finds it in cache or reads it from disk. In RAC, multiple instances have their own buffer caches, but they share the same datafiles. Without coordination, Instance A and Instance B could each have different versions of the same block in cache — a consistency nightmare. ...
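The cost of these cross-instance block transfers shows up as the gc wait class. A quick sketch of how to profile them on one instance:

```sql
-- Cache Fusion wait profile: 'gc cr' events are consistent-read copies,
-- 'gc current' events are current-mode (write-intent) transfers.
SELECT event,
       total_waits,
       ROUND(time_waited_micro / 1e6, 1)                 AS seconds_waited,
       ROUND(time_waited_micro / NULLIF(total_waits, 0)) AS avg_wait_us
FROM   v$system_event
WHERE  event LIKE 'gc%'
ORDER  BY time_waited_micro DESC;
```

Average waits in the low hundreds of microseconds are typical on a healthy interconnect; sustained multi-millisecond averages point at interconnect or LMS contention problems.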

March 1, 2026 · 6 min · mardaff

Oracle RAC Interconnect Tuning: OS Configuration, NIC Bonding, and Network Validation

The private interconnect is the central nervous system of an Oracle RAC cluster. Poor interconnect performance cascades directly into gc cr block 2-way, gc current block 2-way, and related wait times. Before tuning any Oracle parameter, the interconnect hardware and OS configuration must be validated. This article covers the full stack — from NIC configuration to Oracle’s use of the network.

Interconnect Architecture Options

Technology                   Bandwidth      Latency       Use Case
1 GbE                        1 Gb/s         ~100–200 μs   Small, low-traffic clusters only
10 GbE                       10 Gb/s        ~20–50 μs     Standard current minimum
25 GbE                       25 Gb/s        ~10–20 μs     High-throughput OLAP/mixed
RoCE (RDMA over Ethernet)    25–100 Gb/s    ~1–5 μs       Modern Exadata, high-performance
InfiniBand (EDR)             100 Gb/s       <1 μs         Pre-X8M Exadata, HPC

For new deployments, 25 GbE minimum is the practical recommendation. On Exadata X8M+, RoCE is the default interconnect technology, delivering near-InfiniBand latency over standard Ethernet infrastructure. ...
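Before validating hardware, confirm which interface Oracle is actually using as the interconnect — a mismatch here invalidates all downstream tuning. A sketch:

```sql
-- Which NIC/subnet is Oracle using for the private interconnect,
-- and where did that choice come from (OCR, init parameter, OS)?
SELECT name, ip_address, is_public, source
FROM   v$cluster_interconnects;
```

IS_PUBLIC should be NO for the interconnect interface; if the SOURCE column points at an unexpected origin, the cluster may be routing Cache Fusion traffic over the wrong network.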

March 1, 2026 · 5 min · mardaff

Oracle RAC Services: Workload Management, Application Continuity, and TAF

Oracle RAC Services are the primary mechanism for workload management in a RAC cluster. Rather than connecting directly to an instance (which bypasses all intelligent routing), applications connect to a named service, and Oracle manages where that service is active. Combined with Application Continuity (AC), services provide seamless failover that is invisible to most application workloads.

Services Fundamentals

A service is a named workload bucket with the following attributes:

Preferred instances: Primary instances where the service is active.
Available instances: Failover instances where the service moves if a preferred instance fails.
Cardinality: SINGLETON (active on one instance at a time) or UNIFORM (active on all instances simultaneously).
Role: PRIMARY or PHYSICAL_STANDBY (for Active Data Guard integration).

# List all services in the cluster
srvctl status service -d ORCL

# Add a service for OLTP (preferred on instance 1, failover to instance 2)
srvctl add service \
  -d ORCL \
  -s OLTP_SVC \
  -preferred ORCL1 \
  -available ORCL2 \
  -cardinality SINGLETON \
  -pdb PDB1

# Add a uniform service for read workloads (active on all instances)
srvctl add service \
  -d ORCL \
  -s REPORT_SVC \
  -preferred "ORCL1,ORCL2" \
  -cardinality UNIFORM \
  -pdb PDB1

srvctl start service -d ORCL -s OLTP_SVC
srvctl start service -d ORCL -s REPORT_SVC

Service-Level Performance Attributes

Services can be annotated with performance and connection management attributes that affect how the Oracle Net listener and connection pool handle connections: ...
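Application Continuity is enabled per service. A hedged sketch of the srvctl attributes involved, reusing the service names from the examples above (flag values are illustrative and should be checked against your srvctl version):

```shell
# Enable Application Continuity on the OLTP service:
# TRANSACTION failover type replays in-flight work after an outage,
# commit_outcome makes COMMIT results durable across failover.
srvctl modify service -db ORCL -service OLTP_SVC \
  -failovertype TRANSACTION \
  -commit_outcome TRUE \
  -failoverretry 30 \
  -failoverdelay 10 \
  -drain_timeout 60

# Verify the resulting service configuration
srvctl config service -db ORCL -service OLTP_SVC
```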

March 1, 2026 · 5 min · mardaff

Oracle Real Application Testing (RAT): A Complete Step-by-Step Guide

Oracle Real Application Testing (RAT) is the authoritative tool for validating changes to an Oracle database before production deployment. It enables you to replay a real production workload against a test system and measure the performance impact of changes such as database upgrades, parameter changes, hardware migrations, or schema modifications. This article walks through both RAT components — Database Replay and SQL Performance Analyzer (SPA) — with step-by-step procedures.

RAT Component Overview

Component                        What It Tests                                                            Best For
Database Replay                  Entire production workload (concurrent sessions, timing, interactions)   Database upgrades, hardware changes, OS upgrades
SQL Performance Analyzer (SPA)   Individual SQL statements from an AWR SQL Tuning Set                     Optimizer changes, parameter tuning, index changes

Both require the Oracle Real Application Testing licence option in addition to Enterprise Edition. ...
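The Database Replay capture side is driven by the DBMS_WORKLOAD_CAPTURE package. A sketch of the two bracketing calls; the capture name and directory object are hypothetical:

```sql
-- Start capturing the production workload into a directory object.
BEGIN
  DBMS_WORKLOAD_CAPTURE.start_capture(
    name     => 'CAP_PRE_UPGRADE',
    dir      => 'RAT_CAPTURE_DIR',
    duration => 3600);   -- seconds; NULL runs until finish_capture
END;
/

-- End the capture once a representative window has been recorded.
BEGIN
  DBMS_WORKLOAD_CAPTURE.finish_capture;
END;
/
```

The captured files are then transported to the test system, preprocessed, and replayed there with the replay client.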

March 1, 2026 · 8 min · mardaff

Oracle SQL Monitor: Real-Time Execution Diagnostics for Long-Running Queries

Oracle SQL Monitor is the most powerful single-query diagnostic tool in the Oracle performance toolkit. Unlike DBMS_XPLAN.DISPLAY_CURSOR, which gives you a static execution plan, SQL Monitor captures real-time runtime statistics at every step of the execution plan — rows processed, memory used, temporary I/O, CPU time, and elapsed time per operation. For parallel queries, it shows the workload distribution across parallel server processes.

When SQL Monitor Activates Automatically

SQL Monitor begins monitoring a SQL statement automatically when either: ...
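Statements below the automatic thresholds can be monitored explicitly, and the report is retrieved with DBMS_SQLTUNE. A sketch; the table name and SQL_ID are hypothetical:

```sql
-- The MONITOR hint forces monitoring regardless of the thresholds.
SELECT /*+ MONITOR */ COUNT(*) FROM sales;

-- Text report for a monitored execution; omit sql_id to report on
-- the most recently monitored statement.
SELECT DBMS_SQLTUNE.report_sql_monitor(
         sql_id       => '7m5h0xj9pz1b2',  -- hypothetical SQL_ID
         type         => 'TEXT',
         report_level => 'ALL')
FROM dual;
```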

March 1, 2026 · 6 min · mardaff

ZDM Migration: Exadata X8 On-Premises to Oracle Database@Azure (ADB-S) via GoldenGate

This article presents a complete, real-world migration playbook for moving an Oracle Database from an on-premises Exadata X8 to Oracle Database@Azure Autonomous Database Serverless (ADB-S). The two data centres are 70 km apart, connected via a dedicated Azure ExpressRoute circuit. We use Oracle Zero Downtime Migration (ZDM) with GoldenGate replication to achieve sub-minute application downtime.

Environment Overview

On-premises data centre (Frankfurt) — Exadata X8 full rack:
  Compute nodes: x8db01 (RAC inst1), x8db02 (RAC inst2), linked over InfiniBand
  Storage cells: x8cel01–x8cel08 (8 cells)
  Database: ORCL (CDB), PDB: PDB_ERPSYS
  Size: 3.2 TB (data) + 400 GB indexes
  ZDM service host: zdmhost01.prod.example.com
  GoldenGate Extract: running on x8db01 (Integrated)

Oracle DB@Azure (West Europe — Amsterdam) — ADB-S:
  Display name: adb-erpsys-prod
  DB name: ADBERPSY
  Shape: 16 OCPUs, auto-scale
  Storage: 10 TB (elastic)
  TLS: mTLS (wallet-based)
  OCI GoldenGate: gg-erpsys-deployment (managed Microservices deployment)
  Replicat: REP_ERPSYS (Integrated)

Network Architecture

On-Premises Exadata (Frankfurt DC, 10.10.0.0/16) → Azure ExpressRoute (dedicated, 10 Gbps, 70 km) → Oracle DB@Azure (Amsterdam, 172.16.0.0/16)

Key Network Points:
  x8db01 public IP: 10.10.1.101
  x8db01 SCAN IP: 10.10.1.200 (for GoldenGate source connection)
  zdmhost01: 10.10.5.50
  OCI GoldenGate GW: 172.16.10.10 (private endpoint)
  ADB-S private EP: 172.16.20.5 (private endpoint in the Oracle DB@Azure VNet)

ExpressRoute Circuit:
  Provider: equinix-frankfurt
  Bandwidth: 10 Gbps dedicated
  Measured RTT (Frankfurt → Amsterdam): 0.7 ms
  Throughput achieved (iperf3): 9.2 Gbps (92% of capacity)

Reference Environment

SOURCE (On-Premises Exadata X8):
  RAC node 1: x8db01.prod.example.com (oracle user)
  RAC node 2: x8db02.prod.example.com
  Storage cells: x8cel01–x8cel08.prod.example.com
  ZDM host: zdmhost01.prod.example.com (oracle user, ZDM installed here)
  DB unique name: ORCL
  PDB: PDB_ERPSYS
  Data size: 3.2 TB
  DB version: 19.18.0.0

TARGET (Oracle DB@Azure):
  Region: Azure West Europe (Amsterdam)
  Resource Group: rg-oracle-prod
  ADB name: adb-erpsys-prod
  ADB DB Name: ADBERPSY
  Service name: adberpsy_high (high-priority service)
  mTLS wallet dir: /etc/oracle/adb_wallet/ (on zdmhost01)
  OCI GG deploy: gg-erpsys-deployment (Microservices, managed by Oracle)
  OCI GG version: 21c

ADMIN JUMP HOST:
  admin01.prod.example.com (used for OCI CLI, ZDM monitoring)

Phase 1: Pre-Migration Assessment

1.1 Network Latency and Throughput Validation

Before any migration work, validate the ExpressRoute circuit performance from the ZDM host: ...
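The circuit validation described above can be sketched with standard tools from zdmhost01, using the endpoint IPs listed in the network section (ICMP may be filtered on some endpoints, and the iperf3 target assumes a server is running on a host in the Azure VNet):

```shell
# Round-trip latency from zdmhost01 toward the ADB-S private endpoint
ping -c 20 172.16.20.5

# Sustained throughput across the ExpressRoute circuit:
# 4 parallel streams for 30 seconds against an iperf3 server in the VNet
iperf3 -c 172.16.10.10 -P 4 -t 30
```

The reference figures in the article (0.7 ms RTT, 9.2 Gbps) are what a healthy 10 Gbps dedicated circuit over 70 km should roughly deliver.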

March 1, 2026 · 10 min · mardaff