Oracle ($ORCL) is no longer just a “legacy database company.” With the Q2 Fiscal Year 2026 earnings released today (December 10, 2025), it has firmly established itself as the most critical AI infrastructure chokepoint outside of Nvidia itself.
The numbers are staggering. The company reported $16.1 billion in quarterly revenue, a 14% increase year-over-year. But the real story—the one that sent the stock soaring to a market cap of $630 billion—is the backlog. Oracle now has Remaining Performance Obligations (RPO) of $523.3 billion.
To put that in perspective: companies have signed contracts promising to pay Oracle half a trillion dollars over the coming years just to secure access to its AI infrastructure.
Here is the deep dive into why Oracle—not AWS or Google—is winning the race for the most demanding AI workloads, and why Larry Ellison is betting the company’s future on nuclear fission.
The Hook: Why xAI and Microsoft Rent Oracle
In a strange twist of fate, Oracle has become the “Switzerland” of AI. Microsoft, despite owning Azure, uses Oracle Cloud Infrastructure (OCI) for some of its heaviest OpenAI workloads. Elon Musk’s xAI rented massive Oracle clusters before building its own “Colossus.”
Why? The answer lies in the networking physics.
When training a model like GPT-5 or Llama 4, you aren’t limited by how fast a single chip can calculate. You are limited by how fast 100,000 chips can talk to each other. If one GPU has to wait 5 microseconds for data from another GPU, the entire multimillion-dollar cluster sits idle for those 5 microseconds. Multiply that stall across the millions of gradient synchronizations in a training run, and “tail latency” becomes the most expensive problem in the world.
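To put a dollar figure on that, here is a back-of-envelope sketch. The GPU-hour rate and run length are illustrative assumptions, not Oracle’s actual pricing:

```python
# Back-of-envelope: what network stalls cost on a large training cluster.
# All inputs are illustrative assumptions, not Oracle's actual figures.

GPUS = 100_000                # cluster size from the example above
COST_PER_GPU_HOUR = 2.00      # assumed blended $/GPU-hour
RUN_DAYS = 90                 # assumed length of a frontier training run

run_hours = RUN_DAYS * 24
total_cost = GPUS * COST_PER_GPU_HOUR * run_hours

for stall_pct in (1, 5, 10):
    wasted = total_cost * stall_pct / 100
    print(f"{stall_pct:>2}% of time stalled on network waits -> ${wasted:,.0f} burned")
```

Even a 5% stall rate burns north of $20 million on a single run, which is why buyers will pay a premium for a better network.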
The Technical Deep Dive: RDMA and RoCE
To understand Oracle’s moat, you have to understand RDMA (Remote Direct Memory Access).
In a traditional cloud environment—let’s call it “Generation 1 Cloud”—data moves like this:
- Source App: GPU A wants to send data.
- OS Stack: The CPU takes the data, packages it (TCP/IP), and hands it to the network card.
- Network: The data travels over Ethernet.
- Destination OS: The receiving CPU unpacks the data.
- Destination App: The data is finally given to GPU B.
This involves the CPU at both ends. It adds latency (delay) and jitter (unpredictability). For a web server serving Netflix, this is fine. For a training run spread across 50,000 H100s, it’s catastrophic.
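You can feel this overhead without any special hardware. The crude sketch below times a round trip over plain loopback TCP—no physical network at all—so everything it measures is pure kernel-stack and CPU cost (port 50007 is an arbitrary choice):

```python
# Crude benchmark of the "Generation 1" path: every byte crosses the kernel
# TCP/IP stack on both ends, even over loopback (no real NIC involved).
import socket
import threading
import time

HOST, PORT, ROUNDS = "127.0.0.1", 50007, 1_000

srv = socket.create_server((HOST, PORT))  # bind + listen before the thread starts

def echo():
    conn, _ = srv.accept()
    with conn:
        while data := conn.recv(4096):
            conn.sendall(data)  # bounce every message straight back

threading.Thread(target=echo, daemon=True).start()

with socket.create_connection((HOST, PORT)) as c:
    c.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)  # no batching
    start = time.perf_counter()
    for _ in range(ROUNDS):
        c.sendall(b"x" * 64)   # small message, like a gradient chunk header
        c.recv(4096)
    rtt_us = (time.perf_counter() - start) / ROUNDS * 1e6

print(f"Mean TCP round trip over loopback: {rtt_us:.1f} microseconds")
srv.close()
```

On a typical machine this lands in the tens of microseconds per round trip—before a single packet has touched a real wire or switch.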
The OCI Architecture
Oracle Cloud Infrastructure (OCI) was rebuilt from scratch relatively recently (post-2016), allowing them to make a radical architectural choice: non-blocking RDMA over Converged Ethernet (RoCE v2) as the default standard.
In OCI’s architecture:
- Direct Access: GPU A puts data directly into the memory of GPU B via the network card (NIC).
- Bypass CPU: The CPU and OS are completely bypassed.
- Flat Network: There are fewer “hops” (switches) between any two servers.
This cuts latency from hundreds of microseconds—with millisecond-scale tails under load—down to single-digit microseconds.
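A toy model makes the arithmetic concrete. The per-hop and per-stack costs below are rough assumptions for modern data-center gear, not published OCI numbers:

```python
# Toy latency model: kernel bypass vs. kernel stack, deep vs. flat topology.
# All constants are rough assumptions for modern gear, not OCI specifications.

SWITCH_HOP_US = 0.5      # assumed forwarding latency per switch (µs)
NIC_US = 1.0             # assumed NIC processing per end (µs)
KERNEL_STACK_US = 30.0   # assumed kernel TCP/IP cost per end (µs)

def one_way_us(hops: int, kernel_bypass: bool) -> float:
    stack = 0.0 if kernel_bypass else 2 * KERNEL_STACK_US
    return 2 * NIC_US + hops * SWITCH_HOP_US + stack

print(f"5-hop path, kernel TCP : {one_way_us(5, False):5.1f} µs")
print(f"3-hop path, RDMA/RoCE  : {one_way_us(3, True):5.1f} µs")
```

Two things fall out: kernel bypass is the big win, and the flatter topology shaves off the rest.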
While AWS has its own solution called EFA (Elastic Fabric Adapter) and Azure uses InfiniBand, Oracle’s implementation of RDMA over commodity high-speed Ethernet gave them a price-performance advantage. They didn’t need expensive, effectively single-vendor InfiniBand gear for everything; they tuned standard Ethernet to perform like InfiniBand.
This is why Nvidia chose OCI for its own DGX Cloud. It wasn’t a partnership of convenience; it was a partnership of physics.
Contextual History: The Nuclear Pivot
If networking is the first bottleneck, power is the second. And frankly, it’s the scarier one.
A typical large data center consumes 30-50 megawatts (MW). The new “AI Factories” being designed for the next generation of models require 1 gigawatt (1,000 MW). That is roughly the power demand of the entire city of San Francisco.
You cannot just plug a 1GW load into the existing US power grid without waiting 7-10 years for transmission line upgrades. The grid is congested, old, and regulated to death.
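Some quick arithmetic on those figures (the household draw is an assumed US average):

```python
# Scale check on the power figures above; all inputs are rough assumptions.

LEGACY_DC_MW = 40        # midpoint of the 30-50 MW range
AI_FACTORY_MW = 1_000    # 1 GW
AVG_HOME_KW = 1.2        # assumed average US household draw (~10,500 kWh/year)

print(f"One AI factory ≈ {AI_FACTORY_MW / LEGACY_DC_MW:.0f} legacy data centers")
print(f"One AI factory ≈ {AI_FACTORY_MW * 1_000 / AVG_HOME_KW:,.0f} average US homes")
print(f"Running 24/7   ≈ {AI_FACTORY_MW * 8_760 / 1e6:.2f} TWh per year")
```

One AI factory is twenty-five legacy data centers, or nearly a million homes, running flat-out around the clock.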
The Roadmap: Small Modular Reactors (SMRs)
Larry Ellison confirmed on today’s earnings call that Oracle is moving forward with designing a data center powered by three Small Modular Reactors (SMRs).
This is not a concept art project. It is a necessity.
Why SMRs? Traditional nuclear plants (like Diablo Canyon) are massive civil engineering projects that take 15 years to build. SMRs are different:
- Factory-built: They are assembled in modules in a factory and shipped to the site on a truck or train.
- Self-contained: They can be placed directly next to the data center (“behind the meter”), bypassing the grid transmission bottleneck entirely.
- Passive Safety: They are designed to shut down automatically using gravity and convection, without human intervention or active power.
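How much power do three modules actually buy you? It depends heavily on the design. The sketch below uses published nameplate outputs for two leading designs as rough inputs:

```python
# Rough sizing: what do three SMR modules deliver? Nameplate outputs below
# are public figures for two leading designs; treat them as approximate.

designs_mwe = {
    "NuScale VOYGR module": 77,    # MWe per module
    "GE Hitachi BWRX-300": 300,    # MWe per module
}

for name, mwe in designs_mwe.items():
    print(f"3 x {name:<22}: {3 * mwe:>4} MWe")
```

Three of the larger modules land near the 1 GW “AI Factory” mark; three of the smaller ones would power a conventional campus, not a frontier cluster.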
The Regulation Hurdle
The technology for SMRs exists (companies like NuScale and Bill Gates’ TerraPower are leading), but the regulatory approval from the Nuclear Regulatory Commission (NRC) is notoriously slow.
However, Oracle is betting that the national security imperative of “AI Supremacy” will force the government to fast-track these approvals. If the US wants to beat China in AI, it needs power. Wind and solar, while cheap, are intermittent. You cannot train a model if the wind stops blowing for 4 hours; the batteries required to bridge that gap at 1GW scale are prohibitively expensive. Nuclear provides the “base load” power that runs 24/7/365.
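Here is the battery math that makes nuclear look attractive. The installed cost per kWh is an assumed grid-scale figure, not a vendor quote:

```python
# Rough capex to ride out a 4-hour lull at 1 GW on batteries alone.
# The installed $/kWh figure is an assumed grid-scale cost, not a quote.

LOAD_GW = 1.0
GAP_HOURS = 4
COST_PER_KWH = 400   # assumed installed cost, $/kWh

energy_kwh = LOAD_GW * 1e6 * GAP_HOURS          # GW -> kW, times hours
capex_billion = energy_kwh * COST_PER_KWH / 1e9

print(f"Storage needed : {energy_kwh / 1e6:.1f} GWh")
print(f"Battery capex  : ${capex_billion:.1f} billion for one 4-hour gap")
```

And that buys exactly one four-hour lull; multi-day weather events scale the bill linearly, while the cells age with every cycle.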
The Sovereign AI Factor
Beyond the hyperscalers, Oracle has a unique ace in the hole: Sovereign Cloud.
As nations realize that AI models are critical national infrastructure, they are demanding that data never leaves their borders. The EU’s GDPR was just the start. Now, countries want “Sovereign AI” trained on domestic supercomputers.
Oracle’s architecture allows them to deploy a “Dedicated Region”—a full copy of the OCI public cloud—inside a customer’s own data center or a government-secured facility.
- Why it matters: AWS and Azure struggle to deploy their massive, monolithic stacks in disconnected, small-form-factor environments. Oracle designed OCI to be modular from day one.
- The Customer: This effectively makes Oracle the “arms dealer” for the global AI race, selling the infrastructure to massive sovereign wealth funds and governments in Japan, France, and the Middle East who want to build their own GPT competitors without relying on a data center in Virginia.
Forward-Looking Analysis: The $523 Billion Backlog
The most important number in today’s earnings report wasn’t the $16.1B revenue; it was the Remaining Performance Obligations (RPO) of $523.3 billion, up 433% year-over-year.
RPO represents contracted revenue that has not yet been recognized—signed commitments for future service. This number is exploding because hyperscalers and AI startups are signing 3-5 year commitments to lock in GPU supply.
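One crude way to size the backlog is to annualize the reported quarter (ignoring the actual revenue-recognition schedule, which will differ):

```python
# How big is a $523.3B backlog relative to Oracle's run-rate revenue?
# Annualized revenue here is simply 4x the reported quarter -- a crude proxy.

RPO_B = 523.3
QUARTERLY_REVENUE_B = 16.1

annualized = QUARTERLY_REVENUE_B * 4
coverage_years = RPO_B / annualized

print(f"Annualized revenue : ${annualized:.1f}B")
print(f"Backlog coverage   : {coverage_years:.1f} years of today's revenue")
```

Roughly eight years of today’s revenue is already under contract. The catch, of course, is that Oracle still has to build the capacity to deliver it.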
The “Cloud Utility” Thesis
Oracle has effectively transformed itself into a utility company. Just as you pay your electric company for power, AI companies are signing long-term leases for Oracle’s “Intelligence Power.”
- The Moat: It’s not just the GPUs. Everyone can buy H100s or Blackwells (eventually). The moat is the ability to deploy them in clusters of 100,000 with working power and cooling (a rough sizing sketch follows below).
- The Risk: Oracle is spending massively on CapEx (Capital Expenditure). If the AI bubble bursts, or if “scaling laws” hit a wall where more compute doesn’t equal better models, Oracle is left with billions of dollars in depreciating hardware.
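How heavy is that deployment problem? Here is a rough sizing of a 100,000-GPU cluster, with the per-GPU wattage, server overhead, and cooling multiplier all assumed ballpark figures:

```python
# Why "working power and cooling" is the moat: rough draw of a 100k-GPU cluster.
# Per-GPU wattage, server overhead, and PUE are assumed ballpark figures.

GPUS = 100_000
WATTS_PER_GPU = 700      # roughly an H100 SXM board, assumed
SERVER_OVERHEAD = 1.5    # CPUs, memory, NICs, fans per GPU-watt, assumed
PUE = 1.2                # facility cooling/power overhead, assumed

it_load_mw = GPUS * WATTS_PER_GPU * SERVER_OVERHEAD / 1e6
facility_mw = it_load_mw * PUE

print(f"IT load      : {it_load_mw:,.0f} MW")
print(f"Facility load: {facility_mw:,.0f} MW")
```

That is three or four entire legacy data centers’ worth of power for a single training cluster—before you build the next one.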
Should You Buy at $630B Market Cap?
Trading at all-time highs, $ORCL is priced for perfection. However, compare it to the alternatives:
- AWS/Azure: Have massive legacy tech debt in their networking stacks. They are retrofitting; Oracle built for this.
- Snowflake/Databricks: These are software layers. They rely on the underlying cloud providers. If compute costs rise, their margins get squeezed.
Oracle is the only legacy giant that successfully pivoted to becoming a “Hardware/Cloud Hybrid.”
The Verdict: The valuation is stretched, but the fundamental thesis is sound. We are moving from the “Training Phase” to the “Infrastructure Phase” of the AI cycle. In this phase, the winners are the ones who own the power (Nuclear) and the pipes (RDMA). Oracle owns both.
The bet on $ORCL today is a bet that the demand for intelligence is infinite, and the supply of power is finite. In that equation, the supplier of power wins every time.
This is not just a stock pick; it is a leveraged play on the physics of intelligence.