
Oracle's New Bet: The "Airbnb for Chips" Strategy

It's no longer just about renting out GPUs. Oracle is pivoting to a "Bring Your Own Chips" model, positioning OCI as the neutral "Switzerland" of AI infrastructure. Here are the physics and economics behind the move.


[Image: A futuristic data center interior showing server blades from various brands connected to a glowing central network backbone.]

“It’s no longer bring your own beer, it’s bring your own chips.”

That was the sentiment echoing from Oracle’s Q2 2026 earnings call, marking a radical pivot in the cloud wars. For the last three years, the strategy of every hyperscaler (AWS, Azure, Google Cloud) has been identical: hoard as many Nvidia H100s as possible and rent them out at a premium.

But as the AI Boom enters its “Deployment Phase,” the math is breaking. The depreciation on a $30,000 GPU is brutal. In three years, it is effectively e-waste. Meanwhile, the power plants and fiber optic cables required to run them have useful lives measured in decades.

Larry Ellison has done the math. While Amazon and Microsoft are racing to build their own proprietary silicon (Trainium and Maia) to lock customers into their ecosystems, Oracle is going the other direction. They are becoming the “Airbnb of Silicon”. They provide the house, the power, and the plumbing, but let the customer bring the furniture.

Here is why OCI’s “Bring Your Own Chips” (BYOC) strategy is the most disruptive move in cloud infrastructure since the invention of the virtual machine.

The Physics of the “Universal Socket”

To understand why Oracle can offer “Bring Your Own Chips” while AWS cannot, you have to look at the cables.

In a modern AI supercluster, the network is the computer. When training a model like GPT-5 across 50,000 GPUs, those chips need to talk to each other constantly to exchange gradients. If the network halts, the training halts. This is the “All-Reduce” step in training algorithms, where every GPU must sync with every other GPU before the next calculation begins.
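The all-reduce step described above can be sketched in miniature. This is a rough illustration of the standard ring algorithm, not any cloud provider's actual fabric code: every "GPU" forwards partial sums around a ring until each one holds the full gradient sum.

```python
# Toy ring all-reduce: each "GPU" holds a gradient vector, and after the
# reduce every GPU holds the element-wise sum across all of them.
# Illustrative only -- real libraries (e.g. NCCL) pipeline these chunk
# transfers over the network fabric instead of looping in one process.

def ring_all_reduce(grads: list[list[float]]) -> None:
    """In-place ring all-reduce over a list of equal-length gradient vectors."""
    n, dim = len(grads), len(grads[0])
    assert dim % n == 0, "toy version assumes dim divisible by worker count"
    chunk = dim // n

    def span(c: int) -> range:        # index range of chunk c
        return range(c * chunk, (c + 1) * chunk)

    # Phase 1 -- reduce-scatter: after n-1 steps, worker i holds the
    # complete sum for chunk (i + 1) % n.
    for step in range(n - 1):
        for i in range(n):
            c = (i - step) % n        # chunk worker i forwards this step
            for j in span(c):
                grads[(i + 1) % n][j] += grads[i][j]

    # Phase 2 -- all-gather: circulate the completed chunks so every
    # worker ends up with the full summed vector.
    for step in range(n - 1):
        for i in range(n):
            c = (i + 1 - step) % n    # completed chunk worker i forwards
            for j in span(c):
                grads[(i + 1) % n][j] = grads[i][j]

# Four "GPUs", each with a 4-element gradient: sums to 10.0 per element.
workers = [[float(w)] * 4 for w in range(1, 5)]
ring_all_reduce(workers)
print(workers[0])   # [10.0, 10.0, 10.0, 10.0] on every worker
```

The point of the ring topology is that every link is busy on every step, which is exactly why the network, not the chip, becomes the bottleneck if it stalls.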

The Latency Trap

The problem with proprietary networks is lock-in.

  • Azure relies on InfiniBand. While it offers incredible speed, it is a specialized, non-standard protocol that requires specific network interface cards (NICs) and switches. You cannot easily plug a non-Nvidia rack into an InfiniBand spine without massive friction.
  • AWS uses EFA (Elastic Fabric Adapter). This is a proprietary wrapper around Ethernet. It works well within the AWS ecosystem but creates a dependency on AWS-specific drivers and control planes.
  • Google uses Optical Circuit Switches (OCS). This technology is brilliant for TPUs but bespoke to Google’s data centers.

Oracle’s Ace: RoCE v2

Oracle Cloud Infrastructure (OCI) made a specific architectural bet years ago that seemed risky at the time: RDMA over Converged Ethernet (RoCE v2).

RDMA (Remote Direct Memory Access) allows one computer to access the memory of another computer without involving the CPU or the Operating System. It is zero-copy networking. Typically, this requires InfiniBand. However, Oracle engineered a way to run this over standard, commodity Ethernet at massive scale.
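The "zero-copy" idea behind RDMA can be miniaturized in plain Python. This is a loose local analogy, not RDMA itself: a `memoryview` lets code reference a buffer's bytes without duplicating them, the way an RDMA-capable NIC reads remote memory without the CPU staging a copy.

```python
import time

# Zero-copy vs. copy, in miniature. A memoryview slice shares the
# underlying bytes; bytes() duplicates all of them.

payload = bytearray(64 * 1024 * 1024)    # 64 MiB "gradient buffer"

t0 = time.perf_counter()
copied = bytes(payload)                  # copy path: duplicates 64 MiB
t_copy = time.perf_counter() - t0

t0 = time.perf_counter()
view = memoryview(payload)[:]            # zero-copy path: shares the buffer
t_view = time.perf_counter() - t0

print(f"copy: {t_copy * 1e3:8.3f} ms")
print(f"view: {t_view * 1e6:8.3f} us")

view[0] = 42                             # writes through to the original
assert payload[0] == 42
assert view.obj is payload               # same memory, no duplication
```

Scaled up to tens of thousands of GPUs exchanging gradients every step, avoiding that copy (and the CPU cycles behind it) is the entire value proposition.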

By tuning standard Ethernet to perform with the low latency of InfiniBand, Oracle created a “Universal Socket.” Because it uses standard Ethernet frames rather than proprietary encapsulation, OCI can plug any rack into its spine-leaf network topology. This physical flexibility allows a customer to park a rack of AMD MI450s, a rack of Nvidia Blackwells, and a rack of custom silicon from a startup like Cerebras side-by-side. They all plug into the same 800 Gbps fabric without requiring custom adapters.

This is why Oracle is the only major cloud provider that is truly hardware-agnostic. They built a “voltage converter” for the AI world while everyone else was building proprietary plugs.

The “Ampere” Factor: The Hidden CPU War

While GPUs get the headlines, the “Bring Your Own Chips” strategy also applies to the CPU layer, where Oracle has a unique advantage through its investment in Ampere Computing.

Traditional x86 CPUs (Intel/AMD) are power-hungry generalists. For AI inference, you often don’t need a massive GPU; you just need a lot of efficient integer math. Oracle’s partnership with Ampere allows them to deploy massive clusters of ARM-based, 192-core processors.

In the BYOC model, this is critical. A customer building a specialized inference engine (e.g., for video transcoding or real-time voice translation) might find that standard Intel Skylake instances are too expensive due to power costs. OCI allows these customers to deploy custom ARM silicon, or even bring in Ampere instances as a “host” for their own accelerators, drastically lowering the Total Cost of Ownership (TCO).
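The power-cost argument is easy to make concrete with back-of-envelope arithmetic. Every number below is an illustrative assumption, not a vendor benchmark; the shape of the calculation is the point.

```python
# Back-of-envelope electricity cost for an always-on inference server.
# Wattages and the $/kWh rate are assumed for illustration only.

def annual_power_cost(watts: float, usd_per_kwh: float = 0.10) -> float:
    """Electricity cost of running one server 24/7 for a year."""
    hours = 24 * 365
    return watts / 1000 * hours * usd_per_kwh

x86_server_watts = 700   # assumed dual-socket x86 box
arm_server_watts = 400   # assumed 192-core Ampere-class box

print(f"x86: ${annual_power_cost(x86_server_watts):,.0f}/yr per server")
print(f"ARM: ${annual_power_cost(arm_server_watts):,.0f}/yr per server")
```

Per server the difference looks small; across a fleet of tens of thousands of inference nodes running for years, the gap compounds into the TCO advantage the paragraph describes.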

Unlike AWS Graviton, which is strictly for AWS customers, Ampere chips are merchant silicon. This aligns perfectly with the BYOC ethos. You are not locking yourself into “Oracle Silicon”; you are using an open-standard ARM chip that runs on Oracle’s metal.

The Economics: CapEx vs. OpEx

Why would Oracle want to let you bring your own chips? Isn’t renting out GPUs the most profitable business on Earth right now?

Yes, for now. But looking at the longer term, the risk profile changes.

The “Inventory Risk” Problem

If AWS spends $10 billion specifically on Nvidia B200s, they are taking a massive inventory risk. If Nvidia releases the B300 next year with 2x performance, that $10 billion inventory loses 40% of its value immediately. This is “High-Risk CapEx.”

The “Landlord” Model

Oracle’s BYOC strategy shifts that risk to the customer.

  1. The Customer (e.g., xAI, OpenAI, or a Sovereign Nation) buys the chips. They take the depreciation hit.
  2. Oracle provides the Power (1GW+ connections), the Cooling (liquid cooling loops), and the Network.

Infrastructure CapEx (Data Centers, Power Grids, Fiber) amortizes over 20–30 years. Silicon CapEx amortizes over 3–4 years. By decoupling them, Oracle improves its Return on Invested Capital (ROIC). They become a utility company, selling “Intelligence Power,” rather than a hardware rental company.
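The decoupling argument reduces to straight-line depreciation over different asset lives. A minimal sketch, using the lifespans from the paragraph above and an illustrative $10B of CapEx on each side:

```python
# Annualized capital charge for the same CapEx, split by asset life.
# The $10B figures and the 3.5 / 25 year lives are illustrative.

def annual_straight_line(capex_usd: float, useful_life_years: float) -> float:
    """Straight-line depreciation: equal charge each year of the asset's life."""
    return capex_usd / useful_life_years

gpu_fleet = annual_straight_line(10e9, 3.5)    # silicon: ~3-4 year life
data_center = annual_straight_line(10e9, 25)   # power/fiber/buildings: 20-30 years

print(f"GPU fleet:   ${gpu_fleet / 1e9:.2f}B per year")
print(f"Data center: ${data_center / 1e9:.2f}B per year")
# The same dollar of CapEx costs roughly 7x more per year when it sits in
# fast-depreciating silicon -- the risk BYOC pushes onto the customer.
```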

As Larry Ellison put it, they are “neutral” on GPUs. They don’t care if you use Nvidia, AMD, or your own secret sauce. They just want to be the ones selling the electricity and the bandwidth.

The Geopolitical Angle: Sovereign Clouds

This technical flexibility unlocks a massive new market: Sovereign AI.

Nations like France, Saudi Arabia, and Japan are moving to prevent their AI infrastructure from becoming permanently dependent on American tech giants. They want to build “AI Factories” inside their borders. These facilities often require international or diversified hardware to avoid sanctions or supply chain chokepoints.

Because OCI is modular and chip-agnostic, Oracle can ship a “Dedicated Region,” a full copy of the OCI public cloud, to a government bunker in Riyadh or a data center in Frankfurt. The customer can populate it with whatever silicon aligns with their national interests. This includes domestic chips that wouldn’t be supported on AWS or Azure.

This capability has made Oracle the “Arms Dealer” of choice for the international market. They sell the weapon system (OCI), but they let the customer load whatever ammunition (chips) they want. This contrasts sharply with AWS Outposts or Azure Stack, which are rigid extensions of the American-controlled stack.

The Future Outlook: The Heterogeneous Data Center

The market is moving away from the “Nvidia Monoculture.” The future of AI inference is specialized. Developers will use Nvidia for training, AMD for batch processing, and specialized Groq or Etched chips for real-time inference.
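The heterogeneous future described above amounts to a routing problem: match each workload to the accelerator class it runs cheapest on. A trivial dispatcher sketch, where the pool names and rules are hypothetical illustrations rather than any real scheduler's API:

```python
# Hypothetical workload-to-accelerator routing in a heterogeneous fleet.
# Pool names and rules are illustrative, not a real scheduler.

ROUTING_RULES = {
    "training":           "nvidia-blackwell-pool",
    "batch-inference":    "amd-mi450-pool",
    "realtime-inference": "groq-lpu-pool",
}

def route(job_kind: str) -> str:
    """Pick an accelerator pool for a job; fall back to the GPU pool."""
    return ROUTING_RULES.get(job_kind, "nvidia-blackwell-pool")

print(route("realtime-inference"))   # groq-lpu-pool
print(route("unknown-kind"))         # nvidia-blackwell-pool
```

A cloud locked to one vendor's silicon can only ever offer one row of that table; a chip-agnostic fabric can offer all of them side by side.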

In a heterogeneous world, the winner is the platform that supports the most diversity.

  • AWS is building an “Amazon Way” ecosystem (Trainium + Inferentia).
  • Azure is building a “Microsoft Way” ecosystem (Maia + Copilot).
  • Oracle is building the “Open Way.”

For the last decade, being the “integrated player” (Apple style) was the winning strategy in cloud. However, in the physical world of high-voltage power and massive heat dissipation, being the “neutral utility” might be the ultimate moat.

Oracle isn’t trying to beat Nvidia. It’s trying to be the power outlet that Nvidia, and everyone coming to replace Nvidia, has to plug into.
