
The Chips-for-Equity Trap: OpenAI's $830 Billion Problem

OpenAI is burning $17 billion a year and trading equity for chips because it cannot afford Nvidia hardware. This is not growth. This is survival.


Key Takeaways

  • The $17 Billion Hole: OpenAI’s internal projections show losses reaching $17 billion in 2026, even as revenue hits $20 billion. The company is effectively burning $1.4 billion per month.
  • Chips-for-Equity: The rumored $10 billion Amazon investment is not pure cash. It is structured around AWS infrastructure and Trainium chips. OpenAI is trading ownership for hardware it cannot afford.
  • The Google Advantage: Unlike OpenAI, Google trains Gemini on proprietary TPUs with zero external rent extraction. This structural difference means Google’s cost per token is fundamentally lower.
  • Microsoft’s Dilemma: Despite holding 27% of OpenAI, Microsoft absorbed a $3.1 billion quarterly hit from its equity stake. The “partnership” is now a liability on both sides of the balance sheet.

The $830 Billion Question

OpenAI is reportedly seeking a valuation of $830 billion in its latest funding round, a number that would place it among the ten most valuable companies on Earth. The catch: the company is projected to lose up to $17 billion in 2026.

The math does not add up. OpenAI’s $20 billion in 2025 revenue sounds impressive until compared to the projected $17 billion in losses. This is not a company scaling toward profitability. This is a company desperately trading equity for the computing power it needs to survive.

The catalyst for this analysis is a now widely-reported deal: OpenAI is in advanced talks for a $10 billion investment from Amazon, structured not as pure capital, but as a “chips-for-equity” arrangement. Under this framework, OpenAI would receive access to Amazon Web Services (AWS) infrastructure and the company’s proprietary Trainium chips in exchange for ownership stakes. It is a deal born of necessity, not strength.

The Structural Trap: Renters vs. Owners

The economics of training frontier AI models are brutal. A frontier model like GPT-5 or GPT-6 requires tens of thousands of specialized Graphics Processing Units (GPUs), primarily manufactured by Nvidia. These chips are not cheap. A single Nvidia H100 costs roughly $30,000, and a B200 (the current flagship) is priced even higher. A training cluster for a frontier model can require 100,000 or more of these chips, placing the hardware cost alone in the billions.
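A quick back-of-the-envelope check makes the scale concrete. This is a sketch using only the approximate figures quoted above, not exact pricing:

```python
# Rough hardware bill for a frontier training cluster, using the
# approximate figures from the paragraph above (illustrative only).
chips_in_cluster = 100_000   # GPUs needed for a frontier training run
price_per_h100 = 30_000      # approximate cost of one Nvidia H100, in USD

hardware_cost = chips_in_cluster * price_per_h100
print(f"${hardware_cost / 1e9:.0f}B in chips alone")  # prints "$3B in chips alone"
```

And that is before networking, power, cooling, or the buildings themselves.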

OpenAI does not own most of its infrastructure. It rents it.

The Landlord Problem

OpenAI’s primary compute partner has been Microsoft, which has invested over $13 billion in the company and provides access to Azure cloud infrastructure. But here is the critical detail: every hour of compute time on Azure incurs a cost. Microsoft is not providing this infrastructure for free. OpenAI is paying layered margins on top of Nvidia’s already-premium pricing.

The economic stack looks like this:

  1. Nvidia takes its margin on the chip.
  2. Microsoft Azure takes its margin on the cloud service.
  3. OpenAI pays both and hopes its ChatGPT subscription revenue covers the difference.

This is a fundamentally losing structure at scale. The more models OpenAI trains, the more money it loses.
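The compounding effect of that stack can be sketched numerically. The margin percentages below are assumptions chosen for illustration; neither Nvidia's nor Azure's actual margins on these specific deals are public:

```python
# Margin stacking: each layer prices its output as cost / (1 - gross margin).
# Both margin values below are illustrative assumptions, not disclosed figures.
chip_build_cost = 1.0    # normalized cost to manufacture the chip
nvidia_margin = 0.70     # assumed gross margin on the chip sale
cloud_margin = 0.30      # assumed cloud-provider margin on rented compute

price_to_cloud = chip_build_cost / (1 - nvidia_margin)   # what the cloud pays Nvidia
price_to_renter = price_to_cloud / (1 - cloud_margin)    # what the renter pays the cloud

print(f"The renter pays {price_to_renter:.1f}x the underlying hardware cost")
# → roughly 4.8x under these assumptions; a vertically integrated owner pays closer to 1x
```

The exact multiple depends on the assumed margins, but the shape of the problem does not: a renter always pays every layer's markup, an owner pays none of them.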

Google: The Landlord-Free Competitor

Contrast this with Google DeepMind, which develops Gemini. Google designs and manufactures its own custom chips, called Tensor Processing Units (TPUs). When Google trains Gemini, it is not paying Nvidia. It is not paying a cloud provider margin. The infrastructure is vertically integrated.

Recent benchmarks show that Google’s TPU v7 achieves per-token inference costs roughly on par with Nvidia’s GB200, after a 70% cost reduction from the previous generation. For training, the advantage is even starker: Google’s ASIC-optimized architecture means no rental overhead, no third-party profit extraction.

This is not a “nice-to-have” efficiency gain. This is a fundamental structural advantage. If Google and OpenAI are competing to build the next frontier model, Google can afford to train it for less money. Over time, this compounds.

The Amazon Deal: A Desperation Play

The $10 billion Amazon investment is being framed by some analysts as a “strategic diversification.” The reality is less flattering. OpenAI is seeking “compute insurance” because it cannot afford to be bottlenecked by any single provider, including Microsoft.

The Terms

According to reports from January 22, 2026, the Amazon investment builds on a foundational $38 billion, seven-year cloud services agreement signed in November 2025. Under this arrangement, OpenAI commits to migrating significant training and inference workloads to AWS. The investment itself comprises not just cash but substantial tranches of Amazon’s proprietary Trainium chips.

The stated goal is to reduce OpenAI’s operational burn, which some reports put at $12 billion per quarter, though updated projections suggest annual losses closer to $17 billion. By diversifying compute providers, OpenAI gains pricing power against Microsoft and ensures it is not locked out of capacity during the critical development phase of GPT-6.

The Trainium Gamble

There is a significant risk embedded in this deal: Amazon’s Trainium chips are unproven at scale for frontier model training. No public benchmarks exist comparing Trainium 3 directly to Nvidia’s B200 for the specific workloads OpenAI runs. AWS documentation lists Trainium2 instances with impressive memory configurations (8192 GiB), but performance metrics against Nvidia hardware are conspicuously absent.

If Trainium underperforms in production, OpenAI will have traded equity for chips it cannot effectively use. The company would then be forced back to Nvidia hardware, paid for at Azure or AWS cloud rates, while also having diluted its ownership.

The Microsoft Relationship: From Partnership to Liability

The October 28, 2025 renegotiation between OpenAI and Microsoft revealed the fractures in what was once described as “the partnership of the decade.”

The Key Terms

  • Microsoft’s Stake: 27% ownership in the newly restructured OpenAI Group PBC (Public Benefit Corporation).
  • OpenAI’s Commitment: $250 billion in Azure cloud purchases over the life of the agreement.
  • Microsoft’s Concession: The company relinquished its “right of first refusal” on OpenAI’s future cloud computing purchases, freeing OpenAI to pursue deals with Amazon and Oracle.
  • Revenue Share: OpenAI will share 20% of its revenue with Microsoft until achieving Artificial General Intelligence (AGI), as verified by an independent panel.

The $250 billion Azure commitment is staggering. But consider the inverse: OpenAI is locked into spending tens of billions with Microsoft regardless of whether it finds cheaper alternatives. This is golden handcuffs, not a partnership.

Microsoft’s Quarterly Hit

Microsoft’s Q1 FY2026 earnings (for the quarter ending September 30, 2025) revealed the cost of its OpenAI bet. The company absorbed a $3.1 billion net income hit tied to its OpenAI equity stake, reflecting OpenAI’s estimated $11.5 billion in losses for the quarter.

This is the paradox: Microsoft’s investment in OpenAI is simultaneously its most strategic AI asset and a significant drag on its financials. The more OpenAI loses, the more Microsoft’s equity stake depreciates.
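The reported figure is consistent with equity-method accounting, under which an investor recognizes its ownership share of the investee's losses. A rough consistency check on the numbers above:

```python
# Equity-method accounting: the investor books its ownership share of the
# investee's losses. A rough consistency check on the reported figures.
microsoft_stake = 0.27           # Microsoft's reported ownership of OpenAI
openai_quarterly_loss = 11.5e9   # OpenAI's estimated loss for the quarter, USD

recognized_hit = microsoft_stake * openai_quarterly_loss
print(f"${recognized_hit / 1e9:.1f}B")  # prints "$3.1B", matching the reported hit
```

In other words, as long as OpenAI keeps losing roughly $11 billion a quarter, Microsoft mechanically absorbs roughly $3 billion of it every quarter.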

The $500 Billion Stargate: Doubling Down on a Losing Bet

OpenAI’s response to its structural cost problem is not to cut expenses. It is to spend more.

Project Stargate

Announced in early 2025 as a joint venture with SoftBank, Oracle, and Middle Eastern sovereign wealth fund MGX, Project Stargate is a $500 billion infrastructure initiative to build AI data centers across the United States. The goal: nearly 10 gigawatts of compute capacity by decade’s end, enough to power the development of models far beyond GPT-6.

Construction is already underway at sites including Abilene, Texas, with expansions planned in New Mexico, Ohio, and Wisconsin. In January 2026, OpenAI and SoftBank announced a $1 billion joint investment in SB Energy to power these facilities with 1.2 gigawatts of solar and storage capacity.

The Paradox of Scale

The logic of Stargate is that scale will eventually reduce per-unit costs. If OpenAI can build enough infrastructure, it can eventually compete with Google’s TPU cost structure, or so the theory goes.

The problem: this strategy requires years and tens of billions of dollars before any cost savings materialize. In the meantime, OpenAI must continue burning cash at a rate of $17 billion per year while defending its market share against Google, Anthropic, and a growing cohort of open-source alternatives.

OpenAI’s market share in the AI model space already dropped to 27% by end-2025, down from an estimated 50% in 2023. The window to achieve scale is closing.

The Historical Parallel: This is Iridium, Not Amazon

OpenAI’s trajectory is frequently compared to early Amazon, a company that famously lost money for years while building the infrastructure that would eventually dominate e-commerce and cloud computing. But a more apt parallel is Iridium, the satellite phone venture of the 1990s.

The Iridium Playbook

Iridium invested billions building a 66-satellite constellation to enable global mobile communication. The technical achievement was genuine. The problem: the market for satellite phones at $3,000 per handset and $6 per minute was vanishingly small. Iridium filed for bankruptcy in 1999, less than a year after launch.

The lesson: you can build unparalleled infrastructure and still fail if the economics do not close. OpenAI is building the most advanced AI infrastructure in the world. But if its cost per token remains structurally higher than Google’s or Meta’s open-source alternatives, no amount of compute will save it.

What This Means for Investors

If You Hold Amazon (AMZN)

The deal validates Amazon’s custom silicon strategy and positions AWS as a player in frontier AI training. However, the deal’s success depends on Trainium 3 performing at scale, an unproven proposition. Watch for integration updates in AWS earnings calls.

If You Hold Microsoft (MSFT)

Microsoft’s 27% stake in OpenAI is a double-edged sword. If OpenAI’s losses continue at current rates, Microsoft will absorb billions in quarterly write-downs. If OpenAI achieves an IPO at an $830 billion valuation, Microsoft’s stake would be worth roughly $224 billion, a massive windfall.

The risk: an IPO at that valuation requires belief in eventual profitability. As of January 2026, that remains speculative.

If You Hold Nvidia (NVDA)

Nvidia remains the bottleneck. OpenAI’s desperation for alternatives (Amazon Trainium, Oracle infrastructure) underscores just how dependent the entire AI industry remains on Nvidia’s CUDA ecosystem. The custom ASIC trend is a long-term threat, but Nvidia’s time-to-market advantage (GB300, VR200 in H2 2026) keeps it dominant for now.

If You Hold Google (GOOG)

Google is the quiet winner in this chaos. Its vertically integrated TPU strategy means it does not face the margin stacking problem crushing OpenAI. Every dollar OpenAI spends on rented Nvidia hardware is a dollar Google saves. Long-term, this structural difference compounds.

The Bottom Line

OpenAI’s $830 billion valuation target is not a reflection of current fundamentals. It is a bet on a future where the company achieves the scale, efficiency, and profitability that have so far eluded it. The chips-for-equity deals with Amazon and others are not strategic expansions. They are survival moves by a company that has built the most advanced AI models in the world but cannot figure out how to afford them.

For investors, the question is simple: Do you believe OpenAI can escape the structural cost trap before it runs out of equity to trade?

If yes, the potential upside is enormous. If no, you are buying into the most expensive science project in corporate history.

