
Broadcom AI Warning: 65% Margins vs Custom Chips

Broadcom crushed Q4 earnings with $18B revenue and 74% AI growth, yet shares plunged 10.7%. This report breaks down the margin compression paradox - why booming AI chip sales can hurt profitability.


[Image: Futuristic semiconductor manufacturing facility showing advanced AI chips with declining profit margin graphs]

Broadcom (AVGO) released Q4 fiscal 2025 earnings that surprised market observers. Despite reporting $18.02 billion in revenue (up 28% year-over-year) and non-GAAP earnings of $1.95 per share (beating estimates by $0.31), the results were mixed. The company even guided AI revenue to double year-over-year in Q1 2026, hitting $8.2 billion - continuing the surge that began after their Q3 earnings.

Wall Street’s response? Broadcom’s stock crashed 10.77% on December 12, 2025.

The culprit? A single line buried in the guidance: Broadcom expects its consolidated gross margin to decline by approximately 100 basis points (1%) in Q1 fiscal 2026 due to a “higher mix of AI revenue.”

Wait. How does winning the AI chip race make you less profitable?

The Margin Compression Paradox

Here’s the uncomfortable truth that Broadcom just confirmed: AI chips are structurally less profitable than legacy semiconductors and software.

In Q4 fiscal 2025, Broadcom achieved a non-GAAP gross margin of 78% - an impressive figure that expanded 100 basis points year-over-year. But CEO Hock Tan warned investors that this margin will compress as AI becomes a larger share of the revenue mix.

Why? Because Broadcom’s AI business operates at lower gross margins than its traditional networking ASICs and enterprise software portfolio. The company’s infrastructure software division (38.5% of revenue) operates at near-monopoly margins, while custom AI accelerators for hyperscalers like Google, Meta, and OpenAI come with razor-thin pricing.

Let’s break down the economics.
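As a rough sketch of how a mix shift toward lower-margin AI revenue compresses the blended gross margin, consider the toy model below. The segment shares and per-segment margins are illustrative assumptions, not Broadcom's disclosed figures; the point is that shifting a few percentage points of revenue from high-margin segments to a lower-margin one is enough to produce a roughly 100 basis point decline.

```python
# Hypothetical mix-shift sketch. Segment shares and margins are assumed
# for illustration only - they are not Broadcom's disclosed figures.

def blended_gross_margin(segments):
    """segments: list of (revenue_share, gross_margin); shares sum to 1."""
    return sum(share * margin for share, margin in segments)

# Before: software-heavy mix (assumed shares and margins)
before = blended_gross_margin([
    (0.385, 0.92),  # infrastructure software at near-monopoly margins
    (0.365, 0.78),  # traditional networking ASICs
    (0.250, 0.62),  # custom AI accelerators (lower margin)
])

# After: AI grows to a larger share of revenue at the others' expense
after = blended_gross_margin([
    (0.365, 0.92),
    (0.335, 0.78),
    (0.300, 0.62),
])

print(f"before: {before:.1%}, after: {after:.1%}, "
      f"delta: {(after - before) * 1e4:.0f} bps")
```

With these assumed numbers, a five-point swing of revenue share into the lowest-margin segment knocks roughly a hundred basis points off the blended gross margin, even though every segment's own margin is unchanged.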

Understanding Gross Margin vs. Operating Margin

Before diving into the specifics, it’s worth clarifying the difference between gross margin and operating margin - a distinction that’s critical for understanding Broadcom’s dilemma.

Gross Margin is the percentage of revenue left after subtracting the direct cost of goods sold (COGS):

\text{Gross Margin} = \frac{\text{Revenue} - \text{COGS}}{\text{Revenue}} \times 100

For semiconductors, COGS includes wafer costs, packaging, testing, and direct manufacturing overhead. It does not include R&D, sales, or general administrative expenses.

Operating Margin goes one step further, subtracting all operating expenses (R&D, sales, marketing, G&A):

\text{Operating Margin} = \frac{\text{Operating Income}}{\text{Revenue}} \times 100

Here’s the paradox: Broadcom’s AI chips can have lower gross margins but still boost operating profits through what’s called operating leverage - spreading fixed R&D and administrative costs over much higher revenue.

In Q4 2025, Broadcom’s non-GAAP operating margin reached 66.2%, up 350 basis points year-over-year, even as the AI mix compressed gross margins. The company is making more money overall by selling lower-margin products at massive scale.
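A minimal sketch of that dynamic, using the two formulas above with purely illustrative revenue, COGS, and opex figures (not Broadcom's actual disclosures): revenue scales sharply, COGS grows faster than revenue because of the lower-margin mix, but fixed operating expenses barely move - so gross margin falls while operating margin rises.

```python
# Operating-leverage sketch with hypothetical figures (in $B).
# Gross margin falls as lower-margin revenue scales, yet operating
# margin rises because fixed opex (R&D, G&A) grows far slower.

def margins(revenue, cogs, opex):
    gross = (revenue - cogs) / revenue
    operating = (revenue - cogs - opex) / revenue
    return gross, operating

# Year 1: smaller revenue base, richer mix (illustrative)
g1, o1 = margins(revenue=14.0, cogs=3.1, opex=2.6)

# Year 2: revenue up ~28%, COGS grows faster (lower-margin mix),
# but fixed opex is nearly flat
g2, o2 = margins(revenue=18.0, cogs=4.3, opex=2.7)

print(f"Y1 gross {g1:.1%} / operating {o1:.1%}")
print(f"Y2 gross {g2:.1%} / operating {o2:.1%}")
```

In this toy model, gross margin drops by almost two points while operating margin improves by a similar amount - the same shape as Broadcom's reported quarter, where gross margin guidance fell even as operating margin expanded 350 basis points.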

But that doesn’t stop investors from panicking when they see margin compression in the near term.

The CoWoS Bottleneck: Why AI Chips Cost More

The technical reason AI chips have lower margins boils down to a single acronym: CoWoS (Chip-on-Wafer-on-Substrate).

CoWoS is TSMC’s 2.5D advanced packaging technology that integrates logic chiplets with high-bandwidth memory (HBM) on a silicon or organic interposer. It’s the enabling technology behind NVIDIA’s H200 and Blackwell GPUs, AMD’s MI300 series, and Broadcom’s custom AI accelerators for hyperscalers.

Here’s the problem: CoWoS costs have risen more than 20% in the past year due to capacity constraints and surging demand from NVIDIA. TSMC is running near full utilization, and the company is scrambling to build new CoPoS (Chip-on-Panel-on-Substrate) facilities in Chiayi Science Park to address the bottleneck. Demand for advanced packaging is projected to increase 40% year-over-year in 2026, from 484,000 wafers to 678,000 wafers.
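A quick check of the capacity figures above, plus a hedged sketch of how a 20% rise in packaging cost feeds into per-unit COGS. The wafer/packaging/test cost split is an illustrative assumption, not a published TSMC or Broadcom breakdown.

```python
# Verify the projected wafer-demand growth cited above.
growth = (678_000 - 484_000) / 484_000
print(f"CoWoS wafer demand growth: {growth:.1%}")  # roughly 40% YoY

# Assumed per-unit COGS breakdown for an advanced AI accelerator
# (illustrative fractions only).
wafer, packaging, test = 0.55, 0.30, 0.15
cogs_multiplier = wafer + packaging * 1.20 + test  # packaging cost +20%
print(f"per-unit COGS increase: {cogs_multiplier - 1:.1%}")
```

Even under this assumed cost split, a 20% packaging cost increase alone adds several points to per-unit COGS - cost the chipmaker must either absorb in gross margin or pass on to price-sensitive hyperscaler customers.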

For Broadcom, this means:

  • Higher wafer costs per unit (TSMC charges a premium for CoWoS)
  • Longer lead times (capacity constraints reduce negotiating power)
  • Yield challenges (integrating multiple chiplets and HBM stacks reduces yields, increasing waste)

Compare this to Broadcom’s traditional networking ASICs, which use mature packaging technology at much lower costs. The growth of AI chips tilts the product mix toward a higher-cost manufacturing process, compressing gross margins.

The Custom Silicon Trap

There’s another layer to this problem: Broadcom’s AI chips are custom accelerators, not merchant silicon.

Companies like Google (TPU), Meta (MTIA), and Amazon (Trainium/Inferentia) are building their own AI processors to avoid paying NVIDIA’s monopoly markups. Broadcom designs and manufactures these chips under contract, but hyperscalers negotiate aggressively on price because they’re ordering in massive volumes.

The economics of custom silicon are brutal:

  • High R&D costs: Designing a custom AI accelerator requires years of engineering work, with costs amortized over a single customer.
  • No pricing power: Unlike NVIDIA, which can charge $30,000+ per H200 GPU because of its CUDA moat, Broadcom is competing on cost savings relative to NVIDIA’s pricing.
  • Thin margins at scale: Broadcom makes money through volume, not per-unit profit. The company’s $73 billion AI backlog is concentrated among just five customers - meaning a single renegotiation could tank margins further.

This is the custom silicon trap: You win the business by offering lower costs than NVIDIA, but you sacrifice margin percentage in the process.
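The amortization math behind that trap can be sketched as follows. All figures here are hypothetical (the NRE and unit cost are assumptions, not disclosed numbers); the point is that per-unit cost only approaches the bare manufacturing cost at hyperscaler volumes, which is exactly why the business depends on a handful of huge customers.

```python
# Custom-silicon economics sketch: per-unit cost = amortized NRE
# (non-recurring engineering) + unit manufacturing cost.
# All figures below are hypothetical assumptions for illustration.

def per_unit_cost(nre, volume, unit_cogs):
    """NRE (design/tape-out cost) amortized over one customer's volume."""
    return nre / volume + unit_cogs

nre = 500e6        # assumed multi-year design cost for one accelerator
unit_cogs = 8_000  # assumed manufacturing cost per unit ($)

for volume in (100_000, 500_000, 2_000_000):
    cost = per_unit_cost(nre, volume, unit_cogs)
    print(f"{volume:>9,} units -> ${cost:,.0f}/unit")
```

Because the NRE is amortized over a single customer, low volumes make the chip uneconomical versus a merchant GPU, and high volumes hand the customer enormous negotiating leverage - the squeeze the article describes.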

Why AI Accelerators Are Different from Traditional Chips

AI chips sit in a capex- and opex-heavy stack that traditional semiconductors don’t.

At the silicon level, AI accelerators can have strong unit economics. Advanced-node AI GPUs can command average selling prices (ASPs) in the tens of thousands of dollars per board, with foundry-level gross margins in the 50-60%+ range. But those margins get eaten alive by:

  • Massive R&D on architectures, compilers, firmware, and runtime software
  • Huge capex for data centers, networking, liquid cooling, and storage
  • Ongoing opex for power, operations, and software engineering

For hyperscalers building custom silicon, the “margin” shows up as avoided vendor margin - they’re replacing a high-margin NVIDIA GPU with internally priced capacity. But for Broadcom, which is selling the chips to the hyperscalers, the compression lands directly on its own income statement. If the margin compression is structural - meaning Broadcom has to pay more indefinitely to package these chips - then the entire AI hardware sector (including NVIDIA and AMD) may face a valuation reset.
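The "avoided vendor margin" idea can be made concrete with a one-line comparison. The merchant price below comes from the article's $30,000+ H200 figure; the custom all-in cost is an illustrative assumption.

```python
# "Avoided vendor margin" sketch: a hyperscaler's saving is the gap
# between a merchant GPU's price and its own all-in cost for equivalent
# custom capacity. The custom cost figure is an assumption.

merchant_price = 30_000  # per merchant GPU (figure cited in the article)
custom_all_in = 13_000   # assumed all-in custom cost (silicon + NRE share)

saving = 1 - custom_all_in / merchant_price
print(f"avoided vendor margin per unit: {saving:.0%}")
```

Under these assumptions the hyperscaler pockets over half the merchant price per unit - a saving that accrues to Google or Meta, not to Broadcom's gross margin line.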

What This Means for Investors

Broadcom’s December 12 selloff reflects a harsh reality: AI chip revenue growth and profitability are decoupling.

The company’s AI revenue is doubling year-over-year, but investors are now pricing in the risk that this growth comes at the expense of margin percentage. The stock had rallied 75% year-to-date before the December 12 crash, and much of that gain was built on the assumption that AI would be as profitable as Broadcom’s software business.

It’s not.

For shareholders, the key question is whether operating leverage can offset gross margin dilution. Broadcom argues that AI’s fixed cost structure (high R&D, low per-unit manufacturing cost) means operating margins will remain strong even as gross margins compress, and the company’s Q4 operating margin of 66.2% supports this thesis. Margins are compressing because the physics of CoWoS packaging are expensive, and analysts expect this trend to continue through 2026. But if AI chip pricing continues to erode - due to competition from Amazon’s custom silicon, open-source AI models, or a broader slowdown in hyperscaler capex - Broadcom could face a double whammy: lower margins and slowing revenue growth.

The Broader Industry Signal

Broadcom’s margin warning is not an isolated event. It’s part of a broader pattern across the AI infrastructure stack:

  • Oracle warned on December 10 that its capital expenditures for AI infrastructure would come in $15 billion higher than expected, spooking investors about ROI timelines.
  • TSMC is passing on CoWoS cost increases to customers, forcing chipmakers to absorb higher input costs.
  • NVIDIA is the only company in the AI chip stack with pricing power, and even it faces pressure from hyperscalers building custom alternatives.

The December 12 tech stock selloff - which saw the Nasdaq drop 1.9% - reflects a growing realization that the AI gold rush has a profitability problem. Demand is infinite, but margins are shrinking.

For Broadcom, the path forward is clear: maintain operating leverage by scaling AI revenue faster than operating expenses. But for investors, the margin compression paradox is a warning: not all AI revenue is created equal.

When the company reporting 74% AI growth sees its stock crash 10% in a single day, it’s time to ask whether Wall Street’s AI valuations are built on sand.
