
The $0.02 part that is choking the AI boom

Every NVIDIA AI server rack needs 441,000 ceramic capacitors. Two companies make 84% of them. Orders are running at double production capacity. Prices just rose 35%. The most overlooked bottleneck of the AI boom isn't GPUs or memory - it's a component smaller than a grain of rice.


[Image: Extreme macro close-up of a tiny MLCC ceramic capacitor balanced on a fingertip, with a massive AI server rack blurred in the background]

Key Takeaways

  • The Invisible Bottleneck: A single NVIDIA GB300 AI server requires roughly 30,000 Multi-Layer Ceramic Capacitors (MLCCs), 30 times more than a smartphone. A full NVL72 rack uses about 441,000.
  • A Two-Company Chokepoint: Murata Manufacturing and Samsung Electro-Mechanics together control 84% of AI server-grade MLCC production. Murata alone holds 45%.
  • Demand Has Outrun Supply: Murata’s AI server MLCC order inquiries are running at roughly twice its production capacity. Prices jumped 15-35% on April 1, 2026, the first major hike in three years.
  • No Quick Fix: Murata completed a new ¥47 billion production building in April 2026, but it primarily targets automotive and industrial segments. AI server-grade capacity expansion remains a multi-year ramp. Chinese alternatives remain locked out of the high-end market.

The Part Nobody Talks About

Every story about the Artificial Intelligence (AI) infrastructure boom follows the same script: NVIDIA can’t make Graphics Processing Units (GPUs) fast enough, High-Bandwidth Memory (HBM) is scarce, power grids are overloaded, and land for data centers is vanishing.

All true. But buried beneath the GPU headlines sits a bottleneck that almost nobody outside the electronics supply chain is watching, one that could delay server deployments just as effectively as a chip shortage, and for which there is even less slack in the system.

It is a ceramic chip smaller than a grain of rice: the Multi-Layer Ceramic Capacitor (MLCC).

An MLCC does not compute anything. It stores and releases tiny bursts of electrical charge, smoothing out the voltage ripples that would otherwise fry the delicate silicon running your AI model. Every processor on a server board is surrounded by hundreds or thousands of them, arrayed around the chip like a defensive perimeter. Without them, the GPU draws power, the voltage sags by a few millivolts, and the computation fails.
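
The arithmetic behind that defensive perimeter is the basic capacitor relation ΔV = I·Δt/C. A minimal sketch, using hypothetical numbers for the current transient and the aggregate decoupling capacitance (the article does not quote NVIDIA's actual power-delivery figures):

```python
# Illustrative sketch: how much local decoupling capacitance it takes
# to hold supply voltage steady while a processor draws a current burst.
# All numbers below are hypothetical, chosen only to show the scale.

def droop_mv(current_a: float, duration_s: float, capacitance_f: float) -> float:
    """Voltage sag in millivolts when `capacitance_f` farads of decoupling
    supplies `current_a` amps for `duration_s` seconds (dV = I * dt / C)."""
    return current_a * duration_s / capacitance_f * 1_000

# A 500 A transient lasting 100 ns, buffered by 2 mF of aggregate
# MLCC capacitance near the die:
print(round(droop_mv(500, 100e-9, 2e-3), 3))  # → 25.0 (millivolts of sag)
```

Halve the capacitance and the sag doubles, which is why every missing capacitor matters: the perimeter is sized to keep droop within the silicon's millivolt-scale tolerance.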

The problem is scale. A smartphone contains over 1,000 MLCCs. A traditional server uses a few thousand. An NVIDIA GB300 AI server, the kind hyperscalers are fighting to deploy, needs approximately 30,000. That is 30 times a smartphone and roughly eight times a conventional server.

Scale that up to a full rack and the numbers get absurd. An NVL36 rack requires around 234,000 MLCCs. The flagship NVL72 liquid-cooled rack, the configuration Microsoft, Google, and Meta are racing to deploy, consumes approximately 441,000.
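
The scale comparisons above are straightforward division. A quick sanity check on the quoted figures (the "few thousand" traditional-server count is an assumed midpoint, not a number from the article):

```python
# Back-of-the-envelope check of the MLCC counts quoted above.
SMARTPHONE_MLCCS = 1_000     # "over 1,000"
TRADITIONAL_SERVER = 3_500   # "a few thousand" - assumed midpoint
GB300_SERVER = 30_000
NVL72_RACK = 441_000

print(GB300_SERVER / SMARTPHONE_MLCCS)            # → 30.0 (x a smartphone)
print(round(GB300_SERVER / TRADITIONAL_SERVER, 1))  # ≈ 8.6 (x a conventional server)
print(NVL72_RACK / GB300_SERVER)                  # → 14.7 (GB300-server-equivalents per rack)
```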

And nearly all of them come from two companies.

The 84% Duopoly

The global MLCC market is dominated by Japanese and Korean manufacturers. Murata Manufacturing, headquartered in Kyoto, holds over 40% of global MLCC production. Samsung Electro-Mechanics (SEMCO) holds roughly 18%, followed by TDK at 12% and Taiyo Yuden and Yageo at approximately 10% each.

But global market share understates the concentration in the AI segment. When you narrow the lens to the high-capacitance, low-ESL (Equivalent Series Inductance) MLCCs that AI server power delivery demands, the market shrinks to a duopoly: Murata at 45% and Samsung at 39%.

That is 84% of the AI server MLCC market in the hands of two companies. For context, NVIDIA’s share of the AI training GPU market is often cited as a monopolistic 80-90%. The MLCC duopoly is in the same league, and it gets a fraction of the attention.

The reason for this concentration is physics. AI server-grade MLCCs are not commodity parts. They require specialized ceramic dielectric formulations, ultra-thin layer stacking (up to 600 layers in a single chip at the production frontier), and tight tolerances for Equivalent Series Resistance (ESR) and ESL that determine how quickly the capacitor can respond to the GPU’s nanosecond-scale current demands. Manufacturing them requires years of process refinement. You cannot simply spin up a new factory and start producing them.

Chinese MLCC manufacturers (Fenghua, Sanhuan, and others) have grown their collective global market share to about 10%, a significant jump from 6% in 2019. But they remain locked out of the high end. Industry reports note that Chinese entrants “remain excluded from automotive and data-center sockets that demand AEC-Q200 compliance and low-ESL metrics.” They compete aggressively on price in commodity segments — underbidding Japanese suppliers by 15-25% in smartphones. But for the parts that NVIDIA’s B300 platform actually needs, there is no Chinese alternative.

The Squeeze: Orders at 2x Capacity

Murata’s most recent earnings paint the picture clearly. In the company’s Q3 FY2025 report (covering October-December 2025, released February 2, 2026), capacitor segment revenue rose 10.1% year-over-year, driven primarily by server demand. The book-to-bill ratio for capacitors hit 1.12, meaning orders were 12% higher than shipments and still accelerating.

The broader picture is starker. Industry reporting from Digitimes in February 2026 found that Murata’s AI server MLCC order inquiries, not just confirmed orders but active demand signals from customers, were running at roughly twice its current production capacity. The company was operating its high-end MLCC lines at 90–95% utilization. Samsung and Taiyo Yuden were both above 80%.

Murata’s president, Norio Nakajima, said it plainly during the earnings call: “For 2026, a very big challenge is how to produce and to what extent we can meet customer demands.”

On April 1, 2026, Murata pulled the trigger on price increases of 15-35% across its AI server and automotive-grade MLCC product lines, the company’s first major hike in three years. Samsung Electro-Mechanics followed with double-digit increases of its own.

The market is now bifurcated. Commodity MLCCs for smartphones and consumer electronics remain flat or even soft, as demand in those segments is sluggish. But AI server-grade and automotive-grade MLCCs are in structural shortage, with spot prices for high-end parts already 15-20% above contract levels. Analysts call this the “scissors spread” (commodity flat, high-end surging), and it mirrors the DRAM dynamic this site covered in The RAMpocalypse, where AI’s demand for HBM cannibalizes the supply for everything else.

The tantalum capacitor market tells the same story from a different angle. Tantalum polymer capacitors serve a complementary role in AI server power delivery, handling different points in the voltage regulation chain. Yageo’s subsidiary KEMET, which controls over 40% of global tantalum capacitor production, has raised prices three times in twelve months. Panasonic followed with 15-30% hikes starting February 2026. The passive component supply chain, in aggregate, is flashing red.

The BOM Math: Why $0.02 Parts Cost Billions

An individual MLCC costs between two cents and two dollars, depending on the specification. At first glance, a 35% price increase on a two-cent part seems irrelevant next to a GPU that costs tens of thousands of dollars.

But the math says otherwise. At 441,000 MLCCs per NVL72 rack, the aggregate MLCC cost per rack runs between $2,500 and $4,600, and climbing. Across a hyperscaler deploying thousands of racks, the passive component bill runs into the tens of millions. While GPUs and High-Bandwidth Memory dominate AI server Bills of Materials (BOMs), MLCCs have become the single largest passive component cost, and their aggregate impact on rack-level pricing is growing rapidly as volumes per rack climb.
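
As a sketch of that math, using the rack-level figures quoted above (the deployment size is a hypothetical round number, not a reported one):

```python
# Rack-level BOM arithmetic from the figures above. The quoted per-rack
# totals imply a blended average unit price well under a cent - most MLCCs
# on a board are cheap commodity sizes, with a smaller number of expensive
# low-ESL parts (an assumption about the mix, not a reported breakdown).

MLCCS_PER_NVL72 = 441_000
RACK_COST_LOW, RACK_COST_HIGH = 2_500, 4_600   # USD per rack, per the article

print(f"${RACK_COST_LOW / MLCCS_PER_NVL72:.4f} - "
      f"${RACK_COST_HIGH / MLCCS_PER_NVL72:.4f} per part (implied blended avg)")

# At a hypothetical deployment of 5,000 racks, the passive bill alone:
racks = 5_000
print(f"${RACK_COST_LOW * racks:,} - ${RACK_COST_HIGH * racks:,}")
# → $12,500,000 - $23,000,000
```

Tens of millions of dollars, from a part priced in cents - before the availability problem even enters the picture.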

Cost, however, is not the real issue. Availability is.

You cannot ship a server with 29,999 out of 30,000 required capacitors. Every missing MLCC is a server that sits incomplete on the assembly line. When demand signals are running at 2x capacity, a significant share of that demand simply will not be fulfilled on the original timeline. This is not a price problem. It is a gating constraint, similar to how a $0.50 microcontroller held up $50,000 automobiles during the 2021 chip shortage.

NVIDIA’s own Blackwell platform backlog reinforces the point. As of late 2025, the B200 and GB200 were sold out through mid-2026 with a reported backlog of approximately 3.6 million units. Even if NVIDIA and TSMC (Taiwan Semiconductor Manufacturing Company) solved every GPU and packaging bottleneck, the servers still cannot ship until the passive component supply chain delivers hundreds of thousands of capacitors per rack.

The Ghost of Tantalum Past

This has happened before.

In late 2000, at the peak of the dot-com boom, the global electronics industry ran headfirst into a tantalum capacitor shortage. Internet infrastructure was being deployed at breakneck speed. Wireless communication was exploding. Server demand was surging. And the tantalum powder supply, the raw material for the capacitors that kept all of it running, could not keep up.

Lead times ballooned. Prices spiked. OEMs (Original Equipment Manufacturers) began double-ordering and triple-ordering to secure allocation, which inflated demand signals by three to four times the actual need. Manufacturers, seeing the surging order books, invested in massive capacity expansion. By mid-2001, when new tantalum ore production finally came online, the dot-com bubble had burst. Demand collapsed. The industry was left with a glut of capacity and cratered prices.

Murata and Samsung both lived through that cycle. The institutional memory of the bust shapes their capital allocation decisions in 2026.

Look at Murata’s capital expenditure: ¥250 billion (approximately $1.7 billion) for FY2025. The company completed a new 70,000-square-meter MLCC production building at its Izumo subsidiary in Shimane Prefecture on April 3, 2026, backed by a ¥47 billion investment. But that facility primarily targets automotive, industrial, and consumer electronics segments, not AI server-grade parts. A new fab in the Philippines added 20% to the company’s Southeast Asian output, with full AEC-Q200 automotive qualification.

Murata is investing, but it is investing on its own terms. Its expansion plans are deliberate: build broad capacity for the segments it knows (automotive, industrial), while AI server-grade MLCC capacity additions proceed more cautiously. The company has not announced a dedicated, large-scale AI server MLCC expansion with a near-term completion date.

There is a historical reason for this caution, and it cuts both ways. The 2000 tantalum cycle taught component makers that demand signals in a boom are often inflated by double-ordering. If some of the current 2x demand signal reflects hyperscalers stockpiling allocation rather than true end-use demand, aggressive expansion could end in the same overcapacity glut that destroyed margins in 2001. Murata’s restraint is a bet that some portion of the current demand spike is artificial.

The consequence, however, is that AI hyperscalers need capacity now. Even if a fraction of demand is inflated, the structural shortage is real, and the gap between what Murata can deliver and what the market wants is unlikely to close before 2028 at the earliest.

The Structural Misalignment

The deeper story here is one of misaligned incentives.

NVIDIA, Microsoft, Google, and Meta are engaged in a capital expenditure arms race, pouring hundreds of billions into infrastructure on the bet that AI revenue will eventually justify the spend. Their incentive is speed: deploy racks as fast as possible, train the next model, capture market share before competitors.

Murata’s incentive is survival. It has watched component booms turn to busts. It projects approximately 15% operating margin for FY2025 (healthy but not extraordinary) and it plans to protect that margin by avoiding overinvestment. Its revenue forecast for FY2025 is ¥1.8 trillion (roughly $12 billion), up 3.2% year-on-year: steady, not explosive.

From Murata’s perspective, raising prices 15-35% and expanding capacity on a multi-year timeline is the rational play. From the hyperscalers’ perspective, it is an agonizing delay to a trillion-dollar bet.

The counterargument, and it is a real one, is that passive components are commodities with many substitutes. If Murata and Samsung cannot supply, other manufacturers will step in. But the data does not support this for the high-end segment. Chinese manufacturers remain excluded from AI server sockets. TDK and Taiyo Yuden are operating above 80% utilization themselves. There is no idle capacity waiting in the wings.

Murata has also begun diversifying beyond capacitors into AI server power modules, targeting approximately ¥50 billion (roughly $332 million) in revenue from these products. The company is not just a passive component supplier. It is positioning itself as a vertically integrated gatekeeper of AI server power delivery.

What Comes Next

Murata projects that AI server MLCC demand will grow at a 30% compound annual growth rate (CAGR) through 2030, with total demand reaching 3.3 times its 2025 levels.

If that projection holds, the current shortage is not a spike but the beginning of a structural reallocation of the world’s passive component manufacturing toward AI, much like the DRAM reallocation toward HBM that is already cannibalizing the smartphone memory supply.

The near-term math is unforgiving. Even with the Izumo building online, Murata’s announced expansions add capacity in the low double digits in percentage terms, while AI server MLCC demand grows at a projected 30% annually. The expansions do not close the current gap. Meaningful surplus capacity, if it arrives at all, is a 2028-2030 story.
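
Compounding those two growth rates shows why the gap widens rather than closes. A rough sketch, assuming capacity grows at 12% per year (the high end of "low double digits") against the projected 30% demand CAGR, starting from demand at 2x capacity:

```python
# Illustrative compounding of the supply gap described above.
# Assumed rates: demand +30%/yr (the projected CAGR), capacity +12%/yr
# (an assumption at the high end of "low double digits"); demand starts
# at 2x capacity, per the reported order-inquiry signal.

demand, capacity = 2.0, 1.0   # normalized to 2026 capacity
for year in range(2026, 2031):
    print(f"{year}: demand/capacity = {demand / capacity:.2f}x")
    demand *= 1.30
    capacity *= 1.12
```

Under these assumptions the ratio climbs from 2.0x toward roughly 3.6x by 2030: even double-digit capacity growth loses ground to 30% demand growth, which is why relief, if it comes, depends on either the demand signal deflating or a far more aggressive build-out than Murata has announced.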

For hyperscalers, this means the AI infrastructure build-out will proceed at the speed of the slowest component. Increasingly, that component is not the GPU or the memory. It is the ceramic chip that no one put on the critical path.

For investors, the passive component supply chain may be the most underappreciated “picks and shovels” play in the AI trade. Murata’s stock (TYO: 6981) trades at valuations more typical of a mature industrial company than a monopoly gatekeeper of the hottest technology buildout in history.

And for the AI industry as a whole, the lesson is the same one the auto industry learned in 2021, when a fifty-cent chip held up a fifty-thousand-dollar truck: the supply chain is only as strong as its most ignored link.

The $0.02 capacitor has become the most overlooked critical part in a trillion-dollar industry. And two companies in Kyoto and Suwon are deciding how fast the AI boom actually gets built.
