While the world watches Nvidia’s every move, Broadcom (AVGO) has quietly cemented itself as the indispensable backbone of the AI era. In its latest earnings report, the semiconductor giant confirmed a staggering metric: AI chip sales have doubled year-over-year, and looking ahead to the current quarter, the trajectory remains vertical.
For investors and engineers alike, this isn’t just a revenue beat—it’s a validation of a fundamental shift in how AI infrastructure is built. The “Custom Silicon” era has arrived.
The News: A $30 Billion Run Rate
Broadcom’s report shattered expectations, driven almost entirely by its AI semiconductor division. The company is now projecting AI revenue to hit $30 billion in fiscal 2026.
To put that in perspective: just two years ago, Broadcom was viewed primarily as a legacy networking and wireless player. Today, it is the second-most important AI silicon provider on the planet, trailing only Nvidia.
The growth is being fueled by two distinct engines:
- AI Networking: The switches and interconnects that allow thousands of GPUs to talk to each other.
- Custom Accelerators (XPUs): Bespoke AI chips designed for specific hyperscalers (like Google and Meta).
Deep Dive: The Custom Silicon Revolution
Why is Broadcom’s AI business doubling? The answer lies in the limitations of general-purpose GPUs.
Nvidia’s H100 and Blackwell GPUs are technological marvels, but they are generalists. They are designed to do everything well, from training LLMs to running physics simulations. For a company like Google or Meta, which runs the same specific workloads 24/7 (like recommender systems or search transformers), a general-purpose chip is inefficient. It uses too much power and silicon area for features they don’t need.
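To see what “inefficient” means at hyperscale, consider a hedged back-of-envelope. Every figure below (the fleet size, board power, electricity rate, and the assumption that a custom chip does the same fixed job at 60% of a GPU’s power) is illustrative, not a reported number:

```python
# Illustrative fleet-level power math; all inputs are assumptions.
FLEET_SIZE = 100_000        # accelerators running one fixed workload 24/7
GPU_WATTS = 700             # rough board power of a high-end GPU
ASIC_RELATIVE_POWER = 0.6   # assume an ASIC does the same job at 60% power
PRICE_PER_KWH = 0.08        # USD, a typical industrial electricity rate

hours = 24 * 365
gpu_bill = FLEET_SIZE * GPU_WATTS / 1000 * hours * PRICE_PER_KWH
asic_bill = gpu_bill * ASIC_RELATIVE_POWER
print(f"GPU fleet electricity:  ${gpu_bill / 1e6:.1f}M per year")
print(f"ASIC fleet electricity: ${asic_bill / 1e6:.1f}M per year")
print(f"Annual savings:         ${(gpu_bill - asic_bill) / 1e6:.1f}M")
```

Roughly $20 million a year in electricity alone under these assumptions, before counting the smaller dies, cheaper cooling, and denser racks that a workload-specific design allows.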
Enter the ASIC (Application-Specific Integrated Circuit)
This is where Broadcom shines. It doesn’t sell an off-the-shelf “Broadcom GPU.” Instead, the company acts as a high-end design partner for the hyperscalers.
- Google’s TPU (Tensor Processing Unit): Broadcom has been Google’s partner for generations of the TPU, the chip that powers everything from Google Search to Gemini.
- Meta’s MTIA: As Meta ramps up its own silicon efforts to reduce reliance on Nvidia, Broadcom is heavily involved in the interconnect and IP integration.
- OpenAI’s Rumored XPU: Reports suggest OpenAI is collaborating with Broadcom to build its own inference chips, aiming to dramatically lower the cost of serving models like GPT-5.
This “Custom Silicon” business model is brilliant. Broadcom provides the high-speed SerDes (Serializer/Deserializer) IP—essentially the nervous system of the chip—and the packaging technology, while the customer provides the logic. This creates deep, multi-year lock-in. Once Google builds its data centers around TPU architecture, it can’t easily switch.
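A rough sense of scale helps explain why the SerDes IP is the hard part. The lane count and per-lane rate below are generic assumptions about a modern switch- or XPU-class die, not specs for any particular Broadcom part:

```python
# Illustrative off-chip bandwidth from SerDes lanes; inputs are assumptions.
LANES = 512            # plausible lane count on a high-end networking die
GBPS_PER_LANE = 112    # 112G PAM4, a common current-generation SerDes rate

total_tbps = LANES * GBPS_PER_LANE / 1000
print(f"{LANES} lanes x {GBPS_PER_LANE} Gb/s = {total_tbps:.1f} Tb/s of raw I/O")
```

Getting tens of terabits per second on and off a die without errors is exactly the kind of proven, silicon-validated IP a hyperscaler would rather license than reinvent.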
The Networking War: Ethernet vs. InfiniBand
While custom silicon gets the glory, Broadcom’s networking division is winning a critical war in the data center.
Nvidia pushes InfiniBand (via its Mellanox acquisition) for AI clusters, arguing it offers lower latency. Broadcom counters with Ethernet.
Historically, InfiniBand was faster. But with Broadcom’s Jericho3-AI and Tomahawk 5 switch chips, Ethernet has caught up. The Jericho3-AI chip, for example, features:
- Perfect Load Balancing: It can spray packets across all available links equally, preventing congestion.
- Zero-Impact Failover: If a link dies, traffic is rerouted instantly without crashing the training run. (Both behaviors are sketched in the toy code below.)
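A toy Python model makes both behaviors concrete. The SprayingFabric class and its method names are hypothetical, purely for illustration; the real chip does this in hardware at terabits per second, with packet reordering handled at the egress:

```python
class SprayingFabric:
    """Toy model of per-packet spraying with instant failover."""

    def __init__(self, num_links: int):
        self.queued = [0] * num_links   # bytes in flight per link
        self.up = [True] * num_links    # link health

    def send(self, packet_bytes: int) -> int:
        # Load balancing: each packet goes to the least-loaded healthy
        # link, instead of hashing an entire flow onto a single link.
        live = [i for i, ok in enumerate(self.up) if ok]
        if not live:
            raise RuntimeError("fabric partitioned: no links up")
        link = min(live, key=lambda i: self.queued[i])
        self.queued[link] += packet_bytes
        return link

    def fail_link(self, i: int) -> None:
        # Failover: mark the link down; the next send() simply sprays
        # over the survivors, so the training job never notices.
        self.up[i] = False

fabric = SprayingFabric(num_links=4)
for _ in range(8):
    fabric.send(1500)    # traffic spreads evenly across all four links
fabric.fail_link(2)      # a link dies mid-run...
fabric.send(1500)        # ...and traffic shifts to links 0, 1, and 3
```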
The market has voted. The “Ultra Ethernet Consortium,” backed by AMD, Intel, Meta, and Microsoft among others, is standardizing Ethernet as the fabric for future AI clusters. This is a massive moat for Broadcom, which supplies the silicon for the vast majority of the world’s high-speed Ethernet switches.
The Physics of the Problem: Why 1.6T Matters
We are hitting a physical wall. As we move from 800 gigabit to 1.6 terabit (1.6T) per port, the copper links that carry those signals, from board traces to server cables, are becoming a problem.
At these frequencies, electrical signals degrade over just a few inches of copper wire. To solve this, Broadcom is pioneering Co-Packaged Optics (CPO).
In a traditional setup, the switch chip is in the middle of the board, and copper traces run to the edge where you plug in a fiber optic transceiver. At 1.6T, that trace is too long; the signal dies. CPO moves the laser and photonics directly onto the chip package.
By eliminating that distance, Broadcom reduces power consumption by 30-50%, a critical saving when AI data centers are pushing power grids to the breaking point.
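A hedged back-of-envelope shows why 30-50% matters. The port count and per-module wattage below are generic assumptions for a current-generation switch, not Broadcom’s numbers:

```python
# Illustrative optics power math for one switch; inputs are assumptions.
PORTS = 64                  # e.g., a 51.2T switch with 64 x 800G ports
WATTS_PER_PLUGGABLE = 15.0  # rough power draw of an 800G pluggable module

pluggable_total = PORTS * WATTS_PER_PLUGGABLE
for saving in (0.30, 0.50):
    print(f"{saving:.0%} saving: {pluggable_total:.0f} W -> "
          f"{pluggable_total * (1 - saving):.0f} W of optics per switch")

# Spread over a 10,000-switch AI campus, even the 30% case frees
# megawatts of power and cooling budget:
print(f"Fleet saving at 30%: {10_000 * pluggable_total * 0.30 / 1e6:.1f} MW")
```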
The Competition: The Duopoly with Marvell
Investment in this sector often boils down to a binary choice: Broadcom or Marvell Technology (MRVL).
These two companies effectively form a duopoly in the high-speed networking and custom silicon space. While Nvidia dominates the GPU rack, the connections between those racks are almost exclusively Broadcom or Marvell territory.
- Marvell’s Strengths: Marvell is the primary silicon partner for Amazon Web Services (AWS); the Trainium and Inferentia chips used by Amazon are largely the work of Marvell’s custom ASIC division. Marvell is also aggressive in the optical DSP (Digital Signal Processor) market, and its “Teralynx” switches compete head-to-head with Broadcom’s “Tomahawk” line.
- Broadcom’s Edge: Scale and IP depth. Broadcom’s “Jericho” architecture is widely considered the gold standard for routing complex traffic, which is why it powers the core backbones of the internet, not just AI clusters. Furthermore, Broadcom’s acquisition strategy has given it a fortress of patents in PCIe switching (via PLX Technology) and Fibre Channel (via Brocade), making it nearly impossible to build a modern data center without paying a toll to Broadcom.
For the AI investor, this isn’t a winner-take-all market, but Broadcom’s entrenched position in the “Big Three” (Google, Meta, Apple) arguably gives it a more diversified customer base than Marvell’s Amazon-heavy portfolio.
The VMware Wildcard: “Private AI”
There is another, often overlooked engine in Broadcom’s earnings beat: VMware.
When Broadcom acquired VMware for $69 billion, skeptics called it a legacy software play. But CEO Hock Tan had a different vision. Broadcom is now actively pushing “Private AI” through the VMware Cloud Foundation (VCF).
The thesis is simple: Most enterprises (banks, healthcare, governments) cannot upload their sensitive data to a public ChatGPT or Gemini instance due to regulation and privacy concerns. They must run AI models on-premise, in their own private data centers.
VMware’s new stack allows these companies to virtualize GPUs and run private RAG (Retrieval-Augmented Generation) clusters on standard servers; a toy sketch of the retrieval step follows the list below. This software push creates a flywheel effect:
- Enterprises buy VCF to run Private AI.
- Private AI requires massive bandwidth (unlike standard DB apps).
- Enterprises upgrade their on-premise networks to 400G and 800G Ethernet.
- Broadcom sells them the networking silicon to do it.
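To make the “private RAG” step concrete, here is a toy sketch of the retrieval half of the pipeline. The embed() function is a bag-of-words stand-in for a real on-prem embedding model, and none of these names come from VCF; the point is that documents and queries never leave the private data center:

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Stand-in for a neural embedding model hosted on-prem
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

docs = [
    "Q3 loan loss provisions rose 12 percent quarter over quarter",
    "Patient readmission rates fell after the new discharge protocol",
]
index = [(d, embed(d)) for d in docs]   # the index never leaves the premises

def retrieve(query: str, k: int = 1) -> list[str]:
    qv = embed(query)
    ranked = sorted(index, key=lambda pair: cosine(qv, pair[1]), reverse=True)
    return [doc for doc, _ in ranked[:k]]

context = retrieve("what happened to loan loss provisions")[0]
prompt = f"Answer using only this context:\n{context}"
# `prompt` then goes to an LLM on virtualized local GPUs, not a public API
```

Every query like this fans embedding, search, and generation traffic out across the cluster, which is exactly the east-west bandwidth the flywheel above depends on.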
By controlling the software layer (VMware), Broadcom can effectively dictate the hardware refresh cycle for the Global 2000.
Forward Outlook: The $1 Trillion Infrastructure
Broadcom’s earnings are a lagging indicator of a leading trend. The CAPEX spend from Microsoft, Google, Amazon, and Meta is projected to exceed $200 billion in 2025 alone. A significant chunk of that is earmarked specifically for custom silicon and networking gear.
The risk for Broadcom is concentration. With a few massive customers (Google, Meta, Apple) accounting for a huge portion of revenue, losing one design win would be catastrophic. However, chip design cycles are long (3-5 years). The chips driving revenue in 2026 are already designed.
For now, Broadcom has successfully positioned itself as the “Arms Dealer” of the AI cold war—selling not just the ammunition, but the supply lines that make the war possible.