While the world watches Nvidia's every move, Broadcom (AVGO) has quietly cemented itself as the indispensable backbone of the AI era. In its latest earnings report, the semiconductor giant confirmed a staggering metric: AI chip sales have doubled year-over-year, and looking ahead to the current quarter, the trajectory remains vertical.
For investors and engineers alike, this isn't just a revenue beat; it's a validation of a fundamental shift in how AI infrastructure is built. The "Custom Silicon" era has arrived.
The News: A $30 Billion Run Rate
Broadcom's report shattered expectations, driven almost entirely by its AI semiconductor division. The company is now projecting AI revenue to hit $30 billion in fiscal 2026.
To put that in perspective: just two years ago, Broadcom was viewed primarily as a legacy networking and wireless player. Today, it is the second-most important AI silicon provider on the planet, trailing only Nvidia.
The growth is being fueled by two distinct engines:
- AI Networking: The switches and interconnects that allow thousands of GPUs to talk to each other.
- Custom Accelerators (XPUs): Bespoke AI chips designed for specific hyperscalers (like Google and Meta).
Deep Dive: The Custom Silicon Revolution
Why is Broadcom's AI business doubling? The answer lies in the limitations of general-purpose GPUs.
Nvidia's H100 and Blackwell GPUs are technological marvels, but they are generalists. They are designed to do everything well, from training LLMs to running physics simulations. For a company like Google or Meta, which runs the same specific workloads 24/7 (like recommender systems or search transformers), a general-purpose chip is inefficient: it burns power and silicon area on features they don't need.
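A back-of-envelope sketch makes the argument concrete. All numbers below are illustrative assumptions, not vendor specifications; the point is that a die carrying only the datapaths one workload needs can sustain far higher useful utilization per watt:

```python
# Back-of-envelope: why a fixed-function ASIC can win on perf/watt
# for a single, stable workload. All numbers are illustrative
# assumptions, not vendor specifications.

GPU_TDP_W = 700    # assumed power draw of a flagship general-purpose GPU
GPU_UTIL = 0.40    # assumed fraction of the die doing useful work
                   # for one narrow workload (e.g., a recommender)

ASIC_TDP_W = 350   # assumed power draw of a workload-specific ASIC
ASIC_UTIL = 0.85   # assumed utilization: the die carries only the
                   # datapaths this one workload actually needs

def useful_throughput(tdp_w: float, util: float, tops_per_w: float = 1.0) -> float:
    """Effective useful compute, in arbitrary TOPS units."""
    return tdp_w * tops_per_w * util

gpu = useful_throughput(GPU_TDP_W, GPU_UTIL)
asic = useful_throughput(ASIC_TDP_W, ASIC_UTIL)

print(f"GPU  useful TOPS: {gpu:.0f}")
print(f"ASIC useful TOPS: {asic:.0f}")
print(f"ASIC perf-per-watt advantage: {(asic / ASIC_TDP_W) / (gpu / GPU_TDP_W):.1f}x")
```

Under these assumed figures, the ASIC matches the GPU's useful throughput at half the power: a roughly 2x perf-per-watt edge that compounds across a hyperscale fleet.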
Enter the ASIC (Application-Specific Integrated Circuit)
This is where Broadcom shines. It doesn't sell an off-the-shelf "Broadcom GPU." Instead, it acts as a high-end design partner for the hyperscalers.
- Google's TPU (Tensor Processing Unit): Broadcom has been Google's partner for generations of the TPU, the chip that powers everything from Google Search to Gemini.
- Meta's MTIA: As Meta ramps up its own silicon efforts to reduce reliance on Nvidia, Broadcom is heavily involved in the interconnect and IP integration.
- OpenAI's Rumored XPU: Reports suggest OpenAI is collaborating with Broadcom to build its own inference chips, aiming to dramatically lower the cost of serving models like GPT-5.
This "Custom Silicon" business model is brilliant. Broadcom provides the high-speed SerDes (Serializer/Deserializer) IP, essentially the nervous system of the chip, along with the packaging technology, while the customer provides the logic. This creates deep, multi-year lock-in. Once Google builds its data centers around the TPU architecture, it can't easily switch.
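To see why SerDes IP is the crown jewel, consider the arithmetic: a chip's aggregate I/O, whether on a switch or an XPU, is simply lane count times per-lane rate. A minimal sketch using Broadcom's published Tomahawk 5 headline figures (the 8-lanes-per-800G-port mapping is one common configuration, assumed here):

```python
# A chip's total I/O bandwidth is SerDes lane count x per-lane rate.
# Lane figures match Broadcom's published Tomahawk 5 headline specs;
# the port math assumes 8 lanes per 800G port, a common configuration.

LANES = 512            # SerDes lanes on the die
LANE_RATE_GBPS = 100   # 100G PAM4 per lane

total_tbps = LANES * LANE_RATE_GBPS / 1000
ports_800g = LANES * LANE_RATE_GBPS // 800

print(f"Aggregate bandwidth: {total_tbps:.1f} Tbps")   # 51.2 Tbps
print(f"Equivalent 800G ports: {ports_800g}")          # 64
```

Whoever owns the fastest, lowest-power SerDes effectively sets the ceiling on what any customer's chip can do at its edges. That is the toll booth Broadcom operates.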
The Networking War: Ethernet vs. InfiniBand
While custom silicon gets the glory, Broadcom's networking division is winning a critical war in the data center.
Nvidia pushes InfiniBand (via its Mellanox acquisition) for AI clusters, arguing it offers lower latency. Broadcom counters with Ethernet.
Historically, InfiniBand was faster. But with Broadcom's Jericho3-AI and Tomahawk 5 switch chips, Ethernet has caught up. The Jericho3-AI chip, for example, features:
- Perfect Load Balancing: It can spray packets across all available links equally, preventing congestion (see the toy simulation after this list).
- Zero-Impact Failover: If a link dies, traffic is rerouted instantly without crashing the training run.
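A toy simulation shows why per-packet spraying beats classic per-flow ECMP hashing for AI traffic, where a handful of long-lived elephant flows dominate. The link and flow counts below are arbitrary illustrative choices:

```python
import random
from collections import Counter

# Toy model: 8 uplinks carrying 16 long-lived "elephant" flows of
# 1,000 packets each. Per-flow ECMP hashing pins each flow to one
# link, so unlucky hash collisions create hotspots; per-packet
# spraying spreads the same load almost evenly.

LINKS, FLOWS, PKTS = 8, 16, 1000
random.seed(7)

ecmp = Counter()
for _ in range(FLOWS):            # one link decision per flow
    ecmp[random.randrange(LINKS)] += PKTS

spray = Counter()
for _ in range(FLOWS * PKTS):     # one link decision per packet
    spray[random.randrange(LINKS)] += 1

print("ECMP  per-link load:", sorted((ecmp[l] for l in range(LINKS)), reverse=True))
print("Spray per-link load:", sorted((spray[l] for l in range(LINKS)), reverse=True))
```

With per-flow hashing, some links end up carrying several elephant flows while others sit idle; the sprayed load lands within a few percent of perfectly even. In a synchronized training job, that hot link becomes the pace of the whole cluster.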
The market has voted. The Ultra Ethernet Consortium, backed by AMD, Intel, Meta, and Microsoft, has effectively declared Ethernet the standard for future AI clusters. This is a massive moat for Broadcom, which supplies the silicon for the vast majority of the world's high-speed Ethernet switches.
The Physics of the Problem: Why 1.6T Matters
We are hitting a physical wall. As we move from 800 Gigabit speeds to 1.6 Terabit (1.6T) per port, the copper cables that connect servers are becoming a problem.
At these frequencies, electrical signals degrade over just a few inches of copper wire. To solve this, Broadcom is pioneering Co-Packaged Optics (CPO).
In a traditional setup, the switch chip is in the middle of the board, and copper traces run to the edge where you plug in a fiber optic transceiver. At 1.6T, that trace is too long; the signal dies. CPO moves the laser and photonics directly onto the chip package.
By eliminating that distance, Broadcom reduces power consumption by 30-50%, a critical saving when AI data centers are pushing power grids to the breaking point.
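The rough math, with per-port wattages as illustrative assumptions in the ballpark of published claims rather than datasheet values:

```python
# Rough power math for a 64-port 800G switch. Per-port wattages are
# illustrative assumptions, not datasheet values.

PORTS = 64
PLUGGABLE_W = 15.0   # assumed: DSP-based 800G pluggable transceiver
CPO_W = 9.0          # assumed: co-packaged optical engine per port
                     # (no retimer DSP, millimeters of electrical trace)

pluggable_total = PORTS * PLUGGABLE_W
cpo_total = PORTS * CPO_W
savings = 1 - cpo_total / pluggable_total

print(f"Pluggable optics:   {pluggable_total:.0f} W per switch")
print(f"Co-packaged optics: {cpo_total:.0f} W per switch")
print(f"Savings: {savings:.0%}")   # ~40%, inside the quoted 30-50% range
```

Multiply that per-switch delta across the thousands of switches in a single AI campus and the savings become megawatts.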
The Competition: The Duopoly with Marvell
Investment in this sector often boils down to a binary choice: Broadcom or Marvell Technology (MRVL).
These two companies effectively form a duopoly in the high-speed networking and custom silicon space. While Nvidia dominates the GPU rack, the connections between those racks are almost exclusively Broadcom or Marvell territory.
- Marvell's Strengths: Marvell is the primary silicon partner for Amazon Web Services (AWS). The Trainium and Inferentia chips used by Amazon are largely fruits of Marvell's custom ASIC division. Marvell is also aggressive in the optical DSP (Digital Signal Processor) market, and its "Teralynx" switch line competes head-to-head with Broadcom's "Tomahawk."
- Broadcom's Edge: Scale and IP depth. Broadcom's "Jericho" architecture is widely considered the gold standard for routing complex traffic, which is why it powers the core backbones of the internet, not just AI clusters. Furthermore, Broadcom's acquisition strategy has given it a fortress of patents in PCIe switching (via PLX Technology) and Fibre Channel (via Brocade), making it nearly impossible to build a modern data center without paying a toll to Broadcom.
For the AI investor, this isn't a winner-take-all market, but Broadcom's entrenched position in the "Big Three" (Google, Meta, Apple) arguably gives it a more diversified customer base than Marvell's Amazon-heavy portfolio.
The VMware Wildcard: "Private AI"
There is another, often overlooked engine behind Broadcom's earnings beat: VMware.
When Broadcom acquired VMware for $69 billion, skeptics called it a legacy software play. But CEO Hock Tan had a different vision. Broadcom is now actively pushing "Private AI" through the VMware Cloud Foundation (VCF).
The thesis is simple: Most enterprises (banks, healthcare, governments) cannot upload their sensitive data to a public ChatGPT or Gemini instance due to regulation and privacy concerns. They must run AI models on-premise, in their own private data centers.
VMware's new stack allows these companies to virtualize GPUs and run private RAG (Retrieval-Augmented Generation) clusters on standard servers. This software push creates a flywheel effect (a rough traffic comparison follows the list):
- Enterprises buy VCF to run Private AI.
- Private AI requires massive bandwidth (unlike standard DB apps).
- Enterprises upgrade their on-premise networks to 400G and 800G Ethernet.
- Broadcom sells them the networking silicon to do it.
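To see why step 2 holds, compare the east-west traffic of a modest RAG serving cluster with that of a classic OLTP application. Every figure below is an illustrative assumption, not a measurement:

```python
# Back-of-envelope: why Private AI traffic dwarfs a classic database
# workload. All figures are illustrative assumptions for a modest
# on-prem RAG cluster, not measurements.

# RAG serving: each query ships embeddings to a vector store and
# pulls retrieved chunks plus inter-node tensor traffic back.
QUERIES_PER_SEC = 200
BYTES_PER_QUERY = 4 * 1024 * 1024   # assumed 4 MB east-west per query

# Typical OLTP database app, for contrast.
TXNS_PER_SEC = 2000
BYTES_PER_TXN = 8 * 1024            # assumed 8 KB per transaction

rag_gbps = QUERIES_PER_SEC * BYTES_PER_QUERY * 8 / 1e9
oltp_gbps = TXNS_PER_SEC * BYTES_PER_TXN * 8 / 1e9

print(f"RAG cluster east-west traffic: {rag_gbps:.1f} Gbps")
print(f"OLTP app east-west traffic:    {oltp_gbps:.2f} Gbps")
print(f"Ratio: ~{rag_gbps / oltp_gbps:.0f}x")
```

Even this single modest cluster generates roughly fifty times the network load of the database app it sits next to; multiplied across racks and mixed with model-distribution traffic, it quickly justifies a 400G/800G fabric refresh.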
By controlling the software layer (VMware), Broadcom can effectively dictate the hardware refresh cycle for the Global 2000.
Forward Outlook: The $1 Trillion Infrastructure
Broadcom's earnings are a lagging indicator of a leading trend. The CAPEX spend from Microsoft, Google, Amazon, and Meta is projected to exceed $200 billion in 2025 alone. A significant chunk of that is earmarked specifically for custom silicon and networking gear.
The risk for Broadcom is concentration. With a few massive customers (Google, Meta, Apple) accounting for a huge portion of revenue, losing one design win would be catastrophic. However, chip design cycles are long (3-5 years). The chips driving revenue in 2026 are already designed.
For now, Broadcom has successfully positioned itself as the "Arms Dealer" of the AI cold war: selling not just the ammunition, but the supply lines that make the war possible.