The AI Race Is Shifting From Models to Compute Lock-In


Saad Amjad

4/7/2026 · 4 min read

Everyone loves talking about the latest AI model drop. But the real power moves in AI right now? They're happening in chips, data centers, and long-term compute deals.

And the one that just landed is a big one.

What Happened

Broadcom announced it signed a long-term deal with Google to develop and supply future generations of custom AI chips and networking components for Google's next-generation AI racks. The agreement runs through 2031. That's not a short experiment. That's a multi-year infrastructure commitment.

On top of that, Broadcom, Google, and Anthropic expanded their partnership to give Anthropic access to about 3.5 gigawatts of TPU-based compute capacity starting in 2027. To put that in perspective, Broadcom was already supplying Anthropic with 1 gigawatt of Google TPU compute in 2026. This deal more than triples that.

And Anthropic shared some numbers that really stand out. Its run-rate revenue has now passed $30 billion, up from roughly $9 billion at the end of 2025. The number of business customers spending over $1 million annually has doubled to more than 1,000 in less than two months.

That's not slow growth. That's a company sprinting.

Why This Actually Matters

This story is easy to glance at and file under "another big tech deal." But the real story here is about what's changing underneath the AI industry.

For the past couple of years, the AI conversation has mostly been about models. Who has the best one. Who released the newest one. Which benchmark got beaten this week.

But what we're seeing now is a clear shift. The competition is moving deeper into infrastructure. Custom chips. Long-term capacity deals. Supply assurance agreements. Networking hardware for AI racks.

The companies that lock in the best compute capacity now will have a real advantage later. Not because they have the smartest model on launch day, but because they'll have the resources to train and run the next five generations of models without worrying about supply.

That's the bigger picture with this Broadcom-Google-Anthropic deal.

The Custom Chip Play

One of the most interesting parts of this story is what it says about the chip market.

Nvidia has dominated AI compute for years. Its GPUs are still the default for most AI training and inference workloads. But demand for alternatives is growing fast. Google's TPUs are custom-designed AI accelerators built specifically for training and running large language models, and companies are paying attention.

Reuters noted that demand for Google's TPUs has surged as businesses look for cost-efficient alternatives to Nvidia's expensive GPUs. And Google has been pushing hard to position its TPUs as a real competitor, not just an internal tool.

Broadcom plays a key role here. It's the silicon partner that takes Google's TPU architecture and turns it into actual chips that can be manufactured at scale. This deal locks Broadcom into that role for years, covering everything from chip design to networking gear for AI racks.

And it's worth noting that Broadcom isn't just doing this with Google. It also has a separate custom silicon deal with OpenAI, reportedly a 10-gigawatt co-development effort. So Broadcom is quietly becoming the infrastructure backbone for two of the three biggest frontier AI companies.

Anthropic's Position

Anthropic's side of this deal is just as telling.

The company runs Claude on a mix of hardware: AWS Trainium, Google TPUs, and Nvidia GPUs. That multi-platform approach is strategic. It means Anthropic isn't locked into one chip vendor, which gives it flexibility and pricing leverage.

Amazon remains Anthropic's primary cloud provider and training partner through Project Rainier, and Claude is the only frontier AI model available on all three major cloud platforms: AWS Bedrock, Google Cloud Vertex AI, and Microsoft Azure Foundry.

But the real signal here is the scale of the compute commitment. Anthropic's CFO Krishna Rao called this their most significant compute investment to date. When a company whose run-rate revenue has more than tripled in a matter of months says it needs to make its biggest infrastructure bet ever, that tells you something about the demand curve it's seeing.

And recent data from Silicon Republic adds another layer. Anthropic is now capturing more than 73% of all spending among companies buying AI tools for the first time, while OpenAI's share of new buyers has dropped to around 27%. That kind of market momentum explains why Anthropic is willing to commit to multi-gigawatt compute deals years in advance.

What to Watch Next

This deal tells us where the AI race is going. It's not just about who builds the best chatbot or drops the most impressive benchmark score anymore.

The questions that matter now are: Who controls the compute? Who has long-term chip supply locked in? Who can scale without hitting bottlenecks?

Google is betting on custom silicon as both a product and a competitive weapon. Broadcom is positioning itself as the company that actually builds the chips for multiple AI leaders. And Anthropic is spending aggressively on infrastructure to match the enterprise demand it's seeing.

If you're watching the AI industry, keep your eyes on the infrastructure layer. That's where the next chapter of this race is being decided. Not in model announcements, but in chip deals, supply agreements, and data center buildouts that stretch out to the end of the decade.

The companies that win the compute race will be the ones that win the AI race. And right now, the deals being signed suggest that race is getting very serious, very fast.