How Google’s Anthropic Deal Is Structured—and Why It Matters
Google-parent Alphabet’s new pact with Anthropic formalizes one of the largest cloud AI partnerships to date. Anthropic says Google has committed an initial USD 10 billion (approx. RM46.0 billion) in cash at a valuation of USD 350 billion (approx. RM1.61 trillion), with a further USD 30 billion (approx. RM138.0 billion) contingent on performance milestones. The agreement extends an existing collaboration in which Anthropic relies on Google’s custom chips and cloud services to train and deploy its Claude models. Structurally, the deal does two things at once: it injects massive capital into an independent foundation model rival while binding that rival to Google’s infrastructure roadmap. Rather than a simple equity stake, the commitment is explicitly tied to scaling Anthropic’s computing capacity—turning Google into both financier and primary landlord for Anthropic’s next wave of model training.
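The ringgit figures throughout this piece all follow from one implied conversion rate. A minimal sketch, assuming the approximate rate of RM4.6 per US dollar that the article's own pairs of figures imply (not an official or current exchange rate), checks the headline numbers for internal consistency:

```python
# The article's RM conversions imply a single USD->MYR rate of about 4.6
# (RM46.0bn / USD 10bn). Illustrative only; not an official exchange rate.
USD_TO_MYR = 4.6

figures_usd_bn = {
    "initial Google commitment": 10,
    "milestone tranche": 30,
    "Anthropic valuation": 350,
}

for label, usd_bn in figures_usd_bn.items():
    print(f"{label}: USD {usd_bn}bn ~ RM {usd_bn * USD_TO_MYR:.1f}bn")

# The "up to" commitment combines both tranches:
total_usd_bn = 10 + 30
print(f"total commitment: USD {total_usd_bn}bn ~ RM {total_usd_bn * USD_TO_MYR:.1f}bn")
# -> total commitment: USD 40bn ~ RM 184.0bn
```

At that rate the USD 350 billion valuation converts to RM1,610 billion, matching the article's "approx. RM1.61 trillion", and the combined USD 40 billion matches the RM184.0 billion cited later.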

Strategic Co-opetition: Google, Amazon and the New Model Portfolio
Google’s investment underscores a deliberate strategy of co-opetition: backing a foundation model rival while pushing its own Gemini systems. By spreading bets across internal and external models, Google can offer customers a portfolio of AI options without ceding them entirely to another cloud. This mirrors Amazon’s approach. Anthropic recently disclosed that Amazon plans to invest up to USD 25 billion (approx. RM115.0 billion), with another USD 20 billion (approx. RM92.0 billion) tied to performance, and that Anthropic has committed to spend more than USD 100 billion (approx. RM460.0 billion) on Amazon Web Services over the coming decade. Rather than exclusive ownership, both cloud giants are racing to become the default home for high-performing, independent model labs. The result is a web of overlapping alliances where ostensible competitors depend on each other for chips, distribution and revenue.

Anthropic’s Growth Surges as Enterprises Seek Foundation Model Rivals
Anthropic’s ability to command tens of billions of dollars in capital commitments rests on explosive demand for its Claude family of models. The company reports that its annualized revenue run rate has surged to more than USD 30 billion (approx. RM138.0 billion), up from about USD 9 billion (approx. RM41.4 billion) at the end of 2025, growth that it says now outpaces OpenAI’s trajectory. A key driver is its focus on coding, including strong uptake of its Claude Code tool among developers. Anthropic has also rolled out new models such as Mythos, which it initially restricted to a limited set of major tech firms due to cybersecurity concerns. Together, these moves suggest that enterprises are eager for credible alternatives to the leading incumbents, willing to integrate multiple foundation models, and increasingly sensitive to issues like security posture, model access policies and alignment with government expectations.

Model Labs as Anchor Tenants for Hyperscale Data Centers
The Google-Anthropic deal also illustrates how the AI investment arms race is reshaping infrastructure economics. Anthropic has been signing multi-year agreements to secure vast computing capacity, including deals with Broadcom and CoreWeave and a plan to lock in nearly 1 gigawatt of capacity via Amazon’s chips by year-end. It previously signalled an intent to invest USD 50 billion (approx. RM230.0 billion) in data centers to support model training and deployment. These commitments turn Anthropic into an anchor tenant for hyperscale data centers, guaranteeing long-term demand for GPUs, custom accelerators and power-hungry facilities. For Google, the up to USD 40 billion (approx. RM184.0 billion) commitment effectively pre-sells its next generation of cloud and TPU capacity. Instead of building data centers on speculation, hyperscalers can underwrite expansion with binding, decade-scale contracts from a handful of model labs.

Regulatory Risks and the Squeeze on Smaller AI Startups
As cloud providers bankroll foundation model rivals, regulators and customers are asking who ultimately controls the AI stack. Deals where a hyperscaler finances a major model lab while also supplying chips and cloud services raise questions about preferential access to scarce hardware, bundling power and potential lock-in through long-term spending commitments. Anthropic’s close ties to both Google and Amazon could attract antitrust scrutiny if authorities decide these arrangements entrench dominant clouds. At the same time, the scale of capital—tens of billions earmarked for a few players—raises the bar for smaller model startups, pushing them either into niche specializations or into the arms of regional clouds pursuing “sovereign AI” strategies. In this environment, model “frenemies” are not an anomaly but the default: a competitive landscape defined by overlapping alliances, shared infrastructure and a narrowing field of truly independent contenders.
