From Cloud Customers to Long-Term Capacity Tenants
AI labs are rapidly shifting from flexible cloud usage to multi‑year AI infrastructure deals that resemble long‑term utility contracts. Anthropic may reportedly pay Google USD 200 billion (approx. RM920 billion) over five years for cloud computing and custom chips, a figure that, if accurate, would rank among the largest forward compute commitments in the sector. This is not just about renting servers; it is a compute lockup strategy designed to secure guaranteed access to training and inference capacity before bottlenecks emerge. Anthropic’s earlier capacity expansion onto Google’s TPU platform, built with Broadcom, had already signaled that the relationship had moved beyond standard cloud computing contracts; the newly reported figure simply exposes the financial scale behind that shift. As AI models grow larger and release cycles accelerate, guaranteed access to hardware, power, and data centers is now as strategically important as algorithmic breakthroughs.

Anthropic’s Multi-Cloud Bets and the Seven-Year Akamai Deal
Anthropic is not relying on a single provider for its AI infrastructure. Alongside its deepening partnership with Google, the company has reportedly signed a USD 1.8 billion (approx. RM8.3 billion) seven-year cloud computing deal with Akamai Technologies. Akamai described the agreement, signed with a “leading frontier model provider,” as the largest in its history, a sign of just how hungry model labs have become for compute. The long-term contract underscores that Anthropic is spreading its bets across multiple suppliers to reserve future capacity, diversify technical risk, and avoid being locked into any one platform’s pricing or roadmap. Seven-year commitments also give partners like Akamai the confidence to invest in new data centers, networking, and storage tuned to AI workloads. In practice, strategic compute partnerships now sit alongside model architecture and data as core pillars of Anthropic’s competitive strategy.

ByteDance’s USD 30 Billion Push and Domestic Chip Pivot
ByteDance is mounting its own offensive in the AI chip investments race, reportedly preparing to spend more than USD 30 billion (approx. RM138 billion) on AI infrastructure in 2026. The plan, which would exceed an earlier target of around 200 billion yuan (roughly USD 28 billion), is shaped as much by supply constraints as by ambition. With access to top‑tier imported hardware limited by availability, price, and delivery windows, ByteDance is steering a larger share of its budget toward domestic AI chips and a global data center buildout. Facilities in Thailand, Finland, and elsewhere in Southeast Asia and Europe are expected to add capacity, but they also introduce execution risk, as power, cooling, and networking must all come online together. Whether this spending translates into usable compute will be tested by how quickly ByteDance can bring clusters online to support its Doubao assistant and related models.

The New Compute Divide: Pricing, Access, and Competitive Pressure
These mega cloud computing contracts and capital-intensive buildouts are transforming competitive dynamics across the AI market. When frontier labs like Anthropic reserve vast future capacity with hyperscalers, smaller AI buyers are pushed further back in the queue, facing longer wait times and weaker pricing leverage. Capacity that might once have been contested on the spot market is increasingly pre‑sold years in advance. At the same time, ByteDance’s massive infrastructure push shows how model developers are being forced to spend more simply to defend planned deployments as memory and component costs rise. For challengers and startups, this environment raises the barrier to entry: winning on model quality alone is no longer enough without assured access to compute at scale. Strategic compute partnerships and multi‑year reservations have become central weapons in the AI compute wars, determining who can launch, scale, and monetise advanced models.
