A New Era of AI Compute Deals
AI development is increasingly defined by who can secure access to compute, not just who can train the best models. As foundation models scale, infrastructure commitments have exploded, turning cloud infrastructure spending into a competitive weapon. Anthropic, ByteDance and DeepSeek are now emblematic of an arms race in GPUs, custom accelerators, networking and data centres. These AI compute deals are no longer routine cloud contracts; they are multi‑billion‑dollar bets that lock in capacity years ahead, often through complex strategic partnerships. The result is a more concentrated market in which a handful of labs and platforms pre‑book scarce resources, while smaller players are pushed toward longer queues and weaker bargaining power. At the same time, chip and cloud providers see an opportunity to cement long‑term customer relationships, bundling hardware, software and services into vertically integrated stacks that are hard to dislodge once deployed.

Anthropic’s Dual Track: Akamai Deal and the Anthropic Google Deal
Anthropic is building a multi‑supplier infrastructure strategy to keep Claude available as usage grows. The company has reportedly signed a USD 1.8 billion (approx. RM8.28 billion) compute deal with Akamai, tapping a distributed GPU and edge architecture that suits low‑latency agent and inference workloads. This supplements earlier arrangements, including a SpaceX‑linked compute path, and reflects Anthropic’s need to sustain longer‑running coding sessions and hosted automation features. In parallel, reports indicate Anthropic may pay Google USD 200 billion (approx. RM920 billion) over five years for cloud and chip access, potentially making it one of the largest forward compute commitments in the sector. While only a narrower TPU capacity expansion from 2027 has been confirmed, the scale of the reported Anthropic Google deal underscores how top labs are reserving future capacity years in advance, tightening supply for everyone else and reinforcing the gravitational pull of major cloud providers.

ByteDance’s $30 Billion Push and the AI Chip Shortage
ByteDance is preparing a massive expansion of its AI footprint, reportedly planning to spend more than USD 30 billion (approx. RM138 billion) on AI infrastructure in 2026, above 200 billion yuan and about 25 percent higher than a preliminary 160 billion yuan plan. This ByteDance AI investment targets compute, memory, networking and data‑centre capacity, with projects in Thailand, Finland and elsewhere in Southeast Asia and Europe to support its Doubao model family. Yet the company is reshaping its budget around what hardware it can reliably source, not just the highest‑performance chips it might prefer, underscoring ongoing AI chip shortages and supply constraints. A larger budget does not automatically translate into usable compute; ByteDance must still convert orders into functioning facilities with power, cooling and live systems. Timing is critical: delayed infrastructure could miss key product windows, turning a headline spend into stranded capital if Doubao and related models cannot scale when demand peaks.

DeepSeek’s Funding Gambit and State-Backed Compute Power
DeepSeek is emerging as a significant challenger by pairing aggressive pricing with a potential state‑backed capital infusion. The China Integrated Circuit Industry Investment Fund, often called the “Big Fund”, is reportedly in talks to lead a new investment that could value DeepSeek above 300 billion yuan, or about USD 44 billion (approx. RM202.4 billion). The deal is not final, and the round’s exact size and investor mix remain uncertain, but a state‑backed lead would tie DeepSeek more closely to national priorities in chips and training infrastructure. DeepSeek has already unveiled its V4‑Pro and V4‑Flash models with MIT‑licensed open weights, undercutting many Western API prices by an order of magnitude. Additional capital and political support could translate into preferential access to domestic accelerators, improved training‑system availability and smoother commercial rollout, tightening the link between funding, compute procurement and deployment scale in an already constrained AI infrastructure market.

Market Consolidation, Pricing Pressure and the Future of Cloud Infrastructure Spending
The surge in mega‑scale AI compute deals is reshaping market dynamics up and down the stack. When labs like Anthropic reserve enormous volumes of future capacity with Google or Akamai, cloud and chip suppliers gain predictable revenue streams and justification for new fabs and data centres. However, this also concentrates bargaining power: smaller AI buyers face longer waits, less favourable pricing and potentially reduced flexibility to switch providers. At the same time, ByteDance’s pivot toward more readily available domestic chips and DeepSeek’s potential state‑aligned funding show how supply constraints are driving diversification away from a single vendor or architecture. Strategic partnerships are increasingly bundled—compute, tooling, support and ecosystem lock‑in—making exits costly. As cloud infrastructure spending continues to climb, the central question becomes whether this consolidation and forward‑booking of capacity will accelerate innovation, or entrench a narrow set of infrastructure gatekeepers controlling the next generation of AI.
