Anthropic’s $200 Billion Bet on Cloud and Chips
Anthropic is emerging as one of the most aggressive buyers in the new AI compute arms race. According to reporting, the lab may pay Google USD 200 billion (approx. RM920 billion) over five years for cloud computing contracts and access to custom chips. Only part of this capacity is formally confirmed, via a TPU expansion plan with Google and Broadcom starting in 2027, but the reported figure signals a shift from routine cloud usage to massive forward purchase obligations. Anthropic is effectively reserving accelerator supply years in advance to support larger training runs and heavier enterprise inference traffic. By turning AI infrastructure spending into long‑term commitments, it aims to stop capacity shortages from becoming product delays. The strategy also underscores how compute deals are becoming as strategic as model design itself, shaping who can launch and scale cutting‑edge systems.
Scaling Inference: Inside Anthropic’s $1.8 Billion Akamai Deal
Beyond training capacity, Anthropic is racing to secure infrastructure specifically tuned for inference scaling and AI agents. The company has reportedly signed a USD 1.8 billion (approx. RM8.3 billion) compute deal with Akamai Technologies, tapping its distributed GPU and edge architecture. Akamai’s network is designed to deliver low‑latency services, making it well suited for always‑on workloads such as managed agents, long coding sessions, and cloud‑hosted automations built on Claude. These products keep inference jobs running far beyond short chat exchanges, intensifying pressure on capacity and response times. By diversifying suppliers across Google, Akamai, and earlier arrangements such as a SpaceX compute deal, Anthropic is building a multi‑cloud safety net. This lets workloads spill over between providers when demand spikes, while locking in large tranches of compute that smaller AI buyers increasingly struggle to access on favourable terms.
ByteDance’s $30 Billion AI Infrastructure Push
ByteDance is moving into the same league of heavy buyers, reportedly planning to spend more than USD 30 billion (approx. RM138 billion) on AI infrastructure in 2026. That figure, above 200 billion yuan and roughly 25 per cent higher than an earlier internal plan, would deepen its role in the race for compute, memory, networking, and data‑centre capacity. Crucially, ByteDance is steering more of this budget toward domestic AI chips as its access to top global hardware remains constrained. Data‑centre projects in Thailand, Finland, and elsewhere in Southeast Asia and Europe are meant to convert the spending into usable compute for its Doubao model family and related systems. Yet the company still faces execution risk: money must translate into powered facilities, cooling, and live clusters in time to meet product windows. Rising memory costs and tight supply chains continue to shape ByteDance’s choices, and GPU pricing trends across the wider ecosystem.

How Mega Deals Rewire Cloud Pricing and Competition
As Anthropic and ByteDance lock in massive AI compute deals, they are reshaping the economics of cloud computing contracts. When a handful of labs reserve future capacity years in advance, smaller AI buyers may face longer waits, weaker bargaining power, and steeper GPU prices. Cloud providers increasingly prioritise clients willing to commit to huge, multi‑year volumes, concentrating the best discounts and earliest access at the top. This dynamic risks turning compute into a gating factor for innovation: emerging startups can design sophisticated models but may struggle to secure affordable accelerators at scale. At the same time, providers must balance their books carefully: overbuilding for one mega‑customer whose demand later shifts could leave idle capacity. The result is a more stratified market in which access to high‑performance infrastructure becomes a strategic moat for the best‑funded AI players.
AI Infrastructure Dominates Startup Investment
The knock‑on effects of these mega commitments are visible in startup funding trends. In the first quarter of 2026, European tech companies raised USD 17 billion (approx. RM78.2 billion), the strongest showing in two years. AI infrastructure emerged as the single largest category, attracting USD 4.8 billion (approx. RM22.1 billion). Just three late‑stage rounds—Nscale’s USD 2 billion (approx. RM9.2 billion) Series C, Neura Robotics’ USD 1.2 billion (approx. RM5.5 billion) Series C, and Wayve’s USD 1.2 billion (approx. RM5.5 billion) Series D—accounted for most of that total, underscoring how investors are backing capital‑intensive, high‑performance computing bets. Enterprise applications also surged, while fintech funding slipped despite notable deals. Overall, fewer rounds but larger cheque sizes point to a market where infrastructure and deep tech draw disproportionate capital, mirroring the consolidation of compute among a small group of well‑resourced AI leaders.