Anthropic’s Mega Bets on Cloud and Edge Compute
Anthropic is rapidly transforming from a cloud tenant into a long‑term infrastructure power user. The company has reportedly signed a USD 1.8 billion (approx. RM8.3 billion) AI compute deal with Akamai Technologies to support growing demand for its Claude models and managed AI agents. Akamai’s globally distributed GPUs and edge architecture are tailored for low‑latency inference and always‑on agent workloads, helping Anthropic keep response times tight even as sessions stretch from quick chats into hours of coding or automation. In parallel, Anthropic may pay Google USD 200 billion (approx. RM920 billion) over five years for cloud and chip access, turning today’s capacity reservations into one of the sector’s largest forward commitments. While not all details are confirmed, the strategy is clear: lock in future compute capacity across multiple suppliers so Claude can scale without being constrained by a worsening AI chip shortage.

ByteDance’s USD 30 Billion Push to Build Its Own AI Backbone
ByteDance is preparing one of the most aggressive artificial intelligence infrastructure expansions on record. Reports indicate the company could spend more than USD 30 billion (approx. RM138 billion) on AI infrastructure, up from a previously discussed 160 billion yuan plan. The budget targets the compute, memory, networking and data‑center capacity needed to power its Doubao models and related AI products. Yet ByteDance’s challenge is not just how much it spends, but how effectively that capital converts into usable training and inference capacity. With access to top‑tier hardware constrained by the AI chip shortage, the company is shifting more of its infrastructure spending toward domestically produced chips it can reliably source. Parallel data‑center projects in Thailand, Finland and elsewhere in Southeast Asia and Europe add scale but also execution risk, as facilities, power and cooling must all converge on schedule. Timing is critical: delayed infrastructure could miss the product windows Doubao needs to stay competitive.

DeepSeek, State Capital and the New AI Infrastructure Race
DeepSeek is emerging as a pivotal player in the global AI race, potentially backed by state‑aligned capital. The China Integrated Circuit Industry Investment Fund, often called the “Big Fund,” is reportedly in talks to lead a new investment that could value DeepSeek above 300 billion yuan, or about USD 44 billion (approx. RM202 billion). While the round is not yet final, a lead role for this national semiconductor fund would tightly link DeepSeek’s trajectory to strategic priorities in chips and artificial intelligence infrastructure. DeepSeek recently released its V4‑Pro and V4‑Flash models as MIT‑licensed open weights, undercutting many Western API prices by an order of magnitude. A major funding package tied to a national semiconductor strategy would likely translate into privileged access to training systems, domestic deployment scale and long‑term compute procurement. That, in turn, could intensify competition not only on model quality but also on the price and openness of advanced AI offerings available to developers.

Why These AI Compute Deals Matter for Access, Performance and Price
Across Anthropic, ByteDance and DeepSeek, a clear pattern is emerging: AI labs now treat compute, chips and data‑center capacity as the core bottleneck and the ultimate competitive advantage. Multi‑billion‑dollar cloud computing investments and long‑term chip reservations are less about short‑term experiments and more about securing capacity for years of inference and agent workloads before rivals can. For users, the upside is that this infrastructure land‑grab should enable faster, more reliable models with richer features: long‑running agents, real‑time collaboration and lower latency across regions. But there are trade‑offs. As major labs reserve capacity years in advance, smaller AI buyers may face longer waits, weaker pricing leverage and fewer options during a chip shortage. The net effect is likely a more stratified AI landscape: top platforms offer powerful, integrated services at scale, while independent developers and enterprises must navigate a tighter, more expensive compute market to keep up.
