Google Says Its AI Edge Can Close the Cloud Gap: What It Means for Enterprise Buyers

Kurian’s ‘Full-Stack’ Bet: Chips, Models and Margin

Thomas Kurian is making a simple but aggressive claim: Google Cloud can close the gap with Amazon Web Services and Microsoft Azure by owning more of the AI stack end-to-end. After entering the cloud market late and being criticised for lagging AI products, Google now frames its differentiation around intellectual property. Kurian argues that, unlike “a hyperscaler reselling other people’s technology,” Google designs its own Tensor Processing Units (TPUs), operates its own data centres, and trains its own Gemini foundation models. Because Google is not “shipping 80 per cent” of each revenue dollar to external chip or model providers, he says it can reinvest more in infrastructure and R&D. That thesis underpins a cloud business that has doubled market share from 7 to 14 per cent and is now growing revenue faster than its larger rivals on the back of AI-centric workloads.

AI Chips for Cloud: TPUs Versus Trainium and Maia

Google’s latest AI chips sit at the heart of its Google Cloud AI strategy. The company has unveiled an eighth generation of TPUs, with one variant focused on training large AI models and another, fitted with expanded memory, optimised for high-speed inference. Kurian positions this stack as superior to Amazon’s Trainium chips and Nova models, and to Microsoft’s Maia processors and MAI models, claiming only Nvidia matches Google’s hardware-plus-software integration. A report from Epoch AI estimates Google controls about a quarter of global AI compute, with roughly 3.8 million TPUs and 1.3 million GPUs in operation, while Microsoft follows with about 3.2 million Nvidia GPUs. Kurian also rebuts Nvidia chief Jensen Huang’s criticisms of TPU performance, pointing to adoption by nine of the top 10 AI labs as evidence that Google’s chips are competitive on performance, price and quality, and not merely sustained by a single flagship customer.

Which Enterprise AI Workloads Fit Google’s Pitch?

Google’s AI-first cloud narrative targets customers whose core workloads are defined by large-scale models and data-heavy applications. Enterprises building LLM-powered applications, autonomous agents and complex “agentic” workflows stand to benefit from tight integration between Gemini models, TPUs and Google’s data services. AI labs and start-ups training frontier models are an obvious focus, underscored by Anthropic’s decision to increase its TPU commitments alongside a broader partnership that includes a large investment and multi-year compute capacity. Traditional enterprises modernising analytics stacks also fit the strategy: Google is betting that lower-cost, higher-efficiency inference on its chips can make it attractive for large-scale data processing, recommendation engines and real-time decisioning. For customers already using Google Workspace or search advertising tools, the promise is smoother product integration and unified security and governance around AI usage, making Google Cloud a more compelling choice for consolidating AI-heavy workloads.

Risks Behind the AI-First Cloud Strategy

The same factors that power Google’s AI advantage also create significant risks. Capital expenditure is rising sharply as Google races to expand data centres and design successive TPU generations, with forecasts pointing to USD 185 billion (approx. RM851 billion) of spending this year. Investment at that level raises the bar for monetising AI services at scale and sustaining margins. Kurian must also prove that differentiation goes beyond raw compute: enterprises increasingly expect managed platforms, governance and tooling that make AI reliable and compliant, not just fast. Another tension lies in balancing proprietary Gemini models and TPUs against support for open-source and third-party approaches, a key concern for developers wary of vendor lock-in. If Google leans too heavily on its own stack, it risks alienating customers who want flexibility across clouds and model providers, especially as competitors refine their own AI offerings.

Implications for Pricing, Multi-Cloud and Buyer Power

Google’s claim that it avoids sending most of its revenue to external chip or model suppliers is ultimately a pricing and bargaining argument. By owning more of the stack, it says it can offer competitive prices for AI compute while funding rapid innovation. That could intensify the cloud wars AI battle, pressuring Amazon and Microsoft to sharpen their economics around Trainium, Nova, Maia and partner models. For large enterprises, this is likely to strengthen multi-cloud strategies: buyers can pit Google, AWS and Azure against each other for AI-heavy contracts, using Google’s in-house economics as leverage. However, deeper integration between models and hardware also increases potential lock-in. Customers will need to weigh short-term performance and cost gains against long-term portability of models, data and workflows as they decide where to place their most strategic enterprise AI workloads in an increasingly specialised cloud landscape.
