Google and SpaceX Bet on Orbital Data Centers to Power the Next Wave of AI

AI’s Power Problem Pushes Infrastructure into Space

The rapid expansion of AI infrastructure is colliding with a hard constraint: data center power consumption is pressing local grids to their limits. Tech giants are competing for megawatts as fiercely as for cutting-edge chips, sparking planning battles over land, water and new substations. In this context, Google and SpaceX are exploring whether part of the AI stack can escape Earth’s bottlenecks altogether. Their talks, reported alongside Google’s Project Suncatcher, frame orbit not as a science-fiction playground but as a potential relief valve for AI’s insatiable energy appetite. When infrastructure planners start asking if compute should literally leave the planet, it signals how severe terrestrial constraints have become. The goal is not to replace hyperscale facilities on the ground overnight, but to probe whether AI infrastructure in space could complement land-based capacity and ease pressure on overstressed power grids.

Project Suncatcher: Solar-Powered TPUs Above the Atmosphere

Google’s Project Suncatcher imagines a constellation of solar-powered satellites equipped with Tensor Processing Units, effectively turning orbit into an extension of its AI hardware fleet. A satellite in a carefully chosen orbit can access far more continuous sunlight than any terrestrial solar farm, sidestepping disputes over land use, water consumption and local utility strain. Planet is slated to build and operate two prototype satellites for this moonshot, with launch targeted by early 2027, primarily to test whether TPUs can operate reliably in space. The experiment focuses on the basics: how well chips handle radiation, how thermal systems dissipate heat without air and how long hardware can survive in low Earth orbit. Rather than running large AI models immediately, Suncatcher’s early phase is about proving that space-based computing nodes can be built, powered and maintained at all.

SpaceX’s Orbital Data Center Vision and the AI Launch Loop

SpaceX is positioning itself as the launch and operations backbone for AI infrastructure in space. The company has filed with the Federal Communications Commission for authority to deploy up to 1 million satellites under what it calls the SpaceX Orbital Data Center system. While that figure exceeds any existing constellation and should be treated as an upper bound, it signals a strategic pivot: leveraging Starlink’s mass-manufacturing and launch experience to host compute, not just connectivity. For SpaceX, orbital data centers offer a high-value use case to justify more frequent, reusable launches. For Google, which reportedly owns a 6.1% stake in SpaceX, the partnership aligns incentives: secure launch capacity in exchange for becoming a flagship customer. If launch costs fall as expected over the next decade, the energy advantages of AI infrastructure in space could begin to outweigh the expense of lifting hardware off the ground.

How Space-Based Computing Could Reshape AI Deployment

If orbital data centers mature, they could reshape how AI workloads are deployed and scaled. Instead of every new model requiring fresh grid connections and local substations, companies could offload certain compute-intensive, latency-tolerant tasks to solar-powered platforms in orbit. Training large models, batch inference or background analytics could be scheduled onto space-based computing nodes, while latency-sensitive services remain grounded. This hybrid architecture would decouple part of AI expansion from terrestrial power and land constraints, potentially smoothing regional energy demand and reducing friction with local communities. It also encourages a fuller “industrial stack” view of AI infrastructure, where launch capacity, optical interlinks, orbital safety and spectrum rights become as critical as chips and cooling towers. Over time, AI infrastructure in space could be treated as a strategic asset, much like data center campuses are today, but distributed across orbits instead of industrial parks.
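The placement logic behind such a hybrid architecture can be sketched in a few lines. The code below is purely illustrative: the `Workload` class, the latency threshold and the placement rule are assumptions for the sake of the example, not any real Google or SpaceX API.

```python
from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    max_latency_ms: float  # tightest response time the job can tolerate

# Assumed budget: jobs that tolerate more than ~500 ms of extra
# round-trip delay are candidates for orbital offload. The exact
# cutoff is a hypothetical figure chosen for illustration.
ORBITAL_LATENCY_BUDGET_MS = 500.0

def place(workload: Workload) -> str:
    """Route latency-tolerant jobs to orbit, everything else stays grounded."""
    if workload.max_latency_ms > ORBITAL_LATENCY_BUDGET_MS:
        return "orbital"
    return "terrestrial"

jobs = [
    Workload("model-training", max_latency_ms=3_600_000),  # hours-long batch
    Workload("batch-inference", max_latency_ms=60_000),    # minutes
    Workload("chat-serving", max_latency_ms=200),          # interactive
]

for job in jobs:
    print(f"{job.name} -> {place(job)}")
```

In this sketch, training and batch analytics land on orbital nodes while interactive serving stays on terrestrial capacity, mirroring the split the paragraph above describes.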

Barriers: Heat, Radiation, Latency and Orbital Congestion

Despite the promise, turning orbital data centers into practical infrastructure faces formidable obstacles. In vacuum, there is no air to carry heat away from chips, forcing engineers to rely on radiators and meticulous thermal design; every watt of compute becomes a thermal challenge. Radiation in low Earth orbit can cause bit flips, component degradation and unpredictable faults, and early lab tests on Google’s Trillium TPUs must be validated by years of in-orbit operation. Latency is another constraint: workloads that require constant round trips between Earth and orbit may prove uneconomical. Finally, adding massive compute constellations to already crowded orbital shells amplifies concerns over space debris and long-term orbital safety. Regulators will have to weigh terrestrial environmental benefits against congestion above the atmosphere, shaping rules that determine whether space-based computing remains a niche experiment or evolves into a core pillar of global AI infrastructure.
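The latency floor is easy to estimate from first principles. The back-of-the-envelope calculation below assumes a satellite directly overhead at roughly 550 km, a typical Starlink-class altitude, and counts only light travel time; real round trips would add ground-station routing, queuing and processing delay on top of this best case.

```python
# Best-case round-trip delay to a low-Earth-orbit compute node,
# counting only signal travel time straight up and back down.
C = 299_792_458.0        # speed of light in vacuum, m/s
ALTITUDE_M = 550_000.0   # assumed LEO altitude (~550 km)

one_way_ms = ALTITUDE_M / C * 1_000
round_trip_ms = 2 * one_way_ms

print(f"one-way: {one_way_ms:.2f} ms, round trip: {round_trip_ms:.2f} ms")
```

Even this idealized figure of a few milliseconds per hop explains why chatty Earth-orbit workloads add up quickly, while batch jobs that rarely cross the link are largely unaffected.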
