How Orbital Data Centers Would Actually Work: Power, Cooling, Latency, Cost
Google says orbital data centers break even at $200/kg to LEO. Current launch costs are 7x higher. Here's what has to change first.
AnIntent Editorial
Most coverage of orbital data centers treats them as a sci-fi pitch with a launch date attached. The actual engineering question is narrower and harder: can you put a rack of AI accelerators in low Earth orbit, keep it cool, keep it powered, and beam data back fast enough to matter, all at a cost that beats a warehouse in Northern Virginia? Right now the answer is no, and the gap is roughly an order of magnitude.
Google thinks that gap closes around 2035. SpaceX thinks it closes sooner. Amazon thinks it does not close at all. Behind those positions sit specific, measurable numbers about radiation tolerance, radiator mass, optical link bandwidth, and the price of a Falcon 9 seat. Those numbers are the article.
The $200-Per-Kilogram Number That Decides Everything
Google's entire orbital compute thesis rests on a single threshold. According to Google Research, if launch costs to low Earth orbit fall below $200 per kilogram, operating a space-based data center becomes roughly comparable to the energy bill of an equivalent terrestrial facility on a per-kilowatt-per-year basis. That is the break-even point.
The current price is nowhere near it. Data Center Dynamics puts today's launch costs at $1,500 to $2,900 per kilogram, or more depending on launch requirements. That is roughly seven to fifteen times the level Google's economic model requires.
The path from here to there runs through one vehicle. Google's analysis assumes SpaceX's Starship enters regular commercial service and then flies 180 times per year to drive the per-kilogram cost down the learning curve. For comparison, Next Big Future notes a reusable Falcon 9 launch currently sells for around $67 to $70 million, with a per-kilogram price near $3,600 in its reusable configuration. Hitting $200/kg means selling a Falcon 9-equivalent payload for closer to $4 million.
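The arithmetic is simple enough to sketch. A back-of-envelope check, assuming the reported $67 million reusable Falcon 9 price and a roughly 18,500 kg reusable payload to LEO (an approximation; actual capacity varies by mission profile):

```python
# Launch-cost arithmetic behind the $200/kg threshold.
# Price from the article; payload mass is an approximation.
falcon9_price = 67e6          # USD, reusable Falcon 9 (reported)
falcon9_payload_kg = 18_500   # approx. reusable LEO payload, kg
threshold = 200               # USD/kg, Google's break-even point

cost_per_kg = falcon9_price / falcon9_payload_kg
breakeven_launch_price = threshold * falcon9_payload_kg

print(f"current: ${cost_per_kg:,.0f}/kg")                              # ~ $3,600/kg
print(f"break-even launch price: ${breakeven_launch_price / 1e6:.1f}M")  # ~ $3.7M
```

The gap is an 18x price reduction on the same payload mass, which is why the model leans entirely on Starship's flight cadence rather than incremental Falcon 9 improvements.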
That is the cost problem in one paragraph. Everything else, the radiators, the lasers, the radiation-hardened TPUs, is engineering. This is finance, and it is the constraint that will decide whether Project Suncatcher, which Google announced in November 2025, becomes anything more than a research paper.
Why Cooling Is the Spec That Actually Limits Scale
There is a counterintuitive fact about putting computers in space: getting rid of heat is harder, not easier. On Earth, a data center can use forced air, chilled water, or immersion cooling because there is a fluid to dump heat into. In vacuum, the Wikipedia overview of space-based data centers explains, heat dissipation is limited to thermal radiation alone, which is less efficient than the convection used on the ground.
That constraint scales badly. NASA-cited analysis summarised by Singularity Hub notes that radiators can account for more than 40 percent of total power system mass at high power levels. Stack enough TPUs to match a 300-megawatt terrestrial AI cluster and the radiator surface dominates the satellite. Every extra square meter of radiator is mass, mass is launch cost, and launch cost is the thing the whole project is trying to minimise.
Google's own materials acknowledge this. Data Center Dynamics quotes the company saying "advanced thermal interface materials and heat transport mechanisms" would be required, "preferably passive to maximize reliability" to move heat from chips to dedicated radiator surfaces. Passive is the operative word. There are no pumps you can replace at 2 a.m. in low Earth orbit.
Think of it like a thermos in sunlight. The vacuum that protects the contents from outside heat also traps any heat generated inside. A 250-watt TPU has nowhere to put its waste energy except into a radiator panel pointed at the cold of deep space, and the panel itself has to survive temperature swings every 90 minutes as the satellite passes from sunlight to eclipse.
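The scale of the problem falls out of the Stefan-Boltzmann law. This is a back-of-envelope sketch, not Google's design: it assumes a one-sided panel at 320 K with emissivity 0.9, radiating to deep space (sink temperature negligible), and ignores absorbed sunlight:

```python
# Back-of-envelope radiator sizing via the Stefan-Boltzmann law:
# P = epsilon * sigma * A * T^4 (deep-space sink temperature neglected).
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W / (m^2 K^4)

def radiator_area_m2(power_w, temp_k=320.0, emissivity=0.9):
    """Minimum one-sided radiator area to reject `power_w` watts."""
    return power_w / (emissivity * SIGMA * temp_k**4)

# One 250 W accelerator needs roughly half a square meter of panel.
area_per_chip = radiator_area_m2(250)       # ~0.47 m^2
# A 300 MW cluster needs on the order of half a million square meters.
area_cluster = radiator_area_m2(300e6)      # ~560,000 m^2
```

Under these assumptions the cluster-scale answer lands above half a square kilometer of radiator, which is why the NASA-cited 40-percent-of-mass figure is the binding constraint rather than a footnote.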
What Google Actually Plans To Launch in 2027
The announced mission is much smaller than the headlines suggest. Google's next milestone is a learning mission in partnership with Planet, slated to launch two prototype satellites by early 2027. Two satellites. Not a constellation, not a data center, a test article.
What those satellites are meant to validate is more interesting than their number. The experiment will test how Google's models and TPU hardware operate in space and validate the use of optical inter-satellite links for distributed ML tasks. The full vision sits behind that test. Data Center Dynamics has described Google's plan for 1-kilometer-wide arrays of 81-satellite compute clusters, flying in tight formation and linked by free-space optical lasers.
The radiation testing has already happened on the ground, and the result was better than expected. High Bandwidth Memory subsystems were the most sensitive component but only began showing irregularities after a cumulative dose of 2 krad(Si), nearly three times the expected shielded five-year mission dose of 750 rad(Si). No hard failures were attributable to total ionising dose up to the maximum tested dose of 15 krad(Si) on a single chip, indicating that Trillium TPUs are surprisingly radiation-hard for space applications. This is the non-obvious result of the whole program: Google's commercial AI silicon, designed for racks in Iowa, survives a proton beam well enough that it does not need to be redesigned for orbit.
That changes the economic model. Specialised rad-hard chips are expensive, slow, and several process generations behind the commercial state of the art. If a stock Trillium TPU works, the supply chain for orbital AI hardware collapses back into Google's existing fab pipeline.
The Latency Reality Nobody Advertises
A constellation in sun-synchronous orbit at 650 kilometers altitude has a round-trip light-time of roughly 4 to 5 milliseconds to the nearest ground station, before any switching, queueing, or atmospheric weather effects. That sounds fine until you remember why people build data centers in specific places. A Northern Virginia facility serves Washington at sub-millisecond latency. A satellite cannot.
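The quoted light-time is easy to reproduce, and it is a floor, not a typical value: the 4 to 5 ms figure assumes the satellite is directly overhead, while the slant range to a station seeing it low on the horizon is several times longer. A sketch of the vacuum propagation delay alone:

```python
# Round-trip light time from a 650 km orbit, vacuum propagation only
# (no switching, queueing, or atmospheric effects).
import math

C_KM_S = 299_792.458   # speed of light, km/s
R_EARTH_KM = 6_371.0   # mean Earth radius, km

def round_trip_ms(altitude_km, elevation_deg=90.0):
    """RTT in ms for a ground station seeing the satellite at `elevation_deg`."""
    el = math.radians(elevation_deg)
    r = R_EARTH_KM + altitude_km
    # Slant range from the law of cosines on the Earth-centre triangle.
    slant = math.sqrt(r**2 - (R_EARTH_KM * math.cos(el))**2) - R_EARTH_KM * math.sin(el)
    return 2 * slant / C_KM_S * 1e3

overhead = round_trip_ms(650)        # ~4.3 ms, satellite straight up
low_pass = round_trip_ms(650, 10.0)  # ~13.6 ms at 10 degrees elevation
```

Even the best case is several milliseconds before a single packet is switched, which a terrestrial metro data center never pays.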
This is why the near-term use case for orbital compute is narrow. TechJournal reports that early orbital data centers would handle overflow AI training and inference workloads, with consumer-facing applications staying on terrestrial infrastructure because of latency requirements. Training a foundation model is a batch job. It does not care whether the gradient update arrives in 1ms or 50ms. A user typing into Gemini does.
Elon Musk's framing of the same problem is sharper. TechJournal reports Musk's December 2025 post on X claiming that satellites with localised AI compute, where just the results are beamed back from low-latency sun-synchronous orbit, will be the lowest cost way to generate AI bitstreams within three years. The trick in that sentence is "just the results." If you can do all the heavy computation in orbit and ship only a compressed answer to the ground, the link budget shrinks and the latency stops mattering as much. That is a specific architectural bet, and it only works for inference pipelines that can run end-to-end in space without round-tripping to a terrestrial database.
In practice, the inter-satellite fabric matters as much as the ground link. According to InfoQ, Google has already demonstrated optical transmission at 1.6 terabits per second using a single transceiver pair in lab tests. To match a terrestrial AI cluster, the constellation needs tens of terabits per second between every pair of satellites, while they fly hundreds of meters apart at 7.5 kilometers per second.
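To put a rough number on that fabric: suppose each satellite needs 20 Tbps of aggregate bandwidth to its neighbours (an illustrative figure, not a Google spec) and each optical transceiver pair carries the demonstrated 1.6 Tbps:

```python
# Illustrative optical-terminal count per satellite.
import math

link_rate_tbps = 1.6  # demonstrated per transceiver pair (InfoQ lab test)
target_tbps = 20      # assumed aggregate per satellite, not a Google figure

pairs_needed = math.ceil(target_tbps / link_rate_tbps)
print(pairs_needed)   # 13 transceiver pairs
```

Thirteen optical terminals per spacecraft, each holding pointing lock on a partner hundreds of meters away at orbital velocity, is the kind of figure that explains why the 2027 mission is framed as a link validation first.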
The Power Argument That Actually Holds Up
The one claim in the orbital data center pitch that survives close inspection is the energy one. According to Google's research blog, a solar panel in a dawn-dusk sun-synchronous orbit can be up to eight times more productive than the same panel on Earth, with near-continuous generation that reduces the need for batteries.
That number does most of the work in the economic model. Eight times the yield per square meter of silicon, no clouds, no night, no atmosphere absorbing the spectrum. TechJournal notes that power grids in key data center regions, Northern Virginia, the Nordics, Singapore, are already maxed out, and that this is the primary driver pushing hyperscalers to look at orbit at all. The bet is not that space is cheap. The bet is that Earth is running out of room on the substation side.
Degradation eats into the advantage. The Wikipedia entry notes that solar array efficiency degrades 0.5% to 0.8% per year from UV exposure, space weather, and thermal cycling. A five-year mission loses roughly 2.5% to 4% of its peak output before retirement. That is manageable. It is also one of the reasons Google's economic math assumes a finite spacecraft lifetime rather than indefinite operation.
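The annual figure compounds rather than adds, though over five years the difference is small. A quick check, assuming the quoted rates apply uniformly each year:

```python
# Compound solar-array degradation over a five-year mission.
def remaining_fraction(annual_loss, years=5):
    """Fraction of initial output left after `years` of uniform degradation."""
    return (1 - annual_loss) ** years

low_loss = 1 - remaining_fraction(0.005)   # ~2.5% lost at 0.5%/yr
high_loss = 1 - remaining_fraction(0.008)  # ~3.9% lost at 0.8%/yr
```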
Who Else Is Already in Orbit
Google is not first. According to the Wikipedia overview, in 2025 Y Combinator-backed startup Starcloud became the first company to train an LLM in space and to run a version of Google Gemini in orbit. The workload was tiny by hyperscaler standards. The precedent matters.
The industry is moving on multiple tracks. The same source notes that on May 5, 2026, Edge Aerospace was awarded a contract by the European Space Agency under its Space Cloud program to study architectures and implementation roadmaps for orbital data centers. SpaceX has its own play. TechJournal reports that SpaceX has filed for authorisation to launch up to 1 million satellites to support orbital data center ambitions, and that the company acquired xAI in February 2026, folding in xAI's Colossus 1 data center with more than 220,000 Nvidia GPUs across over 300 megawatts. Our prior coverage of the Anthropic-Colossus compute deal explains why that ground-based capacity matters for the IPO narrative.
The talks reported on May 12, 2026 fit into this. TechCrunch reports that Google and SpaceX are in active talks to launch orbital data centers, per the Wall Street Journal citing sources familiar with the matter, with Google also speaking to other rocket-launch companies and not exclusively SpaceX. The same report notes Google's $900 million investment in SpaceX in 2015 giving it a 6.1% stake. SpaceX is targeting a June 2026 IPO at a reported $1.75 trillion valuation per Gagadget, and orbital compute sits near the center of the pitch.
Not everyone is convinced. TechJournal reports that Amazon's cloud boss has publicly said orbital data centers are "nowhere close" to being practical, directly pushing back on the SpaceX narrative. The counterpoint to all of this is the capex flow. The same source reports that Meta, Amazon, Microsoft, and Alphabet have collectively committed approximately $725 billion in 2026 capital expenditures, a 75% year-over-year increase, almost entirely for terrestrial data centers, chips, and GPUs. Our piece on where big tech capex is actually going covers the ground-based side in more depth.
The Thing That Will Decide This Before 2035
The risk most coverage skips is debris. Without humans on hand to swap broken TPUs the way technicians do in terrestrial racks, the simplest solution is overprovisioning, but overprovisioning means more satellites in a constellation already crowded with Starlink, Kuiper, and now hypothetical kilometer-wide compute clusters. Wikipedia notes that megastructures in orbit are particularly exposed to orbital debris risk. A 1-kilometer formation of 81 satellites flying hundreds of meters apart is the definition of a target.
The question to watch is not the 2027 prototype launch. It is what happens between 2028 and 2032 to Starship's flight cadence. If SpaceX cannot get to roughly 180 launches per year by the mid-2030s, the $200/kg threshold slips, and the entire economic case for orbital AI infrastructure reverts to a research paper. If Starship does hit cadence, the constraint shifts from launch cost to thermal engineering, and the radiator mass problem becomes the next decade's bottleneck.
For anyone watching this space, the practical move is to stop treating orbital compute as a binary that arrives or does not. It is a curve. The curve has one slope set by SpaceX's launch economics and another set by Google's TPU power density. The intersection happens around 2035 if both teams hit their targets. Until then, the workloads that go up will be exactly the ones that do not care about latency, do not care about maintenance, and do not need to be close to a paying customer. That is a smaller use case than the press releases suggest, and it is the only one that pencils out.
Frequently Asked Questions
How much do orbital data centers cost compared to ground-based ones?
At current launch prices of $1,500 to $2,900 per kilogram to low Earth orbit, launch costs run roughly seven to fifteen times above Google's break-even threshold of $200 per kilogram, which its analysis projects could be reached around 2035 if SpaceX's Starship scales to approximately 180 launches per year.
Why can't you use water or air cooling on satellites?
There is no air or fluid in vacuum to carry heat away, so cooling is limited to thermal radiation through panels. NASA-cited analysis shows radiators can account for more than 40 percent of a high-power spacecraft's power system mass, which is why Google specifies passive thermal interface materials and dedicated radiator surfaces rather than pumped liquid loops.
Are Google's TPUs actually radiation-hardened for space?
Google's commercial Trillium TPUs have not been redesigned for space but tested surprisingly well. In a 67 MeV proton beam, no hard failures appeared up to 15 krad(Si) on a single chip, with High Bandwidth Memory showing irregularities only above 2 krad(Si), roughly three times the expected shielded five-year mission dose of 750 rad(Si).
What workloads make sense in an orbital data center?
Batch AI training and overflow inference where round-trip latency does not matter. Consumer-facing applications stay on terrestrial infrastructure because a low Earth orbit round trip adds several milliseconds before atmospheric and switching delays, which is unacceptable for interactive products like search or chatbots.
Has anyone actually run AI in space yet?
Yes. In 2025, Y Combinator-backed startup Starcloud became the first company to train a large language model in space and run a version of Google Gemini in orbit. The workloads were small compared to terrestrial hyperscaler clusters, but they established that AI training and inference are physically possible in low Earth orbit.