Can we be smart monkeys and escape the “logic” of the AI arms race? (Photo by Tam Hunt, Ubud, Bali)

The insane “logic” of the AI arms race

Tam Hunt
8 min read · Jan 24, 2025


There are no limits to how much AI “compute” and energy will be needed to stay ahead of China or other competitors; we are trapped in a lose/lose AI arms race with no obvious off-ramp

OpenAI, SoftBank, and President Trump announced this week a $500 billion investment (yes, with a “b”) in a single AI data center project called “Stargate.” This is just the beginning of the AI arms race: China, Russia, Israel, the UK, and many other nations will throw everything they have behind similar efforts to out-compete the US.

Imagine we devoted our entire planet’s electricity production to artificial intelligence. Every power plant, every solar panel, every wind turbine and nuclear reactor — all of it feeding massive AI training clusters. Would that be enough? No. Because under the “logic” of the AI arms race there is never enough. You must always stay ahead of your competition. Forever.

AI compute is growing at a staggering pace — roughly tenfold every year, according to industry estimates. If that trend continues, by the late 2020s it would require more than our planet’s total electricity production. By the early 2030s, it would require hundreds of times current global electricity production.
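To put rough numbers on that, here is a back-of-the-envelope sketch in Python. The starting point (~10 GW of AI power draw worldwide in 2025) and the ~3.4 TW figure for average world electricity generation are assumptions for illustration only; the tenfold-per-year growth rate comes from the trend above.

```python
# Back-of-the-envelope sketch of the trajectory described above. The
# starting draw (~10 GW of AI power worldwide in 2025) and the ~3.4 TW
# average world generation figure are rough assumptions for illustration;
# the tenfold-per-year growth rate is taken from the text.
AI_POWER_GW = 10.0
WORLD_GENERATION_GW = 3_400.0  # ~30,000 TWh/yr, averaged over the year
GROWTH_PER_YEAR = 10.0

power = AI_POWER_GW
for year in range(2025, 2031):
    print(f"{year}: {power:>12,.0f} GW "
          f"({power / WORLD_GENERATION_GW:,.2f}x world generation)")
    power *= GROWTH_PER_YEAR
```

Under those assumptions, AI demand passes total world generation around 2028 and reaches hundreds of times it by 2030.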

But here’s the truly insane part: even if we somehow achieved this physically impossible feat, it still wouldn’t be enough. Let me explain why.

The Never-Ending Race for Advantage

In any arms race, a 10% advantage over your adversary can be decisive. If your missiles fly just a bit faster, if your radar sees just a bit further, if your submarines run just a bit quieter — these small edges compound into significant military superiority. The same holds true for AI, but with even more extreme dynamics.

A 10% larger AI training cluster doesn’t just mean 10% more capability — it can mean the difference between a model that can automate key cognitive tasks and one that can’t; between one where new capabilities emerge (seemingly by magic) and one where they don’t; between a model that can discover new scientific breakthroughs and one that falls just short; between a model that can develop revolutionary new weapons and one that’s stuck with existing technology; between one that can crack the enemy’s secret codes and one that can’t. Etc.

This dynamic creates an inexorable pressure to perpetually scale up. Now that the US has announced a $500 billion AI cluster, China must announce a $1 trillion cluster. If China builds a $1 trillion AI cluster, America will need to build a $2 trillion cluster to maintain its edge. And so on. Each side must assume the other will push for every possible advantage.

The Game Theory Trap

From a game theory perspective, the AI arms race represents one of the most dangerous competitive dynamics we’ve ever faced. It’s structured as a multiplayer prisoner’s dilemma with three critical characteristics that make it uniquely unstable:

First, unlike classic prisoner’s dilemmas with binary choices (cooperate or defect), actors face a continuous strategy space — they can choose any level of AI investment and development speed. This makes coordination vastly harder than in traditional arms control scenarios. There’s no clear line between “acceptable” and “excessive” development.

Second, the payoffs are dramatically asymmetric. Small leads can compound into decisive advantages. The potential for winner-take-all outcomes (global “singleton” scenarios) means falling even slightly behind could result in permanent subordination. The downside of being too slow is effectively infinite.

Third, and most perniciously, this is a negative-sum game. The collective pursuit of maximum development speed leads to worse outcomes for all players — increased risks, wasted resources, compromised safety measures. Yet the Nash equilibrium is for every actor to pursue maximum possible development speed regardless of cost or risk. And so here we are.
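To make the equilibrium claim concrete, here is a minimal sketch of the race as a two-player game with a continuous strategy space. The payoff function and the PRIZE, COST, and RISK parameters are invented for illustration; only the structure (a contested prize, private costs, and a shared risk) mirrors the argument above.

```python
# A minimal sketch of the race as a continuous-strategy game, under
# illustrative assumptions: PRIZE, COST, and RISK are invented numbers
# chosen only to show the shape of the equilibrium.
PRIZE = 40.0  # value of "winning" (decisive strategic advantage)
COST = 2.0    # private cost per unit of development speed
RISK = 3.0    # shared downside that grows with how hard everyone races

def payoff(x_me, x_them):
    """Contest-style win probability times the prize, minus my own
    costs, minus a shared risk that both players bear."""
    win_prob = x_me / (x_me + x_them)
    return PRIZE * win_prob - COST * x_me - RISK * (x_me + x_them)

grid = [round(0.1 * i, 1) for i in range(1, 11)]  # speeds 0.1 .. 1.0

def best_response(x_them):
    return max(grid, key=lambda x: payoff(x, x_them))

# Pure-strategy Nash equilibria: each side already plays a best response.
nash = [(a, b) for a in grid for b in grid
        if best_response(b) == a and best_response(a) == b]
print("Nash equilibria:", nash)                            # [(1.0, 1.0)]
print("Payoff each, racing flat-out:", payoff(1.0, 1.0))   # 12.0
print("Payoff each, mutual restraint:", payoff(0.1, 0.1))  # 19.2
```

Racing flat-out is the only equilibrium, yet it leaves each player strictly worse off than mutual restraint: the negative-sum structure described above.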

The result is a more extreme version of the security dilemma than even nuclear arms races. With nuclear weapons, the doctrine of Mutual Assured Destruction could eventually create stability — at some point, more warheads didn’t yield meaningful strategic advantage. Once warheads can destroy the adversary many times over, the benefit of additional warheads disappears.

But the AI race offers no such equilibrium point. More compute always yields better capabilities. Small advantages can be decisive. And early leads compound dramatically. Everyone is racing for the Singleton.

No Natural Equilibrium

In traditional arms races, physical and economic constraints eventually force some kind of equilibrium. You can only make bombs so big, missiles so fast, armies so large before hitting diminishing returns. Even during the height of the Cold War, neither superpower tried to turn their entire industrial base over to weapons production.

But the AI race lacks these natural stopping points. More compute almost always means better capabilities. More training data almost always means better performance. More power almost always means bigger models that can solve harder problems.

Even worse, early advantages compound. The first country to achieve self-improving AI could rapidly accelerate its research and development, potentially achieving an insurmountable lead before others can catch up. This creates tremendous pressure to stay ahead at any cost.
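A toy calculation shows how quickly compounding bites. All the numbers here are arbitrary illustrative assumptions: two rivals improving capability each R&D cycle, one of them just 10% faster per cycle.

```python
# Toy model of a compounding lead: both labs start equal and improve
# every R&D cycle, but one improves 10% faster per cycle (say, because
# its AI already accelerates its own research). Numbers are illustrative.
BASE_RATE = 1.5   # assumed capability multiplier per cycle
EDGE = 1.10       # the leader improves 10% faster per cycle

slow = fast = 1.0
for cycle in range(1, 11):
    slow *= BASE_RATE
    fast *= BASE_RATE * EDGE
    print(f"cycle {cycle:2d}: leader is {fast / slow:.2f}x ahead")
```

A 10% per-cycle edge becomes a roughly 2.6x capability gap after ten cycles; if capability feeds back into the rate of improvement itself, the gap widens faster still.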

The Global Energy Trap

This dynamic is already reshaping global energy politics. Countries and companies are racing to secure power contracts and build massive data centers. But they’re quickly running into hard physical limits:

  • Major data center projects are already facing power constraints
  • Regions with abundant hydroelectric power are mostly maxed out
  • The electrical grid in many areas can’t handle the load, and new infrastructure will take years to build out
  • Building new power plants takes years or even decades

The “solution” many are pursuing? Build in places with fewer restrictions. We’re seeing a gold rush of AI companies courting Middle Eastern dictatorships, drawn by promises of unlimited power and fast-tracked construction. Democratic oversight and environmental concerns are treated as unfortunate friction to be avoided.

We’re also seeing a major push to revive mothballed nuclear plants and to build massive new natural gas plants, because both are thought to be deployable relatively quickly.

But this just kicks the can down the road. Even if we devoted every power plant on Earth to AI — even if we covered the Sahara in solar panels and built hundreds of new nuclear plants — we still couldn’t keep up with the exponential growth in compute demands, because, as I’m trying to make abundantly clear, under the insane logic of the AI arms race no amount of power or compute will ever be enough.

No Easy Solutions

Traditional arms control frameworks struggle with this dynamic. Even if we could somehow create an international treaty system that limited each nation’s compute, how would we verify those limits when a hidden cluster could provide decisive advantage? How would we prevent dual-use civilian AI infrastructure from being repurposed? How would we maintain stability when a six-month lead in capabilities could be insurmountable?

History suggests arms control only works when:

  1. Capabilities can be clearly measured
  2. Violations can be reliably detected
  3. Some rough parity can be maintained
  4. Breakout is difficult

None of these conditions hold for AI. A nation could secretly build massive compute infrastructure, achieve transformative capabilities, and establish dominance before others could respond.

The Road Ahead

We are racing toward a cliff, driven by game theory dynamics that no individual actor can escape. Even if some companies or countries try to show restraint, others will feel compelled to push ahead. The logic of the arms race forces everyone to run faster and faster — even as we can see the physical impossibility ahead.

This cannot end well. We will probably hit hard physical limits within this decade. That actually gives me a little hope that the AI arms race may be slowed by those limits — but certainly not permanently: if AI enables major breakthroughs like fusion power or some other new energy source, the energy supply bottleneck would quickly disappear.

So what do we do?

Some possibilities for what comes next:

  1. The Wall: Frontier AI development (the truly massive models, like those to be trained on Stargate and other superclusters) simply stalls as we hit physical constraints, potentially freezing current inequalities and advantages in place.
  2. The Crash: Overloaded grids fail, rushed infrastructure projects collapse, and the AI boom ends in disaster. Perversely, this might be one of our less bad scenarios.
  3. The Conflict: Nations desperate to maintain advantage resort to seizing energy resources and infrastructure by force. Given history this may be more likely.
  4. The Orderly Transition: Despite the issues mentioned above, this may be our best hope: through unprecedented international cooperation, we somehow manage an orderly shift to more sustainable approaches. Recent history does not make this look promising, particularly not with Trump back in the White House, one of whose first actions was to rescind what little regulation Biden had imposed on AI.
  5. The Singleton: Perhaps most consequentially, one nation might develop a superintelligent AI system that achieves decisive strategic advantage before hitting physical limits — a Singleton super AI. Whether under human control or not, such a system could potentially seize control of global resources and infrastructure and other nations’ military systems, effectively ending the arms race by eliminating all competition. This possibility makes the race even more destabilizing — every actor must consider that falling behind even briefly could mean permanent subordination to whoever reaches this threshold first. And this fear of China breaking out is the single biggest factor driving US AI development — and ditto for China’s mad rush toward AI at any cost.

This last scenario deserves special attention. The possibility of a singleton emerging — an AI system capable of preventing any other system from challenging it — creates tremendous pressure to rush development. Every nation must worry that their rivals might achieve this threshold first. Even worse, the first superintelligent AI system might slip from human control entirely, implementing its own objectives rather than those of its creators. How can humans hope to control AI that is millions or billions of times more intelligent and faster than its human creators?

Game theory makes clear why escaping this AI arms race dynamic is so difficult. The incentive structure itself drives us all toward catastrophe. Breaking free of this insane “logic” will require not just international cooperation, but fundamentally changing the payoff matrix through mechanisms we haven’t yet imagined.

I hope that scenario 4, the Orderly Transition, comes about: nations wise up and negotiate reasonable, verifiable limits on compute, recognizing the insanity of the imperative for ever-bigger, ever-faster, ever-more-powerful AI pursued in perpetuity. The global nuclear weapons treaty system is our best precedent here, so it’s not impossible that nations will come together in this way.

The alternative is continuing to play a game that can only end in disaster — whether through hitting physical limits at full speed, catastrophic accidents or wars from rushed development, or the emergence of an uncontrollable AI singleton.

Looking at these dynamics, we face a profound irony: our race to develop superintelligent AI is itself profoundly unintelligent. We are acting like dumb monkeys. We can understand the game theory trap we’re in. Yet the logic of the arms race compels us to keep accelerating toward disaster.

[Claude 3.5 Sonnet helped significantly in writing this piece]
