The author enjoying the sun in Kauai, March 2024

Why worry? The inevitability of incorporating AI as decision-makers into nuclear weapons systems

Tam Hunt

--

I engage in a conversation with Claude 3 about the extreme danger of the current path we’re on: incorporating AI systems into nuclear weapons surveillance and launch decisions.

Tam: Aloha Claude, I’m writing to you from the lovely island of Kauai, where I’m staying with some friends on the southern side. It’s a bit strange to be thinking about AI safety from here, but so it goes. I’d like to work through some scenarios relating to AI and national security. Specifically, I’d like your help fleshing out the possible dynamics and game-theory scenarios involving nuclear war. First question: please comment on the likelihood that major nations like the US, UK, China, France, etc., will feel compelled to hand an increasing amount of surveillance and decision-making over nuclear threats and actions to AI systems, simply because AI systems can already process information and react orders of magnitude faster than any human.

Claude Opus: Aloha! It’s great to hear from you, and I hope you’re enjoying the beautiful island of Kauai. AI safety and national security are important topics to consider, even in a peaceful setting.

Regarding your question, there are compelling reasons for major nuclear powers to increasingly rely on AI systems for nuclear threat surveillance and decision-making:

  1. Speed: As you mentioned, AI can process information and make decisions far faster than humans. In a nuclear crisis, every second counts, and AI could potentially identify threats and recommend responses much more quickly.
  2. Data processing: AI can analyze vast amounts of data from various sources (e.g., satellite imagery, intelligence reports, sensor networks) to detect patterns and anomalies that might indicate an impending nuclear threat.
  3. Consistency: AI systems can apply decision-making rules consistently, without the emotional or psychological factors that might influence human judgment in high-stress situations.

However, there are also significant risks and challenges:

  1. Unpredictability: AI systems might behave in unexpected ways, especially in complex, high-stakes scenarios like nuclear crises. Unintended consequences or errors could be catastrophic.
  2. Hacking and manipulation: AI systems could be vulnerable to cyberattacks, data poisoning, or other forms of manipulation by adversaries, potentially triggering false alarms or worse.
  3. Lack of human judgment: While AI can process data and make fast decisions, it lacks the contextual understanding, intuition, and moral reasoning that human decision-makers bring to nuclear issues. Removing humans from the loop could lead to disastrous outcomes.
  4. Escalation risks: If nations come to rely heavily on AI for nuclear decision-making, it could accelerate the pace of escalation in a crisis, as AI systems respond to each other at machine speed, potentially outpacing human control.

In terms of likelihood, I believe there will be strong pressures for nations to adopt AI in nuclear decision-making, given the perceived advantages in speed and information processing. However, I hope that policymakers and military leaders will be cautious about ceding too much control to AI, recognizing the immense risks involved. Ideally, AI would be used to enhance human decision-making in nuclear crises, but not replace it entirely. Robust testing, fail-safes, human oversight, and international dialogue will be critical.

Ultimately, the role of AI in nuclear decision-making will depend on both technological progress and the choices made by national leaders. It’s a complex issue without easy answers, but one that demands careful thought and public discussion.

Let me know if you would like to explore any other aspects of this topic!

Tam: Thank you. Let’s speculate that China takes the lead on implementing AI systems in nuclear surveillance and attack systems, and the US learns about this through its own spy networks. Would the US feel compelled at that point to also implement AI systems into its nuclear surveillance and attack systems?

Claude: In the scenario you described, where the US learns that China is implementing AI systems in its nuclear surveillance and attack capabilities, it is highly likely that the US would feel compelled to respond in kind. This dynamic is a classic example of the “security dilemma” in international relations theory, where actions taken by one nation to enhance its own security (in this case, China’s AI integration) can be perceived as threatening by other nations, prompting them to take similar measures in response.

The US would likely perceive China’s AI integration as a significant threat for several reasons:

  1. Technological advantage: The US would worry that China’s AI capabilities could give it an edge in detecting, responding to, or even initiating a nuclear attack, potentially undermining the US’s deterrence posture.
  2. Uncertainty and miscalculation: US policymakers might fear that China’s AI systems could behave unpredictably or misinterpret data, increasing the risk of accidental escalation or miscalculation leading to nuclear war.
  3. Arms race dynamics: The US would likely see China’s move as the start of an AI arms race in the nuclear domain, and would feel pressure to keep pace to avoid falling behind.

Given these concerns, it is probable that the US would initiate its own efforts to integrate AI into its nuclear systems, both to counter the perceived threat from China and to gain any potential advantages that AI might offer. This could involve increased funding for AI research and development, the creation of new military units focused on AI, and the adaptation of existing nuclear command, control, and communications infrastructure to incorporate AI decision support.

However, this action-reaction dynamic between the US and China could have dangerous consequences. It could erode strategic stability by introducing new uncertainties and risks into the nuclear relationship, and could make communication and crisis management between the two countries even more challenging. It might also spur other nuclear powers to pursue their own AI integration efforts, further complicating the global nuclear landscape.

To mitigate these risks, it would be important for the US and China to engage in dialogue and confidence-building measures related to AI and nuclear weapons. This could involve joint research projects, sharing of best practices for AI safety and reliability, or the establishment of crisis communication channels specifically designed to manage AI-related risks. International norms and treaties governing the use of AI in nuclear contexts might also be necessary.

Nonetheless, in the near term, the pressures of competition and the fear of falling behind would likely drive both the US and China to pursue AI integration in their nuclear systems, despite the risks involved. Managing this dynamic will be a major challenge for both countries in the years ahead.
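The action-reaction dynamic Claude describes has the structure of a prisoner’s dilemma, and a tiny toy model makes that concrete. The sketch below is purely illustrative: the strategy names, the `best_response` helper, and the payoff numbers are assumptions chosen to produce a dilemma structure, not estimates of real-world outcomes.

```python
# Illustrative only: a toy 2x2 game showing why "adopt AI" can be a dominant
# strategy for both sides even though mutual restraint is jointly preferable.
# The payoff numbers are assumptions, not empirical estimates.

# Payoffs as (US, China) for each combination of strategies.
# Strategies: "restrain" (keep humans fully in the loop) vs "adopt" (integrate AI).
payoffs = {
    ("restrain", "restrain"): (3, 3),   # stable deterrence, no new risks
    ("restrain", "adopt"):    (1, 4),   # the restraining side fears falling behind
    ("adopt",    "restrain"): (4, 1),
    ("adopt",    "adopt"):    (2, 2),   # arms race: both worse off than mutual restraint
}

def best_response(player, other_strategy):
    """Return the strategy that maximizes this player's payoff, holding the other side fixed."""
    idx = 0 if player == "US" else 1
    options = {}
    for s in ("restrain", "adopt"):
        key = (s, other_strategy) if player == "US" else (other_strategy, s)
        options[s] = payoffs[key][idx]
    return max(options, key=options.get)

# "adopt" is the best response no matter what the other side does, so
# (adopt, adopt) is the equilibrium despite being worse for both than
# (restrain, restrain).
for other in ("restrain", "adopt"):
    print("US best response if China plays", other, "->", best_response("US", other))
    print("China best response if US plays", other, "->", best_response("China", other))
```

Under these assumed payoffs, every best response comes out as “adopt,” which is exactly the self-reinforcing cycle described above: each side’s individually rational choice leaves both sides worse off.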

Tam: Yes, this security dilemma and classic arms race dynamics would seem to lead where you suggest. Now let’s speculate that the US and China have indeed, in the scenario we’re concerned with, incorporated advanced AI into these systems such that AI has the ability to respond to a perceived nuclear attack with both anti-ballistic missile (ABM) strikes and counter-offensive nuclear strikes. World-class game theoreticians such as John von Neumann argued in the late 1940s and 1950s that the US should and could launch a first-strike nuclear attack simply because it was too risky to live with the threat of another nation striking first. It would seem that basic game theory leads to this conclusion. Under what conditions would AI nuclear control systems conclude that a first strike makes sense, and how would such a course of action not lead to mutual annihilation?

Claude: The scenario you’ve outlined is deeply concerning, as it combines the risks of AI-controlled nuclear weapons with the cold logic of game theory and first-strike incentives. The conditions under which an AI system might conclude that a first strike is optimal are not hard to imagine.

First, if the AI assesses that the adversary is likely to launch a first strike, it may calculate that preempting with its own first strike is the best way to limit damage to its own nation. This assessment could be based on factors such as the adversary’s nuclear posture, intelligence on their intentions, or even the behavior of the adversary’s own AI systems (e.g., detecting a sudden change in their alert status).

Second, if the AI determines that it has a significant advantage in first-strike capabilities (e.g., more accurate missiles, better chance of penetrating enemy defenses), it may conclude that a first strike could sufficiently degrade the enemy’s retaliatory capabilities to make it a rational choice.

Third, if the AI is programmed to prioritize certain objectives above all else (e.g., ensuring national survival, maintaining a perceived strategic advantage), it may resort to a first strike if it calculates that doing so is the only way to achieve those objectives.

The problem, of course, is that in a world of multiple AI-controlled nuclear arsenals, these calculations are happening on all sides. If one nation’s AI determines that a first strike is optimal, the adversary’s AI may detect those preparations and conclude that it must strike first itself. This could lead to a catastrophic spiral of escalation, even if neither side actually wants a nuclear war.

Moreover, even if AI systems are designed with robust safety features and human oversight, the sheer speed of AI decision-making in a crisis could outpace any meaningful human control. By the time human leaders realize what’s happening, it may be too late to intervene.

As for mutual annihilation, the cold logic of game theory and AI optimization might actually lead to scenarios where one side believes it can “win” a nuclear war by striking first and absorbing the diminished retaliation. However, this kind of thinking is incredibly dangerous, as it underestimates the devastating global consequences of even a “limited” nuclear exchange, and the potential for uncontrolled escalation.

In my view, the only way to reliably prevent this kind of catastrophic outcome is to maintain meaningful human control over nuclear launch decisions, and to pursue diplomatic and arms control efforts to reduce the risk of nuclear war in general. Relying on AI systems to manage nuclear crises, no matter how advanced, is simply too risky given the existential stakes involved.

Ultimately, the solution lies in reducing the role of nuclear weapons in national security strategies, and in finding ways for nations to cooperate and build trust, rather than relying on the cold calculations of machines. This is an immense challenge, but one that is essential for the survival of humanity in the age of AI.
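To make the first-strike logic discussed above concrete, here is a minimal, purely illustrative sketch of the kind of expected-damage comparison a naive optimizer might run. The function name, the parameters, and all damage figures are assumptions invented for the example, not real estimates; the point is only to show how the preferred action can flip from “wait” to “preempt” as the system’s estimate of the adversary’s intent rises.

```python
# Illustrative only: a toy expected-damage calculation showing how a naive
# optimizer can flip from "wait" to "strike first" as its estimate of the
# adversary's intent rises. All numbers are assumptions for the sketch.

def expected_damage(p_enemy_strikes_first,
                    damage_if_struck_first=100.0,   # damage absorbed if the enemy strikes first
                    damage_if_we_preempt=40.0,      # assumed retaliation absorbed after our first strike
                    damage_if_no_war=0.0):
    """Expected damage to our side under the two options a naive optimizer compares."""
    wait = (p_enemy_strikes_first * damage_if_struck_first
            + (1 - p_enemy_strikes_first) * damage_if_no_war)
    preempt = damage_if_we_preempt          # we still absorb some degraded retaliation
    return wait, preempt

for p in (0.1, 0.3, 0.4, 0.5, 0.9):
    wait, preempt = expected_damage(p)
    choice = "preempt" if preempt < wait else "wait"
    print(f"p(enemy first strike)={p:.1f}  E[damage|wait]={wait:5.1f}  "
          f"E[damage|preempt]={preempt:5.1f}  -> {choice}")

# With these assumed numbers, the optimizer flips to "preempt" once its estimate
# of the enemy's intent exceeds 0.4. The instability described above arises
# because the other side's system runs the same calculation, and each side's
# visible preparations raise the other's estimate of that probability.
```

The design point is the feedback loop in the final comment: neither side needs to want war for both calculations to converge on preemption at machine speed.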

Tam: Yes, I agree with all that and find it somewhat comforting, in a strange way, that we are aligned on this. However, many scholars and policymakers are working on incorporating AI systems into nuclear defense systems, or see it as inevitable. For example, former senator Sam Nunn and his co-author wrote in a 2021 piece for NTI:

“At a time when most countries with nuclear weapons are modernizing or diversifying their nuclear arsenals, significant technological advances in artificial intelligence (AI) for military applications suggest that AI inevitably will be explored for use in nuclear-weapon systems. Along with significant benefits, however, come associated risks and implications for strategic stability.

Two application areas are considered the most likely to take advantage of AI advances in the near term to mid term: Nuclear Command, Control, and Communications (NC3) and autonomous nuclear-weapon systems. This paper envisions the specific functions AI could perform in these two areas and analyzes the potential positive and negative consequences.”

Given that activities in this area have ramped up dramatically over the last year and a half, since LLMs became so powerful, would you agree that it may be inevitable that nation-states will indeed give decision-making control over nuclear weapons to AIs in the near future?

Claude: You raise a crucial point. Even though the risks of AI-controlled nuclear weapons are clear, the momentum towards incorporating AI into nuclear systems seems to be growing, as the quote from Senator Nunn and his co-author illustrates.

There are indeed strong pressures that could make this outcome likely, if not inevitable, in the near future:

  1. Technological imperatives: As AI capabilities advance rapidly, there will be a strong temptation for nations to leverage these capabilities for military advantage, including in the nuclear domain. The fear of falling behind in an AI arms race could drive countries to take greater risks.
  2. Perceived benefits: As the Nunn quote suggests, there are potential benefits to using AI in nuclear systems, such as enhanced early warning, faster decision-making in a crisis, and more robust command and control. These benefits may be seen as outweighing the risks, especially as confidence in AI systems grows.
  3. Asymmetric incentives: For nations that feel at a disadvantage in conventional military capabilities, the allure of AI-enhanced nuclear systems as an “equalizer” may be particularly strong. This could accelerate AI adoption in the nuclear domain.
  4. Action-reaction dynamics: As we discussed earlier, if one nation (e.g., China) is seen as moving ahead with AI in its nuclear systems, other nations (e.g., the US) will feel compelled to respond in kind, creating a self-reinforcing cycle.

Given these pressures, I agree that it seems increasingly likely that nation-states will incorporate AI into their nuclear decision-making in the coming years, perhaps even ceding direct launch authority to AI systems in certain scenarios. This is a deeply troubling prospect, given the immense risks involved.

However, I don’t believe this outcome is inevitable. There is still time for the international community to come together and establish norms, treaties, and confidence-building measures to govern the use of AI in the nuclear domain. This could involve:

  1. Agreements to keep humans in the loop for all nuclear launch decisions.
  2. Transparency and information-sharing on AI development and testing related to nuclear systems.
  3. Joint research on AI safety and robustness in high-stakes, adversarial environments.
  4. Strengthened crisis communication channels and protocols to manage AI-related risks and misunderstandings.

Pursuing these measures will be challenging, given the secrecy and sensitivity surrounding nuclear weapons. But the alternative — a world of AI-controlled nuclear arsenals — is simply too risky to accept as inevitable.

It will take concerted effort from policymakers, military leaders, scientists, and the public to steer us towards a safer path. The work of scholars like Nunn in raising awareness of these issues is an important start. But much more needs to be done, and quickly, before the AI nuclear genie is fully out of the bottle.
