The Death of Bletchley: How Trump Killed International AI Safety

Mar 22, 2025

The Trump administration has withdrawn from the international AI safety process and has eliminated almost all national AI safety efforts begun under former President Biden; this is a recipe for catastrophe

At Bletchley Park, where Alan Turing and other codebreakers helped win World War II, world leaders gathered to launch what many hoped would be a new era of international cooperation on artificial intelligence (AI) safety.

The main product of that fateful meeting in 2023, the Bletchley Declaration, was signed by 28 countries, including the United States and China, along with the European Union. This agreement represented an unprecedented global consensus that AI development required careful governance to prevent existential risks while harnessing its benefits. It was meant to be the start of a comprehensive international process to ensure safe AI development for all nations.

Less than two years later, that promising coalition appears to be unraveling. The return of Donald Trump to the White House in January 2025 has dramatically shifted American policy priorities and diplomatic approaches to technological governance, effectively dismantling the fragile international consensus on AI safety that began at Bletchley Park.

The Promise of Bletchley

The Bletchley Summit represented a rare moment of international unity around a momentous technological challenge. Nations with competing interests acknowledged that AI development posed potential risks requiring collaborative solutions. The declaration emphasized the need for transparency in AI research, safety testing protocols, and international standards to ensure that frontier AI models would be developed responsibly.

Key commitments included:

  • Establishing shared scientific and evidence-based approaches to AI safety
  • Creating a network of research institutes focused on AI safety
  • Developing international standards and accountability mechanisms
  • Ensuring appropriate human oversight of AI systems

In the year following Bletchley, there were tangible signs of progress. The AI Safety Institute Network formed connections between research bodies across multiple countries. The G7 established AI safety testing protocols, and international standards bodies began work on governance frameworks.

Since returning to office, however, the Trump administration has systematically undermined these initiatives through a combination of policy shifts, diplomatic withdrawals, and rhetoric that has altered the international landscape for AI governance.

The administration quickly signaled its approach to AI with an “America First Technology Development” executive order that emphasized deregulation and competitive advantage over safety and international cooperation. The order specifically:

  • Rescinded numerous AI safety requirements for American companies
  • Eliminated mandatory safety testing for frontier models
  • Redirected funding from international AI safety research to domestic AI development programs
  • Characterized safety measures as “innovation-killing red tape”

The administration’s formal withdrawal from key international AI safety initiatives has left a leadership vacuum. Notable departures include:

  • Pulling American participation from the AI Safety Institute Network
  • Recalling U.S. representatives from international standards bodies working on AI governance
  • Reducing diplomatic engagement in multilateral AI discussions
  • Cutting funding for joint research initiatives established after Bletchley

Competitive Rather Than Cooperative Framing

Perhaps most consequentially, the administration has reframed AI development as a zero-sum competition rather than a shared challenge requiring cooperative solutions. Presidential rhetoric consistently depicts AI safety efforts as attempts by other nations to slow American innovation.

This framing has made it politically difficult for other countries to maintain robust safety standards without appearing to disadvantage themselves competitively. The resulting “race to the bottom” dynamic has undermined the careful balance achieved at Bletchley between innovation and safety.

The American retreat has triggered cascading consequences across the international AI governance landscape:

Fragmentation of Standards

Without U.S. participation, international standards efforts have fragmented. The EU continues to implement its AI Act, but with limited global impact. The EU’s landmark legislation, which created a risk-based framework for AI regulation with especially strict rules for “high-risk” applications and general-purpose AI models, now stands as the most comprehensive AI safety law globally. It established mandates for transparency, documentation, human oversight, and risk management that could have served as a foundation for international standards.

However, without American participation, EU regulations have become isolated rather than foundational. Meanwhile, China has proceeded with its own national AI governance system with minimal international coordination. Rather than a coherent global approach, we now see competing and sometimes contradictory regulatory regimes.

Corporate Exploitation of Regulatory Gaps

Major AI companies have responded by shifting operations and deployments to jurisdictions with fewer restrictions. This regulatory arbitrage has rendered many safety measures ineffective, as companies can simply develop and deploy advanced models in regions with minimal oversight.

Collapse of Research Cooperation

The ambitious research cooperation envisioned at Bletchley has largely disintegrated. Joint research initiatives have been defunded or narrowed in scope. The promised network of safety institutes now operates as disconnected national laboratories with limited information sharing.

Erosion of Transparency Norms

Perhaps most concerning, the transparency norms established for frontier models have deteriorated. Companies increasingly cite competitive pressures to justify withholding information about model capabilities, safety testing results, and known limitations.

The Path Not Taken

What makes this reversal particularly troubling is that it came at a critical juncture in AI development. The period since Bletchley has seen remarkable advances in model capabilities, with systems demonstrating increasingly sophisticated reasoning, planning, and problem-solving abilities.

These developments have only heightened the importance of the safety and governance measures outlined at Bletchley. Without robust international coordination, we face:

  • Accelerated deployment of increasingly powerful systems without adequate safety testing
  • Diminished ability to detect and mitigate harmful capabilities
  • Limited accountability mechanisms for systems that cause harm
  • Reduced capacity to address risks that cross national boundaries

Is Revival Possible?

While the Bletchley consensus has been severely damaged, it has not been completely destroyed. Several factors suggest that international cooperation on AI safety could potentially be revived:

  • Many technical experts and institutions remain committed to safety principles
  • Regional efforts, particularly in Europe, continue to advance governance frameworks
  • Civil society organizations are increasingly filling coordination gaps
  • Private sector actors have economic incentives to avoid catastrophic AI accidents

However, without American leadership or participation, any revived international effort will be significantly hampered. The technical expertise, market influence, and diplomatic weight of the United States make it an essential participant in effective global governance of AI.

The Bletchley Declaration represented a rare moment of foresight and cooperation in the face of a powerful emerging technology. Its rapid unraveling under the current administration represents not just a policy shift, but a fundamentally different vision of how transformative technologies should be governed.

As AI systems grow more capable, the need for the kind of international coordination established at Bletchley only becomes more urgent. The question remains whether the foundation laid at Bletchley can be reconstructed before we face consequences that no single nation can address alone.

In the historic halls where codebreakers once worked together to confront an existential threat, world leaders glimpsed the possibility of similar cooperation on AI safety. That vision now appears to be fading — a development that may be remembered as one of the most consequential missed opportunities of our time.

[I used Claude 3.7 extensively in writing this essay]

Written by Tam Hunt

Public policy, green energy, climate change, technology, law, philosophy, biology, evolution, physics, cosmology, foreign policy, futurism, spirituality