Why AI is freaking me out

Tam Hunt
Apr 22, 2023

I’ve written a number of pieces recently on AI, which has become a major global story since late 2022 due to the release of ChatGPT by OpenAI. It’s the first really impressive “chatbot” because of its mastery of human languages (and programming languages as well), and because of its rapid improvement. It reached one million users in just five days, faster than any previous consumer technology, illustrating that the pace of change is itself accelerating.

Source: Statista

This pace of change, and the remarkable abilities of ChatGPT itself, have dramatically increased concerns about AI safety. These are complex issues, and I’m already finding that people misunderstand my stated positions in the debates about AI safety.

With the stakes so incredibly high — a 2023 survey of AI experts found that 36% feared a nuclear-level catastrophe could result from developing AI — it’s important for arguments on all sides to be clear.

So here’s my current position on AI safety and what to do about it:

  1. The trajectory of AI is exponential, and there are few inherent limits on bad actors using new AI tools to wreak havoc, or on AI spawning autonomous agents that can hijack human economic and social structures.
  2. I’ve described how the near-term impacts of widespread AI use will lead to both “social weirding” and “political weirding,” as people find it more and more difficult to tell what is real and as cybercrimes and cyberwarfare become widespread.
  3. The exponential development of AI will, if left unregulated, almost certainly lead not only to social and political weirding, but also to potentially catastrophic events, such as takeover by AI-empowered dictators (as has already happened in China), or even global war fought with autonomous weapons, conventional weapons, and perhaps nuclear weapons.
  4. Avoiding these future scenarios requires that we are able to align AI with human interests and human survival; this is known as “the alignment problem” or “the control problem.”
  5. Because of these issues, we should pause development of any LLMs more powerful than GPT-4, and if companies don’t agree to a pause, the federal government should impose a moratorium. The Future of Life Institute open letter, signed by over 27,000 people (as of April 22, 2023), including many prominent AI researchers and developers, calls for a six-month pause/moratorium, which is a reasonable starting point. The point of the pause/moratorium is to allow for collective and vigorous consideration of the risks posed by AI and how to mitigate or eliminate them.
  6. Powerful LLMs that can run on laptops are already circulating, and GPT-4 is already capable of some degree of autonomous self-improvement using tools like Reflexion. Some argue that, because of these developments, a moratorium couldn’t work. However, the point of a moratorium on anything more powerful than GPT-4 is to temporarily halt the exponential improvement, and the exponential growth in computing power and energy resources required to train ever more powerful LLMs, until, at the least, we can get a handle on the social and political impacts of the current generation of LLMs.
  7. The downsides of a six-month pause/moratorium are vastly outweighed by the downsides of continuing an AI arms race that is developing technology which 36% of AI experts say could lead to nuclear-level catastrophe if left unregulated.
  8. While the pause/moratorium is “Plan A,” it is obviously not a permanent solution. It is not realistic to expect a permanent moratorium to last globally, even if aggressive powers like the US or Israel opted to treat rogue AI development like rogue nuclear weapons development by bombing suspected AI server farms. Accordingly, Plan B is to initiate robust and widespread efforts to find realistic solutions to the alignment problem.
  9. The best and most realistic scenario for human survival is to craft a syntropic future in which AI is harnessed to solve major social, political, environmental, and scientific problems, and in which we avoid the issues of social weirding, political weirding, techno-dictatorship, and runaway AI. This is a massive challenge, and quite likely one without an available solution.
  10. Mathematically, there is no solution to the alignment problem if we define Artificial General Intelligence (AGI) or Artificial Superintelligence (ASI) (different terms for essentially the same idea) as artificial intelligence that is far greater than human intelligence AND able to improve itself. Defined this way, there can be no solution to the alignment problem, in the same way that it is not possible for an ant to control a human or for a newborn baby to defeat a grandmaster at chess. The best we can hope for in terms of solutions to the alignment problem is to train/inculcate AGI with values that include valuing human life, such as David Shapiro’s “heuristic imperatives.” I’m working on a modification of Shapiro’s approach that I’m calling the “spiritual imperatives” solution to the alignment problem.

