Art created with Clipdrop.com by Tam Hunt

The “hard problem” of AI — how to control it — has no solution

Tam Hunt
2 min read · Aug 4, 2023

Prof. Roman Yampolskiy and I argue in this Nautilus essay that the “control problem” of AI is fundamentally unsolvable.

[This essay first appeared at Nautilus here]

If all goes well, human history is just beginning. … if we could learn to reach out further into the cosmos, we could have … trillions of years, to explore billions of worlds. Such a lifespan places present-day humanity in its earliest infancy. A vast and extraordinary adulthood awaits.

–Toby Ord, The Precipice: Existential Risk and the Future of Humanity

Every day there seems to be a new headline: Yet another scientific luminary warns about how recklessly fast companies are creating ever more advanced forms of AI, and about the great dangers this technology poses to humanity.

We share many of the concerns that AI researchers Geoffrey Hinton, Yoshua Bengio, and Eliezer Yudkowsky; philosopher Nick Bostrom; cognitive scientist Douglas Hofstadter; and others have expressed about the risks of failing to regulate or control AI as it becomes exponentially more intelligent than human beings.

This is known as “the control problem,” and it is AI’s “hard problem.”

Once AI is able to improve itself, it will quickly become much smarter than us in almost every aspect of intelligence, then a thousand times smarter, then a million, then a billion … What does it mean to be a billion times more intelligent than a human? Well, we can’t know, in the same way that an ant has no idea what it’s like to have a mind like Einstein’s. In such a scenario, the best we can hope for is benign neglect of our presence. We would quickly become like ants at its feet. Imagining that humans can control superintelligent AI is a little like imagining that an ant can control the outcome of an NFL football game being played around it.

Why do we think this? In mathematics, science, and philosophy, it is helpful to first categorize problems as solvable, partially solvable, unsolvable, or undecidable. It’s our belief that if we develop AIs capable of improving their own abilities (known as recursive self-improvement), there can, in fact, be no solution to the control problem. One of us (Roman Yampolskiy) made the case for this in a recent paper published in the Journal of Cyber Security and Mobility: it is possible to demonstrate that a solution to the control problem does not exist.
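
To make the notion of an unsolvable or undecidable problem concrete, the canonical example is Turing’s halting problem, which is closely related to the kind of formal results such impossibility arguments draw on. Below is a minimal Python sketch of the standard diagonalization argument; the function names (halts, paradox) are our own illustrative choices, not code from the paper.

```python
# Sketch of Turing's halting-problem diagonalization, illustrating
# what "undecidable" means. This is our own illustration, not code
# from the essay or from Yampolskiy's paper.

def halts(program_source: str, input_data: str) -> bool:
    """Hypothetical decider: returns True iff running program_source
    on input_data eventually halts. No such total, always-correct
    function can exist, as the paradox below shows."""
    raise NotImplementedError("provably impossible to implement in general")

def paradox(program_source: str) -> None:
    """Diagonal program: do the opposite of whatever halts() predicts
    about a program run on its own source code."""
    if halts(program_source, program_source):
        while True:   # halts() said we halt, so loop forever instead
            pass
    # halts() said we loop forever, so halt immediately instead

# Feeding paradox its own source yields a contradiction either way:
# if halts(paradox, paradox) returned True, paradox would loop forever;
# if it returned False, paradox would halt immediately.
# Hence no correct, always-terminating halts() can exist.
```

Arguments with this self-referential shape are what put a fully general procedure for predicting or bounding the behavior of arbitrary programs provably out of reach.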

Read the rest of this piece here.
