Tam Hunt
1 min read · Jun 4, 2023


Aloha Leif, there are various scenarios in which AGI/ASI could generate its own goals. Here's a recent talk (linked below) by Geoffrey Hinton fleshing out some of them, including nation-states tasking an AGI with certain broad goals and enabling it to create its own subgoals in pursuit of those broader goals.

However, we can already look to less dramatic examples here today, such as AutoGPT and AgentGPT, both of which self-prompt in order to achieve the human user's initial goal (a rough sketch of that loop follows below).
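To make the self-prompting idea concrete, here is a minimal sketch of the kind of loop AutoGPT-style agents run: the human supplies only the top-level goal, and the agent generates its own subgoals from there. The `llm()` function is a hypothetical stand-in for any chat-model API call, not a real library function, and the prompts are illustrative assumptions, not AutoGPT's actual prompts.

```python
# Minimal sketch of a self-prompting agent loop, in the spirit of
# AutoGPT/AgentGPT. llm() is a placeholder assumption: wire it up to
# whatever language-model API you actually use.

def llm(prompt: str) -> str:
    """Placeholder for a call to a language model."""
    raise NotImplementedError("connect a real model API here")

def self_prompting_agent(user_goal: str, max_steps: int = 5) -> list[str]:
    # The human supplies only the initial goal; every subsequent
    # subgoal is generated by the model itself (self-prompting).
    history = [f"Overall goal: {user_goal}"]
    for step in range(max_steps):
        context = "\n".join(history)
        subgoal = llm(
            f"{context}\nPropose the single next subgoal that best "
            "advances the overall goal. Reply with the subgoal only."
        )
        result = llm(f"{context}\nCarry out this subgoal: {subgoal}")
        history.append(f"Step {step + 1}: {subgoal} -> {result}")
    return history
```

The point of the sketch is that goal decomposition happens inside the loop, not in the human's head: once the initial goal is handed over, the agent decides what to pursue next.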

Bostrom discusses "perverse instantiation" as a major issue: an AGI/ASI achieves the goals humans give it, but in ways we truly didn't intend. The risk is well mythologized by the story of King Midas, whose golden touch turned even his own daughter into gold.

More generally, I'm well aware of the distinction between intelligence and sentience; they can be quite different axes with respect to both biology and AI. Koch's book, The Feeling of Life Itself, is quite good on this distinction. I interviewed him about the book a few years ago and published it on Medium.

Here's the Hinton talk mentioned above: https://mpost.io/jeff-hinton-explores-two-paths-to-intelligence-and-the-dangers-of-ai-in-recent-talk/
