Tam Hunt
Jun 4, 2023


Great essay, and I'm inclined to agree. I also want to highlight that no one actually knows how these LLMs are doing what they're doing. Stuart Russell makes this point in a recent CNN interview: https://edition.cnn.com/2023/05/31/opinions/artificial-intelligence-stuart-russell/index.html

The standard answer is that they're "predicting the next word," but that's clearly not the whole story. It's a toy-model understanding of a toy model of intelligence. There are emergent properties at various levels in these highly complex ANNs, and no one knows how to predict which further emergent properties will appear as LLMs grow more complex.
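To make the "toy model" concrete: at the mechanical level, generation really is just a loop that picks one token at a time. Here's a minimal sketch using the Hugging Face transformers library (the choice of "gpt2" and greedy decoding are illustrative assumptions, not how any particular production LLM is configured). Notice how little this loop says about the world models that emerge from training it.

```python
# A minimal sketch of the "predict the next word" account of an LLM:
# greedy autoregressive decoding. "gpt2" is an illustrative model choice.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

input_ids = tokenizer("The capital of France is", return_tensors="pt").input_ids
with torch.no_grad():
    for _ in range(10):
        logits = model(input_ids).logits      # a score for every token in the vocabulary
        next_id = logits[0, -1].argmax()      # take the single most likely next token
        input_ids = torch.cat([input_ids, next_id.view(1, 1)], dim=1)

print(tokenizer.decode(input_ids[0]))
```

Everything interesting, the apparent world knowledge, the reasoning, happens inside the weights that produce those scores, and that's exactly the part no one can fully explain.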

The world model your examples clearly illustrate is itself a good example of an emergent property. It wasn't programmed. It just ... emerged.

This is why these LLMs are so potentially dangerous: as they trend toward AGI, how can we possibly hope to understand them, let alone align them?
