A very simple summary of why AI is incredibly dangerous to humanity
The longstanding pattern in human history of far stronger forces subjugating and destroying weaker ones suggests that artificial superintelligence would be catastrophic for humanity’s survival
It’s difficult nowadays to distinguish good arguments from bad, and good data from misinformation. I want to present the basic argument, which I believe has no strong rebuttal, for why the current path we’re on with AI is all but certain to lead to catastrophe for humanity. Here it is:
- The current trajectory of AI improvement is exponential, with general AI expected across almost all human cognitive domains within a couple of years, if not sooner
- This general AI (AGI) will be able to improve itself, since improving AI is something humans can already do, but it will be able to do it a million or even a billion times faster (the toy model after this list illustrates how quickly this compounds)
- This means that AGI will be followed very soon after by artificial superintelligence, or ASI, which will have basically god-like powers as long as it has access to sufficient materials and energy
- The history of interactions between human groups is quite clear: when a far more powerful force meets a weaker force, that weaker force is destroyed, colonized, enslaved, or subjugated. There are few exceptions
- Why would we expect the unfolding of ASI to be any different?
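To make the compounding in the second and third points concrete, here is a toy calculation. Everything in it is assumed for illustration: the discrete “generations” of self-improvement and the 30% gain per generation are placeholders, not estimates of any real system.

```python
# Toy model of recursive self-improvement (an illustration, not a prediction).
# Assumption: each "generation" of the system designs a successor whose
# capability is multiplied by a fixed improvement factor. The numbers are
# made up; only the compounding shape matters.

def generations_to_speedup(gain_per_generation: float,
                           target_speedup: float) -> int:
    """Count generations until the cumulative speedup reaches the target."""
    speedup, generations = 1.0, 0
    while speedup < target_speedup:
        speedup *= gain_per_generation
        generations += 1
    return generations

# Even a modest 30% gain per generation compounds to a millionfold
# speedup in a few dozen steps:
print(generations_to_speedup(1.3, 1e6))  # 53 generations
print(generations_to_speedup(1.3, 1e9))  # 79 generations
```

The specific numbers don’t matter; the shape does. And if each generation also finishes its design work faster than the last, the wall-clock time between generations shrinks geometrically, which is the intuition behind “AGI is followed very soon after by ASI.”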
And that’s it. That’s the basic argument. I welcome rebuttals in the comments.