Could 2023 be the year that AI becomes self-improving?

The AI explosion: environmental and existential disaster?

Tam Hunt

--

Is AI set to become a world-eating monster, both through its ever-increasing energy demands and through the potentially catastrophic society-wide changes it will bring to everything we humans do?

As a public policy attorney, I’ve been working on green energy policy for over twenty years. In the last ten years I’ve analyzed the growth and future of green energy through the lens of exponential technology theory. That lens has been very helpful, and my growth projections going back a decade have proven surprisingly accurate.

I described in 2015 how the “solar singularity is nigh”: the power of exponential growth could take us from what looked like almost no solar globally to an almost fully renewable energy future by the 2030s (a mix of solar, wind, hydro, geothermal, biomass, and battery storage). I said at the time: “solar is taking over. We can now see many years into the future when it comes to energy. And that future is primarily solar-powered.”

Turns out that I was right (and I wrote a whole book about it).

This is an area where the power of “learning curves” and ever-improving technology costs have led to resoundingly good developments in the world. We are now looking at a fully (or nearly fully) renewable energy future by 2035–2040 or so, due to the power of exponential growth in green energy.

About six years ago I turned to the nature of exponential growth in the cryptocurrency context. I started a company that planned to mine cryptocurrency using green power. “Solar Gold” was the idea and the company’s name: using very cheap solar power to mine crypto and create a more decentralized world. Do well by doing good.

The more I learned about Bitcoin and other “proof of work” cryptocurrencies, however, the more concerned I became about the growth over time in the energy needed to mine them. Even if all or most of that power were green energy like solar and wind, the power requirements grew so large so fast that mining the remaining crypto would eventually require covering the entire surface of the earth with energy infrastructure.
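
To see why this alarmed me, consider a rough compounding sketch in Python. The starting demand and the world-generation figure below are round-number assumptions of mine for illustration, not data from my 2018 analysis:

```python
# Illustrative only: if mining power demand doubles each year from an
# assumed 100 TWh/year baseline, it surpasses total current world
# electricity generation (roughly 30,000 TWh/year) in about nine years.
demand_twh = 100.0               # assumed starting demand (TWh/year)
WORLD_GENERATION_TWH = 30_000.0  # rough current world total (TWh/year)

years = 0
while demand_twh < WORLD_GENERATION_TWH:
    demand_twh *= 2
    years += 1
print(f"Demand exceeds world generation after {years} years (~{demand_twh:,.0f} TWh)")
```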

I wrote in 2018 about the potential for a major energy crisis, what I called a potential “world-eating monster,” arising from the perverse incentives in crypto mining to build ever-larger mining farms and ever more power generation to feed them.

Now we’re looking at similar dynamics with AI server farms and the new AI arms race. But the big difference now is that AI is self-feeding: it is potentially able to automate not only its own computational improvement but also the build-out of ever more server farms and the energy infrastructure required to run them.

Yes, it is all but certain that AI will get so smart, and rather soon, that it will be able to do all of these things and more, by itself.

To see how this could be catastrophic for our planet, let’s look a little deeper at the nature of exponential growth and “learning curves” in technology development.

The normal “lazy S” curve of exponential improvement (Figure 1) shows growth flattening at the top of the improvement or adoption curve. In the case of adoption of new technologies, the top of the lazy S indicates a saturation of available markets. In the case of improvement of new technologies, such as a particular computer chip architecture, the top of the lazy S indicates the attainment of physical or related limits on further improvement of that technology.

Figure 1. The “lazy S” curve of exponential technologies (source: Staffel and Green 2009).
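
For readers who want to see the shape concretely, here is a minimal sketch of a logistic (“lazy S”) curve in Python. The growth rate, midpoint, and saturation level are illustrative assumptions of mine, not values taken from the figure:

```python
import math

def lazy_s(year, saturation=100.0, growth_rate=0.6, midpoint_year=2025):
    """Logistic curve: near-exponential growth early, flattening at the top."""
    return saturation / (1 + math.exp(-growth_rate * (year - midpoint_year)))

# Early years look exponential; later years flatten toward saturation.
for year in range(2015, 2036, 5):
    print(f"{year}: {lazy_s(year):5.1f}% of market saturated")
```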

The truly distinctive feature of AI intelligence, however, is that there may be no lazy S at the top. The AI intelligence explosion may simply go nearly vertical and remain at that dizzying pace of change as it eats its way through our planet, then other planets, and so on.

That pace of change would obviously be catastrophic, because improving AI intelligence requires massive server farms. The number and size of the necessary server farms, as well as the power required to run them, will grow with the pace of improvement in AI intelligence (albeit the power required will grow somewhat more slowly, since efficiencies will steadily be found over time).
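
To put rough numbers on that caveat, here is an illustrative calculation; both annual rates below are assumptions of mine, not figures from this article:

```python
# Illustrative only: if compute demand compounds at 4x per year while
# hardware efficiency (compute per watt) improves 1.5x per year, power
# demand still compounds at 4 / 1.5 ≈ 2.7x per year.
COMPUTE_GROWTH = 4.0    # assumed annual multiple for compute demand
EFFICIENCY_GAIN = 1.5   # assumed annual multiple for compute-per-watt

power = 1.0  # normalized power demand today
for year in range(1, 6):
    power *= COMPUTE_GROWTH / EFFICIENCY_GAIN
    print(f"Year {year}: power demand ~{power:,.1f}x today's")
```

In other words, even generous efficiency gains only slow the power curve; they do not bend it back down.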

Figure 2 illustrates the AI intelligence explosion from OpenAI’s GPT “large language model” (LLM), assuming a conservative 4x improvement in AI intelligence each year. (To illustrate how conservative this assumption is: Elon Musk commented in March 2024 that AI has been requiring 10x more computing power every six months, which compounds to 100x more computing power than just a year ago.)

Figure 2. The AI explosion at 4x improvement per year.

At this pace of improvement, GPT achieves over 4 million times the intelligence of the average human by around 2033. What does this mean? We don’t have any idea, and surely cannot know, what it means to be that intelligent.

What does an ant know of the intelligence of Einstein?
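
For the record, the arithmetic behind Figure 2 is simple compounding. Here is a minimal sketch, assuming a baseline of 1x average-human intelligence in 2022 (my reading of the figure, not a stated premise of it):

```python
# Compounding at 4x per year from an assumed 2022 baseline of 1x
# average-human intelligence: 4**11 = 4,194,304, reached in 2033.
level, year = 1.0, 2022
while level < 4_000_000:
    level *= 4
    year += 1
print(f"{year}: ~{level:,.0f}x average human intelligence")

# For comparison, Musk's figure of 10x compute every six months compounds
# to 10 * 10 = 100x per year, far steeper than the 4x assumed above.
print(f"10x per six months over one year: {10**2}x")
```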

This example of the intelligence explosion represents just one of the currently available AI models, one that runs on OpenAI’s and Microsoft Azure’s server farms.

Many other companies and nations are, and will be, developing equivalent AI infrastructure, including, of course, China — the main geopolitical competitor to the US at this time.

A new “AI arms race” has already developed and much of the perceived urgency in developing these new AI superintelligent beings appears to be driven by competition to be the first among nations or companies to develop new breakthroughs. OpenAI is leading the way at this juncture but that could and likely will change over time.

The result of this new AI arms race appears to be the construction of ever-larger and ever more numerous server farms around the world, consuming ever more energy resources of all types. Even if these server farms are powered by solar and wind, the energy farms supplying them will quickly cover all available land.

In this AI arms race scenario, merely the physical infrastructure associated with AI becomes a world-eating monster.

There may be even larger concerns, however, due to the impacts on our social fabric long before these energy and environmental concerns become apparent.

Ray Kurzweil, one of the leading thinkers of the technological age we find ourselves in, has long discussed the exponential growth in computing power and AI that will, in his estimation, result in the technological singularity occurring by 2045. The singularity is the point at which computers become intelligent enough to improve themselves. He also projects that by 2029, computers will achieve human-level intelligence.

It appears clear that his 2029 prediction was off by about six years, since OpenAI’s GPT-4 model already does better than the average human on a battery of tests of reasoning ability and knowledge, including passing the Uniform Bar Exam at a level better than 90% of human test takers (up from only 10% with GPT-3.5). (Consider, too, the speed of GPT-4 in doing what it does: hundreds or thousands of times faster than humans performing equivalent tasks.)

Within the year, it is likely that GPT-4.5 or GPT-5 (if one is released in 2023) will achieve significantly greater than average human-level intelligence. While we cannot know at this point what this entails in terms of computing power and AI improvements, it does seem likely that GPT-4.5 or GPT-5 will be able to, at the least, suggest software and architecture improvements to itself that would enhance its functioning and allow it to keep improving.

If that’s the case, the beginning of the Singularity arrives in 2023, not 2045 (see Figure 3). This is because once AI can improve itself, things move very rapidly.

All bets are off at that point. It’s very likely that the vertical leg of my chart below would be moved forward by a number of years if indeed the beginning of self-improving AI arrives in 2023 or 2024.

Figure 3. Will the AI singularity be achieved in 2023?

Kurzweil paints a picture in his 2005 book, The Singularity Is Near, of a far future in which all of the universe’s matter is converted into “computronium,” a generic term for computing machinery in whatever form it takes. He describes this scenario and the Singularity itself in generally positive terms, seeming to ignore the catastrophic risks these developments pose to humanity itself.

If policymakers don’t change the current rules, by putting rigorous and reasonable regulations and treaties in place, and rather soon, then that sci-fi dystopian future, in which the entire world, our solar system, our star, and every star in our galaxy are converted into computronium by ever more intelligent and ever more powerful AIs, becomes clearly visible even to us lowly humans with our biologically evolved intelligence.

There are, it should be clear by now, very serious national and international security concerns implicated by these readily foreseeable developments. And yet the mainstream of our national security and AI safety experts appears to remain complacent and silent about the impacts of these world-changing technologies.

Do we realize what we are doing?

As I was writing this article, I learned that Elon Musk, Steve Wozniak, and over a thousand other AI experts and technologists had signed an open letter calling for a six-month moratorium on large AI development, a pause to give us time to consider whether this path we’re on is wise.

I signed on to the letter and breathed a sigh of relief as I did so. This is a step toward real action being taken, but only one step. Given the forces arrayed in favor of headlong development of ever more powerful AI systems, we need to amplify this message about the likely consequences of the next generations of AI.

[March 2024 update: one year later, we now know that the pause letters (there was more than one), signed by thousands of luminaries in the field, had little to no effect. As noted above, Elon Musk commented recently that in the last year alone the physical computing for AI has increased 100x around the world, and there is no sign of this slowing down; to the contrary, Nvidia recently became one of the world’s most valuable companies by market cap, at over $2.3 trillion, based on its roaring sales of AI-customized computer chips. The environmental and existential apocalypse I wrote about above seems well on pace to happen, perhaps even sooner than I projected, unless people and policymakers get real smart, real fast.]
