Artificial general intelligence (AGI)

Many people call the future threat “artificial general intelligence,” but all three words are misleading if you are trying to understand the risks.

“Superhuman performance” is not scary; it’s what computers are for. It’s been achieved for a zillion tasks since the 1950s. It’s also completely out of reach for zillions more tasks we’d like done.

The promise, and the threat, of “artificial general intelligence” is that it could do everything.1 That could deliver either fully automated luxury communism or human extinction.

Taking it as a threat, “AGI” is often equated with Scary AI. This is a mistake. All three words in “AGI” are inessential for an AI apocalypse.

Many AI safety researchers recognize that “AGI” is a misnomer. Because no one can explain what is distinctive about Scary AI, some explicitly preserve “AGI” as an arbitrary, conventional term. Joseph Carlsmith:

[S]ometimes, a given use of “AGI” just means something like “you know, the big AI thing; real AI; the special sauce; the thing everyone else is talking about.”2

Carlsmith rightly explains that it is power, not generality or intelligence, that makes AI risky:

I’ll say that an AI system has “advanced capabilities” if it outperforms the best humans on some set of tasks which when performed at advanced levels grant significant power in today’s world… [This] does not, I think, require meeting various stronger conditions sometimes discussed—for example, “human-level AI,” “superintelligence,” or “AGI.”

  1. Already in 1958, researchers created a General Problem Solver, which was useless. (A. Newell et al., “Report on a General Problem-Solving Program,” The RAND Corporation, 30 December 1958, revised 9 February 1959.) Although in principle it could solve any problem you gave it, in practice it was so slow you’d never get an answer. This was not because 1959 computers were slow; it’s an inherent limitation of the algorithm, which would be too slow for most purposes on 2023 computers as well. A more recent system, AIXI, is even more general, because you don’t have to give it problems. In theory, it learns from experience, discovering and solving problems as it goes. However, it is mathematically provably incapable of actually doing anything within the lifespan of our universe, because in effect it has to consider in full detail all possible worlds and their futures before acting. (Marcus Hutter, Universal Artificial Intelligence: Sequential Decisions Based on Algorithmic Probability, 2005.)
  2. Joseph Carlsmith, “Is power-seeking AI an existential risk?”, arXiv:2206.13353, 16 June 2022, p. 8. See also Ben Goertzel’s “Who coined the term ‘AGI’?” (goertzel.org, 28 August 2011) for some history. He did. He had wanted to call it “Real AI,” “but I knew that was too controversial.”