Many people call the future threat “artificial general intelligence,” but all three words there are misleading when trying to understand risks.
“Superhuman performance” is not scary; it’s what computers are for. It’s been achieved for a zillion tasks since the 1950s. It’s also completely out of reach for zillions more tasks we’d like done.
The promise, and the threat, of “artificial general intelligence” is that it could do everything. That could deliver either fully automated luxury communism or human extinction.
Taking it as a threat, “AGI” is often equated with Scary AI. This is a mistake. All three words in “AGI” are inessential for an AI apocalypse:
- Superhuman intelligence is not threatening per se; it is risky only if it produces superhuman power.
- Superhuman powers created with AI are risky whether they are wielded by artificial agents or natural humans.
- A superhuman “narrow” AI capable only of devising extraordinarily effective bioweapons could cause human extinction. On the other hand, full generality in AI is a fatal flaw. Computer science often discovers inherent trade-offs between generality and efficiency. For deep mathematical reasons, any AI system that can solve all problems in theory must be incapable of solving any problem in practice, because it would take much too long.1 (The sketch after this list gives a rough feel for the blowup.)
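To get a feel for the generality/efficiency trade-off, here is a minimal illustrative sketch. It is not GPS or AIXI themselves; the function names and the compute-budget figure are assumptions made up for illustration. It models a “fully general” solver as brute-force enumeration of every candidate program of a given length, which is enough to show the exponential blowup.

```python
# Illustrative sketch only: a "fully general" solver that enumerates every
# bit-string of length n as a candidate program (each would then be tested
# against the problem specification). The candidate space doubles with every
# added bit, which is the kind of blowup that makes fully general search
# useless in practice.

from itertools import product


def candidate_programs(length_bits):
    """Yield every bit-string of the given length as a candidate 'program'."""
    for bits in product("01", repeat=length_bits):
        yield "".join(bits)


def candidate_count(length_bits):
    """How many candidates the fully general solver must consider: 2**n."""
    return 2 ** length_bits


if __name__ == "__main__":
    SECONDS_PER_UNIVERSE = 4.3e17  # rough age of the universe, in seconds
    EVALS_PER_SECOND = 1e17        # assumed, optimistic planet-scale budget

    for n in (10, 50, 100, 300):
        total = candidate_count(n)
        seconds = total / EVALS_PER_SECOND
        print(f"{n:>4}-bit programs: {total:.2e} candidates, "
              f"~{seconds / SECONDS_PER_UNIVERSE:.2e} universe-lifetimes to check")
```

Even 300-bit candidate programs would take something like 10^55 lifetimes of the universe to enumerate under this (generous) assumed budget. This toy is not how AIXI actually works, but the same exponential character is what makes fully general methods impractical.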
Many AI safety researchers recognize that “AGI” is a misnomer. Because no one can explain what is distinctive about Scary AI, some explicitly preserve “AGI” as an arbitrary, conventional term. Joseph Carlsmith:
[S]ometimes, a given use of “AGI” just means something like “you know, the big AI thing; real AI; the special sauce; the thing everyone else is talking about.”2
Carlsmith rightly explains that it is power, not generality or intelligence, that makes AI risky:
I’ll say that an AI system has “advanced capabilities” if it outperforms the best humans on some set of tasks which when performed at advanced levels grant significant power in today’s world… [This] does not, I think, require meeting various stronger conditions sometimes discussed—for example, “human-level AI,” “superintelligence,” or “AGI.”
1. Already in 1958, researchers created a General Problem Solver, which was useless. (A. Newell et al., “Report on a General Problem-Solving Program,” The RAND Corporation, 30 December 1958, revised 9 February 1959.) Although in principle it could solve any problem you gave it, in practice it was so slow you’d never get an answer. This was not because 1959 computers were slow; it’s an inherent limitation of the algorithm, which would be too slow for most purposes on 2023 computers as well. A more recent system, AIXI, is even more general, because you don’t have to give it problems. In theory, it learns from experience, discovering and solving problems as it goes. However, it is mathematically provably incapable of actually doing anything within the lifespan of our universe, because in effect it has to consider in full detail all possible worlds and their futures before acting. (Marcus Hutter, Universal Artificial Intelligence: Sequential Decisions Based on Algorithmic Probability, 2005.)
2. Joseph Carlsmith, “Is power-seeking AI an existential risk?”, arXiv:2206.13353, 16 Jun 2022, p. 8. See also Ben Goertzel’s “Who coined the term ‘AGI’?” (goertzel.org, August 28th, 2011) for some history. He did. He had wanted to call it “Real AI,” “but I knew that was too controversial.”