Fear power, not intelligence

Superintelligence should scare us only insofar as it grants superpowers. Protecting against specific harms of specific plausible powers may be our best strategy for preventing catastrophes.

The AI risks literature generally takes for granted that superintelligence will produce superpowers. It rarely examines how or why specific powers might develop. In fact, it’s common to deny that an explanation is either possible or necessary.

The argument is that we are more intelligent than chimpanzees, which is why we are more powerful, in ways chimpanzees cannot begin to imagine. Then, the reasoning goes, something more intelligent than us would be unimaginably more powerful again. In that case, we can’t know how a superintelligent AI would gain inconceivable power, but we can be confident that it would.

However, for hundreds of thousands of years humans were not more powerful than chimpanzees. Significantly empowering technologies only began to accumulate a few thousand years ago, apparently due to cultural evolution rather than increases in innate intelligence. The more dramatic increases in human power beginning with the industrial revolution were almost certainly not due to increases in innate intelligence. What role intelligence plays in science and technology development is mainly unknown; I’ll return to this point later.

The AI safety literature also reasons that power consists of the ability to take effective action. It assumes that effective action derives from plans, and that the ability to make plans is central to intelligence, so a superintelligent AI's actions would be more effective than ours, potentially without limit.

This greatly overestimates the role of planning in effective action. Power rarely derives from exceptional planning ability. The world is too complicated, too little known, and too rapidly changing for detailed plans to succeed. Effective action derives from skillful improvisation in specific situations. That too is limited by unavoidably incomplete knowledge, regardless of intelligence.1

Joseph Carlsmith, recognizing that power is where the danger lies, provides a list of specific actions an AI might take to gain it.2 The most plausible superpowers require no breakthroughs in material technology, and no construction of a robot army. A hostile AI might:

This may all sound implausible, like something from a bad science fiction TV series. I will argue in the next chapter that these are realistic worries.

People can do most of these things too, although not at a superpowered level. Unfriendly people are dangerous when they do. Plausible, concrete, catastrophic AI scenarios feature the creation or exploitation of pools of power—which could also be exploited by individual people; by institutions such as states or corporations; or by diffuse ideological networks.

I think the most promising, relatively neglected approaches to AI safety can address those pools, regardless of the role of AI in creating or exploiting them. I discuss these later in this book.

What we should fear is not intelligence as such, but sudden massive shifts of power to agents who may be hostile or callously indifferent. Technological acceleration can do that; but a new sort of AI is neither necessary nor sufficient to cause acceleration. Powerful new technologies are dangerous whether they are wielded by humans or AIs, and whether they were developed with or without AI.

Increasing computer power has already caused massive power shifts: for example to the United States versus the rest of the world, and to the tech industry versus the rest of the world economy. We’ll get bigger supercomputers and better algorithms for many years or decades yet. Those will result in further large power shifts. Whether the computer systems we build count as “Real AI” doesn’t affect their risks or benefits.

Imagining Real AI as human-like may blind us to the greatest unknowns and the greatest risks. Since we can’t identify what is specifically dangerous about Scary AI, we should be considering a wider range of scenarios than the common science-fictionish narratives. We should be concerned about any advanced computational systems that unlock new capabilities, or greatly magnify existing ones. Those might look very different from Scary AI.

This implies taking more seriously the risks of the AI already in use; of current methods under experimental development; and of concretely imaginable specific future technologies. It implies shifting some resources away from concerns about vague Scary future AI that is omnipotent by narrative fiat. Those concerns should not be dismissed, but they have been overemphasized by comparison.

  1. This was the topic of my research in AI as a graduate student: “Planning for conjunctive goals,” Artificial Intelligence 32:3, pp. 333–377, July 1987; and Vision, Instruction, and Action, MIT Press, 1991.
  2. This is not a direct quote from Carlsmith, but I based the list largely on his “Is Power-Seeking AI an Existential Risk?”, arXiv:2206.13353, 16 June 2022.