We have a powerful intuition that some special mental feature, such as self-awareness, is a prerequisite to intelligence. This causes confusion because we don’t have a coherent understanding of what the special feature is, nor what role it plays in intelligent action. It may be best to treat mental characteristics as in the eye of the beholder, and therefore mainly irrelevant to AI risks.
“Intelligence” is ill-defined, but it is often thought to require some quality of mindness. Although the research discipline is called “artificial intelligence,” from its beginning it has also contemplated building artificial minds. Skeptics often argue that AI is impossible because machines cannot have some essential mental capacity such as sentience, consciousness, agency, creativity, self-awareness, intuition, or intentionality.
Science fiction, and popular discussions of superintelligent AI, often portray the critical event as a computer “waking up” and attaining one of these special mental attributes. A mindless calculating machine, no matter how vast, is just a thing, and we know how to deal with things. They just sit there unless you make them go. Minds, on the other hand, may be deceptive, dominating, monstrous, or malevolent. “Scary” AI, then, is mind-like AI.
I share this powerful intuition at a gut level, but I think it is intractably confused, probably wrong, and importantly misleading. Leading AI safety researchers mostly agree that it is confused, yet it still often skews the field’s reasoning. Thinking about AI in mental terms tends to blind us to what may be the most likely disaster paths.
It is difficult to reason clearly about mind-like AI because it is difficult to think clearly about any of the supposed essential characteristics of mindness, such as consciousness. The associated philosophical “mind/body problem” is a tar pit of unresolvable metaphysical conundrums. It is best ignored, in my opinion. Avoiding such confusions is important, because Apocalyptic AI may not (and probably won’t) depend on any of that.
Even seemingly less mysterious mental terms like “belief” and “intention” seem impossible to pin down. Attempts end up in circular definitions, failing to escape the realm of non-physical abstractions. There appear to be no workable criteria for what counts as a “belief” in terms compatible with a scientific worldview.1 This doesn’t imply beliefs “don’t exist,” but attributing them as objective, clear-cut entities is inescapably dubious.
Some psychologists think what makes humans uniquely effective is our ability to coordinate large social groups. They believe this depends on recently-evolved special-purpose brain mechanisms for reasoning about other minds. They say we have two separate modes of cognition: one for inanimate things, and one for people and for other animals whose intentions matter, like predators, prey, and pets. Mental terms are meaningful only in the second mode.
On this view, we relate to something using either our thingness cognition or our mindness cognition. We mistake our mode of relating as an objective fact about what or who we are relating to.2 Then we imagine an AI “waking up” as forcing us to shift modes of relationship.
There’s good reason to think that, in relating to existing AI systems, mechanical reasoning is preferable to psychological reasoning. From the beginning, we AI researchers have confused ourselves by using mental terminology to describe AI systems. We know better, and should stop. Current discussions of AI “learning,” “reasoning,” “knowledge,” and “understanding” obstruct analysis. That is not because the systems lack some magic mental essence, but because what they do differs from what people do in specific, relevant, explainable ways.
I will often put these mental terms in scare quotes, to remind readers that, for instance, current AI “learning” methods are quite unlike human learning. This may come across as snarky and annoying, but it’s meant to be helpful: it aims to prevent common misunderstandings by drawing attention to confusions that can result from slippery use of poorly-understood mental terms to label computations that may be only metaphorically similar.3 You can coax an AI text generator to output text arguing for or against mRNA vaccines, but it does not have any beliefs about them either way.
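To make that last point concrete, here is a minimal sketch, assuming the OpenAI Python client (v1+) is installed and an API key is configured; the model name and the `argue` helper are illustrative, not prescriptive. The same model produces a fluent argument on either side, depending only on the prompt.

```python
# Sketch only: assumes the `openai` package (v1+) and OPENAI_API_KEY are available.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def argue(position: str) -> str:
    """Ask the model for a persuasive paragraph taking the given position."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative choice; any chat model behaves similarly
        messages=[{
            "role": "user",
            "content": f"Write a short persuasive paragraph {position} mRNA vaccines.",
        }],
    )
    return response.choices[0].message.content

# The same weights produce both outputs; neither reflects a "belief."
print(argue("in favor of"))
print(argue("against"))
```

Nothing about the system changes between the two calls; the opposing outputs come from the same computation, which is why attributing a belief to it explains nothing.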
From the opposite side, skeptical arguments about AI often turn on the weasel-words “really” and “just.” For example, some say computers can never really understand language, or produce text that is really meaningful, because they are just performing calculations. This makes a strident metaphysical claim without explaining specifically what it is or why anyone should believe it.4 Apparently there is real understanding, which is very important, and then there is just calculating, which isn’t real, and is entirely insignificant. This claim could be valid, or at least worth arguing with—if the skeptic went on to explain specifically what the distinction between real understanding and not-real understanding is. (Some do; most don’t.) Otherwise, insisting on “really” is logically equivalent to SHOUTING AT YOUR OPPONENTS.
Since this pattern is so common and pernicious, I’ll often put really and just in bold when I want to draw your attention to it.
- 1. See the chapter “What can you believe?” in my In the Cells of the Eggplant; or, for the full catastrophe, the “Belief” article in the Stanford Encyclopedia of Philosophy.
- 2. This is pretty much Daniel Dennett’s analysis in The Intentional Stance. I am avoiding that language because he explicitly conflated the intentional stance with attributing rationality. I think rationality is a red herring; Scary AI is extra scary if it is mind-like but irrational, as seems plausible. His explanation is also tangled with ancient arguments about the mind/body problem, and about ethics, which I want to avoid importing.
- 3. Drew McDermott’s 1976 “Artificial intelligence meets natural stupidity” is the classic discussion of this mistake. ACM SIGART Bulletin, Issue 57. Murray Shanahan’s “Talking About Large Language Models” (arXiv:2212.03551, Dec 2022) is an outstanding recent discussion of these difficulties with reference to current ChatGPT-like systems. Unlike most philosophical discussions of AI, Shanahan understands the technology thoroughly as well.
- 4. See “Against ‘Really’” in my Meaningness.