This book is a call to action. You can participate. This is for you.
Artificial intelligence might end the world. More likely, it will crush our ability to make sense of the world—and so will crush our ability to act in it.
AI will make critical decisions that we cannot understand. Governments will take radical actions that make no sense to their own leaders. Corporations, guided by artificial intelligence, will find their own strategies incomprehensible. University curricula will turn bizarre and irrelevant. Formerly respected information sources will publish mysteriously persuasive nonsense. We will feel our loss of understanding as pervasive helplessness and meaninglessness. We may take up pitchforks and revolt against the machines—and in so doing, we may destroy the systems we depend on for survival.
Worries about AI risks have long been dismissed because AI itself sounds like science fiction. That is no longer possible. Fluent new text generators, such as ChatGPT, have suddenly shown the public that powerful AI is here now. Some are excited about future possibilities; others fear them.
We don’t know how our AI systems work, we don’t know what they can do, and we don’t know what broader effects they will have. They do seem startlingly powerful, and the combination of their power with our ignorance is dangerous.
In the absence of technical understanding, those concerned with future AI risks have constructed “scenarios”: stories about what AI may do. We don’t know whether any of them will come true. However, for now, anticipating possibilities is the best way to steer AI away from an apocalypse—and perhaps toward a remarkably likeable future.
So far, we’ve accumulated a few dozen reasonably detailed, reasonably plausible bad scenarios. We’ve found zero that lead to good outcomes.
Most AI researchers think AI will have overall positive effects. This seems to be based on a vague general faith in the value of technological progress, however. It doesn’t involve worked-out ideas about possible futures in which AI systems are enormously more powerful than current ones. A majority of AI researchers surveyed also acknowledge that a civilization-ending catastrophe is quite possible.1
Unless we can find some specific beneficial path, and can gain some confidence in taking it, we should shut AI down.
I have been wildly enthusiastic about science, technology, and intellectual and material progress since I was a kid. I have a PhD in artificial intelligence, and I find the current breakthroughs fascinating. I’d love to believe there’s a way AI could improve our lives in the long run. If someone finds one, I will do an immediate 180, roll up my sleeves, and help build that better future.
Unless and until that happens, I oppose AI. I hope you will too. At minimum, I advise everyone involved to exercise enormously greater caution.
AI is extremely cool, and we can probably have a better future without it. Let’s do that.
This book is about you. It’s about what you can do to help avert apocalyptic outcomes. It’s about your part in a future we would like.
I offer specific recommendations for the general public; for technology professionals; for AI professionals specifically; for organizations already concerned with AI risks; for science and public interest funders, including government agencies, philanthropic organizations, NGOs, and individual philanthropists; and for governments in their regulatory and legislative roles.
Since this book is for everyone, it requires no technical background. It is also not a beginner’s introduction to artificial intelligence, nor an overview of the field, nor a survey of prior literature on AI safety. Instead, you will read about the AI risk scenarios I’m most concerned about, and what you can do about them.
Medium-sized apocalypses
This book considers scenarios that are less bad than human extinction, but which could be worse than run-of-the-mill disasters that kill only a few million people.
Previous discussions have mainly neglected such scenarios. Two fields have focused on comparatively small risks and on extreme ones, respectively. AI ethics concerns uses of current AI technology by states and powerful corporations to categorize individuals unfairly, particularly when that reproduces preexisting patterns of oppressive demographic discrimination. AI safety treats extreme scenarios involving hypothetical future technologies which could cause human extinction.2 It is easy to dismiss AI ethics concerns as insignificant, and AI safety concerns as improbable. I think both dismissals would be mistaken. We should take seriously both ends of the spectrum.
However, I intend to draw attention to a broad middle ground of dangers: more consequential than those considered by AI ethics, and more likely than those considered by AI safety. Current AI is already creating serious, often overlooked harms, and is potentially apocalyptic even without further technological development. Neither AI ethics nor AI safety has done much to propose plausibly effective interventions.
We should consider many such scenarios, devise countermeasures, and implement them.
A hero’s journey
This book has five chapters. They are mostly independent; you can read any on its own. Together, however, they trace a hero’s journey: through trials and tribulations to a brilliant future.
We are not used to reasoning about artificial intelligence. Even experts can’t make much sense of what current AI systems do, and it’s still more difficult to guess at the behavior of unknown future sorts. We are used to reasoning about powerful people, who may be helpful or hostile. It is natural to think about AI using that analogy. Most scenarios in science fiction, and in the AI safety field, assume the danger is autonomous mind-like AI.
However, the first chapter, “What is the Scary kind of AI?”, explains why that is probably misleading. Scenarios in which AIs act like tyrants are emotionally compelling, and may be possible, but they draw attention away from other risks. AI is dangerous when it creates new, large, unchecked pools of power. Those present the same risks whether the power is exploited by people or by AI systems themselves. (Here the hero—that’s you—realizes that the world is scarier than it seemed.)
The second chapter, “Apocalypse now,” explores a largely neglected category of catastrophic risks posed by current and near-future AI systems. These scenarios feature AI systems that are not at all mind-like. However, they act on our own minds: co-opting people to act on their behalf, altering our cultural and social systems for their benefit, amassing enormous power, and undermining governments and other critical institutions. They could cause societal collapse unintentionally. That may now sound as unlikely as the scenarios in which a tyrannical, self-aware AI deliberately takes over the world and enslaves or kills all humans. I hope reading the chapter will make this alternative terrifyingly plausible. (The hero gets thrown into increasingly perilous, unexpected, complicated scenarios. Is survival possible?)
Chapter three, “Practical actions you can take against AI risks,” describes seven approaches. These may be effective against both the mind-like AIs of the first chapter and the mindless ones of the second. For each approach, it suggests helpful actions that different sorts of people and institutions can take. The approaches are complementary, and none is guaranteed to work, so all are worth pursuing simultaneously. (The hero takes up magical arms against the enemy, and victory seems possible after all.)
The utopian case for AI is dramatic acceleration of scientific understanding, and therefore of technological and material progress. Those are worthy goals, which I share fully. However, no one has explained how or why AI would accomplish them. Chapter four, “Technological transformation without Scary AI,” suggests that it wouldn’t—but that such acceleration is within our reach. The pace of progress is currently limited by dysfunctional social structures and incentives for research and development. We can take immediate, pragmatic actions to remove obstacles and speed progress, without involving risky AI. (The hero achieves an epiphany of the better world to come, and discovers that the key is of quite a different nature than expected.)
The most important questions are not about technology but about us. The final chapter asks: What sorts of future would we like? What role would AI play in getting us there, and in that world? What is your own role in helping it come about? (The story finishes with the hero beginning a new sort of journey, leaving the conclusion open-ended.)
And some side quests
I removed two chapters from earlier versions of this book because they address particular topics in more detail than many readers would want. I’ve made them available instead on the web site betterwithout.ai. The book refers to these as two “companion documents.” They don’t require any specific technical background, but assume the reader’s willingness to follow a trail through somewhat dense conceptual thickets.
“Neural networks” are the technology underlying most current AI systems. That’s the topic of the first companion document, Gradient Dissent: Artificial neurons considered harmful. Neural networks are an exceptionally unreliable and dangerous technology. They produce systems that are deceptive and inherently error-prone. They should be used only under controlled conditions that reduce danger to acceptable levels. Currently, they are widely deployed in uncontrolled environments in which they cause large harms.
Gradient Dissent describes neglected scientific and engineering approaches that may make neural networks less risky. However, technical fixes cannot make them safe enough for most purposes. In the longer run, this technology should be deprecated, regulated, avoided, and replaced with better alternatives.
“Language models” are text generators, such as ChatGPT. This is a poorly understood new technology, and it poses many near-future risks. For example, the recent discovery that they can perform multi-step reasoning worries me. Current research might lead to superhuman reasoning ability, which could be catastrophic.
Are language models Scary? evaluates these risks. I recommend scientific and engineering investigation to understand what language models do, how they do it, and what that may imply. Such understanding may enable technical and social mitigations: for example, replacing the underlying technology with better-understood alternatives, regulating use, and educating the public about the risks and limitations.
The web site betterwithout.ai contains several other essays on current and future artificial intelligence. I expect to add to it from time to time.
- 1. Katja Grace, “What do ML researchers think about AI in 2022?”
- 2. Kelsey Piper discusses the gap between these two fields, and possible synergies, in “There are two factions working to prevent AI dangers. Here’s why they’re deeply divided.”