This book is a call to action. You can participate. This is for you.
Artificial intelligence might end the world. More likely, it will crush our ability to make sense of the world—and so will crush our ability to act in it.
AI will make critical decisions that we cannot understand. Governments will take radical actions that make no sense to their own leaders. Corporations, guided by artificial intelligence, will find their own strategies incomprehensible. University curricula will turn bizarre and irrelevant. Formerly respected information sources will publish mysteriously persuasive nonsense. We will feel our loss of understanding as pervasive helplessness and meaninglessness. We may take up pitchforks and revolt against the machines—and in so doing, we may destroy the systems we depend on for survival.
Worries about AI risks have long been dismissed because AI itself sounds like science fiction. That dismissal is no longer possible. Fluent new text generators, such as ChatGPT, have suddenly shown the public that powerful AI is here now. Some are excited about future possibilities; others fear them.
We don’t know how our AI systems work, we don’t know what they can do, and we don’t know what broader effects they will have. They do seem startlingly powerful, and the combination of their power with our ignorance is dangerous.
In our absence of technical understanding, those concerned with AI risks have constructed “scenarios”: stories about what AI may do. Some involve killer robots, engineered plagues, newly invented weapons of mass destruction, or other science-fictional devices.
This book is not about those. It’s about disasters that could result from current and near-future technologies that change the way we humans think and act, just by communicating with us. That sounds more realistic.
We don’t know whether any of these scenarios will come true. However, for now, anticipating possibilities is the best way to steer AI away from catastrophe—and perhaps toward a remarkably likeable future.
So far, we’ve accumulated a few dozen reasonably detailed, reasonably plausible bad scenarios. We’ve found zero specific paths to good outcomes.
Most AI researchers think AI will have overall positive effects. However, this seems to be based only on a vague faith in the value of technological progress in general. It doesn’t involve worked-out ideas about desirable futures in which AI systems are enormously more powerful than current ones.
Many AI researchers also acknowledge that a catastrophe, even a civilization-ending one, is quite possible. So do the heads of leading AI laboratories. Most prominent leaders in the field signed the following Statement on AI Risk in May 2023:
Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.1
Unless we can find a specific beneficial way forward, and can gain confidence in following it with minimal chance of catastrophe, we should shut AI down.
I have been wildly enthusiastic about science, technology, and intellectual and material progress since I was a kid. I have a PhD in artificial intelligence, and I find the current breakthroughs fascinating. I’d love to believe there’s a way AI could improve our lives in the long run. If someone finds one, I will do an immediate 180, roll up my sleeves, and help build that better future.
Unless and until that happens, I oppose AI. I hope you will too. At minimum, I advise everyone involved to exercise enormously greater caution.
AI is extremely cool, but we can probably have a better future without it. Let’s do that.
This book is about you. It’s about what you can do to help avert apocalyptic outcomes. It’s about your part in a future we would like.
I offer specific recommendations for the general public; for technology professionals; for AI professionals specifically; for organizations already concerned with AI risks; for science and public interest funders, including government agencies, philanthropic organizations, NGOs, and individual philanthropists; and for governments in their regulatory and legislative roles.
Since this book is for everyone, it requires no technical background. It is also not a beginner’s introduction to artificial intelligence, nor an overview of the field, nor a survey of prior literature on AI safety. Instead, you will read about the AI risk scenarios I’m most concerned about, and what you can do about them.
Medium-sized apocalypses
This book considers scenarios less bad than the end of the world, but which could turn out worse than run-of-the-mill disasters that kill only a few million people.
Previous discussions have mainly neglected such scenarios. Two fields have focused, respectively, on comparatively smaller risks and on extreme ones:
- AI ethics concerns uses of current AI technology by states and powerful corporations to categorize individuals unfairly, particularly when that reproduces preexisting patterns of oppressive demographic discrimination.
- AI safety treats extreme scenarios involving hypothetical future technologies which could cause human extinction.2
It is easy to dismiss AI ethics concerns as insignificant, and AI safety concerns as improbable. I think both dismissals would be mistaken. We should take seriously both ends of the spectrum.3
However, I intend to draw attention to a broad middle ground of dangers: more consequential than those considered by AI ethics, and more likely than those considered by AI safety.
Current AI is already creating serious, often overlooked harms, and is potentially apocalyptic even without further technological development. Neither AI ethics nor AI safety has done much to propose plausibly effective interventions.
We should consider many such scenarios, devise countermeasures, and implement them.
A hero’s journey
This book has five chapters. They are mostly independent; you can read any on its own. Together, however, they trace a hero’s journey: through trials and tribulations to a brilliant future.
We are not used to reasoning about artificial intelligence. Even experts can’t make much sense of what current AI systems do. It’s still more difficult to guess at the behavior of unknown future technologies. We are used to reasoning about powerful people, who may be helpful or hostile. It is natural to think about AI using that analogy. Most scenarios in science fiction, and in the AI safety field, assume the danger is autonomous mind-like AI.
However, the first chapter, “What is the Scary kind of AI?” explains why that is probably misleading. Scenarios in which AIs act like tyrants are emotionally compelling, and may be possible, but they draw attention away from other risks. AI is dangerous when it creates new, large, unchecked pools of power. Those present the same risks whether the power is exploited by people or by AI systems themselves. (Here the hero—that’s you!—realizes that the world is scarier than it seemed.)
The second chapter, “Apocalypse now,” explores a largely neglected category of catastrophic risks of current and near-future AI systems. These scenarios feature AI systems that are not at all mind-like. However, they act on our own minds: co-opting people to act on their behalf, altering our cultural and social systems for their benefit, amassing enormous power, undermining governments and other critical institutions, and potentially causing societal collapse unintentionally. That may now sound as unlikely as the scenarios in which a self-aware AI deliberately takes over the world and enslaves or kills all humans. I hope reading the chapter will make this alternative terrifyingly plausible. (The hero gets thrown into increasingly perilous, unexpected, complicated scenarios. Is survival possible?)
Chapter three, “Practical actions you can take against AI risks,” describes seven approaches. These may be effective against both the mind-like AIs of the first chapter, and the mindless ones of the second. For each approach, it suggests helpful actions that different sorts of people and institutions can take. They are complementary, and none is guaranteed to work, so all are worth pursuing simultaneously. (The hero takes up magical arms against the enemy, and victory seems possible after all.)
The utopian case for AI is dramatic acceleration of scientific understanding, and therefore technological and material progress. Those are worthy goals, which I share fully. However, no one has explained how or why AI would accomplish them. Chapter four, “Technological transformation without Scary AI,” suggests that it probably won’t.
Nevertheless, such acceleration is within our reach. Currently, dysfunctional social structures for research and development limit our pace. We can take immediate, pragmatic actions to remove obstacles and speed progress, without involving risky AI. (The hero achieves an epiphany of the better world to come, and discovers that the key is of quite a different nature than expected.)
The most important questions are not about technology, but about us. The final chapter asks: What sorts of future would we like? What role would AI play in getting us there, and also in that world? What is your own role in helping it come about? (The story finishes with the hero beginning a new sort of journey, and leaves an open-ended conclusion.)
Gradient Dissent
Gradient Dissent discusses technical approaches to reducing AI risks. It was originally a chapter in Better without AI; I’ve separated it out as a stand-alone text. The two are bound together in the paperback and Kindle versions, and share a site in the web version.
Gradient Dissent doesn’t require any specific technical background, but assumes the reader’s willingness to follow a trail through somewhat dense conceptual thickets.
“Neural networks” are the technology underlying most current AI systems. They are exceptionally unreliable and dangerous: the systems they produce are deceptive and inherently error-prone. Neural networks should be used only under controlled conditions that reduce danger to acceptable levels. Currently, they are widely deployed in uncontrolled environments in which they cause large harms.
Gradient Dissent describes neglected scientific and engineering approaches that may make neural networks less risky. However, technical fixes cannot make them safe enough for most purposes. In the longer run, this technology should be deprecated, regulated, avoided, and replaced with better alternatives.
Text generators, such as ChatGPT, are a poorly understood new technology and pose many near-future risks. For example, the recent discovery that they can perform multi-step reasoning worries me. Current research might lead to superhuman reasoning ability, which could be catastrophic.
Gradient Dissent recommends scientific and engineering investigation to understand what text generators do, and how, and what this may imply. That may enable technical and social mitigations: for example by replacing the underlying technology with better-understood alternatives; by regulating use; and by educating the public about the risks and limitations.
- 1. Statement on AI Risk, Center for AI Safety, 30 May 2023.
- 2. The distinction between “AI ethics” and “AI safety” has recently become muddled in public discussion. I will use the terms consistently for clarity.
- 3. Kelsey Piper discusses the gap between the two fields, and possible synergies, in “There are two factions working to prevent AI dangers. Here’s why they’re deeply divided,” Vox, Aug 10, 2022.