Superintelligence scenarios reduce the future to infinitely good or infinitely bad. Both are possible, but we cannot reason about or act toward them. Messy complicated good-and-bad futures are probably more likely, and in any case are more feasible to influence.
AI is about power and control. The technical details are interesting for some of us, but they’re a sideshow.
Superintelligence is a fantasy of power, not intelligence. Intelligence is just a technical detail. Not even that: there is no explanation of what “intelligence” even means in these scenarios, or what it could do or how. It’s arbitrary unspecified magic, a deus ex machina used to introduce infinite superpower into the plot.
We wanted flying cars, but all we got was 280 characters.1 Technological progress has ground to a halt (or so it feels). Why?
What’s blocking the future is social and cultural dysfunction, not tech. We can’t even use the technologies we’ve got. High-speed trains aren’t rocket science, but social coordination failures mean America can’t deploy them. We’ve unlocked the technology to make vaccines against all likely future pandemics, and to stockpile them against bioweapons or people messing with bats; but due to the culture war we can’t even get adequate funding for broad-spectrum covid vaccines that might end the pandemic permanently.2
This makes superintelligent AI attractive to technically-minded people. It’s the last open frontier; the last path forward. It would give us the superpower to just wave a wand and make the bullet trains run on time, without having to fight the legislature and unions and environmentalists and landowners and regulators and lawyers and “concerned citizens” groups and twitter trolls. We could dominate and disable Mooglebook recommender AI, which controls all those people’s brains.
“Whoever develops the biggest AI rules the world” is the ultimate techno-power fantasy showdown.
Superintelligence narratives take power to its logical limit: infinite power by fiat. They imagine extreme scenarios of human omnipotence (deploying AI to exert perfect control over all phenomena), or of absolute helplessness under the control of an omnipotent enemy AI (as slaves, ems,3 corpses, or paperclips).
Superintelligence makes details of the future inconceivable, by fiat. All the future can get is a one-bit valence: either the AI is a good god and we go to an inconceivable heaven, or it is an evil god and we go to an inconceivable hell. In that case, all questions of meaning, purpose, and value reduce to “make sure we get a good god, not an evil one.” The simplicity of this moral absolutism may be powerfully emotionally attractive.
Then we try to think up technological means for forcing a god to be good when it would be evil by default (“alignment”). This fails, in part because most candidate solutions, while supposedly technological, follow the logic of narrative—not engineering.
We can fantasize about an AI-granted heaven, but it seems we are all going to hell, and there’s nothing we can do about it. The reply to “this is silly” is “well you can’t rule out its happening,” which is true and important, but it’s led to a dead end. We can’t use technical rationality to reason about gods.
The future might be one bit, or it might be a big complicated mess, the way things have always been. Trying to guess which is more likely isn’t helpful.
Treating AI as our only possible savior is a sad failure of imagination. It probably won’t work. We should think of other ways of getting to a good future.
There are no guarantees of success, but we know lots about how to deal with big complicated messes. We have engineering, and we have politics. They are imperfect, but we can, and should, use both.
- 1. That’s a much-repeated statement from Peter Thiel, a decade ago. The original version was “140 characters,” the length limit for Twitter posts at the time; it’s now 280 (progress!). He was pointing out that social media had been the most significant twenty-first century technological innovation, which seemed puny compared with twentieth-century progress in material technologies, notably transportation and energy.
- 2. I wrote this sentence before the April 2023 announcement of Project Next Gen, a five-billion-dollar program to do just this.
- 3. “Ems” are digitally simulated copies of individual people’s brains. Robin Hanson, The Age of Em.