We are doing a terrible job of thinking about the most important question because unimaginably powerful evil artificial intelligences are controlling our brains.
“What sort of society and culture do we want, and how do we get that” is the topic of the AI-driven culture war. The culture war prevents us from thinking clearly about the future.
Mooglebook AI does not hate you, but you are made out of emotionally-charged memes it can use for something else.1
The culture war’s justification for itself is that Americans are profoundly split over fundamental values. This is a lie. Mostly, everyone wants the same things, but we can’t get them because the Other Side will block any action to bring them about. Everyone urgently wants the healthcare system fixed, but for exactly that reason Mooglebook AI whips the Other Side into a frenzy of opposition to any specific proposal, on the basis of some insane theory its human servants invented on the spur of the moment.2
During the first few months of covid, the left and right flip-flopped three times over whether it was an insignificant cold or Doom. (Do you remember that the first mainstream left position was that Trump was using covid, an insignificant cold, as a justification for anti-Chinese racism?) This arbitrary inconsistency suggests not a “conflict of values,” but Mooglebook AI and its human servants running A/B tests to see which alignment would generate the most ad clicks, page views, and campaign dollars.
Venkat Rao writes:
If we were all in better shape mentally, the way we were in 2006 say, we’d have proper discourses about all this stuff and form coherent mental models and act in a spirit of global mutualism. One reason we don’t is that it’s gotten significantly harder to care about the state of the world at large. A decade of culture warring and developing a mild-to-medium hatred for at least 2/3 of humanity will do that to you. General misanthropy is not a state conducive to productive thinking about global problems. Why should you care about the state of the world beyond your ark? It’s mostly full of all those other assholes, who are the wrong kind of deranged and insane.3
This is not a future we would like.
Here I will issue an urgent warning to all concerned with AI risks. Popularizing knowledge of that issue risks its appropriation by the culture war. The culture war grabs anything that provokes strong emotions, invents two insane positions, and assigns them to Team A and Team B. After that, any rational discussion of the topic becomes impossible. There is an immediate danger here, promoted inadvertently by the AI ethics community: if concern with AI risk gets labeled “woke,” half of America will be unshakably opposed to any efforts to mitigate it. Conversely, identification of the AI safety movement with “techbro billionaires” will guarantee that the other half of America will be just as unshakably opposed.4
What sorts of future would we like? Not what we would want. Not what is Correct. Not the future in which Our Side wins and “we” get everything the culture war AI has told us we want and deserve to get once we have humiliated the Other Side sufficiently. We’re not going to get that.
Realistic futures we would like won’t be perfect or Correct. They will be messy and imperfect. They can be better or worse in various respects. What would be, actually, surprisingly nice and pretty good all round?
- 1. This riffs on Eliezer Yudkowsky’s oft-quoted summary of the risk of AI non-alignment: “The AI does not hate you, nor does it love you, but you are made of atoms which it can use for something else.” Making paperclips, for example.
- 2. Scott Alexander, “The Toxoplasma of Rage.”
- 3. “Ark Head.” Lightly edited for concision.
- 4. I wrote this paragraph in September 2022. During revision in January 2023, it seems that both halves of my prophecy have come to pass. For an early salvo (July 2022), see Émile P. Torres, “The Dangerous Ideas of ‘Longtermism’ and ‘Existential Risk’.”