How AI destroyed the future

The future? Ha ha ha

We are doing a terrible job of thinking about the most important question because unimaginably powerful evil artificial intelligences are controlling our brains.

“What sort of society and culture do we want, and how do we get that?” is the topic of the AI-driven culture war. The culture war prevents us from thinking clearly about the future.

Mooglebook recommender AI does not hate you, but you are made out of emotionally-charged memes it can use for something else.[1]

The culture war’s justification for itself is that Americans are profoundly split over fundamental values. This is a lie. Mostly, everyone wants the same things, but we can’t get them because the Other Side will block any action to bring them about. Everyone urgently wants the healthcare system fixed, but for exactly that reason Mooglebook AI whips the Other Side into a frenzy of opposition to any specific proposal, on the basis of some insane theory its human servants invented on the spur of the moment.[2]

During the first few weeks of covid, it was clear that one side of the culture war would insist that it was an insignificant cold, and the other that it was Doom—but they hadn’t yet figured out which side would take which. (Do you remember that the first mainstream left position was that Trump was using covid, an insignificant cold, as a justification for anti-Chinese racism?) This arbitrary inconsistency suggests not a “conflict of values,” but Mooglebook AI and its human servants running A/B tests to see which alignment would generate the most ad clicks, page views, and campaign dollars.
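To make the A/B-testing claim concrete, here is a minimal sketch of the mechanism being alleged: an epsilon-greedy bandit that keeps whichever framing of a story gets more clicks. Everything in it is hypothetical (the framings, the click rates, the `choose_framing` helper); no real recommender is this simple. The point survives anyway: the winning framing is selected by engagement, not by anyone’s values.

```python
import random

# Hypothetical framings a recommender might test against each other.
# Neither the framings nor the numbers below come from any real system.
ARMS = ["covid is an insignificant cold", "covid is Doom"]

clicks = {arm: 0 for arm in ARMS}  # clicks per framing
shown = {arm: 0 for arm in ARMS}   # impressions per framing

def choose_framing(epsilon=0.1):
    """Epsilon-greedy: usually show the framing with the best observed
    click rate; occasionally explore the other one. Untried framings
    get priority (infinite estimated rate)."""
    if random.random() < epsilon:
        return random.choice(ARMS)
    return max(ARMS, key=lambda a: clicks[a] / shown[a] if shown[a] else float("inf"))

def record(arm, clicked):
    """Log one impression and whether it was clicked."""
    shown[arm] += 1
    if clicked:
        clicks[arm] += 1

# Toy simulation: pretend one framing happens to draw more clicks in this
# audience segment. The bandit converges on it, truth be damned.
TRUE_RATE = {"covid is an insignificant cold": 0.03, "covid is Doom": 0.06}

for _ in range(10_000):
    arm = choose_framing()
    record(arm, random.random() < TRUE_RATE[arm])

best = max(ARMS, key=lambda a: clicks[a] / shown[a])
print(f"Framing pushed to this segment: {best!r}")
```

Run long enough, the sketch settles on “Doom” for this segment simply because 0.06 > 0.03; a mirror-image segment with the rates reversed gets the opposite framing, which is the arbitrary inconsistency described above.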

Venkatesh Rao writes:

If we were all in better shape mentally, the way we were in 2006 say, we’d have proper discourses about all this stuff and form coherent mental models and act in a spirit of global mutualism. One reason we don’t is that it’s gotten significantly harder to care about the state of the world at large. A decade of culture warring and developing a mild-to-medium hatred for at least 2/3 of humanity will do that to you. General misanthropy is not a state conducive to productive thinking about global problems. Why should you care about the state of the world beyond your ark? It’s mostly full of all those other assholes, who are the wrong kind of deranged and insane.[3]

This is not a future we would like.

What sorts of future would we like? Not what we should want. Not what is Correct. Not the future in which Our Side wins and “we” get everything the culture war AI has told us we want and deserve to get once we have humiliated the Other Side sufficiently. We’re not going to get that.

Realistic futures we would like won’t be perfect or Correct. They will be messy and imperfect. They can be better or worse in various respects. What would be, actually, surprisingly nice and pretty good all round?

Imagining a likeable future is a crucial prerequisite to building one. That is not easy. You may need to get hostile AI out of your head first.[4]

1. This riffs on Eliezer Yudkowsky’s oft-quoted summary of the risk of AI non-alignment: “The AI does not hate you, nor does it love you, but you are made of atoms which it can use for something else.” Making paperclips, for example.
2. Scott Alexander, “The Toxoplasma of Rage,” Slate Star Codex, December 17, 2014.
3. Venkatesh Rao, “Ark Head,” Ribbonfarm, September 29, 2022. Lightly edited for concision.
4. I suggest methods in “Vaster than ideology,” summarized earlier.