What an AI apocalypse may look like

Is the “rollerskating transsexual wombats” scenario possible? Sure. But is it likely?

Perhaps not in every detail: after all, the tale’s absurdities were part of my strategy for maximizing your engagement. If it worked, you’ll share links to this book far and wide. Your contacts will engage in lively discussions about it; and then the recommender systems will see its potential and ramp up its distribution on the social media feeds of myriad users like you.

Stripped to essentials, however, a scenario wherein recommender AIs optimize for irrational conflict and degrade critical institutions to the point of collapse seems to me not merely likely, but well under way.

Paul Christiano’s “What failure looks like”, published in 2019, describes a similar scenario: runaway AI systems “give rise to ‘greedy’ patterns that try to expand their own influence” and ultimately take control of human centers of power. His story takes place many years from now, and in that future humanity fails to notice the growing influence of the machines until it’s too late.

I fear it’s happening now. Have you noticed yet? Is it too late?

How much influence the machines exert is controversial, though. There’s considerable debate among psychologists, sociologists, political scientists, and others over the extent to which social networks have caused political polarization, degraded individual understanding, and undermined institutional coherence and capacity.1

I have no formal expertise in any of the relevant disciplines. My impression as an observant layperson is that the effects have been disastrous and are accelerating. I will argue that the interaction between AI systems and society is an urgent and neglected aspect of AI risk. I will favor forceful clarity over dispassionate moderation. Putting the case starkly will allow you to consider it seriously, and to accept it, reject it, or adopt a more moderate version.

Considering severe social damage scenarios is unattractive to most in both the AI ethics and safety movements. The AI ethics movement primarily addresses current, localized, relatively small-scale social harms. It has mainly neglected more serious disasters, perhaps taking them as excessively speculative. Also, many in that movement are committed culture warriors, caught up in immediate political battles. Their side-taking may blind them to the larger-scale and longer-term consequences of social conflict.

The AI safety movement has mainly dismissed anything short of human extinction as insignificant by comparison. Less extreme scenarios may allow for coordinated, pragmatic responses, and eventual human victory. Preparing for that requires consideration of cultural, social, and political factors many safety people would rather ignore, because they seem complicated, boring, and stupid. Political behavior never seems far from random noise anyway. AI might make it worse, but that surely isn’t the end of the world. Besides, in the rollerskating wombats scenario, it’s nuclear and biological weapons that cause human extinction, not AI.

These objections would be peculiar. The safety community’s own apocalyptic AI scenarios commonly end the world with nuclear or biological weapons. In some, the AI has direct electronic control of them, but in many it just causes humans to use them. I think most lay people would find political incoherence scenarios far more plausible than many contemplated in the AI safety field. “Malevolent AI takes over the world and turns everyone into paperclips” sounds like a kids’ cartoon; “American politics gets so insane and hostile that we have a civil war, Russia and China take sides, and eventually it goes nuclear” sounds quite realistic. In a mid-2022 Ipsos poll,2 the majority of 8,620 Americans surveyed agreed that “in the next few years, there will be civil war in the United States,” and a substantial minority said they’d join in.

Well yes, maybe AI safety proponents would agree that’s bad, but it involves no superintelligent Scary AI, so it doesn’t count. The Mooglebook recommender algorithm is just some dumb statistical thing that barely works. Anyway, politics is tedious and stupid and random and incomprehensible, and therefore not something they want to think about.

Philosopher Nick Bostrom’s bestseller Superintelligence has hugely influenced AI safety discussions. Much of the movement’s writing embroiders the book’s extreme, intellectually engaging thought experiments, which echo classic science fiction stories. However, the main reason AI is risky is that we cannot yet conceive of most of its effects. Guessing at the details of that limited collection of scenarios risks overlooking the signs of a different, less intellectually dazzling doom getting under way.

Such signs, I will suggest, have been flashing in garish neon for years. The kinds of AI we already have may not lead directly to human extinction, but their effect on our deeply interconnected society could get extremely bad.

Even before employing AI, Mooglebook had accumulated significant power. It put most of that power behind ad-click-maximizing recommender algorithms, which select much of what most people watch and read. That forced most commercial media organizations into a new business model: produce things AIs will recommend, or die from lack of advertising revenue. People’s preferences shifted in response to the media the AIs showed them. That empowered some businesses, fashion trends, and political movements, and weakened others. Each of these shifts increased the king-maker power of AI at the expense of traditional institutions like corporations, academia, and political parties.
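To make the incentive concrete, here is a minimal sketch of such a loop, reduced to a multi-armed bandit. Everything here is invented for illustration; real recommenders are vastly more elaborate, but the reward signal is the same: the click, and nothing else.

```python
import random

# A toy ad-click-maximizing recommender (hypothetical; not Mooglebook's code).
# The only feedback it receives is whether the user clicked. Nothing measures
# truth, usefulness, or social effect.

class ClickMaximizer:
    def __init__(self, items, explore_rate=0.1):
        self.items = list(items)
        self.explore_rate = explore_rate          # fraction of random tries
        self.shows = {item: 0 for item in self.items}
        self.clicks = {item: 0 for item in self.items}

    def recommend(self):
        # Occasionally explore a random item; otherwise exploit whatever
        # has the best observed click-through rate so far.
        if random.random() < self.explore_rate:
            return random.choice(self.items)
        return max(self.items,
                   key=lambda i: self.clicks[i] / max(self.shows[i], 1))

    def observe(self, item, clicked):
        # Record the outcome; outrage that draws clicks is indistinguishable
        # from quality, because only the click is measured.
        self.shows[item] += 1
        if clicked:
            self.clicks[item] += 1
```

A media organization whose revenue depends on a loop like this has only one rational strategy: produce more of whatever the loop rewards.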

The 2016 Republican primaries provide an outstanding example. Mooglebook’s AI had already identified culture war controversy as a driver of viewer engagement, and therefore of ad clicks, so it promoted anything that fit the pattern. Media companies had already taken notice, and provided the AI with more of what the public seemed to crave.

During the primaries, Donald Trump was initially considered a long shot among seventeen candidates. He had no relevant credentials, and his personal history seemed like it would alienate several traditional Republican voting blocs. His campaign statements were deliberately outrageous, both for challenging left culture war opinions and for personal attacks on fellow Republicans. Those offended posted “You won’t BELIEVE what Trump just said, check this link!” on social media millions of times per day. Recommender AI observed that the word “Trump” statistically correlated with clicks, and promoted any text containing it. Formerly respected news organizations found that their page views and ad revenue skyrocketed whenever they highlighted Trump outrage—whether opposing or supporting him. Responding to the financial rewards bestowed by recommender AIs, the “news” became a firehose of all-Trump all-the-time gossip.
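The statistical mechanism need not be sophisticated. A crude keyword-level click model is enough to produce exactly this behavior; the sketch below (invented names and data, a caricature of any real ranking system) scores each headline by the click-through rate of its most clickable word.

```python
from collections import Counter

# Hypothetical keyword-level click model: count how often each word appears
# in headlines shown and clicked, then rank headlines by their "best" word.

word_shows = Counter()
word_clicks = Counter()

def observe(headline, clicked):
    # Update per-word statistics from one impression.
    for word in set(headline.lower().split()):
        word_shows[word] += 1
        if clicked:
            word_clicks[word] += 1

def score(headline):
    # A headline inherits the observed click rate of its most clickable
    # word, so once "trump" correlates with clicks, any text containing
    # it outranks sober reporting on anything else.
    rates = [word_clicks[w] / word_shows[w]
             for w in set(headline.lower().split()) if word_shows[w]]
    return max(rates, default=0.0)
```

Feed a model like this a few million impressions of “You won’t BELIEVE what Trump just said!” and the all-Trump firehose follows mechanically.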

That provided enormous free publicity for him. No other candidate could pay for that level of voter awareness.3 Some political scientists believe that without it, Trump would not have won the primary, and therefore not the election.4

Recommenders’ power to shape public discourse continues unabated. “Jewish Space Lasers” are the top trending topic on Twitter right now—I just checked. “What??” Forgive me, reader, for I have sinned: I did click through to find out what that was about. Maybe such absurd mythologizing doesn’t seem significant?

But the stakes are high. It would not be clearly wrong to say that in 2016, AI chose the president. Whether you love or loathe Trump, is that the way you want future government leaders selected?


  1. Jonathan Haidt makes the case for harm and alarm in “Social Media Is Warping Democracy,” “Why the Past 10 Years of American Life Have Been Uniquely Stupid,” and numerous other articles. Facebook has replied to one directly. Scott Alexander’s “Sort By Controversial” is an amusing and horrifying parable. The Social Dilemma, which I haven’t watched, is a semi-documentary film based partly on the work of the Center for Humane Technology. I also haven’t read Shoshana Zuboff’s book The Age of Surveillance Capitalism. Furthermore, I haven’t read two academic studies with evidence that social-media-driven polarization is not a thing, Chen et al.’s “Subscriptions and external links help drive resentful users to alternative and extremist YouTube videos” and Boulianne et al.’s “Right-Wing Populism, Social Media and Echo Chambers in Western Democracies,” but I did check the abstracts. Adam Mastroianni makes a good case against polarization in “The great myths of political hatred.”
  2. Wintemute et al., “Views of American Democracy and Society and Support for Political Violence: First Report from a Nationwide Population-Representative Survey.”
  3. Emily Stewart, “Donald Trump Rode $5 Billion in Free Media to the White House.”
  4. Sarah Oates and Wendy M. Moe, “Donald Trump and the ‘Oxygen of Publicity’: Branding, Social Media, and Mass Media in the 2016 Presidential Primary Elections.”