Scenarios in which artificial intelligence systems degrade critical institutions to the point of collapse seem to me not just likely, but well under way.
Paul Christiano’s 2019 “What failure looks like” describes one such scenario.1 Runaway AI systems “give rise to ‘greedy’ patterns that try to expand their own influence” and ultimately take control of human centers of power.
His story takes place many years from now. In that future, humanity fails to notice the growing influence of the machines until it’s too late.
I fear it’s happening now. Have you noticed yet? Is it too late?
How much influence the machines exert is controversial, though. There’s considerable debate among psychologists, sociologists, political scientists, and others over the extent to which AI-driven social networks have caused political polarization, degraded individual understanding, and undermined institutional coherence and capacity.2
I have no formal expertise in any of the relevant disciplines. My impression as an observant layperson is that the effects have been disastrous and are accelerating. The next section suggests that the interaction between AI systems and society is an urgent and neglected aspect of AI risk. Since the evidence is in dispute, you may reasonably reject or accept the argument. Or, like me, you may consider that prediction is always uncertain, but it’s worth working to forestall some possible disasters, even if we might get away with ignoring them.
AI, the king-maker
Even before employing AI, Mooglebook had accumulated significant power. It put most of that power behind ad-click-maximizing recommender algorithms, which now select much of what most people watch, read, and listen to. That forced most commercial media organizations into a new business model: produce things the AIs will recommend, or die from lack of advertising revenue.
People’s preferences shifted in response to the media the AIs showed them. That empowered some businesses, fashion trends, and political movements, and weakened others. Each of these shifts increased the king-maker power of AI at the expense of traditional institutions such as corporations, academia, and political parties.
The 2016 Republican primaries provide an outstanding example. Mooglebook’s AI had already identified culture war controversy as a driver for viewer engagement, and therefore ad clicks, so it promoted anything that fit the pattern. Media companies had already taken notice, and provided the AI with more of what the public seemed to crave.
During the primaries, Donald Trump was initially considered a long shot among seventeen candidates. He had no relevant credentials, and his personal history seemed like it would alienate several traditional Republican voting blocs. His campaign statements were deliberately outrageous, both for challenging left culture war opinions and for personal attacks on fellow Republicans. Those offended posted “You won’t BELIEVE what Trump just said, check this link!” on social media millions of times per day.
Recommender AI observed that the word “Trump” statistically correlated with clicks, and promoted any text containing it. Formerly-respected news organizations found that their page views and ad revenue skyrocketed whenever they highlighted Trump outrage—whether opposing or supporting him. Responding to the financial rewards bestowed by recommender AIs, the “news” became a firehose of all-Trump all-the-time gossip.
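To make the mechanism concrete, here is a deliberately crude sketch of how a click-maximizing ranker could end up promoting anything containing a high-engagement word. Everything in it is invented for illustration—the headlines, the click counts, the smoothing constant, and the scoring function are all hypothetical, and real recommender systems are vastly more sophisticated—but the optimization target is the same: predicted clicks, and nothing else.

```python
from collections import defaultdict

# Toy illustration (not any real platform's system): score candidate
# headlines by the smoothed click-through rate of the tokens they contain,
# then surface whichever headline maximizes predicted clicks.

# Fabricated engagement history: (headline, times_shown, times_clicked)
history = [
    ("trump attacks rival in debate", 1000, 310),
    ("senate passes budget resolution", 1000, 40),
    ("you won't believe what trump just said", 1000, 420),
    ("new study on infrastructure spending", 1000, 25),
]

shown = defaultdict(int)
clicked = defaultdict(int)
for headline, n_shown, n_clicked in history:
    for token in set(headline.lower().split()):
        shown[token] += n_shown
        clicked[token] += n_clicked

# Global click rate serves as a prior for tokens with little or no history.
prior = sum(c for _, _, c in history) / sum(s for _, s, _ in history)
PSEUDO = 50  # smoothing strength (arbitrary for this sketch)

def token_ctr(token):
    """Click-through rate of a token, shrunk toward the global prior."""
    return (clicked[token] + prior * PSEUDO) / (shown[token] + PSEUDO)

def engagement_score(headline):
    """Average token CTR: a crude proxy for expected clicks."""
    tokens = headline.lower().split()
    return sum(token_ctr(t) for t in tokens) / len(tokens)

candidates = [
    "trump insults another candidate",
    "committee reviews trade policy details",
]

# The ranking optimizes predicted clicks only; civic value never enters.
for h in sorted(candidates, key=engagement_score, reverse=True):
    print(f"{engagement_score(h):.3f}  {h}")
```

Run on this made-up history, the outrage headline ranks first simply because the token “trump” has an unusually high historical click rate. Nothing in the objective distinguishes informing the public from inflaming it.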
That provided enormous free publicity for him. No other candidate could pay for that level of voter awareness.3 Some political scientists believe that without it, Trump would not have won the primary, and therefore not the election.4
Recommenders’ power to shape public discourse continues unabated. “Jewish Space Lasers” are the top trending topic on Twitter right now—I just checked. “What??” Forgive me, reader, for I have sinned: I did click through to find out what that was about.
Maybe such absurd mythologizing doesn’t seem significant?
But the stakes are high. It would not be clearly wrong to say that in 2016, AI chose the president. Whether you love or loathe Trump, is that the way you want future government leaders selected?
Intellectually unattractive dooms
Considering social damage scenarios that lead to moderate apocalypses is unattractive to both the AI ethics and safety movements.
The AI ethics movement primarily addresses current, comparatively minor social harms. It has mainly neglected more serious disasters, perhaps taking them as excessively speculative.
Also, many in that movement are committed culture warriors, caught up in immediate political battles. Their side-taking may blind them to the larger-scale and longer-term consequences of social conflict. Their view is that smart machines are not the real problem: those are great, as long as “we” are in charge of them. The problem is stupid and evil people. “We” need to control Mooglebook’s AI to make it shut down the Bad Tribe’s propaganda. That refuses to recognize that much of the country wants that propaganda and agrees with it. In the culture war, both sides believe they are fated to win because they are morally correct, and that belief justifies tearing societies apart. We’ll see how AI may ensure that neither side can win. The war itself, and the AI that stokes it, are our enemies.
The AI safety movement has mainly neglected anything short of human extinction as insignificant in comparison. Advocates might object that scenarios considered in this chapter are just about individuals and corporations making questionable use of the internet, somewhat aided by not-really-AI statistical algorithms. Ephemeral squabbles on social media are trivial by comparison with the end of the world, they might say.
Philosopher Nick Bostrom’s bestseller Superintelligence has hugely influenced AI safety discussions. Much of the movement’s writing embroiders the book’s extreme, intellectually engaging thought experiments, which echo classic science fiction stories.
Many such scenarios end the world with nuclear or biological weapons. In some, the AI villain has direct electronic control of them, but in many it just persuades humans to use them.
“America’s internet-driven politics get so insane and hostile that we have a civil war, Russia and China back opposite sides, and eventually it goes nuclear” sounds realistic. Realistic, at least, by comparison with the safety community’s Scary AI killer robot scenarios. In a mid-2022 Ipsos poll,5 half of 8,620 Americans surveyed agreed that “in the next few years, there will be civil war in the United States,” and a substantial minority said they’d join in.
AI safety proponents would agree that’s bad. However, the Mooglebook recommender algorithm involves no superintelligent Scary AI, so it’s not what they’re interested in.
Pragmatic responses to social dysfunction dynamics demand consideration of cultural and political factors. Most AI safety people would rather ignore those, because they seem boring, inscrutable, random, and stupid.
A main reason AI is risky is that its coming effects may be ones we cannot yet conceive. Trying to guess the details of a limited collection of classic science fiction plots risks not noticing the signs of a different, less philosophically interesting catastrophe getting under way.
Such signs, I will suggest, have been flashing garish neon lights for years. The kinds of AI we already have may not lead directly to human extinction, but their effect on our deeply interconnected society could get extremely bad.
- 1. LessWrong, March 17, 2019.
- 3. Emily Stuart, “Donald Trump Rode $5 Billion in Free Media to the White House,” TheStreet, November 20, 2016.
- 4. Sarah Oates and Wendy M. Moe, “Donald Trump and the ‘Oxygen of Publicity’: Branding, Social Media, and Mass Media in the 2016 Presidential Primary Elections,” American Political Science Association Annual Meeting, August 25, 2016.
- 5. Wintemute et al., “Views of American Democracy and Society and Support for Political Violence: First Report from a Nationwide Population-Representative Survey.”