The AI apocalypse is now.
Recall the science-fictionish list of actions that a superintelligent AI might take to dominate humankind (repeated here verbatim):
- Seize control of large parts of the internet
- Spread pieces of itself onto many or all computers globally
- Develop sophisticated models of human social dynamics
- Target particular humans with specific manipulations, directing them to perform particular tasks, based on knowledge of individual vulnerabilities
- Gain control over supply chains
- Cooperate or compete with other AIs for power and control of resources: money, computer power, communication channels, human servants, and institutional influence
- Use superhuman persuasive techniques to get humans to do what it wants
- Get subservient humans to attack enemies
- Use its social models to manipulate human discourse and politics
- Coopt, weaken, or destroy human institutions and response capacities, including governments.
ALL THAT HAS ALREADY HAPPENED.
The AI safety field has listed these capabilities as terrifying future possibilities, and suggests that an AI system starting to develop them should be treated as an alarm signal.
If you are waiting for these alarm bells to go off before worrying—you are already much too late.
We are at war with the machines. An AI apocalypse is under way and you didn’t notice, because science fiction told you Scary AI would be mind-like AI, and that’s not what we got.
Seize control of large parts of the internet.
The visibility of web pages depends almost entirely on whether they get recommended by social media or web search, both of which are AI-driven. Every day, one out of every four humans alive looks at Facebook, and many of them see the rest of the internet only via AI-selected Facebook links.
Recommender engines are the dominant current use for AI in dollar terms.1 A recommender engine shows you a list of things you might want, based on statistical analysis (using “AI”) of information about you personally.2 Recommenders are provided by companies that profit when you choose something from the list. This includes, for example, Amazon showing you things you might buy, and Netflix showing you things you might watch. It includes Google search’s listing of web sites it hopes you might visit. Those then show you ads a Google recommender engine selects as the ones you are most likely to click, on the basis of what it knows about you personally.
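To make the mechanism concrete, here is a minimal sketch of the kind of ranking a recommender engine performs. The item names, features, and weights are invented for illustration, and a real system uses far larger learned models; but the objective is the same: order items by predicted profit to the platform, not by value to you.

```python
# Toy illustration of recommender-engine ranking (all data and weights invented).
# A real system uses vastly larger models and feature sets; the principle is the
# same: rank items by predicted profit to the platform.

from dataclasses import dataclass

@dataclass
class Item:
    name: str
    features: dict            # e.g. {"politics": 0.9, "outrage": 0.7}
    revenue_per_click: float  # what the platform earns if you click

def predicted_click_probability(user_profile: dict, item: Item) -> float:
    """Crude stand-in for a learned model: overlap between your tracked
    interests and the item's features, squashed into [0, 1]."""
    overlap = sum(user_profile.get(k, 0.0) * v for k, v in item.features.items())
    return min(1.0, max(0.0, overlap))

def recommend(user_profile: dict, items: list[Item], top_n: int = 3) -> list[Item]:
    """Rank items by expected revenue to the platform."""
    return sorted(
        items,
        key=lambda item: predicted_click_probability(user_profile, item) * item.revenue_per_click,
        reverse=True,
    )[:top_n]

if __name__ == "__main__":
    you = {"politics": 0.8, "gadgets": 0.3}   # inferred from surveillance data
    catalog = [
        Item("calm explainer", {"politics": 0.2}, revenue_per_click=0.05),
        Item("outrage bait",   {"politics": 0.9, "outrage": 0.8}, revenue_per_click=0.40),
        Item("phone review",   {"gadgets": 0.7}, revenue_per_click=0.10),
    ]
    for item in recommend(you, catalog):
        print(item.name)
```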
Likewise, “social” networks were once actually social—you saw whatever your friends posted—but are now “recommender networks” instead.3 You see whatever things AI has determined will be most profitable to the recommender network company for you to see.
Spread pieces of itself onto many or all computers globally.
Nearly every web page you look at invisibly downloads tracking scripts onto your computer. Those are programs that watch everything you do and report it to AI programs run by Facebook, Google, Microsoft, and many other advertising technology companies. (I’ll refer to such companies generically as Mooglebook, to avoid offending any of them in particular.4) Many apps on your phone do the same.
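As a toy illustration, here is roughly the kind of event report such a script assembles on every page you visit. The field names are invented; real trackers run as JavaScript inside the page, collect far more, and send each report straight to the company’s servers.

```python
# Toy illustration of the kind of event report a tracking script assembles.
# Field names are invented; real trackers run as JavaScript inside the page,
# collect far more, and POST each report to the ad-tech company's servers.

import json
import time

def tracking_event(user_id: str, event: str, page: str, referrer: str) -> str:
    payload = {
        "user_id": user_id,    # persistent identifier tied to you across sites
        "event": event,        # "pageview", "scroll", "click", ...
        "page": page,
        "referrer": referrer,  # where you came from
        "timestamp": time.time(),
    }
    return json.dumps(payload)

# A real script would transmit this to a collector endpoint (hypothetical here),
# where it feeds the recommender's database about you.
print(tracking_event("a1b2c3", "pageview", "/news/todays-outrage",
                     "https://socialnetwork.example/feed"))
```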
Recommender engines craft their suggestions using enormous databases of information about you personally, collected by software tentacles of their AI systems. They include everything you do online, all your non-cash purchases, and a history of everywhere you have been and when (tracked via both your phone and your car). Some of this data is supposedly secured, but much or most is available for purchase from “data brokers” by pretty much anyone. It’s easy to reconstruct from it who you are having an affair with, which illegal drugs you take, what you actually do when you are supposed to be working, and where and when you got an abortion.
Develop sophisticated models of human social dynamics.
Facebook’s “social graph” is its foundation. That is a database of how nearly every individual human interacts with other specific individuals, with particular organizations, with physical products and physical locations, and with media content items. Facebook AI finds patterns in those interactions, and uses them to get you to do what it wants: to influence people you know, to join or oppose organizations, to buy products and go to places, and to persuade your friends to do those things too.
All that is weighted by its prediction of what will most increase advertising revenue. We’ll see how optimizing for that often means accidentally also optimizing for outcomes inimical to humanity, as collateral damage.
Target particular humans with specific manipulations, directing them to perform particular tasks, based on knowledge of individual vulnerabilities.
Enormous databases of personal information, created for recommender engines using pervasive surveillance, contain all the material needed to deceive us, or to enhance the persuasiveness of messages sent to us.
This is already happening. Sophisticated automated phishing operations use these databases to target people who are statistically likely to fall for particular types of financial scams, and to personalize the deceptive messages sent to them. Political organizations similarly target and personalize automated propaganda spam that urges us to vote for their candidates, or to influence our representatives in their favor. Internet security experts predict scammers will use chatbots to automate the labor-intensive “long con” that gains victims’ trust during the lead-up to the final fleecing.5
Gain control over supply chains.
AI supply chain control can drive down prices and increase availability, which is beneficial for consumers. (This section emphasizes the risks and harms of the capabilities I listed at the beginning; but they may also have benefits.) Amazon’s AI is famous for this, although there are many similar systems. It is tightly integrated with the supply chains of several million non-Amazon companies, and controls them to varying degrees.6 The AI optimizes every aspect of goods production, from new product planning to front-door delivery. Like all current AI, it is inscrutable and error-prone, and can capriciously destroy or enrich other businesses.
Amazon’s recommender AI incentivizes other companies to get their products recommended by gaming Amazon’s scoring algorithm. Some pay for fake product reviews on Amazon, mass produced either entirely automatically by AI, or by humans working under close supervision of automation.
Supply chains for intangible products and services are also controlled or influenced by AIs.7 For instance, the media industry now optimizes products to make them more likely to get shown to viewers by recommenders; and to get ads placed in them by recommenders.8 This has, famously, destroyed much of the formerly-respected mainstream media, or turned them into clickbait farms.
Increasingly, too, the web is littered with spam media produced by AI text and video generators. They are posted there to fool AI recommender systems into promoting them. At the other end, internet media’s consumers are often AI-driven click fraud systems. This means that in parts of the media supply chain, all of the players are AIs, and the products are never encountered by any human.
Cooperate or compete with other AIs for power and control of resources: money, computing power, communication channels, human servants, and institutional influence.
AI systems already do all these things. For example, stock market trading is currently dominated by competing AI systems that can recognize patterns and react to events faster than people can.
You may object that it is not the AIs that gain the power or control the resources. A stock trading bot doesn’t get to keep the money it wins; that belongs to whatever financial firm runs it. The bot is mindless and has no clue what money even is, or what to do with it. It has no agency. It’s not bots competing in the stock market; it’s groups of humans organized into companies.
This is true in some sense, and that may matter. However:
- I argued earlier that AI is dangerous due to its ability to create pools of power, whether that gets wielded by AI or people. Suppose someone created a dramatically superior bot that was so profitable it could, within a few seconds after it was turned on, buy a controlling share of nearly all the public companies in the world. That would be a big problem, even if the bot’s creator exercised that control rather than the bot. What makes an AI risky is not its mind-like intentions (if any), it’s the effects it can cause.
- I also explained how agency is nebulous, and partly in the mind of the beholder. Professional traders generally think of their opponents as bots, not as the institutions that run them. They often recognize a particular bot by its distinctive pattern of activity, without knowing which company is running it.
- Trading bots do know something about what to do with money. That’s their whole job: figuring out what to buy with it, and when to sell to get cash instead. And, they do benefit from the money they make. Trading bots are subject to relentless Darwinian competition. If they lose money, they get shut down. If they make money, their firm gives them more resources: cash stake, computer power, and a bigger share of the special ultra-high-speed communication channel that connects traders to the stock exchange’s central database.
- Trading bots are literally out of human control. In the short run, they act so quickly that human oversight is impossible. That sometimes results in disasters, like the 2010 flash crash discussed earlier. In the longer run, if an AI system works sufficiently well, the institution that runs it comes to depend on the AI for the institution’s own survival, and is effectively incapable of turning it off. The next section discusses this, with current real-life examples.
Increasingly large fractions of economic and political activity, of many sorts, are AI-driven. The rest of “At war with the machines” concentrates on the media sector, where it is currently most obvious and important.
When you visit most major news web sites, they download onto your phone or computer an advertising auction program. As soon as the web page starts loading, your device contacts many potential advertisers and tells them who you are, what you’ve been doing, and which web page you are about to look at. The advertisers’ AI systems consult their databases for information about you, estimate how likely you’d be to click on their ad and how likely you’d be to do whatever they want if you did, calculate your financial value to them, and send bids back to your device. The software chooses a winner and informs the publisher’s computer, which accepts the bid from the winner, gets paid, and inserts their ad into the page. (The “publisher” is the company whose web site you are looking at.) All this takes a second or two, finishing before you have read past the headline.9
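Here is a rough sketch of that auction logic, written in Python rather than the JavaScript that actually runs in your browser. The bidder names, dossier fields, and valuation numbers are invented stand-ins for the advertisers’ real models and databases.

```python
# Rough sketch of the client-side ad auction described above ("header bidding").
# In reality this runs as JavaScript in your browser, and each bid comes back
# over the network from an advertiser's servers after it consults its own
# database about you. Bidder names, dossier fields, and numbers are invented.

import random

USER_DOSSIER = {"user_id": "a1b2c3", "recent_pages": ["mortgage rates", "baby strollers"]}
PAGE = "https://news.example/todays-outrage"

def ask_for_bid(bidder: str, dossier: dict, page: str) -> float:
    """Stand-in for a network call to one advertiser's bidding server, which
    looks you up, estimates your value to it, and returns a bid in dollars."""
    interest_match = 0.5 if "mortgage rates" in dossier["recent_pages"] else 0.1
    estimated_click_probability = interest_match * random.uniform(0.01, 0.05)
    value_of_a_click_to_bidder = random.uniform(1.0, 10.0)
    return estimated_click_probability * value_of_a_click_to_bidder

def run_auction(bidders: list[str]) -> tuple[str, float]:
    """Collect bids, pick the highest, and report the winner to the publisher."""
    bids = {b: ask_for_bid(b, USER_DOSSIER, PAGE) for b in bidders}
    winner = max(bids, key=bids.get)
    return winner, bids[winner]

winner, price = run_auction(["ad-exchange-A", "ad-exchange-B", "ad-exchange-C"])
print(f"{winner} wins the slot for ${price:.4f}; its ad is inserted into the page")
```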
Here the advertising AIs are competing for access to a communication channel (the ad placement), with which they intend to influence your thoughts and actions (to vote for a politician or buy a crocodile pool decoy). They cooperate with the publisher’s AI for mutual benefit. Meanwhile, major web publishers run their own content optimization and promotion AIs, to compete with each other both for your attention and for advertising revenue. Publishers’ AIs cooperate with recommender AIs to show you content optimized for advertising. Publishers’ AIs compete with each other to get the recommender AIs to recommend them. Successful AIs are given more computer power—and more of your mindshare.
Mainstream publishers are now also starting to use AI to write, not merely adjust, what you read. In January 2023, it came out that CNET, a formerly-respected news conglomerate, for several months had been using an AI text generator to write financial advice articles, with inadequate human supervision. Unsurprisingly, the articles often contained factual errors that could have led readers to make expensive mistakes.10
AIs collaborating and competing with each other to control people and institutions are a central theme of this chapter; let’s look at ways in which they could achieve this.
Use superhuman persuasive techniques to get humans to do what it wants.
If you post on a social network, you are working under the control of an AI—consciously or unconsciously. Skillful use of Twitter involves maximizing the reach of your messages by gaming its algorithms.11 What you tweet about and exactly how you word it affects how likely it is to get seen, liked, or retweeted. So does the time of day and day of the week you post it. So does your use of images, emoji, links, polls, and videos. You may be oblivious to all that, but you probably notice how many likes you get, and your brain finds patterns in that reward signal, and you are getting trained by the AI. I, for one, am a cyborg: a hybrid organism composed of some neural glop and an AI server farm somewhere in Texas.12
The AI uses you to create messages that persuade other humans to do what the AI wants: to look at what it wants them to see, to click on its ads, and to create more messages that persuade more humans to do the same. The technologies of memetic weaponry have improved dramatically over the past decade, optimized by AI running a training loop over coopted humans. (That means you. Do you ever post political comments on the internet? Yes, you do.)
“Fuzzing” a program means feeding it massive quantities of aberrant data to find inputs that cause it to freeze, crash, wig out, or produce bizarrely wrong outputs. An effective fuzzer creates inputs plausible enough that its victim doesn’t recognize and reject them as aberrant, but which do create unexpected behaviors deep inside the program, exposing logic failures in its construction. Some fuzzers use machine learning methods to discover the internal structure and patterns of behavior of a program, in order to break it more effectively.
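A minimal illustration of the idea, using a deliberately buggy toy program as the target: mutate inputs at random and keep whichever ones make the program misbehave.

```python
# Minimal illustration of fuzzing: mutate inputs at random and record which
# ones make the target misbehave. The "program" here is a deliberately buggy toy.

import random
import string

def buggy_parser(text: str) -> int:
    """Toy target with a hidden logic failure: it assumes every '=' is
    followed by a digit."""
    total = 0
    for field in text.split(";"):
        if "=" in field:
            total += int(field.split("=")[1])   # crashes on "x=" or "x=abc"
    return total

def mutate(seed: str) -> str:
    """Make a few random single-character changes to a known-good input."""
    chars = list(seed)
    for _ in range(random.randint(1, 3)):
        pos = random.randrange(len(chars))
        chars[pos] = random.choice(string.printable)
    return "".join(chars)

def fuzz(seed: str, trials: int = 10_000) -> list[str]:
    """Collect mutated inputs that crash the target."""
    crashers = []
    for _ in range(trials):
        candidate = mutate(seed)
        try:
            buggy_parser(candidate)
        except Exception:
            crashers.append(candidate)
    return crashers

found = fuzz("a=1;b=2;c=3")
print(f"{len(found)} inputs crashed the parser, e.g. {found[:3]}")
```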
All “neural” AI systems are vulnerable to “adversarial inputs,” which cause them to produce bizarrely wrong outputs, errors no human could make.13 Those often seem alien and uncanny, in comparison with their more usual valid outputs and understandable mistakes.
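For a sense of how such inputs are found, here is a minimal sketch in the spirit of the fast-gradient-sign method, against an invented toy linear classifier: nudge every input feature slightly in whichever direction most increases the model’s error, and the output swings from confident to confidently wrong even though no single feature changed much.

```python
# Minimal sketch of crafting an adversarial input against a toy classifier.
# The weights and input are invented; real attacks target networks with
# millions of parameters, where the perturbation is imperceptible to a human.

import numpy as np

n = 1000
weights = 0.5 * np.where(np.arange(n) % 2 == 0, 1.0, -1.0)  # a "trained" linear model

def predict(v: np.ndarray) -> float:
    """Probability the model assigns to class 1 (logistic output)."""
    return float(1.0 / (1.0 + np.exp(-weights @ v)))

x = np.ones(n)
x[0] += 8.0        # an input the model classifies confidently as class 1 (~0.98)

# For a linear model, the gradient of the score with respect to the input is
# just `weights`, so move each feature a tiny amount in the direction that
# lowers the score (the fast-gradient-sign idea). Each feature moves by 0.02.
epsilon = 0.02
x_adv = x - epsilon * np.sign(weights)

print(f"prediction on the original input:  {predict(x):.3f}")     # ~0.982
print(f"prediction on the perturbed input: {predict(x_adv):.3f}")  # ~0.002
print(f"largest change to any feature:     {np.max(np.abs(x_adv - x)):.3f}")
```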
People are strange and much less human than we like to pretend. Our responses to inputs are sometimes also alien and inscrutable. Seemingly-trivial messages (cartoon animals, variant spellings, initially-meaningless catch phrases) can trigger inexplicable individual and collective emotional meltdowns. Then we hit buttons on the recommender site, and post bizarrely wrong outputs, things we’d never say offline, and the AI notices and tries showing them to people who might be vulnerable due to internal logic failures, and finds new patterns…
The machines are fuzzing us.
More and more of our inputs are AI-optimized adversarial weirdness designed to cause irrational mental breakage—so we’ll output what the AI wants us to.
Get subservient humans to attack enemies.
AI coordinates “social media mobs” conducting what Wikipedia calls “internet vigilantism,” in which “hordes of what are likely otherwise nice people—shielded by anonymity, informed by echo chambers, restricted by character counts, incentivized to provoke shock—give in to their feral impulses and vomit abusive nonsense onto the web for a world-wide audience.”14 Online harassment may be merely unpleasant, but in many cases employers have hastily fired the targets, some of whom became effectively unemployable, often for trivial, irrelevant, or non-existent offenses.15 Social media mobs often call for killing the target. Sometimes that is credible enough (as when accompanied by “doxing”) to drive innocent people into hiding.
You might object that social media mob vigilantism is individual people attacking their enemies, not AI attacking its enemies. I did write “attack enemies”—not “attack its enemies”—but maybe that was implied?
It’s not so clear what’s going on here. Whose enemies are the targets? Typically, “otherwise nice people” attack someone, on the basis of nearly zero information, who they’ve never heard of before, and who they completely forget about two minutes later—but the damage is done. Was the target of the otherwise nice people their enemy? Or were the otherwise nice (but mindless) people used as weapons by some other agent that was temporarily controlling their brains?
The AI fuzzer chooses targets to maximize viewer engagement with media reports about the drama, with accompanying advertisements for diet cola and toenail fungus remedies. But how can it turn otherwise nice people into momentary monsters?
Online mobs are almost always ideologically driven. Participation is mostly “slacktivism”: exaggerated expressions of righteous rage that let participants feel they are contributing to a noble political cause with minimal effort. So it might be more accurate to say mob victims are enemies of ideologies, rather than of the nice but mindless perpetrators.
Which ideologies are those? Online mobs do not speak for the boring old-fashioned ones discussed in political philosophy classes. They speak for Extremely Online ideologies invented last week, whose names begin with #, the hashtag sign. Those ideologies are themselves conjured into existence in part by Mooglebook AIs—as we shall soon see.
Some political actors actively coordinate and direct Twitter mobs. However, that is limited by the need to find a message that both generates an irrational hate response and causes ad clicks, so it gets propagated by recommender systems. It’s best to understand such human actors as collaborating with the AI to craft such messages.16
Here the immune system, or “mosaic warfare,” is a better analogy than a human mind. Agency emerges from dynamic interactions between individual people, ideologies, media and political organizations, and artificial intelligence systems. The victims of online mobs are enemies of the composite, symbiotic superintelligent superorganism.
Use its social models to manipulate human discourse and politics.
Mooglebook’s AI has developed superhuman social engineering capabilities by applying stochastic gradient descent17 to human behavior.
Memes—viral packets of meanings—have spread through human communication for millennia. The internet didn’t much change their dynamics at first; it was just a new human-to-human communication medium. Starting about a decade ago, though, social networks introduced the like/share/retweet buttons. They fed Like counts, along with personal data gathered through internet surveillance, to AI systems. They replaced genuinely social feeds, which showed you what your friends wanted you to see, with profit-optimized algorithmic feeds, which show you what the AI wants you to see. That set off a new evolutionary arms race. The fittest “content” items maximize Likes and advertising clicks. Mooglebook AI figures out which those are, and promotes them. Human content creators—journalists, influencers, marketers, activists, AI safety researchers—also try to figure out what the AIs will consider worthy.18
AI has discovered that inciting tribal hatred is among the best ways to sell ads.19 In collaboration with ideologies and coopted human content providers, AIs have developed increasingly effective methods for provoking fear and rage, which often induce people to propagate messages.20 Under partial brain control from AIs, we humans create emotion-inducing culture-war messages.21 The AIs propagate them based on their own alien values (namely, whatever inscrutable factors they predict will result in attention, and therefore advertising revenue).
There was a culture war before AI seized control of the media, but it wasn’t as irrational, pervasive, fast-moving, polarized, or hostile. “If it bleeds it leads” was a maxim of the traditional “yellow journalism” news media: their editors selected stories they guessed would upset you. However, the internet dramatically accelerated the news cycle. Social media statistics and tracking technologies gave editors real-time feedback on how upsetting a story was, so they could follow up with more, faster. New upsets arrive so quickly that there’s no time to reflect on what they may mean; all one can do is retweet and move on to the next.22
Recommender AI amplifies selected Daily Outrages, ones that no human editor could have predicted, based on its inscrutable predictive models of social psychology. As I write this in mid-January 2023, Twitter is all about whether gas stoves cause asthma, which AI has somehow turned into a proxy for The Other Tribe Is Wrong About Everything. Editors at formerly-respected “news” organizations are rejoicing: they are getting paid for so many ads, placed by AI on their hasty clickbait coverage of this Critical Issue.23
Ideologies now spread not mainly person-to-person, but person-to-AI-to-person-to-AI. Ideologies compete for the computational resources they need to propagate: human attention and AI approval.
The most powerful agents in the world are now hybrid distributed superintelligences: amalgams of AIs, media products, synthetic ideologies, and infected humans and institutions.
Coopt, weaken, or destroy human institutions and response capacities, including governments.
Our social and cultural institutions, on which our lives depend, have been gradually losing their ability to maintain systematicity and rationality over the past half century.24 They are disintegrating and risk catastrophic collapse.
This process has accelerated dramatically in the past decade, driven by the internet, particularly the social networks. Major systematic institutions have been crippled or effectively destroyed under AI-driven memetic attack, generally from both sides of the culture war.
Public health agencies—the WHO, FDA, and CDC—are obvious cases. During covid, they have been unable to act effectively on the basis of scientific knowledge (as, until recently, they reliably did), due to recommender-driven memetic damage. The two sides of the culture war invested masks, vaccines, and potential treatments with opposing symbolic meanings, ungrounded in physical reality. The agencies increasingly and explicitly made recommendations on the basis of how they guessed the public would interpret statements as culture war moves, rather than on the basis of medical evidence.
Extrapolating this trend, disabling critical institutions may spell Doom. Much of the rest of this chapter elaborates that possibility.
- 1.As far as I can tell. I have not found a financial breakdown of commercial applications for AI. I suspect that’s because, in terms of revenue, everything else is insignificant by comparison. AI “works” for recommenders because a high error rate is not a major problem; if 20% of their suggestions are way off, it doesn’t matter. Not many applications are so tolerant.
- 2.There’s a literature on recommender alignment, analogous to AI alignment. An interview with Stuart Russell at https://www.youtube.com/watch?v=vzDm9IMyTp8 is a good starting point.
- 3.Michael Mignano, “The End of Social Media and the Rise of Recommendation Media,” mignano.medium.com, 27 July 2022.
- 4.I’m following the lead of Gwern Branwen’s “It Looks Like You’re Trying To Take Over The World” here. (gwern.net, 2022-03-06–2023-03-28.)
- 5.Bruce Schneier and Barath Raghavan, “Brace Yourself for a Tidal Wave of ChatGPT Email Scams,” Wired, Apr 4, 2023.
- 6.Moira Weigel, “Amazon’s Trickle-Down Monopoly: Third Party Sellers and the Transformation of Small Business,” Data & Society, no date.
- 7.Jon Stokes, “Coupling, drift, and the AI nobody noticed,” jonstokes.com, Jun 18, 2021.
- 8.See the “Journalism’s AI revolution” section in Jon Stokes’ “Is machine learning in the enterprise mostly ‘snake oil’?”, jonstokes.com, May 25, 2021.
- 9.This is called “header bidding.” I find it technologically astonishing as well as quite creepy. There’s a more detailed explanation at headerbidding.com. The auction may, alternatively, run on the publisher’s server, or on an advertising company’s server, rather than your device; all three approaches are common.
- 10.Lauren Leffer, “CNET Is Reviewing the Accuracy of All Its AI-Written Articles After Multiple Major Corrections,” Gizmodo, revised version of January 17, 2023. Also see CNET’s official non-apology: Connie Guglielmo, “CNET Is Experimenting With an AI Assist. Here’s Why,” Jan. 16, 2023.
- 11.Jon Stokes, “Welcome to the Everything Game,” jonstokes.com, May 5, 2021.
- 12.So it’s more accurate to say that, in using social networks, you are trained by the hybrid superintelligence composed of AI systems and your human-cyborg audience. The training agency is diffuse, like the immune system. How much depends on AI versus humans probably varies considerably, and we don’t have measures yet. The feedback cycles are complicated. We won’t know for sure until we shut down the AIs and see how much everything improves!
- 13.Moosavi-Dezfooli et al., “Universal adversarial perturbations,” arXiv:1610.08401v1, 6 Oct 2016.
- 14.Micah Cash in “Against the Social-Media Mob,” The Wall Street Journal, April 16, 2019.
- 15.Jon Ronson’s “How One Stupid Tweet Blew Up Justine Sacco’s Life” discusses several such cases. The New York Times Magazine, Feb. 12, 2015.
- 16.B.J. Campbell, “Facebook is Shiri’s Scissor,” Handwaving Freakoutery, May 3, 2021.
- 17.Stochastic gradient descent is the mathematical optimization method used to train most current AI systems. Loosely analogous to biological evolution’s reinforcement of advantageous variations, it repeatedly adjusts a system’s parameters in whichever direction most improves its performance on samples of the training data.
- 18.Jonathan Haidt, “Why the Past 10 Years of American Life Have Been Uniquely Stupid,” The Atlantic, April 11, 2022.
- 19.Rathje et al., “Out-group animosity drives engagement on social media,” PNAS, June 23, 2021.
- 20.Jon Stokes, “Segmentation faults: how machine learning trains us to appear insane to one another,” jonstokes.com, Jun 11, 2021.
- 21.Daniel Williams’ “The marketplace of rationalizations” describes “a social structure in which agents compete to produce justifications of widely desired beliefs in exchange for money and social rewards such as attention and status.” Economics & Philosophy, March 2023.
- 22.Brady et al. found that “the presence of moral-emotional words in [Twitter] messages increased their diffusion by a factor of 20% for each additional word.” That’s in “Emotion shapes the diffusion of moralized content in social networks,” PNAS, June 26, 2017. Relatedly, Facebook conducted a covert experiment of showing randomly selected users either more positive or more negative messages. They found that “emotional states can be transferred to others via emotional contagion, leading people to experience the same emotions without their awareness”; and that “when positive expressions were reduced, people produced fewer positive posts and more negative posts.” Kramer et al., “Experimental evidence of massive-scale emotional contagion through social networks,” PNAS, June 2, 2014. See also the discussion of implications by Robinson Meyer in “Everything We Know About Facebook’s Secret Mood-Manipulation Experiment,” The Atlantic, June 28, 2014.
- 23.“Biden Is Coming for Your Gas Stove,” The Wall Street Journal Editorial Board, Jan. 10, 2023. David Watsky, “Two Shocking Studies That Likely Sparked a Gas Stove Ban Debate,” CNET, Jan. 15, 2023. Lisa Hagen and Jeff Brady, “Gas stoves became part of the culture war in less than a week. Here’s why,” NPR, Jan. 21, 2023.
- 24.My incomplete but extensive “How meaning fell apart” traces the history of disintegration (on meaningness.com). “A bridge to meta-rationality vs. civilizational collapse” suggests a possible antidote (on metarationality.com).