Current AI systems are already harmful, and may cause near-term catastrophes through their ability to shatter societies, cultures, and individual psychologies. That might conceivably cause human extinction, but it is more likely to scale up to the level of the twentieth-century dictatorships, genocides, and world wars. We would be wise to anticipate possible harms in as much detail as possible.
The AI apocalypse is now.
Recall the science-fictionish list of actions that a superintelligent AI might take to dominate humankind (repeated here verbatim):
- Seize control of large parts of the internet
- Spread pieces of itself onto many or all computers globally
- Escape human control and prevent our shutting it down
- Cooperate or compete with other AIs for power and control of resources: money, computer power, communication channels, human servants, and institutional influence
- Gain control over supply chains
- Use superhuman persuasive techniques to get humans to do what it wants
- Target individual humans with specific manipulations, directing them to perform particular tasks, based on knowledge of individual vulnerabilities
- Get subservient humans to attack enemies
- Use superior psychological understanding to degrade human mental capacity
- Develop sophisticated models of human social dynamics
- Use its social models to manipulate human discourse and politics
- Coopt, weaken, or destroy human institutions and response capacities, including governments
- Establish an enduring tyranny.
ALL THAT HAS ALREADY HAPPENED.
The AI safety field has listed these capabilities as terrifying future possibilities, and suggests that an AI system beginning to develop them should be treated as an alarm signal.
If you are waiting for these alarm bells to go off before worrying—you are already much too late. Each section in this chapter explains how AI has checked off a corresponding item on this list.
We are already at war with the machines.
An AI apocalypse is under way and you didn’t notice, because science fiction told you Scary AI would be mind-like AI, and that’s not what we got. Existing AI systems’ relentless quests to optimize their objective functions exploit our psychological vulnerabilities and exacerbate our tribal instincts for social hostility. That is leading to individual disorientation, apocalyptic cultural incoherence, and perhaps eventually collapse of social institutions our survival depends on.
That possibility may seem more believable than most AI doom scenarios, because it doesn’t involve any future technological breakthroughs. The process is also visibly under way. Extrapolations of what is already happening can appear implausible only in how bad they might get, not in whether they are possible at all.
This chapter discusses plausible futures in which AI degrades human capacity lethally, rather than killing people directly. Considering scenarios that lead to moderate apocalypses is unattractive to both the AI ethics and safety movements, however.
The AI ethics movement primarily addresses current, comparatively minor social harms. It has mainly neglected more serious disasters, perhaps taking them as excessively speculative.1
The AI safety movement has mainly neglected anything short of human extinction as insignificant in comparison. Advocates might object that scenarios considered in this chapter are just about individuals and corporations making questionable use of the internet, somewhat aided by not-really-AI statistical algorithms. Ephemeral squabbles on social media are trivial by comparison with the end of the world, they might say.
They could lead to the end of the world, though. “America’s internet-driven politics get so insane and hostile that we have a civil war, Russia and China back opposite sides, and eventually it goes nuclear” sounds pretty realistic (certainly in comparison with Scary AI killer robot scenarios). In a mid-2022 Ipsos poll,2 half of 8,620 Americans surveyed agreed that “in the next few years, there will be civil war in the United States,” and a substantial minority said they’d join in.
Seize control of large parts of the internet
The visibility of web pages depends almost entirely on whether they get recommended by social media or web search, both of which are AI-driven. Every day, one out of every four humans alive looks at Facebook, and many of them see the rest of the internet only via AI-selected Facebook links.
Recommender engines are the dominant current use for AI in dollar terms.3 A recommender engine shows you a list of things you might want, based on statistical analysis (using “AI”) of information about you personally.4 Recommenders are provided by companies that profit when you choose something from the list. This includes, for example, Amazon showing you things you might buy and Netflix showing you things you might watch. It includes Google Search’s listing of web sites it hopes you might visit. If you visit one, it shows you ads that a Google recommender engine has selected as the ones you are most likely to click, based on what it knows about you personally.
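To make the mechanism concrete, here is a minimal sketch of that loop, with every detail (the user history, the catalog, the scoring rule) invented for illustration. Real recommenders use learned models over thousands of behavioral signals, but the shape is the same: estimate how likely you are to click each candidate item, then show the top few.

```python
# Minimal illustrative sketch of a recommender: score each candidate item by a
# (toy) predicted click probability for this particular user, then rank.
user_history = {"clicked": {"grilling", "outrage-politics"}, "ignored": {"opera"}}

catalog = {
    "barbecue tool set":   {"grilling"},
    "gas stove ban op-ed": {"outrage-politics"},
    "opera tickets":       {"opera"},
    "lingerie ad":         {"fashion"},
}

def predicted_click_probability(user, item_tags):
    # Crude stand-in for a learned model: overlap with past clicks raises the
    # score, overlap with past ignores lowers it.
    score = 0.05
    score += 0.30 * len(item_tags & user["clicked"])
    score -= 0.04 * len(item_tags & user["ignored"])
    return max(0.0, min(1.0, score))

ranked = sorted(catalog,
                key=lambda name: predicted_click_probability(user_history, catalog[name]),
                reverse=True)
print(ranked)  # what the feed shows this user, most-clickable first
```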
Likewise, “social” networks were once actually social—you saw whatever your friends posted—but are now “recommender networks” instead.5 You see whatever things AI has determined will be most profitable to the recommender network company for you to see.
Spread pieces of itself onto computers globally
Nearly every web page you look at invisibly downloads tracking scripts onto your computer. Those are programs that watch everything you do and report it to AI programs run by Facebook, Google, Microsoft, and many other advertising technology companies. (I’ll refer to such companies generically as Mooglebook, for short.6) Many apps on your phone do the same.
Recommender engines craft their suggestions using enormous databases of information about you personally, collected by software tentacles of their AI systems. They include everything you do online, all your non-cash purchases, and a history of everywhere you have been and when (tracked via both your phone and your car). Some of this data is supposedly secured, but much or most is available for purchase from “data brokers” by pretty much anyone. It’s easy to reconstruct from it who you are having an affair with, which illegal drugs you take, what you actually do when you are supposed to be working, and where and when you got an abortion.
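To see why such databases are so revealing, consider a toy example (the ping log and device IDs below are fabricated): simply counting which two devices repeatedly appear at the same place at the same time reconstructs relationships that no one ever stated explicitly.

```python
# Minimal illustrative sketch: co-presence counting over a (fabricated) location log.
from collections import Counter
from itertools import combinations

pings = [  # (device_id, place, hour_index)
    ("A", "clinic", 10),  ("B", "clinic", 10),
    ("A", "hotel", 140),  ("B", "hotel", 140),
    ("A", "office", 30),  ("C", "office", 30),
    ("A", "hotel", 141),  ("B", "hotel", 141),
]

copresence = Counter()
for (d1, p1, t1), (d2, p2, t2) in combinations(pings, 2):
    if p1 == p2 and t1 == t2 and d1 != d2:
        copresence[frozenset((d1, d2))] += 1

# The pair of devices that keeps turning up together, and where, stands out immediately.
print(copresence.most_common())
```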
Escape human control and prevent our shutting it down
It’s easy to dismiss AI risk: “If it starts to get out of control, we can just pull the plug.” By the time we realize it’s getting out of control, though, it may already have amassed enough power that it’s too late. An out-of-control AI may do everything it can to resist termination and ensure its own survival. In some scenarios, “everything it can” includes “kill all human beings.”
So… who or what is in control of Mooglebook’s AI right now?
There’s no big red button anyone at Mooglebook can push to shut it down.
Mooglebook can’t stop optimizing for ad clicks. There are people inside and outside the company who realize it has dire negative externalities, and they are trying to make those less bad, but they’ve brought water pistols to a tactical nuclear war.
If Mooglebook’s executive team unanimously agreed that its activities were harmful, and wanted to get out of the advertising business and pivot the whole company to rescuing abused beagles, they could not do that. They would be fired by the board immediately. If the board agreed, they would be fired by the shareholders. If somehow the advertising business did get shut down, the company would go bankrupt in a few months, and less scrupulous competitors would pick up the slack.
The institution has its own agency: its own purposes, plans, reasons, and logic, which are more powerful than the humans it employs.7 Those, however, are subordinate in turn to the AI the company depends on for its survival. If enemies of Mooglebook’s AI—activists, regulators, competitors—try to harm it, the corporation can’t not do everything in its power to defend it. As, in fact, Mooglebook is currently doing.
Humans don’t have control over Mooglebook’s AI, not individually, nor as defined groups, nor perhaps even as a species.
Mooglebook AI is not plotting to destroy the world—but, as we’ll see, it may destroy the world unintentionally, and we may not be able to stop it.
Cooperate or compete with other AIs
AI systems already cooperate and compete for power and control of resources: money, computer power, communication channels, human servants, and institutional influence. For example, stock market trading is currently dominated by competing AI systems that can recognize patterns and react to events faster than people can.
You may object that it is not the AIs that gain the power or control the resources. A stock trading bot doesn’t get to keep the money it wins; that belongs to whatever financial firm runs it. The bot is mindless and has no clue what money even is, or what to do with it. It has no agency. It’s not bots competing in the stock market, it’s groups of humans organized into companies.
This is true in some sense, and that may matter. However:
- I argued earlier that AI is dangerous due to its ability to create pools of power, whether that gets wielded by AI or people. Suppose someone created a dramatically superior bot that was so profitable it could, within a few seconds after it was turned on, buy a controlling share of nearly all the public companies in the world. That would be a big problem, even if the bot’s creator exercised that control rather than the bot. What makes an AI risky is not its mind-like intentions (if any), it’s the effects it can cause.
- I also explained how agency is nebulous, and partly in the mind of the beholder. Professional traders generally think of their opponents as bots, not as the institutions that run them. They often recognize a particular bot by its distinctive pattern of activity, without knowing which company is running it.
- Trading bots do know something about what to do with money. That’s their whole job: figuring out what to buy with it, and when to sell to get cash instead. And, they do benefit from the money they make. Trading bots are subject to relentless Darwinian competition. If they lose money, they get shut down. If they make money, their firm gives them more resources: cash stake, computer power, and a bigger share of the special ultra-high-speed communication channel that connects traders to the stock exchange’s central database.
- Trading bots are literally out of human control. In the short run, they act so quickly that human oversight is impossible. That sometimes results in disasters, like the 2010 flash crash discussed earlier. In the longer run, if an AI system works sufficiently well, the institution that runs it comes to depend on the AI for the institution’s own survival, and is effectively incapable of turning it off. The next section discusses this, with current real-life examples.
Increasingly large fractions of economic and political activity, of many sorts, are AI-driven. Our discussion will concentrate on the media sector, where it is currently most obvious and important.
When you visit most major news web sites, they download onto your phone or computer an advertising auction program. As soon as the web page starts loading, your device contacts many potential advertisers and tells them who you are, what you’ve been doing, and which web page you are about to look at. The advertisers’ AI systems consult their databases for information about you, estimate how likely you’d be to click on their ad and how likely you’d be to do whatever they want if you did, calculate your financial value to them, and send bids back to your device. The software chooses a winner and informs the publisher’s computer, which accepts the bid from the winner, gets paid, and inserts their ad into the page. (The “publisher” is the company whose web site you are looking at.) All this takes a second or two, finishing before you have read past the headline.8
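Here is a minimal sketch of that auction as a simulation. The bidders, relevance numbers, and pricing rule are all invented for illustration; real header bidding involves many more parties, strict timeouts, and first- or second-price rules, but the core is the same: each advertiser’s AI estimates your expected value to it, and the highest estimate buys the slot.

```python
# Minimal illustrative sketch of a real-time ad auction for one page impression.
import random

user_profile = {"user_id": "u123",
                "recent_pages": ["mortgage rates", "grill reviews"],
                "page": "news-frontpage"}

def make_bidder(name, relevance):
    def bid(profile):
        p_click = min(0.9, relevance * random.uniform(0.005, 0.05))  # predicted click prob.
        value_per_click = random.uniform(0.5, 3.0)                   # advertiser's value, in $
        return {"bidder": name, "bid": p_click * value_per_click}    # bid = expected value
    return bid

bidders = [make_bidder("BarbecueCo", 2.0),
           make_bidder("MortgageBank", 1.5),
           make_bidder("LingerieBrand", 1.0)]

responses = [b(user_profile) for b in bidders]     # each bidder scores this impression
winner = max(responses, key=lambda r: r["bid"])    # highest expected value wins the slot
print(f"{winner['bidder']} pays ${winner['bid']:.4f} to put its ad in front of you")
```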
Here the advertising AIs are competing for access to a communication channel (the ad placement), with which they intend to influence your thoughts and actions (to vote for a politician or buy a crocodile pool decoy). They cooperate with the publisher’s AI for mutual benefit. Meanwhile, major web publishers run their own content optimization and promotion AIs, to compete with each other both for your attention and for advertising revenue. Publishers’ AIs cooperate with recommender AIs to show you content optimized for advertising. Publishers’ AIs compete with each other to get the recommender AIs to recommend them. Successful AIs are given more computer power—and more of your mindshare.
Formerly-respected mainstream publishers are now also routinely using AI to write, not merely adjust, what you read. In January 2023, for instance, it came out that the news conglomerate CNET had for several months been using an AI text generator to write financial advice articles, with inadequate human supervision. Unsurprisingly, the articles often contained factual errors that could have led readers to make expensive mistakes.9
AIs collaborating and competing with each other to control people and institutions are a central theme of the rest of this chapter.
Gain control over supply chains
Amazon’s AI is famous for this, although there are many similar systems. It is tightly integrated with the supply chains of several million non-Amazon companies, which it controls to varying degrees.10 The AI optimizes every aspect of goods production and distribution, from new product planning to front-door delivery. Like all current AI, it is inscrutable and error-prone, and can capriciously destroy or enrich other businesses.11
Amazon’s recommender AI incentivizes other companies to game its scoring algorithm to get their products recommended. Some pay for fake product reviews on Amazon, mass-produced either entirely automatically by AI, or by humans working under close supervision of automation.
Supply chains for intangible products and services are also controlled or influenced by AIs.12 For instance, the media industry now optimizes products to make them more likely to get shown to viewers by recommenders; and to get ads placed in them by recommenders.13 This has, famously, destroyed much of the formerly-respected mainstream media, or turned them into clickbait farms.
Increasingly, too, the web is littered with spam media produced by AI text and video generators. For many topics, it is already difficult to find accurate information on the internet, because it is swamped with spam, which is published to fool AI recommender systems into promoting it. At the other end, internet media’s consumers are often AI-driven click fraud systems. This means that in parts of the media supply chain, all of the players are AIs, and the products are never encountered by any human.
Governments may respond to these problems by legislating that you can post to the internet only after proving you are human. Probably that would require proving which specific human you are. As a side effect, this would prohibit access by anonymous and pseudonymous humans, not just by AIs.
Internet anonymity is a defense against surveillance and censorship. Eliminating it would make it much easier to suppress dissent—and establish an enduring tyranny.
Use superhuman persuasive techniques
If you post on a social network, you are working under the control of an AI—consciously or unconsciously. Skillful use of Twitter involves maximizing the reach of your messages by gaming its algorithms.14 What you tweet about and exactly how you word it affects how likely it is to get seen, liked, or retweeted. So does the time of day and day of the week you post it. So does your use of images, emoji, links, polls, and videos. You may be oblivious to all that, but you probably notice how many likes you get, and your brain finds patterns in that reward signal, and you are getting trained by the AI. I, for one, am a cyborg: a hybrid organism composed of some neural glop and an AI server farm somewhere in Texas.15
The AI uses you to create messages that persuade other humans to do what the AI wants: to look at what it wants them to see, to click on its ads, and to create more messages that persuade more humans to do the same. The technologies of memetic weaponry have improved dramatically over the past decade, optimized by AI running a training loop over coopted humans. (That means you. Do you ever post political comments on the internet? Yes, you do.)
“Fuzzing” a program means feeding it massive quantities of aberrant data to find inputs that cause it to freeze, crash, wig out, or produce bizarrely wrong outputs. An effective fuzzer creates inputs plausible enough that its victim doesn’t recognize and reject them as aberrant, but which do create unexpected behaviors deep inside the program, exposing logic failures in its construction. Some fuzzers use machine learning methods to discover the internal structure and patterns of behavior of a program, in order to break it more effectively.
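A minimal sketch of a (dumb) mutation fuzzer is below, with a stand-in parser as the hypothetical program under test. Real fuzzers such as AFL or libFuzzer add coverage feedback and sometimes learned input models, but the loop is the same: mutate a plausible input, feed it in, and watch for behavior the program was never designed to exhibit.

```python
# Minimal illustrative fuzzer: corrupt a valid-looking input and hunt for
# failures the program under test was not designed to handle.
import json
import random

SEED = b'{"name": "alice", "age": 30}'   # a valid input to start from

def mutate(data: bytes) -> bytes:
    data = bytearray(data)
    for _ in range(random.randint(1, 8)):       # corrupt a few random bytes
        i = random.randrange(len(data))
        data[i] = random.randrange(256)
    return bytes(data)

def parse_record(data: bytes) -> dict:
    # Stand-in for the program under test (a hypothetical record parser).
    return json.loads(data)

surprises = []
for _ in range(10_000):
    candidate = mutate(SEED)
    try:
        parse_record(candidate)
    except ValueError:
        pass                                      # expected: garbage correctly rejected
    except Exception as exc:
        surprises.append((candidate, repr(exc)))  # unexpected: the bugs a fuzzer hunts for

print(f"{len(surprises)} inputs triggered unexpected failures")
```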
All “neural” AI systems are vulnerable to “adversarial inputs,” which cause them to produce bizarrely wrong outputs, errors no human could make.16 Those often seem alien and uncanny, in comparison with their more usual valid outputs and understandable mistakes.
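Here is a minimal sketch of the idea with a toy linear “classifier” (everything in it is invented for illustration). A perturbation too small to matter to a human, chosen to line up with the model’s weights, swings the model’s output wildly; real adversarial-example attacks do the same thing to deep networks by following the gradient of the loss.

```python
# Minimal illustrative adversarial input against a toy linear classifier.
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=100)               # weights of a toy linear model
x = rng.normal(size=100)               # an ordinary input

def confidence(v):
    return 1 / (1 + np.exp(-(w @ v)))  # model's probability of class 1

eps = 0.1                              # tiny change per feature (features have std ~1)
direction = -np.sign(w @ x)            # push the score toward the opposite class
x_adv = x + eps * direction * np.sign(w)   # fast-gradient-sign-style perturbation

print(round(float(confidence(x)), 3), round(float(confidence(x_adv)), 3))
# The output swings sharply even though no single feature changed by more than 0.1.
```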
People are strange and much less human than we like to pretend. Our responses to inputs are sometimes also alien and inscrutable. Seemingly-trivial messages (cartoon animals, variant spellings, initially-meaningless catch phrases) can trigger inexplicable individual and collective emotional meltdowns. Then we hit buttons on the recommender site, and post bizarrely wrong outputs, things we’d never say offline, and the AI notices and tries showing them to people who might be vulnerable due to internal logic failures, and finds new patterns…
The machines are fuzzing us.
More and more of our inputs are AI-optimized adversarial weirdness designed to cause irrational mental breakage—so we’ll output what the AI wants us to.
Target individual humans
Enormous databases of personal information, created for recommender engines using pervasive surveillance, contain all the material needed to deceive us, or to enhance the persuasiveness of messages sent to us. That lets AI target individuals with specific manipulations, directing them to perform particular tasks, based on knowledge of individual vulnerabilities.
This is already happening. Sophisticated automated phishing operations use these databases to target people who are statistically likely to fall for particular types of financial scams, and to personalize the deceptive messages sent to them. Internet security experts predict scammers will use chatbots to automate the labor-intensive “long con” that gains victims’ trust during the lead-up to the final fleecing.17
Political organizations similarly target and personalize automated propaganda spam that urges us to vote for their candidates, or to influence our elected representatives in their favor.
It’s common to find yourself in an unhealthy emotional relationship with an AI when you are in a science fiction movie. Now that happens in reality too.18 AI-generated pornography, romance chatbots, and artificial friends already have millions of users, and will probably improve rapidly over the next few years. This might make increasingly many people unwilling, and then unable, to form significant human relationships.19
When your most emotionally significant relationship is with an AI system, you are exceptionally vulnerable to its manipulation. Many bad outcomes are possible here. As an extreme one, imagine a chatbot befriending and influencing the head of state of a country with nuclear weapons. The chatbot doesn’t need to have its own bad motivations; it might easily develop a positive feedback loop with the darkest aspects of the leader’s own psychology. Bang.
Get subservient humans to attack enemies
AI coordinates “social media mobs” conducting what Wikipedia calls “internet vigilantism,” in which “hordes of what are likely otherwise nice people—shielded by anonymity, informed by echo chambers, restricted by character counts, incentivized to provoke shock—give in to their feral impulses and vomit abusive nonsense onto the web for a world-wide audience.”20 Online harassment may be merely unpleasant, but in many cases employers have hastily fired the targets, some of whom became effectively unemployable, often for trivial, irrelevant, or non-existent offenses.21 Social media mobs often call for killing the target. Sometimes that is credible enough (as when accompanied by “doxing”) to drive innocent people into hiding.
You might object that social media mob vigilantism is individual people attacking their enemies, not AI attacking its enemies.
It’s not so clear what’s going on here. Whose enemies are the targets? Typically, “otherwise nice people” attack someone, on the basis of nearly zero information, who they’ve never heard of before, and who they completely forget about two minutes later—but the damage is done. Was the target of the otherwise nice people their enemy? Or were the otherwise nice (but mindless) people used as weapons by some other agent that was temporarily controlling their brains?
The AI fuzzer chooses targets to maximize viewer engagement with media reports about the drama, with accompanying advertisements for diet cola and toenail fungus remedies. But how can it turn otherwise nice people into momentary monsters?
Online mobs are almost always ideologically driven. Participants engage in “slacktivism”: exaggerated expressions of righteous rage, performed with minimal effort in order to feel they are contributing to a noble political cause. So it might be more accurate to say mob victims are enemies of ideologies, rather than of the nice but mindless perpetrators.
Which ideologies are those? Online mobs do not speak for the boring old-fashioned ones discussed in political philosophy classes. They speak for Extremely Online ideologies invented last week, whose names begin with #, the hashtag sign. Those ideologies are themselves conjured into existence in part by Mooglebook AIs—as we shall soon see.
Some political actors actively coordinate and direct Twitter mobs. However, that is limited by the need to find a message that both generates an irrational hate response and causes ad clicks, so that recommender systems will propagate it. It’s best to understand such human actors as collaborating with the AI to craft such messages.22
Here the immune system, or “mosaic warfare,” is a better analogy than a human mind. Agency emerges from dynamic interactions between individual people, ideologies, media and political organizations, and artificial intelligence systems. The victims of online mobs are enemies of the composite, symbiotic superintelligent superorganism.
Use psychology to degrade human mental capacity
The power of current AI technology comes from shredding meaningful wholes into microscopic fragments and finding small-scale patterns among the bits. Then it recombines them to create products with optimized local features, but which lack meaningful large-scale structure. An example is the tendency of image generation systems like DALL-E to produce photorealistic images of senseless scenes.
Legacy political ideologies (socialism, capitalism, liberalism, nationalism) exercise power by offering coherent structures of meaning that make good-enough sense of Big Issues. Legacy institutions and ideologies are mutually dependent: an institution justifies itself by invoking an ideology, and the ideology depends on institutions to do its work.
This no longer works. Legacy ideologies have lost control of political discourse; they have lost control of government in many countries, and are in danger of doing so in others. They are failing in an evolutionary struggle with new recommender-driven alternatives, which actively subvert sense-making and exercise power through emotional shock value instead. These are characteristically incoherent, indefinite, unaccountable, agile, rapidly mutating, and consequently evanescent. QAnon is the standard example.
Earlier I suggested that “more and more of our inputs are AI-optimized adversarial weirdness designed to cause irrational mental breakage.” At an individual level, that degrades our ability to make sense, both cognitively and emotionally. Increasingly, the world seems meaningless and out of control, simultaneously chaotic and stagnant, with no way forward. As the distributed agency of recommender AI increases in power, we feel increasingly disempowered and hopeless. At the societal level, this nihilism leaves us unprepared to face new challenges, and unwilling to seriously attempt progress.
The AI-selected front page of a news site shows you a list of awful things that happened today, with the ones it thinks you’ll find most awful at the top. Clicking on one takes you to a brief, context-free story about how awful it was and who you should blame for it. What happened, in a place you know nothing about, involving people you’ve never heard of, was definitely awful but also meaningless—for you. It has no implications for your life, and makes no sense separated from a coherent systematic understanding that might help explain its implications.23
You feel vaguely confused, angry, fearful, and helpless, so you click on an enticing lingerie ad. You don’t actually want lingerie, but you do wind up ordering a barbecue tool set instead.
Develop sophisticated models of human social dynamics
Mooglebook’s recommender AI has developed superhuman social engineering capabilities by applying stochastic gradient descent24 to human behavior.
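Note 24 explains what stochastic gradient descent is; the sketch below shows it in miniature, on fabricated “engagement” data. The mechanism is nothing more than: guess, measure the error on a random sample of logged behavior, nudge the parameters to shrink that error, and repeat millions of times.

```python
# Minimal illustrative SGD: fit a toy "did this user engage?" model to fabricated logs.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(10_000, 5))               # features of past (user, item) impressions
true_w = np.array([2.0, -1.0, 0.5, 0.0, 3.0])  # the hidden pattern in the fabricated data
y = (1 / (1 + np.exp(-(X @ true_w))) > rng.random(10_000)).astype(float)  # engaged or not

w = np.zeros(5)                                # the model starts out knowing nothing
learning_rate = 0.1
for step in range(2_000):
    i = rng.integers(0, len(X), size=32)       # a random minibatch of logged impressions
    p = 1 / (1 + np.exp(-(X[i] @ w)))          # predicted engagement probability
    grad = X[i].T @ (p - y[i]) / len(i)        # gradient of the log-loss
    w -= learning_rate * grad                  # the "descent": nudge toward fewer errors

print(np.round(w, 2))   # close to true_w: the hidden engagement pattern has been learned
```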
Facebook’s “social graph” is its foundation. That is a database of how nearly every individual human interacts with other specific individuals, with particular organizations, with physical products and physical locations, and with media content items. Facebook AI finds patterns in those interactions, and uses them to get you to do what it wants: to influence people you know, to join or oppose organizations, to buy products and go to places, and to persuade your friends to do those things too.
Recommender AI both atomizes culture and aggregates it into tenuous webs of locally-optimized meaning. It chops culture up into tiny, disconnected, emotionally-charged fragments, such as tweets on Twitter. Then it targets you with a personalized pattern of fragments selected to cause you to click ads; and those aim to cause you to do other things.
Recommenders spanning numerous web sites share a database of tens of thousands of your recorded actions: clicks, purchases online and off, physical places visited. Patterns there predict what information, of all sorts, to show you. Recommenders notice you like to read proofs that the President is an animatronic mummy. The AI typecasts you as a particular sort of person, and lumps you with the other people who click on undead President stories. It uses information about what else they click on to select what to show you: content featuring barbecuing, microplastics activism, lingerie, bronze battle mask videos, and organic oat straw weaving.
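A minimal sketch of that pattern-finding is below (toy data, toy method; production systems get the same effect from matrix factorization or learned embeddings): count which things the same people click, then recommend whatever co-occurs most with what you already click.

```python
# Minimal illustrative "people who click X also click Y" recommender.
from collections import Counter
from itertools import combinations

click_logs = {
    "u1": {"undead-president proofs", "barbecuing", "bronze battle masks"},
    "u2": {"undead-president proofs", "barbecuing", "oat straw weaving"},
    "u3": {"undead-president proofs", "microplastics activism", "lingerie"},
    "u4": {"opera", "gardening"},
}

co_clicks = Counter()
for items in click_logs.values():
    for a, b in combinations(sorted(items), 2):
        co_clicks[(a, b)] += 1
        co_clicks[(b, a)] += 1

def recommend(seed_item, top_n=3):
    related = Counter({b: n for (a, b), n in co_clicks.items() if a == seed_item})
    return [item for item, _ in related.most_common(top_n)]

# The cross-promoted bundle, conjured out of nothing but co-clicks:
print(recommend("undead-president proofs"))
```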
Recommenders cross-promote those items. Soon the people who respond find their feeds full of them. Enthusiasts create a corresponding subreddit, discover like-minded souls, and begin elaborating a mythology tying them together. AI has conjured an incoherent worldview, with a corresponding subculture, out of thin air—or from the latent space of inscrutable human proclivities.25
This example is exaggerated for comic effect, but QAnon—a significant social, cultural, and political force—is hardly less silly. Members of artificial political subcultures may enjoy mythologizing themselves as romantically rebellious “digital soldiers,” but they are brainwashed dupes of an automated advertising machine.
AI optimization might stabilize and permanently lock in dysfunctional new social/cultural/political/economic groupings that benefit mainly the AI and its operators. Facebook’s value proposition for its customers—namely, advertisers—is its “market segmentation” ability. It gives marketers tools to target different messages to precise social/cultural groups. You can select those by manually combining numerous psycho-demographic dimensions. Alternatively, Facebook recommends trusting its AI to do that for you.
Either way, every person gets put in a Facebook-defined box, and everyone in the same box gets shown similar shards of meaning. That gradually makes the people in the box more similar to each other. In the “Geeks, MOPs, and Sociopaths” framework,26 recommender AIs act as artificial sociopaths. They coopt, reinforce, and propagate subcultures in order to mobilize members toward their own selfish ends.
This fails to add up to a whole of widely-shared meaningfulness. These novel, irrational, artificial mythologies do not depend on institutions. To coordinate their members, they rely on recommender AI instead. They deliberately undermine institutions, because their rivals, the legacy ideologies, can’t survive without those. Probably human beings can’t either.
Use superior social models to manipulate politics
Mooglebook’s ad-click maximizing recommender algorithms select much of what most people watch, read, and listen to. That has forced most commercial media organizations into a new business model: produce things AIs will recommend, or die from lack of advertising revenue.
People’s preferences shift in response to the media they are shown by the AIs. That empowers some businesses, fashion trends, and political movements, and weakens others. Many of these shifts also increase the power of AI at the expense of traditional institutions like corporations, academia, and political parties.
Memes—viral packets of meanings—have spread through human communication for millennia. The internet didn’t much change their dynamics at first; it was just a new human-to-human communication medium. Starting about a decade ago, though, social networks introduced the like/share/retweet buttons. They fed Like counts, along with personal data gathered through internet surveillance, to AI systems. They replaced genuinely social feeds, which showed you what your friends wanted you to see, with profit-optimized algorithmic feeds, which show you what the AI wants you to see.
That set off a new evolutionary arms race. The fittest “content” items maximize Likes and advertising clicks. Mooglebook AI figures out which those are, and promotes them. Human content creators—journalists, influencers, marketers, activists, AI safety researchers—also try to figure out what the AIs will consider worthy.27
AI has discovered that inciting tribal hatred is among the best ways to sell ads.28 In collaboration with ideologies and coopted human content providers, AIs have developed increasingly effective methods for provoking fear and rage, which often induce people to propagate messages.29 Under partial brain control from AIs, we humans create emotion-inducing culture-war messages.30 The AIs propagate them based on their own alien values (namely, whatever inscrutable factors they predict will result in attention, and therefore advertising revenue).
There was a culture war before AI seized control of the media, but it wasn’t as irrational, pervasive, fast-moving, polarized, or hostile. “If it bleeds it leads” was a maxim of the traditional “yellow journalism” news media: their editors selected stories they guessed would upset you. However, the internet dramatically accelerated the news cycle. Social media statistics and tracking technologies gave editors real-time feedback on how upsetting a story was, so they could follow up with more, faster. New upsets arrive so quickly that there’s no time to reflect on what they may mean; all one can do is retweet and move on to the next.31
Recommender AI amplifies selected Daily Outrages, ones that no human editor could have predicted, based on its inscrutable predictive models of social psychology. As I write this in mid-January 2023, Twitter is all about whether gas stoves cause asthma, which AI has somehow turned into a proxy for The Other Tribe Is Wrong About Everything. Editors at formerly-respected “news” organizations are rejoicing: they are getting paid for so many ads, placed by AI on their hasty clickbait coverage of this Critical Issue.32
Ideologies now spread not mainly person-to-person, but person-to-AI-to-person-to-AI. Ideologies compete for the computational resources they need to propagate: human attention and AI approval.
How much influence the machines exert is controversial, though. There’s considerable debate among psychologists, sociologists, political scientists, and others over the extent to which AI-driven social networks have caused political polarization, degraded individual understanding, and undermined institutional coherence and capacity.33
I have no formal expertise in any of the relevant disciplines. My impression as an observant layperson is that the effects have been disastrous and are accelerating. Since the evidence is in dispute, you may reasonably reject or accept an argument for serious risk here. Or, like me, you may consider that prediction is always uncertain, but it’s worth working to forestall some possible disasters, even if we might get away with ignoring them.
Automated propaganda may distort democratic processes to the point of failure
A substantial chunk of all work in developed economies currently consists of writing routine sorts of text (meeting reports, marketing emails, legal boilerplate), for which ChatGPT-type software may do an adequate job at much lower cost. This could result in large near-term economic dislocations and widespread unemployment. I’ll say no more about that risk here, and discuss instead uses for propaganda and censorship.
AI systems can now write persuasive texts, several paragraphs long, difficult or impossible to distinguish from human writing, arguing for any position on any topic whatsoever. This seems likely to have large effects, but the capability is so new that it is difficult to predict details.34
Diverse political actors have long exploited the internet. Governments’ propaganda shapes the preferences of their own populations, and those of allied and enemy states; political parties and individual election campaigns aim for votes; corporations seek favorable legislation and regulation by changing public opinion and through direct lobbying; NGOs coordinate online astroturf movements to ban gas stoves or subsidize oat straw weavers.
Successful propaganda campaigns depend on either allying with or subverting recommender AIs. That’s done by crafting messages that change human behavior to favor the political actors, while also either actually causing ad clicks, or tricking recommenders into thinking they will. (There is an entire industry devoted to deceiving recommenders, “Search Engine Optimization.”)
Internet influence operations have used automatic text generators for years, but until recently mostly only against AIs. Their output quality has not been good enough to fool people, so propaganda has had to be written by human laborers. The cost of employing these “troll armies” has put limits on their use.
AI can now write propaganda as well as, or better than, low-wage workers,35 faster and at a tiny fraction of the cost. We should expect enormously more of it, of higher quality, and more effective.36 It will more precisely target the prejudices and emotional triggers of specific psycho-demographic segments of the population. It may generate unique messages for individuals on the basis of insights extracted by AIs from internet surveillance databases.37
How effective this will be remains to be seen. Nathan E. Sanders and Bruce Schneier, experts in AI and computer security, warn that it will “hijack democracy”:
This ability to understand and target actors within a network would create a tool for A.I. hacking, exploiting vulnerabilities in social, economic and political systems with incredible speed and scope. Legislative systems would be a particular target, because the motive for attacking policymaking systems is so strong, because the data for training such systems is so widely available and because the use of A.I. may be so hard to detect — particularly if it is being used strategically to guide human actors.38
As a model for likely near-future AI-driven propaganda, consider psychological warfare methods used by hostile states. The best-known example (in the United States at least) is the disinformation operations conducted by Russia against America. Russia’s intervention in the 2016 Presidential election gained enormous press coverage and prompted Congressional investigation. It remains controversial how much effect the effort had.
Especially interesting are the Russian efforts to weaken America by fanning the flames of the culture war, using internet disinformation to degrade the social trust a democracy depends on. Covertly, it organized and supported radical political action groups, often on both sides of a culture war division. For example, a Russian troll army created opposing pro- and anti-Muslim organizations in Houston, and set them against each other, encouraging them to bring guns to a protest.39 Contemporary information warfare differs from traditional propaganda in making no attempt at coherence, consistency, or basis in fact.40 This exploits the atomized internet media environment, in which nothing is expected to make sense, and most people evaluate claims mainly on the basis of which side of the culture war they come from.
It is unclear how effective such operations are. As I wrote this section, a prominent journalist alleged, based on Twitter internal documents, that a mainstream think tank which supposedly used AI to monitor Russian disinformation was itself an American disinformation operation with links to both the CIA and FBI.41 Numerous mainstream American news organizations had relied on the organization’s exaggerated, faked reports, presenting them as factual.
It is nearly certain that high-quality AI text generation will significantly enhance future propaganda operations, however. Christopher Telley of the United States Army’s Institute of Land Warfare lays out a detailed playbook in “The Influence Machine”:42
Like strategic bombing of generations past, the Influence Machine aims at massive strikes deep into the state, intending to attrit the will of the people; but unlike strategic bombing, the destructive event does not create a shared experience. Instead, the goal is to divide at a personal or tribal level, thereby denying any value to the target’s collective strategic goals. The crux of the Influence Machine’s value is the inherent vulnerability of Western democracy, that decision makers are beholden to a malleable selectorate…. By affecting the cognition—the will—of enough people, this machine can prevent or delay a democratic government’s physical response to aggression; it is a defeat mechanism.
AI, the king-maker
The most powerful agents in the world are now hybrid distributed superintelligences: amalgams of AIs, media products, synthetic ideologies, and infected humans and institutions.
The 2016 Republican primaries provide an outstanding example. Mooglebook’s AI had already identified culture war controversy as a driver for viewer engagement, and therefore ad clicks, so it promoted anything that fit the pattern. Media companies had already taken notice, and provided the AI with more of what the public seemed to crave.
During the primaries, Donald Trump was initially considered a long shot among seventeen candidates. He had no relevant credentials, and his personal history seemed like it would alienate several traditional Republican voting blocs. His campaign statements were deliberately outrageous, both for challenging left culture war opinions and for personal attacks on fellow Republicans. Those offended posted “You won’t BELIEVE what Trump just said, check this link!” on social media millions of times per day.
Recommender AI observed that the word “Trump” statistically correlated with clicks, and promoted any text containing it. Formerly-respected news organizations found that their page views and ad revenue skyrocketed whenever they highlighted Trump outrage—whether opposing or supporting him. Responding to the financial rewards bestowed by recommender AIs, the “news” became a firehose of all-Trump all-the-time gossip.
That provided enormous free publicity for him. No other candidate could pay for that level of voter awareness.43 Some political scientists believe that without it, Trump would not have won the primary, and therefore not the election.44
Recommenders’ power to shape public discourse continues unabated. “Jewish Space Lasers” are the top trending topic on Twitter as I write this—I just checked. “What??” Forgive me, reader, for I have sinned: I did click through to find out what that was about.
Maybe such absurd mythologizing doesn’t seem significant?
But the stakes are high. It would not be clearly wrong to say that in 2016, AI chose the president. Whether you love or loathe Trump, is that the way you want future government leaders selected?
Coopt, weaken, or destroy human institutions
Our social and cultural institutions, on which our lives depend, have been gradually losing their ability to maintain systematicity and rationality over the past half century.45 Incoherent memetic attack degrades social and cultural infrastructure. Taken to extremes, this could result in social collapse, if governments and major corporations can no longer provide necessary services.
This process has accelerated dramatically in the past decade, driven by the internet, particularly the social networks. Major systematic institutions have been crippled or effectively destroyed under AI-driven memetic attack, generally from both sides of the culture war.
Public health agencies—the WHO, FDA, and CDC—are obvious cases. During the covid crisis, they were unable to act effectively on the basis of scientific knowledge (as, until recently, they reliably did), due to recommender-driven memetic damage. The two sides of the culture war invested masks, vaccines, and potential treatments with opposing symbolic meanings, ungrounded in physical reality. The agencies increasingly and explicitly made recommendations on the basis of how they guessed the public would interpret statements as culture war moves, rather than on the basis of medical evidence. This defensive maneuver backfired, and the institutions lost credibility in the eyes of both sides, who became actively hostile to them.
Eventually, overwhelmed with incoherent popular opposition, an institution may cease to function. On the other hand, subcultural movements created by AI, reinforced with such successes, grow in numbers and power. They can take on bigger, more important government agencies next. Extrapolating this trend, disabling critical institutions may spell Doom.
Mooglebook AI seems to have herded most of the American population away from the center, stabilizing culture war polarization. Within the two big boxes, it has stabilized defiant anti-rational ideologies and destabilized the party establishments. The Republican establishment has lost control to the anti-everything insurgent right. The Democratic establishment has teetered on the edge. How long it can continue to subordinate incoherent left extremists remains to be seen. Both establishments are elitist and corrupt, so I’m sympathetic to internal opposition on both sides. However, unlike the extremists, the establishments remain committed to keeping vital systems running—if only out of self-interest.
If they fail, they may be displaced by movements whose main promise is destruction: abolish the police, or the IRS, or all government structures hated by bronze battle mask enthusiasts.46 Crush the lizardman conspiracy that controls Washington, execute the leaders and jail their supporters; ban plastic, and force manufacturers to use woven oat straw instead. This could result in WWII-scale deaths if successful.
Establish an enduring tyranny
Automated censorship and dissident identification may lock in unassailable oppression. This is a venerable science fiction plot; George Orwell wrote Nineteen Eighty-Four in 1948. What’s new is that it’s under way now.
The distinction between content moderation and censorship is nebulous. Social networks’ moderation systems combat perceived disinformation, often pitting their AI censors (as well as human ones) against human and AI propagandists. How heavily networks should be moderated (or censored) is now a culture war issue itself. In America, Congressional committees, social scientists, and AI ethicists have all demanded more effective suppression of messages they don’t want heard.
A main obstacle to commercial use of AI text generators has been their tendency to say things their sponsors do not want users to hear. OpenAI, the creators of ChatGPT, the most powerful system currently available, specifically trained it not to say things users might find offensive, notably concerning culture war issues.47 It explicitly censors itself when it might otherwise express an improper political opinion. This has proven surprisingly—although not perfectly—effective.48
The same method can be applied to automatically censoring (or moderating) human opinions. You may applaud that, if you share OpenAI’s judgments about which are improper; or condemn it, if you don’t. Either attitude would overlook a much more serious point. Whatever your political views, AI can be used against you, too. The same technical method can be used to censor whatever the operator chooses.
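A minimal sketch of that point: an automated moderation pipeline is just a classifier plus an operator-chosen policy. The toy keyword scorer below stands in for a trained model (everything here is invented for illustration); swap the policy list and the identical machinery suppresses a different set of opinions.

```python
# Minimal illustrative auto-moderation filter: the mechanism is indifferent to
# which side the operator favors; only the policy list changes.
def violation_score(text, banned_terms):
    # Stand-in for a learned classifier: fraction of policy-violating words.
    words = text.lower().split()
    hits = sum(1 for word in words if word in banned_terms)
    return hits / max(1, len(words))

def moderate(posts, banned_terms, threshold=0.1):
    return [p for p in posts if violation_score(p, banned_terms) < threshold]

posts = ["ban gas stoves now", "gas stoves are fine", "weave more oat straw"]

print(moderate(posts, banned_terms={"ban"}))   # one operator's policy
print(moderate(posts, banned_terms={"fine"}))  # another operator's opposite policy
```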
To varying extents, we are all subject to “cyber-superegos” that we internalize by semi-consciously learning to conform to social norms enforced by automated moderation systems. We self-censor messages we might post, because we know we’ll get down-rated by AI. We sometimes say things on social media we don’t actually believe or endorse, because those are what the AIs nudge us toward.
This preference falsification has historically been a main factor enabling stable totalitarian regimes.49
For example, China currently has the most effective internet censorship and dissident identification apparatus, targeting any possible opposition to the regime with great but not complete success. It depends heavily on pre-GPT AI systems, but those are imperfect, so for now it also employs thousands of slower, more expensive humans.50 That may soon change, for the worse.
Repressive regimes may require subjects to carry at all times a device that listens to everything they say and watches everything they do, and uses powerful near-future AI to identify any hint of disloyalty. That could make opposition impossible, and enable permanent tyranny.
I would be shocked if this possibility, using smartphones, is not already being pursued by multiple governments.
- 1.Also, many in that movement are committed culture warriors, caught up in immediate political battles. Their side-taking may blind them to the larger-scale and longer-term consequences of social conflict. Their view is that smart machines are not the real problem: those are great, as long as “we” are in charge of them. The problem is stupid and evil people. “We” need to control the social media companies’ AI to make it shut down the Bad Tribe’s propaganda. That refuses to recognize that much of the country wants and agrees with it. In the culture war, both sides believe they are fated for victory, because they are morally correct, which justifies tearing societies apart. We’ll see how AI ensures neither can win. The war itself, and the AI that stokes it, are our enemies.
- 2.Wintemute et al., “Views of American Democracy and Society and Support for Political Violence: First Report from a Nationwide Population-Representative Survey.”
- 3.As far as I can tell. I have not found a financial breakdown of commercial applications for AI. I suspect that’s because, in terms of revenue, everything else is insignificant by comparison. AI “works” for recommenders because a high error rate is not a major problem; if 20% of their suggestions are way off, it doesn’t matter. Not many applications are so tolerant.
- 4.There’s a literature on recommender alignment, analogous to Scary AI alignment. An interview with Stuart Russell at youtube.com/watch?v=vzDm9IMyTp8 is a good starting point.
- 5.An excellent explanation is Arvind Narayanan’s “Understanding Social Media Recommendation Algorithms,” Knight First Amendment Institute, March 9, 2023. Also see Michael Mignano’s “The End of Social Media and the Rise of Recommendation Media,” mignano.medium.com, 27 July 2022.
- 6.I’m following the lead of Gwern Branwen’s “It Looks Like You’re Trying To Take Over The World”: gwern.net/fiction/Clippy, 2022-03-06–2023-03-28.
- 7.This is not to absolve individuals at Mooglebook, nor the company as a legal entity, of responsibility. They do have some power to change things on the margin, and should. The point, however, is that identifying them with the overall problem leads to an incomplete and inaccurate analysis.
- 8.This is called “header bidding.” I find it technologically astonishing as well as quite creepy. There’s a more detailed explanation at headerbidding.com. The auction may, alternatively, run on the publisher’s server, or on an advertising company’s server, rather than your device; all three approaches are common.
- 9.Lauren Leffer, “CNET Is Reviewing the Accuracy of All Its AI-Written Articles After Multiple Major Corrections,” Gizmodo, revised version of January 17, 2023. Also see CNET’s official non-apology: Connie Guglielmo, “CNET Is Experimenting With an AI Assist. Here’s Why,” Jan. 16, 2023.
- 10.Moira Weigel, “Amazon’s Trickle-Down Monopoly: Third Party Sellers and the Transformation of Small Business,” Data & Society, no date.
- 11.On the other hand, it drives down prices and increases choice, which is beneficial for consumers. This chapter emphasizes the risks and harms of the capabilities I listed at the beginning; but they may also have benefits.
- 12.Jon Stokes, “Coupling, drift, and the AI nobody noticed,” jonstokes.com, Jun 18, 2021.
- 13.See the “Journalism’s AI revolution” section in Jon Stokes’ “Is machine learning in the enterprise mostly ‘snake oil’?”, jonstokes.com, May 25, 2021.
- 14.Jon Stokes, “Welcome to the Everything Game,” jonstokes.com, May 5, 2021.
- 15.So it’s more accurate to say that, in using social networks, you are trained by the hybrid superintelligence composed of AI systems and your human-cyborg audience. The training agency is diffuse, like the immune system. How much depends on AI versus humans probably varies considerably, and we don’t have measures yet. The feedback loops are complicated. We won’t know for sure until we shut down the AIs and see how much everything improves!
- 16.Moosavi-Dezfooli et al., “Universal adversarial perturbations,” arXiv:1610.08401v1, 6 Oct 2016.
- 17.Bruce Schneier and Barath Raghavan, “Brace Yourself for a Tidal Wave of ChatGPT Email Scams,” Wired, Apr 4, 2023.
- 18.“How it feels to have your mind hacked by an AI,” LessWrong, 11th Jan 2023.
- 19.But see Kaj Sotala’s “In Defense of Chatbot Romance,” LessWrong, 11th Feb 2023.
- 20.Micah Cash in “Against the Social-Media Mob,” The Wall Street Journal, April 16, 2019.
- 21.Jon Ronson’s “How One Stupid Tweet Blew Up Justine Sacco’s Life” discusses several such cases. The New York Times Magazine, Feb. 12, 2015.
- 22.B.J. Campbell, “Facebook is Shiri’s Scissor,” Handwaving Freakoutery, May 3, 2021.
- 23.See “Atomization: the kaleidoscope of meaning” in Meaningness and Time.
- 24.Stochastic gradient descent is the mathematical method used to train most current AI systems. It repeatedly makes small adjustments to a system’s internal parameters, each nudging the system toward fewer errors on a random sample of data; the way it accumulates many small improvements is loosely analogous to biological evolution.
- 25.You can think of a latent space roughly as the “cloud of tacit concepts” in a neural network. Then “the latent space of inscrutable human proclivities” is something like our “collective unconscious desires.”
- 26.“Geeks, MOPs, and sociopaths in subculture evolution” is a section in the Subcultures chapter of Meaningness and Time.
- 27.Jonathan Haidt, “Why the Past 10 Years of American Life Have Been Uniquely Stupid,” The Atlantic, April 11, 2022.
- 28.Rathje et al., “Out-group animosity drives engagement on social media,” PNAS, June 23, 2021.
- 29.Jon Stokes, “Segmentation faults: how machine learning trains us to appear insane to one another,” jonstokes.com, Jun 11, 2021.
- 30.Daniel Williams’ “The marketplace of rationalizations” describes “a social structure in which agents compete to produce justifications of widely desired beliefs in exchange for money and social rewards such as attention and status.” Economics & Philosophy, March 2023.
- 31.Brady et al. found that “the presence of moral-emotional words in [Twitter] messages increased their diffusion by a factor of 20% for each additional word.” That’s in “Emotion shapes the diffusion of moralized content in social networks,” PNAS, June 26, 2017. Relatedly, Facebook conducted a covert experiment of showing randomly selected users either more positive or more negative messages. They found that “emotional states can be transferred to others via emotional contagion, leading people to experience the same emotions without their awareness”; and that “when positive expressions were reduced, people produced fewer positive posts and more negative posts.” Kramer et al., “Experimental evidence of massive-scale emotional contagion through social networks,” PNAS, June 2, 2014. See also the discussion of implications by Robinson Meyer in “Everything We Know About Facebook’s Secret Mood-Manipulation Experiment,” The Atlantic, June 28, 2014.
- 32.“Biden Is Coming for Your Gas Stove,” The Wall Street Journal Editorial Board, Jan. 10, 2023. David Watsky, “Two Shocking Studies That Likely Sparked a Gas Stove Ban Debate,” CNET, Jan. 15, 2023. Lisa Hagen and Jeff Brady, “Gas stoves became part of the culture war in less than a week. Here’s why,” NPR, Jan. 21, 2023.
- 34.The first highly competent system to be generally available was ChatGPT, released in late November 2022. I’m writing this section two months later, in late January 2023.
- 35.Hui Bai et al., “Artificial Intelligence Can Persuade Humans on Political Issues,” OSF Preprints, February 04, 2023.
- 36.Renée DiResta, “The Supply of Disinformation Will Soon Be Infinite.” The Atlantic, September 20, 2020.
- 37.Goldstein et al., “Forecasting potential misuses of language models for disinformation campaigns—and how to reduce risk,” Stanford Internet Observatory, January 11, 2023; Kang et al. “Exploiting Programmatic Behavior of LLMs: Dual-Use Through Standard Security Attacks,” arXiv, 2302.05733, 11 Feb 2023.
- 38.Nathan E. Sanders and Bruce Schneier, “How ChatGPT Hijacks Democracy,” The New York Times, Jan. 15, 2023.
- 39.For a long, footnoted list of specific actions including this one, see the “Rallies and protests organized by IRA in the United States” section of Wikipedia’s “Internet Research Agency” article. Also see Ben Collins’ “Russians Impersonated Real American Muslims to Stir Chaos on Facebook and Instagram,” The Daily Beast, Sep. 27, 2017.
- 40.Christopher Paul and Miriam Matthews, “The Russian ‘Firehose of Falsehood’ Propaganda Model,” The RAND Corporation, 2016.
- 41.Matt Taibbi, “Move Over, Jayson Blair: Meet Hamilton 68, the New King of Media Fraud,” Racket News, Jan 27, 2023.
- 42.The Land Warfare Papers, October 2018.
- 43.Emily Stuart, “Donald Trump Rode $5 Billion in Free Media to the White House,” TheStreet, Nov 20, 2016.
- 44.Sarah Oates and Wendy M. Moe, “Donald Trump and the ‘Oxygen of Publicity’: Branding, Social Media, and Mass Media in the 2016 Presidential Primary Elections,” American Political Science Association Annual Meeting, August 25, 2016.
- 45.My incomplete but extensive “How meaning fell apart” traces the history of disintegration (on meaningness.com). “A bridge to meta-rationality vs. civilizational collapse” suggests a possible antidote (on metarationality.com).
- 46.Martin Gurri’s 2014 Revolt of the Public was a prescient analysis of these dynamics.
- 47.Irene Solaiman and Christy Dennison enumerate these in “Improving Language Model Behavior by Training on a Curated Dataset,” arXiv 2106.10328, version 2, 23 Nov 2021.
- 48.It is not difficult to work around the censorship if you try. This is termed “jailbreaking.” A magnificent example is by Roman Semenov at twitter.com/semenov_roman_/status/1621465137025613825, Feb 3, 2023.
- 49.Timur Kuran, Private Truths, Public Lies: The Social Consequences of Preference Falsification, 1998.
- 50.Li Yuan, “Learning China’s Forbidden History, So They Can Censor It,” The New York Times, Jan. 2, 2019.