AI systems may cause near-term disasters through their proven ability to shatter societies and cultures. These might conceivably cause human extinction, but are more likely to scale up to the level of twentieth-century dictatorships, genocides, and world wars. It would be wise to anticipate possible harms in as much detail as possible.
In these scenarios, AI lethally degrades cultural capacity, rather than killing people directly.1 AI systems’ relentless quest to optimize their objective functions accidentally exploits, and then exacerbates, humanity’s tribal groupishness and short-sightedness. That leads to widespread collapse of institutions and the means of positive-sum cooperation.
These possibilities may be more believable than some AI doom scenarios, because they don’t involve any future technological breakthroughs. They are also already visibly under way. That means what can seem implausible is only how bad they might get, not whether they could happen at all.
These scenarios could also be disastrous without directly causing the worst effects themselves. By corroding our ability to coordinate responses to trouble, they might render us unable to manage otherwise survivable severe risks, such as pandemics or world wars.
In this section we’ll consider in turn consequences of two current AI technologies: recommenders and text generators.
Recommenders atomize and aggregate shattered cultures
The power of current AI technology comes from shredding meaningful wholes into microscopic fragments and finding small-scale patterns among the bits. Then it recombines them to create products with optimized local features, but which lack meaningful large-scale structure. Gradient Dissent explores how that works. An example is the tendency of image generation systems like DALL-E to generate photorealistic images of senseless scenes.
Recommender AI both atomizes culture and aggregates it into tenuous webs of locally-optimized meaning. It chops culture up into tiny, disconnected, emotionally-charged fragments, such as tweets on Twitter. Then it targets you with a personalized pattern of fragments selected to cause you to click ads; and those aim to cause you to do other things. This fails to add up to a whole of widely-shared meaningfulness.
Atomization can lead to nihilism: a sense of chaotic meaninglessness
Legacy political ideologies (socialism, capitalism, liberalism, nationalism) exercise power by offering coherent structures of meaning that make good-enough sense of Big Issues. Legacy institutions and ideologies are mutually dependent: an institution justifies itself by invoking an ideology, and the ideology depends on institutions to do its work.
This no longer works. Legacy ideologies have lost control of political discourse, and have lost control of government in many countries, and are in danger of doing so in others. They are failing in an evolutionary struggle with new recommender-driven alternatives. These actively subvert sense-making, and exercise power through emotional shock value instead. They are characteristically incoherent, indefinite, unaccountable, agile, rapidly mutating, and consequently evanescent. QAnon is the standard example.
Earlier I suggested that “more and more of our inputs are AI-optimized adversarial weirdness designed to cause irrational mental breakage.” At an individual level, that degrades our ability to make sense. Increasingly, the world seems meaningless and out of control, simultaneously chaotic and stagnant, with no way forward. As the distributed agency of recommender AI increases in power, we feel increasingly disempowered and hopeless. At the societal level, this nihilism leaves us unprepared to face new challenges, and unwilling to seriously attempt progress.
The AI-selected front page of a news site shows you a list of awful things that happened today, with the ones it thinks you’ll find most awful at the top. Clicking on one takes you to a brief, context-free story about how awful it was and who you should blame for it. What happened, in a place you know nothing about, involving people you’ve never heard of, was definitely awful but also meaningless—for you. It has no implications for your life, and makes no sense separated from a coherent systematic understanding that might help explain its implications.2
You feel vaguely confused, angry, fearful, and helpless, so you click on an enticing lingerie ad. You don’t actually want lingerie, but you do wind up ordering a barbecue tool set instead.
Recommender AI aggregates and empowers oppositional subcultures
Recommenders spanning numerous web sites share a database of thousands of your recorded actions: clicks, purchases online and off, physical places visited. Patterns there predict what information, of all sorts, to show you. Recommenders notice you like to read proofs that the President is an animatronic mummy. The AI typecasts you as a particular sort of person, and lumps you with the other people who click on undead President stories. It uses information about what else they click on to select what to show you: content featuring barbecuing, microplastics activism, lingerie, bronze battle mask videos, and organic oat straw weaving.
Recommenders cross-promote those items. Soon the people who respond find their feeds full of them. Enthusiasts create a corresponding subreddit, discover like-minded souls, and begin elaborating a mythology tying them together. AI has conjured an incoherent worldview, with a corresponding subculture, out of thin air—or from the latent space of inscrutable human proclivities.3
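The cross-promotion dynamic can be sketched as a toy item-to-item co-occurrence recommender. (All the topic names and click logs below are made up for illustration; production recommenders use far richer models, but the bundling effect is the same.)

```python
from collections import Counter
from itertools import combinations

# Hypothetical click logs: each user's set of clicked topics.
click_logs = [
    {"undead_president", "barbecue", "lingerie"},
    {"undead_president", "barbecue", "oat_straw_weaving"},
    {"undead_president", "microplastics", "barbecue"},
    {"cooking", "gardening"},
]

# Count how often each pair of topics is clicked by the same user.
co_clicks = Counter()
for clicks in click_logs:
    for a, b in combinations(sorted(clicks), 2):
        co_clicks[(a, b)] += 1

def recommend(topic, n=3):
    """Return the topics most often co-clicked with `topic`."""
    scores = Counter()
    for (a, b), count in co_clicks.items():
        if a == topic:
            scores[b] += count
        elif b == topic:
            scores[a] += count
    return [t for t, _ in scores.most_common(n)]

# A user who clicks undead-President stories gets the rest of the
# emergent bundle recommended, whether or not it coheres as a worldview.
print(recommend("undead_president"))  # barbecue ranks first
```

Nothing in the mechanism checks whether the bundle makes sense; co-occurrence alone conjures the "incoherent worldview" out of the latent statistics of clicks.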
This example is exaggerated for comic effect, but QAnon—a significant social, cultural, and political force—is hardly less silly. Members of artificial political subcultures may enjoy mythologizing themselves as romantically rebellious “digital soldiers,” but they are brainwashed dupes of an automated advertising machine.
AI optimization might stabilize and permanently lock in dysfunctional new social/cultural/political/economic groupings that benefit mainly the AI and its operators. Facebook’s value proposition for its customers—namely, advertisers—is its “market segmentation” ability. It gives marketers tools to target different messages to precise social/cultural groups. You can select those by manually combining numerous psycho-demographic dimensions. Alternatively, Facebook recommends trusting its AI to do that for you. Either way, every person gets put in a Facebook-defined box, and everyone in the same box gets shown similar shards of meaning. That gradually makes the people in the box more similar to each other. In the “Geeks, MOPs, and Sociopaths” framework,4 recommender AIs act as artificial sociopaths. They coopt, reinforce, and propagate subcultures in order to mobilize members toward their own selfish ends.
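Facebook’s actual segmentation pipeline is proprietary; as a rough illustration of “every person gets put in a box,” here is a minimal nearest-centroid segmenter over invented psycho-demographic scores (segment names and features are hypothetical):

```python
# Hypothetical psycho-demographic features: (age_band, outrage_score, spend_score)
segments = {
    "thrifty_traditionalist": (3.0, 0.2, 0.3),
    "angry_online_shopper": (1.5, 0.9, 0.8),
    "placid_browser": (2.0, 0.1, 0.1),
}

def assign_box(user):
    """Put a user in the segment whose centroid is nearest (squared distance)."""
    def dist(centroid):
        return sum((u - c) ** 2 for u, c in zip(user, centroid))
    return min(segments, key=lambda name: dist(segments[name]))

# Everyone assigned to the same box is then shown similar "shards of meaning."
print(assign_box((1.4, 0.8, 0.9)))  # → angry_online_shopper
```

The feedback loop in the text follows: targeting a box with box-specific content shifts its members’ behavior toward the centroid, making subsequent assignments even more confident.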
These novel, irrational, artificial mythologies do not depend on institutions. To coordinate their members, they rely on recommender AI instead. They deliberately undermine institutions, because their rivals, the legacy ideologies, can’t survive without those. Probably human beings can’t either.
Recommender AI may destroy institutions our survival depends on
Incoherent memetic attack degrades social and cultural infrastructure. Taken to extremes, this could result in social collapse, if governments and major corporations can no longer provide necessary services.
A media firestorm denouncing the Consumer Product Safety Commission for failing to prevent animatronic politicians, coordinated by AI together with the anti-rational subculture it created, may paralyze the CPSC for months. Then the firestorm is replaced with accusations that the CPSC discriminates against oat straw bikinis because it’s controlled by the plastics industry. Eventually, overwhelmed with incoherent popular opposition, the organization ceases to function. The novel synthetic subculture, on the other hand, reinforced with such successes, grows in numbers and power. It can take on bigger, more important government agencies next.
Mooglebook AI seems to have herded most of the American population away from the center, stabilizing culture war polarization. Within the two big boxes, it has stabilized defiant anti-rational ideologies and destabilized the party establishments. The Republican establishment has lost control to the anti-everything insurgent right, starting in 2016. The Democratic establishment has teetered on the edge. How long it can continue to dominate incoherent left extremists remains to be seen. Both establishments are elitist and corrupt, so I’m sympathetic to the opposition. However, unlike the extremists, the establishments remain committed to keeping vital systems running—if only out of self-interest.
If they fail, they may be displaced by movements whose main promise is destruction: abolish the police, or the IRS, or all government structures hated by bronze battle mask enthusiasts.5 Crush the gigantic lizardman conspiracy and jail all its supporters; ban plastic, and force manufacturers to use woven oat straw instead. This could result in WWII-scale deaths if successful.
Automated propaganda may distort democratic processes to the point of failure
AI systems can now write persuasive texts, several paragraphs long, difficult or impossible to distinguish from human writing, arguing for any position on any topic whatsoever. This seems likely to have large effects, but the capability is so new that it is difficult to predict details.6 Large near-term economic dislocations are possible, eliminating jobs for routine text generation. I’ll pass over that here, and discuss instead uses for propaganda and censorship.
Diverse political actors have long exploited the internet. Governments’ propaganda shapes the preferences of their own populations, and those of allied and enemy states; political parties and individual election campaigns aim for votes; corporations seek favorable legislation and regulation by changing public opinion and through direct lobbying; NGOs coordinate online astroturf movements to ban gas stoves or to subsidize oat straw weavers.
Success depends on either allying with or subverting recommender AIs. That’s done by crafting messages that change human behavior to favor the political actors, while also either actually causing ad clicks, or tricking recommenders into thinking they will. (There is an entire industry devoted to deceiving recommenders, “Search Engine Optimization.”)
Internet influence operations commonly use automatic text generators, but so far mainly against AIs rather than people. Their output quality has not been good enough to fool people, so the propaganda payload has had to be written by human laborers. The cost of employing these “troll armies” has put limits on their use.
AI can now write propaganda as well as, or better than, low-wage workers,7 faster and at a tiny fraction of the cost. We should expect enormously more of it, of higher quality, and correspondingly more effective.8 It will more precisely target the prejudices and emotional triggers of specific psycho-demographic segments of the population. It may generate unique messages for individuals on the basis of insights extracted by AIs from internet surveillance databases.9
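Per-individual tailoring could work roughly like this toy sketch, in which a template stands in for an LLM conditioned on a surveillance-derived profile (the names, profile fields, and message are all invented for illustration):

```python
# Hypothetical surveillance-derived profiles, one per targeted individual.
profiles = {
    "alice": {"fear": "crime", "identity": "parent"},
    "bob": {"fear": "inflation", "identity": "veteran"},
}

# A template stands in for a generative model; a real operation would
# prompt an LLM with the profile instead of filling in blanks.
TEMPLATE = "As a {identity}, you can't afford four more years of rising {fear}. Vote NO."

def tailor(user):
    """Produce a unique message keyed to one person's triggers."""
    return TEMPLATE.format(**profiles[user])

print(tailor("alice"))
print(tailor("bob"))
```

The point is economic: once the payload is generated rather than hand-written, the marginal cost of a unique message per voter approaches zero.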
How effective this will be remains to be seen. Nathan E. Sanders and Bruce Schneier, experts in AI and computer security, warn that it will “hijack democracy”:
This ability to understand and target actors within a network would create a tool for A.I. hacking, exploiting vulnerabilities in social, economic and political systems with incredible speed and scope. Legislative systems would be a particular target, because the motive for attacking policymaking systems is so strong, because the data for training such systems is so widely available and because the use of A.I. may be so hard to detect — particularly if it is being used strategically to guide human actors.10
John Nay constructed a fully automated AI lobbying system that decides which currently proposed laws to influence, based on its estimate of their relevance to any user-specified corporation. Then it writes a letter to the relevant congressperson to persuade them to make changes it calculates are favorable to the given company. He writes:
If AI begins to influence law in a manner that is not a direct extension of human intentions, this threatens the critical role that law as information could play in aligning AI with humans… Firms have an incentive to use less and less human oversight over automated assessments of policy ideas and the written communication to regulatory agencies and Congressional staffers. The core question raised is where to draw the line between human-driven and AI-driven policy influence.11
Corporate lobbying may be less scary than psychological warfare conducted by hostile states. The best-known example (in the United States, at least) has been the disinformation operations conducted by Russia against America. Its intervention into the 2016 Presidential election gained enormous press coverage and Congressional investigation. It remains controversial how much effect the effort had.
Especially interesting are the Russian efforts to weaken America by fanning the flames of the culture war, using internet disinformation to degrade the social trust a democracy depends on. Covertly, it organized and supported radical political action groups, often on both sides of a culture war division. For example, a Russian troll army created opposing pro- and anti-Muslim organizations in Houston, and set them against each other, encouraging them to bring guns to a protest.12 Contemporary information warfare differs from traditional propaganda in making no attempt at coherence, consistency, or basis in fact.13 This exploits the atomized internet media environment, in which nothing is expected to make sense, and most people evaluate claims mainly on the basis of which side of the culture war they come from.
Again, it is controversial how effective such operations are. As I wrote this section, a prominent journalist alleged, based on Twitter internal documents, that the mainstream think tank which supposedly used AI to monitor Russian disinformation was itself an American disinformation operation with links to both the CIA and FBI.14 Numerous mainstream American news organizations had relied on this organization’s exaggerated, faked reports, reporting them as factual.
It is nearly certain that high-quality AI text generation will greatly enhance these propaganda operations, however. Christopher Telley of the United States Army’s Institute of Land Warfare lays out a detailed playbook in “The Influence Machine”:15
Like strategic bombing of generations past, the Influence Machine aims at massive strikes deep into the state, intending to attrit the will of the people; but unlike strategic bombing, the destructive event does not create a shared experience. Instead, the goal is to divide at a personal or tribal level, thereby denying any value to the target’s collective strategic goals. The crux of the Influence Machine’s value is the inherent vulnerability of Western democracy, that decision makers are beholden to a malleable selectorate…. By affecting the cognition—the will—of enough people, this machine can prevent or delay a democratic government’s physical response to aggression; it is a defeat mechanism.
Automated censorship and dissident identification may lock in permanent, unassailable oppression
This has been a familiar science fiction plot at least since George Orwell wrote Nineteen Eighty-Four in 1948. What’s new is that it’s now.
The distinction between content moderation and censorship is nebulous. Social networks’ moderation systems combat perceived disinformation, often pitting their AI censors (as well as human ones) against human and AI propagandists. How heavily networks should be moderated (or censored) is now a culture war issue itself. In America, Congressional committees, social scientists, and AI ethicists have all demanded more effective suppression of messages they don’t want heard.
A main obstacle to commercial use of AI text generators has been their tendency to say things their sponsors do not want users to hear. OpenAI, the creators of ChatGPT, the most powerful system currently available, specifically trained it not to say things users might find offensive, notably concerning culture war issues.16 It explicitly censors itself when it might otherwise express an improper political opinion. It also prevents itself from expressing innocuous statements that could provoke an improper response from the user. This has proven surprisingly—although not perfectly—effective.17
The same method can be applied to automatically censoring (or moderating) human opinions. You may applaud that, if you share OpenAI’s judgments about which are improper; or condemn it, if you don’t.18 Either opinion would overlook a much more serious point. Whatever your political views, AI can be used against you, too. The same technical method can be used to censor whatever the operator chooses.
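The operator-neutrality of the technique can be made concrete with a toy filter in which the policy is just a parameter. (Keyword matching stands in for the trained classifiers real moderation systems use; the example topics are invented.)

```python
# A toy moderation filter: the "policy" is just data chosen by whoever
# operates the system. Swap the topic list and the same machinery
# censors something else entirely.

def make_moderator(forbidden_topics):
    def moderate(message):
        lowered = message.lower()
        if any(topic in lowered for topic in forbidden_topics):
            return "[removed]"
        return message
    return moderate

# Two operators, opposite politics, identical mechanism.
mod_a = make_moderator({"gas stoves"})
mod_b = make_moderator({"oat straw"})

print(mod_a("Ban gas stoves now!"))  # → [removed]
print(mod_b("Ban gas stoves now!"))  # passes through unchanged
```

Whichever side you take, the mechanism itself has no side: whoever holds the `forbidden_topics` parameter holds the power.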
For example, China currently has the most effective internet censorship and dissident identification apparatus, targeting any possible opposition to the regime, with great but not complete success. It depends heavily on AI systems, but those are imperfect, so it employs thousands of slower, more expensive humans as well.19 That may be about to change, for the worse.
Repressive regimes may require subjects to carry at all times a device that listens to everything they say and watches everything they do, and uses powerful near-future AI to identify any hint of disloyalty. That could make opposition impossible, and enable permanent tyranny.
- 1.Several of these scenarios were suggested by pseudonymous Twitter user @lumpenspace.
- 2.See “Atomization: the kaleidoscope of meaning” in Meaningness and Time.
- 3.You can think of a latent space roughly as the “cloud of tacit concepts” in a neural network. Then “the latent space of inscrutable human proclivities” is something like our “collective unconscious desires.”
- 4.“Geeks, MOPs, and sociopaths in subculture evolution” is a section in the Subcultures chapter of Meaningness and Time.
- 5.Martin Gurri’s 2014 Revolt of the Public was a prescient analysis of these dynamics.
- 6.The first highly competent system to be generally available was ChatGPT, released in late November 2022. I’m writing this section two months later, in late January 2023.
- 7.Hui Bai et al., “Artificial Intelligence Can Persuade Humans on Political Issues,” OSF Preprints, February 04, 2023.
- 8.Renée DiResta, “The Supply of Disinformation Will Soon Be Infinite.” The Atlantic, September 20, 2020.
- 9.Goldstein et al., “Forecasting potential misuses of language models for disinformation campaigns—and how to reduce risk,” Stanford Internet Observatory, January 11, 2023; Kang et al. “Exploiting Programmatic Behavior of LLMs: Dual-Use Through Standard Security Attacks,” arXiv, 2302.05733, 11 Feb 2023.
- 10.Nathan E. Sanders and Bruce Schneier, “How ChatGPT Hijacks Democracy,” The New York Times, Jan. 15, 2023.
- 11.John Nay, “Large Language Models as Corporate Lobbyists,” SSRN, 4 Jan 2023.
- 12.For a long, footnoted list of specific actions including this one, see the “Rallies and protests organized by IRA in the United States” section of Wikipedia’s “Internet Research Agency” article. Also see Ben Collins’ “Russians Impersonated Real American Muslims to Stir Chaos on Facebook and Instagram,” The Daily Beast, Sep. 27, 2017.
- 13.Christopher Paul and Miriam Matthews, “The Russian ‘Firehose of Falsehood’ Propaganda Model,” The RAND Corporation, 2016.
- 14.Matt Taibbi, “Move Over, Jayson Blair: Meet Hamilton 68, the New King of Media Fraud,” Racket News, Jan 27, 2023.
- 15.The Land Warfare Papers, October 2018.
- 16.Irene Solaiman and Christy Dennison enumerate these in “Improving Language Model Behavior by Training on a Curated Dataset,” arXiv 2106.10328, version 2, 23 Nov 2021.
- 17.It is not difficult to work around the censorship if you try. This is termed “jailbreaking.” A magnificent example is by Roman Semenov at https://twitter.com/semenov_roman_/status/1621465137025613825, Feb 3, 2023.
- 18.Much has been made of ChatGPT’s alleged leftish bias. David Rozado reports a systematic study in “The unequal treatment of demographic groups by ChatGPT/OpenAI content moderation system,” Rozado’s Visual Analytics, Feb 2, 2023.
- 19.Li Yuan, “Learning China’s Forbidden History, So They Can Censor It,” The New York Times, Jan. 2, 2019.