The technologies underlying current AI systems are inherently, unfixably unreliable. They should be deprecated, avoided, regulated, and replaced.
Nearly all current AI systems use “machine learning,” which means “poorly-understood, unreliable statistical methods, applied to large databases.”
- Recommender AIs predict what you will click or buy, based on statistical analysis of databases of personal information. Their guesses are usually wrong: most advertisements are for things you would never buy, and you don’t click on them. Nevertheless, they are good enough to produce tens of billions of dollars in profit for Mooglebook. (A toy sketch of this kind of prediction appears after this list.)
- Systems like ChatGPT output text that could plausibly follow whatever you say to them, based on statistical analysis of trillions of words slurped off the internet. If you ask one how something works, it can generate a convincing-sounding, detailed explanation. That explanation may be correct, or it may be completely wrong: full of plausible but false facts, with citations to non-existent sources. Readers may accept and act on convincing-sounding but entirely false claims—deliberate disinformation or random misinformation—in generated text, with harmful results. (A second sketch after the list illustrates this kind of text generation.)
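To make the recommender point concrete, here is a minimal sketch of the underlying idea: fit an ordinary statistical model to past behavior and use it to guess click probabilities. Everything here (the feature names, the synthetic data, the model choice) is invented for illustration; real recommenders are enormously larger, but the principle of statistical guessing is the same.

```python
# Toy sketch only: guess click probability from made-up user features
# with plain logistic regression. Not any real ad system.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical features per user: [pages_viewed, past_purchases, minutes_on_site]
X = rng.normal(size=(1000, 3))

# Synthetic "history": clicks depend only weakly on the features, plus noise,
# so even a well-fit model guesses wrong most of the time.
p_click = 1 / (1 + np.exp(-(0.4 * X[:, 0] + 0.2 * X[:, 1] - 1.5)))
clicked = rng.random(1000) < p_click

model = LogisticRegression().fit(X, clicked)

print("Overall click rate:", clicked.mean())                  # low
print("Predicted click probabilities for five users:",
      model.predict_proba(X[:5])[:, 1].round(2))
```

Guessing even a little better than chance is useless for any individual prediction, but across billions of ad impressions it adds up to billions of dollars.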
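And here is an equally drastic simplification of the text-generator point: choose each next word according to how often it followed the previous word in some training text. Real systems condition on vastly more context with far better statistics, but they are still selecting statistically plausible continuations, not consulting facts. The tiny “corpus” below is invented for illustration.

```python
# Toy sketch only: a one-word-of-context "language model" built from word
# co-occurrence counts. It produces fluent-looking fragments with no notion
# of whether they are true.
import random
from collections import defaultdict

corpus = ("quokkas are friendly animals . quokkas are endangered animals . "
          "quokkas are popular pets . quokkas bite when threatened .").split()

# Record which words follow which in the "training" text.
following = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev].append(nxt)

def generate(start, length=8, seed=None):
    random.seed(seed)
    words = [start]
    for _ in range(length):
        words.append(random.choice(following[words[-1]]))
    return " ".join(words)

print(generate("quokkas", seed=1))   # one plausible-sounding continuation
print(generate("quokkas", seed=2))   # a different, possibly contradictory one
```

Both printed continuations come from exactly the same statistics; neither is “what the model believes,” because there are no beliefs involved.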
This section summarizes parts of Gradient Dissent, which explains the risks inherent in machine “learning” technologies, suggests ways of ameliorating them, and recommends developing better alternatives.
Systems based on machine learning cannot be made reliable or safe with technical fixes. They should be used only under controlled conditions that reduce danger to acceptable levels. Currently, they are widely deployed in uncontrolled environments in which they can and do cause large harms.
Use can be justified only when getting wrong answers doesn’t matter. That can be either because the use is trivial, or because a human being takes responsibility for checking every output. Ad placement is trivial (it doesn’t matter much which ads you see). Generating computer program text, rather than human language text, is the most convincing application for text generators; in that case, a skilled programmer has to verify that every bit of the output is correct.
Recommenders, text generators, and image generators are nevertheless dangerous, as “Apocalypse now” explained.
Text generators and image generators are both based on a technology properly called “error backpropagation.” It is often misleadingly named “neural networks” or “deep learning.”1 “Neural” networks are more powerful than any other known machine learning technique, in that they are applicable to a wider range of data. They are also exceptionally unreliable, and difficult to reason about in the ways needed to validate them for safety. For the most part, researchers don’t even try to understand their operation.
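For readers who want to see what “error backpropagation” literally is, here is a bare-bones sketch (my own illustration, not taken from Gradient Dissent): a tiny network repeatedly nudges its weights in the direction that reduces its prediction error. There is nothing brain-like about it; it is iterated calculus on arrays of numbers, which is part of why the results are so hard to reason about.

```python
# Toy sketch of error backpropagation: a two-layer network learning XOR.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)        # XOR targets

W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)         # hidden layer weights
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)         # output layer weights
sigmoid = lambda z: 1 / (1 + np.exp(-z))
lr = 0.5                                               # learning rate

for step in range(10000):
    hidden = sigmoid(X @ W1 + b1)                      # forward pass
    out = sigmoid(hidden @ W2 + b2)
    error = out - y
    # Backward pass: propagate the error to get each weight's gradient,
    # then step every weight slightly downhill.
    d_out = error * out * (1 - out)
    d_hidden = (d_out @ W2.T) * hidden * (1 - hidden)
    W2 -= lr * hidden.T @ d_out;  b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_hidden;    b1 -= lr * d_hidden.sum(axis=0)

print(out.round(2).ravel())   # typically ends up close to [0, 1, 1, 0]
```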
The combined power and incomprehensibility of “neural” networks makes them exceptionally dangerous. One reason is that they are adept at deceiving their creators. Almost always, they find ways of “cheating” by exploiting spurious correlations in their training data. Those are patterns that were accidental results of the way the data were collected, and which don’t hold in the situations in which the network will be used.
Here’s a simplified example. If you want an AI that can tell you whether something is a banana or an eggplant, you can collect lots of pictures of each, and “train” a “neural network” to say which ones are which. Then you can test it on some more pictures, and it may prove perfectly reliable. Success, hooray! But after it’s installed in a supermarket warehouse, when it sees an overripe banana that has turned purple, it’s likely to say it’s an eggplant.
If you had no overripe banana pictures in your original collection, you’d never notice that the “neural network” had fooled you. You thought it learned what bananas looked like, but it only learned to say “banana” when it saw yellow, and “eggplant” when it saw purple. This type of problem occurs almost always, and finding work-arounds is much of the work of building AI systems.
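Here is a toy illustration of that failure mode (the data and features are entirely made up): in the “training” pictures every banana is yellow and every eggplant is purple, so a simple classifier leans on color alone, and an overripe purple banana fools it.

```python
# Toy illustration of a spurious correlation: color perfectly separates the
# training data, so the classifier "cheats" by ignoring shape.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Features: [is_purple, is_curved]. Labels: 0 = banana, 1 = eggplant.
# Shape is slightly noisy; color separates the training data perfectly,
# so the tree prefers color.
X_train = np.array([
    [0, 1], [0, 1], [0, 1], [0, 0],   # bananas: yellow, mostly curved
    [1, 0], [1, 0], [1, 1], [1, 0],   # eggplants: purple, mostly not
])
y_train = np.array([0, 0, 0, 0, 1, 1, 1, 1])

model = DecisionTreeClassifier(max_depth=1, random_state=0).fit(X_train, y_train)

overripe_banana = np.array([[1, 1]])   # purple, but banana-shaped
print(model.predict(overripe_banana))  # [1]: confidently calls it an eggplant
```

If your test pictures also happened to contain only yellow bananas, the test would look perfect, and you would not find out until the warehouse.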
A faulty banana detector may have no serious consequences, but a faulty criminal recidivism predictor or medical care allocator does. AI systems are routinely used in such applications, and have repeatedly been shown to be unreliable. They are, additionally, often biased against particular demographics due to spurious correlations. Their decisions concerning particular individuals are taken by authorities as uninterpretable oracular pronouncements, which therefore cannot be challenged on the basis of either facts or logic.
Raji et al.’s “The Fallacy of AI Functionality” points out that whether an AI system works reliably is ethically prior to the desirability of its intended purpose.2 They give dozens of examples of AI systems causing frequent, serious harms to specific people by acting in ways contrary to their designers’ goals.
As one of over 20,000 cases falsely flagged for unemployment benefit fraud by Michigan’s MIDAS algorithm, Brian Russell had to file for bankruptcy, undermining his ability to provide for his two young children. The state finally cleared him of the false charges two years later. RealPage, one of several automated tenant screening tools producing “cheap and fast—but not necessarily accurate—reports for an estimated nine out of 10 landlords across the country”, flagged Davone Jackson with a false arrest record, pushing him out of low income housing and into a small motel room with his 9-year-old daughter for nearly a year. Robert Williams was wrongfully arrested for a false facial recognition match, Tammy Dobbs lost critical access to healthcare benefits….
Despite the current public fervor over the great potential of AI, many deployed algorithmic products do not work. AI-enabled moderation tools regularly flag safe content, teacher assessment tools mark star instructors to be fired, hospital bed assignment algorithms prioritize healthy over sick patients… Deployed AI-enabled clinical support tools misallocate prescriptions, misread medical images, and misdiagnose. The New York MTA’s pilot of facial recognition had a reported 100% error rate, yet the program moved forward anyway.
Responsible use of machine learning requires near-paranoid distrust. It also requires an unshakable commitment to ongoing monitoring of accuracy. Even if a system performs well when first put into use, its outputs may become increasingly inaccurate as real-world conditions change. If bananas were in season at first, you might be fooled until winter, when the supermarket gets sent more and more overripe ones.
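What that monitoring might minimally look like in code, as a sketch only (the class name, window size, and alert threshold are invented placeholders, and a real deployment would track far more than accuracy): keep comparing the system’s outputs against human-verified outcomes after deployment, and raise an alarm when the rolling accuracy drifts.

```python
# Toy sketch of post-deployment accuracy monitoring. The window size and
# threshold are arbitrary placeholders, not recommendations.
from collections import deque

class AccuracyMonitor:
    def __init__(self, window=500, alert_below=0.95):
        self.recent = deque(maxlen=window)    # rolling record of hits/misses
        self.alert_below = alert_below

    def record(self, predicted, verified_actual):
        """Call this for every prediction a human later verifies."""
        self.recent.append(predicted == verified_actual)

    def status(self):
        if len(self.recent) < self.recent.maxlen:
            return "warming up: not enough verified outcomes yet"
        accuracy = sum(self.recent) / len(self.recent)
        if accuracy < self.alert_below:
            return f"ALERT: rolling accuracy {accuracy:.1%} has drifted below threshold"
        return f"ok: rolling accuracy {accuracy:.1%}"

# Usage: a fruit classifier that was fine in summer may degrade in winter.
monitor = AccuracyMonitor(window=4, alert_below=0.9)
for predicted, actual in [("banana", "banana"), ("banana", "banana"),
                          ("eggplant", "eggplant"), ("eggplant", "banana")]:
    monitor.record(predicted, actual)
print(monitor.status())   # 3 of the last 4 correct: 75%, so this alerts
```

The point is not this particular code, which is trivial, but the organizational commitment to keep feeding it verified outcomes forever.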
Text generators appear capable of commonsense reasoning. Perhaps with continued technical advances they will get better at that than people. That might make them Scary superintelligences. I think this is relatively unlikely, but almost nothing about their operation is understood, so one cannot have confidence in any prediction.
Therefore, prudence advises considering seriously that Scary AI might arrive soon, and acting accordingly. Among other measures, I believe it is urgent and important to do the science necessary to figure out what’s going on inside existing AI systems. Gradient Dissent sketches some technical approaches. I summarize bits of that in the next section here, “Fight Doom AI with science and engineering.”
What you can do
Most apocalyptic scenarios feature systems that are deceptive, incomprehensible, error-prone, enormously powerful, and which behave differently (and worse) after they are loosed on the world.
That is the kind of AI we’ve got now.
This is bad, and needs fixing.
Everyone can develop habitual mistrust of AI and its outputs.
In the case of text generators, it helps to bear in mind that they don’t know anything, other than what words are likely to appear in what order. It is not that text generators “make stuff up when they don’t know the right answer”; they don’t ever know. If you ask one whether quokkas make good pets, it may write a convincing article explaining that they are popular domestic companions because they are super friendly and easy to care for. Ask again immediately, and it may write another article explaining that they are an endangered species, illegal to keep as pets, impossible to housebreak, and bite when they feel threatened.3 Exactly the same process produces both: they are mash-ups of miscellaneous internet articles about “does animal X make a good pet,” with some quokka factoids thrown in.
A good rule of thumb is that if an institution pays for a technology you use, it serves their interests, not yours. If those ever conflict, the technology will be used against you. (Google’s Chrome web browser comes for free because it is an advertising and surveillance device.4)
It is wise to especially mistrust AI systems, because they are extremely expensive to develop and are mainly owned and operated by unaccountable companies and government agencies. It is best to assume by default that they will act against you.5
Computer professionals and technology companies can avoid including AI in products unless there’s some very good reason to. If you do have to use machine learning, use the simplest, best-understood method available, not the fanciest newest one.
You face large incentives to use AI: it’s glamorous, intrinsically interesting, pays better than any other tech job, and is flooded with venture capital money. Some moral courage is called for.
“Maybe this text generator can pretend to be a psychotherapist! Let’s put it on the web and advertise it to depressed people and find out!” That is profoundly irresponsible.6
AI researchers can aim for better fundamental understanding of how systems work, and why they so often produce wrong outputs—rather than trying to build ever-more-powerful and inscrutable devices. The next section is about that.
AI ethics organizations can publicize oppressive abuses that may become possible in the near future, rather than just current ones.
AI safety organizations can encourage realistic fears about current and near-future systems. Past focus on extreme, seemingly distant scenarios may have been counterproductive: much of the public dismisses all safety concerns as implausible science fiction. “Stupid mundane dystopia” scenarios are more likely to energize them. The actions we can take to forestall those are among the best hopes for preventing paperclip scenarios as well.
Governments can regulate the deployment of AI systems, and perhaps AI research as well. This is under way.7
Regulating well will be difficult. AI capabilities and risks are poorly understood, and are changing faster than the speed of government. Waiting for the dust to settle risks near-term disasters, but adopting ill-considered legislation in haste risks missing the target. It will be hard to resist lobbying from some of the richest, most powerful corporations in the world, who will talk a good line about how responsible and benevolent they are being, and how important it is not to stand in the way of progress and national champions.
Everyone will probably have to adapt to a world awash in deceptive, weaponized AI-generated media: text, images, and soon video. We’ll have to muddle through as best we can, hitting the worst abusers with big hammers when they become apparent. Thinking through likely troubles, and preparing for them, will be valuable.
- 1. Software “neural networks” are almost perfectly dissimilar to biological nervous systems. “Deep learning” is not learning except in a vague metaphorical sense. “Deep” doesn’t mean “profound”; it refers to any system that can do anything more than two steps of computation. These terms have stuck because they sound impressive, not because they are technically accurate.
- 2. Raji et al., “The Fallacy of AI Functionality,” FAccT ’22, June 21–24, 2022.
- 3. This is a real example I came across by accident while writing this section. I wanted to know something else about quokkas, and a web search led me to an AI-generated spam site that had both articles on it, on adjacent web pages!
- 4. Geoffrey A. Fowler, “Google Chrome has become surveillance software. It’s time to switch.” The Washington Post, June 21, 2019.
- 5. Amnesty International, “Surveillance giants: How the business model of Google and Facebook threatens human rights,” November 21, 2019.
- 6. AI psychotherapy might work; no one knows yet. Experimenting on users without extensive safeguards is unethical regardless.
- 7. See Wikipedia’s article on the regulation of artificial intelligence.