Recognize that AI is probably net harmful

Actually-existing and near-future AIs are net harmful—never mind their longer-term risks. We should shut them down, not pussyfoot around hoping they can somehow be made safe.

AI has been “on the verge” of providing fabulous benefits for innumerable fields throughout my lifetime. It mostly has not delivered.

Automated medical diagnosis is a common example. It was demonstrated in the lab, and proclaimed as a breakthrough with immediate benefits, starting in the 1970s. New systems have been hyped every year since. However, AI diagnosis is rarely used in clinical practice because it doesn’t work well enough.

Mainstream media articles commonly proclaim that “applications of artificial intelligence are already revolutionizing a host of industries.” My web searches for practical uses have turned up mainly experimental prototypes and vaporware fantasies instead. On examination it turns out that:

Recommender engines have long been the main commercial application. As I argued in “Apocalypse now,” recommenders are intrinsically useful, but their effects have been disastrous, due to unanticipated interactions with the world at large.

The past year’s dramatic improvements in chatbots (ChatGPT), programming assistants (Copilot), and image generators (DALL-E) may provide new, highly significant examples of practical AI systems. The capabilities of these systems are extraordinary. So, however, is their propensity for error, which may limit them to a handful of low-value uses.

It will take a year or two for their worth to become clear. For now, opinions are sharply divided about how useful they will prove, for what, and about whether their uses will be positive or negative on balance.

I suspect many people confuse “amazing!” with “useful,” and “useful” with “desirable.” Some users report tenfold increases in productivity, which does not seem credible. Others find that checking and revising all AI outputs takes longer than doing the work some other way. Major uses for ChatGPT include negative-value activities such as spam, near-spam such as marketing emails, and pointless internal corporate communications.

Future AI is justified mainly on the basis of distant, vague promises. Although some benefits are likely, I have found no specific argument that they will outweigh the inevitable harms. Reasoning often jumps from superhuman intelligence to an unspecified utopia, with no concrete scenario intervening.

Typically, the hope is that AI will speed science and engineering, which drive material progress. In a later chapter, I suggest that we can dramatically speed science, engineering, and material progress, but that AI is unnecessary and probably mostly irrelevant. My final chapter suggests that social and cultural improvements are also important to a future we’d want. It seems that current AI is rapidly degrading society and culture—as “Apocalypse now” suggested earlier—making it net negative.

Alternatively, advocates invoke generic techno-optimism: all knowledge is good, because we can choose to use it for good. There are many counterexamples; biological weapons, for instance. We can and should choose which technologies to develop, instead of rushing blindly into “this AI thing is fascinating and unexpectedly powerful—let’s spend hundreds of billions of dollars to make it even more so.”

What you can do

Everyone can pay some attention to AI as it develops. The rate of technical progress, and its effects, are both impossible to predict. Most experts say they can barely guess at what AI will or won’t be able to do in even a couple of years’ time; no one can look far ahead. If progress continues at the startling pace of 2021–2023, your life may soon be affected, for better or worse. There may be little warning, so it would be wise at least to watch new developments out of the corner of your eye.

It seems unlikely that AI will soon automate many jobs entirely out of existence. More likely, some parts of your work can and will be automated. That may be either good or bad for people in your occupation. Many observers fear that high-quality AI image generators will put most artists out of work. Others suggest that they will greatly increase artists’ productivity, in which case commissioning custom artwork might become affordable to many more people. That could increase demand even faster than productivity, driving up artists’ incomes.

You know the details of your work better than anyone else, so you can probably predict better than AI experts which parts will be automated, if you take a lay interest in new developments as they happen.

Technology professionals can reflect honestly on the broader effects and value of your work. Profit is good, but not if it generates significant negative externalities. What are possible downstream consequences of the AI you are using, or contemplating using?

I find a failure of nerve in both the AI ethics and AI safety communities. Both fields regard the expected utility of AI as negative on its current path, but neither consistently advocates simply stopping. They tacitly assume that AI is inevitable, so all we can hope for is to lessen its worst effects. Neither has a credible plan for altering the path to make AI a net positive. We should admit this, and aim to halt AI instead. I believe that is feasible.

AI ethics aims mainly to prevent harmful but non-catastrophic misuses of current technologies, rather than bringing AI itself into doubt. In contrast with AI safety, it does recognize that power, not intelligence, is dangerous;1 and that AI-enabled power is already often harmful.2

AI ethics activists can take seriously the possibility that near-future AI could drastically increase the power of oppressive institutions. If that seems possible, you might rethink priorities. What’s your longer-term game plan? Demanding piecemeal regulation of current misuses may miss the tiger for the mosquitoes. Can you envision a positive future for AI? Do you expect to steer us into that, in the face of governments and corporations with vast incentives to ignore, or deliberately create, harmful effects? If not, consider opposing AI outright. You can center the argument that it’s plainly unethical to deploy technologies that we don’t understand, that are inherently unreliable, and that may drastically harm culture, society, and individual people.

AI safety organizations can call out AI labs’ PR pieces about their safety efforts as drastically inadequate. You can advocate for a slowdown or moratorium on research in its current directions. You can advocate for research toward alternative, inherently safer technologies.

I suggest also that the AI safety community should criticize vague utopian promises as misleading advertising hype. If there isn’t a plausible specific path to safe AI, we should oppose it outright.

Funders can shift priority from “find a way to make AI safe” to “find ways to halt unsafe AI research, development, and deployment.”

Governments can regulate AI, requiring strong evidence of safety before deployment. You can stop funding research that aims to increase AI capabilities without commensurate safety guarantees. You can fund countermeasures such as those recommended in this chapter.

  1. Seth Lazar’s “Legitimacy, Authority, and the Political Value of Explanations” makes a clear case that current AI is politically illegitimate because it is powerful, error-prone, and uninterpretable.
  2. See Kate Crawford’s Atlas of AI for an overview. AI wielded by oppressive states deliberately harms dissenters and disfavored minorities. AI wielded carelessly by more benign governments and by corporations often does unintentional harm, because it doesn’t work well or has unanticipated damaging side effects. Then, because “the artificial intelligence said so”—for uninterpretable reasons—the harmed may lack avenues for redress.