Recognize that AI is probably net harmful

Actually-existing and near-future AIs are net harmful—never mind their longer-term risks. We should shut them down, not pussyfoot around hoping they can somehow be made safe.

I find a failure of nerve in both the AI ethics and AI safety communities. They tacitly assume that AI is inevitable, so all we can hope for is to lessen its worst effects. AI ethics aims mainly to prevent harmful but non-catastrophic misuses of current technologies, rather than bringing AI itself into doubt. The AI safety community has mainly tried to figure out how to prevent the end of the world using philosophical reasoning from first principles, which has failed. The movement’s more recent, more realistic efforts have proposed bending current technology, rather than rejecting it.

Both fields regard the expected utility of AI as negative on its current path. Neither has a credible plan for altering the path to make it a net positive. We should admit this, and aim to cut it off instead. I believe that is feasible.

Regarding current AI, I know of few significantly valuable uses, apart from recommenders. It has been “on the verge” of providing untold benefits for innumerable fields throughout my lifetime, but mostly has not delivered. The application lists my web searches have turned up consist mainly of experimental prototypes and vaporware fantasies. Automated medical diagnosis, for example, has been demonstrated in the lab since the 1970s, and new systems are hyped regularly, but it is rarely used in clinical practice because it doesn’t work well enough.

It’s common to read that “applications of artificial intelligence are already revolutionizing a host of industries,” but on examination such claims do not hold up:

AI is justified mainly on the basis of promises of vague future benefits, rather than current uses. Although some benefits are likely, I have found no specific case for a net positive future.

Reasoning often jumps from “superhuman intelligence” to “utopia” with no concrete scenario intervening. Typically, the hope is that AI will speed science and engineering, which drive material progress. In a later chapter, I suggest that we can dramatically speed science, engineering, and material progress, but that AI is unnecessary and probably mostly irrelevant. My final chapter suggests that social and cultural improvements are also important in a future we’d like. It seems that current AI is rapidly degrading society and culture—as the roller derby scenario dramatizes—making it net negative.

Alternatively, advocates invoke generic techno-optimism: all knowledge is good, because we can choose to use it for good. There are many counterexamples; biological weapons, for instance. We can and should choose which technologies to develop, instead of rushing blindly into “this AI thing is fascinating and unexpectedly powerful—let’s spend hundreds of billions of dollars to make it even more so.”

The AI ethics community, in contrast with the AI safety one, recognizes that it is power, not intelligence, that is dangerous;¹ and that AI-enabled power is already often harmful.² AI wielded by oppressive states deliberately harms dissenters and disfavored minorities. AI wielded carelessly by more benign governments and by corporations often does unintentional harm because it doesn’t work well, or has unanticipated damaging side effects. Then, because “the artificial intelligence said so”—for uninterpretable reasons—the harmed may lack avenues for redress.

What you can do

Everyone can pay some attention to AI as it develops. The rate of technical progress, and its effects, are both impossible to predict. Most experts say they can barely guess at what AI will or won’t be able to do in even a couple of years’ time; no one can look far ahead. If progress continues at the startling pace of 2021-2023, your life may soon be affected, for better or worse. There may be little warning, so it would be wise at least to watch new developments out of the corner of your eye.

It seems unlikely that AI will soon automate many jobs entirely out of existence. More likely is that some parts of your work can and will be automated. That may be either good or bad for people in your occupation. Many observers fear that high quality AI image generators will put most artists out of work. Others suggest that they will greatly increase artists’ productivity, in which case commissioning custom artwork might become affordable to many more people, which could increase demand even faster than productivity, driving up artists’ income.

You know the details of your work better than anyone else, so you can probably predict better than AI experts which parts will be automated, if you take a lay interest in new developments as they happen.

Technology professionals can reflect honestly on the broader effects and value of your work. Profit is good, but not if it generates significant negative externalities. What are possible downstream consequences of the AI you are using, or contemplating using?

AI ethics activists can take seriously the possibility of much worse outcomes than those you currently address. If those seem plausible, you might rethink priorities. What’s the longer-term game plan? Demanding piecemeal regulation of particular misuses may miss the tiger for the mosquitoes. Can you envision a positive future for AI? Do you expect to steer us into that, in the face of governments and corporations with vast incentives to ignore harmful effects? If not, consider opposing AI outright. You can center the argument that it’s plainly unethical to deploy technologies that we don’t understand, that are inherently unreliable, and that may drastically harm culture, society, and individual people.

AI safety organizations can call out AI labs’ PR pieces about their safety efforts as drastically inadequate. You can advocate for a slowdown or moratorium on research in its current directions. You can advocate for research toward alternative, inherently safer technologies.

I suggest also that the AI safety community should either make a serious case for positive expected value, or drop the utopian fantasies. If there isn’t a plausible specific path to safe AI, should you not oppose it outright?

Funders can shift priority from “find a way to make AI safe” to “find ways to halt unsafe AI research, development, and deployment.”

Governments can regulate AI, requiring strong evidence of safety before deployment. You can stop funding research that aims to increase AI capabilities without commensurate safety guarantees. You can fund countermeasures such as those recommended in this chapter.


  1. Seth Lazar’s “Legitimacy, Authority, and the Political Value of Explanations” makes a clear case that current AI is politically illegitimate because it is powerful, error-prone, and uninterpretable.
  2. See Kate Crawford’s Atlas of AI for an overview.