Most funding for AI research comes from the advertising industry. Their primary motivation may be to create a positive corporate image, to offset their obvious harms. Creating bad publicity for AI would eliminate their incentive to fund it.
Shutting down actually-existing AI may seem impossible. I suggest that this underestimates the resources available for doing so, and overestimates the resistance.
AI’s hype machine deliberately generates a sense of inevitability, of an unstoppable force. It is the belief that AI would be extraordinarily valuable (if only it didn’t destroy the world) that makes opposing it seem difficult. However, the public are well aware from watching science fiction movies that AI usually does try to destroy the world, and can be halted only by heroes fighting back.1
I suspect AI is a paper tiger, one that can be taken down fairly easily. We just need to create bad publicity, because AI is mostly a publicity stunt itself. It may not be difficult to persuade the public that AI is inherently dangerous and harmful and should be stopped. This requires coordinated political action; but if AI is mainly a public relations stratagem in the first place, government intervention may not even be necessary.
AI has few significant current uses, because the technology is inherently error-prone. It has one dominant application, advertisement targeting. That use depends on another technology, internet surveillance, which the public opposes; shutting that down is feasible and likely, and it may take AI with it.
Mooglebook (the advertising industry) provides most of the funding for advanced AI research. Their research projects have long had no clear prospect of profit. The recent spectacular progress in text generation came as a surprise to everyone. Microsoft and Google have recently incorporated text generators into multiple products. There are reasons for skepticism about how well that works in practice, and about whether it can turn a profit once the billions of dollars in sunk investment are taken into account. This remains to be seen; but counting costs, non-recommender AI research may never be profitable at all. I suspect it is mainly an effort to maintain a good public image. If we sour the public perception of AI, big tech may have no motivation to pursue it.
Internet advertising has a serious public relations problem (and rightly so). As long as AI has a good public image, funding it is good PR. “AI will cure cancer, and we’re the ones paying for it!” helps offset “We are spying on your thoughts and using that information to destroy the institutions your survival depends on.”
Hyping AI as the shining future creates a dazzling halo effect to make advertising technology companies look attractive to their customers (namely advertisers), to financial markets, to current and prospective employees, and to the public. Just pointing out that most AI research is an advertising industry attempt to improve its reputation may be enough to turn the public against it.
Advertising is most of the business of Facebook and Google, and a sizeable chunk of the business of Amazon and Microsoft. They have to convince their customers that internet advertising is worth paying for. Many industry insiders doubt this is true.2 “We use advanced artificial intelligence for ad targeting, which is why it works so well” is part of the sales pitch.
Many of the AI companies’ publicly visible products are shoddy. When users are frustrated with Mooglebook’s email service screwing up, it’s useful if they think “Well, Mooglebook is full of super geniuses who are curing cancer with artificial intelligence, so email must somehow be much harder than it looks.” How about spreading the counter-message: “You can’t trust Mooglebook to deliver your email; why would you trust them with AI?”
AI doomers might seem like natural enemies of the AI industry, but they have been great PR for it, because they are true believers in “AI is going to be astoundingly powerful real soon now.” They are the most vocal advocates for that belief, which keeps the hype train running. Now some are saying “we failed; we discovered there is no way to make AI not destroy the universe; we’re all going to die.” This is awkward for everyone, because destroying the universe might upset the public and be bad PR.
What you can do
So, how can you help shut down AI research with bad publicity? This is extremely not my area of expertise, but here are some speculations.
Probably the relevant expertise is consumer safety activism. This is a large field with an effective playbook that’s been applied to automobile safety, food additives, medical devices, toys, and so on. (In some cases, I think the application was misdirected and net harmful, but it is often effective.)
The general strategy would be to create the public perception that AI is inherently sketchy, and that products based on it are unreliable and often harmful. The message might go something like:
Giant, greedy Silicon Valley corporations are foisting unsafe, untested new technology on the public, and it’s harming you right now. The internet is essential to every part of life, and you can’t escape the companies that make it. What you can do is demand that they remove their creepy “neural” systems, and stop trying to read and manipulate your mind. Like microplastics in your water, these are the insidious, hidden toxins in your phone. We used to have to drive cars that just exploded, until the consumer safety movement forced fundamental changes on Detroit. It’s time to do the same for Silicon Valley.
If you are a tech person, that may sound manipulative and over the top, but none of it is false.
The movement can also highlight privacy and security, pointing out that Silicon Valley keeps promising them, but actively undermines them instead. As I suggested earlier, the public is already getting angry about this; tying it in with the anti-AI message is both intuitive and technically justifiable.
Everyone can help spread the word.
Computer professionals understand the problem best, and can lend the authority of expertise to your statements.
If you are a tech person, I’d like to enlist you in saying “Yes, for well-understood technical reasons, it’s true that ‘neural’ networks are inherently unreliable. In my professional opinion, they should be avoided when possible, and limited to uses where getting wrong answers doesn’t matter—mostly entertainment.” (Gradient Dissent explains those reasons.)
Anti-AI messaging might go best as part of a general software consumer safety movement. If you are a tech person, you already know it’s way past time for that. Most software is shoddy, untested, and risky, and somehow we just put up with it. We bear the costs, not the companies that produce it. We shouldn’t. We should demand software that works. That will require fundamental changes in the industry, and it will be totally worth it.
Technology executives are not Dr. Evil, and would rather not destroy the universe. Like everyone in the industry, you want to do good, make money, and explore exciting technological frontiers. You can recognize that AI is unfixably unreliable and unacceptably risky, and that your “responsible AI” public messaging is not going to work. Then you can pivot to other innovations.
AI ethics and safety organizations can oppose AI explicitly, and can criticize the Mooglebook labs specifically. You can point out specific ways their research is irresponsible, dangerous to the public, and badly motivated. You can attack their PR statements about their own wisdom and benevolence, and the technological inevitability of a glorious AI future, as the vapid waffle they in fact are.
Funders can direct funds to anti-AI publicity efforts.
Governments can express official concern, doubt, and a generally hostile attitude, leading to demands for accountability, investigations, regulations, and penalties.
- 1. A mid-2023 poll found that “86% of voters believe AI could accidentally cause a catastrophic event, and 70% agree that mitigating the risk of extinction from AI should be a global priority alongside other risks like pandemics and nuclear war.” Any poll result should be taken with a grain of salt, and opinions might change rapidly if AI delivers obvious benefits, but this accords with my sense of the public. The Artificial Intelligence Policy Institute, “Poll Shows Overwhelming Concern About Risks From AI as New Institute Launches to Understand Public Opinion and Advocate for Responsible AI Policies,” August 11, 2023.
- 2. Hacker News has a perpetual running argument about this. I find the “doesn’t work” position plausible, but don’t know enough to have an opinion. Tim Hwang’s Subprime Attention Crisis: Advertising and the Time Bomb at the Heart of the Internet makes a book-length case. I haven’t read it.