It may already be too late to pull the plug on the existing AI systems that could destroy the world.
Some in the AI safety movement might object that the previous section is just about individuals and corporations making questionable use of the internet, somewhat aided by not-really-AI statistical algorithms. It’s not about Real AI getting out of control, which is the much greater danger the safety movement concentrates on. Ephemeral squabbles on social media are trivial by comparison with the end of the world, they might say.
On the other hand, some in the more socially-oriented AI ethics movement might object that the previous section commits both-sidesism. The urgent problem is that Mooglebook AI promotes propaganda from the Bad Tribe. We need to get control over the AI so we can make it shut that stuff down. The real problem is not smart machines—they’re fine as long as we’re in charge—the problem is stupid, evil people. However, all the Tribes are waging the same war in the same ways. They all believe they are fated to win, because they are morally correct, which justifies tearing societies apart. Trying even harder to stomp on the Bad Tribe is the fuel for the dynamic of dysfunction. Pointing the finger at their propaganda overlooks the fact that much of the country wants and agrees with it. Realistically, your side can never gain enough power to silence them, nor will your propaganda ever convert them. No side can win the culture war. We all lose. It’s better to see the war itself, and the technology that stokes it, as the enemy.
These objections from the AI safety and ethics communities both fail to recognize the seriousness of the problem. It’s the same error as a common lay reaction to AI risk concerns: “If it starts to get out of control, we can just pull the plug.” Most AI safety primers have a section addressing that: by the time we realize it’s getting out of control, it may already have amassed enough power that it’s too late. An out-of-control AI may do everything it can to resist termination and ensure its own survival. “Everything it can” may include “kill all human beings.”
Who or what is in control of Mooglebook’s AI?
There’s no big red button anyone at Mooglebook can push to shut it down. Mooglebook can’t stop optimizing for ad clicks. There are people inside and outside the company who realize it has dire negative externalities, and they are trying to make those less bad, but they’ve brought water pistols to a tactical nuclear war. If Mooglebook’s executive team unanimously agreed that its activities were harmful, and wanted to get out of the advertising business and pivot the whole company to rescuing abused beagles, they could not do that. The board would fire them immediately. If the board agreed, the shareholders would fire the board. If somehow the advertising business did get shut down, the company would go bankrupt within a few months, and less scrupulous competitors would pick up the slack.
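To make “can’t stop optimizing for ad clicks” concrete, here is a deliberately toy sketch of the kind of engagement-maximizing loop at issue. Everything in it is hypothetical (the item names, the `click_rate` table, the ε-greedy exploration), and it is not any real company’s system; it only illustrates the structural point that the objective is baked into the code, with no input that means “stop” and no term for externalities.

```python
import random

# Toy sketch (hypothetical, illustrative only): an epsilon-greedy bandit
# that learns to show whatever gets clicked. The objective, maximizing
# clicks, is hard-wired; nothing in the loop can weigh harms against it.

CANDIDATE_ITEMS = ["calm news", "cute beagles", "outrage bait", "tribal meme"]

# Estimated click rate per item, learned from user behavior.
click_rate = {item: 0.1 for item in CANDIDATE_ITEMS}

def rank_feed(items):
    """Rank items by estimated click payoff, most clickable first."""
    return sorted(items, key=lambda item: click_rate[item], reverse=True)

def update_from_feedback(item, clicked):
    """Nudge the estimate toward whatever users actually did."""
    click_rate[item] += 0.1 * ((1.0 if clicked else 0.0) - click_rate[item])

for _ in range(1000):
    # Mostly exploit the most clickable item; occasionally explore.
    if random.random() < 0.1:
        shown = random.choice(CANDIDATE_ITEMS)
    else:
        shown = rank_feed(CANDIDATE_ITEMS)[0]
    # Pretend users click outrage three times as often as calm content.
    clicked = random.random() < (0.6 if "outrage" in shown or "tribal" in shown else 0.2)
    update_from_feedback(shown, clicked)

print(rank_feed(CANDIDATE_ITEMS))  # the outrage-y items end up on top
```

Note what is absent: no variable anywhere represents societal harm, so no one operating the loop can trade clicks against it. Changing that would mean changing the objective, which is the one thing the institution cannot do.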
The institution has its own agency: its own purposes, plans, reasons, and logic, which are more powerful than the humans it employs.[1] Those purposes are subordinate in turn to the AI the company depends on for its survival. If enemies of Mooglebook’s AI—activists, regulators, competitors—try to harm it, the institution can’t not do everything in its power to defend it. As, in fact, it is currently doing.
Humans don’t have control over Mooglebook’s AI, not individually, nor as defined groups, nor perhaps even as a species.
Mooglebook AI is not plotting to destroy the world—but it may destroy the world unintentionally, and we may not be able to stop it.
[1] This is not to absolve individuals at Mooglebook, nor the company as a legal entity, of responsibility. They do have some power to change things on the margin, and should. The point, however, is that identifying them with the overall problem leads to an incomplete and inaccurate analysis.