We can and should protect against current and likely future harmful AI effects. This chapter recommends practical, near-term risk reduction measures. I suggest actions for the general public, computer professionals, AI ethics and safety organizations, funders, and governments.
Except in extreme scenarios, an AI apocalypse might be fought as it unfolds. However, it would be far better to act ahead of time, to forestall or limit the damage from bad outcomes.1 The time to act is now, and especially so if you accept the previous chapter’s suggestion that an apocalypse has already begun.
In extreme scenarios, a superintelligent AI might destroy the world before we have time to react. Discontinuous innovations can’t be predicted, and might be instantaneously catastrophic. However, even unexpected and potentially overwhelming AI threats might be prevented or defeated if we’ve previously put in place mundane generic mitigations, like adequate cybersecurity.
The actions a hostile AI might take are mainly ones hostile people or institutions might also take. AI risks are exploits on pools of power, so guarding those pools provides broader benefits. Working through plausible scenarios in gritty detail may be useful as a thought experiment for imagining pools of power that do not yet exist.2 Preventing misuse of power covers disaster scenarios that don’t necessarily involve AI. That may be more appealing to publics and governments that are skeptical of AI doom. It also makes the effort well-spent even if Scary AI never happens.
Identifying and mitigating concrete threats will require tedious realism, boring engineering, frustrating political coalition building, and massive infrastructure construction. These are bigger projects than AI risk organizations can take on unaided. You and your organization can help.
- Pervasive digital surveillance and inadequate cybersecurity feature in most extreme AI doom scenarios, as well as in the less extreme ones I presented in the previous chapter; they also empower bad human actors. These urgent problems, discussed in the first two sections of this chapter, are worth addressing regardless. The practical measures we can take against them now probably have negative cost: they would be valuable even if Scary AI turns out to be distant or impossible after all.
- Current AI systems are built on technologies that we don’t understand, but that we do know are inherently unreliable and actively deceptive. They should be deprecated, avoided, regulated, and replaced. Specific, neglected science and engineering investigations can help with that.
- The previous chapter identified AI’s corrosive influence on society and culture as a key risk. A section in this one suggests ways to reinforce individuals and institutions against memetic attack.
- Many in the AI ethics and safety fields believe AI has negative expected future value. In the absence of any good argument to the contrary, we should agree. The final section of this chapter recommends activism to slow or end AI research and deployment.
- 1. José Luis Ricón’s “Set Sail For Fail? On AI risk” also advocates this approach. This has not been the mainstream approach in the AI safety field, which has mainly sought abstract, magic-bullet solutions to extreme scenarios. Ricón and I are both outsiders to the AI safety community, and have significant experience in multiple science, engineering, and business fields, which gives us a similarly pragmatic orientation.
- 2. Ricón also advocates this. Section 7 of his “Set Sail For Fail?” sketches some examples; he suggests “wargaming” them and others to discover vulnerabilities and defenses. As a thought experiment, even positing a god-like AGI that magics up superpowers may help brainstorm realistic risks that might otherwise get overlooked.