We can and should protect against AI's current harmful effects and its likely future ones. This chapter recommends practical, near-term risk reduction measures, with suggested actions for the general public, computer professionals, AI ethics and safety organizations, funders, and governments.
An AI apocalypse might be fought as it unfolds. However, it would be far better to act ahead of time, to prevent or limit the damage.1
The time to act is now. That’s especially clear if you accept the previous chapter’s suggestion that an apocalypse has already begun.
In theory, a superintelligent AI might destroy the world before we had time to react. However, even such discontinuous, unanticipated AI threats might be prevented or defeated if we’ve previously put in place general-purpose safeguards, such as adequate cybersecurity.
AI risks are exploits on pools of technological power. Guarding those pools also protects against their exploitation by hostile people or institutions, which makes the effort well-spent even if Scary AI never happens. That may make the case more appealing to publics, or governments, that are skeptical of AI doom. Working through AI risk scenarios in gritty detail may also serve as a thought experiment for discovering hostile uses of power we have not yet imagined.2
Identifying and mitigating concrete threats will require tedious realism, boring engineering, frustrating political coalition building, and massive infrastructure construction. You and your organization can help in these large efforts.
- Pervasive digital surveillance and inadequate cybersecurity feature both in extreme AI doom scenarios and in the medium-sized catastrophes I discussed in the previous chapter. They also empower bad human actors right now. These are urgent problems, discussed in the first two sections of this chapter. The practical measures we can take against them now probably have negative cost: they would be worthwhile even if AI turns out to have no bad consequences.
- Current AI systems are built on technologies that we don’t understand, but that we do know are inherently unreliable and actively deceptive. They should be deprecated, avoided, regulated, and replaced. Specific, neglected science and engineering investigations can help with that.
- The previous chapter identified AI’s corrosive influence on society and culture as a key risk. A section in this one suggests ways to reinforce individuals and institutions against memetic attack.
- Many in the AI ethics and safety fields believe AI has negative expected future value. In the absence of any good argument to the contrary, we should agree. The final section of this chapter recommends activism to slow or end AI research and deployment.
1. José Luis Ricón’s “Set Sail For Fail? On AI risk” also advocates this approach. This has not been the mainstream approach in the AI safety field, which has mainly sought abstract, magic-bullet solutions to extreme scenarios. Ricón and I are both outsiders to the AI safety field, and have significant experience in multiple science, engineering, and business disciplines, which gives us a similarly pragmatic orientation.
2. Ricón also advocates this. Section 7 of his “Set Sail For Fail?” sketches some examples; he suggests “wargaming” them and others to discover vulnerabilities and defenses. As a thought experiment, even positing a god-like AGI that magics up superpowers may help brainstorm realistic risks that might otherwise get overlooked.