How to avert an AI apocalypse

We can and should protect against current and likely future harmful AI effects. This chapter recommends practical, near-term risk reduction measures. I suggest actions for the general public, computer professionals, AI ethics and safety organizations, funders, and governments.

Except in extreme scenarios, an AI apocalypse might be fought as it unfolds. However, it would be far better to act ahead of time, to forestall or limit the damage from bad outcomes.1 The time is now. It’s especially now if you accept the previous chapter’s suggestion that an apocalypse has already begun.

In extreme scenarios, a superintelligent AI might destroy the world before we have time to react. Discontinuous innovations can’t be predicted, and might be instantaneously catastrophic. However, even unexpected and potentially overwhelming AI threats might be prevented or defeated if we’ve previously put in place mundane generic mitigations, like adequate cybersecurity.

The actions a hostile AI might take are mainly ones hostile people or institutions might also take. AI risks are exploits on pools of power, so guarding those pools provides broader benefits. Working through plausible scenarios in gritty detail may serve as a thought experiment for imagining pools of power that do not yet exist.2 Preventing misuse of power covers disaster scenarios that don’t necessarily involve AI. That may be more appealing to publics or governments who are skeptical of AI doom. It also makes the effort well spent even if Scary AI never happens.

Identifying and mitigating concrete threats will require tedious realism, boring engineering, frustrating political coalition building, and massive infrastructure construction. These are bigger projects than AI risk organizations can take on unaided. You and your organization can help.


  1. José Luis Ricón’s “Set Sail For Fail? On AI risk” also advocates this approach. This has not been the mainstream approach in the AI safety field, which has mainly sought abstract, magic-bullet solutions to extreme scenarios. Ricón and I are both outsiders to the AI safety community, and have significant experience in multiple science, engineering, and business fields, which gives us a similarly pragmatic orientation.
  2. Ricón also advocates this. Section 7 of his “Set Sail For Fail?” sketches some examples; he suggests “wargaming” them and others to discover vulnerabilities and defenses. As a thought experiment, even positing a god-like AGI that magics up superpowers may help brainstorm realistic risks that might otherwise get overlooked.