Practical actions you can take against AI risks

We can and should protect against the harmful effects of AI, both current ones and likely future ones. This chapter recommends practical, near-term risk-reduction measures. I suggest actions for the general public, computer professionals, AI ethics and safety organizations, funders, and governments.

An AI apocalypse might be fought as it unfolds. However, it would be far better to act ahead of time, to prevent or limit the damage.[1]

The time to act is now. That’s especially clear if you accept the previous chapter’s suggestion that an apocalypse has already begun.

In theory, a superintelligent AI might destroy the world before we had time to react. However, even such discontinuous, unanticipated AI threats might be prevented or defeated if we’ve previously put in place general-purpose safeguards, such as adequate cybersecurity.

AI risks are exploits on pools of technological power. Guarding those pools also prevents disasters caused by hostile people or institutions exploiting them, so the effort is well spent even if Scary AI never happens. That framing may also make the work more appealing to publics and governments that are skeptical of AI doom. Finally, working through AI risk scenarios in gritty detail can serve as a thought experiment for discovering hostile uses of power we have not yet imagined.[2]

Identifying and mitigating concrete threats will require tedious realism, boring engineering, frustrating political coalition-building, and massive infrastructure construction. You and your organization can help with these large efforts.

  1. José Luis Ricón’s “Set Sail For Fail? On AI risk” also advocates this approach. This has not been the mainstream approach in the AI safety field, which has mainly sought abstract, magic-bullet solutions to extreme scenarios. Ricón and I are both outsiders to the AI safety field, and have significant experience in multiple science, engineering, and business disciplines, which gives us a similarly pragmatic orientation.
  2. Ricón also advocates this. Section 7 of his “Set Sail For Fail?” sketches some examples; he suggests “wargaming” them and others to discover vulnerabilities and defenses. As a thought experiment, even positing a god-like AGI that magics up superpowers may help brainstorm realistic risks that might otherwise get overlooked.