Gaining unauthorized access to computer systems is a key source of power in many AI doom scenarios. That is easy now: incentives for serious cybersecurity are scant, so nearly all systems are radically insecure. Technical and political initiatives must mitigate this problem.
Doom scenarios often attribute superhuman internet hacking ability to superintelligent AI. With that, it can grab control of computers and bank accounts for its own use, gain access to secret information, control communication channels, and take over military drones and factory automation.1 Safety recommendations often involve preventing AI from accessing the internet.
Unfortunately, the internet is already mainly controlled by AI, and few if any AI technologies are isolated from it. If someone deliberately builds a Scary AI, they could try to “keep it in a box” by blocking its internet access.2 So far, though, labs have put many cutting-edge AI systems on the net after only desultory testing—often with embarrassing results.3 Extrapolating, we should expect developers would give Scary AI full internet access from the moment of its conception. That implies hardening the public net, instead of relying on a box for containment.
Unfortunately too, superhuman skills are not necessary for hacking. This is an enormous human-species vulnerability now, and it is not taken nearly as seriously as it should be.
Internet security is mainly fictional. Almost daily there’s a news report about a major corporation or government agency that has been penetrated by human hackers who extracted the sensitive personal data of millions of people, or that has been shut down by a ransomware or DDoS attack. Anecdotally, most successful attacks are hushed up, and we never hear about them.
This is an example of a large, recently-created, rapidly expanding, inadequately guarded pool of power and resources which poses severe risks. (Those potentially include total nuclear war, depending on how secure you imagine the relevant communication channels are.4) Securing computer networks protects against unaligned autonomous AI. It also protects against unaligned conventional hackers, unaligned hackers using AI systems as tools, and unaligned organizations (cybercrime companies and hostile state agencies).
There are two obstacles: the incentives for the relevant decision makers point in the wrong direction, and doing the right thing will be unavoidably difficult and expensive.
Accountability for cybersecurity failures is nearly nonexistent.5 Organizations whose security policies and implementations were glaringly deficient, and whose systems were hacked, harming millions of innocent people, nearly never suffer significant penalties. On the other hand, cybersecurity implementations are expensive, and (to be fair) are inadequate even when conforming to best practice. Current cybersecurity consists of layers and layers of band-aids over systems that were not designed for security, and which are riddled with gaping holes.
We do know how to build intrinsically secure computer systems. “Intrinsically secure” does not mean “absolutely secure”; there always remain gaps between theory and implementation. Intrinsically secure systems are ones in which all parts are built to be secure themselves, rather than hiding the mass of intrinsically insecure software behind a patchwork of thin protections. Also, “secure” does not mean “safe”; any software can cause harm if misused, just as no amount of safety engineering can prevent deliberate injury with power tools.6
Methods for intrinsic security include:
- Capability-based computer architecture, which enforces security and correctness at the hardware level
- Capability-based operating systems, which enforce much more stringent and fine-grained permissions than conventional ones (see the sketch after this list)
- Language security, which produces intrinsically secure network software
- Formal program verification, which uses semi-automatic theorem provers to ensure correctness relative to a specification.
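To make the capability idea concrete, here is a minimal sketch in ordinary Python. The class and file names are hypothetical, and Python cannot actually enforce this discipline (that is exactly what capability-based hardware, operating systems, and languages are for), but it shows the structural difference: under the conventional “ambient authority” model, any code can open any file it names; under a capability model, code can act only through the narrow handles it has explicitly been given.

```python
# Minimal sketch of the object-capability idea (illustrative only; names are
# hypothetical, and ordinary Python cannot enforce this discipline).

# Conventional "ambient authority" style: any code can reach any file by name,
# so a single compromised function can read or destroy anything the process can.
def save_report_ambient(text: str) -> None:
    with open("/tmp/report.txt", "w") as f:  # authority appears out of thin air
        f.write(text)

# Capability style: a function can act only through the specific, narrow
# handles it is explicitly handed. Here the "capability" is an append-only
# writer bound to one fixed file.
class AppendOnlyFile:
    """A capability granting append-only access to a single, fixed file."""

    def __init__(self, path: str) -> None:
        self._path = path  # held privately; never exposed to callers

    def append(self, text: str) -> None:
        with open(self._path, "a") as f:
            f.write(text)

def save_report_capability(report_log: AppendOnlyFile, text: str) -> None:
    # This function can append to exactly one file, and do nothing else:
    # no reading, no deleting, no touching other paths.
    report_log.append(text)

if __name__ == "__main__":
    log = AppendOnlyFile("/tmp/report.txt")  # capability minted at the top level
    save_report_capability(log, "quarterly numbers\n")
```

A capability-based operating system or language makes the second style the only one available, so a compromised or malicious component can misuse only the specific authorities it was handed, rather than everything the user or process can touch.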
These techniques are reasonably well-understood, and have transitioned from research to limited practical application. Currently, they are considered too expensive and difficult for anything other than safety-critical systems. In practice, that includes nearly nothing, since there is so little incentive for adequate cybersecurity.
A large technology development and transfer effort is needed. It might take a decade to complete, but will bear some fruit sooner.
Significantly improving cybersecurity requires changing incentives: to bring more systems at least up to the level of current best practice, to build better tools for creating intrinsically secure replacements, and to broaden their use.
What you can do
Everyone can spread the word that it is outrageous and unacceptable for companies and government agencies to carelessly let cybercriminals and hostile states get access to private personal data. Make a point of this on social media. Demand legislation for financial and legal accountability.
Computer professionals can exert pressure within their organizations to take cybersecurity seriously.
You could also consider doing cybersecurity research and development work. The intrinsic security technologies I described above are among the most intellectually interesting and socially valuable in all of computer science, in my opinion. They combine deep theoretical insights with engineering challenges and potentially vast human benefit.
If you work in AI, or intend to, you could consider this as a more ethical and equally fascinating alternative career.7
AI ethics and safety organizations can lend public support to the effort by explaining how current unethical uses of AI, and the risk of Scary AI, make cybersecurity even more pressing. Include it explicitly in your statement of your domain of concern.
Funders can support both policy advocacy and the technical work.
This area needs advanced development, more than research: projects to make intrinsic security methods less expensive, and easier to use. That “technology transfer” work is underfunded—barely funded at all—because it’s too big for academic projects, and the technology industry does not see a way to profit from it. Even some critical current cybersecurity technologies, which billions of people depend on, are maintained by unpaid, overwhelmed volunteers.8
A Focused Research Organization might provide the right structure for this work. Focused Research Organizations are a new institutional structure for solving technological challenges that are too large for academia, too risky for industry, and too difficult for government.
Governments can force accountability for cybersecurity through legislation, regulation, and reputational threats. They can fund the development of intrinsic security technologies through both grants and procurement contracts.
- 1. José Luis Ricón, “Set Sail For Fail? On AI risk,” Nintil, 2022-12-12. Section 6.1 is a good discussion, and includes a list of real-world incidents that provide good reasons to be scared.
- 2. See the “Boxing” section in Wikipedia’s “AI capability control” article.
- 3. The most famous example was Microsoft’s 2016 Tay chatbot, shut down after only sixteen hours because it was easy to get it to say offensive things. The most recent example (November 2022) was Facebook’s Galactica, which was supposed to provide summaries of scientific knowledge. Much of what it produced sounded plausible, but was entirely false. Facebook’s AI lab shut it down after only three days, claiming that this was also because “trolls” had figured out how to make it say offensive things. The much more serious issue, though, was that scientists found it was dangerously unreliable. Will Douglas Heaven, “Why Meta’s latest large language model survived only three days online,” Technology Review, November 18, 2022.
- 4. It would be nice to think a cybersecurity agency in each country with nuclear weapons has adequately secured its military command networks. Based on past incidents and the difficulty of the task, I doubt it.
- 5. Moshe Vardi, “Accountability and Liability in Computing,” Communications of the ACM, November 2022.
- 6. It may seem contradictory that I advocate building intrinsically secure conventional computer systems, whereas I advocate the “layers of band-aids” approach to AI safety. That is because we know how to do the former, whereas I am skeptical about current approaches to building intrinsically safe (“aligned”) AI systems. I may be wrong; and also if we find a better base technology for AI than “neural” networks (as Gradient Dissent recommends), it’s more likely we could make it intrinsically safe.
- 7. Unfortunately, it doesn’t pay as well yet. This should change.
- 8. The famous wake-up case was the “Heartbleed” vulnerability in OpenSSL, the encryption library that secures much of the web; at the time of disclosure, an estimated 17% of the internet’s secure web servers were vulnerable. OpenSSL then had only $2,000 per year in funding, and was maintained mostly by only two people, on an almost entirely volunteer basis.