Your scientists were so preoccupied with whether or not they could, they didn’t stop to think if they should.
—Dr. Ian Malcolm, in Jurassic Park

I have felt it myself. The glitter of nuclear weapons. It is irresistible if you come to them as a scientist. To feel it’s there in your hands, to release this energy that fuels the stars, to let it do your bidding. To perform these miracles, to lift a million tons of rock into the sky. It is something that gives people an illusion of illimitable power, and it is, in some ways, responsible for all our troubles — this, what you might call technical arrogance, that overcomes people when they see what they can do with their minds.
—Freeman Dyson

Technologists need to be educated both in how to spot risks, how to respond constructively to them, and how to maximize safety while still moving forward with their careers. They should be instilled with a deep sense of responsibility, not in a way that induces guilt about their field, but in a way that inspires them to hold themselves to the highest standards.
—Jason Crawford, “Towards a philosophy of safety”1
You’ve heard all this before, and you don’t want to hear it again. The quote from the fictional Dr. Malcolm is a pop-cultural cliché. Self-interested entrenched institutions have used “There are Things Man Was Not Meant to Know” to block life-enhancing progress for centuries. It is nearly impossible to counsel scientific responsibility without sounding like a morally hectoring junior high school teacher.
Nevertheless, I would urge both AI researchers and AI futurists to consider their directions carefully.
AI researchers have tended to dismiss safety concerns, partly because the field moved so slowly, and partly because risks were usually framed in terms of extreme, paperclip-style scenarios. If you have read this far in my book, you probably believe some concern is now warranted. I suggest that you make that known to your colleagues, and discuss with them the possible consequences of the work you do.
Some AI futurists want to create inherently safe superintelligent AI, for the benefits it would bring. I suggest that this is unrealistic, and I recommend reconsidering. If you think superintelligent AI will be a net good, make a specific case for how that’s going to work, rather than a handwave like “it will cure cancer, somehow, because superintelligence can do anything, and we can probably sort out the safety issues somehow eventually.”
Superpower cannot be made safe. Abstract conceptual approaches to alignment have reached a dead end. Training “neural” networks to behave better cannot yield safety, because the technology is inherently unreliable. AI may become safer or less safe, with better or sloppier engineering, but actual safety is out of the question.
The means (technological awesomeness) do not justify the ends (an apocalypse). As safety researchers have pointed out, institutions supposedly founded to prevent Scary AI have done more to hasten it than the merely profit-seeking AI labs.
I have felt it myself.2 The glitter of artificial intelligence. It is almost, but not quite, irresistible if you come to it as a scientist. To feel it’s there in your fingers, to code up this power that fuels human progress, to make it do your bidding. To perform these miracles, to create superhuman beings out of mathematics. It is something that gives people an illusion of illimitable power, and it is, in some ways, responsible for all our troubles — this, what you might call technical arrogance, that overcomes people when they see what they can do with their minds.
- 1. The Roots of Progress, September 16, 2022.
- 2. David Chapman, Vision, Instruction, and Action, MIT Press, 1991.