AI may radically accelerate technology development. That might be extremely good or extremely bad. There are currently no good explanations for how either would happen, so it’s hard to predict which, or when, or whether. The understanding necessary to guide the future to a good outcome may depend more on uncovering causes of technological progress than on reasoning about AI.
Holden Karnofsky writes:1
By “transformative AI,” I mean “AI powerful enough to bring us into a new, qualitatively different future.” The Industrial Revolution is the most recent example of a transformative event; others would include the Agricultural Revolution and the emergence of humans.
His example is “AI systems that can essentially automate all of the human activities needed to speed up scientific and technological advancement.” That might lead to a material paradise, or to robopocalyptic doom.
I think scientific discovery and technology development can be accelerated dramatically. Advanced computational methods, involving complex statistics and ever more powerful computers, will surely be involved. (They already are: physics simulations are the main use of conventional supercomputers.) I expect robotic laboratory automation will also be important; that is why I used to run a lab automation company.
Whether advanced scientific computing gets called “AI” or not seems fairly arbitrary and unimportant. I don’t see any reason to think that mind-like or general AI would be required (nor does Karnofsky suggest that).
“Transformative” AI is a different sort of Scary AI, but what specifically it might consist of is anyone’s guess.
We should investigate possible negative consequences of sudden, dramatic speedups in science and technology. Some technologies are inherently dangerous, such as nuclear and biological weapons. Some are risky (although not necessarily harmful) in that they redistribute power. The Industrial Revolution dramatically shifted the relative power of particular nations, and of classes within them. AI-driven social networks are now shifting power away from established institutions and toward diffuse memetic trends such as QAnon and Black Lives Matter.
How can we mitigate the dangers of increasingly powerful computer technology in advance? And, given that speeding up science and technology might be extremely beneficial, should we try to make that happen now? Or wait until we’re confident it won’t be disastrous? If we want to go ahead, how? The role advanced computational methods might play in accelerating innovation is one question among many worthy of investigation. I’ll return to these issues later.
1. Holden Karnofsky, “Forecasting Transformative AI, Part 1: What Kind of AI?”, Cold Takes, Aug 10, 2021.