“Narrow” AI systems, specialized for particular technical tasks, are probably feasible, useful, and safe. Let’s build those instead of Scary superintelligence.
“AI” is a famously indefinite term. It means quite different things in different contexts; and, worse, none of those things are clear or specific either. We can’t meaningfully address “could AI cause a dramatic technological transformation of everything, and if so when” without some clarity about what we’re talking about.
“Superintelligent AGI” certainly could accelerate innovation, but we know that only by definitional fiat: the term means “AI that is better than humans at everything.” So, yes, if that were a thing, it would be better at technology development. How much better? Specifically what could it do, and how quickly? We have no clue what superintelligent AI would be like, or how it would do research and development differently, so we can’t say.
The superintelligence scenario posits a singularity: the progress curve goes to infinity. This is a magical solution to all problems; a deus ex machina bursting forth from Mooglebook Labs and brandishing a three-step plan:
- AI.
- ???
- Utopia!
That can’t be ruled out, but neither can benevolent interstellar aliens showing up and delivering immortality, tasty veggie burgers, and flying cars for everyone. Neither scenario has actionable implications for the present.
We don’t know enough about technology development to estimate inherent limits to acceleration, if any. In the paperclip scenario, the AI figured out everything about everything in a few seconds because it was so superintelligent. We can’t know whether that is possible, even in principle. However, it seems likely that science and engineering both require conducting inherently time-consuming experiments in the material world, which puts hard constraints on acceleration. Figuring out which are the best experiments to do may save a lot of wasted time, but we don’t know how much.
Anyway, superintelligent general AI would probably be bad, so let’s not go there, not if we can avoid it. Probably a “narrow AI,” meaning one that just did science stuff—or, more likely, many narrow AIs with different specializations—would suffice. That seems safer. Narrow science AIs need not be similar to human scientists, engineers, or mathematicians. Mind-likeness seems unnecessary (and scary).
Since “AI” is such a vague term, it’s not clear how “narrow AI” differs from “advanced computer systems” in general. We’re already using those in science applications. There seems to be a spectrum of imaginable systems that range from definitely not AI to definitely AI, with intermediate points that are not clearly one or the other.
- Most supercomputer time has been used for arithmetic on large matrices in physics simulations: stress analysis for mechanical engineering, fluid dynamics for weather prediction, n-body mechanics for astrophysics and molecular modeling, and so on. Those are definitely not AI. (Why not?)
- Actually-existing “neural network” AI also just does arithmetic on large matrices, usually to predict which ads you will click on. That’s “AI,” but maybe not Real AI. (Why is it classified differently from fluid dynamics? From Real AI? The matrix-arithmetic sketch after this list shows how similar the two computations are.)
- “Neural networks” are already applied, as a data analysis method, to specific scientific problems. (Is this AI? Real AI? Why or why not?) Frequent success announcements support the plausibility of accelerating science with AI, but it turns out that in many, if not most, cases researchers were fooling themselves.[1] (The leakage sketch after this list shows a typical way that happens.) The best case so far is AlphaFold, which predicts protein shapes. It includes several sorts of complex domain-specific machinery apart from its “neural” network, including traditional physical simulation. (Does that mean it is not Real AI?) AlphaFold is overhyped, but somewhat useful.[2] Generally, I doubt using “neural networks” for scientific data analysis will result in radical transformation, but it’s not impossible.
- For decades, AI researchers have built “artificial scientists” which “reason about experiments.” These have all been laughably weak. That’s because they automate simplistic misunderstandings of what scientists do. Automating a more realistic general understanding of scientific work might count as Real AI. However, we don’t know much about how humans do science, nor what sort of human intelligence is involved. I’ll come back to this later.
- A god-like superintelligence that invents supersymmetric zeptobots would definitely count as Real AI. However, by definition that’s impossible to reason about, so there’s no point considering it.
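To make the comparison in the first two bullets concrete, here is a minimal sketch of the two kinds of “arithmetic on large matrices.” It is my illustration, not code from any actual supercomputing or ad-targeting system; the sizes, parameters, and random data are made up. One half is an explicit time step of a toy heat-diffusion simulation; the other is a forward pass through a tiny “neural network.” Structurally they are the same operation: multiply a vector by matrices, with an elementwise nonlinearity thrown in for the network.

```python
# Illustrative only: toy sizes, random data, made-up parameters.
import numpy as np

n = 100                                   # grid points for (a), input features for (b)

# (a) Physics simulation: one explicit finite-difference step of 1-D heat
# diffusion, written as a matrix-vector product. Supercomputers spend most
# of their time on (much larger) versions of this, and nobody calls it AI.
alpha, dt, dx = 0.5, 0.01, 1.0
laplacian = np.diag(np.ones(n - 1), 1) - 2 * np.eye(n) + np.diag(np.ones(n - 1), -1)
step = np.eye(n) + (alpha * dt / dx**2) * laplacian
temperature = np.random.rand(n)
temperature = step @ temperature          # advance the simulation one time step

# (b) "Neural network" AI: a forward pass through a two-layer network.
# The same kind of operation: multiply by matrices, here with an
# elementwise nonlinearity (ReLU) between the multiplications.
W1, b1 = np.random.randn(64, n), np.zeros(64)
W2, b2 = np.random.randn(1, 64), np.zeros(1)
hidden = np.maximum(0.0, W1 @ temperature + b1)
prediction = W2 @ hidden + b2             # e.g. a raw "will they click?" score
```

Arguably the difference in classification lies in where the matrices come from (derived from physical law versus fit to click logs), not in the arithmetic itself.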
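And here is a hypothetical sketch of the most common way researchers fool themselves: “leakage,” where information from the test data contaminates training. It is my illustration of the failure mode Kapoor and Narayanan document, not code from their paper; the dataset is pure noise and the numbers are made up. Selecting “informative” features on the whole dataset before splitting it lets the test labels influence which features the model sees, so the reported accuracy looks impressive even when there is nothing to find.

```python
# Hypothetical illustration of leakage via feature selection before the split.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5000))       # pure-noise "measurements"
y = rng.integers(0, 2, size=200)       # labels with no real signal

# WRONG: choose the 20 "best" features using the whole dataset, then split.
X_leaky = SelectKBest(f_classif, k=20).fit_transform(X, y)
Xtr, Xte, ytr, yte = train_test_split(X_leaky, y, random_state=0)
leaky_score = LogisticRegression().fit(Xtr, ytr).score(Xte, yte)

# RIGHT: split first, choose features using only the training set.
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
selector = SelectKBest(f_classif, k=20).fit(Xtr, ytr)
honest_score = LogisticRegression().fit(selector.transform(Xtr), ytr).score(
    selector.transform(Xte), yte
)

print(f"leaky: {leaky_score:.2f}, honest: {honest_score:.2f}")
# Expect the leaky score to sit well above chance on pure noise,
# and the honest score to hover around 0.5.
```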
1. Sayash Kapoor and Arvind Narayanan give numerous examples in “Leakage and the Reproducibility Crisis in ML-based Science” at reproducible.cs.princeton.edu. See also Elizabeth Gibney’s reporting on the problem in “Could machine learning fuel a reproducibility crisis in science?”, Nature, 26 July 2022.
2. Derek Lowe, “Why AlphaFold won’t revolutionize drug discovery,” Chemistry World, 5 August 2022.