What kind of AI might accelerate technological progress?

“Narrow” AI systems, specialized for particular technical tasks, are probably feasible, useful, and safe. Let’s build those instead of Scary superintelligence.

“AI” is a famously indefinite term. It means quite different things in different contexts; and, worse, none of those things are clear or specific either. We can’t meaningfully address “could AI cause a dramatic technological transformation of everything, and if so, when?” without some clarity about what we’re talking about.

“Superintelligent AGI” certainly could accelerate innovation, but only by definitional fiat. “Superintelligent AGI” means “AI that is better than humans at everything.” So, yes, if that were a thing, it would be better at technology development. How much better? Specifically what could it do, and how quickly? We have no clue what superintelligent AI would be like, or how it would do research and development differently, so we can’t say.

The superintelligence scenario posits a singularity: the progress curve goes to infinity. This is a magical solution to all problems; a deus ex machina bursting forth from Mooglebook Labs and brandishing a three-step plan:

  1. AI.
  2. ???
  3. Utopia!

That can’t be ruled out, but neither can benevolent interstellar aliens showing up and delivering immortality, tasty veggie burgers, and flying cars for everyone. Neither scenario has actionable implications for the present.

We don’t know enough about technology development to estimate inherent limits to acceleration, if any. In the paperclip scenario, the AI figured out everything about everything in a few seconds, because it was so superintelligent. We can’t know whether that is possible, even in principle. However, it seems likely that science and engineering both require conducting inherently time-consuming experiments in the material world, which puts hard constraints on acceleration. Figuring out which experiments are the best ones to do may save a lot of wasted time, but we don’t know how much.

Anyway, superintelligent general AI would probably be bad, so let’s not go there, not if we can avoid it. Probably a “narrow AI,” meaning one that just did science stuff—or, more likely, many narrow AIs with different specializations—would suffice. That seems safer. Narrow science AIs need not be similar to human scientists, engineers, or mathematicians. Mind-likeness seems unnecessary (and scary).

Since “AI” is such a vague term, it’s not clear how “narrow AI” differs from “advanced computer systems” in general. We’re already using those in science applications. There seems to be a spectrum of imaginable systems that range from definitely not AI to definitely AI, with intermediate points that are not clearly one or the other.
