Limits to reasoning, reduction, and simulation

Most of the doing of science is not reasoning; it is the physical work of experimentation. Therefore, superhuman intelligence might not speed progress much.

Superintelligence scenarios often describe the AI making astonishing breakthroughs by reasoning from first principles, or by short-cutting experimental work by using simulation instead. It can design self-reproducing supersymmetric zeptobots just by thinking hard, and can get it right the first time. If superintelligence axiomatically implies omnipotence, then this may be possible, but that’s not actionable information for us.

Otherwise, it’s relevant that first-principles reasoning is a very small part of science and engineering, although sometimes an important one. It is relevant that simulation is necessarily imprecise and often inaccurate, so it usually cannot replace experimentation, although it is sometimes valuable.

There’s a popular image of scientific geniuses figuring things out by thinking about math in an armchair. Newton and Einstein did that, but the kind of science they did is extremely atypical, and they are misleading as prototypes. Deductive reasoning is sound only when its premises are absolute truths.1 Newtonian dynamics follow only when Newtonian axioms are absolutely satisfied, and are reliably predictive only when initial conditions are known with perfect precision. Otherwise, they may be unboundedly wrong.
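The point about initial conditions can be made concrete with a toy computation. The sketch below uses the logistic map — a standard textbook example of deterministic chaos, not a Newtonian system, but it exhibits the same phenomenon: a perfectly exact deterministic rule whose predictions degrade to worthlessness from a tiny error in the starting measurement.

```python
# The logistic map x -> 4x(1-x) is fully deterministic, yet an initial
# measurement error of one part in a trillion roughly doubles at each
# step, until the "predicted" trajectory is unrelated to the real one.

def logistic(x, steps):
    """Iterate the chaotic logistic map (r = 4) for `steps` iterations."""
    for _ in range(steps):
        x = 4.0 * x * (1.0 - x)
    return x

x0 = 0.3
eps = 1e-12  # tiny uncertainty in the "measured" initial condition

for n in (10, 30, 60):
    exact = logistic(x0, n)
    measured = logistic(x0 + eps, n)
    print(f"after {n:2d} steps: |difference| = {abs(exact - measured):.3e}")
```

Running this, the difference is still negligible after ten steps, but by around step forty it has grown to order one: the two trajectories are simply unrelated. Deduction from the dynamical law is perfectly sound; the prediction is still unboundedly wrong.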

Most of science concerns nebulous messes, not billiard balls on infinite frictionless inclined planes. Cell biology involves almost no deductive puzzle-solving. It is not advanced by theoretical reasoning; it’s done in a wet lab. Cells are glop, and there are no glop axioms.

Some AI apocalypse discussions invoke Moore’s Law to suggest that arbitrarily detailed simulations will eventually become possible: faster than running experiments in the real world, and requiring no access to robots or human servants. That, supposedly, is how the AI gets its biological warfare agents.

Most experiments cannot be done in simulation, even with unbounded computational power. The data on which those simulations would be based do not currently exist, and would be extremely difficult to produce. For example, models of biomolecular interactions are limited in accuracy due to insufficient empirical knowledge of the physics of hydrogen bonds and the entropy of solvation. In principle, you could address that with quantum mechanical simulations. However, those are also limited in accuracy due to approximations made based on empirical measurements that are also incomplete and imprecise. In principle, you could address that with quantum field simulations at the chromodynamic level. However, quantum chromodynamics is also limited in accuracy due to approximations made based on empirical measurements that are also incomplete and imprecise.

Science is empirical all the way down. You never reach a level of deductive bedrock.

Likewise, using AI to cure cancer by reasoning about it would be fantastic, but impossible because human biology is mostly terra incognita.2 Predicting even two-molecule binding is extremely difficult; the relevant physical chemistry is not understood in adequate detail, and data sources that might constrain parameters in empirical models are scant.3 Predicting all the ways a molecule will interact with the whole human body—toxicity in an unrelated organ, for example—is impossible. We just don’t know most of what’s going on in there.

There is no way to find out everything a drug candidate will do other than giving it to lots of people and crossing your fingers. That’s inherently slow and expensive. It also faces enormous, constantly growing bureaucratic obstacles; clearing those would do far more to speed new medicine than AI.4

It seems unlikely that a superintelligent AI could develop biological weapons enormously faster than people can. There’s no substitute for killing a lot of monkeys.

  1. Part One of In the Cells of the Eggplant draws out the implications of deductive absolutism for rationalism and for science in detail.
  2. See Derek Lowe’s “AI and Drug Discovery: Attacking the Right Problems” (Science, 19 Mar 2021) and Shrager et al.’s “Is Cancer Solvable?” (The Journal of Law, Medicine & Ethics, 47 (2019): 362-368).
  3. Andreas Bender and Isidro Cortes-Ciriano, “Artificial intelligence in drug discovery: what is realistic, what are illusions? Part 2: a discussion of chemical and biological data,” Drug Discovery Today, Volume 26, Issue 4, April 2021, Pages 1040-1052.
  4. Matthew Herper, “Here’s why we’re not prepared for the next wave of biotech innovation,” STAT, Nov. 3, 2022.