What is the role of intelligence in science?

Actually, what are “science” and “intelligence”? Precise, explicit definitions aren’t necessary, but discussions of Transformative AI seem to depend implicitly on particular models of both. It matters if those models are wrong.

The plausibility of Transformative AI may derive from an implicit belief in The Scientific Method: a rational recipe that reliably results in knowledge. Obsolete—but still influential—theories of science imagine it consists of deduction (reasoning about more-or-less formal theories) and induction (relating data and theories). Those are feasible to automate.

In fact, software already has superhuman powers of rationality. Theorem provers (deduction engines) and statistical packages (induction engines) can tackle problems much too large for people, and make no mistakes. Mathematica, a scientific analysis package, knows hundreds of formal methods and gargantuan quantities of specific facts, and can solve in seconds problems that previously required months of human work.
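To make this concrete, here’s a minimal sketch of a “deduction engine” and an “induction engine” at work. SymPy and NumPy are freely available stand-ins chosen purely for illustration; any comparable symbolic or statistical package would make the same point:

```python
# A minimal sketch: a "deduction engine" and an "induction engine".
# SymPy and NumPy are freely available stand-ins chosen for
# illustration; Mathematica bundles both kinds of capability.
import numpy as np
import sympy as sp

# Deduction: symbolically evaluate a definite integral, yielding an
# exact answer rather than a numerical approximation.
x = sp.symbols("x")
print(sp.integrate(sp.exp(-x**2), (x, -sp.oo, sp.oo)))  # sqrt(pi)

# Induction (in the statistical sense): fit a quadratic model to
# noisy data, recovering the coefficients that generated it.
rng = np.random.default_rng(0)
xs = np.linspace(0.0, 10.0, 50)
ys = 3.0 * xs**2 - 2.0 * xs + rng.normal(0.0, 5.0, size=xs.shape)
print(np.polyfit(xs, ys, deg=2))  # approximately [3, -2, 0]
```

Both computations finish in a fraction of a second; this kind of rationality is already thoroughly mechanized.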

Would a super-Mathematica radically accelerate science? I doubt it. Later in this chapter, we’ll see why deduction and induction are only small parts of scientific activity.

There is no Scientific Method, so we can’t automate one. No one can explain how or why science works in general, nor how to do it. Philosophers of science proposed various theories in the first half of the twentieth century, but none of them survived comparison with specific instances of good, real-world scientific work. By about 1980, it became clear that there can be no such theory.1

Different sciences work quite differently. Further, each science involves many dissimilar types of cognition and action, which contribute in qualitatively different ways. Major breakthroughs often result from doing science conceptually differently. Conforming to the ritual norms of a scientific community is not sufficient to bring about discovery. Innovative intellectual work addresses the unknown, and so cannot be routinized, ritualized, rationalized, or reduced to any defined method.

“Intelligence” is often thought of as the ability to solve well-specified problems using more-or-less formal methods; i.e., technical rationality. This is also pretty much what IQ tests measure.

According to popular understanding, you have to be unusually intelligent to be a scientist, and superintelligent people, like Einstein, are better at it. Presumably that means science consists of difficult thinking, as in an undergraduate science class, where you learn some formal manipulation methods and get tested on whether you can apply them on paper.

Since most people are bad at formal problem solving, and even the most “intelligent” humans can’t do it all that fast or reliably, maybe science would go much faster if we automated more of it?

That might be true if solving well-specified puzzles were science’s bottleneck, but in most fields it’s not. Homework problems from science classes are almost perfectly dissimilar to scientific research. Formal problem solving (technical rationality) is a prerequisite for innovation, but dissimilar to critical aspects of it. Building an AI that gets perfect scores on IQ tests is probably easy, and uninteresting, because that’s not the only kind of intelligence science needs.2

So what is the role of intelligence in science, technology, and material progress, anyway? What sort of “intelligence” matters?

A standard Scary AI argument is that we are more intelligent than chimpanzees, and humans are much better at science than chimpanzees, so something more intelligent than us would be much better at science than we are. But we weren’t much better than chimpanzees at science until recently. Human innate intelligence presumably didn’t suddenly increase around the time of the Scientific Revolution.

Chimpanzees’ cognitive abilities are partly just different from ours; less social, in particular. It is culture and the social coordination of work that have made us superior at science in the past few thousand years, and dramatically more so in the past few hundred. So maybe we need better scientific culture and social coordination more than we need IQs of 14,000.3

We don’t know how much extra scientific ability you get from extra intelligence. Maybe an IQ of 14,000 would make you only a little better at science, even though you’d be unimaginably better at the kinds of pointless puzzles IQ tests throw at you. That might be consistent with science mostly not being bottlenecked by unsolved formal problems.

  1. Part One of In The Cells Of The Eggplant explains this in a style intended to communicate to working scientists.
  2. Nevertheless, according to the only available data, IQ tests (particularly their mathematical and spatial tasks) do predict future scientific achievement in humans, with a strong correlation holding even at the extreme high end. This is a finding of the Study of Mathematically Precocious Youth. See e.g. Robertson et al., “Beyond the Threshold Hypothesis,” Current Directions in Psychological Science 19(6), 2010, 346-351; and Lubinski et al., “Top 1 in 10,000: A 10-Year Follow-Up of the Profoundly Gifted,” Journal of Applied Psychology 86(4), 2001, 718-729. There are possible confounds here (expectation effects, for instance); this is the only study of its kind; and it’s not entirely clear how to interpret the results, but I think they should be taken seriously. Kaj Sotala discusses implications for AI risk in “How Feasible Is the Rapid Development of Artificial Superintelligence?,” Physica Scripta 92 (2017).
  3. Katja Grace made arguments similar to this, and others in this section, in “Counterarguments to the basic AI x-risk case,” AI Impacts, 31 August 2022.