If we want to know what a superintelligent AI might do, and how, it could help to investigate what the most intelligent humans do, and how. If we want to know how to dramatically accelerate science and technology development, it could help to investigate what the best scientists and technologists do, and how.
These are shockingly understudied topics. I’ve found only one systematic investigation of extreme intelligence, the Study of Mathematically Precocious Youth, and it did not ask the relevant question: how do these people think? What specifically do they do differently or better than others? Yes, they can visualize rotating geometrical shapes—a typical IQ test problem—but that is rarely a key to success in itself.
Studies of great scientists mainly describe what they achieved (and maybe what else was going on in their lives), but not how. Intellectual biographies of single scientists sometimes illuminate their modes of thought, but these are scarce. Comparisons or syntheses are scarcer, and systematic investigations non-existent.
Can whatever it is that great scientists do be taught? I know of no systematic investigations of these topics. They could form a focus area for the discipline of Progress Studies.1
Later in this chapter I’ll suggest a class of human cognitive activities (“meta-rationality”) that seem key to innovation. These are dissimilar to technical rationality (problem solving), although that is a prerequisite. Some people I consider extremely intelligent are outstanding problem solvers. Some not as much; they excel instead at turning real-world situations into well-specified problems, thereby making them amenable to technical approaches. Problem identification, selection, and formalization are meta-rational activities. Doing them well is at least as important for progress as problem solving is.
AI risk discussions often argue that superintelligences would necessarily seek power without limit, because power is “instrumentally” useful for achieving any goal, and that they would successfully take power because they could use their intelligence to figure out how. This would seem to imply that the most powerful people are among the most intelligent, and that the most intelligent humans seek and gain the greatest power. Neither is true, as far as I can tell, which casts doubt on the premise.
Investigating this may be important in preventing Monstrous AI.
The most powerful people, and notably the most monstrous, are not conspicuously intelligent, at least not in the sense measured by IQ tests. Successful politicians in democracies probably average around one standard deviation above the population; large-company CEOs not too much more, although there are some extremely intelligent ones.2 Autocratic tyrants seem generally less intelligent than that (although I know of no studies of this).
Success in gaining power seems to depend instead on extreme Dark Tetrad traits (psychopathy, narcissism, Machiavellianism, and sadism). That’s moral idiocy, not any sort of intelligence. Maybe we should be more concerned with AI developing superhuman Dark Tetrad traits than superintelligence.
Why aren’t extremely intelligent people extremely powerful? Because they don’t want to be, or because intelligence doesn’t help gain power beyond a certain point? Or because personal psychology and/or the social environment actively, differentially disempowers them?
Based on revealed preferences, it appears that the most intelligent people in the world believe power is not useful for what they want to do.3 If, for instance, you wanted to understand why some cats don’t get high on catnip, what would you do with an army of mooks?4 “Wait, what?” That’s the sort of thing extremely intelligent people do. The goals of the extremely intelligent are often incomprehensible to outsiders. Scientific and technological progress comes from accepting this, and getting out of the way of the sort of people who figure out the optimal method for getting cats high by treating it as a Markov decision process and applying backwards induction on a decision tree, using Bayesian linear regression to predict each remaining drug’s success probability conditional on the previous drugs not working.5
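For the curious, the backwards-induction idea can be sketched in a few lines. This is a deliberately toy version, not Gwern’s actual analysis: it assumes each drug has an independent, known prior success probability (the drug names and numbers below are made up for illustration), whereas the real analysis uses Bayesian linear regression so that one drug’s failure shifts the others’ posteriors. With independence, the decision tree over which drug to try next can be solved exactly by dynamic programming over sets of untried drugs:

```python
from itertools import combinations

def optimal_test_plan(priors, cost=1.0):
    """Backwards induction over the set of untried drugs.

    State  = frozenset of drugs still untried.
    Action = which drug to try next.
    Value  = expected number of trials paid for before a success
             (if everything fails, you still pay for every trial).
    """
    drugs = list(priors)
    best = {frozenset(): (0.0, None)}  # nothing left to try: no further cost
    # Solve small states first, so each state can look up its successors.
    for k in range(1, len(drugs) + 1):
        for subset in combinations(drugs, k):
            s = frozenset(subset)
            options = []
            for d in s:
                # Pay for this trial; with probability (1 - p_d) it fails
                # and we continue optimally with the remaining drugs.
                future, _ = best[s - {d}]
                options.append((cost + (1 - priors[d]) * future, d))
            best[s] = min(options)
    # Read the optimal testing order off the value table.
    s, order = frozenset(drugs), []
    while s:
        _, d = best[s]
        order.append(d)
        s -= {d}
    return order, best[frozenset(drugs)][0]

# Hypothetical priors for three catnip alternatives:
order, expected_trials = optimal_test_plan(
    {"silvervine": 0.8, "valerian": 0.5, "honeysuckle": 0.3}
)
print(order, expected_trials)
```

Under independence and equal per-trial costs, the optimal policy reduces to “try the most promising drug first,” which the backwards induction recovers; the machinery earns its keep once costs differ per drug or failures carry information about the remaining options.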
Or who obsess about why spinning plates wobble slower than they spin. Richard Feynman’s puzzling about that led to his figuring out quantum electrodynamics, which is generally considered a big deal, and the sort of incomprehensible science thing only extremely intelligent people do.
- 1.Patrick Collison and Tyler Cowen, “We Need a New Science of Progress.”
- 2.Standard deviation is a statistical measure of distance from the average. On typical tests, “one standard deviation above the population” corresponds to a 115 IQ. Occupational IQ data are uniquely available for Sweden (which may not be representative of other countries, due to population homogeneity). Both CEOs and politicians average slightly less than one standard deviation above the population there. Dal Bó et al., “Who Becomes a Politician?”; Adams et al., “Are CEOs Born Leaders? Lessons from Traits of a Million Individuals.”
- 3.What is power good for? Why do some people seek it? At a guess, their ego is damaged in a way that makes them crave constant confirmation that it isn’t. Is that something we would expect in an AI system?
- 4.Anecdotally, when the extremely intelligent get asked “why don’t you want power,” they say “well, then I’d have to tell normies what to do, which means I’d have to talk to them, which would be unbearably tedious.” Is that something we would expect in an AI system?
- 5.Gwern Branwen, “Catnip immunity and alternatives.”