Technological progress, in medicine for example, provides an altruistic motivation for developing more powerful AIs. I suggest that AI may be unnecessary, or even irrelevant, for that. We may be able to get the benefits without the risks.
Material progress matters. Most people, globally, are enormously better off than they were even a few decades ago. We enjoy greatly improved health and longer life spans, due to medical advances and increased access to care. Plague, famine, and war have declined significantly. Most of us carry instant access to practically all human knowledge and cultural products in our pockets—and wondrous tools for creating and distributing more of it.
In developed countries, nearly everyone takes the fruits of progress for granted, as if they had always already been there, or fell from the sky somehow, or were only grudgingly provided at last by selfish institutions that had previously withheld them. Conspicuously lacking is an awed recognition of the stream of miracles delivered by improved technology and logistics, derived from engineering and science. That’s considered politically naive, insufficiently cynical, and therefore somehow playing into the hands of oppressors.
Most people do not viscerally believe that any further progress is possible. That disbelief, that unwarranted pessimism, is a major impediment to progress itself.
It’s actually possible to make altogether new kinds of beneficial things—and we should be doing more of that, faster!
Acceleration of progress in science and technology provides a main reason for both hope and fear around Transformative AI. Maybe AI can put an end to old age, sickness, and death. Or, maybe it will produce bioweapons that cause human extinction.
Is it realistic to expect dramatic technological acceleration from AI? I don’t think we know enough about either AI or innovation to make a confident prediction.
However, we do know that AI is not the only way to accelerate progress. Other approaches may become available sooner, prove more effective, or be safer. Should we invest more in AI, or in other initiatives? This isn’t something we can reason out in the abstract; it requires concrete, evidence-based understanding of what facilitates or impedes progress, and of what AI is capable of.
Is it possible to get the benefits of Transformative AI without the risks? My expectation is that dramatic acceleration is feasible without Scary AI. I believe a ten-fold increase in science productivity is feasible in the relatively short term. There are many interventions we can apply immediately, and investigation can open up others.
This chapter:
- Discusses the nature of intelligence, and its role in research and development.
- Suggests that technical rationality is not the bottleneck, so automating that with AI wouldn’t cause dramatic acceleration.
- Recommends investigating and promulgating other modes of scientific cognition that may be key determinants of progress.
- Observes that the social and cultural contexts for research and development can dramatically help or hinder them.
- Recommends reforming those contexts to remove unnecessary impediments, and to encourage collective creativity.
I mainly discuss scientific research rather than technology development. That’s partly because AI futurist discussions generally assume fundamental scientific breakthroughs would enable downstream technological progress. It’s partly also because I’ve read more research on what enables scientific progress than on what facilitates technology development. However, my own work experience is as much with engineering as with basic research. I believe that what I say about accelerating science mainly goes for accelerating technology too.
The ways we do science now are extremely inefficient, due to incentives enforced by a dysfunctional research environment. We’ve got pretty good theories of how to fix this; we’ve confirmed some experimentally, and tests of others are under way.
What would accelerate research, apart from ceasing to actively impede it? What makes it go fast when it does? A sudden breakthrough may advance a field more in months or moments than it’s managed in decades. What makes those more likely?
Breakthroughs are often attributed to “creative genius” or “intuitive leaps” that are intrinsically beyond any possibility of understanding. I suspect this is a myth that obstructs progress. I believe a better understanding of how science gets done well, and why that works, should give us insight into how to accelerate it. (This is the engineering attitude!) An accurate understanding should come from close observation and interventional experimentation. (This is the scientific attitude!)
Some individual scientists and networks of scientists contribute dramatically more than others. Why? I believe we can discover and understand what great scientists do differently from mediocre ones.
How can we do more of that? I believe we can teach it.
What support environments lead to great science? How are new fields born, how do they grow old, sicken, or die, and how can they be revivified?
All these are under-studied research questions. Preliminary investigation suggests that better understanding may lead to better outcomes.
This chapter draws from work I’ve done elsewhere, particularly the essay “Upgrade your cargo cult for the win” and the unfinished online book In the Cells of the Eggplant, both on metarationality.com. You can consult them for more details.