We’ve seen that current AI practice leads to technologies that are expensive, difficult to apply in real-world situations, and inherently unsafe. Scientific and engineering investigations that are currently neglected could bring a better understanding of the risks of current AI technology, and could lead to safer technologies.
AI is unavoidably hybrid as an intellectual discipline. It incorporates aspects of six others: science, engineering, mathematics, philosophy, design, and spectacle. Each of these contributes valuable ways of understanding, and their synergies power AI insights. Different schools of thought within AI research have emphasized some disciplines and deemphasized others, which contributes to the schools’ different strengths and weaknesses.
The current backprop-based research mainstream overemphasizes spectacle (the creation of impressive demo systems) and mathematics (optimization methods, misleadingly termed “learning”). It neglects science (understanding how and why the networks work) and engineering (building reliable, efficient solutions to specific problems). Naturally, this has led to powerful optimization methods which can yield spectacular results, but which we don’t understand and which aren’t reliable or efficient when applied to specific problems.
To address these problems, I suggest getting much more skeptical about spectacles; deemphasizing the math; and doing AI research as science and engineering instead.
Better understanding:
- May reveal that there is less to seemingly spectacular results than meets the eye, thereby deflating hype (and consequently funding and deployment)
- May enable adding safety features to technologies similar to those we have now
- May lead to a full replacement of backprop and GPTs with quite different, safer technologies.
This chapter of Gradient Dissent draws on my 2018 essay “How should we evaluate progress in AI?”1 That essay covers some of the same themes in greater depth, so you might like to read it if the discussion here is intriguing.