The “neural network” technology that powers current artificial intelligence is exceptionally error-prone, deceptive, poorly understood, and dangerous. It is widely used without adequate safeguards in situations where it causes increasing harms. It is not inevitable; we should replace it with better alternatives.
Gradient Dissent is divided into three chapters, plus an epilogue.
- “Artificial neurons considered harmful” explains how and why neural networks are dangerously unreliable.
- “Do AI as science and engineering instead” discusses neglected technical approaches that may make neural networks less risky. Ultimately, though, they cannot be made adequately safe with technical fixes; nor can technical progress address irresponsible misuse. Instead, neural networks should be deprecated, avoided, and regulated.
- “Backpropaganda: anti-rational neuro-mythology” explains the misleading rhetoric decision makers use to justify building and deploying dangerously defective AI technology.
- The brief epilogue, “A better future, without backprop,” observes that neural networks have dominated AI for only a decade and are not mandated in some Cosmic Plan. It suggests that replacing them with better alternatives is important, urgent, and probably possible.
Gradient Dissent is a companion document for Better without AI. It was originally a chapter in that book; I removed it because it goes into more detail than some readers would want. Although primarily meant as a supplement to the book, it is self-contained and can be read on its own.
Reading Gradient Dissent requires no specific technical background. It does not assume you know how neural networks work, nor does it include an introductory explanation to get you up to speed. You can understand the risks without knowing those details. If you run into technical bits that seem difficult, you can skim or skip them without missing much.