Summary of Dissent
The neural network and GPT technologies that power current artificial intelligence are exceptionally error-prone, deceptive, poorly understood, and dangerous. They are widely used without adequate safeguards in situations where they cause increasing harms. They are not inevitable, and we should replace them with better alternatives.
Gradient Dissent is divided into three chapters, plus an epilogue.
- “Artificial neurons considered harmful” explains how and why neural networks are dangerously unreliable.
- “Backpropaganda: anti-rational neuro-mythology” explains the misleading rhetoric decision makers use to justify building and deploying dangerously defective AI technology.
- “Do AI as science and engineering instead” discusses neglected technical approaches that may make systems based on neural networks less risky. Ultimately, though, those cannot be made adequately safe with technical fixes; nor can technical progress address irresponsible misuse. Instead, these technologies should be deprecated, avoided, regulated, and replaced.
- Neural networks have dominated AI for only a decade. They are not mandated in some Cosmic Plan. The brief epilogue, “A better future, without backprop,” suggests that it’s important, urgent, and probably possible to replace them with better alternatives.
Gradient Dissent is a companion document for Better without AI. It goes into more detail than some readers would want, so I have separated it out from the main book. It is also self-contained: you can read it on its own if you are more interested in the technologies themselves than in the ways they may interact with society (the main book’s topic).
Reading Gradient Dissent requires no specific technical background. It neither assumes you know how neural networks work nor provides an introductory explanation to get you up to speed. You can understand it without knowing those details. If you run into technical bits that seem difficult, you can skim or skip them without missing much.