Comments on “Fight DOOM AI with SCIENCE! and ENGINEERING!!”
MOLE: Machine Operated Learning Emulators
2024-01-22
This is an excellent human-readable summary of the problems with the “machine learning”* approach from the POV of an insider to the field, and what we can do about it right now. Possibly the most important page in the book so far.
- I have been calling it MOLE: Machine Operated Learning Emulators, both to avoid vague and potentially misleading descriptors like “AI” or “machine learning” and to emphasise how blind and stupid MOLE actually is compared to sci-fi AI monsters like SkyNet or Control.
Mechanistic Interpretability
I second your belief in the importance of mechanistic interpretability. Even though my math skills aren’t up to the job, I’ve gotten a lot out of reading some of that research. Neel Nanda has an informal discussion of some of that work that’s worth reading through, and he has some helpful videos on his YouTube channel. FWIW, the grokking stuff seems quite important. What seems to be going on is that, early in training, the engine is, in effect, building tools. When a tool or tools finally come together, you get a phase change in learning as the tool(s) take over further construction.
In the case of LLMs, I’ve reached the tentative conclusion that something like a classical GOFAI semantic or cognitive network gets constructed, albeit in latent mode, and that it handles most of the sentence-level syntax. Sort of like building a high-level programming language on top of assembly language. I discuss this here.