Comments on “Artificial neurons considered harmful”
Control theory
I have a suspicion that the rising arguments about the outright impossibility of “AGI alignment” might be much more readily applicable, and more long-term durable, if they were restated in terms of boring old control theory. I’d take a stab at it myself, but I feel like I’m grasping for the terminology. Something about it being a bad idea to operate a poorly-instrumented, self-reinforcing open-loop mechanism…
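The open-loop/closed-loop distinction the comment gestures at can be sketched in a few lines. This is my own illustration, not anything from the post: the plant, gains, and disturbance values are invented for the example.

```python
# Sketch of the control-theory distinction: an open-loop controller applies
# a precomputed plan and never observes the result, so unmodeled disturbances
# accumulate; a closed-loop controller measures the actual state and corrects.

def run(controller, steps=50, disturbance=0.1, target=1.0):
    """Drive a simple integrator plant x <- x + u toward `target`,
    with a constant disturbance the controller's model doesn't include."""
    x = 0.0
    for _ in range(steps):
        x = x + controller(x) + disturbance
    return abs(x - target)

def make_open_loop(gain=0.2, target=1.0):
    """Plans against an internal model only; never looks at the real x."""
    model = [0.0]
    def controller(_x):
        u = gain * (target - model[-1])
        model.append(model[-1] + u)
        return u
    return controller

def closed_loop(x, gain=0.2, target=1.0):
    """Feedback on the measured state."""
    return gain * (target - x)

err_open = run(make_open_loop())
err_closed = run(closed_loop)
# err_open comes out several times larger than err_closed: feedback bounds
# the error, while the open-loop plan lets the disturbance accumulate.
```

The "poorly-instrumented" part maps onto the open-loop controller's blindness to the real state; feedback is what instrumentation buys you.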
Forbes article link broken
The URL to the startup hype article is broken; I had to remove the quotation marks at the end to get to the article.
LLMs as Artificial Left Hemispheres?
Hi David,
One of the things that’s struck me is that, in the evolution of biologically based intelligence, world sensing and acting in the world eventually gave rise to language.
In our development of AI we’ve started with language. So we’ve started with symbols sans referents, for the LLM anyhow.
It’s also been interesting to see how some of the deficiencies of LLMs parallel the distorted behaviour of humans after strokes and other injuries affecting the right hemisphere. The tendency to confabulation is particularly interesting. See McGilchrist, particularly part 1 of The Matter with Things and The Master and his Emissary.
To be or not to be
the output of a particular sort of statistical analysis will meaningful in a particular way.
Again, I like this as a poetic turn of phrase, but I think you left out a “be” there.
Self-driving
I’m mildly surprised you don’t say much about self-driving car tech; this seems like an obvious area where ML is killing people right now by being connected to heavy machinery (your footnote 7 explains some acceptable variants of this but doesn’t really address self-driving; I initially misread it).
I haven’t quite reached your level of opposition to AI yet, but this particular use I find quite maddening – nobody asked me if I feel OK sharing the road with death machines under the control of an unverified and unregulated ML system. It has good propaganda value as a very visceral risk.