Comments on “Do AI as science and engineering instead”
Almost
Hmmm – I think what I meant by “scientific discovery” is more like ‘a discovery that’s scientifically interesting’ – in the sense that even a single interesting example with some mathematical property is itself a mathematical result.
I agree that they’re not discoveries of ‘universal’ laws or really any kind of scientific theory.
I think I’d temper what I wrote in my previous comment:
In particular, I disagree that “AI is bad” – as-is – even if there are many bad uses of it currently, and even though I agree that the field consists of way too much “spectacle”.
AI – as a scientific or intellectual field/subject – is neutral. (It glitters brightly!)
AI as the actually-existing field of human endeavor is wildly unsafe and unfriendly – very bad.
Reverse engineering (neural networks) seems like a great thing to be tempted to do!
Safe experimentation discouraged by the funding lottery
David:
I don’t think we have the luxury of experimenting on a small number of unwilling guinea pigs that automobiles had.
We do, by not putting them on the internet. But then we can’t create the spectacle, and we don’t get the funding.
I think this points to a huge part of the problem. Most of the research funding in computer technology (and beyond) is being driven by the whims of rich people, all trying to invest in things that will definitely make them richer. This makes funding priorities especially vulnerable to being hijacked by tulip fever.
We saw this with the DotCom bubble, the DataFarming bubble, the crypto bubble, and now the backprop bubble.
Almost with you
I agree with so much of this (this essay, the book, and your larger corpus) – but not all. Separately, I greatly appreciate your thoughts and analysis anyway!
In particular, I disagree that “AI is bad” – as-is – even if there are many bad uses of it currently, and even though I agree that the field consists of way too much “spectacle”.
A related post/essay, insightful in a similar but different way:
I think a point Wolfram makes, which you seem to (somewhat) tacitly accept, is that the results of a lot of ‘AI’ work are themselves a kind of scientific discovery – e.g. that ChatGPT works as well as it does (however exactly it works).
And in that vein, I don’t think it’s (entirely) unreasonable to object to mandated safety engineering for cars – at the time when cars were first developed. I also wouldn’t assume that any such requirements would have been net-positive. It seems sadly far easier for people, generally, to accept known but worse risks than new ones. (It’s certainly not the case that horses were engineered for safety.)
But maybe you’ve extrapolated from your own experience and reasonably reached the same conclusion as me: AI is bad – eventually, in expectation. I’ve definitely updated towards ‘AI’ being worse, now, and in ways I hadn’t much appreciated.
I am probably also still entranced by its “glitter”! Reverse engineering ‘neural networks’ (or actual neural networks) seems (at least) just as fascinating as reverse engineering biochemical networks! (And it’s NOT obvious that the latter research is definitely ‘safe enough’ as it’s currently practiced either!)