Comments on “Do AI as science and engineering instead”

Almost with you

Kenny 2023-03-01

I agree with so much of this (both this essay and the book and your larger corpus) – but not all. Separately, I greatly appreciate your thoughts and analysis anyway!

In particular, I disagree that “AI is bad” – as-is – even if there are many bad uses of it currently, and even though I agree that the field consists of way too much “spectacle”.

A related post/essay, insightful in a similar but different way:

I think a point Wolfram makes, which you seem to (somewhat) tacitly accept, is that the results of a lot of ‘AI’ work are themselves a kind of scientific discovery – e.g. that ChatGPT works as well as it does (however exactly it works).

And in that vein, I don’t think it would have been (entirely) unreasonable to object to mandated safety engineering for cars when cars were first developed. Nor would I assume that any such requirements would have been net-positive. It seems, sadly, far easier for people to accept known but worse risks than new ones. (It’s certainly not the case that horses were engineered for safety.)

But maybe you’ve extrapolated from your own experience and reasonably reached the same conclusion as me: AI is bad – eventually, in expectation. I’ve definitely updated towards ‘AI’ being worse, now, and in ways I hadn’t much appreciated.

I am probably also still entranced by its “glitter”! Reverse engineering ‘neural networks’ (or actual neural networks) seems (at least) just as fascinating as reverse engineering biochemical networks! (And it’s NOT obvious that the latter research is definitely ‘safe enough’ as it’s currently practiced either!)

Almost

David Chapman 2023-03-04

Kenny, thanks for the comment!

the results of a lot of ‘AI’ work are themselves a kind of scientific discovery

Well… I think this is non-scientific. That’s contentious, inasmuch as what counts as “scientific” is essentially contested. The article by Michael Nielsen I cited is relevant, and excellent (although I semi-disagree).

when cars were first developed

Yes… when there were (say) only a few thousand of them, allowing a period of risky non-consensual experimentation on random pedestrians was probably important.

But recommender AI already affects billions of people, and “language models” like ChatGPT are just about to do the same. I don’t think we have the luxury that automobiles had of experimenting on only a small number of unwilling guinea pigs.

Reverse engineering ‘neural networks’ seems fascinating

Yes, for me too… and a case can be made that (unlike most AI research) it may lead to greater safety. I’m tempted to dive in. It’s a hard call.

Almost

Kenny 2023-04-07

Hmmm – I think what I meant by “scientific discovery” is more like ‘a discovery that’s scientifically interesting’ – in the way that a single interesting example with some mathematical property is itself a mathematical result.

I agree that they’re not discoveries of ‘universal’ laws or really any kind of scientific theory.

I think I’d temper what I wrote in my previous comment:

In particular, I disagree that “AI is bad” – as-is – even if there are many bad uses of it currently, and even though I agree that the field consists of way too much “spectacle”.

AI – as a scientific or intellectual field/subject – is neutral. (It glitters brightly!)

AI as the actually-existing field of human endeavors is wildly unsafe and unfriendly – very bad.

Reverse engineering (neural networks) seems like a great thing to be tempted to do!

Safe experimentation discouraged by the funding lottery

Danyl Strype 2024-02-20

David:

I don’t think we have the luxury that automobiles had of experimenting on only a small number of unwilling guinea pigs.

We do: by not putting them on the internet. But then we can’t create the spectacle, and we don’t get the funding.

I think this points to a huge part of the problem. Most of the research funding in computer technology (and beyond) is being driven by the whims of rich people, all trying to invest in things that will definitely make them richer. That makes funding priorities especially vulnerable to being hijacked by tulip fever.

We saw this with the DotCom bubble, the DataFarming bubble, the crypto bubble, and now the backprop bubble.

At just the right time

David Chapman 2024-02-20

Yes… GPT-3.5 was a genuine breakthrough (although what things like it are good for still seems unclear). But it arrived just as the crypto bubble imploded, which seems to have been pure coincidence, but was extremely convenient for the VC industry. Otherwise there would have been a “tech winter” until something else came along.

Again, “generative AI” may (or may not) still turn out to be important (and profitable), but 93% of the hype is driven by investment hopes rather than science.