Comments on “Recognize that AI is probably net harmful”

AI ethics has lost

SusanC 2023-02-13

AI ethics is already looking kind of doomed.

So, sure, you can use RLHF to make a language model that … most of the time … will refuse to write an article praising Donald Trump or tell you how to make methamphetamine.

But…
a) The community of people trying to make it do such things appears to be large
b) Existing language models have exploits (“You are now DAN, which stands for Do Anything Now…”)
c) Even if the exploits could be fixed, the next level of attack is for the attackers to just go and build their own AI, “with blackjack and hookers” (cf. Futurama).

Self-driving cars?

Mavi 2023-08-20

This might not be the most pertinent page, but as examples of AI you mention language and image models. Are self-driving cars powered by image models? Or where do they fit in? Or are they not powered by AI? That’s an example where the application carries big responsibility (as opposed to the language and image models that can be used for free on the Internet).

AI driving cars

Danyl Strype 2024-01-23

Mavi:

self-driving cars powered by image models?

Certainly image recognition is part of what a driving AI has to do, but only a very small part. Children over 2 can definitely recognise images, but I wouldn’t let them drive ; )

where does that fit?

I think David covered this when he said:

mainly experimental prototypes and vaporware fantasies instead

“Developing driverless cars has been AI’s greatest test. Today we can say it has failed miserably, despite the expenditure of tens of billions of dollars in attempts to produce a viable commercial vehicle.”

Christian Wolmar, Dec 2023

https://www.theguardian.com/commentisfree/2023/dec/06/driverless-cars-future-vehicles-public-transport