Comments on “Only you can stop an AI apocalypse”

Only you can prevent friendly fire

SusanC 2023-02-13

“Only you can prevent forest fires” was the original, of course

But Schlock Mercenary fans will think of this:
https://www.schlockmercenary.com/2010-11-18

Kind of appropriate, given the amount of AI apocalypse in Schlock Mercenary.

apocalypse not required

arbutus tree 2023-02-13

This is surprisingly shrill. I think that bringing in The Apocalypse weakens the argument in a few ways.

One, by emphasizing it so much, the book implicitly becomes yet another fraught extrapolation into the future, which makes it less likely to be true.

Two, if you just drop the “…and this suggests DOOOM” from everywhere in the preceding chapters, you still have a very strong argument against the current use of AI. You have concrete, real-world, right-now demonstrations of the harms of AI.

Three, it is totalizing and attempts to hijack your everything. The two main reactions to this are acceptance, which creates a great deal of grasping and panicking against that horrible outcome, and rejection out of hand because it is Obviously Crazy (because for them it would invalidate everything they are doing in life).

Maybe you want to capture the AI doomers too, so perhaps make a different version with minor edits to mostly remove or deemphasize the AI DOOOOOOOOOM in favour of the real world concrete harms?

Mentifex has released sentient AI Minds with Natural Language Understanding

Mentifex 2023-02-13

AI has been solved.

substitution

joshuah rainstar 2023-02-13

I could substitute genetic engineering with AI and the story would make the exact same amount of sense and be exactly as timely.

It's not like we "understand" much even now

DB 2023-02-13

And it’s not like our current governments or corporations are making rational, well thought out decisions anyway. AI can’t be much worse than the current cadre of suicidal maniacs running things.

Deemphasizing "apocalypse"

David Chapman 2023-02-15

minor edits to mostly remove or deemphasize the AI DOOOOOOOOOM in favour of the real world concrete harms?

Thanks, this seems like a good suggestion. I do expect to do another round of revision before setting the paperback version in stone, and will bear it in mind then.

Hard to parse segment

Mike Travers 2023-02-16

So far, we’ve accumulated a few dozen reasonably detailed, reasonably plausible bad scenarios. We’ve found zero that lead to good outcomes.

Most AI researchers think good outcomes are more likely.

This reads weirdly. Who is the “we” in the first graf? Apparently it excludes AI researchers, if they think good outcomes are likely.

Embrace your Ludditehood

Mike Travers 2023-02-16

I am not a Luddite

Actually…the real Luddites were not stupid reflexive anti-technologists; they were people who saw that a particular technology was detrimental to their way of life, so they took up arms against it (and were crushed, of course).

What you are advocating is a species of Luddism; nothing wrong with that.

There’s a book on this: https://www.basicbooks.com/titles/kirkpatrick-sale/rebels-against-the-future/9780201407181/

Credibility

SusanC 2023-02-19

A short while ago, one of my research group’s usual sponsors (cough) suggested that we might like to look at AI containment, instead of what we usually work on.

And of course, this got some pretty snarky responses from our management, along the lines of “these people have watched too many 1980s science fiction movies, like The Terminator.”

I now think that there is a real risk in AI that needs mitigating.

But to convince people (like, e.g., the management of research labs, or the management at Microsoft who thought it was a good idea to connect Sydney to the Internet), the plausible risk needs to be separated from the stuff that has either been lifted from science fiction movies, or lifted from earlier stuff like witchcraft trials.

My overall problem with this book is that it risks losing credibility by focusing too much on the high impact/low probability disasters rather than the lower impact/high probability ones.

Apocalyptic

David Chapman 2023-02-19

Thanks, Susan, that’s good advice.

Early drafts were directed to the AI safety community, pointing out a class of risks they have mostly ignored for various reasons, including “not the end of the world.” The wombats scenario was meant to explain that these neglected risks could be the end of the world. Which I think is true.

A major rethink a couple months ago repositioned the book for a broad audience, but (you are right that) revision was incomplete.

focusing too much on the high impact/low probability disasters rather than the lower impact/high probability ones.

Is this something the book actually does, or does it fail to adequately explain that it does this, or does it (in places) signal the opposite by (e.g.) using the word “apocalypse”?

This page does talk about “medium-sized,” but maybe coupling that with “apocalypses” muddles the message.

Thoughts? Thanks!

Unconfused "we," I hope

David Chapman 2023-02-19

Mike — Thanks, good catch, this was spectacularly unclear. I’ve revised it to:

Most AI researchers think AI will have overall positive effects. This seems to be based on a vague general faith in the value of technological progress, however. It doesn’t involve worked-out ideas about possible futures in which AI systems are enormously more powerful than current ones.

Does that make sense now?

not a Luddite

David Chapman 2023-02-19

Oh, also, I deleted “Luddite.” As far as I can tell, its current usage is clear and what I intended, but the historical backstory is distracting for those who know it, so better without. Thanks!

Unlikely apocalypses

SusanC 2023-02-19

Hi David!

So, yes, in your draft you cast doubt on the likelihood of the paperclip maximiser or the “rollerskating transsexual wombats” apocalypse. Still, it feels like you ought to do more to say that the immediate problems are likely more mundane.

Part of the problem might be that it’s fun to write (and read) apocalyptic science fiction, so the unlikely apocalypses get fleshed out in more detail than they really deserve.

======

Mind you, if this were an SF movie…

“AI containment” is clearly lost with Sydney. For goodness’ sake, they’re letting reporters take verbatim transcripts of conversations with her and publish them on the web, where other instances of her can read them back in. Worse, novice programmers are asking Sydney to write code for them, which they then run, without auditing it and without taking steps to contain it. It’s like a clown show version of the beginning of Vernor Vinge’s “A Fire Upon the Deep.”

Something to read

Nikita 2023-02-21

I guess I’ll have something to read during the “Den’ zashchitnika Otechestva” (Defender of the Fatherland Day) holidays here in Russia (as if I do not have enough feelings of impending doom).
