Comments on “Only you can stop an AI apocalypse”
apocalypse not required
This is surprisingly shrill. I think that bringing in The Apocalypse weakens the argument in a few ways.
One, by emphasizing it so much, the book implicitly becomes yet another fraught extrapolation into the future, which is less likely to be true.
Two, if you just drop the “…and this suggests DOOOM” from everywhere in the preceding chapters, you still have a very strong argument against the current use of AI. You have concrete, real-world, right-now demonstrations of the harms of AI.
Three, it is totalizing and attempts to hijack your everything. The two main reactions to this are acceptance, which creates a great deal of grasping and panicking against that horrible outcome, and rejection out of hand because it is Obviously Crazy (because for them it would invalidate everything they are doing in life).
Maybe you want to capture the AI doomers; if so, make a different version with minor edits that mostly remove or deemphasize the AI DOOOOOOOOOM in favour of the real-world concrete harms.
Mentifex has released sentient AI Minds with Natural Language Understanding
AI has been solved.
substitution
I could substitute genetic engineering with AI and the story would make the exact same amount of sense and be exactly as timely.
It's not like we "understand" much even now
And it’s not like our current governments or corporations are making rational, well thought out decisions anyway. AI can’t be much worse than the current cadre of suicidal maniacs running things.
Hard to parse segment
“So far, we’ve accumulated a few dozen reasonably detailed, reasonably plausible bad scenarios. We’ve found zero that lead to good outcomes.”
“Most AI researchers think good outcomes are more likely.”
This reads weirdly. Who is the “we” in the first graf? Apparently it excludes AI researchers, if they think good outcomes are likely.
Embrace your Ludditehood
“I am not a Luddite”
Actually… the real Luddites were not stupid, reflexive anti-technologists; they were people who saw that a particular technology was detrimental to their way of life, and so took up arms against it (and were crushed, of course).
What you are advocating is a species of Luddism, and there’s nothing wrong with that.
There’s a book on this: https://www.basicbooks.com/titles/kirkpatrick-sale/rebels-against-the-future/9780201407181/
Credibility
A short while ago, one of my research group’s usual sponsors (cough) suggested that we might like to look at AI containment, instead of what we usually work on.
And of course, this got some pretty snarky responses from our management, along the lines of “these people have watched too many 1980s science fiction movies, like The Terminator.”
I now think that there is a real risk in AI that needs mitigating.
But to convince people (e.g. the management of research labs, or the management at Microsoft that thought it was a good idea to connect Sydney to the Internet), the plausible risk needs to be separated from the stuff that has been lifted from science fiction movies, or from earlier material like witchcraft trials.
My overall problem with this book is that it risks losing credibility by focusing too much on the high-impact/low-probability disasters rather than the lower-impact/high-probability ones.
Unlikely apocalypses
Hi David!
So, yes, in your draft you cast doubt on the likelihood of the paperclip maximiser or the “rollerskating transsexual wombats” apocalypse. Still, it feels like you ought to do more to say that the immediate problems are likely more mundane.
Part of the problem might be that it’s fun to write (and read) apocalyptic science fiction, so that the unlikely apocalypses get fleshed out in more detail than they really deserve.
======
Mind you, if this were an SF movie…
“AI containment” is clearly lost with Sydney. For goodness’ sake, they’re letting reporters take verbatim transcripts of conversations with her and publish them on the web, where other instances of her can read them back in. Worse, novice programmers are asking Sydney to write code for them, which they then run, without auditing it and without taking steps to contain it. It’s like a clown-show version of the beginning of Vernor Vinge’s “A Fire Upon the Deep”.
Something to read
I guess I’ll have something to read during the “Den’ zashchitnika Otechestva” (Defender of the Fatherland Day) holidays here in Russia (as if I do not have enough feelings of impending doom).
Only you can prevent friendly fire
“Only you can prevent forest fires” was the original, of course
But Schlock Mercenary fans will think of this:
https://www.schlockmercenary.com/2010-11-18
Kind of appropriate, given the amount of AI apocalypse in Schlock Mercenary.