Comments on “The inescapable: politics”

AI's tribal defenders

respatialized 2023-02-16

This book is clearly the product of dialogue with LessWrong and adjacent x-risk communities. It's worth seriously applying your analysis of political tribalism - and the way it can be "used" by the emergent causal properties of AI - to this group. Even if they don't intend to, I think they will, as a group, be some of the most implacable opponents of the aims you lay out in this conclusion.

Despite the stated goals of AI safety, the members of the community actively involved in research have mostly coalesced around a research program succinctly described as "build lots of computational capacity first, understand it maybe never." Despite their claims of getting beyond ideology and approaching serious problems with an open mind, they certainly have an ideology: don't mess with our toys (or, implicitly, the companies with the budgets to pay for them). Worse, they have deliberately blinded themselves to the political dimensions of their institutional relationships by dismissing politics itself as "the mind-killer."

LessWrong itself uses an upvote/downvote system for ranking comments and structuring discussions, making it susceptible to the same gamification pressures and ingroup/outgroup definition you describe elsewhere in the book. The people in this orbit also spend a lot of time on Twitter, where they serve as a mutually reinforcing clique defending the current program of AI research, eager to prove the haters and skeptics wrong. Sam Altman tweets things like "I am a stochastic parrot, and so r u," turning a dehumanizing view of human intelligence into a meme for OpenAI's defenders to wield in ideological combat with critics.

Dissent - some of which you cite - exists in the community, but from the outside it appears to have zero effect on institutional practices and funding priorities. That was true before ChatGPT's wild success in attracting users and hype, and I think the momentum behind the research program has since increased by a couple of orders of magnitude. If AIs can be said to "use" anyone, they certainly get a lot of utility out of the "AI safety" community - perhaps more than they get out of fanning the flames of tribal conflict.

I wish you luck in trying to persuade this crowd to take a different approach, but I have my doubts. While their fear of AI risk is strong, their collective fascination with what can be built is ultimately stronger. I think, as you convincingly argue, the only way to actually stop these efforts is to starve them of raw materials - personal data - from the outside.

Semi-intentional accelerationism

David Chapman 2023-02-16

Thank you! These are perceptive, accurate, and important observations, I think.

Drafts of the book included much more critique of the LW-style AI safety movement. I removed that because I didn’t think it would change anyone’s mind, and it might alienate some people who would otherwise be supportive.

I see two relevant cultural shifts. First, the movement is increasingly shifting to "wait, no, we should be trying to stop AI, not make it safe, because there isn't a way to make it safe." This seems good to me. It may not yet be the majority view, but it was not expressed within the community at all until a few months ago.

Second, ChatGPT has made AI, and its risks, an increasingly mainstream political topic. That is probably bad, because mainstream politics is currently dysfunctional (partly due to AI itself, as the book argues). For better or worse, the mainstream discussion is also enormously more powerful than the LW-style AI safety movement. I think it’s likely that the movement has been suddenly rendered irrelevant; it’s a flea on the back of a dinosaur. In other words, whatever LW thinks will be ignored by the mainstream discussion. So whatever LW got wrong won’t matter. Unfortunately, whatever LW got right also won’t matter.
