Comments on “Social collapse: apocalyptic incoherence”
Postrationalism
You listed “postrationalism” as one of the alternative “recommender-driven” ideologies. That’s (at least) a little funny, since I strongly associate ‘postrationalism’ with you (as one of its parents), tho in the sense of a ‘post-(modern Rationality movement)’ ideology.
(I’ve definitely commented before on your blogs that I think the current ‘modern Rationality movement’ ideology – mostly – is very open to incorporating your critiques and other insights, and I still believe that.)
Is there another sense that you intended for ‘postrationalism’ in this page/chapter?
(I am greatly enjoying this book so far tho!)
Matt Taibbi, who is part of Elon Musk’s propaganda campaign known as the Twitter Files
W.r.t. “As I wrote this section, a prominent journalist alleged, based on Twitter internal documents, that the mainstream think tank which supposedly used AI to monitor Russian disinformation was itself an American disinformation operation with links to both the CIA and FBI. Numerous mainstream American news organizations had relied on this organization’s exaggerated, faked reports, reporting them as factual”.
You may not have intended it, but this sounds slanted towards the “Twitter Files” thesis, which is what you are alluding to – especially if I ignore the first dozen words or so, leaving just “the mainstream think tank which supposedly used AI…”.
The journalist is Matt Taibbi. Back when he seemed like some kind of vital voice on the left, he “…became known for his brazen style, having branded Goldman Sachs a “vampire squid”… His work often has drawn comparisons to the gonzo journalism of writer Hunter S. Thompson, who also covered politics for Rolling Stone”.
This leaves open the possibility that his main talent is for hyperbole, and among the prominent center-left commentators I follow, the “Twitter Files” is seen as a propaganda campaign, with Taibbi placed in the same category as Glenn Greenwald. Elon Musk did, after all, seem to have bought into the line that Twitter was super-liberal, suppressing conservatives, and working for the FBI (which at the time reported to Donald Trump) to bring down Trump and all his “patriot” followers, and this strongly motivated his decision to buy Twitter. Twitter is now struggling, along with Musk’s brand in general, and Musk has had little choice but to act like a hero of the right and pitch Twitter accordingly.
Before working for Musk, Taibbi got on the anti-“cancel culture” bandwagon. IMO the biggest-ever example of “cancel culture” was, I think, in Summer 2009, when the health care initiative had been thoroughly discussed in Congress, and there was a widespread intention among members of Congress to hold town hall meetings to gather their constituents’ reactions and share these in the process of working out the details. The strong impression I got was that these were systematically shouted down by Tea Party types. I knew at the time, from ultra-conservative friends, that as the Tea Party was taking shape, someone was supplying fleets of buses to take people to their events, and it seems very likely that similar organizing pushes helped generate the systematic cancellation of all those town hall meetings.
A couple of quotes from Wikipedia:
After the first set of Files was published, many technology journalists wrote that the reported evidence did not demonstrate much more than Twitter’s policy team having a difficult time making a tough call but resolving the matter swiftly, while right wing voices said the documents confirmed Twitter’s liberal bias.[9][10] Other reports indicate that Republican officials made the same kinds and similar volumes of requests for which Democratic officials are blamed.
Taibbi noted that “in exchange for the opportunity to cover a unique and explosive story, I had to agree to certain conditions” that he did not disclose.
Wikipedia isn’t perfect, but I think they’re about as balanced as one could hope – not “balanced” at the midline between today’s GOP party line and the Dem POV, but maybe balanced around where it would have been 30 years ago.
Matt Taibbi
"I wrote this section, a prominent journalist alleged, based on Twitter internal documents, that the mainstream think tank which supposedly used AI to monitor Russian disinformation was itself an American disinformation operation with links to both the CIA and FBI."
I think you should keep this in the document; it’s highly relevant to the overall argument.
a) You have the usual weasel word “alleged” in there, so you aren’t claiming it’s true. [In the UK, we have a news comedy program called “Have I Got News for You”, in which panellist Ian Hislop, who is also editor of Private Eye, sometimes remarks that just putting “allegedly” in front of an allegation doesn’t necessarily save you from libel suits.]
b) Taibbi’s claims seem well supported by the evidence here. Twitter can monitor how their customers use their service, and (as revealed by internal documents obtained by Taibbi) used this ability to find out which Twitter accounts were being monitored by Hamilton 68. Taibbi is then able to check that many of these accounts are run by real people who are not Russians. (Not Russians, not bots.) There are plenty of possibilities for checking the truth of this account.
c) On the other hand, “AI says that there is lots of Russian disinformation” is a completely unsubstantiated and uncheckable assertion. We should be skeptical of claims like this. More generally: AI (or pretend AI) obscures our ability to check whether the claim is true.
d) “AI is destabilizing our politics” sounds awfully similar to “Russian bots are destabilizing our politics”. Given (c), we also ought to be skeptical about blaming AI if there isn’t solid evidence. This strikes at a major claim in your book.
e) An obvious move for the Red Tribe to make is: “Liberals are mind-controlled by Satan, who has just been immanentised as an AI”.
BLM != QAnon
legacy ideologies have lost control of political discourse…They are failing in an evolutionary struggle with new recommender-driven alternatives. These actively subvert sense-making, and exercise power through emotional shock value instead. They are characteristically incoherent, indefinite, unaccountable, agile, rapidly mutating, and consequently evanescent. Recent examples include QAnon, Black Lives Matter, mindfulness, web3, ivermectin, and postrationalism.
This passage struck a discordant note for me.
The last four items don’t really belong; they aren’t ideologies, just trends or tendencies. That leaves the equating of Black Lives Matter and QAnon, which strikes me as politically tone-deaf, at best.
And just wrong as a matter of fact, I think. I won’t defend everything said or done in the name of BLM but it is a quite coherent movement, with ostensible goals that are perfectly sensible and aligned with the larger civil rights movement. QAnon is a nutball conspiracy theory, which is indeed rapidly mutating and incoherent (although if you filter out the fast-changing details it is in line with older established political tendencies, like antisemitism and the John Birch right).
They do have in common that they are subject to manipulation by bad actors on social media. And both are not precisely and consistently defined, because that is not the nature of political movements. But only one “actively subverts sense-making”. QAnon does seem to be pure epistemological nihilism; BLM has the opposite of nihilism right there in its name.
I’m trying to correct for my own political biases, not sure if I am succeeding. At any rate, this passage seems at best a distraction from the real point of your book; it will piss people off unnecessarily.
BLM
I’m inclined to agree with Mike Travers above that BLM doesn’t quite seem to belong in that list, as BLM looks more like a traditional sort of protest against the police having done something specific that people thought was bad.
There have been complaints about the police murdering people for a long time.
On the other hand, it does almost feel like a sign of system collapse: viz., until fairly recently, rapes and murders by police officers in the US and the UK largely went unpunished, and then, all of a sudden, the police couldn’t get away with that any more.
So, OK, there’s a similar line of thinking in QAnon, to the effect that rich pedophiles have been molesting kids since forever, and … all of a sudden … the people won’t tolerate it anymore. Except QAnon also contains a bunch of crazy stuff.
BLM, Mindfulness(?), etc.
“QAnon, Black Lives Matter, mindfulness, web3, ivermectin, and postrationalism.”
Frankly sounds like a list you wrote when you were tired. Or maybe I’m projecting, since I’m tired.
Before getting into the weeds, I strongly suggest you consider dropping the phrase “legacy ideologies”, maybe for “organic ideologies”. Things you might call legacy ideologies have not been totally swept away, and it seems to me they include some things that we’d like to strengthen or recover. By organic ideologies, I mean those that evolved within largely face-to-face human communities, not at a head-spinning pace, and which aren’t promoted for byzantine strategic reasons, but out of simple agreement with the “shoulds” they imply. “Legacy ideologies” sounds to me like a suggestion that we’ve already lost.
Re BLM, it seems like a simple idea, and a reaction to events that seem to suggest that black lives don’t matter. Reactions like “blue lives matter” or “all lives matter” would be appropriate responses to “only black lives matter” or “black lives matter more than others”, and their whole point seems to be to plant the idea that that is in fact what BLM means.
And I think it gets much worse, based on the amount of disinformation I’ve encountered that takes some ugly action by a black person against a non-black person – which may or may not have really happened – and says something like “A group of BLM people in Philadelphia tortured and raped a ten-year-old boy with Down syndrome.” In other words, the ugly caricatures of BLM which I’ve run across seem like exactly what you’re talking about.
“mindfulness, web3, ivermectin, and postrationalism”? None of these seem at all like concepts that emerged from a frantic meme war. Some people are obsessed with ivermectin because they think it is a cure-all that the mainstream maliciously slanders. Others find it inherently ridiculous because it has veterinary applications. As far as I can tell, some people tried it partly because it has had some antiviral applications, such as for dengue fever. Some people cite some extremely flawed studies in its favor, and others say the claims of extreme flaws are slander; I suspect the former are right.
I don’t think right-wing meme warriors have much interest in the word “postrationalism”, although they do like to conflate PC and related concepts with it, tie it all to the Frankfurt School and Herbert Marcuse, and whisper the factoid, known only to the thoroughly initiated, that it is all really “Cultural Marxism”.
Sorry for the long comment. I didn’t have time to write a short comment.
Protests against statues, etc
Rather than BLM, a better (adjacent) example might have been protests against Confederate monuments (in the UK, protests against statues of slave traders), “defund the police” as a slogan, etc.
“A cop kills someone, and then there is a protest” was a feature of politics back in the ’80s, before we had social media. Therefore, this part of it cannot be down to AI.
What might be different, now:
a) The choice of symbolic targets, such as statues
b) Direct action, and the lack of faith in democratic politics to solve anything (viz. the statue of Edward Colston in Bristol got thrown in the harbour, and the jury acquitted the protesters). Extinction Rebellion is also an example of direct action.
Now, I’m an old-school British leftist, and share with that community a distinct disapproval of direct action. (This attitude is part of a reaction against that Karl Marx guy … a realisation that revolutions often turn out to be a terrible idea, and that direct action contains within it the seeds of the same problems.)
Epistemic concerns
I had a thought on this book that doesn’t really fit anywhere, so I may as well comment it here.
At the beginning of the twentieth century, lots of psychiatric patients told their therapists that they had been sexually abused, but weren’t believed. (Cf. Goddard; Freud on the seduction hypothesis, etc.) Now we believe that a lot of those historical accounts were, mostly, true.
Obvious theoretical concern: how do we know that there isn’t something like that – something we don’t believe now – that future people will think is true?
Clearly, you can’t give examples of this (if you think it’s an example, then it isn’t, by definition). We could perhaps point at classes of beliefs: one of these beliefs might be thought true in the future, but we don’t know which.
The AI apocalypse worry: the AI might just tell us. E.g. “Yes, the US government really has been controlled by lizards from outer space since the 1940s; here’s the evidence.” This might cause considerable short-term disruption.
BLM, redux
So, I’m British, and different things make it to the top of the news over here, but if I was going to try and argue that BLM has a fringe of QAnon-level craziness, I think I would go for the notorious incident in which Rebecca Long-Bailey referred approvingly to an article by Maxine Peake, in which Peake mentioned, among other things, that US police officers are often trained by Israelis.
Here, we can dimly see, looming into view, a conspiracy theory of QAnon proportions.
SMBC
https://www.smbc-comics.com/comic/conspire
Today’s SMBC comic seems apposite…
Antipolitics
I agree, not very interesting to argue the object-level politics. The meta-level stuff – like the nature of politics and its relevance – I would love to argue about, if there was a way to do it respectfully and constructively (there might not be, this stuff is kind of fraught).
From my perspective, it looks like you suffer from a form of the antipolitics that I have written about elsewhere. You have a disdain for the tribalistic aspects, which I get, but it leads you to thinking the tribes are equivalent, which seems like a terrible mistake to me, down here in the weeds of ideology.
The problem is, politics is a big part of how any kind of collective action happens. You seem to be proposing a social project here (stopping pernicious AI). You’ve done a good job of describing the strong social forces behind AI, which you call “Mooglebook” but you could also just say “corporate capitalism”. If you want to oppose those forces, well, surprise, you will need to do politics. Only power can oppose power. How exactly are we going to control AI, which has so much power behind it? Government regulation? That requires a strong regulatory state – politics. Or with a CPSR-style professional organization that will help technology professionals avoid applying their talents to socially destructive purposes? That too is a form of political organizing.
From the standpoint of reining in AI, the two political tribes are not equivalent. They may be equally annoying or insane to you, but only one of them is even a little bit likely to support the kind of action you seem to be calling for.
I haven’t read the whole thing yet so for all I know you address this point, but this has been my reaction so far. I mean, aside from vigorous agreement with most of the non-political aspects.
The concerning aspects of AI are already here.
This is an excellent topic that is not getting enough focus. I am glad to see there are others looking at the concerning effects on society that are already happening, or imminent, long before we reach AGI.
I’ve been writing about these aspects as well, and if you have a moment you might find some additional perspectives on the related topics of some value.
I’ve focused mainly on the social and philosophical impacts that I perceive we will encounter as we continue forward in this endeavor.
Metaconflict
I didn’t know you had written a page in response to me, I’m honored! And will certainly read the whole thing, whatever its length.
And I see you have thought quite a bit about the political implications, so I’ll shut up about that until I’ve at least understood your position better.
I don’t believe that the right will oppose AI because of “human sacredness” because I don’t believe the right actually values that. They might conceivably go after “woke corporations” but then the nexus of AI will just shift to unwoke institutions. But this is getting too much into the object level.
The page you linked “How AI destroyed the future” contains a great example of a specific antipolitics move:
The culture war’s justification for itself is that Americans are profoundly split over fundamental values. This is a lie. Mostly everyone wants the same things; but we can’t get them because the Other Side will block any action to bring them about. Everyone urgently wants the healthcare system fixed, but for exactly that reason Mooglebook AI whips the Other Side into a frenzy of opposition
I’ve critiqued this exact view – that everybody really wants the same things and we are only fighting for weirdly imposed artificial reasons – in this post on SSC’s conflict theory, and updated it here. Let’s call it “conflict eliminationism”. Aside from the general reality of conflict, this passage is weird because (a) people were fighting over the healthcare system long before Mooglebook and the internet came around, and (b) it is completely obvious why the healthcare system does not get fixed: there are powerful entrenched interests that profit from the current arrangement. It is not the case that everyone wants it fixed. (In this case, the culture war is something of a smokescreen, I agree.)
Not to belabor this particular issue, but this is what I mean by antipolitics: this tendency to avoid thinking about conflict, to try to wish or define it away, to trivialize it. Sorry to obsess on this point, which apparently fascinates me; and again, I’m reserving judgement on the book as a whole until I’ve actually read the whole thing.
Tolerance/Abstinence
Let’s say the WSJ & NYT represent opium. Fox News & MSNBC come along and are like, hey, have you heard of this thing called heroin? Twitter shows up & starts cutting everything with fentanyl. ChatGPT pulls out a balloon full of carfentanil for you to mainline. At some point it seems you either die, develop a tolerance, or learn abstinence. In any case the revelation is that it was all always poppy-fueled fever dreams.
One thing I’ve been suggesting to my friends is that we need to give up on the hope of trying to stay ahead of the deep-fakes & instead accept that for the last 30 years, we’ve “never” been engaged in a human relationship when using a screen. Our interaction was always with the device first and foremost. However much we thought our avatars might empower us, we can see now that they have enfeebled us. We, as humans, want and need unfiltered direct access to our fellows in the flesh.
As this wave of 190-proof memetic propaganda washes over us, maybe we’ll recall that we are not actually an aquatic species & never had any business trying to build lives in the ocean, better to head back to shore where we can sit down, sit still, and don’t have to hold our breath between inhales.
SusanC
Actually, therapists have often themselves implanted (and still do) false memories of childhood sexual abuse into people’s minds. A very good book called “Mistakes Were Made (But Not by Me)” has a whole chapter, with references, on that practice.
Persuasive texts
AI systems can now write persuasive texts, several paragraphs long, difficult or impossible to distinguish from human writing, arguing for any position on any topic whatsoever.
I’ve used texta.ai for my research theme. It’s frightening. I made one overview article arguing that antipsychotics cause (increased) intestinal permeability; another article, in the form of a blog post, argued for harnessing the power of antipsychotics for gut-barrier support. Both were good, persuasive, indistinguishable from human writing – and both referred to (non-existent) studies.
So it made me think: as the percentage of AI-generated papers published in real journals increases (it already happens), AIs will cite each other’s generated studies, which will gradually make more and more bizarre claims. It will become very difficult to know whether you’re reading about a real experiment that happened, or whether all the tables and data are AI-generated.
'post-rationalism'
In my mind, ‘post-rationalism’ is so vague that it doesn’t, by itself, mean much more than ‘rationality isn’t enough’. I disagree with that but I also think your own ‘meta-rationalism’ is insightful (and basically true). I just also think that ‘rationality’ can ‘naturally subsume’ its own ‘meta-ness’.
Atomization
Mention of atomization has reminded me of the book The Nazi Seizure of Power: The Experience of a Single German Town, 1932–1944, and particularly Ch. 14, “The Atomization of Society”. In just the first few months of the new regime, society was totally transformed, which was only possible because the Nazi Party was a shadow government in waiting.
[p222] “There was no more social life. You couldn’t even have a bowling club” …
Every social organization, every little club, was either abolished or drained of meaning by the imposition of a majority-Nazi executive committee and the injection of political functions into it.
“All parties came under Nazi control because they were required to have a majority of NSDAP members on their executive committies”