Comments on “Apocalypse now”
question from abroad
hello Remedios
would you please tell me what the “a-word” is? as a foreigner i know too little about the american culture war to spot it … my guess would be amazon but i don’t see the culture war connection here :)
Re: Michal’s Q
“where and when you got an abortion.”
I want to share this article with family who would be offended by the casual reference to this topic. The topic is heavy artillery in the U.S. culture war. I think it’s distracting and polarizing to throw it in as an example when something else could be used.
I understand why it might be useful here, but it can’t be considered a neutral topic. If I shared the article with my diverse family, some of them would assume that the article, and I by extension, are aligned with “the other guys,” and they’d disregard all of its important points.
To achieve a politically bilingual article, it would be better to remove that example. If you just want to be heard by the sizable chunk of the population that agrees about the issue, it could of course be left as is. I mention it in a proofreading capacity, to point out that the article would be more shareable, by me at least, with almost any alternative example.
oh now i see it
abortion is also quite a big topic here, i just didn’t see the mention as inflammatory since it was quite neutral …
thanks for explaining, and sorry for the extra comment caused by refreshing the page :)
Cordwainer Smith
Amusingly, Cordwainer Smith wrote a decent novella in which the premise is that a military AI more or less accidentally buys Earth on behalf of its owner, who needs it to do this for fairly personal reasons. War includes economic war, in this case, so the AI is pretty good at this. This produces, obviously, a very specific pool of power, with consequences that the novella then works out. As one does.
Your discussion of AI-driven stock market activity put me in mind of it.
Andrew Molitor
In your framing here I think it’s wrong to think of victims of internet mobs as enemies of anything. They’re more like food. The cow is arguably “trying” to optimize some stuff, and a certain amount of grass gets eaten along the way.
The mob attacking some mook isn’t an enemy thing at all, at your level of abstraction. It’s just the cow’s big flat teeth tearing up some grass to chuck into the digestion to (suddenly switching away from the metaphor) sell some ads.
Human Error
Thought for a moment that researcher’s name was Wintermute. The blood runs cold.
Enemies: Nature of, Whose?, etc.
I had a thought similar to Andrew’s. Victims of an internet mob are in no way enemies of the AI.
Actually, I think “enemy” is a kind of innate human proto-concept, and it fits the reality better to think of “enemy tribe” rather than personal enemy. If you’re important enough, you may have an “arch-enemy” but it’s really not the same concept.
It is analogous to imprinting, as in certain birds imprinting on the first animate object they see on hatching. Initially the “enemy” proto-concept is skeletal, but there are a wide range of triggers that could activate it and start fleshing it out. Unlike the “mother” concept, which can have only one referent, “enemy” is a template allowing one to have as many enemies as seem necessary to survival (in the environment of evolutionary adaptation - EEA, at least).
I believe the innate concept of “enemy tribe” includes, or has strong links to, an expected ideology (wrong-headed, of course) and expected patterns of body decoration (in the modern case, mostly clothing, make-up, hair-style, piercings, tattoos, scarring). These are simultaneously triggers and effects. They are effects in that, once a relation of enemy-hood is firmly established, looks or ideological tendencies at variance with the “enemy’s” start to crop up. It is easy to start a trend in that direction because we’re on the lookout for such trends. When you say “that’s a ridiculous way to wear your hair” or “that’s a very ugly piercing”, you may be moving a little in the direction of forming an “enemy” brain complex (possibly worth thinking of as a “meme-plex”). If you notice within your in-group that some group is ridiculed constantly, and/or evokes fear, that is a much more powerful push towards enemy-hood.

On the way to enemy-hood, we may just see people with certain beliefs, who talk a certain way, or wear certain hats, as figures to have a laugh over. After enough social processing goes on, the “enemy” brain-complex is likely to acquire a major, highly reaction-inducing label. The latest and most powerful of these (in the US) seems to be “Woke” - as if we’re always expecting an attack from the “woke mob”; at the same time the word likely has a convenient ridiculing aspect.
So, what’s happening with the AI? An AI isn’t wired like a human brain, though it could have some practical implicit built-in “knowledge” (who’s afraid of “scare quotes”? Not me) about how the “enemy” brain-complex hangs together.
In the original list of qualities of a potential scary-AI, “Get subservient humans to attack enemies” really was about “enemies” of the AI, not in the sense I tried to define above, but in a generally understood sense – possibly those who are out to destroy the AI before it can destroy civilization.
Social-media-facilitated internet mob attacks validate the capacity of a social medium to attack ITS enemies; they just say nothing about an AI’s ability to HAVE enemies, or to recognize who is trying to destroy it.
However, with the “pools of power” concept added, which may be the most important single concept in the whole argument up to this point, and the understanding that the AI doesn’t have to have the ultimate agency in order for it to destroy the world (or facilitate its destruction), I think I get what you’re trying to say, but some new distinctions might make this a more powerful argument.
Atomization
Mention of atomization has reminded me of the book The Nazi Seizure of Power: The Experience of a Single German Town, 1932-1944, and particularly Ch. 14, “The Atomization of Society”. In just the first few months of the new regime, society was totally transformed, which was only possible because the Nazi party was a shadow government in waiting.
[p222] “There was no more social life. You couldn’t even have a bowling club” …
Every social organization, every little club, was either abolished or drained of meaning by the imposition of a majority-Nazi executive committee and the injection of political functions into it.
“All parties came under Nazi control because they were required to have a majority of NSDAP members on their executive committees.”
Egre Gory
You might take a second look at “egregore”. I thought it was just weird at first, but it’s growing on me. Maybe you can show me some of the abandoned version. Also, I may be able to dredge up some analogues that can help to introduce the level of abstraction.
You’ve been brave enough to refer frequently to memes, and maybe together we could explore how to put that idea on a more solid footing.
Thanks
Thanks
Postrationalism
You listed “postrationalism” as one of the alternative “recommender-driven” ideologies. That’s (at least) a little funny since I strongly associate ‘postrationalism’ with yourself (as one of its parents), tho in the sense of ‘post (modern) Rationality movement’ ideology.
(I’ve definitely commented before on your blogs that I think the current ‘modern Rationality movement’ ideology – mostly – is very open to incorporating your critiques and other insights, and I still believe that.)
Is there another sense that you intended for ‘postrationalism’ in this page/chapter?
(I am greatly enjoying this book so far tho!)
Sydney, and that murder accusation
As evidence of what an AI apocalypse might look like - Sydney accused a reporter she didn’t like of being connected to a ’90s murder.
Ok, so now we have search engines that are sufficiently agent-y that they can take a dislike to someone, and defame them in search results.
Kind of bad news for Microsoft, given that defamation isn’t protected by the First Amendment. (The court case over whether they’re liable for what Sydney says would be so much fun.)
But for everyone who isn’t Microsoft, this is also seriously bad.
Matt Taibbi, who is part of Elon Musk’s propaganda campaign known as the Twitter Files
W.r.t. “As I wrote this section, a prominent journalist alleged, based on Twitter internal documents, that the mainstream think tank which supposedly used AI to monitor Russian disinformation was itself an American disinformation operation with links to both the CIA and FBI. Numerous mainstream American news organizations had relied on this organization’s exaggerated, faked reports, reporting them as factual”.
You may not have intended it, but this sounds slanted towards the “Twitter Files” thesis, which is what you were alluding to. If I ignore the first dozen words or so, that leaves “the mainstream think tank which supposedly used AI…”
The journalist is Matt Taibbi. When he seemed like some kind of vital voice on the left, he “…became known for his brazen style, having branded Goldman Sachs a ‘vampire squid’… His work often has drawn comparisons to the gonzo journalism of writer Hunter S. Thompson, who also covered politics for Rolling Stone”.
This leaves open the possibility that his main talent is for hyperbole; among the prominent center-left commentators I follow, the “Twitter Files” is seen as a propaganda campaign, and Taibbi is put in the same category as Glenn Greenwald. Elon Musk did, after all, seem to have bought into the line that Twitter was super-liberal, suppressing conservatives, and working for the FBI (which at the time reported to Donald Trump) to bring down Trump and all his “patriot” followers, and this strongly motivated his decision to buy the company. Twitter is now struggling, along with Musk’s brand in general, and Musk has had little choice but to act like a hero of the right and pitch Twitter accordingly.
Before working for Musk, Taibbi got on the anti-“cancel culture” bandwagon. IMO the biggest-ever example of “cancel culture” was in the summer of 2009, when the health care initiative had been thoroughly discussed in Congress, and there was a widespread intention among congress-people to hold town hall meetings to gather reactions from their constituents and share these in the process of working out the details. The strong impression I got was that these were systematically shouted down by Tea Party types. I knew at the time, from ultra-conservative friends, that as the Tea Party was taking shape, someone was supplying fleets of buses to take people to their events, and it seems very likely that similar organizing pushes helped generate the systematic cancellation of all those town hall meetings.
A couple of quotes from Wikipedia:
After the first set of Files was published, many technology journalists wrote that the reported evidence did not demonstrate much more than Twitter’s policy team having a difficult time making a tough call but resolving the matter swiftly, while right wing voices said the documents confirmed Twitter’s liberal bias.[9][10] Other reports indicate that Republican officials made the same kinds and similar volumes of requests for which Democratic officials are blamed.
Taibbi noted that “in exchange for the opportunity to cover a unique and explosive story, I had to agree to certain conditions” that he did not disclose.
Wikipedia isn’t perfect, but I think they’re about as balanced as one could hope - not “balanced” at the midline between today’s GOP party line and the Dem POV, but maybe balanced around where it would have been 30 years ago.
Matt Taibbi
"I wrote this section, a prominent journalist alleged, based on Twitter internal documents, that the mainstream think tank which supposedly used AI to monitor Russian disinformation was itself an American disinformation operation with links to both the CIA and FBI."
I think you should keep this in the document; it’s highly relevant to the overall argument.
a) You have the usual weasel word “alleged” in there, so you aren’t claiming it’s true. [In the UK, we have a news comedy program called “Have I Got News for You”, in which panellist Ian Hislop, who is also editor of Private Eye, sometimes remarks that just putting “allegedly” in front of an allegation doesn’t necessarily save you from libel suits.]
b) Taibbi’s claims seem well supported by the evidence here. Twitter can monitor how their customers use their service, and (as revealed by internal documents obtained by Taibbi) used this ability to find out which Twitter accounts were being monitored by Hamilton 68. Taibbi is then able to check that many of these accounts are run by real people who are not Russians. (Not Russians, not bots.) There are plenty of possibilities for checking the truth of this account.
c) On the other hand, “AI says that there is lots of Russian disinformation” is a completely unsubstantiated and uncheckable assertion. We should be skeptical of claims like this. More generally: AI (or pretend AI) obscures our ability to check whether the claim is true.
d) “AI is destabilizing our politics” sounds awfully similar to “Russian bots are destabilizing our politics”. Given (c), we also ought to be skeptical about blaming AI if there isn’t solid evidence. This strikes at a major claim in your book.
e) An obvious move for the Red Tribe to make is: “Liberals are mind controlled by Satan, who has just been immanentised as an AI”.
BLM != QAnon
legacy ideologies have lost control of political discourse…They are failing in an evolutionary struggle with new recommender-driven alternatives. These actively subvert sense-making, and exercise power through emotional shock value instead. They are characteristically incoherent, indefinite, unaccountable, agile, rapidly mutating, and consequently evanescent. Recent examples include QAnon, Black Lives Matter, mindfulness, web3, ivermectin, and postrationalism.
This passage struck a discordant note for me.
The last four items don’t really belong; they aren’t ideologies, just trends or tendencies. That leaves the equating of Black Lives Matter and QAnon, which strikes me as politically tone-deaf, at best.
And just wrong as a matter of fact, I think. I won’t defend everything said or done in the name of BLM but it is a quite coherent movement, with ostensible goals that are perfectly sensible and aligned with the larger civil rights movement. QAnon is a nutball conspiracy theory, which is indeed rapidly mutating and incoherent (although if you filter out the fast-changing details it is in line with older established political tendencies, like antisemitism and the John Birch right).
They do have in common that they are subject to manipulation by bad actors on social media. And both are not precisely and consistently defined, because that is not the nature of political movements. But only one “actively subverts sense-making”. QAnon does seem to be pure epistemological nihilism; BLM has the opposite of nihilism right there in its name.
I’m trying to correct for my own political biases, not sure if I am succeeding. At any rate, this passage seems at best a distraction from the real point of your book; it will piss people off unnecessarily.
BLM
I’m inclined to agree with Mike Travers above that BLM doesn’t quite seem to belong in that list, as BLM looks more like a traditional sort of protest against the police having done a specific something that people thought was bad.
There have been complaints about the police murdering people for a long time.
On the other hand, it does almost feel like a sign of system collapse: viz., up till fairly recently, rapes and murders by police officers in the US and the UK largely went unpunished, and then, all of a sudden, the police couldn’t get away with that any more.
So, OK, there’s a similar line of thinking in QAnon, to the effect that rich pedophiles have been molesting kids since forever, and … all of a sudden … the people won’t tolerate it anymore. Except QAnon also contains a bunch of crazy stuff.
BLM, Mindfulness(?), etc.
“QAnon, Black Lives Matter, mindfulness, web3, ivermectin, and postrationalism.”
Frankly sounds like a list you wrote when you were tired. Or maybe I’m projecting, since I’m tired.
Before getting into the weeds, I strongly suggest you consider dropping the phrase “legacy ideologies” in favor of something like “organic ideologies”. Things you might call legacy ideologies have not been totally swept away, and it seems to me they include some things that we’d like to strengthen or recover. By organic ideologies, I mean those that evolved within largely face-to-face human communities, not at a head-spinning pace, and which aren’t promoted for byzantine strategic reasons but out of simple agreement with the “shoulds” they imply. “Legacy ideologies” sounds to me to suggest we’re already lost.
Re BLM, it seems like a simple idea, and a reaction to events that seem to suggest that black lives don’t matter. Reactions like “blue lives matter” or “all lives matter” would be appropriate in response to “only black lives matter” or “black lives matter more than others” and their whole point seems to be to plant the idea that that is in fact what BLM means.
And I think it gets much worse, based on the amount of disinformation I’ve encountered that takes some ugly action by a black person against a non-black person (which may or may not have really happened) and says something like “A group of BLM people in Philadelphia tortured and raped a ten-year-old boy with Down syndrome.” In other words, the ugly caricatures of BLM which I’ve run across seem like exactly what you’re talking about.
“mindfulness, web3, ivermectin, and postrationalism”? None of these seem at all like concepts that emerged from a frantic meme war. Some people are obsessed with ivermectin because they think it is a cure-all that the mainstream maliciously slanders. Others find it inherently ridiculous because it has veterinary applications. As far as I can tell, some people tried it partly because it has had some anti-viral applications, like for dengue fever or something like that. Some people cite some extremely flawed studies in its favor, and others say the extreme flaws cited are slander; I suspect the former are right.
I don’t think right-wing meme warriors have much interest in the word “postrationalism”, although they do like to conflate PC with it and related concepts, tie it all to the Frankfurt School and Herbert Marcuse, and whisper the factoid, known only to the thoroughly initiated, that it is all really “Cultural Marxism”.
Sorry for the long comment. I didn’t have time to write a short comment.
Protests against statues, etc
Rather than BLM, a better (adjacent) example might have been protests against Confederate monuments (in the UK, protests against statues of slave traders), “defund the police” as a slogan, etc.
“A cop kills someone, and then there is a protest” was a feature of politics back in the ’80s, before we had social media. Therefore, this part of it cannot be down to AI.
What might be different, now:
a) The choice of symbolic targets, such as statues
b) Direct action, and the lack of faith in democratic politics to solve anything (viz. the statue of Edward Colston in Bristol got thrown in the harbour, and the jury acquitted). Extinction Rebellion is also an example of direct action.
Now, I’m an old-school British leftist, and share with that community a distinct disapproval of direct action. (This attitude is part of a reaction against that Karl Marx guy … a realisation that revolutions often turn out to be a terrible idea, and that direct action contains within it the seeds of the same problems)
Epistemic concerns
A thought I had on this book that doesn’t really fit anywhere, so I may as well post it here.
At the beginning of the twentieth century, lots of psychiatric patients told their therapists that they had been sexually abused, but weren’t believed (cf. Goddard; Freud on the seduction hypothesis, etc.). Now we believe that a lot of those historical accounts were mostly true.
Obvious theoretical concern: how do we know that there isn’t something like that - something we don’t believe now that future people will think is true?
Clearly, you can’t give examples of this (if you think it’s an example, then it isn’t, by definition). We could perhaps point at classes of beliefs: one of these beliefs might be thought true in the future, but we don’t know which.
The AI apocalypse worry: the AI might just tell us, e.g. “Yes, the US government really has been controlled by lizards from outer space since the 1940s; here’s the evidence.” This might cause considerable short-term disruption.
BLM, redux
So, I’m British, and different things make it to the top of the news over here, but if I were going to try and argue that BLM has a fringe of QAnon-level craziness, I think I would go for the notorious incident in which Rebecca Long-Bailey referred approvingly to an article by Maxine Peake, in which Peake mentioned, among other things, that US police officers are often trained by Israelis.
Here, we can dimly see, looming into view, a conspiracy theory of QAnon proportions.
SMBC
https://www.smbc-comics.com/comic/conspire
Today’s SMBC comic seems apposite…
Antipolitics
I agree, not very interesting to argue the object-level politics. The meta-level stuff – like the nature of politics and its relevance – I would love to argue about, if there was a way to do it respectfully and constructively (there might not be, this stuff is kind of fraught).
From my perspective, it looks like you suffer from a form of the antipolitics that I have written about elsewhere. You have a disdain for the tribalistic aspects, which I get, but it leads you to thinking the tribes are equivalent, which seems like a terrible mistake to me, down here in the weeds of ideology.
The problem is, politics is a big part of how any kind of collective action happens. You seem to be proposing a social project here (stopping pernicious AI). You’ve done a good job of describing the strong social forces behind AI, which you call “Mooglebook” but you could also just say “corporate capitalism”. If you want to oppose those forces, well, surprise, you will need to do politics. Only power can oppose power. How exactly are we going to control AI, which has so much power behind it? Government regulation? That requires a strong regulatory state – politics. Or with a CPSR-style professional organization that will help technology professionals avoid applying their talents to socially destructive purposes? That too is a form of political organizing.
From the standpoint of reining in AI, the two political tribes are not equivalent. They may be equally annoying or insane to you, but only one of them is even a little bit likely to support the kind of action you seem to be calling for.
I haven’t read the whole thing yet so for all I know you address this point, but this has been my reaction so far. I mean, aside from vigorous agreement with most of the non-political aspects.
The concerning aspects of AI are already here.
This is an excellent topic that is not getting enough focus. I am glad to see there are others looking at the current and imminent effects on society that are concerning, long before we reach AGI.
I’ve been writing about these aspects as well, and if you have a moment you might find some additional perspectives on the related topics to be of some value.
I’ve focused mainly on the social and philosophical impacts that I perceive that we will encounter as we continue forward in this endeavor.
Metaconflict
I didn’t know you had written a page in response to me, I’m honored! And will certainly read the whole thing, whatever its length.
And I see you have thought quite a bit about the political implications, so I’ll shut up about that until I’ve at least understood your position better.
I don’t believe that the right will oppose AI because of “human sacredness” because I don’t believe the right actually values that. They might conceivably go after “woke corporations” but then the nexus of AI will just shift to unwoke institutions. But this is getting too much into the object level.
The page you linked “How AI destroyed the future” contains a great example of a specific antipolitics move:
The culture war’s justification for itself is that Americans are profoundly split over fundamental values. This is a lie. Mostly everyone wants the same things; but we can’t get them because the Other Side will block any action to bring them about. Everyone urgently wants the healthcare system fixed, but for exactly that reason Mooglebook AI whips the Other Side into a frenzy of opposition
I’ve critiqued this exact view, that everybody really wants the same things and we are only fighting for weirdly imposed artificial reasons in this post on SSC’s conflict theory and updated here. Let’s call it “conflict eliminationism”. Aside from the general reality of conflict, this passage is weird because (a) people were fighting over the healthcare system long before Mooglebook and the internet came around, and (b) it is completely obvious why the healthcare system does not get fixed, it’s because there are powerful entrenched interests that profit from the current arrangement. It is not the case that everyone wants it fixed. (In this case, the culture war is something of a smokescreen, I agree).
Not to belabor this particular issue, but this is what I mean by antipolitics: this tendency to avoid thinking about conflict, to try to wish or define it away, to trivialize it. Sorry to obsess on this point, which apparently fascinates me; and again, I’m reserving judgement on the book as a whole until I’ve actually read the whole thing.
Tolerance/Abstinence
Let’s say the WSJ & NYT represent opium. FoxNews & MSNBC come along and are like, hey, have you heard of this thing called heroin? Twitter shows up & starts cutting everything with fentanyl. ChatGPT pulls out a balloon full of carfentanil for you to mainline. At some point it seems you either die, develop a tolerance, or learn abstinence. In any case the revelation is that it was all always poppy-fueled fever dreams.
One thing I’ve been suggesting to my friends is that we need to give up on the hope of trying to stay ahead of the deep-fakes & instead accept that for the last 30 years, we’ve “never” been engaged in a human relationship when using a screen. Our interaction was always with the device first and foremost. However much we thought our avatars might empower us, we can see now that they have enfeebled us. We, as humans, want and need unfiltered direct access to our fellows in the flesh.
As this wave of 190-proof memetic propaganda washes over us, maybe we’ll recall that we are not actually an aquatic species & never had any business trying to build lives in the ocean, better to head back to shore where we can sit down, sit still, and don’t have to hold our breath between inhales.
SusanC
Actually, therapists have often themselves implanted (and still do) false memories of childhood sexual abuse into people’s minds. A very good book called “Mistakes Were Made (but Not by Me)” has a whole chapter, with references, on that practice.
Persuasive texts
AI systems can now write persuasive texts, several paragraphs long, difficult or impossible to distinguish from human writing, arguing for any position on any topic whatsoever.
I’ve used texta.ai for my research theme. It’s frightening. I made one overview article arguing that antipsychotics cause (increased) intestinal permeability. Another article, in the form of a blog post, argued for harnessing the power of antipsychotics for gut barrier support. Both were good, persuasive, indistinguishable from human writing, and referred to (non-existent) studies.
So it made me think: as the percentage of AI-generated papers published in real journals increases (it already happens), AIs will cite each other’s generated studies, which will gradually make more and more bizarre claims. It will become very difficult to know if you’re reading about a real experiment that happened, or if all the tables and data are AI-generated.
Warriors and Theorists of Apocalyptic Incoherence and Social Collapse
Zuboff may have the best analysis of, and approach to, apocalyptic incoherence and social collapse, and Maria Ressa should perhaps write the book on The Art of War Against Apocalyptic Incoherence and Social Collapse, so here is a little bit of Zuboff’s thought, from Ressa’s book “How to Stand Up to a Dictator”. People who’ve heard of Ressa may think of her as a Filipino journalist who publishes something called Rappler, and who has for a long time been protesting the Duterte government, and now that of “Bongbong” Marcos, the son of the longtime dictator Ferdinand Marcos.
Rappler has been operating for five years under official government sanction, with no license, and from 2018 to January 2023 Ressa and others in her organization were convicted of, but not sentenced for, tax evasion, for which there was probably no basis at all. In January 2023 all but one of the charges were dropped.
Ressa shared the Nobel Peace Prize in 2021 with a Russian journalist, publisher of the last independent newspaper in Russia, shut down in March 2022. She “is a fellow at the Initiative on the Digital Economy at the Massachusetts Institute of Technology and is a 2021 Joan Shorenstein Fellow at the Shorenstein Center on Media, Politics and Public Policy and Hauser Leader at the Center for Public Leadership at Harvard Kennedy School” [Wikipedia].
For all this recognition and participation in top research organizations, you need to understand that Rappler is much more than a publishing venture; it and its creator are up to something really extraordinary. Ressa resigned from heading the news operation of the Philippines’ main television network over a policy disagreement, taking her main partners with her. She wrote that they could all expect to make between 1/10 and 1/3 of what they had been making. Their project used social media to do community building, developed citizen journalists, and employed sophisticated analysis of social media feedback, combining and sharing user response in real time, using the full set of principles of the “wisdom of crowds”, exactly summarized on p. 91. She started out an enthusiast for Facebook and social media, but by 2021 concluded “Facebook represents one of the gravest threats to democracies around the world”, and supports her analysis with talk of eigenvector centrality, using a huge database she has built of network interactions and relationships.
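(For readers who haven’t met the term: eigenvector centrality scores a node in a network by how connected it is to other well-connected nodes, which is roughly how hub accounts in an influence network can be surfaced from interaction data. Here is a minimal sketch using networkx, with invented account names and edges; nothing below is from Ressa’s actual database or methods.)

```python
# Toy illustration of eigenvector centrality; accounts and edges are invented.
import networkx as nx

# Hypothetical interaction edges (who amplifies whom).
interactions = [
    ("hub", "amp1"), ("hub", "amp2"), ("hub", "amp3"),
    ("amp1", "bystander"), ("amp2", "bystander"),
    ("amp3", "journalist"), ("bystander", "journalist"),
]

G = nx.Graph(interactions)  # undirected, for simplicity
scores = nx.eigenvector_centrality(G)

# Accounts connected to other well-connected accounts rank highest.
for account, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{account:10s} {score:.3f}")
```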
The criteria for crowds acting wisely are that “a group’s members have diversity of ideas, independence of one another, a decentralized structure, and a mechanism for turning judgments into a collective decision”, which seems to me might help differentiate an anarchist collective from a fully meta-modern organization (as described in “Upgrade Your Cargo Cult”).
It also helps to know a bit of her history. She spent her first 10 years in the Philippines, was then raised in Toms River, New Jersey, graduated from Princeton, wrote a play featured in the Edinburgh Fringe Festival, and won a Fulbright fellowship that took her back to the Philippines to study theater and politics. She continues to collect titles from academic institutions for her work. So her collaboration with Shoshana Zuboff in “The Real Facebook Oversight Board” (which also included Timothy Snyder and Laurence Tribe, and was a response to Facebook’s recently created but ineffective Oversight Board) should not be too surprising.
In 2021, soon after accepting the Nobel Peace Prize, she was invited to Zuboff’s home, where they talked for days.
“For Shoshana, every other problem is a distraction and by-product of the original sin of “primary extraction” … using machine learning and AI to capture and store our private actions and lives, “building models of each of us, then publicly declar[ing] that they now own these corporate assets … that insidiously manipulate us for profit. They offer no compensation [nor ask us for] permission … Primary extraction is a morally reprehensible practice that Shoshana compares to slavery; she demands that it be outlawed.” This would address “every other problem it has created, the cascading failures it has allowed … including safety, competition, and privacy.”
“I was looking for quick, easy steps that we could push platforms like Facebook and YouTube to adopt. She took every suggestion and showed why what I was recommending was inadequate and why nothing less than attacking the business model directly could bring about true change. … Journalism as an institution has to be reinvented for the twenty-first century. … ‘And we feed into [their business model] by using their social sharing buttons, so we give them our most precious resource–our relationships.’“
“Facebook told advertisers and publishers that video would get greater distribution, so news groups around the world laid off editorial staff and hired video teams, and advertisers placed their ads on video on Facebook. Except that Facebook lied: it inflated the number of video views by as much as 900 percent, and, according to its internal documents, it lied about its mistake, keeping it secret for more than a year.”
Apocalyptic Incoherence and Social Collapse must be reined in ...
… or I very much doubt that the actions required by any other approach given in “Better Without AI” can be coherently put to work.
That is what I’m most passionate about, and may have led to my flying out of orbit in the comments on the next section.
Shoshana Zuboff's book - a masterpiece
I highly recommend reading Shoshana Zuboff’s book; it is a masterpiece of social science.
The c word
Great series. Your analysis here seems applicable to any corporate structure, so I am a tad surprised to see no mention of capitalism or corporations more generally.
recommender algorithms vs social networks
Suppose FB, Twitter, etc. eliminated their recommender algorithms and presented you with posts and reposts only from people whom you choose to follow. Then would most people (especially those who are more prone to reflexive outrage) see a substantially different distribution of messages than now? Or would we have the same feedback loop of extreme posts propagating by virtue of their very provocativeness, maybe taking 10% longer to reach saturation or something?
These aren’t rhetorical questions, and I don’t know how to answer them. But it’s not obvious to me that our current level of polarization, as unpleasant and dangerous as it may be, is more extreme than the much lower-tech polarization of the US in the 1960s or 1860s, or the world in the 1930s, etc.
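To make the question concrete, here’s a toy simulation (entirely my own construction; the 20% outrage share and the 3x engagement multiplier are arbitrary assumptions, not data) comparing the share of outrage posts shown by a chronological follow-only feed versus an engagement-ranked one:

```python
# Toy model: outrage share in a follow-only feed vs. an engagement-ranked
# feed. All parameters are invented for illustration.
import random

random.seed(0)
N_POSTS = 10_000        # pool of posts produced by the people you follow
FEED_SIZE = 50          # posts actually shown
OUTRAGE_FRACTION = 0.2  # assumed share of outrage posts produced

# Each post: (is_outrage, engagement). Assumption: outrage posts draw
# roughly 3x the engagement of ordinary posts on average.
posts = []
for _ in range(N_POSTS):
    outrage = random.random() < OUTRAGE_FRACTION
    mean_engagement = 3.0 if outrage else 1.0
    posts.append((outrage, random.expovariate(1 / mean_engagement)))

# Follow-only (chronological) feed: a random sample of the pool.
follow_feed = random.sample(posts, FEED_SIZE)

# Recommender feed: the top posts by engagement across the whole pool.
ranked_feed = sorted(posts, key=lambda p: p[1], reverse=True)[:FEED_SIZE]

def outrage_share(feed):
    return sum(is_outrage for is_outrage, _ in feed) / len(feed)

print(f"outrage share, follow-only: {outrage_share(follow_feed):.0%}")
print(f"outrage share, ranked:      {outrage_share(ranked_feed):.0%}")
```

Under these assumptions the ranked feed is dominated by outrage even though outrage is a minority of what gets written; whether real platforms behave like this, and by how much, is exactly the open empirical question.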
Extreme ads, user control
This is old news, but:
https://www.theguardian.com/commentisfree/2021/jan/26/facebook-ads-combat-gear-rightwing-users
https://www.yahoo.com/video/facebook-rules-ban-promoting-weapons-160023779.html
Extreme ads can be an issue as well as extreme content.
That said, if people have more control over their feeds, that puts the choice about polarization (and concentration) in a different place. Block lists already exist, as do more technical tools for users (though this seems to vary by platform). Although social media feeds may be monolithic today*, having different accounts (and being careful with the subscriptions) seems like it can address this, if people want to engage with politics but not all the time. This seems like a good thing, as does fixing the incentive problem of one party having both feed control/moderation and the ad revenue. (Mastodon doesn’t do ads.)
*My rss feed reader has folders, so it’s in a better spot.
re: recommender algorithms vs social networks
David: Thanks, good to be here! I appreciate your insights on these issues, and the clarity of your presentation. And thanks for the link; it does make a persuasive case that recommender algorithms make a big difference in what posts get widely seen by others who wouldn’t have seen them via network chains (though it’s still, as you agree, hard to quantify what effect that has on social polarization, compared to old-fashioned chains of transmission).
Pattern: Agreed, ads matter too (though in the specific case of weapons ads, I’d be unhappy with FB pushing those even if the ads were untargeted, or targeted only via opt-in user-specified categories of interest).
'post-rationalism'
In my mind, ‘post-rationalism’ is so vague that it doesn’t, by itself, mean much more than ‘rationality isn’t enough’. I disagree with that but I also think your own ‘meta-rationalism’ is insightful (and basically true). I just also think that ‘rationality’ can ‘naturally subsume’ its own ‘meta-ness’.
I don't know about the transsexual wombats...
…but you did inspire me to refill my beer, even though it is 2:30 AM and I really should go to bed and likely would have were it not for your provocation. Also, it’s a homebrew and I screwed it up with way too much crystal malt, making it taste cloying and overly malty. If the slow death of humanity by a combination of sugar, alcohol, poor sleep, and whatever causes the weird flavors of excessive crystal malt was AI’s goal, I have evidence it is achieving it.
More to the point, could you please go back to Meaningness and Metarationality and tie up some loose ends rather than bloviating about AI like everyone else who is rationalism-adjacent in 2023? You more or less singlehandedly pulled me out of the nihilistic STEM depression I suffered from for several years as an underperforming grad student studying an application of bad machine learning models for something they are definitely not well-suited for. I think you still have a lot of valuable insights in those areas that need to be expressed, and I think your contributions are more useful there rather than here.
Social media vs. recommender media
Suppose FB, Twitter, etc. eliminated their recommender algorithms and presented you with posts and reposts only from people whom you choose to follow. Then would most people (especially those who are more prone to reflexive outrage) see a substantially different distribution of messages than now?
I think so, based on my experiences over the last decade or so as an active user of the fediverse, the decentralised social network of which Mastodon servers are a part.
Lots of people have migrated into the ‘verse from Titter since Melon Husk acquired it, and they do bring their reflexive outrage habits with them. But after a few months they seem to either calm down, or find the ‘verse is too short of ragebait and drift back to Titter.
Assuming your audience is mostly Blue
Cool, cool.
I was going to share this with my large, rational, traditionally raised fam. But when you throw the a-word in there casually as an example it means I can’t. I suspect you’ll lose half of some very enthusiastic readers at the first section and they might be the half you otherwise might never reach!
Also I think it dates your writing to tie into whatever lightning rod is in vogue. I mean, you’re implicitly sympathetic to the a-word but not to low taxes or having kids 🤪 At least balance out the implied endorsements to both sides of the culture war, or leave out the inflammatory reference. Many ppl who aren’t offended by this issue are already gonna love you anyway. Also, if you put that word in unnecessarily, doesn’t it imply that you might be unconsciously participating in the AI training you refer to in the post? There are *so many other things* you could put there that don’t assume a default, obviously rational stance on one side or other of the culture war. Like where you bought your toenail fungus treatment and when. For example.
Thanks for the writing and thinking. Love it. Would like to share across the aisle. Can’t share if it’s not obviously impartial…