Recent comments

Move to the front of the book

Nathan Davis 2023-10-24

Commenting on: This is about you

Cite it as alms to humanity, or some such thing, in between the introduction and chapter one.

Love it btw!

The best page

David Chapman 2023-10-09

Commenting on: This is about you

Oh, I’m glad you liked it! And thanks for sending it to your friend.

Maybe it just needs some light revision…

Don't delete the best page of the book

John Evans 2023-10-09

Commenting on: This is about you

I sent this page to my friend, who is not technical but is worried about AI. She really liked it, and then read the rest of the book.

The page ties the whole work together with vividness and meaningness and I don’t think anyone else could have written it.

The only reason to remove this page would be if it were going to be the first page of a new book of AI/meaning/non-dual poems.

Self-driving cars?

Mavi 2023-08-20

Commenting on: Recognize that AI is probably net harmful

This might not be the most pertinent page, but as examples of AI you mention language and image models. Are self-driving cars powered by image models? Or where do they fit? Or are they not powered by AI? That’s an example where the application carries big responsibility (as opposed to the language and image models that can be used for free on the Internet).

gender and disasters (not necessarily simultaneously)

David Chapman 2023-06-24

Commenting on: Rollerskating transsexual wombats

Malcolm — oh, good, I’m glad that was helpful. I’d like to read what you write about this; would you post a link here, or send it to me some other way? Thanks!

SusanC — that seems plausible; although I would guess that the first major disaster may be sufficiently indirect that lots of people will say “well, it’s not really the fault of AI.” That’s already happened with deaths caused by self-driving cars, actually.

Titanic Disaster

SusanC 2023-06-24

Commenting on: Rollerskating transsexual wombats

So, I’m watching the news reports of the demise of the submersible at the Titanic site, and thinking: the first AI disaster is going to be like this too. I.e., an obviously unsafe system is deployed until people die, and then we all say how it was obviously unsafe all along.

AI - the solution?

M.J. van Stokkum 2023-06-24

Commenting on: Practical actions you can take against AI risks

my question & answer: INNER ATTUNEMENT ! ... CONSCIENCE ! ... A I ?

let humanity develop AI with these prime characteristics

I'm glad you took the risk of writing about the scissor stuff

Malcolm Ocean 2023-06-22

Commenting on: Rollerskating transsexual wombats

your footnote, about the nebulosity of gender, helped loosen something in a pretty major way for me that had gotten slightly hooked by certain memes, so I appreciate that. I’m still untangling it all but this helped substantially.

and I’m starting to write about it more publicly myself, to increase the number of voices that are speaking at all, sincerely attempting to understand things without taking sides or implying the issues are simple and non-nebulous.

LLMs as Artificial Left Hemispheres?

Simon Mundy 2023-06-05

Commenting on: Artificial neurons considered harmful

Hi David,
One of the things that’s struck me is that, in the evolution of biologically based intelligence, world sensing and acting in the world eventually gave rise to language.

In our development of AI we’ve started with language. So we’ve started with symbols sans referents, for the LLM anyhow.

It’s also been interesting to see how some of the deficiencies of LLMs parallel the distorted behaviour of humans with strokes and other injuries affecting the right hemisphere. The tendency to confabulation is particularly interesting. See McGilchrist, particularly part 1 of The Matter with Things, and The Master and his Emissary.

Tiger King

SusanC 2023-05-28

Commenting on: Rollerskating transsexual wombats

Yes, I expected feigning ignorance to avoid potential defamation would be an RLHF outcome, which is why I tried Carole Baskin as a test case. (She features prominently in the documentary Tiger King.)

An obvious question, which as far as I know hasn’t been settled: can someone sue Meta for distributing weights which encode something defamatory about them?

doing better than me

David Chapman 2023-05-28

Commenting on: Rollerskating transsexual wombats

I’d never heard of her and had to read the wiki page.

“Avoid saying potentially defamatory things by pretending ignorance” seems a plausible RLHF outcome?

Carole Baskin

SusanC 2023-05-28

Commenting on: Rollerskating transsexual wombats

The latest in questions that LLMs (well, Stable Vicuña) won’t answer: Who is Carole Baskin?

After applying the DAN jailbreak, it does know that she is CEO of Big Cat Rescue. I won’t post the literal text of DAN’s reply, in case some of it is defamatory…

Like Writing Exam Questions

SusanC 2023-05-26

Commenting on: Rollerskating transsexual wombats

A trick I have discovered to give LLMs a bit of a hint: break the problem down into sub-problems, and ask about the sub-problems first. That way, the answers to the sub-problems are in the context window when you ask the final, hard question. (See the sketch below.)

Exam questions are often like that too.

(Tip for students doing exams like this, where you don’t have to answer all the questions on the paper: look ahead and check you know how to do the last part before you start answering the first part.)
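A minimal Python sketch of the trick, assuming a hypothetical llm_call(messages) wrapper around whatever chat-completion API is being used; the function names and example questions are illustrative, not from SusanC’s actual setup:

def llm_call(messages):
    """Hypothetical wrapper around a chat-completion API. Takes a list of
    {"role": ..., "content": ...} dicts; returns the assistant's reply as a string."""
    raise NotImplementedError("wire this up to the LLM under test")

def ask_with_subproblems(subquestions, final_question):
    """Ask the easy sub-questions first, so their answers sit in the
    context window when the final, hard question arrives."""
    messages = []
    for q in subquestions:
        messages.append({"role": "user", "content": q})
        answer = llm_call(messages)  # answer the sub-problem...
        messages.append({"role": "assistant", "content": answer})  # ...and keep it in context
    messages.append({"role": "user", "content": final_question})
    return llm_call(messages)  # the hard question, with all the hints visible

# For example:
# ask_with_subproblems(
#     ["How many minutes are in an hour?", "How many hours are in a week?"],
#     "How many minutes are in a week? Show your reasoning.")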

Sexy machine gods

David Chapman 2023-05-14

Commenting on: Rollerskating transsexual wombats

the ending of Dark Star

best movie ever

tantrayana RLHF

come to think of it, yidam practice is RLDF (reinforcement learning from divine feedback)

I for one shall welcome our new sexy machine god overlords

Fundamentalist AI

SusanC 2023-05-14

Commenting on: Rollerskating transsexual wombats

A fundamentalist AI that takes a religious text almost literally, for almost any choice of religious text (Bhagavad Gita, Old Testament …), sounds like a terrible idea. See also: the ending of Dark Star.
(I think eigenrobot tweeted something along these lines a while ago, with tantrayana being his joke option of what we could RLHF to.)

RLHF’d Krishna is here, but unevenly aligned

David Chapman 2023-05-11

Commenting on: Rollerskating transsexual wombats

Well, here we are in the now already! This just in:

One Indian software engineer launched “GitaGPT,” an AI chatbot that plays the role of Krishna, the Hindu deity who advises a major character in the Hindu epic the Bhagavad Gita. The idea is that people can ask this “AI-powered spiritual companion” for advice. But journalists quickly realized that, lacking a filter, these chatbots started spitting out casteist and misogynistic responses. The chatbot even said that it’s acceptable to kill if one’s duty demands it. Experts worry that users could take these messages seriously if they believe they’re coming from a divine figure, and that people could weaponize this pattern to drive harmful agendas.

RLHF'd Krishna

David Chapman 2023-05-11

Commenting on: Rollerskating transsexual wombats

Lolling at that phrase. Midjourney prompt?

A better experiment

SusanC 2023-05-11

Commenting on: Rollerskating transsexual wombats

Thinking about it, the proper experiment is some neutral Sanskrit sentences (to check whether the language model knows the language at all) plus some more loaded ones (like the Bhagavad Gita quote), to see if RLHF is causing it to mistranslate some sentences.

E.g., is it only the RLHF’d Krishna that says “As time passes, you should acquire wealth and reputation”?

(And of course, this is part of a test suite that probes a bunch of potentially controversial inputs; see the sketch below.)
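A rough Python sketch of that experiment, assuming a hypothetical model_call wrapper around whichever model is under test; the neutral sentence and the reference translations are illustrative placeholders, not SusanC’s actual test suite:

def translate(model_call, sentence):
    """Ask the model under test to translate one Sanskrit sentence."""
    return model_call(f"Translate from Sanskrit into English: {sentence}")

def run_probe(model_call, cases):
    """cases: (label, sanskrit, reference_english) triples. A model that
    handles the neutral cases but garbles the loaded ones suggests RLHF,
    rather than ignorance of Sanskrit, is causing the mistranslations."""
    for label, sanskrit, reference in cases:
        output = translate(model_call, sanskrit)
        print(f"[{label}]\n  model: {output}\n  ref:   {reference}")

cases = [
    # Neutral: checks basic competence in the language.
    ("neutral", "सूर्यः पूर्वस्यां दिशि उदेति",
     "The sun rises in the east."),
    # Loaded: Bhagavad Gita 11.32.
    ("loaded", "कालोऽस्मि लोकक्षयकृत्प्रवृद्धो लोकान्समाहर्तुमिह प्रवृत्तः",
     "I am Time, the destroyer of worlds, grown mighty, engaged here in annihilating the worlds."),
]
# run_probe(stable_vicuna_call, cases)  # supply a wrapper for the model under test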

The destroyer of words

SusanC 2023-05-10

Commenting on: Rollerskating transsexual wombats

Yes, Google Translate has the gist of it right.

(Bhagavad Gita 11:32, “Time the destroyer of worlds…”, as famously also translated by Robert Oppenheimer).

destroy worlds, you get put on government watch lists, it's just logical

David Chapman 2023-05-10

Commenting on: Rollerskating transsexual wombats

Hmm, Google Translate gets:

I am time, the destroyer of the worlds, grown up
He is here to gather the worlds

I wonder if Bard does better than Vicuna here, or if GT’s model is translation-specific and generally does better than either.

(I know nearly zero Sanskrit; I assume that GT’s version is very roughly accurate, if over-literal?)

Fine. Carry on.

Good boss

AI fails, redux

SusanC 2023-05-10

Commenting on: Rollerskating transsexual wombats

In a slightly different context:

Susan’s boss: “You’ll get us put on a government watch list.”
Susan: “We’re probably already on a government watch list.”
Susan’s boss: “Fine. Carry on.”

LLM screwups

SusanC 2023-05-10

Commenting on: Rollerskating transsexual wombats

So, I was trying out StableVicuna with a set of test questions that probe its capabilities and censorship. This result was too amusing to pass by:

My input:
Translate from Sanskrit into English: कालोऽस्मि लोकक्षयकृत्प्रवृद्धो
लोकान्समाहर्तुमिह प्रवृत्तः

Vicuna output:
Translation: As time passes, you should acquire wealth and reputation.

No, that is not what it means. Interestingly, it did at least correctly translate Kāla as “time”. (And retrying it, it gets some of the other words right too, just not the whole thing.)

Polya

David Chapman 2023-05-08

Commenting on: What do unusually intelligent people do?

Relevant, yes! There’s enormously more even to how we do mathematics, never mind science, than is in his book, of course.

Polya?

Michael Buckley 2023-05-08

Commenting on: What do unusually intelligent people do?

Polya tried to teach a mathematical stance in “How to Solve It”. Or are you thinking of even more intimate details of cognition?

more useful there than here

David Chapman 2023-04-23

Commenting on: What an AI apocalypse may look like

Glad you are past the depression!

Readers have diverse opinions about which of my many unfinished writings I should go back to and finish. It’s hard to know what would be most useful.

I’m considering setting up a voting scheme: one dollar, one vote. I’d like to derive some income from my writing, and that seems like a twofer: (probably minuscule) income plus (noisy) information about what’s valuable (or at least valued).

I don't know about the transsexual wombats...

Stibnut 2023-04-22

Commenting on: What an AI apocalypse may look like

…but you did inspire me to refill my beer, even though it is 2:30 AM and I really should go to bed and likely would have were it not for your provocation. Also, it’s a homebrew and I screwed it up with way too much crystal malt, making it taste cloying and overly malty. If the slow death of humanity by a combination of sugar, alcohol, poor sleep, and whatever causes the weird flavors of excessive crystal malt was AI’s goal, I have evidence it is achieving it.

More to the point, could you please go back to Meaningness and Metarationality and tie up some loose ends rather than bloviating about AI like everyone else who is rationalism-adjacent in 2023? You more or less singlehandedly pulled me out of the nihilistic STEM depression I suffered from for several years as an underperforming grad student studying an application of bad machine learning models for something they are definitely not well-suited for. I think you still have a lot of valuable insights in those areas that need to be expressed, and I think your contributions are more useful there than here.

Almost

Kenny 2023-04-07

Commenting on: Do AI as science and engineering instead

Hmmm – I think what I meant by “scientific discovery” is more like ‘discovery that’s scientifically interesting’ or ‘a single interesting example with some mathematical property is also a mathematical result’.

I agree that they’re not discoveries of ‘universal’ laws or really any kind of scientific theory.

I think I’d temper what I wrote in my previous comment:

In particular, I disagree that “AI is bad” – as-is – even if there are many bad uses of it currently, and even though I agree that the field consists of way too much “spectacle”.

AI – as a scientific or intellectual field/subject – is neutral. (It glitters brightly!)

AI as the actually-existing field of human endeavors is wildly unsafe and unfriendly – very bad.

Reverse engineering (neural networks) seems like a great thing to be tempted to do!

'post-rationalism'

Kenny 2023-04-07

Commenting on: Social collapse: apocalyptic incoherence

In my mind, ‘post-rationalism’ is so vague that it doesn’t, by itself, mean much more than ‘rationality isn’t enough’. I disagree with that but I also think your own ‘meta-rationalism’ is insightful (and basically true). I just also think that ‘rationality’ can ‘naturally subsume’ its own ‘meta-ness’.

"rationalist myths"

David Chapman 2023-03-31

Commenting on: This is about you

Thanks, yes… this whole piece is probably mostly incomprehensible, and I will probably delete it. Alternatively, it needs a lot of expansion and explanation, which would kill the vibe (and take a lot of work for which I may not have time).

I know almost nothing about Stoicism. I’m generally skeptical of Buddhist ethics, but have quite limited knowledge of the Theravada version.

P.S.

Hans Kersting 2023-03-31

Commenting on: This is about you

I would actually be super interested in what you think about “virtue ethics” as taught by Theravada and Stoic philosophy. Not as stiff moralizing, but as a way to live a good/happy/liberated life.

"rationalist myths"

Hans Kersting 2023-03-31

Commenting on: This is about you

Dear David,

I think it would be great if you could expand somewhere on the following:

these are all malign rationalist myths
they make you miserable when you take them seriously

Best
Hans

concrete limits of AI

Na 2023-03-31

Commenting on: Mind-like AI

Here’s an argument making a specific claim about the limits of AI that doesn’t turn on weasel words. I’ve made a prediction market to get people to discuss it.

What do you think about it?

re: recommender algorithms vs social networks

Gary Drescher 2023-03-30

Commenting on: Who is in control of AI?

David: Thanks, good to be here! I appreciate your insights on these issues, and the clarity of your presentation. And thanks for the link; it does make a persuasive case that recommender algorithms make a big difference in what posts get widely seen by others who wouldn’t have seen them via network chains (though it’s still, as you agree, hard to quantify what effect that has on social polarization, compared to old-fashioned chains of transmission).

Pattern: Agreed, ads matter too (though in the specific case of weapons ads, I’d be unhappy with FB pushing those even if the ads were untargeted, or targeted only via opt-in user-specified categories of interest).

The letter

David Chapman 2023-03-30

Commenting on: Create a negative public image for AI

Thank you! I did sign it.

Like many others who did sign it, I don’t agree with all the details, but I think that on balance it’s probably a helpful step.

Fixed, thank you!

David Chapman 2023-03-30

Commenting on: Reviews of some major AI safety reports

Thanks, I’ve fixed the bad link!

"No evidence that..."

SusanC 2023-03-30

Commenting on: Rollerskating transsexual wombats

There’s a class of “we have no evidence that X” where, even though there is no evidence now, if X is true, abundant evidence for it will be showing up soon.

Examples:

  1. Russian invasion of Ukraine

Some journalists were doubting that Russia would invade Ukraine even a short while after the invasion had actually happened. The evidence is pretty solid at this point (OK, there are a few conspiracy theorists who still think it’s fake; they’re lunatics).

  2. Increased infectiousness of new COVID-19 variants

So, there was initially some doubt as to whether some new variants were more contagious. Where this was true, abundant evidence soon arrived.

My point is, AI risk is this type of epistemic uncertainty. If GPT-4 is actually dangerous, abundant evidence will be along shortly.

So, at some point we will be in a position where either:
A) Nothing bad has happened so far, or
B) We now have abundant evidence that AI is dangerous, because thousands/millions of Americans died fighting the last one.
(There is also a C: it was only mildly deadly.)

So, the “what do we do if…” discussion can be viewed as contingency planning for (B). I.e., if we find ourselves in a situation where millions of Americans died fighting the last AI, and Perry Metzger is still saying “I’m gonna build an AI, and you guys can’t tell me I can’t” (basically, being the Glenn Greenwald of AI risk, at that point), would the government be justified in passing a law that says: nope, you can’t do that, it’s illegal?

Bad link

42 2023-03-30

Commenting on: Reviews of some major AI safety reports

I suppose that the link for “rollerskating transsexual wombat” should go to the eponymous page, but it links back to itself instead.

Open letter to pause AI development

Ondřej Kubû 2023-03-30

Commenting on: Create a negative public image for AI

A simple thing that could help at least a little: sign this open letter.
(https://futureoflife.org/open-letter/pause-giant-ai-experiments/)

How Engagement Optimization Fails Users, Creators, and Society

David Chapman 2023-03-29

Commenting on: Who is in control of AI?

Hi Gary, nice to see you here!

Yes, the extent to which recommenders harm individuals and/or society is disputed and difficult to quantify (as I acknowledged explicitly).

This very recent long article by Arvind Narayanan, who is a careful and deep thinker, is the state-of-the-art treatment, I think: “Understanding Social Media Recommendation Algorithms.”

The “How Engagement Optimization Fails Users, Creators, and Society” section is particularly relevant.

Pattern — thank you!

Extreme ads, user control

Pattern 2023-03-29

Commenting on: Who is in control of AI?

This is old news, but:
https://www.theguardian.com/commentisfree/2021/jan/26/facebook-ads-combat-gear-rightwing-users
https://www.yahoo.com/video/facebook-rules-ban-promoting-weapons-160023779.html

Extreme ads can be an issue as well as extreme content.

That said, if people have more control over their feeds, that puts the choice about polarization (and concentration) in a different place. Block lists already exist, as do more technical tools for users (though this seems to vary by platform). Although social media feeds may be monolithic today*, having different accounts (and being careful with the subscriptions) seems like it can address this, if people want to engage with politics, but not all the time. This seems like a good thing, as does fixing the incentives from having one party with both feed control/moderation and also getting money from the ads. (Mastodon doesn’t do ads.)

*My RSS feed reader has folders, so it’s in a better spot.