New evidence that AI predictions are meaningless

Everyone’s opinions about the future of AI seem astonishingly overconfident. Everyone from hairdressers to famous AI researchers insists that they know what is going to happen. Yet when you probe why… there’s nearly nothing.

What?? Why so much conviction, based on nothing?!

In the past, nearly everyone has been consistently wrong, both when predicting that AI research would make rapid progress and when predicting that it would make nearly no progress. You would think this would make people—informed people, at least—more cautious about insisting that they know. But no.

A cool new study from the Forecasting Research Institute explores this mystery empirically. It’s summarized clearly in a tweet thread, with a link to the full report at the end.

This seems to me an innovative, well-designed, and well-executed experiment on an important question. Kudos!

To summarize the summary, the experimenters convened a group of eleven “superforecasters,” who think an AI catastrophe in this century is essentially impossible (average probability estimate 0.1%), and eleven AI experts who think it’s likely enough for grave concern (average probability estimate 20%). The experiment had them engage in extensive conversation, specifically to try to figure out why they disagree. (This is an “adversarial collaboration.”)

The experiment was a roaring success, in the sense that it confirmed that both groups were almost perfectly immune to evidence or arguments against their beliefs. Despite an average of more than a full-time week’s worth of discussion per participant, opinions barely changed at all.

The experiment was a total failure, in the sense that you would think, and hope, that this process would at least leave participants slightly less confident. It didn’t. The experiment also tested various plausible hypotheses for why this happened (e.g., that participants didn’t understand the other group’s arguments), none of which turned out to explain the anomaly.

Particularly striking, in terms of overconfidence, was a secondary question: when will AI take over? The skeptical superforecasters agreed that this is reasonably likely to happen in the next thousand years (average probability 30%), although almost certainly not in the next 75 (0.1%). When asked to put a date on it, they estimated on average the year 2450.
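
To make those numbers concrete, here’s a quick back-of-the-envelope sketch. The arithmetic is mine, not the report’s, and the constant-annual-hazard assumption is purely for illustration; nobody in the study claimed their numbers follow that model.

```python
# Sketch: what constant annual probability h of "AI takeover" would match
# the superforecasters' "30% within 1,000 years"?
# (The constant-rate assumption is mine, for illustration only.)

p_1000 = 0.30   # their average probability of takeover within 1,000 years
years = 1000

# Solve 1 - (1 - h)**years = p_1000 for h:
h = 1 - (1 - p_1000) ** (1 / years)
print(f"Implied constant annual hazard: {h:.3%}")           # ~0.036% per year

# What that same constant rate would imply over the next 75 years:
p_75 = 1 - (1 - h) ** 75
print(f"Implied probability within 75 years: {p_75:.2%}")   # ~2.6%
```

Of course, their 0.1% estimate for 75 years and the 2450 average don’t have to follow a constant-rate model; the sketch is just to put the figures on a common scale.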

So, they were willing to make a prediction four hundred years into the future. I’m not willing to predict AI more than two years out, and even then only with very low confidence. Making any prediction for centuries from now seems to me completely insane.

I find this baffling and disheartening. What on earth is going on? The FRI report suggests that “social and personality factors” determine opinions, rather than arguments and evidence. (I’d say “feels,” informally.) This seems like it has to be correct, by process of elimination. It’s definitely worth further experimental investigation.

Something I’ve long observed is that AI is a topic on which everyone (including hairdressers and famous AI researchers) has always felt entitled to a strong opinion, based on nothing. The FRI finding is not directionally surprising to me, although I am surprised by the magnitude.

Sometimes the best way to solve a problem is to look at a more general version. Maybe that could help in this case.

There are other topics like this, on which people feel entitled to absolute confidence, based on nothing. Two examples are the interpretation of quantum mechanics and nutrition.

What do these examples have in common with opinions about future AI? Why these particular topics?

I have guesses… but what do you think? Leave a comment!