Comments on “Limits to experimental induction”


Usefulness of "intelligence"

S 2023-02-15

Okay, I think this partially responds to my comment on the previous post: there you seemed to say that simulation is limited, and I said “simulations are not in a different realm than experiments”; but here you also say that the value of experiment itself is limited.

Again, wonderful points, individually. But they don’t add up to “not much reason to think it would be useful”!

If “seriality” is the new bottleneck (where previously it was experiment), why does it seem unlikely for very different architectures and methodologies to have orders of magnitude differences in speed?

I think it’s easiest to see this by just imagining a “lesser” primate trying to figure out whether slightly more advanced brains would help along certain dimensions. Your post then seems to prove too much. That’s a pretty standard argument, so I imagine you already have a response, maybe even in the next post :)

(One possibility is that you are, with these posts, in part trying to send the message “hey, (artificial) intelligence is not useful” because you’re worried about global catastrophes, and don’t want people to be working on capabilities. Catastrophe certainly undermines any “usefulness”. But that makes it hard to tell where I should take you literally.)

Lab automation

Mike Travers 2023-03-01

BTW, this whole chapter or section on improving science is very interesting and important, but also doesn’t seem to be very closely related to the dangers of AI. It almost feels like it wants to be its own book (as if you don’t have enough of those already).

This page seems to conflate two things: AI for science, and the automation of routine laboratory operations. The first is problematic for the reasons you give; the second seems quite doable and is in fact being done, at a scale that is actually interesting because it makes more things possible and makes more data available. Sequencers and other next-gen bio instrumentation are essentially doing this already. The impact on actual science and knowledge remains to be seen (bio has more data than it knows what to do with), but this is an area where better non-AI computational techniques can help a lot, I think.

Science FTW

David Chapman 2023-03-04

doesn’t seem to be very closely related to the dangers of AI

It’s not… but it addresses “but AI will cure old age, sickness, and death, so it’s worth taking the risks.” That’s the standard response to “AI is bad and we should stop it.”

it wants to be its own book

Yup. But it’s summarizing the work of other people who can do a better job. The piece by Michael Nielsen and Kanjun Qiu, for example; that’s the length of a short book, and they’re better-informed than me.

doable and is in fact being done, at a scale that is actually interesting because it makes more things possible and makes more data available. Sequencers and other next-gen bio instrumentation are essentially doing this already. The impact on actual science and knowledge remains to be seen (bio has more data than it knows what to do with), but this is an area where better non-AI computational techniques can help a lot, I think

I thought I said that? But maybe not clearly enough. I’m planning another overall round of revision based on feedback, and I’ve made a note to make this more explicit when I do.
