Comments on “Limits to experimental induction”
Lab automation
BTW, this whole chapter or section on improving science is very interesting and important, but also doesn’t seem to be very closely related to the dangers of AI. It almost feels like it wants to be its own book (as if you don’t have enough of those already).
This page seems to conflate two things: there’s AI for science, and there’s automating routine laboratory operations. The first is problematic for the reasons you give; the second seems quite doable and is in fact being done, at a scale that is actually interesting, because it makes more things possible and makes more data available. Sequencers and other next-gen bio instrumentation are essentially doing this already. The impact on actual science and knowledge remains to be seen; bio has more data than it knows what to do with. But this is an area where, I think, better non-AI computational techniques can help a lot.
Usefulness of "intelligence"
Okay, I think this partially responds to my comment on the previous post: there you seemed to say that simulation is limited, and I replied that “simulations are not in a different realm than experiments”; but here you also say that the value of experiment itself is limited.
Again, wonderful points, individually. But they don’t add up to “not much reason to think it would be useful”!
If “seriality” is the new bottleneck (where previously it was experiment), why does it seem unlikely that very different architectures and methodologies could have orders-of-magnitude differences in speed?
I think it’s easiest to see this by just imagining a “lesser” primate trying to figure out whether slightly more advanced brains would help along certain dimensions. Your post then seems to prove too much. That’s a pretty standard argument, so I imagine you already have a response, maybe even in the next post :)
(One possibility is that you are, with these posts, in part trying to send the message “hey, (artificial) intelligence is not useful” because you’re worried about global catastrophes, and don’t want people to be working on capabilities. Catastrophe certainly undermines any “usefulness”. But that makes it hard to tell where I should take you literally.)