Comments on “Fear power, not intelligence”

The concerning superpowers have, in fact, been specified

Robert 2023-02-22

The AI risks literature generally takes for granted that superintelligence will produce superpowers, but which powers these would be, and how they would work, are rarely examined, and never in detail.

I don’t know what level of detail you expect, but the go-to example of a concerning superpower is nanotechnology, which has the twin properties of:
1. an existence proof demonstrating its feasibility (biological life already runs on self-replicating molecular machinery)
2. being obviously sufficient to cause human extinction

Maybe this is rarely described in papers put up on arXiv, but Eliezer is hardly shy about explaining it, and Nate also describes it in detail here.