Why is "general" AI scarier?
What makes “general” AI scary to most people is that we intuitively contrast it with AI that follows a goal given to it by humans and operates under human control. This intuition breaks down under the analysis on the previous page: AI doesn’t need human-level (or greater) intelligence, or the ability to set its own goals, to be dangerous.