The inescapable: politics

I hate thinking about politics as much as the next autistic tech geek does, but no realistic approach to future AI can avoid questions of power and social organization.

I’ve suggested that most people like much the same things; yet we are diverse, and what we want does differ somewhat. Some conflict is inevitable. Despite that, functional politics steers, unsteadily, toward futures more people would like.1

Since preferences are diverse, a main feature of a decent future is that power is adequately distributed. I’ve emphasized that the danger in AI is that it can create (and already has created) large new pools of power, faster than countervailing checks and balances can form. In the most extreme scenario, it creates infinite power instantaneously. A one-bit scenario is purely political: the only possible way forward is to organize society and culture so that no one can, or chooses to, destroy the world. Short of that, consider this, from Samuel Bowman:

Pretty much every bad outcome we’re seeing from present-day [AI text generators] could get a lot bigger and a lot worse. In particular, this is enough to get fine-grained surveillance and personalized persuasion to really work: Human-like cognitive abilities—plus cheap compute—would make it possible to deploy the equivalent of one censor or political strategist or intelligence service agent for every citizen in a country. It’s easy to imagine ways that even a clumsy implementation of something like this could lead to the rise of new stable totalitarian states.2

Checking the power of actually-existing AI can serve as practice, and a testbed, for countering the power of future AI. Mooglebook’s AI already reads your emails and uses what it finds there to decide which political propaganda to show you. Do you want that? Are you on board with a movement to make it stop?

If we care about the future, we can’t avoid dealing with power, and that means not just “thinking about it” but wielding it. Those of us who work in AI have more power than we realize, and with power comes responsibility.

  1. I’ve deliberately framed the question as “what sorts of futures would we like” rather than “what sort of future do we want.” We’re not going to get the future “we” want, because wanting is nebulous, diverse, and context-dependent.
  2. “Why I Think More NLP Researchers Should Engage with AI Safety Concerns,” NYU Alignment Research Group, October 6, 2022.