TL;DR: I recommend three substacks:

- Zvi Mowshowitz’s Don’t Worry About the Vase
- Arvind Narayanan’s AI Snake Oil (coauthored with Sayash Kapoor)
- Jon Stokes at jonstokes.com
These three authors are extremely different in orientation, but all excellent.
I’ve been tempted to blog about significant new developments in AI as they happen. I don’t have time for that! But I do want to stay informed myself. Reading these three gives an excellent sense of what matters and why, with commentary from different angles as well as factual reporting.
Attitudes toward the future of AI could be described in terms of a 2x2, yielding four quadrants. Is AI going to become much more powerful soon, or not? Is AI, on balance, likely to be good, or bad? These three authors fill three of the quadrants.
Zvi provides a complete rundown of current AI news, as neutral just-the-facts reporting. He covers more detail than most people would want—I just skim it—but he highlights the most important stuff. He also includes his opinions about implications; but he is careful to be clear about what’s reporting and what’s op-ed. Zvi is in the “terrifyingly powerful too soon, and probably catastrophic” quadrant. I find I agree with nearly all his opinions.
Arvind is in the “much less powerful than hyped, but with potential for dire misuse” quadrant. He combines skeptical takes on current AI technical practice (like my Gradient Dissent) with analyses of the social impacts of current and near-future AI systems (like my Better without AI, which footnotes his work). As a Princeton academic, he combines deep knowledge and understanding with careful advocacy. I find I agree with nearly all his opinions.
Jon is in the “remarkably powerful and progressing rapidly, which is going to be awesomely good!” quadrant. He also combines extensive technical understanding with insightful analyses of the social and cultural consequences of current and near-future AI—but from the opposite side of the American culture war from Arvind. (Better without AI also footnotes his work.) I find I agree with nearly all his opinions.
It would be tidy if I could put myself in the fourth quadrant—“AI is weak and will stay that way, but it’s mildly useful and mostly harmless.” I wish I believed that, too! I’ve inched closer to that view than I was eighteen months ago, when I wrote the first version of Better without AI. We’re far from out of the woods yet, though.
I’m actually in the “I have no clue about the future of AI more than a couple years out, and I think no one else does either” non-quadrant. That’s uncomfortable—but the most realistic position, in my opinion.