This piece originally appeared on Second Best.
AI safety means different things to different people, but whether the focus is job loss or the existential risk from an unaligned superintelligence, the concerns are almost always framed as first-order. That is, AI safety is usually conceptualized in terms of what AI will do directly, rather than in terms of AI’s likely indirect, second-order effects on society and the shape of our institutions. This is an enormous blind spot.
Circa the early 2000s, “internet safety” discussions revolved around first-order issues like identity theft, cybercrime, and child exploitation. But with the benefit of hindsight, these direct concerns were swamped by the internet’s second-order effects on our politics and culture. Indeed, between an information tsunami and new platforms for mass mobilization, the internet destabilized political systems worldwide, even producing outright regime change in the case of the Arab Spring.
To the extent that AI is simply the next stage in the digital revolution, I expect these trends only to intensify. The issue is not that AI and information technology are inherently destabilizing. Rather, to put it in slightly Marxian terms, the issue is that society’s technological base is shifting faster than its institutional superstructure can keep up. Populist leaders who promise to root out corruption and reset the system are a symptom of governance structures that have in some sense lost their “direction of fit,” like clothes that shrank in the wash or a species outside its evolutionary niche.