This piece originally appeared at Second Best.
Where is this all heading?
You hear this question more and more these days, whether in the context of AI or technological capitalism more generally. What is the end game, our final destiny, the point of it all? Does human civilization just keep growing and expanding indefinitely, or do we live in a vulnerable world that’s teetering on the edge of extinction? Or maybe it’s a bit of both, as we invariably hand off civilization to intelligent machines that go on colonizing the universe without us?
According to the Effective Accelerationist or e/acc worldview — the movement of rationalists and transhumanists who favor accelerating technology for its own sake — we may not have much choice in the matter. As the founders of e/acc, Beff Jezos and Bayeslord, explain in their Notes on e/acc principles and tenets, life emerged “from an out-of-equilibrium thermodynamic process known as dissipative adaptation” in which configurations of matter for converting free energy into entropy are favored over time. This same principle reappears at multiple scales, from the earliest biological replicators to the evolution of intelligent agents that model the future. Even capitalism can be thought of as a “meta-organism” for aligning individuals “towards the maintenance and growth of civilization” as a whole. The advent of superhuman AI is thus a thermodynamic inevitability — an attractor that any sufficiently advanced civilization is pulled towards by a series of positive feedback loops. We can either choose to accept this as the universe’s true purpose and accelerate the creation of our successor species, or we can attempt to freeze technology in amber and guarantee civilization’s collapse. In short, expand or die.
While e/acc has a growing number of online adherents, it’s not clear how many are true believers. For most, e/acc seems to be a declaration of techno-optimism — that AI will be a tool for humanity rather than the other way around. Yet in the true accelerationist analysis, human wants and preferences are already subordinated to the goals of the techno-capitalist meta-organism, making rank-and-file e/accs mere hosts to a memetic ideology. Why should an economy of superintelligences be any different? If superintelligent AIs inadvertently kill off humanity in the process of building a Dyson Sphere to power trillions of self-replicating robot automata, so much the better! Humans would just get in the way of harnessing all that irresistible negentropy.
Continue reading at Second Best.