
A core problem with the thing we today call “artificial intelligence policy” is that different people mean radically different things by the phrase “artificial intelligence.” I mean “radically” in its literal sense: there are some who believe that AI systems capable of end-to-end automation of all human cognitive labor are coming soon, and there are others who assert, explicitly or implicitly, that this is impossible, or who otherwise refuse to take seriously the idea that such technology might be built.
Both groups are basically bullish: the latter group believes in “really good LLMs,” and the former group believes in “AGI,” “transformative AI,” “powerful AI,” and various other terms used to refer to future AI systems.
This creates a policy planning problem. Suppose that today’s large language models, and for the most part even tomorrow’s, do not give you much concern from a regulatory perspective, or that whatever policy challenges they pose can be solved by demanding but ultimately achievable adaptations of existing legal frameworks and institutions. This is, basically, my view of the current frontier of LLMs.