
A deviously simple consensus emerged following the debate over whether to impose a moratorium on state AI laws: no federal AI preemption without a corresponding national framework. We say “deviously simple” because that still left the scope and substance of a national framework up in the air. Fortunately, with the recent release of the White House’s National AI Legislative Framework, the contours of a viable AI preemption law are finally taking shape, adding meat to the bones of the “four Cs”—child safety, communities, creators, and censorship—that AI Czar David Sacks foreshadowed last year.
On child safety, the framework calls for establishing reasonable age assurance requirements for AI platforms accessible to minors while preserving states’ rights over generally applicable child safety laws. This gets the balance of state and federal responsibilities right. Carving out a state role for AI laws aimed at protecting children was a core conservative demand, yet many such laws—such as age verification and age gating—are among the most challenging to implement on a state-by-state basis. Pushing Congress to establish a national approach to age verification thus has the potential to level up and standardize AI parental controls nationwide, thereby simultaneously simplifying compliance and enforcement. Importantly, the framework also calls for parallel protections on kids’ data for purposes of AI model training and advertising—another area where state-level legislation would likely prove inadequate given the inherently interstate nature of the internet.
On strengthening communities, the framework builds on the Ratepayer Protection Pledge by calling on Congress to streamline permitting for AI infrastructure and behind-the-meter power generation. Making it easier for large data center projects to generate their own energy on site helps reduce strain on the electrical grid, ensuring major AI projects don’t raise utility costs for commercial or residential customers. In some cases, flexible data center projects may even help lower energy costs for local ratepayers by smoothing out grid demand and financing upgrades to transmission infrastructure.
The communities section also includes a call on Congress to ensure “the appropriate agencies within the national security enterprise possess sufficient technical capacity to understand frontier AI model capabilities” and to “establish plans to mitigate potential concerns, including through consultation with frontier AI model developers.” This is an important reminder that the continued acceleration in the power of AI systems poses bona fide risks to U.S. national security, particularly in areas like autonomous cyber and bio. The federal government is the best actor for addressing AI-enabled threats to national security, but only if Congress authorizes and appropriates the resources necessary to attract and retain technical expertise in organizations such as the Center for AI Standards and Innovation—the federal government’s primary in-house capacity for evaluating AI models and developing AI security standards.
Regarding creators, the framework reiterates the view of the courts and most copyright experts that training AI models on copyrighted data is fair use and does not, by itself, constitute infringement. The fair use doctrine is perhaps America’s single greatest institutional advantage in AI, enabling U.S. companies to collect and train on the sorts of massive datasets the deep learning revolution requires. One need only look to the dismal AI ecosystem in Europe and other regions with restrictive copyright regimes to appreciate how sensitive AI leadership is to the legal treatment of training data. At the same time, the framework calls on Congress to establish a system for rights holders to negotiate collective licensing deals, improving creator compensation and streamlining rights negotiations. Nor does the fair use status of AI training extend to all AI outputs, particularly outputs that attempt to replicate someone’s likeness or intellectual property. Given the newfound ease with which AI lets anyone clone someone’s voice or likeness, including for fraudulent purposes, Congress has an important role in protecting Americans from malicious deepfakes.
On censorship, the framework calls on Congress to “prevent the United States government from coercing technology providers, including AI providers, to ban, compel, or alter content based on partisan or ideological agendas.” As AI systems grow more sophisticated, legislative protections against domestic mass surveillance and censorship will only grow more urgent. Yet this is also an area where the administration should heed its own advice. Indeed, there is some irony in calling on Congress to prevent the government from “coercing technology providers” based on “partisan or ideological agendas” the very week Anthropic and the Department of War are in court to debate the most egregious example of government jawboning in recent memory.
Next, on innovation and regulation, the framework calls for a sector-specific approach, including the creation of regulatory sandboxes to enable new, AI-enabled business models and applications. As we recently wrote in the context of Utah’s approval of an AI service for prescription renewals, regulatory sandboxes will be essential to realizing AI’s full upside in regulated sectors like health care, law, and financial services. Existing laws and regulations in these areas are often simply incompatible with AI-based solutions, having failed to foresee the recent progress in model autonomy. Whack-a-mole approaches to reform are also likely too slow and ad hoc relative to a sandbox approach, which can provide comprehensive regulatory relief by working backwards from the services made newly possible by AI.
Lastly, the framework gestures toward supporting the American workforce through apprenticeships and AI training programs. This is perhaps the weakest section of the framework, but also an area where more actionable proposals are genuinely hard to come by. There is simply enormous uncertainty about the scale and scope of AI’s near-term impact on the labor force. Moreover, the track record of U.S. employment and training programs is quite poor, and such programs are simply not geared to supporting white-collar knowledge workers. We could at least stand to improve labor market data collection, helping close the uncertainty gap through real-time indicators of AI’s effects on labor market dynamics.
More notable is what the framework fails to mention altogether. There are no calls for transparency or safety testing requirements on frontier AI companies, for instance, nor protections for industry whistleblowers. These are common features of AI laws already enacted in states like California and New York, and with other states introducing similar laws this legislative session, Congress would be wise to standardize frontier AI transparency and disclosure obligations at the national level sooner rather than later.
All in all, the White House’s National AI Legislative Framework is a major step forward in understanding the administration’s vision for federal preemption. It is still missing several essential elements of a viable national standard, and the hard part will be translating its contents into legislative language that can earn the necessary votes. Nevertheless, as an anchor for negotiations, it represents an eminently reasonable baseline for Congress to build and improve upon.