The TRUMP AMERICA AI Act Is a Disaster

April 9, 2026

Senator Marsha Blackburn's TRUMP AMERICA AI Act is the most sweeping federal AI legislation proposed to date. The nearly 300-page discussion draft released last month touches on everything from children's safety to data center permitting, making it difficult to summarize neatly. However, beyond the occasional reasonable idea, the core provisions of the bill appear aimed at irreparably undermining American technological leadership, from declaring that AI training on copyrighted works is categorically not "fair use" under U.S. copyright law, to directing the Department of Energy to study the nationalization of AI labs.

We also note significant tension between this bill and the White House's own National Policy Framework for Artificial Intelligence, released on March 20. One of us previously argued that the White House framework, while incomplete in areas like company transparency, represented real progress and a solid baseline for congressional negotiations. In contrast to the administration's call for a "minimally burdensome national standard," the Blackburn bill is maximalist in scope and ambition to a fault.

Indeed, the bill is enormous, and many of its provisions (which we have summarized here) deserve their own dedicated treatment. In lieu of a comprehensive review, we thus focus on three broad areas: model evaluation, copyright, and liability.

Model Evaluation: Relitigating Old Debates

The bill recognizes that the federal government needs institutional capacity to evaluate frontier AI systems. On this point, we agree. It even codifies the Center for AI Standards and Innovation (CAISI) within NIST—a meaningful step, though one that Senators Young and Cantwell's Future of AI Innovation Act would also accomplish without the extra baggage. Regardless, putting CAISI on a solid statutory footing is one of the bill's most common-sense contributions.

What is strange, then, is that the bill simultaneously creates an entirely separate evaluation program at the Department of Energy. Under Sections 602 and 603, DOE would be required to build out an "Advanced Artificial Intelligence Evaluation Program" within just 90 days of enactment. The bill appropriates no additional funds for this effort, making it yet another unfunded mandate. Nor does the bill explain why this capability should be duplicated at DOE. CAISI, which already has relationships with frontier developers and the national labs while being embedded within NIST's standards infrastructure, is a far more natural home for model evaluations.

Even more concerning is the bill's requirement that developers provide DOE, upon request, with their underlying code, training data, model weights, and detailed architectural information as a precondition for deployment. Companies that refuse would not be allowed to deploy their models. This effectively relitigates the debate over whether CAISI and its predecessor, the U.S. AI Safety Institute, ought to have authority over pre-deployment testing and evaluation. Because NIST is primarily a standards-setting agency, the creation of CAISI settled this debate in favor of purely voluntary standards developed in consultation with industry, rather than turning the agency into a kind of "FDA for AI models." This is a healthy equilibrium, one that enables pre-deployment testing and evaluation through voluntary company partnerships, and one that lawmakers should focus on reinforcing rather than upending.

Model weights are among the most sensitive commercial assets of any industry, compressing billions of dollars in research and investment into a file that can fit on a thumb drive. They are also among the assets most sought after by our adversaries, and dangerous to let fall into the wrong hands. Frontier AI companies thus have an inherent incentive to protect their intellectual property through rigorous security protocols, and yet even they sometimes fall short. Requiring frontier developers to share model weights with the federal government merely opens up additional vectors for attack.

Like every civilian federal agency, DOE has faced significant cybersecurity challenges. Between 2010 and 2014, DOE suffered 159 successful cyber intrusions. In the SolarWinds breach of 2020, Russian intelligence compromised DOE and NNSA business networks. And in July 2025, foreign hackers breached the Kansas City National Security Campus—an NNSA facility that produces roughly 80 percent of the non-nuclear components in America's nuclear arsenal. Protecting assets of this sensitivity against nation-state adversaries is extraordinarily difficult, and even agencies tasked with managing our nuclear arsenal struggle with it. Frontier model weights would thus be among the highest-value targets in the federal government, and not something that can be disclosed casually.

The White House framework urges that “Congress should prevent the United States government from coercing technology providers, including AI providers, to ban, compel, or alter content based on partisan or ideological agendas.” Supporters of the Blackburn bill should listen. We support congressional efforts to fund and empower CAISI to conduct evaluations collaboratively with frontier developers, but mandating the transfer of sensitive trade secrets to the government is a bridge too far.

Copyright: Contradicting the White House on Fair Use

The bill's treatment of copyright may be its gravest departure from the White House framework—and from established law.

The White House framework takes a deliberately restrained position: it advises Congress not to take legislative action that would influence judicial determinations regarding fair use, acknowledging that reasonable arguments exist on both sides and that resolution should rest with the courts. The Blackburn bill does the opposite. Section 1501 amends Section 107 of the Copyright Act—the fair use provision—to declare that unauthorized use of copyrighted works for AI training "shall not constitute fair use." This would require AI developers to obtain a license for any copyrighted material in their training data, a provision that is flatly incompatible with both the administration's stated position and a thriving American AI industry more generally.

The bill also introduces a new standard for output infringement that represents a dramatic departure from existing law. Under current doctrine, copyright infringement requires a showing of "substantial similarity" between the allegedly infringing work and the original. Section 1501(4) replaces this, for AI-generated content, with the far broader standard of "derives from": any AI output that "reproduces or derives from copyrighted works constitutes infringement." The term "derives from" is not defined anywhere in the bill, and its breadth is difficult to overstate. Every output of a language model derives from its training data in some sense, just as every human work derives from the works its author read before. The substantial similarity standard exists precisely because all creative output builds on what came before; it correctly separates actionable infringement from mere inspiration. Replacing "substantial similarity" with a "derives from" standard would invert America's historical approach to copyright and render virtually every major AI model on the market illegal.

Finally, the bill creates a subpoena mechanism for copyright holders that goes even further than the widely criticized DMCA Section 512(h) process. Some context here is helpful. In ordinary civil litigation, once a lawsuit is underway, attorneys can issue subpoenas without a judge; a judge only becomes involved if the recipient moves to quash. The DMCA introduced an additional pathway to subpoena power, allowing copyright holders to obtain subpoenas to identify alleged infringers without filing a lawsuit at all—a mechanism that the Electronic Frontier Foundation and others have documented as a significant vector for harassment and abuse. Section 1302 of the Blackburn bill goes further still: rather than merely seeking the identity of an anonymous infringer, these subpoenas would compel disclosure of substantive, proprietary training data records. The threshold is a "subjective good faith belief" that one's copyrighted works were used in training—a standard that is effectively self-certifying. The potential this creates for abuse at scale should be obvious.

Liability: Some Good Mechanisms and a Strange One

The bill's liability provisions are a mixed bag, containing some of the draft's most sensible ideas alongside some of its most unusual structural choices.

On the positive side, Sections 741 through 743 require foreign AI developers to designate a registered agent in the United States for service of process before deploying any AI product here. This is a straightforward and important provision. Under current law, suing a foreign AI company with no U.S. presence—a Chinese developer that simply makes a model available on the open internet, for example—requires international service of process under the Hague Convention, a procedure that is especially slow with companies based in China. The registered agent requirement closes this gap. It also levels the playing field: there is no reason foreign AI companies should be able to serve American users while avoiding the legal accountability that American companies face.

Where the liability framework becomes unusual is in its broader structure. Product liability claims are traditionally governed by state tort law. These cases sometimes end up in federal court when the parties are from different states (diversity jurisdiction) or when a federal statute or constitutional question is implicated, but the federal court still applies underlying state law. Ever since Erie Railroad Co. v. Tompkins in 1938, it has been the case that “[t]here is no federal general common law.” The Blackburn bill's creation of a freestanding federal cause of action for AI product liability is a departure from the American legal tradition.

Looking Ahead

The TRUMP AMERICA AI Act is a discussion draft, and we appreciate the opportunity to provide our input. The bill contains a handful of genuine contributions, such as codifying CAISI and requiring foreign developer registration. But it also contains provisions that are in significant tension with the White House's own framework and that, if enacted, would pose nothing short of an existential threat to American leadership in AI.

This analysis has focused on model evaluation, copyright, and liability. There are many additional provisions that warrant close examination. The bill's age verification requirements, for example, address an important goal but could be designed with stronger privacy protections, rather than requiring Americans to upload sensitive identification documents to private companies. Nevertheless, dwelling on these and other ancillary provisions would distract from the big-picture ways in which the bill is simply a non-starter. From its revocation of Section 230 and rejection of fair use for AI training data, to its flirtations with AI company nationalization and its creation of new private rights of action, the TRUMP AMERICA AI Act would cause incalculable harm to America's AI ecosystem and innovative capacity more broadly.
