
FAI’s scholars have been among the most widely cited voices in the ongoing dispute between Anthropic and the Department of War over the terms of use for Claude in classified military systems.
The Pentagon has given Anthropic a deadline of close of business today to accept an ultimatum requiring the company to agree to the military’s right to use its technology in all lawful cases. If Anthropic declines, the Pentagon has threatened two remedies: invoking the Defense Production Act to force the company to provision Claude on the DoW’s terms, or designating Anthropic a “supply-chain risk,” which would force other government contractors to strip Anthropic’s products from their operations. Anthropic CEO Dario Amodei declined to accept the ultimatum in a February 26 statement, writing:
In a narrow set of cases, we believe AI can undermine, rather than defend, democratic values. Some uses are also simply outside the bounds of what today’s technology can safely and reliably do.
Below, we round up FAI expert commentary on the dispute and situate it within the broader context of maintaining U.S. frontier AI leadership.
Senior Fellow Dean Ball—who previously served as a senior AI policy advisor in the Trump White House and helped draft the President's AI Action Plan—has been sharply critical of the Pentagon's negotiating posture, calling it "incoherent" and the threat to simultaneously pursue both remedies "a whole different level of insane."
Key arguments:
- The two threatened penalties are contradictory. As Ball put it: "How can one policy option be 'supply-chain risk' (usually used on foreign adversaries) and the other be DPA (emergency commandeering of critical assets)?" The supply-chain risk label treats Anthropic as a threat to be expelled; the DPA treats Claude as an asset too vital to lose. Both cannot be true. (X post; cited in Fortune, TechCrunch, Politico)
- The supply-chain risk designation could be existential for Anthropic and the broader U.S. AI industry. Ball has warned that activating this power "would cost Anthropic a lot of business—potentially quite a lot—and give investors huge skepticism about whether the company is worth funding for the next round of scaling." This would increase regulatory risk for the industry at large and could significantly chill investment into U.S. frontier labs. (X post, cited in Reason, Lawfare, Business Insider, The Atlantic)
- Invoking the DPA would amount to "the quasi-nationalization of a frontier lab." Ball described the scenario in which the government strips all guardrails from Claude via executive fiat as fundamentally threatening to any company doing business with the government. "It would basically be the government saying, 'If you disagree with us politically, we're going to try to put you out of business.'" (TechCrunch, Washington Post)
- This amounts to the government overriding a contract it voluntarily signed. Ball has emphasized that the existing contract between Anthropic and the DoW was signed by both parties and already includes the two usage restrictions. The Pentagon's threat to use the DPA or a supply-chain designation to undo those terms isn't a national security necessity; it's the government strong-arming a private company out of contractual provisions it freely agreed to. As Ball wrote: "these are the strictest regulations of AI being considered by any government on Earth, and it all comes from an administration that bills itself (and legitimately has been) deeply anti-AI-regulation." (X thread)
- The DoD has no backup plan. Ball noted that Anthropic is the only frontier AI model deployed on classified networks and that the Pentagon appears to be falling short of the Biden-era NSM directive to avoid single-vendor dependence. "The DOD has no backups. This is a single-vendor situation here. They can't fix that overnight." (TechCrunch)
Chief Economist Sam Hammond has framed the dispute in terms of the emerging dynamics of state control over frontier AI, a subject he has written about extensively.
Key arguments:
- Invoking the DPA would constitute a "soft nationalization." Hammond characterized the scenario of the government forcing Anthropic to produce a version of Claude stripped of all safety guardrails as a form of soft nationalization—an unprecedented assertion of state power over a private AI company's product. (Cited in The Atlantic)
- The dispute fundamentally comes down to mutual distrust. As Hammond told The Atlantic: "What this really boils down to is a lack of trust on Anthropic's part that the Pentagon will always use their technology appropriately, and a lack of trust on the part of the Pentagon that Anthropic will let them use their technology in all relevant use cases."
- The two penalties are mutually incoherent. Echoing Ball, Hammond has noted that Claude cannot simultaneously be so vital to national security that its control must be wrested away from its creator and also such a risk that it must be banished from the military-industrial complex entirely.
In short, invoking the Defense Production Act or issuing a supply-chain risk designation against Anthropic would have broad ripple effects that threaten U.S. leadership on frontier AI. Other industry leaders, including OpenAI’s Sam Altman, share our concerns. We urge the Department of War and Anthropic to find a way to preserve the conditions that keep the United States at the frontier of AI development.