
Anthropic, a frontier AI company, revealed recently that it had trained a large language model called Mythos. Unlike almost every other such announcement from an AI firm, this wasn’t a product release. Anthropic—a developer known for its focus on safety—claims the model is too dangerous to be made generally available because of a significant leap in its hacking abilities. Mythos, the firm says, could knock out the software systems on which power plants, banks and even armed forces rely every day. Simply to release it, then, would be irresponsible, at least in Anthropic’s telling.
Some have cast doubt on whether the model is really so dangerous. But the firms that joined the company’s Project Glasswing, an effort to patch security vulnerabilities that Mythos identified in the world’s most important software, seem to think the threat is real; they include JPMorgan Chase, Apple and Microsoft. So, too, does America’s government: following the Mythos announcement, Federal Reserve Chair Jay Powell and Treasury Secretary Scott Bessent convened an urgent gathering of bank leaders to discuss its potential impact on financial stability.