
The Trump administration is weighing the creation of a “review system” for frontier AI models. According to the New York Times, under this proposed approach, AI labs would give the federal government “first access” to “get ahead” of models with significant cyber capabilities, presumably models such as Anthropic’s Mythos. It’s unclear what legal authority would allow the president to accomplish these goals: specifically, mandating that labs undergo a vetting process and then sharing essential information about countering any detected risks with other parts of the government.
However, existing authorities would allow for a voluntary “kick the tires” testing period. Labs could opt to share models with materially new capabilities with the Center for AI Standards and Innovation (CAISI), which is housed within the Commerce Department’s National Institute of Standards and Technology; the director of the Cybersecurity and Infrastructure Security Agency (CISA) could then fund an effort to help a broad set of actors, including local, state, and federal entities as well as other public and private organizations, take any necessary cybersecurity precautions. This would help labs avoid public backlash for knowingly releasing models that may threaten critical systems and public well-being, and it might forestall more onerous, formal requirements.