Calibrating Openness in AI

September 12, 2025

Executive Summary

“Open” in AI has shifted from ambiguity to concrete practice and active policymaking. This paper calibrates openness along three axes: materials (what is published), permissions (what others may do), and production context (how development and maintenance are organized). It distinguishes three baselines: publicly available weights, open weights, and open-source AI. We use this framework to read ongoing debates and policy approaches in the EU, United States, and China, highlighting how each accommodates or contests different forms of openness. Because open models depend most directly on broad access to training data, copyright and text-and-data-mining rules are especially determinative. The policy question is not whether AI should be “open” in the abstract, but how different coordinates of openness advance outcomes that matter: innovation, diffusion, and safety. An open-weights baseline drives diffusion; lightweight documentation bundles enable audit; and public programs can underwrite full-stack releases for scientific purposes. Properly calibrated, openness is positive sum: it builds talent, speeds up diffusion, and strengthens global capacity.
