
This piece was originally published in The National Interest.
Last November, OpenAI unleashed ChatGPT, its new artificial intelligence (AI)-powered chatbot, on the world. Mere months before, conversations about AI were relegated to academic conferences and science fiction conventions. But as ChatGPT exploded to become the fastest-growing consumer application in history, AI rapidly became a kitchen-table issue. Now, policymakers are shining a spotlight on the industry and asking the question: how much regulation is necessary to mitigate potential risks without stifling innovation?
From government reports to briefings, hearings, and legislation, AI is the topic du jour on Capitol Hill as lawmakers attempt to answer this question. While legislative proposals regarding AI vary widely, the ethos behind them generally falls into two categories. The first consists of proposals aimed primarily at mitigating the potential risks of AI, which typically take a more heavy-handed approach to regulation in the name of consumer protection. The second takes a broader view of the AI ecosystem, attempting to foster innovation and global competitiveness with a more light-touch regulatory regime.
While both approaches are well-intentioned, the second, with its focus on innovation and competitiveness, holds greater promise. After all, the United States is not the only country developing AI systems, and amid the Great Tech Rivalry it is essential that we remain globally competitive in cutting-edge technologies like AI. If Washington is too heavy-handed in regulating AI, it risks becoming another innovation desert, just like Europe.
The heavy-handed approach to AI regulation is typified by Rep. Ted Lieu (D-CA). One of the very few members of Congress with a degree in computer science, Rep. Lieu has been among the most vocal lawmakers on the question of AI regulation. Just before introducing the first piece of federal legislation written by a large language model, Rep. Lieu opined in the New York Times:
The rapid advancements in AI technology have made it clear that the time to act is now to ensure that AI is used in ways that are safe, ethical and beneficial for society. Failure to do so could lead to a future where the risks of AI far outweigh its benefits…. What we need is a dedicated agency to regulate A.I.
Though Rep. Lieu admits that his proposal has little chance of actually passing through Congress this session—and he concedes that the first step toward an AI regulator is a “study and report” approach—Lieu and many of his colleagues are hyperfocused on heading off consumer harms that largely remain theoretical. Such an approach seeks to create a regulatory regime based on what these technologies “could” or “might” do in the future.
This prospective framework for thinking about AI regulation is antithetical to rapid innovation. For evidence, we need only look to Europe.
Brussels has a long tradition of onerously regulating technologies in the name of mitigating risks to consumers. Take the European Union’s comprehensive data privacy framework, the General Data Protection Regulation (GDPR), for instance. The GDPR has three primary objectives: protecting consumers with regard to the processing of personal data, protecting the right to the protection of personal data, and ensuring the free movement of personal data within the Union. To differing degrees, the GDPR arguably succeeded at the first two of these goals: it created strong consumer protections around the collection and processing of personal data.
However, the GDPR has mostly failed to achieve the goal of ensuring the free movement of data. In no small part because data can flow seamlessly across physical borders, tech platforms and applications have had a difficult time complying with the GDPR, which, in turn, has restricted the voluntary, free flow of personal information rather than ensured it.
According to one study that examined over 4 million software applications, the implementation of the GDPR “induced the exit of about a third of available apps.” Perhaps even worse, the GDPR has led to a dearth of technological innovation throughout Europe: that same study found that market entry of new applications was halved following the regulation’s implementation.
Now, the European Parliament is developing what it intends to be “the world’s first comprehensive AI law.” While the EU’s AI Act is not a one-size-fits-all policy akin to the GDPR and other European tech regulations, it will create strict rules for any system utilizing AI technology. Such strict rules around new applications of AI systems, imposed regardless of concrete, provable harms, are likely to strangle what little commercial innovation around AI remains in Europe.
The United States cannot afford to follow in Europe’s footsteps and implement heavy-handed regulations that might hamper innovation for the sake of mitigating unproven harms. With China leading the way in both AI innovation and AI regulation, we must be intentional in our approach to both innovation and regulation. AI systems certainly present novel and unique risks in practically every aspect of human life. But these new technologies also present novel and unique opportunities that should not be handicapped by heavy-handed regulation driven by moral panic.
As two of my colleagues recently wrote in American Affairs, getting AI regulation right “requires a commonsense approach that can account for both the mind-bending dynamics of cutting edge AI systems, while also right-sizing the risks of AI regulations and AI gone wrong.” While Rep. Ted Lieu and his colleagues in the “sky is falling” camp of AI regulation veer too far toward onerous European-style tech regulation, another camp recognizes the importance of light-touch regulation for supporting domestic innovation and global competitiveness.
A prime example of this approach is recently introduced legislation from Sens. Michael Bennet (D-CO), Mark Warner (D-VA), and Todd Young (R-IN). Based on the American Technology Leadership Act from the last Congress, the revised proposal would establish a new Office of Global Competition Analysis, tasked with assessing America’s global competitiveness in strategic technologies and recommending policies to protect and improve it. As Sen. Bennet stated, the goal of the legislation is to ensure the United States does not “lose our competitive edge in strategic technologies like semiconductors, quantum computing, and artificial intelligence to competitors like China.”
This second camp, typified by Sen. Bennet and his colleagues, approaches AI regulation from a less reactive and more constructive perspective. It takes global competition seriously and recognizes that caution is necessary to avoid replicating the EU's heavy-handed regulations, which have hindered innovation and hampered Europe's ability to keep pace with AI advancements. To be clear, these lawmakers are not ignoring the real risks presented by AI systems. Rather, they are putting those risks into global perspective and making a more well-informed calculus about the proper level of regulation.
By fostering an environment that encourages both domestic and global competition in AI technologies, and by providing a regulatory framework that promotes responsible AI use, the United States can maintain its global leadership in this crucial field. Light-touch regulation focused on global competitiveness can encourage investment, attract top AI talent, and enable American companies to lead in AI development. And by leaving room for experimentation and adaptability, the United States can remain at the forefront of AI innovation, delivering economic and societal benefits while maintaining a competitive edge on the global stage.