Letters and Testimony

Developing a National AI Strategy

July 8, 2023

Today, I submitted a comment to the Office of Science and Technology Policy in response to the Office's request for information toward the development of a national AI strategy. Click here to download a pdf of the comment.

Thank you for the opportunity to respond to the Office of Science and Technology Policy (OSTP) request for information toward the development of a National Artificial Intelligence (AI) Strategy. My name is Samuel Hammond, and I am a senior economist at the Foundation for American Innovation (FAI), a nonprofit dedicated to developing technology, talent, and ideas that support a better, freer, and more abundant future.

My work at FAI focuses on the governance of emerging technologies, particularly as it relates to AI. In recent months, I have published on the need for a proactive approach to AI safety and accountability in outlets ranging from Politico Magazine to the journal American Affairs, as well as on my personal website.

The comments herein build on my submission to the National Telecommunications and Information Administration's (NTIA) Request for Comment on AI Accountability Policy, in which I recommend that:

  • The regulatory definition of “AI system” should be refined to distinguish “narrow AI,” including conventional forms of rules-based automation, from generally intelligent AI systems with sui generis capabilities, such as common sense reasoning.
  • Regulatory frameworks related to model evaluation should recognize how bias and trustworthiness are operationalized differently in narrow and general AI systems, respectively. For example, language models may need to be capable of bias in order to understand how to avoid bias through appropriate prompting – the so-called “Waluigi Effect.”
  • The most stringent forms of regulation (licensing, pre-registered training runs, etc.) should prioritize systems that pose quantifiable risks to human life or wellbeing so as not to impede the beneficial uses of current AI systems.
  • The U.S. government must develop a tractable framework for promoting rigorous accountability and oversight mechanisms for the subset of companies capable of training AI systems that match or surpass GPT-4 in particular.

These comments seek to contextualize these and related recommendations in terms of a comprehensive national AI strategy. Yet what such a strategy should look like is highly sensitive to forecasts of near-term AI progress. Skeptics of Artificial General Intelligence (AGI) are more likely to emphasize risks related to bias and algorithmic discrimination, for example, as these concerns are both immediate and backwards compatible with existing policy discourses connected to big data, digital privacy, and disinformation.

If AGI is near, in contrast, these concerns pale in comparison to the urgency of preparing for an “intelligence explosion” that fundamentally transforms human civilization. As such, my comments to the NTIA emphasize the need to distinguish an inclusive definition of “AI system” – one which includes “dumb” automations and narrow applications of machine learning – from the highly general and autonomous forms of AI that match or surpass human cognitive capabilities. While this does not preclude distinct regulatory frameworks for weaker AI applications or lesser forms of risk, applying special scrutiny to these most powerful systems will be critical to securing a beneficial future for humanity.

Indeed, the risks from near-term AGI range from existential to unprecedentedly disruptive. As discussed in detail below, a forward-looking AI strategy should thus focus on three critical dimensions of AI safety:

  • The sui generis risks associated with powerfully general AI systems;
  • The national security imperative of U.S. technological leadership in AI vis-à-vis our foreign adversaries;
  • And the “failure to adapt” risk posed by large scale disruption to legacy institutions.

The short timeline to AGI

The extraordinary rate of improvement in large language models (LLMs) demonstrates the power of simple learning algorithms to endow deep neural networks with remarkable capabilities, from multilingual semantic understanding to sophisticated forms of causal reasoning. Experts anticipate the power of these and related models to grow rapidly in the years ahead given predictable improvements in training hardware, new insights into model architecture and optimization, and a tsunami of private sector investment.

Since the release of ChatGPT, timelines to the likely arrival of transformative Artificial Intelligence (TAI) have shortened dramatically. Jeff Clune, Senior Research Advisor at DeepMind, recently predicted a 30% chance of TAI arriving by 2030, operationalized as “any unified system that can perform more than 50% of economically valuable human work.” Similarly, the forecasting platform Metaculus offers a median forecast for the arrival of “strong AGI” with robotic capabilities by 2032. For context, a sufficient criterion for judging this latter forecast is any system with “general robotic capabilities, of the type able to autonomously, when equipped with appropriate actuators and when given human-readable instructions, satisfactorily assemble a (or the equivalent of a) circa-2021 Ferrari 312 T4 1:8 scale automobile model.” The transition from strong AGI to a self-improving superintelligence is then forecasted to take as few as eight months.

Skepticism about these relatively short timelines to TAI / AGI is understandable. Exponential trends are counter-intuitive. Consider that human-level natural language processing was also once thought to be many years away, until suddenly it (or something very close to it) arrived in the form of Generative Pre-trained Transformer (GPT) models trained on large volumes of text. Large labor market impacts from the current generation of GPT-based architectures are already projected for knowledge-based sectors, but will soon extend to the physical realm as well, given rapid progress in vision and decision-making modalities that enable their use in robotic control systems.

As the power and generality of AI systems increases, so will the need for external audits and accountability mechanisms. Fortunately, regulation focused on the frontier of AI has the benefit of tractability, as only a small number of companies have the resources to pretrain large foundation models from scratch. Moreover, LLM scaling laws suggest the computing resources required to train frontier models are a reasonable proxy for model power and capability.

While open-source models are making rapid strides, the logarithmic nature of LLM scaling laws suggests the capital-intensive computing resources needed to train frontier models will grow, not shrink, in the years ahead. For example, according to OpenAI, the total cost of training GPT-3 was around $4.6 million. This contrasts with GPT-4, which is estimated to have cost over $100 million. The next doubling in model performance will thus likely require a 10–100x increase in total training cost, even with specialized hardware. This is corroborated by a pitch deck from the AI research startup Anthropic, which projects a $1 billion capital expenditure to train a model 10x more powerful than GPT-4.
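To illustrate the arithmetic behind that claim, consider a minimal sketch assuming a power-law relationship between loss and training compute, loss(C) ∝ C^(−α), with an illustrative exponent of α ≈ 0.05. Published estimates of the exponent vary, so the multipliers below are order-of-magnitude illustrations rather than figures from this comment.

```python
# Illustrative arithmetic only: assume loss follows a power law in training
# compute, loss(C) = k * C**(-alpha). A fixed fractional improvement in loss
# then requires a multiplicative increase in compute (and hence cost).

ALPHA = 0.05  # illustrative compute exponent; published estimates vary

def compute_multiplier(loss_improvement: float, alpha: float = ALPHA) -> float:
    """Factor by which training compute must grow to cut loss by
    `loss_improvement` (e.g. 0.10 for a 10% reduction)."""
    return (1.0 / (1.0 - loss_improvement)) ** (1.0 / alpha)

if __name__ == "__main__":
    for improvement in (0.05, 0.10, 0.20):
        print(f"{improvement:.0%} lower loss -> "
              f"~{compute_multiplier(improvement):.0f}x more compute")
```

Under these assumptions, even modest capability gains imply compute multipliers in the single digits to roughly 100x, which is why frontier training runs remain concentrated among a handful of well-capitalized firms.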

The short-run capital constraint on building models that match or surpass GPT-4 presents a window of opportunity for establishing a regulatory framework specific to the small number of companies racing to develop AGI. Specifically, organizations seeking to train models beyond a sufficiently high threshold of compute should be required to pre-register training runs with a coordinating agency charged with overseeing external safety audits and model evaluations.

To benchmark such a compute threshold, the original PaLM model from Google (the largest model in 2022) required 2.5×10^24 FLOPs to train. Credible estimates of the compute required to train PaLM-2 and GPT-4 yield an additional order of magnitude, i.e. 10^25 FLOPs. As such, a threshold of 10^26 FLOPs would likely suffice to segment next-generation models for heightened oversight. The coordinating agency should further possess the authorities to update the threshold over time, promulgate industry standards, and introduce additional criteria for picking out specific, high-risk capabilities that aren’t well-proxied by raw compute, such as AI models with biosafety implications.
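As an illustration of how such a threshold could be screened for, the sketch below uses the rough rule of thumb from the scaling-law literature that training compute is approximately 6 FLOPs per parameter per token. The PaLM figures (roughly 540 billion parameters trained on 780 billion tokens) are publicly reported; the "next-generation" run is a hypothetical round number, not a reference to any actual model.

```python
# Illustrative sketch of screening a training run against the 10^26 FLOPs
# oversight threshold discussed above. The "6 * parameters * tokens" estimate
# is a rough rule of thumb from the scaling-law literature, not a regulatory
# formula.

THRESHOLD_FLOPS = 1e26

def estimated_training_flops(n_params: float, n_tokens: float) -> float:
    """Rough training-compute estimate: ~6 FLOPs per parameter per token."""
    return 6.0 * n_params * n_tokens

def requires_preregistration(n_params: float, n_tokens: float) -> bool:
    """Would this run clear the suggested pre-registration threshold?"""
    return estimated_training_flops(n_params, n_tokens) >= THRESHOLD_FLOPS

if __name__ == "__main__":
    # PaLM (publicly reported): ~540B parameters, ~780B training tokens
    print(f"PaLM: ~{estimated_training_flops(540e9, 780e9):.1e} FLOPs, "
          f"pre-register: {requires_preregistration(540e9, 780e9)}")
    # Hypothetical next-generation run: 2T parameters, 20T tokens
    print(f"Hypothetical: ~{estimated_training_flops(2e12, 20e12):.1e} FLOPs, "
          f"pre-register: {requires_preregistration(2e12, 20e12)}")
```

The PaLM case reproduces the ~2.5×10^24 FLOPs figure cited above and falls well below the threshold, while the hypothetical frontier-scale run clears it, showing how a compute trigger can single out only the largest training efforts.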

Importantly, given the rapid pace of progress in AI, the coordinating agency charged with oversight of advanced AI systems should have the flexibility and superseding authorities needed to work at arm's length with the companies it engages with, rather than serve a passive or purely procedural role. For instance, we may also wish the agency to audit the operational security of the most advanced AI companies; require sensitive research and development to relocate to secured facilities; and/or compel safety-enhancing coordination and information-sharing between competing companies that would otherwise run afoul of competition law.

As frontier models demonstrate new emergent abilities and strong AGI looks increasingly within reach, it may become prudent for the leading AI programs to enter into a joint research venture. Such a joint venture – whether analogized to the Manhattan Project or CERN – would serve to arrest competitive arms-race dynamics, reduce the risk of a single firm with runaway monopoly power, and enable collaborative work on AI alignment and methodical model deployments. A similar eventuality is also raised by OpenAI in their May 22nd statement, “Governance of superintelligence”:

First, we need some degree of coordination among the leading development efforts to ensure that the development of superintelligence occurs in a manner that allows us to both maintain safety and help smooth integration of these systems with society. There are many ways this could be implemented; major governments around the world could set up a project that many current efforts become part of, or we could collectively agree (with the backing power of a new organization like the one suggested below) that the rate of growth in AI capability at the frontier is limited to a certain rate per year.

The selection or creation of a governmental organization to coordinate oversight of companies pursuing AGI is naturally prerequisite to any such future, safety-related interventions. Through the lens of America’s national AI strategy, the benefits to this orientation are obvious: being the first nation to develop AGI and deploy it safely will potentially redound in the form of permanent economic and technological dominance. Securing this future must, therefore, be the primary goal of our AI strategy going forward.

Tech competition and U.S. national security

Efficiently training advanced AI models requires access to equally advanced AI chips. Recognizing this, on October 7, 2022, the U.S. government implemented significant new, multilateral export controls on the sale of advanced computing and semiconductor manufacturing items to entities in China.

A central priority of U.S. national AI strategy should be to ensure AI-related export controls are enforceable at scale. The Bureau of Industry and Security (BIS) is charged with implementing export controls through the Export Enforcement Office (EEO). These new responsibilities will require providing BIS with additional resources to modernize its approach to export enforcement through upgraded information technology and access to machine learning tools for scaling global surveillance capacities.

Advanced chips networked together into large computing clusters are of particular concern in the context of limiting China’s access to powerful AI systems. To scale our capacity to monitor advanced computing clusters per se, conventional export controls based on the provision of export licenses should be complemented by hardware-based mechanisms, i.e. end-user controls implemented on the chips themselves. Such controls could also be software-based, such as drivers that remotely throttle a chip’s interconnect bandwidth when chips are networked together in an undisclosed datacenter. For example, the chipmaker NVIDIA demonstrated its capacity to implement such controls through firmware that automatically throttled its “Lite Hash Rate” graphics cards when used to mine cryptocurrency.
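To make the logic of such a mechanism concrete, the following is a purely hypothetical sketch of a driver-level policy that permits full interconnect bandwidth only when a chip can attest it is operating in a disclosed datacenter. None of the names, values, or interfaces here correspond to any real driver, firmware, or attestation API.

```python
from typing import Optional

# Purely hypothetical sketch of a driver-level end-user control: permit full
# interconnect bandwidth only when the chip can attest it is running in a
# disclosed datacenter. No real driver, firmware, or attestation API is implied.

FULL_BANDWIDTH_GBPS = 900      # illustrative full interconnect bandwidth
THROTTLED_BANDWIDTH_GBPS = 50  # illustrative throttled rate

def cluster_is_disclosed(attestation_token: Optional[str],
                         registry: set[str]) -> bool:
    """Check a (hypothetical) attestation token against a registry of
    disclosed datacenters maintained by the licensing authority."""
    return attestation_token is not None and attestation_token in registry

def allowed_interconnect_bandwidth(attestation_token: Optional[str],
                                   registry: set[str]) -> int:
    """Bandwidth the driver would permit under this illustrative policy."""
    if cluster_is_disclosed(attestation_token, registry):
        return FULL_BANDWIDTH_GBPS
    return THROTTLED_BANDWIDTH_GBPS
```

The point of the sketch is simply that the policy decision (is this a disclosed cluster?) and the enforcement action (throttle interconnect) can be separated, which is what makes hardware- and driver-level controls a plausible complement to export licenses.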

Nevertheless, powerful AI systems can also be trained using legacy chips not currently covered by the October 7 controls. Training AI models with legacy chips adds a time cost that can be offset by simply building bigger computing clusters. Moreover, legacy chips account for a large proportion of international chip demand – a market segment China is positioned to have a strong comparative advantage in. This parallels China’s strong position in legacy telecommunications equipment, which ultimately necessitated the U.S. ban on Huawei Technologies and ZTE.

The race to dominate AI is also a race to be the dominant exporter of AI-related technologies to countries around the world, including developing nations for whom legacy chips and telecom equipment are often more attractive on a cost basis relative to the cutting-edge. As Reuters reports, “With U.S. export controls making it impossible to produce advanced chips, [Semiconductor Manufacturing International Corporation] (SMIC) is doubling down on mature technology chips and has announced four new facilities, or fabs, since 2020. When those come online, it would more than triple the company’s output.”

In response to these developments, my FAI colleague and international technology expert, Roslyn Layton, offers the following recommendations:

First, BIS must work to immediately impose meaningful export controls targeting SMIC and other PRC legacy chipmakers. Second, in the upcoming National Defense Authorization Act (NDAA) process, Congress must strengthen and expand Section 5949 of the FY2023 NDAA, to ensure that contractors servicing the federal government cannot use Chinese chips in their equipment. The U.S. government also has the power to impose tariffs under Section 301 of the Trade Act of 1974 when products put American national security at risk. It’s worth investigating whether Section 301 can be applied to make Chinese chips prohibitively expensive.
Finally, supply chain shortages, such as those that plagued the automotive sector, ravaged the American economy during the pandemic. Those kinks are a foretaste of the pain China could inflict on the U.S. if it dominates the legacy chip market. The Biden Administration can help prevent that scenario and give American companies an advantage over Chinese companies by ensuring that a healthy portion of the $39 billion appropriated for domestic chip production under the CHIPS Act supports legacy chip manufacturing.

AI-related export controls on China should also be extended to the model weights of advanced AI systems. The BIS issued the first temporary software export control of this kind in 2020, targeting “geospatial imagery software specially designed for training Deep Convolutional Neural Networks to automate the analysis of geospatial imagery and point clouds.” Advanced, multi-modal AIs will likewise be capable of analyzing geospatial imagery data, and much more.

In the meantime, the U.S. should refrain from denying Chinese firms access to American cloud service providers, as is reportedly being considered. While not ideal, allowing Chinese entities to train AI models on U.S. servers provides us with the benefits of home jurisdiction and may help to depress demand for China’s domestic cloud market.

Institutional adaptation

Circa the early 2000s, “internet safety” discussions often emphasized issues such as cybercrime or online bullying. Little did we know at the time that the internet and the subsequent mobile revolution would shift the balance of power between state actors and society, putting immense pressure on legacy institutions to adapt. Between an information tsunami and new means of mass mobilization, many legacy institutions thus suffered what former CIA media analyst, Martin Gurri, dubbed a “crisis of authority.” In Western democracies, this manifested in the form of rising populism and collapsing trust in government and legacy media. In weaker states such as those involved in the Arab Spring, the result was popular revolutions and even partial state collapse.

The internet’s impact on legacy institutions was not the result of explicit misuse. Rather, it was a second-order consequence of how the internet altered the broader environment of transaction costs, i.e. the legibility of information, the ability of principals to monitor their agents, the costs of coordinating around a common plan, and so forth. Transaction costs determine the efficient scope and structure of hierarchical organizations. Similarly, contemporary AI policy discussions are often overly focused on the direct implications of AI at the expense of anticipating second-order institutional impacts.

Across history, the advent of general purpose technologies has tended to precede an institutional regime change: the agricultural revolution led nomadic tribes to settle city-states; the printing press presaged the Protestant Reformation, the Wars of Religion, and the Peace of Westphalia; and the Industrial Revolution ushered in centralized welfare states and a new constitutional order. The advent of AGI may likewise be a regime-change level event, only one with the potential to overshadow these past technological phase-transitions in both its magnitude and speed of diffusion.

A central focus of U.S. national AI strategy should be to assess and prepare for the structural reforms our institutions need to adapt to a post-AGI world – a world with immense new throughput demands and a vastly accelerated pace of change. Every aspect of the machinery of government should be on the table, from our social insurance systems to foundational laws such as the Administrative Procedure Act. In turn, this will require a degree of legislative activity not seen since at least the New Deal era, and thus robust investment in Congressional modernization, technical expertise, and legislative capacity.

Adapting to AI means not shying away from integrating AI into government itself. Consider what the diffusion of AI lawyers will mean for the capacity of our overwhelmed court system. New venues for dispute resolution may need to be created that deploy AI in the adjudication process. More prosaically, lawyers in federal enforcement agencies may find themselves inundated by potential cases worthy of investigation. As it stands, when a lawyer at the Federal Trade Commission, say, does legal discovery, they often find themselves manually reading tens of thousands of subpoenaed emails – emails that existing language models could, in principle, read through to extract evidence of misconduct in a matter of seconds.
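A minimal sketch of what that kind of LLM-assisted document triage could look like is below. The `llm_classify` function is a placeholder for whichever model API an agency might adopt, and the prompt and labels are purely illustrative; flagged documents would still go to an attorney for review.

```python
from typing import Iterable

# Minimal sketch of LLM-assisted document review for discovery. `llm_classify`
# is a placeholder for whatever language-model API an agency adopts; the
# prompt and labels are illustrative only.

RELEVANCE_PROMPT = (
    "You are assisting with legal discovery. Does the following email contain "
    "evidence relevant to the alleged misconduct? "
    "Answer RELEVANT or NOT_RELEVANT.\n\n{email}"
)

def llm_classify(prompt: str) -> str:
    """Placeholder for a call to a language model; returns a label string."""
    raise NotImplementedError("wire this to the model API of your choice")

def triage_emails(emails: Iterable[str]) -> list[str]:
    """Return the subset of emails the model flags for attorney review."""
    flagged = []
    for email in emails:
        if llm_classify(RELEVANCE_PROMPT.format(email=email)) == "RELEVANT":
            flagged.append(email)
    return flagged
```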

From AI-augmented tax examiners to new demands on the FDA from AI-enabled drug discovery, the benefits of embracing AI within the civil service are too many to list here. A national commission on AI is thus surely warranted, though it will only be the start.

Respectfully submitted,

Samuel Hammond
Senior Economist
The Foundation for American Innovation
2443 Fillmore Street #380-3386
San Francisco, CA 94115
samuel@thefai.org
