NTIA Comment on AI Accountability Policy

June 12, 2023

Today, I submitted a comment in response to the National Telecommunications and Information Administration's (NTIA) request for comment on AI system accountability measures and policies. Click here to download a PDF of the comment.

Thank you for the opportunity to respond to the National Telecommunications and Information Administration's (NTIA) request for comment on Artificial Intelligence (AI) system accountability measures and policies (henceforth “the Request”). My name is Samuel Hammond, and I am a senior economist at the Foundation for American Innovation (FAI), a nonprofit dedicated to developing technology, talent, and ideas that support a better, freer, and more abundant future.

My work at FAI focuses on the governance of emerging technologies, particularly as it relates to AI. In recent months, I have published on the need for a proactive approach to AI safety and accountability in outlets ranging from Politico Magazine to the journal American Affairs, as well as on my personal website.

The extraordinary rate of improvement in large language models (LLMs) demonstrates the power of simple learning algorithms to endow deep neural networks with remarkable capabilities, from multilingual semantic understanding to sophisticated forms of causal reasoning. Experts anticipate the power of these and related models to rapidly grow in the years ahead given predictable improvements in training hardware, new insights into model architecture and optimization, and a tsunami of private sector investment.

Ensuring that powerful AI models are deployed in a trustworthy and accountable manner in the future is of the utmost importance to our national security and the future of humanity. Nonetheless, it is vital for the most stringent regulatory mechanisms (such as external safety audits or licensing schemes) to be reserved only for the most powerful systems, especially ones that are yet to be developed.

In particular, I believe the NTIA should:

  • Refine the definition of “AI system” to distinguish “narrow AI,” including conventional forms of rules-based automation, from generally intelligent AI systems.
  • Recognize the differences in how bias and trustworthiness are handled in narrow and general AI systems, respectively. For example, LLMs may need to be capable of bias in order to understand how to avoid bias through prompt engineering.
  • Prioritize regulation of systems that pose quantifiable risks to human life or well-being so as not to impede the beneficial uses of current AI systems.
  • Develop a tractable framework for promoting accountability among the subset of companies capable of training AI systems that match or surpass GPT-4.

Defining the Scope of AI Systems

This Request incorporates the National Institute of Standards and Technology's (NIST) definition of an “AI system” as “an engineered or machine-based system that can, for a given set of objectives, generate outputs such as predictions, recommendations, or decisions influencing real or virtual environments,” broadened to encompass “automated systems” with “the potential to meaningfully impact the American public's rights, opportunities, or access to critical resources or services.”

Unfortunately, the all-encompassing scope of this definition risks conflating generic automated systems with highly general AI systems that demonstrate sui generis capabilities. While this may be warranted given the NTIA’s statutory purview, a qualitative delineation of “AI system” by capability and generality is still necessary to carve nature at its joints and give regulation some hope of tractability, not least because “machine-based systems that can … generate outputs … influencing real or virtual environments” are essentially ubiquitous.

A useful starting point would be to taxonomize AI systems along the dimensions of generality (narrow to general) and risk (low to high). For example, a flaw or bias in a content recommendation algorithm is relatively low risk compared to a flaw or bias in a system used to automate Unemployment Insurance claims, say, even though both may represent relatively narrow forms of AI. Likewise, a general AI system capable of beating human opponents at a wide variety of video games is lower risk than a comparable system trained to compete with humans in real life.

Regulation of AI should prioritize the highest-risk applications. At the same time, a purely risk- or use-based framework threatens to conflate narrow and general AI systems to a fault. Narrow forms of AI range from simple rules-based algorithms to advanced forms of statistics. General AI systems are no less statistical; however, they work by harnessing statistics (such as the statistics over patterns of text) to bootstrap emergent capabilities, such as common-sense reasoning or the ability to deceive and manipulate. Deep-learning researchers often refer to this as the notion that “more is different,” i.e., that scaling neural networks can lead to qualitatively different capabilities. As the NTIA considers AI accountability in its fullest scope, the unique risks posed by powerful and highly general AI systems thus deserve distinct treatment.

General and narrow AIs have different implications for bias

Complicating matters, more general AI systems may help to reduce the flaws and biases associated with conventional process automation. For example, the high audit rate among Earned Income Tax Credit (EITC) recipients (and the resultant disparate impact on Black single mothers) is in part a consequence of the IRS adopting its Automated Correspondence Exam (ACE) processing system. The ACE system is a software application that fully automates the initiation of EITC cases through the audit process, but as far as AI systems go, the technology is relatively “dumb.” It targets EITC cases in the first place only because the returns are simple enough for rules-based automation. Augmented with LLMs, future tax-automation systems have the potential to be much more context-sensitive and thus less liable to trigger audits over minor discrepancies. Moreover, generally intelligent AI systems will be able to grapple with the complexities and idiosyncrasies of the taxes filed by high-income individuals, potentially reducing disparities in enforcement.

The outputs provided by LLMs are highly sensitive to how they are prompted. “Eliminating bias” from an LLM is thus not a well-defined concept. On the contrary, the main technique used for aligning LLMs is Reinforcement Learning from Human Feedback (RLHF). RLHF techniques can help make LLMs more trustworthy and controllable, but at a cost. For example, following RLHF, the base GPT-4 model becomes markedly less well-calibrated (i.e. the probabilities it assigns to its predictions become less accurate). RLHF thus increases model bias in the most literal sense of the word.
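
To make the notion of calibration concrete, the following sketch (a hypothetical illustration, not drawn from OpenAI's published evaluations) estimates expected calibration error by comparing a model's stated confidence to its empirical accuracy; a well-calibrated model's 80-percent-confidence answers are correct roughly 80 percent of the time.

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Bin predictions by stated confidence and compare average confidence
    to empirical accuracy within each bin (a standard ECE estimate)."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(confidences[mask].mean() - correct[mask].mean())
            ece += mask.mean() * gap  # weight each bin by its share of samples
    return ece

# Hypothetical toy data: stated confidences and whether each answer was correct.
# A post-RLHF drift toward over- or under-confidence shows up as a larger ECE.
print(expected_calibration_error([0.9, 0.8, 0.6, 0.95], [1, 1, 0, 1]))
```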

In the future, LLMs may be deployed in the context of arbitration and adjudication. Such LLMs will be valued for their neutrality relative to systems that depend on human discretion. Yet this neutrality may be independent of whether or not the training data was in some sense biased, and may instead derive from how the model is prompted. Indeed, given the so-called Waluigi Effect, it is plausible that LLMs may need to retain their capability for bias in order to understand how to avoid it.

Quantifying relative risks from current AI systems

To date, frontier LLMs (such as GPT-4) pose demonstrably limited risk to human life or well-being, but hold enormous potential to improve quality and access in areas ranging from health to education. Indeed, the American technology sector is now facing a capabilities overhang, meaning the capabilities of current models vastly outstrip their realized use in products or applications. Industry is racing to close that overhang by integrating the current generation of LLMs into every corner of the economy—what might be thought of as the “horizontal” dimension of AI innovation. The “vertical” dimension of AI innovation, in contrast, refers to the development of new and more powerful models.

Existing laws and regulations are equipped to deal with the potential harms resulting from current-generation AI systems. There may even be risks associated with failing to integrate AI systems rapidly enough. Near-term AI has the potential to drive rapid productivity growth in myriad sectors of the economy, improving standards of living and generating economic surpluses that could be put toward adaptation and risk mitigation. Regulation that fails to properly balance AI's “use/misuse trade-off” may inadvertently truncate those benefits while doing comparatively little to stop bad actors. Legacy legal and regulatory structures may even need to adopt AI tools to keep up with the accelerated rate of change and the emergence of new threat models, such as through the use of AI-powered cybersecurity capabilities.

In contrast, new sectoral regulations, such as the sweeping licensing and certification requirements envisioned by the EU's pending AI Act, risk impeding the commercial deployment of current-generation LLMs while doing little to stop bad actors. Consider that open-source vision and language models are intrinsically resistant to regulation while being sufficient to create convincing deepfakes and spam. The risks associated with open-source models will thus likely be addressed through defensive technologies, such as AI-enabled verification systems, that will require continuous, adversarial refinement.

Beyond being premature, regulation of existing AI systems could inadvertently shift the marginal dollar of private investment toward the vertical dimension of innovation, reducing our time to prepare for more powerful and genuinely dangerous systems. Given the accelerating rate of progress in AI research, regulators must therefore follow the sage advice of hockey legend Wayne Gretzky and “skate to where the puck is going.”

In particular, it is essential for accountability regulation to distinguish between AI systems per se and AI systems with highly general reasoning capabilities. Machine learning and algorithmic decision-making are nothing new, nor are the potential issues they present (e.g., bias and discrimination). In contrast, unified AI systems that demonstrate common-sense reasoning abilities and that surpass human-level performance in a wide range of tasks are new. The policy challenges they present should therefore not be conflated with the risks associated with automated systems more generally.

Prioritize accountability frameworks for the most powerful models

Since the release of ChatGPT, timelines to the likely arrival of transformative Artificial Intelligence (TAI) have shortened dramatically. Jeff Clune, Senior Research Advisor at DeepMind, recently predicted a 30% chance of TAI arriving by 2030, operationalized as “any unified system that can perform more than 50% of economically valuable human work.” Similarly, the forecasting platform Metaculus offers a median forecast for the arrival of “strong” Artificial General Intelligence (AGI) with robotic capabilities by 2032. For context, a sufficient criterion for judging this latter forecast is any system with “general robotic capabilities, of the type able to autonomously, when equipped with appropriate actuators and when given human-readable instructions, satisfactorily assemble a (or the equivalent of a) circa-2021 Ferrari 312 T4 1:8 scale automobile model.”

In recent years, robotic AIs have struggled to fold laundry, let alone assemble cars in a zero-shot fashion. Skepticism about these relatively short timelines to TAI is, therefore, understandable. Exponential trends are known to be unintuitive. Consider that human-level natural language processing was also once thought to be many years away, until suddenly it (or something very close to it) arrived in the form of Generative Pre-trained Transformer (GPT) models trained on large volumes of text. GPT-based architectures are now making rapid progress in vision and decision-making modalities as well, enabling their use in robotic control systems.

As the power and generality of AI systems increase, so will the need for external audits and accountability mechanisms. Fortunately, regulation focused on the frontier of AI has the benefit of tractability, as only a small number of companies have the resources to pretrain large foundation models from scratch. While open-source models are making rapid strides, the logarithmic nature of LLM scaling laws suggests the importance of access to capital-intensive computing resources will grow, not shrink, in the years ahead. For example, the total cost of training GPT-3 has been estimated at around $4.6 million. This contrasts with GPT-4, which is estimated to have cost over $100 million. The next doubling in model performance will thus likely exhibit a 10–100x increase in total training cost, even with specialized hardware.
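
As a rough illustration of why scaling laws imply rising compute budgets, the sketch below assumes a simple power-law relationship between training compute and loss; the exponent and dollar figures are stand-in assumptions chosen for illustration, not measured values.

```python
# Illustrative sketch: under power-law scaling (loss ~ compute**-alpha), each
# fixed improvement in loss requires a multiplicative jump in training compute.
# Both constants below are assumptions for illustration, not measured values.
ALPHA = 0.05              # assumed scaling exponent
GPT4_COST_USD = 1.0e8     # public estimate: over $100 million

def compute_multiplier(loss_improvement: float, alpha: float = ALPHA) -> float:
    """Factor by which compute must grow to cut loss by `loss_improvement`x."""
    return loss_improvement ** (1.0 / alpha)

# Even a modest 10% reduction in loss implies a large jump in compute and cost:
mult = compute_multiplier(1.10)
print(f"~{mult:.0f}x more compute, roughly ${GPT4_COST_USD * mult:,.0f} at current prices")
```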

LLM scaling laws also suggest the computing resources required to train large models are a reasonable proxy for model power and generality. Consistent with our argument for refining the definition of AI systems, the NTIA should thus consider defining a special regulatory threshold based on the computing cost needed to match or surpass the performance of GPT-4 across a robust set of benchmarks. Theoretical insights and hardware improvements that reduce the computing resources needed to match or surpass GPT-4 would require the threshold to be periodically updated. In the meantime, the handful of companies capable of training models that exceed this threshold should have to disclose their intent to do so and submit to external audits that cover both model alignment and operational security.
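
A compute-based threshold of this kind could be operationalized quite simply. The sketch below is a hypothetical illustration; the threshold value and field names are assumptions, and any real threshold would need to be set and periodically revised by the agency.

```python
from dataclasses import dataclass

# Hypothetical illustration of a compute-based disclosure threshold.
# The FLOP figure is an assumed placeholder, not a recommended value.
REPORTING_THRESHOLD_FLOP = 1e25

@dataclass
class PlannedTrainingRun:
    developer: str
    estimated_training_flop: float   # planned total training compute, in FLOPs

def requires_disclosure(run: PlannedTrainingRun,
                        threshold: float = REPORTING_THRESHOLD_FLOP) -> bool:
    """Flag runs at or above the threshold for disclosure and external audit."""
    return run.estimated_training_flop >= threshold

# Example: a frontier-scale run trips the threshold; a small fine-tune does not.
print(requires_disclosure(PlannedTrainingRun("FrontierLab", 3e25)))   # True
print(requires_disclosure(PlannedTrainingRun("StartupLab", 5e22)))    # False
```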


Compute thresholds will eventually cease to be a useful proxy for model capability as training costs continue to fall. Nevertheless, the next five years are likely to be an inflection point in the race to build TAI. The NTIA should thus not shy away from developing a framework that is designed to obsolesce but that may still be useful for triaging AI accountability initiatives in the near term.

Respectfully submitted,

Samuel Hammond
Senior Economist

The Foundation for American Innovation
2443 Fillmore Street #380-3386
San Francisco, CA 94115
