AI has the potential to significantly improve our lives, but it also poses serious risks. On the one hand, advances in computer systems that can execute tasks traditionally requiring human intelligence have unprecedented potential to benefit humanity. AI has already produced breakthroughs in medicine, transportation, and education, from reducing the time and effort needed to complete routine tasks to improving decision-making.
On the other hand, AI has already caused failures in critical domains. The 2010 “flash crash,” driven in part by faulty trading algorithms, erased more than $1 trillion of financial value in minutes. And a 2016 ProPublica report found that criminal justice sentencing algorithms used across the country were racially biased: black defendants were far more likely than white defendants to be incorrectly judged to be at high risk of recidivism.
More serious threats will continue to arise. AI could enable “stable totalitarianism” by making governments more repressive, giving rulers the ability to monitor and track citizens with ease. AI could also reinforce the destabilizing effects of advanced weaponry, increasing the speed of war and compressing decision-making timeframes; this, in turn, could exacerbate tensions between nuclear-armed powers. Most concerning of all, we still lack the technical understanding needed to solve the “AI control problem” (reliably steering advanced AI systems toward human values), a gap that could prove far more dangerous than any other concern.
Such significant risks deserve equally serious attention from the U.S. government. The White House has led recent efforts on AI, including new export controls on semiconductor sales to China and a Blueprint for an AI Bill of Rights. The White House also established the National AI Initiative Office in 2021, in accordance with the National Artificial Intelligence Initiative Act of 2020. Congress previously championed the issue by sponsoring the National Security Commission on AI (NSCAI), but the commission ceased operations in 2021. Future leadership will require technical expertise on the challenges and opportunities posed by AI.
To address this challenge, Congress should leverage the technical capacity of the Government Accountability Office (GAO) to conduct a technical analysis of AI and study the associated current and emerging risks.
In 2019, GAO created the Science, Technology Assessment, and Analytics (STAA) team with a mission to inform Congress about critical science and technology issues. The group, which includes staff with expertise across STEM fields, including computer science, is responsible for providing technical analysis and conducting foresight, giving lawmakers information and actionable recommendations about emerging technologies and the opportunities and challenges they will create.
A 2019 GAO article highlighted some of AI's most promising potential benefits, such as increasing transportation mobility and transforming healthcare, and flagged important risks to monitor, such as managing the data needed to train AI models. The article also claimed that AI technologies with broad reasoning abilities were highly unlikely in the foreseeable future. That assessment has since been challenged by further research on AI forecasting. For example, the Special Competitive Studies Project (SCSP), the successor to the NSCAI, has cited research arguing that this could be “the most important century of all time for humanity.” The “biological anchors” analysis within that research estimated a 50 percent chance of transformative AI by 2055; similarly, a recent survey of AI experts forecast a 50 percent chance of human-level machine intelligence by 2060. Both forecasts carry significant uncertainty, but they suggest we may have only a decade or two to prepare for transformative AI.
In 2021, STAA released its AI Accountability Framework, a welcome addition to the tools available to federal policymakers overseeing AI systems. The framework surveyed all previous AI frameworks developed by the U.S. government, with an emphasis on AI safety and the known and unknown risks of AI. This emphasis is valuable, but more attention should be devoted to the kinds of catastrophic risks considered above. Congress should therefore direct STAA to conduct a technology assessment examining potential catastrophic risks in more detail. In particular, STAA should conduct foresight on AI risks related to misinformation, cyberattacks, critical infrastructure, and artificial general intelligence (AGI).
In its analysis, GAO should study the publications of the AI safety and forecasting community. Key AI forecasting organizations in this area include the University of Oxford’s Future of Humanity Institute, the University of Cambridge’s Leverhulme Centre for the Future of Intelligence, the Centre for the Governance of AI, and AI Impacts.
Additional forward-looking analysis from GAO could help Congress address the issue more effectively. It could inform appropriations by providing recommendations to Congress on how to allocate resources to mitigate AI risks. It could also spur Congress to pass legislation—for example, measures to improve data security, or requirements for other agencies to follow the steps recommended by GAO.
In summary, if the substantial recent progress in AI continues in the coming decades, machines could come to outperform humans at many tasks. This could bring enormous benefits and help solve currently intractable global problems, but it also poses severe risks, up to and including catastrophic outcomes such as geopolitical conflict between nuclear powers. Much more technical work is needed to reduce these risks.
Our policymakers do not yet have a roadmap for encouraging the AI safety research needed to integrate AI into society without significant harm. A foresight study by GAO would be an important step in the right direction, preparing Congress to shape national priorities and fund technical AI safety research in the years ahead.