The American AI Initiative, created by the Trump Administration in February 2019, signaled the White House's intent to make artificial intelligence a pivotal asset in shaping America's future. While the initial plan was criticized for its lack of specifics, on January 7th the Office of Management and Budget released its “Guidance for Regulation of Artificial Intelligence Applications” for federal agencies, articulating the government's strategy to boost innovation and build public trust in AI.
The guidelines make clear that moving fast does not imply breaking things. There is a humility to the principles, which acknowledge the government's limited resources for effectively governing AI while ensuring innovation accelerates. This approach signals to Congress that it must expand the government's regulatory capacity if it wants to decisively shape the future of AI.
The first objective laid out is the desire to forge a national market that boosts innovation in AI, advocating a hands-off approach to regulation except where intervention is essential to reduce barriers. There is a clear emphasis on the need to override attempts by state or local governments to hamper the development and diffusion of AI. This acknowledgement is important: state governments, like California with its Consumer Privacy Act, have set onerous standards that have become national norms, harming innovation. Pushing federal agencies to take a more active role in maintaining open markets is welcome and needed.
Boosting innovation alone would be self-defeating given the widespread tech backlash that has been occurring across the United States. The anxieties around AI's potential harms, from algorithmic bias worsening discrimination to the privacy concerns around facial recognition technologies, have justifiably created concern. To build public trust in artificial intelligence, the guidelines encourage transparency and public consultation in rulemaking.
Building public trust, while an important aim, is where the stated principles provide the least guidance; much work remains for the various agencies. Interagency coordination is encouraged, and by ensuring a government-wide understanding of the costs and benefits of technological developments and regulatory approaches, it would mitigate the harms of piecemeal activity. A coordinated government strategy would give the public more confidence in the White House's vision for the direction of AI and its strategic priorities.
This coordinated approach, however, rubs against some of the non-regulatory approaches the guidelines highlight. The three mentioned explicitly are sector-specific frameworks, pilot programs and experiments, and voluntary standards. None of these advances a coherent whole-of-government strategy of the kind encouraged elsewhere in the document. Sector-specific frameworks and voluntary standards in particular carry a significant risk of capture by the corporations that would supply the insights for self-governance, and whose internal ethical standards have already provoked significant controversy.
The best of these is the call for pilot programs and experiments, which has precedent in the financial innovation sandboxes first developed in the UK. This approach would exempt new, untested technologies from regulatory standards for a limited period, allowing agencies to monitor fast-moving advancements and adopt new standards in response to observed need. While beneficial, experimentation alone does not supply the strategic vision a non-regulatory framework needs to build public trust.
The emphasis on non-regulatory frameworks stems from an acknowledgement of resource constraints in effectively governing a fast-moving field. Rather than letting industry take the lead in developing the framework for its own governance, however, the White House should seek to improve government capacity. Attracting more top talent to federal agencies and beefing up their research arms would go a long way toward helping them take a more proactive role in shaping AI's development.
Acknowledging that regulatory frameworks may hinder innovation is important, but there is room for innovation in how agencies themselves interact with AI. One approach is the “regulatory markets” system recently proposed by Jack Clark and Gillian Hadfield at OpenAI. Regulatory markets would have agencies license private regulators to set standards and monitor development within corporations, relieving the public of the resource burden while creating accountability mechanisms more enforceable than voluntary standards.
The guidelines are an excellent starting point for fleshing out an initiative vital to maintaining American national competitiveness, but much more work remains to be done. The encouragement of non-regulatory frameworks should be taken as a challenge for agencies to realize that faster private-sector innovation requires greater policy innovation.