This piece was originally published in Tech Policy Press.
From education to media to medicine, the rapid development of artificial intelligence tools has already begun to upend long-established conventions. Our democratic institutions will be no exception. It’s therefore crucial that we think about how to build AI systems in a way that democratically distributes the benefits.
These tools could have a democratizing influence, making it easier for the average American to engage with policymakers—as long as they are built openly and not locked away inside walled gardens.
Late last year, ChatGPT, a chatbot built on top of OpenAI’s GPT-3 family of large language models (LLMs), took the online world by storm. As with other recent OpenAI technologies, such as DALL-E and Whisper, many users have approached these tools in a lighthearted, playful way, enjoying silly conversations with ChatGPT and making DALL-E produce bizarre images. But what happens when the technology is applied not to relatively frivolous uses, but to serious ones by corporate America and the federal government?