Last Congress, antitrust and competition were the tech issues in the spotlight, but today, children's online safety is getting all the attention. On both sides of the aisle, concerns about social media's impact on children have led to the introduction, and reintroduction, of numerous proposals. Now, Sens. Brian Schatz (D-HI) and Tom Cotton (R-AR) have thrown their hats into the ring with a proposal of their own, and it might be the worst one yet.
At a press conference earlier today, a rather odd coalition of senators unveiled the Protecting Kids on Social Media Act, which aims to protect kids from harm on social media. The overarching goal of the bill is to ban children under the age of 13 from using social media and to require minors between the ages of 13 and 18 to obtain affirmative parental consent before accessing social media. Regardless of one's opinion of the intent behind such a proposal, the details of this piece of legislation mean that it would be totally ineffective at its goal of protecting children. What's more, it would expose children and adults alike to new violations of their privacy by both tech companies and the federal government.
The first problem with the Protecting Kids on Social Media Act is the way it defines social media. While the Federal Trade Commission and Meta quibble in courtrooms over how exactly to define the market for social media, Sens. Schatz and Cotton seem to believe the entire internet should be considered social media. Under their definition, social media is any online application or website that:
- Facilitates commercial transactions (e.g., Amazon);
- Facilitates teleconferencing and videoconferencing (e.g., Zoom);
- Facilitates subscription-based content or newsletters (e.g., Substack);
- Facilitates crowdsourced content, such as encyclopedias or dictionaries (e.g., Wikipedia);
- Provides cloud-based storage (e.g., Dropbox);
- Makes video games available for play (e.g., Steam);
- Reports or disseminates news (e.g., the New York Times);
- Provides information regarding businesses, products, or travel, including user reviews (e.g., Yelp); or
- Facilitates email or direct messaging (e.g., Gmail).
If that weren’t enough, the definition also includes a catch-all provision designating as social media any online application or website that has “any other function that provides content to end users but does not allow the dissemination of user-generated content.” Such a broad definition of social media is bound to unintentionally capture websites and applications that children should have access to, including educational applications used by schools. At a time when education technology is advancing rapidly, the Protecting Kids on Social Media Act would ban school children under the age of 13 from using practically any online educational tool.
Another problem with the bill is that it is practically unenforceable. One of the pillars of this legislative proposal is a requirement that social media platforms and applications take reasonable steps to verify the ages of their users. With current technology, adequate age verification remains elusive. While the proposed legislation would establish a pilot program to examine age verification technology and requirements, and would establish a safe harbor for companies that comply with standards laid out by the National Institute of Standards and Technology (NIST), children under the age of 13 would still be able to access these platforms.
Currently, the Children’s Online Privacy Protection Act (COPPA) requires platforms to obtain verifiable parental consent before collecting personal information from children under the age of 13. To comply with COPPA, most tech companies have implemented age restrictions intended to keep children off their platforms. But children are notoriously adept at circumventing age restrictions, and it is unlikely that social media companies would be able to prevent them from doing so—even with a NIST standard. According to a recent report from Thorn—a nonprofit organization that builds technological solutions to protect children from sexual abuse—the majority of children have used social media. The report found that, among children aged 9–12, 98 percent have used YouTube, 69 percent have used iMessage, 66 percent have used Facebook, 66 percent have used TikTok, and 57 percent have played Minecraft. While some of these children accessed these services with parental consent, the report concludes that the majority of children set up social media accounts without their parents’ knowledge.
In other words, the laws that already attempt to keep children's information from being collected by social media platforms are ineffective. Adding another federal mandate is unlikely to change this without a parallel advancement in age verification technology. Unfortunately, unlike other children's online privacy bills that would examine the feasibility of new age verification technologies and establish flexible duties of care for platforms, the new proposal from Sens. Schatz and Cotton would ban first and figure out feasibility later.
Finally, and perhaps most damningly, the Protecting Kids on Social Media Act would establish a pilot program to implement a federal digital identification credential system, better known as digital ID. Such a system would require an enormous amount of sensitive, personal information—including information about minors—to be collected and stored in a central, federal database. While there may be room for digital ID systems at the state or local level, where information is already centralized and stored by the government, at the federal level, it would be a nightmare for privacy and civil liberties.
The potential for governmental surveillance is one of the most significant privacy concerns associated with a federal digital identification credential system. The government could use the data collected through the system to track individuals' movements, monitor their online activity, and even identify individuals who attend protests or other public events. As the American Civil Liberties Union has highlighted, this type of surveillance could have a chilling effect on free speech and assembly, as individuals may be hesitant to express themselves or engage in activism if they fear that their activities are being monitored by the government.
With new research on the harms children face on social media emerging every day, it is understandable that policymakers are looking for ways to protect America’s youth. The Protecting Kids on Social Media Act is a well-intentioned but ultimately misguided attempt to protect children online. While it seeks to address real concerns about the potential dangers of social media, it does so by creating a system that is ineffective and could harm Americans' privacy. Rather than enacting ineffective and harmful legislation, lawmakers should work to develop a more nuanced and effective approach to protecting children online.