With the battle over the ownership of Twitter making its way into the courtroom, conservatives’ hopes that tech magnate Elon Musk would swoop in and solve the quagmire of free speech online have been put in jeopardy. As members of Congress debate legislation that fails to address conservatives’ core concerns, it is time for lawmakers to consider how technology could give power over content moderation to the people.
Musk shocked the world when he recently made a bid to purchase Twitter, promising to build the platform into an “arena of free speech.” He seems to truly believe that he could revert Twitter to its heyday of being the “free speech wing of the free speech party.” After much squabbling, Musk abruptly pulled out of the deal, pushing Twitter into the unique position of suing to enforce the buy-out. Legal analysts and pundits are split on whether Twitter will succeed in its lawsuit.
In his concerns about free speech online, Musk has growing company on the right: content moderation sits at the heart of most conservatives’ concerns with social media. Unfortunately, in an effort to promote freedom of expression online, conservative lawmakers have signed on to sweeping legislation that fails to address their primary concerns. Nowhere is this more apparent than with the American Innovation and Choice Online Act (AICOA), which would ban large online platforms from preferencing their own products and services over those of third parties, while imposing huge penalties for violations.
AICOA’s irrelevance to Republicans’ concerns was made clear at a recent press conference, when Representative Ken Buck (R-CO), Ranking Member of the House Judiciary’s subcommittee on antitrust, pointed to “the threat to free speech that Big Tech poses” as a primary reason for conservatives to support AICOA. Minutes later, Sen. Amy Klobuchar (D-MN)—the original sponsor of AICOA—contradicted Rep. Buck, stating that AICOA “is about competition; it’s not focused on content.”
Other proposed solutions to online content moderation are similarly rife with disagreement over what the problem is, let alone how to solve it. Consider the two approaches to free speech and content moderation online that have received the most attention from policymakers. One approach is to amend or repeal Section 230, which some argue currently gives platforms too much discretion over how they manage third-party content. But opponents contend that proposals to alter Section 230 could exacerbate free speech issues online.
Another model to promote freedom of expression is to treat online platforms as common carriers, requiring them to host all legal content. This approach is typified by Texas’ recently passed social media law, which would ban large platforms from “censoring” a user based on viewpoint. Opponents contend that the idea of using state power to force platforms to host third-party speech is problematic. Trade groups have already sued to block the Texas law, arguing that it violates the First Amendment.
Rather than choosing between increased state involvement in online speech or allowing large platforms to police speech, policymakers should look to technological solutions to content moderation that empower the users themselves.
A recent University of Chicago Law Review paper by Daphne Keller of Stanford’s Cyber Policy Center outlines this approach. “Political conservatives are not alone in fearing that social media companies themselves have become the speech police and that concentration of users on so few platforms makes their policing consequential,” wrote Keller. “But laws like those in Texas and Florida don’t restore user choice … [they] just tell platforms to impose the state’s preferred speech rules.”
In opposition to state-controlled content moderation, Keller proposed supporting “middleware” solutions built on top of existing platforms. In the context of social media, middleware can alter how a platform functions for the user without altering the platform itself. As Keller explained:
For those concerned about censorship, [middleware] reduces platform power. It even lets users choose the firehose of lawful-but-awful garbage, if that’s what they want. … For platforms, it preserves editorial rights to set their own speech rules—they just can’t make them users’ only option.
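The mechanism Keller describes can be made concrete with a minimal sketch. The idea is that moderation becomes a user-selected layer sitting between the platform’s raw feed and what the user sees: the platform’s content is untouched, and each user stacks (or removes) filters of their choosing, including opting into the unfiltered “firehose.” All names below (`Post`, `no_spam`, `apply_middleware`) are hypothetical illustrations, not any real platform’s API.

```python
# Illustrative sketch of "middleware" moderation: a third-party,
# user-chosen filtering layer applied on top of a platform feed,
# without changing anything on the platform itself.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Post:
    author: str
    text: str

# A middleware filter is just a predicate over posts that the user opts into.
FeedFilter = Callable[[Post], bool]

def apply_middleware(feed: List[Post], filters: List[FeedFilter]) -> List[Post]:
    """Return only the posts that pass every filter the user selected."""
    return [p for p in feed if all(f(p) for f in filters)]

# Hypothetical filters: one strict, one that keeps the full "firehose."
def no_spam(post: Post) -> bool:
    return "buy now" not in post.text.lower()

allow_everything: FeedFilter = lambda post: True

feed = [
    Post("alice", "Interesting policy thread"),
    Post("spambot", "BUY NOW limited offer"),
]

print(len(apply_middleware(feed, [no_spam])))           # prints 1
print(len(apply_middleware(feed, [allow_everything])))  # prints 2
```

The design point matches Keller’s framing: the platform retains its own house rules for its default feed, but that default stops being the user’s only option, since any third party can ship a different filter.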
While middleware could give power back to users, it is far from a panacea, at least in its current form. One impediment to middleware solutions is the Computer Fraud and Abuse Act (CFAA), which prohibits intentionally accessing a computer without authorization. The CFAA was intended to prevent hacking, but it has been used to prevent third-party companies from interoperating with online platforms. For example, Facebook successfully used the CFAA to prevent a social media startup from allowing its users to post on Facebook. For middleware solutions to work, third parties would need to be able to adversarially interoperate with dominant platforms in ways that the CFAA currently makes difficult, if not impossible.
If implemented, Elon Musk’s promise to open-source the Twitter algorithm could lead to a blossoming of third-party solutions that give power over content moderation back to the users. But technological solutions rely on the support of policymakers. Passing flawed antitrust reform legislation or imposing speech codes would only damage middleware innovations. Encouraging middleware provides a third way for conservative lawmakers concerned with freedom of expression online without upending internet governance or antitrust law.