Protecting Taylor Swift from Deepfake Nudes Doesn't Require a New Civil Right

February 12, 2024

When sexually explicit deepfakes of Taylor Swift were posted online, lawmakers and the general public were understandably outraged. The White House called the incident “alarming,” and members of Congress were quick to call for legislation to protect future victims. And Swift was not the first victim: recent months have also seen deepfaked robocalls from President Biden, a deepfaked song from Drake and the Weeknd, and deepfaked ads in which Tom Hanks promotes dental plans.

Some politicians are attempting to leverage the public outrage over these incidents into support for legislation that would enshrine a new, expansive civil right to publicity in federal law. But such a broad change of federal policy would be catastrophic for the burgeoning industry of AI creatives. Although we can all agree on the need to protect victims of deepfakes—particularly young women whose likenesses are being sexually exploited at a shockingly high rate—there already are laws that can be used to address this kind of conduct, and it would be a mistake to hastily pass new laws that would cause a host of other problems.

The No Artificial Intelligence Fake Replicas And Unauthorized Duplications Act (No AI FRAUD Act) exemplifies the harmful approach taken by some lawmakers. Sponsored by Rep. Maria Elvira Salazar (R-FL), the Act aims to establish a new federal right to control one’s likeness and voice, defining these rights as intellectual property that is transferable and descendible. It would make the unauthorized use of digital technology to create replicas or imitations of an individual’s likeness or voice illegal, regardless of what the “use” may be, and would even make it illegal to develop a technology that could be used for this purpose. The No AI FRAUD Act would apply to individuals’ likenesses regardless of whether they are living or dead, and violators would be subject to substantial damage awards.

Although some proponents of the bill claim that it “mirrors First Amendment jurisprudence in requiring that IP interests must be balanced against protected free speech interests,” the No AI FRAUD Act abandons carefully drafted protections for free speech that already exist in most states. Their rationale for supporting this expansive new federal law is that state-level protections are “undeveloped and uncertain”—a facile comment that is terribly misleading, at best. Some states already have statutes specifically targeted at protecting against deepfakes, but in every state, there are existing statutes and common law protections that are more than adequate to address misuse of AI for commercial purposes.

State laws and common law across the country already recognize a “right of publicity,” which prohibits the unauthorized use of a person’s likeness for commercial purposes. Other state laws protect against false and harmful portrayals of people that would apply to deepfakes. But these highly developed state laws also allow for all sorts of free expression—including satire and parody—which the No AI FRAUD Act tramples. Just as there is no federal negligence cause of action because negligence has been sufficiently handled through common law and state statutes, it makes little sense to create a new federal right out of thin air where existing law can sufficiently handle any harms that arise from deepfakes.

In instances where common law protections are less firm, new rules should be narrowly tailored. For example, in the cases where deepfakes are used for fraudulent advertisement, a small amendment to the Lanham Act would suffice. In contrast, the No AI FRAUD Act attempts to drive a nail with a sledgehammer and, in the process, risks smashing the burgeoning AI industry and stifling creativity in expressive works.

Such harms have been widely noted by other critics. These critiques of the No AI FRAUD Act are compelling, and any one of them alone should be enough to make the bill a non-starter in Congress. But there’s an even more damning issue with the draft legislation that has not been sufficiently addressed: its focus on punishing developers of AI technology.

As drafted, the bill would make any entity that “distributes, transmits, or otherwise makes available” a tool that could be used to create a deepfake liable for up to a $50,000 fine for each distribution, transmission, or use of the tool. Meanwhile, a person or entity that actually uses the tool for malicious purposes, such as making a deepfake, is only liable for up to $5,000 in fines per violation. Although the bill includes some vague exceptions, imposing a ten-fold punishment on the developers of technology, as opposed to people who may use a tool for malicious purposes, sets a dangerous precedent.

Traditionally, and for good reason, federal law has placed most of the liability on the user who commits the nefarious act. Holding developers responsible for the potential misuse of their innovation is akin to blaming car manufacturers for speeding violations. Targeting developers in this way is certain to stifle innovation as developers become overly cautious and fearful of potential lawsuits, hindering the advancement of beneficial technologies. The responsibility should lie with users who misuse technology, not those who create it.

It is important to remember that deepfakes, while notorious for their misuse, also have positive applications. Filmmakers, music producers, journalists, educators, advertisers, and artists of all types are just beginning to explore the innovative ways AI can benefit their fields. From Grimes allowing fans to deepfake her music to visual effects artists de-aging famous actors, creative industries in particular are discovering the artistic and commercial potential of digital cloning. Beyond the arts, another interesting example comes from the medical field, where medical schools are experimenting with digital replicas that mimic human emotion and expression to help train new doctors to empathize with patients. A law that indiscriminately targets technology rather than its wrongful use could smother the exploration of these applications.

Furthermore, the definitions provided in the bill are so broad as to cover practically any service that could be used to create “digital voice replicas or digital depictions of particular, identified individuals.” Broad definitions, taken with the lack of differentiation between malicious intent and legitimate uses of digital replication technologies, risk further curtailment of innovation around beneficial use cases. Creating a new federal right for publicity in this way has the potential to set back industries and fields that could benefit from digital innovation, just as the tech sector is beginning to rebound.

To be clear: no one should have their likeness stolen to sell snake oil or, far worse, to be exploited for sexual purposes. But laws that protect against unauthorized commercial uses of a person’s likeness already exist, and they already protect victims of deepfakes. Targeting the developers of technology rather than the individuals using these tools for malicious purposes will stifle innovation and creativity. Policymakers who think it’s necessary to pass a new federal law to protect Taylor Swift from being deepfaked should first take a closer look at the laws that currently exist.
