By Douglas Mirell, Partner at Greenberg Glusker

It is an extraordinarily rare day when Democrats and Republicans in the United States Senate can agree upon anything. But July 31, 2024, was just such a day.

On that date, two Democrats (Chris Coons, Delaware and Amy Klobuchar, Minnesota) joined two Republicans (Marsha Blackburn, Tennessee and Thom Tillis, North Carolina) to introduce the Nurture Originals, Foster Art, and Keep Entertainment Safe Act of 2024 (known colloquially as the “NO FAKES Act”)—a measure designed to protect the voice and visual likeness of all individuals from unauthorized AI-generated recreations.

Beginning in October 2023, these four Senators and their staffs spent countless hours working with numerous stakeholders to craft legislation that seeks to prevent individuals and companies from producing unauthorized digital replicas, while simultaneously holding social media and other sites liable for hosting such replicas if those platforms fail to remove or disable that content in a timely manner.

Nearly as unprecedented as the bipartisan co-authorship of this bill was the coalition of frequently adversarial organizations that endorsed the NO FAKES Act. Among these groups were not only the Screen Actors Guild-American Federation of Television and Radio Artists (SAG-AFTRA), the Recording Industry Association of America, and the Recording Academy, but also the Motion Picture Association, Warner Music Group, and Universal Music Group. Other endorsers included OpenAI, IBM, William Morris Endeavor, and Creative Artists Agency.

Key to the formation of this coalition was a set of exclusions that seek to safeguard First Amendment interests by, for example, exempting digital replicas that are “produced or used consistent with the public interest in bona fide commentary, criticism, scholarship, satire, or parody.”

Also exempted are “fleeting or negligible” usages, as well as a Forrest Gump-inspired exception that applies to documentary-style usages so long as the use of the digital replica does not create a “false impression that the work is ... [one] in which the individual participated.” In addition, advertisements and commercial announcements for such excluded uses are permissible, but only if the digital replica is “relevant to the subject of the work so advertised or announced.”

Notwithstanding the exceptional importance, and attendant dangers, of artificial intelligence to the entertainment community, the NO FAKES Act is not limited to protecting the rights of actors and recording artists. So long as what is being exploited is “the voice or visual likeness of an individual”—whether living or dead—the NO FAKES Act provides protection against unauthorized AI exploitation for all human beings, including minors.

Liability extends to two general categories of misconduct:

  1. The unauthorized production of a digital replica; and
  2. The unauthorized publication, reproduction, distribution, and transmission of a digital replica.

In the event of a violation, a civil lawsuit against individuals and online services can result in the imposition of damages of $5,000 per work where the violator is an online service, or $25,000 per work in the case of an entity other than an online service.

Alternatively, actual damages are available to the injured party, “plus any profits from the unauthorized use that are attributable to such use and are not taken into account in computing the actual damages.” Injunctive and other equitable relief is also available, as is an award of reasonable attorney’s fees to a prevailing plaintiff in the event of willful misconduct.

Finally, among the most significant aspects of the NO FAKES Act is its declaration that the rights it defines constitute “a law pertaining to intellectual property.” The ramifications of this characterization are profound. For nearly 30 years, Section 230 of the Communications Decency Act has effectively given blanket immunity to platforms that host user-generated content. This means that websites and social media platforms have no legal obligation to respond to demands that defamatory and other offensive content created by such users be removed or disabled. But, under that same Section 230, this immunity dissolves in the face of any intellectual property law.

The trade-off for this major concession is that platforms can still receive protection for the third-party content they host if they abide by a notice-and-takedown regime akin to that which has worked reasonably well with copyright infringement claims brought under the Digital Millennium Copyright Act.

While it is perhaps an understatement to suggest that the U.S. Congress does not typically move with alacrity when confronted with legislation addressing new technologies, the bipartisan recognition of the manifest threat posed by generative artificial intelligence may yet prove to be a notable exception to this traditional paradigm.

Douglas Mirell has consulted with SAG-AFTRA on “deepfake” and other artificial intelligence issues affecting the entertainment community. Most recently, Mr. Mirell strategized with SAG-AFTRA about the conversations it was having with various stakeholders beginning in October 2023 that led to the introduction of the NO FAKES Act.