TL;DR:
- The No AI FRAUD Act, introduced in 2024, aims to protect individuals from unauthorized use of their voices and likenesses by generative AI platforms.
- It grants intellectual property rights in voice and likeness that are transferable and descendible, persisting for ten years after an individual’s death.
- Liability extends to those offering “personalized cloning services” and those knowingly publishing or distributing unauthorized digital replicas.
- A First Amendment defense requires courts to balance an individual’s IP interests against the public’s interest in access to the unauthorized use.
- Liability hinges on specific harms suffered by the individual whose likeness was misappropriated, including financial or physical harm, severe emotional distress, or potential deception.
- Support for the No AI FRAUD Act comes from the Recording Industry Association of America (RIAA) and the Human Artistry Campaign, which are advocating similar state-level protections in Tennessee.
Main AI News:
In an era dominated by artificial intelligence, the protection of one’s voice and likeness has become a paramount concern for individuals. The No Artificial Intelligence Fake Replicas And Unauthorized Duplications (No AI FRAUD) Act of 2024, introduced by U.S. Representatives María Elvira Salazar (R-FL) and Madeleine Dean (D-PA), aims to establish robust legal mechanisms to empower Americans against unauthorized uses of their voices and likenesses by generative AI platforms.
Recognizing the need for intellectual property (IP) rights in an individual’s voice and likeness, the No AI FRAUD Act endeavors to provide comprehensive remedies, including statutory damages and disgorged profits. It addresses the alarming rise of AI-generated recordings and images that have caused significant harm to individuals whose likenesses were misappropriated, citing recent incidents involving public figures like Tom Hanks and high school students.
The cornerstone of the No AI FRAUD Act is its recognition that every individual possesses a property right in their own likeness and voice, akin to other forms of IP rights. These rights are not only transferable but also descendible, ensuring their continuity for a decade after the individual’s passing. Remarkably, these IP rights need not be commercially exploited during the individual’s lifetime to endure beyond their death. However, if no exploitation occurs for two consecutive years or if all devisees and heirs pass away, the IP rights would cease.
The No AI FRAUD Act casts a wide net of liability for unauthorized AI-generated reproductions. It extends culpability to anyone offering a “personalized cloning service,” defined as any technology with the primary purpose of creating digital voice replicas or depictions of specific individuals. Liability also encompasses those who knowingly publish or distribute unauthorized digital voice replicas.
In anticipation of First Amendment challenges, the No AI FRAUD Act recognizes the public interest in access to unauthorized uses of likenesses and voices. It directs courts to balance the individual’s IP interests against the primary expressive purpose of the work, weighing factors such as whether the use was commercial and how relevant the unauthorized replica is to that expressive purpose.
Importantly, liability under the No AI FRAUD Act is contingent on the affected individual suffering specific, more-than-negligible harm. Qualifying harms include financial or physical risk, severe emotional distress, or the potential to deceive the public or a court. Notably, sexually explicit or intimate digital depictions and voice replicas are automatically deemed harmful under the bill, leaving no room for ambiguity.
The No AI FRAUD Act has garnered support from key industry players, including the Recording Industry Association of America (RIAA) and the Human Artistry Campaign. The latter also champions similar protections against AI-powered misappropriations at the state level in Tennessee.
Conclusion:
The No AI FRAUD Act introduces crucial safeguards for individuals’ intellectual property rights in the face of AI advancements. It establishes a legal framework that promotes accountability for unauthorized use of voices and likenesses, giving creators and the public greater confidence in AI-driven technologies and content creation.