Regulatory Response to Deepfake Threat: FTC Seeks Amendments
  • FTC endeavors to expand existing rules to combat deepfakes, covering all consumers.
  • Proposed changes could make it unlawful for GenAI platforms to offer products or services they know could be used to harm consumers through impersonation.
  • Concerns rise over deepfake-enabled online scams and fraudulent activities targeting both individuals and corporations.
  • Public apprehension is evident, with surveys highlighting widespread worries about deceptive deepfake content.
  • FCC joins regulatory efforts by outlawing AI-generated robocalls, aligning with FTC’s proactive stance against deepfakes.
  • Despite the lack of federal legislation, states enact laws targeting deepfakes, which are likely to evolve as technology advances.

Main AI News:

As the threat of deepfakes continues to escalate, the Federal Trade Commission (FTC) is pushing for amendments to an existing rule that prohibits the impersonation of businesses or governmental bodies, with the aim of extending its coverage to all consumers.

The proposed alteration to the rule, contingent upon its final wording and the feedback garnered from the public, may also render it unlawful for GenAI platforms to offer products or services they are aware could be utilized to harm consumers through impersonation.

In a statement released to the press, FTC chair Lina Khan emphasized the urgency of the matter, stating, “Fraudsters are leveraging AI tools to mimic individuals with unsettling accuracy and on a significantly broader scale. With the surge of voice cloning and other AI-driven fraudulent activities, safeguarding Americans against impersonator fraud has become paramount.”

The scope of concern extends beyond public figures like Taylor Swift, with online romance scams utilizing deepfakes on the rise, alongside instances of scammers posing as employees to extort funds from corporations.

According to a recent YouGov poll, 85% of Americans express either significant or moderate apprehension regarding the proliferation of deceptive video and audio deepfakes. Similarly, a survey conducted by The Associated Press-NORC Center for Public Affairs Research indicates that nearly 60% of adults anticipate AI tools contributing to the dissemination of false and misleading information during the forthcoming 2024 U.S. election cycle.

The FTC’s initiative aligns with the Federal Communications Commission’s recent decision to outlaw AI-generated robocalls, coinciding with reports of a phone campaign in New Hampshire featuring a deepfake of President Biden aimed at discouraging voter turnout. These regulatory adjustments, coupled with the FTC’s proactive stance, currently represent the primary federal countermeasures against deepfakes and related technologies.

While no federal statute explicitly prohibits deepfakes, prominent targets such as celebrities theoretically have recourse to conventional legal avenues, including copyright statutes and privacy laws, though these remedies are often cumbersome and time-intensive to pursue.

In the absence of comprehensive federal legislation, ten states across the U.S. have implemented laws criminalizing deepfakes, primarily focusing on non-consensual pornography. However, with the evolution of deepfake technology, it’s likely these laws will undergo revisions to encompass a broader spectrum of deepfake applications, exemplified by Minnesota’s legislation already targeting deepfakes employed in political contexts.
The FTC’s proactive measures to expand regulatory oversight in response to the deepfake threat signal a growing awareness of the risks posed by AI-driven impersonation. This heightened scrutiny is likely to impact the market by necessitating increased diligence from GenAI platforms and potentially spurring further regulatory developments both at federal and state levels to address the evolving landscape of deepfake technology.