FCC Takes Stricter Stance Against AI-Generated Robocalls

TL;DR:

  • FCC is taking steps to outlaw unsolicited robocalls that use AI-generated voices.
  • A recent incident involving a fake Biden message prompted the action.
  • The proposed change to the TCPA will target automated calls without recipient consent.
  • Previous cases show FCC’s commitment to penalizing illegal robocallers.
  • FCC’s five-member commission is expected to vote on the change soon.
  • State attorneys general to gain more authority to combat AI-powered spammers.
  • FCC Chairwoman warns about the potential for AI-generated voice scams.
  • AARP supports FCC’s move, highlighting the vulnerability of seniors to such robocalls.

Main AI News:

The Federal Communications Commission (FCC) has announced its intention to outlaw most AI-generated robocalls. The agency’s decision comes in the wake of a disturbing incident in which AI was used to mimic the voice of President Joe Biden, urging New Hampshire residents not to participate in the state’s primary election. The proposal would prohibit such unsolicited robocalls under the Telephone Consumer Protection Act (TCPA), legislation dating back to 1991 that governs automated political and marketing calls made without the recipient’s consent.

The TCPA has been used in several high-profile cases to prosecute those responsible for illegal robocalls. In a notable example from last year, the FCC imposed a $5 million penalty on conservative activists who orchestrated calls during the 2020 election cycle to mislead Black voters, falsely claiming that voting could lead to debt collection and police involvement. In another case, a $300 million fine was levied against a company that inundated phones with auto-warranty advertisements.

The FCC, which comprises five commissioners, is expected to vote on and approve the regulatory changes in the coming weeks, an agency spokesperson confirmed. Notably, the policy change will bolster the authority of state attorneys general to pursue legal action against individuals or entities deploying AI in robocall schemes. New Hampshire’s attorney general’s office has already opened an investigation into the fraudulent Biden call.

FCC Chairwoman Jessica Rosenworcel emphasized the urgency of addressing the potential of AI-generated voice cloning and imagery to confuse and deceive. She stated, “AI-generated voice cloning and images are already sowing confusion by tricking consumers into thinking scams and frauds are legitimate. No matter what celebrity or politician you favor, or what your relationship is with your kin when they call for help, it is possible we could all be a target of these faked calls.”

Kathy Stokes, director of fraud prevention programs at AARP (formerly the American Association of Retired Persons), commended the FCC’s decisive action. Stokes pointed out that AI-enabled robocalls often prey on vulnerable senior citizens, and argued that fraud should be treated as a serious crime rather than blamed on its victims. She stressed, “We’ve deprioritized fraud as a crime in this country, which comes from us immediately having a knee-jerk reaction of blaming the victim for not knowing something. We cannot educate our way out of this.”

Conclusion:

The FCC’s decision to crack down on AI-generated robocalls marks a significant step in protecting consumers from deceptive practices. This regulatory change, combined with the agency’s history of imposing substantial penalties, signals a strong commitment to curbing illegal robocalls. State attorneys general will gain enhanced authority to pursue AI-powered spammers, enabling stricter enforcement. The move reflects growing concern over AI-generated deception, particularly schemes targeting vulnerable groups such as senior citizens.