- FCC proposes new regulations requiring disclosure of AI use in calls and texts.
- It aims to help consumers identify and avoid AI-generated scams.
- The proposal follows a ban on AI-generated voices in robocalls.
- FCC seeks feedback on defining AI-generated calls and consumer alerts.
- The FCC is also weighing safeguards to ensure AI can assist people with disabilities in phone communication.
- Industry leaders warn that transparency alone may not deter fraudsters.
- Industry calls for more robust FCC guidance and proactive measures from telecom providers.
Main AI News:
The Federal Communications Commission (FCC) has introduced proposed regulations to enhance transparency in AI-powered communications. According to the recent Notice of Proposed Rulemaking (FCC 24-84), the FCC is considering new rules requiring callers to disclose the use of AI in both calls and texts. The initiative is part of the FCC’s broader strategy to safeguard consumers from fraud, with the agency asserting that such disclosures would help consumers better identify and avoid communications with a higher risk of scams.
This proposal follows the FCC’s earlier decision to ban AI-generated voices in robocalls after a high-profile incident involving a fake President Biden robocall directed at New Hampshire voters. The FCC is now extending its focus to encompass a broader range of AI applications in telecommunications.
To implement these new rules, the FCC is working to define what constitutes an AI-generated call. The notice also invites feedback from stakeholders on the proposed regulations and seeks additional input on methods to alert consumers about unwanted and potentially illegal AI-generated communications.
In addition to targeting potential scams, the FCC is also considering safeguards to ensure AI can be leveraged to assist people with disabilities in phone communication. However, industry leaders caution that transparency alone may not be enough to deter fraudsters. Kush Parikh, president of security solutions provider Hiya, warned that scammers will likely continue exploiting the technology and called for more robust FCC guidance on blocking AI-generated deepfakes in real time and alerting consumers to such threats. While welcoming the FCC’s efforts, he stressed that these protections should be mandated, arguing that telecom providers must be proactive in deploying advanced technology to counter deepfakes and protect consumers.
Conclusion:
The FCC’s proposed regulations signal growing concern about the role of AI in telecommunications, particularly its potential use in fraud. For the market, this could mean increased scrutiny and potential compliance costs for telecom providers. The need for advanced technology to detect and block AI-generated scams could spur innovation, but it will also pose challenges for effective implementation. Companies in the telecommunications sector may need to invest in AI detection tools and collaborate closely with regulators to meet these new standards, which could affect their operations and competitive dynamics.