- Microsoft is urging Congress to regulate AI-generated deepfakes to address fraud, abuse, and manipulation.
- Brad Smith, Microsoft’s Vice Chair and President, highlights the need for a new “deepfake fraud statute” to empower law enforcement.
- Smith calls for federal and state laws on child exploitation and non-consensual intimate imagery to be updated to cover AI-generated content.
- A bill recently passed by the Senate would allow victims of sexually explicit deepfakes to seek damages from their creators.
- Microsoft has reinforced safety measures following misuse of its AI tools to create explicit images.
- The FCC has banned AI-generated voices in robocalls, but generative AI tools are still being used to produce fake audio, images, and video that shape public perception.
- Smith advocates for clear labeling of deepfakes to maintain trust in information.
Main AI News:
Microsoft is advocating for Congress to enact comprehensive regulations addressing the misuse of AI-generated deepfakes, emphasizing the need to counteract fraud, abuse, and manipulation. Brad Smith, Microsoft’s Vice Chair and President, has stressed the urgency for legislative action to safeguard elections, protect seniors from fraud, and shield children from exploitation.
In a recent blog post, Smith highlighted the inadequacies of current laws in tackling deepfake fraud. “While the tech sector and non-profit organizations have made strides, it’s clear our legal framework must evolve to address the challenges posed by deepfake technology,” Smith stated. He advocates for the introduction of a “deepfake fraud statute” to equip law enforcement with the necessary tools to prosecute AI-driven scams and fraud. Smith also calls for updates to federal and state laws related to child exploitation and non-consensual intimate imagery to encompass AI-generated content.
The Senate has already acted, passing legislation that would let victims of sexually explicit deepfakes seek damages from their creators. The move follows incidents in which explicit AI-generated images of female students and of celebrities such as Taylor Swift surfaced online.
Microsoft has strengthened its own safety measures after its Designer AI tool was misused to generate explicit images of celebrities. “The private sector must lead in developing and implementing safeguards to prevent AI misuse,” Smith commented.
Despite the FCC’s ban on AI-generated voices in robocalls, generative AI continues to make it easy to create fake audio, images, and video, and such content has already appeared in the 2024 presidential election. Smith has urged Congress to mandate clear labeling of deepfakes, stressing that providers of AI systems should use advanced tools to flag synthetic content. “This transparency is crucial for maintaining trust and ensuring the public can distinguish between genuine and manipulated media,” Smith concluded.
Conclusion:
Microsoft’s push for legislative action on AI-generated deepfakes underscores growing concern about the impact of advanced technologies on fraud and public trust. The call for a dedicated deepfake fraud statute reflects the need for updated legal frameworks to address emerging threats in digital media. As generative AI evolves, demand for clear regulatory guidelines and for transparency around synthetic content will likely shape the market and the development of AI technologies and safety measures.