AI-Generated Deepfakes and the 2024 US Election: Lawmakers Demand Accountability from Tech Giants
TL;DR:
- AI-generated deepfakes are proliferating, with recent examples involving celebrities and politicians.
- Google announced plans to label deceptive AI-generated political ads, prompting US lawmakers to question social media platforms like X, Facebook, and Instagram.
- Two Democratic lawmakers, Sen. Klobuchar and Rep. Clarke, expressed concerns about the potential impact of AI-generated political ads on the 2024 elections.
- They called for transparency from Meta (Facebook’s parent company) and X (formerly Twitter) regarding rules to combat election-related misinformation.
- Google will require disclaimers on AI-generated election ads, while Meta has a policy against “faked, manipulated, or transformed” content but lacks a specific rule for AI-generated political ads.
- A bipartisan Senate bill aims to ban “materially deceptive” deepfakes related to federal candidates.
- Sen. Klobuchar emphasized the importance of addressing this issue before the election.
- The Federal Election Commission is considering rules for AI-generated deepfakes in political ads, with a public comment period ending on October 16.
Main AI News:
Deepfakes, the product of artificial intelligence’s creative prowess, have taken the world by storm, leaving us in awe as celebrities don unfamiliar roles. From Tom Hanks pitching dental plans to Pope Francis sporting a trendy puffer jacket and even US Sen. Rand Paul lounging on the Capitol steps in a crimson bathrobe – these AI-generated marvels have captured our imagination.
But as we approach the US presidential election in 2024, the question looms large: What lies ahead?
Google, the tech giant, took the initiative by committing to label deceptive AI-generated political advertisements that mimic a candidate’s voice or actions. Some US lawmakers, however, are now turning their scrutiny toward social media powerhouses X, Facebook, and Instagram, demanding to know why those platforms haven’t followed suit.
Two Democratic stalwarts, US Sen. Amy Klobuchar of Minnesota and US Rep. Yvette Clarke of New York, penned a letter to Meta CEO Mark Zuckerberg and X CEO Linda Yaccarino, expressing “serious concerns” about the surge in AI-generated political ads on these platforms and insisting on transparency regarding the rules designed to safeguard the integrity of free and fair elections.
“Given their status as two of the largest platforms, voters deserve to know the safeguards being put in place,” Sen. Klobuchar stated in an interview with The Associated Press. “We’re simply asking, ‘Can’t you do this? Why aren’t you doing this?’ It’s clearly technologically possible.”
Their letter forewarns, “With the 2024 elections fast approaching, a lack of transparency regarding this type of political ad content could unleash a dangerous torrent of election-related misinformation and disinformation on your platforms – where voters often turn to gather information about candidates and issues.”
At present, neither X (formerly Twitter) nor Meta (the parent company of Facebook and Instagram) has responded to these inquiries. Clarke and Klobuchar have given the companies until October 27 to respond.
This pressure on social media giants coincides with both lawmakers’ efforts to regulate AI-generated political ads. Clarke introduced a House bill earlier this year, aiming to amend federal election laws to mandate disclaimers on election advertisements featuring AI-generated images or videos.
Sen. Klobuchar, meanwhile, is sponsoring companion legislation in the Senate and hopes for its swift passage. In the interim, she believes, major tech platforms should proactively adopt these standards.
Google, for its part, has announced plans to require clear disclaimers on AI-generated election ads altering people or events on YouTube and other Google products, starting in mid-November. This policy will apply both in the US and in countries where Google verifies election ads. While Meta lacks a specific rule for AI-generated political ads, its policy restricts “faked, manipulated, or transformed” audio and imagery used for misinformation.
A bipartisan Senate bill, co-sponsored by Sen. Klobuchar and Republican Sen. Josh Hawley of Missouri, goes even further by proposing to ban “materially deceptive” deepfakes related to federal candidates, with exceptions for parody and satire.
AI-generated ads have already infiltrated the 2024 election, exemplified by an ad the Republican National Committee aired in April depicting a dystopian future if President Joe Biden is reelected. The ad used fake but convincing images of boarded-up storefronts, military patrols, and waves of immigrants, all designed to incite panic.
Sen. Klobuchar believes that such ads, along with other deceptive content, should be barred under the proposed rules. She underscores the importance of distinguishing truth from falsehood in the context of a presidential race.
In a recent hearing presided over by Sen. Klobuchar, the Senate Rules and Administration Committee discussed AI and its impact on future elections. Witnesses included Minnesota’s secretary of state, a civil rights advocate, and some skeptics. Among the skeptics, Ari Cohn of the think tank TechFreedom argued that the deepfakes seen so far have drawn substantial scrutiny and ridicule but haven’t significantly misled voters or influenced their decisions. Cohn questioned the necessity of new rules, asserting that even false speech enjoys First Amendment protection and that voters should be the arbiters of political truth and falsehood.
The Federal Election Commission has also taken steps towards potentially regulating AI-generated deepfakes in political ads, opening a public comment period on a petition submitted by the advocacy group Public Citizen, which calls for rules governing misleading images, videos, and audio clips. The public comment period for this petition ends on October 16.
Conclusion:
The scrutiny of AI-generated deepfakes and their potential impact on the 2024 US election highlights the growing need for regulation and transparency in the digital advertising landscape. Social media platforms and tech companies must proactively address this issue to ensure the integrity of future elections and the dissemination of accurate information. This evolving landscape presents opportunities for companies specializing in AI content verification and authentication to play a pivotal role in maintaining trust and credibility in the market.
Source