TL;DR:
- Over 100 deepfake video ads impersonating Rishi Sunak appeared on Facebook in a month.
- Concerns raised over AI’s impact on the upcoming election.
- Ads appeared to violate Facebook policies yet may have reached up to 400,000 people.
- £12,929 was spent on 143 ads from 23 countries.
- One ad featured fabricated footage of a BBC newsreader reporting false claims.
- Researchers warn of the increasing quality of deepfake manipulation.
- The UK government and BBC emphasize the importance of trusted news sources.
- Regulators aim to address AI’s influence on elections.
- Meta claims to remove policy-violating content, with most problematic ads already disabled.
Main AI News:
Over 100 deepfake video advertisements featuring an impersonation of Rishi Sunak have surfaced on Facebook in the past month, raising significant concerns about the impact of AI on the upcoming general election. Despite appearing to violate several of Facebook’s policies, these ads may have reached up to 400,000 individuals, in what researchers describe as the first systematic, large-scale manipulation of the Prime Minister’s image.
A total of £12,929 was spent on 143 ads (an average of roughly £90 per ad), originating from 23 countries, including the United States, Turkey, Malaysia, and the Philippines. One particularly alarming deepfake features fabricated footage of BBC newsreader Sarah Campbell appearing to report on a false scandal in which Sunak secretly amassed enormous wealth from a project originally intended for ordinary citizens. The ad falsely claims that Elon Musk has launched an application capable of “collecting” stock market transactions, then shows a fabricated clip of Sunak announcing the government’s decision to test the application rather than risk the funds of the general public. These manipulated clips ultimately lead viewers to a spoofed BBC News page promoting a fraudulent investment opportunity.
This troubling revelation comes from research conducted by Fenimore Harper, a communications company founded by Marcus Beard, a former Downing Street official who previously led No 10’s efforts to combat conspiracy theories during the Covid crisis. Beard emphasized that these ads, representing a significant leap in deepfake quality, pose a grave risk to the integrity of this year’s elections.
“With the proliferation of inexpensive and user-friendly voice and face cloning technology, it requires minimal expertise to exploit someone’s likeness for malicious purposes,” Beard stated. He also highlighted the inadequacy of Facebook’s advertising policy enforcement, noting that many of the ads encountered during the research remained in circulation despite violating multiple policies.
In response, a spokesperson for the UK government affirmed its commitment to safeguarding the democratic process, citing the Defending Democracy Taskforce and specialized government teams actively addressing threats to democracy. The spokesperson also pointed to the Online Safety Act, which imposes new obligations on social platforms to promptly remove illegal misinformation, including AI-generated content, once identified.
The BBC likewise emphasized the importance of sourcing news from trusted outlets in an era of escalating disinformation. In 2023 it launched BBC Verify, a specialized team equipped with forensic and open-source intelligence (OSINT) capabilities to investigate, fact-check, and counter disinformation. The BBC aims to build trust with audiences by showing how its journalists verify information and by providing resources to swiftly identify fake and deepfake content.
Regulators have warned that time is running out to implement comprehensive changes to the electoral system in response to advances in artificial intelligence, particularly before the general election anticipated in November. Discussions between the government and regulatory bodies such as the Electoral Commission have centered on requirements introduced in 2022 mandating that digital campaign material carry an “imprint” indicating its source or sponsor, a step seen as a significant move toward transparency in political advertising.
A Meta spokesperson reiterated the company’s commitment to enforcement, stating that it removes content that violates its policies regardless of whether it is AI-generated or human-created. The spokesperson noted that the majority of the problematic ads had already been disabled before the report’s publication and that fewer than 0.5% of UK users saw any of the ads that did go live. Meta reaffirmed its ongoing efforts to enhance transparency in advertising related to social issues, elections, or politics.
The spread of deepfake ads across social media platforms underscores the urgent need for robust measures to safeguard the integrity of elections in an increasingly AI-driven world. As the technology continues to advance, ensuring transparency and accountability in political advertising remains a paramount concern.
Conclusion:
The proliferation of deepfake advertisements on social media, particularly those impersonating public figures, poses a growing threat to election integrity. As the quality of deepfake manipulation improves, rigorous measures are needed to ensure transparency and accountability in political advertising. The issue demands the attention of policymakers, tech companies, and the wider public to safeguard the democratic process and trust in information sources.