TL;DR:
- Computer engineers and political analysts have warned about the dangers of AI-generated deepfakes in elections.
- Generative AI tools have advanced significantly, creating realistic content that can deceive voters.
- Misleading campaign tactics using AI-generated content could reach unprecedented levels in the 2024 campaigns.
- Deepfakes can be used to impersonate candidates, spread disinformation, and incite violence.
- Legislation has been proposed to label AI-generated campaign ads and synthetic images.
- Some states are proposing their own solutions to address the concerns about deepfakes.
- Political consultants and trade associations denounce deepfakes as deceptive and unethical.
- AI has already been integrated into political campaigning for tasks like social media targeting and donor tracking.
- Campaign strategists see potential benefits in AI tools like ChatGPT and Quiller for campaign activities.
- Ensuring transparency, accountability, and responsible use of AI is crucial to safeguard democracy.
Main AI News:
Computer engineers and tech-savvy political analysts have long issued warnings about the impending danger of cheap and powerful artificial intelligence (AI) tools. These tools have the potential to create deceptive images, videos, and audio that are so realistic they can deceive voters and potentially influence elections.
Until recently, the synthetic content that emerged was often crude, unconvincing, and expensive to produce, especially when compared to the low cost and ease of spreading other forms of misinformation on social media. The looming threat of AI-generated deepfakes seemed to be a year or two away.
However, today’s sophisticated generative AI tools have taken a monumental leap forward. They can now produce cloned human voices, hyper-realistic images, videos, and audio within seconds and at minimal cost. When combined with powerful social media algorithms, this synthetic and digitally created content can spread rapidly, targeting specific audiences with unprecedented precision. Consequently, the use of such content for deceptive campaign tactics may reach an all-time high.
The implications for the upcoming 2024 campaigns and elections are both extensive and deeply concerning. Generative AI not only enables the rapid production of targeted campaign emails, texts, or videos, but it also holds the potential to mislead voters, impersonate candidates, and undermine elections on an unprecedented scale and speed.
A.J. Nash, the vice president of intelligence at the cybersecurity firm ZeroFox, bluntly stated, “We’re not prepared for this.” Nash points to the major leap forward in audio and video capabilities, emphasizing the profound impact they will have when deployed at scale across social platforms.
AI experts can readily envision several alarming scenarios in which generative AI is harnessed to create synthetic media for the explicit purpose of confusing voters, defaming candidates, or inciting violence. For instance, automated robocall messages could impersonate a candidate’s voice, instructing voters to cast their ballots on the wrong date.
Audio recordings could surface, allegedly featuring a candidate confessing to a crime or expressing racist views. Additionally, manipulated video footage could falsely portray an individual delivering a speech or participating in an interview they never actually took part in. The creation of fake images resembling local news reports could falsely claim a candidate’s withdrawal from the race.
Oren Etzioni, the founding CEO of the nonprofit Allen Institute for AI (AI2), poses a thought-provoking question: consider the potential impact if someone who sounded exactly like Elon Musk were to personally call you, urging you to vote for a particular candidate. Many people would be inclined to heed Musk’s advice, Etzioni notes, but it would not truly be him.
Even former President Donald Trump, a candidate for the 2024 election, has shared AI-generated content with his social media followers. Recently, he posted a manipulated video on his Truth Social platform featuring CNN host Anderson Cooper. The video distorted Cooper’s reaction to a CNN town hall event, employing an AI voice-cloning tool.
In a dystopian campaign ad unveiled last month by the Republican National Committee (RNC), we are provided with a chilling glimpse into a digitally manipulated future. Released following President Joe Biden’s announcement of his reelection campaign, the online ad begins with a peculiar, slightly distorted image of Biden accompanied by the text: “What if the weakest president we’ve ever had was re-elected?”
What follows are a series of AI-generated images that paint a grim picture: Taiwan under attack, boarded-up storefronts across the United States as the economy crumbles, and soldiers with armored vehicles patrolling local streets while panic ensues amidst tattooed criminals and waves of immigrants.
Described by the RNC as “An AI-generated look into the country’s possible future if Joe Biden is re-elected in 2024,” this ad serves as a stark reminder of the potential power of AI in shaping public opinion.
While the RNC openly acknowledged its use of AI in this instance, Petko Stoyanov, the global chief technology officer at Forcepoint, a cybersecurity company based in Austin, Texas, warns that nefarious political campaigns and foreign adversaries may deploy AI and synthetic media without any disclosure. Stoyanov predicts that groups seeking to undermine U.S. democracy will exploit these technologies to erode public trust. What happens, he asks, and what recourse do we have, if an international entity, whether a cybercriminal or a nation-state, impersonates someone for its own agenda?
As we approach the 2024 election, AI-generated political disinformation has already begun to spread virally online. Examples include a doctored video of Biden appearing to deliver a speech attacking transgender individuals and AI-generated images depicting children allegedly learning about Satanism in libraries.
Even images purporting to show former President Donald Trump’s mug shot fooled some social media users, despite the fact that he never had one taken during his booking and arraignment in a Manhattan criminal court for falsifying business records. Additionally, AI-generated images portrayed Trump resisting arrest, although their creator promptly acknowledged their artificial origin.
In response to these emerging challenges, Rep. Yvette Clarke, a Democrat from New York, has introduced legislation in the House that would require candidates to label campaign advertisements created with AI. She has also sponsored a separate bill that would mandate adding a watermark to any synthetic images to indicate their artificial nature.
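To make the watermarking idea concrete, here is a minimal sketch in Python using the Pillow imaging library. The label text, file names, and placement are illustrative assumptions, not details from the proposed bill; a real scheme would likely pair a visible mark with tamper-resistant provenance metadata.

```python
from PIL import Image, ImageDraw

def label_ai_generated(src_path: str, dst_path: str,
                       label: str = "AI-GENERATED") -> None:
    """Stamp a visible disclosure label onto the bottom-left of an image."""
    img = Image.open(src_path).convert("RGB")
    draw = ImageDraw.Draw(img)
    x, y = 10, img.height - 30            # corner position for the label box
    draw.rectangle([(x, y), (x + 160, y + 20)], fill="black")
    draw.text((x + 5, y + 5), label, fill="white")  # default bitmap font
    img.save(dst_path)

# Hypothetical usage:
# label_ai_generated("campaign_ad.png", "campaign_ad_labeled.png")
```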
Recognizing the potential dangers posed by deepfakes, some states have also begun proposing their own solutions. Clarke, who has been at the forefront of addressing these concerns, says her greatest fear is the use of generative AI to create videos or audio that incite violence and further divide Americans.
To tackle this issue, Rep. Clarke emphasizes the need to keep pace with advancing technology and establish appropriate safeguards. She highlights the risk of deception, with people often lacking the time to thoroughly fact-check every piece of information they encounter. The weaponization of AI during a political season has the potential to cause significant disruption.
Recently, a trade association for political consultants in Washington condemned the use of deepfakes in political advertising, denouncing them as deceptive and unequivocally stating that they have no place in legitimate, ethical campaigns.
While deepfakes raise concerns, other forms of artificial intelligence have long been integrated into political campaigning. Data and algorithms have been utilized to automate tasks such as social media voter targeting and donor tracking. Campaign strategists and tech entrepreneurs anticipate that the latest innovations in AI will also bring positive contributions to the 2024 campaigns.
Mike Nellis, CEO of the progressive digital agency Authentic, attests to the daily use of ChatGPT and encourages his staff to leverage its capabilities as long as any content generated by the tool undergoes human review. Nellis is currently engaged in a new project in collaboration with Higher Ground Labs, which involves an AI tool named Quiller.
This tool streamlines the typically tedious tasks of writing, sending, and evaluating the effectiveness of fundraising emails. Nellis envisions a future where every Democratic strategist and candidate will have a virtual copilot in their pocket, enhancing their campaign efforts.
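As a rough illustration of the human-in-the-loop workflow Nellis describes, the sketch below drafts a fundraising email with a large language model and gates it behind explicit human approval. The model name, prompts, and function names are assumptions for illustration, not how Quiller actually works.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def draft_fundraising_email(candidate: str, topic: str) -> str:
    """Ask the model for a first draft of a fundraising email."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical model choice
        messages=[
            {"role": "system",
             "content": "You draft concise, factual fundraising emails."},
            {"role": "user",
             "content": f"Draft a fundraising email for {candidate} "
                        f"about {topic}."},
        ],
    )
    return response.choices[0].message.content

def review_and_send(draft: str) -> None:
    """Require explicit human approval before any AI draft goes out."""
    print(draft)
    if input("Approve and send? [y/N] ").strip().lower() == "y":
        print("Sending...")  # a real tool would hand off to an email service
    else:
        print("Draft rejected; nothing sent.")

# review_and_send(draft_fundraising_email("Jane Doe", "education funding"))
```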
Conclusion:
The rapid advancement of generative AI tools and the emerging threat of AI-generated deepfakes in the political landscape have significant implications for the market. As campaigns and elections become increasingly influenced by synthetic media, businesses operating in the digital advertising and cybersecurity sectors will see growing demand for innovative solutions. The need to combat misinformation, protect the integrity of democratic processes, and restore public trust will drive market opportunities for technologies that can detect, verify, and mitigate the impact of AI-generated content.
Furthermore, as the use of AI expands beyond political campaigns, businesses that can harness AI in an ethical and responsible manner, providing transparency and accountability, will gain a competitive advantage. The market will witness a surge in demand for AI tools that can enhance authenticity, verify content integrity, and empower users to make informed decisions in an era of sophisticated synthetic media.