TL;DR:
- Disinformation has surged in California politics since 2020, with generative AI exacerbating the problem.
- Concerns about fake audio recordings and fabricated videos impacting elections.
- A Berkeley IGS poll shows 84% of California voters are worried about disinformation, deepfakes, and AI.
- Nearly three in four believe the state government should take action.
- 78% hold social media companies responsible for disinformation but doubt their ability to solve the problem.
- 87% support increased transparency and accountability for deepfakes and algorithms.
- 2024 is expected to be the nation’s first full-scale AI election.
- California Institute for Technology and Democracy (CITED) was formed to tackle these threats with diverse expertise.
- CITED’s advisors include former legislators, social media executives, and cybersecurity experts.
- CITED to pursue a legislative strategy in 2024, playing a critical role in safeguarding elections.
Main AI News:
In recent years, California and the entire nation have witnessed a disturbing surge in disinformation that has cast a dark shadow over our political landscape. At the heart of this disconcerting trend lies generative artificial intelligence, a technology capable of making the problem dramatically worse. With just one year remaining until another pivotal presidential election, the urgency of addressing this issue cannot be overstated.
Current AI tools have made it shockingly simple to create fabricated audio recordings portraying figures like Joe Biden saying virtually anything one desires. Consider the ramifications of a robocall from “Joe Biden” spreading false information about polling place changes to hundreds of thousands of voters on the eve of an election. Or envision a conspiracy theorist fabricating a video of an elections official supposedly “caught on tape” admitting that voting machines are susceptible to hacking, then sharing it on a counterfeit news site disguised as a local newspaper. Every one of these scenarios is now distressingly feasible.
Yet, as it stands, there is no direct recourse to counter the pernicious effects of these digital threats on our electoral process. Californians are now demanding more from their elected representatives, a sentiment that resoundingly echoes in the findings of a recent Berkeley IGS poll.
A striking 84% of California voters, spanning all demographics and political affiliations, express deep concern about the threat that disinformation, deepfakes, and AI pose to our democracy. Nearly three out of every four voters believe that the state government bears a responsibility to take decisive action.
Furthermore, the poll reveals that 78% of voters hold social media companies accountable for the spread of disinformation, even as they doubt those companies’ ability to solve the problem. An overwhelming 87% of voters endorse increased transparency and accountability concerning deepfakes and algorithms.
These statistics leave no room for doubt: consensus is a rare commodity, yet nearly all Californians agree on the imperative of blunting the potentially calamitous impact of AI and disinformation on our elections.
The 2024 election looms as the nation’s first full-scale AI election, ushering in an era in which AI-generated deepfakes become a routine fixture of our information ecosystems. The early signs of these threats are already discernible. Without immediate action, voters will be left unable to tell which images, audio, and video they can trust.
Addressing a challenge of such magnitude necessitates a multifaceted approach, leveraging diverse expertise and concurrent strategies to effect tangible change. Regrettably, the federal government is ill-prepared to enact the swift and essential measures required. Furthermore, Sacramento lacks an impartial, nonpartisan entity equipped to spearhead these efforts.
This is precisely why entities like the California Institute for Technology and Democracy (CITED) have emerged. It is fitting that solutions to these existential threats are incubating in the heart of America’s technology hub – California.
CITED has assembled a coalition of thought leaders encompassing technology, law, public policy, civil rights, civic engagement, and academia. Together, they are committed to driving pragmatic, high-impact, state-level solutions aimed at safeguarding our democracy in the modern age. Notably, the group’s advisors include former Democratic and Republican legislators, former executives from major social media platforms dedicated to civic trust and integrity, as well as leading academics specializing in cybersecurity and digital threats, among other luminaries.
In 2024, CITED will pursue a legislative strategy and engage actively during the election cycle to combat the deceptive use of AI and deepfakes, an endeavor as crucial as policy formulation itself.
Conclusion:
The increasing concern among Californians regarding disinformation and AI’s impact on elections presents a substantial market opportunity for organizations like the California Institute for Technology and Democracy (CITED). As awareness grows, demand for innovative solutions and expertise in countering these threats will likely drive the growth of specialized services and technologies aimed at securing democratic processes in the modern age.