The Dangers of AI-Generated Disinformation: How Authoritarian Regimes Are Using Technology to Mislead the Public

TL;DR:

  • AI-generated avatars are used to spread disinformation by regimes like Venezuela and China.
  • Synthesia’s software allows users to generate videos in multiple languages, sync them with avatars and include them in various media.
  • The Venezuelan regime used AI-generated ‘journalists’ to push unfounded narratives favorable to the government.
  • China also uses AI-generated avatars to promote CCP interests through Wolf News.
  • AI-generated fake news highlights the need for ethics codes and regulations in technology use.
  • Misinformation spread through AI-generated fake news can erode public trust.
  • Vigilance is required to ensure technology is used responsibly for society’s benefit.

Main AI News:

The use of artificial intelligence (AI) to create digital newscasters that spread disinformation is becoming increasingly prevalent, as evidenced by the Venezuelan regime’s use of such technology. According to a recent report in El País, the regime has created two ‘journalists’ called Daren and Noah using Synthesia’s software, which allows users to generate videos that can be synced with multiracial avatars and presented in a variety of languages.

The use of such technology highlights how AI can be used to further a particular narrative and distort reality for political purposes. Regimes like Venezuela and China use fake avatars, photographs, and videos to sow distrust of institutions, people, and even nations that represent competition to the regime, as Victor Ruiz, founder of the Mexico-based cybersecurity center SILIKN, pointed out in a recent interview with Diálogo. This is a way for these regimes to retain power over people.

Synthesia’s software can generate videos in more than 100 languages, syncing the audio with the avatars’ lip movements and combining it with images, soundtracks, and video footage. What’s more, it costs only around $30 a month, and no video-production expertise is required to operate it. According to Synthesia’s terms of service, however, the platform may be used only to create training, tutorial, and marketing videos; political and religious content is restricted. The company also stresses that stock avatars are not to be used in user-generated content for TV broadcasting.

Despite these restrictions, the Venezuelan regime has used Daren and Noah’s videos to push narratives favorable to the regime, garnering hundreds of thousands of views on YouTube and going viral on TikTok. State-owned Venezolana de Televisión has also broadcast these videos, which favor Venezuela unilaterally and without factual basis. Héctor Mazarri, a collaborator of the Venezuelan NGO Cazadores de Fake News, said that after studying these videos, his team detected “an organized attempt to push narratives favorable to the [Venezuelan] regime.”

This abuse of power and disinformation can make anyone a victim, as Ruiz warns. It highlights the need for more responsible use of technology and stricter regulations to prevent the spread of fake news and disinformation. As the use of AI in digital media continues to grow, it is essential that we remain vigilant and ensure that it is used for the benefit of all rather than just for the benefit of those in power.

The rise of fake news and disinformation has become a significant problem, with authoritarian regimes such as Venezuela and China using artificial intelligence (AI) to further their political goals. A prime example of this is the use of AI-generated avatars to create fake news broadcasts that promote a positive view of their countries, despite economic and social reports suggesting otherwise.

In Venezuela, the House of News broadcast presented a rosy picture of Caracas, showing Venezuelan beaches full of tourists and travel agencies fully booked for the carnival season. The goal was to cast doubt on reports that over 90% of Venezuela’s population lives below the poverty line. Such content breeds distrust and uncertainty among the public, which could even fuel unrest within the country itself.

Similarly, China has been using AI-generated avatars to promote the interests of the Chinese Communist Party (CCP) through Wolf News, where fake anchorwomen tout China’s role in geopolitical relations at international summits. Graphika, a US firm that studies disinformation, identified a campaign promoting pro-China avatar videos in 2022, noting that publicly available AI products enable influence-operation actors to create misleading content at greater scale and speed.

The use of AI to create fake news and disinformation highlights the need for organizations and governments to adopt codes of ethics for their operations, and it is essential to incorporate regulations on the use of AI to curb the spread of misinformation. The CCP’s official media outlet, People’s Daily, has unveiled an AI news anchor that can only read from a dictated script following the CCP’s editorial and official line. Codes of ethics could curb some of this misinformation, which sows confusion and undermines public trust in media. It is essential to remain vigilant and ensure that technology is used responsibly to benefit society as a whole.

Conclusion:

The use of AI-generated fake news and disinformation by authoritarian regimes like Venezuela and China highlights the potential harm that can be caused by the misuse of technology. This not only poses a threat to public trust in media but also to the stability of markets. As businesses increasingly rely on digital media to engage with customers, it is essential that they remain vigilant and ensure that their content is factual and trustworthy.

Furthermore, the rise of AI-generated fake news and disinformation emphasizes the need for stricter regulations and codes of ethics to prevent the spread of misinformation and protect the interests of consumers and society as a whole.

Source