BBC Report: Tech Companies’ Use of AI in Danger of Concealing Evidence of Potential War Crimes

TL;DR:

  • Tech companies’ use of AI algorithms for content moderation may result in the deletion of crucial evidence of potential war crimes.
  • Platforms like Facebook, Instagram, and YouTube rely on AI to remove graphic videos, but the lack of nuanced understanding can lead to the removal of videos documenting war crimes.
  • Citizen journalists documenting Russian war crimes in Ukraine have faced difficulties as their videos are taken down by social media platforms.
  • By deleting these videos, social media firms deprive victims of war crimes of essential evidence.
  • Mnemonic, a Berlin-based company, has developed a tool that automatically saves deleted videos of war crimes, but its efforts cannot match the scale at which the platforms delete content.
  • Meta, the parent company of Facebook, aims to improve its content-moderation decision-making and to distinguish footage of potential war crimes from other graphic content.

Main AI News:

In a recent report by the BBC, concerns have been raised about the unintended consequences of tech companies’ use of Artificial Intelligence (AI) in moderating content. While platforms like Facebook, Instagram, and YouTube rely on AI algorithms to identify and remove graphic videos, there is a growing fear that this process might lead to the permanent erasure of crucial evidence related to potential war crimes.

The primary objective of these AI systems is to protect viewers from harmful and inappropriate content. However, their lack of nuanced understanding often results in the removal of videos documenting possible war crimes, because they cannot distinguish gratuitous violence from footage with evidentiary value.
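
To make this failure mode concrete, consider a purely illustrative sketch of a context-blind moderation rule: a single violence score is compared against a single threshold, with no signal for evidentiary or documentary value. The class, scores, and threshold below are hypothetical assumptions for illustration, not any platform’s actual pipeline.

```python
from dataclasses import dataclass

@dataclass
class Video:
    # Hypothetical classifier outputs; real platforms' signals are not public.
    violence_score: float     # 0.0-1.0: likelihood of graphic violence
    evidentiary_score: float  # 0.0-1.0: likelihood of documentary value

VIOLENCE_THRESHOLD = 0.8  # illustrative value only

def context_blind_decision(video: Video) -> str:
    # The failure mode described above: one signal, one threshold.
    return "remove" if video.violence_score > VIOLENCE_THRESHOLD else "keep"

def context_aware_decision(video: Video) -> str:
    # One possible fix: route likely documentation to human review
    # instead of deleting it outright.
    if video.violence_score > VIOLENCE_THRESHOLD:
        if video.evidentiary_score > 0.5:  # illustrative value only
            return "escalate_to_human_review"
        return "remove"
    return "keep"

# A war-crime recording scores high on both signals: the context-blind
# rule deletes it, while the context-aware rule preserves it for review.
evidence = Video(violence_score=0.95, evidentiary_score=0.9)
print(context_blind_decision(evidence))  # remove
print(context_aware_decision(evidence))  # escalate_to_human_review
```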

This issue has already hindered citizen journalists who have been documenting Russian war crimes in Ukraine. One such journalist, Ihor Zakharenko, shared his experience with the BBC. After capturing footage of 17 individuals murdered in a Kyiv suburb by Russian invasion forces, Zakharenko uploaded the videos to Facebook and Instagram, only to have them swiftly taken down.

To shed light on the severity of this problem, the BBC conducted an experiment involving the uploading of Zakharenko’s videos to YouTube and Instagram using dummy accounts. Shockingly, Instagram removed three out of the four videos within a minute, while YouTube followed suit within ten minutes.

Over the past decade, citizen journalism and social media platforms have played a crucial role in documenting war crimes and attacks in conflict zones such as Syria, Yemen, and Sudan. By removing these videos, social media firms are essentially depriving victims of war crimes of a vital weapon: undeniable evidence.

Imad, a former pharmacy owner in Aleppo whose business was destroyed by one of Assad’s barrel bombs in 2013, recounted to the BBC how the deletion of such videos nearly cost him asylum. When he sought refuge in the European Union years after the attack, he was asked to provide evidence of the incident. His only recourse was to turn to social media and YouTube for videos of the bombing, but to his dismay, they had all been deleted.

Fortunately, Berlin-based company Mnemonic has developed a tool that automatically saves videos of war crimes, ensuring they are not lost due to overcautious AI algorithms. Mnemonic has already managed to preserve 700,000 deleted videos from social media, including the crucial footage needed by Imad.
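
The BBC report does not describe how Mnemonic’s tool works internally, but the general approach of archiving a platform copy before it disappears can be sketched with the open-source yt-dlp downloader. The library and the options shown are real; the function name, directory layout, and example URL are assumptions for illustration.

```python
# A minimal archival sketch using the open-source yt-dlp library
# (https://github.com/yt-dlp/yt-dlp); an illustration of the general
# approach, not Mnemonic's actual tooling.
from yt_dlp import YoutubeDL

def archive_video(url: str, archive_dir: str = "archive") -> None:
    """Download a video plus its metadata so a local copy survives
    even if the platform later removes the original."""
    options = {
        "outtmpl": f"{archive_dir}/%(id)s/%(title)s.%(ext)s",
        "writeinfojson": True,   # save uploader, upload date, description
        "writethumbnail": True,  # keep the thumbnail alongside the video
    }
    with YoutubeDL(options) as ydl:
        ydl.download([url])

# Hypothetical usage: archive each reported URL as soon as it surfaces.
archive_video("https://example.com/watch?v=some-video-id")
```

An evidence-preservation pipeline would also typically record a cryptographic hash of each file at capture time so its integrity can later be verified.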

Nevertheless, the scale of social media platforms is vast, and even with the efforts of companies like Mnemonic, it is impossible to fully recover every vital video of atrocities that overly cautious AI programs delete.

In response to these concerns, Meta, the parent company of Facebook, has expressed its commitment to improving its content-moderation decision-making. It aims to develop more reasonable mechanisms, whether through human intervention or enhanced AI, to accurately distinguish potential recordings of war crimes from other forms of graphic content, as reported by the BBC.

Conclusion:

The use of AI by tech companies for content moderation poses a significant risk of erasing evidence of potential war crimes. For the market, this highlights the need for moderation algorithms that can accurately differentiate gratuitously harmful content from videos documenting atrocities. Companies like Mnemonic provide valuable stopgaps, but a gap remains. Meta’s commitment to developing more reasonable mechanisms shows recognition of the issue and is a step in the right direction. The market should prioritize AI systems that strike a balance between protecting users and preserving important evidence.
