Oversight Board Urges Meta to Overhaul Policies on AI-Generated Explicit Images

  • The Oversight Board is urging Meta to update its policies regarding AI-generated explicit images.
  • Recommendations include changing terminology from “derogatory” to “non-consensual” and relocating related policies to the “Sexual Exploitation Community Standards” section.
  • The board also suggests replacing “photoshop” with a broader term for manipulated media.
  • Meta’s current policy against non-consensual imagery is criticized as too narrow and conditional.
  • Two notable cases involving AI-generated explicit images of public figures on Meta’s platforms highlighted these policy issues.
  • Breakthrough Trust and other organizations have raised concerns about the trivialization of non-consensual imagery and the secondary victimization of those who report it.
  • Meta has pledged to review and consider the board’s recommendations.

Main AI News:

The Oversight Board has called for significant changes to Meta’s handling of AI-generated explicit images, pressing the company to refine its policies and terminologies. Following recent investigations into how Meta manages these images, the board, an independent body that monitors Meta’s adherence to its own content standards, has recommended that the company adjust its approach to better address the unique challenges posed by AI-generated content.

The Oversight Board’s main suggestions include changing the terminology used in Meta’s policies from “derogatory” to “non-consensual” and relocating these policies from the “Bullying and Harassment” section to the “Sexual Exploitation Community Standards” section. The current policy framework, which categorizes explicit AI-generated images under the “derogatory sexualized photoshop” rule, is seen as insufficiently addressing the complexities of manipulated media.

Specifically, the board has urged Meta to replace the term “photoshop” with a more generalized term that encompasses all forms of manipulated media, acknowledging the broader range of technologies involved in creating such content. The board also criticizes Meta’s existing policy against non-consensual imagery, which applies only to cases deemed “non-commercial or produced in a private setting.” It argues that this condition should not be a prerequisite for removing or banning AI-generated or manipulated images lacking consent, and that Meta’s approach should be more inclusive and less restrictive.

These recommendations come in response to two high-profile cases where AI-generated explicit images of public figures created significant controversy. The first case involved an AI-generated nude image of an Indian public figure that was posted on Instagram. Despite multiple user reports, the image remained on the platform for 48 hours before the Oversight Board intervened, resulting in the removal of the content and the banning of the offending account. The second case involved an image of a U.S. public figure posted on Facebook. Meta’s Media Matching Service (MMS) repository, a database designed to detect and address policy violations, had previously flagged the image due to media reports, leading to its swift removal when re-uploaded by another user.

The board’s intervention highlighted the need for Meta to improve its policies and practices regarding non-consensual content. The board expressed concern that many victims of deepfake intimate images are not public figures and face difficulties in addressing the spread of such content. Breakthrough Trust, an Indian organization dedicated to reducing online gender-based violence, has also voiced concerns about Meta’s policies. The organization argues that non-consensual imagery is often trivialized as identity theft rather than recognized as a serious form of gender-based violence. They have reported that victims frequently encounter secondary victimization when attempting to report such cases, with law enforcement and legal systems often questioning the victims rather than addressing the underlying issues.

Barsha Chakraborty from Breakthrough Trust has criticized Meta’s practice of automatically marking reports as resolved within 48 hours and suggested that the company needs to build greater user awareness and provide more nuanced support. She emphasized that applying a uniform timeline to all cases fails to account for the unique challenges posed by synthetic media and the speed at which such content spreads.

Aparajita Bharti of The Quantum Hub has called for Meta to allow users to provide more context when reporting content, noting that users may not fully understand the various categories of rule violations under Meta’s policies. Bharti advocates for more flexible and user-centered reporting channels to ensure that genuine issues are not overlooked due to technicalities in Meta’s content moderation policies.

In response to the Oversight Board’s recommendations, Meta has pledged to review and consider these proposed changes to enhance its policies on AI-generated explicit images. The company’s commitment to addressing these issues reflects ongoing efforts to adapt to the evolving challenges of content moderation in the digital age.

Conclusion:

The Oversight Board’s recommendations signal a potential shift in how social media platforms, particularly Meta, handle AI-generated explicit content. If Meta adopts these changes, it could lead to more robust protections against non-consensual imagery and improve content moderation practices. This move may set a precedent for other tech companies, influencing industry standards for managing AI-generated content. Enhanced policies could also address public concerns about digital safety and the ethical use of AI, potentially impacting user trust and regulatory scrutiny across the market.
