- Thorn introduces Safer Predict, an AI-driven tool for detecting child sexual abuse material (CSAM) and exploitation risks on content platforms.
- Safer Predict detects both new and previously unreported CSAM, as well as text-based indications of child sexual exploitation (CSE).
- The tool uses machine learning models trained on verified CSAM data, including input from NCMEC’s CyberTipline.
- Thorn’s CSE text classification model analyzes conversation contexts to identify potential abuse, offering detailed labeling and risk assessments.
- Safer Predict provides customizable workflows to develop detection strategies, prioritize high-risk accounts, and enhance content moderation.
- Beta testing with social media platform X showed Safer Predict’s effectiveness in improving content moderation and prioritizing reports for NCMEC.
Main AI News:
Thorn, a leading nonprofit focused on protecting children from sexual abuse, has announced Safer Predict, an AI solution designed to proactively detect and address child sexual abuse material (CSAM) and text-based interactions indicative of child sexual exploitation (CSE) on content-hosting platforms.
The National Center for Missing & Exploited Children (NCMEC) reported more than 36 million suspected child sexual exploitation cases last year alone. Safer Predict aims to tackle the challenges of identifying both new and previously unreported CSAM and CSE, including text content that may suggest potential threats to children.
Julie Cordua, Thorn’s CEO, highlighted the critical need for scalable protection tools, stating, “With the rise in child safety risks, platforms need advanced solutions to enhance their protection capabilities. Safer Predict provides platforms with Thorn’s leading-edge technology to detect CSAM and CSE across images, videos, and text, allowing for quick removal of harmful content and creating a safer online environment.”
The tool employs Thorn’s machine learning models, which are trained on verified CSAM data, including information from the NCMEC CyberTipline. These models predict the presence of CSAM in images and videos, while Safer Predict’s text models focus on messages related to child sexual exploitation.
Thorn’s new CSE text classification model examines conversation contexts to identify potential abuse and allows for detailed labeling of problematic accounts. Multiple language models review conversations line by line, offering risk assessments for different forms of abuse, including CSAM, child access, sextortion, and self-generated content.
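To make the idea concrete, here is a minimal sketch of how a platform might consume per-message risk labels from a classifier of this kind. It is purely hypothetical: the `MessageRisk` structure, the label names, and the score threshold are illustrative assumptions, not Thorn’s actual API or label set.

```python
from dataclasses import dataclass

# Hypothetical risk labels mirroring the abuse categories described above.
# These names are illustrative, not Thorn's actual taxonomy.
RISK_LABELS = ("csam", "child_access", "sextortion", "self_generated_content")

@dataclass
class MessageRisk:
    message_id: str
    scores: dict[str, float]  # label -> probability in [0, 1]

def flag_conversation(
    messages: list[MessageRisk], threshold: float = 0.8
) -> dict[str, list[str]]:
    """Group message IDs by any risk label whose score meets the threshold.

    A moderation team could use output like this to route a conversation
    to the appropriate review queue.
    """
    flagged: dict[str, list[str]] = {label: [] for label in RISK_LABELS}
    for msg in messages:
        for label, score in msg.scores.items():
            if label in flagged and score >= threshold:
                flagged[label].append(msg.message_id)
    return flagged

# Example: two messages, one scoring high on one risk label.
risks = [
    MessageRisk("m1", {"sextortion": 0.92, "csam": 0.05}),
    MessageRisk("m2", {"child_access": 0.10}),
]
print(flag_conversation(risks))  # {'csam': [], 'child_access': [], 'sextortion': ['m1'], 'self_generated_content': []}
```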
Safer Predict features customizable workflows, enabling platforms to develop specific detection strategies, prioritize high-risk accounts, and expand detection coverage. It also strengthens content moderation by streamlining investigations and enabling more effective reporting of harmful material.
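The prioritization step could work along these lines: aggregate a risk score per account, then surface the highest-risk accounts to reviewers first. The sketch below is an assumption about such a workflow, not Safer Predict’s implementation; the per-account scoring scheme and function names are invented for illustration.

```python
import heapq

def top_risk_accounts(
    account_scores: dict[str, float], k: int = 10
) -> list[tuple[str, float]]:
    """Return the k highest-risk accounts so reviewers see them first.

    Here each account's score is assumed to be the maximum per-message
    risk score observed across its recent content.
    """
    return heapq.nlargest(k, account_scores.items(), key=lambda item: item[1])

queue = top_risk_accounts({"acct_a": 0.97, "acct_b": 0.41, "acct_c": 0.88}, k=2)
print(queue)  # [('acct_a', 0.97), ('acct_c', 0.88)]
```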
Before its full launch, Thorn tested Safer Predict’s text detection capabilities with social media platform X. The beta test highlighted the tool’s effectiveness, with the text classifier aiding X’s moderation team in conducting in-depth investigations and prioritizing reports for NCMEC.
Kylie McRoberts, Head of Safety at X, remarked, “Thorn’s expert knowledge and quality training data made us enthusiastic about the beta test for their child sexual abuse text classifier. Safer Predict has improved our efficiency in identifying actionable content and supports our ongoing efforts to advance our technology-driven approach to combatting online child exploitation and high-harm content.”
Conclusion:
Thorn’s introduction of Safer Predict represents a significant advancement in child protection technology. By providing a proactive, AI-driven solution for detecting both visual and textual indications of child exploitation, Safer Predict addresses a critical need for effective, scalable tools to combat these issues. The capabilities it offers could set a new standard for content-hosting platforms, prompting other organizations to adopt similar technologies. This development may drive increased investment in AI solutions for online safety and further underscore the importance of robust protective measures in the digital space.