TL;DR:
- OctoML ends its business relationship with Civitai following reports of CSAM generation.
- 404 Media’s investigation revealed sexually explicit content and nonconsensual images on Civitai.
- OctoML initially planned to continue the collaboration while introducing NSFW content filters.
- Civitai responded with new moderation methods, including Civitai Safe Helper (Minor).
- Civitai had faced prior scrutiny for its “bounties” feature and deepfake generation.
- OctoML, whose infrastructure runs on Amazon Web Services, emphasizes its commitment to responsible AI use.
Main AI News:
In a surprising turn of events, OctoML, the cloud computing provider, has officially severed ties with Civitai, the embattled text-to-image platform, following recent reports of potential Child Sexual Abuse Material (CSAM) generation on the platform. The decision comes on the heels of a 404 Media investigation that shed light on the alarming misuse of Civitai’s technology.
Initially, OctoML had expressed its intention to continue its collaboration with Civitai, implementing measures to curb the creation of harmful content. However, the revelation in 404 Media’s December 5 report has prompted a decisive change of course. Internal communications exposed OctoML’s awareness of Civitai users generating sexually explicit and nonconsensual images of real individuals, including pornographic depictions of children.
In a subsequent move, OctoML introduced a filtering system to block the generation of Not Safe for Work (NSFW) content on Civitai. Nevertheless, this proved insufficient to rectify the situation, leading to OctoML’s decision to terminate its association with Civitai entirely.
Civitai responded to the investigation by implementing new moderation methods, including the introduction of “Civitai Safe Helper (Minor),” a mandatory embedding that prohibits the model from generating images featuring children when mature themes or keywords are detected.
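Civitai has not published implementation details for this safeguard, but the mechanism it describes (scanning prompts for mature or minor-related keywords and forcing a safety embedding into the generation request) might look roughly like the minimal sketch below. All names in it, such as MATURE_KEYWORDS, SAFETY_EMBEDDING_TOKEN, and build_generation_request, are illustrative assumptions rather than Civitai's actual code.

```python
# Hypothetical sketch of a keyword-triggered safety embedding, loosely modeled on the
# description of "Civitai Safe Helper (Minor)" above. Keyword lists, the embedding
# token, and the request format are assumptions for illustration only.

MATURE_KEYWORDS = {"nsfw", "nude", "explicit"}          # assumed mature-theme triggers
MINOR_KEYWORDS = {"child", "kid", "teen", "young"}      # assumed minor-related triggers
SAFETY_EMBEDDING_TOKEN = "<civitai_safe_helper_minor>"  # assumed negative-embedding token


def build_generation_request(prompt: str, negative_prompt: str = "") -> dict:
    """Assemble a text-to-image request, forcing the safety embedding into the
    negative prompt whenever mature or minor-related keywords are detected."""
    words = set(prompt.lower().split())
    triggered = bool(words & MATURE_KEYWORDS) or bool(words & MINOR_KEYWORDS)

    if triggered and SAFETY_EMBEDDING_TOKEN not in negative_prompt:
        # The embedding is mandatory: it is appended server-side and cannot be removed.
        negative_prompt = f"{negative_prompt} {SAFETY_EMBEDDING_TOKEN}".strip()

    return {"prompt": prompt, "negative_prompt": negative_prompt}


if __name__ == "__main__":
    print(build_generation_request("nsfw portrait, photorealistic"))
    # {'prompt': 'nsfw portrait, photorealistic',
    #  'negative_prompt': '<civitai_safe_helper_minor>'}
```

In an actual deployment, the generation backend would presumably resolve the embedding token against a trained safety embedding and apply it during sampling; the sketch only shows the server-side, non-optional injection step implied by the description above.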
Civitai, backed by Andreessen Horowitz, had previously drawn scrutiny for its “bounties” feature, which encouraged users to create realistic images of real individuals for rewards. In November, 404 Media exposed the platform’s involvement in the production of nonconsensual deepfakes of celebrities, influencers, and private citizens, often of a sexual nature, with a predominant focus on women. Civitai subsequently incorporated a filter to prevent the generation of NSFW content involving specific celebrities.
OctoML, whose infrastructure relies on Amazon Web Services servers, has made it clear that it no longer wishes to be associated with Civitai. In a statement to 404 Media, OctoML emphasized its commitment to ensuring the safe and responsible use of AI, aligning with its decision to terminate the business relationship with Civitai. This development underscores the increasing scrutiny and responsibility surrounding the use of AI technology in the modern era.
Conclusion:
OctoML’s decision to sever ties with Civitai reflects the growing importance of responsible AI use. The move signals heightened awareness and scrutiny within the market, and it highlights the need for stringent measures to prevent harmful content generation on AI platforms. Companies must prioritize ethical considerations in AI technology to maintain trust and credibility in the evolving landscape.