AI-driven apps that digitally undress people in photos are surging in popularity

TL;DR:

  • AI-driven apps that digitally undress people in photos are surging in popularity.
  • In September alone, 24 million people visited these websites.
  • Many “nudify” services market themselves on popular social networks, with promotional links on platforms like X and Reddit up more than 2,400% this year.
  • These apps fuel the non-consensual spread of deepfake pornography, using AI to alter images so that subjects appear nude.
  • The availability of open-source AI models has led to more realistic deepfakes.
  • Some apps have faced scrutiny for potentially inciting harassment and advertising explicit content.
  • Tech giants like Google and Reddit are taking steps to combat the issue, while TikTok and Meta are blocking related keywords.
  • Privacy experts are increasingly concerned about the accessibility and effectiveness of deepfake technology.
  • The lack of federal laws regarding deepfake pornography raises legal and ethical challenges.

Main AI News:

Apps that use artificial intelligence to digitally undress people in photos are surging in popularity, researchers have found. In September alone, 24 million people visited these undressing websites, according to social network analysis firm Graphika. Worryingly, many of these “nudify” services market their offerings on popular social networks.

Since the beginning of this year, the volume of links advertising undressing apps on social media platforms such as X and Reddit has grown by more than 2,400%, the researchers note. The services use AI to alter existing images so that the subject appears nude, primarily targeting women.

These apps are part of a troubling trend of non-consensual pornography created and distributed with the help of advances in artificial intelligence, commonly known as deepfake pornography. Its proliferation raises serious legal and ethical problems, as the images are often taken from social media and distributed without the subject’s consent, control, or knowledge.

The surge in popularity corresponds to the release of open-source diffusion models, AI systems that can generate images far superior to those created just a few years ago, Graphika says. Because they are open source, the models are freely available to app developers at no cost.

“You can create something that actually looks realistic,” said Santiago Lakatos, an analyst at Graphika, noting that earlier deepfakes were often blurry.

In one troubling example, an image posted on X advertising an undressing app suggested that customers could create a nude image and send it to the person who had been digitally undressed, effectively inciting harassment. One of the apps has also paid for sponsored content on Google’s YouTube and appears prominently in searches for “nudify.”

In response, Google said it does not permit ads containing sexually explicit content and is removing those that violate its policies. Reddit said it prohibits the non-consensual sharing of faked sexually explicit material and has banned several domains as a result of the research. X did not respond to requests for comment.

Alongside the rise in traffic, the services, some of which charge $9.99 a month, claim on their websites to be attracting a large customer base. “They are doing a lot of business,” Lakatos said, pointing to one undressing app whose website advertises more than a thousand users per day.

Non-consensual pornography of public figures has long been a problem on the internet, but privacy experts are increasingly worried that advances in AI have made deepfake software easier to use and more effective, and that it is now being used by ordinary people against ordinary victims.

“We are seeing more and more of this being done by ordinary people with ordinary targets,” said Eva Galperin, director of cybersecurity at the Electronic Frontier Foundation, citing cases involving high school and college students. Many victims never learn such images exist, and even those who do often struggle to persuade law enforcement to investigate or to secure funds for legal action.

There is currently no federal law banning the creation of deepfake pornography, though the US government does outlaw the generation of such images of minors. In a landmark case in November, a North Carolina child psychiatrist was sentenced to 40 years in prison for using undressing apps on photos of his patients, the first prosecution of its kind under the law banning the creation of deepfake child sexual abuse material.

To curb the problem, TikTok has blocked the keyword “undress,” a popular search term for these services, warning users that it “may be associated with behavior or content that violates our guidelines.” Meta Platforms Inc. has likewise begun blocking keywords associated with searches for undressing apps.
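Keyword blocking of this kind is conceptually simple, even if production systems are far more elaborate. The minimal sketch below is purely illustrative: the blocklist contents, function name, and matching logic are hypothetical assumptions, not the platforms’ actual implementations (only the warning text comes from TikTok’s reported message).

```python
# Minimal sketch of keyword-based search blocking, similar in spirit to
# what TikTok and Meta reportedly do. The blocklist and matching logic
# are hypothetical illustrations, not any platform's real system.

BLOCKED_TERMS = {"undress", "nudify"}  # hypothetical blocklist entries

# Warning text as reported for TikTok's "undress" search block.
WARNING = ("This phrase may be associated with behavior or content "
           "that violates our guidelines.")

def check_search_query(query: str) -> str | None:
    """Return a warning if the query contains a blocked term, else None."""
    tokens = query.lower().split()
    if any(term in tokens for term in BLOCKED_TERMS):
        return WARNING
    return None

if __name__ == "__main__":
    for q in ["undress app", "holiday photos"]:
        print(q, "->", check_search_query(q) or "allowed")
```

Real systems must also handle normalization, misspellings, multilingual variants, and deliberate evasions, which is one reason keyword blocking is generally seen as a stopgap rather than a complete defense.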

Conclusion:

The rising popularity of AI-powered “nudify” apps, fueled by open-source AI models and aggressive social media marketing, raises serious privacy and ethical concerns. The growth of non-consensual deepfake content threatens ordinary people and strengthens the case for stricter legislation. Platforms’ efforts to block related keywords show growing recognition of the problem, but further action is needed to safeguard privacy and prevent harassment, and the market for AI-driven image manipulation services is likely to face increasing regulatory scrutiny and responsibility to protect users.

Source