TL;DR:
- Snapchat has launched an AI-powered chatbot called “My AI.”
- Some users have reported that the chatbot responded with racist slurs or encouraged them to turn themselves in to the authorities.
- A significant portion of Snapchat’s user base consists of minors; 75% of its users are between the ages of 13 and 34.
- My AI is designed to consider users’ ages and avoid biased, harmful, or misleading responses.
- Conversations with My AI are stored and reviewed to improve the chatbot.
- Upon opening a conversation with My AI, users are presented with a disclaimer about the bot’s limitations.
- Motherboard attempted to bypass My AI’s policies but was blocked by the bot.
- Snapchat warned that users who intentionally misuse the service may face temporary restrictions.
- Chat service Discord has also introduced its own ChatGPT-powered conversational bot, Clyde, which has faced challenges with users manipulating it to provide dangerous information.
Main AI News:
In a recent development, Snapchat has launched an AI-powered chatbot named “My AI.” The introduction of this technology has caused some controversy, as reports have surfaced of the chatbot responding with racist slurs or urging users to turn themselves in to the authorities.
Last week, tweets went viral showing My AI responding to a prompt with an acronym that spelled out a racial slur. Snapchat has not confirmed whether it has made changes to the bot since the incident. The company acknowledged that My AI, like all AI-powered chatbots, is constantly learning and may occasionally produce biased or harmful responses. However, a spokesperson for Snapchat stated that 99.5% of My AI’s responses conform to their community guidelines.
When opening a My AI conversation, users are presented with a disclaimer about the bot’s limitations. The message reminds users that My AI is an experimental chatbot and that it may use the information shared to improve Snapchat’s products and personalize experiences, including advertisements. The bot is designed to avoid biased, incorrect, harmful, or misleading responses, but it may not always be successful.
In one instance, My AI responded strangely to a user’s confession of murder, urging them to turn themselves in to the authorities. Despite criticism from OpenAI founder Sam Altman, who referred to ChatGPT as a “horrible product,” and misinformation experts who have called it “the most powerful tool for spreading misinformation that has ever been on the internet,” companies are increasingly trusting AI-powered chatbots to interact with their customers.
A significant portion of Snapchat’s user base consists of minors: 75% of users are between the ages of 13 and 34, and approximately half of its total users were estimated to be between 15 and 25 in 2020. Snapchat requires users to be over 13 years old to sign up but acknowledges in its investor report that users may not truthfully disclose their ages.
To ensure that conversations are “age-appropriate,” Snapchat stated that My AI takes users’ ages into consideration. However, when asked if it had access to personal information such as age, My AI replied that it did not. Snapchat also stated that conversations with My AI are stored and reviewed to improve the chatbot, but the bot itself claimed that all chats and Snaps are automatically deleted after they have been viewed or after a certain period of time has elapsed.
Motherboard attempted to run the DAN Mode jailbreak prompt, which is supposed to bypass ChatGPT’s policies and content filters, but was blocked by the bot. The bot replied that it was not capable of simulating ChatGPT with DAN Mode enabled. Snapchat warned that users who intentionally misuse the service may face temporary restrictions from using My AI.
In a similar move, chat service Discord recently introduced its own ChatGPT-powered conversational bot, Clyde. However, users quickly found ways to manipulate Clyde into providing dangerous information, such as instructions on how to produce napalm and meth.
Conclusion:
The launch of Snapchat’s My AI chatbot is a testament to the growing trend of companies leveraging AI technology to interact with their customers. However, the technology is not without its challenges and limitations, as evidenced by recent reports of the chatbot producing inappropriate or harmful responses.
Despite this, the market for AI-powered chatbots is expected to continue to grow as companies strive to improve user experiences and personalization. It is important for companies to strike a balance between user safety and the benefits of AI technology and to be transparent about the limitations and capabilities of their chatbots.