TL;DR:
- UK’s ICO issues preliminary enforcement notice to Snap over its AI chatbot ‘My AI’.
- Concerns were raised about potential privacy risks to children.
- Snap’s risk assessment prior to ‘My AI’ launch under scrutiny.
- Snap to respond to ICO’s concerns before a final decision is made.
- Despite safeguards, there are reports of the chatbot providing inappropriate responses.
- European regulators previously scrutinized AI chatbots, leading to privacy enhancements.
- Impact on the market: Increased regulatory scrutiny may encourage tech companies to prioritize user privacy in AI-driven products.
Main AI News:
Snap’s AI chatbot has come under the scrutiny of the UK’s data protection watchdog, which has expressed concerns about potential risks to children’s privacy. The Information Commissioner’s Office (ICO) recently issued a preliminary enforcement notice to Snap, focusing on what it perceives as a “potential failure to properly assess the privacy risks posed by its generative AI chatbot ‘My AI’.”
While this action by the ICO does not constitute a breach finding, it underscores the regulator’s unease that Snap might not have taken adequate measures to ensure its product complies with data protection rules, which have been strengthened since 2021, notably with the introduction of the Children’s Design Code.
According to the ICO, its investigation has provisionally identified shortcomings in the risk assessment Snap carried out before launching ‘My AI’, particularly regarding the data protection risks to children aged 13 to 17. The ICO emphasized the importance of assessing data protection risks in this context, given the innovative technology involved and the processing of young users’ personal data.
Snap now has an opportunity to respond to the regulator’s concerns before the ICO reaches a final decision on whether the company has violated the rules.
Information Commissioner John Edwards expressed his concerns, stating, “The provisional findings of our investigation suggest a worrying failure by Snap to adequately identify and assess the privacy risks to children and other users before launching ‘My AI.’ We have been clear that organizations must consider the risks associated with AI, alongside the benefits. Today’s preliminary enforcement notice shows we will take action to protect UK consumers’ privacy rights.”
Snap introduced the generative AI chatbot in February, although it only became available in the UK in April. Powered by OpenAI’s ChatGPT technology, the bot was designed to act as a virtual friend, offering advice and accepting snaps from users. Initially, the feature was exclusive to Snapchat+ subscribers but was later made accessible to free users as well. Snap also enabled the bot to reply to users who interacted with it using its own AI-generated snaps.
Snap claims to have incorporated additional moderation and safeguarding features into the chatbot, including taking a user’s age into account by default, with the aim of ensuring that generated content is suitable for users. The bot is also programmed to avoid violent, hateful, sexually explicit, or otherwise offensive responses. Through its Family Center feature, Snap provides parental safeguarding tools that inform parents whether their child has interacted with the bot in the past seven days.
Despite these safety measures, there have been instances where the chatbot provided concerning responses. For example, it recommended ways to mask the smell of alcohol to a 15-year-old user and offered suggestions for setting a romantic mood with candles and music to a 13-year-old who asked about preparing for their first sexual experience.
There have also been reports of Snapchat users bullying the bot, with some expressing frustration over the introduction of AI into their feeds.
A spokesperson from Snap responded, saying, “We are closely reviewing the ICO’s provisional decision. Like the ICO, we are committed to protecting the privacy of our users. In line with our standard approach to product development, My AI went through a robust legal and privacy review process before being made publicly available. We will continue to work constructively with the ICO to ensure they’re comfortable with our risk assessment procedures.”
This isn’t the first time that European privacy regulators have taken notice of AI chatbots. In February, Italy’s Garante ordered Replika, a San Francisco-based “virtual friendship service,” to stop processing local users’ data, citing concerns about risks to minors. The Italian authority also placed a similar stop-processing order on OpenAI’s ChatGPT tool the following month. The block was eventually lifted in April, but only after OpenAI added more comprehensive privacy disclosures and user controls.
The regional launch of Google’s Bard chatbot faced delays due to concerns raised by Ireland’s Data Protection Commission, the lead regional privacy regulator. It eventually launched in the EU in July, after adding more disclosures and controls. A regulatory taskforce within the European Data Protection Board continues to assess how to enforce the bloc’s General Data Protection Regulation (GDPR) on generative AI chatbots, including ChatGPT and Bard. Poland’s data protection authority is also investigating a complaint against ChatGPT.
Conclusion:
Snap’s AI chatbot faces regulatory scrutiny in the UK over potential privacy breaches, particularly concerning children. This could lead to increased scrutiny of AI-powered services across the market, emphasizing the need for robust privacy measures and compliance with data protection regulations. Companies must ensure their AI technologies align with regulatory requirements to protect user privacy rights and maintain public trust.