CyberSafeKids is worried about Snapchat’s AI chatbot

TL;DR:

  • Safeguarding concerns raised over Snapchat’s AI chatbot by CyberSafeKids.
  • Snapchat recently launched ‘My AI’ feature pinned to users’ chat feeds, only removable for paid subscribers.
  • Criticism and confusion surround the app’s use of location data, especially with underage users.
  • CyberSafeKids CEO expresses concerns about the lack of proper testing and potential risks.
  • Snapchat claims ‘My AI’ considers users’ ages and aims to keep conversations age-appropriate.
  • EU’s AI Act lacks strong provisions for child safety, according to CyberSafeKids.
  • Mixed reports on whether ‘My AI’ has access to user location despite Snapchat’s denial.
  • Snapchat suggests parents use ‘Family Center’ to monitor teens’ interactions with ‘My AI.’
  • Tech companies should prioritize creating safe environments for children.
  • Safety measures must be central considerations in developing and releasing new technologies.

Main AI News:

Snapchat’s recent introduction of its own artificial intelligence chatbot, called ‘My AI,’ has raised serious safeguarding concerns. CyberSafeKids, an organization dedicated to online safety for children, claims that the service has not undergone adequate testing. ‘My AI’ is a feature pinned to the top of users’ chat feeds, and only paid subscribers have the option to remove it. The move has drawn criticism and caused confusion over the app’s use of location data, particularly since children as young as 13 can officially sign up.

Alex Cooney, Chief Executive of CyberSafeKids, expressed her worry over Snapchat’s implementation of the new technology without proper road testing, particularly given the app’s large user base. She noted that 42% of children aged eight to 12 surveyed by CyberSafeKids over the past year use Snapchat, meaning the chatbot will also reach underage users. Cooney voiced her concern, stating, “This potential lack of testing raises alarm bells, especially given the userbase of Snapchat.”

Snapchat has responded by stating that ‘My AI’ takes into account Snapchatters’ ages and aims to ensure age-appropriate conversations. The company also emphasized its commitment to monitoring usage patterns and implementing necessary improvements to enhance the fun, usefulness, and safety of the AI feature for the entire community.

Despite Snapchat’s claims, Cooney believes there is still a risk of things going wrong. She pointed out that while the European Union’s AI Act is on the horizon, regulations surrounding artificial intelligence fail to sufficiently address child safety concerns. Cooney stressed the need for safety to be a central consideration during the development and rollout of new technologies before they reach the general public.

Concerns regarding location data have also surfaced. Snapchat clarifies that ‘My AI’ only accesses a user’s location if it has already been shared with friends on Snap Map or with Snapchat at the device level. The company acknowledges that this has caused confusion among users but emphasizes that ‘My AI’ does not collect any new location information.

In response to Snapchat’s explanation, Cooney said she had received mixed reports: children asked the AI chatbot location-specific questions and sometimes received accurate answers, while at other times the response was “I don’t know where you are.” Cooney remains skeptical, stating, “I believe they have access to location data despite their claims.”

Snapchat assures parents that they can use the ‘Family Center’ feature to monitor their teens’ interactions with ‘My AI’ and the frequency of those engagements. However, Cooney expressed concerns about the added pressure on parents, stressing that tech companies should take greater responsibility for creating safe environments for children using their platforms.

The introduction of Snapchat’s AI chatbot has undoubtedly sparked safeguarding concerns, particularly regarding proper testing, age-appropriate content, and the collection of location data. As the discussion around artificial intelligence regulations progresses, it becomes crucial to prioritize the safety of young users before releasing new technologies to the public. Only through comprehensive safety measures can tech companies fulfill their responsibility to protect their users, especially the most vulnerable among them.

Conclusion:

The introduction of Snapchat’s AI chatbot, accompanied by the safeguarding concerns raised by CyberSafeKids, highlights the need for tech companies to prioritize user safety, particularly when catering to a young and vulnerable demographic. The criticism and confusion surrounding the app’s use of location data, coupled with the concerns over proper testing and potential risks, underscore the importance of robust safety measures in the development and implementation of AI technologies.

This serves as a reminder for the market that user trust and security are critical factors in ensuring the success and sustainability of any platform targeting younger audiences. Companies that proactively address these concerns and prioritize safety will likely gain a competitive advantage, building stronger relationships with their user base and fostering a trustworthy brand image in the market.