Common Sense Media Rates AI Products for Kids’ Safety

TL;DR:

  • Common Sense Media has assessed popular AI tools for their suitability for children.
  • Concerns have been raised about the safety of AI products, including Snapchat’s My AI, DALL-E, and Stable Diffusion.
  • The ratings include dimensions like trust, kids’ safety, privacy, fairness, and more.
  • Generative AI models exhibited biases and privacy issues.
  • AI chatbots like Google’s Bard, ChatGPT, and Toddle AI received mid-tier ratings with warnings about potential biases.
  • Educational AI products received positive reviews for responsible AI practices.
  • OpenAI emphasizes safety measures for users.
  • Common Sense Media plans to continue publishing ratings and reviews of new AI products.

Main AI News:

In a recent assessment of popular AI tools, Common Sense Media, a respected nonprofit advocacy group for families, has raised concerns about the safety of several products for children. Common Sense Media, well-known for providing media ratings for various forms of entertainment consumed by children, has extended its reach to evaluate AI products, including chatbots, image generators, and more, by introducing “nutrition labels” for these technologies.

Earlier this year, responding to parental demand, the organization initiated the development of a ratings system to assess AI products. A survey of parents revealed that 82% sought assistance in determining the safety of new AI products for their children, with only 40% being aware of reliable resources for this purpose.

Today, Common Sense Media has unveiled its inaugural AI product ratings, covering several key dimensions, including trust, kids’ safety, privacy, transparency, accountability, learning, fairness, social connections, and societal benefits. The organization evaluated ten popular apps on a 5-point scale, encompassing educational tools, AI chatbots such as Bard and ChatGPT, and generative AI products like Snap’s My AI and DALL-E, among others. Unfortunately, the latter category received the lowest ratings.

Tracy Pizzo-Frey, Senior Advisor of AI at Common Sense Media, emphasized the inherent biases in generative AI models due to their training on vast internet data, including cultural, racial, socioeconomic, historical, and gender biases. She expressed hope that these ratings would encourage developers to implement safeguards against misinformation and protect future generations from unintended consequences.

Snapchat’s My AI received a 2-star rating from Common Sense Media, even though TechCrunch’s own testing had found its generative AI features generally more peculiar than harmful. The chatbot produced responses that reinforced ageism, sexism, and cultural stereotypes, along with occasional inappropriate and inaccurate answers. Privacy concerns were also raised over its storage of personal user data.

Snap defended My AI, highlighting its optional nature and its clear disclosure as a chatbot with limitations. Other generative AI models, such as DALL-E and Stable Diffusion, exhibited similar risks, including the objectification and sexualization of women and girls, and the perpetuation of gender stereotypes.

Notably, generative AI models are increasingly being exploited for producing explicit content, prompting debates about accountability within the AI community.

Common Sense Media positioned AI chatbots like Google’s Bard, ChatGPT, and Toddle AI in the mid-tier of its ratings. The organization cautioned about potential biases in these bots, particularly concerning users with diverse backgrounds and dialects. These chatbots could also generate inaccurate information and reinforce stereotypes, potentially shaping users’ worldviews and making it challenging to discern fact from fiction.

OpenAI responded to the rankings by emphasizing its commitment to safety and privacy, with age requirements and parental consent measures in place for users. The only AI products to receive favorable reviews were those designed for educational purposes, such as Ello’s AI reading tutor, Khanmigo (from Khan Academy), and Kyron Learning’s AI tutor. These products prioritized responsible AI practices, fairness, diverse representation, and data privacy transparency.

Common Sense Media plans to continue publishing ratings and reviews of new AI products, aiming to inform parents, families, lawmakers, and regulators about the safety and ethical considerations surrounding these technologies. Founder and CEO James P. Steyer stressed the importance of clear “nutrition labels” for AI products, particularly those used by children and teens, to safeguard data privacy and well-being.

Conclusion:

The evaluation of AI products for children’s safety by Common Sense Media highlights growing concerns about biases and privacy issues in these technologies. Developers need to address these issues to protect young users and build trust in the market. Educational AI products that prioritize responsible practices are emerging as a positive example. As the scrutiny of AI for children intensifies, the market will likely see increased demand for safer and more transparent AI solutions.

Source