Australia’s eSafety Commissioner will compel search engines to remove AI-generated child sexual abuse content from their results

TL;DR:

  • Australia’s eSafety Commissioner will require search engines to remove AI-generated child sexual abuse material from search results.
  • The move responds to the growing threat that AI and deepfakes pose to children’s privacy and rights.
  • New online safety codes and standards will address the risks posed by generative AI across the online industry.
  • Industry players must take measures to combat class 1 material, including child sexual abuse content, within Australian services.
  • AI functionality integrated with search engines cannot be used to create “synthetic” versions of harmful material.
  • The regulations align with the Online Safety Act 2021 (Cth), granting the eSafety Commissioner new powers.
  • The scope of industry standards expands to include two additional sectors: Designated Internet Services and Relevant Electronic Services.

Main AI News:

Australia’s eSafety Commissioner, Julie Inman Grant, has taken a decisive step to protect children online. In response to the growing threat posed by AI-generated child sexual abuse content and deepfakes, the regulator has announced measures compelling search engines to remove such material from their search results.

The move addresses the mounting risks to children’s privacy and rights created by the proliferation of artificial intelligence and deepfake technology. The forthcoming online safety codes and standards will be tailored to the challenges that generative AI poses across various segments of the online industry.

Under these new regulations, industry participants will be obligated to adopt comprehensive measures to combat class 1 material, including child sexual abuse content, on their platforms serving Australians. Of particular importance is the prohibition on using AI functionality integrated with search engines to generate “synthetic” versions of such material.

Julie Inman Grant emphasized the need for these measures, stating, “When the industry’s major players announced their plans to incorporate generative AI into their search functions, our existing code proved inadequate in delivering the community protections we demand and expect.”

The development of the online safety code will align closely with the Online Safety Act 2021 (Cth), which serves as Australia’s primary legal framework for regulating illegal and restricted online content. This act grants the eSafety Commissioner significant new authority to shield Australians from online harm.

Currently, registered industry codes apply to five online sectors: Social Media Services, Internet Carriage Services, App Distribution Services, Hosting Services, and Equipment Providers. The new industry standards will expand this scope to two additional sectors: Designated Internet Services and Relevant Electronic Services.

Conclusion:

Australia’s proactive measures to protect children from AI-generated abuse content signal a firm commitment to online safety. For the market, this means increased regulatory scrutiny and compliance obligations for industry participants, likely prompting changes in business practices and new investment in content moderation technology.

Source