Australia Contemplates Ban on High-Risk AI Applications: Safeguarding Against Deepfakes and Algorithmic Bias

TL;DR:

  • Australia’s government is considering banning high-risk uses of AI, such as deepfakes and systems prone to algorithmic bias.
  • A report and discussion paper will be released, focusing on safe and responsible AI deployment.
  • Generative AI, including large language models, has gained popularity but raises concerns about potential harm.
  • Algorithmic bias is identified as a significant risk, skewing recruitment decisions and disadvantaging minority racial groups.
  • Positive AI applications in medical imaging and building safety are acknowledged.
  • The concentration of generative AI resources in a few US-based companies poses risks to Australia.
  • Different global approaches range from voluntary measures to stricter regulations.
  • The government aims to ensure appropriate safeguards for high-risk AI applications and automated decision-making.
  • Harmonizing AI governance with trading partners is being weighed as a way to capture AI-driven growth.
  • Stakeholder consultation seeks input on potential bans and implications for the domestic tech sector.
  • A threatened defamation suit over ChatGPT misinformation and warnings about AI-enabled child grooming highlight the need for regulation.
  • A Labor MP has proposed an Australian AI Commission to oversee AI practices.

Main AI News:

As artificial intelligence (AI) continues to advance at a rapid pace, Australia’s Albanese government is contemplating a ban on “high-risk” applications of AI and automated decision-making. The potential dangers of these technologies, such as the proliferation of deepfakes and algorithmic bias, have prompted policymakers to consider measures to ensure AI is used responsibly and safely. To that end, the industry and science minister, Ed Husic, is set to release a report by the National Science and Technology Council along with a discussion paper outlining strategies for achieving those goals.

Generative AI, in which machines produce new content ranging from text and images to audio and code, is one area that has surged in popularity. Prominent examples include ChatGPT, Google’s chatbot Bard, and the chat feature in Microsoft’s Bing. While educational institutions and authorities grapple with the technology’s ethical implications, the industry department’s discussion paper emphasizes its potential harms, including the generation of deepfakes to manipulate democratic processes, the spread of misinformation and disinformation, and even the encouragement of self-harm.

The paper also highlights algorithmic bias, often considered one of the major risks associated with AI. Biased systems can, for example, prioritize male candidates over their female counterparts in recruitment or disproportionately target minority racial groups. At the same time, AI has already demonstrated positive applications in various fields: it has proven effective in analyzing medical images, enhancing building safety, and reducing costs in legal services. The discussion paper does not, however, address the implications of AI for the labor market, national security, or intellectual property, as these topics fall outside its scope.
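Neither the report nor the discussion paper specifies how algorithmic bias should be measured, but one common audit technique is simply to compare a system’s selection rates across demographic groups. The Python sketch below is a minimal, hypothetical illustration of that idea: the data, the group labels, and the 0.8 threshold (borrowed from the US “four-fifths” rule of thumb) are all assumptions for demonstration, not anything drawn from the Australian documents.

    # Illustrative only: hypothetical screening outcomes, not data from any report.
    # 1 = candidate advanced to interview, 0 = rejected by the automated screen.
    group_a = [1, 1, 0, 1, 1, 0, 1, 1]   # e.g. male applicants
    group_b = [1, 0, 0, 0, 1, 0, 0, 1]   # e.g. female applicants

    def selection_rate(decisions):
        """Fraction of candidates the system selected."""
        return sum(decisions) / len(decisions)

    rate_a = selection_rate(group_a)   # 0.75
    rate_b = selection_rate(group_b)   # 0.38

    # Disparate-impact ratio: under the "four-fifths" rule of thumb, a ratio
    # below 0.8 flags potential adverse impact that warrants a closer audit.
    ratio = min(rate_a, rate_b) / max(rate_a, rate_b)
    print(f"selection rates {rate_a:.2f} vs {rate_b:.2f}; ratio {ratio:.2f}")
    if ratio < 0.8:
        print("Potential adverse impact: this screening model warrants a bias audit.")

Real-world bias audits involve far richer statistical tests and contested definitions of fairness, but the underlying idea, comparing outcomes across groups, is the same one any regulator would need to operationalize.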

A report by the National Science and Technology Council raises concerns about the concentration of generative AI resources in a handful of large multinational technology companies, primarily based in the United States. That concentration poses risks for Australia, which is comparatively weak in core capabilities around large language models and related areas, a weakness attributed chiefly to high barriers to access. Against this backdrop, the discussion paper surveys the approaches countries have taken: Singapore, for instance, relies on voluntary measures, while the European Union and Canada lean toward greater regulation.

The paper notes an emerging international trend toward risk-based approaches to governing AI. To ensure the safe and responsible use of AI and automated decision-making, the government aims to put appropriate safeguards in place, particularly for high-risk applications. It asks stakeholders whether certain high-risk AI applications or technologies should be banned outright and, if so, what criteria should determine such bans. At the same time, it acknowledges that Australia would need to harmonize its governance with that of major trading partners to take advantage of AI-enabled systems at a global scale and to foster AI growth at home.

Stakeholders are also urged to weigh the implications for Australia’s domestic tech sector, and for its current trade and export activities, against the potential benefits of a more rigorous approach to banning high-risk AI activities. Husic highlights the delicate balancing act involved in using AI safely and responsibly: the technology holds tremendous potential, from AI-developed antibiotics that combat superbugs to tools that prevent online fraud, but appropriate safeguards are essential to instill trust and inspire public confidence in these critical technologies.

The Australian government has backed AI development with investment, allocating $41 million to establish the National AI Centre under the national science agency, CSIRO, and launching a new program, Responsible AI Adopt, to help small and medium enterprises take up responsible AI practices. The paper acknowledges that AI is already regulated to some extent under Australia’s existing laws, which are designed to be technology-neutral: consumer protection, online safety, privacy, and criminal law all apply to AI activities. Penalties have already been imposed under these laws; the hotel booking website Trivago, notably, was penalized for misleading consumers through algorithmic decision-making.

The potential risks of AI have also raised legal concerns. In April, an Australian regional mayor said he would sue OpenAI unless it corrected false claims by ChatGPT that he had been involved in a bribery scandal; such a suit would be the first defamation case brought over an automated text service. In May, the eSafety commissioner warned that predators could exploit generative AI programs to automate child grooming. These instances underscore the need for regulatory frameworks to mitigate the risks posed by AI.

Labor MP Julian Hill, who has previously told parliament of his concerns about uncontrollable military applications of AI, has called for the establishment of an Australian AI Commission to regulate AI practices. The proposal reflects growing recognition that comprehensive governance and oversight are needed to ensure AI technologies are used responsibly and safely.

Conclusion:

Australia’s contemplation of a ban on high-risk AI applications underscores its commitment to addressing the potential harms of deepfakes and algorithmic bias. The government’s focus on safe and responsible AI deployment, together with stakeholder consultation, reflects a proactive effort to balance the technology’s benefits against its risks. The concentration of generative AI resources in a few multinational companies, however, presents challenges that will require careful governance.

At the same time, opportunities exist for the domestic tech sector to thrive by leveraging AI-enabled systems on a global scale. Regulation, and potentially an AI Commission, could play a crucial role in fostering trust and public confidence and in ensuring the responsible use of AI technologies in the market.
