OECD’s Revised AI Definition: Influencing the EU’s AI Act

TL;DR:

  • The OECD has updated its definition of Artificial Intelligence (AI), which is set to be incorporated into the EU’s upcoming AI regulation.
  • The OECD’s role in shaping AI policy stems from its historical significance and its influential principles for trustworthy AI.
  • The EU has been aligning with the OECD’s AI definition, but uncertainties arose due to the OECD’s own revisions.
  • The rationale behind the update includes international alignment, reflecting developments, enhancing technical accuracy, and future-proofing.
  • The revised definition removes the requirement for human-defined objectives and expands the scope of AI outputs.
  • This development highlights ongoing collaboration between international organizations and the EU in regulating AI.

Main AI News:

In a pivotal development for the European Union’s forthcoming AI legislation, the Organisation for Economic Co-operation and Development (OECD) has recently unveiled its updated definition of Artificial Intelligence (AI). This revised framework is poised to play a central role in shaping the EU’s AI Act, which is currently in the final stages of formulation. Let’s delve into the details of this significant development and its potential implications for the regulation of AI.

The OECD’s Role in Defining AI 

The OECD, originally founded to oversee the Marshall Plan’s reconstruction efforts in post-World War II Europe, has evolved into an influential international platform for economic collaboration, boasting 38 member countries. In 2019, the organization introduced a set of principles for trustworthy AI policies, including an initial definition of AI.

With the recent decision by the OECD Council on this matter, the definition has now been officially updated, carrying profound implications for the impending EU AI regulation. This definition is pivotal, as it delineates the scope of the forthcoming law.

According to the updated definition, “An AI system is a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that [can] influence physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment.”

Alignment with the EU’s AI Act 

The EU’s Artificial Intelligence Act represents a legislative proposal aimed at regulating AI to mitigate potential harm. EU institutions are diligently working towards finalizing this groundbreaking AI legislation by year-end. In a noteworthy move, MEPs involved in crafting the AI Act decided to align with the OECD’s AI definition to maintain semantic consistency with international partners.

However, EU lawmakers grappled with the challenge that the OECD itself was in the process of updating its AI definition in response to technological and market developments. Consequently, the parliamentarians worded the definition in a manner intended to anticipate the OECD’s future revisions, an effort that proved remarkably accurate.

As the AI Act progressed to the concluding phase of the legislative process, known as “trilogues,” where the EU Commission, Council, and Parliament collaborate to finalize provisions, policymakers opted to suspend discussions on the definition until the OECD finalized its stance.

The Rationale for Change 

The rationale behind updating the AI definition, as revealed in a joint presentation, encompasses several key factors. These include the need for international alignment with AI definitions, reflecting developments over the past five years, enhancing technical precision and clarity, and future-proofing the definition.

One notable alteration was the removal of the reference stipulating that objectives must be human-defined, to encompass scenarios where AI systems can learn new objectives. According to a draft explanatory memorandum, “design objectives can be supplemented by user prompts when the system is in operation,” mirroring the case with foundation models.

The memorandum also highlights the potential for misalignment between explicit objectives and outputs, particularly in terms of unforeseen consequences. The introduction of the phrase ‘infer how to generate outputs’ addresses instances where AI models process environmental inputs and produce appropriate outputs through algorithms.

Moreover, the revised definition expands the types of output AI can generate, including content such as text, videos, or images, aligning with generative AI models like ChatGPT and Stable Diffusion. Lastly, the reference to adaptiveness acknowledges that some AI systems can evolve post-design and deployment, particularly those employing machine-learning techniques.

With the OECD’s updated AI definition now official, it is poised for integration into the EU’s AI legislation. It is worth noting, however, that EU policymakers have had access to the revised definition since mid-October, and as of now no internal text reflecting the change has been circulated. This development underscores the ongoing collaboration and alignment between international organizations and the European Union in shaping the regulatory landscape for artificial intelligence.

Conclusion:

The OECD’s updated AI definition, now poised for integration into the EU’s AI legislation, signifies a crucial step towards harmonizing AI regulations internationally. This alignment enhances clarity and consistency in AI policy, offering businesses operating in the AI sector a more predictable and standardized regulatory framework, ultimately fostering greater trust and innovation in the market.
