EU Council Paves the Way for Innovation Measures in AI Act Negotiations

TL;DR:

  • EU Council and Parliament engage in trilogue discussions on the AI Act, focusing on innovation measures.
  • Member states present differing positions on the establishment of regulatory sandboxes for AI experimentation.
  • Debate centres on granting a presumption of conformity to AI developers exiting sandboxes, amid concerns about compliance control and fair competition.
  • EU countries express varying opinions on the inclusion of real-world testing in the AI Act, considering potential risks.
  • EU member states discuss legal instruments, favoring implementing acts over delegated acts for determining sandbox modalities and SME conditions.

Main AI News:

In the ongoing trilogues to shape the AI Act, EU member states have taken a firm stance on innovation measures, highlighting their commitment to fostering a culture of technological advancement. The AI Act, a groundbreaking legislative proposal aimed at regulating Artificial Intelligence based on its potential to cause harm, has reached its final phase of discussions among the EU Council, Parliament, and Commission.

Scheduled for 18 July, the next trilogue holds great significance as EU policymakers are poised to find common ground on several contentious aspects of the text, particularly those related to innovation measures. Notably, the positions of the co-legislators appear to be complementary rather than conflicting, setting a positive tone for constructive dialogue.

During a recent meeting of the Telecom Working Party, a technical body of the Council, the Spanish Presidency of the EU Council of Ministers presented an options paper on the articles concerning sandboxes and innovation (Articles 51-55). The paper, seen exclusively by EURACTIV, was intended to give member state delegations a preview before the revised mandate is presented to the Committee of Permanent Representatives (Coreper).

Regulatory Sandboxes: Balancing Control and Access

One significant aspect discussed in the options paper is the establishment of regulatory sandboxes. These controlled environments offer companies the opportunity to experiment with new AI applications under the watchful eye of competent authorities, thereby fostering innovation while ensuring safety and accountability.

While the initial version of the AI Act gave national authorities the option to create sandboxes, the European Parliament advocated making this provision mandatory. Its rationale was to guarantee that companies in smaller member states would also have access to these valuable resources. The EU Council, however, maintains that establishing sandboxes should remain optional for national authorities.

During the discussions, four countries showed their support for the European Parliament’s approach, endorsing the idea that sandboxes could be established jointly with other member states. Moreover, an additional four national governments voiced their approval, suggesting that countries could participate in sandboxes at the EU level.

Presumption of Conformity: Encouraging AI Development

Another point of contention revolves around the presumption of conformity for AI developers who exit a sandbox. The European Parliament proposed granting these developers the presumption of conformity for their systems, incentivizing their participation. However, the Spanish Presidency raised concerns that this approach might undermine the supervisory authorities not involved in the sandbox, potentially diminishing their control over the compliance process. Furthermore, this approach could create an unlevel playing field, disadvantaging companies not participating in sandboxes.

EU member states showcased a diversity of opinions on this matter. While nine member states advocated for preserving the Council’s text, only one expressed support for the Parliament’s position. Five national governments suggested accepting the parliamentarians’ text on the condition that the sandbox results be included as a requirement for the declaration of conformity for high-risk AI systems, to be taken into consideration by the relevant authorities and vetted auditors.

Real-World Testing: Balancing Progress and Safety

The Council’s position emphasized the importance of real-world testing, allowing AI providers to assess their models outside the confines of laboratories and sandboxes. This approach enables more realistic experimentation while requiring adherence to safe testing procedures and authorization from the relevant market surveillance authority. However, MEPs voiced concerns that even under these conditions, real-world testing could pose risks to people and therefore did not include this possibility in their proposal.

Among EU member states, seven expressed support for the Council’s text, which includes provisions for real-world testing. Conversely, only one country stood behind the Parliament’s approach. However, four nations displayed openness to limiting real-world testing within the regulatory sandbox, suggesting a potential compromise.

Determining Legal Instruments: Shaping the Future Framework

Lastly, EU countries discussed the appropriate legal instruments the European Commission should employ to determine the operational modalities and conditions for regulatory sandboxes, as well as the specific considerations for small and medium-sized enterprises (SMEs). The overwhelming majority of member states advocated for implementing acts, which would involve a committee of national representatives. This approach contrasts with the delegated acts favored by the EU Parliament, where MEPs possess a more influential role.

Conclusion:

The EU Council’s determination to incorporate innovation measures in the AI Act signifies a proactive approach toward shaping AI governance. The ongoing discussions on sandboxes, conformity presumption, real-world testing, and legal instruments reflect the delicate balance between promoting technological advancement and ensuring regulatory responsibility. This commitment to fostering innovation while addressing potential risks will contribute to a dynamic and competitive AI market in Europe.