TL;DR:
- The Spanish presidency of the EU Council has proposed a governance structure for overseeing obligations on foundation models and high-impact foundation models.
- These obligations, including those affecting models like OpenAI’s GPT-4, are being discussed within the AI Act framework, aiming for a risk-based approach to AI regulation.
- The EU’s AI law is in its final legislative phase, with governance proposals set to influence ongoing discussions.
- The European Commission will have exclusive powers to supervise foundation model obligations, enforce provisions, and conduct audits.
- High-impact foundation models may undergo adversarial evaluations by red teams, potentially external, with strict criteria for vetted red-teamers.
- Sanction regimes for non-compliance are proposed, though specific penalties remain undecided.
- A ‘governance framework’ featuring the AI Office and a scientific panel is introduced to support Commission activities.
- The scientific panel’s tasks include evaluating foundation model capabilities and monitoring safety risks.
- A revised procedure for addressing non-compliant AI systems at the EU level is outlined, including potential market withdrawal.
Main AI News:
In the evolving landscape of AI regulation, the governance of foundation models within the European Union’s AI law is taking shape. Spearheading this effort is the Spanish presidency of the EU Council of Ministers, which has put forth a comprehensive governance architecture aimed at overseeing the obligations imposed on both foundation models and high-impact foundation models. This innovative approach includes the establishment of a scientific panel to provide expert guidance.
The obligations imposed on foundation models, such as OpenAI’s GPT-4, which powers ChatGPT, the world’s most renowned chatbot, are currently under deliberation within the framework of the AI Act. This legislative proposal adopts a risk-based approach to regulate Artificial Intelligence.
The AI law is now in its final phase of the legislative process, involving trilogies between the EU Council, Parliament, and Commission. Consequently, the governance approach presented by the presidency holds significant sway in ongoing discussions.
According to the proposed text, the European Commission will be endowed with exclusive powers to supervise the obligations pertaining to foundation models, including those deemed ‘high-impact’ and subject to stricter regulations. The Commission can initiate investigations and enforce these provisions independently or in response to complaints from AI providers with contracts with the foundation model provider or from the newly established scientific panel.
To ensure effective monitoring, the Commission will define the procedures for enforcing obligations among foundation model providers through implementing acts. This includes specifying the role of the AI Office, appointing the scientific panel, and outlining the modalities for conducting audits.
The Commission will also have the authority to conduct audits on foundation models, with a focus on the input from the scientific panel. These audits aim to assess the provider’s compliance with the AI Act and investigate safety risks triggered by a qualified report from the scientific panel. The audits may be performed directly by the Commission or delegated to independent auditors or vetted red-teamers who can request access to the model through an Application Programming Interface (API).
For high-impact foundation models, the Spanish presidency suggests adversarial evaluations by red teams, potentially from the provider itself or externally, with the Commission having the power to designate ‘vetted red-teamers’ in case of external evaluation. These vetted testers must possess expertise and independence from the foundation model providers and uphold diligence, accuracy, and objectivity in their evaluations. The Commission will establish a register of vetted red-teamers and define the selection procedure through delegated acts.
In cases where serious concerns about risks arise from audits, the EU executive can engage in dialogue with foundation model providers and request the implementation of necessary measures to ensure compliance with the AI law and mitigate risks.
Additionally, the Commission can request documentation from foundation model providers regarding the capabilities and limitations of their models. This documentation may also be accessible through downstream economic operators who build AI applications on the foundation models. If concerns regarding potential risks emerge from this documentation, the Commission can request further information, initiate a dialogue with the provider, and mandate corrective measures.
The Spanish presidency has also introduced a sanction regime for foundation model providers who fail to adhere to AI Act obligations or cooperate with requests for documentation, audits, or corrective measures. The specific percentage of the total worldwide turnover to be levied as a penalty is yet to be determined.
Furthermore, the presidency has proposed the establishment of a ‘governance framework’ for foundation models, including ‘high-impact’ ones, featuring the AI Office and a scientific panel to support the Commission’s activities. These activities encompass regular consultations with the scientific community, civil society organizations, and developers to assess the state of AI model risk management and promote international cooperation.
The scientific panel’s responsibilities involve contributing to the development of methodologies for evaluating the capabilities of foundation models, offering guidance on the designation and emergence of high-impact foundation models, and monitoring potential material safety risks associated with foundation models. Panel members will be selected based on recognized scientific or technical expertise in AI and must act objectively while disclosing any potential conflicts of interest. They may also seek approval as vetted red-teamers.
Lastly, the presidency has outlined a revised procedure for addressing non-compliant AI systems posing significant risks at the EU level. In exceptional circumstances where the functioning of the internal market is jeopardized, the Commission may conduct an emergency evaluation and impose corrective measures, including market withdrawal.
Conclusion:
The proposed governance framework within the EU’s AI law signifies a concerted effort to regulate AI models, particularly foundation models and high-impact variants. It introduces rigorous oversight, audits, and evaluation procedures. Market players should anticipate increased regulatory scrutiny and compliance requirements, which could shape the AI landscape and influence strategic decisions in this evolving industry.