TL;DR:
- Ongoing negotiations for the world’s first AI treaty face difficulties over the inclusion of private companies.
- The Convention on AI, Human Rights, Democracy, and the Rule of Law, led by the Council of Europe, struggles to reach a consensus.
- The United States and other observer countries seek to limit the treaty’s scope to public entities, introducing an ‘opt-in’ mechanism for private firms.
- Recent developments indicate a consistent weakening of the treaty’s provisions, with exemptions for national security and defense that are broader than those in the EU’s AI Act.
- Political considerations play a significant role, with the U.S. unlikely to ratify the treaty, aiming to address AI-related human rights concerns without binding commitments.
- The Council of Europe and Switzerland may gain diplomatic victories, but concerns arise about the treaty’s credibility due to its limited private sector focus.
Main AI News:
The quest for a comprehensive international treaty on Artificial Intelligence (AI) remains contentious, as recent developments show further softening of the text. Negotiations have hit a stumbling block over the inclusion of private companies, further diluting an already weakened draft.
The Convention on Artificial Intelligence, Human Rights, Democracy, and the Rule of Law represents a groundbreaking binding treaty being crafted within the Council of Europe (CoE), a human rights institution comprising 46 member states. Countries such as the United States, Canada, Israel, and Japan participate as observers; they lack voting rights but exert influence through their potential to withhold signatures.
From the outset, Washington and its allies, including the United Kingdom, have championed restricting the treaty’s applicability primarily to public entities, while introducing an ‘opt-in’ mechanism for private companies. This approach has encountered resistance, with European governments urging flexibility to prioritize the treaty’s international adoption.
Despite the pressure, the European Commission, as Euractiv revealed, presented an ‘opt-out’ option aimed at accommodating the US administration’s concerns during a recent plenary of the Committee on Artificial Intelligence. The plenary nonetheless failed to reach an agreement due to the inflexibility of both sides, leaving only the EU’s opt-out and the US’ opt-in options on the table.
A Consistent Trend of Dilution
To meet the May deadline for ministerial adoption, the next pivotal plenary in mid-March looms large. However, the treaty’s provisions have been systematically diluted by the European Commission’s insistence on aligning the text closely with the AI Act, even where the human rights treaty could have extended beyond it without conflicting with EU product safety regulations.
This alignment has led to the introduction of exemptions for national security, defense, and law enforcement that are even broader than those in the AI Act. These exemptions could create significant loopholes for AI systems used in both civilian and military contexts. If the treaty were to cover only public bodies, it would result in an exceedingly limited scope, with the original text receiving little support.
In the latest revised version of the text, as seen by Euractiv on January 26th, the convention’s weakening has progressed to the point where it resembles more of a declaration than a binding treaty. Key provisions, such as safeguarding the democratic process and procedural guarantees, now merely suggest that signatory parties should “seek to ensure” adequate measures, implying no binding obligation.
Crucial elements have been omitted, including the protection of health and the environment, measures to promote trust in AI systems, and the requirement for human oversight in AI-driven decisions affecting human rights. Research activities are also excluded from the treaty’s scope, with the possibility of extending this exception to the development phase, potentially leaving the initial stages of AI systems devoid of human rights protections.
Furthermore, in terms of risk management, the obligation to publish details of risk analysis and mitigation measures, even if only ‘where appropriate,’ has been removed.
Political Considerations at Play
Insiders involved in the AI convention, who must remain anonymous due to the sensitive nature of the discussions, have cited political motivations as a significant factor behind observer countries like the United States wielding considerable influence in shaping the treaty.
These political motives are underscored by the realization that although the US administration might sign the treaty, the likelihood of Congress ratifying it, the step required to make it legally binding, is close to nil. Meanwhile, Washington can portray itself as addressing AI-related human rights violations without committing to binding language that could set a precedent in other international forums, such as the United Nations.
Simultaneously, the United States’ signature could be viewed as a significant diplomatic victory for the Council of Europe, which would be home to the world’s first genuinely international AI treaty. This development coincides with the race for the institution’s top leadership position, adding an extra layer of intrigue.
Switzerland also stands to gain from this situation, given that the chair of the AI Committee is a Swiss representative, Thomas Schneider. Interestingly, one of the leading candidates for the top position in the CoE is former Swiss President Alain Berset.
Insiders familiar with the negotiations have confirmed that the chair and the CoE’s secretariat played a non-neutral role during the discussions, aligning themselves with the arguments of the US and other observer countries while sidelining opposing viewpoints. As one participant noted, “An international organization committed to safeguarding human rights but failing to secure commitments from the private sector, where most violations originate, risks losing credibility and jeopardizing its legitimacy.”
Conclusion:
The ongoing softening of the international AI treaty and the debate over its scope suggest that the market for AI technology and services may continue to lack comprehensive global regulations. Companies operating in the AI sector should closely monitor these developments and consider the implications of a potential treaty that primarily addresses public entities while leaving private firms with more discretion.