US-China AI Safety Talks: Forging New Frontiers in Collaboration

TL;DR:

  • President Biden initiates AI safety discussions with China after a high-profile summit.
  • Three key points of agreement emerged from the summit, with AI the standout issue.
  • The US administration aims to establish global norms for military AI use.
  • China shows receptiveness, particularly concerning AI control of nuclear weapons.
  • Ambiguity surrounds whether a binding ban or voluntary norms will be agreed upon.
  • The US leads a global push for responsible AI use, with 45 countries endorsing the approach.
  • China resists endorsing “responsible” practices, preferring diplomatic ambiguity.
  • Efforts to create binding AI laws face hurdles, with consensus issues in Geneva.
  • A UN General Assembly resolution seeks views on AI weaponry, gaining strong support.
  • The US may achieve progress in establishing AI governance norms with allies.

Main AI News:

In the aftermath of a high-profile meeting between US President Joe Biden and Chinese President Xi Jinping, a strategic triad emerged. While much attention has been devoted to the resumption of military-to-military communications and counternarcotics collaboration, the third and most intriguing pillar to emerge on the US-China agenda was none other than artificial intelligence (AI).

During a press conference following the summit, President Biden unveiled the groundbreaking initiative, stating, “Thirdly, we’re convening our foremost experts to engage in comprehensive discussions concerning the risks and safety aspects inherent to artificial intelligence. As those who accompany me across the globe are well aware, every major leader seeks discourse on the repercussions of artificial intelligence. These concrete steps represent progress in the quest to discern what is beneficial, what is perilous, and what is acceptable.”

While the press conference primarily focused on issues such as fentanyl, Taiwan, and Gaza, President Biden refrained from delving into the specifics of the AI plan. Nevertheless, the administration’s commitment to AI was underscored not only by a sweeping executive order but also by a relentless push for global standards governing the military utilization of AI.

Remarkably, China has demonstrated a willingness to engage in discussions, especially regarding keeping AI out of command-and-control systems for nuclear weapons. Although this connection between AI and nuclear armaments was not explicitly addressed in either President Biden’s comments or the White House’s official statement, experts had hinted at its potential significance even before the summit. The Pentagon has designated China as America’s “pacing” adversary, and amidst ongoing tensions, an agreement on nuclear command and control is seen as low-hanging fruit.

“Any agreement, I believe, that we are poised to reach with the Chinese will likely be straightforward,” noted Joe Wang, a former State and NSC staffer now associated with the Special Competitive Studies Project. Wang and other experts interviewed by Breaking Defense concurred that “nuclear C2” holds promise as an ideal candidate for alignment. After all, no one desires AI-controlled nuclear armaments, not even the most resolute dictators.

“The Chinese have expressed interest in participating in discussions to establish regulations and norms for AI, and we should welcome that,” emphasized Bonnie Glaser, the head of the Indo-Pacific program at the German Marshall Fund. Reciprocally, she stated, “The White House is interested in engaging China on limiting the role of AI in command and control of nuclear weapons.”

Bans or Norms?

Expectations of a joint statement have been running high since the South China Morning Post reported that “Presidents Joe Biden and Xi Jinping are poised to pledge a ban on the use of artificial intelligence in autonomous weaponry, such as drones, and in the control and deployment of nuclear warheads.” The term “ban” raised eyebrows among experts, as there is no indication that either China or the US would accept binding restrictions on their AI capabilities.

In fact, US law may prohibit the President from making such commitments without congressional approval. Some reports merely suggested that “China is seeking an expanded dialogue on artificial intelligence,” while Matt Murray, the US Ambassador to APEC, expressed doubt about an “agreement” on AI during a pre-summit press briefing.

Tong Zhao, a scholar at the Carnegie Endowment, remarked, “The two sides may not be there yet in terms of reaching a formal agreement on AI.”

A Global Initiative

The endeavor transcends the US-China dynamic. Over the past nine months, the United States has been building momentum toward voluntary international norms governing the military use of AI. This approach not only addresses autonomous weapons like drones but also encompasses applications ranging from intelligence analysis algorithms to logistics software. The intention is to avert calls for a binding ban on “killer robots,” leaving room for responsible use of this rapidly evolving technology by the US and its allies.

The American initiative was twofold. In February, the Pentagon unveiled a comprehensive overhaul of its military AI and autonomous systems policy. Subsequently, at The Hague, the State Department’s Ambassador-at-Large for Arms Control, Bonnie Jenkins, introduced a “Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy” that outlined the US approach for international adoption. Since then, 45 countries have joined the US in endorsing this declaration, including core allies such as Australia, Britain, France, Germany, and South Korea, as well as nations facing complex geopolitical challenges.

Unsurprisingly, China has not aligned with the US-led approach. “Its diplomatic strategy is still focused on rivaling and counterbalancing US efforts to set future AI governance standards, especially in the military sphere,” explained Tong Zhao. “In managing new military technologies, China frequently resists endorsing ‘responsible’ practices, contending that ‘responsibility’ is a politically charged concept lacking objective clarity.”

In diplomacy, ambiguity can serve a purpose. “The Political Declaration… is not binding, so I think it gives us some flexibility,” commented Joe Wang.

However, this flexibility, which allows for a wide range of automated weaponry, contradicts the aspirations of activists seeking a binding ban.

Catherine Connolly, a lead researcher at the international activist group Stop Killer Robots, stated, “We obviously would like to see the US now moving towards clear and strong support for legal instruments restricting lethal autonomous weapons systems. We don’t think that guidelines and political declarations are enough, and the majority of states don’t think they’re enough either.”

The Fear of ‘Killer Robots’

Efforts to establish new international laws have been hindered by a decade of impasse in Geneva, where the UN-convened Group of Government Experts has consistently failed to reach a consensus, a prerequisite in the Geneva process. Consequently, the anti-AI-arms movement shifted its focus to the United Nations General Assembly in New York, proposing a draft resolution. Rather than calling for an immediate ban, which would have likely failed, the resolution merely “requests the Secretary-General to seek the views of Member States, industry, academia, and non-government organizations, submit a report, and officially put the issue on the UN agenda.”

The resolution passed with overwhelming support, with 164 votes in favor and only five against, including Russia. China, on the other hand, abstained from voting.

Catherine Connolly observed, “It’s great that the US has joined the large majority of states in voting yes. It’s a little disappointing that China abstained, given that they have previously noted their support for a legal instrument. There were some parts of the resolution that they didn’t agree with in terms of characteristics and definitions.”

Beijing has adopted a narrow definition of “autonomous weapon,” one that excludes most autonomous systems from the purview of a potential ban, as long as these systems retain human oversight and can be deactivated by a human operator.

“China seems hesitant to enhance the United Nations General Assembly’s role in regulating military AI,” noted Tong Zhao. Beijing’s preference remains the Group of Government Experts in Geneva, where the consensus requirement provides a de facto veto.

Binding law may not be on the horizon, but experts see room for progress if the US can rally its allies.

James Lewis, a scholar at CSIS, observed, “Binding law is not in the cards, but if the US can pull in others like the UK, France, and maybe the EU into a comprehensive effort, there can be progress on norms.”

Conclusion:

The US-China AI safety talks indicate a growing global interest in defining norms for AI in military applications. While the specifics remain uncertain, the alignment of interests between major nations on AI control in nuclear weaponry suggests potential progress in establishing governance standards. However, the complexities of regulating conventional AI weaponry pose ongoing challenges for both industry and international diplomacy.
