Moves to Prohibit Live Facial Recognition Technology in the EU

TL;DR:

  • The European Parliament is set to vote on banning live facial recognition technology in the EU.
  • The ban is part of proposed AI laws that could result in fines or expulsion for companies that breach the regulations.
  • The ban is expected to face opposition from center-right MEPs who argue for the use of biometric scanning to combat serious crimes like terrorism.
  • The proposed legislation also aims to prohibit “emotional recognition” by AI and increase transparency in AI development.
  • Concerns have been raised about the potential abuse of live facial recognition by state agencies and border police.
  • Supporters of the ban believe it is a strong safeguard for public spaces.
  • The amended AI Act will be presented to the wider parliament in June and is expected to pass by the end of the year.
  • The EU’s AI regulations may become a global standard, with companies voluntarily adopting them.
  • The legislation aims to address employment market disruption and curb the spread of fake news and human rights infringements.
  • A Dutch Green Party MEP strongly opposes the use of AI-facilitated live scanning.
  • The AI Act has been under development for almost two years and includes recent amendments to address risks associated with AI systems like ChatGPT.

Main AI News:

In a momentous vote at the European Parliament on Thursday, lawmakers will test the waters for banning the use of live, “Big Brother” facial recognition technology on the streets of the European Union and by border officials. This proposed amendment is part of a groundbreaking package of proposals encompassing the world’s first artificial intelligence (AI) laws. Breaches of these regulations could lead to hefty fines of up to €10 million (£8.7 million) or even expulsion from trading within the EU for offending companies.

The prohibition on live facial recognition technology constitutes one of twelve sets of compromise amendments agreed upon by a committee of Members of the European Parliament (MEPs), meticulously whittled down from over 3,000 submissions received a year ago. However, the ban, outlined in the final text that will be subject to the vote, is anticipated to face opposition from a cohort of center-right MEPs. They argue that biometric scanning is crucial in the fight against severe crimes such as terrorism.

Should this legislation pass, it will also outlaw the use of “emotional recognition” AI, which could potentially be used by employers or law enforcement agencies to identify fatigued workers or drivers. Charitable organizations have raised concerns that live facial recognition could be susceptible to misuse by state authorities and border police.

Nevertheless, Dragos Tudorache, co-rapporteur of the AI Act in the European Parliament, expressed hope that there would be substantial support to enforce the ban, stating, “There is no stronger safeguard [than this ban]. A border crossing point is a public space. According to the text we have right now, you will not be able to deploy AI biometric recognition technology in a public space.”

The AI Act will also mandate transparency for those developing artificial intelligence, requiring them to disclose the literary works, scientific research, musical works, and other copyrighted materials used to train machine learning models. This provision will enable artists, academics, and others to pursue legal action if they believe copyright laws have been violated.

Co-rapporteur Brando Benifei expressed his optimism that the legislation would address concerns about AI’s potential disruption of employment markets, as well as the proliferation of fake news, disinformation, and infringements on human rights. He conveyed to reporters, “With our text, we are also showing what kind of society we want, a society where social scoring, predictive policing, biometric categorization, emotional recognition, and discriminatory scraping of facial images from the internet are considered unacceptable practices.”

The amended version of the AI Act will be presented to the wider parliament in mid-June. If approved, it will represent a strong mandate for further discussions with the European Commission and the Council of the European Union. The law is expected to be enacted by the end of this year.

Many experts anticipate that the AI Act will set a gold standard for global regulation, with major players like Google, Microsoft, and social media companies embracing its provisions. This phenomenon, known as the “Brussels effect,” suggests that if the EU takes the lead in establishing sensible standards, other countries will likely adopt the EU rules when formulating their own regulations. Zach Meyers, a research fellow at the Centre for European Reform, explained, “Even if they don’t, companies may voluntarily adopt the EU rules globally because it makes the cost of doing business cheaper.”

Kim van Sparrentak, a Dutch Green Party MEP, vehemently opposes the use of live scanning facilitated by AI, asserting that it runs counter to fundamental rights and poses an unacceptable risk. The AI Act, the first of its kind, has been under development for nearly two years. Recent amendments have been introduced to address the risks associated with “general purpose” AI systems, including ChatGPT.

Conclusion:

The potential ban on live facial recognition technology in the EU, as part of the proposed AI laws, signifies a significant development in the market. If enacted, it could have profound implications for companies involved in the development and deployment of facial recognition systems.

Furthermore, the increased transparency requirements and prohibitions on certain AI applications highlight a growing emphasis on ethical considerations and the protection of individual rights. These regulations are likely to shape the future of the AI market, with the EU setting a potential gold standard for global regulation. As a result, businesses operating in the AI sector will need to adapt to these evolving regulatory frameworks to ensure compliance and maintain public trust in their technologies.
