EU and member states agree on strict limitations for real-time AI-driven biometric surveillance

TL;DR:

  • EU and member states have agreed to strict regulations on the use of AI-driven biometric surveillance.
  • Judicial authorization is required for most AI biometric operations, in both public and private spaces.
  • Exceptional cases, like terrorist threats, require approval within 24 hours and a fundamental rights assessment.
  • Notification requirements have been established, and denied authorizations result in immediate deactivation of the AI system.
  • The goal is to prevent “predictive policing” and discrimination based on algorithmic predictions.
  • A specific list of 16 serious crimes allows for exceptions to the surveillance ban.
  • Member states must comply with these regulations within six months.
  • Additional prohibitions address AI manipulation of human behavior, “social scoring,” and emotional recognition AI in the workplace.

Main AI News:

In a groundbreaking agreement reached after three days of intense negotiations between the European Parliament and EU member states, the use of real-time biometric surveillance driven by artificial intelligence (AI) in policing and national security operations is set to face stringent limitations. These measures, intended to safeguard fundamental rights, will apply in both public and private spaces, encompassing areas ranging from parks to sports facilities.

Under the new regulations, the use of AI-driven biometric surveillance, often compared to George Orwell’s “Big Brother,” will be banned in nearly all circumstances, with exceptions only for specific serious crimes, terrorist threats, or urgent searches for victims. Even in these exceptional cases, law enforcement agencies will be required to obtain approval from a judge or an independent administrative authority before deploying AI biometric tools.

Only in the rarest of situations, such as responding to an active terrorist threat, will the police have the authority to activate AI biometric tools without prior judicial consent. However, within 24 hours of activation, they must still secure authorization and provide a “prior fundamental rights impact assessment” to the appropriate authority.

Additionally, notification requirements have been established, obligating law enforcement to inform the relevant market surveillance authority and the data protection authority. In the event that permission is denied by the judge or administrative authority, the AI system must be promptly deactivated, and all data pertaining to the suspect(s) must be expunged.

These safeguards are expressly designed to prevent the emergence of “predictive policing,” a practice that Members of the European Parliament (MEPs) feared could perpetuate racial profiling and discrimination based on algorithmic predictions.

The EU and MEPs have jointly outlined a specific list of 16 serious crimes that may warrant an exception to the surveillance ban. These crimes encompass terrorism, murder, rape, organized or armed robbery, grievous bodily injury, child sexual abuse, kidnapping, hostage taking, crimes within the jurisdiction of the International Criminal Court, unlawful seizure of aircraft or ships, sabotage, human trafficking, illegal drug trade, trafficking in weapons or radioactive material, and involvement in criminal organizations linked to any of these offenses.

Furthermore, each EU member state is required to implement these AI restrictions within six months of their adoption into EU law. The legislation governing artificial intelligence in the EU will include additional prohibitions aimed at mitigating the societal and ethical risks associated with AI.

These prohibitions extend to AI systems that manipulate human behavior to circumvent free will, as well as systems that facilitate government or corporate “social scoring” similar to China’s “social credit” system. Such measures are designed to prevent the exploitation of individuals based on factors like age, disability, or social and economic status.

Emotion recognition AI, capable of analyzing real-time facial expressions to assess stress or fatigue, will also be strictly prohibited in workplace settings.

MEPs who championed these prohibitions were unwavering in their commitment to ensuring that the EU does not descend into a surveillance state akin to China, where even traffic police can intervene based on perceptions of driver fatigue.

Conclusion:

These stringent regulations on AI biometric surveillance in the EU demonstrate a commitment to protecting fundamental rights and preventing abuses of AI technology. This will likely encourage responsible AI development and usage in the market while reducing the potential for discrimination and privacy violations. Companies operating in this space should be prepared to adapt to these new standards and prioritize ethical AI practices.

Source