The Role of AI in Crime Fighting: Balancing Benefits and Ethical Concerns

TL;DR:

  • AI aids police in emergency calls, reducing operator workload.
  • Facial recognition software raises concerns due to racial bias.
  • Independent tests show improved accuracy overall, but a higher likelihood of false positives for black faces.
  • Predictive policing using AI faces criticism for relying on biased historical data.
  • Ethical considerations and unbiased data are crucial in AI’s role in crime prevention.

Main AI News:

Artificial intelligence (AI) is being harnessed by police forces worldwide on a growing scale. The pivotal question, however, is whether the advantages consistently outweigh the risks.

In a hypothetical scenario, Sarah, a victim of domestic abuse, dials the emergency line, trembling with fear as her ex-husband attempts to break into her home. While Sarah speaks with a human operator, AI software simultaneously transcribes the call and queries UK police databases. The AI swiftly retrieves her ex-husband’s information, highlighting that he holds a gun license and signaling an urgent need for police intervention.

This illustration, though not an actual emergency, stems from a three-month pilot program conducted by Humberside Police last year, featuring AI-powered emergency call software from UK startup Untrite AI. The software, trained on two years of historical domestic abuse call data, is designed to streamline the thousands of calls received daily.

Kamila Hankiewicz, CEO and co-founder of Untrite, explains, “Our AI model scrutinizes the call’s transcript and audio, producing a triage score – low, medium, or high. A high score necessitates police presence within minutes.” Untrite’s trial suggests the software could reduce operators’ workload by nearly a third, both during and after each call.
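Untrite has not published details of its model, so the snippet below is only a minimal sketch of the general idea Hankiewicz describes: cues extracted from the call transcript are combined with database flags (such as a firearms licence) into a single risk score, which is then mapped to a low, medium, or high triage band. The keyword weights, thresholds, and the `triage_call` function are illustrative assumptions, not Untrite’s actual method.

```python
# Illustrative triage-scoring sketch (assumed design, not Untrite's model).
# A real system would use trained speech/NLP models rather than keyword weights.

RISK_KEYWORDS = {                  # hypothetical weights per risk indicator
    "break in": 0.30,
    "weapon": 0.35,
    "threatened to kill": 0.40,
    "injunction": 0.15,
}

def triage_call(transcript: str, has_gun_licence: bool) -> tuple[float, str]:
    """Combine transcript cues and database flags into a low/medium/high band."""
    text = transcript.lower()
    score = sum(w for kw, w in RISK_KEYWORDS.items() if kw in text)
    if has_gun_licence:            # database-lookup flag, as in the pilot scenario
        score += 0.30
    score = min(score, 1.0)
    if score >= 0.6:
        band = "high"              # police presence needed within minutes
    elif score >= 0.3:
        band = "medium"
    else:
        band = "low"
    return score, band

if __name__ == "__main__":
    transcript = "He is trying to break in, he has threatened to kill me before."
    print(triage_call(transcript, has_gun_licence=True))   # -> (1.0, 'high')
```

In a scenario like Sarah’s, the break-in cue plus the firearms-licence flag would push the score into the high band, the case where police presence within minutes is required.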

Untrite is not alone in this endeavor, as other tech companies like Corti and Carbyne in the United States also offer AI-powered emergency call software systems. The next step for Untrite involves deploying its AI system in a live setting, with discussions underway with multiple police forces and emergency services.

The potential for AI to revolutionize crime investigation is evident. It can identify patterns and connections within evidence and swiftly analyze extensive datasets, outpacing human capabilities. However, there have been missteps in its use, notably in AI-powered facial recognition software in the United States.

Reports surfaced last year about the failure of such software to accurately identify black faces, leading to bans in cities like San Francisco and Seattle. Albert Cahn, executive director of the Surveillance Technology Oversight Project (Stop), expresses concern, particularly regarding racial bias.

Facial recognition technology can be employed in three main ways: live facial recognition, retrospective facial recognition, and operator-initiated facial recognition. The UK’s Policing Minister, Chris Philp, urged an increase in searches using retrospective facial recognition technology.

Independent testing conducted by the UK’s National Physical Laboratory (NPL) examined these technologies, revealing improved accuracy. However, it also identified a significant likelihood of false positive identifications for black faces, sparking further scrutiny.
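A simple way to see why that finding matters, quite apart from the NPL’s own far more rigorous methodology, is to compute false positive rates separately for each demographic group in a test set rather than as a single aggregate figure. The group labels and counts below are invented for illustration and do not reproduce the NPL’s results.

```python
# Illustrative per-group false-positive-rate calculation (made-up numbers,
# not the NPL's data or methodology).

def false_positive_rate(false_positives: int, true_negatives: int) -> float:
    """FPR = FP / (FP + TN): how often non-matches are wrongly flagged as matches."""
    return false_positives / (false_positives + true_negatives)

# Hypothetical test counts per demographic group: (false positives, true negatives)
results = {
    "group_a": (2, 9998),
    "group_b": (11, 9989),
}

for group, (fp, tn) in results.items():
    print(f"{group}: FPR = {false_positive_rate(fp, tn):.4%}")
# An aggregate accuracy figure can look good even when one group's FPR is
# several times higher than another's, which is what the scrutiny concerns.
```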

West Midlands Police has established its ethics committee, chaired by Prof. Marion Oswald, to evaluate new tech tools. Prof. Oswald emphasizes the need for rigorous analysis, particularly for tools like facial recognition.

AI’s potential in crime prevention is another aspect to consider. The University of Chicago has developed an algorithm that, its developers claim, can predict future crimes with 90% accuracy. Because such systems are trained on historical data, however, critics warn that predictive policing inherits whatever biases that data contains.
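The Chicago team’s algorithm is a sophisticated event-pattern model; the toy sketch below is not that algorithm, only an illustration of the structural worry critics raise: a predictor trained on recorded incident counts largely projects those counts forward, so any bias in what was historically recorded (for example, heavier patrolling of some areas) resurfaces in the “predictions”. The area names and figures are invented.

```python
# Toy sketch of the historical-data feedback problem in predictive policing.
# NOT the University of Chicago model; it only shows how a naive predictor
# reproduces whatever bias the recorded history contains.

# Hypothetical recorded incidents per area over past years.
# If area_b was simply patrolled more heavily, more incidents get recorded there.
recorded_incidents = {
    "area_a": [14, 12, 15],
    "area_b": [38, 41, 40],   # heavier historical policing -> more records
    "area_c": [9, 11, 10],
}

def predicted_hotspots(history: dict[str, list[int]], top_k: int = 1) -> list[str]:
    """Rank areas by mean recorded incidents and return the top_k 'hotspots'."""
    means = {area: sum(counts) / len(counts) for area, counts in history.items()}
    return sorted(means, key=means.get, reverse=True)[:top_k]

print(predicted_hotspots(recorded_incidents))   # -> ['area_b']
# The model sends patrols back to area_b, which generates still more records
# there: the bias in the historical data is amplified rather than corrected.
```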

As Mr. Cahn points out, the “original sin” of predictive policing is its biased historical data, and crude deployment can lead to disastrous outcomes. Prof. Oswald underscores the need to weigh multiple factors and comprehensive information before making determinations about individuals. In this intricate landscape of AI in crime fighting, ethical considerations and unbiased data remain paramount.

Conclusion:

The integration of AI into crime fighting offers significant benefits in efficiency and data analysis. However, the sector must address critical issues such as racial bias in facial recognition and the ethical implications of predictive policing. AI’s future success in this field will depend on the ability to mitigate these concerns while harnessing its transformative potential.
