The views expressed are those of the author and do not necessarily reflect the views of ASPA as an organization.
By Maria Lungu
January 13, 2025
Artificial Intelligence (AI) is increasingly becoming a key player in various sectors, and policing is no exception. From predictive analytics that anticipate crime hotspots to facial recognition systems aiding in suspect identification, AI promises to revolutionize law enforcement. Predictive policing relies on forecasting models that estimate the likelihood of crime occurring in certain places, or being committed by certain people, based on historical data and algorithmic processing. Today, hundreds of police departments have adopted some form of predictive policing.
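To make the mechanics concrete, the sketch below shows the core logic of a hotspot forecast in its simplest possible form: rank grid cells by historical incident counts and flag the top-ranked cells for patrol. Production systems use far more sophisticated models; the grid cells and incident data here are invented purely for illustration.

```python
# A deliberately simplified sketch of hotspot forecasting: rank map
# grid cells by historical incident counts and flag the top cells.
# Real systems are far more complex; all data here is invented.
from collections import Counter

# Hypothetical historical incidents, each tagged with a (row, col) grid cell.
historical_incidents = [
    (2, 3), (2, 3), (2, 3), (0, 1), (4, 4), (2, 3), (0, 1), (1, 1),
]

def forecast_hotspots(incidents, top_k=2):
    """Rank grid cells by past incident counts and return the top_k."""
    counts = Counter(incidents)
    return [cell for cell, _ in counts.most_common(top_k)]

print(forecast_hotspots(historical_incidents))  # -> [(2, 3), (0, 1)]
```

Even this toy version makes the data-dependence visible: the cells that receive attention are simply the cells with the most records, whatever produced those records.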
Facial recognition software is trained on datasets of categorized images and uses algorithms to match a captured face against a specific database. These systems can match a suspect's face to an existing database or serve as a surveillance measure to identify individuals in public areas. While such applications may enhance policing efficiency, they also raise questions about misuse, inaccuracies, and the erosion of public trust in law enforcement practices.
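A minimal sketch of how such a match is typically scored may help. In most systems, a trained model (not shown here) converts each face into a numeric embedding vector, and a probe embedding is compared against enrolled embeddings by similarity. The identities, vectors, and threshold below are invented for illustration.

```python
# A minimal sketch of face matching by embedding similarity: compare a
# probe embedding against an enrolled database using cosine similarity.
# All identities, vectors, and the threshold are invented.
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Hypothetical enrolled database: identity -> embedding vector.
database = {
    "person_a": [0.9, 0.1, 0.3],
    "person_b": [0.2, 0.8, 0.5],
}

def match(probe, threshold=0.95):
    """Return the best-matching identity, or None if below threshold."""
    best_id, best_score = max(
        ((pid, cosine_similarity(probe, emb)) for pid, emb in database.items()),
        key=lambda pair: pair[1],
    )
    return best_id if best_score >= threshold else None

print(match([0.88, 0.12, 0.31]))  # -> 'person_a'
```

The threshold choice matters: set too low, false matches rise; set too high, genuine matches are missed. That trade-off sits at the heart of the accuracy concerns discussed below.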
However, with this technological leap come pressing questions about regulation at multiple levels: local, national, and global. Who gets to decide how these technologies are deployed, and what safeguards are in place to prevent harm to communities disproportionately targeted by policing? How will agencies ensure that these systems enhance safety without eroding civil liberties? How will AI be controlled so that it serves the public good without infringing on rights or exacerbating inequalities? Integrating AI into policing raises ethical challenges that will require a comprehensive, multi-faceted regulatory approach. While much of the current discussion focuses on the nexus between efficiency and innovation in policing, we must also grapple with questions about unintended consequences: what happens when AI tools misfire, or when flawed predictions lead to over-policing in already marginalized communities?
Artificial Intelligence systems in policing face well-established criticisms in three key areas: bias, privacy, and transparency. Bias arises from the data used and the design of the systems, potentially amplifying existing prejudices against marginalized groups or creating new forms of bias. For instance, historical crime data often reflects systemic inequities, meaning predictive tools can reinforce patterns of over-surveillance and over-policing in Black and Brown communities. Similarly, facial recognition systems have been shown to exhibit higher error rates for people of color, particularly Black women, raising questions about their reliability and fairness.
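The feedback loop critics describe can be shown in a few lines of code. In this invented simulation, two areas have identical true incident rates, but one starts with a heavier historical record; if patrols follow the recorded data and only observed incidents get logged, the skew never corrects itself.

```python
# A toy simulation of the over-policing feedback loop: two areas with
# identical true incident rates, but one starts with more recorded
# incidents. Patrols are allocated in proportion to recorded incidents,
# and only observed crime is logged. All numbers are invented.
true_rate = 10          # actual incidents per period, same in both areas
recorded = {"area_A": 30, "area_B": 10}   # skewed historical record

for period in range(5):
    total = sum(recorded.values())
    for area in recorded:
        patrol_share = recorded[area] / total      # patrols follow the data
        recorded[area] += true_rate * patrol_share # only observed crime is logged
    print(period, {a: round(v, 1) for a, v in recorded.items()})
# The 3-to-1 skew in the record never corrects, even though the
# underlying rates are identical, so the model keeps sending patrols
# to area_A.
```

Nothing in the data distinguishes the two areas except how heavily each was policed in the past, yet the model keeps reproducing that history.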
Privacy concerns are particularly pressing, as the integration of AI in policing increases the potential for mass surveillance. As data collection and analysis costs decrease, law enforcement agencies can more easily monitor individuals on a large scale, raising serious concerns about privacy, free speech, and anonymity. Unchecked surveillance also creates an environment of perpetual monitoring, which can stifle democratic freedoms and disproportionately affect communities already skeptical of law enforcement. Data aggregation can create detailed records of individuals’ past and real-time movements, eroding personal privacy and civil liberties.
Transparency is another significant issue. Many AI systems in policing operate as "black boxes," with decision-making processes that are opaque and poorly understood. This lack of transparency can lead to real-world consequences, such as unjustified police stops or decisions that cannot be easily explained or contested. Companies developing these AI systems can also invoke intellectual property protections to withhold information about how their systems work. This secrecy, compounded by non-disclosure agreements with public agencies, further limits public oversight and accountability, making it difficult to assess the true impact of these technologies. Moreover, a lack of transparency undermines public trust, which is critical for effective and legitimate policing. Communities deserve to know how and why these tools are being used, particularly when their deployment can have profound consequences for civil liberties and justice.
Local and State Levels: Balancing Innovation and Accountability
Regulating predictive policing in the United States presents a complex challenge, requiring careful consideration at both the local and national levels. Academics and practitioners alike have raised concerns about whether workable regulation is feasible in the United States. While some states have taken steps toward establishing oversight mechanisms, these efforts remain uneven and reactive rather than proactive. Local and state ordinances regulating police use of surveillance technologies have been proposed across the US. In 2016, the ACLU's Community Control Over Police Surveillance (CCOPS) campaign pushed for laws requiring oversight and transparency in police technology use. However, by 2020, only fourteen local governments had passed such ordinances, and many proposals stalled due to political challenges.
There is little consensus on how regulation will occur at any level, but the state legislative landscape has evolved much faster than national and global structures. Given the proximity of state governments to community concerns, state-level regulation presents an opportunity to create tailored solutions that balance innovation with accountability. At the state level, where police departments first adopt new technologies, I anticipate that regulatory structures will involve striking a balance between innovation and constitutional accountability. Local and state governments are working toward integrating AI into policing, and this should include a preemptive, specific, and articulable structure focused on transparency, robust oversight, and ongoing education for law enforcement. This might include measures such as requiring public hearings before adopting new technologies, mandating periodic audits of AI systems for bias and accuracy, and establishing independent oversight boards to review complaints and assess system performance, as sketched below. The goal is simple: AI regulatory standards should not be an afterthought.
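The periodic audits proposed above can be made concrete with a small example. The sketch below compares false positive rates across demographic groups for a batch of matching decisions; the records and group labels are invented, and a real audit would involve far larger samples and formal statistical testing.

```python
# A minimal sketch of a periodic bias audit: given labeled outcomes
# from a matching system, compare false positive rates across groups.
# Records and group labels are invented for illustration.
from collections import defaultdict

# (group, predicted_match, actual_match) for a batch of system decisions.
outcomes = [
    ("group_1", True, True), ("group_1", True, False), ("group_1", False, False),
    ("group_2", True, False), ("group_2", True, False), ("group_2", False, False),
]

def false_positive_rates(records):
    """Per-group rate of predicted matches among true non-matches."""
    fp = defaultdict(int)         # predicted match, but wrong
    negatives = defaultdict(int)  # all true non-matches
    for group, predicted, actual in records:
        if not actual:
            negatives[group] += 1
            if predicted:
                fp[group] += 1
    return {g: fp[g] / negatives[g] for g in negatives}

print(false_positive_rates(outcomes))
# e.g., {'group_1': 0.5, 'group_2': 0.667}; a disparity worth flagging
```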
National Level: Creating a Unified Framework?
At the national level, creating a unified regulatory framework is even more challenging, and regulatory development so far has been fragmented. As with state and local efforts, national regulations should emphasize data privacy, algorithmic transparency, and civil rights protection, focusing on preventing the marginalization of vulnerable communities. A federal baseline could ensure minimum safeguards, such as prohibiting the use of AI tools without rigorous bias testing and requiring disclosure of data collection practices to affected communities. The regulation of AI in local policing should start with clear guidelines on its use. Policies must mandate transparency about how AI systems function, the data they use, and the criteria for their deployment. One of the major grievances with predictive policing is the lack of understanding of how these tools are used and how data is stored. Citizens question what constitutional standards are considered in implementation and beyond.
As with state structures, a focus on standards, impact assessments, and education is necessary. Independent review boards comprising community members, technologists, and legal experts could provide an impartial check on the use of AI, ensuring it does not perpetuate bias or injustice. Moreover, training is crucial. Police officers and staff must be adequately trained to use these tools and understand their limitations. This training should emphasize the ethical implications of AI, ensuring that officers are not only equipped with technical skills but also understand the societal impact of their decisions.
The US has a complex regulatory system. Perhaps regulation would be more workable if it were segmented by industry, reflecting each sector's distinct practices and challenges. A one-size-fits-all approach is an expectation we should likely abandon: it fails to account for the unique ethical, operational, and societal challenges specific to different industries, particularly in sensitive areas like policing. Regardless, any legislation introduced must be grounded in ethical guidelines tailored to the unique challenges of policing. This approach would provide the necessary oversight to address the gaps left by the current patchwork of local efforts and create a more coherent and effective regulatory framework.
Author: Maria Lungu is Postdoctoral Research Associate at the Karsh Institute of Democracy at the University of Virginia. She can be reached at [email protected] or [email protected] and followed on Twitter @maria_lungu13