Systemic Approach and Leverage Points in AI Policies. Part II

The views expressed are those of the author and do not necessarily reflect the views of ASPA as an organization.

By Mauricio Covarrubias
January 17, 2024

In the first part of this column, we explored how artificial intelligence (AI) is reshaping complex systems such as healthcare, education, the economy and justice. We discussed the inherent risks of integrating AI into these sectors, including algorithmic biases and unintended consequences, and emphasized the importance of adopting a systemic approach to identify leverage points—strategic areas where targeted interventions can lead to significant and lasting change.

Practical Applications: The Case of Algorithmic Justice

Building upon this foundation, we now turn our attention to a concrete example: the criminal justice system. This sector vividly illustrates the challenges and opportunities of implementing AI in complex environments. By examining the use of recidivism prediction algorithms, we can understand how different levels of intervention—from technical adjustments to paradigm shifts—can be applied to create AI systems that are both effective and equitable.

Parameter Adjustments: Integrating Fairness Metrics

The simplest intervention involves adjusting the metrics used to evaluate algorithmic performance. Traditional metrics like accuracy often fail to capture the disparate impacts of algorithms across demographic groups. To address this, policymakers and developers can incorporate fairness-focused metrics, such as disparate impact ratio, false positive rates or equalized odds, which assess how equitably an algorithm’s decisions affect various populations.

For instance, Angwin et al. (2016) demonstrated that COMPAS, a widely used recidivism prediction tool, was more likely to falsely predict high recidivism risk for Black defendants while underestimating the risk for White defendants. By prioritizing fairness metrics, these disparities can be reduced, minimizing systemic biases. However, while fairness metrics are a crucial first step, they must be accompanied by broader efforts addressing the structural roots of inequality in AI systems.
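To make these metrics concrete, the following is a minimal sketch of how an auditor might compute two of them—the gap in false positive rates between groups and the disparate impact ratio—for a binary risk classifier. The data here are toy values invented for illustration, not COMPAS outputs, and real audits would work with full prediction and outcome records:

```python
# Sketch: group-level fairness metrics for a binary "high risk" classifier.
# All data below are hypothetical; a real audit would use actual records.

def false_positive_rate(preds, labels):
    """FPR = false positives / actual negatives within one group."""
    fp = sum(1 for p, y in zip(preds, labels) if p == 1 and y == 0)
    negatives = sum(1 for y in labels if y == 0)
    return fp / negatives if negatives else 0.0

def disparate_impact_ratio(preds_a, preds_b):
    """Ratio of positive-prediction rates between two groups.
    Values far below 1.0 mean group A is flagged far less often than B."""
    rate_a = sum(preds_a) / len(preds_a)
    rate_b = sum(preds_b) / len(preds_b)
    return rate_a / rate_b if rate_b else float("inf")

# Toy predictions (1 = "high risk") and true recidivism outcomes per group
group_a_preds, group_a_labels = [1, 0, 1, 1], [1, 0, 0, 1]
group_b_preds, group_b_labels = [0, 0, 1, 0], [0, 1, 1, 0]

fpr_gap = abs(false_positive_rate(group_a_preds, group_a_labels)
              - false_positive_rate(group_b_preds, group_b_labels))
dir_value = disparate_impact_ratio(group_b_preds, group_a_preds)

print(f"FPR gap: {fpr_gap:.2f}")                 # → FPR gap: 0.50
print(f"Disparate impact ratio: {dir_value:.2f}") # → Disparate impact ratio: 0.33
```

A large FPR gap is precisely the disparity Angwin et al. documented; policy could require that such gaps stay below a published threshold before deployment.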

Data Infrastructure: Ensuring Representative and Transparent Data

The quality and representativeness of data are essential for algorithmic fairness. Many biases in AI systems stem from historical data reflecting societal inequities. Public policies should mandate the use of representative datasets and enforce standards for data transparency. This includes detailing the provenance, demographics and limitations of the datasets used to train algorithms.

For example, public repositories of inclusive datasets can help build fairer algorithms. Initiatives like the UK’s public data standards require openness and diversity, improving fairness and public trust. Additionally, synthetic data generation can address gaps in representativeness by simulating data that reflect underrepresented populations, mitigating biases while preserving privacy.
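A first step toward the transparency standards described above is simply measuring how a training set's demographic composition compares to a benchmark. The sketch below illustrates such a representativeness check; the group names and benchmark shares are invented for the example, not real census figures:

```python
# Sketch: flag demographic groups that are underrepresented in a training
# set relative to a benchmark population. All figures are hypothetical.

def representation_gaps(dataset_counts, population_shares):
    """Return {group: dataset_share - benchmark_share} for each group."""
    total = sum(dataset_counts.values())
    return {g: dataset_counts.get(g, 0) / total - share
            for g, share in population_shares.items()}

# Hypothetical training-set counts vs. benchmark population shares
counts = {"group_x": 700, "group_y": 200, "group_z": 100}
benchmark = {"group_x": 0.50, "group_y": 0.30, "group_z": 0.20}

for group, gap in sorted(representation_gaps(counts, benchmark).items()):
    status = "UNDERREPRESENTED" if gap < -0.05 else "ok"
    print(f"{group}: {gap:+.2f} ({status})")
```

A check like this, published alongside the dataset's provenance documentation, would give auditors and the public a concrete basis for the transparency mandates discussed above; synthetic data generation could then be targeted at the flagged groups.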

Structural Reforms: Implementing Independent Audits and Accountability Mechanisms

Structural reforms embed accountability into the lifecycle of algorithmic systems. Independent audits are essential for evaluating an algorithm’s fairness, accuracy and transparency. These audits should involve diverse stakeholders, including civil rights organizations, technical experts and affected communities.

For instance, the European Union’s Artificial Intelligence Act mandates regular audits for high-risk AI systems. In criminal justice, such audits could examine whether recidivism prediction tools disproportionately impact certain groups or exacerbate inequalities. Audits also guide corrective actions, such as recalibrating models, improving data quality or redesigning workflows.

Another effective reform is the establishment of algorithmic accountability boards, which oversee the deployment and operation of AI systems in sensitive domains. These boards act as intermediaries between developers, policymakers and the public, ensuring ethical considerations remain central to AI governance.

Paradigm Shifts: Redefining Justice Through AI

The most transformative leverage point involves reimagining the purpose of algorithmic systems in justice. Currently, many AI tools operate under a punitive paradigm focused on efficiency, risk management and crime prevention. While these goals are valuable, they often overlook broader dimensions like rehabilitation, community restoration and systemic inequities.

A paradigm shift toward restorative justice could fundamentally redefine how AI is used in the criminal justice system. Restorative justice emphasizes repairing harm, fostering accountability and rebuilding trust. Embedding these principles into AI systems could prioritize holistic outcomes, such as reducing recidivism through targeted support rather than punishment.

For example, AI could help identify patterns in underlying social issues—such as poverty or mental health challenges—that contribute to criminal behavior. By shifting the focus from punitive measures to addressing root causes, AI can transform into a tool for systemic improvement.

Fostering such a paradigm shift requires public engagement. Policymakers must create inclusive dialogues to ensure affected communities shape how AI is developed and deployed. Initiatives like the Montreal Declaration for Responsible AI demonstrate how collective action can redefine the values guiding technological innovation.

Final Thoughts

The criminal justice system highlights the importance of a systemic approach to AI policy, leveraging technical adjustments, structural reforms and paradigm shifts to address its complexities. By integrating fairness metrics, improving data infrastructure and establishing accountability mechanisms, policymakers can mitigate biases and enhance public trust. However, the most transformative impact arises from reimagining AI’s role to prioritize equity, rehabilitation and community well-being.

As AI evolves, public policies must balance innovation with ethical considerations. Leveraging interventions across multiple levels ensures AI systems serve societal goals while upholding justice and fairness. Through systemic thinking, we can harness AI’s potential to create inclusive, equitable and sustainable technologies.


Author: Mauricio Covarrubias is Professor at the National Institute of Public Administration in Mexico. He is co-founder of the International Academy of Political-Administrative Sciences (IAPAS), founder and Editor of the International Journal of Studies on Educational Systems (RIESED) and a member of the National System of Researchers of CONACYT. He received his Ph.D. from the National Autonomous University of Mexico. He can be reached at [email protected] and followed on Twitter @OMCovarrubias
