
Can AI & Bureaucratic Discretion Coexist?

The views expressed are those of the author and do not necessarily reflect the views of ASPA as an organization. 

By Josemaria Salazar
March 21, 2025

Public administration has long been shaped by the tension between bureaucratic discretion and administrative control. Max Weber envisioned bureaucracy as a rational, rule-bound system meant to ensure fairness and consistency. Herbert Simon later introduced the idea of bounded rationality, showing how real-world decisions are shaped by limited information, cognitive limits and institutional pressures.

Today, artificial intelligence introduces a new layer to this old discussion. While tech enthusiasts blindly praise AI's potential to correct human errors, the reality is far more complex. This column argues that AI, if left unchecked, risks eroding the democratic and humanistic foundations of public administration. It does not eliminate discretion; it reshapes it, typically into forms that are more opaque, less accountable and more prone to reinforcing systemic biases.

The Discretion Dilemma

Discretion has always been central to the practice of governance. Michael Lipsky's Street-Level Bureaucracy theory reminds us that frontline bureaucrats operate in complex, uncertain contexts. They are required to interpret laws and regulations, allocate resources and exercise moral judgment in their daily work. AI, however, threatens to strip them of this discretion, imposing rigid, automated decision structures that leave little room for context or empathy.

This shift raises an urgent question: does it align with Dwight Waldo's vision of public administration as an inherently political and ethical endeavor, or with Foucault's critique of surveillance and discipline as mechanisms of social control?

Consider, for instance, the case of Michigan’s Unemployment Insurance Agency, which implemented the Michigan Integrated Data Automated System (MiDAS) in 2013, a $47 million rules-based AI project. Within two years, the system falsely accused over 40,000 individuals of fraud. Many were later exonerated, but not before enduring severe financial and psychological hardship. Instead of improving efficiency, the system created a new form of automated injustice where bureaucrats had neither the discretion nor the incentive to intervene.

Lipsky would argue that discretion exists to humanize governance. It allows administrators to consider nuance, fairness and social complexity. By contrast, AI lacks the capacity for moral reasoning. It cannot account for unforeseen circumstances. If we delegate this discretion to machines, we risk transforming public administration into an efficient yet indifferent apparatus.

Automation Bias and the Illusion of Objectivity

Complicating this picture is how officials interact with AI. A 2022 study by Saar Alon-Barkat and Madalina Busuioc, conducted in the Netherlands, found that bureaucrats did not blindly follow algorithmic recommendations; rather, they were more likely to accept AI advice when it confirmed their own assumptions or stereotypes. In practice, AI reinforces rather than diminishes bias, and it tends to do so in decisions that affect marginalized communities.

This suggests that AI hasn’t removed discretion from governance; it has distorted it, making it harder to recognize, criticize or reform. Algorithmic decision-making is not a cure for human bias; it codifies it.

Kingdon’s Multiple Streams Framework is especially relevant in this context. It shows how policy change results from the convergence of problems, policies and politics. If AI simply replicates institutional biases under the guise of objectivity, it doesn’t open policy windows—instead it shuts them and traps inequality in place.

Technocratic Governance and the Crisis of Accountability

The deeper AI is incorporated into government, the greater the risk of technocratic drift. Are we shifting decision-making away from democratic institutions and into the hands of opaque algorithms? Here, Habermas's account of the colonization of the lifeworld is uncannily applicable.

Accountability once rested with public institutions and officials; now, decisions are increasingly grounded in algorithmic systems. Who audits the code? Who ensures that marginalized populations are not unfairly denied services by predictive systems? Most critically, who is accountable when AI fails?

This reality became tragically clear in the Dutch childcare benefits scandal. Between 2004 and 2019, an automated fraud-detection system falsely accused thousands of low-income and immigrant families of fraud. The impact was catastrophic: unlawful debt collection, severe psychological trauma and, in some cases, the forced separation of families. The algorithm relied on flawed indicators such as income and nationality, amplifying racism and xenophobia. It was opaque to citizens and nearly impossible for them to appeal. What began as bureaucratic error escalated into a national scandal that ultimately led to the resignation of the Dutch government in 2021.

Conclusion: Designing for Discretion, Not Substitution

The challenge ahead is not merely technological—it is institutional. AI must be integrated within administrative frameworks that reinforce democratic accountability, requiring ex-ante regulatory constraints, ex-post evaluative oversight and ongoing public engagement.

AI-generated decisions should never be beyond explanation. Bureaucrats should be able to justify them in plain terms, and ethical review should be part of everyday administrative life. Most of all, people deserve a voice in the decisions that affect them.

If designed with strong democratic safeguards, AI can do more than improve efficiency; it can help strengthen legitimacy. But the future of public administration should not be written in Python alone. It should be shaped by institutions that preserve human judgment, protect dignity and stay true to the values that make governance matter.


Author: Josemaria Salazar is a dual master's student in Public Administration and Global Public Policy at Suffolk University in Boston. He supports the development of the AI Climate Platform at I2UD, a data-driven tool helping governments in the Global South assess climate risks in informal settlements. He is open to collaboration and can be reached at https://www.linkedin.com/in/salazarjosemaria/


3 Responses to Can AI & Bureaucratic Discretion Coexist?

  1. K.S.

    April 11, 2025 at 1:15 pm

    Amazing work. Well thought out and put together. Be proud of this.

  2. Carlos Rufin

    April 5, 2025 at 3:56 pm

    Excellent article! We urgently need critical analyses of AI in public administration to understand not only the positive potential of AI, but also its risks and limitations with regard to the fundamental goals and values of public administration.

  3. Marco

    March 27, 2025 at 1:27 pm

    Excellent contribution and input with the use of technology.
