
Artificial Intelligence and the Administrative State

The views expressed are those of the author and do not necessarily reflect the views of ASPA as an organization.

By Malik Dulaney
October 22, 2021

This column will delve into the relationship between artificial intelligence (AI) and the administrative state. E-government has been at the forefront of public administration studies, and within that area are administrative systems built on artificial intelligence and algorithm-based technologies. AI systems associated with administrative functions that affect citizens are here and will be an increasing part of administrative computing services. Often these systems perform critical or life-affecting functions, making their mistakes intolerable or even catastrophic. The most important concern is how we will live with and adjust to disruptive AI technologies.

What is Artificial Intelligence?

Artificial intelligence is an inanimate machine, device or program that has the ability to learn, reason, create and make decisions. AI technologies use large datasets and algorithms (instructions) to train computers on how to make decisions. In computer science, such systems were originally called expert systems, then evolved into machine learning and modern-day artificial intelligence. To its users, AI technology functions as a black box: data goes in and results come out. The inner workings of the technology usually aren't transparent, and neither are the consequences.
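The "data goes in and results come out" pattern can be made concrete with a minimal sketch. The example below is invented for illustration (the feature names, training data and nearest-neighbor rule are assumptions, not any real system): a model is "trained" on labeled examples and then emits a decision with no explanation attached, which is exactly the black-box property described above.

```python
# Minimal black-box sketch: train on labeled examples, then emit
# decisions with no rationale. All data here is hypothetical.

def train(examples):
    """'Training' in this toy model is just memorizing labeled examples."""
    return examples

def decide(model, applicant):
    """Return the label of the nearest training example (1-nearest-neighbor).
    The caller sees only the decision, not why it was made."""
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    _, label = min(model, key=lambda ex: distance(ex[0], applicant))
    return label

# Hypothetical training data: (income in $k, debt in $k) -> outcome
model = train([
    ((80, 10), "approve"),
    ((30, 40), "deny"),
    ((60, 20), "approve"),
])

print(decide(model, (75, 15)))  # data goes in, a decision comes out
```

Even in this three-line "model," the decision depends entirely on which examples were chosen for training, which is where the transparency problem begins.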


Artificial intelligence is in widespread use. We come into contact with AI every day without realizing it. When you use automated customer service phone lines, you are using AI. Chatbots on websites are AI. Personal digital assistants, like Siri or Alexa, are AI. When you apply for auto loans, mortgage loans, leasing agreements and credit cards, AI decision technologies are being used to determine creditworthiness.

For public administrative computing, AI has some unique applications. Currently, AI is being used for bail and sentencing, human resource decisions and predictive policing. Governments are using AI chatbots to answer questions from the public. AI is also used for a diverse array of functions such as utility outage prediction, facial recognition at airports, wildfire detection, forecasting ocean activity and other tasks that can combine machine learning with data mining.


Artificial intelligence elicits anxiety among workers who fear being replaced. However, AI is not replacing workers; rather, it may free them from menial duties to focus on tasks that require human interaction and collaboration. AI can automate the mundane functions of government, like interfacing with the public through support phone lines and websites. It is also effective at parsing large datasets, like climate and biometric data.

At the same time, AI can pose ethical and discriminatory problems when it is developed with underlying bias in its algorithms. Case in point: AI systems are used for pretrial risk assessment decisions. These systems are supposed to give judges information to make better-informed decisions, but if there is built-in bias in how certain factors are weighted, they can create unjust outcomes for defendants. If the number of previous arrests is weighted too heavily in the algorithm, it can correlate with over-policing in disadvantaged communities. This may have the effect of creating a discriminatory bias against those defendants without producing an accurate risk assessment.
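The over-weighting argument can be shown with arithmetic. The sketch below is hypothetical (the factors, weights and defendants are invented, not drawn from any real risk-assessment tool): two defendants have identical court-appearance records, but one has more prior arrests, which over-policing alone could explain, and the heavy arrest weight dominates the score.

```python
# Hypothetical pretrial risk score: a weighted sum of two factors.
# Weights and factor choices are invented for illustration only.

def risk_score(prior_arrests, failed_appearances, weights):
    """Linear score: each factor contributes (weight * count)."""
    return (weights["arrests"] * prior_arrests
            + weights["fta"] * failed_appearances)

# Arrests weighted 5x as heavily as failures to appear in court.
weights = {"arrests": 5.0, "fta": 1.0}

# Defendants A and B have identical court-appearance histories;
# B has more recorded arrests, e.g. from a heavily policed neighborhood.
a = risk_score(prior_arrests=1, failed_appearances=1, weights=weights)
b = risk_score(prior_arrests=4, failed_appearances=1, weights=weights)

print(a, b)  # B's score is more than triple A's, driven by one factor
```

Nothing about B's own conduct toward the court differs from A's; the gap in scores comes entirely from how one contested factor is weighted, which is the kind of design choice an audit needs to surface.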

AI is used in human resource hiring, termination, evaluation and productivity assessment. There is a good case for using AI for hiring and onboarding tasks, but other HR functions may require human intervention. There are recent examples of employees of a large retail organization being terminated based solely on automated performance assessments, with the terminations carried out through automated procedures. These employees never had any human interaction regarding their termination. In using AI for human resources applications, the tradeoffs between AI that assists a human decision-maker and fully autonomous decision-making have to be considered. Higher-stakes decisions should have direct human involvement.

Looking Forward

We’ve discussed some problems surrounding AI use in PA. There are many more that could be covered, but they all point to further questions. What are the implications of AI use in government? How do we best scrutinize AI systems? Which systems should be considered high risk? How do underlying AI algorithms function? What is the track record of the technology? Are AI algorithms ethical, equitable and transparent? In evaluating the technology, what was the design process? During machine learning processes, were training examples selected by people, labeled by people and derived by people? What were their biases, and what was done to counteract the biases they injected into the process? What are the rules governing these technologies in the PA realm? How do we govern AI use for citizen-facing applications and exposure?

Given the positive and negative potential of artificial intelligence, we may need to view it as we view nuclear technologies. As such, there is a need for guidelines governing AI use, containment, proliferation and deterrence. Ethics needs to be at the center of every scenario of artificial intelligence use. PA technologists need to be vigilant in evaluating AI technologies that will affect their citizens. They need to conduct full-scale audits and evaluations to determine whether the technology works as expected and whether there are foreseeable negative consequences for their constituency.

Malik H. Dulaney, PhD, CISSP is an information technology professional with the University of Dallas and an adjunct cybersecurity professor with the Gupta College of Business at the University of Dallas. He is also a public sector researcher with research interests in cybersecurity in public and nonprofit organizations, cyber warfare and information technology policy. He can be reached at [email protected].

