Friend or Foe? Equity and Ethics of Artificial Intelligence in Government

The views expressed are those of the author and do not necessarily reflect the views of ASPA as an organization.

By Tanya Settles
April 6, 2023

For generations, policymakers, public administrators and the public have wondered what a future supported by artificial intelligence (AI) might look like. We've been simultaneously intrigued and horrified by the possibility of sentient AI creating and executing public policy decisions, patrolling our streets, protecting our infrastructure, replacing human bureaucrats with greater efficiency, providing health care, and the scariest vision of all: machines outlearning humans as we become the victims of our own invention. Think "Minority Report" meets "The Terminator" in "The Matrix."

We don't have to think much about this future because in many ways it is already here. Current uses of AI include license plate readers, facial recognition in airports and other transportation hubs, fraud detection technology and adaptive learning. The relatively recent arrival of large language models (LLMs) punctuated AI fear with reports of people forming human-like relationships with chatbots built on generative AI models such as OpenAI's ChatGPT. Generative AI works by taking conversational language from the user and generating a response that is highly engineered yet very human in tone. Be polite with "please" and "thank you," and the program will respond in kind; ask a question, and it always produces a response. Need to pass a licensing exam? More often than not, generative AI will produce answers that approach or exceed the passing threshold. It can be used like a search engine, but it can also answer questions that no human has ever posed, responding in an uncannily human way.
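To make that input-and-response loop concrete, here is a minimal sketch of the kind of exchange described above, assuming the OpenAI Python client; the model name and the prompt are illustrative assumptions, not a prescription for any particular deployment.

```python
# Minimal sketch of a generative AI exchange, assuming the OpenAI
# Python client (pip install openai) and an API key available in the
# OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Conversational input from the user; the wording is hypothetical.
response = client.chat.completions.create(
    model="gpt-4",  # model name is an assumption; any chat model works
    messages=[
        {
            "role": "user",
            "content": "Please summarize our city's snow-removal policy. Thank you!",
        }
    ],
)

# The model always returns a reply, engineered to read as human.
print(response.choices[0].message.content)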

This presents ethical and legal questions for governments that may be both consumers and regulators of AI. Even areas of law and policy where decisions are usually settled quickly can become murky when AI enters the scene. AI is less regulated than other technologies, including online intermediaries like Google and Twitter, because the information is AI-generated and unique to the person engaging with the chat. Public policy is nearly non-existent, and the law is vague and untested. People who use generative AI may anthropomorphize the tool and, in some cases, assume that they are interacting with a human when in fact they are not. AI software can mislead and can be exploited by both users and hosts with little accountability. For government, this creates unique problems: community members may interact with an LLM they assume to be human, only to find that they have been offered, and taken, direction from a machine.

Still, the notion of expanding our knowledge and capabilities as governments is intriguing. The first question we must ask is whether we're replacing human thought or augmenting it. Either way, there's a moral obligation to be transparent and clear with the community about whether they're interacting with a live agent or AI. We're also obligated to come to terms with the idea that AI is imperfect because it is created by humans who, by definition, are fallible. There's little doubt that AI-generated decisions would be more efficient in terms of cost, speed and possibly accuracy, but would these decisions produce equitable outcomes? Furthermore, can AI be used to tackle the most challenging policy decisions we currently face, those related to eradicating structural and systemic inequity in public policy?

Probably not. Machines are not burdened with moral judgment. They generate responses based on a complex series of algorithms and are unable to intuitively recognize dimensions of human diversity. They only "know" what humans tell them. On its face, this may appear to be a form of decision making unclouded by human emotions like empathy, remorse or an innate sense of fairness. Similar situations with similar data characteristics will always produce similar results. And while this might sound a lot like equity, it isn't. Treating people equally doesn't always result in equity, especially when realized outcomes are different for people who were categorized as "others" by public policy decisions made long ago.

When algorithms are based on law and policy founded in structural and systemic inequity, machine-generated results will continue to perpetuate that inequity. In this sense, AI will do little to help us. Maybe there's a use for LLMs in terms of efficiency improvements, but today, right now, tackling inequity is a human task. Step aside, LLMs. Ensuring equity in public policy for the future is something we'll have to do on our own, using the human attributes of empathy, courage and vulnerability, all the qualities that people are experts in but machines lack. For now.


Author: Tanya Settles is CEO of Paradigm Public Affairs, LLC. Tanya's areas of work include relationship building between local governments and communities, restorative justice, and the impacts of natural and human-caused disasters on at-risk populations. Tanya can be reached at [email protected]. The opinions in this column and any mistakes are hers alone.
