
Ask ChatGPT if Government Is Ready for the Challenges of Artificial Intelligence

The views expressed are those of the author and do not necessarily reflect the views of ASPA as an organization.

By Bill Brantley
February 3, 2023

In the two months after ChatGPT was opened to the public to use for free, 30 percent of professionals admitted to using the artificial intelligence application to do their work. "Marketing professionals have been particularly keen to test-drive the tool: 37% said they've used AI at work. Tech workers weren't far behind, at 35%. Consultants followed with 30%. Many are using the technology to draft emails, generate ideas, write and troubleshoot bits of code and summarize research or meeting notes."

I haven’t found any research that shows how many government employees are using ChatGPT for work, but I bet many government workers are using ChatGPT and similar tools. I’ve seen several YouTube videos explaining how academic researchers can use ChatGPT for brainstorming research articles, so using artificial intelligence (AI) tools to create government research reports is likely. What makes ChatGPT different from earlier AI tools is the sophistication of the responses and the ability to learn.

“I saw a demo of a system last week that took existing courseware in software engineering and data science and automatically created quizzes, a virtual teaching assistant, course outlines and even learning objectives. This kind of work typically takes a lot of cognitive effort by instructional designers and subject matter experts. If we ‘point’ the AI toward our content, we suddenly release it to the world at scale. And we, as experts or designers, can train it behind the scenes.” -Josh Bersin, January 24, 2023, Human Resource Executive

Unethical AIs

AI tools like ChatGPT learn by examining vast amounts of data to build predictive models. ChatGPT (and its predecessor, GPT-3) essentially read billions of lines of text to create algorithms that predict what words will most likely appear after each other. For example, if you start a sentence with “the water is,” then there is a certain probability that the next word is “wet,” “cold” or “hot.” Much less probable are the words “toast,” “rough” or “furry.”
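
As a rough illustration of that idea (and not how ChatGPT itself is built; real models use neural networks trained on billions of lines of text), a few lines of Python can estimate next-word probabilities from simple word-pair counts:

# Toy sketch of next-word prediction: count which word follows which in a
# small corpus, then turn the counts into probabilities. The example corpus
# and word lists here are made up purely for illustration.
from collections import Counter, defaultdict

corpus = [
    "the water is wet",
    "the water is cold",
    "the water is cold",
    "the water is hot",
]

# Count how often each word follows each preceding word (a bigram model).
follow_counts = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for prev, nxt in zip(words, words[1:]):
        follow_counts[prev][nxt] += 1

def next_word_probabilities(prev_word):
    """Return each candidate next word with its estimated probability."""
    counts = follow_counts[prev_word]
    total = sum(counts.values())
    return {word: count / total for word, count in counts.items()}

print(next_word_probabilities("is"))
# {'wet': 0.25, 'cold': 0.5, 'hot': 0.25} -- "cold" is the most likely next word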

However, using existing data can cause unethical behaviors by AI tools. For example, the algorithm that manages kidney transplant waiting lists has been shown to discriminate against African-Americans. Another widely used algorithm that helps identify patients with complex health needs has displayed significant racial bias. As Donald Kettl writes in Government Technology, "The problem isn't that the algorithms are evil. It's that they rely on data that fail to account for the needs of everyone and that they don't learn rapidly enough to correct underlying problems of inequity and violations of privacy."

Competitors to ChatGPT have already announced plans to build ethical rules into their competing AI tools. As Josh Bersin explains, “The Google competitor to GPT-3 (which is rumored to be Sparrow) was built with ‘ethical rules’ from the start. According to my sources, it includes ideas like ‘do not give financial advice’ and ‘do not discuss race or discriminate’ and ‘do not give medical advice.’” Who is going to be responsible for writing ethical rules for AI tools? Will it be private industry, governments or a combination of the private sector and government? What happens if an AI tool is created without ethical rules? How will AI tools be policed? Josh Bersin imagines one rogue AI tool scenario.

“Imagine, for example, if the Russians used GPT-3 to build a chatbot about ‘United States Government Policy’ and point it to every conspiracy theory website ever written. It seems to me this wouldn’t be very hard, and if they put an American flag on it, many people would use it. So the source of information is important.”

AI-Assisted Public Servants

A state lawmaker used a new chatbot that has gained popularity in recent months for its ability to write complex content to author new legislation regulating similar programs, arguing that legislators need to set guardrails on the technology while it is still in its infancy.

I wonder how many other public servants have used ChatGPT to "bounce ideas off of" while drafting legislation and policy. It might be interesting to run some of the latest Congressional bills through GPTZero.me to determine whether ChatGPT helped in the drafting. AI tools can be a great boon to public servants because they free employees from mundane tasks such as report writing and data analysis so that employees can engage in strategic thinking and creativity. AI tools are as transformative as the first electronic spreadsheets were in the 1980s. However, public servants must be careful in how they use them. Like spreadsheets, AI tools are neither good nor bad; it is how they are used that can be ethical or unethical. The new AI tools pose many challenges for governments, and governments must act quickly.
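
To make the GPTZero idea concrete, here is a rough sketch of how one might submit a passage of bill text to an AI-detection service. The endpoint path, header name and response fields below are assumptions based on GPTZero's public API and should be verified against their current documentation before use.

# Hypothetical sketch: send a bill excerpt to GPTZero's detection API.
# The endpoint, header and response fields are assumptions -- check
# https://gptzero.me for the current API documentation before relying on this.
import requests

API_KEY = "your-gptzero-api-key"                          # placeholder credential
bill_text = "SECTION 1. This act shall be known as ..."   # excerpt of a bill, for illustration

response = requests.post(
    "https://api.gptzero.me/v2/predict/text",   # assumed endpoint path
    headers={"x-api-key": API_KEY},
    json={"document": bill_text},
    timeout=30,
)
response.raise_for_status()

# The service returns an estimate of how likely the text is AI-generated;
# the exact field names in the JSON response may differ from this assumption.
print(response.json())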


Author: Bill Brantley teaches at the University of Louisville and the University of Maryland. He also works as a Federal employee for the U.S. Navy’s Inspector General Office. All opinions are his own and do not reflect the views of his employers. You can reach him at https://www.linkedin.com/in/billbrantley/.
