Misuse of Artificial Intelligence in Public Administration

The views expressed are those of the author and do not necessarily reflect the views of ASPA as an organization.

By Michael Turtz and Jill Goldstein
April 5, 2024

Artificial Intelligence (AI) platforms are gaining widespread attention throughout the world, much like Tesla did when it debuted its line of electric vehicles and self-driving capabilities built on AI. While everyone wants the opportunity to use these technologies, not everyone has access to them or knows how to use them. In many ways, the world is still not ready for electric vehicles: there are far fewer charging stations than gas stations, and in some cities the Tesla-brand charging stations are not in convenient locations. AI faces similar challenges. For the new AI user, there remains a lack of comprehensive guidance on how to use it and little understanding of its vast capabilities. Regulatory scrutiny of Tesla spans areas such as vehicle safety, labor practices and data privacy, while overarching federal legislation on AI has yet to advance. Both Tesla and AI represent new-age technologies that hold the potential to revolutionize various aspects of our society. That potential, however, is accompanied by uncertainty about the full extent of AI's capabilities.

AI platforms are becoming more widely used, especially since Google debuted Gemini (formerly Bard) in 2023. Given Google's extensive user base, Gemini has the potential to become the world's leading AI platform. It competes directly with OpenAI's offerings, which include the well-known ChatGPT, DALL-E and the premium GPT-4 subscription. Because Gemini offers free multi-modal capabilities, including image creation, and draws on up-to-date information from Google, it has the potential to surpass ChatGPT's brand recognition over time. However, as AI advances throughout our society and workforce, concerns are arising about its ethical use, even echoing the concept of "guerrilla government."

Rosemary O'Leary introduced the term guerrilla government in her 2006 book, The Ethics of Dissent. Guerrilla government describes career public servants who implicitly or explicitly work against the wishes of their superiors. Guerrilla government differs from organizational misbehavior: public administrators who engage in organizational misbehavior act openly, while those who practice guerrilla government work behind the scenes.

In 2017, O'Leary envisioned a perfect storm for guerrilla government in the combination of big data, social media and contracting out in her article, The New Guerrilla Government: Are Big Data, Hyper Social Media and Contracting Out Changing the Ethics of Dissent? She noted how easily government officials, even at the entry level, can collect data and reach multiple social media outlets, and how contractors can carry out government activity, sometimes better than career civil servants.

O'Leary used the polarizing examples of Bradley Manning and Edward Snowden because of the easy access both had to the enormous body of national intelligence data. Career bureaucrats, for example, had less access to top-secret documents than Snowden did as a contractor. Snowden acquired 1.7 million government files and sent them to journalists at prestigious newspapers. Manning, on the other hand, downloaded thousands of government files from the Iraq War onto a compact disc and leaked them to WikiLeaks.

These were two extreme examples involving national intelligence, but guerrilla government can occur at any level of government and within any government entity. In both cases, AI did not have the easy access or mainstream advances it has today. If one of O'Leary's harsh realities, that "the combination of big data, hyper social media and contracting is likely to increase the incidents of guerrilla government," amounted to a perfect storm, what would we call giving these contractors a tool such as AI?

According to the January 10, 2024 Associated Press article, "AI-powered misinformation is the world's biggest short-term threat, Davos report says," the World Economic Forum believes that AI could create major problems for democracy as well as the global economy. Specifically, AI can empower malicious actors to carry out cyberattacks and phishing attempts, and anyone can use it without being an expert.

Government organizations at the federal, state and local levels could be years behind by the time they implement guidance on how to use AI, since tech companies are constantly releasing AI models to the public. At the federal level, in October 2022, the White House's Office of Science and Technology Policy issued a 73-page Blueprint for an AI Bill of Rights. The Blueprint, however, carries no legal force. Congress has also considered several proposals for AI legislation but has not enacted any of them.

To conclude, we asked Gemini to list five hypothetical examples of "guerrilla government" scenarios in which AI plays a role. This is the first scenario it came up with: "Scenario One: Eco-Warriors in the Department of Energy—A group of engineers within the Department of Energy, concerned about climate change, secretly use government AI to analyze energy grid vulnerabilities and simulate the potential impact of implementing renewable energy sources at a faster pace than official policy allows. They leak anonymized reports to environmental groups, hoping to pressure for change."

The other scenarios were: Scenario Two: Whistleblower AI in the Justice Department; Scenario Three: Rogue AI in the Department of Defense; Scenario Four: Algorithmic Robin Hood in the IRS; and Scenario Five: AI-Powered Protests in the Justice Department. AI not only gives any government employee easy access but can also supply the idea of how to carry out guerrilla government, or help them understand documents far faster than outsourcing the work to WikiLeaks or the press.


Author: Michael Turtz, MPA, is currently the Director of Administration at the Charles E. Schmidt College of Medicine at Florida Atlantic University. Michael is also currently a Ph.D. Candidate in FAU’s School of Public Administration.

Author: Jill Goldstein, MS, is currently the Administrative Coordinator at the Charles E. Schmidt College of Medicine and is an Adjunct Faculty member in the Department of Communications at Florida Atlantic University. Jill received her M.S. in Strategic Communications from the University of Maryland.

Emails: [email protected]; [email protected]