
AI Policy: Who’s Being Served?

The views expressed are those of the author and do not necessarily reflect the views of ASPA as an organization.

By Parisa Vinzant
May 30, 2025

Tech innovation by government, especially any involving artificial intelligence (AI), that proceeds without meaningful engagement and transparency of process is likely to be viewed with skepticism by the public. The government also forfeits the opportunity to gain early buy-in and valuable feedback that could improve the technology before implementation.

In public governance, according to a June 2024 paper from the Organization for Economic Co-operation and Development (OECD), “it is important to develop, deploy and use AI in a safe, secure and trustworthy way in the public interest.” OECD notes that “some significant AI failures in the public sector have highlighted the need for governments to assess, test and monitor AI’s impacts on the public” and to “consider how AI systems may affect men, women or marginalized communities differently; ensure that the benefits of AI are distributed equitably; and mitigate potential harm.” Many governments across the globe are increasingly aligning with these best practices, including those exemplified in the European Union’s AI Act.

Yet, as is widely understood, the United States has been undergoing a period of tremendous change in its government since the new administration took power in early 2025, and long-standing institutional norms relating to human resources, privacy and IT security, among others, are being dismantled. Leading the charge has been Elon Musk’s Department of Government Efficiency (DOGE). There has also been a systematic purging of diversity, equity and inclusion (DEI) considerations from federal policy and programs by presidential executive order. The administration has even asserted its power to pressure academic institutions, law firms and other businesses to discontinue their DEI policies.

This being the case, there is no expectation that this administration will adopt values-based, public sector principles for AI. Thus, the United States now charts a singular path, pursuing a rapid pace of AI innovation guided instead by the principles of the private sector: prioritizing efficiency, profitability and other undisclosed metrics. As a result, the interests of the general public are likely to be overlooked, and the impact of AI will be negative for many.

In contrast to the ethos of private sector business, the public sector must act in the public interest and for the public good. The principle of Advancing the Public Interest, as defined in the American Society for Public Administration’s Code of Ethics, is to “promote the interests of the public and put service to the public above service to oneself.”

The private sector, led by tech giants and investment firms, has seemingly been able to influence the nature and pace of AI adoption within the administration. The involvement of the DOGE team, OpenAI and the VC firm Andreessen Horowitz came to light only recently with the planned large-scale, rushed push to fully implement generative AI at the US Food and Drug Administration (FDA) by the end of June 2025. Newly appointed FDA Commissioner Martin Makary’s May 2025 announcement of his AI vision raises multiple concerns, including insufficient policy development and a lack of rigorous testing and review by expert and public stakeholders. Tough questions must also be asked: What does “AI-assisted scientific review” mean? What does the four-year plan look like for the use, reduction and/or potential elimination of human FDA reviewers?

It should be noted that the goal of applying AI within the FDA and other public agencies is not necessarily without merit, but in the public sector, process and outcomes matter especially. Even pharma and healthcare groups urge caution. As one board member of the Alliance for AI in Healthcare noted in response to Makary’s announcement, “policy, transparency and ethical training data must come first,” all of which takes time, longer than a months-long pilot allows. What is occurring at the FDA brings into focus the question of whether special private sector interests are trumping those of the public at large.

As underscored by the OECD, there needs to be “the recognition that there is a domain where the public interest has to prevail and that this public interest has to be free from individual private interests’ interference.” The OECD further states that “this public domain has to be regulated by specific legal principles, and that the actors performing within this public sphere are, as such, to be subject to these principles.”

And yet, in one example of many, the FDA’s negotiations with OpenAI are being conducted quickly and secretly, more like a private company handling its own dealings. This accords neither with public contracting best practices nor with those advised in the US federal public participation playbook. When tech executives essentially take over and drive AI adoption in government, they effectively hollow out the public sector. Without accountability checks and adherence to values-based public sector principles and administrative regulations, the actions of these Trump officials and their proxies lack legitimacy and expose the public to avoidable risks and harms.

It’s not yet too late for the public and the scientific community to make their voices heard on the FDA’s proposed AI vision. It’s vital to remember that this is an administration that prizes efficiency and speed above all else, so once their AI vision is fully operationalized at the FDA, expect to see it rolled out at other government agencies. The reality is that AI in certain contexts and when executing specific objectives can act as a form of control. We must ensure the public’s interests are the ones in control and the ones being served.


Author: Parisa Vinzant, MPA, works as a private and public sector strategist. She provides coaching to ICMA members. She served as a technology/innovation commissioner in Long Beach, CA. Parisa applies an intersectional equity lens in her writing on topics such as technology, ethics and democracy. Connect with her by email at [email protected] or on Bluesky @pvinzant.bsky.social. Signal contact by request.
