The New White House Executive Order on Artificial Intelligence: A Step Forward in AI Governance?

The views expressed are those of the author and do not necessarily reflect the views of ASPA as an organization.

By James L. Vann
April 1, 2024

Concerns over the risks posed by artificial intelligence (AI) have prompted governments and organizations worldwide to develop a plethora of ethical principles and frameworks to control AI. Initiatives range from the Future of Life Institute’s Asilomar AI Principles to international guidelines such as the European Parliament’s Artificial Intelligence Act. By now, most nations, organizations and corporations with AI technology interests have adopted governance frameworks of some sort. In the United States, White House Executive Order (E.O.) No. 14110, Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (October 30, 2023), is one of the latest government initiatives on AI governance. At 35 pages, E.O. 14110 is far more comprehensive than previous U.S. legislation and executive orders:

  • Advancing American AI Act, PL 117-263 (2022)
  • AI in Government Act of 2020, PL 116-260
  • E.O. 13960, Promoting the Use of Trustworthy Artificial Intelligence in the Federal Government, Dec. 3, 2020
  • E.O. 13859, Maintaining American Leadership in Artificial Intelligence, Feb. 11, 2019

E.O. 14110 tries to strike a balance between encouraging AI innovation and governing its use. It directs federal agencies to assume more roles and responsibilities and to take specific actions, such as designating a Chief AI Officer, publicly releasing compliance plans and AI use cases, practicing AI risk management and establishing thresholds for human review. It invokes the Defense Production Act to apply some requirements to industry. The technical concepts introduced by the order, such as generative AI, synthetic content and watermarking, are noteworthy, indicating that parties with expertise in AI had a hand in its drafting. Researchers at Stanford University’s Institute for Human-Centered Artificial Intelligence (HAI) commented that “(t)he EO is remarkable in how comprehensively it covers a wide range of AI-related issue areas.” Sorelle Friedler, Janet Haven and Brian Chen, in an excellent analysis for the Brookings Institution, describe it as “one of the most detailed pictures of how governments should establish rules and guidance around AI,” adding that its hard accountability is a model for accountable AI in the federal government.

While the E.O.’s coverage and accountability features stand out, the order also reveals inherent drawbacks of federal policy making that will challenge AI governance. It attempts an ambitious top-down and cross-government approach, drawing in different stakeholders and policy subsystems. It contains ambiguous references to AI that will be problematic for administration and compliance. Like all executive orders, the E.O. can only be implemented “consistent with applicable law and subject to the availability of appropriations.”

As the new order is being hashed out by federal agencies, commercial AI development is moving forward at an astonishing rate. In the past year alone, powerful new open-source multi-modal products have emerged that are poised to enter the generative AI marketplace. Synthetic data generation capabilities will allow AI models to self-train on data that is not subject to legal protections. Agent-based models are making autonomous decisions in a variety of task environments. Arguing that they are in a better position to govern AI, companies have begun self-governance initiatives, such as risk-based licensing and forums like The AI Safety Alliance. E.O. 14110 may have already missed out on some of these developments.

Critics of AI governance initiatives often raise the question: Are our traditional systems of policy making capable of governing AI, or will the technology advance at such an explosive rate that governance is futile? Luke Munn of the University of Queensland laments the “uselessness” of AI ethics, stating that such principles are difficult or even impossible to put into practice. Writing for George Mason University’s Mercatus Center, Adam Thierer, Andrea C. O’Sullivan and Raymond Russell argue for “permissionless innovation,” noting that “(u)nless a compelling case can be made that a new invention will bring serious harm to society, innovation should be allowed to continue unabated, and problems, if they develop at all, can be addressed later.” In an analysis for the Brookings Institution, Blair Levin and Tom Wheeler suggest that agencies’ discretion to regulate AI could be vulnerable under the U.S. Supreme Court’s “major questions doctrine,” much like the environmental regulations that were struck down in West Virginia v. EPA (2022).

E.O. 14110 is a commendable attempt at AI governance that goes well beyond the principles and frameworks of the past. However, it is an incremental step in a cumbersome policy making process. The neural self-learning nature of AI, its rate of growth, associated market forces and its importance to national security will quickly overwhelm current policy making efforts. While federal agencies will comply in principle with the order’s mandates, the overall social impact will likely be minimal. Until the harmful manifestations of AI (threats to personal safety, criminal misuse and abuses of privacy and civil rights) punctuate the equilibrium of U.S. political agendas, incremental efforts to broadly regulate AI will be only marginally beneficial. In the interim, a more constructive approach will rely on permissionless innovation, the legitimacy of law and jurisprudence and decentralized enforcement within specialized domains of “narrow” AI applications. As Luke Munn notes, this will involve a substantial amount of granular grunt work. However, students and practitioners of public administration should find this burgeoning new field of work both challenging and exciting.


Author: James L. Vann is a researcher in public management. He is a former Presidential Management Intern, served in the federal civil service, and has held positions as a corporate ethics officer with UK-based Lucas Industries and as an analyst with the MITRE Corporation. He holds a PhD in Public Policy and Administration from Virginia Tech.

