Considering AI Olive Branches: Bridging Policy Divides

The views expressed are those of the author and do not necessarily reflect the views of ASPA as an organization.

By Kiersten Farmer
December 15, 2023

Introduction

As we look toward the future, it’s clear that the path to a responsible and beneficial AI ecosystem lies in balanced and informed technology policy. This requires a concerted effort from all stakeholders involved, from innovators and technologists to policymakers and the community. In the United States, despite collaborative efforts akin to olive branches, such as the public request for information issued by the U.S. Department of Commerce’s National Institute of Standards and Technology (NIST) and initiatives like the AI Alliance launched by industry leaders such as IBM and Meta, comprehensive AI legislation remains elusive. This article explores these olive branches, examines the challenges of establishing unified AI laws in the United States and contrasts this with the EU’s significant strides toward its own AI regulatory framework.

AI Policy Development Olive Branches

The United States has seen several olive branches intended to facilitate comprehensive AI regulation. These gestures of cooperation and understanding signal a willingness to work toward a more harmonious future and to advance collectively. NIST’s open call for public input on implementing the US Government National Standards Strategy for Critical and Emerging Technology (USG NSSCET) invites experts and stakeholders to share their ideas for putting the strategy into practice. The goal is to develop technically sound standards that strengthen the United States’ competitiveness in eight emerging technology sectors.

Similarly, the AI Alliance, an international group of leading organizations across industry and academia, pursues several objectives focused on fostering an open community and enabling developers and researchers to accelerate responsible innovation in AI while ensuring scientific rigor, safety and economic competitiveness. Its most pertinent goal here is to develop educational content and resources that inform the public discourse and policymakers on the benefits, risks, solutions and precision regulation of AI. These educational efforts are conciliatory gestures toward policymakers and the public, intended to foster a deeper understanding of AI and its implications. Both the NIST and AI Alliance campaigns aim to improve communication and build stronger relationships, with a shared goal of advancing through collaboration. However, these initiatives may not be enough to overcome the challenges of enacting comprehensive legislation.

International Efforts

Despite bipartisan interest and effort, including nearly three dozen hearings in 2023 and more than 30 AI-focused bills in the U.S. Congress, there remains a lack of consensus on both substance and process. In contrast, the European Union (EU) has advanced significantly in AI regulation. Although the European Parliament will not vote on the AI Act proposals until early next year, and the legislation will not take effect until at least 2025, EU officials have reached a provisional deal on the world’s first comprehensive laws regulating the use of artificial intelligence. Perhaps the difference lies in the focus of each approach.

In the United States, AI legislative and regulatory proposals encompass a wide array of objectives, including promoting AI R&D leadership, ensuring national security and regulating the use of AI within federal agencies. These proposals take diverse forms, from comprehensive legislative frameworks to narrowly targeted bills addressing specific AI-related issues, accompanied by educational initiatives directed toward policymakers. However, the legislative landscape is fragmented and lacks a unified direction. Despite bipartisan cooperation, these efforts can feel exclusionary, overlooking vested stakeholders and disregarding the potential advantages of co-creation.

In contrast, the EU’s fundamental approach to AI regulation centers on evaluating its potential societal harm, utilizing a ‘risk-based’ methodology where the gravity of the risk dictates the regulatory rigor. This focused strategy draws from the EU’s extensive experience in cross-sector regulation garnered through the General Data Protection Regulation (GDPR) and significantly emphasizes safeguarding individual rights and ensuring transparency in AI systems. Such a human-centered approach feels more inclusive and facilitates the development of comprehensive and consistent policies that balance innovation with ethical considerations and prioritize consumer protection.

Conclusion

Reflecting on these divergent approaches, it is evident that the United States can draw valuable lessons from the EU’s strategy. In the pursuit of a responsible and advantageous AI ecosystem, olive branches that promote collaboration and understanding are of paramount importance. NIST’s call for public input and the AI Alliance’s educational endeavors demonstrate a genuine intent to bridge divisions and advance together. These initiatives serve as gestures of goodwill, signifying a willingness to navigate the intricate AI landscape collectively, much like the path pursued by the EU with its risk-based approach to AI regulation and emphasis on human centricity. The olive branch of inclusive, transparent and responsible AI policy and procedure development not only builds trust but also lays the foundation for legislative cooperation that benefits technological advancement and promotes societal well-being.


Author: Kiersten Farmer is an author, speaker, and data professional committed to empowering state and local governments. With nearly 20 years in public administration, she excels in optimizing operational efficiency and strategy through data and technology. Kiersten specializes in creating comprehensive methodologies for evaluating processes, sharing insights, and enhancing administrative procedures. She holds degrees from Florida Agricultural & Mechanical University and the University of Maryland, College Park, and is currently a Ph.D. Candidate at the University of Nevada, Las Vegas. Kiersten’s mission extends to transmuting data-related challenges into pragmatic, actionable solutions that yield tangible, community-transformative results. Connect with her on LinkedIn: https://www.linkedin.com/in/kiersten-farmer/ to collaborate on driving positive community transformation.
