Transparency in Artificial Intelligence and Machine Learning

The views expressed are those of the author and do not necessarily reflect the views of ASPA as an organization.

By Craig P. Orgeron
December 5, 2022

For someone with a deep-seated belief in the innate value of technology and its power to better the lives of citizens, reading Weapons of Math Destruction and Automating Inequality was a wakeup call to the inherent perils of, and potential for systematic bias in, the collection and analytical use of big data with artificial intelligence (AI) and machine learning (ML). As public sector leaders develop strategies in the aftermath of the COVID-19 pandemic, there is a rekindled determination to transform government technologies and reinvent the citizen experience. More than any other emerging technology, artificial intelligence and machine learning have taken center stage in digital transformation initiatives to deliver improved digital services to citizens.

For technology services in the public sector, the COVID-19 pandemic served as an accelerant, surging demand and raising citizens' expectations for improved digital services. Across state governments there has been a focused effort to modernize online citizen services, with data collected by the National Association of State Chief Information Officers (NASCIO) suggesting that 94 percent of CIOs report an increase in demand for digital services and an improved citizen experience. With targets of opportunity including greater automation and digital identity services, the thoughtful use of technologies such as artificial intelligence and machine learning has the potential to reinvent online service delivery for citizens, forge new levels of institutional engagement and trust, and achieve high-functioning government.

Yet as artificial intelligence and machine learning are applied to the troves of data collected by government agencies, both to create enhanced constituent experiences and to address intractable civic and societal challenges, care must be given to a multitude of concerns. In many ways, in both the private and public sectors, data can be thought of as an inexhaustible natural resource. It can be collected and processed to provide enhanced citizen services, but, like other natural resources, it can be misused and can pose new risks for governments, businesses and society. Paramount in grappling with the ethical use of these advanced technologies is ensuring that data sets are unbiased, which is no small task. In addition, as artificial intelligence and machine learning technologies are employed, governments must contend with learning algorithms that make predictions which cannot be fully explained, and must thoughtfully afford transparency and accountability to outcomes determined by these systems.

A machine learning algorithm is a computer program that learns to perform a specific task by inferring patterns directly from data, without explicit instructions or significant domain expertise. Over time, and at scale, such algorithms can become more accurate with minimal human mediation and can forecast outcomes. There is value in differentiating between two AI/ML model design philosophies: Interpretable AI, in which the model's decision-making process is understandable by design; and Explainable AI, which seeks to explain the predictions of an otherwise opaque model after the fact. Under either philosophy, a lack of transparency in models employed to deliver public sector services is a challenge, and it is best mitigated by building AI/ML models that are intrinsically understandable. Beyond model transparency, a key strategy must also emphasize ways to reduce the impact of bias and inaccurate data, which can be enormously challenging since artificial intelligence and machine learning projects analyze immense volumes of data at breathtaking speeds. Data anomalies at scale can thus result in skewed or inaccurate outcomes and predictions.
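The idea of an algorithm inferring patterns from data, and of a model being interpretable by design, can be illustrated with a deliberately tiny sketch. Ordinary least squares fits a line from example data alone, and because the learned model is just two numbers, a reviewer can read the parameters directly rather than explaining an opaque model's predictions after the fact. The data here is hypothetical and for illustration only.

```python
# Minimal sketch: a "machine learning algorithm" in miniature.
# Ordinary least squares infers y ≈ slope * x + intercept from paired
# observations, with no rules programmed by hand. The fitted model is
# interpretable by design: its two parameters can be inspected directly.

def fit_line(xs, ys):
    """Learn slope and intercept from paired observations."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Hypothetical data: case backlog size (x) vs. processing days (y).
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 4.0, 6.2, 7.9]
slope, intercept = fit_line(xs, ys)
```

A black-box model trained on the same data might predict just as well, but auditing it would require post-hoc explanation tools rather than a direct reading of its parameters, which is the distinction the two design philosophies above capture.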

Understanding the task of developing, deploying and managing AI/ML systems responsibly, the Government Accountability Office (GAO) has developed the federal government's first framework to help assure accountability and responsible use of AI/ML technologies. As outlined by Stephen Sanford of the GAO in the Harvard Business Review, the framework defines the essential conditions for accountability throughout the entire AI/ML life cycle and prescribes specific questions for organizations to ask when evaluating AI/ML technologies and systems. The four phases of the AI/ML life cycle are: Establishing Governance, requiring input from people in multiple fields and diverse disciplines to ensure underserved individuals and populations are not overlooked; Collecting Data, maintaining documentation of how data is used at two different stages, when it is used to build the model and while the AI/ML system is in operation; Supporting Performance, measuring the accuracy and effectiveness of AI/ML predictions; and Continually Monitoring, checking the results of self-correcting learning algorithms.
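One practical way for an agency to operationalize the four phases is as a standing review checklist. The sketch below is a hypothetical illustration: the phase names follow the framework as described above, but the sample questions are paraphrases for illustration, not GAO's published wording.

```python
# Hedged sketch: the GAO framework's four life-cycle phases as a simple
# review checklist. Phase names follow the article; the questions are
# illustrative paraphrases, not GAO's own text.

GAO_PHASES = {
    "Establishing Governance":
        "Have multiple fields and diverse disciplines weighed in, so "
        "underserved individuals and populations are not overlooked?",
    "Collecting Data":
        "Is data use documented both when building the model and while "
        "the AI/ML system is in operation?",
    "Supporting Performance":
        "Are the accuracy and effectiveness of predictions measured?",
    "Continually Monitoring":
        "Are the results of self-correcting learning algorithms checked?",
}

def outstanding_reviews(completed):
    """Return the phases whose review question has not yet been answered."""
    return [phase for phase in GAO_PHASES if phase not in completed]

# Example: two phases reviewed so far; two remain outstanding.
pending = outstanding_reviews({"Establishing Governance", "Collecting Data"})
```

Tracking the phases this way makes the accountability gaps explicit at any point in the life cycle, rather than treating the framework as a one-time sign-off.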

Success in the development, deployment and management of AI/ML technologies requires assembling diverse human talent and data sources in each of the four phases of the framework. Increasingly in the public sector, efforts are focused on the ability to quickly deliver personalized, integrated and optimized citizen engagement and experience across multiple channels via artificial intelligence and machine learning technologies. In doing so, public organizations must mitigate data bias and foster transparency and accountability in AI/ML initiatives with a sound strategy for each phase of the framework.

Author: Dr. Orgeron has extensive technology experience in the private sector and at both the federal and state levels of the public sector. He is currently Professor of MIS at Millsaps College, and has served as an Executive Advisor at Amazon Web Services (AWS), as CIO for the State of Mississippi and as President of the National Association of State Chief Information Officers (NASCIO).

