The views expressed are those of the author and do not necessarily reflect the views of ASPA as an organization.
By Dennis McBride
June 17, 2024
It is an irony that evidence is accumulating, convincingly, in support of evidence-based policy-making (EBPM). More and more organizations, in government as well as business and the non-profit sector, are using EBPM methods and techniques to better execute their respective missions. A major positive disruption in this enterprise is already underway, a leap that fundamentally alters how, where, when and, most importantly, why EBPM will improve policy-making at every conceivable tier and practice of government, from local to national. The disruption comes from the rapid maturation of (1) explainable (and, potentially, well-regulated) artificial intelligence, (2) the proliferation of eye-popping interactive visualization and, most importantly, (3) the growing acceptance of high-tech modeling and simulation (M&S).
EBPM brings richness to policy arguments, ideally via the application of randomized controlled trials (RCTs): experiments! RCTs provide a basis for examining the value of a policy as it is applied to a microcosm of a population of interest. If a hypothesized idea appears to be valuable, especially as compared to a "placebo" or other comparison group, then in theory the new policy can be scaled up and applied to a larger, perhaps the ultimate, population of interest. When randomization cannot be used, a form of so-called quasi-experimentation can be. This shortcut approach is obviously not "as valid" as the RCT method, but it can still provide extremely useful insight. Full-blown use of RCTs, like EBPM itself, carries baggage that must be dealt with, but that's a conversation for another day. For now, the upside.
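To make the mechanic concrete, here is a minimal sketch in Python; the participants, numbers and effect size are invented for illustration and are not drawn from any actual study. Units are randomly assigned to treatment or control, and the two groups' average outcomes are then compared.

    import random
    import statistics

    # Illustrative only: invented participants and outcomes, no real data.
    random.seed(42)
    participants = list(range(200))
    random.shuffle(participants)
    treatment, control = participants[:100], participants[100:]

    # Pretend outcome: the policy adds a small benefit on top of noise.
    def outcome(person, treated):
        return random.gauss(50, 10) + (5 if treated else 0)

    treated_scores = [outcome(p, True) for p in treatment]
    control_scores = [outcome(p, False) for p in control]

    print("treatment mean:", round(statistics.mean(treated_scores), 1))
    print("control mean:  ", round(statistics.mean(control_scores), 1))

Random assignment is what lets the difference between the two averages be read as the effect of the policy rather than of pre-existing differences between the groups.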
Trials of any sort, RCT or quasi-experimental, generate workable models: down-scaled renditions of what a policy should, or at least might, look like if taken to scale. A model can be focused on economic, financial, environmental, societal, health-related or any combination of outcome variables. But a model, especially a static one such as a spreadsheet, is just that: static. Simulation, on the other hand, takes a model (in practice, a combination of models) and plays it out dynamically, so that observers can at least begin to understand the value and shortcomings of what a policy would provide in the "real world."
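The distinction can be made concrete with a toy sketch; the program, parameters and feedback assumptions below are invented for illustration. A static, spreadsheet-style model returns a single number, while a simulation plays the same assumptions forward year by year so that feedback effects can surface.

    # Illustrative only: a toy job-training program with made-up numbers.

    def static_model(enrollees, completion_rate, earnings_gain):
        # Spreadsheet-style estimate: one number, no dynamics.
        return enrollees * completion_rate * earnings_gain

    def simulate(enrollees, completion_rate, earnings_gain, years=5):
        # Play the same assumptions forward: each year some enrollees complete,
        # and enrollment grows through assumed word-of-mouth referral.
        total_gain, pool = 0.0, enrollees
        for _ in range(years):
            completers = pool * completion_rate
            total_gain += completers * earnings_gain
            pool = pool * 1.05 + completers * 0.10  # assumed growth plus referrals
        return total_gain

    print(static_model(1_000, 0.6, 4_000))       # the static answer
    print(simulate(1_000, 0.6, 4_000, years=5))  # the dynamic, multi-year total

The static figure and the simulated trajectory will generally disagree, and the disagreement is itself informative: it shows where the dynamics, not the assumptions, drive the outcome.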
But the real value of M&S is not that it provides interesting answers, important as answers inherently are. The ultimate value is that M&S helps identify, validate and refine the important questions associated with policy development. It provides a platform for establishing what should be measured, quantitatively, and how government officials, representatives and, most importantly, the electorate can objectively evaluate the benefits of policy alternatives before, during and after such policies are put into place. M&S provides a highly valuable means for developing (1) key performance indicators (KPIs), (2) ways of testing the relative importance of various KPIs, (3) the sensitivities of outcomes to aspects of policies and (4) a unique means for establishing, ahead of time, a test and evaluation master plan.
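As a rough illustration of item (3), the toy Python sweep below varies a single hypothetical policy lever (a subsidy rate) and reports how two invented KPIs respond; the relationships are placeholders, not a model of any real program.

    # Illustrative only: invented relationships standing in for a real simulation.

    def run_policy(subsidy_rate):
        uptake = min(1.0, 0.2 + 1.5 * subsidy_rate)              # KPI 1: program uptake
        cost_per_outcome = 5_000 * (1 + subsidy_rate) / uptake   # KPI 2: cost efficiency
        return {"uptake": uptake, "cost_per_outcome": cost_per_outcome}

    for rate in (0.05, 0.10, 0.20, 0.40):
        kpis = run_policy(rate)
        print(f"subsidy={rate:.2f}  uptake={kpis['uptake']:.2f}  "
              f"cost/outcome={kpis['cost_per_outcome']:,.0f}")

Even a crude sweep like this makes plain which KPI moves most for a given change in the lever, which is exactly the kind of question a test and evaluation master plan should settle before a policy is fielded.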
Emerging artificial intelligence can now help build, run, portray and even explain the results of policy simulations. Advanced graphics can aid, amazingly, in visualizing how variables interact to produce likely outcomes of manipulated policy approaches. In fact, policymakers can alter the characteristics of models and simulations in real time and play "okay, but what if?" on the fly. The value is not that computers provide "answers!" The value is that the process provides invaluable insight about phenomenology as well as proposed solutions. Users of M&S often feel that they have vast "experience" with policy projections, even though the experience is "synthetic."
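Reusing the hypothetical run_policy toy from the sketch above, an on-the-fly "okay, but what if?" is simply another run with one assumption changed:

    # Building directly on the toy run_policy above; not a real decision tool.
    baseline = run_policy(0.10)
    what_if = run_policy(0.25)   # "okay, but what if the subsidy were more generous?"
    print("baseline:", baseline)
    print("what-if :", what_if)

The point is the turnaround time: when a counterfactual costs seconds rather than months, decision-makers accumulate the "synthetic experience" described above.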
Dr. Bill Rouse (an emeritus professor at Georgia Tech and now a research professor at Georgetown University's McCourt School of Public Policy) has used M&S gaming technologies for numerous policy explorations, ranging from pure business to highly complex public policy. One of his projects focused on potential mergers and acquisitions of otherwise competing New York City hospitals. The outcomes of interest included measures of health services provided to patients, in tandem with measures of economic outcomes (e.g., sustainable profit margin) for the hospitals in the simulation.
The results of the simulations included several completely unexpected trajectories, and they provided a wealth of "emergent" outcomes: results that could not possibly have been predicted rationally. Importantly, the outcomes closely matched what eventually, actually, happened; that is, they were validated. But even more to the point, the gamed modeling and simulation revealed why the outcomes came out as they did. The whys developed in the process are now highly useful for developing policy that can include the equities of all community stakeholders: government and private enterprise, as well as patients and providers. Bill's lab is not alone. There are already more than two dozen university-based policy test labs in the United States, eager to collaborate.
As the example implies, advanced, AI-based M&S and visualization technologies already provide extremely useful “evidence” to support life-cycle nurturance and sustainment of public policy. This is the beginning of a new paradigm in evidence-based policy-making.
Author: Dennis McBride is president, Institute for Regulatory Science, president emeritus, Potomac Institute for Policy Studies, and former editor-in-chief, Review of Policy Research. He has taught public policy at Georgetown University and Virginia Tech (currently professor of practice). Degrees include Ph.D. in experimental psychology, University of Georgia; MSPA/MPA in public administration; and M.S., Viterbi School of Engineering, University of Southern California.