The views expressed are those of the author and do not necessarily reflect the views of ASPA as an organization.
By The National Center for Public Performance
February 28, 2017
In the decades following Vice President Al Gore’s National Performance Review, countless governments around the world have adopted performance measurement reforms on the assumption that they will improve efficiency and effectiveness. Many studies have examined the motivations for and challenges of performance measurement; however, can we be sure the information collected through these systems is used for decision-making? And if it is indeed used, is it being used for the intended purposes?
The evidence suggests most performance information never sees the light of day. Indeed, only 19 percent of legislators in Organisation for Economic Co-operation and Development countries utilize it. They argue that the performance data they receive is usually insufficient in quality and too broad in nature; thus, the information serves no strategic purpose. Politicians have also argued that this data lacks timeliness, as decisions are often made in real time. Non-public managers make these same claims. A common symptom (or perhaps cause) of ineffective performance systems is a lack of buy-in from executive leadership. Unless the entire organization commits, these reforms are doomed to exist solely for symbolic purposes. This is what happened with the Government Performance and Results Act of 1993. On the other hand, these reforms take hold more effectively when political authorities and top managers play an active role in their strategic implementation and in the use of performance information.
Effective performance management is not only about whether performance information is used; the presentation of indicators also matters. In fact, how information is presented can drive how readers interpret it. This is one reason better performance does not always result in higher levels of trust in government. Citizens, for example, tend to bring strong biases to performance information: Does 95 percent customer satisfaction mean the same as five percent customer dissatisfaction? This is what Olsen examined, and he confirmed substantial differences in how people interpret performance information depending on how it is framed. Positive phrasing of performance reports can lead to a positive interpretation; by the same token, negative phrasing of performance information can prompt a critical response.
This presents a potential issue for the open-data movement. Rather than serving to increase transparency and collaboration between an interested citizenry and their government, these data portals may be fueling the latent biases already in place in those reading the reports. What can governments do to mitigate this? A logical conclusion, then, is to control the biases presented to citizens when they are confronted with performance data. Simply put, it may be the role of government to “curate” information because citizens are likely unfamiliar with programmatic operations and technical nuances. The situation is certainly less troublesome than the “wild west” media and information landscape President Obama described during the recent election, but how governments disseminate information to the public is a matter worth addressing.
To throw yet another wrench in the works, partial results from a study presented at the Public Performance and Reporting Conference in 2015 show public managers and professionals are also susceptible to biases when interpreting performance information. For example, they tend to be more critical of agencies that missed five percent of their performance targets than of those that met 95 percent of their performance targets. Depending on who is writing (or re-writing) a report, officials within the same organization may come to very different conclusions.
This leaves us with a burning question: If both government officials and citizens are susceptible to information framing bias, how exactly should we be presenting performance data? What is clear is that raw data pasted into a report or posted online won’t do. Governments should be telling a story with their data. If they are doing well, they should explain why the numbers support it. If they are behind, the audience should know that, too. Leaving the data open to interpretation may be asking for trouble.
As Bob Behn acknowledges, performance management is very hard. The bad news is the story does not end with effective implementation and the use of performance data within agencies. It is also crucial to understand how the information is used and what biases audiences bring to interpreting it. Otherwise, all the time and effort spent building these grand data collection systems may produce the opposite of their intended purpose: a system that simply reinforces our preconceptions.
Andrew Ballard is the Managing Director of the National Center for Public Performance in the School of Public Affairs and Administration (SPAA) at Rutgers University, Newark. He is pursuing his PhD at SPAA and researches how public organizations develop and sustain performance management systems.
Javier Fuenzalida is the Assistant Director of the National Center for Public Performance and a PhD candidate in the School of Public Affairs and Administration (SPAA) at Rutgers University, Newark. His research interests are in public management, civil service and performance management.
The National Center for Public Performance (NCPP) at Rutgers University is a research and public service organization devoted to improving productivity in the public sector. Founded in 1975, NCPP serves as a vehicle for the study, dissemination and recognition of performance measurement initiatives in government.