Lessons in Performance Measurement from Wisconsin

The views expressed are those of the author and do not necessarily reflect the views of ASPA as an organization.

By Michael Ford
August 6, 2018

In the 2011-2012 school year, 81.9 percent of Wisconsin K-12 students scored advanced or proficient on the Wisconsin Knowledge and Concepts Exam. The next year, just 35.8 percent of students scored advanced or proficient on the same test. No, the students did not get worse at reading: the state changed the cut scores used to determine whether a student is an advanced reader. The whole episode, along with Wisconsin’s larger recent history with standardized testing, reveals some of the challenges of effective public sector performance measurement.
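The mechanics are worth making concrete. The sketch below is purely illustrative: the simulated score distribution and both cut scores are hypothetical, not Wisconsin's actual scale, with thresholds chosen so that roughly four in five simulated students clear the lenient cut while only about one in three clear the stricter one, mirroring the swing described above.

```python
# Illustration: identical test scores yield very different "proficiency"
# rates depending on where the cut score is set. The score distribution
# and both cut scores are hypothetical, not Wisconsin's actual values.
import random

random.seed(0)
scores = [random.gauss(500, 100) for _ in range(10_000)]  # simulated scale scores

def proficiency_rate(scores, cut_score):
    """Share of students scoring at or above the proficient cut."""
    return sum(s >= cut_score for s in scores) / len(scores)

old_cut = 410  # lenient threshold (hypothetical)
new_cut = 535  # stricter threshold (hypothetical)

print(f"Old cut ({old_cut}): {proficiency_rate(scores, old_cut):.1%} proficient")
print(f"New cut ({new_cut}): {proficiency_rate(scores, new_cut):.1%} proficient")
# Same students, same scores; only the definition of success changed.
```

Nothing about the underlying distribution moves between the two print statements; the entire swing comes from redefining the threshold.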

Foremost is the fact that success is a human construct. I cannot help but wonder what a reading teacher was supposed to do with the news that what counted as proficient last year no longer counts as proficient now. The same methods and curriculum that once produced acceptable test scores for a large majority of students are now producing an unacceptable outcome. Such is the challenge of performance management in a sector where success is defined by political actors expressing the changing and often contested values of the governed. Thus, like humans, even the best measures of success are flawed to some degree.

Second, changing performance indicators can undermine their usefulness. Wisconsin, like many states, frequently updates its required education tests. In 2013-2014 it was the Wisconsin Knowledge and Concepts Exam. In 2014-2015 students took the Badger Exam, only to have it replaced by the Wisconsin Forward Exam in 2015-2016. The pros and cons of each test are debatable, but it is beyond debate that switching exams made longitudinal tracking of student performance impossible. A good performance measure informs practice by giving practitioners actionable information, and it enhances accountability by giving policymakers the proper tools to evaluate organizational outcomes. Changing state tests eliminated year-to-year comparisons and thus limited the ability to make management decisions based on performance trends. Changing indicators is particularly harmful when attempting to evaluate reforms whose outcomes take several years to manifest. Simply put, unstable performance measures, regardless of the rationale for their development, create collateral damage.

Third, simplicity is difficult. More than once in my practitioner career, a well-meaning policymaker asked me to describe the “moneyball” for school success. Was it funding, demographic balance, class sizes or something else? My response, that no such thing exists, never failed to disappoint. Capturing the quality of everything that happens in a complex organization, be it a school, a hospital or a police department, in a single accurate indicator would be incredible, but it is impossible. In Wisconsin we have school accountability report cards that place schools into one of five categories based on a scale of 0 to 100. As a parent it seems easy: a quick scan of the report tells me whether my child’s school is meeting expectations. But a look behind the curtain at the technical report reveals that what goes into that simple number is anything but simple. Human decisions about which factors should matter, and to what extent, are reflected in every school’s score. And yes, the state changed how it calculates the measure in 2016-2017, prompting a caution against making year-to-year comparisons.
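How much those human decisions matter can be shown with a toy calculation. The component names, scores and weights below are entirely made up, not Wisconsin's actual report card formula; the point is only that two defensible weighting schemes rate the same school differently.

```python
# Illustration: a composite 0-100 accountability score depends on human
# choices about component weights. All components and weights here are
# hypothetical, not Wisconsin's actual report card formula.
school = {
    "achievement": 62.0,  # hypothetical component scores, each on a 0-100 scale
    "growth": 78.0,
    "gap_closing": 55.0,
    "on_track": 85.0,
}

def composite(scores, weights):
    """Weighted average of component scores; weights must sum to 1."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9
    return sum(scores[name] * w for name, w in weights.items())

# Two defensible weighting schemes applied to the same school.
achievement_heavy = {"achievement": 0.40, "growth": 0.20, "gap_closing": 0.20, "on_track": 0.20}
growth_heavy = {"achievement": 0.20, "growth": 0.40, "gap_closing": 0.20, "on_track": 0.20}

print(f"Achievement-heavy score: {composite(school, achievement_heavy):.1f}")  # 68.4
print(f"Growth-heavy score:      {composite(school, growth_heavy):.1f}")       # 71.6
```

The same school lands at 68.4 under one scheme and 71.6 under the other; near a category boundary, the weighting choice alone could change its rating.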

I am not suggesting that performance measures have no value, or that public organizations should not work to create them. Quality performance measurement is essential for setting benchmarks, measuring progress and, ultimately, improving public service delivery. That hypothetical reading teacher will alter their approach, presumably in a way that improves student learning, because of the changing performance metric. However, administrators must be open about what is quantitatively measurable and what is not. If a public good or service is adding value that cannot be captured through traditional analytics, alternative non-quantitative approaches are needed. It is equally important to evaluate the accuracy of existing performance measures at regular intervals. When flaws are discovered, policymakers and managers must weigh the trade-off between comparability and accuracy; at times a flawed measure that provides longitudinal data will be of more practical value than an improved metric. Finally, it is important to embrace simplicity, but not to the point of undermining the utility of the measure.

Overall, the Wisconsin case demonstrates how hard it is to create accurate, useful and accepted performance measures for high-profile public goods and services. A good metric can improve performance and increase public confidence in government organizations. A poor metric can subvert policy intent and distort the public’s perception of the administrative state. We live in an era of results-based accountability, where demand for measurable indicators of public sector performance is not going away. Heeding the lessons from Wisconsin and elsewhere can help administrators meet this demand in a productive manner.


Author: Michael R. Ford is an assistant professor of public administration at the University of Wisconsin – Oshkosh, where he teaches graduate courses in budgeting and research methods. He has published more than two dozen academic articles on the topics of public and nonprofit board governance, accountability and school choice. Prior to joining academia, Michael worked for many years on education policy in Wisconsin.
