Evidence-Based Management? A Great Idea, But Proceed with Caution
The views expressed are those of the author and do not necessarily reflect the views of ASPA as an organization.
By Nathan Favero
September 28, 2019
We should use evidence. It’s hard to argue with that.
But as talk of “evidence-based” government grows louder, I can’t help but think about how complex it really is to make wise use of the evidence.
Certainly, social scientists provide useful insights about society. And what we learn from careful research should help to inform how we structure and run public programs.
But consider a typical public manager facing a tough decision. Which job candidate shall I hire? How should I pitch my initiative to other stakeholders? Can I really trust this employee with this task?
There may be relevant evidence you can look to when making these sorts of decisions, but Google Scholar is not going to provide any easy answers.
Nonetheless, there is an important evidence base out there for many (though certainly not all) topics. How can you responsibly use it?
- Don’t rely on a single study
This is a mistake I often see journalists make. They write up an entire story about how “everything we thought we knew about X is wrong,” based on a single academic study. Most empirical studies (especially in the social sciences) come with a relatively long list of caveats and limitations. Thus, you should draw only tentative conclusions from each one. Not all studies are created equal, so it is appropriate to give some findings greater weight than others. But assessing the relative methodological strengths of various studies usually requires deep expertise, making it inadvisable for the average public manager to wade through the world of individual academic studies with the intention of basing an important management decision on the cumulative results.
A much better model is for academics to sort through the literature and write careful, comprehensive reviews that summarize the main findings and limitations of a given literature in a format that is tailored to practitioners. Two particularly well-executed examples of such efforts have emerged from the field of organizational behavior: Becoming the Evidence-Based Manager and Handbook of Principles of Organizational Behavior.
For topics where no such practitioner-oriented resources are available, you can at least try to find a literature review (perhaps a “systematic review”) or a meta-analysis of the existing literature on a given topic rather than looking at individual studies.
- Beware of inconsistent (heterogeneous) effects
Most quantitative studies of a policy or practice calculate effects in a way that estimates some type of average effect across the sample. But we know that a simple average doesn’t (usually) describe everyone. It’s entirely possible that some program (or managerial practice) is highly effective for some individuals and mildly harmful to others. When the effects of this program are estimated, one will likely find a weakly positive effect, reflecting some type of average of the various effects for different people in the study sample.
Researchers sometimes examine their data for evidence of “heterogeneous effects,” as they are often called in the academic literature. But researchers also face some pretty serious limitations on their ability to detect such inconsistent effects. At a very basic level, it’s hard to pick up on inconsistent effects unless they follow a pattern that corresponds to some demographic characteristic (or another variable that has been measured). For example, I might estimate a separate effect for men and for women, but this analysis won’t detect much if it’s actually extroverts who are affected positively and introverts who are affected negatively by the program I’m examining, as the toy example below illustrates.
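To make the point concrete, here is a purely hypothetical sketch in Python. The effect sizes (+4 for extroverts, −2 for introverts) and the 50/50 splits are made-up numbers for illustration only; the point is simply how a mixed effect can average out to a weak positive and hide from an analysis that only splits on measured characteristics.

```python
# Toy simulation (illustrative only, not from the article): a program that
# helps extroverts (+4) and mildly harms introverts (-2) looks weakly
# positive on average, and a split by an unrelated measured variable
# (here, gender) fails to reveal the underlying heterogeneity.
import random

random.seed(0)

people = []
for _ in range(10_000):
    extrovert = random.random() < 0.5                      # unmeasured trait driving the effect
    gender = "woman" if random.random() < 0.5 else "man"   # measured, but irrelevant here
    effect = 4.0 if extrovert else -2.0                    # hypothetical individual effect
    people.append((gender, extrovert, effect))

def avg(values):
    return sum(values) / len(values)

overall = avg([e for _, _, e in people])
by_gender = {g: avg([e for gg, _, e in people if gg == g]) for g in ("woman", "man")}
by_trait = {t: avg([e for _, ex, e in people if ex == t]) for t in (True, False)}

print(f"Estimated average effect: {overall:+.2f}")   # roughly +1: weakly positive
print(f"By gender: {by_gender}")                     # both groups near +1: looks homogeneous
print(f"By (unmeasured) extroversion: {by_trait}")   # +4 vs. -2: the real story
```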
The broader takeaway is that the evidence (even in a format where many studies have been carefully reviewed and summarized) is usually more suggestive than definitive. The literature may indicate that a particular practice is effective more often than not, but it can’t say definitively whether this practice will work well in your specific situation. This leads directly to my final point.
- Know your organization (or program)
There’s no substitute for deep expertise in the context you’re working in. Every organization has idiosyncrasies. Learn what makes your organization unique, and carefully consider how that might affect whatever dilemma you are facing before you decide to follow the advice of a body of external evidence.
It might also be important to consider what types of organizations or programs were being evaluated whenever an evidence base was constructed. Common sense says that evidence coming from a context more similar to yours is more likely to be relevant to you. So if you work in a school, other studies of schools might offer better evidence than studies of other organizations. If you work in an urban area, studies of urban-based programs might be more relevant than studies of rural or suburban programs. If you work in the United States, studies conducted in Europe may or may not be particularly relevant evidence for you to consider.
There are lots of ways to define “similar.” And there’s no rule saying when a context is similar enough to yours that you should look to it for evidence. So again, use your knowledge of your context to make your best judgment. There’s no substitute for careful thought and consideration when making an important decision.
Author: Nathan Favero (nathanfavero.com) is an assistant professor in the School of Public Affairs at American University. His research focuses on topics related to public management, education policy, social equity, and research methods. Twitter: @favero_nate