What Counts As “Evidence?”

The views expressed are those of the author and do not necessarily reflect the views of ASPA as an organization.

By Nathan Favero
July 12, 2020

Managers and policymakers should make decisions based on evidence. It’s hard to disagree with that.

But what do we mean by “evidence?” And is all evidence created equal?

These two questions seem to crop up in every policy discussion.

For example, at various stages in the COVID-19 pandemic, the World Health Organization (WHO) has been sharply criticized for statements it has made about the current scientific evidence regarding the disease. A recent New York Times article summarized one common theme in these criticisms: according to some experts, the WHO is “bound by a rigid and overly medicalized view of scientific evidence.”

Or, consider the issue of police reform. The recently launched 8 Can’t Wait campaign—which advocates for the adoption of eight policing policies aimed at reducing police violence—has been criticized for its bold claim: “Data proves that together these eight policies can decrease police violence by 72%.” At the same time, many would argue that the magnitude of the problem of police violence against Black Americans demands an immediate overhaul of policing on a scale that goes well beyond the realm of what empirical studies of policing have generally considered.

There Are Many Kinds of Evidence

Let’s think about a less controversial topic for a moment. Suppose a manager needs to hire for an open position. What evidence can they draw on when setting up the interview process?

As an academic, I think first of the large scholarly literature on various interviewing techniques.

If this manager has been in their job a long time, they probably also have some past experiences they can draw on to inform their decisions about the interview process.

The manager might also consult with others, soliciting their advice and effectively expanding the number of experiences they can draw on.

In sorting through all this information, the prudent manager will also bear in mind the specific characteristics of the position they’re hiring for, and will carefully consider how any general advice about hiring might need to be tailored to their particular situation.

But even more importantly, none of these sources of evidence are going to be perfect. Relying on personal experience is risky because people often make mistakes when trying to explain the world around them. They overreact to recent events, mistake correlation for causality and tend to interpret new information in a way that confirms their prior view of the world. Consulting with others may help to broaden one’s own perspective, but an entire group or industry may fall prey to the famous problem of “groupthink.” Academic researchers try to design studies that take a more systematic and reliable approach to assembling and sorting through information, but these methods are not perfect, and some studies have stronger research designs than others.

Avoid the Two Big Pitfalls

I see two pitfalls that many people fall into when they discuss what evidence is available.

The first pitfall is defining evidence too narrowly. People make this mistake when they say that only peer-reviewed research counts as evidence, or that only quasi-experimental studies or randomized controlled trials count as evidence. When people define evidence too narrowly, they end up giving bad advice, like saying that you don’t need to floss or—for many years—that smoking is safe.

The second pitfall is failing to distinguish between higher and lower quality sources of evidence. Just as dismissing certain categories of evidence is problematic, failing to acknowledge that some methods of gathering information yield more reliable information than others will leave you ill-informed. Low-quality evidence is better than nothing, but it often leads to incorrect conclusions. We may not be able to wait for better information before taking action, but we can acknowledge the uncertainty caused by a lack of high-quality evidence. We can also create contingency plans that will allow us to course-correct if the approach we pursue appears to be failing.

One way to mentally organize available evidence is to sort it according to a hierarchy of evidence. There are things we are highly certain of—perhaps demonstrated through randomized controlled trials or a long-established body of scientific studies. There is evidence that is suggestive—maybe a compelling pair of new studies or a well-supported theory that is difficult to test directly. And finally, there is evidence that is preliminary and highly uncertain—something suggested by a single study or by an informal evaluation of several personal experiences.

When it comes to making decisions, we should obviously rely on the highest quality of evidence that is available. Given how complex the world is, important decisions often require making several assessments simultaneously or trying to fit several different pieces of evidence together to describe our specific situation. And often, there is no high-quality evidence that speaks to some of what we want to know.

In other words, public policy and administration is full of difficult decisions with high uncertainty. No policymaker is going to get every decision right. But a careful and appropriate assessment of available evidence will lead to decisions that are more correct more often.


Author: Nathan Favero (nathanfavero.com) is an assistant professor in the School of Public Affairs at American University. His research focuses on public management, education policy, social equity, and research methods.
E-mail: [email protected]. Twitter: @favero_nate
