Risky Business

The views expressed are those of the author and do not necessarily reflect the views of ASPA as an organization.

By Burden S. Lundgren
November 15, 2024

A family member, a man well over fifty, asked me if I would advise him to get one of those home genetic tests to assess his disease risks. I admit I laughed. First, most truly deleterious genes would have already made their appearances by age fifty. Second, gene science is in its infancy. Identification of this or that gene alone doesn’t tell us much. But my family member is also a heavy smoker and drinker and aware of the actual risks this entails. In other words, he willingly ignores the actual high risks he is running while hoping to find “maybe” risks.

The definition of risk centers on probability, but when we look at risk in real life, we also weigh the gravity of the outcome. If you live in the United States, your chance of being bitten by a rabid dog is vanishingly low, but if it happens, the consequences can be as serious as death. The lifetime chances of being bitten by a non-rabid dog are considerably higher, but few bites are that serious.

During the recent pandemic, despite all the evidence in favor of masking and vaccination, people made their own decisions, some paying attention to the science and others, like my family member, ignoring it. But the science does not actually tell us our personal risk. The notion that what is true for a population is true for every individual in that population is called the ecological fallacy. So if the statistics show an 85% lower risk of hospitalization for vaccinated people, that figure does not tell us what our personal risk is.
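To make the ecological fallacy concrete, here is a minimal sketch in Python using entirely hypothetical baseline risks: the same 85% relative reduction translates into very different absolute risks for different people.

```python
# Hypothetical illustration: a population-level 85% relative reduction in
# hospitalization risk maps onto very different absolute risks, because
# individuals start from different baselines.
relative_risk_reduction = 0.85  # the population-level figure from above

# Made-up baseline (unvaccinated) hospitalization risks for three people
baseline_risks = {
    "healthy 25-year-old": 0.002,
    "average 55-year-old": 0.02,
    "high-risk 80-year-old": 0.15,
}

for person, baseline in baseline_risks.items():
    vaccinated = baseline * (1 - relative_risk_reduction)
    print(f"{person}: {baseline:.1%} -> {vaccinated:.2%} "
          f"(absolute reduction {baseline - vaccinated:.2%})")
```

The population statistic is the same in every case; what it means for any one person is not.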

If you have studied statistics, one of the first concepts you learned was the p-value. It is supposed to tell you whether a study's result is statistically significant or whether it could be due to chance. It is calculated for all kinds of studies: medical, public health and, of course, public administration. Although we usually look for p ≤ .05 or p ≤ .01 as the cutoff for significance, there really is no standard value. In fact, the investigator can set the level of significance at whatever level she wants.
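Because the cutoff is the investigator's choice, the very same result can count as significant or not. A brief sketch in Python with simulated data makes the point:

```python
import numpy as np
from scipy import stats

# Simulated data: two groups drawn from normal distributions
rng = np.random.default_rng(0)
control = rng.normal(loc=50.0, scale=10.0, size=40)
treated = rng.normal(loc=55.0, scale=10.0, size=40)

# A two-sample t-test yields a single p-value...
t_stat, p_value = stats.ttest_ind(treated, control)
print(f"p-value = {p_value:.4f}")

# ...but "significant" depends entirely on the alpha the investigator picks
for alpha in (0.05, 0.01):
    verdict = "significant" if p_value <= alpha else "not significant"
    print(f"alpha = {alpha}: {verdict}")
```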

In their 2013 article "The Rise of Statistical Significance Testing in Public Administration Research and Why This Is a Mistake" (Journal of Business and Behavioral Sciences), Raymond Hubbard and C. Kenneth Meyer argue that the problems with risk evaluation run deeper than uncertain significance levels: the null hypotheses being tested do not actually exist.

Three years later, the American Statistical Association issued a statement deeply critical of the p-value, especially its most common use: it does not measure the probability that a hypothesis is true, nor does it rule out chance as an explanation. Still, it persists in almost every field where statistical analysis is used. In a pass through the latest New England Journal of Medicine, one of the foremost scientific journals on the planet, the p-value still pops up. There are tests, less familiar to most research audiences, that estimate risk more precisely, but they do not enjoy the near-religious popularity of the p-value.
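The column does not name those alternatives, but one widely recommended option is to report an effect size with a confidence interval rather than a bare p-value. A sketch with made-up counts, using the standard log-normal approximation for a risk ratio:

```python
import math

# Hypothetical 2x2 data: hospitalizations / totals in two groups
events_a, total_a = 12, 400   # exposed (e.g., vaccinated)
events_b, total_b = 80, 400   # unexposed (e.g., unvaccinated)

risk_a = events_a / total_a
risk_b = events_b / total_b
rr = risk_a / risk_b          # risk ratio (here 0.15, an 85% reduction)

# 95% confidence interval via the log-normal approximation
se_log_rr = math.sqrt(1/events_a - 1/total_a + 1/events_b - 1/total_b)
lo = math.exp(math.log(rr) - 1.96 * se_log_rr)
hi = math.exp(math.log(rr) + 1.96 * se_log_rr)
print(f"risk ratio = {rr:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
```

Unlike a lone p-value, the interval conveys both the size of the effect and the uncertainty around it.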

Numerical values present themselves as stand-ins for scientific certainty, and they are comforting to professions that base their practices on quantitatively deterministic precepts. Attaching a number to a result gives us a soothing sense of certainty. A number is science. A number is specific. It gives us a nice mental click and the feeling that we have grasped reality. But have we? Michael Blastland and Andrew Dilnot (The Numbers Game: The Commonsense Guide to Understanding Numbers in the News, in Politics, and in Life, 2009) caution otherwise. Numbers go up and down. If you measure the same numbers (e.g., heart attack hospital admissions) over time, there will be fluctuations. Researchers may fall all over themselves to provide explanations, but the fact is simply that numbers change over time.
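A quick simulation shows how much purely random fluctuation to expect. Here, monthly heart attack admissions are drawn from a Poisson distribution with the same underlying rate every month (the rate of 100 is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
true_monthly_rate = 100                       # the risk never changes
admissions = rng.poisson(true_monthly_rate, size=12)

print("monthly admissions:", admissions.tolist())
print(f"spread: {admissions.min()} to {admissions.max()}, "
      "with no change in the underlying risk")
```

Any month-to-month "trend" in that output is chance, and chance alone.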

Then there is the matter of clusters of cases. We love to jump in with possible explanations, but the most likely explanation is chance, because risk does not spread itself evenly over populations. And there is the even simpler case of averages. You probably learned about skewed distributions even before p-values.
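Both points are easy to demonstrate. The sketch below (hypothetical numbers) scatters 100 cases uniformly across 100 equal-population areas, then contrasts the mean and median of a skewed set of values:

```python
import statistics
import numpy as np

rng = np.random.default_rng(2)

# Chance clustering: 100 cases spread uniformly over 100 areas still "clump"
cases_per_area = rng.multinomial(100, [1 / 100] * 100)
print("most cases in a single area:", cases_per_area.max())
print("areas with no cases at all:", int((cases_per_area == 0).sum()))

# Averages in a skewed distribution: one outlier drags the mean upward
incomes = [30, 32, 35, 38, 40, 45, 50, 60, 75, 500]   # in thousands
print("mean:", statistics.mean(incomes), "median:", statistics.median(incomes))
```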

The real issue is that we tend to gravitate to the statistical analysis of a study (the end) when we should be paying far more attention to the beginning. Does the study follow ethical guidelines? Is the background research adequate to establish a problem? Is the research question well defined? Does the study design home in on the answer to that question? Is the sample truly random and large enough to imbue the results with some power? What are the study limitations? If the study is poorly set up, the statistical analysis really doesn't matter.
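One of those beginning-of-study questions, whether the sample is large enough for adequate power, can be checked before any data are collected. A sketch using statsmodels, assuming a medium effect size and the conventional 80% power target:

```python
import math
from statsmodels.stats.power import TTestIndPower

# How many participants per group to detect a medium effect (d = 0.5)
# at alpha = 0.05 with 80% power? (All three inputs are the analyst's choices.)
n_per_group = TTestIndPower().solve_power(effect_size=0.5, alpha=0.05, power=0.80)
print("participants needed per group:", math.ceil(n_per_group))  # about 64
```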

All our statistical analyses cannot remove risk from our lives. And, most of all, numbers cannot give us meaning. What we choose to study, and how we apply the results, cannot be reduced to a number.


Author: Burden S. Lundgren, MPH, PhD, RN, practiced as a registered nurse specializing in acute and critical care. After leaving clinical practice, she worked as an analyst at the Centers for Medicare and Medicaid Services and later taught at Old Dominion University in Norfolk, VA. She has served as a consultant to a number of non-profit groups. Presently, she divides her time between Virginia and Maryland. She can be reached at [email protected].

 
