Statistics can be misleading and sometimes mind-boggling. Statistical studies can be used to hide the true nature of a relationship, or stated in a way that falsely supports or detracts from a certain position. I am not an expert in statistics. I’ve taken the usual college courses in classical statistics and, later on, was trained in geostatistics, which differs from classical statistical methods in that one must pay attention to context, not just the pure mathematics. I have, however, learned from experience to recognize some red flags in statistical studies, things that should make you suspicious of the study results.
The following examples highlight some of the pitfalls.
1. Relative risk versus absolute risk
Newspaper articles frequently report on studies, especially medical studies, that say something like this: If you use substance X, you double your chances of contracting dread condition Y. These studies are reporting a relative risk which is essentially meaningless. Relative risk is the risk in relation to something else. It can sound scary, but it tells you nothing about the actual risk. Actual or absolute risk is simply the probability of something happening.
For example, let’s say that the incidence of condition Y in the general population is 1 in 100,000. Among long-time users of substance X, the incidence of condition Y is 2 in 100,000. The relative risk says you double your chances of getting condition Y if you use X; sounds scary. But the real risk or chance of contracting condition Y rises from an absolute risk of 0.00001 to an absolute risk of 0.00002. Not so scary.
The reasoning in this example can be applied in reverse. For instance, you may have seen claims like this: dietary supplement X or drug Z cuts the incidence of condition Y in half. Again, this is relative risk, and the real benefit might be very, very small, like the difference between an absolute risk of 0.00002 and 0.00001.
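The arithmetic behind this distinction is trivial, which is part of the point. Here is a minimal sketch using the hypothetical incidences from the text (1 and 2 cases per 100,000); the variable names are my own, not from any study.

```python
# Relative vs. absolute risk, using the hypothetical numbers from the text.
baseline = 1 / 100_000   # absolute risk of condition Y in the general population
exposed = 2 / 100_000    # absolute risk among long-time users of substance X

relative_risk = exposed / baseline       # 2.0: "doubles your chances" -- sounds scary
absolute_increase = exposed - baseline   # 0.00001: one extra case per 100,000 -- not so scary

print(f"Relative risk: {relative_risk:.1f}x")
print(f"Absolute risk increase: {absolute_increase:.5f}")
```

The same headline ("doubles your risk!") can describe either a large or a vanishingly small absolute change; only the absolute numbers tell you which.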
Another example: a couple of years ago the Arizona Daily Star reported on a study which claimed that “Women who said they walked briskly had a 37 percent lower risk of stroke than those who didn’t walk.” (That’s a smaller relative reduction than the halving in the example above.) “The research involved about 39,000 female health workers 45 or older enrolled in the Women’s Health Study. The women were periodically asked about their physical activity. During 12 years of follow-up, 579 had strokes.”
A good characteristic about this study is that it had a large number of subjects and a long duration. However, the reported risk is a relative risk, so we don’t know if a 37 percent reduction is significant. What is the incidence of stroke in the general population?
Another red flag in the study is the phrase “women who said.” This was not a clinical trial, but relied on the memory and veracity of the study participants. Did the study take into account possible confounding factors such as general health or hereditary health conditions, diet, alcohol or tobacco use, and environmental conditions? The basic input data were completely uncontrolled, which makes the reported results meaningless. We should always ask: “What are the odds of something occurring just by chance?”
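The article does at least report enough numbers to estimate an absolute risk. The study does not report the walker/non-walker split, so we can only compute the overall incidence, but that is still more informative than the bare 37 percent figure. A back-of-the-envelope sketch:

```python
# Rough absolute risk from the reported study numbers:
# 579 strokes among ~39,000 women over 12 years of follow-up.
participants = 39_000
strokes = 579

overall_risk = strokes / participants   # about 0.015, i.e. ~1.5% over 12 years
print(f"Overall 12-year stroke risk in the study: {overall_risk:.2%}")
```

So even before asking about the walkers, the absolute 12-year risk in this cohort was on the order of 1.5 percent; a 37 percent relative reduction would shave off roughly half a percentage point.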
2. What are the odds?
(Adapted from an essay by Tom Siegfried, Science News, 27Mar2010)
Let’s pose a hypothetical example and say that our favorite baseball player, “Slugger Bob,” is one of a group of 400 players who were tested for steroid use, and that Slugger Bob tested positive. We will stipulate that the test correctly identifies steroid users 95 percent of the time. The test also has a 5 percent incidence of false positives. So, what are the odds that Slugger Bob is a steroid user?
Most people might say that there is a 95 percent chance that Slugger Bob is a steroid user.
But here is where the real world collides with classical statistics and where context matters. Let’s say that we know from prior testing and other experience that about 5 percent of all baseball players are actually steroid users. We would expect, therefore, that out of the 400 players tested, 20 are users and 380 are nonusers.
Of the 20 users, 19 (95 percent of 20) would be identified correctly as users according to the stipulated test.
Of the 380 nonusers, 19 (5 percent false positives) would incorrectly be indicated as users.
If 400 players were tested under these conditions, 38 would test positive. Of those, 19 would be actual users and 19 would be innocent nonusers. So if any single player’s test is positive, the chance that he really is a user is 50 percent, since an equal number of users and nonusers tested positive. It’s just like flipping a coin. The devil is in the details.
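The steps above can be worked through in a few lines of code. This is just the text’s own arithmetic, restated; the variable names are mine.

```python
# The steroid-testing arithmetic from the text, worked in code.
players = 400
base_rate = 0.05        # fraction of players who actually use steroids
sensitivity = 0.95      # the test catches 95% of true users
false_positive = 0.05   # 5% of nonusers test positive anyway

users = round(players * base_rate)   # 20 actual users
nonusers = players - users           # 380 nonusers

true_positives = users * sensitivity          # 19 users correctly flagged
false_positives = nonusers * false_positive   # 19 nonusers incorrectly flagged

total_positives = true_positives + false_positives        # 38 positive tests
p_user_given_positive = true_positives / total_positives  # 0.5

print(f"Chance a positive test means a real user: {p_user_given_positive:.0%}")
```

This is a base-rate calculation: the 95 percent accuracy figure says nothing by itself, because the answer depends just as much on how rare steroid use is in the tested population.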
3. Clusters and Patterns
We read from time to time stories of cancer clusters or about some other malady concentrated in a particular location, for instance, the apparent high incidence of childhood leukemia at Fort Huachuca. Such clusters must be investigated to see if a cause can be identified. But sometimes, such clusters occur just by chance.
As a geologist, I look for patterns in nature because patterns can give clues to special structural situations and mineral deposits. Here is an example of assessing the significance of apparent clusters.
In the figure accompanying this article, we see an array of red dots superimposed over a geologic map of Arizona. Let’s say for the moment that the dots represent assays. We humans can rationalize patterns in arrays such as this and also rationalize a cause.
What might catch a geologist’s eye in this array is the line of dots extending from the northwest to the southeast, exactly along the Mogollon Rim, which is a structural separation between the lowlands of the southwest and the highlands of the Colorado Plateau. There also is a cluster near Ajo, site of a copper mine.
There is a dot near Rosemont, another copper deposit, and a dot in the Galiuro Mountains, again an area with copper deposits. Dots also occur near uranium and coal deposits on the Colorado Plateau and near gold deposits in western Arizona.
So, do the dots represent anything having actual significance? No, the dots are a scatter plot of random numbers. On my computer, I generated 100 random numbers and normalized them to values between 1 and 100. I used the first 50 as the X-coordinates and paired them with the second 50 for the Y-coordinates and made a scatter plot of the data. The dots have no significance at all. The patterns occur just by chance, but our rationalizations can give them meaning when there is none.
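The construction described above is easy to reproduce. Here is a sketch of how such a random "assay" map could be generated; the seed value is arbitrary, chosen only so the output is repeatable.

```python
import random

# 100 uniform random values scaled to the range 1-100, then split:
# the first 50 become x-coordinates, the second 50 become y-coordinates.
random.seed(42)  # arbitrary seed, for reproducibility only
values = [1 + 99 * random.random() for _ in range(100)]
points = list(zip(values[:50], values[50:]))

assert len(points) == 50
# Any apparent clusters or lineaments among these 50 points are pure chance,
# yet plotted over a geologic map they will invite rationalization.
```

Running this a few times with different seeds is instructive: almost every run produces some alignment or cluster that looks meaningful at a glance.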
So, be skeptical. Look for weasel words such as “linked to,” “it is widely believed,” “may,” “experts say,” “most feel.” Use of words like these in a study indicates the authors have no real hard evidence.
Nothing can be proven with statistics; correlation does not prove causation, but sometimes statistics help us look in the right direction.
Copyrighted by Jonathan DuHamel. Reprint is permitted provided that credit of authorship is provided and linked back to the source.