Critical reading 101 – “Studies have shown…” Part II

This is a continuation of yesterday’s post about how to evaluate claims by AP/NPers (or anybody else, for that matter) that their positions are supported by scientific studies.

——————————————————————————————

Are the results of the study statistically significant? “Statistical significance” refers to the likelihood that the result obtained in a particular study occurred by chance. While not every statistically significant result is practically significant and vice versa, you usually can’t claim to have proven something in the world of science unless the results are statistically significant.

Usually, the exact calculations of significance in scientific studies are done by medical statisticians running formulas in statistical software, and I won’t even attempt to get into the math of it all. However, the meaning of the results is clear to most scientists and doctors. I’ll try to explain the two most common measures seen in research studies and how to determine their statistical significance.

The p value lets us know whether a difference seen between two groups in the study is likely to be due to mere chance or not. The p value is a number between 0 and 1 that can be read as a percentage: roughly speaking, it’s the likelihood that we’d see a difference this large if chance alone were at work. In order to claim statistical significance, it’s standard to require a p value of 0.05 or lower, i.e., a 5% likelihood or less that the results are due to chance. If you see a higher p value, the results are not statistically significant. The lower the p value, the stronger the statistical significance.
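To make this concrete, here’s a minimal sketch of checking a p value yourself. It assumes Python with SciPy installed and runs Fisher’s exact test on a 2×2 table; the counts are invented purely for illustration, not taken from any real study.

```python
# Minimal sketch: a p value for a difference between two groups,
# via Fisher's exact test. All counts are invented for illustration.
from scipy.stats import fisher_exact

# 2x2 table: rows = the two groups, columns = sick vs. healthy
table = [[20, 80],   # group A: 20 of 100 contracted the disease
         [10, 90]]   # group B: 10 of 100 contracted it

odds_ratio, p_value = fisher_exact(table)
print(f"p = {p_value:.3f}")
# At or below 0.05: conventionally "statistically significant".
# Above 0.05: no claim of significance, however big the difference looks.
```

Run it and compare the printed p against the 0.05 cutoff; with borderline counts like these, eyeballing the raw percentages is no substitute for the test.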

The Odds Ratio (OR) has a fairly straightforward explanation – the ratio of the odds of an event occurring in one group vs. the odds of it occurring in another group. Note that odds aren’t quite the same as chances: a 20% chance of contracting a disease means odds of 0.2/0.8 = 0.25. For example, if breastfed children have a 10% chance of contracting a disease (odds of 0.1/0.9, about 0.11) and bottlefed children have a 20% chance (odds of 0.25), the odds ratio of bottlefed to breastfed children for contracting the disease is 0.25/0.11, about 2.25. An OR larger than 1 usually indicates a risk; an OR between 0 and 1, a benefit.
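Here’s that odds arithmetic spelled out in a few lines of Python; the 10% and 20% risks are just the hypothetical numbers from the example above, nothing more.

```python
# The odds-ratio arithmetic from the example above, spelled out.
# The two risks are the hypothetical 10% and 20% from the text.
p_bottlefed, p_breastfed = 0.20, 0.10

odds_bottlefed = p_bottlefed / (1 - p_bottlefed)   # 0.25
odds_breastfed = p_breastfed / (1 - p_breastfed)   # about 0.11

odds_ratio = odds_bottlefed / odds_breastfed       # about 2.25
print(f"OR = {odds_ratio:.2f}")
```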

Around an odds ratio we’ll almost always find a confidence interval (CI), usually a 95% confidence interval. That means the researchers are saying, “we’re 95% sure the real odds ratio lies inside this interval.” You’ll usually see it written as something like OR=2.4 (1.2-7.5). The narrower the CI, the more precise the result; and a CI that straddles the number 1 (for example, OR=1.5 (0.2-6.7)) is, by definition, not statistically significant. That’s because you can’t claim an OR indicates both risk and benefit simultaneously!
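To make the “straddles 1” point concrete, here’s a sketch of the usual textbook way a 95% CI around an OR is computed, on the log scale, reusing the invented 2×2 counts from the p-value sketch above.

```python
# Sketch: 95% confidence interval around an odds ratio, computed on the
# log scale. The counts are the same invented 2x2 table as before.
import math

a, b = 20, 80   # group A: sick, healthy
c, d = 10, 90   # group B: sick, healthy

odds_ratio = (a * d) / (b * c)                # 2.25
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)  # standard error of log(OR)

low  = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
high = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)

print(f"OR = {odds_ratio:.2f} ({low:.2f}-{high:.2f})")
# If the printed interval straddles 1, the OR is not statistically significant.
```

Note what happens with these made-up counts: the OR is a hefty 2.25, yet the lower end of the interval lands just below 1, so the result wouldn’t count as statistically significant.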

ORs are usually used for retrospective and case-control studies, which are the ones we see most often; randomized controlled trials and prospective studies more often use the similar (though not exactly the same) measure of Relative Risk, typically reported with a standard error.
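If you’re curious how close the two measures are, here’s a quick sketch contrasting Relative Risk with the OR on the same hypothetical risks from the earlier example.

```python
# Contrast: relative risk vs. odds ratio on the same invented risks.
p_bottlefed, p_breastfed = 0.20, 0.10

relative_risk = p_bottlefed / p_breastfed             # 2.0: ratio of the risks
odds_ratio = (p_bottlefed / (1 - p_bottlefed)) / \
             (p_breastfed / (1 - p_breastfed))        # about 2.25

print(f"RR = {relative_risk:.2f}, OR = {odds_ratio:.2f}")
# The rarer the outcome, the closer the two numbers get.
```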

Do the study’s results have anything to do with the claims made by the one quoting it? This isn’t always as easy as it looks. Check whether the study deals with humans, animals or stuff in Petri dishes (just because something is proven in animal studies or in the lab doesn’t mean the results apply to humans; no drug company would dare mass-market a medicine shown to be safe and effective only in animal studies, for example). If humans, is the study population similar to the population under discussion (e.g., does the study only deal with a certain gender, a certain race, or people in special circumstances)? Much of the AP literature quotes studies performed decades ago on children in orphanages or hospitals, for example. Can one really compare those children’s situation to your own?

Most importantly, are the results of the study quoted truthfully? Don’t assume they are – check a few of the abstracts on PubMed, or better yet, Google the title of the article to see if you can find the full-text article somewhere (this happens more often than you’d think). For example, Ed Friedlander, a pathologist, once looked up cites from various anti-immunization websites and found that many of them didn’t support the arguments made by the anti-vaccinationists (surprise, surprise!). It also helps to read the “discussion” section of a scientific article, where the researchers attempt to explain the results they got and identify the shortcomings of their research. You’ll find that most honest researchers are a good deal more circumspect about their results than those who gleefully quote them.

I hope you’re still reading and haven’t gone off for an Advil yet…but believe me, this is merely scratching the surface. If all you come away with is that you have to examine for yourself the scientific claims made by anyone – the media, medicos, and especially those with an agenda to sell – then I’ve accomplished something with all this verbiage 🙂 .
