You will often see in AP/NP-promoting books and websites claims such as “studies have shown that practice X is superior” or “science has shown conclusively that X”. Very often (Dr. Sears is often guilty of this, but not only him) these “studies” remain nameless, and you can imagine why… no such studies exist. The media sometimes acts similarly, claiming a study has shown something conclusively when in fact the results are far from conclusive. In those cases, however, the usual motive is sensationalism, not deliberate deception.
But some activists will attach a bibliography, often chock-full of what look like references to medical articles supporting whatever position they espouse. I would guess that the overwhelming majority of their readers come away with the impression that the medical literature indeed supports their position. The slightly more diligent may do a perfunctory search in PubMed, find that the citations are there, and conclude that the information in the article is genuine.
However, not all references are the same, and not all citations say what you think they say. But how can the layperson know the difference between proper and improper use of science?
Evidence-based medicine is a whole discipline that can take weeks and months to learn – and is now being taught routinely at medical schools. While it certainly helps in separating the wheat from the chaff, it can hardly be expected of the layperson to be proficient in the tools of EBM. Some rules of thumb are useful even to the lay reader, however:
Do I understand the terminology used? Sometimes, the activist will attempt to pull a smoke-and-mirrors trick by using unfamiliar medical/scientific terminology. Take the time to look the terms up in Google, Wikipedia or Stedman’s Online Medical Dictionary, and you may find the citation talks about something unrelated to the subject matter, or to the claims made.
What is the citation of? Often, you’ll find bibliographies bolstered by citations of unreliable data, such as single case studies, letters to the editor, or rapid responses to online articles. Making a general claim based upon a single case study or, worse, a non-peer-reviewed letter to the editor or rapid response (sometimes written by the author of the article him/herself!) is the hallmark of the scientific con artist. A short trip to PubMed will often tell you what type of “study” the author is quoting.
As a general rule, the best type of medical evidence comes from a systematic review, preferably of randomized controlled trials. A systematic review looks at all the studies available, grades them according to quality of evidence, and attempts to answer the specific question the studies address by pooling their data. For example, here is a systematic review which addresses whether Vitamin C is effective in treating and/or preventing the common cold. The Cochrane Collaboration is the source for many quality systematic reviews.
The meta-analysis is usually next highest in quality of evidence. As in the systematic review, data is pooled from several different studies to draw the necessary conclusions. However, the conclusions are only as good as the data, and sometimes, only as good as the analysis done. For example, here is a meta-analysis which purports to show that circumcised men are at greater risk for HIV infection than uncircumcised men. However, a few months later, a re-analysis of the same studies by a more competent team of researchers found exactly the opposite: that circumcised men were less likely to contract HIV (which, incidentally, is what most of the studies ‘meta-analyzed’ before and since have shown as well).* But most meta-analyses are useful and competently done, and are usually more valuable than a single study if they encompass several well-done ones.
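To make the pooling step concrete, here is a minimal sketch of the most common approach, fixed-effect inverse-variance weighting: each study’s estimate is weighted by its precision, so larger, tighter studies count for more. The effect sizes and standard errors below are made-up numbers for illustration, not data from any real trial.

```python
# Hypothetical effect estimates (e.g., log odds ratios) and standard
# errors from three imaginary studies -- illustration only.
effects = [-0.30, -0.10, -0.45]
std_errors = [0.15, 0.25, 0.20]

# Fixed-effect inverse-variance weighting: weight = 1 / variance.
weights = [1 / se**2 for se in std_errors]
pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
pooled_se = (1 / sum(weights)) ** 0.5

print(f"pooled effect: {pooled:.3f} (SE {pooled_se:.3f})")
```

Note how the pooled standard error is smaller than any single study’s: this is why a competent meta-analysis can be more informative than its individual inputs, and also why a flawed analysis (bad study selection, bad weighting) can produce a confidently wrong answer.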
Reviews are not studies, but contain an overview of other studies selected by the author. The quality of the review depends on the thoroughness of the author – sort of like a Wikipedia article.
Single studies are what we usually see brought up in the media and in bibliographies (on the web and elsewhere). Those considered most accurate are the randomized controlled trials, meaning you divide a group of people into 2 groups at random (hoping the groups have similar characteristics), and perform an intervention on one group, leaving the other as a control. A classic example would be dividing a group of 20 men with heart disease into 2 groups of 10 men, giving one group a medicine and the other a placebo, and noting the difference between the two groups (e.g., mortality rates) over time. RCTs outside of drug/therapy studies, however, are hard to come by, as people are often self-selecting and it may be unethical to randomize people into groups (e.g., vaccinated vs. unvaccinated against a known killer disease). Which is why many studies in human beings are observational cohort studies – we take a cohort of people with certain characteristics (say, women between the ages of 20 and 40 who smoke, or men who live in a certain area), match them up with a similar control group, and observe them for signs of disease. You can do this prospectively – i.e., take the two groups, follow them for X amount of time, and see which group develops a higher % of the disease – or retrospectively – take people with the disease, match them up with healthy controls, and try to figure out what risk factors the people who got the disease had that the controls didn’t (this latter design is usually called a case-control study).
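The randomization step described above is simple enough to sketch in a few lines. This toy example uses hypothetical participant IDs and Python’s standard library; real trials use more careful schemes (block or stratified randomization), but the core idea is the same: assignment to treatment or control is left to chance, not to the participants or the researchers.

```python
import random

# Twenty hypothetical participants, as in the heart-disease example.
participants = [f"patient_{i}" for i in range(20)]

random.seed(42)  # fixed seed so the example is repeatable
random.shuffle(participants)
treatment = participants[:10]  # this arm gets the medicine
control = participants[10:]    # this arm gets the placebo

# After follow-up, you would compare outcomes (e.g., mortality rates)
# between the two arms.
print(len(treatment), len(control))  # 10 10
```

Because neither the patients nor their doctors choose the groups, known and unknown confounders tend to balance out between arms, which is exactly what observational studies cannot guarantee.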
In general – the larger the study population, the better; prospective studies are more reliable than retrospective ones; and controlled studies are better than those without a control group.
*Given that Robert S. Van Howe, the author of the original study, is a prominent anti-circumcision activist, one can speculate that the mis-analysis wasn’t necessarily due to researcher incompetence; but that really is beyond the scope of this particular post.