Journalists are supposed to ask probing questions, verify what sources say, then be selective in what they report. This is part of performing their journalistic mission. What this means for medical reporters is that they must work to separate scientific fact from science fiction. But, in daily coverage, such ideals are often betrayed.

In my book, “Medical Journalism—Exposing Fact, Fiction, Fraud,” I illuminate many of the most common pitfalls in health and medical reporting. These include:

  • Reducing reporting to dueling quotes, which results in “he said-she said” reports.
  • Failing to ask sources to substantiate their claims with scientific evidence.
  • Being misled by number games.
  • Depending on anecdotes for evidence, rather than relying on scientific studies.
  • Failing to question findings about treatment effects.
  • Extrapolating from research to clinical practice, such as hyping findings of basic research, animal studies, or clinical subgroup findings.
  • Mistaking risk factors for diseases and assuming that treatment of risk factors will do more good than harm.
  • Misjudging risks by failing to give a realistic idea of the actual odds involved.

These pitfalls can be overcome when medical reporters keep four key questions in mind and conduct their reporting in a way that unearths answers to them:

  1. Is this claim valid?
  2. Where is the evidence?
  3. Is the evidence strong and relevant?
  4. How can the news be reported fairly and accurately?

Is This Claim Valid?

To determine this, a reporter should figure out whether the promised effects or danger signals seem at all realistic. Here, the golden rule is that the greater the claim, the more reason to be skeptical. The stronger the claim, the more determined ought to be the demand for evidence. For example, if researchers claim they’ve developed a drug that cures Alzheimer’s disease, the demand for evidence should be greater than if the claim centers on a drug that changes the behavior of brain-damaged rats.

Exposing unrealistic claims also calls for the ability to penetrate the rhetorical techniques some medical experts use to be persuasive. Be on the lookout, for example, when medical sources try to impress a reporter with the use of unnecessary technical jargon or excessively precise figures (4.86 percent instead of 5 percent). And, sometimes, untested medical technologies are successfully launched amid false accusations of a “conspiracy” against the technology. Such rhetoric can whet the appetite of a naive reporter, drawn to a more dramatic slant. And good medical reporters shine a light on vague hypotheses that are not supported by scientific evidence. For example, when a disease is shown to be associated with a particular genetic disorder, solid medical reporting tells readers that this does not support the hypothesis that a cure has been found.

Another key issue is whether an expert who makes a claim is sufficiently knowledgeable about the topic. In what specific field is he or she an expert? Since medical research is so highly specialized, reporters cannot assume that an expert on gastrointestinal cancer is sufficiently knowledgeable to also address gynecological tumors. What is the reputation of this expert among his or her colleagues? What type of research has he or she published? Especially important to find out—and report—is information about sponsoring organizations, companies or other important affiliations. Are factors such as potential research grants or media exposure playing a role in the release of this information? Will the source gain from publicity? Some naive reporters might regard doctors and other medical experts as objective seekers of truth. However, figuring out if there is a hidden agenda is as essential in medical reporting as it is on any other beat.

Where Is the Evidence?

When medical reporters face tight deadlines, finding the evidence means acquiring at least a rough idea of whether relevant studies are available to support the claim. Sweeping statements by experts, such as “breakthrough” or “research shows,” should not be accepted or quoted unless evidence can be produced. Sources should be able to back up their claims with peer-reviewed articles in recognized journals. If they can’t, then the absence of such evidence should be reported. Reporters should always ask to see the articles or references that experts cite.

There are questions medical journalists should be asking to get at this evidence. A few of these include:

  1. Where is the evidence? (Ask to see articles or references. Are the journals well known?)
  2. Who has been studied and who is affected? (What was the status of their disease, their age and gender, social/cultural background, and follow-up?)
  3. Are the research methods reliable? (With regard to treatment methods, retrospective studies are generally weaker than prospective, uncontrolled studies generally weaker than controlled, and nonrandomized trials generally weaker than randomized.)
  4. How great were the effects? (Changes should be reported not only in percentages, but also in absolute numbers. How many patients underwent treatment as compared with the number of successful cases?)
  5. How precise are the results? (What is the margin of error? Are the results statistically significant or not? Beware of statements such as “up to” or “as little as” when only one end of the range is presented.)
  6. How well do the conclusions concur with other studies? (Ask if other studies point in the same direction. If so, the results are probably more reliable. Small, single studies can be unreliable. Systematic reviews of many studies, sometimes including meta-analyses, are often more reliable.)
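Question 4 above—reporting effects in absolute numbers, not just percentages—is where reporters are most easily misled. The arithmetic can be sketched as follows, using entirely hypothetical trial numbers (not drawn from any real study) to show how a modest absolute effect can be dressed up as a dramatic relative one:

```python
# Hypothetical trial numbers, for illustration only (not real data):
# 1,000 patients per arm; 20 events in the control group, 10 in the treated group.
control_n, treated_n = 1000, 1000
control_events, treated_events = 20, 10

control_risk = control_events / control_n  # 0.02, i.e., 2%
treated_risk = treated_events / treated_n  # 0.01, i.e., 1%

# The headline-friendly figure: "events cut by 50%".
relative_reduction = (control_risk - treated_risk) / control_risk

# The figure readers need: the risk fell by just one percentage point.
absolute_reduction = control_risk - treated_risk

# Another way to convey scale: patients treated per event avoided.
number_needed_to_treat = 1 / absolute_reduction

print(f"Relative risk reduction: {relative_reduction:.0%}")   # 50%
print(f"Absolute risk reduction: {absolute_reduction:.1%}")   # 1.0%
print(f"Number needed to treat:  {number_needed_to_treat:.0f}")  # 100
```

Both figures describe the same trial; a fair report gives readers the absolute numbers alongside the percentage.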

Reporters can use a handbook (such as “Clinical Evidence,” published regularly by BMJ Publishing Group) as a starting point for important questions. Systematic reviews, such as Cochrane reviews, cover a wide range of verified information and identify beneficial or harmful interventions. Other helpful resources include health technology assessments, including economic, social and ethical analyses.

Is the Evidence Strong and Relevant?

Medical reporters are flooded with published research findings from sources who want to promote their products and ideas. Given the time pressure under which most journalists work, a complete assessment of scientific quality is unrealistic. However, a skeptical attitude and a few basic principles go a long way.

For example, good reporters realize that weak findings about treatments often emerge from studies that do not use control groups, have not been randomized, or are based on few observations or a narrow sample. Similarly, a high dropout rate among trial subjects often leads to false conclusions, as does an excessively short follow-up time. Many researchers draw conclusions about a method’s benefits based solely on changes in lab values and test results, so-called surrogate endpoints. However, as a rule, special studies of hard endpoints—patients’ symptoms, quality of life, and survival—are necessary to back up claims about the benefits and risks of an intervention. For example, a study showing that a treatment reduces tumor size in cancer patients does not necessarily mean it also saves lives. It might, in fact, do more harm than good.

Judging whether or not the evidence is relevant to a larger group of patients involves asking who has been studied and who is affected by the condition. Therefore, the basic questions include: Do these results really apply to other patients? How do you know?

When looking for clues that the scientific evidence for a treatment is weak, watch for these familiar characteristics:

  • Preliminary results (often presented at conferences and said to be “based on my experience”)
  • No control group (only before and after measurements)
  • No randomization (often resulting in systematic errors)
  • Few observations (often making it impossible to draw conclusions)
  • Biased samples (particularly sick/healthy or old/young, or narrow subgroups)
  • Major dropout (resulting in systematic error)
  • No use of blinding (allowing expectations to influence the results and how they are interpreted)
  • Short follow-up (leading to premature judgments of treatment success or failure)
  • Lab values only (rather than symptoms, quality of life, and survival, which matter the most to patients)

How Can the News Be Reported Fairly and Accurately?

Balance is often considered a hallmark of fair reporting. In the medical beat, this means, for example, reporting the effects and the side effects, as well as the benefits and harm. Thus, when an expert discusses treatment from a single point of view, a good medical reporter will inquire about the other side of the issue and ask for such evidence.

Balance also means conveying important ambiguity and controversy. Both sides of an argument should be presented, and exposing a lack of scientific support for either side is equally important. In fair medical reporting, it is also important to learn how to choose typical examples. At times, medicine offers examples of odd phenomena—incurable diseases that mysteriously disappear after a treatment that has been shown to be ineffective, or perfectly healthy people who die suddenly from a chemical that has been proven quite harmless. Given their rarity, these cases attract journalistic attention. But when reporting such events, journalists must make it clear that these are exceptions to the rule. And when interviewing a patient with a particular disease, the public needs to know whether the patient is a typical or an exceptional case.

Accurate reporting also entails helping the audience distinguish between correlation and cause. When two events occur at the same time—for example, a patient’s symptoms improve when a new treatment is started—this does not necessarily imply that one causes the other. Correlation is not causation. In reporting about a particular health risk, it may be helpful to give the odds, but also to compare them with the odds of other risks, giving the public a basis for comparison. For example, the risk of acquiring cancer from a particular food can be compared to the risk of acquiring cancer from smoking. Finally, good medical reporters return to important topics and follow up on their reports. They might reevaluate claims by approaching the subject from new angles.
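The correlation-is-not-causation trap can be made concrete with a toy simulation. In this sketch (entirely made-up numbers, using the classic textbook example of ice-cream sales and drownings), a hidden third factor—summer heat—drives two variables that have no causal link to each other, yet they end up strongly correlated:

```python
import random

random.seed(0)

# Toy simulation with made-up numbers: a hidden confounder (temperature)
# drives both variables; neither causes the other.
n = 1000
temperature = [random.uniform(0, 30) for _ in range(n)]
ice_cream_sales = [t * 2.0 + random.gauss(0, 5) for t in temperature]
drownings = [t * 0.1 + random.gauss(0, 1) for t in temperature]

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# A strong positive correlation emerges, yet banning ice cream
# would prevent no drownings: both simply track the weather.
print(f"correlation(ice cream, drownings) = {pearson(ice_cream_sales, drownings):.2f}")
```

A reporter who finds such a correlation in a study should ask the source what confounders were controlled for before implying that one variable causes the other.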

Seasoned medical reporters are distinguished from gullible ones by their ability to remain skeptical toward unproven claims—whether in interviews, in press releases, at conferences, in journal supplements, or on the Internet. While doing research for my textbook on medical journalism, I interviewed many excellent medical journalists. The lesson they had learned was clear: It does not take a medical degree to be a good medical reporter. What it requires is basic knowledge of a few scientific ground rules (many of which I describe in my book) and, above all, common sense and a whole lot of healthy skepticism.

As psychiatrist Thomas Szasz said, “Formerly, when religion was strong and science weak, men mistook magic for medicine; now, when science is strong and religion weak, men mistake medicine for magic.” Let us not add to the confusion, but try to help the audience by sorting the wheat from the chaff.

Ragnar Levi, M.D., is an award-winning medical editor with a background in both medicine and journalism. Since 1992, Levi has been the executive editor of “Science & Practice,” published by SBU, Sweden. He has written “Medical Journalism—Exposing Fact, Fiction, Fraud” (Iowa State Press, 2001) and also authored a monograph on evidence-based medicine.
