A condition called ductal carcinoma in situ (DCIS) is—or is not—a precursor of breast cancer. It does—or does not—require treatment. Doctors differ on these questions because definitive scientific evidence doesn’t exist. Some women with DCIS, a collection of abnormal cells in the milk ducts of the breast, choose to have a mastectomy. Some a lumpectomy and radiation. A few watch and wait.
So when a study addressing this issue appeared in JAMA Oncology last August, it drew widely divergent coverage. Gina Kolata at The New York Times wrote that DCIS posed little or no risk: “Patients with this condition had close to the same likelihood of dying of breast cancer as women in the general population.” Alice Park in Time came to a different conclusion: “DCIS may not be as benign as doctors once thought.” And Jennifer Calfas at USA Today cited dueling experts, drawing no conclusion at all. The study, she wrote, “sparked a debate on the importance of treatment options for women diagnosed with the earliest stage of breast cancer”—if, in fact, DCIS is an early stage of breast cancer.
Confused? The study, as well as comments from lead researcher Steven A. Narod and his hospital, was confusing, too. Narod, of Women’s College Research Institute at Women’s College Hospital in Toronto, told Kolata at the Times that after a biopsy to remove the abnormal cells, “the best way to treat DCIS is to do nothing.” Yet the hospital’s press release quotes Narod as saying that DCIS “has more in common with small invasive cancers than previously thought” and “there’s an inherent potential for DCIS to spread to other organs.” In fact, nothing in the study supports the broad assertions made in many stories. And few stories focused on the clearest findings: Age and ethnicity are risk factors.
Coverage of the DCIS study highlights what is perhaps the biggest challenge facing today’s science journalists: Evaluating and interpreting complex and sometimes contradictory results at a time when so many news stories—from climate change and health reform to energy and environmental regulation to political polling and economics—rely on a fairly sophisticated understanding of science. That makes journalism’s role in developing public science literacy more crucial than ever. “We need a science-literate world because, as science and technology change the environment we live in, we need to understand how we can be smart in navigating those changes,” says Deborah Blum, a Pulitzer Prize-winning science writer, director of the Knight Science Journalism program at MIT, and publisher of Undark, a new science magazine based there. “How do we do that if we don’t understand how it works?”
That was part of the challenge posed by the DCIS story. Even science writers who specialize in medical coverage—or, more narrowly, cancer coverage, or even breast cancer coverage—had trouble navigating the subject. And few science writers have the luxury of such narrow beats. Most cover multiple fields, writing about a NASA experiment one day and a toxic waste controversy the next. Yet they must quickly and accurately translate the science behind these stories, while also being wary of mistakes and fabrications.
The task is further complicated by recent troubles within science itself. During a period of sometimes severe budget constraints, scientists are under increasing pressure to get results. Failure to do so can jeopardize careers and future funding. As a result, studies finding an effect for an experimental drug, for example, are far more likely to be published than unsuccessful trials—even though negative results can sometimes be just as important. A study that finds a promising effect of a drug against a tumor might sound exciting, but the reception of the finding might change if a dozen other papers were published on the same drug and none found any effect.
In addition, there’s been an increase in retractions of research papers, either due to errors or to fabrications. Data needed to assess a study’s validity is sometimes kept confidential, either for commercial or competitive reasons. Plus, the way research is funded—and who funds it—further complicates the search for truth. Much of the funding for academic research comes to universities from the government. Universities encourage a publish-or-perish model in which researchers have incentives to publish every time they collect a sliver of new data—because they get promoted based partly on the number of studies they’ve published and where they’ve published.
The legitimacy of scientific research is often said to depend upon rigorous peer review. That is, when a paper is submitted to a journal, other experts in the field are asked to review it for accuracy. But confidence in peer review is waning. Sometimes, inferior or fraudulent work slips through.
Peer review can fail even when applied to important papers by respected scientists. In a recent multi-year project, for example, a group of researchers tried to replicate 100 psychology studies. They succeeded in confirming only 39. In March of 2015, the British open-access publisher BioMed Central, after finishing its investigation, retracted 43 articles involving efforts to “positively influence the outcome of peer review by suggesting fabricated reviewers.” According to Ivan Oransky, co-founder with Adam Marcus of the blog Retraction Watch, this is grounds for skepticism by science writers. He argues that they should be as skeptical of scientists as political writers are of politicians. “When we think about holding politicians accountable, companies accountable, we look for fraud, corruption, and dishonesty,” Oransky says. “But the accountability metric for science is whether or not what scientists are claiming is going to hold up.” Given the uptick in retractions, journalists can no longer rely on peer review to be confident of a study’s legitimacy.
Of course, failure to replicate results doesn’t mean the original findings were wrong or that the research was fraudulent. Science involves trial and error. In fact, part of the challenge for science writers is to express the nuances and uncertainties in even the best-performed experiments. “It’s pretty complicated to replicate findings,” says Sarah Brookhart, executive director of the Association for Psychological Science. “There are always issues of reproducibility, replication, and generalizability” due to animal models not translating to humans, for example, or differences in methodology. That point was underscored in early March by the publication in Science of a paper by four Harvard researchers that called the replication study statistically flawed and wrong. Reporters, Brookhart argues, should view research papers as part of a process, not as an outcome with a firm conclusion.
[pq]Research papers should be regarded as part of a process, not a firm conclusion[/pq]
Sometimes peer reviewers don’t properly do their jobs. But the sheer number of papers being published has created another problem: there are not enough reviewers with the time to give each study a thorough vetting. Atul Gawande, a surgeon at Boston’s Brigham and Women’s Hospital, a New Yorker staff writer, and author of “Being Mortal,” suggests journalistic fact checking may sometimes be more thorough than scientific peer review. “The peer review process is helpful,” he says, “but when I’m being fact-checked in The New Yorker … The New Yorker doesn’t just look at my footnotes. They look to see whether I cherry-picked my footnotes. They read the article and see: Did I quote something out of context? Are there five other articles that suggest otherwise? They’re doing peer review for me.” Given that some 2.5 million scientific papers are published each year, Gawande acknowledges that that kind of rigorous fact checking just isn’t possible for most peer-reviewed journals—or most news publications, for that matter.
Smart and careful reporting can, however, get around some of these problems. Christie Aschwanden, lead science writer for the data-journalism site FiveThirtyEight, showed how last fall when she analyzed a study claiming young users of e-cigarettes were eight times more likely than non-users to start smoking traditional cigarettes. Brian Primack, a professor at the University of Pittsburgh School of Medicine, was the lead author. The study, which appeared in JAMA Pediatrics in September, surveyed 694 participants between the ages of 16 and 26, and then surveyed them again a year later.
Sixteen of those surveyed had used electronic cigarettes at the beginning of the study. Aschwanden found that the paper’s headline-making findings—the Los Angeles Times: “Teens who vape are more likely to smoke later, study says”; Time: “E-cigarettes are a gateway to tobacco, study says”—arose from the fact that six e-cigarette users had become traditional smokers between the first and second surveys. Six. “So because six people started smoking, news reports alleged that e-cigs were a gateway to analog cigs,” Aschwanden wrote. It was a large study, but the finding that drove the headlines rested on only a handful of participants.
The fault, according to Aschwanden, lay in confusing correlation with causation. Though there was a substantially increased risk associated with e-cigarette use, there were so few e-cigarette users surveyed—just 16—that the sample was too small for definitive results. “It’s a perfectly fine piece of research,” Aschwanden says, “but it seemed like a hypothesis-generating study rather than one we can actually conclude anything from. You had to read the fine print to see that they were looking only at a small subset.”
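To see how a tiny subgroup can yield a headline-sized relative risk with a very wide margin of error, here is a rough back-of-the-envelope sketch. The 694 participants, the 16 baseline e-cigarette users, and the six who went on to smoke come from the study as described above; the non-user numbers are hypothetical, chosen only to illustrate the arithmetic, and the confidence interval uses a standard log-transform approximation, not the paper’s own analysis.

```python
# Illustrative sketch only, not the study's actual analysis.
# From the article: 694 participants, 16 e-cigarette users at baseline,
# 6 of whom went on to smoke traditional cigarettes.
# The non-user figures below are HYPOTHETICAL, chosen only to show how a
# small subgroup produces a dramatic risk ratio with a wide error margin.
import math

exposed_total = 16      # baseline e-cigarette users (from the article)
exposed_cases = 6       # of those, started smoking cigarettes (from the article)

unexposed_total = 678   # remaining participants (694 - 16)
unexposed_cases = 32    # hypothetical: assume roughly 5% of non-users started smoking

risk_exposed = exposed_cases / exposed_total        # 0.375
risk_unexposed = unexposed_cases / unexposed_total  # ~0.047
risk_ratio = risk_exposed / risk_unexposed          # ~7.9 — the "eight times" number

# Approximate 95% confidence interval for the risk ratio (log-transform method)
se_log_rr = math.sqrt(1/exposed_cases - 1/exposed_total
                      + 1/unexposed_cases - 1/unexposed_total)
low = math.exp(math.log(risk_ratio) - 1.96 * se_log_rr)
high = math.exp(math.log(risk_ratio) + 1.96 * se_log_rr)

print(f"risk ratio ~{risk_ratio:.1f}, 95% CI ~{low:.1f} to {high:.1f}")
# With only 16 exposed participants, the interval runs from roughly 4-fold to
# 16-fold—consistent with Aschwanden's point that this is a hypothesis-generating
# result, not one from which to draw firm conclusions.
```

Swap in different hypothetical non-user rates and the point estimate moves around, but the interval stays wide—the signature of a subgroup too small to settle the question.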
Another challenge for science writers is that reporters, like scientists, cannot be experts in everything. In a recent two-month stretch, for example, New York Times columnist Carl Zimmer wrote about warming oceans, the emerald ash borer (an Asian beetle that attacks ash trees), animal mimicry, the paleo diet, vaccines, a salamander fungus, and the passage of fetal cells to mothers. The trick to covering such broad terrain, Zimmer says, is not to be an expert in each of these areas, but to track down the people who are. That’s not always easy, and many outside experts have their own biases. That’s why a quick call to one authority might not be enough. In complicated stories, Zimmer, who is also national correspondent for Stat, Boston Globe Media's new online science and health publication, might talk to half a dozen sources, even though only one or two appear in the story.
The idea that reporters must use outside sources to help evaluate research might sound obvious. But the remarkable thing is how few do it, especially when science forms a key part of another type of article, say, a political story. Zimmer urges all reporters, whether they are science writers or not, to report out the science when it’s a central factor in a story. Find credible sources to help vet the research. Reporters who don’t have a virtual Rolodex filled with scientists should ask colleagues on the science desk.
Because so many scientific results are provisional, sometimes the challenge lies in explaining to readers what is certain and what is speculative. Kathryn Schulz, a staff writer at The New Yorker, faced that problem in her July 2015 story “The Really Big One,” in which she detailed the massive earthquake predicted for the Pacific Northwest. If the Cascadia subduction zone, the 700-mile fault line running from northern California to near Canada’s Vancouver Island, gives way completely, the resulting earthquake could have a magnitude of up to 9.2, according to seismologists. That would be greater than that of the 2011 earthquake off the coast of Japan, which, with the tsunami that followed, caused such devastation. Researchers agree that an earthquake will occur—Cascadia subduction zone quakes happen on average every 243 years; it’s been 316 years since the last one—but it’s impossible to say exactly when it will happen or precisely what the consequences might be.
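Those two figures—an average recurrence interval of 243 years, and 316 years since the last full rupture—hint at why “when” is so hard to pin down. The sketch below is deliberately simplistic: it treats great quakes as a memoryless process, which real seismic-hazard models do not, so its outputs are illustrative only and should not be read as published hazard estimates.

```python
# Deliberately simplistic sketch: treats great Cascadia quakes as a memoryless
# (Poisson) process, which real seismic-hazard models do not. It uses only the
# 243-year average interval cited in the story; under this assumption, the 316
# years since the last rupture (January 1700) never enters the calculation,
# which is one reason "overdue" is a slippery word.
import math

MEAN_INTERVAL_YEARS = 243.0
rate_per_year = 1.0 / MEAN_INTERVAL_YEARS

def chance_of_quake_within(years: float) -> float:
    """Probability of at least one rupture in the window, under the Poisson assumption."""
    return 1.0 - math.exp(-rate_per_year * years)

for window in (10, 50, 100):
    print(f"next {window} years: ~{chance_of_quake_within(window):.0%}")
# Roughly 4%, 19%, and 34%—illustrative figures only, not the estimates
# seismologists actually publish.
```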
Given the scientific evidence for a quake, Schulz wrote the piece in the future tense rather than the more cautious conditional: “The area of impact will cover some hundred and forty thousand square miles, including Seattle, Tacoma, Portland, Eugene, Salem (the capital city of Oregon), Olympia (the capital of Washington), and some seven million people … Water heaters will fall and smash interior gas lines. Houses that are not bolted to their foundations will slide off … Unmoored on the undulating ground, the homes will begin to collapse.” Things will only get worse when the tsunami hits.
[pq]Kathryn Schulz’s take on a coming earthquake is an article that roared[/pq]
Chris Goldfinger, a seismologist at Oregon State University whose work was featured in the piece, says Schulz may have used a bit of showmanship in her writing (the article won a 2016 National Magazine Award for feature writing) but she got the science right. “Adding a little color and a little humor, as long as you don’t stray from the facts,” Goldfinger says, “I see absolutely nothing wrong with that. It’s a way to get people’s attention, to get people to talk about it. If you write the exact same information in a dry, dusty way, it wouldn’t have gone viral, and it wouldn’t have done as much good as it’s done.”
Schulz’s article generated massive public interest—and distress. Forums were held across the Pacific Northwest about the region’s lack of preparedness, and measures to reverse that neglect—from creating tsunami evacuation routes to retrofitting older buildings so they can withstand high-magnitude earthquakes—are being discussed by policymakers, if not already underway.
Gawande says he tries to address some of the challenges inherent in science journalism by waiting on a story to see whether it holds up. “I’ll tend to sit on it for a while and say, ‘How’s it going to look in three months or six months? Where is this evolving?’ If you’re in short-form reporting, that means circling back on the stories you did six months or a year ago and saying, ‘Okay, now how does it look from that perspective?’ and developing it as a beat. That’s your way around getting caught out in the moment by whatever the current hype is.”
Veteran science journalists quickly learn that decisions about what not to cover are as important as decisions about what to jump on right away. Last September, for example, the journal Nature published a study claiming to have found evidence for infectious transmission of Alzheimer’s disease—a big story, and a frightening one. But Virginia Hughes, science editor at BuzzFeed News, decided not to cover it. “It’s a ridiculous study,” she says. “It was eight people, I think, and very, very speculative. We knew it was going to make huge headlines, and we didn’t cover it.” A day later, Kelly Oakes, science editor at BuzzFeed UK, did decide to do the story, but not in the conventional way. She wrote a story that knocked down the study and the wild headlines it had generated.
The point is that reporters shouldn’t accept anything at face value. “Journalists should be covering the problems in science as a matter of routine,” says Ben Goldacre, a British physician who wrote the Bad Science column in The Guardian for a decade. “A big dramatic new finding on a new treatment? Cover the work showing that early findings tend to overstate benefits. A positive clinical trial from a university that does a lot of clinical trials? Maybe they’ve left less dramatic data unpublished. Find out what proportion of their previous trials has gone unreported.”
Oransky believes journalists should completely change the way they think about published papers. Reporting “shouldn’t stop when a paper is published … You have to treat every paper like a living document. You have to treat every finding as provisional”—especially those that invite big headlines. Adds MIT Knight Science Journalism director Blum: “Science is a process and every study is a data point in that process. You have to figure out where it is in the arc of that process.”
[sidebar style="full"]With reporting by Eryn M. Carlson[/sidebar]