Cancer Research 101: Science and Steamrollers: How Research Stories Can Go Off the Rails

Friday, April 6, 2012


Earlier this week I wrote a post about “personal genomics”. Actually I wrote two posts the same day (part 1 and part 2), but who’s counting? :-)

Short Recap:

In the second article (read it again, here) I talked about a new study announced at the Annual Meeting of the American Association for Cancer Research (AACR) in Chicago (meeting website here) and published simultaneously in the journal Science Translational Medicine. The study, authored by a team at Johns Hopkins in Baltimore and led by the world-famous researcher Dr. Bert Vogelstein, was entitled The Predictive Capacity of Personal Genome Sequencing. Using disease registries from several countries, the researchers looked at data from over 50,000 identical twins and tracked how often one twin or the other developed any of 24 different diseases.

Because identical twins have identical genomes, comparing the outcomes of one twin with those of the other lets you ask: how much do genes predict any increased chance of getting a disease? The authors concluded that most of the twins were at average risk for most of the 24 diseases, much the same as anyone drawn from the general population.

In other words, the authors suggested that widespread use of genome sequencing will likely provide very little useful information for predicting future disease.
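To make that logic concrete, here is a toy simulation of my own (a classic liability-threshold sketch, not the Hopkins team’s actual model; the heritability H, the disease cutoff, and all the numbers are invented purely for illustration). Each simulated twin pair shares a single genetic liability score, but each twin also gets an independent environmental component - which is why, even with identical genomes, most affected twins end up with an unaffected co-twin:

```python
import random

random.seed(42)

# Toy liability-threshold model (illustrative assumptions only):
# twins share one genetic liability; environments are independent;
# disease occurs when total liability exceeds a fixed threshold.
N_PAIRS = 50_000
H = 0.4          # assumed fraction of liability variance that is genetic
THRESHOLD = 1.5  # assumed liability cutoff for developing the disease

both = one = neither = 0
for _ in range(N_PAIRS):
    g = random.gauss(0, H ** 0.5)             # shared genetic liability
    e1 = random.gauss(0, (1 - H) ** 0.5)      # twin 1's environment
    e2 = random.gauss(0, (1 - H) ** 0.5)      # twin 2's environment
    d1, d2 = g + e1 > THRESHOLD, g + e2 > THRESHOLD
    if d1 and d2:
        both += 1
    elif d1 or d2:
        one += 1
    else:
        neither += 1

affected = both + one
print(f"pairs with at least one affected twin: {affected}")
print(f"pairwise concordance among those pairs: {both / affected:.1%}")
```

With these made-up settings, the concordance among affected pairs comes out well below 100%, echoing the study’s basic point: an identical genome does not imply an identical medical future.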

Because of the fanfare around this paper, and not least because of the reputation of the research team in general and Dr. Vogelstein in particular, the study instantly attracted attention in the mainstream media and in the social media universe, especially on Twitter.

One of the first to pick up on this story was Gina Kolata, writing for the New York Times. Ms. Kolata is, to my mind, a seasoned, well-respected and well-known science journalist, and her article in the Times (Study Says DNA’s Power to Predict Illness Is Limited) offered both a recap and this caveat:

“While sequencing the entire DNA of individuals is proving fantastically useful in understanding diseases and finding new treatments, it is not a method that will, for the most part, predict a person’s medical future.”

Another well-known journalist, Robert Bazell, posted an article on MSNBC entitled "Gene tests: Your DNA blueprint may disappoint, scientists say" that carried much the same cautionary message:
“If everyone has a complete gene profile, a small number can learn they have a great risk for something.  But for most, the information is minimally significant.”

I confess that, given the reputations of Dr. Vogelstein and of the mainstream journalists covering this story, I too felt a bit deflated in the moment, and said in my own blog post:
“Bottom line, it seems to me, is that we really have to be more careful than ever about exposing ourselves to privacy, confidentiality, insurability and other legal and ethical dilemmas, especially if the risks might outweigh the gains in many, if not most, cases.”

What Happened Next?

At least I was open-minded (or prescient?) enough to have ended my post with the caveat that:
“Clearly there is no definitive pronouncement to be made one way or the other yet - it is far too early days for that. But it is good to have these debates with our eyes wide open.”

Indeed, as is so often the case, closer inspection with eyes wide open and sober second thought reveals that there is more to this story than meets the eye.

Actually, perhaps it might be better said that there may be “less” to this story than meets the eye...

Initially, I was rather surprised to see a vigorous negative reaction from a number of other journalists and scientists alike, especially in the ‘Twitterverse’, not only to the Vogelstein study but to the media attention it was getting. Some of the critiques explored how the study was flawed, or at least how its conclusions might have been, given the design of the study.

But the main critique that I read loud and clear from several independent sources was essentially that this result was 100% anticipated, and that geneticists and other molecular biologists have been saying as much for some time. In other words, ‘there is no news here’. They were perplexed at why the study had been positioned as some brand-new discovery. Worse, they were very concerned that the mainstream media’s rush to judgment, lacking critical perspective, might seriously set back genomic research by unfairly damning this whole area of research without the benefit of asking many critical (and contrary) questions.

The Other Side of the Story

I’m sure there must have been many more, but I will highlight three excellent pieces that have appeared since the original story broke and the original media attention flourished.

One of the very best was written by Erika Check Hayden in a blog post entitled “DNA has limits, but so does study questioning its value, geneticists say” published in Nature’s Newsblog. In that post she writes that:
 “Geneticists don’t dispute the idea that genes aren’t the only factor that determines whether we get sick; many  of them agree with that point. The problem, geneticists say, is not that the study ... arrived at a false conclusion, but that it arrived at an old, familiar one via questionable methods and is now being portrayed by the media as a new discovery that undermines the value of genetics.”

She went on to list five main critiques, which I will enumerate here; go back to the original article to read the details:
  1. This study critiques the power of genomic medicine but does not contain any genome data. 
  2. This study is beating a dead horse.
  3. The mathematical model used in the study is unrealistic.
  4. The study doesn’t correct for errors that can affect twin studies.
  5. The media coverage of the study could weaken support for genetic research.
To me, another very well written and compelling “rebuttal” was penned by Luke Jostins on the blog “Genomes Unzipped”, in an article entitled “Identical twins usually do not die from the same thing”.

In his post he ponders why “a not particularly original or particularly well done attempt to answer a question that many other people have answered before, got so much press (including a feature in the [New York Times]).”

He goes on to try to answer his own question, and the insight is commendable:

“But of course, the reason is relatively obvious. All of the papers I linked to there are by statistical geneticists ... and never came with a press release or an attempt to talk to the public about them. The message, to those who can read them, is clear and well established – genetic risk prediction (or any form of risk prediction) will never be able to perfectly predict disease incidence, and will never replace diagnostic tests. But the fact that the results of Bert Vogelstein’s study seems to have come as a surprise to people, when it comes as no surprise to us, shows us that we have failed in one of our primary duties to keep the public informed about the results of our research. The paper’s failure as a work of statistical genetics stands in contrast to its success as a work of public outreach. If we are annoyed that a bad paper got the message across, then we should be annoyed with ourselves that we never communicated our own results properly.”

And finally, a blog post yesterday from Paul Raeburn in the Knight Science Journalism Tracker, entitled “What everyone should know about genome scans”, not only provides a very nice summary of the debate but goes one step further, posing questions about the role of the press and of journalists who cover science and research that get exceedingly complex. In some very insightful comments, Mr. Raeburn asks, for example:

“The question here is how reporters might have suspected these criticisms and produced better stories–or how their reporting might have done a better job of uncovering the potential pitfalls of the study. Few reporters are qualified to assess the statistical soundness of the study. But why did they not find out more about this in their reporting? Perhaps some were so interested in the contrarian nature of the story–genomes aren’t all they’re cracked up to be–that they didn’t push hard enough to discover potential problems with the study.

One tip-off was the many stories that have been written questioning the value of commercial genome scans. Reporters should have asked whether the findings were new. That would not necessarily have uncovered the statistical issues, but it might have led reporters to scale back their coverage.”

Sober Second Thoughts?

Paul Raeburn, in the article cited above, concluded:

“If I had covered this story, I fear I, too, would have missed the issues that Hayden presents so clearly. The main lesson I can draw from this is that reporters ought to be as skeptical and vigilant as they can be, especially when writing about subjects, such as this one, that they have written about many times before–enough to have formed opinions that might be getting in their way.”

I myself have posted before about the “good, the bad and the ugly” of public engagement in research. Science can never again be an ivory-tower exercise - much of the research, including the study in question, is done at public expense, whether through taxpayers’ dollars or charitable donations. The public has a right to be informed about how their dollars are used, and researchers have a responsibility and an accountability to inform them. Very often, it is science journalists, health reporters, broadcasters and the like who play a central and trusted role smack in the middle, as the critical conduit to the public that makes sure this happens.

But they have to get it right if they are to hold that public trust.

Still, as Mr. Raeburn said, how many reporters have the qualifications and expertise to really dissect increasingly specialized science and increasingly complex data sets? I *HAVE* some qualifications, and I certainly can NOT keep up with, nor even understand, much of the highly complex, jargon-filled science that even I try to write about.

So, while it is easy to criticize the reporters who may have rushed to judgment and perhaps over-sensationalized what for many is a non-story, on balance one surely also has to hold accountable the scientists themselves. From what I can surmise, they may have allowed this steamroller to start rolling down the hill - and may even have given it quite a little push to get it going in the first place, whether intentionally or not.


