Cancer Research 101

Tuesday, April 3, 2012

Sloppy Science? System Failure? Or...?

This week, somewhere between 15,000 and 20,000 of the world’s top cancer researchers, from PhD students all the way to Nobel laureates, are gathering in Chicago for the Annual Meeting of the American Association for Cancer Research (AACR). Although I am not attending this year, if the current program resembles those of years gone by, you can be sure that there will be hundreds, if not thousands, of scientific presentations describing potential new targets for cancer therapy. Some of these will be in huge plenary sessions, some will be oral talks in mini-symposia and many will be presented in concurrent poster sessions.

Why? As we learn more and more about what makes cancer cells tick, we are discovering more and more pathways that are implicated in the origin or the continuation of the cancer “state”. Every molecule that gets identified as being part of one of these pathways is, at least potentially, a target for intervention – either to turn it off (if it is implicated in the initiation or progression of cancers), or to turn it on (if it is implicated in some “protective” mechanisms against cancers), and so on.

And every one of these studies could be important in its own right since each one adds to our burgeoning understanding of the molecular basis of cancers.

And some of these might turn out to be even more important if the so-called “target” can be validated as really being involved in cancer causation (as opposed to being an incidental bystander).

But the real “jackpot” comes when one of these targets is not only validated as being important and centrally involved in one cancer or another, but is also considered to be a so-called “druggable” target. By that, we mean that we expect to be able to discover or develop a drug, usually a small molecule or an antibody, that can then interfere with, or in some other way modulate, the cancer state and thus be an effective anti-cancer therapeutic.

The success of this model has been evidenced by drugs like Imatinib (Gleevec), a small molecule drug, or Trastuzumab (Herceptin), a monoclonal antibody. The search for the next “druggable targets” and the subsequent discovery of the next Gleevec or the next Herceptin continues to drive preclinical laboratory-based research the world over, since those avenues are where many of the next cancer therapeutics are expected to originate. 

Undoubtedly, only a small number of these putative targets will actually traverse that magic line from being a preclinical “observation” to actually being of demonstrated clinical utility. This is the realm of so-called “translational research” - to translate, or move forward, research from the lab so it can end up in the clinic for the care and treatment of real patients in the real world.

That path is often a long and arduous one, mind you, fraught with frustration, but every long journey starts with a single step, as they say. Still, there will be palpable excitement as more and more of these potential targets are described, understood and tested for clinical utility.

Last week, however, a bit of a bucket of cold water was thrown on a number of highly touted studies that presumably had shown great promise of such translation into the clinic.

In a Commentary entitled “Drug development: Raise standards for preclinical cancer research”, published in the respected journal Nature on March 29, 2012, authors C. Glenn Begley and Lee M. Ellis reported that, sadly, not only have the vast majority of such studies NOT resulted in translation into the clinic, but worse, they said, reputable scientists working at pharmaceutical or biotech companies have not been able to replicate most of the results that had been lauded at one time as potential “breakthroughs” (italics mine).

In total, they reported that the findings of at least 47 out of 53 publications – all from reputable researchers and published in reputable peer-reviewed scientific journals – could not be replicated during the time that one of the authors (Dr. Begley) was head of research at the biotech company Amgen. That is a failure rate of nearly 90%.

This rather shocking finding prompted the authors to make some specific recommendations to try to ensure that this situation did not persist.

And it prompted an Editorial in the same issue of Nature, entitled “Must Try Harder”, which opined that “too many sloppy mistakes are creeping into scientific papers. Lab heads must look more rigorously at the data — and at themselves”.

The Editorial went on to say:


[This] "Comment article ... exposes one possible impact of such carelessness. Glenn Begley and Lee Ellis analyse the low number of cancer-research studies that have been converted into clinical success, and conclude that a major factor is the overall poor quality of published preclinical data. A warning sign, they say, should be the “shocking” number of research papers in the field for which the main findings could not be reproduced. To be clear, this is not fraud — and there can be legitimate technical reasons why basic research findings do not stand up in clinical work. But the overall impression the article leaves is of insufficient thoroughness in the way that too many researchers present their data."

Please do note that, as the editorial says, no one is suspecting, suggesting, or alleging any fraudulent behaviour. And indeed there are many potentially legitimate explanations for why not all results can be reproduced. But the publication of this Commentary and the accompanying Editorial has certainly ignited a firestorm of subsequent comments, newspaper articles, blog posts and Twitter activity.

I found one online response to the Nature Editorial to be particularly telling, especially since it came from a friend and colleague whose opinions I respect immensely. Dr. Jim Woodgett, Director of Research at Toronto’s famed Samuel Lunenfeld Research Institute at Mount Sinai Hospital wrote:

"The issue with inaccuracies in scientific publication seems not to be major fraud (which should be correctable) but a level of irresponsibility. When we publish our studies in mouse models, we are encouraged to extrapolate to human relevance. This is almost a requirement of some funding agencies and certainly a pressure from the press in reporting research progress. When will this enter the clinic? The problem is an obvious one. If the scientific (most notably, biomedical community) does not take ownership of the problem, then we will be held to account. If we break the "contract" with the funders (a.k.a. tax payers), we will lose not only credibility but also funding. There is no easy solution. Penalties are difficult to enforce due to the very nature of research uncertainties. But peer pressure is surely a powerful tool. We know other scientists with poor reputations (largely because their mistakes are cumulative) but we don't challenge them. Until we realize that doing nothing makes us complicit in the poor behaviour of others, the situation will only get worse. Moreover, this is also a strong justification for fundamental research since many of the basic principles upon which our assumptions are based are incomplete, erroneous or have missing data. Building only on solid foundations was a principle understood by the ancient Greeks and Egyptians yet we are building castles on the equivalent of swampland. No wonder clinical translation fails so often."

As someone who ran the research operations of two major Canadian national cancer research funding agencies over the past two decades, I wonder if my own organizations have inadvertently been “complicit” in this. We always tried our very best not to “over-hype” any results from investigators we funded, but there is always a need, especially in a national health charity, to “excite” the public and the prospective donor, and to be accountable to previous donors by showcasing for them any success their generosity has won.

Perhaps we all need to take a closer look at the pressures we place on researchers globally to “publish or perish”. Are our incentives and the way we measure “success” all wrong? 

Perhaps, indeed, it is long overdue that we take a very hard look at how we conceive, fund, undertake, promote and analyse cancer research results, and how and what we value in cancer research and in cancer researchers.


Tuesday, March 20, 2012

Personalized Medicine [Part 2] - Time for a Reality Check?

Despite the enormous promise of personalized or precision medicine coming from the genomics era, I think we need to collectively take a deep breath and also ponder the reality of just how far this new technology can take us.

Without in any way diminishing the huge potential of the "$1000 genome" era, I think there are at least two important areas where we need to do a reality check.

The first of these I have already written about - the need for a debate in society about how we want to view privacy and confidentiality, and how we are going to deal with the influx of personal, genetic information that could overwhelm and confuse us despite good intentions to the contrary.

The second area stems (no pun intended) from the reality that cancer is, at its heart, a set of diseases marked by tremendous genetic instability. The reason so many cancers are hard to treat is that every time you think you have one pinned down, it morphs into something a bit different.

For example, when a number of the first Gleevec patients started to relapse, the sound of people jumping OFF the bandwagon was an audible thud. Skeptics said "see, we knew it couldn't really work so easily!" Subsequent studies showed, however, that Gleevec indeed worked exactly as advertised, but in the interim, the cancers had “evolved” – they developed some secondary mutations that essentially allowed the Gleevec roadblock to be bypassed. If you put roadblocks up on the main highways, cancer will find a way to take a side road to get out of town. If you block the side roads, cancer often will find some other route.

So, the advent of an international consortium as powerful as the ICGC, coupled with the fact that gene sequencing costs are spiralling downward, leads us logically to anticipate a new era of personalized and precision medicine. The idea is out there that if every patient’s tumour could be biopsied and his/her cancer genome sequenced, so that we could determine and understand the underlying genetic defects, then we would be able to choose a tailored therapeutic regimen to treat that patient and his/her cancer in a more targeted way than ever before possible.

But that kind of future scenario depends not only on “cheap” sequencing technologies and an enormous database of mutations associated with cancers (both of which are now, or soon will be, within our reach), but also, at least in part, on one other crucial factor. If we do a biopsy on a patient’s cancer, are we confident that what we will learn will be sufficient to give us the depth and detail of understanding we need to put this therapeutic precision and personalization to the test?

As is so often the case with cancers, the answer is, maybe...

Why the hedge? Because we haven’t yet fully accounted for the idea that tumours are undoubtedly NOT homogeneous, that is, they do not have a uniform structure or character. There may well be many different types of cancer cells even in a single patient’s cancer. We call this “tumour heterogeneity”, which in simple terms means that the tumour may be a “dog’s breakfast” of different kinds of cells and different kinds of mutations.

As Dr. Dan Longo wrote in an editorial entitled “Tumor Heterogeneity and Personalized Medicine” in the March 8, 2012 issue of the New England Journal of Medicine:

“A new world has been anticipated in which patients will undergo a needle biopsy of a tumor in the outpatient clinic, and a little while later, an active treatment will be devised for each patient on the basis of the distinctive genetic characteristics of the tumor,” he wrote.  “But a serious flaw in the imagined future of oncology is its underestimation of tumor heterogeneity.”

This “complication” came to the fore earlier this month with the publication of a very important study, entitled “Intratumor Heterogeneity and Branched Evolution Revealed by Multiregion Sequencing”, published in the same issue of the New England Journal of Medicine.

That’s a very technical title, and indeed a very specialized and technical paper, but the bottom line is this: a team of researchers led by Drs. Marco Gerlinger and Charles Swanton from London, UK found an astonishing degree of genetic variation in biopsies from the same tumour from the same patient. Multiple biopsies taken from single patients with kidney cancer (renal carcinoma) showed many different mutations in each biopsy, and not all of them showed up in all of the biopsies. In fact, the majority (over 60%) of the mutations did NOT show up across all of the biopsies.
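To make the arithmetic behind that 60% figure concrete, here is a minimal sketch in Python of the kind of tally involved. The mutation lists are invented for illustration (the gene names are my assumptions, not the study's actual calls): each biopsy is treated as a set of mutations, and we ask what fraction fails to appear in every region.

```python
# Hypothetical per-biopsy mutation calls; the gene names are illustrative
# renal-carcinoma genes, NOT data from the Gerlinger/Swanton study.
biopsies = {
    "region_1": {"VHL", "SETD2", "MTOR", "KDM5C"},
    "region_2": {"VHL", "SETD2", "PTEN"},
    "region_3": {"VHL", "BAP1", "PIK3CA"},
}

# "Ubiquitous" mutations appear in every region sampled (the trunk of the
# tumour's evolutionary tree); everything else is regionally confined.
ubiquitous = set.intersection(*biopsies.values())
all_mutations = set.union(*biopsies.values())
heterogeneous_fraction = 1 - len(ubiquitous) / len(all_mutations)

print("Ubiquitous mutations:", sorted(ubiquitous))
print(f"Fraction NOT shared across all biopsies: {heterogeneous_fraction:.0%}")
```

With these made-up sets, only VHL is found everywhere, so roughly 86% of the mutations would be missed or mis-weighted by any single biopsy - the same flavour of result the paper reports.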

Even worse, the researchers found that the mutations and gene “signatures” found in one region of the tumour were consistent with what we would currently have said is a good prognosis, whereas gene “signatures” found in a different part of the very same tumour were consistent with what we would have expected to be a poor prognosis!

This study, if typical of other tumours, suggests that a single, minimally invasive biopsy of a limited region of a tumour might NOT be at all sufficient to proceed with a very targeted therapeutic regimen. What if we targeted treatment to the wrong cells, cells that maybe by chance represented only 10% of the tumour? What if we chose not to treat aggressively based on an ostensibly great prognosis from the biopsied material, only to find out later, to our detriment, that we were fooled by a “sampling error” of lamentable proportions?
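To get a feel for how such a “sampling error” can arise, here is a toy Monte Carlo sketch in Python. Everything in it is an assumption made for illustration (the region counts and the spatial confinement of the subclone are mine, not numbers from the study), but it shows how a single-region biopsy systematically misses spatially confined mutations:

```python
import random

# Toy model: the tumour has N_REGIONS genetically distinct regions, and a
# poor-prognosis subclone is confined to a few of them. A needle biopsy
# samples exactly one region. How often does it miss the subclone?
N_REGIONS = 10        # assumed number of distinct tumour regions
SUBCLONE_REGIONS = 2  # assumed regions harbouring the aggressive subclone
TRIALS = 100_000

missed = 0
for _ in range(TRIALS):
    # The subclone occupies SUBCLONE_REGIONS randomly chosen regions...
    hot = set(random.sample(range(N_REGIONS), SUBCLONE_REGIONS))
    # ...and a single biopsy samples one region at random.
    if random.randrange(N_REGIONS) not in hot:
        missed += 1

print(f"Single biopsies that missed the subclone: {missed / TRIALS:.0%}")
# Expected: (N_REGIONS - SUBCLONE_REGIONS) / N_REGIONS = 80% under these
# assumptions -- the biopsy usually reports only the "good prognosis" cells.
```

Under these assumed numbers, four biopsies out of five would report a falsely reassuring picture, which is exactly the worry raised above.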

So, bottom line, looking at both sides of the coin of "personalized medicine" (i.e., this post and the previous post), what does this all mean?

Are genomics, DNA sequencing and the building of mutation databases of enormous proportions going to lead us single-handedly to the Holy Grail of cancer treatments? Hardly.

Does the Swanton et al. study on genetic variation in kidney cancers mean that we are wasting our time with the pursuit of genomics and precision cancer therapies? Again, hardly.

Like all things cancer, black-and-white approaches are simply not the way to go. This may be a bump in the road, as some have suggested, but if it is, it is not the end of the road by any means. We will learn some breathtaking insights from genomics, but it will be only one powerful tool in the arsenal, not the whole answer.

As one blogger eloquently put it in describing the kidney cancer study (Jessica Wapner, March 9, 2012, in a Public Library of Science blog):

“It’s for this reason that the idea of personalized medicine—and here we are talking specifically about drugs targeted against the genetic make-up of an individual cancer, not about a whole-person regimen for life based on your personal DNA quirks—is one that has to be held with a long-view. It took decades for the first useful chemotherapy drug to be discovered. If we absorb the notion that targeted therapy is still in its nascent stage, then this new study isn’t a bump in the road, but rather another description of the scenery.”


Thursday, January 12, 2012

Picking The Lock…

Most everyone has heard the old adage that the three most important things in real estate are "location, location, location".  If you were to use a similar approach for cancer therapeutics it might be "specificity, specificity, specificity".  To my mind, 'specificity' may be the single most important attribute for any cancer therapeutic to be maximally effective, and therefore the search for absolute specificity is in many ways the Holy Grail of cancer research.

Why do I say this?  To start with, there is a myth in the public's mind that it is difficult to kill cancer cells.  Frankly, this is nonsense.  Generally speaking, it is very easy to kill cancer cells.  What is difficult is killing ONLY cancer cells and leaving normal cells unscathed. This is where, by and large, cancer treatments of the past have failed us. 

But aren’t anti-cancer drugs, by their very name and nature, anti-“cancer” drugs? This is where a second misconception enters the fray: that most chemotherapeutic agents have been specific anticancer drugs. Actually, most of the “classical” chemotherapy agents have in fact been drugs that interfere with cell division rather than being anticancer drugs per se. These drugs of the last generation target rapidly dividing cells, not necessarily only cancer cells.

While it is very true that most cancers are composed of rapidly growing cells, unfortunately they are not the only cells in the body that divide rapidly. For example, for those who, unlike myself, are not follicularly challenged <smile>, your hair cells divide rapidly and are replenished quickly. The cells in your digestive system and your gut are being replaced at a very fast pace. And the cells that populate your blood system are also dividing quickly and constantly to provide a (usually) never-ending supply of blood cells of all sorts.

By now, you can probably see where I'm going with this. What are the major side effects that we usually associate with chemotherapy? Your hair falls out, you get sick to your stomach, and more often than not you get anemic. That's because the normally rapidly-dividing cells in your hair, your gut and your bloodstream are also under attack. The chemotherapeutic agents interfere with their rapid division in much the same way they interfere with the rapid cell division of cancer cells.

So the trick is to discover and develop treatments that recognize truly unique properties of cancer cells, i.e., properties that are not shared with non-cancer cells. Simply targeting rapidly dividing cells is no longer adequate (not that it ever was...). We need to discover better signposts that define and identify cancer cells as opposed to normal cells. We need to find new ways to make cancer cells stand out from the crowd, ways that make cancer cells scream out at us "I am the cancer cell. Don't waste your time with those other normal cells. Take me!"
 
Look at the two accompanying pictures: I like to think of this as the old barn door analogy. No longer is it acceptable to take a scatter-shot at the side of the barn in the hope of hitting the barn door. Now we want to go in and pick the lock...


I doubt that very many of my cancer research colleagues would appreciate being called the next generation of lock-pickers, but in one very real sense that's exactly what they are! The more specific, the more targeted and the more selective we can make our future cancer therapeutics, the better the treatment, the better the outcome, and the better the quality of life for patients during and after treatment.

This notion of targeting and specificity will be a constant thread throughout many of the posts to follow. You've all heard by now, I am sure, of the notion of "personalized medicine" or a related term "precision medicine".  This is a very important part of the whole notion of attaining maximum specificity in the treatment of cancers of all types.
