
Understanding the Complexities of Cancer and Progress in Personalized Medicine

March 14, 2012

Personalized cancer care holds much promise; it gives us confidence that targeted treatments will eliminate cancers and spare us the one-size-fits-all cancer treatments of the past. For the sake of simplicity, we want to think of personalized cancer care as a lock and key: each tumor presents a specific lock, and once the matching key is identified, the cancer melts away. This simple message of “lock and key” is comforting because it suggests that we can achieve a new reality in which we assess a tumor and choose a treatment in a very simple way, based on the tumor’s genetic makeup.

We have many exciting new targeted cancer treatments in our toolbox – consider crizotinib (Xalkori®) for non-small cell lung cancer and vemurafenib (Zelboraf®) for BRAF mutation-positive metastatic melanoma. These therapies have been effective in patients with specific gene mutations. At the same time, as the science advances, research continues to reveal the underlying complexities of cancer and the diversity of tumors.

In the March 8 issue of The New England Journal of Medicine, the authors of “Intratumor Heterogeneity and Branched Evolution Revealed by Multiregion Sequencing” provide an exquisite roadmap of how heterogeneous and complex one individual’s cancer can be, with many mutations lurking within each tumor mass, and the mutations themselves evolving over time. The picture presented by Gerlinger and colleagues suggests the need for many keys to a myriad of locks, each tailored for a particular tumor at a particular point in time. The simple message of lock-and-key personalized medicine, in which simple one-to-one decisions can be made, does not hold true.

This may raise concern that the personalized medicine approach has created a false sense of hope. I don’t think it is false hope, but rather a false sense of simplicity.

There is hope. While a single needle biopsy may not provide a complete picture of a cancer, biopsies are still a crucial part of characterizing tumors, applying growing scientific knowledge to understand which treatments might work best. Biopsies are a starting point. A number of institutions are conducting genetic and genomic analyses on tumors. This is probably not the Holy Grail, but rather an iterative step in a long trail of scientific advances that will support the personalization of cancer medicine.

In addition, new technologies are needed to improve on the information gathered, including methods to understand whole tumor heterogeneity. If we believe a personalized approach is simple, we are at risk of not building the systems needed to deal with the deluge of complex information that will need to be sorted, understood, and applied clinically in real time.

We need both the science and the information technology solutions to keep improving. New understandings highlight new opportunities to target more central cancer mutations, and to create more thoughtful treatment cocktails.

The underlying message here is that although we have made great progress, we are still early in our journey. In reality, this is a very comforting message. It would be disheartening if we had already exhausted the targeted treatment toolbox and that the best we could achieve is delayed cancer progression but not cures. By embracing the complexity of more individualized information for any particular patient and tumor, we really do have the hope of achieving personalized cancer care.

As we uncover new and unimagined pathways during the scientific discovery process, we should not be deterred or disheartened. We just aren’t there yet. We need to ensure that public policies support the continued investment of energy, attention to detail and resources required to achieve a vision for personalized cancer care.

How Personalized Medicine is Changing Cancer Care – A First-Hand View

June 28, 2011

I care for people with melanoma. To give you an idea how personalized medicine is impacting care, take the example of a patient I treated at Duke Comprehensive Cancer Center 18 months ago – let’s call her Sarah.

Sarah, a 37-year-old nurse, and her husband were trying to start a family. Like many melanoma patients, she had fair skin and red hair. Her mother had died of melanoma. She came to me after her surgeon had removed a small black-pigmented skin growth and lymph nodes under her arm. After a complete review of her clinical story, we diagnosed her cancer as Stage IIIB (“3B”) melanoma. Matching cancer statistics to Sarah’s disease tells us that she had about a 50% chance of surviving 5 years. To maximize her chances of long-term survival without melanoma, Sarah and I were trying to decide on a treatment and care plan that would reduce the chance of the melanoma returning; this is called adjuvant treatment. We were considering whether an adjuvant treatment plan was right for her, and if so, which medicine, for how long, and with what personal impact on her life.

At this point we have one chance to select the right treatment plan, because melanoma that returns is more difficult to treat and rarely goes away for good. You go to the “cookbook,” look up the treatment guidelines, and try to balance the standard of care with what you know about the individual needs and preferences of the patient. In Sarah’s case, finding a balance between effective adjuvant treatment, preserving her ability to have children, and helping her maintain her job were important considerations. The treatment guideline advised interferon, a clinical trial, or no treatment, but we had few clues as to the best treatment path and certainly no information about fertility or the influence of Sarah’s genetic makeup on her ability to derive benefit from treatment.

In the year and a half since I treated Sarah, new tests have become available to predict risk and learn more about each patient’s specific tumor. Molecular descriptors of the cancer and striking radiological images provide important clues. The information available to us is getting better and better, which makes it more likely that we will hit the mark in the one shot we have to find the right treatment. In the future, I anticipate that other factors such as Sarah’s heritage, her symptom profile, environmental exposures, and personal values will also be incorporated into the decision-making process. Sarah’s case reminds us that these are real people making real and exceptionally critical decisions; we need to harness all available information to maximize the chance that we make the right decision for Sarah and patients like her.

At the same time this growing wealth of information brings new challenges for physicians. The “cookbook” does not work well as we gather more detailed information about potential risks, likely side effects, and benefits of different treatment combinations. 

Physicians care about getting these decisions right for each patient. The current “cookbook”, though, is one-size-fits-all. We need information science and technology (IT) to help match guidelines, and the predictive mathematical models informing patient care, with the new information available about individual patients and the type of tumor they have. And it has to happen at the point of care, when we need it. The amount of information can be overwhelming. Evolving data systems and IT infrastructure that are reliable and trustworthy will be key to making the most of the information available. Along with this, we simply need more time with patients. We often have about 8 to 15 minutes to make a decision with the patient that will affect their entire future; that is not what patient-centered care and personalized medicine are all about.
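To make this concrete, a point-of-care decision-support service might match a patient’s characteristics and preferences against structured guideline rules. The sketch below is purely illustrative – the field names, rules, and cautions are hypothetical, not drawn from any real melanoma guideline:

```python
from dataclasses import dataclass, field

@dataclass
class Patient:
    stage: str                                     # e.g. "IIIB"
    mutations: set = field(default_factory=set)
    preferences: set = field(default_factory=set)  # e.g. {"preserve_fertility"}

# Hypothetical structured guideline rules: each maps eligibility
# criteria (stages) to a candidate adjuvant option, with cautions
# keyed to patient preferences that should trigger a discussion.
GUIDELINE_RULES = [
    {"stages": {"IIIA", "IIIB", "IIIC"},
     "option": "interferon",
     "cautions": {"preserve_fertility": "discuss fertility preservation first"}},
    {"stages": {"IIIB", "IIIC", "IV"},
     "option": "clinical trial",
     "cautions": {}},
]

def candidate_options(patient: Patient) -> list:
    """Return guideline options applicable to this patient,
    annotated with any cautions raised by personal preferences."""
    results = []
    for rule in GUIDELINE_RULES:
        if patient.stage in rule["stages"]:
            cautions = [msg for pref, msg in rule["cautions"].items()
                        if pref in patient.preferences]
            results.append((rule["option"], cautions))
    return results

# A patient like Sarah: Stage IIIB, hoping to start a family.
sarah = Patient(stage="IIIB", preferences={"preserve_fertility"})
for option, cautions in candidate_options(sarah):
    print(option, cautions)
```

The point of the sketch is the structure, not the medicine: when guidelines live as data rather than prose, patient-specific concerns such as fertility can be surfaced automatically at the point of care instead of depending on recall during a 15-minute visit.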

Someday I hope that when a patient like Sarah comes in I can look up her exact type of tumor, incorporate her personal family history and other medical information into the story, see what treatments she is likely to respond to, understand the potential risks, and balance her personal concerns like maintaining her fertility. Personalized medicine is advancing quickly, but we need information systems to support good decision-making for physicians and patients as the information available continues to expand exponentially.

Personalized Medicine, Comparative Effectiveness Research, and the Human Genome Project

July 9, 2010

A monumental achievement – the release, in 2000, of a draft of the human genome – is now being celebrated at its 10th anniversary.  In the same decade, personalized medicine gained traction as the wave of the future for clinical practice, and comparative effectiveness research (CER) emerged as the research paradigm to shape US healthcare policy, delivery, and reform.  But personalized medicine is also, clearly, about research; a central purpose is to use genomic mapping and associations to identify the right intervention for the unique individual patient.  And CER is also, clearly, about patient care; its studies conducted in “real-world” contexts are designed to improve understanding of treatments and outcomes for individual patients with like characteristics.  Are personalized medicine and CER separate approaches or competitors?

Neither. Rather than adversaries or parallel silos, these two movements are happy bedfellows sharing common goals and intentions, and contributing synergistic approaches and methodologies. Both march to a common mantra: “the right intervention for the right patient at the right time.” Both, while heavily steeped in research and the development of translational methodology, are at heart concerned with clinical practice; their purpose is to generate information for doctors and patients that can guide decisions about an individual’s care. Both seek to advance science and its methods. In personalized medicine, attention has focused on describing the precise make-up of the individual and matching treatment to individual characteristics; CER has advanced data interoperability and the use of aggregated data to allow ever more finely tuned understanding of outcomes. The Human Genome Project unites them in the common quest to use genomic profiles and associations to understand the health of populations and of individuals within them.

Both personalized medicine and CER require health information technology (HIT) and interoperable data, with access to massive interlinked datasets, to explore outcomes for individuals and populations.  Both require that HIT enable continuous expansion and refinement of those datasets, so that we can incrementally improve prediction, individualized targeting of care, and effectiveness of interventions.   

Continuing to view personalized medicine and CER as distinct paradigms reflects a 20th-century, pre-Human Genome Project world view. In the 21st century, with the vast power of genomic medicine lying ahead, and with HIT and interoperable data as tools to support us, we have the opportunity to transition to a framework of personalized CER. Our government has heralded this new understanding; personalized medicine, CER, and HIT are all embedded in US healthcare reform. At this critical juncture, we must cultivate the vision of a unified story, one that moves us as a nation toward personalized CER, enabled through and leveraging the results of national investments like the Human Genome Project.

Health IT to the Rescue: Managing Data in the Age of Genomics

June 17, 2010

The 10th anniversary of the draft human genome, released by the Human Genome Project in 2000, is a milestone for personalized medicine. Our mantra – “get the right intervention to the right patient at the right time” – all but mandates the roll-out of genomic information in clinical practice. As we come closer to the goal of the $1,000 genome, I can now imagine a world in which an individual genomic profile allows us to tailor cancer treatment to a patient’s personal situation. In the next decade, genomics will provide us with the opportunity to refine treatment planning so that we use drugs when they are going to work, spare patients unnecessary side effects, and avoid wasting precious time, emotional resources, or funds on drugs that are unlikely to be effective. Even intimately personal decisions, such as how to preserve fertility, can be elucidated by and based on genomic data mixed with an understanding of effectiveness and toxic risk.

Coexisting with excitement at the possibilities of genomically-guided personalized medicine is a pervasive angst. The profusion of new information can be daunting. How will I know all of the relevant inputs into decision-making in the era of personalized medicine? How will I balance multiple important factors for each patient, without a roadmap or algorithm for this new type of clinical decision-making? In personalized medicine, when treatment choices rely on unique genomic data for each patient, the quantity of potential data points to be factored into any single clinical decision boggles the mind. How can I intelligently coordinate and consider all of this data?

Recent progress in health information technology (HIT) and in advancing our country’s data infrastructure provides hope that technology may come to the rescue, saving us from a morass of data and helping us make sense of the new plethora of information. The Human Genome Project yielded vast amounts of data; its completion required the development of interoperable data, novel statistical methods, and new HIT systems. These same tools can also help us use genomic information, as well as rapidly increasing bodies of clinical and research evidence, to inform decision-making. Genomically-guided biomarkers and predictive tests will help generate personalized information, but tools will be needed to help clinicians understand and use the resulting data, integrated with myriad other personal data types like blood chemistries, clinical exam findings, pre-existing toxic exposures over a lifetime, and patient-reported concerns. The development of personalized clinical decision support tools and prediction models, tailored and designed for efficient use at the point of care, will assist us in connecting the dots between the promise of the Human Genome Project and the vision of personalized medicine. May this 10th anniversary energize us to move from theory to action, and to strive for ever more finely individualized care that optimizes outcomes for our patients.

Harnessing the Power of Health IT in a New Era of Translational Research

March 8, 2010

In the final years of the 20th century and the first decade of the 21st, tremendous progress has been made toward bridging a recognized chasm between science and the real world – specifically in medicine, between biomedical research and its application in healthcare. Three identified “blocks” to translation have impeded the use of research findings to better the lot of our patients: T1, the translation of laboratory findings to clinical care; T2, the application of best evidence identified during T1 to everyday clinical care; and T3, the wider generalization of research findings to improve the health of the community and, more broadly, the public. The recent deluge of funding for comparative effectiveness research (CER) represents, in large part, an attempt to conquer T2 and T3. T1, however, persists and presents a fundamental impediment to personalized medicine.

To overcome T1, and to transfer T1 knowledge to T2, will require true integration of the clinical and research spheres – an integration that necessitates bidirectional information flow from the patient and physician in the clinic to the research scientist and back again, in an iterative cycle of hypothesis, question, answer, and testing of that answer in the real-world setting. To support this sort of information exchange, we will need: new coordinated health information technology (HIT) systems that span former “silos” in the biomedical community, and that can collect and manage large volumes of disparate and heterogeneous data; culture change that engages clinicians and researchers in a common mission of inquiry to improve care; communication channels that fuel hypothesis generation and support the translation of research findings into changes in clinical practice; and decision support mechanisms that help clinicians leverage the power of large-scale aggregated data to improve care for the individual patient. In short, we need a new model of care, one that harnesses the potential of HIT and integrated clinical/research data to dismantle the T1 block. The purpose, fundamentally, of such a model will be to enable personalized medicine.

In advancing “rapid learning healthcare,” the Institute of Medicine has spearheaded the development of a new healthcare paradigm in which personalized medicine could become a reality.  Efforts are underway to develop this paradigm and its prerequisites.  As one such example, the Cancer Biomedical Informatics Grid (caBIG®) championed by the National Cancer Institute has tackled the development of an infrastructure promoting large-scale data interoperability spanning the data type boundaries from the basic sciences to clinical care and the patient report.  As we seek to match novel therapeutics and trials to patients, and to personalize care using individually relevant information, critical steps will be: (1) providing access to data, (2) generating data, and (3) making sense of the data.  Making sense of data needs to be facilitated at the levels of basic science (to guide translation of in silico research results into clinical practice change and further discovery), the population (to allow CER to guide health services decisions and policy), and the individual patient (to enable personalized medicine).  New data generated in any of these steps should be reinvested in the system to iteratively update the knowledge base.
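The three steps above – accessing data, generating data, and making sense of it – together with the reinvestment of new data into the knowledge base, form a loop. The toy sketch below illustrates that cycle; the data structures and the summary statistic are hypothetical illustrations, not a real caBIG® interface:

```python
# Toy model of a "rapid learning" loop: each episode of care
# contributes outcome data back to a shared knowledge base,
# which in turn informs the next decision.
knowledge_base = {}   # treatment -> list of observed outcomes (1/0)

def access(treatment):
    """Step 1: access what is known about a treatment so far
    (here, a simple observed response rate)."""
    outcomes = knowledge_base.get(treatment, [])
    return sum(outcomes) / len(outcomes) if outcomes else None

def generate(treatment, outcome):
    """Step 2: generate new data from an episode of care and
    reinvest it in the system."""
    knowledge_base.setdefault(treatment, []).append(outcome)

def make_sense():
    """Step 3: make sense of the aggregated data across all
    treatments, updating the evidence available to clinicians."""
    return {t: sum(o) / len(o) for t, o in knowledge_base.items()}

# Each patient's outcome feeds the next iteration of the cycle:
generate("treatment_a", 1)   # 1 = responded
generate("treatment_a", 0)   # 0 = did not respond
generate("treatment_b", 1)

print(make_sense())          # the updated evidence base
```

However simplified, the sketch captures the essential design choice: data generated at any step is not discarded after the individual decision but flows back into the shared knowledge base, so the system learns iteratively.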

The caBIG® experience has taught us that just having access to better HIT does not, in itself, advance personalized medicine.  Though a powerful tool, HIT alone is not enough to bulldoze the translation blocks.  Why?  Because healthcare is not a purely technical matter; rather, it is a human system, fundamentally dependent upon human understanding, acceptance, and behavior.  All of these must change in order to transform information flow through HIT, implement a new data-driven model of healthcare, and thus realize the vision of personalized medicine.  The individual stakeholders in medicine need to be aligned behind the new vision – through incentives to participation that speak to each one.  First, the new model needs to be structured, and to function, so that the HIT makes sense to real human beings using the system (clinicians, staff, patients, administrators, clinical researchers, basic scientists); HIT must represent “value added” to the existing system from the perspective of each stakeholder.  Second, interoperable data must be generated, so that the system has “grist for the mill” of inquiry; we have to start somewhere and someone needs to be encouraged to put their first big toe in the water — there is nothing like a “big story” to bring along the naysayers.  Third, to build confidence in the approach, we must make sure that privacy, confidentiality and the sanctity of personal health information are preserved. And fourth, novel ways to make sense of ever-growing databanks need to be developed; these methods may include new approaches for visualization, decision support systems, Bayesian and other branched analytic approaches, CER, and in silico research.  Current efforts focus on generating interoperable data (the middle step), but neglect to create systems that make sense of the data and promote its use, or that provide a structure and an engine to produce the data.

Finally, and, in my mind, most importantly, a critical next step is to define a reorganization of medicine at the point of care.  The new model must fully utilize available data and linked datasets, and must help clinicians understand the data and apply it in tailoring care to their individual patients.  If it makes sense to them, clinicians and patients will drive this.  In the background, interdigitated with the growing body of clinical experiences captured in linked clinical/research databases, will be the robust evidence base comprising published results of basic science, clinical research, and translational studies.  The resulting combination creates a system in which each patient’s care is guided by personal history and characteristics, the experiences of similar patients included in local and massive national longitudinal datasets, and the historical evidence base constituting an up-to-date state of the science.  Our challenge, today, is to develop this system beginning at the point of care, with the patient.

