Archive for the ‘Health information technology’ Category

Balancing the Need for Guidance, Communications, and Education to Support Innovation in Personalized Medicine Diagnostics

May 13, 2013

Recently, I had the opportunity to moderate a thought-provoking panel at the PMC/BIO Solutions Summit. The summit brought together key stakeholders to discuss solutions to barriers in the development of innovative personalized medicine diagnostics. A big question for those developing potentially game-changing technologies in an increasingly cost-conscious environment concerns “Evidentiary Standards and Data Requirements for Payer Coverage.”

Determining the data requirements for coverage is becoming an increasingly frustrating issue for diagnostics manufacturers, which face rising demands for evidence but continued lack of clarity about payer standards for evidence-based decision-making, leading many to ask the question, “Why can’t payers just tell us what their standards are?” Complicating the picture is that diagnostics can come to market via different pathways with different levels of supporting evidence (e.g., companion diagnostics reviewed by FDA for clinical validity with the companion therapeutic and tests developed, validated, and introduced to the market by laboratories).

The panelists – who represented leaders from industry, non-profit advocacy, and government working to create solutions for some of these market access barriers – noted several issues at play. One is that having a payer “pick list” or hard criteria for coverage removes the flexibility that is so often needed in these gray-area coverage decisions. The second is that given the volume of products they are evaluating, most payers don’t have the bandwidth to be experts in the nuances of the trial design for every technology. Third is that across all stakeholders, there is a wide range of knowledge on innovative products and personalized medicine, and basic education to help the majority of stakeholders better understand these products is lacking.

Several lessons and next steps came out of this discussion. First, panelists agreed that there must be more education for all stakeholders so that each stakeholder can actually evaluate novel products appropriately, a key finding echoed throughout the day. Second, the emphasis on outcomes must shift from only clinical outcomes to clinical outcomes and quality of life for patients. Finally, all panelists agreed the ideal situation is open, trusting lines of communication and split of the responsibility according to expertise.

At the end of the day, it may be incumbent on the molecular diagnostic community to shape the paradigm for evidence requirements so that payers can act as enablers, rather than watchdogs.

National Bioeconomy Blueprint Showcases Personalized Medicine as Model for Strengthening U.S. Bioeconomy

May 7, 2012

Last week, the White House released its National Bioeconomy Blueprint.  It lays out some strategic objectives designed to help realize the full potential of the U.S. biotechnology sector to generate economic growth by creating jobs and addressing societal needs.

As an example of how the government’s efforts can facilitate the development of a more robust bioeconomy, the report discusses the impact of the Human Genome Project and the development of personalized medicine. The Blueprint cites the Case for Personalized Medicine, 3rd Edition, noting “advances in recent technologies have increased the momentum of personalized medicine – customized healthcare based on specific genetic or other information of an individual patient.”

While we agree that policies are needed to help support research and development (R&D), improve translational and regulatory science, improve regulation in other areas, enhance workforce training, and develop new public-private partnerships and precompetitive collaborations, we are concerned they are not sufficient to allow us to realize Adriana Jenkins’ dying wish, that all patients have access to personalized treatments.

The White House is correct to shine a light on FDA and direct the agency to focus attention on application review times, coordinated parallel reviews of products, and continued improvement of regulatory science. In reality, we are already seeing the benefits of this increased coordination – with FDA’s accelerated review and approval of Kalydeco™ for cystic fibrosis and Xalkori® for non-small cell lung cancer, each approved together with its companion diagnostic, ahead of FDA timelines.

Still, engaging on regulatory science and streamlining FDA processes will only go so far to improve the bioeconomy and bring personalized medicine to patients. For instance, current comparative effectiveness research (CER) and health technology assessment (HTA) models, which are not addressed in the Blueprint, are not well aligned with the science of personalized medicine and such misalignment causes additional barriers to market entry and patient access after FDA approval. The Blueprint highlights coverage with evidence development (CED) as a potential HTA model, but it should be noted that although CED can be a good tool, if done wrong, it can also chill innovation.

Likewise, unclear and unrealistic expectations for obtaining Medicare coverage and adequate payment for personalized medicine products and services are a substantial hurdle, preventing companies from seizing the full scientific potential and translating that into new treatments and the resulting high-paying jobs and economic contributions that follow.

We look forward to working with the Administration and with Congress to shape policies that will help support the ability of companies to continue to develop personalized medicines and bring them to patients, including research and development tax credits, delivery system reforms, and regulatory and reimbursement policies. Given the tremendous potential of personalized medicines, it’s key that we get the policies right to foster companies working on personalized medicine and thereby improve patients’ lives and our economy.

Understanding the Complexities of Cancer and Progress in Personalized Medicine

March 14, 2012

Personalized cancer care holds much promise; it gives us confidence that targeted treatments will eliminate cancers and spare us the one-size-fits-all cancer treatments of our past. For the sake of simplicity, we want to think of personalized cancer care as a lock and key: each tumor presents a specific lock, and when the matching key is identified, the cancer melts away. This simple message of “lock and key” is comforting because it suggests that we can achieve a new reality where we assess a tumor and choose a treatment in a very simple way, based on a tumor’s genetic makeup.

We have many exciting new targeted cancer treatments in our toolbox – consider crizotinib (Xalkori®) for non-small cell lung cancer and vemurafenib (Zelboraf®) for BRAF mutation-positive metastatic melanoma. These therapies have been effective in patients with specific gene mutations. At the same time, as the science advances, research continues to reveal the underlying complexities of cancer and the diversity of tumors.

In the March 8 issue of The New England Journal of Medicine, the authors of “Intratumor Heterogeneity and Branched Evolution Revealed by Multiregion Sequencing” provide an exquisite roadmap of how heterogeneous and complex one individual’s cancer can be, with many mutations lurking within each tumor mass and the mutations themselves evolving over time. The picture presented by Gerlinger and colleagues suggests the need for many keys to a myriad of locks, each tailored for a particular tumor at a particular point in time. The simple message of lock-and-key personalized medicine, where simple one-to-one decisions can be achieved, would not hold true.

This may create concern that a personalized medicine approach has created a false sense of hope. I don’t think that it is false hope, but rather a false sense of the simplicity.

There is hope. While a single needle biopsy may not create a complete picture of a cancer, biopsies are still a crucial part of characterizing tumors, using growing scientific knowledge to understand which treatments might work best. Biopsies are a starting point. A number of institutions are conducting genetic and genomic analyses on tumors. This is probably not the Holy Grail, but rather an iterative step in a long trail of scientific advances that will support personalization of cancer medicine.

In addition, new technologies are needed to improve on the information gathered, including methods to understand whole tumor heterogeneity. If we believe a personalized approach is simple, we are at risk of not building the systems needed to deal with the deluge of complex information that will need to be sorted, understood, and applied clinically in real time.

We need both the science and the information technology solutions to keep improving. New understandings highlight new opportunities to target more central cancer mutations, and to create more thoughtful treatment cocktails.

The underlying message here is that although we have made great progress, we are still early in our journey. In reality, this is a very comforting message. It would be disheartening if we had already exhausted the targeted treatment toolbox and that the best we could achieve is delayed cancer progression but not cures. By embracing the complexity of more individualized information for any particular patient and tumor, we really do have the hope of achieving personalized cancer care.

As we uncover new and unimagined pathways during the scientific discovery process, we should not be deterred or disheartened. We just aren’t there yet. We need to ensure that public policies support the continued investment of energy, attention to detail and resources required to achieve a vision for personalized cancer care.

How Personalized Medicine is Changing Cancer Care – A First Hand View

June 28, 2011

I care for people with melanoma. To give you an idea how personalized medicine is impacting care, take the example of a patient I treated at Duke Comprehensive Cancer Center 18 months ago – let’s call her Sarah.

A 37-year-old nurse, Sarah and her husband were trying to start a family. Like many melanoma patients she had fair skin and red hair. Her mother died of melanoma. She came to me after her surgeon had removed a small black-pigmented skin growth and lymph nodes under her arm. After complete review of her clinical story we diagnosed her cancer as Stage IIIB (“3B”) melanoma. Reviewing cancer statistics and matching them to Sarah’s disease tells us that Sarah had about a 50% chance of surviving 5 years.  In order to maximize her chances of long-term survival without melanoma, together with Sarah, I was trying to decide on a treatment and care plan that would reduce the chance of the melanoma returning; this is called adjuvant treatment. Sarah and I were considering whether an adjuvant treatment plan was right for her, and if so, what medicine, for how long, and with what personal impact on her life.

At this point we have one chance to select the right treatment plan, because melanoma that returns is more difficult to treat and rarely goes away for good. You go to the “cookbook,” look up the treatment guidelines and try to balance the standard-of-care with what you know about the individual needs and preferences of the patient.  In Sarah’s case, finding a balance between effective adjuvant treatment, preserving her ability to have children, and helping her maintain her job were important considerations.  The treatment guideline advised interferon, a clinical trial, or no treatment, but we had few clues as to the best treatment path and certainly no information about fertility or the influence of Sarah’s genetic makeup on her ability to derive benefit from treatment.

In the year and a half since I treated Sarah, we now have new tests to predict risk and learn more about the specific tumor each patient has. Molecular descriptors about the cancer and striking radiological images provide important clues. The information available to us is getting better and better, which makes it more likely that we will hit the mark in the one shot we have to find the right treatment. In the future, I anticipate that other factors such as Sarah’s heritage, her symptom profile, environmental exposures, and personal values will also be incorporated into the decision-making process.  Sarah’s case reminds us that these are real people undertaking real and exceptionally critical decisions; we need to be able to harness all available information to maximize the chance that we make the right decision for Sarah and patients like her.

At the same time this growing wealth of information brings new challenges for physicians. The “cookbook” does not work well as we gather more detailed information about potential risks, likely side effects, and benefits of different treatment combinations. 

Physicians care about getting these decisions right for each patient. The current “cookbook,” though, is one-size-fits-all. We need information science and technology (IT) to help match guidelines and predictive mathematical models informing patient care with new information available about individual patients and the type of tumor they have.  And it has to happen at the point of care, when we need it. The amount of information can be overwhelming. Evolving data systems and IT infrastructure that are reliable and trustworthy will be key to making the most of the information available. Along with this, we simply need more time with patients. We often have about 8 to 15 minutes to make a decision with the patient that will affect their entire future. This is not what patient-centered care and personalized medicine are all about.

Someday I hope that when a patient like Sarah comes in I can look up her exact type of tumor, incorporate her personal family history and other medical information into the story, see what treatments she is likely to respond to, understand the potential risks, and balance her personal concerns like maintaining her fertility. Personalized medicine is advancing quickly, but we need information systems to support good decision-making for physicians and patients as the information available continues to expand exponentially.

Health IT and Personalized Medicine: Making the Connection

February 10, 2011

In his recent blog entry, Darrell West, Ph.D., of the Brookings Institution discussed the importance of health information technology (HIT) for the advancement of personalized medicine.  He argues that HIT can serve as the bridge enabling the flow of information between the researcher’s lab and the clinician’s office, helping to realize the full potential of personalized medicine: patient care guided by an understanding of disease at the molecular level, resulting in health system savings as we move away from a trial-and-error approach to medical care.

Taking a more in-depth look at the need for synergy between personalized medicine and HIT, the Brookings Institution published Enabling Personalized Medicine Through Health Information Technology, a paper by Dr. West outlining public policy changes needed to ensure that HIT facilitates innovation and clinical adoption of personalized medicine.  While I won’t summarize all eight of the policy recommendations highlighted in the paper, a couple have particularly important ramifications for personalized medicine.

Better Data-Sharing Networks. With proper policy in place, HIT can help to facilitate the connectivity, integration, and data analysis that is necessary to help researchers understand what types of therapies will work for what types of people, and provide a platform for health care providers to use this information in clinical practice. However, as suggested in a report last year from the President’s Council of Advisors on Science and Technology (PCAST), connecting data across the nation’s 650,000 doctors and 5,800 hospitals is one of the most significant challenges in achieving the necessary exchange of health information that can better inform research and patient care. As Dr. West rightfully points out, HIT policy must facilitate the creation of a system that not only improves the accounting and administrative aspects of health care, but also facilitates the tracking of information on treatment guidelines, medical tests, and clinical outcomes – helping to achieve greater value in health care. Furthermore, we can only realize the full benefits of our improved understanding of disease when new genomic and other personalized information is included in a patient’s electronic health record and can be connected and compared to clinical outcomes data from other patients to allow researchers to spot trends and build knowledge.
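As a purely illustrative sketch of the kind of cross-patient analysis described above (all field names, variants, and records here are hypothetical, not drawn from any real system), connecting genomic information to outcomes ultimately comes down to joining tumor markers against treatment results across many patients and tallying response rates per variant:

```python
from collections import defaultdict

# Hypothetical de-identified records: each links a tumor variant
# recorded in a patient's EHR to that patient's treatment outcome.
records = [
    {"variant": "BRAF V600E", "treatment": "vemurafenib", "responded": True},
    {"variant": "BRAF V600E", "treatment": "vemurafenib", "responded": True},
    {"variant": "BRAF V600E", "treatment": "vemurafenib", "responded": False},
    {"variant": "EML4-ALK",   "treatment": "crizotinib",  "responded": True},
]

def response_rates(records):
    """Tally responders per (variant, treatment) pair across patients."""
    counts = defaultdict(lambda: [0, 0])  # key -> [responders, total]
    for r in records:
        key = (r["variant"], r["treatment"])
        counts[key][1] += 1
        if r["responded"]:
            counts[key][0] += 1
    return {k: responders / total for k, (responders, total) in counts.items()}

print(response_rates(records))
```

With enough patients contributing comparable records, even a tally this simple begins to surface the trends Dr. West describes – which is precisely why the connectivity and standardization of the underlying data matter more than the analysis itself.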

Of course, improved data-sharing capabilities will require a balance between protecting the privacy of personal health information and enabling appropriate access to aggregated data and analysis of results.  Dr. West offers additional insight in the paper as to what this may look like.

Ending the Catch-22 of Reimbursements. As discussed in the Personalized Medicine Coalition’s issue brief The Adverse Impact of the U.S. Reimbursement System on the Development and Adoption of Personalized Medicine Diagnostics, diagnostic tests play a pivotal role in the practice of personalized medicine, but as of yet, coding systems have not been updated to reflect the multitude of tests available, and reimbursement does not adequately reflect test value or interpretation costs. Dr. West expounds on this in his discussion of the CMS Coverage with Evidence Development (CED) system. In 2009, CMS issued a CED decision for warfarin diagnostics, covering the cost of the test only for participants in two specific research studies.  Though CED could help demonstrate the value of genetic tests in improving clinical outcomes, it also creates a catch-22 in that insurers are reluctant to cover “experimental” diagnostics. Without updates to the coding and reimbursement system, and HIT-driven pathways to capturing information, innovative test developers will not be able to gather the level of evidence required to qualify for reimbursement. Speaking at Brookings, David Brailer, M.D., Ph.D., Chairman of Health Evolution Partners and the former National Health Information Technology Coordinator, suggested, “We should develop the reimbursement coding schema [for personalized medicine technologies]… [The lack of a reimbursement coding framework] has had a more adverse effect than any other aspect on the development of these technologies.”

As advances in science transform our understanding of disease, we must also adapt our policies and perspectives on how medicine is practiced.  We have the opportunity to leverage HIT to enable a new era in medicine – one that can integrate research findings into treatment guidelines by applying what we know about disease to more effectively and efficiently direct research, treat patients, and reduce costs in our overburdened health care system.

For a more in-depth examination of the policy recommendations in the Brookings paper, the full report is available for download from the Brookings Institution, along with a transcript and audio recording of the release event featuring Dr. Brailer and a panel discussion including Paul Billings, M.D., Ph.D., of Life Technologies Corp.; Mark Boguski, M.D., Ph.D., of Harvard Medical School’s Center for Biomedical Informatics; Emad Rizk, M.D., of McKesson Health Solutions, Inc.; and Donald Rucker, M.D., of Siemens Medical Solutions USA.

Enabling Personalized Medicine Through Health Technology Innovations

January 24, 2011

As numerous studies show – and the Personalized Medicine Coalition’s education and advocacy efforts further underscore – personalized medicine is poised to play a big role in improving patient care and health care delivery in the U.S.  It offers earlier prevention, better-targeted treatments, health system cost savings and enhanced understanding of the differences in effectiveness of different treatment options for different patients.  

Despite its promise, personalized medicine faces challenges; the science is emerging and complex, regulatory pathways are not optimal, and health care financing and delivery create barriers to its adoption.  The successful implementation of health information technology can help address these challenges by creating a new infrastructure for developing data on personalized medicine interventions, and by giving clinicians up-to-date information on available treatment options.

Even while the government has made significant strides toward improving the technological infrastructure in the U.S. health care system, we must keep in mind that widespread use of health information technology does not guarantee the advancement of personalized medicine.  As the government continues to expand and adapt the criteria for “meaningful use” of electronic health records (EHRs) in the coming years, we must continue to work to ensure that new systems are capable of handling, sharing and analyzing the genetic and outcomes data needed to promote the continued development of personalized medicine.

On January 28, I will be presenting a paper I’ve developed that outlines key findings and recommendations about the public policy actions needed to ensure that health information technology facilitates the adoption of personalized medicine in the U.S. health care system. A fantastic panel of experts, including Mark Boguski (Center for Biomedical Informatics, Harvard Medical School), Donald W. Rucker (Siemens Medical Solutions USA), Emad Rizk (McKesson Health Solutions) and Paul Billings (Life Technologies Corp.), will further discuss the policy and operational changes that would facilitate connectivity, integration, reimbursement reform and secondary analysis of information.  Plus, David Brailer, chairman of Health Evolution Partners and the first “health information czar” during the Bush administration, will deliver a keynote address.

We encourage you to join us on January 28.  If you’d like to attend, please register here.  We look forward to seeing you there!

The Patient as Collaborator: How Personalized Medicine is Giving Back to the Patient

December 21, 2010

While the advancement of personalized medicine hinges on collaboration among all stakeholders from researchers to industry to clinicians to policymakers, the ultimate stakeholders in personalized medicine efforts are the patients themselves.  In Total Cancer Care™, patients are not only the ultimate beneficiary, but also the major contributor to the effort.  More than 90% of patients who are invited to participate in the Total Cancer Care™ Protocol accept this offer.  This high participation rate is primarily a reflection of the intrinsic altruism of patients and their desire to contribute to the solution through research.  We formed a Patient Advocacy and Ethics Council to assist us in developing and implementing the Total Cancer Care™ Protocol.  We asked this group, “What can we give back to the patient, not just those patients who may develop recurrent disease, but also those who may be cured by initial therapy?” (Approximately 55% are long-term survivors.)  Without hesitation, the council told us that all patients desire to have access to their own information in a usable and understandable format.

To that end, we developed a Patient Portal to the data warehouse, which provides patients with their own medical histories, data and other important information.  Under the leadership of Mark Hulse (formerly of Partners Healthcare) and Dr. David Fenstermacher, we began this effort at Moffitt in October 2009, and we are gradually extending portal access to all patients.  Our goal is to extend this service and resource not only to patients at Moffitt but to all patients at all consortium sites.  Much work remains in this area, and it requires a much improved “real-time” information system.  We also are developing the system to be not only a repository of patients’ personal health records, but also a portal where patients can use the information to make informed decisions.  Again, working with the Institute of Human and Machine Cognition (IHMC), we are developing virtual learning technology and applying a process called C-map tools, originally developed at IHMC, to help patients and physicians navigate Internet resources to ultimately meet patients’ needs.

In summary, these are very exciting times.  I truly believe we are at the threshold of translating and, just as importantly, DELIVERING on the promise of personalized medicine.  We hope that our effort in developing Total Cancer Care™ will be a major part of the foundation of what someday will be considered commonplace—a healthcare system and technologies organized in such a way that every patient’s needs are identified and inform an individualized approach to meeting them.

Primary stakeholders in developing personalized medicine, including researchers, clinicians, industry, policymakers, and patients themselves, must come together to organize the framework and environment to promote personalized medicine.  It is unrealistic to expect any one stakeholder to assemble the resources needed to create the rapid learning information system required to capture data, leverage and enhance the informatics needed for analysis, and communicate new knowledge to all the stakeholders involved in building a better healthcare system on the foundation of personalized medicine.  Teams comprising broad expertise across the healthcare and research spectrums, and an unprecedented effort by all, will be required to exploit the advantages of this team science approach.

To complement the scientific infrastructure and technology that has already been developed, additional resources will be required, including expanding information systems to community hospitals and physician practices: electronic medical records, biomedical informatics applications, and information technology professionals.  Ultimately, by developing evidence-based healthcare systems, we will improve the quality of healthcare by identifying the best options for patients based on their personal traits and characteristics; such is the promise of personalized medicine.

The Role of Comparative Effectiveness Research in Total Cancer Care™

December 20, 2010

In my previous entry, I discussed how the launch of the Total Cancer Care™ initiative at Moffitt Cancer Center nearly eight years ago led to the development of one of the largest prospective observational studies in the world.  Through the enrollment of more than 60,000 patients and collection and genetic profiling of tens of thousands of tumors, Total Cancer Care™ collaborators have generated a vast information system to be leveraged as a clinical decision tool, and as a means of quality performance and comparative effectiveness research (CER).

One of the stated aims of Total Cancer Care™ is to raise the standard of care for all patients by integrating new technologies in an evidence-based approach to maximize benefits and reduce costs.  Although we developed this aim over seven years ago, I believe it is completely consistent with the current definition of comparative effectiveness being used by AHRQ and other policymakers. 

As I mentioned in my previous entry, strategic partnerships are an essential component to achieving the goals of Total Cancer Care™, and this is clearly demonstrated in our efforts in CER. Dr. David Fenstermacher and colleagues from Moffitt, as well as the Institute of Human and Machine Cognition (IHMC) in Pensacola, Fla., are collaborating on a major NIH/NCI grant to enhance the Total Cancer Care™ infrastructure to support CER by expanding data management resources, integrating automated data extraction methodologies (including natural language processing technology, a particular area of expertise for IHMC), and creating user interfaces to data for researchers, clinicians and even patients.

A major focus of our current efforts in CER is to determine the information and technology gaps in the CER infrastructure for data capture and data sharing.  Ultimately, it will be important to involve the community at large who are enrolling patients in the Total Cancer Care™ Protocol so that they can use the Total Cancer Care™ data warehouse as a decision tool based on evidence generated by the study itself.  The importance of the community network cannot be overemphasized, both for populating the Total Cancer Care™ biorepository and database and for the ultimate utilization of the information and evidence generated to deliver the right treatment to the right patient.

To enhance Moffitt’s ability to establish this large research initiative, the cancer center formed a wholly owned for-profit company, M2Gen, in 2006.  Merck and Co., Inc., through a Merck affiliate, signed on as our “Founding Collaborator.” This experience has taught us how to service a global healthcare client and produce measurable scientific insights to accelerate drug candidates through translational medicine advances.

“Partnering for Cures” to Advance Personalized Medicine

December 17, 2010

I think we all would agree that finding cures and improved treatment options for cancer is a moral imperative. They will have a dramatic impact not only for those fighting the disease, but also for the families, friends, healthcare providers, and other caretakers that support them in their battle.   Personalized molecular medicine provides a promising path forward in cancer care, but accelerating this research requires the brightest minds, great laboratories, cross-disciplinary collaboration, rich software tools and LOTS of relevant, annotated, real-time data. Today, we are missing the “LOTS of data” piece, because our health information technology (HIT) and consent systems are not effectively connected for either the improvement of care or the acceleration of research.

Estimates in the U.S. indicate that more than 1.5 million people will be diagnosed with – and more than a half million people will die of – cancer in 2010. And, as of 2007, 11.7 million Americans were living with the disease. Of those 11.7 million cancer survivors, it’s estimated that only 5% are enrolled in clinical trials, and only 15% are being treated at major research centers – which means more than 9 million people with cancer are not part of formalized research.  This is a highly motivated community, many of whom would welcome the chance to participate in research that could help their children, or their children’s children, receive more effective treatments if they suffer from the disease.

In partnership with the National Cancer Institute (NCI) and SAIC, we have built a prototype to demonstrate that we can solve this problem now. Earlier this week at the Partnering for Cures conference in NYC, Ken Buetow, Ph.D., Director at the Center for Biomedical Informatics and Information Technology at the NCI and Dr. Jon Handler, from Microsoft’s Health Solutions Group, presented a jointly developed prototype that showcases the potential for information technology to accelerate personalized healthcare research and improve clinical care.  Dr. Buetow talks here about the information challenges faced by researchers, providers and patients, and looks at the potential for technology to drive meaningful transformation in support of these stakeholders’ needs.

The prototype uses Microsoft HealthVault and the Patient Outcomes Data Service (PODS) created by the NCI to collect provider and patient-generated data on cancer diagnoses, treatments and outcomes. Since PODS and HealthVault are easily accessible outside research centers, the prototype highlights ways to engage a broader set of clinicians and patients in research – making it easier to reach those 9 million people who are not currently represented in research studies. In addition, gathering regular reports from patients on their experience with cancer treatments – for example, tracking daily pain levels, sleep patterns and mood – can provide researchers and clinicians with a richer set of data for understanding the impact of cancer treatments, particularly among certain patient sub-types and populations.

Using Microsoft Amalga, this data can be anonymized and aggregated with data available in other research databases to create a disease registry that enables more complete analyses of the efficacy of cancer treatments. Providers and patients not only have the opportunity to contribute their own information to benefit others; they can also view trended data from across similar patient populations, enabling shared decision-making around diagnoses and treatment plans.
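The de-identify-then-aggregate step can be sketched as follows. This is an illustrative assumption about the approach, not Amalga’s actual mechanism: direct identifiers are replaced with a salted one-way hash (so records stay linkable without being re-identifiable by the registry alone), and outcomes are then bucketed by diagnosis and treatment.

```python
import hashlib

def anonymize(record, salt="registry-salt"):
    """Replace the direct identifier with a salted one-way hash token.

    Illustrative de-identification only; real registries follow formal
    standards (e.g., HIPAA de-identification rules) with many more fields.
    """
    token = hashlib.sha256((salt + record["patient_id"]).encode()).hexdigest()[:16]
    return {
        "token": token,
        "diagnosis": record["diagnosis"],
        "treatment": record["treatment"],
        "outcome": record["outcome"],  # 1 = responded, 0 = did not respond
    }

def registry_summary(records):
    """Aggregate outcomes by (diagnosis, treatment) for trend views."""
    buckets = {}
    for r in records:
        buckets.setdefault((r["diagnosis"], r["treatment"]), []).append(r["outcome"])
    return {k: {"n": len(v), "response_rate": sum(v) / len(v)}
            for k, v in buckets.items()}

raw = [
    {"patient_id": "pt-001", "diagnosis": "breast cancer",
     "treatment": "regimen A", "outcome": 1},
    {"patient_id": "pt-002", "diagnosis": "breast cancer",
     "treatment": "regimen A", "outcome": 0},
]
deidentified = [anonymize(r) for r in raw]
print(registry_summary(deidentified))
```

The aggregated view is what would feed the trended population data that providers and patients can consult when weighing treatment options.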

While the prototype we built focused on cancer research, there is potential to use health IT to further the personalization of treatments for other diseases – Parkinson’s, multiple sclerosis, and Alzheimer’s – and begin to see how we can use the power of technology to create closer connections and valuable feedback loops across providers, patients, and researchers. Ultimately, we hope this will translate to people arriving at critical insights more quickly and partnering with each other not only to improve the care of the individual patient, but also to find cures for cancer and other devastating diseases.

Peter Neupert is a Corporate Vice President in Microsoft Health Solutions Group.

Health IT to the Rescue: Managing Data in the Age of Genomics

June 17, 2010

The 10th anniversary of the draft human genome sequence, released by the Human Genome Project in 2000, is a milestone for personalized medicine. Our mantra – “get the right intervention to the right patient at the right time” – all but mandates the roll-out of genomic information in clinical practice. As we come closer to the goal of the $1,000 genome, I can now imagine a world in which an individual genomic profile allows us to tailor cancer treatment to a patient’s personal situation. In the next decade, genomics will provide us with the opportunity to refine treatment planning so that we use drugs when they are going to work, spare patients unnecessary side effects, and avoid wasting precious time, emotional resources, or funds on drugs that are unlikely to be effective. Even intimately personal decisions, such as how to preserve fertility, can be elucidated by and based on genomic data mixed with an understanding of effectiveness and toxic risk.

Coexisting with excitement at the possibilities of genomically-guided personalized medicine is a pervasive angst. The profusion of new information can be daunting. How will I know all of the relevant inputs into decision-making in the era of personalized medicine? How will I balance multiple important factors for each patient, without a roadmap or algorithm for this new type of clinical decision-making? In personalized medicine, when treatment choices rely on unique genomic data for each patient, the quantity of potential data points to be factored into any single clinical decision boggles the mind. How can I intelligently coordinate and consider all of this data?

Recent progress in health information technology (HIT) and in advancing our country’s data infrastructure provides hope that technology may come to the rescue, saving us from a morass of data and helping us make sense of the new plethora of information. The Human Genome Project yielded vast amounts of data; its completion required development of interoperable data, novel statistical methods, and new HIT systems. These same tools can also help us use genomic information, as well as rapidly increasing bodies of clinical and research evidence, to inform decision-making. Genomically-guided biomarkers and predictive tests will help generate personalized information, but tools will be needed to help clinicians understand and use the resulting data, integrated with myriad other personal data types like blood chemistries, clinical exam findings, pre-existing toxic exposures over a lifetime, and patient-reported concerns. The development of personalized clinical decision support tools and prediction models, tailored and designed for efficient use at the point of care, will assist us in connecting the dots between the promise of the Human Genome Project and the vision of personalized medicine. May this 10th anniversary energize us to move from theory to action, and to strive for ever more finely individualized care that optimizes outcomes for our patients.
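To make the idea of a point-of-care decision support tool concrete, here is a deliberately tiny rule-based sketch. The marker-to-suggestion mappings are simplified illustrations of well-known associations (HER2 status and HER2-targeted therapy; KRAS mutation and anti-EGFR antibodies), not a clinical algorithm, and a real tool would weigh far more inputs.

```python
def recommend(profile):
    """Map a simplified tumor-marker profile to candidate considerations.

    Illustrative only: real clinical decision support must also integrate
    blood chemistries, exam findings, exposure history, and
    patient-reported concerns.
    """
    suggestions = []
    if profile.get("HER2") == "positive":
        # HER2 overexpression predicts benefit from HER2-targeted agents.
        suggestions.append("consider HER2-targeted therapy (e.g., trastuzumab)")
    if profile.get("KRAS") == "mutant":
        # KRAS mutation predicts lack of benefit from anti-EGFR antibodies.
        suggestions.append("anti-EGFR therapy unlikely to benefit")
    if not suggestions:
        suggestions.append("no marker-specific guidance; follow standard protocol")
    return suggestions

print(recommend({"HER2": "positive", "KRAS": "wild-type"}))
```

The value of even a toy like this is the shape of the interaction: structured genomic inputs in, a short, reviewable list of considerations out, delivered at the moment a clinician is making the decision.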
