Each year, the United States spends billions of dollars on cancer research. The National Cancer Institute’s budget alone is $5.665 billion for 2018, and that does not include private donations. Yet despite all the money and effort, a “big data crisis” is preventing healthcare institutions from fully learning from one another.
The New York Times recently published an article titled, “New Cancer Treatments Lie Hidden Under Mountains of Paperwork,” which talked about this problem and the difficulty of extracting meaningful data from medical records. And I have to admit, the article struck a nerve.
Lack of sharing = fewer insights
The problem is that too much cancer data is trapped in medical records that never leave the hospital where they were created. Cancer researchers can’t easily tap into the mountains of data being produced at other institutions, yet it is precisely the combination of data across institutions that will yield true insights. The author of the Times article, Gina Kolata, writes:
“In the United States, there is no single format used by all providers, and hospitals have no incentive to make it easy to transfer records from one place to another. The medical records mess is hobbling research and impeding attempts to improve patient care.”
This sentiment also appears in a 2016 Wired magazine article, titled, “The Cure for Cancer is Data – Mountains of Data.” Author Lola Dupre interviewed Eric Schadt, who started the Icahn Institute for Genomics and Multiscale Biology at Mount Sinai Hospital in New York. Schadt discussed how, despite a plethora of research data, it’s difficult to piece together the information for true advancements. Schadt says:
“In the five years that I’ve been here, I’ve realized that’s just not going to happen within the medical centers. They’re too isolated from each other, too competitive, and they’re not woven together into a coherent framework that enables the kind of advancements we’re seeing in nearly all other industries.”
Why I care about this
I care about this issue as someone involved in healthcare technology and as an employee of a company that strives to help organizations gain insights into their data. But even more than that, I care about this issue for personal reasons. Most of us know someone who has had cancer. For me, that person was my father.
Two years ago, my dad died from acute myeloid leukemia (AML). Dad was a patient at Dana-Farber Cancer Institute in Boston, the same hospital discussed in the Times article. My father underwent conventional chemotherapy, and when that didn’t work, he was one of the first to enroll in a clinical trial at Dana-Farber. He was the first person to go into remission with that therapy, but ultimately, the leukemia came back and Dad passed away.
I share this because I know there’s a lot that researchers can learn from my dad’s experience with the clinical trial. My dad’s disease was complicated due to genetic factors. Because of this, my father went into the trial cognizant that his prognosis was poor. However, he knew that his effort could potentially help other patients down the road, especially those with this particular genetic mutation. That’s why I felt frustrated upon reading the Times article and learning that healthcare institutions, in many cases, are not able to make the most of their data by sharing it. What good is research if we are not going to learn as much as we can from it? What good is my dad’s data if there are roadblocks to it being shared freely with other researchers working on the same disease at other institutions?
What is currently being done by healthcare organizations
Some healthcare organizations are trying to better aggregate and share cancer research data. Here are some of the areas where they are focusing.
Interoperability: Right now, the process of obtaining medical records can be cumbersome, and as noted in the Times article, it must often be done by fax. With all of our technological advances, this seems simply unacceptable. Healthcare systems must be able to talk to each other and exchange information. Recently at HIMSS18, presidential advisor Jared Kushner spoke about the Trump administration’s plan to make interoperability a priority. The Office of the National Coordinator for Health Information Technology (ONC) aims to achieve interoperability by 2024.
Data exchanges: There are currently various data exchanges in place, including the National Cancer Institute’s Genomic Data Commons, which provides a data repository in support of precision medicine. In addition, various healthcare organizations are spearheading initiatives to share data, such as St. Jude Children’s Research Hospital, which recently launched its public repository of children’s cancer genomic data.
Cancer Moonshot: In 2016, former Vice President Joe Biden announced the Cancer Moonshot, an initiative to accelerate cancer research. Among the program’s goals are plans to build a national cancer data ecosystem and to use machine learning and predictive analytics to develop better treatments for patients.
When thinking about cancer research, it’s vitally important that we support not only the researchers who are in the lab day in and day out working on therapies and breakthroughs, but also the technology that underpins their research. To truly make headway, healthcare organizations, the federal government, and technology companies must collaborate. Think of the advancements that could be made and the lives that could be saved if research data were freely accessible.