Precision oncology: an interview with Karina Eterovic
Audio podcast: Precision oncology: an interview with Karina Eterovic. This content is hosted on SoundCloud by Taylor & Francis Group.
In this In the Zone podcast, Dr Karina Eterovic, Eurofins-Viracor (KS, USA), explores how precision oncology is currently utilized in the bioanalytical field, the use of liquid biopsies to support these studies and the major challenges impacting drug development.
Podcast transcript
Hello and welcome to our In the Zone podcast on Precision Oncology. I’m Ellen Williams, Digital Editor for Bioanalysis Zone, and today I’m joined by Dr Karina Eterovic, R&D Director for Oncology and Sequencing at Eurofins Viracor. At Viracor, Dr Eterovic leads a team that develops and validates tests supporting both cancer care and drug development. Before joining Eurofins, Karina was an Associate Professor at MD Anderson Cancer Center, where she led the Cancer Genomics Laboratory within the Institute for Personalized Cancer Therapy. Dr Eterovic has over 12 years of experience in cancer genomics and precision oncology, and has many publications in the field.
Thank you, Karina, for joining me on the podcast. It’s so great to be able to speak with you. It’s my pleasure to be here. Thank you for this invitation.
I guess my first question really is, what is precision oncology? So for many decades, we have been treating cancer patients with a “one-size-fits-all” approach. That means that tumors that look identical or very similar under the microscope have been treated using the exact same protocol, but with a very low success rate. The field of precision oncology aims to stratify patients according not only to the tumor histology, but also according to the molecular profile of the tumors. That includes identifying genes that are mutated or translocated, or proteins that are under or overexpressed. These molecular aberrations often drive tumor behavior, progression and metastasis. So, treating tumors using these markers as a guide can increase the chances of success and improve patients’ outcomes.
So what technological advancements have been made through this increased focus on precision oncology? In the last 10 to 15 years, advancements in massively parallel sequencing technologies, also called next-generation sequencing or NGS, drastically decreased the cost of DNA sequencing. That allowed more and more tumor samples to be screened for genomic changes. The high-throughput data generated has tremendously increased our understanding of which aberrations impact tumor progression and where we should focus when it comes to drug development. And with that invaluable information in hand, researchers and pharmaceutical companies were able to develop new targeted drugs that are more effective, but also produce fewer side effects.
Another key advancement in recent years happened in the immuno-oncology front. Immunotherapy drugs, also called immune checkpoint inhibitors, have demonstrated excellent performance for some types of advanced tumors, such as melanoma or non-small-cell lung cancer. These drugs work by blocking checkpoint proteins, which are proteins made by our immune system, from binding with their partner receptor. The binding of these checkpoint proteins to a partner molecule in a cancer cell can weaken immune responses, often preventing T-cells from killing cancer cells. So immunotherapy drugs can act by removing this “blindfold” or this “off-signal”, making immune cells more apt to attack and kill tumor cells.
Do these new technologies bring about potentially significant data analysis challenges? And how critical is the bioinformatics component for interpretation and obtaining actionable results? When you talk about next-generation sequencing, or NGS, for precision oncology, there are different levels of data analysis and interpretation. The first one is called secondary bioinformatics data analysis, and it basically takes the data coming out of the sequencers, aligns it to the human reference genome, and then makes calls for nucleotide changes that differ from that reference genome. Sequencers with very high throughput can generate many, many terabytes of sequencing data at a time, which makes it cost-effective, but at the same time, it can bring challenges regarding data processing and storage. Storage is particularly important, mainly because, from a regulatory standpoint, the data must be stored for a minimum period of time, 5 years, sometimes 10 years, but 5 in most cases. Also, this data must be secure, as it can serve as identifiers for a patient, and we must protect patient identities under HIPAA regulation.
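To make the secondary-analysis step concrete, here is a deliberately simplified sketch of its core idea: once reads are aligned, variant calling reports positions where the sample differs from the reference genome. The sequences and function below are illustrative only; real clinical pipelines (aligners, variant callers handling quality scores, indels and coverage) are far more involved.

```python
# Toy illustration of the core idea behind secondary NGS analysis:
# after alignment, variant calling reports positions where the sample's
# consensus base differs from the reference. Sequences are hypothetical.

REFERENCE = "ACGTACGTAC"   # hypothetical reference fragment
SAMPLE    = "ACGTAGGTAC"   # hypothetical aligned sample consensus

def call_variants(ref: str, sample: str):
    """Return (position, ref_base, alt_base) for each mismatch."""
    return [
        (pos, r, s)
        for pos, (r, s) in enumerate(zip(ref, sample))
        if r != s
    ]

print(call_variants(REFERENCE, SAMPLE))  # → [(5, 'C', 'G')]
```

In practice this comparison is performed genome-wide over terabytes of reads, which is where the processing and storage challenges described above come from.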
The second layer of analysis is the clinical interpretation of those genomic aberrations called during the secondary analysis. Clinically, when we utilize a genomic panel of, let’s say, 500 or so cancer-related genes, we would see an average of 20 to 30 somatic events for each tumor sample. In this context, somatic means the aberration identified belongs to the tumor DNA, not to the patient DNA. In other words, they are exclusive to the tumor, which is, in fact, what we are looking for. So, busy oncologists would rarely have the time to investigate the clinical significance of each of them and then make a treatment decision based on that. Therefore, clinical interpretation should be added to the reports, and that would include approved drugs that match some of the genomic changes, as well as clinical trials in cases where we don’t have an approved drug.
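The tumor-versus-patient distinction above can be sketched as a simple set difference: variants present in the tumor sample but absent from the matched normal (germline) sample are candidate somatic events. The variant names below are hypothetical and used only to illustrate the filtering logic.

```python
# Toy sketch of tumor-vs-normal filtering: variants found in the tumor
# but not in the patient's matched normal sample are candidate somatic
# events. Variant names here are hypothetical examples.

tumor_calls    = {"BRAF V600E", "TP53 R175H", "APOE e4"}
germline_calls = {"APOE e4"}  # inherited, present in normal tissue too

somatic = tumor_calls - germline_calls  # set difference
print(sorted(somatic))  # → ['BRAF V600E', 'TP53 R175H']
```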
Most of the oncology trials currently in place now have a biomarker or a genomic aberration tied to the drug that is being evaluated. As we talk about clinical trials, the exact same type of sequencing panel can guide drug development and can be used at any stage from preclinical studies through all the phases in human testing. For example, preclinically, the information coming out of this type of testing can identify new biomarkers that could be drivers of a particular type of tumor.
The data can also shed some light into mechanisms of resistance to treatment when a drug is not performing the way we expect. Importantly, a retrospective analysis of any leftover material can also help to stratify responders versus non-responders based on the genomic profile of their tumors, and that will give us a better understanding of those mechanisms of resistance when the drug fails.
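The responder-versus-non-responder stratification mentioned above amounts to grouping patients by genomic profile and comparing response rates per group. The sketch below uses entirely hypothetical patient data to show the shape of that analysis; a real study would involve proper statistics on much larger cohorts.

```python
from collections import defaultdict

# Hypothetical retrospective data: each patient's tumor profile
# and whether they responded to the drug under study.
patients = [
    {"mutation": "KRAS G12C", "responded": False},
    {"mutation": "KRAS G12C", "responded": False},
    {"mutation": "wild-type", "responded": True},
    {"mutation": "wild-type", "responded": True},
    {"mutation": "wild-type", "responded": False},
]

# Tally response rate per genomic group; a group that never responds
# hints at a possible resistance marker.
counts = defaultdict(lambda: [0, 0])  # group -> [responders, total]
for p in patients:
    counts[p["mutation"]][0] += p["responded"]
    counts[p["mutation"]][1] += 1

for group, (resp, total) in sorted(counts.items()):
    print(f"{group}: {resp}/{total} responded")
```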
We’ve heard a lot about liquid biopsies lately, so can you explain how they can be used within precision oncology? Yeah, in oncology we call liquid biopsy a test done on a blood sample to look for tumor DNA that is circulating in the plasma. This circulating tumor DNA, or ctDNA, comes from tumor cells that die due to a variety of processes. It is important to mention that different types of tumors can shed circulating tumor DNA differently, with some tumors more prone to shedding than others. Also, high-grade tumors shed more DNA than early-stage ones.
As for utility, liquid biopsies can serve two main, distinct purposes. One is early detection for patients without a previous diagnosis of cancer, and the other is treatment decision guidance for patients who have already been diagnosed with cancer by other standard methods. For both, one of the main challenges is the low amount of material that comes out of these blood samples. Even for late-stage cancers, the amount of circulating tumor DNA can be very low, sometimes not even enough to be processed in the lab. Another factor that can affect the yield of ctDNA is systemic treatment, such as chemotherapy. As a lot of cancer cells die during this treatment, we will see very high levels of ctDNA in the blood right after a treatment. So we need to consider all of those factors before collecting blood when we are planning to use this type of test, so we can make sure we have enough ctDNA to test.
Can you talk a little bit more about the differences between using liquid biopsies for early detection versus later on in the diagnostic process? When we use liquid biopsies for early detection, ideally we would want one test that could identify any type of tumor. But this is challenging, because finding a biomarker or a set of biomarkers that can tell if a person has cancer with more than 90% accuracy is extremely difficult. And lower levels of accuracy, meaning a high rate of false positives, can throw a patient into an endless process of chasing a tumor that might not even exist in the first place. The financial and psychological burden are not to be ignored in this situation, and that is why we need to be very careful with tests like this when we put them on the market. On the other hand, using liquid biopsies to identify genomic aberrations to guide treatment decisions for a patient who already has a diagnosis of cancer is a lot more straightforward. Once we have the minimum amount of ctDNA to work with in the lab, with new technologies and error-correction additions to the current NGS workflows, we can identify aberrations with confidence, and physicians can use that information to make decisions about treatment. And we must keep in mind that a lot of patients will still not respond to these new therapies.
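The false-positive concern in screening can be made concrete with Bayes' rule. As a hypothetical numeric sketch (the figures below are illustrative, not from the interview): even a test with 90% sensitivity and 90% specificity, applied to a population where only 1% actually has cancer, produces mostly false alarms among its positive results.

```python
# Hypothetical numbers, for illustration only: a screening test with
# 90% sensitivity and 90% specificity in a population with 1% prevalence.
sensitivity = 0.90   # P(test positive | cancer)
specificity = 0.90   # P(test negative | no cancer)
prevalence  = 0.01   # P(cancer)

true_pos  = sensitivity * prevalence              # 0.009
false_pos = (1 - specificity) * (1 - prevalence)  # 0.099

# Positive predictive value: chance a positive result is a real cancer.
ppv = true_pos / (true_pos + false_pos)
print(f"PPV = {ppv:.1%}")  # → about 8%: most positives are false alarms
```

This is why sub-90% accuracy in an early-detection setting translates into the "chasing a tumor that might not exist" scenario described above.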
What are the main challenges that we face during drug development? I truly believe we need to come up with more effective therapies, and the only way to do that currently is through better drug design and improved clinical trial design. It starts with how we select a drug candidate or a chemical compound to move into human testing. In research labs at academic institutions, every day we see a vast number of drugs that kill cancer cells on a glass plate. Some of them will be further tested in animal models, and the ones showing promising results will then move into human testing. But these steps all together can take 5, sometimes 10, years to complete.
However, 90% of the drugs tested under these extremely regulated, lengthy and costly processes will prove not to be safe, or not to be effective enough, to receive final approval from a regulatory agency such as the FDA. So we need better preclinical models before moving to humans, so we don’t waste time and resources. And once safety is tested, we need more effective inclusion criteria to make sure that we include the right population during clinical testing. We also need high-throughput testing using not only DNA, but also RNA and protein as biomarkers, to tell us what is happening during treatment resistance. We need artificial intelligence to integrate this massive amount of data and provide the answers we are looking for with statistical power and high confidence. In summary, there is a lot of room for improvement in clinical trial design, but I believe the community understands that we are moving in the right direction. We have made a tremendous amount of progress in the last decade, and I feel that we will continue doing so as newer technologies and AI tools emerge.
Thank you so much for joining me today, Karina, and for sharing your knowledge and experience on precision oncology. And to our listeners, you can find more In the Zone features at www.bioanalysiszone.com.
About the speaker
Karina Eterovic, PhD
R&D Director, Oncology and Sequencing
Eurofins-Viracor (KS, USA)
Karina is the R&D Director for Oncology and Sequencing at Eurofins-Viracor, where she leads a team that develops and validates tests supporting both cancer care and drug development. Prior to this, Karina was an Associate Professor at MD Anderson Cancer Center (TX, USA), where she led the Cancer Genomics Laboratory within the Institute for Personalized Cancer Therapy. Karina has over 12 years of experience in cancer genomics and precision oncology and has many publications in the field.
This interview is a part of our In the Zone feature with Eurofins Viracor on clinical development for oncology. For more expert opinions on this topic, visit our feature homepage: