The new frontier: using LC–MS/MS for quantitative assays to support gene therapies


In the third episode of The New Frontier podcast, we’re joined by Daniel Schulz-Jander, Senior Director of Mass Spectrometry Bioanalysis at QPS Netherlands (Groningen). Daniel explains how he uses immunoprecipitation and immunoaffinity LC–MS/MS techniques in his work on macromolecular pharmaceuticals like gene therapies, as well as their benefits and challenges. Daniel also covers bottom-up, middle-down and top-down approaches and their suitability for his work in clinical support versus research and discovery.


Podcast transcript

00:01 Intro

[00:07] Ellen Williams: Hello and welcome to the third episode of the Bioanalysis Zone podcast on Cell and Gene Therapies sponsored by QPS Holdings. I’m your host, Ellen Williams. And today, I’m joined by Daniel Schulz-Jander, Senior Director of Mass Spectrometry Bioanalysis at QPS Netherlands. Welcome, Daniel.

[00:25] Daniel Schulz-Jander: Thank you. Thank you for giving me the opportunity to talk to you today.

00:28 Could you start by explaining how you use LC–MS/MS techniques in your work for quantitative assays relating to macromolecular pharmaceuticals like gene therapies?

[00:28] Ellen Williams: So, Daniel, your experience is primarily focused on LC–MS/MS, specifically immunoprecipitation and immunoaffinity LC–MS. Could you maybe start by explaining how you use these techniques in your work for quantitative assays relating to macromolecular pharmaceuticals like gene therapies?

[00:46] Daniel Schulz-Jander: Absolutely. We utilize this technology for a wide variety of problems in the analysis of macromolecular entities: antibody–drug conjugates, peptides, proteins, antibodies, depending on the challenges that we face and the questions that need to be answered. Typically, we focus on utilizing the immunoaffinity technology to purify and concentrate the samples, thereby getting a pure extract for analysis with the mass spectrometer, separated from the respective matrix. That allows us to be more specific and to reach lower detection limits for the analysis of these large molecular entities.

01:32 There are several ways in which you can do LC–MS/MS including bottom-up, middle-down and top-down analyses. Can you describe the methods you use and when it’s appropriate to use each of them?

[01:32] Ellen Williams: So, from my understanding, there are several ways in which you can do LC–MS/MS, including a bottom-up approach, a middle-down or top-down analyses. Could you describe the methods that you use and when it’s appropriate to use each of those?

[01:47] Daniel Schulz-Jander: Absolutely. So, bottom-up is the technology we use most of the time. We digest the protein or the large compound into smaller peptides that can then be analyzed by the mass spectrometer. The reason we do that is, for one, sensitivity, which is a big challenge with top-down, and also to be very specific on the molecule we’re looking at. This obviously has some challenges: as you chop up the large molecule, you lose certain information. You are very specific on one, what we call a signature peptide, to analyze the entity. So, there are certain aspects that you miss, like post-translational modifications and the like, that can no longer be monitored, perhaps because the reference compounds [don’t] have them, or because we are selecting a different signature peptide. But that’s the most commonly used methodology: the protein is highly digested, typically with trypsin or other protease enzymes, to get to smaller molecules.

If you look then at middle-down, which, as the name indicates, sits in the middle, we utilize that to capture a little bit of the uniqueness of the larger molecule. The molecule is not completely chewed up or chopped into smaller peptides; [it’s] a little larger, whereby we can monitor some of the post-translational modifications or unique features that come with the different peptides or proteins that we’re looking at.

And obviously, there are some unique challenges, because the digestive enzymes typically don’t know when to stop. Controlling that presents a lot more challenges from an assay, process-control and reproducibility perspective. [There are] some unique enzymes in use that are a little more beneficial in these respects, but at the end of the day, all these parameters (temperature, pH, time, which enzyme is being used) need to be very closely controlled to get reproducible data. Middle-down is also used in the lab.

Top-down is really more on the research side, because you introduce the whole molecule into the mass spectrometer and do mass spectrometry on it. You can get a lot of information that way, but it’s really more on the research side than what my laboratory does, which is more on the clinical support [side], where a lot of information is already available and we’re running a much larger number of samples.

So, top-down becomes a lot more investigative, interpreting data, and the sensitivities of the top-down methodologies are also a lot lower. For a clinical application, which is the whole focus of our work, the top-down methodologies are much less utilized, if at all.

04:50 Could you comment on the benefits and challenges of these methods supporting gene therapy projects?

[04:50] Ellen Williams: You did touch there on some of the challenges of those approaches, but could you maybe talk to me a little bit more about some of the benefits for supporting specifically gene therapy projects?

[05:01] Daniel Schulz-Jander: Yes. So, for gene therapy projects we do a lot of oligonucleotide analyses. With these methodologies come some challenges that are inherent to mass spectrometry, and the first thing that comes to mind is the internal standard. A lot of times we like to use stable-label internal standards, and for shorter peptide sequences they can sometimes be synthesized. For larger molecules, if you have very high-molecular-weight proteins in the gene space, it becomes very, very hard to get a fully stable-label internal standard. So you then use a shorter sequence that you can add post-digestion as the internal standard. That means a lot of steps that are critical to assay performance, like the pull-down with the immunoprecipitation and the digestion, [are] not captured by the internal standard. So, variabilities and other variance that come with these very critical steps are no longer captured, and that introduces a lot more variability and challenges there.

For the oligonucleotides, a lot of times there is no stable-label internal standard available, so we often use analogs. Analogs behave similarly, but not exactly the same, and you get crosstalk. These molecular entities are highly charged, which provides extra challenges: even with an analog just a couple of nucleotides [of] larger molecular weight, there are still a lot of possibilities for crosstalk that come with these internal standard challenges.

So, the internal standard is certainly one of the big aspects. Another big challenge is that you need a very good antibody or entity to pull down your gene therapy drugs, so to speak. Generating those [and] making them very specific is a lot of work, and these are critical reagents that need to be very clearly defined and reproducibly made so that you can support a larger clinical trial. So, those are very critical and provide some real challenges.

Moreover, depending on the binding affinities, these reagents can provide some challenges overall, because the affinities could shift one way or another and thereby influence the data that you obtain.

Post-translational modifications, as I mentioned before, depending on the digestion, we may miss if we don’t look for them. That is both the benefit and the downside of mass spectrometry: it is very specific, but you’re looking for something very specific, and you need to know beforehand what you’re looking for. So, some of those aspects may not be captured if your mass spectrometry experimental setup is very specific. For that reason, we can sometimes use time-of-flight instruments, where we monitor a much larger range of masses all the time, and thereby we can perhaps capture some of those post-translational modifications, as well as the presence of metabolites that come along.

08:14 How do you see these methods evolving in the future?

[08:14] Ellen Williams: How do you see these methods evolving in the future?

[08:17] Daniel Schulz-Jander: I think there’s a lot of work being done on the computational side, with deep learning technology and AI, to provide a lot more predictability on what fragments are there and what sequences should be looked at, especially for some of the initial digestions, which are really unique, so we can hone in on the specific masses to look for and be a little more predictive. Overall, technology always evolves, so our tools and toys, so to speak, hopefully continue to evolve to provide better sensitivity and more mass accuracy.

More data can be acquired in a shorter time, so that more mass transitions can be monitored at the same time. The development of an antibody for the pull-down, for the immunocapture, is something that hasn’t really evolved that much. So, there’s a lot of opportunity there to become more specific and to have better and faster turnaround in developing an antibody for immunocapture and precipitation.

And then the internal standard for the mass spectrometry is kind of an Achilles’ heel. Having the full-protein, stable-label internal standard available at reasonable cost is something a lot of effort is being put into, to be more specific and more accurate over time. Various labs are working on generating the full protein with a stable label.

And then I think, ultimately, if we look at how we currently focus on monitoring these gene therapy drugs, the focus is very similar to what we do for small molecules: we look for the drug that we dose and we monitor that. But in gene therapy, it is not the drug that we give that causes what we’re really looking for. We’re looking for the peptides, the proteins, that are translated after the dosing of the RNA or the gene therapy drug.

So, I think there will be more and more of a shift from looking at what the gene therapy drug levels are to a focus on what proteins are actually being made because we dosed a gene therapy drug. We dose the drug, then we get transcription and translation, and ultimately it is these resulting proteins, whose expression we want to facilitate, elevate or lower, that we would monitor. There’s probably a lot more work going in that direction.

10:58 Managing the data generated from LC–MS/MS experiments can be complex. What tools do you use for data analysis and interpretation in your lab? Do you have any top tips?

[10:58] Ellen Williams: So, managing the data generated from LC–MS/MS experiments can be very complex. So, what tools do you use for data analysis and interpretation in your lab and do you have any top tips?

[11:11] Daniel Schulz-Jander: It’s a bit of a challenge for a lab like mine. [We work at a] little later stage in the development process, so a lot of knowledge is already present. We typically use the software provided by our equipment manufacturers for deconvolution of the mass spec data, so we can really see: do we have the full peptide? Do we have the large molecule? And analyze that data. So, it’s less the deep learning and very deep data analysis that you would have in an early phase.

Certainly, we use various software packages for in silico analysis that are in the public domain, like MS or similar, as well as the various software packages from the NIH that are available on the web. But typically, we are very focused on what we’re looking for. It’s not a massive dataset like you have in earlier development, where folks may look at OpenSWATH or Skyline or similar packages, or some of the commercial software from various vendors that are utilized for deep data mining when you really have no clue what you’re looking for.

12:23 As the field of CGTs is expanding rapidly, how do you stay updated on the latest developments and integrate this new knowledge into your practices?

[12:23] Ellen Williams: Great, thank you. And then finally, as the field of cell and gene therapies is very rapidly expanding, how do you usually stay updated on the latest developments and integrate this new knowledge into your practices?

[12:35] Daniel Schulz-Jander: Typically, we go out and mine the scientific literature, [we] go to Bioanalysis and similar, very targeted publications, [to] look where the trends are and what technology there is. We talk with vendors on a routine basis to see what new technology developments they come up with. And we attend various conferences like the EBF (European Bioanalysis Forum) and similar to exchange knowledge with our peers and colleagues at other organizations and really learn where the trend is. [Our] primary source of information is the scientific literature that we [get] from the publications.

13:13 Outro

[13:13] Ellen Williams: Thank you so much, Daniel, for joining me on the podcast. It has been an absolute pleasure to speak with you. Have you got any final comments to leave us with?

[13:20] Daniel Schulz-Jander: No. I enjoyed this as well and thank you very much for giving me the opportunity to talk to you today.


About the speaker:

Daniel Schulz-Jander
Senior Director, Bioanalysis (Mass Spectrometry)
QPS Netherlands (Groningen, Netherlands)

Daniel received his Dipl. Chem. from the University of Kassel (Kassel, Germany) in chemistry and his Doctor rerum naturalium from the Technische Universität München (Munich, Germany) in environmental chemistry, followed by a post-doctoral fellowship in pesticide metabolism at UC Berkeley (CA, USA). Since then, Daniel’s scientific passion has been bioanalysis. Starting in biotech at Ligand and Arena Pharmaceuticals (CA, USA), he then moved to Medtronic (CA, USA), where he built a bioanalytical team specializing in drug-in-tissue analyses. Since February 2022, Daniel has been the Senior Director of Mass Spectrometry Bioanalysis at QPS Netherlands, where his team supports small and large molecule bioanalysis with LC–MS/MS as well as ICP-MS.

If you enjoyed Daniel’s episode, you can listen to his thoughts on LBA and LC–MS/MS hybrid assays here.

 

    Listen to other episodes in this series here.

    Be the first to hear about the next episode by signing up for email notifications!

 


In association with: