Trends, tools and transformation: the evolution of flow cytometry in bioanalysis


In this podcast episode, sponsored by ICON, we explore the advancements and future of flow cytometry in bioanalysis. Joined by ICON experts Thanh-Long Nguyen and Henko Tadema, the discussion covers the evolution of the field, the assays and instrumentation used at ICON, and emerging trends like AI-driven data analysis. The guests share insights into the growing complexity of assays and reporting requirements, offering valuable perspectives on the future of flow cytometry and its critical role in bioanalysis.

Podcast transcript

00:08 Intro

[00:08] Emma: Hello and welcome to our Spotlight podcast on flow cytometry, sponsored by ICON. I’m your host Emma Hall, and today I’m joined by Thanh-Long Nguyen, Manager of Bioanalytical Services at ICON, and Henko Tadema, Associate Director of Science at ICON. Thank you both so much for joining us.

00:26 Please can you introduce yourselves and share a bit about your flow cytometry work at ICON?

[00:27] Long: Thank you Emma and thank you to Bioanalysis Zone for giving us the opportunity to speak to you and the audience about a passion of ours. As you said my name is Long Nguyen, I am a Manager of Bioanalytical Services at ICON Bioanalytical Labs located in Lenexa, Kansas. I lead the team that develops the flow cytometry assays here for our global ICON services.

My personal background is in immunology, where I did my graduate work and postdoc studying T cell biology, and I’ve been performing flow cytometry assays for over 15 years in both the academic and CRO setting.

[01:02] Henko: And my name is Henko Tadema, I’m an Associate Director of Science based in Assen, the Netherlands. I have a comparable background to Long, also focusing on immunology and I was actually looking into mechanisms that underlie autoimmune disease.

We did a lot of cell-based assays with flow cytometry analysis at that time, and I was really lucky, after finishing my PhD, to be able to join ICON within the flow cytometry group here in Assen. Actually, ICON had already been performing flow cytometry in clinical trials for a bit more than 5 years. Overall we have about 20 years of flow cytometry experience across our different laboratories.

So, when I joined we had just had our first eight-color instrument installed here in Assen. A lot of the work we did was connected to the Phase I clinic, which is located pretty close to our laboratory, and a lot of the studies we did used samples from those Phase I trials.

I really remember how exciting it was to see that with our technology we were able to demonstrate the efficacy of a compound rather than just measuring the concentration of the drug, for example. We're really able to see the biological effect of a compound. Obviously you need to have the right assay for that.

But there's one clear example that made me realize I made the right choice in moving from academia to the CRO world, where we were running a receptor occupancy assay. The receptor was on monocytes, and our data showed that, in contrast to the pharmacokinetic data, there was still occupancy: the receptors were still occupied while the pharmacokinetic data suggested that the compound had already been cleared from the system in those volunteers in that study.

So, obviously we had to take a deep dive into the data and make sure our data were right. They really were, and we were literally showing that the receptor stayed occupied a lot longer than circulating drug could be measured. So it was a perfect example of why you need multiple pieces of information, and it really highlighted the value of flow cytometry in these types of studies. That early information on the efficacy of the drug was actually really important for supporting optimized dose selection, so this assay even moved along to later-phase studies in which the drug was administered to different patient groups.

Unfortunately, in those studies they did not find sufficient efficacy, so the drug was killed after a couple of trials. Yeah, that's also part of our daily business, I guess. So we still do those Phase I studies, but things have moved a little toward more global patient trials. Within ICON we have six laboratory locations where we perform flow cytometry. I'm based in Assen and Long is based in our Lenexa laboratory, which was established about 10 years ago. We have a lot of contact with each other, because flow cytometry is one of those techniques where we need frequent interaction: we are running the same assays within the same trial across our labs, so we need to make sure that we stay harmonized and do our work as best we can.

[04:10] Emma: Amazing, thank you Henko.

04:11 Long, could you expand on your experience working at ICON Lenexa? How have you seen the field of flow cytometry evolve during your time there?

[04:11] Emma: Long, if we come to you, could you expand on your experience working at ICON Lenexa? How have you seen the field of flow cytometry evolve during your time there?

[04:22] Long: Yeah, so I joined ICON Lenexa about 4 years ago, and what I would say amazes me the most in my 4 years here at ICON is the explosion, the significant increase, in the complexity of the assays that we're running. And that just speaks to how fast the medical and pharmaceutical field is moving, where you need more complex assays to really get at the questions and answers that are needed to develop drugs.

So, at the time when I arrived, the majority of the requests we were getting were things that could fit on an eight-color instrument. However, we quickly escalated to assays in the double digits, and now we have spectral flow cytometers where, at this point, really the sky's the limit, or the reagents are the limit. So it just really highlights how important flow can be, how fast the technology has moved, and how important it is for ICON and the industry to be able to move along with it.

05:21 Can you share some more details about the type of assays and instrumentation you use at ICON?

[05:21] Emma: Yeah thank you so much. And could you both share some more details about the type of assays and instrumentation that you use at ICON?

[05:28] Long: I would say the majority of the types of assays that we currently do is immunophenotyping, followed closely by target occupancy. Most of the phenotyping we're focused on is T cells and regulatory T cells, but we also do B cells, NK cells and monocytes. And recently there has been a shift toward more requests for B-cell and monocyte subsetting.

So, you really see an ebb and flow in what is being asked, depending on the drug targets. We have both conventional and spectral flow cytometers. Our conventional instruments can handle between eight and 25 colors and they're spread across our six labs.

And like Henko said, we recently added Cytek Aurora™ spectral instruments, which are currently located in the US and EU. Our eight-color Canto instruments are fully aligned throughout ICON, so you can really depend on getting equivalent data across the sites; we will go further into how we normalize and harmonize our instruments in a future Spotlight. Our goal is to have these instruments available globally, so we're aiming to get a Cytek Aurora™ into our Singapore lab so that we have global spectral coverage as well as conventional coverage.

[06:43] Emma: Thank you so much Long.

06:45 As the field of flow cytometry continues to advance, what trends are you seeing in terms of the types of assays?

[06:45] Emma: As the field of flow cytometry continues to advance, what trends are you seeing in terms of the types of assays?

[06:53] Long: So, I would say the bread and butter of flow cytometry is still immunophenotyping, and like I said previously, it's what we do the most. But as we continue to expand our knowledge of the human immune system and our drugs become more advanced, we're able to target more specifically, and our understanding of those targets needs to expand. This leads to deeper immunophenotyping. While the standard TBNK is still useful, the custom assays we're being asked to develop are between 12 and 18 colors on average, and that's just standard to look at all the different subsets.

So what do I mean by that? That is, rather than looking at just T cells in general, looking at CD4 and CD3, there's a need to look at naive, central memory and effector memory cells. An example for monocytes is, rather than looking at CD14-positive monocytes, there's a need to break them down into their classical, intermediate and non-classical subsets. So the questions that need to be answered are really becoming more targeted.
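To make the deeper subsetting concrete: these subsets are commonly defined as simple Boolean combinations of markers (CD45RA/CCR7 for T-cell memory subsets, CD14/CD16 for monocytes). The sketch below illustrates that idea only; the column names and DataFrame layout are assumptions for illustration, not ICON's validated gating strategy.

```python
# Illustrative only: Boolean "gating" on a per-event DataFrame of marker calls.
# Column names and thresholds are assumptions for this sketch.
import pandas as pd

def t_memory_subsets(cd4: pd.DataFrame) -> pd.Series:
    """Label pre-gated CD3+CD4+ events using the common CD45RA/CCR7 scheme."""
    ra, ccr7 = cd4["CD45RA_pos"], cd4["CCR7_pos"]
    labels = pd.Series("unassigned", index=cd4.index)
    labels[ra & ccr7] = "naive"
    labels[~ra & ccr7] = "central_memory"
    labels[~ra & ~ccr7] = "effector_memory"
    labels[ra & ~ccr7] = "temra"
    return labels

def monocyte_subsets(mono: pd.DataFrame) -> pd.Series:
    """Label pre-gated monocytes as classical / intermediate / non-classical."""
    cd14_hi, cd16 = mono["CD14_bright"], mono["CD16_pos"]
    labels = pd.Series("unassigned", index=mono.index)
    labels[cd14_hi & ~cd16] = "classical"
    labels[cd14_hi & cd16] = "intermediate"
    labels[~cd14_hi & cd16] = "non_classical"
    return labels
```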

Now, beyond just basic immunophenotyping, I would say we're seeing a lot more requests and need for flow cytometry to inform on the biological activity and the mechanism of action of different drugs. So there's been an increased need to look at the increase or decrease of either activation markers or target receptors of drugs. We're seeing a lot more complex receptor occupancy assays as well, where we see molecules and drugs that are either bispecific or trispecific.

And we recently had to deal with a challenge in which a molecule binds to its target in a pH-dependent manner. So the way we target things has become more complex, and the way we have to go after the answers is more complex. Additionally, we've seen an increased need to look at signaling pathways, so whether or not we see an increase or decrease in phosphorylated proteins such as the STATs or the ERK pathway. And lastly, when you're looking at signaling pathways or similar readouts, we have to look at intranuclear and intracellular proteins. So the complexity of the assay extends to the types of buffers and the level of permeabilization you need to penetrate the cells and still acquire a good signal.
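As a rough illustration of the receptor occupancy readouts discussed above: one common, simplified approach infers occupancy from the drop in signal of a "free receptor" detection antibody that competes with the drug, normalized to a pre-dose baseline. The sketch below assumes that approach and hypothetical MFI values; it is not ICON's validated RO method.

```python
# Simplified "free receptor" receptor-occupancy calculation. The baseline-
# normalized formula and the example MFI values are illustrative assumptions.
def percent_receptor_occupancy(free_mfi_t: float, free_mfi_baseline: float) -> float:
    """Return %RO at time t from free-receptor MFI relative to pre-dose baseline."""
    if free_mfi_baseline <= 0:
        raise ValueError("Baseline MFI must be positive")
    occupancy = 100.0 * (1.0 - free_mfi_t / free_mfi_baseline)
    return max(0.0, min(100.0, occupancy))  # clamp to a plausible 0-100% range

# Example: free-receptor signal drops from 5,000 (pre-dose) to 600 -> ~88% occupied
print(percent_receptor_occupancy(600, 5000))
```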

[09:09] Henko: And maybe I can add another trend which is, in a way, also connected to complexity; we have more capability, more technical possibilities. I would like to mention cell and gene therapy trials as well. By definition, flow cytometry is an ideal technique to use in cell therapy studies, and this has required us to implement some more off-standard assays. You could also say they have become more standard already, but compared to traditional phenotyping, cell therapy monitoring has definitely been a bit off-standard, because you're basically performing a pharmacokinetic assay by flow cytometry.

And connected to those cell therapy studies, we've also been working on cell-based detection of anti-cell-therapy antibodies, so basically combining flow cytometry with immunogenicity. It was really interesting to do; we were lucky enough to have people with knowledge of both flow cytometry and the immunogenicity side of things, so that worked out really well. But in addition to the complexity, I think those studies are especially interesting because they require very long-term follow-up of patients. When patients are treated with a cell or gene therapy, it may require up to 10 or sometimes even 15 years of follow-up, which also means that we need to make sure those assays are kept up and running during that time. So there is some more long-term planning to consider around the instrumentation: we're always trying to renew and stay up to date, but sometimes an assay is running on a current instrument and you need to keep that up and running as well. And the other thing is that you're working according to best practices and certain guidance, and during those years that may change a little as well.

So yeah, in summary, I think we can say that the overall development in flow cytometry and in drug development is well summarized by increased complexity, which means that we also need to work with more in-depth analysis and more complex ways of developing and optimizing our assays.

[11:27] Emma: Amazing, thank you both so much for those insights.

11:30 Are there any additional needs or requirements you are seeing?

[11:30] Emma: Are there any additional needs or requirements that you’re seeing?

[11:34] Long: Yeah, I would say, in general, a need and requirement that I'm seeing is really proximity: having your flow cytometers close to where you're running your clinical trials. As we were saying, more complex assays are needed, whether it's getting at the mechanism of action, needing to perform some sort of stimulation, or handling of the samples, and it's important to have your samples close to where you can analyze them. At ICON, we have clinical research units that are co-located with our bioanalytical labs, so we can draw the blood and then quickly perform whatever is needed to get that important readout. I think that's a very powerful thing as assays get more complex.

Additionally, as you move on to Phase II and Phase III global trials, classically, flow cytometry assays have very short stability windows. So it's very important that you have global sites that are at least close to where your samples are being drawn: sites in Europe, sites in the Americas and sites in the Asia-Pacific region. I think it's very important to consider where your labs are located relative to the clinical sites.

[12:57] Henko: Yeah, and it almost seems, at least that's what we've been seeing over the past 1 to 2 years, that we get more and more requests for off-the-shelf assays. Where we used to have a lot of dedicated, focused discussions with sponsors about individual assays, it is more and more a situation where sponsors would like us to have the assay basically available off the shelf, saving them some time and cost on development.

So this is clearly something we're currently adapting to, and I must say that the spectral, high-parameter instruments we have are helpful in that. They enable us to develop larger backbone panels that we can offer to multiple clients for multiple types of studies, while still leaving room for specific drop-in markers to add to those panels. It's important to mention that for receptor occupancy, or very specific assays, we will always require dedicated development and validation. But clearly, offering off-the-shelf assays is an additional requirement we need to work with.

[14:07] Emma: Yeah, absolutely.

14:08 With the field increasing in complexity and the more in-depth reporting requirements, what are your thoughts regarding the future of flow cytometry data analysis and handling, and how might the use of artificial intelligence come into this?

[14:08] Emma: With the field increasing in complexity and the more in-depth reporting requirements, what are your thoughts regarding the future of flow cytometry data analysis and handling, and how might the use of artificial intelligence come into this?

[14:21] Henko: This is an extremely interesting and relevant topic at the moment, I guess not only for us but for the whole bioanalytical and drug development world. Everybody's looking at artificial intelligence: how can this be of any help, and how can we speed up the development of new drugs? I mean, there are companies that are really focusing on this, right? They're using modeling to design new drugs. In our world, what we clearly see with these new developments is the increased size of the data, but also the number of markers that we add to our panels.

We've already spoken about it a couple of times during the podcast, and we're just getting to the point where it becomes too time consuming to do everything with the conventional way of gating, so basically manually, in a 2D data representation. So yeah, we need to adopt the possibilities that are out there, and there are algorithms and machine learning tools available to us; we just need to start making use of them. One of the White Papers we want to publish as part of this Spotlight is going to go into more depth on this, but I just want to mention a couple of things here. What I think we're going to be doing in the next couple of years is moving towards clustering rather than gating. What we've been doing until now is manually selecting our subsets of interest based on 2D data representations. Human beings already struggle when we start plotting data in 3D, let alone in higher dimensions, and the way we're going to be looking at our data is more and more as a large data table with 20, 25, 30, 35 dimensions. And that's where you need algorithms to handle the data for you.

And the good thing is that they can do that very well for you. So rather than doing the manual gating, we can rely on algorithms. A quite well-known example is FlowSOM, which is used a lot for spectral data handling, for example. Clustering basically provides you with unique subsets in your data: it uses all the dimensions, all the markers that are available in your data, and gives you back clusters. The next thing you then have to do is identify the clusters, so there's still a piece of work there, and there are different ways of doing that. We hope that's something we can automate as well in the future; a work in progress, I would say, but already the whole clustering part is a big win.
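As a purely illustrative sketch of the cluster-then-annotate workflow described here, the snippet below mimics the FlowSOM idea (over-cluster events, then merge the cluster centroids into metaclusters that still need expert identification) using scikit-learn as a stand-in for a dedicated SOM implementation. The marker matrix, cluster counts and arcsinh cofactor are arbitrary assumptions.

```python
# Minimal sketch of FlowSOM-style "cluster then metacluster" analysis, using
# scikit-learn as a stand-in. Marker counts and parameters are illustrative.
import numpy as np
from sklearn.cluster import KMeans, AgglomerativeClustering

def cluster_events(expression: np.ndarray, n_nodes: int = 100, n_meta: int = 20,
                   cofactor: float = 150.0) -> np.ndarray:
    """Return one metacluster label per event from an (events x markers) matrix."""
    # Typical cytometry preprocessing: arcsinh transform compresses the dynamic range
    x = np.arcsinh(expression / cofactor)

    # Step 1: over-cluster events into many small nodes (the SOM grid in FlowSOM)
    nodes = KMeans(n_clusters=n_nodes, n_init=10, random_state=0).fit(x)

    # Step 2: merge node centroids into a manageable number of metaclusters
    meta = AgglomerativeClustering(n_clusters=n_meta).fit_predict(nodes.cluster_centers_)

    # Map each event to its node's metacluster; these still need expert annotation
    return meta[nodes.labels_]

# Example with random data standing in for 50,000 events x 30 markers
labels = cluster_events(np.random.lognormal(mean=5, sigma=1, size=(50_000, 30)))
```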

In addition to clustering, there are other tools available as well. I expect we will be using those algorithms in a way to basically get better data. There are algorithms, for example, to normalize your data, so potentially you can remove some variability, for example between analysts, between instruments or even between lab locations. Like Long explained earlier, we're doing a lot to harmonize as much as we can, but we're still different people at different locations performing our work on different instruments. So if we can use algorithms to improve that data a little, that can already be a nice win, I would say.
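One simple way such normalization can work, sketched below under the assumption of a linear per-batch rescaling, is to align chosen percentiles of each marker to a reference batch. Dedicated cytometry tools (CytoNorm, for example) handle this far more carefully; the sketch is illustrative only and not a description of ICON's harmonization process.

```python
# Illustrative batch-effect normalization: rescale each batch so chosen
# percentiles of every marker line up with a reference batch. The percentile
# choices and linear mapping are assumptions for this sketch.
import numpy as np

def align_to_reference(batch: np.ndarray, reference: np.ndarray,
                       lo: float = 5.0, hi: float = 95.0) -> np.ndarray:
    """Linearly map each marker column of `batch` so its lo/hi percentiles
    match those of `reference` (both are events x markers arrays)."""
    b_lo, b_hi = np.percentile(batch, [lo, hi], axis=0)
    r_lo, r_hi = np.percentile(reference, [lo, hi], axis=0)
    scale = (r_hi - r_lo) / np.maximum(b_hi - b_lo, 1e-9)  # avoid division by zero
    return (batch - b_lo) * scale + r_lo
```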

And what algorithms can also do with the data, which we would have more difficulty doing manually, is some cleaning up. For example, taking into account the flow rate during certain moments in the acquisition of the sample, or taking into account the dynamic range of your detectors and removing cells or events that were measured right at the edges of the known detector margins.
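To illustrate those two clean-up steps, here is a minimal sketch that removes events saturating the detector range and drops acquisition-time windows with an unstable event rate (a simple proxy for flow rate). The column names, detector maximum and 3-sigma rule are assumptions; dedicated QC tools exist for this in practice.

```python
# Illustrative acquisition QC: drop events at the detector limits and flag
# time bins with an unstable event rate. Thresholds are assumptions.
import pandas as pd

def clean_events(events: pd.DataFrame, detector_max: float = 262_143.0,
                 time_col: str = "Time", n_bins: int = 100) -> pd.DataFrame:
    markers = [c for c in events.columns if c != time_col]

    # 1) Remove margin events measured at the edges of the detector range
    in_range = ((events[markers] > 0) & (events[markers] < detector_max)).all(axis=1)

    # 2) Keep only time bins whose event rate stays close to the median rate
    bins = pd.cut(events[time_col], bins=n_bins)
    rate = events.groupby(bins, observed=True).size()
    stable_bins = rate[(rate - rate.median()).abs() <= 3 * rate.std()].index
    stable = bins.isin(stable_bins)

    return events[in_range & stable]
```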

So all in all, you can add those tools to data evaluation or data analysis pipelines and automate a lot more, and that will really help us handle the larger data sets that we're currently seeing, which will only grow further in the future. Like I said, we'll go into more depth and show some nice examples in the upcoming White Paper during the Spotlight.

[18:41] Emma: Perfect thank you Henko. Long would you like to add anything?

[18:45] Long: Yeah, I think there's one bigger topic that I would like to bring up and maybe leave with the audience to think about: harmonization and standardization of flow cytometry. I know that's a very broad term, but I think it is very important, as we move to larger datasets and as we start to use AI or machine learning to assist with our analysis and interpretation, that we really start to think about this as we design, develop and analyze our data.

So what do I mean by harmonization and standardization? As you would expect, it can be on many different levels, but at the very basic level it's really what we call a population of cells and how we get to those populations. As you know, there are dozens of different instruments that people use, whether from companies such as BD Biosciences, Sony or Cytek, so how can we generate data and consistently compare those data sets across different companies or different parts of the industry? And then it gets back to how we even define a certain population. Let's say we want CD4 T cells: some people might call them CD45 positive, CD3 positive, CD4 positive, but some people might require TCR alpha-beta to distinguish them.

And then another big topic is how we define the gating tree: is there a consistent way we should go about it? If you think about it, the strength and effectiveness of the models or algorithms that we use is completely dependent on the information and data that we put in as we train them. So bad information in is bad information out. To really maximize the effectiveness there, I would like to highlight a few groups I've seen recently in the community that are really putting effort into pointing a direction, as a reference point for people to start from.

The first is the National Institute of Standards and Technology. They have a flow cytometry standards consortium that ICON is a part of, and their goal is to develop reference standards that people can use to calibrate and measure against, including biological references or reference data. These could really serve as a point of reference and a way for us to start to compare data.

The other group is called SoulCap, which is a group of volunteers whose passions align around establishing a standard nomenclature; their aim is to establish a globally accepted standard for identifying immune cell populations through cytometry. This is a group I recently started to get involved with, and I think it's very important, again, to have a place where someone is trying to say "this is what we should call things", and at least at some point reference it, so that we can start comparing data.

One other thing I would like to highlight is a group of leading T cell biologists in the academic world who all got together and put out a paper, in I think Nature Reviews, highlighting and recommending nomenclature for T cells, so when you start to read and study this, you can understand what you might call a T memory cell, for example.

So there are various efforts to try to harmonize and standardize these things, and I guess what I would encourage listeners to do, when they start to do flow cytometry or implement new panels, is really look towards standardization and come up with a way to align with some of these groups or, at a minimum, align within your own lab; then you can align within your own company and across your industry. So again, I think the key point is that the information we get out is only as good as the information we put in.

23:04 BZ outro

[23:04] Emma: Yeah, thank you both so much for sharing your experience and insights. Our audience I’m sure will find it highly valuable. For our audience, you can find more resources on flow cytometry on our Spotlight page, which makes up the rest of this feature and of course, you can also visit ICON for even more information. Thank you and goodbye!

 

About the Speakers:

Thanh-Long Nguyen
Manager of Bioanalytical Sciences
ICON

Thanh-Long Nguyen is a Manager of Bioanalytical Science at ICON. He holds a Doctorate degree in the field of Immunology from Saint Louis University School of Medicine (MO, USA). Long’s field of expertise includes flow cytometry and biomarkers and his role currently includes leading the flow cytometry method development team in Lenexa, Kansas, as well as providing scientific support to the business development and operational teams.

Henko Tadema
Associate Director of Bioanalytical Sciences
ICON

Henko Tadema is an Associate Director of Bioanalytical Science at ICON. He holds a Doctorate degree in the field of Immunology from the University of Groningen, The Netherlands and his field of expertise includes flow cytometry and other cell-based assays. Henko is responsible for method development and validation of flow cytometry and other cell-based methods at ICON and supports the operational science and business development teams of ICON’s bioanalytical laboratories.

 

Enjoy this podcast? Explore the full Spotlight feature on flow cytometry here.


This feature was produced in association with: