Webinar Q&A follow-up: Immunogenicity testing for gene therapy


Thank you to everyone who attended our webinar, ‘Immunogenicity testing for gene therapy’. Below are the speaker’s responses to the questions posed during the live event. We hope this is a useful resource, and we thank our attendees and our speaker, David Escobar, for his insight.

Meet the speaker

David Escobar
Associate Director, Ligand Binding Assay Technologies
Navigate BioPharma Services, Inc. (a Novartis Subsidiary) (CA, USA)

Originally from La Paz, Bolivia, David emigrated to the USA in 1988. He received his BS in molecular biology from San Diego State University (CA, USA) in 1998. He has been fortunate to interact with and learn from talented scientists from all walks of life. He joined the bioanalytical world in 2006, conducting pharmacokinetic and immunogenicity evaluations. Eleven years and countless validations later, he joined Navigate BioPharma Services in 2017, where he is currently the manager for the ligand binding group.

You mentioned that screening and confirmatory steps add little value for detecting pre-existing antibodies in GTx. Could you elaborate on the data or rationale supporting this claim?

For monitoring purposes (i.e., post-dose of GTx), there is ample literature indicating seroconversion; post-dose titers in the millions have been reported. Given that seroconversion is all but certain, the screening and confirmatory steps create an additional testing burden that adds little value. For enrolment purposes, cross-reactivity among serotypes has been reported. To address this concern, specificity of the signal response can be established during analytical validation via the classic confirmatory step (or through immunodepletion) without implementing that step in the routine testing workflow.

How do you see the tiered approach evolving for gene therapy applications, especially in light of the ‘direct-to-titer’ strategy?

Context of use is key. The ‘direct-to-titer’ strategy relates to a standard cut point for enrolment. In this case, the conservative approach would leverage the analytical cut point to determine if a sample is positive or negative. If this is the enrolment criterion, then there is no need to titrate. However, if a Sponsor is looking to assess the level of positivity (titer) at which a GTx therapy would still be efficacious, then, one could argue, skipping the screening assay and proceeding directly to titer is the quickest route to obtain the level of positivity. For monitoring purposes, titers have been reported in the millions post GTx dosing. In this case, it is hard to make an argument to screen a GTx post-dose sample and then titrate further. In the latter two situations, the screening dilution, which is the minimum required dilution (MRD) of the assay, is also included.
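As a worked illustration of the direct-to-titer arithmetic, here is a minimal Python sketch (hypothetical names and values; a serial two-fold dilution series starting at the MRD is assumed): the endpoint titer is reported as the reciprocal of the last dilution whose signal-to-noise ratio still meets the cut point.

```python
def endpoint_titer(sn_by_dilution, cut_point, mrd=10, fold=2):
    """Return the endpoint titer: the reciprocal of the last serial
    dilution (MRD, MRD*fold, MRD*fold^2, ...) whose signal-to-noise
    ratio meets or exceeds the cut point; None if never positive."""
    titer = None
    dilution = mrd
    for sn in sn_by_dilution:
        if sn < cut_point:
            break
        titer = dilution
        dilution *= fold
    return titer

# A positive sample stays above the cut point through the third
# dilution (1:40), so the reported titer is 40.
print(endpoint_titer([9.8, 4.1, 1.3, 0.9], cut_point=1.1))  # → 40
```

Going direct-to-titer simply means running this dilution series on every sample, rather than gating it behind a screening result at the MRD.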

Given the trade-offs between TAb and NAb assays, how do you recommend balancing sensitivity with functional relevance in clinical trial settings?

Both characteristics – sensitivity and functional relevance – for NAb or TAb assays are highly dependent on the specificity of the positive control (PC). A properly derived PC antibody should provide an accurate measure of sensitivity and functional relevance in a TAb or NAb system, respectively. Once these are attained, functionality becomes a priority for a PC antibody that will be used as a system suitability control in a NAb system.

What are the key validation challenges when using signal-to-noise (S/N) as a surrogate for titer?

For GTx and for the purpose of monitoring immune response, the assay system must demonstrate linearity (i.e., no hook effect) through the anticipated immune response (e.g., titers in the millions post GTx dosing). In the context of biologics, drug on board is a key parameter that needs to be fully interrogated so that the correct titer is inferred from S/N.
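To make the linearity requirement concrete, here is a simple check one might script during method development (an illustrative sketch, not a validated acceptance criterion): a hook effect shows up as a drop in signal as the positive control concentration keeps increasing.

```python
def shows_hook_effect(signals):
    """signals: assay responses ordered by increasing PC concentration.
    A hook effect appears as any decrease in signal at a higher
    concentration relative to the previous point."""
    return any(later < earlier
               for earlier, later in zip(signals, signals[1:]))

# Signal drops at the top of the range: hook effect present.
print(shows_hook_effect([120, 480, 1900, 7600, 5200]))   # → True
# Monotonically rising signal: no hook effect detected.
print(shows_hook_effect([120, 480, 1900, 7600, 30100]))  # → False
```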

How do you ensure assay robustness when using floating cut points, especially across different sites or time points?

To ensure floating cut points are robust across different laboratories, the same reagents must be used, along with well-characterized system suitability control acceptance ranges. Between timepoints, for a GTx, we know the patient will seroconvert with an immune titer response in the millions. Thus, a floating cut point may theoretically fluctuate slightly, but precision of the titer result should take a back seat given the expectation of seroconversion.
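As a sketch of the underlying arithmetic (a multiplicative normalization factor is assumed here; an additive form is also used in practice), a floating cut point is recomputed per plate from that plate's negative control, with the normalization factor fixed during validation:

```python
def floating_cut_point(nc_signals, norm_factor):
    """Plate-specific floating cut point: mean of the plate's negative
    control (NC) signals times a normalization factor that was derived
    once, during validation, from the drug-naive population."""
    return (sum(nc_signals) / len(nc_signals)) * norm_factor

# With NC replicates averaging 100 and a validation-derived factor
# of 1.25, samples on this plate screen positive above 125.
print(floating_cut_point([96, 104, 100], 1.25))  # → 125.0
```

Because only the NC mean floats between plates and sites, holding reagents and the normalization factor constant is what keeps results comparable.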

How do you address the potential overestimation of sensitivity when using high-affinity monoclonal antibodies as positive controls?

The reported sensitivity concentration is a relative result that demonstrates the optimized nature of the assay, assuming the PC Ab demonstrates specificity.

What are the regulatory implications of bypassing screening and confirmatory steps in favor of a direct-to-titer approach?

It is always good to start conversations with Health Authorities early to get buy-in.

For the last figure, is it TAb or NAb?

TAb.

Are there negative samples?

Yes. I can share that the positivity rate was concordant with seropositivity rates available in the public domain.

For the humanized positive control, how can we obtain this? Is it recommended by regulation agencies? For gene therapy products, how many lots are needed to establish the cut point? 120 lots?

Yes, 120 lots are recommended for establishing the cut point.

Why do we need so many lots?

Statistics is the short answer to this question. A higher number of ‘lots’ enables accurate estimation of the 95th percentile for cut point setting. CLSI EP28-A3c is probably a good resource in addition to previous White Papers on Recent Issues in Bioanalysis (2023).
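To illustrate the statistical point, here is a minimal simulation (illustrative assumptions only: log-normal drug-naive background signals and a nonparametric 95th percentile) showing that the cut point estimate scatters far less when derived from 120 lots than from 30:

```python
import numpy as np

rng = np.random.default_rng(7)

def estimated_cut_point(n_lots):
    # Simulated drug-naive background signals; log-normality is a
    # common working assumption for ligand binding assay responses.
    signals = rng.lognormal(mean=0.0, sigma=0.25, size=n_lots)
    return np.percentile(signals, 95)  # nonparametric screening cut point

# Run-to-run scatter (SD) of the cut point estimate at two panel sizes.
spread = {n: float(np.std([estimated_cut_point(n) for _ in range(2000)]))
          for n in (30, 120)}
print(spread)  # the 120-lot estimate varies noticeably less than 30
```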

Is there any chance that the screening process will be unsuccessful?

By ‘unsuccessful’, I take this to mean an inability of the assay to properly identify a negative and/or positive result. With this context, yes, it is possible that the screening process will be unsuccessful. In my opinion, this would be a consequence of a poorly optimized assay system that should fail validation (e.g., cut point, precision) so that patients are not put at risk.

If immunogenicity testing is performed for a particular viral vector, is it mandatory to repeat the studies for a different RNA content but same vector?

Yes.

How can we evaluate/monitor gene therapy success?

I believe gene therapy success is based on clinical outcome(s).

How do you see the technology evolving and impacting patient care in the future?

Apparently, quantum computing may be able to support epitope mapping and personalized therapy design. I’m not an expert in this field, but coupled with the growth of AI, the sky is the limit.



The opinions expressed in this Q&A are those of the speaker and do not necessarily reflect the views of Bioanalysis Zone or Taylor & Francis Group.

In association with: