Regulation and standardisation of immunogenicity assessments: a CRO’s perspective

Written by Alexandra Hawes, Deborah McManus, Gareth Satchell, Karen Law and Nicola Stacey (all LGC Ltd; Cambridgeshire, UK)

All immunogenicity assessments present specific challenges. As a contract research organisation (CRO), a major challenge we face is the lack of harmonisation and standardisation across the industry.

For pre-clinical immunogenicity assessment there is a lack of clear industry guidance; consequently, many companies follow the FDA 2016 draft guidance and the EMA 2017 guideline revision. However, neither of these documents specifically addresses pre-clinical studies and both are more focused on clinical trials. As stated in the EMA 2017 guideline revision, pre-clinical studies often involve human(ised) proteins that will be recognised as ‘foreign’ by animals and are more likely to induce an immune response in animals than in humans. As such, the presence of an immunogenicity response to a therapeutic protein in pre-clinical samples cannot be used to predict an equivalent response in humans. In addition, the assessment of immunogenicity in animals is primarily for the appropriate interpretation of pharmacology data, preliminary studies, dose-range-finding studies and repeat-dose toxicology studies (PK and TK). In humans, by contrast, immunogenicity assessment is a critical safety and efficacy requirement. Indeed, pre-clinical immunogenicity samples are routinely collected but may only be analysed to confirm the presence of anti-drug antibodies (ADA) when unusual PK/TK profiles are observed.

As pre-clinical immunogenicity is primarily focused on the impact to the PK/TK data, it has been suggested (EBF presentation 2016, J. Munday) that a more efficient ‘fit-for-purpose’ strategy could be implemented. This utilises a one-tiered approach (no confirmatory, titration or neutralising antibody [nAb] assay) with a 1% or even 0.1% false-positive rate used to calculate the cut point. As a CRO, we have observed a trend towards this one-tiered approach, which we believe makes pre-clinical immunogenicity testing more efficient whilst still maintaining the scientific integrity of the data. However, this ‘fit-for-purpose’ approach is not harmonised across the industry, with some sponsors reluctant to move away from the ‘traditional’ approach for fear of regulatory challenges.

LGC (Cambridgeshire, UK) understands that a level of harmonisation is required in order to ensure that data packages in support of new drug products meet the expectations of the regulators. While there are industry guidelines for clinical immunogenicity assessments, we have observed that the interpretation and application tend to differ between sponsors.

There are a variety of experimental methods and formats for immunogenicity assays, but key to all of them is the calculation of an assay cut point. This defines the response threshold that indicates whether a sample is positive for the specific ADA. A widely used approach is detailed in Shankar et al., 2008 [1], and the approaches discussed in Devanarayan et al., 2017 [2] are also now being adopted. There are no specific regulatory guidelines on the approach to be taken; hence the calculation of the cut point can vary. The approach taken should be interpretive and data-driven, with suitability confirmed by the observed false-positive rates and low positive control performance from subsequent sample analysis.
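To make the false-positive rate concrete, a minimal parametric screening cut point can be sketched as below. This is an illustrative simplification of the kind of calculation described in Shankar et al., 2008 [1]: the signal values are hypothetical, and a real validation would typically log-transform the data, exclude statistical outliers and account for run-to-run variance (see Devanarayan et al., 2017 [2]), all of which are omitted here.

```python
import statistics

def screening_cut_point(baseline_signals, false_positive_rate=0.05):
    """Illustrative parametric cut point: mean + z * SD of drug-naive signals.

    A 5% false-positive rate corresponds to a one-sided z of ~1.645;
    the 1% and 0.1% rates sometimes proposed for a one-tiered
    pre-clinical approach correspond to ~2.326 and ~3.090.
    """
    z = {0.05: 1.645, 0.01: 2.326, 0.001: 3.090}[false_positive_rate]
    mean = statistics.mean(baseline_signals)
    sd = statistics.stdev(baseline_signals)
    return mean + z * sd

# Hypothetical assay signals from 20 drug-naive individuals
naive = [102, 98, 110, 95, 105, 99, 101, 97, 108, 103,
         100, 96, 104, 107, 94, 102, 99, 106, 101, 98]
cp_5pct = screening_cut_point(naive, 0.05)  # conventional screening cut point
cp_1pct = screening_cut_point(naive, 0.01)  # stricter one-tiered cut point
```

Note that a lower false-positive rate yields a higher cut point, which is why a one-tiered design can dispense with a separate confirmatory assay: fewer samples screen positive in the first place.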


The FDA 2016 draft guidance recommends that assay sensitivity should be 100 ng/mL or lower (for both screening and confirmatory assays). In our experience, the drive to meet this recommendation (and the associated improvements in drug tolerance) has sometimes resulted in the development of methods that minimise analytical and biological variability. There is then a potential risk that the low cut point generated may not be robust enough to accommodate minor variations in assay performance, particularly for later-phase studies performed over a prolonged period of time. We have observed a wider acceptance of the importance of assay variability when developing immunogenicity assays; the challenge is to find the appropriate balance that produces a robust method with an appropriate level of sensitivity and drug tolerance. The robustness of the method is particularly critical where low biological variability is observed, as the analytical variance then has a greater influence on the calculation of the cut point. Robustness should be determined during development and critical reagents should be well characterised. Trending analysis is recommended, particularly for late-phase studies.
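The trending analysis mentioned above can be sketched as a simple control check across runs. The rules and thresholds below are illustrative assumptions only, not requirements of any guidance; laboratories typically use formal control charts (e.g. Levey-Jennings plots with Westgard rules) for this purpose.

```python
import statistics

def flag_drift(control_values, window=6, min_history=5):
    """Minimal trending check for a low positive control across assay runs.

    Once at least `min_history` runs are available, each new run is
    flagged if it falls outside mean +/- 2 SD of the preceding runs,
    or if `window` consecutive runs sit on the same side of the
    historical mean (a simple shift rule). Illustrative sketch only.
    """
    flags = []
    for i in range(min_history, len(control_values)):
        history = control_values[:i]
        mean = statistics.mean(history)
        sd = statistics.stdev(history)
        out_of_range = abs(control_values[i] - mean) > 2 * sd
        recent = control_values[i - window + 1: i + 1]
        shift = len(recent) == window and (
            all(v > mean for v in recent) or all(v < mean for v in recent))
        flags.append((i, out_of_range or shift))
    return flags

# Hypothetical low-positive-control readings over eight runs;
# the final run jumps and would be flagged for investigation
runs = [1.00, 1.02, 0.98, 1.01, 0.99, 1.03, 1.00, 1.60]
trend_flags = flag_drift(runs)  # → [(5, False), (6, False), (7, True)]
```

A flagged run would prompt investigation of reagent lots, instrument performance or analyst technique before the cut point itself is revisited.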

For clinical immunogenicity assessments for submission in North American territories, sponsors now request that ADA methods be validated to the FDA 2016 draft guidance. In some cases, sponsors attempt to ‘future-proof’ submissions by developing assays that implement a stricter interpretation of the guidance. Developing a standardised approach to immunogenicity assays could eliminate differences between submissions that might impact the timelines for regulatory review. However, we also accept that a ‘one-size-fits-all’ approach is not always suitable, especially if it is tailored to meet one set of regulatory guidance. From our experience, the lack of specific guidance can lead to attempts to ‘future-proof’ the method via over-interpretation of the current guidelines, which may in turn drive regulatory expectations. One issue we then face, as a CRO, is whether we should encourage sponsors to develop and validate assays that meet the ‘future-proof’ approach we have observed, even in cases where it might not be necessary at the time.

For pre-clinical immunogenicity assessments there has been industry discussion about implementing a ‘fit-for-purpose’ approach, but for clinical immunogenicity assessments there is less harmonisation and agreement. As the regulatory guidelines and client expectations are not completely aligned, it can be challenging to adopt an approach that meets all of the current and future requirements.

Overall, we recommend that the industry and regulators reach consensus on how both pre-clinical and clinical immunogenicity assays should be developed, validated and reported. However, we acknowledge that the landscape is ever changing and that all parties involved need to work together to ensure that ADA methods are ‘fit-for-purpose’. The scientific community as a whole can positively influence the way forward. To ensure that the direction is driven by good science rather than fear of regulatory submission failure, contributions from CROs, Biotech, Pharma and regulatory agencies need to be balanced. We must ensure that, in our attempts to ‘future-proof’ immunogenicity assessments, we do not compromise strong science and a ‘fit-for-purpose’ rationale.


1. Shankar G, Devanarayan V, Amaravadi L et al. Recommendations for the validation of immunoassays used for detection of host antibodies against biotechnology products. J Pharm Biomed Anal. 48(5), 1267-1281 (2008).
2. Devanarayan V, Smith WC, Brunelle RL, Seger ME, Krug K, Bowsher RR. Recommendations for systematic statistical computation of immunogenicity cut points. AAPS J. 19(5), 1487-1498 (2017).