Bioanalysis Zone

Quantification of biomarkers – remarks on the current state of affairs


A biomarker assay is not a PK assay. This was the notion most frequently mentioned and discussed at this year's AAPS Crystal City VI meeting, as the scientists present began assembling a framework on which the US FDA could base its future guidance and regulations. Demand for regulatory clarity on the relevant criteria for biomarker assay performance has grown markedly of late. Many questions are being asked, and as the bioanalytical community ponders them, the answers still fall short.

The industry relies on standardization and defined criteria to function. It is not so easy, however, to establish acceptance criteria for biomarker assays, particularly if the aspiration is a single universal approach. The real misnomer in this arena is the term ‘accuracy’. Given the endogenous nature of biomarkers and the inherent variability in native levels, accuracy cannot be a realistic goal. What should be pursued, above all, are precision and parallelism, together with harmonization in the sense of correlation of results between different laboratories performing the same analyses. The idea of ‘megapools’ of control matrix was raised in the context of harmonization: pools constituted from several hundred donors. The resulting volume suffices for a great many analyses across as many locations as necessary and, stability and adsorptive effects notwithstanding, gives a constant baseline of marker levels.

There was a good deal of discussion on the subject of precision. Two principal factors were put forward as determining how tight the precision needs to be for a given biomarker assay. One was the biochemical nature of the compound and the expectations it raises, which relate to the analytical technique used. The other, also biological in essence, concerns how much the concentration of the biomarker in question naturally fluctuates. If there is a twofold natural fluctuation, it was asked, does the assay really need a precision of better than 20%? Reasonable as this question is, it caused some consternation among many of those present, and it was reassuring to me that there was evidently a strong sense of responsibility for keeping method performance under control. It was clarified, nonetheless, that the suggestion was not meant to question a rigorous analytical approach as standard, but was aimed at situations where the usual precision has proven hard to attain and effort may be needlessly wasted if the existing, poorer precision is adequate to answer the question being posed.

Something also briefly alluded to was that, given the importance of verifying that a measured concentration differs significantly from a basal range of marker concentrations, statistical tests are surely appropriate. I agree, even though many would have to become familiar with them. One could, for instance, perform t-tests in which the null hypothesis is that there is no significant difference between two sets of measured concentrations, at a carefully chosen level of significance. This may require analyzing several replicates of a given incurred sample, and of comparator samples, to give the test the required substance.
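To make the t-test idea concrete, here is a minimal sketch of a two-sample Student's t-test on replicate measurements, using only the Python standard library. The concentration values, replicate counts, and the 5% significance level are all invented for illustration; in practice one would use a statistics package and choose the test and significance level to suit the study design.

```python
from statistics import mean, stdev

def pooled_t_statistic(a, b):
    """Two-sample Student's t-statistic (equal-variance form)."""
    na, nb = len(a), len(b)
    # Pooled variance across the two replicate sets
    sp2 = ((na - 1) * stdev(a) ** 2 + (nb - 1) * stdev(b) ** 2) / (na + nb - 2)
    return (mean(a) - mean(b)) / (sp2 * (1 / na + 1 / nb)) ** 0.5

# Hypothetical replicate concentrations (ng/mL) for an incurred sample
# and a basal comparator pool -- illustrative numbers only.
incurred = [12.1, 11.8, 12.6, 12.3, 11.9]
basal    = [10.2, 10.9, 10.5, 10.1, 10.7]

t = pooled_t_statistic(incurred, basal)
# Two-sided critical value at alpha = 0.05 with
# df = 5 + 5 - 2 = 8 degrees of freedom: t_crit = 2.306.
significant = abs(t) > 2.306
print(f"t = {t:.2f}, significant: {significant}")
```

Rejecting the null hypothesis here would support the claim that the incurred sample's marker level genuinely differs from the basal range, rather than merely reflecting assay imprecision.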

There was also much mention of subjective terminology and how misleading it can be. The cliché ‘fit-for-purpose’, for instance, was pounced on for its subjectivity. The statement that a biomarker assay is not a PK assay is itself subjective. Are they completely different entities? To me, the answer seems to be ‘yes’, since there are very important differences and at this stage we should get used to distinguishing the two. However, the answer is also ‘no’, at least in the sense that, as already alluded to, we still ought to strive for the best analytical performance, and hence for confidence in our analytical data.

Coming from the LC–MS background that dominates my work, I feel that more should be made of isotopically labeled surrogate analytes: isotopologues which, given the right labels, approach physicochemical equivalence to their unlabeled analogue biomarker. This goes beyond their classical role in internal standardization. Research already conducted tells us that invoking truly effective response equivalence would require the expense of synthesizing 13C- and/or 15N-labeled isotopologues rather than deuterated ones. The physics here is interesting, but more importantly, the approach is full of promise. From a parallelism perspective, we could dispense with speculation about the suitability of surrogate matrices; the associated calibration curves would not be skewed by endogenous levels, and they would be, in effect, as appropriate for interpolating incurred sample responses as they would be in a PK assay.

At Crystal City VI there were murmurs of complaint about the difficulty of having to use non-LIMS software such as Excel to process such data, but with enough of the bioanalytical community persuading LIMS providers to accommodate it, this would surely cease to be an obstacle. Similarly, the need to find two different non-interfering isotopologues for such an assay is seen as a hurdle; yet it seems reasonable to speculate that if the demand were there, supply would follow, particularly if the nominated internal standard (not the surrogate analyte) could be labeled inexpensively, for example by deuteration. Synthesis with adequately high isotopic purity, however, is critical and worth the expense. At the end of the day, isotopologues, especially those incorporating heavy isotopes of carbon and nitrogen rather than of hydrogen, are a great tool for the bioanalytical mass spectrometry world because of their excellent physicochemical mimicry. The best should be made of them wherever assay performance and reliability demand it, and biomarker quantification with a mass spectral endpoint is no exception.
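The surrogate-analyte calibration described above can be sketched numerically. In this hedged example, a labeled isotopologue is spiked into authentic matrix at known concentrations (with a second, non-interfering isotopologue as internal standard), a line is fitted to the response ratios, and the unlabeled endogenous analyte in an incurred sample is interpolated on that curve. All peak-area ratios and concentrations below are invented for illustration.

```python
def fit_line(x, y):
    """Ordinary least-squares slope and intercept."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
            sum((xi - mx) ** 2 for xi in x)
    return slope, my - slope * mx

# Calibration: labeled surrogate analyte spiked at known levels (ng/mL).
# Responses are (surrogate area / internal-standard area) ratios;
# endogenous analyte does not contribute, so the curve is not skewed.
conc  = [1.0, 2.0, 5.0, 10.0, 20.0]
ratio = [0.11, 0.21, 0.52, 1.03, 2.05]   # hypothetical, near-linear

slope, intercept = fit_line(conc, ratio)

# Incurred sample: the *unlabeled* endogenous analyte's response ratio
# is read back against the surrogate curve -- no surrogate matrix needed.
sample_ratio = 0.78
sample_conc = (sample_ratio - intercept) / slope
print(f"interpolated concentration: {sample_conc:.2f} ng/mL")
```

The design choice worth noting is that calibration happens in the authentic matrix itself, which is exactly what removes the parallelism question that surrogate matrices raise.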

One thing is for sure: there is plenty more dialogue to come between the industry and the regulators.
