Ask the Experts: quantifying translational biomarkers
In this Ask the Experts feature with Waters Corporation (MA, USA), our experts delve into translational biomarkers and their role in drug discovery and development. Each expert reports on the challenges and growth opportunities in the field, the current toolset for biomarker studies and the regulatory guidance for incorporating biomarkers into regulatory submissions. The experts also touch on the strengths of LC–MS for biomarker studies and how to ensure scalability and reproducibility when moving from small pilot studies to large cohort studies.
Get perspectives from Robert Plumb, Jinming Xing, Regina Oliveira and Craig Wheelock. Read more about the experts in the 'Meet the experts' section at the end of this feature.
Questions
How has the role of translational biomarkers in drug discovery and development evolved in recent years, and what has driven that evolution?
Robert: Research into translational biomarkers to identify mechanism of action (MOA) in early drug discovery is increasing, especially in the areas of cancer, neurological disease, stroke and cardiometabolic conditions. There is an urgent need to develop and validate translational biomarkers of neurodevelopmental disorders, cancer, stroke and diabetes that bridge human and animal studies to improve the chances of success.
Jimmy: Over the last decade, translational biomarkers have become a core component of drug discovery and development rather than an add-on. The development pipeline has shifted toward more complex modalities — biologics, antibody–drug conjugates (ADCs) and cell and gene therapies — which inherently require better molecular understanding, patient stratification and pharmacodynamic readouts. At the same time, development timelines are more competitive and sponsors are under unprecedented pressure to demonstrate mechanism, dose rationale and differentiation earlier.
The need for speed comes from the increasingly rigorous, competitive landscape and rapidly evolving clinical practice. This has driven a shift from single, exploratory biomarkers to more systematic, high-throughput and multi-omics approaches enabled by advances in next-generation sequencing (NGS), MS and digital technologies. Regulatory guidance has also evolved to encourage earlier and more structured use of biomarkers, particularly when they can support decision-making, accelerate development or reduce late-stage risk.
Regina: The biomarker landscape has matured from largely exploratory measurements to strategic, decision-enabling assets embedded across the drug development lifecycle. Biomarkers are no longer viewed as secondary exploratory tools; they are now integral to target validation, dose selection, patient stratification and proof-of-mechanism strategies. In early discovery, biomarkers are increasingly used to de-risk targets by demonstrating mechanistic relevance in human systems before program advancement. As programs progress, translational biomarkers serve as quantitative bridges between preclinical models and first-in-human studies, enabling dose selection grounded in biology rather than relying solely on toxicology margins. In later phases, biomarkers define biologically characterized subpopulations, supporting more precise clinical trial designs and improving the probability of technical and regulatory success — particularly in neuroscience, where disease heterogeneity requires deeper biological characterization.
Several key drivers have shaped this evolution. Advances in high-resolution analytical platforms, particularly MS, multiplex immunoassays and NGS, have dramatically increased sensitivity, specificity and throughput. Parallel progress in bioinformatics and machine learning has enabled the integration of multi-omic datasets into coherent mechanistic insights rather than isolated signals. But perhaps the most significant shift is conceptual. Biomarkers are no longer viewed as supportive endpoints but as central to translational strategy, informing go/no-go decisions, accelerating proof-of-mechanism and shaping precision medicine approaches. The field is moving toward a more predictive, data-integrated ecosystem — where biomarkers not only measure response but actively guide development strategy and clinical impact.
Craig: The field has moved from predominantly single molecular markers toward panels or signatures that capture pathway activity and disease endotypes. The concept of treatable endotypes, in particular, has become a unifying strategy. This transition reflects recognition that complex diseases rarely map to single molecular alterations and that pathway- or network-level information is more robust for prediction and stratification. We are seeing a broader use of omics molecular biomarkers (e.g., genomic, transcriptomic, proteomic, metabolomic) in clinical trials for diagnosis, patient selection and pharmacodynamic readouts. There is also an expansion of biomarker types beyond circulating analytes to include imaging markers, digital and physiological biomarkers and composite clinical–molecular scores.
This evolution has been driven by advances in omics technologies and data science, the creation of large-scale biobanks and collection of extensive real-world data (e.g., wearables, electronic health records), in combination with increasingly mature regulatory frameworks that allow biomarker-based trial enrichment, dose selection and surrogate endpoints. Biomarkers can now play central roles in trial design and regulatory decisions, including patient stratification and dose selection. Essentially, the data acquisition and analysis methods have sufficiently co-advanced to enable the integrative multi-omics analyses necessary for identification and application of new, multi-parametric biomarker signatures.
What are the biggest challenges and growth opportunities in the field?
Robert: One of the biggest challenges is in detecting and validating targets. After this, there is a need to develop robust, reliable, bias-free methods that can be deployed across research laboratories and the clinic to monitor and quantify these markers. The biggest opportunity lies in inexpensive, easily deployable methods. Beyond that comes the issue of validation: is the target valid? Does it represent a true means of monitoring the disease or the efficacy of treatment? After that, method validation should be considered. This will become more complex as the number of features to be measured increases.
Jimmy: One of the biggest challenges in the field is generating the proper evidence package to convince regulators. Such a body of evidence typically includes clear analytical validation, clinical validation and clinical utility. The process is often multi-year and resource-intensive, spanning multiple development phases. Drug developers who pre-specify context of use (COU), start fit-for-purpose validation early, and engage regulators sooner are better positioned and can significantly de-risk timelines and avoid rework later.
Regina: One of the biggest challenges is achieving robust clinical translation. Biomarkers that demonstrate strong mechanistic relevance in discovery settings often fail to perform consistently across diverse patient populations, disease stages or study designs. This is particularly evident in neuroscience, where biological heterogeneity in complex disorders complicates reproducibility and limits predictive reliability.
Variability introduced by sample handling, matrix effects, assay platforms and cross-site standardization continues to pose practical barriers. Although analytical sensitivity has advanced substantially, reproducibility and scalability remain defining constraints. At the same time, these limitations highlight important opportunities. Strengthening bioanalytical rigor, aligning biomarker strategies with clear development decisions, and integrating multi-omic, spatial and clinical datasets through advanced computational approaches will support a transition from descriptive measurement to predictive modeling. The field’s growth lies in transforming biomarkers from supportive tools into quantitatively validated drivers of development strategy — deploying the right biomarker at the right stage, with assays designed to meet both scientific and regulatory standards.
Craig: Currently, translational biomarker science is limited more by implementation than by discovery, with major challenges involving analytical/clinical validation, standardization and real-world utility. There are significant challenges with heterogeneous pre-analytical variables, assay platforms and SOPs, which limit reproducibility and cross-cohort comparability, especially for high-dimensional omics and complex imaging/digital biomarkers. A more formidable obstacle is that many biomarkers are discovered in small, homogeneous cohorts and fail in large, prospective, multi-center trials. There is a need for longitudinal, adequately powered studies to establish the prognostic or predictive value of putative biomarkers. While meta-analysis can be a reasonable solution, many studies are impaired by fragmented datasets/cohorts, inconsistent metadata and limited data sharing. Data privacy and the associated legal constraints can also render data sharing difficult, and the absence of standardized data models often means that “shared” data lacks the metadata and harmonization necessary for reuse. There is also an under-representation of ethnic and age groups, as well as comorbidity profiles, in discovery and validation cohorts, which risks identifying biased biomarkers that perform poorly in real-world populations.
The main opportunities for innovation include integrative multi-omics and systems-level biomarkers. A major growth area includes methodological and infrastructural advances that enable stable, cross-platform multi-omics signatures (e.g., integrating genomics, proteomics, metabolomics, and imaging). There is a distinct opportunity in pathway- and network-based biomarkers that tie directly into mechanism, PK/PD modeling and combination therapies. The use of continuous, real-world data from wearables, smartphones and home sensors can yield digital endpoints that reduce sample size and capture phenotypes not accessible via clinic-based measures. Finally, there is opportunity in designing biomarkers from the outset around specific decision points (e.g., treatment selection, de-escalation, early stopping) and patient-relevant outcomes, rather than “discovery-first, utility-later” paradigms.
What does the current toolset for biomarker studies look like?
Robert: This is mainly performed by enzyme-linked immunosorbent assay (ELISA) or tandem quadrupole multiple reaction monitoring coupled with liquid chromatography (LC–MRM MS). Target identification is a complex process that includes cell-based assays, clinical observations and discovery sciences such as genomics, transcriptomics, LC–MS-based proteomics and metabolomics. There is a real need for kit-based solutions: not broad panels, but a disease-focused approach.
Jimmy: The toolset is broad and increasingly multi-modal. In the genomic and transcriptomic toolbox, studies commonly use qPCR, RNA-seq, targeted NGS panels and cfDNA/ctDNA approaches. For pharmacodynamics, target engagement and disease activity, immunoassays such as ELISA, MSD and Simoa are widely used, alongside proteomics approaches such as Olink and SomaScan. For metabolomics and lipidomics, LC–MS-based platforms are the backbone. There are also cellular and tissue-based assays, including flow cytometry, immunohistochemistry (IHC) and digital pathology. Recently, more studies are also incorporating wearable and digital technologies such as continuous glucose monitoring (CGM), actigraphy and ECG to collect high-frequency, longitudinal data, allowing continuous monitoring while enabling more patient-centric trial designs.
Regina: The toolset for biomarker studies is best viewed not as a collection of technologies, but as a strategically integrated framework designed to generate data that guide development decisions. Platform selection is inherently fit-for-purpose and closely aligned with the biological question, required sensitivity and stage of development. LC–MS platforms play a central role due to their structural specificity, quantitative robustness and adaptability across biomarker classes, including metabolites, lipids, peptides and proteins. Targeted LC–MS/MS assays, supported by stable isotope–labeled internal standards, are often the foundation for decision-critical measurements where reproducibility and molecular selectivity are essential. Hybrid immunoaffinity LC–MS approaches further extend sensitivity for low-abundance proteins while preserving analytical confidence. Ligand binding assays (LBAs) remain highly valuable, particularly when ultra-high sensitivity or throughput is required. In practice, orthogonal deployment of LC–MS and LBAs strengthens analytical confidence, leveraging antibody-based sensitivity while relying on MS for structural confirmation and multiplexing.
Ultimately, the strength of the analytical toolset lies in integration — combining complementary technologies, rigorous validation practices and computational infrastructure to ensure that biomarker data are not only biologically informative, but quantitatively reliable and development-ready.
Craig: Much of the biomarker discovery landscape is still anchored in chromatography-MS, which enables quantitative profiling of small molecules, lipids and proteins with high specificity, extensive multiplexing and broad dynamic range. In parallel, modern high‑plex proteomic platforms such as Olink and SomaLogic have expanded omics‑scale discovery by allowing relative quantification of thousands of proteins from minimal sample input, providing an efficient front end for candidate generation that can subsequently be bridged to more quantitative bioanalytical assays. Immunoassay technologies like ELISA and bead‑based multiplexing remain central for protein and cytokine biomarkers, particularly in regulated settings where platform maturity, throughput and well‑characterized performance are essential. Molecular biology methods, including PCR‑based assays and sequencing‑driven readouts, continue to be used for nucleic acid biomarkers. More recently, adjacent technologies such as flow cytometry and particularly imaging‑enabled assays further extend the toolset to cellular and spatial biomarkers, enabling multiparametric immune‑phenotyping and precise localization of biomarker expression within tissues. These more novel approaches are exciting, but there are cost and logistical challenges in adapting them for routine analyses.
Is there growing regulatory guidance or expectation to incorporate biomarkers as part of regulatory submissions?
Robert: The recent ICH guidelines on biomarkers cover the development and validation of biomarker assays.
Jimmy: Regulators increasingly expect biomarkers, where they are scientifically justified, to support patient selection and stratification, clarify MOA, inform dose selection and strengthen benefit–risk assessment. Take oncology and rare genetic diseases as examples, where therapies often target a biologically defined subgroup: regulators may expect biomarkers and/or companion diagnostics (CDx) to define and justify that population. At the same time, regulators have provided clearer guidance and formal pathways, such as the FDA’s Biomarker Qualification Program, CDx guidance and EMA qualification and scientific advice procedures, to support appropriate use. Under the right conditions, biomarkers can serve as critical enablers, such as surrogate endpoints, in supporting accelerated approval.
Regina: Regulatory agencies are increasingly receptive to well-justified biomarker strategies, particularly when biomarkers are clearly linked to MOA, patient stratification or dose selection. While I would not characterize this as regulatory pressure, there is a clear and growing expectation that biomarker data supporting development decisions be analytically robust, reproducible, and appropriately validated for their intended use.
The level of expectation is highly context-dependent. Exploratory biomarkers used for internal decision-making operate within a fit-for-purpose framework, whereas biomarkers intended to support labeling, patient selection or benefit–risk assessments are held to substantially higher analytical and clinical validation standards. Early clarity on the intended role of a biomarker within a program is essential to aligning analytical rigor with regulatory strategy. Overall, the regulatory dynamic is less about mandating biomarkers and more about ensuring that development decisions are supported by reliable, scientifically grounded evidence. Programs that integrate biomarkers proactively and with clear biological rationale tend to experience more constructive regulatory dialogue and stronger submissions.
Craig: It appears that regulatory agencies are now more actively encouraging biomarker incorporation into submissions. However, the emphasis is still within the COU definition and fit‑for‑purpose validation rather than blanket mandates. Regulatory agencies can incentivize biomarkers by accepting them as primary endpoints, surrogates or decision tools in accelerated approvals in cases where they are supported by robust evidence. This trend can be seen for new therapeutics in oncology and neurology, where biomarkers increasingly influence approval and labeling. Submissions must demonstrate biomarker performance (sensitivity, specificity, reproducibility) within a defined context, which often requires orthogonal confirmation, large cohorts and statistical rigor.
What are the strengths of LC–MS for biomarker quantification compared with other techniques?
Robert: LC–MRM MS is the gold standard for trace-level quantification, whether of drugs and their metabolites, contaminants in food, environmental analytes or biomarkers. The exquisite sensitivity and wide dynamic range of electrospray ionization (ESI) MS make it ideally suited to the measurement of biomarkers in biofluids. Although LBAs have extremely high throughput, they lack the specificity afforded by MRM-based MS and are susceptible to interference from isomers and other closely related species.
What sets LC–MS/MS apart from other techniques for biomarker quantification is that methods can be developed quickly using automated workflows, publicly available libraries and in-silico tools. These assays can be multiplexed such that several hundred to thousands of compounds are measured in one rapid analysis. LC–MS assays can be implemented as broad screens or as hypothesis-driven methods focused on specific pathways or diseases, and methods can be quickly adjusted to include new compounds or exclude unwanted analytes. Added to this, well-curated standard libraries for metabolites, lipids or mixtures of both are commercially available, allowing methods to be developed quickly, component identities to be confirmed and calibration lines to be created. Furthermore, isotopically labelled authentic standards are available at biologically relevant levels, allowing accurate quantification to be performed no matter the matrix.
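As an illustrative sketch of that calibration workflow (the analyte concentrations, response ratios and weighting choice below are hypothetical, not data from any specific assay), a weighted calibration line can be fitted to analyte/internal standard peak-area ratios and then used to back-calculate QC concentrations:

```python
import numpy as np

# Hypothetical calibration data for one analyte: nominal concentrations
# (ng/mL) of spiked calibration standards and the measured peak-area
# ratio of analyte to its stable isotope-labelled internal standard (IS).
nominal = np.array([1.0, 5.0, 10.0, 50.0, 100.0, 500.0])       # ng/mL
response_ratio = np.array([0.021, 0.103, 0.209, 1.04, 2.07, 10.4])

# Fit a 1/x-weighted linear calibration line; down-weighting the high
# standards keeps the low end of the curve from being dominated by them.
w = np.sqrt(1.0 / nominal)   # np.polyfit minimizes sum(w**2 * residual**2)
slope, intercept = np.polyfit(nominal, response_ratio, 1, w=w)

def back_calculate(ratio: float) -> float:
    """Back-calculate a concentration from an observed analyte/IS ratio."""
    return (ratio - intercept) / slope

# Back-calculate a mid-level QC and report its bias against nominal.
qc_nominal, qc_ratio = 50.0, 1.01
qc_measured = back_calculate(qc_ratio)
bias_pct = 100 * (qc_measured - qc_nominal) / qc_nominal
print(f"QC: {qc_measured:.1f} ng/mL (bias {bias_pct:+.1f}%)")
```

The 1/x weighting shown is one common choice for assays whose variance grows with concentration; the appropriate weighting scheme is established during method development.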
Perhaps most importantly, the data from these quantitative biomarker assays can be transferred easily between laboratories or PIs and integrated with clinical measurements, and the assays themselves can be transferred to CROs and validated.
Jimmy: LC–MS is well known for its high specificity through chromatographic separation and mass-to-charge (m/z) resolution. It supports accurate and reproducible absolute quantification across a wide dynamic range. It is highly multiplexable and supports both targeted bioanalysis and untargeted discovery work. Compared to immunoassays, LC–MS is less dependent on antibody reagent availability, which can be a challenge at times. It is considered the reference method for many biomarkers, including small molecules, metabolites and lipids.
Regina: LC–MS platforms offer distinct strengths in biomarker quantification, particularly when analytical specificity and quantitative robustness are critical. One of the primary advantages is analytical selectivity. By measuring defined m/z transitions, LC–MS enables unambiguous identification of analytes, reducing the risk of cross-reactivity or interference that can arise with antibody-based assays.
Quantitative performance is another major strength. Stable isotope–labelled internal standards allow correction for matrix effects, extraction variability and instrument drift, supporting high accuracy and reproducibility across complex biological matrices. This becomes especially important for low-abundance biomarkers, wide dynamic ranges and longitudinal studies where consistency across time points and sites is critical.
The platform’s versatility further strengthens its role in translational research. LC–MS can be applied across small molecules, metabolites, lipids, peptides and proteins, enabling multiplexed measurement and integration of pharmacokinetic, pharmacodynamic and mechanistic biomarkers within a unified analytical framework. As development strategies increasingly rely on biomarker panels rather than single endpoints, this flexibility becomes strategically valuable. LC–MS also serves as a powerful orthogonal tool. It is particularly advantageous when antibodies are unavailable or insufficiently selective, and when differentiation of closely related species is required. Hybrid immunoaffinity LC–MS approaches further extend sensitivity while preserving molecular specificity. Ultimately, the strength of LC–MS lies in its ability to generate highly specific, quantitatively reliable data that withstand scientific and regulatory evaluation.
Craig: LC–MS platforms provide highly selective, structurally resolved and quantitatively robust biomarker measurements, which makes them particularly well suited for biomarker analysis. The combination of chromatographic separation with m/z-based detection enables the resolution of isobars and closely related metabolites or proteoforms that would otherwise be indistinguishable. Generally, the linear dynamic range and precision are good, enabling accurate quantification of low- to high-abundance biomarkers in a single assay. Because LC–MS detects analytes directly rather than via binding reagents, it minimizes dependence on antibodies, supports faster method development, and provides inherent traceability to reference standards and isotopically labeled internal standards. Targeted LC–MS workflows (e.g., SRM/MRM or PRM) support multiplexed quantification of panels of biomarkers in a single run, improving efficiency while maintaining control over inter‑assay variability. In combination with immunocapture or other enrichment strategies, LC–MS can achieve high sensitivity for protein and peptide biomarkers, while retaining the structural information needed to distinguish isoforms and post‑translational modifications. A further strength is the maturity of the technique: there is a wealth of information in the literature that significantly accelerates method development, and there are many well-trained analytical chemists familiar with LC–MS approaches.
How do proteomic, lipidomic and metabolomic biomarker assays differ?
Robert: From a targeted LC–MS perspective, there is little difference, with the exception that sample preparation in proteomics is more time-consuming and costly. The dynamic range of metabolomics is often much greater than that of proteomics, but this may be less of an issue in the targeted analysis of a few compounds than in broad screening.
Jimmy: All three system-level approaches cast a wide net in capturing biological system activity by analyzing an entire class of molecules at once. Proteomic assays measure proteins and peptides and are often used to confirm MOA, target engagement and pathway modulation. Lipidomic assays measure membrane and signaling lipids and are often used to inform metabolic and inflammatory processes, as well as to complement metabolomics. Metabolomic assays measure small molecules and provide a real-time readout of biochemical function, often used for pathway modulation and disease monitoring. Each modality offers a different perspective and is more powerful when paired with the others.
Regina: Proteomic, lipidomic and metabolomic biomarker assays differ in the molecular level they interrogate and, consequently, in both analytical complexity and translational insight. Proteomic assays often require enzymatic digestion, careful management of pre-analytical variables, and strategies to address proteoform diversity and post-translational modifications. Analytical challenges include dynamic range limitations and matrix interference, particularly in plasma-based studies. However, proteomics offers strong mechanistic value, especially when assessing target engagement, pathway modulation or functional protein states.
Lipidomic assays provide insight into membrane composition, inflammatory signaling and metabolic regulation, but demand rigorous control of extraction efficiency, ion suppression, and data annotation to maintain quantitative reliability. High-resolution chromatography and accurate mass detection are essential to resolve closely related lipid classes.
Metabolomic assays measure small-molecule metabolites, representing a downstream reflection of biological activity. These assays are typically analytically sensitive and adaptable to high-throughput workflows, yet are strongly influenced by physiological state, diet, microbiome composition and environmental factors. As such, metabolomics excels at capturing dynamic pathway perturbations and phenotypic shifts but requires stringent pre-analytical control and normalization to support translational interpretation. The optimal approach depends on the biological question, required specificity and intended COU, with increasing value emerging from integrated multi-omic strategies that leverage complementary strengths across platforms.
Craig: From a quantitative perspective, proteomic biomarker assays often rely on surrogate peptides, isotopically-labeled standards and sophisticated normalization to achieve robust absolute or relative quantification across cohorts, whereas lipidomic and metabolomic assays more commonly achieve high‑precision, relative or semi‑absolute quantification across hundreds to thousands of features using class‑specific internal standards and tightly standardized LC–MS workflows. While all three domains can operate in targeted and untargeted modes, proteomics tends to be favored for mechanistic pathway mapping and complex protein networks, lipidomics for dissecting membrane remodeling and lipid signaling, and metabolomics for capturing integrated metabolic responses. Rigorous biomarker programs often exploit their complementary strengths in a coordinated, multi‑omics strategy rather than treating them as interchangeable tools.
How do you ensure scalability and reproducibility when moving from small pilot studies to large cohort studies?
Robert: This is probably one of the biggest challenges in targeted metabolomics and lipidomics. For small pilot studies, there is a need to develop assays quickly to test a hypothesis and generate data that will allow decisions to be made quickly. This may involve using a pre-existing method that contains more analytes than necessary or offers more sensitivity than is required, but it will get the job done and allow pilot data to be generated. These methods do not require anything more than fit-for-purpose, discovery-level validation. Notwithstanding this, it is still important to perform the analysis according to appropriate standards, with study QCs and system conditioning steps [1].
Once a decision has been made to progress to a larger cohort study, there is an opportunity to invest time in optimizing the method for throughput, sensitivity, specificity, dynamic range and coverage. Although there are international guidelines for biomarker assay validation, such as ICH M10, the European Bioanalysis Forum has argued for a more pragmatic “context of use” approach to biomarker assay development, evaluation and validation. Thus, consideration should be given to how the data are going to be used, and assay validation should be adjusted appropriately; e.g., an assay supporting a Phase III drug efficacy study for a new drug application will require more rigorous validation than one used for a large-scale epidemiological study.
We have previously developed and validated, according to the FDA 2011 guidelines, an assay for the quantification of amino acids and biogenic amines in urine and plasma [2] and found this to be an extremely large resource burden. In contrast, we adopted a more fit-for-purpose approach for the validation of an LC–MS/MS assay for tryptophan pathway metabolites [3] and for HILIC–MS/MS quantification of bioactive lipids in plasma [4]. Thus, our current approach for LC–MS/MS assays to support large cohort studies is to focus on robustness and reproducibility, considering issues such as lot-to-lot column comparison, matrix source variation and intra-day reproducibility. Assay accuracy and bias are only evaluated around the expected concentration range, with less emphasis placed on determining the LLOQ. Effectively, for LC–MS assays supporting large cohort studies, our major focus is on robustness, reproducibility and reliable deployment.
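As a minimal, hypothetical sketch of that robustness-focused evaluation (the replicate values and 15% limits below are illustrative, not the criteria used in the cited studies), intra-day precision and bias can be computed from QC replicates prepared at a level within the expected concentration range:

```python
import statistics

# Hypothetical intra-day QC replicates (ng/mL) for one analyte, prepared
# at a level within the expected biological concentration range.
qc_nominal = 25.0
qc_replicates = [24.1, 25.6, 23.8, 26.2, 24.9, 25.3]

mean = statistics.mean(qc_replicates)
cv_pct = 100 * statistics.stdev(qc_replicates) / mean    # intra-day precision
bias_pct = 100 * (mean - qc_nominal) / qc_nominal        # accuracy/bias

# Acceptance criteria are set per context of use; the 15% limits below
# are purely illustrative, not a regulatory requirement.
verdict = "pass" if cv_pct <= 15.0 and abs(bias_pct) <= 15.0 else "fail"
print(f"CV = {cv_pct:.1f}%, bias = {bias_pct:+.1f}% -> {verdict}")
```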
Jimmy: It starts with fit-for-purpose validation, including inter-operator, inter-instrument and inter-lot acceptability. If scaling across labs, assay transfer and bridging using a shared sample set are critical, and planning laboratory qualification and audits early, as needed, is key. It is also important to ensure consistent pre-analytical conditions by standardizing tube type, anticoagulant, processing time, centrifugation, freeze/thaw cycles and shipping/handling conditions across sites. Ongoing monitoring and trending of QC samples across runs is important to detect assay drift. Together, these steps allow you to scale without losing confidence in the data.
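A minimal sketch of that QC trending (the values and control limits here are hypothetical; a real program would derive limits from its own validation data) is a simple control-chart check that flags any run whose pooled-QC result falls outside limits set from a baseline window of accepted runs:

```python
import numpy as np

# Hypothetical pooled-QC results (analyte/IS peak-area ratio), one value
# per analytical run, in run order; the last runs drift upward.
qc_values = np.array([1.02, 0.99, 1.01, 1.03, 0.98, 1.00,
                      1.05, 1.08, 1.11, 1.14])

# Derive control limits from an initial baseline window of accepted runs.
baseline = qc_values[:6]
mean, sd = baseline.mean(), baseline.std(ddof=1)

for run, value in enumerate(qc_values, start=1):
    z = (value - mean) / sd
    if abs(z) > 3:
        status = "FAIL: beyond 3 SD, investigate/reanalyze"
    elif abs(z) > 2:
        status = "warn: beyond 2 SD, watch for a trend"
    else:
        status = "ok"
    print(f"run {run:2d}: {value:.2f}  z = {z:+.1f}  {status}")
```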
Regina: It begins with intentional assay design. Bioanalytical considerations must be incorporated early, with throughput, robustness and long-term performance in mind. Methods that perform well in small pilot studies can become fragile when applied to hundreds or thousands of samples, so simplified sample preparation, stable chromatographic conditions and platform choices that can be sustained over time are critical from the outset. Standardization is foundational; harmonized sample collection protocols, controlled pre-analytical variables and clearly defined acceptance criteria minimize variability, particularly in multi-site or longitudinal studies. Routine inclusion of system suitability tests, pooled quality controls and reference materials throughout analytical runs enables continuous monitoring of assay performance and early detection of drift. As programs expand, validation strategies must evolve. Fit-for-purpose performance characteristics — precision, accuracy, dynamic range, sensitivity and stability — should be re-evaluated as assays transition from exploratory to decision-critical applications. When analyses are decentralized or instruments are added, cross-validation across platforms or laboratories is essential to ensure comparability and maintain confidence in the data.
Scalability also depends on infrastructure. Automation of extraction workflows reduces operator-dependent variability and supports increased throughput, while standardized and version-controlled data processing pipelines help ensure that growth in sample volume does not compromise reproducibility or interpretability.
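As a minimal sketch of what version-controlled data processing can mean in practice (the pipeline name and parameters here are hypothetical), each processed batch can carry a fingerprint of the exact software version and settings used, so any result remains traceable as sample volume grows:

```python
import datetime
import hashlib
import json

# Hypothetical parameter set for one processing step; in a
# version-controlled pipeline this would live in a tracked config file.
params = {"pipeline": "peak_integration", "version": "1.4.2",
          "smoothing_window": 5, "snr_threshold": 10}

def provenance_record(batch_id: str, params: dict) -> dict:
    """Fingerprint the exact settings used to process a batch of samples."""
    fingerprint = hashlib.sha256(
        json.dumps(params, sort_keys=True).encode()).hexdigest()[:12]
    return {"batch": batch_id,
            "params": params,
            "param_hash": fingerprint,
            "processed_at": datetime.datetime.now(
                datetime.timezone.utc).isoformat()}

# Store this record alongside the batch results; any two batches with the
# same param_hash are guaranteed to have used identical settings.
print(json.dumps(provenance_record("cohort_A_plate_07", params), indent=2))
```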
Ultimately, scalability is not simply increasing capacity — it is preserving quantitative integrity and biological relevance as complexity grows. When reproducibility is engineered into the analytical framework from the beginning, expansion becomes a deliberate progression rather than a reactive adjustment.
Craig: The ability to scale to large, multi-center cohorts is a key requirement for biomarker discovery platforms, and the logistics involved should not be underestimated. There is a need to essentially lock down the entire workflow (clinical, analytical, computational) and then rigorously monitor its performance over time. In my experience, the pre‑analytical handling (collection, processing, storage) is key, and it is necessary to have rigorous SOPs that are harmonized and locked across clinical centers. In addition, all components from sample preparation to LC–MS settings and data processing need to be harmonized and optimized before scaling up. Protocol changes mid‑study should be avoided at all costs, and if they do become necessary, then bridged samples/analyses are vital. Good practice should, when possible, include implementation of structured batch layouts that are ideally randomized for key covariates and interweave pooled QC samples, system suitability samples and reference materials across all runs to track drift and batch effects. There should be predefined quantitative QC metrics (e.g., CV thresholds, signal drift limits) and objective pass/fail criteria that trigger potential reanalysis. If the analysis plan involves multi-site analyses, then there is a need to evaluate method robustness, including reproducibility across instruments and sites. There is also a need for statistical planning and validation, including power calculations for the scaled cohort, multiple‑testing control for high‑dimensional panels and independent validation cohorts or train/test splits to guard against overfitting.
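As a minimal sketch of such a structured batch layout (the sample identifiers are hypothetical, and a real study would stratify the randomization by the key covariates Craig mentions rather than shuffle naively), study samples can be randomized in acquisition order with pooled QC samples interleaved at a fixed interval:

```python
import random

# Hypothetical run list: 24 study samples in randomized acquisition order,
# with a pooled QC at the start and after every sixth injection.
samples = [f"S{i:03d}" for i in range(1, 25)]
random.seed(42)            # fixed seed so the layout itself is reproducible
random.shuffle(samples)

run_list, qc_interval = ["pooledQC"], 6
for i, sample in enumerate(samples, start=1):
    run_list.append(sample)
    if i % qc_interval == 0:
        run_list.append("pooledQC")   # interleaved QC to track drift

print("\n".join(f"{i:2d}  {inj}" for i, inj in enumerate(run_list, start=1)))
```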
Meet the experts
Robert Plumb
Scientific Advisor
Waters Corporation (MA, USA)
Rob Plumb is currently a Scientific Advisor in the Research Development Analytical Testing organization at Waters Corporation. He has more than three decades of experience in the pharmaceutical industry, beginning in 1987 at GlaxoSmithKline’s (London, UK) R&D DMPK Department, focusing on regulatory bioanalysis and metabolite identification. In 2001 he moved to Waters Corporation, where he has worked in R&D, pharmaceutical business development, metabolic phenotyping and discovery omics. His research interests include the application of fast chromatography and advanced mass spectrometry (MS) in bioanalysis and omics studies. Robert has published over 150 papers on LC–MS/MS and NMR for bioanalysis, metabolomics and metabolite identification.

Jinming (Jimmy) Xing
Associate Director, Clinical Biomarker Lead
Moderna, Inc. (MA, USA)
Jimmy Xing is an Associate Director at Moderna, where he leads biomarker strategy across clinical-stage programs in rare diseases and oncology. In this role and in prior roles, he has been responsible for developing and executing comprehensive biomarker plans, encompassing strategic planning, assay development, validation and transfer, vendor oversight, and biospecimen management. He also evaluates and implements emerging technologies and devices to enhance biomarker collection and analysis. Jimmy is an active member of the Patient Centric Sampling Interest Group, a non-profit organization dedicated to promoting the adoption of patient-centric sampling technologies. His work aims to facilitate their integration into both clinical trials and standard of care and reduce patient burden associated with traditional sample collection. He holds a BSc in Biological Sciences with an emphasis in Analytical Chemistry from the University of Delaware (DE, USA) and earned his Pharm.D. from The Ohio State University (OH, USA).
Regina Oliveira
Senior Scientist, LC–MS Neuroscience
Bristol Myers Squibb (MA, USA)
Regina Oliveira is a bioanalytical scientific leader with over 15 years of experience advancing LC–MS-driven strategies that enable biomarker discovery, translational decision-making and IND-enabling development. At Bristol Myers Squibb, she shapes and executes biomarker and bioanalytical strategies across multiple neuroscience programs, collaborating cross-functionally to translate molecular insight into clinical impact. Her expertise spans discovery through clinical development, integrating small and large molecule bioanalysis, proteomics, metabolomics and biomarker science within regulatory frameworks aligned with FDA, EMA and ANVISA expectations. She is recognized for architecting fit-for-purpose analytical frameworks, advancing qualification strategies, and establishing scalable platforms that convert complex datasets into translational intelligence.
Throughout her career — including experience at the National Institutes of Health (MD, USA), Quintiles (now Q2 Solutions; NJ, USA), and as Associate Professor of Chemistry at the Federal University of São Carlos (Brazil) — Dr Oliveira has consistently bridged innovation with execution, fostering scientific collaboration, strengthening data integrity practices and aligning multidisciplinary teams toward program advancement. An author of over 100 scientific publications and presentations and an editorial board member of the Journal of Pharmaceutical and Biomedical Analysis, Regina is focused on advancing next-generation precision bioanalysis through multi-omics integration, emerging technologies and data-driven approaches that accelerate personalized medicine and redefine how bioanalytical science informs therapeutic development.
Craig Wheelock
Principal Researcher
Karolinska Institutet (KI; Stockholm, Sweden)
Craig Wheelock is Head of the Unit of Integrative Metabolomics in the Institute of Environmental Medicine at KI, where he also serves as Director of the small molecule mass spectrometry core facility (KI-SMMS). Research in his group focuses on molecular phenotyping of respiratory disease, with a particular interest in investigating the role of lipid mediators in pulmonary physiology. His recent efforts have centered on understanding the role of lipid mediators derived from 18-carbon-containing dietary fatty acids (e.g., linoleic and alpha-linolenic acid) in lung disease. These so-called octadecanoids represent a new subclass of bioactive lipids that are potentially involved in the etiology of respiratory disease. The overarching aim of his research is to identify personalized molecular profiles that can be associated with an individual’s lifestyle, environmental exposure and susceptibility to asthma. When not balancing his time between Sweden and Japan, he enjoys teaching his kids to free dive and stand-up paddleboarding with his dog.
In association with: Waters Corporation (MA, USA)