How AI is reshaping compliance with smart bioanalytical validation

After presenting at the AAPS Summer Scientific Forum 2025 (Indianapolis; 14-17 July), Regulatory Compliance Specialist Srividya Narayanan sat down with us to take a deep dive into her talk.
Srividya Narayanan,
Graduate Student – Regulatory Affairs,
Northeastern University (MA, USA)
Srividya Narayanan is a regulatory affairs professional with a clinical background as a dentist specialising in periodontics. After earning her Master’s in Regulatory Affairs from Northeastern University, she now serves as a Regulatory Compliance Specialist at Culture Care Collective (MA, USA), where she has streamlined documentation workflows and reduced preparation time by 30% while maintaining 100% Health Insurance Portability and Accountability Act (HIPAA) compliance alongside HITRUST and SOC 2 compliance templates. An innovator at heart, she developed Regulatory Navigator, an AI compliance platform serving professionals across 50 major markets, and won second place at the MIT Hacking Medicine Grand Hack for SynapSense, an AI tool for early Parkinson’s detection. With expertise spanning FDA submissions, clinical evaluation documentation and global regulatory strategies across pharmaceuticals, medical devices and digital health, Srividya combines clinical insight with regulatory expertise to drive successful product launches while prioritising patient safety.
- To kick us off, could you share what first sparked your interest in the intersection of AI and bioanalytical method validation? Why do you believe this is such a critical moment for the field?
- In your talk at the AAPS Summer Scientific Forum in July, you discussed several case studies demonstrating the successful integration of AI in method validation processes. Could you share some insights from these?
- In which specific areas of bioanalytical validation do you see AI providing clear superiority over traditional methods, and where do traditional approaches still hold advantages?
- Are there any surprising patterns or insights AI revealed about bioanalytical validation processes that human-driven methods had consistently overlooked?
- How do you differentiate between AI as a support tool vs. AI as a decision-maker in validation processes, and how does that distinction impact regulatory acceptance?
- How do you approach the challenge of making AI-driven bioanalytical validation decisions transparent and explainable to regulatory authorities? What techniques or frameworks do you use to ensure that AI recommendations can be properly justified and audited?
- Given that regulatory bodies may have different digital strategies globally, how would you suggest designing AI-driven validation processes that are ‘globally compliant’?
- What would you say is the most important first step for a lab that’s interested in integrating AI into its bioanalytical workflows but is concerned about potential risks to data integrity?
To kick us off, could you share what first sparked your interest in the intersection of AI and bioanalytical method validation? Why do you believe this is such a critical moment for the field?
Ah, that’s a great question! We have to go back 2 years to my master’s in dental surgery, where I conducted my thesis research comparing hydroxyapatite crystals with gingival regenerative tissue. That involved validating my radiographic methods so I could measure bone regeneration accurately and reproducibly: manually validating image acquisition, pixel-to-density conversion and bone-fill measurements from X-rays, while also assessing for clinically meaningful changes in bone density. The only difference from what I’m doing today is that I was measuring bone density instead of drug concentration; the principle is the same. The spark came when I realized I was spending 80% of my time ensuring measurement reproducibility rather than advancing actual science. That frustration led me to explore how AI algorithms could validate bone density in seconds, but what fascinated me was imagining this technology being applied to the thousands of bioanalytical methods that validate every drug on the market. This realization drove me to explore regulatory affairs. As we can see, the FDA and EMA have released AI/ML frameworks, the technology has matured and the healthcare industry has reached a point where we can’t afford months of manual validation when AI can deliver better quality at speed. I believe we have both the technology and the regulatory pathways to make this transformation happen.
In your talk at the AAPS Summer Scientific Forum in July, you discussed several case studies demonstrating the successful integration of AI in method validation processes. Could you share some insights from these?
There are many compelling case studies, but I’m happy to share two that I find the most exciting. In one published research article, a random forest model (a type of machine learning classifier) was trained on historical data from 100+ bioequivalence trials. When applied prospectively to 30 new formulations, it flagged the high-risk candidates, which in turn reduced the need for in vivo studies by roughly 40%. Isn’t that fascinating? AI allowed us to make more ethical decisions and avoid unnecessary human trials.
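The paper itself does not publish code, but a rough sketch of this kind of risk-flagging classifier, written here with scikit-learn and with purely illustrative file names, column names and cut-offs, could look like this:

```python
# Illustrative sketch only: a random-forest classifier that flags formulations
# at high risk of failing bioequivalence, in the spirit of the study above.
# File names, feature names and the risk threshold are assumptions.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Historical trials: formulation properties plus a pass/fail bioequivalence label
historical = pd.read_csv("historical_be_trials.csv")        # hypothetical export
features = ["dissolution_rate", "particle_size", "solubility", "dose"]  # assumed
X, y = historical[features], historical["be_failed"]

model = RandomForestClassifier(n_estimators=300, random_state=0)
print("Cross-validated accuracy:", cross_val_score(model, X, y, cv=5).mean())
model.fit(X, y)

# Prospective use: score new formulations and flag the high-risk ones for
# extra in vitro work before committing to an in vivo study.
new_formulations = pd.read_csv("new_formulations.csv")      # hypothetical export
risk = model.predict_proba(new_formulations[features])[:, 1]
new_formulations["high_risk"] = risk > 0.5                  # assumed cut-off
print(new_formulations[["formulation_id", "high_risk"]])
```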
The next one is like a Harry Potter WAND: a WAND decision-tree algorithm that analyzes screening data from 150+ anti-idiotype antibody pairings and predicts the optimal reagent combination in under an hour. Compare that with traditional wet-lab methods, which would take days or even weeks. With WAND, development time is cut by more than 70%, all while maintaining GLP-compliant documentation. What excites me most in both these cases is how AI is enhancing data quality and addressing ethical concerns, not just speed.
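WAND itself is a proprietary workflow, so the sketch below is only a loose illustration of the underlying idea of scoring candidate antibody pairings with a decision tree; the file names, columns and features are assumptions, not the published method:

```python
# Rough sketch only: ranking candidate anti-idiotype antibody pairings with a
# decision tree. This is NOT the WAND implementation; all names are assumptions.
import pandas as pd
from sklearn.tree import DecisionTreeClassifier

past = pd.read_csv("past_pair_screens.csv")        # pairs already judged in the wet lab
candidates = pd.read_csv("candidate_pairs.csv")    # new screening data to rank
features = ["signal_to_noise", "background", "drift_pct", "lot_variability"]

# Learn which screening profiles historically produced acceptable assay reagents
tree = DecisionTreeClassifier(max_depth=4, random_state=0)
tree.fit(past[features], past["pair_acceptable"])

# Score every candidate pairing and surface the top three for confirmation runs
candidates["score"] = tree.predict_proba(candidates[features])[:, 1]
print(candidates.sort_values("score", ascending=False)
                .head(3)[["capture_ab", "detection_ab", "score"]])
```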
Read the two case studies here:
- Machine learning driven bioequivalence risk assessment at an early stage of generic drug development
- Automated bioanalytical workflow for ligand binding based pharmacokinetic assay development
In which specific areas of bioanalytical validation do you see AI providing clear superiority over traditional methods, and where do traditional approaches still hold advantages?
You know, when we talk about how AI is transforming every sector, it’s really about the ability to predict and prevent problems before they occur. Let me give you an example: column degradation. With traditional methods, we find out a column is dead when our run fails. But with ML, we can spot the warning signs weeks ahead by picking up on subtle changes in peak shape that our human eyes miss. That’s equivalent to identifying an issue four runs before manual quality control would catch it. It’s like a car warning you of a tire blowout a week in advance rather than failing on the road. Another cool thing is that AI can look at hundreds or even thousands of old chromatograms and predict how a method will behave under new conditions. That’s huge when we are working with expensive biologics.
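As a loose illustration of that early-warning idea (a simple trend fit rather than the full ML model she describes, with assumed file names, columns and limits):

```python
# Minimal sketch: trend a peak-shape metric (USP tailing factor) across recent
# injections and warn before it reaches the acceptance limit. The file name,
# column names, limit and look-ahead window are illustrative assumptions.
import numpy as np
import pandas as pd

runs = pd.read_csv("system_suitability_log.csv").sort_values("injection_number")
recent = runs.tail(50)                    # fit the trend on recent injections only

# Simple linear trend of tailing factor versus injection number
slope, intercept = np.polyfit(recent["injection_number"], recent["tailing_factor"], 1)

LIMIT = 2.0          # assumed tailing-factor acceptance limit
LOOK_AHEAD = 100     # warn if the limit would be crossed within ~100 injections

projected = slope * (recent["injection_number"].max() + LOOK_AHEAD) + intercept
if slope > 0 and projected > LIMIT:
    print(f"Warning: projected tailing factor {projected:.2f} exceeds {LIMIT} "
          f"within the next {LOOK_AHEAD} injections; plan column replacement.")
```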
On the flip side, traditional validation isn’t going anywhere, and honestly, it shouldn’t vanish completely. When a researcher is running a routine small-molecule or single-analyte assay that has been around for decades, do we need AI? Maybe not, because traditional methods are also faster and cheaper, and the researchers and technicians already know them inside out. Moreover, when something unusual occurs, say an unexpected contamination, human (traditional) troubleshooting will still outperform AI every time, because we think outside the datasets and apply a creative problem-solving ability that algorithms lack. AI is not needed in every aspect of bioanalytical method validation; the key is knowing when to use which tool. It’s about being smart about the best tool for the job.
Are there any surprising patterns or insights AI revealed about bioanalytical validation processes that human-driven methods had consistently overlooked?
Oh absolutely! Some of the patterns AI has uncovered have been real ‘aha’ moments for the industry. In one analysis of 20 labs and over 10,000 HPLC injections, the first runs on Monday mornings were often bad, but nobody knew why. Everyone just accepted it: “Oh, it’s Monday, run it again.” Then AI comes along, analyzes data from dozens of HPLC systems across the different labs, and boom, it finds a clear pattern. The issue was microscopic bubbles that would form in the solvent lines while the instruments sat idle over the weekend. These bubbles were messing up those first few Monday runs, shifting retention times, you name it. Once we knew what the problem was, the solution was simple: labs started a quick conditioning cycle on Sunday nights, flushing the system and eliminating the Monday outliers entirely. This example shows the value of AI in validation, which is not only about automation but also about seeing what we’ve been blind to for years.
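A toy version of that kind of cross-lab pattern hunt can be as simple as comparing retention-time deviations for the first injection after an idle weekend against everything else; the file and column names below are assumptions:

```python
# Toy sketch of the cross-lab pattern hunt described above: compare the
# retention-time deviation of each instrument's first injection of the day
# against all other injections, split by weekday. Column names are assumptions.
import pandas as pd

inj = pd.read_csv("multi_lab_injections.csv", parse_dates=["timestamp"])
inj = inj.sort_values("timestamp")
inj["weekday"] = inj["timestamp"].dt.day_name()
inj["run_date"] = inj["timestamp"].dt.date

# First injection per instrument per day (True only for the earliest one)
inj["first_of_day"] = ~inj.duplicated(subset=["instrument_id", "run_date"])

# If Monday first-of-day runs stand out, look at idle-time effects (e.g. bubbles
# in solvent lines) before blaming the analyst or the method.
summary = (inj.groupby(["weekday", "first_of_day"])["rt_deviation_pct"]
              .agg(["mean", "count"]))
print(summary)
```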
How do you differentiate between AI as a support tool vs. AI as a decision-maker in validation processes, and how does that distinction impact regulatory acceptance?
That’s a great question. The way I see it, there’s a huge difference between AI helping us make decisions and AI making decisions for us. When AI is a support tool, it provides recommendations or flags data for humans to review, but it doesn’t make the final decision. For instance, an ML model may highlight chromatograms where the peak shape is off, but then we look at them, check the raw data and decide whether a rerun is needed. In this way, AI helps us spot issues faster, but ultimately a human still signs off and makes the call.
Now, when AI becomes the decision-maker, that’s totally different. If AI is allowed to automatically accept or reject a batch, or to adjust parameters without human oversight, it crosses into a high-risk domain. That makes regulators like me really nervous, and here’s why. When AI is a support tool, regulators are comfortable when it follows FDA guidances, because we see it like any other analytical tool: it helps, but doesn’t replace, human judgment, and the accountability is clear even if something goes wrong. But AI as a decision-maker is scary, because we want to know who is accountable when it messes up. How do we validate the AI’s judgment, and what happens in situations where it can’t make a decision? These systems need far more scrutiny, extensive validation and proper evidence that they’re safe. Maybe someday we’ll trust AI to make decisions on its own, but right now, human supervision is needed.
How do you approach the challenge of making AI-driven bioanalytical validation decisions transparent and explainable to regulatory authorities? What techniques or frameworks do you use to ensure that AI recommendations can be properly justified and audited?
This is one of the biggest challenges we face, as we know transparency is non-negotiable. Let me share my approach. First, I always start by defining the context of use: exactly what decision(s) the model supports and what its output is used for. No mysteries, just pure clarity. Second, I keep “human-in-the-loop” validation at critical checkpoints. As in the examples above, even when AI flags anomalies and suggests parameter adjustments, a trained scientist always reviews and signs off before any batch is accepted or rejected. AI suggests, humans decide, and we document both parts.
Another cool thing to note is that there are techniques like LIME (Local Interpretable Model-agnostic Explanations), which show which factors pushed the model towards a particular conclusion, so we can trace exactly why it made each recommendation. Apart from these, there is also the FDA’s credibility framework covering version and change controls. When auditors come, I can walk them through what the pattern was, show where it was flagged for review and explain the decision we came to.
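For readers curious what a LIME explanation looks like in practice, here is a minimal sketch for a tabular model of the kind discussed above; the model, file and feature names are illustrative assumptions, and only the lime library calls reflect the real package:

```python
# Minimal sketch of explaining one flagged run with LIME. The random-forest
# model, file name and feature names are assumptions for illustration.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

train = pd.read_csv("validated_runs.csv")               # hypothetical training export
features = ["tailing_factor", "plate_count", "rt_shift_pct", "area_rsd"]
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(train[features], train["flagged"])

explainer = LimeTabularExplainer(
    training_data=train[features].to_numpy(),
    feature_names=features,
    class_names=["pass", "flagged"],
    mode="classification",
)

# Explain why one new run was flagged; each weight shows that feature's pull
new_run = np.array([2.3, 4100.0, 0.8, 3.5])             # illustrative measurements
explanation = explainer.explain_instance(new_run, model.predict_proba, num_features=4)
for feature, weight in explanation.as_list():
    print(f"{feature}: {weight:+.3f}")
```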
Given that regulatory bodies may have different digital strategies globally, how would you suggest designing AI-driven validation processes that are ‘globally compliant’?
That’s a tricky question, because building processes on a common risk-based foundation while respecting each locality’s requirements is difficult. I always start with the basics that everyone agrees on: defining exactly what the AI does. Whether you’re in India, Boston or anywhere in between, everyone needs to understand the scope. Then I look at what that specific region cares about. For instance, the FDA wants to know how you’ll validate and document each planned model update post-market without triggering a new review each time; Japan, on the other hand, focuses on key parameters such as predictability, data quality and autonomy. So instead of running a separate framework for each region, I keep a universal audit trail: every model update and every human override goes directly into one central system. That way, whether an FDA inspector or a Pharmaceuticals and Medical Devices Agency auditor shows up, they can follow the same trail to understand my company’s decisions. The key is to document evidence that the system is safe, effective and under control. So instead of starting from scratch, we build one strong foundation and adjust the presentation for each audience. What I’ve learned is that, worldwide, regulators want the same assurances; they just ask for them differently.
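She does not prescribe a specific format for that central audit trail, but a minimal, illustrative record might capture the fields every regulator asks about; every name below is an assumption:

```python
# Illustrative sketch of one entry in a central, append-only audit trail. Field
# names are assumptions; the point is that a single record captures what the
# model recommended, which version produced it and what the human decided.
import hashlib
import json
from datetime import datetime, timezone

def audit_entry(model_version, input_payload, recommendation, reviewer, decision, rationale):
    """Build one audit-trail record; raw input data is hashed rather than stored."""
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "input_sha256": hashlib.sha256(
            json.dumps(input_payload, sort_keys=True).encode()).hexdigest(),
        "ai_recommendation": recommendation,
        "human_reviewer": reviewer,
        "final_decision": decision,
        "override": decision != recommendation,
        "rationale": rationale,
    }

entry = audit_entry(
    model_version="peak-anomaly-rf v1.4.2",                # hypothetical model tag
    input_payload={"run_id": "RUN-0231", "tailing_factor": 2.3},
    recommendation="reject",
    reviewer="analyst_017",
    decision="accept",
    rationale="Known carry-over from previous injection; repeat was within limits.",
)
with open("audit_trail.jsonl", "a") as log:                # append-only log file
    log.write(json.dumps(entry) + "\n")
```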
What would you say is the most important first step for a lab that’s interested in integrating AI into its bioanalytical workflows but is concerned about potential risks to data integrity?
If I have to choose one thing, it would be: invest in your data before you invest in your algorithm.
The very first step is to examine the data you have. I’m talking about a full audit: going through 6 months of your assay data and asking yourself, is this complete? Can I trust it? What am I missing? What I’ve learnt is that auditing data is like detective work, and checking all printouts and results retrospectively works really well. You might find missing quality control runs that no one had noticed, or manual changes without any explanation, all the nitty-gritty details. When AI learns from incomplete data, we get messy analysis in return. I would recommend blocking out 2-3 months if it’s a huge dataset, and cleaning everything up before touching any AI. Yes, I get it, nobody wants to spend months cleaning old data. But with AI there is no escaping it: like a magnifying glass, it amplifies every detail we give it. So when the auditors show up, we will be glad we did this boring work; they care far more about our data quality than our algorithm choice.
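As a deliberately simple starting point, an audit script along these lines can surface the two gaps she mentions, missing QC runs and undocumented manual changes, before any model ever sees the data; the file and column names are assumptions about a LIMS export:

```python
# Simple sketch of the pre-AI data audit described above: find batches missing
# expected QC levels and manual result changes with no documented reason.
# File and column names are illustrative assumptions about a LIMS export.
import pandas as pd

runs = pd.read_csv("six_months_assay_runs.csv")
change_log = pd.read_csv("result_change_log.csv")

# 1) Batches that are missing one or more expected QC levels
expected_qc = {"QC_low", "QC_mid", "QC_high"}
qc_seen = (runs[runs["sample_type"] == "QC"]
           .groupby("batch_id")["qc_level"].apply(set).to_dict())
missing_qc = {batch: sorted(expected_qc - qc_seen.get(batch, set()))
              for batch in runs["batch_id"].unique()
              if expected_qc - qc_seen.get(batch, set())}
print("Batches with missing QC levels:", missing_qc)

# 2) Manual result changes that have no explanation recorded
undocumented = change_log[
    (change_log["change_type"] == "manual")
    & (change_log["reason"].isna() | (change_log["reason"].str.strip() == ""))
]
print(f"{len(undocumented)} manual changes without a documented reason")
```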