AI Model Trust & Compliance: Enabling Scalable and Responsible AI in Drug Discovery

Insights from the BiotechX Panel

Alliance for Artificial Intelligence in Healthcare

Panel:

Ahmet Hamdi Cavusoglu (Merck Digital Sciences Studio; moderator)

Caroline Farrell (Foley Hoag LLP)

Jay Modh (Intuitive Cloud)

Dr. Stacie Calad‑Thomson (NVIDIA)

Executive Summary

Healthcare is past asking whether AI adds value; the question now is how to deploy it in ways that are explainable, compliant, and reliably beneficial to patients. The BiotechX panel convened by AAIH examined what it takes to translate promising pilots into production systems that fit inside drug discovery and development workflows. The clear message was that trust must be designed in from the beginning. This means defining the context of use, engaging regulators early, documenting decisions and data lineage, and validating performance in wet‑lab settings where it counts.

Panelists converged on a practical view of evidence. Rather than claiming end-to-end automation, the strongest programs show cycle-time compression and resource efficiency while maintaining experimental confirmation as the gold standard. Proof that matters looks like moving from target to IND faster than historical baselines, synthesizing far fewer molecules to reach a development candidate, and maintaining auditable records that demonstrate why decisions were made. These operational gains are amplified by new collaboration models, such as federated learning consortia for ADME datasets, that improve model performance without requiring raw data pooling. Across the discussion, compliance was reframed as an enabler: when guardrails are embedded into daily operations, teams move faster and make fewer costly mistakes.

This paper synthesizes the panel’s insights into a set of design principles, operating practices, and policy recommendations for organizations seeking to scale trustworthy AI in drug discovery. It is intended as a pragmatic companion to FDA’s concise, conceptual guidance: a field-tested view on what “show your work” means in practice.

Panel Context

The session, AI Model Trust & Compliance: Enabling Scalable and Responsible AI in Drug Discovery, was held at BiotechX on September 17, 2025. Over thirty minutes, a cross-section of perspectives (operator, builder, regulator, and ecosystem convener) examined the realities of moving from pilots to production. Panel moderator Ahmet Hamdi Cavusoglu, Senior Director at Merck Digital Sciences Studio, framed the discussion by noting, “This conversation is about more than powerful models. It’s about embedding them into real workflows, navigating regulation, ensuring trust, and creating measurable impact for patients.” The panelists’ experiences ranged from federal health policy and reimbursement to cloud-scale deployment, autonomous laboratories, and cross-industry collaboration. The remarks summarized below reflect that diversity of viewpoints while maintaining a shared focus on measurable impact for patients.

Conclusion

The BiotechX panel underscored that trustworthy AI in drug discovery will not emerge from technical novelty alone. Success depends on embedding compliance into daily operations, demonstrating measurable acceleration with rigorous validation, and building collaborative models that scale without sacrificing privacy or integrity. The FDA’s sparse draft guidance leaves space for industry to lead, and organizations that can “show their work” will set the standards others follow. By reframing compliance as an accelerator, aligning partnerships around shared evidence requirements, and keeping reimbursement in view, the field can move from promising pilots to reliable practice. Above all, the path forward is one of transparency, accountability, and community stewardship, ensuring that AI shortens timelines without compromising safety or trust.
