
Health and Explainable AI Podcast


By: Pitt HexAI Lab and the Computational Pathology and AI Center of Excellence

The Health and Explainable AI podcast is a collaborative initiative between the Health and Explainable AI (HexAI) Research Lab in the Department of Health Information Management at the School of Health and Rehabilitation Sciences, and the Computational Pathology and AI Center of Excellence (CPACE), at the University of Pittsburgh School of Medicine. Led by Ahmad P. Tafti, Hooman Rashidi and Liron Pantanowitz, the podcast explores the transformative integration of responsible and explainable artificial intelligence into health informatics, clinical decision-making, and computational medicine.
Episodes
  • Martin Raison CTO of Nabla on Architecting the Agentic AI Era in Healthcare
    Mar 18 2026

Martin Raison, Co-founder and CTO of Nabla, speaks with Pitt HexAI host Jordan Gass-Pooré about Nabla’s central role in architecting the agentic AI era in healthcare. Martin details Nabla’s evolution from a specialized ambient scribing tool into a comprehensive "Adaptive Agentic Platform". They discuss the significant challenges involved in enabling AI agents to perform complex clinical tasks and how Nabla has had to tackle a labyrinth of structural and data hurdles. These range from the integration of fragmented, unstructured patient charts and hospital guidelines to the complex technicalities of agent discoverability, interoperability, and the establishment of standardized accountability frameworks.


    The interview highlights a significant shift in Nabla's technical strategy: moving from probabilistic Large Language Models (LLMs) toward world models. Raison explains that while LLMs are effective at generating text, they lack a fundamental understanding of cause-and-effect and the ability to simulate evolving environments. To address this, Nabla has entered an exclusive partnership with Advanced Machine Intelligence (AMI), a research lab co-founded by Yann LeCun. This collaboration provides Nabla with early access to world model technologies that can "imagine" different scenarios and simulate the consequences of actions, providing a more deterministic and auditable path for AI in high-stakes clinical settings.


    In discussing the technical foundations of computational health, Martin addresses the critical need for inference optimization to manage the millions of model executions required daily at scale. Furthermore, Martin envisions a fundamental shift in the paradigm of AI inference through the adoption of world models. He suggests that these architectures will blur the traditional boundary between training and inference by enabling continuous learning, where the model adjusts and evolves in real-time based on new data and clinician feedback, rather than being limited by the static context windows of current LLMs.


    Beyond the core technology, Martin and Jordan discuss the critical importance of explainability and interoperability in the "agentic web" of healthcare. They specifically highlight architectural initiatives like MIT’s Project NANDA, which focuses on the foundational layers of the agentic web, including critical elements like discoverability and authentication that go beyond the AI layer alone. Martin emphasizes that the sector must move toward standardized "Agent Fact Files" to ensure accountability and ease of governance as organizations begin to manage thousands of agents. He concludes by looking toward a future of "emergent intelligence," where the collaboration between multiple models creates sophisticated patterns that can eventually help clinicians improve their own professional practice over time.

    38 mins
  • Ekaterina Kldiashvili from the Tbilisi Medical Academy on Responsible Uses of AI, Medical Education and Inter-University Collaboration
    Feb 7 2026

    Ekaterina Kldiashvili, Vice Rector for Research at Petre Shotadze Tbilisi Medical Academy, and Pitt’s HexAI podcast host, Jordan Gass-Pooré, discuss public health, the incorporation of AI into healthcare, responsible uses of AI, medical education and inter-university collaboration.

Ekaterina and Jordan explore opportunities and concerns surrounding commercial AI applications, noting that while AI can improve healthcare efficiency, it must support clinical reasoning rather than replace it. They cover the Tbilisi Medical Academy’s work on responsible AI usage, particularly in educating providers and patients, demonstrating how AI-enhanced text and visuals can significantly improve patient understanding and follow-up rates. They also touch on challenges associated with the use of AI in non-English languages like Georgian and delve into advances in computational genomics and rapid molecular diagnostics. Looking ahead, they discuss the strengthening ties between the University of Pittsburgh and the Tbilisi Medical Academy through knowledge sharing and faculty training, and consider broader inter-university collaboration, including the idea of students investigating how different cultures and communities come to trust and accept AI in healthcare settings.

    28 mins
  • Richard Bonneau from Genentech on Drug Discovery, Computational Sciences and Machine Learning
    Dec 18 2025

    Richard Bonneau, Vice President of Machine Learning for Drug Discovery at Genentech and Roche, provides Pitt’s HexAI podcast host, Jordan Gass-Pooré, with an insider view on how his team is fundamentally changing and accelerating how new drug candidate molecules are designed, predicted, and optimized.

    Geared for students in computational sciences and hybrid STEM fields, the episode introduces listeners to uses of AI and ML in molecular design, the biomolecular structure and structure-function relationships that underpin drug discovery, and how distinct teams at Genentech work together through an integrated computational system.

    Richard and Jordan use the opportunity to touch on how advances in the molecule design domain can inspire and inform advances in computational pathology and laboratory medicine. Richard also delves into the critical role of Explainable AI (XAI), interpretability, and error estimation in the drug design-prototype-test cycle, and provides advice on domain knowledge and skills needed today by students interested in joining teams like his at Genentech and Roche.

    30 mins