Episodes

  • Zero-Shot vs Few-Shot - The Secret of Few-Shot Prompting
    Mar 25 2026

    Don't just ask the AI to summarise; give it three examples. Learn why "Few-Shot" prompting is the easiest way to double your AI's accuracy.
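
    The episode's core tip can be sketched in a few lines of code: build the prompt from a handful of worked examples before the real input, rather than a bare instruction. This is a minimal sketch; the helper name, the note/summary pairs, and the prompt layout are illustrative, not taken from the episode, and the model call itself is left out.

```python
def few_shot_prompt(examples, new_input,
                    instruction="Summarise the clinical note in one sentence."):
    """Assemble a few-shot prompt: instruction, worked examples, then the new task."""
    parts = [instruction, ""]
    for note, summary in examples:
        parts += [f"Note: {note}", f"Summary: {summary}", ""]
    parts += [f"Note: {new_input}", "Summary:"]
    return "\n".join(parts)

# Three illustrative (invented) note/summary pairs - the "three examples" idea.
examples = [
    ("Seen for ankle sprain; advised rest and ice.",
     "Ankle sprain managed conservatively."),
    ("Asthma follow-up; inhaler technique reviewed.",
     "Asthma review with inhaler technique check."),
    ("Medication review completed; no changes made.",
     "Medication review, no changes."),
]
prompt = few_shot_prompt(examples, "Annual health check; bloods requested.")
```

    The completed `prompt` string is what you would send to the model in place of a bare "summarise this" instruction.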

    #PromptEngineering #LifeHacks #MedicalAI #AIinMedicine

    Music generated by Mubert https://mubert.com/render

    healthaibrief@outlook.com

    2 mins
  • Is Perplexity Health the Future of Medical AI? The Surprises Behind the Launch
    Mar 24 2026

    Consumer health AI is moving at lightning speed, but is the clinical safety keeping up? We break down the newly launched Perplexity Health, its powerful data connectors, and the regulatory grey area of AI medical advice.


    Perplexity has officially launched Perplexity Health, a powerful new suite of data connectors that integrates Apple Health, wearable data via the Terra API, and electronic health records through b.well. By aggregating this fragmented personal health data, Perplexity's AI agents provide highly personalized answers to user health queries. However, a deep dive into the launch reveals a stark contrast between its aggressive medical marketing and its strict educational disclaimers, highlighting a growing trend of tech giants bypassing traditional pre-market clinical validation.


    Sources:

    - https://www.perplexity.ai/hub/blog/introducing-perplexity-health

    - https://www.perplexity.ai/hub/blog/introducing-the-perplexity-health-advisory-board

    - https://www.perplexity.ai/hub/legal/privacy-policy


    Key Takeaways:

    • How Perplexity Health technically unifies fragmented data from EHRs, Apple Health, and wearables.

    • The critical contradiction between AI health marketing claims and legal "non-medical" disclaimers.

    • Why the retroactive assembly of clinical advisory boards signals a major shift in medical AI regulation.


    0:00 Introduction: The Healthcare Data Land Grab

    0:41 The Evolution of Perplexity: From Search Engine to Specialized Verticals

    1:18 The Architecture of Perplexity Health: Integrating Fragmented Medical Data

    2:30 The Marketing Paradox: Confidence vs. Legal Disclaimers

    4:00 Contradictory Advice: Is It for Patient Prep or Professional Guidance?

    4:45 A Shift in Validation: Launching Before Clinical Testing

    6:00 The Clinical Advisory Board: Stellar Names and Future Safeguards

    7:25 The Regulatory Grey Area: Search Utility vs. Medical Device

    8:30 Conclusion: Great Infrastructure vs. The Need for Clinical Rigor


    Clinical Governance & Educational Disclosure

    This analysis is for educational and informational purposes only. It provides a technical review of AI in healthcare and does not constitute medical advice or treatment.

    • Professional Accountability: If you are a healthcare professional, ensure your use of AI complies with local Trust policies and professional standards (GMC/NMC/HCPC).

    • Evidence-Based Review: These views are my own and do not represent the official position of my University or Hospital Trust.

    • Patient Safety: This video does not establish a doctor-patient relationship. Always seek the advice of a qualified healthcare provider regarding any medical condition.


    Music generated by Mubert https://mubert.com/render

    https://substack.com/@healthaibrief


    #HealthAI #PerplexityHealth #DigitalHealth #MedicalAI #HealthTech #EHR #FutureOfHealthcare #ClinicalAI #MedTech



    9 mins
  • Fitbit AI Health Coach - Medical Records & Google Gemini Integration
    Mar 23 2026

    Fitbit’s new Gemini-powered AI Health Coach now integrates your full medical records. Here is what that means for the future of clinical data and patient care.

    In this deep dive, we analyse Google’s latest update to the Fitbit ecosystem: the integration of Electronic Health Records (EHR) with consumer wearable data. We break down the 15% improvement in sleep staging accuracy, the move into insulin resistance and hypertension research, and the strategic use of IAL2 identity standards via CLEAR and b.well. More importantly, we address the growing regulatory tension between "wellness" marketing and "clinical" reality as AI begins to interpret lab results and medications.

    Key Takeaways

    • The EHR Integration: How IAL2 standards allow Fitbit to securely pull lab results and visit history into a consumer app.
    • The Wellness Loophole: Analysis of the regulatory strategy behind Google’s "not a medical device" disclaimers vs. their metabolic health coaching.
    • Clinical Accuracy: What a 15% increase in sleep staging accuracy means for aligning consumer tech with clinical gold standards.


    0:00 – Introduction: EHR Integration into Fitbit’s AI Health Coach

    0:27 – Strategic Positioning: Google’s Race for Health Data

    0:51 – The Regulatory Paradox: Wellness vs. Medical Advice

    1:18 – Technical Refinement in Sleep Tracking Accuracy

    1:54 – Predictive Modelling for Metabolic Health

    2:16 – CGM Integration and Glycaemic Response Analysis

    2:40 – The Mechanism: Identity Verification and Record Syncing

    3:03 – Personalization vs. Strategic Friction

    3:43 – The Clinical Grey Area and Physician Liability

    4:31 – Brand Risk Management: Why Fitbit Over Google Health

    5:01 – Privacy Policies and the "Black Mirror" Trade-off

    5:31 – Using Clinical Data to Train Future Generative AI Models

    5:50 – External Data Processing and the Right to be Forgotten

    6:18 – Summary: Technical Successes vs Safety Hurdles

    7:18 – The Future of Algorithmic Wellness Frameworks

    7:44 – Innovation vs Human Professional Responsibility


    Clinical Governance & Educational Disclosure

    This analysis is for educational and informational purposes only. It provides a technical review of AI in healthcare and does not constitute medical advice or treatment.

    • Professional Accountability: If you are a healthcare professional, ensure your use of AI complies with local Trust policies and professional standards (GMC/NMC/HCPC).

    • Evidence-Based Review: These views are my own and do not represent the official position of my University or Hospital Trust.

    • Patient Safety: This video does not establish a doctor-patient relationship. Always seek the advice of a qualified healthcare provider regarding any medical condition.

    Music generated by Mubert https://mubert.com/render

    https://substack.com/@healthaibrief

    #HealthAI #Fitbit #GoogleHealth #MedicalRecords #GeminiAI #DigitalHealth #HealthTech #Wearables #MedTech #ClinicalAI #EHRIntegration

    8 mins
  • Data Privacy & HIPAA - Is Your Patient Data Leaking to OpenAI?
    Mar 20 2026

    The million-dollar question: Can you use ChatGPT in a hospital? We discuss Business Associate Agreements (BAAs), local models, and keeping medical data private.


    #HIPAA #GDPR #DataPrivacy #CyberSecurity #AIinMedicine

    Music generated by Mubert https://mubert.com/render


    healthaibrief@outlook.com

    2 mins
  • How ChatGPT and AlphaFold Helped Shrink a Terminal Tumour by 75%
    Mar 19 2026

    Discover how a Sydney data engineer used DeepMind's AlphaFold and ChatGPT to design a world-first personalised mRNA cancer vaccine for his dog.


    In this episode, we deconstruct the "n-of-1" case of Rosie the Staffy, whose terminal mast cell tumours were treated using a bespoke vaccine designed by a non-biologist. We move past the headlines to look at the actual technical workflow: from genomic sequencing and protein-structure prediction to the synthesis of mRNA nanoparticles. This analysis explores the democratization of drug discovery and the role of AI as a scientific project manager in modern oncology.


    Key Takeaways

    • How AlphaFold 3D protein modeling identifies neoantigens for vaccine design.

    • The role of LLMs in navigating complex scientific infrastructures and genomic pipelines.

    • The regulatory and ethical challenges of "rapid-response" personalised medicine.


    0:00 – Meet Paul and Rosie: A DIY AI Success Story

    0:27 – Deconstructing the AI-Driven Medical Workflow

    1:10 – The Data-First Mindset in Genomic Sequencing

    1:48 – Using Google DeepMind’s AlphaFold for Protein Prediction

    2:25 – Synthesizing a Custom mRNA Cancer Vaccine

    2:43 – Results: 75% Reduction in Tumor Volume

    3:00 – Why This Isn’t a "Cure" Yet: The Reality of Metastasis

    3:30 – The Challenge of Tumor Heterogeneity

    4:05 – Pragmatic Skepticism: Analyzing AlphaFold Confidence Scores

    4:30 – Regulatory Hurdles: AI Speed vs. Healthcare Red Tape

    4:51 – Avoiding Narrative and Survivorship Bias in Medical News

    6:10 – The Future of Democratised Drug Discovery

    7:00 – The New Role of Clinicians in the AI Era


    Clinical Governance & Educational Disclosure

    This analysis is for educational and informational purposes only. It provides a technical review of AI in healthcare and does not constitute medical advice or treatment.

    • Professional Accountability: If you are a healthcare professional, ensure your use of AI complies with local Trust policies and professional standards (GMC/NMC/HCPC).

    • Evidence-Based Review: These views are my own and do not represent the official position of my University or Hospital Trust.

    • Patient Safety: This video does not establish a doctor-patient relationship. Always seek the advice of a qualified healthcare provider regarding any medical condition.


    Music generated by Mubert https://mubert.com/render

    https://substack.com/@healthaibrief


    #HealthAI #AlphaFold #mRNA #CancerVaccine #PrecisionMedicine #DeepMind #ChatGPT #Biotech #DigitalHealth #Oncology

    7 mins
  • Microsoft Copilot Health AI and The Systemic Failures Driving Us Towards Similar Medical AI
    Mar 18 2026

    Are tech giants using late-night health searches to justify a massive medical data grab? Discover the strategy behind Microsoft’s Copilot Health launch.


    We analyse the newly released data on how 500,000 people use conversational AI for health, and contrast it with the immediate launch of Copilot Health, a system that ingests EHRs and wearable data to provide what Microsoft calls "medical superintelligence." This breakdown explores the contradiction between regulatory disclaimers and product capabilities, the reality behind late-night symptom searching, and the risks of deploying diagnostic AI without tracking clinical outcomes.


    Source materials, including Microsoft’s blog posts:

    - How people search for health information: https://microsoft.ai/news/health-check-how-people-use-copilot-for-health/

    - The full research report: https://www.microsoft.com/en-us/research/blog/msr-research-item/how-people-use-copilot-for-health/

    - Product release: https://microsoft.ai/news/introducing-copilot-health/


    Key Takeaways:

    • Understand the real data behind how patients are using conversational AI, including the heavy reliance by caregivers coordinating family health.

    • Discover the capabilities of Copilot Health, how it integrates EHRs and wearables, and the strategic use of "trixie" compliance language.

    • Learn why evaluating AI based on engagement metrics rather than downstream clinical outcomes poses a massive risk to patient safety.


    00:00 - 01:13 - Introduction to the Copilot Health launch

    01:13 - 02:40 - Analysis of the Microsoft AI report

    02:40 - 03:13 - Breakdown of how AI is being used

    03:13 - 04:29 - Analysis of AI usage and a critical lens

    04:29 - 05:40 - Introduction to Copilot Health

    05:40 - 06:44 - Comparison to professional medical advice

    06:44 - 07:30 - The psychological trap: cognitive surrender

    07:30 - 08:30 - The lack of independent clinical evaluation

    08:30 - 09:08 - Analysing the AI chat interface

    09:08 - 10:48 - The path forward and the need for clinical trials

    10:48 - 12:04 - Summary and closing thoughts


    Clinical Governance & Educational Disclosure

    This analysis is for educational and informational purposes only. It provides a technical review of AI in healthcare and does not constitute medical advice or treatment.

    • Professional Accountability: If you are a healthcare professional, ensure your use of AI complies with local Trust policies and professional standards (GMC/NMC/HCPC).

    • Evidence-Based Review: These views are my own and do not represent the official position of my University or Hospital Trust.

    • Patient Safety: This video does not establish a doctor-patient relationship. Always seek the advice of a qualified healthcare provider regarding any medical condition.


    Music generated by Mubert https://mubert.com/render

    https://substack.com/@healthaibrief


    #HealthTech #ArtificialIntelligence #DigitalHealth #CopilotHealth #MedicalData #HealthAI #HealthcareInnovation #EHR

    12 mins
  • The Age of the Medical Generalist: Foundation Models in Healthcare
    Mar 17 2026

    The era of single-task medical algorithms is over. Discover how multimodal foundation models can transform radiology, ultrasound, and metabolic tracking.


    Healthcare AI is moving rapidly beyond text-based large language models. This comprehensive analysis breaks down the latest wave of medical foundation models, including MedVersa, OMAFound, BrainIAC, EchoJEPA, and GluFormer. We examine how self-supervised learning, latent predictive architectures, and LLM-orchestrators are solving the data-scarcity bottleneck and enabling multi-cancer screening from a single scan.


    References:

    https://www.nature.com/articles/s41593-026-02202-6 - brain MRI

    https://www.nature.com/articles/s44360-026-00055-8 - breast and lung cancer CT

    https://ai.nejm.org/doi/full/10.1056/AIoa2500595 - diverse medical imaging

    https://www.nature.com/articles/s41467-026-70077-z - retinal imaging

    https://www.nature.com/articles/s41586-025-09925-9 - glucose monitoring

    https://arxiv.org/abs/2602.02603 - echocardiography

    https://arxiv.org/abs/2602.15913 - review


    Key Takeaways:

    • How latent predictive architectures (JEPA) ignore ultrasound noise to achieve state-of-the-art echocardiogram analysis with 1% of the data.

    • The operational workflow of OMAFound, which opportunistically screens for breast cancer on routine lung CTs, boosting radiologist sensitivity by nearly 40%.

    • Why tokenizing continuous glucose monitoring (CGM) data like language predicts long-term cardiovascular risk better than standard HbA1c metrics.
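
    The CGM tokenization idea in the last takeaway can be illustrated with a toy sketch: discretise each glucose reading into one of a fixed number of bins, so a trace becomes a token sequence a language-model-style architecture can ingest. The bin range, bin count, function name, and sample trace below are invented for illustration; they are not GluFormer's actual scheme.

```python
def tokenize_cgm(readings_mmol, lo=2.0, hi=20.0, n_bins=64):
    """Map each glucose reading (mmol/L) to an integer token in [0, n_bins - 1]."""
    width = (hi - lo) / n_bins
    tokens = []
    for g in readings_mmol:
        g = min(max(g, lo), hi - 1e-9)  # clamp to the representable range
        tokens.append(int((g - lo) / width))
    return tokens

# An invented six-reading trace covering a post-meal rise and fall.
trace = [5.1, 5.4, 7.9, 10.2, 8.6, 6.0]
tokens = tokenize_cgm(trace)
```

    Once a trace is a token sequence, it can be modelled with the same next-token machinery used for text, which is the framing the takeaway describes.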


    00:00 Introduction to Medical Foundation Models

    00:18 Overview of Multimodal Foundation Models

    00:46 Key Challenges and Operational Hurdles

    01:06 Why LLMs Struggle with Medical Data

    01:22 The Visual and Temporal Nature of Medicine

    01:43 The Shift to Multimodal Reasoning

    01:58 Fine-Tuning and Model Adaptation

    02:10 Real-World Medical AI Architectures

    02:35 Chest X-Ray and Segmentation Models

    03:12 Strengths and Weaknesses of Foundation Models

    04:06 Case Study 1: Volumetric Imaging (BrainIAC)

    06:36 Case Study 2: Non-Contrast CT (OMAFound)

    08:44 Case Study 3: MedVersa (Multimodal Generalist)

    10:23 Case Study 4: EchoJEPA (Echocardiography)

    13:10 Case Study 5: Glucose Monitoring (GluFormer)

    15:13 Maturation of the Medical AI Field

    17:14 Final Reflections and Future Outlook


    Clinical Governance & Educational Disclosure:

    This concise summary of AI technology is for educational and informational purposes only. It provides a technical analysis of AI capabilities in healthcare and does not constitute medical advice, diagnosis, or treatment.

    • Clinical Accountability: If you are a healthcare professional, ensure any implementation of AI tools complies with your local Trust’s policies, data governance protocols, and professional regulatory standards (GMC/NMC/HCPC or equivalent).

    • Independent Evidence-Based Review: The views expressed are my own and do not represent the official position of any University, Hospital Trust, employer, or regulatory body.

    • Patient Safety: This video does not establish a doctor-patient relationship. Members of the public should always seek the advice of a qualified healthcare provider regarding any medical condition.

    Music generated by Mubert https://mubert.com/render

    https://substack.com/@healthaibrief

    Medical AI, Healthcare Foundation Models, Radiology AI, Multimodal AI, EchoJEPA, OMAFound, MedVersa, Brain MRI segmentation, Continuous Glucose Monitoring AI, self-supervised learning medical imaging, clinical AI integration.

    #HealthTech #MedicalAI #Radiology #DigitalHealth #ArtificialIntelligence

    18 mins
  • Google AI vs Human Doctor - AMIE AI Clinical Trial - Real-World Primary Care Results
    Mar 16 2026

    Is Google’s AMIE AI ready to replace the clinical intake interview? We break down the first real-world clinical feasibility study of conversational AI in primary care.


    In this episode, we analyse a major prospective trial from Google Research and DeepMind testing the AMIE system on 100 urgent care patients. While the AI achieved zero safety stops and matched human doctors in diagnostic accuracy, a closer look at the workflow reveals significant hurdles. We explore the mechanics of clinical trust, why the messy reality of patient dialogue is the ultimate stress test, and why human doctors still beat AI on practical, cost-effective care plans.


    Link to research report: https://arxiv.org/abs/2603.08448

    DOI: https://doi.org/10.48550/arXiv.2603.08448

    Link to associated blog post: https://research.google/blog/exploring-the-feasibility-of-conversational-diagnostic-ai-in-a-real-world-clinical-study/


    Key Takeaways

    • How conversational AI performs in a real-world primary care clinic without simulated patients.

    • Why diagnostic accuracy doesn't automatically equal clinical trust, and why seeing the actual history-taking process is vital.

    • The critical difference between an AI’s theoretical management plan and a human doctor’s practical, cost-effective clinical decision-making.


    00:00 – Intro: A scenario of a patient completing an AI-led clinical interview.

    00:32 – Study Introduction: Google’s AMIE (Articulate Medical Intelligence Explorer) powered by Gemini 2.5 Pro.

    01:30 – Methodology: Real-world trials in a Boston primary care clinic with physician safety monitoring.

    02:30 – Safety Results: Zero safety stops required during the trial encounters.

    03:01 – Accuracy Results: Diagnostic performance compared to human primary care providers.

    04:03 – Patient Feedback: Acceptance levels.

    04:35 – Limitations: Issues with dialogue realism and the need for transcript transparency.

    06:18 – Practicality Gaps: Why human doctors still outperformed AI on cost-effective management plans.

    07:50 – Implementation Hurdles: Hardware limitations and demographic skews in the study.

    09:31 – Governance & Validation: The importance of independent peer review (contrasted with Amazon).

    10:51 – Future Outlook: Integration with Electronic Health Records (EHR) and multimodal (voice/image) capabilities.

    13:34 – Conclusion: Summary of AMIE as a robust proof of concept for the future of patient journeys.


    Clinical Governance & Educational Disclosure

    This analysis is for educational and informational purposes only. It provides a technical review of AI in healthcare and does not constitute medical advice or treatment.

    • Professional Accountability: If you are a healthcare professional, ensure your use of AI complies with local Trust policies and professional standards (GMC/NMC/HCPC).

    • Evidence-Based Review: These views are my own and do not represent the official position of my University or Hospital Trust.

    • Patient Safety: This video does not establish a doctor-patient relationship. Always seek the advice of a qualified healthcare provider regarding any medical condition.


    Music generated by Mubert https://mubert.com/render

    https://substack.com/@healthaibrief


    #HealthTech #MedicalAI #GoogleHealth #PrimaryCare #ClinicalInformatics #DigitalHealth #DeepMind #FutureOfMedicine #EHR #MedicalInnovation

    15 mins