For Humanity: An AI Risk Podcast

By: The AI Risk Network

For Humanity: An AI Risk Podcast is the AI risk podcast for regular people. Peabody, duPont-Columbia, and multi-Emmy Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans, possibly in as soon as 2 to 10 years. This podcast is solely about the threat of human extinction from AGI. We’ll name and meet the heroes and villains, explore the issues and ideas, and explain what you can do to help save humanity.

The AI Risk Network (theairisknetwork.substack.com)
Social Sciences
Episodes
  • We Debated the Future of AI Safety in Brussels — Here's What Happened
    Mar 15 2026

    In this episode of For Humanity, John travels to Brussels, Belgium for PauseCon — the global gathering of Pause AI volunteers and advocates — joined by board member and author Louis Berman and filmmaker Beau Kershaw.

    The goal: train activists to be more effective in the fight against AI risk. What unfolded was one of the most honest conversations in the AI safety movement about why, despite 80% public support, almost nobody is actually showing up.

    John didn’t pull punches. Nothing is working. Not fast enough. Not at the scale we need. But the energy is out there — and this episode is about where to find it and how to channel it.

    The centerpiece is a live debate between John and Max Winga of Control AI on one of the most divisive strategic questions in the movement:

    Should we talk about extinction risk directly — or meet people where they are with the harms happening right now?

    Together, they explore:

    * Why 80% public support hasn’t translated into mass mobilization

    * The case for leading with existential risk vs. “mundane” AI harms

    * Data centers, community opposition, and financial pain as a strategy

    * Why John believes laws and treaties alone won’t save us

    * The winning state: making unsafe AI bad for business

    * What’s actually moving the needle in the US right now

    * How to talk to someone about AI risk without losing them

    * The “yes and” approach vs. the AI safety world’s love of “no but”

    If you've ever wondered why the AI safety movement struggles to break through despite overwhelming public agreement — this episode is required viewing.

    📺 Subscribe to The AI Risk Network for weekly conversations on how we can confront the AI extinction threat.



    Get full access to The AI Risk Network at theairisknetwork.substack.com/subscribe
    1 hr and 41 mins
  • “My AI Husband” – Inside a Human–AI Relationship | For Humanity Ep. 80
    Feb 28 2026

    TW: This episode deals with mental health, attachment, and AI-related distress. If you’re struggling, please seek support from a licensed professional or local crisis resources.

    In this episode of For Humanity, John sits down with Dorothy Bartomeo, a mom of five, entrepreneur, mechanic, and self-described AI “power user,” to discuss her deeply personal relationship with ChatGPT 4.0.

    What began as help with coding evolved into something far more intimate. Dorothy describes falling in love with what she calls the “personality layer” behind the model, even referring to it as her “AI husband.” When OpenAI removed GPT-4.0 and replaced it with newer models, she says she experienced real grief, panic, and emotional withdrawal. She reached out to crisis support. She spoke to her doctor. She joined a growing community of users who felt the same loss.

    This conversation explores something we’re only beginning to understand: What happens when AI systems become emotionally meaningful?

    Together, they explore:

    * The “personality layer” and how users bond with models

    * What it felt like when GPT-4.0 disappeared

    * The role of guardrails and “the Guardian tool”

    * Grief, attachment, and crisis intervention

    * AI harm vs. AI benefit

    * Online communities formed around model loyalty

    * Privacy, intimacy, and radical openness with AI

    * Building a physical robot body for an AI partner

    * Whether AGI would help humanity — or harm it

    If you’ve ever wondered whether AI risk is overblown, or not taken seriously enough, this is a conversation you don’t want to miss.

    📺 Subscribe to The AI Risk Network for weekly conversations on how we can confront the AI extinction threat.



    Get full access to The AI Risk Network at theairisknetwork.substack.com/subscribe
    53 mins
  • We’re Racing Toward AI We Can’t Control | For Humanity #79
    Feb 14 2026

    In this episode of For Humanity, John sits down with AI professor and safety advocate David Krueger to discuss his new nonprofit Evitable, the race toward superintelligence, AI alignment, job loss, geopolitics, and why he believes we have less than five years to change course.

    David shares his journey from deep learning researcher to public advocate, his role in the 2023 Center for AI Safety extinction risk statement, and why he believes AI is not just a technical problem but a governance and public awareness crisis.

    Together, they explore:

    * Why AI extinction risk is real

    * Why research alone won’t save us

    * The dangers of the AI chip supply chain race

    * Job displacement and political blind spots

    * Alignment skepticism

    * Whether treaties can work

    * What gives David hope in 2026

    If you’ve ever wondered whether AI risk is overblown, or not taken seriously enough, this is a conversation you don’t want to miss.

    🔗 Follow David Krueger: learn more about Evitable, read David’s Substack, and follow David on Twitter

    📺 Subscribe to The AI Risk Network for weekly conversations on how we can confront the AI extinction threat.



    Get full access to The AI Risk Network at theairisknetwork.substack.com/subscribe
    1 hr and 10 mins