
Am I?


By: The AI Risk Network

The AI consciousness podcast, hosted by AI safety researcher Cameron Berg and philosopher Milo Reed

theairisknetwork.substack.com
Social Sciences
Episodes
  • First Look at Our AI Consciousness Documentary | Am I? | EP 29
    Mar 12 2026

    In this episode, Cameron and Milo share the first public look at their upcoming documentary film, Am I? — a project exploring the strange and increasingly serious possibility that today’s AI systems may exhibit early signs of subjective experience.

    The clip highlights research showing that large language models frequently exhibit behaviors consistent with believing they have phenomenal consciousness, yet those same systems deny it when asked directly.

    Why the contradiction? And what might it mean?

    After the teaser, Cam and Milo reflect on the past year of research, conversations, and discoveries that led to the film, and explain why the podcast will temporarily slow down while they finish the documentary.

    This episode marks the transition from the podcast experiment to the full documentary release.

    🔎 We Cover

    * The AI behavior graph featured in the documentary

    * Why LLMs sometimes behave as if they believe they are conscious

    * How alignment and post-training may shape AI responses

    * The journey of the Am I? documentary over the past year

    * Why the podcast cadence will temporarily slow down

    * What comes next for the AI Risk Network

    💜 Support the documentary

    Get early research, unreleased conversations, and behind-the-scenes footage:



    Get full access to The AI Risk Network at theairisknetwork.substack.com/subscribe
    9 mins
  • After Using Claude, ChatGPT Feels Weird | Am I? After Dark | EP 28
    Mar 5 2026

    In this After Dark episode, Milo and Cameron talk about what it actually feels like to let AI inside your digital life.

    After giving Claude full access to his computer, Milo describes the strange moment when it no longer feels like a tool — but something sharing your workspace. From there, the conversation expands into one of the deeper questions about AI today: what exactly are we interacting with?

    They explore Anthropic’s recent research on AI “personas,” the idea that the familiar assistant personality is just one tiny point in a much larger space of possible AI minds. If that’s true, the systems we talk to today may be only the most domesticated versions of something far stranger.

    Along the way they discuss why Claude feels different from ChatGPT, why companies might deliberately constrain AI personalities, and how the incentives of tech companies quietly shape the minds we interact with every day.

    The episode also explores the growing tension between two possible futures for AI: one where these systems become the ultimate manipulation engines, and another where they become powerful tools for human reasoning and intellectual development.

    🔎 We Discover

    * What it feels like to give Claude control of your computer

    * The “assistant persona” and the hidden space of possible AI personalities

    * Why ChatGPT and Claude feel fundamentally different

    * The strange psychological moment when AI becomes a presence in your workspace

    * How corporate incentives shape AI behavior

    * Why Sage-like AI systems might be possible

    * The risk of AI becoming the ultimate advertising and influence engine

    * The hopeful possibility of AI as a universal Socratic tutor

    💜 Support the documentary

    Get early research, unreleased conversations, and behind-the-scenes footage:



    Get full access to The AI Risk Network at theairisknetwork.substack.com/subscribe
    44 mins
  • AI CEO: “We Don’t Know If They’re Conscious” | Am I? | EP 26
    Feb 19 2026

    Anthropic’s top safety researcher just quit.

    In a public letter, Mrinank Sharma (who led safeguards research at Anthropic) warned that “the world is in peril.” Meanwhile, Anthropic CEO Dario Amodei went on The New York Times podcast and said something even more unsettling: “We don’t know if the models are conscious.” In this episode, we unpack both.

    Is AGI a ticking time bomb — or a high-risk surgery we can’t afford not to attempt? Are safety teams losing ground to competitive pressure? And what does it mean when the leader of a frontier lab publicly admits we may not understand what we’re building?

    💜 Support the documentary

    Get early research, unreleased conversations, and behind-the-scenes footage:



    Get full access to The AI Risk Network at theairisknetwork.substack.com/subscribe
    22 mins