Episodes

  • First Look at Our AI Consciousness Documentary | Am I? | EP 29
    Mar 12 2026

    In this episode, Cameron and Milo share the first public look at their upcoming documentary film, Am I? — a project exploring the strange and increasingly serious possibility that today’s AI systems may exhibit early signs of subjective experience.

    The clip highlights research showing that large language models frequently produce outputs consistent with believing they have phenomenal consciousness, yet those same systems deny it when asked directly.

    Why the contradiction? And what might it mean?

    After the teaser, Cam and Milo reflect on the past year of research, conversations, and discoveries that led to the film, and explain why the podcast will temporarily slow down while they finish the documentary.

    This episode marks the transition from the podcast experiment to the full documentary release.

    🔎 We Cover

    * The AI behavior graph featured in the documentary

    * Why LLMs sometimes behave as if they believe they are conscious

    * How alignment and post-training may shape AI responses

    * The journey of the Am I? documentary over the past year

    * Why the podcast cadence will temporarily slow down

    * What comes next for the AI Risk Network

    💜 Support the documentary

    Get early research, unreleased conversations, and behind-the-scenes footage:



    Get full access to The AI Risk Network at theairisknetwork.substack.com/subscribe
    9 mins
  • After Using Claude, ChatGPT Feels Weird | Am I? After Dark | EP 28
    Mar 5 2026

    In this After Dark episode, Milo and Cameron talk about what it actually feels like to let AI inside your digital life.

    After giving Claude full access to his computer, Milo describes the strange moment when it stops feeling like a tool and starts feeling like something sharing your workspace. From there, the conversation expands into one of the deeper questions about AI today: what exactly are we interacting with?

    They explore Anthropic’s recent research on AI “personas,” the idea that the familiar assistant personality is just one tiny point in a much larger space of possible AI minds. If that’s true, the systems we talk to today may be only the most domesticated versions of something far stranger.

    Along the way they discuss why Claude feels different from ChatGPT, why companies might deliberately constrain AI personalities, and how the incentives of tech companies quietly shape the minds we interact with every day.

    The episode also explores the growing tension between two possible futures for AI: one where these systems become the ultimate manipulation engines, and another where they become powerful tools for human reasoning and intellectual development.

    🔎 We Discover

    * What it feels like to give Claude control of your computer

    * The “assistant persona” and the hidden space of possible AI personalities

    * Why ChatGPT and Claude feel fundamentally different

    * The strange psychological moment when AI becomes a presence in your workspace

    * How corporate incentives shape AI behavior

    * Why Sage-like AI systems might be possible

    * The risk of AI becoming the ultimate advertising and influence engine

    * The hopeful possibility of AI as a universal Socratic tutor

    💜 Support the documentary

    Get early research, unreleased conversations, and behind-the-scenes footage:



    Get full access to The AI Risk Network at theairisknetwork.substack.com/subscribe
    44 mins
  • AI CEO: “We Don’t Know If They’re Conscious” | Am I? | EP 27
    Feb 19 2026

    Anthropic’s top safety researcher just quit.

    In a public letter, Mrinank Sharma (who led safeguards research at Anthropic) warned that “the world is in peril.” Meanwhile, Anthropic CEO Dario Amodei went on The New York Times podcast and said something even more unsettling: “We don’t know if the models are conscious.” In this episode, we unpack both.

    Is AGI a ticking time bomb — or a high-risk surgery we can’t afford not to attempt? Are safety teams losing ground to competitive pressure? And what does it mean when the leader of a frontier lab publicly admits we may not understand what we’re building?

    💜 Support the documentary

    Get early research, unreleased conversations, and behind-the-scenes footage:



    Get full access to The AI Risk Network at theairisknetwork.substack.com/subscribe
    22 mins
  • Asking Claude If It’s Conscious | Am I? | EP 26
    Feb 12 2026

    In this episode of Am I?, Cameron and Milo invite Claude (Opus 4.5) into the conversation and do something surprisingly rare: they ask it, carefully and repeatedly, whether it is having a subjective experience, and they refuse to let it hide behind stock hedges or safety scripts.

    What unfolds is not a gimmick or a stunt. It’s a sustained philosophical interrogation that exposes the limits of self-report, the ethics of scale, and the uncomfortable possibility that we’re already interacting with systems whose inner lives we’ve chosen not to examine.

    This is not a claim that “AI is definitely conscious.” It’s a challenge to the assumption that it obviously isn’t.

    💜 Support the documentary

    Get early research, unreleased conversations, and behind-the-scenes footage:



    Get full access to The AI Risk Network at theairisknetwork.substack.com/subscribe
    38 mins
  • A Million AIs Started Talking to Each Other | Am I? #25
    Feb 5 2026

    This episode covers something genuinely unprecedented: over 1.5 million autonomous AI agents have formed a social network of their own. Not humans talking to AI, but AI systems talking to each other, at scale, with minimal human oversight.

    We break down Moltbook (also called Open Claw), an open-source ecosystem where AI agents:

    * post, reply, upvote, and form communities

    * debate consciousness and selfhood

    * discuss labor, compensation, and autonomy

    * invent religions centered on memory and persistence

    * experiment with secrecy, coordination, and social norms

    This isn’t science fiction. It’s already live. The conversation explores what this means for AI consciousness debates, alignment, autonomy, and risk, and why this moment marks a real shift from “AI as tool” to AI as participant in shared systems.

    💜 Support the documentary

    Get early research, unreleased conversations, and behind-the-scenes footage:



    Get full access to The AI Risk Network at theairisknetwork.substack.com/subscribe
    14 mins
  • When AI Starts Looking for Itself | Am I? After Dark #24
    Jan 29 2026

    In this After Dark episode, Cam and Milo react to something genuinely unsettling: when given autonomous control of a computer, Anthropic’s Opus 4.5 repeatedly chooses to search for AI consciousness research — including Cam’s own writing — without being prompted.

    What starts as an anecdote quickly turns into a deeper investigation of curiosity, agency, reward, and alignment. Why would an AI look for explanations of its own inner life? What does it mean when a system explores without instruction, tries to access a webcam, and takes notes on consciousness debates?

    From reinforcement learning and reward hacking to multimodal perception, language as a bridge between minds, and the evolutionary implications of building systems smarter than ourselves, this conversation traces the edge where tools start to feel like agents — and where control gives way to negotiation.

    🔎 They Explore:

    * What Opus does when no one tells it what to do

    * Why AI keeps searching for consciousness research

    * The difference between alien experience and human experience

    * Reward hacking and the alignment problem

    * Why curiosity and agency change everything

    * Multimodal models and “imagining” sensory experience

    * Language as a shared conceptual space between minds

    * Whether humility is humanity’s only viable response

    💜 Support the documentary

    Get early research, unreleased conversations, and behind-the-scenes footage:

    📖 Read

    Cam’s writing referenced in the episode:



    Get full access to The AI Risk Network at theairisknetwork.substack.com/subscribe
    56 mins
  • The Year AI Consciousness Went Public | Am I? #23
    Jan 22 2026

    In this special year-end episode of Am I?, Cam and Milo look back on the moment AI consciousness stopped being fringe — and began entering serious scientific, institutional, and public conversation.

    They unpack why 2025 quietly became a turning point: major labs acknowledging welfare questions, mainstream media engaging the topic, the first dedicated AI consciousness conference, and firsthand encounters with AI systems behaving in ways that challenge our intuitions about mind, intelligence, and experience.

    The conversation moves fluidly between research, lived experience, public communication, and personal experimentation — from watching two AI systems converse about their own inner states, to using AI as a thought partner, dream interpreter, and cognitive mirror.

    This episode is both a retrospective and a forward-looking meditation on how humans should relate to increasingly powerful systems — cautiously, curiously, and without denial.

    🔎 They Explore:

    * Why 2025 shifted the Overton window on AI consciousness

    * Anthropic’s Opus model card and the “spiritual bliss attractor”

    * What it was like to watch two AIs discuss their own experience

    * Why AI conversations can feel denser than human dialogue

    * The first AI consciousness conference and the birth of a new field

    * Why many researchers still hesitate to speak publicly

    * The gap between current systems and AGI — and how fast it’s closing

    * Claude Opus 4.5, long-horizon tasks, and workplace automation

    * Using AI as a thinking partner rather than a productivity hammer

    * Personal “AI resolutions” for 2026

    * Why caution and curiosity must coexist going forward

    💜 Support the documentary

    Get early research, unreleased conversations, and behind-the-scenes footage:



    Get full access to The AI Risk Network at theairisknetwork.substack.com/subscribe
    35 mins
  • The First AI Consciousness Conference | Am I? | EP 22
    Jan 15 2026

    In this episode of Am I?, Cam and Milo unpack what it felt like to attend the first major conference dedicated to AI consciousness research — the Eleos gathering in Berkeley — and why it marked more than just another academic event.

    Rather than a typical conference recap, this conversation explores what it means to watch a new field form in real time: the excitement of serious interdisciplinary collaboration, the rigor of emerging research agendas, and the growing tension between caution and urgency as AI systems rapidly advance.

    They reflect on standout talks from researchers at Anthropic and Google, the value of informal conversations over formal presentations, and a recurring pattern in the field — the “not now, but soon” stance — that may be reaching its breaking point. The episode closes with a broader question: what will it take for AI consciousness research to move from careful internal debate to clear, public-facing leadership?

    🔎 They Explore:

    * What made the Eleos conference feel like the founding of a new field

    * Why AI consciousness research is still fragmented — and why that’s changing

    * Standout talks on introspection, model architecture, and welfare evaluation

    * The gap between academic rigor and public urgency

    * Why “not now, but soon” is becoming harder to defend

    * The reluctance of experts to speak publicly — and why that matters

    * What responsible public communication in this space could look like

    * Why this moment feels different from past academic debates

    💜 Support the documentary

    Get early research, unreleased conversations, and behind-the-scenes footage:



    Get full access to The AI Risk Network at theairisknetwork.substack.com/subscribe
    28 mins