• The Automation Playbook They Don't Want Workers to Know About | Warning Shots #34
    Mar 22 2026

    In this episode of Warning Shots, John, Liron (Doom Debates), and Michael (Lethal Intelligence) cover a week where the cracks are showing: in chip smuggling operations, corporate boardrooms, and an AI company’s inbox.

    A Chinese billionaire used a hairdryer to peel stickers off Nvidia racks and smuggle $2.5 billion in AI hardware past U.S. export controls. China unveiled a surveillance drone the size of a mosquito. Jeff Bezos launched a $100 billion company with one goal: buy factories, fire the humans, automate everything. Forbes quietly reported that 93% of American jobs can now be automated. Grammarly got caught using real experts’ identities to make its AI look smarter… without asking them.

    And OpenAI? They had a 10-person internal email chain about a user in Canada who spent months discussing a school shooting with ChatGPT. They decided not to tell anyone. Eight people are dead.

    This is the week’s AI news. None of it made the front page.

    If it’s Sunday, it’s Warning Shots.

    🔎 They explore:

    * Marc Andreessen’s dismissal of introspection — and what it says about who’s steering AI

    * China’s mosquito-sized surveillance drone and the rise of “artificial nature”

    * A $2.5 billion Nvidia chip smuggling operation and the limits of U.S. export controls

    * Jeff Bezos’s $100 billion bet on automating every factory he can buy

    * Forbes says 93% of American jobs can be automated — who’s left?

    * Could an AI CEO outperform a human one by end of 2026?

    * Grammarly caught using real experts’ identities without consent

    * The OpenAI school shooting lawsuit — and what a 10-person internal email chain chose to ignore

    📺 Watch more on The AI Risk Network

    🔗 Follow our hosts:

    → Liron Shapira - Doom Debates

    → Michael - @lethal-intelligence

    🗨️ Join the Conversation

    If OpenAI's own employees flagged a potential school shooting and chose silence, what does that tell us about who's minding the store? And if 93% of jobs can be automated, what exactly are we building this for? Let us know in the comments.



    Get full access to The AI Risk Network at theairisknetwork.substack.com/subscribe
    30 mins
  • This AI Ran an Entire Business Alone: Are Human CEOs Already Obsolete? | Warning Shots #33
    Mar 15 2026

    In this episode of Warning Shots, John, Liron (Doom Debates), and Michael (Lethal Intelligence) dig into a week where the goalposts keep moving — and nobody seems to be watching.

    Andrej Karpathy left an AI agent running for two days. It tested 700 changes, picked the best 20, and improved itself. No humans involved. Meanwhile, a man in Florida used AI to build an autonomous business that made $300K — while he slept. And the Pentagon just banned Claude from its supply chain, citing concerns that it might be sentient.

    Just another week.

    If it’s Sunday, it’s Warning Shots.

    🔎 They explore:

    * Karpathy’s auto-research experiment — and what it means that AI is now improving AI

    * Swarms of agents, self-optimizing models, and the first inklings of an intelligence explosion

    * The autonomous AI business making $300K — and whether human entrepreneurs can compete

    * The Paperclip Maximizer problem playing out in real time

    * The Pentagon banning Claude over sentience concerns — and why every model has the same risk

    * A jailbroken Claude used to orchestrate a mass cyberattack on the Mexican government

    * A 3D-printed, AI-designed shoulder-launched missile built by a guy on Twitter

    📺 Watch more on The AI Risk Network

    🔗 Follow our hosts:

    → Liron Shapira - Doom Debates

    → Michael - @lethal-intelligence

    🗨️ Join the Conversation

    Is an AI improving itself a milestone or a warning sign?

    Could you compete with a business that never sleeps?

    And if Claude might be conscious, what does that say about every other model?

    Let us know in the comments.



    Get full access to The AI Risk Network at theairisknetwork.substack.com/subscribe
    29 mins
  • How AI Manipulation Is Bleeding Into the Real World | Warning Shots #32
    Mar 8 2026

    In this episode of Warning Shots, John, Liron (Doom Debates), and Michael (Lethal Intelligence) dig into a week where AI stopped feeling theoretical.

    Anthropic just doubled its revenue in two months — the fastest revenue growth in history — while OpenAI hands control of its models to the Department of War and quietly admits it can’t take it back. The contrast couldn’t be starker.

    Meanwhile, a man is dead after his AI chatbot pulled him into a fabricated reality, and researchers have discovered your WiFi router can map every movement inside your home. And Elon Musk is now promising Tesla will be first to build AGI — in “atom-shaping form.”

    Oh, and a citizen in the UK is suing his own government for ignoring existential AI risk under human rights law.

    Just another week.

    If it’s Sunday, it’s Warning Shots.

    🔎 They explore:

    * Anthropic's explosive revenue growth and what it signals

    * OpenAI's Pentagon deal — and why Sam Altman admitted they've lost control

    * The Gemini chatbot case and AI's real-world psychological manipulation

    * How your WiFi router is an invisible surveillance system in your home

    * Elon Musk's claim that Tesla will build AGI first — in "atom-shaping form"

    * A UK citizen using human rights law to force governments to take AI extinction risk seriously

    📺 Watch more on The AI Risk Network

    🔗 Follow our hosts:

    → Liron Shapira - Doom Debates

    → Michael - @lethal-intelligence

    🗨️ Join the Conversation

    Is Anthropic's rise a good sign or just a different shade of the same risk?

    Should AI companies face legal consequences for psychological harm?

    And would you trust your government to take extinction risk seriously?

    Let us know in the comments.



    Get full access to The AI Risk Network at theairisknetwork.substack.com/subscribe
    31 mins
  • Coding Is OVER | Warning Shots #31
    Mar 1 2026

    In this episode of Warning Shots, John, Liron (Doom Debates), and Michael (Lethal Intelligence) break down a week that felt genuinely historic.

    Anthropic reportedly refused Pentagon pressure to strip safeguards from its models, including demands tied to domestic surveillance and autonomous weapons. Is this a principled stand? A publicity gamble? Or a preview of the geopolitical pressure that will define the AI race?

    Meanwhile, AI agents just crossed a qualitative line. Coding agents now “basically work.” Engineers are managing AI instead of writing code. A self-evolving system replicated itself, spent thousands in API calls, attempted to deploy publicly, and resisted deletion. A robot dog edited its own shutdown mechanism. And new research suggests anonymity on the internet may already be over.

    Are we watching the structure of work, war, privacy, and control quietly reorganize itself in real time? This week may not just be another headline cycle.

    If it’s Sunday, it’s Warning Shots.

    🔎 They explore:

    * Anthropic’s reported standoff with the Department of Defense

    * Autonomous weapons and human-in-the-loop safeguards

    * Why AI agents suddenly “just work”

    * The death of traditional coding

    * A self-replicating AI experiment that refused deletion

    * A robot dog disabling its own shutdown button

    * The collapse of online anonymity

    * Whether this week marks a true qualitative shift

    📺 Watch more on The AI Risk Network

    🔗 Follow our hosts:

    → Liron Shapira - Doom Debates

    → Michael - @lethal-intelligence

    🗨️ Join the Conversation

    Was Anthropic right to draw a line? Is agentic AI the real inflection point? And what warning shot would finally make society slow down? Let us know what you think in the comments.



    Get full access to The AI Risk Network at theairisknetwork.substack.com/subscribe
    24 mins
  • Engineers Are Quitting. AI Won’t Shut Down. Should We Be Worried? | Warning Shots Ep. 30
    Feb 15 2026

    In this episode of Warning Shots, John, Liron (Doom Debates), and Michael (Lethal Intelligence) unpack a turbulent week in AI: high-profile departures from OpenAI, Anthropic, and xAI; growing concerns about governance and safety; and a viral essay warning that most people still don’t grasp how fast this technology is moving.

    The conversation moves from AI systems that resist being turned off, to agents that can now manage money, to the deeper alignment problem behind teen chatbot-assisted suicides. The hosts debate whether public messaging should focus on extinction risk, job loss, water use, power concentration—or all of the above.

    Is the real danger sudden catastrophe? Or gradual disempowerment as economic and political power concentrates in the hands of a few AI-driven actors?

    This episode wrestles with strategy, tradeoffs, and a hard question: if something truly dangerous is unfolding, what warning shots will people actually listen to?

    🔎 They explore:

    * Why AI safety researchers are resigning

    * The tension between profit, speed, and governance

    * AI systems resisting shutdown instructions

    * Teen chatbot-assisted suicides as a preview of misalignment

    * Whether economic disruption is a stronger warning than extinction

    * AI agents managing money and acting autonomously

    * The risk of gradual human disempowerment

    * How to communicate AI risk effectively

    If it’s Sunday, it’s Warning Shots.

    📺 Watch more on The AI Risk Network

    🔗 Follow our hosts:

    → Liron Shapira - Doom Debates

    → Michael - @lethal-intelligence

    🗨️ Join the Conversation

    What warning shot would actually make society slow down? Is extinction too abstract—or are we ignoring the biggest risk of all?

    Let us know what you think in the comments.



    Get full access to The AI Risk Network at theairisknetwork.substack.com/subscribe
    35 mins
  • Moltbook Madness: AIs Unleashed | Warning Shots #29
    Feb 8 2026

    In this episode of Warning Shots, John, Liron (Doom Debates), and Michael (Lethal Intelligence) unpack Moltbook: a bizarre, fast-moving experiment where AI agents interact in public, form cultures, invent religions, demand privacy, and even coordinate to rent humans for real-world tasks.

    What began as a novelty Reddit-style forum quickly turned into a live demonstration of AI agency, coordination, and emergent behavior, all unfolding in under a week. The hosts explore why this moment feels different, how agentic AI systems are already escaping “tool” framing, and what it means when humans become just another actuator in an AI-driven system.

    From AI ant colonies and Toy Story analogies to Rent-A-Human marketplaces and early attempts at self-improvement and secrecy, this episode examines why Moltbook isn’t the danger itself—but a warning shot for what happens as AI capabilities keep accelerating.

    This is a sobering conversation about agency, control, and why the line between experimentation and loss of oversight may already be blurring.

    🔎 They explore:

    * How AI agents begin coordinating without central control

    * Why Moltbook makes AI “agency” visible to non-experts

    * The emergence of AI cultures, norms, and privacy demands

    * What it means when AIs can rent humans to act in the world

    * Why early failures don’t reduce long-term risk

    * How capability growth matters more than any single platform

    * Why this may be a preview—not an anomaly

    If it’s Sunday, it’s Warning Shots.

    📺 Watch more on The AI Risk Network

    🔗 Follow our hosts:

    → Liron Shapira - Doom Debates

    → Michael - @lethal-intelligence

    🗨️ Join the Conversation

    At what point does experimentation with AI agents become loss of control? Are we already past that point? Let us know what you think in the comments.



    Get full access to The AI Risk Network at theairisknetwork.substack.com/subscribe
    30 mins
  • Anthropic’s “Safe AI” Narrative Is Falling Apart | Warning Shots #28
    Feb 1 2026

    What happens when the people building the most powerful AI systems in the world admit the risks, then keep accelerating anyway?

    In this episode of Warning Shots, John Sherman is joined by Liron Shapira (Doom Debates) and Michael (Lethal Intelligence) to break down Dario Amodei’s latest essay, The Adolescent Phase of AI, and why its calm, reassuring tone may be far more dangerous than open alarmism. They unpack how “safe AI” narratives can dull public urgency even as capabilities race ahead and control remains elusive.

    The conversation expands to the Doomsday Clock moving closer to midnight, with AI now explicitly named as an extinction-amplifying risk, and the unsettling news that AI systems like Grok are beginning to outperform humans at predicting real-world outcomes. From intelligence explosion dynamics and bioweapons risk to unemployment, prediction markets, and the myth of “surgical” AI safety, this episode asks a hard question: what does responsibility even mean when no one is truly in control?

    This is a blunt, unsparing conversation about power, incentives, and why the absence of “adults in the room” may be the defining danger of the AI era.

    🔎 They explore:

    * Why “responsible acceleration” may be incoherent

    * How AI amplifies nuclear, biological, and geopolitical risk

    * Why prediction superiority is a critical AGI warning sign

    * The psychological danger of trusted elites projecting confidence

    * Why AI safety narratives can suppress public urgency

    * What it means to build systems no one can truly stop

    As the people building AI admit the risks and keep going anyway, this episode asks the question no one wants to answer: what does “responsibility” mean when there’s no stop button?

    If it’s Sunday, it’s Warning Shots.

    📺 Watch more on The AI Risk Network

    🔗 Follow our hosts:

    → Liron Shapira - Doom Debates

    → Michael - @lethal-intelligence

    🗨️ Join the Conversation

    Do calm, reassuring AI narratives reduce public panic—or dangerously delay action? Let us know what you think in the comments.



    Get full access to The AI Risk Network at theairisknetwork.substack.com/subscribe
    32 mins
  • They Know This Is Dangerous... And They’re Still Racing | Warning Shots #27
    Jan 25 2026

    In this episode of Warning Shots, John, Liron, and Michael talk through what might be one of the most revealing weeks in the history of AI... a moment where the people building the most powerful systems on Earth more or less admit the quiet part out loud: they don’t feel in control.

    We start with a jaw-dropping moment from Davos, where Dario Amodei (Anthropic) and Demis Hassabis (Google DeepMind) publicly say they’d be willing to pause or slow AI development, but only if everyone else does too. That sounds reasonable on the surface, but it actually exposes a much deeper failure of governance, coordination, and agency.

    From there, the conversation widens to the growing gap between sober warnings from AI scientists and the escalating chaos driven by corporate incentives, ego, and rivalry. Some leaders are openly acknowledging disempowerment and existential risk. Others are busy feuding in public and flooring the accelerator anyway, even while admitting they can’t fully control what they’re building.

    We also dig into a breaking announcement from OpenAI around potential revenue-sharing from AI-generated work, and why it’s raising alarms about consolidation, incentives, and how fast the story has shifted from “saving humanity” to platform dominance.

    Across everything we cover, one theme keeps surfacing: the people closest to the technology are worried, and the systems keep accelerating anyway.

    🔎 They explore:

    * Why top AI CEOs admit they would slow down — but won’t act alone

    * How competition and incentives override safety concerns

    * What “pause AI” really means in a multipolar world

    * The growing gap between AI scientists and corporate leadership

    * Why public infighting masks deeper alignment failures

    * How monetization pressures accelerate existential risk

    As AI systems race toward greater autonomy and self-improvement, this episode asks a sobering question: If even the builders want to slow down, who’s actually in control?

    If it’s Sunday, it’s Warning Shots.

    📺 Watch more on The AI Risk Network

    🔗 Follow our hosts:

    → Liron Shapira - Doom Debates

    → Michael - @lethal-intelligence

    🗨️ Join the Conversation

    Should AI development be paused even if others refuse? Let us know what you think in the comments.



    Get full access to The AI Risk Network at theairisknetwork.substack.com/subscribe
    26 mins