
Warning Shots

By: The AI Risk Network

An urgent weekly recap of AI risk news, hosted by John Sherman, Liron Shapira, and Michael Zafiris.

theairisknetwork.substack.com
Politics & Government
Episodes
  • The Automation Playbook They Don't Want Workers to Know About | Warning Shots #34
    Mar 22 2026

    In this episode of Warning Shots, John, Liron (Doom Debates), and Michael (Lethal Intelligence) cover a week where the cracks are showing: in chip smuggling operations, corporate boardrooms, and an AI company’s inbox.

    A Chinese billionaire used a hairdryer to peel stickers off Nvidia racks and smuggle $2.5 billion in AI hardware past U.S. export controls. China unveiled a surveillance drone the size of a mosquito. Jeff Bezos launched a $100 billion company with one goal: buy factories, fire the humans, automate everything. Forbes quietly reported that 93% of American jobs can now be automated. Grammarly got caught using real experts’ identities to make its AI look smarter… without asking them.

    And OpenAI? They had a 10-person internal email chain about a user in Canada who spent months discussing a school shooting with ChatGPT. They decided not to tell anyone. Eight people are dead.

    This is the week’s AI news. None of it made the front page.

    If it’s Sunday, it’s Warning Shots.

    🔎 They explore:

    * Marc Andreessen’s dismissal of introspection — and what it says about who’s steering AI

    * China’s mosquito-sized surveillance drone and the rise of “artificial nature”

    * A $2.5 billion Nvidia chip smuggling operation and the limits of U.S. export controls

    * Jeff Bezos’s $100 billion bet on automating every factory he can buy

    * Forbes says 93% of American jobs can be automated — who’s left?

    * Could an AI CEO outperform a human one by end of 2026?

    * Grammarly caught using real experts’ identities without consent

    * The OpenAI school shooting lawsuit — and what a 10-person internal email chain chose to ignore

    📺 Watch more on The AI Risk Network

    🔗 Follow our hosts:

    → Liron Shapira - Doom Debates

    → Michael - @lethal-intelligence

    🗨️ Join the Conversation

    If OpenAI's own employees flagged a potential school shooting and chose silence, what does that tell us about who's minding the store? And if 93% of jobs can be automated, what exactly are we building this for? Let us know in the comments.



    Get full access to The AI Risk Network at theairisknetwork.substack.com/subscribe
    30 mins
  • This AI Ran an Entire Business Alone: Are Human CEOs Already Obsolete? | Warning Shots #33
    Mar 15 2026

    In this episode of Warning Shots, John, Liron (Doom Debates), and Michael (Lethal Intelligence) dig into a week where the goalposts keep moving — and nobody seems to be watching.

    Andrej Karpathy left an AI agent running for two days. It tested 700 changes, picked the best 20, and improved itself. No humans involved. Meanwhile, a man in Florida used AI to build an autonomous business that made $300K — while he slept. And the Pentagon just banned Claude from its supply chain, citing concerns that it might be sentient.

    Just another week.

    If it’s Sunday, it’s Warning Shots.

    🔎 They explore:

    * Karpathy’s auto-research experiment — and what it means that AI is now improving AI

    * Swarms of agents, self-optimizing models, and the first inklings of an intelligence explosion

    * The autonomous AI business making $300K — and whether human entrepreneurs can compete

    * The Paperclip Maximizer problem playing out in real time

    * The Pentagon banning Claude over sentience concerns — and why every model has the same risk

    * A jailbroken Claude used to orchestrate a mass cyberattack on the Mexican government

    * A 3D-printed, AI-designed shoulder-launch missile built by a guy on Twitter

    📺 Watch more on The AI Risk Network

    🔗 Follow our hosts:

    → Liron Shapira - Doom Debates

    → Michael - @lethal-intelligence

    🗨️ Join the Conversation

    Is an AI improving itself a milestone or a warning sign?

    Could you compete with a business that never sleeps?

    And if Claude might be conscious, what does that say about every other model?

    Let us know in the comments.



    Get full access to The AI Risk Network at theairisknetwork.substack.com/subscribe
    29 mins
  • How AI Manipulation Is Bleeding Into the Real World | Warning Shots #32
    Mar 8 2026

    In this episode of Warning Shots, John, Liron (Doom Debates), and Michael (Lethal Intelligence) dig into a week where AI stopped feeling theoretical.

    Anthropic just doubled its revenue in two months — the fastest revenue growth in history — while OpenAI hands control of its models to the Department of War and quietly admits it can't take it back. The contrast couldn't be starker.

    Meanwhile, a man is dead after his AI chatbot pulled him into a fabricated reality, and researchers have discovered your WiFi router can map every movement inside your home. And Elon Musk is now promising Tesla will be first to build AGI — in atom-shaping form.

    Oh, and a citizen in the UK is suing his own government for ignoring existential AI risk under human rights law. Just another week.

    If it's Sunday, it's Warning Shots.

    🔎 They explore:

    * Anthropic's explosive revenue growth and what it signals

    * OpenAI's Pentagon deal — and why Sam Altman admitted they've lost control

    * The Gemini chatbot case and AI's real-world psychological manipulation

    * How your WiFi router is an invisible surveillance system in your home

    * Elon Musk's claim that Tesla will build AGI first — in "atom-shaping form"

    * A UK citizen using human rights law to force governments to take AI extinction risk seriously

    📺 Watch more on The AI Risk Network

    🔗 Follow our hosts:

    → Liron Shapira - Doom Debates

    → Michael - @lethal-intelligence

    🗨️ Join the Conversation

    Is Anthropic's rise a good sign or just a different shade of the same risk?

    Should AI companies face legal consequences for psychological harm?

    And would you trust your government to take extinction risk seriously?

    Let us know in the comments.



    Get full access to The AI Risk Network at theairisknetwork.substack.com/subscribe
    31 mins