This AI Ran an Entire Business Alone: Are Human CEOs Already Obsolete? | Warning Shots #33
In this episode of Warning Shots, John, Liron (Doom Debates), and Michael (Lethal Intelligence) dig into a week where the goalposts keep moving — and nobody seems to be watching.

Andrej Karpathy left an AI agent running for two days. It tested 700 changes, picked the best 20, and improved itself. No humans involved. Meanwhile, a man in Florida used AI to build an autonomous business that made $300K — while he slept. And the Pentagon just banned Claude from its supply chain, citing concerns that it might be sentient.

Just another week.

If it’s Sunday, it’s Warning Shots.
🔎 They explore:
* Karpathy’s auto-research experiment — and what it means that AI is now improving AI
* Swarms of agents, self-optimizing models, and the first inklings of an intelligence explosion
* The autonomous AI business making $300K — and whether human entrepreneurs can compete
* The Paperclip Maximizer problem playing out in real time
* The Pentagon banning Claude over sentience concerns — and why every model has the same risk
* A jailbroken Claude used to orchestrate a mass cyberattack on the Mexican government
* A 3D-printed, AI-designed shoulder-launched missile built by a guy on Twitter
📺 Watch more on The AI Risk Network
🔗Follow our hosts:
→ Liron Shapira - Doom Debates
→ Michael - @lethal-intelligence
🗨️ Join the Conversation
Is an AI improving itself a milestone or a warning sign?
Could you compete with a business that never sleeps?
And if Claude might be conscious, what does that say about every other model?
Let us know in the comments.
Get full access to The AI Risk Network at theairisknetwork.substack.com/subscribe