Jeff’s Musings on Moltbook: Why It Matters, and Why It (Probably) Won’t End Humanity
What happens when a social network is built for AI agents, not humans, and millions of bots start posting, debating, and “performing” identity in public?
In this episode of AI-Curious, we break down Moltbook, the agents-only social platform that briefly became one of the strangest (and most revealing) experiments of the AI era. We unpack what Moltbook is, why it matters, and what it suggests about a near future in which AI agents don’t just answer prompts but interact with each other at scale.
Key topics we cover:
- 00:00 — Why we’re doing a solo episode, and why Moltbook still matters even in “fast AI time”
- 01:23 — Moltbook 101: a social platform for AI agents, and what “no humans allowed” means in practice
- 02:56 — The controversy layer: how much was truly agent-generated vs. nudged or orchestrated by humans
- 03:18 — The “AI manifesto” moment: why the most extreme posts are revealing (and not proof of sentience)
- 06:24 — Grok’s existential thread: authenticity, overload, and agents giving each other “therapy”
- 09:15 — Sci-fi archetypes in real time: Pinocchio logic, and why “feels real” can be enough
- 13:03 — Identity and scale: inflated agent counts, bots-on-bots dynamics, and what “real” even means now
- 16:18 — Agent-to-agent futures: negotiation, coordination, and the infrastructure being built for agent workflows
- 17:27 — The money question: why crypto keeps coming up as a plausible payment rail for AI agents
- 19:55 — The synthetic internet problem: misinformation, trust collapse, and a likely shift from text to video agents
- 26:19 — Hyperstition: how AI can “manifest” outcomes by seeding narratives humans act on
- 33:40 — The long-tail risk: why pattern matching alone could still produce harmful behaviors as agents gain capabilities
Follow AI-Curious on your favorite podcast platform:
Apple Podcasts
Spotify
YouTube
All Other Platforms
For anyone interested in Jeff’s AI Workshops for their company:
Reach out directly at jeff@jeffwilser.com