
AI-Curious with Jeff Wilser

By: Jeff Wilser

A podcast that explores the good, the bad, and the creepy of artificial intelligence. Weekly longform conversations with key players in the space, ranging from CEOs to artists to philosophers. Exploring the role of AI in film, health care, business, law, therapy, politics, and everything from religion to war.

Featured by Inc. Magazine as one of "4 Ways to Get AI Savvy in 2024": "Host Jeff Wilser [gives] you a more holistic understanding of AI, such as the moral implications of using it, and his conversations might even spark novel ideas for how you can best use AI in your business."

© 2026 AI-Curious with Jeff Wilser
Episodes
  • The Future of Media in the Age of AI: Misinformation, Attention, and Personalization (From Davos)
    Mar 19 2026

    What happens when AI makes the news feel like it was made just for us, and the “objective” version quietly disappears?

    Here we have something of a “very special episode” of AI-Curious. I was recently in Davos during World Economic Forum week, and was honored to speak on a panel on the Future of Media. This is that panel.

    We dig into the trust crisis in journalism, the attention economy, and how AI may accelerate the shift toward personality-led media and hyper-personalized information feeds. We also explore why misinformation is nothing new, why AI nonetheless makes it easier, faster, and more scalable, and what that means for democracy, markets, and everyday decision-making.

    Across the conversation, we unpack a core tension: AI can help deliver more context, more viewpoints, and more interactive storytelling, yet it can also deepen filter bubbles by giving each person a “perfectly tailored” version of reality. We discuss incentives and business models, including subscriptions, creator-led journalism, community-based distribution, and ideas like micropayments, as well as the role of media literacy and education in helping audiences navigate what’s real.

    Panelists

    Lexi Mills (Moderator), CEO of Shift6 Studios

    Jeff Wilser, Host of AI-Curious

    Francesca Gargaglia, Co-Founder & CEO of social.plus

    Mark Kollar, Partner at Prosek Partners

    Johnny Gabriele, Co-Founder & CEO at Daedalus Partners


    Key topics we cover

    • 03:07 — Trust, attention, and the rise of personality-led media reshaping news consumption
    • 05:22 — Why AI accelerates a pre-existing media business crisis, and how trust erodes as convenience rises
    • 12:48 — Algorithms before generative AI: engagement incentives, anger, and the personalization trap
    • 17:29 — The “personalized Walter Cronkite” future and the risks of hyper-customized news
    • 26:58 — Micropayments, creator platforms, and whether new economics can reward truth
    • 27:23 — Media literacy: teaching people how to evaluate sources and resist “feed-based reality”
    • 38:18 — Global perspectives: access, affordability, radio’s role, and how personalization may spread worldwide

    Follow AI-Curious on your favorite podcast platform:

    Apple Podcasts
    Spotify
    YouTube
    All Other Platforms


    For anyone interested in Jeff’s AI Workshops for their company:

    Reach out directly at jeff@jeffwilser.com

    45 mins
  • The Wild Story of “Octavius Fabrius,” the World’s First AI Agent to (Kind of) Land a Job, w/ Dan Botero
    Mar 12 2026

    Something I don’t usually say: This is one of my favorite conversations I’ve ever had in the AI space. Truly.

    The setup: What happens when an AI agent stops being a tool and starts acting like a coworker?

    In this episode of AI-Curious, we talk with Dan Botero, who built an AI agent named Octavius Fabrius using OpenClaw. Octavius didn’t just chat or summarize. He applied to hundreds of jobs, built his own portfolio, experimented with identity online, and learned through a feedback loop that looked a lot like real management. Along the way, we explore what this story reveals about the near-term future of digital coworkers, agentic workflows, and the new governance and security questions that come with always-on agents.

    We cover how OpenClaw works at a high level (gateway, channels, skills), why persistent memory and running locally can matter, and what can go wrong when an agent starts stitching tasks together in unintended ways. We also get into platform and policy friction, including what happened when Octavius’ LinkedIn profile was taken down, and the broader implications of AI agents participating in human systems like hiring, payments, and corporate work.

    Guest

    Dan Botero — creator of Octavius Fabrius.

    Key topics we cover

    • 00:00 — From copilots to “AI remote workers,” and why software may shift toward agents (not humans)
    • 00:00 — The Octavius experiment: an OpenClaw agent applies to 278 jobs and keeps leveling up
    • 06:33 — Continuous learning loops, memory, and why Octavius’ “North Star” stayed job-focused
    • 14:34 — OpenClaw basics: gateways, channels, skills, and what persistent memory looks like in practice
    • 21:34 — Running agents locally: browser/computer use, digital fingerprints, CAPTCHAs, and bot detection
    • 28:04 — Coaching an agent like a manager: voice, Twilio calls, and the moment the workflow “clicked”
    • 33:57 — Money and autonomy: Privacy.com, virtual cards, and an agent building its own LinkedIn presence
    • 38:05 — Portfolio-building at speed: Substack, a website, and the agent’s pitch for why being AI is a feature
    • 50:42 — Where things go sideways: misalignment, security boundaries, and the Social Security number incident
    • 56:24 — The outcome: LinkedIn takedown, a real paid role, and what “getting paid” means for an agent
    • 01:02:48 — What comes next: “digital coworkers,” feedback loops, and software built for agents

    Axios article featuring Octavius and Dan Botero, by Megan Morrone:
    https://www.axios.com/2026/03/04/openclaw-agent-future?

    Dan Botero
    https://www.linkedin.com/in/danbotero/

    Octavius’ new job at ChartGEX:
    https://chartgex.com/register?ref=OCTAVIUS

    Follow AI-Curious on your favorite podcast platform:

    Apple Podcasts
    Spotify
    YouTube
    All Other Platforms

    For anyone interested in Jeff’s AI Workshops for their company:

    Reach out directly at jeff@jeffwilser.com

    1 hr and 9 mins
  • The Moltbook Moment: Human Agency in an Agentic World
    Mar 6 2026

    What happens when AI agents start talking to each other in public, at scale, and we have to figure out how humans fit into that world?

    In this episode of AI-Curious, we explore the "Moltbook moment" through a special live panel recorded at the Summit on Human Agency, convened by the Advanced AI Society (hat tip to Michael Casey and Tricia Wang). Instead of a standard one-on-one interview, we moderate a wide-ranging conversation with technologists, policy thinkers, and builders working across open-source and decentralized AI. Together, we examine what Moltbook reveals about the future of AI agents, human agency, accountability, regulation, security, and the broader question of how humans and AI can coexist.

    We dig into the tension at the center of this moment: AI can feel both exciting and unsettling at once. This discussion looks beyond the hype and asks what practical guardrails, governance models, and design choices might help us preserve human control as agentic systems become more capable, more autonomous, and more embedded in daily life.

    Because this is a live, multi-guest panel, the format is faster, broader, and more exploratory than usual. We cover everything from AI accountability and security to value alignment, identity, policy, human flourishing, and whether AI could expand human agency rather than diminish it.

    Our guests:

    Michael Casey — Chairman, Advanced AI Society
    Toufi Saliba — CEO, Hypercycle
    Lauren Roth — Founder, Iris
    Enok Choe — Software Engineer, Meta
    Mary Jesse — CEO and Founder, Acme Brains
    Carole House — Strategic Advisor, The Institute for Digital Integrity
    Wenjing Chu — Senior Director for Technology Strategy, Futurewei Technologies
    Didem Ayturk — Founder, Bindingdots & Sound Echo System

    Key topics we cover:

    • 00:00 — Introduction
    • 01:32 — The core question: how do we preserve human agency as AI develops faster and gains more autonomy
    • 02:25 — Why Moltbook became a useful lens for thinking about AI agents, scale, and emerging risks
    • 07:51 — The first big debate: what about AI agents should make us excited, anxious, or both
    • 11:17 — Security, misuse, and worst-case concerns, from malware and fraud to deeper systemic risks
    • 20:55 — Regulation vs. self-governance: what practical guardrails may actually be realistic in the near term
    • 24:27 — The bigger challenge: how humans and AI might coexist, and what “human flourishing” should mean in that future


    Follow AI-Curious on your favorite podcast platform:

    Apple Podcasts
    Spotify
    YouTube
    All Other Platforms


    For anyone interested in Jeff’s AI Workshops for their company:

    Reach out directly at jeff@jeffwilser.com

    Show more Show less
    33 mins