The AIQUALISER Podcast: discover how people are really using AI

By: John Bennett

The AIQUALISER Podcast examines what changes when AI becomes part of everyday life and work.

Each episode is a conversation with someone using AI in their business, profession, or career. We talk about how they use it, how it fits into their existing work, and the challenges they have encountered along the way.

These practical, reflective conversations are hosted by John Bennett, author of Don’t Surrender Your Thinking, and are for anyone interested in adapting their work and keeping their thinking sharp as AI advances.

If you have a question you’d like explored on the podcast, please visit frmdb.ly/pod

© 2026 John Bennett
Episodes
  • AI Won't Answer for Its Mistakes. You Will.
    Mar 12 2026

    In this episode of The AIQUALISER Podcast, John Bennett talks with James O'Regan, co-host of The Impact of AI Explored, about who is actually accountable when AI gets something wrong.

    James has been podcasting about AI since February 2024. His view of the technology is practical and consistent: useful, incremental, and nowhere near as groundbreaking as the hype suggests.

    The conversation moves through the hype that has failed to deliver, the security risks that get glossed over in the rush to try new things, and the guardrails question that James returns to throughout. Autonomous agents do not stop when something goes wrong. They keep going until told not to. That requires precise instructions, clean data, and documented processes. Most AI pilots skip all three. That is why most of them fail.

    The episode ends with a simple question: if AI disappeared tomorrow, what would James miss most? His answer is the efficiency. It is not that AI can do things humans cannot; it simply makes you quicker.

    In This Episode

    • Two years of change: from experimentation to daily use

    • The AI hardware that flopped, and what it says about hype

    • Security risks in open-source agents and AI browsers

    • Autonomous agents and the guardrails problem

    • Why 70 percent of AI pilots fail

    • What James will not hand over to AI, and why

    • Talking to children about what is real

    • Agents versus automation: how to tell the difference

    • Custom instructions, sycophancy, and the AI relationship problem

    • Listener question: keeping company data out of public AI systems

    • If AI disappeared tomorrow: efficiency, not capability

    Chapters

    00:00 Introduction to James O'Regan

    03:35 Two years of talking about AI

    06:34 The biggest letdowns

    14:12 Cool but scary

    19:03 Staying in control

    32:04 Kids and AI

    40:12 Agents or automation?

    46:08 Day-to-day use and personalisation

    56:30 Listener question: blocking AI

    1 hr and 6 mins
  • Why speed isn't always an advantage with AI, with Corinne Thomas
    Feb 18 2026

    Join John as he talks with Corinne Thomas, founder of Ethical Sales, about what responsible AI adoption actually looks like inside real organisations, and how to implement it without creating confusion, risk, or resistance.

    They discuss how AI adoption is usually driven by leadership, and why pressure to “move fast” often clashes with reality. Corinne shares what she sees when individuals respond very differently to AI, from enthusiasm to scepticism to outright fear, and why those reactions need to be handled deliberately rather than smoothed over.

    The conversation explores why the biggest risks often come from overconfidence rather than caution, and why slowing down can actually accelerate progress.

    They also dig into what helps people learn AI properly, and the continued importance of face-to-face learning, even when the tools themselves are digital.

    The discussion also explores where AI is genuinely making a difference. Much of the value comes from unglamorous work: admin, proposals, funding applications, and internal processes, rather than the headline use cases people often fixate on. The episode returns repeatedly to the idea that AI works best when it supports structure, not when it replaces thinking.

    The episode closes with a listener question on using AI for prospecting, and why expecting it to act as a data source often leads to unreliable results. Corinne explains where AI fits in sales research, and where human judgement and proper data still matter.

    Visit the Ethical Sales website to sign up to Corinne's newsletter.

    In this episode:
    • The different ways individuals react to AI, and why that matters
    • Why moving too fast often creates more risk than value
    • The problem of shadow AI and uncontrolled experimentation
    • What effective AI learning actually looks like in practice
    • Why face-to-face still plays a role in building capability
    • Where AI is quietly making the biggest difference
    • Keeping human judgement in charge as AI becomes more powerful
    • What AI can and can’t do in prospecting

    Chapters:

    00:00 Introduction to Corinne Thomas

    05:40 Who drives the decision to use AI?

    09:01 The three approaches to AI

    17:05 Why face-to-face still matters

    20:24 The risk of going too fast

    25:17 The beauty and challenge of AI progress

    30:02 Building AI capability

    35:42 Where AI is actually making a difference

    45:35 "I'm the human here"

    51:50 What AI can and can’t do in prospecting

    1 hr and 2 mins
  • Why you need to treat AI like the new guy, with Russ Henneberry
    Feb 4 2026

    In this episode of The AIQUALISER Podcast, John Bennett talks with Russ Henneberry, co-author of Digital Marketing for Dummies, about why AI often frustrates us, and why structure and judgement matter more than prompts, tools, or model choice.

    Russ reflects on a career shaped by repeated reinvention, from early internet marketing through to content, SEO, and platform shifts such as Google, Facebook, and now AI. He positions AI not as a creative shortcut or a mysterious intelligence, but as a general-purpose system that behaves predictably once its true nature and limits are understood.

    A central idea in the conversation is the “new guy” analogy. When AI delivers generic, bloated, or inconsistent outputs, it is usually because it lacks context. Russ explains that most frustration with AI comes from treating it as if it already knows the job, rather than recognising that it needs onboarding just like any new team member.

    The discussion moves on to why clever prompting rarely compensates for weak intent, unclear scope, or missing structure, and why letting AI run in auto mode can quietly undermine human thinking. AI will almost always overproduce, and the real work happens in editing, cutting back, and deciding what matters.

    Russ also cautions against constantly switching tools in search of better results. Staying with a small number of systems allows understanding to build properly, while novelty keeps attention scattered.

    If you have a question you’d like us to pick up in a future episode, you can get in touch at frmdb.ly/pod

    To find out more about Russ, visit theClick

    In This Episode
    • Why AI often feels inconsistent or disappointing
    • The “new guy” analogy, and what it explains about generic outputs
    • Why structure matters more than prompts or model choice
    • How auto mode can trade speed for judgement
    • Why AI overproduces, and why editing is essential
    • The risks of tool hopping versus going deep with a few systems
    • Why responsibility and authorship do not disappear as AI improves

    Chapters

    00:00 Introduction to Russ Henneberry

    10:11 What's Surprising About AI?

    14:42 Structuring AI for Effective Use

    23:43 The Importance of Learning AI Deeply

    36:28 Diving Deep into AI Tools

    46:29 Structuring AI for Business Planning

    57:34 Taking Responsibility for AI Outputs

    1 hr and 5 mins