AI Won't Answer for Its Mistakes. You Will.
In this episode of The AIQUALISER Podcast, John Bennett talks with James O'Regan, co-host of The Impact of AI Explored, about who is actually accountable when AI gets something wrong.
James has been podcasting about AI since February 2024. His view of the technology is practical and consistent: useful, incremental, and nowhere near as groundbreaking as the hype suggests.
The conversation moves through the hype that has failed to deliver, the security risks that get glossed over in the rush to try new things, and the guardrails question that James returns to throughout. Autonomous agents do not stop when something goes wrong. They keep going until told not to. That requires precise instructions, clean data, and documented processes. Most AI pilots skip all three. That is why most of them fail.
The episode ends with a simple question: if AI disappeared tomorrow, what would James miss most? His answer is the efficiency. There is nothing AI can do that humans cannot; it just makes you quicker.
In This Episode
• Two years of change: from experimentation to daily use
• The AI hardware that flopped, and what it says about hype
• Security risks in open-source agents and AI browsers
• Autonomous agents and the guardrails problem
• Why 70 percent of AI pilots fail
• What James will not hand over to AI, and why
• Talking to children about what is real
• Agents versus automation: how to tell the difference
• Custom instructions, sycophancy, and the AI relationship problem
• Listener question: keeping company data out of public AI systems
• If AI disappeared tomorrow: efficiency, not capability
Chapters
00:00 Introduction to James O'Regan
03:35 Two years of talking about AI
06:34 The biggest letdowns
14:12 Cool but scary
19:03 Staying in control
32:04 Kids and AI
40:12 Agents or automation?
46:08 Day-to-day use and personalisation
56:30 Listener question: blocking AI