I Don't Want to Believe

AI, Prediction, and the Discipline of Delay

By: Eduardo Valencia
Narrated by: Virtual Voice
Buy for $3.99


This title uses virtual voice narration (computer-generated narration for audiobooks).

The next AI prediction you receive will probably be reasonable. That is exactly the problem. It will arrive as a ranking, a default, a pre-filled field. It will feel like information. It will function as a decision. And the distance between the two will be invisible, because no one designed a moment to notice it.

Eduardo Valencia kept seeing the same pattern: not AI failing, but AI succeeding in ways that quietly displaced the judgment it was supposed to support. A pharmaceutical pricing system where the human always chose the safer number. A language-learning platform where instructors overrode the system to make learners feel better, and measurably slowed their retention. A hiring committee that stopped discussing candidates below a certain score threshold. No one decided to exclude anyone. Attention simply narrowed. In each case, the prediction was reasonable. In each case, belief arrived before anyone noticed it had.

I Don't Want to Believe proposes the discipline of delay: not hesitation, but the practice of stating conditions before committing to outcomes. Drawing on Popper's falsifiability as an operational principle, it offers a framework for organizations that want to use AI without being used by it. The book makes three testable claims, and commits to being shelved if any prove wrong. A book about resisting premature belief should be willing to fail on its own terms.

Book 3 of the Thinking AI series. Book 1: AI Requires More Human Intelligence, on the human override problem. Book 2: Shadow AI, on how ungoverned AI becomes structural dependency.

Categories: Management, Management & Leadership