I Don't Want to Believe
AI, Prediction, and the Discipline of Delay
By: Eduardo Valencia
Narrated by: Virtual Voice (this title uses virtual voice narration)
The next AI prediction you receive will probably be reasonable. That is exactly the problem. It will arrive as a ranking, a default, a pre-filled field. It will feel like information. It will function as a decision. And the distance between the two will be invisible — because no one designed a moment to notice it.

Eduardo Valencia kept seeing the same pattern: not AI failing, but AI succeeding in ways that quietly displaced the judgment it was supposed to support. A pharmaceutical pricing system where the human always chose the safer number. A language learning platform where instructors overrode the system to make learners feel better — and measurably slowed their retention. A hiring committee that stopped discussing candidates below a certain score threshold. No one decided to exclude anyone. Attention simply narrowed. In each case, the prediction was reasonable. In each case, belief arrived before anyone noticed it had.

I Don't Want to Believe proposes the discipline of delay: not hesitation, but the practice of stating conditions before committing to outcomes. Drawing on Popper's falsifiability as an operational principle, it offers a framework for organizations that want to use AI without being used by it. The book makes three testable claims — and commits to being shelved if any prove wrong. A book about resisting premature belief should be willing to fail on its own terms.

Book 3 of the Thinking AI series.
Book 1: AI Requires More Human Intelligence — the human override problem.
Book 2: Shadow AI — how ungoverned AI becomes structural dependency.