• The Doomer's Error: Why AGI Is An Incoherent Concept
    Mar 25 2026

What's the strongest anti-AGI case – the argument that reveals the fallacies underlying the belief that AGI is a viable goal, as well as the AI doomerism that belief in AGI's imminent arrival often spawns? Princeton professor Arvind Narayanan recently made a statement that we feel deserves amplification: for real-world problems, machines face some of the same fundamental limits and challenges that humans do.

    Listen to Luba and Eric unpack, explore, and expound. #noAGI

    48 mins
  • Predictive AI vs. GenAI: A Crucial, Unavoidable Comparison
    Mar 17 2026

    In this episode we cover:

    - Why predictive AI and generative AI are destined to remain inherently distinct

    - Why comparing them is unavoidable, even though they solve different problems

    - How they compare

    - How companies should balance investments between the two

    48 mins
  • Pushing the ultimate limits: helping genAI realize its promise of autonomy
    Mar 11 2026

    In this episode, we talk about real, truly deployed LLM-based systems that push the limits of autonomy. How can we "tame" LLMs to create feasible, practical solutions that are viable for deployment? What are their ultimate limitations?

    54 mins
  • Superhuman Adaptable Intelligence: LeCun's New Buzzword Challenges AGI
    Mar 5 2026

    In this episode, Luba Gloukhova and Eric Siegel unpack the new paper, "AI Must Embrace Specialization via Superhuman Adaptable Intelligence," by Yann LeCun and others.

    The paper endeavors to "address what’s wrong with our conception of AGI, and why, even in its most coherent formulation, it is a flawed concept to describe the future of AI."

    That aligns so well with our episode just two days ago that one of the paper's authors, Philippe Wyder, tweeted us about the paper, bringing it to our attention!

    The paper presents the new term "Superhuman Adaptable Intelligence," which is defined as "intelligence that can learn to exceed humans at anything important that we can do, and that can fill in the skill gaps where humans are incapable."

Listen to our breakdown and take, and access the full paper here: https://arxiv.org/abs/2602.23643

    56 mins
  • The Whole Problem with AGI and Its Ridiculous Definitions
    Mar 3 2026

    In this very special episode, the first with a co-host (Luba Gloukhova), Dr. Data and Miss Information explore why people are messing with the definition of artificial general intelligence, the problem with the concept, how it feeds AI hype, and how we can feasibly realize a good portion of genAI's overzealous promise of autonomy.

    40 mins
  • Predictive AI Thrives, Despite GenAI Stealing The Spotlight
    Feb 11 2026

    In this episode, listen to a narration of Eric Siegel's article in Forbes:

    Predictive AI Thrives, Despite GenAI Stealing The Spotlight

    GenAI and predictive AI battle for resources, but even as the overwhelming attention focuses on genAI, enterprises are still adopting predictive AI just as much.

    Access the original article here: https://www.forbes.com/sites/ericsiegel/2026/02/11/predictive-ai-thrives-despite-genai-stealing-the-spotlight/

    You can access an overview of HYBRID AI 2026 and a description of each enterprise presentation here: https://machinelearningweek.com/

    10 mins
  • Hybrid AI: Industry Event Signals Emerging Hot Trend (article)
    Feb 9 2026

    In this episode, listen to a narration of Eric Siegel's article in Forbes:

    Hybrid AI: Industry Event Signals Emerging Hot Trend

    AI is not yet the success that it should be, so two dozen enterprises will disclose their move toward a crucial new paradigm – hybrid AI – at a 2026 conference.

    Access the original article here: https://www.forbes.com/sites/ericsiegel/2026/02/09/hybrid-ai-industry-event-signals-emerging-hot-trend/

    11 mins
  • A Gooder AI case study: profiting on machine learning
    Sep 3 2025

The biggest hurdle for data science teams isn't building the model; it's proving its dollar value. This presentation shows how a dental group could translate a no-show prediction model into a clear business case worth $$$.

    It's about shifting the conversation from abstract metrics to tangible ROI.

Henry Castellanos is a data scientist extraordinaire. He goes beyond establishing strong technical performance for his ML models to also maximizing their business value. Let this sink in: Most data scientists don't do that – most ML projects don't plan and sell predictive AI deployment according to its explicit business value.

Interestingly, Henry points out that using Gooder AI (www.gooder.ai) to do this even bolsters his own confidence in his models and their business value.

    Listen to Henry's presentation to see exactly how to bridge the gap from ML to real-world value.

    To view this presentation as a video, go to: https://youtu.be/BT-GnnuN3jA

    34 mins