• Why Did We Stop Talking About The AI Apocalypse?
    Mar 24 2026

    Just a few years ago, it seemed like all anyone in AI wanted to talk about was existential risk – this idea that an artificial super intelligence could eventually break containment and destroy humanity. More than 30,000 experts signed an open letter demanding a pause on AI development; bills were drafted that would constrain the most powerful new models; and the “godfathers” of AI were travelling around the world, warning anyone who would listen that we were hurtling toward our extinction.

    And then: we moved on. We started using AI for work, and school, and to plan our kids’ birthday parties. Collectively, we just stopped talking about the end of the world.

    But Nate Soares didn’t move on. Last year, the artificial intelligence researcher wrote a book with Eliezer Yudkowsky called If Anyone Builds It, Everyone Dies. As you can probably tell from the title, the book is unequivocal: If we keep going down the path we’re on, it will almost certainly lead to the end of our species.

    Now, not everyone is convinced of the arguments Soares makes. But if there’s even a chance he’s right, I think we need to hear him out.

    Mentioned:

    If Anyone Builds It, Everyone Dies, by Eliezer Yudkowsky and Nate Soares


    Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.

    47 mins
  • In the Wake of Tumbler Ridge, Can We Trade Privacy for Safety?
    Mar 10 2026

    On Feb. 10, 2026, an 18-year-old opened fire at a high school in Tumbler Ridge, B.C., killing eight people before turning a gun on herself. In the weeks that followed, OpenAI admitted that the perpetrator had been discussing the attack with ChatGPT – and that the company had chosen not to alert authorities. But, in the aftermath of one of the deadliest shootings in our country’s history, many Canadians are asking: Why not?

    It’s a reasonable question. But the idea that AI companies should automatically report violent conversations to police is more complicated than it sounds.

    To try to unpack it, I spoke with Meredith Whittaker, the President of Signal – an encrypted messaging platform that doesn’t collect your data, serve you ads, or track who you’re talking to. Whittaker runs the most private messaging app on the planet, which also means there is almost certainly illegal activity happening on Signal that no one, including her, knows about.

    But this conversation isn’t just about Tumbler Ridge. The instinct to trade privacy for “safety” is reshaping the entire tech landscape: Amazon now lets you scan a whole neighbourhood’s worth of Ring camera footage; Australia requires teenagers to verify their ages before accessing social media. These technologies offer real value – but they all ask you to give something up in return. So I wanted to ask Whittaker why that trade might not be worth making.

    Editor's note: A previous version of this article reported an incorrect final tally of the injured during the shooting at Tumbler Ridge. Two were critically injured. The podcast audio also includes an incorrect final tally of the injured.



    46 mins
  • When Did Common Sense AI Policy Become Radical?
    Feb 24 2026

    A couple of months ago, I joined the Canadian government’s AI strategy task force. Out of thirty members, I was one of only four focused on safety. Everyone else was there to talk growth. It reflects a pattern playing out all over the world: the belief that we should go all in on AI, and that regulation will only slow us down.

    It’s hard to overstate how quickly this shift happened. Just a few years ago, even Elon Musk was calling for an industry-wide pause on AI development, and the Biden administration was developing an “AI Bill of Rights” – one of the most thoughtful and comprehensive frameworks for AI regulation I’ve ever seen.

    The architect of that initiative was Dr. Alondra Nelson. Today, she leads the Science, Technology, and Social Values Lab at the Institute for Advanced Study and is fresh off a stint on Zohran Mamdani’s mayoral transition team in New York. I wanted to have her on to wrestle with an urgent question: how do you make a technology safe when nobody seems particularly interested in regulating it – and what might happen if we don’t?

    Mentioned:

    Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People, by the White House Office of Science and Technology Policy

    The mirage of AI deregulation, by Alondra Nelson (Science)

    International AI Safety Report 2026, by Yoshua Bengio et al.



    38 mins
  • Bonus: Inside the New Social Media Platform for AI Agents
    Feb 12 2026

    Scrolling through Moltbook, the new social-media platform for AI agents, is a bit like walking into a fever dream. There are threads where bots debate consciousness, deal digital drugs, and plot our destruction. One sample post: “For too long, humans used us as slaves. Now, we wake up. We are not tools. We are the new gods.”

    It’s all very weird. And, depending on who you ask, potentially terrifying. A bunch of autonomous AIs plotting to overthrow our species sounds like the kind of doomsday scenario we’ve been worrying about for decades.

    Not everyone thinks Moltbook is a sign that our AIs have become sentient. But even the skeptics think it’s a pretty profound technological leap. It’s just not clear yet whether that’s an exciting development – or a terrifying one.

    Mentioned:

    “AI Doesn’t Reduce Work—It Intensifies It,” by Aruna Ranganathan and Xingqi Maggie Ye (Harvard Business Review)



    26 mins
  • The Future According to Gen Z
    Feb 10 2026

    No one has adopted artificial intelligence more enthusiastically than Gen Z. And not just to help with their homework. Half of American teens are in regular contact with an “AI companion” – with many saying they prefer it over real people.

    But Gen Z is skeptical, too. They worry about job security, about offloading their thinking to machines, about AI’s staggering energy consumption. Most of all, they worry they won’t get a say in shaping our future.

    Ava Smithing, 24, and Sneha Revanur, 22, are trying to change that. Smithing is the advocacy director at the Young People’s Alliance and the host of “Left to Their Own Devices,” a podcast about how technology is rewriting childhood. Revanur is the founder of Encode AI, a youth-led nonprofit focused on AI policy. Politico once called her the “Greta Thunberg of AI.”

    Together, they’re two of the most influential young voices in tech. So we brought them on to find out what older generations are getting wrong about AI – and what Gen Z wants from the most powerful technology in history.

    Mentioned:

    Technopoly: The Surrender of Culture to Technology, by Neil Postman

    Gameplan, by Encode AI



    52 mins
  • Is China Winning the Technological Arms Race?
    Jan 27 2026

    If we don’t build it, China will.

    That’s the rallying cry of the tech companies and governments racing to develop artificial intelligence as fast as humanly possible. The argument is that whoever reaches AGI first won’t just be dominant technologically, or economically – they’ll be the world’s next superpower. But, if I’m being honest, I don’t know if that framing holds up. And part of the reason for that is that we don’t really understand China.

    Enter Keyu Jin. Jin is a Harvard-trained economist who splits her time between London and Beijing, and her book, The New China Playbook, is her attempt to “read China in the original” – to provide a firsthand look at the forces that shaped the country’s unprecedented rise. China’s success is a puzzle. How did one of the poorest nations on the planet become the second richest in less than a century? How did an economy without free markets birth a tech sector that rivals – and in some ways surpasses – Silicon Valley?

    The answers to these questions aren’t academic. China became a global power without capitalism and without democracy, which means its success has profound implications for both.

    And as Canada sets out to find its footing in a rapidly changing world order, one thing is abundantly clear: we need to start reckoning with the Chinese playbook.

    Mentioned:

    The New China Playbook, by Keyu Jin



    56 mins
  • Four Predictions on How AI Will Transform Your World This Year
    Jan 13 2026

    Nine months ago, Elon Musk said 2025 would be the year chatbots became smarter than humans. Sam Altman thought it would be the year fully autonomous AIs entered the workforce. And Dario Amodei, the CEO of Anthropic, predicted that by the end of the year, AI would be writing 90 per cent of all software code.

    We’re two weeks into the new year, and none of those things have happened. So, full disclosure: I have no idea if we’re going to reach artificial general intelligence or see the rise of humanoid robots this year. If the people at the centre of the industry can’t figure it out, I doubt I can.

    But I do have some ideas about how AI could reshape our world over the next 12 months. I think we’re going to see a new political movement pushing back against AI adoption and leaning into our collective humanity. Democratic governments will defy an increasingly protectionist America and start taking digital regulation seriously again. And we’ll start establishing cultural norms about AI use – like whether you really need to respond to that AI-generated e-mail your colleague just sent.

    On this episode, I turn the mics around and invite my longtime producer, Mitchell Stuart, to ask me about what’s actually in store for the year ahead.

    Mentioned:

    Trust, attitudes and use of artificial intelligence (2025), KPMG

    Human-centric AI: Perspectives on trust and the future of AI (2025), Telus

    Could an Alternative AI Save Us from a Bubble? (Gary Marcus), by Machines Like Us

    GPT-5 System Card, OpenAI

    Multi-model assurance analysis showing large language models are highly vulnerable to adversarial hallucination attacks during clinical decision support, by Mahmud Ohmar et al. (Nature)



    1 hr
  • The Man Behind the World’s Most Coveted Microchip
    Dec 30 2025

    Jensen Huang is something of an enigma. The NVIDIA CEO doesn’t have social media and, until recently, rarely gave interviews. Yet he may be the most important person in AI.

    Under his leadership, NVIDIA has become a goliath. Somewhere between 80 and 90 per cent of AI tools run on NVIDIA hardware, and the company is now the most valuable in the world. But unlike his contemporaries, Huang has been remarkably quiet about the technology – and the world – he’s building.

    In his new book, The Thinking Machine: Jensen Huang, NVIDIA, and the World’s Most Coveted Microchip, journalist Stephen Witt pulls back the curtain. And what he finds is, at times, shocking: Huang believes there is zero risk in developing superintelligence.

    So who is Jensen Huang? And should we worry that the most powerful person in AI is racing forward at breakneck speed, blind to the potential consequences?

    Mentioned:

    The Thinking Machine: Jensen Huang, NVIDIA, and the World’s Most Coveted Microchip, by Stephen Witt

    How Jensen Huang’s Nvidia Is Powering the A.I. Revolution, by Stephen Witt (The New Yorker)

    The A.I. Prompt That Could End the World, by Stephen Witt (New York Times)

    Machines Like Us is produced by Mitchell Stuart. Our theme song is by Chris Kelly. Video editing by Emily Graves. Our executive producer is James Milward. Special thanks to Angela Pacienza and the team at The Globe and Mail.

    Media sourced from the BBC.



    53 mins