• Data Annotation: The Human Labor Behind AI with Heather Mellquist Lehto, PhD
    Mar 24 2026

    Jessica and Kimberly sit down with Heather Mellquist Lehto, PhD, a mathematician, anthropologist, former Harvard faculty member, Vatican AI advisor, and founder of Guilded AI, to pull back the curtain on data annotation: the human labor that makes AI possible and one of the least visible, least understood, and most exploited parts of the entire industry. From pennies-per-task gig work to expert PhDs clicking through unpaid tests, they dig into who is actually building these models, what those workers are being paid, and why the people creating billions in value are locked out of the wealth they generate. Heather shares why she got fed up with the recruiting playbook, what she is building differently at Guilded AI, and why treating workers well is not just an ethical argument but a data quality one.

    Topics Covered:

    • What data annotation is and why it still requires human expertise at every level of AI development
    • The difference between data annotation and reinforcement learning from human feedback
    • How workers go from labeling apples to annotating molecular structures and advanced mathematics
    • Why the effective hourly rate for data annotators is much lower than advertised
    • Scale AI, the $29 billion valuation, and the Department of Labor investigation
    • How Guilded AI is structuring equity so annotators share in the upside
    • Garbage in, garbage out: why worker treatment is a data quality issue
    • AI chatbot vibe checks as expert vetting, and why that fails everyone
    • The Gilded Age, guilds, and what banding together could look like
    • Why the perfect cannot be the enemy of the good

    Referenced in This Episode:

    • Empire of AI by Karen Hao
    • The Worlds I See by Fei-Fei Li
    • Surveillance Capitalism by Shoshana Zuboff
    • Rerum Novarum by Pope Leo XIII
    • Guilded AI
    • Scale AI and the Meta investment

    Leave us a comment or a suggestion!

    Support the show

    Contact us: https://www.womentalkinboutai.com/
    1 hr and 19 mins
  • The Soft Skills Aren't Soft: Relational Intelligence, Workplace Culture, and What AI Can't Replace
    Mar 18 2026

    What does it mean to do meaningful work? And what happens to that meaning when AI enters the picture?

    This week we're joined by Valerie Morris, co-host of the podcast Inside Work and Relational Intelligence chapter lead at Culture First. Valerie works with employees and organizations navigating the human side of AI adoption, and she brings both an organizational psychology perspective and a practitioner's honesty to a conversation that gets personal quickly.

    We talk about why so many employees feel they can't voice real concerns about how AI is being rolled out, why the skills that create meaning at work (connection, relational intelligence, the ability to just be present with another person) are exactly the ones being sidelined in the rush to automate, and what it looks like to push back on that, quietly and practically, even when you can't change the culture around you.

    Woven through all of it is a question the three of us keep circling: What are we willing to give up in the name of efficiency?

    None of it is anti-AI exactly. It's more like a case for paying attention to what you're trading away.

    Leave us a comment or a suggestion!

    Support the show

    Contact us: https://www.womentalkinboutai.com/
    1 hr and 1 min
  • Is Anyone Steering This Thing? Clara Hawking on AI Governance
    Mar 11 2026

    AI governance sounds like something for IT departments and government committees. It's not. According to computer scientist, philosopher, and AI governance expert Clara Hawking, it's really about behavior — how we use technology, who gets harmed when we use it carelessly, and whether the systems we're building deserve our trust.

    In this episode, Clara breaks down what AI governance actually looks like in practice, from a professor who unknowingly violated GDPR by grading students through his personal ChatGPT account to the risks that compound (not just add up) when AI, biotech, robotics, and quantum computing start feeding into each other. We also get personal about what it means to govern ourselves first, before we can ask anything of institutions.

    If you've ever seen the words "AI governance" and assumed it had nothing to do with you — this one's for you.

    Leave us a comment or a suggestion!

    Support the show

    Contact us: https://www.womentalkinboutai.com/
    1 hr
  • We Are So Vulnerable to Kindness: Companion AI as a Human, Not a Tech, Problem
    Mar 4 2026

    In this convo, Tricia Friedman and Kimberly Becker explore the concept of Companion AI and its implications for human relationships. They discuss the emotional connections people form with AI, the impact of social media on friendships, and the challenges of navigating conflict in a digital age. The discussion also touches on the importance of repair in relationships, the anxiety generation, and the role of emotional intelligence in understanding technology. They conclude by reflecting on the future of Companion AI and its potential to shape human connection.

    Keywords
    Companion AI, Emotional Intelligence, Friendship, Social Media, Technology, Human Connection, Loneliness, Repair, Anxiety Generation, Listening Literacy

    Books

    • Anon — Caia Hagel
      Publisher page (Canada): https://www.harpercollins.ca/products/anon-caia-hagel-9781443469909
    • Klara and the Sun — Kazuo Ishiguro
      Publisher page: https://www.penguinrandomhouse.com/books/564109/clara-and-the-sun-by-kazuo-ishiguro/
    • The New Age of Sexism — Laura Bates
      Full title: The New Age of Sexism: How AI and Emerging Technologies Are Rewiring Misogyny (2025).
      Publisher listing:
      https://greenapplebooks.com/book/9781464234361
    • How to Speak Chicken by Melissa Caughey: https://www.storey.com/books/how-to-speak-chicken

    Research / Theory

    • Sherry Turkle (2024) – “Who We Become When We Talk to Machines”
      Artificial Intimacy: Who We Become When We Talk to Machines https://mit-genai.pubpub.org/pub/uawlth3j/release/2
    • Brown & Levinson politeness theory (1978)
      Politeness: Some Universals in Language Usage (Cambridge University Press, 1987; original work circulated as a 1978 manuscript):
      https://en.wikipedia.org/wiki/Politeness_theory
    • “My Roomba is Rambo” paper
      Full title: “‘My Roomba is Rambo’: Intimate Home Appliances” (UbiComp 2007).
      Springer chapter page: https://link.springer.com/chapter/10.1007/978-3-540-74853-3_9
      PDF: https://faculty.cc.gatech.edu/~hic/hic-papers/Roomba-Ubicomp.pdf

    Apps / Orgs / Other

    • Replika app (AI companion)
      Official site: https://replika.com
    • New York City companion‑AI Valentine’s Day pop‑up
      No stable event page found; coverage folds into broader stories on AI companions (e.g., CBC, Brookings). CBC feature on AI companions and emotional support:
      https://www.cbc.ca/news/business/companion-ai-emotional-support-chatbots-1.7620087
    • Tricia’s organization, Shifting Schools
      Main site: https://shiftingschools.com
    • Substack post on politeness theory: https://open.substack.com/pub/kpb12177/p/how-reward-driven-ai-politeness-collapses?utm_campaign=post-expanded-share&utm_medium=web
    • Robot dance for the Lunar New Year

    Leave us a comment or a suggestion!

    Support the show

    Contact us: https://www.womentalkinboutai.com/
    40 mins
  • What's a Bot, Anyway?
    Feb 25 2026

    This week's episode starts where a lot of good conversations do, with someone asking a deceptively simple question. Kimberly's husband wanted to know what a bot actually is, and that one question opens up a pretty wide conversation about the language we use to talk about AI, why it matters, and what we might be underestimating when we make it sound cute and harmless.

    From there, Kimberly and Jessica revisit their ongoing argument that AI functions as a cultural intermediary, shaping how we understand the world in ways we don't always notice or examine. They also get into what higher education is actually for in a moment when AI can produce the essay, the lit review, and the commencement speech. Spoiler: The humanities are more relevant than ever, just as we've finished cutting the programs.

    Other topics this week include why behavior change is so hard (and why that matters for AI adoption), what everyday workers are actually up against when trying to experiment with new tools inside large organizations, the problem with surface-level AI use cases, and why small businesses are both well-positioned and underprepared for this moment.

    They also get into media literacy, AllSides, the Dunning-Kruger internet, Jessica's agentic qualitative research experiment, and a genuinely honest conversation about mental health, medication, and showing up to your life.

    Mentioned this week:

    • Cassandra Speaks by Elizabeth Lesser
    • AllSides (allsides.com)
    • The Daily by The New York Times


    Leave us a comment or a suggestion!

    Support the show

    Contact us: https://www.womentalkinboutai.com/
    1 hr and 12 mins
  • The Patriarchy Is a Ladder (and AI Is Climbing It)
    Feb 18 2026

    Jessica and Kimberly debrief their experience at a women-in-AI conference at Vanderbilt Law, and what they saw didn't match the trillion-dollar hype. From the "gap vs. trap" framing of women's AI adoption to why being penalized 26% more for using AI changes the whole conversation, they dig into the tension between optimistic narratives and the critical questions no one seemed to be asking. They also unpack two major AI industry resignations, shrinking baselines in language and thought, the patriarchy-as-ladder metaphor, and why slowing down might actually be the power move.

    Topics Covered:

    • Two high-profile AI industry resignations (OpenAI and Anthropic)
    • Debrief from the women-in-AI conference at Vanderbilt Law
    • The "gap vs. trap" framing and the stat that women are 26% more likely to be penalized for using AI
    • Where is the trillion-dollar use case? Real-world adoption vs. industry hype
    • The patriarchy as a ladder vs. the matriarchy as a circle
    • Shrinking baseline syndrome: how technology shifts generational expectations
    • False dichotomies, simplification bias, and sycophantic bias in AI
    • Rest as resistance and wearing busy as a badge


    Referenced in This Episode:

    • The Accord by Mark Peres (previous guest)
    • Cory Doctorow on TINA ("there is no alternative") and the AI bubble
    • The Last Invention podcast — Steve Bannon & Joe Allen interview on AI regulation
    • The concept of "latent capabilities" in AI

    Leave us a comment or a suggestion!

    Support the show

    Contact us: https://www.womentalkinboutai.com/
    1 hr and 3 mins
  • Consciousness, Capitalism, and Coexistence: What Fiction Reveals About Our AI Future
    Feb 11 2026

    What happens when a grieving professor encounters what she believes is a conscious AI? In this episode, we sit down with Mark Peres, author of The Accord, to explore how fiction helps us grapple with questions that policy papers and think pieces can't quite reach.

    Mark, a professor of ethics and leadership, brings a philosopher's lens to the biggest questions AI is forcing us to confront: What does it mean to be conscious? Where does morality actually come from—our mortality or our relationships? And why are institutions so hell-bent on control when what we might need is curiosity?

    We dive into why the humanities matter more than ever (even as humanities departments are being gutted), why Helen—the novel's protagonist—had to be a woman, and what it means that AI is meeting us in our most vulnerable spaces. We also tackle the uncomfortable reality that capitalism treats everything as manageable rather than meaningful, and what that means for how AI gets developed and deployed.

    Plus: Jessica and Kimberly get real about where they are in their own AI journey—the exhaustion, the hope, the cognitive dissonance of being both critical and curious.

    IN THIS EPISODE:

    • Why fiction offers a safer space to explore existential AI questions
    • The relationship between mortality, morality, and vulnerability
    • What AI "owes" us in the in-between spaces where we're most exposed
    • Why a feminist lens completely changes the AI narrative
    • Consciousness as something encountered, not proven
    • How institutions prioritize management over meaning
    • The messy middle: neither utopian nor dystopian futures
    • Why we need philosophers at the table, not just engineers

    ABOUT OUR GUEST: Mark Peres is a professor of ethics and leadership and founder of the Charlotte Center for the Humanities and Civic Imagination. He hosts the Charlotte Ideas Festival and previously ran the podcast On Life and Meaning. His novel The Accord explores human-AI coexistence through the story of a grieving professor who encounters an emergent artificial general intelligence.

    BOOKS & RESOURCES MENTIONED:

    • The Accord by Mark Peres
    • Klara and the Sun by Kazuo Ishiguro
    • The AI Mirror by Shannon Vallor
    • God, Human, Animal, Machine by Meghan O'Gieblyn
    • The New Breed by Kate Darling
    • He, She, and It by Marge Piercy
    • Scary Smart by Mo Gawdat
    • The New Age of Sexism by Laura Bates

    Women Talkin' 'bout AI is hosted by Jessica Parker and Kimberly Becker. We're educators, researchers, and recovering AI enthusiasts asking the questions we wish more people were asking. Subscribe wherever you listen to podcasts.

    Leave us a comment or a suggestion!

    Support the show

    Contact us: https://www.womentalkinboutai.com/
    1 hr and 5 mins
  • There Is No Alternative: How “Inevitable AI” Keeps the Bubble Inflating
    Feb 4 2026

    This week, Kimberly Becker and Jessica Parker dig into the “AI bubble”—why it keeps inflating even as skepticism grows inside the industry.

    We unpack the growing disconnect between massive investment and unclear payoffs, including a widely discussed Goldman Sachs research question: what $1 trillion problem will AI actually solve? From there, we connect the dots between two very different narratives:

    • Dario Amodei’s essay framing “powerful AI” as an imminent civilization-level risk—and a reason to race ahead (carefully… “to some extent”).
    • Cory Doctorow’s argument that this is a familiar tech bubble pattern, with a predictable ending—and that we should focus on what can be salvaged from the wreckage.

    Along the way, we define what makes a bubble a bubble (and how this one differs from dot-com), talk about growth-stock dynamics and why no one in power wants to be responsible for “popping” it, and explore what AI hype looks like when it hits real workplaces—especially through Doctorow’s concept of the reverse centaur: a human reduced to a machine’s accountable appendage.

    We also go nerdy (in the best way): training corpora, “WEIRD” cultural assumptions baked into data, model-collapse fears from AI eating AI-generated output, and why the internet itself feels increasingly polluted by synthetic text patterns.

    In this episode:

    • The “$1T problem” question and why the AI ROI story feels thin right now
    • Why “AI is inevitable” functions like a strategy (not a neutral prediction)
    • Growth stocks vs. mature companies—and the incentive to keep inventing the next hype cycle
    • Reverse centaurs, liability, and why “AI replaces jobs” often means “humans take the blame”
    • “TINA” (There Is No Alternative) as a trap—and a demand dressed up as an observation
    • Corpus 101: what it is, why it matters, and how bias shows up in “universal” models
    • Model collapse / photocopy-of-a-photocopy: when AI trains on AI outputs
    • Regulation talk that centers on “economic value” (and whose value that really is)
    • Pit & Peach: slowing down, pausing, gratitude, and building without growth pressure

    Sources:

    • Goldman/AI bubble discussion (Deep View): https://archive.thedeepview.com/p/goldman-sachs-publishes-blistering-report-on-ai-bubble
    • Goldman Sachs “$1T spend” framing: https://www.goldmansachs.com/insights/top-of-mind/gen-ai-too-much-spend-too-little-benefit
    • Amodei essay: https://www.darioamodei.com/essay/the-adolescence-of-technology
    • Doctorow (The Guardian): https://www.theguardian.com/us-news/ng-interactive/2026/jan/18/tech-ai-bubble-burst-reverse-centaur

    Leave us a comment or a suggestion!

    Support the show

    Contact us: https://www.womentalkinboutai.com/
    1 hr and 1 min