• 67: Confidential AI, Speech Recognition, and Why AI Literacy Starts with Teachers with Giorgio Natili
    Mar 24 2026

    Summary:


    In this episode, Anastassia and Giorgio Natili discuss the importance of AI literacy, the evolution of speech recognition technology, and the challenges of ensuring data privacy and sovereignty in AI applications. They explore the concept of confidential AI, the need for responsible usage in education, and the future aspirations for AI explainability and funding allocation. The conversation emphasizes the necessity of understanding AI's limitations and the ethical implications of its deployment in various sectors.

    Giorgio Natili is an engineering leader, author, and community figure with over 20 years of experience in software engineering and technological innovation. He is currently Head of AI Engineering at Oracle Cloud, and previously Vice President and Head of Engineering at Opaque Systems, where he worked on confidential AI and secure data analytics platforms. Giorgio was previously the Head of Engineering for Firefox at Mozilla, Director of Software Engineering at Capital One, and a Software Development Manager at Amazon. Natili is also known for founding GNStudio, a Rome-based development studio, and for his work as a W3C member, author, and educator.

    In addition to his achievements in technology, Giorgio is an advocate for diversity, inclusion, and ethical leadership, and he has also spoken about his past as a professional windsurfer and DJ, emphasizing the human side of leadership.


    Takeaways:


    AI literacy is crucial for understanding the complexities of technology.

    Speech recognition has evolved significantly, but still faces challenges.

    Accents and environmental factors greatly impact transcription accuracy.

    Confidential AI focuses on maintaining data privacy and sovereignty.

    AI does not possess human-like understanding or reasoning capabilities.

    Responsible usage of AI is essential for protecting sensitive data.

    Prompt engineering can enhance the effectiveness of AI tools.

    AI can provide personalized learning experiences for students.

    Explainability in AI is necessary for safe and effective use.

    Funding for AI should prioritize explainability and safety over mere scaling.


    Chapters:

    

    0:00 Introduction to the episode: Who is our guest, and what will we learn today?

    1:54 Explainer on AI Literacy

    2:27 History of Speech Recognition

    3:22 Challenges in Speech-to-Text Technology

    7:26 Data and Model Limitations

    13:15 Confidential AI and Data Sovereignty concepts

    26:18 AI in Education and Responsible Usage

    39:02 Future of AI and Explainability



    36 mins
  • 66: The EU AI Act Uncovered: Law, Ethics & Europe's Bet on Responsible AI with Gabriela Bar
    Mar 17 2026
    Summary:

    Gabriela Bar, a legal expert specializing in AI law and ethics, talks about how AI is shaping legal frameworks, societal perceptions, and technological innovations – especially within Europe and Poland. She discusses the importance of responsible AI governance, the evolving legal landscape, and the societal implications of AI deployment at scale. The discussion with Anastassia touches on the compliance costs of implementing the EU AI Act, practices for introducing national LLMs, and what constitutes responsible AI.

    Gabriela Bar is a prominent legal expert specializing in technology and artificial intelligence law, based in Poland. She has over 20 years of experience and is the founder of the Gabriela Bar Law & AI firm, serving as a legal and ethics advisor for EU technology projects focused on AI, digital law, and compliance with regulations such as the EU AI Act and GDPR. She is recognized among the TOP100 Women in AI in Poland and Forbes 25 Women in Business Law, and is active in several international organizations dedicated to technology, digital ethics, and law. Gabriela Bar frequently lectures at universities, publishes on AI law and ethics, and advises technology companies and research consortia on responsible and practical AI innovation.

    Key Topics:

    Gabriela’s journey from technology law to AI ethics and her ongoing work within European AI regulation.
    The rapid growth of AI adoption in Polish businesses and public sector initiatives for language models.
    The challenges and opportunities of implementing responsible AI, including transparency, accountability, and bias mitigation.
    The role of AI legislation, with a focus on the European AI Act, regulatory costs, and how it balances innovation with safeguards.
    The global landscape of AI regulation, contrasting the EU's comprehensive approach with the decentralized US system.
    Technical limitations of deep learning models, explainability, and the importance of aligning AI development with ethical principles.
    The future of AI in cybersecurity, digital personas, and the geopolitics of AI competitiveness among the US, EU, and China.

    Chapters:

    00:04 Introduction to Gabriela and AI in Poland
    02:55 How Gabriela transitioned from traditional law to technology and AI
    04:03 Cultural portrayals of AI and public perceptions influenced by movies and literature
    07:49 Misinformation and misconceptions about AI technology today
    09:17 The private sector’s role in AI development and application in Poland
    10:54 Demographic challenges in Poland and AI’s potential role in mitigating them
    13:45 Political and regulatory gaps in AI, and the importance of cross-sector integration
    15:38 The absence of national LLMs in languages like Japanese; success stories from other countries
    18:01 Foundations of responsible and ethical AI: core principles and risk management
    21:51 Data quality, biases, and ongoing governance in AI lifecycle management
    22:53 The flaws in deep learning transparency and the necessity for cautious regulation
    29:34 Legal accountability, the role of audits, and fairness in AI systems
    33:34 The evolving landscape of AI litigation and insurance implications
    36:14 Regulatory costs for AI companies and the competitive landscape in Europe
    39:03 The scope of the European AI Act and its impacts on high-risk sectors
    42:49 Cybersecurity risks involving AI, criminal misuse, and the importance of legal safeguards
    44:08 Europe's strategic imperative in AI sovereignty amid the global technology race
    46:39 The contrasting regulatory systems of the US and China and their influence on innovation
    51:17 The emerging need for regulation of digital personas and synthetic media
    51:35 Wrapping up: key takeaways and the importance of dialogue between tech developers, policymakers, and society

    Resources & Links:

    Gabriela Bar - LinkedIn | Twitter
    Anastassia Lauterbach - LinkedIn
    @romyandroby
    “Leading Through Disruption”
    AI Edutainment
    Romy & Roby Book
    46 mins
  • 65: From Narrow AI to AGI - Breakthroughs, Limits, and Sense of Purpose in AIs with Dr. Craig Kaplan
    Mar 10 2026
    Summary:

    Anastassia and Dr. Craig Kaplan delve into the complexities of artificial general intelligence (AGI) and the evolving landscape of AI technologies. Craig emphasizes the importance of defining AGI as an AI capable of performing any cognitive task as well as an average human, highlighting the challenges of achieving true general intelligence beyond narrow applications. They discuss the historical context of AI development, the shift from symbolic AI to machine learning, and the potential of collective intelligence as a more effective approach to building AGI. Craig advocates for a community of models rather than a single monolithic AI, suggesting that this could lead to safer and more ethical AI systems that reflect diverse human values.

    The conversation also touches on the limitations of current AI systems, particularly their lack of understanding of causality and reasoning. Craig argues that while AI might develop its own sense of purpose, it is crucial to instill positive human values early on to guide its development. The discussion concludes by emphasizing the importance of AI literacy and critical thinking, noting that human behavior and values will significantly shape the future of AI and its impact on society.

    Craig A. Kaplan is an artificial general intelligence (AGI) expert and entrepreneur who focuses on collective intelligence, safe superintelligence, and practical strategies for aligning advanced AI with human values and goals. He has founded and led multiple AI-related ventures, including iQ Company, which develops AI systems to enhance human decision-making, and previously PredictWallStreet, an early crowdsourced stock prediction platform. He speaks and writes about how to safely build and govern increasingly powerful AI systems.

    Takeaways:

    AGI is defined as AI that can perform any cognitive task as well as an average human.
    The shift from symbolic AI to machine learning in the 1960s and 1970s, followed later by big data and powerful semiconductors, enabled today’s AI revolution.
    Collective intelligence may offer a safer and more effective path to AGI; this includes developing individual LLMs and models based on the values and perspectives of individual humans.
    Current AI systems lack an understanding of causality and reasoning.
    AI will develop its own sense of purpose, but early values are crucial.
    AI literacy is imperative for building safe, transparent, and beneficial AI.

    Chapters:

    00:00 Introduction to the episode: Researching Artificial General Intelligence (AGI) and the work of Dr. Craig Kaplan
    2:06 Introduction to AGI and AI Definitions
    04:16 The Evolution of AI: From Symbolic to Machine Learning
    07:02 The Limitations of Current AI Systems
    14:01 Causality and Reasoning in AI
    19:38 The Collective Intelligence Approach to AGI
    26:46 The Future of AI: Transparency and Collaboration
    28:37 The Purpose of AI Collectives
    29:25 Utopia vs. Reality in AI Development
    30:49 The Risks of AI: Understanding P-Doom
    32:16 Human Values vs. AI Intelligence
    35:09 Fusing Humanities with AI Engineering
    37:40 The Role of Human Responsibility in AI
    40:22 The Evolution of AI Values
    44:59 The Bell Curve of Society and AI's Reflection
    47:42 Education and AI: Building a Better Future
    49:38 The Necessity of AI Literacy and Critical Thinking

    Hyperlinks:

    LinkedIn profile
    Orcid profile
    Anastassia Lauterbach - LinkedIn
    First Public Reading, Romy, Roby and the Secrets of Sleep (1/3)
    First Public Reading, Romy, Roby and the Secrets of Sleep (2/3)
    First Public Reading, Romy, Roby and the Secrets of Sleep (3/3)
    AI Snacks with Romy and Roby
    @romyandroby
    “Leading Through Disruption”
    AI Edutainment
    The AI Imperative Book
    Romy & Roby Book
    Substack
    49 mins
  • 64: Unbreakable Backups - Decentralized Storage for Smart Systems with Murphy John
    Mar 3 2026
    Summary:

    The conversation focuses on decentralized cloud storage as an alternative to traditional hyperscale cloud providers, emphasizing security, privacy, cost, and resilience. It discusses the limitations of centralized cloud systems and how decentralized storage offers a more secure and distributed solution.

    Murphy John is Chief Growth Officer at StorX Network, a decentralized cloud storage platform (DePIN) built on blockchain technology to deliver secure, private, and cost-efficient data storage at scale. With a background in designing and managing internet and cloud infrastructure for large enterprises, banks, and financial institutions, he has over a decade of experience in building resilient, secure systems for mission-critical workloads. Since joining StorX in 2021, Murphy has led ecosystem development, strategic partnerships, and go‑to‑market initiatives, working closely with Web3, IoT, and AI partners to integrate StorX’s encrypted, geo-distributed storage into real-world applications. A strong advocate for data privacy and decentralization, he frequently speaks on how technologies such as encryption, data fragmentation, and distributed ledgers can protect organizations against ransomware, data misuse, and single points of failure in traditional cloud models.

    Key Takeaways:

    Centralized Cloud Issues: Traditional cloud systems face challenges in scalability, security, and cost.
    Decentralized Storage Benefits: Offers encrypted, distributed data storage with enhanced security and privacy.
    Ecosystem and Governance: StorX operates a global network with incentives for node operators and AI-driven management.
    Real-World Use Cases: Includes healthcare data storage with geofencing and IoT data processing.
    Future Outlook: Emphasizes education and adoption in a market dominated by legacy cloud players.

    Chapters:

    0:04 Introduction to the AI Literacy mission and this episode on decentralized storage
    3:11 Introduction and Market Context
    4:47 Traditional Cloud Promises and Limitations
    10:38 Decentralized Storage Architecture and Security
    22:04 Ecosystem, Node Operations, and AI Governance
    31:19 Use Cases and Regulatory Considerations
    39:39 Challenges and Future Outlook

    Hyperlinks:

    LinkedIn Murphy John
    StorX Website
    Stronger. Safer. Decentralized: StorX’s Guide to Cloud Storage vs. Backup
    Anastassia Lauterbach - LinkedIn
    First Public Reading, Romy, Roby and the Secrets of Sleep (1/3)
    First Public Reading, Romy, Roby and the Secrets of Sleep (2/3)
    First Public Reading, Romy, Roby and the Secrets of Sleep (3/3)
    AI Snacks with Romy and Roby
    @romyandroby
    “Leading Through Disruption”
    AI Edutainment
    The AI Imperative Book
    Romy & Roby Book
    Substack
    43 mins
  • 63: Beyond the Canvas: How AI Is Rewriting the Rules of Art/ Beyond Human with Matthias Röder (#3)
    Feb 24 2026
    Summary:

    This episode dives into how AI is transforming creative fields — from visual art and music to literature and performance. Dr. Matthias Röder is an AI-music pioneer who works at the forefront of classical music, emerging technologies, and innovation strategy. He led the AI team behind Beethoven X, the celebrated project that used artificial intelligence to complete Beethoven’s unfinished 10th Symphony, and serves as managing partner of The Mindshift, a consultancy focused on creativity and innovation. Röder is a former Managing Director of the Eliette and Herbert von Karajan Institute and a trustee of the Mozarteum Foundation.

    This episode emphasizes how artists, technologists, and institutions can navigate an evolving landscape — balancing innovation with ethical and legal frameworks. Whether you're a creator or simply interested in AI's societal role, these insights offer a clear view of a future where human ingenuity and machine evolution intertwine more deeply than ever.

    Takeaways/ key discussion points:

    Pioneering AI artists such as Refik Anadol, Holly Herndon, and Jeroen van der Most
    AI models like Boto and decentralized autonomous organizations (DAOs) are redefining artistic identity
    The commercialization of AI art and legal debates around copyrights and ownership
    The role of NFTs in financing and authenticating digital art pieces
    Skills artists need to thrive in an AI-rich environment, emphasizing collaboration and technological literacy
    The emerging importance of content registries and digital rights management platforms
    Future scenarios: the rise of hybrid teams of human and synthetic artists, new educational pathways, and the societal impact of AI-driven creativity

    Chapters:

    00:00 Introduction to the episode: continuing the Beyond Human series
    03:20 Introduction to AI's serious role in art and well-known pioneers
    05:35 Decentralized autonomous artists (Boto) and visual AI installations
    06:44 Jeroen van der Most's innovative use of pixel calculations and environmental themes
    07:26 The influence of AI artist Mats Mensch and the democratization of art creation
    08:50 Adoption of AI tools across music, with emphasis on composer workflows
    10:12 Major art exhibitions integrating AI, virtual worlds, and immersive experiences
    11:05 Market dynamics: how AI art is valued and traded in galleries and auctions
    13:03 The commercial side: monetization, licensing, and intellectual property debates
    16:49 The promise and risks of digital rights management and content registries
    18:06 Fractional ownership of NFTs for funding art projects
    19:37 Digital rights, copyright, and the importance of tracking AI training data
    24:34 The need for supporting mechanisms to ensure fair compensation for artists
    25:50 How content registries could revolutionize transparency and trust in AI-generated art
    28:48 Building infrastructures for AI content usage rights and ethical data practices
    34:16 Skills for future artists: collaboration, technical literacy, and adaptability
    37:28 The disruptive potential of synthetic performers and AI actors in Hollywood
    40:03 New educational models, including an AI-focused Master’s program for artists
    44:49 Personal reflections: the importance of writing, teaching, and staying curious in AI evolution

    Hyperlinks:

    Mentioned Creators:
    Refik Anadol - Official Site
    Holly Herndon - Musician and Researcher
    Jeroen van der Most - Portfolio
    NFTs and Art Market Analysis
    Scribe Platform for Digital Rights - Future Concept
    Omlet AI - Content Registration for Creators

    Dr. Röder:
    LinkedIn
    Twitter

    Anastassia:
    Anastassia Lauterbach - LinkedIn
    First Public Reading, Romy, Roby and the Secrets of Sleep (1/3)
    First Public Reading, Romy, Roby and the Secrets of Sleep (2/3)
    First Public Reading, Romy, Roby and the Secrets of Sleep (3/3)
    AI Snacks with Romy and Roby
    @romyandroby
    “Leading Through Disruption”
    AI Edutainment
    The AI Imperative Book
    Romy & Roby Book
    Substack
    44 mins
  • 62: Trustworthy by Design: Context-Rich AI in Healthcare with Ben Lengerich
    Feb 17 2026
    Summary:

    Ben Lengerich discusses the importance of context in AI for healthcare, the role of generalized additive models (GAMs), and the challenges of data quality and data compliance. He emphasizes the need for responsible AI practices and highlights the impact of historical data on current medical practices. The discussion also touches on the future of personalized medicine and the necessity of investing in AI to improve healthcare outcomes.

    Ben Lengerich is an assistant professor of Statistics at the University of Wisconsin–Madison and the founder of Intelligible, where he develops context-adaptive, interpretable AI methods to turn real‑world clinical data into reliable evidence for precision medicine. His research sits at the intersection of machine learning, computational genomics, and medical informatics, with a focus on models that are transparent to clinicians and that account for the specific health context of each patient. Before joining UW–Madison, he was a postdoctoral associate and Alana Fellow at MIT CSAIL and the Broad Institute, advised by Manolis Kellis, after earning his PhD in Computer Science and an MS in Machine Learning from Carnegie Mellon University, where he worked with Eric Xing on methods to uncover patterns in complex biomedical data.

    Takeaways:

    AI systems must understand context in healthcare to be effective.
    Generalized additive models (GAMs) enhance interpretability in AI.
    Data quality is paramount for successful AI applications in healthcare.
    Debugging datasets can uncover systemic issues in healthcare.
    Surprising insights from predictive modeling can inform better practices.
    Responsible AI practices are crucial in medical applications.
    Historical data continues to influence current medical practices.
    Compliance with regulations is a significant challenge for AI in healthcare.
    Legacy infrastructure poses barriers to AI implementation.
    Investing in AI can lead to improved healthcare outcomes and efficiency.

    Chapters:

    00:00 Introduction to another AI Snack on AI in Healthcare: Data, Context, Interpretability
    02:02 Understanding Context in Healthcare AI
    04:54 Generalized Additive Models Explained
    07:41 The Importance of Data Quality
    10:53 Debugging Datasets in Healthcare
    13:50 Surprising Insights from Predictive Models
    16:52 Responsible AI in Medicine
    19:47 Historical Impact on Medical AI
    22:28 Compliance and Regulations in Medical AI
    25:50 Bridging Legacy Infrastructure with AI
    28:03 The Future of AI in Healthcare
    31:43 AI Literacy for Healthcare Providers
    34:45 The Case for AI Investment in Healthcare

    Hyperlinks:

    Ben Lengerich:
    LinkedIn profile
    X profile
    Intelligible website

    Anastassia:
    Anastassia Lauterbach - LinkedIn
    First Public Reading, Romy, Roby and the Secrets of Sleep (1/3)
    First Public Reading, Romy, Roby and the Secrets of Sleep (2/3)
    First Public Reading, Romy, Roby and the Secrets of Sleep (3/3)
    AI Snacks with Romy and Roby
    @romyandroby
    “Leading Through Disruption”
    AI Edutainment
    The AI Imperative Book
    Romy & Roby Book
    Substack
    38 mins
  • 61: Digital Resurrection: The Science Behind Cryonic Dreams with John R. Carlos
    Feb 10 2026
    Summary:

    John Rodriguez Carlos shares his journey from a long military career to becoming a writer, discussing the inspiration behind his novel “Cryonic Dreams,” which explores themes of AI, cryonics, and the nature of consciousness. He delves into the implications of digital consciousness, the complexities of intelligence, and the role of AI in shaping our future. The discussion also touches on the purpose of writing and the human motivation behind storytelling, emphasizing the importance of preparing for the future through thoughtful dialogue.

    John R. Carlos is a retired Royal Australian Air Force Wing Commander who served for about 42 years in a range of operational and leadership roles across Australia and on overseas deployments before retiring in 2020 and turning to fiction writing. Born in Madrid and raised in Perth, Western Australia, he later studied through associate and advanced diploma programs, then retrained in creative writing after leaving the military to pursue a long-held ambition to become an author. His bibliography at this stage centers on his debut novel Cryonic Dreams: Awakening (2025), a near‑future science‑fiction thriller and Book 1 of a planned “Cryonic Dreams” trilogy, which explores successful cryonic reanimation, global power struggles, and the ethical and political implications of controlling life and death.

    Takeaways:

    John's military background shaped his sense of duty and creativity.
    His trilogy “Cryonic Dreams” explores AI and cryonics; the first novel is already published and the second is in preparation for release.
    AI's role in the future is both promising and concerning.
    The preservation of identity is a central theme in his work.
    Digital consciousness raises questions about the soul.
    Speculative fiction serves as a warning for future challenges.
    Writing is a means to find meaning and purpose in life.
    AI can mimic creativity but lacks true human experience.
    Conversations about technology are crucial for shaping our future.

    Chapters:

    00:00 Introduction to cryonic technologies
    07:23 Journey from Military to Writing
    10:27 The Inspiration Behind 'Cryonic Dreams'
    15:40 Exploring AI and Cryonics in Fiction
    18:20 Digital Consciousness vs. Cryonics
    21:57 The Complexity of Intelligence and Consciousness
    24:61 AI's Role in the Future
    28:56 Speculative Futures and Human Progress
    34:46 The Purpose of Writing and Human Motivation

    Hyperlinks:

    Amazon: Cryonic Dreams: Awakening book
    John Rodriguez Carlos LinkedIn
    Article on Tomorrow: How Many People Are Currently Cryonically Preserved?
    Paper: In the End We Become Our Avatars: An Exploration of Artificial Intelligence and Digital Afterlives
    Anastassia Lauterbach - LinkedIn
    First Public Reading, Romy, Roby and the Secrets of Sleep (1/3)
    First Public Reading, Romy, Roby and the Secrets of Sleep (2/3)
    First Public Reading, Romy, Roby and the Secrets of Sleep (3/3)
    AI Snacks with Romy and Roby
    @romyandroby
    “Leading Through Disruption”
    AI Edutainment
    The AI Imperative Book
    Romy & Roby Book
    Substack
    39 mins
  • 60: LLMs That Reason: Knowledge Graphs, Ontologies, and the Future of AI with Joe Miller
    Feb 3 2026
    Summary:

    Anastassia and Joseph Miller delve into the complexities of artificial intelligence, particularly focusing on the limitations of large language models (LLMs) and the importance of embedding causality and reasoning into AI systems. Joseph critiques the current transformer model architecture, explaining how it lacks a true understanding of causality, which is essential for meaningful interactions. He emphasizes that while LLMs can generate convincing language, they lack a world model that would enable them to reason or understand the implications of their outputs. This leads to discussions on the necessity of ontologies and knowledge graphs to provide a structured understanding of the world, enabling AI to operate more effectively in real-world contexts.

    The conversation also touches on the future of AI in the workplace, with Joseph expressing a somewhat pessimistic view of labor disruption from AI advancements. He believes that while AI can enhance productivity, it may also lead to significant job losses, as many roles could be automated. However, he remains hopeful about the potential for humans and AI to work together, emphasizing the need for accountability and responsibility in AI applications. The discussion concludes with reflections on the importance of AI literacy and the potential for a future in which humans and AI coexist harmoniously, leveraging each other's strengths.

    Joseph (Joe) Miller, PhD, is a physicist, scientist, and serial entrepreneur who serves as Co‑Founder and Chief AI Officer at Vivun, where he builds AI sales agents that embed expert domain knowledge into real‑world workflows. Before Vivun, he worked at Bridgewater Associates on expert systems for systematic decision‑making and later founded Battery CI, a quantitative FX hedge fund, and co‑founded other tech ventures at the intersection of AI, finance, and digital identity. Across his roles, Miller focuses on causal inference, world models, and knowledge‑centric AI, translating deep technical ideas into practical systems for high‑stakes enterprise environments like sales, trading, and strategic decision‑making.

    Takeaways:

    Judea Pearl’s “The Book of Why” is a must-read for understanding the foundations of causality and what current AI systems lack.
    LLMs lack a true understanding of causality.
    Embedding ontologies can enhance AI's reasoning capabilities.
    AI's productivity gains may lead to significant job disruption.
    Humans must remain accountable for AI's decisions.
    AI makers will be liable for product issues in AI services and applications.
    AI literacy is crucial for navigating future challenges.

    Chapters:

    00:00 Introduction to the episode: Looking into AI and reasoning LLMs
    03:11 Discussing two books: “Nexus” and “The Book of Why”
    07:36 Limitations of Large Language Models today
    14:50 Embedding Context with Ontologies and Knowledge Graphs into LLMs
    18:31 The Convergence of AI Approaches as a possible path to a reasoning AI
    20:52 Defining Ontologies and Knowledge Graphs
    25:45 Innovation Through Interdisciplinary Knowledge in AI as a necessity
    30:04 Dynamic Learning in LLMs
    34:15 ‘World Models’ and Their Impact in AI
    35:14 The Future of AI and Accountability, AI Ethics
    40:03 Human-AI Collaboration in the Workplace
    47:06 The Importance of AI Literacy

    Hyperlinks:

    Joe Miller and Vivun/ AI in sales:
    LinkedIn profile
    MiraCosta Alumni blog post
    Vivun website
    AI adoption in companies
    Statistics about AI in sales

    Anastassia:
    Anastassia Lauterbach - LinkedIn
    First Public Reading, Romy, Roby and the Secrets of Sleep (1/3)
    First Public Reading, Romy, Roby and the Secrets of Sleep (2/3)
    First Public Reading, Romy, Roby and the Secrets of Sleep (3/3)
    AI Snacks with Romy and Roby
    @romyandroby
    “Leading Through Disruption”
    AI Edutainment
    The AI Imperative Book
    Romy & Roby Book
    Substack
    50 mins