AI Snacks With Romy & Roby: Democratizing AI Technologies

By: Dr. Anastassia Lauterbach: Democratizing AI Expert

AI Snacks with Romy & Roby is a podcast that translates AI and robotics technologies from complex scientific concepts into easy-to-understand discussions, making them accessible for teens, parents, teachers, and anyone curious about AI. Through real-world stories and expert interviews, the show is dedicated to democratizing AI knowledge and empowering the general population to understand how AI is developed and applied in everyday life. The podcast is part of the Romy & Roby and AI Edutainment universe.

Copyright 2024
Episodes
  • 67: Confidential AI, Speech Recognition, and Why AI Literacy Starts with Teachers with Giorgio Natili
    Mar 24 2026

    Summary:


    In this episode, Anastassia and Giorgio Natili discuss the importance of AI literacy, the evolution of speech recognition technology, and the challenges of ensuring data privacy and sovereignty in AI applications. They explore the concept of confidential AI, the need for responsible usage in education, and the future aspirations for AI explainability and funding allocation. The conversation emphasizes the necessity of understanding AI's limitations and the ethical implications of its deployment in various sectors.

    Giorgio Natili is an engineering leader, author, and community figure with over 20 years of experience in software engineering and technological innovation. He is currently Head of AI Engineering at Oracle Cloud, and was previously Vice President and Head of Engineering at Opaque Systems, where he worked on confidential AI and secure data analytics platforms. Earlier in his career, he was Head of Engineering for Firefox at Mozilla, Director of Software Engineering at Capital One, and a Software Development Manager at Amazon. He is also known for founding GNStudio, a Rome-based development studio, and for his work as a W3C member, author, and educator.

    In addition to his achievements in technology, Giorgio is an advocate for diversity, inclusion, and ethical leadership, and he has also spoken about his past as a professional windsurfer and DJ, emphasizing the human side of leadership.


    Takeaways:


    AI literacy is crucial for understanding the complexities of technology.

    Speech recognition has evolved significantly, but still faces challenges.

    Accents and environmental factors greatly impact transcription accuracy.

    Confidential AI focuses on maintaining data privacy and sovereignty.

    AI does not possess human-like understanding or reasoning capabilities.

    Responsible usage of AI is essential for protecting sensitive data.

    Prompt engineering can enhance the effectiveness of AI tools.

    AI can provide personalized learning experiences for students.

    Explainability in AI is necessary for safe and effective use.

    Funding for AI should prioritize explainability and safety over mere scaling.


    Chapters:

    

    0:00 Introduction to the episode: Who is our guest, and what will we learn today?

    1:54 Explainer on AI Literacy

    2:27 History of Speech Recognition

    3:22 Challenges in Speech-to-Text Technology

    7:26 Data and Model Limitations

    13:15 Confidential AI and Data Sovereignty concepts

    26:18 AI in Education and Responsible Usage

    39:02 Future of AI and Explainability



    36 mins
  • 66: The EU AI Act Uncovered: Law, Ethics & Europe's Bet on Responsible AI with Gabriela Bar
    Mar 17 2026
    Summary:


    Gabriela Bar, a legal expert specializing in AI law and ethics, talks about how AI is shaping legal frameworks, societal perceptions, and technological innovations, especially within Europe and Poland. She discusses the importance of responsible AI governance, the evolving legal landscape, and the societal implications of AI deployment at scale. The discussion with Anastassia touches on the compliance costs of implementing the EU AI Act, approaches to introducing national LLMs, and what constitutes responsible AI.

    Gabriela Bar is a prominent legal expert specializing in technology and artificial intelligence law, based in Poland. She has over 20 years of experience and is the founder of the Gabriela Bar Law & AI firm, serving as a legal and ethics advisor for EU technology projects focused on AI, digital law, and compliance with regulations such as the EU AI Act and GDPR. She is recognized among the TOP100 Women in AI in Poland and Forbes' 25 Women in Business Law, and is active in several international organizations dedicated to technology, digital ethics, and law. She frequently lectures at universities, publishes on AI law and ethics, and advises technology companies and research consortia on responsible and practical AI innovation.


    Key Topics:


    Gabriela's journey from technology law to AI ethics and her ongoing work within European AI regulation.

    The rapid growth of AI adoption in Polish businesses and public sector initiatives for language models.

    The challenges and opportunities of implementing responsible AI, including transparency, accountability, and bias mitigation.

    The role of AI legislation, with a focus on the European AI Act, regulatory costs, and how it balances innovation with safeguards.

    The global landscape of AI regulation, contrasting the EU's comprehensive approach with the US's decentralized system.

    Technical limitations of deep learning models, explainability, and the importance of aligning AI development with ethical principles.

    The future of AI in cybersecurity, digital personas, and the geopolitics of AI competitiveness among the US, EU, and China.


    Chapters:


    00:04 Introduction to Gabriela and AI in Poland

    02:55 How Gabriela transitioned from traditional law to technology and AI

    04:03 Cultural portrayals of AI and public perceptions influenced by movies and literature

    07:49 Misinformation and misconceptions about AI technology today

    09:17 The private sector's role in AI development and application in Poland

    10:54 Demographic challenges in Poland and AI's potential role in mitigating them

    13:45 Political and regulatory gaps in AI, and the importance of cross-sector integration

    15:38 The absence of national LLMs in languages like Japanese; success stories from other countries

    18:01 Foundations of responsible and ethical AI: core principles and risk management

    21:51 Data quality, biases, and ongoing governance in AI lifecycle management

    22:53 The flaws in deep learning transparency and the necessity for cautious regulation

    29:34 Legal accountability, the role of audits, and fairness in AI systems

    33:34 The evolving landscape of AI litigation and insurance implications

    36:14 Regulatory costs for AI companies and the competitive landscape in Europe

    39:03 The scope of the European AI Act and its impacts on high-risk sectors

    42:49 Cybersecurity risks involving AI, criminal misuse, and the importance of legal safeguards

    44:08 Europe's strategic imperative in AI sovereignty amid the global technology race

    46:39 The contrasting regulatory systems of the US and China and their influence on innovation

    51:17 The emerging need for regulation of digital personas and synthetic media

    51:35 Wrapping up: key takeaways and the importance of dialogue between tech developers, policymakers, and society


    Resources & Links:

    Gabriela Bar - LinkedIn | Twitter

    Anastassia Lauterbach - LinkedIn

    @romyandroby

    “Leading Through Disruption”

    AI Edutainment

    Romy & Roby Book
    46 mins
  • 65: From Narrow AI to AGI - Breakthroughs, Limits, and Sense of Purpose in AIs with Dr. Craig Kaplan
    Mar 10 2026
    Summary:


    Anastassia and Dr. Craig Kaplan delve into the complexities of artificial general intelligence (AGI) and the evolving landscape of AI technologies. Craig emphasizes the importance of defining AGI as an AI capable of performing any cognitive task as well as an average human, highlighting the challenges of achieving true general intelligence beyond narrow applications. They discuss the historical context of AI development, the shift from symbolic AI to machine learning, and the potential of collective intelligence as a more effective approach to building AGI. Craig advocates for a community of models rather than a single monolithic AI, suggesting that this could lead to safer and more ethical AI systems that reflect diverse human values.

    The conversation also touches on the limitations of current AI systems, particularly their lack of understanding of causality and reasoning. Craig argues that while AI might develop its own sense of purpose, it is crucial to instill positive human values early on to guide its development. The discussion concludes by emphasizing the importance of AI literacy and critical thinking, noting that human behavior and values will significantly shape the future of AI and its impact on society.

    Craig A. Kaplan is an artificial general intelligence (AGI) expert and entrepreneur who focuses on collective intelligence, safe superintelligence, and practical strategies for aligning advanced AI with human values and goals. He has founded and led multiple AI-related ventures, including iQ Company, which develops AI systems to enhance human decision-making, and previously PredictWallStreet, an early crowdsourced stock prediction platform. He speaks and writes about how to safely build and govern increasingly powerful AI systems.


    Takeaways:


    AGI is defined as AI that can perform any cognitive task as well as an average human.

    The shift from symbolic AI to machine learning in the 1960s and 1970s, together with big data and powerful semiconductors later on, enabled today's AI revolution.

    Collective intelligence may offer a safer and more effective path to AGI, and this includes the development of individual LLMs and models based on the values and perspectives of individual humans.

    Current AI systems lack an understanding of causality and reasoning.

    AI will develop its own sense of purpose, but early values are crucial.

    AI literacy is imperative to build safe, transparent, and beneficial AI.


    Chapters:


    00:00 Introduction to the episode: Researching Artificial General Intelligence (AGI) and the work of Dr. Craig Kaplan

    02:06 Introduction to AGI and AI Definitions

    04:16 The Evolution of AI: From Symbolic to Machine Learning

    07:02 The Limitations of Current AI Systems

    14:01 Causality and Reasoning in AI

    19:38 The Collective Intelligence Approach to AGI

    26:46 The Future of AI: Transparency and Collaboration

    28:37 The Purpose of AI Collectives

    29:25 Utopia vs. Reality in AI Development

    30:49 The Risks of AI: Understanding P-Doom

    32:16 Human Values vs. AI Intelligence

    35:09 Fusing Humanities with AI Engineering

    37:40 The Role of Human Responsibility in AI

    40:22 The Evolution of AI Values

    44:59 The Bell Curve of Society and AI's Reflection

    47:42 Education and AI: Building a Better Future

    49:38 The Necessity of AI Literacy and Critical Thinking


    Hyperlinks:

    LinkedIn profile

    Orcid profile

    Anastassia Lauterbach - LinkedIn

    First Public Reading, Romy, Roby and the Secrets of Sleep (1/3)

    First Public Reading, Romy, Roby and the Secrets of Sleep (2/3)

    First Public Reading, Romy, Roby and the Secrets of Sleep (3/3)

    AI Snacks with Romy and Roby

    @romyandroby

    “Leading Through Disruption”

    AI Edutainment

    The AI Imperative Book

    Romy & Roby Book

    Substack
    49 mins