
Artificial Developer Intelligence


By: Shimin Zhang, Dan Lasky & Rahul Yadav

Three engineer friends argue about AI so you don't have to. Shimin Zhang, Dan Lasky, and Rahul Yadav are working developers who've been watching AI transform their profession in real time, and they've got opinions on the robot takeover. Every week the three get together to riff on the latest AI news, geek out over research papers, roast each other's tool choices, and occasionally have an existential crisis about whether the craft is dying or just getting weird.

What you're signing up for:

• AI news without the LinkedIn cringe: model drops, acquisitions, open-source drama, and the other stuff that actually matters if you write code for a living.
• Technique corner: real tips from the trenches — spec-driven development, multi-agent orchestration, Claude.md tricks, and all the ways they've wasted hours so you don't have to.
• Two Minutes to Midnight: the show's running AI bubble tracker, complete with circular funding diagrams, hyperscaler CAPEX math, and a doomsday clock they keep arguing about moving.
• Deep dives that (occasionally) go deep: hallucination neurons, agentic memory, workflow automation economics, and the LLM architecture papers nobody else is covering because they're hard.
• Dan's Rant: Dan frequently gets mad about things. It's a whole thing.
• The feelings segment: Yes, Shimin reads Tennyson on a tech podcast. Yes, Rahul wrote an AI-generated country song. No, they're not sorry.

Three friends with strong opinions, questionable metaphors, and genuine love for the craft they're also mourning. If you want to understand AI deeply, use it without embarrassing yourself, and laugh at the absurdity of it all, pull up a chair.
Episodes
  • Ep 19: Thinking Fast Slow and Artificial, Meta's Trouble with Rogue Agents, and FOMO in the Age of AI
    Mar 27 2026

    This week, Rahul, Shimin, and Dan cover Claude Code's new channels and scheduling features, a Meta security incident caused by AI-generated advice, Anthropic's survey of 81,000 people on AI expectations, Dan's vibe-coded vector memory CLI project, a deep dive on the paper "Thinking, Fast, Slow and Artificial" about cognitive surrender to AI, a rant about AI tokens as employee compensation, and bubble watch updates including NVIDIA's trillion-dollar demand projections and OpenAI shutting down Sora.

    Takeaways:

    • Claude Code is rapidly absorbing community-developed workflows — the moat may only be in the general model capabilities, not tooling
    • The Meta incident illustrates the emerging pattern of AI-caused production incidents and the need for process guardrails around agent usage
    • Cognitive surrender to AI creates a widening gap: those with high need-for-cognition benefit more while those who dislike effortful thinking defer even more
    • AI confidence inflation (12 percentage point boost) may stem from treating AI like authoritative reference material (encyclopedias, Wikipedia)
    • Historical technology resistance (Socrates on writing, farmers on tractors) suggests the battle against AI adoption may already be lost
    • OpenAI shutting Sora just 4 months after a 3-year Disney partnership signals deeper financial or strategic issues

    Resources Mentioned
    Push events into a running session with channels
    Perhaps not Boring Technology after all
    Meta is having trouble with rogue AI agents
    What 81,000 people want from AI
    Dan's vec-memory-cli
    Thinking—Fast, Slow, and Artificial
    Are AI tokens the new signing bonus or just a cost of doing business?
    Jensen Huang just put Nvidia’s Blackwell and Vera Rubin sales projections into the $1 trillion stratosphere
    Accelerated FOMO in the Age of AI
    OpenAI shutters AI video generator Sora in abrupt announcement

    Connect with ADIPod

    • Email us at humans@adipod.ai if you have any feedback, requests, or just want to say hello!
    • Check out our website at www.adipod.ai
    1 hr and 13 mins
  • Ep 18: 8 Levels of AI Engineering, Meta AI Delays, and LLM Neuroanatomy
    Mar 20 2026

    This week, Dan, Shimin & Rahul cover Meta's struggles with its delayed "Avocado" AI model and potential Gemini licensing, NVIDIA's enterprise-ready NemoClaw fork of OpenClaw, SWE-bench analysis showing PRs wouldn't pass human review, prompting superstitions and developer identity, the 8 levels of agentic engineering, mainstream media framing of AI coding, legal liability for agent-written code, and a deep dive into LLM neuroanatomy where a researcher topped leaderboards by repeating model layers without changing weights.

    Takeaways:

    • Meta may end up licensing Gemini despite massive AI investment — mirroring Apple's path
    • SWE-bench failures were mostly code quality, not functionality — suggesting good enough may be good enough with proper agents.md
    • A coworker analyzed 4.5 years of PRs to create a personalized coding style document for AI priming
    • The claw/agent paradigm may be the fastest software-paradigm adoption cycle ever
    • Legal frameworks and insurance haven't caught up to agent-written code shipping to production
    • Repeating later model layers (the "thinking" layers) can boost performance without fine-tuning — raising questions about whether chain-of-thought reasoning is essentially exercising these layers repeatedly
    • Developers compared to ancient Egyptian scribes — language literacy as leverage

    Resources Mentioned
    Meta Delays Rollout of New A.I. Model After Performance Concerns
    NVIDIA NemoClaw
    Research note: Many SWE-bench-Passing PRs Would Not Be Merged into Main
    The Collective Superstitions of People Who Talk to Machines
    The 8 Levels of Agentic Engineering
    Coding After Coders: The End of Computer Programming as We Know It
    Built by Agents, Tested by Agents, Trusted by Whom?
    LLM Neuroanatomy: How I Topped the LLM Leaderboard Without Changing a Single Weight

    Chapters

    • (00:00) - Introduction to AI in Software Development
    • (02:42) - Meta's AI Model Delays and Market Position
    • (09:51) - NVIDIA's New AI Developments
    • (13:58) - Benchmarking AI Models and Code Quality
    • (19:00) - Techniques Corner: AI Prompting and Creativity
    • (22:56) - The Evolution of Coding and Creativity
    • (28:46) - Levels of Agentic Engineering
    • (34:58) - Mainstream Perspectives on AI and Software Development
    • (43:00) - Trusting AI-Generated Code
    • (44:40) - Metrics for Success in Autonomous Teams
    • (46:59) - Legal and Ethical Implications of Autonomous Code
    • (50:21) - Innovations in Language Model Architectures
    • (01:01:02) - User Experience Challenges in Tech Development
    • (01:03:47) - Market Predictions and Financial Insights

    Connect with ADIPod
    • Email us at humans@adipod.ai if you have any feedback, requests, or just want to say hello!
    • Check out our website at www.adipod.ai
    1 hr and 8 mins
  • Ep 17: Slop Garbage Collection, Cleanroom Rewrites, and Will Claude Ruin our Teams?
    Mar 13 2026

    In this episode, Dan and Shimin follow up on the Anthropic Pentagon drama (supply chain risk designation, lawsuit, and big tech backing Anthropic), open-source licensing controversy around AI-generated clean room rewrites, team dynamics in the age of AI coding tools, OpenAI's harness engineering blog post, two vibe-and-tell segments (Dan building custom Arch Linux images for a TuringPie cluster board, Shimin building FlatterProof — an AI sycophancy training app), and a bubble clock update driven by Oracle job cuts and AWS AI-related downtime.

    Takeaways

    • AI as a force multiplier for team culture: good teams move faster, bad teams explode faster
    • Prompt debt is now a real concern alongside technical debt — agents.md files rot just like code
    • Code garbage collection (periodic AI-driven cleanup) is emerging as a best practice
    • Cross-functional pair programming with AI (PM + engineer) represents a bright future for team collaboration
    • Senior engineers now required to sign off on AI-assisted changes at Amazon, but review fatigue is unsustainable
    • AI-generated SVG icons are surprisingly good and practical for real projects
    • Oracle may be the canary in the coal mine for the AI bubble, not the frontier labs themselves

    Resources Mentioned
    Pentagon Refuses to Say If AI Was Used to Select Elementary School as Bombing Target
    Anthropic sues to block Pentagon blacklisting over AI use restrictions
    Alibaba’s Qwen tech lead steps down after major AI push
    Did Alibaba just kneecap its powerful Qwen AI team? Key figures depart in wake of latest open source release
    Can coding agents relicense open source through a “clean room” implementation of code?
    GNU and the AI reimplementations
    Will Claude Code ruin our team?
    Harness engineering: leveraging Codex in an agent-first world
    Oracle plans thousands of job cuts as data center costs rise, Bloomberg News reports
    After outages, Amazon to make senior engineers sign off on AI-assisted changes

    Chapters

    • (00:00) - Introduction
    • (02:50) - Anthropic and Pentagon Drama
    • (05:01) - Alibaba's Qwen Development Team Changes
    • (07:34) - Open Source Drama with CharDat Library
    • (24:08) - The Impact of AI on Team Dynamics
    • (29:15) - Harness Engineering and Codex in AI Development
    • (31:39) - Empowering Agents with Tools
    • (34:06) - The Importance of Documentation
    • (36:26) - Architectural Boundaries and Testing
    • (38:29) - Innovative Projects and Personal Experiments
    • (46:56) - FlatterProof: Combating AI Sycophancy
    • (54:27) - The AI Bubble Clock: Current State of Affairs
    • (01:02:29) - ADI Intro.mp4

    Connect with ADIPod
    • Email us at humans@adipod.ai if you have any feedback, requests, or just want to say hello!
    • Check out our website at www.adipod.ai
    1 hr and 3 mins