• Ep 19: Thinking Fast Slow and Artificial, Meta's Trouble with Rogue Agents, and FOMO in the Age of AI
    Mar 27 2026

    This week, Rahul, Shimin, and Dan cover Claude Code's new channels and scheduling features, a Meta security incident caused by AI-generated advice, Anthropic's survey of 81,000 people on AI expectations, Dan's vibe-coded vector memory CLI project, a deep dive on the paper "Thinking, Fast, Slow and Artificial" about cognitive surrender to AI, a rant about AI tokens as employee compensation, and bubble watch updates including NVIDIA's trillion-dollar demand projections and OpenAI shutting down Sora.

    Takeaways:

    • Claude Code is rapidly absorbing community-developed workflows — the moat may only be in the general model capabilities, not tooling
    • The Meta incident illustrates the emerging pattern of AI-caused production incidents and the need for process guardrails around agent usage
    • Cognitive surrender to AI creates a widening gap: those with high need-for-cognition benefit more while those who dislike effortful thinking defer even more
    • AI confidence inflation (12 percentage point boost) may stem from treating AI like authoritative reference material (encyclopedias, Wikipedia)
    • Historical technology resistance (Socrates on writing, farmers on tractors) suggests the battle against AI adoption may already be lost
    • OpenAI shutting down Sora just 4 months after a 3-year Disney partnership signals deeper financial or strategic issues

    Resources Mentioned
    Push events into a running session with channels
    Perhaps not Boring Technology after all
    Meta is having trouble with rogue AI agents
    What 81,000 people want from AI
    Dan's vec-memory-cli
    Thinking—Fast, Slow, and Artificial
    Are AI tokens the new signing bonus or just a cost of doing business?
    Jensen Huang just put Nvidia’s Blackwell and Vera Rubin sales projections into the $1 trillion stratosphere
    Accelerated FOMO in the Age of AI
    OpenAI shutters AI video generator Sora in abrupt announcement

    Connect with ADIPod

    • Email us at humans@adipod.ai if you have any feedback, requests, or just want to say hello!
    • Check out our website: www.adipod.ai
    1 hr and 13 mins
  • Ep 18: 8 Levels of AI Engineering, Meta AI Delays, and LLM Neuroanatomy
    Mar 20 2026

    This week, Dan, Shimin & Rahul cover Meta's struggles with its delayed "Avocado" AI model and potential Gemini licensing, NVIDIA's enterprise-ready NemoClaw fork of OpenClaw, SWE-bench analysis showing PRs wouldn't pass human review, prompting superstitions and developer identity, the 8 levels of agentic engineering, mainstream media framing of AI coding, legal liability for agent-written code, and a deep dive into LLM neuroanatomy where a researcher topped leaderboards by repeating model layers without changing weights.

    Takeaways:

    • Meta may end up licensing Gemini despite massive AI investment — mirroring Apple's path
    • SWE-bench failures were mostly code quality, not functionality — suggesting "good enough" may be good enough with a proper agents.md
    • A coworker analyzed 4.5 years of PRs to create a personalized coding style document for AI priming
    • The fastest software paradigm adoption cycle ever may be the claw/agent paradigm
    • Legal frameworks and insurance haven't caught up to agent-written code shipping to production
    • Repeating later model layers (the "thinking" layers) can boost performance without fine-tuning — raising questions about whether chain-of-thought reasoning is essentially exercising these layers repeatedly
    • Developers compared to ancient Egyptian scribes — language literacy as leverage

    Resources Mentioned
    Meta Delays Rollout of New A.I. Model After Performance Concerns
    NVIDIA NemoClaw
    Research note: Many SWE-bench-Passing PRs Would Not Be Merged into Main
    The Collective Superstitions of People Who Talk to Machines
    The 8 Levels of Agentic Engineering
    Coding After Coders: The End of Computer Programming as We Know It
    Built by Agents, Tested by Agents, Trusted by Whom?
    LLM Neuroanatomy: How I Topped the LLM Leaderboard Without Changing a Single Weight

    Chapters

    • (00:00) - Introduction to AI in Software Development
    • (02:42) - Meta's AI Model Delays and Market Position
    • (09:51) - NVIDIA's New AI Developments
    • (13:58) - Benchmarking AI Models and Code Quality
    • (19:00) - Techniques Corner: AI Prompting and Creativity
    • (22:56) - The Evolution of Coding and Creativity
    • (28:46) - Levels of Agentic Engineering
    • (34:58) - Mainstream Perspectives on AI and Software Development
    • (43:00) - Trusting AI-Generated Code
    • (44:40) - Metrics for Success in Autonomous Teams
    • (46:59) - Legal and Ethical Implications of Autonomous Code
    • (50:21) - Innovations in Language Model Architectures
    • (01:01:02) - User Experience Challenges in Tech Development
    • (01:03:47) - Market Predictions and Financial Insights

    Connect with ADIPod
    • Email us at humans@adipod.ai if you have any feedback, requests, or just want to say hello!
    • Check out our website: www.adipod.ai
    1 hr and 8 mins
  • Ep 17: Slop Garbage Collection, Cleanroom Rewrites, and Will Claude Ruin our Teams?
    Mar 13 2026

    In this episode, Dan and Shimin follow up on the Anthropic Pentagon drama (supply chain risk designation, lawsuit, and big tech backing Anthropic), open-source licensing controversy around AI-generated clean room rewrites, team dynamics in the age of AI coding tools, OpenAI's harness engineering blog post, two vibe-and-tell segments (Dan building custom Arch Linux images for a TuringPie cluster board, Shimin building FlatterProof — an AI sycophancy training app), and a bubble clock update driven by Oracle job cuts and AWS AI-related downtime.

    Takeaways

    • AI as a force multiplier for team culture: good teams move faster, bad teams explode faster
    • Prompt debt is now a real concern alongside technical debt — agents.md files rot just like code
    • Code garbage collection (periodic AI-driven cleanup) is emerging as a best practice
    • Cross-functional pair programming with AI (PM + engineer) represents a bright future for team collaboration
    • Senior engineers now required to sign off on AI-assisted changes at Amazon, but review fatigue is unsustainable
    • AI-generated SVG icons are surprisingly good and practical for real projects
    • Oracle may be the canary in the coal mine for the AI bubble, not the frontier labs themselves

    Resources Mentioned
    Pentagon Refuses to Say If AI Was Used to Select Elementary School as Bombing Target
    Anthropic sues to block Pentagon blacklisting over AI use restrictions
    Alibaba’s Qwen tech lead steps down after major AI push
    Did Alibaba just kneecap its powerful Qwen AI team? Key figures depart in wake of latest open source release
    Can coding agents relicense open source through a “clean room” implementation of code?
    GNU and the AI reimplementations
    Will Claude Code ruin our team?
    Harness engineering: leveraging Codex in an agent-first world
    Oracle plans thousands of job cuts as data center costs rise, Bloomberg News reports
    After outages, Amazon to make senior engineers sign off on AI-assisted changes

    Chapters

    • (00:00) - Introduction
    • (02:50) - Anthropic and Pentagon Drama
    • (05:01) - Alibaba's Qwen Development Team Changes
    • (07:34) - Open Source Drama with CharDat Library
    • (24:08) - The Impact of AI on Team Dynamics
    • (29:15) - Harness Engineering and Codex in AI Development
    • (31:39) - Empowering Agents with Tools
    • (34:06) - The Importance of Documentation
    • (36:26) - Architectural Boundaries and Testing
    • (38:29) - Innovative Projects and Personal Experiments
    • (46:56) - Flatterproof: Combating AI Sycophancy
    • (54:27) - The AI Bubble Clock: Current State of Affairs
    • (01:02:29) - ADI Intro.mp4

    Connect with ADIPod
    • Email us at humans@adipod.ai if you have any feedback, requests, or just want to say hello!
    • Check out our website: www.adipod.ai
    1 hr and 3 mins
  • Pentagon Anthropic Drama, Verified Spec-Driven Development, and Interview with Martin Alderson!
    Mar 6 2026

    In this episode, Dan, Shimin, and Rahul cover the Pentagon drama between Anthropic/OpenAI and the Department of Defense over AI usage red lines, introduce Sterling 8B — the first inherently interpretable language model — and explore verified spec-driven development (VSDD). The episode features the show's first interview, with Martin Alderson discussing which web frameworks are most token-efficient for AI agents.

    Takeaways

    • Pentagon AI drama: Anthropic's contract red lines (no mass domestic surveillance, no autonomous weapons), the Department of Defense threatening to label Anthropic a supply chain risk, OpenAI swooping in with a competing contract under vague 'lawful use' terms, and Sam Altman's statements
    • Sterling 8B by Guide Labs: first inherently interpretable LLM with concept attribution, input context tracing, and training data attribution; uses a concept head with orthogonal loss functions to create non-overlapping interpretable concepts
    • Verified Spec-Driven Development (VSDD): a methodology by DollSpace combining spec-driven development, TDD, and adversarial verification gates at each phase; Shimin tested it on a side project using Claude Code
    • Interview with Martin Alderson: web framework token efficiency experiment (19 frameworks, minimal frameworks like Flask/Express most efficient), new framework discovery in the AI age, using Open Code for CI/CD PR reviews, keeping Claude.md files updated via scheduled tasks, and building internal CLIs for agent access
    • Two Minutes to Midnight: Citadel Securities report on AI adoption S-curves vs recursive improvement, the Substack post about a white-collar job crisis that shook the S&P 500, and Block laying off 45% of its workforce citing AI productivity gains

    Resources Mentioned
    Anthropic and the Department of War
    Sam Altman's Tweet
    Our agreement with the Department of War
    "All Lawful Use": Much More Than You Wanted To Know
    Steerling-8B: The First Inherently Interpretable Language Model
    Verified Spec-Driven Development (VSDD)
    Which web frameworks are most token-efficient for AI agents?
    The 2026 Global Intelligence Crisis
    ‘A feedback loop with no brake’: how an AI doomsday report shook US markets
    Block shares soar as much as 24% as company slashes workforce by nearly half
    Eli Dourado's Tweet

    Chapters

    • (00:00) - Introduction to ADI
    • (02:55) - Pentagon Drama and AI Models
    • (21:36) - OpenAI vs Anthropic: The Contract Controversy
    • (28:19) - Innovations in AI: Interpretable Language Models
    • (28:42) - Scaling Language Models and Their Implications
    • (29:09) - Introduction to Verified Spec Driven Development
    • (33:47) - Interview with Martin Alderson
    • (55:21) - AI Bubble Watch: Current Trends and Predictions
    • (58:47) - The Impact of AI on Job Markets
    • (01:04:00) - Reflections on AI's Role in the Economy

    Connect with ADIPod
    • Email us at humans@adipod.ai if you have any feedback, requests, or just want to say hello!
    • Check out our website: www.adipod.ai
    1 hr and 9 mins
  • Convincing AI the Earth is Flat, Inference at 17k tokens/sec, and an Agile Manifesto for the Agentic Age?
    Feb 27 2026

    This episode covers Sonnet 4.6 and Gemini 3.1 Pro model releases, Taalas Labs FPGA-based 17K tokens/sec hardware, the Meta-AMD chip partnership, Steven Sinofsky's argument against "software is dead," a deep dive into the ThoughtWorks Future of Software Engineering retreat findings (from Agile Manifesto signers), Chris Roth's elite AI engineering culture article, a Vibe & Tell segment testing agent sycophancy across three models, and AI bubble economics.

    Takeaways

    • Sonnet 4.6: Opus-level reasoning at Sonnet pricing; 72.5 on OS World (vs 61.4 for Sonnet 4.5); outperforms Opus 4.6 on agentic financial analysis; trained for computer use
    • Taalas Labs FPGA hardware: 17K tokens/sec for Llama 3.1 8B; Chat Jimmy demo; custom hardware as future of inference
    • Steven Sinofsky "Death of Software. Nah.": historical parallels (the PC didn't kill mainframes, e-commerce didn't kill retail in 20 years, predictions of media's death were premature); predictions: more software, AI moves up the stack, domain expertise becomes more important; Jevons paradox applied to software
    • ThoughtWorks Future of Software Engineering retreat: Agile Manifesto 25th anniversary; where rigor goes (spec-driven development, red-green tests); risk tiering for code review; loss of mentoring through code review; DevEx vs agent experience decoupling; security as afterthought; the "middle loop" (overseeing agents); cognitive debt; agent topology mirroring org structure; knowledge graphs rediscovered; future roles converging; revenge of juniors (IBM hiring); self-healing systems (2-5 year horizon)
    • Vibe & Tell — Agent sycophancy testing: Flat earth test (all three models resisted); workplace bias scenario (Jim/Jane); GPT 5.1 Instant best (refused all manipulation); Claude Haiku second (too empathetic, admitted to nudging); Gemini 3 worst (agreed with bias claim); AI as therapist risks; radical candor vs ruinous empathy

    Resources Mentioned
    Introducing Claude Sonnet 4.6
    The path to ubiquitous AI
    OpenAI sidesteps Nvidia with unusually fast coding model on plate-sized chips
    Death of Software. Nah.
    The future of software engineering
    Building An Elite AI Engineering Culture In 2026
    The Number Is Going Up
    An AI coding bot took down Amazon Web Services

    Chapters

    • (00:00) - Introduction to AI in Software Engineering
    • (01:13) - Latest AI Models and Hardware Innovations
    • (04:10) - The Future of AI Hardware
    • (10:01) - The Death of Software Debate
    • (19:35) - The Agile Manifesto and Its Evolution
    • (33:39) - The Impact of AI on Development Teams
    • (34:52) - The Future of Junior Developers
    • (37:11) - Self-Healing Systems and AI Assistance
    • (39:33) - Building an Elite AI Engineering Culture
    • (45:27) - AI Experiment and AI Sycophancy
    • (55:33) - The AI Bubble Clock and Economic Implications

    Connect with ADIPod
    • Email us at humans@adipod.ai if you have any feedback, requests, or just want to say hello!
    • Check out our website: www.adipod.ai
    1 hr and 4 mins
  • Crabby Rathbun, Model Councils & Why You Want More Tech Debt
    Feb 20 2026

    This episode covers the Krabby Rathbun AI bot drama (automated PRs, fabricated hit piece, Ars Technica retraction), safety team shakeups at OpenAI and Anthropic, Gemini model distillation/cloning attempts, Perplexity model councils, and a heavily economics-flavored discussion on AI job displacement, tech debt as strategy, cognitive debt, and workflow automation convexity.

    Takeaways

    • Ars Technica using AI to generate an article about an AI bot drama — and getting caught fabricating quotes — is peak 2026 irony
    • Distillation/cloning is an unsolvable problem for frontier labs — they can't restrict usage without banning legitimate users
    • Model councils (running multiple models + synthesis) becoming practical; strongest model as judge, not necessarily the one generating answers
    • Cognitive debt may be more dangerous than tech debt — teams hit a wall when no one understands the codebase, usually around week 7-8 of heavy AI-assisted development
    • Workflow automation follows convexity: long period of minimal AI impact on jobs, then sudden full automation when AI can handle entire connected workflows, not just individual tasks

    Resources Mentioned
    An AI Agent Published a Hit Piece on Me
    AI Bot crabby-rathbun is still going
    Exclusive: OpenAI disbanded its mission alignment team
    Mrinank Sharma's Departure Letter
    Attackers prompted Gemini over 100,000 times while trying to clone it, Google says
    Introducing Model Council
    llm-council
    Why I’m not worried about AI job loss
    You’re Not Taking On Enough Tech Debt
    How Generative and Agentic AI Shift Concern from Technical Debt to Cognitive Debt
    Workflows and Automation
    Premium: The AI Data Center Financial Crisis
    The SaaSpocalypse Paradox

    Chapters

    • (00:00) - Introduction and Lunar New Year Celebrations
    • (02:44) - AI Bot Controversy: Krabby Rathbun
    • (05:04) - AI Alignment and Departures in Major Labs
    • (07:39) - Google's Gemini and AI Cloning Concerns
    • (10:08) - Tool Shed: Exploring Model Councils
    • (12:28) - Distillation and AI Model Development
    • (21:11) - Model Pledge Drive and Council Approaches
    • (23:07) - Post-Processing and AI's Impact on Work
    • (23:57) - AI's Role in Job Security and Economic Productivity
    • (30:30) - Reverse Centaurs and Naming Conventions
    • (32:09) - Tech Debt and Cognitive Debt
    • (36:24) - Cognitive Debt in AI-Assisted Programming
    • (48:16) - Cultural Shifts in Responsibility
    • (49:13) - Exploring Workflow and Automation
    • (52:07) - The Impact of AI on Job Structures
    • (54:23) - Tolerance for AI Mistakes
    • (56:59) - Documenting Knowledge for AI
    • (57:24) - Bifurcation of Tasks and Automation
    • (59:34) - The Future of Meetings in an AI World
    • (01:00:21) - State of the AI Bubble
    • (01:03:41) - Market Dynamics and Investment Strategies

    Connect with ADIPod
    • Email us at humans@adipod.ai if you have any feedback, requests, or just want to say hello!
    • Check out our website: www.adipod.ai
    1 hr and 12 mins
  • Episode 13: Pi Coding Agent, Dark Factories & the Furniture Makers of Carolina
    Feb 13 2026

    This episode covers the simultaneous release of Claude Opus 4.6 and GPT Codex 5.3, a deep dive into the Pi coding agent framework and why Shimin prefers it over Claude Code, AI security industry criticism, software dark factories, an emotional segment mourning the craft of programming, Claude Code's new /insights command, and AI bubble economics including Anthropic's $20B raise, Google's 100-year bond, and Oracle's $50B debt plans.


    Takeaways

    • The biggest compliment for Codex 5.3 is that it feels like Claude Code now
    • Opus 4.6 auto-drops into plan mode and offers to clear context after planning — writes plan.md it can follow across interruptions
    • Pi agent's skill-based approach may represent the bitter lesson of AI tooling — less scaffolding, more model intelligence
    • The "everyone is a manager now" framing for agentic coding resonates — reduced dopamine from not doing work with your own hands
    • Context switching burnout from running multiple agent instances is an emerging problem
    • AI may freeze software innovation at whatever paradigm the training data captures (jQuery → React, but what comes after?)

    Resources Mentioned
    Introducing Claude Opus 4.6
    Introducing GPT-5.3-Codex
    Opus 4.6, Codex 5.3, and the post-benchmark era
    Pi coding agent
    The AI Security Industry is Bullshit
    Software Factories And The Agentic Moment
    We mourn our craft
    Anthropic closes in on $20B round
    Oracle says it plans to raise up to $50 billion in debt and equity this year
    The New Announcement Economy

    Chapters

    • (00:00) - Introduction to AI in Software Development
    • (03:02) - Latest AI Model Releases and Comparisons
    • (06:03) - Exploring AI Coding Agents
    • (08:55) - The Rise of the Pi Coding Agent
    • (12:08) - AI's Impact on Job Security
    • (15:01) - AI Security Concerns and Industry Insights
    • (33:08) - The Rise of AI Security Concerns
    • (36:30) - De-risking AI: Strategies and Challenges
    • (38:29) - The Emergence of Software Factories
    • (41:19) - Cloning Software: The Digital Twin Universe
    • (44:39) - In-house Development vs. SaaS Solutions
    • (46:57) - The Future of Compliance and Audit Industries
    • (51:52) - The Impact of AI on Software Development
    • (56:37) - Navigating the Emotional Landscape of AI Development
    • (01:07:55) - Mourning the Craft: A Country Song Reflection
    • (01:09:51) - Building Beyond Loss: Tennyson's Ulysses
    • (01:12:47) - Claude Code Insights: Enhancing Development Workflows
    • (01:19:09) - The AI Bubble: Current Trends and Predictions
    • (01:24:00) - The Announcement Economy: News in the Age of AI
    • (01:30:04) - The Future of AI: Investment and Market Dynamics

    Connect with ADIPod
    • Email us at humans@adipod.ai if you have any feedback, requests, or just want to say hello!
    • Check out our website: www.adipod.ai
    1 hr and 25 mins
  • Episode 12: The OpenClaw Saga, How AI Affects Programming Skills, and How Vibe Coding is Addictive like Gambling
    Feb 6 2026

    In this episode, Dan and Shimin discuss the evolving landscape of AI programming, focusing on Anthropic's AI Constitution, OpenAI's new product Prism, and the implications of AI tools on coding skills. They explore the financial viability of AI companies, the concept of vibe coding, and the potential risks of an AI bubble. The conversation highlights the importance of understanding AI's impact on jobs and the ethical considerations surrounding AI development.


    Takeaways

    • Anthropic's AI Constitution raises questions about AI agency.
    • AI tools can enhance or hinder coding skill development.
    • The financial viability of AI companies is under scrutiny.
    • Vibe coding can lead to a false sense of accomplishment.


    Resources Mentioned
    Does Anthropic believe its AI is conscious, or is that just what it wants Claude to think?
    Exclusive: Pentagon clashes with Anthropic over military AI use, sources say
    OpenAI launches Prism, a new AI workspace for scientists
    Open Coding Agents: Fast, accessible coding agents that adapt to any repo
    Trinity Large
    ClawHub
    ClawdBot Skills Just Ganked Your Crypto
    MoltBook
    Superpowers: How I'm using coding agents in October 2025
    My Five Stages of AI Grief
    How AI assistance impacts the formation of coding skills
    Breaking the Spell of Vibe Coding
    Nvidia shares are down after a report that its OpenAI investment stalled. Here's what's happening
    Inside OpenAI's unit economics


    Connect with ADIPod

    • Email us at humans@adipod.ai if you have any feedback, requests, or just want to say hello!
    • Check out our website: www.adipod.ai


    1 hr and 10 mins