
Absolute AppSec

By: Ken Johnson and Seth Law

A weekly podcast of all things application security related. Hosted by Ken Johnson and Seth Law.
Episodes
  • Episode 316 - w/Coffee, Chaos, and ProdSec - Agentic Development Lifecycle
    Mar 17 2026
    In episode 316 of Absolute AppSec, hosts Ken Johnson and Seth Law participate in a crossover with Kurt Hendle and Cameron Walters from the Coffee, Chaos, and ProdSec podcast to discuss the radical transformation of security roles in an AI-driven landscape. The guests share origin stories rooted in gaming and "mischievous" curiosity, which evolved into deep careers in security architecture and engineering. The primary discussion centers on the industry's shift toward an "Agentic Development Lifecycle" (ADLC), where the sheer volume of AI-generated code renders traditional manual review gates obsolete. This acceleration risks a "rubber stamp" culture where developers approve fixes in seconds rather than minutes, potentially leading to a mountain of technical debt. Consequently, the role of security is shifting from manual bug finding to high-level governance and "context infusion," requiring practitioners to manage AI agents that automate complex tasks. Economically, the group highlights how frontier model announcements have caused massive market volatility, wiping billions from traditional security stocks. Ultimately, they conclude that while older "primitive" tools are failing, professionals who lean into AI as a "superpower" for governance and oversight will be essential for navigating this new, non-deterministic reality.
  • Episode 315 - Risks of "AI-Native" Security Products, Rapid Software Development
    Mar 3 2026
    In episode 315 of Absolute AppSec, Ken Johnson and Seth Law discuss the rapidly evolving challenges of securing software in an era of AI-assisted development. The hosts provide updates on their "Harnessing LLMs for Application Security" training, noting that the field is changing so fast that they must constantly update their exercises to include new agents and advanced tools like Claude Code. A primary concern raised is the "naivete" of many new security tools, where prompts are often automatically generated by AI rather than expertly crafted, causing a loss of essential nuance. The hosts also warn against AI companies building security products without specialized expertise, citing a zero-click exploit in the "Comet" AI browser that could exfiltrate sensitive secrets via calendar summaries. As development teams now ship code at "AI speed," the hosts argue that traditional AppSec methods are too slow, necessitating a strategic pivot toward automated design reviews, governance, and observability rather than just chasing individual vulnerabilities. Despite the inherent risks and the ongoing difficulty of managing AI reasoning drift, they remain optimistic that these tools can eventually unlock more efficient, hands-off AppSec workflows if managed with proper guardrails and deterministic oversight.
  • Episode 314 - LLM AppSec Disruption, Limitations of AI in Security, AppSec Oversight
    Feb 24 2026
    In episode 314 of Absolute AppSec, hosts Ken Johnson and Seth Law discuss the seismic shift in the application security landscape triggered by the rise of Large Language Models (LLMs) and Anthropic's "Claude Code". They highlight the massive economic repercussions of these AI advancements, noting that billions in market value were wiped from traditional cybersecurity stocks as investors begin to believe frontier models might eventually write perfectly secure code. The hosts critique the industry's historical reliance on "checkbox" compliance tools like SAST, DAST, and SCA, arguing that these "archaic" methods are being replaced by AI-native strategies capable of reasoning through complex logic flaws. While they acknowledge that AI can suffer from "reasoning drift" and still requires deterministic validation to avoid false positives, they emphasize that security professionals must adapt by building custom "skills" and focusing on governance and observability. The discussion concludes that as developers move to "AI speed," the traditional role of the AppSec professional is evolving into a "Jarvis-like" orchestrator who manages automated workflows and infuses institutional knowledge into AI agents to maintain oversight without slowing down production.