• TSL.P Labs 🧪 Initiative: Why 96% AI Accuracy Still Fails Lawyers: Ethics, Hallucinations, and the Future of the Billable Hour ⚖️🤖
    Mar 6 2026
    📌 Too Busy to Read This Week's Editorial? Welcome to the TSL.P Labs Initiative. 🤖 This week's episode builds on my March 3rd, 2026, editorial, "Even Though AI Hallucinations Are Down: Lawyers STILL MUST Verify AI, Guard PII, and Follow ABA Ethics Rules ⚖️🤖," exploring why falling hallucination rates are a misleading comfort blanket for lawyers, and how ABA Model Rules on confidentiality, competence, diligence, candor, supervision, and client communication must govern every AI prompt you run. Our Google NotebookLM hosts translate the theory into practical workflows you can implement today—from document grounding and tokenization to vendor due diligence and line‑by‑line verification—so you can leverage AI confidently without sacrificing ethics, privilege, or your professional license. You will hear how document grounding changes what LLMs actually do, why uploading active case files to cloud AI tools can quietly trigger Rule 1.6 problems, and how cross‑border data flows, vendor training rights, and retention policies can erode privilege if you do not negotiate them carefully. 🔐 We also unpack practical safeguards like tokenization, internal sandbox testing, and bright‑line "danger zones" where AI must never operate unsupervised—especially on open‑ended research, choice of law, and any task that turns statistical text into real‑world legal risk. Finally, we confront the economic paradox: when AI can compress 100 hours of document review into seconds, but partners must still verify every line to protect their licenses, what exactly are clients paying for—and how does the billable hour survive? 💼 👉 Tune in now to learn how to stay tech‑forward without becoming the next ethics cautionary tale, and start designing AI policies that actually protect your clients, your firm, and your bar license.
    In our conversation, we cover the following:
    00:00 – Why "96% fewer hallucinations" is still not good enough in law ⚖️
    01:00 – How the remaining 4% error rate can trigger malpractice, sanctions, and ethics violations
    02:00 – From IT issue to ethics issue: ABA Model Rules as the real constraint on AI adoption
    03:00 – Document grounding 101: turning a free‑floating LLM into a reading‑comprehension engine
    04:00 – The hidden danger of "just upload the file": how Rule 1.6 confidentiality is instantly implicated
    05:00 – Cloud AI architecture, cross‑border data transfers, GDPR, and privilege risk 🌐
    06:00 – Model training nightmares: when your client's trade secrets leak back out through someone else's prompt
    07:00 – Negotiating no‑training clauses and ring‑fencing vendor data use (before you upload anything)
    08:00 – Tokenization explained: turning John Doe into "Plaintiff 01" without losing legal meaning 🔐 (see the sketch after this list)
    09:00 – What AI does well today: grounded summarization, clause extraction, and playbook‑based redlines
    10:00 – The "danger zone" of tasks: open‑ended research, choice of law, and abstract legal reasoning
    11:00 – Phantom case law: how LLMs manufacture perfect‑looking but fake citations (and Rule 3.3 candor)
    12:00 – Sandboxing AI tools internally and measuring real‑world failure rates against known outcomes 🧪
    13:00 – Building bright‑line firm policies around forbidden AI use cases
    14:00 – Verification as a workflow, not a suggestion: what Model Rules 5.1 and 5.3 demand from supervisors
    15:00 – The efficiency paradox: when partner‑level verification erases associate‑level time savings ⏱️
    16:00 – Making AI verification as routine as a conflict check in your practice
    17:00 – Falling hallucination rates, rising risk: why better AI can still make lawyers more vulnerable
    18:00 – Client communication under Rule 1.4: when and why clients may be entitled to know you used AI
    19:00 – "You can delegate the task, not the liability": Rule 1.2 and ultimate responsibility for AI‑assisted work
    20:00 – Treating every AI prompt and ToS as a potential ethics document 📝
    21:00 – The existential question: if AI drafts in seconds, what exactly are clients paying lawyers for?
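    As a companion to the tokenization segment at 08:00, here is a minimal sketch of the idea in Python: swap real names for neutral placeholders before any text reaches an AI tool, and keep the key offline so the output can be restored. The names, mapping, and helper functions are hypothetical illustrations, not any vendor's actual workflow.

```python
import re

# Hypothetical mapping the firm keeps offline; the key itself is
# confidential client information and never goes into a prompt.
PARTY_KEY = {"John Doe": "Plaintiff 01", "Acme Corp.": "Defendant 01"}

def tokenize(text: str) -> str:
    """Replace each real name with its neutral placeholder."""
    for real_name, placeholder in PARTY_KEY.items():
        text = re.sub(re.escape(real_name), placeholder, text, flags=re.IGNORECASE)
    return text

def detokenize(text: str) -> str:
    """Restore real names in the AI output before it rejoins the matter file."""
    for real_name, placeholder in PARTY_KEY.items():
        text = text.replace(placeholder, real_name)
    return text

safe_prompt = tokenize("John Doe alleges Acme Corp. breached the lease.")
print(safe_prompt)  # -> "Plaintiff 01 alleges Defendant 01 breached the lease."
```

    The placeholder text preserves the legal roles, so the model's analysis survives the round trip; only the identifying details stay behind.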
    24 mins
  • 🎙️ Ep. #133 | AI Search, GEO & Legal Marketing Tech: How Small Law Firms Win Cases — Not Just Clicks!
    Mar 17 2026
    My next guest is Nick Cohen, Chief Operating Officer of Matador Solutions — a legal marketing think tank and agency — and a newly minted partner at Cohen Injury Law Group. Nick brings a rare dual perspective: he lives the daily grind of running a law firm AND helps over 170 firms across the country use technology and marketing strategy to grow their practices. With more than $1 billion in case value generated for clients, Nick knows what separates the law firms that thrive from the ones that spin their wheels. 🚀 Whether you are just hanging out your shingle or you have been practicing for years and feel overwhelmed by the alphabet soup of SEO, GEO, PPC, and AI, this episode breaks it all down in plain language. Nick shares actionable steps — some of which cost nothing — to help your firm show up where your next great client is already looking. ⚖️
    Join Nick Cohen and me as we discuss the following three questions and more!
    🤔 What are the top three ways a small or mid-size law firm can leverage AI-driven search — like Google AI Overviews and ChatGPT — to reliably generate better cases, not just more clicks?
    💡 For firms that feel overwhelmed by SEO, paid search, and social media, what are the top three pieces of marketing technology or automations they should implement first to turn their website into a true new case acquisition system?
    🏆 Looking across $1 billion+ in case value generated for over 170 law firms, what are the top three technology habits the most successful firms share — and what are their less successful peers simply not doing?
    In our conversation, we cover the following:
    [0:00] 🎤 Introduction & five-star review shoutout
    [0:45] 👨‍💼 Nick's background: Matador Solutions, Cohen Injury Law Group, and tech stack overview (Jira, Google Suite, Claude, ChatGPT, WordPress, Slack)
    [1:30] 💻 Hardware setup: MacBook Pro M4, desktop, HDMI monitor — what Nick runs on daily
    [3:00] 📱 iPhone, planned obsolescence, and the Apple ecosystem slowdown conversation
    [4:00] ❓ Question 1: Leveraging AI-driven search (Google AI Overviews, ChatGPT) to get better cases — not just traffic
    [5:00] 🔍 GEO vs. SEO explained — what is Generative Engine Optimization and why it matters for your law firm right now
    [6:30] 📖 The difference: SEO = Google ranking; GEO = getting cited by ChatGPT, Claude, Perplexity, and Grok
    [8:00] 🤖 Schema markup, robots.txt, and opening your website to LLM crawlers — practical steps any firm can take (see the sketch after this list)
    [9:00] 📋 Attorney directory listings (Avvo, Super Lawyers, FindLaw) — are they worth the money in 2026?
    [10:30] ✍️ Tip #2: High-quality thought leadership content as a GEO and SEO powerhouse
    [11:30] ⭐ Tip #3: Reviews, reviews, reviews — the single highest-ROI, zero-cost activity for any law firm
    [12:00] 📲 The "one-click review link" strategy: why text beats email every time
    [13:00] 😬 How to handle negative reviews — call first, respond professionally, and why a 4.9 rating beats a perfect 5.0
    [15:00] ❓ Question 2: Top three marketing tech tools/automations for overwhelmed firms — CallRail, case management software, and understanding your channels
    [17:30] ❓ Question 3: The technology habits that separate high-growth firms from stagnant ones — intake systems, engagement, and growth mindset
    [19:30] 🗺️ How Matador Solutions walks a brand-new firm from zero to a steady stream of cases — step by step
    [22:00] 📬 Where to find Nick Cohen
    Resources
    🔗 Connect with Nick Cohen
    📧 Email: nick@matadorsolutions.net
    💼 LinkedIn: https://www.linkedin.com/in/nickecohen/
    🌐 Website: matadorsolutions.net
    📚 Mentioned in the Episode (Non-Hardware / Non-Software)
    🎙️ Apple Podcasts — podcasts.apple.com
    ⚖️ Matador Solutions — Legal marketing agency — matadorsolutions.net
    📋 Avvo — Attorney directory — avvo.com
    ⚖️ Cohen Injury Law Group — Nick's law firm — https://cohenandcohen.net/
    ⭐ Facebook Reviews — facebook.com
    📊 GEO (Generative Engine Optimization) — The emerging discipline of optimizing for AI-driven search engines
    ⭐ Google Reviews — google.com/business
    📋 FindLaw — Attorney directory — findlaw.com
    📋 Super Lawyers — Attorney directory — superlawyers.com
    ⭐ Yelp — yelp.com
    💻 Hardware Mentioned in the Conversation
    📱 Apple iPhone 15 — Nick's smartphone (approximate model) — apple.com/iphone
    📱 Apple iPhone (latest, annual upgrade) — Michael's smartphone — apple.com/iphone
    🖥️ Apple Mac Studio (M3 chip) — Michael's desktop — apple.com/mac
    🖥️ Apple MacBook Pro (M4 chip) — Nick's primary laptop — apple.com/macbook-pro
    ☁️ Software & Cloud Services Mentioned in the Conversation
    📞 CallRail — Call tracking & marketing ROI — callrail.com
    🤖 ChatGPT (OpenAI) — AI assistant & AI search — chatgpt.com
    🤖 Claude (Anthropic) — AI assistant — claude.ai
    🤖 ...
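    For the robots.txt step at [8:00], Python's standard-library parser can show where your site stands today. A minimal sketch: the firm URL is a placeholder, and the user-agent strings are the publicly documented names of the major LLM crawlers at the time of writing, so verify them before relying on this.

```python
from urllib.robotparser import RobotFileParser

SITE = "https://www.example-lawfirm.com"  # placeholder: your firm's website
LLM_CRAWLERS = ["GPTBot", "ClaudeBot", "PerplexityBot", "Google-Extended"]

parser = RobotFileParser(f"{SITE}/robots.txt")
parser.read()  # fetches and parses the live robots.txt

for bot in LLM_CRAWLERS:
    status = "allowed" if parser.can_fetch(bot, f"{SITE}/") else "blocked"
    print(f"{bot}: {status}")
```

    Blocking everything keeps your content out of AI answers; the episode's GEO point is that firms generally want these crawlers allowed on public marketing pages.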
    24 mins
  • TSL.P EP# 132 (Special Episode): AI, Deepfakes, and Metadata: Guest-Hosting Capital University Law School's First Law Library Podcast Club with Professor Jennifer Wondracek 🎙️⚖️
    Mar 3 2026
    In this special episode, I join Professor Wondracek virtually to guest-host Capital's very first Podcast Club session for a live conversation about AI, legal ethics, deepfakes, and metadata. We talk candidly with law students about how AI-generated evidence, consumer AI tools, and digital footprints are already impacting sanctions, privilege, and professional responsibility, then translate those issues into practical safeguards for everyday practice. Whether you are in law school, running a small firm, or managing litigation for a larger organization, this inaugural Podcast Club episode shows how to stay competent, secure, and credible when AI and technology are part of your case strategy.
    Questions
    Join Professor Jennifer Wondracek and me as we discuss the following three questions and more!
    How do deepfakes and manipulated digital evidence challenge a lawyer's ethical duties under core rules on competence, candor to the tribunal, and honesty?
    What can we learn from recent cases involving deepfake videos, privilege risks in consumer AI tools, and sanctions for hallucinated citations when designing our own AI workflows?
    How can lawyers and law students build realistic, sustainable practices for reviewing metadata, using VPNs and secure Wi‑Fi, and choosing secure legal AI and eDiscovery tools?
    Timestamps
    In our conversation, we covered the following:
    00:00 – Welcome to Capital University Law School's first Podcast Club: live recording and today's focus on AI and ethics 🎓
    01:00 – Introducing Michael D.J. Eisenberg as guest host, his work with veterans, and The Tech-Savvy Lawyer.Page blog and podcast 📚
    02:30 – What is a deepfake, and how a staged "Bethesda incident" highlights the real-world risk of fake video evidence 🚨
    04:00 – Applying competence rules to technology: why "I didn't know" is not a sustainable defense for lawyers
    05:00 – Everyday tech risks: public Wi‑Fi, airports, coffee shops, and why lawyers must use VPNs when client information is involved 🌐
    06:30 – Discussing NordVPN, ExpressVPN, and how unsecured sessions can compromise client portals, trust accounts, and email 🔐
    07:30 – First steps in vetting digital evidence: what to look for in image files and when to call in a forensic expert
    08:30 – Lessons from deepfake litigation and obviously altered video: shadows, color-in-black-and-white, and credibility with the court 🎥
    10:00 – Candor to the tribunal and rules against dishonesty, fraud, and misrepresentation in the AI era
    11:00 – Student question: can you rely on built-in operating system tools to review metadata, or do you need specialist software? 🖼️
    13:30 – Live demo: opening file properties, reading timestamps, device info, and geotags to validate or challenge evidence (see the sketch after this list)
    16:00 – When scrubbed metadata makes sense, when it doesn't, and how to request original metadata in discovery
    18:00 – Five practical safeguards for new and experienced lawyers: education, protocols, client transparency, updated letters, and constant monitoring of AI changes ✅
    20:00 – Why refusing to learn AI and tech is itself a risk to your bar license and your clients' interests
    21:00 – Student Q&A: low-resource firms, large volumes of data, and using sampling plus AI to stretch limited budgets
    22:30 – Using legal AI to surface anomalies in documents and metadata while still protecting privilege
    23:00 – How consumer AI terms and conditions can put privilege and work product at risk, and what to look for in safer options ⚠️
    24:00 – Free vs. paid AI accounts: retention, training, and why personally identifiable information doesn't belong in general chatbots
    25:00 – Evaluating legal AI vendors: zero retention, encryption, prompt confidentiality, and subpoena requirements
    26:00 – Using tightly controlled legal research platforms and "vault" environments to access models like GPT or Claude securely 🧠
    27:00 – Documenting prompts and AI use so that, if questioned by a court or bar, you can show reasonable diligence
    28:30 – Reasonable metadata review in practice: random sampling, documenting your process, and knowing when to bring in eDiscovery tools
    30:00 – How modern eDiscovery platforms surface metadata and support deeper analysis at scale 📂
    31:00 – Staying current on AI and tech: newsletters, bar alerts, court updates, and following The Tech-Savvy Lawyer
    32:30 – AI-hallucinated citations and sanctions: how one New York matter became a warning to the entire profession 💸
    34:30 – Firm-wide consequences when AI misuse becomes a pattern: reputational damage, client impact, and even firm dissolution
    36:00 – Owning mistakes, repairing trust with judges, and why transparency matters more than perfection
    37:00 – Live giveaway of The Lawyer's Guide to Podcasting during the first Podcast Club session 🎲
    38:00 – Inviting students to Capital's upcoming summit/bootcamp and to dinner at the Red Door Tavern, plus closing thoughts on the future of tech ...
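    If you want to try the 13:30 live demo yourself, a few lines of Python with the Pillow imaging library (pip install Pillow) will surface the same fields discussed in the session. A minimal sketch with a placeholder filename; it is a starting point for issue-spotting, not a substitute for a forensic expert.

```python
from PIL import Image
from PIL.ExifTags import TAGS

with Image.open("exhibit_photo.jpg") as img:  # placeholder filename
    exif = img.getexif()
    for tag_id, value in exif.items():
        name = TAGS.get(tag_id, tag_id)  # map numeric tag IDs to readable names
        if name in ("DateTime", "Make", "Model", "Software"):
            print(f"{name}: {value}")
    gps = exif.get_ifd(0x8825)  # GPS sub-IFD; empty if the file has no geotags
    if gps:
        print("GPSInfo:", dict(gps))
```

    A mismatch between a claimed capture date and the DateTime or Software fields is exactly the kind of anomaly that justifies a discovery request for the original metadata.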
    40 mins
  • TSL Labs 🧪 Initiative: Attorney-Client Privilege vs. Public AI: The Hoeppner Decision Lawyers Need to Understand in 2026 ⚖️🤖
    Feb 27 2026
    Join us for an AI-powered deep dive into the ethical challenges facing legal professionals in the age of generative AI. 🤖 We unpack the February 23, 2026, editorial "AI may not be your co‑counsel—and a recent SDNY decision just made that painfully clear ⚖️🤖." Our Google NotebookLM hosts break down why a single click on a public AI tool's Terms of Use can trigger a privilege waiver, and what "tech competence" really means in 2026—especially after United States v. Hoeppner and Judge Jed Rakoff's wake-up-call analysis of confidentiality and third-party disclosure risk. 🔗 Read the full editorial on The Tech-Savvy Lawyer.Page and share this episode with a colleague who is experimenting with AI in client matters.
    In our conversation, we cover the following:
    00:00 — The "superhuman assistant" promise, and the procedural nightmare risk. 🧠⚖️
    00:01 — The core warning: AI use can "blow a hole" in privilege.
    00:02 — Editorial overview: "The AI Privilege Trap" by Michael D.J. Eisenberg.
    00:02 — The case: United States v. Hoeppner (SDNY) and why it matters.
    00:03 — Why Judge Jed Rakoff's opinion gets attention (tech-literate, influential).
    00:03 — The facts: defendant drafts with a public AI tool, then sends outputs to counsel.
    00:04 — The court's conclusion: no attorney-client privilege, no work product protection.
    00:05 — Privilege basics applied to AI: "confidential + lawyer" and why AI fails that test.
    00:06 — The Terms-of-Use problem: inputs/outputs may be collected and shared. 🧾
    00:07 — The "stranger on the street" analogy: you can't retroactively make it confidential.
    00:08 — PII and client facts: why pasting sensitive data into public AI is high-risk.
    00:08 — ABA Model Rule 1.1: competence includes understanding tech risks.
    00:09 — ABA Model Rule 1.6: confidentiality and waiver risk with public AI.
    00:10 — "Reasonable safeguards": read policies, adjust settings, and know training/logging.
    00:11 — Public vs. enterprise AI: why contracts and "walled gardens" matter.
    00:11 — Legal research AI examples discussed: Lexis/Westlaw-style AI offerings.
    00:12 — ABA Model Rules 5.1 & 5.3: supervise AI like a nonlawyer assistant/vendor.
    00:13 — Redefining "tech-savvy lawyer" in 2026: judgment and restraint. 🧭
    00:14 — The "straight-face test": could you defend confidentiality after a judge reads the policy?
    00:15 — Client-side risk: clients can sabotage privilege before contacting counsel.
    00:16 — Practical takeaway: check settings, read the fine print, keep true secrets offline (for now). 🔒
    RESOURCES
    Mentioned in the episode
    ABA Model Rules of Professional Conduct (Rules 1.1, 1.4, 1.6, 5.1, 5.3)
    Software & Cloud Services mentioned in the conversation
    Lexis (Lexis+ AI category mentioned) — https://www.lexisnexis.com/
    Microsoft Word — https://www.microsoft.com/microsoft-365/word
    Public generative AI "chatbot" tools (general category) — https://en.wikipedia.org/wiki/Chatbot
    Westlaw (Westlaw AI category mentioned) — https://legal.thomsonreuters.com/en/products/westlaw
    19 mins
  • TSL.P Labs 🧪: Lawyers and AI Oversight: What the VA's Patient Safety Warning Teaches About Ethical Law Firm Technology Use! ⚖️🤖
    Feb 20 2026
    Join us for an AI-powered deep dive into the ethical challenges facing legal professionals in the age of generative AI. 🤖 In this episode, we discuss our February 16, 2026, editorial, "Lawyers and AI Oversight: What the VA's Patient Safety Warning Teaches About Ethical Law Firm Technology Use! ⚖️🤖" and explore why treating AI-generated drafts as hypotheses—not answers—is quickly becoming a survival skill for law firms of every size. We connect a real-world AI failure risk at the Department of Veterans Affairs to the everyday ways lawyers are using tools like chatbots, and we translate ABA Model Rules into practical oversight steps any practitioner can implement without becoming a programmer.
    In our conversation, we cover the following:
    00:00:00 – Why conversations about the future of law default to Silicon Valley, and why that's a problem ⚖️
    00:01:00 – How a crisis at the U.S. Department of Veterans Affairs became a "mirror" for the legal profession 🩺➡️⚖️
    00:03:00 – "Speed without governance": what the VA Inspector General actually warned about, and why it matters to your practice
    00:04:00 – From patient safety risk to client safety and justice risk: the shared AI failure pattern in healthcare and law
    00:06:00 – Shadow AI in law firms: staff "just trying out" public chatbots on live matters and the unseen risk this creates
    00:07:00 – Why not tracking hallucinations, data leakage, or bias turns risk management into wishful thinking
    00:08:00 – Applying existing ABA Model Rules (1.1, 1.6, 5.1, 5.2, and 5.3) directly to AI use in legal practice
    00:09:00 – Competence in the age of AI: why "I'm not a tech person" is no longer a safe answer 🧠
    00:09:30 – Confidentiality and public chatbots: how you can silently lose privilege by pasting client data into a text box
    00:10:30 – Supervision duties: why partners cannot safely claim ignorance of how their teams use AI
    00:11:00 – Candor to tribunals: the real ethics problem behind AI-generated fake cases and citations
    00:12:00 – From slogan to system: why "meaningful human engagement" must be operationalized, not just admired
    00:12:30 – The key mindset shift: treating AI-assisted drafts as hypotheses, not answers 🧪
    00:13:00 – What reasonable human oversight looks like in practice: citations, quotes, and legal conclusions under stress test (see the sketch after this list)
    00:14:00 – You don't need to be a computer scientist: the essential due diligence questions every lawyer can ask about AI
    00:15:00 – Risk mapping: distinguishing administrative AI use from "safety-critical" lawyering tasks
    00:16:00 – High-stakes matters (freedom, immigration, finances, benefits, licenses) and heightened AI safeguards
    00:16:45 – Practical guardrails: access controls, narrow scoping, and periodic quality audits for AI use
    00:17:00 – Why governance is not "just for BigLaw" and how solos can implement checklists and simple documentation 📋
    00:17:45 – Updating engagement letters and talking to clients about AI use in their matters
    00:18:00 – Redefining the "human touch" as the safety mechanism that makes AI ethically usable at all 🤝
    00:19:00 – AI as power tool: why lawyers must remain the "captain of the ship" even when AI drafts at lightning speed 🚢
    00:20:00 – Rethinking value: if AI creates the first draft, what exactly are clients paying lawyers for?
    00:20:30 – Are we ready to bill for judgment, oversight, and safety instead of pure production time?
    00:21:00 – Final takeaways: building a practice where human judgment still has the final word over AI
    RESOURCES
    Mentioned in the episode
    American Bar Association Model Rules of Professional Conduct
    Terry Gerton's Federal News Network interview with Charyl Mason, Inspector General of the Department of Veterans Affairs: "VA rolled out new AI tools quickly, but without a system to catch mistakes, patient safety is on the line"
    Software & Cloud Services mentioned in the conversation
    ChatGPT — https://chat.openai.com/
    Lexis — https://www.lexisnexis.com
    Westlaw — https://legal.thomsonreuters.com
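    One low-tech way to turn the 00:12:30 mindset into the 00:13:00 workflow is to mechanically pull every citation and quotation out of an AI draft into a checklist a human must clear before filing. A minimal sketch: the filename is a placeholder and the patterns are rough illustrations, not a complete citation parser.

```python
import re

with open("ai_draft_motion.txt") as f:  # placeholder: the AI-assisted draft
    draft = f.read()

# Rough patterns for case-style citations ("Smith v. Jones, 123 F.3d 456")
# and for quoted passages of 20+ characters.
citations = re.findall(r"[\w.\- ]+ v\. [\w.\- ]+, \d+ [\w. ]+ \d+", draft)
quotes = re.findall(r"“[^”]{20,}”", draft)

print("VERIFY BEFORE FILING:")
for cite in citations:
    print(f"  [ ] citation exists and supports the point: {cite.strip()}")
for quote in quotes:
    print(f"  [ ] quote matches the source verbatim: {quote[:60]}...")
```

    The point is not the regexes; it is that the checklist forces a named human to stress-test every citation, quote, and legal conclusion before it leaves the firm.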
    23 mins
  • 🎙️ Ep. #131, Supercharging Litigation With AI: How StrongSuit Helps Lawyers Transform Research, Doc Review, and Drafting 💼⚖️
    Feb 17 2026
    My next guest is Justin McCallan, founder of StrongSuit, an AI-powered litigation platform built to transform how litigators handle legal research, document review, and drafting while keeping lawyers firmly in control. In this episode, Justin and I dig into practical, real-world workflows that solos, small firms, and big-firm litigators can use today and over the next few years to change the economics, pace, and strategy of litigation—without sacrificing accuracy, ethics, or the quality of advocacy.
    Join Justin and me as we discuss the following three questions and more!
    What are the top three ways litigators should be using AI tools like StrongSuit right now to change the economics and pace of litigation without sacrificing accuracy, ethics, or quality of advocacy?
    What are the top three mistakes lawyers make when adopting AI for litigation, and what practical workflows help lawyers stay in the loop and use AI as a force multiplier instead of a risk?
    Looking ahead to 2026 and beyond, what are the top three AI-driven workflows every litigator should master to stay competitive, and how can platforms like StrongSuit help build those capabilities into day-to-day practice?
    In our conversation, we cover the following:
    00:00 – Welcome and guest introduction: Justin joins the show and shares his current tech setup at his desk.
    00:00–01:00 – Justin's current tech stack: Lenovo laptop, ultra-wide monitor, and regular use of StrongSuit, ChatGPT, and Gemini for different AI tasks. Everyday tools: Microsoft Word and Power BI for analytics and fast decision-making.
    01:00–02:00 – Android vs. iPhone for AI use: Why Justin has been on Android for 17 years and how UI/UX familiarity often drives device choice more than AI capability.
    02:00–05:30 – Q1: Top three ways litigators should be using AI right now: Using AI for end-to-end legal research across 11 million precedential U.S. cases to build litigation outlines and identify key authorities. Scaling document review so AI surfaces relevant documents and synthesizes insights while lawyers focus on strategy and judgment. Leveraging AI for drafting and editing—improving style, clarity, and consistency beyond traditional spelling and grammar checks.
    05:30–07:30 – StrongSuit vs. basic tools like Word's grammar check: How StrongSuit aims to "up-level" a lawyer's writing, not just catch typos. Stylistic improvements, clarity enhancements, and catching subtle inconsistencies in legal documents.
    06:00–08:00 – AI context limits and scaling doc review: Constraints of large models' context windows (~1M tokens ≈ 750 pages). How StrongSuit runs multiple AI agents in parallel, each handling small page sets with heuristics to maintain cohesion and share insights.
    08:00–09:00 – Handling tens of thousands of documents: How StrongSuit can handle roughly 10,000–50,000 pages at a time, with the ability to scale further for enterprise matters.
    09:00–11:30 – Origin story of StrongSuit: Why Justin saw a once-in-a-generation opportunity when large language models emerged, and how law, with its precedent and text-heavy nature, is especially suited to AI. StrongSuit's focus on litigators: supporting lawyers from intake through trial while keeping them in the loop at every step.
    11:30–13:30 – From intake to brief drafting in minutes: Generating full litigation outlines, research, and analysis in about ten minutes, then moving directly into drafting memos, briefs, complaints, and motions. StrongSuit's long-term goal: automating 50–99% of major litigation workflows by the end of 2026 while preserving lawyer control and judgment.
    12:00–14:30 – How StrongSuit tackles hallucinations (see the sketch after this list): Building a full database of all precedential U.S. cases enriched with metadata: parties, summaries, holdings, and more. Validating citations by checking whether the Bluebook citation actually exists in StrongSuit's case database before surfacing it to the user. Why lawyers should still review cases on-platform before filing, even when AI has filtered out hallucinations.
    14:30–16:30 – Coverage and jurisdictions: Coverage of all U.S. jurisdictions, federal and state, focused on precedential cases. Handling most regulations from administrative agencies, and limits around local ordinances. Uploading your own case files and using complaints and prior research as inputs into StrongSuit workflows.
    15:00–17:00 – Security and confidentiality for litigators: SOC 2 compliance and industry-standard encryption at rest and in transit. No model training on user data. Optional end-to-end encryption that can even prevent developers from accessing case content, using local encryption keys.
    16:30–20:30 – Q2: Top mistakes lawyers make when adopting AI for litigation: Mistake #1: Talking about AI instead of diving in with structured experiments and sanitized documents. Using a framework to identify high-impact tasks: high volume, repetitive work, and heavy data/analysis (e.g., doc review, research, contract ...
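    The validation pattern Justin describes at 12:00–14:30 can be shown in miniature: extract reporter citations from a draft and flag any that do not appear in a known-good index. This illustrates the general idea only, not StrongSuit's implementation; the tiny in-memory set stands in for a real database of precedential cases.

```python
import re

KNOWN_CITATIONS = {  # stand-in for a full case-law database
    "410 U.S. 113",
    "347 U.S. 483",
}

def check_citations(draft: str) -> None:
    """Flag reporter citations that are absent from the known-good index."""
    for cite in re.findall(r"\d+ (?:U\.S\.|F\.\d?d|S\. Ct\.) \d+", draft):
        verdict = "found in database" if cite in KNOWN_CITATIONS else "POSSIBLE HALLUCINATION"
        print(f"{cite}: {verdict}")

check_citations("Compare 410 U.S. 113 with the suspicious 999 U.S. 999.")
```

    Even with this filter, the episode's caveat stands: a citation that exists may still not support the proposition, so reviewing the case itself remains the lawyer's job.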
    35 mins
  • TSL.P Labs 🧪: Courts Are Punishing Fake AI Evidence: How to Protect Your Cases, Clients, and License ⚖️🤖
    Feb 13 2026
    Everyday devices can capture extraordinary evidence, but the same tools can also manufacture convincing fakes. 🎥⚖️ In this episode, we unpack our February 9, 2026, editorial on how courts are punishing fake digital and AI-generated evidence, then translate the risk into practical guidance for lawyers and legal teams. You'll hear why judges are treating authenticity as a frontline issue, what ethical duties get triggered when AI touches evidence or briefing, and how a simple "authenticity playbook" can help you avoid career-ending mistakes. ✅
    In our conversation, we cover the following:
    00:00:00 – Preview: From digital discovery to digital deception, and the question of what happens when your "star witness" is actually a hallucination or deepfake 🚨
    00:00:20 – Introducing the editorial "Everyday Tech, Extraordinary Evidence Again: How Courts Are Punishing Fake Digital and AI Data." 📄
    00:00:40 – Welcome to the Tech-Savvy Lawyer.Page Labs Initiative and this AI Deep Dive Roundtable 🎙️
    00:01:00 – Framing the episode: flipping last month's optimism about smartphones, dash cams, and wearables as case-winning "silent witnesses" to their dark mirror—AI-fabricated evidence 🌗
    00:01:30 – How everyday devices and AI tools can both supercharge litigation strategy and become ethical landmines under the ABA Model Rules ⚖️
    00:02:00 – Panel discussion opens: revisiting last month's "Everyday Tech, Extraordinary Evidence" AI bonus and the optimism around smartphone, smartwatch, and dash cam data as unbiased proof 📱⌚🚗
    00:02:30 – Remembering cases like the Minnesota shooting and why these devices were framed as "ultimate witnesses" if the data is preserved quickly enough 🕒
    00:03:00 – The pivot: same tools, new threats—moving from digital discovery to digital deception as deepfakes and hallucinations enter the evidentiary record 🤖
    00:03:30 – Setting the "mission" for the episode: examining how courts are reacting to AI-generated "slop" and deepfakes, with an increasingly aggressive posture toward sanctions 💣
    00:04:00 – Why courts are on high alert: the "democratization of deception," falling costs of convincing video fakes, and the collapse of the old presumption that "pictures don't lie" 🎬
    00:04:30 – Everyday scrutiny: judges now start with "Where did this come from?" and demand details on who created the file, how it was handled, and what the metadata shows 🔍 (see the sketch after this list)
    00:05:00 – Metadata explained as the "data about the data"—timestamps, software history, edit traces—and how it reveals possible AI manipulation 🧬
    00:06:00 – Entering the "sanction phase": why we are beyond warnings and into real penalties for mishandling or fabricating digital and AI evidence 🚫
    00:06:30 – Horror Story #1 (Mendon v. Cushman & Wakefield, Cal. Super. Ct. 2025): plaintiffs submit videos, photos, and screenshots later determined to be deepfakes created or altered with generative AI 🧨
    00:07:00 – Judge Victoria Kakowski's response: finding that the deepfakes undermined the integrity of judicial proceedings and imposing terminating sanctions—the "death penalty" for the lawsuit ⚖️
    00:07:30 – How a single deepfake "poisons the well," destroying the court's trust in all of a party's submissions and forfeiting their right to the court's time 💥
    00:08:00 – Horror Story #2 (S.D.N.Y. 2023): the New York "hallucinating lawyer" case where six imaginary cases generated by ChatGPT were filed without verification 📚
    00:08:30 – Rule 11 sanctions and humiliation: Judge Castel's order, a monetary penalty, and the requirement to send apology letters to real judges whose names were misused ✉️
    00:09:00 – California follow-on: appellate lawyer Amir Mustaf files an appeal brief with 21 fake citations, triggering a $10,000 sanction and a finding that he did not read or verify his own filing 💸
    00:09:30 – Courts' reasoning: outsourcing your job to an AI tool is not just being wrong—it is wasting judicial resources and taxpayer money 🧾
    00:10:00 – Do we need new laws? Why Michael argues that existing ABA Model Rules already provide the safety rails; the task is to apply them to AI and digital evidence, not to reinvent them 🧩
    00:10:20 – Rule 1.1 (competence): why "I'm not a tech person" is no longer a viable excuse if you use AI to enhance video or draft briefs without understanding or verifying the output 🧠
    00:11:00 – Rule 1.6 (confidentiality): the ethical minefield of uploading client dash cam video or wearable medical data to consumer-grade AI tools and risking privilege leakage ☁️
    00:11:30 – Training risk: how client data can end up in model training sets and why "quick AI summaries" can inadvertently expose secrets 🔐
    00:12:00 – Rules 3.3 and 4.1 (candor and truthfulness): presenting AI-altered media as original or failing to verify AI output can now be treated as misrepresentation 🤥
    00:12:30 – Rules 5.1–5.3 (supervision): why partners and ...
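    A concrete first step toward the "Where did this come from?" scrutiny described at 00:04:30 is to log a cryptographic hash and the file timestamps the moment digital evidence arrives, so later edits or substitutions are detectable. A minimal sketch with a placeholder filename; this is a general intake habit rather than a method from the episode, and it complements rather than replaces forensic examination.

```python
import hashlib
from datetime import datetime, timezone
from pathlib import Path

evidence = Path("dashcam_clip.mp4")  # placeholder: the file as received

digest = hashlib.sha256(evidence.read_bytes()).hexdigest()  # content fingerprint
modified = datetime.fromtimestamp(evidence.stat().st_mtime, tz=timezone.utc)

print(f"File:      {evidence.name}")
print(f"SHA-256:   {digest}")
print(f"Modified:  {modified.isoformat()}")
print(f"Logged at: {datetime.now(timezone.utc).isoformat()}")
```

    If the hash computed at trial prep no longer matches the one logged at intake, the file has changed, and you know before the judge asks.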
    19 mins
  • TSL.P Labs 🧪: Legal Tech Wars, Client Data, and Your Law License: An AI-Powered Ethics Deep Dive ⚖️🤖
    Feb 6 2026
    📌 Too Busy to Read This Week's Editorial? Join us for an AI-powered deep dive into the ethical challenges facing legal professionals in the age of generative AI. 🤖 In this Tech-Savvy Lawyer Page Labs Initiative episode, AI co-hosts walk through how high‑profile "legal tech wars" between practice‑management vendors and AI research startups can push your client data into the litigation spotlight and create real ethics exposure under ABA Model Rules 1.1, 1.6, and 5.3. We'll explore what happens when core platforms face federal lawsuits, why discovery and forensic audits can put confidential matters in front of third parties, and how API lockdowns, stalled product roadmaps, and forced sales can grind your practice operations to a halt. More importantly, you'll get a clear five‑step action plan—inventorying your tech stack, confirming data‑export rights, mapping backup providers, documenting diligence, and communicating with clients—that works even if you consider yourself "moderately tech‑savvy" at best. Whether you're a solo, a small‑firm practitioner, in‑house, or simply AI‑curious, this conversation will help you evaluate whether you are the supervisor of your legal tech—or its hostage. 🔐
    In our conversation, we cover the following:
    00:00:00 – Setting the stage: Legal tech wars, "Godzilla vs. Kong," and why vendor lawsuits are not just Silicon Valley drama for spectators.
    00:01:00 – Introducing the Tech-Savvy Lawyer Page Labs Initiative and the use of AI-generated discussions to stress-test legal tech ethics in real-world scenarios.
    00:02:00 – Who's fighting and why it matters: Clio as the "nervous system" of many firms versus Alexi as the "brainy intern" of AI legal research.
    00:03:00 – The client data crossfire: How disputes over data access and training AI tools turn your routine practice data into high-stakes litigation evidence.
    00:04:00 – Allegations in the Clio–Alexi dispute, from improper data access to claims of anti-competitive gatekeeping of legal industry data.
    00:05:00 – Visualizing risk: Client files as sandcastles on a shelled beach and why this reframes vendor fights as ethics issues, not IT gossip.
    00:06:00 – ABA Model Rule 1.1 (Competence): What "technology competence" really entails and why ignorance of vendor instability is no longer defensible.
    00:07:00 – Continuity planning as competence: Injunctions, frozen servers, vendor shutdowns, and how missed deadlines can become malpractice.
    00:08:00 – ABA Model Rule 1.6 (Confidentiality): The "danger zone" of treating the cloud like a bank vault and misunderstanding who really holds the key.
    00:09:00 – Discovery risk explained: Forensic audits, third‑party access, protective orders that fail, and the cascading impact on client secrets.
    00:10:00 – Data‑export rights as your "escape hatch": Why "usable formats" (CSV, PDF) matter more than bare contractual promises.
    00:11:00 – Practical homework: Testing whether you can actually export your case list today, not during a crisis (see the sketch after this list).
    00:12:00 – ABA Model Rule 5.3 (Supervision): Treating software vendors like non‑lawyer assistants you actively supervise rather than passive utilities.
    00:13:00 – Asking better questions: Uptime, security posture, and whether your vendor is using your data in its own defense.
    00:14:00 – Operational friction: Rising subscription costs, API lockdowns, broken integrations, and the return of manual copy‑pasting.
    00:15:00 – Vaporware and stalled product roadmaps: How litigation diverts engineering resources away from features you are counting on.
    00:16:00 – Forced sales and 30‑day shutdown notices: Data‑migration nightmares under pressure and why waiting is the riskiest strategy.
    00:17:00 – The five‑step moderate‑tech action plan: Inventory dependencies, review contracts, map contingencies, document diligence, and communicate with nuance.
    00:18:00 – Turning risk management into a client‑facing strength and part of your value story in pitches and ongoing relationships.
    00:19:00 – Reframing legal tech tools as members of your legal team rather than invisible utilities.
    00:20:00 – "Supervisor or hostage?": The closing challenge to check your contracts, your data‑export rights, and your practical ability to "fire" a vendor.
    Resources
    Mentioned in the episode
    ABA Model Rule 1.1 – Competence (Technology Competence Comment) – https://www.americanbar.org/groups/professional_responsibility/publications/model_rules_of_professional_conduct/rule_1_1_competence/
    ABA Model Rule 1.6 – Confidentiality of Information – https://www.americanbar.org/groups/professional_responsibility/publications/model_rules_of_professional_conduct/rule_1_6_confidentiality_of_information/
    ABA Model Rule 5.3 – Responsibilities Regarding Nonlawyer Assistance – https://www.americanbar.org/groups/professional_responsibility/publications/model_rules_of_professional_conduct/rule_5_3_responsibilities_regarding_nonlawyer_assistance/...
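    The 00:11:00 homework can be scripted in a dozen lines: confirm today that your case-list export actually opens and carries the fields a migration would need. A minimal sketch; the filename and column names are hypothetical and should be matched to whatever your practice-management platform actually exports.

```python
import csv

REQUIRED_COLUMNS = {"matter_id", "client_name", "open_date", "status"}  # hypothetical

with open("case_list_export.csv", newline="") as f:  # placeholder filename
    reader = csv.DictReader(f)
    headers = set(reader.fieldnames or [])
    row_count = sum(1 for _ in reader)

missing = REQUIRED_COLUMNS - headers
if missing:
    print("Missing fields:", ", ".join(sorted(missing)))
else:
    print(f"Export OK: {row_count} rows, all required fields present.")
```

    Running a check like this once a quarter turns the "escape hatch" from a contractual promise into a verified capability.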
    21 mins