From Tokens to Vectors: The Efficiency Hack That Could Save AI (Ep. 294)
LLMs generate text painfully slowly, one low-information token at a time. Researchers just figured out how to compress 4 tokens into smart continuous vectors and cut costs by 44%—with full code and proofs! Meanwhile, OpenAI drops product ads, not papers. We explore CALM and why open science matters. 🔥📊
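To make the core idea concrete: instead of predicting one token per autoregressive step, a CALM-style model uses an autoencoder to pack each chunk of K tokens into a single continuous vector, so a 16-token sequence needs only 4 generation steps. The sketch below is a toy, NumPy-only illustration of that step reduction; all names, shapes, and the linear encoder/decoder are our own illustrative assumptions, not the paper's architecture.

```python
import numpy as np

# Hedged sketch of the "tokens -> vectors" idea: an autoencoder packs
# each chunk of K=4 token embeddings into one continuous vector, so the
# autoregressive model takes 16/4 = 4 steps instead of 16.
# All dimensions and weights here are illustrative, not from the paper.

rng = np.random.default_rng(0)

K, d_tok, d_vec = 4, 8, 16          # chunk size, token dim, vector dim
seq = rng.normal(size=(16, d_tok))  # 16 token embeddings

# Toy linear "encoder": flatten each K-token chunk into one vector.
W_enc = rng.normal(size=(K * d_tok, d_vec)) / np.sqrt(K * d_tok)
chunks = seq.reshape(-1, K * d_tok)           # (4, 32)
vectors = chunks @ W_enc                      # (4, 16): 4 steps, not 16

# Toy linear "decoder": expand each vector back to K token embeddings.
# This compression is lossy (d_vec < K * d_tok), as in any autoencoder
# bottleneck; the real model trains the codec to keep reconstruction high.
W_dec = np.linalg.pinv(W_enc)
recon = (vectors @ W_dec).reshape(-1, d_tok)  # (16, 8)

steps_token_level = seq.shape[0]
steps_vector_level = vectors.shape[0]
print(steps_token_level, "->", steps_vector_level)  # 16 -> 4
```

The 4x reduction in sequential steps is where the speed and cost savings come from; the hard part the episode discusses is modeling the next *vector* well enough that decoded text stays coherent.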
Sponsors
This episode is brought to you by Statistical Horizons. At Statistical Horizons, you can stay ahead with expert-led livestream seminars that make data analytics and AI methods practical and accessible. Join thousands of researchers and professionals who've advanced their careers with Statistical Horizons. Get $200 off any seminar with code DATA25 at https://statisticalhorizons.com