The Doomer's Error: Why AGI Is An Incoherent Concept
What is the strongest anti-AGI case – the argument that exposes the fallacies behind the belief that AGI is a viable goal, and behind the AI doomerism that belief in AGI's imminent arrival so often spawns? Princeton professor Arvind Narayanan recently made a statement that we feel deserves amplification: for real-world problems, machines face some of the same fundamental limits and challenges that humans face.
Listen to Luba and Eric unpack, explore, and expound. #noAGI