This interview podcast explores the safety testing of AI, focusing on Anthropic's AI chatbot, Claude. The episode opens by discussing common anxieties about AI apocalypses depicted in popular culture, then transitions into an interview with Logan Graham, head of Anthropic's Frontier Red Team, which focuses on AI safety. Graham details Anthropic's safety testing methodology, including evaluations in cybersecurity, biological and chemical weapons, and AI autonomy, organized under a framework called AI Safety Levels (ASL). The podcast highlights Claude's ASL-2 rating, indicating it is considered safe for release, but also covers the ongoing debate about the reliability of self-assessment in AI safety and the need for third-party verification and government regulation. The interview concludes with Graham expressing optimism about preventing major AI risks, emphasizing the importance of speed and collaboration in addressing these challenges.