This interview podcast features Bob McGrew, formerly Chief Research Officer at OpenAI, discussing the evolution of AI, particularly large language models (LLMs). The conversation begins with McGrew's background and early work at OpenAI, including projects such as teaching a robot to solve a Rubik's Cube, and contrasts OpenAI's approach to AI development with that of Google Brain and DeepMind. The discussion then shifts to scaling laws, the challenges posed by data limitations, and the emergence of reasoning models as a key advancement. McGrew emphasizes the importance of scaling and the potential of reasoning models to unlock more reliable and capable AI agents, suggesting a future in which AI acts as a personal assistant and even aids in scientific discovery. A key takeaway is McGrew's prediction of a "ChatGPT moment" for robotics within the next five years, driven by the application of foundation models to physical robots.