In this podcast, Noam Brown, a research scientist at OpenAI, explores the future of Large Language Models (LLMs). He argues that while scaling pre-training is becoming increasingly expensive, test-time compute remains largely untapped, and that focusing on it, along with algorithmic improvements, could bring Artificial General Intelligence (AGI) sooner than many expect. His work on o1, a model that leverages increased test-time compute, illustrates this potential: it exhibits emergent reasoning abilities not seen in earlier models like GPT-4. Brown also addresses the changing role of academia in AI research and discusses promising applications of LLMs across fields ranging from the social sciences to scientific research.