A grand challenge in reinforcement learning is producing intelligent exploration, especially when rewards are sparse or deceptive. I will present Go-Explore, a new algorithm for such ‘hard-exploration problems.’ Go-Explore dramatically improves the state of the art on benchmark hard-exploration problems, solving problems that were previously unsolvable. I will explain the algorithm and the new research directions it opens up. I will also explain why we believe it will enable progress on hard-exploration problems in a variety of domains, especially the many that can harness a simulator during training (e.g. robotics). More information: https://eng.uber.com/go-explore
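To make the idea concrete, below is a minimal sketch of Go-Explore's first ("explore") phase on a toy grid world, based on the public description at the URL above: keep an archive of visited cells, repeatedly return to an archived cell by restoring simulator state, then explore randomly from it. All names here (go_explore, GRID, GOAL) are illustrative, not from any released codebase, and the "cell" is simply the exact state rather than a learned or downsampled representation.

```python
import random

GRID = 10          # toy 10x10 grid; the sparse-reward goal is the far corner
GOAL = (GRID - 1, GRID - 1)
ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]

def go_explore(steps_per_visit=20, iterations=3000, seed=0):
    rng = random.Random(seed)
    start = (0, 0)
    # Archive maps a "cell" to the action sequence that first reached it.
    archive = {start: []}
    for _ in range(iterations):
        # Go: pick an archived cell and restore it exactly -- Go-Explore
        # exploits a resettable simulator, so no return policy is learned here.
        cell = rng.choice(list(archive))
        trajectory = list(archive[cell])
        x, y = cell
        # Explore: take random actions from that cell.
        for _ in range(steps_per_visit):
            dx, dy = rng.choice(ACTIONS)
            x = min(max(x + dx, 0), GRID - 1)
            y = min(max(y + dy, 0), GRID - 1)
            trajectory = trajectory + [(dx, dy)]
            # Archive any newly discovered cell with its trajectory.
            if (x, y) not in archive:
                archive[(x, y)] = list(trajectory)
    return archive

archive = go_explore()
print(GOAL in archive)
```

In the full algorithm, the trajectories found this way are then made robust to stochasticity in a second ("robustify") phase via imitation learning; this sketch covers only the archive-driven exploration loop that gives the method its name.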
Jeff Clune is the Research Manager / Team Lead at OpenAI. He focuses on robotics, reinforcement learning, and training neural networks via either deep learning or evolutionary algorithms. He has also researched open questions in evolutionary biology using computational models of evolution, including the evolutionary origins of modularity, hierarchy, and evolvability. Previously, Jeff was a Senior Research Manager and founding member of Uber AI Labs, as well as the Loy and Edith Harris Associate Professor in Computer Science at the University of Wyoming. Prior to becoming a professor, he was a Research Scientist at Cornell University, received a PhD in computer science and an MA in philosophy from Michigan State University, and received a BA in philosophy from the University of Michigan.