This video is part of the Deep Learning Summit, San Francisco, 2019.

Latent Structure in Deep Robotic Learning

Traditionally, deep reinforcement learning has focused on learning each skill in isolation and from scratch. This often leads to repeated effort in learning the right representation for each skill individually, even though such a representation could likely be shared across skills. In contrast, there is evidence that humans efficiently reuse previously learned skills to acquire new ones, e.g. by sequencing or interpolating between them.
In this talk, I will demonstrate how one can discover latent structure when learning multiple skills concurrently. In particular, I will present a first step towards learning robot skill embeddings that enable reusing previously acquired skills, and I will show how these ideas can be applied to multi-task reinforcement learning, sim-to-real transfer, and imitation learning. A minimal sketch of the embedding idea follows below.
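To make the skill-embedding idea concrete, here is a minimal sketch of a skill-conditioned policy pi(a | s, z), where a single network is shared across skills and each task is tied to a learnable distribution over latent skill vectors z. This is an illustration under assumptions, not the speaker's exact method: the names (SkillConditionedPolicy, task_mu, task_log_std) are hypothetical, it assumes PyTorch, and it omits the training objective that work in this area typically adds (e.g. a variational term that keeps embeddings identifiable).

```
import torch
import torch.nn as nn

class SkillConditionedPolicy(nn.Module):
    """Policy pi(a | s, z): actions conditioned on the state s and a
    latent skill embedding z, so multiple skills share one network.
    (Hypothetical illustration, not the method presented in the talk.)"""
    def __init__(self, state_dim, skill_dim, action_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + skill_dim, hidden),
            nn.Tanh(),
            nn.Linear(hidden, action_dim),
        )

    def forward(self, state, z):
        # Concatenate state and skill embedding before the shared trunk.
        return self.net(torch.cat([state, z], dim=-1))

# Each task id is mapped to a learnable Gaussian over skill embeddings,
# so nearby embeddings can correspond to related or interpolated skills.
num_tasks, skill_dim, state_dim, action_dim = 4, 8, 10, 3
task_mu = nn.Parameter(torch.zeros(num_tasks, skill_dim))
task_log_std = nn.Parameter(torch.zeros(num_tasks, skill_dim))

policy = SkillConditionedPolicy(state_dim, skill_dim, action_dim)

# Sample an embedding for task 2 (reparameterized) and act on a state.
t = 2
z = task_mu[t] + task_log_std[t].exp() * torch.randn(skill_dim)
state = torch.randn(state_dim)
action = policy(state, z)
print(action)
```

Because the latent z, rather than a one-hot task id, drives the policy, previously acquired skills can in principle be reused by sampling or interpolating embeddings, which is the reuse behavior the abstract alludes to.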

Karol Hausman, Research Scientist & PhD Student at Google Brain & University of Southern California

Karol is a Research Scientist at Google Brain in Mountain View working on robotics and machine learning. He recently finished his PhD at the University of Southern California in the Robotics Embedded Systems Lab (RESL) under the supervision of Prof. Gaurav Sukhatme. While at USC, he also collaborated closely with Stefan Schaal's group. During his time at RESL, he completed a number of internships: at Bosch LLC (2013 and 2014), working on active articulation model estimation; at NASA JPL (2015), working on multi-sensor fusion; and at Qualcomm Research (2016), working on active mapping and planning under uncertainty. In summer 2017, Karol joined Google DeepMind for another internship.

Karol's research interests lie in active state estimation, control generation, and machine learning for robotics; he works mainly on interactive perception, reinforcement learning, and probabilistic state estimation, though his interests span robotics and machine learning broadly. In interactive perception, robots use their manipulation capabilities to gain the most useful perceptual information to model the world and inform intelligent decision making. The paradigm of generating motion to improve state estimation (interactive perception) and task execution (reinforcement learning) runs throughout his work, which shows that coupling perception and control can benefit both fields. More recently, Karol has been investigating deep reinforcement learning and its applications in robotics. He has evaluated his work on many different platforms, including quadrotors, humanoid robots, and robotic arms.
