This video is part of the Deep Learning for Robotics Summit, San Francisco, 2018.

Learning Reusable Robot Skill Embeddings

Karol presents a first step towards learning robot skill embeddings that enable the reuse of previously acquired skills. His results indicate that the method can interpolate and sequence previously learned skills to accomplish more complex tasks, even in the presence of sparse rewards.

Karol Hausman, Research Scientist & PhD Student at Google Brain & University of Southern California

Karol is a Research Scientist at Google Brain in Mountain View, working on robotics and machine learning. He recently finished his PhD at the University of Southern California in the Robotics Embedded Systems Lab (RESL) under the supervision of Prof. Gaurav Sukhatme. While at USC, he also collaborated closely with Stefan Schaal's group. Karol works mainly on interactive perception, reinforcement learning, and probabilistic state estimation, though his interests span the broader fields of robotics and machine learning.

While at RESL, he completed a number of internships: at Bosch LLC (2013 and 2014), working on active articulation model estimation; at NASA JPL (2015), working on multi-sensor fusion; and at Qualcomm Research (2016), working on active mapping and planning under uncertainty. In summer 2017, Karol joined Google DeepMind for another exciting internship.

Karol's research interests lie in active state estimation, control generation, and machine learning for robotics. He investigates interactive perception, in which robots use their manipulation capabilities to gain the most useful perceptual information to model the world and inform intelligent decision making. The paradigm of generating motion to improve state estimation (interactive perception) and task execution (reinforcement learning) runs throughout his work, in which he shows that coupling perception and control can benefit both fields. More recently, Karol has been investigating deep reinforcement learning and its applications in robotics. He has evaluated his work on many different platforms, including quadrotors, humanoid robots, and robotic arms.