Interviewed by Tony Peng, Synced:
• Give me an overview of your work at Google AI & University of Southern California
• How did you begin your work in AI, and more specifically in robotics? What came first?
• What motivates you to keep working in this space?
• How do you teach robots to learn from scratch?
• What challenges are you currently facing in your work, and how are you using AI to overcome these?
• You mentioned that you’re currently working on learning robot skill embeddings that can reuse previously acquired skills - can you expand on this? What sort of results have you seen?
• What are some of the real world applications of your work, and how are you using AI for Good?
• How do you think we can ensure AI in robotics is used for a positive impact?
• AI is being applied in countless industries - what areas are you most excited to see transformed, and where do you think we’ll see the biggest impact?
• What’s next for you?
• Where can we find you? Do you have Twitter, or should we keep our eye out for any new work or publications?
Karol is a Research Scientist at Google Brain in Mountain View, working on robotics and machine learning. He recently finished his PhD at the University of Southern California in the Robotics Embedded Systems Lab (RESL) under the supervision of Prof. Gaurav Sukhatme. While at USC, he also collaborated closely with Stefan Schaal's group. Karol works mainly on interactive perception, reinforcement learning, and probabilistic state estimation, though his interests span the broader fields of robotics and machine learning. During his time at RESL, he completed a number of internships: at Bosch LLC (2013 and 2014), working on active articulation model estimation; at NASA JPL (2015), working on multi-sensor fusion; and at Qualcomm Research (2016), working on active mapping and planning under uncertainty. In summer 2017, Karol joined Google DeepMind for another exciting internship.

Karol's research interests lie in active state estimation, control generation, and machine learning for robotics. He investigates interactive perception, in which robots use their manipulation capabilities to gain the most useful perceptual information to model the world and inform intelligent decision making. The paradigm of generating motion to improve state estimation (interactive perception) and task execution (reinforcement learning) runs throughout his work, which shows that coupling perception and control can benefit both fields. More recently, Karol has been investigating deep reinforcement learning and its applications in robotics. He has evaluated his work on many different platforms, including quadrotors, humanoid robots, and robotic arms.