This video is part of the Machine Intelligence Summit, Berlin, 2016.

Learning Semantic Environment Perception for Cognitive Robots

Robots need to perceive their environment to act in a goal-directed way. While mapping the environment geometry is a necessary prerequisite for many mobile robot applications, understanding the semantics of the environment will enable novel applications that require cognitive abilities. In this talk, I will report on methods that we developed for learning tasks such as the categorization of surfaces; the detection, recognition, and pose estimation of objects; and the transfer of manipulation skills to novel objects. By combining geometric modelling – based on the registration of measurements and graph optimization – with semantic categorization – based on Random Forests, Deep Learning, and Transfer Learning – 3D semantic maps of the environment are built. We demonstrated the utility of semantic environment perception with cognitive robots in multiple challenging application domains, including domestic service, space exploration, and bin picking.
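As a rough illustration of the semantic-categorization side of this pipeline, the sketch below trains a Random Forest to label surface patches. The feature set (patch height, normal orientation, roughness) and the synthetic data are invented for illustration only and are not the features used in the speaker's actual system.

```python
# Hypothetical sketch: categorizing surface patches with a Random Forest.
# The features and data below are illustrative assumptions, not the
# speaker's actual pipeline.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 200

# Synthetic per-patch features: (mean height, normal z-component, roughness)
floor = np.column_stack([rng.normal(0.0, 0.05, n),    # low height
                         rng.normal(0.95, 0.02, n),   # upward-facing normal
                         rng.normal(0.01, 0.005, n)]) # smooth
wall = np.column_stack([rng.normal(1.0, 0.30, n),     # mid-height
                        rng.normal(0.05, 0.02, n),    # horizontal normal
                        rng.normal(0.01, 0.005, n)])

X = np.vstack([floor, wall])
y = np.array([0] * n + [1] * n)  # 0 = floor, 1 = wall

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# Classify a new patch: low, with an upward-facing normal.
print(clf.predict([[0.02, 0.96, 0.01]])[0])  # 0 (floor)
```

In a full system, per-patch labels like these would be fused into the registered 3D map to produce the semantic maps described above.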

Sven Behnke, Head of the Computer Science Department at the University of Bonn

Prof. Dr. Sven Behnke is a full professor of Computer Science at the University of Bonn, Germany, where he heads the Autonomous Intelligent Systems group. He has been investigating deep learning since 1997. In 1998, he proposed the Neural Abstraction Pyramid, a hierarchical recurrent convolutional neural network architecture for image interpretation. He developed unsupervised methods for layer-by-layer learning of increasingly abstract image representations. The architecture was also trained in a supervised way to iteratively solve computer vision tasks such as super-resolution, image denoising, and face localization. In recent years, his deep learning research has focused on learning object-class segmentation of images and semantic RGB-D perception.