This video is part of the Deep Learning Summit, San Francisco 2017.

Variational Autoencoders

Current deep learning mostly relies on supervised learning: given a vast number of examples of humans performing tasks such as labeling images or translating text, we can teach computers to mimic humans at these tasks. Because supervised methods model only the task directly, however, they are not particularly data-efficient: they require many more examples than humans need to learn new tasks. Enter unsupervised learning, where computers model not only tasks but also their context, vastly improving data efficiency. We discuss the powerful framework of Variational Autoencoders (VAEs), a synthesis of deep learning and Bayesian methods, as a principled yet practical approach to unsupervised deep learning. In addition to the underlying mathematics, we discuss current scientific and practical applications of VAEs, such as semi-supervised learning, drug discovery, and image resynthesis.
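To make the idea concrete, below is a minimal sketch (not from the talk itself) of the VAE recipe in PyTorch: an encoder network outputs the parameters of an approximate posterior q(z|x), a latent code z is drawn via the reparameterization trick so gradients flow through the sampling step, and a decoder reconstructs x; training minimizes the negative evidence lower bound (ELBO). All layer names, dimensions, and hyperparameters here are illustrative assumptions.

```python
# Minimal VAE sketch (illustrative assumptions throughout): Gaussian
# encoder q(z|x), Bernoulli decoder p(x|z), trained by maximizing the
# evidence lower bound (ELBO) via the reparameterization trick.
import torch
import torch.nn as nn
import torch.nn.functional as F

class VAE(nn.Module):
    def __init__(self, x_dim=784, h_dim=400, z_dim=20):  # dims are assumptions
        super().__init__()
        self.enc = nn.Linear(x_dim, h_dim)
        self.enc_mu = nn.Linear(h_dim, z_dim)      # mean of q(z|x)
        self.enc_logvar = nn.Linear(h_dim, z_dim)  # log-variance of q(z|x)
        self.dec = nn.Linear(z_dim, h_dim)
        self.dec_out = nn.Linear(h_dim, x_dim)

    def forward(self, x):
        h = F.relu(self.enc(x))
        mu, logvar = self.enc_mu(h), self.enc_logvar(h)
        # Reparameterization trick: z = mu + sigma * eps with eps ~ N(0, I),
        # which keeps the sampling step differentiable w.r.t. mu and logvar.
        eps = torch.randn_like(mu)
        z = mu + torch.exp(0.5 * logvar) * eps
        x_logits = self.dec_out(F.relu(self.dec(z)))
        return x_logits, mu, logvar

def negative_elbo(x, x_logits, mu, logvar):
    # Reconstruction term: -log p(x|z) for a Bernoulli decoder.
    recon = F.binary_cross_entropy_with_logits(x_logits, x, reduction='sum')
    # KL(q(z|x) || N(0, I)), in closed form for diagonal Gaussians.
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl  # minimizing this maximizes the ELBO

# Usage: one gradient step on a batch of flattened, binarized images.
model = VAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.rand(32, 784).round()  # stand-in data; an assumption
x_logits, mu, logvar = model(x)
loss = negative_elbo(x, x_logits, mu, logvar)
opt.zero_grad()
loss.backward()
opt.step()
```

The split of the loss into a reconstruction term and a KL term is what makes the framework "principled yet practical": the Bayesian objective is exact up to the variational approximation, yet everything trains with ordinary stochastic gradient methods.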

Durk Kingma, Research Scientist at OpenAI

Diederik (or Durk) Kingma is a Research Scientist at OpenAI, with a focus on unsupervised deep learning. His research career started in 2009, while he was a student at Utrecht University, working with Prof. Yann LeCun at NYU. Since 2013, he has been pursuing a PhD with Prof. Max Welling in Amsterdam, focusing on the intersection of deep learning and Bayesian inference. Early in his PhD, he proposed the Variational Autoencoder (VAE), a principled framework for Bayesian unsupervised deep learning. His other well-known work includes Adam, a now-standard method for stochastic gradient-based optimization.