Deep learning has been successful at many AI tasks, largely thanks to the availability of large quantities of labeled data. Yet humans are able to learn concepts from as few as a handful of examples. I’ll describe meta-learning, a framework that has recently been used successfully to address the problem of generalizing from small amounts of data. In meta-learning, we develop a learning algorithm that itself produces and trains a learning algorithm for some target class of problems. I’ll review some examples of the successful use of meta-learning to produce good few-shot classification algorithms.
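To make the few-shot setting concrete, here is a minimal sketch (an assumption of mine, not code from the talk) of what a single few-shot classification "episode" looks like: a small labeled support set, a set of query points to classify, and a simple nearest-class-mean classifier of the kind that meta-learning methods such as prototypical networks build on. In a full meta-learning setup, the feature embedding would itself be learned across many such episodes; here the inputs are used as raw features, and all names are illustrative.

```python
import numpy as np

def nearest_centroid_few_shot(support_x, support_y, query_x):
    """Classify query points by distance to per-class means of a small support set.

    This is the inner-loop classifier of one few-shot episode; meta-learning
    would train the embedding that produces support_x / query_x.
    """
    classes = np.unique(support_y)
    # Per-class "prototype": the mean of that class's support examples.
    prototypes = np.stack([support_x[support_y == c].mean(axis=0) for c in classes])
    # Euclidean distance from each query point to each prototype.
    dists = np.linalg.norm(query_x[:, None, :] - prototypes[None, :, :], axis=-1)
    # Predict the class of the nearest prototype.
    return classes[dists.argmin(axis=1)]

# A toy 2-way, 2-shot episode: two well-separated clusters in 2-D.
support_x = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
support_y = np.array([0, 0, 1, 1])
query_x = np.array([[0.05, 0.1], [4.9, 5.2]])
print(nearest_centroid_few_shot(support_x, support_y, query_x))  # → [0 1]
```

With only two labeled examples per class, the classifier still generalizes because all the hard work has been pushed into choosing a good representation — which is exactly what the meta-learner is trained to provide.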
Hugo Larochelle is a Research Scientist at Google and an Assistant Professor at the Université de Sherbrooke (UdeS). Previously, he worked at Twitter, and he spent two years as a postdoctoral fellow in the machine learning group at the University of Toronto, under the supervision of Geoffrey Hinton. He obtained his Ph.D. at the Université de Montréal, under the supervision of Yoshua Bengio. He is the recipient of two Google Faculty Awards. His professional involvement includes serving as an associate editor for the IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), a member of the editorial board of the Journal of Artificial Intelligence Research (JAIR), and a program chair for the International Conference on Learning Representations (ICLR) in 2015 and 2016.