This video is part of the Deep Learning Summit, Montreal 2019 event.

Generalizing the Lottery Ticket Hypothesis Across Datasets and Optimizers and Beyond Supervised Image Classification - Ari Morcos

The success of lottery ticket initializations (Frankle and Carbin, 2019) suggests that small, sparsified networks can be trained so long as the network is initialized appropriately. This phenomenon is intriguing and suggests that initialization strategies for DNNs can be improved substantially, but a number of open questions remain. Do winning tickets contain generic inductive biases for training or are they just overfitted to a particular problem? Is the lottery ticket phenomenon simply an artifact of image classification or is it present in other domains as well? In this talk, I will discuss recent work to address both of these critical questions.
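To make the lottery ticket procedure concrete, the sketch below shows one round of train-prune-rewind as described by Frankle and Carbin (2019): train a dense network, prune the smallest-magnitude weights, reset the surviving weights to their original initialization, and retrain the resulting sparse "winning ticket". This is a minimal illustration assuming PyTorch; the model, toy data, sparsity level, and training loop are placeholders, not the setup discussed in the talk.

```python
import copy
import torch
import torch.nn as nn

def magnitude_masks(model, sparsity):
    """Keep the largest-magnitude fraction (1 - sparsity) of each weight tensor."""
    masks = {}
    for name, param in model.named_parameters():
        if param.dim() > 1:  # prune weight matrices, leave biases dense
            keep = int(param.numel() * (1 - sparsity))
            threshold = param.abs().flatten().kthvalue(param.numel() - keep).values
            masks[name] = (param.abs() > threshold).float()
    return masks

def apply_masks(model, masks):
    """Zero out pruned connections in place."""
    with torch.no_grad():
        for name, param in model.named_parameters():
            if name in masks:
                param.mul_(masks[name])

def train(model, data, masks=None, epochs=1, lr=0.1):
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in data:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
            if masks is not None:  # keep pruned weights at zero during retraining
                apply_masks(model, masks)
    return model

# One round of the lottery ticket experiment on toy data (illustrative only).
model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
init_state = copy.deepcopy(model.state_dict())   # remember the initialization
data = [(torch.randn(32, 20), torch.randint(0, 2, (32,))) for _ in range(10)]

train(model, data)                               # 1. train the dense network
masks = magnitude_masks(model, sparsity=0.8)     # 2. prune smallest-magnitude weights
model.load_state_dict(init_state)                # 3. rewind survivors to their init values
apply_masks(model, masks)                        #    and zero the pruned connections
train(model, data, masks=masks)                  # 4. retrain the sparse "winning ticket"
```

In practice this is often done iteratively, pruning a small fraction per round, but a single round is enough to show where the initialization-dependence the talk examines comes from.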

Ari Morcos, Research Scientist at Facebook AI Research

Ari Morcos is a Research Scientist at Facebook AI Research working on understanding the mechanisms underlying neural network computation and function, and on using these insights to build machine learning systems more intelligently. In particular, Ari has worked on understanding the properties predictive of generalization, methods to compare representations across networks, the role of single units in computation, and strategies to measure abstraction in neural network representations. Previously, he worked at DeepMind in London, and earned his PhD in Neurobiology at Harvard University, using machine learning to study the cortical dynamics underlying evidence accumulation for decision-making.
