The Myth of the Interpretable, Robust, Compact and High Performance Deep Neural Network

Most progress in machine learning has been measured by gains in test-set accuracy on tasks like image recognition. However, test-set accuracy appears to be poorly correlated with other design objectives, such as interpretability, robustness to adversarial attacks, or the compactness needed to deploy networks in resource-constrained environments. This talk asks whether it is possible to have it all and, more importantly, how we should measure progress when we want to train models that fulfill multiple criteria.

Sara Hooker, Artificial Intelligence Resident at Google Brain

Sara Hooker is an Artificial Intelligence Resident at Google Brain, where she does deep learning research on model compression and on reliable explanations of predictions from black-box models. Her main research interests gravitate towards interpretability, model compression, and security. In 2014, she founded Delta Analytics, a non-profit dedicated to building technical capacity that helps non-profits around the world use machine learning for good. She spent her childhood in Africa, growing up in South Africa, Swaziland, Mozambique, Lesotho, and Kenya. Her family now lives in Monrovia, Liberia.
