Deep neural networks have made a spectacular leap from research to the applied domain. The use of convolutional neural networks has enabled large gains in test-set accuracy on image classification tasks. However, there is consensus that test-set accuracy is an inadequate measure of other properties we may care about, such as adversarial robustness, fairness, model compression, or interpretability. In this talk, we will consider some of the challenges associated with measuring progress on training models that satisfy multiple desirable properties.
Sara Hooker is an Artificial Intelligence Resident at Google Brain, where she does deep learning research on model compression and reliable explanations of predictions from black-box models. Her main research interests gravitate towards interpretability, model compression, and security. In 2014, she founded Delta Analytics, a non-profit dedicated to building technical capacity so that non-profits around the world can use machine learning for good. She spent her childhood in Africa, growing up in South Africa, Swaziland, Mozambique, Lesotho, and Kenya. Her family now lives in Monrovia, Liberia.