This video is part of the Responsible AI Summit, Montreal 2019 Event.

Adversarial Machine Learning: Ensuring Security of ML Models and Sensitive Data - Christopher Choquette-Choo

As machine learning (ML) has seen dramatic growth in industrial applications, questions about what trust and security mean in the context of ML have grown with it. I will give an overview of adversarial ML as a research area and explore some of the attack and defense strategies developed in the recent literature. In particular, I will showcase use cases and implementations of differential privacy and how it can protect the sensitive data used to train ML models.
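As background for the differential privacy material the abstract mentions (this sketch is not taken from the talk itself), the core primitive is noise calibrated to a query's sensitivity. A minimal illustration is the Laplace mechanism, shown here releasing a counting query over a toy dataset; the function name and data are illustrative assumptions:

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng):
    """Release a query answer with epsilon-differential privacy.

    Noise is drawn from Laplace(0, sensitivity / epsilon), the classic
    calibration for a query whose output changes by at most `sensitivity`
    when one individual's record is added or removed.
    """
    scale = sensitivity / epsilon
    return true_value + rng.laplace(loc=0.0, scale=scale)

# Toy "sensitive" dataset: ages of six individuals (illustrative only).
rng = np.random.default_rng(0)
ages = np.array([34, 29, 41, 52, 38, 27])

# Counting queries have sensitivity 1: one person changes the count by at most 1.
true_count = int(np.sum(ages > 30))
noisy_count = laplace_mechanism(true_count, sensitivity=1.0, epsilon=0.5, rng=rng)
print(f"true count: {true_count}, private release: {noisy_count:.2f}")
```

Smaller epsilon means more noise and stronger privacy; for training ML models, the analogous idea (as in DP-SGD) is to clip per-example gradients to bound sensitivity and add noise to the summed gradient.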

Christopher Choquette-Choo, Machine Learning Researcher at Vector Institute

Christopher is a researcher in the CleverHans Lab at the Vector Institute exploring adversarial ML, in particular membership inference attacks, differential privacy, and adversarial examples. He is also a researcher in the Aspuru-Guzik lab at the Vector Institute exploring applications of Bayesian models and active learning to molecular discovery. Christopher has worked at Georgian Partners LP, where he developed open-source solutions for differential privacy and AutoML, and at Intel, where he researched and developed a deep-neural-network bug triager.
