Security and Privacy in Machine Learning

There is growing recognition that machine learning exposes new security and privacy issues in software systems. In this talk, we first map the attack surface of systems that deploy machine learning. We then describe how an attacker may force models to make wrong predictions with very little information about the victim, and demonstrate that these attacks are practical against existing machine-learning-as-a-service platforms. Finally, we discuss a framework for learning privately. The approach combines, in a black-box fashion, multiple models trained on disjoint datasets, such as records from different subsets of users.
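The abstract does not give the aggregation mechanism, but one common way to combine models trained on disjoint data in a black-box, privacy-preserving fashion is a noisy majority vote over their predictions. The sketch below is a hypothetical illustration of that idea, not the specific framework presented in the talk; the function name and parameters are assumptions.

```python
import numpy as np

def noisy_vote(teacher_preds, num_classes, noise_scale=1.0, rng=None):
    """Aggregate per-model label predictions with a noisy majority vote.

    teacher_preds: integer labels, one per model, where each model was
    trained on a disjoint subset of the sensitive data. Adding Laplace
    noise to the vote counts limits how much any single model (and hence
    any single training record) can sway the released label.
    This is an illustrative sketch, not the talk's exact mechanism.
    """
    rng = rng or np.random.default_rng(0)
    counts = np.bincount(teacher_preds, minlength=num_classes).astype(float)
    counts += rng.laplace(0.0, noise_scale, size=num_classes)
    return int(np.argmax(counts))

# Example: five models trained on disjoint shards vote on one input.
votes = [1, 1, 2, 1, 0]
label = noisy_vote(votes, num_classes=3)
```

Only the noisy label leaves the aggregator, so a consumer of the prediction interacts with the ensemble purely as a black box.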

Nicolas Papernot, Google PhD Fellow in Security at Penn State University

Nicolas Papernot is a PhD student in Computer Science and Engineering working with Dr. Patrick McDaniel at the Pennsylvania State University. His research interests lie at the intersection of computer security, privacy, and machine learning. He is supported by a Google PhD Fellowship in Security. In 2016, he received his M.S. in Computer Science and Engineering from the Pennsylvania State University and his M.S. in Engineering Sciences from the École Centrale de Lyon.
