This video is part of the Deep Learning Summit - Track 1, Montreal, 2017 Event.

Unsupervised Domain Adaptation with Adversarial Network

This talk presents our recent work on using adversarial learning to improve recognition in the presence of domain shift or bias between the source (training) and target (testing) domains. Our new approach factors the learned feature space of both the source and target domains into a discriminative space and a reconstructive space, where the discriminative space captures class-specific information and the reconstructive space captures domain-specific information. A GAN loss is used to minimize the domain shift in the discriminative space between the two domains for better generalization performance. Our preliminary results show the promise of this approach, achieving new state-of-the-art results on standard cross-domain digit classification tasks.
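The sketch below (PyTorch) illustrates the general recipe described in the abstract: split the encoded features into a discriminative part and a reconstructive part, classify source images from the discriminative part, reconstruct images from both parts, and use a GAN-style loss to align the discriminative features across domains. This is a minimal illustration, not the authors' implementation; the architecture details, the names (`FeatureSplitter`, `Decoder`, `domain_disc`), and the toy 32x32 inputs are all assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FeatureSplitter(nn.Module):
    """Encodes an image and splits the code into a discriminative part
    (class-specific) and a reconstructive part (domain-specific)."""
    def __init__(self, disc_dim=64, rec_dim=64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 8 * 8, disc_dim + rec_dim),
        )
        self.disc_dim = disc_dim

    def forward(self, x):
        z = self.conv(x)
        return z[:, :self.disc_dim], z[:, self.disc_dim:]  # (discriminative, reconstructive)

class Decoder(nn.Module):
    """Reconstructs the image from the concatenated feature parts."""
    def __init__(self, disc_dim=64, rec_dim=64):
        super().__init__()
        self.fc = nn.Linear(disc_dim + rec_dim, 64 * 8 * 8)
        self.deconv = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, z_disc, z_rec):
        h = self.fc(torch.cat([z_disc, z_rec], dim=1)).view(-1, 64, 8, 8)
        return self.deconv(h)

encoder, decoder = FeatureSplitter(), Decoder()
classifier = nn.Linear(64, 10)           # digit classifier on the discriminative part
domain_disc = nn.Sequential(             # adversary: source vs. target on the discriminative part
    nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 1))

def training_step(x_src, y_src, x_tgt):
    """One simplified step: classification + reconstruction + adversarial alignment."""
    zd_s, zr_s = encoder(x_src)
    zd_t, zr_t = encoder(x_tgt)

    # 1) Supervised classification on labeled source data (discriminative part only).
    cls_loss = F.cross_entropy(classifier(zd_s), y_src)

    # 2) Reconstruction on both domains, so domain-specific info stays in the reconstructive part.
    rec_loss = F.mse_loss(decoder(zd_s, zr_s), x_src) + F.mse_loss(decoder(zd_t, zr_t), x_tgt)

    # 3) GAN-style alignment: the domain discriminator is trained on detached features,
    #    while the encoder is trained to make target features look like source features.
    d_loss = F.binary_cross_entropy_with_logits(
                 domain_disc(zd_s.detach()), torch.ones(zd_s.size(0), 1)) + \
             F.binary_cross_entropy_with_logits(
                 domain_disc(zd_t.detach()), torch.zeros(zd_t.size(0), 1))
    g_loss = F.binary_cross_entropy_with_logits(
                 domain_disc(zd_t), torch.ones(zd_t.size(0), 1))

    # The first loss updates encoder/decoder/classifier; the second updates the discriminator.
    return cls_loss + rec_loss + g_loss, d_loss

# Smoke test with random 32x32 "digit" images in [0, 1].
x_s, y_s, x_t = torch.rand(8, 3, 32, 32), torch.randint(0, 10, (8,)), torch.rand(8, 3, 32, 32)
enc_loss, disc_loss = training_step(x_s, y_s, x_t)
```

In practice the two returned losses would be minimized with separate optimizers (one for the encoder/decoder/classifier, one for the domain discriminator), alternating updates as in standard GAN training; loss weights and architectures would depend on the actual cross-domain digit benchmark used.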

Jianchao Yang, Lead Research Scientist at Snap Inc

Jianchao Yang is currently a Lead Research Scientist at Snap Inc. Before joining Snap, he was a Research Scientist at Adobe Research. He received both his M.S. and Ph.D. degrees from the ECE Department of the University of Illinois at Urbana-Champaign, under the supervision of Prof. Thomas Huang. His research focuses on computer vision, deep learning, and image and video processing. He has published more than 80 technical papers in top-tier conferences and journals, with more than 13,000 Google Scholar citations. He received the Best Student Paper award at ICCV 2010, won the classification task in PASCAL VOC 2009, took first place in object localization using external data at ILSVRC (ImageNet) 2014, and placed third in the WebVision Challenge 2017. He serves as a workshop chair for ACM MM 2017.