Visualizing and Understanding Generative Adversarial Networks

The remarkable success of Generative Adversarial Networks in generating nearly photorealistic images leads to the question: how do they work? Are GANs just memorization machines, or do they learn semantic structure? I introduce the method of Network Dissection to test the semantics captured by neurons in the middle layers of a network, and show how recent state-of-the-art GANs learn a remarkable amount of structure. Even without any labels in the training data, neurons in a GAN trained to draw scenes will separately code for meaningful objects such as trees and furniture. The causal effects of such neurons are strong enough that we can add and remove objects, and even paint pictures directly, by manipulating the neurons of a GAN. These methods provide insights into a GAN's errors as well as the contextual relationships it learns. By cracking open the black box, we can see how deep networks learn meaningful structure, and we can gain understandable insights into a network’s inner workings.
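The core idea behind Network Dissection is simple to sketch: threshold each unit's activation map, measure its intersection-over-union (IoU) with a segmentation mask for a visual concept, and then test the unit's causal role by ablating it. The snippet below is a minimal toy illustration of that scoring-and-intervention loop using synthetic NumPy featuremaps; the unit index, map sizes, quantile threshold, and planted "tree" mask are all hypothetical stand-ins (the actual method runs a segmentation network over generated images and intervenes inside a real GAN's layers).

```python
import numpy as np

rng = np.random.default_rng(0)

def iou(binary_activation, segmentation_mask):
    """Intersection-over-union between a thresholded unit
    activation map and a binary concept mask."""
    inter = np.logical_and(binary_activation, segmentation_mask).sum()
    union = np.logical_or(binary_activation, segmentation_mask).sum()
    return inter / union if union else 0.0

# Toy stand-in for one GAN layer's featuremaps (units x H x W)
# and a binary "tree" segmentation mask for the generated image.
units, H, W = 8, 16, 16
featuremaps = rng.random((units, H, W))
tree_mask = np.zeros((H, W), dtype=bool)
tree_mask[:, :8] = True

# Plant a correlation so one unit behaves like a "tree unit":
# unit 3 fires strongly inside the tree region.
featuremaps[3][tree_mask] += 2.0

# Dissection step: threshold each unit's activation at a high
# quantile and score its agreement with the concept mask.
scores = []
for u in range(units):
    thresh = np.quantile(featuremaps[u], 0.75)
    scores.append(iou(featuremaps[u] > thresh, tree_mask))
best_unit = int(np.argmax(scores))  # the unit best matching "tree"

# Intervention step: ablate (zero out) the tree unit and compare
# total activation inside the tree region before and after.
before = featuremaps[:, tree_mask].sum()
featuremaps[best_unit] = 0.0
after = featuremaps[:, tree_mask].sum()
```

In the real method, the same two moves apply inside a trained generator: score units against segmentations of the images the GAN produces, then zero (or boost) those units' featuremaps at chosen spatial locations to remove or add objects in the output.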

David Bau, PhD Student at MIT CSAIL

David Bau is a PhD student at MIT CSAIL, advised by Professor Antonio Torralba. David previously worked at Google and Microsoft, where he contributed to several widely used products, including Google Image Search and Microsoft Internet Explorer. David believes that complex systems should be built to be transparent, and his research focuses on the interpretability of deep networks.
