What Do Your Neural Networks Learn? A Peek Inside the Black Box

Deep neural networks are famously difficult to interpret. We'll take a tour of their inner workings to build an intuition of what's inside the black box and how all those cogs fit together. Then we'll use those insights as we step through an image processing problem with deep learning, showing at every step what the neural network is "thinking".

Brandon Rohrer, Principal Data Scientist at iRobot

Brandon loves solving puzzles and building things. Applied machine learning gives him the opportunity to do both in equal measure. He started by studying robotics and human rehabilitation at MIT (MS '99, PhD '02), moved on to machine vision and complex system modeling at Sandia National Laboratories, then to predictive modeling of agriculture at DuPont Pioneer, and cloud data science at Microsoft. At Facebook he worked to get internet and electrical power to those in the world who don't have it, using deep learning and satellite imagery, and to better identify topics reliably in unstructured text. In his spare time he likes to rock climb, write robot learning algorithms, and go on walks with his wife and their dog, Reign of Terror.
