University of Massachusetts

MLFL Wiki

Insights From Deep Representations

Abstract: To continue the successes of deep learning, it becomes increasingly important to better understand the phenomena exhibited by these models, ideally through a combination of systematic experiments and theory. Central to this challenge is a better understanding of deep representations. In this talk I discuss some of our work addressing questions in this space. I first give an overview of our measures of neural network expressivity and empirically quantify the effect of network depth and width on the latent representations. I then describe adapting Singular Vector Canonical Correlation Analysis (SVCCA) as a tool to directly compare latent representations across layers, training steps, and even different networks. The results reveal differences in per-layer convergence and help identify the parts of a representation critical to the task. Finally, I introduce a new testbed of environments for deep reinforcement learning that lets us study different RL algorithms in single-agent, multi-agent, and self-play settings, and evaluate generalization systematically.
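
For concreteness, the sketch below shows the two-step SVCCA procedure the abstract refers to: reduce each layer's activation matrix with an SVD, then compare the reduced subspaces with CCA. This is an illustrative NumPy sketch, not the exact code from the work; the function name and the 0.99 variance threshold are assumptions chosen for the example.

    import numpy as np

    def svcca(acts_x, acts_y, var_threshold=0.99):
        """Compare two sets of neural activations with SVCCA (illustrative sketch).

        acts_x, acts_y: arrays of shape (num_neurons, num_datapoints),
        e.g. two layers' responses over a common set of inputs.
        Returns the canonical correlation coefficients (values in [0, 1]).
        """
        def reduce(acts):
            # Center each neuron's activations over the datapoints.
            acts = acts - acts.mean(axis=1, keepdims=True)
            # SVD step: keep the top singular directions that together
            # explain var_threshold of the variance.
            u, s, vt = np.linalg.svd(acts, full_matrices=False)
            k = int(np.searchsorted(np.cumsum(s**2) / np.sum(s**2), var_threshold)) + 1
            # Reduced representation: (num_datapoints, k) coordinates.
            return (s[:k, None] * vt[:k]).T

        x_red, y_red = reduce(acts_x), reduce(acts_y)
        # CCA step: with each side orthonormalized by a QR decomposition,
        # the canonical correlations are the singular values of Qx^T Qy.
        qx, _ = np.linalg.qr(x_red)
        qy, _ = np.linalg.qr(y_red)
        return np.linalg.svd(qx.T @ qy, compute_uv=False)

Per-layer convergence of the kind mentioned above could then be tracked by, for example, averaging these coefficients between a layer's activations partway through training and the same layer's activations at the end of training.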

Bio: Maithra Raghu is a PhD student at Cornell, working with Jon Kleinberg, and a research resident at Google Brain. Her research interests are broadly in developing a better understanding of the latent representations learned by deep neural networks, and in using these insights to guide new improvements.
