University of Massachusetts


Exploiting Compositionality To Explore A Large Space Of Model Structures

Abstract: The recent proliferation of highly structured probabilistic models raises the question of how to automatically determine an appropriate model for a dataset. We focus on a space of matrix decomposition models which can express a variety of widely used models from unsupervised learning. To enable model selection, we organize these models into a context-free grammar which generates a wide variety of structures through the compositional application of a few simple rules. We use our grammar to generically and efficiently infer latent components and estimate predictive likelihood for nearly 2500 structures using a small toolbox of reusable algorithms. Using a greedy search over our grammar, we automatically choose the decomposition structure from raw data by evaluating only a tiny fraction of all models. The proposed method typically finds the correct structure for synthetic data and backs off gracefully to simpler models under heavy noise. It learns plausible structures for datasets as diverse as image patches, motion capture, 20 Questions, and U.S. Senate votes, all using exactly the same code.
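The greedy search over a grammar of structures that the abstract describes can be sketched as follows. This is an illustrative toy, not the authors' code: the production rules and the scoring function are placeholders (the paper scores candidate structures by predictive likelihood on held-out data), so only the expand-score-keep-best search loop is faithful to the idea.

```python
# Toy context-free grammar over model structures: each production
# rewrites the nonterminal "G" into a composite structure, loosely
# mimicking the compositional rules described in the abstract.
# These three productions are hypothetical examples.
PRODUCTIONS = {
    "G": ["(M G)", "(G + G)", "(C G)"],
}

def expand(structure):
    """Yield every structure obtained by applying one production
    to the first occurrence of the nonterminal 'G'."""
    i = structure.find("G")
    if i == -1:
        return
    for rhs in PRODUCTIONS["G"]:
        yield structure[:i] + rhs + structure[i + 1:]

def score(structure):
    """Placeholder objective; the real method estimates predictive
    likelihood for each candidate. Here we use an arbitrary toy
    score so the example runs and terminates."""
    return structure.count("M") - structure.count("G")

def greedy_search(start="G", steps=3):
    """Greedily expand the current best structure, stopping when no
    single expansion improves the score (the 'back off to simpler
    models' behavior mentioned in the abstract)."""
    best = start
    for _ in range(steps):
        candidates = list(expand(best))
        if not candidates:
            break
        champion = max(candidates, key=score)
        if score(champion) <= score(best):
            break  # no expansion helps: keep the simpler structure
        best = champion
    return best
```

Because each step evaluates only the one-production expansions of the current best structure, the search scores a tiny fraction of the full space, which is how the method stays tractable over nearly 2500 structures.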

Short Bio: Roger Grosse is a graduate student at MIT studying machine learning and computer vision with Bill Freeman and Josh Tenenbaum. He obtained his BS and MS from Stanford in 2007 and 2008, studying with Andrew Ng. There, he worked on sparse coding and deep learning algorithms for automatically learning representations of image and audio data. His current work focuses on recovering shading and reflectance from single images and on automatically determining the structure of a probabilistic model.

Page last modified on April 25, 2012, at 09:44 AM