A Reinforcement Learning Approach To Variational Inference In Probabilistic Programs

Variational inference is a powerful technique for reasoning about probabilistic models, but it is hard to derive variational equations for highly structured distributions. In this talk, I will present a dynamical systems perspective on variational inference, discuss the temporal structure that exists in generative models, and show how inference in many generative models can be viewed as a temporal credit assignment problem that can be solved using reinforcement learning. This reduction allows RL techniques to be applied directly to inference problems: policy search methods (such as policy gradients) become a direct search through variational parameters; state-space estimation becomes structured variational inference; and temporal-difference methods suggest novel inference algorithms.
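
To make the policy-gradient connection concrete, here is a minimal sketch (not from the talk; the toy target, variational family, and step sizes are my own choices). Treating log p(x) - log q(x) as the per-sample "reward", the REINFORCE-style score-function estimator becomes a direct gradient search through the variational parameters:

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy unnormalized target: Gaussian with mean 3.0, std 0.5 (illustrative only).
    def log_p(x):
        return -0.5 * ((x - 3.0) / 0.5) ** 2

    # Variational family: q(x; mu, log_sigma) = Normal(mu, exp(log_sigma)).
    def log_q(x, mu, log_sigma):
        sigma = np.exp(log_sigma)
        return -0.5 * ((x - mu) / sigma) ** 2 - log_sigma - 0.5 * np.log(2.0 * np.pi)

    # Score function: gradient of log q with respect to (mu, log_sigma).
    # In RL terms this is grad log pi(action) for the "policy" q.
    def grad_log_q(x, mu, log_sigma):
        sigma = np.exp(log_sigma)
        d_mu = (x - mu) / sigma ** 2
        d_log_sigma = ((x - mu) / sigma) ** 2 - 1.0
        return np.stack([d_mu, d_log_sigma], axis=-1)

    mu, log_sigma = 0.0, 0.0
    lr, n_samples = 0.05, 64
    for _ in range(2000):
        x = mu + np.exp(log_sigma) * rng.standard_normal(n_samples)
        # Per-sample "reward": log p(x) - log q(x); its expectation is the ELBO.
        reward = log_p(x) - log_q(x, mu, log_sigma)
        reward = reward - reward.mean()  # baseline subtraction for variance reduction
        grad = (reward[:, None] * grad_log_q(x, mu, log_sigma)).mean(axis=0)
        mu += lr * grad[0]
        log_sigma += lr * grad[1]

    # mu should approach 3.0 and exp(log_sigma) should approach 0.5.
    print(mu, np.exp(log_sigma))

The baseline subtraction is the same variance-reduction trick used in policy-gradient RL; without it, the score-function estimator is often too noisy to converge reliably.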

This approach is especially promising in the context of probabilistic programming, where it is easy to express complex, structured distributions but hard to reason about them. I will illustrate the technique on structured models drawn from geological modeling, and show how insights from RL improve inference over more naive approaches.
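
As a hypothetical illustration of this "easy to express, hard to invert" gap (not the speaker's model), a generative program for layered strata takes only a few lines, yet its posterior has no simple closed form:

    import numpy as np

    rng = np.random.default_rng(1)

    # Hypothetical generative program: each layer's thickness depends on the
    # layer below it, giving the prior exactly the sequential structure that
    # the RL reduction can exploit.
    def sample_strata(n_layers=5, obs_noise=0.5):
        thickness, prev = [], 10.0
        for _ in range(n_layers):
            prev = prev * np.exp(0.3 * rng.standard_normal())  # log-normal random walk
            thickness.append(prev)
        depths = np.cumsum(thickness)                              # latent layer boundaries
        obs = depths + obs_noise * rng.standard_normal(n_layers)  # noisy depth observations
        return thickness, obs

    thickness, obs = sample_strata()
    # Sampling forward is trivial; the hard part is the posterior over
    # `thickness` given `obs`, which is where structured variational
    # inference (and the RL view of it) comes in.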

Joint work with Theo Weber and Jonathan Kane.
