A Stein Variational Framework for Deep Probabilistic Modeling

Abstract: Modern AI and machine learning techniques increasingly depend on highly complex, hierarchical (deep) probabilistic models to reason about complex relations and to learn to predict and act in uncertain environments. This creates significant demand for efficient computational methods that can handle highly complex probabilistic models for which exact calculation is prohibitive. In this talk, we discuss a new framework for approximate learning and inference that combines ideas from Stein's method, an advanced theoretical technique developed by the mathematical statistician Charles Stein, with practical machine learning and statistical computation techniques such as variational inference, Monte Carlo, optimal transport, and reproducing kernel Hilbert spaces (RKHS). Our framework provides a new foundation for probabilistic learning and reasoning, and it allows us to develop a host of new algorithms for a variety of challenging statistical tasks that are significantly different from, and have critical advantages over, traditional methods. We will discuss a number of examples, including a computationally tractable goodness-of-fit test for evaluating highly complex models, a new type of approximate inference method for scalable Bayesian computation, amortized maximum likelihood training for deep generative models, and new policy gradient methods that yield better exploration using Bayesian uncertainty in deep reinforcement learning.

Bio: Qiang Liu is an assistant professor of computer science at Dartmouth College. His research interests are in machine learning, Bayesian inference, probabilistic graphical models, and deep learning. He received his Ph.D. from the University of California, Irvine, followed by a postdoc at MIT CSAIL.
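As a concrete illustration of the kind of method the abstract alludes to: the scalable Bayesian inference method is presumably the speaker's Stein variational gradient descent (SVGD), which iteratively transports a set of particles toward a target distribution p using only its score function grad_x log p(x). The NumPy sketch below shows one SVGD step under illustrative assumptions; the RBF kernel with median-heuristic bandwidth, the step size, the particle count, and the Gaussian toy target are all choices made for this example, not details taken from the talk.

```python
import numpy as np

def svgd_update(X, score):
    """One Stein variational gradient descent (SVGD) step.

    X     : (n, d) array of particles.
    score : function mapping the (n, d) particles row-wise to
            grad_x log p(x), returning an (n, d) array.

    Returns phi(x_i) = (1/n) * sum_j [ k(x_j, x_i) * score(x_j)
                                       + grad_{x_j} k(x_j, x_i) ]
    for an RBF kernel k(x, y) = exp(-||x - y||^2 / h).
    """
    n = X.shape[0]
    diffs = X[:, None, :] - X[None, :, :]        # diffs[i, j] = x_i - x_j
    sq = np.sum(diffs ** 2, axis=-1)
    h = np.median(sq) / np.log(n + 1.0)          # median-heuristic bandwidth
    K = np.exp(-sq / h)                          # K[i, j] = k(x_i, x_j)
    drive = K @ score(X)                         # kernel-weighted scores
    # grad_{x_j} k(x_j, x_i) = (2/h) (x_i - x_j) k(x_i, x_j): a repulsive
    # force that keeps the particles from collapsing onto a single mode.
    repulse = (2.0 / h) * np.einsum('ij,ijd->id', K, diffs)
    return (drive + repulse) / n

# Toy run: transport particles onto a standard 2-d Gaussian, whose score
# is grad log p(x) = -x; the particles start away from the target.
rng = np.random.default_rng(0)
X = rng.normal(loc=2.0, size=(50, 2))
for _ in range(1000):
    X = X + 0.5 * svgd_update(X, lambda Z: -Z)   # x_i <- x_i + eps * phi(x_i)
print(X.mean(axis=0), X.std(axis=0))             # roughly [0 0] and [1 1]
```

The update combines kernel-smoothed gradient ascent on log p with a repulsive term coming from the kernel's derivative; with a single particle the repulsion vanishes and the update reduces to plain gradient ascent toward the mode, which is how the framework interpolates between point estimation and full Bayesian sampling.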