Online Learning By Optimizing Bregman Divergences

Much work in machine learning builds on ideas from convex analysis and optimization. In this talk, I will describe my recent research on new online algorithms for solving regularized convex optimization problems, combining two key ideas: generalized nonsymmetric distance measures, in particular Bregman divergences, and new methods for constructing sparse representations by stochastic coordinate descent.
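Concretely, the regularized problems in question have the composite form (the L1 case below is the one treated in detail in the talk):

    \min_{w} \; f(w) + \lambda \|w\|_1

where f is a convex loss and the regularization weight \lambda > 0 controls the sparsity of the solution.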

Bregman divergences unify a large variety of distance measures, such as Euclidean distance and KL-divergence. They have been used to develop regret bounds for additive and multiplicative online algorithms, and to unify logistic regression and boosting. Bregman divergences have also been used to generalize dimensionality reduction methods like PCA to the exponential family, and to analyze the convergence of belief propagation in graphical models. Stochastic coordinate descent methods have a long history in machine learning, and have recently become a preferred approach for solving large-scale convex loss minimization problems.
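For reference, the Bregman divergence generated by a strictly convex, differentiable function \varphi is

    D_\varphi(x, y) = \varphi(x) - \varphi(y) - \langle \nabla \varphi(y),\, x - y \rangle

Choosing \varphi(x) = \tfrac{1}{2}\|x\|_2^2 recovers squared Euclidean distance, D_\varphi(x, y) = \tfrac{1}{2}\|x - y\|_2^2, while \varphi(x) = \sum_i x_i \log x_i on the probability simplex recovers the KL-divergence. Note that D_\varphi is in general nonsymmetric and need not satisfy the triangle inequality.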

I'll describe a family of new algorithms for solving L1-regularized convex optimization problems that require computing fixed points of nonexpansive projection operators. Such problems occur widely in reinforcement learning, for example in policy evaluation. One class of methods is based on stochastic coordinate descent for optimizing L1-regularized loss functions for policy evaluation. The other class is based on mirror descent and proximal methods using Bregman divergences. Illustrative sketches of both classes follow.
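To make the first class concrete, here is a minimal sketch of stochastic coordinate descent on an L1-regularized least-squares objective, which stands in for the policy-evaluation losses above; the objective and function names are illustrative assumptions, not the talk's actual algorithms. Each step exactly minimizes the objective over one randomly chosen coordinate, which reduces to a soft-thresholding update:

    import numpy as np

    def soft_threshold(z, t):
        # Proximal operator of t * |.|: shrink z toward zero by t.
        return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

    def stochastic_coordinate_descent_lasso(X, y, lam, n_steps, seed=0):
        # Minimize 0.5 * ||X w - y||^2 + lam * ||w||_1, one randomly
        # chosen coordinate at a time; the exact one-dimensional
        # minimizer is a soft-thresholded least-squares update.
        rng = np.random.default_rng(seed)
        d = X.shape[1]
        w = np.zeros(d)
        residual = y - X @ w          # maintained incrementally
        col_norms = (X ** 2).sum(axis=0)
        for _ in range(n_steps):
            j = rng.integers(d)
            if col_norms[j] == 0.0:
                continue
            # Correlation of column j with the partial residual
            # (the residual with coordinate j's contribution added back).
            rho = X[:, j] @ residual + col_norms[j] * w[j]
            w_new = soft_threshold(rho, lam) / col_norms[j]
            residual += X[:, j] * (w[j] - w_new)
            w[j] = w_new
        return w

Maintaining the residual incrementally keeps the per-step cost proportional to a single column of X, which is what makes coordinate descent attractive at scale.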
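For the second class, a minimal sketch under the simplest choice of divergence: with the squared-Euclidean Bregman divergence, composite mirror descent on an L1-regularized loss reduces to a gradient step followed by the same soft-thresholding (proximal) step. This is an illustrative special case, not the specific Bregman-divergence algorithms from the talk:

    import numpy as np

    def soft_threshold(z, t):
        return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

    def mirror_descent_l1(grad_f, w0, lam, step_size, n_steps):
        # Composite mirror descent on f(w) + lam * ||w||_1 with the
        # squared-Euclidean divergence D(w, w') = 0.5 * ||w - w'||^2;
        # each iteration takes a gradient step on f, then applies the
        # proximal (soft-thresholding) step for the L1 term.
        w = w0.copy()
        for _ in range(n_steps):
            w = soft_threshold(w - step_size * grad_f(w), step_size * lam)
        return w

    # Example: L1-regularized least squares, gradient X^T (X w - y).
    # w = mirror_descent_l1(lambda w: X.T @ (X @ w - y),
    #                       np.zeros(X.shape[1]), lam=0.1,
    #                       step_size=1e-3, n_steps=1000)

Swapping in a nonsymmetric divergence changes the character of the update: for instance, the entropic divergence yields exponentiated-gradient (multiplicative) updates, which is the connection to the additive and multiplicative online algorithms mentioned above.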
