Machine Learning and Friends Lunch
Cliquewise Training for Undirected Models


Charles Sutton
UMass

Abstract


For many large undirected models that arise in real-world
applications, exact generative training is infeasible. Conditional
training is even more difficult because the partition function depends
not only on the parameters but also on the observed input, requiring
inference to be repeated for each training example. An appealing
idea for such models is to independently train a local undirected
classifier over each clique, afterwards combining the learned weights
into a single global model. We show that a simple refinement of this
method can be justified as minimizing an upper bound on the log
partition function. In particular, we show that the tree-reweighted
upper bound of Wainwright, Jaakkola, and Willsky yields this decoupled
training method, if the component trees are restricted to single
cliques. This choice of trees is especially well suited to conditional
training because, unlike spanning trees, it avoids the usual need to
invoke a message-passing algorithm many times during training. We
present experimental results supporting this approach.
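
The abstract only sketches the decoupled objective, so a rough
illustration may help. In the piecewise view, the global log partition
function is bounded by a sum of per-clique ones, log Z(theta) <=
sum_a log Z_a(theta), so maximizing the resulting decoupled likelihood
amounts to training each clique's local model independently. The
Python below is a minimal sketch of that idea on an invented toy
model: a binary chain with a single shared neighbor-agreement weight.
The model, data, and all function names here are illustrative
assumptions, not the paper's actual experiments.

    import numpy as np

    # Sketch (not from the talk): piecewise training on a toy binary
    # chain. Each edge (clique) is treated as its own tiny model with
    # its own local partition function Z_a, so no global inference is
    # ever run.

    def local_log_partition(theta):
        """log Z_a for one binary edge scored by theta * 1[x_i == x_j]."""
        # Joint states (0,0), (0,1), (1,0), (1,1); agreement fires on two.
        scores = np.array([theta, 0.0, 0.0, theta])
        m = scores.max()
        return m + np.log(np.exp(scores - m).sum())

    def piecewise_objective(theta, data, edges):
        """Sum of per-edge local log-likelihoods, each with its own Z_a."""
        ll = 0.0
        for x in data:
            for i, j in edges:
                f = float(x[i] == x[j])  # agreement feature
                ll += theta * f - local_log_partition(theta)
        return ll

    def train_piecewise(data, edges, lr=0.5, iters=300):
        """Gradient ascent on the decoupled objective (one shared weight)."""
        theta = 0.0
        n = len(data) * len(edges)
        for _ in range(iters):
            grad = 0.0
            for x in data:
                for i, j in edges:
                    f = float(x[i] == x[j])
                    # E[f] under the local edge model: two of the four
                    # joint states agree, each with score theta.
                    p_agree = 2 * np.exp(theta) / (2 * np.exp(theta) + 2)
                    grad += f - p_agree
            theta += lr * grad / n
        return theta

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        edges = [(0, 1), (1, 2), (2, 3)]
        # Synthetic chains whose neighbors usually agree; the last
        # variable is flipped 20% of the time.
        data = []
        for b in rng.integers(0, 2, size=200):
            x = np.array([b, b, b, b])
            if rng.random() < 0.2:
                x[3] = 1 - x[3]
            data.append(x)
        theta = train_piecewise(data, edges)
        print("learned agreement weight:", theta)
        print("piecewise objective:", piecewise_objective(theta, data, edges))

Note that each gradient step touches only closed-form local edge
distributions, which is the practical payoff the abstract points to:
restricting the tree-reweighted bound's component trees to single
cliques removes the message-passing calls that conditional training
would otherwise require on every example.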
