Probabilistic Graphical Models

CS 688, Spring 2016, UMass Amherst CS

Instructor: Brendan O’Connor, brenocon AT cs.umass.edu

Lecture: MW 2:30-3:45, Engineering Laboratory, room 304

TA: Tao Sun, taosun AT cs.umass.edu

See the Moodle and Piazza sites for this course.

Syllabus

Course description

Probabilistic graphical models are an intuitive visual language for describing the structure of joint probability distributions using graphs. They enable the compact representation and manipulation of exponentially large probability distributions, which allows them to efficiently manage the uncertainty and partial observability that commonly occur in real-world problems. As a result, graphical models have become invaluable tools in a wide range of areas from computer vision and sensor networks to natural language processing and computational biology.
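
To make the "exponentially large" claim concrete, here is a minimal sketch (not course code; the function names are illustrative) comparing the number of free parameters in an explicit joint table over n binary variables against a chain-structured Bayesian network, where each variable depends only on its predecessor:

```python
# Parameter counts for a distribution over n binary variables.
# A full joint table needs 2^n - 1 free parameters. A chain-structured
# Bayesian network x1 -> x2 -> ... -> xn needs only 1 + 2*(n-1):
# one number for P(x1 = 1), plus P(xi = 1 | x_{i-1} = v) for each of
# the two parent values v, for each of the n-1 remaining variables.

def full_joint_params(n: int) -> int:
    """Free parameters in an explicit joint table over n binary variables."""
    return 2 ** n - 1

def chain_bn_params(n: int) -> int:
    """Free parameters in a chain-structured Bayesian network."""
    return 1 + 2 * (n - 1)

for n in (5, 10, 20, 30):
    print(f"n={n:>2}: joint table {full_joint_params(n):>13,} "
          f"vs. chain BN {chain_bn_params(n):>3}")
```

For n = 30 the explicit table has over a billion entries while the chain needs only 59 numbers; this gap between the full joint and a factorized representation is what makes inference and learning in structured models tractable.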

The aim of this course is to develop the knowledge and skills necessary to effectively design, implement, and apply these models to solve real problems. The course will cover (a) Bayesian networks and Markov random fields (MRFs); (b) exact and approximate inference methods; and (c) estimation of both the parameters and structure of graphical models. Students entering the class should have good programming skills and knowledge of algorithms. Undergraduate-level knowledge of probability and statistics is recommended.

This course emphasizes the compositionality of PGMs' core building blocks: intuitive representations and algorithms that can be combined to derive a huge variety of models and inference/learning procedures. Probabilistic graphical models unify a very broad range of statistical and machine learning methods, and they allow the invention of new ones. These concepts are essential for advanced work in machine learning, artificial intelligence, and statistical modeling.

For the list of topics covered, see the retrospective syllabus below.

Textbook: Kevin Murphy (2012), “Machine Learning: A Probabilistic Perspective” (MIT Press).

Who should take this course?: This is a PhD-level course designed for students who want depth in statistical machine learning. It may also be useful for work in related areas, such as probabilistic inference in intelligent systems. This course focuses on mathematical formalisms, algorithms, and models. If you haven’t previously taken a course in machine learning, artificial intelligence, or statistical modeling (e.g. CS 589, CS 689, CS 383, CS 683, STAT 597*, STAT 697*, etc.), the motivation for this course may not be as clear.

Retrospective Syllabus

Schedule of topics we actually covered:

[Lec1] W 1/20: Intro and Probability

[Lec2] M 1/25: Bayesian Networks

[Lec3] W 1/27: Maximum Likelihood in BNs

[Lec3’] M 2/1: BN learning, continued

[Lec4] W 2/3: MRFs

[Lec5] M 2/8: MRF inference

[Lec6] W 2/10: MRF learning and message-passing

[Lec7] T 2/16: Message passing

[Lec8] W 2/17: Message passing wrap-up

M 2/22: Recitation

[Lec8.5] W 2/24: Numerical optimization

[Lec9] W 2/24: Monte Carlo methods

[Lec10] M 2/29: Markov Chain Monte Carlo (1)

[Lec11] W 3/2: Markov Chain Monte Carlo (2)

[Lec12] M 3/7: Markov Chain Monte Carlo (3)

[Lec13] W 3/9: Markov Chain Monte Carlo (4)

[Lec14] M 3/21: Latent Variable Models with Gibbs Sampling

[Lec15] W 3/23: The EM Algorithm

[Lec16] M 3/28: EM and Gaussian mixtures

[Lec17] W 3/30: Variational inference (I)

[Lec18] M 4/4: Variational inference (II)

[Lec19] W 4/6: Topic models

[Lec20] M 4/11: Regression models (I)

[Lec21] W 4/13: Regression models (II)

[Lec22] W 4/20: Non-parametric Bayes

[Lec23] M 4/25: Non-probabilistic graphical models

[Lec24] W 4/27: Review