Representation learning fundamentally differs from previous formulations of learning in that it is both modality and task independent. The goal of representation learning is to "understand" the data or state space, or more precisely, to estimate the underlying geometry or manifold structure of a data space through the construction of basis functions. These basis functions can then be used in conjunction with standard learning techniques, such as clustering, regression, reinforcement learning, or supervised learning, to solve not just a single problem, but many problems. This course will provide a gentle introduction to both the theory and applications of representation learning.
The course will be organized into three parts. Part 1 will cover elementary techniques of representation learning, beginning with classical linear algebraic methods such as principal components analysis (PCA). Part 2 will introduce more recent nonlinear graph-theoretic methods, including global "Fourier" methods, such as Laplacian eigenmaps, and multiscale "wavelet" methods, such as diffusion wavelets. Finally, Part 3 will explore advanced topics, including the representation theory of finite and infinite groups, Lie groups, and Riemannian manifolds. The underlying theory and algorithms will be illustrated using a diverse set of applications, such as compression of 3D objects in computer graphics (see above), ranking in information retrieval, document clustering, analysis of Markov chains and Markov decision processes, sensor networks, computer vision, robotics, networking, and web search.
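To make the Part 1 starting point concrete, here is a minimal sketch of PCA as the construction of basis functions from data: the eigenvectors of the sample covariance matrix. The synthetic data, dimensions, and noise level are illustrative assumptions, not material from the course itself.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative synthetic data: 200 points lying near a 1-D line
# embedded in 3-D, perturbed by small isotropic noise.
t = rng.normal(size=(200, 1))
direction = np.array([[1.0, 2.0, -1.0]])
X = t @ direction + 0.05 * rng.normal(size=(200, 3))

# PCA: eigendecomposition of the sample covariance of centered data.
Xc = X - X.mean(axis=0)
cov = Xc.T @ Xc / (len(Xc) - 1)
eigvals, eigvecs = np.linalg.eigh(cov)      # eigh returns ascending order
order = np.argsort(eigvals)[::-1]           # sort descending by variance
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# The columns of eigvecs are the basis functions; the leading one
# recovers the dominant direction of the underlying 1-D structure.
print("variance along each principal axis:", eigvals)
```

Projecting the data onto the leading eigenvectors (`Xc @ eigvecs[:, :k]`) gives the low-dimensional representation that downstream methods, such as clustering or regression, can then operate on.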