Optimization techniques are used throughout computer science, and in particular in machine learning, to handle a variety of large-scale, data-intensive problems. Moreover, algorithmic tools of ever-increasing sophistication are introduced at a rapid pace, offering unparalleled opportunities to solve problems efficiently. This class will cover a wide range of optimization methods, including convex, nonconvex, and discrete optimization. The first two-thirds of the course cover foundational topics, while the last third focuses on cutting-edge techniques presented at leading conferences in the field.
A significant number of problems in computer science hinge on maximizing or minimizing an objective. In machine learning, for instance, predictive models are trained from observed data by minimizing loss functions such as the mean squared error. Such problems are solved through mathematical optimization, a discipline that encompasses a broad array of techniques for handling different classes of problems, including convex and combinatorial ones. Knowledge of fundamental optimization techniques, as well as their applicability and limitations, is therefore invaluable to computer science students.
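As a concrete illustration of the kind of problem mentioned above, the minimal sketch below fits a linear model by minimizing the mean squared error with plain gradient descent. The synthetic data, step size, and iteration count are placeholders chosen for illustration and are not drawn from the course material.

```python
import numpy as np

# Minimal sketch: fit a linear model y ~ X @ w by minimizing the mean squared
# error with plain gradient descent. All numbers below are illustrative.

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))                 # 100 observations, 3 features
w_true = np.array([2.0, -1.0, 0.5])           # ground-truth weights (assumed)
y = X @ w_true + 0.1 * rng.normal(size=100)   # noisy observations

w = np.zeros(3)   # initial guess
step = 0.1        # learning rate (assumed)
for _ in range(500):
    residual = X @ w - y
    grad = (2.0 / len(y)) * X.T @ residual    # gradient of the MSE
    w -= step * grad

print("estimated weights:", w)  # should be close to w_true
```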