Bridging the Gap: Optimization and Dynamical Systems
Optimization forms the backbone of many machine learning algorithms. In this talk, I'll argue that optimization theory sits within the much larger framework of dynamical systems. This perspective is primarily motivated by recent work on variational inequalities, and it brings a wealth of benefits: new algorithms, explanations for pre-existing ones, and avenues for potentially new formulations.
The talk is structured to trace the path I took to arrive at this conclusion. First, I'll describe the theory of variational inequalities (VIs), some fundamental methods, and a few application domains. Next, I will explain how VIs relate to differential equations (DEs) through their alternate perspective as projected dynamical systems (PDS). This provides the gateway to what I find is an eye-opening discussion of the stability of fixed points, as well as of algorithms commonly used to solve ordinary differential equations (ODEs). I'll follow this discussion with several examples where techniques borrowed from VIs and ODEs have already been successfully applied. To fully take advantage of these ideas, we'll need to formulate new models, specifically ones that don't admit an equivalent optimization formulation. To that end, I will pose a novel embedding problem that possesses the flexibility to represent solutions beyond optima. Finally, I will conclude with a lofty conversation that considers dynamical systems and cognitive states within the brain, in order to point out the progression from optimization to the VI concept of equilibration as a potentially fruitful research direction in AI.
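To make the flavor of these ideas concrete, here is a minimal illustrative sketch (not taken from the talk): a VI(F, C) asks for a point x* in the set C where the field F points "outward", i.e. <F(x*), x - x*> >= 0 for all x in C. The rotational field below comes from the saddle problem min_x max_y x*y, whose equilibrium at the origin is not the minimum of any function, so no equivalent optimization formulation exists. Korpelevich's extragradient method (a standard VI algorithm; the step size and iteration count here are arbitrary choices for the demo) finds it, whereas a plain projected step would spiral.

```python
import numpy as np

def F(z):
    # Rotational field from the saddle problem min_x max_y x*y.
    # It is monotone but is not the gradient of any function.
    x, y = z
    return np.array([y, -x])

def project(z, lo=-1.0, hi=1.0):
    # Euclidean projection onto the box C = [-1, 1]^2.
    return np.clip(z, lo, hi)

def extragradient(z0, step=0.1, iters=2000):
    # Extragradient: a lookahead (predictor) step tames the rotation
    # that makes naive projected iterates cycle or spiral outward.
    z = np.array(z0, dtype=float)
    for _ in range(iters):
        z_half = project(z - step * F(z))   # predictor
        z = project(z - step * F(z_half))   # corrector
    return z

z_star = extragradient([0.9, -0.7])
print(z_star)  # converges toward the equilibrium (0, 0)
```

The lookahead step is exactly the kind of technique the talk frames through ODE solvers: it mirrors a predictor–corrector integration scheme applied to the projected dynamics.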
Ian Gemp is currently pursuing his M.S./Ph.D. in Computer Science at the University of Massachusetts, Amherst. He graduated from Northwestern University in 2011 with a B.S. in Mechanical Engineering and a B.S./M.S. in Applied Math. Before arriving in Amherst, he worked for two years as a project management consultant for Capgemini. His research interests include optimization, equilibration, reinforcement learning, multi-agent systems, and dimensionality reduction.