Rethinking Machine Learning for the 21st Century: From Optimization to Equilibration
The past two decades have seen machine learning (ML) transformed from an academic curiosity into a multi-billion-dollar industry, becoming a centerpiece of our entertainment, national security, scientific, and social infrastructures. Increasingly, ML research is driven by applications requiring analysis of massive datasets in highly distributed, networked cloud environments. In this talk, I'll argue that this reality requires a fundamental rethinking of our basic analytic tools. My thesis will be that ML needs to shift its current focus from optimization to equilibration: from modeling the world as uncertain but stationary and benign, to one where the world is non-stationary, competitive, and potentially malicious. Adapting to this new world will require developing new ML frameworks and algorithms. My talk will introduce one such framework, equilibration using variational inequalities and projected dynamical systems, which not only generalizes optimization but is better suited to the distributed, networked, cloud-oriented future that ML faces.
To explain this paradigm change, I'll begin by summarizing the au courant optimization-based approach to ML, using recent research in the ALL lab on finding low-dimensional representations of high-dimensional data and on scalable gradient optimization in non-Euclidean spaces. I will then present an equilibration-based framework using variational inequalities and projected dynamical systems, which originated in mathematics for solving partial differential equations in physics, but has since been widely applied in its finite-dimensional formulation to network equilibrium problems in economics, transportation, and other areas. I'll describe a range of algorithms for solving variational inequalities, showing that their scope allows ML to extend beyond optimization to finding game-theoretic equilibria, solving complementarity problems, and tackling many other problems.
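To give a flavor of the algorithms in this family, here is a minimal sketch of the extragradient method (Korpelevich, 1976), a classic scheme for monotone variational inequalities. The toy problem, step size, and iteration count are my own illustrative assumptions, not details from the talk: the bilinear saddle-point problem min_x max_y xy has its equilibrium at (0, 0), and plain simultaneous gradient descent/ascent spirals away from it, while extragradient contracts toward it.

```python
# Extragradient sketch for the saddle-point problem min_x max_y x*y,
# whose unique equilibrium is (0, 0).  The associated VI operator is
# F(x, y) = (y, -x): the gradient of x*y in x, and the negative
# gradient in y.  Step size and iteration count are assumptions.

def extragradient(x, y, eta=0.1, steps=2000):
    """Run the extragradient method from the starting point (x, y)."""
    for _ in range(steps):
        # Predictor step: an ordinary gradient move to a midpoint.
        xm = x - eta * y
        ym = y + eta * x
        # Corrector step: move the original iterate using the
        # operator evaluated at the midpoint.
        x, y = x - eta * ym, y + eta * xm
    return x, y

x_star, y_star = extragradient(1.0, 1.0)
```

The two-step structure is the point: re-evaluating the operator at the predicted midpoint is what turns the divergent rotation of naive gradient descent/ascent into a contraction, and this same idea underlies many projection-type VI solvers.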