Logistic Regression: The Importance of Being Improper

Abstract: Logistic regression is a fundamental task in machine learning and statistics. For the simple case of linear models, Hazan et al. (2014) showed that any logistic regression algorithm that estimates model weights from samples must have sample complexity that grows exponentially with the weight magnitude. As an alternative, we explore a counterintuitive technique called improper learning, whereby one estimates a linear model by fitting a non-linear model. Past success stories for improper learning have focused on cases where it can improve computational complexity. Surprisingly, we show that for sample complexity (the number of examples needed to achieve a desired accuracy level), improper learning leads to a doubly-exponential improvement in the dependence on weight magnitude over estimation of model weights, and more broadly over any so-called "proper" learning algorithm. This provides a positive resolution to a COLT 2012 open problem of McMahan and Streeter. As a consequence of this improvement, we also resolve two open problems on the sample complexity of boosting and bandit multi-class classification.
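
To make the improper-learning idea concrete, here is a minimal illustrative sketch in Python. It is not the algorithm from the talk: rather than committing to a single weight w and predicting sigmoid(w * x) (a proper, linear-model prediction), the learner maintains an exponential-weights mixture over a grid of candidate weights and predicts the mixture-averaged probability, which is generally not the sigmoid of any single linear score. The bound B, the grid size, and the learning rate eta below are illustrative choices, not values from the paper.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def improper_online_logistic(stream, B=10.0, grid_size=201, eta=1.0):
    # Candidate linear models: a grid of 1-D weights with |w| <= B.
    ws = np.linspace(-B, B, grid_size)
    log_q = np.zeros(grid_size)   # log of unnormalized mixture weights
    total_loss = 0.0
    for x, y in stream:           # labels y in {0, 1}
        p_w = sigmoid(ws * x)     # each candidate's predicted P(y = 1)
        q = np.exp(log_q - log_q.max())
        q /= q.sum()
        # Improper prediction: average the candidates' probabilities.
        # The mixture is generally NOT sigmoid(v * x) for any single v.
        p = float(q @ p_w)
        total_loss += -(y * np.log(p) + (1 - y) * np.log(1 - p))
        # Exponential-weights update on each candidate's log loss;
        # with eta = 1 this is exactly a Bayesian mixture update.
        loss_w = -(y * np.log(p_w) + (1 - y) * np.log(1 - p_w))
        log_q -= eta * loss_w
    return total_loss

# Toy data with a large true weight, i.e. the regime where proper
# weight estimation is hard according to the lower bound above.
rng = np.random.default_rng(0)
xs = rng.uniform(-1, 1, size=500)
ys = (rng.uniform(size=500) < sigmoid(8.0 * xs)).astype(float)
print(improper_online_logistic(zip(xs, ys)))

The mixture step is what makes this learner improper: every candidate in the grid is a linear model, yet the averaged probability need not be realizable by any one of them, and that extra freedom is exactly what lower bounds for proper learners cannot rule out.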

Bio: Dylan Foster is a postdoctoral researcher at the MIT Institute for Foundations of Data Science. In 2018 he received his PhD in computer science from Cornell University, advised by Karthik Sridharan. His research focuses on the theory of machine learning in real-world settings. He is particularly interested in generalization theory and related algorithmic questions, especially as they apply to interactive learning, deep learning, and non-convex optimization. Dylan previously received his BS and MS in Electrical Engineering from USC in 2014. His awards include the NDSEG PhD fellowship, the Facebook PhD fellowship, and the best student paper award at COLT 2018.
