Learning Halfspaces With Malicious Noise

We give new algorithms for learning halfspaces in the challenging malicious noise model, where an adversary may corrupt both the labels and the underlying distribution of examples. Our algorithms can tolerate malicious noise rates exponentially larger than previous work in terms of the dependence on the dimension $n$, and succeed for the fairly broad class of all isotropic log-concave distributions.
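For concreteness, the standard formalization of this model (the notation below, in particular the noise rate $\eta$, is a gloss rather than wording from the talk): the target is a halfspace $f(x) = \mathrm{sign}(w \cdot x)$ over $\mathbb{R}^n$, and each time the learner requests an example it receives, with probability $1 - \eta$, a pair $(x, f(x))$ with $x$ drawn from the underlying distribution, and with probability $\eta$ an arbitrary pair $(x, y)$ chosen by an adversary with full knowledge of $f$, the distribution, and the learner's internal state.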

Our analysis crucially exploits both concentration and anti-concentration properties of isotropic log-concave distributions. Our algorithms combine an iterative outlier removal procedure based on Principal Component Analysis with "smooth" boosting; a rough sketch of the outlier removal idea follows.
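The sketch below is a hypothetical Python illustration of the general PCA-based outlier removal idea, not the authors' exact procedure: repeatedly find the direction of largest empirical second moment and discard examples that project too far along it, since for isotropic log-concave data the true second moment in every direction is bounded. The function name, the threshold of 10, and the stopping rules are my own assumptions.

```python
# Illustrative sketch of iterative PCA-based outlier removal (the threshold
# and stopping rules are assumptions, not details from the talk).
import numpy as np

def pca_outlier_removal(X, threshold=10.0, max_rounds=20):
    """X: (m, n) array of (possibly corrupted) unlabeled examples."""
    X = np.asarray(X, dtype=float)
    for _ in range(max_rounds):
        second_moment = X.T @ X / len(X)              # empirical E[x x^T]
        eigvals, eigvecs = np.linalg.eigh(second_moment)
        top_var, v = eigvals[-1], eigvecs[:, -1]      # largest-variance direction
        if top_var <= threshold:                      # no direction looks inflated
            break
        keep = (X @ v) ** 2 <= threshold              # drop far-out projections
        if keep.all() or not keep.any():
            break
        X = X[keep]
    return X

# Toy usage: isotropic Gaussian examples plus a few adversarial points
# planted far out along one coordinate direction.
rng = np.random.default_rng(0)
clean = rng.standard_normal((1000, 20))
outliers = np.zeros((20, 20)); outliers[:, 0] = 50.0
cleaned = pca_outlier_removal(np.vstack([clean, outliers]))
print(cleaned.shape)  # most of the planted outliers should be removed
```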

Joint work with Adam Klivans (UT Austin) and Phil Long (Google).
