Abstract: Transparency in predictive modeling is extremely important in many application domains. Domain experts tend not to trust "black box" predictive models. They would like to understand how predictions are made and often prefer models that emulate the way a human expert might make a decision: using a few important variables, with a clear, convincing reason for each particular prediction.
I will discuss recent work on interpretable predictive modeling with decision lists. I will describe several approaches, including an algorithm based on discrete optimization and an algorithm based on Bayesian analysis.
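To make the idea concrete, a decision list is an ordered sequence of if-then rules: rules are checked in order, and the first one whose condition matches determines the prediction. The sketch below is a minimal, hand-written illustration of this structure; the rules, thresholds, and data are hypothetical and are not drawn from the talk or from the speaker's algorithms, which learn such lists from data.

```python
# Illustrative decision list: an ordered sequence of if-then rules.
# All conditions, labels, and inputs here are made-up examples.

def predict(patient):
    """Apply rules in order; the first matching rule decides."""
    rules = [
        (lambda p: p["age"] >= 60 and p["smoker"], "high risk"),
        (lambda p: p["blood_pressure"] >= 140, "high risk"),
        (lambda p: p["age"] < 40, "low risk"),
    ]
    for condition, label in rules:
        if condition(patient):
            return label
    return "medium risk"  # default when no rule fires

print(predict({"age": 65, "smoker": True, "blood_pressure": 120}))   # high risk
print(predict({"age": 35, "smoker": False, "blood_pressure": 110}))  # low risk
```

Because each prediction is justified by the single rule that fired, the model's reasoning can be read off directly, which is the sense of interpretability the talk is concerned with.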
Collaborators: Ben Letham, Allison Chang, Tyler McCormick, David Madigan, and Shawn Qian.
-Building Interpretable Classifiers with Rules using Bayesian Analysis
-Ordered Rules for Classification
-Bayesian Hierarchical Rule Modeling For Predicting Medical Conditions
Bio: Cynthia Rudin is an associate professor of statistics at the Massachusetts Institute of Technology, affiliated with CSAIL and the Sloan School of Management, and directs the Prediction Analysis Lab. Previously, Prof. Rudin was an associate research scientist at the Center for Computational Learning Systems at Columbia University, and prior to that, an NSF postdoctoral research fellow at NYU. She holds an undergraduate degree from the University at Buffalo, where she received the College of Arts and Sciences Outstanding Senior Award in Sciences and Mathematics, and she received a PhD in applied and computational mathematics from Princeton University in 2004. She is the recipient of the 2013 INFORMS Innovative Applications in Analytics Award and received an NSF CAREER Award in 2011. Her work has been featured in IEEE Computer, Businessweek, The Wall Street Journal, the Boston Globe, the Times of London, Fox News (Fox & Friends), the Toronto Star, WIRED Science, Yahoo! Shine, U.S. News and World Report, Slashdot, CIO magazine, and on Boston Public Radio.