

University of Massachusetts

MLFL Wiki

Bring Your Own Model: Model-Agnostic Improvements in NLP

Abstract:

The majority of NLP research focuses on improving NLP systems by designing better model classes (e.g., non-linear models, latent variable models). In this talk, I will describe a complementary approach based on incorporating linguistic bias and optimizing text representations, which is applicable to several model classes. First, I will present a structured regularizer suited to problems where only some parts of an input are relevant to the prediction task (e.g., sentences in text, entities in scenes of images), along with an efficient algorithm based on the alternating direction method of multipliers (ADMM) for solving the resulting optimization problem. I will then show how such a regularizer can be used to incorporate linguistic structures into a text classification model. In the second part of the talk, I will present our first step toward building a black-box NLP system that automatically chooses the best text representation for a given dataset by treating the choice as a global optimization problem. I will also briefly describe an improved algorithm that generalizes across multiple datasets for faster optimization. I will conclude by discussing how such a framework can be applied to other NLP problems.
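
To give a concrete picture of the first part: structured regularizers of this kind penalize groups of weights jointly, so that a whole group (e.g., all features tied to one sentence) can be switched off as a unit. Below is a minimal sketch, assuming a least-squares loss and non-overlapping groups for simplicity (the talk's method targets overlapping linguistic groups and classification losses); ADMM alternates between a ridge-style solve and a group-wise shrinkage step. All names and the synthetic data are illustrative, not the talk's actual setup.

    import numpy as np

    def group_soft_threshold(v, t):
        """Proximal operator of t * ||v||_2: shrink the whole group toward
        zero, and zero it out entirely when its norm falls below t."""
        norm = np.linalg.norm(v)
        if norm <= t:
            return np.zeros_like(v)
        return (1.0 - t / norm) * v

    def admm_group_lasso(X, y, groups, lam=1.0, rho=1.0, n_iters=200):
        """ADMM for min_w 0.5 * ||Xw - y||^2 + lam * sum_g ||w_g||_2,
        with `groups` a partition of the feature indices (e.g., one group
        per sentence or other linguistic unit)."""
        n, d = X.shape
        w = np.zeros(d)   # model weights
        z = np.zeros(d)   # split copy carrying the regularizer
        u = np.zeros(d)   # scaled dual variable
        A = X.T @ X + rho * np.eye(d)   # cached for the w-update
        Xty = X.T @ y
        for _ in range(n_iters):
            # w-update: a ridge-style linear solve.
            w = np.linalg.solve(A, Xty + rho * (z - u))
            # z-update: independent group-wise shrinkage (the proximal step).
            v = w + u
            for g in groups:
                z[g] = group_soft_threshold(v[g], lam / rho)
            # Dual update enforces consensus w == z.
            u = u + w - z
        return z

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        X = rng.normal(size=(100, 12))
        true_w = np.zeros(12)
        true_w[:4] = rng.normal(size=4)   # only the first group is active
        y = X @ true_w + 0.01 * rng.normal(size=100)
        groups = [np.arange(0, 4), np.arange(4, 8), np.arange(8, 12)]
        print(np.round(admm_group_lasso(X, y, groups, lam=2.0), 2))

The per-group shrinkage in the z-update is what makes the regularizer structured: a group whose combined evidence is weak is zeroed out as a unit rather than feature by feature.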
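For the second part, the abstract casts representation choice as a global optimization problem; Bayesian optimization is one standard instantiation of that recipe. The following is a hypothetical sketch using scikit-learn: a Gaussian-process surrogate with an expected-improvement acquisition selects among candidate bag-of-n-grams configurations, each scored by cross-validated accuracy of a linear classifier. The toy corpus, candidate grid, and hyperparameters are all assumptions for illustration, not the talk's actual system.

    import numpy as np
    from scipy.stats import norm
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    # Tiny illustrative corpus (two topics); a real run would use an
    # actual text classification dataset.
    docs = [
        "the team won the game", "the team lost the game",
        "great game by the home team", "the coach praised the team",
        "fans cheered the winning goal", "the player scored a late goal",
        "the new gpu trains the model", "the model fits on one gpu",
        "training the neural model is slow", "the compiler optimizes the code",
        "the code runs on the gpu", "neural training needs more data",
    ]
    labels = [0] * 6 + [1] * 6

    # Candidate representations: (max n-gram order, min document
    # frequency, whether to apply idf weighting).
    candidates = [(n, df, idf) for n in (1, 2, 3)
                  for df in (1, 2) for idf in (0, 1)]

    def evaluate(cfg):
        """Score one representation by cross-validated accuracy of a
        simple linear classifier trained on top of it."""
        n, df, idf = cfg
        vec = TfidfVectorizer(ngram_range=(1, n), min_df=df, use_idf=bool(idf))
        X = vec.fit_transform(docs)
        return cross_val_score(LogisticRegression(max_iter=1000),
                               X, labels, cv=3).mean()

    rng = np.random.default_rng(0)
    X_all = np.array(candidates, dtype=float)
    tried = [int(i) for i in rng.choice(len(candidates), size=3, replace=False)]
    scores = [evaluate(candidates[i]) for i in tried]

    for _ in range(5):
        # Fit the surrogate on configurations evaluated so far.
        gp = GaussianProcessRegressor(normalize_y=True).fit(X_all[tried], scores)
        rest = [i for i in range(len(candidates)) if i not in tried]
        mu, sigma = gp.predict(X_all[rest], return_std=True)
        best = max(scores)
        # Expected-improvement acquisition: favor candidates that are
        # either promising (high mean) or uncertain (high std).
        z = np.where(sigma > 0, (mu - best) / np.maximum(sigma, 1e-12), 0.0)
        ei = (mu - best) * norm.cdf(z) + sigma * norm.pdf(z)
        ei[sigma == 0] = 0.0
        nxt = rest[int(np.argmax(ei))]
        tried.append(nxt)
        scores.append(evaluate(candidates[nxt]))

    best_cfg = candidates[tried[int(np.argmax(scores))]]
    print("best representation:", best_cfg, "cv accuracy:", round(max(scores), 3))

The point of the surrogate is sample efficiency: each candidate representation is expensive to score (it requires training and validating a model), so the Gaussian process is used to decide which configuration to try next instead of sweeping the whole grid.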

Bio:

Dani Yogatama is a fifth-year PhD student in the Language Technologies Institute of the School of Computer Science at Carnegie Mellon University. He works with Professor Noah Smith on designing better machine learning methods for natural language processing. Prior to CMU, he was a Monbukagakusho fellow at the University of Tokyo.
