University of Massachusetts

MLFL Wiki

Why Should We Care About Linguistics?

Abstract: In just the past few months, a flurry of adversarial studies has pushed back on the apparent progress of neural networks, with multiple analyses suggesting that deep models of text fail to capture even basic properties of language, such as negation, word order, and compositionality. Alongside this wave of negative results, our field has stated ambitions to move beyond task-specific models and toward "general purpose" word, sentence, and even document embeddings. This is a tall order for the field of NLP and, I argue, marks a significant shift in the way we approach our research. I will discuss what we can learn from the field of linguistics about the challenges of codifying all of language in a "general purpose" way. Then, more importantly, I will discuss what we cannot learn from linguistics. I will argue that the state of the art in NLP research is operating close to the limits of what we know about natural language semantics, both within our field and outside it. I will conclude with thoughts on why this opens opportunities for NLP to advance both technology and basic science as it relates to language, and the implications for the way we should conduct empirical research.

Bio: Ellie Pavlick is an Assistant Professor of Computer Science at Brown University and a Research Scientist at Google AI. Ellie received her PhD from the University of Pennsylvania under the supervision of Chris Callison-Burch. Her current research focuses on semantics, pragmatics, and building cognitively plausible computational models of natural language inference.

Page last modified on October 22, 2018, at 09:22 AM