Emma Strubell

I'm a Ph.D. candidate at UMass Amherst working in the Information Extraction and Synthesis Laboratory with Professor Andrew McCallum. Previously, I earned a B.S. in Computer Science with a minor in mathematics from the University of Maine, where I worked with Professor David Hiebeler in his Spatial Population Ecological and Epidemiological Dynamics Lab, applying models from mathematical biology to the spread of internet worms.

In summer 2016 I interned as a Research Scientist with Tom Kollar on the Alexa NLU team at Amazon Lab126. In 2017 I interned with Daniel Andor and David Weiss at Google AI Language in NYC. I am grateful to have been supported by an IBM PhD Fellowship Award for the 2017-2018 academic year.

I'm very happy to announce that I'll be joining the Language Technologies Institute in the School of Computer Science at Carnegie Mellon University as an assistant professor in Fall 2020! I'll be spending the intervening year as a visiting scientist at Facebook AI Research in Seattle. 🎉

Research Interests

I am interested in developing new machine learning techniques to facilitate fast and robust natural language processing.

Core natural language processing (NLP) tasks such as part-of-speech tagging, syntactic parsing, and entity recognition have come of age thanks to advances in machine learning. For example, the task of semantic role labeling (annotating who did what to whom) has seen nearly 40% error reduction over the past decade. NLP has reached a level of maturity long awaited by domain experts who wish to leverage natural language analysis to inform better decisions and effect social change. By deploying these systems at scale on billions of documents across many domains, practitioners can consolidate raw text into structured, actionable data. These cornerstone NLP tasks are also crucial building blocks for the higher-level natural language understanding (NLU) our field has yet to achieve, such as whole-document understanding and human-level dialog.

For NLP to process raw text effectively across many domains, we need models that are both robust to different styles of text and computationally efficient. The successes described above have been achieved in the limited set of domains for which we have expensive annotated data; models that obtain state-of-the-art accuracy in these data-rich settings are typically neither trained nor evaluated for out-of-domain accuracy. Users also have practical concerns about model responsiveness, turnaround time in large-scale analyses, electricity costs and, consequently, environmental impact, yet the highest-accuracy systems also make the highest computational demands. As hardware advances, NLP researchers tend to increase model complexity in step.

My research enables a diversity of domain experts to leverage NLU at large scale, with the goal of informing decision-making and practical solutions to far-reaching problems. Toward this end, I pursue fundamental advances in computational efficiency and robustness. To improve computational efficiency, I design new training and inference algorithms that exploit the strengths of the latest tensor processing hardware, and I eliminate redundant computation through joint modeling across many tasks. To achieve high accuracy across diverse natural language domains, I will develop joint models in which parameter sharing improves generalization, paired with novel methods for adversarial training that enable transfer to new domains and languages without labeled data. I will apply my research broadly, to low-level NLP as well as high-level NLU tasks. In conjunction with these new machine learning techniques, I will collaborate with domain experts to make a positive mark on society.

Publications

2019

2018

2017

2016

2015

2014

2012

/etc

In my spare time, I enjoy cooking (with a focus on making vegetables delicious), fermenting (kombucha, kimchi, yogurt, sourdough), enjoying the outdoors (backpacking and rock climbing), and training my dog.

In search of a fast lexer for Scala, I forked JFlex and added the ability to emit Scala code. JFlex-scala, along with its corresponding Maven and sbt plugins, is available on Maven Central. For an example of its use, check out the tokenizer in FACTORIE.
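To give a flavor of what a lexer specification looks like, here is a minimal, hypothetical sketch in standard JFlex syntax with Java action code; the directives and action syntax that JFlex-scala actually uses for Scala output may differ, so treat the details below as illustrative assumptions rather than JFlex-scala's real interface.

    // Hypothetical toy spec: tokenize words and integers, skip whitespace.

    %%

    %class ToyLexer
    %unicode
    %type String

    %%

    // A run of letters is a word token.
    [A-Za-z]+      { return yytext(); }
    // A run of digits is an integer token.
    [0-9]+         { return yytext(); }
    // Discard whitespace.
    [ \t\r\n]+     { /* skip */ }
    // Pass any other character through as its own token.
    .              { return yytext(); }

Running the jflex tool on a spec like this generates a ToyLexer class whose yylex() method returns the next token string; JFlex-scala would presumably generate the equivalent Scala class from a similar spec.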

I am also co-author of Plant Jones, a semi-intelligent plant who tweets negatively about water when he's thirsty and positively when he's not. His code is available here.

In my junior year of college I wrote and presented a tutorial on quantum algorithms aimed at undergraduate students in computer science, available here, along with slides (part 1 and part 2).

Gentoo Linux user since 2005.

Amherst, Massachusetts, USA

cs.umass.edu/~strubell

strubell [at] cs [dot] umass [dot] edu

curriculum vitae (PDF)