Learning Challenges in Natural Language Processing

Abstract: As the availability of data for language learning grows, the role of linguistic structure is under scrutiny. At the same time, it is imperative to closely inspect patterns in data which might present loopholes for models to obtain high performance on benchmarks. In this two-part talk, I will address each of these challenges.

First, I will introduce the paradigm of scaffolded learning. Scaffolds enable us to leverage inductive biases from one structural source for the prediction of a different, but related, structure, using only as much supervision as is necessary. We show that the resulting representations achieve improved performance across a range of tasks, indicating that linguistic structure remains beneficial even with powerful deep learning architectures. In the second part of the talk, I will showcase some of the properties exhibited by NLP models in large data regimes. Even as these models report excellent performance, sometimes claimed to beat humans, a closer look reveals that their predictions are not the result of complex reasoning and that the task is not being solved in a generalizable way. Instead, this success can be largely attributed to the exploitation of annotation artifacts in the datasets. I will discuss some of the questions our findings raise, as well as directions for future work.

Bio: Swabha Swayamdipta is a PhD candidate at the Language Technologies Institute at Carnegie Mellon University (currently a visiting student at the University of Washington). She works with Noah Smith and Chris Dyer on developing efficient algorithms for linguistic structured prediction, with a focus on incorporating syntactic inductive biases. Prior to joining her PhD program, she earned a Master's degree from Columbia University. She has done research internships at Google AI, New York, and at the Allen Institute for Artificial Intelligence in Seattle.