Detecting Asymmetric Semantic Relations in Context: A Case-Study on Hypernymy Detection

Abstract: Comparing the meaning of words and understanding how they relate is a fundamental challenge in natural language understanding. In this talk, I’ll introduce WHiC, a challenging testbed for detecting hypernymy, an asymmetric relation between words. While previous work has focused on detecting hypernymy between word types, we ground the meaning of words in specific contexts drawn from WordNet examples and require predictions to be sensitive to changes in context. WHiC also lets us analyze different properties of two approaches to inducing vector representations of word meaning in context, allowing us to identify their strengths and weaknesses. Finally, I’ll show that such contextualized word representations improve detection of a wider range of semantic relations in context.

Bio: Yogarshi Vyas is a fourth-year PhD student in the Department of Computer Science at the University of Maryland, College Park. His research interests lie in semantics, multilingual NLP, machine translation, and their intersections. His current research focuses on comparing and contrasting the meaning of text in different languages using the idea of entailment, as well as on learning representations for multilingual data that enable meaningful and easy cross-lingual comparisons. He recently won the Adam Kilgarriff Best Paper Award at *SEM 2017.
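The abstract describes scoring a directed relation between two words whose meanings are grounded in a shared context. As a rough, illustrative sketch of what such a setup can look like (not the speaker’s actual method), the toy Python code below pairs a placeholder context-sensitive encoder with an order-sensitive classifier; the encoder, feature construction, dimensionality, and weights are all assumptions made up for illustration.

import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a contextualized encoder: in the talk's setting these
# vectors would come from a model that induces word meaning in context.
# Here we just cache a random vector per (word, sentence) pair, so the
# same word gets different vectors in different contexts. Purely
# illustrative -- not the encoder used in the talk.
DIM = 8
_cache = {}

def embed_in_context(word, sentence):
    key = (word, sentence)
    if key not in _cache:
        _cache[key] = rng.normal(size=DIM)
    return _cache[key]

def asymmetric_score(u, v, w):
    """Score the directed claim 'u is a hyponym of v'.

    Concatenating [u; v; u - v] keeps the argument order visible to
    the classifier, so in general score(u, v) != score(v, u) -- the
    key requirement for an asymmetric relation like hypernymy."""
    features = np.concatenate([u, v, u - v])
    return 1.0 / (1.0 + np.exp(-w @ features))

# Random (untrained) classifier weights, just to exhibit the asymmetry.
w = rng.normal(size=3 * DIM)

sent = "The dog barked at the mail carrier."
u = embed_in_context("dog", sent)
v = embed_in_context("animal", sent)
print(asymmetric_score(u, v, w))  # p(dog -> animal)
print(asymmetric_score(v, u, w))  # p(animal -> dog): differs in general

The point of the sketch is only the directionality: because the feature vector is not symmetric in its two arguments, swapping them changes the score, which is what lets a trained classifier predict hypernymy in one direction but not the other.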