University of Massachusetts

MLFL Wiki

Neural Knowledge Representation And Reasoning

Abstract: Complex decisions in science, government policy, finance, and clinical treatment all require integrating and reasoning over disparate sources of information. While some decisions can be made from a single piece of evidence, others require considering information that exists outside of the current context. A long-term store of abstracted knowledge over related concepts can facilitate this type of reasoning, while also influencing interpretation and enhancing the acquisition of new knowledge. The predominant method of representing knowledge is a symbolic graph over a fixed, human-defined schema encoding facts about entities and their relations, but this approach is brittle, lacks specificity, and is inevitably highly incomplete. At the other extreme, recent work on purely text-based knowledge models lacks the abstractions necessary for complex reasoning.

In this talk I will present a middle ground that combines powerful neural network models with rich structured ontologies and unstructured raw text to improve the representations of entities and their relations. We first discuss our work on universal schema, a method for learning a latent schema over both existing structured resources and unstructured free-text data by embedding them jointly within a shared semantic space. Next, we inject additional hierarchical structure into the embedding space of concepts, resulting in more efficient statistical sharing among related concepts and improving accuracy in both fine-grained entity typing and entity linking. We then present initial work on representing knowledge in context, including a single model that simultaneously extracts all entities and long-range relations over full paragraphs while jointly linking these entities to a knowledge graph. Lastly, we propose future directions for representing knowledge in context by incorporating cognitive theories of human memory systems, and discuss how these models can address longstanding shortcomings in knowledge representation.
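For readers unfamiliar with the universal schema idea mentioned above, the following is a minimal illustrative sketch, not the speaker's actual model or code: each entity pair and each relation, whether a knowledge-base relation or a textual pattern, is assigned a learned embedding in a shared space, and the plausibility of an (entity pair, relation) cell is scored by a dot product. All names, dimensions, and values below are hypothetical.

    # Minimal universal-schema-style scorer (illustrative sketch only).
    import numpy as np

    rng = np.random.default_rng(0)
    dim = 50

    # Relations include both structured KB relations and free-text patterns.
    entity_pairs = ["(UMass, Amherst)", "(Google, Mountain_View)"]
    relations = ["kb:located_in", "text:is headquartered in"]

    # Hypothetical randomly initialized embeddings; in practice these are
    # trained so that observed (entity pair, relation) cells score highly.
    pair_emb = {p: rng.normal(size=dim) for p in entity_pairs}
    rel_emb = {r: rng.normal(size=dim) for r in relations}

    def score(pair: str, relation: str) -> float:
        """Dot-product compatibility of an entity pair with a relation."""
        return float(pair_emb[pair] @ rel_emb[relation])

    print(score("(UMass, Amherst)", "kb:located_in"))

Because KB relations and textual patterns live in the same embedding space, a model trained this way can infer unseen KB facts from textual evidence and vice versa.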

Bio: Patrick Verga is a final-year PhD candidate in the College of Information and Computer Sciences at UMass Amherst, advised by Andrew McCallum. His research contributes to knowledge representation and reasoning, with a focus on constructing large knowledge bases from unstructured text, with applications to the general domain, commonsense, and biomedicine. Pat previously interned at Google and the Chan Zuckerberg Initiative and received the best long paper award at EMNLP 2018. Over the past several years he has advised multiple M.S. and junior PhD students, resulting in published research on fine-grained entity typing, unsupervised parsing, and partially labeled named entity extraction. He holds M.S. and B.A. degrees in computer science as well as a B.S. in neuroscience.

Page last modified on February 11, 2019, at 11:01 PM