Language Understanding And Reasoning With Memory Augmented Neural Networks
This talk will first briefly review recent advances in memory augmented neural networks and then present our own contribution, Neural Semantic Encoders (NSE) [1,2]. With a special focus on NSE, we show that external memory in conjunction with an attention mechanism can be a valuable asset in natural language understanding and reasoning. In particular, we will cover a set of real-world, large-scale NLP tasks ranging from sentence classification to sequence-to-sequence learning and question answering, and demonstrate how NSE can be effectively applied to them.
1. Munkhdalai, Tsendsuren, and Hong Yu. "Neural Semantic Encoders." To appear in EACL 2017.
2. Munkhdalai, Tsendsuren, and Hong Yu. "Reasoning with Memory Augmented Neural Networks for Language Comprehension." To appear in ICLR 2017.
Tsendsuren Munkhdalai is a postdoctoral associate in Prof. Hong Yu's BioNLP group at UMass Medical School. He recently received his PhD in biomedical information extraction and NLP from the Department of Computer Science at Chungbuk National University, South Korea, under the supervision of Prof. Keun Ho Ryu. His research interests include semi-supervised learning, representation learning, meta-learning, and deep learning, with applications to natural language understanding and (clinical/biomedical) information extraction.