Controllable Text Generation With Deep Latent-Variable Models

Abstract: Progress in deep learning has led to optimism for automatic text generation. Yet state-of-the-art systems still produce inaccurate output on a non-trivial percentage of examples, and the lack of user control to correct these issues makes it difficult to deploy these models in real applications. In this talk, I will argue that discrete latent-variable models provide a natural declarative framework for more controllable text models. I will present two recent works exploring this theme: (1) a method for learning neural template models that can be adjusted directly by users; (2) a variational approach to soft attention that learns alignment as a latent variable. I will end by discussing research challenges in making it easy to design and fit these models for large-scale applications.

Bio: Alexander "Sasha" Rush is an Assistant Professor at Harvard University, where he studies natural language processing and machine learning. Sasha received his PhD from MIT, supervised by Michael Collins, and was a postdoc at Facebook NY under Yann LeCun. His group supports open-source development, running several projects including OpenNMT. His research has received several best-paper awards at NLP conferences, an NSF CAREER award, and faculty awards from Google, Facebook, and others. He is currently the senior program chair of ICLR 2019.
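To make the abstract's second theme concrete, here is a minimal sketch of treating attention's alignment as a categorical latent variable z over source positions and training with an ELBO. This is an illustrative assumption of how such a model might look in PyTorch, not the talk's actual implementation; the function names, shapes, and the placeholder choice of variational posterior q are all hypothetical.

```python
import torch
import torch.nn.functional as F

def variational_attention_elbo(query, keys, values, decode_logits_fn, target):
    """ELBO for one decoding step with a categorical alignment latent z.

    query:  (B, D)    decoder state
    keys:   (B, T, D) encoder states used for the prior alignment scores
    values: (B, T, D) encoder states attended over
    decode_logits_fn: maps a (B, D) context vector to (B, V) vocab logits
    target: (B,) gold token ids
    """
    # Prior over alignments p(z | x): standard dot-product attention scores.
    scores = torch.einsum("bd,btd->bt", query, keys)
    log_prior = F.log_softmax(scores, dim=-1)               # (B, T)

    # Variational posterior q(z | x, y); reusing the prior family is a
    # placeholder -- in general q can also condition on the target side.
    log_q = log_prior

    # z ranges over only T source positions, so the expectation under q
    # can be computed exactly by enumeration (no sampling needed).
    B, T, D = keys.shape
    log_like = []
    for t in range(T):
        context = values[:, t, :]                           # hard alignment to t
        logits = decode_logits_fn(context)                  # (B, V)
        log_like.append(logits.log_softmax(-1)
                        .gather(1, target.unsqueeze(1))
                        .squeeze(1))                        # log p(y | z=t, x)
    log_like = torch.stack(log_like, dim=1)                 # (B, T)

    # ELBO = E_q[log p(y | z, x)] - KL(q || p)
    expected_ll = (log_q.exp() * log_like).sum(-1)
    kl = (log_q.exp() * (log_q - log_prior)).sum(-1)
    return (expected_ll - kl).mean()
```

With q set equal to the prior, the KL term vanishes and the objective reduces to the expected log-likelihood under hard attention, a lower bound on the marginal likelihood; a learned q that also sees the target tightens the bound, which is the core idea behind learning alignment as a latent variable rather than a deterministic softmax average.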