MLFL Wiki |
Relaxation-Based Algorithms For Machine Translation

Talk: Beam search and cube pruning have become the de facto inference algorithms for phrase-based and syntactic translation, respectively. In practice the algorithms often find near-optimal solutions very quickly. However, the effectiveness of both algorithms is theoretically surprising, since neither provides a guarantee of optimality or even a bound on approximation error. In this talk, we describe new beam-search algorithms for finding optimal translations in phrase-based and syntax-based models. These algorithms piggyback on the practical efficiency of beam search and cube pruning, while extending them to provide additional guarantees. The algorithms have the following benefits: (1) they provide formal guarantees about the approximation gap and certificates of optimality, (2) they are simple and extend off-the-shelf beam search decoders, and (3) preliminary results show that they are significantly faster than other exact methods for translation decoding. This is joint work with Yin-Wen Chang and Michael Collins.

Bio: Alexander Rush is a Ph.D. candidate in Computer Science at the Massachusetts Institute of Technology studying with Professor Michael Collins, and is currently a visiting scholar at Columbia University in New York. He received his A.B. in Computer Science from Harvard University in 2007. Before starting graduate study, he worked as lead engineer on the Platform/API team at Facebook. His research interest is in formally sound but empirically fast inference methods for natural language processing, with a focus on models for syntactic parsing, translation, and speech. Last spring, he was the lead TA for NLP on Coursera, Columbia's first MOOC, with over 30,000 registered students. He has received best paper awards from EMNLP 2010 and NAACL 2012, the latter for work completed as an intern at Google Research.
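The abstract's central idea, beam search extended with a certificate of optimality, can be illustrated on a toy problem. The sketch below is not the speakers' algorithm; it is a minimal, hypothetical illustration of the general principle: track an admissible (optimistic) bound on every pruned hypothesis, and if the best completed hypothesis scores at least as high as every such bound, the beam search result is provably optimal. The trellis structure and all names are illustrative assumptions.

```python
# Hedged sketch of beam search with an optimality certificate.
# Each trellis step independently maps labels to additive scores;
# an admissible future-cost bound lets us certify the result.

def beam_search_with_certificate(trellis, beam_size):
    """trellis[t] maps label -> score at step t (scores are additive)."""
    T = len(trellis)
    # Admissible bound: best achievable score from step t to the end.
    future = [0.0] * (T + 1)
    for t in range(T - 1, -1, -1):
        future[t] = future[t + 1] + max(trellis[t].values())

    beam = [((), 0.0)]               # (label sequence, score so far)
    best_pruned_bound = float("-inf")
    for t in range(T):
        expanded = [(seq + (lab,), sc + s)
                    for seq, sc in beam
                    for lab, s in trellis[t].items()]
        expanded.sort(key=lambda h: h[1], reverse=True)
        beam = expanded[:beam_size]
        # Record an optimistic bound for every hypothesis we prune.
        for _, sc in expanded[beam_size:]:
            best_pruned_bound = max(best_pruned_bound, sc + future[t + 1])

    best_seq, best_score = max(beam, key=lambda h: h[1])
    # Certificate: no pruned hypothesis could have beaten the winner.
    certificate = best_score >= best_pruned_bound
    return best_seq, best_score, certificate

toy_trellis = [{"a": 1.0, "b": 0.5},
               {"a": 0.2, "b": 2.0},
               {"a": 1.5, "b": 1.0}]
seq, score, optimal = beam_search_with_certificate(toy_trellis, beam_size=1)
```

In real translation decoding the scores are not independent per position, so such a certificate is harder to obtain; relaxation-based methods (e.g. Lagrangian relaxation) supply the admissible bounds that a raw beam search lacks, which is the gap the talk addresses.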