Spectral Learning Techniques for Weighted Automata, Transducers, and Grammars

Tutorial @ EMNLP 2014 (Qatar)

presented by

Borja Balle, Ariadna Quattoni, and Xavier Carreras

Overview

In recent years we have seen the development of efficient and provably correct algorithms for learning weighted automata and closely related function classes such as weighted transducers and weighted context-free grammars. The common denominator of all these algorithms is the so-called spectral method, which gives an efficient and robust way to estimate recursively defined functions from empirical estimates of observable statistics. These algorithms are appealing because of the existence of theoretical guarantees (e.g. they are not susceptible to local minima) and because of their efficiency. However, despite their simplicity and wide applicability to real problems, their impact on NLP applications is still moderate. One of the goals of this tutorial is to remedy this situation.
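
As a simple point of reference, the most basic of these recursively defined functions is the one computed by a weighted finite automaton: in one common notation, an initial weight vector \alpha_1, a final weight vector \alpha_\infty, and one transition matrix A_\sigma per alphabet symbol \sigma together define

    f(x_1 x_2 \cdots x_T) = \alpha_1^\top A_{x_1} A_{x_2} \cdots A_{x_T} \, \alpha_\infty

so that f is evaluated by a forward recursion of matrix-vector products. Spectral learning estimates \alpha_1, \alpha_\infty, and the matrices A_\sigma from statistics of observed strings.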

The contents presented in this tutorial offer a complementary perspective with respect to previous tutorials on spectral methods presented at ICML-2012, ICML-2013, and NAACL-2013. Rather than using the language of graphical models and signal processing, we tell the story from the perspective of formal languages and automata theory (without assuming a background in formal algebraic methods). Our presentation highlights the common intuitions behind different spectral algorithms by presenting them in a unified framework based on the concepts of low-rank factorizations and completions of Hankel matrices. In addition, we provide an interpretation of the method in terms of forward and backward recursions for automata and grammars. This provides extra intuition about the method and stresses the importance of matrix factorization for learning automata and grammars. We believe that this complementary perspective might be appealing to an NLP audience and serve to put spectral learning in a wider and, perhaps for some, more familiar context. Our hope is that this will broaden the understanding of these methods by the NLP community and empower many researchers to apply these techniques to novel problems.
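
To make the Hankel-matrix view concrete, the following minimal sketch (in Python with NumPy; the helper names, toy data, and prefix/suffix basis are illustrative assumptions, not material from the tutorial) estimates empirical Hankel blocks from a sample of strings, factorizes them with a truncated SVD, and reads off a weighted automaton:

    # A minimal sketch of the basic spectral method for weighted automata:
    # estimate Hankel blocks over a small prefix/suffix basis from a sample of
    # strings, factorize with a truncated SVD, and read off the automaton.
    import numpy as np
    from collections import Counter

    def hankel_blocks(samples, prefixes, suffixes, alphabet):
        """Empirical f(x) = frequency of x; Hankel blocks H[u,v] = f(uv),
        H_sigma[u,v] = f(u+sigma+v), plus the row/column for the empty string."""
        counts, total = Counter(samples), float(len(samples))
        f = lambda x: counts[x] / total
        H = np.array([[f(u + v) for v in suffixes] for u in prefixes])
        Hs = {a: np.array([[f(u + a + v) for v in suffixes] for u in prefixes])
              for a in alphabet}
        hP = np.array([f(u) for u in prefixes])   # column of H for the empty suffix
        hS = np.array([f(v) for v in suffixes])   # row of H for the empty prefix
        return H, Hs, hP, hS

    def spectral_wfa(H, Hs, hP, hS, n_states):
        """Truncated SVD H ~ U diag(s) V^T; the standard spectral formulas then
        recover the initial vector, final vector, and one operator per symbol."""
        _, _, Vt = np.linalg.svd(H)
        V = Vt[:n_states, :].T
        proj = np.linalg.pinv(H @ V)              # (H V)^+  (backward projection)
        alpha0 = V.T @ hS                         # initial weights
        alpha_inf = proj @ hP                     # final weights
        A = {a: proj @ Ha @ V for a, Ha in Hs.items()}  # transition operators
        return alpha0, A, alpha_inf

    def evaluate(alpha0, A, alpha_inf, x):
        """Forward recursion: f(x) ~ alpha0^T A[x_1] ... A[x_T] alpha_inf."""
        v = alpha0
        for a in x:
            v = v @ A[a]
        return float(v @ alpha_inf)

    # Toy usage: the empty string must be in both the prefix and suffix basis.
    samples = ["", "a", "ab", "ab", "aab", "b", "ab", "aab", "a", ""]
    prefixes = ["", "a", "b", "aa", "ab"]
    suffixes = ["", "a", "b", "ab"]
    H, Hs, hP, hS = hankel_blocks(samples, prefixes, suffixes, "ab")
    alpha0, A, alpha_inf = spectral_wfa(H, Hs, hP, hS, n_states=2)
    print(evaluate(alpha0, A, alpha_inf, "ab"), "vs empirical", 3 / 10)

With an exact Hankel matrix of the right rank these formulas recover the automaton up to a change of basis; with empirical estimates, the truncated SVD acts as a denoising step.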

Presenter Bios

Borja Balle is currently a postdoctoral fellow at McGill University; prior to that he obtained his PhD from Universitat Politecnica de Catalunya (UPC) in July 2013. His research interests lie at the intersection of automata theory and machine learning, in particular in applications of spectral learning techniques to natural language processing, grammatical inference, and reinforcement learning. He was an area chair for NIPS 2014 and a program committee member for ICGI 2014, and has recently organized workshops (at ICML 2013, NIPS 2013, and ICML 2014) on methods of moments and spectral learning.

Ariadna Quattoni is currently a researcher at Xerox Research Centre Europe (XRCE); prior to that she was a researcher at the Universitat Politecnica de Catalunya (UPC). She obtained her PhD from MIT in 2009. Her main research focuses on latent variable models for structured prediction, with applications to natural language processing and computer vision. In recent years her work has centered on spectral learning techniques for structured prediction problems, with applications to sequence tagging, learning general transductions, and parsing.

Xavier Carreras's research is in natural language processing and machine learning. He is interested in grammatical induction and parsing methods for syntactic-semantic analysis and translation of natural languages. In 2005 he completed his PhD at the Universitat Politecnica de Catalunya (UPC). From 2006 to 2009 he was a postdoctoral researcher at MIT/CSAIL. From 2009 to 2014 he was a researcher at UPC, and since June 2014 he has been a senior researcher at Xerox Research Centre Europe.

Last Update: Oct 20, 2014