Beyond the Mean Field Approximation for Inference in Multi-Layer Perceptrons

Interest in Multi-Layer Perceptrons (MLPs) has spiked in recent years due to their central role in deep learning technologies. An MLP can be seen as a trainable multi-input, multi-output function constructed by composing linear and non-linear transformations organized into layers of nodes. A lesser-known fact is that sigmoid-based MLPs also admit a probabilistic interpretation. Under this interpretation, the MLP forward pass can be seen as approximating a marginalization over the hidden node activations at each layer under the mean field assumption. In this talk we explore this probabilistic view of MLPs and propose a closed-form approximation that goes beyond mean field. This new approximation takes into account the uncertainty of inference at each layer and is thus closer to the true marginalization. Furthermore, we show that this approximation yields improved performance in Automatic Speech Recognition experiments when used for feature extraction.
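To make the contrast concrete, below is a minimal NumPy sketch of the two views for a single sigmoid layer. It is an illustration under stated assumptions, not the talk's actual method: the beyond-mean-field step here uses MacKay's well-known probit-based approximation E[sigmoid(a)] ≈ sigmoid(mu / sqrt(1 + pi·var/8)) to marginalize a Gaussian pre-activation, and the output-variance formula is a simple Bernoulli-style placeholder. All function names are illustrative.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def mean_field_layer(mu_in, W, b):
    """Standard MLP forward pass: propagate only the mean.

    Under the mean field view, sigmoid(W @ mu_in + b) approximates
    the expected activation of a layer of sigmoid hidden units.
    """
    return sigmoid(W @ mu_in + b)

def uncertainty_layer(mu_in, var_in, W, b):
    """Propagate mean AND variance of the pre-activation through the
    sigmoid, marginalizing a Gaussian pre-activation with MacKay's
    probit approximation (an illustrative stand-in for the talk's
    closed-form approximation).
    """
    mu_a = W @ mu_in + b
    var_a = (W ** 2) @ var_in  # assumes independent inputs (mean field on the inputs)
    mu_out = sigmoid(mu_a / np.sqrt(1.0 + np.pi * var_a / 8.0))
    # Placeholder output variance, treating the unit as Bernoulli-like.
    var_out = mu_out * (1.0 - mu_out)
    return mu_out, var_out
```

With zero input variance the two layers coincide; with growing input variance the marginalized output is pulled toward 0.5, reflecting the layer's uncertainty instead of discarding it.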

Ramón Astudillo

Ramón F. Astudillo obtained his industrial engineering degree, with a specialization in electronics and automatic regulation, from the Escuela Politecnica Superior de Ingenieria de Gijon (Spain) in 2005, completing the last two years of the degree on an Erasmus scholarship at the Technische Universität Berlin. In 2006 he worked as an intern at Peiker Acustic, researching model-based speech enhancement. In the same year he was awarded La Caixa and German Academic Exchange Service (DAAD) scholarships for research towards the Ph.D. degree. He obtained the degree with distinction from the Technische Universität Berlin in 2010 with the thesis "Integration of Short-Time Fourier Domain Speech Enhancement and Observation Uncertainty Techniques for Robust Automatic Speech Recognition". He is currently a post-doctoral researcher at INESC-ID/L2F, working on robust speech recognition and robust natural language processing for speech applications in a Bayesian setting.

INESC, IST