Neural Auto-associative Memory Via Sparse Recovery
An associative memory is a structure learned from a dataset M of vectors (signals) such that, given a noisy version of one of the vectors as input, the nearest valid vector in M (the nearest neighbor) is returned as output, preferably via a fast iterative algorithm. Traditionally, neural networks are used to model this structure. In this talk we propose a model of associative memory based on sparse recovery of signals. Our basic premise is simple. Given a dataset, we learn a set of linear constraints that every vector in the dataset must satisfy. Provided these linear constraints possess some special properties, it is possible to cast the task of finding the nearest neighbor as a sparse recovery problem. Assuming generic random models for the dataset, we show that it is possible to store an exponential number of n-length vectors in a neural network of size O(n). Furthermore, given a noisy version of one of the stored vectors corrupted in a linear number of coordinates, the vector can be correctly recalled using a neurally feasible algorithm.
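To make the premise concrete, here is a minimal sketch of the reduction to sparse recovery. It assumes a hypothetical random Gaussian constraint matrix H (standing in for the constraints learned from the dataset in the talk) and uses iterative hard thresholding as a stand-in for the neurally feasible recall algorithm; the talk's actual construction and algorithm may differ.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 300, 150, 5  # signal length, number of constraints, error sparsity

# Hypothetical constraint matrix: every stored vector x must satisfy H @ x = 0.
# In the talk's setting H is learned from the dataset; here it is random Gaussian.
H = rng.standard_normal((m, n)) / np.sqrt(m)

# A "stored" vector: project a random vector onto the null space of H.
z = rng.standard_normal(n)
x = z - H.T @ np.linalg.lstsq(H @ H.T, H @ z, rcond=None)[0]

# Noisy input: corrupt k coordinates with large errors.
e_true = np.zeros(n)
idx = rng.choice(n, size=k, replace=False)
e_true[idx] = 5.0
y = x + e_true

# Key reduction: the syndrome H @ y = H @ (x + e) = H @ e depends only on
# the sparse error e, so recall becomes a sparse recovery problem.
s = H @ y

def iht(H, s, k, iters=100):
    """Iterative hard thresholding: gradient step, then keep k largest entries."""
    e = np.zeros(H.shape[1])
    for _ in range(iters):
        e = e + H.T @ (s - H @ e)       # gradient step on ||s - H e||^2
        keep = np.argsort(np.abs(e))[-k:]
        mask = np.zeros_like(e)
        mask[keep] = 1.0
        e = e * mask                    # enforce k-sparsity
    return e

e_hat = iht(H, s, k)
x_hat = y - e_hat  # recalled vector: subtract the recovered error
```

Note how the iteration only needs matrix-vector products and a thresholding nonlinearity, which is the kind of computation usually considered neurally plausible.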
Instead of assuming the above subspace model for the dataset, we might assume that the data is a sparse linear combination of vectors from a dictionary (sparse coding). This very relevant model poses significant challenges in designing associative memories and is one of the main problems we will describe. (This is joint work with Ankit Singh Rawat (CMU) and was presented in part at NIPS '15.)
Arya Mazumdar is an assistant professor in the College of Information and Computer Sciences at the University of Massachusetts, Amherst. From January 2013 until recently, Arya was an assistant professor at the University of Minnesota, Twin Cities, and from August 2011 to December 2012 he was a postdoctoral scholar at the Massachusetts Institute of Technology. He received his Ph.D. from the University of Maryland, College Park, in 2011. Arya is a recipient of the 2014-15 NSF CAREER award and the 2010 IEEE ISIT Student Paper Award. He also received the 2011 Distinguished Dissertation Award at the University of Maryland. He spent the summers of 2008 and 2010 at Hewlett-Packard Laboratories, Palo Alto, CA, and IBM Almaden Research Center, San Jose, CA, respectively. Arya's research interests include information and coding theory and their applications to networked systems and learning.