MLFL Wiki
## Bayesian Learning And Inference In Recurrent Switching Linear Dynamical Systems
Many natural systems, such as neurons firing in the brain or basketball teams traversing a court, give rise to time series data with complex, nonlinear dynamics. We can gain insight into these systems by decomposing the data into segments that are each explained by simpler dynamic units. I will present a model class that builds on the switching linear dynamical system (SLDS), leveraging its combination of discrete and continuous latent states to discover dynamical units. Our recurrent SLDS will go one step further: by learning how transition probabilities depend on observations or continuous latent states, we will better explain switching behavior. Our key innovation is to design these recurrent SLDS models to enable Pólya-gamma auxiliary variable techniques and thus make approximate Bayesian learning and inference in these models easy, fast, and scalable.
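To make the model class concrete, here is a minimal sketch of the recurrent SLDS generative process in NumPy. All parameter values (the two rotation-style dynamics matrices, the recurrence weights `R`, the noise scales) are hypothetical choices for illustration, not values from the talk; the key point is that the discrete-state transition probabilities depend on the previous continuous latent state, which is what distinguishes the recurrent SLDS from a standard SLDS.

```python
import numpy as np

def softmax(u):
    """Numerically stable softmax over a logit vector."""
    e = np.exp(u - u.max())
    return e / e.sum()

def rotation(theta, decay=0.98):
    """A stable 2-D linear dynamics matrix: rotate by theta, shrink slightly."""
    c, s = np.cos(theta), np.sin(theta)
    return decay * np.array([[c, -s], [s, c]])

rng = np.random.default_rng(0)
K, D, T = 2, 2, 200  # discrete states, continuous latent dim, time steps

# Per-state linear dynamics (hypothetical: slow vs. fast oscillation)
A = np.stack([rotation(0.05), rotation(0.5)])
Q = 0.01 * np.eye(D)  # dynamics noise covariance

# Recurrent transition parameters: logits are linear in the continuous state,
# so where the trajectory sits in latent space drives the switching.
R = np.array([[2.0, 0.0],
              [-2.0, 0.0]])  # K x D recurrence weights (hypothetical)
r = np.zeros(K)              # per-state bias on the logits

x = np.zeros((T, D))         # continuous latent trajectory
z = np.zeros(T, dtype=int)   # discrete state sequence
x[0] = rng.normal(size=D)

for t in range(1, T):
    # Discrete transition depends on the previous continuous state
    # (in a standard SLDS, these probabilities would be a fixed K x K matrix).
    p = softmax(R @ x[t - 1] + r)
    z[t] = rng.choice(K, p=p)
    # Continuous state evolves under the chosen linear dynamical unit.
    x[t] = A[z[t]] @ x[t - 1] + rng.multivariate_normal(np.zeros(D), Q)

print("discrete state occupancy:", np.bincount(z, minlength=K))
```

In the actual model the softmax is replaced by a logistic stick-breaking construction, which is what admits the Pólya-gamma auxiliary-variable augmentation and makes conjugate Bayesian updates possible; the simulation above only illustrates the generative structure.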
Scott is a postdoctoral fellow in the labs of Liam Paninski and David Blei at Columbia University. He completed his Ph.D. in Computer Science at Harvard University under the supervision of Ryan Adams and Leslie Valiant, and he received his B.S. in Electrical and Computer Engineering from Cornell University. Prior to graduate school, he worked at Microsoft as a software engineer on the Windows networking stack. His research is focused on machine learning, computational neuroscience, and the general question of how computer science and statistics can help us decipher biological computation.

Page last modified on May 01, 2017, at 08:23 AM