Discovering Latent Network Structure In Neural Spike Train Recordings
The steady expansion of neural recording capability provides exciting opportunities to discover unexpected patterns and gain new insights into neural computation. Realizing these gains requires statistical methods for extracting interpretable network structure from large-scale neural recordings. In this talk I will present our recent work on methods that reveal such structure in simultaneously recorded multi-neuron spike trains. We combine random network models with Hawkes processes, a type of multivariate point process, in a joint probabilistic model, and derive efficient fully-Bayesian inference algorithms. We extend these models to handle nonlinear interactions by leveraging recent innovations in Bayesian inference for logistic models. Interpretable properties such as latent cell types and features, hidden states of the network, and unknown synaptic plasticity rules are incorporated into the model as latent variables that govern the network. We apply our models to neural recordings from primate retina, rat hippocampal place cells, and neural simulators to discover latent structure in population recordings.
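The Hawkes process mentioned above is a multivariate point process in which each spike transiently raises the firing rates of connected neurons. As a minimal illustration (not the model from the talk, which couples Hawkes processes with random network priors and Bayesian inference), here is a sketch of simulating a multivariate Hawkes process with exponential kernels via Ogata's thinning algorithm; all parameter names and values are illustrative assumptions.

```python
import numpy as np

def simulate_hawkes(mu, W, beta, T, seed=0):
    """Simulate a multivariate Hawkes process by Ogata's thinning.

    mu:   (K,) baseline firing rates
    W:    (K, K) excitation weights; W[i, j] is the jump that a spike
          of neuron i adds to neuron j's future intensity
    beta: decay rate of the exponential interaction kernel
    T:    time horizon
    Returns a time-ordered list of (spike_time, neuron_index) events.
    """
    rng = np.random.default_rng(seed)
    K = len(mu)
    events = []
    t = 0.0
    while True:
        # Intensity at the current time; with exponential kernels it only
        # decays between events, so it upper-bounds the intensity ahead.
        lam = mu + sum(W[i] * np.exp(-beta * (t - s)) for s, i in events)
        lam_bar = lam.sum()
        # Propose the next candidate spike time from the bounding rate.
        t += rng.exponential(1.0 / lam_bar)
        if t >= T:
            break
        # Recompute the intensity at the candidate time and thin.
        lam = mu + sum(W[i] * np.exp(-beta * (t - s)) for s, i in events)
        if rng.uniform() * lam_bar <= lam.sum():
            # Accepted: assign the spike to a neuron in proportion
            # to its share of the total intensity.
            k = int(rng.choice(K, p=lam / lam.sum()))
            events.append((t, k))
    return events
```

With positive off-diagonal weights in `W`, spikes on one neuron raise the short-term rate of its neighbors, producing the kind of excitatory network interaction that the inference methods in the talk aim to recover from data.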
Scott is a Computer Science Ph.D. candidate at Harvard University, where he is advised by Leslie Valiant and Ryan Adams and is a member of the HIPS group. His research focuses on computational neuroscience, machine learning, and the general question of how computer science can help us decipher neural computation. This includes bottom-up methods for inferring statistical patterns in large-scale neural recordings as well as top-down models of probabilistic reasoning in neural circuits.
He completed his B.S. in Electrical and Computer Engineering at Cornell University in 2008. Prior to beginning graduate school, he worked for three years as a software development engineer at Microsoft, where he worked on the Windows networking stack.