I am currently a postdoctoral researcher working with Terry Sejnowski in the Computational Neurobiology Lab (CNL) at the Salk Institute. Before that, I worked with Pankaj Mehta while earning my physics PhD from Boston University. And even earlier than that, I grew up and went to college in Wisconsin.
My current research focuses on applying artificial neural networks (aka deep learning) to real biological neural networks, both to analyze experimental neuroscience data and to answer questions from computational neuroscience. When I list my past research projects on paper and in order, I have to admit that even to me it looks like a jumble of topics. However, the name of my blog, N 2 Infinity and Beyond, truly is the unifying thread behind my research. Statistical physics has shown that large systems with many interacting components are often "simpler" than smaller systems with fewer parts. Statistical physicists love to assume there are infinitely many interacting components (the N to infinity limit) and analyze the characteristics of the resulting simpler system. The N to infinity limit turns out to be a powerful mathematical and conceptual tool that can be applied to a variety of fields outside of physics, hence all my different research projects.
Other places you can find me around the Internet: