Ari Benjamin
Computational and theoretical neuroscience
Our thoughts and actions correspond to the flow of information through networks of neurons in our brains. In this very moment, your reading of this text is, as far as we know, identical to the propagation of activity through cells in your brain. Never ceases to be wild, huh?
I believe that the best way to understand this miracle of correspondence between mind and brain is through mathematical models of neural networks.
Yet while modern AI is the most mind-like formalism we have yet built, it is overly abstracted from neurobiology. There is much we know about neurons that has no home in AI models of the brain.
Current position
I work as a postdoc in the laboratory of Tony Zador at Cold Spring Harbor Laboratory. I completed my PhD with Konrad Kording at the University of Pennsylvania. My dissertation was titled "Machine learning as tool and theory for neuroscience."
Research themes
What are the computational roles of cell types? Modern estimates put the number of cell types in the brain between 5,000 and 10,000. What is the computational purpose of such diversity?
AI for single-cell biology Any theory of cellular function must be grounded in experimental knowledge. In reality, cellular biology is extremely complex and only beginning to be understood. I believe that AI methods are the right tools for uncovering the hidden structure of cellular neurobiology.
How does initial connectivity shape what can easily be learned? All animals easily learn some things but are stumped by other problems. What defines the line between easy and hard, 'natural' and 'unnatural' tasks? I use ideas from deep learning theory to understand learning biases, and look to experimental data for hypotheses. I operate under the hypothesis that learning modifies a genetically-encoded initial scaffold of long-range connectivity that defines brain area identities.
What are the algorithms of learning in the brain? How do cell types coordinate their plasticity such that animals learn what they do? And what is it that we learn, exactly? I follow several literatures, including neural plasticity, deep learning optimization, credit assignment, and classical learning theory, and attempt to tie these together into a coherent picture of learning and development.