Ari Benjamin
Bridging neuroscience and AI
What can we learn, and why?
All animals learn some things easily but are stumped by others. What defines the line between easy and hard, between 'natural' and 'unnatural' tasks? The answer determines who we are: we are the product of our learning algorithms.
To answer this question, I work as a theorist and data analyst in Tony Zador's lab at Cold Spring Harbor Laboratory, following a PhD with Konrad Kording at UPenn.
Machine learning as tool and theory for neuroscience
In my theory work, I study artificial neural networks (ANNs) as model systems of learning in the brain. Like any learning machine, ANNs extract certain generalizations from data and not others. By characterizing these learning preferences in a simple, tractable setting, we can gain insight into learning in the mammalian brain.
Artificial neural networks, however, are vastly oversimplified models of the brain. To help discover what is missing, I also analyze neurobiological data with machine learning methods. Theory and experiment need each other, yet in neuroscience they are often too far apart.
In collaboration with experimental labs collecting single-cell neuroanatomical (projection and transcription) and functional (two-photon imaging) data, I seek to identify the cell types and basic functional components of learning and computation. This work involves training large 'foundation models' on these data modalities, models that can reveal general properties and transfer to smaller datasets of interest. I operate under the hypothesis that the genome encodes canonical rules for cortical learning, which modify a genetically encoded initial scaffold of long-range connectivity.