To Understand the Brain, This Neuroscientist Built the Smallest AI Model That Could
The dominant approach in artificial intelligence is to build systems as large as possible and train them on as much data as possible. Scale, the thinking goes, produces capability. Neuroscientist Benjamin Cowley of Cold Spring Harbor Laboratory has been pursuing the opposite strategy - and arriving at insights that the big-model approach cannot provide.
Cowley's goal is not to build better AI. It is to understand how the biological brain processes the visual world. His problem with large AI models is that they are, in a fundamental sense, just as opaque as the brain itself. Replacing one complex black box with another does not advance understanding. So Cowley, working with Carnegie Mellon's Matthew Smith and Princeton's Jonathan Pillow, set out to build an AI model of the visual system that was small enough to actually examine.
From Large to Tiny Without Losing Accuracy
The starting point was standard: macaques were shown carefully curated natural images while researchers recorded which neurons in their visual cortex fired in response. A large AI model was then trained to predict these neural responses, outperforming competing models by more than 30 percent. So far, a routine approach.
The unusual step came next. Using model-compression techniques, the team reduced the large model to roughly one-thousandth of its original size. The resulting model is small enough to attach to an email, yet it retained the predictive accuracy of its massive predecessor - it still correctly identified which images triggered which neurons.
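The article does not say which compression method the team used, but one standard way to shrink a model while preserving its predictions is low-rank truncation of its weights. The sketch below is a hypothetical, linear-model caricature of that idea: compress a big weight matrix into two thin factors, then verify the small model still makes nearly the same predictions.

```python
import numpy as np

# Hypothetical compression sketch (not the study's code): a large linear
# "image -> firing rate" model, shrunk by keeping only the top singular
# components of its weight matrix.
rng = np.random.default_rng(0)
n_pixels, n_neurons = 400, 12

# The big model's weights, built to be near-low-rank (structure plus a
# little noise), as trained networks often are.
A = rng.normal(size=(n_pixels, 3))
B = rng.normal(size=(3, n_neurons))
W_big = A @ B + 0.01 * rng.normal(size=(n_pixels, n_neurons))

# Compress: truncated SVD, stored as two thin factor matrices.
U, s, Vt = np.linalg.svd(W_big, full_matrices=False)
rank = 3
left, right = U[:, :rank] * s[:rank], Vt[:rank]

big_params = W_big.size
small_params = left.size + right.size
print(f"parameters: {big_params} -> {small_params}")

# The compressed model should predict nearly the same responses.
images = rng.normal(size=(100, n_pixels))
pred_big = images @ W_big
pred_small = images @ left @ right
rel_err = np.linalg.norm(pred_big - pred_small) / np.linalg.norm(pred_big)
print(f"relative prediction error: {rel_err:.4f}")
```

The payoff mirrors the article's point: the two thin factors are small enough to inspect directly, while the predictions barely change.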
What changed was interpretability. With millions of parameters, the large model is effectively unexaminable. With thousands, its internal structure becomes accessible. The team could ask: what are the model neurons actually doing? What features of an image drive their responses?
Neurons That Love Dots
The answer revealed a consistent architecture. Every compact model neuron processed images by first detecting low-level features - edges, colors, spatial frequencies - and then combining them into more specific preferences. Different neurons specialized in different combinations. Some responded strongly to elongated edges at particular orientations. Others developed what Cowley describes as a preference for dots.
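That two-stage structure - low-level feature detectors whose outputs are combined into a specific preference - can be caricatured with hand-made filters. The edge and dot filters below, and the "neurons" built from them, are illustrative inventions, not the study's learned weights.

```python
import numpy as np

# Hypothetical caricature of the architecture the compact model revealed:
# stage 1 applies low-level filters, stage 2 combines them into a
# preference. These filters are hand-drawn, not learned.

# 5x5 vertical-edge filter: bright-to-dark transition down the middle.
edge = np.zeros((5, 5))
edge[:, :2], edge[:, 3:] = 1.0, -1.0

# 5x5 center-surround "dot" filter: bright center, inhibitory surround.
yy, xx = np.mgrid[-2:3, -2:3]
dot = np.where(xx**2 + yy**2 <= 1, 1.0, -0.25)

def respond(patch, filt):
    """Rectified filter response, loosely like a firing rate."""
    return max(0.0, float(np.sum(patch * filt)))

# Two toy stimuli: a vertical bar and a single bright spot.
bar = np.zeros((5, 5)); bar[:, 1] = 1.0
spot = np.zeros((5, 5)); spot[2, 2] = 1.0

# Stage 2: different neurons weight the same channels differently.
def edge_neuron(patch):  # prefers elongated edges
    return 1.0 * respond(patch, edge) + 0.1 * respond(patch, dot)

def dot_neuron(patch):   # a "dot-loving" neuron, per Cowley's description
    return 0.1 * respond(patch, edge) + 1.0 * respond(patch, dot)

print(dot_neuron(spot), dot_neuron(bar))    # dot neuron prefers the spot
print(edge_neuron(bar), edge_neuron(spot))  # edge neuron prefers the bar
```

With only two channels and two weights per neuron, you can read the preference straight off the numbers - which is the kind of inspection a thousand-parameter model permits and a million-parameter model does not.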
"In the monkey's brain - and in our brains, too, most likely - there's a group of V4 neurons that love dots," Cowley said. The apparent oddity becomes less strange when you consider what dots represent in natural scenes: eyes, circular objects, the visual features that signal social information. Evolution may have built dedicated dot detectors because of how important they are in navigating a world full of faces.
This is the level of specificity that large models cannot provide. A million-parameter model that accurately predicts neural responses is useful for generating predictions; it is not useful for understanding what computation underlies those predictions. The compression step converts predictive accuracy into biological insight.
What Came Before and What Might Follow
Cowley's previous work applied similar methods to fruit fly neural circuits. Macaques represent a step considerably closer to human neuroscience - the primate visual system is anatomically and functionally much more similar to our own than the insect nervous system is.
Looking ahead, Cowley is thinking about applying the same framework to models of neurological disease. The logic is this: if you know the images that drive neurons to talk to each other under normal conditions, you might be able to identify what has changed when those connections degrade. In conditions like Alzheimer's dementia, where synaptic loss is a central feature, compact interpretable models might help define what inputs would need to be provided to re-engage pathways that have gone quiet. This is speculative - the study published in Nature establishes the compression approach in the visual domain, not in disease models. But it represents a direction the researchers plan to pursue.
The limitations are real. The recordings were made in macaque visual cortex, not in human subjects, and focused on relatively early stages of visual processing. Whether the same compression approach would work for modeling higher-level visual areas, or brain regions involved in memory and cognition rather than vision, is not yet established. The path from small AI models of normal visual cortex to practical tools for understanding neurological disease is long.
The study was published in Nature and conducted across Cold Spring Harbor Laboratory, Carnegie Mellon University, and Princeton University.