Learning makes neurons coordinate more, not less, overturning a decades-old assumption
Science, March 2026
Think about the last time you got better at something visual: reading a handwritten note, spotting a friend in a crowd, or recognizing a bird in flight. The dominant theory in neuroscience holds that your brain improves at such tasks by making the neurons responsible for them more independent, reducing redundancy so information can be read out more cleanly. A new study finds the opposite.
The prevailing view, and its reversal
For decades, the standard model of perceptual learning held that the brain becomes more efficient by pushing neurons to act independently. Shared activity between neurons was considered noise, something that interfered with clean information processing. Learning, according to this framework, should reduce that shared activity.
Shizhao Liu, a graduate student in the labs of Ralf Haefner and Adam Snyder at the University of Rochester's Department of Brain and Cognitive Sciences, tested this prediction by tracking the same small networks of neurons in the visual cortex over several weeks as subjects learned to distinguish different visual patterns. The results, published in Science, show the opposite of what the standard model predicts.
Before learning, neurons mostly worked independently. As subjects improved their visual discrimination skills, the neurons increased their shared activity. They became more coordinated, not less.
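In this literature, "shared activity" is typically quantified as noise correlation: the trial-to-trial correlation between two neurons' responses to repeated presentations of the same stimulus. A minimal sketch of that measurement in Python, using synthetic spike counts rather than the study's data:

    import numpy as np

    def noise_correlation(counts_a, counts_b):
        # Pearson correlation of trial-to-trial fluctuations in two neurons'
        # responses to repeated presentations of the same stimulus.
        a = counts_a - counts_a.mean()
        b = counts_b - counts_b.mean()
        return (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

    # Synthetic data for illustration only: 200 trials of one stimulus,
    # with a fluctuation common to both neurons plus private noise.
    rng = np.random.default_rng(0)
    shared = rng.normal(size=200)
    neuron_a = 10 + 2 * shared + rng.normal(size=200)
    neuron_b = 12 + 2 * shared + rng.normal(size=200)
    print(noise_correlation(neuron_a, neuron_b))  # clearly positive

In these terms, the study's finding is that learning pushed the correlations among task-relevant neurons up, not down.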
A sports team, not a factory
Snyder offered an analogy: instead of a factory assembly line where each worker operates in efficient isolation, the neurons behaved more like a sports team that communicates and coordinates to solve problems together. The shared information made each individual neuron better informed and the group more flexible and adaptive.
Critically, this coordination only appeared during active tasks. When subjects passively viewed the same images without making decisions, the effect vanished. The neurons most important for the task showed the biggest boost in coordination, and the effect was strongest at the moments when decisions were being made.
Nor are these permanent changes. The researchers believe the coordination is guided by feedback signals from higher-level brain areas, which lets neurons adjust their behavior dynamically depending on what the task demands.
The brain as inference engine
The findings support a growing theoretical framework in neuroscience: the brain is not a passive conveyor belt that simply forwards sensory information to higher processing areas. Instead, it constantly blends what it perceives with what it has learned to expect. Perception, in this view, is active inference, a process of combining incoming signals with prior knowledge to construct an interpretation of the world.
That blending requires neurons to share information. If each neuron encoded only its own independent slice of the visual scene, there would be no mechanism for prior expectations to shape sensory processing. The increased coordination Liu observed is consistent with feedback from higher brain areas pushing sensory neurons to incorporate learned expectations into their responses.
Liu described the shift directly: sensory areas of the brain are not just passively encoding the world. They are actively performing inference by combining incoming information with what the brain has learned to expect.
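The inference idea has a simple quantitative core. For Gaussian beliefs, combining a learned expectation (the prior) with a noisy measurement (the likelihood) yields a posterior estimate that is a precision-weighted average of the two. A toy illustration, with numbers chosen for clarity rather than drawn from the study:

    def combine(prior_mean, prior_var, obs_mean, obs_var):
        # Gaussian prior combined with a Gaussian likelihood: the posterior
        # mean is a precision-weighted average, with precision = 1 / variance.
        prior_prec, obs_prec = 1.0 / prior_var, 1.0 / obs_var
        post_var = 1.0 / (prior_prec + obs_prec)
        post_mean = post_var * (prior_prec * prior_mean + obs_prec * obs_mean)
        return post_mean, post_var

    # Learned expectation: a tilt near 45 degrees (variance 4).
    # Noisy sensory measurement: 52 degrees (variance 9).
    print(combine(45.0, 4.0, 52.0, 9.0))  # mean ~47.2, pulled toward the prior

The estimate lands between expectation and evidence, weighted toward whichever cue is more precise; carrying out that computation requires neurons to have access to one another's information.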
Implications for AI and learning disorders
The findings have potential implications in two directions. For neuroscience, understanding how the brain coordinates neurons during learning could provide insights into learning disorders and conditions that affect perception. If learning depends on increasing neural coordination rather than reducing it, then disorders that disrupt coordination mechanisms could impair learning in ways that current models do not predict.
For artificial intelligence, the results suggest a different architectural approach. Most current AI systems use discriminative architectures that map inputs directly to outputs. Haefner noted that incorporating generative feedback loops, in which internal models shape sensory representations, may lead to systems that learn faster from limited data, handle uncertainty better, and adapt more flexibly to changing tasks.
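As a rough sketch of that contrast, in the spirit of generic predictive-coding models rather than any specific system from the paper: a discriminative network maps input to output in one feedforward pass, while a generative feedback loop iteratively adjusts an internal representation until its own prediction of the input matches what was actually received.

    import numpy as np

    rng = np.random.default_rng(1)
    G = rng.normal(scale=0.3, size=(16, 8))  # generative weights: latent -> predicted input

    def generative_feedback(x, steps=200, lr=0.1):
        # A discriminative network would compute z = D @ x in one pass.
        # Here, instead, the latent estimate z is refined until the
        # top-down prediction G @ z accounts for the input x.
        z = np.zeros(8)
        for _ in range(steps):
            error = x - G @ z        # mismatch between prediction and input
            z += lr * (G.T @ error)  # feedback nudges the representation
        return z

    x = G @ rng.normal(size=8) + 0.05 * rng.normal(size=16)  # structured, noisy input
    z = generative_feedback(x)
    print(np.linalg.norm(x - G @ z))  # small residual: the model explains the input

The loop is just gradient descent on the prediction error, but it captures the architectural point: the representation is shaped by what the internal model expects, not only by the feedforward input.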
Study limitations
The study tracked neurons in the visual cortex during a specific type of perceptual learning task. Whether the same pattern of increased coordination occurs in other brain areas or during other types of learning, such as motor learning or language acquisition, is not established. The work was also done in animal subjects, and while visual cortex organization is broadly similar across mammals, direct translation to human learning processes requires caution.
The mechanistic explanation, that feedback from higher brain areas drives the coordination, is supported by the data but not definitively proven. Alternative explanations, including local circuit mechanisms, have not been entirely ruled out.