Three MIT AI Tools Can Now Track Every Neuron in a Wriggling Worm or Pulsating Jellyfish
Neuroscientists studying simple, transparent animals have a distinct advantage: they can watch every neuron in the brain light up in real time as the animal behaves. Visibility is only half the battle, though. A worm wriggles. A jellyfish pulsates. Every time the animal moves, the same neuron occupies a different position in the image frame. Matching each neuron across thousands of video frames - correctly, at scale, and fast enough to be practical - has been one of the field's most stubborn technical bottlenecks for years.
An MIT team has now published three AI tools in eLife that largely solve the problem. Developed in the lab of Steven Flavell at the Picower Institute for Learning and Memory, the suite - BrainAlignNet, AutoCellLabeler, and CellDiscoveryNet - handles alignment, identity labeling, and unsupervised clustering across animals with accuracy that matches or exceeds that of trained human annotators, while cutting processing time by a factor of 600.
Five hours per worm, reduced to minutes
The scale of the previous bottleneck illustrates why the tools matter. When Flavell's lab was conducting studies of brainwide activity and serotonin's role in behavior around 2022, annotating the identity of each neuron in a single worm's video recording took an expert up to five hours. The lab uses a four-color barcoding system called NeuroPAL that marks each neuron type with a distinctive color combination, but reading those barcodes required months of training. When Flavell looked into outsourcing the annotation, estimates exceeded six figures in cost.
Within days of receiving those estimates, graduate student Adam Atanas delivered the first version of AutoCellLabeler. The tools have since ended the lab's need to choose between speed and accuracy.
What each tool does
BrainAlignNet addresses the pure alignment problem: is the neuron that appeared here in frame 1 now located there in frame 2? It operates at single-pixel accuracy, matches ground truth 99.6% of the time, and runs 600 times faster than the lab's previous approach. The tool requires no knowledge of neuron identity.
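The core of the alignment problem can be illustrated with a much simpler stand-in than BrainAlignNet itself, which is a neural network that learns to register whole image volumes. The sketch below, with hypothetical function and parameter names, matches a single bright spot between two frames by brute-force template matching - a toy version of the question "where did this neuron go?":

```python
import numpy as np

def match_neuron(frame_a, frame_b, pos, patch=3, search=5):
    """Find where the bright spot at `pos` in frame_a appears in frame_b.

    Illustrative template matching only -- BrainAlignNet itself is a
    learned registration network, not an exhaustive search like this.
    """
    r, c = pos
    template = frame_a[r - patch:r + patch + 1, c - patch:c + patch + 1]
    best_score, best_pos = -np.inf, pos
    for dr in range(-search, search + 1):
        for dc in range(-search, search + 1):
            rr, cc = r + dr, c + dc
            cand = frame_b[rr - patch:rr + patch + 1, cc - patch:cc + patch + 1]
            if cand.shape != template.shape:
                continue  # candidate window ran off the image edge
            score = float((template * cand).sum())  # unnormalized correlation
            if score > best_score:
                best_score, best_pos = score, (rr, cc)
    return best_pos

# Toy example: a single "neuron" (bright pixel) shifts by (2, -1)
# between two frames as the animal moves.
a = np.zeros((20, 20)); a[10, 10] = 1.0
b = np.zeros((20, 20)); b[12, 9] = 1.0
found = match_neuron(a, b, (10, 10))  # -> (12, 9)
```

In a real recording every neuron moves at once and the deformation is non-rigid, which is why an exhaustive per-neuron search does not scale and a learned, whole-image approach is needed.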
AutoCellLabeler takes the next step, identifying what type of cell each neuron is. It requires some human-annotated training data but tolerates incomplete labeling. With the full four-color NeuroPAL system it achieves 98% accuracy; with only two color channels that figure drops only slightly.
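One standard way a classifier can tolerate incomplete annotation, sketched here with hypothetical names (the paper does not spell out AutoCellLabeler's loss in this article), is to mask unlabeled cells out of the training loss so that partial human labels still provide a signal:

```python
import numpy as np

def masked_cross_entropy(logits, labels):
    """Mean cross-entropy over labeled entries only; label -1 = unlabeled.

    A generic masking trick, shown for illustration -- not a claim about
    AutoCellLabeler's exact training objective.
    """
    mask = labels >= 0
    z = logits[mask]
    z = z - z.max(axis=1, keepdims=True)  # numerical stability
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(mask.sum()), labels[mask]].mean()

# Three neurons, four candidate cell types; the middle neuron was never
# annotated by a human, so it contributes nothing to the loss.
logits = np.array([[4.0, 0.0, 0.0, 0.0],
                   [0.0, 4.0, 0.0, 0.0],
                   [0.0, 0.0, 4.0, 0.0]])
labels = np.array([0, -1, 2])
loss = masked_cross_entropy(logits, labels)  # small: confident, correct
```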
CellDiscoveryNet operates without any training or supervision. It clusters fluorescently labeled cell types across different animals - determining whether a given neuron in worm A is the same cell type as a neuron in worm B - purely from image structure. Its performance matched that of well-trained human labelers.
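The idea of matching cell types across animals without labels can be sketched in miniature. In this hypothetical example each cell is reduced to a four-channel fluorescence profile and paired greedily by cosine similarity; CellDiscoveryNet learns its own features directly from whole images, so treat this only as a conceptual stand-in:

```python
import numpy as np

def match_across_animals(profiles_a, profiles_b):
    """Pair each cell in animal A with its most similar cell in animal B."""
    a = profiles_a / np.linalg.norm(profiles_a, axis=1, keepdims=True)
    b = profiles_b / np.linalg.norm(profiles_b, axis=1, keepdims=True)
    sim = a @ b.T  # cosine similarity, cells_A x cells_B
    pairs = {}
    for i in np.argsort(sim.max(axis=1))[::-1]:  # most confident first
        j = int(np.argmax(sim[i]))
        if j not in pairs.values():  # each B cell claimed at most once
            pairs[int(i)] = j
    return pairs

# Toy data: three cell types with distinct 4-channel signatures, recorded
# in a different order (and with a little noise) in a second animal.
types = np.array([[1.0, 0.1, 0.1, 0.1],
                  [0.1, 1.0, 0.1, 0.1],
                  [0.1, 0.1, 1.0, 0.1]])
rng = np.random.default_rng(0)
animal_a = types + rng.normal(0, 0.01, size=(3, 4))
animal_b = types[[2, 0, 1]] + rng.normal(0, 0.01, size=(3, 4))
pairs = match_across_animals(animal_a, animal_b)
# -> {0: 1, 1: 2, 2: 0}: each A cell finds its type in the shuffled B order
```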
Critically, none of the tools required the researchers to instruct the neural networks to examine specific features. The networks learned which image characteristics led to task success through optimization alone.
Jellyfish: an unexpected proving ground
Brady Weissbourd, an assistant professor at the Picower Institute and study co-author, has been trying to extract neural activity data from videos of C. hemisphaerica jellyfish - animals whose translucent bodies allow every neuron to be imaged but whose fluid, unstructured movement makes cell tracking nearly impossible.
"They call it a jellyfish for a reason," Weissbourd said. "Any part of it can move relative to any other part of it. One of our major bottlenecks was figuring out how to actually extract neural activity data from those videos because all of the neurons are moving around arbitrarily relative to each other."
BrainAlignNet was applied to only one cell type in the jellyfish in the current study, covering roughly 10% of the total neuron population. Weissbourd is now working to label all neuron types and is developing a microscope capable of imaging freely swimming animals.
Application beyond worms and jellyfish
Flavell frames the tools as a potential model for other imaging-heavy fields. "People are swimming in microscopy data these days," he said. "Automatically identifying all of the cells in each image is a problem that a lot of people are grappling with." The approach could apply to human tissue samples, organoids, or any other biological imaging dataset where tracking cell identity across many images is a challenge. The tools are not yet validated in those contexts, but the underlying architecture - neural networks that learn relevant features without explicit programming - is general.