A robotic chip that produces 1,000 nanoparticle formulations per hour could finally let AI design mRNA delivery
ACS Nano, March 2026
The lipid nanoparticles that delivered mRNA in COVID-19 vaccines were found largely through trial and error. Researchers made libraries of particles, tested them, and picked the winners. It worked, but it was slow, and the design space is enormous: on the order of 10¹⁵ (a quadrillion) possible formulations, any one of which might be optimal for a given therapy.
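To see how a number like 10¹⁵ can arise, here is a back-of-envelope sketch. Every count below is an illustrative assumption chosen to show the combinatorics, not a figure from the paper: the design space grows as the product of the chemical choices and the discretized mixing ratios.

```python
# Back-of-envelope estimate of the LNP design space.
# All counts are illustrative assumptions, not figures from the study.
n_ionizable_lipids = 10_000   # assumed candidate ionizable lipid structures
n_helper_lipids    = 10       # assumed helper phospholipid choices
n_sterols          = 10       # assumed cholesterol / analog choices
n_peg_lipids       = 10       # assumed PEG-lipid choices
ratio_levels       = 100      # assumed molar-ratio settings per component
n_components       = 4        # the four standard LNP components

design_space = (n_ionizable_lipids * n_helper_lipids * n_sterols
                * n_peg_lipids * ratio_levels ** n_components)
print(f"{design_space:.0e}")  # prints 1e+15
```

Even modest per-component choices multiply out to a space no brute-force screen can cover, which is why data-efficient AI search is attractive.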
Artificial intelligence could, in theory, navigate that space far more efficiently. But AI needs data to learn from, and the bottleneck has been generating enough nanoparticle formulations to train predictive models. A new platform built by engineers at the University of Pennsylvania, described in ACS Nano, may remove that constraint.
The formulation bottleneck
Making a lipid nanoparticle (LNP) involves three basic steps: synthesizing ionizable lipids, combining them with other ingredients into a formulation, and testing the resulting particles. The first and last steps can already operate at high throughput. Thousands of new lipids can be synthesized and thousands of formulations can be tested simultaneously.
The middle step is where everything stalls. Current methods produce only tens to hundreds of formulations per hour. Manual mixing is slow and requires cleaning equipment between runs. Robotic liquid handlers can prepare ingredients faster but rely on inconsistent mixing methods that introduce batch-to-batch variability. Microfluidic chips produce consistent particles but operate mostly in serial fashion.
The result, as doctoral student Andrew Hanna put it, is that formulation is the bottleneck. Without scaling that step, the large, systematic datasets that machine learning models need simply cannot be generated.
How LIBRIS works
LIBRIS, short for LIpid nanoparticle Batch production via Robotically Integrated Screening, resembles a miniature factory. Tubes carrying different LNP components feed into a glass microfluidic chip housed in an aluminum casing. Inside the chip, components mix in microscopic channels under precisely controlled pressure. A well plate moves beneath the chip to collect streams of particles in solution.
The key innovation is parallelism. The chip contains multiple channels that create up to eight distinct formulations simultaneously. Rapid cleaning between runs allows near-continuous operation, producing roughly 1,000 LNP formulations per hour. That is approximately 100 times faster than conventional microfluidic methods.
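The throughput arithmetic is straightforward. The eight-formulations-per-cycle figure comes from the article; the per-cycle time below is an assumed value chosen so the numbers are consistent with the reported rate, not a measured specification.

```python
# Rough throughput model for a parallel microfluidic platform.
channels = 8            # formulations produced per cycle (from the article)
cycle_seconds = 28.8    # assumed mix + collect + clean time per cycle

cycles_per_hour = 3600 / cycle_seconds
formulations_per_hour = channels * cycles_per_hour
print(round(formulations_per_hour))  # prints 1000
```

At that rate, a serial chip making one formulation per cycle would need a sub-4-second cycle, cleaning included, to keep pace, which is why parallel channels rather than faster serial cycling drive the gain.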
Because the mixing conditions are tightly controlled, the particles produced are consistent. This consistency is critical for generating datasets where differences in biological performance can be attributed to differences in formulation rather than manufacturing variability.
From screening to design
The long-term vision extends beyond simply making more particles faster. Michael Mitchell, associate professor of bioengineering and co-senior author, described the goal as moving from screening to rational design. Instead of making many formulations and asking which works best, researchers want to specify the properties they need and then build a nanoparticle to match.
That requires understanding the relationship between chemical inputs, such as lipid structure and component ratios, and biological outcomes, such as how effectively a particle delivers its payload inside cells. Currently, that relationship is poorly mapped. LIBRIS provides the data generation capacity needed to start filling in the map.
Co-senior author David Issadore, professor of bioengineering, emphasized that AI excels at pattern recognition but needs sufficient data for patterns to emerge. The platform is designed to produce exactly that: large, well-defined libraries of LNPs whose properties and performance can be systematically correlated.
What remains to be done
LIBRIS addresses the formulation bottleneck, but building predictive AI models for LNP design will require more than fast formulation. The biological testing step, while already capable of high throughput in cell-based assays, becomes much slower and more expensive when moving to animal models. The gap between in vitro screening results and in vivo performance remains a fundamental challenge in nanoparticle development.
The platform has been demonstrated in a laboratory setting. Scaling it for industrial use and validating its output across diverse therapeutic applications will take additional work. The team has not yet published results showing AI models trained on LIBRIS-generated data outperforming traditional approaches in designing LNPs for specific applications.
Still, the throughput improvement is substantial enough to change what is practical. At 1,000 formulations per hour, a single platform could generate in one day what previously took months. If the quality of the data matches the quantity, the foundation for AI-guided nanoparticle design may finally be in place.
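The day-versus-months comparison checks out under plausible assumptions. The 1,000-per-hour rate is from the article; the conventional rate below is an assumed point in the "tens to hundreds per hour" range noted earlier, over an assumed 8-hour working day.

```python
# Illustrative comparison of LIBRIS vs. conventional formulation throughput.
libris_per_day = 1000 * 24       # continuous automated operation (article rate)
conventional_per_day = 50 * 8    # assumed: 50/hour over an 8-hour workday

days_equivalent = libris_per_day / conventional_per_day
print(days_equivalent)  # prints 60.0 -> one LIBRIS day ~ two months of manual runs
```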