Science 2026-03-19

Hair-thin implants let amputees control a prosthetic leg with their thoughts alone

A bidirectional neural interface reads movement intent and restores sensation through a single device — a first for lower-limb prosthetics.

Based on a press release from Chalmers University of Technology. Study published in Nature Communications (March 2026).

Try to imagine something you do without thinking: placing your heel on a step, feeling the stair's edge press into your sole, shifting your weight forward as your ankle adjusts. Now remove all of that. No pressure. No angle. No feedback. Your prosthetic leg swings forward on a timer, governed by accelerometers and gyroscopes that know nothing about what you intend to do next. You are a passenger in your own stride.

That is the daily reality for millions of people living with above-knee amputations. Unlike upper-limb prostheses, which increasingly tap into the wearer's own muscle signals, prosthetic legs remain stubbornly autonomous. They react to motion already happening rather than motion the wearer wants to initiate. The brain still sends commands down the remaining nerves — flex the ankle, curl the toes — but the hardware at the other end isn't listening.

A team spanning five countries has just proved that it can be.

The phantom signals that never stopped

When a limb is amputated, the nerves that once controlled it don't go silent. They keep firing. Every time a person with an above-knee amputation thinks about pointing their foot or bending their knee, electrical impulses travel down the sciatic nerve's tibial branch and arrive at a dead end. These "phantom" signals have been documented for decades, but reading them accurately enough to drive a prosthesis has remained out of reach — especially in the leg, where the anatomy and signal characteristics differ substantially from the arm.

Previous studies that managed to decode nerve signals focused exclusively on upper limbs. The reasons were partly practical: arm amputees have more residual muscle to work with, and the signals from arm nerves tend to be stronger. Leg nerves, post-amputation, produce faint, noisy output that has resisted reliable capture.

Giacomo Valle, an assistant professor at Chalmers University of Technology in Sweden and one of the study's senior authors, framed the challenge bluntly: "When you tell your body to move, signals travel through the nerves to the muscles which carry out the action — even if the limb is no longer there. This means you can find all the information needed within those nerves. The major challenge is extracting that information and understanding the neural code behind it."

Four electrodes, each thinner than a hair

The solution required hardware and software to advance in lockstep. On the hardware side, the researchers used a neural implant originally developed at the University of Freiburg — four ultrathin, flexible electrodes inserted directly into the tibial branch of the sciatic nerve in two participants with above-knee amputations. Each electrode is roughly the diameter of a human hair, pliable enough to move with the nerve rather than damage it.

These implants had been used before, but only in one direction: stimulating nerves to restore a sense of touch. No one had successfully used them to read outgoing motor signals from leg nerves. The Chalmers-led team did both.

When the participants were asked to attempt movements with their missing leg — bend the knee, rotate the ankle, wiggle the toes — the implants captured the resulting nerve traffic. The signals were faint. They were buried in noise. And they arrived as rapid-fire bursts of electrical spikes rather than the smooth, continuous waveforms that conventional signal-processing tools handle best.

An AI that thinks in spikes

This is where the software innovation comes in. Standard artificial intelligence — the kind behind image generators and large language models — processes numbers on a continuous scale. Biological neurons don't work that way. They communicate in discrete, precisely timed electrical pulses: spikes. Trying to decode neural spike trains with conventional AI is a bit like trying to read Morse code by treating the dots and dashes as a smooth audio wave. You can extract something, but you lose the timing that carries most of the meaning.

The team turned instead to Spiking Neural Networks (SNNs), a class of AI architecture that processes information as timed spikes — matching the format the nervous system actually uses. Elisa Donati, a professor at the University of Zurich and ETH Zurich and the study's other senior author, explained the logic: "Peripheral nerves communicate through discrete electrical impulses — or spikes — and spiking neural networks are therefore naturally suited to processing this type of signal. By aligning our computational models more closely with biology, we can extract movement intent efficiently, using compact models and relatively limited data."

The practical upshot: the SNN-based decoder identified which movement a participant was attempting with what the researchers describe as high accuracy, distinguishing not just gross actions like knee flexion but fine-grained intentions — ankle rotation, toe curling — that had never been decoded from leg nerves before.
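The study's actual decoder is not described in enough detail here to reproduce, but the core idea behind a spiking readout can be sketched in a toy example: leaky integrate-and-fire neurons accumulate incoming spikes, leak charge over time, and fire when a threshold is crossed, so the classifier's output depends on spike timing and clustering rather than on a smoothed average. Everything below — the channel assignments, weights, decay constant, and threshold — is invented for illustration and has no connection to the published system.

```python
import random

def lif_response(spike_train, weights, tau=0.9, threshold=1.5):
    """Leaky integrate-and-fire readout neuron: the membrane potential
    decays by a factor `tau` each time step, rises with weighted input
    spikes, and emits an output spike (then resets) at `threshold`."""
    v, out_spikes = 0.0, 0
    for spikes in spike_train:
        v = v * tau + sum(w * s for w, s in zip(weights, spikes))
        if v >= threshold:
            out_spikes += 1
            v = 0.0
    return out_spikes

def make_spike_train(active_channels, n_channels=4, steps=50,
                     rate=0.6, noise=0.05, rng=None):
    """Synthetic nerve recording: channels in `active_channels` spike
    often; the rest carry sparse background noise."""
    rng = rng or random.Random(0)
    return [[1 if rng.random() < (rate if ch in active_channels else noise) else 0
             for ch in range(n_channels)]
            for _ in range(steps)]

def decode(spike_train, readout_weights):
    """Label the movement whose readout neuron fires the most."""
    counts = {label: lif_response(spike_train, w)
              for label, w in readout_weights.items()}
    return max(counts, key=counts.get)

# Hypothetical readout: each "movement" neuron listens to two channels.
readout = {
    "ankle_flex": [1.0, 1.0, 0.0, 0.0],
    "toe_curl":   [0.0, 0.0, 1.0, 1.0],
}

rng = random.Random(42)
flex = make_spike_train({0, 1}, rng=rng)
curl = make_spike_train({2, 3}, rng=rng)
print(decode(flex, readout))  # → ankle_flex
print(decode(curl, readout))  # → toe_curl
```

The real appeal of this formulation, as Donati notes, is that the input never has to be converted into a continuous waveform: the decoder consumes the same discrete, timed events the nerve produces.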

Toes that aren't there, decoded anyway

The resolution surprised even the researchers. The electrodes sat high in the residual limb, far from where toes would have been. Yet the system reliably decoded the intention to wiggle individual toes. Valle called it "amazing to see how electrodes placed high up in what remains of a leg could decode attempts to wiggle the toes."

That granularity matters for more than novelty. A prosthetic leg that can detect fine motor intent could adapt its ankle angle on uneven ground before the wearer stumbles, or modulate toe-off force during a sprint versus a stroll. Current prosthetic legs guess at these adjustments using external sensors. A nerve-linked system would know what the wearer actually wants.

But arguably the study's most significant finding is that the same implant that reads outgoing motor commands can also deliver incoming sensory signals — pressure, position, texture — back to the nerve. Previous research required separate hardware for each direction. Here, a single set of electrodes handles both.

"The system is bidirectional," Valle said. "Once electrodes are implanted inside the nerve, they can be used to communicate bidirectionally with the nervous system. So, for the first time, a single neurotechnology can provide both natural neural control and sensory feedback in the same implantable device."

Two patients, big caveats

This is a proof-of-concept study, and it comes with the limitations that label implies. The cohort was two people. The movements were decoded in a controlled laboratory setting, not while walking on pavement or climbing stairs. The implants recorded signals during structured tasks — "try to flex your ankle now" — rather than the continuous, overlapping stream of intentions that real-world locomotion demands.

Translating from phantom-limb movements (imagined actions with a missing leg) to actual prosthetic control introduces another layer of complexity. The mapping between neural intent and mechanical output will need calibration that accounts for fatigue, day-to-day signal variation, and the cognitive load of consciously driving a limb that was once automatic.

The SNN decoder, while efficient, was tested offline — meaning the recorded signals were analyzed after capture, not in real time. For a wearable prosthesis, the entire pipeline from spike detection to motor command must happen in milliseconds. Donati has pointed to low-power, fully implantable systems as the goal, but real-time operation hasn't been demonstrated yet.

And longevity remains an open question. Neural implants can degrade, shift position, or provoke immune responses over months and years. The study does not report long-term stability data.

From proof to prosthesis

Still, the trajectory is clear. The team's next step, according to Valle, is to integrate the technology into a working prosthetic leg — one that a person controls through intention rather than inertia, and that sends sensation back through the same nerve channel. If the approach scales, it could change the equation for the majority of amputees worldwide, since lower-limb amputations outnumber upper-limb cases by a wide margin, yet have received far less attention in neural-interface research.

The work also carries implications beyond prosthetics. A reliable method for reading and writing signals in peripheral nerves could inform treatments for chronic pain, paralysis, and neurological conditions where the peripheral nervous system is intact but miscommunicating with the brain.

For now, the result is a narrow but decisive one: the nerve signals are there, they can be read, and an AI architecture modeled on biology's own logic can make sense of them. The leg prosthesis that feels like part of the body — not a gadget strapped to it — just moved from theoretical to plausible.

Study: "Decoding phantom limb movements from intraneural recordings," published in Nature Communications, March 2026.

Authors: Cecilia Rossi, Marko Bumbasirevic, Paul Cvancara, Thomas Stieglitz, Stanisa Raspopovic, Elisa Donati, and Giacomo Valle.

Institutions: Chalmers University of Technology (Sweden), University of Zurich and ETH Zurich (Switzerland), University of Belgrade (Serbia), University of Freiburg (Germany), Medical University of Vienna (Austria).