Engineering 2026-02-25 2 min read

An Umbrella Can Hijack a Tracking Drone - UC Irvine Team Shows How

The FlyTrap attack exploits weaknesses in how AI-powered autonomous tracking systems interpret visual patterns, drawing target-following drones close enough to net or crash

Autonomous target-tracking drones - aircraft that lock onto a selected person or object and follow without a human controller maintaining active input - are deployed in border patrol, law enforcement, security surveillance, consumer photography, and, increasingly, by people who use them for stalking. A research team at the University of California, Irvine has demonstrated that all of these use cases share a common vulnerability that an attacker can exploit with a portable umbrella and a printed visual pattern.

The attack framework, which the UC Irvine team calls FlyTrap, was presented at the Network and Distributed System Security Symposium in San Diego. It represents the first comprehensive security study of autonomous target-tracking technology, and its implications span from law enforcement operations to the privacy of individuals being monitored by commercial drones.

How FlyTrap Works

Autonomous tracking systems rely on computer vision: cameras feed continuous imagery to neural networks that identify and maintain spatial awareness of a designated target. The neural network estimates how far away the target is and generates flight commands to maintain a consistent tracking distance.

FlyTrap exploits a deficiency in how these neural networks interpret images. An umbrella covered with a specifically designed visual pattern deceives the tracking algorithm into perceiving that the target is moving farther away - even when the umbrella holder is stationary. The drone's control logic responds as designed: to maintain tracking distance, it flies toward the target. It keeps flying closer until it can be captured with a net gun or caused to crash.
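The inversion described above can be illustrated with a minimal sketch of a proportional distance-keeping controller. This is purely illustrative: the function name, gain, and setpoint are hypothetical, not taken from the paper or any vendor's firmware. The point is that once the perception layer reports an inflated distance, a correctly functioning controller will command forward motion.

```python
# Illustrative distance-keeping controller (hypothetical names and values;
# not the actual control logic of any commercial drone).

TARGET_DISTANCE_M = 5.0   # separation the drone tries to hold
GAIN = 0.5                # proportional gain (illustrative)

def forward_velocity(perceived_distance_m: float) -> float:
    """Proportional controller: a positive output means 'fly toward the target'."""
    error = perceived_distance_m - TARGET_DISTANCE_M
    return GAIN * error

# Normal case: the target really is at 5 m, so the drone holds position.
print(forward_velocity(5.0))   # 0.0 -> no motion

# Attack case: the adversarial pattern makes the stationary target *appear*
# to be at 12 m, so the drone closes in at 3.5 m/s even though the true
# separation is still 5 m.
print(forward_velocity(12.0))  # 3.5 -> drone approaches
```

Under this model, the drone's behavior is not a malfunction: the control logic is doing exactly what it was designed to do with a perception estimate it has no reason to distrust.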

"An ordinary umbrella covered with a specifically designed visual pattern can deceive neural network tracking systems used by autonomous drones," said lead author Shaoyuan Xie, a graduate student researcher at UC Irvine. "If it's that easy to seize control over an autonomous drone, operating them in critical security or law enforcement settings should be reconsidered."

Three Commercial Drones, Three Confirmed Vulnerabilities

The research team tested FlyTrap against three commercial drones: the DJI Mini 4 Pro, the DJI Neo, and the HoverAir X1. The attack successfully drew all three drones close enough for physical capture or induced collision. The approach functions in varied weather and lighting conditions, requires no wireless signal or external connectivity, and operates entirely through the ordinary physical act of opening an umbrella. The team has responsibly disclosed these vulnerabilities to manufacturers DJI and HoverAir. All drone experiments were completed before December 22, 2025.

Two-Sided Implications

The FlyTrap attack has dual-use potential that the researchers acknowledge directly. A person being stalked by a criminal's autonomous drone could deploy the attack to bring it down. A border zone patrolled by autonomous tracking drones could be disrupted by adversaries carrying patterned umbrellas. Law enforcement drones could be neutralized by the suspects they are tracking.

This ambiguity makes the urgency of addressing the vulnerability more acute, not less. If autonomous tracking drones are to be deployed in critical security applications, their resistance to this class of physical-world attack needs to be established before broad deployment, not discovered afterward. "Our findings highlight urgent needs for security improvements in autonomous target-tracking systems before wider deployment in critical infrastructure," Xie said. The research received financial support from NASA and the National Science Foundation.

Source: Xie S, Wang N, Sato T, et al. FlyTrap attack framework. Presented at the Network and Distributed System Security Symposium, San Diego, February 2026. University of California, Irvine. Funded by NASA and NSF. Media contact: Brian Bell, UC Irvine - bpbell@uci.edu, 949-565-5533