Science · 2026-03-17

A camera inspired by dragonfly eyes can measure vibrations without touching anything

University of Tsukuba team uses event cameras and topological math to recover full vibration trajectories from passive optical data alone

Bridges vibrate. Aircraft panels vibrate. Railway tracks vibrate. Knowing exactly how they vibrate - the frequency, amplitude, and phase of that motion - is essential for structural safety monitoring. The gold standard is laser Doppler vibrometry, which is accurate but expensive, cumbersome to set up, and impractical for many field applications. Standard cameras can do the job more cheaply, but they hit a physics wall: capturing high-speed vibrations requires short exposure times, which starves the sensor of light. Compensating with brighter illumination trades away spatial resolution.

Researchers at the University of Tsukuba have found a different path. Their tool is an event camera - a sensor modeled on how dragonfly eyes process the world - paired with a mathematical framework borrowed from topology. Together, they can reconstruct the full vibration trajectory of a structure from passive optical data alone, with no laser and no special lighting.

How an event camera sees motion

A conventional camera captures frames - full images taken at fixed intervals. An event camera does something fundamentally different. Each pixel operates independently, firing only when it detects a change in brightness. The result is not a sequence of images but a stream of timestamped events, each recording where and when brightness shifted. This architecture, inspired by the way insect vision prioritizes change over static scenes, allows event cameras to capture extremely fast motion without the exposure-time constraints that plague frame-based systems.
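The event-generation principle can be sketched in a few lines. The toy model below (an illustration, not the actual sensor logic; the log-intensity model and threshold value are assumptions for the sketch) converts a sequence of frames into a stream of timestamped events, one per pixel whose brightness has changed enough since it last fired:

```python
import numpy as np

def frames_to_events(frames, timestamps, threshold=0.2):
    """Convert a frame sequence into an event stream.

    Each pixel fires an event (t, x, y, polarity) whenever its
    log-brightness changes by more than `threshold` since that pixel
    last fired -- a simplified model of an event camera.
    """
    ref = np.log(frames[0].astype(float) + 1e-6)  # per-pixel reference level
    events = []
    for frame, t in zip(frames[1:], timestamps[1:]):
        logf = np.log(frame.astype(float) + 1e-6)
        diff = logf - ref
        ys, xs = np.where(np.abs(diff) >= threshold)
        for x, y in zip(xs, ys):
            polarity = 1 if diff[y, x] > 0 else -1
            events.append((t, int(x), int(y), polarity))
            ref[y, x] = logf[y, x]  # reset reference where an event fired
    return events
```

Pixels that see no change produce no events at all, which is why a static background costs nothing and fast motion dominates the output.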

Previous attempts to use event cameras for vibration measurement could estimate frequency reasonably well but struggled with amplitude and phase - the two other components needed to fully characterize a vibration. Without all three, you know something is shaking but not how far or in what pattern.
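To see why all three numbers matter, consider a clean displacement trace x(t) = A·cos(2πft + φ). Frequency, amplitude, and phase can all be read off the dominant FFT bin; a textbook sketch (not the paper's method, and with arbitrary example values):

```python
import numpy as np

fs = 1000.0                       # sample rate, Hz
t = np.arange(0, 1.0, 1.0 / fs)   # one second of samples
A, f, phi = 2.0, 50.0, 0.7        # ground-truth vibration parameters
x = A * np.cos(2 * np.pi * f * t + phi)

spectrum = np.fft.rfft(x)
freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
k = np.argmax(np.abs(spectrum))            # dominant bin

f_est = freqs[k]                           # frequency
A_est = 2 * np.abs(spectrum[k]) / len(x)   # amplitude
phi_est = np.angle(spectrum[k])            # phase
```

Frequency alone tells you only how fast the shaking repeats; amplitude and phase are what turn that into a full description of the motion.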

Topology fills the gap

The breakthrough came from an unexpected direction: topological data analysis (TDA), a mathematical framework that identifies geometric shapes and patterns in complex, high-dimensional data. Specifically, the team adapted the Mapper algorithm, a TDA tool that compresses complicated datasets into simplified topological representations while preserving their essential structure.

Applied to the stream of events generated by the camera, the Mapper algorithm reconstructs the vibration trajectory directly - the full path of the oscillating surface over time. From that trajectory, amplitude, phase, and frequency can all be extracted precisely. The method requires no active illumination and works with the passive event stream alone.
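The core Mapper idea can be illustrated with a toy version (a 1-D-filter sketch under arbitrary parameters, not the authors' pipeline): cover the range of a filter function with overlapping intervals, cluster the points that land in each interval, and connect clusters that share points. The resulting graph preserves the coarse shape of the data, such as the loop traced by an oscillation:

```python
import numpy as np

def _single_linkage(pts, eps):
    """Group points into clusters whose neighbours are closer than eps."""
    unvisited = set(range(len(pts)))
    while unvisited:
        seed = unvisited.pop()
        cluster, frontier = {seed}, [seed]
        while frontier:
            p = frontier.pop()
            near = [q for q in unvisited
                    if np.linalg.norm(pts[p] - pts[q]) < eps]
            for q in near:
                unvisited.discard(q)
                cluster.add(q)
                frontier.append(q)
        yield cluster

def mapper_1d(points, filter_vals, n_intervals=6, overlap=0.3, eps=0.5):
    """Toy Mapper over a 1-D filter function.

    Returns (nodes, edges): nodes map an id to a set of point indices;
    edges connect nodes whose point sets intersect.
    """
    lo, hi = filter_vals.min(), filter_vals.max()
    base = (hi - lo) / n_intervals
    g = base * overlap / 2           # half the extra overlap width
    nodes, node_id = {}, 0
    for i in range(n_intervals):
        a, b = lo + i * base - g, lo + (i + 1) * base + g
        idx = np.where((filter_vals >= a) & (filter_vals <= b))[0]
        for cluster in _single_linkage(points[idx], eps):
            nodes[node_id] = set(int(j) for j in idx[list(cluster)])
            node_id += 1
    ids = sorted(nodes)
    edges = {(ids[i], ids[j]) for i in range(len(ids))
             for j in range(i + 1, len(ids))
             if nodes[ids[i]] & nodes[ids[j]]}
    return nodes, edges
```

Run on points sampled from a closed trajectory (a circle, say), the graph of clusters forms a loop, which is the kind of essential structure the compression is designed to keep.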

Separating multiple sound sources with one camera

An additional capability sets this approach apart: the system can isolate and record multiple vibration sources simultaneously using a single camera. In environments with overlapping mechanical vibrations - a bridge deck with traffic, an aircraft fuselage during flight - the ability to decompose the signal into its individual components without additional hardware is a practical advantage.
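One common way to decompose a mixed trace is frequency-domain separation (a generic sketch, not necessarily the paper's approach): locate the spectral peaks, then reconstruct each component from a narrow band around its peak. The frequencies and amplitudes below are arbitrary example values:

```python
import numpy as np

fs = 2000.0
t = np.arange(0, 1.0, 1 / fs)
# Two overlapping vibration sources recorded as a single mixed trace.
mixed = 1.5 * np.sin(2 * np.pi * 60 * t) + 0.8 * np.sin(2 * np.pi * 210 * t)

spectrum = np.fft.rfft(mixed)
freqs = np.fft.rfftfreq(len(mixed), 1 / fs)

def extract_band(spectrum, freqs, center, half_width=5.0):
    """Zero everything outside a narrow band and invert the FFT."""
    band = np.where(np.abs(freqs - center) <= half_width, spectrum, 0)
    return np.fft.irfft(band)

comp_60 = extract_band(spectrum, freqs, 60.0)    # first source recovered
comp_210 = extract_band(spectrum, freqs, 210.0)  # second source recovered
```

This works cleanly when the sources occupy distinct frequency bands; the appeal of doing it with one passive camera is that no extra sensors are needed per source.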

Where this fits and where it does not

The method occupies a useful middle ground. Laser vibrometry remains more precise for laboratory-grade measurements, but it requires point-by-point scanning and controlled conditions. Standard high-speed cameras can measure vibrations across a field of view but need intense lighting. The event camera approach works passively, at high temporal resolution, and across a spatial field - a combination that suits field deployment on infrastructure, machinery, and transport systems.

That said, event cameras are still relatively specialized hardware, not as widely available or as cheap as standard industrial cameras. The topological analysis pipeline adds computational overhead. And the method has been demonstrated in controlled settings; how it performs under variable outdoor lighting, weather, and complex structural geometries remains to be validated at scale.

The work was supported by Grant-in-Aid for Scientific Research (No. 24KJ0497). The lead researchers are Ryogo Niwa, a doctoral student, and Assistant Professor Tatsuki Fushimi, both at the University of Tsukuba.

Source: "Event Topology-based Visual Vibrometer" - Niwa R. and Fushimi T., University of Tsukuba. Published in Applied Physics Letters, 2026. DOI: 10.1063/5.0311647