Motor imagery (MI) is the mental process of imagining a specific limb movement, such as raising a hand or walking, without physically performing it. These imagined movements generate distinct patterns of brain activity that can be recorded using electroencephalography (EEG). By decoding these signals, researchers can enable direct communication between the brain and computers, making MI-EEG a powerful tool for applications such as motor rehabilitation and the assistive control of wheelchairs and prosthetic devices.
However, EEG signals generated during MI vary significantly across individuals and over time. Conventional MI-EEG methods often fail to capture and decode the complex patterns in EEG signals, particularly the dynamic spatial and temporal variations inherent in brain activity.
A breakthrough by researchers from Japan bridges this gap, enabling more seamless communication between the human brain and machines. Ph.D. student Chaowen Shen and Professor Akio Namiki from the Graduate School of Science and Engineering, Chiba University, Japan, have developed a novel artificial intelligence-based framework, called the Embedding-Driven Graph Convolutional Network (EDGCN). The approach leverages a spatio-temporal embedding fusion mechanism to decode dynamic variations in EEG patterns, improving adaptability and generalizability compared to conventional brain–computer interface (BCI) technologies.
"Decoding MI-EEG is not only an engineering challenge but also a window into understanding the neural mechanisms of MI and the functional connections of the brain. We hope to design more efficient models that can advance our understanding and utilization of the human brain," explains Prof. Namiki.
Their findings were made available online on January 22, 2026, and will be published in Volume 131 of the journal Information Fusion on July 1, 2026.
Traditional machine learning models rely on common spatial patterns (CSP) and predefined graph structures, which depend heavily on expert knowledge. This structural rigidity limits their ability to fully capture the complex patterns hidden within brain signals and increases computational costs. More recently, convolutional neural networks (CNNs) and graph convolutional networks (GCNs) have shown strong performance in decoding EEG signals.
However, conventional deep learning approaches may still struggle to fully capture the complex interactions between brain regions and the dynamic, individual-specific variations in EEG signals. To overcome this limitation, researchers are increasingly turning to graph-based methods that better represent the brain's network-like activity.
EEG signals record brain activity from multiple electrodes over time, yielding multi-channel time-series data. To capture pattern variations across different time scales, the researchers designed a local feature extraction module that processes EEG signals through multiple parallel pathways. Because EEG signals are sampled at discrete time points, critical moments of brain activity can be missed at any single resolution. To address this, the researchers adopted a Multi-Resolution Temporal Embedding strategy that analyzes the signal at multiple time scales by increasing and reducing its temporal resolution, enabling a more consistent and comprehensive view of dynamic brain activity.
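The multi-resolution idea can be illustrated with a minimal sketch: summarize the same multi-channel EEG segment at several temporal scales and concatenate the results. This is not the authors' EDGCN implementation; the function name, window lengths, and the mean-power summary are illustrative assumptions.

```python
import numpy as np

def multi_resolution_embed(eeg, windows=(4, 16, 64)):
    """Summarize a multi-channel EEG segment at several temporal scales.

    eeg: array of shape (channels, timesteps).
    windows: illustrative window lengths (in samples), coarse to fine.
    Returns one feature per channel per scale, concatenated.
    """
    feats = []
    for w in windows:
        n = eeg.shape[1] // w
        # Average within non-overlapping windows: a crude lower-resolution view.
        coarse = eeg[:, : n * w].reshape(eeg.shape[0], n, w).mean(axis=2)
        # Summarize each scale by the mean power per channel.
        feats.append((coarse ** 2).mean(axis=1))
    return np.concatenate(feats)

rng = np.random.default_rng(0)
segment = rng.standard_normal((22, 256))  # e.g. 22 electrodes, 256 samples
emb = multi_resolution_embed(segment)
print(emb.shape)  # (66,) = 22 channels x 3 scales
```

A learned model would replace the fixed averaging with trainable temporal filters, but the principle is the same: each pathway sees the signal at a different resolution, so transient events missed at one scale can still register at another.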
Further, they introduced a Structure-Aware Spatial Embedding mechanism that connects 'local' (structurally close) and 'global' (functionally connected) brain channels, capturing the synchronization of brain activity across different regions. This approach enables a more detailed spatial representation of short-range and long-range interactions that change dynamically during MI tasks.
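A hedged sketch of this local-plus-global graph idea: connect electrodes that are physically close ('local' edges) or whose signals are strongly correlated ('global' edges), then run one normalized graph-aggregation step over the channels. The function name, thresholds, and the correlation-based connectivity are illustrative assumptions, not the paper's exact mechanism.

```python
import numpy as np

def spatial_graph_step(eeg, positions, dist_thresh=1.0, corr_thresh=0.7):
    """One graph-aggregation step over EEG channels.

    eeg: (channels, timesteps); positions: (channels, 3) electrode coordinates.
    'Local' edges link physically close electrodes; 'global' edges link
    channels with strongly correlated signals. Thresholds are illustrative.
    """
    # Local edges: small Euclidean distance between electrode positions.
    d = np.linalg.norm(positions[:, None] - positions[None, :], axis=-1)
    local = d < dist_thresh
    # Global edges: strong absolute correlation between channel time series.
    glob = np.abs(np.corrcoef(eeg)) > corr_thresh
    adj = (local | glob).astype(float)
    np.fill_diagonal(adj, 1.0)  # self-loops keep each channel's own signal
    # Symmetric degree normalization, as in standard graph convolutions.
    deg = adj.sum(axis=1)
    norm = adj / np.sqrt(deg[:, None] * deg[None, :])
    # Each channel becomes a weighted mix of its local and global neighbours.
    return norm @ eeg

rng = np.random.default_rng(1)
eeg = rng.standard_normal((8, 128))   # 8 channels, 128 samples
pos = rng.standard_normal((8, 3))     # mock electrode coordinates
out = spatial_graph_step(eeg, pos)
print(out.shape)  # (8, 128), same shape as the input
```

In a trained GCN the adjacency and mixing weights would be learned rather than thresholded, which is what lets the spatial embedding adapt to short-range and long-range interactions that shift during MI tasks.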
Finally, to validate the model, the researchers conducted a series of MI classification experiments on public datasets. The model outperformed current state-of-the-art methods, achieving classification accuracies of 86.50% and 90.14%, and a decoding accuracy of 64.04%, across the evaluated datasets. Moreover, removing the spatial and temporal embeddings degraded the model's performance, indicating that these components are crucial for capturing the complex spatiotemporal heterogeneity of EEG signals.
Overall, the proposed EDGCN model demonstrated significant advantages in decoding heterogeneous MI-EEG signals. In the future, the researchers hope to extend its applications to real-world portable BCI hardware and rehabilitation scenarios. Further, given that EEG signals carry sensitive biometric information, developing advanced encryption strategies is crucial for defending against security attacks.
"EDGCN's high decoding accuracy and generalization capabilities will drive the commercialization of consumer-grade BCI products. Patients with stroke, spinal cord injury, amyotrophic lateral sclerosis, and other movement disorders can be assisted in the stable control of neurorehabilitation devices such as wheelchairs, prostheses, and upper limb rehabilitation robots through simple MI," concludes Prof. Namiki.
***
Reference:
DOI: 10.1016/j.inffus.2026.104170
Authors: Chaowen Shen, Yanwen Zhang, Zejing Zhao, and Akio Namiki
Affiliations: Graduate School of Engineering, Chiba University, Japan
About Professor Akio Namiki from Chiba University, Japan
Professor Akio Namiki is a faculty member at the Graduate School of Engineering, Chiba University. He completed his Ph.D. from the University of Tokyo, Japan. His research interests include intelligent robotics, high-speed vision, visual and tactile feedback, manipulation, sensory-motor integration, high-speed robots, and sensor fusion. He is an eminent member of various engineering societies and has authored over 150 publications across mechanical engineering, electrical engineering, and robotics.
Funding:
This work was supported by the JST SPRING program (grant number: JPMJSP2109).
END