TY - GEN
T1 - ST-GCN
T2 - 22nd IEEE International Symposium on Parallel and Distributed Processing with Applications, ISPA 2024
AU - Xu, Jingzhou
AU - Qi, Jun
AU - Zhang, Junqing
AU - Yue, Yong
AU - Zhang, Tingting
AU - Chen, Jianjun
N1 - Publisher Copyright:
© 2024 IEEE.
PY - 2024
Y1 - 2024
N2 - Motor imagery (MI) is a mental process extensively used as an experimental paradigm for brain-computer interfaces (BCIs) in both basic science and clinical research. Despite its widespread use, accurately decoding intentions from MI remains challenging due to the complex nature of brain patterns and the limited sample sizes typically available for machine learning. This paper introduces a Spatiotemporal Graph Convolutional Network (ST-GCN) designed for MI classification. First, a spatial-temporal convolution layer extracts features from raw EEG data: mixed depthwise convolution extracts temporal features, followed by a spatial filtering convolution that decomposes the EEG signal. A graph convolution module employing the max-relative aggregator is then used to explore the relationships between the spatially decomposed EEG components. Finally, under the joint supervision of cross-entropy and our proposed channel selection loss, ST-GCN extracts features with enhanced inter-class dispersion and intra-class compactness. We compare ST-GCN with several benchmark EEG decoding methods on two MI datasets, BCI Competition III Dataset IVa and BCI Competition IV Dataset 1, where it outperforms the deep learning benchmarks with accuracies of 78.11% and 71.94%, respectively, under 10-fold cross-validation.
AB - Motor imagery (MI) is a mental process extensively used as an experimental paradigm for brain-computer interfaces (BCIs) in both basic science and clinical research. Despite its widespread use, accurately decoding intentions from MI remains challenging due to the complex nature of brain patterns and the limited sample sizes typically available for machine learning. This paper introduces a Spatiotemporal Graph Convolutional Network (ST-GCN) designed for MI classification. First, a spatial-temporal convolution layer extracts features from raw EEG data: mixed depthwise convolution extracts temporal features, followed by a spatial filtering convolution that decomposes the EEG signal. A graph convolution module employing the max-relative aggregator is then used to explore the relationships between the spatially decomposed EEG components. Finally, under the joint supervision of cross-entropy and our proposed channel selection loss, ST-GCN extracts features with enhanced inter-class dispersion and intra-class compactness. We compare ST-GCN with several benchmark EEG decoding methods on two MI datasets, BCI Competition III Dataset IVa and BCI Competition IV Dataset 1, where it outperforms the deep learning benchmarks with accuracies of 78.11% and 71.94%, respectively, under 10-fold cross-validation.
KW - Brain Computer Interface
KW - EEG
KW - Graph Neural Network
KW - Motor Imagery
UR - http://www.scopus.com/inward/record.url?scp=105000142807&partnerID=8YFLogxK
U2 - 10.1109/ISPA63168.2024.00133
DO - 10.1109/ISPA63168.2024.00133
M3 - Conference Proceeding
AN - SCOPUS:105000142807
T3 - Proceedings - 2024 IEEE International Symposium on Parallel and Distributed Processing with Applications, ISPA 2024
SP - 1007
EP - 1013
BT - Proceedings - 2024 IEEE International Symposium on Parallel and Distributed Processing with Applications, ISPA 2024
PB - Institute of Electrical and Electronics Engineers Inc.
Y2 - 30 October 2024 through 2 November 2024
ER -