TY - GEN
T1 - Weakly Supervised Online Action Detection for Infant General Movements
AU - Luo, Tongyi
AU - Xiao, Jia
AU - Zhang, Chuncao
AU - Chen, Siheng
AU - Tian, Yuan
AU - Yu, Guangjun
AU - Dang, Kang
AU - Ding, Xiaowei
N1 - Publisher Copyright:
© 2022, The Author(s), under exclusive license to Springer Nature Switzerland AG.
PY - 2022
Y1 - 2022
N2 - Early diagnosis of brain damage is critical to enable earlier medical intervention for infants with cerebral palsy (CP). Although general movements assessment (GMA) has shown promising results in early CP detection, it is laborious. Most existing works take videos as input and perform fidgety movements (FMs) classification to automate GMA. These methods require complete observation of the videos and cannot localize the video frames containing normal FMs. We therefore propose a novel approach, WO-GMA, to perform FMs localization in the weakly supervised online setting. Infant body keypoints are first extracted as the inputs to WO-GMA. WO-GMA then performs local spatio-temporal feature extraction, followed by two network branches that generate pseudo clip labels and model online actions. With the clip-level pseudo labels, the action modeling branch learns to detect FMs in an online fashion. Experimental results on a dataset of 757 videos of different infants show that WO-GMA achieves state-of-the-art video-level classification and clip-level detection results. Moreover, only the first 20% of a video's duration is needed to obtain classification results as good as those from full observation, implying a significantly shortened FMs diagnosis time. Code is available at: https://github.com/scofiedluo/WO-GMA.
AB - Early diagnosis of brain damage is critical to enable earlier medical intervention for infants with cerebral palsy (CP). Although general movements assessment (GMA) has shown promising results in early CP detection, it is laborious. Most existing works take videos as input and perform fidgety movements (FMs) classification to automate GMA. These methods require complete observation of the videos and cannot localize the video frames containing normal FMs. We therefore propose a novel approach, WO-GMA, to perform FMs localization in the weakly supervised online setting. Infant body keypoints are first extracted as the inputs to WO-GMA. WO-GMA then performs local spatio-temporal feature extraction, followed by two network branches that generate pseudo clip labels and model online actions. With the clip-level pseudo labels, the action modeling branch learns to detect FMs in an online fashion. Experimental results on a dataset of 757 videos of different infants show that WO-GMA achieves state-of-the-art video-level classification and clip-level detection results. Moreover, only the first 20% of a video's duration is needed to obtain classification results as good as those from full observation, implying a significantly shortened FMs diagnosis time. Code is available at: https://github.com/scofiedluo/WO-GMA.
KW - Fidgety movements (FMs)
KW - General movements assessment
KW - Online action detection
KW - Weakly supervised
UR - http://www.scopus.com/inward/record.url?scp=85139060761&partnerID=8YFLogxK
U2 - 10.1007/978-3-031-16434-7_69
DO - 10.1007/978-3-031-16434-7_69
M3 - Conference Proceeding
AN - SCOPUS:85139060761
SN - 9783031164330
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 721
EP - 731
BT - Medical Image Computing and Computer Assisted Intervention – MICCAI 2022 - 25th International Conference, Proceedings
A2 - Wang, Linwei
A2 - Dou, Qi
A2 - Fletcher, P. Thomas
A2 - Speidel, Stefanie
A2 - Li, Shuo
PB - Springer Science and Business Media Deutschland GmbH
T2 - 25th International Conference on Medical Image Computing and Computer-Assisted Intervention, MICCAI 2022
Y2 - 18 September 2022 through 22 September 2022
ER -