TY - JOUR
T1 - Abnormal event detection for video surveillance using an enhanced two-stream fusion method
AU - Yang, Yuxing
AU - Fu, Zeyu
AU - Naqvi, Syed Mohsen
N1 - Publisher Copyright:
© 2023 The Author(s)
PY - 2023/10/7
Y1 - 2023/10/7
AB - Abnormal event detection is a critical component of intelligent surveillance systems, focusing on identifying abnormal objects or unusual human behaviours in video sequences. However, conventional methods struggle with the scarcity of labelled data. Existing solutions typically train on normal data, establish boundaries for regular events, and identify outliers during testing. These approaches are often inadequate because they do not efficiently leverage geometry and image texture information, and they lack a specific focus on different types of abnormal events. This paper introduces a novel two-stream fusion algorithm for abnormal event detection that better addresses these diverse abnormal events. We first extract object, pose, and optical flow features. The object and pose information is then fused at an early stage to eliminate occluded pose graphs. The trusted pose graphs are fed into a Spatio-Temporal Graph Convolutional Network (ST-GCN) to detect abnormal behaviours. In parallel, we propose a video prediction framework that identifies abnormal frames by measuring the difference between predicted and ground-truth frames. Finally, we perform decision-level fusion of the classification and prediction streams to obtain the final results. Results on the UCSD PED1 dataset show the improved performance of the fusion model across various abnormal events. Furthermore, experimental results on the UCSD PED2 dataset and the ShanghaiTech campus dataset confirm the effectiveness of our approach compared with other related works.
KW - Abnormal event detection
KW - Adversarial learning
KW - Data fusion
KW - Graph convolutional neural network
KW - Object detection
KW - Optical flow
KW - Pose estimation
UR - http://www.scopus.com/inward/record.url?scp=85165544319&partnerID=8YFLogxK
U2 - 10.1016/j.neucom.2023.126561
DO - 10.1016/j.neucom.2023.126561
M3 - Article
AN - SCOPUS:85165544319
SN - 0925-2312
VL - 553
JO - Neurocomputing
JF - Neurocomputing
M1 - 126561
ER -
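
The abstract above describes decision-level fusion of two anomaly-scoring streams (an ST-GCN classification stream and a video-prediction stream). Below is a minimal, hypothetical Python sketch of such late fusion over per-frame anomaly scores; it is not the authors' implementation, and the function names (fuse_scores, min_max_normalise), the min-max normalisation, and the fusion weight are illustrative assumptions only.

# Hypothetical sketch of decision-level (late) fusion between two per-frame
# anomaly score streams, e.g. a pose-classification stream and a
# frame-prediction-error stream. Normalisation scheme and fusion weight are
# illustrative assumptions, not the method described in the paper.
import numpy as np


def min_max_normalise(scores: np.ndarray) -> np.ndarray:
    """Rescale a 1-D array of per-frame scores to the [0, 1] range."""
    lo, hi = scores.min(), scores.max()
    return (scores - lo) / (hi - lo + 1e-8)


def fuse_scores(cls_scores: np.ndarray,
                pred_errors: np.ndarray,
                weight: float = 0.5) -> np.ndarray:
    """Weighted late fusion of two normalised per-frame anomaly scores."""
    s1 = min_max_normalise(cls_scores)   # classification-stream scores
    s2 = min_max_normalise(pred_errors)  # prediction-stream errors (e.g. MSE)
    return weight * s1 + (1.0 - weight) * s2


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    cls_scores = rng.random(200)   # toy per-frame scores for stream 1
    pred_errors = rng.random(200)  # toy per-frame errors for stream 2
    fused = fuse_scores(cls_scores, pred_errors, weight=0.6)
    print(fused.shape, float(fused.min()), float(fused.max()))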