TY - GEN
T1 - Human Action Recognition with Sparse Autoencoder and Histogram of Oriented Gradients
AU - Tan, Pooi Shiang
AU - Lim, Kian Ming
AU - Lee, Chin Poo
N1 - Publisher Copyright:
© 2020 IEEE.
PY - 2020/9/26
Y1 - 2020/9/26
N2 - This paper presents a video-based human action recognition method leveraging a deep learning model. Prior to the filtering phase, the input images are pre-processed by converting them into grayscale images. Thereafter, the region of interest that contains the human performing the action is cropped out by a pre-trained pedestrian detector. Next, the region of interest is resized and passed as the input image to the filtering phase. In this phase, the filter kernels are trained using a Sparse Autoencoder on natural images. After obtaining the filter kernels, a convolution operation is performed between the input image and the filter kernels. The filtered images are then passed to the feature extraction phase. The Histogram of Oriented Gradients descriptor is used to encode the local and global texture information of the filtered images. Lastly, in the classification phase, the Modified Hausdorff Distance is applied to classify the test sample to its nearest match based on the histograms. The performance of the deep learning algorithm is evaluated on three benchmark datasets, namely the Weizmann Action Dataset, the CAD-60 Dataset and the Multimedia University (MMU) Human Action Dataset. The experimental results show that the proposed deep learning algorithm outperforms other methods on the Weizmann Dataset, CAD-60 Dataset and MMU Human Action Dataset with recognition rates of 100%, 88.24% and 99.5%, respectively.
AB - This paper presents a video-based human action recognition method leveraging a deep learning model. Prior to the filtering phase, the input images are pre-processed by converting them into grayscale images. Thereafter, the region of interest that contains the human performing the action is cropped out by a pre-trained pedestrian detector. Next, the region of interest is resized and passed as the input image to the filtering phase. In this phase, the filter kernels are trained using a Sparse Autoencoder on natural images. After obtaining the filter kernels, a convolution operation is performed between the input image and the filter kernels. The filtered images are then passed to the feature extraction phase. The Histogram of Oriented Gradients descriptor is used to encode the local and global texture information of the filtered images. Lastly, in the classification phase, the Modified Hausdorff Distance is applied to classify the test sample to its nearest match based on the histograms. The performance of the deep learning algorithm is evaluated on three benchmark datasets, namely the Weizmann Action Dataset, the CAD-60 Dataset and the Multimedia University (MMU) Human Action Dataset. The experimental results show that the proposed deep learning algorithm outperforms other methods on the Weizmann Dataset, CAD-60 Dataset and MMU Human Action Dataset with recognition rates of 100%, 88.24% and 99.5%, respectively.
KW - histogram of oriented gradients
KW - Human action recognition
KW - modified Hausdorff distance
KW - sparse autoencoder
UR - http://www.scopus.com/inward/record.url?scp=85098069775&partnerID=8YFLogxK
U2 - 10.1109/IICAIET49801.2020.9257863
DO - 10.1109/IICAIET49801.2020.9257863
M3 - Conference Proceeding
AN - SCOPUS:85098069775
T3 - IEEE International Conference on Artificial Intelligence in Engineering and Technology, IICAIET 2020
BT - IEEE International Conference on Artificial Intelligence in Engineering and Technology, IICAIET 2020
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2020 IEEE International Conference on Artificial Intelligence in Engineering and Technology, IICAIET 2020
Y2 - 26 September 2020 through 27 September 2020
ER -