Convolutional Bi-LSTM based human gait recognition using video sequences

Javaria Amin, Muhammad Almas Anjum, Sharif Muhammad, Seifedine Kadry, Yunyoung Nam*, Shui Hua Wang

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

21 Citations (Scopus)

Abstract

Recognition of human gait is a difficult task, particularly for unobtrusive video surveillance and human identification at a large distance. Therefore, a method is proposed for the classification and recognition of different types of human gait. The proposed approach consists of two phases. In phase I, a new model named convolutional bidirectional long short-term memory (Conv-BiLSTM) is proposed to classify the video frames of human gait. In this model, features are derived through a convolutional neural network (CNN), ResNet-18, and supplied as input to the LSTM model, which provides more distinguishable temporal information. In phase II, the YOLOv2-SqueezeNet model is designed, where deep features are extracted using the fireconcat-02 layer and passed to the tinyYOLOv2 model to recognize and localize the human gaits with predicted scores. The proposed method achieved up to 90% correct prediction scores on the CASIA-A, CASIA-B, and CASIA-C benchmark datasets, improving on recent existing works.
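The phase-I pipeline described in the abstract (per-frame CNN features fed to a bidirectional LSTM) can be sketched in NumPy. This is a minimal illustration, not the paper's implementation: a fixed random projection stands in for the ResNet-18 feature extractor, and all dimensions, weights, and the final classifier head are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_pass(x_seq, W, U, b):
    """Single-direction LSTM over x_seq of shape (T, d_in); returns (T, d_h)."""
    d_h = U.shape[1]
    h = np.zeros(d_h)
    c = np.zeros(d_h)
    out = []
    for x in x_seq:
        z = W @ x + U @ h + b               # gate pre-activations stacked [i, f, g, o]
        i = sigmoid(z[:d_h])                # input gate
        f = sigmoid(z[d_h:2 * d_h])         # forget gate
        g = np.tanh(z[2 * d_h:3 * d_h])     # cell candidate
        o = sigmoid(z[3 * d_h:])            # output gate
        c = f * c + i * g
        h = o * np.tanh(c)
        out.append(h)
    return np.stack(out)

def bilstm_features(frames, proj, Wf, Uf, bf, Wb, Ub, bb):
    """frames: (T, H, W) grayscale gait frames.
    Step 1: per-frame features (stand-in: flatten + fixed random projection,
            in place of ResNet-18).
    Step 2: forward and backward LSTM passes, concatenated per time step."""
    feats = frames.reshape(len(frames), -1) @ proj        # (T, d_feat)
    fwd = lstm_pass(feats, Wf, Uf, bf)                    # (T, d_h)
    bwd = lstm_pass(feats[::-1], Wb, Ub, bb)[::-1]        # (T, d_h), reversed back
    return np.concatenate([fwd, bwd], axis=1)             # (T, 2 * d_h)

# Illustrative sizes: 8 frames of 32x32, 512-d features, 64-d hidden state.
T, H, W_img, d_feat, d_h, n_classes = 8, 32, 32, 512, 64, 3
frames = rng.standard_normal((T, H, W_img))
proj = rng.standard_normal((H * W_img, d_feat)) / np.sqrt(H * W_img)
mk = lambda *s: rng.standard_normal(s) * 0.1
Wf, Uf, bf = mk(4 * d_h, d_feat), mk(4 * d_h, d_h), np.zeros(4 * d_h)
Wb, Ub, bb = mk(4 * d_h, d_feat), mk(4 * d_h, d_h), np.zeros(4 * d_h)

seq = bilstm_features(frames, proj, Wf, Uf, bf, Wb, Ub, bb)  # (8, 128)
logits = seq[-1] @ mk(2 * d_h, n_classes)                    # classify last step
print(seq.shape, logits.shape)
```

The bidirectional design gives each time step access to both past and future frames, which is what the abstract credits with "more distinguishable temporal information" compared with a unidirectional LSTM.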

Original language: English
Pages (from-to): 2693-2709
Number of pages: 17
Journal: Computers, Materials and Continua
Volume: 68
Issue number: 2
DOIs
Publication status: Published - 13 Apr 2021
Externally published: Yes

Keywords

  • Bi-LSTM
  • Gait
  • Open neural network
  • ResNet-18
  • SqueezeNet
  • YOLOv2
