H2GCN: A hybrid hypergraph convolution network for skeleton-based action recognition

Yiming Shao, Lintao Mao, Leixiong Ye, Jincheng Li, Ping Yang, Chengtao Ji, Zizhao Wu*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

1 Citation (Scopus)

Abstract

Recent GCN-based works have achieved remarkable results in skeleton-based human action recognition. Nevertheless, while existing approaches extensively investigate pairwise joint relationships, only a limited number of models explore the intricate, high-order relationships among multiple joints. In this paper, we propose a novel hypergraph convolution method that represents the relationships among multiple joints with hyperedges and dynamically refines the high-order relationships among hyperedges in the spatial, temporal, and channel dimensions. Specifically, our method begins with a temporal-channel refinement hypergraph convolutional network that dynamically learns temporal and channel topologies in a data-dependent manner, facilitating the capture of non-physical structural information inherent in the human body. Furthermore, to model various inter-joint relationships across spatio-temporal dimensions, we propose a spatio-temporal hypergraph joint module, which aims to encapsulate the dynamic spatio-temporal characteristics of the human body. By integrating these modules, our proposed model achieves state-of-the-art performance on the NTU RGB+D 60 and NTU RGB+D 120 datasets.
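To make the hyperedge idea concrete, the sketch below shows a standard HGNN-style hypergraph convolution, X' = D_v^{-1/2} H W D_e^{-1} H^T D_v^{-1/2} X Θ, applied to per-frame skeleton joint features. This is a generic illustration under assumed shapes, not the paper's exact H2GCN modules; the class name `HypergraphConv` and the part-level joint groupings are hypothetical.

```python
# Minimal hypergraph convolution over skeleton joints (HGNN-style formulation).
# Illustrative only: the hyperedge grouping and layer sizes are assumptions,
# not the H2GCN architecture from the paper.
import torch
import torch.nn as nn

class HypergraphConv(nn.Module):
    def __init__(self, in_channels, out_channels, incidence):
        super().__init__()
        # incidence: (num_joints, num_hyperedges) binary matrix H, where
        # H[v, e] = 1 if joint v belongs to hyperedge e (e.g., one body part).
        self.register_buffer("H", incidence.float())
        self.theta = nn.Linear(in_channels, out_channels, bias=False)
        # Learnable diagonal hyperedge weights W, refined during training.
        self.edge_weight = nn.Parameter(torch.ones(incidence.shape[1]))

    def forward(self, x):
        # x: (batch, num_joints, in_channels) per-frame joint features.
        H = self.H
        W = torch.diag(self.edge_weight)
        # Weighted vertex degrees d(v) = sum_e w(e) H[v, e] and edge degrees.
        Dv = torch.diag((H @ self.edge_weight).clamp(min=1e-6).pow(-0.5))
        De = torch.diag(H.sum(dim=0).clamp(min=1e-6).reciprocal())
        # Propagate features joints -> hyperedges -> joints, normalized.
        A = Dv @ H @ W @ De @ H.t() @ Dv
        return A @ self.theta(x)

# Toy usage: 25 joints (NTU skeleton) grouped into 5 hypothetical hyperedges.
H = torch.zeros(25, 5)
for e in range(5):
    H[e * 5:(e + 1) * 5, e] = 1  # illustrative part-level assignment
layer = HypergraphConv(3, 64, H)
out = layer(torch.randn(2, 25, 3))  # -> (2, 25, 64)
```

In this formulation, each hyperedge aggregates features from all of its member joints at once, which is how relationships among more than two joints are captured in a single propagation step.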

Original language: English
Article number: 102072
Journal: Journal of King Saud University - Computer and Information Sciences
Volume: 36
Issue number: 5
Publication status: Published - Jun 2024

Keywords

  • Action recognition
  • Hypergraph convolution network
  • Spatio-temporal modeling
