Multibranch Attention Networks for Action Recognition in Still Images

Shiyang Yan*, Jeremy S. Smith, Wenjin Lu, Bailing Zhang

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

56 Citations (Scopus)


Contextual information plays an important role in visual recognition. This is especially true for action recognition, where contextual cues such as the objects a person interacts with and the scene in which the action takes place are inseparable from the action class itself. Meanwhile, the human attention mechanism shows a remarkable capability, compared with existing computer vision systems, for discovering contextual information. Inspired by this, we apply a soft attention mechanism by adding two extra branches to the original VGG16 model: one applies scene-level attention and the other region-level attention, capturing global and local contextual information, respectively. To ensure the multibranch model converges well and is fully optimized, a two-step training method with an alternating optimization strategy is proposed. We call this model multibranch attention networks. The approach was evaluated on three publicly available human action datasets under two experimental settings: with and without the bounding box of the target person. It achieved state-of-the-art results on the PASCAL VOC action dataset and the Stanford 40 dataset in both settings, and performed well on the Humans Interacting with Common Objects (HICO) dataset.
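The core operation the abstract describes, soft attention over convolutional features, can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: the dot-product scoring vector, the random inputs, and the concatenation-based fusion of the two branches are all illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax: weights are non-negative and sum to 1.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def soft_attention(features, w):
    # features: (H*W, C) conv feature map, flattened over spatial locations.
    # w: (C,) scoring vector (a hypothetical, simplified parameterization).
    scores = features @ w        # (H*W,) relevance score per location
    alpha = softmax(scores)      # soft attention weights over locations
    context = alpha @ features   # (C,) attention-weighted context vector
    return context, alpha

# Toy example: two attention branches (scene-level and region-level)
# operating on a shared VGG16-like 7x7x512 conv feature map.
rng = np.random.default_rng(0)
feats = rng.standard_normal((7 * 7, 512))
scene_ctx, scene_alpha = soft_attention(feats, rng.standard_normal(512))
region_ctx, region_alpha = soft_attention(feats, rng.standard_normal(512))

# Fuse the two context vectors for a downstream classifier head
# (concatenation here is an assumption for illustration).
fused = np.concatenate([scene_ctx, region_ctx])
```

Because the weights are a softmax over spatial locations, each branch produces a convex combination of local features, letting one branch emphasize scene-wide context while the other focuses on discriminative regions.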

Original language: English
Article number: 8214269
Pages (from-to): 1116-1125
Number of pages: 10
Journal: IEEE Transactions on Cognitive and Developmental Systems
Issue number: 4
Publication status: Published - Dec 2018


  • Action recognition
  • contextual information
  • multibranch CNN
  • soft attention mechanism
