Image captioning via hierarchical attention mechanism and policy gradient optimization

Shiyang Yan*, Yuan Xie, Fangyu Wu, Jeremy S. Smith, Wenjin Lu, Bailing Zhang

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

38 Citations (Scopus)

Abstract

Automatically generating descriptions of an image, i.e., image captioning, is an important and fundamental topic in artificial intelligence that bridges the gap between computer vision and natural language processing. Building on successful deep learning models, especially CNNs and Long Short-Term Memory (LSTM) networks with attention mechanisms, we propose a hierarchical attention model that utilizes both global CNN features and local object features for more effective feature representation and reasoning in image captioning. A generative adversarial network (GAN), together with a reinforcement learning (RL) algorithm, is applied to solve the exposure bias problem in RNN-based supervised training for language generation. In addition, by having the discriminator in the GAN framework automatically measure the consistency between the generated caption and the image content, and optimizing this signal with RL, we make the generated sentences more accurate and natural. Comprehensive experiments show the improved performance of the hierarchical attention mechanism and the effectiveness of our RL-based optimization method. Our model achieves state-of-the-art results on several important metrics on the MSCOCO dataset, using only greedy inference.
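The policy-gradient idea behind the RL optimization described above can be illustrated with a minimal REINFORCE sketch. This is a hypothetical toy: the vocabulary, reference caption, and per-position match reward are stand-ins (in the paper, the reward comes from the GAN discriminator's consistency score and the policy is an attention-based LSTM decoder, not independent per-step logits).

```python
import math
import random

random.seed(0)

VOCAB = ["a", "cat", "dog", "on", "mat"]          # toy vocabulary (assumption)
REFERENCE = ["a", "cat", "on", "mat"]             # toy target caption (assumption)
SEQ_LEN = len(REFERENCE)
LR = 0.5

# One logit vector per time step: a stand-in for the decoder's outputs.
logits = [[0.0] * len(VOCAB) for _ in range(SEQ_LEN)]

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def sample_caption():
    """Sample one token index per step from the current policy."""
    caption, probs = [], []
    for t in range(SEQ_LEN):
        p = softmax(logits[t])
        idx = random.choices(range(len(VOCAB)), weights=p)[0]
        caption.append(idx)
        probs.append(p)
    return caption, probs

def reward(caption):
    """Toy stand-in for the discriminator / caption-metric reward."""
    return sum(VOCAB[i] == w for i, w in zip(caption, REFERENCE)) / SEQ_LEN

baseline = 0.0  # moving-average baseline to reduce gradient variance
for step in range(500):
    caption, probs = sample_caption()
    advantage = reward(caption) - baseline
    baseline = 0.9 * baseline + 0.1 * reward(caption)
    # REINFORCE: d log pi / d logit_k = (1[k == sampled] - p_k), scaled by advantage
    for t, (idx, p) in enumerate(zip(caption, probs)):
        for k in range(len(VOCAB)):
            grad = ((1.0 if k == idx else 0.0) - p[k]) * advantage
            logits[t][k] += LR * grad

# Greedy decoding from the trained policy, mirroring the paper's greedy inference.
greedy = [VOCAB[max(range(len(VOCAB)), key=lambda k: logits[t][k])]
          for t in range(SEQ_LEN)]
print(" ".join(greedy))
```

After training, greedy decoding typically recovers most of the reference tokens, which is the same mechanism (sampled sequences scored by a reward, log-probabilities reweighted by an advantage) that the paper applies at scale with a discriminator-derived reward.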

Original language: English
Article number: 107329
Journal: Signal Processing
Volume: 167
DOIs
Publication status: Published - Feb 2020

Keywords

  • Generative adversarial network
  • Hierarchical attention mechanism
  • Image captioning
  • Policy gradient
  • Reinforcement learning
