BERT-hLSTMs: BERT and hierarchical LSTMs for visual storytelling

Jing Su, Qingyun Dai, Frank Guerin, Mian Zhou*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review


Abstract

Visual storytelling is a creative and challenging task, aiming to automatically generate a story-like description for a sequence of images. The descriptions generated by previous visual storytelling approaches lack coherence because they use word-level sequence generation methods and do not adequately consider sentence-level dependencies. To tackle this problem, we propose a novel hierarchical visual storytelling framework which separately models sentence-level and word-level semantics. We use the transformer-based BERT to obtain embeddings for sentences and words. We then employ a hierarchical LSTM network: the bottom LSTM receives as input the sentence vector representation from BERT, to learn the dependencies between the sentences corresponding to images, and the top LSTM is responsible for generating the corresponding word vector representations, taking input from the bottom LSTM. Experimental results demonstrate that our model outperforms the most closely related baselines under the automatic evaluation metrics BLEU and CIDEr, and human evaluation also confirms the effectiveness of our method.
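To make the two-level decoding described above concrete, the following is a minimal, hypothetical PyTorch sketch of such a hierarchy. It is not the authors' released code; the class name, layer sizes, and the concatenation of the sentence-level context with word embeddings are illustrative assumptions, chosen only to show how a bottom (sentence-level) LSTM over BERT sentence vectors can condition a top (word-level) LSTM that emits word logits.

```python
import torch
import torch.nn as nn

class HierarchicalStoryDecoder(nn.Module):
    """Illustrative sketch (hypothetical names and sizes) of a BERT-hLSTMs-style decoder.

    The bottom ("sentence") LSTM consumes one BERT sentence embedding per image
    to model sentence-level dependencies across the story; the top ("word") LSTM
    is conditioned on the sentence LSTM's output and generates word logits.
    """

    def __init__(self, vocab_size, sent_dim=768, hidden_dim=512, word_emb_dim=300):
        super().__init__()
        self.sent_lstm = nn.LSTM(sent_dim, hidden_dim, batch_first=True)
        self.word_emb = nn.Embedding(vocab_size, word_emb_dim)
        # Word-LSTM input: previous word embedding concatenated with the
        # sentence-level context vector for the current image (an assumption).
        self.word_lstm = nn.LSTM(word_emb_dim + hidden_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, sent_embs, word_ids):
        # sent_embs: (batch, n_images, sent_dim)  BERT sentence vectors
        # word_ids:  (batch, n_images, max_len)   gold words (teacher forcing)
        sent_ctx, _ = self.sent_lstm(sent_embs)           # (B, N, H)
        logits = []
        for i in range(sent_embs.size(1)):
            words = self.word_emb(word_ids[:, i, :])      # (B, T, E)
            ctx = sent_ctx[:, i:i + 1, :].expand(-1, words.size(1), -1)
            h, _ = self.word_lstm(torch.cat([words, ctx], dim=-1))
            logits.append(self.out(h))                    # (B, T, vocab)
        return torch.stack(logits, dim=1)                 # (B, N, T, vocab)
```

In this sketch the loss would simply be cross-entropy between the returned logits and the gold word IDs; any details beyond the hierarchy itself (dimensions, conditioning scheme, decoding strategy) should be taken from the paper rather than from this example.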

Original language: English
Article number: 101169
Journal: Computer Speech and Language
Volume: 67
DOIs:
Publication status: Published - May 2021
Externally published: Yes

Keywords

  • BERT
  • Hierarchical LSTMs
  • Sentence vector
  • Visual storytelling

