VLP2MSA: Expanding vision-language pre-training to multimodal sentiment analysis

Guofeng Yi, Cunhang Fan*, Kang Zhu, Zhao Lv, Shan Liang, Zhengqi Wen, Guanxiong Pei, Taihao Li, Jianhua Tao*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

8 Citations (Scopus)

Abstract

Large-scale vision-and-language representation learning has improved performance on a variety of joint vision-language downstream tasks. In this work, our objective is to extend it effectively to multimodal sentiment analysis and to address two pressing challenges in this field: (1) the low contribution of the visual modality and (2) the design of an effective multimodal fusion architecture. To overcome the imbalance between the visual and textual modalities, we propose an inter-frame hybrid transformer that extends the recent CLIP and TimeSformer architectures. This module extracts spatio-temporal features from sparsely sampled video frames, capturing not only facial expressions but also body-movement information, and thus provides a more comprehensive visual representation than the traditional direct use of pre-extracted facial features. Additionally, we tackle the challenge of modality heterogeneity in the fusion architecture by introducing a new scheme that prompts and aligns the video and text information before fusing them. Specifically, we generate discriminative text prompts based on the video content to enhance the text representation, and we align the unimodal video and text features with a video-text contrastive loss. Our end-to-end trainable model achieves state-of-the-art performance on three widely used datasets (MOSI, MOSEI, and CH-SIMS) using only two modalities. These experimental results validate the effectiveness of our approach to multimodal sentiment analysis.
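
The two technical ingredients named in the abstract, temporal modelling over sparsely sampled frame features and a video-text contrastive loss that aligns the unimodal representations before fusion, can be illustrated with a short sketch. The PyTorch code below is a minimal illustration under stated assumptions, not the authors' released implementation: the `InterFrameTemporalEncoder` class, its dimensions, and the use of generic CLIP-style frame embeddings are hypothetical stand-ins for the paper's inter-frame hybrid transformer and prompting scheme.

```python
# Minimal sketch (assumed PyTorch implementation, not the authors' code) of
# (1) temporal aggregation over sparsely sampled per-frame features and
# (2) a symmetric video-text contrastive (InfoNCE-style) alignment loss.
import torch
import torch.nn as nn
import torch.nn.functional as F


class InterFrameTemporalEncoder(nn.Module):
    """Aggregates per-frame embeddings (e.g. from a CLIP image encoder) into a
    clip-level video representation. Hypothetical stand-in for the paper's
    inter-frame hybrid transformer."""

    def __init__(self, dim: int = 512, num_layers: int = 2, num_heads: int = 8):
        super().__init__()
        layer = nn.TransformerEncoderLayer(
            d_model=dim, nhead=num_heads, batch_first=True
        )
        self.temporal = nn.TransformerEncoder(layer, num_layers=num_layers)

    def forward(self, frame_feats: torch.Tensor) -> torch.Tensor:
        # frame_feats: (batch, num_sampled_frames, dim)
        out = self.temporal(frame_feats)   # attend across frames (time axis)
        return out.mean(dim=1)             # mean-pool into one video embedding


def video_text_contrastive_loss(
    video_emb: torch.Tensor, text_emb: torch.Tensor, temperature: float = 0.07
) -> torch.Tensor:
    """Matched video/text pairs in a batch are pulled together,
    mismatched pairs pushed apart (symmetric cross-entropy)."""
    v = F.normalize(video_emb, dim=-1)
    t = F.normalize(text_emb, dim=-1)
    logits = v @ t.t() / temperature                     # (batch, batch) similarities
    targets = torch.arange(v.size(0), device=v.device)   # diagonal = positives
    return 0.5 * (
        F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets)
    )


# Usage sketch with random stand-ins for CLIP frame features and text features.
frames = torch.randn(4, 8, 512)   # 4 clips, 8 sparsely sampled frames each
texts = torch.randn(4, 512)       # paired utterance-level text embeddings
video = InterFrameTemporalEncoder()(frames)
loss = video_text_contrastive_loss(video, texts)
```

In this reading, the contrastive loss is applied to the unimodal video and text embeddings before any cross-modal fusion, which matches the abstract's "align before fuse" scheme; the actual fusion module and prompt generation are not sketched here.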

Original language: English
Article number: 111136
Journal: Knowledge-Based Systems
Volume: 283
DOIs
Publication status: Published - 11 Jan 2024
Externally published: Yes

Keywords

  • Multimodal fusion
  • Multimodal sentiment analysis
  • Vision-language
