Semi-Supervised Multimodal Fusion Model for Social Event Detection on Web Image Collections

Qing Li, Zhenguo Yang, Zheng Lu, Yun Ma, Zhiguo Gong, Haiwei Pan, Yangbin Chen

Research output: Contribution to journal › Article › peer-review

20 Citations (Scopus)

Abstract

In this work, the authors aim to detect social events from Web images by devising a semi-supervised multimodal fusion model, denoted as SMF. With a multimodal feature fusion layer and a feature reinforcement layer, SMF learns feature histograms to represent the images, fusing multiple heterogeneous features seamlessly and efficiently. In particular, a self-tuning approach is proposed to set the parameters of the feature reinforcement process automatically. Furthermore, to deal with missing values in the raw features, prior knowledge is used to estimate the missing entries as a preprocessing step, and SMF appends an extra attribute indicating whether the values in the fused feature were missing. Based on the fused representation produced by SMF, a series of algorithms are designed by adopting clustering and classification strategies separately. Extensive experiments on the MediaEval social event detection challenge show that the SMF-based approaches outperform the baselines.
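The missing-value handling described above, estimating missing raw-feature entries from prior knowledge and appending an indicator attribute, can be sketched as follows. This is a minimal illustration of the idea only, not the paper's implementation: the function name is hypothetical, and per-column means computed from observed data stand in for the unspecified "prior knowledge".

```python
import numpy as np

def impute_with_indicator(features):
    """Fill NaN entries with per-column priors and append a binary
    attribute marking rows that had any missing value.

    Sketch only: the paper does not specify its prior-knowledge
    estimator; column means over observed entries are used here.
    """
    filled = features.astype(float).copy()
    missing_mask = np.isnan(filled)
    # Prior estimate per column: mean of the observed (non-missing) values.
    priors = np.nanmean(filled, axis=0)
    # Replace each missing entry with the prior for its column.
    filled[missing_mask] = priors[np.nonzero(missing_mask)[1]]
    # Extra attribute: 1 if any value in the row was missing, else 0.
    indicator = missing_mask.any(axis=1).astype(float)[:, None]
    return np.hstack([filled, indicator])
```

Keeping the indicator alongside the imputed values lets downstream clustering or classification distinguish genuinely observed features from estimated ones.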
Original language: English
Pages (from-to): 1-22
Journal: International Journal of Multimedia Data Engineering & Management
Volume: 6
Issue number: 4
DOIs
Publication status: Published - 1 Oct 2015
Externally published: Yes
