Multimodal Emotion and Sentiment Modeling from Unstructured Big Data: Challenges, Architecture, Techniques

Jasmine Kah Phooi Seng, Kenneth Li Minn Ang*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

34 Citations (Scopus)

Abstract

The exponential growth of multimodal content in today's competitive business environment produces a huge volume of unstructured data. Unstructured big data has no particular format or structure and can take any form, such as text, audio, images, and video. In this paper, we address the challenges of emotion and sentiment modeling arising from unstructured big data with different modalities. We first present an up-to-date review of emotion and sentiment modeling, including state-of-the-art techniques. We then propose a new architecture for multimodal emotion and sentiment modeling for big data. The proposed architecture consists of five essential modules: a data collection module, a multimodal data aggregation module, a multimodal data feature extraction module, a fusion and decision module, and an application module. Novel feature extraction techniques, the divide-and-conquer principal component analysis (Div-ConPCA) and the divide-and-conquer linear discriminant analysis (Div-ConLDA), are proposed for the multimodal data feature extraction module in the architecture. Experiments on a multicore machine architecture are performed to validate the performance of the proposed techniques.
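The abstract names a divide-and-conquer PCA (Div-ConPCA) for feature extraction but does not spell out its steps here. As a rough illustration only, a generic divide-and-conquer PCA can be sketched as: split the feature dimensions into blocks, reduce each block independently (the per-block step is what a multicore machine could parallelize), then reduce the concatenated block projections once more. The block sizes, merge step, and function names below are assumptions for illustration, not the authors' published algorithm.

```python
import numpy as np

def pca(X, k):
    # Center the data and project onto the top-k eigenvectors of the covariance.
    Xc = X - X.mean(axis=0)
    cov = (Xc.T @ Xc) / (len(Xc) - 1)
    vals, vecs = np.linalg.eigh(cov)
    top = vecs[:, np.argsort(vals)[::-1][:k]]
    return Xc @ top

def div_con_pca(X, n_blocks, k_local, k_final):
    # Divide: split the feature dimensions into contiguous blocks.
    blocks = np.array_split(X, n_blocks, axis=1)
    # Conquer: reduce each block independently (parallelizable across cores).
    reduced = [pca(b, min(k_local, b.shape[1])) for b in blocks]
    # Merge: concatenate the block projections and reduce once more.
    return pca(np.hstack(reduced), k_final)

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 64))   # 200 samples, 64 features
Z = div_con_pca(X, n_blocks=4, k_local=8, k_final=10)
print(Z.shape)  # (200, 10)
```

The same divide/reduce/merge pattern would apply to an LDA variant (Div-ConLDA), substituting a class-aware projection for the per-block PCA.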

Original language: English
Article number: 8755834
Pages (from-to): 90982-90998
Number of pages: 17
Journal: IEEE Access
Volume: 7
DOIs
Publication status: Published - 2019
Externally published: Yes

Keywords

  • Big data
  • affective analytics
  • emotion recognition
  • sentiment modeling
  • unstructured data
