Multimodal Emotion Recognition Based on Facial Expressions, Speech, and EEG

Jiahui Pan, Weijie Fang, Zhihang Zhang, Bingzhi Chen*, Zheng Zhang, Shuihua Wang

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

15 Citations (Scopus)

Abstract

Goal: As an essential human-machine interaction task, emotion recognition has attracted growing attention over the past decades. Although previous attempts to classify emotions have achieved high performance, several challenges remain open: 1) how to effectively recognize emotions from different modalities, and 2) given the increasing computing power required by deep learning, how to provide real-time detection while improving the robustness of deep neural networks. Method: In this paper, we propose a deep learning-based multimodal emotion recognition (MER) framework called Deep-Emotion, which adaptively integrates the most discriminative features from facial expressions, speech, and electroencephalogram (EEG) signals to improve MER performance. Specifically, the proposed Deep-Emotion framework consists of three branches, i.e., a facial branch, a speech branch, and an EEG branch. The facial branch uses the improved GhostNet neural network proposed in this paper for feature extraction, which effectively alleviates overfitting during training and improves classification accuracy compared with the original GhostNet. For the speech branch, we propose a lightweight fully convolutional neural network (LFCNN) for the efficient extraction of speech emotion features. For the EEG branch, we propose a tree-like LSTM (tLSTM) model capable of fusing multi-stage features for EEG emotion feature extraction. Finally, we adopt a decision-level fusion strategy to integrate the recognition results of the three modalities, yielding more comprehensive and accurate performance. Results and Conclusions: Extensive experiments on the CK+, EMO-DB, and MAHNOB-HCI datasets demonstrate the effectiveness of the proposed Deep-Emotion framework, as well as the feasibility and superiority of the MER approach.
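To make the decision-level fusion step of the abstract concrete, the sketch below shows one common way to combine per-modality predictions. It is a minimal illustration only: the `DummyBranch` classifiers are placeholders rather than the paper's improved GhostNet, LFCNN, or tLSTM, and the learnable softmax-normalized modality weights are an assumed fusion rule, not necessarily the exact scheme used in Deep-Emotion.

```python
# Minimal sketch of decision-level fusion across three modality branches
# (facial, speech, EEG). Branch models and the weighting rule are assumptions
# for illustration, not the paper's exact architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DummyBranch(nn.Module):
    """Placeholder per-modality classifier producing class logits."""

    def __init__(self, in_dim: int, num_classes: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 64),
            nn.ReLU(),
            nn.Linear(64, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


class DecisionLevelFusion(nn.Module):
    """Fuse per-branch class probabilities with learnable modality weights."""

    def __init__(self, branches: nn.ModuleList):
        super().__init__()
        self.branches = branches
        # One scalar weight per modality, normalized with softmax at fusion time.
        self.weights = nn.Parameter(torch.zeros(len(branches)))

    def forward(self, inputs: list) -> torch.Tensor:
        # Each branch predicts independently; fusion happens on the probabilities.
        probs = [F.softmax(b(x), dim=-1) for b, x in zip(self.branches, inputs)]
        w = F.softmax(self.weights, dim=0)
        fused = sum(wi * p for wi, p in zip(w, probs))
        return fused  # fused class probabilities, shape (batch, num_classes)


if __name__ == "__main__":
    num_classes = 7  # e.g. CK+ provides seven expression categories
    face, speech, eeg = torch.randn(4, 128), torch.randn(4, 96), torch.randn(4, 160)
    model = DecisionLevelFusion(nn.ModuleList([
        DummyBranch(128, num_classes),   # stand-in for the facial branch
        DummyBranch(96, num_classes),    # stand-in for the speech branch
        DummyBranch(160, num_classes),   # stand-in for the EEG branch
    ]))
    print(model([face, speech, eeg]).shape)  # torch.Size([4, 7])
```

Because each branch is trained and evaluated on its own modality, decision-level fusion lets the framework degrade gracefully when one modality is noisy or missing, which is one motivation for preferring it over early feature concatenation.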

Original language: English
Pages (from-to): 396-403
Number of pages: 8
Journal: IEEE Open Journal of Engineering in Medicine and Biology
Volume: 5
DOIs
Publication status: Published - 2024
Externally published: Yes

Keywords

  • Multimodal emotion recognition
  • electroencephalogram
  • facial expressions
  • speech
