Dynamic Transfer Exemplar based Facial Emotion Recognition Model Toward Online Video

An Qi Bi, Xiao Yang Tian, Shui Hua Wang, Yu Dong Zhang

Research output: Contribution to journal › Article › peer-review

3 Citations (Scopus)


In this article, we focus on dynamic facial emotion recognition from online video. We combine deep neural networks with transfer learning theory and propose a novel model named DT-EFER. In detail, DT-EFER uses GoogLeNet to extract deep features from the key images of video clips. To handle the dynamic facial emotion recognition scenario, the framework then introduces transfer learning theory. To improve recognition performance, DT-EFER focuses on the differences between key images rather than on the images themselves. Moreover, the time complexity of the model remains low even though previous exemplars are introduced. In comparison with other exemplar-based models, experiments on two datasets, namely BAUM-1s and Extended Cohn-Kanade, demonstrate the effectiveness of the proposed DT-EFER model.
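The abstract states that DT-EFER classifies the differences between key-image features rather than the features themselves. The following is a minimal sketch of that difference step only, assuming per-key-frame deep features have already been extracted by GoogLeNet (simulated here as 1024-dimensional vectors, GoogLeNet's final pooled feature size); the function name and the random stand-in features are illustrative, not from the paper.

```python
import numpy as np

def key_frame_feature_differences(features):
    """Given an (n_frames, d) array of per-key-frame deep features,
    return the (n_frames - 1, d) array of consecutive differences.

    This mirrors the paper's stated idea of modeling differences
    between key images instead of the images themselves.
    """
    features = np.asarray(features)
    return features[1:] - features[:-1]

# Stand-in for GoogLeNet features of 5 key frames (1024-d each).
rng = np.random.default_rng(0)
feats = rng.standard_normal((5, 1024))
diffs = key_frame_feature_differences(feats)
print(diffs.shape)  # (4, 1024)
```

In a full pipeline, `diffs` (rather than `feats`) would be fed to the downstream, transfer-learned classifier; the sketch omits the exemplar mechanism and the transfer step entirely.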

Original language: English
Article number: 121
Journal: ACM Transactions on Multimedia Computing, Communications and Applications
Issue number: 2
Publication status: Published - 6 Oct 2022
Externally published: Yes


  • GoogLeNet
  • transfer learning
  • dynamic facial emotion recognition
  • exemplar-based learning model

