Abstract
In this article, we focus on dynamic facial emotion recognition from online video. We combine deep neural networks with transfer learning and propose a novel model named DT-EFER. In detail, DT-EFER uses GoogLeNet to extract deep features from the key images of video clips, and the framework introduces transfer learning to handle the dynamic facial emotion recognition scenario. To improve recognition performance, DT-EFER focuses on the differences between key images rather than on the images themselves. Moreover, the time complexity of the model remains low even though previous exemplars are introduced. Experiments on two datasets, BAUM-1s and Extended Cohn-Kanade, demonstrate the efficiency of the proposed DT-EFER model in comparison with other exemplar-based models.
| Original language | English |
|---|---|
| Article number | 121 |
| Journal | ACM Transactions on Multimedia Computing, Communications and Applications |
| Volume | 18 |
| Issue number | 2 |
| DOIs | |
| Publication status | Published - 6 Oct 2022 |
| Externally published | Yes |
Keywords
- GoogLeNet
- Transfer learning
- dynamic facial emotion recognition
- exemplar-based learning model
Title: Dynamic Transfer Exemplar based Facial Emotion Recognition Model Toward Online Video