StyleAU: StyleGAN based Facial Action Unit Manipulation for Expression Editing

Yanliang Guo, Xianxu Hou, Feng Liu, Linlin Shen*, Lei Wang, Zhen Wang, Peng Liu

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference Proceeding › peer-review

Abstract

Facial expression editing has a wide range of applications, such as emotion detection, human-computer interaction, and social entertainment. However, existing expression editing methods either fail to allow for fine-grained editing, resulting in unnatural and unrealistic facial expressions, or generate artifacts and blurs, leading to poor image quality. In this paper, we propose a novel framework called StyleAU, based on StyleGAN and facial action units, to address these problems. Our framework leverages the prior knowledge of a pre-trained StyleGAN to edit facial action units in the StyleGAN latent space, enabling precise expression editing. In addition, we use an encoder to extract multi-scale content features to achieve high-fidelity image reconstruction. Our approach qualitatively and quantitatively outperforms competing methods on both action unit manipulation and expression editing.
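The abstract describes a pipeline in which an encoder inverts a face image into the StyleGAN latent space, action-unit targets drive a latent-space edit, and the pre-trained generator renders the result. The following is a minimal conceptual sketch of that idea, not the authors' released implementation: all module names, dimensions, the AU count, and the placeholder generator are assumptions made for illustration.

```python
# Hypothetical sketch of the StyleAU idea described in the abstract:
# encode a face into W+ latent codes, shift the codes toward target
# action-unit (AU) intensities, and decode with a frozen StyleGAN.
# Names, sizes, and the dummy generator below are illustrative assumptions.
import torch
import torch.nn as nn

NUM_AUS = 17        # size of the AU intensity vector (assumption)
LATENT_DIM = 512    # StyleGAN W-space dimensionality
NUM_LAYERS = 18     # number of W+ style vectors for a 1024x1024 StyleGAN2 (assumption)

class MultiScaleEncoder(nn.Module):
    """Maps an RGB face image to W+ latent codes (one 512-d vector per layer).
    A real multi-scale encoder would fuse features from several resolutions;
    this version is deliberately simplified."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.to_wplus = nn.Linear(128, NUM_LAYERS * LATENT_DIM)

    def forward(self, img):
        feat = self.backbone(img)
        return self.to_wplus(feat).view(-1, NUM_LAYERS, LATENT_DIM)

class AUMapper(nn.Module):
    """Predicts a latent offset from the desired change in AU intensities."""
    def __init__(self):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(NUM_AUS, 256), nn.ReLU(),
            nn.Linear(256, NUM_LAYERS * LATENT_DIM),
        )

    def forward(self, delta_au):
        return self.mlp(delta_au).view(-1, NUM_LAYERS, LATENT_DIM)

class DummyStyleGAN(nn.Module):
    """Stand-in for a frozen, pre-trained StyleGAN generator (placeholder only)."""
    def forward(self, w_plus):
        b = w_plus.shape[0]
        return torch.tanh(w_plus.mean(dim=(1, 2))).view(b, 1, 1, 1).expand(b, 3, 256, 256)

def edit_expression(img, delta_au, encoder, mapper, generator):
    w_plus = encoder(img)               # invert the image into W+ space
    w_edit = w_plus + mapper(delta_au)  # shift the latent toward the target AUs
    return generator(w_edit)            # decode the edited face

if __name__ == "__main__":
    enc, mapper, gen = MultiScaleEncoder(), AUMapper(), DummyStyleGAN()
    img = torch.randn(1, 3, 256, 256)   # toy input image
    delta_au = torch.zeros(1, NUM_AUS)
    delta_au[0, 11] = 1.0               # e.g. raise the intensity of one AU
    out = edit_expression(img, delta_au, enc, mapper, gen)
    print(out.shape)                    # torch.Size([1, 3, 256, 256])
```

The key design point the abstract emphasizes, multi-scale content features feeding a high-fidelity reconstruction, would live in the encoder; the sketch only shows where such a module sits relative to the AU-conditioned latent edit and the frozen generator.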

Original language: English
Title of host publication: 2023 IEEE International Joint Conference on Biometrics, IJCB 2023
Publisher: Institute of Electrical and Electronics Engineers Inc.
ISBN (Electronic): 9798350337266
DOIs
Publication status: Published - 2023
Event: 2023 IEEE International Joint Conference on Biometrics, IJCB 2023 - Ljubljana, Slovenia
Duration: 25 Sept 2023 - 28 Sept 2023

Publication series

Name: 2023 IEEE International Joint Conference on Biometrics, IJCB 2023

Conference

Conference: 2023 IEEE International Joint Conference on Biometrics, IJCB 2023
Country/Territory: Slovenia
City: Ljubljana
Period: 25/09/23 - 28/09/23
