GuidedStyle: Attribute knowledge guided style manipulation for semantic face editing

Xianxu Hou, Xiaokang Zhang, Hanbang Liang, Linlin Shen*, Zhihui Lai, Jun Wan

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

36 Citations (Scopus)

Abstract

Although significant progress has been made in synthesizing high-quality and visually realistic face images with unconditional Generative Adversarial Networks (GANs), the generation process still lacks the control needed for semantic face editing. In this paper, we propose a novel learning framework, called GuidedStyle, that achieves semantic face editing on a pretrained StyleGAN by guiding the image generation process with a knowledge network. Furthermore, we introduce an attention mechanism into the StyleGAN generator that adaptively selects a single layer for style manipulation. As a result, our method is able to perform disentangled and controllable edits along various attributes, including smiling, eyeglasses, gender, mustache, hair color, and attractiveness. Both qualitative and quantitative results demonstrate the superiority of our method over competing methods for semantic face editing. Moreover, we show that our model can also be applied to different types of real and artistic face editing, demonstrating strong generalization ability.
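
The layer-selection idea in the abstract can be illustrated with a short sketch. The following PyTorch code is not the authors' implementation; the class name LayerwiseStyleEditor, the parameterization of the editing direction, and the strength and temperature arguments are all assumptions. It only shows how a softmax attention over the generator's layer-wise style inputs can concentrate an attribute edit on (ideally) a single layer, which is the disentanglement mechanism the abstract describes.

import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_LAYERS = 18   # style inputs of a 1024x1024 StyleGAN generator (assumed)
STYLE_DIM = 512   # dimensionality of each w vector

class LayerwiseStyleEditor(nn.Module):
    """Shift the style codes along a learned attribute direction,
    with a softmax attention deciding which layer receives the edit."""
    def __init__(self, num_layers=NUM_LAYERS, style_dim=STYLE_DIM):
        super().__init__()
        # One candidate editing direction per layer (hypothetical choice).
        self.directions = nn.Parameter(torch.randn(num_layers, style_dim) * 0.01)
        # Logits of the layer-selection attention.
        self.layer_logits = nn.Parameter(torch.zeros(num_layers))

    def forward(self, w_plus, strength=1.0, temperature=0.1):
        # w_plus: (batch, num_layers, style_dim) layer-wise latent codes.
        # A low temperature pushes the attention toward a single layer.
        attn = F.softmax(self.layer_logits / temperature, dim=0)  # (num_layers,)
        shift = attn.unsqueeze(-1) * self.directions              # (num_layers, style_dim)
        return w_plus + strength * shift.unsqueeze(0)

In use, the edited w+ codes would be fed to the frozen StyleGAN synthesis network, and a pretrained attribute classifier (the "knowledge network" of the paper) would supervise the edit, e.g. with a classification loss on the target attribute; the exact losses and training setup are described in the paper, not here.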

Original language: English
Pages (from-to): 209-220
Number of pages: 12
Journal: Neural Networks
Volume: 145
Publication status: Published - Jan 2022
Externally published: Yes

Keywords

  • Generative Adversarial Networks
  • Semantic face editing
  • StyleGAN
