TY - JOUR
T1 - GuidedStyle
T2 - Attribute knowledge guided style manipulation for semantic face editing
AU - Hou, Xianxu
AU - Zhang, Xiaokang
AU - Liang, Hanbang
AU - Shen, Linlin
AU - Lai, Zhihui
AU - Wan, Jun
N1 - Publisher Copyright:
© 2021 Elsevier Ltd
PY - 2022/1
Y1 - 2022/1
N2 - Although significant progress has been made in synthesizing high-quality, visually realistic face images with unconditional Generative Adversarial Networks (GANs), the generation process still lacks the control needed for semantic face editing. In this paper, we propose a novel learning framework, called GuidedStyle, to achieve semantic face editing on a pretrained StyleGAN by guiding the image generation process with a knowledge network. Furthermore, we allow an attention mechanism in the StyleGAN generator to adaptively select a single layer for style manipulation. As a result, our method is able to perform disentangled and controllable edits along various attributes, including smiling, eyeglasses, gender, mustache, hair color and attractiveness. Both qualitative and quantitative results demonstrate the superiority of our method over other competing methods for semantic face editing. Moreover, we show that our model can also be applied to different types of real and artistic face editing, demonstrating strong generalization ability.
KW - Generative Adversarial Networks
KW - Semantic face editing
KW - StyleGAN
UR - http://www.scopus.com/inward/record.url?scp=85118674938&partnerID=8YFLogxK
DO - 10.1016/j.neunet.2021.10.017
M3 - Article
C2 - 34768091
AN - SCOPUS:85118674938
SN - 0893-6080
VL - 145
SP - 209
EP - 220
JO - Neural Networks
JF - Neural Networks
ER -