Deep generative image priors for semantic face manipulation

Xianxu Hou, Linlin Shen*, Zhong Ming, Guoping Qiu

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review


Abstract

Previous work on generative adversarial networks (GANs) has mainly focused on how to synthesize high-fidelity images. In this paper, we present a framework that leverages the knowledge learned by GANs for semantic face manipulation. In particular, we propose to control the semantics of synthesized faces by adapting their latent codes with an attribute prediction model. Moreover, to achieve a more accurate estimation of different facial attributes, we propose to pretrain the attribute prediction model by inverting synthesized face images back to the GAN latent space. As a result, our method explicitly considers the semantics encoded in the latent space of a pretrained GAN and can faithfully edit various attributes such as eyeglasses, smiling, baldness, age, mustache, and gender for high-resolution face images. Extensive experiments show that our method outperforms the state of the art in both face attribute prediction and semantic face manipulation.
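The core idea described above, steering a latent code by following the gradient of a differentiable attribute predictor, can be sketched in a toy form. The sketch below omits the GAN generator entirely and replaces the attribute prediction model with a hypothetical logistic model over the latent code; all names, dimensions, and values are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Toy stand-ins (assumptions, not the paper's actual models):
# a GAN "latent code" z and a logistic attribute predictor
# f(z) = sigmoid(w . z) scoring a single facial attribute.
rng = np.random.default_rng(0)
dim = 16
w = rng.normal(size=dim)   # hypothetical predictor weights
z = rng.normal(size=dim)   # hypothetical latent code of one face

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def attribute_score(z):
    """Predicted probability that the attribute is present."""
    return sigmoid(w @ z)

def edit_latent(z, steps=50, lr=0.1):
    """Gradient-ascend the attribute score with respect to z,
    i.e. adapt the latent code toward the target attribute."""
    z = z.copy()
    for _ in range(steps):
        p = attribute_score(z)
        grad = p * (1.0 - p) * w   # d/dz of sigmoid(w . z)
        z += lr * grad
    return z

z_edited = edit_latent(z)
```

In the full method, `attribute_score` would be a trained network applied to the generator's output (or pretrained on inverted latent codes, as the abstract describes), and the same gradient step would move the face image toward the desired attribute while staying on the GAN's image manifold.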

Original language: English
Article number: 109477
Journal: Pattern Recognition
Volume: 139
Publication status: Published - Jul 2023

Keywords

  • Face attribute prediction
  • GANs
  • Semantic face manipulation

