Improving variational autoencoder with deep feature consistent and generative adversarial training

Xianxu Hou, Ke Sun, Linlin Shen, Guoping Qiu*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

42 Citations (Scopus)


We present a new method for improving the performance of the variational autoencoder (VAE). In addition to enforcing the deep feature consistent principle, which ensures that the VAE output and its corresponding input image have similar deep features, we implement a generative adversarial training mechanism that forces the VAE to output realistic and natural images. Experimental results show that a VAE trained with our method outperforms the state of the art in generating face images, producing much clearer and more natural noses, eyes, teeth, and hair textures, as well as reasonable backgrounds. We also show that our method learns powerful embeddings of input face images, which can be used for facial attribute manipulation. Moreover, we propose a multi-view feature extraction strategy that yields effective image representations, achieving state-of-the-art performance in facial attribute prediction.
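The training objective described above can be read as a sum of three terms: the usual VAE KL-divergence regularizer, a deep-feature-consistency (perceptual) reconstruction term computed in the feature space of a fixed pretrained network, and an adversarial term that rewards reconstructions the discriminator judges as real. The following is a minimal NumPy sketch of that decomposition only; the frozen random feature map stands in for a pretrained network such as VGG, and the weights `alpha` and `beta` are illustrative placeholders, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a fixed pretrained network (e.g. VGG): a frozen random
# linear map followed by ReLU, whose outputs play the role of "deep features".
W_feat = rng.normal(size=(64, 16))

def deep_features(x):
    return np.maximum(x @ W_feat, 0.0)

def feature_consistency_loss(x, x_hat):
    # Deep feature consistent term: match features of input and reconstruction
    return np.mean((deep_features(x) - deep_features(x_hat)) ** 2)

def kl_loss(mu, logvar):
    # KL divergence between N(mu, sigma^2) and the standard normal prior
    return -0.5 * np.mean(1.0 + logvar - mu**2 - np.exp(logvar))

def adversarial_loss(d_fake):
    # Non-saturating generator-side loss: push discriminator scores on
    # reconstructions toward "real" (1)
    eps = 1e-8
    return -np.mean(np.log(d_fake + eps))

def total_loss(x, x_hat, mu, logvar, d_fake, alpha=1.0, beta=1.0):
    # alpha and beta are hypothetical weights for illustration only
    return (kl_loss(mu, logvar)
            + alpha * feature_consistency_loss(x, x_hat)
            + beta * adversarial_loss(d_fake))

# Toy batch: pretend encoder/decoder outputs, just to exercise the losses
x = rng.normal(size=(8, 64))
x_hat = x + 0.1 * rng.normal(size=(8, 64))   # imperfect reconstruction
mu = rng.normal(size=(8, 4))
logvar = 0.1 * rng.normal(size=(8, 4))
d_fake = rng.uniform(0.1, 0.9, size=(8,))    # discriminator scores on x_hat

loss = total_loss(x, x_hat, mu, logvar, d_fake)
```

In a real implementation each term would be backpropagated through the encoder/decoder, with the feature extractor kept frozen and the discriminator trained in alternation, as is standard in adversarial training.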

Original language: English
Pages (from-to): 183-194
Number of pages: 12
Publication status: Published - 14 May 2019
Externally published: Yes


Keywords:
  • Facial attributes
  • GAN
  • Generative model
  • Image generation
  • VAE


