Deep feature similarity for generative adversarial networks

Xianxu Hou, Ke Sun, Guoping Qiu

Research output: Chapter in Book or Report/Conference proceeding › Conference Proceeding › peer-review

2 Citations (Scopus)

Abstract

We propose a new way to train generative adversarial networks (GANs) based on a pretrained deep convolutional neural network (CNN). Instead of directly using the generated and real images in pixel space, the corresponding deep features extracted from the pretrained network are used to train the generator and discriminator. We enforce deep feature similarity between the generated and real images to stabilize training and to generate more natural-looking images. Tested on face and flower image datasets, the generated samples are clearer and have higher visual quality than those of traditional GANs. A human evaluation shows that people cannot easily distinguish the generated face images from real ones.
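To illustrate the idea described in the abstract (training both networks on deep features from a pretrained CNN rather than on raw pixels), the following is a minimal sketch only, not the authors' implementation. It assumes PyTorch with torchvision's VGG16 as the frozen feature extractor; the network sizes, 32x32 image resolution, and hyperparameters are placeholder choices.

# Illustrative sketch (not the authors' code): a GAN whose discriminator and
# generator losses are computed on deep features from a frozen pretrained CNN
# instead of on pixels. Assumes torchvision >= 0.13; ImageNet normalization of
# inputs to the feature network is omitted for brevity.
import torch
import torch.nn as nn
from torchvision import models

# Frozen pretrained CNN used only to map 32x32 RGB images to feature space
# (first three VGG16 conv blocks -> 256 x 8 x 8 feature maps).
feat_net = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1).features[:16].eval()
for p in feat_net.parameters():
    p.requires_grad = False

class Generator(nn.Module):
    def __init__(self, z_dim=100):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(z_dim, 256, 4, 1, 0), nn.BatchNorm2d(256), nn.ReLU(True),
            nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.ReLU(True),
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.ReLU(True),
            nn.ConvTranspose2d(64, 3, 4, 2, 1), nn.Tanh(),  # 3 x 32 x 32 image
        )

    def forward(self, z):
        return self.net(z)

class FeatureDiscriminator(nn.Module):
    """Discriminator that classifies VGG feature maps instead of pixels."""
    def __init__(self, in_ch=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 128, 3, 2, 1), nn.LeakyReLU(0.2, True),
            nn.Conv2d(128, 64, 3, 2, 1), nn.LeakyReLU(0.2, True),
            nn.Flatten(), nn.Linear(64 * 2 * 2, 1),  # real/fake logit
        )

    def forward(self, f):
        return self.net(f)

G, D = Generator(), FeatureDiscriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4, betas=(0.5, 0.999))
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4, betas=(0.5, 0.999))
bce = nn.BCEWithLogitsLoss()

def train_step(real_images):  # real_images: (B, 3, 32, 32), values in [-1, 1]
    b = real_images.size(0)
    z = torch.randn(b, 100, 1, 1)

    # Map real and generated images into the pretrained deep feature space.
    real_feat = feat_net(real_images)
    fake_feat = feat_net(G(z))

    # Discriminator step: separate real features from generated features.
    opt_d.zero_grad()
    d_loss = (bce(D(real_feat), torch.ones(b, 1))
              + bce(D(fake_feat.detach()), torch.zeros(b, 1)))
    d_loss.backward()
    opt_d.step()

    # Generator step: make its features indistinguishable from real ones.
    opt_g.zero_grad()
    g_loss = bce(D(fake_feat), torch.ones(b, 1))
    g_loss.backward()
    opt_g.step()
    return d_loss.item(), g_loss.item()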

Original language: English
Title of host publication: Proceedings - 4th Asian Conference on Pattern Recognition, ACPR 2017
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 120-125
Number of pages: 6
ISBN (Electronic): 9781538633540
DOIs
Publication status: Published - 13 Dec 2018
Externally published: Yes
Event: 4th Asian Conference on Pattern Recognition, ACPR 2017 - Nanjing, China
Duration: 26 Nov 2017 - 29 Nov 2017

Publication series

Name: Proceedings - 4th Asian Conference on Pattern Recognition, ACPR 2017

Conference

Conference: 4th Asian Conference on Pattern Recognition, ACPR 2017
Country/Territory: China
City: Nanjing
Period: 26/11/17 - 29/11/17

Keywords

  • CNN
  • Deep Feature
  • GAN
