Generative adversarial networks with mixture of t-distributions noise for diverse image generation

Jinxuan Sun, Guoqiang Zhong*, Yang Chen, Yongbin Liu, Tao Li, Kaizhu Huang

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

18 Citations (Scopus)

Abstract

Image generation is a long-standing problem in the machine learning and computer vision areas. To generate images with high diversity, we propose a novel model called generative adversarial networks with mixture of t-distributions noise (tGANs). In tGANs, the latent generative space is formulated using a mixture of t-distributions. In particular, the parameters of the components in the mixture of t-distributions can be learned along with the other parameters of the model. To improve the diversity of the generated images in each class, each noise vector is concatenated with a class codeword as the input of the generator of tGANs. In addition, a classification loss is added to both the generator and discriminator losses to strengthen their performance. We have conducted extensive experiments to compare tGANs with a state-of-the-art pixel-by-pixel image generation approach, pixelCNN, and related GAN-based models. The experimental results and statistical comparisons demonstrate that tGANs perform significantly better than pixelCNN and related GAN-based models for diverse image generation.
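As a rough illustration of the mechanism the abstract describes, the PyTorch sketch below samples generator noise from a mixture of t-distributions with learnable locations, scales, degrees of freedom, and mixture weights, and concatenates each noise vector with a one-hot class codeword to form the generator input. This is not the authors' implementation: all names (MixtureTNoise, generator_input, latent_dim, and so on) and the one-hot encoding of the codeword are illustrative assumptions.

```python
# Minimal sketch (assumptions noted in comments), not the paper's code.
import torch
import torch.nn.functional as F
from torch.distributions import StudentT

class MixtureTNoise(torch.nn.Module):
    """Learnable mixture of t-distributions over the latent space.

    Each of the K components has a learnable location, scale, and degrees of
    freedom; the mixture weights are learnable via a softmax over logits.
    """
    def __init__(self, n_components=10, latent_dim=100):
        super().__init__()
        self.logits = torch.nn.Parameter(torch.zeros(n_components))             # mixture weights
        self.loc = torch.nn.Parameter(torch.randn(n_components, latent_dim) * 0.1)
        self.log_scale = torch.nn.Parameter(torch.zeros(n_components, latent_dim))
        self.log_df = torch.nn.Parameter(torch.zeros(n_components))             # keeps df > 0 via exp

    def sample(self, batch_size):
        # Pick a component index per sample, then draw from that component's
        # t-distribution. rsample() gives reparameterized gradients for the
        # df/loc/scale parameters of the chosen components.
        probs = F.softmax(self.logits, dim=0)
        comp = torch.multinomial(probs, batch_size, replacement=True)
        df = self.log_df.exp()[comp].unsqueeze(1)        # (B, 1), broadcast over latent_dim
        loc = self.loc[comp]                             # (B, latent_dim)
        scale = self.log_scale.exp()[comp]               # (B, latent_dim)
        return StudentT(df, loc, scale).rsample()

def generator_input(noise_sampler, labels, n_classes=10):
    """Concatenate each noise vector with a class codeword.

    The one-hot encoding is an assumption; the paper only states that the
    noise vector and a class codeword are concatenated.
    """
    z = noise_sampler.sample(labels.size(0))
    codeword = F.one_hot(labels, n_classes).float()
    return torch.cat([z, codeword], dim=1)

# Usage example: 16 samples conditioned on random class labels.
sampler = MixtureTNoise(n_components=10, latent_dim=100)
labels = torch.randint(0, 10, (16,))
g_in = generator_input(sampler, labels)
print(g_in.shape)  # torch.Size([16, 110])
```

One caveat on this sketch: the discrete component selection via torch.multinomial is not differentiable, so gradients reach the mixture weights only indirectly here; the paper's actual scheme for learning the mixture parameters jointly with the rest of the model may differ.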

Original language: English
Pages (from-to): 374-381
Number of pages: 8
Journal: Neural Networks
Volume: 122
Publication status: Published - Feb 2020

Keywords

  • Class codeword
  • Diversity
  • Generative adversarial networks
  • Image generation
  • Mixture of t-distributions
