TY - JOUR
T1 - Generative adversarial networks with decoder–encoder output noises
AU - Zhong, Guoqiang
AU - Gao, Wei
AU - Liu, Yongbin
AU - Yang, Youzhao
AU - Wang, Da Han
AU - Huang, Kaizhu
N1 - Publisher Copyright:
© 2020 Elsevier Ltd
PY - 2020/7
Y1 - 2020/7
N2 - In recent years, research on image generation has advanced rapidly. The generative adversarial network (GAN) has emerged as a promising framework that uses adversarial training to improve the generative ability of its generator. However, since GAN and most of its variants use randomly sampled noise as the input to their generators, they must learn a mapping from an entire random distribution to the image manifold. Because the structures of the random distribution and the image manifold generally differ, GAN and its variants are difficult to train and slow to converge. In this paper, we propose a novel deep model called generative adversarial networks with decoder–encoder output noises (DE-GANs), which takes advantage of both adversarial training and variational Bayesian inference to improve the image generation performance of GAN and its variants. DE-GANs use a pre-trained decoder–encoder architecture to map random noise vectors to informative ones and feed them to the generator of the adversarial networks. Since the decoder–encoder architecture is trained on the same dataset as the generator, its output vectors, serving as the inputs to the generator, can carry the intrinsic distribution information of the training images, which greatly improves the learnability of the generator and the quality of the generated images. Extensive experiments demonstrate the effectiveness of the proposed model, DE-GANs.
AB - In recent years, research on image generation has advanced rapidly. The generative adversarial network (GAN) has emerged as a promising framework that uses adversarial training to improve the generative ability of its generator. However, since GAN and most of its variants use randomly sampled noise as the input to their generators, they must learn a mapping from an entire random distribution to the image manifold. Because the structures of the random distribution and the image manifold generally differ, GAN and its variants are difficult to train and slow to converge. In this paper, we propose a novel deep model called generative adversarial networks with decoder–encoder output noises (DE-GANs), which takes advantage of both adversarial training and variational Bayesian inference to improve the image generation performance of GAN and its variants. DE-GANs use a pre-trained decoder–encoder architecture to map random noise vectors to informative ones and feed them to the generator of the adversarial networks. Since the decoder–encoder architecture is trained on the same dataset as the generator, its output vectors, serving as the inputs to the generator, can carry the intrinsic distribution information of the training images, which greatly improves the learnability of the generator and the quality of the generated images. Extensive experiments demonstrate the effectiveness of the proposed model, DE-GANs.
KW - Generative adversarial networks
KW - Generative models
KW - Image generation
KW - Noise
KW - Variational autoencoders
UR - http://www.scopus.com/inward/record.url?scp=85083309385&partnerID=8YFLogxK
U2 - 10.1016/j.neunet.2020.04.005
DO - 10.1016/j.neunet.2020.04.005
M3 - Article
C2 - 32315932
AN - SCOPUS:85083309385
SN - 0893-6080
VL - 127
SP - 19
EP - 28
JO - Neural Networks
JF - Neural Networks
ER -