Lifelong Age Transformation with a Deep Generative Prior

Xianxu Hou, Xiaokang Zhang, Hanbang Liang, Linlin Shen*, Zhong Ming

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

2 Citations (Scopus)

Abstract

In this paper, we consider the lifelong age progression and regression task, which requires synthesizing a person's appearance across a wide range of ages. We propose a simple yet effective learning framework that achieves this by exploiting the prior knowledge of faces captured by well-trained generative adversarial networks (GANs). Specifically, we first use a pretrained GAN to synthesize face images at different ages, and then learn to model the conditional aging process in the GAN latent space. We also introduce a cycle-consistency loss in the GAN latent space to preserve a person's identity. As a result, our model can reliably predict a person's appearance at different ages by modifying both the shape and texture of the head. Both qualitative and quantitative experimental results demonstrate the superiority of our method over concurrent works. Furthermore, we demonstrate that our approach also achieves high-quality age transformation for painted portraits and cartoon characters without additional age annotations.
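The cycle-consistency idea described in the abstract can be sketched in a few lines: map a latent code from a source age to a target age, map it back, and penalize the distance to the original code. The sketch below is illustrative only; it stands in for the paper's learned conditional mapping network with a toy linear shift along a fixed "age direction" (the 512-dimensional latent size and the `age_map` parameterization are assumptions, not the authors' architecture).

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 512  # typical StyleGAN latent dimensionality (assumption)

# Toy stand-in for a learned aging mapper: shift the latent code along a
# fixed unit direction, scaled by the age difference. The real model is a
# learned conditional network operating in the GAN latent space.
direction = rng.normal(size=DIM)
direction /= np.linalg.norm(direction)

def age_map(w, src_age, tgt_age):
    """Move latent code w from src_age toward tgt_age along the age direction."""
    return w + 0.01 * (tgt_age - src_age) * direction

def cycle_consistency_loss(w, src_age, tgt_age):
    """L2 distance between w and its round trip src -> tgt -> src."""
    w_aged = age_map(w, src_age, tgt_age)
    w_back = age_map(w_aged, tgt_age, src_age)
    return float(np.linalg.norm(w_back - w))

w = rng.normal(size=DIM)
loss = cycle_consistency_loss(w, src_age=25, tgt_age=60)
```

Because this toy mapper is exactly invertible, the cycle loss is near zero; for a learned mapper the loss is nonzero and minimizing it encourages the round trip to return to the original identity-bearing latent code.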

Original language: English
Pages (from-to): 3125-3139
Number of pages: 15
Journal: IEEE Transactions on Multimedia
Volume: 25
DOIs
Publication status: Published - 2023
Externally published: Yes

Keywords

  • GANs
  • age transformation

