High-Resolution Virtual Try-On Network with Coarse-to-Fine Strategy

Qi Lyu*, Qiu Feng Wang, Kaizhu Huang

*Corresponding author for this work

Research output: Contribution to journal › Conference article › peer-review

2 Citations (Scopus)

Abstract

In this paper, we propose a high-resolution virtual try-on network based on 2D images, which can seamlessly fit given clothing onto a target person in any pose. Following a coarse-to-fine strategy, a clothing matching module first warps the given in-shop clothes so that they match the pose of the person; a try-on module then combines the generated images to produce a fitting image of the person wearing the given clothes; finally, a Very Deep Super-Resolution (VDSR) module refines the generated fitting image. Compared with 3D-based methods, which are computationally prohibitive, our method needs only 2D images and is much faster. We evaluate the proposed model both quantitatively (i.e., in terms of SSIM) and qualitatively on a public virtual try-on dataset (i.e., Zalando). The experimental results demonstrate the effectiveness of the proposed method: it generates visually better images and improves SSIM by 1.5%.
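As a rough illustration of the coarse-to-fine strategy described in the abstract, the following Python (PyTorch) sketch outlines the three stages: a clothing matching module that warps the in-shop clothes toward the person's pose, a try-on module that fuses the warped clothes with the person image, and a VDSR-style residual refinement step. This is a minimal sketch under stated assumptions, not the authors' implementation; all module names, layer sizes, and tensor shapes here are illustrative.

import torch
import torch.nn as nn
import torch.nn.functional as F


class ClothingMatchingModule(nn.Module):
    # Predicts per-pixel sampling offsets and warps the in-shop clothes
    # toward the person's pose (illustrative stand-in for the matching module).
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(6, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 2, 3, padding=1),
        )

    def forward(self, clothes, person):
        offsets = self.net(torch.cat([clothes, person], dim=1))
        n, _, h, w = clothes.shape
        # Identity sampling grid plus predicted offsets -> warped clothes
        identity = torch.eye(2, 3, device=clothes.device).expand(n, 2, 3)
        grid = F.affine_grid(identity, (n, 3, h, w), align_corners=False)
        grid = grid + offsets.permute(0, 2, 3, 1)
        return F.grid_sample(clothes, grid, align_corners=False)


class TryOnModule(nn.Module):
    # Fuses warped clothes and person image into a coarse try-on result.
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(6, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, warped_clothes, person):
        return self.net(torch.cat([warped_clothes, person], dim=1))


class VDSRRefiner(nn.Module):
    # VDSR-style refinement: a deep stack of 3x3 convolutions that learns a
    # residual which is added back onto the coarse try-on image.
    def __init__(self, depth=8, channels=64):
        super().__init__()
        layers = [nn.Conv2d(3, channels, 3, padding=1), nn.ReLU()]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU()]
        layers.append(nn.Conv2d(channels, 3, 3, padding=1))
        self.body = nn.Sequential(*layers)

    def forward(self, coarse):
        return coarse + self.body(coarse)


if __name__ == "__main__":
    clothes = torch.rand(1, 3, 256, 192)   # in-shop clothing image
    person = torch.rand(1, 3, 256, 192)    # target person image
    warped = ClothingMatchingModule()(clothes, person)
    coarse = TryOnModule()(warped, person)
    refined = VDSRRefiner()(coarse)
    print(refined.shape)                   # torch.Size([1, 3, 256, 192])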

Original language: English
Article number: 012009
Journal: Journal of Physics: Conference Series
Volume: 1880
Issue number: 1
DOIs
Publication status: Published - 27 Apr 2021
Event: 5th International Conference on Machine Vision and Information Technology, CMVIT 2021 - Virtual, Online
Duration: 26 Feb 2021 → …
