Semantic Similarity Distance: Towards better text-image consistency metric in text-to-image generation

Zhaorui Tan, Xi Yang*, Zihan Ye, Qiu-Feng Wang, Yuyao Yan, Anh Nguyen, Kaizhu Huang

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

5 Citations (Scopus)

Abstract

Generating high-quality images from text remains a challenge in visual-language understanding, with text-image consistency being a major concern. In particular, the most popular metric, R-precision, may not accurately reflect text-image consistency, leading to misleading semantics in generated images. Despite its significance, designing a better text-image consistency metric surprisingly remains under-explored in the community. In this paper, we take a further step forward to develop a novel CLIP-based metric, Semantic Similarity Distance (SSD), which is both theoretically founded from a distributional viewpoint and empirically verified on benchmark datasets. We also introduce Parallel Deep Fusion Generative Adversarial Networks (PDF-GAN), which uses two novel components to mitigate inconsistent semantics and bridge the text-image semantic gap. A series of experiments indicates that, under the guidance of SSD, PDF-GAN attains better text-image consistency while maintaining decent image quality on the CUB and COCO datasets.
Original language: English
Journal: Pattern Recognition
Publication status: Published - 2023
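
The abstract describes SSD as a CLIP-based consistency metric, but this record does not reproduce its formula. As an illustration only, the sketch below computes a generic CLIP text-image cosine similarity with the Hugging Face `transformers` API; the checkpoint name and the pairwise-cosine formulation are assumptions, not the authors' SSD, which is defined from a distributional viewpoint in the paper itself.

```python
# Minimal sketch (NOT the paper's SSD): a generic CLIP-based
# text-image consistency score, to show the kind of measurement
# a CLIP-based metric builds on. Checkpoint choice is an assumption.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def clip_consistency(caption: str, image: Image.Image) -> float:
    """Cosine similarity between CLIP text and image embeddings."""
    inputs = processor(text=[caption], images=image,
                       return_tensors="pt", padding=True)
    with torch.no_grad():
        text_emb = model.get_text_features(
            input_ids=inputs["input_ids"],
            attention_mask=inputs["attention_mask"])
        img_emb = model.get_image_features(
            pixel_values=inputs["pixel_values"])
    # Normalize so the dot product is a cosine similarity in [-1, 1].
    text_emb = text_emb / text_emb.norm(dim=-1, keepdim=True)
    img_emb = img_emb / img_emb.norm(dim=-1, keepdim=True)
    return (text_emb @ img_emb.T).item()

# Usage: score a generated image against its prompt, e.g.
# clip_consistency("a small red bird with black wings",
#                  Image.open("generated.png"))
```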

Keywords

  • text-to-image generation
  • text-image consistency metric
