Unsupervised single-image dehazing via self-guided inverse-retinex GAN

Hui Chen, Rong Chen*, Yushi Li*, Haoran Li, Nannan Li

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

1 Citation (Scopus)

Abstract

With the rapid growth of deep learning, trainable frameworks have been presented to restore hazy images. However, the capability of most existing learning-based methods is limited, since parameters learned in an end-to-end manner are difficult to generalize to hazy or foggy images captured in the real world. Another challenge of extending data-driven models to image dehazing is collecting a large number of hazy and haze-free image pairs of the same scenes, which is impractical. To address these issues, we explore unsupervised single-image dehazing and propose a self-guided generative adversarial network (GAN) based on the dual relationship between dehazing and Retinex. Specifically, we carry out image dehazing as illumination-reflectance separation using a decomposition net in the generator. Then, a guide module is applied to encourage local structure preservation and realistic reflectance generation. In addition, we integrate the model with an outdoor heavy-duty pan-tilt-zoom (PTZ) camera to implement dynamic object detection in hazy environments. We comprehensively evaluate the proposed GAN on both synthetic and real-world scenes. The quantitative and qualitative results demonstrate the effectiveness and robustness of our model in handling unseen hazy images with varying visual properties.
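The dehazing-Retinex duality the abstract builds on can be sketched in a few lines: inverting a hazy image yields a pseudo low-light image, whose Retinex reflectance approximates the inverted haze-free scene. The NumPy sketch below is illustrative only, assuming the standard atmospheric scattering model with white airlight; the paper's learned decomposition net is replaced by a crude channel-max illumination estimate, and the function names are hypothetical, not from the paper.

```python
import numpy as np

def estimate_illumination(img, eps=1e-3):
    # Crude Retinex illumination estimate via channel-wise max;
    # a hand-crafted stand-in for the paper's decomposition net.
    return np.maximum(img.max(axis=-1, keepdims=True), eps)

def dehaze_via_inverse_retinex(hazy):
    """hazy: float array in [0, 1], shape (H, W, 3)."""
    inverted = 1.0 - hazy                # haze -> pseudo low-light image
    L = estimate_illumination(inverted)  # Retinex: S = R * L
    R = np.clip(inverted / L, 0.0, 1.0)  # reflectance of inverted image
    return 1.0 - R                       # invert back: dehazed estimate
```

Under the scattering model `hazy = J * t + A * (1 - t)` with white airlight `A = 1`, the inverted image is `(1 - J) * t`, so the transmission `t` plays the role of Retinex illumination and dividing it out recovers the scene up to a per-pixel normalization.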

Original language: English
Article number: 139
Journal: Multimedia Systems
Volume: 31
Issue number: 2
DOIs
Publication status: Published - Apr 2025

Keywords

  • Image dehazing
  • Retinex
  • Self-guided
  • Unsupervised

