Abstract
With the rapid growth of deep learning, trainable frameworks have been proposed to restore hazy images. However, the capability of most existing learning-based methods is limited, since parameters learned in an end-to-end manner generalize poorly to hazy or foggy images captured in the real world. Another challenge in extending data-driven models to image dehazing is the need for a large number of hazy and haze-free image pairs of the same scenes, which are impractical to collect. To address these issues, we explore unsupervised single-image dehazing and propose a self-guided generative adversarial network (GAN) based on the dual relationship between dehazing and Retinex. Specifically, we formulate image dehazing as illumination-reflectance separation, carried out by a decomposition net in the generator. A guide module is then applied to encourage local structure preservation and realistic reflectance generation. In addition, we integrate the model with an outdoor heavy-duty pan-tilt-zoom (PTZ) camera to enable dynamic object detection in hazy environments. We comprehensively evaluate the proposed GAN on both synthetic and real-world scenes. The quantitative and qualitative results demonstrate the effectiveness and robustness of our model in handling unseen hazy images with varying visual properties.
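For context, a minimal sketch of the dual relationship the abstract refers to, assuming the standard atmospheric scattering and Retinex formulations; the paper's exact decomposition may differ:

```latex
% Standard formulations assumed here; not necessarily the paper's exact notation.
% Atmospheric scattering model: hazy image I, scene radiance J,
% transmission t, and global atmospheric light A.
\[
  I(x) = J(x)\,t(x) + A\bigl(1 - t(x)\bigr)
\]
% Retinex model: observed image as the element-wise product of
% reflectance R and illumination L.
\[
  I(x) = R(x)\cdot L(x)
\]
% Viewed side by side, the haze-free scene plays a reflectance-like role,
% while the haze-dependent factors (t, A) act as an illumination-like layer,
% which motivates posing dehazing as illumination-reflectance separation.
```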
| Original language | English |
|---|---|
| Article number | 139 |
| Journal | Multimedia Systems |
| Volume | 31 |
| Issue number | 2 |
| DOIs | |
| Publication status | Published - Apr 2025 |
Keywords
- Image dehazing
- Retinex
- Self-guided
- Unsupervised