Abstract
Underwater depth estimation is crucial for marine applications such as autonomous navigation and robotics. However, monocular depth estimation in underwater environments remains challenging: the red light spectrum attenuates rapidly in deep water, producing a bluish-green color cast, while suspended particles and limited illumination introduce blurring. These degradations severely affect the performance of RGB-based depth estimation methods, particularly in background regions. To overcome the limitations of color-based depth estimation in underwater scenarios, this paper proposes a novel dual-source depth fusion framework that leverages both color and light-attenuation information. First, a new input space is designed, inspired by the principle of depth-dependent light transmission in underwater environments. This input space enhances robustness against color distortion and improves the capacity to capture depth information, particularly in blurry underwater regions. Subsequently, we develop an adaptive fusion module that exploits the complementary strengths of the RGB and new input spaces across varying underwater conditions. This module employs a confidence-based mechanism to dynamically assess, on a per-pixel basis, the reliability of the depth information from each source; guided by the learned confidence map, it adaptively weights and fuses the two contributions. This strategy enables accurate depth estimation across diverse underwater scenarios. Extensive experiments on multiple challenging datasets demonstrate that our method consistently outperforms current state-of-the-art monocular depth estimation techniques in a variety of subaqueous environments.
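The abstract does not detail the architecture, but the per-pixel confidence-weighted fusion it describes can be sketched in PyTorch. This is a minimal sketch only: the red-attenuation cue used here for the new input space (red light attenuates faster with distance than blue-green, a common underwater depth prior), the `attenuation_input` function, the `ConfidenceFusion` module, and all layer choices are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of dual-source, confidence-weighted depth fusion.
# Assumptions (not from the paper): a hand-crafted red-attenuation cue
# for the new input space and a small convolutional confidence head.
import torch
import torch.nn as nn


def attenuation_input(rgb: torch.Tensor) -> torch.Tensor:
    """Build a depth-cue map from differential channel attenuation.

    rgb: (B, 3, H, W) in [0, 1]. Red light attenuates fastest underwater,
    so the gap between the blue-green channels and the red channel tends
    to grow with distance; larger values hint at more distant pixels.
    """
    r = rgb[:, 0:1]
    gb = rgb[:, 1:3].max(dim=1, keepdim=True).values
    return gb - r


class ConfidenceFusion(nn.Module):
    """Fuse two per-pixel depth estimates with a learned confidence map."""

    def __init__(self, feat_channels: int = 64):
        super().__init__()
        # Predicts a per-pixel confidence in [0, 1] for the RGB-based depth;
        # the attenuation-based depth receives the complementary weight.
        self.conf_head = nn.Sequential(
            nn.Conv2d(feat_channels, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(32, 1, kernel_size=3, padding=1),
            nn.Sigmoid(),
        )

    def forward(self, depth_rgb, depth_att, features):
        # depth_rgb, depth_att: (B, 1, H, W) depth maps from the two branches
        # features: (B, feat_channels, H, W) shared features for confidence
        conf = self.conf_head(features)  # (B, 1, H, W) in [0, 1]
        return conf * depth_rgb + (1.0 - conf) * depth_att


# Usage with placeholder tensors standing in for the two depth branches:
fusion = ConfidenceFusion(feat_channels=64)
rgb = torch.rand(1, 3, 120, 160)
feats = torch.rand(1, 64, 120, 160)
d_rgb = torch.rand(1, 1, 120, 160)   # depth from the RGB branch
d_att = torch.rand(1, 1, 120, 160)   # depth from the attenuation branch
fused = fusion(d_rgb, d_att, feats)  # (1, 1, 120, 160)
```

The convex combination keeps the fused output within the range spanned by the two source estimates, so the learned confidence map degrades gracefully toward whichever branch is more reliable in a given region.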
| Original language | English |
|---|---|
| Article number | 102961 |
| Journal | Information Fusion |
| Volume | 118 |
| DOIs | |
| Publication status | Published - Jun 2025 |
Keywords
- Deep learning
- Information fusion
- Monocular depth estimation
- Underwater imaging