TY - JOUR
T1 - Dual-stream autoencoder for channel-level multi-scale feature extraction in hyperspectral unmixing
AU - Gan, Yuquan
AU - Wang, Yong
AU - Li, Qiuyu
AU - Luo, Yiming
AU - Wang, Yihong
AU - Pan, Yushan
N1 - Publisher Copyright:
© 2025 Elsevier B.V.
PY - 2025/5/23
Y1 - 2025/5/23
N2 - High-dimensional hyperspectral imagery presents significant challenges for accurate unmixing due to spectral variability, limited spatial resolution, and noise. Traditional unmixing approaches often rely on spatial multi-scale processing, leading to redundant computation and suboptimal feature representations. To address these challenges, we propose a novel Channel Multi-Scale Dual-Stream Autoencoder (CMSDAE) that integrates channel-level multi-scale feature extraction with dedicated spectral information guidance. By leveraging Channel-level Multi-Scale Perception Blocks and a Hybrid Attention-Aware Feature Block, CMSDAE efficiently captures diverse and robust spectral-spatial features while significantly reducing computational redundancy. Extensive experiments on both synthetic and real-world datasets demonstrate that CMSDAE not only improves unmixing accuracy and robustness to noise but also offers greater computational efficiency than state-of-the-art methods. This work provides new insights into spectral-spatial modeling for hyperspectral unmixing, promising more reliable and scalable analysis in challenging remote sensing applications.
AB - High-dimensional hyperspectral imagery presents significant challenges for accurate unmixing due to spectral variability, limited spatial resolution, and noise. Traditional unmixing approaches often rely on spatial multi-scale processing, leading to redundant computation and suboptimal feature representations. To address these challenges, we propose a novel Channel Multi-Scale Dual-Stream Autoencoder (CMSDAE) that integrates channel-level multi-scale feature extraction with dedicated spectral information guidance. By leveraging Channel-level Multi-Scale Perception Blocks and a Hybrid Attention-Aware Feature Block, CMSDAE efficiently captures diverse and robust spectral-spatial features while significantly reducing computational redundancy. Extensive experiments on both synthetic and real-world datasets demonstrate that CMSDAE not only improves unmixing accuracy and robustness to noise but also offers greater computational efficiency than state-of-the-art methods. This work provides new insights into spectral-spatial modeling for hyperspectral unmixing, promising more reliable and scalable analysis in challenging remote sensing applications.
KW - Autoencoder
KW - Channel-level multi-scale
KW - Hyperspectral unmixing
KW - Multi-scale feature extraction
KW - Spectral information guidance
UR - http://www.scopus.com/inward/record.url?scp=105001798961&partnerID=8YFLogxK
U2 - 10.1016/j.knosys.2025.113428
DO - 10.1016/j.knosys.2025.113428
M3 - Article
AN - SCOPUS:105001798961
SN - 0950-7051
VL - 317
JO - Knowledge-Based Systems
JF - Knowledge-Based Systems
M1 - 113428
ER -