TY - JOUR
T1 - SDRCNN
T2 - A Single-Scale Dense Residual Connected Convolutional Neural Network for Pansharpening
AU - Fang, Yuan
AU - Cai, Yuanzhi
AU - Fan, Lei
N1 - Funding Information:
This work was supported in part by the Xi'an Jiaotong-Liverpool University Research Enhancement Fund under Grant REF-21-01-003, and in part by the Xi'an Jiaotong-Liverpool University Postgraduate Research Scholarship under Grant PGRS2006010.
Publisher Copyright:
© 2008-2012 IEEE.
PY - 2023
Y1 - 2023
N2 - Pansharpening is the process of fusing a high-spatial-resolution panchromatic image with a low-spatial-resolution multispectral (MS) image to create a high-resolution MS image. A novel single-branch, single-scale lightweight convolutional neural network, named SDRCNN, is developed in this article. By using a novel dense residual connected structure and convolution block, SDRCNN achieves a better tradeoff between accuracy and efficiency. The performance of SDRCNN was tested using four datasets from the WorldView-3, WorldView-2, and QuickBird satellites. The compared methods include eight traditional methods (i.e., Gram-Schmidt (GS), Gram-Schmidt adaptive, partial replacement adaptive component substitution, band-dependent spatial detail, smoothing-filter-based intensity modulation, GLP-CBD, CDIF, and LRTCFPan) and five lightweight deep-learning methods (i.e., pansharpening neural network, PanNet, BayesianNet, DMDNet, and FusionNet). Based on a visual inspection of the pansharpened images and the associated absolute residual maps, SDRCNN exhibited the least spatial detail blurring and spectral distortion among all the methods considered. The values of the quantitative evaluation metrics were closest to their ideal values when SDRCNN was used, and the processing time of SDRCNN was the shortest among all methods tested. Finally, the effectiveness of each component of SDRCNN was demonstrated in ablation experiments. All of these results confirm the superiority of SDRCNN.
AB - Pansharpening is the process of fusing a high-spatial-resolution panchromatic image with a low-spatial-resolution multispectral (MS) image to create a high-resolution MS image. A novel single-branch, single-scale lightweight convolutional neural network, named SDRCNN, is developed in this article. By using a novel dense residual connected structure and convolution block, SDRCNN achieves a better tradeoff between accuracy and efficiency. The performance of SDRCNN was tested using four datasets from the WorldView-3, WorldView-2, and QuickBird satellites. The compared methods include eight traditional methods (i.e., Gram-Schmidt (GS), Gram-Schmidt adaptive, partial replacement adaptive component substitution, band-dependent spatial detail, smoothing-filter-based intensity modulation, GLP-CBD, CDIF, and LRTCFPan) and five lightweight deep-learning methods (i.e., pansharpening neural network, PanNet, BayesianNet, DMDNet, and FusionNet). Based on a visual inspection of the pansharpened images and the associated absolute residual maps, SDRCNN exhibited the least spatial detail blurring and spectral distortion among all the methods considered. The values of the quantitative evaluation metrics were closest to their ideal values when SDRCNN was used, and the processing time of SDRCNN was the shortest among all methods tested. Finally, the effectiveness of each component of SDRCNN was demonstrated in ablation experiments. All of these results confirm the superiority of SDRCNN.
KW - Convolutional neural network (CNN)
KW - deep learning (DL)
KW - fusion
KW - multispectral (MS) image
KW - pansharpening
KW - resolution
UR - http://www.scopus.com/inward/record.url?scp=85164424783&partnerID=8YFLogxK
U2 - 10.1109/JSTARS.2023.3292320
DO - 10.1109/JSTARS.2023.3292320
M3 - Article
AN - SCOPUS:85164424783
SN - 1939-1404
VL - 16
SP - 6325
EP - 6338
JO - IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing
JF - IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing
ER -