TY - JOUR
T1 - Artificial Intelligence-Based Deep Fusion Model for Pan-Sharpening of Remote Sensing Images
AU - Iskanderani, Ahmed I.
AU - Mehedi, Ibrahim M.
AU - Aljohani, Abdulah Jeza
AU - Shorfuzzaman, Mohammad
AU - Akhter, Farzana
AU - Palaniswamy, Thangam
AU - Latif, Shaikh Abdul
AU - Latif, Abdul
AU - Jannat, Rahtul
N1 - Publisher Copyright:
© 2021 Ahmed I. Iskanderani et al.
PY - 2021
Y1 - 2021
N2 - During the past two decades, many remote sensing image fusion techniques have been designed to improve the spatial resolution of low-spatial-resolution multispectral bands. The main objective is to fuse the low-resolution multispectral (MS) image and the high-spatial-resolution panchromatic (PAN) image to obtain a fused image with high spatial and spectral information. Recently, many artificial intelligence-based deep learning models have been designed to fuse remote sensing images. However, these models do not consider the inherent image distribution difference between MS and PAN images. Therefore, the obtained fused images may suffer from gradient and color distortion problems. To overcome these problems, an efficient artificial intelligence-based deep transfer learning model is proposed in this paper. The Inception-ResNet-v2 model is improved by using a color-aware perceptual loss (CPL). The obtained fused images are further improved by using gradient channel prior as a postprocessing step, which preserves color and gradient information. Extensive experiments are carried out on benchmark datasets. Performance analysis shows that the proposed model preserves color and gradient information in the fused remote sensing images more effectively than the existing models.
AB - During the past two decades, many remote sensing image fusion techniques have been designed to improve the spatial resolution of low-spatial-resolution multispectral bands. The main objective is to fuse the low-resolution multispectral (MS) image and the high-spatial-resolution panchromatic (PAN) image to obtain a fused image with high spatial and spectral information. Recently, many artificial intelligence-based deep learning models have been designed to fuse remote sensing images. However, these models do not consider the inherent image distribution difference between MS and PAN images. Therefore, the obtained fused images may suffer from gradient and color distortion problems. To overcome these problems, an efficient artificial intelligence-based deep transfer learning model is proposed in this paper. The Inception-ResNet-v2 model is improved by using a color-aware perceptual loss (CPL). The obtained fused images are further improved by using gradient channel prior as a postprocessing step, which preserves color and gradient information. Extensive experiments are carried out on benchmark datasets. Performance analysis shows that the proposed model preserves color and gradient information in the fused remote sensing images more effectively than the existing models.
UR - http://www.scopus.com/inward/record.url?scp=85122757836&partnerID=8YFLogxK
U2 - 10.1155/2021/7615106
DO - 10.1155/2021/7615106
M3 - Article
C2 - 34976044
AN - SCOPUS:85122757836
SN - 1687-5265
VL - 2021
JO - Computational Intelligence and Neuroscience
JF - Computational Intelligence and Neuroscience
M1 - 7615106
ER -