TY - JOUR
T1 - Segmentation of retinal vessels in fundus images based on U-Net with self-calibrated convolutions and spatial attention modules
AU - Rong, Yi Biao
AU - Xiong, Yu
AU - Li, Chong
AU - Chen, Ying
AU - Wei, Peiwei
AU - Wei, Chuliang
AU - Fan, Zhun
N1 - Publisher Copyright:
© 2023, International Federation for Medical and Biological Engineering.
PY - 2023/7
Y1 - 2023/7
AB - Automated and accurate segmentation of retinal vessels in fundus images is an important step in screening and diagnosing various ophthalmologic diseases. However, many factors, including variations of the vessels in color, shape and size, make this task an intricate challenge. Among the most popular methods for vessel segmentation are U-Net-based methods. However, in U-Net-based methods the size of the convolution kernels is generally fixed, so an individual convolution operation has a single receptive field, which is not conducive to segmenting retinal vessels of various thicknesses. To overcome this problem, we replace the traditional convolutions in the U-Net with self-calibrated convolutions, which enable the U-Net to learn discriminative representations from different receptive fields. In addition, we propose an improved spatial attention module, instead of traditional convolutions, to connect the encoding and decoding parts of the U-Net, which improves the ability of the U-Net to detect thin vessels. The proposed method was tested on the Digital Retinal Images for Vessel Extraction (DRIVE) database and the Child Heart and Health Study in England database (CHASE DB1). The metrics used to evaluate the performance of the proposed method are accuracy (ACC), sensitivity (SE), specificity (SP), F1-score (F1) and the area under the receiver operating characteristic curve (AUC). The proposed method achieves an ACC, SE, SP, F1 and AUC of 0.9680, 0.8036, 0.9840, 0.8138 and 0.9840, respectively, on the DRIVE database, and 0.9756, 0.8118, 0.9867, 0.8068 and 0.9888, respectively, on CHASE DB1, which are better than the results of the traditional U-Net (0.9646, 0.7895, 0.9814, 0.7963 and 0.9791 on DRIVE, and 0.9733, 0.7817, 0.9862, 0.7870 and 0.9810 on CHASE DB1). The experimental results indicate that the proposed modifications to the U-Net are effective for vessel segmentation.
KW - Retinal vessel segmentation
KW - Self-calibrated convolutions
KW - Spatial attention modules
UR - http://www.scopus.com/inward/record.url?scp=85149745248&partnerID=8YFLogxK
U2 - 10.1007/s11517-023-02806-1
DO - 10.1007/s11517-023-02806-1
M3 - Article
AN - SCOPUS:85149745248
SN - 0140-0118
VL - 61
SP - 1745
EP - 1755
JO - Medical and Biological Engineering and Computing
JF - Medical and Biological Engineering and Computing
IS - 7
ER -