A unified end-to-end classification model for focal liver lesions

Ling Zhao, Shuaiqi Liu, Yanling An*, Wenjia Cai, Bing Li, Shui Hua Wang, Ping Liang, Jie Yu, Jie Zhao

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

2 Citations (Scopus)

Abstract

Accurate diagnosis of focal liver lesions (FLLs) plays a crucial role in patient management, surveillance, and prognosis. Contrast-enhanced ultrasound (CEUS), a vital diagnostic tool for FLLs, still faces the challenge of image-feature overlap among several types of FLLs. In this study, we proposed a deep learning-based model, denoted the unified end-to-end (UEE) model, to fully capture lesion information from CEUS for the classification of FLLs. We first exploited ResNet50 as the backbone to extract multi-scale features from several CEUS frames. Secondly, a hybrid attention enhancement module (HAEM) was designed to enhance the significant features at various scales. The enhanced features were then concatenated and passed into a nested feature aggregation module (NFAM), which adds nonlinearity to the multi-scale features. Finally, the features from different frames were averaged and fed into a Sigmoid classifier for FLL classification. The experiments were conducted on a multi-center dataset, which ensured diversity. Extensive experimental results revealed that the UEE model achieved 88.64% accuracy on benign (Be) versus malignant (Ma) classification and 91.27% accuracy on hepatocellular carcinoma (HCC) versus intrahepatic cholangiocellular carcinoma (ICC) classification.
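The abstract describes a per-frame pipeline (backbone features → attention enhancement → aggregation) followed by cross-frame averaging and a sigmoid score. The paper's actual HAEM and NFAM architectures are not specified here, so the following is only a minimal pure-Python sketch of the data flow; the feature extractor, the sigmoid-gated "attention", the ReLU aggregation, and the classifier weights are all hypothetical stand-ins, not the authors' modules.

```python
import math
import random

random.seed(0)


def extract_features(frame, scales=(4, 8)):
    """Stand-in for the ResNet50 backbone: one feature vector per
    scale (random here; a real model would compute them from the frame)."""
    return [[random.random() for _ in range(s)] for s in scales]


def attention_enhance(feat):
    """Toy stand-in for the HAEM: gate each element by a sigmoid of
    itself, emphasising stronger activations."""
    return [x * (1.0 / (1.0 + math.exp(-x))) for x in feat]


def aggregate(multi_scale_feats):
    """Stand-in for the NFAM: concatenate the enhanced multi-scale
    features and apply a ReLU nonlinearity."""
    concat = [x for f in multi_scale_feats for x in f]
    return [max(0.0, x) for x in concat]


def classify(frames, weights=None, bias=0.0):
    """Average per-frame aggregated features across frames, then score
    with a sigmoid (e.g. benign vs. malignant)."""
    per_frame = []
    for frame in frames:
        enhanced = [attention_enhance(f) for f in extract_features(frame)]
        per_frame.append(aggregate(enhanced))
    dim = len(per_frame[0])
    avg = [sum(v[i] for v in per_frame) / len(per_frame) for i in range(dim)]
    if weights is None:
        weights = [0.1] * dim  # hypothetical classifier weights
    logit = sum(w * x for w, x in zip(weights, avg)) + bias
    return 1.0 / (1.0 + math.exp(-logit))  # probability in (0, 1)


p = classify(frames=[None] * 3)  # three CEUS frames (placeholders)
print(0.0 < p < 1.0)
```

The sigmoid output is a single probability, matching the paper's two binary tasks (Be/Ma and HCC/ICC); a real implementation would learn every stage end-to-end rather than use the fixed toy weights above.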

Original language: English
Article number: 105260
Journal: Biomedical Signal Processing and Control
Volume: 86
Publication status: Published - Sept 2023
Externally published: Yes

Keywords

  • Deep learning
  • Focal liver lesions
  • Medical image classification
  • Contrast-enhanced ultrasound
