Sensorineural hearing loss classification via deep-HLNet and few-shot learning

Xi Chen, Qinghua Zhou, Rushi Lan*, Shui Hua Wang, Yu Dong Zhang*, Xiaonan Luo

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

3 Citations (Scopus)


We propose a new method for hearing loss classification from magnetic resonance imaging (MRI) that automatically detects tissue-specific features in a given scan. Sensorineural hearing loss (SNHL) is highly prevalent in our society, and early diagnosis and intervention have a profound impact on patient outcomes; automated diagnostic systems offer one route to earlier diagnosis. In this study, we propose a novel Deep-HLNet framework, based on few-shot learning, for the automated classification of SNHL. The research involves MRI scans from 60 participants in three balanced categories: left-sided SNHL, right-sided SNHL, and healthy controls. A convolutional neural network was employed for feature extraction from individual categories, while a neural network and a comparison-classifier strategy together constituted a tri-classifier for SNHL classification. In experiments, the classification performance was significantly better than that of standard deep learning methods and other conventional approaches, with an overall accuracy of 96.62%.
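The abstract describes a few-shot, comparison-based tri-classifier operating on CNN features. The paper's exact comparison strategy is not given here, so the sketch below illustrates the general idea with a hypothetical nearest-prototype classifier: each of the three classes (left-sided SNHL, right-sided SNHL, healthy control) is summarized by the mean of its support embeddings, and a query scan's feature vector is assigned to the closest prototype. The function names and the use of Euclidean distance are assumptions, not the authors' implementation.

```python
import numpy as np

def few_shot_classify(support_feats, support_labels, query_feat):
    """Assign a query to the nearest class prototype.

    support_feats: (n, d) array of feature vectors from a CNN backbone
                   (standing in for Deep-HLNet's extractor, which is
                   not specified in this record).
    support_labels: (n,) integer labels, e.g. 0 = left-sided SNHL,
                    1 = right-sided SNHL, 2 = healthy control.
    query_feat: (d,) feature vector of the query MRI scan.
    Returns the label of the closest class prototype.
    """
    classes = np.unique(support_labels)
    # One prototype per class: the mean embedding of its support examples.
    prototypes = np.stack([
        support_feats[support_labels == c].mean(axis=0) for c in classes
    ])
    # Compare the query against every prototype by Euclidean distance.
    dists = np.linalg.norm(prototypes - query_feat, axis=1)
    return int(classes[np.argmin(dists)])

# Toy usage with 2-D features and two support examples per class.
feats = np.array([[0.0, 0.0], [0.1, 0.0],
                  [5.0, 5.0], [5.1, 5.0],
                  [10.0, 0.0], [10.0, 0.2]])
labels = np.array([0, 0, 1, 1, 2, 2])
pred = few_shot_classify(feats, labels, np.array([4.9, 5.2]))  # → 1
```

This prototype scheme needs only a handful of labeled scans per class, which matches the small 60-participant setting described above.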

Original language: English
Pages (from-to): 2109-2122
Number of pages: 14
Journal: Multimedia Tools and Applications
Issue number: 2
Publication status: Published - Jan 2021
Externally published: Yes


Keywords:
  • Deep-HLNet
  • Few-shot learning
  • Hearing loss


