FSNet: Dual Interpretable Graph Convolutional Network for Alzheimer's Disease Analysis

Hengxin Li, Xiaoshuang Shi, Xiaofeng Zhu*, Shuihua Wang*, Zheng Zhang*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

14 Citations (Scopus)

Abstract

Graph Convolutional Networks (GCNs) are widely used in medical image diagnosis research because they can automatically learn powerful and robust feature representations. However, their performance can be significantly degraded by trivial or corrupted medical features and samples. Moreover, existing methods cannot simultaneously interpret the significant features and samples. To overcome these limitations, in this paper we propose a novel dual interpretable graph convolutional network, namely FSNet, which simultaneously selects significant features and samples so as to boost model performance for medical diagnosis and interpretation. Specifically, the proposed network consists of three modules: two of them leverage a simple yet effective sparse mechanism to obtain feature and sample weight matrices for interpreting features and samples, respectively, while the third is used for medical diagnosis. Extensive experiments on the Alzheimer's Disease Neuroimaging Initiative (ADNI) datasets demonstrate its superior classification performance and interpretability compared with recent state-of-the-art methods.
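The abstract describes a sparse mechanism that produces feature and sample weight matrices, down-weighting trivial or corrupted inputs. The paper's exact formulation is not given here, so the following is only a minimal sketch of one common sparsity-inducing choice, soft-thresholding (shrinkage), applied to hypothetical learned feature scores; all variable names and values are illustrative assumptions, not the authors' method.

```python
import numpy as np

def soft_threshold(scores, lam):
    """Shrinkage operator: entries with |score| <= lam become exactly zero,
    yielding a sparse, interpretable weight vector."""
    return np.sign(scores) * np.maximum(np.abs(scores) - lam, 0.0)

# Hypothetical learned importance scores for 8 features (illustrative values)
scores = np.array([0.9, 0.05, -0.4, 0.02, 0.7, -0.03, 0.1, -0.8])
weights = soft_threshold(scores, lam=0.1)

# Apply the sparse weights: trivial features are zeroed out entirely
X = np.random.randn(5, 8)      # 5 samples, 8 features (toy data)
X_weighted = X * weights       # surviving columns indicate "significant" features
```

The zero entries of `weights` directly mark features the model deems trivial, which is the kind of built-in interpretability the abstract attributes to the feature- and sample-weighting modules.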

Original language: English
Pages (from-to): 15-25
Number of pages: 11
Journal: IEEE Transactions on Emerging Topics in Computational Intelligence
Volume: 7
Issue number: 1
Publication status: Published - 1 Feb 2023
Externally published: Yes

Keywords

  • Alzheimer's disease diagnosis research
  • feature interpretability
  • graph convolutional network
  • sample interpretability
