Abstract
Graph Convolutional Networks (GCNs) are widely used in medical image diagnostic research because they can automatically learn powerful and robust feature representations. However, their performance can be significantly degraded by trivial or corrupted medical features and samples. Moreover, existing methods cannot simultaneously interpret the significant features and samples. To overcome these limitations, in this paper we propose a novel dual interpretable graph convolutional network, named FSNet, that simultaneously selects significant features and samples, so as to boost model performance for medical diagnosis and interpretation. Specifically, the proposed network consists of three modules: two leverage a simple yet effective sparse mechanism to obtain feature and sample weight matrices for interpreting features and samples, respectively, and the third performs medical diagnosis. Extensive experiments on the Alzheimer's Disease Neuroimaging Initiative (ADNI) datasets demonstrate superior classification performance and interpretability over recent state-of-the-art methods.
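The abstract does not specify the sparse mechanism, but the idea of deriving sparse feature and sample weight vectors that both reweight the input and serve as interpretations can be sketched as follows. This is a minimal illustrative stand-in (soft thresholding of learned scores, names and threshold `tau` are assumptions, not the paper's method):

```python
import numpy as np

def sparse_weights(scores, tau=0.1):
    """Soft-threshold raw scores into a sparse, normalized weight vector.
    Entries below tau are zeroed out; the survivors are renormalized.
    Hypothetical stand-in for FSNet's sparse mechanism."""
    w = np.maximum(scores - tau, 0.0)
    s = w.sum()
    return w / s if s > 0 else w

# Toy data: 4 samples x 5 features
X = np.array([[1., 0., 2., 0., 1.],
              [0., 1., 0., 3., 0.],
              [2., 0., 1., 0., 0.],
              [0., 2., 0., 1., 1.]])

# Per-feature and per-sample importance scores (in the paper these
# would be learned by the two interpretation modules).
feat_scores = np.array([0.5, 0.05, 0.8, 0.3, 0.02])
samp_scores = np.array([0.6, 0.1, 0.7, 0.05])

w_f = sparse_weights(feat_scores)  # sparse feature weights -> feature interpretation
w_s = sparse_weights(samp_scores)  # sparse sample weights  -> sample interpretation

# Reweighted input for the downstream diagnosis module: trivial
# features and samples receive zero weight and are effectively dropped.
X_reweighted = (X * w_f[None, :]) * w_s[:, None]
```

The zero entries in `w_f` and `w_s` directly mark which features and samples the model deems trivial, which is what makes the weights interpretable.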
| Original language | English |
|---|---|
| Pages (from-to) | 15-25 |
| Number of pages | 11 |
| Journal | IEEE Transactions on Emerging Topics in Computational Intelligence |
| Volume | 7 |
| Issue number | 1 |
| DOIs | |
| Publication status | Published - 1 Feb 2023 |
| Externally published | Yes |
Keywords
- Alzheimer's disease diagnosis research
- feature interpretability
- graph convolutional network
- sample interpretability