Abstract
Graph convolutional networks (GCNs) have emerged as among the most successful learning models for graph-structured data. Despite their success, existing GCNs usually ignore the entangled latent factors that typically arise in real-world graphs, which results in non-explainable node representations. Even worse, while the emphasis has been placed on local graph information, the global knowledge of the entire graph is lost to a certain extent. In this work, we address these issues by proposing a novel framework for GCNs, termed LGD-GCN, which takes advantage of both local and global information to disentangle node representations in the latent space. Specifically, we propose to represent a disentangled latent continuous space with a statistical mixture model, by locally leveraging a neighborhood routing mechanism. From this latent space, various new graphs can then be disentangled and learned, which collectively reflect the hidden structures associated with different factors. On the one hand, a novel regularizer is designed to encourage inter-factor diversity in the latent space, improving model expressivity. On the other hand, factor-specific information is encoded globally by passing messages along these new graphs, strengthening intra-factor consistency. Extensive evaluations on synthetic data and five benchmark datasets show that LGD-GCN brings significant performance gains over recent competitive models in both disentangling and node classification. In particular, LGD-GCN outperforms the disentangled state-of-the-art methods by an average of 7.4% on social network datasets.
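The neighborhood routing mechanism mentioned in the abstract can be sketched roughly as follows. This is a minimal NumPy illustration under assumed shapes, not the paper's implementation: node features are first projected into `K` factor channels, and routing then iteratively infers, for each neighbor, which latent factor most plausibly explains the edge, aggregating neighbors channel-wise accordingly. The function and variable names (`neighborhood_routing`, `z_u`, `z_nbr`) are hypothetical.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def l2norm(x, axis=-1, eps=1e-8):
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + eps)

def neighborhood_routing(z_u, z_nbr, n_iter=5):
    """One routing pass for a single node u.

    z_u:   (K, d) channel embeddings of node u, one per latent factor.
    z_nbr: (N, K, d) channel embeddings of u's N neighbors.
    Returns c: (K, d), the disentangled representation of u.
    """
    z_u, z_nbr = l2norm(z_u), l2norm(z_nbr)
    c = z_u.copy()  # initialize factor centers from u itself
    for _ in range(n_iter):
        # p[v, k]: probability that neighbor v is linked to u due to factor k
        logits = np.einsum('nkd,kd->nk', z_nbr, c)
        p = softmax(logits, axis=1)
        # aggregate neighbors into each factor channel, then renormalize
        c = l2norm(z_u + np.einsum('nk,nkd->kd', p, z_nbr))
    return c
```

Each iteration sharpens the soft assignment of neighbors to factors, so edges explained by the same latent factor end up dominating the same channel.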
| Original language | English |
| --- | --- |
| Pages (from-to) | 1-12 |
| Number of pages | 12 |
| Journal | IEEE Transactions on Neural Networks and Learning Systems |
| Volume | 35 |
| Issue number | 3 |
| DOIs | |
| Publication status | Accepted/In press - 2022 |
Keywords
- (semi-)supervised node classification
- correlation
- data models
- disentangled representation learning
- graph convolutional networks (GCNs)
- image color analysis
- local and global learning
- message passing
- representation learning
- routing
- task analysis