Adaptive knowledge transfer for class incremental learning

Zhikun Feng, Mian Zhou*, Zan Gao, Angelos Stefanidis, Jionglong Su, Kang Dang, Chuanhui Li

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

Humans excel at adapting to constantly changing circumstances, whereas deep neural networks suffer from catastrophic forgetting. Recently, significant progress has been made with class-incremental methods based on dynamic network structures. However, these methods combine individual networks in a simplistic, coupled manner, neglecting the fusion capabilities between modules and degrading overall prediction performance. To address this, we propose a class-incremental learning paradigm with adaptive knowledge transfer. The paradigm leverages crucial self-learned factors to transfer as much of the important knowledge of old classes as possible, allowing each module to integrate the optimal information from the current classes. Experiments demonstrate that the proposed adaptive knowledge transfer module effectively reduces the sharpness of decision boundaries, thereby significantly improving final accuracy. In addition, we devise a compression module with supplementary learning to mitigate errors arising from long session sequences during model fusion. In extensive experiments on benchmarks, our approach exceeds prior methods by 2.72% and 1.30% in average accuracy on CIFAR-100 and ImageNet-100/1000, respectively, achieving state-of-the-art performance in both ordinary and challenging class-incremental settings.
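
To make the idea concrete, here is a minimal, illustrative PyTorch sketch of adaptive knowledge transfer between a frozen old-class branch and a trainable new branch, with a standard logit-distillation loss standing in for the compression step. All module names (AdaptiveKnowledgeTransfer, old_branch, transfer_logits), shapes, and the exact fusion rule are assumptions made for illustration only; the abstract does not specify the paper's actual architecture or losses.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class AdaptiveKnowledgeTransfer(nn.Module):
    """Illustrative sketch: fuse a frozen old-session branch with a new
    trainable branch via self-learned per-dimension transfer factors.
    This is NOT the paper's published module; it only shows the general
    pattern of learnable old/new knowledge fusion."""

    def __init__(self, in_dim: int, feat_dim: int, num_classes: int):
        super().__init__()
        # Frozen branch standing in for the dynamically expanded network
        # trained on previous sessions (old classes).
        self.old_branch = nn.Sequential(nn.Linear(in_dim, feat_dim), nn.ReLU())
        for p in self.old_branch.parameters():
            p.requires_grad_(False)
        # Trainable branch for the current session's classes.
        self.new_branch = nn.Sequential(nn.Linear(in_dim, feat_dim), nn.ReLU())
        # Self-learned factors deciding, per feature dimension, how much
        # old-class knowledge is transferred into the fused representation.
        self.transfer_logits = nn.Parameter(torch.zeros(feat_dim))
        self.classifier = nn.Linear(feat_dim, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        f_old = self.old_branch(x)
        f_new = self.new_branch(x)
        alpha = torch.sigmoid(self.transfer_logits)  # factors in (0, 1)
        fused = alpha * f_old + (1.0 - alpha) * f_new
        return self.classifier(fused)


def distill_loss(student_logits, teacher_logits, temperature: float = 2.0):
    """Generic KL distillation loss: a common way to compress an expanded
    multi-branch model into a compact student (assumed stand-in for the
    paper's compression module, whose details the abstract omits)."""
    p = F.log_softmax(student_logits / temperature, dim=1)
    q = F.softmax(teacher_logits / temperature, dim=1)
    return F.kl_div(p, q, reduction="batchmean") * temperature ** 2


if __name__ == "__main__":
    model = AdaptiveKnowledgeTransfer(in_dim=32, feat_dim=64, num_classes=10)
    x = torch.randn(8, 32)
    teacher_logits = model(x).detach()
    student = nn.Linear(32, 10)  # compact student for the compression step
    loss = distill_loss(student(x), teacher_logits)
    print(teacher_logits.shape, float(loss))  # torch.Size([8, 10]) <loss>
```

Under these assumptions, the sigmoid-gated factors play the role of the "self-learning factors" in the abstract: they are trained jointly with the new branch, so each fused dimension settles on how much old-class knowledge to retain rather than using a fixed coupling.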

Original language: English
Pages (from-to): 165-171
Number of pages: 7
Journal: Pattern Recognition Letters
Volume: 183
DOIs
Publication status: Published - Jul 2024

Keywords

  • Class incremental learning
  • Dynamic network
  • Knowledge distillation
  • Knowledge sharing

