TIPS: Two-level prompt selection for more stability-plasticity balance in continual learning

Zhikun Feng, Liang Peng, Kang Dang, Mian Zhou, Ping Kuang*, Mingyu Wu, Liu Yu, Jionglong Su

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

Recent advances in prompt-based continual learning have demonstrated remarkable performance in resisting catastrophic forgetting. However, the effectiveness of these methods depends heavily on the prompt selection strategy. Moreover, most existing methods overlook model plasticity because they focus on the model’s stability, leading to a sharp decline in performance on new classes over long task sequences of incremental learning. To address these limitations, we propose a novel prompt-based continual learning method called TIPS, which consists of two main modules: (1) a novel two-level prompt selection strategy combined with a set of adaptive weights for sparse joint tuning, which improves the accuracy of prompt selection; and (2) a semantic knowledge distillation module that enhances generalization to new classes by creating a language token and exploiting the semantic information of class names. We validated TIPS on four datasets across three incremental task scenarios. TIPS surpasses or matches the state of the art in all scenario settings and maintains stable prompt selection accuracy throughout multiple incremental learning sessions. Notably, TIPS outperforms the current state of the art by 2.03%, 4.78%, 1.18%, and 5.59% on CIFAR, ImageNet-R, CUB-200, and DomainNet, respectively. Our code is available at: https://github.com/gogo-l/Tips.
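As a rough illustration of the general idea behind a two-level, key-query prompt selection with adaptive weighting, the sketch below first matches an image query to a coarse prompt group and then to the top-k prompts within that group, blending them with learned weights. All names, shapes, and the weighting scheme are assumptions for exposition only; this is not the TIPS implementation (see the repository linked above for the authors' code).

```python
# Minimal sketch of two-level prompt selection (coarse group, then fine prompt)
# with learned adaptive weights. Hypothetical design, not the TIPS code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TwoLevelPromptSelector(nn.Module):
    def __init__(self, num_groups=10, prompts_per_group=5,
                 prompt_len=8, embed_dim=768, top_k=3):
        super().__init__()
        # Coarse level: one key per prompt group (e.g., per task/session).
        self.group_keys = nn.Parameter(torch.randn(num_groups, embed_dim))
        # Fine level: keys and prompt parameters within each group.
        self.prompt_keys = nn.Parameter(
            torch.randn(num_groups, prompts_per_group, embed_dim))
        self.prompts = nn.Parameter(
            torch.randn(num_groups, prompts_per_group, prompt_len, embed_dim))
        # Adaptive weights over the prompts of each group.
        self.adaptive_weights = nn.Parameter(
            torch.ones(num_groups, prompts_per_group))
        self.top_k = top_k

    def forward(self, query):
        # query: [B, D] feature from a frozen backbone (e.g., the [CLS] token).
        q = F.normalize(query, dim=-1)

        # Level 1: pick the best-matching prompt group per sample.
        group_sim = q @ F.normalize(self.group_keys, dim=-1).t()    # [B, G]
        group_idx = group_sim.argmax(dim=-1)                        # [B]

        # Level 2: within the chosen group, pick the top-k prompts.
        keys = F.normalize(self.prompt_keys[group_idx], dim=-1)     # [B, P, D]
        prompt_sim = torch.einsum('bd,bpd->bp', q, keys)            # [B, P]
        topk_sim, topk_idx = prompt_sim.topk(self.top_k, dim=-1)    # [B, K]

        # Gather the selected prompts and blend them with adaptive weights.
        selected = self.prompts[group_idx.unsqueeze(1), topk_idx]    # [B, K, L, D]
        w = self.adaptive_weights[group_idx.unsqueeze(1), topk_idx]  # [B, K]
        w = torch.softmax(w + topk_sim, dim=-1)
        prompts = (selected * w[..., None, None]).flatten(1, 2)      # [B, K*L, D]

        # Matching loss that pulls the chosen keys toward the queries they serve.
        match_loss = (1.0 - topk_sim).mean()
        return prompts, match_loss
```

The returned prompts would typically be prepended to the patch tokens of a frozen vision transformer; only the selected group's parameters receive gradients, which is one plausible reading of "sparse joint tuning."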
Original language: English
Journal: Pattern Recognition
Volume: 171
Issue number: Part B
Publication status: Published - 13 Aug 2025

Keywords

  • Continual learning
  • Prompt learning
  • Catastrophic forgetting
