TY - JOUR
T1 - TIPS: Two-level prompt selection for more stability-plasticity balance in continual learning
AU - Feng, Zhikun
AU - Peng, Liang
AU - Dang, Kang
AU - Zhou, Mian
AU - Kuang, Ping
AU - Wu, Mingyu
AU - Yu, Liu
AU - Su, Jionglong
PY - 2025/8/13
Y1 - 2025/8/13
N2 - Recent advances in prompt-based continual learning have demonstrated remarkable performance in resisting catastrophic forgetting. However, the effectiveness of these methods heavily depends on the prompt selection strategy. Moreover, most existing methods overlook model plasticity, since they focus on solving the model's stability issues, leading to a sharp decline in performance on new classes in long task sequences of incremental learning. To address these limitations, we propose a novel prompt-based continual learning method called TIPS, which mainly consists of two modules: (1) a novel two-level prompt selection strategy combined with a set of adaptive weights for sparse joint tuning, aiming to improve the accuracy of prompt selection; (2) a semantic knowledge distillation module that enhances the generalization ability to new classes by creating a language token and utilizing the semantic information of class names. We validated TIPS on four datasets across three incremental task scenarios. TIPS surpasses or matches the state-of-the-art in all scenario settings, maintaining stable prompt selection accuracy throughout multiple incremental learning sessions. Notably, TIPS outperformed the current state-of-the-art by 2.03%, 4.78%, 1.18%, and 5.59% on CIFAR, ImageNet-R, CUB-200, and DomainNet, respectively. Our code is available at: https://github.com/gogo-l/Tips.
AB - Recent advances in prompt-based continual learning have demonstrated remarkable performance in resisting catastrophic forgetting. However, the effectiveness of these methods heavily depends on the prompt selection strategy. Moreover, most existing methods overlook model plasticity, since they focus on solving the model's stability issues, leading to a sharp decline in performance on new classes in long task sequences of incremental learning. To address these limitations, we propose a novel prompt-based continual learning method called TIPS, which mainly consists of two modules: (1) a novel two-level prompt selection strategy combined with a set of adaptive weights for sparse joint tuning, aiming to improve the accuracy of prompt selection; (2) a semantic knowledge distillation module that enhances the generalization ability to new classes by creating a language token and utilizing the semantic information of class names. We validated TIPS on four datasets across three incremental task scenarios. TIPS surpasses or matches the state-of-the-art in all scenario settings, maintaining stable prompt selection accuracy throughout multiple incremental learning sessions. Notably, TIPS outperformed the current state-of-the-art by 2.03%, 4.78%, 1.18%, and 5.59% on CIFAR, ImageNet-R, CUB-200, and DomainNet, respectively. Our code is available at: https://github.com/gogo-l/Tips.
KW - Continual learning
KW - Prompt learning
KW - Catastrophic forgetting
UR - https://www.sciencedirect.com/science/article/pii/S0031320325009379
M3 - Article
SN - 0031-3203
VL - 171
JO - Pattern Recognition
JF - Pattern Recognition
IS - Part B
ER -