TY - JOUR
T1 - Dynamic feature regularized loss for weakly supervised semantic segmentation
AU - Zhang, Bingfeng
AU - Xiao, Jimin
AU - Zhao, Yao
N1 - Publisher Copyright:
© 2025 Elsevier Ltd
PY - 2025/8
Y1 - 2025/8
N2 - We focus on weakly supervised semantic segmentation with scribble-level annotation. Regularized losses have proven effective for this task. However, most existing regularized losses leverage only static shallow features (color, spatial information) to compute the regularized kernel, which limits their final performance, since such static shallow features fail to describe pair-wise pixel relationships in complicated cases. In this paper, we propose a new regularized loss that utilizes both shallow and deep features, dynamically updated to aggregate sufficient information to represent the relationships between different pixels. Moreover, to provide accurate deep features, we design a feature consistency head to train the pair-wise feature relationship. In contrast to most approaches, which adopt a multi-stage training strategy with complicated settings and time-consuming steps, our approach can be trained directly in an end-to-end manner, in which the feature consistency head and our regularized loss benefit from each other. We evaluate our approach on different backbones, and extensive experiments show that it achieves new state-of-the-art performance across different settings; e.g., with a vision transformer backbone, our approach outperforms others by a substantial margin (more than 5% mIoU increase). The source code will be released at: https://github.com/zbf1991/DFR.
AB - We focus on weakly supervised semantic segmentation with scribble-level annotation. Regularized losses have proven effective for this task. However, most existing regularized losses leverage only static shallow features (color, spatial information) to compute the regularized kernel, which limits their final performance, since such static shallow features fail to describe pair-wise pixel relationships in complicated cases. In this paper, we propose a new regularized loss that utilizes both shallow and deep features, dynamically updated to aggregate sufficient information to represent the relationships between different pixels. Moreover, to provide accurate deep features, we design a feature consistency head to train the pair-wise feature relationship. In contrast to most approaches, which adopt a multi-stage training strategy with complicated settings and time-consuming steps, our approach can be trained directly in an end-to-end manner, in which the feature consistency head and our regularized loss benefit from each other. We evaluate our approach on different backbones, and extensive experiments show that it achieves new state-of-the-art performance across different settings; e.g., with a vision transformer backbone, our approach outperforms others by a substantial margin (more than 5% mIoU increase). The source code will be released at: https://github.com/zbf1991/DFR.
KW - Regularized loss
KW - Scribble annotation
KW - Semantic segmentation
KW - Weakly supervised
UR - http://www.scopus.com/inward/record.url?scp=105000076619&partnerID=8YFLogxK
U2 - 10.1016/j.patcog.2025.111540
DO - 10.1016/j.patcog.2025.111540
M3 - Article
AN - SCOPUS:105000076619
SN - 0031-3203
VL - 164
JO - Pattern Recognition
JF - Pattern Recognition
M1 - 111540
ER -