TY - JOUR
T1 - SFBM: Shared Feature Bias Mitigating for Long-tailed Image Recognition
AU - Zhao, Xinqiao
AU - Sun, Mingjie
AU - Lim, Eng Gee
AU - Zhao, Yao
AU - Xiao, Jimin
N1 - Publisher Copyright:
© 2012 IEEE.
PY - 2025
Y1 - 2025
N2 - Long-tailed distributions exist in real-world scenarios and compromise the performance of recognition models. In this article, we point out that a neural network classifier has a shared feature bias: it tends to regard features shared among different classes as head-class discriminative features, leading to misclassifications of tail-class samples under long-tailed scenarios. To solve this issue, we propose a shared feature bias mitigating (SFBM) framework. Specifically, we create two parallel classifiers trained concurrently with the baseline classifier using our specially designed training loss. The parallel classifier weight sums are then used to estimate the shared feature components in the baseline classifier weights. Finally, we rectify the baseline classifier by removing the estimated shared feature components from it while adding the parallel classifier weights, class by class, to the rectified classifier weights, thereby mitigating the shared feature bias. Our proposed SFBM demonstrates broad compatibility with nearly all recognition methods while maintaining high computational efficiency, as it introduces no additional computation during inference. Extensive experiments on CIFAR10/100-LT, ImageNet-LT, and iNaturalist 2018 demonstrate that simply incorporating SFBM during the training phase consistently boosts the performance of various state-of-the-art methods by significant margins.
AB - Long-tailed distributions exist in real-world scenarios and compromise the performance of recognition models. In this article, we point out that a neural network classifier has a shared feature bias: it tends to regard features shared among different classes as head-class discriminative features, leading to misclassifications of tail-class samples under long-tailed scenarios. To solve this issue, we propose a shared feature bias mitigating (SFBM) framework. Specifically, we create two parallel classifiers trained concurrently with the baseline classifier using our specially designed training loss. The parallel classifier weight sums are then used to estimate the shared feature components in the baseline classifier weights. Finally, we rectify the baseline classifier by removing the estimated shared feature components from it while adding the parallel classifier weights, class by class, to the rectified classifier weights, thereby mitigating the shared feature bias. Our proposed SFBM demonstrates broad compatibility with nearly all recognition methods while maintaining high computational efficiency, as it introduces no additional computation during inference. Extensive experiments on CIFAR10/100-LT, ImageNet-LT, and iNaturalist 2018 demonstrate that simply incorporating SFBM during the training phase consistently boosts the performance of various state-of-the-art methods by significant margins.
KW - Classifier bias
KW - image recognition
KW - long-tailed distribution
KW - shared feature
UR - https://www.scopus.com/pages/publications/105012964880
U2 - 10.1109/TNNLS.2025.3586215
DO - 10.1109/TNNLS.2025.3586215
M3 - Article
AN - SCOPUS:105012964880
SN - 2162-237X
VL - 99
SP - 17781
EP - 17790
JO - IEEE Transactions on Neural Networks and Learning Systems
JF - IEEE Transactions on Neural Networks and Learning Systems
IS - 99
ER -