TY - JOUR
T1 - Regularized margin-based conditional log-likelihood loss for prototype learning
AU - Jin, Xiao-Bo
AU - Liu, Cheng-Lin
AU - Hou, Xinwen
N1 - Funding Information:
This work is supported in part by the Hundred Talents Program of Chinese Academy of Sciences (CAS) and the National Natural Science Foundation of China (NSFC) under Grant nos. 60775004 and 60825301.
PY - 2010/7
Y1 - 2010/7
AB - The classification performance of nearest prototype classifiers largely depends on the prototype learning algorithm. The minimum classification error (MCE) method and the soft nearest prototype classifier (SNPC) method are two important algorithms using misclassification loss. This paper proposes a new prototype learning algorithm based on the conditional log-likelihood loss (CLL), derived from a discriminative model called the log-likelihood of margin (LOGM). A regularization term is added to avoid over-fitting in training as well as to maximize the hypothesis margin. The CLL in the LOGM algorithm is a convex function of the margin, and thus gives better convergence than the MCE. In addition, we show the effects of distance metric learning with both prototype-dependent weighting and prototype-independent weighting. Our empirical study on benchmark datasets demonstrates that the LOGM algorithm yields higher classification accuracies than the MCE, generalized learning vector quantization (GLVQ), soft nearest prototype classifier (SNPC), and robust soft learning vector quantization (RSLVQ) methods; moreover, the LOGM with prototype-dependent weighting achieves accuracies comparable to the support vector machine (SVM) classifier.
KW - Conditional log-likelihood loss
KW - Distance metric learning
KW - Log-likelihood of margin (LOGM)
KW - Prototype learning
KW - Regularization
UR - http://www.scopus.com/inward/record.url?scp=77949487932&partnerID=8YFLogxK
U2 - 10.1016/j.patcog.2010.01.013
DO - 10.1016/j.patcog.2010.01.013
M3 - Article
AN - SCOPUS:77949487932
SN - 0031-3203
VL - 43
SP - 2428
EP - 2438
JO - Pattern Recognition
JF - Pattern Recognition
IS - 7
ER -