TY - JOUR
T1 - Convex ensemble learning with sparsity and diversity
AU - Yin, Xu Cheng
AU - Huang, Kaizhu
AU - Yang, Chun
AU - Hao, Hong Wei
N1 - Funding Information:
We would like to thank the anonymous reviewers for their constructive comments. The research is partly supported by National Basic Research Program of China (2012CB316301), National Natural Science Foundation of China (61105018 and 61175020), and R&D Special Fund for Public Welfare Industry (Meteorology) of China (GYHY201106039 and GYHY201106047).
PY - 2014/11
Y1 - 2014/11
N2 - Classifier ensembles have been broadly studied in two prevalent directions, i.e., diversely generating classifier components, and sparsely combining multiple classifiers. While most current approaches emphasize either sparsity or diversity alone, in this paper we investigate classifier ensembles that focus on both. We formulate the classifier ensemble problem with sparsity and diversity learning in a general mathematical framework, which proves beneficial for grouping classifiers. In particular, derived from the error-ambiguity decomposition, we design a convex ensemble diversity measure. Consequently, accuracy loss, sparseness regularization, and the diversity measure can be balanced and combined in a convex quadratic programming problem. We prove that the final convex optimization leads to a closed-form solution, making it very appealing for real ensemble learning problems. We extensively compare our proposed novel method with conventional ensemble methods such as Bagging, least squares combination, sparsity learning, and AdaBoost on a variety of UCI benchmark data sets and the Pascal Large Scale Learning Challenge 2008 webspam data. Experimental results confirm that our approach achieves very promising performance.
AB - Classifier ensembles have been broadly studied in two prevalent directions, i.e., diversely generating classifier components, and sparsely combining multiple classifiers. While most current approaches emphasize either sparsity or diversity alone, in this paper we investigate classifier ensembles that focus on both. We formulate the classifier ensemble problem with sparsity and diversity learning in a general mathematical framework, which proves beneficial for grouping classifiers. In particular, derived from the error-ambiguity decomposition, we design a convex ensemble diversity measure. Consequently, accuracy loss, sparseness regularization, and the diversity measure can be balanced and combined in a convex quadratic programming problem. We prove that the final convex optimization leads to a closed-form solution, making it very appealing for real ensemble learning problems. We extensively compare our proposed novel method with conventional ensemble methods such as Bagging, least squares combination, sparsity learning, and AdaBoost on a variety of UCI benchmark data sets and the Pascal Large Scale Learning Challenge 2008 webspam data. Experimental results confirm that our approach achieves very promising performance.
KW - Classifier ensemble
KW - Convex quadratic programming
KW - Diversity
KW - Sparsity
UR - http://www.scopus.com/inward/record.url?scp=84901636774&partnerID=8YFLogxK
U2 - 10.1016/j.inffus.2013.11.003
DO - 10.1016/j.inffus.2013.11.003
M3 - Article
AN - SCOPUS:84901636774
SN - 1566-2535
VL - 20
SP - 49
EP - 59
JO - Information Fusion
JF - Information Fusion
IS - 1
ER -