Stochastic Conjugate Gradient Algorithm With Variance Reduction

Xiao-Bo Jin, Xu-Yao Zhang, Kaizhu Huang, Guang-Gang Geng*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

36 Citations (Scopus)

Abstract

Conjugate gradient (CG) methods are a class of important methods for solving linear equations and nonlinear optimization problems. In this paper, we propose a new stochastic CG algorithm with variance reduction (CGVR), and we prove its linear convergence with the Fletcher and Reeves method for strongly convex and smooth functions. We experimentally demonstrate that the CGVR algorithm converges faster than its counterparts for four learning models, which may be convex, nonconvex, or nonsmooth. In addition, its area under the curve (AUC) performance on six large-scale data sets is comparable to that of the LIBLINEAR solver for the L2-regularized L2-loss, but with a significant improvement in computational efficiency. The CGVR algorithm is available on GitHub: https://github.com/xbjin/cgvr.
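For orientation, the sketch below illustrates the idea the abstract describes: an SVRG-style variance-reduced gradient estimate combined with Fletcher-Reeves conjugate directions, applied here to L2-regularized logistic regression. This is a minimal sketch, not the authors' implementation (see the GitHub link above): the fixed step size, the cap on the Fletcher-Reeves coefficient, and the synthetic test problem are all illustrative assumptions, whereas the paper selects step sizes with a Wolfe line search.

```python
import numpy as np

def logistic_grad(w, X, y, lam):
    """Gradient of L2-regularized logistic loss over the rows of X."""
    z = X @ w
    p = 1.0 / (1.0 + np.exp(-y * z))          # sigma(y * x.w)
    return X.T @ ((p - 1.0) * y) / len(y) + lam * w

def cgvr_sketch(X, y, lam=1e-3, epochs=10, inner=50, batch=16,
                step=0.2, seed=0):
    """Variance-reduced stochastic CG sketch (illustrative, fixed step)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(epochs):
        w_snap = w.copy()
        mu = logistic_grad(w_snap, X, y, lam)  # full gradient at snapshot
        g_prev, p = None, None
        for _ in range(inner):
            idx = rng.choice(n, size=batch, replace=False)
            # SVRG-style variance-reduced gradient estimate
            g = (logistic_grad(w, X[idx], y[idx], lam)
                 - logistic_grad(w_snap, X[idx], y[idx], lam) + mu)
            if g_prev is None:
                p = -g                          # restart: steepest descent
            else:
                beta = (g @ g) / (g_prev @ g_prev)  # Fletcher-Reeves beta
                beta = min(beta, 1.0)  # stability cap: an assumption,
                                       # not part of the paper's method
                p = -g + beta * p
            w = w + step * p
            g_prev = g
    return w

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.standard_normal((500, 20))
    y = np.sign(X @ rng.standard_normal(20) + 0.1 * rng.standard_normal(500))
    w = cgvr_sketch(X, y)
    print("final gradient norm:", np.linalg.norm(logistic_grad(w, X, y, 1e-3)))
```

Computing the full gradient once per epoch and correcting each minibatch gradient against the snapshot is what drives the variance of the stochastic gradient toward zero near the optimum, which is what allows the linear convergence rate claimed for strongly convex, smooth objectives.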

Original language: English
Article number: 08475017
Pages (from-to): 1360-1369
Number of pages: 10
Journal: IEEE Transactions on Neural Networks and Learning Systems
Volume: 30
Issue number: 5
DOIs
Publication status: Published - May 2019

Keywords

  • Computational efficiency
  • variance reduction
  • empirical risk minimization (ERM)
  • linear convergence
  • stochastic conjugate gradient (CG)

