Incremental neural network training with an increasing input dimension

Sheng Uei Guan*, Jun Liu

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

27 Citations (Scopus)

Abstract

Conventional neural network (NN) training introduces training patterns in the full input dimension under batch mode. In this paper, an incremental training method with an increasing input dimension (ITID) is presented. ITID divides the whole input dimension into several sub-dimensions, each of which corresponds to an input attribute. Instead of learning all input attributes together as one input vector per training instance, the NN learns input attributes one after another through their corresponding sub-networks, and the NN structure is grown incrementally with the increasing input dimension. During training, information obtained from a new sub-network is merged with the information obtained from the old ones to refine the current NN structure. With less internal interference among input attributes, ITID achieves higher generalization accuracy than the conventional method. Experimental results on several benchmark problems show that ITID is efficient and effective for both classification and regression problems.
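The incremental scheme described in the abstract can be sketched as follows. This is a minimal illustration under simplifying assumptions, not the authors' ITID implementation: the per-attribute sub-networks and the information-merging step are reduced to appending freshly initialised input weights for each new attribute and retraining the grown network, so that knowledge learned from earlier attributes is retained. All names and the toy data are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(X, y, W_in, w_out, epochs=200, lr=0.5):
    """Plain gradient-descent training of a one-hidden-layer sigmoid network."""
    for _ in range(epochs):
        H = sigmoid(X @ W_in)           # hidden activations, (n, hidden)
        out = sigmoid(H @ w_out)        # scalar output per pattern, (n,)
        delta_out = (out - y) * out * (1.0 - out)
        g_out = H.T @ delta_out / len(y)
        g_in = X.T @ ((delta_out[:, None] * w_out[None, :]) * H * (1.0 - H)) / len(y)
        w_out -= lr * g_out
        W_in -= lr * g_in
    return W_in, w_out

# Toy data: 3 input attributes, roughly linearly separable binary target.
X = rng.standard_normal((64, 3))
y = (X[:, 0] + 0.5 * X[:, 1] - X[:, 2] > 0).astype(float)

hidden = 4
W_in = rng.standard_normal((0, hidden))   # start with zero input attributes
w_out = rng.standard_normal(hidden)

# Grow the input dimension one attribute at a time: each step appends a
# freshly initialised weight row for the new attribute (standing in for a
# new sub-network) and retrains on the enlarged input, merging the new
# information with what the existing weights already encode.
for d in range(X.shape[1]):
    new_row = rng.standard_normal((1, hidden)) * 0.1
    W_in = np.vstack([W_in, new_row])
    W_in, w_out = train(X[:, : d + 1], y, W_in, w_out)

acc = np.mean((sigmoid(sigmoid(X @ W_in) @ w_out) > 0.5) == y)
```

Because each stage starts from the weights learned so far, attributes introduced earlier interfere less with those added later, which is the intuition the abstract gives for ITID's improved generalization.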

Original language: English
Pages (from-to): 45-69
Number of pages: 25
Journal: Journal of Intelligent Systems
Volume: 13
Issue number: 1
DOIs
Publication status: Published - 2004
Externally published: Yes

Keywords

  • Feedforward network
  • Incremental training
  • Input attributes
  • Input dimension
  • Neural networks
  • Supervised learning
