Incremental self-growing neural networks with the changing environment

L. Su, S. U. Guan*, Y. C. Yeo

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

15 Citations (Scopus)

Abstract

Conventional incremental learning approaches in multi-layered feedforward neural networks are based on new incoming training instances. In this paper, however, a changing environment is defined as new incoming features of a specific problem. Our empirical study illustrates that ISGNN (incremental self-growing neural networks) can adapt to such a changing environment with a new input dimension. Meanwhile, dynamic neural network algorithms are used to design the network structure automatically, avoiding a time-consuming trial-and-error search for an appropriate topology. We also exploit the information learned by the previously grown network to avoid retraining. Finally, we report simulation results on two benchmark problems. Our experiments show that this kind of adaptive learning mechanism can significantly improve the performance of the original networks.
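The core idea of adapting to a new input dimension while retaining previously learned knowledge can be sketched minimally as follows. This is an illustration of the general mechanism, not the paper's exact ISGNN algorithm: a one-hidden-layer network (a hypothetical `GrowableNet`) grows its input layer by appending a zero-initialized weight column for the new feature, so its outputs on old inputs are unchanged and only the new connections need training, avoiding a full retrain.

```python
import numpy as np

rng = np.random.default_rng(0)

class GrowableNet:
    """Toy one-hidden-layer network whose input layer can grow."""

    def __init__(self, n_in, n_hidden, n_out):
        self.W1 = rng.normal(0, 0.1, (n_hidden, n_in))   # input -> hidden
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(0, 0.1, (n_out, n_hidden))  # hidden -> output
        self.b2 = np.zeros(n_out)

    def forward(self, x):
        h = np.tanh(self.W1 @ x + self.b1)
        return self.W2 @ h + self.b2

    def add_input_feature(self):
        # Append a zero-initialized column for the new feature: with a
        # zero column the network's output on old instances (new feature
        # treated as 0) is identical, so learned knowledge is preserved
        # and only the new column must be trained afterwards.
        new_col = np.zeros((self.W1.shape[0], 1))
        self.W1 = np.hstack([self.W1, new_col])

net = GrowableNet(n_in=3, n_hidden=5, n_out=2)
x_old = rng.normal(size=3)
y_before = net.forward(x_old)

net.add_input_feature()            # environment gains one feature
x_new = np.append(x_old, 0.0)      # old instance, new dimension = 0
y_after = net.forward(x_new)

assert np.allclose(y_before, y_after)   # old behaviour preserved
```

The zero-initialization choice is what makes growth non-destructive here; cascade-correlation-style methods (one of the paper's keywords) grow hidden units in a related way, freezing existing weights while training only the newly added connections.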

Original language: English
Pages (from-to): 43-74
Number of pages: 32
Journal: Journal of Intelligent Systems
Volume: 11
Issue number: 1
DOIs
Publication status: Published - 2001
Externally published: Yes

Keywords

  • Cascade correlation networks
  • Incremental input
  • Incremental learning
