Output partitioning of neural networks

Sheng-Uei Guan*, Qi Yinan, Syn Kiat Tan, Shanchun Li

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

7 Citations (Scopus)

Abstract

Many constructive learning algorithms have been proposed to find an appropriate network structure for a classification problem automatically. However, constructive learning algorithms have drawbacks, especially when used for complex tasks, and modular approaches have been devised to address them. At the same time, parallel training for neural networks with fixed configurations has also been proposed to accelerate the training process. This paper presents output partitioning, a new approach that combines the advantages of constructive learning and parallelism. Classification error is used to guide the proposed incremental-partitioning algorithm, which divides the original data set into several smaller sub-data sets with distinct classes. Each sub-data set is then handled in parallel by a smaller, constructively trained sub-network that uses the whole input vector and produces a portion of the final output vector, in which each class is represented by one unit. Three classification data sets are used to test the validity of this method, and the results show that it reduces the classification test error.
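The output-partitioning scheme described above can be sketched in code. The following is a minimal illustration, not the paper's implementation: the data set, the fixed class partition, and the linear sigmoid "sub-networks" are all assumptions made for brevity (the paper grows each sub-network constructively and chooses partitions using classification error). It shows the mechanics the abstract describes: each sub-network is trained only on its sub-data set, reads the whole input vector, and emits one output unit per class in its partition; concatenating the partial output vectors yields the final output vector.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 4-class data set (illustration only): Gaussian clusters in 2-D.
centres = np.array([[0.0, 0.0], [4.0, 4.0], [4.0, 0.0], [0.0, 4.0]])
y = np.repeat(np.arange(4), 50)
X = centres[y] + rng.normal(scale=0.7, size=(200, 2))

# Output partitioning: the class set is split into groups of distinct
# classes; each group's sub-data set is handled by its own sub-network.
partitions = [[0, 1], [2, 3]]

def train_subnet(X, y, classes, epochs=500, lr=0.5):
    """Stand-in sub-network: one sigmoid output unit per class, trained
    only on the sub-data set for `classes`. (The paper trains each
    sub-network constructively; a fixed linear layer is used here.)"""
    mask = np.isin(y, classes)
    Xb = np.hstack([X[mask], np.ones((mask.sum(), 1))])   # bias column
    targets = np.eye(len(classes))[np.searchsorted(classes, y[mask])]
    W = np.zeros((Xb.shape[1], len(classes)))
    for _ in range(epochs):
        out = 1.0 / (1.0 + np.exp(-(Xb @ W)))             # sigmoid outputs
        W -= lr * Xb.T @ (out - targets) / len(Xb)        # cross-entropy grad
    return W

subnets = [(cls, train_subnet(X, y, cls)) for cls in partitions]

def predict(X):
    """Concatenate the sub-networks' partial output vectors and take the
    arg-max over all classes to form the final decision."""
    Xb = np.hstack([X, np.ones((len(X), 1))])
    scores = np.empty((len(X), 4))
    for cls, W in subnets:
        scores[:, cls] = 1.0 / (1.0 + np.exp(-(Xb @ W)))
    return scores.argmax(axis=1)

# Accuracy of each sub-network on its own sub-data set.
part_accs = []
for cls, W in subnets:
    mask = np.isin(y, cls)
    Xb = np.hstack([X[mask], np.ones((mask.sum(), 1))])
    local = np.searchsorted(cls, y[mask])
    part_accs.append(float(((Xb @ W).argmax(axis=1) == local).mean()))

overall_acc = float((predict(X) == y).mean())
```

Because each sub-data set contains fewer classes, each sub-network solves a simpler problem and the sub-networks can be trained in parallel; the combination step above is where the whole output vector is reassembled.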

Original language: English
Pages (from-to): 38-53
Number of pages: 16
Journal: Neurocomputing
Volume: 68
Issue number: 1-4
DOIs
Publication status: Published - Oct 2005
Externally published: Yes

Keywords

  • Constructive learning algorithm
  • Neural networks
  • Output partitioning
