Reduced pattern training in pattern distributor networks

Chunyu Bao*, Tse Ngee Neo, Sheng Uei Guan

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

5 Citations (Scopus)


In this paper, we propose a new task decomposition method, Task Decomposition with Pattern Distributor (PD), for multilayered feedforward neural networks. The method combines network modules in parallel and in series to generate the overall solution to a complex problem. We also introduce reduced pattern training, a method aimed at improving the performance of PD networks. Our analysis and experimental results show that reduced pattern training significantly improves the performance of the pattern distributor network: compared with ordinary task decomposition methods such as Output Parallelism, it reduces training time and improves generalization accuracy.
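A minimal sketch of the idea behind the abstract, under stated assumptions: the paper itself gives the method's details, so the toy dataset, the idealized (untrained) distributor rule, and the two-group split below are all illustrative inventions, not the authors' setup. The sketch only shows the training-set bookkeeping that motivates "reduced" versus "full" pattern training: if each downstream module trains only on the patterns the distributor routes to it, the total number of pattern presentations shrinks compared with every module training on the full set.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data: 200 two-dimensional patterns.
X = rng.normal(size=(200, 2))

# Idealized pattern distributor: routes each pattern to one of two
# downstream modules.  A real PD network would learn this routing;
# here we stand in with a fixed decision rule for illustration.
def distributor(x):
    return 0 if x[0] <= 0 else 1

# Reduced pattern training: each module sees ONLY the patterns routed
# to it, so the groups partition the training set.
subsets = {g: [i for i in range(len(X)) if distributor(X[i]) == g]
           for g in (0, 1)}

# Full pattern training: both modules would see all 200 patterns.
full_count = 2 * len(X)
# Reduced: each pattern contributes to exactly one module's training.
reduced_count = sum(len(idx) for idx in subsets.values())

print(full_count, reduced_count)  # → 400 200
```

The halving here is an artifact of the two-group toy split; the point is only that routing partitions the pattern set, so the per-module training burden drops relative to training every module on everything.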

Original language: English
Pages (from-to): 273-286
Number of pages: 14
Journal: Journal of Research and Practice in Information Technology
Issue number: 4
Publication status: Published - 2007
Externally published: Yes


Keywords:

  • Full pattern training
  • Pattern distributor
  • Reduced pattern training
  • Task decomposition
