Task decomposition based on output parallelism

Sheng Uei Guan*, Shanchun Li

*Corresponding author for this work

Research output: Contribution to conference › Paper › peer-review

Abstract

In this paper, we propose a new method for task decomposition based on output parallelism, in order to find appropriate architectures for large-scale real-world problems automatically and efficiently. With this method, a problem can be divided flexibly into several sub-problems, each composed of the whole input vector and a fraction of the output vector. Each module (one per sub-problem) is responsible for producing its fraction of the output vector of the original problem. In this way, the hidden structure for the original problem's output units is decoupled. These modules can be grown and trained in sequence or in parallel. Combined with a constructive learning algorithm, our method requires neither excessive computation nor any prior knowledge concerning decomposition. The feasibility of output parallelism is analyzed and proved. Several benchmarks are implemented to test the validity of this method. The results show that this method can reduce computation time, increase learning speed, and improve generalization accuracy for both classification and regression problems.
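The decomposition described in the abstract can be sketched in a few lines: partition the output indices into groups, train one independent module per group on the whole input and its output slice, then reassemble the full output vector. The sketch below is an illustration under simplifying assumptions, not the paper's algorithm: a least-squares linear map stands in for each constructively grown and trained sub-network, and the function names (`decompose_outputs`, `train_modules`, `predict`) are hypothetical.

```python
import numpy as np

def decompose_outputs(n_outputs, n_modules):
    """Split the output indices into roughly equal contiguous groups,
    one group per module (the 'fraction of the output vector')."""
    return np.array_split(np.arange(n_outputs), n_modules)

def train_modules(X, Y, groups):
    """Train one independent module per output group.

    Each module sees the whole input X but only its own slice of Y,
    so the modules can be trained in sequence or in parallel.
    A least-squares linear map is used here as a stand-in for the
    paper's constructively trained sub-network (an assumption)."""
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])  # append a bias column
    modules = []
    for g in groups:
        W, *_ = np.linalg.lstsq(Xb, Y[:, g], rcond=None)
        modules.append((g, W))
    return modules

def predict(modules, X, n_outputs):
    """Assemble the full output vector from the per-module predictions."""
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])
    Y_hat = np.empty((X.shape[0], n_outputs))
    for g, W in modules:
        Y_hat[:, g] = Xb @ W
    return Y_hat
```

Because each module depends only on its own output slice, the training loop in `train_modules` has no cross-module coupling, which is what makes the sequential-or-parallel training mentioned in the abstract possible.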

Original language: English
Pages: 260-263
Number of pages: 4
Publication status: Published - 2001
Externally published: Yes
Event: 10th IEEE International Conference on Fuzzy Systems - Melbourne, Australia
Duration: 2 Dec 2001 - 5 Dec 2001

Conference

Conference: 10th IEEE International Conference on Fuzzy Systems
Country/Territory: Australia
City: Melbourne
Period: 2/12/01 - 5/12/01
