Abstract
In this paper, we propose a new method for task decomposition based on output parallelism, which finds appropriate architectures for large-scale real-world problems automatically and efficiently. With this method, a problem can be divided flexibly into a chosen number of sub-problems, each consisting of the whole input vector and a fraction of the output vector. Each module (one per sub-problem) is responsible for producing its fraction of the output vector of the original problem, so the hidden structure for the original problem's output units is decoupled. The modules can be grown and trained in sequence or in parallel. Combined with a constructive learning algorithm, our method requires neither excessive computation nor any prior knowledge of how to decompose the problem. The feasibility of output parallelism is analyzed and proved. Several benchmark problems are used to test the validity of this method. The results show that the method can reduce computation time, increase learning speed, and improve generalization accuracy for both classification and regression problems.
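A minimal sketch of the output-parallelism idea under stated assumptions: the output vector is split into chunks, and one independent module is trained per chunk, with every module seeing the full input vector. Fixed-size scikit-learn MLPs stand in for the paper's constructively grown modules, and the data, module sizes, and chunking below are illustrative only, not taken from the paper.

```python
# Illustrative sketch of output parallelism (not the paper's constructive
# algorithm): split the output vector into chunks and train one independent
# module per chunk, each on the whole input vector.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8))            # full input vector, shared by all modules
Y = np.column_stack([X.sum(axis=1),      # toy 4-dimensional output vector
                     X[:, 0] * X[:, 1],
                     np.sin(X[:, 2]),
                     X[:, 3] ** 2])

n_modules = 2                            # divide the output vector as chosen
chunks = np.array_split(np.arange(Y.shape[1]), n_modules)

modules = []
for cols in chunks:
    # Each module maps the whole input to its own fraction of the output;
    # a fixed-size MLP stands in for a constructively grown network.
    m = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
    m.fit(X, Y[:, cols])
    modules.append((cols, m))

# Reassemble the full output vector from the per-module predictions.
Y_hat = np.empty_like(Y)
for cols, m in modules:
    Y_hat[:, cols] = m.predict(X).reshape(len(X), len(cols))
    mse = np.mean((Y[:, cols] - Y_hat[:, cols]) ** 2)
    print(f"module for outputs {cols.tolist()}: training MSE = {mse:.4f}")
```

Because the modules share no weights, the `for cols in chunks` loop could equally be run in parallel processes, which is the "sequence or in parallel" training option mentioned in the abstract.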
| Original language | English |
| --- | --- |
| Pages | 260-263 |
| Number of pages | 4 |
| Publication status | Published - 2001 |
| Externally published | Yes |
| Event | 10th IEEE International Conference on Fuzzy Systems - Melbourne, Australia, 2 Dec 2001 → 5 Dec 2001 |
Conference
| Conference | 10th IEEE International Conference on Fuzzy Systems |
| --- | --- |
| Country/Territory | Australia |
| City | Melbourne |
| Period | 2/12/01 → 5/12/01 |