Abstract
This paper deals with the situation where output attributes are introduced to a neural network incrementally. Conventionally, when new outputs are introduced, the old network is discarded and a new network is retrained from scratch to integrate the old knowledge with the new. This approach is computationally inefficient, mainly because the knowledge already learnt by the existing network is lost. Our primary interest is therefore to integrate both old and new knowledge into a single network. In this paper, we present three Incremental Output Learning (IOL) algorithms. When a new output attribute is introduced to the original problem, IOL trains a new sub-network to acquire the new knowledge, and the output attributes of the new sub-network are integrated with those of the existing network. Experimental results on several benchmark datasets show that our methods are more effective and more efficient than conventional retraining.
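The core idea of the abstract can be caricatured with a toy sketch. Everything concrete below is an assumption for illustration only: the "networks" are single-layer sigmoid models, the data are synthetic, and integration is done by concatenating output attributes; the paper's actual architectures and three IOL algorithms are not specified here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 4 input features; the original task has 2 output attributes.
# (Synthetic data and sizes are illustrative assumptions.)
X = rng.normal(size=(200, 4))
Y_old = (X @ rng.normal(size=(4, 2)) > 0).astype(float)

def train_subnet(X, Y, lr=0.1, epochs=300):
    """Train a one-layer sigmoid model as a stand-in for a sub-network."""
    W = np.zeros((X.shape[1], Y.shape[1]))
    for _ in range(epochs):
        P = 1.0 / (1.0 + np.exp(-(X @ W)))       # sigmoid outputs
        W -= lr * X.T @ (P - Y) / len(X)         # batch gradient step
    return W

# Existing network, trained on the original output attributes.
W_old = train_subnet(X, Y_old)

# A new output attribute arrives: in the spirit of IOL, train only a
# small new sub-network; the existing network is left untouched.
Y_new = (X @ rng.normal(size=(4, 1)) > 0).astype(float)
W_new = train_subnet(X, Y_new)

def predict_integrated(X):
    """Integrated model: concatenate old and new output attributes."""
    logits = np.hstack([X @ W_old, X @ W_new])
    return (logits > 0).astype(float)

preds = predict_integrated(X)
acc = (preds == np.hstack([Y_old, Y_new])).mean()
```

Only the new sub-network is trained when the output attribute arrives, which is the source of the efficiency gain the abstract claims over full retraining; here that saving is trivial, but it grows with the size of the existing network.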
Original language | English |
---|---|
Pages (from-to) | 95-122 |
Number of pages | 28 |
Journal | Journal of Intelligent Systems |
Volume | 13 |
Issue number | 2 |
DOIs | |
Publication status | Published - 2004 |
Externally published | Yes |
Keywords
- Incremental learning
- Neural networks
- Output attributes
- Supervised learning