Abstract
This paper presents ID-BT (Incremental Discriminatory Batch Training), a neural network training method. The method separates the input space into two batches, significant and insignificant attributes, and, before introducing them into the network, orders the attributes within each batch according to their individual discrimination ability. Backward elimination of futile insignificant attributes increases the generalization accuracy of network training. Incremental Discriminatory Batch and Individual Training (ID-BIT), which further improves on ID-BT, introduces significant attributes individually and insignificant attributes as a batch. Both methods are built on an architecture that employs several incremental learning algorithms. We tested the algorithms extensively on several widely used benchmark problems from the PROBEN1 suite. The simulation results show that both methods outperform incremental training with an increasing input dimension, as well as conventional batch training in which the neural network input space is not partitioned, achieving better network performance in terms of generalization accuracy.
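The abstract's core preprocessing step, splitting attributes into significant and insignificant batches and ordering each batch by individual discrimination ability, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the paper does not specify its discrimination measure here, so a per-attribute Fisher score (between-class variance over within-class variance) and the `threshold` parameter are assumptions for illustration only.

```python
import numpy as np

def fisher_score(X, y):
    """Assumed discrimination measure: per-attribute Fisher score,
    i.e. between-class variance divided by within-class variance."""
    classes = np.unique(y)
    overall_mean = X.mean(axis=0)
    between = np.zeros(X.shape[1])
    within = np.zeros(X.shape[1])
    for c in classes:
        Xc = X[y == c]
        between += len(Xc) * (Xc.mean(axis=0) - overall_mean) ** 2
        within += len(Xc) * Xc.var(axis=0)
    return between / (within + 1e-12)  # epsilon guards against zero variance

def partition_attributes(X, y, threshold):
    """Split attribute indices into a significant and an insignificant batch,
    each ordered by descending individual discrimination ability."""
    scores = fisher_score(X, y)
    order = np.argsort(scores)[::-1]  # most discriminative attribute first
    significant = [i for i in order if scores[i] >= threshold]
    insignificant = [i for i in order if scores[i] < threshold]
    return significant, insignificant

# Toy data: attribute 0 separates the classes, attribute 1 is noise.
X = np.array([[0.0, 5.0], [0.1, 4.9], [10.0, 5.1], [10.1, 5.0]])
y = np.array([0, 0, 1, 1])
sig, insig = partition_attributes(X, y, threshold=2.0)
```

In the ID-BT setting, the significant batch would then be introduced to the network first, with the insignificant batch following and its futile members removed by backward elimination.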
| Original language | English |
|---|---|
| Pages (from-to) | 321-351 |
| Number of pages | 31 |
| Journal | Journal of Intelligent Systems |
| Volume | 14 |
| Issue number | 4 |
| DOIs | |
| Publication status | Published - 2005 |
| Externally published | Yes |
Keywords
- Backward elimination
- Data presentation order
- Incremental training
- Input space partitioning
- Neural networks