MultiLearner based recursive supervised training

Kiruthika Ramanathan*, Sheng Uei Guan, Laxmi R. Iyer

*Corresponding author for this work

Research output: Chapter in Book or Report/Conference proceeding › Conference Proceeding › peer-review

Abstract

In supervised learning, most single-solution neural networks, such as Constructive Backpropagation, give good results on some datasets but not on others. Others, such as Probabilistic Neural Networks (PNN), fit a curve to perfection but need manual tuning in the case of noisy data. Recursive Percentage based Hybrid Pattern Training (RPHP) overcomes this problem by recursively training subsets of the data, thereby using several neural networks. MultiLearner based Recursive Training (MLRT) extends this approach: a combination of existing and new learners is used, and each subset is trained with the weak learner best suited to it. We observed empirically that MLRT performs considerably better than RPHP and other systems on benchmark data, with an 11% improvement in accuracy on the spam dataset and comparable performance on the vowel and two-spiral problems.
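The recursive scheme described in the abstract can be sketched as follows. This is a hypothetical minimal illustration, not the authors' implementation: the learner classes, the function name `mlrt_train`, and the stopping criteria are all assumptions. The core idea it demonstrates is the one the abstract states: at each level, train several candidate weak learners on the current subset, keep the learner best suited to that subset, remove the patterns it handles correctly, and recurse on the remaining difficult patterns.

```python
import numpy as np

class NearestCentroid:
    """A simple weak learner: classify by nearest class centroid."""
    def fit(self, X, y):
        self.classes = np.unique(y)
        self.centroids = np.array([X[y == c].mean(axis=0) for c in self.classes])
        return self
    def predict(self, X):
        d = np.linalg.norm(X[:, None, :] - self.centroids[None, :, :], axis=2)
        return self.classes[d.argmin(axis=1)]

class OneNN:
    """Another weak learner: 1-nearest-neighbour classification."""
    def fit(self, X, y):
        self.X, self.y = X, y
        return self
    def predict(self, X):
        d = np.linalg.norm(X[:, None, :] - self.X[None, :, :], axis=2)
        return self.y[d.argmin(axis=1)]

def mlrt_train(X, y, learner_factories, max_levels=5):
    """Recursively train on shrinking subsets, MLRT-style (sketch).

    At each level, every candidate learner is fitted to the current
    subset; the most accurate one is kept, its correctly classified
    patterns are removed, and training recurses on the remainder.
    Returns a list of (learner, training_accuracy) pairs.
    """
    chain = []
    for _ in range(max_levels):
        if len(y) == 0:
            break
        scored = []
        for make in learner_factories:
            model = make().fit(X, y)
            acc = float((model.predict(X) == y).mean())
            scored.append((acc, model))
        best_acc, best = max(scored, key=lambda t: t[0])
        chain.append((best, best_acc))
        correct = best.predict(X) == y
        if correct.all():
            break
        X, y = X[~correct], y[~correct]  # recurse on the hard remainder
    return chain
```

In this sketch, the "percentage based" subset selection of RPHP is reduced to a simple "remove the correctly classified patterns" rule, and the pool of weak learners stands in for the mix of existing and new learners the paper describes; both simplifications are illustrative assumptions.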

Original language: English
Title of host publication: 2006 IEEE Conference on Cybernetics and Intelligent Systems
DOIs
Publication status: Published - 2006
Externally published: Yes
Event: 2006 IEEE Conference on Cybernetics and Intelligent Systems - Bangkok, Thailand
Duration: 7 Jun 2006 - 9 Jun 2006

Publication series

Name: 2006 IEEE Conference on Cybernetics and Intelligent Systems

Conference

Conference: 2006 IEEE Conference on Cybernetics and Intelligent Systems
Country/Territory: Thailand
City: Bangkok
Period: 7/06/06 - 9/06/06

Keywords

  • Backpropagation
  • Neural networks
  • Probabilistic neural networks (PNN)
  • Recursive percentage based hybrid pattern training (RPHP)
  • Supervised learning
