Feature selection for modular neural network classifiers

Sheng Uei Guan*, Peng Li

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

9 Citations (Scopus)

Abstract

An N-class problem can be fully decomposed into N independent small neural networks, called modules (or sub-problems), in a modular neural network classifier. Each sub-problem is a two-class ('yes' or 'no') problem. Hence, the optimal input feature space for each module is also likely to be a subset of the original feature space, and feature selection plays an important role in finding these useful features. Several feature selection techniques have been developed from different perspectives, but they are not suitable for the two-class problems resulting from complete task decomposition. In this paper, we propose two feature selection techniques for modular neural network classifiers: Relative Importance Factor (RIF) and Relative FLD Weight Analysis (RFWA). Our approaches use Fisher's linear discriminant (FLD) function to obtain the importance of each feature and to find the correlation among features. In RIF, the input features are classified as relevant or irrelevant based on their contribution to classification. In RFWA, the irrelevant features are further classified as noise or redundant features based on the correlation among features. The proposed techniques have been applied to several classification problems. The results show that these techniques can successfully detect the irrelevant features in each module and improve accuracy while reducing computational effort.
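The abstract does not spell out the RIF and RFWA formulas, so the sketch below is only a minimal Python illustration under stated assumptions: that the FLD transformation vector for a two-class module is w = S_W^{-1}(m_pos - m_neg), that a RIF-style importance score is each feature's normalized share of |w|, and that an RFWA-style step labels an irrelevant feature "redundant" when it correlates strongly with some relevant feature and "noise" otherwise. The thresholds and helper names here are hypothetical, not the authors' definitions.

```python
import numpy as np

def fld_weights(X_pos, X_neg):
    """FLD transformation vector w = S_W^{-1}(m_pos - m_neg) for one
    two-class module, where S_W is the within-class scatter matrix."""
    m_pos, m_neg = X_pos.mean(axis=0), X_neg.mean(axis=0)
    S_w = (np.cov(X_pos, rowvar=False) * (len(X_pos) - 1)
           + np.cov(X_neg, rowvar=False) * (len(X_neg) - 1))
    return np.linalg.solve(S_w, m_pos - m_neg)

def relative_importance(w):
    """Assumed RIF-style score: each feature's share of the total
    FLD weight magnitude."""
    mag = np.abs(w)
    return mag / mag.sum()

def split_noise_redundant(X, relevant, corr_cut=0.7):
    """Assumed RFWA-style step: an irrelevant feature strongly correlated
    with a relevant feature is 'redundant'; otherwise it is 'noise'.
    (Assumes at least one feature was marked relevant.)"""
    corr = np.abs(np.corrcoef(X, rowvar=False))
    labels = {}
    for j in np.flatnonzero(~relevant):
        redundant = corr[j, relevant].max() >= corr_cut
        labels[j] = "redundant" if redundant else "noise"
    return labels

# Toy two-class ('yes'/'no') module from class decomposition:
# feature 0 is informative, feature 1 tracks it (redundant), feature 2 is noise.
rng = np.random.default_rng(0)
X_pos = rng.normal([2.0, 0.0, 0.0], 1.0, size=(100, 3))
X_neg = rng.normal(0.0, 1.0, size=(100, 3))
X_pos[:, 1] = X_pos[:, 0] + rng.normal(0.0, 0.5, 100)
X_neg[:, 1] = X_neg[:, 0] + rng.normal(0.0, 0.5, 100)

X = np.vstack([X_pos, X_neg])
rif = relative_importance(fld_weights(X_pos, X_neg))
relevant = rif > 1.0 / len(rif)   # assumed cut-off: above-average share
print(rif.round(3), split_noise_redundant(X, relevant))
```

On this toy module the FLD weight concentrates on feature 0, so feature 1 drops below the importance cut-off but stays highly correlated with feature 0 (redundant), while feature 2 is both unimportant and uncorrelated (noise).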

Original language: English
Pages (from-to): 173-200
Number of pages: 28
Journal: Journal of Intelligent Systems
Volume: 12
Issue number: 3
DOIs
Publication status: Published - 2002
Externally published: Yes

Keywords

  • Class decomposition
  • Correlation between input features
  • FLD
  • Feature selection
  • Modular neural network
  • Transformation vector
