Theoretical and empirical study on the potential inadequacy of mutual information for feature selection in classification

Benoît Frénay, Gauthier Doquire, Michel Verleysen

Research output: Contribution to journal › Article › peer-review

Abstract

Mutual information is a widely used performance criterion for filter feature selection. However, despite its popularity and its appealing properties, mutual information is not always the most appropriate criterion. Indeed, contrary to what is sometimes hypothesized in the literature, searching for a feature subset that maximizes the mutual information does not guarantee a decrease in the misclassification probability, which is often the objective of interest. The first objective of this paper is thus to clearly illustrate this potential inadequacy and to emphasize that mutual information remains a heuristic, coming with no guarantee in terms of classification accuracy. Through extensive experiments, a deeper analysis of the cases for which mutual information is not a suitable criterion is then conducted. This analysis confirms the general interest of mutual information for feature selection. It also helps us better apprehend the behaviour of mutual information throughout a feature selection process and, consequently, make better use of it as a feature selection criterion.
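
As context for the abstract's claim, the Hellman-Raviv and Fano bounds listed in the keywords sandwich the misclassification probability P_e between two functions of the conditional entropy H(Y|X). The following is a standard textbook statement of these bounds (entropies in bits, |Y| classes, with Fano in its weakened form), not a formula quoted from the paper:

    \[
      \frac{H(Y \mid X) - 1}{\log_2 |\mathcal{Y}|} \;\le\; P_e \;\le\; \frac{1}{2}\, H(Y \mid X)
    \]

Since I(X;Y) = H(Y) - H(Y|X), choosing a feature subset that maximizes the mutual information minimizes H(Y|X) and thereby tightens both bounds. But because the bounds only bracket an interval, a subset with higher mutual information can still yield a higher misclassification probability, which is exactly the potential inadequacy the paper studies.

For a concrete, minimal instance of mutual-information filter feature selection, the sketch below ranks features by a univariate mutual information estimate using scikit-learn. The dataset, the value of k, and the univariate (rather than subset-wise) scoring are illustrative assumptions, not the paper's experimental protocol.

    # Minimal sketch: rank features by estimated I(X_i; Y) and keep the top k.
    # Illustrative only; the paper analyses subset-wise mutual information.
    import numpy as np
    from sklearn.datasets import load_breast_cancer
    from sklearn.feature_selection import mutual_info_classif

    X, y = load_breast_cancer(return_X_y=True)  # any labelled dataset works

    # Estimate the mutual information between each feature and the class
    # label (scikit-learn uses a k-nearest-neighbour based estimator).
    mi = mutual_info_classif(X, y, random_state=0)

    # Keep the k features with the largest estimated mutual information.
    k = 10
    selected = np.argsort(mi)[::-1][:k]
    print("Top-k features by estimated I(X_i; Y):", selected)

Note that the paper considers the mutual information between an entire feature subset and the class label; ranking features one at a time, as above, is only the simplest filter strategy, and subset-wise scoring requires multivariate estimators such as the nearest-neighbour estimators common in this literature.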
Original language: English
Pages (from-to): 64-78
Number of pages: 15
Journal: Neurocomputing
Volume: 112
DOIs
Publication status: Published - 18 Jul 2013
Externally published: Yes

Keywords

  • Classification
  • Feature selection
  • Hellman-Raviv and Fano bounds
  • Mutual information
  • Probability of misclassification
