Abstract
Feature selection is an important preprocessing step that helps to improve model performance and to extract knowledge about important features in a dataset. However, feature selection methods are known to be adversely impacted by changes in the training dataset: even small differences between input datasets can result in the selection of different feature sets. This paper tackles this issue in the particular case of the delta test, a well-known feature relevance criterion that approximates the noise variance for regression tasks. A new feature selection criterion is proposed, the delta test bar, which is shown to be more stable than its close competitors.
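For readers unfamiliar with the criterion, the standard delta test estimates the noise variance as the mean half squared difference between each output and the output of its nearest neighbour in the selected feature space. The sketch below is a minimal illustration of that standard formulation, not the delta test bar proposed in the paper; the greedy forward-selection wrapper is likewise an illustrative choice of search strategy.

```python
import numpy as np

def delta_test(X, y):
    """Estimate the regression noise variance via the delta test:
    delta = 1/(2N) * sum_i (y_{NN(i)} - y_i)^2,
    where NN(i) is the nearest neighbour of x_i in feature space."""
    X = np.asarray(X, dtype=float)
    y = np.asarray(y, dtype=float)
    n = len(y)
    # Pairwise squared Euclidean distances (brute force, fine for small N).
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    np.fill_diagonal(d2, np.inf)   # exclude each point as its own neighbour
    nn = d2.argmin(axis=1)         # index of each point's nearest neighbour
    return ((y[nn] - y) ** 2).sum() / (2 * n)

def select_features(X, y, k):
    """Greedy forward selection: add the feature that most reduces
    the delta test estimate at each step (illustrative wrapper)."""
    selected, remaining = [], list(range(X.shape[1]))
    for _ in range(k):
        best = min(remaining, key=lambda j: delta_test(X[:, selected + [j]], y))
        selected.append(best)
        remaining.remove(best)
    return selected
```

With a relevant feature and pure-noise distractors, the delta test evaluated on the relevant feature alone is much smaller than on a noise feature, so the greedy wrapper picks the relevant one first; the instability the paper addresses arises because such rankings can flip under small perturbations of the training set.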
| Field | Value |
|---|---|
| Original language | English |
| Pages (from-to) | 1-7 |
| Number of pages | 7 |
| Journal | IEEE Transactions on Artificial Intelligence |
| Publication status | Published - 12 Sept 2023 |
Keywords
- Artificial intelligence
- Bars
- Classification and regression
- Feature extraction
- Interpretable machine learning
- Numerical stability
- Stability criteria
- Standards
- Training
- Trustworthy artificial intelligence