Abstract
Feature selection is an important preprocessing step that helps to improve model performance and to extract knowledge about important features in a dataset. However, feature selection methods are known to be adversely impacted by changes in the training dataset: even small differences between input datasets can result in the selection of different feature sets. This letter tackles this issue in the particular case of the delta test (DT), a well-known feature relevance criterion that approximates the noise variance for regression tasks. A new feature selection criterion is proposed, the delta test bar, which is shown to be more stable than its close competitors.
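The delta test the abstract builds on is the standard nearest-neighbor noise-variance estimator for regression: for M samples, it computes the mean squared output difference between each point and its nearest neighbor in input space, halved. A minimal sketch under that standard definition (the function name and brute-force neighbor search are illustrative; the paper's proposed delta test bar variant is not reproduced here):

```python
import numpy as np

def delta_test(X, y):
    """Standard delta test noise-variance estimate:
    delta = 1/(2M) * sum_i (y[nn(i)] - y[i])^2,
    where nn(i) is the nearest neighbor of x_i in input space.
    Brute-force O(M^2) neighbor search, for illustration only."""
    X = np.asarray(X, dtype=float)
    y = np.asarray(y, dtype=float)
    M = len(y)
    # Pairwise squared Euclidean distances between input points.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    np.fill_diagonal(d2, np.inf)  # exclude each point as its own neighbor
    nn = d2.argmin(axis=1)        # index of the nearest neighbor of each x_i
    return ((y[nn] - y) ** 2).sum() / (2 * M)
```

In feature selection, this estimate is evaluated on candidate feature subsets: subsets yielding a lower delta value are considered more relevant, since they explain more of the output variance. The abstract's point is that such subset rankings are sensitive to small training-set perturbations, which the proposed delta test bar is designed to mitigate.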
Original language | English |
---|---|
Article number | 10248958 |
Pages (from-to) | 1911-1917 |
Number of pages | 7 |
Journal | IEEE Transactions on Artificial Intelligence |
Volume | 5 |
Issue number | 5 |
DOIs | |
Publication status | Published - 12 Sept 2023 |
Keywords
- Artificial intelligence
- Bars
- Classification and regression
- Feature extraction
- interpretable machine learning
- Numerical stability
- Stability criteria
- Standards
- Training
- trustworthy artificial intelligence