Abstract
Interpreting nonlinear dimensionality reduction models using external features (or external variables) is crucial in many fields, such as psychology and ecology. Multidimensional scaling (MDS) is one of the most frequently used dimensionality reduction techniques in these fields. However, because the MDS objective function is invariant under rotation, the resulting embedding can be difficult to interpret. This paper analyzes how rotating an MDS embedding affects the sparse regression models used to interpret it, and proposes the Best Interpretable Rotation (BIR) method, which uses external information to select the MDS rotation that is easiest to interpret.
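The rotation invariance mentioned in the abstract can be illustrated with a minimal numpy sketch (not the paper's BIR implementation; all variable names and the toy data are illustrative assumptions): a classical MDS embedding is computed by eigendecomposition of the double-centered squared-distance matrix, and applying any orthogonal rotation to the embedding leaves the stress value unchanged, so the axes a regression model would interpret are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 5))              # toy high-dimensional data

# Classical MDS: double-center the squared distance matrix, then eigendecompose
D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
n = D2.shape[0]
J = np.eye(n) - np.ones((n, n)) / n       # centering matrix
B = -0.5 * J @ D2 @ J
vals, vecs = np.linalg.eigh(B)
idx = np.argsort(vals)[::-1][:2]          # two largest eigenvalues
Y = vecs[:, idx] * np.sqrt(vals[idx])     # 2-D embedding

# Any rotation preserves pairwise distances, hence the MDS stress
theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
Yr = Y @ R

dist = lambda Z: np.sqrt(((Z[:, None] - Z[None, :]) ** 2).sum(-1))
stress = lambda Z: np.sqrt(((np.sqrt(D2) - dist(Z)) ** 2).sum())
print(np.isclose(stress(Y), stress(Yr)))  # True: rotation does not change stress
```

Because every rotation of `Y` is an equally good MDS solution, BIR's contribution is to pick, among these equivalent rotations, the one whose axes are best explained by the external variables.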
| Original language | English |
|---|---|
| Pages (from-to) | 83-96 |
| Number of pages | 14 |
| Journal | Neurocomputing |
| Volume | 342 |
| DOIs | |
| Publication status | Published - 4 Feb 2019 |
Keywords
- Machine learning
- Interpretability
- Dimensionality reduction
- Multidimensional scaling
- Orthogonal transformation
- Multi-view
- Sparsity
- Lasso regularization
Fingerprint
Research topics of 'BIR: A Method for Selecting the Best Interpretable Multidimensional Scaling Rotation using External Variables'.

Student theses
- Interpretability and Explainability in Machine Learning and their Application to Nonlinear Dimensionality Reduction
  Bibal, A. (Author); Frenay, B. (Supervisor); Vanhoof, W. (President); Cleve, A. (Jury); Dumas, B. (Jury); Lee, J. A. (Jury); Galarraga, L. (Jury). 16 Nov 2020. Student thesis: Doc types › Doctor of Sciences.