Interpreting nonlinear dimensionality reduction models using external features (or external variables) is crucial in many fields, such as psychology and ecology. Multidimensional scaling (MDS) is one of the most frequently used dimensionality reduction techniques in these fields. However, the rotation invariance of the MDS objective function can make the resulting embedding difficult to interpret. This paper analyzes how the rotation of MDS embeddings affects the sparse regression models used to interpret them and proposes a method, called the Best Interpretable Rotation (BIR) method, which selects the MDS rotation best suited to interpreting the embedding with external information.
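To make the rotation issue concrete, the following minimal Python sketch (not the thesis's BIR algorithm) rotates a 2-D MDS embedding by an arbitrary angle and fits a lasso model to an external variable in both the original and rotated coordinates; the Iris labels standing in for an external variable, the rotation angle, and the lasso penalty are all illustrative assumptions.

```python
# Illustrative sketch: an orthogonal rotation leaves the MDS embedding's
# pairwise distances (and thus its stress) unchanged, but the sparse
# regression used to interpret the axes can change with the rotation.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.manifold import MDS
from sklearn.linear_model import Lasso

X, y = load_iris(return_X_y=True)        # y plays the role of an external variable
Z = MDS(n_components=2, random_state=0).fit_transform(X)

theta = np.pi / 4                         # arbitrary rotation angle (assumption)
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
Z_rot = Z @ R                             # rotated embedding: same distances, same stress

for name, emb in [("original", Z), ("rotated", Z_rot)]:
    coefs = Lasso(alpha=0.1).fit(emb, y).coef_
    print(name, coefs)                    # the sparsity pattern may differ between rotations
```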
- Machine learning
- Dimensionality reduction
- Multidimensional scaling
- Orthogonal transformation
- Lasso regularization
Interpretability and Explainability in Machine Learning and their Application to Nonlinear Dimensionality Reduction
Author: Bibal, A., 16 Nov 2020
Student thesis: Doctor of Sciences