BIR: A Method for Selecting the Best Interpretable Multidimensional Scaling Rotation using External Variables

Rebecca Marion, Adrien Bibal, Benoît Frenay

Research output: Contribution to journal › Article

Abstract

Interpreting nonlinear dimensionality reduction models using external features (or external variables) is crucial in many fields, such as psychology and ecology. Multidimensional scaling (MDS) is one of the most frequently used dimensionality reduction techniques in these fields. However, the rotation invariance of the MDS objective function may make interpretation of the resulting embedding difficult. This paper analyzes how the rotation of MDS embeddings affects sparse regression models used to interpret them and proposes a method, called the Best Interpretable Rotation (BIR) method, which selects the best MDS rotation for interpreting embeddings using external information.
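The rotation invariance mentioned in the abstract can be illustrated with a short sketch (this is not the paper's BIR implementation, just a minimal demonstration of the underlying property): the MDS stress objective depends only on pairwise distances between embedded points, so applying any orthogonal rotation to an embedding leaves the objective value unchanged, even though the rotated axes may be easier or harder to interpret with external variables.

```python
import numpy as np

# Illustrative sketch (not the paper's BIR method): MDS stress depends only
# on pairwise distances in the embedding, so any orthogonal rotation of an
# embedding leaves the objective value unchanged.
rng = np.random.default_rng(0)
Y = rng.normal(size=(10, 2))  # a hypothetical 2-D MDS embedding

# Build a random 2x2 orthogonal matrix via QR decomposition.
Q, _ = np.linalg.qr(rng.normal(size=(2, 2)))

def pairwise_dists(X):
    """Euclidean distance matrix between the rows of X."""
    diff = X[:, None, :] - X[None, :, :]
    return np.sqrt((diff ** 2).sum(-1))

# Distances (and therefore the MDS stress) are identical before and after
# rotating the embedding, so every rotation is an equally valid solution.
print(np.allclose(pairwise_dists(Y), pairwise_dists(Y @ Q)))  # True
```

Because all rotations score identically under the MDS objective, some external criterion is needed to pick one; BIR uses the fit of sparse regression models on external variables for this purpose.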
Original language: English
Journal: Neurocomputing
Publication status: Published - 4 Feb 2019

Keywords

  • Machine learning
  • Interpretability
  • Dimensionality reduction
  • Multidimensional scaling
  • Orthogonal transformation
  • Multi-view
  • Sparsity
  • Lasso regularization

Cite this

@article{51f08cd4e241408c9340f527dd2779f1,
title = "BIR: A Method for Selecting the Best Interpretable Multidimensional Scaling Rotation using External Variables",
abstract = "Interpreting nonlinear dimensionality reduction models using external features (or external variables) is crucial in many fields, such as psychology and ecology. Multidimensional scaling (MDS) is one of the most frequently used dimensionality reduction techniques in these fields. However, the rotation invariance of the MDS objective function may make interpretation of the resulting embedding difficult. This paper analyzes how the rotation of MDS embeddings affects sparse regression models used to interpret them and proposes a method, called the Best Interpretable Rotation (BIR) method, which selects the best MDS rotation for interpreting embeddings using external information.",
keywords = "Machine learning, Interpretability, Dimensionality reduction, Multidimensional scaling, Orthogonal transformation, Multi-view, Sparsity, Lasso regularization",
author = "Rebecca Marion and Adrien Bibal and Beno{\^i}t Frenay",
year = "2019",
month = "2",
day = "4",
language = "English",
journal = "Neurocomputing",
issn = "0925-2312",
publisher = "Elsevier",
}

URL: https://www.sciencedirect.com/science/article/pii/S0925231219301481