Finding the Most Interpretable MDS Rotation for Sparse Linear Models based on External Features

Adrien Bibal, Rebecca Marion, Benoît Frenay

Research output: Contribution in Book/Catalog/Report/Conference proceeding › Conference contribution


Abstract

One approach to interpreting multidimensional scaling (MDS) embeddings is to estimate a linear relationship between the MDS dimensions and a set of external features. However, because MDS only preserves distances between instances, the MDS embedding is invariant to rotation. As a result, the weights characterizing this linear relationship are arbitrary and difficult to interpret. This paper proposes a procedure for selecting the most pertinent rotation for interpreting a 2D MDS embedding.
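The rotation invariance described in the abstract can be sketched numerically: rotating a 2D embedding leaves all pairwise distances (the only thing MDS preserves) unchanged, while the least-squares weights relating external features to the embedding dimensions change with the rotation. This is an illustrative sketch with invented variable names, not the paper's procedure.

```python
import numpy as np

rng = np.random.default_rng(0)
Z = rng.normal(size=(50, 2))   # a 2D embedding (stand-in for MDS output)
X = rng.normal(size=(50, 4))   # external features describing the instances

theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
Zr = Z @ R                     # the same embedding, arbitrarily rotated

def pdist(A):
    """All pairwise Euclidean distances between rows of A."""
    return np.linalg.norm(A[:, None, :] - A[None, :, :], axis=-1)

# Pairwise distances are unchanged by rotation...
assert np.allclose(pdist(Z), pdist(Zr))

# ...but the least-squares weights linking features to dimensions are not.
W,  *_ = np.linalg.lstsq(X, Z,  rcond=None)
Wr, *_ = np.linalg.lstsq(X, Zr, rcond=None)
print(np.allclose(W, Wr))  # False: the weights depend on the arbitrary rotation
```

Since any rotation of the embedding fits the distances equally well, the feature weights are only identified up to that rotation, which is exactly the ambiguity the paper addresses.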
Original language: English
Title of host publication: 26th European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning
Place of publication: Bruges
Pages: 537-542
Number of pages: 6
Publication status: Published - 2018
Event: 26th European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning (ESANN 2018) - Bruges, Belgium
Duration: 25 Apr 2018 - 27 Apr 2018

Conference

Conference: 26th European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning (ESANN 2018)
Country: Belgium
City: Bruges
Period: 25/04/18 - 27/04/18

Keywords

  • Machine learning
  • Interpretability
  • Dimensionality reduction
  • Multidimensional scaling
  • Multi-view
  • Sparsity
  • Lasso regularization
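The keywords suggest that sparsity guides the choice of rotation: a rotation under which few external features carry weight is easier to interpret. A minimal numpy-only sketch of that idea, scanning candidate angles and scoring each by the L1 norm of the least-squares weights (a crude proxy for the sparsity a lasso fit would encourage — the paper's actual criterion and procedure may differ):

```python
import numpy as np

rng = np.random.default_rng(1)
Z = rng.normal(size=(60, 2))   # hypothetical 2D MDS embedding
X = rng.normal(size=(60, 5))   # hypothetical external features

def rot(theta):
    """2D rotation matrix for angle theta."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

# Scan candidate rotations; score each by the L1 norm of the
# least-squares weights mapping features to the rotated dimensions.
angles = np.linspace(0.0, np.pi, 180, endpoint=False)
scores = []
for theta in angles:
    W, *_ = np.linalg.lstsq(X, Z @ rot(theta), rcond=None)
    scores.append(np.abs(W).sum())

best = angles[int(np.argmin(scores))]
print(f"rotation with the sparsest weights (by this proxy): {best:.3f} rad")
```

In practice one would refit a sparse model (e.g. lasso) at each candidate rotation rather than using an L1 proxy on ordinary least squares; the scan above only illustrates why the choice of rotation changes which features appear relevant.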

Cite this

Bibal, A., Marion, R., & Frenay, B. (2018). Finding the Most Interpretable MDS Rotation for Sparse Linear Models based on External Features. In 26th European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning (pp. 537-542). Bruges.