Finding the Most Interpretable MDS Rotation for Sparse Linear Models based on External Features

Adrien Bibal, Rebecca Marion, Benoît Frénay

Research output: Contribution in Book/Catalog/Report/Conference proceeding › Conference contribution


Abstract

One approach to interpreting multidimensional scaling (MDS) embeddings is to estimate a linear relationship between the MDS dimensions and a set of external features. However, because MDS only preserves distances between instances, the MDS embedding is invariant to rotation. As a result, the weights characterizing this linear relationship are arbitrary and difficult to interpret. This paper proposes a procedure for selecting the most pertinent rotation for interpreting a 2D MDS embedding.
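The abstract does not spell out how the "most pertinent" rotation is chosen, but the keywords (sparsity, Lasso regularization) suggest the flavor of the approach. The following is a minimal sketch under assumptions, not the authors' actual procedure: it grid-searches 2D rotation angles and keeps the one for which Lasso models relating external features to the rotated MDS dimensions have the fewest nonzero weights. The dataset (`load_iris`), the regularization strength `alpha=0.1`, and the sparsity-count criterion are all illustrative choices.

```python
# Sketch: choose the 2D MDS rotation whose dimensions admit the sparsest
# linear description in terms of external features. Illustrative only;
# the paper's actual selection criterion may differ.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.manifold import MDS
from sklearn.linear_model import Lasso
from sklearn.preprocessing import StandardScaler

X = load_iris().data                                      # external features (n x p)
Z = MDS(n_components=2, random_state=0).fit_transform(X)  # 2D MDS embedding
Xs = StandardScaler().fit_transform(X)

best_angle, best_nnz = 0.0, np.inf
for theta in np.linspace(0.0, np.pi, 180, endpoint=False):
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])  # 2D rotation matrix (preserves distances)
    Zr = Z @ R.T                     # rotated embedding
    # One Lasso per rotated dimension; the total number of nonzero weights
    # serves as the (assumed) interpretability score: fewer is better.
    nnz = sum(
        np.count_nonzero(Lasso(alpha=0.1).fit(Xs, Zr[:, k]).coef_)
        for k in range(2)
    )
    if nnz < best_nnz:
        best_angle, best_nnz = theta, nnz

print(f"Selected rotation: {np.degrees(best_angle):.1f} deg "
      f"({best_nnz} nonzero Lasso weights)")
```

In this toy setup, searching over rotations before fitting the sparse models is what makes the weights well defined: because MDS only constrains pairwise distances, every rotation of the embedding is an equally valid solution, and the search simply picks the representative whose linear description is simplest.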
Original language: English
Title of host publication: 26th European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning
Subtitle of host publication: ESANN 2018: Bruges, Belgium, April 25, 26, 27, 2018
Editors: Michel Verleysen
Place of Publication: Louvain-la-Neuve
Publisher: CIACO
Pages: 537-542
ISBN (Electronic): 9782875870476
Publication status: Published - 1 Jan 2018
Event: 26th European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning (ESANN 2018) - Bruges, Belgium
Duration: 25 Apr 2018 - 27 Apr 2018

Conference

Conference: 26th European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning (ESANN 2018)
Country/Territory: Belgium
City: Bruges
Period: 25/04/18 - 27/04/18

Keywords

  • Machine learning
  • Interpretability
  • Dimensionality reduction
  • Multidimensional scaling
  • Multi-view
  • Sparsity
  • Lasso regularization
