Non-linear dimensionality reduction techniques, such as t-SNE, are widely used to visualize and analyze high-dimensional datasets. While non-linear projections can be of high quality, the dimensions of the obtained embeddings are hard, or even impossible, to interpret. This paper adapts LIME to locally explain t-SNE embeddings: the sampling and black-box-querying steps of LIME are modified so that they can be applied to t-SNE. For a particular instance x and a particular t-SNE embedding Y, the proposal provides an interpretable model that locally explains the projection of x onto Y.
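The abstract does not spell out how the sampling and querying steps are modified, so the sketch below is only a minimal illustration of the general LIME-for-t-SNE idea, not the authors' algorithm. It assumes Gaussian perturbations around x, an out-of-sample projection function `embed_fn` standing in for the modified black-box query, and hypothetical names and hyperparameters (`explain_tsne_locally`, `sigma`, `kernel_width`).

```python
# Minimal sketch of a LIME-style local explanation for a t-SNE embedding.
# NOT the paper's exact algorithm: the sampling, querying, and weighting
# choices here are illustrative assumptions.
import numpy as np
from sklearn.linear_model import Ridge

def explain_tsne_locally(x, embed_fn, n_samples=500, sigma=0.1, kernel_width=1.0):
    """Fit local linear surrogates mapping perturbations of x to embedding axes.

    x:        1-D array, the instance to explain (in the original feature space).
    embed_fn: assumed out-of-sample projection into the t-SNE embedding Y,
              taking an (n, d) array and returning (n, 2) coordinates.
    """
    rng = np.random.default_rng(0)
    # Sampling step: draw perturbations around x in the input space.
    Z = x + sigma * rng.standard_normal((n_samples, x.shape[0]))
    # Black-box-querying step: project each perturbation into the embedding.
    Yz = embed_fn(Z)
    # Weight samples by proximity to x, as in LIME.
    dist = np.linalg.norm(Z - x, axis=1)
    weights = np.exp(-(dist ** 2) / kernel_width ** 2)
    # One weighted linear model per embedding dimension; its coefficients
    # indicate which input features locally drive that axis of Y.
    return [Ridge(alpha=1.0).fit(Z, Yz[:, j], sample_weight=weights).coef_
            for j in range(Yz.shape[1])]
```

Since plain t-SNE has no parametric mapping, any concrete `embed_fn` needs an out-of-sample mechanism; openTSNE's `transform` on a fitted embedding is one option, though the paper may handle this querying step differently.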
| Field | Value |
| --- | --- |
| Title of host publication | ESANN 2020 |
| Subtitle of host publication | 28th European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning |
| Place of Publication | Bruges, Belgium |
| Publication status | Published - 21 Oct 2020 |
| Event | 28th European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning (ESANN 2020), Bruges, Belgium, 2 Oct 2020 - 4 Oct 2020 |
Related student thesis: Bibal, A. (16 Nov 2020). Interpretability and Explainability in Machine Learning and their Application to Nonlinear Dimensionality Reduction. Doctor of Sciences.