Abstract
Non-linear dimensionality reduction techniques, such as t-SNE, are widely used to visualize and analyze high-dimensional datasets. While non-linear projections can be of high quality, it is hard, or even impossible, to interpret the dimensions of the resulting embeddings. This paper adapts LIME to explain t-SNE embeddings locally. More precisely, the sampling and black-box-querying steps of LIME are modified so that they apply to t-SNE. The proposal thus provides, for a particular instance x and a particular t-SNE embedding Y, an interpretable model that locally explains the projection of x onto Y.
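The abstract only names the two LIME steps that are adapted; the paper's exact modifications are not given here. As a rough, non-authoritative sketch of the general pipeline it describes (perturb an instance, query where the perturbations land in the embedding, fit a locally weighted interpretable model), the snippet below uses openTSNE's out-of-sample `transform` as a stand-in for the black-box query, Gaussian perturbations with an illustrative width `sigma`, and a ridge surrogate; the kernel width follows LIME's default heuristic. None of these choices should be read as the paper's actual method.

```python
# Hedged sketch: LIME-style local surrogate for a t-SNE embedding.
# The out-of-sample query via openTSNE and the Gaussian sampling width
# `sigma` are illustrative assumptions, not the paper's actual steps.
import numpy as np
from openTSNE import TSNE                      # pip install openTSNE
from sklearn.datasets import load_digits
from sklearn.linear_model import Ridge
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# High-dimensional data X and its 2-D t-SNE embedding Y.
X = StandardScaler().fit_transform(load_digits().data)
embedding = TSNE(random_state=0).fit(X)        # Y; row i is the projection of X[i]

i = 42                                         # instance x to explain
x = X[i]

# 1) Sampling step: perturb x with Gaussian noise in the input space.
n_samples, sigma = 500, 0.5
Z = x + sigma * rng.standard_normal((n_samples, X.shape[1]))

# 2) Black-box querying step: where do the perturbed points land on Y?
#    (openTSNE's out-of-sample transform stands in for the modified query.)
Y_Z = np.asarray(embedding.transform(Z))

# 3) Locality weights: exponential kernel on distances to x in the input space
#    (kernel width follows LIME's default heuristic).
d = np.linalg.norm(Z - x, axis=1)
kernel_width = 0.75 * np.sqrt(X.shape[1])
w = np.exp(-(d ** 2) / kernel_width ** 2)

# 4) Interpretable surrogate: one weighted linear model per embedding axis;
#    its coefficients say which input features locally move x along that axis.
for axis in range(2):
    surrogate = Ridge(alpha=1.0).fit(Z, Y_Z[:, axis], sample_weight=w)
    top = np.argsort(np.abs(surrogate.coef_))[::-1][:5]
    print(f"t-SNE dim {axis}: most influential input features -> {top.tolist()}")
```

The two per-axis surrogate models play the role of the interpretable model: their coefficients indicate which input features locally drive the placement of x along each t-SNE dimension.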
Original language | English |
---|---|
Title of host publication | ESANN 2020 |
Subtitle of host publication | 28th European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning |
Place of Publication | Bruges, Belgium |
Publisher | ESANN (i6doc.com) |
Pages | 393-398 |
ISBN (Electronic) | 978-2-87587-074-2 |
ISBN (Print) | 978-2-87587-073-5 |
Publication status | Published - 21 Oct 2020 |
Event | 28th European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning (ESANN 2020), Bruges, Belgium. Duration: 2 Oct 2020 → 4 Oct 2020 |
Conference
Conference | 28th European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning |
---|---|
Country/Territory | Belgium |
City | Bruges |
Period | 2/10/20 → 4/10/20 |
Student theses
- Interpretability and Explainability in Machine Learning and their Application to Nonlinear Dimensionality Reduction. Bibal, A. (Author), Frenay, B. (Supervisor), Vanhoof, W. (President), Cleve, A. (Jury), Dumas, B. (Jury), Lee, J. A. (Jury) & Galarraga, L. (Jury), 16 Nov 2020. Student thesis: Doctor of Sciences.