Abstract
In order to be useful, visualizations need to be interpretable. This paper uses a user-based approach to combine and assess quality measures in order to better model user preferences. Results show that cluster separability measures are outperformed by a neighborhood conservation measure, even though the former are usually considered intuitively representative of user motives. Moreover, combining measures, as opposed to using a single measure, further improves prediction performance.
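The idea of combining quality measures to predict which of two visualizations a user prefers can be sketched as follows. This is a hypothetical illustration on synthetic data using a plain Bradley-Terry-style logistic model on measure differences, not the paper's adapted Cox model; the measure names and the "true" user weights are assumptions for illustration only.

```python
import math
import random

random.seed(0)

def make_pair():
    # Each visualization is described by two quality measures, e.g.
    # [cluster separability, neighborhood conservation]. The synthetic
    # "user" weighs the second measure more, echoing the abstract's finding.
    a = [random.random(), random.random()]
    b = [random.random(), random.random()]
    true_score = lambda v: 0.3 * v[0] + 1.5 * v[1]
    label = 1 if true_score(a) > true_score(b) else 0  # 1 means "a preferred"
    return a, b, label

pairs = [make_pair() for _ in range(500)]

# Logistic model on measure differences:
# P(a preferred over b) = sigmoid(w . (a - b)), fitted by plain SGD.
w = [0.0, 0.0]
lr = 0.5
for _ in range(200):
    for a, b, y in pairs:
        d = [a[i] - b[i] for i in range(2)]
        p = 1.0 / (1.0 + math.exp(-sum(w[i] * d[i] for i in range(2))))
        for i in range(2):
            w[i] += lr * (y - p) * d[i]

# Accuracy of the combined-measure model on the pairwise judgments.
acc = sum(
    (sum(w[i] * (a[i] - b[i]) for i in range(2)) > 0) == (y == 1)
    for a, b, y in pairs
) / len(pairs)
```

Under this toy setup, the learned weight on the second measure dominates, mirroring the abstract's result that a neighborhood conservation measure outperforms cluster separability, while the combined model predicts preferences better than either measure alone.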
Original language | English |
---|---|
Title of host publication | NIPS Workshop on Interpretable Machine Learning in Complex Systems |
Place of Publication | Barcelona |
Publication status | Published - 2016 |
Event | Thirtieth Conference on Neural Information Processing Systems - Sagrada Familia, Barcelona, Spain. Duration: 5 Dec 2016 → 10 Dec 2016 |
Symposium
Symposium | Thirtieth Conference on Neural Information Processing Systems |
---|---|
Abbreviated title | NIPS 2016 (NeurIPS) |
Country/Territory | Spain |
City | Barcelona |
Period | 5/12/16 → 10/12/16 |
Fingerprint
Dive into the research topics of 'Learning Interpretability for Visualizations using Adapted Cox Models through a User Experiment'.

Student theses
- Interpretability and Explainability in Machine Learning and their Application to Nonlinear Dimensionality Reduction
  Bibal, A. (Author), FRENAY, B. (Supervisor), VANHOOF, W. (President), Cleve, A. (Jury), Dumas, B. (Jury), Lee, J. A. (Jury) & Galarraga, L. (Jury), 16 Nov 2020. Student thesis: Doc types › Doctor of Sciences