In order to be useful, visualizations need to be interpretable. This paper uses a user-based approach to combine and assess quality measures in order to better model user preferences. Results show that cluster separability measures are outperformed by a neighborhood conservation measure, even though the former are usually considered intuitively representative of user motives. Moreover, combining measures, as opposed to using a single measure, further improves prediction performance.
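The neighborhood conservation idea mentioned in the abstract can be illustrated as the average overlap between each point's k nearest neighbors in the original high-dimensional space and in the 2-D visualization. This is a minimal sketch of that family of measures, not the exact measure evaluated in the paper; the function name `knn_preservation` and the parameter `k` are illustrative choices.

```python
import numpy as np

def knn_preservation(X_high, X_low, k=10):
    """Average fraction of each point's k nearest neighbors in the
    high-dimensional data that are kept in the low-dimensional embedding."""
    def knn_indices(X, k):
        # Pairwise squared Euclidean distances.
        d = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
        np.fill_diagonal(d, np.inf)  # exclude each point itself
        return np.argsort(d, axis=1)[:, :k]

    nn_high = knn_indices(np.asarray(X_high, dtype=float), k)
    nn_low = knn_indices(np.asarray(X_low, dtype=float), k)
    overlaps = [len(set(a) & set(b)) / k for a, b in zip(nn_high, nn_low)]
    return float(np.mean(overlaps))
```

A score of 1.0 means every point's local neighborhood is perfectly preserved by the embedding; values near 0 indicate the visualization scrambles local structure.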
Title of host publication: NIPS Workshop on Interpretable Machine Learning in Complex Systems
Place of publication: Barcelona
Publication status: Published - 2016
Event: Thirtieth Conference on Neural Information Processing Systems (NIPS 2016, later renamed NeurIPS) - Sagrada Familia, Barcelona, Spain
Duration: 5 Dec 2016 → 10 Dec 2016
Title: Learning Interpretability for Visualizations using Adapted Cox Models through a User Experiment
Related thesis: Interpretability and Explainability in Machine Learning and their Application to Nonlinear Dimensionality Reduction. Author: Bibal, A., 16 Nov 2020. Student thesis: Doctor of Sciences.