To be useful, visualizations need to be interpretable. This paper uses a user-based approach to combine and assess quality measures in order to better model user preferences. Results show that cluster separability measures are outperformed by a neighborhood conservation measure, even though the former are usually considered intuitively representative of user motives. Moreover, combining measures, as opposed to using a single measure, further improves prediction performance.
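The two families of measures compared in the abstract can be illustrated with standard proxies. This is a hedged sketch, not the paper's actual protocol: trustworthiness stands in for the neighborhood conservation measure and the silhouette score for a cluster separability measure, both computed on a toy 2-D embedding.

```python
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.decomposition import PCA
from sklearn.manifold import trustworthiness
from sklearn.metrics import silhouette_score

# Toy data: three Gaussian clusters in 10 dimensions.
X, y = make_blobs(n_samples=300, n_features=10, centers=3, random_state=0)

# A 2-D "visualization" of the data (PCA stands in for any embedding method).
X_2d = PCA(n_components=2, random_state=0).fit_transform(X)

# Neighborhood conservation proxy: trustworthiness checks whether points that
# are neighbors in the 2-D view were also neighbors in the original space.
nc = trustworthiness(X, X_2d, n_neighbors=10)

# Cluster separability proxy: silhouette score of the known labels in 2-D.
sep = silhouette_score(X_2d, y)

print(f"neighborhood conservation (trustworthiness): {nc:.3f}")
print(f"cluster separability (silhouette): {sep:.3f}")
```

Both scores lie in a bounded range (trustworthiness in [0, 1], silhouette in [-1, 1]), which makes them easy to combine into a single preference model, as the paper proposes.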
|Title||NIPS Workshop on Interpretable Machine Learning in Complex Systems|
|Place of publication||Barcelona|
|Publication status||Published - 2016|
Contains this citation
Bibal, A., & Frenay, B. (2016). Learning Interpretability for Visualizations using Adapted Cox Models through a User Experiment. In NIPS Workshop on Interpretable Machine Learning in Complex Systems.