Learning Interpretability for Visualizations using Adapted Cox Models through a User Experiment

Research output: Conference contribution (in conference proceedings)


Abstract

In order to be useful, visualizations need to be interpretable. This paper takes a user-based approach to combining and assessing quality measures so as to better model user preferences. Results show that cluster separability measures are outperformed by a neighborhood conservation measure, even though the former are usually considered intuitively representative of user motives. Moreover, combining measures, as opposed to using a single measure, further improves prediction performance.
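The abstract names the modeling tool but gives no detail of it. As a rough, non-authoritative illustration, the sketch below fits a standard Cox proportional hazards model (via the lifelines Python library) with two quality measures as covariates; the column names, the synthetic data, and the survival-style encoding of preference scores are all assumptions made for illustration, not the paper's actual adapted model.

    # A minimal sketch, not the paper's adaptation: fit a standard Cox
    # proportional hazards model with visualization quality measures as
    # covariates, on synthetic data with hypothetical column names.
    import numpy as np
    import pandas as pd
    from lifelines import CoxPHFitter  # assumes the lifelines library

    rng = np.random.default_rng(0)
    n = 200

    # Hypothetical quality measures for n visualizations.
    df = pd.DataFrame({
        "cluster_separability": rng.normal(size=n),
        "neighborhood_conservation": rng.normal(size=n),
    })

    # Survival-style encoding (an assumption): treat a synthetic
    # preference score as the "duration" and mark every row as observed.
    df["duration"] = rng.exponential(
        scale=np.exp(0.5 * df["neighborhood_conservation"]))
    df["event"] = 1

    cph = CoxPHFitter()
    cph.fit(df, duration_col="duration", event_col="event")
    cph.print_summary()  # per-measure coefficients act as learned weights

In this analogy, the fitted coefficients play the role of the combined weighting of quality measures that the abstract refers to: each coefficient indicates how strongly one measure contributes to the predicted preference.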
Original language: English
Title of host publication: NIPS Workshop on Interpretable Machine Learning in Complex Systems
Place of publication: Barcelona
Publication status: Published - 2016


Cite this

Bibal, A., & Frénay, B. (2016). Learning Interpretability for Visualizations using Adapted Cox Models through a User Experiment. In NIPS Workshop on Interpretable Machine Learning in Complex Systems.