Learning Interpretability for Visualizations using Adapted Cox Models through a User Experiment

Research output: Contribution in Book/Catalog/Report/Conference proceeding › Conference contribution


Abstract

In order to be useful, visualizations need to be interpretable. This paper uses a user-based approach to combine and assess quality measures in order to better model user preferences. Results show that cluster separability measures are outperformed by a neighborhood conservation measure, even though the former are usually considered intuitively representative of user motives. Moreover, combining measures, as opposed to using a single measure, further improves prediction performance.
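The neighborhood conservation idea mentioned in the abstract can be illustrated with a generic k-nearest-neighbor preservation score: the fraction of each point's nearest neighbors in the original space that remain among its nearest neighbors in the visualization. This is a minimal sketch of that family of measures, not the exact measure used in the paper, and the function and variable names are illustrative.

```python
import math

def knn_indices(points, k):
    """For each point, return the set of indices of its k nearest
    neighbors (Euclidean distance, excluding the point itself)."""
    n = len(points)
    neighbors = []
    for i in range(n):
        dists = sorted(
            (math.dist(points[i], points[j]), j) for j in range(n) if j != i
        )
        neighbors.append({j for _, j in dists[:k]})
    return neighbors

def neighborhood_preservation(high_dim, low_dim, k=3):
    """Average fraction of each point's k-NN set that is preserved
    when moving from the original space to the visualization.
    Returns a value in [0, 1]; 1 means all neighborhoods are kept."""
    nn_high = knn_indices(high_dim, k)
    nn_low = knn_indices(low_dim, k)
    n = len(high_dim)
    return sum(len(nn_high[i] & nn_low[i]) for i in range(n)) / (n * k)

if __name__ == "__main__":
    # Two well-separated clusters; the 2D projection keeps both intact.
    high = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (5, 5, 5), (6, 5, 5), (5, 6, 5)]
    low = [(0, 0), (1, 0), (0, 1), (5, 5), (6, 5), (5, 6)]
    print(neighborhood_preservation(high, low, k=2))  # → 1.0
```

Scores like this (one per candidate visualization) are the kind of per-layout features that the paper's adapted Cox models combine to predict which visualization a user would prefer.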
Original language: English
Title of host publication: NIPS Workshop on Interpretable Machine Learning in Complex Systems
Place of publication: Barcelona
Publication status: Published - 2016
Event: Thirtieth Conference on Neural Information Processing Systems - Sagrada Família, Barcelona, Spain
Duration: 5 Dec 2016 - 10 Dec 2016

Symposium

Symposium: Thirtieth Conference on Neural Information Processing Systems
Abbreviated title: NIPS 2016 (NeurIPS)
Country: Spain
City: Barcelona
Period: 5/12/16 - 10/12/16

