Abstract
To be useful, visualizations need to be interpretable. This paper uses a user-based approach to combine and assess quality measures in order to better model user preferences. Results show that cluster separability measures are outperformed by a neighborhood conservation measure, even though the former are usually considered intuitively representative of user motives. Moreover, combining measures, as opposed to using a single measure, further improves prediction performance.
Original language | English |
---|---|
Title | NIPS Workshop on Interpretable Machine Learning in Complex Systems |
Place of publication | Barcelona |
Publication status | Published - 2016 |
Event | Thirtieth Conference on Neural Information Processing Systems - Sagrada Familia, Barcelona, Spain. Duration: 5 Dec 2016 → 10 Dec 2016 |
Conference
Conference | Thirtieth Conference on Neural Information Processing Systems |
---|---|
Abbreviated title | NIPS 2016 (NeurIPS) |
Country/Territory | Spain |
City | Barcelona |
Period | 5/12/16 → 10/12/16 |
Student theses
-
Interpretability and Explainability in Machine Learning and their Application to Nonlinear Dimensionality Reduction
Author: Bibal, A., 16 Nov 2020. Supervisor: FRENAY, B. (Promoter), VANHOOF, W. (President), Cleve, A. (Jury), Dumas, B. (Jury), Lee, J. A. (External person) (Jury) & Galarraga, L. A. (External person) (Jury)
Student thesis: Doc types › Doctor of Sciences