Abstract
This report explores the concept of explainability in AI-based systems, distinguishing between
“local” and “global” explanations. “Local” explanations refer to specific algorithmic outputs in their
operational context, while “global” explanations encompass the system as a whole. The need to tailor
explanations to users and tasks is emphasised, acknowledging that explanations are not universal
solutions and can have unintended consequences. Two use cases illustrate the application of
explainability techniques: an educational recommender system, and explainable AI for scientific
discoveries. The report discusses the subjective nature of meaningfulness in explanations and
proposes cognitive metrics for its evaluation. It concludes by providing recommendations, including
the inclusion of “local” explainability guidelines in the EU AI proposal, the adoption of a user-centric
design methodology, and the harmonisation of explainable AI requirements across different EU
legislation and case law.
Overall, this report delves into the framework and use cases surrounding explainability in AI-based
systems, emphasising the need for “local” and “global” explanations that are tailored to the users of
AI-based systems and their tasks.
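To make the local/global distinction concrete, below is a minimal, purely illustrative sketch; it is not drawn from the report, and the toy linear model, weights, and feature names are assumptions chosen to echo the educational recommender use case. A “global” explanation summarises how the system weighs features across all inputs, while a “local” explanation accounts for one specific output in its operational context.

```python
# Illustrative sketch only: contrasting a "global" explanation (system-wide
# behaviour) with a "local" explanation (one specific output in context).
# The model, weights, and feature names are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

feature_names = ["prior_grade", "time_on_task", "quiz_score"]
w = np.array([0.2, 0.5, 0.3])      # assumed learned weights of a toy linear model
X = rng.random((100, 3))           # historical inputs

# "Global" explanation: average absolute contribution of each feature
# across all inputs, describing the system as a whole.
global_importance = np.mean(np.abs(X * w), axis=0)
print("Global view:", dict(zip(feature_names, global_importance.round(3))))

# "Local" explanation: per-feature contribution for one specific input,
# i.e. why this particular recommendation/score was produced.
x = X[0]
local_contributions = x * w
print("Local view for one learner:", dict(zip(feature_names, local_contributions.round(3))))
print("Predicted score:", float(x @ w))
```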
and "global" explanations. “Local” explanations refer to specific algorithmic outputs in their
operational context, while “global” explanations encompass the system as a whole. The need to tailor
explanations to users and tasks is emphasised, acknowledging that explanations are not universal
solutions and can have unintended consequences. Two use cases illustrate the application of
explainability techniques: an educational recommender system, and explainable AI for scientific
discoveries. The report discusses the subjective nature of meaningfulness in explanations and
proposes cognitive metrics for its evaluation. It concludes by providing recommendations, including
the inclusion of “local” explainability guidelines in the EU AI proposal, the adoption of a user-centric
design methodology, and the harmonisation of explainable AI requirements across different EU
legislation and case law.
Overall, this report delves into the framework and use cases surrounding explainability in AI-based
systems, emphasising the need for “local” and “global” explanations, and ensuring they are tailored
toward users of AI-based systems and their tasks.
Original language | English |
---|---|
Publisher | CERRE |
Number of pages | 44 |
Publication status | Published - 10 Jul 2023 |
Fingerprint
Explore the research topics of “Meaningful XAI Based on User-Centric Design Methodology”. Together, they form a unique fingerprint.
Projects
- 1 Active
- ARIAC by DigitalWallonia4.AI: Applications et Recherche pour une Intelligence Artificielle de Confiance (TRAIL-Foundations)
Frénay, B. (Project Lead), Jacquet, J.-M. (Co-PI) & Dumas, B. (Co-PI)
1/01/21 → 31/12/26
Project: Research