Abstract
This report explores the concept of explainability in AI-based systems, distinguishing between “local”
and “global” explanations. “Local” explanations refer to specific algorithmic outputs in their
operational context, while “global” explanations encompass the system as a whole. The need to tailor
explanations to users and tasks is emphasised, acknowledging that explanations are not universal
solutions and can have unintended consequences. Two use cases illustrate the application of
explainability techniques: an educational recommender system, and explainable AI for scientific
discoveries. The report discusses the subjective nature of meaningfulness in explanations and
proposes cognitive metrics for its evaluation. It concludes by providing recommendations, including
the inclusion of “local” explainability guidelines in the EU AI proposal, the adoption of a user-centric
design methodology, and the harmonisation of explainable AI requirements across different EU
legislation and case law.
Overall, this report delves into the framework and use cases surrounding explainability in AI-based
systems, emphasising the need for “local” and “global” explanations, and ensuring they are tailored
toward users of AI-based systems and their tasks.
| Original language | English |
|---|---|
| Publisher | CERRE |
| Number of pages | 44 |
| Publication status | Published - 10 Jul 2023 |
Fingerprint
Research topics of 'Meaningful XAI Based on User-Centric Design Methodology'.
Projects
-
ARIAC by DigitalWallonia4.AI: Applications and Research for Trusted Artificial Intelligence (TRAIL-Foundations)
Frénay, B. (PI), Jacquet, J.-M. (CoPI) & Dumas, B. (CoPI)
1/01/21 → 31/12/26
Project: Research