Trust in Artificial Intelligence: Beyond Interpretability

Tassadit Bouadi, Benoît Frénay, Luis Galárraga, Pierre Geurts, Barbara Hammer, Gilles Perrouin

Research output: Contribution in Book/Catalog/Report/Conference proceeding › Conference contribution

Abstract

As artificial intelligence (AI) systems become increasingly integrated into everyday life, the need for trustworthiness in these systems has emerged as a critical challenge. This tutorial paper addresses the complexity of building trust in AI systems by exploring recent advances in explainable AI (XAI) and related areas that go beyond mere interpretability. After reviewing recent trends in XAI, we discuss how to control AI systems, align them with societal concerns, and address the robustness, reproducibility, and evaluation concerns inherent in these systems. This review highlights the multifaceted nature of the mechanisms for building trust in AI, and we hope it will pave the way for further research in this area.
Original language: English
Title of host publication: ESANN
Pages: 257-266
ISBN (Electronic): 978-2-87587-090-2
DOIs
Publication status: Published - 2024
