Abstract

Deep learning and other black-box models are becoming increasingly popular. Despite their high performance, they may not be accepted ethically or legally because of their lack of explainability. This paper presents the growing number of legal requirements on machine-learning model interpretability and explainability in the context of private and public decision making. It then explains how those legal requirements can be implemented into machine-learning models and concludes with a call for more interdisciplinary research on explainability.
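
The abstract does not specify a particular implementation, but a minimal sketch of one common way explainability requirements are operationalised in practice is to pair a black-box classifier with a post-hoc, feature-level explanation. The example below uses scikit-learn's permutation importance on an illustrative dataset; the dataset, model, and reporting format are assumptions for illustration, not the method proposed in the paper.

```python
# Illustrative sketch (not the paper's method): a black-box model plus a
# post-hoc, feature-level explanation that could support documentation
# reviewed by a decision subject or regulator.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Assumed example dataset and split.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Black-box model: accurate but not directly interpretable.
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Post-hoc explanation: how much each feature contributes to held-out performance.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

# Report the five most influential features as a simple, human-readable summary.
for idx in result.importances_mean.argsort()[::-1][:5]:
    print(f"{X.columns[idx]}: {result.importances_mean[idx]:.3f}")
```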

Original language: English
Pages (from-to): 149-169
Number of pages: 21
Journal: Artificial Intelligence & Law
Volume: 29
Issue number: 2
DOIs
Publication status: Published - Jun 2021

Keywords

  • Explainability
  • Interpretability
  • Law
  • Machine learning
