Explaining latent-based models for link prediction in knowledge graphs

  • Guillaume Latour

Student thesis: Master's thesis, Master in Computer Science with a specialized focus in Data Science

Abstract

ML models are increasingly criticised for their lack of interpretability. One must be able to understand the decision process that led a model to refuse a mortgage, diagnose a disease, or provide legal advice.

The ability to provide an explanation for a prediction is crucial and has been in the spotlight for some time now.

Link prediction is an interesting task in the knowledge graph realm due to its various applications, e.g. user recommendation, fact-checking, etc.
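
To make the task concrete, here is a minimal Python sketch on a toy graph; the entities (alice, bob, unamur) and relations (works_at, colleague_of) are invented for illustration and do not come from the thesis.

    # Toy knowledge graph: a set of (head, relation, tail) triples.
    # All entities and relations below are illustrative placeholders.
    kg = {
        ("alice", "works_at", "unamur"),
        ("bob", "works_at", "unamur"),
        ("alice", "colleague_of", "bob"),
    }

    entities = {h for h, _, _ in kg} | {t for _, _, t in kg}

    def candidate_tails(head, relation):
        """Entities that could complete the partial triple (head, relation, ?)."""
        return [t for t in entities if t != head and (head, relation, t) not in kg]

    # A link-prediction model must rank these candidates by plausibility.
    print(candidate_tails("bob", "colleague_of"))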

As far as we know, the methods providing the best results for link prediction are based on embeddings, and are therefore not intrinsically comprehensible to a human.
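
To illustrate why such models are hard to interpret, the sketch below scores triples in the style of TransE (Bordes et al., 2013), one classic embedding model in which a triple (h, r, t) is plausible when the vector h + r lies close to t; the random vectors stand in for trained embeddings, and the choice of TransE is an assumption made for illustration rather than the setup studied in the thesis.

    import numpy as np

    # TransE-style scoring: a triple (h, r, t) is plausible when
    # h + r is close to t in the embedding space. The embeddings
    # below are random placeholders standing in for trained vectors.
    rng = np.random.default_rng(0)
    dim = 8
    entity_emb = {e: rng.normal(size=dim) for e in ("alice", "bob", "unamur")}
    relation_emb = {r: rng.normal(size=dim) for r in ("works_at", "colleague_of")}

    def score(head, relation, tail):
        """Higher (less negative) means more plausible."""
        return -np.linalg.norm(
            entity_emb[head] + relation_emb[relation] - entity_emb[tail]
        )

    # Rank all entities as tails for (bob, colleague_of, ?): the ranking
    # comes from dense vectors, with no human-readable justification.
    print(sorted(entity_emb, key=lambda t: score("bob", "colleague_of", t), reverse=True))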

This work proposes a post-hoc interpretability procedure based on rule mining that retrieves insights into the models' motivations for the predictions they provide.
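
As a rough sketch of the idea rather than the exact procedure developed in the thesis, the code below mines two-hop Horn rules of the form r1(x, z) ∧ r2(z, y) ⇒ r(x, y) from the same toy graph, in the spirit of miners such as AMIE, and scores each rule by its confidence on the graph; a high-confidence rule whose head matches a predicted triple can then be read as a human-understandable explanation.

    from itertools import product

    # Same illustrative toy graph as above.
    kg = {
        ("alice", "works_at", "unamur"),
        ("bob", "works_at", "unamur"),
        ("alice", "colleague_of", "bob"),
    }
    relations = {r for _, r, _ in kg}

    def rule_confidence(r1, r2, r_head):
        """Confidence of r1(x,z) ∧ r2(z,y) ⇒ r_head(x,y) on the graph."""
        body = [
            (x, y)
            for (x, ra, z1) in kg
            for (z2, rb, y) in kg
            if ra == r1 and rb == r2 and z1 == z2 and x != y
        ]
        if not body:
            return 0.0
        support = sum((x, r_head, y) in kg for (x, y) in body)
        return support / len(body)

    # Mine every two-hop rule and keep the ones with positive confidence.
    for r1, r2, rh in product(relations, repeat=3):
        c = rule_confidence(r1, r2, rh)
        if c > 0:
            print(f"{r1}(x,z) ∧ {r2}(z,y) ⇒ {rh}(x,y)  confidence={c:.2f}")

On this toy graph the miner recovers the rule colleague_of(x, z) ∧ works_at(z, y) ⇒ works_at(x, y), which would explain a predicted works_at fact by pointing to a colleague's workplace.
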
Date of award: 1 Sept 2021
Original language: English
Awarding institution
  • Université de Namur
Supervisor: Benoît Frénay (Promoter)
