The impact of decision-making explanation on user privacy calculus

  • Nicolas Wuyts

Student thesis: Master in Business Engineering, Professional focus in Data Science

Abstract

Consumer data has become a goldmine for companies. It is valuable because artificial intelligence (AI) systems can extract highly accurate predictions from it, enabling companies to offer personalised services to their consumers. The inner workings of these systems are not easily understood, which raises privacy concerns for consumers. To address this problem, techniques for explaining how AI systems reach their decisions have been developed, collectively known as explainable AI (XAI). This thesis investigates the impact of such explanations on the privacy calculus: the process by which a consumer forms an intention to disclose data by weighing the risks and benefits of a given situation. I analyse the impact of explanations through the effect of overall AI transparency on users' overall trust in AI, and the effect of that trust on the perceived risks in the privacy calculus. To carry out this analysis, I conducted a quantitative study of 138 survey responses. The results show that while overall AI transparency does affect overall trust in AI, that trust has no significant impact on perceived risks. I did, however, find a significant influence of perceived data sensitivity on both overall trust in AI and perceived risks. I also found an influence of AI knowledge and AI self-efficacy on overall trust in AI, and confirmed the independence of overall trust in AI and privacy concerns.
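The two hypothesised paths (transparency → trust, trust → perceived risks) lend themselves to a simple regression check. The abstract does not specify the statistical method used, so the Python sketch below is only an illustration of how such paths could be tested with ordinary least squares in statsmodels; the variable names, simulated data, and coefficients are assumptions, not the thesis's actual measures or results.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Placeholder data: 138 respondents, Likert-style composite scores.
    # All column names and effect sizes here are illustrative assumptions.
    rng = np.random.default_rng(0)
    n = 138
    df = pd.DataFrame({
        "transparency": rng.integers(1, 8, n),  # perceived overall AI transparency (1-7)
        "sensitivity": rng.integers(1, 8, n),   # perceived data sensitivity (1-7)
    })
    df["trust"] = 0.5 * df["transparency"] - 0.3 * df["sensitivity"] + rng.normal(0, 1, n)
    df["perceived_risk"] = 0.4 * df["sensitivity"] + rng.normal(0, 1, n)

    # Path 1: does transparency predict overall trust in AI?
    print(smf.ols("trust ~ transparency + sensitivity", data=df).fit().summary())

    # Path 2: does trust predict perceived risks? (The path the thesis
    # found to be non-significant, while sensitivity was significant.)
    print(smf.ols("perceived_risk ~ trust + sensitivity", data=df).fit().summary())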
Date of Award: 29 Nov 2021
Original language: English
Awarding Institution
  • University of Namur
Supervisor: Wafa Hammedi

Keywords

  • Privacy calculus
  • AI
  • explainable AI
  • XAI
  • transparency
  • sensitivity
  • trust
  • data
