Constraint Enforcement on Decision Trees and its Application to Global Explainability

Student thesis: Doctor of Sciences

Abstract

Machine learning plays an increasingly important role in our society, even in high-stakes, highly regulated domains such as finance and health. In such domains, automatic decisions produced by machine learning models may be expected to comply with guidelines. However, as a research field, machine learning has focused much more on pushing the performance of existing models or on designing more powerful ones.
As a result, less attention has been paid to enforcing such guidelines or to incorporating domain knowledge that prevents undesirable behaviour.
This thesis first focuses on constraint enforcement for decision trees, one of the simplest, most popular classes of nonlinear machine learning models. It then uses constraint enforcement to improve the explainability of differentiable black-box models through decision rules. The thesis makes three contributions to machine learning research.
First, the thesis thoroughly investigates and analyses the techniques available in the literature for imposing constraints on decision trees. In particular, it proposes a taxonomy of constraints and a taxonomy of approaches, viewed through the lens of optimisation tools. The taxonomy of constraints distinguishes structure-level constraints (e.g., size, depth), feature-level constraints (e.g., monotonicity, fairness) and instance-level constraints (e.g., robustness). The taxonomy of methods covers top-down greedy, safe enumeration, LP/SAT/CP and probabilistic (including Bayesian) approaches.
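The taxonomy itself is tool-agnostic; purely as a minimal illustration of what structure-level and feature-level constraints look like in practice (and not as the thesis's own tooling), the sketch below imposes a depth bound and a monotonicity constraint with scikit-learn, assuming version 1.4 or later for `monotonic_cst`.

```python
# Minimal illustration (not the thesis's tooling): a structure-level constraint
# (bounded depth) and a feature-level constraint (monotonicity) on a decision tree.
# Assumes scikit-learn >= 1.4, which introduced `monotonic_cst`.
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=4, random_state=0)

clf = DecisionTreeClassifier(
    max_depth=3,                 # structure-level constraint: tree depth at most 3
    monotonic_cst=[1, 0, 0, 0],  # feature-level constraint: prediction non-decreasing in feature 0
    random_state=0,
)
clf.fit(X, y)
print(clf.get_depth())           # <= 3 by construction
```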
Second, still considering decision trees, the thesis introduces two techniques to enforce constraints on them. The first technique leverages soft constraint enforcement: it introduces a method called BDT, which integrates boundary-based fairness constraints to learn fairer decision trees. The second technique, called CPTree, leverages hard constraint enforcement: it introduces a framework based on mixed-integer programming and constraint programming (MIP/CP) to learn decision trees under domain-knowledge constraints.
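As a hedged illustration of the soft, penalty-based flavour of constraint enforcement (and not of the BDT method itself), a split criterion can trade impurity reduction against a fairness penalty such as the demographic parity gap between two protected groups; every function name and the weight `lam` below are hypothetical.

```python
# Illustrative sketch of soft constraint enforcement during tree induction:
# a candidate split is scored by impurity reduction minus a fairness penalty.
# This is NOT the thesis's BDT method; names and the weight `lam` are hypothetical.
import numpy as np

def gini(y):
    """Gini impurity of a binary label vector."""
    if len(y) == 0:
        return 0.0
    p = np.bincount(y, minlength=2) / len(y)
    return 1.0 - float(np.sum(p ** 2))

def demographic_parity_gap(y_pred, group):
    """|P(pred = 1 | group = 0) - P(pred = 1 | group = 1)|."""
    rate = lambda g: y_pred[group == g].mean() if np.any(group == g) else 0.0
    return abs(rate(0) - rate(1))

def soft_constrained_split_score(y, group, left_mask, lam=0.5):
    """Impurity reduction of a split, penalised by the fairness gap it induces."""
    n = len(y)
    left, right = y[left_mask], y[~left_mask]
    gain = gini(y) - len(left) / n * gini(left) - len(right) / n * gini(right)
    # Each child predicts its majority class; measure the resulting group disparity.
    pred_left = int(left.mean() >= 0.5) if len(left) else 0
    pred_right = int(right.mean() >= 0.5) if len(right) else 0
    y_pred = np.where(left_mask, pred_left, pred_right)
    return gain - lam * demographic_parity_gap(y_pred, group)

# Toy usage: score one candidate split.
y = np.array([0, 1, 1, 0, 1, 1])
group = np.array([0, 0, 1, 1, 0, 1])
left_mask = np.array([True, True, True, False, False, False])
print(soft_constrained_split_score(y, group, left_mask, lam=0.5))
```

In the hard-constraint setting exemplified by CPTree, such requirements would instead be encoded as MIP/CP constraints that any feasible tree must satisfy exactly, rather than being traded off inside the objective.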
Third, considering differentiable black-box models such as multi-layer perceptrons (MLPs), the thesis introduces a statistical framework in which these models can be implicitly and softly constrained so that they are easily explainable by decision rules.
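The thesis's framework constrains the MLP during training so that it becomes inherently easy to explain; as a simpler point of comparison only, the sketch below shows the standard post-hoc alternative, where a shallow decision tree is fitted to an MLP's predictions and its paths are read as global decision rules. All names and parameters are illustrative, not the thesis's method.

```python
# Illustrative contrast: post-hoc global explanation of an MLP by decision rules.
# The thesis instead softly constrains the MLP during training; this sketch only
# shows the usual surrogate baseline against which such approaches are compared.
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=1000, n_features=5, random_state=0)

# Black-box model.
mlp = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=500, random_state=0).fit(X, y)

# Global surrogate: a shallow tree fitted to the MLP's own predictions.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, mlp.predict(X))

print(export_text(surrogate))                  # human-readable decision rules
fidelity = (surrogate.predict(X) == mlp.predict(X)).mean()
print(f"fidelity to the MLP: {fidelity:.2f}")  # how faithfully the rules mimic the MLP
```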
Date of award: 24 Nov 2022
Original language: English
Awarding institution
  • Universite de Namur
Sponsors: FSR-FNRS
Supervisors: Benoît Frénay (Supervisor), Paul Temple (Co-supervisor), Wim Vanhoof (President), Michel Verleysen (Jury), Gilles Perrouin (Jury) & Hendrik Blockeel (Jury)
