Decision trees are machine learning models that are easy to interpret and understand visually. This makes them well suited to sensitive domains such as medical diagnosis, where decisions must be explainable. On complex problems, however, decision trees can grow large and become hard to grasp. Moreover, when learning decision trees, it may be necessary to enforce a broader class of constraints, such as requiring that two given variables never appear together in a single branch of the tree. This motivates the need to enforce constraints in decision tree learning algorithms. We propose a survey of works that address the problem of learning decision trees under constraints. Our contributions are fourfold. First, to the best of our knowledge, this is the first survey dedicated to constraints on decision trees. Second, we define a flexible taxonomy of the constraints applied to decision trees and of the methods used to handle them in the literature. Third, we benchmark state-of-the-art depth-constrained decision tree learners with respect to predictive accuracy and computational time. Fourth, we discuss future research directions of interest to researchers who wish to work in this field.
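To illustrate the simplest constraint mentioned above, a bound on tree depth, the following sketch fits a depth-constrained tree with scikit-learn's greedy `DecisionTreeClassifier` and its `max_depth` parameter. This is an assumption for illustration only: the learners benchmarked in the survey are dedicated depth-constrained methods, not this heuristic, and the dataset choice is arbitrary.

```python
# Minimal sketch of a depth-constrained decision tree, assuming
# scikit-learn's greedy CART learner. The survey benchmarks dedicated
# depth-constrained learners; this example only shows the constraint.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Enforce the depth constraint: no branch may contain more than 3 tests.
clf = DecisionTreeClassifier(max_depth=3, random_state=0)
clf.fit(X_tr, y_tr)

print(clf.get_depth())  # at most 3 by construction
print(clf.score(X_te, y_te))
```

A shallow tree like this stays small enough to inspect by hand, which is exactly the interpretability motivation given above; the trade-off against predictive accuracy is what the benchmark in the survey quantifies.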

Original language: English
Article number: 201
Number of pages: 34
Journal: ACM Computing Surveys
Issue number: 10
Publication status: Published - 14 Sept 2022


  • Decision trees
  • constraints
  • domain knowledge
  • explainability
  • fairness
  • interpretability
  • privacy


Research topics of 'Constraint Enforcement on Decision Trees: a Survey'.
