Teaching Decision Trees to Explain Themselves
Decision trees, and ensembles of them such as random forests and gradient boosting, deliver strong predictive performance, but their reasoning can be hard to follow. That opacity is risky in healthcare, finance, and other safety-critical settings.
This paper shows how to turn those predictions into clear, formal explanations using Answer Set Programming (ASP), a logic-based paradigm that can encode preferences and enumerate every valid explanation. The method produces four kinds of explanation (a toy sketch follows the list):
- Sufficient: Minimal conditions that guarantee the model’s decision.
- Contrastive: Small changes that would flip the decision.
- Majority: What most trees in an ensemble agree on.
- Tree-specific: Path-based reasons from particular trees.
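To make two of these concrete, here is a minimal, self-contained Python sketch. It is not the paper's implementation: the toy tree, feature names, and values are all invented for illustration. It computes a tree-specific path reason and a contrastive single-feature flip for a two-feature tree.

```python
# A minimal sketch (not the paper's method): a toy tree over two binary
# features, with brute-force versions of two explanation types from the
# list above. The tree, features, and values are invented.

# Toy tree as one (path-conditions -> class) rule per root-to-leaf path.
PATHS = [
    ({"income": "high", "debt": "low"}, "approve"),
    ({"income": "high", "debt": "high"}, "deny"),
    ({"income": "low"}, "deny"),
]

def predict(x):
    """Return the class of the first path whose conditions x satisfies."""
    for conds, label in PATHS:
        if all(x[f] == v for f, v in conds.items()):
            return label
    raise ValueError("no matching path")

def path_reason(x):
    """Tree-specific reason: the conditions on the path that x follows."""
    for conds, label in PATHS:
        if all(x[f] == v for f, v in conds.items()):
            return conds, label

def contrastive(x, domains):
    """Contrastive reason: a single-feature change that flips the decision."""
    base = predict(x)
    for f, values in domains.items():
        for v in values:
            if v != x[f] and predict({**x, f: v}) != base:
                return {f: v}
    return None  # no single-feature flip exists

x = {"income": "high", "debt": "low"}
domains = {"income": ["high", "low"], "debt": ["high", "low"]}
print(predict(x))               # approve
print(path_reason(x))           # ({'income': 'high', 'debt': 'low'}, 'approve')
print(contrastive(x, domains))  # {'income': 'low'}: this flip changes the decision
```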
Compared with other logic solvers, ASP offers flexible control over which explanations to prefer (for example, shorter or more actionable ones) and can enumerate multiple, equally valid reasons, which matters for audits and for giving users a choice; the sketch below shows what such preferences and enumeration look like in practice. The authors evaluate the method across diverse datasets, reporting where it works well and where it struggles, an honest step toward trustworthy, auditable AI.
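Here is a hedged sketch of the ASP side, assuming the clingo Python package (pip install clingo). The encoding, including the obs/pick/predict predicate names, is illustrative rather than the authors' program, but it shows the two selling points above: a #minimize preference for shorter explanations, and enumeration of all minimum-size answer sets.

```python
# Illustrative ASP encoding (not the paper's): guess a subset of the
# observed conditions, require the toy tree to still derive "approve"
# from that subset alone, minimize the subset's size, and enumerate
# all minimum-size answer sets.
import clingo

PROGRAM = """
% Instance to explain (toy facts).
obs(income, high).  obs(debt, low).

% Guess which observed conditions form the explanation.
{ pick(F, V) : obs(F, V) }.

% Toy tree path: "approve" needs both conditions.
predict(approve) :- pick(income, high), pick(debt, low).

% The picked conditions must still force the decision.
:- not predict(approve).

% Preference: shorter explanations first.
#minimize { 1, F, V : pick(F, V) }.

#show pick/2.
"""

ctl = clingo.Control(["--opt-mode=optN", "0"])  # replay ALL optimal models
ctl.add("base", [], PROGRAM)
ctl.ground([("base", [])])

def on_model(model):
    # Print only explanations whose minimality clingo has proven.
    if model.optimality_proven:
        print([str(atom) for atom in model.symbols(shown=True)])

ctl.solve(on_model=on_model)
```

Running this prints each proven-minimal explanation as a list of pick/2 atoms; with --opt-mode=optN and a model limit of 0, clingo replays every answer set matching the optimal cost, which is exactly the "multiple, equally valid reasons" behavior described above.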
Paper: https://arxiv.org/abs/2601.03845v1
Register: https://www.AiFeta.com
#AI #ExplainableAI #MachineLearning #DecisionTrees #RandomForest #GradientBoosting #Logic #AnswerSetProgramming #TrustworthyAI #Research