From Black Box to Logic: Explaining Neural Networks with xDNN(ASP)

Deep neural networks are powerful, but often inscrutable. xDNN(ASP), a new method by Ly Ly Trieu and Tran Cao Son, turns a trained network into a human-readable set of logical rules using Answer Set Programming (a logic-based AI method).

Unlike many explainability tools that only highlight which inputs mattered for one prediction, xDNN(ASP) builds a global explanation: a logic program whose answers correspond to the network’s input–output behavior.

  • See which features consistently drive decisions.
  • Understand how hidden nodes influence outcomes.
  • Keep prediction accuracy high while revealing the model’s structure.
  • Use those insights to prune unnecessary hidden nodes and simplify the network.
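To make the idea of a global, rule-based explanation concrete, here is a minimal illustrative sketch (not the paper's actual algorithm, and the weights are invented): a tiny thresholded neuron is enumerated over all boolean inputs, a propositional rule is read off its truth table, and a fidelity check confirms the rule reproduces the neuron's input–output behavior on every input:

```python
# Illustrative sketch only: extract a propositional rule from a tiny
# thresholded neuron and verify it matches the neuron everywhere.
from itertools import product

# Hypothetical "trained" neuron: weights and bias are made up here.
w = [2.0, -1.5]
bias = -1.0

def neuron(x1, x2):
    """Binary neuron: fires (1) when the weighted sum exceeds zero."""
    return 1 if w[0] * x1 + w[1] * x2 + bias > 0 else 0

# Enumerate all boolean inputs on which the neuron fires; in an ASP
# encoding, answer sets would play an analogous role.
fires = [(x1, x2) for x1, x2 in product([0, 1], repeat=2) if neuron(x1, x2)]
print(fires)

# Human-readable rule read off the table, e.g. fire :- x1, not x2.
def rule(x1, x2):
    return 1 if x1 == 1 and x2 == 0 else 0

# Fidelity check: the extracted rule agrees with the neuron on all inputs.
assert all(rule(a, c) == neuron(a, c) for a, c in product([0, 1], repeat=2))
```

A real xDNN(ASP) pipeline operates on whole trained networks and produces an ASP program rather than a Python predicate, but the fidelity check above mirrors the key idea: the logic program's answers should correspond to the network's predictions.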

In tests on synthetic datasets, the extracted rules closely match the original model’s predictions and provide actionable guidance for optimization.

Paper: xDNN(ASP): Explanation Generation System for Deep Neural Networks powered by Answer Set Programming — https://arxiv.org/abs/2601.03847v1

Register: https://www.AiFeta.com

#AI #ExplainableAI #xAI #DeepLearning #MachineLearning #LogicProgramming #NeuralNetworks