Meet “Modules of Influence”: groups of features that drive AI decisions

Most explanation methods, such as SHAP or LIME, tell you which individual features mattered for a single prediction. This paper goes a step further: it builds an explanation graph across many predictions and applies community detection to reveal groups of features that act together.

  • See which features consistently team up to push outcomes.
  • Debug faster with module-level ablations (turn groups off/on, not just one feature).
  • Localize bias exposure to specific modules instead of blaming the whole model.
  • Check redundancy and causality patterns with new stability and synergy metrics.

Across synthetic and real datasets, the approach uncovers correlated feature clusters and makes model behavior easier to reason about at the level where decisions actually happen: groups of features, not isolated inputs. Code and benchmarks for module discovery are included.
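As a rough illustration (not the authors' code), here is one way such a pipeline might look: treat per-prediction attributions (e.g., SHAP values) as the columns of a matrix, connect features whose attributions co-vary, and run community detection on the resulting graph. The placeholder attribution matrix, the correlation threshold, and the networkx-based community step are all assumptions for this sketch.

```python
# Minimal sketch: find candidate "modules of influence" from per-prediction
# feature attributions. Illustrative only; not the paper's implementation.
import numpy as np
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

rng = np.random.default_rng(0)
n_samples, n_features = 500, 12

# Placeholder attribution matrix (n_samples x n_features); in practice this
# would come from an explainer such as SHAP. Two correlated groups are
# injected so the example has structure to find.
attributions = rng.normal(size=(n_samples, n_features))
attributions[:, :4] += rng.normal(size=(n_samples, 1))   # correlated group 1
attributions[:, 4:8] += rng.normal(size=(n_samples, 1))  # correlated group 2

# Edge weight = |correlation| between two features' attribution patterns.
corr = np.abs(np.corrcoef(attributions.T))

# Build the explanation graph, keeping only sufficiently strong co-attributions.
threshold = 0.3  # assumed cutoff for drawing an edge
G = nx.Graph()
G.add_nodes_from(range(n_features))
for i in range(n_features):
    for j in range(i + 1, n_features):
        if corr[i, j] >= threshold:
            G.add_edge(i, j, weight=corr[i, j])

# Communities in this graph are candidate modules of influence.
modules = greedy_modularity_communities(G, weight="weight")
for k, module in enumerate(modules):
    print(f"module {k}: features {sorted(module)}")
```

Modules found this way can then be treated as a unit, for example by masking all of a module's features at once and measuring the change in predictions, rather than ablating one feature at a time.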

Paper by Ehsan Moradi: http://arxiv.org/abs/2510.27655v1

Register: https://www.AiFeta.com

#ExplainableAI #XAI #MachineLearning #AITransparency #ModelInterpretability #SHAP #LIME #Graphs #CommunityDetection #ResponsibleAI