ExplainableAI
Meet Modules of Influence: finding feature groups in AI decisions
Most explainers like SHAP or LIME tell you which single features mattered for one prediction. This paper goes a step further: it builds an explanation graph across many predictions and uses community detection to reveal feature groups that act together.
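
To make the idea concrete, here is a minimal sketch, not the paper's actual code: it assumes you already have a matrix of per-prediction attributions (e.g., SHAP values), links features whose attributions co-vary across predictions, and runs off-the-shelf community detection to surface the groups. The function name `modules_of_influence`, the correlation-threshold edge rule, and the toy data are illustrative assumptions.

```python
# Illustrative sketch only: one plausible way to build an explanation graph
# and find feature modules. Not the paper's exact pipeline.
import numpy as np
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

def modules_of_influence(attributions, feature_names, threshold=0.5):
    """attributions: (n_predictions, n_features) array of attribution scores
    (e.g., SHAP values), one row per explained prediction."""
    # Correlate each pair of features' attribution patterns across predictions.
    corr = np.corrcoef(attributions.T)          # (n_features, n_features)
    G = nx.Graph()
    G.add_nodes_from(feature_names)
    n = len(feature_names)
    for i in range(n):
        for j in range(i + 1, n):
            w = abs(corr[i, j])
            if w >= threshold:                   # keep only strongly co-influential pairs
                G.add_edge(feature_names[i], feature_names[j], weight=w)
    # Community detection groups features that tend to act together.
    return [set(c) for c in greedy_modularity_communities(G, weight="weight")]

# Toy usage: 6 features over 200 predictions, built so f0-f2 and f3-f5 co-vary.
rng = np.random.default_rng(0)
base = rng.normal(size=(200, 2))
attrs = np.hstack([base[:, [0]]] * 3 + [base[:, [1]]] * 3) + 0.1 * rng.normal(size=(200, 6))
print(modules_of_influence(attrs, [f"f{i}" for i in range(6)]))
# -> two modules, roughly {f0, f1, f2} and {f3, f4, f5}
```

The graph construction (correlation of attributions) is just one choice; any measure of "these features are influential on the same predictions" would do, and the community-detection step is what turns the graph into interpretable modules.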