Test-Time Defense Against Adversarial Attacks via Stochastic Resonance of Latent Ensembles
Quick take: A training-free, test-time defense that combats adversarial noise with deliberate noise, ensembling features from slightly translated copies of the input via stochastic resonance.
We propose a test-time defense mechanism against adversarial attacks: imperceptible image perturbations that significantly alter the predictions of a model. Unlike existing methods that rely on feature filtering or smoothing, which can lead to information loss, we propose to "combat noise with noise" by leveraging stochastic resonance to enhance robustness while minimizing information loss.
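To build intuition for the stochastic-resonance idea, here is a toy demo (our illustration, not from the paper): a weak sub-threshold signal becomes recoverable through a hard threshold once a moderate amount of noise is added, while too little or too much noise destroys it. The signal amplitude, threshold, and noise levels are arbitrary choices for the demo.

```python
# Toy stochastic resonance demo (illustrative, not the paper's method):
# a sub-threshold sine wave only survives a hard threshold detector when
# a moderate amount of noise is injected, and averaging recovers it.
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 1000)
signal = 0.4 * np.sin(2 * np.pi * 5 * t)  # peak 0.4, below threshold 1.0
threshold = 1.0

def detector_corr(noise_std: float, trials: int = 200) -> float:
    """Correlation between the signal and the trial-averaged thresholded output."""
    out = np.zeros_like(t)
    for _ in range(trials):
        out += (signal + rng.normal(0, noise_std, t.shape) > threshold)
    out /= trials
    if out.std() == 0:           # threshold never fired: no information
        return 0.0
    return float(np.corrcoef(signal, out)[0, 1])

for std in (0.05, 0.5, 20.0):
    print(f"noise std {std:>5}: corr = {detector_corr(std):.3f}")
# Too little noise: the detector never fires; too much: firing is nearly
# signal-independent. An intermediate noise level maximizes correlation.
```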
Our approach introduces small translational perturbations to the input image, aligns the transformed feature embeddings, and aggregates them before mapping back to the reference frame of the original image. The defense reduces to a closed-form expression and can be deployed on diverse existing network architectures without introducing additional network modules or fine-tuning for specific attack types. The resulting method is entirely training-free, architecture-agnostic, and attack-agnostic.
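Based on the description above, a minimal PyTorch sketch of what such a translate-align-aggregate ensemble might look like. This is our reading, not the paper's implementation: `FeatureNet`, the backbone stride, and the shift set are illustrative assumptions, and shifts are restricted to multiples of the stride so that undoing them in feature space is an exact integer roll.

```python
# Minimal sketch (assumed, not the paper's exact formulation): shift the
# input by small integer translations, extract convolutional feature maps,
# shift the maps back into the reference frame, average, then classify.
import torch
import torch.nn as nn

class FeatureNet(nn.Module):
    """Toy, approximately translation-equivariant backbone + linear head."""
    def __init__(self, num_classes: int = 10, stride: int = 4):
        super().__init__()
        self.stride = stride  # total spatial downsampling of the backbone
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                  nn.Linear(64, num_classes))

    def features(self, x): return self.backbone(x)
    def classify(self, f): return self.head(f)

@torch.no_grad()
def sr_defended_logits(model: FeatureNet, x: torch.Tensor,
                       max_shift_cells: int = 1) -> torch.Tensor:
    """Average feature maps over a small ensemble of translated inputs.

    torch.roll wraps around at the borders; for small shifts this edge
    effect is minor -- an assumption of this toy sketch.
    """
    s = model.stride
    shifts = [(dy, dx)
              for dy in range(-max_shift_cells, max_shift_cells + 1)
              for dx in range(-max_shift_cells, max_shift_cells + 1)]
    feats = []
    for dy, dx in shifts:
        x_shift = torch.roll(x, shifts=(dy * s, dx * s), dims=(-2, -1))
        f = model.features(x_shift)
        # Map the feature embedding back to the original reference frame.
        feats.append(torch.roll(f, shifts=(-dy, -dx), dims=(-2, -1)))
    f_mean = torch.stack(feats, dim=0).mean(dim=0)
    return model.classify(f_mean)

if __name__ == "__main__":
    model = FeatureNet().eval()
    x = torch.randn(1, 3, 32, 32)  # stand-in for a (possibly attacked) image
    print(sr_defended_logits(model, x).shape)  # -> torch.Size([1, 10])
```

Because everything happens at inference time on top of an unmodified feature extractor, a pretrained backbone could be dropped in for `FeatureNet` without retraining, which is the appeal of a test-time defense.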
Why it matters: Adversarial perturbations can silently flip the predictions of deployed vision systems; a training-free, attack-agnostic defense that bolts onto existing models makes such systems easier to harden in practice.
What do you think? Share a thought or tag a friend 👇
Paper: http://arxiv.org/abs/2510.03224v1
Register: https://www.AiFeta.com