Test-time defense: fighting adversarial noise with noise

Counterintuitive, but it works: add tiny shifts to boost robustness.

This training-free, architecture-agnostic method uses stochastic resonance: apply small image translations, align features, aggregate, and map back—no extra modules, no attack-specific tuning. It recovers up to 68.1% of accuracy loss in classification, 71.9% in stereo, and 29.2% in optical flow under diverse attacks.
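The translate–infer–align–aggregate loop can be sketched in a few lines. This is an illustrative reconstruction, not the paper's code: `model` stands in for any pretrained network with a spatially dense output, the shift count and magnitude are arbitrary, and `np.roll` wrap-around is a simplification of proper edge handling.

```python
import numpy as np

def shift(img, dx, dy):
    # Integer pixel translation; np.roll's wrap-around is a stand-in
    # for proper padding at the image border.
    return np.roll(np.roll(img, dy, axis=0), dx, axis=1)

def sr_defense(model, img, max_shift=2, n=8, seed=0):
    """Stochastic-resonance-style test-time aggregation (sketch):
    translate the input, run the model, undo the translation on the
    dense output, then average the aligned predictions."""
    rng = np.random.default_rng(seed)
    outs = []
    for _ in range(n):
        dx, dy = rng.integers(-max_shift, max_shift + 1, size=2)
        out = model(shift(img, dx, dy))      # inference on shifted copy
        outs.append(shift(out, -dx, -dy))    # map back to original frame
    return np.mean(outs, axis=0)             # aggregate aligned outputs
```

Because each perturbed copy is mapped back before averaging, fine detail is preserved rather than blurred away, which is what distinguishes this from input smoothing.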

Why it matters: Instead of smoothing away information, this approach preserves detail while resisting adversarial perturbations—and it extends test-time adversarial defense to dense prediction tasks (stereo, optical flow) for the first time.

It’s like stabilizing a shaky video by layering several nudges until the truth comes into focus. 🎛️🛡️🖼️

See the closed-form formulation and results, then consider where test-time defenses fit your stack.

Paper: http://arxiv.org/abs/2510.03224v1

Register: https://www.AiFeta.com

#AdversarialML #RobustAI #ComputerVision #Security #MachineLearning #Defense #OpticalFlow #Stereo