Adversarial images fall apart when you cover them up
Neural networks can be fooled by adversarial images—but here’s a twist: those doctored images are even more fragile than real ones, especially when parts of them are hidden. Researchers tested nine popular attacks (including FGSM and PGD) on CIFAR-10, sliding a small mask across each image while watching how the classifier’s prediction changed.
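The sliding-mask test described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the researchers' actual code: `occlusion_sweep` is a hypothetical helper that generates masked copies of an image; in the real experiment each copy would be fed to the classifier, and adversarial images would show far more prediction flips than clean ones.

```python
import numpy as np

def occlusion_sweep(image, mask_size=4, stride=4, fill=0.0):
    """Yield copies of `image` (H, W, C) with a square patch blanked out,
    sliding the patch across the image.

    Hypothetical helper for illustration — parameter choices (mask size,
    stride, fill value) are assumptions, not taken from the study.
    """
    h, w = image.shape[:2]
    for y in range(0, h - mask_size + 1, stride):
        for x in range(0, w - mask_size + 1, stride):
            masked = image.copy()
            # Blank out one square patch at position (y, x)
            masked[y:y + mask_size, x:x + mask_size] = fill
            yield (y, x), masked
```

In use, one would run every masked copy through the trained model and count how often the predicted label differs from the label on the unmasked image; per the finding above, that flip count tends to be noticeably higher for adversarially perturbed inputs.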