Spotting AI-Made Images Without Constant Retraining
Generative models evolve quickly, so deepfake detectors trained on yesterday's generators go stale. This study introduces a two-stage method that adapts to new image generators with minimal data and no heavy retraining.
- Stage 1: A vision model learns subtle “fingerprints” via supervised contrastive learning, trained while holding out some generator families to force cross-model generalization.
- Stage 2: A lightweight k-NN classifier is fit in a few-shot setup using about 150 images per class from a new generator—no heavy retraining.
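The Stage 2 idea can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: it assumes a frozen Stage-1 encoder has already mapped images to embedding vectors (simulated here as clustered random points), and fits a cosine-distance k-NN head on ~150 support embeddings per class.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)

# Hypothetical 128-d embeddings from a frozen contrastive encoder (Stage 1).
# Each class forms a cluster; 150 support images per class, as in the paper's
# few-shot setup. Centers and noise scale are illustrative assumptions.
def embed(center, n=150, dim=128):
    return center + 0.1 * rng.standard_normal((n, dim))

real_center = rng.standard_normal(128)
fake_center = rng.standard_normal(128)

X = np.vstack([embed(real_center), embed(fake_center)])
y = np.array([0] * 150 + [1] * 150)  # 0 = real, 1 = AI-generated

# Lightweight k-NN head: no gradient updates, just stored embeddings.
knn = KNeighborsClassifier(n_neighbors=5, metric="cosine")
knn.fit(X, y)

# Classify a query embedding drawn near the AI-generated cluster.
query = (fake_center + 0.1 * rng.standard_normal(128)).reshape(1, -1)
pred = int(knn.predict(query)[0])
```

Adapting to a newly released generator then amounts to embedding ~150 of its images and refitting the k-NN head, which takes seconds rather than a full retraining run.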
Results: 91.3% detection accuracy in this few-shot setting, a +5.2 percentage-point gain over prior methods. For source attribution in an open-set setting, it improves AUC by +14.70% and OSCR by +4.27%.
Why it matters: detectors can stay effective as new generators appear, using a small set of readily obtainable examples, keeping forensic tools robust and scalable.
Paper by Jaime Álvarez Urueña, David Camacho, Javier Huertas Tato: https://arxiv.org/abs/2511.16541
Register: https://www.AiFeta.com
#AI #DeepfakeDetection #GenerativeAI #ComputerVision #MachineLearning #TrustAndSafety #AIDetection #FewShotLearning