Fast, adaptable detection of AI-generated images

Can we quickly tell whether an image is AI-generated, and even which generator produced it, without retraining large models every month? This study says yes.

The authors propose a two-stage system:

  • Learn a visual "fingerprint" space. A vision model trained with supervised contrastive learning separates real vs. synthetic images, even for generator types it never saw.
  • Adapt in few shots. A simple k-nearest neighbors step uses as few as 150 images per class from a new generator to tune detection and attribution.
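The adaptation step above can be sketched in miniature. This is a hypothetical illustration, not the authors' code: it assumes images have already been embedded into the learned "fingerprint" space by a frozen contrastive encoder, and uses cosine-similarity k-nearest neighbors over a 150-images-per-class support set, as the paper describes. The toy clusters stand in for real embeddings.

```python
import numpy as np

def knn_predict(support_embs, support_labels, query_embs, k=5):
    """Classify query embeddings by majority vote over the k nearest
    support embeddings (cosine similarity)."""
    s = support_embs / np.linalg.norm(support_embs, axis=1, keepdims=True)
    q = query_embs / np.linalg.norm(query_embs, axis=1, keepdims=True)
    sims = q @ s.T                      # (n_query, n_support) similarities
    nn = np.argsort(-sims, axis=1)[:, :k]   # indices of k nearest neighbors
    votes = support_labels[nn]
    return np.array([np.bincount(v).argmax() for v in votes])

# Toy "fingerprint" space: two well-separated clusters stand in for
# real vs. synthetic embeddings from a frozen contrastive encoder.
rng = np.random.default_rng(0)
real = rng.normal(scale=0.1, size=(150, 32)) + 1.0   # label 0: real
fake = rng.normal(scale=0.1, size=(150, 32)) - 1.0   # label 1: synthetic
support = np.vstack([real, fake])
labels = np.array([0] * 150 + [1] * 150)

# Queries drawn near each cluster; kNN recovers their labels.
queries = np.vstack([rng.normal(scale=0.1, size=(5, 32)) + 1.0,
                     rng.normal(scale=0.1, size=(5, 32)) - 1.0])
preds = knn_predict(support, labels, queries)
```

Because no gradient updates are involved, "tuning" to a new generator amounts to embedding its 150 sample images and adding them to the support set, which is what makes the approach fast to adapt.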

Results: 91.3% average detection accuracy with just those few samples, about +5.2 percentage points over prior methods. For source attribution, it boosts AUC by 14.70% and OSCR by 4.27% in open-set tests.

Why it matters: detectors that generalize and adapt quickly are essential as new image generators roll out faster than retraining cycles. This framework aims to be robust, scalable, and practical for real-world media integrity checks.

Paper: https://arxiv.org/abs/2511.16541v1

Register: https://www.AiFeta.com

#AI #genai #deepfakes #syntheticmedia #computervision #machinelearning #cybersecurity #digitalforensics #fewshot