AI that translates ultrasound videos to fill data gaps

Ultrasound comes in different "flavors" (grayscale B-mode and color flow Doppler), but clinical datasets rarely include both for every view. A new study trains a generative AI to "translate" between them, creating realistic videos to fill the gaps.

Trained on 54,975 videos and tested on 8,368, the model produces synthetic clips that look and behave like the real thing. Doctors could not reliably tell them apart (~54% accuracy, i.e., chance level), and downstream AI models for classification and segmentation performed comparably on real and synthetic data (F1 ~0.9; Dice ~0.97). And although it was trained on heart scans, the model generalized to other ultrasound domains.
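
For context on the overlap metric quoted above, here is a minimal sketch of how a Dice score between two segmentation masks is typically computed; the NumPy mask arrays and shapes below are hypothetical placeholders, not the paper's actual pipeline.

```python
import numpy as np

def dice_score(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice coefficient between two binary masks: 2|A∩B| / (|A| + |B|)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(pred, truth).sum() / denom

# Hypothetical masks standing in for a model's output on a synthetic
# frame vs. the annotation on the matching real frame.
mask_real = np.zeros((256, 256), dtype=bool)
mask_real[80:180, 60:200] = True
mask_synth = np.zeros((256, 256), dtype=bool)
mask_synth[82:182, 62:198] = True  # slightly offset region

print(f"Dice: {dice_score(mask_synth, mask_real):.3f}")  # near 1.0
```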

  • High visual similarity to ground truth (SSIM ~0.91; see the sketch after this list).
  • Can help balance imbalanced datasets and reduce missing data.
  • Boosts the value of existing, retrospective imaging.
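
To make the SSIM bullet concrete, here is a rough sketch using scikit-image's structural_similarity on a pair of frames; the arrays below are random placeholders, not data from the study.

```python
import numpy as np
from skimage.metrics import structural_similarity

# Placeholder arrays standing in for one real and one generated
# grayscale ultrasound frame (in practice: frames from paired videos).
rng = np.random.default_rng(0)
real_frame = rng.random((256, 256)).astype(np.float32)
noise = 0.02 * rng.standard_normal((256, 256)).astype(np.float32)
synth_frame = np.clip(real_frame + noise, 0.0, 1.0)

# SSIM near 1.0 means the frames share luminance, contrast, and
# structure; the paper reports ~0.91 against ground-truth videos.
score = structural_similarity(real_frame, synth_frame, data_range=1.0)
print(f"SSIM: {score:.3f}")
```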

Why it matters: more complete datasets can mean stronger, fairer medical AI, without extra scanning.

Paper: http://arxiv.org/abs/2511.03255v1

Register: https://www.AiFeta.com

#Ultrasound #AI #MedicalImaging #DeepLearning #GenerativeAI #Healthcare #Cardiology #Radiology #DataAugmentation
