Smarter AI with Less Labeled Data: Unsupervised Data Augmentation

Training AI usually needs lots of human-labeled examples. This work shows a different path: let models learn from abundant unlabeled data by requiring their predictions to stay consistent under strong "noise."

Instead of simple tweaks (like small crops or random word drops), UDA applies strong, targeted augmentations: RandAugment for images and back-translation for text. The model is trained to give the same answer before and after these transformations, which sharpens what it learns even when labels are scarce.
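
As a rough illustration, the objective combines ordinary cross-entropy on the labeled batch with a consistency term on unlabeled data. Here is a minimal sketch in PyTorch; `model`, `augment` (standing in for RandAugment or back-translation), and the weight `lam` are illustrative names, not the authors' API:

```python
import torch
import torch.nn.functional as F

def uda_loss(model, x_labeled, y_labeled, x_unlabeled, augment, lam=1.0):
    # Supervised term: standard cross-entropy on the few labeled examples.
    sup_loss = F.cross_entropy(model(x_labeled), y_labeled)

    # Soft targets: predictions on the clean unlabeled inputs,
    # with gradients blocked so only the augmented branch is trained.
    with torch.no_grad():
        p_clean = F.softmax(model(x_unlabeled), dim=-1)

    # Consistency term: predictions on strongly augmented copies should
    # match the clean predictions (KL divergence between the two).
    log_p_aug = F.log_softmax(model(augment(x_unlabeled)), dim=-1)
    unsup_loss = F.kl_div(log_p_aug, p_clean, reduction="batchmean")

    return sup_loss + lam * unsup_loss
```

Key results: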

  • IMDb sentiment: with just 20 labeled reviews, it hits 4.20% error—beating a prior model trained on 25,000 labeled reviews.
  • CIFAR-10: 5.43% error using only 250 labeled images, outperforming all previous methods.
  • Works across 6 language and 3 vision tasks, and boosts transfer learning (e.g., BERT) and large-scale setups like ImageNet.

Code: https://github.com/google-research/uda
Paper: http://arxiv.org/abs/1904.12848

#MachineLearning #SemiSupervised #NLP #ComputerVision #AIResearch #DataAugmentation #DeepLearning #UDA
