Spotting the Unexpected: Text-Driven OOD Segmentation for Safer Self-Driving

When a self-driving car encounters something unusual, such as a stray sofa or a blown tire, it must quickly flag it as unknown. This task is called out-of-distribution (OOD) segmentation.

What’s new

  • Text-driven learning: The model aligns images with language, tapping rich word knowledge to recognize a wider variety of objects.
  • Distance-based prompts: Phrases placed at varying semantic distances from known road classes teach the model how “different” an object is.
  • Semantic augmentation: Diverse OOD descriptions expand coverage of rare and unexpected cases.
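The paper's exact scoring rule isn't spelled out here, but the core text-driven idea can be sketched as follows: embed each pixel and each known-class text prompt into a shared space (CLIP-style), and treat pixels whose best match to any known-class prompt is weak as OOD. The embeddings and class names below are illustrative placeholders, not the authors' actual model:

```python
import numpy as np

def ood_scores(pixel_embeddings, class_text_embeddings):
    """Per-pixel OOD score: 1 minus the max cosine similarity
    to any known-class text embedding. Higher = more anomalous."""
    p = pixel_embeddings / np.linalg.norm(pixel_embeddings, axis=-1, keepdims=True)
    t = class_text_embeddings / np.linalg.norm(class_text_embeddings, axis=-1, keepdims=True)
    sims = p @ t.T  # (num_pixels, num_known_classes)
    return 1.0 - sims.max(axis=1)

# Toy 2-D example: text embeddings for two known classes ("road", "car"),
# one pixel aligned with "road", one pixel pointing away from both.
class_text = np.array([[1.0, 0.0],    # "road"
                       [0.0, 1.0]])   # "car"
pixels = np.array([[0.9, 0.1],        # looks like road -> low OOD score
                   [-0.7, -0.7]])     # matches nothing -> high OOD score
scores = ood_scores(pixels, class_text)
```

Thresholding these scores yields the OOD mask; the distance-based prompts above refine this by teaching the model graded notions of "how different" rather than a single binary cutoff.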

Why it matters

By blending vision and language, the system generalizes to unseen hazards and delivers more reliable OOD segmentation in complex, real-world driving.

Results

State-of-the-art results on the Fishyscapes, SegmentMeIfYouCan, and RoadAnomaly benchmarks, at both the pixel and object level.

Paper: http://arxiv.org/abs/2511.07238v1
Authors: Seungheon Song, Jaekoo Lee

Register: https://www.AiFeta.com

#AI #AutonomousDriving #SelfDriving #ComputerVision #OOD #Segmentation #VisionLanguage #Robotics #Safety #DeepLearning
