Fighting AI with AI: Safer Planes and Self-Driving Cars

How do we trust AI inside airplanes and self-driving cars? Deep neural networks are powerful but opaque, and that makes traditional safety checks hard.

This paper proposes using foundation models to make AI-enabled systems safer, from requirements to deployment.

  • REACT: uses Large Language Models to translate messy, natural-language requirements into precise, testable specifications, and to flag inconsistencies early.
  • SemaLens: uses Vision-Language Models to reason about what perception AIs "see," test them with human-understandable concepts, and monitor them for risky behavior.

Together, these tools bridge the gap between high-level intent and low-level neural networks, helping engineers verify and validate safety-critical software earlier and at scale.
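To make the REACT idea concrete: the paper describes translating prose requirements into testable specifications, but does not publish an implementation, so the sketch below is purely illustrative. The `Spec` dataclass, the `translate_requirement` function, and the sample requirement are all invented here, and a toy regex stands in for the actual LLM call.

```python
import re
from dataclasses import dataclass

@dataclass
class Spec:
    """A machine-checkable specification distilled from a prose requirement."""
    signal: str
    operator: str
    threshold: float
    unit: str

def translate_requirement(text: str) -> Spec:
    """Stand-in for the LLM step: REACT would prompt a Large Language Model;
    here a toy pattern extracts one common requirement shape."""
    m = re.search(r"(\w[\w ]*?) shall not exceed (\d+(?:\.\d+)?) ?([\w/]+)", text)
    if m is None:
        raise ValueError("requirement not understood")
    return Spec(signal=m.group(1).strip(), operator="<=",
                threshold=float(m.group(2)), unit=m.group(3))

req = "Descent rate shall not exceed 3.5 m/s"
spec = translate_requirement(req)
print(spec)
# Spec(signal='Descent rate', operator='<=', threshold=3.5, unit='m/s')
```

Once requirements live in a structured form like this, they can be checked mechanically against telemetry or simulation traces, which is the kind of early, scalable verification the paper argues for.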

Read more: https://arxiv.org/abs/2511.20627v1

Register: https://www.AiFeta.com

#AISafety #Aerospace #AutonomousVehicles #DeepLearning #LLM #VLM #SafetyCritical #Assurance #Research
