Fighting AI with AI: Making Safety-Critical Systems Trustworthy

When AI helps fly aircraft or drive cars, “just trust the model” isn’t enough. Deep neural networks are opaque, and the gap between plain‑English requirements and low‑level code blocks traditional safety assurance.

This paper proposes using foundation models to assure AI itself, via two complementary tools:

  • REACT: Uses Large Language Models to translate natural-language requirements into precise, checkable specifications. It flags ambiguities, suggests refinements, and can generate tests, enabling earlier verification and validation (see the first sketch below).
  • SemaLens: Uses Vision-Language Models to reason about, test, and monitor DNN perception with human-understandable concepts. It helps surface failure modes and corner cases that matter in the real world (see the second sketch, after the next paragraph).
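
To give a flavor of the REACT idea, here is a minimal Python sketch that asks an LLM to formalize one requirement and report ambiguities. The prompt wording, the model choice, and the OpenAI-style client are my assumptions for illustration, not the authors' implementation; the paper describes REACT's actual pipeline.

```python
# Minimal sketch of REACT's core idea (assumed API, not the authors' code):
# ask an LLM to turn a natural-language requirement into a checkable spec
# and to flag anything ambiguous. Requires OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

REQUIREMENT = (
    "If the aircraft descends below 500 ft without landing clearance, "
    "the system shall issue an alert within 2 seconds."
)

SYSTEM_PROMPT = (
    "You are a requirements engineer. Translate the requirement into a "
    "Linear Temporal Logic (LTL) formula over named atomic propositions. "
    "Then list any ambiguities (undefined terms, missing units, unclear "
    "timing) that block formalization."
)

response = client.chat.completions.create(
    model="gpt-4o",  # assumed model choice, for illustration only
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": REQUIREMENT},
    ],
)

# The LLM's draft spec and ambiguity list; in a real pipeline this output
# would be parsed and checked downstream, not just printed.
print(response.choices[0].message.content)
```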

Together, they form a pipeline from informal requirements to validated implementations for domains like aerospace and autonomous vehicles—bringing explainability, consistency, and continuous monitoring to AI-enabled, safety-critical systems.
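Likewise for SemaLens: the paper defines its own VLM-based analysis, but the gist is captured by the sketch below, which uses CLIP as a stand-in concept scorer (my choice, not necessarily the paper's model) to rate a camera frame against human-readable concepts a safety monitor might track.

```python
# Sketch of concept-based perception monitoring in the spirit of SemaLens.
# CLIP and the concept list are stand-ins chosen for illustration; the
# paper's actual models and monitoring logic may differ.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Human-understandable concepts a runtime monitor might care about.
CONCEPTS = [
    "a clear runway in daylight",
    "a runway obscured by fog",
    "glare from low sun on the camera",
    "an obstacle on the runway",
]

frame = Image.open("camera_frame.png")  # hypothetical input frame
inputs = processor(text=CONCEPTS, images=frame,
                   return_tensors="pt", padding=True)

with torch.no_grad():
    logits = model(**inputs).logits_per_image  # shape: (1, len(CONCEPTS))
probs = logits.softmax(dim=-1).squeeze(0)

# Report per-concept scores; a monitor could alert when a known failure
# mode (e.g., fog or glare) dominates.
for concept, p in zip(CONCEPTS, probs.tolist()):
    print(f"{p:.2f}  {concept}")
```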

By Anastasia Mavridou, Divya Gopinath, and Corina S. Păsăreanu.

Paper: https://arxiv.org/abs/2511.20627v1

Register: https://www.AiFeta.com

#AI #SafetyCritical #Assurance #LLM #VLM #AutonomousVehicles #Aerospace #DeepLearning #TrustworthyAI #RequirementsEngineering
