We consider the reality of deploying AI in safety-critical systems such as autonomous vehicles, medical diagnosis, and weather forecasting. Our discussion is grounded in the mathematical nature of AI systems, including how an AI's mathematical properties relate to its benefit and risk profile. The benefits include the ability to learn models from data even when no physical model exists, increased automation, and greater speed than traditional approaches. The risks include the AI's opaque (mis-)understanding of the world, failures on out-of-distribution (OOD) inputs, an insatiable appetite for data and compute, and the ongoing challenge of aligning the AI's objectives with human values. Such risks are potentially manageable with clear-eyed expectations, and our hope in this work is to clarify what can be expected.
This paper tackles the big question: Is it safe to use artificial intelligence (AI) in systems where lives are at stake, like self-driving cars, medical diagnosis, and weather forecasting? Using these three real-world scenarios, the author explains how AI can bring huge benefits, such as faster decisions and increased automation, but also introduces serious risks stemming from its opaque view of the world and unpredictable behavior. The paper makes the science approachable, showing that AI is powerful but not magical, and argues that careful design, constant monitoring, and human oversight are essential for deploying AI safely in safety-critical systems.
@techreport{Ko25,
  author      = {Tamara G. Kolda},
  title       = {Is it Safe to Deploy {AI} in Safety-Critical Systems?},
  institution = {MathSci.ai},
  month       = sep,
  year        = {2025},
  url         = {https://mathsci.ai/publication/is-it-safe-to-deploy-ai-in-safety-critical-systems.pdf},
}