Is it Safe to Deploy AI in Safety-Critical Systems?

Abstract

We consider the reality of deploying AI in safety-critical systems such as autonomous vehicles, medical diagnosis, and weather forecasting. Our discussion is grounded in the mathematical nature of AI systems, including how an AI’s mathematical properties relate to its benefit and risk profile. Benefits include the ability to learn models from data even when no physical model exists, increased automation, and enhanced speed compared with traditional approaches. Risks of AI include its opaque (mis-)understanding of the world, failures on out-of-distribution (OOD) inputs, its insatiable appetite for data and computation, and the ongoing challenge of aligning the AI’s objectives with human values. Such risks are potentially manageable with clear-eyed expectations, and our hope in this work is to clarify what can be expected.

Publication
Philosophical Transactions of the Royal Society A
Citation
T. G. Kolda. Is it Safe to Deploy AI in Safety-Critical Systems? Philosophical Transactions of the Royal Society A, 2025 (accepted for publication on December 5, 2025). https://mathsci.ai/publication/is-it-safe-to-deploy-ai-in-safety-critical-systems.pdf

Comments

accepted for publication on December 5, 2025

BibTeX

@article{Ko26,
  author  = {Tamara G. Kolda},
  title   = {Is it Safe to Deploy {AI} in Safety-Critical Systems?},
  journal = {Philosophical Transactions of the Royal Society A},
  month   = {December},
  year    = {2025},
  note    = {accepted for publication on December 5, 2025},
  url     = {https://mathsci.ai/publication/is-it-safe-to-deploy-ai-in-safety-critical-systems.pdf},
}