Description
Agentic AI systems that generate code and interact with control systems are beginning to support routine operations at particle accelerators and light sources. Their deployment in high-stakes environments, however, imposes stringent requirements on safety, access control, and accountability. In this contribution we outline a defense-in-depth safety architecture for agentic AI workflows in accelerator control, and illustrate its implementation in the OSPREY framework used at the Advanced Light Source and other facilities.
The architecture comprises four independent safety layers: (i) code-generation isolation, where language models produce text-only artifacts in a sandbox and never access control-system interfaces directly; (ii) human-in-the-loop approval, including plan-, code-, and write-level confirmation workflows; (iii) runtime validation, where all writes pass through a protocol-agnostic mediation API that enforces channel whitelists, value bounds, and rate limits before any connector call; and (iv) execution and network isolation, where generated scripts run in restricted environments behind segmented gateways. All agent interactions, control-system operations, and errors are logged to provide full audit trails.
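To make layer (iii) concrete, the sketch below shows how a mediation API of this kind could enforce a channel whitelist, value bounds, and a per-channel rate limit before delegating to a protocol-specific connector, while logging every attempt for the audit trail. This is a minimal illustration, not OSPREY's actual API: the class, method, and parameter names (`WriteMediator`, `write`, `connector.put`) are hypothetical.

```python
import time


class ValidationError(Exception):
    """Raised when a proposed write fails a safety check."""


class WriteMediator:
    """Hypothetical mediation layer: all checks run before any connector call."""

    def __init__(self, whitelist, bounds, min_interval_s, connector, audit_log):
        self.whitelist = whitelist        # set of channel names writes are allowed on
        self.bounds = bounds              # channel -> (lo, hi) permissible value range
        self.min_interval_s = min_interval_s  # minimum seconds between writes per channel
        self.connector = connector        # protocol-specific backend (e.g., an EPICS client)
        self.audit_log = audit_log        # append-only record of all attempts
        self._last_write = {}             # channel -> timestamp of most recent write

    def write(self, channel, value):
        try:
            if channel not in self.whitelist:
                raise ValidationError(f"channel not whitelisted: {channel}")
            lo, hi = self.bounds[channel]
            if not (lo <= value <= hi):
                raise ValidationError(f"value {value} outside [{lo}, {hi}] for {channel}")
            now = time.monotonic()
            if now - self._last_write.get(channel, float("-inf")) < self.min_interval_s:
                raise ValidationError(f"rate limit exceeded for {channel}")
            self._last_write[channel] = now
            # Only reached after every check has passed.
            self.connector.put(channel, value)
            self.audit_log.append(("accepted", channel, value))
        except ValidationError as err:
            # Rejected writes are logged too, so the audit trail is complete.
            self.audit_log.append(("rejected", channel, value, str(err)))
            raise
```

Because the mediator sits between agent-generated code and the connector, the checks apply uniformly regardless of which model or workflow produced the write request.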
We discuss design principles, integration patterns with existing control systems, and early operational experience, and propose this layered approach as a reference safety pattern for agentic AI in safety-critical accelerator applications.
| In which format do you intend to submit your paper? | LaTeX |
|---|---|