Advances in optimization and machine learning algorithms have shown great potential when applied to control systems across many industries, such as automotive, avionics, and aerospace. At CERN, many such initiatives are also applied to our particle accelerators and industrial facilities. In recent years, neural networks have increasingly been explored as components of, or even full replacements for, model-based control systems, which rely on handcrafted rules or hard optimization schemes. In contrast, neural networks promise near-optimal performance while being trainable purely on existing data. However, for critical control systems it is of great importance that any control policy or dynamics model behaves predictably and adheres to strict requirements. While model-based control achieves this by construction, neural network models are known to exhibit unpredictable behavior, such as susceptibility to adversarial examples. For this reason, the use of formal methods to guarantee properties of neural networks has been widely explored in the literature. In this paper, we present an overview of the safety, robustness, and stability challenges posed by neural network-based control systems at CERN. We examine how these challenges can be specified as formal properties and discuss state-of-the-art techniques for verifying and mitigating them.
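
As an illustration of such a formal property, local adversarial robustness of a trained network $f$ around a nominal input $x_0$ is commonly specified in the verification literature as follows (here $\epsilon$ and $\mathcal{S}$ are generic placeholders, not quantities specific to this work):

\[
\forall x:\ \|x - x_0\|_\infty \le \epsilon \ \Longrightarrow\ f(x) \in \mathcal{S},
\]

i.e., every input within an $\ell_\infty$-ball of radius $\epsilon$ around $x_0$ must map into the safe output set $\mathcal{S}$ (for a control policy, for instance, actuator commands within their permitted limits). Verification techniques such as SMT solving or bound propagation attempt to prove this implication for all $x$ in the ball, or otherwise return a concrete counterexample.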