Description
Reinforcement learning (RL) is a promising approach for the online control of complex, real-world systems, with recent successes in applications such as particle accelerator control. However, model-free RL algorithms are often sample-inefficient, making training infeasible without access to high-fidelity simulations or extensive measurement data, which poses a significant challenge for real-world deployment. In this work, we explore data-driven model predictive control (MPC) as a solution. Specifically, we employ Gaussian processes (GPs) to model the unknown transition functions of the real-world system, enabling safe exploration during training. We apply the GP-MPC framework to the transverse beam tuning task at the ARES accelerator, demonstrating its potential for sample-efficient online training. This study showcases the feasibility of data-driven control strategies for accelerator applications, paving the way for more efficient and effective solutions in real-world scenarios.
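To make the approach concrete, below is a minimal sketch of the general GP-MPC idea: fit one GP per state dimension on observed transitions and plan with a simple random-shooting MPC loop that penalizes predictive uncertainty for cautious exploration. This is not the implementation from the paper; the dimensions, the uncertainty penalty, and all function names (`fit_dynamics`, `predict_step`, `plan`) are illustrative assumptions, and scikit-learn stands in for whatever GP library the authors used.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel, WhiteKernel

# Hypothetical dimensions: e.g. beam position/size as state, magnet settings as action.
N_STATE, N_ACTION, HORIZON, N_CANDIDATES = 4, 5, 5, 256
rng = np.random.default_rng(0)

# One independent GP per state dimension, mapping (state, action) -> next state.
kernel = (ConstantKernel(1.0) * RBF(length_scale=np.ones(N_STATE + N_ACTION))
          + WhiteKernel(noise_level=1e-4))
gps = [GaussianProcessRegressor(kernel=kernel, normalize_y=True)
       for _ in range(N_STATE)]

def fit_dynamics(states, actions, next_states):
    """Fit the GP transition model on observed (s, a, s') tuples."""
    X = np.hstack([states, actions])
    for d, gp in enumerate(gps):
        gp.fit(X, next_states[:, d])

def predict_step(state, action):
    """Predict the next-state mean and std under the learned GPs."""
    x = np.hstack([state, action])[None, :]
    mean, std = np.empty(N_STATE), np.empty(N_STATE)
    for d, gp in enumerate(gps):
        m, s = gp.predict(x, return_std=True)
        mean[d], std[d] = m.item(), s.item()
    return mean, std

def plan(state, target, uncertainty_penalty=1.0):
    """Random-shooting MPC: sample action sequences, roll them out through the
    GP mean, and penalize predictive uncertainty to keep exploration safe."""
    best_cost, best_action = np.inf, None
    for _ in range(N_CANDIDATES):
        seq = rng.uniform(-1.0, 1.0, size=(HORIZON, N_ACTION))
        s, cost = state, 0.0
        for a in seq:
            s, std = predict_step(s, a)
            cost += np.sum((s - target) ** 2) + uncertainty_penalty * std.sum()
        if cost < best_cost:
            # Receding horizon: only the first action of the best sequence is applied.
            best_cost, best_action = cost, seq[0]
    return best_action
```

In such a loop, the controller would alternate between applying the planned action on the machine, appending the observed transition to the dataset, and refitting the GPs, so the model improves with every interaction.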
| Region represented | Europe |
| --- | --- |
| Paper preparation format | LaTeX |