Description
Achieving high-quality proton beams in accelerators hinges on effective beam tuning. However, the conventional "Monkey Jump" method widely used for tuning is labor-intensive and inefficient. Reinforcement Learning (RL) enables a novel beam tuning strategy that makes informed decisions based on the current system status and control demands, offering a promising alternative for accelerator systems.
We explore novel RL-based beam tuning techniques and apply them to the beam tuning process of the CiADS Front End accelerator, with the aim of significantly improving the efficiency of tuning. To achieve this, we first establish an RL-compatible environment based on dynamic simulation software. The policy is then trained under different initial conditions. Finally, the policy successfully trained in the simulation environment is tested on the real accelerator to verify its effectiveness.
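The workflow described above (simulation-backed environment, training from varied initial conditions, then transfer) can be sketched as a minimal Gym-style environment. This is an illustrative toy, not the CiADS setup: the class name, the number of tuning knobs, and the quadratic reward standing in for the beam-dynamics simulator are all hypothetical assumptions.

```python
import numpy as np


class BeamTuningEnv:
    """Hypothetical sketch of an RL-compatible beam tuning environment.

    In a real setup the step() call would drive dynamic simulation
    software (or the machine itself) and return beam diagnostics;
    here a toy quadratic model stands in for the simulator.
    """

    def __init__(self, n_knobs=4, seed=0):
        self.rng = np.random.default_rng(seed)
        self.n_knobs = n_knobs
        self.target = np.zeros(n_knobs)  # ideal knob settings (toy)
        self.state = None

    def reset(self):
        # Random initial condition, mirroring training under
        # different initial machine states.
        self.state = self.rng.uniform(-1.0, 1.0, self.n_knobs)
        return self.state.copy()

    def step(self, action):
        # Apply bounded adjustments to the tuning knobs.
        self.state = np.clip(self.state + action, -2.0, 2.0)
        # Toy reward: negative distance from the target settings,
        # a stand-in for a beam-quality figure of merit.
        reward = -float(np.linalg.norm(self.state - self.target))
        done = reward > -0.05
        return self.state.copy(), reward, done, {}


env = BeamTuningEnv()
obs = env.reset()
# A trivial proportional "policy" for illustration only; the abstract's
# approach would substitute a trained RL policy here.
obs, reward, done, info = env.step(-0.1 * obs)
```

A trained policy would replace the proportional rule in the loop above; the same `reset`/`step` interface then lets the policy be evaluated first in simulation and later against the real machine.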