Description
Achieving high reliability is a major challenge for high-current linear accelerators, and it is particularly critical for Accelerator Driven Systems (ADS) such as the China initiative Accelerator Driven System (CiADS). In superconducting linear accelerators, rapid beam recovery requires retuning the superconducting solenoids and cavities adjacent to a failed component so as to compensate for its loss. In this study, we employ the Soft Actor-Critic (SAC) reinforcement learning algorithm to train a compensation model within a simulated environment of the CiADS superconducting section. Compared with previous methods based on genetic algorithms, the reinforcement learning approach delivers more stable and consistent results for beam dynamics control.
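The abstract does not give implementation details; the following is a minimal, hypothetical sketch of how such a SAC-based compensation policy could be trained, assuming a Gymnasium-style wrapper around the beam-dynamics simulation and the SAC implementation from stable-baselines3. The environment `CavityCompensationEnv`, its action and observation spaces, the placeholder `_simulate_beam` call, and the reward definition are illustrative assumptions, not the authors' actual setup.

```python
import numpy as np
import gymnasium as gym
from gymnasium import spaces
from stable_baselines3 import SAC


class CavityCompensationEnv(gym.Env):
    """Hypothetical environment: actions adjust the settings of the
    solenoids/cavities neighbouring a failed element; the reward
    penalises the residual beam-parameter error returned by a
    placeholder beam-dynamics simulation."""

    def __init__(self, n_elements=6):
        super().__init__()
        self.n_elements = n_elements
        # Fractional adjustments applied to the neighbouring element settings.
        self.action_space = spaces.Box(-0.2, 0.2, shape=(n_elements,), dtype=np.float32)
        # Observed beam parameters (e.g. energy, phase, envelope deviations).
        self.observation_space = spaces.Box(-np.inf, np.inf, shape=(4,), dtype=np.float32)

    def _simulate_beam(self, settings):
        # Stand-in for a call to the real lattice/tracking simulation.
        return np.tanh(settings[:4]).astype(np.float32)

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self.settings = self.np_random.uniform(-0.1, 0.1, self.n_elements).astype(np.float32)
        return self._simulate_beam(self.settings), {}

    def step(self, action):
        self.settings = np.clip(self.settings + action, -1.0, 1.0).astype(np.float32)
        obs = self._simulate_beam(self.settings)
        reward = -float(np.linalg.norm(obs))  # smaller beam error -> higher reward
        terminated = bool(np.linalg.norm(obs) < 1e-2)
        return obs, reward, terminated, False, {}


if __name__ == "__main__":
    env = CavityCompensationEnv()
    model = SAC("MlpPolicy", env, verbose=0)
    model.learn(total_timesteps=10_000)
```

In practice the `_simulate_beam` placeholder would be replaced by the actual CiADS lattice simulation, and the reward would encode the physical beam-quality criteria used to judge the compensation.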