Description
The optimization and design of particle accelerators are challenging due to the large number of free parameters and the corresponding lack of gradient information available to the optimizer. As a result, full optimization of large beamlines becomes infeasible because of the exponential growth of the free-parameter space that the optimization algorithm must navigate. Providing exact or approximate gradient information to the optimizer can significantly improve convergence speed, enabling practical optimization of high-dimensional problems. To achieve this, we leverage state-of-the-art automatic differentiation techniques developed by the machine learning community to enable end-to-end differentiable particle tracking simulations. We demonstrate that even a simple tracking simulation with gradient information can significantly improve beamline design optimization. Furthermore, we show the flexibility of our implementation through several applications that make use of different kinds of derivative information.
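To illustrate the idea, the following is a minimal sketch of gradient-based beamline tuning via automatic differentiation. It assumes a toy thin-lens quadrupole plus drift model written in PyTorch; all names here (track, thin_quad, k1L, and so on) are illustrative and are not taken from the authors' implementation, which the abstract does not specify.

```python
# Minimal sketch: optimize a quadrupole strength by differentiating through
# a particle tracking simulation. Toy model, not the authors' code.
import torch

def drift(x, xp, L):
    """Linear drift of length L in one transverse plane."""
    return x + L * xp, xp

def thin_quad(x, xp, k1L):
    """Thin-lens quadrupole kick with integrated strength k1L."""
    return x, xp - k1L * x

def track(x, xp, k1L, L_drift=1.0):
    """Track particles through a quadrupole followed by a drift."""
    x, xp = thin_quad(x, xp, k1L)
    x, xp = drift(x, xp, L_drift)
    return x, xp

# Initial particle distribution (positions in m, angles in rad).
torch.manual_seed(0)
x0 = 1e-3 * torch.randn(1000)
xp0 = 1e-4 * torch.randn(1000)

# The quadrupole strength is the free parameter to be optimized.
k1L = torch.tensor(0.5, requires_grad=True)
opt = torch.optim.Adam([k1L], lr=0.05)

for step in range(200):
    opt.zero_grad()
    x, xp = track(x0, xp0, k1L)
    loss = x.var()      # objective: minimize the final beam size
    loss.backward()     # exact gradient d(loss)/d(k1L) via autograd
    opt.step()

print(f"optimized k1L = {k1L.item():.3f}, final rms x = {x.std().item():.2e} m")
```

Because the entire tracking map is composed of differentiable tensor operations, the optimizer receives exact gradients with respect to every beamline parameter at essentially the cost of one backward pass, which is what makes high-dimensional design optimization tractable in this approach.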
Funding Agency
This work was supported by the U.S. National Science Foundation under Award PHY-1549132, the Center for Bright Beams.