Speaker
Description
There is growing interest in differentiable simulations that execute quickly and additionally provide gradients of physical observables with respect to design parameters. Existing differentiable codes have focused on picking a specific framework and then reimplementing standard simulation algorithms within it, such as matrix and symplectic drift-kick tracking. This approach can be limiting because of the performance, compilation, and ease-of-use tradeoffs of the chosen framework, especially on specialized GPU/TPU/other accelerator devices. We present a new library for differentiable simulations, JACC, that combines several differentiation backends (JAX, PyTorch, NVIDIA Warp, and finite differences) with an intelligent beamline generator that hardcodes fixed parameters into the tracking kernels. This makes the kernels easy to trace for JIT compilation and enables kernel fusion, improving performance compared to generic element implementations. Common Xsuite elements are implemented, and the results are carefully benchmarked. Furthermore, we provide templates for elements based on physics-informed neural networks and Gaussian processes, supporting arbitrary reduced-fidelity but very fast models. Examples of optics design and differentiable space-charge tracking are discussed, demonstrating the library's usefulness for injector design. We also discuss implementation challenges and how to keep up with a rapidly changing ML ecosystem.
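To make the kernel-generation idea concrete, below is a minimal sketch of the approach described above, written in JAX. It is not the JACC API: the names `make_beamline_kernel` and `rms_beam_size`, the thin-lens drift-kick model, and all parameter values are illustrative assumptions. The point is that fixed geometry (here, drift lengths) is baked into the kernel as Python constants at build time, so `jax.jit` can trace and fuse the whole beamline, while the design parameters (quadrupole strengths) remain differentiable via `jax.grad`.

```python
# Minimal sketch (not the JACC API): differentiable drift-kick tracking in JAX,
# with fixed lattice parameters hardcoded into the kernel at build time so that
# jax.jit can trace and fuse the whole beamline into a single compiled kernel.
import jax
import jax.numpy as jnp

def make_beamline_kernel(drift_lengths):
    """Build a tracking kernel with the (fixed) drift lengths hardcoded.

    Only the quadrupole strengths remain as traced design parameters,
    so XLA can constant-fold the fixed geometry and fuse all elements.
    """
    def track(k1_strengths, x, px):
        for L, k1 in zip(drift_lengths, k1_strengths):
            # Drift: free propagation over the hardcoded length L.
            x = x + L * px
            # Thin-lens quadrupole kick with differentiable strength k1.
            px = px - k1 * x
        return x, px
    return jax.jit(track)

# Toy beamline: three drifts with fixed lengths, three thin quadrupoles.
kernel = make_beamline_kernel(drift_lengths=(1.0, 0.5, 1.0))

def rms_beam_size(k1_strengths):
    # Track a small bunch and return the final RMS horizontal size.
    x0 = jnp.linspace(-1e-3, 1e-3, 64)
    px0 = jnp.zeros_like(x0)
    x, _ = kernel(k1_strengths, x0, px0)
    return jnp.sqrt(jnp.mean(x**2))

k1 = jnp.array([0.8, -1.2, 0.8])
size = rms_beam_size(k1)
grad_size = jax.grad(rms_beam_size)(k1)  # d(beam size)/d(quad strengths)
```

A PyTorch or Warp backend could follow the same pattern, with fixed parameters closed over at kernel-construction time and only the design parameters exposed to the autodiff machinery.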
| Question | Answer |
| --- | --- |
| I have read and accept the Privacy Policy Statement | Yes |
| Please consider my poster for contributed oral presentation | Yes |
| Would you like to submit this poster in student poster session on Sunday (August 10th) | No |