Description
Generative deep learning has emerged as a transformational technology for a wide range of tasks, including tools such as Stable Diffusion, IMAGEN, DALL-E, and Midjourney for generating images from user-defined captions; AlphaFold for predicting 3D protein structures directly from amino acid sequences; and large language models such as GPT and Claude, which can carry on conversations, summarize documents, and write custom computer code. For particle accelerators, such generative models hold great potential for tasks including surrogate modeling, virtual beam diagnostics, automated adaptive beam tuning and optimization for autonomous accelerator control, and accelerator design. This tutorial gives an overview of state-of-the-art generative AI techniques, including transformers, which are the backbone of large language models and the state of the art for learning long-range dependencies in long data sequences; variational autoencoders; and diffusion-based generative models, which are the state of the art for creating high-resolution representations of complex objects. The tutorial also discusses how feedback and hard physics constraints can be built into these generative frameworks to make robust tools for complex, time-varying dynamic systems, and presents examples of how these models are being used in science generally and in a wide range of accelerator applications specifically, such as generative models for virtual 6D phase space diagnostics.