ICALEPCS 2025 - The 20th International Conference on Accelerator and Large Experimental Physics Control Systems

America/Chicago
Palmer House Hilton Chicago

Palmer House Hilton Chicago

17 East Monroe Street Chicago, IL 60603, United States of America
Guobao Shen (Argonne National Laboratory), Joseph Sullivan (Argonne National Laboratory)
Description

Welcome!

On behalf of the organizing committee, we are pleased to welcome you to Chicago for the 20th International Conference on Accelerator and Large Experimental Physics Control Systems, ICALEPCS 2025.

This series of conferences facilitates fruitful collaborations among the world's control system specialists from major scientific installations and industries, such as particle accelerators, light sources, laser facilities, telescopes, tokamaks, etc. The series of ICALEPCS conferences started in 1987 in Villars-sur-Ollon (Switzerland), hosted by CERN. The conferences subsequently rotated between three major areas of the world: America (including North, Central and South America), Asia (including Oceania) and EMEA (Europe including Russia, the Middle East and Africa). Over the years the conferences have seen a growing number of participants, institutions, and countries.

We are excited about the program we have put together, with the help of the International Scientific Advisory Committee (ISAC). Attendees will have an opportunity to meet with other experts from around the world, interact and build new collaborations. We hope to see both new faces and those we know from past conferences, coming to share ideas and learn new things.

ICALEPCS 2025 is being hosted by Argonne National Laboratory (ANL), home to the Advanced Photon Source (APS), the world-class particle accelerator that is finishing its major upgrade. Argonne is a multipurpose research institution funded primarily by the U.S. Department of Energy’s Office of Science. Visit Argonne’s website to learn more about the laboratory’s programs and history.

The ICALEPCS series has a storied history, and this conference is structured to welcome newcomers and students to our community. The conference fosters an open forum for discussions, and provides avenues for students to be greater participants in the conference.

We look forward to meeting you in Chicago!

 

Guobao Shen & Joseph Sullivan
Conference Co-Chairs

Fanny Rodolakis
Local Organization Committee Chair

Announcements

July 15th, 2025: Important Notice on Phishing Emails

We’ve learned that attendees of other conferences have received phishing emails requesting credit card or personal information. Please note:

  • ICALEPCS organizers will never request passport or ID documents.
  • Credit card payments are processed securely by CVent through our official registration site only.

March 28th, 2025: Join our LinkedIn community to stay updated on the major activities of the ICALEPCS Conference series. https://www.linkedin.com/company/icalepcs-conference-series/


Hosted by Argonne National Laboratory

Endorsed by the American Physical Society

Platinum Event Sponsors


    • Workshops: TBA
    • Workshops: TBA
    • Social: Welcome Reception
    • 07:30
      Registration
    • MOIG Welcome
      Convener: Joseph Sullivan (Argonne National Laboratory)
      • 1
        Welcome and Announcements
        Speaker: Guobao Shen (Argonne National Laboratory)
      • 2
        Welcome from the Conference Chairs
        Speaker: Guobao Shen (Argonne National Laboratory)
    • MOKG Keynote Grand Ballroom

      Grand Ballroom

      Palmer House Hilton Chicago

      17 East Monroe Street Chicago, IL 60603, United States of America
      Convener: Joseph Sullivan (Argonne National Laboratory)
      • 3
        Advanced Photon Source Upgrade: Commissioning, Initial Science, and Controls Challenges

        The Advanced Photon Source (APS) recently underwent an upgrade to a 4th-generation synchrotron x-ray source, greatly reducing the natural emittance of the storage ring. Reduced emittance produces much smaller and more parallel x-ray beams (higher brilliance) that provide new capabilities for understanding materials at nanometer length scales or exploring the time evolution of systems orders of magnitude faster than was previously possible. This talk will detail the APS’s recent experience with building and commissioning the new accelerator, beamlines, and instruments, and will highlight some of the specific challenges for control systems at 4th-generation synchrotron sources such as the APS.

        Speaker: Jonathan Lang (Argonne National Laboratory)
    • MOAG MC02 Control System Upgrades Grand Ballroom

      Grand Ballroom

      Palmer House Hilton Chicago

      17 East Monroe Street Chicago, IL 60603, United States of America
      Conveners: Misaki Komiyama (The Institute of Physical and Chemical Research), Mr Yuliang Zhang (Institute of High Energy Physics)
      • 4
        Status of the APS Accelerator Controls Following Its Major Upgrade

        During the APS Upgrade project's dark time, the original storage ring was replaced with a brand-new, state-of-the-art ring, along with its accelerator controls system. Although the APS injector controls system remained unchanged, the storage ring controls were fully redesigned and significantly modernized. Together with the other technical systems, the controls system was successfully brought back online in a timely manner, playing a crucial role in achieving a tightly scheduled transition from decommissioning to beam commissioning and restarting user beam operations within a year. This paper presents the latest status of the APS accelerator control systems, with highlights of the lessons learned during the upgrade period.

        Speaker: Guobao Shen (Argonne National Laboratory)
      • 5
        Design and development of Diamond-II accelerator control system

        Diamond Light Source is a 3rd-generation synchrotron light source that has been operating since 2007. The existing accelerator control system is based on EPICS V3 and a mixture of VME hardware, PCs, and embedded devices. The upgrade of Diamond to Diamond-II is now in the construction phase, with installation set to begin in January 2028, followed by storage ring commissioning in October 2028.
        A new control system is currently under development, leveraging existing infrastructure while modernizing key components. The updated control system will be built on EPICS 7, with software deployed via Kubernetes clusters. This paper outlines the system requirements, development activities, planning, and deployment strategy for the Diamond-II accelerator control system.

        Speaker: Mr Mark Heron (Diamond Light Source)
      • 6
        Upgrade and modernization of the CSNS accelerator control system towards CSNS-II

        The CSNS-II project, launched on 1st January 2024, aims to significantly enhance the beam power from 100 kW to 500 kW. The current accelerator control system, commissioned in 2018, was designed based on hardware and software platforms finalized in 2012. Over time, these systems have begun to exhibit obsolescence issues. To meet the advanced requirements of CSNS-II, a comprehensive upgrade and modernization of the control system is essential. This presentation outlines the overall upgrade plan and design considerations for the control system. The key upgrades include: transitioning the EPICS framework from version 3 to the modern and feature-rich version 7, migrating the hardware platform from VME to MTCA, adopting the latest Phoebus as part of the Control System Studio suite, and incorporating support for big data analytics and artificial intelligence capabilities to enhance system performance and diagnostics. These enhancements will ensure the control system meets the demanding operational requirements of CSNS-II while improving reliability, scalability, and future readiness.

        Speaker: Mr Yuliang Zhang (Institute of High Energy Physics)
      • 7
        SLS 2.0 beamline upgrade experience: navigating modernization, legacy, and commissioning constraints

        After successfully reaching the key milestones of the SLS 2.0 machine upgrade, focus and prioritization have shifted to the beamline upgrades. These have been structured into three distinct phases. In the "pre-dark time" Phase 0, core technical solutions and the controls hardware portfolio were validated on selected beamlines. We are currently finalizing the upgrades for Phase 1 beamlines, while preparations for Phase 2 — scheduled for 2026 — are gradually ramping up. In this contribution, we first reflect on the Phase 1 commissioning deliverables and assess how far we could implement the originally proposed control system upgrade strategy, particularly regarding hardware modernization and the coexistence with legacy components. Second, we analyze the impact of resource and time constraints on our commissioning activities. Delays in the prerequisite steps for control system commissioning (device list provision, schematic design, hardware assembly, testing, installation and cabling), largely due to limited capacity in infrastructure groups, ultimately resulted in a significantly compressed commissioning window during a critical project phase. We discuss the mitigation strategies adopted - including pre-commissioning using test systems, solution standardization efforts, task prioritization driven by minimum viable product (MVP) delivery, and strong cross-team coordination activities. These insights offer practical lessons for managing the coming SLS 2.0 Phase 2 beamline upgrades.

        Speaker: Tine Celcer (Paul Scherrer Institute)
      • 8
        Upgrading Fermilab’s accelerator controls with ACORN

        The Fermilab Accelerator Complex is the largest national user facility in the Office of High Energy Physics (DOE/HEP) program and the only national user facility operating at Fermilab. Fermilab serves as the host to the Long Baseline Neutrino Facility/Deep Underground Neutrino Experiment (LBNF/DUNE), the laboratory’s flagship project for neutrino science that is under construction. LBNF/DUNE will be powered by megawatt beams from an upgraded accelerator, the Proton Improvement Plan II (PIP-II), which will replace the laboratory’s aging linear accelerator with a new one based on superconducting radio-frequency cavities. The Accelerator Controls Operations Research Network (ACORN) Project will support LBNF/DUNE and PIP-II by modernizing the accelerator control system. The project is at the conceptual design phase and is looking to achieve Critical Decision 1 (CD-1) later this year. The scope and structure of the project will be presented, along with an overview of how it has changed in the past year. Current design and technology choices will be shared. Specific challenges facing the project will be addressed, along with current thinking on solutions.

        Speaker: Christian Roehrig (Fermi National Accelerator Laboratory)
    • 10:45
      Coffee
    • MOBG MC07 Functional Safety and Protection Systems Grand Ballroom

      Grand Ballroom

      Palmer House Hilton Chicago

      17 East Monroe Street Chicago, IL 60603, United States of America
      Conveners: Ralph Baer (GSI Helmholtz Centre for Heavy Ion Research), Ronaldo Mercado (Diamond Light Source)
      • 9
        Design and implementation of the ESS personnel safety system

        The European Spallation Source (ESS) in Lund, Sweden, is a state-of-the-art research facility featuring the world’s most powerful linear proton accelerator and a high-intensity neutron source. ESS employs a comprehensive Personnel Safety System (PSS), which integrates several safety interlock systems distributed across the facility. These systems operate independently, but are connected via a centralized interlink, which allows them to coordinate hazard mitigation and determine the facility’s readiness for proton beam generation and neutron production.

        The ESS PSS architecture is designed to provide a reliable and robust safety framework, with a strong focus on scalability, upgradeability, and maintainability to support long-term and efficient operation. It is based on the Nexus PSS, a PLC-based system that manages communication between distributed safety subsystems. A fiber optic ring network ensures real-time, fail-safe communication with high availability and built-in redundancy. This design allows quick response to safety events and supports system reconfiguration in case of failures while maintaining operational integrity.

        This paper provides an overview of the ESS PSS framework, including its integration, communication infrastructure, and approaches to key safety challenges.

        Speaker: Vincent Harahap (European Spallation Source)
      • 10
        New machine protection system at the Spallation Neutron Source – design process and performance analysis

        A new Machine Protection System (MPS) at the Spallation Neutron Source (SNS) was developed and implemented on µTCA-based hardware platforms. The system monitors more than 2500 field inputs and shuts off the beam within 10 µs if adverse events occur.
        We will present the system-level design process of the various firmware and software components, as well as the system integration into the EPICS environment. A performance analysis of the MPS after two SNS run cycles will also be presented.

        Speaker: Miljko Bobrek (Oak Ridge National Laboratory)
      • 11
        Current state and improvements of the luminescent screen protection system in the Karabo control system at the European XFEL

        Stable beam conditions during startup, maintenance, and the user program are crucial for smooth operation of beamlines and instruments at many light sources, including the European XFEL. Mostly invasive luminescent screens are used for visual verification of the beam position and jitter. These screens, made of various materials, are located at several points along the beam path and can each withstand a particular photon intensity and flux. Excessive intensity within a few x-ray pulses may damage a screen, resulting in increased maintenance costs and operational downtime.
        At the European XFEL, a set of Karabo control system services is deployed to protect these screens. Their main functionality includes calculating screen damage thresholds and estimating the actual photon flux. In case of an overdose, the user is warned and the screen is retracted to a predefined safe position.
        This contribution presents the current protection system for the optical luminescent screens in the Karabo control system at the European XFEL, together with its upcoming improvements.

        Speaker: Ivars Karpics (European X-Ray Free-Electron Laser)
      • 12
        Architecture of the personnel protection systems for Spallation Neutron Source Second Target Station

        Oak Ridge National Laboratory (ORNL) is implementing a major upgrade to the Spallation Neutron Source (SNS) facility, encompassing the addition of the Second Target Station (STS). Preliminary design reviews have been conducted on several STS Personnel Protection Systems (PPS). The reviews focused primarily on the integration with the existing SNS PPS, the new proton transport tunnel, and the target areas. Development of the PPS is ongoing, to ensure a coherent safety system with the mission of protecting users and workers from prompt radiation hazards while providing high beam availability to operations. The STS PPS element in the Integrated Control System (ICS) is a facility-wide system composed of multiple safety subsystems, including the Ring to Second Target (RTST) beam transport tunnel, Target, Bunker, and Instruments. Personnel working in all these geographic areas are protected by modular, reliable PPS solutions. The safety system enforces access controls, radiation monitoring, and beam destination control, and applies critical device inhibits upon detection of an abnormal condition. It utilizes well-documented processes, Common Industrial Protocol (CIP) safety, pulsed tests, and redundancy to achieve the desired Safety Integrity Level (SIL). This paper gives an architectural overview of the STS PPS and a detailed safety plan for the SNS facility, addressing safety solutions and human factors.

        Speaker: Chrysostomos "Tommy" Michaelides (Oak Ridge National Laboratory)
    • MOBR MC14 Digital Twins & Simulation Red Lacquer Room

      Red Lacquer Room

      Palmer House Hilton Chicago

      17 East Monroe Street Chicago, IL 60603, United States of America
      Conveners: Enrique Blanco Vinuela (European Organization for Nuclear Research), Stephane Perez (Commissariat à l'Energie Atomique), Timo Korhonen (European Spallation Source)
      • 13
        Development of a digital twin towards enhanced monitoring of the SF6 subsystem on the Z Machine

        In this work, we develop a digital twin of the Sulfur Hexafluoride (SF6) subsystem on the Z Machine at Sandia National Laboratories. The Z Machine is a premier pulsed power research facility for studying high energy density science. Z’s SF6 subsystem provides centralized SF6 distribution to high voltage components which use it as an insulating gas to enable nanosecond time frame switching operations. Due to varying experimental requirements, the SF6 system is highly dynamic, heavily automated, and equipped with numerous monitoring systems. Partially motivating this automation is a desire for online leak detection, as SF6 emissions are regulated and reportable. Refinements to the leak detection system improve response times and minimize gas losses. A digital twin can dynamically update alongside the physical system using real-time data, providing a precise estimate of the SF6 subsystem’s internal state which can augment monitoring processes. The digital twin described in this work was developed in Simulink and calibrated using historical data from the Z Machine. It supports real-time, online IO and interaction with the SF6 control system. Future work will include incorporating the digital twin as the plant model in a state observer using online telemetry to synchronize the digital twin’s state with that of the full system. We discuss potential future functionality which could further reduce SF6 loss, including leak localization, system optimization, and predictive maintenance.

        Speaker: Isabel McCabe (Sandia National Laboratories)
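
        The abstract above describes using the digital twin as the plant model in a state observer driven by online telemetry. A minimal Python sketch of that predict/correct pattern follows; the model matrices, observer gain, and telemetry values are invented placeholders, not the actual Simulink twin or Z Machine signals.

            import numpy as np

            # Minimal linear state observer: the digital twin supplies the model
            # matrices A (state transition) and C (measurement map); each cycle a
            # telemetry sample y_meas nudges the twin's state estimate toward reality.
            A = np.array([[0.98, 0.00], [0.02, 0.95]])   # hypothetical inventory dynamics
            C = np.array([[1.0, 0.0]])                   # only one quantity is measured
            L = np.array([[0.3], [0.1]])                 # observer gain (tuned offline)

            def observer_step(x_hat, u, y_meas):
                """One predict/correct cycle: run the twin, then correct with telemetry."""
                x_pred = A @ x_hat + u                   # twin prediction
                y_pred = C @ x_pred                      # expected measurement
                return x_pred + L @ (y_meas - y_pred)    # correction from the real sensor

            # Example cycle with fabricated values
            x_hat = observer_step(np.zeros((2, 1)),
                                  u=np.array([[0.01], [0.0]]),
                                  y_meas=np.array([[1.2]]))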
      • 14
        Accelerator Commissioning Tools: Architectural Proposals and Design Patterns

        Particle accelerators are expensive machine complexes. Newer-generation machines (e.g. synchrotron light sources) are more challenging to control and will only be reasonably controllable with a dedicated software stack providing commissioning and tuning applications. This software stack is developed and validated during the machine design and construction phase using Monte-Carlo studies. These studies use digital twins as surrogates of the future machine.
        Both the software stack and the digital twins are becoming increasingly complex, so an appropriate architecture is required to facilitate the implementation of the stack and of the tools it uses.
        In this paper we review the architecture behind different available tools and report on the common patterns we found. We show how these can be simplified by rethinking them as interacting with a large (or even huge) state, and report on patterns that can be used for a more streamlined implementation.

        Speaker: Pierre Schnizer (Helmholtz-Zentrum Berlin für Materialien und Energie)
      • 15
        VBL++: Accelerating the design of high-energy, next generation laser architectures

        Virtual Beam Line++ (VBL++) is an advanced nonlinear optical beam propagation code used for the design and operation of high-energy and high-power laser systems. It is the latest tool employed to precisely set up the National Ignition Facility’s (NIF) laser beams and support a broad range of missions from fusion ignition to high energy density science.
        VBL++ also plays a critical role in supporting a growing base of users with emerging and diverse laser designs, technologies, and applications. Beyond its use in NIF operations, this tool actively contributes to the design of novel architectures proposed by NIF researchers and collaborators. VBL++ offers an intuitive user interface for building optical chains and supports high-resolution runs using HPC resources. This talk will focus on a subset of features in VBL++ that help users elevate their modeling above single simulations. These features include:
        - Parameterization and optimization of inputs
        - An interface for smooth desktop-to-HPC integration.
        - An inverse solver used to determine the input low-power pulse shape that will achieve the high-power request on target.
        - An interface written in Python, MATLAB, and IDL that enables users to build chains and run simulations with greater configurability.
        This application is still under active development: new models and enhanced features are added regularly to address evolving users' needs at the cutting edge of inertial fusion class laser design and operations.

        Speaker: Jordan Penner (Lawrence Livermore National Laboratory)
      • 16
        Leveraging software simulators in SKA Dish LMC development and testing

        The Square Kilometre Array (SKA) project employs the TANGO Controls framework to manage its telescopes. Within SKA MID, the Dish Local Monitoring and Control (LMC) software integrates dish sub-components into the overall control system. Because hardware availability is often limited, Dish LMC development and validation are heavily based on software simulators derived from Interface Control Documents (ICDs) and system-level requirements. These simulators emulate the behaviour of the respective devices for requirement verification. Their use has enabled iterative testing in both staging and integration environments, accelerated development, and reduced risks ahead of hardware integration. This paper presents the design and implementation of the Dish LMC simulators and their role in supporting system-level validation and advancing software maturity.

        Speaker: Samuel Twum (SKA Observatory)
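
        As a rough illustration of the simulator approach described above, a TANGO device can stand in for a dish sub-component so the Dish LMC can be exercised without hardware. The sketch below uses the PyTango high-level API; the device class, attribute, and command names are invented for illustration and are not taken from the actual Dish ICDs.

            from tango.server import Device, attribute, command

            class DishSubElementSimulator(Device):
                """Software stand-in for a dish sub-component (hypothetical interface)."""

                def init_device(self):
                    super().init_device()
                    self._mode = "STANDBY"

                @attribute(dtype=str)
                def operatingMode(self):
                    # The LMC polls this attribute exactly as it would on real hardware.
                    return self._mode

                @command
                def SetOperateMode(self):
                    # Emulate the ICD-defined mode transition without touching hardware.
                    self._mode = "OPERATE"

            if __name__ == "__main__":
                DishSubElementSimulator.run_server()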
      • 17
        Twinac: initiation of a community-driven accelerator digital twin framework

        We present the initiation of a community-driven framework for the integration of accelerator digital twins into control systems: Twinac. Few facilities have fully integrated accelerator digital twins like at Cornell’s CHESS. Many facilities have active research to employ surrogate models to aid in operational decisions like at Argonne’s ALS, MSU’s FRIB, SLAC’s LCLS-II, and Fermilab’s FAST/IOTA, PIP-II, and main complex. To lower the barrier to entry for all accelerator facilities to build and benefit from a digital twin of their own accelerators, we propose the following software framework. Twinac will provide the capability to compose one’s own digital twin using reusable components engineered at other facilities. With this model in place, Twinac will also support tools for (1) predictive maintenance systems; (2) discovery of correlated but uncontrolled environmental factors, like seasonal temperature variations causing performance changes on power supplies, magnets, etc.; and (3) prototyping and updating sophisticated optimization and controls algorithms. The Twinac framework will enable sharing and simplified deployment of modeled components and control algorithms at all facilities. With an inter-facility team to build and support the Twinac framework, it will be easy to publish and try out the latest advancements at one’s own facility.

        Speaker: Dr Tia Miceli (Fermi National Accelerator Laboratory)
    • 12:30
      Lunch
    • MOCG MC13 Artificial Intelligence and Machine Learning Grand Ballroom

      Grand Ballroom

      Palmer House Hilton Chicago

      17 East Monroe Street Chicago, IL 60603, United States of America
      Conveners: Mike Fedorov (Lawrence Livermore National Laboratory), Mirjam Lindberg (MAX IV Laboratory)
      • 18
        Experience integrating online modeling / adaptive digital twin infrastructure and ML-based tuning for accelerator control at SLAC

        SLAC and collaborators are developing infrastructure and algorithms for deploying online physics models and combining them with machine learning (ML) models and ML-based feedback from its running accelerators. These models predict details of the beam phase space distribution, include nonlinear collective effects, and leverage high performance computing and ML-based acceleration of simulations to enable execution in reasonable times for control room use. By enabling accelerator system models to be adapted over time and increasing the speed of model execution, these system models can provide useful information for both human-driven and automated tuning. System models such as these are sometimes called "digital twins", which are distinguished from offline models by the bi-directional flow of information with the real system. We have also been leveraging these system models to speed up accelerator tuning, by providing initial guesses of settings (i.e. "warm starts") and physics information to speed up ML-based tuning. For example, we have used these models to provide priors for Bayesian optimization and training platforms for reinforcement learning. Here, we give an overview of these developments, our deployment experience, and applications at LCLS, LCLS-II, and FACET-II, with a focus on emittance tuning, FEL pulse intensity tuning, and phase space shaping. We also discuss ongoing collaborations with LBNL, JLAB, FNAL, and BNL in this space.

        Speaker: Sara Miskovich (SLAC National Accelerator Laboratory)
      • 19
        Neural network-based model predictive control for energy efficient HVAC systems for CERN accelerators

        In times of concern over the environmental impact of high-energy physics organizations, our research in CERN's Cooling and Ventilation group in the Engineering department investigates energy-saving strategies for heating, ventilation, and air conditioning (HVAC) systems. Widely used in both residential and industrial settings, HVAC systems contribute up to 40% of residential and 70% of industrial consumption, making their optimization a global concern. At CERN, cooling and ventilation systems are used to ensure appropriate temperature and humidity conditions in the accelerator complex. Together with general water-cooling systems, these systems account for up to 12% of total electricity consumption of CERN’s flagship accelerator. Despite their energy intensity, these systems are typically managed by classical controllers, which are reliable for temperature and humidity regulation but not optimal in terms of energy efficiency. This study aims to quantify the potential energy savings in HVAC systems using model predictive control (MPC), an advanced control strategy that incorporates behaviour prediction and external data, such as weather forecasts. In a quest for a reproducible solution that can be easily adapted to different HVAC plants, the digital twin is built using physics-informed neural networks, following recent advances in academic research.

        Speaker: Diogo Monteiro (European Organization for Nuclear Research)
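
        A highly simplified sketch of the model predictive control idea described above: a learned plant model predicts zone temperature over a short horizon, and the controller chooses the actuation sequence that balances comfort against energy use, applying only the first move. The toy linear model below merely stands in for the physics-informed neural network, and all numbers are placeholders rather than CERN plant parameters.

            import numpy as np
            from scipy.optimize import minimize

            def plant_model(T, u, T_out):
                """Surrogate for the digital twin: next zone temperature.
                A toy linear model stands in for the physics-informed network."""
                return 0.9 * T + 0.05 * T_out + 2.0 * u

            def mpc_cost(u_seq, T0, T_out, T_ref=21.0):
                T, cost = T0, 0.0
                for u in u_seq:
                    T = plant_model(T, u, T_out)
                    cost += (T - T_ref) ** 2 + 0.5 * u ** 2   # comfort error + energy use
                return cost

            # Optimize the control sequence over a 6-step horizon; apply only the first move.
            res = minimize(mpc_cost, x0=np.zeros(6), args=(19.0, 5.0),
                           bounds=[(0.0, 1.0)] * 6)
            next_setpoint = res.x[0]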
      • 20
        Optimization of the transfer line against collective effects using physics-constrained generative phase space reconstruction

        Optimization of the transfer line against collective effects such as space charge and coherent synchrotron radiation (CSR) is crucial to preserve beam quality. While simple conventional diagnostic methods provide ensemble-averaged beam parameters or limited phase space information, they remain limited in obtaining a precise, complete 6-dimensional phase space with all correlations, due to hardware and dedicated-time requirements. A generative phase space reconstruction (GPSR) method has been developed as a robust diagnostic framework that reconstructs the complete 6-dimensional phase space. Here we show a physics-constrained GPSR model that incorporates known physical parameters, such as RMS beam sizes and emittances, as constraints. Simulated demonstrations at the Pohang Accelerator Laboratory X-ray Free-Electron Laser facility show that the physics-constrained GPSR can reconstruct the complete 6-dimensional phase space. Furthermore, using the reconstructed phase space, we performed non-differentiable particle tracking simulations to investigate the phase space evolution under space charge and CSR along the bunch compressor. We present the trend of predicted CSR-induced emittance growth, which closely matches the ground truth.

        Speaker: Seongyeol Kim (Pohang Accelerator Laboratory)
      • 21
        AI-driven autonomous tuning of radioactive ion beams

        The Californium Rare Isotope Breeder Upgrade (CARIBU) at Argonne National Laboratory is a pivotal facility for studying rare and unstable atomic nuclei, providing radioactive ion beams (RIBs) from the spontaneous fission of Californium-252. Since 2008, CARIBU has significantly impacted nuclear structure studies, nuclear astrophysics research, and national security applications. However, the traditional extraction and transport of these beams have relied on expert-driven tuning methods, which are time-consuming and involve manually optimizing hundreds of parameters, thus hindering operational efficiency and scientific output. To address these challenges, a novel system utilizing Artificial Intelligence (AI) has been introduced at CARIBU to automate the beam delivery process. This system employs machine-learning techniques, specifically Bayesian Optimization, to autonomously tune each section of the beam line. Live tests have shown that the AI-driven system can achieve transport efficiency and delivery times on par with experienced experts. Extending this methodology to other RIB facilities globally promises to significantly enhance the field by improving operational efficiency and accelerating research in nuclear physics and related areas. This integration of AI-driven systems represents a significant step towards autonomous scientific discovery.

        Speaker: Sergio Lopez-Caceres (Argonne National Laboratory)
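
        For readers unfamiliar with the technique, the sketch below shows the bare bones of a Bayesian optimization loop of the kind the abstract describes: fit a Gaussian-process surrogate to the measurements taken so far, pick the next setting by maximizing an acquisition function, measure, and repeat. The objective, knob count, and bounds are toy placeholders, not the CARIBU beamline interface.

            import numpy as np
            from sklearn.gaussian_process import GaussianProcessRegressor
            from sklearn.gaussian_process.kernels import RBF

            def read_transmission(settings):
                """Placeholder for a beamline measurement; the real system would write
                steerer/quad setpoints and read a transmission diagnostic."""
                return -np.sum((settings - 0.3) ** 2)   # toy objective with a known optimum

            bounds = np.array([[-1.0, 1.0], [-1.0, 1.0]])              # two hypothetical knobs
            X = np.random.uniform(bounds[:, 0], bounds[:, 1], (5, 2))  # initial probes
            y = np.array([read_transmission(x) for x in X])

            gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), normalize_y=True)

            for _ in range(20):                                        # optimization budget
                gp.fit(X, y)
                cand = np.random.uniform(bounds[:, 0], bounds[:, 1], (256, 2))
                mu, sigma = gp.predict(cand, return_std=True)
                ucb = mu + 2.0 * sigma                                 # upper-confidence-bound acquisition
                x_next = cand[np.argmax(ucb)]
                X = np.vstack([X, x_next])
                y = np.append(y, read_transmission(x_next))

            best_settings = X[np.argmax(y)]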
      • 22
        Data-driven models for virtual diagnostics and control

        Machine learning methods have been increasingly used to model complex physical processes that are difficult to address with traditional approaches, especially when these processes exhibit temporal dynamics or require real-time implementation. The linear accelerator (LINAC) at the LANSCE facility is one such system. While a high-resolution simulation tool, HPSim, exists, the complexity and high computational costs of the simulation, combined with the spatiotemporal variability of the LINAC and limited diagnostic measurements, create challenges for real-time operation. These challenges can be mitigated by developing fast surrogate machine learning models to provide virtual diagnostics and enable control. However, the highly expressive nature of machine learning models often results in opaque representations, complicating their use in control applications. Control design and tuning are significantly simplified when the system dynamics are captured by a more interpretable, parsimonious model. This study seeks to harness the power of machine learning while applying traditional system identification techniques to develop models that are both effective for control and computationally efficient.

        Speaker: Brad Ratto (Los Alamos National Laboratory)
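
        The sketch below illustrates the kind of parsimonious, interpretable model the abstract contrasts with opaque ML surrogates: a low-order ARX model fitted by least squares to input/output records. The data here are synthetic; archived LANSCE signals would take their place in practice.

            import numpy as np

            # Toy input/output record standing in for archived accelerator data
            u = np.random.randn(500)                      # actuator history
            y = np.zeros(500)
            for k in range(2, 500):                       # "true" plant generating the data
                y[k] = 1.2 * y[k-1] - 0.4 * y[k-2] + 0.8 * u[k-1] + 0.01 * np.random.randn()

            # Least-squares fit of a 2nd-order ARX model:
            #   y[k] = a1*y[k-1] + a2*y[k-2] + b1*u[k-1]
            Phi = np.column_stack([y[1:-1], y[:-2], u[1:-1]])
            theta, *_ = np.linalg.lstsq(Phi, y[2:], rcond=None)
            a1, a2, b1 = theta                            # interpretable, controller-friendly parameters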
      • 23
        Automation of sample alignment for neutron scattering experiments

        Neutron scattering experiments are a critical tool for the exploration of molecular structure in compounds. The TOPAZ single crystal diffractometer at the Spallation Neutron Source and the Powder Diffractometer at the High Flux Isotope Reactor study these samples by illuminating them with neutron beams of different energies and recording the scattered neutrons. Aligning and maintaining the alignment of the sample during an experiment is key to ensuring high quality data are collected. At present this process is performed manually by beamline scientists. RadiaSoft, in collaboration with the beamline scientists and engineers at ORNL, has developed machine-learning-based alignment software automating this process. We utilize a fully-connected convolutional neural network configured in a U-net architecture to identify the sample center of mass. We then move the sample using a custom Python-based EPICS IOC interfaced with the motors. In this poster we provide an overview of our machine learning tools and show our results aligning samples at ORNL.

        Speaker: Jonathan Edelen (RadiaSoft (United States))
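
        A minimal sketch of the final step described above, assuming the trained network returns a sample center in pixels: convert the offset from the image center into millimetres and apply it through EPICS. The PV names, pixel calibration, and predict_center callable are hypothetical, not the actual ORNL IOC interface.

            import numpy as np
            from epics import caget, caput   # pyepics Channel Access client

            PIXEL_SIZE_MM = 0.005            # hypothetical camera calibration
            MOTOR_X, MOTOR_Y = "BL:SAMPLE:X", "BL:SAMPLE:Y"   # invented PV names

            def center_sample(image, predict_center):
                """Shift the sample so the network-predicted center lands on the beam axis.
                `predict_center` stands in for the trained U-net inference call."""
                cy, cx = predict_center(image)                   # pixel coordinates
                dy = (cy - image.shape[0] / 2) * PIXEL_SIZE_MM   # offset from image center
                dx = (cx - image.shape[1] / 2) * PIXEL_SIZE_MM
                caput(MOTOR_X, caget(MOTOR_X) - dx, wait=True)   # relative correction
                caput(MOTOR_Y, caget(MOTOR_Y) - dy, wait=True)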
    • MOCR MC11 User Interfaces and User Experience Red Lacquer Room

      Red Lacquer Room

      Palmer House Hilton Chicago

      17 East Monroe Street Chicago, IL 60603, United States of America
      Conveners: Marco Bartolini (SKA Observatory), Vicente Rey Bakaikoa (European Synchrotron Radiation Facility)
      • 24
        Redesigning the MeerKAT archive: more than meets the UI

        The South African Radio Astronomy Observatory (SARAO) recently completed a major redevelopment effort to modernize the web-based archive of MeerKAT telescope data. This included resolving long-standing bugs, introducing new features, and realigning the system with current technologies. Although initially scoped as a straightforward port of a list-based search interface, the effort also became an opportunity to rethink the archive’s role within SARAO and the broader astronomy community.
        This case study shares lessons learned—highlighting new architectural patterns, improved UI capabilities, and a UX journey shaped by surprising user feedback and design flaws.
        One example it examines is the shift in authentication architecture. Moving from proxy-based authentication to in-app session logic enabled finer-grained control of feature access. However, cookie size limits forced session state into a database, requiring batched lookups to reduce load. This subtle change—replacing cookie parsing with a session-loading function—introduced a critical new code path, where bugs could silently assign users the wrong session data.
        Small infrastructure decisions can have serious UX consequences. These findings underscore the importance—and challenge—of justifying tasks like logging and observability, which surfaced this and other issues. It also illustrates how modernization efforts can reveal assumptions, uncover hidden workflows, and strengthen the human/system interface.

        Speaker: Zachary Smith (South African Radio Astronomy Observatory)
      • 25
        Phoebus: An open ecosystem for control system applications and services

        The Phoebus toolkit continues to evolve as a modern, extensible platform for control system user applications and middle-layer services. As the next generation of the Eclipse-based Control System Studio, Phoebus retains the familiar, integrated toolset experience while replacing the Eclipse RCP framework with a modular architecture built on standard Java technologies and JavaFX. This transition simplifies maintenance and extensibility while providing foundational building blocks for modern applications and scalable services. Recent efforts have focused on strengthening infrastructure and streamlining deployment, including updates to support recent Java LTS releases; modernization of the Middle-layer services to use newer Kafka and Spring Boot versions; and improved documentation. Middle-layer services—offering alarm management, save/restore, channel finder, and logbooks—continue to evolve, with emphasis on simplifying configuration, improving scalability, and aligning with modern web standards and containerized workflows. The Phoebus collaboration now includes contributors from dozens of facilities worldwide, many of them first-time participants. Alongside technical progress, the project has prioritized a sustainable, inclusive collaboration model to support future developers and users. This paper outlines the current status, community efforts, and future directions of the Phoebus ecosystem.

        Speaker: Kunal Shroff (Brookhaven National Laboratory)
      • 26
        Implementing design principles in the ROCK-IT GUIs: A UI/UX development case

        Effective UI/UX design is essential for scientific GUIs, especially when complex workflows and time constraints challenge usability. By grounding GUI development in UI/UX design principles, accessibility, and user feedback, one can implement user-centered improvements in experimental physics control systems that will lead to higher user satisfaction and increased productivity.
        This presentation explores the UI/UX development process within the demonstrator case of the ROCK-IT project, which develops all necessary tools for the automation and remote access of in-situ and operando catalysis synchrotron experiments. The first part of the presentation outlines the key principles of GUI design and visual accessibility, supported by examples of user feedback on a ROCK-IT beamline’s previous GUI. Shaped by theory and in-person user interviews, the early-stage sketches and interface concepts that emerged from this process will also be presented.
        The second part of the presentation focuses on how the GUI design process evolved in response to timelines, workforce availability, and shifting priorities in ROCK-IT - steering the route toward an efficient, yet user-centered and accessible solution.

        Keywords: GUI, UI/UX, accessibility, screen design, design principles, user research

        Speaker: Zeynep Isil Isik Dursun (Deutsches Elektronen-Synchrotron DESY)
      • 27
        Beyond like-for-like: A user-centered approach to modernizing legacy applications

        When modernizing a legacy application, it is easy to fall back on a like-for-like replica with new tools and updated design stylings, but this is also an opportunity to explore making a more intuitive application that supports user tasks and efficiency. Rather than having a blank canvas, unburdened by legacy tech debt, on which to create a new application, you are working with an existing application that is integral to accelerator operations and one that expert users are already familiar with. Because of this, you might assume people will prefer the like-for-like approach, but you could be carrying forward the pain points and inefficient processes, and ultimately wind up with an application that no one wants to use because it doesn't solve existing problems.
        Getting users involved can make all the difference in your approach to modernizing a legacy application so that it caters to both newer and expert users. It can also bridge the gap between like-for-like replacement and introducing a new GUI design. Having a legacy application doesn't have to make the modernized one difficult to develop; the existing application is a tool in how you move forward with the new one, providing insight into areas that a clean-slate application doesn't give you.

        Speaker: Madelyn Polzin (Fermi National Accelerator Laboratory)
      • 28
        Developments of PAnTHer: 2D and 3D web maps

        PAnTHer (Particle Accelerator on THree.js) is an innovative visualization tool designed to create interactive 2D and 3D maps for particle accelerators, leveraging advanced web and touch technologies. These dynamic maps are seamlessly integrated with real-time data sourced from accelerator control systems, simulation outputs, and an external component database.
        The mapping process utilizes a JSON-formatted lattice file, which can be generated on-the-fly by extracting essential parameters from a simulator device server. This allows for instantaneous presentation to remote users, complete with visualizations of particle trajectories at refresh rates of up to 25 frames per second.
        To ensure efficient and secure data delivery, we have implemented several techniques that facilitate access for external users while maintaining data integrity. The 3D visualization component was specifically developed to strike an optimal balance between visual fidelity and performance, enhancing the user experience in exploring complex accelerator environments.
        This presentation will detail the architecture, functionality, and performance metrics of PAnTHer, highlighting its potential applications in accelerator physics and education.

        Speaker: Lucio Zambon (Elettra-Sincrotrone Trieste S.C.p.A.)
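
        A minimal sketch of the kind of JSON-formatted lattice file the abstract mentions, generated here in Python; the element names, fields, and units are illustrative only and do not reflect the actual PAnTHer schema.

            import json

            # Made-up lattice description in the spirit of the JSON file described above.
            lattice = {
                "name": "demo_ring",
                "elements": [
                    {"name": "Q1", "type": "quadrupole", "s": 0.50, "length": 0.30},
                    {"name": "B1", "type": "dipole",     "s": 1.20, "length": 1.10, "angle": 0.1963},
                    {"name": "BPM1", "type": "monitor",  "s": 2.60, "length": 0.00},
                ],
            }

            with open("lattice.json", "w") as f:
                json.dump(lattice, f, indent=2)   # served to the Three.js front end over HTTP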
      • 29
        Modern control system applications at Fermilab with Dart and Flutter

        The control system at Fermilab is undergoing an unprecedented modernization effort. Hundreds of legacy applications originally developed with technology from the early nineties will be replaced with a suite of modern web applications. The selection of a user interface framework is a key technology decision that will impact all the applications developed over the next decade. The Controls department at Fermilab has decided to use Google’s Dart language and Flutter framework for future application development. In this paper we discuss the decision process that selected Dart/Flutter and the development of early general-purpose control system applications with the framework.

        Speaker: John Diamond (Fermi National Accelerator Laboratory)
    • 15:30
      Coffee
    • MODG MC08 Diverse Device Control and Integration Grand Ballroom

      Grand Ballroom

      Palmer House Hilton Chicago

      17 East Monroe Street Chicago, IL 60603, United States of America
      Conveners: Lorenzo Pivetta (Elettra-Sincrotrone Trieste S.C.p.A.), Masanori Satoh (High Energy Accelerator Research Organization)
      • 30
        The CBXFEL control system: bringing new technologies into the SLAC LCLS accelerator control system

        The Cavity-Based Free Electron Laser (CBXFEL) project is proposing to produce a recirculating X-ray cavity and deploy it to the SLAC LCLS (Linac Coherent Light Source) Hard X-ray (HXR) undulator line. The objective of the project is to demonstrate multi-pass gain and to eventually increase peak brightness of the produced X-ray beam. To support this novel application, new electronics and control system architectures have been adopted and integrated into the LCLS accelerator control system. This includes nanoscale precision multi-axis motion control, high-precision in-vacuum temperature control, high-speed and high-resolution USB cameras, and high-speed digitizers. Moreover, the existing accelerator network, vacuum, timing, and machine protection system (MPS) control systems were expanded to support the new devices. This paper describes the main requirements to be met, how the technologies were integrated into the accelerator control system, and the main lessons learned.

        Speaker: Maria Alessandra Montironi (SLAC National Accelerator Laboratory)
      • 31
        Open source EtherCAT motion control, ECMC, at PSI

        The open-source EtherCAT motion control EPICS module, ECMC, is now being extensively deployed at the Paul Scherrer Institute (PSI), with the number of motion axes currently exceeding 600. ECMC is used across PSI facilities, particularly for the ongoing upgrade of the Swiss Light Source (SLS 2.0), as well as in SwissFEL and HIPA. The scale and diversity of motion control and data acquisition (DAQ) applications have driven the development of many new features in ECMC. Recent developments include support for PVT (Position-Velocity-Time) motion profiles for advanced fly scanning applications, along with beam-synchronous data acquisition capabilities. A concept for flexible and mobile motion control solutions has also been introduced, aiming to support beamline common-pool equipment and facilitate the integration of user-provided motors. Efforts have also focused on improving user-friendliness, defining best practices, streamlining commissioning, and simplifying troubleshooting processes. This contribution highlights the ongoing development and deployment of ECMC within the complex motion control environments at PSI.

        Speaker: Anders Sandström (Paul Scherrer Institute)
      • 32
        A scalable approach to camera integration at NIF

        The Integrated Computer Control System (ICCS) at the National Ignition Facility (NIF) manages devices essential to fusion ignition experiments, including 488 specialized cameras supporting beam alignment, target alignment, cryogenic layering, optics damage inspection, collision avoidance, and other key processes. These heterogeneous camera systems, sourced from multiple vendors, use varied bus technologies and Windows-based proprietary SDKs, posing challenges to integrate into a unified control architecture. To address this, ICCS adopted a diskless Front End Processor (FEP) platform using Linux and open-source camera control libraries. Leveraging a network-booted Linux environment and community-supported video drivers, ICCS integrated diverse vendor hardware while enabling rapid adoption of new camera models. Consolidating multiple cameras onto single FEPs improved maintainability and reliability through strategic grouping. Open-source libraries also position NIF as a contributor to the broader controls community. This modernization demonstrates a systematic approach to integrating dissimilar hardware components across a high-stakes facility. By bridging various proprietary protocols, consolidating scattered device controls, and leveraging community-supported software, ICCS achieves a maintainable, expandable architecture for key imaging and streaming functions.

        Speaker: Brian Hackel (Lawrence Livermore National Laboratory)
      • 33
        Design and commissioning of position control for the high-dynamic cryogenic sample stage of the SAPOTI nanoprobe at the CARNAÚBA beamline (SIRIUS)

        SAPOTI is the second cryogenic nanoprobe station recently installed and under commissioning at the CARNAÚBA beamline at SIRIUS, designed for multi-analytical X-ray techniques, including 2D and 3D ptychographic imaging. A high-dynamic mechatronic system* aimed at providing sample positioning at single-nanometer resolution was developed using a decoupled architecture, force actuators, high-speed and high-accuracy metrology, and a dynamic filter for reaction forces. This paper presents the design and implementation of the XYZ positioning controller, the fly-scan operation for imaging, and results from the initial technical commissioning. First, the sample stage mechatronic architecture is overviewed. Next, the control strategy based on feedback and trajectory feedforward is detailed, followed by the system identification procedure and the design of controllers using loop-shaping and model inversion techniques. Implementation in a real-time digital controller, along with fly-scan trajectories and the triggering scheme for detectors, is discussed. Finally, in-situ performance in stability, trajectory tracking, and imaging reconstruction is demonstrated with experimental results.

        Speaker: Gabriel Oehlmeyer Brunheira (Brazilian Synchrotron Light Laboratory)
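
        As a conceptual illustration of the feedback-plus-trajectory-feedforward strategy described above, the sketch below runs one control cycle combining a simple PI feedback term with an inverse-model feedforward term. The gains, structure, and sample time are illustrative placeholders, not the SAPOTI controller design.

            class AxisController:
                """One control cycle per metrology sample: trajectory feedforward plus a
                simple PI feedback term (illustrative gains, not the SAPOTI tuning)."""

                def __init__(self, kp, ki, kff, dt):
                    self.kp, self.ki, self.kff, self.dt = kp, ki, kff, dt
                    self.integ = 0.0

                def update(self, setpoint, setpoint_accel, measured):
                    error = setpoint - measured
                    self.integ += error * self.dt
                    feedback = self.kp * error + self.ki * self.integ
                    feedforward = self.kff * setpoint_accel      # inverse-model force term
                    return feedback + feedforward                # force command to the actuator

            ctrl = AxisController(kp=2.0e3, ki=5.0e4, kff=1.2, dt=1e-4)
            force = ctrl.update(setpoint=1.0e-6, setpoint_accel=0.0, measured=0.8e-6)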
      • 34
        A centralized and standardized approach to low-voltage DC power

        Modern accelerator facilities like the Linac Coherent Light Source (LCLS) demand highly reliable and maintainable control system infrastructure to ensure uninterrupted operation and rapid fault recovery. To meet these requirements, LCLS has implemented a centralized, standardized low-voltage DC (LVDC) power distribution architecture that supports mission-critical subsystems, including vacuum controls, system PLCs, and other essential control components. The architecture features dual-voltage distribution—24V DC for PLCs and select vacuum systems, and 48V DC for powering motors. At its core are intelligent DC circuit breakers that offer integrated protection, real-time monitoring, and remote control. These breakers are integrated with IOCs (Input/Output Controllers) and are fully manageable via the EPICS control system. This integration enables remote power cycling, fault isolation, and system recovery without requiring physical access, significantly improving uptime and maintainability. The standardized LVDC infrastructure reduces system complexity and supports modular scalability. Integrating with the control system enables advanced diagnostics and continuous monitoring. This talk will review the current LVDC implementation at LCLS, explore the challenges involved in designing DC systems for new applications, and outline planned enhancements and future developments.

        Speaker: Divya Kameswaran (SLAC National Accelerator Laboratory)
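
        A minimal sketch of the remote power-cycling capability described above, assuming the smart breakers expose on/off control and current readback PVs through their IOCs. The PV names are invented for illustration and do not correspond to the actual LCLS naming convention.

            import time
            from epics import caput, caget   # pyepics Channel Access client

            BREAKER = "LVDC:RACK01:BRKR07"   # invented PV prefix for one smart breaker

            def power_cycle(delay_s=5.0):
                """Remotely recover a hung controller by toggling its DC feed."""
                caput(BREAKER + ":ON_OFF", 0, wait=True)    # open the breaker
                time.sleep(delay_s)
                caput(BREAKER + ":ON_OFF", 1, wait=True)    # close it again
                return caget(BREAKER + ":CURRENT")          # confirm load current returned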
      • 35
        Advanced polarization and energy control for APPLE-II type undulator beamlines at MAX-IV

        Precise control of photon beam properties is essential for modern synchrotron beamlines, particularly those utilizing APPLE-II type undulators. This paper presents the control system architecture developed at MAX IV, using IcePAP drivers and the TANGO control system, to achieve advanced polarization and energy manipulation. The system implements the BLUES (Beamline Universal Polarization Mode) framework, allowing dynamic control of both helical and inclined polarization states through synchronized phase motor settings. Central to this approach is the use of parametric lookup tables to define non-linear motion trajectories for the undulator’s gap and phase axes. This enables linear energy ramps, supporting constant eV/s scans crucial for high-resolution spectroscopy and imaging techniques, taking full advantage of the high flux provided by fourth-generation light sources and improving data collection efficiency without compromising the stability or quality of the photon beam. Integration between the beamline and accelerator control systems allows for the complex coordination required to manage polarization settings. To ensure electron beam stability during undulator motion, the control system integrates feedforward correction loops that compensate for orbit and optics distortions induced by gap and phase changes. This approach offers a scalable and precise method for enhancing beamline capabilities, tailored specifically for the challenges posed by APPLE-II undulators.

        Speaker: Áureo Freitas (MAX IV Laboratory)
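
        A minimal sketch of the parametric lookup-table idea described above: given a target photon energy, interpolate the gap and phase targets that the motion system would track during a constant-eV/s ramp. The table values and sampling are invented placeholders, not measured MAX IV undulator data.

            import numpy as np

            # Illustrative lookup table (energy in eV -> gap and phase in mm); the real
            # tables are polarization-mode dependent and measured per undulator.
            table_energy = np.array([400.0, 600.0, 800.0, 1000.0])
            table_gap    = np.array([22.0, 18.5, 16.2, 14.8])
            table_phase  = np.array([0.0, 6.3, 11.7, 15.9])

            def targets_for_energy(energy_ev):
                gap = np.interp(energy_ev, table_energy, table_gap)
                phase = np.interp(energy_ev, table_energy, table_phase)
                return gap, phase

            # Constant-eV/s ramp: sample the table along a linear energy trajectory
            for e in np.arange(500.0, 900.0, 50.0):
                gap_mm, phase_mm = targets_for_energy(e)  # would be sent to the gap/phase axes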
    • MODR MC05 FPGA and Embedded Systems Red Lacquer Room

      Red Lacquer Room

      Palmer House Hilton Chicago

      17 East Monroe Street Chicago, IL 60603, United States of America
      Conveners: Evangelia Gousiou (European Organization for Nuclear Research), Michael Costanzo (Brookhaven National Laboratory), Scott Cogan (Facility for Rare Isotope Beams)
      • 36
        Readout of the FPGA-based global trigger system of the ATLAS experiment

        A new FPGA-based global trigger system is intended for the High Luminosity (HL) program at the ATLAS experiment at the Large Hadron Collider (LHC). The system will process data from the experiment with fixed latency to allow the selection of individual collisions of proton bunches. Data from the global trigger system are read out for the collisions of interest for further processing with commodity computing. The readout of the system is handled by the readout firmware that interfaces the global trigger and the ATLAS readout system. The firmware receives trigger decisions and the LHC-related signals via the Local Trigger Interface (LTI) link and outputs data via 25 Gb/s Interlaken links. The readout firmware includes derandomizing buffers, Network-on-Chip (NoC), Interlaken encoder, and LTI decoder. The LTI protocol is synchronous to the LHC bunch crossing clock and uses 8b/10b encoding. The data is transmitted at 9.8 Gb/s. The firmware was tested with global trigger boards equipped with AMD Versal Premium FPGAs.

        Speaker: Mr Tong Xu (Argonne National Laboratory)
      • 37
        PandABox II: A collaborative platform designed for future upgrades

        Ten years ago, the PandABox platform was first introduced in Melbourne during the MOCRAF workshop. Originally developed through a collaboration between Synchrotron SOLEIL and Diamond Light Source, PandABox was designed to support multi-technique scanning and feedback applications. Since then, the platform has been widely adopted across synchrotron facilities worldwide—including SOLEIL, DIAMOND, MAX IV, and DESY in Europe; NSLS-II in the United States; HEPS in Asia; and SESAME in the Middle East. With fourth-generation light sources, there is an increasing need for high-performance, multi-channel encoder processing to enable synchronized data acquisition and motion control during continuous scanning experiments—now a critical feature for automation.

        In response to these evolving demands, and following discussions within the LEAPS-INNOV WP5.3 project, the opportunity to jointly develop new state-of-the-art equipment became evident. This effort has since expanded into a broader collaboration that now includes MAX IV, ALBA, and DESY alongside the original partners. This paper presents the new generation of the PandABox platform, offering a comprehensive overview of its integration within EPICS and TANGO control systems. It also outlines future functionalities and the framework of the ongoing international collaboration driving its development.

        Speaker: Yves-Marie Abiven (Synchrotron soleil)
      • 38
        Modernizing embedded controllers at the NIF

        As the world’s most energetic laser, the National Ignition Facility (NIF) plays a critical role in advancing high energy density physics and inertial confinement fusion research. The NIF relies on a distributed control system to automate the setup and execution of experiments. This includes over 1,000 embedded controllers split among 17 distinct types. Most of these controllers were designed in the late 1990s to early 2000s, using platforms ranging from LonTalk microcontrollers to STD Bus single-board computers.

        Over the decades, many of our chosen software technologies, hardware components, and platforms have reached end of life. Therefore, we have begun a modernization effort for the NIF embedded controllers. Modern hardware/software will allow us to procure additional units in order to maintain adequate spares and support upcoming upgrades to the NIF. It will also allow us to tackle incompatibilities developing between our existing firmware and modern IT infrastructures. This paper will provide an overview of our embedded controller strategy. It will focus on component selection, minimizing risk during the transition using an automated test bench, and firmware/hardware upgrades to minimize on-going maintenance costs.

        Speaker: Adrian Barnes (Lawrence Livermore National Laboratory)
      • 39
        Distributed I/O Tier as a reference platform for harnessing system-on-chips in CERN’s control system: gateware design, build system, and software services

        The Distributed I/O Tier (DI/OT) project was initially launched to develop a common, modular hardware platform for custom electronics at the lowest layer of the CERN control system. With the adoption of the AMD Zynq UltraScale+ MPSoC for the high-performance System Board, DI/OT has also become a reference platform for integrating System-on-Chip (SoC) technology into CERN’s control system. This paper presents two key aspects of DI/OT’s role as a SoC reference platform: (1) tools and methodologies to streamline end-application development and (2) integration of DI/OT into CERN’s control system as a Front-End platform. The first aspect includes a user-friendly build system and reference design that enable seamless integration of custom FPGA IP cores and Linux device tree entries while providing the reference design with essential DI/OT functionality and monitoring interfaces for local crate peripherals. This build system also automates synthesis and low-level software compilation to generate a complete bootable binary. The second aspect covers a fail-safe and reliable SoC boot mechanism, network booting of FECOS (a Debian-based CERN Linux image for Front-End Computers), and integration with standard monitoring services.

        Keywords: System-On-Chip, DI/OT, integration, build-system, fail-safe, monitoring

        Speaker: Alen Arias Vazquez (European Organization for Nuclear Research)
      • 40
        Next generation direct RF sampling LLRF control and monitoring system for linear accelerators

        The low-level RF (LLRF) systems for linear accelerating structures are typically based on heterodyne architectures. Linear accelerators normally have many RF stations and multiple RF inputs and outputs for each station, so the complexity and size of the LLRF system grow rapidly when scaling up. To meet the design goals of being compact and affordable for future accelerators, or upgrades of existing ones, we have developed and characterized the next generation LLRF (NG-LLRF) platform based on the RF system-on-chip (RFSoC) for S-band and C-band accelerating structures. The integrated RF data converters in the RFSoC sample and generate the RF signals directly without any analogue mixing circuits, which significantly simplifies the architecture compared with conventional LLRF systems. We have performed high-power tests of the NG-LLRF with the S-band accelerating structure in the Next Linear Collider Test Accelerator (NLCTA) test facility at SLAC National Accelerator Laboratory and with a C-band structure prototyped for the Cool Copper Collider (CCC). The NG-LLRF platform demonstrated pulse-to-pulse fluctuation levels considerably better than the requirements of the targeted applications, and high precision and flexibility in generating and measuring the RF pulses. In this paper, the characterization results of the platform with different system architectures will be summarized and a selection of high-power test results of the NG-LLRF will be presented and analyzed.

        Speaker: Chao Liu (SLAC National Accelerator Laboratory)
      • 41
        Real-time FPGA-based control architecture for high-dynamic mechatronic systems at SIRIUS

        A 4th-generation synchrotron light source demands high-performance mechatronic systems to meet stringent requirements for optical focusing, energy filtering, beam stability, sample positioning, and scanning. SIRIUS, the facility of the Brazilian Synchrotron Light Laboratory (LNLS), has achieved exceptional beam quality, supporting advanced scientific experiments through the continuous development of state-of-the-art mechatronic systems* at its beamlines. A flexible, scalable, and high-performance FPGA-based real-time control architecture has been developed to meet the demanding requirements. It is capable of handling high-dynamic systems with control bandwidth on the order of hundreds of hertz, as well as stable motions at the sub-nanometer scale. This work focuses on the embedded code architecture and implementation, reviewing the hardware topology and its seamless integration enabling the full mechanical capabilities. The design principles, implementation strategies, and performance evaluations demonstrate its effectiveness and potential applicability to advanced control applications. The scalable design has also led to significant improvements in beamline development and deployment throughput, enabling harmonious integration and customization towards a reliable system.

        Speaker: Telles René Silva Soares (Brazilian Synchrotron Light Laboratory)
      • 42
        Design of a standardized FPGA architecture for EIC common platform daughtercards

        The EIC Common Platform is a modular system architecture which will serve as the basis for the EIC Controls Systems. It consists of an SoC-based carrier board with up to two independent pluggable FPGA-based Daughtercards. Different types of Daughtercards have custom electronics catering to the specific needs of an application. All types of Daughtercards will have FPGA logic to support a common protocol for communication with the carrier board as well as a basic set of features for programming and telemetry. Logic to support Daughtercard-specific functionality will be implemented in the same FPGA. Daughtercard FPGA projects will be organized with a common modular structure to facilitate reuse of IP cores while allowing for flexibility within the Daughtercard-specific logic design. The FPGA Firmware Framework (FWK) developed at DESY will be leveraged for managing the generation and building of FPGA projects. The basic functionality and organizational structure of EIC Common Platform Daughtercard FPGA projects are presented.

        Speaker: Paul Bachek (Brookhaven National Laboratory)
    • TUKG Keynote Grand Ballroom

      Grand Ballroom

      Palmer House Hilton Chicago

      17 East Monroe Street Chicago, IL 60603, United States of America
      Convener: Martin Pieck (Los Alamos National Laboratory)
      • 43
        Exploring the smallest things with the largest microscopes

        This talk is about a journey of particle physicists in searching for the most fundamental unit (or the ultimate building blocks) of matter and their properties. Great experiments of the 20th century have led to the discovery of ever-smaller entities that make up what were once thought to be indivisible particles. Moreover, this theory of the very small has been shown to be intimately connected to the largest scales imaginable – cosmology and the beginnings of the universe. Despite these considerable successes, this current theory nevertheless has within it the seeds of its own demise and is predicted to break down when probed at even smaller scales. By using our increased understanding, we continue to peel away at the more hidden layers of truth with the hope of discovering a more elegant and complete theory.

        Speaker: Young-Kee Kim (University of Chicago)
    • TUAG MC01 Status Reports Grand Ballroom

      Grand Ballroom

      Palmer House Hilton Chicago

      17 East Monroe Street Chicago, IL 60603, United States of America
      Conveners: Barbara Ojur (South African Radio Astronomy Observatory), Martin Pieck (Los Alamos National Laboratory)
      • 44
        Electron Ion Collider project and control system updates for 2025

        As the Relativistic Heavy Ion Collider (RHIC) completes its final physics run in 2025, design activities for the future Electron Ion Collider (EIC) that will probe the building blocks of nuclear physics for decades to come have made critical advances.  Recent improvements including a new Electron Injector System and updated Low Energy Cooler for electron-based cooling of hadron beams will be described.  Advancements in the planning and demonstration activities for accelerator controls elements such as the Common Platform front-end computer designs and related infrastructure, EPICS-based software infrastructure, bridging tools to controls for legacy systems required to support hadron injector systems, and networking and computing infrastructure designs will be highlighted.  Our analysis of potential scope of AI/ML integration with controls for the EIC accelerator and detector systems will be introduced.

        Speaker: James Jamilkowski (Brookhaven National Laboratory)
      • 45
        The small size telescope control system design of the Cherenkov Telescope Array Observatory

        The Cherenkov Telescope Array Observatory (CTAO) will include telescopes of three different sizes, the smallest of which are the Small-Sized Telescopes (SSTs). In particular, the SSTs will be installed at the southern site of CTAO, on the Chilean Andes, and will cover the highest energy range of CTAO (up to ~300 TeV). The SSTs are developed by an international consortium of institutes that will provide them as an in-kind contribution to CTAO. The optical design of the SSTs is based on a Schwarzschild-Couder-like dual-mirror configuration. They are equipped with a focal plane camera based on SiPM detectors.
        The Telescope Control System (TCS) is the system responsible for the control and supervision of each telescope. The TCS includes several supervisor components that interface with the telescope local control systems, i.e. the hardware and software that control the telescope's hardware devices, such as the telescope mount drive systems and the Cherenkov camera. The TCS is also the interface between the telescope and the CTAO Array Control and Data Acquisition system (ACADA). As with the mechanical structure, the TCS is derived from what has already been developed within the ASTRI project.
        The design of the SST telescopes was evaluated and approved during the Critical Design and Manufacturing Readiness review (CDMR) organized with CTAO. In this contribution we will present the design of the Telescope Control System, including the results of the CDMR.

        Speaker: Vito Conforti (Osservatorio di Astrofisica e Scienza dello Spazio)
      • 46
        IFMIF-DONES control systems: general architecture

        The IFMIF-DONES (International Fusion Materials Irradiation Facility DEMO-Oriented NEutron Source) Facility is a novel research infrastructure for testing and qualifying structural materials for future fusion reactors under relevant irradiation conditions. The project is implemented through in-kind contributions within the DONES Programme. The IFMIF-DONES Control Systems are organized in two levels, the Central and Local Instrumentation and Control Systems (CICS and LICS), connected through a complex network infrastructure. The CICS comprises three systems: the Control, Data Access and Communication System, which is responsible for normal operation via central supervision and control, timing and synchronization, data management, alarm handling, system administration, and software management; the Machine Protection System, which ensures protection of the investment against failures or operational errors; and the Safety Control System, which implements the safety functions that protect personnel and the environment, including radiological protection functions. Several of the key technologies have been validated through the Linear IFMIF Prototype Accelerator (LIPAc), enabling risk mitigation and design consolidation. This contribution provides an overview of the IFMIF-DONES Control Systems architecture, focusing on strategic decisions for availability, maintainability, and long-term operability, as well as coordination frameworks addressing complex challenges throughout the project lifecycle.

        Speaker: Celia Carvajal Almendros (IFMIF-DONES Spain Consortium)
      • 47
        The new PIP-II superconducting LINAC at Fermilab and its controls status

        The Fermi National Accelerator Laboratory (FNAL) Proton Improvement Project II (PIP-II) is building a new Superconducting Linear Accelerator (SCL) accelerating protons to 800 MeV for injection into the rest of the FNAL beam complex. After two more accelerator synchrotron stages, the complex will provide a high-intensity, 1.2 MW beam of 120 GeV protons onto the Long Baseline Neutrino Facility (LBNF) beamline target. The LBNF target will provide a beam of neutrinos for study by the Deep Underground Neutrino Experiment (DUNE) at the FNAL site and at the SURF lab in South Dakota. We will present the design and status of the new accelerator elements, with special consideration of our new Controls, Data Acquisition and Timing systems. PIP-II has chosen EPICS, Phoebus, and related tools to control this new accelerator system, and we will present our experience and plans here as well.

        Speaker: Daniel Crisp (Fermi National Accelerator Laboratory)
    • 10:00
      Coffee
    • TUBG MC12 Software Development and Management Tools Grand Ballroom

      Grand Ballroom

      Palmer House Hilton Chicago

      17 East Monroe Street Chicago, IL 60603, United States of America
      Conveners: Gianluca Chiozzi (European Organisation for Astronomical Research in the Southern Hemisphere), Guifré Cuní Soler (Paul Scherrer Institute)
      • 48
        Leveraging IT-inspired workflows for PLC software

        PLCs play a crucial role in operating, controlling, and interlocking high-power distributed systems at CERN, including main magnet power supplies and static var compensators. The increasing number, complexity, scale, and specialization of these critical applications make development and maintenance particularly challenging for a small team. To address this, we have introduced IT-inspired engineering workflows and technologies that wrap around the UNICOS framework, balancing customization with standardization. This approach enhances engineering efficiency and software quality. This paper highlights the benefits of using continuous integration pipelines, automated functional tests, reviewed merge requests, and the necessary virtual computing infrastructure, demonstrating how these innovations streamline development and maintenance in high-power system automation.

        Speaker: Jose Manuel de Paco Soto (European Organization for Nuclear Research)
      • 49
        Accelerating Control Systems with GitOps: a path to automation and reliability

        GitOps is a foundational approach for modernizing infrastructure by leveraging Git as the single source of truth for declarative configurations. The poster explores how GitOps transforms traditional control system infrastructure, services, and applications by enabling fully automated, auditable, and version-controlled infrastructure management. Cloud-native and containerized environments are shifting the ecosystem not only in the IT industry but also within computational science, as is the case at CERN and Diamond Light Source, among other accelerator and science facilities, which are gradually shifting towards modern software and infrastructure paradigms. The ACORN project, which aims to modernize Fermilab's control system infrastructure and software, is implementing proven best practices and cutting-edge technology standards including GitOps, containerization, infrastructure as code, and modern data pipelines for control system data acquisition and the inclusion of AI/ML in our accelerator complex.

        Speaker: Maria Acosta (Fermi National Accelerator Laboratory)
      • 50
        Software Release and Integration Verification for the SKA Observatory

        The Square Kilometre Array Observatory (SKAO) is an international effort to construct the world's largest radio telescope in South Africa and Australia, managed as a single observatory from the global headquarters in the UK. SKAO software encompasses the full suite of software products required for telescope operations, developed using the Scaled Agile Framework (SAFe) by over 35 teams across 5 Agile Release Trains. Every 3 months, teams plan and co-ordinate their work to meet observatory milestones through continuous integration and continuous delivery (CI-CD) pipelines.

        Maintaining high quality software is critical, as released artefacts are verified by Assembly, Integration and Verification (AIV) teams in test facilities before production deployment. A structured release process enables continuous delivery of new features. Releases follow semantic versioning and are managed through a dedicated Jira-based release project, capturing detailed changelogs, documentation, and stakeholder-relevant metadata.

        All releases undergo verification in a cloud-based integration environment using simulators and automated tests. This enables early detection of issues, establishes known working baselines, and supports rollback.

        This paper outlines how SKAO's release management framework supports the coordinated delivery of this complex software system, and how integration verification improves the quality, reliability, and traceability of releases delivered to AIV and other stakeholders.

        Speaker: Snehal Ujjainkar (SKA Observatory)
      • 51
        Team Red: Experiences of an agile task force supporting multiple software products

        Controlling CERN’s accelerator complex requires a significant number of domain-specific applications. Typically these are developed by small teams comprising a staff member with essential domain knowledge and one or two students or graduates staying for one to three years. Developments may span months or even years, according to the complexity and requirements, which may evolve significantly over time to follow the accelerator programme. After some years, applications need to be consolidated or re-written to follow the inevitable technology evolution and remain secure, hardware compatible, and based on technologies for which skills exist in the marketplace. This development approach has produced results, but is inefficient due to the isolated developments sometimes leading to black-box solutions that eventually need reverse-engineering. Furthermore, long-term staff members can become siloed within specific domains, increasing the risk of single points of failure. This paper shares the results and experience gained over 18 months in applying a more agile, task-force team-based approach to address the aforementioned limitations and inefficiencies. Together with the application of micro-frontend architectures, the team has been able to develop and maintain multiple applications with enhanced distribution of both domain and technical knowledge, as well as predictable delivery times. The team’s workflows will be described, together with challenges faced and lessons learned.

        Speaker: Anti Asko (European Organization for Nuclear Research)
      • 52
        Hunting for hidden bugs: dealing with test flakiness in SKA control software

        Test flakiness—when a test intermittently passes or fails without changes to the code—poses a significant challenge in the validation of distributed control systems. This paper presents an investigation into test flakiness in CSP.LMC (Local Monitoring and Control for the Central Signal Processor), a key subsystem of the SKA (Square Kilometre Array) telescope. CSP.LMC is a Python application built on the TANGO framework that is tested using a multi-level testing approach combining unit, component, and integration tests. To achieve scalable and reproducible deployment, the entire SKA control software runs within a Kubernetes environment. We systematically collect test outcomes and execution benchmarks to monitor system stability over time. A data mining approach is applied to uncover correlations and hidden patterns associated with test instability. Our analysis aims to uncover subtle software issues that are not easily detected through standard test evaluation. Furthermore, we aim to explore how the complexity of both the software architecture and its deployment may introduce sources of non-determinism that can lead to flaky tests. We discuss the impact of flakiness on the reliability of SKA control software and propose practical strategies to benchmark, detect, and mitigate flaky tests in complex distributed environments.
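
        As a hedged illustration of the data-mining pass described above, the sketch below flags tests that have both passed and failed on the same, unchanged revision, a common operational definition of flakiness. The CSV export, column names, and ranking are assumptions for illustration, not the actual CSP.LMC tooling.

          import pandas as pd

          # Illustrative columns: test_id, git_sha, outcome ("passed"/"failed").
          runs = pd.read_csv("test_outcomes.csv")  # hypothetical CI export

          # A test is a flakiness suspect if, for at least one unchanged revision,
          # both passing and failing outcomes were recorded.
          per_rev = runs.groupby(["test_id", "git_sha"])["outcome"].nunique()
          suspects = per_rev[per_rev > 1].reset_index()["test_id"].unique()

          # Rank suspects by overall failure rate to prioritise investigation.
          summary = (
              runs[runs["test_id"].isin(suspects)]
              .assign(failed=lambda df: df["outcome"].eq("failed"))
              .groupby("test_id")
              .agg(n_runs=("outcome", "size"), fail_rate=("failed", "mean"))
              .sort_values("fail_rate", ascending=False)
          )
          print(summary)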

        Speaker: Gianluca Marotta (INAF - OAA (Arcetri Astropysical Observatory))
    • TUBR MC03 Control System Sustainment and Management Red Lacquer Room

      Red Lacquer Room

      Palmer House Hilton Chicago

      17 East Monroe Street Chicago, IL 60603, United States of America
      Conveners: Barry Fishler (SLAC National Accelerator Laboratory), Denise Finstrom (Fermi National Accelerator Laboratory)
      • 53
        Squeezing every last drop: life extension of the LHC and PS access safety systems

        The LHC Access Safety System (LASS) was commissioned in 2008 for the initial launch of the LHC, while the current PS Access Safety System (PASS) entered operation in 2015 after a full multi-year revision. By the end of the upcoming LHC Long Shutdown 3 in 2030, LASS will have accumulated over 20 years of service and PASS 15 years. Both systems are based on Siemens series 400 safety PLCs, a product line that is being progressively phased out by the vendor. After evaluating several consolidation strategies to ensure the smoothest possible continuity of operation of both LHC and PS, we have opted for a life extension of both current access safety systems. This strategy retains the existing systems but introduces a number of architectural enhancements to mitigate current limitations that could impact accelerator availability. The objective is to secure reliable long-term operation of both safety systems, while also improving operational autonomy, maintenance capabilities, and resilience to vendor product life cycles. This paper outlines the rationale behind the selected approach, the external factors influencing our decision, and the key steps planned to implement this consolidation effectively.

        Speaker: Timo Hakulinen (European Organization for Nuclear Research)
      • 54
        Managing technical debt across large-scale control systems

        This presentation provides a comprehensive overview of technical debt in the context of control systems for large-scale physics facilities. We will explore its various forms, common causes, and potential consequences on system reliability, maintainability, and upgradability. Drawing on experiences from multiple projects, we will present a range of strategies for proactively managing technical debt, including best practices in design, development, testing, and documentation, as well as reactive approaches for identifying and mitigating existing issues. The presentation will also emphasize the importance of a collaborative approach and the use of modern tools, like fault tracking systems, to ensure the long-term health and success of control systems in our field.

        Speaker: Adam Watts (Fermi National Accelerator Laboratory)
      • 55
        Managing obsolescence of alignment control system on the Laser Megajoule facility

        The Laser MegaJoule, a 176-beam laser facility developed by CEA, is located near Bordeaux. It is part of the French Simulation Program, which combines improvement of theoretical models used in various domains of physics and high performance numerical simulation. It is designed to deliver about 1.4 MJ of energy on targets, for high energy density physics experiments, including fusion experiments.
        The LMJ technological choices were validated on the LIL, a scale-1 prototype composed of one bundle of 4 beams. The first bundle of 8 beams was commissioned in October 2014 with the first experiment on the LMJ facility. The operational capabilities are increasing gradually every year until full completion by 2026. By the end of 2025, 22 bundles of 8 beams will be assembled (full scale) and 19 bundles are expected to be fully operational.
        The alignment control system is an essential part of the LMJ facility that allows the alignment of all the beams through the bundles to the target. It is built to control several hundred devices, mainly drives, but also cameras and lasers.
        The architecture of the alignment control system is based on physical and software PLCs that were chosen 15 years ago. These PLCs are now obsolete and have to be replaced. This article discusses our strategy to manage the obsolescence of the alignment control system while minimizing the impact on the operation of the LMJ.

        Keywords: Laser facility, LMJ, IT, Obsolescence, PLC, Control System

        Speaker: Julien Hannier (Commissariat à l'Énergie Atomique et aux Énergies Alternatives)
      • 56
        Operational experience and optimization techniques for large EPICS control system environments

        The Spallation Neutron Source (SNS) utilizes the EPICS control system to manage the main accelerator, multiple test stands, and most neutron scattering instruments. The control system environment for the main accelerator alone includes approximately 300 Input/Output Controllers (IOCs), 200 soft IOCs, and 100 Operator Interface (OPI) workstations, all communicating through over 1,000,000 process variables. The EPICS distributed control system relies heavily on UDP broadcast communication to locate process variables on the network, which can significantly impact the operation of connected devices and, in turn, affect accelerator uptime. In this paper, we present operational challenges and discuss optimization techniques applied to the EPICS environment at SNS.
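
        One widely used EPICS mechanism for reducing broadcast name-search traffic is to disable automatic address-list discovery and direct searches at specific gateways or name servers; whether SNS applies this particular setting is not stated above, and the host and PV names in the sketch below are placeholders.

          import os

          # Must be set before the first Channel Access connection is created.
          os.environ["EPICS_CA_AUTO_ADDR_LIST"] = "NO"
          os.environ["EPICS_CA_ADDR_LIST"] = "cagw-a.example.org cagw-b.example.org"

          from epics import PV  # pyepics; libca picks up the environment above

          p = PV("DEMO:BEAM:CURRENT")  # placeholder PV name
          print(p.get())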

        Speaker: Klemen Vodopivec (Oak Ridge National Laboratory)
      • 57
        Leaving agile island: using Flight Levels at the ALBA Synchrotron

        Integrating agile methodologies within large, non-agile organizations often results in isolated agile islands and inefficient Water-Agile-Fall models. This work outlines the practical approach adopted by ALBA Synchrotron's Computing Division to embed agile practices effectively within a traditional, hierarchical structure. Instead of complex, large-scale frameworks (SAFe, SoS or LeSS), we employed a flow-based agile integration strategy based on Klaus Leopold's Flight Levels.
        Teams utilize agile development at the operational level (Flight Level 1). The critical integration occurs at Flight Level 2 (Coordination), focusing on optimizing end-to-end value streams across teams. Key FL2 mechanisms include visualizing the entire workflow, enforcing strict Work-in-Progress (WIP) limits to enhance predictability and throughput, fostering targeted communication, and ensuring reliable commitments. In practice, we have developed a custom app that draws data from the Jira API (Atlassian ecosystem) as a keystone, ensuring the crucial aspects of workflow visualization and enhanced communication. To bridge the gap with non-agile stakeholders, we track progress via time investment versus estimates and utilize milestones (e.g., MVPs, system validation) as governance and delivery checkpoints.
        Additionally, we will discuss how these practices demonstrate reliability and transparency, cultivating organizational trust between agile teams and the wider hierarchical structure.
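
        As a hedged sketch of the kind of query such a custom Jira-backed app might issue, the snippet below counts in-progress issues per project via the Jira REST API; the instance URL, credentials, JQL, and project keys are invented placeholders, not ALBA's actual configuration.

          import requests

          JIRA = "https://jira.example.org"         # placeholder instance URL
          AUTH = ("svc-flightlevels", "api-token")   # placeholder credentials

          def wip_count(project: str) -> int:
              """Count issues currently in progress for one project (illustrative JQL)."""
              jql = f'project = "{project}" AND statusCategory = "In Progress"'
              resp = requests.get(
                  f"{JIRA}/rest/api/2/search",
                  params={"jql": jql, "maxResults": 0},  # only the total is needed
                  auth=AUTH,
                  timeout=10,
              )
              resp.raise_for_status()
              return resp.json()["total"]

          for team in ["CTRL", "ELEC", "MIS"]:  # placeholder project keys
              print(team, wip_count(team))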

        Speaker: Oscar Matilla (ALBA Synchrotron Light Source)
      • 58
        Maintaining control system viability and reliability amidst declining budgets, resources and sanity (or: How I learned to stop worrying and love saying no).

        Jefferson Lab’s CEBAF electron accelerator has been in operation for over 30 years. The first 4 GeV beam was delivered in 1994. Machine energy increased to 6 GeV, then 12 GeV by 2014. CEBAF has made significant contributions to nuclear physics, refining our understanding of QCD, making precision tests of the Standard Model, and identifying the internal dynamics of protons and neutrons. Further upgrades are planned. CEBAF’s control system was initially based on an in-house design, the CEBAF Control System (CCS). The mid-90s brought a transition to EPICS. During this period, Jefferson Lab’s software team (now incorporated in the Accelerator Computing Group) was a significant contributor to the EPICS community. In the intervening years, the scope of ACG’s responsibilities has increased dramatically, while group size has declined from 3.2% of the lab population in 2000 to 1.6% presently. Developers have attempted to compensate for this shortfall through creative scaling, improved process efficiency, and heroic effort. Significant challenges with hardware and software obsolescence and technical debt remain to be addressed. This work will re-introduce Jefferson Lab’s accelerator programs and discuss control system evolution, approaches taken to mitigate the imbalance between resources and responsibilities, team management, and current and future modernization efforts.

        Speaker: Mr Gary Croke (Thomas Jefferson National Accelerator Facility)
    • 12:00
      Lunch
    • TUCG MC09 Experimental Control and Data Acquisition Grand Ballroom

      Grand Ballroom

      Palmer House Hilton Chicago

      17 East Monroe Street Chicago, IL 60603, United States of America
      Conveners: Mark Rivers (Consortium for Advanced Radiation Sources), Steven Hartman (Oak Ridge National Laboratory)
      • 59
        EPICS areaDetector overview and update

        areaDetector is an EPICS framework for 2-D and other types of detectors that is widely used in synchrotron and neutron facilities. An overview of areaDetector and enhancements since the last ICALEPCS presentation in 2017 will be presented. These include:
        - HDF5 file writing plugin:
        o Support for Blosc, lz4, bitshuffle/lz4, and JPEG compression.
        o Support for Direct Chunk Write which allows directly writing precompressed NDArrays, improving performance.
        - Support for automatically converting medm OPI files to CSS/Phoebus, CSS/Boy, edm, and caQtDM. The other OPI files are now included in the distribution.
        - Capability for drivers to wait until all plugins are done processing before declaring acquisition to be complete.
        - NDPluginCodec to compress and decompress NDArrays. Supported codecs are Blosc, lz4, bitshuffle/lz4, and JPEG. A primary application is to transport compressed NTNDArrays using pvAccess (a brief client-side sketch follows this list).
        - NDPluginBadPixel to flag bad pixels in NDArrays.
        - New detector drivers, including:
        o ADEiger for Dectris Eiger detectors.
        o ADSpinnaker for FLIR cameras with their Spinnaker SDK.
        o ADVimba for Allied Vision cameras with their Vimba SDK.
        o ADGenICam base class for GenICam cameras.
        o ADAravis for GenICam cameras using the open-source aravis library.
        o ADEuresys for CoaXPress cameras using Euresys frame grabbers.
        o ADDCAMHamamatsu for Hamamatsu cameras using their DCAM library.
        - A roadmap for future developments will also be presented.
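
        The sketch below is a hedged client-side illustration of driving the records mentioned above with pyepics; the IOC prefix follows the simDetector example convention and is site-specific, so all PV names should be read as placeholders.

          from epics import caput, caget

          P = "13SIM1:"  # illustrative prefix (areaDetector simDetector convention)

          # Enable the codec plugin and select Blosc compression for NTNDArray transport.
          caput(P + "Codec1:EnableCallbacks", 1)
          caput(P + "Codec1:Compressor", "Blosc")

          # Start a short acquisition on the camera driver.
          caput(P + "cam1:ImageMode", "Multiple")
          caput(P + "cam1:NumImages", 10)
          caput(P + "cam1:Acquire", 1)

          print("Detector state:", caget(P + "cam1:DetectorState_RBV", as_string=True))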

        Speaker: Mark Rivers (Consortium for Advanced Radiation Sources)
      • 60
        Data acquisition and on-the-fly processing from high rate detectors at MAX IV

        At MAX IV, we have developed a high-performance data acquisition (DAQ) system to handle the high-rate detectors exploiting the brightness of the fourth-generation source. This system integrates multiple detector types, including photon counting and charge integrating detectors as well as sCMOS cameras, into a unified DAQ framework. Data are streamed to a central Kubernetes cluster which mounts IBM Storage Scale (GPFS) storage, with control provided via Tango. The system provides live feedback from the detectors/cameras and is furthermore extended to provide on-the-fly data reduction via the "Dranspose" framework, a horizontally scalable, distributed data analysis pipeline. We present an overview of the diverse detector suite at MAX IV and describe the components of our DAQ and processing framework, highlighting its performance for live data streaming and on-the-fly reduction with reference to several applications.

        Speaker: Aleko Lilius (MAX IV Laboratory)
      • 61
        A motion control system for fly scans synchronized to X-ray pulses

        With the High Energy Upgrade project in the installation phase at the SLAC National Accelerator Laboratory Linac Coherent Light Source (LCLS), soon the repetition rate of the X-ray beam will exceed the capability of the present motion control system to reliably deliver samples. We describe the design of a motion control system which aims to deliver solid crystal samples within 5 µm of the focal point of the X-ray beam at a rate of 1000 Hz using a fly scan. Throughout the fly scan, a triggered timestamping device is used with an EtherCAT based distributed clock to predict the position error at the time of each X-ray pulse. This position error is used in a feedback loop to bias the velocity command to each motion axis and restore synchronism. The design is flexible enough to expand to any number of synchronized axes up to the limits of the computing hardware. Additionally, with most of the software written in Structured Text language defined in IEC 61131-3, it is transferable to other EtherCAT based real-time systems with few modifications. Finally, we describe how the TwinCAT PLC programming environment can be used to develop and test almost all of the functionality of the software without hardware present, through a combination of unit testing and simulation axes.
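
        The toy model below gives a hedged, schematic rendering of the velocity-bias feedback idea (the production implementation is IEC 61131-3 Structured Text under TwinCAT); the gain, trajectory, and plant model are invented for illustration only.

          # Schematic: at each X-ray pulse, a timestamped position error biases the
          # commanded velocity so the axis stays synchronous with the pulse train.
          PULSE_RATE_HZ = 1000.0
          DT = 1.0 / PULSE_RATE_HZ
          NOMINAL_VEL = 0.5e-3   # m/s, illustrative scan speed
          KP = 2.0               # illustrative proportional gain

          def target_position(t: float) -> float:
              """Nominal constant-velocity fly-scan trajectory (illustrative)."""
              return NOMINAL_VEL * t

          def run_scan(n_pulses: int = 2000) -> None:
              pos = 0.0
              for i in range(n_pulses):
                  t = i * DT
                  error = target_position(t) - pos    # error predicted at pulse time
                  vel_cmd = NOMINAL_VEL + KP * error  # bias the velocity command
                  pos += vel_cmd * DT                 # crude plant: integrate the command
              print(f"residual error: {target_position(n_pulses * DT) - pos:+.3e} m")

          run_scan()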

        Speaker: Nicholas Lentz (SLAC National Accelerator Laboratory)
    • TUCR MC02 Control System Upgrades Red Lacquer Room

      Red Lacquer Room

      Palmer House Hilton Chicago

      17 East Monroe Street Chicago, IL 60603, United States of America
      Conveners: Misaki Komiyama (The Institute of Physical and Chemical Research), Yuliang Zhang (Chinese Academy of Sciences)
      • 62
        Integrating EPICS, OPC UA, and TSN: A Unified, AI-Ready Control Architecture for Particle Accelerators and Large Research Facilities.

        The increasing complexity and data demands of modern particle accelerators and large research facilities necessitate a paradigm shift towards unified, intelligent control system architectures. This paper proposes a study to enhance a particle accelerator control framework architecture in order to develop an AI-ready control system, drawing upon the strengths of both open-source and commercial control frameworks, on the basis of the IFMIF-DONES control design experience. By integrating the widely adopted Experimental Physics and Industrial Control System (EPICS) with industrial Supervisory Control and Data Acquisition (SCADA) systems via a key component, an OPC UA server for EPICS pvAccess, a seamless and standardized communication layer is established. This hybrid approach enhances flexibility, scalability, and long-term maintainability while leveraging the benefits of both ecosystems. Furthermore, the proposed architecture explores the potential of unifying traditionally separate networks, such as Timing, Control/Monitoring, and Interlock, through Time-Sensitive Networking (TSN) to simplify infrastructure and improve bandwidth utilization. This paper outlines the architectural design, discusses the advantages of this integrated approach in achieving AI readiness and improved interoperability, and highlights the role of an EPICS-to-OPC UA gateway as a cornerstone for future advancements in accelerator control.
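
        The following is a minimal, heavily simplified sketch of the gateway idea: a p4p pvAccess monitor mirrors a scalar PV into a python-opcua server variable. The endpoint, namespace URI, and PV name are placeholders, and the actual IFMIF-DONES gateway is not described at this level of detail above.

          import time
          from p4p.client.thread import Context   # pvAccess client (p4p)
          from opcua import Server                 # python-opcua server

          # OPC UA side: expose one variable under a placeholder namespace.
          server = Server()
          server.set_endpoint("opc.tcp://0.0.0.0:4840/epics-bridge/")  # placeholder
          idx = server.register_namespace("urn:example:epics-bridge")   # placeholder
          obj = server.get_objects_node().add_object(idx, "Accelerator")
          beam_current = obj.add_variable(idx, "BeamCurrent", 0.0)
          server.start()

          # EPICS side: monitor a pvAccess PV and mirror updates into OPC UA.
          ctx = Context("pva")

          def on_update(value):
              # p4p unwraps NTScalar doubles to a float-like object by default.
              beam_current.set_value(float(value))

          sub = ctx.monitor("DEMO:BEAM:CURRENT", on_update)  # placeholder PV name

          try:
              while True:
                  time.sleep(1.0)
          finally:
              sub.close()
              ctx.close()
              server.stop()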

        Speaker: Javier Cruz Miranda (Universidad de Granada)
      • 63
        Control system upgrades at the National Ignition Facility for higher laser energy and higher fusion yields

        Following the landmark achievement of fusion ignition in December 2022, the National Ignition Facility (NIF) has now repeated ignition multiple times, reaching record yields and fusion gains. To further advance fusion research into new experimental regimes, NIF is currently planning the Enhanced Yield Capability (EYC) upgrade, raising laser energy to 2.6 MJ by fully utilizing the laser amplification potential of its design. Simulations predict EYC yields exceeding 30 MJ, enabling transformative opportunities for Inertial Confinement Fusion (ICF) and High-Energy-Density (HED) sciences. This paper focuses on the dual challenge of implementing EYC while sustaining aging control systems nearly two decades old. While the data-driven NIF control system architecture requires only modest modifications for higher laser energy, these still demand coordination with the sustainment of the pulse shaping, amplification, and optical damage mitigation subsystems. Upgrades must remain compatible with legacy interfaces and hybrid legacy-modern components while delivering enhanced performance for higher energies. We detail the technical approaches and operational strategies for integrating capability enhancement and component renewal in a facility with ongoing experiments, highlighting how well-planned design synergies minimize conflicts between major upgrades and sustainment efforts.

        Speaker: Mike Fedorov (Lawrence Livermore National Laboratory)
      • 64
        Parallel control systems: an efficient and low risk approach for a migration from Vsystem to EPICS

        For over 30 years, the AGOR cyclotron control system at UMCG PARTREC has relied on Vsystem. However, the limitations of Vsystem's aging technology stack hinder efforts to improve reliability. To address this, we have decided to migrate to EPICS. Given the limited IT resources at PARTREC, a cost-effective migration strategy is essential. Additionally, planned and unplanned accelerator downtime must be kept to an absolute minimum. Instead of the conventional approach of a gradual transition, we have opted for a different method: running both Vsystem and EPICS concurrently as fully configured control systems. During migration, all controllers will communicate with both systems simultaneously, ensuring continuity and minimizing downtime. This paper outlines the feasibility of this approach, its cost-effectiveness, and the proofs of concept conducted to validate its implementation.

        Speaker: Klaas Winter (University Medical Center Groningen)
      • 65
        Control system considerations for LANSCE modernization and integration of the LAMP front-end

        The Los Alamos Neutron Science Center (LANSCE) has embarked on a major modernization effort through the LANSCE Modernization Project (LAMP), which has recently received approval for mission need. LAMP proposes a new 100 MeV front end to replace aging components, including the proton sources, Cockcroft-Walton generators, and 100 MeV drift tube linac. This new design will integrate with the existing cavity-coupled linac to reach the facility’s full design energy of 800 MeV, with full deployment anticipated by 2030. The modernization project introduces significant control system challenges, particularly in integrating new high-performance subsystems while maintaining full operability of the legacy infrastructure, which will remain responsible for approximately 85% of the accelerator complex. This paper discusses control system strategies for timing synchronization, high-speed data acquisition, and software integration. Key topics include compatibility between legacy and modern control protocols, deployment of real-time data systems, and software development to ensure operational continuity. The LANSCE control system must provide seamless support for both existing and modernized hardware, enabling efficient operation and long-term sustainability.

        Speaker: Heath Watkins (Los Alamos National Laboratory)
    • TUMG Mini-Orals (MC02, MC16) Grand Ballroom

      Grand Ballroom

      Palmer House Hilton Chicago

      17 East Monroe Street Chicago, IL 60603, United States of America
      Convener: Karen White (Oak Ridge National Laboratory)
      • 66
        Upgrade of the Los Alamos Neutron Science Center (LANSCE) Beam Chopper Pattern Generator

        LANSCE delivers macropulses of beam, hundreds of microseconds in duration and at a nominal repetition rate of 120 Hz, to five experiment areas. These macropulses are distributed to four H⁻ areas and one H⁺ area. Each of the H⁻ experiment areas requires a unique beam time structure within the macropulse. This time structure is imposed on the beam by a traveling wave chopper located in the H⁻ Low Energy Beam Transport (LEBT) section of LANSCE. The chopper is driven by pulsed power systems which receive digital signals generated by the LANSCE chopper pattern generator. This chopper pattern generator system must maintain tight synchronization with multiple LANSCE RF reference signals and is triggered by the LANSCE master timer system. This paper describes a recent upgrade to the LANSCE chopper pattern generator from its original NIM/CAMAC/VXI form factor, including details of software and hardware, test results, and future plans.

        Speaker: Anthony Braido (Los Alamos National Laboratory)
      • 67
        Accelerator Process Water Upgrade at ANL/APS

        This presentation will describe recent hardware and software updates to the Accelerator Process Water System of the Advanced Photon Source at Argonne National Laboratory. The topics covered include replacing outdated PLC hardware, updating EPICS software (deploying a Python application called ‘plcepics’ to build EPICS databases), and an overview of problems encountered during commissioning of the control system.

        Speaker: James Stevens (Argonne National Laboratory)
      • 68
        The IRRAD Proton Irradiation Facility Data Management, Analytics, Control and Beam Diagnostic systems: current performance and outlook beyond the CERN Long Shutdown 3

        The proton irradiation facility (IRRAD) at the CERN East Area was built in 2014 during Long Shutdown 1 (LS1), and later improved during LS2 (2019), to address the needs of the HL-LHC accelerator and detector upgrade projects. IRRAD, together with the CHARM facility on the same beamline, exploits the 24 GeV/c proton beam of the Proton Synchrotron (PS), providing an essential service at CERN, with more than 4400 samples irradiated during the last decade. IRRAD is operated with precise custom-made irradiation systems, instrumentation for beam monitoring (IRRAD-BPM), operational GUIs (OPWT), and a dedicated data management tool (IDM) for experiment follow-up and sample traceability. Moreover, performance tracking generated by custom-made analytics tools (Jupyter, etc.) guarantees regular feedback to PS operation, thus maximizing beam availability for IRRAD. While HL-LHC component qualification is coming to an end with LS3 (2026-2028), new challenges arise for future detector, electronics component, and material irradiations, such as reaching extremely high fluence levels and operating with lower-momentum or heavy-ion beams. In this context, we first describe the latest software and hardware improvements implemented at IRRAD after LS2 and then present the challenges ahead that will drive future upgrades, such as applying machine learning techniques to the IRRAD-BPM data with the aim of achieving real-time automatic beam steering and control.

        Speaker: Federico Ravotti (European Organization for Nuclear Research)
      • 69
        Development of an EtherCAT-based control system for an In-Vacuum Undulator for SPring-8-II

        SPring-8, a third-generation light source, has operated for nearly three decades. Recently, light source accelerators have transitioned towards fourth-generation light sources, which implement low-emittance storage rings. Therefore, SPring-8 will upgrade its storage ring to a new one named SPring-8-II between 2027 and 2028. The upgrade involves implementing new Insertion Devices (IDs), specifically In-Vacuum Undulators for SPring-8-II (IVU-II), and optimizing accelerator control systems. As part of the control system upgrade for slow control, we are replacing VME-based systems with EtherCAT-based systems*. Between 2023 and 2027, the schedule dictates the annual installation of three to a maximum of six IVU-IIs, and we will install EtherCAT control systems accordingly. Crucially, IVU-II control systems installed during the SPring-8 phase must be compatible with the varying operational parameters of SPring-8 and SPring-8-II. In 2024, we implemented the first EtherCAT-based control system, which satisfies the requirements. This system manages the gap between magnets and two power supplies for two steering magnets, monitors magnet temperatures and the vacuum system, and handles interlock signals. In the SPring-8-II era, dedicated systems such as the vacuum controls and the interlock system will handle vacuum and interlock functions, reallocating them from ID controls. Future ID controls will employ the EtherCAT model.

        Speaker: Kosei Yamakawa (Japan Synchrotron Radiation Research Institute)
      • 70
        Continuous integration of control systems in parallel to the existing systems of LIPAc, for a radio-frequency conditioning test bench.

        Under the Broader Approach agreement between Japan and Europe, the Linear IFMIF Prototype Accelerator (LIPAc) aims to validate the International Fusion Materials Irradiation Facility (IFMIF) accelerator design by producing a 125 mA deuteron beam at 9 MeV in continuous wave. In parallel to the installation of a superconducting linear acceleration stage, a high-power test bench was set up for the testing and conditioning of four pairs of radio-frequency (RF) couplers for LIPAc’s RF quadrupole*. Accordingly, the control system (CS) part was implemented in parallel to the existing CS of LIPAc, benefiting from the tools available while avoiding their modification. Additional functionalities and devices were also integrated to address the specific needs of the test bench. This work was performed continuously during operation of the test bench, identifying and answering further needs, such as deploying an automated conditioning tool or enabling slow feedback loops for automatic parameter tuning. Furthermore, this test bench became a testing environment for the modifications foreseen in the LIPAc CS refurbishment plan, such as upgrading the CS framework to EPICS v7, switching to CS-Studio Phoebus and its applications for the operator interfaces, or using Debian 12 as the operating system and Proxmox 8 for the virtualization environment. The experience acquired here will be valuable for the IFMIF-DONES (DEMO-Oriented NEutron Source) Facility, the implementation of IFMIF.

        Speaker: Lucas Maindive (IFMIF-DONES Spain Consortium)
      • 71
        Development of GigE vision camera control system and application to beam diagnostics for SPring-8 and NanoTerasu

        As an imaging system supporting beam diagnostics using screen monitors (SCMs) at the SPring-8 site, we have continuously developed and improved a GigE Vision camera control system and expanded its adoption. By adopting the versatile open-source library Aravis, we eliminated vendor dependency and built an image acquisition system integrated into the SPring-8 control framework, MADOCA 4.0. Key features include the ability to control up to eight GigE cameras per computer with centralized management of camera power, trigger distribution, and screen operations. Its long-distance cabling enables flexible and simple deployment. Operation is achieved by writing the configuration file without programming, significantly reducing development costs and time. As part of the SPring-8 upgrade, this system was successfully implemented for the SCMs of the beam transport line (XSBT) that uses the SACLA linac as the injector for the SPring-8 storage ring*. We expanded the application of this system to the SCMs of the SACLA linac and the SACLA-BL1 linac (SCSS+), replacing the complex and costly Camera Link cameras. We also newly applied it to NewSUBARU injector linac and NanoTerasu in Sendai. This presentation outlines the R&D of our GigE Vision camera control system for stability and enhancements, reporting on multi-facility deployment, operation, and stabilization efforts toward advanced utilization like automated beam parameter optimization from beam diagnostics using machine learning.

        Speaker: Dr Akio Kiyomichi (Japan Synchrotron Radiation Research Institute)
      • 72
        Update on migration to EPICS at the ISIS accelerators

        The ISIS Neutron and Muon Facility accelerators are migrating to an EPICS control system. The tools developed to run two control systems in parallel and to automate the migration of hardware and user interfaces to EPICS have been previously presented. We now detail our emerging EPICS setup. Hardware interfaces are implemented as a mixture of conventional EPICS IOCs, in-house developed equivalents in Python, and bridges through our old control system. Our user interfaces are primarily the Phoebus stack, but web interfaces in Python are being explored, particularly to support machine learning purposes such as automated optimisation and anomaly detection. We present issues which may arise at any site in transition, such as handling continuity of data archiving.
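
        As a generic illustration of serving PVs directly from Python (the pythonSoftIOC package shown here is one option, not necessarily the in-house ISIS implementation), a minimal soft IOC can look like the sketch below; the device and record names are placeholders.

          from softioc import softioc, builder
          import cothread

          builder.SetDeviceName("DEMO")   # placeholder device prefix
          current = builder.aIn("BEAM:CURRENT", initial_value=0.0, EGU="uA")

          builder.LoadDatabase()
          softioc.iocInit()

          def poll_hardware():
              while True:
                  # A real interface would read the device here; this just counts up.
                  current.set(current.get() + 1.0)
                  cothread.Sleep(2.0)

          cothread.Spawn(poll_hardware)
          softioc.interactive_ioc(globals())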

        Speaker: Dr Ivan Finch (Science and Technology Facilities Council)
      • 73
        SOLARIS synchrotron control system upgrade: addressing challenges and implementing solutions

        The National Synchrotron Radiation Centre SOLARIS*, a 3rd Generation Synchrotron Light Source, stands as the most advanced research infrastructure in Poland. Since its commencement of operation in 2015, SOLARIS has undergone significant expansions. Initially, system upgrades were straightforward to implement. However, as the facility matured, new beamlines were created, and the amount of equipment increased significantly. This led to a rise in the complexity of upgrades, prompting the SOLARIS team to focus on creating automation tools for deployments and maintaining up-to-date libraries and software. During this period, many versions of libraries, such as Python and PyQt, as well as the CentOS operating system, became obsolete, leading to increased maintenance costs. To address these challenges, a comprehensive strategy was developed. This strategy includes transitioning from CentOS 6 and 7 to AlmaLinux 9, upgrading older versions of Python to version 3.9, and updating automation tools such as Ansible and GitLab CI/CD. This paper presents the methodology employed for the control system upgrade, detailing the architecture of the new system, the upgrade process, and the challenges encountered.

        Speaker: Michał Fałowski (SOLARIS National Synchrotron Radiation Centre, Jagiellonian University)
      • 74
        MAPS vacuum control system upgrade at ISIS neutron and muon source

        The control system, based on Schneider Quantum PLC and HMI hardware, had been in operation for over two decades and faced increasing reliability and support challenges due to hardware obsolescence, lack of OEM support, and limited compatibility with modern protocols. To address these issues, the system was upgraded using an Omron NX-series PLC and NA5 HMI, along with a complete redesign of the control cabinet and documentation conforming to the British Standard BS 7671 Wiring Regulations. A major challenge was integrating legacy devices such as the TPG300 vacuum gauge and cryopump controllers using RS232 communication. The LabVIEW-based interface acting as a bridge to the cryopump controller was replaced with custom serial PLC logic, eliminating reliance on a third-party device. The new system uses OPC UA to interface with EPICS (IBEX), enhancing cybersecurity and data integrity. A critical safety flaw in the legacy logic that risked over-pressurising the vacuum tank was resolved by redesigning the vacuum personnel protection system. The transition process included various tests and design reviews with stakeholders. The upgraded PLC logic now includes fault diagnostics for each device, improving maintainability and troubleshooting. The HMI redesign includes a new GUI and enhanced information display to better support scientists and operators. Offline commissioning was performed using Node-RED and OPC UA simulation. The result is a more secure, more robust system.

        Speaker: Aamir Khokhar (ISIS Neutron and Muon Source, Science and Technology Facilities Council)
      • 75
        Embracing the accelerator computing revolution at SLAC

        We face a number of challenges in planning future controls and computing for large accelerator facilities. Online tuning increasingly requires 6-d phase space customization, fast numerical estimation methods, and space-charge modeling on timescales relevant to operations. These needs are being met by advances in machine learning, artificial intelligence, and the proximity of multi-particle methods to accelerator operations, whose outcomes must be deployed to effectively change how we do accelerator physics and experiment optimization. This imperative, along with cyber and technical debt mitigation, is driving changes in architecture: in controls to add high performance and edge computing, in data systems to add high fidelity and vector databases, and in networks to interconnect these and add security and throughput. At the same time, the data from devices gets larger and more complicated, requiring new data structures and new control primitives that incorporate data semantics. These changes are happening in the face of increased funding pressure. However, there are also more tools at our disposal, and technologies from the web, streaming, and internet domains that we can embrace to help. We present these drivers, our vision of an integrated response, the path we're on, architectures and data systems in development to support the new physics techniques and tools, and our roadmap for the next few years.

        Speaker: Greg White (SLAC National Accelerator Laboratory)
      • 76
        Control software and technology choices for the electron-ion collider

        The Electron-Ion Collider (EIC) will succeed the current Relativistic Heavy Ion Collider (RHIC) at Brookhaven National Laboratory. For over two decades, RHIC and its injectors have relied on a homegrown Accelerator Device Object (ADO)-based control system, which has provided a reliable and efficient operational framework. However, the EIC’s requirements—such as a greater number of subsystems, higher uptime, increased data rates, and other factors—demand significant enhancements. Advances in both hardware and software technologies since the RHIC era have expanded the range of available options, each with its own set of benefits and challenges. In response, the EIC plans to deploy state-of-the-art technologies to meet these elevated demands, favoring open-source and community-driven solutions wherever feasible. This talk will focus on the control software and the technology choices under consideration and the strategies being adopted for the EIC.

        Speaker: Md Latiful Kabir (Brookhaven National Laboratory)
      • 77
        New L2SI dynamic reaction microscope endstation in TMO: control system design, installation and integration

        To take advantage of the world's most powerful X-ray beam delivered by the LCLS-II project, the former Atomic, Molecular & Optical Science (AMO) instrument at the SLAC Linac Coherent Light Source (LCLS) user facility has been upgraded to the Time-resolved AMO (TMO) instrument by the L2SI project. The new Dynamic Reaction Microscope (DREAM) endstation, also covered by the L2SI project and located at the second interaction point of the TMO, will offer unique capabilities to support cutting-edge research in the fundamental science of matter and energy. This talk provides an in-depth overview of the control systems for the DREAM endstation, detailing its architecture, design methodology, implementation, and seamless integration with the broader LCLS control infrastructure. It will also address the key challenges, including integrating SmarAct motion control systems with the X-ray Machine Protection System (MPS) across different platforms, developing a robust and flexible equipment protection system, and implementing automated vacuum controls to meet stringent reliability and operational requirements.

        Speaker: Jing Yin (SLAC National Accelerator Laboratory)
      • 78
        Towards the implementation of a new European XFEL scientific data policy – challenges and chances.

        As data volumes at European XFEL continue to grow rapidly, the need for sustainable storage, access, and preservation has become increasingly critical. In response, and despite operating within an established data management environment, the facility has introduced a new scientific data policy to address rising demands and align with evolving international best practices. The policy emphasizes long-term sustainability and adherence to the FAIR principles (Findable, Accessible, Interoperable, Reusable), promoting enhanced transparency, sharing, and reuse of research data. To support this, several key updates have been implemented. Data Management Plans (DMPs) are now mandatory from the project planning stage onward, guiding researchers in defining data workflows and lifecycle strategies. Data reduction techniques have been adopted to optimize storage without sacrificing scientific value. The storage infrastructure now features a tiered model, combining high-performance systems for short-term needs with cost-efficient long-term archival. Metadata tools have been upgraded to improve discoverability and access controls have been refined to support secure, collaborative research. These developments build on European XFEL’s strong foundation—spanning infrastructure, policy, and expertise—ensuring scalable, efficient data management in line with global standards.

        Speaker: Janusz Malka (European X-Ray Free-Electron Laser)
      • 79
        Universal LIMS for Diamond

        Universal LIMS is a new set of web services being developed at Diamond, as part of the Diamond-II upgrade. It will provide users and beamline scientists with tools to manage the logistics and scientific metadata for their experiments. For scientific samples it will allow users to ship them to Diamond, track where they are within the experiment hall, and store data about them. Universal LIMS will also allow users to define parameters for non-interactive experiments, and the Data Catalogue will provide an overview of the data they have collected, including metadata from the data acquisition systems and summaries of analysis pipelines that have run. These services will work together to provide a complete workflow to facilitate user experiments at Diamond. A key part of the approach is to provide flexibility in the data that is stored. Defining a database schema to cover the needs of the eight different science groups in Diamond would be challenging, and there is a development cost to updating the database schemas as requirements evolve. Instead, with Universal LIMS we store data as JSON, validated against a JSON schema. This ensures schemas can be updated easily, while the data can still be understood effectively by downstream applications. In this talk we will discuss the progress on development of the Universal LIMS services, the creation of the repository of JSON schemas, and how these fit in with the software architecture being developed as part of the Diamond-II upgrade.
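
        The snippet below is a hedged sketch of the store-as-JSON, validate-against-a-schema approach described above, using the jsonschema package; the schema and sample record are invented and far simpler than the actual Diamond schema repository.

          import jsonschema

          # Invented, minimal schema for a shipped-sample record (illustrative only).
          SAMPLE_SCHEMA = {
              "type": "object",
              "required": ["name", "containerBarcode"],
              "properties": {
                  "name": {"type": "string", "minLength": 1},
                  "containerBarcode": {"type": "string", "pattern": "^DLS-[0-9]{6}$"},
                  "hazards": {"type": "array", "items": {"type": "string"}},
              },
              "additionalProperties": True,  # schemas can evolve without breaking old data
          }

          record = {"name": "lysozyme crystal 3", "containerBarcode": "DLS-012345", "hazards": []}

          # Raises jsonschema.ValidationError if the record does not match the schema.
          jsonschema.validate(instance=record, schema=SAMPLE_SCHEMA)
          print("record is valid")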

        Speaker: Ian Bush (Diamond Light Source)
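
        For readers unfamiliar with the pattern, a minimal sketch of schema-validated JSON storage as described above, using the generic Python jsonschema package; the schema and sample record are invented for illustration and are not Diamond's actual LIMS schemas.

          # Illustrative only: validating a flexible sample record against a JSON schema,
          # the general pattern described for Universal LIMS (schema and fields invented here).
          import jsonschema

          sample_schema = {
              "type": "object",
              "properties": {
                  "name": {"type": "string"},
                  "container": {"type": "string"},
                  "temperature_K": {"type": "number", "minimum": 0},
              },
              "required": ["name", "container"],
              "additionalProperties": True,   # downstream groups may add their own fields
          }

          record = {"name": "sample-42", "container": "puck-7", "temperature_K": 100.0}
          jsonschema.validate(instance=record, schema=sample_schema)   # raises ValidationError on mismatch
          print("record accepted")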
      • 80
        Adaptive approach to spatial interpolation and visualization of scattered monitoring data at CERN

        In order to ensure safe operations, CERN leverages an extensive SCADA system to monitor radiation levels and collect environmental measurements across its premises. The Health & Safety and Environmental Protection (HSE) Unit addressed the challenge of visualizing radiation fields from non-uniformly distributed sensors across large areas. This paper presents the approach and implementation of a 2D interpolation and visualization system for such measurements. The system implements two algorithms for two complementary monitoring scenarios. The Inverse Distance Weighting (IDW) interpolation addresses cases where radiation sources are located near sensor locations, assuming maximum values occur at measurement points. The Radial Basis Function (RBF) method handles scenarios with potential radiation peaks between sensors. A region network approach divides large areas into independent regions for optimized performance. Accuracy is evaluated using leave-one-out cross-validation. The architecture relies on TypeScript, React and WebSockets. The system processes measurements across CERN's premises and provides operators with a real-time or historical spatial visualization of radiation levels.

        Speaker: Carlo Maria Musso (European Organization for Nuclear Research)
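
        For illustration, a minimal NumPy sketch of Inverse Distance Weighting with leave-one-out cross-validation, the two ingredients named above; the sensor positions and values are synthetic, and the production system described is implemented in TypeScript rather than Python.

          # Minimal IDW interpolation with leave-one-out cross-validation (illustrative only).
          import numpy as np

          def idw(points, values, query, power=2.0, eps=1e-12):
              """Interpolate scattered 2D measurements at `query` points by inverse distance weighting."""
              d = np.linalg.norm(query[:, None, :] - points[None, :, :], axis=2)   # (Q, N) distances
              w = 1.0 / (d + eps) ** power
              return (w * values).sum(axis=1) / w.sum(axis=1)

          def loo_rmse(points, values, power=2.0):
              """Leave-one-out cross-validation: predict each sensor from all the others."""
              errors = []
              for i in range(len(points)):
                  mask = np.arange(len(points)) != i
                  prediction = idw(points[mask], values[mask], points[i:i + 1], power)[0]
                  errors.append(prediction - values[i])
              return float(np.sqrt(np.mean(np.square(errors))))

          rng = np.random.default_rng(0)
          sensors = rng.uniform(0, 100, size=(20, 2))                 # synthetic sensor positions
          doses = np.sin(sensors[:, 0] / 20) + 0.1 * sensors[:, 1]    # synthetic measurement field
          grid = np.array([[50.0, 50.0], [10.0, 90.0]])
          print("interpolated values:", idw(sensors, doses, grid))
          print("leave-one-out RMSE:", loo_rmse(sensors, doses))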
      • 81
        The ESS Synchronous Data Service (SDS) development and first results

        The 5 MW proton linear accelerator of the European Spallation Source ERIC is designed to accelerate the beam at a repetition rate of 14 Hz, which dictates the refresh rate of most of the relevant data produced by acquisition systems. Each cycle of the 14 Hz timing structure receives a unique cycle ID from the ESS Timing System, which can be used as an index when data is collected and stored. The ESS Linac Synchronous Data Service (SDS) facilitates the collection of high-resolution data from various accelerator subsystems. Currently, SDS consists of an EPICS extension to be included in data acquisition IOCs and a client service (SDS Collector) that collects and stores the data produced by these IOCs. The novel features provided by the EPICS PVAccess protocol and libraries play a crucial role in this project by supporting structured data in the EPICS Process Variable data format. This paper outlines how SDS is designed, how it enables data-on-demand and post-mortem collection of large array datasets without overloading the network and describes the results of using SDS during the latest ESS beam commissioning campaign in 2025. From a broader perspective, SDS will be part of the ESS Data Framework, which comprises a set of tools to collect, store, catalog, retrieve, and analyze ESS Linac data to support advanced applications such as machine learning algorithms. This framework is briefly described in this paper.

        Speaker: João Paulo Scalão Martins (European Spallation Source)
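
        For readers new to PVAccess, a minimal sketch of reading and monitoring a structured PV with the p4p client library; the PV name is a placeholder and the snippet does not represent the actual SDS Collector implementation.

          # Minimal PVAccess client sketch (placeholder PV name; not the ESS SDS Collector itself).
          from p4p.client.thread import Context

          ctx = Context('pva')                       # PVAccess client context
          value = ctx.get('DEMO:SDS:WAVEFORM')       # blocking read of a (possibly structured) PV
          print(value)

          # A subscription delivers every update, which is how a collector service could
          # accumulate cycle-stamped data without polling:
          sub = ctx.monitor('DEMO:SDS:WAVEFORM', lambda v: print('update:', v))
          # ... run for as long as data should be collected ...
          sub.close()
          ctx.close()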
    • TUMR Mini-Orals (MC03, MC04, MC08) Red Lacquer Room

      Red Lacquer Room

      Palmer House Hilton Chicago

      17 East Monroe Street Chicago, IL 60603, United States of America
      Convener: Oscar Matilla (ALBA Synchrotron Light Source)
      • 82
        A shared virtual machine framework for EPICS hands-on training

        Facilities relying on collaboratively developed software projects, like the Experimental Physics and Industrial Control System (EPICS), often face challenges in ensuring consistent skill levels and efficient onboarding of staff.
        This paper introduces a new framework for creating reproducible pre-configured virtual machine (VM) environments, specifically designed for hands-on EPICS training.
        A key benefit of this framework is its ability to establish shared, reusable, general training modules. Such shared resources are highly valuable for collaborations, as they provide a standardized platform for skill development, reduce redundant training efforts, cultivate a common understanding of EPICS mechanisms, and ultimately strengthen collective knowledge within and across institutions.

        Speaker: Ralph Lange (ITER Organization)
      • 83
        Automated test execution framework for experiment control systems at LCLS

        At the Linac Coherent Light Source (LCLS), maintaining the health and correct configuration of the control system is essential to ensuring successful experimental operations. ATEF (Automated Testing and Evaluation Framework) supports this goal by enabling the development and execution of automated procedures to verify control system status before experiments begin.
        ATEF focuses on pre-experiment validation of devices and controllers—ensuring they are correctly configured, network-visible, and responsive to control system commands. These verifications range from confirming process variable (PV) status and device communication to setting and reaching predefined setpoints.
        By providing a framework to establish and routinely verify a configuration baseline, ATEF allows users and system integrators to ensure the control system is ready to commence operations.
        This talk will present the design and use of ATEF at LCLS, explore real-world examples of its application, and highlight how it supports the broader goals of system integrity and experimental readiness.

        Speaker: Christian Tsoi-A-Sue (SLAC National Accelerator Laboratory)
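
        The kind of pre-experiment check described above can be sketched in a few lines of generic pyepics code; this is not ATEF's actual API, and the PV names, target and tolerance are invented.

          # Generic pre-experiment device check (not the ATEF API; PV names are placeholders).
          from epics import PV

          def check_motor(setpoint_pv, readback_pv, target, tol=0.01, timeout=5.0):
              sp, rb = PV(setpoint_pv), PV(readback_pv)
              # 1) connectivity: both PVs must be reachable on the network
              if not (sp.wait_for_connection(timeout) and rb.wait_for_connection(timeout)):
                  return False, "PV not connected"
              # 2) responsiveness: command a setpoint and check that the readback follows
              sp.put(target, wait=True, timeout=timeout)
              value = rb.get()
              if value is None or abs(value - target) > tol:
                  return False, f"readback {value} outside tolerance of target {target}"
              return True, "ok"

          ok, message = check_motor("DEMO:MOTOR:SET", "DEMO:MOTOR:RBV", target=1.0)
          print("PASS" if ok else "FAIL", "-", message)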
      • 84
        No child left behind: managing requirements, interfaces, and communication in high-impact projects

        The Linac Coherent Light Source (LCLS) is a world-leading facility located at the SLAC National Accelerator Laboratory that constantly pushes the boundaries of science and technology. To stay at the frontier, we must continuously upgrade and evolve our instruments and control systems — which means tackling new projects, new capabilities, and, most importantly, new requirements.
        This talk will outline how the LCLS Experiment Control Systems (ECS) team works closely with stakeholders across LCLS, SLAC, and the project teams to define, capture, and manage requirements and interfaces for major projects like LCLS-II-HE and MEC-U. This talk will highlight the processes developed by our LCLS System Engineering Team and how ECS executes them to bring clarity and structure to our collaborations, as well as how we are leveraging Jama Connect as our central platform for capturing, reviewing, and refining these critical project elements. By standardizing our approach and tools, we are building a stronger foundation for today’s upgrades and tomorrow’s innovations.

        Speaker: Mitchell Cabral (SLAC National Accelerator Laboratory)
      • 85
        HDB++, a retrospective on 5+ years using Timescale

        The Tango HDB++ project is a high-performance, event-driven archiving system that stores data with microsecond resolution timestamps. HDB++ supports various backend databases to accommodate any infrastructure choice, with Timescale as the default option. Timescale, an extension of PostgreSQL, is selected for its exceptional performance, reliability, and open-source license.
        After more than five years of using the system in production at major facilities such as the ESRF, MAX IV and SKAO, this paper presents the insights gained from operating HDB++ with Timescale in a large research facility.
        Results are presented considering various perspectives. From a performance standpoint, the paper examines how the scalability features have maintained low query response times despite the continuous growth in data volume over the years. From the system administration perspective, findings show that standardized and proven technologies have consistently supported high-quality service delivery. Lastly, from the user perspective, we analyze how users can query data stored from the inception of the project up to last week within seconds, either from the Python API or from clients like Grafana. This capability is also enabled by the successful migration and integration of archived data from older or different systems into the database in full compliance with HDB++ standards.

        Speaker: Reynald Bourtembourg (European Synchrotron Radiation Facility)
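
        As an illustration of the kind of query this enables, a hypothetical psycopg2 snippet against a TimescaleDB archive; the table and column names follow the general HDB++ TimescaleDB layout but the attribute id and connection details are invented, and users would normally go through the provided Python API or Grafana instead.

          # Hypothetical direct query of an HDB++-style TimescaleDB archive (illustrative only).
          import psycopg2

          conn = psycopg2.connect(host="archiver.example.org", dbname="hdb", user="reader", password="...")
          with conn, conn.cursor() as cur:
              cur.execute(
                  """
                  SELECT time_bucket('1 hour', data_time) AS bucket, avg(value_r) AS mean_value
                  FROM att_scalar_devdouble
                  WHERE att_conf_id = %s AND data_time > now() - interval '7 days'
                  GROUP BY bucket ORDER BY bucket
                  """,
                  (1234,),   # invented attribute id
              )
              for bucket, mean_value in cur.fetchall():
                  print(bucket, mean_value)
          conn.close()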
      • 86
        EPICS Summer School: training future scientific control system experts

        The EPICS Summer School addresses the persistent demand for expertise in control systems for advanced scientific facilities by providing a crucial training platform for university students and young professionals. The highly successful inaugural event at BESSY II in Berlin offered an effective learning experience through a structured two-week program encompassing one week of foundational lectures and one week of practical application in a hands-on group project with real scientific hardware. This approach demonstrated a significant positive impact, empowering a new cohort from these groups with essential EPICS and distributed control system skills. In an effort to make the event sustainable, a collaborative framework has been established with other institutes, paving the way for a future where hosting the summer school cycles between facilities. This aims to make this training accessible to a wider audience of future scientific leaders, ensuring a continuous supply of skilled engineers capable of supporting and advancing groundbreaking scientific research across multiple facilities.

        Speaker: Luca Porzio (Helmholtz-Zentrum Berlin für Materialien und Energie)
      • 87
        The Enhanced Liquid Interface Spectroscopy and Analysis (ELISA) beamline control system prototype

        The Enhanced Liquid Interface Spectroscopy and Analysis (ELISA) beamline is a new instrument at BESSY II focusing on a novel, integrated approach for the high-fidelity preparation and investigation of liquid interfaces using soft X-ray and infrared radiation generated simultaneously from the BESSY II light source [1]. As ELISA is part of the BESSY II+ upgrade scheme [2], it will be the first soft X-ray beamline at BESSY II to use new hardware motion control standards and a novel BESSY II EPICS deployment system.
        This contribution focuses on workflows and tools we have developed for beamline control at BESSY II. We demonstrate their application at the ELISA beamline and finish with an outlook on scaling usage across the full experimental hall.

        Speaker: Maxim Brendike (Helmholtz-Zentrum Berlin für Materialien und Energie)
      • 88
        Towards sustainable work management in scientific facilities: applying Kanban at ALBA Controls

        Controls engineers in scientific facilities manage numerous projects, support multiple Customer Units (CUs), and balance operational demands with new initiatives. This often leads to growing backlogs and staff stress. Meanwhile, managers struggle to allocate resources across CUs, frequently prioritizing short-term goals at the expense of long-term strategy. At ALBA, such pressures prompted a new organizational approach. Building on lessons from successful past transitions - including the 2013 shift to operations * and a major staff turnover ** - the Controls Section adopted the Kanban method to optimize the resource allocation and maximize the throughput. Tasks were categorized by size/complexity and Class of Service (CoS), and a unified board with multiple views was implemented to visualize Work In Progress (WIP) and support follow-up. All work begins in a visible backlog, jointly prioritized with CUs based on CoS. Dedicated engineering teams were formed to improve coordination and knowledge sharing. Policies and metrics are clearly defined and transparent. The implementation was done using Jira and Confluence (Atlassian ecosystem). First results of the new approach are included in the paper. This initiative aligns with broader organizational efforts such as the Activity Plan for ALBA (APA) and project tracking within the Computing division ***, laying the groundwork toward the ALBA II upgrade to a 4th generation synchrotron light source.

        Speaker: Zbigniew Reszela (ALBA Synchrotron (Spain))
      • 89
        Trigger synchronization unit consolidation for SPS and LHC beam dumping kickers systems

        The purpose of the Trigger Synchronization Unit (TSU) is to synchronize beam dump requests from clients, primarily the Beam Interlock System (BIS), with the Beam Abort Gap (BAG). Synchronization is crucial to prevent damage to absorbers/collimators and to the cryogenic ring magnets.
        For each beam, the TSU system comprises two interconnected, redundant chassis for synchronization. It internally regenerates the Beam Revolution Frequency (BRF) in case of signal loss. Synchronization between the two chassis, together with internal surveillance, ensures a high level of redundancy. Dump triggers are issued via three paths: two synchronous and one asynchronous with a delay. Software performs post-mortem analysis of all dump triggers and critical signals, interlocking the TSU system on any unissued or abnormal triggers.
        The TSU is a safety-critical system for both the SPS and LHC Beam Dumping Systems, requiring high reliability. For the LHC Beam Dumping System (LBDS), one asynchronous dump is allowed per year of operation, and one catastrophic failure is acceptable per thousand years.
        During the design process, a full reliability study of the hardware and operating principles was conducted with CERN's Reliability Analysis Working Group (RAWG). In addition, a lab test bench was designed to test all functional requirements using unit tests.
        This paper presents the TSU system implementation, high-level software, GUI, and reliability study steps and results. The study confirmed TSU's high reliability, meeting initial specifications, with no single point of failure identified.

        Speaker: Léa Strobino (European Organization for Nuclear Research)
      • 90
        Advanced controls infrastructure for Diamond Light Source

        Full utility of Diamond II’s increased brilliance and coherence will require next-generation scanning, capable of operating at high speed in the nano-scale regime. At this level, motion systems can become highly non-linear and underactuated, creating challenges beyond the capabilities of traditional control methodologies. The new architecture, and more specifically the high-performance FPGA-based controller, will push the limits of motion control systems and create a platform for deploying advanced control models. This will begin with system identification and the extraction of data-driven models. Control laws will be determined using advanced controls techniques and ML solutions such as agent-based reinforcement learning.

        The presentation will include a description of the hardware as well as a brief overview of the advanced controls strategies used and the reasons for using them.

        Speaker: Gareth Nisbet (Diamond Light Source)
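
        As a generic illustration of the system-identification step mentioned above (not Diamond's actual models or tooling), a least-squares fit of a discrete-time ARX model from simulated input/output data:

          # Generic ARX system identification by least squares (illustrative; not the Diamond II controller).
          #   y[k] = a1*y[k-1] + a2*y[k-2] + b1*u[k-1] + b2*u[k-2]
          import numpy as np

          rng = np.random.default_rng(1)
          N = 500
          u = rng.standard_normal(N)                                  # excitation signal
          a1, a2, b1, b2 = 1.5, -0.7, 0.1, 0.05                       # "true" plant parameters
          y = np.zeros(N)
          for k in range(2, N):                                       # simulate the plant plus noise
              y[k] = a1*y[k-1] + a2*y[k-2] + b1*u[k-1] + b2*u[k-2] + 0.01*rng.standard_normal()

          # Build the regression matrix and solve for the model parameters.
          Phi = np.column_stack([y[1:-1], y[:-2], u[1:-1], u[:-2]])
          theta, *_ = np.linalg.lstsq(Phi, y[2:], rcond=None)
          print("identified [a1, a2, b1, b2]:", np.round(theta, 3))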
      • 91
        Beam Synchronized Acquisition and enhancements to associated services

        The Linac Coherent Light Source (LCLS) has developed a pulse-by-pulse data acquisition system, Beam-Synchronized Acquisition (BSA). BSA evolved from a 360 Hz software-based system to a 1 MHz firmware-based architecture tightly integrated with the LCLS timing system and beam rate.
        Alongside this transition, the EPICS control platform evolved from Channel Access (CA) to PV Access (PVA), enabling BSA to meet modern acquisition requirements—particularly for high-rate, high-volume applications requiring timestamping, pulse ID tagging, and precise cross-system alignment across the facility.
        BSA includes a fault buffer mechanism for each monitored variable, with four rotating buffers per variable, each capable of storing one million samples. One buffer collects data continuously at beam rate (1 MHz) in a ring configuration, while the others remain on standby. When the Machine Protection System (MPS) detects a fault, the active buffer is instantly frozen and a standby buffer takes over, preserving a one-second snapshot of data. This snapshot is synchronized across the facility and available for all BSA variables system-wide.
        This paper presents the architecture, firmware and software components, and supporting services developed to meet the demanding requirements of SC operation, enabling machine learning and real-time feedback capabilities.

        Speaker: Kukhee Kim (SLAC National Accelerator Laboratory)
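
        The four-buffer freeze-on-fault scheme described above can be modelled in a few lines of Python; the buffer depth and the fault trigger here are schematic only, standing in for the actual firmware behaviour.

          # Schematic model of the BSA fault-buffer rotation (not the firmware implementation).
          from collections import deque

          class FaultBuffers:
              def __init__(self, n_buffers=4, depth=1_000_000):
                  self.depth = depth
                  self.buffers = [deque(maxlen=depth) for _ in range(n_buffers)]
                  self.active = 0
                  self.frozen = []                            # snapshots preserved after faults

              def append(self, sample):
                  self.buffers[self.active].append(sample)    # continuous fill at beam rate

              def on_fault(self):
                  """Freeze the active buffer and rotate to a fresh standby buffer."""
                  self.frozen.append(self.buffers[self.active])
                  self.active = (self.active + 1) % len(self.buffers)
                  self.buffers[self.active] = deque(maxlen=self.depth)

          fb = FaultBuffers(depth=10)          # tiny depth for demonstration
          for i in range(25):
              fb.append(i)
          fb.on_fault()                        # the last 10 samples before the fault are preserved
          print(list(fb.frozen[0]), "new active buffer index:", fb.active)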
      • 92
        Building the backbone: cable plant planning, design, and progress for LCLS

        The Matter in Extreme Conditions Upgrade (MEC-U) is a Department of Energy (DOE) Fusion Energy Sciences (FES) funded project slated for construction at the SLAC National Accelerator Laboratory later this decade. The facility will deliver the Linac Coherent Light Source (LCLS) X-ray Free Electron Laser (XFEL) beam to an experiment target chamber, coordinated with two high-power laser systems: a high-energy, long-pulse (HE-LP) laser and a rep-rated laser (RRL), built by the Laboratory for Laser Energetics (LLE) and Lawrence Livermore National Laboratory (LLNL), respectively.
        Designing the cable plant for MEC-U — which encompasses rack and tray layouts, cable specifications, penetrations, and grounding — presents a unique set of technical challenges and learning opportunities for the Experiment Control Systems (ECS) team. The design must be robust against high levels of electromagnetic interference (EMI) generated within the experimental target chamber and laser systems while also accommodating extensive cable lengths throughout the facility. It must also strike a careful balance between an accelerated facility schedule and lagging technical design readiness.
        This talk will highlight key challenges, current mitigation strategies, and the progress made to date in the MEC-U cable plant, as well as outline the roadmap ahead as we support the next frontier of high-energy fusion science.

        Speaker: Mitchell Cabral (SLAC National Accelerator Laboratory)
      • 93
        Novel distributed fast controls architecture for the consolidation of CERN's PS kickers

        The control of fast pulsed magnet systems at CERN often requires a common set of fast digital electronics sub-systems to perform tight timing control and fast protection of high-voltage pulse generators. Although the generator architecture is mainly modular, these control systems have until now mostly been centralized: several generators per equipment, but one global, equipment-specific control system.
        With the upcoming consolidation of CERN's PS kicker magnet controls, a new distributed architecture is proposed. Instead of one global control crate per functionality (timing, fast protection, acquisition, etc.), this new approach incorporates one control crate per generator, merging several functionalities together. The crate becomes more generic, offering higher flexibility in terms of system size (number of generators or magnets). It also reduces cabling costs but comes with new challenges in terms of data transmission bandwidth and software latency.
        This paper presents the new Distributed Kicker Fast Controls (DKFC) solution based on CERN ATS Distributed I/O Tier (DI/OT) ecosystem*, including new Open Hardware electronic boards (ADCs, DACs, I/Os, dry contacts, etc.) and gateware structure with high-speed board-to-board data exchange. Advantages and drawbacks of this new architecture and possible future extensions are also discussed.

        Speaker: Léa Strobino (European Organization for Nuclear Research)
      • 94
        Automatic bunch targeting for single-bunch beam position monitors

        Single-bunch beam position monitors (BPM) are used to track the trajectory and measure turn-by-turn positions of a single bunch of electrons while multiple bunches are present in the storage ring. In the Advanced Photon Source Upgrade (APS-U), 20 single-bunch BPMs have been installed in the first three sections right after the injection point. For each BPM, an RF switch is used to route the signal of the target bunch to the BPM electronics. The timing signals to the BPM electronics and the RF switch are provided by the Fast Event System at APS-U, a global event-based trigger distribution system based on the hardware components developed by Micro-research Finland (MRF). The timing requirement for the RF switch signal is stringent in order to reliably select the target bunch. The GTX output function of the MRF event receiver can fine-tune the delay of the output signal to achieve the desired timing resolution for the RF switches. In this presentation, the hardware configuration of the timing signals and the software developed for automatic bunch targeting are described. The performance of the system and example applications are also discussed.
        The work is supported by the U.S. Department of Energy, Office of Science, Office of Basic Energy Sciences, under Contract No. DE-AC02-06CH11357.

        Speaker: Ran Hong (Argonne National Laboratory)
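
        To make the coarse-plus-fine delay adjustment concrete, a toy calculation of the RF-switch gate delay for a chosen bunch; the bunch spacing, event-clock period and fine-delay step below are assumed round numbers for illustration, not the actual APS-U or MRF parameters.

          # Toy coarse-plus-fine delay calculation for gating on one target bunch (assumed parameters).
          BUNCH_SPACING_NS = 11.37     # assumed bunch spacing
          EVENT_CLOCK_NS = 10.0        # assumed event-receiver coarse clock period
          FINE_STEP_NS = 0.01          # assumed fine-delay granularity of the output stage

          def switch_delay(target_bunch):
              total_ns = target_bunch * BUNCH_SPACING_NS
              coarse_ticks = int(total_ns // EVENT_CLOCK_NS)           # whole event-clock ticks
              residual_ns = total_ns - coarse_ticks * EVENT_CLOCK_NS
              fine_steps = round(residual_ns / FINE_STEP_NS)           # remaining fine delay
              return coarse_ticks, fine_steps

          for bunch in (0, 1, 47):
              print(bunch, "->", switch_delay(bunch))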
      • 95
        Time served - a look at the past, present, and future of timing at Fermilab

        This presentation covers the history of Fermilab's Tevatron Clock (TCLK) timing system and how it has served to regulate the facility over the past 40 years. The presentation provides an overview of beamlines at Fermilab, the challenges of timing in a Rapid Cycling Synchrotron, and the transfer scenarios utilized to generate megawatt-class beam at America's "premier" high energy physics laboratory!
        The PIP-II project introduces additional timing challenges at Fermilab. Over the past 5 years, the Controls group has seized this opportunity to modernize and is actively developing an upgraded Accelerator Clock (ACLK) timing system to meet stringent performance demands. This new implementation vastly improves the real-time control and synchronization capabilities of the facility, supporting beam-synchronous operation for LBNF and beyond!

        Speaker: Evan Milton (Fermi National Accelerator Laboratory)
      • 96
        Modernization of PLC-based control systems at SNS

        When the SNS site was built around 20 years ago, the Conventional Facilities (CF) control systems were designed using 2 communication protocols to allow programmable logic controllers (PLCs) to interface with motors, variable frequency drives (VFDs), and distributed inputs and outputs (I/O). The protocol chosen to control motors and VFDs is DeviceNet, a CANbus-based protocol developed by Rockwell Automation. The protocol chosen to communicate with distributed I/O is ControlNet, another protocol developed by Rockwell Automation. Both of these protocols are obsolete and present reliability and maintainability issues, particularly DeviceNet. As the Control Systems Section at SNS is working to modernize control systems throughout the machine, a major goal for PLC-based systems is to remove the obsolete communication protocols in favor of standard, ubiquitous Ethernet. To this end, any new VFDs installed use Ethernet communication. Many VFDs are currently being replaced in the Central Utilities Building (CUB) and the Central Exhaust Facility (CEF) and are being removed from DeviceNet in favor of Ethernet communication. Planning is underway to retrofit Eaton Intelligent Technology motor control centers (MCCs) in the Target Building to remove DeviceNet adaptors and replace them with Ethernet adaptors for each motor starter. The ControlNet network in the CUB has been demolished, with I/O drops integrated into a local Ethernet network, improving sustainability and maintainability.

        Speaker: Isaiah Beaushaw (Oak Ridge National Laboratory)
      • 97
        RTST vacuum control system design

        This paper presents the design and implementation of the Integrated Control Systems (ICS) vacuum control system for the Second Target Station (STS) within the Ring-to-Second Target Beam Transport Line (RTST) of the Spallation Neutron Source (SNS). The RTST vacuum system is crucial for maintaining a high-vacuum environment necessary for the operation of a high-intensity proton beamline, extending from the existing Ring to Target Beam Transfer (RTBT) to the new STS. The system is composed of various components, including vacuum assemblies, sensors, pumps, and an architecture based on established SNS control systems utilizing Allen-Bradley Programmable Logic Controllers (PLCs) coupled with EPICS Soft Input/Output Controllers (IOCs). The design emphasizes reliability and safety, incorporating sector gate valves for isolation, remote controls for turbomolecular and ion pumps, and pressure monitoring through advanced gauge systems. This paper details the architectural framework, instrumentation, control layers, and operational interfaces to ensure robust management of the vacuum conditions necessary for the successful operation of the RTST.

        Speaker: Jianxun Yan (Oak Ridge National Laboratory)
      • 98
        Enhancing scanning nano-tomography instrumentation with a magnetic levitation stage

        Next-generation synchrotron experiments—such as those planned for SOLEIL II—require fast, accurate sample positioning to meet increasingly demanding scientific challenges. To address these needs, SOLEIL launched the development of a magnetic levitation stage demonstrator dedicated to hard X-ray scanning tomographic nano-imaging techniques such as PXCT and STXM. Designing this new class of mechatronic instruments involves a significant shift from traditional stacked architectures used for point-to-point motion to advanced scanning techniques with high dynamics. It also requires substantial design innovations. The demonstrator was developed within the framework of the LEAPS-INNOV project. It integrates high-speed 2D scanning modes (step-scan and fly-scan) with full 360° sample rotation. SOLEIL’s development strategy involves partnering with a company from the semiconductor industry that has built ultra-precise and highly reliable mechatronic systems. The company MI-Partners was selected through a “competitive dialogue” tender, allowing for iterative refinement of specifications.
        This paper outlines the design principles that ensure performance and reliability in synchrotron instrumentation. The complete design workflow—from modelling to control implementation—will be detailed, along with the validation of the scanning nanoprobe stage. Results from factory and site acceptance tests, as well as the development of an external metrology bench to characterize the stage, will be presented.

        Speaker: Yves-Marie Abiven (Synchrotron SOLEIL)
      • 99
        Proof of concept of a PLC based emittance meter for the NEWGAIN project

        The SPIRAL2 accelerator features several diagnostic devices used to characterize, adjust, and monitor the beam. As part of its NEWGAIN (New GANIL Injector) upgrade project, SPIRAL2 will be equipped with a new source and a new injector. Therefore, new diagnostic tools will be developed, including an ALLISON-type emittance meter. To manage the emittance meter, we opted for a modern industrial PLC solution, leveraging its expanding capabilities and established maintenance advantages over a traditional PLC/VME combination. This paper details the architecture and programming concepts of our proof-of-concept system. It further outlines the test campaign conducted to validate the PLC's capabilities in several key areas: controlling the motors for measurement head positioning within the beam, managing high-voltage ramps, acquiring experimental data, and communicating results to the EPICS control system. The paper will also discuss findings related to current measurement accuracy, measurement rate, and synchronization, as well as the repeatability of the overall measurement process.

        Speaker: Clement Hocini (Grand Accélérateur Nat. d'Ions Lourds)
      • 100
        EPICS-based X-ray beam intensity monitoring for the CBXFEL project at SLAC LCLS

        The Cavity-Based X-ray Free-Electron Laser (CBXFEL) project, to be deployed in the Linac Coherent Light Source (LCLS) Hard X-ray (HXR) undulator line, aims to produce a highly coherent X-ray beam by recirculating X-ray pulses within a cavity. Precise alignment of the diamond crystal mirrors in this cavity is critical to achieving optimal CBXFEL performance. To support this, we deploy a system of five silicon-based X-ray Beam Intensity Monitors (XBIMs) and one diamond XBIM. Each XBIM generates a signal that may be amplified and is then digitized using high-speed digitizers. These digitized values are integrated into the EPICS control system, enabling synchronized data acquisition, feedback, and monitoring alongside other experimental subsystems. This paper outlines the requirements for CBXFEL beam diagnostics, details the digitizer selection and configuration, and describes the implementation of the control architecture.

        Speaker: An Le (SLAC National Accelerator Laboratory)
    • 15:30
      Coffee
    • TUPD Posters
      • 101
        A new PLC based control system to orchestrate the PS main power converters

        The Power for the PS (POPS) system supplies the Proton Synchrotron (PS) magnets by exchanging energy between capacitor banks and magnet loads, minimizing direct power draw from the electrical network. The POPS+ upgrade improves availability through redundancy by integrating additional power converters, introducing a new PLC-based control system to manage increased complexity and to retrofit legacy turnkey controls infrastructure. This distributed architecture features a central PLC orchestrating all power converters and their FGC4-based controllers. The PLC enables the custom integration of the FGC4 platform, originally designed for generic converters, into POPS+. A key challenge is the implementation of a real-time Ethernet communication protocol (FGC4PN) on a Multi-Functional Platform (MFP) attached to the PLC. The PLC also manages the state machine, including startup, charging and stopping sequences, as well as interlocks and safe start conditions, leveraging CERN’s UNICOS framework for standardized control and SCADA supervision. This paper presents the design and implementation of the control system, detailing the distributed architecture and the communication strategies. It also highlights the use of the Hardware-in-the-Loop (HIL) simulation platform for the system integration, development and virtual commissioning.

        Speaker: Marcos Marin Rodriguez (European Organization for Nuclear Research)
      • 102
        A Python-based serial communication framework for legacy PMAC controllers

        Many beamlines at BESSY II still operate using VME-crate-based PMAC motor controllers that rely on proprietary Windows software for communication and configuration. However, these programs often require old licenses and are not compatible with modern operating systems, making maintenance increasingly difficult. To address this, we have developed an open-source Python tool that communicates with legacy PMAC controllers over serial interfaces. The tool uses a modular manager-client architecture in which multiple client programs can send commands concurrently without conflicts, using a managed queue and locking system. Dedicated clients have been created for terminal interaction, batch upload of command files, watch-window monitoring, and plotting of PMAC variables with configurable fetch and display intervals. The programs are lightweight, installed via Python package management, and accessible through simple command-line interfaces. While the serial communication is not real-time, it is sufficient for configuration, monitoring, and motion program uploads. Extensive logging is provided for traceability. A multi-tabbed GUI with plotting and data-saving features has been implemented for better usability. Future developments include integration into the broader framework to support newer motor controllers. This tool provides a modern, open-source alternative for maintaining legacy motion control systems and ensures continued support without reliance on outdated commercial software.

        Speaker: Parvathi Sreelatha Devi (Helmholtz-Zentrum Berlin für Materialien und Energie)
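
        A minimal sketch of the manager-with-queue pattern described above, using pyserial and the standard threading/queue modules; the port name and the example command are placeholders, and this is not the tool's actual interface.

          # Minimal sketch of a serial "manager" that serializes access to one PMAC-style controller
          # for several clients (the pattern described above, not the actual tool). Placeholder names.
          import queue
          import threading
          import serial  # pyserial

          class SerialManager:
              def __init__(self, port="/dev/ttyS0", baudrate=38400):
                  self._ser = serial.Serial(port, baudrate, timeout=1)
                  self._requests = queue.Queue()
                  threading.Thread(target=self._worker, daemon=True).start()

              def _worker(self):
                  while True:
                      cmd, reply_q = self._requests.get()        # one command on the wire at a time
                      self._ser.write((cmd + "\r").encode())
                      reply_q.put(self._ser.readline().decode(errors="replace").strip())

              def send(self, cmd, timeout=5.0):
                  """Thread-safe request/response used by terminal, batch-upload and watch clients."""
                  reply_q = queue.Queue(maxsize=1)
                  self._requests.put((cmd, reply_q))
                  return reply_q.get(timeout=timeout)

          # Example client usage (requires real hardware on the given port):
          # mgr = SerialManager("/dev/ttyUSB0")
          # print(mgr.send("#1P"))   # e.g. query position of motor 1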
      • 103
        A remote-controlled high voltage power supply using open source firmware modules

        The HVPS (High Voltage Power Supply) is a programmable, precision power supply developed at the Brazilian Synchrotron Light Laboratory (LNLS) for use in beamline instrumentation and laboratory setups. Specifically designed for integration with ionization chambers, the HVPS provides a stable unipolar output up to 5 kV with excellent load regulation and low ripple, ensuring high reliability and precision.
        The system supports multiple control interfaces, allowing flexible operation in either local or remote environments, where real-time voltage and current monitoring is available via a built-in LCD display, analog outputs and remote monitoring. A dedicated embedded firmware, based on a RTOS, has been developed. It ensures remote configuration, monitoring capabilities and firmware update over TCP/IP stack, leading to a full integration with any control system standard and architecture. This paper describes HVPS hardware and firmware topology, as well as first use cases at SIRIUS facility.

        Speaker: Guilherme Ricioli (Brazilian Synchrotron Light Laboratory)
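
        As a generic illustration of remote monitoring over a TCP/IP stack (the host, port and ASCII commands are invented and are not the HVPS protocol), a simple client could look like:

          # Hypothetical TCP client for a remotely controlled power supply (invented protocol).
          import socket

          def query(host, port, command, timeout=2.0):
              with socket.create_connection((host, port), timeout=timeout) as sock:
                  sock.sendall((command + "\n").encode())
                  return sock.recv(1024).decode().strip()

          # Example usage with placeholder commands:
          # print(query("hvps.example.org", 5000, "SET:VOLT 2500"))
          # print(query("hvps.example.org", 5000, "MEAS:VOLT?"))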
      • 104
        A shared virtual machine framework for EPICS hands-on training

        Facilities relying on collaboratively developed software projects, like the Experimental Physics and Industrial Control System (EPICS), often face challenges in ensuring consistent skill levels and efficient onboarding of staff.
        This paper introduces a new framework for creating reproducible pre-configured virtual machine (VM) environments, specifically designed for hands-on EPICS training.
        A key benefit of this framework is its ability to establish shared, reusable, general training modules. Such shared resources are highly valuable for collaborations, as they provide a standardized platform for skill development, reduce redundant training efforts, cultivate a common understanding of EPICS mechanisms, and ultimately strengthen collective knowledge within and across institutions.

        Speaker: Ralph Lange (ITER Organization)
      • 105
        A TimescaleDB and Grafana framework for ATLAS DCS data flow

        ATLAS DCS Data Tools (DDT) offers a complementary data flow alongside the existing Oracle-based infrastructure, providing easy access to Detector Control System (DCS) data. Its architecture comprises a TimescaleDB database for efficient storage of both live and historical data (including a bridge to ingest over a decade of archived Oracle records), Grafana for intuitive visualization, and a backend server to manage traffic across multiple databases while maintaining backward compatibility with the existing Oracle schema. High-performance C++ and user-friendly Python APIs enable detector and operations experts—whose work is essential to ATLAS safety and performance—to improve their workflows. DDT’s modular design facilitates straightforward extension to additional subsystems and ensures scalable performance as data volumes grow. By addressing user-derived requirements, DDT enhances operational workflows, streamlines detector-monitoring studies, and supports the ATLAS community in maintaining robust and safe experiment operations.

        Speaker: Paris Moschovakos (European Organization for Nuclear Research)
      • 106
        Accelerator process water upgrade at ANL/APS

        This presentation will describe recent hardware & software updates to the Accelerator Process Water System of the Advanced Photon Source at Argonne National Laboratory. The topics covered include replacing outdated PLC hardware, updating EPICS software (deploying a python application called ‘plcepics’ to build EPICS databases), and an overview of problems encountered during commissioning of the control system.

        Speaker: James Stevens (Argonne National Laboratory)
      • 107
        Active magnetic bearings electronic system upgrade for CERN’s (HL)-LHC cryogenic 1.8 K cold compressor units

        The Large Hadron Collider (LHC) at CERN relies on superfluid helium, supplied by eight large refrigeration units, each providing 2.4 kW at 1.8 K. These units, developed by specialized cryogenic industrial suppliers, consist of serial hydrodynamic cold compressors equipped with axial-centrifugal impellers and Active Magnetic Bearings (AMB), coupled with volumetric warm screw compressors. The AMB electronic units, delivered by suppliers, have been in reliable operation for over 20 years.
        As part of the CERN consolidation project, the need to meet evolving LHC cryogenic operational requirements—along with the obsolescence of electronic components—has driven the upgrade of the entire electronic and electrical control system. A step-by-step analysis, beginning with an operational risk assessment, led to the design and development of a prototype. This effort was undertaken at CERN in collaboration with the French company SKF Magnetic Mechatronics.
        The prototype underwent extensive testing, first in a dedicated CERN cold compressor test bench and later in real-system operation, where it was successfully validated. This paper presents the complete upgrade process, the positive test results obtained, and the outlook for full deployment during the LHC Long Shutdown 3 (2026–2029).

        Speaker: Marco Pezzetti (European Organization for Nuclear Research)
      • 108
        Adaptive approach to spatial interpolation and visualization of scattered monitoring data at CERN

        In order to ensure safe operations, CERN leverages an extensive SCADA system to monitor radiation levels and collect environmental measurements across its premises. The Health & Safety and Environmental Protection (HSE) Unit addressed the challenge of visualizing radiation fields from non-uniformly distributed sensors across large areas. This paper presents the approach and implementation of a 2D interpolation and visualization system for such measurements. The system implements two algorithms for two complementary monitoring scenarios. The Inverse Distance Weighting (IDW) interpolation addresses cases where radiation sources are located near sensor locations, assuming maximum values occur at measurement points. The Radial Basis Function (RBF) method handles scenarios with potential radiation peaks between sensors. A region network approach divides large areas into independent regions for optimized performance. Accuracy is evaluated using leave-one-out cross-validation. The architecture relies on TypeScript, React and WebSockets. The system processes measurements across CERN's premises and provides operators with a real-time or historical spatial visualization of radiation levels.

        Speaker: Carlo Maria Musso (European Organization for Nuclear Research)
      • 109
        Advanced controls infrastructure for Diamond Light Source

        Full utility of Diamond II’s increased brilliance and coherence will require next-generation scanning, capable of operating at high speed in the nano-scale regime. At this level, motion systems can become highly non-linear and underactuated, creating challenges beyond the capabilities of traditional control methodologies. The new architecture, and more specifically the high-performance FPGA-based controller, will push the limits of motion control systems and create a platform for deploying advanced control models. This will begin with system identification and the extraction of data-driven models. Control laws will be determined using advanced controls techniques and ML solutions such as agent-based reinforcement learning.

        The presentation will include a description of the hardware as well as a brief overview of the advanced controls strategies used and the reasons for using them.

        Speaker: Gareth Nisbet (Diamond Light Source)
      • 110
        ALS storage ring RF control system upgrade plan and status

        The Advanced Light Source (ALS) at Lawrence Berkeley National Laboratory, a third-generation synchrotron light source operational since 1992, is undergoing a comprehensive upgrade of its storage ring RF control system. The legacy Horner PLC controllers and remote I/O modules, now at end-of-life, are being replaced with an Allen-Bradley PLC platform to improve maintainability, reliability, and long-term support. This paper presents the planning, design, and current status of the upgrade project.

        Speaker: Najm Us Saqib (Lawrence Berkeley National Laboratory)
      • 111
        APS-U storage ring power supply interlock and temperature monitoring control system

        The Advanced Photon Source (APS) is a third-generation synchrotron light source at Argonne National Laboratory. The APS Upgrade (APS-U) storage ring power supply interlock and temperature monitoring system plays an important role in personnel and equipment safety. The system utilizes Experimental Physics and Industrial Control System (EPICS) input/output controllers (IOCs) to interface with remote programmable logic controllers (PLCs) for controlling the storage ring power supply interlock system and monitoring various equipment temperatures. This paper will present how the system is implemented and operating successfully.

        Speaker: Shifu Xu (Argonne National Laboratory)
      • 112
        Assessing WinCC OA project limits to guide DCS architecture for the Phase-II ATLAS upgrade

        In preparation for the Phase-2 upgrade of the ATLAS experiment, the detector subsystems that will be upgraded to cope with the new operational conditions imposed by the High-Luminosity LHC are required to develop a Detector Control System (DCS) tailored to their specific needs. A key consideration for this upgrade is the size of WinCC OA projects in terms of various parameters. Understanding how large a WinCC OA project can be, without compromising performance, is critical for ensuring the stability and efficiency of the DCS.
        This work presents a series of studies conducted on WinCC OA 3.19 projects in order to assess the limits based on the servers that are being used within the ATLAS experiment. The findings provide practical insights into the factors that influence system scalability, such as the number of datapoint elements and the distribution across projects. These results aim to support detector groups in planning and optimizing their DCS architectures, helping them decide on the appropriate number and size of WinCC OA projects based on their future operational requirements.

        Speaker: Nikolaos Kanellos (National Technical University of Athens)
      • 113
        Automated test execution framework for experiment control systems at LCLS

        At the Linac Coherent Light Source (LCLS), maintaining the health and correct configuration of the control system is essential to ensuring successful experimental operations. ATEF (Automated Testing and Evaluation Framework) supports this goal by enabling the development and execution of automated procedures to verify control system status before experiments begin.
        ATEF focuses on pre-experiment validation of devices and controllers—ensuring they are correctly configured, network-visible, and responsive to control system commands. These verifications range from confirming process variable (PV) status and device communication to setting and reaching predefined setpoints.
        By providing a framework to establish and routinely verify a configuration baseline, ATEF allows users and system integrators to ensure the control system is ready to commence operations.
        This talk will present the design and use of ATEF at LCLS, explore real-world examples of its application, and highlight how it supports the broader goals of system integrity and experimental readiness.

        Speaker: Christian Tsoi-A-Sue (SLAC National Accelerator Laboratory)
      • 114
        Automatic bunch targeting for single-bunch beam position monitors

        Single-bunch beam position monitors (BPM) are used to track the trajectory and measure turn-by-turn positions of a single bunch of electrons while multiple bunches are present in the storage ring. In the Advanced Photon Source Upgrade (APS-U), 20 single-bunch BPMs have been installed in the first three sections right after the injection point. For each BPM, an RF switch is used to route the signal of the target bunch to the BPM electronics. The timing signals to the BPM electronics and the RF switch are provided by the Fast Event System at APS-U, a global event-based trigger distribution system based on the hardware components developed by Micro-research Finland (MRF). The timing requirement for the RF switch signal is stringent in order to reliably select the target bunch. The GTX output function of the MRF event receiver can fine-tune the delay of the output signal to achieve the desired timing resolution for the RF switches. In this presentation, the hardware configuration of the timing signals and the software developed for automatic bunch targeting are described. The performance of the system and example applications are also discussed.
        The work is supported by the U.S. Department of Energy, Office of Science, Office of Basic Energy Sciences, under Contract No. DE-AC02-06CH11357.

        Speaker: Ran Hong (Argonne National Laboratory)
      • 115
        Beam position monitor control system at the European Spallation Source

        The European Spallation Source facility is divided into three main parts: linear accelerator (LINAC), target and neutron instruments. The Beam Position Monitor (BPM) system is installed along the LINAC and enables accelerator teams to characterize the proton beam properties and optimize the phase tuning of the RF cavities, among other diagnostics. A total of 98 BPM sensors are distributed along the machine, with data acquisition and processing handled by 49 AMC digitizer cards housed in 18 MicroTCA crates. This paper describes the control system architecture surrounding the BPM system, including the EPICS integration alongside the graphical user interface developed in Control System Studio. Additionally, it presents the high-level applications built on top of this framework, such as the BPM Manager, Gain Controls, Archiver, and Synchronous Data Service. The paper also outlines the software strategy adopted for system deployment, maintenance, and updates. Auxiliary systems supporting the BPM infrastructure are briefly discussed, including the BPM component controls, MicroTCA IPMI management and the MRF Timing System integration.

        Speaker: Juliano Murari (European Spallation Source)
      • 116
        Beam Synchronized Acquisition and enhancements to associated services

        The Linac Coherent Light Source (LCLS) has developed a pulse-by-pulse data acquisition system, Beam-Synchronized Acquisition (BSA). BSA evolved from a 360 Hz software-based system to a 1 MHz firmware-based architecture tightly integrated with the LCLS timing system and beam rate.
        Alongside this transition, the EPICS control platform evolved from Channel Access (CA) to PV Access (PVA), enabling BSA to meet modern acquisition requirements—particularly for high-rate, high-volume applications requiring timestamping, pulse ID tagging, and precise cross-system alignment across the facility.
        BSA includes a fault buffer mechanism for each monitored variable, with four rotating buffers per variable, each capable of storing one million samples. One buffer collects data continuously at beam rate (1 MHz) in a ring configuration, while the others remain on standby. When the Machine Protection System (MPS) detects a fault, the active buffer is instantly frozen and a standby buffer takes over, preserving a one-second snapshot of data. This snapshot is synchronized across the facility and available for all BSA variables system-wide.
        This paper presents the architecture, firmware and software components, and supporting services developed to meet the demanding requirements of SC operation, enabling machine learning and real-time feedback capabilities.

        Speaker: Kukhee Kim (SLAC National Accelerator Laboratory)
      • 117
        Bluesky NeXus: a solution for NeXus-compliant data acquisition in Bluesky

        Modern scientific experiments require rich, standardized metadata to ensure data is Findable, Accessible, Interoperable, and Reusable (FAIR). The NeXus format—a hierarchical data standard used in neutron, x-ray, and muon science—provides a structured way to organize such metadata, but integrating it automatically into acquisition workflows remains a challenge. We present Bluesky NeXus, a Python package that enables automated, standards-compliant NeXus data generation within Bluesky—a modular Python-based framework for experiment control and data acquisition widely used at synchrotron and neutron facilities.
        Users define the desired NeXus structure—including groups, datasets, and attributes—using human-readable configuration files (YAML schemas), which are validated using models defined with Pydantic, a Python library for data validation, to ensure consistency and adherence to NeXus definitions. This enables flexible, user-defined metadata management while preserving data integrity.
        Bluesky NeXus gathers static metadata (e.g., equipment setup) and dynamic data (e.g., measurements), consolidating them into a complete NeXus file automatically archived with each experiment. It integrates with deployment tools like the Bluesky container used at BESSY II and supports diverse experimental configurations.
        Developed within the ROCK-IT project, Bluesky NeXus streamlines the creation of standardized metadata, advancing the Interoperability and Reusability goals of the FAIR principles.

        Speaker: Mr Daniel Tomecki (Helmholtz-Zentrum Berlin für Materialien und Energie)
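
        For illustration, a heavily condensed sketch of the YAML-plus-Pydantic validation step described above; the group and field names are invented and far simpler than the real Bluesky NeXus schemas.

          # Condensed, hypothetical sketch of validating a YAML-defined NeXus-like layout with Pydantic.
          from typing import Dict, List
          import yaml
          from pydantic import BaseModel

          class Dataset(BaseModel):
              name: str
              units: str = ""

          class Group(BaseModel):
              nx_class: str                     # e.g. "NXinstrument", "NXsample"
              datasets: List[Dataset] = []

          class NexusLayout(BaseModel):
              groups: Dict[str, Group]

          yaml_text = """
          groups:
            instrument:
              nx_class: NXinstrument
              datasets:
                - {name: beamline, units: ""}
            sample:
              nx_class: NXsample
              datasets:
                - {name: temperature, units: K}
          """

          layout = NexusLayout(**yaml.safe_load(yaml_text))   # raises ValidationError if malformed
          print(layout.groups["sample"].datasets[0])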
      • 118
        Building the backbone: cable plant planning, design, and progress for SLAC’s MEC-U project

        The Matter in Extreme Conditions Upgrade (MEC-U) is a Department of Energy (DOE) Fusion Energy Sciences (FES) funded project slated for construction at the SLAC National Accelerator Laboratory later this decade. The facility will deliver the Linac Coherent Light Source (LCLS) X-ray Free Electron Laser (XFEL) beam to an experiment target chamber, coordinated with two high-power laser systems: a high-energy, long-pulse (HE-LP) laser and a rep-rated laser (RRL), built by the Laboratory for Laser Energetics (LLE) and Lawrence Livermore National Laboratory (LLNL), respectively.
        Designing the cable plant for MEC-U — which encompasses rack and tray layouts, cable specifications, penetrations, and grounding — presents a unique set of technical challenges and learning opportunities for the Experiment Control Systems (ECS) team. The design must be robust against high levels of electromagnetic interference (EMI) generated within the experimental target chamber and laser systems while also accommodating extensive cable lengths throughout the facility. It must also strike a careful balance between an accelerated facility schedule and lagging technical design readiness.
        This talk will highlight key challenges, current mitigation strategies, and the progress made to date in the MEC-U cable plant, as well as outline the roadmap ahead as we support the next frontier of high-energy fusion science.

        Speaker: Mitchell Cabral (SLAC National Accelerator Laboratory)
      • 119
        Building the foundations of mechatronic and robotic systems for SOLEIL II automation

        The SOLEIL multidisciplinary environment involves a broad variety of scientific techniques, methods and instruments. This diversity demands a wider perspective in the design of the control system and automation, moving beyond a collection of single-loop (axis-based) controllers to manage complex systems with multiple interdependent variables and axes to control. To address this challenge, SOLEIL has begun integrating new technical "bricks", or modular components, in both software and hardware, as well as advanced control design methodologies, into its process automation framework.
        Examples of these "bricks" include robotic arms used to automate common synchrotron tasks such as detector positioning and sample handling [1], and a TANGO device developed for configurable image processing. This device allows sequences of classic image processing algorithms to be implemented, and Deep Neural Network (DNN) models can be included in the sequence. Together, these two bricks enable automatic sample positioning and self-centring of the sample.
        In addition, SOLEIL is adopting a control-engineering approach, including model-based design, estimation (sensor fusion), simulation and visualization. Using this approach, one application has been prototyped, namely the synchronization between the monochromator and the insertion device, and some additional laboratory applications are currently under development. This poster summarizes some of the results of these developments and implementations.

        Speaker: Yves-Marie Abiven (Synchrotron SOLEIL)
      • 120
        CERN SCADA systems 2024 large upgrade campaign retrospective

        This paper presents a recent upgrade campaign of supervisory control systems within CERN's Accelerator and Technologies Sector. The effort covered over 240 WinCC OA SCADA applications across more than 120 servers, spanning core accelerator systems such as Power Converters, the Quench Protection System, and the Power Interlock Controller, along with essential technical infrastructure including Cryogenics, Vacuum, and Gas Control. These systems are crucial for machine protection, performance, and the reliable operation of the accelerator complex. Building on experience from previous upgrade efforts, this campaign introduced important advances in automation and process optimization. For the first time, a fully unattended upgrade workflow was achieved through the use of Ansible. In addition, the campaign involved a major operating system migration and the upgrade of several supporting satellite systems. This paper details the improvements made in this iteration, discusses the challenges and compares the current campaign with earlier ones. The analysis highlights the evolution of automation strategies and reflects on both successes and difficulties. The work offers valuable insights for future upgrade initiatives and demonstrates how automation tools can significantly enhance the maturity and reliability of large-scale software maintenance in complex operational environments.

        Speaker: Lukasz Goralczyk (European Organization for Nuclear Research)
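
        As a generic illustration of driving an unattended playbook run from Python (the directory layout, playbook name and host group are placeholders, and this is not CERN's actual upgrade tooling), the ansible-runner package can be used as follows:

          # Generic unattended Ansible run via the ansible-runner Python package (placeholder names).
          import ansible_runner

          result = ansible_runner.run(
              private_data_dir="/tmp/scada-upgrade",   # expects inventory/ and project/ subdirectories
              playbook="upgrade_winccoa.yml",          # placeholder playbook name
              limit="scada_servers",                   # placeholder host group
          )
          print("status:", result.status, "return code:", result.rc)
          for event in result.events:                  # per-task events, useful for post-run reporting
              if event.get("event") == "runner_on_failed":
                  print("failed task on host:", event.get("event_data", {}).get("host"))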
      • 121
        Consolidation of the state control and surveillance system of the LHC Beam Dump system

        The Large Hadron Collider (LHC) Beam Dump System (LBDS) includes 15 extraction kickers (MKD) and 10 dilution kickers (MKB), each powered by a High Voltage Pulse Generator (HVPG) controlled by the State Control and Surveillance System (SCSS), which is based on industrial PLC technology. After almost 20 years of reliable operation, a consolidation of the LBDS SCSS is planned for deployment during Long Shutdown 3 (LS3, 2026–2029) to meet the demand for increased diagnostics and functionality, and to guarantee component longevity until the end of LHC operation (2041).
        This paper describes the analysis conducted through a detailed review of the existing hardware, software, network layers, and ageing fieldbus components. It presents the motivations for modernizing the SCSS and the new control architecture, along with the improvements to the safety functionalities implemented. It provides an overview of the new system's interlock state machine and its integration into the CERN control middleware.

        Speaker: Mr Christophe Boucly (European Organization for Nuclear Research)
      • 122
        Continuous integration of control systems in parallel to the existing systems of LIPAc, for a radio-frequency conditioning test bench.

        Under the Broader Approach agreement between Japan and Europe, the Linear IFMIF Prototype Accelerator (LIPAc) aims to validate the International Fusion Materials Irradiation Facility (IFMIF) accelerator design by producing a deuteron beam of 125 mA at 9 MeV in continuous wave. In parallel with the installation of a superconducting linear acceleration stage, a high-power test bench was set up for the testing and conditioning of four pairs of radio-frequency (RF) couplers for LIPAc’s RF quadrupole*. Accordingly, the control systems (CS) part was implemented in parallel with the existing CS of LIPAc, benefiting from the tools available while avoiding their modification. In addition, further functionalities and devices were integrated to address the specificities of the test bench. This work was performed continuously during operation of the test bench, identifying and answering further needs, such as deploying an automated conditioning tool or enabling slow feedback loops for automatic parameter tuning. Furthermore, this test bench became a testing environment for the modifications foreseen in the LIPAc CS refurbishment plan, such as upgrading the CS framework to EPICS v7, switching to CS-Studio Phoebus and its applications for the operator interfaces, or using Debian 12 as the operating system and ProxMox 8 for the virtualization environment. The experience acquired here will be valuable for the implementation of the IFMIF-DONES (DEMO-Oriented NEutron Source) facility project.

        Speaker: Lucas Maindive (IFMIF-DONES Spain Consortium)
      • 123
        Control software and technology choices for the electron-ion collider

        The Electron-Ion Collider (EIC) will succeed the current Relativistic Heavy Ion Collider (RHIC) at Brookhaven National Laboratory. For over two decades, RHIC and its injectors have relied on a homegrown Accelerator Device Object (ADO)-based control system, which has provided a reliable and efficient operational framework. However, the EIC’s requirements—such as a greater number of subsystems, higher uptime, increased data rates, and other factors—demand significant enhancements. Advances in both hardware and software technologies since the RHIC era have expanded the range of available options, each with its own set of benefits and challenges. In response, the EIC plans to deploy state-of-the-art technologies to meet these elevated demands, favoring open-source and community-driven solutions wherever feasible. This talk will focus on the control software and the technology choices under consideration and the strategies being adopted for the EIC.

        Speaker: Md Latiful Kabir (Brookhaven National Laboratory)
      • 124
        Control system implementation for temperature monitoring and control for CBXFEL project

        The Cavity-Based X-ray Free-Electron Laser (CBXFEL) project, a collaboration between Argonne National Laboratory (ANL) and SLAC National Accelerator Laboratory, aims to produce a recirculating X-ray cavity for the SLAC Linac Coherent Light Source (LCLS) Hard X-ray (HXR) undulator line. The cavity, formed by crystal mirrors, will facilitate FEL amplification and Bragg monochromatization [1]. To minimize losses, all cavity crystals must be maintained at the same temperature within +/- 0.4 °C. To meet this requirement, a calibrated heater and temperature sensor will be installed at each crystal to ensure the required stability. This paper focuses on the architecture and design implemented for the project, including sensor calibration and control loop tuning.
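
        For context, a discrete PID loop of the kind implied by the control loop tuning described above can be sketched as follows; the gains, sample period, and read/write helpers are hypothetical and do not represent the CBXFEL implementation.

            import time

            KP, KI, KD = 2.0, 0.1, 0.5      # hypothetical gains from loop tuning
            SETPOINT_C = 25.0               # target crystal temperature (deg C)
            DT = 1.0                        # control period (s)

            def read_temperature():
                raise NotImplementedError("replace with calibrated sensor readback")

            def set_heater_power(percent):
                raise NotImplementedError("replace with heater output write")

            def pid_loop():
                integral, previous_error = 0.0, 0.0
                while True:
                    error = SETPOINT_C - read_temperature()
                    integral += error * DT
                    derivative = (error - previous_error) / DT
                    output = KP * error + KI * integral + KD * derivative
                    set_heater_power(max(0.0, min(100.0, output)))  # clamp to 0-100 %
                    previous_error = error
                    time.sleep(DT)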

        Speaker: Namrata Balakrishnan (SLAC National Accelerator Laboratory)
      • 125
        Control system upgrade of Argonne Wakefield Accelerator facility

        As the Argonne Wakefield Accelerator (AWA) facility expanded, its original in-house control software could not keep up with the increasing complexity and scale of operations. To improve maintainability, reliability, and support future development, the AWA controls group undertook a major upgrade by adopting the Experimental Physics and Industrial Control System (EPICS). With support from the Advanced Photon Source (APS) Controls group, the AWA control system has been successfully upgraded from a centralized, difficult-to-maintain architecture into a flexible, scalable, and maintainable distributed system. This paper presents the current status of AWA's new EPICS-based control system and describes the experiences and lessons learned during the upgrade.

        Speaker: Wanming Liu (Argonne National Laboratory)
      • 126
        Controls of the new eddy current septum for the CERN PS fast extraction

        The CERN PS fast extraction septum deflects protons and ions towards the experimental areas of the PS complex and the SPS. With the increased number of extractions per year since it was first put into service in 1994, the magnet lifetime is nowadays estimated at two years, implying a rebuild of the septum every two years, significant costs, and non-negligible radiation doses taken by personnel. Additionally, the present power converter is approaching its end of life.
        In view of its superior robustness, an eddy current septum system was chosen to replace the original direct-drive septum. Due to the different magnet technology, the existing power converter is being replaced by a new fast pulse generator, which implies a completely new control system.
        This paper describes the different units and functionalities of this new control system, covering a wide range of technologies such as high-voltage switch triggering modules, slow interlocks based on PLC, fast interlocks and timing implemented in FPGAs, temperature-compensated acquisition chains, and software-based regulation algorithms. Preliminary results of the system performance are also presented.

        Speaker: Léa Strobino (European Organization for Nuclear Research)
      • 127
        Converting experiment data to NeXus application definitions at BESSY II

        In our efforts to achieve FAIR data practices at BESSY II, we are leveraging the NeXus standard [1], a common data exchange format for data obtained in the fields of neutron, muon, and X-ray science. Two core components of this standard are its base classes and application definitions. NeXus base classes serve as building blocks, offering community-agreed names and data structures for all devices required to run an experiment, including those on the beamline. Built upon these base classes, NeXus application definitions specify the minimal required structures and data elements necessary to represent a given experimental technique. In this work, we present preliminary results from the development of an application definition for a multi-modal experiment conducted at the mySpot beamline of BESSY II. This versatile beamline supports measurements with multiple techniques - XRD, SAXS, XRF, EXAFS, and XANES - performed simultaneously under operando conditions. For the data conversion process, we use pynxtools [2], a tool designed to facilitate FAIR experimental data. Additionally, we discuss the perspective of this development for the Bluesky NeXus package [3], developed at BESSY II, which enables the automated export of NeXus-compliant HDF5 files for Bluesky-based experiments and beamlines.
        [1] https://www.nexusformat.org
        [2] https://github.com/FAIRmat-NFDI/pynxtools
        [3] https://codebase.helmholtz.cloud/hzb/bluesky/core/source/bluesky_nexus
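
        As a minimal illustration of the NeXus-style hierarchy described above, the h5py sketch below writes an NXentry with instrument and data groups; the file name, technique definition, and fields are placeholders and do not reproduce the mySpot application definition or pynxtools output.

            import h5py
            import numpy as np

            with h5py.File("example_scan.nxs", "w") as f:
                entry = f.create_group("entry")
                entry.attrs["NX_class"] = "NXentry"
                entry.create_dataset("definition", data="NXxas")   # assumed technique

                instrument = entry.create_group("instrument")
                instrument.attrs["NX_class"] = "NXinstrument"
                source = instrument.create_group("source")
                source.attrs["NX_class"] = "NXsource"
                source.create_dataset("name", data="BESSY II")

                data = entry.create_group("data")
                data.attrs["NX_class"] = "NXdata"
                data.create_dataset("energy", data=np.linspace(7.0, 7.2, 5))
                data.create_dataset("absorption", data=np.zeros(5))
                data.attrs["signal"] = "absorption"                 # default plottable
                data.attrs["axes"] = "energy"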

        Speaker: Sonal Ramesh PATEL (Helmholtz-Zentrum Berlin für Materialien und Energie)
      • 128
        DCCT controls at the Advanced Photon Source

        The Direct Current Current Transformer (DCCT) in the Advanced Photon Source (APS) Storage Ring is used to measure the storage ring current. This measurement is digitized using an HP3458A Multimeter and synchronized with beam injection through the Micro-Research Finland (MRF) event system. This paper describes the setup and timing of the control system for the DCCT, highlighting its integration and functionality within the APS infrastructure.

        Speaker: Suyin Grass Wang (Argonne National Laboratory)
      • 129
        Design and commissioning of the upgraded ALS SR RF cavity water control system

        The Advanced Light Source (ALS) at Lawrence Berkeley National Laboratory, a pioneering third-generation synchrotron light source, has been operational since 1992. The ALS storage ring RF cavity water control system, originally a 30-year-old relay-based chassis system, regulated the temperature of the two storage ring RF cavities while managing interlocks and operational functions. Aging instrumentation and the need for enhanced features, however, led to operational challenges. To address these issues, the system was upgraded to a PLC-based solution. This paper presents the design and commissioning results of the upgraded cavity water control system.

        Speaker: Najm Us Saqib (Lawrence Berkeley National Laboratory)
      • 130
        Design and implementation of control system for ion source of Proton Radiation Effects Facility

        The Proton Radiation Effects Facility (PREF) is a 10-60 MeV proton accelerator applied to ion irradiation, combining a proton source, a linear injector, and a synchrotron; the proton source can provide a proton beam of up to 3 mA. In this paper, the control system of the PREF ion source is reported. The whole system is constructed using a distributed architecture that mainly includes three parts: ion source device control, ion source timing control, and the chopper fast pulse acquisition system. The remote monitoring and control of ion source devices is realized using PLCs, serial servers, servo motors, and an industrial computer. An FPGA is used as the main control unit to realize the timing control. The chopper fast pulse monitoring system uses high-speed acquisition hardware to realize externally triggered, synchronous acquisition of the chopper fast pulse signals. The software integrates all controlled devices by establishing EPICS IOC run-time databases. The user interface layer is developed using Control System Studio to give maintenance operators transparent access to all controlled devices. The machine protection system is designed based on safety rules to provide protection in the case of abnormal operations. The control system is stable and reliable, and fully meets the needs of PREF tuning and physics experiments.
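
        To illustrate the kind of transparent channel access such an EPICS-based layer provides, a hypothetical pyepics snippet is shown below; the PV names are invented for illustration and are not the actual PREF channel names.

            from epics import caget, caput, camonitor

            caput("PREF:IS:ArcVoltage:Set", 120.0)          # set a source parameter
            current = caget("PREF:IS:BeamCurrent")          # read back beam current (mA)
            print(f"Ion source beam current: {current} mA")

            # Subscribe to the chopper acquisition status published by the fast DAQ IOC.
            camonitor("PREF:IS:Chopper:AcqStatus",
                      callback=lambda pvname, value, **kw: print(pvname, "->", value))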

        Speaker: Pengpeng Wang (Institute of Modern Physics)
      • 131
        Design and performance of a novel data acquisition and processing system for the APS Upgrade front end XBPM

        The APS-Upgrade (APS-U) ID beamline front ends are equipped with next-generation X-ray beam position monitors (XBPMs). Each XBPM utilizes 16-element array detectors to simultaneously capture beam distribution information from undulator and bend magnet (BM) sources. A novel IOC has been designed to handle the following tasks in real time: estimating and subtracting BM signals, applying undulator-gap-dependent calibration to calculate undulator beam center positions, collecting beam position statistics, and estimating X-ray RMS motion and XBPM resolution. Initial data from early APS-U user runs demonstrate significant performance improvements. The BM signal contribution is reduced by more than a factor of 10, observed RMS beam motion is within the micrometer range, and XBPM resolution is consistently below 1 micrometer, typically less than 10% of the RMS beam motion.
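
        The NumPy sketch below only illustrates the type of real-time processing described: subtracting an estimated bend-magnet baseline from the 16-element array and forming difference-over-sum position estimates. The element geometry, baseline model, and calibration factor are assumptions, not the actual IOC algorithm.

            import numpy as np

            def xbpm_position(raw, bm_estimate, k_calib):
                """Return (x, y) beam position from a 4x4 detector array."""
                sig = (np.asarray(raw) - np.asarray(bm_estimate)).reshape(4, 4)
                total = sig.sum()
                right, left = sig[:, 2:].sum(), sig[:, :2].sum()
                top, bottom = sig[:2, :].sum(), sig[2:, :].sum()
                x = k_calib * (right - left) / total        # horizontal position
                y = k_calib * (top - bottom) / total        # vertical position
                return x, y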

        Speaker: Hairong Shang (Argonne National Laboratory)
      • 132
        Design of snapshot management in CSNS with a frontend-backend-separation architecture

        At CSNS, we developed snapshot management software based on a frontend-backend separation architecture. The backend includes a PV value updating service, which performs batch updates of the values of thousands of PVs, and a snapshot management service, which performs the CRUD operations on the database. With such an architecture, snapshot management operations are highly efficient.
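
        A minimal sketch of batch snapshot save and restore with pyepics is given below; the real CSNS service uses its own backend and database, so the PV names and JSON persistence here are purely illustrative.

            import json
            from epics import caget_many, caput_many

            pv_list = ["CSNS:MAG:PS01:ISet", "CSNS:MAG:PS02:ISet"]   # hypothetical PVs

            def save_snapshot(path):
                values = caget_many(pv_list)                 # one batched read
                with open(path, "w") as f:
                    json.dump(dict(zip(pv_list, values)), f)

            def restore_snapshot(path):
                with open(path) as f:
                    snapshot = json.load(f)
                # wait="all" is assumed to block until every put completes
                caput_many(list(snapshot), list(snapshot.values()), wait="all")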

        Speaker: Mingtao Li (Institute of High Energy Physics, China Spallation Neutron Source)
      • 133
        Design of the Korea-4GSR Timing system

        The event timing system coordinates and synchronizes events in a precise sequence over time and provides precise timing to local devices. The system consists of an event master (EVM), an event fanout (EVF), and event receivers (EVR), and each local device receives event, trigger, and timestamp information through an EVR. Korea-4GSR transfers 200 MeV electrons to the booster through a linear accelerator, and provides trigger signals to synchronize the injection sequence into the storage ring after ramping acceleration in the booster to 4 GeV. The system will be configured using MRF's MTCA products, and the EVG and EVR configuration and layout are being designed by investigating the specifications of the local devices that receive trigger signals. In addition, the system will be tested by transmitting event codes and trigger signals to embedded EVR equipment such as the BPM and FOFB systems.

        Speaker: Sohee Park (Pohang Accelerator Laboratory)
      • 134
        Development of a beam gate control system for proton beam irradiation at KOMAC LINAC

        A total of ten beamlines have been designed for the KOMAC 100-MeV proton linear accelerator (LINAC), among which five are currently operational and delivering beams to users. The LINAC system comprises an ion source, radio frequency (RF) system, high voltage converter modulator (HVCM), and a beam diagnostic system, all synchronized through a timing system to enable precise beam acceleration. A dedicated monitoring and control environment has been established for experiments conducted in each target room. Prior to beam irradiation, beam uniformity and dose per pulse are measured, and users input the desired total fluence into the control panel to initiate beam service. The beam gate control system is designed to automatically stop the beam trigger once the user-defined fluence is achieved, ensuring accurate and safe beam delivery. The beam gate control system was implemented with a redundant architecture to safely control the trigger signals output from the timing system. This paper presents the design and implementation of the beam gate control system developed for beam irradiation applications at KOMAC.

        Speaker: Young-Gi Song (Korea Multi-purpose Accelerator Complex)
      • 135
        Development of an EtherCAT-based control system for an In-Vacuum Undulator for SPring-8-II

        SPring-8, a third-generation light source, has operated for nearly three decades. Recently, light source accelerators have transitioned towards fourth-generation light sources, which implement low-emittance storage rings. Therefore, SPring-8 will upgrade its storage ring to a new one named SPring-8-II between 2027 and 2028. The upgrade involves implementing new Insertion Devices (IDs), specifically In-Vacuum Undulators for SPring-8-II (IVU-II), and optimizing accelerator control systems. As part of the control system upgrade for slow control, we are replacing VME-based systems with EtherCAT-based systems*. Between 2023 and 2027, the schedule dictates the annual installation of three to a maximum of six IVU-IIs, and we will install EtherCAT control systems accordingly. Crucially, IVU-II control systems installed during the SPring-8 phase must be compatible with the varying operational parameters of SPring-8 and SPring-8-II. In 2024, we implemented the first EtherCAT-based control system, which satisfies the requirements. This system manages the gap between magnets and two power supplies for two steering magnets, monitors magnet temperatures and the vacuum system, and handles interlock signals. In the SPring-8-II era, dedicated systems such as the vacuum controls and the interlock system will handle vacuum and interlock functions, reallocating them from ID controls. Future ID controls will employ the EtherCAT model.

        Speaker: Kosei Yamakawa (Japan Synchrotron Radiation Research Institute)
      • 136
        Development of EPICS device support on the MELSEC iQ-R embedded Linux controller

        At the RIKEN Radioactive Isotope Beam Factory (RIBF), commercial PLCs are used in various accelerator subsystems. Among them, the Yokogawa FA-M3 series with a Linux CPU module running EPICS IOC has been successfully adopted. Following this model, and in response to user requests, we implemented a similar architecture using the embedded Linux controller (RD55UP12-V) of Mitsubishi Electric’s MELSEC iQ-R series. Conventional MELSEC-based integration with EPICS has relied on asynchronous device support communicating over TCP/IP (e.g., MC protocol). These approaches often face reliability issues, such as socket disconnections and failed reconnections after network disruptions. To address these problems, we developed a native EPICS device support that runs directly on the MELSEC Linux CPU. A Python-based prototype using PyDevice was first created for rapid development and verification, incorporating a thread-safe worker architecture. Based on its success, we re-implemented the support in C++ using the asynPortDriver class of the EPICS asynDriver module. The C++ version provides improved performance, robust multithreading, and better maintainability. This paper presents the development process, differences between the Python and C++ implementations, and results from system integration within the RIBF control environment.
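
        The sketch below illustrates the thread-safe worker pattern mentioned for the PyDevice prototype: record processing enqueues requests while a single worker thread serializes access to the device. The read_device/write_device stubs are placeholders, not the actual MELSEC access API.

            import queue
            import threading

            def read_device(address):
                raise NotImplementedError("replace with MELSEC shared-memory read")

            def write_device(address, value):
                raise NotImplementedError("replace with MELSEC shared-memory write")

            class DeviceWorker:
                def __init__(self):
                    self._requests = queue.Queue()
                    threading.Thread(target=self._run, daemon=True).start()

                def submit(self, op, address, value=None, done=None):
                    """Called from EPICS record support; returns immediately."""
                    self._requests.put((op, address, value, done))

                def _run(self):
                    while True:
                        op, address, value, done = self._requests.get()
                        try:
                            result = (read_device(address) if op == "read"
                                      else write_device(address, value))
                        except Exception as exc:            # keep the worker alive
                            result = exc
                        if done:
                            done(result)                    # push the result back to the record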

        Speaker: Misaki Komiyama (RIKEN Nishina Center)
      • 137
        Development of GigE vision camera control system and application to beam diagnostics for SPring-8 and NanoTerasu

        As an imaging system supporting beam diagnostics using screen monitors (SCMs) at the SPring-8 site, we have continuously developed and improved a GigE Vision camera control system and expanded its adoption. By adopting the versatile open-source library Aravis, we eliminated vendor dependency and built an image acquisition system integrated into the SPring-8 control framework, MADOCA 4.0. Key features include the ability to control up to eight GigE cameras per computer with centralized management of camera power, trigger distribution, and screen operations. Its long-distance cabling enables flexible and simple deployment. Operation is achieved by writing the configuration file without programming, significantly reducing development costs and time. As part of the SPring-8 upgrade, this system was successfully implemented for the SCMs of the beam transport line (XSBT) that uses the SACLA linac as the injector for the SPring-8 storage ring*. We expanded the application of this system to the SCMs of the SACLA linac and the SACLA-BL1 linac (SCSS+), replacing the complex and costly Camera Link cameras. We also newly applied it to NewSUBARU injector linac and NanoTerasu in Sendai. This presentation outlines the R&D of our GigE Vision camera control system for stability and enhancements, reporting on multi-facility deployment, operation, and stabilization efforts toward advanced utilization like automated beam parameter optimization from beam diagnostics using machine learning.
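
        For reference, a minimal frame grab with the open-source Aravis GObject bindings might look as follows (binding version assumed to be 0.8); camera selection, exposure value, and buffer count are illustrative and unrelated to the MADOCA 4.0 integration itself.

            import gi
            gi.require_version("Aravis", "0.8")
            from gi.repository import Aravis

            camera = Aravis.Camera.new(None)                 # first camera found on the network
            camera.set_exposure_time(10000.0)                # microseconds
            stream = camera.create_stream(None, None)
            payload = camera.get_payload()
            for _ in range(4):                               # queue a few buffers
                stream.push_buffer(Aravis.Buffer.new_allocate(payload))

            camera.start_acquisition()
            buffer = stream.pop_buffer()                     # blocks until a frame arrives
            if buffer is not None:
                print("frame size:", buffer.get_image_width(), "x", buffer.get_image_height())
                stream.push_buffer(buffer)                   # recycle the buffer
            camera.stop_acquisition()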

        Speaker: Dr Akio Kiyomichi (Japan Synchrotron Radiation Research Institute)
      • 138
        Development of MTCA.4 at ESS

        As commissioning activities for the linear accelerator continue at the European Spallation Source (ESS), they are highlighting important issues and development features for the MTCA.4 systems used in several key subsystems. With over 300 MTCA.4 systems deployed for Low Level Radio Frequency (LLRF), Proton Beam Instrumentation (PBI), Fast Beam Interlock Systems (FBIS), and Timing Distribution (TD), it has proved a valuable platform for fast controls. The use of MTCA.4 has also allowed for novel solutions to provide flexible but reliable controls for the facility, including expanding existing controls designs and developing orphan scope. However, this is not without its issues, as MTCA.4 in the scientific community is still very much on the smaller scale and reliant on only a few suppliers, with a risk of obsolescence of components and changes to the standards. This makes it challenging but fun to continue developing the hardware to ensure the desired reliability for the Linac controls. This paper will describe the procedures and processes that the hardware team has taken to mitigate these issues, the current status of our MTCA infrastructure, and the long-term plans for its maintenance.

        Speaker: Faye Chicken (European Spallation Source)
      • 139
        Driving fusion success: target diagnostics software control at National Ignition Facility (NIF)

        Achieving controlled nuclear fusion at the National Ignition Facility (NIF) depends not only on precision lasers and target engineering, but also on a robust suite of target diagnostics systems. These diagnostics capture critical physics data—including neutron yield, burn width, symmetry, and x-ray emissions—essential for guiding the path to ignition. Central to this capability is the target diagnostics software control system synchronized to within tens of picoseconds of each laser shot. It coordinates nearly 100 diagnostics and over 600 software-controlled instruments, including high-speed sensors and radiation-hardened devices operating in extreme environments. A major challenge is supporting diverse, evolving hardware from multiple vendors using protocols like serial, Ethernet, GPIB, and proprietary APIs. The software architecture addresses this through centralized orchestration, abstraction layers, and hardware adapters. The system’s flexibility is demonstrated in the successful deployment of the VISAR (Velocity Interferometer System for Any Reflector) diagnostic, which integrates varied vendor instruments — cameras, lasers, delay generators — with custom interfaces and precise synchronization. Long-term use has revealed challenges with aging, vendor-specific systems and managing radiation-hardened devices. Future improvements will enhance resilience, sustainability, and integration of next-generation diagnostics.

        Speaker: Sukhdeep Heerey (Lawrence Livermore National Laboratory)
      • 140
        Embracing the accelerator computing revolution at SLAC

        We face a number of challenges in planning future controls and computing for large accelerator facilities. Online tuning increasingly requires 6-D phase space customization, fast numerical estimation methods, and space-charge modeling on timescales relevant to operations. These needs are being met by advances in machine learning and artificial intelligence, and by the proximity of multi-particle methods to accelerator operations, whose outcomes must be deployed to effectively change how we do accelerator physics and experiment optimization. This imperative, along with cyber and technical debt mitigation, is driving changes in architecture: in controls to add high-performance and edge computing, in data systems to add high fidelity and vector databases, and in networks to interconnect these and add security and throughput. At the same time, the data from devices gets larger and more complicated, requiring new data structures and new control primitives that incorporate data semantics. These changes are happening in the face of increased funding pressure. However, there are also more tools at our disposal, and technologies from the web, streaming, and internet domains that we embrace to help. We present these drivers, our vision of an integrated response, the path we are on, the architectures and data systems in development to support the new physics techniques and tools, and our roadmap for the next few years.

        Speaker: Greg White (SLAC National Accelerator Laboratory)
      • 141
        Enhancing scanning nano-tomography instrumentation with a magnetic levitation stage

        Next-generation synchrotron experiments, such as those planned for SOLEIL II, require fast, accurate sample positioning to meet increasingly demanding scientific challenges. To address these needs, SOLEIL launched the development of a magnetic levitation stage demonstrator dedicated to hard X-ray scanning tomographic nano-imaging techniques such as PXCT and STXM. Designing this new class of mechatronic instruments involves a significant shift from the traditional stacked architectures used for point-to-point motion to advanced scanning techniques with high dynamics, and it requires substantial design innovations. The demonstrator was developed in the framework of the LEAPS-INNOV project. It integrates high-speed 2D scanning modes (step-scan and fly-scan) with full 360° sample rotation. SOLEIL’s development strategy involves partnering with a company from the semiconductor industry that has built ultra-precise and highly reliable mechatronic systems. The company MIPartners was selected through a “competitive dialogue” tender, allowing for iterative refinement of specifications.
        This paper outlines the design principles that ensure performance and reliability in synchrotron instrumentation. The complete design workflow—from modelling to control implementation—will be detailed, along with the validation of the scanning nanoprobe stage. Results from factory and site acceptance tests, as well as the development of an external metrology bench to characterize the stage will be presented.

        Speaker: Yves-Marie Abiven (Synchrotron SOLEIL)
      • 142
        Ensuring reliability and feasibility of software alarms with PHOEBUS in EPICS network for ALS-U

        The Advanced Light Source Upgrade (ALS-U) project requires a reliable and efficient alarm system. This presentation examines the reliability and feasibility of a software alarm system implemented using PHOEBUS within the EPICS network. We will discuss its architecture, configuration strategies, and management techniques. Testing results highlight the system's robustness. Furthermore, we introduce a compact software demo environment for public use, offering key insights for comparable high-reliability environments.

        Speaker: Soo Ryu (Lawrence Berkeley National Laboratory)
      • 143
        EPICS 7 upgrade for LCLS-II undulator motion control system

        Undulators are essential components of the new LCLS-II X-ray Free-Electron Laser (XFEL) facility, providing highly bright and coherent X-ray light for researchers. The LCLS-II includes two undulator lines: the hard X-ray (HXR) line and the soft X-ray (SXR) line, each with a distinct architecture. The HXR undulator motion control system, based on RTEMS running on VME and Animatics SmartMotors, leverages existing LCLS hardware for maximum efficiency. In contrast, the SXR undulator system is newly designed with an Aerotech motion controller. Both systems are built on EPICS v3 Input/Output Controllers (IOCs). To meet the requirements of a significant cyber security upgrade of the EPICS controls framework at SLAC, we have upgraded all EPICS IOCs from EPICS v3 to EPICS 7. This article details the software architecture and upgrade process for the motion control systems of both the HXR and SXR undulators.

        Speaker: Ziyu Huang (SLAC National Accelerator Laboratory)
      • 144
        EPICS Archiver Appliance at Los Alamos Neutron Science Center

        The Accelerator Operations and Technology Instrumentation and Controls (AOT-IC) group has made the decision to adopt the EPICS Archiver Appliance as its primary archiver system. The ability to archive millions of PVs has been a topic of interest at Los Alamos Neutron Science Center (LANSCE) as the demand for large data sets and machine learning workflows increases. The open-source nature of the archiver appliance, the user-friendly web interface, and the proven ability to archive hundreds of thousands of process variables were decisive factors in this choice. This work presents experiences and lessons learned when implementing the EPICS Archiver Appliance at LANSCE.
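
        For illustration, archived data can be retrieved from an EPICS Archiver Appliance instance via its documented JSON retrieval endpoint, as sketched below; the host name and PV are placeholders, not LANSCE production names.

            import requests

            BASE = "http://archiver.example.org:17668/retrieval/data/getData.json"
            params = {
                "pv": "TEST:PV:Name",                        # hypothetical PV
                "from": "2025-01-01T00:00:00.000Z",
                "to": "2025-01-01T01:00:00.000Z",
            }
            response = requests.get(BASE, params=params, timeout=30)
            response.raise_for_status()
            for chunk in response.json():                    # one entry per PV
                for sample in chunk["data"]:
                    print(sample["secs"], sample["val"])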

        Speaker: Jonathan Quemuel (Los Alamos National Laboratory)
      • 145
        EPICS driver for Universal Robots e-Series robot arms

        The integration of modern robots into large-scale control systems, such as synchrotron beamlines, has traditionally required custom, installation-specific solutions, hindering collaboration and reusability. This work presents an EPICS support library for controlling Universal Robots (UR) e-Series robotic arms. Built on the widely used EPICS asyn module, the software provides a standardized interface for controlling UR robots from an EPICS soft IOC. This greatly simplifies deployment to new installations and promotes collaboration across robot users. In addition to exposing low-level robot functions, such as joint control and status monitoring, the library offers higher-level features, including waypoint and path creation, to facilitate common tasks like pick-and-place operations.
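
        A hypothetical client-side use of such a soft IOC from pyepics is sketched below; the PV names are invented to illustrate waypoint-style moves and are not the library's actual record interface.

            from epics import caget, caput

            PREFIX = "UR10e:"                                # hypothetical IOC prefix

            def move_joints(joints_deg, wait=True):
                """Command a joint-space move and optionally block until it finishes."""
                caput(PREFIX + "JointTarget", joints_deg)
                caput(PREFIX + "MoveJ.PROC", 1, wait=wait)

            # Simple pick-and-place style sequence built from two waypoints.
            move_joints([0, -90, 90, -90, -90, 0])           # approach pose
            move_joints([30, -80, 100, -110, -90, 0])        # pick pose
            print("Robot mode:", caget(PREFIX + "RobotMode"))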

        Speaker: Nick Marks (Advanced Photon Source)
      • 146
        EPICS Summer School: training future scientific control system experts

        The EPICS Summer School addresses the persistent demand for expertise in control systems for advanced scientific facilities by providing a crucial training platform for university students and young professionals. The highly successful inaugural event at BESSY II in Berlin offered an effective learning experience through a structured two-week program encompassing one week of foundational lectures and one week of practical application in a hands-on group project with real scientific hardware. This approach demonstrated a significant positive impact, empowering a new cohort from these groups with essential EPICS and distributed control system skills. In an effort to make the event sustainable, a collaborative framework has been established with other institutes, paving the way for a future where hosting the summer school cycles between facilities. This aims to make this training accessible to a wider audience of future scientific leaders, ensuring a continuous supply of skilled engineers capable of supporting and advancing groundbreaking scientific research across multiple facilities.

        Speaker: Luca Porzio (Helmholtz-Zentrum Berlin für Materialien und Energie)
      • 147
        EPICS-based X-ray beam intensity monitoring for the CBXFEL project at SLAC LCLS

        The Cavity-Based X-ray Free-Electron Laser (CBXFEL) project, to be deployed in the Linac Coherent Light Source (LCLS) Hard X-ray (HXR) undulator line, aims to produce a highly coherent X-ray beam by recirculating X-ray pulses within a cavity. Precise alignment of the diamond crystal mirrors in this cavity is critical to achieving optimal CBXFEL performance. To support this, we deploy a system of five silicon-based X-ray Beam Intensity Monitors (XBIMs) and one diamond XBIM. Each XBIM generates a signal that may be amplified and is then digitized using high-speed digitizers. These digitized values are integrated into the EPICS control system, enabling synchronized data acquisition, feedback, and monitoring alongside other experimental subsystems. This paper outlines the requirements for CBXFEL beam diagnostics, details the digitizer selection and configuration, and describes the implementation of the control architecture.

        Speaker: An Le (SLAC National Accelerator Laboratory)
      • 148
        Evaluate data lake design for the accelerator control system

        Increasing precision in automation for modern particle accelerators not only creates a requirement to gather data from all devices but also demands a scalable and high-performance data infrastructure capable of handling vast volumes of incoming device data. A well-architected data lake is suitable for such a system, as it integrates real-time data acquisition, transient data caching, and long-term storage. This paper evaluates a data lake architecture for an Accelerator Control System (ACS), focusing on two critical components of a data lake: the data cache and long-term storage.
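
        The sketch below only illustrates the write path through the two components under evaluation: a bounded in-memory cache for recent device readings plus an append-only long-term store. Real candidates (for example a distributed cache and object storage) would replace these classes; the structure is not the Fermilab design.

            import collections
            import json
            import time

            class TransientCache:
                """Keeps the most recent N samples per device for fast online queries."""
                def __init__(self, depth=1000):
                    self._buffers = collections.defaultdict(
                        lambda: collections.deque(maxlen=depth))

                def append(self, device, value, ts):
                    self._buffers[device].append((ts, value))

                def latest(self, device):
                    return self._buffers[device][-1] if self._buffers[device] else None

            class LongTermStore:
                """Appends every sample to a durable log (stand-in for object storage)."""
                def __init__(self, path):
                    self._path = path

                def append(self, device, value, ts):
                    with open(self._path, "a") as f:
                        f.write(json.dumps({"device": device, "ts": ts, "val": value}) + "\n")

            cache, store = TransientCache(), LongTermStore("samples.log")
            sample = ("BPM:01:X", 0.012, time.time())
            cache.append(*sample)                            # hot path: online access
            store.append(*sample)                            # cold path: archival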

        Speaker: Amol Jaikar (Fermi National Accelerator Laboratory)
      • 149
        Evolution of White Rabbit network for large-scale deployment at CERN

        The White Rabbit (WR) network at CERN is undergoing a significant expansion with the introduction of a new branch dedicated to General Machine Timing (GMT), Beam Synchronous Timing (BST), Safe Machine Parameters (SMP) and Radio Frequency (RF) distribution.

        Prior to 2024, the WR network counted approximately 35 switches and 70 nodes. The new deployment aims to add nearly 200 switches and 1800 nodes to the network, representing a considerable increase in operational complexity. Such a rapid expansion requires careful network design and configuration, as well as the development of more advanced tools for continuous monitoring and efficient diagnostics. This paper presents the design and configuration of the new network, as well as diagnostic methodologies and monitoring strategies implemented to support and supervise the evolving network infrastructure.

        Speaker: Maciej Suminski (European Organization for Nuclear Research)
      • 150
        Federated PV management and cross-network data archiving framework for distributed EPICS systems in Korea-4GSR

        The Korea-4GSR project involves the development of a distributed EPICS control infrastructure where each subsystem—ranging from the accelerator core to beamlines and utility stations—is independently operated. To enhance unified control and monitoring across these segmented environments, we developed a federated PV management and data archiving framework that integrates metadata synchronization and real-time monitoring capabilities.

        Each control domain retains logical independence, supported by basic network segmentation and routing strategies to ensure operational stability. EPICS Gateways are introduced to enable cross-network PVAccess communication, allowing seamless real-time PV lookup between isolated subnets while preserving autonomy and facilitating future scalability.

        A centralized ChannelFinder service aggregates PV metadata from each subsystem using secured, API-based batch registration scripts. A custom-built Python GUI enables operators to efficiently search, tag, and manage large numbers of PVs. Integration with the Phoebus interface provides unified visualization and monitoring of PVs across all federated systems.

        In parallel, an Archiver Appliance continuously collects selected PV data from all subsystems into a unified time-series database. This supports advanced services such as fault analysis, system performance tracking, and predictive diagnostics.
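
        A hedged sketch of API-based batch PV registration against a ChannelFinder REST service is shown below; the endpoint layout follows the upstream service, but the host, credentials, property names, and PVs are assumptions, not the Korea-4GSR configuration.

            import requests

            CF_URL = "https://cf.example.org/ChannelFinder/resources/channels"
            AUTH = ("cf-admin", "secret")                    # placeholder credentials

            channels = [
                {
                    "name": f"SR:BPM:{i:03d}:X",
                    "owner": "cf-admin",
                    "properties": [
                        {"name": "subsystem", "owner": "cf-admin", "value": "storage-ring"},
                        {"name": "iocname", "owner": "cf-admin", "value": f"ioc-bpm-{i:03d}"},
                    ],
                    "tags": [],
                }
                for i in range(1, 4)
            ]

            resp = requests.put(CF_URL, json=channels, auth=AUTH, timeout=30)
            resp.raise_for_status()
            print(f"Registered {len(channels)} channels")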

        Speaker: sunwoo Kang (Korea Basic Science Institute)
      • 151
        Fiber signal attenuation due to temperature

        The SLAC National Accelerator Laboratory distributes its timing signal along the accelerator via a mix of fiber cables. It is known that an RF-over-fiber signal is affected by temperature fluctuations, which cause a phase shift in the signal. For this reason, special consideration was given in the design of the LINAC Locking system to the type of fiber and type of connector to be used, since the system communicates via an RF-over-fiber signal. However, there have been some instances over the years where higher temperatures have caused issues with the continuous signals going through the fibers. The cause of the signal degradation was usually a damaged fiber or a degraded transceiver, but increased temperatures could lead to significant signal loss before complete failure of the devices occurred. This paper describes how SLAC takes the temperature environment into consideration during troubleshooting efforts, and the design considerations to be made for future developments.

        Speaker: Daniel Sanchez (SLAC National Accelerator Laboratory)
      • 152
        First diagnostic of the Laser Megajoule timing system

        The Laser MegaJoule (LMJ), a 176-beam laser facility developed by CEA, is located at the CEA CESTA site near Bordeaux. The LMJ facility is part of the French Simulation Program, which combines improvement of theoretical models and data used in various domains of physics, high performance numerical simulations and experimental validation. It is designed to deliver about 1.4 MJ of energy on targets, for high energy density physics experiments, including fusion experiments.
        With 19 bundles operational by the end of 2025, the operational capabilities are gradually increasing until full completion of the LMJ facility by 2027.
        At present, there is no global control and measurement of the synchronization system*. It's up to each individual subassembly to check that the trigger or fiducial signals are correct.
        To ensure that the LMJ synchronization system works perfectly, a new diagnostic was developed and tested on several 1w and 3w laser bundles in comparison with central chamber synchronization.
        In this paper, a review of the LMJ’s first synchronization measurement is given with a description of the prototype measurement diagnostic, the main values measured and a presentation of the future deployment of this diagnostic on all 22 bundles.

        Speaker: Thierry Somerlinck (Commissariat à l'énergie atomique et aux énergies alternatives)
      • 153
        From terabytes to petabyte: scaling of the archiving system for FAIR

        With the recent rise of AI and various machine learning models, the importance of storing and managing data generated by control systems is greater than ever before. In 2016, GSI began developing an archiving system to collect, store, and retrieve data from the diverse accelerator devices managed by the GSI control infrastructure. The system was successfully deployed in production in 2021. To evaluate its capabilities and suitability for operational needs, the system was initially launched with a limited storage capacity of 50 TB and reduced computing power. With a current data volume of over 100 GB per day, the archiving system quickly exceeded its initial limits. However, the experience gained in day-to-day operations thus far has allowed us to better understand our use-cases and identify areas for further improvement. In preparation for the anticipated start of FAIR operations in 2027, the system will require significant scaling to meet future demands. Therefore, this is an opportune moment to review and refine the system’s architecture based on the experience gained so far. This paper outlines the challenges encountered with the current implementation and presents the solutions that will be incorporated into the system for FAIR operations.

        Speaker: Vitaliy Rapp (GSI Helmholtz Centre for Heavy Ion Research)
      • 154
        GPS IRIG-B over the fiber - an alternative to NTP

        Time-stamping of neutron events and environment parameters at the Spallation Neutron Source (SNS) instruments is based on the Network Time Protocol (NTP), which distributes GPS time to the instrument front-ends. We present an alternative way of distributing GPS time over optical fiber channels. Instead of the millisecond accuracy of NTP, we distribute GPS IRIG-B signaling to remote IRIG-B receivers with nanosecond-level accuracy.
        The new approach is implemented in VHDL and demonstrated on an FPGA development board using Vivado design tools.
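
        For background, IRIG-B encodes 100 symbols per second as pulse widths (roughly 2 ms for binary 0, 5 ms for binary 1, and 8 ms for a position marker). The facility implementation is in VHDL on an FPGA; the Python classifier below only illustrates the encoding itself, with hypothetical tolerance values.

            def classify_pulse(width_ms):
                """Map a measured pulse width to an IRIG-B symbol."""
                if abs(width_ms - 2.0) < 1.0:
                    return "0"
                if abs(width_ms - 5.0) < 1.0:
                    return "1"
                if abs(width_ms - 8.0) < 1.0:
                    return "P"                               # position identifier / marker
                raise ValueError(f"unexpected pulse width: {width_ms} ms")

            symbols = [classify_pulse(w) for w in (8.0, 2.1, 4.9, 2.0, 5.2, 7.8)]
            print("".join(symbols))                          # e.g. "P0101P"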

        Speaker: Mr Adrian Hernandez (Oak Ridge National Laboratory)
      • 155
        HDB++, a retrospective on 5+ years using Timescale

        The Tango HDB++ project is a high-performance, event-driven archiving system that stores data with microsecond resolution timestamps. HDB++ supports various backend databases to accommodate any infrastructure choice, with Timescale as the default option. Timescale, an extension of PostgreSQL, is selected for its exceptional performance, reliability, and open-source license.
        After more than five years of using the system in production at major facilities such as the ESRF, MAX IV and SKAO, this paper presents the insights gained from operating HDB++ with Timescale in a large research facility.
        Results are presented from various perspectives. From a performance standpoint, the paper examines how the scalability features have maintained low query response times despite the continuous growth in data volume over the years. From the system administration perspective, findings show that standardized and proven technologies have consistently supported high-quality service delivery. Lastly, from the user perspective, we analyze how users can query data stored from the inception of the project up to the previous week within seconds, either from the Python API or from clients such as Grafana. This capability is also enabled by the successful migration and integration of archived data from older or different systems into the database in full compliance with HDB++ standards.
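
        As a hedged example, archived scalar data can be queried directly from a TimescaleDB/PostgreSQL backend with psycopg2 as sketched below; the host, table, and column names are placeholders rather than the exact HDB++ schema, and the HDB++ APIs remain the recommended access path.

            import psycopg2

            conn = psycopg2.connect(host="hdbpp.example.org", dbname="hdb",
                                    user="reader", password="secret")
            with conn, conn.cursor() as cur:
                cur.execute(
                    """
                    SELECT data_time, value_r
                    FROM att_scalar_devdouble            -- placeholder table name
                    WHERE att_conf_id = %s
                      AND data_time BETWEEN %s AND %s
                    ORDER BY data_time
                    """,
                    (42, "2025-01-01", "2025-01-02"),
                )
                for data_time, value in cur.fetchall():
                    print(data_time, value)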

        Speaker: Reynald Bourtembourg (European Synchrotron Radiation Facility)
      • 156
        Homogenizing a control system with a long history

        The LANSCE accelerator is over 50 years old; its original control system was fully replaced with an EPICS control system a few years ago. However, the upgrade process was slow, and the "new" control system has processor boards that are 30 years old, as well as new soft-core FPGA processors. We have eliminated VXI and almost eliminated CAMAC crates, but we also have VME, VME64, cPCI, and VPX. The control system uses four types of VME processor boards and three real-time operating systems. The diversity in the control system makes it difficult for new personnel to support everything, and it makes maintenance more difficult. We are in the process of consolidating to fewer hardware platforms and rewriting software that has incurred too much technical debt, while also developing new replacement technologies for upgrade projects. Most of our hardware I/O has been converted to CompactRIO-based systems. Our timing system cards are either VME or cPCI. VME has had a very long support life, and VPX also looks promising for a long support lifetime. We are currently planning to develop a solution with an IP-based event receiver so that it will be easier to migrate to new commercial off-the-shelf FPGA boards. We are also hoping to move away from VxWorks. Newer device support is being converted to operating-system-independent software using asynPortDriver. Simplifying the hardware and software is the key to sustainable maintenance.

        Speaker: Scott Baily (Los Alamos National Laboratory)
      • 157
        Installation and commissioning progress of the 2PACL CO2 cooling control systems for Phase II upgrade of the ATLAS and CMS experiments

        Within the scope of the High Luminosity Program of the Large Hadron Collider at CERN, the ATLAS and CMS experiments are progressing in the installation and commissioning of their environmentally friendly low-temperature detector cooling systems for their new trackers, calorimeters, and timing detectors. The selected “on-detector” cooling solution is the CO2 pumped loop concept, an evolution of the successful 2PACL technique that allows for oil-free, stable, low-temperature control. These systems are of unprecedented scale and considerably more complex, for both mechanics and controls, than today's installations. This paper will present a control system overview, the applied PLC architecture, and the installation and commissioning progress achieved by the EP-DT group at CERN over recent years. We will describe in detail the homogenised solutions, spanning surface and underground installations, that have been applied to the future CO2 cooling systems for the silicon detectors of ATLAS and CMS, as well as the applied multi-level redundancy for electricity distribution, mechanics, and controls. We will discuss numerous controls-related solutions deployed for electrical design organization, instrumentation selection, and PLC programming. We will finally present how we organised early control system commissioning as an initial step for LHC Long Shutdown 3.

        Speaker: Lukasz Zwalinski (European Organization for Nuclear Research)
      • 158
        Integrated data acquisition and processing pipelines for users at Elettra 2.0: a case study at SYRMEP, the μCT beamline

        Elettra, the Italian Synchrotron in Trieste, is about to undergo a major upgrade of the facility. To effectively exploit such improvement, data acquisition and processing are being integrated into single automated pipelines, subject to a facility-wide standard and yet flexible enough to accommodate the specific usage at each beamline.
        The entire procedure of data acquisition and processing spans a vast ecosystem of different structures and frameworks, which are being standardized across the whole facility: the GeCo control system, handling the safety and the operation of the beamlines; the TANGO Controls framework, which allows distributed control of the data acquisition process; the data storage and processing architecture, whose accessibility is mediated by VUO, the Elettra unified portal; a modular adaptive processing infrastructure (MAPI) for analysis workflows; and the data@Elettra data lake. The intertwining of all these components results in the integrated pipeline experienced by the users.
        The acquisition/processing sequence presently in place at SYRMEP, the microtomography beamline, is presented as a case study of the standard structure that is being designed. Beamline and acquisition control, on-the-fly and post-acquisition processing are described in the light of the general landscape proposed for Elettra 2.0, the upgraded facility.

        Speaker: Adriano Contillo (Elettra-Sincrotrone Trieste S.C.p.A.)
      • 159
        KEK electron/positron injector LINAC control system

        Since May 2019, the KEK electron/positron injector LINAC has successfully conducted simultaneous top-up injections into four independent storage rings and a positron damping ring. To ensure long-term stable beam operation under such a complex operational scheme, maintaining high availability of the entire control system is essential.
        The original LINAC control system, developed in the 1990s, was based on a proprietary in-house software framework utilizing Unix servers, VME bus systems, and PLCs. Its operator interface was implemented using the Tcl/Tk scripting language and supported approximately 3,500 control points.
        To improve development efficiency and enhance compatibility with other accelerator control systems, the LINAC control system has been progressively migrated from this legacy platform to an EPICS-based architecture. As a result, the system now supports approximately 200,000 control points. In addition, Linux-based hyper-converged servers, a PXI bus-based system, and in-house embedded frontends have also been implemented.
        In this paper, we provide a detailed overview of the current status of the KEK injector LINAC control system, along with prospects for its future development.

        Speaker: Masanori Satoh (High Energy Accelerator Research Organization)
      • 160
        LabVIEW-based modular control system for the Novel Electron Window Test Stand

        Electron Beam Flue Gas Treatment (EBFGT) offers a solution to reduce SOx and NOx emissions coming from the maritime industry. The accelerator setup must be compact and energy-efficient enough to fit onboard a ship. To study EBFGT efficiency, a membrane window has to be placed between the electron gun and the beam-gas interaction area. A Novel Electron Window Test Stand (NEWTS) has been designed to test and validate these ultra-thin membranes by studying the electron beam attenuation and deflection.
        A dedicated control system for NEWTS has been designed and deployed, which ensures reliable real-time operation in both continuous and pulsed (from milliseconds to nanoseconds) modes. In this paper, we will explain the implementation and architecture of the control system and its suitability to operate in a harsh environment. By combining the capabilities of LabVIEW, RADE, Gitlab CI, Python, NI cRIO and FPGA, we will show how the control system for NEWTS was able to be put in place within a few months to control a variety of "off-the-shelf equipment" and integrate them within the CERN infrastructure. We will also demonstrate the control system's versatility, such that it can be reused to control other electron gun test benches at CERN.

        Keywords: reconfigurability, control system, rapid deployment, versatility, modularity.

        Speaker: Elena Galetti (European Organization for Nuclear Research)
      • 161
        Leveraging agile methodologies for efficient conference delivery: a case study from ICALEPCS 2023 Africa

        The ICALEPCS 2023 conference was a milestone event as the first to take place on the African continent. With intricate interdependencies, aggressive timelines, distributed authors and organizers, and high-stakes expectations, the Local Organizing Committee (LOC) adopted an agile delivery model inspired by Scrum and Kanban practices. This paper demonstrates the rigorous application of agile artifacts, including stand-ups and retrospectives, to manage parallel work streams such as program curation, sponsor outreach, and venue coordination. A digital task board ensured real-time visibility and flow control, while continuous integration of stakeholder inputs informed adaptive planning. The agile process enabled the LOC to improve cross-functional coordination, mitigate risks early on, and produce high-value outcomes incrementally. This paper will demonstrate how agile practices can be applied reliably to large-scale technical event planning, and offers a reusable model for the delivery of subsequent conferences in distributed and dynamic environments.

        Speakers: Bulelani Xaia (South African Radio Astronomy Observatory), Naadjia Padavattan (South African Radio Astronomy Observatory)
      • 162
        MAPS vacuum control system upgrade at ISIS neutron and muon source

        The control system, based on a Schneider Quantum PLC and HMI, had been in operation for over two decades and faced increasing reliability and support challenges due to hardware obsolescence, lack of OEM support, and limited compatibility with modern protocols. To address these issues, the system was upgraded using an Omron NX-series PLC and NA5 HMI, along with a complete redesign of the control cabinet and documentation conforming with the British Standard BS7671 Wiring Regulations. A major challenge was integrating legacy devices, such as the TPG300 vacuum gauge and cryopump controllers, using RS232 communication. The LabVIEW-based interface acting as a bridge to the cryopump controller was replaced with custom serial PLC logic, eliminating reliance on a third-party device. The new system uses OPC UA to interface with EPICS (IBEX), enhancing cybersecurity and data integrity. A critical safety flaw in the legacy logic that risked over-pressurising the vacuum tank was resolved by redesigning the vacuum personnel protection system. The transition process included various tests and design reviews with stakeholders. The upgraded PLC logic now includes fault diagnostics for each device, improving maintainability and troubleshooting. The HMI redesign includes a new GUI and enhanced information display to better support scientists and operators. Offline commissioning was performed using Node-RED and OPC UA simulation. The result is a more secure, robust system.
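
        A minimal OPC UA read using the python-opcua client library, illustrating the kind of PLC-to-EPICS bridge described, might look as follows; the endpoint URL and node identifier are placeholders for the Omron NX-series server configuration.

            from opcua import Client

            client = Client("opc.tcp://maps-vacuum-plc.example:4840")   # hypothetical endpoint
            client.connect()
            try:
                pressure_node = client.get_node("ns=4;s=TPG300.Gauge1.Pressure")  # placeholder node id
                print("Vacuum pressure:", pressure_node.get_value(), "mbar")
            finally:
                client.disconnect()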

        Speaker: Aamir Khokhar (ISIS Neutron and Muon Source, Science and Technology Facilities Council)
      • 163
        Modernization of DARHT Axis I Timing and Logic

        The Dual Axis Radiographic Hydrodynamic Test (DARHT) facility consists of two linear induction accelerators (LIAs) used for flash radiography. The electron-to-Bremsstrahlung X-ray conversion at the target generates upstream-traveling debris and requires debris blockers for machine protection. A spinning-wheel debris blocker is installed on DARHT Axis I, and the opening of the spinning-wheel iris must coincide with the injector firing to allow the electron beam to pass through while stopping debris from potentially traveling back into the accelerator. The coincidence between the spinning wheel and the injector is the first signal used in the slow sequence of logical feedback needed to fire the machine. DARHT Axis I utilizes a pair of legacy custom electronics to handle this synchronization, giving the green light to the controls system to fire. This hardware is decades old and requires modifications to the input signals to maintain the operational readiness of the accelerator. In this poster, we will discuss how we have reconfigured this hardware to bolster the ability to synchronize the machine with the controls and timing systems using standard off-the-shelf hardware. We will then discuss the new diagnostics and interlocks that we have implemented to prevent future interruption of accelerator operations, and plans to incorporate a second debris blocker on Axis II into this timing configuration.

        Speaker: James Maslow (Los Alamos National Laboratory)
      • 164
        Modernization of PLC-based control systems at SNS

        When the SNS site was built around 20 years ago, the Conventional Facilities (CF) control systems were designed using 2 communication protocols to allow programmable logic controllers (PLCs) to interface with motors, variable frequency drives (VFDs), and distributed inputs and outputs (I/O). The protocol chosen to control motors and VFDs is DeviceNet, a CANbus-based protocol developed by Rockwell Automation. The protocol chosen to communicate with distributed I/O is ControlNet, another protocol developed by Rockwell Automation. Both of these protocols are obsolete and present reliability and maintainability issues, particularly DeviceNet. As the Control Systems Section at SNS is working to modernize control systems throughout the machine, a major goal for PLC-based systems is to remove the obsolete communication protocols in favor of standard, ubiquitous Ethernet. To this end, any new VFDs installed use Ethernet communication. Many VFDs are currently being replaced in the Central Utilities Building (CUB) and the Central Exhaust Facility (CEF) and are being removed from DeviceNet in favor of Ethernet communication. Planning is underway to retrofit Eaton Intelligent Technology motor control centers (MCCs) in the Target Building to remove DeviceNet adaptors and replace them with Ethernet adaptors for each motor starter. The ControlNet network in the CUB has been demolished, with I/O drops integrated into a local Ethernet network, improving sustainability and maintainability.

        Speaker: Isaiah Beaushaw (Oak Ridge National Laboratory)
      • 165
        Modernizing control system for klystron test stand at LANSCE

        Modernizing scientific test stands is essential for improving data acquisition, control precision, and integration with contemporary research workflows. This paper presents our approach to upgrading the legacy klystron test stand at Los Alamos Neutron Science Center (LANSCE) by implementing EPICS (Experimental Physics and Industrial Control System) for real-time control and monitoring, as well as an overhaul of the diagnostic hardware systems. The transition to EPICS enables scalable, network-distributed control, standardizes communication protocols, and enhances compatibility with the rest of LANSCE’s control systems. The improved control system provides intuitive, customizable interfaces for experiment configuration, live visualization, and automated data logging. This upgrade significantly increases maintainability, user accessibility, and automation capabilities, while reducing system downtime and improving experimental reproducibility. The work demonstrates a practical, extensible model for upgrading test infrastructure in research environments where flexibility, openness, and precision are essential.
        LA-UR-25-24511

        Speaker: Jonathan Quemuel (Los Alamos National Laboratory)
      • 166
        Modernizing the RF control systems at the Advanced Photon Source

        The Advanced Photon Source (APS) recently completed a significant upgrade to its storage ring, replacing all existing components with new ones. However, the RF systems and their control mechanisms were not part of this upgrade. The APS operates four RF systems: the Linac, the Particle Accumulator Ring (PAR), the Booster Ring, and the Storage Ring. These systems have traditionally relied on analog controls utilizing a combination of VME/VXI and Allen-Bradley PLC5 hardware, which are now obsolete. This paper discusses the ongoing transition to digital controls, the integration of new RF hardware, and the progress achieved thus far.

        Speaker: Nicholas DiMonte (Argonne National Laboratory)
      • 167
        Motion control systems for insertion devices in Diamond-II

        Diamond Light Source has been operating since 2007 and currently has 26 motion-controlled insertion devices that produce synchrotron light for the majority of the 36 beamlines in operation. The Diamond-II upgrade will reduce the emittance, increase the energy of the electron beam, and increase the number of straights available, and includes the delivery of three flagship beamlines.
        As part of delivering Diamond-II, we plan to build and procure 12 new insertion devices, of which 10 will be motion-controlled using in-house designed and built control systems. We also plan to upgrade three control systems to manage obsolescence and enable software upgrades. This paper describes the various generations of motion control systems present, and outlines the upgrade plans, controls challenges, and special requirements.

        Speaker: Ronaldo Mercado (Diamond Light Source)
      • 168
        Muon Intermediate Target control system upgrade at ISIS Neutron and Muon Source

        The ISIS facility at STFC Rutherford Appleton Laboratory had an upgrade to the controls for its muon production target in April 2025, as part of our obsolescence management plan. The PLC components and associated technology, such as the DeviceNet used for data exchange, were from over two decades ago. At ISIS, an Extracted Proton Beam (EPB1) from the synchrotron interacts with the graphite target to produce pulsed pions that decay into muons and feed the experimental areas in the RIKEN-RAL and European Commission Muon facilities. This posed a significant risk to reliable operations due to the potential failure of obsolete hardware components, necessitating a complete replacement of the control system. This project involved electrical redesign to support controls hardware in conformance with IEC 60364, transitioning to newer Omron NX1 PLCs that support OPC UA and connect over a modern IDE, adopting EtherCAT to replace the DeviceNet communication protocol to remote I/O nodes, implementing a high-integrity controller running on redundant power supplies for tripping the proton beam, and streamlining the data monitoring and acquisition process through EPICS. In this paper, the system architecture and design choices are reviewed, and a report is provided on the challenges faced in testing and commissioning in a high radiation environment.

        Speaker: Hamza Maqbool (Science and Technology Facilities Council)
      • 169
        Nanoprobe diffraction and scattering method - BCDI, ptychography, robotics

        A new Nanoprobe beamline is under construction at the ANSTO Australian Synchrotron. The 100 m-long beamline aims to achieve 60 nm-resolution X-ray fluorescence microscopy and correlated 10 nm-ptychography. In addition, the Nanoprobe will implement nanobeam diffraction and scattering methods, including Bragg coherent diffractive imaging (BCDI) and ptychography. To record the diffraction from the sample, over an approximate quarter-hemisphere relative to the incident beam, and at sample-to-detector distances from 0.1 – 6.0 m, several detector gantry options were available. ANSTO has engineered a cost-effective solution utilizing a 6-axis industrial robot with 3 m reach and 20 kg payload capacity (KUKA KR20 R3100) travelling on a 6m linear track (Güdel TMF-6) to support and position a diffraction detector (Dectris EIGER2 X 1M). The robot system is required to position the detector sequentially around a chosen (r,θ,ϕ), where cylindrical coordinates define the sample-to-detector-center distance, r, and azimuthal θ and vertical ϕ take-off angles. For certain experiments, the detector will be positioned in a defined (X,Y) plane perpendicular to the incident X-ray beam to capture a full diffraction pattern at a distance away from the sample. The design considerations, and operational configurations for the robot detector positioning system will be discussed in this talk/poster.
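
        For orientation, one possible convention for turning a requested (r, θ, ϕ) detector position into Cartesian robot targets is sketched below; the axis definitions are assumptions and may differ from the beamline's actual coordinate frames.

          # Illustrative conversion only: beam assumed along +x, theta as the
          # horizontal (azimuthal) take-off angle, phi as the vertical take-off
          # angle, with the sample at the origin.
          import math

          def takeoff_to_cartesian(r_m: float, theta_deg: float, phi_deg: float):
              """Return (x, y, z) of the detector centre in metres."""
              th = math.radians(theta_deg)
              ph = math.radians(phi_deg)
              x = r_m * math.cos(ph) * math.cos(th)   # along the incident beam
              y = r_m * math.cos(ph) * math.sin(th)   # horizontal, out of the beam
              z = r_m * math.sin(ph)                  # vertical
              return x, y, z

          # usage: takeoff_to_cartesian(1.0, 30.0, 15.0)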

        Speaker: Jun Yee Tan (Australian Synchrotron)
      • 170
        Nanoscale precision multi-axis motion control for the CBXFEL project

        The Cavity-Based Free Electron Laser (CBXFEL) project is proposing to produce a recirculating X-ray cavity and deploy it to the SLAC LCLS (Linac Coherent Light Source) Hard X-ray (HXR) undulator line. The shape of the cavity is defined by four diamond crystals which must be positioned with nanometer-level accuracy in four Degrees-Of-Freedom (DOF). Additionally, several electron and X-ray beam diagnostic components need to be precisely positioned to achieve, monitor, and maintain the cavity alignment. These functions are accomplished by a total of sixty-nine motion axes, eight of which are actuated by lead-screw stages operated by stepper motors, thirty-seven by Ultra High Vacuum (UHV) SmarAct piezo stages, and twenty-four by custom-designed flexure stages actuated by UHV piezo linear actuators, with real-time position feedback provided by capacitive sensors and optical interferometers. A motion control system based on the CK3M PMAC architecture was developed to drive the different motion stages. This paper describes the main requirements to be met, how the technologies were integrated into the accelerator control system, and the main lessons learned.

        Speaker: Maria Alessandra Montironi (SLAC National Accelerator Laboratory)
      • 171
        New L2SI dynamic reaction microscope endstation in TMO: control system design, installation and integration

        To take advantage of the world's most powerful X-ray beam delivered by the LCLS-II project, the former Atomic, Molecular & Optical Science (AMO) instrument at the SLAC Linac Coherent Light Source (LCLS) user facility has been upgraded to the Time-resolved AMO (TMO) instrument by the L2SI project. The new Dynamic Reaction Microscope (DREAM) endstation, also covered by the L2SI project and located at the second interaction point of the TMO, will offer unique capabilities to support cutting-edge research in the fundamental science of matter and energy. This talk provides an in-depth overview of the control systems for the DREAM endstation, detailing its architecture, design methodology, implementation, and seamless integration with the broader LCLS control infrastructure. It will also address the key challenges, including integrating SmarAct motion control systems with the X-ray Machine Protection System (MPS) across different platforms, developing a robust and flexible equipment protection system, and implementing automated vacuum controls to meet stringent reliability and operational requirements.

        Speaker: Jing Yin (SLAC National Accelerator Laboratory)
      • 172
        No child left behind: managing requirements, interfaces, and communication in high-impact projects

        The Linac Coherent Light Source (LCLS) is a world-leading facility located at the SLAC National Accelerator Laboratory that constantly pushes the boundaries of science and technology. To stay at the frontier, we must continuously upgrade and evolve our instruments and control systems — which means tackling new projects, new capabilities, and, most importantly, new requirements.
        This talk will outline how the LCLS Experiment Control Systems (ECS) team works closely with stakeholders across LCLS, SLAC, and the project teams to define, capture, and manage requirements and interfaces for major projects like LCLS-II-HE and MEC-U. This talk will highlight the processes developed by our LCLS System Engineering Team and how ECS executes them to bring clarity and structure to our collaborations, as well as how we are leveraging Jama Connect as our central platform for capturing, reviewing, and refining these critical project elements. By standardizing our approach and tools, we are building a stronger foundation for today’s upgrades and tomorrow’s innovations.

        Speaker: Mitchell Cabral (SLAC National Accelerator Laboratory)
      • 173
        Novel distributed fast controls architecture for the consolidation of CERN's PS kickers

        The control of fast pulsed magnet systems at CERN often requires a common set of fast digital electronics sub-systems to perform tight timing control and fast protection of high-voltage pulse generators. Although the generator architecture is mainly modular, these control systems have until now mostly been centralized: several generators per equipment, but one global, equipment-specific control system.
        With the upcoming consolidation of CERN's PS kicker magnet controls, a new distributed architecture is proposed. Instead of one global control crate per functionality (timing, fast protection, acquisition, etc.), this new approach incorporates one control crate per generator, merging several functionalities together. The crate becomes more generic, offering higher flexibility in terms of system size (number of generators or magnets). It also reduces cabling costs, but comes with new challenges in terms of data transmission bandwidth and software latency.
        This paper presents the new Distributed Kicker Fast Controls (DKFC) solution based on the CERN ATS Distributed I/O Tier (DI/OT) ecosystem*, including new Open Hardware electronic boards (ADCs, DACs, I/Os, dry contacts, etc.) and a gateware structure with high-speed board-to-board data exchange. Advantages and drawbacks of this new architecture and possible future extensions are also discussed.

        Speaker: Léa Strobino (European Organization for Nuclear Research)
      • 174
        Online analysis for kicker missing pulse diagnosis

        The PS beam extraction system includes 12 kicker magnet modules, nine in section 71 and three in section 79, designed to deliver full kick strength for ejecting a 28 GeV/c beam. Since 2020, sporadic missing pulses caused by aging HV generators linked to old electronic control equipment have reduced performance and have been challenging to diagnose. This led to the development of a Missing Pulse Detection Analyser to assist expert diagnostics. Started in 2021, the offline tool correlates kick pulse waveforms with timing data logged in CERN’s data logging system (NXCALS), providing an analytical and statistical overview. It has since become an online pulse-to-pulse analyzer that uses data from post-mortem acquisition, the Internal Timing System, and the Generator State Controller, all accessed through Front-End Software Application (FESA) classes. A compact feed-forward neural network, added in 2024, improves early detection of waveform deviations and missing pulse patterns. Developed in Python within CERN’s Unified Control Application framework (UCAP), the analyzer interfaces seamlessly with FESA and the Java API for Parameter Control (JAPC), publishing diagnostics through control middleware. This paper details its architecture and initial deployment on the PS Complex (KFA71/79), highlighting operational experience, diagnostic advantages, and plans for integration within the Efficient Particle Accelerator (EPA) framework, including expansion to additional subsystems for the upcoming control consolidation during the 2026 long shutdown.
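
        The following sketch illustrates the general shape of a compact feed-forward classifier for kick waveforms; the layer sizes, input length, and framework (PyTorch) are assumptions and do not represent the analyser's actual model or its UCAP/FESA integration.

          # Conceptual sketch of a small feed-forward scorer for kick waveforms.
          # All sizes are illustrative, not the CERN analyser's model.
          import torch
          from torch import nn

          N_SAMPLES = 256  # assumed length of a downsampled kick waveform

          model = nn.Sequential(
              nn.Linear(N_SAMPLES, 64),
              nn.ReLU(),
              nn.Linear(64, 16),
              nn.ReLU(),
              nn.Linear(16, 1),
              nn.Sigmoid(),          # probability that the pulse is missing/deviant
          )

          def score_pulse(waveform: torch.Tensor) -> float:
              """Return an anomaly probability for one normalized waveform."""
              with torch.no_grad():
                  return float(model(waveform.unsqueeze(0)).squeeze())

          # usage: score_pulse(torch.randn(N_SAMPLES))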

        Speaker: Mr Christophe Boucly (European Organization for Nuclear Research)
      • 175
        Overview of the LCLS-II superconducting timing system design

        SLAC successfully delivered beam through the 1 MHz timing system, with firmware and software developed in-house, for the new superconducting linac (SC-LINAC) of LCLS-II. The SC timing system hardware is ATCA-based; the firmware and software architecture developed at SLAC will be discussed in this poster.
        SLAC now has the capability of delivering electron beam to the SC-LINAC and undulators at up to a 1 MHz nominal rate. The beam pulses can also be directed to any of the five intermediate and final destinations along the copper and superconducting linacs.

        Speakers: Carolina Bianchini Mattison (SLAC National Accelerator Laboratory), Daniel Sanchez (SLAC National Accelerator Laboratory)
      • 176
        Power PMAC drive current case study

        This paper presents a detailed case study of the Power PMAC motor controller deployed at the NSLS-II facility. Designed to support a wide range of motors and encoder types, the Power PMAC controller performs sophisticated software-based calculations to optimize key operational parameters, including drive current settings. For certain high-performance scientific instruments, maximizing torque and speed is essential, making a precise understanding of these parameter limits critical. We analyze the controller's current-setting mechanisms in conjunction with empirical measurements of the actual delivered current, obtained using a current probe and oscilloscope. This study offers valuable insights into the calibration and performance evaluation of the Power PMAC motor controller, highlighting the relationship between set parameters and real-world outcomes in high-precision applications.

        Speaker: Oksana Ivashkevych (National Synchrotron Light Source II)
      • 177
        Progress update on the superconducting undulator control system for the European XFEL

        This paper presents an update to the work previously published in [1]. Since the initial report, significant progress has been made within the European XFEL development program. The control rack for the first superconducting undulator (SCU) prototype, known as the S-PRESSO (S-PRE-SerieS Prototype), has been produced and is currently undergoing commissioning at the European XFEL. In parallel, the commissioning of both main and auxiliary power supplies is in progress. Furthermore, the architecture of the global control system, which will integrate all components of the SCU, has been finalised. This paper provides an overview of the current status of the S-PRESSO control system and outlines the next steps toward full integration into the existing permanent-magnet undulator (PMU) system.

        Speaker: Mikhail Yakopov (European X-Ray Free-Electron Laser)
      • 178
        Proof of concept of a PLC based emittance meter for the NEWGAIN project

        The SPIRAL2 accelerator features several diagnostic devices used to characterize, adjust, and monitor the beam. As part of its NEWGAIN (New GANIL Injector) upgrade project, SPIRAL2 will be equipped with a new source and a new injector. Therefore, new diagnostic tools will be developed, including an ALLISON-type emittance meter. To manage the emittance meter, we opted for a modern industrial PLC solution, leveraging its expanding capabilities and established maintenance advantages over a traditional PLC/VME combination. This paper details the architecture and programming concepts of our proof-of-concept system. It further outlines the test campaign conducted to validate the PLC's capabilities in several key areas: controlling the motors for measurement head positioning within the beam, managing high-voltage ramps, acquiring experimental data, and communicating results to the EPICS control system. The paper will also discuss findings related to current measurement accuracy, measurement rate, and synchronization, as well as the repeatability of the overall measurement process.

        Speaker: Clement Hocini (Grand Accélérateur Nat. d'Ions Lourds)
      • 179
        Real Time control system upgrade of the CERN Linac4 pre-chopper

        The CERN Linac4 pre-chopper, installed right after the H- ion source in the Low Energy Beam Transport (LEBT) section, plays a crucial role in providing the 45 keV H- beam to the first accelerating structure, the Radio Frequency Quadrupole (RFQ). By applying a pulsed electric field of -20 kV, the pre-chopper deflects the beam when not required and shapes its head and tail in order to remove the long rise time of the source and avoid transmission losses. The existing pre-chopper controller was implemented in 2015 using National Instruments (NI) LabVIEW and PXIe hardware, relying on their proprietary Real-Time (RT) operating system (Phar Lap) and a secondary Linux Front-End Computer (FEC) for integration in the CERN control system. Phar Lap reached end-of-life in 2025 and will be discontinued during the upcoming Long Shutdown 3 (LS3). This paper presents an upgrade of the control system, aimed at replacing the LabVIEW-RT control layer with standard CERN solutions, leveraging the new Debian-based Linux RT OS, Front-End Computer Operating System (FECOS), and consolidating all functionalities into a single computer. The goal was achieved using the CERN Front-End Software Architecture (FESA) 3 framework and C++ libraries to interface with the NI hardware via NI Linux drivers deployed on FECOS. A new PyQt-based graphical user interface will be developed to ease system monitoring and operation. Installation of the upgraded system is expected for LS3, using a custom PXIe crate and CPU from CERN instead of NI solutions.

        Speaker: Mr Christophe Boucly (European Organization for Nuclear Research)
      • 180
        Refurbishment strategy of the GANIL cyclotrons control system

        The Grand Accelerateur National d'Ions Lourds (GANIL) began operating in 1983. Since the scientific demand is still there, a wide renovation project called Cyclotrons Renovation (CYREN) is ongoing, aiming to keep the accelerator up and running for at least two more decades. One of the work packages of the CYREN project concerns the GANIL Cyclotrons control system (GANIL-CS). The current control system dates back to the early 90's: it relies on Versa Module Eurocard (VME) controllers, and its software was developed in the Ada language with MOTIF/XRT services for the graphical user interface (GUI). The MOTIF/XRT technology is increasingly difficult to operate with the latest Linux distributions and hardware generations, the user interfaces are difficult and time-consuming to maintain compared to state-of-the-art GUI development environments, and input/output VME cards are less and less available. The project has decided to take measures against these risks. This paper will present the strategy defined and the status of the refurbishment.

        Speaker: Mr Christophe Haquin (GANIL)
      • 181
        RTST vacuum control system design

        This paper presents the design and implementation of the Integrated Control Systems (ICS) vacuum control system for the Second Target Station (STS) within the Ring-to-Second Target Beam Transport Line (RTST) of the Spallation Neutron Source (SNS). The RTST vacuum system is crucial for maintaining the high-vacuum environment necessary for the operation of a high-intensity proton beamline, extending from the existing Ring to Target Beam Transfer (RTBT) line to the new STS. The system is composed of various components, including vacuum assemblies, sensors, pumps, and an architecture based on established SNS control systems utilizing Allen-Bradley Programmable Logic Controllers (PLCs) coupled with EPICS Soft Input/Output Controllers (IOCs). The design emphasizes reliability and safety, incorporating sector gate valves for isolation, remote controls for turbomolecular and ion pumps, and pressure monitoring through advanced gauge systems. This paper details the architectural framework, instrumentation, control layers, and operational interfaces to ensure robust management of the vacuum conditions necessary for the successful operation of the RTST.

        Speaker: Jianxun Yan (Oak Ridge National Laboratory)
      • 182
        Seven years of progress: Optimising controls hardware data management at CERN for enhanced efficiency and reliability

        The CERN Controls Hardware Data Management (CHDM) initiative was launched in 2018 with the goal of improving the efficiency of the hardware installation team and equipment groups by allowing them to easily manage and exploit their data when installing, maintaining and dismantling large-scale electronics installations.
        After seven years of refining tools, optimising data flows, both internally and between systems, as well as cleaning and defining remaining data within the centralised systems, we have successfully achieved our primary objectives: obtaining valuable reliability statistics and providing auto-generated schematics and data-driven diagnostics tools.
        This paper outlines the approach taken throughout this process and examines the costs and benefits associated with this investment.

        Speaker: Ioan Kozsar (European Organization for Nuclear Research)
      • 183
        SIBERIA -- Stitcher, Integrated in Beamline Environment, Rigid, Infallible and Alacritous

        Here we present a recent highlight at the high-throughput tomography setup at the P14 EMBL beamline on PETRA III (Hamburg, Germany) [1,2]: an automatically executed fast stitching of 3D volumes, triggered on the completion of the last reconstruction in a series of scans acquired in a raster pattern. SIBERIA has been developed to allow the beamline user to explore the stitched reconstructed data volume shortly after the data collection. Our reconstructions finish as fast as ~30 s after the completion of the scan [3]. Their size ranges from ~10 GB to ~125 GB depending on the acquisition mode (cropped, standard, extended field of view). A number of reconstructed volumes are needed to accommodate samples larger than the ~1.3 mm x 1.3 mm FOV that the beamline can illuminate. SIBERIA, running on a single cluster node with 512 GB RAM and 176 hyper-threads, produces a ready-to-be-viewed stitch in as little as 20 s after the reconstruction ends for 2 volumes. As the number of volumes and the overlaps between them increase, its execution time increases correspondingly, but remains within minutes for large datasets (e.g. 12 min for 3x3 volumes, 4096x4096x1016 pixels each, with 28 overlapping pairs). The data is read from, and written onto, a 1.7 PB BeeGFS storage system.
        SIBERIA follows a globally optimal stitching algorithm similar to that described in [4]. It is, however, a brand-new implementation in C/C++ optimized for speed and for the already existing computing environment at the beamline.
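
        As a simplified illustration of the pairwise-offset step in such stitching, the NumPy sketch below estimates the integer shift between two overlapping volumes by phase correlation; the actual SIBERIA code is a C/C++ implementation of a globally optimal algorithm and may proceed differently.

          # Simplified pairwise-offset estimate via 3D phase correlation.
          # This is not the SIBERIA implementation, only an illustration.
          import numpy as np

          def pairwise_offset(vol_a: np.ndarray, vol_b: np.ndarray) -> tuple:
              """Estimate the integer voxel shift that best aligns vol_b onto vol_a."""
              fa = np.fft.fftn(vol_a)
              fb = np.fft.fftn(vol_b)
              cross_power = fa * np.conj(fb)
              cross_power /= np.abs(cross_power) + 1e-12   # normalise -> phase correlation
              corr = np.fft.ifftn(cross_power).real
              shift = np.unravel_index(np.argmax(corr), corr.shape)
              # Wrap shifts larger than half the volume to negative offsets
              return tuple(s - n if s > n // 2 else s for s, n in zip(shift, vol_a.shape))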

        Speaker: Marina Nikolova (European Molecular Biology Laboratory)
      • 184
        SOLARIS synchrotron control system upgrade: addressing challenges and implementing solutions

        The National Synchrotron Radiation Centre SOLARIS*, a 3rd-generation synchrotron light source, stands as the most advanced research infrastructure in Poland. Since the start of operation in 2015, SOLARIS has undergone significant expansions. Initially, system upgrades were straightforward to implement. However, as the facility matured, new beamlines were created and the amount of equipment increased significantly. This led to a rise in the complexity of upgrades, prompting the SOLARIS team to focus on creating automation tools for deployments and maintaining up-to-date libraries and software. During this period, many versions of libraries, such as Python and PyQt, as well as the CentOS operating system, became obsolete, leading to increased maintenance costs. To address these challenges, a comprehensive strategy was developed. This strategy includes transitioning from CentOS 6 and 7 to AlmaLinux 9, upgrading older versions of Python to version 3.9, and updating automation tools such as Ansible and GitLab CI/CD. This paper presents the methodology employed for the control system upgrade, detailing the architecture of the new system, the upgrade process, and the challenges encountered.

        Speaker: Michał Fałowski (SOLARIS National Synchrotron Radiation Centre, Jagiellonian University)
      • 185
        SSRF superconducting wiggler control & coil voltage monitoring system and quench monitoring results

        The SSRF (Shanghai Synchrotron Radiation Facility) superconducting wiggler consists of three parts: a superconducting multipole magnet, a cryostat system, and a magnet power and control system. The superconducting multipole magnet can generate a strong magnetic field with a peak of 4.2 T, and the generated field alternates positively and negatively along the direction of electron motion in the storage ring. The superconducting wiggler is installed in the BL12 unit of the SSRF storage ring. The voltage monitoring system monitors the voltage of each part of the coil of the superconducting multipole magnet through the voltage sense leads, thereby obtaining the voltage trends of each part of the coil when a coil quench occurs. The voltage monitoring system collects the voltage data of each coil through Siemens PLC analog input modules, which is an innovative method, and realizes quench detection by recording the voltage cycle by cycle and judging it against a delayed threshold. Both the equipment monitoring function and the voltage monitoring function are achieved on the PLC system. The quench voltage of each coil is captured and analyzed.
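
        A minimal sketch of the cycle-by-cycle threshold-with-delay idea is given below; the threshold and delay values are invented for illustration and are not the SSRF settings.

          # Illustrative only: declare a quench when the coil voltage stays above
          # a threshold for several consecutive cycles. Values are placeholders.
          THRESHOLD_V = 0.5      # assumed coil-voltage threshold in volts
          DELAY_CYCLES = 3       # consecutive cycles above threshold before tripping

          def detect_quench(voltage_history):
              """Return True if the last DELAY_CYCLES samples all exceed the threshold."""
              recent = voltage_history[-DELAY_CYCLES:]
              return len(recent) == DELAY_CYCLES and all(abs(v) > THRESHOLD_V for v in recent)

          # usage: detect_quench([0.01, 0.02, 0.8, 0.9, 1.1]) -> True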

        Speaker: Tianya Meng (Shanghai Advanced Research Institute, Chinese Academy of Sciences)
      • 186
        Standardizing MicroTCA integration into the TANGO control system via ChimeraTK and OPC UA at SOLEIL

        Twenty years ago, standard electronic systems such as cPCI, motion controllers, and the S7 300 PLC were chosen to build the SOLEIL control system. Although fully operational since 2006, many of these technologies are now obsolete. As SOLEIL undergoes a major upgrade with the SOLEIL II project—aimed at constructing a 4th-generation synchrotron light source—modernizing the control system with state-of-the-art technologies has become essential. SOLEIL has adopted the MicroTCA (MTCA) platform as one of the new standard baselines for applications requiring high-speed control and data acquisition.
        This poster presents ongoing efforts to standardize the integration of MicroTCA systems and applications into the TANGO control system. It outlines SOLEIL’s MTCA integration strategy, including the development of a connector bridging TANGO to the ChimeraTK and FWK frameworks developed at DESY via OPC UA. This communication layer facilitates interoperability and modular integration within the SOLEIL control architecture. The approach is illustrated through two current MTCA-based projects—low-level RF and fast orbit feedback—highlighting technology choices, system development, installation, and preliminary results.

        Speaker: Mrs Jade Pham (Synchrotron soleil)
      • 187
        Status of the control system for the DELTA synchrotron light source

        The 1.5 GeV electron storage ring DELTA, operated by the University of Dortmund in Germany, celebrated its 30th anniversary last fall. Over the past three decades, the control system has undergone many different IT infrastructure development cycles. It was commissioned between 1994 and 1998, utilizing a series of command-line-based in-house applications that operated on individual, low-performance networked HP workstations and VME-based real-time CPUs, initially without the support of graphical user interfaces (GUIs). These GUIs were gradually implemented later with the introduction of the EPICS software package (1999-2001). Based on a combination of EPICS and a newly installed Linux PC-based client/server architecture, a variety of software tools and hardware extensions were introduced in the following years. Today, the DELTA control system utilizes an open-source virtual environment with a server management platform that integrates kernel-based virtual machines (KVM), software-defined storage and network functions on a single platform. In addition, web-based user interfaces simplify the configuration of the integrated disaster recovery tool and enhance the management of high availability and redundancy within the server cluster. Furthermore, machine learning algorithms have been incorporated into the control and optimization of the storage ring. This report gives a historical review, summarizes the most important developments in recent years and provides an outlook on future projects.

        Speaker: Detlev Schirmer (TU Dortmund University)
      • 188
        Supporting injector operation with the FAIR settings management system

        The FAIR Settings Management System is in productive use at the GSI accelerator facility. The core of the system is developed in collaboration with CERN and is based on CERN's LHC Software Architecture (LSA) framework. It is currently used at GSI for operating the synchrotrons, storage rings, and transfer lines. As part of the Injector Controls Upgrade project, which aims to integrate the Unilac linear accelerator into the new control system, concepts for scheduling parallel particle beams are being further realized. In preparation for the integration of the FAIR facility, existing concepts are being reviewed, refined and implemented. A key milestone for the upcoming implementation steps is the operation of the Unilac with the new control system during the 2026 beam time, with preparatory beam tests starting in summer 2025. This paper describes the current implementation status.

        Speaker: Jutta Fitzek (GSI Helmholtz Centre for Heavy Ion Research)
      • 189
        Technical design concept and development of the new accelerator control and timing systems for PETRA IV

        At DESY, extensive technical design and prototyping work is currently underway to upgrade the PETRA III synchrotron light source to PETRA IV, a fourth-generation low-emittance machine. The realization of the new machine necessitates a redesign of the accelerator’s control and timing systems. The primary hardware components will be based on the MTCA.4 standard, which has established itself as a reliable platform at DESY. The control system framework will be modernized to accommodate the demands of a fourth-generation light source. This paper presents the key decisions made in this context and provides an overview of the design and development process.

        Speaker: Tim Wilksen (Deutsches Elektronen-Synchrotron DESY)
      • 190
        The control system of sub-cooled liquid nitrogen cooling system for C11 CPMU at SSRF

        The Cryogenic Permanent Magnet Undulator (CPMU) is an important kind of insertion device at synchrotron radiation facilities. The magnets of a CPMU have better magnetic performance than those of a conventional in-vacuum undulator. The working temperature of the magnets in the C11 CPMU is below 80 K, so cryogenic operation of the CPMU requires a sub-cooled liquid nitrogen cooling system. The operational stability of the cooling system is the key factor for device operation throughout one continuous operation period. The control system design for the sub-cooled liquid nitrogen cooling system will be discussed, including the control system architecture, hardware and software design, and control methods. The control loop parameters and performance will be introduced. The system was put into operation in August 2024 and maintained a steady state until January 2025, with a steady control effect on the controlled temperature and pressure.

        Speaker: Tianya Meng (Shanghai Advanced Research Institute)
      • 191
        The Enhanced Liquid Interface Spectroscopy and Analysis (ELISA) beamline control system prototype

        The Enhanced Liquid Interface Spectroscopy and Analysis (ELISA) beamline is a new instrument at BESSY II focusing on a novel, integrated approach for the high-fidelity preparation and investigation of liquid interfaces using soft X-ray and infrared radiation generated simultaneously from the BESSY II light source. [1] As ELISA is part of the BESSY II+ upgrade scheme [2], it will be the first soft X-ray beamline at BESSY II to use new hardware motion control standards and a novel BESSY II EPICS deployment system.
        This contribution focuses on workflows and tools we have developed for beamline control at BESSY II. We demonstrate their application at the ELISA beamline and finish with an outlook on scaling usage across the full experimental hall.

        Speaker: Maxim Brendike (Helmholtz-Zentrum Berlin für Materialien und Energie)
      • 192
        The ESS Synchronous Data Service (SDS) development and first results

        The 5 MW proton linear accelerator of the European Spallation Source ERIC is designed to accelerate the beam at a repetition rate of 14 Hz, which dictates the refresh rate of most of the relevant data produced by acquisition systems. Each cycle of the 14 Hz timing structure receives a unique cycle ID from the ESS Timing System, which can be used as an index when data is collected and stored. The ESS Linac Synchronous Data Service (SDS) facilitates the collection of high-resolution data from various accelerator subsystems. Currently, SDS consists of an EPICS extension to be included in data acquisition IOCs and a client service (SDS Collector) that collects and stores the data produced by these IOCs. The novel features provided by the EPICS PVAccess protocol and libraries play a crucial role in this project by supporting structured data in the EPICS Process Variable data format. This paper outlines how SDS is designed, how it enables data-on-demand and post-mortem collection of large array datasets without overloading the network and describes the results of using SDS during the latest ESS beam commissioning campaign in 2025. From a broader perspective, SDS will be part of the ESS Data Framework, which comprises a set of tools to collect, store, catalog, retrieve, and analyze ESS Linac data to support advanced applications such as machine learning algorithms. This framework is briefly described in this paper.
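
        The sketch below illustrates the general idea of grouping records by timing cycle ID; it uses plain Python structures and hypothetical source names rather than the actual SDS Collector or EPICS PVAccess APIs.

          # Conceptual sketch of cycle-ID-keyed collection; not the SDS Collector.
          from collections import defaultdict

          class Collector:
              """Group structured records from many IOCs by their timing cycle ID."""
              def __init__(self):
                  self.by_cycle = defaultdict(dict)

              def ingest(self, source: str, cycle_id: int, data):
                  # e.g. source="BPM01:waveform", data=list of samples
                  self.by_cycle[cycle_id][source] = data

              def complete(self, cycle_id: int, expected_sources: set) -> bool:
                  return expected_sources.issubset(self.by_cycle[cycle_id])

          # usage: once complete(cid, sources) is True, the cycle's arrays can be
          # written to storage as one synchronous dataset.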

        Speaker: João Paulo Scalão Martins (European Spallation Source)
      • 193
        The hardware and software architecture of the MTCA.4 BPM electronics for the DESY’s PETRA IV accelerator

        The PETRA IV accelerator at DESY represents the forefront of synchrotron radiation science, demanding advanced instrumentation to deliver the requested beam quality. This paper describes the hardware and software architecture of the MicroTCA.4-based BPM electronics, prototyped collaboratively by Instrumentation Technologies (I-Tech) and DESY. The project was based on the legacy of the Libera Brilliance+ (LB+) BPM system and the Libera Base software architecture. While LB+ is based on the MicroTCA.0 architecture, the project required full compatibility with MicroTCA.4. By leveraging the existing front-end electronics and the mature software framework, the project focused on the platform compatibility challenges while keeping risks under control and expediting prototyping. The new key hardware components include a BPM-optimized RTM module developed by I-Tech and the DAMC-UNIZUP AMC processing board from DESY. After two years of tests at PETRA III, the validated prototype is ready to be industrialized and deployed for PETRA IV. The BPM application became hardware-agnostic, supporting multiple BPM boards while maintaining platform independence. This flexibility enhances system versatility and establishes its role in PETRA IV's FOFB architecture. By harmonizing innovative hardware design with robust software solutions, this work underscores a seamless transition from legacy technology to next-generation systems, offering valuable insights for future accelerator advancements.

        Speaker: Martin Anton Škoberne (Instrumentation Technologies (Slovenia))
      • 194
        The IRRAD Proton Irradiation Facility Data Management, Analytics, Control and Beam Diagnostic systems: current performance and outlook beyond the CERN Long Shutdown 3

        The proton irradiation facility (IRRAD) at the CERN East Area was built in 2014 during the Long Shutdown 1 (LS1), and later improved during LS2 (2019), to address the needs of the HL-LHC accelerator and detector upgrade projects. IRRAD, together with the CHARM facility on the same beamline, exploits the 24 GeV/c proton beam of the Proton Synchrotron (PS), providing an essential service at CERN, with more than 4400 samples irradiated during the last decade. IRRAD is operated with precise custom-made irradiation systems, instrumentation for beam monitoring (IRRAD-BPM), operational GUIs (OPWT) and a dedicated data management tool (IDM) for experiment follow-up and sample traceability. Moreover, performance tracking generated by custom-made analytics tools (Jupyter, etc.) guarantees regular feedback to the PS operation, thus maximizing the beam availability for IRRAD. While the HL-LHC component qualification is coming to an end with LS3 (2026-2028), new challenges arise for future detector, electronics component and material irradiations: reaching extremely high fluence levels and operating with lower-momentum or heavy-ion beams being some of those. In this context, we first describe the last (software and hardware) improvements implemented at IRRAD after LS2 and then present the challenges ahead that will drive future upgrades, such as applying Machine Learning techniques to the IRRAD-BPM data aiming to achieve real-time automatic beam steering and control.

        Speaker: Federico Ravotti (European Organization for Nuclear Research)
      • 195
        The White Rabbit Collaboration: An innovative model of public-private partnership

        White Rabbit (WR) is an open-source synchronisation technology developed at CERN in collaboration with other institutes and industry. It is commercially available from multiple vendors, and its adoption in industry and academia grew significantly after its standardisation as part of the Precision Time Protocol (IEEE 1588-2019). This resulted in a substantial increase in feature requests and support demands.

        To support this broader adoption, ensure the continued development of the open-source core, and establish effective governance over the evolving technology, the White Rabbit Collaboration (WRC) was formed. WRC members, through an annual fee, contribute to funding the CERN-based Bureau, responsible for maintaining WR’s core components, providing support to members, and collaboratively shaping the technology's future.

        The WRC represents an innovative model of public-private partnership and knowledge transfer of an open-source technology and can serve as a template for similar initiatives. This paper will analyse the WRC's establishment and first year of operation.

        Speaker: Evangelia Gousiou (European Organization for Nuclear Research)
      • 196
        Time served - a look at the past, present, and future of timing at Fermilab

        This presentation covers the history of Fermilab's Tevatron Clock (TCLK) timing system and how it has served to regulate the facility over the past 40 years. The presentation provides an overview of beamlines at Fermilab, the challenges of timing in a Rapid Cycling Synchrotron, and the transfer scenarios utilized to generate megawatt-class beam at America's "premier" high energy physics laboratory!
        The PIP-II project introduces additional challenges to timing at Fermilab. Over the past 5 years, the Controls group has seized this opportunity to modernize, and is actively developing an upgraded Accelerator Clock (ACLK) timing system to meet stringent performance demands. This new implementation vastly improves the real-time control and synchronization capabilities of the facility, supporting beam-synchronous operation for LBNF and beyond!

        Speaker: Evan Milton (Fermi National Accelerator Laboratory)
      • 197
        Towards sustainable work management in scientific facilities: applying Kanban at ALBA Controls

        Controls engineers in scientific facilities manage numerous projects, support multiple Customer Units (CUs), and balance operational demands with new initiatives. This often leads to growing backlogs and staff stress. Meanwhile, managers struggle to allocate resources across CUs, frequently prioritizing short-term goals at the expense of long-term strategy. At ALBA, such pressures prompted a new organizational approach. Building on lessons from successful past transitions - including the 2013 shift to operations * and a major staff turnover ** - the Controls Section adopted the Kanban method to optimize the resource allocation and maximize the throughput. Tasks were categorized by size/complexity and Class of Service (CoS), and a unified board with multiple views was implemented to visualize Work In Progress (WIP) and support follow-up. All work begins in a visible backlog, jointly prioritized with CUs based on CoS. Dedicated engineering teams were formed to improve coordination and knowledge sharing. Policies and metrics are clearly defined and transparent. The implementation was done using Jira and Confluence (Atlassian ecosystem). First results of the new approach are included in the paper. This initiative aligns with broader organizational efforts such as the Activity Plan for ALBA (APA) and project tracking within the Computing division ***, laying the groundwork toward the ALBA II upgrade to a 4th generation synchrotron light source.

        Speaker: Zbigniew Reszela (ALBA Synchrotron (Spain))
      • 198
        Towards the implementation of a new European XFEL scientific data policy – challenges and chances.

        As data volumes at European XFEL continue to grow rapidly, the need for sustainable storage, access, and preservation has become increasingly critical. In response, and despite operating within an established data management environment, the facility has introduced a new scientific data policy to address rising demands and align with evolving international best practices. The policy emphasizes long-term sustainability and adherence to the FAIR principles (Findable, Accessible, Interoperable, Reusable), promoting enhanced transparency, sharing, and reuse of research data. To support this, several key updates have been implemented. Data Management Plans (DMPs) are now mandatory from the project planning stage onward, guiding researchers in defining data workflows and lifecycle strategies. Data reduction techniques have been adopted to optimize storage without sacrificing scientific value. The storage infrastructure now features a tiered model, combining high-performance systems for short-term needs with cost-efficient long-term archival. Metadata tools have been upgraded to improve discoverability and access controls have been refined to support secure, collaborative research. These developments build on European XFEL’s strong foundation—spanning infrastructure, policy, and expertise—ensuring scalable, efficient data management in line with global standards.

        Speaker: Janusz Malka (European X-Ray Free-Electron Laser)
      • 199
        Trigger synchronization unit consolidation for SPS and LHC beam dumping kickers systems

        The purpose of the TSU (Trigger Synchronization Unit) is to synchronize beam dump requests with the Beam Abort Gap (BAG) upon request from clients, primarily the Beam Interlock System (BIS). Synchronization is crucial to prevent damaging absorbers/collimators and cryogenic ring magnets.
        The TSU system, per beam, comprises two interconnected, redundant chassis for synchronization. It internally regenerates the Beam Revolution Frequency (BRF) in case of signal loss. Synchronization between the chassis and internal surveillance ensure a high level of redundancy. Dump triggers are issued via three paths: two synchronous and one asynchronous with delay. Software performs post-mortem analysis of all dump triggers and critical signals, interlocking the TSU system on any unissued or abnormal triggers.
        The TSU is a safety-critical system for both the SPS and LHC Beam Dumping Systems, requiring high reliability. At the LBDS, one asynchronous dump is allowed per year of operation, and one catastrophic failure is acceptable per thousand years.
        During the design process, a full hardware and principle reliability study was conducted with CERN's Reliability Analysis Working Group (RAWG). In addition, a lab test bench was designed to test all functional requirements using unit tests.
        This paper presents the TSU system implementation, high-level software, GUI, and the reliability study steps and results. The study confirmed the TSU's high reliability, meeting initial specifications, with no single point of failure identified.

        Speaker: Léa Strobino (European Organization for Nuclear Research)
      • 200
        Universal LIMS for Diamond

        Universal LIMS is a new set of web services being developed at Diamond, as part of the Diamond-II upgrade. It will provide users and beamline scientists with tools to manage the logistics and scientific metadata for their experiments. For scientific samples it will allow users to ship them to Diamond, track where they are within the experiment hall, and store data about them. Universal LIMS will also allow users to define parameters for non-interactive experiments and the Data Catalogue will provide an overview of the data they have collected, including metadata from the data acquisition systems and summaries of analysis pipelines that have run. These services will work together to provide a complete workflow to facilitate user experiments at Diamond. A key part of the approach is to provide flexibility in the data that is stored. Defining a database schema to cover the needs of the eight different science groups in Diamond would be challenging and there is a development cost to updating the database schemas as requirements evolve. Instead, with Universal LIMS we store data as JSON, validated against a JSON schema. This ensures schemas can be updated easily, while the data can still be understood effectively by downstream applications. In this talk we will discuss the progress on development of the Universal LIMS services, the creation of the repository of JSON schemas and how these fit in with the software architecture being developed as part of the Diamond-II upgrade.
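
        A minimal sketch of the JSON-plus-schema approach, using the Python jsonschema package, is shown below; the schema and record are invented for illustration and do not reflect Diamond's schema repository.

          # Illustrative only: validate a flexible JSON record against a JSON schema.
          from jsonschema import validate, ValidationError

          sample_schema = {
              "type": "object",
              "properties": {
                  "sampleName": {"type": "string"},
                  "containerBarcode": {"type": "string"},
                  "volumeUl": {"type": "number", "minimum": 0},
              },
              "required": ["sampleName", "containerBarcode"],
          }

          record = {"sampleName": "lysozyme-01", "containerBarcode": "DLS-0001", "volumeUl": 2.5}

          try:
              validate(instance=record, schema=sample_schema)   # raises on invalid data
              print("record accepted")
          except ValidationError as err:
              print(f"record rejected: {err.message}")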

        Speaker: Ian Bush (Diamond Light Source)
      • 201
        Unsupervised anomaly detection in ALS EPICS event logs

        This paper introduces an automated fault analysis framework for the Advanced Light Source (ALS) that processes real-time event logs from its EPICS control system. By treating log entries as natural language, we transform them into contextual vector representations using semantic embedding techniques. A sequence-aware neural network, trained on normal operational data, assigns a real-time anomaly score to each event. This method flags deviations from baseline behavior, enabling operators to rapidly identify the critical event sequences that precede complex system failures.
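
        The sketch below illustrates the described pipeline, semantic embedding of log lines followed by a sequence model that scores the latest event; the embedding model name, layer sizes, and scoring head are assumptions, not the ALS implementation.

          # Conceptual sketch: embed log lines, score the latest event in context.
          # Model choice and sizes are assumptions, not the ALS framework.
          import torch
          from torch import nn
          from sentence_transformers import SentenceTransformer

          embedder = SentenceTransformer("all-MiniLM-L6-v2")   # 384-dim sentence vectors

          class SequenceScorer(nn.Module):
              def __init__(self, dim=384, hidden=128):
                  super().__init__()
                  self.rnn = nn.GRU(dim, hidden, batch_first=True)
                  self.head = nn.Linear(hidden, 1)

              def forward(self, window):                 # window: (1, seq_len, dim)
                  out, _ = self.rnn(window)
                  return torch.sigmoid(self.head(out[:, -1]))   # score for last event

          def score_window(log_lines):
              vecs = torch.tensor(embedder.encode(log_lines)).unsqueeze(0)
              return float(SequenceScorer()(vecs))       # untrained here; sketch only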

        Speaker: Thorsten Hellert (Lawrence Berkeley National Laboratory)
      • 202
        Update on migration to EPICS at the ISIS accelerators

        The ISIS Neutron and Muon Facility accelerators are migrating to an EPICS control system. The tools developed to run two control systems in parallel and to automate the migration of hardware and user interfaces to EPICS have been previously presented. We now detail our emerging EPICS setup. Hardware interfaces are implemented as a mixture of conventional EPICS IOCs and in-house developed equivalents in Python, or are bridged through our old control system. Our user interfaces are primarily the Phoebus stack, but web interfaces in Python are being explored, particularly to support machine learning purposes such as automated optimisation and anomaly detection. We present issues which may arise at any site in transition, such as handling continuity of data archiving.

        Speaker: Dr Ivan Finch (Science and Technology Facilities Council)
      • 203
        Upgrade of the control systems for the CERN LHC gas system distribution modules

        At the experiments of the CERN Large Hadron Collider (LHC), 31 gas systems are used to deliver a precise gas mixture to the corresponding detectors with high reliability and availability. The Distribution Modules of these gas systems control and monitor gas flow and pressure across thousands of channels. The Embedded Local Monitor Board (ELMB), an I/O board using the CANopen communication protocol*, is used to read the flow and pressure sensors. After years of operation, the PLC hardware and software of most of the Distribution Modules have become obsolete, raising the need for complex support and impeding improvements. This led to a comprehensive upgrade which was first tested on a retired LHC gas system and subsequently deployed on the production gas system of the MDT detector in the ATLAS experiment. The upgrade allowed the addition of several features, for instance the reporting of detailed sensor error codes and diagnostic information about the ELMBs. System maintainability and upgradability were improved, ensuring uninterrupted operation for the upcoming years of LHC operation.

        Speaker: Pieter Vanslambrouck (European Organization for Nuclear Research)
      • 204
        Upgrade of the Los Alamos Neutron Science Center (LANSCE) Beam Chopper Pattern Generator

        LANSCE delivers macropulses of beam, hundreds of microseconds in duration and at a nominal repetition rate of 120 Hz, to five experiment areas. These macropulses are distributed to four H⁻ areas and one H⁺ area. Each of the H⁻ experiment areas requires a unique beam time structure within the macropulse. This time structure is imposed on the beam by a traveling wave chopper located in the H⁻ Low Energy Beam Transport (LEBT) section of LANSCE. The chopper is driven by pulsed power systems which receive digital signals generated by the LANSCE chopper pattern generator. This chopper pattern generator system must maintain tight synchronization with multiple LANSCE RF reference signals and is triggered by the LANSCE master timer system. This paper describes a recent upgrade to the LANSCE chopper pattern generator from its original NIM/CAMAC/VXI form factor, including details in software and hardware, test results, and future plans.

        Speaker: Anthony Braido (Los Alamos National Laboratory)
      • 205
        Upgrading hardware and software for the LANSCE PSR ExBPM with FPGA based high speed digitizers

        This paper presents the current approach to upgrading the readout instrumentation for the Proton Storage Ring (PSR) Extraction Line Beam Position Monitors (ExBPMs), which are located between the PSR and the Lujan Center target at the Los Alamos Neutron Science Center (LANSCE) accelerator. National Instruments' PXIe platform with high-speed digitizers has been chosen as the hardware that will replace the original CAMAC/NIM/VME system, which is obsolete. The beam position algorithm for this project has been demonstrated with slower real-time LabVIEW software using NI-Scope. The software is currently being written and tested using LabVIEW FPGA to make the position processing algorithm match the acquisition speed. The beam position algorithm is designed to handle larger data streams so that it can also be used on the LANSCE linac BPMs. FPGA design of the algorithm is inherently complex and involves multiple interacting factors, such as the LinuxRT OS and supporting packages deployed on the PXIe. (LA-UR-25-23748)

        Speaker: Tyagi Ramakrishnan (Los Alamos National Laboratory)
      • 206
        Upgrading the ALS beamline equipment protection system: software development and implementation

        The Advanced Light Source (ALS) at Lawrence Berkeley National Laboratory, a third-generation synchrotron light source, has been in continuous operation since 1992. The ALS beamline equipment protection system is being upgraded by replacing legacy end-of-life Modicon PLCs with Allen-Bradley PLCs. This paper presents the software architecture of the upgraded system, highlighting the Allen-Bradley PLC programming approach and the development of the EPICS database that integrates the protection system into the facility’s control system infrastructure.

        Speaker: Najm Us Saqib (Lawrence Berkeley National Laboratory)
      • 207
        Vacuum monitoring and controls acceptance testing for LCLS-II HE cryomodules at SLAC

        The LCLS-II High Energy (LCLS-II HE) upgrade at SLAC National Accelerator Laboratory involves the delivery and long-term storage of superconducting cryomodules, manufactured at partner laboratories and shipped to SLAC. To maintain their performance, these cryomodules must preserve ultra-high vacuum (UHV) conditions prior to installation. Upon arrival, each unit undergoes a comprehensive controls acceptance test to validate instrumentation functionality, including sensor accuracy, signal integrity, and system connectivity. To ensure vacuum integrity during extended storage, a vacuum monitoring system was developed to continuously track cold cathode gauge pressure, integrated into the EPICS control system for real-time data, alarms, and archiving. Despite using a single gauge per cryomodule, the system enables early detection of vacuum degradation with minimal hardware. This paper outlines the acceptance testing, vacuum monitoring system design, and operational experiences, along with data trends, alarm scenarios, and lessons learned to improve future cryomodule storage practices.

        Speakers: Rohini Sri Priya Chillara (SLAC National Accelerator Laboratory), Ziyu Huang (SLAC National Accelerator Laboratory)
    • TUSV Speaker's Corner
      • 208
        Nanoprobe diffraction and scattering method - BCDI, ptychography, robotics

        A new Nanoprobe beamline is under construction at the ANSTO Australian Synchrotron. The 100 m-long beamline aims to achieve 60 nm-resolution X-ray fluorescence microscopy and correlated 10 nm-ptychography. In addition, the Nanoprobe will implement nanobeam diffraction and scattering methods, including Bragg coherent diffractive imaging (BCDI) and ptychography. To record the diffraction from the sample, over an approximate quarter-hemisphere relative to the incident beam, and at sample-to-detector distances from 0.1 – 6.0 m, several detector gantry options were available. ANSTO has engineered a cost-effective solution utilizing a 6-axis industrial robot with 3 m reach and 20 kg payload capacity (KUKA KR20 R3100) travelling on a 6m linear track (Güdel TMF-6) to support and position a diffraction detector (Dectris EIGER2 X 1M). The robot system is required to position the detector sequentially around a chosen (r,θ,ϕ), where cylindrical coordinates define the sample-to-detector-center distance, r, and azimuthal θ and vertical ϕ take-off angles. For certain experiments, the detector will be positioned in a defined (X,Y) plane perpendicular to the incident X-ray beam to capture a full diffraction pattern at a distance away from the sample. The design considerations, and operational configurations for the robot detector positioning system will be discussed in this talk/poster.

        Speaker: Jun Yee Tan (Australian Synchrotron)
    • WEKG Keynote Grand Ballroom

      Grand Ballroom

      Palmer House Hilton Chicago

      17 East Monroe Street Chicago, IL 60603, United States of America
      Convener: Anton Venter (South African Radio Astronomy Observatory)
      • 209
        Alternative Forms of Intelligence for Resource-Constrained Robots

        Recent breakthroughs in artificial intelligence have greatly increased the capabilities of robots. These methods, however, require large amounts of memory, computation, and training data. These resources are not always available or accessible. In this talk I discuss three examples of alternative forms of robotic intelligence. In the first example, low-level intelligence is built into the physical structure of the robot while the high-level planning is handled by a human operator. In the second example, a robot dog learns to correct a physics-based control law using a linear policy. The final example shows a swarm of low-capability agents collectively estimating environmental quantities without any individual being required to maintain a global picture of the estimate or data.

        Speaker: Matthew Elwin (Northwestern University)
    • WEAG MC10 Software Architecture and Technology Evolution Grand Ballroom

      Grand Ballroom

      Palmer House Hilton Chicago

      17 East Monroe Street Chicago, IL 60603, United States of America
      Conveners: Ralph Lange (ITER Organization), Tim Wilksen (Deutsches Elektronen-Synchrotron DESY)
      • 210
        The Tango Controls Collaboration status in 2025

        Since the last status update in 2023, the Tango Controls collaboration has undertaken a major effort to add new features to cppTango, the core of Tango Controls, and to two other official language bindings, JTango and PyTango. Significant development efforts have been dedicated to the implementation and prototyping of community-requested features. Observability is a trending topic in software development, and we have listened to our community by adding OpenTelemetry support. Continuing the cppTango refactoring, we switched to C++17 and to catch2 as the new testing framework to improve code quality and test coverage. PyTango has undergone a major overhaul by switching from boost-python to pybind11, which has been a welcome modernization of the code base and has allowed us to remove obsolete APIs. Special Interest Group (SIG) meetings continue to be a great success. Several have been held, among them one that addressed, and is still addressing, our users' request for much-improved documentation. Encryption has also been a SIG topic, and a prototype for complete end-to-end encryption of all communication in Tango Controls has been developed. CI/CD has again received major updates and gained more computing power to run more tests in less time, thanks to the GitLab runner contributions of the collaboration members. Thanks to the continuous community effort to keep a modern and well-maintained core, the future road map of Tango Controls looks promising and achievable.

        Speaker: Mr Thomas Juerges (SKA Observatory)
      • 211
        Next-gen middleware for Fermilab beam instrumentation DAQ systems

        The Fermilab Accelerator Division Instrumentation Department continually adopts modern software methodologies for complex DAQ architectures. This presentation highlights the Redis Adapter (RA) as the key software component enabling high-performance, modular communication between digitizers and distributed control systems by leveraging Redis and containerization. The RA provides a unified, efficient interface between Redis-based data streams and consumer systems.
        In the legacy architecture, digitized data flowed through the custom, UDP-based Distributed Data Communication Protocol in the middle layer. In the current system, DDCP remains the ingestion path, while the RA serves as the decoupling layer.
        The proposed system replaces old VME digitizers with a SOM-based digitizer that communicates with Redis using the RA. The RA acts as both a performance-critical bridge and a protocol-agnostic adapter, ensuring compatibility with legacy control frameworks while enabling future scalability and modularity. This restructuring of the middle layer also helps the system achieve high throughput, reduce latency, and simplify the data path.
        Finally, we will demonstrate how the RA is utilized in our two core products to deliver both legacy compatibility and future flexibility.
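
        As a rough, hypothetical illustration of the decoupling role described above (not the actual Redis Adapter code), the sketch below reads digitizer records from a Redis stream and republishes decoded values to downstream consumers; the stream, channel, and field names are invented for the example.

        import json
        import redis

        r = redis.Redis(host="localhost", port=6379)
        last_id = "$"  # only consume entries newer than the time we connect

        while True:
            # Block up to 1 s waiting for new digitizer entries on the stream.
            entries = r.xread({"digitizer:raw": last_id}, block=1000, count=100)
            for _stream, records in entries:
                for entry_id, fields in records:
                    last_id = entry_id
                    payload = json.loads(fields[b"payload"])
                    # Fan the decoded record out to any interested consumer.
                    r.publish("daq:decoded", json.dumps(payload))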

        Speaker: Shreya Joshi (Fermi National Accelerator Laboratory)
      • 212
        Introducing web based technologies at GANIL SPIRAL2 control system.

        The SPIRAL2 accelerator began operating in 2021. One of the key applications of the control system is the management of all device parameters (magnets, RF …), roughly 80,000 EPICS variables. That application is fundamental for optimizing the setup time of the accelerator and for easily reproducing the configuration of a given beam from year to year. Because web-based technologies are believed to offer many advantages, such as portability, easier maintenance, optimized use of hardware resources, and centralized security, we decided to evaluate this technology in order to form our opinion from the perspective of a wider renovation project. This paper will explain how the software architecture is designed, both on the client and server side, and what technologies we used (web framework, REST APIs, web server, database and ORM). It will also describe the outcomes we achieved in terms of application features, such as beam characteristics management, reloading of a given beam configuration and its application to the devices, storage of the accelerator setup, and calculation of parameters based on the concept of optic configurations. After 4 months of operation in 2024 with the new application, we will also discuss the question: are web-based technologies a good choice for the SPIRAL2 control system user interface?

        Speaker: Olivier Delahaye (GANIL)
      • 213
        Robust and maintainable operation using behavior tree semantics

        The ITER machine's inherent complexity and the diverse operational phases, such as commissioning and engineering operations, present significant challenges in balancing operability, integration, and automation.
        The open-source software framework oac-tree* was designed to create, maintain, and execute operational procedures in a robust and repeatable manner. Its semantics are based on a behavior tree model, which inherently supports reactive behavior, making it well-suited for goal-oriented tasks.
        To accommodate diverse use cases — from small-scale tests to fully integrated operations — the framework's architecture was built with composability and extensibility in mind. System-specific interfaces and user interactions are fully decoupled from the core library, ensuring flexibility and adaptability.
        The oac-tree library is deployed in production at ITER, offering a command-line interface for executing procedures as system daemons or for interactive use. Its maintainability is ensured by a minimal, low-complexity codebase with well-encapsulated third-party dependencies.
        The oac-tree ecosystem also includes a Qt-based graphical user interface and a server enabling multiple clients to interact with running procedures. A plugin mechanism supports framework extensions, with existing plugins available for EPICS-based control systems, mathematical expression evaluation, common control system logic, and ITER-specific network protocols.
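
        oac-tree itself is a C++ framework; the Python fragment below is only a generic illustration of behavior-tree semantics (a sequence node ticking child actions and propagating failure), not oac-tree code or its API.

        from enum import Enum

        class Status(Enum):
            SUCCESS = 1
            FAILURE = 2
            RUNNING = 3

        class Action:
            """Leaf node wrapping a callable that returns a Status."""
            def __init__(self, name, func):
                self.name, self.func = name, func
            def tick(self):
                return self.func()

        class Sequence:
            """Composite node: ticks children in order, stops on the first non-success."""
            def __init__(self, *children):
                self.children = children
            def tick(self):
                for child in self.children:
                    status = child.tick()
                    if status != Status.SUCCESS:
                        return status  # propagate RUNNING or FAILURE
                return Status.SUCCESS

        # A toy procedure: check a precondition, then perform an operation.
        tree = Sequence(
            Action("vacuum_ok", lambda: Status.SUCCESS),
            Action("open_valve", lambda: Status.SUCCESS),
        )
        print(tree.tick())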

        Speaker: Ralph Lange (ITER Organization)
      • 214
        Facilitating user code inclusion in a supported environment to enhance the flexibility of beamline control frameworks

        Beamline control system software is complex: it provides a common interface to a diverse range of hardware components, data services, and output file formats, any of which may not have been designed for ease of software integration. It should provide both simple and expert operation levels. End users are not expected to know the intricacies of the software, and facility staff will engage with it at a variety of levels. To enable users of the Beamline Experiment Control (BEC) software for SLS 2.0 at PSI to define custom and even one-off routines, while sustainably integrating them with the framework, we introduced two parallel systems.

        First, a plugin system, mainly intended for beamline staff. BEC plugins can be new devices, scan routines, data output formats, and GUI widgets. They are managed through a bespoke plugin-creation application which provides the user with the required boilerplate. It is based on highly configurable code templating, allowing us to update the template without exposing users to the peculiarities of the underlying libraries or the difficulties of resolving merge conflicts.

        Second, a system to run arbitrary user-supplied scripts, designed to limit the security issues this would normally entail. Scripts are executed in a container permissioned only to access the central message broker. The execution environment closely mimics the beamline iPython terminal environment, so that a procedure can be interactively developed, then submitted to the server.

        Speaker: David Perl (Paul Scherrer Institute)
      • 215
        Better software observability using Tango Controls with OpenTelemetry - experience at MAX IV

        Distributed software systems are complex and the interactions across multiple machines can be difficult to debug and monitor. Log messages are not enough for observability. We need more information about the communication between applications, how each one is executing, and its internal state. In practice, applications can be made more observable using software frameworks such as OpenTelemetry. The Tango Controls framework has built-in support for OpenTelemetry in C++ and Python since version 10.0.0. We are using it operationally at the MAX IV synchrotron. We provide examples of the traces, trends, and other data available when running at scale on a beamline with hundreds of devices. We report on the compute and performance impact for client and server software applications, as well as practical issues. For the backend servers that ingest and query the telemetry data (running Grafana Tempo for traces and Grafana Loki for logs) we report on the compute resources required.
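
        For readers unfamiliar with OpenTelemetry, the generic Python fragment below wraps a client operation in a trace span and exports it; Tango's built-in instrumentation is configured through the framework itself, so this is only an illustration of the underlying concept, with placeholder names.

        from opentelemetry import trace
        from opentelemetry.sdk.trace import TracerProvider
        from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

        # Export spans to the console for demonstration; a production setup would
        # typically use an OTLP exporter pointing at a collector (e.g. Grafana Tempo).
        provider = TracerProvider()
        provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
        trace.set_tracer_provider(provider)

        tracer = trace.get_tracer("beamline.client")

        with tracer.start_as_current_span("read_attribute") as span:
            span.set_attribute("device", "sys/tg_test/1")  # illustrative attribute
            value = 42  # placeholder for an actual device read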

        Speaker: Marcelo Alcocer (MAX IV Laboratory)
    • 10:30
      Coffee
    • WEBG MC06 Infrastructure and Cyber Security Grand Ballroom

      Grand Ballroom

      Palmer House Hilton Chicago

      17 East Monroe Street Chicago, IL 60603, United States of America
      Conveners: Miroslaw Dach (Lawrence Berkeley National Laboratory), Thomas Birke (Helmholtz-Zentrum Berlin für Materialien und Energie)
      • 216
        20yrs (trying to) securing controls

        Computer security is a marathon that some of us have been running for decades: trying to keep malicious actors out while not inhibiting or strongly impacting accelerator operations and data taking. This talk reviews what worked and what worked less well, and looks at the upcoming challenges in maintaining a fair balance between operations and “security”.

        Speaker: Stefan Lueders (European Organization for Nuclear Research)
      • 217
        HEPS control system network traffic detection with deep learning techniques

        The High Energy Photon Source (HEPS) is a low-emittance synchrotron radiation-based light source located in suburban Beijing. The HEPS control system encompasses both the accelerator and the beamlines. The system design principles incorporate industrial standards, a global timing system, and modular subsystems. The development of effective cybersecurity techniques for the HEPS control system is critical for enabling scientific exchange, ensuring adequate access for remote participation, and maintaining reliable equipment control, particularly in light of the increasing number of cybersecurity threats.
        Network traffic detection is a vital method for identifying network attacks. In this presentation, we introduce a deep learning-based network traffic detection method for the HEPS control system. First, the HEPS control system network traffic is collected and divided into sessions using five-tuple segmentation. Second, the traffic is converted into grayscale images that reflect the intrinsic characteristics of the traffic. Finally, these images are fed into the deep learning algorithm to train the control system network traffic detection model, allowing the original network traffic features to be learned automatically without manual effort. The proposed approach is evaluated using four commonly used metrics, and the results demonstrate that our method can effectively detect network traffic for the HEPS control system.
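
        The grayscale-image step can be pictured with the small sketch below, which packs the first bytes of a session into a fixed-size image array; the image size and preprocessing used at HEPS are assumptions here, not the published method.

        import numpy as np

        def session_to_image(session_bytes: bytes, side: int = 28) -> np.ndarray:
            """Map the first side*side bytes of a traffic session onto a
            grayscale image; shorter sessions are zero-padded."""
            n = side * side
            buf = np.zeros(n, dtype=np.uint8)
            data = np.frombuffer(session_bytes[:n], dtype=np.uint8)
            buf[:len(data)] = data
            return buf.reshape(side, side)

        # Example with a dummy payload standing in for a captured session
        img = session_to_image(b"\x45\x00\x00\x3c" * 200)
        print(img.shape, img.dtype)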

        Speaker: Jiarong Wang (Institute of High Energy Physics)
      • 218
        Centralized EPICS channel access for VDI users at NSLS-II via CA Gateway architecture

        At NSLS-II, EPICS servers for the accelerator and beamlines reside on dedicated VLANs isolated for security and network bandwidth. Since clients must run applications within respective networks, this poses a challenge for enabling centralized observability and control for staff with various roles. We have created a portal to access EPICS process variables (PVs) across the facility, using Virtual Desktop Infrastructure (VDI) and a dual Channel Access Gateway (CAGW) architecture on a dedicated “EPICS VDI” network. For each beamline and the accelerator two CAGW instances are deployed: one on the “EPICS VDI” network serving client applications, and one on the control system VLAN communicating with IOCs. The controls-side gateway bridges the isolated “Controls” network and the routable “Services” network.
        CAGW security enforces PVs as read-only by default, with Active Directory group membership granting beamline-specific write access. Any EPICS CA-based client can run in the VDI environment, including CS-Studio Phoebus—the primary tool enabling staff to interact with PVs across the facility from a single session. PV access via VDI removes the need to run client software in the Controls environment, reducing system exposure and improving architectural separation. CAGW deployment is automated by Ansible using templated generation of network settings, PV lists, and access rules. This approach builds on a proven accelerator-beamline communication model and has shown stable performance.
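
        As a flavor of the templated access-rule generation mentioned above, the sketch below renders an EPICS access-security fragment from a few parameters; the group and user names are placeholders and do not reflect the NSLS-II configuration or its Ansible templates.

        import textwrap
        from string import Template

        # Placeholder values; actual NSLS-II groups, users, and rules differ.
        ACF_TEMPLATE = Template(textwrap.dedent("""\
            UAG($write_group) {$users}
            ASG(DEFAULT) {
                RULE(1, READ)
                RULE(1, WRITE) {
                    UAG($write_group)
                }
            }
        """))

        print(ACF_TEMPLATE.substitute(write_group="bl_staff_writers",
                                      users="alice, bob"))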

        Speaker: Anton Derbenev (National Synchrotron Light Source II)
      • 219
        TIPPSS for navigating a changing cybersecurity landscape at the Electron-Ion Collider and other scientific research facilities

        The Electron-Ion Collider (EIC) aims to unlock the secrets of the strong nuclear force and revolutionize our understanding of the fundamental structure of visible matter. It is being built at Brookhaven National Laboratory (BNL) and could possibly be the only large collider built in the world in the next 20-30 years, during the “Age of AI”. This creates a unique opportunity for a complete AI/ML lifecycle of a large-scale, state-of-the-art scientific research facility, but also many challenges, as this lifecycle overlaps with a rapidly changing cybersecurity landscape. Standards, regulations, and guidance are likely to be released (and then possibly revised) at the same time that design, construction, and then finally operations of the EIC must proceed. We present the use of the new Trust, Identity, Privacy, Protection, Safety, and Security (TIPPSS) framework from the IEEE/UL 2933 TIPPSS standard as a framework for scientific research facilities. This will enable us to design and build a safe and secure infrastructure, and a robust trust and identity architecture, to protect the scientific instrument ecosystem as we enable “AI readiness” and AI/ML deployment (especially at scale) in the face of increasing cybersecurity challenges, using the EIC as a case study.

        Speakers: Florence Hudson (Columbia University), Linh Nguyen (Brookhaven National Laboratory)
      • 220
        Cyber Secure Experimental Physics and Industrial Control System

        Secure PVAccess (SPVA) brings production-grade cybersecurity to the Experimental Physics and Industrial Control System (EPICS) framework by encapsulating the PVAccess protocol within Transport Layer Security (TLS). It integrates X.509 certificate-based authentication with common laboratory-wide services such as Kerberos and LDAP, and delivers a full certificate authority, management, and distribution solution. Leveraging this robust authentication layer, Secure PVAccess extends the existing EPICS Security model to enforce true Process Variable (PV) access control based on verified peer identities, attributes, and connection modes. We describe the overall architecture, key design decisions, software components, current status, envisioned future capabilities, and the collaborative effort driving this initiative.
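
        The following generic Python fragment illustrates the kind of mutual-TLS connection that X.509 certificate-based authentication implies (a client presenting a certificate signed by a site authority); it is not the Secure PVAccess API, whose TLS handling lives inside the EPICS modules themselves, and the host, port, and file names are placeholders.

        import socket
        import ssl

        # Placeholder paths for certificates issued by a site certificate authority.
        context = ssl.create_default_context(ssl.Purpose.SERVER_AUTH, cafile="site-ca.pem")
        context.load_cert_chain(certfile="client-cert.pem", keyfile="client-key.pem")

        with socket.create_connection(("pva-server.example.org", 5075)) as sock:
            with context.wrap_socket(sock, server_hostname="pva-server.example.org") as tls:
                # The server sees a verified client identity; the client has verified the server.
                print("negotiated", tls.version(), "peer:", tls.getpeercert()["subject"])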

        Speaker: George McIntyre (SLAC National Accelerator Laboratory)
    • WEBR MC04 Hardware Architecture and Synchronization Red Lacquer Room

      Red Lacquer Room

      Palmer House Hilton Chicago

      17 East Monroe Street Chicago, IL 60603, United States of America
      Conveners: Joseph Sullivan (Argonne National Laboratory), Oscar Matilla (ALBA Synchrotron Light Source)
      • 221
        Fast Event System for the Advanced Photon Source

        The Fast Event System, a global time base and event-based trigger distribution system, has been developed and commissioned for the Advanced Photon Source Upgrade (APS-U) and the linear accelerator (LINAC) refurbishment projects. The hardware components developed by Micro-Research Finland (MRF), including event masters (EVMs), event receivers (EVRs), and event fan-out modules, are installed in VME Input/Output Controllers (IOCs) to generate and distribute timing signals to client devices. The firmware of the MRF EVM and EVR has been updated by the manufacturer to meet the needs of the Fast Event System at APS. The software for these IOCs has been developed using the EPICS framework. In this presentation, the overall structure, key functions, and recently developed features of the Fast Event System are introduced. In particular, the performance of delay compensation and event synchronization is discussed. The experiences and lessons learned during commissioning and operation will be discussed as well.
        The work is supported by the U.S. Department of Energy, Office of Science, Office of Basic Energy Sciences, under Contract No. DE-AC02-06CH11357.

        Speaker: Ran Hong (Argonne National Laboratory)
      • 222
        Inter-generational compatibility study of MRF event timing modules

        Event timing systems are critical for the synchronization of beam diagnostics and accelerator control at the KEK LINAC. Such systems have historically relied on VME-based modules since 2008, such as the MRF 230-series event generator and event receiver. However, with some VME modules approaching their market end-of-life, transitioning to modern platforms like MicroTCA is becoming imperative. This work addresses the challenges of such a migration by focusing on a phased upgrade approach, where new MicroTCA-based MRF 300-series timing modules will coexist with and eventually replace the legacy VME timing modules. A primary concern during this transition is ensuring functional compatibility between the old and new generations. This paper presents a comprehensive evaluation of critical timing functions across coexisting VME and MicroTCA systems. Core compatibility aspects evaluated include event code transmission and reception accuracy, timing precision and jitter, trigger output characteristics, event rate handling, sequencer operation, distributed bus signal integrity, and timestamping consistency. The findings aim to provide a quantitative assessment of compatibility, identify potential limitations, and offer practical insights for other facilities planning a similar upgrade of their MRF event timing systems, thereby minimizing risk during accelerator operation.

        Speaker: Dr Di Wang (High Energy Accelerator Research Organization)
      • 223
        White Rabbit Timing: The new CERN accelerator timing system

        After more than 30 years of service, CERN's accelerator timing system is being renovated, moving from the existing distribution infrastructure based on the RS-485 technology and legacy hardware modules, to a new one based on White Rabbit.
        Developed at CERN, White Rabbit Timing (WRT) is a generic toolkit composed of the White Rabbit Event Node (WREN) - a System-on-Chip based hardware module, and the corresponding software stack. WRT allows transmission and reception of messages, along with an arbitrary payload (key-value pairs). The received messages enable the generation of triggers in the form of software interrupts and electrical pulses, with sophisticated and highly configurable triggering patterns. WRT seamlessly integrates time derived from the radio frequency used for particle acceleration, with WRENs capable of locally generating beam orbit and bunch clocks, as well as broadcasting beam-synchronous timing streams over dedicated optical links.
        We present the key concepts of WRT, its architecture, multi-layered distribution network layout, functionalities and usage at CERN. We also draw a potential path towards a turn-key timing system based on WRT that could be deployed anywhere for scientific or commercial applications.

        Speaker: Giorgio Giuseppe Moscardi (European Organization for Nuclear Research)
      • 224
        TCLK must stay! CAMAC must go! How does Fermilab move forward?

        The current Timing System at Fermilab has been around for 40 years and currently relies on 7 CAMAC crates and over 100 CAMAC cards to produce the Tevatron Clock (TCLK). Thanks to the ingenuity of those before us, this has allowed Fermilab the flexibility to change the timing and EVENTs for its accelerator as beamlines and projects have changed over the years. With the advent of the Proton Improvement Plan-II (PIP-II), the Timing System at Fermilab is being reimagined into a single chassis with even greater flexibility and functionality for decades to come while tackling the ever challenging task of maintaining backwards compatibility.

        Speaker: Mark Austin (Fermi National Accelerator Laboratory)
      • 225
        HEPS timing system

        The High Energy Photon Source (HEPS), the first fourth-generation synchrotron light source in China, with a beam energy of 6 GeV and an ultra-low emittance of better than 100 pm·rad, officially began joint commissioning on March 27, 2025. HEPS is mainly composed of accelerators, beamlines and end-stations. In its first phase, HEPS built 15 beamlines and corresponding experimental stations.
        As one of the important systems, HEPS timing provides triggers and clocks for both the accelerators and the beamlines, and is also responsible for operation control. Unlike other light sources, HEPS adopted a novel on-axis swap-out injection scheme: the Booster runs not only as a traditional booster but also as an accumulator, with bunches extracted from the Storage Ring into the Booster, merged with low-charge bunches, and then injected back into the Storage Ring. To achieve this injection and extraction process, HEPS timing has been meticulously designed. By introducing a coincidence clock, HEPS timing achieves independent control of each injection process. HEPS also adopted step-by-step ramping for its Booster; the timing system therefore provides step-by-step triggers for both the power supplies and the LLRF of the RF cavities, and can stop at any energy. Due to the ultra-low emittance, the RMS jitter of HEPS timing must be less than 30 ps, and less than 10 ps for the electron gun and the SR injection and extraction kickers. This talk introduces HEPS timing from design and construction through operation.

        Speaker: Fang Liu (Institute of High Energy Physics)
      • 226
        Distributed I/O Tier: from concept to operational readiness – a modular platform for custom radiation-tolerant and radiation-free electronics

        The Distributed I/O Tier (DI/OT) project was launched to develop a common, modular hardware platform for custom electronics at the lowest layer of the CERN control system. Traditionally, this layer—closest to the accelerator and often exposed to radiation—relied on highly specialized, custom-designed devices with little reusability across subsystems. DI/OT addresses this limitation with a 3U crate based on the CompactPCI Serial industrial standard, along with high-performance (non-radiation-tolerant) and radiation-tolerant (lower-performance) modules. Key components include two System Boards (featuring Igloo2 FPGA and AMD Zynq UltraScale+ MPSoC), a radiation-tolerant switched-mode power supply, a fan tray, an FMC WorldFIP module, and a radiation-tolerant RISC-V soft-core. DI/OT users can tailor the platform to their needs by designing application-specific Peripheral Boards, FPGA configurations, and low-level software for the System Board. Standardizing on a common platform enables different equipment groups to benefit from centrally supported hardware while facilitating the sharing of application-specific peripherals. Over the past few years, DI/OT has evolved from a prototype to a production-ready platform. This paper presents DI/OT’s hardware modules, radiation qualification results, initial pilot deployments at CERN, and its adoption by the quantum computing community.

        Keywords: DI/OT, custom electronics, CompactPCI-Serial, SoC, radiation-tolerant

        Speaker: Grzegorz Daniluk (European Organization for Nuclear Research)
    • Conference: Conference Photo
    • 12:45
      Lunch
    • WECG MC12 Software Development and Management Tools Grand Ballroom

      Grand Ballroom

      Palmer House Hilton Chicago

      17 East Monroe Street Chicago, IL 60603, United States of America
      Conveners: Gianluca Chiozzi (European Organisation for Astronomical Research in the Southern Hemisphere), Guifré Cuní Soler (Paul Scherrer Institute)
      • 227
        Behavior tree sequencing and automation framework at LCLS

        BEAMS is a sequencing framework in development at the Linac Coherent Light Source (LCLS) that uses behavior trees to meet diverse automation goals.

        LCLS is implemented with a distributed array of control systems operating in concert to deliver bright, coherent X-ray laser pulses to a variety of experimental endstations. The automation systems at these endstations have different goals and maturity levels, which has resulted in a diverse set of sequencing and automation needs. In order to optimize uptime and flexibility, we are leveraging behavior trees as an automation framework. Proven in industry, behavior trees provide a comprehensive, shared no-code language in which invested parties can communicate and iterate. This system concretizes operator experience in a formalized, version-controlled document, and gives system owners a structured way to induce state transformations or react to off-nominal events.

        Speaker: Zachary Lentz (Linac Coherent Light Source)
      • 228
        Native Golang client for EPICS Channel Access

        The Experimental Physics and Industrial Control System (EPICS) provides essential control infrastructure in large scientific research facilities worldwide. Currently, EPICS Channel Access (CA) clients are primarily implemented in languages such as C/C++, Python, or Java, each with specific strengths but also certain limitations. To address these challenges, I have developed a native CA client entirely in Golang, a modern programming language known for clear syntax and built-in concurrency support. In this presentation, I will explain the key design principles behind this new client, highlighting how Golang’s asynchronous programming capabilities naturally align with EPICS’s communication model. I will also present preliminary benchmark results illustrating the client’s practical performance and discuss ways Golang simplifies integration within complex accelerator control environments. Finally, I will outline future plans and identify specific opportunities for community collaboration, ensuring this client meets the evolving needs of EPICS users.

        Speaker: Marcin Lukaszewski (Control Systems Marcin Lukaszewski)
      • 229
        Lessons learned refactoring the EuXFEL’s central data acquisition system

        The European XFEL, a world-leading X-ray light source, boasts high brilliance and fast burst repetition rates supported by bespoke MHz-imaging detectors, resulting in data rates close to 20 GB/s. Correlating auxiliary data from various sources with these images requires a centralized Data Acquisition (DAQ) system that ingests data from Karabo, and outputs aggregated data into a common HDF5 format. Facility-side calibration and processing rely on the stability of this data format.
        We discuss refactoring this major, mission-critical software to scale for future data reduction needs. The system transitioned from a blackboard to a pipelined design modeling unidirectional dataflows. Refactoring was prepared with extensive tests for verification. Unit tests, often following test-driven development, accompanied refactoring and new features. A high continuous integration coverage is enhanced by system-level tests.
        The refactored system was introduced incrementally over 3 months, following a “test early, fail early” philosophy. The new system successfully implements critical petabyte-scale data reduction and enhanced status monitoring capabilities.

        Speaker: Steffen Hauf (European X-Ray Free-Electron Laser)
    • WECR MC03 Control System Sustainment and Management Red Lacquer Room

      Red Lacquer Room

      Palmer House Hilton Chicago

      17 East Monroe Street Chicago, IL 60603, United States of America
      Conveners: Barry Fishler (SLAC National Accelerator Laboratory), Denise Finstrom (Fermi National Accelerator Laboratory)
      • 230
        Development of accelerator controls for FAIR - the management aspect

        As the works at FAIR (Facility for Antiproton and Ion Research) progress deeper into the installation and near the commissioning phase, FAIR control system gets closer to its final shape. With many software parts for the accelerator controls completed, valuable experience has been gained from the encountered managerial challenges.
        In this paper, we present the best practices and lessons learned from tackling challenges in the scopes of managing contributions from in-kind partners, consortia and other contractors, project planning, establishing effective communication and coordination between in-house teams and outside partners, applying agile concepts, dealing with dynamics at FAIR and others.

        Speaker: Ralph Baer (GSI Helmholtz Centre for Heavy Ion Research)
      • 231
        The MAX IV KITOS experience: organising rapid response operation support in a user facility

        To enhance operational stability and user satisfaction, the ICT groups at MAX IV Laboratory (formerly KITS) have implemented a structured support model inspired by the European XFEL's Data Department Operations Centre (DOC). The KITOS (KITS Operation Support) initiative ensures expert coverage during working hours and a best-effort support model during evenings and weekends, staffed by a rotating team of two on-duty specialists. The team leverages proactive system monitoring and dedicated on-duty time for cross-training and knowledge sharing.

        KITOS supports all critical phases of user operation—including accelerator operations, beamline commissioning, experiment setup, and active beamtime—through a combination of real-time response and structured on-call coverage. Central to its success is detailed issue logging, which enables the identification and resolution of recurring problems, whether technical, procedural, or related to documentation and training gaps.

        Since its inception in 2022, the KITOS model has demonstrably improved control system reliability. This presentation will share our implementation approach, key lessons learned, and impact metrics with the broader scientific facility community.

        Speaker: Mirjam Lindberg (MAX IV Laboratory)
      • 232
        Achieving sustainment: from daily operations to long-term strategic planning

        Particle accelerator facilities support a wide array of applications and vary greatly in type, purpose, size, construction cost, and both the sources and levels of operational funding. While the lifespan of each accelerator is shaped by a range of technical and organizational factors, many continue to operate well beyond their originally projected lifetimes. Sustaining long-term operations requires organizations to bridge the gap between the immediate demands of daily activities and broader strategic goals. This includes aligning short-term actions with long-term objectives, fostering a culture of continuous improvement, and implementing a clear, well-communicated strategic plan. Although each facility faces its own unique challenges, this article offers an initial framework for supporting operational sustainability. It also seeks to inspire facilities to define and pursue their own paths toward lasting success in beam delivery—while encouraging collaboration and knowledge exchange across the accelerator community. The framework presented here draws on ongoing efforts within the Accelerator Operations & Technology Division, which oversees the operation and maintenance of the Los Alamos Neutron Science Center (LANSCE) accelerator and is further illustrated through examples from Control and Instrumentation systems.

        Speaker: Martin Pieck (Los Alamos National Laboratory)
    • WEMG Mini-Orals (MC01, MC05, MC10) Grand Ballroom

      Grand Ballroom

      Palmer House Hilton Chicago

      17 East Monroe Street Chicago, IL 60603, United States of America
      Convener: Chris Roderick (European Organization for Nuclear Research)
      • 233
        Laser Megajoule facility status report

        The Laser MegaJoule, a 176-beam laser facility developed by CEA, is located near Bordeaux. It is part of the French Simulation Program, which combines improvement of theoretical models used in various domains of physics and high performance numerical simulation. It is designed to deliver about 1.4 MJ of energy on targets, for high energy density physics experiments, including fusion experiments.

        The LMJ technological choices were validated on the LIL, a scale-1 prototype composed of one bundle of 4 beams. The first bundle of 8 beams was commissioned in October 2014 with the implementation of the first experiment on the LMJ facility. The operational capabilities are increasing gradually every year until full completion by 2026. By the end of 2025, 22 bundles of 8 beams will be assembled (full scale) and 19 bundles are expected to be fully operational.

        As the assembly of the laser bundles comes to an end and before the facility enters full operation, we present a status report on the LMJ/PETAL installation. We will present the major software developments of the past two years, the latest experimental results, and the new challenges in keeping this facility at its best operating level.

        Key words: Laser facility, LMJ, PETAL, Control System

        Glossary:
        LMJ: Laser MegaJoule
        CEA: Commissariat à l’Energie Atomique et aux Energies Alternatives
        LIL : Ligne d’Intégration Laser

        Speaker: Dr Stephanie PALMIER (Commissariat à l'Énergie Atomique et aux Énergies Alternatives)
      • 234
        SKA telescope control system in 2025

        It is 2025 and the SKA Telescope Control System has come a long way since the start of construction. The outline of the software architecture and some key technology decisions (including the choice of Tango) were made early. To keep the geographically distributed teams engaged, and to avoid creating silos and fragmentation, development of virtually all the software components started in parallel, often while the detailed designs for the custom hardware were still evolving and before the COTS equipment was selected. The deployment strategy was adjusted to align with industry trends. From designing a software system for hardware that did not exist, we have arrived at the point where we can prove that the software actually works with the hardware. However, when the software design and implementation met reality, some issues were uncovered, forcing us to make changes (ska-tango-base) and learn hard lessons (a naive implementation of event callbacks). Are we ready to deliver a large distributed control system? We realize that scalability will be a challenge. This paper provides an honest overview of what works and what did not work so well, and how we address issues.

        Speaker: Ms Sonja Vrcic (SKA Observatory)
      • 235
        First phase of control system for compact Muon Linac at J-PARC

        A Muon Linear Accelerator (Muon Linac) for the muon g-2/EDM experiment is currently under construction at the Japan Proton Accelerator Research Complex (J-PARC). The objective of this project is to accelerate thermal muons (25 meV at 300 K) to 212 MeV, marking the world’s first implementation of muon acceleration.
        Development of the control system for the Muon Linac began in 2024. Core functionalities of the Ultra-Slow Muon section have been tested during beam time in May 2025. The system adopts the standard EPICS framework and features a compact architecture consisting of (a) a QNAP NAS for disk storage, (b) two operator terminals, and (c) two commercial micro servers serving as the EPICS IOC and the archiver server, respectively.
        This paper reports on the status of the control system development for the Ultra-Slow Muon section, as part of the first phase of the Muon Linac project. Toward the full commissioning of the entire Muon Linac in 2028, the prospects for extending the present control system to the main Linac components are discussed.

        Speaker: Shuei Yamada (High Energy Accelerator Research Organization)
      • 236
        The EuAPS betatron radiation source control system

        EuAPS (EuPRAXIA Advanced Photon Source) is a project carried out in the EuPRAXIA context and financed by the Italian Ministry within the Recovery Europe plan framework. A new advanced betatron radiation source, obtained by exploiting laser wakefield acceleration (LWFA) in plasma, is currently being realized at the Laboratori Nazionali di Frascati of INFN in Italy and will be operated as a user facility. Several elements of EuAPS are remotely controlled, such as laser diagnostic devices, motors and vacuum system components. In order to run the facility efficiently, the realization of a robust and high-performing control system is crucial. The EuAPS control system is based on the EPICS (Experimental Physics and Industrial Control System) open-source software framework. Functional safety systems such as the Machine Protection System (MPS) and the Personnel Safety System (PSS), in accordance with IEC 61508 standards, are also integrated for interlock control, anomaly monitoring, and protecting personnel from hazardous areas. In this contribution, details on how the EuAPS control system has been designed and realized will be provided.

        Speaker: Valentina Dompè (Istituto Nazionale di Fisica Nucleare, Laboratori Nazionali di Frascati)
      • 237
        LCLS-II cavity heater controls: design, operation, and accuracy

        The SLAC National Accelerator Laboratory's upgrade to LCLS-II, featuring a 4 GeV superconducting linear accelerator with 37 cryomodules and two helium refrigeration systems supporting 4 kW at 2.0 K, represents a significant advancement in accelerator technology. Central to this upgrade is a 2 K system with five stages of centrifugal cold compressors, operating across a pressure range from 26 mbar suction to 1.2 bara discharge*. These dynamic centrifugal compressors have a limited operational envelope, hence maintaining stable pressure and flow is critical for their operation. This paper describes how SLAC achieved stable LINAC pressures in each of the 37 cryomodules using electrical heaters that actively compensate for changes in RF power to maintain constant flow through the system. Additionally, this paper details the power accuracy of these heaters, which is useful not only for control but also when measuring cavity efficiency.
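
        A simplified picture of the compensation scheme: if the total heat load per cryomodule is to stay constant, the heater power tracks the complement of the estimated RF dissipation. The sketch below is a schematic feedforward calculation with made-up numbers, not the LCLS-II control algorithm or its calibration.

        def heater_setpoint(target_total_w: float, rf_dissipation_w: float,
                            heater_max_w: float = 48.0) -> float:
            """Feedforward heater power that keeps (RF + heater) load constant.
            The 48 W ceiling is an illustrative limit, not the LCLS-II value."""
            setpoint = target_total_w - rf_dissipation_w
            return min(max(setpoint, 0.0), heater_max_w)

        # Example: hold 30 W total while RF dissipation rises from 5 W to 12 W
        for rf_w in (5.0, 12.0):
            print(rf_w, "->", heater_setpoint(30.0, rf_w))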

        Speaker: Andrew Wilson (SLAC National Accelerator Laboratory)
      • 238
        An FPGA-based autoencoder model for real-time RF signal denoising for industrial accelerators

        A challenge that industrial particle accelerators face is the high level of noise in sensor readings. This noise obscures essential beam diagnostic and operational data, limiting the amount of information that is relayed to machine operators and beam instrumentation engineers. Machine learning-based techniques have shown great promise in isolating noise patterns while preserving high-fidelity signals, enabling more accurate diagnostics and performance tuning. Our work focuses on the implementation of a real-time FPGA-based noise-reduction autoencoder, tested on a Xilinx ZCU104 evaluation kit with the intention of being deployed on industrial particle accelerators in the near future.
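
        As a generic illustration (not the deployed FPGA model), a denoising autoencoder of the kind described can be prototyped in a few lines of PyTorch; the layer sizes and waveform length below are arbitrary.

        import torch
        from torch import nn

        class DenoisingAutoencoder(nn.Module):
            """Small fully connected autoencoder for 1-D RF waveforms."""
            def __init__(self, n_samples: int = 256, n_latent: int = 32):
                super().__init__()
                self.encoder = nn.Sequential(
                    nn.Linear(n_samples, 128), nn.ReLU(),
                    nn.Linear(128, n_latent), nn.ReLU(),
                )
                self.decoder = nn.Sequential(
                    nn.Linear(n_latent, 128), nn.ReLU(),
                    nn.Linear(128, n_samples),
                )

            def forward(self, x):
                return self.decoder(self.encoder(x))

        model = DenoisingAutoencoder()
        noisy = torch.randn(8, 256)  # a batch of noisy waveforms
        # In training, the loss would compare the reconstruction to clean reference signals.
        loss = nn.functional.mse_loss(model(noisy), noisy)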

        Speaker: Vikshar Rajesh (RadiaSoft (United States))
      • 239
        Control and tuning of complex bend magnet for proposed NSLS-II upgrade

        The Complex Bend (CB) is a novel lattice concept proposed for the NSLS-II upgrade, utilizing permanent magnets instead of traditional electromagnets. This innovative design aims to reduce horizontal emittance from 700 pm to 40 pm and increase beam energy from 3 GeV to 4 GeV, significantly enhancing beam brightness. However, as a new lattice architecture, the CB introduces substantial technical challenges in design, assembly, and verification, particularly in meeting stringent magnetic field requirements. Unlike electromagnets, permanent magnets cannot be adjusted after assembly, making precise design and fabrication critical. These challenges are further compounded by the nonlinear behavior of magnetic fields with respect to magnet position and geometry.

        To address these issues, we propose integrating advanced FPGA-based hardware with EPICS-based software into a comprehensive control and tuning system. Real-time sensor data, including position, pressure, magnetic field strength, and temperature will be continuously collected and analysed. In addition, AI/ML algorithms will support optimizing magnet positioning and alignment to meet the required field specifications for each CB unit. This presentation will cover the CB mechanical assembly system, electrical hardware design, low-level control software design, and high-level tuning software implementation.

        Speaker: Yuke Tian (Brookhaven National Laboratory)
      • 240
        Object oriented industrial I/O

        The Los Alamos Neutron Science Center (LANSCE) has completed a significant modernization effort, migrating from the legacy RICE control system to an entirely EPICS-based infrastructure. A key enabler of this transition has been the development and deployment of modular, object-oriented Industrial I/O (IIO) architectures on National Instruments (NI) cRIO platforms. The Industrial I/O framework provides a reusable and scalable system for controlling and monitoring sensors and instruments. It is built around precompiled FPGA bitfiles accessed through NI's C application programming interface. Where necessary, LabVIEW real-time code integrates seamlessly with EPICS IOCs. This architecture enables a clear separation between control logic and hardware interfaces, supports future maintenance with minimal overhead, and accommodates both modern Linux RT cRIO and legacy VxWorks systems. The result is a flexible and resilient method for managing and improving complex control architectures across LANSCE.

        This contribution outlines how IIO enables hardware reuse by treating NI cards as modular components with shared logic, abstracting low-level FPGA interaction, and standardizing configurations through parameterized bitfiles and EPICS startup files. The poster and discussion focus on how this approach supports object-like behavior to improve maintainability, scalability, and cross-platform deployments of EPICS-compatible systems.
        LA-UR-25-24051

        Speaker: Rocio Martin (Los Alamos National Laboratory)
      • 241
        Modernizing FPGA development using the DESY FPGA firmware framework

        Brookhaven National Laboratory (BNL) is currently developing new hardware description language (HDL) code and embedded software for the Electron-Ion Collider (EIC) control system. Part of this effort is modernizing the development process itself, leveraging methodologies and tools that were initially targeted at the software world. These methods include effective source control and project management, modularization and rapid deployment of updated code, automated testing, and in many cases automated code generation. HDL designers additionally face unique challenges compared to software designers, particularly with vendor locking and dependency on particular tools and IP. The FPGA Firmware Framework (FWK), developed by DESY, is a set of tools that helps to both apply these modern methods and to overcome some of those unique challenges. This paper will cover the workflow, successes, and challenges faced when using the FWK. In particular, we will focus on the experience using this workflow to develop a customizable delay generator IP targeting a Zynq FPGA.

        Speaker: David Vassallo (Brookhaven National Laboratory)
      • 242
        Proton pulse charge calculation algorithm in Beam Power Limiting System at the Spallation Neutron Source

        A proton pulse charge calculation algorithm in the Beam Power Limiting System (BPLS) at the Spallation Neutron Source (SNS) was developed and implemented in an FPGA. The algorithm calculates a one-minute running average of the pulse charges and issues a fault to the Personnel Protection System (PPS) and the Machine Protection System (MPS) when a limit is reached.

        A bit-accurate model of the algorithm was first developed and tested in Matlab® and then implemented and simulated in VHDL using Vivado® design environment. Finally, the algorithm was verified on a µTCA-based hardware platform.
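
        The core of such a running-average limit can be sketched in a few lines of Python; the real implementation is fixed-point logic in the FPGA, and the pulse rate and charge limit below are placeholders rather than SNS operating values.

        from collections import deque

        class ChargeAverager:
            """One-minute running average of pulse charges with a trip flag."""
            def __init__(self, pulses_per_minute: int, limit_coulombs: float):
                self.window = deque(maxlen=pulses_per_minute)
                self.limit = limit_coulombs

            def add_pulse(self, charge_c: float) -> bool:
                """Record a pulse charge; return True if the average exceeds the limit."""
                self.window.append(charge_c)
                # Divide by the full window length, i.e. missing pulses count as zero.
                average = sum(self.window) / self.window.maxlen
                return average > self.limit

        # 60 Hz beam and an illustrative 25 uC average-charge limit
        bpls = ChargeAverager(pulses_per_minute=60 * 60, limit_coulombs=25e-6)
        print(bpls.add_pulse(24e-6))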

        Speaker: Miljko Bobrek (Oak Ridge National Laboratory)
      • 243
        A high-precision motion profile data stream pipeline for LCLS-II fast wire scanner

        The LCLS-II is the first X-ray Free Electron Laser (XFEL) to utilize continuous-wave superconducting accelerator technology (CW-SCRF), capable of delivering X-ray pulses at repetition rates up to 1 MHz. The LCLS-II fast wire scanner motion control system, based on the Aerotech Ensemble controller, is designed to measure the beam profile across both high and low repetition rates. To analyse the motion trajectory of the fast wire scanner effectively and in a timely manner, we have developed a data stream pipeline that transmits high-precision profile data from the Ensemble controller to the LCLS-II server. This system integrates the motion profile into the EPICS control system, displaying the scan profile in real time via a PyDM GUI. This paper outlines the design of the data transmission pipeline and the software development process.

        Speaker: Ziyu Huang (SLAC National Accelerator Laboratory)
      • 244
        Control system information exchange based on data models

        Many control algorithms or optimisation procedures profit from a consistent set of data which is available with a high frequency: e.g. machine learning or automated commissioning. Modern distributed control systems allow combining and presenting data based on data models, which are then transported consistently over the network: e.g. EPICS7 introduced these data models as normative types or their combination.
        In this paper we present use cases that profit from combining the data sub-models of many devices into a consistent, robust, higher-order data model; today such combinations are typically implemented in some programming language.
        Finally, common patterns are presented that could reasonably be implemented independently.
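
        For readers less familiar with EPICS 7 structured data, the sketch below serves a single higher-order value (two arrays and a cycle counter combined into one structure) over pvAccess using the p4p Python bindings; the PV name and fields are invented for illustration and are not a data model proposed by the authors.

        from p4p import Type, Value
        from p4p.server import Server
        from p4p.server.thread import SharedPV

        # Illustrative higher-order model combining several device readings.
        ORBIT_SLICE = Type([
            ('bpm_x', 'ad'),   # array of doubles: horizontal readings
            ('bpm_y', 'ad'),   # array of doubles: vertical readings
            ('cycle', 'l'),    # acquisition cycle identifier
        ])

        pv = SharedPV()
        pv.open(Value(ORBIT_SLICE, {'bpm_x': [0.0], 'bpm_y': [0.0], 'cycle': 0}))

        # Serve the structure under a demo name; clients receive all fields consistently.
        Server.forever(providers=[{'DEMO:ORBIT': pv}])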

        Speaker: Pierre Schnizer (Helmholtz-Zentrum Berlin für Materialien und Energie)
      • 245
        Improving performance and reliability of a Python-based EPICS IOC by switching to pyDevSup

        The power supplies used for FOFB correctors at SIRIUS expose only electrical current values, making it necessary to perform conversions to and from beam kick values. To take advantage of the canonical Python implementation of this conversion, a separate IOC was developed using pyEPICS and PCASPy. This technology stack imposed some limitations, making it necessary to limit the update rate, and, even then, requiring one independent instance of the IOC per ring sector (20 in total) to avoid PV timeouts and disconnects; disconnection events when one of the power supplies was down also had cascading issues with reconnection and memory corruption.
        This motivated us to pursue more modern alternatives for integrating Python code into an IOC, specifically one that could take advantage of the Channel Access (CA) integration already present in EPICS databases, avoiding any of the bridges between CA and Python. We evaluated the pythonSoftIOC project and the pyDevice and pyDevSup support modules, which we present in this work. We settled on pyDevSup due to the development experience it provided.
        This work also presents benchmarks comparing the performance gains with the new IOC and aims to explore the architecture differences that enabled them.

        Speaker: Érico Nogueira Rolim (Brazilian Synchrotron Light Laboratory)
      • 246
        Upgrading ATLAS’s tune archiving system

        The Argonne Tandem Linear Accelerating System (ATLAS) facility at Argonne National Laboratory is a National User Facility capable of delivering ion beams from hydrogen to uranium. The existing tune archiving system, which utilizes Corel’s Paradox relational database management software, is responsible for retrieving and restoring machine parameters from previously optimized configurations. However, the Paradox platform suffers from outdated support, a proprietary programming language, and limited functionality, prompting the need for a modern replacement.
        To address these limitations, ATLAS is transitioning to a new archiving system based on PySide for the user interface, InfluxDB for time-series data storage, and FastAPI for backend communication.
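
        A minimal sketch of the backend pattern described (FastAPI in front of InfluxDB) is shown below; the endpoint path, bucket name, tag keys, and Flux query are invented for illustration and do not reflect the actual ATLAS schema.

        from fastapi import FastAPI
        from influxdb_client import InfluxDBClient

        app = FastAPI()
        # Connection details are placeholders for the facility's archiving database.
        client = InfluxDBClient(url="http://localhost:8086", token="TOKEN", org="atlas")

        @app.get("/tunes/{tune_name}")
        def get_tune(tune_name: str):
            """Return archived machine parameters for a named tune."""
            flux = (
                'from(bucket: "tunes") '
                '|> range(start: -10y) '
                f'|> filter(fn: (r) => r.tune == "{tune_name}")'
            )
            tables = client.query_api().query(flux)
            return [{"field": rec.get_field(), "value": rec.get_value()}
                    for table in tables for rec in table.records]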

        Speaker: Kenneth Bunnell (Argonne National Laboratory)
      • 247
        Status of development and application of the Pyapas at HEPS

        To meet the stringent requirements of beam commissioning at the High Energy Photon Source (HEPS), China’s first fourth-generation high-energy synchrotron light source, a new high-level application (HLA) framework named Pyapas was developed entirely in Python. Designed for flexibility and maintainability, Pyapas serves as the foundation for all HLAs at HEPS, supporting tasks such as orbit correction, optics measurement, and machine modeling. Since early 2023, Pyapas-based HLAs have been successfully applied during the commissioning of the Linac, booster, and storage ring, contributing to key milestones including first light in October 2024. This paper summarizes the major developments and applications of HLAs at HEPS and outlines the direction of future work.

        Speaker: Mr Yuliang Zhang (Institute of High Energy Physics)
      • 248
        Modular scientific SCADA suite with Sardana and Taurus – latest developments

        Sardana* and Taurus** are community-driven, open-source SCADA solutions that have been used for over a decade in scientific facilities, including synchrotrons (ALBA, DESY, MAX IV, SOLARIS) and laser laboratories (MBI-Berlin).
        Taurus is a Python framework for building both graphical and command-line user interfaces that support multiple control systems or data sources. Sardana is an experiment orchestration tool that provides a high-level hardware abstraction and a sequence engine. It follows a client-server architecture built on top of the TANGO control system***. In the last two years, significant developments have been made in both projects. Sardana focused on enhancing continuous scans, introducing multiple synchronization descriptions to support passive elements (e.g. shutters) and detectors reporting at different rates. The configuration tool has also been extended, following the roadmap defined by the community****. Taurus has seen substantial performance gains, particularly in GUI startup times, as part of an optimization effort that started nearly three years ago. The latest improvements take advantage of new TANGO asynchronous event subscription modes*****. Continuous codebase modernization is underway, and support for Qt6 is planned for the July 2025 release.
        This presentation will overview these recent advancements in both Sardana and Taurus and outline their current development roadmap.

        Speaker: Michal Piekarski (SOLARIS National Synchrotron Radiation Centre)
      • 249
        HL-LHC Inner Triplet String controls and software architecture

        The High Luminosity-Large Hadron Collider (HL-LHC) project at CERN aims to increase the integrated luminosity of the Large Hadron Collider (LHC). As an important milestone of the HL-LHC project, the scope of the Inner Triplet (IT) String test facility is to represent the various operation modes and the controls environment to study and validate the collective behaviour of the different systems. As for the HL-LHC, the IT String operation requires a wide-ranging set of control systems and software for magnet powering, magnet protection, cryogenics, insulation vacuum, and the full remote alignment.
        An overview of the control systems and their interfaces is presented, with a particular focus on the software layers essential for the powering and magnet protection tests during the IT String validation program. Ensuring integration of the new HL-LHC device types and their operational readiness requires close collaboration between development teams, equipment owners, and the IT String operation team, and is validated by dedicated Dry Run tests. These tests, aiming to validate the functionality of the new device types within the control and software applications, are described in detail, with the goal of achieving a smooth transition to the magnet powering phase. The IT String facility presents a unique opportunity to validate all control and software layers ahead of the HL-LHC hardware commissioning (HWC) within the LHC complex and their operation in the High Luminosity era.

        Speaker: Sebastien Blanchard (European Organization for Nuclear Research)
      • 250
        Implementation and scalability analysis of TSPP for Vacuum Framework

        SCADA (Supervisory Control and Data Acquisition) systems traditionally acquire data from PLCs through polling. The Time Stamped Push Protocol (TSPP), on the other hand, enables a PLC to timestamp and push data to the SCADA at its own discretion. The Vacuum Control Systems for CERN accelerators are primarily built on a dedicated Vacuum Framework, which relies on polling and is therefore subject to its limitations. Implementing TSPP would thus be an important improvement.
        TSPP requires software on the PLC, a Data Manager, to determine what data to push, when to push it, and how to package it into the correct format. Due to its particular data model, implementing TSPP for the Vacuum Framework required the development of a dedicated Data Manager. Additionally, while most current systems with TSPP have a single PLC per SCADA instance, Vacuum Framework applications often involve hundreds. Given that no data was available on the impact that large numbers of PLCs pushing data to a SCADA system might have, extensive testing was required. In particular, the relationship between server load and the effective rate of received values was studied to assess performance at scale.
        This paper details the implementation of TSPP for the Vacuum Framework, its Data Manager design, and the testing carried out to validate the protocol and assess its performance limits in order to ensure a smooth deployment.
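
        The push model can be caricatured in a few lines: a data manager buffers timestamped value changes and flushes them to the SCADA in batches at its own discretion, instead of waiting to be polled. The sketch below is a schematic Python illustration with invented tag names, not the PLC-side Data Manager.

        import time

        class DataManager:
            """Buffers timestamped value changes and pushes them in batches."""
            def __init__(self, push, max_batch=100, max_age_s=1.0):
                self.push, self.max_batch, self.max_age_s = push, max_batch, max_age_s
                self.buffer, self.oldest = [], None

            def on_change(self, tag, value):
                now = time.time()
                self.buffer.append((tag, value, now))
                self.oldest = self.oldest or now
                if len(self.buffer) >= self.max_batch or now - self.oldest >= self.max_age_s:
                    self.flush()

            def flush(self):
                if self.buffer:
                    self.push(self.buffer)  # one framed packet to the SCADA
                    self.buffer, self.oldest = [], None

        dm = DataManager(push=lambda batch: print(f"pushed {len(batch)} values"))
        dm.on_change("vacuum.gauge1.pressure", 2.3e-9)
        dm.flush()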

        Speaker: Rodrigo Ferreira (European Organization for Nuclear Research)
    • WEMR Mini-Orals (MC13, MC14, MC15) Red Lacquer Room

      Red Lacquer Room

      Palmer House Hilton Chicago

      17 East Monroe Street Chicago, IL 60603, United States of America
      Convener: Martin Pieck (Los Alamos National Laboratory)
      • 251
        Exploring AI-based models in accelerators: a case study of the SOLARIS synchrotron

        The National Synchrotron Radiation Centre SOLARIS, a third-generation light source, is the only synchrotron in Central-Eastern Europe and is located in Poland. The SOLARIS Centre, with seven fully operational beamlines, serves as a hub for research across a diverse range of disciplines. The most important aspect of such a research infrastructure is providing stable working conditions for the users, the operators, and the conducted projects. Due to its unique properties, the complexity of the problems, and challenges that require advanced approaches, anomaly detection and the automatic analysis of signals for beam stability assessment remain a major challenge that has not been fully addressed. To address this problem, different AI-based projects are under discussion and development, e.g. automatic analysis of diagnostic signals such as transverse beam profiles, or classification of beam-position FFT windows. The best proposed solution, based on the InceptionV3 architecture, can assess beam quality automatically, based solely on the image itself, with 94.1% accuracy and 96.6% precision. Current developments and deployments in this field at SOLARIS, both for the accelerator and the beamlines, will be covered.
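
        A typical way to build such a classifier is transfer learning on top of a pretrained InceptionV3 backbone; the Keras sketch below uses an arbitrary input size and class count and is not the SOLARIS training pipeline.

        import tensorflow as tf

        # Pretrained backbone without its classification head; input size is illustrative.
        base = tf.keras.applications.InceptionV3(weights="imagenet", include_top=False,
                                                 input_shape=(299, 299, 3))
        base.trainable = False  # start by training only the new classification head

        model = tf.keras.Sequential([
            base,
            tf.keras.layers.GlobalAveragePooling2D(),
            tf.keras.layers.Dense(2, activation="softmax"),  # e.g. "good beam" / "degraded"
        ])
        model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                      metrics=["accuracy"])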

        Speaker: Michal Piekarski (SOLARIS National Synchrotron Radiation Centre)
      • 252
        Leveraging AI and ML for assisted experiments, data analysis, and virtual agents

        The future of synchrotron beamline operations is poised for a transformative leap with advancements in artificial intelligence (AI) and machine learning (ML). While SOLARIS National Synchrotron Radiation Centre* has yet to integrate these technologies, their potential to revolutionize experiments, data analysis, and user interactions is immense. AI-driven automation promises real-time assistance in optimizing beamline experiments, minimizing manual intervention while enhancing precision. Machine learning algorithms will unlock deeper insights from complex datasets, facilitating faster, more accurate interpretations. Additionally, intelligent virtual agents could redefine how researchers interact with beamline controls, offering predictive guidance and adaptive optimization. As SOLARIS expands its capabilities, embracing AI and ML will position it at the forefront of scientific innovation, ensuring seamless, efficient, and accessible synchrotron research for future generations.

        Speaker: Magdalena Szczepanik (SOLARIS National Synchrotron Radiation Centre)
      • 253
        Plans and strategy for edge AI/ML at the Electron-Ion Collider at Brookhaven National Laboratory

        Scheduled to begin Operations in 2035, the Electron-Ion Collider (EIC) is being built at Brookhaven National Laboratory (BNL) and will be the only operating particle collider in the United States. It may also be the only large collider built in the world in the next 20-30 years, during the “Age of Artificial Intelligence (AI)”. Recognizing the potential for AI and machine learning (ML) to enhance operations and create more research opportunities, the EIC is being envisioned and designed as a large-scale AI-ready state-of-the-art facility. Specifically, it will support three core areas of AI/ML capabilities, referred to as Edge, End-to-End, and Bottom-Up. Edge capabilities are intended to address what are expected to be some of the most demanding AI/ML applications in the world in terms of timescales by anticipating the infrastructure, hardware, and local compute resources needed for success. At the same time, considerable care must be taken to ensure that these capabilities are manifested in an efficient, safe, and secure Controls ecosystem and Operations environment. We report on our plans and strategy for high-performance edge AI/ML at the EIC.

        Speaker: Linh Nguyen (Brookhaven National Laboratory)
      • 254
        Machine learning–based longitudinal phase space control for X-ray free-electron laser

        Precise control of the longitudinal phase space (LPS) in X-ray free-electron lasers (XFELs) is critical for optimizing the beam qualities and X-ray pulse properties required by the experimental stations. We present results of using machine learning techniques, in particular Bayesian optimization, for LPS shaping and control.

        Speaker: Zihan Zhu (SLAC National Accelerator Laboratory)
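
        As a hedged illustration of the Bayesian-optimization loop described above, the sketch below uses scikit-optimize as a stand-in optimizer over two hypothetical LPS-shaping knobs (chirp and compression); the objective, knob names, and ranges are illustrative assumptions and not the authors' actual tooling.

            import numpy as np
            from skopt import gp_minimize

            def negative_pulse_metric(knobs):
                # Stand-in objective: in practice this would apply LPS-shaping settings to the
                # machine (or a simulator) and return the negative of a measured X-ray pulse metric.
                chirp, compression = knobs
                return -float(np.exp(-((chirp - 0.3) ** 2 + (compression - 1.2) ** 2)))

            result = gp_minimize(negative_pulse_metric,
                                 dimensions=[(-1.0, 1.0), (0.5, 2.0)],  # hypothetical knob ranges
                                 n_calls=25, random_state=0)
            print("best knobs:", result.x, "best pulse metric:", -result.fun)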
      • 255
        Reinforcement learning for automation of accelerator tuning

        For more than half a decade, RadiaSoft has developed machine learning (ML) solutions to problems of immediate, practical interest in particle accelerator operations. These solutions include machine vision through convolutional neural networks for automating neutron scattering experiments and several classes of autoencoder networks for de-noising signals from beam position monitors and low-level RF systems in the interest of improving and automating controls. As active deployments of our ML products have taken shape, one area which has become increasingly promising for future development is the use of agentic ML through reinforcement learning (RL). Leveraging our substantial suite of ML tools as a foundation, we have now begun to develop an RL framework for achieving higher degrees of automation for accelerator operations. Here we discuss our RL approaches for two areas of ongoing interest at RadiaSoft: total automation of sample alignment at neutron and x-ray beamlines, and automated targeting and dose delivery optimization for FLASH radiotherapy. We will provide an overview of both the ML and RL methods employed, as well as some of our early results and intended next steps.

        Speaker: Morgan Henderson (RadiaSoft (United States))
      • 256
        Design of an intelligent inspection system for particle accelerator facilities

        To address the limitations of traditional manual inspection of particle accelerator facilities, an intelligent inspection system based on multi-modal sensors and artificial intelligence technology is presented. Multi-modal sensors collect various types of data from the accelerator facilities, such as temperature, audio, images, and water-leakage information, providing comprehensive information for a thorough understanding of the equipment status. Artificial intelligence technology can conduct in-depth analysis of massive amounts of data and uncover fault patterns and rules. Through techniques such as fault modeling and data analysis, it achieves early warning and diagnosis of faults. The intelligent inspection system effectively addresses the limitations of traditional inspection methods: it enables real-time monitoring of the accelerator facilities and rapid alarming on abnormalities, significantly improving work efficiency and the operational safety of the equipment. This system provides strong support for the stable operation of particle accelerator facilities.

        Speaker: Yuliang Zhang (Institute of High Energy Physics)
      • 257
        Integrated denoising for improved stabilization of RF cavities

        Typical operational environments for industrial particle accelerators are less controlled than those of research accelerators. This leads to increased levels of noise in electronic systems, including radio frequency (RF) systems, which make control and optimization more difficult. This is compounded by the fact that industrial accelerators are mass-produced with less attention paid to performance optimization. However, growing demand for accelerator-based cancer treatments, imaging, and sterilization in medical and agricultural settings requires improved signal processing to take full advantage of available hardware and increase the margin of deployment for industrial systems. In order to improve the utility of RF accelerators for industrial applications, we have developed methods for removing noise from RF signals and characterized these methods in a variety of contexts. Here we expand on this work by integrating denoising with pulse-to-pulse stabilization algorithms. In this poster we provide an overview of our noise reduction results and the performance of pulse-to-pulse feedback with integrated ML-based denoising.

        Speaker: Jonathan Edelen (RadiaSoft (United States))
      • 258
        Image processing with ML for automated tuning of the NASA Space Radiation Laboratory beam line

        Research conducted at the NASA Space Radiation Laboratory (NSRL) seeks to increase the safety of space exploration. The NSRL uses beams of heavy ions extracted from Brookhaven's Booster synchrotron to simulate the high-energy cosmic rays found in space. To accomplish this, the source machines provide many potential beam species, ranging in atomic number (Z) from 1 (hydrogen/protons) to 83 (bismuth), and beams as heavy as uranium have been delivered. To test large-area samples, beams can be shaped to the user's specifications from a small-format 1-cm radius circular beam up to 20-cm by 20-cm uniform-area rectangular beams. This requires a complex transfer line of 24 magnets, including 9 quadrupole and 2 octupole magnets. Given the wide range of beam rigidity and size possibilities, operators tune the optics by hand while observing the beam profile on a phosphor screen imager. Successful tests have been conducted using a machine learning (ML) workflow for tuning. We capture the beam image, then process and parameterize the beam to assess centroid, shape, tilt, edge thickness, and uniform area size. These parameters are fed to the Badger software stack to avoid re-inventing a UI, using an Xopt-based Bayesian optimization algorithm for iterative tuning. The requirement to start from an image, which can be very noisy, and to quantify it makes the workflow more complex than the standard ML cookie-cutter approach of feeding readings from traditional beam instrumentation to an algorithm.

        Speaker: Levente Hajdu (Brookhaven National Laboratory)
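
        As a rough sketch of the image-parameterization step described above (not the authors' actual pipeline), the following NumPy snippet extracts a centroid and RMS sizes from a noisy beam image; the background subtraction and the synthetic test frame are purely illustrative.

            import numpy as np

            def beam_moments(image):
                # Centroid and RMS sizes of a background-subtracted beam image.
                img = np.clip(image - np.median(image), 0, None)  # crude background removal
                total = img.sum()
                y, x = np.indices(img.shape)
                cx, cy = (x * img).sum() / total, (y * img).sum() / total
                sx = np.sqrt(((x - cx) ** 2 * img).sum() / total)
                sy = np.sqrt(((y - cy) ** 2 * img).sum() / total)
                return cx, cy, sx, sy

            # Synthetic noisy frame with a Gaussian spot, standing in for a phosphor-screen image.
            rng = np.random.default_rng(1)
            yy, xx = np.indices((480, 640))
            frame = rng.poisson(5, (480, 640)).astype(float)
            frame += 200 * np.exp(-((xx - 320) ** 2 / (2 * 40 ** 2) + (yy - 240) ** 2 / (2 * 25 ** 2)))
            print(beam_moments(frame))

        Quantities like these could then be handed to an optimizer such as the Badger/Xopt stack named in the abstract.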
      • 259
        Leveraging local large language models for enhanced log analysis: integrating Ollama into electronic and trouble logging systems

        Modern systems for electronic log keeping and trouble log management generate vast datasets of issues, events, solutions, and discussions. However, extracting actionable insights from this information remains a challenge without advanced analysis tools. This paper introduces an enhancement to two log programs, Electronic Log Keeping (elog) and Trouble Logging (TroubleLog), used in the RHIC control system at Brookhaven National Laboratory. The enhancement integrates Ollama, an open-source tool for running large language models (LLMs) locally, to facilitate intelligent log analysis. We present a framework that combines MySQL database queries with Retrieval-Augmented Generation (RAG), enabling users to generate period-based summaries (e.g., daily, weekly) and retrieve topic-specific information, such as issues and solutions, through natural language queries. By indexing log data using vector embeddings and interfacing with Ollama’s API, the system provides accurate, conversational responses while ensuring data privacy by avoiding external data sharing. The paper details implementation aspects, including SQL query optimization and prompt engineering, and evaluates performance using real-world log data sets. Results demonstrate improved usability and significant reductions in manual analysis time. This work demonstrates the potential of local LLMs in domain-specific log management, offering a scalable and privacy-preserving solution for accelerator control systems.
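
        As a minimal, hedged sketch of the retrieval-augmented flow described above: the endpoint URLs follow Ollama's public REST API, but the model names, the toy log entries, and the prompt are assumptions rather than the production configuration.

            import numpy as np
            import requests

            OLLAMA = "http://localhost:11434"

            def embed(text, model="nomic-embed-text"):
                r = requests.post(f"{OLLAMA}/api/embeddings", json={"model": model, "prompt": text})
                return np.array(r.json()["embedding"])

            # Toy "index" standing in for log entries fetched from MySQL.
            logs = ["RF cavity 3 tripped, reset cleared the fault",
                    "BPM 12 readback noisy after power cycle"]
            index = np.stack([embed(t) for t in logs])

            question = "What happened with the RF cavities this week?"
            q = embed(question)
            scores = index @ q / (np.linalg.norm(index, axis=1) * np.linalg.norm(q))
            context = logs[int(np.argmax(scores))]          # retrieve the most similar log entry

            answer = requests.post(f"{OLLAMA}/api/generate", json={
                "model": "llama3", "stream": False,
                "prompt": f"Using this log entry:\n{context}\n\nAnswer: {question}"})
            print(answer.json()["response"])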

      • 260
        Accelerator digital twin development through simulation modeling and MLOps using the LUME ecosystem at SLAC

        Accelerator digital twins can enable real-time optimization and predictive control, helping streamline complex facility operations and reduce setup time. Machine learning (ML) models can enhance digital twin capabilities by leveraging prior experiments, known parameters, and real-time measurements. These require robust infrastructure and open-source software tools for accurate beam modeling and system integration. SLAC has developed the Lightsource Unified Modeling Environment (LUME) to enable large-scale modeling of X-ray free-electron laser performance. The team is developing an ecosystem of tools towards start-to-end simulations for a much broader set of accelerators beyond light sources. LUME components include Python wrappers for physics simulations, which can also work with a snapshot of the current accelerator readings received through Kafka. LUME-services offers core infrastructure for model workflows, including a contextualized file service, model and results databases, and scheduling via Prefect for automated runs. LUME-Model holds the data structures used in the LUME modeling toolset, encapsulating physics simulations and ML models. LUME-EPICS is a dedicated API for serving LUME model variables with EPICS. MLflow is being integrated to manage the full machine learning lifecycle. This infrastructure is currently being developed and deployed at SLAC, alongside collaborators from other laboratories (especially LBNL). Here we provide an overview of the LUME ecosystem and example use cases.

        Speaker: Gopika Bhardwaj (SLAC National Accelerator Laboratory)
      • 261
        Integrating CODAC in ITER Plant Simulator

        The use of Digital Control Systems (DCS) with process simulators for engineering purposes, control system validation, virtual commissioning, or operator training is increasingly in demand in large and increasingly complex industrial projects.
        Coupling a DCS with a process simulator requires support for specific functionalities: the ability to operate on a simulated time base and to save and restore states in order to load different scenario starting points or to jump back in time, which is traditionally achieved by emulating or simulating the DCS.
        The ITER Control, Data Access and Communication (CODAC) system uses EPICS at its core, which is not designed to operate with such constraints. In the frame of the ITER Plant Simulator project, we leveraged advanced Linux features (libfaketime, namespaces, and CRIU) combined with a custom interface between CODAC and the simulator to meet these requirements. This approach allows integration of a wide range of CODAC tools (HMI, Archive, Alarms, Logbook, Operations Sequencer), synchronized with the simulator, with a lightweight and efficient solution.

        Speaker: Ralph Lange (ITER Organization)
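
        As a small, hedged illustration of the libfaketime mechanism mentioned above (the preload path is distribution-specific and shown here for a Debian/Ubuntu layout; this is not the ITER Plant Simulator integration itself):

            import os
            import subprocess

            # Run a process under a simulated clock by preloading libfaketime and setting FAKETIME.
            # libfaketime also supports relative offsets and rate modifiers; see its documentation.
            env = dict(os.environ,
                       LD_PRELOAD="/usr/lib/x86_64-linux-gnu/faketime/libfaketime.so.1",
                       FAKETIME="@2025-01-01 08:00:00")
            subprocess.run(["date"], env=env, check=True)  # prints the faked date, not wall-clock time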
      • 262
        BOLT: beamline operations and learning testbed for EPICS and Bluesky integration

        The Beamline Operations and Learning Testbed (BOLT) is a portable, cost-effective platform developed at the Advanced Light Source, inspired by a similar device developed at Diamond Light Source, to test experimental control systems without disrupting user operations. BOLT simulates a beamline endstation with minimal hardware (two motors, one detector) while implementing the complete EPICS/Bluesky software stack. This testbed serves two purposes: firstly, developers can prototype new control software, user interfaces, and infrastructure layouts, e.g. virtualizing beamline computers, and secondly, beamline scientists and users can safely explore new developments and provide feedback before production deployment. This approach is intended to improve usability and accelerate adoption. To help lower the barrier of entry for facilities interested in adoption, we will open-source CAD models, wiring diagrams, and a bill of materials. In terms of experimental technique, BOLT implements photogrammetry (3D reconstruction from images), conceptually similar to tomography but using reflected rather than transmitted signals. We demonstrate the system running on a terminal-based EPICS/Bluesky integration, a browser-based interface developed in-house and presented in a separate contribution, as well as the traditional LabVIEW-based ALS controls system. By creating a safe learning environment, BOLT aims to accelerate the adoption of open-source controls tools across the synchrotron community.

        Speaker: Johannes Mahl (Lawrence Berkeley National Laboratory)
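
        As a hedged taste of the terminal-based EPICS/Bluesky stack mentioned above, the snippet below runs a scan against Bluesky's built-in simulated motor and detector; BOLT itself would substitute real ophyd devices for the two motors and the detector.

            from bluesky import RunEngine
            from bluesky.plans import scan
            from bluesky.callbacks.best_effort import BestEffortCallback
            from ophyd.sim import motor, det  # simulated devices shipped with ophyd

            RE = RunEngine({})
            RE.subscribe(BestEffortCallback())   # live table and plots in the terminal
            RE(scan([det], motor, -1, 1, 11))    # 11-point scan of the simulated detector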
      • 263
        Scheduler for cooling and ventilation plants: feedback on easy and low cost method for energy savings

        In industrial engineering, scheduling is a well-established strategy for optimizing resource use and minimizing operational costs. At CERN's Engineering department, the Cooling and Ventilation group has implemented an automatic scheduling solution to reduce electricity consumption by selectively shutting down plants during nights and weekends, when their operation is not required. Given that CV systems account for a significant share of CERN's total electricity use, even simple scheduling strategies can yield substantial energy savings - up to 75% in some cases. This paper presents the motivation, methodology, and preliminary results of scheduler deployments across multiple CV plants between 2023 and 2025, including recent pilots at Point 5 of the Large Hadron Collider (LHC). Two types of scheduler conditions were implemented: calendar-based (e.g., operating only during working hours) and temperature-based (e.g., starting only when zone temperature thresholds are exceeded). Operational safety was carefully assessed - a CO₂ measurement campaign was conducted at Point 5 to confirm compliance with environmental and safety requirements. Preliminary results from several sites show significant reduction in consumption without compromising performance. This low-cost approach demonstrates how simple digital solutions can lead to impactful energy savings in large-scale technical infrastructures.

        Speaker: Diogo Monteiro (European Organization for Nuclear Research)
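
        The two scheduler condition types described above can be combined in a few lines; the sketch below is an illustrative simplification with hypothetical working hours and temperature threshold, not the deployed CERN logic.

            from datetime import datetime

            def plant_should_run(now: datetime, zone_temperature_c: float,
                                 working_hours=(7, 19), temp_threshold_c=26.0) -> bool:
                # Calendar-based condition: weekdays within working hours.
                calendar_ok = now.weekday() < 5 and working_hours[0] <= now.hour < working_hours[1]
                # Temperature-based condition: start if the zone exceeds a threshold.
                temperature_ok = zone_temperature_c >= temp_threshold_c
                return calendar_ok or temperature_ok

            print(plant_should_run(datetime(2025, 6, 14, 23, 0), 24.5))  # weekend night, cool zone -> False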
      • 264
        Designing the High-Dynamic Double Crystal Monochromators (HD-DCM-Lite) control system for fast energy scans and beam sub-nanometer stability at SIRIUS

        Two new High-Dynamic Double Crystal Monochromators (HD-DCM-Lite) have been successfully deployed on the SAPUCAIA (SAXS) and QUATI (quick-EXAFS) beamlines at SIRIUS. Building on previous work, which introduced the dynamic modeling and initial stabilization control strategies*, this paper details the mechatronic architecture, commutation schemes, and control strategies that enabled these systems to meet stringent operational requirements during online beamline validation. The contributions of this work can be summarized in three key areas: (i) the development of commutation rules that enable closed-loop motion control of the 3-phase brushless rotary stages; (ii) the control approach for coordinated motion of two goniometers to achieve single-degree-of-freedom movement; and (iii) the design of controllers for the high-bandwidth Short-Stroke system. At SAPUCAIA, the HD-DCM-Lite achieved sub-5 nrad RMS parallelism stability in the pitch direction, essential for ultra-low-noise scattering experiments. At QUATI, the system was able to reach high-speed energy scanning while maintaining the beam in fixed-exit condition, crucial for quality assurance in time-resolved spectroscopy. These results highlight the HD-DCM-Lite as a state-of-the-art mechatronic platform. Experimental data on crystal and beam stabilities during online operation confirm the effectiveness of the designed control system and commutation strategies.

        Speaker: Gabriel Oehlmeyer Brunheira (Brazilian Synchrotron Light Laboratory)
    • 16:00
      Coffee
    • WEPD Posters
      • 265
        A high-precision motion profile data stream pipeline for LCLS-II fast wire scanner

        The LCLS-II is the first X-ray Free Electron Laser (XFEL) to utilize continuous-wave superconducting accelerator technology (CW-SCRF), capable of delivering X-ray pulses at repetition rates up to 1 MHz. The LCLS-II fast wire scanner motion control system, based on the Aerotech Ensemble controller, is designed to measure the beam profile across both high and low repetition rates. To analyse the motion trajectory of the fast wire scanner effectively and in a timely manner, we have developed a data stream pipeline that transmits high-precision profile data from the Ensemble controller to the LCLS-II server. This system integrates the motion profile into the EPICS control system, displaying the scan profile in real time via a PyDM GUI. This paper outlines the design of the data transmission pipeline and the software development process.

        Speaker: Ziyu Huang (SLAC National Accelerator Laboratory)
      • 266
        A serverless control system

        Serverless refers to a set of principles and practices that offload the complexities of provisioning, managing and scaling infrastructure to a cloud computing provider. At Fermilab, the Controls department has been investigating how to bring the promise of Serverless on-premise by providing similar cloud-computing infrastructure to software development teams. In this paper we discuss how Fermilab is utilizing Fission, a product that brings Functions-as-a-Service to Kubernetes as a platform for hosting core “business logic” of its control system in a developer-friendly and scalable environment.

        Speaker: John Diamond (Fermi National Accelerator Laboratory)
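
        For readers unfamiliar with Functions-as-a-Service platforms of this kind, a Fission Python function is just a module exposing main(), to which the platform routes HTTP requests. The sketch below follows Fission's documented Python environment, but the function name, file name, and query parameter are hypothetical.

            # device_status.py -- deployed with something like:
            #   fission function create --name device-status --env python --code device_status.py
            from flask import request  # Fission's Python environment exposes the request context via Flask

            def main():
                device = request.args.get("device", "unknown")
                # Core "business logic" would query the control system here.
                return f"status of {device}: ok\n"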
      • 267
        Accelerator digital twin development through simulation modeling and MLOps using the LUME ecosystem at SLAC

        Accelerator digital twins can enable real-time optimization and predictive control, helping streamline complex facility operations and reduce setup time. Machine learning (ML) models can enhance digital twin capabilities by leveraging prior experiments, known parameters, and real-time measurements. These require robust infrastructure and open-source software tools for accurate beam modeling and system integration. SLAC has developed the Lightsource Unified Modeling Environment (LUME) to enable large-scale modeling of X-ray free-electron laser performance. The team is developing an ecosystem of tools towards start-to-end simulations for a much broader set of accelerators beyond light sources. LUME components include Python wrappers for physics simulations, which can also work with a snapshot of the current accelerator readings received through Kafka. LUME-services offers core infrastructure for model workflows, including a contextualized file service, model and results databases, and scheduling via Prefect for automated runs. LUME-Model holds the data structures used in the LUME modeling toolset, encapsulating physics simulations and ML models. LUME-EPICS is a dedicated API for serving LUME model variables with EPICS. MLflow is being integrated to manage the full machine learning lifecycle. This infrastructure is currently being developed and deployed at SLAC, alongside collaborators from other laboratories (especially LBNL). Here we provide an overview of the LUME ecosystem and example use cases.

        Speaker: Gopika Bhardwaj (SLAC National Accelerator Laboratory)
      • 268
        Advanced p4p usage at the ISIS Neutron and Muon Source

        The p4p library is a Python wrapper for the C++ pvxs library allowing Python developers to access client functionality to put, get, and monitor pvAccess PVs. Server functionality allows the creation of PVs and implements the structure of the most commonly used Normative Types (e.g. NTScalar) and their fields (e.g. alarm, control, etc.). To facilitate the transition to EPICS underway at the ISIS Neutron and Muon Source accelerators, an implementation of the logic of the Normative Type fields and a subset of other IOC functionality such as CALC records has been developed. We present our uses of this work and highlight parts which may be applicable to other facilities interested in using Python.

        Speaker: Dr Ivan Finch (Science and Technology Facilities Council)
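
        As a brief illustration of the p4p server-side functionality described above (the PV name and handler logic are hypothetical; the ISIS work layers Normative Type field logic and record-like behaviour on top of this pattern):

            from p4p.nt import NTScalar
            from p4p.server import Server
            from p4p.server.thread import SharedPV

            # A writable NTScalar PV carrying display, control and valueAlarm sub-structures.
            pv = SharedPV(nt=NTScalar('d', display=True, control=True, valueAlarm=True),
                          initial=0.0)

            @pv.put
            def on_put(pv, op):
                pv.post(op.value())  # publish the new value to all monitors
                op.done()

            Server.forever(providers=[{'DEMO:VALUE': pv}])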
      • 269
        AI-driven device driver generator

        We present a web-based application that significantly simplifies and accelerates the development of Tango Controls device servers by integrating large language models (LLMs) into the code generation process. The tool allows users to define device attributes, commands, and properties through an intuitive graphical interface, and optionally upload device documentation in PDF format. Using retrieval-augmented generation, the system extracts relevant content from the documentation and generates Python code for Tango device servers, tailored to the specific device functionality. The backend leverages FastAPI and LangChain to interface with various LLMs such as GPT, Claude, and Gemini. Tests on devices like power supplies and teslameters show that the generated code often requires limited manual adjustments. While the application improves development efficiency and accuracy, it also highlights certain limitations, including occasional command mismatches and the need for better retrieval strategies. Future enhancements include automated test code generation, improved document parsing, support for additional programming languages, and integration of open-source models for broader applicability.

        Speaker: Mr Lukasz Zytniak (S2Innovation Sp z o. o. [Ltd.])
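
        For context, the kind of code such a generator targets is a standard PyTango device server. The minimal, hand-written sketch below shows the general shape of the expected output; the device class, attribute, and command names are hypothetical.

            from tango.server import Device, attribute, command, run

            class PowerSupply(Device):
                # Hypothetical attribute; a generated driver would talk to the real instrument here.
                voltage = attribute(dtype=float, unit="V")

                def init_device(self):
                    super().init_device()
                    self._voltage = 0.0

                def read_voltage(self):
                    return self._voltage

                @command(dtype_in=float)
                def SetVoltage(self, value):
                    self._voltage = value

            if __name__ == "__main__":
                run((PowerSupply,))  # requires a Tango database and a server instance name at launch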
      • 270
        AI-powered scientific chatbot

        We present the design of a retrieval-augmented generation (RAG) based scientific chatbot, tailored for control room operators at particle accelerators and laser facilities. The chatbot integrates with institutional knowledge bases, including operational manuals, control system documentation, incident logs, and structured machine data, to provide real-time, context-aware responses to operator queries. This tool is designed to support critical operational workflows such as troubleshooting, shift handovers, beamline setup, and safety procedures. By leveraging secure deployment options (e.g. on-premise or cloud environments), it ensures compliance with data governance and cybersecurity policies typical in large-scale research infrastructures. The system reduces cognitive load, improves onboarding of new staff, and enhances efficiency by enabling intuitive natural language access to complex technical knowledge, automatic log and report creation, and many other optimizations of daily responsibilities. We will discuss the system architecture, data integration challenges, evaluation with pilot users, and the broader potential of AI assistants in control room environments.

        Speaker: Mr Lukasz Zytniak (S2Innovation Sp z o. o. [Ltd.])
      • 271
        An FPGA-based autoencoder model for real-time RF signal denoising for industrial accelerators

        A challenge that industrial particle accelerators face is the high amounts of noise in sensor readings. This noise obscures essential beam diagnostic and operational data, limiting the amount of information that is relayed to machine operators and beam instrumentation engineers. Machine learning-based techniques have shown great promise in isolating noise patterns while preserving high-fidelity signals, enabling more accurate diagnostics and performance tuning. Our work focuses on the implementation of a real-time FPGA-based noise reduction autoencoder, tested on a Xilinx ZCU104 evaluation kit with the intention of being deployed on industrial particle accelerators in the near future.

        Speaker: Vikshar Rajesh (RadiaSoft (United States))
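
        A compact PyTorch sketch of a 1-D convolutional denoising autoencoder is shown below; the layer sizes and the training/export path (e.g. ONNX plus an FPGA toolflow such as hls4ml) are illustrative assumptions rather than the deployed ZCU104 design.

            import torch
            from torch import nn

            class DenoisingAutoencoder(nn.Module):
                # Encoder/decoder pair over 1-D RF waveforms; sizes are illustrative only.
                def __init__(self, channels=1):
                    super().__init__()
                    self.encoder = nn.Sequential(
                        nn.Conv1d(channels, 16, kernel_size=9, stride=2, padding=4), nn.ReLU(),
                        nn.Conv1d(16, 32, kernel_size=9, stride=2, padding=4), nn.ReLU(),
                    )
                    self.decoder = nn.Sequential(
                        nn.ConvTranspose1d(32, 16, kernel_size=9, stride=2, padding=4, output_padding=1), nn.ReLU(),
                        nn.ConvTranspose1d(16, channels, kernel_size=9, stride=2, padding=4, output_padding=1),
                    )

                def forward(self, x):
                    return self.decoder(self.encoder(x))

            model = DenoisingAutoencoder()
            noisy = torch.randn(8, 1, 1024)   # batch of simulated noisy RF windows
            denoised = model(noisy)           # in practice trained on (noisy, clean) pairs first
            print(denoised.shape)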
      • 272
        An overview of the FGC4 – CERN’s new power converter controller

        The CERN’s Electrical Power Converters group manages over 5000 power converters, 4000 of which are controlled, monitored, and diagnosed by a few generations of the Function Generator/Controller (FGC) devices. However, the current generation (FGC3) is now facing performance limitations and component obsolescence. To address this and accommodate future installations at CERN and other laboratories, a fourth generation of FGC is under development. Built with cutting-edge technology and modern standards, FGC4 features a Linux-based System-on-Chip (SoC), delivering an order-of-magnitude improvement in regulation rate, extensive configuration options, and significantly enhanced diagnostics. While designed to fit CERN’s accelerator control system, reusability beyond CERN has been a core design principle from the outset, enabling compatibility with EPICS and TANGO frameworks. This paper provides an overview of the FGC4 project, with a primary focus on its software architecture and highly modular design, which facilitates extensibility and ensures a future-proof solution. Additionally, it discusses the hardware architecture, including a CERN-developed System-on-Module hosting a Xilinx SoC.

        Speaker: Dariusz Zielinski (European Organization for Nuclear Research)
      • 273
        Anomaly detection in fast-sampled RF signal data

        Jefferson Lab recently upgraded data acquisition on legacy cryomodules in CEBAF to increase the sample rate of RF signals from one Hz to 5 kHz. Initial results show that pairing this fast-sampled data with a machine learning algorithm can yield automated detection of anomalies that would have otherwise slipped by unnoticed. These uncaught RF anomalies often result in beam loss trips and a time intensive investigative effort. A major challenge in earlier work was that only limited data could be collected and analyzed prior to an extended scheduled accelerator down. In this work, I present the results of alternative machine learning algorithms for investigating abnormalities on additional data from this fast-sampled RF data source.

        Speaker: Adam Carpenter (Thomas Jefferson National Accelerator Facility)
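
        As one hedged example of the kind of unsupervised detector that can be applied to such data (not necessarily the algorithm evaluated in the paper), an isolation forest over per-window summary features might look like this; the features and synthetic data are purely illustrative.

            import numpy as np
            from sklearn.ensemble import IsolationForest

            rng = np.random.default_rng(0)
            # Feature vectors summarizing 5 kHz RF waveform windows (e.g. RMS, peak, spectral ratios).
            normal = rng.normal(0.0, 1.0, size=(5000, 6))
            faulty = rng.normal(4.0, 1.0, size=(20, 6))

            detector = IsolationForest(contamination=0.01, random_state=0).fit(normal)
            flags = detector.predict(np.vstack([normal, faulty]))  # -1 marks suspected anomalies
            print(int((flags[-20:] == -1).sum()), "of 20 injected anomalies flagged")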
      • 274
        ATLAS DEMO Inheritance commissioning and performance testing using SCADA- and PLC-based automated procedures

        By the end of 2024, to cope with the needs of the future ATLAS tracking detector (ITk), the surface integration and testing facility at CERN was fully equipped with a new large-power, low-temperature cooling system nicknamed “DEMO Inheritance”, based on the 2PACL technique. This paper will discuss the implemented control system solutions for both the cooling plant and the distribution system installed in the detector proximity at the assembly clean room. The PLC and SCADA software has been fully deployed following the CERN UNICOS framework and allowed for a successful and rapid initial performance test with a 50 kW dummy load. In this paper we describe in detail the SCADA- and PLC-based procedures used for automatic system performance tests with no operator attendance, as the first step towards the future commissioning of the final detector cooling systems of the ATLAS and CMS detectors in the High-Luminosity era of the Large Hadron Collider at CERN. These procedures, running unattended, allow for a significant reduction in commissioning time, which is a key requirement for a very challenging schedule.

        Speaker: Lukasz Zwalinski (European Organization for Nuclear Research)
      • 275
        Automated sample identification and registration system for the MOGNO beamline at SIRIUS

        Mogno* is a micro- and nano-tomography beamline at the Brazilian Synchrotron Light Source, SIRIUS. It performs fast tomographies with tender (22 keV, 39 keV) and hard (67 keV) X-rays at resolutions down to 500 nm, supporting classic, 4D (time-resolved), zoom (continuous magnification) and high-throughput experiments. Two stations are available: a nano-station for external users and a micro-station in scientific commissioning, each equipped with an automatic sample-exchange system using robotic arms and magazines holding 21 and 88 samples, respectively.
        To enhance automation and user experience under fast-measurement, high-throughput conditions, we developed a sample-registration and cataloging system. Samples are registered via a Data Matrix at a dedicated station using a PyQt desktop interface, which sends requests to a RESTful FastAPI service backed by PostgreSQL. During operation, an orchestrator routine coordinates the magazine, robot arm and code reader to identify each sample holder, fetch its name via the SOA services, and display status on the main beamline control UI. This architecture streamlines workflows, reduces manual errors and enables traceable, high-volume sample handling.

        Speaker: Lucas Eduardo Pinho Vecina (Brazilian Center for Research in Energy and Materials)
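
        A minimal sketch of the registration-service pattern described above is shown below, with an in-memory dictionary standing in for the PostgreSQL backend; the endpoint paths and field names are hypothetical.

            from fastapi import FastAPI
            from pydantic import BaseModel

            app = FastAPI()
            samples: dict[str, dict] = {}   # stand-in for the PostgreSQL-backed store

            class Sample(BaseModel):
                datamatrix: str             # code read from the sample holder
                name: str
                magazine_slot: int

            @app.post("/samples")
            def register_sample(sample: Sample):
                samples[sample.datamatrix] = sample.model_dump()  # Pydantic v2
                return {"status": "registered"}

            @app.get("/samples/{datamatrix}")
            def lookup_sample(datamatrix: str):
                return samples.get(datamatrix, {"status": "unknown"})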
      • 276
        Bayesian active learning for converging posteriors in latent variable inference for control systems

        Inferring latent variables, such as Courant-Snyder parameters in particle accelerators, is challenging due to noisy, partial observations that often produce multi-modal posterior distributions, despite the true latent variable being unique. We present a Bayesian Active Learning (BAL) framework to enhance latent variable inference in simulation-equipped control systems. BAL actively selects control settings (e.g., quadrupole magnet configurations) to maximize information gain, efficiently refining multi-modal posteriors into unimodal ones for improved inference accuracy. Using an ensemble of physics-informed beam envelope simulations in PyTorch, our approach approximates posterior sampling and mutual information to guide data acquisition. This interpretable framework holds broad potential for improving latent variable inference in control systems.

        Speaker: Kilean Hwang (Facility for Rare Isotope Beams)
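
        The selection step can be illustrated with a much-simplified proxy: an ensemble of latent-value hypotheses is pushed through a toy forward model, and the next control setting is chosen where the ensemble predictions disagree most, a variance-based stand-in for the mutual-information criterion described above. The model and numbers are purely illustrative.

            import numpy as np

            rng = np.random.default_rng(0)
            latent_samples = rng.normal(1.0, 0.5, size=200)      # posterior samples of a latent parameter

            def simulate(latent, setting):
                # Toy forward model mapping (latent, control setting) to an observation.
                return np.sin(setting * latent)

            settings = np.linspace(0.1, 5.0, 50)
            predictions = np.array([[simulate(l, s) for s in settings] for l in latent_samples])
            info_proxy = predictions.var(axis=0)                 # disagreement across the ensemble
            print("next measurement at setting", settings[np.argmax(info_proxy)])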
      • 277
        BOLT: beamline operations and learning testbed for EPICS and Bluesky integration

        The Beamline Operations and Learning Testbed (BOLT) is a portable, cost-effective platform developed at the Advanced Light Source, inspired by a similar device developed at Diamond Light Source, to test experimental control systems without disrupting user operations. BOLT simulates a beamline endstation with minimal hardware (two motors, one detector) while implementing the complete EPICS/Bluesky software stack. This testbed serves two purposes: firstly, developers can prototype new control software, user interfaces, and infrastructure layouts, e.g. virtualizing beamline computers, and secondly, beamline scientists and users can safely explore new developments and provide feedback before production deployment. This approach is intended to improve usability and accelerate adoption. To help lower the barrier of entry for facilities interested in adoption, we will open-source CAD models, wiring diagrams, and a bill of materials. In terms of experimental technique, BOLT implements photogrammetry (3D reconstruction from images), conceptually similar to tomography but using reflected rather than transmitted signals. We demonstrate the system running on a terminal-based EPICS/Bluesky integration, a browser-based interface developed in-house and presented in a separate contribution, as well as the traditional LabVIEW-based ALS controls system. By creating a safe learning environment, BOLT aims to accelerate the adoption of open-source controls tools across the synchrotron community.

        Speaker: Johannes Mahl (Lawrence Berkeley National Laboratory)
      • 278
        CANModule: a lightweight, vendor-neutral CAN bus abstraction library for simplified integration and diagnostics

        This paper presents CANModule, an open‑source, cross‑platform library that provides a unified abstraction layer for vendor‑specific Controller Area Network (CAN) bus implementations. It supports ethernet‑CAN gateways from Analytica and Linux’s SocketCAN out of the box, and offers an open architecture for adding further gateways. Requiring only standard C++17, CANModule is lightweight and framework‑independent, unlike Qt CAN support, which introduces extra dependencies. The library standardises CAN communication via a generic API in C++ and Python, reducing the effort of integrating multiple vendor APIs. Built‑in diagnostic tools mirror SocketCAN’s canutils but work transparently with many vendors and OS, easing development for heterogeneous CAN environments. CANModule is integrated into CERN’s Quasar Framework, enabling numerous OPC UA servers that control large‑scale experiments and infrastructure. It has proven reliable in settings such as ATLAS detector control and power‑supply control. Beyond CERN, the library suits industrial applications—including automotive and robotics—by providing a scalable, extensible foundation for CAN‑based systems and abstracting vendor‑specific complexities. CANModule streamlines CAN bus integration, providing a flexible, dependable, and efficient foundation for both research and industrial use cases.

        Speaker: Luis Miguens Fernandez (European Organization for Nuclear Research)
      • 279
        Commissioning and operation of vacuum control system for SPES project

        The SPES (Selective Production of Exotic Species) project aims to create a facility based on particle accelerators to produce radioactive ion beams. The second phase of the project foresees the transport of the non-reaccelerated radioactive ion beam from the TIS (Target Ion Source) to the low-energy experimental hall. This part of the SPES beam lines places the most demanding requirements, in terms of complexity, on the vacuum control system, such as interfaces with the GRS (Gas Recovery System), GSS (Global Safety System) and MPS (Machine Protection System), management of a centralized pumping system for the exhaust, and different configurations of each section.
        The VCS (Vacuum Control System) of the TIS and of the following beam line sections is based on modular control units which are highly configurable, so that they can be used in different installations and with different equipment. Each unit consists of a SIEMENS S7-1500 PLC and a 10” touch panel for local/remote configuration and operation by expert operators, while the high-level control system is implemented in EPICS (Experimental Physics and Industrial Control System) and CSS (Control System Studio).
        This paper describes the commissioning phase of the VCS for the SPES facility and the operation of the roughly 20 systems running at LNL (Laboratori Nazionali di Legnaro)*.

        Speaker: Loris Antoniazzi (Istituto Nazionale di Fisica Nucleare, Laboratori Nazionali di Legnaro)
      • 280
        Control and tuning of complex bend magnet for proposed NSLS-II upgrade

        The Complex Bend (CB) is a novel lattice concept proposed for the NSLS-II upgrade, utilizing permanent magnets instead of traditional electromagnets. This innovative design aims to reduce horizontal emittance from 700 pm to 40 pm and increase beam energy from 3 GeV to 4 GeV, significantly enhancing beam brightness. However, as a new lattice architecture, the CB introduces substantial technical challenges in design, assembly, and verification, particularly in meeting stringent magnetic field requirements. Unlike electromagnets, permanent magnets cannot be adjusted after assembly, making precise design and fabrication critical. These challenges are further compounded by the nonlinear behavior of magnetic fields with respect to magnet position and geometry.

        To address these issues, we propose integrating advanced FPGA-based hardware with EPICS-based software into a comprehensive control and tuning system. Real-time sensor data, including position, pressure, magnetic field strength, and temperature will be continuously collected and analysed. In addition, AI/ML algorithms will support optimizing magnet positioning and alignment to meet the required field specifications for each CB unit. This presentation will cover the CB mechanical assembly system, electrical hardware design, low-level control software design, and high-level tuning software implementation.

        Speaker: Yuke Tian (Brookhaven National Laboratory)
      • 281
        Control system design for the new SMH16 pulsed current generator in the CERN PS extraction system

        The existing direct-drive septum magnet (PE.SMH16), in operation since 1994, is reaching the end of its lifetime under increased extraction frequency and will be replaced by a new eddy-current septum magnet, requiring a redesigned 30 kA pulsed generator. To meet the demanding flat-top stability requirement of ±0.05% over an 11 µs window and ±0.05% pulse-to-pulse repeatability over a year, a dedicated regulation control system was developed and validated. The objective of this work is to demonstrate that the control system achieves the required closed-loop performance, delivering repeatable magnet current waveforms under representative test bench conditions. The control system employs real-time regulation of flat-top amplitude and flatness, supported by a thermally stabilized enclosure to mitigate acquisition drift. Measurements confirm that the closed-loop system consistently maintained flatness requirement over an 8 µs window, with long-term repeatability and all observed deviations, including spurious glitches up to 400 ppm, remaining within specification. The restriction of 8 µs stems from test-bench limitations rather than control capability, which is discussed in detail. These results show that the regulation system fulfils its role up to specification; final confirmation of absolute flat-top accuracy will come from further qualification of acquisition chain elements in a dedicated test campaign, followed by beam-based validation in operation.

        Speaker: Christophe Boucly (European Organization for Nuclear Research)
      • 282
        Control system information exchange based on data models

        Many control algorithms and optimisation procedures, e.g. machine learning or automated commissioning, profit from a consistent set of data that is available at high frequency. Modern distributed control systems allow data to be combined and presented based on data models, which are then transported consistently over the network: e.g. EPICS7 introduced such data models as normative types or combinations thereof.
        In this paper the authors present use cases that profit from a consistent, robust combination of the data sub-models of many devices into a higher-order data model; today such combinations are typically implemented ad hoc in some programming language. Finally, common patterns are presented which could reasonably be implemented independently.

        Speaker: Pierre Schnizer (Helmholtz-Zentrum Berlin für Materialien und Energie)
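
        To make the idea concrete, the sketch below composes two hypothetical device sub-models into one higher-order pvAccess structure using p4p's Type and Value classes; all field names are invented for illustration and do not come from the paper.

            from p4p import Type, Value

            bpm_fields = [('x', 'd'), ('y', 'd')]
            corrector_fields = [('setpoint', 'd'), ('readback', 'd')]

            # Higher-order model combining the sub-models of two devices plus a common timestamp.
            cell_type = Type([
                ('bpm', ('S', None, bpm_fields)),
                ('corrector', ('S', None, corrector_fields)),
                ('timestamp', 'd'),
            ])

            snapshot = Value(cell_type, {
                'bpm': {'x': 0.12, 'y': -0.03},
                'corrector': {'setpoint': 1.5, 'readback': 1.49},
                'timestamp': 0.0,
            })
            print(snapshot['bpm']['x'], snapshot['corrector']['readback'])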
      • 283
        Design of an intelligent inspection system for particle accelerator facilities

        To address the limitations of traditional manual inspection of particle accelerator facilities, an intelligent inspection system based on multi-modal sensors and artificial intelligence technology is presented. Multi-modal sensors collect various types of data from the accelerator facilities, such as temperature, audio, images, and water-leakage information, providing comprehensive information for a thorough understanding of the equipment status. Artificial intelligence technology can conduct in-depth analysis of massive amounts of data and uncover fault patterns and rules. Through techniques such as fault modeling and data analysis, it achieves early warning and diagnosis of faults. The intelligent inspection system effectively addresses the limitations of traditional inspection methods: it enables real-time monitoring of the accelerator facilities and rapid alarming on abnormalities, significantly improving work efficiency and the operational safety of the equipment. This system provides strong support for the stable operation of particle accelerator facilities.

        Speaker: Yuliang Zhang (Institute of High Energy Physics)
      • 284
        Design of an upgraded analog signal digitizer to replace the MADC system at RHIC

        A new general-purpose analog signal digitizer has been designed and prototyped to serve as an upgrade to the legacy Multiplexed Analog to Digital Converter (MADC) system currently in use around the RHIC accelerator and injector complex at BNL. The new system is a standalone rackmount chassis with an embedded System on a Chip (SoC). This is a departure from the traditional VME form factor used by most legacy controls equipment within the Collider Accelerator Department. New features include completely independent channels, real time digital signal processing, large sample buffers, built-in timing links, and high bandwidth network connectivity. Support is included for the legacy timing links as well as future compatibility with the EIC Timing Data Link. The core features, system architecture, and scheme for integration with the controls system network is presented.

        Speaker: Paul Bachek (Brookhaven National Laboratory)
      • 285
        Design study for integrating EPICS-based control systems with medical treatment apparatus

        This contribution proposes a design study focused on the future integration between an EPICS-based accelerator control system and a medical treatment environment within the ANTHEM project. As the accelerator subsystems (source, RFQ, MEBT, etc.) evolve toward operational readiness, a conceptual architecture is needed to bridge high-level beamline control with treatment room systems. The goal is to anticipate integration challenges by identifying critical requirements, data flows, and safety constraints typical of medical environments. This study explores design approaches to minimize bottlenecks at the final integration phase, particularly where patient positioning, safety interlocks, and real-time synchronization with beam delivery intersect. The study will not implement a prototype, but rather provide a high-level blueprint and system requirements, laying the groundwork for future harmonization between scientific instrumentation and clinical operation contexts.

        Speaker: Mauro Giacchini (Istituto Nazionale di Fisica Nucleare, Laboratori Nazionali di Legnaro)
      • 286
        Designing the High-Dynamic Double Crystal Monochromators (HD-DCM-Lite) control system for fast energy scans and beam sub-nanometer stability at SIRIUS

        Two new High-Dynamic Double Crystal Monochromators (HD-DCM-Lite) have been successfully deployed on the SAPUCAIA (SAXS) and QUATI (quick-EXAFS) beamlines at SIRIUS. Building on previous work, which introduced the dynamic modeling and initial stabilization control strategies*, this paper details the mechatronic architecture, commutation schemes, and control strategies that enabled these systems to meet stringent operational requirements during online beamline validation. The contributions of this work can be summarized in three key areas: (i) the development of commutation rules that enable closed-loop motion control of the 3-phase brushless rotary stages; (ii) the control approach for coordinated motion of two goniometers to achieve single-degree-of-freedom movement; and (iii) the design of controllers for the high-bandwidth Short-Stroke system. At SAPUCAIA, the HD-DCM-Lite achieved sub-5 nrad RMS parallelism stability in the pitch direction, essential for ultra-low-noise scattering experiments. At QUATI, the system was able to reach high-speed energy scanning while maintaining the beam in fixed-exit condition, crucial for quality assurance in time-resolved spectroscopy. These results highlight the HD-DCM-Lite as a state-of-the-art mechatronic platform. Experimental data on crystal and beam stabilities during online operation confirm the effectiveness of the designed control system and commutation strategies.

        Speaker: Gabriel Oehlmeyer Brunheira (Brazilian Synchrotron Light Laboratory)
      • 287
        Development of an EPICS-based control system platform for electron beam commissioning at ELI-NP

        The ELI-NP (Extreme Light Infrastructure - Nuclear Physics) gamma beam system employs a 234-742 MeV linear accelerator to generate gamma rays via laser-electron interactions. Accelerator operation requires coordinated control of multiple subsystems (RF, vacuum, magnets, diagnostics). During pre-commissioning, we developed an EPICS-based platform to enable equipment debugging and beam tuning simulations before control system completion.
        The platform integrates both soft IOCs (for device emulation and algorithm validation) and device IOCs (for hardware control). This dual-IOC architecture allows: 1) High-level control software testing without physical hardware, and 2) Physics application interfaces for advanced functionality. The system successfully simulated control operations and supported tuning algorithm development.
        Testing demonstrated the platform's effectiveness for subsystem debugging and realistic beam tuning simulations. By bridging the gap between hardware availability and control system readiness, the platform is expected to accelerate the commissioning process and enhance the operational efficiency of the ELI-NP gamma beam system in the future.

        Speaker: Mr Aurelian Ionescu (Horia Hulubei National Institute for R and D in Physics and Nuclear Engineering)
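
        A soft IOC that emulates a device can be only a few lines with the pythonSoftIOC package; the record names below are hypothetical and the emulation is deliberately trivial (the setpoint is simply mirrored to the readback), unlike the full device emulation described above.

            from softioc import softioc, builder, asyncio_dispatcher

            dispatcher = asyncio_dispatcher.AsyncioDispatcher()
            builder.SetDeviceName("SIM:MAG01")

            current_rbv = builder.aIn("CURRENT_RBV", initial_value=0.0)
            current_sp = builder.aOut("CURRENT", initial_value=0.0,
                                      on_update=lambda value: current_rbv.set(value))

            builder.LoadDatabase()
            softioc.iocInit(dispatcher)
            softioc.interactive_ioc(globals())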
      • 288
        Development of virtual beamline technology for advanced light sources: simulation and application of key components

        This study develops a virtual beamline technology for advanced light sources, with a focus on simulating the fundamental operational functions of critical devices including motors, double-crystal monochromators (DCM), fluorescent screens (FS), X-ray detectors, and X-ray beam position monitors (XBPM). By establishing parametric models, the simulation of device actions is achieved. It supports users in setting the displacement of motors, adjusting the Bragg angle of the DCM, and configuring the parameters of the slit aperture, and generates the corresponding state feedback signals of the devices. An interactive visualization interface is designed. Based on the state feedback signals of the devices, it generates the spot images on the fluorescent screen and synchronously displays the position trajectory of the beam measured by the XBPM, providing a visual reference for the beam tuning process. Through preliminary beam tuning simulations, the platform enables standardized operational workflows (e.g., energy selection) and optimizes parameter configuration sequences, effectively reducing trial-and-error adjustments during physical commissioning. The lightweight simulation framework proposed in this work offers a scalable and practical reference for advancing virtual commissioning technologies in synchrotron radiation facilities and other large-scale scientific installations.

        Speaker: Miao Zhang (Institute of High Energy Physics)
      • 289
        Development status of FPGA-based FOFB system for PLS-II

        The third-generation synchrotron accelerator Pohang Light Source-II (PLS-II) at Pohang Accelerator Laboratory uses a Fast Orbit Feedback (FOFB) system to maintain beam orbit stability in the storage ring. The FOFB system operates in real time to suppress orbit perturbations in both horizontal and vertical directions. Currently, the system uses VME-based Single Board Computers (SBCs) and Reflective Memory (RFM) technology, achieving a feedback repetition rate of about 1kHz. However, the aging hardware is causing difficulties in maintenance and performance upgrades. To solve this issue, a new FOFB system based on Zynq UltraScale+ FPGA high-speed digital processing technology is under development, aiming to increase the feedback rate to 10 kHz. The new design distributes twelve independent FOFB controllers throughout the storage ring to minimize latency from Beam Position Monitor (BPM) Fast Acquisition (FA) data reception to the output of control signals to the magnet power supplies. The system is being developed to work stably with the existing Fast Magnet Power Supplies at 1 kHz and also to support future high-performance supplies capable of operating at higher rates. The FPGA-based FOFB system is currently under development, with a goal of achieving a control bandwidth greater than 100 Hz and significantly improving maintainability and scalability. This paper introduces the design concept and the current development status of the new system.

        Speaker: Wooseong Cho (Pohang Accelerator Laboratory)
      • 290
        Digital twin framework for enhanced commissioning and operation of the PIP-II superconducting linac

        The PIP-II superconducting linac at Fermilab is designed to deliver multi-megawatt proton beams for neutrino physics and other high-intensity applications. To expedite commissioning and enhance operational reliability, we have developed an EPICS-based data flow framework that seamlessly integrates digital twins (DT) with physical twins (PT). These digital twins comprise high-fidelity beam dynamics models or data-driven surrogate models connected to their physical counterparts through real-time diagnostics and advanced machine-learning algorithms.
        Central to this framework is Linac_Gen, an accelerated simulation tool that incorporates convolutional neural networks, random forests, and genetic algorithms to provide up to a tenfold speedup in optimizing the accelerator geometry model. An EPICS translator layer ensures interoperability by efficiently mapping lattice parameters across diverse simulation platforms.
        Our EPICS-based framework supports multiple operational modes—monitoring, passive learning, closed-loop control, and online learning—covering the entire machine lifecycle. By leveraging HPC resources and multi-objective optimization techniques, the digital twin enables adaptive trajectory correction, real-time fault detection, and predictive modeling of beam stability. This comprehensive approach paves the way for robust, high-intensity operation and data-driven accelerator R&D at Fermilab.

        Speaker: Abhishek Pathak (Fermi National Accelerator Laboratory)
      • 291
        Enabling high-performance PLC communication through open standards: OPC UA PubSub

        The growing diversity of PLC models and brands in industrial controls systems is increasing the complexity of the communications at the supervisory (SCADA) and control layers. At CERN, the in-house framework UNICOS-CPC manages this diversity through the integration of proprietary and open communication protocols, as well as bespoke implementations at the SCADA level to manage the incoming process data received by different communication drivers, including Modbus, S7, and S7Plus. To reduce this complexity, this paper proposes unifying PLC-SCADA communications across all platforms using OPC UA PubSub, a lightweight and highly performant publisher-subscriber protocol specified in the IEC 62541 industrial standard. This approach simplifies integration with new vendors and technologies, while enabling direct communication between PLCs. It positions OPC UA once more as a homogenizing middleware layer on top of heterogeneous hardware, which has proven to be a reliable and scalable solution for other use cases at CERN, such as power supplies, powered crates and custom electronics. The paper outlines the design, prototyping and testing phases involved in integrating the OPC UA PubSub protocol into industrial applications. It also presents the challenges encountered in the integration process, and concludes with the promising results achieved in both PLC-PLC and PLC-SCADA communication setups.

        Speaker: Loreto Gutierrez Prendes (European Organization for Nuclear Research)
      • 292
        Enhancing SIRIUS fast orbit feedback actuators using IIR filters

        Studies with the SIRIUS Fast Orbit Feedback (FOFB) system revealed that its power supplies were operating near saturation, which limited the achievable FOFB controller gain. Further analysis identified avoidable noise sources in Beam Position Monitors (BPMs) and adjacent magnet power supplies as the main causes of this behavior. This paper presents a model-based approach employing Infinite Impulse Response (IIR) filters to attenuate the effects of such noise sources on the control effort as well as equalization of actuators' dynamic responses without any hardware modifications. The proposed method effectively reduces actuator effort and provides response equalization adding negligible phase shift in the band of interest, resulting in higher FOFB loop gain and improved rejection of orbit disturbances up to 1 kHz.

        Speaker: Guilherme Ricioli (Brazilian Synchrotron Light Laboratory)
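
        The flavour of filtering involved can be sketched with SciPy: a notch to attenuate a known narrow-band disturbance plus a low-order shaping filter as a crude stand-in for actuator equalization. The sampling rate, frequencies and filter orders below are illustrative assumptions, not the SIRIUS design values.

            import numpy as np
            from scipy import signal

            fs = 48_000.0                                                # hypothetical FOFB update rate [Hz]
            b_notch, a_notch = signal.iirnotch(60.0, 30.0, fs=fs)        # reject a 60 Hz line harmonic
            b_eq, a_eq = signal.butter(1, 2_000.0, btype="low", fs=fs)   # first-order equalization stand-in

            t = np.arange(0, 0.1, 1 / fs)
            effort = np.sin(2 * np.pi * 60 * t) + 0.1 * np.random.default_rng(0).standard_normal(t.size)
            filtered = signal.lfilter(b_eq, a_eq, signal.lfilter(b_notch, a_notch, effort))
            print(float(np.std(effort)), float(np.std(filtered)))        # reduced control effort after filtering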
      • 293
        EPICS IOC Extension Points: Old, Recent, and Proposed

        The EPICS Input/Output Controller (IOC) has always been extensible, enabling applications to add functionality without modifying the core software. Since the early EPICS 3.14 releases in 2002 the Core Developers have introduced ten new extension mechanisms that IOC application developers can use in individual IOCs or shared support modules. This paper reviews the plugin interfaces available in EPICS 7.0.9 and suggests a couple of areas where new extension points could be added in the future.

        Speaker: Andrew Johnson (Argonne National Laboratory)
      • 294
        epics-in-docker: a small framework for building slim IOC and EPICS tooling container images

        The SIRIUS accelerators have used containers for IOCs for years, but build definitions and launch scripts were often duplicated, and image sizes could be over 3GB. On the other hand, the SIRIUS beamlines, until recently, used IOCs installed in a shared NFS, which complicated application management, especially across different OS versions.
        To address these issues, we have developed a framework for building slim IOC container images (e.g. ADAravis takes 300MB) using a curated set of dependencies (and their versions) and simple and short build definitions. We avoid duplicating shared information by using git submodules, which aids in versioning the base images used. The resulting container images include a standard set of installed packages and scripts, making them ready for deployment in a wide range of container orchestration setups. The shared interface provided by the EPICS build system allows us to also create images with EPICS tools, including CA and PVA gateways and epics-base utilities.
        For beamlines, it was necessary to adapt the IOC orchestration to also support containerized applications, keeping the same user interface for managing IOCs for beamline and support staff.
        This article aims to explain the epics-in-docker architecture, the user experience, and how SIRIUS manages containers. It also aims to explore the different tradeoffs made in epics-in-docker and other frameworks, such as our choice to not support different versions of dependencies.

        Speaker: Guilherme Rodrigues de Lima (Brazilian Synchrotron Light Laboratory)
      • 295
        Exploring AI-based models in accelerators: a case study of the SOLARIS synchrotron

        The National Synchrotron Radiation Center SOLARIS, a third-generation light source located in Poland, is the only synchrotron in Central-Eastern Europe. The SOLARIS Center, with seven fully operational beamlines, serves as a hub for research across a diverse range of disciplines. The most important task of such a research infrastructure is to provide stable working conditions for the users, operators and the projects being conducted. Because of its unique properties, the complexity of the problem, and the advanced approaches it requires, anomaly detection and automatic signal analysis for beam stability assessment remain a major challenge that has not yet been fully solved. To address this problem, different AI-based projects are under discussion and development, e.g. automatic analysis of diagnostic signals such as transverse beam profiles, or classification of beam position FFT windows. The best solution proposed so far, based on the InceptionV3 architecture, can assess beam quality automatically from the image alone, with 94.1% accuracy and 96.6% precision. Current developments and deployments in this field at SOLARIS, for both the accelerator and the beamlines, will be discussed.

        Speaker: Michal Piekarski (SOLARIS National Synchrotron Radiation Centre)
      • 296
        Extension of the SPIRAL2 PLC-based control system for the integration of DESIR and NEWGAIN

        The SPIRAL2 heavy ion accelerator is currently undergoing several extension projects. In Phase 1+, the DESIR experimental hall will receive very low energy radioactive ion beams from S3 or SPIRAL1. In Phase 1++, a new cryogenic ion source and a new injector will be implemented to broaden the range of heavy ions currently accelerated.
        This contribution presents the integration of these new facilities into the SPIRAL2 PLC-based control system. The integration builds on recent technological upgrades while maintaining system consistency and ensuring compliance with the specific constraints related to radiation protection and operational safety. Some existing PLCs have been extended, and new ones have been added to automate processes related to beam operation (machine protection system, equipment insertion, vacuum, RF, cryogenics), building infrastructure (nuclear ventilation, refrigeration, alarm management), and nuclear safety (access control units, radiation monitoring).
        This integrated approach ensures a coherent, maintainable control system that supports safe and efficient operations for the DESIR and NEWGAIN projects.

        Speaker: Quentin Tura (GANIL, Institut National de Physique Nucléaire et de Physique des Particules)
      • 297
        Facility-scale differentiable digital twin for the IOTA/FAST facility

        As the design complexity of modern accelerators grows, there is increasing interest in advanced simulations that execute quickly or yield additional insights. One notable example is the gradients of physical observables with respect to design parameters, which are broadly useful in optimization and uncertainty analysis. The IOTA/FAST facility has been working on implementing and experimentally validating an end-to-end digital twin that is both fast and gradient-aware, allowing rapid prototyping of new software and experiments with minimal beam time cost. In this contribution we discuss our plans and progress. We cover the integration of physics and ML codes for linac and ring simulation and discuss the development of generic interfaces between surrogate and physics-based sections. We focus on how the interface is exposed either as a deterministic event-loop (discrete event simulator) API or through several control-system interfaces: a fully asynchronous EPICS soft IOC and a gRPC-based Data Pool Manager. We also discuss challenges in model calibration and uncertainty quantification, as well as future plans to extend modelling to proton accelerators such as PIP-II and the Booster.
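
        As a toy illustration of the gradient-aware modelling idea, assuming JAX for automatic differentiation (this is not the IOTA/FAST code; the beam-line map and parameters are illustrative):

            import jax
            import jax.numpy as jnp

            def final_beam_size(k, length=1.0, sigma_x=1e-3, sigma_xp=1e-4):
                # Thin-lens quadrupole of strength k followed by a drift, acting on a
                # 2x2 phase-space covariance matrix; returns the final rms beam size.
                quad = jnp.array([[1.0, 0.0], [-k, 1.0]])
                drift = jnp.array([[1.0, length], [0.0, 1.0]])
                m = drift @ quad
                cov = m @ jnp.diag(jnp.array([sigma_x**2, sigma_xp**2])) @ m.T
                return jnp.sqrt(cov[0, 0])

            dsize_dk = jax.grad(final_beam_size)  # gradient of the observable w.r.t. the design parameter
            print(final_beam_size(1.5), dsize_dk(1.5))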

        Speaker: Nikita Kuklev (Fermi National Accelerator Laboratory)
      • 298
        Fast archiving for BPM data at ALS-U

        The Advanced Light Source Upgrade (ALS-U) is a major upgrade project for the existing light source at Lawrence Berkeley National Laboratory. There is growing interest in the community in employing ML/AI methods for predictive analysis, optimization, and fault analysis. To enable those methods, a rich dataset must be available and integrated into the control system. This project aims to collect, store, and provide methods to retrieve, initially, BPM data from the ALS-U Storage Ring, using an additional, passive node connected to the Fast Orbit Feedback network. The data rate from the BPMs alone is on the order of 1 Gbps (updating at 10 kHz), with the requirement that the system store the data continuously for one week, totaling dozens of TB. This paper describes the conceptual design and prototype details of the Fast Archiver. The authors are confident that the archiver can easily be extended to archive other useful ALS-U metrics, such as power supply setpoints and monitoring data.
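
        A back-of-the-envelope check of the storage figure quoted above (only the aggregate 1 Gbps rate is used; the exact channel count and sample width are not given in the abstract):

            bits_per_second = 1e9                  # ~1 Gbps aggregate BPM data rate at 10 kHz
            seconds_per_week = 7 * 24 * 3600
            total_terabytes = bits_per_second / 8 * seconds_per_week / 1e12
            print(f"{total_terabytes:.0f} TB per week of continuous storage")  # ~76 TB, i.e. dozens of TB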

        Speaker: Vamsi Vytla (Lawrence Berkeley National Laboratory)
      • 299
        Fermilab's control system development with digital twin

        Control system development is often the last thing considered when designing and building new equipment, e.g. a new detector or a superconducting RF LINAC; however, once the new equipment is installed, the control system is the first thing required to be operational for testing. Due to frequent delays in building new equipment and project deadlines, control system development and testing are often curtailed. A way to alleviate this problem is to simulate the control system, though this is challenging for complex systems.

        The Fermilab PIP-II (Proton Improvement Plan II) project is being constructed at Fermilab to deliver 800 MeV protons with >1 MW of beam power, replacing the present LINAC for the remainder of the existing accelerator complex. The new LINAC consists of a warm front end (WFE), 23 superconducting RF cryomodules (of 5 types), and a beam transfer line (BTL) to the existing complex.

        The accelerator physics group has a parallel project to create a digital twin (DT) of the PIP-II accelerator. We have coupled the EPICS controls to this DT and are developing both the DT and EPICS software in parallel. This will allow us to develop the EPICS software framework, the HMIs, sequences, high level physics applications, and other services for use in a fully functional control system.

        This presentation will detail the work that we have performed to date and show demonstrations of controlling and monitoring the status of the accelerator, as well as future plans for this work.
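
        A minimal sketch of the style of coupling described above, assuming pyepics-style Channel Access calls; the PV names are hypothetical placeholders rather than actual PIP-II channels:

            from epics import caget, caput, camonitor

            caput("DT:WFE:QUAD01:I_SET", 12.5)      # drive a setpoint in the digital twin
            print(caget("DT:BTL:BPM01:XPOS"))       # read back a simulated beam position

            def on_update(pvname=None, value=None, **kwargs):
                print(f"{pvname} -> {value}")

            camonitor("DT:BTL:BPM01:XPOS", callback=on_update)  # monitor exactly as during operations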

        Speaker: Pierrick Hanlet (Fermi National Accelerator Laboratory)
      • 300
        Field deployment and iterative enhancement of the dish structure qualification (DiSQ) software for SKA-Mid

        As part of the construction of the SKA-Mid telescope in South Africa’s Karoo desert, each of the 133 new mid-frequency radio dish structures, supporting a 15m diameter dish, must undergo a thorough qualification process before they are integrated into the array. To support this work, the SKAO Wombat team has developed the Dish Structure Qualification (DiSQ) software: a tailored suite of tools designed to interact with the dish structure’s PLC-based control system via an OPC-UA interface. DiSQ comprises a user-focused engineering GUI, a synchronous Python API for automated testing with bespoke scripts, and a high-performance data logger that captures engineering parameters in HDF5 format. Since 2024, DiSQ has been successfully deployed during testing and commissioning activities by Dish Structure engineers, operating in the field on the first delivered dish structures. Its modular design enabled rapid adaptation to differences between simulation environments and real hardware, with updates informed by continuous feedback from SKAO and SARAO Dish Structure engineers. This paper presents the current status of DiSQ, highlights lessons learned from deployment, and details enhancements made to improve usability, resilience, and compatibility with evolving control interfaces. DiSQ’s evolution exemplifies the value of iterative development and close collaboration between software development teams and end-users in delivering robust tools for complex scientific engineering tasks.
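
        For illustration, a generic OPC-UA read of an engineering parameter using a python-opcua client; the endpoint URL and node identifier are hypothetical, and DiSQ's own API is not shown:

            from opcua import Client

            client = Client("opc.tcp://dish-structure-plc.example.org:4840")
            client.connect()
            try:
                node = client.get_node("ns=2;s=DishStructure.Azimuth.ActualPosition")
                print(f"azimuth = {node.get_value():.3f} deg")
            finally:
                client.disconnect()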

        Speaker: Mr Ulrik Pedersen (Observatory Sciences Ltd, SKA Observatory)
      • 301
        First phase of control system for compact Muon Linac at J-PARC

        A muon linear accelerator (Muon Linac) for the muon g-2/EDM experiment is currently under construction at the Japan Proton Accelerator Research Complex (J-PARC). The objective of this project is to accelerate thermal muons (25 meV at 300 K) to 212 MeV, marking the world’s first implementation of muon acceleration.
        Development of the control system for the Muon Linac began in 2024, with the implementation of the Ultra-Slow Muon (USM) section -- the initial acceleration stage up to 5.6 keV -- nearly completed in April 2025.
        The system adopts the standard EPICS framework and features a compact architecture consisting of (a) a QNAP NAS for disk storage and LDAP-based user authentication, (b) two operator terminals, and (c) two commercial micro servers serving as the EPICS IOC and the archiver server, respectively.
        Core functionalities of the control system are scheduled for verification during May and June, followed by beam commissioning of the USM section in December 2025.
        This paper reports on the status of the control system development for the USM section, as part of the first phase of the Muon Linac project. Toward the full commissioning of the entire Muon Linac in 2028, the prospects for extending the present control system to the main Linac components are discussed.

        Speaker: Shuei Yamada (High Energy Accelerator Research Organization)
      • 302
        Flexible and advanced control of beam dynamics using optical stochastic cooling

        Optical stochastic cooling (OSC) is a state-of-the-art beam cooling technology first demonstrated in 2021 at the IOTA storage ring at Fermilab's FAST facility. A second phase of the research program is planned to run in 2026 and will incorporate an optical amplifier, potentially with turn-by-turn configurability, to enable significantly increased cooling rates and greater operational flexibility.

        In addition to beam cooling, an OSC system can be configured to enable advanced control over the phase space of the beam. Fast control over the optical gain and phasing of the OSC system, along with properties of the integrated accelerator systems such as design of the magnetic lattice, can enable custom and potentially exotic beam distributions. To take advantage of this potential, a beam controls system based on reinforcement learning and edge AI is being developed within the scope of the ongoing OSC physics program at IOTA. This system is named Optical Cooling and Control for Advanced Manipulations (OCCAM).

        This contribution will discuss the status of OCCAM within the development of the next phase of the experimental OSC program. Proof-of-principle demonstrations using a simplified physics engine will be presented, along with the plans to improve the fidelity of the simulations and to develop hardware controls for use in the experiment.

        Speaker: Michael Wallbank (Fermi National Accelerator Laboratory)
      • 303
        Flexible containerised deployment of EPICS IOCs via CI/CD

        Managing a large number of EPICS Input/Output Controllers (IOCs) in a control system presents significant challenges in configuration, deployment, and maintenance. This poster introduces a streamlined deployment pipeline that efficiently manages IOC lifecycles within containerized environments, leveraging CI/CD practices for robust configuration management and automated updates. A key feature of this system is its fine-grained deployment control: it supports both the rollout of revised IOC images across all instances and the selective deployment of updated database (db) files to individual IOC instances without impacting others in the same Docker stack. This flexibility enables rapid, low-risk updates and simplifies the orchestration of complex EPICS-based infrastructures, ensuring scalability, maintainability, and operational reliability.

        Speaker: Aqeel AlShafei (ISIS Neutron and Muon Source)
      • 304
        Geoff: applications and developments at GSI and CERN

        The complexity of the CERN and GSI/FAIR accelerator facilities requires a high degree of automation to maximize beam time and performance for physics experiments. GeOFF, the Generic Optimization Framework & Frontend, is an open-source tool developed within the EURO-LABS project by CERN and GSI to streamline access to classical and AI-based optimization methods. It provides standardized interfaces for optimization problems and utility functions to speed up implementation. Plugins are independent packages with their own dependencies, allowing scaling from simple prototypes to complex state machines that communicate with devices in different timing domains. This contribution presents GeOFF’s design, features, and current applications.

        At GSI, multi-objective Bayesian optimization was applied to SIS18 multi-turn injection, building a Pareto front from experimental data. At CERN, GeOFF and ML/AI contributed to a record ion beam intensity for the LHC in 2024 through LEIR and SPS optimization. In addition, GeOFF underwent major updates in 2025, aligning it with the latest developments in Python-based numerical and machine-learning software.
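
        A minimal sketch of the kind of optimisation loop that GeOFF standardises, shown here with scikit-optimize rather than GeOFF's own plugin interface; the objective is a hypothetical stand-in for a machine measurement:

            from skopt import gp_minimize

            def objective(x):
                # Placeholder for a beam measurement, e.g. losses as a function of two injection settings
                septum_angle, orbit_bump = x
                return (septum_angle - 0.3) ** 2 + (orbit_bump + 0.1) ** 2

            result = gp_minimize(objective, dimensions=[(-1.0, 1.0), (-1.0, 1.0)],
                                 n_calls=30, random_state=0)
            print("best settings:", result.x, "best value:", result.fun)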

        Speaker: Jutta Fitzek (GSI Helmholtz Centre for Heavy Ion Research)
      • 305
        Graphical interfaces and integration tools for particle accelerator digital twins

        Particle accelerator modeling tools are largely based on open-source codes that use specialized command-line interfaces. This makes them well suited to high performance computing workflows, but difficult to use proficiently. Moreover, the associated customized computing environments make models difficult to share. RadiaSoft has developed a browser-based toolkit for interfacing with accelerator simulation codes and constructing end-to-end simulations. In a parallel partnership with SLAC, we are developing infrastructure for deploying our models as online models integrated with LUME. Additionally, we have developed infrastructure for connecting our GUIs directly to beamlines through a control-system layer, bluesky. Our goal is to provide integration services for deploying models on the machine using either a physics back-end or a machine learning back-end. Here we introduce our toolkits, provide example demonstrations, and describe our plans for machine learning integration.
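
        A minimal example of the bluesky control-system layer mentioned above, using the simulated devices shipped with ophyd (this illustrates the layer itself, not the RadiaSoft toolkit):

            from bluesky import RunEngine
            from bluesky.plans import scan
            from ophyd.sim import det, motor  # simulated detector and motor for testing

            RE = RunEngine({})
            RE(scan([det], motor, -1, 1, num=11))  # step the motor and record the detector reading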

        Speaker: Jonathan Edelen (RadiaSoft (United States))
      • 306
        High voltage waveform monitoring

        The Advanced Photon Source (APS) recently completed a significant upgrade to its storage ring, replacing all existing components with new ones. One of the newly introduced systems is the waveform monitoring system: a 2U rackmount chassis with 8 ADC inputs, an MRF event link, and 16 channels of TTL I/O. It is an FPGA-based 8-channel 4 GS/s digitizer that monitors the decoherence and injection high voltage pulser waveforms. Its main function is to qualify each pulse and decide whether or not to proceed with injection. This paper presents the development and features of this system in detail.

        Speaker: Dan Paskvan (Argonne National Laboratory)
      • 307
        HL-LHC Inner Triplet String controls and software architecture

        The High Luminosity-Large Hadron Collider (HL-LHC) project at CERN aims to increase the integrated luminosity of the Large Hadron Collider (LHC). As an important milestone of the HL-LHC project, the scope of the Inner Triplet (IT) String test facility is to represent the various operation modes and the controls environment to study and validate the collective behaviour of the different systems. As for the HL-LHC, the IT String operation requires a wide-ranging set of control systems and software for magnet powering, magnet protection, cryogenics, insulation vacuum, and the full remote alignment.
        An overview of the control systems and their interfaces is presented, with a particular focus on the software layers essential for the powering and magnet protection tests during the IT String validation program. Ensuring integration of the new HL-LHC device types and their operational readiness requires close collaboration between development teams, equipment owners, and the IT String operation team, and is validated by dedicated Dry Run tests. These tests, which aim to validate the functionality of the new device types within the control and software applications, are described in detail, with the goal of achieving a smooth transition to the magnet powering phase. The IT String facility presents a unique opportunity to validate all control and software layers, and their operation in the High Luminosity era, ahead of the HL-LHC hardware commissioning (HWC) within the LHC complex.

        Speaker: Sebastien Blanchard (European Organization for Nuclear Research)
      • 308
        Image processing with ML for automated tuning of the NASA Space Radiation Laboratory beam line

        Research conducted at the NASA Space Radiation Laboratory (NSRL) seeks to increase the safety of space exploration. The NSRL uses beams of heavy ions extracted from Brookhaven's Booster synchrotron to simulate the high-energy cosmic rays found in space. To accomplish this, the source machines provide many potential beam species, ranging in atomic number (Z) from 1 (hydrogen/protons) to 83 (bismuth), and beams as heavy as uranium have been delivered. To test large-area samples, beams can be shaped to the user's specifications, from a small-format 1-cm-radius circular beam up to 20-cm by 20-cm uniform-area rectangular beams. This requires a complex transfer line of 24 magnets, including 9 quadrupole and 2 octupole magnets. Given the wide range of beam rigidity and size possibilities, operators tune the optics by hand while observing the beam profile on a phosphor screen imager. Successful tests have been conducted using a machine learning (ML) workflow for tuning. We capture the beam image, then process and parameterize it to assess centroid, shape, tilt, edge thickness, and uniform-area size. These parameters are fed to the Badger software stack, to avoid re-inventing a UI, using an Xopt-based Bayesian optimization algorithm for iterative tuning. The requirement to start from an image, which can be very noisy, and to quantify it makes the workflow more complex than the standard ML approach of feeding readings from traditional beam instrumentation to an algorithm.
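
        A simplified sketch of the image-parameterization step described above, written with generic NumPy/SciPy routines and an illustrative noise threshold (the actual NSRL processing chain is more involved); quantities like these would then be passed to Badger/Xopt as optimization objectives:

            import numpy as np
            from scipy import ndimage

            def parameterize(image, threshold=0.1):
                img = image.astype(float) - np.median(image)  # crude background subtraction
                img[img < threshold * img.max()] = 0.0        # suppress low-level noise
                cy, cx = ndimage.center_of_mass(img)          # beam centroid
                y, x = np.indices(img.shape)
                w = img / img.sum()
                rms_x = np.sqrt(np.sum(w * (x - cx) ** 2))    # horizontal rms size
                rms_y = np.sqrt(np.sum(w * (y - cy) ** 2))    # vertical rms size
                return {"centroid": (cx, cy), "rms": (rms_x, rms_y)}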

        Speaker: Levente Hajdu (Brookhaven National Laboratory)
      • 309
        Implementation and scalability analysis of TSPP for Vacuum Framework

        SCADA (Supervisory Control and Data Acquisition) systems traditionally acquire data from PLCs through polling. The Time Stamped Push Protocol (TSPP), on the other hand, enables a PLC to timestamp and push data to the SCADA at its own discretion. The Vacuum Control Systems for CERN accelerators are primarily built on a dedicated Vacuum Framework, which relies on polling and is therefore subject to its limitations. Implementing TSPP would thus be an important improvement.
        TSPP needs software on the PLC – a Data Manager - to determine what data to push, when to push it, and how to package it into the correct format. Due to its particular data model, implementing TSPP for the Vacuum Framework required the development of a dedicated Data Manager. Additionally, while most current systems with TSPP have a single PLC per SCADA instance, Vacuum Framework applications often involve hundreds. Given that no data was available on the impact that large numbers of PLCs pushing data to a SCADA system might have, extensive testing was required. In particular, the relationship between server load and the effective rate of received values was studied to assess performance at scale.
        This paper details the implementation of TSPP for the Vacuum Framework, its Data Manager design, and the testing carried out to validate the protocol and assess its performance limits in order to ensure a smooth deployment.

        Speaker: Rodrigo Ferreira (European Organization for Nuclear Research)
      • 310
        Improving fly-scan accuracy via software-based trajectory correction at the TARUMÃ experimental station

        The TARUMÃ experimental station, part of the CARNAÚBA* beamline at the SIRIUS synchrotron light source (LNLS/CNPEM), employs a high-flux, sub-micrometric X-ray probe for fast, multimodal scans (XRF, XRD, STXM, etc.) using a three-axis piezoelectric flexure stage. Scans are executed in fly-scan mode through pre-defined trajectory files. During beamline operation, trajectory-following errors in the nanopositioning stage were identified, limiting the spatial resolution of X-ray images. Although the stage controller supports feedforward vectors, a practical and non-invasive method was preferred to avoid interrupting routine experiments and allow for continuous refinement. The proposed solution leverages experimental data acquired during regular scans: repeated trajectories, already present in typical user measurements, are used to collect real position feedback via high-resolution capacitive encoders. These profiles are averaged, filtered, and scaled to generate a corrected trajectory, which is then saved as a new trajectory file. This method requires no hardware changes and was easily integrated into the beamline workflow. It reduced following errors by approximately 200 nm in a 10×10 μm² scan, with significant improvements in image quality. The approach enables trajectory accuracy to be incrementally improved over time, directly from user data, and has shown promising results for multiple scan configurations.
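
        A minimal sketch of the average-filter-scale idea described above, in plain NumPy; the smoothing kernel and array names are illustrative, not the beamline implementation:

            import numpy as np

            def corrected_trajectory(setpoints, measured_runs, kernel=5):
                # Mean following error over repeated scans of the same trajectory
                avg_error = np.mean(np.asarray(measured_runs), axis=0) - setpoints
                # Low-pass filter so that only reproducible deviations are compensated
                smooth = np.convolve(avg_error, np.ones(kernel) / kernel, mode="same")
                # Pre-compensated trajectory, to be written out as a new trajectory file
                return setpoints - smooth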

        Speaker: Antonio Carlos Piccino Neto (Brazilian Center for Research in Energy and Materials)
      • 311
        Improving performance and reliability of a Python-based EPICS IOC by switching to pyDevSup

        The power supplies used for the FOFB correctors at SIRIUS expose only electrical current values, making it necessary to convert to and from beam kick values. To take advantage of the canonical Python implementation of this conversion, a separate IOC was developed using pyepics and PCASPy. This technology stack imposed some limitations: the update rate had to be limited and, even then, one independent instance of the IOC was required per ring sector (20 in total) to avoid PV timeouts and disconnects; disconnection events when one of the power supplies was down also led to cascading reconnection issues and memory corruption.
        This motivated us to pursue more modern alternatives for integrating Python code into an IOC, specifically one that could take advantage of the Channel Access (CA) integration already present in EPICS databases, avoiding any of the bridges between CA and Python. We evaluated the pythonSoftIOC project and the pyDevice and pyDevSup support modules, which we present in this work. We settled on pyDevSup due to the development experience it provided.
        This work also presents benchmarks comparing the performance gains with the new IOC and aims to explore the architecture differences that enabled them.
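
        For context, the conversion itself is a simple function; the sketch below assumes, purely for illustration, a linear calibration constant, and omits the pyDevSup record bindings that would invoke it:

            KICK_PER_AMPERE = 15e-6  # rad/A, illustrative calibration factor only

            def current_to_kick(current_a: float) -> float:
                """Convert a corrector power-supply current to a beam kick (illustrative, linear)."""
                return current_a * KICK_PER_AMPERE

            def kick_to_current(kick_rad: float) -> float:
                """Inverse conversion, used when writing kick setpoints back to the power supply."""
                return kick_rad / KICK_PER_AMPERE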

        Speaker: Érico Nogueira Rolim (Brazilian Synchrotron Light Laboratory)
      • 312
        Integrated denoising for improved stabilization of RF cavities

        Typical operational environments for industrial particle accelerators are less controlled than those of research accelerators. This leads to increased levels of noise in electronic systems, including radio frequency (RF) systems, which make control and optimization more difficult. This is compounded by the fact that industrial accelerators are mass-produced with less attention paid to performance optimization. However, growing demand for accelerator-based cancer treatments, imaging, and sterilization in medical and agricultural settings requires improved signal processing to take full advantage of available hardware and increase the margin of deployment for industrial systems. In order to improve the utility of RF accelerators for industrial applications we have developed methods for removing noise from RF signals and characterized these methods in a variety of contexts. Here we expand on this work by integrating denoising with pulse-to-pulse stabilization algorithms. In this poster we provide an overview of our noise reduction results and the performance of pulse-to-pulse feedback with integrated ML based denoising.
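
        As a simple illustration of signal denoising, a naive frequency-domain low-pass filter is sketched below; it stands in for the ML-based methods discussed above, and the sampling rate and cutoff are illustrative:

            import numpy as np

            def lowpass_denoise(signal, fs=100e6, cutoff=5e6):
                spectrum = np.fft.rfft(signal)
                freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
                spectrum[freqs > cutoff] = 0.0  # discard high-frequency noise content
                return np.fft.irfft(spectrum, n=len(signal))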

        Speaker: Jonathan Edelen (RadiaSoft (United States))
      • 313
        Integrating CODAC in ITER Plant Simulator

        The use of Digital Control Systems (DCS) with process simulators for engineering purposes, control system validation, virtual commissioning, or operator training is increasingly demanded in large and increasingly complex industrial projects.
        Coupling a DCS with a process simulator requires support for specific functionality: the ability to operate on a simulated time base and to save/restore states in order to load different scenario starting points or to jump back in time, which is traditionally achieved by emulating or simulating the DCS.
        The ITER Control, Data Access and Communication (CODAC) system uses EPICS at its core, which is not designed to operate with such constraints. In the frame of the ITER Plant Simulator project, we leveraged advanced Linux features (libfaketime, namespaces, and CRIU) combined with a custom interface between CODAC and the simulator to meet these requirements. This approach allows integration of a wide range of CODAC tools (HMI, Archive, Alarms, Logbook, Operations Sequencer), synchronized with the simulator, with a lightweight and efficient solution.

        Speaker: Ralph Lange (ITER Organization)
      • 314
        Large Language Model (LLM) tool to improve autonomous operation at TEX facility

        In this work we report on the integration of a Large Language Model (LLM) to improve the operation of a particle accelerator facility such as TEX (TEst-stand for XBand) at the Frascati National Laboratories (LNF) of the Italian Institute for Nuclear Physics (INFN).
        The integration of an LLM through the Cheshire Cat framework presents a transformative approach to enhancing operational capabilities for operators and users. This innovative tool leverages the advanced capabilities of artificial intelligence to assist operators in real-time decision-making and problem-solving. The LLM can interpret and analyze data, suggest optimal operational strategies, and facilitate communication across different subsystems, thereby improving coordination among teams.
        Thanks to the Cheshire Cat's ability to manage different types of memory, including episodic, declarative, and procedural memory, it is possible to develop a highly specialized tool with targeted knowledge of procedures, troubleshooting, and the physical phenomena relevant to accelerator facilities. This versatility allows the tool to be integrated with control system frameworks such as EPICS, thereby facilitating direct interaction with the accelerator itself. The integrated approach not only optimizes daily operations but also leverages historical information and best practices, enhancing the efficiency and safety of operations in the field of particle physics.

        Speaker: Stefano Pioli (Istituto Nazionale di Fisica Nucleare, Laboratori Nazionali di Frascati)
      • 315
        Laser Megajoule facility status report

        The Laser MegaJoule, a 176-beam laser facility developed by CEA, is located near Bordeaux. It is part of the French Simulation Program, which combines improvement of theoretical models used in various domains of physics and high performance numerical simulation. It is designed to deliver about 1.4 MJ of energy on targets, for high energy density physics experiments, including fusion experiments.

        The LMJ technological choices were validated on the LIL, a scale-1 prototype composed of one bundle of 4 beams. The first bundle of 8 beams was commissioned in October 2014 with the implementation of the first experiment on the LMJ facility. The operational capabilities have been increasing gradually every year and will continue to do so until full completion by 2026. By the end of 2025, 22 bundles of 8 beams will be assembled (full scale) and 19 bundles are expected to be fully operational.

        As the assembly of the laser bundles comes to an end and before the facility enters full operation, we present a status report on the LMJ/PETAL installation. We will present the major software developments of the past two years, the latest experimental results, and the new challenges in keeping this facility at its best operating level.

        Key words: Laser facility, LMJ, PETAL, Control System

        Glossary:
        LMJ: Laser MegaJoule
        CEA: Commissariat à l’Energie Atomique et aux Energies Alternatives
        LIL: Ligne d’Intégration Laser

        Speaker: Dr Stephanie PALMIER (Commissariat à l'Énergie Atomique et aux Énergies Alternatives)
      • 316
        LCLS-II cavity heater controls: design, operation, and accuracy

        SLAC National Accelerator Laboratory's upgrade to LCLS-II, featuring a 4 GeV superconducting linear accelerator with 37 cryomodules and two helium refrigeration systems supporting 4 kW at 2.0 K, represents a significant advancement in accelerator technology. Central to this upgrade is a 2 K system with five stages of centrifugal cold compressors, operating across a pressure range from 26 mbar suction to 1.2 bara discharge*. These dynamic centrifugal compressors have a limited operational envelope, so maintaining stable pressure and flow is critical to their operation. This paper describes how SLAC achieved stable LINAC pressures in each of the 37 cryomodules using electrical heaters that actively compensate for changes in RF power to maintain constant flow through the system. Additionally, this paper details the power accuracy of these heaters, which is useful not only for control but also when measuring cavity efficiency.
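
        The compensation principle can be sketched as keeping the sum of RF dissipation and electric heater power constant in each cryomodule; the numbers below are illustrative, not LCLS-II settings:

            def heater_setpoint(target_total_load_w: float, rf_dissipation_w: float) -> float:
                """Electric heater power that keeps the total 2 K heat load (and hence flow) constant."""
                return max(0.0, target_total_load_w - rf_dissipation_w)

            print(heater_setpoint(40.0, 12.5))  # heater supplies the remaining 27.5 W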

        Speaker: Andrew Wilson (SLAC National Accelerator Laboratory)
      • 317
        Leveraging AI and ML for assisted experiments, data analysis, and virtual agents

        The future of synchrotron beamline operations is poised for a transformative leap with advancements in artificial intelligence (AI) and machine learning (ML). While SOLARIS National Synchrotron Radiation Centre* has yet to integrate these technologies, their potential to revolutionize experiments, data analysis, and user interactions is immense. AI-driven automation promises real-time assistance in optimizing beamline experiments, minimizing manual intervention while enhancing precision. Machine learning algorithms will unlock deeper insights from complex datasets, facilitating faster, more accurate interpretations. Additionally, intelligent virtual agents could redefine how researchers interact with beamline controls, offering predictive guidance and adaptive optimization. As SOLARIS expands its capabilities, embracing AI and ML will position it at the forefront of scientific innovation, ensuring seamless, efficient, and accessible synchrotron research for future generations.

        Speaker: Magdalena Szczepanik (SOLARIS National Synchrotron Radiation Centre)
      • 318
        Leveraging local large language models for enhanced log analysis: integrating Ollama into electronic and trouble logging systems

        Modern systems for electronic log keeping and trouble log management generate vast datasets of issues, events, solutions, and discussions. However, extracting actionable insights from this information remains a challenge without advanced analysis tools. This paper introduces an enhancement to two log programs - Electronic Log Keeping (elog) and Trouble Logging (TroubleLog) - used in the RHIC control system at Brookhaven National Laboratory. The enhancement integrates Ollama, an open-source framework for running large language models (LLMs) locally, to facilitate intelligent log analysis. We present a framework that combines MySQL database queries with Retrieval-Augmented Generation (RAG), enabling users to generate period-based summaries (e.g. daily, weekly) and retrieve topic-specific information - such as issues and solutions - through natural language queries. By indexing log data using vector embeddings and interfacing with Ollama's API, the system provides accurate, conversational responses while ensuring data privacy by avoiding external data sharing. The paper details implementation aspects, including SQL query optimization and prompt engineering, and evaluates performance using real-world log data sets. Results demonstrate improved usability and significant reductions in manual analysis time. This work demonstrates the potential of local LLMs in domain-specific log management, offering a scalable and privacy-preserving solution for accelerator control systems.
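
        A minimal sketch of the retrieval-augmented query step, assuming Ollama's local HTTP API on its default port and a hypothetical model name; the retrieved log entries would come from the MySQL/vector-store search described above:

            import requests

            retrieved_logs = "...top-k log entries returned by the vector search..."
            prompt = ("Summarize the following RHIC operations log entries, "
                      "listing issues and their solutions:\n" + retrieved_logs)

            resp = requests.post("http://localhost:11434/api/generate",
                                 json={"model": "llama3", "prompt": prompt, "stream": False})
            print(resp.json()["response"])  # conversational answer generated locally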

        Speaker: Wenge Fu (Brookhaven National Laboratory)
      • 319
        Leveraging playback of collected data through the EPICS areaDetector framework for development of streaming and real-time processing workflows

        At the National Synchrotron Light Source II (NSLS-II), as the volume of data collected by detectors continues to rise, it has become important to leverage streaming, real-time processing, and analysis for data reduction. In addition, real-time analysis can provide immediate insight into experimental results, giving users invaluable information on how to proceed with their beam time. However, streaming and real-time processing workflows are often tightly coupled with the data acquisition (DAQ) system and require scientific data as input to confirm accuracy, and so can be challenging to develop without dedicated beam time. At NSLS-II, our data access service, Tiled, offers API access to the data produced during experiments without directly touching the filesystem. In this project, we leverage Tiled to load previously collected data into an EPICS areaDetector application and, once loaded, play it back to allow streaming pipelines to be tested without access to hardware. We demonstrate the benefits of using the Tiled service to load data as opposed to reading files, as well as the features of the software that allow new datasets to be generated by applying filters to existing ones. We show how the software can simulate TTL triggers to mimic the real beamline DAQ setup, and we provide examples of streaming and real-time analysis techniques that were prototyped using this tool.
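
        A minimal sketch of pulling previously collected frames through Tiled; the server URL and catalog path are hypothetical placeholders:

            from tiled.client import from_uri

            client = from_uri("https://tiled.example-facility.gov")
            frames = client["beamline"]["scan_12345"]["primary"]["data"]["detector_image"][:]
            # `frames` is now an in-memory array that can be replayed through the
            # areaDetector playback IOC, with simulated TTL triggers pacing the frames.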

        Speaker: Jakub Wlodek (Brookhaven National Laboratory)
      • 320
        Libera instruments integration with control systems

        Libera instruments have been used with various control systems for several years. In line with the latest security and functionality upgrades, the Libera control system interfaces have also been upgraded. The EPICS interface in Libera instruments has been upgraded to support the latest EPICS Base version 7.0.9, which enables users to use the PVA protocol and retrieve more signal data in a single call. Group PVs, allowing atomic access to all signal components, were also added. Furthermore, related parameters, such as sensors, can already be grouped on the Libera side and provided to PVA clients in a single PV. The TANGO interface has been upgraded to version 9.5. It supports flexible configuration, where DeviceClasses can be configured for each board type individually. The interface has also been extended with TANGO alarm and logging functionality. Both interfaces, EPICS and TANGO, can run on the Libera instrument, or they can now be compiled and run from an external server station. This network architecture enables easier maintenance and upgrades. This paper details all recent updates and improvements to the Libera control system interfaces and presents possible use cases.

        Speaker: Aleš Kete (Instrumentation Technologies (Slovenia))
      • 321
        Logging infrastructure for EPICS-based control systems using Loki and Promtail

        To support multiple scientific facilities at SLAC, a modern logging system capable of high message throughput and easy record filtering is required. A new logging system deployed at SLAC meets these requirements by blending an existing EPICS logging server with open-source technologies for log storage and retrieval. To save log data, Loki was chosen as the data storage technology. Promtail was selected to push existing log files generated by various EPICS clients into Loki. To encourage increased accelerator operator engagement, multiple interfaces are provided for interacting with logs. Grafana dashboards offer a user-friendly way to build displays with minimal code, while a custom PyQt-based user interface displays the results of direct queries to Loki in a table-based display. A wrapper script around LogCLI called qlog provides a command-line interface for interacting with the logs. The rationale behind these decisions and their integration into our controls infrastructure will be considered.

        Speaker: Jesse Bellister (SLAC National Accelerator Laboratory)
      • 322
        Long short-term memory of recurrent neural network to the analysis of temperature rise on the production target in Hadron Experimental Facility of J-PARC

        The Hadron Experimental Facility (HEF) is designed to handle the intense slow-extraction proton beam from the 30-GeV Main Ring (MR) of the Japan Proton Accelerator Research Complex (J-PARC). The production target in HEF operates under severe conditions, in which its temperature periodically varies from 30 to over 300 °C due to the heat deposited by the irradiating beam. In the long term, a careful evaluation of damage to the target and of its lifetime is very important. If the target temperatures can be accurately predicted from existing data, including beam intensity, duration of beam extraction, and beam position, it becomes possible to verify the cumulative damage by comparing the predicted temperature rise with the measured one. The predicted temperature rise was initially calculated from the existing data using linear regression with the machine learning library scikit-learn. However, the linear-regression predictions were not fully satisfactory due to changes in beam optics and accelerator conditions. Therefore, an enhanced prediction method for the temperature rise of the production target has been developed using Long Short-Term Memory (LSTM), a type of Recurrent Neural Network (RNN) architecture. A systematic study was also conducted to investigate the effects of hyperparameters, including the sequence length and hidden-layer size. This paper reports in detail on the status of the system for predicting the temperature rise of the production target using the LSTM analysis.
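
        A minimal Keras sketch of an LSTM regressor of the general kind described above; the sequence length, feature set, and layer sizes are illustrative:

            import tensorflow as tf

            seq_len, n_features = 60, 3  # e.g. beam intensity, spill duration, beam position
            model = tf.keras.Sequential([
                tf.keras.layers.Input(shape=(seq_len, n_features)),
                tf.keras.layers.LSTM(64),
                tf.keras.layers.Dense(1),  # predicted temperature rise of the target
            ])
            model.compile(optimizer="adam", loss="mse")
            # model.fit(X_train, y_train, validation_split=0.2, epochs=50)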

        Speaker: Keizo Agari (High Energy Accelerator Research Organization)
      • 323
        Machine learning for ISIS Controls

        Machine learning methods have demonstrated significant promise when applied in the context of accelerator controls. This work highlights ongoing efforts to use ML methods to improve and optimise the operation of the accelerator. We present current applications in anomaly detection, optimisation, and digital twinning. As well as control applications, we present the machine learning operations (MLOps) work required for rapid prototyping and reliable, repeatable deployment. Finally, we discuss the current and future impact of ML on the ISIS control system.

        Speaker: Mateusz Leputa (ISIS Neutron and Muon Source)
      • 324
        Machine learning–based longitudinal phase space control for X-ray free-electron laser

        Precise control of the longitudinal phase space (LPS) in an X-ray free-electron laser (XFEL) is critical for optimizing the beam quality and X-ray pulse properties required by the experimental stations. We present results of using machine learning techniques for LPS shaping and control with Bayesian optimization.

        Speaker: Zihan Zhu (SLAC National Accelerator Laboratory)
      • 325
        MeerKAT antenna positioner emulator test bench project

        The MeerKAT Antenna Positioner Emulator (APE) project is being built to imitate the functionality of the antenna positioner servo system (AP). The AP is the mechanical component mainly responsible for the mounting and pointing of the antenna structure on the MeerKAT radio telescope. The project aims at fault finding, obsolescence mitigation, software debugging, and technical training. APE will use existing MeerKAT AP building blocks with a new mechanical design. The project will be designed for remote access, which involves network configuration. Internal interfaces include monitoring of the electrical and internal infrastructure. External interfaces include connections to the Control And Monitoring software, the time and frequency reference (TFR), development servers, and on-site hardware. The project also covers electronics design review, cable manufacturing, power supply considerations, and mechanical aspects such as load tests and weight distribution. Project management activities include discussions on asset transfers, procurement, foundation design, and remote access considerations. Systems engineering involves documentation, discussions around the user interface, and coordination with various stakeholders.

        Speaker: Buntu Ngcebetsha (South African Radio Astronomy Observatory)
      • 326
        Modernizing FPGA development using the DESY FPGA firmware framework

        Brookhaven National Laboratory (BNL) is currently developing new hardware description language (HDL) code and embedded software for the Electron-Ion Collider (EIC) control system. Part of this effort is modernizing the development process itself, leveraging methodologies and tools that were initially targeted at the software world. These methods include effective source control and project management, modularization and rapid deployment of updated code, automated testing, and in many cases automated code generation. HDL designers additionally face unique challenges compared to software designers, particularly with vendor locking and dependency on particular tools and IP. The FPGA Firmware Framework (FWK), developed by DESY, is a set of tools that helps to both apply these modern methods and to overcome some of those unique challenges. This paper will cover the workflow, successes, and challenges faced when using the FWK. In particular, we will focus on the experience using this workflow to develop a customizable delay generator IP targeting a Zynq FPGA.

        Speaker: David Vassallo (Brookhaven National Laboratory)
      • 327
        Modernizing hardware and software for LANSCE EVR with FPGA and Real-Time Linux

        This paper describes the approach to modernizing the Los Alamos Neutron Science Center's (LANSCE) Event Receiver (EVR) by replacing the Micro Research Finland (MRF) EVR with a Xilinx UltraScale+ Multi-Processor System on a Chip (MPSoC). The Xilinx UltraScale+ MPSoC architecture has been chosen for this project due to its use by other teams across LANSCE and throughout the industry. The EVR modernization project will utilize an open-source FPGA design, mrf-openevr, along with in-house implementations for interfacing. The EVR will: produce timing patterns from Event Generators (EVG) via an event link within existing time constraints, manage new and recurring entries into the Per Cycle Data Buffer (PCDB), and provide diagnostic tools in an easy-to-use Real-Time Linux interface. The EVR modernization project is in the evaluation stage, where minimum viable product criteria are being evaluated on development boards.
        LA-UR-25-24009

        Speaker: Zane Sauer (Los Alamos National Laboratory)
      • 328
        Modernizing legacy Python applications at LCLS

        Between the start of LCLS in 2009 and Python 2 reaching end of life in 2020, many control system tools and user interfaces were created using home-built Python 2 environments. With the end of official support for Python 2 comes a host of maintenance issues for all the legacy applications that survive to this day. This poster presents techniques for modernizing these applications, common pain points in Python 2 to 3 conversions, and advice for testing and redeployment, with recent examples from LCLS.
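
        Two of the classic pain points in such conversions, shown in their modernized form (a generic illustration, not LCLS code):

            # Python 2:  print "rate:", counts / window      (print statement; integer division)
            counts, window = 7, 2
            print("rate:", counts / window)   # Python 3: print is a function, / is true division -> 3.5
            print("rate:", counts // window)  # use // where the old floor-division behaviour is intended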

        Speaker: Zachary Lentz (Linac Coherent Light Source)
      • 329
        Modular scientific SCADA suite with Sardana and Taurus – latest developments

        Sardana* and Taurus** are community-driven, open-source SCADA solutions that have been used for over a decade in scientific facilities, including synchrotrons (ALBA, DESY, MAX IV, SOLARIS) and laser laboratories (MBI-Berlin).
        Taurus is a Python framework for building both graphical and command-line user interfaces that support multiple control systems or data sources. Sardana is an experiment orchestration tool that provides high-level hardware abstraction and a sequence engine. It follows a client-server architecture built on top of the TANGO control system***. In the last two years, significant developments have been made in both projects. Sardana has focused on enhancing continuous scans, introducing multiple synchronization descriptions to support passive elements (e.g. shutters) and detectors reporting at different rates. The configuration tool has also been extended, following the roadmap defined by the community****. Taurus has seen substantial performance gains, particularly in GUI startup times, as part of an optimization effort that started nearly three years ago. The latest improvements take advantage of new asynchronous TANGO event subscription modes*****. Continuous codebase modernization is underway, and support for Qt6 is planned for the July 2025 release.
        This presentation will overview these recent advancements in both Sardana and Taurus and outline their current development roadmap.

        Speakers: Dr Fulvio Becheri (ALBA Synchrotron (Spain)), Michal Piekarski (SOLARIS National Synchrotron Radiation Centre), Vanessa Da Silva (MAX IV Laboratory)
      • 330
        Object oriented industrial I/O

        The Los Alamos Neutron Science Center (LANSCE) has completed a significant modernization effort, migrating from the legacy RICE control system to an entirely EPICS-based infrastructure. A key enabler of this transition has been the development and deployment of modular, object-oriented Industrial I/O (IIO) architectures on National Instruments (NI) cRIO platforms. The Industrial I/O framework provides a reusable and scalable system for controlling and monitoring sensors and instruments. It is built around precompiled FPGA bitfiles accessed through NI’s C application programming interface. Where necessary, LabVIEW real-time code integrates seamlessly with EPICS IOCs. This architecture enables clear separation between control logic and hardware interfaces, supports future maintenance with minimal overhead, and accommodates both modern Linux RT cRIO and legacy VxWorks systems. The result is a flexible and resilient method for managing and improving complex control architectures across LANSCE.

        This contribution outlines how IIO enables hardware reuse by treating NI cards as modular components with shared logic, abstracting low-level FPGA interaction, and standardizing configurations through parameterized bitfiles and EPICS startup files. The poster and discussion focus on how this approach supports object-like behavior to improve the maintainability, scalability, and cross-platform deployment of EPICS-compatible systems.
        LA-UR-25-24051

        Speaker: Rocio Martin (Los Alamos National Laboratory)
      • 331
        Open source event timing system at ALS-U

        The Advanced Light Source Upgrade (ALS-U) is a major upgrade project for the existing light source at Lawrence Berkeley National Laboratory. One of the key challenges of the upgrade is that the new accelerator sections cannot operate at the existing RF frequency. The injector will operate at the current RF frequency (f1 = 499.64 MHz), while the new Accumulator Ring and Storage Ring will operate at a rationally related frequency (f2 = p/q * f1 = 500.39 MHz). As a result, the Timing Event Generator (EVG) must be synchronous to both RF frequencies and generate separate event streams for each frequency domain. To support beam transfer from the injector (f1) to the Accumulator Ring (f2), the EVG must detect the coincidence of the two frequencies and synchronize counters to the coincidence rate. Due to the extensive need for timing channels, Event Fanout (EVF) and Event Receiver (EVR) chassis were also designed as part of this effort. This paper describes the main concepts of the Dual EVG, EVF, and EVR projects, the open-source gateware, embedded software, and EPICS IOC implementations, as well as the tests and deployment at the current ALS.

        Speaker: Lucas Russo (Lawrence Berkeley National Laboratory)
      • 332
        Overview and current status of the SKA-Low Monitoring, Control and Calibration Subsystem (MCCS)

        SKA-Low is the low frequency radio telescope currently under construction in Western Australia. At its final extent, it will consist of 512 stations up to 74 km apart, each containing 256 antennas which can be used in different combinations to digitally “point” the telescope. The Monitoring, Control and Calibration Subsystem (MCCS) is responsible for performing calibration and providing local monitoring and control of all of the LFAA (Low Frequency Aperture Array) hardware components. This includes managing the allocation of resources for an observation, and the aggregation of health status. SKAO has adopted the Tango control system framework, and the MCCS software comprises upwards of 18 different Tango devices, some of which are replicated dozens of times for a single station. This complexity poses a significant challenge to computing resources and reliability when considering how to scale up the system, first to the 16 stations to be constructed and integrated by January 2026, then 68 stations at the end of 2026, and targeting 307 stations by mid-2028. This paper will describe the MCCS architecture, report on our latest performance profiling, and discuss how we are preparing for the AA2 construction milestone, which will need to support 68 stations.

        Speaker: Emma Arandjelovic (Observatory Sciences Ltd, SKA Observatory)
      • 333
        Overview and status of the Machine Learning Data Platform (MLDP) project

        The Machine Learning Data Platform (MLDP) is a product providing full-stack support for data science, Artificial Intelligence, and Machine Learning (AI/ML) applications at particle accelerator and large experimental physics facilities. It supports AI/ML applications from front-end, high-speed acquisition of heterogeneous, time-series data, through data archiving and management, to back-end analysis. The MLDP represents a “data-science ready” platform for the diagnosis, modeling, control, and optimization of these facilities. It offers a consistent, data-centric interface to archived data, standardizing implementation and deployment of AI/ML algorithms for different facility configurations, or between facilities. The MLDP is also deployable for experimental data collection, archiving, and analysis. It can acquire and archive heterogeneous data from experimental diagnostics (e.g., images, arrays, etc.) along with control system configurations, and any metadata required for provenance. Thus, the MLDP can manage experimental data through its entire lifecycle, from acquisition and archiving, through analysis and investigation, to release and final publication. The MLDP is a public-domain product and available to the community. The archive management system is fully independent with an installation and deployment utility (see https://github.com/osprey-dcs/data-platform). We present a brief MLDP project overview then detail the status and notable achievements.

        Speaker: Christopher Allen (Osprey DCS LLC)
      • 334
        Particle accelerator simulation using GPUs in Accelerator Toolbox

        In storage ring light sources, two important performance parameters are injection efficiency and lifetime. Accelerator particle tracking in digital twins and simulators is an approach to efficiently derive these quantities and guide the optimization of the physical accelerator machine. These calculations are resource-intensive processes that often require high-performance computing. GPU usage can be an effective strategy to achieve good performance at a reasonable cost. This paper presents the current state of GPU particle tracking code implemented in Accelerator Toolbox (AT) using OpenCL or CUDA. The GPU kernel code is dynamically generated according to the lattice definition, allowing for optimal performance and selection of the symplectic integrator. We present accuracy and performance comparisons obtained with GPUs versus those obtained with AT CPU code. Trade-offs necessary to achieve optimal results are also discussed. Additionally, we discuss GPU kernel synchronization techniques needed by collective effect elements, as well as the object-oriented code structure to facilitate the integration of other GPU APIs.

        Speaker: Mr Theo Rozier (European Synchrotron Radiation Facility)
      • 335
        Performance characterisation of real-time software in C++. A real-life example of digital camera-based acquisition systems at CERN.

        The performance of real-time software is critical in accelerator control systems, where precision and reliability are essential. This paper presents a method for performance characterisation of real-time software developed in C++, using a digital camera-based acquisition system at CERN as a case study. Key performance metrics, including execution time, latency, memory footprint and network use are analysed to evaluate the system's ability to meet strict real-time and scalability constraints. In addition, the profiling of task execution serves as a means of ensuring that the software continues to adhere to agreed specified behaviour, particularly as the system evolves and undergoes changes. The impact of software architecture, multithreading strategies, and hardware optimization on overall system performance is also discussed. Finally, the tools used to extract key performance metrics are presented, emphasising their generic nature and potential use for other real-time software developed in CERN’s accelerator sector.

        Speaker: Athanasios Topaloudis (European Organization for Nuclear Research)
      • 336
        Physics application software for FRIB: from commissioning to operational excellence

        The physics application software is a critical part of the FRIB accelerator’s control and beam tuning infrastructure. Development of high-level applications (HLAs) and online modeling tools began well before initial beam commissioning to support early machine setup, diagnostics, and operational readiness. As the accelerator transitioned to routine beam delivery, a broader suite of applications was developed and iteratively refined through close collaboration between physicists, control system engineers, and software developers. These applications leverage model-based control techniques, online beam dynamics simulations, and automated optimization algorithms to enhance tuning efficiency and improve beam delivery reliability across a wide range of operating conditions. Robust data management has also become essential, enabling the capture, organization, and rapid access to operational and diagnostic data critical for real-time decision-making and post-run analysis. This paper presents an overview of the current software ecosystem, highlights key applications for lattice characterization and beam tuning, and outlines future directions, including expanded automation, tighter integration with machine learning frameworks, and improvements in scalability and maintainability.

        Speaker: Tong Zhang (Facility for Rare Isotope Beams)
      • 337
        Plans and strategy for edge AI/ML at the Electron-Ion Collider at Brookhaven National Laboratory

        Scheduled to begin operations in 2035, the Electron-Ion Collider (EIC) is being built at Brookhaven National Laboratory (BNL) and will be the only operating particle collider in the United States. It may also be the only large collider built in the world in the next 20-30 years, during the “Age of Artificial Intelligence (AI)”. Recognizing the potential for AI and machine learning (ML) to enhance operations and create more research opportunities, the EIC is being envisioned and designed as a large-scale, state-of-the-art, AI-ready facility. Specifically, it will support three core areas of AI/ML capabilities, referred to as Edge, End-to-End, and Bottom-Up. Edge capabilities are intended to address what are expected to be some of the most demanding AI/ML applications in the world in terms of timescales by anticipating the infrastructure, hardware, and local compute resources needed for success. At the same time, considerable care must be taken to ensure that these capabilities are manifested in an efficient, safe, and secure Controls ecosystem and Operations environment. We report on our plans and strategy for high-performance edge AI/ML at the EIC.

        Speaker: Linh Nguyen (Brookhaven National Laboratory)
      • 338
        PLC Integrator: A modern tool for PLC-EPICS integration at ESS

        PLC Integrator is a newly developed tool to integrate PLC-based control systems with the EPICS (Experimental Physics and Industrial Control System) framework, replacing the legacy PLC Factory at the European Spallation Source (ESS). PLC Factory depended on the ESS Controls Configuration Database (CCDB) service and REST APIs to generate PLC code and interface via Modbus TCP/IP, but its outdated Python base and the scheduled decommissioning of CCDB made its continued support unsustainable. PLC Integrator reproduces all of PLC Factory's functionality while removing obsolete dependencies, and brings native support for modern protocols such as OPC UA and Beckhoff ADS, which are critical for upcoming integration efforts like Target Remote Handling systems and Neutron Scattering Systems. Designed for sustainability, extensibility and ease of maintenance, the tool enables automation engineers to rapidly implement new features as requirements evolve. This paper describes the design motivations, implementation strategies and integration outcomes of PLC Integrator, demonstrating how it modernizes control workflows at ESS, reduces technical debt, and enhances EPICS interoperability.

        Speaker: Adalberto Fontoura (European Spallation Source)
      • 339
        Power supply data acquisition aggregator system

        The Advanced Photon Source (APS) recently completed a significant upgrade to its storage ring, replacing all existing components with new ones. One of the newly introduced systems is the power supply DAQ system. This system is responsible for receiving all the bipolar and unipolar power supply setpoints, readbacks, and status signals and for streaming the DAQ data to a double sector server at 22 kHz. This paper presents in detail the development and features of this system.

        Speaker: Dan Paskvan (Argonne National Laboratory)
      • 340
        Proton pulse charge calculation algorithm in Beam Power Limiting System at the Spallation Neutron Source

        A proton pulse charge calculation algorithm in the Beam Power Limiting System (BPLS) at the Spallation Neutron Source (SNS) was developed and implemented in an FPGA. The algorithm calculates a one-minute running average of the pulse charges and issues a fault to the Personnel Protection System (PPS) and the Machine Protection System (MPS) when a limit is reached.

        A bit-accurate model of the algorithm was first developed and tested in Matlab® and then implemented and simulated in VHDL using the Vivado® design environment. Finally, the algorithm was verified on a µTCA-based hardware platform.
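        The sketch below is a plain-Python illustration of such a sliding-window running average with a fault threshold; the pulse rate (60 Hz) and charge limit are assumed values, and the deployed algorithm is the bit-accurate FPGA implementation described above.

            # Illustrative software model of a one-minute running average with a fault flag.
            from collections import deque

            class PulseChargeAverager:
                def __init__(self, window_pulses=3600, limit_coulombs=1.5e-3):
                    # 3600 pulses ~= one minute at an assumed 60 Hz repetition rate.
                    self.window = deque(maxlen=window_pulses)
                    self.limit = limit_coulombs          # placeholder limit, not the SNS value

                def update(self, pulse_charge):
                    """Add one pulse charge; return True if the running average exceeds the limit."""
                    self.window.append(pulse_charge)
                    avg = sum(self.window) / len(self.window)
                    return avg > self.limit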

        Speaker: Miljko Bobrek (Oak Ridge National Laboratory)
      • 341
        pvAccess and virtualisation

        At the ISIS accelerators we are migrating to an EPICS control system using the pvAccess protocol, with most of our IOCs and equivalents running in containers. By default, the pvAccess protocol relies on UDP broadcasts for discovery of PVs. Issues that this reliance may cause in a containerised environment are discussed, and solutions both general and specific to Docker Swarm are presented. We discuss an open-source UDP broadcast relay tool, SnowSignal, developed for internal use within our Docker Swarm environments, and the configuration of our PVA Gateways.
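        As an illustration of working around broadcast-based discovery, the sketch below points a pvAccess client at an explicit address (for example a gateway or relay) via the standard EPICS_PVA_* environment variables, using the p4p client library; the address and PV name are placeholders.

            # Bypass UDP broadcast discovery by listing an explicit PVA search address.
            import os
            os.environ["EPICS_PVA_AUTO_ADDR_LIST"] = "NO"
            os.environ["EPICS_PVA_ADDR_LIST"] = "10.0.0.10"   # e.g. a PVA gateway or relay (placeholder)

            from p4p.client.thread import Context

            ctx = Context("pva")
            try:
                print(ctx.get("EXAMPLE:PV"))                  # placeholder PV name
            finally:
                ctx.close()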

        Speaker: Dr Ivan Finch (Science and Technology Facilities Council)
      • 342
        PvPlot: A live software oscilloscope library for accelerator control systems

        When operating a complex accelerator facility such as the Los Alamos Neutron Science Center (LANSCE), it is often necessary to observe live signal waveforms for various purposes. Traditionally, this has been done using dedicated physical oscilloscopes, whether permanently installed alongside equipment or temporarily deployed on rolling carts. At times, the screens of these oscilloscopes were even transmitted by video link over coaxial cable to secondary television monitors, which was a remarkable convenience at the time, but is considered cumbersome and limited today. With modern control system software and network infrastructure, the inconvenience of physical co-location and dedicated long-distance cabling with dedicated secondary equipment can be eliminated in favor of a flexible and dynamic distributed software approach which reduces complexity while adding significant capability. Here we present a solution using the Experimental Physics and Industrial Control System (EPICS) and Python as part of a comprehensive control system UI library that allows connection to arbitrary signal sources, simultaneous viewing from multiple remote networked locations, and instant reconfiguration or selection of alternate signal sources. Library architecture and various other available UI tools are also discussed.

        Speaker: Eric Westbrook (Los Alamos National Laboratory)
      • 343
        Real time calculations of cryogenic He properties

        The Fermilab PIP-II (Proton Improvement Plan II) project is being constructed to deliver $800\,MeV$ protons of $>1\,MW$ beam power, replacing the present LINAC and providing protons to the remainder of the existing accelerator complex. The new LINAC consists of a warm front end, 23 superconducting RF cryomodules, and a beam transfer line to the existing complex. The cryomodules (CMs) are to be tested at Fermilab's CryoModule Test Facility (CMTF).
        An important measurement in cryogenic testing is the heat load of each CM. Traditionally, at Fermilab, these measurements were made by collecting archived data offline and analyzing it. The new control system for PIP-II is being developed with the EPICS (Experimental Physics and Industrial Control System) framework, which allows us to compute the heat load in real time using the HePak library.
        We are also exploring other $He$ properties, such as flow in locations where flow meters are not available, which can likewise be calculated in real time and fed back to the cryogenics engineers.
        This paper details the real-time heat load calculation and $He$ flow software developed for CM testing at CMTF, as well as the first results from the prototype HB650 CM. Future plans for 2-phase $LHe$ flow will also be outlined.
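        At its core, a static heat-load estimate follows $Q = \dot{m}\,(h_{out} - h_{in})$; the sketch below shows that relation with a placeholder enthalpy function standing in for a property library such as HePak (in the real system the inputs would come from EPICS PVs and the result would be written back to a PV).

            # Static heat-load estimate from mass flow and an enthalpy difference.
            def helium_enthalpy(pressure_pa, temperature_k):
                """Placeholder: return specific enthalpy [J/kg] from a property library (e.g. HePak)."""
                raise NotImplementedError

            def heat_load_watts(mdot_kg_s, p_in, t_in, p_out, t_out):
                h_in = helium_enthalpy(p_in, t_in)
                h_out = helium_enthalpy(p_out, t_out)
                return mdot_kg_s * (h_out - h_in)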

        Speaker: Pierrick Hanlet (Fermi National Accelerator Laboratory)
      • 344
        Reinforcement learning approaches for parameter tuning in particle accelerators

        Recent developments at the INFN laboratories in Legnaro have demonstrated the effectiveness of Bayesian optimization in automating the tuning process of particle accelerators, yielding substantial improvements in beam quality, significantly reducing setup times, and shortening recovery times following interruptions. Despite these advances, the high-dimensional parameter space defined by numerous sensors and actuators continues to pose challenges for fast and reliable convergence to optimal configurations. This paper proposes a machine learning-based framework that combines surrogate modeling of the accelerator with reinforcement learning strategies for closed-loop optimization, with the goal of further accelerating commissioning procedures and enhancing beam performance.
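        For readers unfamiliar with the technique, the sketch below shows a toy Bayesian-optimization loop (Gaussian-process surrogate plus expected-improvement acquisition) using scikit-learn and SciPy; the objective function, bounds, and knob count are stand-ins, not the LNL machine interface.

            # Toy Bayesian optimization: GP surrogate + expected improvement (maximization).
            import numpy as np
            from scipy.stats import norm
            from sklearn.gaussian_process import GaussianProcessRegressor
            from sklearn.gaussian_process.kernels import Matern

            def objective(x):                      # placeholder for a beam-quality measurement
                return -np.sum((x - 0.3) ** 2)

            rng = np.random.default_rng(0)
            X = rng.uniform(0, 1, size=(5, 2))     # initial random settings for 2 "knobs"
            y = np.array([objective(x) for x in X])

            gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
            for _ in range(20):
                gp.fit(X, y)
                cand = rng.uniform(0, 1, size=(256, 2))
                mu, sigma = gp.predict(cand, return_std=True)
                best = y.max()
                z = (mu - best) / np.maximum(sigma, 1e-9)
                ei = (mu - best) * norm.cdf(z) + sigma * norm.pdf(z)   # expected improvement
                x_next = cand[np.argmax(ei)]
                X = np.vstack([X, x_next])
                y = np.append(y, objective(x_next))

            print("best setting:", X[np.argmax(y)], "best value:", y.max())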

        Speaker: Daniele Zebele (Istituto Nazionale di Fisica Nucleare, Laboratori Nazionali di Legnaro)
      • 345
        Reinforcement learning for automation of accelerator tuning

        For more than half a decade, RadiaSoft has developed machine learning (ML) solutions to problems of immediate, practical interest in particle accelerator operations. These solutions include machine vision through convolutional neural networks for automating neutron scattering experiments and several classes of autoencoder networks for de-noising signals from beam position monitors and low-level RF systems in the interest of improving and automating controls. As active deployments of our ML products have taken shape, one area which has become increasingly promising for future development is the use of agentic ML through reinforcement learning (RL). Leveraging our substantial suite of ML tools as a foundation, we have now begun to develop an RL framework for achieving higher degrees of automation for accelerator operations. Here we discuss our RL approaches for two areas of ongoing interest at RadiaSoft: total automation of sample alignment at neutron and x-ray beamlines, and automated targeting and dose delivery optimization for FLASH radiotherapy. We will provide an overview of both the ML and RL methods employed, as well as some of our early results and intended next steps.

        Speaker: Morgan Henderson (RadiaSoft (United States))
      • 346
        Scheduler for cooling and ventilation plants: feedback on easy and low cost method for energy savings

        In industrial engineering, scheduling is a well-established strategy for optimizing resource use and minimizing operational costs. At CERN's Engineering department, the Cooling and Ventilation (CV) group has implemented an automatic scheduling solution to reduce electricity consumption by selectively shutting down plants during nights and weekends, when their operation is not required. Given that CV systems account for a significant share of CERN's total electricity use, even simple scheduling strategies can yield substantial energy savings - up to 75% in some cases. This paper presents the motivation, methodology, and preliminary results of scheduler deployments across multiple CV plants between 2023 and 2025, including recent pilots at Point 5 of the Large Hadron Collider (LHC). Two types of scheduler conditions were implemented: calendar-based (e.g., operating only during working hours) and temperature-based (e.g., starting only when zone temperature thresholds are exceeded). Operational safety was carefully assessed - a CO₂ measurement campaign was conducted at Point 5 to confirm compliance with environmental and safety requirements. Preliminary results from several sites show significant reduction in consumption without compromising performance. This low-cost approach demonstrates how simple digital solutions can lead to impactful energy savings in large-scale technical infrastructures.
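        The two condition types can be illustrated with a few lines of Python; the working hours and temperature threshold below are placeholders, not the deployed settings.

            # Calendar-based OR temperature-based run condition for a CV plant (illustrative values).
            from datetime import datetime

            WORKING_HOURS = range(7, 19)        # 07:00-19:00, Monday-Friday (placeholder)
            TEMP_THRESHOLD_C = 24.0             # placeholder zone-temperature threshold

            def plant_should_run(now: datetime, zone_temperature_c: float) -> bool:
                calendar_ok = now.weekday() < 5 and now.hour in WORKING_HOURS
                temperature_ok = zone_temperature_c > TEMP_THRESHOLD_C
                return calendar_ok or temperature_ok

            print(plant_should_run(datetime(2025, 3, 8, 23, 0), 21.5))   # weekend night, cool zone -> False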

        Speaker: Nikolina Bunijevac (European Organization for Nuclear Research)
      • 347
        Sequencer at the European Spallation Source

        The European Spallation Source (ESS) is set to become the world's most powerful neutron source, enabling groundbreaking research across a wide range of scientific disciplines. A key tool used in its operation is the ESS Sequencer — a software tool designed to automate commonly executed high-level Main Control Room procedures required for the facility’s functionality. By using predefined sequences, it improves repeatability and reliability of the processes by minimizing the potential for human error.
        We will discuss the technical challenges addressed by the ESS Sequencer, including task execution types, architectural design, and system scalability. Additionally, we will highlight recent upgrades and future developments aimed at further enhancing the framework’s capabilities. The implementation of the ESS Sequencer marks a major milestone for the ESS Integrated Control System Software group and is expected to significantly contribute to ESS operations and reduction of time required to deliver neutrons for experiments.

        Speaker: Mr Lukasz Zytniak (S2Innovation Sp z o. o. [Ltd.])
      • 348
        Simplifying cryogenic process control at ESS LINAC through automation: development and integration of an automatic control sequence

        This paper presents the Automatic Control Sequence (ACS) developed and implemented to simplify the control of cryogenic processes in the linear accelerator (LINAC) at the European Spallation Source (ESS), which includes 27 cryomodules and 43 valveboxes. The main objectives of the project were to reduce the risk of human error and minimize manual operations — while maintaining full decision-making authority in the hands of the operator. The sequence is designed through interdisciplinary collaboration, with Excel serving as a central platform for information exchange. A custom Python script is then used to generate PLC code in SCL programming language based on the defined logic. The final sequence is deployed on a master PLC and 43 dedicated PLCs, fully integrated with the EPICS control system and interconnected via Profinet for optimized system synchronization. A user-friendly operational interface was developed using CS-Studio, serving as both a monitoring and control layer. It provides visibility across all levels of the control system — from individual devices, through local PLC sequences, up to inter-system synchronization. This paper provides an overview of the development of the Automatic Control Sequence and discusses key lessons learned and future improvements in cryogenic control at ESS.
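        The spreadsheet-to-PLC-code step might look roughly like the sketch below, which reads rows with openpyxl and fills a text template; the column layout, file names, and SCL snippet are illustrative assumptions, not the actual ESS definitions.

            # Generate SCL text from spreadsheet rows (hypothetical 3-column layout: step, valve, threshold).
            from openpyxl import load_workbook

            SCL_TEMPLATE = (
                'IF "{valve}".Position < {threshold} THEN\n'
                '    "{step}".Done := TRUE;\n'
                'END_IF;\n'
            )

            wb = load_workbook("acs_steps.xlsx")            # placeholder file name
            ws = wb.active
            with open("acs_generated.scl", "w") as out:
                for step, valve, threshold in ws.iter_rows(min_row=2, values_only=True):
                    out.write(SCL_TEMPLATE.format(step=step, valve=valve, threshold=threshold))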

        Speaker: Mr Wojciech Bińczyk (S2Innovation Sp z o. o. [Ltd.], European Spallation Source)
      • 349
        SKA telescope control system in 2025

        It is 2025 and the SKA Telescope Control System has come a long way since the start of construction. The outline of the software architecture and some key technology decisions (including the choice of Tango) were made early. To keep the geographically distributed teams engaged, and to avoid creating silos and fragmentation, development of virtually all the software components started in parallel, often while the detailed designs for the custom hardware were still evolving and before the COTS equipment was selected. The deployment strategy was adjusted to align with industry trends. From designing a software system for hardware that did not exist, we have arrived at the point where we can prove that the software can actually work with the hardware. However, the software design and implementation meeting reality uncovered some issues, forcing us to make changes (ska-tango-base) and learn hard lessons (naive implementation of event callbacks). Are we ready to deliver a large distributed control system? We realize that scalability will be a challenge. This paper provides an honest overview of what works and what did not work so well, and how we address the issues.

        Speaker: Ms Sonja Vrcic (SKA Observatory)
      • 350
        SOLEIL II: enhancing data management and computing for tomorrow’s science

        Operational since 2008, SOLEIL [1] offers users access to a wide array of experimental techniques through its 29 beamlines, covering a broad energy spectrum from THz to hard X-rays. In response to evolving scientific and societal needs, SOLEIL is undergoing a major upgrade through the SOLEIL II project. This transformative initiative includes the development of a new Diffraction Limited Storage Ring (DLSR) [2], designed to dramatically increase brilliance, coherence, and flux. The upgrade also encompasses the modernization of beamlines to support state-of-the-art experimental techniques, along with a comprehensive digital transformation centered on data and user-oriented workflows.
        This poster presents the current status of the digital transformation efforts within the SOLEIL II framework. It outlines the project's overall progress, with a particular focus on advancements in computing and data management. A central element of this transformation is the implementation of a unified Data Platform. Key developments include the deployment of a data catalog, upgrades to the IT infrastructure, user interface (UI) research, and the integration of robotics.
        The platform leverages shared infrastructure and software patterns to support both beamline and accelerator teams. Additionally, ongoing evaluations of data streaming technologies—such as ASAPO, LIMA2, and Dranspose—aim to enhance real-time data acquisition and processing capabilities.

        Speaker: Yves-Marie Abiven (Synchrotron soleil)
      • 351
        Status of development and application of the Pyapas at HEPS

        To meet the stringent requirements of beam commissioning at the High Energy Photon Source (HEPS), China’s first fourth-generation high-energy synchrotron light source, a new high-level application (HLA) framework named Pyapas was developed entirely in Python. Designed for flexibility and maintainability, Pyapas serves as the foundation for all HLAs at HEPS, supporting tasks such as orbit correction, optics measurement, and machine modeling. Since early 2023, Pyapas-based HLAs have been successfully applied during the commissioning of the Linac, booster, and storage ring, contributing to key milestones including first light in October 2024. This paper summarizes the major developments and applications of HLAs at HEPS and outlines the direction of future work.

        Speaker: Mr Yuliang Zhang (Institute of High Energy Physics)
      • 352
        Study design of a model-based controller for time varying delay compensation in a cryogenics process

        For the High Luminosity LHC (HL-LHC) project at CERN, a dedicated cryogenic test facility—the HL-LHC IT String—is being commissioned. This cryogenic system includes a helium refrigerator, a 100 m-long cryogenic distribution line, and a low-pressure pumping system with a cold compressor (CC). A key challenge is the management of time-varying delays in the system's thermal and pressure responses, especially in the presence of dynamic components such as the cold compressor, which requires careful control due to its axial-centrifugal configuration and thermal constraints. This paper presents a study on the design of a model-based controller employing a Smith Predictor architecture to compensate for time-varying delays in the cryogenic process. The controller is specifically tailored to the cold compressor’s operating sequence, which includes conditioning, regulation, and controlled stop phases—all of which impose strict operational and safety requirements due to the thermal and mechanical sensitivity of the system. Simulation results are presented alongside experimental data from the commissioning of the IT String. The proposed control strategy enables stable operation of Line B during transient phases, such as magnet pre-loading and cooldown, and ensures the preservation of system integrity by limiting thermal gradients and compressor acceleration. The study offers perspectives on advanced model-based delay compensation in complex cryogenic infrastructures such as those at CERN.
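        For reference, a Smith predictor can be sketched in a few lines for a first-order process with dead time, as below; the model parameters, delay, and PI gains are illustrative and unrelated to the IT String dynamics.

            # Discrete-time Smith-predictor sketch for a first-order process with dead time.
            from collections import deque

            a, b, delay = 0.95, 0.05, 20          # model: y[k+1] = a*y[k] + b*u[k], delayed by `delay` samples
            kp, ki = 0.8, 0.05                    # PI gains (illustrative)
            y_model, integ = 0.0, 0.0
            model_history = deque([0.0] * delay, maxlen=delay)

            def smith_pi_step(setpoint, y_measured):
                """One control step: PI acts on the delay-free model, corrected by plant feedback."""
                global y_model, integ
                y_model_delayed = model_history[0]
                feedback = y_measured + (y_model - y_model_delayed)   # Smith-predictor feedback signal
                error = setpoint - feedback
                integ += ki * error
                u = kp * error + integ
                model_history.append(y_model)
                y_model = a * y_model + b * u                         # advance the delay-free model
                return u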

        Speaker: Marco Pezzetti (European Organization for Nuclear Research)
      • 353
        Study of PLC hardware integration within the CERN controls environment

        In view of the consolidation of the LHC Beam Dumping System (LBDS) control planned during CERN Long Shutdown 3 (LS3), a study was conducted to evaluate industrial Ethernet-based protocols to simplify the current multi-fieldbus architecture connecting PLCs and distributed I/O. Initial assessments used the 32-bit PROFINET driver as an alternative communication interface between the Front-End Software Architecture (FESA) and PLCs, reducing system complexity and enhancing flexibility to adapt to new data acquisition needs without recompilation. While early results were promising, limitations arose from the 32-bit kernel’s integration into CERN’s middleware. The release of a new SIEMENS® PROFINET driver based entirely on libraries now enables 64-bit integration on Debian 12 systems. This development offers new possibilities for interfacing the Slow-Control and Surveillance System (SCSS) of the LBDS, using PROFINET ring communication. This paper evaluates the performance and deployment scenarios of industrial protocols (Open User Communication, SIEMENS® SOFTNET, and OPC UA) within CERN’s control infrastructure. It compares them with the existing system design and highlights the initial operational results of the PROFINET driver integration within SCSS on the Beam Transfer kicker test bench.

        Speaker: Mr Christophe Boucly (European Organization for Nuclear Research)
      • 354
        Swarm and Bayesian optimization strategies for the PIAVE-ALPI accelerators at LNL

        The ALPI linear accelerator at the Legnaro National Laboratories serves as the final superconducting stage in a complex chain designed to accelerate heavy ions—from carbon to uranium—for nuclear and applied physics experiments. It also plays a key role in the SPES project, aimed at re-accelerating exotic radioactive ion beams. Within the TANDEM-PIAVE-ALPI (TAP) complex, the PIAVE injector provides superconducting acceleration of very-low-velocity ions before they enter ALPI. Managing the interface between these two systems poses significant operational challenges: manual tuning is often required, resulting in lengthy setup procedures and reduced transmission efficiency. Beam instabilities further complicate operations, requiring frequent manual re-adjustments. To address these limitations, advanced optimization strategies based on swarm intelligence and Bayesian algorithms have been applied. These methods enable coordinated control of multiple subsystems, including beam optics, RF settings, and ion source parameters, offering a more autonomous and adaptive tuning process. Experimental results demonstrating the effectiveness of this approach will be presented.

        Speaker: Mauro Giacchini (Istituto Nazionale di Fisica Nucleare, Laboratori Nazionali di Legnaro)
      • 355
        The EuAPS betatron radiation source control system

        EuAPS (EuPRAXIA Advanced Photon Source) is a project carried out in the EuPRAXIA context and financed by the Italian Ministry within the Recovery Europe plan framework. A new advanced betatron radiation source, obtained by exploiting plasma LWFA, is currently being realized at the Laboratori Nazionali di Frascati of INFN in Italy and will be operated as a user facility. Several elements of EuAPS are remotely controlled, such as laser diagnostic devices, motors and vacuum system components. In order to run the facility efficiently, the realization of a robust and high-performance control system is crucial. The EuAPS control system is based on the EPICS (Experimental Physics and Industrial Control System) open-source software framework. Functional safety systems such as the Machine Protection System (MPS) and Personnel Safety System (PSS), in accordance with IEC 61508 standards, are also integrated for interlock control, anomaly monitoring, and protection of personnel from hazardous areas. In this contribution, details on how the EuAPS control system has been designed and implemented will be provided.

        Speaker: Valentina Dompè (Istituto Nazionale di Fisica Nucleare, Laboratori Nazionali di Frascati)
      • 356
        The Tango AlarmHandler: advancements in core functionality and tools

        The AlarmHandler system is a key component for ensuring operational safety and efficiency in complex Tango-based control systems. This contribution presents recent advancements to its core functionality and tools. Key updates include improved support for array data types within the alarm evaluation logic, enabling more sophisticated and flexible condition definitions directly involving array or matrix data. Furthermore, new tools designed to extend the AlarmHandler's reach and usability are now available. A dedicated notification service allows for configurable, multi-channel alerting, such as email and messaging platforms, facilitating timely operator awareness and response.
        Complementing this, new management utilities have been created to streamline the configuration, deployment, and maintenance of alarm definitions across distributed systems, significantly simplifying administrative tasks.
        This contribution details the architectural changes, implementation specifics, and the benefits these advancements bring in terms of system robustness, operator efficiency, and overall monitoring capability.

        Speaker: Lorenzo Pivetta (Elettra-Sincrotrone Trieste S.C.p.A.)
      • 357
        Toward particle accelerator machine state embeddings as a modality for large language models

        Understanding and diagnosing the state of a particle accelerator requires navigating high-dimensional control system data, often involving hundreds of interdependent parameters. We propose a novel multimodal embedding framework that jointly learns representations of machine states from both numerical control system readouts and natural language descriptions. This enables the translation of complex machine conditions into human-readable summaries while maintaining fidelity to the underlying physical system. The obtained embeddings are subsequently adapted to an open-weights large language model via cross-attention conditioning. We demonstrate a first implementation trained on European XFEL machine state data. This work covers the embedding model architecture, training methodology, and presents initial examples demonstrating the model's capabilities in action. Due to the general concept of machine state, the model can be easily adapted to other facilities and control system environments.
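        The conditioning step can be illustrated schematically: the sketch below projects a vector of numerical readouts into embedding tokens and lets text hidden states attend to them with PyTorch cross-attention; all dimensions are arbitrary and this is not the European XFEL model.

            # Project numerical machine-state channels into tokens; condition text states via cross-attention.
            import torch
            import torch.nn as nn

            n_channels, d_model, n_tokens = 512, 768, 16          # arbitrary sizes

            state_proj = nn.Linear(n_channels, n_tokens * d_model)     # machine state -> embedding tokens
            cross_attn = nn.MultiheadAttention(d_model, num_heads=8, batch_first=True)

            machine_state = torch.randn(1, n_channels)                  # one snapshot of readouts
            state_tokens = state_proj(machine_state).view(1, n_tokens, d_model)

            text_hidden = torch.randn(1, 32, d_model)                    # stand-in LLM hidden states
            conditioned, _ = cross_attn(query=text_hidden, key=state_tokens, value=state_tokens)
            print(conditioned.shape)                                     # torch.Size([1, 32, 768])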

        Speaker: Thorsten Hellert (Lawrence Berkeley National Laboratory)
      • 358
        Towards safe and robust neural network controllers at CERN: a review of methods and challenges

        Advances in optimization and machine learning algorithms have shown great potential when applied to control systems in many industries, such as automotive, avionics and aerospace. At CERN, we also find many initiatives applied to our particle accelerators and industrial facilities. In recent years, neural networks have increasingly been explored as components of, or even full replacements for, model-based control systems, which rely on handcrafted rules or hard optimization schemes. In contrast, neural networks promise near-optimal performance while being trainable purely on existing data. However, for critical control systems it is of great importance that any control policy or dynamics model conforms to predictable behavior and adheres to strict requirements. While model-based control achieves this by construction, neural network models are known to exhibit unpredictable behavior, such as adversarial examples. For this reason, the use of formal methods for guaranteeing properties of neural networks has been widely explored in the literature. In this paper, we present an overview of the safety, robustness and stability challenges posed by neural network-based control systems at CERN. We examine how these challenges can be specified as formal properties and discuss state-of-the-art techniques for verifying and mitigating them.
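        One family of verification techniques can be illustrated compactly: the sketch below applies interval bound propagation to a toy ReLU network to bound its outputs for all inputs within ±eps of a nominal point; the network weights are random stand-ins.

            # Interval bound propagation (IBP) through a small random ReLU network.
            import numpy as np

            def ibp_layer(lo, hi, W, b):
                """Propagate element-wise interval bounds through y = W @ x + b."""
                W_pos, W_neg = np.maximum(W, 0), np.minimum(W, 0)
                return W_pos @ lo + W_neg @ hi + b, W_pos @ hi + W_neg @ lo + b

            rng = np.random.default_rng(0)
            W1, b1 = rng.normal(size=(8, 4)), np.zeros(8)
            W2, b2 = rng.normal(size=(2, 8)), np.zeros(2)

            x, eps = np.array([0.1, -0.2, 0.05, 0.3]), 0.01
            lo, hi = x - eps, x + eps
            lo, hi = ibp_layer(lo, hi, W1, b1)
            lo, hi = np.maximum(lo, 0), np.maximum(hi, 0)     # ReLU is monotone, bounds pass through
            lo, hi = ibp_layer(lo, hi, W2, b2)
            print("guaranteed output bounds:", lo, hi)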

        Speaker: Borja Fernandez Adiego (European Organization for Nuclear Research)
      • 359
        Upgrading ATLAS’s tune archiving system

        The Argonne Tandem Linear Accelerating System (ATLAS) facility at Argonne National Laboratory is a National User Facility capable of delivering ion beams from hydrogen to uranium. The existing tune archiving system, which utilizes Corel’s Paradox relational database management software, is responsible for retrieving and restoring machine parameters from previously optimized configurations. However, the Paradox platform suffers from outdated support, a proprietary programming language, and limited functionality, prompting the need for a modern replacement.
        To address these limitations, ATLAS is transitioning to a new archiving system based on PySide for the user interface, InfluxDB for time-series data storage, and FastAPI for backend communication.
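        A backend of this kind might expose endpoints along the lines of the FastAPI sketch below; the routes, payload model, and in-memory store are hypothetical stand-ins (the production system keeps time series in InfluxDB).

            # Minimal FastAPI sketch of a tune-archive backend (routes and payload are hypothetical).
            from fastapi import FastAPI
            from pydantic import BaseModel

            app = FastAPI()
            _archive: dict[str, dict[str, float]] = {}        # stand-in for the real time-series store

            class Tune(BaseModel):
                name: str
                setpoints: dict[str, float]

            @app.post("/tunes")
            def save_tune(tune: Tune):
                _archive[tune.name] = tune.setpoints
                return {"saved": tune.name, "parameters": len(tune.setpoints)}

            @app.get("/tunes/{name}")
            def restore_tune(name: str):
                return _archive.get(name, {})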

        Speaker: Kenneth Bunnell (Argonne National Laboratory)
      • 360
        Using C++ templates for correct and efficient hardware access

        Programming language compilers utilize sophisticated optimization techniques to translate high-level abstractions into performant machine code. These transformations, such as instruction scheduling and data pre-loading, are deemed correct if they preserve the program's observable behavior. However, such optimizations often fail to maintain correctness when interacting with hardware peripherals due to side effects and timing constraints. This paper presents a C++ template meta-programming approach to generate hardware access routines that are both correct and exhibit optimal machine code generation.

        Speaker: Richard Neswold (Fermi National Accelerator Laboratory)
      • 361
        Using computer vision for online calibration of beam instruments at CERN

        Accurate calibration of beam instrumentation is critical for the optimal operation of particle accelerators. This work presents a case study of a beam imaging system at CERN’s Antiproton Decelerator (AD) target, composed of a light-emitting screen interacting with the beam and an observation camera. During operational use, the system required frequent online recalibrations to address temperature-induced image drifts. To resolve this issue, a fully automated procedure was developed that periodically acquired images and applied multiple computer vision techniques. These techniques included custom curve-fitting methods applied to pre-processed regions of interest and SIFT-based (Scale-Invariant Feature Transform) feature detection to track and correct positional shifts. By automatically performing recalibrations at regular intervals, the approach has significantly enhanced consistency and reliability, enabling continuous and precise beam monitoring in varying environmental conditions. This stabilization technique has subsequently contributed to the optimization of antiproton production at the AD facility. The paper first introduces the challenges associated with calibrating the beam imaging instrumentation of the AD target. It then presents the chosen image analysis techniques, followed by a discussion of the results and measurement errors of the tested methods. Finally, an outlook on potential future improvements is provided.
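        One ingredient of such a procedure, estimating the translational drift between a reference image and a fresh image from SIFT feature matches, can be sketched with OpenCV as below; the image paths are placeholders.

            # Estimate image drift between two frames from matched SIFT keypoints (paths are placeholders).
            import cv2
            import numpy as np

            ref = cv2.imread("reference.png", cv2.IMREAD_GRAYSCALE)
            new = cv2.imread("latest.png", cv2.IMREAD_GRAYSCALE)

            sift = cv2.SIFT_create()
            k1, d1 = sift.detectAndCompute(ref, None)
            k2, d2 = sift.detectAndCompute(new, None)

            matches = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True).match(d1, d2)
            shifts = np.array([np.subtract(k2[m.trainIdx].pt, k1[m.queryIdx].pt) for m in matches])
            dx, dy = np.median(shifts, axis=0)          # robust estimate of the drift in pixels
            print(f"drift: dx={dx:.2f} px, dy={dy:.2f} px")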

        Speaker: Javier Martínez Samblas (European Organization for Nuclear Research)
      • 362
        Validating VSlib, the voltage source control library used in the FGC4 at CERN

        The fourth generation of power converter control at CERN, known as Function Generator/Controller 4 (FGC4), is currently under development. The chosen hardware is based on a quad-core A53 ARM-architecture CPU within an AMD Zynq UltraScale+ MPSoC System-on-Chip (SoC), where one core is dedicated to voltage source control. This architecture necessitates a new approach to voltage control software, previously implemented on a Digital Signal Processor (DSP) card and an FPGA, where high reliability and performance are crucial.
        To achieve these goals in a highly integrated environment, a new library called VSlib (Voltage Source library) has been developed. VSlib serves as a toolkit, providing all necessary building blocks for user-defined voltage regulation algorithms, such as filters, controllers, and lookup tables. Additionally, it supports digital and analogue signal logging, inter-core data exchange, and communication with other FGC4s using an in-house Ethernet-based protocol.
        The development process was test-driven, focusing on performance, determinism, and reliability. The library adheres to best industrial practices, including version control, static analysis, and automated testing. Tests were conducted against power convert models running on a Speedgoat Hardware-in-the-Loop system.

        Speaker: Dariusz Zielinski (European Organization for Nuclear Research)
      • 363
        WREN: A versatile White Rabbit Event Node for CERN’s timing system renovation

        WREN is a versatile White Rabbit (WR) node developed for CERN's event-based timing system renovation. Thousands of WRENs are expected to be deployed across the entire CERN accelerator complex from 2027 onwards. Equipped with dedicated hardware and gateware, WREN integrates synchronisation in both TAI (International Atomic Time) and RF (accelerator Radio Frequency) timing. It can function as a TAI event transmitter and receiver, a Beam Synchronous (RF) transmitter and receiver, and is also capable of FPGA-based time-to-digital conversion and fine-delay generation.

        WREN is highly adaptable for various timing and trigger distribution systems. It is available in multiple form factors, including PCIe, VME, PXIe, and uTCA. All boards are based on the Zynq UltraScale+ System-on-Chip (SoC), designed using the open-source KiCad tool and licensed under the CERN Open Hardware License (OHL). The gateware and software are also open source. This paper presents the WREN hardware modules, the gateware architecture, and potential customisations for applications beyond CERN. It also shares insights from the initial pilot deployments at CERN.

        Speaker: Evangelia Gousiou (European Organization for Nuclear Research)
      • 364
        Xopt and Badger: a machine learning ecosystem for real-time accelerator control and optimization

        Machine learning (ML)-based black-box optimization algorithms have demonstrated significant improvements in accelerator optimization speed, often by orders of magnitude. However, deploying these algorithms in real-time facility control remains challenging due to the specialized expertise and infrastructure required. To bridge this gap, we introduce the Xopt ecosystem, a versatile suite of tools designed to make advanced ML-based optimization accessible to the broader accelerator community. This ecosystem includes Xopt, a modular Python framework that facilitates the integration of ML-based optimization algorithms with arbitrary control problems, and Badger, a graphical user interface built on top of Xopt, which enables seamless deployment of ML algorithms in real-time control systems. The Xopt ecosystem has been successfully applied towards solving challenging real-time control problems at leading international accelerator facilities, including SLAC, LBNL, Argonne, Fermilab, BNL, DESY, and ESRF, demonstrating its effectiveness in real-world optimization tasks. In this presentation, we provide an overview of Xopt’s capabilities and illustrate its impact through case studies from SLAC accelerator facilities including LCLS, LCLS-II, and FACET-II.

        Speaker: Ryan Roussel (SLAC National Accelerator Laboratory)
    • WESV Speaker's Corner (MC13, MC14)
      • 365
        Xopt and Badger: a machine learning ecosystem for real-time accelerator control and optimization

        Machine learning (ML)-based black-box optimization algorithms have demonstrated significant improvements in accelerator optimization speed, often by orders of magnitude. However, deploying these algorithms in real-time facility control remains challenging due to the specialized expertise and infrastructure required. To bridge this gap, we introduce the Xopt ecosystem, a versatile suite of tools designed to make advanced ML-based optimization accessible to the broader accelerator community. This ecosystem includes Xopt, a modular Python framework that facilitates the integration of ML-based optimization algorithms with arbitrary control problems, and Badger, a graphical user interface built on top of Xopt, which enables seamless deployment of ML algorithms in real-time control systems. The Xopt ecosystem has been successfully applied towards solving challenging real-time control problems at leading international accelerator facilities, including SLAC, LBNL, Argonne, Fermilab, BNL, DESY, and ESRF, demonstrating its effectiveness in real-world optimization tasks. In this presentation, we provide an overview of Xopt’s capabilities and illustrate its impact through case studies from SLAC accelerator facilities including LCLS, LCLS-II, and FACET-II.

        Speaker: Ryan Roussel (SLAC National Accelerator Laboratory)
      • 366
        Large Language Model (LLM) tool to improve autonomous operation at TEX facility

        In this work we report the integration of a Large Language Model (LLM) to improve the operation of a particle accelerator facility such as TEX (TEst-stand for XBand) at the Frascati National Laboratories (LNF) of the Italian Institute for Nuclear Physics (INFN).
        The integration of an LLM through the Cheshire Cat framework presents a transformative approach to enhancing operational capabilities for operators and users. This innovative tool leverages the advanced capabilities of artificial intelligence to assist operators in real-time decision-making and problem-solving. The LLM can interpret and analyze data, suggest optimal operational strategies, and facilitate communication across different subsystems, thereby improving coordination among teams.
        Thanks to the Cheshire Cat's ability to manage different types of memory, including episodic, declarative, and procedural memory, it is possible to develop a highly specialized tool that possesses targeted knowledge of procedures, troubleshooting, and physical phenomena relevant to accelerator facilities. This versatility allows for the integration of the tool with control system frameworks such as EPICS, thereby facilitating direct interaction with the accelerator itself. The integrated approach not only optimizes daily operations but also leverages historical information and best practices, enhancing the efficiency and safety of operations in the field of particle physics.
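        A typical "tool" exposed to such an agent might simply wrap an EPICS read, as in the sketch below using pyepics; the PV name and the wiring into the agent framework are placeholders.

            # A read-only tool an LLM agent could call to inspect live machine values over Channel Access.
            from epics import caget

            def read_machine_value(pv_name: str) -> str:
                """Return a short human-readable answer the LLM can quote to the operator."""
                value = caget(pv_name)
                if value is None:
                    return f"Could not reach PV '{pv_name}'."
                return f"{pv_name} currently reads {value}."

            print(read_machine_value("TEX:GUN:PRESSURE"))   # hypothetical PV name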

        Speaker: Stefano Pioli (Istituto Nazionale di Fisica Nucleare, Laboratori Nazionali di Frascati)
      • 367
        Fermilab's control system development with digital twin

        Control system development is often the last thing considered when designing and building new equipment, e.g. a new detector or superconducting RF LINAC; however, when the new equipment is installed, it is the first thing required to be operational for testing. Due to frequent delays in building new equipment and project deadlines, control system development and testing is often curtailed. A way to alleviate this problem is to simulate the control system, though this can be challenging for complex systems.

        The Fermilab PIP-II (Proton Improvement Plan II) project is being constructed to deliver $800\,MeV$ protons of $>1\,MW$ beam power, replacing the present LINAC that serves the remainder of the existing accelerator complex. The new LINAC consists of a warm front end (WFE), 23 superconducting RF cryomodules (of 5 types), and a beam transfer line (BTL) to the existing complex.

        The accelerator physics group has a parallel project to create a digital twin (DT) of the PIP-II accelerator. We have coupled the EPICS controls to this DT and are developing both the DT and EPICS software in parallel. This will allow us to develop the EPICS software framework, the HMIs, sequences, high level physics applications, and other services for use in a fully functional control system.

        This presentation will detail the work that we have performed to date and show demonstrations of controlling and monitoring the status of the accelerator, as well as future plans for this work.

        Speaker: Pierrick Hanlet (Fermi National Accelerator Laboratory)
    • 18:30
      Conference Dinner, Awards Presentation
    • THKG Keynote Grand Ballroom

      Grand Ballroom

      Palmer House Hilton Chicago

      17 East Monroe Street Chicago, IL 60603, United States of America
      Convener: Karen White (Oak Ridge National Laboratory)
      • 368
        Trillion-Parameter Foundation Models as Discovery Accelerators: Toward a Scientific Discovery Platform

        Trillion-parameter, science-tuned foundation models can speed discovery, but only inside an AI-native Scientific Discovery Platform (SDP) that connects models to tools, data, HPC, and robotics. I argue for community co-development of the SDP, via open interfaces, shared schedulers, knowledge substrates, provenance, and evaluation, alongside shared models. Early results suggest that such a co-designed stack can boost throughput and reliability in materials and bio workflows, enabling human–AI teams to turn knowledge into experiments and experiments into insight.

        Speaker: Ian Foster (Argonne National Laboratory)
    • THAG MC09 Experimental Control and Data Acquisition Grand Ballroom

      Grand Ballroom

      Palmer House Hilton Chicago

      17 East Monroe Street Chicago, IL 60603, United States of America
      Conveners: Mark Rivers (Consortium for Advanced Radiation Sources), Steven Hartman (Oak Ridge National Laboratory)
      • 369
        Multimodal data acquisition system at MAX IV

        The Balder beamline at MAX IV Laboratory, a state-of-the-art 4th generation synchrotron, is designed for X-ray absorption and emission spectroscopy. Delivering a high photon flux (10¹³ ph/s), it supports in situ experiments, which require fast, high-quality data acquisition and support for sequential multi-technique measurements.
        This work presents a data acquisition (DAQ) system that combines X-ray absorption spectroscopy (XAS) and X-ray diffraction (XRD) within a single, synchronized experiment. At the core of the system is a Double Crystal Monochromator, operated by an ACS SPiiPlusEC motion controller. This controller enables stable and rapid energy scanning via programmable motion trajectories, allowing sequential acquisition of energy spectra and diffraction patterns.
        Experiment synchronization is achieved via an FPGA-based PandABox, which generates TTL signals based on the real-time motor position, enabling technique-specific pulse trains to be sent to the respective XAS and XRD detectors, precisely gated to the energy scan.
        The entire experiment workflow is orchestrated using Sardana through dedicated macros and controllers. User interaction is streamlined through a Taurus GUI, providing an intuitive drag-and-drop interface for sequencing and configuring each technique.
        This contribution outlines the system architecture, integration challenges, and benchmarking results, highlighting the enhanced experimental capabilities made possible by this advanced DAQ system at MAX IV.
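        As a generic illustration of position-based triggering for energy scans, the sketch below converts a photon-energy grid into Si(111) Bragg angles, the kind of position table a position-compare trigger can act on; the constants are standard values and the scan range is arbitrary, not the Balder configuration.

            # Convert a photon-energy grid to Si(111) Bragg angles for position-based triggering.
            import numpy as np

            HC_KEV_ANGSTROM = 12.39842        # h*c in keV*Angstrom
            D_SI111 = 3.1356                  # Si(111) d-spacing in Angstrom

            def bragg_angle_deg(energy_kev):
                return np.degrees(np.arcsin(HC_KEV_ANGSTROM / (2.0 * D_SI111 * energy_kev)))

            energies = np.linspace(8.9, 9.1, 201)      # e.g. an XAS scan around 9 keV (arbitrary)
            angles = bragg_angle_deg(energies)
            print(angles[0], angles[-1])               # first and last trigger positions in degrees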

        Speaker: Vanessa Da Silva (MAX IV Laboratory)
      • 370
        ADTimePix3 controls for neutron detection

        The TimePix3 detector, developed by the Medipix collaboration, has emerged as a powerful tool for neutron detection applications at Department of Energy (DOE) National User Facilities, including the Spallation Neutron Source (SNS) and High Flux Isotope Reactor (HFIR). This presentation introduces new features and improvements in the EPICS area detector driver (ADTimePix3), specifically designed for neutron detection experiments.
        We will discuss the status of controls for the Timepix3 detector. Computation of Time-of-Flight (ToF) histograms and advanced hardware-triggered acquisition modes will be presented. Advanced Mask Generation is an innovative feature that builds masks of arbitrary shape from circular and rectangular elements for both single-chip and quad-chip detector configurations. Radiation affects Timepix3 chips, and mitigation techniques have emerged as a critical need in neutron experiments. This work represents a significant step in integrating advanced detector technologies into the EPICS control system framework. Future improvement plans involving advanced timing detectors will be discussed to optimize timing detector performance in DOE's Scientific User Facilities.

        Speaker: Kazimierz Gofron (Oak Ridge National Laboratory)
      • 371
        HASMI: A configurable EPICS-based framework for automated optimization of undulator-monochromator coordination

        We present HASMI (Harmonic Analyzer State Machine Interface), a highly configurable Python framework designed to automate data acquisition and analytical tasks for optimizing coordinated undulator and monochromator movements at synchrotron beamlines. Built upon EPICS, HASMI features a comprehensive scan library and a command-line interface for managing multi-actuator scans. The framework offers extensive customization through a user-configurable database for defining scan types, actual scan parameters, and user preferences. Integrated analytical tools enable precise identification of undulator harmonic positions using advanced techniques such as convolution and customized cross-correlation. Beamline operations are automated and parameterized through a generalized state machine, which provides both an intuitive command-line interface and a dedicated EPICS interface. The state machine operates efficiently as a continuous background service alongside the monochromator IOC. By integrating multiple alignment scans, visualization, and automated data analysis into a streamlined 'one-button' procedure, HASMI significantly enhances the speed, performance, and reliability of beamline operations for users of the four hard X-ray CPMU17-DCM branches at EMIL.
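        As a simplified illustration of the analysis step, the sketch below locates a harmonic peak in a synthetic intensity scan via box-kernel convolution and peak detection; the data, kernel width, and thresholds are invented.

            # Locate a harmonic peak via smoothing (convolution) and peak detection on synthetic data.
            import numpy as np
            from scipy.signal import find_peaks

            gap = np.linspace(14.0, 16.0, 400)                        # undulator gap [mm], illustrative
            intensity = np.exp(-0.5 * ((gap - 15.03) / 0.05) ** 2)     # synthetic harmonic
            intensity += 0.05 * np.random.default_rng(1).normal(size=gap.size)

            kernel = np.ones(9) / 9.0
            smoothed = np.convolve(intensity, kernel, mode="same")     # simple box smoothing
            peaks, _ = find_peaks(smoothed, height=0.5)
            print("harmonic position [mm]:", gap[peaks[np.argmax(smoothed[peaks])]])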

        Speaker: Andreas Balzer (Helmholtz-Zentrum Berlin für Materialien und Energie)
      • 372
        Data acquisition monitoring and real-time analysis at LCLS

        X-ray experiments at the Linac Coherent Light Source (LCLS) at SLAC National Accelerator Laboratory require rapid analysis and feedback to optimize experimental conditions and maximize data quality. To address this need, the Data Systems team has developed a real-time monitoring and data analysis software tool, facilitating timely decision-making and improving experimental efficiency.
        In this talk, we will present the design and capabilities of this software tool, including its intuitive parallelized, flowchart-based user interface and graph-based backend, with an overview of the pre-defined operations that can be combined into full workflows. We will also discuss the software's scalability, achieved through ZeroMQ (zmq) sockets, and its modular design, which together enable easy integration of new data formats or functionalities. This flexibility and scalability make the software a valuable resource for X-ray experiments at LCLS, with the potential to improve experimental efficiency and productivity.
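        The socket layer can be illustrated with a minimal pyzmq PUB/SUB pair, as below; the endpoint and payload are placeholders rather than the LCLS implementation.

            # Minimal pyzmq PUB/SUB pair for streaming reduced analysis results between processes.
            import zmq

            def publisher(endpoint="tcp://*:5556"):
                ctx = zmq.Context.instance()
                pub = ctx.socket(zmq.PUB)
                pub.bind(endpoint)
                # In practice the subscriber must already be connected (PUB/SUB "slow joiner").
                pub.send_pyobj({"run": 42, "hit_rate": 0.31})     # one reduced result record (placeholder)

            def subscriber(endpoint="tcp://localhost:5556"):
                ctx = zmq.Context.instance()
                sub = ctx.socket(zmq.SUB)
                sub.connect(endpoint)
                sub.setsockopt(zmq.SUBSCRIBE, b"")                # receive everything
                print(sub.recv_pyobj())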

        Speaker: Vincent Esposito (Linac Coherent Light Source)
      • 373
        Advancing position-based continuous energy scans at MAX IV: expanded beamline coverage and enhanced control integration

        The position-based continuous energy scanning system at MAX IV continues to evolve, delivering significantly faster and more consistent data acquisition for X-ray beamlines. Since its initial implementation on BioMAX, FlexPES, and FinEstBeAMS, the system has now been successfully deployed on three additional beamlines: SPECIES, HIPPIE, and NanoMAX. Integration efforts are also underway for SoftiMAX and MAXPEEM. A major recent advancement includes the implementation of non-linear motion trajectories for both APPLE-II and IVU undulators, supporting coupled-axis scanning in both soft and hard X-ray regimes. In addition to its native compatibility with Sardana/TANGO-based orchestration, the system now offers broader integration capabilities, allowing for seamless operation within alternative experimental control environments such as Contrast. These developments further underline the system’s modularity, flexibility, and potential to become a facility-wide standard for efficient, low-dose, and high-resolution scanning.

        Speaker: Lin Zhu (MAX IV Laboratory)
    • THAR MC02 Control System Upgrades Red Lacquer Room

      Red Lacquer Room

      Palmer House Hilton Chicago

      17 East Monroe Street Chicago, IL 60603, United States of America
      Conveners: Misaki Komiyama (The Institute of Physical and Chemical Research), Yuliang Zhang (Chinese Academy of Sciences)
      • 374
        The integration of custom radiation tolerant electronics in industrial control systems

        In the CERN accelerator complex, the conventional magnets are protected against overheating and power converter failures by a PLC-based system, the so-called Warm magnet Interlock Controller (WIC). In 2026, the systems installed in the LHC-SPS transfer lines will reach end-of-life after 20 years of successful operation. Furthermore, Siemens' announcement regarding the phase-out of the S7-300 series necessitates the development of a second-generation magnet protection system. Initially, a solution based on the existing purely industrial configuration was considered, involving a simple upgrade of the PLC modules to the new Siemens S7-1500 series. However, due to the susceptibility of this new series to radiation-induced electronic effects, this approach was deemed unfeasible. As a result, a new control system architecture was explored, integrating both an industrial control processor and custom in-house designed radiation-tolerant electronics. To maintain overall system integrity and ensure seamless interfacing with the existing SCADA layer, it was decided to retain the industrial CPU – Siemens S7-1516 PLC – as the process control master. For the slave units, operating in radiation-prone environments, a CERN-developed platform, Distributed I/O Tier, was chosen. The new system is presently being installed in the beam transfer lines. The technical challenges and chosen solutions are described in this paper.

        Speaker: Michal Kalinowski (European Organization for Nuclear Research)
      • 375
        Upgrading hard X-ray experimental instruments controls for LCLS-II-HE: enabling high repetition rate science

        The Linac Coherent Light Source (LCLS) at SLAC National Accelerator Laboratory has been undertaking a major project that builds on the foundation of the LCLS-II project. The LCLS-II High Energy (LCLS-II-HE) upgrade is designed to push the capabilities of LCLS-II even further by increasing the energy of its superconducting accelerator to 8 GeV (up from ~4 GeV), enabling the production of even shorter and more intense X-ray pulses for cutting-edge scientific experiments.
        One of the project’s Key Performance Parameters is the delivery of a high-repetition-rate capable Hard X-ray (HXR) experimental instrument that can perform experiments with the LCLS-II-HE beam. Meeting this requirement has driven the need to upgrade the existing HXR experimental control system.
        This talk will focus on the scope and progress of that upgrade effort, which builds on the LCLS-II controls architecture integrating new hardware and software components. A major part of the upgrade includes the implementation of the Preemptive Machine Protection System (PMPS), which is integrated across all key control subsystems including motion, optics and vacuum to ensure safe beam delivery. The upgraded system is also designed to support dual-mode operation and beamline multiplexing to meet evolving experimental demands.
        In addition, as the project approaches installation and transitions into its final phase, the challenges encountered and the mitigations implemented to ensure successful delivery will be presented.

        Speaker: Margaret Ghaly (SLAC National Accelerator Laboratory)
      • 376
        Upgrade of the LHC vacuum control system towards the High Luminosity LHC era

        The HL-LHC project has initiated a comprehensive upgrade of the LHC vacuum control system. Much of the vacuum control hardware, installed at the beginning of the LHC in 2008 or even dating back to the LEP era in the 1990s, was becoming obsolete and required modernization.
        Additionally, the new HL-LHC operating conditions will induce higher radiation levels; therefore, new radiation-tolerant electronics are required in the arcs and dispersion suppressor areas, as well as radiation-hard equipment in the matching sections.
        Moreover, during the third long shutdown (2026-2030), the matching sections around the ATLAS and CMS experiments will need to be extensively modified with the insertion of new systems. The vacuum control system will be relocated to new underground galleries, with a significant increase in vacuum control equipment.
        This upgrade, progressively implemented during each year-end technical stop and long shutdown, spans from 2019 to 2030. Hundreds of controllers are involved, along with significant control software design and refactoring, while keeping the vacuum control system fully operational.
        This paper provides an overview of the new designs and technological solutions chosen, the radiation hardness assurance applied, the evolution of the vacuum control system architecture, and the main control software upgrades to ensure decades of reliable operation. Furthermore, it presents the current progress, challenges, and outlines future activities planned until 2030.

        Speaker: Gregory Pigny (European Organization for Nuclear Research)
      • 377
        Integration of new cryogenic plants into an existing control system: a scalable and standardized approach

        The High-Luminosity upgrade of the Large Hadron Collider (HL-LHC) requires integrating new cryogenic plants while ensuring uninterrupted operation of the existing infrastructure. This paper presents a scalable and standardized approach to upgrading the control system, ensuring flexibility, interoperability, and long-term maintainability. The approach utilizes commercial off-the-shelf (COTS) components, including PLCs and standard programming languages compliant with IEC 61131, to enable modular deployment and minimize development complexity. Additionally, the use of a control framework (UNICOS) streamlines implementation, enhances system coherence, and ensures efficient interaction between new and legacy subsystems. We provide an overview of the control system architecture, highlighting design decisions that enhance scalability and adaptability. Challenges such as maintaining a seamless integration with the operational constraints, reliability assurance, and automation consistency were addressed through structured methodologies. This work serves as a reference for large-scale cryogenic system upgrades, demonstrating how industry standards and COTS solutions facilitate integration while ensuring long-term sustainability.

        Speaker: Jesus Fernandez Cortes (European Organization for Nuclear Research)
      • 378
        Beamline controls experiences under the APS upgrade project

        The upgrade of the Advanced Photon Source included the design and construction of eight brand-new beamlines and significant reconstruction of an additional fifteen existing beamlines. With such a significant amount of new support required, the APS Beamline Controls group took the opportunity to evaluate the hardware we were using and the ways in which we develop and deploy beamline support.
        This is an overview of the outcomes of those evaluations, alongside lessons learned during the actual implementation of such planning. We discuss the hardware chosen, changes in IOC configuration and management, support development challenges, and the setup of data acquisition to take full advantage of our new beam.

        Speaker: Keenan Lang (Argonne National Laboratory)
    • 10:15
      Coffee
    • THBG MC09 Experiment Control and Data Acquisition Grand Ballroom

      Grand Ballroom

      Palmer House Hilton Chicago

      17 East Monroe Street Chicago, IL 60603, United States of America
      Conveners: Mark Rivers (Consortium for Advanced Radiation Sources), Steven Hartman (Oak Ridge National Laboratory)
      • 379
        ESRF's software strategy for high-throughput detectors: parallel and scalable data acquisition with real-time processing

        The ESRF EBS upgrade dramatically increased the X-ray photon flux available to the beamlines, thereby opening up new experimental opportunities but also requiring faster and more advanced 2D detectors. At ESRF, detectors are integrated into LIMA, a framework that provides unified control and acquisition Tango APIs for our control software, BLISS.
        To meet the challenges of new 2D detectors, a comprehensive data strategy has been developed. This includes a redesigned data acquisition framework, LIMA2, which aims to support increasing data throughput with a scalable design, facilitate online data reduction (ODR) tasks, and provide data access for Online Data Analysis (ODA) software. In addition, new detectors introduce new data models, such as multi-band, sparse, sub-byte or event-based data, which need to be handled appropriately.
        In the framework of our data strategy, we give an update on the recent developments of the LIMA2 project components: the core library architecture and the hardware plug-ins for Dectris, Rigaku XSPA-1M, and ESRF Smartpix detectors. We also showcase the processing plugin model, illustrated by the serial macro-crystallography (SMX) and photon correlation spectroscopy (XPCS) techniques. We demonstrate LIMA2's new capabilities with real-world applications on ID10 and ID15a/b and present the results. We conclude with our vision of future data models and their integration with the control, visualisation and processing software.

        Speaker: Samuel Debionne (European Synchrotron Radiation Facility)
      • 380
        FELIX, the ATLAS readout system: from LHC Run 3 to Run 4

        After being successfully deployed to read out a subset of the ATLAS detectors during LHC Run 3 (2022-2026), FELIX will serve all ATLAS detectors in LHC Run 4 (2030-2033). FELIX is a router between custom serial links from front-end ASICs and FPGAs and data collection and processing components connected via a commodity switched network. FELIX is also capable of forwarding, at fixed latency, the LHC clock, trigger accepts, and resets received from the TTC (Timing, Trigger and Control) system to front-end electronics. FELIX uses FPGA-based PCIe I/O cards installed in commodity servers. To cope with the increased data rate expected after the major upgrade to the LHC and the detector between runs, the FLX712 Run 3 PCIe Gen3x16 card, based on an AMD Kintex UltraScale XCKU115 FPGA, will be replaced with the FLX155, a bifurcated 2x PCIe Gen5x8 card equipped with an AMD Versal Premium VP1552 FPGA/SoC. Firmware installed on the FPGA and software running on the FELIX server are also being upgraded to handle the increased data rate.

        Speaker: Ricardo Luz (Argonne National Laboratory)
      • 381
        Comparison of distributed DAQ software frameworks for the high-throughput TUPI detector

        Timepix-based Ultra-fast Photon Imaging (TUPI) is a photon-counting hybrid detector family proposed to fulfill the requirements of the tender and hard X-ray beamlines of ORION (Brazil's planned BSL-4 laboratory). The first detector model is planned to use an arrangement of 3x3 Timepix4 ASICs to provide 1344x1536 pixel images (55 μm pixel pitch) at 16-bit dynamic range, with expected acquisition rates up to 4 kHz in frame-based mode. However, the hardware can reach up to 44 kHz, and other chip arrangements are possible in the future, so a future-proof software solution is being sought among currently available open-source Data Acquisition (DAQ) software frameworks, such as LIMA2, Odin and areaDetector. A distributed DAQ system could simplify server hardware requirements, given that a 3x3 ASIC array acquiring at maximum frame rate requires 144 dedicated 10.24 Gbps links, which can be spread across as many hardware units as needed. Furthermore, the decentralized software architecture can help in performance-sensitive tasks like file saving, enabling it to keep up with the detector rates.
        This work aims to compare the available frameworks considering multiple aspects, such as performance, ergonomics (i.e., developer experience and deployment procedure) and control system integration (EPICS, in our case). The suitability of the frameworks for online processing and distributed acquisition will also be compared, as current and future challenges might make these features compulsory.
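
        As a back-of-the-envelope check of the link budget quoted above (an editorial illustration using only the numbers given in the abstract, not the authors' design documents), the raw data rate and per-link load can be estimated in a few lines of Python:

        # Rough check of the TUPI readout link budget quoted above.
        pixels = 1344 * 1536          # 3x3 Timepix4 array, 55 um pitch
        bytes_per_pixel = 2           # 16-bit dynamic range
        frame_bytes = pixels * bytes_per_pixel

        max_frame_rate = 44_000       # Hz, hardware limit in frame-based mode
        raw_rate_gbps = frame_bytes * 8 * max_frame_rate / 1e9

        links = 144                   # dedicated readout links
        per_link_gbps = raw_rate_gbps / links

        print(f"raw data rate : {raw_rate_gbps:.0f} Gb/s")
        print(f"per-link rate : {per_link_gbps:.2f} Gb/s (link capacity 10.24 Gb/s)")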

        Speaker: Érico Nogueira Rolim (Brazilian Synchrotron Light Laboratory)
      • 382
        The Karabo middlelayer API and motion control systems at European XFEL

        Karabo is a device-based distributed control system toolkit used to implement the control and data acquisition systems of European XFEL.
        A feature of Karabo is its middlelayer (MDL) API - a powerful, flexible and easy-to-use asynchronous Python API which can be used to implement Karabo devices. Such devices may interface hardware directly, or they may communicate with other Karabo devices to provide coordination between them or derive functionality from them. Lightweight middlelayer devices, called macros, allow routines for coordinating or monitoring Karabo devices to be quickly created. A command-line interface using the MDL API is provided by the iKarabo utility.
        In this contribution an overview of the Karabo middlelayer is given, presenting key parts of the API, and macros and iKarabo are introduced. Examples of MDL devices are presented, with an emphasis on motion systems. A framework for multi-axis motion, a virtual motor base, and the scan tool Karabacon are described. The Karabo MDL API is compared to EPICS solutions, including pythonSoftIOC and asyn, and to Bluesky.
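
        For a flavour of what such lightweight coordination code looks like, the sketch below follows the macro pattern from the public Karabo middlelayer documentation; it is an illustrative example, not code from this contribution, and the property value is hypothetical:

        from karabo.middlelayer import Macro, Slot, String

        class HelloBeamline(Macro):
            """Lightweight macro, started from the Karabo GUI or from iKarabo."""
            target = String(defaultValue="sample stage")  # hypothetical property

            @Slot()
            def report(self):
                # Real macros would connect to other devices (e.g. via
                # connectDevice/getDevice) and coordinate their slots here.
                print(f"Checking {self.target}...")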

        Speaker: David Hickin (European X-Ray Free-Electron Laser)
      • 383
        hklpy2 - 2nd generation Bluesky diffractometer controls

        Bluesky (1) enables experimental science at the lab-bench or facility scale. Diffractometers are specialized devices to probe the crystallography of a sample. A new Python package, hklpy2 (2), provides practical use of diffractometers, interfacing an underlying support library (such as Hkl from Synchrotron SOLEIL) with Bluesky as a PseudoPositioner (operating in both crystallographic axes and rotational axes). User-requested features have been designed into hklpy2, such as custom names for any of the rotational axes, access to different computational engines of the support library, choice of support libraries, and simple ways to save and restore configuration. It is easy to create a simulator or connect to a motorized diffractometer for any geometry provided by the underlying support libraries.
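
        To illustrate the PseudoPositioner concept mentioned above (a generic ophyd sketch, not hklpy2's actual API), a pseudo axis in crystallographic units can be mapped onto a real rotational axis, here via Bragg's law for a toy one-axis case:

        import numpy as np
        from ophyd import Component as Cpt, PseudoPositioner, PseudoSingle, SoftPositioner
        from ophyd.pseudopos import pseudo_position_argument, real_position_argument

        class TwoTheta(PseudoPositioner):
            """Toy mapping between d-spacing (pseudo) and detector angle (real)."""
            d = Cpt(PseudoSingle)                      # crystallographic axis, Angstrom
            tth = Cpt(SoftPositioner, init_pos=10.0)   # rotational axis, degrees

            wavelength = 1.54                          # Angstrom, assumed

            @pseudo_position_argument
            def forward(self, pseudo_pos):
                tth = 2 * np.degrees(np.arcsin(self.wavelength / (2 * pseudo_pos.d)))
                return self.RealPosition(tth=tth)

            @real_position_argument
            def inverse(self, real_pos):
                d = self.wavelength / (2 * np.sin(np.radians(real_pos.tth / 2)))
                return self.PseudoPosition(d=d)

        calc = TwoTheta(name="calc")
        print(calc.forward(calc.PseudoPosition(d=3.1)))   # d = 3.1 A -> two-theta angle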

        Speaker: Pete Jemian (Advanced Photon Source)
      • 384
        Enhancing efficiency in high-resolution 2D mapping: arbitrary geometry scanning for µXRD/SAXS and µXRF at MAX IV

        Advanced materials exhibit complex hierarchical architectures across multiple length scales, characterized by spatially heterogeneous chemical element distributions. Comprehensive understanding of such materials necessitates high-resolution mapping of both structural and elemental compositions. Two-dimensional micro X-ray diffraction/small-angle X-ray scattering (µXRD/SAXS) and micro X-ray fluorescence (µXRF) are well-suited for this purpose. Existing continuous scanning methods at MAX IV enable efficient mapping over large sample areas but are constrained to rectangular scan geometries. This limitation leads to inefficiencies when targeting arbitrarily shaped regions of interest, resulting in prolonged scan times due to the inclusion of irrelevant surrounding areas. To address this limitation and enhance scanning efficiency, we present a new arbitrary-geometry scanning solution utilizing a time-resolved hardware synchronization system. This advancement enables users to define and scan custom-shaped areas aligned with specific experimental demands, based on the probing technique rather than relying on optically visible boundaries. Preliminary tests at the DanMAX beamline demonstrate that the reduction in scan time is proportional to the decrease in sample area relative to the original rectangular scan region, thereby significantly enhancing the efficiency of high-resolution structural and compositional mapping workflows.
        Keywords: 2D mapping, Sardana, µXRD/SAXS, µXRF
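
        The geometric core of the idea can be pictured with a short, purely illustrative Python snippet (the actual MAX IV solution relies on Sardana and time-resolved hardware synchronization; shown here is only the selection of raster points inside a hypothetical polygonal region of interest):

        import numpy as np
        from matplotlib.path import Path

        # Hypothetical user-defined ROI (micrometres) instead of the bounding rectangle.
        roi = Path([(0, 0), (40, 5), (55, 30), (20, 45), (0, 25)])

        xs, ys = np.meshgrid(np.arange(0, 60, 1.0), np.arange(0, 50, 1.0))
        points = np.column_stack([xs.ravel(), ys.ravel()])
        inside = roi.contains_points(points)

        print(f"points scanned: {inside.sum()} of {len(points)} "
              f"({100 * inside.sum() / len(points):.0f}% of the bounding box)")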

        Speaker: Yimeng Li (MAX IV Laboratory)
    • THBR MC10 Software Architecture and Technology Evolution Red Lacquer Room

      Red Lacquer Room

      Palmer House Hilton Chicago

      17 East Monroe Street Chicago, IL 60603, United States of America
      Conveners: Ralph Lange (ITER Organization), Tim Wilksen (Deutsches Elektronen-Synchrotron DESY)
      • 385
        The ELT primary mirror fault detection, isolation, and recovery software

        The 39m diameter primary mirror of the ESO Extremely Large Telescope, currently under construction at Cerro Armazones in Chile, is composed of 798 hexagonal segments. Each segment is equipped with several sensors and actuators to measure and adjust its position. The control algorithm uses 24000 I/O points, distributed over 1195 square meters, to dynamically maintain the alignment and the shape of the mirror. The reliability of this large number of devices is improved by the introduction of a fault detection, isolation, and recovery (FDIR) strategy. The main goal of the FDIR strategy is to enable the control loop to continue working even in case of failures, by identifying and masking faulty devices in real time and avoiding failure propagation. This paper provides an overview of the system, summarizes the main availability requirements, and illustrates how the FDIR software has been designed, implemented, and tested. The detection and isolation of failing edge sensor devices, responsible for measuring the difference in piston, shear, and gap between adjacent segments, is taken as a running example. Advantages and limitations of the presented design are summarized in the conclusions.

        Speaker: Luigi Andolfato (European Southern Observatory)
      • 386
        Experimental supervision tools software upgrade on the Laser Megajoule facility

        The Laser MegaJoule, a 176-beam laser facility developed by CEA, is located near Bordeaux. It is part of the French Simulation Program, which combines improvement of theoretical models used in various domains of physics and high performance numerical simulation. It is designed to deliver about 1.4 MJ of energy on targets, for high energy density physics experiments, including fusion experiments.
        The LMJ technological choices were validated on the LIL, a scale-1 prototype composed of one bundle of four beams. The first bundle of eight beams was commissioned in October 2014 with the first experiment on the LMJ facility. Operational capabilities are increasing gradually every year until full completion by 2026. By the end of 2025, 22 bundles of eight beams will be assembled and 19 bundles are expected to be fully operational.
        We present a successful modernization strategy of operational tools for the LMJ, enhancing usability and maintainability. Our efforts focused on replacing legacy systems, by migrating from a monolithic client application to a Django-React web app, providing a more accessible and maintainable interface.
        Additionally, we streamlined the XML tree-based configuration of calculation scenarios by migrating toward a robust PostgreSQL database.
        Furthermore, we overhauled our deployment process by using Ansible, increasing operational efficiency and consistency.

        Keywords: Laser facility, LMJ, IT, Deployment, Data migration

        Speaker: Nicolas Roux (Centre d'Études Scientifiques et Techniques d'Aquitaine)
      • 387
        Evaluating Function-as-a-Service (FaaS) frameworks for the Accelerator Control System

        As particle accelerator control systems evolve in complexity and scale, the need for responsive, scalable, and cost-effective computational infrastructure becomes increasingly critical. Function-as-a-Service (FaaS) offers an alternative to traditional monolithic architectures by enabling event-driven execution, automatic scaling, and fine-grained resource utilization. This paper explores the applicability and performance of FaaS frameworks in the context of a modern particle accelerator control system, with the objective of evaluating their suitability for short-lived and triggered workloads.
        In this paper, we evaluate prominent open-source FaaS platforms in executing functional logic, triggers, and diagnostics routines. Evaluation metrics include cold-start latency, scalability, performance, and integration with other open-source tools such as Kafka. Experimental workloads were designed to simulate real-world control tasks implemented as stateless FaaS functions. These workloads were benchmarked under various invocation loads and network conditions. Self-hosted FaaS platforms, when deployed within accelerator networks, offer greater control over the execution environment, better integration with legacy systems, and support for real-time guarantees when paired with message queues. Based on lessons learned and the evaluation metrics, this paper describes the reliability of FaaS frameworks for Accelerator Control Systems (ACS).
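
        As an illustration of the kind of measurement involved (an editorial sketch, not the authors' benchmark harness; the gateway URL and payload are hypothetical), cold-start versus warm invocation latency of an HTTP-triggered function can be probed as follows:

        import statistics
        import time
        import requests

        URL = "http://faas-gateway.local/function/bpm-diagnostics"  # hypothetical endpoint

        def invoke():
            t0 = time.perf_counter()
            requests.post(URL, json={"bpm": "BPM:01", "samples": 1024}, timeout=30)
            return time.perf_counter() - t0

        cold = invoke()                       # first call: a container may need to start
        warm = [invoke() for _ in range(20)]  # subsequent calls: function is resident

        print(f"cold start : {cold * 1e3:.1f} ms")
        print(f"warm median: {statistics.median(warm) * 1e3:.1f} ms")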

        Speaker: Amol Jaikar (Fermi National Accelerator Laboratory)
      • 388
        EBPFCat - an open source EtherCAT implementation for experimental control

        EtherCAT has become a popular communication protocol for controlling equipment in large facilities. Although it is an open standard that can be used on common hardware, commercial solutions on specialized hardware are usually employed. Here we introduce EBPFCat, a fully open-source implementation of the EtherCAT protocol that can be used on any modern Linux system without the need to write kernel modules. In EBPFCat, non-time-critical parts are implemented in Python, while real-time performance and reliability are ensured through the use of EBPF, a virtual machine embedded within the Linux kernel. EBPFCat contains a code generator that allows EBPF programs to be generated directly from Python without the need for separate compilation tools. These programs are verified for correctness by the Linux kernel and executed therein, allowing for feedback bandwidths exceeding 10 kHz. The EBPF code generator is a generic solution that can also be used for unrelated EBPF programs. EBPFCat is a standalone project that operates independently of any specific high-level control system. Nevertheless, it has already been integrated into Karabo*, the control system used at the European XFEL. We show applications for a large variety of control problems, from simple vacuum systems to intricate feedback loops for motion control.

        Speaker: Martin Teichmann (European X-Ray Free-Electron Laser)
      • 389
        Updates on the Karabo control system

        Karabo is a SCADA system developed at the European XFEL to facilitate operations in a complex and flexible control environment, and to ingest and process data from diverse sources, including MHz-rate detectors, frequently with tight time correlation requirements.
        In this contribution an overview of the current state of the Karabo ecosystem is given and recent developments are presented. These include the introduction of authentication and authorization, the transition from Boost.Python to pybind11 and from Boost to the C++ standard library, and additional modernization of the codebase. These significant changes are reflected in a new major version, Karabo 3, which replaces the 10-year-old Karabo 2 branch.

        Speaker: Steffen Hauf (European X-Ray Free-Electron Laser)
      • 390
        Towards asynchronous control systems, an asyncio implementation of OPC UA using TANGO green modes

        The ALBA Synchrotron (Barcelona, Spain) has been operating as a 3 GeV facility for over 10 years and is now preparing its transition to ALBA II, a fourth-generation light source. As part of this planned upgrade, we are evaluating state-of-the-art technologies that could shape the future of our Tango Control System. In particular, we investigate how asynchronous programming can enhance system responsiveness while reducing latency and resource usage. This study focuses on applying asynchronous communication paradigms at all levels between our Taurus SCADA UIs, Tango Control System and PLC-based systems — used for Equipment (EPS) and Personnel (PSS) Protection as well as automation. In this context, we explore the adoption of OPC Unified Architecture (OPC UA), a self-descriptive industrial standard for secure, platform-independent communication, alongside asyncio, the Python standard library for coroutine-based asynchronous programming, as supported by the FreeOpcUa library and "green" modes of PyTango, the Python binding for Tango Controls. Our goal is to demonstrate a modern, flexible, vendor-independent and high-performance control strategy for ALBA II Control System. We provide a comprehensive comparison and benchmark between the proposed solution and existing PyPLC Tango Device Servers.
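
        As a minimal sketch of the combination described above (assuming the asyncua client and PyTango's asyncio green mode; the OPC UA endpoint, node id and attribute name are hypothetical, and a production device would keep a persistent session and subscriptions rather than reconnecting per read):

        from asyncua import Client
        from tango import GreenMode
        from tango.server import Device, attribute, run

        OPCUA_URL = "opc.tcp://plc-eps-01.example:4840"   # hypothetical PLC endpoint
        NODE_ID = "ns=4;s=EPS.CoolingFlow"                # hypothetical OPC UA node

        class EpsBridge(Device):
            green_mode = GreenMode.Asyncio                # attribute reads are coroutines

            @attribute(dtype=float)
            async def cooling_flow(self):
                # Reconnecting on every read keeps the sketch short; a real device
                # would hold one session and use OPC UA subscriptions instead.
                async with Client(OPCUA_URL) as client:
                    return await client.get_node(NODE_ID).read_value()

        if __name__ == "__main__":
            run((EpsBridge,))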

        Speaker: Emilio Jose Morales Alejandre (ALBA Synchrotron (Spain))
    • 12:15
      Lunch
    • THCG MC15 Feedback Systems and Optimization Grand Ballroom

      Grand Ballroom

      Palmer House Hilton Chicago

      17 East Monroe Street Chicago, IL 60603, United States of America
      Conveners: Daniel Tavares (Brazilian Synchrotron Light Laboratory), Yuke Tian (Brookhaven National Laboratory)
      • 391
        Fast orbit feedback using the GSVD for systems with multiple slow corrector arrays

        Advances in detector speed and resolution at 4th generation light sources make electron beam stability a critical requirement. At Diamond-II, the fast orbit feedback (FOFB) will stabilise the beam using 252 beam position monitors and two actuator arrays of 252 slow and 138 fast correctors. We previously proposed an approach based on the generalised singular value decomposition (GSVD), a two-matrix factorisation, to decouple the system into two-input modes controlled by both slow and fast correctors, and single-input modes controlled by slow correctors alone. This approach assumed identical dynamics for all correctors within each array. However, recent developments have shown that variations in vessel geometries and cooling channels introduce significant differences in corrector bandwidths, particularly for the slow correctors in the horizontal plane. Specifically, Diamond-II will have at least three distinct types of slow correctors in the horizontal plane. To address this, we extend the GSVD-based approach with balancing input filters to incorporate multiple slow corrector arrays with different dynamics. We also introduce a new regularisation matrix that preserves controller properties between the original and mode spaces for any choice of regularisation parameter. We analyse the resulting control system and present simulation results from preliminary Diamond-II data, demonstrating that the control specifications are met.
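
        For readers unfamiliar with the factorisation, the generic GSVD of a matrix pair takes the following standard form (notation ours; the mapping onto the slow and fast response matrices follows the authors' earlier work):

        \[
          A = U\,C\,X^{\mathsf{T}}, \qquad
          B = V\,S\,X^{\mathsf{T}}, \qquad
          C^{\mathsf{T}}C + S^{\mathsf{T}}S = I,
        \]

        where $A \in \mathbb{R}^{m\times n}$ and $B \in \mathbb{R}^{p\times n}$, $U$ and $V$ are orthogonal, and $X$ is nonsingular; the shared factor $X$ is what couples the two actuator arrays into common modes.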

        Speaker: Idris Kempf (University of Oxford)
      • 392
        Automation with Bayesian optimization in the Karabo SCADA system

        Automation within the control system environment is a strategic goal at the European XFEL facility for various reasons: staff can be allocated more efficiently, procedures can be standardized to increase data quality, operator errors are minimized, and less experienced users can operate instruments. Prime candidates for automation are often recurring procedures during facility operation, such as beam-alignment tasks. Therein, a set of motors/mirrors is moved to optimize a characteristic property of the system at hand, such as the beam intensity or the beam position. This type of problem can be described more generally as the optimization of an expensive-to-evaluate, and often multivariate, black-box function. A well-established and efficient method to address such problems is Bayesian Optimization (BO). In this contribution a software package is presented which uses BO to automate the setup of scientific instrumentation and which is highly adaptable to a broad range of use cases. The software is implemented within the Karabo supervisory control and data acquisition system and uses the botorch library for BO.
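
        To make the approach concrete, a generic BO loop with botorch (recent versions) might look like the sketch below; this is an editorial illustration with a synthetic objective and hypothetical "motor" coordinates, not the Karabo-integrated software described in the contribution:

        import torch
        from botorch.models import SingleTaskGP
        from botorch.fit import fit_gpytorch_mll
        from botorch.acquisition import ExpectedImprovement
        from botorch.optim import optimize_acqf
        from gpytorch.mlls import ExactMarginalLogLikelihood

        def beam_intensity(x):                       # stand-in for the real measurement
            return -((x - 0.3) ** 2).sum(dim=-1, keepdim=True)

        bounds = torch.tensor([[0.0, 0.0], [1.0, 1.0]], dtype=torch.double)  # motor ranges
        X = torch.rand(5, 2, dtype=torch.double)                             # initial probes
        Y = beam_intensity(X)

        for _ in range(15):
            gp = SingleTaskGP(X, Y)
            fit_gpytorch_mll(ExactMarginalLogLikelihood(gp.likelihood, gp))
            ei = ExpectedImprovement(gp, best_f=Y.max())
            cand, _ = optimize_acqf(ei, bounds=bounds, q=1, num_restarts=10, raw_samples=64)
            X = torch.cat([X, cand])
            Y = torch.cat([Y, beam_intensity(cand)])

        print("best setting:", X[Y.argmax()], "intensity:", Y.max().item())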

        Speaker: Florian Sohn (European X-Ray Free-Electron Laser)
      • 393
        Design of a robust controller based on loop-shaping for quick-scanning monochromator

        The Quick-scanning Double Crystal Monochromator (QDCM) is the core piece of equipment of the Quick-scanning X-ray Absorption Fine Structure (QXAFS) experimental method. To realize the sub-second temporal resolution of the QXAFS method, the QDCM is responsible for achieving a reproducible oscillatory movement of the monochromator crystals with a given frequency and amplitude around the Bragg angle. A new robust controller design method is proposed for the QDCM system. This controller, based on the loop-shaping principle in the frequency domain, considers the effect of nonlinearities and uncertainties in the low-frequency region and the suppression of the vibration mode in the high-frequency region. Results are presented to demonstrate the feasibility of this method for QDCM fast-scan tracking.
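
        As a generic illustration of frequency-domain loop shaping of this kind (not the authors' controller; the plant parameters are invented), the python-control package can be used to check the margins of a design combining phase lead at the crossover with a notch at a lightly damped resonance:

        import numpy as np
        import control

        # Hypothetical plant: rigid-body inertia plus a lightly damped
        # structural resonance of the crystal cage at 300 Hz.
        wr, zr = 2 * np.pi * 300, 0.01
        rigid = control.tf([1.0], [1.0, 0.0, 0.0])              # double integrator
        mode = control.tf([wr**2], [1, 2 * zr * wr, wr**2])
        plant = rigid * mode

        wc = 2 * np.pi * 20                                     # target crossover, 20 Hz
        lead = control.tf([1 / (wc / 4), 1], [1 / (wc * 4), 1]) # phase lead around wc
        notch = control.tf([1, 2 * 0.05 * wr, wr**2],
                           [1, 2 * 0.7 * wr, wr**2])            # damp the 300 Hz mode
        loop = (wc**2 / 4) * lead * notch * plant               # gain for |L(j wc)| ~ 1

        gm, pm, wcg, wcp = control.margin(loop)
        print(f"gain margin {20 * np.log10(gm):.1f} dB, "
              f"phase margin {pm:.1f} deg at {wcp / (2 * np.pi):.1f} Hz")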

        Speaker: Zongyang Yue (Institute of High Energy Physics)
      • 394
        Energy optimal predictive controller for chiller-based cooling plants of accelerator facilities

        Cooling plants are energy-intensive systems which provide the thermodynamic conditions required for the smooth operation of accelerator facilities. Among this class of plants, chilled water production systems, used to bring cooling water from high to low temperatures, are commonly found. The automatic control system of such a plant is usually complex due to the large number of correlated control inputs available, rendering it particularly challenging to minimize its energy consumption when relying solely on conventional control methods (e.g., PID and/or if-based logic). In 2024, CERN's Engineering department partnered with Politecnico di Milano to develop an energy-optimal model-predictive controller (MPC) for one of the critical chilled water production plants used for the cooling of CERN's flagship accelerator, the Large Hadron Collider. This 18-month collaboration is well underway, and this paper explores the motivational and organizational aspects of the project, highlights the technical solution proposed, and describes the challenges faced to date and how these were overcome. Simulation-based results are presented for a detailed performance comparison between MPC and the currently used rule-based logic controller. Finally, the architecture of the controls and operator interface for the MPC deployment in the real plant is discussed, in view of extending this optimal control solution to numerous similar systems at CERN.
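
        For readers unfamiliar with the technique, the sketch below shows the bare structure of such an energy-optimal MPC as a small convex program (cvxpy); the thermal model, numbers, and constraints are invented for illustration and bear no relation to the CERN plant:

        import numpy as np
        import cvxpy as cp

        N, dt = 24, 0.25                    # horizon: 24 steps of 15 minutes
        load = 2.0 + 0.8 * np.sin(np.linspace(0, np.pi, N))   # forecast heat load, MW-th
        cop = 5.0                           # assumed chiller coefficient of performance

        T = cp.Variable(N + 1)              # chilled-water supply temperature, degC
        q = cp.Variable(N, nonneg=True)     # delivered cooling power, MW-th

        constraints = [T[0] == 7.0, q <= 4.0]
        for k in range(N):
            constraints += [T[k + 1] == T[k] + dt * (0.5 * load[k] - 0.6 * q[k]),
                            T[k + 1] >= 5.5, T[k + 1] <= 8.0]

        energy = cp.sum(q) * dt / cop       # electrical energy proxy, MWh-e
        cp.Problem(cp.Minimize(energy), constraints).solve()
        print(f"predicted electrical energy over the horizon: {energy.value:.2f} MWh")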

        Speaker: Diogo Monteiro (European Organization for Nuclear Research)
      • 395
        PETRA-IV FOFB System Integration Test Setup

        The PETRA-IV Fast Orbit Feedback (FOFB) system will be a large-scale Multi-Input Multi-Output (MIMO) control system, utilizing 790 Beam Position Monitors (BPM) and 560 Fast Corrector Magnets (FCM) to maintain the desired orbit trajectory. Data acquisition and distribution will be managed across 16 supply areas and connected via an extended star network topology. This contribution focuses on system integration test setups while describing the Model-Based Design (MBD) methodologies that are being used for developing, verifying, and commissioning the complete system step by step. Integration of the components of such a large system must be systematic so that potential issues can be isolated and fixed efficiently. The setups will also be used for the characterization of sensors, actuators, and transmission lines. Subsystem identification is of utmost importance for a comprehensive understanding of the system dynamics, which will guide the design of appropriate filters and control strategies to ensure optimal orbit stabilization performance. The analysis will also precisely assess the overall system latency, which is critical for feedback bandwidth and stability.

        Speaker: Burak Dursun (Deutsches Elektronen-Synchrotron DESY)
    • THCR MC16 Data Management and Analytics Red Lacquer Room

      Red Lacquer Room

      Palmer House Hilton Chicago

      17 East Monroe Street Chicago, IL 60603, United States of America
      Conveners: Chris Roderick (European Organization for Nuclear Research), Mr Gary Croke (Thomas Jefferson National Accelerator Facility)
      • 396
        Data management infrastructure for European XFEL.

        Effective data management is crucial for accessible and usable research data. This presentation will describe the data infrastructure at the European XFEL, built on a four-layer storage architecture designed for high-throughput data handling. The first layer, Online storage, acts as a high-speed cache for data rates up to 15 GB/s. The second layer, High-Performance Storage, supports real-time processing and analysis; both layers are connected via a 1 Tb/s InfiniBand link spanning 4.4 km to the DESY computing center. The third layer, Mass Storage, enables mid-term access for in-depth analysis, while the Tape Archive ensures secure long-term storage beyond 10 years. Integrated with computing clusters, the system supports near-online and offline analysis as well as data export.
        Capable of processing up to 2 PB of data per day, the infrastructure demonstrates exceptional performance and reliability.

        Speaker: Janusz Malka (European X-Ray Free-Electron Laser)
      • 397
        Data automation strategy at ESRF

        The ESRF has initiated an ambitious program to automate data processing and management across all 45 of its beamlines. Since the successful installation and commissioning of the ESRF-EBS new storage ring in 2020, there has been a significant increase in X-ray flux. A comprehensive strategy has been developed to optimize the use of the increasing quantities of data generated by the new advanced detectors. This strategy consists of several components, including the Lima2 project [1], which provides scalable detector control and early data processing and reduction. The new beamline control system, BLISS [2], coordinates synchronization and data acquisition. Additionally, the Data Automation software, EWOKS [3], processes and exploits the produced data by running data processing workflows either triggered by BLISS online or run offline. DRAC [4] complements the strategy by implementing a database and associated tools for managing all data and metadata generated during experiments and the associated automated pipelines. We will examine how DRAC, EWOKS, and BLISS work together, the techniques that have already been automated, the potential extent of data automation and the orchestration function of Blissdata.
        [1] Lima2: ESRF Software Strategy for high-throughput detectors (this conference)
        [2] https://doi.org/10.1080/08940886.2023.2277141
        [3] https://doi.org/10.1080/08940886.2024.2432305
        [4] https://doi.org/10.1080/08940886.2024.2432816

        Speaker: Vicente Rey Bakaikoa (European Synchrotron Radiation Facility)
      • 398
        A long term storage solution for Tango attribute data at SKAO

        At the Square Kilometre Array Observatory (SKAO), monitoring data is ingested from distributed subsystems via the Tango Controls archiver, with attribute data stored in the Engineering Data Archive (EDA). The EDA uses a PostgreSQL database with the TimescaleDB extension, offering a performant solution for time-series storage. However, as the SKAO infrastructure scales, PostgreSQL becomes impractical for long-term retention due to cost and operational complexity. This paper outlines a long-term storage strategy based on S3-compatible object storage. The solution decouples operational and archival storage by exporting and serializing Tango attribute data into efficient formats like Apache Parquet for storage in S3. Metadata indexing ensures the data remains discoverable and retrievable over time. The approach draws from the experience of the MeerKAT telescope, a precursor to SKAO operated by SARAO. MeerKAT faced similar challenges archiving large volumes of telemetry data and adopted a combined database and long-term storage model. We also describe supporting tools and processes for managing data lifecycle transitions. The paper concludes with open challenges and future directions for integrating this approach into observatory-wide data access frameworks, ensuring engineering telemetry remains accessible throughout the SKAO system lifecycle.
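
        The export path can be pictured with a few lines of pyarrow (an editorial sketch: the attribute name, column layout, bucket and endpoint are hypothetical, not the EDA schema):

        import pyarrow as pa
        import pyarrow.parquet as pq
        from pyarrow import fs

        # A small batch of archived attribute history, serialized column-wise.
        table = pa.table({
            "attribute": ["ska_mid/tm/leaf_node/att1"] * 3,     # hypothetical attribute FQDN
            "timestamp": pa.array([1718000000.0, 1718000000.5, 1718000001.0]),
            "value": [0.12, 0.13, 0.11],
            "quality": ["ATTR_VALID"] * 3,
        })

        s3 = fs.S3FileSystem(endpoint_override="https://s3.archive.example",  # hypothetical
                             region="us-east-1")
        pq.write_table(table, "eda-archive/2024/06/att1.parquet",
                       filesystem=s3, compression="zstd")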

        Speaker: Mauricio Zambrano (SKA Observatory)
      • 399
        Status of the ITER data handling ecosystem

        Data is one of the key deliverables of the ITER machine. Since 2019, the ITER data handling system has gradually been extended to cope with the commissioning of new plant systems and new needs. This contribution gives an overview of the different sub-systems which compose the data handling ecosystem, from data archivers to visualization tools. We will summarize the short- and long-term storage for data archiving and its challenges. Different data archivers based on HDF5 have been developed to collect the data coming from the fast and slow systems. Techniques to speed up data retrieval have been implemented via data processors. We have extended the UDA [Unified Data Access] library, developed by UKAEA for MAST, to support our data models by adding a new plugin which uses the client-server model that UDA is based on. The same server with a different plugin is also used offline to retrieve data for processing, which is based on the IMAS [Integrated Modelling & Analysis Suite] framework. Our ecosystem includes a web application which allows plotting both archived data and live data. For interoperability purposes with other systems, a plugin for Grafana has been developed to create dashboards which mix archived data and monitoring metrics. In collaboration with our Science and Diagnostic teams, we are developing visualization tools to allow plotting offline data and experimental data. Finally, we will conclude with key takeaways and an outlook on the impact of AI on the data handling system.

        Speaker: Lana Abadie (ITER)
      • 400
        Integrated and automated data management at European XFEL empowered by myMdC metadata catalogue

        As data volumes and complexity continue to rise at European XFEL, the need for integrated, sustainable metadata solutions continues to be critical. We introduce myMdC - a centralized metadata catalogue available at https://in.xfel.eu/metadata * - and highlight its key role in supporting the facility’s data management strategy.
        In operation since the first day of user experiments in 2017, myMdC ensures real-time injection, documentation and cataloging of all acquired datasets, forming the backbone of data and metadata operations at European XFEL. Its deep integration with facility services enables immediate dataset tracking from the moment data is acquired, supporting automation across inventory, access control during embargo periods, metadata preservation, and FAIR ** data workflows.
        Functioning as a central hub, myMdC orchestrates communication between facility-external and facility-internal services, enabling consistent, scalable, and efficient management of scientific data. Its real-time capabilities also support seamless data and metadata registration, linking, DOI *** publication, and validation - enhancing discoverability, traceability, and long-term accessibility of datasets.
        By consolidating metadata management into a unified platform, myMdC empowers European XFEL to maintain scalable, transparent, and future-proof data practices - ensuring that its scientific output remains robust and interoperable within the global research community.

        Speaker: Luis Maia (European X-Ray Free-Electron Laser)
    • THMG Mini-Orals (MC07, MC11, MC12) Grand Ballroom

      Grand Ballroom

      Palmer House Hilton Chicago

      17 East Monroe Street Chicago, IL 60603, United States of America
      Convener: Enrique Blanco Vinuela (European Organization for Nuclear Research)
      • 401
        Access control interlock system upgrade at ANL/APS

        This presentation will describe recent hardware & software updates to the Access Control Interlock System of the Advanced Photon Source at Argonne National Laboratory. Topics of interest will include replacement of outdated PLC hardware, updating EPICS software (deploying Microsoft Excel to build EPICS databases), and replacing outmoded control displays.
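
        As an illustration of the spreadsheet-driven database generation mentioned above (an editorial sketch; the workbook name, column headers, and record template are hypothetical and unrelated to the actual ACIS tooling):

        import pandas as pd

        # Hypothetical template for a binary-input record, one per spreadsheet row.
        TEMPLATE = (
            'record(bi, "{pv}") {{\n'
            '    field(DESC, "{desc}")\n'
            '    field(ZNAM, "OK")\n'
            '    field(ONAM, "FAULT")\n'
            '}}\n'
        )

        sheet = pd.read_excel("acis_channels.xlsx")        # hypothetical workbook
        with open("acis_generated.db", "w") as db:
            for _, row in sheet.iterrows():
                db.write(TEMPLATE.format(pv=row["PV Name"], desc=row["Description"]))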

        Speaker: James Stevens (Argonne National Laboratory)
      • 402
        The novel and robust design of fast protection system for CSNS-II

        The high reliability of the fast protection system (FPS) is crucial for the efficient operation of the entire large-scale scientific facility of the China Spallation Neutron Source (CSNS). The construction of CSNS-II began nearly two years ago. In this new phase, an advanced superconducting linear accelerator section is scheduled to be introduced. To prevent operational accidents, such as "temperature rise loss caused by beam loss in the superconducting section," it is necessary to enhance the existing fast protection mechanisms. Based on the characteristics of the interlocking requirements, we plan to implement a hardware architecture comprising "high-performance FPGA + Rocket IO + ATCA." While maintaining the core functions of the FPS for CSNS, we will develop the transmission link and protection strategy by integrating the beam loss monitor (BLM) and differential beam current monitor (D-BCM). The overall response time of the FPS for CSNS-II should not exceed 8 microseconds, accounting for the fiber optic transmission delay, hardware circuit delay, interlocking logic processing, and so on. It is worth emphasizing that a highly available, reliable, efficient, and fast protection system is essential to ensure that the CSNS-II accelerator operates stably and safely in the long term.

        Speaker: Peng Zhu (Institute of High Energy Physics)
      • 403
        The control and safety system for the JULIC neutron platform target station

        For the future high-current accelerator-driven neutron source HBS (High Brilliance Neutron Source) at Forschungszentrum Jülich, a prototype target station has been developed, which was operated successfully at the JULIC neutron platform and will be relocated to the ARGITU accelerator at ESS Bilbao in the future. A major safety-related feature of the target station is the automatic motorized opening of the shielding gate, which could potentially expose humans to radiation or crushing risks. Based on a risk assessment according to EN ISO 12100, a safety system has been designed that fulfills the requirements of the European machinery directive, which is necessary for CE marking. The safety system relies on safety edges, enabling switches, emergency stop switches and an interface to the personnel safety system of the accelerator. The functional safety, achieving PL d according to EN ISO 13849, has been implemented with a Siemens S7-1500 safety PLC, which is integrated into an overall control system based on TANGO and NICOS.

        Speaker: Harald Kleines (Forschungszentrum Jülich)
      • 404
        ESS accelerator personnel safety system journey towards steady state operation

        The European Spallation Source (ESS) Accelerator Personnel Safety System (ACC PSS) journey towards steady state operation has been a remarkable experience. ACC PSS serves the safety interlock and access control functionalities for the Linac.

        The safety interlock mitigates various hazards (ionising radiation, high voltage) associated with the proton beam operation and RF equipment. The access control is achieved by implementing an integrated access management, where a Personal Access Station (PAS) is merged with a Physical Access Control System (PACS), allowing fully automated passage.

        Thorough verification and validation have been performed to attain seamless operation. This paper provides lessons learned and investigations throughout the ESS ACC PSS commissioning journey.

        Speaker: Vincent Harahap (European Spallation Source)
      • 405
        Design and performance of the SNS credited beam power limit system

        The US Spallation Neutron Source (SNS) is the world’s most powerful pulsed spallation neutron source. The recently completed Proton Power Upgrade (PPU) project doubles the available average beam power from 1.4 to 2.8 MW. However, 0.8 MW of that is intended for a future second target station (STS), which is in the preliminary design phase. The mercury-based first target station (FTS) has a safety design basis limit of 2.0 MW, thus a precision credited safety system is needed to ensure the proton beam power cannot exceed this limit. The Beam Power Limit System (BPLS) is a novel FPGA/PLC-based safety-credited instrument that measures beam energy and charge to calculate beam power and will shut off the proton beam if the power exceeds 2 MW + 10% for a specified period of time. This paper will discuss the design challenges and operational performance of the BPLS as a credited system.
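
        The underlying calculation is simple enough to show (an editorial back-of-the-envelope illustration; the per-pulse charge below is an assumed value, not a quoted SNS parameter):

        # Average beam power from energy and charge, as a plain sanity check.
        energy_GeV = 1.3            # proton kinetic energy after the PPU
        charge_per_pulse_uC = 25.6  # assumed extracted charge per pulse, microcoulombs
        rep_rate_Hz = 60            # pulse repetition rate

        # P [W] = E [eV] * I [A]; average current = charge per pulse * repetition rate.
        power_MW = energy_GeV * 1e9 * (charge_per_pulse_uC * 1e-6 * rep_rate_Hz) / 1e6
        print(f"average beam power ≈ {power_MW:.2f} MW  (trip threshold: 2.0 MW + 10%)")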

        Speaker: Kelly Mahoney (Oak Ridge National Laboratory)
      • 406
        Logging a new era at the APS using BELY

        As the “Dark Year” of Advanced Photon Source Upgrade (APS-U) concludes, a new logbook is essential to document the process of bringing the facility back online. The Best Electronic Logbook Yet (BELY) has been developed and deployed as a solution to fulfill this requirement. This paper dives into the development process and technologies used to create BELY. Additionally, it will explore the features BELY provides to address all of its operational requirements. One of the significant strengths of BELY is its broad adoption across the APS, driven by its well-organized structure. The widespread use at the APS significantly enhances communication between teams responsible for maintaining the machine, ensuring that information is easily accessible, and collaboration is seamless. Furthermore, the paper discusses various uses of BELY. Finally, it presents ideas for the future development and enhancement of BELY.

        Speaker: Dariusz Jarosz (Advanced Photon Source, Argonne National Laboratory)
      • 407
        Using Grafana as a representation layer for control data at the European XFEL

        At European XFEL the control system Karabo has been developed for operating photon beamlines and instruments. The time-series database InfluxDB is used as a backend for logging control data. Whilst Karabo exposes interfaces to retrieve and present the historical data stored in InfluxDB, there are benefits in using an established solution such as Grafana. This interface enables close-to-real-time monitoring of the control data using web-based, customizable panels called dashboards.
        In this paper we present the example of using Grafana as a presentation layer for the Karabo control system at the Data Operation Center (DOC) at European XFEL. Common monitoring scenarios and diagnostics use cases, as well as lessons learned, are discussed.
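
        For orientation, control data logged to InfluxDB can also be queried programmatically with the official influxdb-client package, as in the hedged sketch below (URL, bucket, measurement and field names are hypothetical; the real schema is defined by the Karabo data logger):

        from influxdb_client import InfluxDBClient

        flux = '''
        from(bucket: "karabo")
          |> range(start: -6h)
          |> filter(fn: (r) => r._measurement == "SA1_XTD9_OPT/MOTOR/PITCH"
                           and r._field == "actualPosition")
          |> aggregateWindow(every: 1m, fn: mean)
        '''

        with InfluxDBClient(url="http://influxdb.example:8086",
                            token="***", org="xfel") as client:
            for table in client.query_api().query(flux):
                for rec in table.records:
                    print(rec.get_time(), rec.get_value())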

        Speaker: Valerii Bondar (European X-Ray Free-Electron Laser)
      • 408
        A new accelerator user experience working group

        During February 26–28, 2025, the first-ever particle accelerator user interface/user experience (UI/UX) workshop was held at SLAC. Attendees had backgrounds ranging from software development to control systems management and human factors science. The workshop began with participants discussing the current state of UI/UX procedures and practices at their respective laboratories to share experiences and learn from one another. Additional discussions focused on how to effectively integrate UI/UX best practices into actionable goals for developers, managers, and operators when working on new or existing interfaces. The goal of the working group is to create a website that will guide developers, managers, scientists, and end users at accelerator laboratories in incorporating UI/UX best practices into software development. The working group continues to meet virtually toward this goal, and is planning a second workshop for next year.

        Speaker: Tiffany Tran (SLAC National Accelerator Laboratory)
      • 409
        Enhancing user experience through GUIs and helper routines at MOGNO, the micro and nano tomography beamline at SIRIUS

        Since its inception at the first Brazilian synchrotron (UVX), the tomography beamline group has been concerned with the beamline’s usability, recognizing that beamline operation can be a significant challenge for users. This concern has been carried forward to Mogno* (the micro and nano tomography beamline at SIRIUS) and is one of the cornerstones of user-side software development, aimed at a diverse user pool that encompasses researchers from various fields employing a range of tomography techniques (e.g., classic, phase contrast, zoom, and in-situ 4D).
        To effectively support this heterogeneity of users and techniques, the development of Mogno’s operation software stack focused on three key objectives: providing an easy-to-use set of tools enabling user experiments; ensuring a smooth learning curve for new and experienced users; and enabling the beamline’s staff to efficiently prepare it for experiments. The resulting software stack is composed of a set of libraries and helper scripts built with Python (e.g., making sample alignment easier), which are run from GUIs (Graphical User Interfaces) developed with PyQt and PyDM. More recently, new possibilities for improving the user experience are being explored using web applications, which can lead to more accessible beamline experiment information.

        Speaker: Lucca Campoi (Brazilian Synchrotron Light Laboratory)
      • 410
        A multi-level monitoring interface for the SKA Central Signal Processor using the Taranta synoptic view

        Graphical user interfaces (GUIs) play a critical role in the operation and maintenance of large-scale distributed control systems. In this work, we present a synoptic-based visualization for the Central Signal Processor (CSP) of the SKA (Square Kilometre Array) telescope, developed using Taranta, a web-based visualization tool for TANGO systems. The synoptic view provides an intuitive, multi-level representation of CSP, from the upper-level control managed by CSP.LMC down to individual data processing subsystems, including CBF, PSS, and PST devices.
        The synoptic diagrams are created using Inkscape and exported as SVG files, enabling flexible and straightforward integration with Taranta.
        A key focus of this work is the collaboration between different subsystem teams, which was essential to accurately model and visualize the full CSP hierarchy.
        We also define a strategy to validate the effectiveness and the usability of the GUI, ensuring it delivers tangible value in improving the monitoring and troubleshooting capabilities of complex control systems such as SKA.

        Speaker: Gianluca Marotta (INAF - OAA (Arcetri Astropysical Observatory))
      • 411
        myLog: A modern, integrated logbook for scientific collaboration at European XFEL

        Scientific experiments at large-scale research facilities require flexible and collaborative tools to document, discuss, and track experimental progress. We introduce myLog, a new logbook solution developed at European XFEL after four years of iterative prototyping and user engagement.
        Designed to meet the evolving needs of scientists and support teams, myLog offers a user-friendly interface built on a robust architecture. It leverages Zulip for threaded, real-time communication and uses the facility’s metadata catalogue, myMdC, to orchestrate and support users’ adoption of new workflows and enriched interfaces and GUI**. This integration enables a seamless connection between discussions, control system events, experiment dataset metadata and data analysis artifacts.
        myLog emphasizes user-centric organization. Experiment groups can define how information and notifications are structured, while Principal Investigators retain full control over access permissions. Real-time integration with the Control System allows automatic logging of key events, complemented by manual entries when needed.
        Importantly, myLog respects data governance by enforcing embargo policies, ensuring sensitive information is managed appropriately. By combining communication, metadata, and automation in a single platform, myLog provides a modern, scalable approach to scientific logging and collaboration. We aim to present the solution overview and its integration into the facility infrastructure.

        Speaker: Luis Maia (European X-Ray Free-Electron Laser)
      • 412
        An Agile/XP software development process for modernizing the accelerator control system at Fermilab

        Fermilab is undergoing the most ambitious upgrade to its accelerator control system of the 21st century. As part of the ACORN project, hundreds of legacy control system applications written in C/C++ will be re-imagined and developed from the ground up. In addition, applications to support Fermilab’s new superconducting linear accelerator are already under construction. To manage the development of modern controls applications, the Controls department has adopted an Agile software development process based on eXtreme Programming. In this paper we will describe our process and detail our experience applying it to the development of two case studies.

        Speaker: John Diamond (Fermi National Accelerator Laboratory)
    • THMR Mini-Orals (MC06, MC09) Red Lacquer Room

      Red Lacquer Room

      Palmer House Hilton Chicago

      17 East Monroe Street Chicago, IL 60603, United States of America
      Convener: Barry Fishler (SLAC National Accelerator Laboratory)
      • 413
        Secure EPICS PVAccess deployment framework for external scientific networks integration using Kerberos, LDAPS, and PKI at SLAC

        We present a Secure EPICS PVAccess (SPVA) deployment framework developed at SLAC to enable authenticated, encrypted and authorized access to control systems from external scientific networks. In Phase 1, SPVA has been deployed to connect HPC clients and services on SLAC’s Scientific External Network to internal PVAccess gateways supporting production accelerators.
        SPVA enforces strong mutual authentication using Kerberos service principals, which establish the runtime identity of services and clients. These identities are used to request short-lived X.509 certificates from the SLAC-managed PVAccess Certificate Management Service (PVACMS). The certificates are used for TLS-secured PVAccess communication, ensuring cryptographic trust between peers.
        Authorization decisions are enforced through Access Security Files (ACFs) that define PVAccess security groups (ASGs) referencing User Access Groups (UAGs) and Host Access Groups (HAGs). These groups are centrally managed in LDAPS, allowing fine-grained control based on organizational roles and host policies.
        This framework provides secure, traceable access to EPICS PVs across administrative domains while maintaining compatibility with PVXS-based IOCs and tools. This abstract outlines the architectural design and operational lessons from the Phase 1 rollout, providing a model for deploying secure control system access in federated scientific computing environments.

        Speaker: Jingchen Zhou (SLAC National Accelerator Laboratory)
      • 414
        EPICS diffractometer control with HKL calculations

        The relationships of diffraction momentum coordinates with Cartesian position coordinates at user facility beamlines with EPICS controls are discussed. The EPICS IOC computes relations between real-space and reciprocal diffraction-space motors for various four-circle and six-circle diffractometer geometries. Development on trajectory previews, collision detection, and on-board scan visualization is evaluated.

        Speaker: Alexander Baekey (Oak Ridge National Laboratory, University of Central Florida)
      • 415
        Status of HEPS beamline control system

        HEPS (High Energy Photon Source) will be the first high-energy (6 GeV) synchrotron radiation light source in China, and is mainly composed of an accelerator, beamlines and end-stations. Phase I of the project includes 14 user beamlines and one test beamline. Construction of HEPS began in June 2019 and is scheduled for completion in late 2025. Meanwhile, the beamlines have completed photon beam commissioning, marking HEPS' official transition to the joint-commissioning phase starting from March 27th, 2025. The controlled beamline devices are mainly divided into two categories: one category is optical adjustment devices such as slits, K-B mirrors and monochromators; the other is optical diagnostic and detection devices such as XBPMs (X-ray Beam Position Monitors), fluorescence targets and detectors. The beamline control system has been designed based on the EPICS framework. The beamline network topology consists of three networks, namely the data network, control network, and equipment network. In order to enhance software reusability and maintain version uniformity, package management technology is utilized to manage both application software and system software. Here, the design and construction of the beamline control system are presented.

        Speaker: Gang Li (Institute of High Energy Physics)
      • 416
        Design and control of liquid sample delivery systems at LCLS

        The Linac Coherent Light Source (LCLS) at SLAC National Accelerator Laboratory is a pioneering X-ray free-electron laser that provides researchers with the ability to investigate matter at atomic and molecular scales with unprecedented temporal and spatial resolution. Its applications span a wide range of scientific disciplines, including materials science, chemistry, biology, and physics.
        A vital aspect of conducting successful experiments at LCLS is the precise delivery of samples into the X-ray beam. Depending on the nature of the sample—whether liquid, gas, or solid—various delivery systems are employed to ensure accurate positioning, high repetition rates, and minimal sample waste.
        In this talk, I will present an overview of the control systems developed to support liquid sample delivery for the chemRIXS instrument. I will focus on two advanced systems that have significantly enhanced experimental capabilities. The first is a recirculating liquid sheet jet system that enables the generation of tunable liquid sheets with minimal sample volume, making it ideal for experiments with limited material availability. The second is a Droplet-on-Demand (DoD) robot designed for high-throughput pump–probe studies. This system allows precise sample placement, low sample consumption, and efficient mixing, which are essential for time-resolved measurements.

        Speaker: Josue Zamudio Estrada (SLAC National Accelerator Laboratory)
      • 417
        The LCLS-II modular optical delivery system: lessons learned

        The LCLS-II optical delivery system supports multiple interaction points across multiple experiment hutches using only a handful of laser sources. This reduces financial burden and space usage at the cost of increased complexity for the optical laser systems. To ameliorate this complexity, each interaction point is supplied with a Modular Optical Delivery System (MODS) to inject, shape, and compress the beam before it is further conditioned for experimental use. To meet operational demands, these MODS must be highly configurable, flexible, and robust while supporting 140+ control points in a dense enclosure. With control points spanning piezoelectric motors, optical imaging, digitizers, and more, the EPICS control system framework simplifies driver maintenance and allows growth of community-driven solutions. Each control point is accessible remotely via a PyDM GUI, which enables the operator to control these various alignment and diagnostic tools. Managing the deployment and operational stability of these modular systems is nontrivial and has presented several challenges in recent runs that inspired significant design changes for the future of the MODS. This talk takes a closer look at these operational challenges and the solutions we’ve implemented.

        Speaker: Adam Berges (SLAC National Accelerator Laboratory)
      • 418
        First light received by Beamline Experiment Control

        Beamline Experiment Control (BEC) has become the standardized high-level user interface for data-acquisition orchestration, adopted by nearly all beamlines. Built on a distributed server-client architecture, BEC seamlessly integrates with the underlying EPICS control system at Swiss Light Source (SLS), yet can also be used to steer and configure non-EPICS devices through Bluesky’s hardware abstraction layer “ophyd”. Beamlines are integrated through a plugin structure, which allows them to individually extend and adapt the system’s behavior: integrating new devices, customizing the user interface, rearranging visualization components, developing bespoke GUIs (BEC Widgets) or creating custom data analysis pipelines for on-the-fly execution. In addition, BEC enables beamlines to coordinate user access to the data acquisition through user access permissions, which can be fine-tuned either through manual interaction by the beamline scientist or automated updates from the digital user office. The long-term stability of the open-source project is ensured through automated testing (unit and end-to-end tests), semantic versioning, and automated deployment triggered on-demand by the beamline. BEC’s modularity, flexibility and its intuitive graphical user interfaces are streamlining data acquisition after the upgrade of the SLS to a fourth generation synchrotron.

        Speaker: Christian Appel (Paul Scherrer Institute)
      • 419
        Hardware orchestrated, multi-dimensional, continuous scans with the IcePAP motion controller

        The high X-ray flux at fourth generation synchrotron facilities enables high quality data acquisition with short detector integration times. Experiments whose durations were previously dominated by detector integration are thus increasingly dictated by the time required for motorized motion. In particular, experiments performed in a step-wise fashion — where motion is stopped during each integration — suffer from significant motion dead-time due to repeated acceleration/deceleration between each step. For this reason, interest in continuous scans — where detector integration occurs during motion — has grown within the synchrotron community. Precise synchronization is however required in order to ensure data acquisition at the desired positions. These synchronization demands can be particularly challenging in multi-dimensional scans involving multiple moving components.
        Here we present a hardware orchestrated, multi-dimensional, continuous scan implementation based on the IcePAP motion controller. Both motion control & detector triggering are orchestrated by the IcePAP hardware, resulting in high precision synchronization. Arbitrary motion trajectories — in up to 128 degrees of freedom — & trigger patterns can be implemented. Scan configuration & initiation is performed in software by the Sardana orchestration suite backed by the Tango control system. The implementation has been demonstrated to yield significant experimental time savings compared to equivalent step scans.

        Speaker: Marcelo Alcocer (MAX IV Laboratory)
      • 420
        EPICS in practice at LCLS

        At LCLS, EPICS plays a central role in our controls architecture. IOCs are used to interface directly or indirectly with almost all experimental hardware, supporting our heterogeneous requirements. EPICS network protocols are used for making devices available over the network, for data acquisition, and for the security and safety of devices. These ultimately enable a rich environment of controls tools built around standardized communication protocols like Channel Access and pvAccess, such as alarm systems, software interlocks, and data analysis. This poster/oral presentation will detail how EPICS is used at LCLS and the tools built around or on top of it, with specific examples and applications used on a day-to-day basis to accomplish basic and advanced needs of a complex controls system. This includes tools developed at LCLS or SLAC specifically to fulfill general needs that other users may be able to take advantage of. I will also talk about specific IOCs and Channel Access based tools I have made or worked on, and describe what EPICS currently does well along with its limitations.
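
        As a small, hedged example of the kind of Channel Access tooling described (PV names are hypothetical; the real interlocks at LCLS are of course more involved), the pyepics bindings make a monitor-and-react script a few lines long:

        from epics import PV, caget, caput

        pressure = PV("LM1K4:GAUGE:01:PRESSURE_RBV")     # hypothetical PV name

        def on_change(pvname=None, value=None, **kw):
            # Naive software interlock: close a valve if pressure rises too high.
            if value is not None and value > 1e-6:
                caput("LM1K4:VALVE:01:CLOSE_CMD", 1)     # hypothetical PV name
                print(f"{pvname} = {value:.2e} Torr -> valve close requested")

        pressure.add_callback(on_change)
        print("current reading:", caget("LM1K4:GAUGE:01:PRESSURE_RBV"))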

        Speaker: Kaushik Malapati (SLAC National Accelerator Laboratory)
      • 421
        Signal response and analysis of large micro channel plate driven delay line detectors

        For soft X-ray spectroscopy beamlines, delay line detectors are often the main system for detecting the photons from the sample and hence also a component determining the overall beamline performance, as they can be a limiting factor for measurement speed, noise, artifacts, and resolution. As such, and even more so with larger micro-channel-plate-driven delay line detectors, the signal readout must be fast and robust to minimize noise and artifacts while still accommodating the flux from 4th generation synchrotrons. This paper studies the signal response of a delay line detector and how the ns current signal pulses can be filtered, amplified, and converted to voltage before digitization. The digitizer is a 12-bit, 2.5 GSPS, 6-channel system, which is set up in a manner that minimizes noise and enables post-acquisition signal analysis integrated into the Sardana control system and live view. The early results indicate that many of the currently present image artifacts are, to a very high degree, suppressed by analog signal treatment and proper triggering. The digitized signals are fitted using the Python tool lmfit to different signal models, such as the exponentially modified Gaussian, to extract the peak of the main signal after identifying the common background response in all channels, with the aim of further improving the resolution of the detector. To optimize sampling, the system is also stress tested with regard to, e.g., sampling length and out-of-range measurements.
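
        To illustrate the fitting step (an editorial sketch on a synthetic waveform, not the authors' analysis code), lmfit's built-in exponentially modified Gaussian can be combined with a constant baseline as follows:

        import numpy as np
        from lmfit.models import ExponentialGaussianModel, ConstantModel

        t = np.arange(0, 200, 0.4)                         # ns, decimated time axis

        # Synthetic pulse generated from the same model family, plus noise.
        model = ExponentialGaussianModel() + ConstantModel()
        truth = model.make_params(amplitude=400, center=60, sigma=2.5, gamma=0.15, c=4)
        signal = model.eval(truth, x=t) + np.random.normal(0, 1.0, t.size)

        params = model.make_params(amplitude=300, center=50, sigma=5, gamma=0.3, c=0)
        result = model.fit(signal, params, x=t)
        print(f"fitted peak centre: {result.params['center'].value:.2f} ns")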

        Speaker: Dr Peter Sjöblom (MAX IV Laboratory)
      • 422
        Hardware orchestration architecture for fly and step scan at SIRIUS beamlines: a distributed, multi-platform system for sub-micrometer motion and data acquisition synchronization in on-the-fly synchrotron measurements

        X-ray absorption spectroscopy (XAS) is one of the techniques that require multiple beamline devices to operate in tight synchronization to maximize beam flux and focus and to ensure reliable measurements. These devices, such as the undulator, monochromator, quarter-wave plate, and detectors, exhibit a variety of behaviors, phenomena, capabilities, and controller platforms, spanning from the photon source to the sample holder. This work first provides an overview of the existing methods, detailing the adopted definition of synchronization, and then presents state-of-the-art commissioning results for critical on-the-fly synchrotron measurements, with significant impact on the EMA (extreme conditions), QUATI (quick-EXAFS), and SABIA (XMCD) Sirius beamlines. Additionally, the paper highlights the architecture's adaptability, enabling integration across a range of devices while maintaining custom, precise temporal and energy calibration, ensuring short scan durations and minimizing sample damage.

        Speaker: Telles René Silva Soares (Brazilian Synchrotron Light Laboratory)
      • 423
        Integrated control systems for time-resolved RIXS at LCLS-II: design and operational challenges

        The newly enhanced LCLS-II X-ray laser at SLAC National Accelerator Laboratory represents a major advancement in X-ray science, providing unprecedented capabilities for probing ultrafast dynamics in chemistry, materials science, biology, and beyond. Among the new beamlines, the Resonant Inelastic X-ray Scattering (RIX) beamline leverages the high repetition rate of LCLS-II to investigate the energy distribution and evolution of occupied and unoccupied molecular orbitals in complex and catalytic systems, particularly in liquid environments. This beamline features two dedicated endstations—qRIXS (upstream) and chemRIXS (downstream)—each optimized for distinct scientific goals. This talk will detail the design and implementation of the experimental controls and data systems that unify beamline hardware and instrument automation. Additionally, this talk will discuss the challenges of synchronizing operations across two endstations on a single beamline for time-resolved spectroscopy under demanding experimental conditions.

        Speaker: Dr Jyoti Joshi (SLAC National Accelerator Laboratory)
    • 15:45
      Coffee
    • THPD Posters
      • 424
        A client for the ATLAS time machine

        The Argonne Tandem Linear Accelerating System (ATLAS) facility at Argonne National Laboratory is a National User Facility capable of delivering ion beams from hydrogen to uranium. The existing tune archiving system, which utilizes Corel’s Paradox relational database management software, is responsible for retrieving and restoring machine parameters from previously optimized configurations. However, the Paradox platform suffers from outdated support, a proprietary programming language, and limited functionality, prompting the need for a modern replacement.
        The client for the new system is a PySide6/QML based application. Its user interface has been designed with human usability and simplicity in mind, and feedback from ATLAS operators has played a key role in guiding the development process. In addition, it expands upon the functionality of the Paradox platform in a number of ways. For one, the process of searching for specific archived tunes has been greatly simplified through the use of a filtering tool that allows ATLAS operators to narrow down the list of experiments they need to search through based on specific parameter values, timestamps, and experiment numbers. When preloading the beamline, operators can now select tunes from multiple experiments in the archives, rather than just one as in the Paradox platform. Finally, the use of Python, a widely used and popular modern programming language, ensures long-term maintainability.
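        The following sketch shows the kind of client-side filtering mechanism such a PySide6 interface might use (the model contents are invented, and this is not the ATLAS client code): a Qt proxy model narrows a list of archived tunes by a search term, the same mechanism views use to filter rows by parameter values or experiment numbers.

        from PySide6.QtCore import QSortFilterProxyModel, QStringListModel, Qt

        tunes = QStringListModel([
            "EXP-1021  2024-05-02  U-238",
            "EXP-1044  2024-06-11  Ca-48",
            "EXP-1051  2024-07-19  U-238",
        ])

        proxy = QSortFilterProxyModel()
        proxy.setSourceModel(tunes)
        proxy.setFilterCaseSensitivity(Qt.CaseInsensitive)
        proxy.setFilterFixedString("U-238")          # operator types a filter term

        matches = [proxy.data(proxy.index(row, 0)) for row in range(proxy.rowCount())]
        print(matches)                               # only the U-238 tunes remain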

        Keywords: PySide6, QML
        
        Speaker: Ananth Ramaswamy (University of Illinois Urbana-Champaign)
      • 425
        A dual-network centralized control system for Elettra 2.0

        The Elettra 2.0 storage ring, scheduled for commissioning in 2026, introduces a novel control system architecture, departing from distributed front-end computers. High processing power and intelligent devices, such as magnet power converters, beam position monitors, beam loss monitors, low-level RF systems, and fast interlocks, support centralization. These devices connect through a dual-network design: a standard Ethernet link, managed by virtual servers running Tango, handles supervision, while a high-speed fibre-optic link with custom protocols enables real-time control. Fast interfaces feed a centralized, multi-core real-time (RT) server, where hardware partitioning and the Data Plane Development Kit (DPDK) allow low-latency processing up to 1.1 MHz. Conceptually, each core mimics a front-end computer, while the server’s RAM acts as a high-speed communication bus minimizing delay and latency. This architecture simplifies the infrastructure, improves scalability and enables advanced machine-wide control, ensuring reliability for next-generation accelerators.

        Speaker: Lorenzo Pivetta (Elettra-Sincrotrone Trieste S.C.p.A.)
      • 426
        A flexible validation framework for streamlining hardware testing in accelerator control systems

        A new high-performance Sensors, Acquisition and Motion Control system (SAMbuCa) is under development at CERN to address the challenging requirements of motion control for beam intercepting devices such as the collimators of the Large Hadron Collider. These requirements include high precision in extremely radioactive environments, millisecond-level synchronization, and long-term operational reliability. To meet these stringent demands, rigorous testing and validation are essential throughout the development process. The abstraction models used by existing frameworks for defining and executing tests often lack the required flexibility because they are software-oriented. To address this, a generic framework for Production Test Suites (PTS) has been developed, helping to validate all the SAMbuCa components before mass production. It is an open-source hardware validation framework written in Python, with the Accelerator Controls community in mind. The framework targets different types of tests, such as end-of-line, reliability, and calibration. The PTS framework addresses common challenges faced by accelerator and industrial teams and provides an adaptable, scalable testing solution relevant across multiple control-system environments. In this paper, the architecture and rationale behind PTS are explained and its functionality is compared with existing solutions.
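        As a generic illustration of the test abstraction described above (class and method names are hypothetical, not the PTS API), a Python hardware test suite can be sketched as follows:

        import time
        from dataclasses import dataclass


        @dataclass
        class TestResult:
            name: str
            passed: bool
            details: str = ""
            duration_s: float = 0.0


        class HardwareTest:
            """One end-of-line, reliability, or calibration test against a device."""

            def setup(self, device):        # e.g. power on, open the communication link
                pass

            def run(self, device) -> TestResult:
                raise NotImplementedError

            def teardown(self, device):     # leave the hardware in a safe state
                pass


        def execute(tests, device):
            results = []
            for test in tests:
                start = time.monotonic()
                test.setup(device)
                try:
                    result = test.run(device)
                finally:
                    test.teardown(device)
                result.duration_s = time.monotonic() - start
                results.append(result)
            return results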

        Speaker: Alvaro Martinez Landete (European Organization for Nuclear Research)
      • 427
        A multi-level monitoring interface for the SKA Central Signal Processor using the Taranta synoptic view

        Graphical user interfaces (GUIs) play a critical role in the operation and maintenance of large-scale distributed control systems. In this work, we present a synoptic-based visualization for the Central Signal Processor (CSP) of the SKA (Square Kilometre Array) telescope, developed using Taranta, a web-based visualization tool for TANGO systems. The synoptic view provides an intuitive, multi-level representation of CSP, from the upper-level control managed by CSP.LMC down to individual data processing subsystems, including CBF, PSS, and PST devices.
        The synoptic diagrams are created using Inkscape and exported as SVG files, enabling flexible and straightforward integration with Taranta.
        A key focus of this work is the collaboration between different subsystem teams, which was essential to accurately model and visualize the full CSP hierarchy.
        We also define a strategy to validate the effectiveness and the usability of the GUI, ensuring it delivers tangible value in improving the monitoring and troubleshooting capabilities of complex control systems such as SKA.

        Speaker: Gianluca Marotta (INAF - OAA (Arcetri Astrophysical Observatory))
      • 428
        A new accelerator user experience working group

        During February 26–28, 2025, the first-ever particle accelerator user interface/user experience (UI/UX) workshop was held at SLAC. Attendees had backgrounds ranging from software development to control systems management and human factors science. The workshop began with participants discussing the current state of UI/UX procedures and practices at their respective laboratories to share experiences and learn from one another. Additional discussions focused on how to effectively integrate UI/UX best practices into actionable goals for developers, managers, and operators when working on new or existing interfaces. The goal of the working group is to create a website that will guide developers, managers, scientists, and end users at accelerator laboratories in incorporating UI/UX best practices into software development. The working group continues to meet virtually toward this goal, and is planning a second workshop for next year.

        Speaker: Tiffany Tran (SLAC National Accelerator Laboratory)
      • 429
        A server for the ATLAS time machine

        The Argonne Tandem Linear Accelerating System (ATLAS) facility at Argonne National Laboratory is a National User Facility capable of delivering ion beams from hydrogen to uranium. The existing tune archiving system, which utilizes Corel’s Paradox relational database management software, is responsible for retrieving and restoring machine parameters from previously optimized configurations. However, the Paradox platform suffers from outdated support, a proprietary programming language, and limited functionality, prompting the need for a modern replacement.
        The new system is composed of a modular architecture featuring a separate user interface, a time-series storage database, and a backend that connects the two. The new backend employs the FastAPI framework with WebSockets for asynchronous communication, and integrates Pydantic and SQLAlchemy ORM to enable a type-safe, object-oriented interface with SQL databases. This upgraded system significantly improves upon the legacy Paradox-based solution by offering a more robust, open-source architecture with enhanced reliability, maintainability, and ease of use.
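        A minimal sketch of the kind of WebSocket endpoint such a FastAPI/Pydantic backend might expose (the endpoint path and model are hypothetical, not the actual ATLAS API) is shown below:

        # Run with, e.g.: uvicorn this_module:app
        from typing import Optional

        from fastapi import FastAPI, WebSocket, WebSocketDisconnect
        from pydantic import BaseModel

        app = FastAPI()


        class TuneQuery(BaseModel):
            experiment: str
            parameter: Optional[str] = None


        @app.websocket("/ws/tunes")
        async def tune_stream(websocket: WebSocket):
            await websocket.accept()
            try:
                while True:
                    # The client sends a JSON query; the server answers asynchronously.
                    query = TuneQuery(**await websocket.receive_json())
                    # A real backend would query the time-series database here.
                    await websocket.send_json({"experiment": query.experiment, "values": []})
            except WebSocketDisconnect:
                pass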

        Keywords: Starlette, WebSockets, SQLAlchemy ORM, Asyncio

        Speaker: Matthew Torres (Argonne National Laboratory)
      • 430
        Access Control Interlock System Upgrade at ANL/APS

        This presentation will describe recent hardware and software updates to the Access Control Interlock System of the Advanced Photon Source at Argonne National Laboratory. Topics of interest include the replacement of outdated PLC hardware, updates to the EPICS software (using Microsoft Excel to build EPICS databases), and the replacement of outmoded control displays.

        Speaker: James Stevens (Argonne National Laboratory)
      • 431
        An Agile/XP software development process for modernizing the accelerator control system at Fermilab

        Fermilab is undergoing the most ambitious upgrade to its accelerator control system of the 21st century. As part of the ACORN project, hundreds of legacy control system applications written in C/C++ will be re-imagined and developed from the ground up. In addition, applications to support Fermilab’s new superconducting linear accelerator are already under construction. To manage the development of modern controls applications, the Controls department has adopted an Agile software development process based on eXtreme Programming. In this paper we describe our process and detail our experience applying it to two case studies.

        Speaker: John Diamond (Fermi National Accelerator Laboratory)
      • 432
        Augmented reality for accelerator operations: A virtual control room prototype

        Particle accelerator control rooms rely on fixed workstations with multiple monitors and on-site personnel, limiting operational flexibility. When experts connect remotely—whether for troubleshooting, monitoring, or collaboration—current setups often lack sufficient screen space, forcing users to toggle between interfaces and reducing situational awareness. Recent advancements in augmented reality headsets enable spatially aware virtual control rooms, allowing users to arrange and interact with multiple control panels in 3D space, improving efficiency and collaboration. In this work, we present our progress on developing a Virtual Control Room, addressing key user experience challenges, outlining the technical infrastructure, and demonstrating first results from the Advanced Light Source.

        Speaker: Thorsten Hellert (Lawrence Berkeley National Laboratory)
      • 433
        Automated testing of the integrated computer control system at the National Ignition Facility

        This paper describes the Automated Shot Tester (AST), a test automation framework designed to comprehensively test experiments performed using the National Ignition Facility’s (NIF) Integrated Computer Control System (ICCS). The AST enables the automatic testing of diverse experiment configurations on an emulated test system instead of real hardware and eliminates the need for human intervention. While the actual control system is operated by a team of 12, the AST acting on their behalf represents significant effort savings while assuring testing fidelity. The AST considerably enhances testing efficiency and expands the range of test configurations compared to the manual method.
        The AST is a complete end-to-end framework that manages and monitors the state and condition of ICCS software throughout an experiment. This approach is made possible by leveraging ICCS’s distributed architecture and middleware, which enables the AST to receive state updates via the ICCS pub-sub system and trigger commands based on a user-specified configuration file. This file provides modularity and expandability, allowing the AST to exercise a library of test case scenarios, and facilitates the addition of new experiments to the integration tests. This testing, along with unit, component, and manual tests, ensures software quality at the NIF. This paper will focus on the design of the AST and the benefits gained from automation, and conclude with proposed future enhancements.

        Speaker: David Gibbard (Lawrence Livermore National Laboratory)
      • 434
        Automating x-ray beam alignment processes with the split-and-delay graphical user interface at LCLS

        At SLAC National Accelerator Laboratory's Linac Coherent Light Source (LCLS), a series of optics and diodes in the X-ray Correlation Spectroscopy (XCS) beamline's split-and-delay chamber divide the beam into two equal-intensity pulses. One pulse is intentionally delayed, facilitating X-ray Photon Correlation Spectroscopy techniques that operate at nanosecond time scales for experiments in biology, chemistry, and materials science. However, achieving precise beam alignment with this setup poses significant challenges: meticulous adjustments to each of the beamline optics are required in succession every time the split-and-delay chamber is used, and this process is very time- and effort-intensive. To address these challenges, the split-and-delay graphical user interface (GUI) streamlines X-ray beam alignment by enabling remote control of the motorized optical components and providing user-friendly, live monitoring of the beam diagnostics. Work is ongoing to allow automatic optimization of beam alignment by stepping the motorized optical components through various positions, measuring the beam intensity, and moving the optics on motorized stages to the preferred positions. This advancement will further streamline alignment of the beam through the split-and-delay chamber and thus increase the time available for data collection at XCS.

        Speaker: Carolyn Gee (SLAC National Accelerator Laboratory)
      • 435
        BEC widgets: A modern modular framework for Beamline Experiment Control

        BEC Widgets is a modular Qt6 (PySide6) GUI framework developed to streamline usage of Beamline Experiment Control (BEC) at the Swiss Light Source, Paul Scherrer Institute. Emphasizing plug-and-play functionality and rapid adaptability, it enhances scientific workflows by allowing dynamic reconfiguration of interfaces—even during live experiments.
        Built from independent widget components, BEC Widgets enables users to assemble custom applications by flexibly combining pre-built elements. Interfaces can be created or modified via the command line, interactively in the main application, or visually through Qt Designer. Users can also extend functionality by implementing custom widgets tailored to beamline needs.
        Integrated with the BEC system, BEC Widgets uses Redis as a high-performance shared memory backend to synchronize data across all components. This supports seamless interaction with scans, acquisition, live monitoring, automation, and device control. A server–client architecture with RPC ensures GUI stability, isolating it from command-line operations. PyQtGraph-based visualization further boosts real-time data analysis. Crucially, all business logic is kept out of the UI layer, ensuring a clean separation of concerns and maintainable code. Released as open-source software, BEC Widgets invites collaboration and reuse across the scientific community.
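        As a hedged illustration of the Redis-backed data synchronization described above (the channel name and message layout are invented, not BEC's actual schema), a producer and a widget-side consumer might look like this:

        import json

        import redis

        r = redis.Redis(host="localhost", port=6379)

        # A widget-side consumer subscribes before data starts flowing.
        sub = r.pubsub()
        sub.subscribe("scan/point")

        # A data producer (e.g. the scan server) publishes one point of a scan.
        r.publish("scan/point", json.dumps({"scan_id": 42, "x": 1.5, "signal": 1032.7}))

        # The widget reacts to each incoming point (a real widget would append it to a curve).
        for message in sub.listen():
            if message["type"] == "message":
                print("new point:", json.loads(message["data"]))
                break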

        Speaker: Jan Wyzula (Paul Scherrer Institute)
      • 436
        BL31-FaXToR, hard X-ray micro-tomography and radiography at ALBA: current status and ongoing improvements

        BL31-FaXToR is the only hard X-ray micro-tomography and radiography beamline at the third-generation ALBA synchrotron*. It enables 3D imaging with sub-second temporal resolution under either monochromatic or white-beam conditions. The beamline features a dual-detection system enabling high-speed or high-resolution acquisitions. For high-speed data acquisition, the detector utilizes multiple frame grabbers and an IBM Storage Scale clustered file system. The goal of the high-speed detector is to provide live reconstructions during scans with minimal latency*. The detectors are managed through a REST client or a LImA device server. The control system is based on Tango and Sardana, providing an efficient, distributed Python environment with full user access to the hardware via both graphical and command-line interfaces. The synchronization elements also include a voice-coil-actuated fast shutter capable of 10 ms openings and a periodic chopper, which introduced new challenges for Sardana, requiring the implementation of multiple synchronizations in the time and position domains. The experimental GUI was developed using Taurus and LavuE. This paper outlines the BL31-FaXToR control system architecture, presents implementation examples, and discusses upcoming planned features.

        Speakers: Zbigniew Reszela (ALBA Synchrotron (Spain)), Dr Fulvio Becheri (ALBA Synchrotron (Spain))
      • 437
        Bluesky Queue Server for beamline control at NSLS-II

        Bluesky is a Python-based framework for experiment orchestration that is widely used at synchrotron facilities around the world. Queue Server (QS) is an essential component of the Bluesky software stack that supports high-level functionality such as control over the environment for executing Bluesky plans, enqueueing plans, executing and managing the plan queue, and monitoring and controlling running plans. This functionality is exposed via a comprehensive set of APIs designed to support a wide range of workflows. QS is successfully used in applications such as GUI-based and remote control, AI-driven experiments, and multimodal experiments. The experience and challenges of deploying and using QS at NSLS-II beamlines are discussed in the presentation.
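        A minimal sketch of driving QS from a client, assuming the bluesky-queueserver-api package (the plan and detector names are placeholders for whatever a beamline actually defines), might look like this:

        from bluesky_queueserver_api import BPlan
        from bluesky_queueserver_api.zmq import REManagerAPI

        RM = REManagerAPI()                     # connect to a running queue server

        RM.environment_open()                   # start the worker environment
        RM.wait_for_idle()

        RM.item_add(BPlan("count", ["det1"], num=10, delay=1.0))   # enqueue a plan
        RM.queue_start()                        # execute the queue
        RM.wait_for_idle()

        print(RM.status()["items_in_history"])  # inspect state via the status API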

        Speaker: Dmitri Gavrilov (National Synchrotron Light Source II)
      • 438
        Building on experience: renovation of the SNIFFER gas and fire protection system for the LHC experiments

        The LHC SNIFFER system was commissioned in 2006 to protect the personnel working in the large LHC experiment caverns from hazards caused by fire, CO2/flammable gas leaks, and oxygen deficiency. Currently, SNIFFER operates within the ALICE, ATLAS, and LHCb experiments. The system is composed of custom-built modules, each housing an aspiration pump and a combination of sensors tailored to the specific needs of each experiment. Sensor control units are connected to a central control PLC via a PROFIBUS link. The SNIFFER system enables air sampling and analysis from the most remote locations within the LHC experiments, including inside the detectors, using a dedicated air piping network. After nearly 20 years of continuous service, a complete renovation has become necessary. The aim of the modernization project is to address the identified shortcomings, resolve the obsolescence of key components, and significantly reduce the overall cost of system ownership and maintenance. The new system, known as SNIFFER NG (New Generation), is designed to guarantee the safety of personnel in the LHC experiments up to the announced end of operation of the HL-LHC era. This paper presents the new design philosophy, the rationale behind the selected design choices based on the operational experience gathered over the past two decades, and the key challenges associated with the development and deployment of the new system.

        Speaker: Timo Hakulinen (European Organization for Nuclear Research)
      • 439
        CERN safety system monitoring - entering a new era

        For the past 20 years, the CERN Safety System Monitoring (SSM) framework has safeguarded the operational health of CERN’s access and personnel safety systems. Built on the Zabbix monitoring platform, Grafana, and in-house developments, SSM provides real-time diagnostics, alerts, issue escalation, and predictive analytics for a wide range of critical infrastructure, operating systems, network devices, storage, and specialized equipment like video cameras and UPS units. The objective of SSM is to enhance maintenance and operational efficiency by delivering timely and reliable system feedback, enabling rapid identification of both immediate failures and gradual degradation before they can impact operations. SSM also supports long-term data analysis for post-incident investigations, statistical evaluations, and trend forecasting, thereby contributing to the optimization of safety system designs. It also provides operational statistics and graphs to CERN management and site services offering valuable graphical insights. Ongoing developments aim to expand SSM's capabilities through integration of AI modules for predictive maintenance, enabling pre-emptive interventions and reducing downtime. These enhancements also include automatic generation of reports and notifications to operations teams. By continuously assessing the safety status of operational systems, SSM plays a crucial role in mitigating risks and ensuring long-term reliability of CERN’s technical infrastructure.

        Speaker: Tono Riesco (European Organization for Nuclear Research)
      • 440
        Continuous Integration meets industrial automation: testing industrial controls frameworks

        Continuous Integration (CI) pipelines are a cornerstone of modern software development, enabling early bug detection and robust validation across all system layers. Extending this concept to industrial automation introduces unique challenges due to the involvement of real hardware and vendor-specific proprietary tools. This paper presents how CERN has built a fully automated testing pipeline for one of its industrial controls frameworks, covering every stage from framework releases (traditionally manual developer tasks), to end-to-end PLC-SCADA validation, simulating user workflows. The pipeline automates the generation of platform-specific PLC projects, the configuration of SCADA systems, and the establishment of the PLC-SCADA communication, achieved through a combination of in-house and third-party tools to programmatically execute these steps. Implementing such CI pipelines requires establishing a permanent testing infrastructure, including PLCs, development machines and SCADA servers. While the setup demands significant initial effort, the maintenance cost remains low, yielding high returns in the form of time saved for developers and testing engineers during the pre-release validation phases. This approach bridges the gap between modern software testing and industrial automation, enhancing reliability in complex, multi-platform environments.

        Speaker: Ms Loreto Gutierrez Prendes (European Organization for Nuclear Research)
      • 441
        Control and acquisition system for a radar-based IFMIF-DONES lithium target diagnostic

        The IFMIF-DONES (International Fusion Materials Irradiation Facility – DEMO Oriented Neutron Source) will irradiate materials for fusion reactors with an accelerator-driven neutron source based on a 125 mA, 40 MeV deuteron beam impinging on a liquid lithium target. This target is a jet flowing at 15 m/s and heated to 300 °C that must stop the 5 MW beam safely over a distributed beam footprint of 200 mm x 50 mm, while keeping its thickness within 25 ± 1 mm to avoid damage.
        Monitoring the lithium thickness in real time is essential to ensure safe operation under harsh environmental conditions: intense radiation, high temperature, and evaporated lithium. The proposed radar-based solution operates in the millimeter-wave (mmWave) range. In case of thickness instabilities, it sends an alarm to the Machine Protection System (MPS) with minimum latency.
        The best-suited type of radar is the frequency-modulated continuous-wave (FMCW) radar. Specific to this diagnostic is that the nominal distance is fixed, but small variations have to be measured with an accuracy below 1 millimeter.
        The control and acquisition system presented in this contribution operates the radar by generating a sawtooth signal to modulate the frequency and by digitizing the received intermediate frequency (IF) signal with tight synchronization.
        The digitizer includes an FPGA to process the signal and calculate the range in real time, with algorithms that improve accuracy beyond the nominal resolution. The Experimental Physics and Industrial Control System (EPICS) is used for machine integration.
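        For illustration, the standard FMCW relation (textbook arithmetic, not the project's firmware) links the beat frequency to the target range; note that the nominal range resolution c/(2B) is tens of millimeters, which is why sub-millimeter accuracy requires frequency estimation beyond the raw resolution:

        C = 299_792_458.0                       # speed of light, m/s

        def beat_to_range(f_beat_hz, bandwidth_hz, sweep_s):
            # R = c * f_b * T / (2 * B) for a linear (sawtooth) frequency sweep
            return C * f_beat_hz * sweep_s / (2.0 * bandwidth_hz)

        # Hypothetical settings: 4 GHz of swept bandwidth over a 100 us chirp.
        B, T = 4e9, 100e-6
        print(f"range resolution c/(2B) = {C / (2 * B) * 1e3:.1f} mm")
        for f_b in (53_360.0, 53_627.0):        # a ~267 Hz beat-frequency shift...
            print(f"{beat_to_range(f_b, B, T) * 1e3:.2f} mm")
        # ...corresponds to ~1 mm of range change at these settings.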

        Speaker: Víctor Villamayor Callejo (Centro de Investigaciones Energéticas, Medioambientales y Tecnológicas)
      • 442
        CSS to Phoebus transition at KIT

        The two accelerator test facilities KARA and FLUTE at the Karlsruhe Institute of Technology have been using Control System Studio as their main GUI for over ten years. The migration to Phoebus allows us to renew the full middle-layer stack of services, including the alarm server, logbook, channel finder, and save-and-restore system, with an emphasis on a modular and containerized setup. The panels are served via a central, highly available webserver stack, eliminating the need to maintain local versions of our panels. This allows us to use continuous integration (CI) and continuous deployment (CD) workflows to gather and publish panels from various sources, such as EPICS modules, without having to copy such panels into our panel tree and maintain them there. This paper describes the technical setup of our full Phoebus stack and reports on the migration process of our existing panels.

        Speaker: Dr Edmund Blomley (Karlsruhe Institute of Technology)
      • 443
        Design and control of liquid sample delivery systems at LCLS

        The Linac Coherent Light Source (LCLS) at SLAC National Accelerator Laboratory is a pioneering X-ray free-electron laser that provides researchers with the ability to investigate matter at atomic and molecular scales with unprecedented temporal and spatial resolution. Its applications span a wide range of scientific disciplines, including materials science, chemistry, biology, and physics.
        A vital aspect of conducting successful experiments at LCLS is the precise delivery of samples into the X-ray beam. Depending on the nature of the sample—whether liquid, gas, or solid—various delivery systems are employed to ensure accurate positioning, high repetition rates, and minimal sample waste.
        In this talk, I will present an overview of the control systems developed to support liquid sample delivery for the chemRIXS instrument. I will focus on two advanced systems that have significantly enhanced experimental capabilities. The first is a recirculating liquid sheet jet system that enables the generation of tunable liquid sheets with minimal sample volume, making it ideal for experiments with limited material availability. The second is a Droplet-on-Demand (DoD) robot designed for high-throughput pump–probe studies. This system allows precise sample placement, low sample consumption, and efficient mixing, which are essential for time-resolved measurements.

        Speaker: Josue Zamudio Estrada (SLAC National Accelerator Laboratory)
      • 444
        Design and deployment of a slow protection layer for a 100 MeV LINAC safety at KOMAC

        The Korea Multipurpose Accelerator Complex (KOMAC) operates a 100 MeV proton linear accelerator that accelerates proton beams from an ion source through a radio frequency quadrupole (RFQ) and drift tube linacs (DTLs). The accelerated beams are delivered to target rooms via beamlines to provide proton beam services to users. To enhance equipment protection and operational stability, a Machine Protection System (MPS) has been established. A Slow MPS has additionally been developed to handle slow-response fault conditions, including vacuum degradation, cooling system failures, and power supply failures.
        The system is implemented using industrial programmable logic controllers (PLCs) equipped with embedded Linux CPUs, and EPICS IOCs are executed within the PLC environment. Interlock signals from the PLC ladder logic are delivered to the EPICS IOC using shared memory and PLC-generated interrupts. The Slow MPS operates independently of the fast protection system and is integrated into KOMAC’s EPICS-based control system. This paper describes the implementation of the Slow MPS and its integration with the EPICS-based control system at KOMAC.

        Speaker: JAE-HA KIM (Korea Multi-purpose Accelerator Complex)
      • 445
        Design and implementation of a scalable machine protection system for the EuAPS particle accelerator

        The evolution of research infrastructures in the field of particle accelerators requires increasingly intelligent, integrated and safety-compliant protection systems. In this context, the development of a Machine Protection System (MPS) is underway for the EuAPS project, currently being implemented at the SPARC_LAB facility of INFN National Laboratories in Frascati. The system is designed as a distributed network based on real-time embedded industrial controllers, with FPGAs on cRIO modules and dedicated devices, and is intended to monitor the operational conditions of critical subsystems such as diagnostics, vacuum and safety in real time, responding promptly in the event of interlocks. The prototype is being developed in accordance with the IEC 61508 standard, including the identification of Safety Instrumented Functions (SIFs) and a structured strategy encompassing hardware/software design, implementation, verification and validation. The goal is to provide a prototypal, standards-compliant platform that is scalable and suitable for future machine expansions, such as the EuPRAXIA project. In addition to protection, the MPS aims to enhance operational management and prevent critical failures, contributing to a safer, more autonomous, modular and efficient accelerator infrastructure.

        Speaker: Valentina Dompè (Istituto Nazionale di Fisica Nucleare, Laboratori Nazionali di Frascati)
      • 446
        Design and performance of the SNS credited beam power limit system

        The US Spallation Neutron Source (SNS) is the world’s most powerful pulsed spallation neutron source. The recently completed Proton Power Upgrade (PPU) project doubles the available average beam power from 1.4 to 2.8 MW. However, 0.8 MW of that is intended for a future second target station (STS), which is in the preliminary design phase. The mercury-based first target station (FTS) has a safety design basis limit of 2.0 MW, thus a precision credited safety system is needed to ensure the proton beam power cannot exceed this limit. The Beam Power Limit System (BPLS) is a novel FPGA/PLC-based, safety-credited instrument that measures beam energy and charge to calculate beam power and will shut off the proton beam if the power exceeds 2 MW + 10% for a specified period of time. This paper will discuss the design challenges and operational performance of the BPLS as a credited system.

        Speaker: Kelly Mahoney (Oak Ridge National Laboratory)
      • 447
        Design of a laser safety system for S3

        The S3 (Super Spectrometer Separator) project involves the installation of laser systems in the building housing the S3 separator. To ensure personnel safety, a dedicated safety system is required. This system manages signaling, beam-blocking components, and laser power supplies. It ensures that all safety conditions are met to authorize the production and transmission of laser beams in the designated areas.
        The constraints imposed on the implementation of this control panel include the use of a non-programmable system, to facilitate maintenance and diagnostics by the laser operations team. The system must also ensure that a single failure does not result in the loss of the safety function.
        This article describes the system architecture, the sensors and actuators selected to meet these requirements, and the relay-based safety processing system. It also describes the system's responses to component failures and the tests conducted to demonstrate proper system functionality.

        Speaker: Nicolas Simon-Bauduin (GANIL)
      • 448
        Design of RAON operation monitoring via screen capture in isolated networks

        The RAON (Rare isotope Accelerator complex for ON-line experiments) in South Korea is a facility designed to accelerate heavy ions and produce rare isotopes for scientific research. The control system at RAON is based on the EPICS framework and operates within a physically isolated control network to ensure system integrity and security. To enable real-time monitoring of the accelerator’s operation status from remote buildings, we developed a secure screen transmission infrastructure. The Phoebus-based user interface, running entirely within the isolated control network, outputs video via HDMI to a hardware capture board. This signal is transferred via USB to an external PC connected to the company’s business network, which streams the operational status to monitoring panels across the facility. By maintaining physical separation while enabling visual data sharing, the system ensures strict security compliance and enhances the visibility of RAON operations.

        Speaker: Mijeong Park (Institute for Basic Science)
      • 449
        Design of the machine interlock system for Korea-4GSR

        The Machine Interlock System (MIS) for Korea-4GSR is designed to prevent operations that may damage related devices when faults occur in any of the devices deployed throughout the accelerator during operation; it is an essential system for accelerator operation. Within the Machine Protection System (MPS), the MIS mainly covers signals with relatively slow response speeds. For signals generated by an interlock situation, which are recognized by the central device from the local devices and transmitted to their final destination, the propagation speed differs depending on the route; the MIS is therefore designed with minimized signal routes so that these differences remain small. The MIS controller is a Programmable Logic Controller (PLC), which is widely used in various fields of industry and experimental physics. Since extensive PLC guidelines are available for securing system reliability and responding to failures, configuring a robust system is more accessible with a PLC than with other types of controllers. This poster presents a design that minimizes beam downtime due to device faults and maximizes device protection.

        Speaker: Yunho Kim (Pohang Accelerator Laboratory)
      • 450
        Development and operation of the new beam interlock system of RIKEN RI Beam Factory

        Since 2021 we have been developing a successor to the existing machine protection beam interlock system (BIS), hereafter RIBF-BIS2, based on CompactRIO from National Instruments. The BIS consists of 10 I/O stations distributed throughout the RIBF facility and monitors alert signals from sources such as the magnet power supplies and vacuum equipment, as well as the beam loss detected by the baffle slits in the cyclotrons. During the development phase, the RIBF-BIS2 I/O stations are being installed in parallel with the existing BIS I/O stations while the BIS remains in operation for safety, and test operation is started one by one for the stations that have been installed. By the end of 2024, RIBF-BIS2 I/O stations had been installed at 80% of the BIS I/O station locations. In 2025, the remaining 20% will be installed so that the BIS can be completely replaced by the RIBF-BIS2.
        During the test operation period, we made several improvements to the system. These included enhancing the interlock logic and upgrading the station setup to make maintenance easier. We also improved the RIBF-BIS2 front-end circuit to shorten the time from the detection of an abnormality in a digital input signal to stopping the beam, and reduced its power consumption to one-fifth while maintaining equivalent performance.

        Speaker: Misaki Komiyama (RIKEN Nishina Center)
      • 451
        Development of a fast orbit interlock system prototype for the 4GSR

        This study aims to develop a Fast Orbit Interlock (FOI) system that can quickly identify situations caused by beam deviation during accelerator operation at the 4th generation synchrotron radiation facility. The FOI system is designed to detect minute beam orbit deviations in real time and thereby determine beam deviation during accelerator operation. The hardware and software of the system are currently being designed to implement high-speed data processing, network communication, and real-time control algorithms. The system is designed to provide higher accuracy and faster response times, and its performance will be evaluated using prototypes. This study is expected to provide an important technical foundation for quickly stopping the beam when a critical situation arises from beam deviation damaging peripheral equipment or causing radiation emission above the allowable level at the accelerator facility.

        Speaker: Jinsung Yu (Pohang Accelerator Laboratory)
      • 452
        Development of a high-precision electrometer at NSLS-II

        A high-precision electrometer has been developed at NSLS-II. It is capable of measuring current across six different ranges, from 100 nA to 10 mA, with a resolution ranging from 200 fA to 20 nA. Additionally, it can be used for X-ray Beam Position Monitoring (XBPM) in beamlines and frontends, enabling real-time X-ray beam position monitoring. The newly designed third-generation NSLS-II electrometer features an enhanced low-noise analog front end, high-resolution 20-bit ADCs (8 channels), and advanced hardware tailored for X-ray beam position feedback. The upgraded electrometer integrates seamlessly with the EPICS control system, incorporating an extended Input/Output Controller (IOC) that supports both the quadEM and pscDriver frameworks through a hybrid driver implementation. Our primary objective is to deploy beamline detectors for current/voltage output monitoring and integrate the X-ray BPM feedback system into the storage ring’s Fast Orbit Feedback (FOFB) system, aiming to achieve superior photon beam stability. This work represents an ongoing collaborative R&D effort among accelerators, DSSI, and beamline teams to enhance beam position control and monitoring capabilities. This paper will present the new electrometer's architecture and performance, its integration with EPICS IOC, and ongoing research developments. Future directions for beam stabilization and control improvements will also be discussed.

        Speaker: Yuke Tian (National Synchrotron Light Source II)
      • 453
        Efficient Multi-Region Fly-Scan Architecture Using PLC Integration for Synchrotron XAS

        A high-speed fly-scan system has been developed and implemented at the Taiwan Photon Source (TPS) 32A Tender X-ray Absorption Spectroscopy (TXAS) beamline to enhance data acquisition efficiency for X-ray Absorption Spectroscopy (XAS). Unlike conventional fly-scan systems that rely on Field Programmable Gate Array (FPGA) or CompactRIO (cRIO) platforms, this system adopts a programmable logic controller (PLC)-based architecture, offering high modularity, low development complexity, and seamless integration with existing beamline infrastructure. The system enables continuous scanning of the double-crystal monochromator (DCM) while simultaneously acquiring synchronized data from multiple detectors. Analog-to-digital converters (ADCs) digitize current signals for transmission and total electron yield (TEY) modes, while high-speed digital outputs from the PLC provide external triggers for silicon drift detectors (SDDs) used in partial fluorescence yield (PFY) measurements. This approach simplifies the coordination of asynchronous devices and eliminates the need for complex timing hardware. To further optimize performance, a multi-region scan strategy was implemented, allowing dynamic adjustment of scan speed across different energy segments. This significantly reduces total scan time by up to a factor of six compared to traditional step-scan methods, while maintaining comparable spectral quality. The PLC-based fly-scan system also supports real-time data logging, automated sample switching, and energy-axis registration, making it a robust and scalable solution for time-resolved and in-situ XAS experiments.
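        A back-of-the-envelope sketch (hypothetical regions and speeds, not the TPS control code) of why a multi-region fly scan saves time compared to a step scan:

        # A step scan pays a per-point overhead (move, settle, read out) at every energy point.
        step_scan_points = 600
        per_point_overhead_s = 1.0
        step_scan_time = step_scan_points * per_point_overhead_s

        # A multi-region fly scan just sweeps each region at its own speed:
        # (region width in eV, sweep speed in eV/s), slow over the edge, fast elsewhere.
        fly_regions = [(150, 5.0), (60, 2.0), (400, 10.0)]
        fly_scan_time = sum(width / speed for width, speed in fly_regions)

        print(f"step scan ~{step_scan_time:.0f} s, multi-region fly scan ~{fly_scan_time:.0f} s")
        print(f"speed-up ~x{step_scan_time / fly_scan_time:.1f}")   # on the order of the reported gain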

        Speaker: Cheng-Chih Liang (National Synchrotron Radiation Research Center)
      • 454
        Elettra Commander: A flexible web interface on large touchscreens

        Elettra Commander is an innovative web interface designed to centralize particle accelerator operations. It integrates web-based and native user interfaces into a unified platform, simplifying control tasks. A key feature is the dynamic integration of machine synoptics, which display accurately scaled curves and data with rapidly updating graphs and machine control applications, enabling real-time interaction. Optimized for large touchscreens exceeding 50 inches, Elettra Commander offers intuitive navigation and responsive performance. This presentation will explore its architecture, highlight key features, and outline future developments, demonstrating its potential to advance accelerator control with a flexible, centralized, and effective solution.

        Speaker: Lucio Zambon (Elettra-Sincrotrone Trieste S.C.p.A.)
      • 455
        Enhancing CERN experimental areas beamline configuration management with data-driven schematics

        The beamlines in CERN's experimental areas are continuously evolving to meet the demands of a rich and diverse physics program. Various data management systems are used to efficiently plan, document, and track this evolution, describing past, present, and future beamline layouts, including equipment positions, dimensions, connections, assemblies, and their relationships with underlying physical assets.
        To support this work, a data-driven software application has been developed and is exposed to users via a web interface. This application generates schematics for both experimental area beamlines and connected particle accelerators. The paper highlights the added value of this solution, its technical design and key implementation aspects, emphasizing the role of effective data-driven visualization.
        A case study demonstrates the practical application of the software. Additionally, the paper discusses future improvements, including the ongoing development towards data-driven synoptics with associated intuitive beamline controls.

        Speaker: Mr Lukasz Burdzanowski (European Organization for Nuclear Research)
      • 456
        Enhancing user experience through GUIs and helper routines at MOGNO, the micro and nano tomography beamline at SIRIUS

        Since its inception at the first Brazilian synchrotron (UVX), the tomography beamline group has been concerned with the beamline’s usability, recognizing that beamline operation can be a significant challenge for users. This concern has been carried forward to Mogno* (the micro and nano tomography beamline at SIRIUS) and is one of the cornerstones of its user-side software development, aimed at a diverse user pool that encompasses researchers from various fields employing a range of tomography techniques (e.g., classic, phase contrast, zoom, and in-situ 4D).
        To effectively support this heterogeneity of users and techniques, the development of Mogno's operation software stack focused on three key objectives: providing an easy-to-use set of tools enabling user experiments; ensuring a smooth learning curve for new and experienced users; and enabling the beamline staff to efficiently prepare the beamline for experiments. The resulting software stack is composed of a set of libraries and helper scripts built with Python (e.g., making sample alignment easier), which are run from GUIs (graphical user interfaces) developed with PyQt and PyDM. More recently, new possibilities for improving the user experience are being explored using web applications, which can make beamline experiment information more accessible.

        Speaker: Lucca Campoi (Brazilian Synchrotron Light Laboratory)
      • 457
        Ensuring the fast response in the powering interlock controller (PIC) upgrade: a UNICOS-based approach

        The Powering Interlock Controller (PIC) plays a critical role in supervising the powering conditions of the superconducting magnet circuits of the Large Hadron Collider (LHC), reducing the risk of severe equipment damage. It ensures that conditions are met before granting the powering permit and reacts within milliseconds to remove it if conditions become unsafe for operation. To enhance maintainability, modularity, and long-term sustainability, the PIC interlock system has been redesigned within CERN's UNICOS framework. This modernisation posed a major challenge: preserving the system’s stringent real-time constraints while adopting high-level programming in Structured Control Language for improved maintainability. This paper presents the strategies adopted to ensure the required response time, including execution path optimisations and efficient handling of interlocks within the UNICOS framework. We discuss the trade-off between standardisation and performance, the impact on system diagnostics and operation, and the validation process ensuring compliance with LHC safety requirements. This work serves as a reference for applying UNICOS in protection-critical systems while preserving real-time capabilities.

        Speaker: Mr Jesus Fernandez Cortes (European Organization for Nuclear Research)
      • 458
        EPICS control system for Beam Operation at Saraf Linac Accelerator

        The CEA Saclay Irfu is responsible for most of the EPICS control system for the Saraf linac accelerator located at Soreq in Israel. This scope includes the control and the tuning of the beam. The accelerator is commissioned by disciplines (vacuum, RF, cryogenics…) and sections. For this purpose, we have developed a high-level application that can activate a set of subsystems to prepare the accelerator for the desired configuration. This application, called BOM (Beam Operation Modes), transmits and controls the operator requests of beam mode and destination to a set of subsystems. The BDM (Beam Destination Master), part of the MPS (Machine Protection System), is responsible for ensuring, among other things, that interceptive elements are not in the beam path, that power converters are in the expected state and that the vacuum conditions are correct. The SBCT (Section Beam Current Transmission), also an element of the MPS, monitors the current transmission along the accelerator using ACCTs and must be configured for the desired destinations. The Cavity Phasing system verifies that the cavities are properly conditioned. The TMG (Timing), another element of the MPS, relies on fiber optics and its network must be monitored and operational. The MPW (Max Pulse Width) application limits the beam power by restricting the pulse width, according to the inserted devices. This paper presents the different components and their functionalities.

        Speaker: Alexis Gaget (Commissariat à l'Énergie Atomique et aux Énergies Alternatives)
      • 459
        EPICS diffractometer control with HKL calculations

        The relationship between diffraction momentum coordinates and Cartesian position coordinates at user facility beamlines with EPICS controls is discussed. The EPICS IOC computes the relations between real-space motor positions and reciprocal diffraction space for various four-circle and six-circle diffractometer geometries. Development on trajectory previews, collision detection, and on-board scan visualization is evaluated.
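        As a worked example of the real-space/reciprocal-space relation (standard crystallography, not the IOC code), Bragg's law for a cubic lattice gives the detector angle directly from (h, k, l):

        import math

        def two_theta_cubic(h, k, l, a_angstrom, wavelength_angstrom):
            d = a_angstrom / math.sqrt(h * h + k * k + l * l)        # interplanar spacing
            theta = math.asin(wavelength_angstrom / (2.0 * d))       # Bragg's law
            return math.degrees(2.0 * theta)

        # Silicon lattice constant with Cu K-alpha radiation (illustration values only).
        print(f"(1,1,1): 2theta = {two_theta_cubic(1, 1, 1, 5.431, 1.5406):.2f} deg")
        print(f"(2,2,0): 2theta = {two_theta_cubic(2, 2, 0, 5.431, 1.5406):.2f} deg")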

        Speaker: Alexander Baekey (Oak Ridge National Laboratory, University of Central Florida)
      • 460
        EPICS in practice at LCLS

        At LCLS, EPICS plays a central role in our controls architecture. IOCs are used to interface directly or indirectly with almost all experimental hardware, supporting our heterogeneous requirements. EPICS network protocols are used to make devices available over the network, for data acquisition, and for device security and safety. These ultimately enable a rich environment of controls tools built around standardized communication protocols like Channel Access and pvAccess, such as alarm systems, software interlocks, and data analysis. This poster/oral presentation will detail how EPICS is used at LCLS and the tools built around or on top of it, with specific examples and applications used on a day-to-day basis to accomplish basic and advanced needs of a complex controls system. This includes tools developed at LCLS or SLAC specifically to fulfill general needs that other users may be able to take advantage of. I will also talk about specific IOCs and Channel Access based tools I have made or worked on, and describe what EPICS currently does well along with its limitations.

        Speaker: Kaushik Malapati (SLAC National Accelerator Laboratory)
      • 461
        EPICS IOC development for high-speed, high-resolution imaging with the AXIS-SXRF-60 camera

        At SLAC National Accelerator Laboratory, we recently integrated a high-performance AXIS back-illuminated CMOS camera with Gpixel GSENSE6060BSI sensor into the EPICS control system to support advanced imaging and spectroscopy applications. The AXIS camera features a 6144 × 6144 pixel sensor with 16-bit depth per pixel, producing approximately 75 MB per image. With a maximum frame rate of 26 frames per second at full resolution, the system must manage a sustained data throughput of nearly 1.9 GB/s. This paper presents the development and deployment of EPICS IOCs designed to interface with the AXIS camera, highlighting software optimization strategies to efficiently handle the substantial data volume. By leveraging the EPICS areaDetector framework, specifically the ADGenICam and ADEuresys modules, we expedited the integration process while ensuring full compatibility with the Euresys frame grabber hardware. Key topics include configuring CPU affinity and priority, implementing multi-threaded data pipelines for plugins, and limiting polling features to maintain stable, real-time image acquisition. This work demonstrates practical strategies for supporting high-speed imaging systems within an EPICS environment and offers a guide for similar large-scale instrumentation projects.
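        The quoted data rate follows from simple arithmetic, and pinning the acquisition process to dedicated cores is one of the optimizations mentioned above; the sketch below is illustrative (the core set is hypothetical and the sched_* call is Linux-only), not the deployed IOC configuration:

        import os

        width = height = 6144
        bytes_per_pixel = 2                     # 16-bit depth
        frame_bytes = width * height * bytes_per_pixel
        rate_fps = 26

        print(f"frame size ~{frame_bytes / 1e6:.0f} MB")
        print(f"sustained throughput ~{frame_bytes * rate_fps / 1e9:.2f} GB/s")

        # Pin this process to a dedicated set of cores so acquisition threads are not
        # scheduled next to unrelated work; a production IOC would also set thread
        # priorities and tune the areaDetector plugin chains.
        if hasattr(os, "sched_setaffinity"):
            os.sched_setaffinity(0, {2, 3, 4, 5})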

        Speaker: Kuktae Kim (SLAC National Accelerator Laboratory)
      • 462
        ESS accelerator personnel safety system journey towards steady state operation

        The European Spallation Source (ESS) Accelerator Personnel Safety System (ACC PSS) journey towards steady state operation has been a remarkable experience. ACC PSS serves the safety interlock and access control functionalities for the Linac.

        The safety interlock mitigates various hazards (ionising radiation, high voltage) associated with the proton beam operation and RF equipment. The access control is achieved by implementing an integrated access management, where a Personal Access Station (PAS) is merged with a Physical Access Control System (PACS), allowing fully automated passage.

        Thorough verification and validation have been performed to attain seamless operation. This paper provides lessons learned and investigations throughout the ESS ACC PSS commissioning journey.

        Speaker: Vincent Harahap (European Spallation Source)
      • 463
        Event-driven alarm management in the Karabo SCADA system

        In this contribution a flexible software for situational awareness and alarm management within the Karabo supervisory control and data acquisition system (SCADA) is presented. Supervision of hardware and software components is an essential function of a SCADA system and includes alarm management as a key aspect. This means that a SCADA system should detect components running at abnormal conditions and trigger alarms to alert operators. The presented software allows operators to define alarm conditions and, in turn, tracks dependent device properties. The evaluation of alarm conditions is event-driven: an alarm condition is evaluated once an updated value of a dependent property is received. If an alarm condition is fulfilled, alarms can be issued in various formats like text messages, sounds, or visual indicators. The software is widely used at the European X-ray Free Electron Laser facility and decreases down-times of instruments and software modules by allowing staff to address problems immediately or even proactively.
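        A deliberately generic sketch of event-driven alarm evaluation (not the Karabo API) captures the idea: each condition is re-evaluated only when a dependent property publishes an update, and a notifier fires on state transitions:

        class AlarmCondition:
            def __init__(self, name, depends_on, predicate, notify):
                self.name = name
                self.depends_on = set(depends_on)   # property keys this alarm watches
                self.predicate = predicate          # callable(values) -> bool (alarm active?)
                self.notify = notify
                self.active = False

            def on_update(self, values):
                """Called whenever one of the dependent properties changes."""
                triggered = self.predicate(values)
                if triggered != self.active:        # notify only on transitions
                    self.active = triggered
                    self.notify(self.name, triggered, values)


        values = {"chiller.temperature": 21.0}
        alarm = AlarmCondition(
            name="chiller_overtemp",
            depends_on={"chiller.temperature"},
            predicate=lambda v: v["chiller.temperature"] > 25.0,
            notify=lambda name, active, v: print(name, "RAISED" if active else "CLEARED", v),
        )

        # An event arrives: a device publishes an updated property value.
        values["chiller.temperature"] = 27.5
        if alarm.depends_on & values.keys():
            alarm.on_update(values)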

        Speaker: Florian Sohn (European X-Ray Free-Electron Laser)
      • 464
        Facility monitoring system

        As the SLAC FEL beam enters its new era of LCLS-II SC operation and plans ahead for LCLS-II-HE, interest has grown in the area of facility monitoring. A system is needed for the early detection of and response to adverse conditions affecting critical infrastructure. An engineered solution aims to minimize the impact on facility operations due to overheated or over-humidified enclosures, low cooling water flow rates, high cooling water temperatures, and cooling water leaks. The newly deployed monitoring system uses two main hardware systems: Raritan and Beckhoff. The former consists of a suite of sensors distributed throughout an area, up to 30 meters apart, all connected to a single controller, which itself occupies only 1 rack unit. The latter utilizes a Beckhoff EK9000 Bus Coupler, which allows Ethernet networks to connect to EtherCAT terminals. The EtherCAT terminals utilize 4-20 mA signals, which can easily reach across large areas. These signals are then communicated through software to the user in three ways. The first relies on EPICS Process Variable alarm status and the Next Alarm System (NALMS). The second uses the EPICS Archiver, Grafana, and Slack chat. The last alerting mechanism, a multi-functional LED beacon and sounder combination, has been installed to notify staff with audio and visual signals.
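        As a hedged sketch of the second alerting path (the PV name, threshold, and webhook URL are placeholders, and this is not the deployed SLAC code), a pyepics monitor could post to a Slack incoming webhook when a reading crosses a threshold:

        import time

        import requests
        from epics import PV

        WEBHOOK_URL = "https://hooks.slack.com/services/EXAMPLE"     # placeholder URL
        THRESHOLD_C = 35.0

        def on_change(pvname=None, value=None, **kwargs):
            if value is not None and value > THRESHOLD_C:
                requests.post(WEBHOOK_URL, json={
                    "text": f"{pvname} = {value:.1f} C exceeds {THRESHOLD_C} C"})

        pv = PV("FACILITY:RACK01:TEMP", callback=on_change)          # placeholder PV name

        while True:          # keep the process alive so monitor callbacks keep firing
            time.sleep(1.0)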

        Speaker: Nicholas Waters (SLAC National Accelerator Laboratory)
      • 465
        First light received by Beamline Experiment Control

        Beamline Experiment Control (BEC) has become the standardized high-level user interface for data-acquisition orchestration, adopted by nearly all beamlines. Built on a distributed server-client architecture, BEC seamlessly integrates with the underlying EPICS control system at Swiss Light Source (SLS), yet can also be used to steer and configure non-EPICS devices through Bluesky’s hardware abstraction layer “ophyd”. Beamlines are integrated through a plugin structure, which allows them to individually extend and adapt the system’s behavior: integrating new devices, customizing the user interface, rearranging visualization components, developing bespoke GUIs (BEC Widgets) or creating custom data analysis pipelines for on-the-fly execution. In addition, BEC enables beamlines to coordinate user access to the data acquisition through user access permissions, which can be fine-tuned either through manual interaction by the beamline scientist or automated updates from the digital user office. The long-term stability of the open-source project is ensured through automated testing (unit and end-to-end tests), semantic versioning, and automated deployment triggered on-demand by the beamline. BEC’s modularity, flexibility and its intuitive graphical user interfaces are streamlining data acquisition after the upgrade of the SLS to a fourth generation synchrotron.

        Speaker: Christian Appel (Paul Scherrer Institute)
      • 466
        From testing to verification: a comprehensive methodology for industrial controls frameworks

        Industrial control frameworks frequently rely on object representations to manage complex devices found in the field. However, traditional testing methods often struggle to comprehensively validate internal state transitions of these objects, particularly as their state spaces expand due to increasing complexity and configurability of the objects. This paper introduces a novel testing suite for one of CERN's Industrial Controls Frameworks (UNICOS-CPC), capable of systematically verifying internal state transitions of objects in industrial applications by modeling their state space as a graph. The framework enables developers to define test cases by specifying start states, end states, and commands, while autonomously navigating between states to place objects in the correct initial states of each test case. The test suite also stands out for its extensibility: it decouples test syntax from PLC platform-specific implementations through object-oriented design and OPC UA communication, making it adaptable to diverse industrial control systems and frameworks outside CERN that follow a similar device representation approach. By focusing on individual state transitions rather than complex command sequences, this work simplifies testing of UNICOS-CPC objects and enhances their robustness, allowing for the validation of complex object configurations and behaviours in a transparent manner. Furthermore, this novel testing framework enables an automated workflow to validate scenarios determined by formal verification methods where UNICOS-CPC objects arrive at invalid and undesirable states.
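
        To make the graph-based navigation concrete, the following generic sketch (not the UNICOS-CPC test suite; the state machine and command names are invented) models states as graph nodes and commands as edges, then searches for the command sequence that drives an object into a test case's start state:

        # Generic sketch of navigating an object's state space modelled as a graph
        # (illustrative; not the UNICOS-CPC framework). Nodes are states, edges are
        # commands, and BFS finds the command sequence reaching a test's start state.

        from collections import deque

        # Hypothetical state machine: state -> {command: next_state}
        TRANSITIONS = {
            "Off":     {"enable": "Ready"},
            "Ready":   {"start": "Running", "disable": "Off"},
            "Running": {"stop": "Ready", "trip": "Fault"},
            "Fault":   {"reset": "Ready"},
        }

        def find_command_path(start, goal):
            """Return the list of commands that moves the object from start to goal."""
            queue = deque([(start, [])])
            visited = {start}
            while queue:
                state, path = queue.popleft()
                if state == goal:
                    return path
                for command, nxt in TRANSITIONS.get(state, {}).items():
                    if nxt not in visited:
                        visited.add(nxt)
                        queue.append((nxt, path + [command]))
            raise ValueError(f"{goal} is unreachable from {start}")

        # Place the object in the test case's start state, then apply the command
        # under test and check the expected end state.
        print(find_command_path("Off", "Fault"))   # ['enable', 'start', 'trip']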

        Speaker: Loreto Gutierrez Prendes (European Organization for Nuclear Research)
      • 467
        Gas supply control system for straw tube detectors in the PANDA experiment

        In the PANDA experiment, designed for studies with antiproton beams at the FAIR (Facility for Antiproton and Ion Research) accelerator complex, gaseous detectors of the straw tube type are used for precise reconstruction of charged-particle tracks. The developed multi-channel gas mixture supply system for the PANDA straw detectors meets high requirements regarding, among others, the precision of mixing the component gases, the stabilization of gas pressure in the detectors, and the protection of the detectors in the event of a system failure.
        The hardware architecture of the gas system integrates gas supply components, atmospheric condition sensors, and automation based on PLC controllers to maintain optimal working conditions. The software structure, based on EPICS, enables modular control, real-time monitoring, and efficient data collection. The user interface, developed using Phoebus, provides an intuitive graphical environment for system operation and diagnostics.
        This work presents the design, implementation, and functionalities of the gas supply control system, highlighting its role in ensuring precise and reliable operation of the PANDA straw detectors. Potential improvements and future extensions are discussed, focusing on optimizing system performance for upcoming experimental phases.

        Speaker: Mr Lukasz Zytniak (S2Innovation Sp z o. o. [Ltd.])
      • 468
        Hardware orchestrated, multi-dimensional, continuous scans with the IcePAP motion controller

        The high X-ray flux at fourth generation synchrotron facilities enables high quality data acquisition with short detector integration times. Experiments whose durations were previously dominated by detector integration are thus increasingly dictated by the time required for motorized motion. In particular, experiments performed in a step-wise fashion — where motion is stopped during each integration — suffer from significant motion dead-time due to repeated acceleration/deceleration between each step. For this reason, interest in continuous scans — where detector integration occurs during motion — has grown within the synchrotron community. Precise synchronization is however required in order to ensure data acquisition at the desired positions. These synchronization demands can be particularly challenging in multi-dimensional scans involving multiple moving components.
        Here we present a hardware orchestrated, multi-dimensional, continuous scan implementation based on the IcePAP motion controller. Both motion control & detector triggering are orchestrated by the IcePAP hardware, resulting in high precision synchronization. Arbitrary motion trajectories — in up to 128 degrees of freedom — & trigger patterns can be implemented. Scan configuration & initiation is performed in software by the Sardana orchestration suite backed by the Tango control system. The implementation has been demonstrated to yield significant experimental time savings compared to equivalent step scans.
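
        As a simplified illustration of the kind of trajectory and trigger pattern that such hardware executes (conceptual only; this does not use the IcePAP or Sardana APIs, and the scan ranges are arbitrary), one can precompute a two-dimensional snake trajectory together with equally spaced trigger positions along the fast axis:

        # Conceptual sketch (not the IcePAP/Sardana implementation): precompute a
        # 2D snake trajectory and the trigger positions along the fast axis at which
        # the detector should be gated during continuous motion.

        import numpy as np

        def snake_trajectory(x_start, x_stop, n_triggers, y_positions):
            """Yield (y, x_triggers) per line, reversing the fast axis every row."""
            forward = np.linspace(x_start, x_stop, n_triggers)
            for row, y in enumerate(y_positions):
                x_triggers = forward if row % 2 == 0 else forward[::-1]
                yield y, x_triggers

        # Hypothetical scan: 5 slow-axis rows, 11 triggers per row.
        for y, xs in snake_trajectory(0.0, 1.0, 11, np.linspace(0.0, 0.4, 5)):
            print(f"y = {y:.2f}: trigger at x = {np.round(xs, 2)}")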

        Speaker: Marcelo Alcocer (MAX IV Laboratory)
      • 469
        Hardware orchestration architecture for fly and step scan at SIRIUS beamlines: a distributed, multi-platform system for sub-micrometer motion and data acquisition synchronization in on-the-fly synchrotron measurements

        X-ray absorption spectroscopy (XAS) is one of the techniques that require multiple beamline devices to operate in tight synchronization to maximize beam flux and focus and to ensure reliable measurements. These devices, such as the undulator, monochromator, quarter-wave plate, and detectors, exhibit a variety of behaviors, phenomena, capabilities, and controller platforms, ranging from the photon source to the sample holder. First, this work aims to provide an overview of the existing methods, detailing the adopted definition of synchronization, and then demonstrates top-notch commissioning results for critical on-the-fly synchrotron measurements, significantly impacting the EMA (extreme conditions), QUATI (quick-EXAFS) and SABIA (XMCD) Sirius beamlines. Additionally, the paper highlights the architecture's adaptability, enabling integration across a range of devices while maintaining custom, precise temporal and energy calibration, ensuring short scan durations and minimizing sample damage.

        Speaker: Telles René Silva Soares (Brazilian Synchrotron Light Laboratory)
      • 470
        IC@MS – web-based alarm management software

        IC@MS (Integrated Cloud Alarm Management Software) is a modular, web-based platform designed to unify and modernize alarm handling in scientific and industrial control environments. Initially developed for facilities using the Tango Controls framework, IC@MS provides seamless integration with PyAlarm, AlarmHandler, PyTangoArchiving, and TangoGraphQL to enable real-time monitoring, alarm grouping, historical analysis, and device interaction. Its backend, based on Flask and MongoDB, and frontend built with React, offer a responsive user interface for defining alarms, assigning severities, and configuring notifications via email or SMS. Through Dockerized deployment and REST/GraphQL APIs, IC@MS ensures flexibility, scalability, and extensibility across varied infrastructure landscapes. The software has been successfully deployed in particle accelerators and synchrotron light sources, supporting thousands of concurrent alarms and enabling data-driven decisions through structured alarm history and snapshot analysis. IC@MS represents a shift toward interoperable, cloud-ready alarm systems designed to meet the performance, reliability, and compliance needs of complex research facilities.

        Speaker: Mr Lukasz Zytniak (S2Innovation Sp z o. o. [Ltd.])
      • 471
        IFMIF-DONES advanced machine protection system

        The International Fusion Materials Irradiation Facility-DEMO-Oriented Neutron Source (IFMIF-DONES) is a cutting-edge accelerator-based neutron source for fusion materials research. The facility involves a particle accelerator that produces a 40 MeV, 125 mA deuterium beam impinging on a flowing liquid lithium target, generating neutrons via the nuclear stripping reaction; a robust central control system is therefore required for safe and efficient operation. The IFMIF-DONES Central Instrumentation and Control System (CICS) focuses on three groups of systems: CODAC (Control, Data Acquisition, and Communication), responsible for overall coordination and data management; MPS (Machine Protection System), ensuring machine protection; and SCS (Safety Control System), implementing safety functions for personnel and the environment. This work presents the current design, focused only on the robust central MPS interlock signals for safe, reliable, and efficient protection of the machine. The MPS design features advanced technology and fast interlock propagation, based on a hierarchical, modular, and scalable architecture with built-in redundancy. The MPS shall be able to respond within a maximum time of 10 µs, acting on the accelerator systems and the lithium systems.

        Speaker: Ruben Lorenzo Ortega (IFMIF-DONES Spain Consortium)
      • 472
        Image logging and annotation in Karabo for improved experimental monitoring

        In the Karabo framework, efficient visualization and data logging are essential for monitoring and optimizing experiments. The Image Logger device captures and compresses images from cameras or other imaging sources, storing them for future reference. Image logging can be performed opportunistically—without a dedicated DAQ setup—by treating image data as slow data, meaning it is transmitted asynchronously through broker-based communication rather than via the real-time pipeline used for full-size image streams. This approach offers flexibility and adaptability to a variety of experimental conditions. Users can retrieve past images and view them alongside live ones within the same scene, enabling straightforward visual comparison and reducing reliance on manual logbook entries. Complementing this, the Image Annotation device tracks regions of interest (ROI) such as crosshairs and rectangles, preserving alignment and diagnostic information over time. Additionally, Grafana Panels have been configured to support historical inspection and troubleshooting, providing a structured way to review and interpret logged image data across different timeframes.

        Speaker: Ana García-Tabarés Valdivieso (European X-Ray Free-Electron Laser)
      • 473
        Improving cybersecurity on the Laser Megajoule Facility

        The Laser MegaJoule (LMJ), a 176-beam laser facility developed by CEA, is located near Bordeaux. It is part of the French Simulation Program, which combines improvements to theoretical models used in various domains of physics with high-performance numerical simulation. It is designed to deliver about 1.4 MJ of energy onto targets for high-energy-density physics experiments, including fusion experiments.
        The LMJ technological choices were validated on the LIL, a scale-one prototype composed of one bundle of four beams. The first bundle of eight beams was commissioned in October 2014 with the implementation of the first experiment on the LMJ facility. The operational capabilities are increasing gradually every year until full completion by 2026. By the end of 2025, 22 bundles of eight beams will be assembled (full scale) and 19 bundles are expected to be fully operational.
        The computer systems are facing many threats with increasingly sophisticated attacks to break through the defenses in place. In addition to existing measures on the LMJ facility, an action plan has been implemented to improve our protection against cyber threats. This article provides an overview of the different measures considered and the impact of their implementation on existing systems.

        Speaker: Jean-Philippe Airiau (Commissariat à l'Énergie Atomique et aux Énergies Alternatives)
      • 474
        Integrated control systems for time-resolved RIXS at LCLS-II: design and operational challenges

        The newly enhanced LCLS-II X-ray laser at SLAC National Accelerator Laboratory represents a major advancement in X-ray science, providing unprecedented capabilities for probing ultrafast dynamics in chemistry, materials science, biology, and beyond. Among the new beamlines, the Resonant Inelastic X-ray Scattering (RIX) beamline leverages the high repetition rate of LCLS-II to investigate the energy distribution and evolution of occupied and unoccupied molecular orbitals in complex and catalytic systems, particularly in liquid environments. This beamline features two dedicated endstations—qRIXS (upstream) and chemRIXS (downstream)—each optimized for distinct scientific goals. This talk will detail the design and implementation of the experimental controls and data systems that unify beamline hardware and instrument automation. Additionally, this talk will discuss the challenges of synchronizing operations across two endstations on a single beamline for time-resolved spectroscopy under demanding experimental conditions.

        Speaker: Dr Jyoti Joshi (SLAC National Accelerator Laboratory)
      • 475
        Integrated display management for control screens at APS accelerator complex

        The Advanced Photon Source (APS) accelerator employs both MEDM and CSS Phoebus as display managers to support legacy control screens and the new displays introduced during the APS Upgrade project. This dual-system setup presents challenges in the organization, deployment, and maintenance of accelerator control displays. To address these issues, a unified, centralized framework has been developed for managing the APS accelerator control screens, regardless of format, purpose, or update frequency. This paper describes the strategies used by APS accelerator controls to organize, deploy, and maintain control displays. It also presents the tools and practices used to manage display managers across the APS accelerator control system.

        Speaker: Lingran Xiao (Argonne National Laboratory)
      • 476
        Integration of a tape drive sample delivery system at the European XFEL

        High-throughput and reliable sample delivery plays a key role in high-energy laser experiments using low cross-section techniques such as X-ray Thomson scattering (XRTS), which require, due to the destructive laser-sample interaction, a large number of reproducible samples to ensure statistical accuracy. The tape drive—a conveyor-based system capable of continuously delivering samples at high speed—has been applied in structural biology and high-intensity laser experiments. In this contribution, we describe the integration of the tape drive into the European XFEL control system, enabling a level of sample delivery throughput that would not be achievable with alternative methods. The STFC-developed tape drive has been fully integrated into the facility’s control infrastructure and is now used to perform highly accurate XRTS measurements. We discuss how this integration improves efficiency by enabling precise synchronization and minimizing downtime.

        Speaker: Ana García-Tabarés Valdivieso (European X-Ray Free-Electron Laser)
      • 477
        Integration of microchannel plate delay line detector in SXP instrument and Karabo control system at European XFEL

        The Soft X-Ray Port (SXP) instrument at the European X-Ray Free-Electron Laser (EuXFEL) facility is designed to provide a flexible environment for time- and spin-resolved X-ray photoelectron spectroscopy (TR-XPES) experiments. Two key components of the TR-XPES experimental station are the time-of-flight (ToF) momentum microscope spectrometer and the microchannel plate delay line detector (MCP-DLD). This contribution describes the key steps and challenges of integrating the MCP-DLD detector at the SXP instrument and into the Karabo control system.

        Speaker: Ivars Karpics (European X-Ray Free-Electron Laser)
      • 478
        Integration of sample temperature control and X-ray data acquisition at the TPS 13A biological small- and wide-angle X-ray scattering beamline

        Integration of sample temperature control with X-ray scattering data acquisition is essential for the biological small- and wide-angle X-ray scattering (BioSWAXS) beamline 13A at the 3 GeV Taiwan Photon Source (TPS), National Synchrotron Radiation Research Center (NSRRC) [1–3]. Based on the Experimental Physics and Industrial Control System (EPICS) framework [4,5], we implemented two independent temperature control systems—chiller and heater—into the X-ray data acquisition workflow through Control System Studio (CSS) and the beamline data collection graphical user interface (DC-GUI). The chiller system employs a JULABO FCW-2500T unit for a temperature range of –20 °C to 80 °C, while the heater system (ambient to 300 °C) uses heating rods regulated by an SRS PTC10 controller, complemented by a gas-flow cooling module. Through the DC-GUI, the X-ray data acquisition system coordinates seamlessly with the sample environment control, including sample positioning, cooling, and both the chiller- and heater-based GUIs. This integration enables programmable X-ray data acquisition under synchronized temperature control. A temperature-dependent SAXS measurement is demonstrated, showing that the stability and interoperability of the combined systems significantly enhance beamline performance and experimental reliability.
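
        A temperature-dependent acquisition of this kind can be sketched generically as a loop over set points: write the set point, wait for stabilization, then trigger an exposure. The sketch below uses pyepics with entirely hypothetical PV names and tolerances; it is not the 13A DC-GUI implementation.

        # Generic sketch of a temperature-dependent acquisition loop (hypothetical
        # PV names; not the TPS 13A DC-GUI code). Requires pyepics.

        import time
        from epics import caget, caput

        SETPOINT_PV = "BL13A:CHILLER:SP"      # hypothetical set-point PV
        READBACK_PV = "BL13A:CHILLER:RBV"     # hypothetical readback PV
        ACQUIRE_PV  = "BL13A:DET:Acquire"     # hypothetical detector trigger PV

        def wait_stable(target, tol=0.2, hold=30.0, poll=1.0):
            """Block until the readback stays within tol of target for `hold` seconds."""
            stable_since = None
            while True:
                if abs(caget(READBACK_PV) - target) < tol:
                    stable_since = stable_since or time.time()
                    if time.time() - stable_since >= hold:
                        return
                else:
                    stable_since = None
                time.sleep(poll)

        for temperature in (25.0, 40.0, 60.0, 80.0):   # deg C, illustrative ramp
            caput(SETPOINT_PV, temperature, wait=True)
            wait_stable(temperature)
            caput(ACQUIRE_PV, 1, wait=True)            # start one SAXS exposure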

        Speaker: Cheng-Yuan Lin (National Synchrotron Radiation Research Center)
      • 479
        Interfacing custom (EPICS) software with in-house developed frame-grabbers & commercial cameras

        Integrating machine-vision system cameras in an experimental setup can be a tedious and perilous process. First, there is the challenge of adapting a commercial solution to a slightly different environment or mission than the one for which it was originally designed. Second, the design may involve both commercial and internally built hardware and software, with all pieces required to be compatible and interoperable. Furthermore, since separate engineering teams usually develop and deploy the different layers of the final architecture, this could easily lead to miscommunication when defining the various interfaces, configuration and even operational settings and parameters. As a result, failures may occur on various layers in the resulting architecture. At the SLAC National Accelerator Laboratory, such cameras are used, among other things, to monitor the profile of the Linac Coherent Light Source (LCLS) X-ray laser beam, but also for modeling purposes. A custom data-acquisition solution that includes an in-house developed frame-grabber board has been developed to interface with those cameras. The primary motivation of a custom solution is the integration of the timing system into the camera readout. In essence, the LCLS timing system is used by firmware to trigger the camera, but also to construct events consisting of camera image data, along with the associated timing information. In this paper, we present a summary of this work from a primarily software perspective.

        Speaker: Michael Skoufis (SLAC National Accelerator Laboratory)
      • 480
        KEK electron/positron injector LINAC safety system

        At the KEK electron/positron injector LINAC, simultaneous top-up injection into four independent storage rings and a positron damping ring has been successfully performed since May 2019.
        To maintain long-term stable beam operation under such a complex operational scheme, high availability of the control system is essential. A reliable safety system is also critical for preventing radiation-related incidents involving personnel and for protecting machine components.
        The control system at the KEK LINAC has been developed using the EPICS framework, which is widely adopted at particle accelerator facilities. In contrast, the safety system was initially developed independently of the EPICS framework.
        To ensure seamless integration between the control and safety systems, an EPICS IOC for the safety system was implemented using the netDev device support. Moreover, a newly developed beam operation logic status GUI enables quick identification of the causes of abnormal LINAC operating conditions.
        This paper presents a detailed overview of recent improvements to the KEK LINAC safety system.

        Speaker: Masanori Satoh (High Energy Accelerator Research Organization)
      • 481
        Lifecycle management tools in the development of SNS credited safety systems

        A new FPGA/PLC credited Beam Power Limit System (BPLS) ensures beam power to the SNS First Target Station does not exceed a safety envelope of 2MW + 10%. A new SNS beamline, VENUS, uses a single safety PLC to perform safety interlock logic. In order to link and track the extensive documentation required to manage a safety credited system, the commercial tool Reqtracer® was used to develop and link dozens of documents from the system requirements, specifications, code, down to specific test results. This level of linkage helps to ensure the pedigree of the SNS safety systems remains intact over the life of the system. This paper will discuss the experience, advantages, and disadvantages of using such a tool in the lifecycle of a safety credited system.

        Speaker: Kelly Mahoney (Oak Ridge National Laboratory)
      • 482
        Logging a new era at the APS using BELY

        As the “Dark Year” of Advanced Photon Source Upgrade (APS-U) concludes, a new logbook is essential to document the process of bringing the facility back online. The Best Electronic Logbook Yet (BELY) has been developed and deployed as a solution to fulfill this requirement. This paper dives into the development process and technologies used to create BELY. Additionally, it will explore the features BELY provides to address all of its operational requirements. One of the significant strengths of BELY is its broad adoption across the APS, driven by its well-organized structure. The widespread use at the APS significantly enhances communication between teams responsible for maintaining the machine, ensuring that information is easily accessible, and collaboration is seamless. Furthermore, the paper discusses various uses of BELY. Finally, it presents ideas for the future development and enhancement of BELY.

        Speaker: Dariusz Jarosz (Advanced Photon Source, Argonne National Laboratory)
      • 483
        Maintainable web interfaces at beamlines

        Development of web-based applications for beamline controls and data analysis has become increasingly common as labs seek to take advantage of modern browser capabilities and improved user experience. Implementing any new controls interface comes with challenges, but web-based applications present even more unique considerations that can hinder the transition.
        This talk focuses on strategies implemented at the Advanced Light Source (ALS) to create a maintainable, scalable web-based controls application. Our approach centers on creating modular library components that address specific requirements shared across multiple beamlines. A combination of modern tooling and packaging systems has enabled a development framework that encourages reuse, consistency, and long-term maintainability.
        Various technologies are used for this effort, including prototyping tools (Figma), live component viewers (Storybook), and package managers (npm) for seamless integration of modular components into applications.
        These strategies form the foundation of a web development framework that magnifies developer output, reduces redundant work, and simplifies long-term maintenance. The investment in tooling and library creation pays off as new features are added and existing ones evolve. These practices are applied at the ALS in preparation for its upgrade to a fourth-generation synchrotron, where browser-based controls and analysis will be the standard across all new beamlines.

        Speaker: Seij De Leon (Lawrence Berkeley National Laboratory)
      • 484
        Management of server and network infrastructure at SuperKEKB

        The SuperKEKB accelerator employs an EPICS-based control system for its operation.
        This presentation introduces the core server infrastructure and network configuration that support the accelerator controls. In particular, we focus on the server management system, which utilizes "Ansible" for code-based configuration management. This approach ensures that server deployment can be performed reliably and reproducibly by any team member at any time.
        The details of this system, including its structure, benefits, and operational experience, will be presented.

        Speaker: Dr Hitoshi Sugimura (High Energy Accelerator Research Organization)
      • 485
        Managing shifts in the NA61/SHINE experiment: development of the shift scheduler system

        The NA61/SHINE experiment (SPS Heavy Ion and Neutrino Experiment) is a high-energy particle spectrometer at CERN, using the Super Proton Synchrotron (SPS). With the SPS providing protons and various ions, NA61/SHINE can study a wide range of nuclear systems. Its physics program focuses on strong interactions and supports research on cosmic rays and neutrinos.
        The experiment requires 24/7 shift coverage by collaboration members. To ensure continuity, quality, and efficient staffing, the NA61/SHINE shift scheduler system was developed.
        This web-based system, integrated with the collaboration’s infrastructure, provides access to member roles, contact details, and assigned activities. Coordinators can manage runs and ensure optimal use of available staff. Members can independently book shifts based on their permissions. The system includes user and institute profiles with shift statistics and charts, visible within the collaboration for transparency.
        A key feature is the interactive shift calendar, which supports data-taking planning and duty assignments. The system can also generate calendar printouts with assigned shifts, helping to coordinate participation and maintain balanced staffing throughout data-taking periods.

        Speaker: Marcin Slodkowski (Warsaw University of Technology)
      • 486
        Micro-TCA-based data acquisition for APS radio frequency systems

        A data acquisition (DAQ) system has been developed for Radio Frequency (RF) systems at the Advanced Photon Source (APS). The hardware is based on the micro-TCA platform and includes FPGA carrier boards equipped with FMC cards featuring RF ADC and DAC capabilities. These components interface with a Linux blade via PCIe over a backplane. The software, built on the EPICS framework, facilitates control and real-time DAQ of RF systems. The EPICS DAQ software streams data to distributed services on the APS network. The dataflow is presented, starting at the RF hardware, through the FPGA to the Linux blade, through the EPICS software, and finally to the network services. This project was a collaborative effort involving industry and multiple institutions.

        Speaker: Timothy Madden (Advanced Photon Source)
      • 487
        Migration of SNS cryogenic system alarms from the softioc-based alarm handler to BEAST

        Since inception, the cryogenic system at Spallation Neutron Source (SNS) has been using a softioc-based alarm handler. It remained in place until the migration of all system alarms to the Best Ever Alarm System Toolkit (BEAST) in 2024. Although other SNS systems have migrated to BEAST, the cryogenic system could not transition to BEAST due to critical operational requirements. However, the addition of seven new cryomodules during the SNS Proton Power Upgrade (PPU) project necessitated the move from the softioc-based alarm handler to BEAST. Migration constraints, challenges, and implementation are discussed.

        Speaker: Marnelli Martinez (Spallation Neutron Source)
      • 488
        Modernizing software tools for the accelerator control room

        A postmortem analysis of three new software tools developed for the accelerator control room at SLAC: a new archive viewer, a replacement for the logbooks, and an updated snapshot and restore tool. We discuss the operational improvements provided by each tool, the technologies and methods utilized in their development, and key lessons learned. Additionally, we examine how these experiences are influencing our software development practices at SLAC, including efforts to build more collaborative teams aligned with industry standards and our efforts to integrate UI/UX best practices into our development processes.

        Speaker: Yekta Yazar (SLAC National Accelerator Laboratory)
      • 489
        myLog: A modern, integrated logbook for scientific collaboration at European XFEL

        Scientific experiments at large-scale research facilities require flexible and collaborative tools to document, discuss, and track experimental progress. We introduce myLog, a new logbook solution developed at European XFEL after four years of iterative prototyping and user engagement.
        Designed to meet the evolving needs of scientists and support teams, myLog offers a user-friendly interface built on a robust architecture. It leverages Zulip for threaded, real-time communication and uses the facility’s metadata catalogue, myMdC, to orchestrate and support users' adoption of new workflows through enriched interfaces and GUIs. This integration enables a seamless connection between discussions, control system events, experiment dataset metadata, and data analysis artifacts.
        myLog emphasizes user-centric organization. Experiment groups can define how information and notifications are structured, while Principal Investigators retain full control over access permissions. Real-time integration with the Control System allows automatic logging of key events, complemented by manual entries when needed.
        Importantly, myLog respects data governance by enforcing embargo policies, ensuring sensitive information is managed appropriately. By combining communication, metadata, and automation in a single platform, myLog provides a modern, scalable approach to scientific logging and collaboration. We aim to present the solution overview and its integration into the facility infrastructure.

        Speaker: Luis Maia (European X-Ray Free-Electron Laser)
      • 490
        Observation planning tool for the MeerKAT radio telescope

        The South African Radio Astronomy Observatory (SARAO) allocates time on the MeerKAT Radio Telescope to the international scientific community to maximize its impact on radio astronomy while fostering South African scientific leadership and human capital development. To streamline and optimize this process, SARAO has developed an Observation Planning Tool (OPT), which allows astronomers to define and plan observations on the telescope. Submitted observations are then processed by an Astronomer on Duty (AOD), before being scheduled.
        In this paper, we detail how the OPT supports SARAO’s broader mission of effectively operating its radio telescope to produce usable scientific data by enhancing the efficiency, transparency, and scientific utility of the scheduling process. We describe the tool’s functionality, design rationale, and ongoing improvements. Key features include a calibrators' catalogue; the ability to simulate or dry-run observations; and a scheduling assistant to aid the Scheduler’s optimization efforts and the planning of future observation schedules.

        Speaker: Zanele Kukuma (South African Radio Astronomy Observatory)
      • 491
        Operating system security updates and network boot support for Libera instruments

        The Libera instruments are widely used in particle accelerators for applications like beam position monitoring, beam loss monitoring and control of radio frequency fields. The instruments rely on embedded Linux operating systems to ensure stable and precise operation. Maintaining security and operational reliability across a fleet of such instruments presents a significant challenge, especially in facilities with limited physical access. This paper presents a robust and automated solution for managing operating system security updates and network-based booting for Libera instruments. We detail an approach that integrates secure, version-controlled Linux OS updates with a centralized network boot infrastructure, enabling consistent and traceable deployments across all instruments. The network boot process further simplifies device provisioning and recovery, reducing downtime and minimizing maintenance overhead. This combined strategy improves system security posture, ensures reproducibility and supports scaling. Implementation experiences from production accelerator environments are also discussed.

        Speaker: Mr Miloš Bajič (Instrumentation Technologies (Slovenia))
      • 492
        Polka - web management tool for TANGO controls

        Modern control systems increasingly require intuitive and platform-independent tools to manage distributed infrastructures. We present Polka, a lightweight web-based management tool for TANGO Controls. Polka offers a user-friendly interface to administer multiple TANGO databases, device servers (Starters), branches, and polling configurations, including the Polling Manager, Polling Profiler, and Pool Threads Manager.
        Built with React and WebSocket-based communication, it delivers a responsive, real-time interface accessible from any browser. Unlike Astor, the long-established desktop-based TANGO Manager, Polka requires no client-side installation and supports platform-independent access for both administrators and operators.
        While Astor remains effective for quick diagnostics and device server control via its Java GUI, Polka offers a modernized approach to the same core functionalities, reintroducing multi-database support, Starter editing, and branch migration. It also brings refreshed tools for polling configuration profiling with visualizations and delivers updated statistics dashboards.
        This paper will present Polka’s architecture, key features, and operational benefits in scientific infrastructures. The comparison with Astor highlights how Polka preserves core TANGO management capabilities while introducing scalable, web-based enhancements that address the needs of modern distributed control systems.

        Speaker: Mr Lukasz Zytniak (S2Innovation Sp z o. o. [Ltd.])
      • 493
        Preliminary design for beam charge monitoring interlock system for Advanced Light Source at Lawrence Berkeley National Lab

        The Accumulator Ring (AR), a key component of the Advanced Light Source Upgrade (ALS-U) project at Lawrence Berkeley National Laboratory (LBNL), is currently under installation and scheduled for commissioning with beam after the 2026 summer shutdown. To support safe commissioning and eventual routine operation, the Personnel Protection Systems (PPS) team is developing a Beam Charge Monitoring Interlock System (BCMIS). The BCMIS is designed to ensure that operation of the accelerator remains within the defined limits of the ALS Accelerator Safety Envelope (ASE). It will provide critical interlock functionality by monitoring beam charge levels and triggering protective actions if thresholds are exceeded. Besides that, BCMIS will be the first safety-rated, PLC-based system developed by the PPS team and integrated into the existing relay-based ALS safety infrastructure. This paper presents the high-level requirements, key assumptions and preliminary design for BCMIS.

        Speaker: Denis Paulic (Lawrence Berkeley National Laboratory)
      • 494
        Preparing the migration of Python GUIs from Qt5 to Qt6 at CERN

        The use of Python in CERN's particle accelerator domain has grown steadily over the last decade. For non-software developers, it integrates well into the reality of a physics laboratory, where people who develop hardware or design operational workflows often author associated software, including Graphical User Interfaces (GUIs). While the general strategy is to standardise GUIs using no-/low-code solutions, complex use cases require more customised GUIs. For this, Python and Qt are combined through PyQt bindings. Qt5 has been used for several years but with its end-of-life already reached, migration to Qt6 is a priority. At the same time, GUI applications in the CERN Controls ecosystem require long-term stability that exceeds the software industry average. Therefore, the migration requires a lot of care, taking into account both maintenance costs and long-term risks associated with reliance on a 3rd-party technology. To plan the upgrades, alternatives to the current PyQt approach have been reviewed. In the past, PyQt5 was an obvious choice, but investment into the PySide library by the Qt Company now presents more options. We outline the criteria considered in our comparison of PyQt, PySide, and QML, and present the resulting decisions and rationale behind them. Finally, we discuss long-term risks associated with Qt bindings for Python and describe how we will manage the migration to future versions of the Qt framework, such as Qt7.

        Speaker: Ivan Sinkarenko (European Organization for Nuclear Research)
      • 495
        Process orchestration and system configuration in the MeerKAT radio telescope

        The Control and Monitoring (CAM) system of the MeerKAT telescope is highly distributed, necessitating a reliable and automated framework for configuring, deploying, and managing the lifecycle of its many software processes. System configuration follows a hybrid approach using static and dynamic configurations that define the telescope's operational parameters and hardware setups. General functionality can be extended or customized through configuration adjustments, minimizing the need for code modifications. The katlauncher application serves as the entry point, initialising the CAM software infrastructure processes for logging and configuration serving before the rest of the CAM system starts. The katsyscontroller application acts as the overall system coordinator, responsible for the sequenced startup and controlled shutdown of the entire system, as well as managing interventions and operator commands. The processes on each node are monitored and managed by the katnodemanager application, which exposes sensors and requests to control processes via a Karoo Array Telescope Communication Protocol (KATCP) interface. These components work together to ensure that the MeerKAT telescope operates efficiently and reliably, with a clear separation of responsibilities for configuration, process management, and system-wide orchestration.

        Speaker: Rishad Ebrahim (South African Radio Astronomy Observatory)
      • 496
        Prometheus system monitoring stack

        The MeerKAT radio telescope, a 64-dish instrument located in South Africa, represents a significant leap in Southern Hemisphere radio astronomy, providing unprecedented sensitivity prior to its integration with the Square Kilometer Array (SKA). The operational efficiency of complex projects like MeerKAT relies heavily on a robust Control and Monitoring (CAM) system, which is underpinned by over fifty Linux servers. Ensuring the stability and performance of these servers is paramount to maximizing scientific output. This paper details the implementation and benefits of a Prometheus-based monitoring stack, designed to provide comprehensive surveillance of the MeerKAT CAM system’s hardware, operating systems, and CAM services. The system proactively detects issues, triggers alerts, speeds remediation, and improves troubleshooting, ultimately minimizing downtime and protecting critical data.
        The monitoring stack for CAM was deployed with Ansible. The Node Exporter application extracts operating-system metrics such as node_memory_MemFree_bytes and node_filesystem_free_bytes, as well as our custom CAM service metrics such as karoo_camlog_status and karoo_vault_status, and feeds them into Prometheus. Prometheus processes these metrics and generates alerts using Python scripts, which the Alertmanager application then uses to generate notifications and alarms via the Mattermost messaging application. Grafana transforms the metrics stored in Prometheus into interactive dashboards.
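
        As a generic illustration of how such custom service-status metrics can be exported for Prometheus to scrape (the metric name, port, and health check below are hypothetical; this is not the deployed CAM exporter), a small Python script using prometheus_client might look like:

        # Generic sketch of a custom service-status exporter (hypothetical metric
        # and check; not the deployed CAM exporter). Requires prometheus_client.

        import subprocess
        import time
        from prometheus_client import Gauge, start_http_server

        # 1 = service healthy, 0 = service down; scraped by Prometheus.
        service_status = Gauge("cam_service_status",
                               "Health of a CAM service (1 = up, 0 = down)",
                               ["service"])

        def check_service(name):
            """Return 1 if the systemd unit is active, else 0."""
            result = subprocess.run(["systemctl", "is-active", "--quiet", name])
            return 1 if result.returncode == 0 else 0

        if __name__ == "__main__":
            start_http_server(9200)               # metrics served on :9200/metrics
            while True:
                for svc in ("camlog", "vault"):   # illustrative service names
                    service_status.labels(service=svc).set(check_service(svc))
                time.sleep(15)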

        Speaker: Trymore Gatsi (National Research Foundation)
      • 497
        Pycumbia: Bridging high-performance control logic and Python simplicity for modern UIs

        Pycumbia is a Python binding to the high-performance C++ cumbia framework, designed to simplify the development of control system applications without sacrificing responsiveness or scalability. It offers a user-friendly interface while maintaining the speed, concurrency, and low memory footprint of its C++ backend. By releasing Python’s GIL, pycumbia ensures that GUI applications and data workflows remain smooth and responsive even under heavy load, a key requirement in control system environments.
        From a user experience perspective, pycumbia significantly reduces the complexity typically associated with integrating control systems into custom applications. Developers can build advanced data visualization tools and synoptic panels with just a few lines of Python code, without dealing with polling logic, event dispatching, or thread management. Pycumbia also ships with PYI stubs for full IDE code completion.
        A key architectural advantage is its flexibility in deployment: pycumbia can run inside an isolated miniconda environment, allowing developers to use up-to-date Qt and Python packages independently of the operating system, or system-wide, provided the base OS supports a recent enough software stack. This enables modern Qt-based graphical applications to be developed and run consistently across platforms, bypassing limitations of outdated system packages.
        In real-world control system applications, pycumbia improves both the developer experience and application performance.

        Speaker: Lucio Zambon (Elettra-Sincrotrone Trieste S.C.p.A.)
      • 498
        PyDM: Updates and roadmap

        PyDM (Python Display Manager) is an open-source software platform designed to simplify the development of user interfaces for control systems, leveraging Python's power, flexibility, and large third-party library support. PyDM provides a no-code, drag-and-drop system to make simple screens, as well as a straightforward Python framework to build complex applications. Over the past two years, significant enhancements have been made to PyDM, including adding support for PySide6, new and improved graphing widgets, and steady improvements to the existing framework. We will highlight key advancements achieved over the past two years and provide an overview of planned developments aimed at maintaining PyDM going forward.

        Speaker: Mr Nolan Stelter (SLAC National Accelerator Laboratory)
      • 499
        Radiation monitoring system using a PIN photodiode at the KEK electron/positron injector LINAC

        We have been developing a real-time radiation monitoring system based on a PIN photodiode gamma ray detector (developed by Taisei Corporation, manufactured by Yaguchi Electric Co., Ltd., originally designed by Radiation-Watch Co., Ltd., PocketGeiger(TM)). This compact and portable system consists of a small sensor unit and a Raspberry Pi, with control software developed using an EPICS-based framework. The software is designed to support a wide range of radiation measurements. To evaluate the system's measurement accuracy and reproducibility, the sensor was calibrated with Co-60 and Cs-137 sources at the KEK Radiation Irradiation Facility. In addition, radiation measurements were conducted during both operational and shutdown periods of the KEK electron/positron injector LINAC. This paper presents a detailed description of the system and its control software, along with the results of calibration and measurement tests carried out at the facility.

        Speaker: Itsuka Satake (High Energy Accelerator Research Organization)
      • 500
        Real-time EPICS-integrated RAON timing system enhancements

        The timing system of the RAON accelerator is critical for precise beam synchronization and safety interlocks. We present a real-time analog feedback infrastructure combining the Digilent ADP3450 (Zynq-7000) with a Raspberry Pi 5, enabling streaming and processing of >10 MSPS event data over the EPICS network. Beam frequency and pulse-width monitoring at sub-microsecond resolution are now directly available in the CSS Phoebus GUI. We have implemented and validated four key features: pulse train, ramp-up, optional emergency-halt, and event-switching. Testing demonstrates stable data delivery and responsive feedback control for 1 Hz signal adjustments and up to ~1 MHz sampling rates. These enhancements address limitations observed during 2024 operations, significantly improving safety, flexibility, and efficiency for upcoming high-energy beam delivery at RAON.

        Speaker: eunsang kwon (Institute for Basic Science)
      • 501
        Real-time experiment steering at European XFEL within Karabo

        European XFEL is an X-ray free-electron laser user facility that generates high-throughput data streams. To ensure optimal steering of experiments, the real-time provision of compact metrics and live visuals for immediate feedback is critical. In this contribution, we describe how the facility's supervisory control and data acquisition system Karabo is exploited to this end. Two use cases, time-resolved X-ray absorption spectroscopy at the Spectroscopy and Coherent Scattering (SCS) instrument and X-ray photon correlation spectroscopy at the Materials Imaging and Dynamics (MID) instrument, exemplify typical workflows.

        Speaker: Cammille Carinan (European X-Ray Free-Electron Laser)
      • 502
        Redesign of the Timeline Generator at Fermilab using a web-based Flutter application, GraphQL API and an IOC

        The control system at Fermilab is undergoing an evolution, with a shift towards web-based applications connected to the EPICS infrastructure. The Timeline Generator (TLG) is an application that coordinates events across the lab using different timing links. These links include the Tevatron clock (TCLK), a 10 MHz serial link with events encoded at 20 Hz, and Machine Data (MDAT), a communication link with states encoded at 720 Hz. This paper covers the redesign of the major components of the TLG. This includes a web-based Flutter application for building timelines. A placement service with a GraphQL interface takes a timeline as input and computes a schedule of events and states. The Flutter application sends this computed schedule to the TLG IOC via a GraphQL interface to the Data Pool Manager (DPM). The TLG IOC runs on an Arria FPGA, the Accelerator Clock Generator (ACLK-GEN), which is responsible for writing the events and states onto the different timing links.
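
        To illustrate the general style of interaction with a GraphQL-fronted service (the endpoint, query, and field names below are hypothetical and do not reflect the actual TLG schema), a client can submit a timeline identifier and retrieve a computed schedule with a single HTTP POST:

        # Illustrative GraphQL client call (hypothetical endpoint and schema; not
        # the actual TLG placement-service API). Requires the requests package.

        import requests

        QUERY = """
        query ComputeSchedule($timeline: ID!) {
          schedule(timeline: $timeline) {
            events { time code }
          }
        }
        """

        response = requests.post(
            "http://tlg.example.fnal.gov/graphql",          # hypothetical endpoint
            json={"query": QUERY, "variables": {"timeline": "weekday-studies"}},
            timeout=10,
        )
        response.raise_for_status()
        for event in response.json()["data"]["schedule"]["events"]:
            print(event["time"], event["code"])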

        Speaker: Linden Carmichael (Fermi National Accelerator Laboratory)
      • 503
        Scalable and standardized PLC development: an automated framework for CERN's Cooling and Ventilation systems

        This paper presents an automated development and testing framework for PLC-based control systems operating CERN’s Cooling and Ventilation (CV) plants, addressing the challenge of engineering numerous new systems annually while maintaining over 500 existing ones with constrained resources. The framework enhances scalability through standardized, reusable control system components integrated with installation-specific logic to form complete PLC code. These components, stored in a centralized repository, undergo rigorous unit testing and comprehensive validation via GitLab pipelines, utilizing PLC manufacturer-specific Command-Line Interfaces and CERN’s proprietary tools to ensure reliability, quality, and regression-free updates. This approach significantly reduces time and resource demands by optimizing development and maintenance processes, providing a scalable model for domains requiring frequent, standardized control system development under high system-to-resource ratios. This work presents insights for engineers seeking efficient, automated solutions for complex control system environments.

        Speaker: Rafael Figueiras dos Santos (European Organization for Nuclear Research)
      • 504
        Secure EPICS PVAccess deployment framework for external scientific networks integration using Kerberos, LDAPS, and PKI at SLAC

        We present a Secure EPICS PVAccess (SPVA) deployment framework developed at SLAC to enable authenticated, encrypted and authorized access to control systems from external scientific networks. In Phase 1, SPVA has been deployed to connect HPC clients and services on SLAC’s Scientific External Network to internal PVAccess gateways supporting production accelerators.
        SPVA enforces strong mutual authentication using Kerberos service principals, which establish the runtime identity of services and clients. These identities are used to request short-lived X.509 certificates from the SLAC-managed PVAccess Certificate Management Service (PVACMS). The certificates are used for TLS-secured PVAccess communication, ensuring cryptographic trust between peers.
        Authorization decisions are enforced through Access Security Files (ACFs) that define PVAccess security groups (ASGs) referencing User Access Groups (UAGs) and Host Access Groups (HAGs). These groups are centrally managed in LDAPS, allowing fine-grained control based on organizational roles and host policies.
        This framework provides secure, traceable access to EPICS PVs across administrative domains while maintaining compatibility with PVXS-based IOCs and tools. This abstract outlines the architectural design and operational lessons from the Phase 1 rollout, providing a model for deploying secure control system access in federated scientific computing environments.
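
        For readers unfamiliar with EPICS Access Security Files, a minimal generic sketch of the group structure mentioned above is shown below; all group, user, and host names are placeholders, and this is not SLAC's production configuration.

        # Minimal generic EPICS Access Security File sketch (placeholder names,
        # not SLAC's production ACF).
        UAG(hpc_users)  { alice, bob }          # User Access Group
        HAG(hpc_hosts)  { node01, node02 }      # Host Access Group

        ASG(HPC_RW) {
            RULE(1, READ)                       # anyone may read PVs in this ASG
            RULE(1, WRITE) {                    # writes restricted to listed users/hosts
                UAG(hpc_users)
                HAG(hpc_hosts)
            }
        }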

        Speaker: Jingchen Zhou (SLAC National Accelerator Laboratory)
      • 505
        Signal response and analysis of large micro channel plate driven delay line detectors

        For soft X-ray spectroscopy beamlines, delay line detectors are often the main system for detecting photons from the sample and hence a component determining the overall beamline performance, as they can be a limiting factor for measurement speed, noise, artifacts, and resolution. As such, and even more so with larger micro-channel-plate-driven delay line detectors, the signal readout must be fast and robust to minimize noise and artifacts while still accommodating the flux from fourth-generation synchrotrons. This paper studies the signal response of a delay line detector and how the nanosecond current pulses can be filtered, amplified, and converted to voltage before digitization. The digitizer is a 12-bit, 2.5 GSPS, 6-channel system, set up to minimize noise and to enable post-acquisition signal analysis integrated into the Sardana control system and live view. Early results indicate that many of the currently present image artifacts are suppressed to a very high degree by the analog signal treatment and proper triggering. The digitized signals are fitted with the Python tool lmfit to different signal models, such as the exponentially modified Gaussian, to extract the peak of the main signal after identifying the common background response in all channels, with the aim of further improving the resolution of the detector. To optimize sampling, the system is also stress tested with regard to, e.g., sampling length and out-of-range measurements.
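
        A minimal sketch of the fitting step, using simulated pulse data and illustrative parameter values rather than the production analysis code, with lmfit's exponentially modified Gaussian model plus a constant baseline:

        # Minimal sketch of fitting a digitized pulse with an exponentially modified
        # Gaussian (simulated data; not the production detector analysis). Requires
        # numpy and lmfit.

        import numpy as np
        from lmfit.models import ExponentialGaussianModel, ConstantModel

        t = np.linspace(0, 100, 500)              # sample index, illustrative
        rng = np.random.default_rng(0)
        # Simulated pulse: peaked shape plus baseline and noise.
        y = 5.0 * np.exp(-(t - 30) ** 2 / (2 * 3.0 ** 2)) + 0.5 + 0.05 * rng.normal(size=t.size)

        model = ExponentialGaussianModel(prefix="p_") + ConstantModel(prefix="bg_")
        params = model.make_params(p_amplitude=40.0, p_center=30.0, p_sigma=3.0,
                                   p_gamma=0.5, bg_c=0.5)

        result = model.fit(y, params, x=t)
        print(result.params["p_center"].value)    # fitted pulse position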

        Speaker: Dr Peter Sjöblom (MAX IV Laboratory)
      • 506
        Smart guided commissioning of industrial control systems

        This paper proposes a web application to assist in commissioning of industrial control systems with guided functional tests. The tests verify that trigger conditions produce expected responses within the control system by automatically reading process states from the controllers (e.g., PLCs). The commissioning engineer will receive step-by-step guidance according to the test scripts, real-time feedback, and a final report at the end of the test. The reproducible tests aim to reduce human error and speed up commissioning by automating the tedious task of process state verification.

        Speaker: Jose Manuel de Paco Soto (European Organization for Nuclear Research)
      • 507
        Smart watch and control in accelerator control system

        The controls system at Brookhaven National Laboratory’s RHIC complex contains millions of control points. Many of these produce alarms when a fault condition is present. The severity of alarms often depends on a combination of factors within the Controls system. To provide Operations with condition-specific alarms, it is sometimes necessary to monitor and evaluate multiple control points simultaneously. A server/client-based software architecture was developed that monitors multiple control points and dynamically generates appropriate alarms based on user-defined programmatic relationships. User interface tools are provided that allow Operations staff to create, update, enable, and disable conditions for any specific case on the fly. The flexibility of the system can help Operations simplify diagnostics during complex failure situations.
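
        The general idea of combining several control points into one condition-specific alarm can be sketched as follows; this is generic pyepics code with hypothetical PV names and an invented rule, not the RHIC server implementation.

        # Generic sketch of a derived, multi-point alarm (hypothetical PV names and
        # rule; not the RHIC controls implementation). Requires pyepics.

        import time
        from epics import PV

        # The user-defined relationship: alarm only if the magnet current is high
        # AND its cooling-water flow is low at the same time.
        RULE = lambda values: values["current"] > 950.0 and values["flow"] < 2.0

        pvs = {
            "current": PV("MAG:PS1:CurrentRB"),   # hypothetical PV names
            "flow":    PV("MAG:PS1:CoolingFlow"),
        }
        latest = {name: None for name in pvs}

        def on_change(pvname=None, value=None, **kw):
            name = next(n for n, pv in pvs.items() if pv.pvname == pvname)
            latest[name] = value
            if None not in latest.values() and RULE(latest):
                print(f"ALARM: {latest}")         # hand off to the alarm system here

        for pv in pvs.values():
            pv.add_callback(on_change)

        while True:
            time.sleep(1.0)                       # keep the process alive for callbacks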

        Speaker: Wenge Fu (Brookhaven National Laboratory)
      • 508
        Status of HEPS beamline control system

        HEPS (High Energy Photon Source) will be the first high-energy (6 GeV) synchrotron radiation light source in China; it is mainly composed of an accelerator, beamlines, and end-stations. Phase I of the project includes 14 user beamlines and one test beamline. Construction of HEPS began in June 2019 and is scheduled for completion in late 2025. Meanwhile, the beamlines have completed photon beam commissioning, marking HEPS's official transition to the joint-commissioning phase as of March 27th, 2025. The beamline-controlled devices fall mainly into two categories: optical adjustment devices such as slits, K-B mirrors, and monochromators; and optical diagnostic and detection devices such as XBPMs (X-ray Beam Position Monitors), fluorescence targets, and detectors. The beamline control system has been designed based on the EPICS framework. The beamline network topology consists of three networks: the data network, the control network, and the equipment network. To enhance software reusability and maintain version uniformity, package management technology is used to manage both application software and system software. Here, the design and construction of the beamline control system are presented.

        Speaker: Gang Li (Institute of High Energy Physics)
      • 509
        Streamlining and updating user interfaces for LANSCE operations using Python

        This poster describes the work and design process for updating user interfaces used for operations at the Los Alamos Neutron Science Center (LANSCE) from the Tcl/Tk language to Python. Python has become a de facto standard in the software industry and offers a wide variety of libraries and plugins that can be leveraged for many kinds of projects. Utilizing the PyQt5 library, alongside internal libraries designed to interface with the Experimental Physics and Industrial Control System (EPICS) that primarily drives the accelerator’s technologies, we can greatly improve the usability and visual quality of the legacy user interfaces. These updates are a multi-year effort, and several legacy Tcl/Tk interfaces have already been replaced in production.

        LA-UR-25-24008
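
        As an illustration of the general pattern described above (not the LANSCE internal libraries), a minimal sketch pairing PyQt5 with the pyepics client to display a live PV value; the PV name is a placeholder:

            # Minimal PyQt5 window that polls an EPICS PV with pyepics and shows its value.
            import sys
            from PyQt5.QtWidgets import QApplication, QLabel, QVBoxLayout, QWidget
            from PyQt5.QtCore import QTimer
            from epics import PV

            PV_NAME = "DEMO:CURRENT"   # placeholder PV name

            class PvViewer(QWidget):
                def __init__(self, pv_name):
                    super().__init__()
                    self.pv = PV(pv_name)                 # Channel Access connection
                    self.label = QLabel("connecting...", self)
                    layout = QVBoxLayout(self)
                    layout.addWidget(self.label)
                    self.timer = QTimer(self)             # poll in the GUI thread
                    self.timer.timeout.connect(self.refresh)
                    self.timer.start(500)                 # every 500 ms

                def refresh(self):
                    value = self.pv.get()
                    self.label.setText(f"{self.pv.pvname}: {value}")

            if __name__ == "__main__":
                app = QApplication(sys.argv)
                win = PvViewer(PV_NAME)
                win.show()
                sys.exit(app.exec_())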

        Speaker: Greg DeLaTorre (Los Alamos National Laboratory)
      • 510
        Streamlining Phoebus OPI screen creation and maintenance with OPI-Generator

        With the transition from Eclipse-based Control System Studio (CS-Studio) to the Phoebus platform now complete, the focus has shifted to the development and maintenance of Operator Interface (OPI) screens for Phoebus. The "opi-generator" package offers a streamlined solution for generating properly formatted OPI XML files directly from Python scripts. Designed to simplify the creation and ongoing maintenance of OPI screens, opi-generator provides unified customizable screen styling and fine-grained control over widget layout at the pixel level. By automating OPI screen generation, this tool minimizes manual editing, enhances consistency, and accelerates development workflows for Phoebus-based CS-Studio.
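
        The opi-generator API itself is not shown in the abstract; the sketch below only illustrates the general idea of emitting Phoebus display XML from a Python script, with element names approximating the .bob format and placeholder PV names:

            # Generate a simple Phoebus-style display file from Python; element names
            # approximate the .bob XML format and are meant only as an illustration.
            import xml.etree.ElementTree as ET

            def text_update(pv, x, y, width=160, height=20):
                w = ET.Element("widget", type="textupdate", version="2.0.0")
                ET.SubElement(w, "pv_name").text = pv
                for tag, val in (("x", x), ("y", y), ("width", width), ("height", height)):
                    ET.SubElement(w, tag).text = str(val)
                return w

            display = ET.Element("display", version="2.0.0")
            ET.SubElement(display, "name").text = "Generated screen"
            for i, pv in enumerate(["DEMO:TEMP1", "DEMO:TEMP2", "DEMO:TEMP3"]):   # placeholder PVs
                display.append(text_update(pv, x=20, y=20 + 30 * i))

            ET.ElementTree(display).write("generated_screen.bob",
                                          encoding="utf-8", xml_declaration=True)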

        Speaker: Tong Zhang (Facility for Rare Isotope Beams)
      • 511
        TDM, a modern display manager based on web technologies

        This paper presents TDM, a cross-platform display manager built with modern web technologies. TDM leverages the Electron.js framework to integrate Node.js for backend services and React.js for the frontend interface. To enable communication with EPICS IOCs, a dedicated EPICS CA/PVA client library, epics-tca, was developed. TDM follows a client-server architecture: the server handles IOC communications, as well as the management of channels, windows, and script threads, while the client renders the graphical user interface (GUI) windows and manages widgets. Each GUI window runs in a separate thread, allowing TDM to fully utilize multicore CPU resources for responsive performance. TDM includes an EDM compatibility layer, enabling direct loading of existing EDM files, and offering a conversion tool for migrating EDM files to the TDM format. Additionally, TDM provides a built-in web server that allows users to open and edit display files through a web browser with an experience nearly identical to the desktop version. Performance tests demonstrate that TDM can efficiently open and edit complex display files while maintaining moderate resource usage.

        Speaker: Hao Hao (Oak Ridge National Laboratory)
      • 512
        Ten years' experience of operating tiny fanless servers as EPICS I/O controllers for the J-PARC Main Ring

        The J-PARC Main Ring (MR) has deployed tiny fanless servers as EPICS input/output controllers (IOCs). During the construction phase of the J-PARC MR in 2007, VME single-board computers (VME-SBCs) were introduced as IOCs. Subsequently, it was found that the majority of the control targets were network devices and that a VME bus was not mandatory. A pilot installation of tiny fanless servers as IOCs was carried out in 2014. Full-scale deployment began in 2015, and the transition from VME-SBCs to these small fanless servers was completed by 2023. This paper reviews the experience of operating the tiny fanless servers and discusses future prospects.

        Speaker: Shuei Yamada (High Energy Accelerator Research Organization)
      • 513
        The control and safety system for the JULIC neutron platform target station

        For the future high-current accelerator-driven neutron source HBS (High Brilliance Neutron Source) at Forschungszentrum Jülich, a prototype target station has been developed, which was operated successfully at the JULIC neutron platform and will be relocated to the ARGITU accelerator at ESS Bilbao in the future. A major safety-related feature of the target station is the automatic motorized opening of the shielding gate, which could potentially expose humans to radiation or crushing hazards. Based on a risk assessment according to EN ISO 12100, a safety system has been designed that fulfills the requirements of the European Machinery Directive, which is necessary for CE marking. The safety system relies on safety edges, enabling switches, emergency stop switches, and an interface to the personnel safety system of the accelerator. The functional safety, achieving PL d according to EN ISO 13849, has been implemented with a Siemens S7-1500 safety PLC, which is integrated into an overall control system based on TANGO and NICOS.

        Speaker: Harald Kleines (Forschungszentrum Jülich)
      • 514
        The ESS fast beam interlock system history buffer for post-mortem analysis and accelerator statistics

        The European Spallation Source (ESS) is a linear accelerator located in Lund, Sweden. It is currently being completed and will be the world's most powerful neutron source. A key system ensuring the safe operation of the machine is the Fast Beam Interlock System (FBIS), the brain of the Machine Protection System (MPS) at the ESS, which gathers all the information needed to decide whether to keep or stop beam production. As the only system that stops the beam, it is the central point through which all beam stop requests pass. These requests, together with a multitude of other events, are recorded for so-called post-mortem analysis. This consists of keeping a history of the events preceding a beam stop, identifying its root cause, and then ensuring the machine can be reliably restarted. To achieve such an analysis with events from fast systems, a resolution of a few nanoseconds is necessary, and a dedicated firmware component has to be used for data collection. This component, called the history buffer, is present in each of the 56 FPGAs of the FBIS currently in operation. This paper explains what the history buffers are, how they are implemented, and how their information can be exploited for post-mortem analysis and accelerator statistics.
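
        As a software analogy of the firmware mechanism described above (not the actual FBIS implementation), the sketch below keeps a fixed-depth ring of timestamped events and dumps its contents when a beam-stop request arrives:

            # Software analogy of a history buffer: a fixed-depth ring of timestamped
            # events that is frozen and dumped when a beam-stop request arrives.
            from collections import deque
            import time

            class HistoryBuffer:
                def __init__(self, depth=1024):
                    self.events = deque(maxlen=depth)    # oldest entries are overwritten

                def record(self, source, message):
                    # In firmware the timestamp would come from the timing system with
                    # nanosecond resolution; time.monotonic_ns() stands in here.
                    self.events.append((time.monotonic_ns(), source, message))

                def freeze(self):
                    """Return a snapshot of the recorded history for post-mortem analysis."""
                    return list(self.events)

            hb = HistoryBuffer(depth=8)
            for i in range(20):
                hb.record("input_%d" % (i % 4), "OK")
            hb.record("BLM_sector3", "threshold exceeded")   # event triggering a beam stop
            for ts, source, message in hb.freeze():
                print(ts, source, message)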

        Speaker: Dr Stefano Pavinato (European Spallation Source)
      • 515
        The ESS Fast Beam Interlock System – design, deployment and commissioning until beam on dump

        The European Spallation Source (ESS) is a linear accelerator located in Lund, Sweden. It is currently being completed and will be the world's most powerful neutron source. A key system ensuring the safe operation of the machine is the Fast Beam Interlock System (FBIS), which is the brain of the Machine Protection System (MPS) at the ESS. FBIS is both modular and distributed and is designed to react to approximately 250 input signals from critical accelerator and target subsystems at the time of this commissioning. The current commissioning phase extends until beam on dump in 2025. The role of FBIS is to assess the beam clearance conditions in real time, ensuring a fast beam stop when necessary to prevent unsafe operation. To meet the requirements of the protection integrity level, FBIS operates with high data throughput and ultra-low latency. This paper provides an overview of the FBIS control system and the most significant challenges faced during the latest commissioning phase, which focused on integrating several new systems and automating integration tests across the site. It also describes the strategies used to validate and deploy over 20 newly installed crates, as well as the important role automation plays in ensuring reliable and efficient commissioning under increasingly complex system conditions.

        Speaker: Dr Stefano Pavinato (European Spallation Source)
      • 516
        The LCLS-II modular optical delivery system: lessons learned

        The LCLS-II optical delivery system supports multiple interaction points across multiple experiment hutches using only a handful of laser sources. This reduces financial burden and space usage at the cost of increased complexity for the optical laser systems. To manage this complexity, each interaction point is supplied with a Modular Optical Delivery System (MODS) to inject, shape, and compress the beam before it is further conditioned for experimental use. To meet operational demands, these MODS must be highly configurable, flexible, and robust while supporting 140+ control points in a dense enclosure. With control points spanning piezoelectric motors, optical imaging, digitizers, and more, the EPICS control system framework simplifies driver maintenance and allows growth of community-driven solutions. Each control point is accessible remotely via a PyDM GUI, which enables the operator to control these various alignment and diagnostic tools. Managing the deployment and operational stability of these modular systems is nontrivial and has presented several challenges in recent runs that inspired significant design changes for the future of the MODS. This talk takes a closer look at these operational challenges and the solutions we’ve implemented.

        Speaker: Adam Berges (SLAC National Accelerator Laboratory)
      • 517
        The novel and robust design of the fast protection system for CSNS-II

        The high reliability of the fast protection system (FPS) is crucial for the efficient operation of the entire large-scale scientific facility of the China Spallation Neutron Source (CSNS). Construction of CSNS-II began nearly two years ago. In this new phase, an advanced superconducting linear section is scheduled to be introduced. To prevent operational accidents, such as "temperature rise loss caused by beam loss in the superconducting section," it is necessary to enhance the existing fast protection mechanisms. Based on the characteristics of the interlocking requirements, we plan to implement a hardware architecture comprising "high-performance FPGA + Rocket IO + ATCA." While maintaining the core functions of the FPS for CSNS, we will develop the transmission link and protection strategy by integrating the beam loss monitor (BLM) and the differential beam current monitor (D-BCM). The overall response time of the FPS for CSNS-II should not exceed 8 microseconds, accounting for the fiber-optic transmission delay, hardware circuit delay, interlocking logic processing, and so on. It is worth emphasizing that a highly available, reliable, efficient, and fast protection system is essential to ensure that the CSNS-II accelerator operates stably and safely in the long term.

        Speaker: Peng Zhu (Institute of High Energy Physics)
      • 518
        The operation mode for SHINE machine protection system

        A faster response time and a larger signal scale are required of the machine protection system (MPS) for the Shanghai HIgh repetitioN rate XFEL and Extreme light facility (SHINE). There are two relatively independent interlock systems for SHINE: the normal MPS based on PLCs and the fast MPS based on FPGAs. In order to satisfy the commissioning requirements of the different segments and to further reduce the probability of misoperation, an operation-mode design has been introduced into the MPS. The operation mode allows mode and logic switching with "one click," and the appropriate logic is automatically matched in each mode. Mode switching and management are handled comprehensively by the main station and propagated to the various substations and fast interlock modules. The operation mode has been designed in two parts: segmented mode and repetition mode. Based on the operation mode, beam position control, equipment protection, repetition rate control and protection, and a series of related functions can be achieved. All design details have been implemented and verified, and the system has been put into online operation for the SHINE injector.

        Speakers: Yingbing Yan (Shanghai Synchrotron Radiation Facility), Chunlei Yu (Shanghai Advanced Research Institute, Chinese Academy of Sciences)
      • 519
        Third and fourth phase update of control network in J-PARC MR

        The control network at J-PARC is a local network that provides distributed control of the various power supplies and measuring instruments that make up the accelerator.
        It is independent from the office network, but some communications are permitted through a firewall.
        The control network consists of many switches: core switches in the central control building, aggregation switches in each accelerator facility, edge switches, and some terminal switches installed as needed.
        The control network was first designed in 2005, and this year marks its 20th anniversary.
        On the J-PARC control network, upgrades are carried out for all equipment about every seven years, considering the general product lifespan.
        The third phase of upgrades began around 2018 and was completed in the MR in 2023.
        Other facilities (Linac, MLF, etc.), however, have not yet completed this upgrade due to the global semiconductor shortage (from around 2018), rising prices of network equipment, and increasing maintenance support costs.
        This report shows how the control network has changed from the original design up to the third phase of upgrades.
        It also presents the plan for the fourth phase of upgrades, in which operation and maintenance support will change significantly in order to cope with budget cuts.

        Speaker: Kenichi Sato (High Energy Accelerator Research Organization)
      • 520
        Towards next-generation instrument control framework for neutron experiments at MLF, J-PARC

        At the J-PARC MLF, many neutron instruments currently operate using IROHA2, the standard instrument control framework. Since IROHA2 has been used for over a decade, several challenges have emerged due to architectural obsolescence and limitations in adapting the existing system to new requirements. To address these issues, we initiated the development of a next-generation control system, a post-IROHA2 framework, in 2023; it is currently in the prototype phase.
        The new system is designed as a data-centric distributed architecture, aiming to enable data-driven experimental environments. By centrally managing device status and control requests in a database, flexible and efficient data utilization can be achieved. To ensure a loosely coupled architecture, asynchronous messaging is employed between distributed components. For device control, the system supports both the conventional IROHA2 device modules and the EPICS framework.
        While the control components are deployed on-premises, the database and user front end are deployed in a cloud environment to form a hybrid cloud-based experimental system. This configuration enables secure remote experiment control through cloud-based access, which avoids direct connection to local systems, and facilitates future integration with cloud-based AI services to realize autonomous and feedback-driven control.
        This presentation outlines the conceptual framework, the current status of the system, and the remaining technical challenges.

        Speaker: Kentaro Moriyama (CROSS Neutron Science and Technology Center)
      • 521
        Update of Linac and RCS MPS for BLM

        Because the J-PARC Linac and the 3 GeV Rapid Cycling Synchrotron (RCS) are high-intensity accelerators, the Machine Protection System (MPS) is configured to inhibit the beam immediately upon irregular events and to minimize the damage caused by beam loss.
        The MPS for the J-PARC Linac and RCS mainly consists of standard MPS modules and MPS modules for the beam loss monitors (BLMs). However, the existing MPS modules have been in use since the beginning of J-PARC operation, and there is a concern that they may malfunction due to aging. Therefore, we have developed new MPS modules and are now replacing part of the Linac and RCS MPS with the new standard modules.
        On the other hand, development of a new MPS module for the BLMs had not progressed, for two reasons: the existing BLM MPS module includes a comparator function, and the beam loss detection methods for the Linac and the RCS differ. It was therefore decided to separate the comparator function from the MPS module and to develop dedicated hardware corresponding to each beam loss detection method for the Linac and the RCS. By combining the new standard MPS module with this hardware, a new MPS for the BLMs will be configured.
        This paper describes the current status of and the plan for updating the Linac and RCS MPS for the BLMs.

        Speaker: Hiroki Takahashi (Japan Atomic Energy Agency)
      • 522
        Updating and operating the Machine Protection System at J-PARC MR

        The Machine Protection System (MPS) is a group of devices that ensures the safety of the accelerator and the experimental facilities by automatically stopping beam operation and aborting the beam in the event of equipment failure. The Machine Protection System of the J-PARC Main Ring (MR), called MR-MPS, has been in operation since 2008, when the MR started operation. Development of a new MR-MPS to replace the ageing system started around 2020 and was completed in April 2022. The new MR-MPS was introduced to protect equipment when the main magnet power supplies were renewed and new RF power supplies were introduced from autumn 2022. The good results obtained in its operation have led to the completion of mass production of the new MR-MPS, and the old system is now being progressively replaced. This presentation reports on the configuration of the MR-MPS, its EPICS-based operation, and the timeline for its complete replacement.

        Speaker: Takuro Kimura (High Energy Accelerator Research Organization)
      • 523
        Upgrades to the FACET-II data acquisition system

        The Data Acquisition System (DAQ) at FACET-II collects and saves synchronized, time-stamped data from various diagnostics in the linac and experimental area, including digital cameras and devices on the EPICS control system. During data acquisition, the camera input-output controllers (IOCs) save image data to network-attached storage (NAS). While large files can be written to the NAS efficiently, writing many small image files is about 10x slower. Additionally, the DAQ cannot acquire data from several important devices, such as BPMs, klystrons, and magnets, because they are only accessible via the legacy control system, the SLC Control Program (SCP). To address these issues, the FACET-II DAQ has been updated to save data faster by packaging multiple small image files into larger HDF5 files. The updated DAQ can also acquire BPM data from the SCP by parallelizing the data acquisition process across both control systems and then comparing timestamps across cameras, EPICS devices, and SCP BPMs to ensure the data are fully synchronized.
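
        A minimal sketch of the packaging idea, assuming h5py and NumPy; the file name, group layout, and data are illustrative and not the actual FACET-II DAQ format:

            # Pack many small per-shot images into a single HDF5 file so one large
            # file is written to the NAS instead of thousands of small ones.
            import numpy as np
            import h5py

            n_shots, height, width = 100, 256, 256
            images = np.random.randint(0, 4096, size=(n_shots, height, width), dtype=np.uint16)
            timestamps = np.arange(n_shots, dtype=np.float64)   # placeholder shot timestamps

            with h5py.File("camera_run0001.h5", "w") as f:      # illustrative file name
                grp = f.create_group("CAMERA:01")               # illustrative device name
                grp.create_dataset("images", data=images,
                                   chunks=(1, height, width), compression="gzip")
                grp.create_dataset("timestamps", data=timestamps)

            # Later, shots can be matched to EPICS or SCP data by comparing timestamps.
            with h5py.File("camera_run0001.h5", "r") as f:
                print(f["CAMERA:01/images"].shape, f["CAMERA:01/timestamps"][:5])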

        Speaker: Sharon Perez (SLAC National Accelerator Laboratory)
      • 524
        Using Grafana as a representation layer for control data at the European XFEL

        At the European XFEL, the control system Karabo has been developed for operating photon beamlines and instruments. The time-series database InfluxDB is used as a backend for logging control data. Whilst Karabo exposes interfaces to retrieve and present the historical data stored in InfluxDB, there are benefits in using an established solution such as Grafana. This interface enables near-real-time monitoring of the control data using web-based, customizable panels called dashboards.
        In this paper we present the example of using Grafana as a presentation layer for the Karabo control system at the Data Operation Center (DOC) of the European XFEL. Common monitoring scenarios and diagnostic use cases, as well as lessons learned, are discussed.
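
        For context, a dashboard panel on such a setup ultimately issues a time-series query against the logging backend; the sketch below shows a comparable query made directly from Python with the influxdb-client package, where the bucket, measurement, and connection details are hypothetical and not the actual XFEL configuration:

            # Hypothetical query of logged control data from InfluxDB, similar to what
            # a Grafana dashboard panel would run; names and credentials are made up.
            from influxdb_client import InfluxDBClient

            flux = '''
            from(bucket: "karabo")
              |> range(start: -1h)
              |> filter(fn: (r) => r._measurement == "some_device" and r._field == "value")
              |> aggregateWindow(every: 1m, fn: mean)
            '''

            client = InfluxDBClient(url="http://localhost:8086", token="TOKEN", org="xfel")
            for table in client.query_api().query(flux):
                for record in table.records:
                    print(record.get_time(), record.get_value())
            client.close()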

        Speaker: Valerii Bondar (European X-Ray Free-Electron Laser)
      • 525
        Worry-free Experimental Metadata collection tool for Tango-Controls/TINE and HDF5

        Can software run unattended for years, reliably supporting scientific experiments?

        Since 2016, the P05 beamline at Hereon, DESY, Germany, has operated an experimental metadata collection system that requires close to zero maintenance. This software has supported hundreds of experiments, contributing to numerous scientific publications with minimal intervention.

        In this paper, we present the design choices, technology stack, and architectural patterns that enabled this exceptional reliability. We discuss our extensive use of Java and its ecosystem, including observability frameworks and reactive programming, which were instrumental in ensuring scalability and robustness. Additionally, we highlight recent enhancements in HDF5 integration and upstream control system interoperability, developed as part of the DAPHNE4NFDI project*.

        Our experience demonstrates how well-designed software solutions can operate autonomously for years, providing valuable insights for long-term system sustainability in scientific research facilities.

        Speaker: Igor Khokhriakov (Deutsches Elektronen-Synchrotron DESY)
      • 526
        WRAP: Integrating an event processing framework

        The Web Rapid Application Platform (WRAP) provides a centralised, low-code environment for building Graphical User Interfaces (GUIs) through an intuitive drag-and-drop interface. These GUIs act as high-level user interfaces to the complex network of devices within CERN’s accelerator control system. Some WRAP applications are relatively simple, displaying device data or setting control parameters as entered by the user. However, more advanced scenarios require correlation and processing of multiple asynchronous events triggered by independent devices. In such cases, where data from different sources must be synchronised and transformed before presentation, a more sophisticated processing layer is essential. To this end, an event processing framework has been implemented in TypeScript and integrated into the WRAP front-end. The framework allows users to express data correlations via event building logic and to implement lightweight data processing scripts. Given that the primary WRAP users are not software developers, a simple Domain Specific Language (DSL) was designed to express logic in an accessible and declarative manner. This paper presents the motivation for the event processing framework, the design philosophy and architectural choices made, the technical implementation, and integration into WRAP. Challenges encountered are also described and future directions are outlined, including how the framework positions WRAP as a successor for GUIs developed on older platforms.

        Speaker: Epameinondas Galatas (European Organization for Nuclear Research)
    • FRKG Keynote Grand Ballroom

      Grand Ballroom

      Palmer House Hilton Chicago

      17 East Monroe Street Chicago, IL 60603, United States of America
      Convener: Masanori Satoh (High Energy Accelerator Research Organization)
      • 527
        Visions of Antiquity: Scientific Imaging to Reveal the Original Appearance of Ancient Greek and Roman Sculpture

        For several decades, scientific analysis has played a pivotal role in advancing the field of archaeology, contributing to various objectives, such as interpreting the production and function of excavated artifacts, tracing the development of ancient trade networks, and mapping the migration of peoples across regions. Similarly, the study of artistic production has benefited from scientific methodologies, which have provided valuable insights into the making and meaning of art objects, as well as enabling the differentiation of authentic works from forgeries.

        A particularly intriguing area of inquiry pertains to the examination of ancient Greek and Roman marble sculpture, which, as is well-established, was originally adorned with paint, a practice known today as polychromy. However, the precise appearance of ancient sculptures remains largely speculative. A primary obstacle to interpreting the original appearance of sculpture is the fact that, when traces of paint do survive in archaeological contexts, they are typically minute and highly fragmentary. The analysis and interpretation of these minuscule remnants present significant challenges, making it difficult for scholars to extrapolate meaningful visual reconstructions.

        Recent advancements in digital imaging technologies have revolutionized the study of ancient painted surfaces, offering unprecedented opportunities to uncover painting materials otherwise invisible to the naked eye. Thus, imaging techniques provide critical data on the composition and application of ancient paint, thereby enriching our understanding of the visual appearance of these objects at the time of their creation. In this presentation, the discussion will begin with an examination of material evidence as a foundational approach to the study of painted surfaces on ancient sculptures. Subsequently, the presentation will integrate findings from a range of media and interdisciplinary research to deepen our understanding of ancient polychromy, ultimately bringing us closer to reconstructing the intended appearance and function of these sculptures in their historical and cultural context.

        Perhaps unsurprisingly, the results of this multidisciplinary inquiry demonstrate that it is not only materials—such as pigments—that traversed regional boundaries, but also the techniques of paint application. Consequently, the underlying conceptual framework that underpins the creation of art likewise migrated and transformed itself across temporal and spatial contexts to meet the demands of the cultures in which it flourished.

        Speaker: Giovanni Verri (Art Institute of Chicago)
    • FRAG MC13 Artificial Intelligence and Machine Learning Grand Ballroom

      Grand Ballroom

      Palmer House Hilton Chicago

      17 East Monroe Street Chicago, IL 60603, United States of America
      Conveners: Mike Fedorov (Lawrence Livermore National Laboratory), Mirjam Lindberg (MAX IV Laboratory)
      • 528
        ML-assisted beamline optimization at LCLS

        LCLS is currently developing and deploying beamline optimization techniques at its X-ray endstations. This is an increasingly important topic at LCLS as the facility fully leverages its new high-repetition-rate superconducting beam. The increased throughput of the LCLS-II era suddenly shifts the performance bottleneck to on-shift beam setup time. As part of the Illumine collaboration, LCLS is leveraging Bayesian optimization techniques with on-the-fly machine learning, in conjunction with more conventional iterative alignment and digital twin techniques, to automatically optimize beam quality and streamline common elements of experiment startup. This talk will cover how it works, what worked well, the challenges faced, and more, from a controls perspective.
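
        A minimal sketch of a Bayesian-optimization loop on a simulated one-dimensional alignment knob, using a Gaussian-process surrogate and expected improvement; this is a generic illustration, not the LCLS production tooling:

            # Generic Bayesian optimization of a single alignment knob on a simulated
            # beam-quality signal, using a GP surrogate and expected improvement.
            import numpy as np
            from scipy.stats import norm
            from sklearn.gaussian_process import GaussianProcessRegressor
            from sklearn.gaussian_process.kernels import RBF, WhiteKernel

            def beam_quality(x):
                """Simulated noisy objective standing in for a real beamline measurement."""
                return np.exp(-0.5 * ((x - 0.3) / 0.2) ** 2) + 0.01 * np.random.randn()

            def expected_improvement(mu, sigma, best):
                sigma = np.maximum(sigma, 1e-9)
                z = (mu - best) / sigma
                return (mu - best) * norm.cdf(z) + sigma * norm.pdf(z)

            rng = np.random.default_rng(1)
            X = rng.uniform(-1, 1, size=(3, 1))               # initial random settings
            y = np.array([beam_quality(x[0]) for x in X])
            grid = np.linspace(-1, 1, 201).reshape(-1, 1)

            for _ in range(15):                               # optimization iterations
                gp = GaussianProcessRegressor(kernel=RBF(0.3) + WhiteKernel(1e-4),
                                              normalize_y=True).fit(X, y)
                mu, sigma = gp.predict(grid, return_std=True)
                x_next = grid[np.argmax(expected_improvement(mu, sigma, y.max()))]
                X = np.vstack([X, x_next])
                y = np.append(y, beam_quality(x_next[0]))

            print("best setting:", X[np.argmax(y)][0], "best signal:", y.max())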

        Speaker: Zachary Lentz (Linac Coherent Light Source)
      • 529
        Development and experimental validation of a machine learning-based methodology for cyclotron beam control: results from the PSI HIPA facility

        Transmutex SA is developing an accelerator-driven system (ADS) designed to generate clean energy while reducing the lifetime of radioactive waste. Such a subcritical reactor concept requires high reliability and a high degree of accelerator automation to ensure operational effectiveness.

        To address these demands, a machine learning (ML) methodology was developed and experimentally validated for automatic beam control in cyclotrons. This work reports the first practical demonstration of machine-learning-based beam control in a high-power cyclotron, representing a significant step for this class of accelerators.

        The validation experiments were performed on the injector ring of the High Intensity Proton Accelerator (HIPA) at the Paul Scherrer Institute (PSI), whose design closely matches the injector concept developed by Transmutex. Key challenges were addressed, including the identification of suitable observables and actuators, adapting the ML model to the accelerator response dynamics, and integrating ML-based control with existing feedback loops. The approach reliably aligned the beam with the reference trajectory, improving extraction efficiency while minimizing losses.

        Over an extensive 12-day operational test campaign, remarkably long in the context of real-time ML experiments, the model demonstrated robust performance across a range of operational scenarios, including varying beam currents and different turn numbers.

        These results show that machine learning can enhance operational efficiency, reduce operator workload, and increase automation in cyclotron-driven systems.

        Speaker: Evgeny Solodko (Transmutex SA)
      • 530
        Adaptive model-based reinforcement learning for orbit feedback control in NSLS-II storage ring

        The National Synchrotron Light Source II (NSLS-II) uses a highly stable electron beam to produce high-quality X-ray beams with high brightness and low-emittance synchrotron radiation. The traditional algorithm to stabilize the beam applies singular value decomposition (SVD) to the orbit response matrix to remove noise and extract corrective actions. Supervised learning has recently been studied for storage ring stabilization at NSLS-II and at other accelerator facilities. Several problems, for example machine status drift, environmental noise, and non-linear accelerator dynamics, remain unresolved in the SVD-based and supervised learning algorithms. To address these problems, we propose an adaptive training framework based on model-based reinforcement learning. This framework consists of two types of optimization: trajectory optimization seeks the action sequence that optimizes the expected total reward in a differentiable environment, and online model optimization learns the non-linear machine dynamics through agent-environment interaction. Through online training, the framework tracks the internal status drift of the storage ring. Simulations and real in-facility experiments at NSLS-II show that our method stabilizes the beam position and minimizes the alignment error, defined as the root mean square (RMS) error between adjusted beam positions and the reference position, down to ~1 µm.
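
        For reference, the traditional SVD-based correction that the abstract compares against can be sketched as below, using a simulated orbit response matrix; the dimensions, noise levels, and cutoff are illustrative:

            # Traditional SVD-based orbit correction: compute corrector kicks from the
            # measured orbit error using a truncated pseudo-inverse of the response matrix.
            import numpy as np

            rng = np.random.default_rng(0)
            n_bpms, n_correctors = 180, 90                     # illustrative dimensions
            R = rng.normal(size=(n_bpms, n_correctors))        # orbit response matrix (measured in practice)
            orbit_error = rng.normal(scale=5e-6, size=n_bpms)  # BPM readings minus reference [m]

            U, s, Vt = np.linalg.svd(R, full_matrices=False)
            cutoff = s > 0.01 * s.max()                        # drop small singular values (noise)
            s_inv = np.where(cutoff, 1.0 / s, 0.0)
            R_pinv = Vt.T @ np.diag(s_inv) @ U.T

            kicks = -R_pinv @ orbit_error                      # corrector settings to apply
            residual = orbit_error + R @ kicks
            print("rms before:", np.sqrt(np.mean(orbit_error**2)),
                  "rms after:", np.sqrt(np.mean(residual**2)))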

        Speaker: Zeyu Dong (Stony Brook University)
      • 531
        Agentic Systems in Accelerator Control and Optimization

        The deployment of agentic AI systems at the Advanced Light Source (ALS) marks a major step toward autonomous, intelligent facility operations. By connecting large language models (LLMs) with diverse data sources, we are developing agents that not only interface with the control system but also provide a natural language interface for operators and scientists. This allows users to interact with complex control infrastructure through intuitive queries, lowering the barrier to expert-level system insights. This paper outlines the architecture of the agentic framework, highlights the integration of natural language interfaces, and discusses early results, implementation challenges, and future directions for distributed autonomous control in light source environments.

        Speaker: Thorsten Hellert (Lawrence Berkeley National Laboratory)
    • 10:00
      Coffee
    • FRIA Closeout Grand Ballroom

      Grand Ballroom

      Palmer House Hilton Chicago

      17 East Monroe Street Chicago, IL 60603, United States of America
      Convener: Oscar Matilla (ALBA Synchrotron Light Source)
      • 532
        Workshop Summary

        Summary of pre-conference workshops.

        Speaker: Martin Pieck (Los Alamos National Laboratory)
      • 533
        Conference Summary

        Summary of the conference.

        Speaker: Karen White (Oak Ridge National Laboratory)
      • 534
        ICALEPCS 2027

        Update on planning for ICALEPCS 2027

        Speaker: Masanori Satoh (High Energy Accelerator Research Organization)
      • 535
        ICALEPCS 2029

        Announcement and preview of ICALEPCS 2029.

        Speaker: Oscar Matilla (ALBA Synchrotron Light Source)
      • 536
        Closing ICALEPCS 2025

        Conference closing and thanks.

        Speakers: Guobao Shen (Argonne National Laboratory), Joseph Sullivan (Argonne National Laboratory)
    • 11:45
      Social - Tours ANL and FNAL