AI Copilot Keeps Berkeley’s X-Ray Accelerator on Track

January 18, 2026 | Lisa Park | Tech
Original source: blogs.nvidia.com

In the rolling hills of Berkeley, California, an AI agent is supporting high-stakes physics experiments at the Advanced Light Source (ALS) particle accelerator.

Researchers at the Lawrence Berkeley National Laboratory ALS facility recently deployed the Accelerator Assistant, a large language model (LLM)-driven system, to keep X-ray research on track.

The Accelerator Assistant, powered by an NVIDIA H100 GPU harnessing CUDA for accelerated inference, taps into institutional knowledge from the ALS support team and routes requests through Gemini, Claude or ChatGPT. It writes Python and solves problems, either autonomously or with a human in the loop.
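The autonomous-versus-human-in-the-loop distinction can be sketched as a simple dispatch rule: a task either runs directly or waits for operator confirmation. This is an illustrative sketch only; the class and function names below are hypothetical and not the actual Accelerator Assistant API.

```python
# Hypothetical human-in-the-loop dispatch; names are illustrative,
# not the real Accelerator Assistant interface.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Task:
    description: str
    action: Callable[[], str]
    autonomous: bool  # True: run without asking a human

def dispatch(task: Task, confirm: Callable[[str], bool]) -> str:
    """Run the task directly, or ask a human operator first."""
    if task.autonomous or confirm(task.description):
        return task.action()
    return "skipped by operator"

# An autonomous task runs even though the operator declines everything.
result = dispatch(
    Task("log beam current", lambda: "logged", autonomous=True),
    confirm=lambda desc: False,
)
print(result)  # -> logged
```

A non-autonomous task with the same `confirm` callback would return "skipped by operator" instead, which is the safety property such a gate is meant to provide.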

This is no small task. The ALS particle accelerator sends electrons traveling near the speed of light around a 200-yard circular path, emitting ultraviolet and X-ray light that is directed through 40 beamlines for 1,700 scientific experiments per year. Scientists worldwide use this process to study materials science, biology, chemistry, physics and environmental science.

At the ALS, beam interruptions can last minutes, hours or days, depending on their complexity, halting the scientific experiments in progress. And much can go wrong: the ALS control system has more than 230,000 process variables.

“It’s really important for such a machine to be up, and when we go down, there are 40 beamlines that do X-ray experiments, and they are waiting,” said Thorsten Hellert, staff scientist in the Accelerator Technology and Applied Physics Division at Berkeley Lab and lead author of a research paper on the groundbreaking work.

Until now, facility staff troubleshooting issues have had to quickly identify the affected areas, retrieve data and gather the right personnel for analysis, all under intense time pressure to get the system back up and running.

“The novel approach offers a blueprint for securely and transparently applying large language model-driven systems to particle accelerators, nuclear and fusion reactor facilities, and other complex scientific infrastructures,” said Hellert.

The research team demonstrated that the Accelerator Assistant can autonomously prepare and run a multistage physics experiment, cutting setup time and effort by a factor of 100.

Applying Context Engineering Prompts to the Accelerator Assistant

ALS operators interact with the system through either a command line interface or Open WebUI, which enables interaction with various LLMs and is accessible from control room stations as well as remotely. Under the hood, the system uses Osprey, a framework developed at Berkeley Lab to apply agent-based AI safely in complex control systems.
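Open WebUI typically fronts models through an OpenAI-compatible chat API, so a client request reduces to a small JSON payload. The sketch below only builds such a payload; the model name and prompt content are illustrative assumptions, not ALS configuration.

```python
# Minimal sketch of an OpenAI-compatible chat request body, the style
# of API that Open WebUI commonly exposes. Model name and prompts are
# illustrative placeholders.
import json

def build_chat_request(model: str, user_message: str, system_prompt: str) -> str:
    """Serialize a chat-completion request payload."""
    payload = {
        "model": model,
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_message},
        ],
    }
    return json.dumps(payload)

body = build_chat_request(
    "llama3",
    "Why did beamline 7 trip?",
    "You are an accelerator operations assistant.",
)
print(json.loads(body)["messages"][1]["content"])  # -> Why did beamline 7 trip?
```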

Each user is authenticated, and the framework maintains personalized context and memory across sessions; multiple sessions can be managed simultaneously. This allows users to organize distinct tasks or experiments into separate threads. These inputs are routed through the Accelerator Assistant, which connects to the database of more than 230,000 process variables, a historical database archive service and Jupyter notebook-based execution environments.
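Per-user, per-session threads can be modeled as a two-level map from user to session name to message history. This is a minimal sketch of that bookkeeping; the class and method names are hypothetical, not Osprey's actual interface.

```python
# Illustrative per-user, per-session thread store; names are hypothetical.
from collections import defaultdict

class SessionStore:
    def __init__(self):
        # user -> session name -> ordered list of messages
        self._threads = defaultdict(lambda: defaultdict(list))

    def append(self, user: str, session: str, message: str) -> None:
        """Record a message in one user's named session thread."""
        self._threads[user][session].append(message)

    def history(self, user: str, session: str) -> list:
        """Return a copy of one thread's message history."""
        return list(self._threads[user][session])

store = SessionStore()
store.append("alice", "beamline-7-trip", "archive lookup: vacuum pressure spike")
store.append("alice", "orbit-tuning", "set corrector magnets")
print(store.history("alice", "beamline-7-trip"))
```

Keeping threads separate per task is what lets distinct experiments run concurrently without their contexts bleeding into each other.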

“We try to engineer the context of every language model call with whatever prior knowledge we have from this execution up to this point,” said Hellert.
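One simple way to realize that idea is to render the results accumulated so far into a prefix for the next model call. The step structure and tool names below are assumptions for illustration, not the paper's actual prompt format.

```python
# Sketch of context engineering: prior tool results become the prefix
# of the next LLM prompt. Step format and tool names are illustrative.
def build_context(steps: list) -> str:
    """Render prior tool results into a prompt prefix for the next call."""
    lines = ["Prior steps in this execution:"]
    for i, step in enumerate(steps, 1):
        lines.append(f"{i}. {step['tool']} -> {step['result']}")
    return "\n".join(lines)

history = [
    {"tool": "pv_read(SR:BeamCurrent)", "result": "498.7 mA"},
    {"tool": "archive_query(vacuum)", "result": "no anomaly in last hour"},
]
prompt = build_context(history) + "\n\nQuestion: why did the beam dump?"
print(prompt.splitlines()[1])  # -> 1. pv_read(SR:BeamCurrent) -> 498.7 mA
```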

Inference is done either locally, using Ollama, an open-source tool for running LLMs on a personal computer, on an H100 GPU node located within the control room network, or externally with the CBorg gateway, a lab-managed interface that routes requests to external tools such as ChatGPT, Claude or Gemini.
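The local-versus-external split amounts to a routing decision per model. The sketch below shows that decision in its simplest form; the model list and gateway URL are placeholders, not Berkeley Lab's actual CBorg configuration (only the Ollama default port, 11434, is factual).

```python
# Hypothetical routing between a local Ollama endpoint and an external
# gateway. Model names and the gateway URL are illustrative placeholders.
LOCAL_MODELS = {"llama3", "mistral"}

def route(model: str) -> str:
    """Return the base URL a request for `model` should be sent to."""
    if model in LOCAL_MODELS:
        return "http://localhost:11434/api/chat"   # Ollama's default port
    return "https://gateway.example.org/v1/chat"   # external gateway (placeholder)

print(route("llama3"))   # -> http://localhost:11434/api/chat
print(route("gpt-4o"))   # -> https://gateway.example.org/v1/chat
```

Keeping local models on the control room network avoids sending operational data outside the facility unless an external model is explicitly requested.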

