When SLAC’s superconducting X-ray laser comes online, for example, it’ll eventually accumulate data at a dizzying rate of a terabyte per second. And the world’s largest digital camera for astronomy, under construction at the lab for the Vera C. Rubin Observatory, will eventually capture a whopping 20 terabytes of data every night.
“The new computing infrastructure will be up for these challenges and more,” said Amedeo Perazzo, who leads the Controls and Data Systems division within the lab’s Technology Innovation Directorate. “We’re adopting some of the latest, greatest technology to create computing capabilities for all of SLAC for years to come.”
The Stanford University-led construction adds a second building to the existing Stanford Research Computing Facility (SRCF). SLAC will become a major tenant of SRCF-II – a modern data center designed to operate around the clock, without service interruptions and with data integrity in mind. SRCF-II will double the current data center capabilities, bringing total power capacity to 6 megawatts.
“Computing is a core competency for a science-driven organization like SLAC,” said Adeyemi Adesanya, head of the Scientific Computing Systems department of Perazzo’s division. “I’m thrilled to see our vision for an integrated computing facility come to life. It’s a necessity for analyzing data on massive scales, and it’ll also pave the way for new initiatives.”
A hub for SLAC’s Big Data
Adesanya’s team is preparing to set up hardware for the SLAC Shared Science Data Facility (S3DF), which will find its home inside SRCF-II. It’ll become a computing hub for all data-intensive experiments performed at the lab.
First and foremost, it’ll benefit future users of LCLS-II, the upgrade of the Linac Coherent Light Source (LCLS) X-ray laser that will produce over 8,000 times more pulses per second than the first-generation machine. Researchers hope to use LCLS-II to gain new insights into atomic processes that are fundamental to some of the most pressing challenges of our time, including the chemistry of clean energy technologies, the molecular design of drugs and the development of quantum materials and devices.
But with the new capabilities come tough computational challenges, said Jana Thayer, head of the LCLS Data Systems division. “To get the best science results and make the most of their time at LCLS-II, users will need fast feedback – within minutes – on the quality of their data,” she said. “To do that with an X-ray laser that produces thousands of times more data every second than its predecessor, we need the petaflops of computing power that S3DF will provide.”
Another issue researchers will have to contend with is the fact that LCLS-II will amass too much data to store it all. The new data facility will run an innovative data reduction pipeline that throws out unnecessary data before it gets saved for analysis.
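The core idea behind such a pipeline can be illustrated with a toy veto: detector frames where the X-ray pulse missed the sample carry almost no signal, so they can be discarded before they ever reach storage. The sketch below is a minimal illustration under invented assumptions (a flattened 64×64 detector, a hand-picked photon-count threshold), not the actual LCLS reduction pipeline.

```python
import random

random.seed(0)

def keep_frame(frame, threshold=50):
    """Keep only frames whose total signal suggests the pulse hit the sample."""
    return sum(frame) > threshold

# Simulate flattened 64x64 detector frames as lists of photon counts.
empty = [0] * 4096                    # pulse missed the sample: noise only
for _ in range(5):                    # a few stray noise photons
    empty[random.randrange(4096)] += 1

hit = list(empty)
for _ in range(2000):                 # strong scattering signal on top of noise
    hit[random.randrange(4096)] += 1

frames = [empty, hit]
kept = [f for f in frames if keep_frame(f)]
print(f"kept {len(kept)} of {len(frames)} frames")  # prints "kept 1 of 2 frames"
```

In a real facility the veto logic is far more sophisticated, but the payoff is the same: every discarded frame is bandwidth and disk space that never has to be provisioned.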
Another computationally demanding technique that will benefit from the new infrastructure is cryogenic electron microscopy (cryo-EM) of biomolecules, such as proteins, RNA or virus particles. In this method, scientists take images of how a beam of electrons interacts with a sample that contains the biomolecules. They sometimes need to analyze millions of images to reconstruct the three-dimensional molecular structure in near-atomic detail. Researchers also hope to visualize molecular components in cells, not just biochemically purified molecules, at high resolution in the future.
The complex image reconstruction process requires lots of CPU and GPU power and involves elaborate machine learning algorithms. Doing these calculations at the S3DF will bring new opportunities, said Wah Chiu, head of the Stanford-SLAC Cryo-EM Center.
“I really hope that the S3DF will become an intellectual hub for computing, where experts gather to write code that allows us to visualize increasingly complex biological systems,” Chiu said. “There is a lot of potential to discover new structural states of molecules and organelles in normal and pathological cells at SLAC.”
In fact, everyone at the lab will be able to use available computing resources. Other potential “customers” include SLAC’s instrument for ultrafast electron diffraction (MeV-UED), the Stanford Synchrotron Radiation Lightsource (SSRL), the lab-wide machine learning initiative and applications in accelerator science. All in all, the S3DF will be able to support 80% of SLAC’s computing needs, while 20% of the most demanding scientific computing will be done at supercomputer facilities offsite.
Multiple services under one roof
SRCF-II will host two other major data facilities.
One of them is Rubin Observatory’s U.S. data facility (USDF). In a few years, the observatory will begin taking images of the Southern night sky from a mountaintop in Chile using its SLAC-built 3,200-megapixel camera. For the Legacy Survey of Space and Time (LSST), it’ll take two images every 37 seconds for 10 years. The resulting information might hold answers to some of the biggest questions about our universe, including what exactly speeds up its expansion, but that information will be contained in a 60-petabyte catalog that researchers will have to sift through. The resulting image archive will reach some 300 petabytes, dominating the storage usage in SRCF-II. The USDF, together with two other centers in the UK and France, will handle production of the enormous data catalog.
A third data hub will serve the user community of SLAC’s first-generation X-ray laser. Existing computing infrastructure for the LCLS data analysis will gradually move to SRCF-II and become a much larger system there.
Although each data center has specific needs in terms of technical specifications, they all rely on a core of shared services: Data always need to be transferred, stored, analyzed and managed. Working closely with Stanford, Rubin Observatory, LCLS and other partners, Perazzo’s and Adesanya’s teams are setting up all three systems.
For Adesanya, this unified approach – which includes a cost model that will help pay for future upgrades and growth – is a dream come true. “Historically, computing at SLAC was highly distributed and each facility would have its own, specialized system,” he said. “The new, more centralized approach will help stimulate new lab-wide initiatives, such as machine learning, and by breaking down the silos and converging to an integrated data facility, we’re building something that is more capable than the sum of everything we had before.”
SRCF-II construction is a Stanford project. Large parts of the S3DF infrastructure are funded by the Department of Energy’s Office of Science. LCLS and SSRL are Office of Science user facilities. Rubin Observatory is a joint initiative of the National Science Foundation (NSF) and the Office of Science. Its primary mission is to carry out the Legacy Survey of Space and Time, providing an unprecedented data set for scientific research supported by both agencies. Rubin is operated jointly by NSF’s NOIRLab and SLAC. NOIRLab is managed for NSF by the Association of Universities for Research in Astronomy and SLAC is operated for DOE by Stanford. Stanford-SLAC Cryo-EM Center (S2C2) is supported by the National Institutes of Health (NIH) Common Fund Transformative High-Resolution Cryo-Electron Microscopy program.
SLAC is a vibrant multiprogram laboratory that explores how the universe works at the biggest, smallest and fastest scales and invents powerful tools used by scientists around the globe. With research spanning particle physics, astrophysics and cosmology, materials, chemistry, bio- and energy sciences and scientific computing, we help solve real-world problems and advance the interests of the nation.
SLAC is operated by Stanford University for the U.S. Department of Energy’s Office of Science. The Office of Science is the single largest supporter of basic research in the physical sciences in the United States and is working to address some of the most pressing challenges of our time.