NASA uses ORNL supercomputers to plan smooth landing on Mars

Since 2019, a team of NASA scientists and their partners have been using NASA’s FUN3D software on supercomputers at the Department of Energy’s Oak Ridge Leadership Computing Facility, or OLCF, to conduct computational fluid dynamics, or CFD, simulations of a human-scale Mars lander. The team’s ongoing research project is a first step in determining how to safely land a crewed vehicle on the surface of Mars.

LLNL scientists among finalists for new Gordon Bell climate modeling award

A team from Lawrence Livermore and seven other Department of Energy (DOE) national laboratories is a finalist for the new Association for Computing Machinery (ACM) Gordon Bell Prize for Climate Modeling for running an unprecedented high-resolution global atmosphere model on the world’s first exascale supercomputer.

James Barr von Oehsen Named Director of the Pittsburgh Supercomputing Center

James Barr von Oehsen has been selected as the director of the Pittsburgh Supercomputing Center (PSC), a joint research center of Carnegie Mellon University and the University of Pittsburgh. Von Oehsen is a leader in the fields of cyberinfrastructure, research computing, advanced networking, data science and information technology.

Nuclear Physics Gets a Boost for High-Performance Computing

Efforts to harness the power of supercomputers to better understand the hidden worlds inside the nucleus of the atom recently received a big boost. A project led by the U.S. Department of Energy’s (DOE’s) Thomas Jefferson National Accelerator Facility is one of three to split $35 million in grants through DOE’s Scientific Discovery through Advanced Computing (SciDAC) partnership program. The $13 million project includes key scientists based at six DOE national labs and two universities: Jefferson Lab, Argonne National Lab, Brookhaven National Lab, Oak Ridge National Lab, Lawrence Berkeley National Lab, Los Alamos National Lab, the Massachusetts Institute of Technology, and William & Mary.

Reducing Redundancy to Accelerate Complicated Computations

Computers help physicists perform complicated calculations. But some of these calculations are so complex that a regular computer is not enough. In fact, some advanced calculations tax even the largest supercomputers. Now, scientists at Jefferson Lab and William & Mary have developed MemHC, a new tool that uses memory optimization methods to allow GPU-based computers to calculate the structures of neutrons and protons ten times faster.
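
The speedup comes from reusing work rather than redoing it: the underlying calculations repeat many identical tensor contractions, and keeping each unique result in GPU memory avoids recomputing it. The C++ sketch below illustrates that general redundancy-elimination idea with a simple memoization cache; the names and the stand-in contract() function are hypothetical illustrations, not MemHC’s actual API.

```cpp
#include <cstdint>
#include <unordered_map>
#include <vector>

// Hypothetical stand-in for an expensive contraction that would
// normally run as a GPU kernel.
std::vector<double> contract(const std::vector<double>& a,
                             const std::vector<double>& b) {
    std::vector<double> out(a.size());
    for (std::size_t i = 0; i < a.size(); ++i) out[i] = a[i] * b[i];
    return out;
}

// Cache keyed by operand identity: each unique operand pair is computed
// once and served from memory afterward, trading storage for recomputation.
struct ContractionCache {
    std::unordered_map<std::uint64_t, std::vector<double>> memo;

    const std::vector<double>& get(std::uint32_t idA, std::uint32_t idB,
                                   const std::vector<double>& a,
                                   const std::vector<double>& b) {
        const std::uint64_t key =
            (static_cast<std::uint64_t>(idA) << 32) | idB;
        auto it = memo.find(key);
        if (it == memo.end())
            it = memo.emplace(key, contract(a, b)).first;  // compute once
        return it->second;
    }
};

int main() {
    ContractionCache cache;
    std::vector<double> x(4, 2.0), y(4, 3.0);
    const auto& r1 = cache.get(1, 2, x, y);  // computed
    const auto& r2 = cache.get(1, 2, x, y);  // served from the cache
    return (&r1 == &r2) ? 0 : 1;             // same cached object
}
```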

Unveiling the Existence of the Elusive Tetraneutron

Nuclear physicists have experimentally confirmed the existence of the tetraneutron, a meta-stable nuclear system that can decay into four free neutrons. Researchers have predicted the tetraneutron’s existence since 2016. The new results, which agree with predictions from supercomputer simulations, will help scientists understand atomic nuclei, neutron stars, and other neutron-rich systems.

PSC and Partners to Lead $7.5-Million Project to Allocate Access on NSF Supercomputers

The NSF has awarded $7.5 million over five years to the RAMPS project, a next-generation system for awarding computing time on the NSF’s network of supercomputers. RAMPS is led by the Pittsburgh Supercomputing Center and involves partner institutions in Colorado and Illinois.

U.S. Department of Energy to Showcase National Lab Expertise at SC21

The scientific computing and networking leadership of the U.S. Department of Energy’s (DOE’s) national laboratories will be on display at SC21, the International Conference for High-Performance Computing, Networking, Storage and Analysis. The conference takes place Nov. 14-19 in St. Louis via a combination of on-site and online resources.

Argonne and Oak Ridge National Laboratories award Codeplay software

Argonne National Laboratory (Argonne), in collaboration with Oak Ridge National Laboratory (ORNL), has awarded Codeplay a contract to implement the oneAPI DPC++ compiler, an implementation of the SYCL open standard, to support AMD GPU-based high-performance computing (HPC) supercomputers.
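
For context, SYCL lets a single standard C++ source file express kernels that can be offloaded to GPUs from different vendors. The vector-addition sketch below is a minimal, illustrative SYCL 2020 example (any conforming toolchain should build it; the flags for targeting a specific AMD GPU will vary by compiler and driver stack).

```cpp
#include <sycl/sycl.hpp>
#include <iostream>
#include <vector>

int main() {
    constexpr std::size_t N = 1024;
    std::vector<float> a(N, 1.0f), b(N, 2.0f), c(N, 0.0f);

    sycl::queue q;  // default device selection (a GPU if one is available)
    {
        sycl::buffer<float> bufA(a.data(), sycl::range<1>(N));
        sycl::buffer<float> bufB(b.data(), sycl::range<1>(N));
        sycl::buffer<float> bufC(c.data(), sycl::range<1>(N));

        q.submit([&](sycl::handler& h) {
            sycl::accessor A(bufA, h, sycl::read_only);
            sycl::accessor B(bufB, h, sycl::read_only);
            sycl::accessor C(bufC, h, sycl::write_only);
            h.parallel_for(sycl::range<1>(N),
                           [=](sycl::id<1> i) { C[i] = A[i] + B[i]; });
        });
    }  // buffer destruction copies results back to the host vectors

    std::cout << "c[0] = " << c[0] << '\n';  // expect 3
}
```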

LLNL team looks at nuclear weapon effects for near-surface detonations

A Lawrence Livermore National Laboratory team has taken a closer look at how nuclear weapon blasts close to the Earth’s surface create complications in their effects and apparent yields. Attempts to correlate data from events with low heights of burst revealed a need to improve the theoretical treatment of strong blast waves rebounding from hard surfaces.

Physicists Crack the Code to Signature Superconductor Kink Using Supercomputing

A team performed simulations on the Summit supercomputer and found that electrons in cuprates interact with phonons much more strongly than was previously thought, leading to experimentally observed “kinks” in the relationship between an electron’s energy and the momentum it carries.
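
The “kink” is a bend in the measured energy-momentum dispersion. In the standard quasiparticle picture (a textbook relation, not the team’s full many-body calculation), electron-phonon coupling renormalizes the bare band and flattens the Fermi velocity below the phonon energy:

```latex
% Quasiparticle dispersion renormalized by electron-phonon coupling:
E(\mathbf{k}) \;\approx\; \varepsilon(\mathbf{k})
  + \operatorname{Re}\Sigma(\mathbf{k},\omega),
\qquad
v_F^{\ast} = \frac{v_F}{1+\lambda},
\qquad
\lambda = -\left.\frac{\partial \operatorname{Re}\Sigma}
                      {\partial \omega}\right|_{\omega=0}
```

A larger coupling strength λ makes the bend more pronounced, which is how a stronger-than-expected electron-phonon interaction shows up in the measured dispersion.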

Scientists Use Supercomputers to Study Reliable Fusion Reactor Design, Operation

A team used two DOE supercomputers to complete simulations of the full-power ITER fusion device and found that the component that removes exhaust heat from ITER may be more likely to maintain its integrity than extrapolation from current fusion devices had predicted.

Supercomputers Aid Scientists Studying the Smallest Particles in the Universe

Using the nation’s fastest supercomputer, Summit at Oak Ridge National Laboratory, a team of nuclear physicists developed a promising method for measuring quark interactions in hadrons and applied the method to simulations using quarks with close-to-physical masses.

Designing Materials from First Principles with Yuan Ping

The UC Santa Cruz professor uses computing resources at Brookhaven Lab’s Center for Functional Nanomaterials to run calculations for quantum information science, spintronics, and energy research.

Simulations Reveal Nature’s Design for Error Correction During DNA Replication

A Georgia State University team has used the nation’s fastest supercomputer, Summit at the US Department of Energy’s Oak Ridge National Laboratory, to find the optimal transition path that one E. coli enzyme uses to switch between building and editing DNA to rapidly remove misincorporated pieces of DNA.

AI gets a boost via LLNL, SambaNova collaboration

Lawrence Livermore National Laboratory (LLNL) has installed a state-of-the-art artificial intelligence (AI) accelerator from SambaNova Systems, the National Nuclear Security Administration (NNSA) announced today, allowing researchers to more effectively combine AI and machine learning (ML) with complex scientific workloads.

CARES Act funds major upgrade to Corona supercomputer for COVID-19 work

With funding from the Coronavirus Aid, Relief, and Economic Security (CARES) Act, Lawrence Livermore National Laboratory, chipmaker AMD and information technology company Supermicro have upgraded the supercomputing cluster Corona, providing additional resources to scientists for COVID-19 drug discovery and vaccine research.

New Calculation Refines Comparison of Matter with Antimatter

An international collaboration of theoretical physicists has published a new calculation relevant to the search for an explanation of the predominance of matter over antimatter in our universe. The new calculation gives a more accurate prediction for the likelihood with which kaons decay into a pair of electrically charged pions vs. a pair of neutral pions.
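
In standard notation, the two decay channels and the conventional measure of direct CP violation that their amplitudes determine are (a textbook relation, not the collaboration’s specific result):

```latex
% Decay channels and the conventional direct CP violation measure:
K^0 \to \pi^+\pi^- , \qquad K^0 \to \pi^0\pi^0 ,
\qquad
\operatorname{Re}\!\left(\frac{\varepsilon'}{\varepsilon}\right)
\;\simeq\; \frac{\omega}{\sqrt{2}\,|\varepsilon|}
\left(\frac{\operatorname{Im}A_2}{\operatorname{Re}A_2}
      - \frac{\operatorname{Im}A_0}{\operatorname{Re}A_0}\right),
\qquad
\omega = \frac{\operatorname{Re}A_2}{\operatorname{Re}A_0}
```

Here A_0 and A_2 are the decay amplitudes for final states with total isospin 0 and 2, and ε captures indirect CP violation in kaon mixing.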

LLNL pairs world’s largest computer chip from Cerebras with “Lassen” supercomputer to accelerate AI research

Lawrence Livermore National Laboratory (LLNL) and artificial intelligence computer company Cerebras Systems have integrated the world’s largest computer chip into the National Nuclear Security Administration’s (NNSA’s) Lassen system, upgrading the top-tier supercomputer with cutting-edge AI technology.

Summit Helps Predict Molecular Breakups

A team used the Summit supercomputer to simulate transition metal systems—such as copper bound to molecules of nitrogen, dihydrogen, or water—and correctly predicted the amount of energy required to break apart dozens of molecular systems, paving the way for a greater understanding of these materials.

Preparing for exascale: LLNL breaks ground on computing facility upgrades

To meet the needs of tomorrow’s supercomputers, the National Nuclear Security Administration’s (NNSA’s) Lawrence Livermore National Laboratory (LLNL) has broken ground on its Exascale Computing Facility Modernization (ECFM) project, which will substantially upgrade the mechanical and electrical capabilities of the Livermore Computing Center.

Supercomputing Aids Scientists Seeking Therapies for Deadly Bacterial Disease

A team of scientists led by Abhishek Singharoy at Arizona State University used the Summit supercomputer at the Oak Ridge Leadership Computing Facility to simulate the structure of a possible drug target for the bacterium that causes rabbit fever.

Simulations forecast nationwide increase in human exposure to extreme climate events

Using ORNL’s now-decommissioned Titan supercomputer, a team of researchers estimated the combined consequences of many different extreme climate events at the county level, a unique approach that provided unprecedented regional and national climate projections that identified the areas most likely to face climate-related challenges.

Four Years of Calculations Lead to New Insights into Muon Anomaly

Two decades ago, an experiment at Brookhaven National Laboratory pinpointed a mysterious mismatch between established particle physics theory and actual lab measurements. A multi-institutional research team, including Brookhaven, Columbia University, RIKEN, and the universities of Connecticut, Nagoya, and Regensburg, has used Argonne National Laboratory’s Mira supercomputer to help narrow down the possible explanations for the discrepancy, delivering a newly precise theoretical calculation that refines one piece of this very complex puzzle.
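
The quantity at stake is the muon’s anomalous magnetic moment, conventionally written as

```latex
a_\mu \;=\; \frac{g_\mu - 2}{2}
```

where g_μ is the muon’s gyromagnetic factor. Dirac theory gives exactly g = 2, and every Standard Model interaction contributes a small, calculable correction; the anomaly is the persistent gap between the measured a_μ and the sum of those predicted corrections.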

Major upgrades of particle detectors and electronics prepare CERN experiment to stream a data tsunami

For an experiment that will generate big data at unprecedented rates, physicists led the design, development, mass production, and delivery of an upgrade of novel particle detectors and state-of-the-art electronics.

Advanced software framework expedites quantum-classical programming

An ORNL team developed the XACC software framework to help researchers harness the potential power of quantum processing units, or QPUs. XACC offloads portions of quantum-classical computing workloads from the host CPU to an attached quantum accelerator, which calculates results and sends them back to the original system.
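
A minimal sketch of that offload pattern in XACC’s C++ API, adapted from the project’s documented examples (the “qpp” simulator backend and the Bell-state kernel are illustrative choices, and details may vary across XACC versions):

```cpp
#include "xacc.hpp"

int main(int argc, char **argv) {
    xacc::Initialize(argc, argv);

    // Pick a backend: "qpp" is a local simulator; a plugin for a real
    // QPU could be named here instead.
    auto qpu = xacc::getAccelerator("qpp");

    // Host-side buffer that will hold the measurement results.
    auto buffer = xacc::qalloc(2);

    // Compile a small quantum kernel (a Bell state) for the chosen backend.
    auto compiler = xacc::getCompiler("xasm");
    auto ir = compiler->compile(R"(__qpu__ void bell(qbit q) {
        H(q[0]);
        CX(q[0], q[1]);
        Measure(q[0]);
        Measure(q[1]);
    })", qpu);

    // Offload: the accelerator executes the kernel and writes the
    // results back into the host buffer.
    qpu->execute(buffer, ir->getComposites()[0]);
    buffer->print();

    xacc::Finalize();
}
```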