This is a continuing profile series on the directors of the Department of Energy (DOE) Office of Science User Facilities. Michael E. Papka is the director of the Argonne Leadership Computing Facility.
Upgrades for LLNL supercomputer from AMD, Penguin Computing aid COVID-19 research
To assist in the COVID-19 research effort, Lawrence Livermore National Laboratory, Penguin Computing and AMD have reached an agreement to upgrade the Lab's unclassified, Penguin Computing-built Corona high performance computing (HPC) cluster. The in-kind contribution of cutting-edge AMD Instinct™ accelerators is expected to nearly double the machine's peak performance.
ORNL is in the fight against COVID-19
In the race to identify solutions to the COVID-19 pandemic, researchers at the Department of Energy’s Oak Ridge National Laboratory are joining the fight by applying expertise in computational science, advanced manufacturing, data science and neutron science.
Lab researchers aid COVID-19 response in antibody, anti-viral research
Lawrence Livermore National Laboratory scientists are contributing to the global fight against COVID-19 by combining artificial intelligence/machine learning, bioinformatics and supercomputing to help discover candidates for new antibodies and pharmaceutical drugs to combat the disease.
Department of Energy to Provide $60 Million for Science Computing Teams
The U.S. Department of Energy (DOE) announced a plan to provide $60 million to establish multidisciplinary teams to develop new tools and techniques to harness supercomputers for scientific discovery.
The Department of Energy Tackling the Challenge of Coronavirus
The Department of Energy has a vital role to play in the national response to COVID-19. Researchers have already used tools at national laboratories to make major inroads to analyzing the virus and its spread.
Early research on existing drug compounds via supercomputing could combat coronavirus
Researchers at the Department of Energy’s Oak Ridge National Laboratory have used Summit, the world’s most powerful and smartest supercomputer, to identify 77 small-molecule drug compounds that might warrant further study in the fight against the SARS-CoV-2 coronavirus, which is responsible for the COVID-19 disease outbreak.
LLNL and HPE to partner with AMD on El Capitan, projected as world’s fastest supercomputer
Lawrence Livermore National Laboratory (LLNL), Hewlett Packard Enterprise (HPE) and Advanced Micro Devices, Inc. (AMD) today announced the selection of AMD as the node supplier for El Capitan, projected to be the world’s most powerful supercomputer when it is fully deployed in 2023.
Valentino Cooper: Building foundations for solid science
Valentino Cooper of Oak Ridge National Laboratory uses theory, modeling and computation to improve fundamental understanding of advanced materials for next-generation energy and information technologies.
Less is More: Berkeley Lab Breaks New Ground in Data Center Optimization
Lawrence Berkeley National Laboratory’s decades of leadership in designing and enhancing energy-efficient data centers is being applied to NERSC supercomputing resources through a collaboration that’s using operational data analytics to optimize cooling systems and save electricity.
ORNL researchers develop ‘multitasking’ AI tool to extract cancer data in record time
To better leverage cancer data for research, scientists at ORNL are developing an artificial intelligence (AI)-based natural language processing tool to improve information extraction from textual pathology reports. In a first for cancer pathology reports, the team developed a multitask convolutional neural network (CNN)—a deep learning model that learns to perform tasks, such as identifying key words in a body of text, by processing language as a two-dimensional numerical dataset.
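The multitask approach described above, in which a shared feature extractor treats text as a two-dimensional numerical array and feeds multiple task-specific output heads, can be illustrated with a minimal sketch. This is not ORNL's implementation; the embedding size, filter counts, and the two hypothetical label heads (cancer site, histology grade) are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: a pathology report rendered as a 2D array
# (sequence length x embedding dimension), i.e. language processed
# as a two-dimensional numerical dataset.
seq_len, embed_dim = 20, 8
report = rng.normal(size=(seq_len, embed_dim))

# A shared bank of 1D convolution filters sliding over the token axis.
kernel_size, n_filters = 3, 4
filters = rng.normal(size=(n_filters, kernel_size, embed_dim))

def conv_features(x):
    """Shared convolutional feature extractor with max-over-time pooling."""
    windows = np.stack(
        [x[i:i + kernel_size] for i in range(len(x) - kernel_size + 1)]
    )
    # Score every window against every filter: (n_windows, n_filters)
    scores = np.einsum('wke,fke->wf', windows, filters)
    activated = np.maximum(scores, 0)   # ReLU
    return activated.max(axis=0)        # max pooling -> (n_filters,)

# Two task-specific heads share the same features (the "multitask" part);
# the label sets here are made up for the sketch.
w_site = rng.normal(size=(n_filters, 3))   # 3 hypothetical site classes
w_grade = rng.normal(size=(n_filters, 2))  # 2 hypothetical grade classes

feats = conv_features(report)
site_logits = feats @ w_site
grade_logits = feats @ w_grade
print(site_logits.shape, grade_logits.shape)
```

Because both heads backpropagate through the same convolutional features during training, signal from one task can improve the representation used by the other, which is the usual motivation for a multitask design.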
ORNL researchers advance performance benchmark for quantum computers
Researchers at the Department of Energy’s Oak Ridge National Laboratory (ORNL) have developed a quantum chemistry simulation benchmark to evaluate the performance of quantum devices and guide the development of applications for future quantum computers.
Globus Connector for Google Cloud Now Available
Globus, a leading research data management service, today announced the general availability of Globus for Google Cloud, a new solution for accessing and managing data stored in Google Cloud object storage.
AI for Plant Breeding in an Ever-Changing Climate
In this Q&A, Oak Ridge National Laboratory’s Dan Jacobson talks about his team’s work on a genomic selection algorithm, his vision for the future of environmental genomics, and the space where simulation meets AI.
A New Parallel Strategy for Tackling Turbulence on Summit
A team at Georgia Tech created a new turbulence algorithm optimized for the Summit supercomputer. It achieved less than 15 seconds of wall-clock time per time step for more than 6 trillion grid points, a new world record surpassing the prior state of the art for a problem of this size.
LLNL leads multi-institutional team in modeling protein interactions tied to cancer
Computational scientists, biophysicists and statisticians from Lawrence Livermore National Laboratory (LLNL) and Los Alamos National Laboratory (LANL) are leading a massive multi-institutional collaboration that has developed a machine learning-based simulation for next-generation supercomputers capable of modeling protein interactions and mutations that play a role in many forms of cancers.
Search for Lightweight Alloying Solutions Earns Team a Gordon Bell Finalist Nomination
A team used the Summit supercomputer to simulate a 10,000-atom magnesium dislocation system at 46 petaflops, a feat that earned an ACM Gordon Bell Prize finalist nomination and could help scientists determine which alloying materials to add to improve magnesium alloys.
Gordon Bell Finalist Team Tackles Transistors with New Programming Paradigm
A team simulated a 10,000-atom 2D transistor slice on the Summit supercomputer and mapped where heat is produced in a single transistor. Using a new data-centric version of the OMEN nanodevice simulator, the team sustained the code at 85.45 petaflops and earned a Gordon Bell Prize finalist nomination.
Google quantum computing breakthrough a ‘remarkable milestone’
Google announced Wednesday that an experimental quantum processor completed in just a few minutes a calculation that would take a traditional supercomputer thousands of years. Peter McMahon, professor of applied and engineering physics, researches the physics of computation and…
ORNL develops, deploys AI capabilities across research portfolio
To accelerate promising artificial intelligence applications in diverse research fields, ORNL has established a labwide AI Initiative. This internal investment brings the lab’s AI expertise, computing resources and user facilities together to facilitate analyses of massive datasets.
Gaute Hagen
Profiled is physicist Gaute Hagen of the Department of Energy’s Oak Ridge National Laboratory, who runs advanced models on powerful supercomputers to explore how protons and neutrons interact to “build” an atomic nucleus from scratch.
ECP’s ExaStar Project Seeks Answers Hidden in the Cosmos
ExaStar aims to create simulations for comparison with experiments and observations to help answer a variety of questions: Why is there more iron than gold in the universe? Why are some elements rarer than others? Why are transuranic elements so difficult to find on Earth?
Machine Learning Helps Create Detailed, Efficient Models of Water
A team devised a way to better model water’s properties. They developed a machine-learning workflow that offers accurate and computationally efficient models.
Study Uses Supercomputers to Advance Dynamic Earthquake Rupture Models
SDSC’s Comet Supports UC Riverside Study of San Andreas Fault System
Multi-fault earthquakes can span fault systems of tens to hundreds of kilometers, with ruptures propagating from one segment to another. During the last decade, seismologists have observed several cases…
Berkeley Lab’s John Shalf Ponders the Future of HPC Architectures
What will scientific computing at scale look like in 2030? With the impending demise of Moore’s Law, there are still more questions than answers for users and manufacturers of HPC technologies as they try to figure out what their next…