Researchers have documented for the first time the unique chemistry, dynamics and structure of high-temperature liquid uranium trichloride salt, a potential nuclear fuel source for next-generation reactors.
Groundbreaking LLNL and BridgeBio Oncology Therapeutics collaboration announces start of human trials for supercomputing-discovered cancer drug
In a substantial milestone for supercomputing-aided drug design, Lawrence Livermore National Laboratory (LLNL) and BridgeBio Oncology Therapeutics (BridgeBio) today announced clinical trials have begun for a first-in-class medication that targets specific genetic mutations implicated in many types of cancer.
Promethium bound: Rare earth element’s secrets exposed
Scientists have uncovered the properties of a rare earth element that was first discovered 80 years ago at the very same laboratory, opening a new pathway for the exploration of elements critical in modern technology, from medicine to space travel.
LLNL debuts trio of systems on new Top500 list of world’s most powerful supercomputers, including El Capitan Early Delivery System
Three new systems currently or soon to be sited at Lawrence Livermore National Laboratory (LLNL) on Monday debuted on the latest Top500 list of the world’s most powerful supercomputers, including the first portion of the exascale machine El Capitan.
Globus Announces Multi-User Support for Globus Compute
Globus, the de facto standard platform for research IT, announced multi-user support for Globus Compute, a service that enables reliable, scalable, and high performance remote function execution, and delivers the same “fire-and-forget” capabilities for computation as the Globus core platform does for data management.
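To make the announcement concrete, here is a minimal sketch of the remote function execution pattern described above, using the Globus Compute Python SDK. The endpoint UUID is a placeholder, and exact class names and arguments may vary across SDK versions, so treat this as an illustration rather than official usage.

from globus_compute_sdk import Executor  # Globus Compute Python SDK

def add(a, b):
    # Any plain Python function can be shipped to a remote endpoint for execution.
    return a + b

# Placeholder UUID: substitute the ID of a Globus Compute endpoint you are authorized to use.
ENDPOINT_ID = "00000000-0000-0000-0000-000000000000"

with Executor(endpoint_id=ENDPOINT_ID) as ex:
    future = ex.submit(add, 2, 3)  # sends the function call to the remote endpoint
    print(future.result())         # blocks until the remote result is returned

The future-based interface is what enables the “fire-and-forget” workflow the announcement refers to: tasks can be submitted through the hosted service and their results collected later, rather than the client waiting synchronously on the execution site.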
GlobusWorld 2024 Program Announced
This year’s program includes guest keynote addresses by Ben Brown, Director, Facilities Division, Advanced Scientific Computing Research at the U.S. Department of Energy, and Greg Gunther, Science Data Management Branch Chief, U.S. Geological Survey.
Early Frontier users seize exascale advantage, grapple with grand scientific challenges
With the world’s first exascale supercomputing system now open to full user operations, research teams are harnessing Frontier’s power and speed to tackle some of the most challenging problems in modern science. The HPE Cray EX system at the Department of Energy’s Oak Ridge National Laboratory debuted in May 2022 as the fastest computer on the planet and the first machine to break the exascale barrier.
GlobusWorld 2023 Program Announced
Globus, the leading research data management service, today announced the lineup of speakers for GlobusWorld 2023, being held April 25-27, 2023 in Chicago, IL, and online. Now in its 12th year, GlobusWorld brings together over 200 researchers, systems administrators, developers and IT leaders from top computing centers, labs and universities around the world.
James Barr von Oehsen Named Director of the Pittsburgh Supercomputing Center
James Barr von Oehsen has been selected as the director of the Pittsburgh Supercomputing Center (PSC), a joint research center of Carnegie Mellon University and the University of Pittsburgh. Von Oehsen is a leader in the fields of cyberinfrastructure, research computing, advanced networking, data science and information technology.
National Energy Technology Laboratory and Pittsburgh Supercomputing Center Pioneer First Ever Computational Fluid Dynamics Simulation on Cerebras Wafer-Scale Engine
Cerebras Systems, the pioneer in high performance artificial intelligence (AI) compute, today announced, for the first time ever, the simulation of a high-resolution natural convection workload at near real-time rates.
Globus Welcomes New Subscribers
As the volume of data explodes, and gigabyte and terabyte data sets become the new norm, effective research data management tools become a necessity for today’s researchers. Globus, a non-profit service run by the University of Chicago, delivers a service and platform to do just that. Globus achieves sustainability via a hybrid free and subscription-based model whose primary goal is to maximize the value delivered to science, and provides positive returns to scale as a result of a growing subscriber base.
LLNL and Amazon Web Services to cooperate on standardized software stack for HPC
Lawrence Livermore National Laboratory and Amazon Web Services have signed a memorandum of understanding (MOU) to define the role of leadership-class high performance computing (HPC) in a future where cloud HPC is ubiquitous.
PSC and Partners to Lead $7.5-Million Project to Allocate Access on NSF Supercomputers
The NSF has awarded $7.5 million over five years to the RAMPS project, a next-generation system for awarding computing time in the NSF’s network of supercomputers. RAMPS is led by the Pittsburgh Supercomputing Center and involves partner institutions in Colorado and Illinois.
U.S. Department of Energy to Showcase National Lab Expertise at SC21
The scientific computing and networking leadership of the U.S. Department of Energy’s (DOE’s) national laboratories will be on display at SC21, the International Conference for High-Performance Computing, Networking, Storage and Analysis. The conference takes place Nov. 14-19 in St. Louis via a combination of on-site and online resources.
World-renowned data science experts to discuss the future of digital fairness
ICDS’s two-day Fall Symposium will be held Oct. 6 and 7, bringing together researchers from around the U.S. to discuss data, equity, reproducibility and other topics related to fairness in data science.
Australia’s National Research and Education Network Partners with Globus
The Australian national research and education network AARNet, a non-profit provider of network, cyber security, data and collaboration services, has signed an agreement with Globus, the research data management service operated by the University of Chicago, to add Globus to its offerings.
LLNL, IBM and Red Hat to explore standardized High Performance Computing Resource Management interface
Lawrence Livermore National Laboratory (LLNL), IBM and Red Hat are combining forces to develop best practices for interfacing high-performance computing (HPC) schedulers and cloud orchestrators, an effort designed to prepare for emerging supercomputers that exploit cloud technologies.
Coalition’s new leadership renews focus on advocating for academic scientific computation
Computers play an integral role in nearly every discipline of research today, giving scientists the ability to discover new drugs, develop new materials, forecast the impacts of climate change, and solve some of today’s most challenging problems.
Los Alamos announces details of new computational storage deployment
Los Alamos National Laboratory announced an industry-first computational storage deployment targeting a next-generation storage system for HPC sited at Los Alamos.
Globus for iRODS Connector Released
Globus, the leading research data management platform, today announced the general availability of Globus for iRODS, offering researchers an enhanced solution for policy-managed data preservation.
LLNL welcomes “Ruby” supercomputer for national nuclear security mission & COVID-19 research
Lawrence Livermore National Laboratory (LLNL), along with partners Intel, Supermicro and Cornelis Networks, has deployed “Ruby,” a high performance computing (HPC) cluster that will perform functions for the National Nuclear Security Administration (NNSA) and support the Laboratory’s COVID-19 research.
Mammoth “big memory” computing cluster to aid in COVID-19 research
Lawrence Livermore National Laboratory and its partners AMD, Supermicro and Cornelis Networks have installed a new high performance computing (HPC) cluster with memory and data storage capabilities optimized for data-intensive COVID-19 research and pandemic response.
Los Alamos National Laboratory brings next-generation HPC to the fight against COVID-19
Los Alamos National Laboratory has completed the installation of a next-generation high performance computing platform, aimed at enhancing its ongoing R&D efforts in support of the nation’s response to COVID-19.
AI gets a boost via LLNL, SambaNova collaboration
Lawrence Livermore National Laboratory (LLNL) has installed a state-of-the-art artificial intelligence (AI) accelerator from SambaNova Systems, the National Nuclear Security Administration (NNSA) announced today, allowing researchers to more effectively combine AI and machine learning (ML) with complex scientific workloads.
CARES Act funds major upgrade to Corona supercomputer for COVID-19 work
With funding from the Coronavirus Aid, Relief and Economic Security (CARES) Act, Lawrence Livermore National Laboratory, chipmaker AMD and information technology company Supermicro have upgraded the supercomputing cluster Corona, providing additional resources to scientists for COVID-19 drug discovery and vaccine research.
Virtual symposium experts offer insights on big data issues, opportunities
Registration is now open for Penn State’s Institute of Computational and Data Sciences’ (ICDS) 2020 Symposium. The two-day symposium will be held virtually Oct. 21-22 and will feature an interdisciplinary group of speakers and experts who will focus on both the challenges — and opportunities — of big data and data science.
LLNL scientists pair 3D bioprinting and computer modeling to examine cancer spread in blood vessels
Lawrence Livermore National Laboratory scientists have paired 3D-printed, living human brain vasculature with advanced computational flow simulations to better understand tumor cell attachment to blood vessels, the first step in secondary tumor formation during cancer metastasis.
Physical Scientists Turn to Deep Learning to Improve Earth Systems Modeling
The role of AI in science is at a turning point, with weather, climate, and Earth systems modeling emerging as an exciting application area for physics-informed deep learning. In this Q&A, NERSC’s Karthik Kashinath discusses what is driving the scientific community to embrace these new methodologies.
New NSF Physics Frontier Center Will Focus on Neutron Star Modeling in ‘Gravitational Wave Era’
A new Physics Frontier Center at UC Berkeley, supported by the National Science Foundation, expands the reach and depth of existing capabilities on campus and at neighboring Berkeley Lab in modeling one of the most violent events in the universe: the merger of neutron stars and its explosive aftermath.
Love-hate relationship of solvent and water leads to better biomass breakup
Scientists at the Department of Energy’s Oak Ridge National Laboratory used neutron scattering and supercomputing to better understand how an organic solvent and water work together to break down plant biomass, creating a pathway to significantly improve the production of renewable biofuels and bioproducts.
Preparing for exascale: LLNL breaks ground on computing facility upgrades
To meet the needs of tomorrow’s supercomputers, the National Nuclear Security Administration’s (NNSA’s) Lawrence Livermore National Laboratory (LLNL) has broken ground on its Exascale Computing Facility Modernization (ECFM) project, which will substantially upgrade the mechanical and electrical capabilities of the Livermore Computing Center.
Upgrades for LLNL supercomputer from AMD, Penguin Computing aid COVID-19 research
To assist in the COVID-19 research effort, Lawrence Livermore National Laboratory, Penguin Computing and AMD have reached an agreement to upgrade the Lab’s unclassified, Penguin Computing-built Corona high performance computing (HPC) cluster with an in-kind contribution of cutting-edge AMD Instinct™ accelerators, expected to nearly double the peak performance of the machine.
Mitch Allmond: Shaping a better fundamental understanding of matter
Profiled is Mitch Allmond of Oak Ridge National Laboratory, who conducts experiments and uses theoretical models to advance our understanding of the structure of atomic nuclei.
Less is More: Berkeley Lab Breaks New Ground in Data Center Optimization
Lawrence Berkeley National Laboratory’s decades of leadership in designing and enhancing energy-efficient data centers are being applied to NERSC supercomputing resources through a collaboration that’s using operational data analytics to optimize cooling systems and save electricity.
Globus Connector for Google Cloud Now Available
Globus, a leading research data management service, today announced the general availability of Globus for Google Cloud, a new solution for accessing and managing data stored in Google Cloud object storage.
Los Alamos National Laboratory teams with Arm to develop tailored, efficient processor architectures for extreme-scale computing
Los Alamos National Laboratory and Arm are teaming up to make efficient, workload-optimized processors tailored to the extreme-scale computing requirements of the Laboratory’s national-security mission.
Gaute Hagen
Profiled is physicist Gaute Hagen of the Department of Energy’s Oak Ridge National Laboratory, who runs advanced models on powerful supercomputers to explore how protons and neutrons interact to “build” an atomic nucleus from scratch.
Berkeley Lab’s John Shalf Ponders the Future of HPC Architectures
What will scientific computing at scale look like in 2030? With the impending demise of Moore’s Law, there are still more questions than answers for users and manufacturers of HPC technologies as they try to figure out what their next…