With the world’s first exascale supercomputing system now open to full user operations, research teams are harnessing Frontier’s power and speed to tackle some of the most challenging problems in modern science. The HPE Cray EX system at the Department of Energy’s Oak Ridge National Laboratory debuted in May 2022 as the fastest computer on the planet and the first machine to break the exascale barrier, at 1.1 exaflops.
Globus, the leading research data management service, today announced the lineup of speakers for GlobusWorld 2023, being held April 25-27, 2023 in Chicago, IL, and online. Now in its 12th year, GlobusWorld brings together over 200 researchers, systems administrators, developers and IT leaders from top computing centers, labs and universities around the world.
James Barr von Oehsen has been selected as the director of the Pittsburgh Supercomputing Center (PSC), a joint research center of Carnegie Mellon University and the University of Pittsburgh. Von Oehsen is a leader in the fields of cyberinfrastructure, research computing, advanced networking, data science and information technology.
Cerebras Systems, the pioneer in high performance artificial intelligence (AI) compute, today announced the first-ever simulation of a high-resolution natural convection workload at near real-time rates.
As the volume of data explodes, and gigabyte and terabyte data sets become the new norm, effective research data management tools become a necessity for today’s researchers. Globus, a non-profit service run by the University of Chicago, delivers a service and platform to do just that. Globus achieves sustainability via a hybrid free and subscription-based model whose primary goal is to maximize the value delivered to science, and provides positive returns to scale as a result of a growing subscriber base.
Lawrence Livermore National Laboratory and Amazon Web Services have signed a memorandum of understanding (MOU) to define the role of leadership-class high performance computing (HPC) in a future where cloud HPC is ubiquitous.
The NSF has awarded $7.5 million over five years to the RAMPS project, a next-generation system for awarding computing time in the NSF’s network of supercomputers. RAMPS is led by the Pittsburgh Supercomputing Center and involves partner institutions in Colorado and Illinois.
The scientific computing and networking leadership of the U.S. Department of Energy’s (DOE’s) national laboratories will be on display at SC21, the International Conference for High-Performance Computing, Networking, Storage and Analysis. The conference takes place Nov. 14-19 in St. Louis via a combination of on-site and online resources.
ICDS’s two-day Fall Symposium will be held Oct. 6 and 7, bringing together researchers from around the U.S. to discuss data, equity, reproducibility and other topics related to fairness in data science.
The Australian national research and education network AARNet, a non-profit provider of network, cyber security, data and collaboration services, has signed an agreement with Globus, a non-profit service operated by the University of Chicago, to add Globus as a research data management service.
Lawrence Livermore National Laboratory (LLNL), IBM and Red Hat are combining forces to develop best practices for interfacing high-performance computing (HPC) schedulers and cloud orchestrators, an effort designed to prepare for emerging supercomputers that exploit cloud technologies.
Computers play an integral role in nearly every discipline of research today, giving scientists the ability to discover new drugs, develop new materials, forecast the impacts of climate change, and solve some of today’s most challenging problems.
Los Alamos National Laboratory announced an industry-first computational storage deployment targeting a next-generation storage system for HPC sited at Los Alamos.
Globus, the leading research data management platform, today announced the general availability of Globus for iRODS, offering researchers an enhanced solution for policy managed data preservation.
Lawrence Livermore National Laboratory (LLNL), along with partners Intel, Supermicro and Cornelis Networks, has deployed “Ruby,” a high performance computing (HPC) cluster that will perform functions for the National Nuclear Security Administration (NNSA) and support the Laboratory’s COVID-19 research.
Lawrence Livermore National Laboratory and its partners AMD, Supermicro and Cornelis Networks have installed a new high performance computing (HPC) cluster with memory and data storage capabilities optimized for data-intensive COVID-19 research and pandemic response.
Los Alamos National Laboratory has completed the installation of a next-generation high performance computing platform, aimed at enhancing its ongoing R&D efforts in support of the nation’s response to COVID-19.
Lawrence Livermore National Laboratory (LLNL) has installed a state-of-the-art artificial intelligence (AI) accelerator from SambaNova Systems, the National Nuclear Security Administration (NNSA) announced today, allowing researchers to more effectively combine AI and machine learning (ML) with complex scientific workloads.
With funding from the Coronavirus Aid, Relief and Economic Security (CARES) Act, Lawrence Livermore National Laboratory, chipmaker AMD and information technology company Supermicro have upgraded the supercomputing cluster Corona, providing additional resources to scientists for COVID-19 drug discovery and vaccine research.
Registration is now open for Penn State’s Institute of Computational and Data Sciences’ (ICDS) 2020 Symposium. The two-day symposium will be held virtually Oct. 21-22 and will feature an interdisciplinary group of speakers and experts who will focus on both the challenges — and opportunities — of big data and data science.
Lawrence Livermore National Laboratory scientists have paired 3D-printed, living human brain vasculature with advanced computational flow simulations to better understand tumor cell attachment to blood vessels, the first step in secondary tumor formation during cancer metastasis.
The role of AI in science is at a turning point, with weather, climate, and Earth systems modeling emerging as an exciting application area for physics-informed deep learning. In this Q&A, NERSC’s Karthik Kashinath discusses what is driving the scientific community to embrace these new methodologies.
A new Physics Frontier Center at UC Berkeley, supported by the National Science Foundation, expands the reach and depth of existing capabilities on campus and at neighboring Berkeley Lab in modeling one of the most violent events in the universe: the merger of neutron stars and its explosive aftermath.
Scientists at the Department of Energy’s Oak Ridge National Laboratory used neutron scattering and supercomputing to better understand how an organic solvent and water work together to break down plant biomass, creating a pathway to significantly improve the production of renewable biofuels and bioproducts.
To meet the needs of tomorrow’s supercomputers, the National Nuclear Security Administration’s (NNSA’s) Lawrence Livermore National Laboratory (LLNL) has broken ground on its Exascale Computing Facility Modernization (ECFM) project, which will substantially upgrade the mechanical and electrical capabilities of the Livermore Computing Center.
To assist in the COVID-19 research effort, Lawrence Livermore National Laboratory, Penguin Computing and AMD have reached an agreement to upgrade the Lab’s unclassified, Penguin Computing-built Corona high performance computing (HPC) cluster with an in-kind contribution of cutting-edge AMD Instinct™ accelerators, expected to nearly double the peak performance of the machine.
Profiled is Mitch Allmond of Oak Ridge National Laboratory, who conducts experiments and uses theoretical models to advance our understanding of the structure of atomic nuclei.
Lawrence Berkeley National Laboratory’s decades of leadership in designing and enhancing energy-efficient data centers is being applied to NERSC supercomputing resources through a collaboration that’s using operational data analytics to optimize cooling systems and save electricity.
Globus, a leading research data management service, today announced the general availability of Globus for Google Cloud, a new solution for accessing and managing data stored in Google Cloud object storage.
Los Alamos National Laboratory and Arm are teaming up to make efficient, workload-optimized processors tailored to the extreme-scale computing requirements of the Laboratory’s national-security mission.
Profiled is physicist Gaute Hagen of the Department of Energy’s Oak Ridge National Laboratory, who runs advanced models on powerful supercomputers to explore how protons and neutrons interact to “build” an atomic nucleus from scratch.
What will scientific computing at scale look like in 2030? With the impending demise of Moore’s Law, there are still more questions than answers for users and manufacturers of HPC technologies as they try to figure out what their next…