Lawrence Livermore National Laboratory scientists and engineers have earned three R&D 100 awards from the trade journal R&D World. The LLNL awards include a spectral beam combining optic that enables a single, high-power beam with unparalleled compactness and damage resistance; an open-source memory-mapping library with increased power and flexibility; and a user-level file system for high-performance computing systems. With this year’s results, the Laboratory has now collected a total of 182 R&D 100 awards since 1978.
Advancing Quantum Research – DOE Inks MOU with Department of Defense
Today, the U.S. Department of Energy (DOE) and the Defense Advanced Research Projects Agency (DARPA) announced a Memorandum of Understanding (MOU) to coordinate their efforts to advance quantum computing research.
At the Climate READi workshop: Resilient power systems in the context of climate change
The Department of Energy’s Oak Ridge National Laboratory and other institutions joined industry stakeholders in exploring solutions for power grid climate resilience at the Climate READi Southeast workshop co-hosted by EPRI and ORNL’s Water Power Program on April 10-11.
$300,000 NSF MRI grant awarded to Furman, Mount Holyoke, Richmond to expand program for young chemists
The three-year grant is earmarked for the purchase of an additional high-performance computer cluster to join existing MERCURY resources hosted offsite. The grant will enable 13 more undergraduate-focused research groups to benefit, growing the consortium to 47 computational scientists at 41 institutions nationwide.
Department of Energy Announces $80 Million for Research to Accelerate Innovations in Emerging Technologies
Today, the U.S. Department of Energy (DOE) announced $80 million, provided by the Office of Science, to support fundamental research to drive the innovation cycle in support of the Accelerate Innovations in Emerging Technologies (Accelerate) initiative.
Department of Energy Announces $8.5 Million for High-Performance Algorithms for Complex Energy Systems and Processes
Today, the U.S. Department of Energy (DOE) announced $8.5 million in funding for basic research on the development of randomized algorithms for understanding and improving the properties and behavior of complex energy systems. Problems involving the design of scientific experiments or of energy and communication infrastructures can often be viewed as a discrete, networked system of systems that needs to be optimized. Such discrete optimization problems cannot be solved efficiently with conventional algorithms, which are poorly suited to graphs, networks, and streaming data.
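The announcement does not name specific algorithms; as a rough, hypothetical illustration of the randomized, streaming-friendly style of method the program targets, the Python sketch below keeps a uniform random sample of edges from a graph stream too large to hold in memory (all names and data are made up).

import random

def reservoir_sample(edge_stream, k):
    """Keep a uniform random sample of k edges from a stream of unknown length."""
    sample = []
    for i, edge in enumerate(edge_stream):
        if i < k:
            sample.append(edge)
        else:
            j = random.randint(0, i)  # inclusive upper bound
            if j < k:
                sample[j] = edge      # incoming edge is kept with probability k/(i+1)
    return sample

# Example: sample 3 edges from a synthetic stream of 10,000 random edges.
edges = ((random.randrange(100), random.randrange(100)) for _ in range(10_000))
print(reservoir_sample(edges, 3))

Randomized primitives like this let an algorithm summarize a network in a single pass, which is exactly the regime where conventional exact methods break down.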
Build-a-satellite program could fast track national security space missions
Valhalla, a Python-based performance modeling framework developed at Sandia National Laboratories, uses high-performance computing to build preliminary satellite designs based on mission requirements and then runs those designs through thousands of simulations.
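Valhalla’s internals are not described here; purely as a sketch of the general pattern (enumerate candidate designs, score each one in an independent simulation, keep the best), one might write a driver like the following, where every parameter name and the scoring function are hypothetical stand-ins rather than Sandia’s actual model.

import itertools
from multiprocessing import Pool

# Hypothetical design space; a real framework would derive candidates from
# mission requirements rather than a hard-coded grid.
DESIGN_SPACE = {
    "solar_array_m2": [4, 8, 12],
    "battery_kwh": [2, 4],
    "antenna_gain_db": [20, 30],
}

def candidate_designs(space):
    keys, values = zip(*space.items())
    for combo in itertools.product(*values):
        yield dict(zip(keys, combo))

def evaluate(design):
    """Stand-in for a physics simulation that scores one candidate design."""
    power_margin = 0.3 * design["solar_array_m2"] + 0.5 * design["battery_kwh"] - 2.0
    link_margin = design["antenna_gain_db"] - 25
    return design, power_margin + 0.1 * link_margin

if __name__ == "__main__":
    with Pool() as pool:  # an HPC version would distribute work across nodes
        results = pool.map(evaluate, candidate_designs(DESIGN_SPACE))
    best_design, best_score = max(results, key=lambda r: r[1])
    print(best_design, round(best_score, 2))

The point is only the shape of the workflow: thousands of cheap, independent evaluations are an embarrassingly parallel fit for high-performance computing resources.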
Public release of ORNL global population distribution data aids humanitarian support
ORNL’s suite of LandScan population distribution models is available online to the global public for the first time under a new open Creative Commons license.
Supercomputing, neutrons crack code to uranium compound’s signature vibes
Oak Ridge National Laboratory researchers used the nation’s fastest supercomputer to map the molecular vibrations of an important but little-studied uranium compound produced during the nuclear fuel cycle, with results that could lead to a cleaner, safer world.
VA, ORNL and Harvard develop novel method to identify complex medical relationships
A team of researchers from the Department of Veterans Affairs, Oak Ridge National Laboratory, Harvard’s T.H. Chan School of Public Health, Harvard Medical School and Brigham and Women’s Hospital has developed a novel, machine learning–based technique to explore and identify relationships among medical concepts using electronic health record data across multiple healthcare providers.
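The article does not detail the method, so the following is only a heavily simplified, hypothetical illustration of one common way to surface related concepts from electronic health records: embed each concept by its co-occurrence counts and rank neighbors by cosine similarity (the concepts and counts below are invented).

import numpy as np

# Invented co-occurrence matrix: entry [i, j] counts how often concepts i and j
# appear in the same patient record.
concepts = ["type 2 diabetes", "metformin", "hypertension", "lisinopril"]
cooc = np.array([
    [0.0, 50.0, 20.0,  5.0],
    [50.0, 0.0, 10.0,  2.0],
    [20.0, 10.0, 0.0, 40.0],
    [5.0,  2.0, 40.0,  0.0],
])

def most_similar(name, k=2):
    """Rank other concepts by cosine similarity of their co-occurrence rows."""
    i = concepts.index(name)
    vectors = cooc / (np.linalg.norm(cooc, axis=1, keepdims=True) + 1e-12)
    scores = vectors @ vectors[i]
    order = np.argsort(-scores)
    return [(concepts[j], round(float(scores[j]), 3)) for j in order if j != i][:k]

print(most_similar("metformin"))

Real EHR-scale work adds temporal structure, privacy constraints, and far richer models, but the ranking-by-similarity idea is the same.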
Story tips: Predicting water quality, stronger & ‘stretchier’ alloys, RAPID reinforcement and mountainous water towers
Physicists Crack the Code to Signature Superconductor Kink Using Supercomputing
A team performed simulations on the Summit supercomputer and found that electrons in cuprates interact with phonons much more strongly than was previously thought, leading to experimentally observed “kinks” in the relationship between an electron’s energy and the momentum it carries.
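For context only (this is a standard textbook relation, not a claim from the article): the strength of the electron–phonon interaction is usually summarized by a dimensionless coupling \lambda, which flattens the band dispersion near the Fermi level and produces the observed kink,

v_F^{\mathrm{renormalized}} \approx \frac{v_F^{\mathrm{bare}}}{1 + \lambda}, \qquad m^{*} \approx (1 + \lambda)\, m,

so a stronger measured coupling corresponds to a larger slope change at the kink.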
ORNL’s superb materials expertise, data and AI tools propel progress
At the Department of Energy’s Oak Ridge National Laboratory, scientists use artificial intelligence, or AI, to accelerate the discovery and development of materials for energy and information technologies.
High-Performance Computing Makes a Splash in Water Cycle Science
The Comet supercomputer will end formal service as an NSF resource and transition to exclusive use by the Center for Western Weather and Water Extremes, which will use its computing capabilities to improve decision-making for reservoir management in California.
Story tips: Volcanic microbes, unbreakable bonds and flood mapping
Connected Moments for Quantum Computing
Connected moments math shortcut shaves time and cost of quantum calculations while maintaining accuracy
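For background (the announcement gives no formulas): the connected moments expansion estimates a ground-state energy from a handful of Hamiltonian moments \mu_k = \langle \phi | H^k | \phi \rangle measured on a trial state. A common low-order truncation, written here as an illustration of the general idea rather than the exact variant used in this work, is

I_1 = \mu_1, \qquad I_2 = \mu_2 - \mu_1^2, \qquad I_3 = \mu_3 - 3\mu_1\mu_2 + 2\mu_1^3,

E \approx I_1 - \frac{I_2^2}{I_3},

which trades a long variational optimization for a few moment measurements, hence the savings in time and cost.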
Simulations Reveal Nature’s Design for Error Correction During DNA Replication
A Georgia State University team has used the nation’s fastest supercomputer, Summit at the US Department of Energy’s Oak Ridge National Laboratory, to find the optimal transition path that an E. coli enzyme uses to switch between building DNA and editing it to rapidly remove misincorporated nucleotides.
High-Performance Computing Helps Grid Operators Manage Increasing Complexity
PNNL, in partnership with industry, has developed a computational tool called HIPPO, which accelerates the increasingly complex calculations grid operators must make in scheduling energy resources to meet the next day’s forecasted electricity demand.
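Day-ahead scheduling of this kind is typically posed as a unit commitment problem; in schematic form (a generic textbook formulation, not HIPPO’s actual model), the optimization is

\min_{u,\,p} \sum_{t} \sum_{g} \left( c_g\, p_{g,t} + s_g\, u_{g,t} \right)
\quad \text{s.t.} \quad \sum_{g} p_{g,t} = d_t, \qquad
u_{g,t}\, p_g^{\min} \le p_{g,t} \le u_{g,t}\, p_g^{\max}, \qquad u_{g,t} \in \{0, 1\},

where u_{g,t} commits generator g in hour t, p_{g,t} is its output, d_t is the forecast demand, and c_g, s_g are generation and commitment costs. Real markets add ramping, reserve, and network constraints, which is what makes the problem computationally demanding.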
New NSF Physics Frontier Center Will Focus on Neutron Star Modeling in ‘Gravitational Wave Era’
A new Physics Frontier Center at UC Berkeley, supported by the National Science Foundation, expands the reach and depth of existing capabilities on campus and at neighboring Berkeley Lab in modeling one of the most violent events in the universe: the merger of neutron stars and its explosive aftermath.
Summit Helps Predict Molecular Breakups
A team used the Summit supercomputer to simulate transition metal systems—such as copper bound to molecules of nitrogen, dihydrogen, or water—and correctly predicted the amount of energy required to break apart dozens of molecular systems, paving the way for a greater understanding of these materials.
Knocking Out Drug Side Effects with Supercomputing
A team at Stanford University used the OLCF’s Summit supercomputer to compare simulations of a G protein-coupled receptor with different molecules attached to gain an understanding of how to minimize or eliminate side effects in drugs that target these receptors.
Supercomputing Aids Scientists Seeking Therapies for Deadly Bacterial Disease
A team of scientists led by Abhishek Singharoy at Arizona State University used the Summit supercomputer at the Oak Ridge Leadership Computing Facility to simulate the structure of a possible drug target for the bacterium that causes rabbit fever.
Major upgrades of particle detectors and electronics prepare CERN experiment to stream a data tsunami
For an experiment that will generate big data at unprecedented rates, physicists led the design, development, mass production and delivery of an upgrade of novel particle detectors and state-of-the-art electronics.
Fighting COVID with computing: Fermilab, Brookhaven, Open Science Grid dedicate computational power to COVID-19 research
Scientists and engineers at Fermilab and Brookhaven are uniting with other organizations in the Open Science Grid to help fight COVID-19 by dedicating considerable computational power to researchers studying how to combat the disease.
Los Alamos high-performance computing veteran to chair SC22
Candace Culhane, a program/project director in Los Alamos National Laboratory’s Directorate for Simulation and Computation, has been selected as the general chair for the 2022 SC Conference (SC22).
BP Looks to ORNL, ADIOS to Help Rein in Data
British Petroleum researchers invited ORNL data scientists to give the company’s high-performance computing team a tutorial on the laboratory’s ADIOS I/O middleware. ADIOS has helped researchers achieve scientific breakthroughs by providing a simple, flexible way to describe data in their code that needs to be written, read, or processed outside of the running simulation. ORNL researchers Scott Klasky and Norbert Podhorszki demonstrated how ADIOS could help the BP team accelerate their science by tackling the company’s large, unique seismic datasets.
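For readers unfamiliar with ADIOS, a minimal sketch using the ADIOS2 high-level Python bindings might look like the following; exact API names differ between ADIOS2 releases (newer versions favor adios2.Stream), so treat this as illustrative rather than definitive.

import numpy as np
import adios2  # ADIOS2 Python bindings

data = np.arange(10, dtype=np.float64)

# Write a named variable into a .bp dataset; a separate reader, analysis tool,
# or visualization service can consume it outside the running simulation.
with adios2.open("pressure.bp", "w") as fw:
    fw.write("pressure", data, [10], [0], [10])  # shape, start, count

with adios2.open("pressure.bp", "r") as fr:
    for step in fr:
        print(step.read("pressure"))

Describing variables this way, rather than hand-rolling file formats, is what lets the same simulation code stream data to disk, to coupled codes, or to analysis pipelines.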
AI for Plant Breeding in an Ever-Changing Climate
In this Q&A, Oak Ridge National Laboratory’s Dan Jacobson talks about his team’s work on a genomic selection algorithm, his vision for the future of environmental genomics, and the space where simulation meets AI.
A New Parallel Strategy for Tackling Turbulence on Summit
A team at Georgia Tech created a new turbulence algorithm optimized for the Summit supercomputer. It reached a performance of less than 15 seconds of wall-clock time per time step for more than 6 trillion grid points, a new world record that surpasses the prior state of the art in the field for a problem of this size.
Search for Lightweight Alloying Solutions Earns Team a Gordon Bell Finalist Nomination
A team used the Summit supercomputer to simulate a 10,000-atom magnesium dislocation system at 46 petaflops, a feat that earned the team an ACM Gordon Bell Prize finalist nomination and could allow scientists to understand which alloying materials to add to improve magnesium alloys.
Gordon Bell Finalist Team Tackles Transistors with New Programming Paradigm
A team simulated a 10,000-atom 2D transistor slice on the Summit supercomputer and mapped where heat is produced in a single transistor. Using a new data-centric version of the OMEN nanodevice simulator, the team sustained the code at 85.45 petaflops and earned a Gordon Bell Prize finalist nomination.