Today, the U.S. Department of Energy (DOE) announced $8.5 million in funding for basic research in the development of randomized algorithms for understanding and improving the properties and behavior of complex energy systems. Problems involving the design of scientific experiments or energy and communication infrastructures can often be viewed as a discrete, networked system of systems that needs to be optimized. Such discrete optimization problems are difficult to solve efficiently with conventional algorithms, which are not well suited to graphs, networks, and streaming data.
Valhalla, a Python-based performance modeling framework developed at Sandia National Laboratories, uses high-performance computing to build preliminary satellite designs based on mission requirements and then runs those designs through thousands of simulations.
ORNL’s suite of LandScan population distribution models is available online to the global public for the first time under a new open-source Creative Commons license.
Oak Ridge National Laboratory researchers used the nation’s fastest supercomputer to map the molecular vibrations of an important but little-studied uranium compound produced during the nuclear fuel cycle, yielding results that could lead to a cleaner, safer world.
A team of researchers from the Department of Veterans Affairs, Oak Ridge National Laboratory, Harvard’s T.H. Chan School of Public Health, Harvard Medical School and Brigham and Women’s Hospital has developed a novel, machine learning–based technique to explore and identify relationships among medical concepts using electronic health record data across multiple healthcare providers.
ORNL story tips: Predicting water quality, stronger & ‘stretchier’ alloys, RAPID reinforcement and mountainous water towers
A team performed simulations on the Summit supercomputer and found that electrons in cuprates interact with phonons much more strongly than was previously thought, leading to experimentally observed “kinks” in the relationship between an electron’s energy and the momentum it carries.
At the Department of Energy’s Oak Ridge National Laboratory, scientists use artificial intelligence, or AI, to accelerate the discovery and development of materials for energy and information technologies.
The Comet supercomputer will end formal service as an NSF resource and transition to exclusive use by the Center for Western Weather and Water Extremes, which will use its computing capabilities to improve decision-making for reservoir management across California.
ORNL story tips: Volcanic microbes, unbreakable bonds and flood mapping
Connected moments math shortcut shaves time and cost of quantum calculations while maintaining accuracy
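The “connected moments” shortcut refers to the connected moments expansion (CMX), which estimates a ground-state energy from a few low-order Hamiltonian moments rather than a full diagonalization. A minimal classical sketch of the lowest-order CMX(2) estimate, using a made-up 2×2 Hamiltonian and trial state (all numbers below are illustrative, not from the research):

```python
import numpy as np

def cmx2_energy(H, psi):
    """CMX(2) ground-state estimate from the first three moments <H^k>."""
    psi = psi / np.linalg.norm(psi)
    mu = [psi @ np.linalg.matrix_power(H, k) @ psi for k in (1, 2, 3)]
    I1 = mu[0]                             # connected moment 1: <H>
    I2 = mu[1] - I1**2                     # connected moment 2 (variance)
    I3 = mu[2] - 3 * I1 * mu[1] + 2 * I1**3  # connected moment 3
    return I1 - I2**2 / I3                 # CMX(2) energy estimate

# Toy Hamiltonian and a rough, unoptimized trial state (hypothetical data)
H = np.array([[0.0, 0.5],
              [0.5, 2.0]])
psi = np.array([1.0, 0.2])

e_exact = np.linalg.eigvalsh(H)[0]         # exact ground-state energy
e_trial = psi @ H @ psi / (psi @ psi)      # plain expectation value <H>
e_cmx = cmx2_energy(H, psi)                # moment-corrected estimate
```

Even with this crude trial state, the moment-corrected estimate lands much closer to the exact ground-state energy than the raw expectation value, which is the essence of the cost savings: only a handful of moment measurements are needed instead of a deep, expensive quantum circuit.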
A Georgia State University team has used the nation’s fastest supercomputer, Summit at the U.S. Department of Energy’s Oak Ridge National Laboratory, to find the optimal transition path that one E. coli enzyme uses to switch between building and editing DNA to rapidly remove misincorporated pieces of DNA.
PNNL, in partnership with industry, has developed a computational tool called HIPPO, which accelerates the increasingly complex calculations grid operators must make in scheduling energy resources to meet the next day’s forecasted electricity demand.
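The calculation HIPPO accelerates is a form of the day-ahead unit commitment problem: choosing which generators to switch on, and how much each should produce, to cover forecast demand at minimum cost. This is not HIPPO’s actual algorithm (which solves large mixed-integer programs); the brute-force toy below, with made-up generator data, just shows the combinatorial structure that makes the problem explode as the grid grows:

```python
from itertools import product

# Hypothetical generators: name -> (capacity MW, cost $/MWh, startup cost $)
gens = {"coal": (400, 20, 500), "gas": (300, 35, 100), "peaker": (100, 80, 10)}
demand = 450  # forecast demand (MW) for one hour

best = None
for on in product([0, 1], repeat=len(gens)):   # every on/off combination
    units = [g for g, flag in zip(gens, on) if flag]
    if sum(gens[g][0] for g in units) < demand:
        continue                               # committed units must cover demand
    cost = sum(gens[g][2] for g in units)      # pay startup costs
    remaining = demand
    for g in sorted(units, key=lambda g: gens[g][1]):  # cheapest first
        mw = min(gens[g][0], remaining)
        cost += mw * gens[g][1]
        remaining -= mw
    if best is None or cost < best[0]:
        best = (cost, units)
```

With three generators there are only 2^3 combinations to check; a real day-ahead schedule couples hundreds of units across 24 hours with ramping and transmission constraints, which is why specialized solvers like HIPPO matter.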
A new Physics Frontier Center at UC Berkeley, supported by the National Science Foundation, expands the reach and depth of existing capabilities on campus and at neighboring Berkeley Lab in modeling one of the most violent events in the universe: the merger of neutron stars and its explosive aftermath.
A team used the Summit supercomputer to simulate transition metal systems—such as copper bound to molecules of nitrogen, dihydrogen, or water—and correctly predicted the amount of energy required to break apart dozens of molecular systems, paving the way for a greater understanding of these materials.
A team at Stanford University used the OLCF’s Summit supercomputer to compare simulations of a G protein-coupled receptor with different molecules attached to gain an understanding of how to minimize or eliminate side effects in drugs that target these receptors.
A team of scientists led by Abhishek Singharoy at Arizona State University used the Summit supercomputer at the Oak Ridge Leadership Computing Facility to simulate the structure of a possible drug target for the bacterium that causes rabbit fever.
For an experiment that will generate big data at unprecedented rates, physicists led design, development, mass production and delivery of an upgrade of novel particle detectors and state-of-the-art electronics.
Scientists and engineers at Fermilab and Brookhaven are uniting with other organizations in the Open Science Grid to help fight COVID-19 by dedicating considerable computational power to researchers studying ways to combat the disease.
Candace Culhane, a program/project director in Los Alamos National Laboratory’s Directorate for Simulation and Computation, has been selected as the general chair for the 2022 SC Conference (SC22).
British Petroleum researchers invited ORNL data scientists to give the company’s high-performance computing team a tutorial on the laboratory’s ADIOS I/O middleware. ADIOS has helped researchers achieve scientific breakthroughs by providing a simple, flexible way to describe data in their code that may need to be written, read, or processed outside of the running simulation. ORNL researchers Scott Klasky and Norbert Podhorszki demonstrated how it could accelerate the BP team’s science by helping them tackle their large, unique seismic datasets.
In this Q&A, Oak Ridge National Laboratory’s Dan Jacobson talks about his team’s work on a genomic selection algorithm, his vision for the future of environmental genomics, and the space where simulation meets AI.
A team at Georgia Tech created a new turbulence algorithm optimized for the Summit supercomputer. It reached a performance of less than 15 seconds of wall-clock time per time step for more than 6 trillion grid points, a new world record surpassing the prior state of the art in the field for a problem of this size.
A team used the Summit supercomputer to simulate a 10,000-atom magnesium dislocation system at 46 petaflops, a feat that earned the team an ACM Gordon Bell Prize finalist nomination and could allow scientists to understand which alloying materials to add to improve magnesium alloys.
A team simulated a 10,000-atom 2D transistor slice on the Summit supercomputer and mapped where heat is produced in a single transistor. Using a new data-centric version of the OMEN nanodevice simulator, the team sustained the code at 85.45 petaflops and earned a Gordon Bell Prize finalist nomination.