A universe evolves over billions of years, but researchers have developed a way to create a complex simulated universe in less than a day. The technique, published in this week's Proceedings of the National Academy of Sciences, brings together machine learning, high-performance computing and astrophysics, and will help to usher in a new era of high-resolution cosmology simulations.
Cosmological simulations are an essential part of teasing out the many mysteries of the universe, including those of dark matter and dark energy. But until now, researchers faced the common conundrum of not being able to have it all: simulations could focus on a small area at high resolution, or they could encompass a large volume of the universe at low resolution.
Carnegie Mellon University Physics Professors Tiziana Di Matteo and Rupert Croft, Flatiron Institute Research Fellow Yin Li, Carnegie Mellon Ph.D. candidate Yueying Ni, University of California Riverside Professor of Physics and Astronomy Simeon Bird and University of California Berkeley’s Yu Feng surmounted this problem by teaching a machine learning algorithm based on neural networks to upgrade a simulation from low resolution to super resolution.
“Cosmological simulations need to cover a large volume for cosmological studies, while also requiring high resolution to resolve the small-scale galaxy formation physics, which would incur daunting computational challenges. Our technique can be used as a powerful and promising tool to match those two requirements simultaneously by modeling the small-scale galaxy formation physics in large cosmological volumes,” said Ni, who performed the training of the model, built the pipeline for testing and validation, analyzed the data and made the visualization from the data.
The trained code can take full-scale, low-resolution models and generate super-resolution simulations that contain up to 512 times as many particles. For a region in the universe roughly 500 million light-years across containing 134 million particles, existing methods would require 560 hours to churn out a high-resolution simulation using a single processing core. With the new approach, the researchers need only 36 minutes.
The results were even more dramatic when more particles were added to the simulation. For a universe 1,000 times as large with 134 billion particles, the researchers’ new method took 16 hours on a single graphics processing unit. Using current methods, a simulation of this size and resolution would take a dedicated supercomputer months to complete.
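To put those figures in perspective, 512 times as many particles corresponds to refining the simulation by a factor of eight along each spatial dimension (8 x 8 x 8 = 512), and 560 hours versus 36 minutes works out to a speedup of roughly 900-fold. A quick back-of-the-envelope check, written here as a short Python snippet that uses only the numbers quoted above:

    # Back-of-the-envelope arithmetic on the figures quoted above; the numbers
    # come from the release, the calculation is just a sanity check.
    particle_factor = 512
    per_side = particle_factor ** (1 / 3)   # cube root: 8x finer sampling per dimension
    speedup = (560 * 60) / 36               # 560 hours on one core vs. 36 minutes
    print(f"{per_side:.0f}x per dimension, roughly {speedup:.0f}x faster")  # 8x, ~933x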
Reducing the time it takes to run cosmological simulations “holds the potential of providing major advances in numerical cosmology and astrophysics,” said Di Matteo. “Cosmological simulations follow the history and fate of the universe, all the way to the formation of all galaxies and their black holes.”
Scientists use cosmological simulations to predict how the universe would look in various scenarios, such as if the dark energy pulling the universe apart varied over time. Telescope observations then confirm whether the simulations’ predictions match reality.
“With our previous simulations, we showed that we could simulate the universe to discover new and interesting physics, but only at small or low-res scales,” said Croft. “By incorporating machine learning, the technology is able to catch up with our ideas.”
Di Matteo, Croft and Ni are part of Carnegie Mellon's National Science Foundation (NSF) Planning Institute for Artificial Intelligence in Physics, which supported this work, and are members of Carnegie Mellon's McWilliams Center for Cosmology.
“The universe is the biggest data set there is — artificial intelligence is the key to understanding the universe and revealing new physics,” said Scott Dodelson, professor and head of the department of physics at Carnegie Mellon University and director of the NSF Planning Institute. “This research illustrates how the NSF Planning Institute for Artificial Intelligence will advance physics through artificial intelligence, machine learning, statistics and data science.”
“It’s clear that AI is having a big effect on many areas of science, including physics and astronomy,” said James Shank, a program director in NSF’s Division of Physics. “Our AI Planning Institute program is working to push AI to accelerate discovery. This new result is a good example of how AI is transforming cosmology.”
To build their new method, Ni and Li combined machine learning with high-performance computing to create a code that uses neural networks to predict how gravity moves dark matter around over time. The networks take training data, run calculations and compare the results to the expected outcome. With further training, the networks adapt and become more accurate.
The specific approach used by the researchers, called a generative adversarial network, pits two neural networks against each other. One network takes low-resolution simulations of the universe and uses them to generate high-resolution models. The other network tries to tell those simulations apart from ones made by conventional methods. Over time, both neural networks get better and better until, ultimately, the simulation generator wins out and creates fast simulations that look just like the slow conventional ones.
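For readers curious what that adversarial pairing looks like in practice, below is a minimal sketch in PyTorch. It stands in toy 3D density cubes for the real N-body particle data, and the layer sizes, the eightfold-per-side upsampling and the loss function are illustrative assumptions rather than the authors' published architecture.

    import torch
    import torch.nn as nn

    class Generator(nn.Module):
        """Maps a low-resolution 3D box to a higher-resolution one (8x per side)."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv3d(1, 32, 3, padding=1), nn.ReLU(),
                nn.ConvTranspose3d(32, 16, 4, stride=2, padding=1), nn.ReLU(),  # 2x per side
                nn.ConvTranspose3d(16, 8, 4, stride=2, padding=1), nn.ReLU(),   # 4x per side
                nn.ConvTranspose3d(8, 4, 4, stride=2, padding=1), nn.ReLU(),    # 8x per side
                nn.Conv3d(4, 1, 3, padding=1),
            )
        def forward(self, x):
            return self.net(x)

    class Discriminator(nn.Module):
        """Scores a high-resolution box as 'from a real simulation' vs. 'generated'."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv3d(1, 8, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
                nn.Conv3d(8, 16, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
                nn.Conv3d(16, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
                nn.AdaptiveAvgPool3d(1), nn.Flatten(), nn.Linear(32, 1),
            )
        def forward(self, x):
            return self.net(x)

    G, D = Generator(), Discriminator()
    opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
    opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
    bce = nn.BCEWithLogitsLoss()

    # Placeholder data: paired low-resolution (8^3) and high-resolution (64^3) boxes.
    # A real pipeline would load matched pairs of N-body simulation snapshots here.
    low_res = torch.rand(2, 1, 8, 8, 8)
    high_res = torch.rand(2, 1, 64, 64, 64)

    for step in range(3):  # a real run loops over many simulation pairs
        # Discriminator step: real boxes should score as real, generated ones as fake.
        fake = G(low_res).detach()
        loss_d = bce(D(high_res), torch.ones(2, 1)) + bce(D(fake), torch.zeros(2, 1))
        opt_d.zero_grad(); loss_d.backward(); opt_d.step()

        # Generator step: try to make generated boxes fool the discriminator.
        loss_g = bce(D(G(low_res)), torch.ones(2, 1))
        opt_g.zero_grad(); loss_g.backward(); opt_g.step()

As training proceeds, the generator learns to add small-scale structure that the discriminator can no longer reliably distinguish from a fully resolved simulation, which is the behavior Li describes below.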
“We couldn’t get it to work for two years,” Li said, “and suddenly it started working. We got beautiful results that matched what we expected. We even did some blind tests ourselves, and most of us couldn’t tell which one was ‘real’ and which one was ‘fake.’”
Despite being trained only on small areas of space, the neural networks accurately replicated the large-scale structures that only appear in enormous simulations.
The simulations didn’t capture everything, though. Because they focused on dark matter and gravity, smaller-scale phenomena — such as star formation, supernovae and the effects of black holes — were left out. The researchers plan to extend their methods to include the forces responsible for such phenomena, and to run their neural networks ‘on the fly’ alongside conventional simulations to improve accuracy.
###
The research was powered by the Frontera supercomputer at the Texas Advanced Computing Center (TACC), the fastest academic supercomputer in the world. The team is one of the largest users of this massive computing resource, which is funded by the NSF Office of Advanced Cyberinfrastructure.
This research was funded by the NSF, the NSF AI Institute: Physics of the Future and NASA.
Source: https://www.eurekalert.org/pub_releases/2021-05/cmu-mla050421.php