For centuries, scientists have marveled at telescopic imagery, theorized about much of what they see and drawn conclusions from their observations.
More recently, astronomers and astrophysicists have been using the computing performance of GPUs and AI to glean more from that imagery than ever.
A research team at the University of California, Santa Cruz, and Princeton University has been pushing these limits. Led by Brant Robertson of UC Santa Cruz and NASA Hubble Fellow Evan Schneider, the team has been optimizing its use of NVIDIA GPUs and deep learning tools to accommodate larger calculations.
The goal: run larger, more accurate hydrodynamic simulations, and thereby gain a better understanding of how galaxies form.
The team started by simply moving its efforts from CPUs to GPUs. For the core task of measuring matter passing in and out of the cell faces of a 3D grid mesh, the switch was akin to suddenly being able to solve many Rubik’s Cubes simultaneously.
With CUDA in the mix, the team could transfer an array of grids onto the GPU to do the necessary calculations, resulting in more detailed simulations.
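The cell-face bookkeeping described above is the heart of any finite-volume hydrodynamics code: each cell gains whatever flows in through one face and loses whatever flows out through another. Here is a minimal, purely illustrative sketch of that idea in one dimension (simple upwind advection with periodic boundaries); the function names and setup are hypothetical and not CHOLLA's actual API.

```python
def advect_step(density, velocity, dx, dt):
    """One upwind finite-volume step for 1D advection (assumes velocity > 0).

    Each cell is updated by the flux through its left face minus the
    flux through its right face -- matter is moved, never created.
    """
    n = len(density)
    # Flux through the left face of cell i comes from its upwind neighbor.
    flux = [velocity * density[(i - 1) % n] for i in range(n)]
    # Inflow through the left face, outflow through the right face.
    return [density[i] + dt / dx * (flux[i] - flux[(i + 1) % n])
            for i in range(n)]

# A single blob of matter circulating around a tiny periodic grid.
density = [1.0, 0.0, 0.0, 0.0]
for _ in range(4):
    density = advect_step(density, velocity=1.0, dx=1.0, dt=1.0)
```

Because every update is a per-cell sum of face fluxes, total mass is conserved exactly, and each cell depends only on its immediate neighbors, which is precisely why this kind of calculation maps so naturally onto thousands of GPU threads working in parallel.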
Once it had squeezed the most out of that setup, the team’s ambitions shifted to a more powerful cluster of NVIDIA GPUs, namely the Titan supercomputer at the U.S. Department of Energy’s Oak Ridge National Laboratory. But to perform higher-resolution simulations, the team needed some powerful code to harness Titan’s 18,000-plus Tesla GPUs.
Schneider, Robertson’s former graduate student and now a postdoctoral fellow at Princeton, was up to the task. She wrote a GPU-accelerated hydrodynamics code called CHOLLA, which stands for Computational Hydrodynamics On paraLLel Architectures.
Robertson believes CHOLLA will help scientists answer previously unanswerable questions. For instance, applying the code to M82, a galaxy revered by astronomers for its prodigious star-formation rate and powerful galactic winds, could provide new levels of understanding of how stars are formed.
“How does that wind get there? What sets the properties in the wind? How does the wind control the mass of the galaxy? These are all questions we’d like to understand, but it’s a very difficult computational problem,” Robertson said. “Evan is the first person to solve this with any fidelity.”
CHOLLA, which was written a few years ago, has enabled Schneider and Robertson to leverage 100 million core hours on Titan. The code is unique in that it performs all calculations on GPUs, enabling the team to prototype sophisticated simulations on its NVIDIA DGX and DGX-1 deep learning systems in the lab and then transfer them to Titan, where they can be scaled up.
“You want to take advantage of the floating-point operation power of the GPUs. You don’t want to spend your time waiting for information to go back and forth between the GPUs if you can avoid it,” said Robertson. “Spending as much time as possible computing on the GPU is where you want to be.”
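Robertson's point is a general GPU-programming principle: host-device copies are slow relative to on-device arithmetic, so a code that keeps its grid resident on the GPU for many timesteps pays the transfer cost only twice, at upload and download. A toy sketch of that accounting (hypothetical names; this is an illustration of the pattern, not CHOLLA code):

```python
def count_transfers(steps, copy_every_step):
    """Count host<->device copies for a simulation of `steps` timesteps.

    A naive code round-trips the grid after every step; a GPU-resident
    code like the one described above uploads once and downloads once.
    """
    transfers = 2  # initial upload + final download
    if copy_every_step:
        # One extra round trip (copy out, copy back) per intermediate step.
        transfers += 2 * (steps - 1)
    return transfers

naive = count_transfers(1000, copy_every_step=True)
resident = count_transfers(1000, copy_every_step=False)
```

For a 1,000-step run the naive pattern performs 2,000 copies to the resident pattern's 2, and each copy of a multi-billion-cell grid is expensive, which is why keeping the calculation entirely on the GPU matters.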
Pushing to the Summit
CHOLLA’s ability to scale across vast numbers of GPUs has enabled the team to perform a test calculation of 550 billion cells that Robertson called “one of the largest hydrosimulations ever done in astrophysics.”
Another student, Ryan Hausen, has paved the way to even more ambitious work by developing a deep learning framework called Morpheus that uses raw telescope data to classify galaxies. That opens the door to potentially processing giant surveys with billions of galaxies on a DGX system.
“That’s something I didn’t think was possible just a few years ago,” said Robertson.
Yet another huge leap may be coming soon, as Robertson is hoping to obtain time on Summit, the world’s most powerful supercomputer — powered by NVIDIA Volta GPUs. He believes CHOLLA will enable the team to do even more with Summit’s expansive GPU memory than it has with Titan.
“The computational power of NVIDIA GPUs enabled us to perform numerical simulations that were not possible before,” said Robertson. “And we plan to use NVIDIA GPUs to push what is possible.”