High-order discontinuous Galerkin hydrodynamics for supersonic astrophysical turbulence
Cernetic, Miha
2023
English
Universitätsbibliothek der Ludwig-Maximilians-Universität München
Cernetic, Miha (2023): High-order discontinuous Galerkin hydrodynamics for supersonic astrophysical turbulence. Dissertation, LMU München: Fakultät für Physik
License: Creative Commons: Attribution 4.0 (CC-BY)
PDF: cernetic_miha.pdf (28MB)
Abstract

Throughout the universe, turbulence is a primary means of transferring large-scale kinetic energy down to viscous scales, where it can heat gas, mix chemical abundances, and regulate star formation. Turbulence is ubiquitous across astrophysical systems: it regulates star and planet formation, drives flame propagation in Type Ia supernovae, controls the propagation of cosmic rays, and helps launch outflows and winds in galaxies. However, turbulence is difficult to simulate, as one needs to resolve a wide dynamic range, from the stirring at large scales down to the viscous scales, to properly account for its influence on the astrophysical system in question. Simulating such systems while sufficiently resolving turbulent motions requires new algorithms and bigger computers. Since the 1970s, the number of transistors in an integrated circuit has doubled about every two years -- commonly known as Moore's law -- and this has driven a similar expansion of computational performance. Since the mid-2010s, however, this progress has slowed, and the computer industry has turned to a variety of approaches to keep increasing the number of floating-point operations per second its chips can deliver. One important current trend is to re-purpose graphics processing units (GPUs) for high-performance computing; indeed, this approach powers the first exascale supercomputers that have emerged recently. The simpler chip design of GPUs, however, poses fundamental challenges to (astrophysical) codes, as one cannot simply recompile them for these systems. Instead, a fundamental rewrite is necessary, including changes to data structures, memory accesses, code logic, and communication strategy.
Motivated both by these hardware trends and by the need for more efficient numerical algorithms, this thesis investigates the use of Discontinuous Galerkin (DG) methods for simulations of astrophysical turbulence on GPUs. DG promises high-order convergence with comparatively small communication needs, potentially giving it superior computational performance and accuracy, particularly when realized on modern GPU-accelerated hardware. To investigate this prospect, I have implemented a high-order DG method in the GPU-native Compute Unified Device Architecture (CUDA), combined with parallelization across multiple distributed-memory nodes using the Message Passing Interface (MPI). First, I realized the method for subsonic flows, where it exhibits a very promising exponential convergence with increasing spatial order. This allows subsonic flows to be simulated at very high effective resolution at low computational cost. I furthermore extended the classic DG method with a reconstruction-based Navier-Stokes solver, something that had proven difficult in previous work, and proposed a new sub-cell shock-resolving method based on an appropriate injection of artificial viscosity. Building on these steps, I then addressed the case of supersonic turbulence. The network of strong interacting shocks appearing in such simulations required further algorithm and code improvements. I introduced a simplified artificial viscosity implementation based on the classic von Neumann-Richtmyer formulation, and solved the stability problem of DG for supersonic flows by proposing a novel projection approach for the primitive variables so that they can be stably extrapolated to cell surfaces. The developments of this thesis thus make high-order, GPU-accelerated Discontinuous Galerkin methods applicable for the first time to a wide and essentially unrestricted range of research applications in astrophysics.
This includes, for example, studies of subsonic turbulence in galaxy clusters, which can be represented much more precisely and more cost-effectively with high-order DG methods than with traditional finite-volume techniques. Importantly, it now also includes systems that contain strong shocks, or are even governed by networks of shocks, such as regions filled with supersonic turbulence -- a regime that could previously not be treated effectively with high-order DG hydrodynamics.
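For context, the classical von Neumann-Richtmyer scheme referenced in the abstract adds an artificial viscous pressure only in regions of compression; in its standard one-dimensional form (the simplified DG variant developed in the thesis may differ in detail) it reads

\[
q =
\begin{cases}
C_q\, \rho\, (\Delta x)^2 \left(\dfrac{\partial v}{\partial x}\right)^2, & \dfrac{\partial v}{\partial x} < 0,\\[4pt]
0, & \text{otherwise},
\end{cases}
\]

where \(\rho\) is the density, \(\Delta x\) the cell width, \(v\) the velocity, and \(C_q\) a dimensionless constant of order unity. The quadratic dependence on the velocity gradient spreads shocks over a few cells while leaving smooth regions of the flow essentially untouched.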