Neural Network-Based Limiter with Transfer Learning
Rémi Abgrall¹ · Maria Han Veiga²
Received: 19 December 2019 / Revised: 9 June 2020 / Accepted: 17 June 2020
© Shanghai University 2020
Abstract
Recent works have shown that neural networks are promising parameter-free limiters for a variety of numerical schemes (Morgan et al. in A machine learning approach for detecting shocks with high-order hydrodynamic methods. https://doi.org/10.2514/6.2020-2024; Ray et al. in J Comput Phys 367: 166–191. https://doi.org/10.1016/j.jcp.2018.04.029, 2018; Veiga et al. in European Conference on Computational Mechanics and VII European Conference on Computational Fluid Dynamics, vol. 1, pp. 2525–2550. ECCM. https://doi.org/10.5167/uzh-168538, 2018). Following this trend, we train a neural network to serve as a shock-indicator function using simulation data from a Runge-Kutta discontinuous Galerkin (RKDG) method and a modal high-order limiter (Krivodonova in J Comput Phys 226: 879–896. https://doi.org/10.1016/j.jcp.2007.05.011, 2007). With this methodology, we obtain one- and two-dimensional black-box shock-indicators which are then coupled to a standard limiter. Furthermore, we describe a strategy to transfer the shock-indicator to a residual distribution (RD) scheme without the need for a full training cycle and a large dataset, by finding a mapping between the solution feature spaces from an RD scheme to an RKDG scheme, both in one- and two-dimensional problems, and on Cartesian and unstructured meshes. We report on the quality of the numerical solutions when using the neural network shock-indicator coupled to a limiter, comparing its performance to traditional limiters, for both RKDG and RD schemes.

Keywords Limiters · Neural networks · Transfer learning · Domain adaptation

Mathematics Subject Classification 65M99 · 65Y15 · 65Y20
* Maria Han Veiga [email protected]
Rémi Abgrall [email protected]
1 University of Zurich, Zurich, Switzerland
2 University of Michigan, Ann Arbor, USA
Communications on Applied Mathematics and Computation
1 Introduction

When dealing with nonlinear conservation laws, it is well known that discontinuous solutions can emerge, even for smooth initial data [14]. The numerical approximation of a discontinuous solution develops non-physical oscillations around the discontinuity, which in turn negatively impact the accuracy of the numerical scheme. Many stabilisation methods exist to control these oscillations, for example, the addition of a viscous term (the right-hand side of (1)) or the use of limiters:
𝜕u/𝜕t + ∇ ⋅ f(u) = ∇ ⋅ (𝜈(u)∇u).   (1)
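To make the stabilising role of the viscous term in (1) concrete, the following is a minimal illustrative sketch, not taken from the paper: a first-order finite-volume solver for the 1D Burgers' equation with an explicit diffusion term 𝜈 u_xx added on the right-hand side. The scheme, grid size, and the constant viscosity 𝜈 are all assumptions chosen for demonstration only.

```python
# Illustrative sketch (not the paper's method): 1D Burgers' equation
# u_t + (u^2/2)_x = nu * u_xx, i.e., equation (1) with f(u) = u^2/2
# and a constant viscosity nu, discretised with a local
# Lax-Friedrichs flux plus an explicit central diffusion term.
import numpy as np

def step(u, dx, dt, nu):
    f = 0.5 * u**2
    a = np.maximum(np.abs(u[:-1]), np.abs(u[1:]))  # local wave speed
    # local Lax-Friedrichs numerical flux at cell interfaces
    flux = 0.5 * (f[:-1] + f[1:]) - 0.5 * a * (u[1:] - u[:-1])
    un = u.copy()
    un[1:-1] -= dt / dx * (flux[1:] - flux[:-1])
    # explicit viscous term nu * u_xx (right-hand side of (1))
    un[1:-1] += dt * nu * (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2
    return un

def solve(nu, n=200, t_end=0.5):
    x = np.linspace(0.0, 1.0, n)
    dx = x[1] - x[0]
    u = np.where(x < 0.5, 1.0, 0.0)  # Riemann data: right-moving shock
    t = 0.0
    while t < t_end:
        # time step limited by both the advective and diffusive CFL
        dt = 0.4 * min(dx / max(np.abs(u).max(), 1e-12),
                       0.5 * dx**2 / max(nu, 1e-12))
        u = step(u, dx, min(dt, t_end - t), nu)
        t += dt
    return u

u = solve(nu=1e-3)
print(u.min(), u.max())  # solution stays (approximately) within [0, 1]
```

The point of the sketch is only that the diffusion term damps the oscillations a high-order discretisation would otherwise produce near the shock; the paper's approach instead detects troubled cells with a neural network and applies a limiter there.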
Neural networks regained popularity in the past decade due to the computational tractability of the back-propagation algorithm, used to learn the weights and biases of a deep neural network. Deep neural networks have been shown to generate robust classification models in many application areas [23, 39] and, theoretically, to act as universal classifiers