A hierarchical a posteriori error estimator for the Reduced Basis Method

Stefan Hain · Mario Ohlberger · Mladjan Radic · Karsten Urban

Received: 7 March 2018 / Accepted: 14 February 2019
© Springer Science+Business Media, LLC, part of Springer Nature 2019

Abstract  In this contribution, we are concerned with tight a posteriori error estimation for projection-based model order reduction of inf-sup stable parameterized variational problems. In particular, we consider the Reduced Basis Method in a Petrov-Galerkin framework, where the reduced approximation spaces are constructed by the (weak) greedy algorithm. We propose and analyze a hierarchical a posteriori error estimator which evaluates the difference of two reduced approximations of different accuracy. Based on the a priori error analysis of the (weak) greedy algorithm, the hierarchical error estimator is expected to be sharp, with an efficiency index close to one, provided the Kolmogorov N-width decays fast for the underlying problem and a suitable saturation assumption for the reduced approximation is satisfied. We investigate the tightness of the hierarchical a posteriori error estimator both from a theoretical and a numerical perspective. For the respective approximation of higher accuracy, we study and compare basis enrichment of Lagrange- and Taylor-type reduced bases. Numerical experiments indicate the efficiency of the hierarchical error estimator both for the construction of a reduced basis within a greedy algorithm and for tight online certification of reduced approximations. This is particularly relevant in cases where the inf-sup constant may become small depending on the parameter. In such cases, a standard residual-based error estimator, complemented by the Successive Constraint Method to compute a lower bound of the parameter-dependent inf-sup constant, may become infeasible.

Keywords  Reduced Basis Method · A posteriori error estimator · Hierarchical error estimator

Mathematics Subject Classification (2010)  65N30 · 65N15 · 65M15
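To fix ideas, a minimal sketch of the hierarchical estimator described in the abstract may be written as follows; the symbols \(u_N\), \(u_M\), and the saturation constant \(\delta\) are our generic notation and not necessarily the paper's. With \(u(\mu)\) the exact solution, \(u_N(\mu)\) a reduced approximation of dimension \(N\), and \(u_M(\mu)\) a more accurate reduced approximation with \(M > N\), set
\[
\Delta_N(\mu) := \| u_M(\mu) - u_N(\mu) \|_X .
\]
Under a saturation assumption \(\| u(\mu) - u_M(\mu) \|_X \le \delta\, \| u(\mu) - u_N(\mu) \|_X\) with \(0 \le \delta < 1\), the triangle inequality yields the two-sided bound
\[
(1-\delta)\, \| u(\mu) - u_N(\mu) \|_X \;\le\; \Delta_N(\mu) \;\le\; (1+\delta)\, \| u(\mu) - u_N(\mu) \|_X ,
\]
so the efficiency index \(\Delta_N(\mu) / \| u(\mu) - u_N(\mu) \|_X\) lies in \([1-\delta,\,1+\delta]\) and approaches one as \(\delta\) becomes small.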

Communicated by: Anthony Nouy

Karsten Urban
[email protected]

Extended author information available on the last page of the article.

1 Introduction

Model order reduction has become a field of great significance, both with respect to solving real-world problems and with respect to mathematical research. In this article, we consider the Reduced Basis Method (RBM), a well-known projection-based model order reduction technique for Parameterized Partial Differential Equations (PPDEs), used for instance in multi-query and/or real-time contexts [22, 24, 35]. The key idea of the RBM is to construct a problem-specific reduced order model, e.g., in a computationally expensive offline phase, and then to use this reduced model in an online phase to construct an approximation extremely fast by solving very low-dimensional Petrov-Galerkin problems. A posteriori error estimates play an important role within the RBM, at least for the following reasons: (1) The error estimator is used in a greedy algorithm to construct the reduced basis.
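As a rough illustration of the offline/online splitting just described (again in our generic notation; the paper's precise formulation may differ), the parameterized variational problem reads
\[
\text{find } u(\mu) \in X: \quad b(u(\mu), v; \mu) = f(v; \mu) \quad \forall\, v \in Y,
\]
with a parameter-dependent, inf-sup stable bilinear form \(b(\cdot,\cdot;\mu)\), and its reduced Petrov-Galerkin counterpart on low-dimensional trial and test spaces \(X_N \subset X\), \(Y_N \subset Y\) with \(\dim X_N = \dim Y_N = N\) reads
\[
\text{find } u_N(\mu) \in X_N: \quad b(u_N(\mu), v_N; \mu) = f(v_N; \mu) \quad \forall\, v_N \in Y_N,
\]
which amounts to assembling and solving a small \(N \times N\) linear system for each new parameter \(\mu\) in the online phase.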