Asynchronous Richardson iterations: theory and practice



Edmond Chow · Andreas Frommer · Daniel B. Szyld

Received: 4 September 2020 / Accepted: 24 September 2020
© Springer Science+Business Media, LLC, part of Springer Nature 2020

Abstract

We consider asynchronous versions of the first- and second-order Richardson methods for solving linear systems of equations. These methods depend on parameters whose values are chosen a priori. We explore the parameter values that can be proven to give convergence of the asynchronous methods. This is the first such analysis for asynchronous second-order methods. We find that for the first-order method, the optimal parameter value for the synchronous case also gives an asynchronously convergent method. For the second-order method, the parameter ranges for which we can prove asynchronous convergence do not contain the optimal parameter values for the synchronous iteration. In practice, however, the asynchronous second-order iterations may still converge using the optimal parameter values, or parameter values close to the optimal ones, despite this result. We explore this behavior with a multithreaded parallel implementation of the asynchronous methods.

Keywords: Asynchronous iterations · Parallel computing · Second-order Richardson

Edmond Chow ([email protected]), Georgia Institute of Technology, Atlanta, GA, USA
Andreas Frommer ([email protected]), Bergische Universität Wuppertal, Wuppertal, Germany
Daniel B. Szyld ([email protected]), Temple University, Philadelphia, PA, USA

1 Introduction

A parallel asynchronous iterative method for solving a system of equations is a fixed-point iteration in which processors do not synchronize at the end of each iteration.

Numerical Algorithms

Instead, processors proceed iterating with the latest data available from other processors. Running an iterative method in such an asynchronous fashion may reduce solution time when the effective load is imbalanced between processors, because fast processors do not need to wait for slow ones. Solution time may also be reduced when interprocessor communication costs are high, because computation continues while communication takes place. However, the convergence properties of a synchronous iterative method change when the method is run asynchronously.

Consider the system of equations x = G(x) in fixed-point form, where G : R^n → R^n, which can be written componentwise as x_i = g_i(x), i = 1, ..., n. An asynchronous iterative method for solving this system of equations can be defined mathematically as the sequence of updates [2, 4, 6]:

$$
x_i^k =
\begin{cases}
x_i^{k-1} & \text{if } i \notin J_k, \\
g_i\big(x_1^{s_1(k)}, x_2^{s_2(k)}, \ldots, x_n^{s_n(k)}\big) & \text{if } i \in J_k,
\end{cases}
$$

where x_i^k denotes component i of the iterate at time instant k, J_k is the set of indices updated at instant k, and s_j(k) ≤ k − 1 is the last instant at which component j was updated before being read when evaluating g_i at instant k. We point out that (a) not all updates are performed at the same time instant, and (b) updates may use stale