The Algebraic Path Problem Revisited
IRISA, Rennes, France, http://www.irisa.fr/cosi/Rajopadhye
University of Yaounde, Cameroon, [email protected]
Abstract. We derive an efficient linear SIMD architecture for the algebraic path problem (APP). For a graph with n nodes, our array has n processors, each with 3n memory cells, and computes the result in 3n² − 2n steps. Our array is ideally suited for VLSI, since the control is simple and the memory can be implemented as FIFOs. I/O is straightforward, since the array is linear. It can be trivially adapted to run in multiple passes, and moreover, this version improves the work efficiency: for any constant α, the running time on n/α processors is no more than (α + 2)n². The work is no more than (1 + 2/α)n³ and can be made as close to n³ as desired by increasing α.

Keywords: transitive closure, shortest path, matrix inversion, Warshall-Floyd & Gauss-Jordan elimination, systolic synthesis, scheduling, space-time mapping, recurrence equations.
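The work bound quoted in the abstract follows directly from the processor and time figures, assuming work is counted as processors × running time (a standard convention, not spelled out here); a one-line check in LaTeX:

```latex
% Work of the multi-pass version = processors x time, using the abstract's figures:
%   n/alpha processors, at most (alpha + 2) n^2 steps.
\[
  \frac{n}{\alpha}\,(\alpha + 2)\,n^{2}
  \;=\; \frac{\alpha + 2}{\alpha}\,n^{3}
  \;=\; \Bigl(1 + \frac{2}{\alpha}\Bigr)n^{3}
  \;\longrightarrow\; n^{3} \quad \text{as } \alpha \to \infty .
\]
```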
1 Introduction
The algebraic path problem (APP) unifies a number of well-known problems into a single algorithm schema. It may be stated as follows. We are given a weighted graph G = ⟨V, E, w⟩ with vertices V = {1, 2, . . . , n}, edges E ⊆ V × V and a weight function w : E → S, where S is a closed semiring ⟨S, ⊕, ⊗, ∗, 0, 1⟩ (closed in the sense that ∗ is a unary “closure” operator, defined as the infinite sum x∗ = x ⊕ (x ⊗ x) ⊕ (x ⊗ x ⊗ x) ⊕ . . .). A path in G is a (possibly infinite) sequence of nodes p = v₁ . . . vₖ and the weight of a path is defined as the product w(p) = w(v₁, v₂) ⊗ w(v₂, v₃) ⊗ . . . ⊗ w(vₖ₋₁, vₖ). Let P(i, j) denote the (possibly infinite) set of all paths from i to j. The APP is the problem of computing, for all pairs i, j such that 0 < i, j ≤ n, the value d(i, j) defined as follows (⊕ is a “summation” operator for S):

    d(i, j) = ⨁_{p ∈ P(i,j)} w(p)
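To make the semiring abstraction concrete, here is a small illustrative sketch (not from the paper): the min-plus instance yields Floyd's shortest paths and the Boolean instance yields Warshall's transitive closure. The `ClosedSemiring` container and the names `shortest_path` / `transitive_closure` are assumptions for the illustration, as is the common convention that the closure also accounts for the empty path.

```python
# Illustrative closed-semiring instances for the APP (sketch, not the paper's notation).
from dataclasses import dataclass
from typing import Any, Callable

@dataclass(frozen=True)
class ClosedSemiring:
    plus: Callable[[Any, Any], Any]   # "+" : combines alternative paths
    times: Callable[[Any, Any], Any]  # "*" : concatenates paths
    closure: Callable[[Any], Any]     # closure x* = 1 + x + x*x + ... (empty path included here)
    zero: Any                         # identity of plus ("no path")
    one: Any                          # identity of times ("empty path")

# Shortest paths (Floyd): plus = min, times = +. With non-negative weights the closure
# of a cycle weight is min(0, x, x+x, ...) = 0: skipping the cycle is never worse.
shortest_path = ClosedSemiring(
    plus=min, times=lambda a, b: a + b, closure=lambda x: 0.0,
    zero=float("inf"), one=0.0)

# Transitive closure (Warshall): plus = or, times = and. The closure of any value is
# True, since the empty path always exists.
transitive_closure = ClosedSemiring(
    plus=lambda a, b: a or b, times=lambda a, b: a and b, closure=lambda x: True,
    zero=False, one=True)
```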
Warshall’s transitive closure (TC) algorithm, Floyd’s shortest path (SP) algorithm and the Gauss-Jordan matrix inversion (MI) algorithm are but instances of a single, generic algorithm for the APP (henceforth called the WFGJ algorithm), the only difference being the semiring involved. Its sequential complexity (and hence the work) is n³ semiring operations. There has been considerable research on implementing the APP (or particular instances thereof) on systolic arrays (see Table 1). Most of the early work was ad hoc …
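Before turning to the prior arrays of Table 1, a minimal sequential sketch of the generic schema may help fix ideas: the classical triple loop performing n³ semiring operations, parameterized by the semiring instances sketched above. The function name `wfgj`, the per-step matrix rebuild, and the example graph are illustrative assumptions, not the paper's formulation.

```python
# Generic sequential APP schema (sketch): n^3 semiring operations, matching the
# sequential complexity quoted above. ClosedSemiring / shortest_path refer to the
# illustrative definitions in the previous sketch.
def wfgj(a, sr):
    """Return the matrix of path sums d(i, j) for the n x n weight matrix `a`."""
    n = len(a)
    for k in range(n):
        star = sr.closure(a[k][k])  # closure of the pivot element
        # Allow node k as an intermediate vertex on every i -> j path; the nested
        # comprehension reads the previous matrix and builds the next one.
        a = [[sr.plus(a[i][j], sr.times(a[i][k], sr.times(star, a[k][j])))
              for j in range(n)]
             for i in range(n)]
    return a

# Example: all-pairs shortest paths on a 3-node graph (min-plus instance).
INF = float("inf")
w = [[INF, 1.0, 4.0],
     [INF, INF, 2.0],
     [1.0, INF, INF]]
d = wfgj(w, shortest_path)
assert d[0][2] == 3.0  # node 1 to node 3 via node 2 (1 + 2) beats the direct edge (4)
```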
Authors                         Application   Area    Time
Guibas et al. [5]               TC            n²      6n
Nash-Hansen [9]                 MI            3n²/2   5n
Robert-Tchuenté [13]            MI            n²      5n
Kung-Lo [6]                     TC            n²      7n
Rote [15]                       APP           n²      7n
Robert-Trystram [14]            APP           n²      5n
Kung-Lo-Lewis [7]               TC & SP       n²      5n
Benaini et al. [1]              APP           n²/2    5n
Benaini-Robert [2]              APP           n²/3    5n
Scheiman-Cappello [16]          APP           n²/3    5n
Clauss-Mongenet-Perrin [4]      APP           n²/3    5n
Takaoka-Umehara [17]            SP            n²      4n
Rajopadhye [11]                 APP           n²      4n
Risset-Robert [12]              APP           n²      4n
Chang-Tsay [3]                  APP           n²      4n
Djamegni et al. [18]            APP           n²/3    4n
Linear arrays (systolic and otherwise)
K…