

FAST HYBRID MATRIX MULTIPLICATION ALGORITHMS
UDC 681.322.012

L. D. Jelfimova

Abstract. New hybrid algorithms for matrix multiplication are proposed that have the lowest computational complexity in comparison with well-known matrix multiplication algorithms. Based on the proposed algorithms, efficient algorithms are developed for the basic operation $D = C + \sum_{l=1}^{x} A_l B_l$ of cellular methods of linear algebra, where $A_l$, $B_l$, and $D$ are square matrices of cell size. The computational complexity of the proposed algorithms is estimated.

Keywords: linear algebra, matrix multiplication algorithm, cellular method, basic operation.

A fundamental operation in constructing many algorithms of computational linear algebra (LA) (solution of systems of linear algebraic equations, matrix inversion, determinant calculation, etc.) is matrix multiplication, whose total computational complexity determines the asymptotic complexity of these algorithms [1]. It is the basic operation in many object domains, especially in digital signal processing, where high-speed computations are performed in real time [2]. With the help of fast matrix multiplication algorithms, the basic operation of cellular methods for solving LA problems is realized, of the form $D = C + AB$ or $D = C + \sum_{l=1}^{x} A_l B_l$, where $A$, $B$, $C$, and $D$ are square matrices of cell size [3, 4].
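As an illustration, the cellular basic operation $D = C + \sum_{l=1}^{x} A_l B_l$ can be sketched in Python. This is a minimal reference implementation using the classical cubic multiplication for the cell products; all function names are illustrative, not taken from the paper:

```python
def mat_mul(a, b):
    """Classical O(n^3) multiplication of two n x n matrices (nested lists)."""
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def basic_operation(c, a_list, b_list):
    """Compute D = C + sum over l of A_l * B_l for square cells of equal size."""
    n = len(c)
    d = [row[:] for row in c]          # start from a copy of C
    for a, b in zip(a_list, b_list):   # accumulate each product A_l * B_l
        p = mat_mul(a, b)
        for i in range(n):
            for j in range(n):
                d[i][j] += p[i][j]
    return d
```

In a cellular method, `mat_mul` would be replaced by whichever fast multiplication algorithm is applied at the cell level; the accumulation structure stays the same.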

The development of a family of matrix multiplication algorithms [1, 5–13] and their cellular analogues [14, 15] was driven by their authors' intention to solve the problem of optimizing computational complexity as efficiently as possible. As is well known, the traditional (classical) algorithm [1] multiplies two matrices of size $n \times n$ using $n^3$ multiplication operations and $(n^3 - n^2)$ addition operations; the total computational complexity of the algorithm is $W_{tot} = (2n^3 - n^2)$ multiplication/addition operations. Since multiplying numbers is computationally more expensive than adding them, the need arises to decrease the multiplicative complexity of algorithms.

In 1968, S. Winograd [5] developed a fast regular matrix multiplication algorithm whose multiplicative complexity is $W_m = (0.5n^3 + n^2)$ multiplication operations. However, this decrease in multiplicative complexity by nearly 50% entailed an increase in additive complexity of more than 50%, namely, the latter equals $W_a = (1.5n^3 + 2n^2 - 2n)$ addition operations. Thus, the total computational complexity of the algorithm [5] is $W_{tot} = (2n^3 + 3n^2 - 2n)$ multiplication/addition operations.

In 1969, V. Strassen [6] proposed a fast recursive algorithm whose multiplicative and additive complexities are $W_m = n^{\log_2 7} \approx n^{2.807}$ multiplication operations and $W_a = 6(7^{\log_2 n} - 2^{2\log_2 n})$ addition operations, respectively. For multiplying $(2 \times 2)$ matrices, he used seven multiplication operations and 18 addition operations, in contrast to the traditional algorithm, which requires eight multiplication operations and four addition opera
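Strassen's $(2 \times 2)$ scheme mentioned above can be written out directly. The sketch below (with illustrative naming) forms the seven products $M_1, \ldots, M_7$ and combines them using additions and subtractions only: 10 additions/subtractions to form the operands of the products and 8 more to assemble the result, giving the 18 additions cited in the text. The entries may themselves be (sub)matrices; scalars are used here for brevity:

```python
def strassen_2x2(a11, a12, a21, a22, b11, b12, b21, b22):
    """Multiply 2x2 matrices A and B with 7 multiplications (Strassen's scheme)."""
    m1 = (a11 + a22) * (b11 + b22)
    m2 = (a21 + a22) * b11
    m3 = a11 * (b12 - b22)
    m4 = a22 * (b21 - b11)
    m5 = (a11 + a12) * b22
    m6 = (a21 - a11) * (b11 + b12)
    m7 = (a12 - a22) * (b21 + b22)
    # Combine the seven products into the entries of C = A * B.
    c11 = m1 + m4 - m5 + m7
    c12 = m3 + m5
    c21 = m2 + m4
    c22 = m1 - m2 + m3 + m6
    return c11, c12, c21, c22
```

Applied recursively to $(2 \times 2)$ blocks of half size, this scheme yields the $n^{\log_2 7}$ multiplicative complexity quoted above, since each level replaces 8 block products with 7.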