Globally maximizing the sum of squares of quadratic forms over the unit sphere
Xiaoli Cen · Yong Xia
LMIB of the Ministry of Education, School of Mathematical Sciences, Beihang University, Beijing 100191, People's Republic of China
Received: 28 April 2019 / Accepted: 29 October 2019
© Springer-Verlag GmbH Germany, part of Springer Nature 2019
Abstract  We first lift the problem of maximizing the sum of squares of quadratic forms over the unit sphere to an equivalent nonlinear optimization problem, which provides a new standard quadratic programming relaxation. We then employ a simplicial branch-and-bound algorithm to globally solve the lifted problem and show that, under certain conditions, the time complexity is linear in the number of nonzero entries of the input matrices. Numerical results demonstrate the efficiency of the new algorithm.

Keywords  Polynomial optimization · Sum of squares · Quadratic programming · Branch-and-bound
1 Introduction

We consider the sphere-constrained homogeneous polynomial optimization problem in which the objective is to maximize the sum of squares of quadratic forms:
$$
(\mathrm{P})\qquad \max_{\|u\|=1}\ \sum_{k=1}^{m} \left(u^{T} A_k u\right)^{2}, \tag{1}
$$
where $\|\cdot\|$ is the Euclidean norm, $A_1, \ldots, A_m$ are $n \times n$ real symmetric matrices, and $m$ is a positive integer. Sphere-constrained homogeneous polynomial optimization is also known as the Z-eigenvalue maximization problem [7] or the best rank-one approximation of a supersymmetric tensor [13,19]. It has wide applications in signal processing, combinatorics, and higher-order statistics; see [15] and the references therein. A direct application of problem (P) in solving the Motzkin–Straus problem is due to [18].
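For concreteness, the following is a minimal NumPy sketch (not from the paper) that evaluates the objective of (P) at a feasible point; the random symmetric matrices $A_k$ and the dimensions $n$ and $m$ are placeholder assumptions.

```python
import numpy as np

def objective(u, As):
    """Evaluate sum_k (u^T A_k u)^2 for a given vector u and matrices A_k."""
    return sum(float(u @ A @ u) ** 2 for A in As)

rng = np.random.default_rng(0)
n, m = 5, 3
As = [(B + B.T) / 2 for B in rng.standard_normal((m, n, n))]  # random symmetric A_k

u = rng.standard_normal(n)
u /= np.linalg.norm(u)          # project onto the unit sphere, so u is feasible
print(objective(u, As))         # objective value at a feasible (not optimal) point
```

Globally maximizing this value over all unit vectors $u$ is exactly problem (P).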
Fig. 1  The function $f(u_1, u_2) = (2u_1^2 - 5u_2^2 - 6u_1u_2)^2 + (u_1^2 + 2u_2^2 + 2u_1u_2)^2$ (solid line) over its feasible region $u_1^2 + u_2^2 = 1$ (dotted line)
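As a side note (our own illustration, not part of the paper), the Fig. 1 instance can be written in the form of (P) with $m = 2$ and $n = 2$; the matrices below are read off from the two quadratic forms in the caption. The sketch estimates the maximum of $f$ along the unit circle on a fine grid.

```python
import numpy as np

# Symmetric matrices reproducing the two quadratic forms in Fig. 1:
#   u^T A1 u = 2u1^2 - 5u2^2 - 6u1u2,   u^T A2 u = u1^2 + 2u2^2 + 2u1u2.
A1 = np.array([[2.0, -3.0], [-3.0, -5.0]])
A2 = np.array([[1.0,  1.0], [ 1.0,  2.0]])

theta = np.linspace(0.0, 2.0 * np.pi, 100001)
U = np.stack([np.cos(theta), np.sin(theta)])     # points on the unit circle
q1 = np.einsum('it,ij,jt->t', U, A1, U)          # u^T A1 u along the circle
q2 = np.einsum('it,ij,jt->t', U, A2, U)          # u^T A2 u along the circle
f = q1 ** 2 + q2 ** 2
print(f.max(), theta[f.argmax()])                # grid estimate of the maximum
```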
Problem (P) also appears in improving Polyak's local convexity result for quadratic transformations; see [27, p. 347].

When $m = 1$, problem (P) is trivial to solve, as the optimal value equals $\max\{\lambda_{\min}(A_1)^2, \lambda_{\max}(A_1)^2\}$, where $\lambda_{\min}(\cdot)$ and $\lambda_{\max}(\cdot)$ denote the minimal and maximal eigenvalues of $(\cdot)$, respectively. The other easy case is when $A_1, \ldots, A_m$ are simultaneously diagonalizable by a single orthogonal matrix, that is, there is an orthogonal matrix $V = [V_1 \cdots V_n]$ such that $V^T A_i V$ $(i = 1, \ldots, m)$ are all diagonal. Introducing $t_j = (V_j^T u)^2$ $(j = 1, \ldots, n)$ in problem (P) yields a concave quadratic programming problem in $t$ over the standard simplex, with an optimal solution attained at one of the $n$ vertices; both easy cases are sketched in code below. In general, however, problem (P) is NP-hard [18]. The non-convexity can be illustrated by the two-dimensional instance in Fig. 1.

Designing approximation methods is an active research topic for sphere-constrained homogeneous polynomial optimization [15]. Different approaches based on the multilinear relaxation, convex polynomial relaxation …
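The following hedged sketch (assuming NumPy; the function names and the availability of the diagonalizing matrix $V$ are our assumptions) implements the two easy cases described above: the eigenvalue formula for $m = 1$, and vertex enumeration when $A_1, \ldots, A_m$ are simultaneously diagonalizable.

```python
import numpy as np

def solve_m_equals_1(A):
    """m = 1: the optimal value is max{lambda_min(A)^2, lambda_max(A)^2}."""
    w = np.linalg.eigvalsh(A)                 # eigenvalues in ascending order
    return max(w[0] ** 2, w[-1] ** 2)

def solve_simultaneously_diagonalizable(As, V):
    """V: orthogonal matrix whose columns diagonalize every A_k (assumed given).

    Enumerates the n simplex vertices t = e_j, i.e., the candidates u = V[:, j].
    """
    best_val, best_u = -np.inf, None
    for j in range(V.shape[1]):
        u = V[:, j]
        val = sum(float(u @ A @ u) ** 2 for A in As)
        if val > best_val:
            best_val, best_u = val, u
    return best_val, best_u
```

In the second routine, the vertex $t = e_j$ of the simplex corresponds to $u = \pm V_j$, and the objective depends on $u$ only through $t$, so checking the $n$ columns of $V$ suffices.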