A novel projected gradient-like method for optimization problems with simple constraints



Wei Wei¹ · Hua Dai¹ · Weitai Liang²

¹ Department of Mathematics, Nanjing University of Aeronautics and Astronautics, Nanjing 210016, China
² Nanjing Research Institute of Electronics Engineering, Nanjing 210007, China

Received: 20 December 2018 / Revised: 29 April 2020 / Accepted: 26 May 2020
© SBMAC - Sociedade Brasileira de Matemática Aplicada e Computacional 2020

Abstract
We propose a novel projected gradient-like method to solve a minimization problem with simple constraints. The new search direction is consistent with the first-order optimality condition for the simple constrained optimization problem, and makes the search step-size easier to determine. Convergence properties of the method are analyzed. Numerical experiments are presented to illustrate the performance and application of the proposed method.

Keywords Constrained optimization · Nonnegative constraints · Bound constraints · Projected gradient method

Mathematics Subject Classification 65K05 · 90C30 · 90C52

1 Introduction

We are interested in the numerical solution of the large-scale minimization problem

min_{x∈Ω} f(x),  (1)

where f is a continuously differentiable function mapping Ω into R, and Ω is a nonempty closed convex subset of R^n. In this paper, we consider the case where the constraint set Ω is defined by the nonnegative constraints

Ω = {x ∈ R^n | x ≥ 0} ≜ R^n_+  (2)

and the bound constraints


Ω = {x ∈ R^n | l ≤ x ≤ u} ≜ B  (3)

for some given vectors l, u ∈ R^n with l < u, respectively. The vector inequalities ≥, ≤ and < are all meant componentwise. The constraints (2) and (3) are also referred to as simple constraints.

The optimization problem (1) arises in a wide range of applications and has received much attention in recent decades, and many methods have been developed for its solution (Andreani et al. 2010; Bertsekas 1976, 1982, 1999; Birgin and Martínez 2002; Byrd et al. 1995; Calamai and Moré 1987; Cheng et al. 2014; Coleman and Li 1996; Conn et al. 1988, 1991; Cristofari et al. 2017; De Santis et al. 2012; Facchinei et al. 1998, 2002; Goldstein 1964; Hager and Zhang 2006; Kanzow and Klug 2006; Levitin and Polyak 1966; Lin and Moré 1999; McCormick and Tapia 1972; Pytlak 1998; Schwartz and Polak 1997). Among these methods, the projected gradient methods are classical approaches for solving problem (1), and they are also attractive for large-scale problems.

Our main interest is in the projected gradient methods. Methods of this type were originally proposed by Goldstein (1964) and Levitin and Polyak (1966), and take the form

x^(k+1) = PΩ[x^(k) − α_k ∇f(x^(k))],  (4)
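As a concrete illustration of the iteration (4), the following minimal sketch applies a generic projection operator with a fixed step-size. The function names (projected_gradient, grad, project), the fixed-step choice, and the stopping test are illustrative assumptions, not the method proposed in this paper, which uses a different search direction and step-size rule.

```python
import numpy as np

def projected_gradient(grad, project, x0, alpha=1e-2, tol=1e-8, max_iter=10000):
    """Iteration (4): x^(k+1) = P_Omega[x^(k) - alpha_k * grad f(x^(k))].

    A fixed step-size alpha is used purely for illustration; in practice
    alpha_k would be chosen by a line search or a Barzilai-Borwein rule.
    """
    x = project(np.asarray(x0, dtype=float))  # start from a feasible point
    for _ in range(max_iter):
        x_new = project(x - alpha * grad(x))  # gradient step, then project
        if np.linalg.norm(x_new - x) <= tol:  # stop when iterates stagnate
            return x_new
        x = x_new
    return x
```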

where PΩ[v] denotes the unique projection of a vector v ∈ R^n onto Ω, ∇f(x^(k)) denotes the gradient of f at the kth approximation point x^(k), and α_k ≥ 0 denotes the search step-size. The projection operation PΩ in (4) can be computed explicitly when Ω is given by the simple constraints (2) or (3): componentwise, (P_{R^n_+}[v])_i = max(v_i, 0) and (P_B[v])_i = min(max(v_i, l_i), u_i).
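For the simple constraints, both projections therefore reduce to componentwise clipping, as in this minimal sketch (project_nonneg and project_box are illustrative names; the usage example reuses the hypothetical projected_gradient sketch above):

```python
import numpy as np

def project_nonneg(v):
    """Projection onto R^n_+ in (2): clamp negative components to zero."""
    return np.maximum(v, 0.0)

def project_box(v, l, u):
    """Projection onto B = {x | l <= x <= u} in (3): clip componentwise."""
    return np.clip(v, l, u)

# Example: minimize f(x) = 0.5*||x - c||^2 over the box [0, 1]^2,
# whose gradient is x - c; the solution is c clipped to the box.
c = np.array([1.5, -0.5])
x_star = projected_gradient(grad=lambda x: x - c,
                            project=lambda v: project_box(v, 0.0, 1.0),
                            x0=np.zeros(2), alpha=0.5)
# x_star is approximately [1.0, 0.0]
```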