On the Convergence Properties of a Second-Order Augmented Lagrangian Method for Nonlinear Programming Problems with Inequality Constraints

Liang Chen · Anping Liao

College of Mathematics and Econometrics, Hunan University, Changsha 410082, People's Republic of China
Liang Chen: [email protected] · Anping Liao: [email protected]

Received: 13 April 2015 / Accepted: 3 November 2015
© Springer Science+Business Media New York 2015

Abstract  The objective of this paper is to conduct a theoretical study of the convergence properties of a second-order augmented Lagrangian method for solving nonlinear programming problems with both equality and inequality constraints. Specifically, we utilize a specially designed generalized Newton method to furnish the second-order iteration of the multipliers and show that, when the linear independence constraint qualification and the strong second-order sufficient condition hold, the method employed in this paper is locally convergent with a superlinear rate of convergence, even though the penalty parameter is fixed and/or strict complementarity fails.

Keywords  Second-order augmented Lagrangian method · Nonlinear programming · Generalized Newton method · Nonsmooth analysis

Mathematics Subject Classification  90C30 · 49J52 · 65K05

1 Introduction

The augmented Lagrangian method was first proposed by Hestenes [1] and Powell [2] for solving equality-constrained nonlinear programming (NLP) problems and was generalized by Rockafellar [3] to NLP problems with inequality constraints.
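As a point of reference (in our own notation, not the authors'): for the equality-constrained problem of minimizing $f(x)$ subject to $h(x) = 0$, with penalty parameter $c > 0$, the Hestenes–Powell augmented Lagrangian and the classical first-order multiplier update read
\[
\mathcal{L}_c(x,\lambda) \;=\; f(x) + \lambda^{\top} h(x) + \frac{c}{2}\,\|h(x)\|^{2},
\qquad
\lambda^{k+1} \;=\; \lambda^{k} + c\,h(x^{k}),
\]
where $x^{k}$ is a (local) minimizer of $\mathcal{L}_c(\cdot,\lambda^{k})$.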


The convergence rate of the augmented Lagrangian method has been analyzed by Powell [2], Rockafellar [3,4], Tretyakov [5], Bertsekas [6], Conn [7], Ito and Kunisch [8], and Contesse-Becker [9] under different assumptions. For convex programming problems, the augmented Lagrangian method can be viewed as a proximal point algorithm applied to the dual of the primal problem [10]. One may refer to the monograph by Bertsekas [11] and the references therein for a systematic discussion of the augmented Lagrangian method.

Locally, the iteration that updates the multipliers in the augmented Lagrangian method can be interpreted as a steepest ascent step, with unit step length, applied to a certain concave maximization problem. It is therefore natural to consider second-order approaches for updating the multipliers. For NLP problems without inequality constraints, Buys [12] first introduced a second-order procedure for updating the multipliers, which can be viewed as an application of Newton's method to that problem; the procedure was later independently proposed and refined by Bertsekas [11,13,14]. Brusch [15] and Fletcher [16] independently proposed updating the multipliers by quasi-Newton methods, an approach also considered by Fontecilla et al. [17].

For NLP problems with inequality constraints, the augmented Lagrangian function is no longer twice continuously differentiable. Pursuing a second-order augmented Lagrangian method for such problems is therefore much more challenging.
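To make the last two paragraphs concrete, here is a short sketch in our own notation (the symbols $\theta_c$, $g$, $\mu$, and $W_k$ below are ours, not the paper's). Writing $\theta_c(\lambda) := \min_x \mathcal{L}_c(x,\lambda)$ for the concave dual function of the equality-constrained case, the first-order update $\lambda^{k+1} = \lambda^{k} + c\,h(x^{k})$ is an ascent step because $\nabla\theta_c(\lambda^{k}) = h(x^{k})$, and a second-order scheme replaces it by a Newton-type step $\lambda^{k+1} = \lambda^{k} - W_k^{-1}\nabla\theta_c(\lambda^{k})$ with $W_k$ a suitable (generalized) Hessian. For the problem of minimizing $f(x)$ subject to $g_i(x) \le 0$, $i = 1,\dots,m$, Rockafellar's augmented Lagrangian and multiplier update are
\[
\mathcal{L}_c(x,\mu) \;=\; f(x) + \frac{1}{2c}\sum_{i=1}^{m}\Bigl(\bigl[\max\{0,\,\mu_i + c\,g_i(x)\}\bigr]^{2} - \mu_i^{2}\Bigr),
\qquad
\mu_i^{k+1} \;=\; \max\{0,\,\mu_i^{k} + c\,g_i(x^{k})\}.
\]
The max operation keeps $\mathcal{L}_c$ once continuously differentiable, but its gradient contains the piecewise-linear term $\max\{0,\,\mu_i + c\,g_i(x)\}$, which is nondifferentiable wherever $\mu_i + c\,g_i(x) = 0$; this is exactly the loss of twice continuous differentiability noted above, and it is why a generalized (semismooth) Newton method must replace the classical Newton iteration in the second-order multiplier update.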