A new wide neighborhood primal-dual second-order corrector algorithm for linear optimization



Zsolt Darvay¹ · Behrouz Kheirfam² · Petra Renáta Rigó¹,³

Received: 22 May 2018 / Accepted: 17 August 2019 © Springer-Verlag GmbH Germany, part of Springer Nature 2019

Abstract We propose a new large-step primal-dual second-order corrector interior-point method for linear optimization. At each iteration, our method uses the new wide neighborhood introduced by Darvay and Takács (Cent Eur J Oper Res 26(3):551–563, 2018. https://doi.org/10.1007/s10100-018-0524-0). In this paper we improve the directions proposed by Darvay and Takács by adding a second-order corrector direction. The corrector step is multiplied by the square of the step length in the expression of the new iterate. To the best of our knowledge, this is the first primal-dual second-order corrector interior-point algorithm based on the Darvay–Takács wide neighborhood that has the same complexity as the best short-step algorithms for linear optimization. Finally, numerical experiments show that the proposed algorithm is efficient and reliable.

Keywords Linear optimization · Wide neighborhood · Interior-point methods · Second-order methods · Polynomial complexity
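To make the role of the corrector concrete: denoting the first-order direction by $(\Delta x, \Delta y, \Delta s)$ and the second-order corrector direction by $(\Delta x^{c}, \Delta y^{c}, \Delta s^{c})$ (this notation is introduced here for illustration only, not taken verbatim from the paper), the new iterate described in the abstract has the generic form

```latex
\[
\bigl(x(\alpha),\, y(\alpha),\, s(\alpha)\bigr)
  \;=\; (x, y, s)
  \;+\; \alpha\,(\Delta x, \Delta y, \Delta s)
  \;+\; \alpha^{2}\,(\Delta x^{c}, \Delta y^{c}, \Delta s^{c}),
\]
```

so the corrector term enters weighted by the square of the step length $\alpha$.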

Zsolt Darvay [email protected]
Behrouz Kheirfam [email protected]
Petra Renáta Rigó [email protected]

1 Faculty of Mathematics and Computer Science, Babeș-Bolyai University, No. 1 Mihail Kogălniceanu Street, 400084 Cluj-Napoca, Romania

2 Department of Applied Mathematics, Azarbaijan Shahid Madani University, Tabriz, Iran

3 Department of Differential Equations, Budapest University of Technology and Economics, Budapest, Hungary



1 Introduction

Interior-point methods (IPMs) became a powerful tool for solving linear optimization (LO) problems after Karmarkar [7] published his projective potential-reduction algorithm. For the main results in this field, we refer the reader to the books of Roos et al. [20], Ye [25] and Wright [24]. It turned out that several IPMs can also solve linear complementarity problems (LCPs) [12], semidefinite optimization (SDO) problems [4,23] and convex programming problems [16].

IPMs can be classified in a variety of ways, e.g., into primal-dual path-following methods, affine-scaling methods, and feasible and infeasible IPMs. Moreover, IPMs can be distinguished by the step length: we speak of short- and large-update methods, which work in small and wide neighborhoods of the central path, respectively. However, there is a gap between the theory and practice of IPMs: large-update methods give better results in practice, while short-update methods have better theoretical complexity results. To narrow this gap, Peng et al. [17] used self-regular barriers to obtain different algorithms. Potra [19] proposed a predictor–corrector method for degenerate LCPs based on a negative-infinity neighborhood of the central path
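For readers less familiar with these neighborhoods, the classical small (2-norm) and wide (negative-infinity) neighborhoods of the central path can be sketched as simple membership tests. This is a minimal illustration using the standard textbook definitions (see, e.g., Roos et al. [20] and Wright [24]); it is not the Darvay–Takács neighborhood on which the algorithm of this paper is based.

```python
import numpy as np

def mu(x, s):
    # Duality measure: mu = x^T s / n.
    return x @ s / x.size

def in_N2(x, s, tau):
    # Small (2-norm) neighborhood: ||x*s - mu*e||_2 <= tau * mu,
    # where x*s is the componentwise product.
    m = mu(x, s)
    return np.linalg.norm(x * s - m) <= tau * m

def in_Ninf_minus(x, s, gamma):
    # Wide (negative-infinity) neighborhood: x_i * s_i >= gamma * mu for all i.
    m = mu(x, s)
    return bool(np.all(x * s >= gamma * m))

# On the central path (x_i * s_i = mu for all i) both tests succeed:
x = np.ones(4)
s = np.ones(4)
print(in_N2(x, s, 0.25), in_Ninf_minus(x, s, 0.5))   # both hold

# A point with one very small product x_i * s_i leaves both neighborhoods:
s_skewed = np.array([0.1, 1.0, 1.0, 1.0])
print(in_N2(x, s_skewed, 0.25), in_Ninf_minus(x, s_skewed, 0.5))
```

For any fixed parameter value, the wide neighborhood contains the small one, which is why large-update methods can take more aggressive steps while remaining feasible.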