Acceleration techniques for level bundle methods in weakly smooth convex constrained optimization
Yunmei Chen¹ · Xiaojing Ye² · Wei Zhang¹
Received: 9 June 2017
© Springer Science+Business Media, LLC, part of Springer Nature 2020
Abstract
We develop a unified level-bundle method, called the accelerated constrained level-bundle (ACLB) algorithm, for solving constrained convex optimization problems where the objective and constraint functions can be nonsmooth, weakly smooth, and/or smooth. ACLB employs Nesterov's accelerated gradient technique, and hence retains the same iteration complexity as existing bundle-type methods if the objective or one of the constraint functions is nonsmooth. More importantly, ACLB can significantly reduce the iteration complexity when the objective and all constraints are (weakly) smooth. In addition, if the objective contains a nonsmooth component which can be written as a specific form of maximum, we show that the iteration complexity for this component can be much lower than that for a general nonsmooth objective function. Numerical results demonstrate the effectiveness of the proposed algorithm.

Keywords Convex optimization · Acceleration · Bundle method · Functional constrained optimization
* Wei Zhang [email protected]
Yunmei Chen [email protected]
Xiaojing Ye [email protected]
1 Department of Mathematics, University of Florida, Gainesville, FL 32611, USA
2 Department of Mathematics and Statistics, Georgia State University, Atlanta, GA 30303, USA
1 Introduction

1.1 Problem description

In this paper, we are interested in solving the constrained optimization problem below, where the objective function F may contain either or both of f0 and f:
$$\min_{x\in X}\ \big\{F(x) := f_0(x) + f(x)\big\},\quad \text{s.t.}\ \ g_i(x)\le 0,\ \forall\, i=1,\dots,n_c. \tag{1}$$
Here X is a compact convex set in ℝ^{d_X}, the functions f0, gi : X → ℝ are proper, closed, and convex, and there exist L0, Li > 0 and ρ0, ρi ∈ [0, 1] such that
$$f_0(\bar x) - f_0(x) - \langle \xi(x),\, \bar x - x\rangle \le \frac{L_0}{1+\rho_0}\,\|\bar x - x\|^{1+\rho_0},\quad \forall\, x,\bar x\in X, \tag{2}$$

$$g_i(\bar x) - g_i(x) - \langle \zeta_i(x),\, \bar x - x\rangle \le \frac{L_i}{1+\rho_i}\,\|\bar x - x\|^{1+\rho_i},\quad \forall\, x,\bar x\in X,\ i=1,\dots,n_c. \tag{3}$$

In (1), f is a special max-type (possibly nonsmooth) function:

$$f(x) := \max_{y\in Y}\,\big\{\langle Ax,\, y\rangle - \chi(y)\big\}, \tag{4}$$
where Y ⊆ ℝ^{d_Y} is a compact convex set, χ : Y → ℝ is a proper, closed convex function, and A : ℝ^{d_X} → ℝ^{d_Y} is a linear operator (such as a matrix). In (2)–(4), ‖·‖ is the standard Euclidean norm, ⟨·,·⟩ is the standard inner product, and ξ(x) ∈ ∂f0(x) and ζi(x) ∈ ∂gi(x) denote arbitrary (sub)gradients of f0 and gi at x, respectively. The conditions in (2) and (3) allow us to present our algorithm and convergence results in a unified framework covering f0 and the gi's of any smoothness level ρ0, ρi ∈ [0, 1]. More precisely, f0 is nonsmooth if ρ0 = 0, weakly smooth if ρ0 ∈ (0, 1), and smooth if ρ0 = 1; similar statements hold for the constraint functions gi. Therefore, for example, if ρ0 = 0 but ρi = 1 for all i = 1, …, nc, then we are dealing with problem (1) with a nonsmooth objective and smooth constraints.
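As a concrete illustration of the three regimes (our example, not from the paper), consider the scalar family f0(t) = |t|^{1+ρ}/(1+ρ), whose (sub)gradient is sign(t)|t|^ρ; for this family, condition (2) holds with the valid (though not tight) constant L0 = 2 for every ρ ∈ [0, 1]. The sketch below checks the inequality numerically at random point pairs:

```python
import numpy as np

def holder_gap(rho, x, xbar, L=2.0):
    """Left-hand side of condition (2) minus its right-hand side for
    f0(t) = |t|^(1+rho)/(1+rho), whose (sub)gradient is sign(t)|t|^rho.
    Nonpositive values mean (2) holds at the pair (x, xbar)."""
    f = lambda t: np.abs(t) ** (1 + rho) / (1 + rho)
    xi = np.sign(x) * np.abs(x) ** rho               # (sub)gradient at x
    lhs = f(xbar) - f(x) - xi * (xbar - x)
    rhs = L / (1 + rho) * np.abs(xbar - x) ** (1 + rho)
    return lhs - rhs

rng = np.random.default_rng(1)
pts = rng.uniform(-5, 5, size=(2, 10000))
for rho in (0.0, 0.5, 1.0):   # nonsmooth / weakly smooth / smooth
    gap = holder_gap(rho, pts[0], pts[1])
    print(f"rho = {rho}: max of lhs - rhs = {gap.max():.2e}")  # expect <= 0
```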
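To make the max-type structure in (4) concrete, here is a minimal sketch under assumed choices not taken from the paper: χ ≡ 0 and Y the Euclidean unit ball, for which (4) reduces to f(x) = ‖Ax‖₂ with maximizer y* = Ax/‖Ax‖₂ and Aᵀy* ∈ ∂f(x):

```python
import numpy as np

def max_type_f(A, x):
    """Evaluate f(x) = max_{y in Y} { <Ax, y> - chi(y) } and a subgradient,
    for the illustrative choice chi = 0 and Y = Euclidean unit ball,
    so that f(x) = ||Ax||_2 and A^T y* is a subgradient (y* a maximizer)."""
    Ax = A @ x
    norm_Ax = np.linalg.norm(Ax)
    if norm_Ax > 0:
        y_star = Ax / norm_Ax          # unique maximizer over the unit ball
    else:
        y_star = np.zeros_like(Ax)     # any y in Y is optimal when Ax = 0
    f_val = norm_Ax                    # <Ax, y*> = ||Ax||_2
    subgrad = A.T @ y_star             # element of the subdifferential at x
    return f_val, subgrad

# Tiny usage example
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 2))
x = rng.standard_normal(2)
fx, g = max_type_f(A, x)
print(fx, g)
```

Evaluating f through its maximizer in this way is what makes the saddle representation (4) algorithmically useful: the y that attains the maximum simultaneously certifies the function value and supplies a subgradient.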