Continuous Dynamics Related to Monotone Inclusions and Non-Smooth Optimization Problems
Ernő Robert Csetnek 1

Received: 16 March 2020 / Accepted: 30 June 2020 / © The Author(s) 2020
Abstract  The aim of this survey is to present the most important techniques and tools from variational analysis used in the study of first- and second-order dynamical systems of implicit type for solving monotone inclusions and non-smooth optimization problems. The differential equations are expressed by means of the resolvent (in the case of a maximally monotone set-valued operator) or the proximal operator of a non-smooth function. The asymptotic analysis of the generated trajectories relies on Lyapunov theory, where an appropriately chosen energy functional plays a decisive role. While most of the paper is devoted to monotone inclusions and convex optimization problems, we also present results for dynamical systems solving non-convex optimization problems, where the Kurdyka-Łojasiewicz property is used.

Keywords  Dynamical systems · Lyapunov analysis · Krasnosel'skiĭ–Mann algorithm · Monotone inclusions · Resolvent · Proximal operator · Forward-backward algorithm · Non-smooth optimization problem · Kurdyka-Łojasiewicz property

Mathematics Subject Classification (2010)  34G25 · 47J25 · 47H05 · 90C25
1 Introduction and preliminaries

Dynamical systems approaching monotone inclusions and optimization problems have enjoyed much attention since the seventies of the last century (Brézis, Baillon and Bruck, see [25, 52, 53]), not only due to their intrinsic importance in areas like differential equations and applied functional analysis, but also because they have been recognized as a valuable tool for discovering and studying numerical algorithms for optimization problems, obtained by time discretization of the continuous dynamics. The dynamic approach to iterative methods in optimization can furnish deep insights into the expected behavior of the method, and the
Research supported by FWF (Austrian Science Fund), project P 29809-N32.

Ernő Robert Csetnek
[email protected]

1 Faculty of Mathematics, University of Vienna, Oskar-Morgenstern-Platz 1, A-1090 Vienna, Austria
techniques used in the continuous case can be adapted to obtain results for the discrete algorithm. For more on the relations between the continuous and discrete dynamics we refer the reader to [75]. Let us mention that the discretization $\dot{x}(t) \approx \frac{1}{\gamma}(x_{n+1} - x_n)$ (with step size $\gamma > 0$) of the gradient flow $\dot{x}(t) = -\nabla g(x(t))$, where $g : \mathcal{H} \to \mathbb{R}$ is a smooth function, leads to the well-known steepest descent method (gradient method) $x_{n+1} = x_n - \gamma \nabla g(x_n)$, used for solving the minimization problem $\min_{x \in \mathcal{H}} g(x)$.
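The passage from the gradient flow to the steepest descent method can be illustrated numerically. The following sketch uses an illustrative quadratic objective $g(x) = \frac{1}{2}\|Ax - b\|^2$ (this particular choice of $g$, $A$, and $b$ is an assumption for the example, not taken from the paper):

```python
import numpy as np

# Explicit (forward) discretization of the gradient flow
# x'(t) = -grad g(x(t)) with step gamma gives gradient descent:
# x_{n+1} = x_n - gamma * grad g(x_n).
# Illustrative (assumed) objective: g(x) = 0.5 * ||A x - b||^2.
A = np.array([[2.0, 0.0], [0.0, 1.0]])
b = np.array([2.0, 1.0])

def grad_g(x):
    return A.T @ (A @ x - b)

gamma = 0.1  # step size; convergence here needs gamma < 2/L,
             # with L the Lipschitz constant of grad g (here L = 4)
x = np.zeros(2)
for n in range(500):
    x = x - gamma * grad_g(x)  # one discretized gradient-flow step

# The minimizer solves A^T A x = A^T b, i.e. x* = (1, 1)
print(x)
```

The same continuous-time viewpoint explains the step-size restriction: the explicit Euler scheme is only a faithful approximation of the flow when $\gamma$ is small relative to the curvature of $g$.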
In the case of convex non-smooth optimization problems, the discretization of the differential inclusion $\dot{x}(t) \in -\partial f(x(t))$ leads to the subgradient method $x_{n+1} \in x_n - \gamma \partial f(x_n)$, while the implicit discretization $x_n - x_{n+1} \in \gamma \partial f(x_{n+1})$ yields the proximal point algorithm [79] $x_{n+1} = (\mathrm{Id} + \gamma \partial f)^{-1}(x_n)$. Let us proceed and consider cont
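For a concrete instance of the proximal point iteration $x_{n+1} = (\mathrm{Id} + \gamma \partial f)^{-1}(x_n)$, one can take the illustrative choice $f(x) = |x|$ on $\mathbb{R}$ (an assumption for this example, not a case treated in the paper), whose resolvent is the well-known soft-thresholding operator:

```python
import numpy as np

# Proximal point algorithm x_{n+1} = (Id + gamma * subdiff f)^{-1}(x_n)
# for the assumed example f(x) = |x| on R. The resolvent of gamma*|.|
# is soft-thresholding: sign(x) * max(|x| - gamma, 0).
def prox_abs(x, gamma):
    return np.sign(x) * np.maximum(np.abs(x) - gamma, 0.0)

gamma = 0.5
x = 3.2
trajectory = [x]
for n in range(20):
    x = prox_abs(x, gamma)  # one implicit (backward) step
    trajectory.append(x)

# Each step moves x by gamma toward the minimizer 0 and then stays there,
# even though f is non-smooth at the minimizer.
print(trajectory[:8])
```

Note that, unlike the explicit subgradient step, the implicit resolvent step is well defined and stable for every $\gamma > 0$, which is precisely why resolvents and proximal operators are the building blocks of the dynamics studied in this survey.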