On the convergence of a class of inertial dynamical systems with Tikhonov regularization
Bo Xu¹ · Bo Wen²

Received: 9 June 2020 / Accepted: 30 October 2020
© Springer-Verlag GmbH Germany, part of Springer Nature 2020
Abstract
We consider a class of inertial second order dynamical systems with Tikhonov regularization, which can be applied to the minimization of a smooth convex function. Based on appropriate choices of the parameters in the dynamical system, we first show that the function value along the trajectories converges to the optimal value, and prove that the convergence rate can be of order $o(1/t^2)$, i.e., faster than $O(1/t^2)$. Moreover, by constructing a proper energy function, we prove that the trajectories converge strongly to the minimizer of the objective function of minimum norm. Finally, some numerical experiments are conducted to illustrate the theoretical results.

Keywords Convex optimization · Inertial gradient system · Tikhonov regularization · Convergence analysis
1 Introduction

In recent years, convex optimization problems have drawn many researchers' attention, since they arise in numerous application areas such as machine learning [10,30], statistics [18], image processing [20,32], and so on. Hence, various algorithms have been proposed for solving convex optimization problems with different structures. One simple and often powerful algorithm is Nesterov's accelerated gradient algorithm, whose convergence rate can be $O(1/t^2)$. Many accelerated algorithms based on Nesterov's acceleration technique have been proposed since then; we refer the reader to [11,19,25,26,29,31] and the references therein for an overview of these algorithms.
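For reference, a minimal Python sketch of Nesterov's accelerated gradient method is given below; the function names and the least-squares test problem are our own illustrative choices and are not taken from the paper.

```python
import numpy as np

def nesterov_agd(grad, x0, lipschitz, n_iter=500):
    """Sketch of Nesterov's accelerated gradient method for a smooth convex function.

    `grad` is the gradient of the objective and `lipschitz` an upper bound on its
    Lipschitz constant; the step size 1/L and momentum (k-1)/(k+2) are the standard choices.
    """
    x = y = np.asarray(x0, dtype=float)
    step = 1.0 / lipschitz
    for k in range(1, n_iter + 1):
        x_next = y - step * grad(y)                      # gradient step at the extrapolated point
        y = x_next + (k - 1) / (k + 2) * (x_next - x)    # momentum / extrapolation step
        x = x_next
    return x

# Illustrative test problem: minimize 0.5 * ||A x - b||^2.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 10))
b = rng.standard_normal(20)
grad = lambda x: A.T @ (A @ x - b)
L = np.linalg.norm(A.T @ A, 2)    # Lipschitz constant of the gradient
x_star = nesterov_agd(grad, np.zeros(10), L)
```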
Bo Wen [email protected]
Bo Xu [email protected]

1 School of Science, Hebei University of Technology, Tianjin, People's Republic of China
2 Institute of Mathematics, Hebei University of Technology, Tianjin, People's Republic of China
Most of the literature studies Nesterov's accelerated method by using different numerical optimization techniques. However, differential equations are also important and efficient tools for studying numerical algorithms. Recently, Su, Boyd and Candès [28] proposed a class of second order differential equations to study Nesterov's accelerated gradient method, namely

$$
\ddot{x}(t) + \frac{\alpha}{t}\,\dot{x}(t) + \nabla \Phi(x(t)) = 0, \qquad x(t_0) = u_0, \ \ \dot{x}(t_0) = v_0, \tag{1.1}
$$

where $\Phi$ is convex and differentiable, $\nabla \Phi$ is Lipschitz continuous, and $t_0 > 0$. They show that this system can be seen as the continuous version of Nesterov's accelerated gradient method. In addition, they prove that the convergence rate of the function value along the trajectories of (1.1) is $O(1/t^2)$ if $\alpha$ is chosen as 3, which is the same as the convergence rate of Nesterov's accelerated gradient method. Moreover, they show that 3 is the smallest constant that guarantees the convergence rate $O(1/t^2)$. The work of Su, Boyd and Candès [28] motivates subsequent studies on the second order differential equation (1.1), see, for example, [3,8,13–16,24,33]. In particular, Attouch, Chbani, Peypouquet and Redont [3] es
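To illustrate the continuous-time dynamics, the sketch below numerically integrates (1.1) for a simple quadratic $\Phi$; the choice of $\Phi$, $\alpha = 3$, and the initial data are our own illustrative assumptions, and SciPy's general-purpose integrator is used rather than any scheme from the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative setup (not from the paper): Phi(x) = 0.5 * x^T Q x,
# so grad Phi(x) = Q x and the optimal value is Phi* = 0.
Q = np.diag([1.0, 10.0])
grad_phi = lambda x: Q @ x

alpha, t0 = 3.0, 1.0
u0, v0 = np.array([1.0, 1.0]), np.zeros(2)   # x(t0) = u0, x'(t0) = v0

def ode(t, z):
    # First-order form of (1.1): z = (x, v), x' = v,
    # v' = -(alpha / t) * v - grad Phi(x).
    x, v = z[:2], z[2:]
    return np.concatenate([v, -(alpha / t) * v - grad_phi(x)])

ts = np.linspace(t0, 100.0, 2000)
sol = solve_ivp(ode, (t0, 100.0), np.concatenate([u0, v0]),
                t_eval=ts, rtol=1e-9, atol=1e-12)

# Phi(x(t)) along the trajectory.
phi_vals = 0.5 * np.sum(sol.y[:2] * (Q @ sol.y[:2]), axis=0)
# t^2 * (Phi(x(t)) - Phi*) stays bounded, consistent with the O(1/t^2) rate.
print(ts[-1] ** 2 * phi_vals[-1])
```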