Asynchronous parallel algorithms for nonconvex optimization
Mathematical Programming, Series A
Loris Cannelli1 · Francisco Facchinei2 · Vyacheslav Kungurtsev3 · Gesualdo Scutari1
Received: 5 October 2017 / Accepted: 4 June 2019
© Springer-Verlag GmbH Germany, part of Springer Nature and Mathematical Optimization Society 2019
Abstract
We propose a new asynchronous parallel block-descent algorithmic framework for the minimization of the sum of a smooth nonconvex function and a nonsmooth convex one, subject to both convex and nonconvex constraints. The proposed framework hinges on successive convex approximation techniques and a novel probabilistic model that captures key elements of modern computational architectures and asynchronous implementations in a more faithful way than current state-of-the-art models. Other key features of the framework are: (1) it covers in a unified way several specific solution methods; (2) it accommodates a variety of possible parallel computing architectures; and (3) it can deal with nonconvex constraints. Almost sure convergence to stationary solutions is proved, and theoretical complexity results are provided, showing nearly ideal linear speedup when the number of workers is not too large.

Keywords Asynchronous algorithms · Nonconvex constraints · Parallel methods · Probabilistic model

Mathematics Subject Classification 90C30 · 90C26 · 65K05 · 68W10
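To give a flavor of the kind of method the framework covers, the following is a minimal sketch (not the paper's actual algorithm) of an asynchronous parallel block proximal-gradient scheme for an l1-regularized least-squares problem: several workers repeatedly read a shared, possibly stale iterate, pick a random block (here, a single coordinate), and write back a proximal update. All function names and parameters are illustrative assumptions, not taken from the paper.

```python
import threading
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t*|.|, i.e., of the nonsmooth convex term."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def async_block_prox_gradient(A, b, lam, n_workers=4,
                              iters_per_worker=2000, step=None, seed=0):
    """Toy asynchronous block proximal-gradient method for
    min_x 0.5*||Ax - b||^2 + lam*||x||_1.
    Each worker reads a (possibly inconsistent) snapshot of the shared
    iterate x, computes one partial gradient, and writes one block back."""
    m, n = A.shape
    x = np.zeros(n)  # shared iterate, updated in place by all workers
    if step is None:
        # conservative stepsize: 1 / (Lipschitz constant of grad f)
        step = 1.0 / np.linalg.norm(A, 2) ** 2

    def worker(wid):
        rng = np.random.default_rng(seed + wid)
        for _ in range(iters_per_worker):
            i = rng.integers(n)        # random block (here: one coordinate)
            x_read = x.copy()          # possibly stale snapshot
            grad_i = A[:, i] @ (A @ x_read - b)  # i-th partial gradient
            x[i] = soft_threshold(x_read[i] - step * grad_i, step * lam)

    threads = [threading.Thread(target=worker, args=(w,))
               for w in range(n_workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return x
```

With a small stepsize the stale reads are harmless in practice on small problems; quantifying the admissible staleness rigorously is precisely what the probabilistic model of the paper addresses.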
The work of Cannelli and Scutari was supported by the USA NSF under Grants CIF 1564044 and CIF 1719205, and by the Army Research Office under Grant W911NF1810238. Facchinei was partially supported by the Italian Ministry of Education, Research and University, under the PLATINO (PLATform for INnOvative services in future internet) PON project, Grant Agreement No. PON01_01007. Kungurtsev was supported by the Czech Science Foundation Project 17-26999S and the OP VVV project CZ.02.1.01/0.0/0.0/16_019/0000765 "Research Center for Informatics". Part of this work was presented at the 50th Asilomar Conference on Signals, Systems, and Computers [6] and the 42nd IEEE International Conference on Acoustics, Speech, and Signal Processing [5]. A two-part preliminary technical report was posted on arXiv in July 2016 [3] and January 2017 [4].
Corresponding author: Francisco Facchinei, [email protected]
Extended author information available on the last page of the article
1 Introduction

We study asynchronous parallel block-descent methods for the following class of nonconvex, nonsmooth minimization problems:

    min_{x ≜ (x_1, ..., x_N)}  F(x) ≜ f(x) + Σ_{i=1}^N g_i(x_i)
    subject to  x_i ∈ X_i,  i = 1, ..., N,                          (P)

where f is a smooth, possibly nonconvex function, the g_i are possibly nonsmooth, convex functions, and each X_i ⊆ R^{n_i} is a closed, possibly nonconvex set. Instances of Problem (P) arise in many fields, including compressed sensing, machine learning, data mining, and genomics, to name a few. Typically, in data-driven applications f measures the misfit between the observations and the postulated model, parametrized by x, while the regularizers g_i encode struc-
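To make the setting concrete, here is a hypothetical instance of (P): a sparse-regression objective (smooth misfit plus separable l1 regularizers) together with a simple nonconvex block constraint X_i = {0} ∪ [l, u], which forces each variable to be either exactly zero or bounded away from zero. The function names, the specific set, and its projection are illustrative assumptions, not constructions from the paper.

```python
import numpy as np

def F(x, A, b, lam):
    """Objective of this instance of (P): smooth misfit
    f(x) = 0.5*||Ax - b||^2 plus separable convex regularizers
    g_i(x_i) = lam*|x_i|."""
    return 0.5 * np.sum((A @ x - b) ** 2) + lam * np.sum(np.abs(x))

def project_zero_or_interval(v, l, u):
    """Euclidean projection onto the nonconvex set X_i = {0} ∪ [l, u]
    (with 0 < l <= u): each candidate point is either 0 or the nearest
    point of [l, u]; return whichever is closer to v."""
    clipped = np.clip(v, l, u)  # nearest point of the interval [l, u]
    return np.where(np.abs(v) <= np.abs(v - clipped), 0.0, clipped)
```

The closed-form projection is what makes such nonconvex sets tractable inside block-descent schemes: each block update only needs a projection onto its own X_i, never onto the full (nonconvex) feasible set.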