Primal-dual optimization algorithms over Riemannian manifolds: an iteration complexity analysis
Junyu Zhang¹ · Shiqian Ma² · Shuzhong Zhang¹,³
Received: 29 January 2018 / Accepted: 1 August 2019
© Springer-Verlag GmbH Germany, part of Springer Nature and Mathematical Optimization Society 2019
Abstract
In this paper we study nonconvex and nonsmooth multi-block optimization over Euclidean embedded (smooth) Riemannian submanifolds with coupled linear constraints. Such optimization problems naturally arise from machine learning, statistical learning, compressive sensing, image processing, and tensor PCA, among others. By utilizing the embedding structure, we develop an ADMM-like primal-dual approach based on decoupled, solvable subroutines such as linearized proximal mappings, where the duality is with respect to the embedded Euclidean spaces. First, we introduce the optimality conditions for the aforementioned optimization models. Then, the notion of ε-stationary solutions is introduced as a result. The main part of the paper is to show that the proposed algorithms possess an iteration complexity of O(1/ε²) to reach an ε-stationary solution. For prohibitively large-size tensor or machine learning models, we present a sampling-based stochastic algorithm with the same iteration complexity bound in expectation. In case the subproblems are not analytically solvable, a feasible curvilinear line-search variant of the algorithm based on retraction operators is proposed. Finally, we show specifically how the algorithms can be implemented to solve a variety of practical problems, such as the NP-hard maximum bisection problem, the ℓ_q-regularized sparse tensor principal component analysis, and the community detection problem. Our preliminary numerical results show great potential of the proposed methods.
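To fix ideas, the following is a minimal sketch of the kind of model the abstract refers to; the block structure, the symbols, and the form of the augmented Lagrangian below are illustrative assumptions drawn from the standard multi-block ADMM setting, not the paper's exact formulation (which is given in the body of the paper):
\[
\min_{x_1,\dots,x_N} \; f(x_1,\dots,x_N) + \sum_{i=1}^{N} r_i(x_i)
\quad \text{s.t.} \quad \sum_{i=1}^{N} A_i x_i = b, \qquad x_i \in \mathcal{M}_i \subseteq \mathbb{R}^{n_i},
\]
where f is smooth but possibly nonconvex, each r_i is a possibly nonsmooth regularizer, and each \(\mathcal{M}_i\) is a Euclidean embedded smooth submanifold. An ADMM-like scheme of the type described above would update the blocks one at a time via (linearized) proximal steps on the augmented Lagrangian
\[
\mathcal{L}_\beta(x_1,\dots,x_N;\lambda) \;=\; f(x_1,\dots,x_N) + \sum_{i=1}^{N} r_i(x_i) - \Big\langle \lambda,\, \sum_{i=1}^{N} A_i x_i - b \Big\rangle + \frac{\beta}{2}\Big\| \sum_{i=1}^{N} A_i x_i - b \Big\|^2,
\]
followed by a dual update on \(\lambda\) in the embedding Euclidean space; an ε-stationary solution is then, roughly, a point at which the (Riemannian) stationarity conditions and the linear constraint are violated by at most ε.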
Shuzhong Zhang: [email protected]; [email protected]
Junyu Zhang: [email protected]
Shiqian Ma: [email protected]
1 Department of Industrial and Systems Engineering, University of Minnesota, Minneapolis, USA
2 Department of Mathematics, University of California, Davis, Davis, USA
3 Institute of Data and Decision Analytics, The Chinese University of Hong Kong, Shenzhen, China
Keywords Nonconvex and nonsmooth optimization · Riemannian manifold · ε-Stationary solution · ADMM · Iteration complexity
Mathematics Subject Classification 90C60 · 90C90
1 Introduction
Multi-block nonconvex optimization with nonsmooth regularization functions has recently found important applications in statistics, computer vision, machine learning, and image processing. In this paper, we aim to solve a class of constrained nonconvex and nonsmooth optimization models. To get a sense of the problems at hand, let us consider the following Multilinear (Tensor) Principal Component Analysis (MPCA) model, which has applications in 3-D object recognition, music genre classification, and subspace learning (see e.g. [43,50]). Details of the model will be discussed in Sect. 5. We want to highlight here that a sparse optimi