

ORIGINAL ARTICLE

Accelerated inexact matrix completion algorithm via closed-form q-thresholding (q = 1/2, 2/3) operator

Zhi Wang¹ · Chao Gao¹ · Xiaohu Luo¹ · Ming Tang¹ · Jianjun Wang² · Wu Chen¹

¹ College of Computer and Information Science, Southwest University, Chongqing 400715, People's Republic of China
² College of Artificial Intelligence, Southwest University, Chongqing 400715, People's Republic of China

Received: 12 April 2019 / Accepted: 26 March 2020
© Springer-Verlag GmbH Germany, part of Springer Nature 2020

Abstract

$l_q$ $(0 < q < 1)$ regularization is a dominant strategy for matrix completion problems. The main goal of nonconvex $l_q$-regularized algorithms is to find a so-called low-rank solution. Unfortunately, most existing algorithms rely on a full singular value decomposition (SVD) and thus become inefficient for large-scale matrix completion problems. To alleviate this limitation, in this paper we propose an accelerated inexact algorithm for such problems. The key idea is to employ the closed-form q-thresholding (q = 1/2, 2/3) operator to approximate the rank of a matrix. The power method and the special "sparse plus low-rank" structure of the matrix iterates are adopted to allow efficient SVD. Besides, we employ Nesterov's accelerated gradient method and a continuation technique to further accelerate the convergence of the proposed algorithm. A convergence analysis shows that the sequence $\{X_t\}$ generated by our algorithm is bounded and has at least one accumulation point. Extensive experiments have been conducted to study its recovery performance on synthetic data, image recovery, and recommendation problems. All results demonstrate that our algorithm achieves comparable recovery performance while being faster and more efficient than state-of-the-art methods.

Keywords: Matrix completion · $l_q$ regularization · q-thresholding operator · Nesterov's rule · Power method
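The closed-form q-thresholding operator named in the abstract has a well-known scalar instance for q = 1/2: the half-thresholding rule from the $l_{1/2}$ regularization literature (Xu et al.). Below is a minimal NumPy sketch of that rule and of its standard lift to singular values. The function names and the scaling convention (the scalar rule solves $\min_x (x-y)^2 + \lambda |x|^{1/2}$ entrywise) are assumptions made for illustration, not necessarily the paper's exact operator.

```python
import numpy as np

def half_threshold(y, lam):
    """Entrywise closed-form half-thresholding (q = 1/2): for each
    entry it solves min_x (x - y)^2 + lam * |x|**0.5.  A sketch of
    the rule from the l_{1/2} literature (Xu et al.), not
    necessarily the paper's exact normalization."""
    y = np.asarray(y, dtype=float)
    out = np.zeros_like(y)
    # entries at or below this magnitude are set exactly to zero
    thresh = (54.0 ** (1.0 / 3.0) / 4.0) * lam ** (2.0 / 3.0)
    mask = np.abs(y) > thresh
    ym = y[mask]
    phi = np.arccos((lam / 8.0) * (np.abs(ym) / 3.0) ** (-1.5))
    out[mask] = (2.0 / 3.0) * ym * (1.0 + np.cos((2.0 / 3.0) * (np.pi - phi)))
    return out

def svd_half_threshold(Y, lam):
    """Matrix version: apply the scalar rule to the singular values
    of Y, the standard way a thresholding operator is lifted to
    matrices to promote low rank (an assumption for this sketch)."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return U @ np.diag(half_threshold(s, lam)) @ Vt
```

Because the scalar rule zeroes out small inputs exactly, the matrix version truncates small singular values and returns a genuinely low-rank iterate, which is what makes a closed-form thresholding step attractive compared with iterative subproblem solvers.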

1 Introduction

In various machine learning and data analysis areas, such as collaborative filtering [1, 2], dimensionality reduction [3], subspace clustering [4], multi-label learning [5, 6], and image processing [7, 8], one needs to consider the following matrix completion problem:

$$\min_{X \in \mathbb{R}^{m \times n}} \operatorname{rank}(X) \quad \text{s.t.} \quad P_{\Omega}(X) = P_{\Omega}(M), \tag{1}$$

where $M \in \mathbb{R}^{m \times n}$ is the incomplete low-rank matrix to be reconstructed, $X \in \mathbb{R}^{m \times n}$ is the low-rank matrix variable, $\operatorname{rank}(X)$ is the rank of $X$, $\Omega$ is the set of locations of the observed entries, and $P_{\Omega}$ is the orthogonal projection onto the span of matrices vanishing outside of $\Omega$. The goal of problem (1) is to find the solution $X^{*}$ of $P_{\Omega}(X) = P_{\Omega}(M)$ with the smallest rank.
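As a concrete illustration, here is a minimal NumPy sketch of $P_{\Omega}$ acting as an entrywise mask. Representing $\Omega$ as a boolean array is an assumption made for this example; in practice one would often store only the observed index/value pairs.

```python
import numpy as np

def P_Omega(X, omega_mask):
    """Orthogonal projection onto matrices supported on Omega:
    keep the observed entries of X and zero out everything else.
    `omega_mask` is a boolean array marking observed positions,
    a representation chosen only for this sketch."""
    return np.where(omega_mask, X, 0.0)

# The constraint of problem (1) then reads:
#     np.allclose(P_Omega(X, omega_mask), P_Omega(M, omega_mask))
```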

More often, we consider the following regularization problem:

$$\min_{X \in \mathbb{R}^{m \times n}} \frac{1}{2}\,\|P_{\Omega}(X - M)\|_F^2 + \lambda\,\operatorname{rank}(X), \tag{2}$$

where $\|\cdot\|_F$ is the Frobenius norm and $\lambda$ is a positive regularization parameter. However, owing to the nonconvex and discontinuous nature of $\operatorname{rank}(X)$, pro