Preface to the Special Issue on Optimization for Data Sciences



Gabriel Peyré1 · Antonin Chambolle2

1 CNRS and École Normale Supérieure, DMA, 45 Rue d'Ulm, 75230 Paris Cedex 05, France
2 CMAP, CNRS and École Polytechnique, 91128 Palaiseau, France

© Springer Science+Business Media, LLC, part of Springer Nature 2019

Optimization is a core ingredient of most recent successes in data science. State-of-the-art approaches in imaging sciences and machine learning rely on efficient methods that bring together key ideas from non-smooth and possibly non-convex optimization with large-scale stochastic algorithms. Advancing the research front to cope with large-scale data sets and vastly over-parameterized models is of utmost importance for solving difficult inverse problems in imaging and for training deep networks, both for supervised tasks and for unsupervised generative models. This special issue on Optimization for Data Sciences brings together a wide spectrum of contributions in this very active field of research.

A first set of papers provides core contributions in optimization, with an emphasis on the development of highly scalable algorithms tailored to data sciences. The papers "Convergence of Stochastic Proximal Gradient Algorithm" and "Point Process Estimation with Mirror Prox Algorithms" study stochastic first-order methods, which are ubiquitous in machine learning and imaging. In the first paper, the authors Rosasco, Villa and Vu analyze first-order stochastic optimization methods that exploit a splitting of the objective into a smooth function and a non-smooth function with a simple structure. This type of algorithm is a crucial ingredient of regularized empirical risk minimization, and the paper provides a theoretical analysis of the convergence rate of these methods. In the second paper, He, Harchaoui, Wang and Song develop algorithms that cope with non-smooth (in particular, non-Lipschitz) optimization problems by leveraging the geometry of a saddle-point reformulation. This approach finds applications, for instance, in machine learning problems involving point processes.
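To give a rough flavour of the first type of method (in generic notation of our own, not the authors'), a stochastic proximal gradient scheme addresses a composite problem

\min_x F(x) = f(x) + g(x),

with f smooth and g non-smooth but admitting an easy-to-evaluate proximal operator, through iterations of the form

x_{k+1} = \mathrm{prox}_{\gamma_k g}\big( x_k - \gamma_k \hat{\nabla} f(x_k) \big), \qquad \mathrm{prox}_{\gamma g}(y) = \arg\min_x \; g(x) + \tfrac{1}{2\gamma}\|x - y\|^2,

where \hat{\nabla} f(x_k) is a stochastic estimate of \nabla f(x_k); the precise assumptions, step-size rules and resulting convergence rates are those studied in the paper.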


When the parameters of a model are constrained to lie on a Riemannian manifold, it is important to integrate the geometry of this manifold into the optimization procedure. The paper "Simple algorithms for optimization on Riemannian manifolds with extra constraints" by Liu and Boumal explains how to incorporate additional constraints into this type of approach, either through penalization or through Lagrange multipliers, and establishes convergence results.

Many problems in data sciences need to be regularized to cope with noise and ill-posedness, using some form of prior information on the simplicity of the model. Sparse and low-rank priors are the most iconic examples of such regularizations. The non-smoothness of the resulting optimization problems poses many challenges, which are at the heart of a set of thr