TOPICAL ISSUE

Adaptive Global Optimization Based on a Block-Recursive Dimensionality Reduction Scheme

R. G. Strongin, V. P. Gergel, and K. A. Barkalov

Lobachevsky Nizhny Novgorod State University, Nizhny Novgorod, Russia
e-mail: [email protected] (R. G. Strongin), [email protected] (V. P. Gergel), [email protected] (K. A. Barkalov)

Received July 23, 2019; Revised October 29, 2019; Accepted January 30, 2020

Abstract—Multidimensional multiextremal optimization problems and numerical methods for solving them are studied. The objective function is supposed to satisfy the Lipschitz condition with an a priori unknown constant, which is the only general assumption imposed on it. Problems of this type often arise in applications. Two dimensionality reduction approaches to multidimensional optimization problems, i.e., the use of Peano curves (evolvents) and a recursive multistep scheme, are considered. A generalized scheme combining both approaches is proposed. In the new scheme, an original multidimensional problem is reduced to a family of lower-dimensional problems, which are solved using evolvents. An adaptive algorithm with the simultaneous solution of all resulting subproblems is implemented. Computational experiments on several hundred test problems are performed. According to the experimental evidence, the new dimensionality reduction scheme is effective.

Keywords: global optimization, multiextremal objective functions, dimensionality reduction, Peano curves, recursive optimization

DOI: 10.1134/S0005117920080093

1. INTRODUCTION

This paper considers global optimization problems of the form

ϕ(y*) = min {ϕ(y) : y ∈ D},    D = {y ∈ R^N : a_i ≤ y_i ≤ b_i, 1 ≤ i ≤ N},    (1)
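To make the setting concrete, the sketch below (an illustration, not part of the paper) states a problem of form (1) in Python; the Rastrigin objective, the dimension N = 4, and the bounds are hypothetical placeholders standing in for a computationally expensive black-box model.

```python
import numpy as np

# Hypothetical multiextremal objective of form (1); any black-box function
# whose values come from numerical modeling could stand in its place.
def rastrigin(y: np.ndarray) -> float:
    # Classical multiextremal test function: many local minima, one global minimum at y = 0.
    return float(10.0 * y.size + np.sum(y ** 2 - 10.0 * np.cos(2.0 * np.pi * y)))

# Search domain D = {y in R^N : a_i <= y_i <= b_i}, here with N = 4 (placeholder bounds).
a = np.full(4, -5.12)  # lower bounds a_i
b = np.full(4, 5.12)   # upper bounds b_i

# A "trial" is simply an evaluation of the objective at a feasible point y in D.
y_trial = a + (b - a) * np.random.default_rng(0).random(4)
print(rastrigin(y_trial))
```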

with the following assumptions: the objective function can be multiextremal and is defined implicitly, i.e., it is a black box; the values of this function are obtained by numerical modeling, which is a computationally intensive operation. In principle, any opportunity to reliably estimate the global optimum of a multiextremal problem with a black-box objective function relies on a priori information that can be used to associate the possible values of this function with its known values at trial points. For many applications, a typical situation is that a limited change in the parameter vector y causes a limited change in the values of ϕ(y). The mathematical model describing this assumption is the Lipschitz condition

|ϕ(y′) − ϕ(y″)| ≤ L ‖y′ − y″‖,    y′, y″ ∈ D,    0 < L < ∞.
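Because L is not known a priori, practical Lipschitz-based methods typically estimate it from the trials performed so far. The following sketch (an illustration under stated assumptions, not the algorithm of this paper) computes the standard lower bound on L given by the largest pairwise divided difference of the accumulated trial data; the trial points and the placeholder objective are hypothetical. In practice such an estimate is usually multiplied by a reliability factor r > 1 before being used by a method.

```python
import numpy as np

def lipschitz_lower_bound(points: np.ndarray, values: np.ndarray) -> float:
    """Lower bound on the Lipschitz constant L from trial data.

    points: (k, N) array of trial points y^1, ..., y^k in D
    values: (k,) array of objective values phi(y^i)
    Any admissible L must satisfy |phi(y') - phi(y'')| <= L * ||y' - y''||,
    so the largest observed divided difference bounds L from below.
    """
    est = 0.0
    for i in range(len(values)):
        for j in range(i + 1, len(values)):
            dist = np.linalg.norm(points[i] - points[j])
            if dist > 0.0:
                est = max(est, abs(values[i] - values[j]) / dist)
    return est

# Usage with hypothetical trial data (placeholder objective: a simple quadratic).
rng = np.random.default_rng(1)
pts = rng.uniform(-5.12, 5.12, size=(20, 4))
vals = np.array([float(np.sum(p ** 2)) for p in pts])
L_hat = lipschitz_lower_bound(pts, vals)
print(L_hat)
```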

The assumption that the objective function satisfies the Lipschitz condition is common to many approaches to the development of optimization algorithms. The first methods of Lipschitz optimization were proposed in the early 1970s [1–3]. Since that time, this line of research has continued to develop actively [4–8]. For example, many well-known methods are based on different techniques to divide the original search domain into a system of subdomains and to choose the most promising subdomain for performing the next trial (i.e., calculating the objective function). The