A COMPROMISE METHOD IN CONSTRAINED OPTIMIZATION PROBLEMS UDC 519.9

A. N. Voronin

Abstract. The possibility of obtaining a compromise solution to a constrained optimization problem is investigated. The problem is to make the solution reflect a compromise between the conflicting requirements of extremizing some objective function and satisfying the constraints. To solve the problem, multicriteria optimization with the use of a nonlinear compromise scheme is applied. An illustrative example is given.

Keywords: nonlinear programming, objective function, multicriteria optimization, constraint, nonlinear compromise scheme.

PROBLEM CONTENT

Optimization problems in different object domains are usually reduced to searching for an extremum of an objective function under conditions imposed on the optimization arguments. To solve such problems, the apparatus of mathematical programming is used. Let us briefly consider some basic concepts used in this field. We formulate a rather general mathematical programming problem: find

x* = arg extr f(x),  x ∈ X,
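As a minimal numeric illustration of this general statement (a sketch only, not the compromise method of the paper), the following solves one invented nonlinear programming instance with SciPy's general-purpose solver; the objective f and the constraint function are assumptions chosen for the example.

```python
import numpy as np
from scipy.optimize import minimize

# Invented objective: f(x) = (x1 - 1)^2 + (x2 - 2)^2, to be minimized.
def f(x):
    return (x[0] - 1.0) ** 2 + (x[1] - 2.0) ** 2

# Feasible set X = {x | x_i >= 0, psi(x) = x1 + x2 - 2 <= 0}.
# SciPy expects inequality constraints in the form g(x) >= 0,
# so psi(x) <= 0 is passed as -psi(x) >= 0.
cons = [{"type": "ineq", "fun": lambda x: -(x[0] + x[1] - 2.0)}]
bnds = [(0.0, None), (0.0, None)]  # x_i >= 0 for all i

res = minimize(f, x0=np.array([0.5, 0.5]), bounds=bnds, constraints=cons)
print(res.x)
```

The unconstrained optimum (1, 2) violates x1 + x2 <= 2, so the solver stops on the constraint boundary at approximately (0.5, 1.5), which is exactly the conflict between extremization and constraint satisfaction that the abstract describes.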

where X = {x | x_i ≥ 0 ∀ i ∈ [1, n], ψ_j(x) ≤ 0 ∀ j ∈ [1, m]}. The functions f(x) and ψ_j(x) are arbitrary. Depending on the properties of the objective function and the constraint functions, mathematical programming problems are subdivided into the following two basic classes:

• linear programming problems,
• nonlinear programming problems.

This article considers nonlinear programming problems. To solve a problem constructively, additional particular assumptions are introduced. In the simplest case, an unconstrained problem is considered (optimization in an open domain, or unconstrained optimization). For differentiable functions f(x), the necessary extremum condition that follows from Fermat's theorem,

∂f(x)/∂x_i = 0,  i ∈ [1, n],

is usually used; whether the stationary point is a minimum or a maximum is then inferred from the physical meaning of the problem. If necessary, the type of extremum is determined from the sign of the second derivative at the point of extremum (a positive sign indicates a minimum of the function, a negative sign a maximum). Solving this system of equations, we obtain the required collection x* of n optimization arguments.

The presence of constraints makes mathematical programming problems fundamentally distinct from conventional problems of mathematical analysis that consist in finding extreme values of a function. In this case, constrained optimization takes place. In the classical statement, one considers the case when the constraints hold as equalities, ψ_j(x) = b_j ∀ j ∈ [1, m], i.e., the so-called isoperimetric problem (Dido's problem). Figure 1 shows an isoperimetric problem that is two-dimensional in its arguments and has one constraint. The surface of the function f(x) is represented by level lines f(x) = const (as on topographic maps), and the constraint is represented by the projection of the spatial curve ψ(x) = b onto the coordinate plane.
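The Fermat condition and the isoperimetric idea can both be sketched on toy functions (the quadratic below and the rectangle example are invented for illustration, not taken from the paper):

```python
# Fermat necessary condition: df/dx_i = 0 at an interior extremum.
# Toy differentiable function (invented example):
def f(x1, x2):
    return (x1 - 3.0) ** 2 + 2.0 * (x2 + 1.0) ** 2

# Setting the partial derivatives to zero:
#   df/dx1 = 2(x1 - 3) = 0  ->  x1* = 3
#   df/dx2 = 4(x2 + 1) = 0  ->  x2* = -1
x_star = (3.0, -1.0)

# Numerical check that the gradient vanishes at x* (central differences).
h = 1e-6
df_dx1 = (f(x_star[0] + h, x_star[1]) - f(x_star[0] - h, x_star[1])) / (2 * h)
df_dx2 = (f(x_star[0], x_star[1] + h) - f(x_star[0], x_star[1] - h)) / (2 * h)
print(df_dx1, df_dx2)  # both approximately 0

# Both second derivatives are positive (2 and 4), so x* is a minimum,
# matching the sign rule stated in the text.

# Isoperimetric (Dido-style) flavor: among rectangles with a fixed
# half-perimeter b = x1 + x2, the area x1 * x2 peaks at x1 = x2 = b / 2.
b = 10.0
areas = [x1 * (b - x1) for x1 in [4.0, 4.9, 5.0, 5.1, 6.0]]
print(max(areas))  # largest area at x1 = 5.0 = b / 2
```

The first part is unconstrained optimization in an open domain; the second shows how an equality constraint reshapes the problem, which is exactly the situation depicted in Fig. 1.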