On Sufficient Conditions for Optimality of Lipschitz Functions and Their Applications to Vector Optimization
We shall start with the following numerical example concerning an optimization problem involving differentiable functions.
EXAMPLE 1. Consider the following optimization problem (P) in three-dimensional space:

$$f(x,y,z) = x + 2y - x^2 + y^2 - z^2 \to \inf$$

under the conditions

$$g_1(x,y,z) = -(x+y) + z^2 \le 0,$$
$$g_2(x,y,z) = -y + z^4 \le 0.$$
 
We want to show that $(0,0,0)$ is a local minimum of problem (P).
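Before the formal argument, the claim can be probed numerically. The sketch below is illustrative, not from the text; it assumes the reconstructed problem data above (in particular the exponents $z^2$ in $g_1$ and $z^4$ in $g_2$, recovered from the value $f = 2z^6$ computed later in the text) and samples feasible points near the origin:

```python
import random

# Example 1, stated numerically.  The exponents in g1 and g2 are an
# assumption, reconstructed so that g1 = g2 = 0 gives f = 2 z^6.
def f(x, y, z):  return x + 2*y - x**2 + y**2 - z**2
def g1(x, y, z): return -(x + y) + z**2
def g2(x, y, z): return -y + z**4

random.seed(0)
eps = 1e-2                     # radius of the sampled neighbourhood
worst = 0.0                    # most negative feasible value of f seen
for _ in range(200_000):
    x, y, z = (random.uniform(-eps, eps) for _ in range(3))
    if g1(x, y, z) <= 0 and g2(x, y, z) <= 0:   # both constraints hold
        worst = min(worst, f(x, y, z))
print(worst >= 0.0)            # -> True: f never drops below f(0,0,0) = 0
```

Of course such sampling is only evidence, not a proof; the proof is the subject of what follows.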
 
We shall first verify that the Kuhn-Tucker necessary conditions for optimality hold. Indeed, taking $\lambda_1 = \lambda_2 = 1$ and forming the Lagrange function

$$L(x,y,z,\lambda_1,\lambda_2) = f(x,y,z) + \lambda_1 g_1(x,y,z) + \lambda_2 g_2(x,y,z) = -x^2 + y^2 + z^4,$$

we trivially obtain that
 
 V. F. Demyanov et al. (eds.), Nondifferentiable Optimization: Motivations and Applications © Springer-Verlag Berlin Heidelberg 1985
 
$$\nabla L(x,y,z)\bigr|_{(0,0,0)} = 0.$$

Unfortunately the classical sufficient condition of optimality (Hestenes (1947), McShane (1942)) does not hold. The second differential of the Lagrangian at $(0,0,0)$ is determined by the matrix

$$\begin{pmatrix} -2 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 0 \end{pmatrix},$$

and on the line orthogonal to the gradients $\nabla g_1\bigr|_{(0,0,0)} = (-1,-1,0)$ and $\nabla g_2\bigr|_{(0,0,0)} = (0,-1,0)$, i.e. on the $z$-axis, it simply vanishes. This stimulates an approach to sufficient conditions which
 
is different from the classical one proposed by McShane (1942) and Hestenes (1947). The classical idea was based on direct approximation of the problem by approximations of linear and quadratic type. Another approach is based on the idea of the implicit function theorem and in the simplest case can be expressed by the following:

THEOREM 1 (Rolewicz, 1980b). Let
 
$D$ be a domain contained in $n$-dimensional Euclidean space $R^n$. Let $f, g_1, \ldots, g_m$ be differentiable functions defined on $D$. We consider the following optimization problem:

(P)  $f(x) \to \inf$,  $g_i(x) \le 0$,  $i = 1, \ldots, m$.

Suppose that at $x_0$ there are Lagrange multipliers $\lambda_i$ ($i = 1, \ldots, m$), all strictly positive, such that the gradient of the Lagrange function is equal to $0$,

$$\nabla\Bigl(f + \sum_{i=1}^{m} \lambda_i g_i\Bigr)\Bigr|_{x = x_0} = 0.$$

Then $x_0$ is a local minimum of the problem (P) if and only if it is a local minimum of the following equality problem:

(Pe)  $f(x) \to \inf$,  $g_i(x) = 0$,  $i = 1, \ldots, m$.

Having Theorem 1, we can easily show that $(0,0,0)$ is a local minimum in Example 1. Indeed, $g_1(x,y,z) = 0 = g_2(x,y,z)$ implies

$$y = z^4, \qquad x = z^2 - z^4$$

and $f(x,y,z) = 2z^6$. Theorem 1 gives an algorithm reducing the problem of sufficient conditions for a problem with inequality constraints given by
 
$m$ functions of $n$ variables, to the problem of sufficient conditions for a function of $(n-m)$ independent variables.
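For Example 1 ($n = 3$, $m = 2$) this reduction can be carried out concretely. The following sketch is illustrative (the function names and the Newton-style elimination are assumptions, and the constraint data are the reconstructed ones from Example 1): it eliminates $(x,y)$ from $g_1 = g_2 = 0$ using the inverse Jacobian and recovers the reduced objective $2z^6$ in the single free variable $z$.

```python
# Reduction for Example 1: solve g1 = g2 = 0 for (x, y) as functions of
# the remaining variable z by Newton's method, which needs only the
# inverse of the Jacobian d(g1,g2)/d(x,y).
# Reconstructed data: g1 = -(x+y) + z^2,  g2 = -y + z^4.
def f(x, y, z):
    return x + 2*y - x**2 + y**2 - z**2

def solve_xy(z, steps=5):
    """Eliminate (x, y) from g1 = g2 = 0 for a fixed z."""
    x = y = 0.0                       # start at the candidate point
    for _ in range(steps):
        r1 = -(x + y) + z**2          # residual of g1
        r2 = -y + z**4                # residual of g2
        # Jacobian d(g1,g2)/d(x,y) = [[-1, -1], [0, -1]];
        # its inverse is [[-1, 1], [0, -1]], so the Newton step
        # (x, y) -= J^{-1} (r1, r2) is explicit:
        x -= -r1 + r2
        y -= -r2
    return x, y

# The reduced objective agrees with the value 2 z^6 from the text:
for z in (0.3, -0.1, 0.05):
    x, y = solve_xy(z)
    assert abs(f(x, y, z) - 2*z**6) < 1e-12
```

Since $g_1$ and $g_2$ are linear in $(x,y)$, a single Newton step already lands on the exact solution $y = z^4$, $x = z^2 - z^4$; for nonlinear constraints the same loop would iterate to convergence.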
 
The reduction procedure requires only the inversion of one matrix (the Jacobian matrix at $x_0$) and for this reason is not computationally difficult. Of course, a number of natural questions arise.
 
How will the situation change if

(a) there are also equality constraints,

(b) the Kuhn-Tucker necessary optimality conditions hold, but with certain $\lambda_i = 0$,

or more generally,

(c) the functions $f, g_1, \ldots, g_m$ are defined on a Banach space,

(d) the conditions $g_i(x) \le 0$ are replaced by a condi…