Young programming



approximation of the corresponding linear programming problem. Finally, in section 5 a row-action method for the solution of Young programming problems is presented. The algorithm, interpreted in terms of the dual problem, leads to an alternative method, which we call the dir-action method.

The Young inequality. The Young inequality was first published by W.H. Young in 1912. A generalized form of the inequality can be found in [5]. We recall the inequality in a form that is convenient to use in the rest of the paper. Let φ : R → R be a continuous, strictly decreasing function, and consider the curve Γ_φ = {(x, φ(x)) : x ∈ R}. The following definition offers a way to describe how much an arbitrary point (u, v) of the plane is 'away' from the curve Γ_φ.

DEFINITION 1 Let (u, v) ∈ R², and denote ψ = φ⁻¹; then

    S_φ(u, v) := (1/2) [ (u − ψ(v)) v − ∫_{ψ(v)}^{u} φ(t) dt + (v − φ(u)) u − ∫_{φ(u)}^{v} ψ(t) dt ]

is called the inaccuracy of (u, v) with respect to φ. □ Geometrically, S_φ(u, v) is the area of the shaded region in Fig. 1.
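Definition 1 is easy to check numerically. The sketch below is ours, not the paper's: it assumes the concrete choice φ(x) = −x³ (any continuous, strictly decreasing bijection of R would do, with ψ(y) = −∛y), evaluates the two integrals by a simple midpoint rule, and computes S_φ(u, v) directly from the defining formula. The computed values are nonnegative and vanish on the curve, in line with the Young inequality.

```python
import math

# Assumed concrete example: phi(x) = -x**3 is continuous and strictly
# decreasing on R; its inverse is psi(y) = -cbrt(y).
def phi(x):
    return -x**3

def psi(y):
    # real cube root with the correct sign, then negate
    return -math.copysign(abs(y) ** (1.0 / 3.0), y)

def integral(f, a, b, n=4000):
    # midpoint rule; a > b gives the usual signed integral
    h = (b - a) / n
    return h * sum(f(a + (k + 0.5) * h) for k in range(n))

def inaccuracy(u, v):
    """S_phi(u, v) of Definition 1: the average of the two
    equivalent area formulas for the region between (u, v)
    and the curve."""
    return 0.5 * ((u - psi(v)) * v - integral(phi, psi(v), u)
                  + (v - phi(u)) * u - integral(psi, phi(u), v))

# Young inequality: S_phi(u, v) >= 0 for arbitrary points ...
for (u, v) in [(1.0, 2.0), (-0.5, 3.0), (2.0, -1.0), (0.0, -1.0)]:
    assert inaccuracy(u, v) >= -1e-9

# ... with equality on the curve, i.e. when v = phi(u)
assert abs(inaccuracy(1.5, phi(1.5))) < 1e-6
```

For (u, v) = (0, −1) the two bracketed halves both equal the exact area 3/4, which illustrates why the factor 1/2 in the definition recovers the area of the shaded region.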

[Fig. 1: the point (u, v), the curve Γ_φ, and the shaded region whose area is S_φ(u, v).]

The Young inequality states that

    S_φ(u, v) ≥ 0 for every (u, v) ∈ R²,

and equality holds if and only if v = φ(u). The geometric interpretation, or a straightforward computation, proves the statement. The inaccuracy can be extended to R²ⁿ as follows.

DEFINITION 2 Let (u, v) ∈ R²ⁿ, u = (u₁, …, uₙ), v = (v₁, …, vₙ); then


    S_φ(u, v) := Σ_{j=1}^{n} S_φ(uⱼ, vⱼ)

is called the inaccuracy of (u, v) ∈ R²ⁿ, i.e. the inaccuracy is computed coordinatewise. □

Interesting examples of functions of the form S_φ(u, v) are displayed by certain discrepancy or divergence functions. Let f : R → R be a strictly concave and differentiable function. Then a class of divergence functions can be generated by the following mapping D_f : R × R → R,

    D_f(u‖v) := f(v) + f′(v)(u − v) − f(u),

u, v ∈ R. D_f can serve as a 'measure of distance' between u and v ∈ R with respect to f, although strictly speaking D_f is not a distance function, for it is not symmetric and the triangle inequality does not hold. Nevertheless, the family of functions of the form D_f (f being not necessarily of a sum form) was introduced by L.M. Bregman [1]. Obviously

    D_f(u‖v) = f′(v)(u − v) + ∫_{u}^{v} f′(t) dt.

Linear Programming. Since Young programming will be introduced as an analytical approximation of linear programming, it is convenient to recall some basic facts of linear programming. Let A be an m × n matrix, and denote a⁽¹⁾, …, a⁽ᵐ⁾ ∈ Rⁿ the rows of A. Denote L and L^⊥ the row space of the matrix A and the solution space of Ax = 0, respectively. Let ξ, ξ̂ ∈ Rⁿ be arbitrary but fixed vectors. Let us define the affine subspaces ξ ⊕ L = {z ∈ Rⁿ : z = ξ + w for some w ∈ L} and ξ̂ ⊕ L^⊥ = {x ∈ Rⁿ : x = ξ̂ + w for some w ∈ L^⊥}. Then clearly

    x ∈ ξ̂ ⊕ L^⊥  ⇔  Ax = Aξ̂,

and

    z ∈ ξ ⊕ L  ⇔  z = ξ + yA for some y ∈ Rᵐ.

Denote xⱼ and zⱼ the j-th coordinate of x and z, respectively, and consider the following feasibility problem, which is in fact the linear programming problem in an equilibrium form.

PROBLEM 3 Let