Optimal Summation and Integration by Deterministic, Randomized, and Quantum Algorithms

We survey old and new results about optimal algorithms for summation of finite sequences and for integration of functions from Hölder or Sobolev spaces. First we discuss optimal deterministic and randomized algorithms. Then we add a new aspect, which has only recently entered this field: optimal algorithms for summation and integration on a quantum computer.



² Universität Kaiserslautern, Fachbereich Informatik, Postfach 3049, D-67653 Kaiserslautern

The Hölder classes are defined by

$$F_d^{k,\alpha} = \{ f \in C^k : \|f\|_\infty \le 1,\; |D^i f(x) - D^i f(y)| \le |x - y|^\alpha,\; x, y \in [0,1]^d,\; |i| = k \},$$

where $k \in \mathbb{N}_0$, $0 < \alpha \le 1$, $C^k$ stands for the set of functions $f$ which are continuous together with all their partial derivatives $D^i f$ up to order $k$, $\|\cdot\|_p$ ($1 \le p \le \infty$) denotes the $L_p$-norm with respect to the Lebesgue measure on $[0,1]^d$, $|x - y|$ is the Euclidean distance between $x$ and $y$, and $|i|$ means the sum of the components of the multiindex $i$. The Sobolev classes are defined by

$$W_{p,d}^k = \{ f : \|D^i f\|_p \le 1,\; |i| \le k \},$$

where $k \in \mathbb{N}_0$, $1 \le p \le \infty$, and $D^i$ is here the weak partial derivative. For the integration problem in Sobolev spaces, we always assume the embedding condition

(1)  $k \cdot p > d$,

which guarantees, by the Sobolev embedding theorem, that the elements of $W_{p,d}^k$ are continuous functions, and hence function values are well-defined.

Let us briefly describe the organization of the paper. In Section 2 we survey known results about optimal deterministic algorithms for $S_N$ on $L_p^N$ and $I_d$ on $F_d^{k,\alpha}$ and $W_{p,d}^k$. Section 3 is concerned with randomized (or Monte Carlo) algorithms for the same problems. In Section 4 we give an introduction to the model of quantum computation and survey recent results of Novak (2001) and Heinrich (2001a,b) on optimal algorithms for summation and integration on a quantum computer.
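As a worked illustration (not from the original text), consider a simple membership check for the Hölder classes: for $d = 1$, $k = 0$, and $0 < \alpha \le 1$, take $f(x) = x^\alpha$ on $[0,1]$. Then $\|f\|_\infty = 1$, and subadditivity of $t \mapsto t^\alpha$ gives $x^\alpha \le y^\alpha + (x - y)^\alpha$ for $x \ge y \ge 0$, hence $|f(x) - f(y)| \le |x - y|^\alpha$ and $f \in F_1^{0,\alpha}$.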

2  Deterministic Algorithms

We consider numerical algorithms (methods) of the form

(2)  $A_n(f) = \varphi(f(x_1), \ldots, f(x_n)),$

where $x_i \in D$ ($i = 1, \ldots, n$) and $\varphi : \mathbb{R}^n \to \mathbb{R}$ is an arbitrary mapping. (In the terminology of information-based complexity, this is the class of all nonadaptive, in general nonlinear, algorithms using $n$ function values.)
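To make the structure of (2) concrete, here is a minimal Python sketch; the choice of nodes and of the mapping $\varphi$ (a median, deliberately nonlinear) is purely illustrative and not taken from the paper.

    # Sketch of a nonadaptive method A_n(f) = phi(f(x_1), ..., f(x_n)).
    # Nodes and the mapping phi are illustrative choices only.
    import statistics

    def A_n(f, nodes, phi):
        """Evaluate f at fixed nodes and combine the values with an arbitrary phi."""
        values = [f(x) for x in nodes]
        return phi(values)

    # Example: 10 equispaced nodes in [0, 1] and a nonlinear phi (the median).
    nodes = [(i + 0.5) / 10 for i in range(10)]
    approx = A_n(lambda x: x ** 2, nodes, statistics.median)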


A special subclass is formed by linear algorithms, i.e., quadratures

(3)  $A_n(f) = \sum_{i=1}^{n} a_i f(x_i)$

with $a_i \in \mathbb{R}$ and $x_i \in D$. The error of a method $A_n$ of the form (2) is defined as

$$e(A_n, F) = \sup_{f \in F} |S(f) - A_n(f)|.$$
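As an illustration of a linear method (3) (not from the paper), the following Python sketch implements a quadrature with midpoint nodes and equal weights on $D = [0,1]$; these particular weights and nodes are an assumption made only for the example.

    # Sketch of a linear quadrature A_n(f) = sum_i a_i f(x_i) on D = [0, 1].
    def quadrature(f, weights, nodes):
        return sum(a * f(x) for a, x in zip(weights, nodes))

    n = 8
    nodes = [(i + 0.5) / n for i in range(n)]   # midpoints of n equal subintervals
    weights = [1.0 / n] * n                     # equal weights
    approx = quadrature(lambda x: x ** 2, weights, nodes)   # approximates 1/3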

The central quantity for our analysis is the $n$-th minimal error, defined for $n \in \mathbb{N}$ as

$$e_n^{\mathrm{det}}(F) = \inf_{A_n} e(A_n, F).$$

The classes $L_p^N$, $F_d^{k,\alpha}$, and $W_{p,d}^k$ are unit balls in Banach spaces, so they are convex and symmetric. The operators $S_N$ and $I_d$ are linear. It is known that under these assumptions linear methods (3) are optimal (even among all adaptive, nonlinear methods). This was proved by Smolyak and Bakhvalov, see Bakhvalov (1971) and also Novak (1996) and Traub, Wasilkowski, Woźniakowski (1988). Therefore it is not difficult to find an optimal method for the summation operator $S_N$ on $L_p^N$:

$$A_n^*(f) = \frac{1}{N} \sum_{i=1}^{n} f(i),$$

and its error is

$$e(A_n^*, L_p^N) = \left(\frac{N - n}{N}\right)^{1 - 1/p},$$

where $n < N$. Of course we obtain $e_n^{\mathrm{det}}(L_p^N) = 0$ for $n \ge N$ and therefore always assume that $n < N$. The spaces $L_p^N$ are increasing with decreasing $p$, and for the extreme cases $p = \infty$ and $p = 1$ we obtain

$$e_n^{\mathrm{det}}(L_\infty^N) = \frac{N - n}{N} \qquad\text{and}\qquad e_n^{\mathrm{det}}(L_1^N) = 1.$$
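The following Python sketch checks this error formula numerically; it assumes (this normalization is not restated in the excerpt) that $S_N(f) = \frac{1}{N}\sum_{i=1}^{N} f(i)$ and that $L_p^N$ is the unit ball of the normalized $p$-norm $\bigl(\frac{1}{N}\sum_{i=1}^{N} |f(i)|^p\bigr)^{1/p}$.

    # Sketch: the method A_n^*(f) = (1/N) * (f(1) + ... + f(n)) and a function
    # attaining its worst-case error (zero on the sampled indices, constant with
    # normalized p-norm 1 on the remaining N - n indices).
    def A_star(f, n, N):
        return sum(f(i) for i in range(1, n + 1)) / N

    N, n, p = 100, 30, 2.0
    c = (N / (N - n)) ** (1.0 / p)
    f_worst = lambda i: 0.0 if i <= n else c
    S_N = sum(f_worst(i) for i in range(1, N + 1)) / N
    err = abs(S_N - A_star(f_worst, n, N))
    bound = (1 - n / N) ** (1 - 1 / p)      # equals ((N - n)/N)^(1 - 1/p)
    assert abs(err - bound) < 1e-12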

3  Randomized Algorithms

Randomized (or Monte Carlo) methods are of the form

(4)  $A_n^\omega(f) = \sum_{i=1}^{n} a_i^\omega f(x_i^\omega),$

with random variables $a_i^\omega$ with values in $\mathbb{R}$ and $x_i^\omega$ with values in $D$ on a probability space $(\Omega, \Sigma, P)$. The error of a method (4) is

(5)  $e(A_n^\omega, F) = \sup_{f \in F} \bigl( E\,(S(f) - A_n^\omega(f))^2 \bigr)^{1/2},$

where $E$ is the expectation. The randomized $n$-th minimal error is defined as

(6)  $e_n^{\mathrm{ran}}(F) = \inf_{A_n^\omega} e(A_n^\omega, F).$
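As an illustration of form (4) for the summation problem (not the paper's construction, and again assuming $S_N$ is the mean of $f(1), \ldots, f(N)$), the standard Monte Carlo method takes $a_i^\omega = 1/n$ and independent, uniformly distributed random indices $x_i^\omega$; a small Python sketch:

    # Sketch: standard Monte Carlo for S_N(f) = (1/N) * sum_{i=1}^N f(i),
    # an instance of (4) with a_i = 1/n and uniformly random indices x_i.
    import random

    def monte_carlo_mean(f, n, N, rng=random):
        return sum(f(rng.randint(1, N)) for _ in range(n)) / n

    # Crude empirical estimate of the mean-square error in (5) for one fixed f.
    N, n, trials = 1000, 50, 2000
    f = lambda i: (-1) ** i
    exact = sum(f(i) for i in range(1, N + 1)) / N
    mse = sum((exact - monte_carlo_mean(f, n, N)) ** 2 for _ in range(trials)) / trials
    rmse = mse ** 0.5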