Function Values Are Enough for $L_2$-Approximation


David Krieg¹ · Mario Ullrich¹,²

Received: 2 May 2020 / Revised: 28 September 2020 / Accepted: 5 October 2020
© The Author(s) 2020

Abstract We study the $L_2$-approximation of functions from a Hilbert space and compare the sampling numbers with the approximation numbers. The sampling number $e_n$ is the minimal worst-case error that can be achieved with $n$ function values, whereas the approximation number $a_n$ is the minimal worst-case error that can be achieved with $n$ pieces of arbitrary linear information (like derivatives or Fourier coefficients). We show that

$$ e_n \,\lesssim\, \sqrt{\frac{1}{k_n} \sum_{j \ge k_n} a_j^2}\,, $$

where $k_n \asymp n/\log(n)$. This proves that the sampling numbers decay with the same polynomial rate as the approximation numbers and therefore that function values are basically as powerful as arbitrary linear information if the approximation numbers are square-summable. Our result applies, in particular, to Sobolev spaces $H^s_{\mathrm{mix}}(\mathbb{T}^d)$ with dominating mixed smoothness $s > 1/2$ and dimension $d \in \mathbb{N}$, and we obtain

$$ e_n \,\lesssim\, n^{-s} \log^{sd}(n). $$

For $d > 2s + 1$, this improves upon all previous bounds and disproves the prevalent conjecture that Smolyak's (sparse grid) algorithm is optimal.

Keywords $L_2$-approximation · Sampling numbers · Rate of convergence · Random matrices · Sobolev spaces with mixed smoothness
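For orientation, here is a short sketch of how the Sobolev rate follows from the bound displayed above; it assumes the classical asymptotics $a_j \asymp j^{-s} (\log j)^{s(d-1)}$ for the approximation numbers of $H^s_{\mathrm{mix}}(\mathbb{T}^d)$ in $L_2$, which are standard but not restated in this excerpt. Since $2s > 1$, the tail sum satisfies

$$ \frac{1}{k_n} \sum_{j \ge k_n} a_j^2 \;\lesssim\; \frac{1}{k_n} \sum_{j \ge k_n} j^{-2s} (\log j)^{2s(d-1)} \;\lesssim\; k_n^{-2s} (\log k_n)^{2s(d-1)}, $$

and hence, with $k_n \asymp n/\log(n)$ and $\log k_n \asymp \log n$,

$$ e_n \;\lesssim\; k_n^{-s} (\log k_n)^{s(d-1)} \;\asymp\; \Big(\frac{n}{\log n}\Big)^{-s} (\log n)^{s(d-1)} \;=\; n^{-s} (\log n)^{sd}. $$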

Communicated by Frances Kuo. Extended author information available on the last page of the article


Mathematics Subject Classification 41A25 · 41A46 · 60B20 · Secondary: 41A63 · 46E35

Let $H$ be a reproducing kernel Hilbert space, i.e., a Hilbert space of real-valued functions on a set $D$ such that point evaluation $\delta_x\colon H \to \mathbb{R}$, $\delta_x(f) := f(x)$, is a continuous functional for all $x \in D$. We consider numerical approximation of functions from such spaces, using only function values. We measure the error in the space $L_2 = L_2(D, \mathcal{A}, \mu)$ of square-integrable functions with respect to an arbitrary measure $\mu$ such that $H$ is embedded into $L_2$. This means that the functions in $H$ are square-integrable and two functions from $H$ that are equal $\mu$-almost everywhere are also equal point-wise. We are interested in the $n$-th minimal worst-case error

$$ e_n \,:=\, e_n(H) \,:=\, \inf_{\substack{x_1,\dots,x_n \in D \\ \varphi_1,\dots,\varphi_n \in L_2}} \; \sup_{f \in H\colon \|f\|_H \le 1} \Big\| f - \sum_{i=1}^n f(x_i)\,\varphi_i \Big\|_{L_2}, $$

which is the worst-case error of an optimal algorithm that uses at most $n$ function values. These numbers are sometimes called sampling numbers.
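To illustrate the definition with a classical univariate example (not taken from this article, and stated only as a point of reference): for $H = H^s(\mathbb{T})$ with $s > 1/2$ and $\mu$ the Lebesgue measure, trigonometric interpolation at the $n$ equispaced points $x_i = 2\pi i/n$ is an algorithm of the admitted form $\sum_{i=1}^n f(x_i)\,\varphi_i$, with $\varphi_i$ the trigonometric cardinal (Lagrange) functions, and it satisfies

$$ \Big\| f - \sum_{i=1}^n f(x_i)\,\varphi_i \Big\|_{L_2} \;\le\; C_s\, n^{-s}\, \|f\|_{H^s}. $$

Hence $e_n \lesssim n^{-s}$ in this case, which matches the order of the approximation numbers of $H^s(\mathbb{T})$; the comparison only becomes delicate in settings such as the mixed-smoothness spaces from the abstract.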

We want to compare $e_n$ with the $n$-th approximation number

$$ a_n \,:=\, a_n(H) \,:=\, \inf_{\substack{L_1,\dots,L_n \in H' \\ \varphi_1,\dots,\varphi_n \in L_2}} \; \sup_{f \in H\colon \|f\|_H \le 1} \Big\| f - \sum_{i=1}^n L_i(f)\,\varphi_i \Big\|_{L_2}, $$

where $H'$ is the space of all bounded, linear functionals on $H$. This is the worst-case error of an optimal algorithm that uses at most $n$ linear functionals as information. Clearly, we have $a_n \le e_n$ since the point evaluations form a subset of $H'$. The approximation numbers are quite well understood in many cases because they are equal to the singular values of the embedding operator $\mathrm{id}\colon$