Optimal Polynomial Prediction Measures and Extremal Polynomial Growth



L. Bos¹ · N. Levenberg² · J. Ortega-Cerdà³

Received: 25 January 2020 / Revised: 30 June 2020 / Accepted: 24 July 2020
© The Author(s) 2020

Abstract
We show that the problem of finding the measure supported on a compact set K ⊂ Cᵈ such that the variance of the least squares predictor by polynomials of degree at most n at a point z₀ ∈ Cᵈ \ K is a minimum is equivalent to the problem of finding the polynomial of degree at most n, bounded by 1 on K, with extremal growth at z₀. We use this to find the polynomials of extremal growth for [−1, 1] ⊂ C at a purely imaginary point. The related problem on the extremal growth of real polynomials was studied by Erdős (Bull Am Math Soc 53:1169–1176, 1947).

Keywords Optimal polynomial prediction measures · Polynomials of extremal growth · Complex polynomials

Mathematics Subject Classification 41A17 · 30A10 · 62K05

1 Introduction

In this work we consider two classical extremum problems for polynomials. The first is very easy to state. Indeed, let us denote the complex polynomials of degree at most n in d complex variables by Cₙ[z], z ∈ Cᵈ. Then for K ⊂ Cᵈ compact and z₀ ∈ Cᵈ \ K an external point, we say that Pₙ(z) ∈ Cₙ[z] has extremal growth relative to K at z₀

Communicated by Doron Lubinsky.

L. Bos
[email protected]

1 Department of Computer Science, University of Verona, Verona, Italy

2 Department of Mathematics, Indiana University, Bloomington, IN, USA

3 Departament de Matemàtiques i Informàtica, Universitat de Barcelona (UB), BGSMath, Barcelona, Spain

Constructive Approximation

if

$$P_n = \underset{p \in \mathcal{C}_n[z]}{\arg\max}\; \frac{|p(z_0)|}{\|p\|_K}, \qquad (1)$$

where $\|p\|_K$ denotes the sup-norm of p on K. Alternatively, we may normalize p to be 1 at the external point and use

$$P_n = \underset{p \in \mathcal{C}_n[z],\; p(z_0)=1}{\arg\max}\; \frac{1}{\|p\|_K}. \qquad (2)$$
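As a quick numerical illustration of the ratio in (1) (ours, not part of the paper), one can evaluate |p(z₀)|/‖p‖_K for the Chebyshev polynomial Tₙ with K = [−1, 1]: its sup-norm on [−1, 1] is 1, and it exhibits the classical extremal growth at real points outside the interval. The function names below are ours.

```python
import numpy as np

def sup_norm_on_interval(coeffs_cheb, m=10001):
    # Numerical estimate of the sup-norm ||p||_K on K = [-1, 1],
    # sampling |p| on a fine grid; coeffs_cheb is in the Chebyshev basis.
    x = np.linspace(-1.0, 1.0, m)
    vals = np.polynomial.chebyshev.chebval(x, coeffs_cheb)
    return np.max(np.abs(vals))

def growth_ratio(coeffs_cheb, z0):
    # The quantity |p(z0)| / ||p||_K from (1), with K = [-1, 1].
    p_z0 = np.polynomial.chebyshev.chebval(z0, coeffs_cheb)
    return abs(p_z0) / sup_norm_on_interval(coeffs_cheb)

n = 5
Tn = [0.0] * n + [1.0]        # T_n in the Chebyshev basis: ||T_n||_[-1,1] = 1

print(growth_ratio(Tn, 2.0))  # growth at a real z0 outside [-1, 1]
print(growth_ratio(Tn, 1j))   # growth at a purely imaginary point, as in the abstract
```

For real z₀ with |z₀| > 1, Tₙ is the classical extremal polynomial for (1); the purely imaginary case is the one resolved in this paper.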

We note that for this to be well-defined we require that K be polynomial determining, i.e., if p ∈ C[z] is such that p(x) = 0 for all x ∈ K, then p = 0. We refer the interested reader to the survey [2] for more about what is known about this problem.

The second problem is from the field of optimal design for polynomial regression. To describe it we reduce to the real case K ⊂ Rᵈ, and note that we may write any p ∈ Rₙ[z] in the form

$$p = \sum_{k=1}^{N} \theta_k p_k,$$

where $\mathcal{B} := \{p_1, p_2, \ldots, p_N\}$ is a basis for Rₙ[z] and $N := \binom{n+d}{d}$ is its dimension. Suppose now that we observe the values of a particular p ∈ Rₙ[z] at a set of m ≥ N points X := {x_j : 1 ≤ j ≤ m} ⊂ K with some random errors; i.e., we observe

$$y_j = p(x_j) + \epsilon_j, \qquad 1 \le j \le m,$$

where we assume that the errors ε_j ∼ N(0, σ²) are independent. In matrix form this becomes y = Vₙθ + ε, where θ ∈ Rᴺ, y, ε ∈ Rᵐ, and

$$V_n := \begin{bmatrix} p_1(x_1) & p_2(x_1) & \cdots & p_N(x_1) \\ p_1(x_2) & p_2(x_2) & \cdots & p_N(x_2) \\ \vdots & \vdots & & \vdots \\ p_1(x_m) & p_2(x_m) & \cdots & p_N(x_m) \end{bmatrix} \in \mathbb{R}^{m \times N}$$

is the associated Vandermonde matrix. Our assumption on the error vector ε means that cov(ε) = σ²Iₘ ∈ Rᵐˣᵐ.
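The regression setup above can be sketched numerically; the following is an illustrative instance (not the paper's construction), taking d = 1, the monomial basis p_k(x) = x^{k−1} on K = [−1, 1], and names of our choosing. It assembles Vₙ, fits θ by least squares, and evaluates σ²v₀ᵀ(VₙᵀVₙ)⁻¹v₀, the variance of the predictor at an external point z₀ — the quantity whose minimization over measures the abstract refers to.

```python
import numpy as np

rng = np.random.default_rng(0)

n, d = 3, 1                       # degree and dimension; for d = 1, N = n + 1
N = n + 1
m = 20                            # number of observation points, m >= N
x = np.linspace(-1.0, 1.0, m)     # design points X in K = [-1, 1]

# Vandermonde matrix V_n for the monomial basis p_k(x) = x^(k-1).
V = np.vander(x, N, increasing=True)

theta_true = np.array([1.0, -2.0, 0.5, 3.0])   # illustrative coefficients
sigma = 0.1
y = V @ theta_true + sigma * rng.standard_normal(m)   # y = V_n theta + eps

# Least squares estimate of theta; cov(theta_hat) = sigma^2 (V^T V)^{-1}.
theta_hat, *_ = np.linalg.lstsq(V, y, rcond=None)
cov_theta = sigma**2 * np.linalg.inv(V.T @ V)

# Variance of the least squares predictor at an external point z0:
z0 = 2.0
v0 = z0 ** np.arange(N)           # basis vector (p_1(z0), ..., p_N(z0))
var_pred = v0 @ cov_theta @ v0    # sigma^2 * v0^T (V^T V)^{-1} v0
```

Changing how the m points are distributed over K changes (VₙᵀVₙ)⁻¹ and hence var_pred; the optimal design problem asks for the distribution (in the limit, the measure) minimizing this variance.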
