Smoothing and Approximation with Signals
The set Mn of n-monotone sequences is clearly of crucial importance in the subject of nonlinear smoothing. Many smoothers map onto or into Mn. Is there a possibility of selecting any that are optimal in some sense? Considering the operator B that maps x onto a “nearest” element in Mn, in the sense of the metric chosen, the questions that come to mind are: For which metrics does B exist, and if it maps onto an interval of sequences, can a suitable choice be made to define an unambiguous n-monotone image? How does the LULU-interval compare with the best?

The questions come from experience in Approximation Theory. Given a function f, we seek a best approximation Bf in a subspace S of approximating functions. For some norms the function Bf, though uniquely defined, is computationally complex to obtain. The idea then is to use a simpler operator P that is computationally efficient, like an interpolant, and to use the Lebesgue inequality to show that this sub-optimal approximation Pf is not “far” from the best approximation Bf. Thus Pf suffices for practical purposes, or can possibly be used as a good starting point for an iterative procedure converging to Bf.

Restrepo and Bovik did a logical thing by considering the problem of best approximation in all the usual p-norms, and in the corresponding seminorms for 0 < p < 1 as well. This idea had been relatively unexplored, but seems sound. The p-norms induce mappings d_p from X × X to R that are positive definite and symmetric, and therefore semi-metrics. They are positively homogeneous (d_p(γx, γy) = |γ| d_p(x, y)) and translation invariant. For p ≥ 1 a metric is obtained, but for 0 < p < 1 the triangle inequality does not hold. Restrepo and Bovik make a case for the semi-metric, but it is sufficient here to consider only the cases p ≥ 1. They prove the existence of such “best” locally monotone sequences and suggest computational procedures for obtaining them. The computation is, however, uninvitingly complex and non-local.
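For reference, the semi-metrics d_p referred to above can be written out explicitly; the formula below is the standard one and is supplied here only to fix notation (the text itself lists just the properties):

\[
d_p(x, y) \;=\; \Bigl(\sum_{i} |x_i - y_i|^p\Bigr)^{1/p}, \qquad 0 < p < \infty .
\]

Positive definiteness, symmetry, the positive homogeneity d_p(γx, γy) = |γ| d_p(x, y) and the translation invariance d_p(x + z, y + z) = d_p(x, y) are immediate from this expression, while the triangle inequality, and with it a genuine metric, is obtained exactly when p ≥ 1.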
As in the case of function approximation, it is thus natural to seek simple, possibly local, suboptimal methods that yield sequences in Mn that are near the optimal p-projection image, or best approximation. In linear approximation the Lebesgue inequality provides such a comparison, provided the operator norm is small. But for the problem at hand the candidate operators cannot be linear, for reasons argued previously. It is nevertheless worthwhile to pursue the idea with a view to generalization. It may even be speculated that the best approximation to a sequence x lies in the LULU-interval! This turns out not to be true in general, but it may still be true for a large class of sequences; and when it is not, this may actually reflect more on the best approximation than on the LULU-interval. The Lebesgue inequality requires that the operator be linear, idempotent and of bounded norm. Is the idea of the Lebesgue inequality transferable to the problem at hand, where the approximating set Mn is not a subspace and the operators are not linear, though often idempotent?
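For orientation, the inequality in question can be recalled in its standard form; the short derivation below is routine and is included only to make the roles of linearity, idempotence and bounded norm explicit. If P is linear and idempotent with Ps = s for every s in the approximating subspace S, and Bf is a best approximation to f from S, then P(Bf) = Bf, so that

\[
f - Pf = (I - P)(f - Bf), \qquad\text{and hence}\qquad \|f - Pf\| \;\le\; (1 + \|P\|)\,\|f - Bf\| .
\]

A modest bound on \|P\| thus guarantees that the cheaply computed Pf lies within a fixed factor of the best approximation; it is this kind of guarantee that one would like to carry over to the nonlinear setting of Mn.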
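As a concrete illustration of the kind of simple, local, suboptimal candidates under discussion, the sketch below computes the smoothings U_nL_n x and L_nU_n x of a finite sequence, together with their d_p distances from x. It assumes the standard definitions of L_n and U_n from the LULU literature, takes the interval between these two smoothings to be the LULU-interval referred to above, and handles the boundary by replicating end values; these choices, and all names in the code, are made for this sketch rather than taken from the text.

```python
import numpy as np

def lulu_L(x, n):
    """(L_n x)_i = max_{i-n<=j<=i} min_{j<=k<=j+n} x_k.
    Boundary values are replicated so that every window is full (an ad hoc choice
    for finite data; the theory is stated for bi-infinite sequences)."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    xp = np.concatenate([np.repeat(x[0], n), x, np.repeat(x[-1], n)])
    mins = np.array([xp[t:t + n + 1].min() for t in range(N + n)])
    return np.array([mins[i:i + n + 1].max() for i in range(N)])

def lulu_U(x, n):
    """(U_n x)_i = min_{i-n<=j<=i} max_{j<=k<=j+n} x_k (the dual of L_n)."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    xp = np.concatenate([np.repeat(x[0], n), x, np.repeat(x[-1], n)])
    maxs = np.array([xp[t:t + n + 1].max() for t in range(N + n)])
    return np.array([maxs[i:i + n + 1].min() for i in range(N)])

def d_p(x, y, p):
    """d_p(x, y) = (sum_i |x_i - y_i|^p)^(1/p); a metric for p >= 1."""
    diff = np.abs(np.asarray(x, dtype=float) - np.asarray(y, dtype=float))
    return float(np.sum(diff ** p) ** (1.0 / p))

if __name__ == "__main__":
    # A ramp contaminated by isolated up- and down-impulses.
    x = np.array([0, 0, 5, 1, 1, -4, 2, 2, 3, 3], dtype=float)
    n = 1
    unln = lulu_U(lulu_L(x, n), n)   # U_n L_n x
    lnun = lulu_L(lulu_U(x, n), n)   # L_n U_n x
    for p in (1, 2):
        print(f"p={p}:  d_p(x, U_nL_n x) = {d_p(x, unln, p):.3f}, "
              f"d_p(x, L_nU_n x) = {d_p(x, lnun, p):.3f}")
```

Comparing such distances with the distance from x to a best approximation in Mn (obtained, for instance, by the procedures of Restrepo and Bovik) is exactly the comparison the chapter is asking about.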