Random Sampling with Removal
Discrete & Computational Geometry

Kenneth L. Clarkson · Bernd Gärtner · Johannes Lengler · May Szedlák

Received: 28 May 2019 / Revised: 20 January 2020 / Accepted: 2 March 2020
© Springer Science+Business Media, LLC, part of Springer Nature 2020
Abstract
We study randomized algorithms for constrained optimization, in abstract frameworks that include, in strictly increasing generality: convex programming; LP-type problems; violator spaces; and a setting we introduce, consistent spaces. Such algorithms typically involve a step of finding the optimal solution for a random sample of the constraints. They exploit the condition that, in finite dimension δ, this sample optimum violates a provably small expected fraction of the non-sampled constraints, with the fraction decreasing in the sample size r. We extend such algorithms by considering the technique of removal, where a fixed number k of constraints are removed from the sample according to a fixed rule, with the goal of improving the solution quality. This may have the effect of increasing the number of violated non-sampled constraints. We study this increase, and bound it in a variety of general settings. This work is motivated by, and extends, results on removal as proposed for chance-constrained optimization. For many relevant values of r, δ, and k, we prove matching upper and lower bounds for the expected number of constraints violated by a random sample, after the removal of k constraints. For a large range of values of k, the new upper bounds improve the previously best bounds for LP-type problems, which moreover had only been known in special cases, and not in the generality we consider. Moreover, we show that our results extend from finite to infinite spaces, for chance-constrained optimization.

Keywords Random sampling · Removal · LP-type problems · Violator spaces · Consistent spaces · Chance-constrained optimization
Editor in Charge: János Pach

Dedicated to the memory of Ricky Pollack.

The research of the last author was supported by the Swiss National Science Foundation (SNF Project 200021_150055/1). Extended author information available on the last page of the article.
1 Introduction

At a high level, random sampling can be described as an efficient way of learning something about a problem by first solving a subproblem of much smaller size. A classical example is the problem of finding the smallest element in a sorted compact list [14, Prob. 11-3]. Such a list stores its elements in an array, but in arbitrary order; additional pointers link each element to the next smaller one in the list. Given a sorted compact list of size n, the smallest element can be found in expected time O(√n) as follows: sample a set of √n array elements at random, and, starting from their minimum, follow the predecessor pointers to the global minimum. The key fact is that the expected number of pointers to be followed is bounded by √n, and this yields the expected runtime. More generally, if we pick a sample R of r array elements, the expected number of pointers followed from the minimum of R is at most n/(r+1): the expected minimum rank among r positions chosen uniformly at random from {1, …, n} is (n+1)/(r+1), so the expected walk length is (n+1)/(r+1) − 1 < n/(r+1). A concrete sketch of this procedure appears below.
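To make the procedure concrete, the following is a minimal Python sketch, not the authors' code. The data layout (a values array holding the elements in arbitrary slots, and a smaller array giving the slot of the next-smaller element, with None at the global minimum) and all identifiers are illustrative assumptions.

```python
import math
import random

def find_min(values, smaller, r=None):
    """Expected O(sqrt(n))-time minimum search in a sorted compact list.

    values[i]  -- element stored in array slot i (slots in arbitrary order)
    smaller[i] -- slot of the next-smaller element, or None at the minimum
    r          -- sample size; defaults to about sqrt(n), as in the text
    """
    n = len(values)
    if r is None:
        r = math.isqrt(n) + 1  # roughly sqrt(n) sampled slots
    # Step 1: sample r distinct slots; keep the one holding the smallest value.
    cur = min(random.sample(range(n), min(r, n)), key=lambda i: values[i])
    # Step 2: follow next-smaller pointers to the global minimum. The expected
    # number of steps is at most n/(r+1), i.e., about sqrt(n) for r = sqrt(n).
    while smaller[cur] is not None:
        cur = smaller[cur]
    return values[cur]

# Tiny usage example: four elements in arbitrary slots.
values  = [30, 10, 40, 20]
smaller = [3, None, 0, 1]   # 30 -> 20, 10 is the minimum, 40 -> 30, 20 -> 10
assert find_min(values, smaller) == 10
```

With r = √n, both the sampling step and the expected pointer walk cost O(√n), which gives the overall expected running time.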