Obtaining minimax lower bounds: a review




REVIEW ARTICLE

Arlene K. H. Kim
Department of Statistics, Korea University, Seoul, South Korea

Received: 4 June 2018 / Accepted: 30 September 2019
© Korean Statistical Society 2020

Abstract
Minimax lower bounds determine the complexity of a given statistical problem by providing a fundamental limit that no procedure can beat. This paper reviews various aspects of obtaining minimax lower bounds, with a focus on recent developments. We first introduce the classical methods; then more involved lower bound constructions, such as testing two mixtures, the two directional method and the global metric entropy method, are presented with various examples, including manifold learning, approximation sets and neural nets. In addition, we consider two different types of restrictions on the set of estimators: we derive lower bounds when the set of estimators is required to be linear, and we discuss a private version of minimax lower bounds.

Keywords Minimax lower bounds · Le Cam · Assouad · Fano · Two directional method · Private estimation

1 Introduction

In a decision-theoretic framework, the aims are twofold. First, we wish to elucidate the difficulty of the problem at hand by providing a lower bound on the rate at which any estimator can converge to the population quantity of interest as the sample size increases. Second, we want to find estimators achieving fast rates of convergence; if we can obtain a procedure attaining the rate provided by the lower bound, we describe it as rate optimal. Typically, the performance of a statistical procedure depends on the true element of the parameter space. For this reason, many researchers have sought to measure overall performance by considering the minimax risk (e.g. Donoho and Johnstone 1994, 1998; Raimondo 1998; Cai and Low 2004; Haff et al. 2011; Raskutti et al. 2011; Gao et al. 2015), which controls the behaviour of an estimator at the worst point in the parameter space.
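To fix ideas, a standard formulation (with illustrative notation; the specific losses and parameter spaces vary by problem) writes the minimax risk over a parameter space \(\Theta\), for a loss \(L\) and \(n\) observations, as
\[
% best worst-case risk over \Theta; the infimum runs over all estimators \hat\theta_n
\mathcal{M}_n(\Theta) \;=\; \inf_{\hat\theta_n}\, \sup_{\theta \in \Theta}\, \mathbb{E}_\theta\, L\bigl(\hat\theta_n, \theta\bigr).
\]
A minimax lower bound is then a statement of the form \(\mathcal{M}_n(\Theta) \ge c\,\psi_n\) for some rate \(\psi_n\), and an estimator attaining \(\psi_n\) up to constants is rate optimal in the sense above.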


On the one hand, minimax lower bounds characterise the fundamental difficulty of a statistical problem; on the other, estimators achieving the minimax optimal risk are highly desirable and are usually regarded as carrying a gold-standard performance guarantee.

The problem of obtaining minimax lower bounds is closely related to testing hypotheses among models in the parameter space: no matter which estimator is considered, we prove that there is a level of statistical risk below which no procedure can go. The argument boils down to finding a ‘maximal’ subset of the parameter space on which the corresponding models cannot be well distinguished. Section 3 describes several effective constructions of such maximal sub-parameter spaces for obtaining minimax lower bounds. The simplest such subset contains two points; the related hypothesis testing ideas were pioneered by Le Cam (1973) and later developed into the more involved hypercube method by Assouad (1983).
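As a sketch of this two-point reduction (in a standard form of Le Cam's two-point lemma, with notation that is illustrative rather than taken from this excerpt): if the loss \(d\) is a metric and \(\theta_0, \theta_1 \in \Theta\), then
\[
% estimation over two points is at least as hard as testing P_{\theta_0} vs P_{\theta_1};
% the testing error is controlled by the total variation distance TV
\inf_{\hat\theta}\, \max_{j \in \{0,1\}}\, \mathbb{E}_{\theta_j}\, d\bigl(\hat\theta, \theta_j\bigr)
\;\ge\; \frac{d(\theta_0, \theta_1)}{2}\,\bigl(1 - \mathrm{TV}(P_{\theta_0}, P_{\theta_1})\bigr),
\]
where \(P_{\theta_j}\) denotes the distribution of the data under \(\theta_j\). The construction therefore balances two competing demands: \(\theta_0\) and \(\theta_1\) should be far apart in the loss, while the corresponding distributions remain statistically close.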