Fast Computation of the Multi-Points Expected Improvement with Applications in Batch Selection
Clément Chevalier and David Ginsbourger

Institut de Radioprotection et de Sûreté Nucléaire (IRSN), 31, avenue de la Division Leclerc, 92260 Fontenay-aux-Roses, France
IMSV, University of Bern, Alpeneggstrasse 22, 3012 Bern, Switzerland
{clement.chevalier,ginsbourger}@stat.unibe.ch
Abstract. The Multi-points Expected Improvement criterion (or q-EI) has recently been studied in batch-sequential Bayesian Optimization. This paper deals with a new way of computing q-EI, without using Monte-Carlo simulations, through a closed-form formula. The latter allows a very fast computation of q-EI for reasonably low values of q (typically, less than 10). New parallel kriging-based optimization strategies, tested on different toy examples, show promising results.
Keywords: Computer experiments · Expected improvement · Kriging · Parallel optimization

1 Introduction
In the last decades, metamodeling (or surrogate modeling) has been increasingly used for problems involving costly computer codes (or “black-box simulators”). Practitioners typically have a very limited evaluation budget and aim at selecting evaluation points carefully when attempting to solve a given problem. In global optimization, the focus is usually put on a real-valued function f defined over a d-dimensional source space. In this setting, Jones et al. [1] proposed the now famous Efficient Global Optimization (EGO) algorithm, relying on a kriging metamodel [2] and on the Expected Improvement (EI) criterion [3]. In EGO, the optimization is done by sequentially evaluating f at points maximizing EI. A crucial advantage of this criterion is its fast computation (besides, the analytical gradient of EI is implemented in [4]), so that the hard optimization problem is replaced by a series of much simpler ones. Coming back to the decision-theoretic roots of EI [5], a Multi-points Expected Improvement (also called “q-EI”) criterion for batch-sequential optimization was defined in [6] and further developed in [7,8]. Maximizing this criterion enables choosing batches of q > 1 points at which to evaluate f in parallel, and is of particular interest in the frequent case where several CPUs are simultaneously available.
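For reference, recall the standard definitions underlying these criteria in the minimization setting (the notation below is generic and not taken verbatim from the paper): writing Y for the kriging (Gaussian process) model conditioned on the current observations, m and s for its conditional mean and standard deviation, T for the best response value observed so far, and \(\Phi\), \(\varphi\) for the standard normal cdf and pdf,

\[
\mathrm{EI}(x) = \mathbb{E}\left[(T - Y(x))^{+}\right] = (T - m(x))\,\Phi\!\left(\frac{T - m(x)}{s(x)}\right) + s(x)\,\varphi\!\left(\frac{T - m(x)}{s(x)}\right),
\]
\[
q\text{-EI}(x_1,\dots,x_q) = \mathbb{E}\left[\Big(T - \min_{i=1,\dots,q} Y(x_i)\Big)^{+}\right].
\]

The one-point EI admits the well-known closed form above, whereas the expectation defining q-EI involves the minimum of a q-dimensional Gaussian vector, which is what makes its computation non-trivial.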
Even though an analytical formula was derived for the 2-EI in [7], the Monte Carlo (MC) approach of [8] for computing q-EI when q ≥ 3 makes the criterion itself expensive to evaluate, and particularly hard to optimize. A lot of effort has recently been devoted to addressing this problem. The pragmatic approach proposed by Ginsbourger and Le Riche [8] consists in circumventing a direct q-EI maximization and replacing it with simpler strategies where batches are obtained using an offline q-points EGO. In such strategies, the model updates are done using dummy response values, such as the kriging mean prediction (Kriging Believer) or a constant (Constant Liar), and the covariance parameters are re-estimated only when real data become available.
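As an illustration of the MC approach mentioned above, the following minimal sketch (in Python with NumPy; the function name and arguments are ours, not taken from the paper or from any particular package) estimates q-EI by sampling the kriging posterior at the q candidate points, assuming their posterior mean vector and covariance matrix have already been computed:

import numpy as np

def q_ei_monte_carlo(mean, cov, current_min, n_sim=10000, rng=None):
    # mean        : (q,) kriging posterior mean at the q candidate points
    # cov         : (q, q) kriging posterior covariance at those points
    # current_min : best (smallest) response observed so far
    # n_sim       : number of Gaussian samples used for the estimate
    rng = np.random.default_rng() if rng is None else rng
    # Draw joint posterior samples of the responses at the q points.
    samples = rng.multivariate_normal(mean, cov, size=n_sim)  # shape (n_sim, q)
    # Improvement of each simulated batch: positive part of the decrease
    # of the current minimum achieved by the best of the q points.
    improvement = np.maximum(current_min - samples.min(axis=1), 0.0)
    return improvement.mean()

Such an estimator is unbiased, but its accuracy improves only at the usual MC rate, which is precisely what makes a closed-form formula for q-EI, as developed in this paper, attractive.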