Modelling human active search in optimizing black-box functions



Antonio Candelieri · Riccardo Perego · Ilaria Giordani · Andrea Ponti · Francesco Archetti

© The Author(s) 2020

Abstract
Modelling human function learning has been the subject of intense research in the cognitive sciences. The topic is also relevant to black-box optimization, where information about the objective and/or constraints is not available and must be learned through function evaluations. In this paper, we focus on the relation between the behaviour of humans searching for the maximum of an unknown function and the probabilistic model used in Bayesian optimization. Both Gaussian processes and random forests have been considered as surrogate models of the unknown function: the Bayesian learning paradigm is central to active learning approaches that balance exploration and exploitation under uncertainty towards effective generalization in large decision spaces. We analyse experimentally how Bayesian optimization compares to humans searching for the maximum of an unknown 2D function. A set of controlled experiments with 60 subjects, using both surrogate models, confirms that Bayesian optimization provides a general model for representing individual patterns of active learning in humans.

Keywords Bayesian optimization · Cognitive models · Active learning · Search strategy
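The search task studied above, choosing where to evaluate an unknown 2D function under a limited budget, can be sketched minimally as follows. The objective, bounds, and budget here are illustrative assumptions, not the paper's experimental setup; the optimizer only ever sees input/output pairs, never the formula.

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x):
    """Hypothetical expensive black-box 2D objective.

    Its analytical form is hidden from the searcher, who only observes
    the value returned at each queried point.
    """
    return -((x[0] - 0.3) ** 2 + (x[1] + 0.5) ** 2)

budget = 20  # each evaluation is assumed costly, so queries are scarce
X = rng.uniform(-1.0, 1.0, size=(budget, 2))   # naive uniform sampling baseline
y = np.array([f(x) for x in X])

best = X[np.argmax(y)]
print("best point found:", best, "best value:", y.max())
```

A human subject, or a Bayesian optimizer, would instead choose each query adaptively based on the values already observed, which is exactly the behaviour the paper compares.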

Communicated by Yaroslav D. Sergeyev.

Antonio Candelieri
[email protected]

1 Introduction

We consider black-box optimization as the reference problem: the objective function and/or the constraints are analytically unknown, and evaluating them may be very expensive and noisy. In black-box situations, since we cannot assume any prior knowledge about the objective function f(x), any functional form is a priori admissible, and the value of the function at one point says nothing about its value at other points: the only way to develop a problem-specific algorithm is to assume a model of f(x) and to learn it through a sample of function values. Such an algorithm must be sample efficient, because the cost of function evaluations is the dominating cost. This problem has been addressed in several fields under different names, including active learning (Kruschke et al. 2008; Griffiths et al. 2008; Wilson et al. 2015), Bayesian optimization (BO) (Zhigljavsky and Zilinskas 2007; Candelieri et al. 2018; Archetti and Candelieri 2019), hyperparameter optimization (Eggensperger et al. 2019), Lipschitz global optimization (Sergeyev et al. 2020a, b; Lera et al. 2021), and others.

In BO, a probabilistic surrogate model of the objective function is built to summarize both our a priori beliefs about the objective function and the informative value of new observations. Two probabilistic frameworks are usually considered: Gaussian processes (GPs) and random forests (RFs), which offer alternative ways to update the beliefs as new data arrive and to provide an estimate of the expected value of the objective function and of the uncertainty in t
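The surrogate-plus-acquisition loop described above can be sketched with a GP surrogate from scikit-learn. The toy objective, bounds, initial-design size, and exploration weight are all illustrative assumptions; the posterior mean supplies the expected value of the objective and the posterior standard deviation supplies the uncertainty, and an Upper Confidence Bound acquisition trades the two off to pick the next query.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(1)

def f(X):
    # Hypothetical black-box 2D objective (unknown to the optimizer).
    return np.sin(3 * X[:, 0]) * np.cos(2 * X[:, 1])

# Small initial design of evaluated points.
X = rng.uniform(-1.0, 1.0, size=(8, 2))
y = f(X)

# GP surrogate: the posterior mean estimates f, the posterior
# standard deviation quantifies uncertainty away from observations.
gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
gp.fit(X, y)

# Upper Confidence Bound acquisition over a random candidate grid:
# high mean exploits, high uncertainty explores.
cand = rng.uniform(-1.0, 1.0, size=(500, 2))
mu, sigma = gp.predict(cand, return_std=True)
beta = 2.0  # exploration weight (an assumed, tunable constant)
x_next = cand[np.argmax(mu + beta * sigma)]
print("next point to evaluate:", x_next)
```

In a full BO loop, x_next would be evaluated on the true objective, appended to (X, y), and the surrogate refit, repeating until the evaluation budget is exhausted. An RF surrogate follows the same pattern, with uncertainty estimated from the spread of predictions across trees rather than from a GP posterior.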