The Runtime of the Compact Genetic Algorithm on Jump Functions
Benjamin Doerr
Received: 19 August 2019 / Accepted: 24 October 2020
© Springer Science+Business Media, LLC, part of Springer Nature 2020
Abstract
In the first and so far only mathematical runtime analysis of an estimation-of-distribution algorithm (EDA) on a multimodal problem, Hasenöhrl and Sutton (GECCO 2018) showed for any k = o(n) that the compact genetic algorithm (cGA) with any hypothetical population size μ = Ω(n·e^(4k) + n^(3.5+ε)) with high probability finds the optimum of the n-dimensional jump function with jump size k in time O(μ·n^(1.5) log n). We significantly improve this result for small jump sizes k ≤ (1/20) ln n − 1. In this case, already for μ = Ω(√n log n) ∩ poly(n) the runtime of the cGA with high probability is only O(μ√n). For the smallest admissible values of μ, our result gives a runtime of O(n log n), whereas the previous one only shows O(n^(5+ε)). Since it is known that the cGA with high probability needs at least Ω(μ√n) iterations to optimize the unimodal OneMax function, our result shows that the cGA, in contrast to most classic evolutionary algorithms, here is able to cross moderate-sized valleys of low fitness at no extra cost. For large k, we show that the exponential (in k) runtime guarantee of Hasenöhrl and Sutton is tight and cannot be improved, also not by using a smaller hypothetical population size: we prove that any choice of the hypothetical population size leads to a runtime that, with high probability, is at least exponential in the jump size k. This result might be the first non-trivial exponential lower bound for EDAs that holds for arbitrary parameter settings. To complete the picture, we show that the cGA with hypothetical population size μ = Ω(log n) with high probability needs Ω(μ√n + n log n) iterations to optimize any n-dimensional jump function. This bound was known for OneMax, but, as we also show, the usual domination arguments do not allow extending lower bounds on the performance of the cGA on OneMax to arbitrary functions with a unique optimum.
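For reference, the jump function discussed in the abstract is usually defined as follows; this is a minimal sketch using the standard formulation (OneMax shifted up by k, with a fitness valley of width k − 1 just below the optimum), which matches the setting described here.

```python
def jump(x, k):
    """Jump_k value of a bit string x (standard definition: OneMax plus k
    on the easy part, a fitness valley of width k - 1 below the optimum)."""
    n = len(x)
    ones = sum(x)          # OneMax value of x
    if ones <= n - k or ones == n:
        return k + ones    # gradient pointing toward more ones
    return n - ones        # the valley: more ones means lower fitness
```

An elitist hill-climber stuck on the local optimum at n − k ones must flip k specific bits at once, which is why jump functions are the classic benchmark for crossing fitness valleys.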
As a side result, we provide a simple general method based on parallel runs that, under mild conditions, (1) overcomes the need to specify a suitable population size and still gives a performance close to the one stemming from the best-possible population size, and (2) transforms EDAs with high-probability performance guarantees into EDAs with similar bounds on the expected runtime.
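The parallel-run idea can be sketched as follows. This is a simplified illustration, not the paper's exact scheduling: in each round a fresh instance with doubled hypothetical population size is started and all live instances advance one step, so no single population size has to be guessed in advance. The class `DummyEDA` and all names are hypothetical stand-ins.

```python
def parallel_runs(make_instance, is_optimum, mu0=2, max_rounds=30):
    """Sketch of a parallel-run scheme: each round starts a fresh instance
    with doubled population size and advances every instance started so
    far by one step; the first instance to produce the optimum wins."""
    instances, mu = [], mu0
    for _ in range(max_rounds):
        instances.append(make_instance(mu))
        mu *= 2
        for inst in instances:
            x = inst.step()
            if x is not None and is_optimum(x):
                return x
    return None  # budget exhausted


class DummyEDA:
    """Hypothetical stand-in for an EDA run: it succeeds only once its
    population size is large enough and it has taken a few steps."""
    def __init__(self, mu):
        self.mu, self.steps = mu, 0

    def step(self):
        self.steps += 1
        return "optimum" if self.mu >= 16 and self.steps >= 3 else None
```

Because the population sizes double, the total work until the first sufficiently large instance succeeds is only a constant factor above the work of that instance alone, which is how such schemes stay close to the best-possible parameter choice.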
Algorithmica
Keywords Estimation-of-distribution algorithm · Runtime analysis · Combinatorial optimization · Local optima · Parallel runs
1 Introduction

Estimation-of-distribution algorithms (EDAs) [52, 60] are a particular class of evolutionary algorithms. Whereas typical classic evolutionary algorithms evolve a population of (hopefully good) solutions, EDAs evolve a probabilistic model of the search space, that is, a probability distribution over the set of all solutions. The tar
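To make the "probabilistic model" concrete, the cGA studied in this paper can be sketched as follows: it maintains one frequency per bit position, samples two bit strings per iteration, and shifts each frequency by 1/μ toward the bit value of the better sample. This is a minimal sketch assuming the usual border values 1/n and 1 − 1/n; the stopping criterion and iteration budget are illustrative, not part of the algorithm's definition.

```python
import random

def cga(f, n, mu, iters=5000):
    """Minimal cGA sketch: evolve a frequency vector p instead of a
    population, nudging each frequency toward the better of two samples."""
    p = [0.5] * n          # initial model: uniform distribution on {0,1}^n
    best = None
    for _ in range(iters):
        x = [1 if random.random() < p[i] else 0 for i in range(n)]
        y = [1 if random.random() < p[i] else 0 for i in range(n)]
        if f(y) > f(x):
            x, y = y, x    # x is now the better of the two samples
        if best is None or f(x) > f(best):
            best = x
        for i in range(n):
            if x[i] != y[i]:  # frequencies move only where the samples differ
                p[i] += 1 / mu if x[i] == 1 else -1 / mu
                p[i] = min(1 - 1 / n, max(1 / n, p[i]))  # border values
    return best
```

The hypothetical population size μ only enters through the update step 1/μ: larger μ means a slower, more stable drift of the model, which is exactly the parameter the runtime bounds above are stated in.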