Self-adaptive potential-based stopping criteria for Particle Swarm Optimization with forced moves



Bernd Bassimir1 · Manuel Schmitt1 · Rolf Wanka1

Received: 27 June 2019 / Accepted: 18 August 2020
© The Author(s) 2020

Abstract

We study the variant of Particle Swarm Optimization that applies random velocities in a dimension instead of the regular velocity update equations as soon as the so-called potential of the swarm in this dimension falls below a certain small bound, set by the user. In this case, the swarm performs a forced move. In this paper, we are interested in how, by counting the forced moves, the swarm can decide for itself to stop its movement because it is improbable that it will find better candidate solutions than the best solution found so far. We formally prove that when the swarm is close to a (local) optimum, it behaves like a blindly searching cloud and that the frequency of forced moves exceeds a certain value that is independent of the objective function. Based on this observation, we define stopping criteria and evaluate them experimentally, showing that good candidate solutions can be found much faster than by setting an upper bound on the number of iterations, and that better solutions are found than with other stopping criteria from the literature.

Keywords: Particle Swarm Optimization · Stopping criterion · Heuristics
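To make the mechanism concrete, the following sketch implements a PSO loop with forced moves and a counting-based stopping rule in the spirit of the abstract. The potential measure, the threshold delta, the window length, and the stop ratio are illustrative assumptions for this sketch, not the paper's exact definitions (those are given in Sect. 3).

```python
import random

def pso_forced_moves(f, dim, n_particles=20, delta=1e-6,
                     max_iter=2000, window=100, stop_ratio=0.9,
                     lo=-5.0, hi=5.0, seed=0):
    """Illustrative PSO variant: whenever the swarm's potential in a
    dimension drops below delta, every particle performs a 'forced move'
    (a small random velocity) in that dimension instead of the regular
    velocity update.  The run stops early once forced moves dominate a
    sliding window of recent iterations, signalling convergence to a
    (local) optimum.  All parameter values here are illustrative."""
    rng = random.Random(seed)
    x = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    v = [[0.0] * dim for _ in range(n_particles)]
    pbest = [xi[:] for xi in x]
    pval = [f(xi) for xi in x]
    g = pbest[min(range(n_particles), key=lambda i: pval[i])][:]
    gval = min(pval)
    chi, c1, c2 = 0.72984, 1.496172, 1.496172  # standard constriction values
    forced_history = []  # 1 if an iteration used forced moves in every dimension
    for _ in range(max_iter):
        forced_dims = 0
        for d in range(dim):
            # a simple potential proxy in dimension d:
            # total velocity magnitude plus spread around the global attractor
            phi = sum(abs(v[i][d]) + abs(g[d] - x[i][d])
                      for i in range(n_particles))
            if phi < delta:
                forced_dims += 1
                for i in range(n_particles):
                    v[i][d] = rng.uniform(-delta, delta)  # forced move
            else:
                for i in range(n_particles):
                    r1, r2 = rng.random(), rng.random()
                    v[i][d] = chi * (v[i][d]
                                     + c1 * r1 * (pbest[i][d] - x[i][d])
                                     + c2 * r2 * (g[d] - x[i][d]))
        for i in range(n_particles):
            for d in range(dim):
                x[i][d] += v[i][d]
            val = f(x[i])
            if val < pval[i]:
                pval[i], pbest[i] = val, x[i][:]
                if val < gval:
                    gval, g = val, x[i][:]
        forced_history.append(1 if forced_dims == dim else 0)
        # stopping criterion: forced moves dominate the recent window
        if (len(forced_history) >= window
                and sum(forced_history[-window:]) >= stop_ratio * window):
            break
    return g, gval
```

On a smooth test function such as the sphere function, this sketch converges and then triggers the forced-move counter, so the loop usually terminates well before `max_iter`.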

* Bernd Bassimir
  [email protected]

  Manuel Schmitt
  [email protected]

  Rolf Wanka
  [email protected]

1 Department of Computer Science, University of Erlangen-Nuremberg, Cauerstraße 11, 91058 Erlangen, Germany

Swarm Intelligence

1 Introduction

Particle Swarm Optimization (PSO) is a meta-heuristic for so-called continuous black-box optimization problems, which means that the objective function is not explicitly known, e.g., in the form of a closed formula. PSO produces good results in a variety of different real-world applications. The classical PSO, first introduced by Kennedy and Eberhart (1995) and Eberhart and Kennedy (1995), is an iterative algorithm that tries to improve a fixed number of candidate solutions, each represented by a particle. The algorithm can be implemented and adapted to the user's application very easily, which has led to increased attention not only among computer scientists. In order to further improve performance, many authors propose changes to the original, "plain," or classical PSO scheme (for exact definitions, see Sect. 3) that improve the quality of the returned solution. A serious problem of PSO is the phenomenon of premature stagnation, i.e., the convergence of the swarm to a non-optimal solution. To address this issue, van den Bergh and Engelbrecht (2002) introduce Guaranteed Convergence PSO (GCPSO), where a random perturbation that scales with the number of global attractor updates is added to the best particle. They show on a set of benchmarks that GCPSO finds a local minimum with improved convergence speed, and they later prove convergence to local optima. Kennedy and Eberhart (1995) first realized that adding a noise ter