Initialization and Displacement of the Particles in TRIBES, a Parameter-Free Particle Swarm Optimization Algorithm

Summary. This chapter presents two ways of improving TRIBES, a parameter-free Particle Swarm Optimization (PSO) algorithm. PSO requires the tuning of a set of parameters, and the performance of the algorithm is strongly linked to the values given to that parameter set. However, finding the optimal set of parameters is a very hard and time-consuming problem. Clerc therefore devised TRIBES, a fully adaptive algorithm that avoids parameter fitting. Experimental results are encouraging but still lag behind those of many competing algorithms. The purpose of this chapter is to demonstrate how TRIBES can be improved by choosing a new way of initializing the particles and by hybridizing it with an Estimation of Distribution Algorithm (EDA). These two improvements aim at allowing the algorithm to explore the search space as widely as possible and to avoid premature convergence to a local optimum. The results obtained show that the proposed algorithm performs as well as or better than other algorithms.

Keywords: Particle swarm optimization, estimation of distribution algorithm, continuous optimization.

1 Introduction

Particle Swarm Optimization (PSO) is a population-based optimization technique proposed by Kennedy and Eberhart in 1995 [6]. Like ant colony algorithms or genetic algorithms, PSO is a biologically inspired metaheuristic. The method is inspired by the social behavior of animals evolving in swarms, such as birds or fish. The principle is to use collaboration among a population of simple search agents to find the optimum in a function space. More precisely, a simple agent basically knows only the characteristics of its surroundings, but, by communicating with the other particles of the swarm, it also acquires a global knowledge of the search space, as can be seen in a fish school trying to find something to eat. PSO is also a particular case among metaheuristics, because it was directly designed for solving continuous problems. This point matters because most applications deal with continuous problems. A state of the art of PSO and the concepts linked to it is available in [2].

C. Cotta et al. (Eds.): Adaptive and Multilevel Metaheuristics, SCI 136, pp. 199–219, 2008. c Springer-Verlag Berlin Heidelberg 2008 springerlink.com
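To make the collaboration principle concrete, the following is a minimal sketch of the canonical PSO of Kennedy and Eberhart, not of TRIBES itself: each particle moves under an inertia term plus a "cognitive" pull toward its own best position and a "social" pull toward the swarm's best. The coefficient values (w = 0.7, c1 = c2 = 1.5) are common illustrative choices, not values prescribed by this chapter.

```python
import random

def pso(f, dim, bounds, n_particles=20, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Minimize f over a box [lo, hi]^dim with canonical (non-adaptive) PSO."""
    lo, hi = bounds
    # Random initial positions; zero initial velocities.
    x = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    v = [[0.0] * dim for _ in range(n_particles)]
    pbest = [xi[:] for xi in x]            # each particle's own best position
    pbest_val = [f(xi) for xi in x]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]   # swarm's best position

    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # Inertia + cognitive (own memory) + social (swarm memory) terms.
                v[i][d] = (w * v[i][d]
                           + c1 * r1 * (pbest[i][d] - x[i][d])
                           + c2 * r2 * (gbest[d] - x[i][d]))
                # Move, clamping to the search box.
                x[i][d] = min(max(x[i][d] + v[i][d], lo), hi)
            val = f(x[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = x[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = x[i][:], val
    return gbest, gbest_val
```

For example, `pso(lambda p: sum(t * t for t in p), 2, (-5.0, 5.0))` drives the sphere function close to its optimum at the origin. The point of the sketch is the parameter set (w, c1, c2, swarm size): these are exactly the values a user must hand-tune in classical PSO and that TRIBES seeks to eliminate.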

Y. Cooren, M. Clerc, and P. Siarry

Like other metaheuristics, PSO has the drawback of comprising many parameters that have to be defined. The difficulty is that the performance of the algorithm is strongly linked to the values given to these parameters, which implies that finding the optimal combination of parameter values is difficult and time-consuming. Moreover, in real problems the parameters are often correlated, which makes the choice even harder. Research efforts have therefore aimed at reducing the number of "free" parameters. The aim is to design algorithms that are as efficient as classical ones, but with fewer parameters. Such algorithms can be called "adaptive algorithms", because information gradually collected