Particle Swarm Optimization and Differential Evolution Algorithms: Technical Analysis, Applications and Hybridization Perspectives




1 Department of Electronics and Telecommunication Engineering, Jadavpur University, Kolkata 700032, India, [email protected], [email protected]
2 Center of Excellence for Quantifiable Quality of Service, Norwegian University of Science and Technology, Norway, [email protected]

Summary. Since the beginning of the nineteenth century, a significant evolution in optimization theory has been noticed. Classical linear programming and traditional non-linear optimization techniques such as Lagrange's multiplier, Bellman's principle and Pontryagin's principle were prevalent until this century. Unfortunately, these derivative-based optimization techniques can no longer be used to determine the optima on rough non-linear surfaces. One solution to this problem has already been put forward by the evolutionary algorithms research community. The genetic algorithm (GA), enunciated by Holland, is one such popular algorithm. This chapter presents two recent algorithms for evolutionary optimization, well known as particle swarm optimization (PSO) and differential evolution (DE). The algorithms are inspired by biological and sociological motivations and can handle optimality on rough, discontinuous and multimodal surfaces. The chapter explores several schemes for controlling the convergence behaviors of PSO and DE by a judicious selection of their parameters. Special emphasis is given to the hybridization of PSO and DE with other soft computing tools. The article finally discusses the mutual synergy of PSO with DE, leading to a more powerful global search algorithm, and its practical applications.

1 Introduction

The aim of optimization is to determine the best-suited solution to a problem under a given set of constraints. Over the decades, researchers have come up with different solutions to linear and non-linear optimization problems. Mathematically, an optimization problem involves a fitness function describing the problem and a set of constraints representing the solution space. Unfortunately, most traditional optimization techniques are centered on evaluating the first derivatives to locate the optima on a given constrained surface. Because of the difficulty of evaluating first derivatives on many rough and discontinuous optimization surfaces, several derivative-free optimization algorithms have emerged in recent times. The optimization problem is nowadays represented as an intelligent search problem, where one or more agents are employed to determine the optima on a search landscape representing the constrained surface of the optimization problem [1]. In the latter quarter of the twentieth century, Holland [2] pioneered a new concept of evolutionary search

S. Das et al.: Particle Swarm Optimization and Differential Evolution Algorithms: Technical Analysis, Applications and Hybridization Perspectives, Studies in Computational Intelligence (SCI) 116, 1–38 (2008). © Springer-Verlag Berlin Heidelberg 2008, www.springerlink.com
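To make the idea of derivative-free, agent-based search concrete, the following is a minimal sketch of the canonical PSO loop described in this chapter, applied to a simple sphere (sum-of-squares) test function. The parameter values (inertia weight w = 0.7, acceleration coefficients c1 = c2 = 1.5, swarm size, and iteration count) are illustrative choices, not the chapter's prescribed settings:

```python
import random

def sphere(x):
    # Smooth test fitness: f(x) = sum of squares, global minimum 0 at the origin.
    # PSO needs no derivative of this function -- only fitness evaluations.
    return sum(xi * xi for xi in x)

def pso(fitness, dim=2, n_particles=20, iters=100,
        w=0.7, c1=1.5, c2=1.5, bounds=(-5.0, 5.0)):
    lo, hi = bounds
    # Each agent (particle) has a position and a velocity in the search landscape.
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    # Personal bests and the global (swarm) best.
    pbest = [p[:] for p in pos]
    pbest_val = [fitness(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]

    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # Velocity update: inertia + cognitive pull + social pull.
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                # Position update, clamped to the feasible region.
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            val = fitness(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

random.seed(0)  # fixed seed so the run is reproducible
best, best_val = pso(sphere)
print(best, best_val)
```

The two stochastic pull terms (toward the particle's own best and toward the swarm's best) are what replace gradient information: improvement is driven purely by sampled fitness comparisons, which is why the method remains applicable on rough or discontinuous surfaces.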