


REGULAR RESEARCH PAPER

Chaotic-based grey wolf optimizer for numerical and engineering optimization problems

Chao Lu1 · Liang Gao2 · Xinyu Li2 · Chengyu Hu1 · Xuesong Yan1 · Wenyin Gong1

Received: 7 August 2017 / Accepted: 23 September 2020
© Springer-Verlag GmbH Germany, part of Springer Nature 2020

Abstract

Grey wolf optimizer (GWO) is a recently proposed optimization algorithm inspired by the hunting behavior of grey wolves in the wild. The main challenge of GWO is that it easily falls into local optima. Owing to the ergodicity of chaos, this paper incorporates chaos theory into GWO to strengthen the algorithm's performance. Three different chaotic strategies with eleven chaotic map functions are investigated, and the most suitable combination is adopted as the proposed chaotic GWO. Extensive experiments compare the proposed chaotic GWO against other metaheuristics, including adaptive differential evolution (JADE), cellular genetic algorithm, artificial bee colony, evolutionary strategy, biogeography-based optimization, comprehensive learning particle swarm optimization, and the original GWO. In addition, the proposal is also successfully applied to practical engineering problems. Experimental results demonstrate that the chaotic GWO outperforms the compared metaheuristics on most of the test problems and engineering optimization problems.

Keywords Grey wolf optimizer · Chaos theory · Global optimization · Engineering optimization · Metaheuristic
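To make the abstract's idea concrete, the following minimal Python sketch drives one coefficient of the standard GWO position update with a logistic map, one of the eleven chaotic maps studied in this line of work. This is an illustrative assumption rather than the paper's exact scheme: the choice of the logistic map, of which coefficient to make chaotic, and all function names and parameter values are ours.

import numpy as np

def logistic_map(x, mu=4.0):
    # One iteration of the logistic map, a classic chaotic map on (0, 1).
    return mu * x * (1.0 - x)

def chaotic_gwo(f, dim, bounds, pop_size=30, max_iter=500, seed=0):
    # Standard GWO, except the coefficient r1 is drawn from a chaotic
    # sequence instead of a uniform random number (one possible strategy;
    # the paper compares several).
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    wolves = rng.uniform(lo, hi, size=(pop_size, dim))
    fitness = np.apply_along_axis(f, 1, wolves)
    chaos = rng.uniform(0.01, 0.99)  # initial chaotic state in (0, 1)
    for t in range(max_iter):
        # The three best wolves (alpha, beta, delta) guide the pack.
        alpha, beta, delta = wolves[np.argsort(fitness)[:3]]
        a = 2.0 - 2.0 * t / max_iter  # decreases linearly from 2 to 0
        for i in range(pop_size):
            new_pos = np.zeros(dim)
            for leader in (alpha, beta, delta):
                chaos = logistic_map(chaos)  # chaotic value replaces rand()
                A = 2.0 * a * chaos - a
                C = 2.0 * rng.random()
                D = np.abs(C * leader - wolves[i])
                new_pos += leader - A * D
            wolves[i] = np.clip(new_pos / 3.0, lo, hi)
            fitness[i] = f(wolves[i])
    best = np.argmin(fitness)
    return wolves[best], fitness[best]

# Example: minimize the 10-dimensional sphere function.
best_x, best_f = chaotic_gwo(lambda x: float(np.sum(x**2)), dim=10, bounds=(-100, 100))
print(best_f)

Because the chaotic sequence is deterministic once its initial state is fixed, it perturbs the search in an ergodic, non-repeating way, which is the property the abstract credits with helping GWO escape local optima.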

Corresponding author: Chengyu Hu ([email protected])

1 School of Computer Science, China University of Geosciences, Wuhan, China

2 State Key Lab of Digital Manufacturing Equipment and Technology, Huazhong University of Science and Technology, Wuhan, China

1 Introduction

Global optimization aims at finding the maximum/minimum of an objective function. Many optimization algorithms have been employed to address such problems. In general, these approaches can be classified into two groups: (1) deterministic and (2) approximate approaches. Deterministic methods, which rely on gradient information, follow a strict procedure: for a given problem, they produce the same solution whenever their initial values are set to the same point. Their merit is fast convergence. Unfortunately, they require the optimization problem to have mathematical properties (such as differentiability) that most problems lack. In contrast to deterministic algorithms, gradient-free approximate algorithms are based on random walks, so their final results may differ from run to run. The advantage of approximate algorithms is that they do not depend on rigorous mathematical characteristics of the optimization problem; in addition, their mechanisms are easy to implement. Nevertheless, these approximate algorithms are easily trapped in local optima. Recently, nature-inspired metaheuristics (a kind of gradient-free approximate algorithm) have shown powerful performance in solving optimization problems and have been broadly applied