Enhanced Salp Swarm Algorithm based on random walk and its application to training feedforward neural networks
METHODOLOGIES AND APPLICATION
Yongqiang Yin1 · Qiang Tu1 · Xuechen Chen1
© Springer-Verlag GmbH Germany, part of Springer Nature 2020
Abstract
The Salp Swarm Algorithm (SSA) is a recent metaheuristic that has shown superiority over well-known algorithms such as Particle Swarm Optimization and Grey Wolf Optimizer in solving challenging optimization problems. Despite this strong performance, SSA still suffers from insufficient convergence speed, and its ability to avoid local optima is not as good as that of evolutionary algorithms that use crossover operators. In this paper, we propose a modified Salp Swarm Algorithm (m-SSA) that improves the exploitation and exploration of SSA by integrating a random walk strategy and further enhances exploration by adding a new controlling parameter. In addition, a simulated annealing-type acceptance criterion is adopted to decide whether the fittest follower's position is accepted as the new best leader position. The performance of the proposed algorithm is benchmarked on a set of classical functions and on the CEC2014 test suite. m-SSA outperforms SSA significantly on most test functions and also presents very competitive results when compared with other state-of-the-art metaheuristics. Finally, we apply the proposed algorithm to training feedforward neural networks (FNNs), and the results demonstrate the effectiveness and efficiency of m-SSA.

Keywords Metaheuristic search · Salp Swarm Algorithm · Random walk · Optimization techniques · Simulated annealing
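As a concrete illustration of the simulated annealing-type acceptance criterion mentioned in the abstract, the following Python sketch shows how the fittest follower's position might compete with the current leader (best) position. This is a minimal sketch only: the function names, the temperature handling and the minimization assumption are illustrative assumptions, not the paper's exact formulation.

    import math
    import random

    def sa_accept(best_fitness, candidate_fitness, temperature):
        """Simulated-annealing-type acceptance: always accept an improving
        candidate, otherwise accept with a probability that decays with the
        fitness gap and the current temperature (minimization assumed)."""
        if candidate_fitness < best_fitness:
            return True
        return random.random() < math.exp((best_fitness - candidate_fitness) / temperature)

    def update_leader(leader_pos, leader_fit, followers, fitness, temperature):
        """Hypothetical per-iteration step: let the fittest follower
        challenge the current leader position under sa_accept."""
        cand_pos = min(followers, key=fitness)   # fittest follower position
        cand_fit = fitness(cand_pos)
        if sa_accept(leader_fit, cand_fit, temperature):
            return cand_pos, cand_fit
        return leader_pos, leader_fit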
Communicated by V. Loia.

This work was supported by the National Natural Science Foundation of China (Grant No. U1530120).

Corresponding author: Xuechen Chen ([email protected]), School of Electronics and Information Technology, Sun Yat-sen University, Guangzhou, Guangdong, China

1 Introduction

In order to solve many nonconvex or nondifferentiable optimization problems, a large number of metaheuristic search algorithms have been used extensively, and they have shown great potential for solving challenging real-world problems. Metaheuristic search can often find high-quality solutions in an acceptable amount of time (Yang 2014). In fact, metaheuristic algorithms are among the most effective methods for many difficult optimization problems (Lourenço et al. 2019), and they have received increasing attention over the years. Metaheuristic algorithms in the literature can be divided into two families: trajectory-based and population-based.
In the first category, a metaheuristic algorithm performs optimization using a single solution that moves through the search space. The solution is typically perturbed and improved by randomization and some form of guidance, such as a greedy acceptance criterion, over a fixed number of iterations (a minimal skeleton of such a single-solution search is sketched below). The most popular algorithms in this category are Simulated Annealing (SA) (Kirkpatrick et al. 1983), Tabu Search (TS) (Glover 1989) and Iterated Local Search (ILS) (Lourenço et al. 2001, 2019). The advan
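The sketch below, which is not taken from the paper, illustrates the generic single-solution skeleton shared by such trajectory-based methods: a randomized neighbourhood move followed by a greedy acceptance test over a fixed iteration budget. The objective, neighbourhood operator and parameter values are placeholder assumptions.

    import random

    def trajectory_search(objective, x0, neighbor, iterations=1000):
        """Generic single-solution (trajectory-based) metaheuristic skeleton:
        perturb the current solution and keep the move under a greedy
        acceptance rule (minimization assumed)."""
        current, current_val = x0, objective(x0)
        for _ in range(iterations):
            candidate = neighbor(current)          # randomized perturbation
            candidate_val = objective(candidate)
            if candidate_val <= current_val:       # greedy criterion
                current, current_val = candidate, candidate_val
        return current, current_val

    # Example: minimize the 1-D sphere function with Gaussian moves.
    best, best_val = trajectory_search(
        objective=lambda x: x * x,
        x0=random.uniform(-5, 5),
        neighbor=lambda x: x + random.gauss(0, 0.1),
    )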