Group competition-cooperation optimization algorithm
Haijuan Chen1,2 · Xiang Feng1,2 · Huiqun Yu1,2
© Springer Science+Business Media, LLC, part of Springer Nature 2020
Abstract To solve complex practical problems, deep learning models cannot be limited to architectures such as deep neural networks; to deepen the learning model, we must actively explore a variety of deep models. Based on this, we propose a deep evolutionary algorithm, the group competition-cooperation optimization (GCCO) algorithm. Unlike deep learning, depth in the GCCO algorithm is reflected mainly in multi-step iterations, feature transformation, and sufficiently complex models. Firstly, a bio-group model is introduced to simulate the behavior of animals hunting for food. Secondly, following the natural rules of mutual benefit and survival of the fittest, competition and cooperation models are introduced. Furthermore, in the individual mobility strategy, the wanderers adopt a stochastic movement strategy based on feature transformation to avoid local optima, while the followers adopt a variable-step-size region replication method to balance convergence speed and optimization precision. Finally, the GCCO algorithm and three comparison algorithms are tested on ten optimization functions. In the practical problem of siting gas stations in Shanghai to improve the timely rate, the GCCO algorithm achieves better performance than the three comparison algorithms. Moreover, the GCCO algorithm takes less time than Global Search to achieve comparable results.

Keywords Deep evolution · Competition model · Cooperation model · Feature transformation
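The population structure described in the abstract can be made concrete with a rough sketch. The code below is not the authors' implementation; it only illustrates, under assumed details, a population split into followers (which replicate toward the current best region with a shrinking step size) and wanderers (which take randomized, feature-transformed jumps to escape local optima). The function names, step-size schedule, random feature transformation, and sphere test objective are all hypothetical choices for illustration.

```python
import numpy as np

def sphere(x):
    """Example objective: minimize the sphere function (hypothetical test case)."""
    return float(np.sum(x ** 2))

def gcco_sketch(obj, dim=10, pop_size=30, n_wanderers=6, iters=200,
                lower=-5.0, upper=5.0, seed=0):
    """Illustrative GCCO-style loop: NOT the paper's algorithm, only its outline."""
    rng = np.random.default_rng(seed)
    pop = rng.uniform(lower, upper, size=(pop_size, dim))
    fitness = np.array([obj(ind) for ind in pop])

    for t in range(iters):
        best = pop[np.argmin(fitness)].copy()
        # Variable step size: shrinks over iterations (assumed schedule).
        step = 0.9 * (1.0 - t / iters) + 0.05

        order = np.argsort(fitness)           # competition: rank by fitness
        wanderers = order[-n_wanderers:]      # worst individuals become wanderers
        followers = order[:-n_wanderers]      # the rest cooperate as followers

        for i in followers:
            # Cooperation: replicate toward the best-known region with a shrinking step.
            pop[i] += step * rng.random(dim) * (best - pop[i])

        for i in wanderers:
            # Stochastic move through a random feature transformation (random mixing matrix).
            mix = rng.normal(size=(dim, dim)) / np.sqrt(dim)
            pop[i] = np.clip(best + step * mix @ rng.normal(size=dim), lower, upper)

        fitness = np.array([obj(ind) for ind in pop])

    best_idx = int(np.argmin(fitness))
    return pop[best_idx], float(fitness[best_idx])

if __name__ == "__main__":
    x_best, f_best = gcco_sketch(sphere)
    print("best fitness:", f_best)
```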
1 Introduction

The deep forest [1] proposed by Zhou Zhihua et al. in 2017 has attracted considerable attention. It is a deep learning model based on decision-tree forests rather than neural networks, and its performance is comparable to that of deep neural networks. Deep learning models have multiple hidden layers, the ability to learn representations, and large amounts of training data, which together make the models sufficiently complex [2, 3]. As pointed out in [1], if these attributes can be given to other suitable learning models, those models may achieve effects similar to those of deep neural networks. In recent years, evolutionary computation, which can be roughly divided into two categories, physics-inspired optimization techniques and biology-inspired optimization techniques, has also made great progress. There is relatively little research on physics-inspired optimization, but a great deal on biology-inspired optimization. For example, in 2008, D. Simon proposed Biogeography-Based Optimization (BBO) by simulating the migration of species among habitats.

Corresponding author: Xiang Feng, [email protected]
Haijuan Chen, [email protected]
Huiqun Yu, [email protected]

1 Department of Computer Science and Engineering, East China University of Science and Technology, Shanghai 200237, China
2 Shanghai Engineering Research Center of Smart Energy, Shanghai 200237, China