Performance of evolutionary wavelet neural networks in acrobot control tasks
ORIGINAL ARTICLE

Maryam Mahsal Khan 1 · Alexandre Mendes 2 · Stephan K. Chalup 2

Received: 24 June 2018 / Accepted: 3 July 2019
© Springer-Verlag London Ltd., part of Springer Nature 2019
Abstract

Wavelet neural networks (WNN) combine the strength of artificial neural networks and the multiresolution ability of wavelets. Determining the structure and, more specifically, the appropriate number of neurons in a WNN is a time-consuming process. We propose a type of multidimensional evolutionary WNN and, using an acrobot, evaluate this approach with two benchmark nonlinear control tasks: a height task and a hand-stand task. To facilitate direct comparison with other methods, we report on swing-up and balance times. In 50 trials, the controllers produced faster swing-up times, 1.0 s for the best controller and 2.3 s on average, than any other methods reported in the literature. Moreover, the controller with the best swing-up time had a maximum balance time of 1.25 s, surpassing most other methods.

Keywords: Evolutionary algorithms · Wavelet neural networks · Acrobot · Intelligent control
1 CSIRO Energy Technology, Newcastle, NSW, Australia
2 Interdisciplinary Machine Learning Research Group (IMLRG), School of Electrical Engineering and Computing, The University of Newcastle, Callaghan, NSW 2308, Australia

Correspondence: Maryam Mahsal Khan ([email protected]); Alexandre Mendes ([email protected]); Stephan K. Chalup ([email protected])

1 Introduction

Wavelet neural networks (WNN) combine the strength of an artificial neural network's (ANN) learning with wavelet decomposition [1]. The finite support and self-similarity of wavelets make them well suited to many system identification problems and to nonlinear, complex dynamics. Reported applications include control of a robotic manipulator [2], adaptive load-frequency control in power systems [3], a hydraulic generator unit [4], direct-current motor speed control [5], an unmanned aerial vehicle [6] and the flocculation process in sewage treatment [7]. A review of the relevant literature indicates that WNN parameters are commonly optimised through gradient-descent algorithms such as the stochastic gradient [8] or the
conjugate gradient [9]. However, because WNNs are nonlinear in their parameters, more advanced techniques for finding optimal parameter values [10] have been required, including genetic algorithms [11, 12], evolutionary algorithms [13] and evolutionary programming [14]. When training a WNN, it is critical to define the number of neurons in the hidden layer, which is usually done through trial and error or simple heuristics [5]. Certain characteristics of the problem can help indicate a suitable number of neurons. Too many neurons may increase computational time and cause over-fitting; too few may prevent the WNN from capturing the variability of the data, rendering the method ineffective [15]. These issues have been researched extensively. In 2002, Yongyong et al. [13] proposed
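To make the hidden-layer structure discussed above concrete, the following is a minimal sketch of a single-hidden-layer WNN forward pass. It is illustrative only and not the architecture proposed in this paper: the choice of the Mexican-hat mother wavelet, the product-form multidimensional wavelon, and all names (`mexican_hat`, `WaveletNeuralNetwork`) are assumptions, although both choices are common in the WNN literature. Each hidden unit (wavelon) has its own translation and dilation parameters, which, together with the output weights, are the quantities that training must optimise.

```python
import numpy as np

def mexican_hat(z):
    """Mexican-hat (Ricker) mother wavelet, a common WNN activation (assumed here)."""
    return (1.0 - z**2) * np.exp(-0.5 * z**2)

class WaveletNeuralNetwork:
    """Illustrative single-hidden-layer WNN: each hidden unit is a wavelon
    with its own translations b and dilations a, followed by a linear output."""

    def __init__(self, n_inputs, n_wavelons, seed=0):
        rng = np.random.default_rng(seed)
        self.a = rng.uniform(0.5, 2.0, (n_wavelons, n_inputs))   # dilations
        self.b = rng.uniform(-1.0, 1.0, (n_wavelons, n_inputs))  # translations
        self.w = rng.normal(0.0, 0.1, n_wavelons)                # output weights

    def forward(self, x):
        # Multidimensional wavelon: product of 1-D wavelets across inputs
        z = (x - self.b) / self.a              # shape (n_wavelons, n_inputs)
        phi = np.prod(mexican_hat(z), axis=1)  # one activation per wavelon
        return float(self.w @ phi)             # linear output layer
```

The fixed value of `n_wavelons` in this sketch is exactly the design decision the text describes: it must be chosen in advance, and the evolutionary approach studied in this paper aims to determine it automatically rather than by trial and error.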