Evolving Artificial Neural Network Ensembles




M.M. Islam (1) and X. Yao (2)

1 Department of Computer Science and Engineering, Bangladesh University of Engineering and Technology, Dhaka 1000, Bangladesh, [email protected]
2 Centre of Excellence for Research in Computational Intelligence and Applications (CERCIA), School of Computer Science, University of Birmingham, Edgbaston, Birmingham B15 2TT, UK, [email protected]

Portions reprinted with permission from X. Yao and M.M. Islam, “Evolving artificial neural network ensembles,” IEEE Computational Intelligence Magazine, 3(1):31–42, February 2008. Copyright IEEE.

1 Introduction

Artificial neural networks (ANNs) and evolutionary algorithms (EAs) are both abstractions of natural processes. In the mid-1990s, they were combined into a single computational model in order to exploit the learning power of ANNs and the adaptive capabilities of EAs. Evolutionary ANNs (EANNs) are the outcome of such a model. They refer to a special class of ANNs in which evolution is another fundamental form of adaptation in addition to learning [52–57]. The essence of EANNs is their adaptability to a dynamic environment. The two forms of adaptation in EANNs – namely evolution and learning – make their adaptation to a dynamic environment much more effective and efficient. In a broader sense, EANNs can be regarded as a general framework for adaptive systems – in other words, systems that can change their architectures and learning rules appropriately without human intervention.

EAs have been introduced into ANNs at roughly three different levels: (i) connection weights, (ii) architectures, and (iii) learning rules. The evolution of connection weights introduces an adaptive and global approach to training, especially in the reinforcement learning and recurrent network learning paradigms, where gradient-based training algorithms often experience great difficulties. Architecture evolution enables ANNs to adapt their topologies to different tasks without human intervention. The evolution of learning rules can be regarded as a process of ‘learning to learn’ in ANNs, where the adaptation of learning rules is achieved through evolution.
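The short Python sketch below is a minimal illustration of the first of these levels: it evolves the connection weights of a small fixed-architecture feedforward network with a simple (mu + lambda) evolutionary algorithm. The XOR task, the 2-3-1 topology and all parameter settings are illustrative assumptions rather than anything prescribed in this chapter; the point is only that no gradient information is used anywhere, which is why such schemes remain applicable where gradient-based training struggles.

# A minimal sketch (not the authors' algorithm) of evolving the connection
# weights of a fixed-architecture feedforward ANN with a simple (mu + lambda)
# evolutionary algorithm. Task, network size and EA parameters are
# illustrative assumptions chosen only to make the idea concrete.
import numpy as np

rng = np.random.default_rng(0)

# Toy task: XOR, a classic problem a small ANN can represent.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])

N_IN, N_HID = 2, 3                       # fixed 2-3-1 architecture
N_W = N_IN * N_HID + N_HID + N_HID + 1   # weights + biases, flattened

def forward(w, x):
    # Evaluate the 2-3-1 network for one genotype w (flat weight vector).
    W1 = w[:N_IN * N_HID].reshape(N_IN, N_HID)
    b1 = w[N_IN * N_HID:N_IN * N_HID + N_HID]
    W2 = w[N_IN * N_HID + N_HID:N_IN * N_HID + 2 * N_HID]
    b2 = w[-1]
    h = np.tanh(x @ W1 + b1)
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))

def fitness(w):
    # Negative mean squared error: higher is better, no gradients needed.
    return -np.mean((forward(w, X) - y) ** 2)

MU, LAM, SIGMA, GENS = 10, 40, 0.3, 300
pop = rng.normal(0.0, 1.0, size=(MU, N_W))      # initial parent population

for gen in range(GENS):
    # Each offspring is a Gaussian-mutated copy of a randomly chosen parent.
    parents = pop[rng.integers(0, MU, size=LAM)]
    offspring = parents + rng.normal(0.0, SIGMA, size=(LAM, N_W))
    # (mu + lambda) selection: keep the best MU of parents and offspring.
    union = np.vstack([pop, offspring])
    scores = np.array([fitness(w) for w in union])
    pop = union[np.argsort(scores)[-MU:]]

best = pop[-1]
print("outputs:", np.round(forward(best, X), 2))  # should approach [0, 1, 1, 0]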

There is strong biological and engineering evidence to support the assertion that the information processing capability of ANNs is determined by their architecture. A large amount of the literature is therefore devoted to finding optimal or near-optimal ANN architectures by using EAs (see the review papers [48, 54, 59]). However, many real-world problems are too large and too complex for a single ANN alone to solve. There are ample examples from both natural and artificial systems showing that an integrated system consisting of several subsystems can reduce the total system complexity while still solving a difficult problem satisfactorily. Many successes in evolutionary computation have already demonstrated this. A typical example of the s