Locate the Bounding Box of Neural Networks with Intervals



Nikolaos Anastasopoulos (1) · Ioannis G. Tsoulos (2) · Evangelos Karvounis (2) · Alexandros Tzallas (2)

Accepted: 1 September 2020
© Springer Science+Business Media, LLC, part of Springer Nature 2020

Abstract

A novel hybrid method is proposed for neural network training. The method consists of two phases: in the first phase, the bounds for the neural network parameters are estimated using a genetic algorithm that uses intervals as chromosomes. In the second phase, a genetic algorithm is used to train the neural network inside the bounding box located by the first phase. The proposed method is tested on a series of well-known datasets from the relevant literature, and the results are reported.

Keywords: Neural networks · Genetic algorithms · Intervals · Optimization
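The two-phase scheme described in the abstract can be sketched roughly as follows. Everything in this sketch — the toy error function standing in for the network training error, the interval crossover, the population sizes, and both fitness rules — is an illustrative assumption, not the paper's actual algorithm.

```python
import random

# Illustrative two-phase sketch: phase 1 evolves interval chromosomes
# [(a_i, b_i), ...] to locate a promising bounding box; phase 2 runs a
# plain real-coded GA restricted to that box. All operator choices here
# are assumptions for illustration only.

def toy_error(w):
    # Stand-in for the neural-network training error E(N(x, w)).
    return sum((wi - 1.0) ** 2 for wi in w)

def phase1_find_box(dim, generations=30, pop=20):
    """Evolve interval chromosomes; a box is judged by the best error
    found when sampling points inside it."""
    def sample(box):
        return [random.uniform(a, b) for a, b in box]
    def fitness(box):
        return min(toy_error(sample(box)) for _ in range(10))
    population = [[tuple(sorted((random.uniform(-10, 10),
                                 random.uniform(-10, 10))))
                   for _ in range(dim)] for _ in range(pop)]
    for _ in range(generations):
        population.sort(key=fitness)
        parents = population[:pop // 2]
        children = []
        for _ in range(pop - len(parents)):
            p, q = random.sample(parents, 2)
            # interval crossover: average the endpoints component-wise
            children.append([((pa + qa) / 2, (pb + qb) / 2)
                             for (pa, pb), (qa, qb) in zip(p, q)])
        population = parents + children
    return min(population, key=fitness)

def phase2_train_in_box(box, generations=50, pop=30):
    """Real-coded GA whose population never leaves the bounding box."""
    def clip(w):
        return [min(max(x, a), b) for x, (a, b) in zip(w, box)]
    population = [[random.uniform(a, b) for a, b in box]
                  for _ in range(pop)]
    for _ in range(generations):
        population.sort(key=toy_error)
        parents = population[:pop // 2]
        children = []
        for _ in range(pop - len(parents)):
            p, q = random.sample(parents, 2)
            child = [(x + y) / 2 + random.gauss(0, 0.1)
                     for x, y in zip(p, q)]
            children.append(clip(child))
        population = parents + children
    return min(population, key=toy_error)

box = phase1_find_box(dim=3)
w = phase2_train_in_box(box)
```

The point of the sketch is only the division of labor: phase 1 searches the space of boxes, phase 2 searches points inside the single box returned by phase 1.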

1 Introduction

Artificial neural networks [1] have been used with success in many fields during the past years, such as chemistry [2-4], economics [5,6], medicine [7,8], etc. A neural network is usually formulated as a function $N(\vec{x}, \vec{w})$, where $\vec{x}$ is the input vector and $\vec{w}$ is the vector of parameters (weight vector) under estimation. The estimation of the parameter vector is performed by minimizing the so-called error function:

$$E\left(N(\vec{x}, \vec{w})\right) = \sum_{i=1}^{M} \left( N(\vec{x}_i, \vec{w}) - y_i \right)^2 \quad (1)$$

The set $\left\{ (\vec{x}_i, y_i),\ i = 1, \ldots, M \right\}$ is the data used to train the neural network. The symbol $y_i$ stands for the output of the function estimated at the point $\vec{x}_i$. Many methods have been proposed to locate the optimal weight vector, such as methods designed specifically for neural networks, like the Back Propagation method [9] or the RPROP method [10]. Common optimization methods have also been used to optimize the weight vector, such as Quasi-Newton methods [11], Genetic Algorithms [12,13], Particle Swarm Optimization [14], etc. This article proposes a novel genetic algorithm that incorporates interval optimization techniques [15-17] to locate

Corresponding author: Ioannis G. Tsoulos, [email protected]

1 Department of Electrical and Computer Engineering, University of Patras, Patras, Greece

2 Department of Informatics and Telecommunications, University of Ioannina, Ioannina, Greece

the best bounding box for the weight vector. Interval optimization methods have been widely used to locate the global minimum of a multivariable function $f(x)$ by locating the point $x^*$ where

$$x^* = \arg\min_{x \in S} f(x) \quad (2)$$

The set $S$ is a subset of $\mathbb{R}^n$ and is described as:

$$S = [a_1, b_1] \otimes [a_2, b_2] \otimes \ldots \otimes [a_n, b_n] \quad (3)$$
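The way interval methods exploit a box $S$ of the form in Eq. (3) can be illustrated with a toy branch-and-bound search. The objective $f(x) = \sum_i x_i^2$, the bisection rule, and the bounding scheme below are all illustrative assumptions, not the paper's interval genetic algorithm.

```python
# Toy interval branch-and-bound over a box S = [a1,b1] x ... x [an,bn],
# represented as a list of (a_i, b_i) pairs. Sub-boxes whose interval
# lower bound on f already exceeds the best value found so far cannot
# contain the global minimizer and are discarded.

def f_bounds(box):
    # Interval enclosure of f(x) = sum x_i^2 over the box (example f only):
    # each term's range is [0, max endpoint^2] if the interval contains 0,
    # else [min endpoint^2, max endpoint^2].
    lo = sum(min(a * a, b * b) if a * b > 0 else 0.0 for a, b in box)
    hi = sum(max(a * a, b * b) for a, b in box)
    return lo, hi

def midpoint(box):
    return [(a + b) / 2.0 for a, b in box]

def branch_and_bound(box, tol=1e-4):
    best = sum(m * m for m in midpoint(box))  # upper bound from a sample point
    stack = [box]
    while stack:
        b = stack.pop()
        lo, _ = f_bounds(b)
        if lo > best:
            continue                          # box cannot contain the minimum
        best = min(best, sum(x * x for x in midpoint(b)))
        i = max(range(len(b)), key=lambda j: b[j][1] - b[j][0])
        a_i, b_i = b[i]
        if b_i - a_i < tol:
            continue                          # box small enough, stop splitting
        mid = (a_i + b_i) / 2.0               # bisect the widest side
        stack.append(b[:i] + [(a_i, mid)] + b[i + 1:])
        stack.append(b[:i] + [(mid, b_i)] + b[i + 1:])
    return best

minimum = branch_and_bound([(-1.0, 2.0), (-2.0, 1.0)])
# minimum is close to 0, the global minimum of f at the origin
```

The discard test is the essential mechanism: a cheap interval enclosure of $f$ over a sub-box rules the whole sub-box in or out without evaluating $f$ at every point inside it.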

In interval optimization methods, the set $S$ is usually subdivided successively, and subregions that cannot contain the global solution are discarded according to some criterion. The usual practice in most global search approaches used for training neural networks is that the bounds of the search region are defined intuitively. This procedure frequently proves insufficient to support training neural networks with global optimization methods. Thus, defining effective bounds of the search space is a basis for u