A New Activation Function in the Hopfield Network for Solving Optimization Problems

Abstract This paper shows that the performance of the Hopfield network for solving optimization problems can be improved by using a new activation (output) function. The effects of the activation function on the performance of the Hopfield network are analyzed. It is shown that the sigmoid activation function in the Hopfield network is sensitive to noise in the neurons, because the sigmoid function is most sensitive in the range where noise is most predominant. A new activation function that is more robust against noise is proposed; it amplifies the signals between neurons while suppressing noise. The performance of the new activation function is evaluated through simulation. Compared with the sigmoid function, the new activation function reduces the error rate of tour length by 30.6% and increases the percentage of valid tours by 38.6% in simulations on 200 randomly generated city distributions of the 10-city traveling salesman problem.
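As a numerical illustration (ours, not code from the paper) of why the sigmoid is most sensitive exactly where noise lives, the sketch below evaluates the gain of the conventional Hopfield-Tank output function g(u) = (1 + tanh(u/u0))/2; the gain parameter u0 = 0.02 is an assumed value. The gain peaks at u = 0, so small noisy inputs centred on zero are amplified most strongly.

    import numpy as np

    # Conventional Hopfield-Tank output function; u0 is an assumed gain parameter.
    def sigmoid(u, u0=0.02):
        return 0.5 * (1.0 + np.tanh(u / u0))

    # Numerical derivative (gain) of the output function.
    def gain(u, u0=0.02, eps=1e-6):
        return (sigmoid(u + eps, u0) - sigmoid(u - eps, u0)) / (2.0 * eps)

    # The gain is maximal at u = 0 and falls off rapidly away from it,
    # so small-amplitude noise near zero gets the largest amplification.
    for u in (0.0, 0.01, 0.02, 0.05, 0.10):
        print(f"u = {u:5.2f}   g'(u) = {gain(u):10.4f}")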

1 Introduction

The subject of combinatorial optimization consists of a large set of important problems in computer science and engineering. A classical example is the traveling salesman problem (TSP). TSP is NP-hard: all known exact methods for determining an optimal tour require a computing effort that increases exponentially with N for an N-city instance. Interest in solving combinatorial optimization problems by neural networks was motivated by the work of Hopfield and Tank [1]. They suggested that a suboptimal solution of TSP can be obtained by finding a local minimum of an appropriate energy function, and that this energy function can be implemented by a neural network. For an N-city TSP instance, the network consists of N × N neurons and the links that connect them. The weights are set to encode the constraints and the cost function of a particular city distribution of TSP. Each neuron updates its input value based on the information received from all other neurons.
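To make this update scheme concrete, here is a minimal NumPy sketch (ours, not the paper's) of the continuous Hopfield-Tank dynamics for TSP: the internal inputs u follow gradient descent on the standard quadratic energy, and the outputs are v = g(u). The penalty weights A, B, C, D, the time constant tau, the step size dt, and the iteration count are illustrative assumptions; d is the symmetric, zero-diagonal city distance matrix.

    import numpy as np

    def sigmoid(u, u0=0.02):
        # Output function g(u) = (1 + tanh(u/u0)) / 2.
        return 0.5 * (1.0 + np.tanh(u / u0))

    def energy_gradient(v, d, A, B, C, D):
        # dE/dv for the standard Hopfield-Tank TSP energy.
        # v[X, i] = activity of the neuron "city X at tour position i".
        n = v.shape[0]
        row = A * (v.sum(axis=1, keepdims=True) - v)   # one position per city
        col = B * (v.sum(axis=0, keepdims=True) - v)   # one city per position
        glob = C * (v.sum() - n)                       # exactly n neurons active
        tour = D * d @ (np.roll(v, -1, axis=1) + np.roll(v, 1, axis=1))  # tour length
        return row + col + glob + tour

    def run(d, steps=2000, dt=1e-4, tau=1.0, A=500, B=500, C=200, D=500, seed=0):
        rng = np.random.default_rng(seed)
        n = d.shape[0]
        u = 0.02 * rng.standard_normal((n, n))  # small random initial inputs
        for _ in range(steps):
            v = sigmoid(u)
            u += dt * (-u / tau - energy_gradient(v, d, A, B, C, D))
        return sigmoid(u)  # ideally close to a permutation matrix (a valid tour)

Whether the final v encodes a valid tour depends strongly on the penalty constants, which is precisely the fragility the paper's simulations quantify.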

They showed that the neural network can often find a near-optimal solution in a short time. This network is commonly referred to as the Hopfield network. The advantages of the Hopfield network over other heuristic methods for solving combinatorial optimization problems include massive parallelism and convenient hardware implementation. Another advantage is the generality of the approach: there is a standard, if ad hoc, procedure for mapping the constraints and the cost function of a problem into the weight settings of the network (a hypothetical sketch of this mapping appears below), and the same procedure can be applied to many different types of combinatorial optimization problems. Since Hopfield and Tank showed that neural computation can be effective for solving combinatorial optimization problems, some work has been done to improve the performance of the Hopfield network. The research focuses on analyz
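As a concrete, hypothetical instance of the weight-mapping procedure mentioned above, the sketch below assembles the connection weights for an N-city TSP by reading them off the standard quadratic energy E = -(1/2) Σ W v v - Σ I v; the penalty constants A, B, C, D are again assumed values.

    import numpy as np

    def tsp_weights(d, A=500.0, B=500.0, C=200.0, D=500.0):
        # Neuron (X, i) means "city X occupies tour position i".
        # W[X, i, Y, j] is the weight between neurons (X, i) and (Y, j).
        n = d.shape[0]
        delta = np.eye(n)
        W = np.zeros((n, n, n, n))
        for X in range(n):
            for i in range(n):
                for Y in range(n):
                    for j in range(n):
                        W[X, i, Y, j] = (
                            -A * delta[X, Y] * (1.0 - delta[i, j])   # one position per city
                            - B * delta[i, j] * (1.0 - delta[X, Y])  # one city per position
                            - C                                      # exactly n neurons active
                            - D * d[X, Y] * (delta[j, (i + 1) % n] + delta[j, (i - 1) % n])
                        )
        I = C * n * np.ones((n, n))  # constant external input to every neuron
        return W, I

Only the distance matrix d changes between city distributions; the constraint terms are problem-independent, which is what makes the mapping reusable across combinatorial optimization problems.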