An Improved Self-Structuring Neural Network
University of Huddersfield, Huddersfield, UK
{rami.mohammad,t.l.mccluskey}@hud.ac.uk
Nelson Marlborough Institute of Technology, Nelson, New Zealand
[email protected]
Abstract. Creating a neural network based classification model is traditionally accomplished using the trial and error technique. However, the trial and error structuring method normally suffers from several difficulties, including overtraining. In this article, a new algorithm that simplifies the structuring of neural network classification models is proposed. It aims to create a large structure from which to derive classifiers from the training dataset that have generally good predictive accuracy on domain applications. The proposed algorithm tunes crucial NN model thresholds during the training phase in order to cope with the dynamic behavior of the learning process. This may reduce the chance of overfitting the training dataset or of early convergence of the model. Several experiments using our algorithm, as well as other classification algorithms, have been conducted against a number of datasets from the University of California Irvine (UCI) repository. The experiments were performed to assess the pros and cons of our proposed NN method. The derived results show that our algorithm outperformed the compared classification algorithms with respect to several performance measures.
Keywords: Classification · Neural network · Phishing · Pruning · Structure

1 Introduction
Artificial Neural Network (ANN) methods have proved their merit in several classification domains [1]. However, one downside of neural network (NN) based classification models is that their outcomes are difficult to interpret: such models are considered black boxes. We believe this particular drawback has little negative impact in application domains where the outcome's predictive accuracy is vital to the domain users and often more important than an understanding of how the NN model works [4]. An important issue that has gained the attention of many researchers is the NN structuring process. Selecting a suitable number of hidden neurons and determining the values of parameters such as the learning rate, momentum value, and epoch size have been shown to be crucial when constructing any NN model [4]. Still, there is no clear mechanism for determining such parameter values at the preliminary stages, and most model designers rely on the trial and error technique. The trial and error technique might be suitable for domains where rich prior experience and an NN expert are available; however, it often involves a tedious process, since prior knowledge and experienced human experts are hard to find in practice. The trial and error technique has also been criticized for being time-consuming [3]. A poorly structured NN model may cause the classifier to underfit the training dataset, whereas excessive restructuring may cause it to overfit.

© Springer International Publishing Switzerland 2016
H. Cao et al. (Eds.): PAKDD 2016 Workshops, LNAI 9794, pp. 35–47, 2016. DOI: 10.1007/978-3-319-42996-0_4
R.M. Mohammad et al.
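To make concrete why the trial and error technique is tedious and time-consuming, the loop it implies can be sketched as an exhaustive search over candidate settings of the hidden-layer size, learning rate, and momentum, training one model per combination. This is a minimal illustrative sketch, not the algorithm proposed in this paper; the function names, grid values, and the stub scorer are all assumptions standing in for a real training-and-validation run.

```python
import itertools

def trial_and_error_search(train_and_score, grid):
    """Exhaustively evaluate every hyper-parameter combination.

    train_and_score: callable(params_dict) -> validation accuracy in [0, 1].
    grid: dict mapping parameter name -> list of candidate values.
    Returns the best-scoring parameter setting and its score.
    """
    best_params, best_score = None, float("-inf")
    names = list(grid)
    for values in itertools.product(*(grid[n] for n in names)):
        params = dict(zip(names, values))
        score = train_and_score(params)  # one full training run per candidate
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score

# Hypothetical stub standing in for an actual NN training run; a real scorer
# would train a one-hidden-layer network with these settings and return its
# validation accuracy. Here we simply pretend that 8 hidden neurons with a
# learning rate of 0.1 generalise best.
def fake_score(params):
    return 0.9 - abs(params["hidden"] - 8) * 0.01 - abs(params["lr"] - 0.1)

grid = {"hidden": [2, 4, 8, 16], "lr": [0.01, 0.1, 0.5], "momentum": [0.0, 0.9]}
best, score = trial_and_error_search(fake_score, grid)
print(best["hidden"], best["lr"])  # → 8 0.1
```

Even this small grid requires 4 × 3 × 2 = 24 full training runs, which illustrates the time cost the text criticizes; a genuine search would also multiply by candidate epoch sizes and repeated runs to average out random initialization.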