Surface Approximation Using the 2D FFENN Architecture

S. Panagopoulos
Institute for Communications & Signal Processing, University of Strathclyde, Royal College Building, Glasgow G1 1XW, UK
Email: [email protected]

J. J. Soraghan
Institute for Communications & Signal Processing, University of Strathclyde, Royal College Building, Glasgow G1 1XW, UK
Email: [email protected]

Received 27 August 2003; Revised 10 March 2004; Recommended for publication by Bernard Mulgrew

A new two-dimensional feed-forward functionally expanded neural network (2D FFENN) used to produce surface models in two dimensions is presented. New nonlinear multilevel surface basis functions are proposed for the network's functional expansion. A network optimization technique based on an iterative function selection strategy is also described. Comparative simulation results for surface mappings generated by the 2D FFENN, multilevel 2D FFENN, multilayered perceptron (MLP), and radial basis function (RBF) architectures are presented.

Keywords and phrases: neural networks, sea clutter, surface modeling.

1. INTRODUCTION

One of the main properties of feed-forward neural networks is their ability to learn an input-output mapping from a set of examples characterizing a real system. The network is trained on examples comprising an input signal and the desired response, and its weights are modified by an adaptive optimization technique to minimize the difference between the desired and actual responses. Two well-known feed-forward artificial neural networks are the multilayered perceptron (MLP) and the radial basis function (RBF) network. Both have been shown to be universal approximators [1, 2]. Their performance has been demonstrated in various application areas such as linear and nonlinear adaptive filtering [3], time series prediction [4], dynamic reconstruction [5], and black-box modeling [6]. However, these networks suffer from a number of drawbacks, such as slow convergence and the difficulty of selecting a suitable network topology [7]. MLP networks traditionally employ sigmoidal activation functions, which cannot model local nonlinearities optimally. Also, their nonlinear-in-the-parameters structure requires complex and computationally intensive learning algorithms, such as the backpropagation algorithm. Furthermore, there is no principled way to determine whether a single hidden layer is sufficient to support the MLP network's learning, or to specify the exact number of hidden neurons required for the network to generalize well.
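
To make this training procedure concrete, the following minimal sketch (illustrative only, not the authors' implementation) trains a one-hidden-layer MLP with sigmoidal activations by backpropagation on a toy 2D surface; the network size, learning rate, and target surface are assumptions chosen for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy training examples: 2D inputs (x, y) and a desired surface response d.
X = rng.uniform(-1.0, 1.0, size=(200, 2))
d = np.sin(np.pi * X[:, 0]) * np.cos(np.pi * X[:, 1])

n_hidden, lr = 12, 0.5                          # assumed, for illustration
W1 = rng.normal(0.0, 0.5, size=(2, n_hidden))   # input-to-hidden weights
b1 = np.zeros(n_hidden)
W2 = rng.normal(0.0, 0.5, size=n_hidden)        # hidden-to-output weights
b2 = 0.0

for _ in range(5000):
    # Forward pass: sigmoidal hidden layer, linear output unit.
    h = sigmoid(X @ W1 + b1)
    y = h @ W2 + b2
    e = y - d  # difference between actual and desired response

    # Backward pass: MSE gradients for each weight set (backpropagation).
    gW2 = h.T @ e / len(X)
    gb2 = e.mean()
    delta = np.outer(e, W2) * h * (1.0 - h)  # error propagated to hidden layer
    gW1 = X.T @ delta / len(X)
    gb1 = delta.mean(axis=0)

    # Gradient-descent update of the network weights.
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

mse = np.mean((sigmoid(X @ W1 + b1) @ W2 + b2 - d) ** 2)
print(f"final training MSE: {mse:.4f}")
```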

On the other hand, RBF networks traditionally employ radially symmetric functions that cover only small localized regions of the input space and therefore cannot model global nonlinearities well. Moreover, great difficulty is experienced in selecting appropriate centers for the radial basis functional expansion, and a large number of basis functions is usually required to cover high-dimensional input spaces. Nonetheless, simple learning algorithms may be used for training, as the RBF structure is linear in the parameters.
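
For contrast, the sketch below (again illustrative; the center count, width, and target surface are assumed for the example) shows why the linear-in-the-parameters RBF structure admits simple training: once the Gaussian centers and width are fixed, the output weights follow from a single least-squares solve.

```python
import numpy as np

rng = np.random.default_rng(1)

# Same style of toy problem: 2D inputs and a desired surface response.
X = rng.uniform(-1.0, 1.0, size=(200, 2))
d = np.sin(np.pi * X[:, 0]) * np.cos(np.pi * X[:, 1])

# Fix the centers (here, a random subset of the training inputs) and width.
centers = X[rng.choice(len(X), size=25, replace=False)]
width = 0.3

# Design matrix of radially symmetric (Gaussian) basis functions.
dist2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
Phi = np.exp(-dist2 / (2.0 * width**2))  # shape (200, 25)

# Linear in the parameters: one least-squares solve yields the weights.
w, *_ = np.linalg.lstsq(Phi, d, rcond=None)

y = Phi @ w
print(f"training MSE: {np.mean((y - d) ** 2):.4f}")
```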