CIRG Research: Neural Networks

OVERVIEW

The focus of the neural networks group is to investigate aspects of training and optimization of neural networks, and to apply neural networks to solve real-world problems. The activities of this focus area are mainly centered on architecture selection, active learning, and the development of new and efficient training algorithms. Some work is done on self-organizing maps.

Current applications are directed towards data mining, spam detection, user authentication, fraud detection, gesture recognition, and trading on financial markets. Self-organizing maps are applied to exploratory data analysis, data mining, and species identification.


GROUP PUBLICATIONS


A New Particle Swarm Optimiser for Linearly Constrained Optimisation
Paquet, U. Engelbrecht, AP. 2003.
IEEE Congress on Evolutionary Computation, Canberra, Australia, 2003, 227-233, IEEE

Download this publication from the Swarm Intelligence and Neural Networks groups.

Abstract:

A new PSO algorithm, the Linear PSO (LPSO), is developed to optimise functions constrained by linear constraints of the form Ax = b. A crucial property of the LPSO is that the possible movement of particles through vector spaces is guaranteed by the velocity and position update equations. This property makes the LPSO ideal in optimising linearly constrained problems. The LPSO is extended to the Converging Linear PSO, which is guaranteed to always find at least a local minimum.
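
The feasibility-preserving update is straightforward to sketch. Below is a minimal LPSO-style loop in Python (our own simplification, not the paper's exact algorithm): particles are initialised on the constraint set {x : Ax = b} and all moves are confined to the null space of A, so every position visited satisfies the constraints. The function name lpso and all parameter defaults are our own.

```python
import numpy as np

def lpso(f, A, b, x0, n_particles=20, iters=500, w=0.72, c1=1.49, c2=1.49):
    """Sketch: PSO whose particles never leave {x : Ax = b}."""
    assert np.allclose(A @ x0, b), "x0 must satisfy Ax = b"
    _, s, Vt = np.linalg.svd(A)
    N = Vt[int(np.sum(s > 1e-10)):].T            # columns span null(A)
    rng = np.random.default_rng(0)

    # Feasible initial positions and velocities: x0 plus null-space moves.
    x = x0 + rng.normal(size=(n_particles, N.shape[1])) @ N.T
    v = 0.1 * rng.normal(size=(n_particles, N.shape[1])) @ N.T
    pbest = x.copy()
    pbest_f = np.apply_along_axis(f, 1, x)
    g = pbest[np.argmin(pbest_f)]

    for _ in range(iters):
        # Per-particle scalar random coefficients keep v in null(A):
        # pbest - x and g - x are differences of feasible points, hence
        # already lie in the null space.
        r1, r2 = rng.random((n_particles, 1)), rng.random((n_particles, 1))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        fx = np.apply_along_axis(f, 1, x)
        better = fx < pbest_f
        pbest[better], pbest_f[better] = x[better], fx[better]
        g = pbest[np.argmin(pbest_f)]
    return g
```

For example, minimising a quadratic subject to sum(x) = 1 uses A = np.ones((1, n)) and b = np.ones(1).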




Computer Aided Identification of Biological Specimens using Self-Organizing Maps
Dean, EJ. Engelbrecht, AP. Nicholas, A. 2003.
Fourth International Conference on Data Mining, Rio de Janeiro, 2003

Download this publication from the Neural Networks group.

Abstract:

It is often necessary or desirable that biological material be identified. However, given that there are an estimated 10 million species of living organisms on Earth, the identification of biological material can be problematic, and consequently the services of a specialist taxonomist are often required. If such an expert is not readily available it is necessary to attempt an identification using an alternative method; but some of the alternative methods available are unsatisfactory or can lead to a wrong identification. One of the most common problems encountered when identifying specimens is that important diagnostic features are often not easily observed, or may even be completely absent. A number of techniques can be used to try to overcome this problem, one of which, the Self Organizing Map (or SOM), is a particularly appealing technique because of its ability to handle missing data. This paper explores the use of SOMs as a technique for the identification of indigenous trees of the genus Acacia in KwaZulu-Natal, South Africa. The ability of the SOM technique to perform exploratory data analysis through data clustering is utilized and assessed, as is its usefulness for visualizing the results of the analysis of numerical, multivariate botanical datasets. The SOM's ability to investigate, discover and interpret relationships within these datasets is examined, and the technique's ability to identify tree species successfully is tested. The tests performed so far have provided promising results and suggest that the application of the SOM to the problem of identification could provide the breakthrough in computerized identification for which botanists have long been hoping.
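
The missing-data property that makes the SOM attractive here is easy to illustrate. The sketch below is a generic SOM with NaN-masked distances (our construction, not the paper's implementation): best-matching units and codebook updates are computed only over the features actually observed for a specimen.

```python
import numpy as np

def train_som(data, grid=(10, 10), iters=2000, lr0=0.5, sigma0=3.0, seed=0):
    """SOM sketch that tolerates missing features (NaNs)."""
    rng = np.random.default_rng(seed)
    rows, cols = grid
    n_feat = data.shape[1]
    W = rng.random((rows * cols, n_feat))               # codebook vectors
    coords = np.array([(r, c) for r in range(rows) for c in range(cols)])

    for t in range(iters):
        x = data[rng.integers(len(data))]
        present = ~np.isnan(x)                          # observed features only
        # Best-matching unit over observed features.
        d = np.sum((W[:, present] - x[present]) ** 2, axis=1)
        bmu = np.argmin(d)
        # Time-decayed learning rate and neighbourhood width.
        lr = lr0 * np.exp(-t / iters)
        sigma = sigma0 * np.exp(-t / iters)
        h = np.exp(-np.sum((coords - coords[bmu]) ** 2, axis=1)
                   / (2 * sigma ** 2))
        # Update only the observed components of the neighbourhood.
        W[:, present] += lr * h[:, None] * (x[present] - W[:, present])
    return W.reshape(rows, cols, n_feat)
```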




Training Support Vector Machines with Particle Swarms
Paquet, U. Engelbrecht, AP. 2003.
International Joint Conference on Neural Networks, Portland, OR, 2003

Download this publication from the Swarm Intelligence and Neural Networks groups.

Abstract:

Training a Support Vector Machine requires solving a constrained quadratic programming problem. Linear Particle Swarm Optimization is intuitive and simple to implement, and is presented as an alternative to current numeric SVM training methods. Performance of the new algorithm is demonstrated on the MNIST character recognition dataset.
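
To see why a linearly constrained PSO fits this problem, note that the SVM dual is a quadratic objective under a single linear equality constraint plus box bounds. A hedged sketch of that objective as a minimisation fitness follows; the RBF kernel choice, the gamma value, and the penalty handling of the box constraints are our assumptions, not details from the paper.

```python
import numpy as np

def svm_dual_fitness(alpha, X, y, C=1.0, gamma=0.5):
    """Negated SVM dual objective: minimise to train the SVM.
    Equality constraint: sum(alpha * y) = 0, i.e. Ax = b with A = y^T
    and b = 0 -- exactly the form a linearly constrained PSO handles.
    Box bounds 0 <= alpha <= C are penalised here for simplicity."""
    sq = np.sum(X ** 2, axis=1)
    K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2 * X @ X.T))  # RBF kernel
    dual = alpha.sum() - 0.5 * (alpha * y) @ K @ (alpha * y)
    box_violation = np.sum(np.clip(-alpha, 0, None) + np.clip(alpha - C, 0, None))
    return -dual + 1e3 * box_violation
```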




Pruning Product Unit Neural Networks
Ismail, A. Engelbrecht, AP. 2002.
Proceedings of International Joint Conference on Neural Networks, Honolulu, Hawaii, IEEE World Congress on Computational Intelligence

Download this publication from the Neural Networks group.

Abstract:

Selection of the optimal architecture of a neural network is crucial to ensure good generalization by reducing the occurrence of overfitting. While much work has been done to develop pruning algorithms for networks that employ summation units, not much has been done on pruning of product unit neural networks. This paper develops and tests a pruning algorithm for product unit networks, and illustrates its performance on several function approximation tasks.




Supervised Training Using an Unsupervised Approach to Active Learning
Engelbrecht, AP. Brits, R. 2002.
Neural Processing Letters, 15:247-260, Kluwer Academic Publishers

Download this publication from the Neural Networks group.

Abstract:

Active learning algorithms allow neural networks to dynamically take part in the selection of the most informative training patterns. This paper introduces a new approach to active learning, which combines an unsupervised clustering of training data with a pattern selection approach based on sensitivity analysis. Training data is clustered into groups of similar patterns based on Euclidean distance, and the most informative pattern from each cluster is selected for training using the sensitivity analysis incremental learning algorithm in \cite{eng99d}. Experimental results show that the clustering approach improves on standard active learning as presented in \cite{eng99d}.
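
A minimal sketch of the selection step described here is given below: plain k-means clusters the candidate set by Euclidean distance, and the highest-scoring pattern per cluster is chosen. The sensitivity callable stands in for the sensitivity-analysis informativeness measure of \cite{eng99d}; the function name and parameters are illustrative.

```python
import numpy as np

def select_per_cluster(X, sensitivity, k=10, iters=50, seed=0):
    """Cluster candidates, then pick the most informative per cluster."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):                                   # plain k-means
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
    scores = np.array([sensitivity(x) for x in X])
    # Index of the most informative pattern in each non-empty cluster.
    return [int(np.flatnonzero(labels == j)[np.argmax(scores[labels == j])])
            for j in range(k) if np.any(labels == j)]
```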




A Cluster Approach to Incremental Learning
Brits, R. Engelbrecht, AP. 2001.
IEEE International Conference on Neural Networks, Washington DC, USA

Download this publication from the Neural Networks group.

Abstract:

The sensitivity analysis approach to incremental learning presented in \cite{eng99} is extended in this paper. The approach in \cite{eng99} selects at each subset selection interval only one new informative pattern from the candidate training set, and adds the selected pattern to the current training subset. This approach is extended with an unsupervised clustering of the candidate training set. The most informative pattern is then selected from each of the clusters. Experimental results are given to show that the clustering approach to incremental learning performs substantially better than the original approach in \cite{eng99}.




A New Pruning Heuristic Based on Variance Analysis of Sensitivity Information
Engelbrecht, AP. 2001.
IEEE Transactions on Neural Networks, 12(6):1386-1399

Download this publication from the Neural Networks group.

Abstract:

Architecture selection is a very important aspect in the design of neural networks to optimally tune performance and computational complexity. Sensitivity analysis has been used successfully to prune irrelevant parameters from feedforward neural networks. This paper presents a new pruning algorithm that uses sensitivity analysis to quantify the relevance of input and hidden units. A new statistical pruning heuristic is proposed, based on variance analysis, to decide which units to prune. The basic idea is that a parameter whose sensitivity variance is not significantly different from zero is irrelevant and can be removed. Experimental results show that the new pruning algorithm correctly prunes irrelevant input and hidden units. The new pruning algorithm is also compared with standard pruning algorithms.
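
The decision rule reduces to a simple test. A bare-bones sketch follows, with a plain threshold standing in for the paper's statistical hypothesis test; jac holds per-pattern sensitivities of the output with respect to each unit (obtained, e.g., by finite differences or autodiff).

```python
import numpy as np

def prune_candidates(jac, tau=1e-3):
    """Variance-nullity idea: a unit whose sensitivity has both mean and
    variance indistinguishable from zero over the training set contributes
    nothing and can be pruned.

    jac: (n_patterns, n_units) array of d(output)/d(unit) sensitivities.
    Returns indices of prunable units."""
    mean = np.abs(jac.mean(axis=0))
    var = jac.var(axis=0)
    return np.flatnonzero((mean < tau) & (var < tau))
```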




Selective Learning for Multilayer Feedforward Neural Networks
Engelbrecht, AP. 2001.
International Work-Conference on Artificial Neural Networks, Granada, Spain, In: Connectionist Models of Neurons, Learning Processes and Artificial Intelligence, J Mira, A Prieto (eds), Part I, pp 386-393, Springer-Verlag series Lecture Notes in Computer Science, Vol 2084

Download this publication from the Neural Networks group.

Abstract:

Selective learning is an active learning strategy where the neural network selects during training the most informative patterns. This paper investigates a selective learning strategy where the informativeness of a pattern is measured as the sensitivity of the network output to perturbations in that pattern. The sensitivity approach to selective learning is then compared with an error selection approach where pattern informativeness is defined as the approximation error.




Sensitivity Analysis for Selective Learning by Feedforward Neural Networks
Engelbrecht, AP. 2001.
Fundamenta Informaticae, 45(1):295-328, IOS Press

Download this publication from the Neural Networks group.

Abstract:

Research on improving the performance of feedforward neural networks has concentrated mostly on the optimal setting of initial weights and learning parameters, sophisticated optimization techniques, architecture optimization, and adaptive activation functions. An alternative approach is presented in this paper where the neural network dynamically selects training patterns from a candidate training set during training, using the network's current attained knowledge about the target concept. Sensitivity analysis of the neural network output with respect to small input perturbations is used to quantify the informativeness of candidate patterns. Only the most informative patterns, which are those patterns closest to decision boundaries, are selected for training. Experimental results show a significant reduction in the training set size, without negatively influencing generalization performance and convergence characteristics. This approach to selective learning is then compared to an alternative where informativeness is measured as the magnitude of the prediction error.
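
As a rough illustration of the informativeness measure: patterns can be ranked by the magnitude of the output's derivative with respect to the input, since large derivatives indicate a pattern sitting near a decision boundary. The sketch below assumes a model_jacobian(x) helper (e.g. from autodiff); the name and the max-norm choice are ours.

```python
import numpy as np

def informativeness(model_jacobian, X):
    """Score candidate patterns by output sensitivity to input perturbations."""
    return np.array([np.abs(model_jacobian(x)).max() for x in X])

# Train on the most informative fraction only, re-ranking as the network learns:
# chosen = np.argsort(-informativeness(model_jacobian, candidates))[:subset_size]
```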




Cooperative Learning in Neural Networks using Particle Swarm Optimizers
van den Bergh, F. Engelbrecht, AP. 2000.
South African Computer Journal, 26:84-90

Download this publication from the Neural Networks and Swarm Intelligence groups.

Abstract:

This paper presents a method to employ particle swarm optimizers in a cooperative configuration. This is achieved by splitting the input vector into several sub-vectors, each of which is optimized cooperatively in its own swarm. The application of this technique to neural network training is investigated, with promising results.
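
The cooperative decomposition can be sketched compactly. Each sub-vector gets its own population, and a candidate part is scored by inserting it into a context vector assembled from the other populations' best parts. A simple accept-if-better move replaces the full PSO velocity update here to keep the sketch short, so this shows the cooperation structure rather than the paper's exact optimiser.

```python
import numpy as np

def cooperative_search(f, dim, n_parts=4, n_particles=10, iters=200,
                       step=0.1, seed=0):
    """Cooperative sketch: one population per sub-vector of the solution."""
    rng = np.random.default_rng(seed)
    parts = np.array_split(np.arange(dim), n_parts)      # index groups
    swarms = [rng.normal(size=(n_particles, len(p))) for p in parts]
    context = np.concatenate([s.mean(axis=0) for s in swarms])
    best = f(context)

    for _ in range(iters):
        for swarm, idx in zip(swarms, parts):
            for i in range(n_particles):
                cand = swarm[i] + step * rng.normal(size=len(idx))
                trial = context.copy()
                trial[idx] = cand                        # evaluate in context
                ft = f(trial)
                if ft < best:                            # accept improvements
                    best, context = ft, trial
                    swarm[i] = cand
    return context, best
```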




Data Generation using Sensitivity Analysis
Engelbrecht, AP. 2000.
International Symposium on Computational Intelligence, Kosice, Slovakia

Download this paper

Abstract:

This paper presents a new approach to the generation of training data for supervised neural networks. Sensitivity analysis is used to find the most informative regions in input space. Knowledge of these informative regions can be used to remove redundant training patterns and to generate additional training patterns within the informative regions. Preliminary experimental results show that this approach to data generation holds much promise.




Global Optimization Algorithms for Training Product Unit Neural Networks
Ismail, A. Engelbrecht, AP. 2000.
IEEE International Conference on Neural Networks, Como, Italy, paper 032, IEEE

Download this publication from the Neural Networks and Swarm Intelligence groups.

Abstract:

Product units in the hidden layer of multilayer neural networks provide a powerful mechanism for neural networks to efficiently learn higher-order combinations of inputs. Training product unit networks using local optimization algorithms is difficult due to an increased number of local minima and increased chances of network paralysis. This paper discusses the problems with using gradient descent to train product unit neural networks, and shows that particle swarm optimization, genetic algorithms and LeapFrog are efficient alternatives to successfully train product unit neural networks.
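
For reference, a product unit computes a learnable power-law combination of its inputs, which is what yields the higher-order terms and, equally, the rugged error surface: small weight changes move the output exponentially, multiplying local minima. A minimal forward pass follows (positive inputs assumed; general inputs need the complex-valued treatment discussed in the product-unit literature).

```python
import numpy as np

def product_unit_layer(x, W):
    """Unit j computes prod_i x_i ** W[j, i], i.e. a learnable
    higher-order term, implemented as exp(W @ log x)."""
    return np.exp(W @ np.log(x))

# Example: one unit with weights [2, 1] computes x1^2 * x2.
print(product_unit_layer(np.array([3.0, 4.0]), np.array([[2.0, 1.0]])))  # [36.]
```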




Using the Taylor Expansion of Multilayer Feedforward Neural Networks
Engelbrecht, AP. 2000.
South African Computer Journal, 26:181-189

Download this publication from the Neural Networks group.

Abstract:

The Taylor series expansion of continuous functions has been shown, in many fields, to be an extremely powerful tool for studying the characteristics of such functions. This paper illustrates the power of the Taylor series expansion of multilayer feedforward neural networks. The paper shows how these expansions can be used to investigate positions of decision boundaries, to develop active learning strategies, and to perform architecture selection.
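
Concretely, the expansions in question take the familiar form (in our notation, for a network output o and input perturbation Δx):

```latex
o(x + \Delta x) \approx o(x) + \nabla o(x)^{T} \Delta x
                + \tfrac{1}{2}\, \Delta x^{T}\, \nabla^{2} o(x)\, \Delta x
```

The first-order term carries the sensitivity information used for locating decision boundaries and selecting informative patterns; curvature information enters through the second-order term.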




A New Selective Learning Algorithm for Time Series Approximation using Feedforward Neural Networks
Engelbrecht, AP. Adejumo, A. 1999.
In: Development and Practice of Artificial Intelligence Techniques, VB Bajic, D Sha (eds), pp 29-31, Proceedings of the International Conference on Artificial Intelligence, Durban, South Africa

Download this publication from the Neural Networks group.

Abstract:

Various research results have shown that training on a fixed set of patterns does not produce the best results. Much can be gained by dynamically changing the contents of the training set during training to reflect the patterns that are most informative to the training objective. This paper presents a training strategy which orders time-series training data after each epoch into large-next-day-change and small-next-day-change training subsets. The training strategy then selects patterns more frequently from the large-next-day-change subset.
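
The ordering step is simple to sketch. Below, next-day changes are split at their median magnitude (the median is our stand-in for the paper's split criterion), after which training batches can over-sample the large-change subset.

```python
import numpy as np

def split_by_next_day_change(X, y, threshold=None):
    """Partition time-series patterns into large- and small-next-day-change
    subsets. y holds the next-day changes."""
    threshold = np.median(np.abs(y)) if threshold is None else threshold
    large = np.abs(y) >= threshold
    return (X[large], y[large]), (X[~large], y[~large])

# During training, draw e.g. 70% of each batch from the "large" subset.
```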




Approximation of a Function and its Derivative in Feedforward Neural Networks
Basson, E. Engelbrecht, AP. 1999.
IEEE International Joint Conference on Neural Networks, Washington DC, USA, paper 2152, IEEE

Download this publication from the Neural Networks group.

Abstract:

A new learning algorithm is presented that learns a function and its first-order derivatives. Derivatives are learned together with the function using gradient descent. Preliminary results show that the algorithm produces acceptable approximations to the derivatives.
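
The combined objective can be sketched as a weighted sum of the function error and the derivative error; the weighting factor lam is our assumption, not a value from the paper.

```python
import numpy as np

def joint_loss(pred, dpred, target, dtarget, lam=0.5):
    """Fit the function and its first-order derivatives simultaneously.
    pred/dpred are the network output and its input derivative (e.g. via
    autodiff); lam weighs the derivative term."""
    return np.mean((pred - target) ** 2) + lam * np.mean((dpred - dtarget) ** 2)
```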




Sensitivity Analysis for Decision Boundaries
Engelbrecht, AP. 1999.
Neural Processing Letters, 10(3):253-266, Kluwer Academic Publishers

Download this publication.

Abstract:

A novel approach is presented to visualize and analyze decision boundaries for feedforward neural networks. First order sensitivity analysis of the neural network output function with respect to input perturbations is used to visualize the position of decision boundaries over input space. Similarly, sensitivity analysis of each hidden unit activation function reveals which boundary is implemented by which hidden unit. The paper shows how these sensitivity analysis models can be used to better understand the data being modelled, and to visually identify irrelevant input and hidden units.




Training Product Unit Neural Networks
Engelbrecht, AP. Ismail, A. 1999.
Stability and Control: Theory and Applications, 2(1/2):59-74

Download this publication from the Neural Networks and Swarm Intelligence groups.

Abstract:

Product units enable a neural network to form higher-order combinations of inputs, having the advantages of increased information capacity and smaller network architectures. Training product unit networks using gradient descent, or any other local optimization algorithm, is difficult because of an increased number of local minima and increased chances of network paralysis. This paper illustrates the shortcomings of gradient descent optimization when faced with product units, and presents a comparative investigation into global optimization algorithms for the training of product unit neural networks. A comparison of results obtained from particle swarm optimization, genetic algorithms, LeapFrog and random search shows that these global optimization algorithms successfully train product unit neural networks. Results of product unit neural networks are also compared to results obtained from using gradient optimization with summation units.




Training Product Units in Feedforward Neural Networks using Particle Swarm Optimization
Ismail, A. Engelbrecht, AP. 1999.
In: Development and Practice of Artificial Intelligence Techniques, VB Bajic, D Sha (eds), pp 36-40, Proceedings of the International Conference on Artificial Intelligence, Durban, South Africa

Download this publication from the Neural Networks and Swarm Intelligence groups.

Abstract:

Product unit (PU) neural networks are powerful because of their ability to handle higher-order combinations of inputs. Training of PUs by backpropagation is, however, difficult because of the introduction of more local minima. This paper compares training of a product unit neural network using particle swarm optimization with training of a PU using gradient descent.




Variance Analysis of Sensitivity Information for Pruning Multilayer Feedforward Neural Networks
Engelbrecht, AP. Fletcher, L. Cloete, I. 1999.
IEEE International Joint Conference on Neural Networks, Washington DC, USA, paper 379, IEEE

Download this publication from the Neural Networks group.

Abstract:

This paper presents an algorithm to prune feedforward neural network architectures using sensitivity analysis. Sensitivity analysis is used to quantify the relevance of input and hidden units. A new statistical pruning heuristic is proposed, based on variance analysis, to decide which units to prune. Results are presented to show that the pruning algorithm correctly prunes irrelevant input and hidden units.




Feature Extraction from Feedforward Neural Networks using Sensitivity Analysis
Engelbrecht, AP. Cloete, I. 1998.
International Conference on Advances in Systems, Signals, Control and Computers, V Bajic (ed), Durban, South Africa, 2:221-225

Download this publication from the Neural Networks group.

Abstract:

Sensitivity analysis is a powerful tool to extract meaningful information from trained multilayer feedforward neural networks. A neural network (NN) numerically encodes its knowledge about a problem in the weights of the network. This knowledge is used to generalize to data not seen during training, and can be used to optimize the network architecture, to optimize use of the training set, to enhance rule extraction, and to analyze the function of each hidden unit. This paper shows how sensitivity analysis with respect to the NN output function can be used to achieve these objectives.




Optimizing the Number of Hidden Nodes of a Feedforward Artificial Neural Network
Fletcher, L. Katkovnik, V. Steffens, FE. Engelbrecht, AP. 1998.
Proceedings of the International Joint Conference on Neural Networks, pp 1608-1612, IEEE World Congress on Computational Intelligence

Unavailable for download.




Selective Learning using Sensitivity Analysis
Engelbrecht, AP. Cloete, I. 1998.
Proceedings of International Joint Conference on Neural Networks, Anchorage, Alaska, pp 1150-1156, IEEE World Congress on Computational Intelligence

Download this publication from the Neural Networks group.

Abstract:

Research on improving generalization performance and training time of multilayer feedforward neural networks has concentrated mostly on the optimal setting of initial weights, learning rates and momentum, optimal architectures, and sophisticated optimization techniques. In this paper we present an alternative approach where the network dynamically selects patterns during training. We apply sensitivity analysis to select only patterns closest to the separating hyperplanes. Experimental results on an artificial and two real-world classification problems show that our selective learning method significantly reduces the training set size without decreasing generalization performance; in fact, the results presented show that generalization is improved compared to learning with all training patterns.




A Sensitivity Analysis Algorithm for Pruning Feedforward Neural Networks
Engelbrecht, AP. Cloete, I. 1996.
IEEE International Joint Conference on Neural Networks, Washington DC, USA, 2:1274-1277

Download this publication from the Neural Networks group.

Abstract:

A pruning algorithm, based on sensitivity analysis, is presented in this paper. We show that the sensitivity analysis technique efficiently prunes both input and hidden layers. Results of the application of the pruning algorithm to various N-bit parity problems agree with well-known published results.




Automatic Scaling using Gamma Learning in Feedforward Neural Networks
Engelbrecht, AP. Cloete, I. Geldenhuys, J. Zurada, J. 1995.
International Workshop on Artificial Neural Networks, Torremolinos, Spain, in J Mira, F Sandoval (eds), 930:374-381, From Natural Science to Artificial Neural Computing, in the Springer-Verlag series Lecture Notes in Computer Science

Download this publication from the Neural Networks group.

Abstract:

Standard error back-propagation requires output data that is scaled to lie within the active area of the activation function. We show that normalizing data to conform to this requirement is not only a time-consuming process, but can also introduce inaccuracies in modelling of the data. In this paper we propose the gamma learning rule for feedforward neural networks, which eliminates the need to scale output data before training. We show that the utilization of "self-scaling" units results in faster convergence and more accurate results compared to the rescaled results of standard back-propagation.
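
As a speculative sketch of the idea (the exact gamma learning rule in the paper may differ), a self-scaling output unit can be written with a learnable scale gamma applied to a sigmoid, so targets outside the sigmoid's range become reachable without rescaling the data.

```python
import numpy as np

def gamma_output(net, gamma):
    """Speculative "self-scaling" output unit: gamma * sigmoid(net).
    The learnable scale gamma removes the need to normalize targets;
    the paper's actual gamma learning rule may differ in detail."""
    return gamma / (1.0 + np.exp(-net))

# d(output)/d(gamma) = sigmoid(net), so gamma trains by ordinary
# gradient descent alongside the weights.
```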




Determining the Significance of Input Parameters using Sensitivity Analysis
Engelbrecht, AP. Cloete, I. Zurada, J. 1995.
International Workshop on Artificial Neural Networks, Torremolinos, Spain, J Mira, F Sandoval (eds), 930:382-388, From Natural Science to Artificial Neural Computing, in the Springer-Verlag series Lecture Notes in Computer Science

Download this publication from the Neural Networks group.

Abstract:

Accompanying the application of rule extraction algorithms to real-world problems is the crucial difficulty of compiling a representative data set. Domain experts often find it difficult to identify all input parameters that have an influence on the outcome of the problem. In this paper we discuss the problem of identifying relevant input parameters from a set of potential input parameters. We show that sensitivity analysis applied to a trained feedforward neural network is an efficient tool for the identification of input parameters that have a significant influence on any one of the possible outcomes. We compare the results of a neural network sensitivity analysis tool with the results obtained from a machine learning algorithm, and discuss the benefits of sensitivity analysis to a neural network rule extraction algorithm.




Dimensioning of Telephone Networks using a Neural Network as Traffic Distribution Approximator
Engelbrecht, AP. Cloete, I. 1995.
Proceedings of the International Workshop on the Applications of Neural Networks to Telecommunications, J Alspector, R Goodman, TX Brown (eds), Stockholm, Sweden, pp 72-79, Lawrence Erlbaum Associates

Download this publication from the Neural Networks group.

Abstract:

A feedforward neural network is used to approximate the distribution of both primary (first-offered) traffic and overflow traffic in a telephone network. We show that a neural network accurately approximates both primary and overflow traffic distributions, and if utilized in an automated dimensioning model, reduces the complexity of that model.









Computational Intelligence Research Group
University of Pretoria
Copyright © 2017