
CIRG - Research - Swarm Intelligence




OVERVIEW

The Swarm Intelligence focus area is currently the most active in the group, with the largest number of members. The focus area's main interest is particle swarm optimization (PSO), with the development of new and improved PSO algorithms. Theoretical analyses of PSO are also being conducted, including the study of convergence proofs. Techniques are being developed for constrained optimization, niching (locating multiple solutions), multi-objective optimization, dynamic optimization problems, and discrete search spaces.
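The canonical (gbest) PSO that underlies most of this work can be sketched in a few lines. The inertia-weight update equations below are the standard formulation; the parameter values are common defaults from the PSO literature, not settings taken from any one of the group's papers:

```python
import random

def pso_minimise(f, dim, n_particles=20, iters=200,
                 w=0.7298, c1=1.4962, c2=1.4962, bounds=(-5.0, 5.0)):
    """Minimise f over a box using the canonical gbest PSO."""
    lo, hi = bounds
    rng = random.Random(0)
    xs = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vs = [[0.0] * dim for _ in range(n_particles)]
    pbest = [list(x) for x in xs]              # personal best positions
    pbest_f = [f(x) for x in xs]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = list(pbest[g]), pbest_f[g]  # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # inertia + cognitive + social components
                vs[i][d] = (w * vs[i][d]
                            + c1 * r1 * (pbest[i][d] - xs[i][d])
                            + c2 * r2 * (gbest[d] - xs[i][d]))
                xs[i][d] += vs[i][d]
            fx = f(xs[i])
            if fx < pbest_f[i]:
                pbest[i], pbest_f[i] = list(xs[i]), fx
                if fx < gbest_f:
                    gbest, gbest_f = list(xs[i]), fx
    return gbest, gbest_f
```

On a simple unimodal function such as the 3-D sphere, the returned best value is driven close to zero.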

Applications of PSO techniques that are under investigation include the coevolutionary training of neural networks for game playing and financial traders, scheduling, image analysis, and data clustering. The research focus area is also investigating the application of ant colony optimization techniques to exploratory data analysis, workload distribution in computer grids, energy efficient routing in mobile ad hoc networks, and network topology design.

ACTIVE MEMBERS

A list of current members actively doing research in this focus area.

ALUMNI MEMBERS

A list of alumni of this research focus area.

GROUP PUBLICATIONS


A Study of Particle Swarm Optimization Particle Trajectories
van den Bergh, F. Engelbrecht, AP. 2005.
Information Sciences


Abstract:

Particle swarm optimization (PSO) has been shown to be an efficient, robust and simple optimization algorithm. Most PSO studies are empirical; the few theoretical analyses concentrate on understanding particle trajectories, mainly in simplified PSO systems. This paper overviews current theoretical studies, and extends them to investigate particle trajectories for general swarms, including the influence of the inertia term. The paper also provides a formal proof that each particle converges to a stable point. An empirical analysis of multi-dimensional stochastic particles is also presented. Experimental results are provided to support the conclusions drawn from the theoretical findings.
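The stable point in this line of analysis can be illustrated numerically. Under the usual simplifying assumption (the stochastic factors replaced by their expected value), a particle's trajectory settles on the weighted average of its personal and global best positions. A sketch with standard parameter settings:

```python
def trajectory(x0, v0, pbest, gbest, w=0.7298, c1=1.4962, c2=1.4962, steps=500):
    """Deterministic 1-D particle trajectory: the stochastic factors r1, r2
    are replaced by their expected value 0.5."""
    phi1, phi2 = 0.5 * c1, 0.5 * c2
    x, v = x0, v0
    for _ in range(steps):
        v = w * v + phi1 * (pbest - x) + phi2 * (gbest - x)
        x = x + v
    return x

# The trajectory settles on (phi1*pbest + phi2*gbest) / (phi1 + phi2);
# with c1 == c2 this is simply the midpoint of pbest and gbest.
```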




Particle Swarm Optimization Method for Image Clustering
Omran, MG. Engelbrecht, AP. Salman, A. 2005.
International Journal on Pattern Recognition and Artificial Intelligence


Abstract:

An image clustering method that is based on the particle swarm optimizer (PSO) is developed in this paper. The algorithm finds the centroids of a user-specified number of clusters, where each cluster groups together similar image primitives. To illustrate its wide applicability, the proposed image classifier has been applied to synthetic, MRI and satellite images. Experimental results show that the PSO image classifier performs better than a conventional image classifier (namely, K-means) in all measured criteria. The influence of different values of PSO control parameters is also illustrated.
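A fitness measure commonly minimised in PSO-based clustering of this kind is the quantisation error: the mean, over clusters, of the average distance between a centroid and the pixels assigned to it. A minimal 1-D (grey-scale intensity) sketch, not the paper's exact fitness function:

```python
def quantisation_error(centroids, pixels):
    """Mean, over clusters, of the average distance from each pixel (here a
    1-D grey-scale intensity) to its closest centroid."""
    buckets = [[] for _ in centroids]
    for p in pixels:
        d = [abs(p - c) for c in centroids]
        buckets[d.index(min(d))].append(min(d))   # assign to nearest centroid
    per_cluster = [sum(b) / len(b) for b in buckets if b]
    return sum(per_cluster) / len(per_cluster)
```

Each PSO particle encodes a full set of candidate centroids, and the swarm searches for the set with the lowest error.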




SIGT: Synthetic Image Generation Tool for Clustering Algorithms
Salman, A. Omran, MG. Engelbrecht, AP. 2005.
International Journal on Graphics, Vision and Image Processing, 2:33-44


Abstract:

A new automatic image generation tool, tailored specifically for the verification and comparison of image clustering algorithms, is proposed in this paper. The tool can be used to produce different images (in raw format) according to user-specified criteria. The user specifies the number of clusters to be included in the image, along with the probability distributions that govern the sets of points belonging to the different clusters. The tool can also be used to verify the degree of approximation a new algorithm achieves compared to the original image. This allows for a scientifically sound comparison between a new algorithm and existing algorithms. The features of the tool are demonstrated with reference to the well-known K-means clustering algorithm.
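The core idea of such a generator can be sketched as sampling each pixel's intensity from one of a set of user-specified cluster distributions. The Gaussian parameterisation below is an illustrative assumption, not SIGT's actual interface:

```python
import random

def synth_image(width, height, clusters, seed=0):
    """Generate a grey-scale image whose pixel intensities are each drawn
    from a randomly chosen (mean, std) Gaussian cluster."""
    rng = random.Random(seed)
    img = []
    for _ in range(height):
        row = []
        for _ in range(width):
            mean, std = rng.choice(clusters)
            # clamp the sampled intensity into the valid 8-bit range
            row.append(min(255, max(0, int(rng.gauss(mean, std)))))
        img.append(row)
    return img
```

Because the generating clusters are known, a clustering algorithm's output can be scored against the ground truth exactly.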




A New Particle Swarm Optimiser for Linearly Constrained Optimisation
Paquet, U. Engelbrecht, AP. 2003.
IEEE Congress on Evolutionary Computation, Canberra, Australia, 2003, 227-233, IEEE


Abstract:

A new PSO algorithm, the Linear PSO (LPSO), is developed to optimise functions constrained by linear constraints of the form Ax = b. A crucial property of the LPSO is that the possible movement of particles through vector spaces is guaranteed by the velocity and position update equations. This property makes the LPSO ideal in optimising linearly constrained problems. The LPSO is extended to the Converging Linear PSO, which is guaranteed to always find at least a local minimum.
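The feasibility-preservation property can be demonstrated directly: if all initial positions satisfy the constraints and velocities start at zero, every velocity update term is a difference of two feasible points and therefore lies in the constraint's null space, so positions never leave the constraint plane. A sketch with the illustrative constraint x1 + x2 + x3 = 1 (not an example from the paper):

```python
import random

# Illustrative linear constraint: x1 + x2 + x3 = 1.
rng = random.Random(0)

def feasible_point():
    x = [rng.random() for _ in range(3)]
    s = sum(x)
    return [t / s for t in x]     # rescale onto the constraint plane

x, pbest, gbest = feasible_point(), feasible_point(), feasible_point()
v = [0.0, 0.0, 0.0]
w, c1, c2 = 0.7298, 1.4962, 1.4962
for _ in range(100):
    r1, r2 = rng.random(), rng.random()
    # pbest - x and gbest - x are differences of feasible points, so their
    # components sum to zero; v therefore never leaves the null space and x
    # stays feasible without any repair step
    v = [w * vi + c1 * r1 * (pi - xi) + c2 * r2 * (gi - xi)
         for vi, xi, pi, gi in zip(v, x, pbest, gbest)]
    x = [xi + vi for xi, vi in zip(x, v)]
```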




CIRG@UP OptiBench: A Statistically Sound Framework for Benchmarking Optimisation Algorithms
Peer, ES. Engelbrecht, AP. van den Bergh, F. 2003.
IEEE Congress on Evolutionary Computation, Canberra, Australia, 2003, 2386-2392, IEEE


Abstract:

This paper is a proposal, by the Computational Intelligence Research Group at the University of Pretoria (CIRG@UP), for a framework to benchmark optimisation algorithms. This framework, known as OptiBench, was conceived out of the necessity to consolidate the efforts of a large research group. Many problems arise when different people work independently on their own research initiatives, ranging from duplicated effort to, more seriously, conflicting results. In addition, less experienced members of the group are sometimes unfamiliar with the statistical methods required to properly analyse their results. These problems are not limited to CIRG@UP internally but are also prevalent in the research community at large. This proposal aims to standardise the research methodology used by CIRG@UP internally (initially in the optimisation subgroup and later in subgroups working in other paradigms of computational intelligence). This paper obviously cannot dictate the methodologies used by other members of the broader research community; the hope, however, is that this framework will be found useful and that others will willingly contribute and become involved.




Comparing PSO Structures to Learn the Game Checkers from Zero Knowledge
Franken, N. Engelbrecht, AP. 2003.
IEEE Congress on Evolutionary Computation, Canberra, Australia, 2003, 234-241, IEEE


Abstract:

This paper investigates the effectiveness of various particle swarm optimiser structures to learn how to play the game of checkers. Co-evolutionary techniques are used to train the game playing agents. Performance is compared against a player making moves at random. Initial experimental results indicate definite advantages in using certain information sharing structures and swarm size configurations to successfully learn the game of checkers.




Data Clustering using Particle Swarm Optimization
van der Merwe, DW. Engelbrecht, AP. 2003.
IEEE Congress on Evolutionary Computation, Canberra, Australia, 2003, 215-220, IEEE


Abstract:

This paper proposes two new approaches to using PSO to cluster data. It is shown how PSO can be used to find the centroids of a user specified number of clusters. The algorithm is then extended to use K-means clustering to seed the initial swarm. This second algorithm basically uses PSO to refine the clusters formed by K-means. The new PSO algorithms are evaluated on six data sets, and compared to the performance of K-means clustering. Results show that both PSO clustering techniques have much potential.
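The seeding idea can be sketched as follows: run a few K-means (Lloyd) iterations, then insert the resulting centroid vector as one particle among otherwise random ones. This is an illustrative 1-D sketch, not the paper's exact procedure:

```python
import random

def kmeans_step(centroids, data):
    """One Lloyd iteration on 1-D data; empty clusters keep their centroid."""
    buckets = [[] for _ in centroids]
    for p in data:
        i = min(range(len(centroids)), key=lambda j: abs(p - centroids[j]))
        buckets[i].append(p)
    return [sum(b) / len(b) if b else c for b, c in zip(buckets, centroids)]

def seed_swarm(data, k, n_particles, kmeans_iters=5, seed=0):
    """Build a swarm of candidate centroid vectors in which one particle
    carries the K-means result and the rest are random."""
    rng = random.Random(seed)
    c = [rng.choice(data) for _ in range(k)]
    for _ in range(kmeans_iters):
        c = kmeans_step(c, data)
    swarm = [c] + [[rng.uniform(min(data), max(data)) for _ in range(k)]
                   for _ in range(n_particles - 1)]
    return swarm
```

The PSO then refines the K-means solution rather than starting from scratch, which is the hybrid's main advantage.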




Scalability of Niche PSO
Brits, R. Engelbrecht, AP. van den Bergh, F. 2003.
IEEE Swarm Intelligence Symposium, Indianapolis, pp 228-234


Abstract:

In contrast to optimization techniques intended to find a single, global solution in a problem domain, niching (speciation) techniques have the ability to locate multiple solutions in multimodal domains. Numerous niching techniques have been proposed, broadly classified as temporal (locating solutions sequentially) and parallel (multiple solutions are found concurrently) techniques. Most research efforts to date have considered niching solutions through the eyes of genetic algorithms (GAs), studying simple multimodal problems. Little attention has been given to the possibilities associated with emergent swarm intelligence techniques. Particle swarm optimization (PSO) utilizes properties of swarm behaviour not present in evolutionary algorithms such as GAs, to rapidly solve optimization problems. This paper investigates the ability of two genetic algorithm niching techniques, sequential niching and deterministic crowding, to scale to higher dimensional domains with large numbers of solutions, and compares their performance to a PSO-based niching technique, NichePSO.




Training Support Vector Machines with Particle Swarms
Paquet, U. Engelbrecht, AP. 2003.
International Joint Conference on Neural Networks, Portland, OR, 2003


Abstract:

Training a Support Vector Machine requires solving a constrained quadratic programming problem. Linear Particle Swarm Optimization is intuitive and simple to implement, and is presented as an alternative to current numeric SVM training methods. Performance of the new algorithm is demonstrated on the MNIST character recognition dataset.




Using Neighborhoods with Guaranteed Convergence PSO
Peer, ES. van den Bergh, F. Engelbrecht, AP. 2003.
IEEE Swarm Intelligence Symposium, Indianapolis, pp 235-242


Abstract:

The standard Particle Swarm Optimiser (PSO) may prematurely converge on suboptimal solutions that are not even guaranteed to be local extrema. The guaranteed convergence modifications to the PSO algorithm ensure that the PSO converges on at least a local extremum, at the expense of even faster convergence. This faster convergence means that less of the search space is explored, reducing the opportunity of the swarm to find better local extrema. Various neighbourhood topologies inhibit premature convergence by preserving swarm diversity during the search. This paper investigates the performance of the Guaranteed Convergence PSO (GCPSO) using different neighbourhood topologies and compares the results with their standard PSO counterparts.
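The neighbourhood topologies compared in such studies are typically index-based. The classic lbest ring, for example, lets each particle see only its immediate neighbours; a minimal sketch of looking up the best neighbour (the ring is one plausible example of the topologies compared, not the paper's full set):

```python
def ring_best(pbest_f, i, radius=1):
    """Index of the best (lowest) personal-best fitness in particle i's ring
    neighbourhood: the lbest topology with wrap-around indexing."""
    n = len(pbest_f)
    hood = [(i + off) % n for off in range(-radius, radius + 1)]
    return min(hood, key=lambda j: pbest_f[j])
```

Smaller neighbourhoods slow the spread of the best solution through the swarm, which preserves diversity at the cost of convergence speed.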




A New Locally Convergent Particle Swarm Optimizer
van den Bergh, F. Engelbrecht, AP. 2002.
IEEE Conference on Systems, Man, and Cybernetics


Abstract:

This paper introduces a new Particle Swarm Optimisation (PSO) algorithm with strong local convergence properties. The new algorithm performs much better with a smaller number of particles, compared to the original PSO. This property is desirable when designing a niching PSO algorithm.




A Niching Particle Swarm Optimizer
Brits, R. Engelbrecht, AP. van den Bergh, F. 2002.
4th Asia-Pacific Conference on Simulated Evolution and Learning


Abstract:

This paper describes a technique that extends the unimodal particle swarm optimizer to efficiently locate multiple optimal solutions in multimodal problems. Multiple subswarms are grown from an initial particle swarm by monitoring the fitness of individual particles. Experimental results show that the proposed algorithm can successfully locate all maxima on a small set of test functions during all simulation runs.
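One way the fitness monitoring described above can trigger subswarm creation is to watch for a particle whose fitness has effectively stopped changing, suggesting it is hovering over an optimum. The window and threshold below are illustrative assumptions, not the paper's exact values:

```python
def should_spawn(fitness_history, window=3, threshold=1e-4):
    """Trigger a subswarm when a particle's fitness variance over the last
    `window` iterations falls below `threshold` (the particle, together with
    its closest neighbour, would then be split off into a subswarm)."""
    if len(fitness_history) < window:
        return False
    tail = fitness_history[-window:]
    mean = sum(tail) / window
    var = sum((f - mean) ** 2 for f in tail) / window
    return var < threshold
```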




Image Classification using Particle Swarm Optimization
Omran, M. Salman, A. Engelbrecht, AP. 2002.
4th Asia-Pacific Conference on Simulated Evolution and Learning


Abstract:

A new image classification algorithm that is based on the particle swarm optimizer (PSO) is proposed in this paper. The algorithm finds the centroids of a user-specified number of clusters, where each cluster groups together similar pixels. The new image classifier has been applied successfully to three types of images to illustrate its wide applicability. These images include synthesized, MRI and satellite images. The proposed algorithm is compared with the benchmark classification algorithm, ISODATA, yielding promising results.




Learning to Play Games using a PSO-based Competitive Learning Approach
Messerschmidt, L. Engelbrecht, AP. 2002.
4th Asia-Pacific Conference on Simulated Evolution and Learning


Abstract:

A new competitive approach is developed for training agents to play two-agent games. This approach uses particle swarm optimizers (PSO) to train neural networks to predict the desirability of states in the end nodes of a game tree. The new approach is applied to the TicTacToe game, and compared to the performance of the evolutionary approach developed by Fogel. The results show that the new PSO-based approach outperforms the evolutionary approach.




Solving Systems of Unconstrained Equations using Particle Swarm Optimization
Brits, R. Engelbrecht, AP. van den Bergh, F. 2002.
IEEE Conference on Systems, Man, and Cybernetics, Tunisia


Abstract:

A new particle swarm optimization algorithm (PSO), nbest, is developed in this paper to solve systems of unconstrained equations. For this purpose, the standard gbest PSO is adapted by redefining the fitness function in order to locate multiple solutions in one run of the algorithm. The new algorithm also introduces the concept of shrinking particle neighborhoods. The resulting nbest algorithm is a first attempt to develop a niching PSO algorithm. The paper presents results that show the new PSO algorithm to be successful in locating multiple solutions.
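Two ingredients of the approach can be sketched: turning a system of equations into a fitness to minimise, and computing a neighbourhood best from a particle's spatially closest neighbours. Both are simplified 1-D illustrations, not the paper's exact definitions:

```python
def system_fitness(x, equations):
    """Turn a system g_i(x) = 0 into a single value to minimise: the sum of
    absolute residuals (a simple choice; the paper's redefinition differs in
    how it supports locating multiple solutions in one run)."""
    return sum(abs(g(x)) for g in equations)

def nbest(i, positions, pbest, radius):
    """Neighbourhood best for particle i: the centre of the personal bests of
    its `radius` spatially closest particles. Shrinking `radius` over time
    gives the shrinking-neighbourhood behaviour described above."""
    order = sorted(range(len(positions)),
                   key=lambda j: abs(positions[j] - positions[i]))
    hood = order[:radius]
    return sum(pbest[j] for j in hood) / radius
```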




Effects of Swarm Size on Cooperative Particle Swarm Optimizers
van den Bergh, F. Engelbrecht, AP. 2001.
Genetic and Evolutionary Computation Conference, San Francisco, USA


Abstract:

Particle Swarm Optimisation is a stochastic global optimisation technique making use of a population of particles, where each particle represents a solution to the problem being optimised. The Cooperative Particle Swarm Optimiser (CPSO) is a variant of the original Particle Swarm Optimiser (PSO). This technique splits the solution vector into smaller vectors, where each sub-vector is optimised using a separate PSO. This paper investigates the effect of swarm size on the CPSO, showing that a swarm size of only 10 particles is usually sufficient.
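The defining step of the CPSO is the split of the solution vector across sub-swarms. A sketch of an even partition of the dimension indices (the number of sub-swarms being the "split factor" discussed in the follow-up paper below):

```python
def split_vector(dim, n_swarms):
    """Partition the indices of a `dim`-dimensional solution vector across
    `n_swarms` sub-swarms, as evenly as possible. Each sub-swarm optimises
    only its own indices, with the remaining components filled in from the
    other sub-swarms' current bests."""
    base, extra = divmod(dim, n_swarms)
    parts, start = [], 0
    for s in range(n_swarms):
        size = base + (1 if s < extra else 0)
        parts.append(list(range(start, start + size)))
        start += size
    return parts
```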




Using Cooperative Particle Swarm Optimization to Train Product Unit Neural Networks
van den Bergh, F. Engelbrecht, AP. 2001.
IEEE International Joint Conference on Neural Networks, Washington DC, USA


Abstract:

The Cooperative Particle Swarm Optimiser (CPSO) is a variant of the Particle Swarm Optimiser (PSO) that splits the problem vector, for example a neural network weight vector, across several swarms. This paper investigates the influence that the number of swarms used (also called the split factor) has on the training performance of a Product Unit Neural Network. Results are presented, comparing the training performance of the two algorithms, PSO and CPSO, as applied to the task of training the weight vector of a Product Unit Neural Network.




Cooperative Learning in Neural Networks using Particle Swarm Optimizers
van den Bergh, F. Engelbrecht, AP. 2000.
South African Computer Journal, 26:84-90


Abstract:

This paper presents a method to employ particle swarm optimizers in a cooperative configuration. This is achieved by splitting the input vector into several sub-vectors, each of which is optimized cooperatively in its own swarm. The application of this technique to neural network training is investigated, with promising results.




Global Optimization Algorithms for Training Product Unit Neural Networks
Ismail, A. Engelbrecht, AP. 2000.
IEEE International Conference on Neural Networks, Como, Italy, paper 032, IEEE


Abstract:

Product units in the hidden layer of multilayer neural networks provide a powerful mechanism for neural networks to efficiently learn higher-order combinations of inputs. Training product unit networks using local optimization algorithms is difficult due to an increased number of local minima and increased chances of network paralysis. This paper discusses the problems with using gradient descent to train product unit neural networks, and shows that particle swarm optimization, genetic algorithms and LeapFrog are efficient alternatives to successfully train product unit neural networks.
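A product unit computes the product of its inputs raised to trainable powers, rather than the usual weighted sum; a minimal sketch (inputs restricted to positive values here, since negative bases with real exponents yield complex outputs, one source of the training difficulty discussed in these papers):

```python
import math

def product_unit(x, w):
    """Output of a single product unit: prod_i x_i ** w_i.
    With w_i integer-valued this recovers ordinary higher-order terms such
    as x1**2 * x2, which a summation unit cannot represent directly."""
    return math.prod(xi ** wi for xi, wi in zip(x, w))
```

The non-convex dependence of the output on the exponents w is what makes gradient-based training prone to local minima, and why global optimisers such as PSO are attractive here.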




Training Product Unit Neural Networks
Engelbrecht, AP. Ismail, A. 1999.
Stability and Control: Theory and Applications, 2(1/2):59-74


Abstract:

Product units enable a neural network to form higher-order combinations of inputs, having the advantages of increased information capacity and smaller network architectures. Training product unit networks using gradient descent, or any other local optimization algorithm, is difficult, because of an increased number of local minima and increased chances of network paralysis. This paper illustrates the shortcomings of gradient descent optimization when faced with product units, and presents a comparative investigation into global optimization algorithms for the training of product unit neural networks. A comparison of results obtained from particle swarm optimization, genetic algorithms, LeapFrog and random search show that these global optimization algorithms successfully train product unit neural networks. Results of product unit neural networks are also compared to results obtained from using gradient optimization with summation units.




Training Product Units in Feedforward Neural Networks using Particle Swarm Optimization
Ismail, A. Engelbrecht, AP. 1999.
In: Development and Practice of Artificial Intelligence Techniques, VB Bajic, D Sha (eds), pp 36-40, Proceedings of the International Conference on Artificial Intelligence, Durban, South Africa


Abstract:

Product unit (PU) neural networks are powerful because of their ability to handle higher-order combinations of inputs. Training of PUs by backpropagation is, however, difficult because of the introduction of more local minima. This paper compares training of a product unit neural network using particle swarm optimization with training using gradient descent.









Computational Intelligence Research Group
University of Pretoria
Copyright © 2017