This work was part of a project completed during my undergraduate research internship on stochastic optimization in the summer of 2018 at the Centre for Process Integration, The University of Manchester.
Particle Swarm Optimization (PSO) is a stochastic optimization algorithm inspired by the collective behavior of animal communities. It implements two important parameters, cognition and social behaviour, in an attempt to mimic the intelligence of such communities. The algorithm was first proposed by Kennedy and Eberhart (1995). It is initialized with a set of points (particles) randomly distributed within the search space. The position and velocity of these particles are progressively updated according to the two main parameters mentioned above: cognition (self-confidence) and social behaviour (population-confidence). At each iteration every particle remembers the best position it has visited so far, as well as the best position visited by the swarm as a whole. The particles adjust their velocity and position taking both best values (individual and collective) into account to explore the search space collectively. In this way, the swarm attempts to move towards the global optimum.
The equations that describe the velocity and position of the particles at each iteration are:

    v_i^(k+1) = w * v_i^k + c1 * r1 * (p_i^k - x_i^k) + c2 * r2 * (g^k - x_i^k)
    x_i^(k+1) = x_i^k + v_i^(k+1)

where v_i^k and x_i^k are the velocity and the position of particle i respectively; w is a scaling factor (a.k.a. inertia weight) that prevents the exponential growth of the velocity at each iteration; k stands for the current iteration; c1 and c2 are the cognition and population-confidence parameters respectively; r1 and r2 are random variables uniformly distributed between 0 and 1; p_i^k denotes the best position found so far by particle i, and g^k denotes the best position found until the current iteration by the whole swarm.
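The update step described by these equations can be sketched in a few lines of NumPy. This is an illustrative re-implementation, not the code in PSO.py itself; the function name `pso_sketch` and details such as clipping to the bounds are assumptions for this sketch.

```python
import numpy as np

def pso_sketch(f, num_par, bounds, max_iter, c1, c2, w, w_red, seed=0):
    """Minimal PSO sketch following the update equations above.

    Hypothetical illustration; the repository's PSO.py may differ in
    details (bookkeeping, trajectory recording, etc.).
    """
    rng = np.random.default_rng(seed)
    dim = len(bounds)
    lb = np.array([b[0] for b in bounds])
    ub = np.array([b[1] for b in bounds])

    # Random initial positions, zero initial velocities (as in this repo).
    x = lb + rng.random((num_par, dim)) * (ub - lb)
    v = np.zeros((num_par, dim))

    p_best = x.copy()                       # each particle's best position
    p_val = np.array([f(p) for p in x])     # and its function value
    g_best = p_best[p_val.argmin()].copy()  # swarm-wide best position
    g_val = p_val.min()

    for _ in range(max_iter):
        r1 = rng.random((num_par, dim))     # fresh random factors each iteration
        r2 = rng.random((num_par, dim))
        # Velocity update: inertia + cognition + social terms.
        v = w * v + c1 * r1 * (p_best - x) + c2 * r2 * (g_best - x)
        x = np.clip(x + v, lb, ub)          # keep particles inside the bounds
        w *= w_red                          # reduce the inertia weight

        vals = np.array([f(p) for p in x])
        improved = vals < p_val
        p_best[improved] = x[improved]
        p_val[improved] = vals[improved]
        if p_val.min() < g_val:
            g_val = p_val.min()
            g_best = p_best[p_val.argmin()].copy()
    return g_best, g_val
```

For example, minimizing the 2-D sphere function `f(x) = x1^2 + x2^2` with 30 particles over 200 iterations drives the best value close to zero.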
The function requires Python 3.0 (or more recent versions). The stoch_optim_utilities.py file (which contains common utilities needed in stochastic optimization algorithms) needs to be in the same directory as the function file PSO.py.
PSO(f, num_par, bounds, max_iter, c1, c2, w, w_red)
1. The function to be optimized. The function needs to be of the form f(x), taking a point in the search space and returning a scalar.
2. The number of particles.
3. The bounds for each dimension of the function. This has to be a list of tuples of the form [(lb1, ub1), (lb2, ub2), ...].
4. The maximum number of iterations, which is the stopping criterion in this implementation.
5. The cognition parameter as an integer or float.
6. The social-confidence parameter as an integer or float.
7. The inertia weight parameter as an integer or float.
8. The reduction parameter w_red. This reduces the inertia weight at each iteration following w^(k+1) = w_red * w^k.
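Applied repeatedly, this reduction rule makes the inertia weight decay geometrically, so after k iterations it equals w * w_red**k. A one-line illustration:

```python
w, w_red = 0.9, 0.99  # example values, not defaults of PSO.py

# Inertia weight over the first few iterations: w, w*w_red, w*w_red**2, ...
weights = [w * w_red ** k for k in range(5)]
```

A w_red below 1 therefore shifts the swarm from exploration (large steps) towards exploitation (small steps) as the run progresses.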
Optimum: (class) Results with:
Optimum.f: (float) The best function value found in the optimization
Optimum.x: (array) The point at which the best function value was found
Optimum.traj_f: (array) Trajectory of function values
Optimum.traj_x: (array) Trajectory of positions
- In this implementation the initial positions of the particles are set randomly, while their initial velocities are set to zero.
- The random factors r1 and r2 are updated at each iteration to values uniformly distributed between 0 and 1.
- The stopping criterion implemented in this algorithm is a maximum number of iterations defined by the user.
- The file example_PSO.py exemplifies the use of the optimization algorithm.
Kennedy, J. and Eberhart, R., 1995, November. Particle swarm optimization. In Proceedings of ICNN'95-International Conference on Neural Networks (Vol. 4, pp. 1942-1948). IEEE.
Kaveh, A., Advances in Metaheuristic Algorithms for Optimal Design of Structures, Chapter 2: Particle Swarm Optimization. DOI 10.1007/978-3-319-46173-1_2.
This repository is distributed under an MIT license.