Quantum-behaved Particle Swarm Optimization

Table of contents

  1. PSO: Classical PSO
  2. QPSO: Quantum-behaved PSO
    1. Type-1 particles
    2. Type-2 particles
    3. Gaussian Attractor (GA) particles
  3. References

PSO: Classical PSO

The Particle Swarm Optimisation (PSO), first introduced by Kennedy, Eberhart and Shi [1,2], is a nature-inspired heuristic optimisation method. In the classical PSO a candidate solution $\boldsymbol{x}=[x_1,\dots, x_{N}]^T$ (where $N$ is the dimension of the search space) is iteratively adjusted using a velocity update rule:

\[\boldsymbol{x}^{t+1}_i = \boldsymbol{x}^{t}_i + \boldsymbol{v}^{t+1}_i,\]

where $\boldsymbol{x}^{t}_i$ is the position of the considered $i$th particle at time $t$ (the last iteration) and $\boldsymbol{v}^{t+1}_i$ is the velocity of the particle at time $t+1$. To calculate the velocity, the following rule is applied:

\[\boldsymbol{v}_i^{t+1} = w \boldsymbol{v}_i^{t} + c_1 r_1 \Big( \boldsymbol{x}_{i,\text{p-best}}^{t} - \boldsymbol{x}_i^{t} \Big) + c_2 r_2 \Big( \boldsymbol{x}_{\text{g-best}}^{t} - \boldsymbol{x}_i^{t} \Big)\]

The index g-best denotes the best candidate solution explored by the entire swarm so far, and $\boldsymbol{x}_{i,\text{p-best}}^{t}$ is the personal best position of the $i$th particle so far. $c_1$ and $c_2$ are often referred to as the ‘‘cognitive’’ and ‘‘social’’ weights; they control whether the particle follows its own best solution or the swarm’s overall best. $w$ is the ‘‘inertia weight’’ and controls how much the particle’s previous velocity influences the update. $r_1$ and $r_2$ are random numbers uniformly distributed between 0 and 1.
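The two update rules above can be sketched as follows. The values for $w$, $c_1$ and $c_2$ are illustrative defaults, not prescribed by the text:

```python
import numpy as np

rng = np.random.default_rng(0)

def pso_step(x, v, p_best, g_best, w=0.7, c1=1.5, c2=1.5):
    """One classical PSO update for a swarm of shape (n_particles, N).

    w, c1, c2 are illustrative values; the text does not fix them.
    """
    r1 = rng.random(x.shape)  # r1, r2 ~ U(0, 1), drawn per coordinate
    r2 = rng.random(x.shape)
    v_new = w * v + c1 * r1 * (p_best - x) + c2 * r2 * (g_best - x)
    return x + v_new, v_new

# usage: a 5-particle swarm in a 2-dimensional search space
x = rng.random((5, 2))
v = np.zeros((5, 2))
p_best = x.copy()       # personal bests (here simply the initial positions)
g_best = x[0]           # global best (here arbitrarily the first particle)
x_new, v_new = pso_step(x, v, p_best, g_best)
```

After each step, `p_best` and `g_best` would be refreshed from the objective values of `x_new` before the next update.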

QPSO: Quantum-behaved PSO

Unlike the classical PSO, the QPSO uses a sampling strategy based on a quantum $\delta$ potential well model to sample around the previous best points [3] or around the mean best position [4], and thus needs no velocity vectors for the particles. For a detailed discussion of the theory behind the QPSO, the reader is referred to [4]. The general update of the particle position in QPSO takes the following form:

\[\boldsymbol{x}^{t+1}_i = \boldsymbol{A}^{t}_i \pm \dfrac{\boldsymbol{L}^t_i}{2} \ln\left(\dfrac{1}{\boldsymbol{u}^{t+1}_i}\right)\]

Therein, $\boldsymbol{u}_i$ are random numbers uniformly distributed over $(0,1)$. $\boldsymbol{A}^{t}_i$ and $\boldsymbol{L}^{t}_i$ are the stochastic attractor of the particle and the characteristic length (or standard deviation) of the $\delta$ potential well, respectively. Their calculation depends on the formulation used as described in the following subsections.

Type-1 particles

For ‘‘Type-1’’ particles, the stochastic attractor $\boldsymbol{A}^{t}_i$ is defined as:

\[\boldsymbol{A}_i^{t} = \boldsymbol{\varphi}_i^{t} \boldsymbol{x}_{i,\text{p-best}}^{t} + \left( 1-\boldsymbol{\varphi}_i^{t} \right) \boldsymbol{x}_{\text{g-best}}^{t} ~~~\text{where:}~~~ \boldsymbol{\varphi}_i^{t} \sim \boldsymbol{U}(0,1).\]

The characteristic length $\boldsymbol{L}^{t}_i$ of the delta well is defined depending on the stochastic attractor:

\[\boldsymbol{L}_i^{t} = 2 \alpha \left| \boldsymbol{x}_i^{t} - \boldsymbol{A}_i^{t} \right|\]

Therein, $\alpha$ is the contraction–expansion (CE) coefficient controlling the convergence speed of the algorithm.

In [4] it was shown that linearly decreasing $\alpha$ from $1.0$ to $0.5$ over the course of the optimisation generally leads to good performance of the algorithm.
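Putting the Type-1 attractor, the characteristic length and the general position update together gives the following sketch (the $\pm$ branch is resolved by a fair coin flip per coordinate):

```python
import numpy as np

rng = np.random.default_rng(0)

def qpso_type1_step(x, p_best, g_best, alpha):
    """One Type-1 QPSO position update (illustrative sketch).

    x, p_best have shape (n_particles, N); g_best has shape (N,).
    alpha is the contraction-expansion coefficient.
    """
    phi = rng.random(x.shape)                    # phi ~ U(0,1), per coordinate
    A = phi * p_best + (1.0 - phi) * g_best      # stochastic attractor
    L = 2.0 * alpha * np.abs(x - A)              # characteristic length
    u = 1.0 - rng.random(x.shape)                # u ~ U(0,1]; avoids log(0)
    sign = np.where(rng.random(x.shape) < 0.5, 1.0, -1.0)  # the +/- branch
    return A + sign * (L / 2.0) * np.log(1.0 / u)

# alpha decreasing linearly from 1.0 to 0.5 over t_max iterations, as in [4]:
# alpha = 1.0 - 0.5 * t / t_max
```

Note that when a particle sits exactly at its attractor ($L = 0$), the update leaves it there; the contraction toward the attractors is what drives convergence.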

Type-2 particles

In [4] it was proposed to calculate the characteristic length using the mean best (m-best) position, defined as the mean of the personal best (p-best) positions of all $np$ particles:

\[\boldsymbol{L}^{t}_i = 2 \alpha \left| \boldsymbol{x}^{t}_i - \boldsymbol{x}^{t}_{\text{m-best}} \right| ~~~ \text{where}~~~ \boldsymbol{x}^{t}_{\text{m-best}} = \frac{1}{np} \sum^{np}_{i=1} \boldsymbol{x}^{t}_{i,\text{p-best}}\]

The local attractor is the same as for the Type-1 particles.
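The m-best position and the resulting Type-2 characteristic length reduce to a mean over the swarm axis followed by a broadcast, e.g.:

```python
import numpy as np

def type2_length(x, p_best, alpha):
    """Type-2 characteristic length L = 2*alpha*|x - m_best|.

    x, p_best have shape (n_particles, N); m_best is the mean of all
    personal best positions and is shared by the whole swarm.
    """
    m_best = p_best.mean(axis=0)              # mean best position, shape (N,)
    L = 2.0 * alpha * np.abs(x - m_best)      # per-particle, per-coordinate
    return L, m_best
```

The position update itself is unchanged: $\boldsymbol{L}^{t}_i$ from this function replaces the Type-1 length in the general QPSO update.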

Gaussian Attractor (GA) particles

In [5] it was proposed to use a Gaussian distribution to determine the local attractor (this version of PSO is also referred to as GAQPSO). A modified stochastic attractor $\boldsymbol{\bar{A}}^{t}_i$ is used instead of the original attractor $\boldsymbol{A}^{t}_i$:

\[\boldsymbol{\bar{A}}^{t}_i = N(\text{mean},\text{std}) = N\!\left(\boldsymbol{A}^{t}_i,\, \boldsymbol{x}^{t}_{\text{m-best}}-\boldsymbol{x}^{t}_{i,\text{p-best}}\right)\]

Therein, $N(\text{mean},\text{std})$ denotes a Gaussian normal distribution with the mean value taken as the original stochastic attractor and the standard deviation taken as the distance between the mean best position and the personal best position of the particle.

The same definition for $\boldsymbol{L}^{t}_i$ as for Type-2 particles is used.
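A minimal sketch of the Gaussian attractor draw, assuming the standard deviation is taken coordinate-wise; the absolute value is applied so the scale is non-negative, consistent with reading the std as a distance:

```python
import numpy as np

rng = np.random.default_rng(0)

def gaussian_attractor(A, m_best, p_best_i):
    """GAQPSO attractor: Abar ~ N(A, |m_best - p_best_i|), per coordinate.

    A and p_best_i belong to one particle; m_best is the swarm mean best.
    The abs() interprets the std as a distance (assumption; see lead-in).
    """
    std = np.abs(m_best - p_best_i)
    return rng.normal(loc=A, scale=std)
```

The drawn $\boldsymbol{\bar{A}}^{t}_i$ then simply replaces $\boldsymbol{A}^{t}_i$ in the general QPSO position update.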

References

[1] Y. Shi and R. Eberhart, ‘A modified particle swarm optimizer’, in 1998 IEEE International Conference on Evolutionary Computation Proceedings. IEEE World Congress on Computational Intelligence (Cat. No.98TH8360), Anchorage, AK, USA, 1998, pp. 69–73. doi: 10.1109/ICEC.1998.699146.

[2] J. Kennedy and R. Eberhart, ‘Particle swarm optimization’, in Proceedings of ICNN’95-international conference on neural networks, 1995, vol. 4, pp. 1942–1948.

[3] Jun Sun, Bin Feng, and Wenbo Xu, ‘Particle swarm optimization with particles having quantum behavior’, in Proceedings of the 2004 Congress on Evolutionary Computation (IEEE Cat. No.04TH8753), Portland, OR, USA, 2004, pp. 325–331. doi: 10.1109/CEC.2004.1330875.

[4] J. Sun, W. Fang, X. Wu, V. Palade, and W. Xu, ‘Quantum-Behaved Particle Swarm Optimization: Analysis of Individual Particle Behavior and Parameter Selection’, Evolutionary Computation, vol. 20, no. 3, pp. 349–393, Sep. 2012, doi: 10.1162/EVCO_a_00049.

[5] J. Sun, W. Fang, V. Palade, X. Wu, and W. Xu, ‘Quantum-behaved particle swarm optimization with Gaussian distributed local attractor point’, Applied Mathematics and Computation, vol. 218, no. 7, pp. 3763–3775, Dec. 2011, doi: 10.1016/j.amc.2011.09.021.