The Scientific World Journal

Special Issue: Swarm Intelligence and Its Applications 2014

Research Article | Open Access

Volume 2014 | Article ID 194706 | 14 pages | https://doi.org/10.1155/2014/194706

Human Behavior-Based Particle Swarm Optimization

Academic Editor: Y. Zhang
Received: 03 Dec 2013
Accepted: 17 Mar 2014
Published: 17 Apr 2014

Abstract

Particle swarm optimization (PSO) has attracted many researchers to various optimization problems, owing to its easy implementation, few tuning parameters, and acceptable performance. However, the algorithm is prone to becoming trapped in local optima because of the rapid loss of population diversity. Improving the performance of PSO and reducing its dependence on parameters are therefore two important research topics. In this paper, we present a human behavior-based PSO, called HPSO. There are two notable differences between HPSO and PSO. First, the global worst particle is introduced into the velocity equation of PSO and is weighted by a random coefficient that obeys the standard normal distribution; this strategy helps to trade off the exploration and exploitation abilities of PSO. Second, we eliminate the two acceleration coefficients c1 and c2 of the standard PSO (SPSO) to reduce the sensitivity to the parameters of the solved problems. Experimental results on 28 benchmark functions, consisting of unimodal, multimodal, rotated, and shifted high-dimensional functions, demonstrate the high performance of the proposed algorithm in terms of convergence accuracy and speed, at lower computational cost.

1. Introduction

Particle swarm optimization (PSO) [1] is a population-based intelligent algorithm, and it has been widely employed to solve various kinds of numerical and combinational optimization problems because of its simplicity, fast convergence, and high performance.

Researchers have proposed various modified versions of PSO to improve its performance; however, premature convergence and low convergence rates remain problems. In PSO research, how to increase population diversity to enhance the precision of solutions and how to speed up the convergence rate at the least computational cost are two vital issues. Generally speaking, there are four strategies to fulfill these targets, as follows.

(1) Tuning the control parameters. Regarding the inertia weight, the linearly decreasing inertia weight [2], fuzzy adaptive inertia weight [3], random inertia weight [4], and adaptive inertia weight based on velocity information [5] can all enhance the performance of PSO. Concerning the acceleration coefficients, time-varying acceleration coefficients [6] are widely used. Clerc and Kennedy analyzed the convergence behavior by introducing a constriction factor [7], which has been proved equivalent to the inertia weight [8].

(2) Hybrid PSO, which hybridizes other heuristic operators to increase population diversity. The genetic operators have been hybridized with PSO, such as selection operator [9], crossover operator [10], and mutation operator [11]. Similarly, differential evolution algorithm [12], ant colony optimization [13], and local search strategy [14] have been introduced into PSO.

(3) Changing the topological structure. The global and local versions of PSO are the main types of swarm topologies. The global version converges fast but has the disadvantage of trapping in local optima, while the local version can obtain a better solution with slower convergence [15]. The Von Neumann topology is helpful for solving multimodal problems and may perform better than other topologies, including the global version [16].

(4) Eliminating the velocity formula. Kennedy proposed the bare-bones PSO (BPSO) [17], which was followed by several variants [18, 19]. Sun et al. proposed the quantum-behaved PSO (QPSO) together with its convergence analysis [20, 21].
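As a concrete instance of the parameter-tuning strategy in (1), the linearly decreasing inertia weight [2] takes only a few lines. This is an illustrative sketch; the bounds 0.9 and 0.4 are common choices and are the values used later in Section 4.

```python
def linear_inertia(t, t_max, w_max=0.9, w_min=0.4):
    """Linearly decreasing inertia weight: w_max at t = 0, w_min at t = t_max."""
    return w_max - (w_max - w_min) * t / t_max
```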

In recent years, some modified PSO algorithms have greatly enhanced the performance of PSO. For example, Zhan et al. proposed the adaptive PSO (APSO) [22], and Wang et al. proposed the diversity-enhanced particle swarm optimization with neighborhood search (DNSPSO) [23]. The former introduces an evolutionary state estimation (ESE) technique to adaptively adjust the inertia weight and acceleration coefficients. The latter employs a diversity-enhancing mechanism and a neighborhood-based search strategy to carry out a tradeoff between exploration and exploitation.

Though the many variants of PSO have enhanced its performance, some problems remain, such as difficult implementation, new parameters to adjust, or high computational cost. It is therefore necessary to investigate how to trade off the exploration and exploitation abilities of PSO, reduce the sensitivity to the parameters of the solved problems, and improve the convergence accuracy and speed with the least computational cost and easy implementation. To carry out these targets, in this paper the global worst position (solution) is introduced into the velocity equation of the standard PSO (SPSO); this mechanism is called impelled/penalized learning according to the sign of the corresponding weight coefficient. Meanwhile, we eliminate the two acceleration coefficients c1 and c2 of SPSO to reduce the sensitivity to the parameters of the solved problems. The proposed HPSO has been applied to a set of nonlinear benchmark functions, comprising unimodal, multimodal, rotated, and shifted high-dimensional functions, and its high performance is confirmed by comparison with other well-known modified PSO algorithms.

The remainder of the paper is structured as follows. In Section 2, the standard particle swarm optimization (SPSO) is introduced. The proposed HPSO is given in Section 3. Experimental studies and discussion are provided in Section 4. Some conclusions are given in Section 5.

2. Standard PSO (SPSO)

PSO is inspired by the flocking of birds and the schooling of fish; it was first introduced by Kennedy and Eberhart in 1995 [1] as a new heuristic algorithm. In the standard PSO (SPSO) [2], a swarm consists of a set of particles, and each particle represents a potential solution of an optimization problem. Consider the i-th particle of a swarm of N particles in a D-dimensional space; its position and velocity at iteration t are denoted by X_i(t) = (x_i1, ..., x_iD) and V_i(t) = (v_i1, ..., v_iD). The new velocity on the j-th dimension of this particle at iteration t + 1 is calculated by

  v_ij(t+1) = w(t) v_ij(t) + c1 r1 (p_ij − x_ij(t)) + c2 r2 (p_gj − x_ij(t)),  (1)

where i = 1, 2, ..., N and N is the population size; j = 1, 2, ..., D and D is the dimension of the search space; r1 and r2 are two uniformly distributed random numbers in the interval [0, 1]; and the acceleration coefficients c1 and c2 are nonnegative constants which control the influence of the cognitive and social components during the search process. P_i = (p_i1, ..., p_iD), called the personal best solution, is the best solution found by the i-th particle itself until iteration t; P_g = (p_g1, ..., p_gD), called the global best solution, is the best solution found by all particles until iteration t. w(t) is the inertia weight, which balances the global and local search abilities of the particles in the search space and is given by

  w(t) = w_max − (w_max − w_min) t / t_max,  (2)

where w_max is the initial weight, w_min is the final weight, t is the current iteration number, and t_max is the maximum iteration number. The particle's position is then updated using

  x_ij(t+1) = x_ij(t) + v_ij(t+1),  (3)

and checked against x_j^min ≤ x_ij ≤ x_j^max, where x_j^min and x_j^max are the lower and upper bounds of the j-th variable, respectively.
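The SPSO velocity and position updates described above can be sketched in NumPy for the whole swarm at once. This is an illustrative sketch, not the authors' code; `v_max`, `lb`, and `ub` stand for the velocity and position limits mentioned in the text.

```python
import numpy as np

def spso_step(x, v, pbest, gbest, w, c1=2.0, c2=2.0,
              v_max=None, lb=None, ub=None):
    """One SPSO iteration. x, v, pbest: (N, D) arrays; gbest: (D,)."""
    n, d = x.shape
    r1 = np.random.rand(n, d)  # cognitive random numbers in [0, 1]
    r2 = np.random.rand(n, d)  # social random numbers in [0, 1]
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    if v_max is not None:                  # velocity boundary check
        v = np.clip(v, -v_max, v_max)
    x = x + v
    if lb is not None and ub is not None:  # position boundary check
        x = np.clip(x, lb, ub)
    return x, v
```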

3. Human Behavior-Based PSO (HPSO)

In this section, a modified version of SPSO based on human behavior, called HPSO, is proposed to improve the performance of SPSO. In SPSO, all particles learn only from the best solutions P_i and P_g. Obviously, this is an idealized social condition. Considering human behavior, however, there are always some people around us with bad habits or behaviors, and, as we all know, these bad habits or behaviors affect the people around them. If we take warning from these bad habits or behaviors, they are beneficial to us; conversely, if we learn from them, they are harmful to us. Therefore, we must take an objective and rational view of these bad habits or behaviors.

In HPSO, we introduce the global worst particle, which has the worst fitness in the entire population at each iteration. Denoting it by P_w = (p_w1, ..., p_wD), it is defined as

  P_w = arg max f(X_i(t)), i = 1, 2, ..., N,  (4)

where f(·) represents the fitness value of the corresponding particle.

To simulate human behavior and make full use of P_w, we introduce a learning coefficient r, a random number obeying the standard normal distribution; that is, r ~ N(0, 1). If r > 0, we consider it an impelled learning coefficient, which increases the "flying" velocity of the particle and therefore enhances its exploration ability. Conversely, if r < 0, we consider it a penalized learning coefficient, which decreases the "flying" velocity of the particle and is therefore beneficial to exploitation. If r = 0, the bad habits or behaviors have no effect on the particle. Meanwhile, in order to reduce the sensitivity to the parameters of the solved problems, we replace the two acceleration coefficients c1 and c2 with two random learning coefficients r1 and r2, respectively. The velocity equation is therefore changed as follows:

  v_ij(t+1) = w(t) v_ij(t) + r1 (p_ij − x_ij(t)) + r2 (p_gj − x_ij(t)) + r (x_ij(t) − p_wj),  (5)

where r1 and r2 are two random numbers in the range [0, 1] and r ~ N(0, 1). The random numbers r1, r2, and r are the same for all dimensions but different for each particle, and they are generated anew in each iteration. If v_ij overflows the boundary, we set it to the boundary value:

  v_ij = v_j^min if v_ij < v_j^min, and v_ij = v_j^max if v_ij > v_j^max,  (6)

where v_j^min and v_j^max are the minimum and maximum velocities on the j-th dimension of the search space, respectively. Similarly, if x_ij flies out of the search space, we limit it to the corresponding bound value.
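The HPSO velocity update and clamping rule described above can be sketched as follows. This is an illustrative NumPy sketch, not the authors' code; r1, r2 ~ U(0, 1) and r ~ N(0, 1) are drawn per particle and shared across all dimensions, and the sign convention of the worst-particle term follows the impelled/penalized reading above.

```python
import numpy as np

def hpso_velocity(x, v, pbest, gbest, gworst, w, v_min, v_max):
    """HPSO velocity update: no acceleration coefficients, plus a
    worst-particle term weighted by r ~ N(0, 1)."""
    n, _ = x.shape
    r1 = np.random.rand(n, 1)   # U(0,1), one scalar per particle
    r2 = np.random.rand(n, 1)   # U(0,1), one scalar per particle
    r = np.random.randn(n, 1)   # N(0,1): impelled (r > 0) or penalized (r < 0)
    v = (w * v
         + r1 * (pbest - x)     # cognitive term
         + r2 * (gbest - x)     # social term
         + r * (x - gworst))    # impelled/penalized learning term
    return np.clip(v, v_min, v_max)   # velocity boundary check
```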

In SPSO, the cognitive and social learning terms move a particle towards good solutions based on P_i and P_g in the search space, as shown in Figure 1. This strategy makes a particle fly fast to good solutions, so it easily becomes trapped in local optima. From Figure 2, we can clearly observe that both the impelled and the penalized learning terms give a particle the chance to change its flying direction. Therefore, the impelled/penalized learning term plays a key role in increasing population diversity, which helps particles escape from local optima and enhances the convergence speed. In HPSO, the impelled/penalized learning term performs a proper tradeoff between exploration and exploitation.

To sum up, Figure 3 illustrates the flowchart of HPSO, and the pseudocode of HPSO is listed in Algorithm 1.

Initialize parameters:
 N: the population size;
 D: the dimensionality of the search space;
 t_max: the maximum number of iterations;
 w: the inertia weight;
 the allowable position boundaries [x^min, x^max];
 the allowable velocity boundaries [v^min, v^max];
Initialize population: positions X_i(0) and velocities V_i(0), i = 1, ..., N;
Initialize P_i, P_g, and P_w:
 Evaluate the fitness of all particles in X(0);
 P_i = X_i(0);
 P_g = the particle with the best fitness;
 P_w = the particle with the worst fitness;
For t = 1 to t_max
 For each particle i
  Update velocity according to (5) and check the boundaries;
  Update position according to (3) and check the boundaries;
 Endfor
 Evaluate the fitness of all particles in X(t);
 Update P_i, P_g, and P_w;
Endfor
Return the best solution.
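Algorithm 1 can be condensed into a compact NumPy implementation. This is an illustrative sketch, not the authors' code; the velocity bound, set here to the full variable range, and the use of the current population's worst particle for P_w are assumptions consistent with the description above.

```python
import numpy as np

def hpso(f, lb, ub, dim, n_particles=30, max_iter=1000,
         w_max=0.9, w_min=0.4, seed=None):
    """Minimize f over [lb, ub]^dim with the HPSO scheme of Algorithm 1."""
    rng = np.random.default_rng(seed)
    lb = np.full(dim, float(lb)); ub = np.full(dim, float(ub))
    v_max = ub - lb                              # assumed velocity bound
    x = rng.uniform(lb, ub, (n_particles, dim))  # initial positions
    v = rng.uniform(-v_max, v_max, (n_particles, dim))
    fit = np.array([f(p) for p in x])
    pbest, pfit = x.copy(), fit.copy()           # personal bests
    gbest = x[np.argmin(fit)].copy()             # global best P_g
    gfit = fit.min()
    gworst = x[np.argmax(fit)].copy()            # global worst P_w
    for t in range(max_iter):
        w = w_max - (w_max - w_min) * t / max_iter   # linearly decreasing weight
        r1 = rng.random((n_particles, 1))            # per-particle scalars
        r2 = rng.random((n_particles, 1))
        r = rng.standard_normal((n_particles, 1))    # impelled/penalized coefficient
        v = w * v + r1 * (pbest - x) + r2 * (gbest - x) + r * (x - gworst)
        v = np.clip(v, -v_max, v_max)                # velocity boundary check
        x = np.clip(x + v, lb, ub)                   # position boundary check
        fit = np.array([f(p) for p in x])
        better = fit < pfit                          # update personal bests
        pbest[better] = x[better]
        pfit[better] = fit[better]
        if pfit.min() < gfit:                        # update global best
            gfit = pfit.min()
            gbest = pbest[np.argmin(pfit)].copy()
        gworst = x[np.argmax(fit)].copy()            # update P_w each iteration
    return gbest, gfit
```

For example, on the 5-dimensional sphere model with bounds [−100, 100], `hpso(lambda z: float(np.sum(z**2)), -100, 100, dim=5, max_iter=500, seed=1)` drives the objective towards 0.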

4. Experimental Studies and Discussion

To evaluate the performance of HPSO, 28 minimization benchmark functions are selected [22, 24, 25] as detailed in Section 4.1. HPSO is compared with SPSO in different search spaces and the results are given in Section 4.2. In addition, HPSO is compared with some well-known variants of PSO in Section 4.3.

4.1. Benchmark Functions

In the experimental study, we choose 28 minimization benchmark functions, consisting of unimodal, multimodal, rotated, shifted, and shifted rotated functions. Table 1 lists the main information; please refer to [22, 24, 25] for further details about these functions. The first group are unimodal functions, among which the Rosenbrock function is unimodal in low dimensions but may have multiple minima in high-dimension cases. The unrotated multimodal functions have a number of local minima that increases exponentially with the problem dimension. The remaining groups are rotated functions, shifted functions, and shifted rotated multimodal functions, for which a randomly generated shift vector o located in the search space is used. To obtain a rotated function, an orthogonal matrix M [26] is generated and the rotated variable y = M x is computed. Then, the vector y is used to evaluate the objective function value.
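The rotated/shifted construction described above can be sketched as follows. This is illustrative; the QR-based sampling of the orthogonal matrix is one standard choice and an assumption here, not necessarily the generator of [26].

```python
import numpy as np

def random_orthogonal(dim, rng):
    """Sample an orthogonal matrix via QR decomposition of a Gaussian matrix."""
    q, r = np.linalg.qr(rng.standard_normal((dim, dim)))
    return q * np.sign(np.diag(r))   # column-sign fix for a uniform distribution

def make_rotated_shifted(f, dim, shift=None, rotate=True, seed=0):
    """Wrap f so that it is evaluated on y = M (x - o)."""
    rng = np.random.default_rng(seed)
    m = random_orthogonal(dim, rng) if rotate else np.eye(dim)
    o = np.zeros(dim) if shift is None else np.asarray(shift, dtype=float)
    def g(x):
        return f(m @ (np.asarray(x, dtype=float) - o))
    return g
```

Because the sphere model is rotation invariant, the rotated, shifted sphere attains its optimum exactly at the shift vector, which makes the construction easy to sanity-check.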


Function name | Search range | Global optimum

Sphere model | [−100, 100] | 0
Schwefel's problem 2.22 | [−10, 10] | 0
Schwefel's problem 1.2 | [−100, 100] | 0
Schwefel's problem 2.21 | [−100, 100] | 0
Step function | [−100, 100] | 0
Quartic function (i.e., noise) | [−1.28, 1.28] | 0
Rosenbrock's function | [−10, 10] | 0
Schwefel's function | [−500, 500] | 0
Generalized Rastrigin's function | [−5.12, 5.12] | 0
Noncontinuous Rastrigin's function | [−5.12, 5.12] | 0
Ackley's function | [−32, 32] | 0
Generalized Griewank's function | [−600, 600] | 0
Weierstrass's function | [−0.5, 0.5] | 0
Generalized penalized function | [−50, 50] | 0
Cosine mixture problem | [−1, 1] |
Rotated elliptic function | [−1.28, 1.28] | 0
Rotated Schwefel's function | [−500, 500] | 0
Rotated Ackley's function | [−32, 32] | 0
Rotated Griewank's function | [−600, 600] | 0
Rotated Weierstrass's function | [−0.5, 0.5] | 0
Rotated Rastrigin's function | [−5.12, 5.12] | 0
Rotated Salomon's function | [−100, 100] | 0
Rotated Rosenbrock's function | [−100, 100] | 0
Shifted Rosenbrock's function | [−100, 100] | 390
Shifted Rastrigin's function | [−5, 5] | −330
Shifted Schwefel's problem 2.21 | [−100, 100] | −450
Shifted rotated Ackley's function | [−32, 32] | −140
Shifted rotated Weierstrass's function | [−0.5, 0.5] | 90

4.2. Comparison of HPSO with SPSO

The convergence accuracy of HPSO is compared with that of SPSO on the test functions listed in Table 1. For a fair comparison, we set the same parameter values for both algorithms. The population size is set to 30; the upper and lower velocity bounds are derived from the upper and lower bounds of the variables. The inertia weight is linearly decreased from 0.9 to 0.4 in both SPSO and HPSO. The acceleration coefficients c1 and c2 in SPSO are set to 2. The two algorithms are independently run 30 times on the benchmark functions. The results in terms of the best, worst, median, mean, and standard deviation (SD) of the solutions obtained in the 30 independent runs by each algorithm in different search spaces are shown in Tables 2, 3, and 4. The maximum number of iterations is 1000 for D = 30, 2000 for D = 50, and 3000 for D = 100.


Fun Dim Best Worst Median Mean SD Significant

30 SPSO 666.6686
HPSO 0 0 0 0 0 +
50 SPSO 0.0078
HPSO 0 0 0 0 0 +
100 SPSO
HPSO 0 10000 0 333.3333 +

30 SPSO 30.0018 10.0017 11.3364 10.0777
HPSO 0 0 0 0 0 +
50 SPSO 0.0329 70.0010 40.0006 37.3438 15.2918
HPSO 0 0 0 0 0 +
100 SPSO 51.0214 181.4054 110.5934 114.3039 29.0723
HPSO 0 0 0 0 0 +

30 SPSO
HPSO 0 0 172.5975 945.3557 +
50 SPSO
HPSO 0 0 232.6222 +
100 SPSO
HPSO 0 0 +

30 SPSO 8.6091 21.2711 12.9945 13.3502 3.5341
HPSO 0 0 0 0 0 +
50 SPSO 24.2031 39.5127 31.0562 31.1715 4.2886
HPSO 0 0 0 0 0 +
100 SPSO 54.1172 75.3686 64.7834 64.2358 4.2202
HPSO 0 0 0 0 0 +

30 SPSO 0 10001 0
HPSO 0 0 0 0 0 +
50 SPSO 0 20004 4.5000
HPSO 0 0 0 0 0 +
100 SPSO 127 90040 40068
HPSO 0 0 0 0 0 +

30 SPSO 0.0344 18.8556 0.0959 3.5587 5.1400
HPSO 0.0030 0.0012 0.0012 +
50 SPSO 0.0780 72.6594 13.6489 19.6604 19.3860
HPSO 0.0017 +
100 SPSO 86.7855 381.9209 200.8146 211.9720 88.3159
HPSO 0.0019 +

30 SPSO 14.3237 140.5176
HPSO 28.6353 28.9456 28.8793 28.8461 0.0932 +
50 SPSO 97.0317 376.2306
HPSO 48.4886 48.8766 48.7600 48.7513 0.0875 +
100 SPSO 706.1328
HPSO 98.4280 98.8373 98.7133 98.7129 0.0818 +

30 SPSO 733.1063
HPSO
50 SPSO
HPSO
100 SPSO
HPSO

30 SPSO 28.7299 160.3815 87.6754 92.5142 32.6994
HPSO 0 0 0 0 0 +
50 SPSO 175.2643 351.6480 260.4359 258.0518 48.4078
HPSO 0 0 0 0 0 +
100 SPSO 555.8950 993.3887 750.1694 749.1658 749.1658
HPSO 0 0 0 0 0 +

30 SPSO 61.4129 221.0445 132.7694 134.5414 33.8073
HPSO 0 0 0 0 0 +
50 SPSO 157.1020 440.0897 324.2632 310.3595 64.3675
HPSO 0 0 0 0 0 +
100 SPSO 623.5658 804.6981 813.3435 88.5932
HPSO 0 25 0 0.8333 4.5644 +


Fun Dim Best Worst Median Mean SD Significant

30 SPSO 0.0043 19.9630 0.0595 2.3935 5.4041
HPSO 0 +
50 SPSO 0.0598 19.9646 12.6912 10.5673 6.3042
HPSO 0 +
100 SPSO 15.4237 20.2143 19.5200 19.4135 0.8672
HPSO 0 +

30 SPSO 90.8935 0.0178 12.0794 31.2763
HPSO 0 0 0 0 0 +
50 SPSO 0.0014 270.8170 0.0415 45.1971 70.1274
HPSO 0 0 0 0 0 +
100 SPSO 1.1140 721.0594 361.0858 376.1758 158.6584
HPSO 0 0 0 0 0 +

30 SPSO 0.1403 4.3952 0.3210 1.0567 1.4863
HPSO 0 0 0 0 0 +
50 SPSO 0.8657 15.2389 7.5828 8.2388 3.6607
HPSO 0 0 0 0 0 +
100 SPSO 27.6235 64.4826 49.3984 47.7138 10.0126
HPSO 0 0 0 0 0 +

30 SPSO 2.2031 0.4202 0.5373 0.5730
HPSO 0.0710 0.2803 0.1301 0.1444 0.0513 +
50 SPSO 0.1882 6.9784 2.2774 2.3889 1.5688
HPSO 0.1016 0.3137 0.1652 0.1702 0.0438 +
100 SPSO 32.5063 457.9143
HPSO 0.1866 0.5097 0.2703 0.2736 0.0653 +

30 SPSO −3.0000 −2.8522 −3.0000 −2.9507 0.0709
HPSO −3 −3 −3 −3 0 +
50 SPSO −5.0000 −2.3044 −4.4827 −4.2127 0.6865
HPSO −5 −5 −5 −5 0 +
100 SPSO −7.9165 4.7637 −5.2127 −4.6977 2.8465
HPSO −10 −10 −10 −10 0 +

30 SPSO
HPSO 0 0 390.6710 +
50 SPSO
HPSO 0 0 224.6749 873.6249 +
100 SPSO
HPSO 0 0 +

30 SPSO 739.7223
HPSO 442.4330
50 SPSO