Abstract

Particle swarm optimization (PSO) is an evolutionary algorithm for solving global optimization problems. PSO converges quickly and does not require the objective function to be differentiable or continuous. Over the past two decades, much research has focused on improving the performance of PSO, and numerous PSO variants have been presented. According to a recent theory, no optimization algorithm can outperform all others on every type of optimization problem. A PSO algorithm with mixed strategies may therefore be more efficient than one with a pure strategy. This paper proposes a mixed strategy PSO algorithm (MSPSO) that integrates five different PSO variants. In MSPSO, an adaptive selection strategy adjusts the probability of selecting each variant according to the rate of fitness change between the offspring generated by that variant and the personal best positions of the particles. This rate of fitness change is a more informative indicator of a good strategy than the number of previous successes and failures of each variant. To improve the exploitation ability of MSPSO, a variant of the Nelder–Mead method is also proposed. The combination of these two methods further improves the performance of MSPSO. The proposed algorithm is tested on the CEC 2014 benchmark suite with 10 and 30 variables and the CEC 2010 suite with 1000 variables, and it is also applied to the hydrothermal scheduling problem. Experimental results demonstrate that the solution accuracy of the proposed algorithm is overall better than that of the comparative algorithms.

1. Introduction

In recent years, optimization algorithms have been applied increasingly widely in various fields [1]. One of the most famous is PSO, developed by Kennedy and Eberhart in 1995 [2]. As a global optimization method, PSO is an important tool for efficiently solving difficult optimization problems for which no good problem-specific approach exists. Its original inspiration comes from the flocking behaviour of birds. In PSO, each individual in the population is a particle, and each particle represents a potential solution in the solution space. Particles scan the search area and converge to the optimum by flying through the space, adjusting their velocities based on their personal best historical experience and the best solution found by the population. PSO is a robust stochastic optimization algorithm that is easy to implement and requires few parameters to be tuned. On account of its simple implementation and high efficiency, PSO has been successfully applied to various real-world problems such as wireless sensor networks [3], feature selection [4], traffic control [5], road identification [6], task allocation [7], and crowd user selection [8].

Recently, various improvements to PSO have been proposed to enhance its overall performance. Motivated by a recent theory [9], this study presents a mixed strategy PSO (MSPSO). According to the theory, the hardest problem for one evolutionary algorithm might be the easiest for another, and vice versa. Thus, mixed strategy PSO algorithms might be more efficient than pure strategy PSO algorithms. By analogy, for a company to run well, it needs people who are good at management, marketing, and purchasing to work together; if it hires only management specialists for all roles, it will operate poorly, because employees working outside their expertise take more time and produce worse results. Inspired by this theory, MSPSO integrates five different PSO variants and adopts a new probability update strategy based on the relative change in fitness values. The selection probability of each variant is guided by the rate of fitness change between the offspring generated by that variant and the personal best positions of the particles, and the variants are assigned different probabilities according to the rank of their rates of change. Our experiments confirm that using the rate of fitness change to guide the selection probabilities increases the probability of selecting excellent variants compared with using the number of previous successes and failures of each variant. In addition, to enhance the exploitation ability of MSPSO, a local search method inspired by the Nelder–Mead method is proposed.

In summary, we have made the following contributions:

(1) We propose a cooperative strategy to integrate multiple PSO operators; the integrated algorithm achieves better generalization capability.

(2) We add a local search operator to the integrated algorithm, further improving its performance.

(3) We demonstrate the performance of MSPSO on benchmark suites and real-world problem instances.

The rest of this study is organized as follows. Related methods are reviewed in Section 2. Section 3 introduces MSPSO. Section 4 evaluates the proposed MSPSO and reports the experimental results. Finally, Section 5 concludes the study.

2. Related Methods

2.1. Canonical PSO

In the optimization process, the velocity vector of the ith particle in the population is updated iteratively using (1), given in [2], under the guidance of the personal best position $pbest_i$ and the global best position $gbest$:

$$v_{id}^{t+1} = v_{id}^{t} + c_1 r_{1d}\left(pbest_{id} - x_{id}^{t}\right) + c_2 r_{2d}\left(gbest_{d} - x_{id}^{t}\right). \quad (1)$$

Acceleration parameters $c_1$ and $c_2$ are usually set to 2.0, and $r_{1d}$ and $r_{2d}$ are two random numbers within [0, 1] for the dth dimension of the ith particle. Here, $x_{id}^{t}$ and $v_{id}^{t}$ denote the position and velocity of the ith particle in the dth dimension at generation t.

To avoid premature convergence, the authors in [10] introduced an inertia weight $w$ to update the flying velocities of particles. The particle velocity is adjusted through the following formula:

$$v_{id}^{t+1} = w\, v_{id}^{t} + c_1 r_{1d}\left(pbest_{id} - x_{id}^{t}\right) + c_2 r_{2d}\left(gbest_{d} - x_{id}^{t}\right). \quad (2)$$

In (2), $w$ commonly decreases linearly from 0.9 to 0.4 over the generations to balance exploration capability and exploitation capability. A large value of $w$ enhances exploration, whereas a small value of $w$ encourages convergence during the search process.
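For concreteness, the following is a minimal NumPy sketch of the inertia-weight update in (2). The array shapes, function names, and the linear inertia schedule from 0.9 to 0.4 follow the description above; everything else is our own illustrative scaffolding, not the paper's implementation.

```python
import numpy as np

def pso_velocity_update(v, x, pbest, gbest, w, c1=2.0, c2=2.0, rng=None):
    # One inertia-weight velocity update per (2).
    # v, x, pbest: (n_particles, dim) arrays; gbest: (dim,) array.
    rng = rng or np.random.default_rng()
    r1 = rng.random(v.shape)   # fresh r_1d for every particle and dimension
    r2 = rng.random(v.shape)
    return w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)

def inertia_weight(t, max_gen, w_max=0.9, w_min=0.4):
    # Linear decrease from 0.9 at t = 0 to 0.4 at t = max_gen.
    return w_max - (w_max - w_min) * t / max_gen
```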

2.2. PSO Variants

To enhance the performance of PSO on global optimization problems, many researchers have worked on improving PSO, and numerous PSO variants have been presented. Designing new strategies, new techniques, and new topological structures for PSO is an important research trend, and various topologies have been suggested. In PSO, the trajectories of particles are adjusted by their own personal best positions and the best position in the population. However, this may cause premature convergence when solving multimodal functions: because the best particle in the population is the best solution found by the whole population, it may be a local optimum of a multimodal function that is far away from the global optimum. The authors in [11] proposed a social learning PSO (SL-PSO) which introduced social learning into PSO. The advantage of social learning is that individuals can learn from others without paying for their own trials and errors. In SL-PSO, each particle was updated based on any better particle in the current population. Furthermore, to reduce parameter settings, SL-PSO adopted a dimension-dependent parameter control method. Compared with other optimization algorithms, SL-PSO is easy to implement, computationally efficient, and requires no complicated adjustment of control parameters. In order to accelerate convergence and improve exploitation ability, the authors in [12] proposed prey-predator PSO (PP-PSO). PP-PSO achieved this goal by deleting or transforming "slothful particles," i.e., particles with low velocities; such particles rarely find the global optimum and slow down convergence. Furthermore, to enhance population diversity, PP-PSO designed a proportional-integral control parameter to keep the population fluctuating within a relatively stable range during the iterative process. The above-mentioned PSO variants mainly improve the classical PSO algorithm by designing new information-sharing modes between particles and building new particle search models. However, when solving some complex optimization problems, PSO and its variants are still prone to premature convergence, and once trapped in a local optimum, it is difficult for the particles to escape from that region. Therefore, over the past decades, researchers have also tried to address this problem by proposing various improvement strategies on top of existing PSO algorithms.

Another popular modification is to combine PSO with other mathematical methods or evolutionary computation techniques. The authors in [13] integrated PSO with the sine cosine algorithm (SCA) and the Lévy flight approach to overcome PSO's tendency to fall into local optima. In SCA, solutions are updated by sine and cosine functions to balance exploitation and exploration. In addition, SCA used the Lévy distribution, which enables a more effective search, to produce a random walk in the search space. The combination of SCA, Lévy flight, and PSO enhanced the exploration capability of the original PSO and prevented it from being trapped in local minima. The hybridization of PSO with GAs has also been presented in [14, 15]. A hybrid of PSO with the bat algorithm (BA) was proposed in [16] for numerical optimization problems. A communication strategy provided information flow between the PSO population and the BA population: after a fixed number of iterations, several of the best individuals in BA replaced the worst individuals in PSO, and conversely, the best particles of PSO replaced the worst individuals of BA.

Multipopulation strategies and ensemble optimizers are also effective ways to improve the performance of PSO. To avoid the phenomena of "oscillation" and "two steps forward, one step back" in PSO, the authors in [17] proposed a two-swarm learning PSO algorithm called TSLPSO, which hybridized two different learning strategies: a dimensional learning strategy (DLS) and a comprehensive learning strategy. One swarm used DLS to construct learning exemplars; DLS uses the information of the best particle in the population for the local search of the particles. The other swarm used the comprehensive learning strategy to construct learning exemplars that guide the global search. In [18], Xu et al. constructed the DMS-PSO-CLS algorithm, which combines the dynamic multiswarm particle swarm optimizer (DMS-PSO) with a new cooperative learning strategy (CLS). In the CLS, to learn from more excellent exemplars, the two worst particles in a subswarm update their dimensions using a better particle selected from two random subswarms by a tournament selection strategy. With this method, particles can find the global optimum more easily. Simulation results showed that DMS-PSO-CLS outperformed the other PSO variants compared. These algorithms also have their limitations: some hybrid frameworks require more computing resources to execute the iterative processes of the different algorithms, while most multipopulation strategies cannot perform a fine local search in the later stage of the search process, making it difficult to obtain final solutions with high accuracy.

2.3. Complementary Strategy Theorem

The authors in [19] proposed a complementary strategy theorem. According to this theorem, mixed strategy evolutionary algorithms might outperform pure strategy evolutionary algorithms; in particular, the overall performance of a mixed strategy evolutionary algorithm might match the best performance of the pure strategy evolutionary algorithms from which it is derived. In the following, let $E[\tau_{\mathcal{A}}(P)]$ denote the expected hitting time of an algorithm $\mathcal{A}$ starting from an initial population P.

Theorem 1. If a pure strategy evolutionary algorithm $\mathcal{A}$ is better than another pure strategy evolutionary algorithm $\mathcal{B}$, then for any initial population P, the expected hitting time of any mixed strategy evolutionary algorithm $\mathcal{M}$ derived from $\mathcal{A}$ and $\mathcal{B}$ satisfies $E[\tau_{\mathcal{M}}(P)] \ge E[\tau_{\mathcal{A}}(P)]$, and $E[\tau_{\mathcal{M}}(P)] > E[\tau_{\mathcal{A}}(P)]$ for some state P.

Theorem 2. If a pure strategy evolutionary algorithm $\mathcal{A}$ is equivalent to another pure strategy evolutionary algorithm $\mathcal{B}$, then for any initial population P, the expected hitting time of any mixed strategy evolutionary algorithm $\mathcal{M}$ derived from $\mathcal{A}$ and $\mathcal{B}$ satisfies $E[\tau_{\mathcal{M}}(P)] = E[\tau_{\mathcal{A}}(P)]$.

Theorem 3. If a pure strategy evolutionary algorithm $\mathcal{A}$ complements another pure strategy evolutionary algorithm $\mathcal{B}$, then there exists a mixed strategy evolutionary algorithm $\mathcal{M}$ derived from $\mathcal{A}$ and $\mathcal{B}$ whose expected hitting time satisfies $E[\tau_{\mathcal{M}}(P)] \le \min\{E[\tau_{\mathcal{A}}(P)], E[\tau_{\mathcal{B}}(P)]\}$ for any initial population P and $E[\tau_{\mathcal{M}}(P)] < \min\{E[\tau_{\mathcal{A}}(P)], E[\tau_{\mathcal{B}}(P)]\}$ for some initial population P.

Theorem 4 (complementary strategy theorem). A pure strategy evolutionary algorithm $\mathcal{A}$ is complementary to another pure strategy evolutionary algorithm $\mathcal{B}$ if and only if there exists a mixed strategy evolutionary algorithm $\mathcal{M}$ derived from them such that $E[\tau_{\mathcal{M}}(P)] \le \min\{E[\tau_{\mathcal{A}}(P)], E[\tau_{\mathcal{B}}(P)]\}$ for any initial population P and $E[\tau_{\mathcal{M}}(P)] < \min\{E[\tau_{\mathcal{A}}(P)], E[\tau_{\mathcal{B}}(P)]\}$ for some initial population P.

The complementary strategy theorem can be interpreted intuitively as follows:

(1) If one pure strategy evolutionary algorithm is better than another, then it is impossible to design a mixed strategy evolutionary algorithm with the same performance as the better pure strategy algorithm. Hence, in this case, mixed strategy evolutionary algorithms do not outperform the pure strategy algorithms they are derived from.

(2) If one pure strategy evolutionary algorithm is complementary to another, then it is possible to design a mixed strategy evolutionary algorithm better than both pure strategy algorithms. However, this does not mean that every mixed strategy evolutionary algorithm will outperform the pure strategy algorithms it is derived from.

(3) The following principle should be followed when designing a better mixed strategy evolutionary algorithm: if one pure strategy evolutionary algorithm performs better than another at a state, then the mixed strategy evolutionary algorithm should apply that pure strategy with a higher probability at that state. A toy example is sketched below.
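To make point (3) concrete, here is a toy illustration; the numbers are ours and purely illustrative, not from [19].

```latex
% Toy illustration (all numbers are ours): two pure strategies, two states.
\[
E[\tau_{\mathcal{A}}(P_1)] = 10,  \quad E[\tau_{\mathcal{B}}(P_1)] = 100,
\qquad
E[\tau_{\mathcal{A}}(P_2)] = 100, \quad E[\tau_{\mathcal{B}}(P_2)] = 10.
\]
% Neither pure strategy dominates the other, so \(\mathcal{A}\) and
% \(\mathcal{B}\) are complementary. A mixed strategy \(\mathcal{M}\) that
% applies \(\mathcal{A}\) with high probability at \(P_1\) and \(\mathcal{B}\)
% with high probability at \(P_2\) can achieve
\[
E[\tau_{\mathcal{M}}(P)] \le \min\{E[\tau_{\mathcal{A}}(P)],\, E[\tau_{\mathcal{B}}(P)]\}
\quad \text{for } P \in \{P_1, P_2\},
\]
% which is precisely the situation described by Theorems 3 and 4.
```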

3. Mixed Strategy PSO

3.1. PSO Strategies

MSPSO hybridizes PSO [10], MCLPSO [20], LIPS [21], HPSO-TVAC [22], and FDR-PSO [23] with an adaptive selection strategy. The velocity updates of these variants (other than the canonical PSO, whose formula is given in (2)) are described below.

3.1.1. MCLPSO

We proposed a modified CLPSO (MCLPSO) in [20] a few years ago. Compared with CLPSO, MCLPSO improves convergence while maintaining population diversity, and it achieves a better balance between exploration and exploitation. In the MCLPSO velocity update equation (given in full in [20]), $t$ is the current generation number, $\alpha$ is an adjustment coefficient between 0 and 1, $T_{\max}$ is the maximum number of generations, $r$ is a random number within the range [0, 1], $\bar{v}_d$ is the dth dimension of the average velocity of the whole population, $cbest_d$ represents the dth dimension of the best position among a list of particles selected randomly from the whole population, and the rest of the parameters have the same meanings as those in (2).

Under this update, the velocity in MCLPSO is large with high probability in the early stage of the search, promoting exploration; conversely, in the later stage, the velocity is small with high probability, providing better exploitation.

3.1.2. LIPS

LIPS used the best experiences of adjacent particles rather than the global best experience of the population to guide the particles to the optimum [21]. The algorithm adjusts a particle's velocity using the personal best positions of its neighbor particles, measured by Euclidean distance. The formula is given as follows:

$$v_{id}^{t+1} = \chi \left( v_{id}^{t} + \sum_{k=1}^{nsize} \varphi_k \left( nbest_{kd} - x_{id}^{t} \right) \right),$$

where $nbest_{kd}$ is the dth dimension of the best position of the kth neighbor particle of the ith particle, $\varphi_k$ is a random number that obeys a uniform distribution within [0, 4.1/nsize], $nsize$ is the number of neighbor particles, and $\chi$ is a constriction coefficient (typically 0.7298).
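A minimal single-particle sketch of this update, assuming the summation form reconstructed above; the constriction value 0.7298, the function name, and the array conventions are our assumptions.

```python
import numpy as np

def lips_velocity_update(v_i, x_i, nbests, chi=0.7298, rng=None):
    # One LIPS-style velocity update for a single particle.
    # nbests: (nsize, dim) array holding the personal bests of the nsize
    # nearest neighbors (nearest by Euclidean distance, per [21]).
    rng = rng or np.random.default_rng()
    nsize = nbests.shape[0]
    phi = rng.uniform(0.0, 4.1 / nsize, size=(nsize, 1))  # phi_k ~ U[0, 4.1/nsize]
    return chi * (v_i + np.sum(phi * (nbests - x_i), axis=0))
```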

3.1.3. HPSO-TVAC

In order to make the particles converge quickly to the global optimum, the authors in [22] designed a new velocity formula that does not use the previous velocity (the acceleration coefficients vary with time, hence the name time-varying acceleration coefficients). The formula is given as follows:

$$v_{id}^{t+1} = c_1 r_{1d}\left(pbest_{id} - x_{id}^{t}\right) + c_2 r_{2d}\left(gbest_{d} - x_{id}^{t}\right),$$

where $c_1$, $r_{1d}$, $c_2$, and $r_{2d}$ have the same meanings as those in (1).
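A short sketch of this memory-less update; the function name is ours, and the time-varying values of c1 and c2 are supplied by the caller.

```python
import numpy as np

def hpso_tvac_velocity(x, pbest, gbest, c1, c2, rng=None):
    # Velocity computed from scratch each generation: no previous-velocity term.
    rng = rng or np.random.default_rng()
    r1 = rng.random(x.shape)
    r2 = rng.random(x.shape)
    return c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
```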

3.1.4. FDR-PSO

In order to avoid premature convergence, the authors in [23] added the position of a neighbor particle to the velocity formula. The formula is given as follows:

$$v_{id}^{t+1} = w\, v_{id}^{t} + c_1 r_{1d}\left(pbest_{id} - x_{id}^{t}\right) + c_2 r_{2d}\left(gbest_{d} - x_{id}^{t}\right) + c_3 r_{3d}\left(nbest_{id} - x_{id}^{t}\right),$$

where $nbest_{id}$ is the dth dimension of the best experience of the neighbor of the ith particle that maximizes the fitness-distance ratio (FDR), and the rest of the parameters have the same meanings as those in (2). For a minimization problem, the fitness-distance ratio is given as follows:

$$FDR = \frac{f(x_i) - f(P_j)}{\left| P_{jd} - x_{id} \right|},$$

where $P_j$ denotes the best experience of any other particle in the population except the ith particle.
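The per-dimension selection of nbest can be sketched as follows; this is a hypothetical helper implementing the ratio above for minimization, and eps is our guard against division by zero.

```python
import numpy as np

def fdr_nbest(i, x, f_x, pbest, pbest_f, eps=1e-12):
    # For each dimension d, choose nbest_id as the d-th component of the
    # personal best P_j (j != i) that maximizes the fitness-distance ratio
    # (f(x_i) - f(P_j)) / |P_jd - x_id| for a minimization problem.
    dim = x.shape[1]
    nbest = np.empty(dim)
    for d in range(dim):
        ratio = (f_x[i] - pbest_f) / (np.abs(pbest[:, d] - x[i, d]) + eps)
        ratio[i] = -np.inf  # exclude the particle's own personal best
        nbest[d] = pbest[np.argmax(ratio), d]
    return nbest
```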

3.2. Local Search

A local search method inspired by the Nelder–Mead method is used in MSPSO to improve its exploitation ability. The Nelder–Mead method is a numerical algorithm that adapts to the local landscape [24]; it performs a downhill search using a simplex instead of derivatives. Introducing the Nelder–Mead method into MSPSO further improves its performance.

In our work, we modify the Nelder–Mead method to reduce its time consumption: the number of test points is set to 3 rather than n + 1 (where n is the dimension). Given 3 test points x1, x2, x3, the Nelder–Mead variant is given in Algorithm 1. In Line 1, the three individuals are sorted in ascending order of function value. In Line 2, xo is computed as the centre of the triangle △x1x2x3. Lines 3–22 implement the Nelder–Mead process; a Python transcription is given after the pseudocode.

Input: population P with three individuals x1, x2, x3.
(1) Sort the three points so that f(x1) ≤ f(x2) ≤ f(x3).
(2) Calculate xo as follows: xo = (x1 + x2 + x3)/3.
(3) [Reflection] Compute the reflected point xr = xo + α(xo − x3), where α is the reflection coefficient; its standard value is α = 1.
(4) if f(x1) ≤ f(xr) < f(x3) then
(5)  x3 = the reflected point xr
(6) else if f(xr) < f(x1) then
(7)  [Expansion] Compute the expanded point xe = xo + γ(xr − xo), where γ is the expansion coefficient; its standard value is γ = 2.
(8)  if f(xe) < f(xr) then
(9)   x3 = the expanded point xe
(10)  else
(11)   x3 = the reflected point xr
(12)  end if
(13) else
(14)  [Contraction] Compute the contracted point xc = xo + ρ(x3 − xo), where ρ is the contraction coefficient; its standard value is ρ = 0.5.
(15)  if f(xc) < f(x3) then
(16)   x3 = the contracted point xc
(17)  else
(18)   for i = 2, 3 do
(19)    [Shrink] xi = x1 + σ(xi − x1), where σ is the shrink coefficient; its standard value is σ = 0.5.
(20)   end for
(21)  end if
(22) end if
Output: population P = {x1, x2, x3}.
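The following is a direct Python transcription of Algorithm 1; it is a sketch whose function name and list-based calling convention are ours, with coefficients defaulting to the standard values stated in the pseudocode.

```python
import numpy as np

def nm_variant_step(pts, f, alpha=1.0, gamma=2.0, rho=0.5, sigma=0.5):
    # One step of the three-point Nelder-Mead variant (Algorithm 1).
    # pts: list of three numpy arrays; f: objective function (minimized).
    x1, x2, x3 = sorted(pts, key=f)          # f(x1) <= f(x2) <= f(x3)
    xo = (x1 + x2 + x3) / 3.0                # centre of triangle x1-x2-x3
    xr = xo + alpha * (xo - x3)              # reflection
    if f(x1) <= f(xr) < f(x3):
        x3 = xr
    elif f(xr) < f(x1):
        xe = xo + gamma * (xr - xo)          # expansion
        x3 = xe if f(xe) < f(xr) else xr
    else:
        xc = xo + rho * (x3 - xo)            # contraction
        if f(xc) < f(x3):
            x3 = xc
        else:                                # shrink towards the best point
            x2 = x1 + sigma * (x2 - x1)
            x3 = x1 + sigma * (x3 - x1)
    return [x1, x2, x3]
```

In MSPSO, such a step would be applied to the best three individuals of the swarm, with the returned triple replacing them.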
3.3. Improved Adaptive Probability Adjustment Method

In some ensemble evolutionary algorithms, the selection probability of each variant is adjusted based on the number of previous successes and failures of that variant over a fixed iteration interval. However, the number of successes and failures is not an ideal indicator of a good strategy because it cannot measure the degree of improvement of the successful offspring generated by each variant. Thus, we use the rate of change of the fitness value between the offspring generated by each variant and the personal best position to adjust the selection probabilities of the variants. The adaptive probability adjustment method proceeds as follows.

Step 1. Initialize the probability $p_k$ of the kth PSO variant to 1/K and the change rate $Cr_k$ of the kth variant to 0. In our work, K is equal to 5.

Step 2. Generate a random number $randp_k$ within [0, 1]. If $0 \le randp_k < p_1$, choose the first variant to generate an offspring. If $\sum_{j=1}^{k-1} p_j \le randp_k < \sum_{j=1}^{k} p_j$ for $1 < k < K$, choose the kth variant to generate an offspring. If $\sum_{j=1}^{K-1} p_j \le randp_k \le 1$, choose the Kth variant to generate an offspring.

Step 3. If the kth variant is selected to generate an offspring for particle i, the change rate is updated as follows:

$$Cr_k = Cr_k + \frac{f(pbest_i) - f(o_i)}{\left| f(pbest_i) \right|},$$

where $f(o_i)$ and $f(pbest_i)$ are the fitness of the offspring and the fitness of the personal best historical position of the particle, respectively. When the fitness of the offspring is larger than the fitness of pbest, i.e., the offspring is worse than the parent, the change rate decreases; otherwise, it increases.

Step 4. After lp generations, update the probability $p_k$ of each PSO variant and reset $Cr_k$ to 0. The probability $p_k$ is updated by the following steps: (1) sort the K strategies in descending order of $Cr_k$ to obtain a new sequence K′; (2) assign probabilities to the strategies according to their ranking in K′; in particular, the probability of the strategy with the largest $Cr_k$ is set to 0.4.

In this study, we use the change rate rather than the raw difference between the fitness of the offspring and the fitness of pbest to guide the probability adjustment, because the raw difference cannot reflect the relative merits of the strategies, especially when the fitness values involved for one strategy are large while those for another strategy are small; in that situation, the strategy with the smaller difference in fitness value may actually be the better one. Furthermore, we use a fixed probability distribution so that a good strategy receives a significantly larger selection probability and a bad strategy receives a significantly smaller one. A sketch of this selection mechanism is given below.
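The following sketch collects Steps 1–4 in one place. The rank-to-probability table is an assumption: the paper fixes 0.4 for the top-ranked variant, while the remaining values below are illustrative stand-ins summing to 0.6.

```python
import numpy as np

class StrategySelector:
    # Adaptive selection over K = 5 PSO variants (Section 3.3). The rank-to-
    # probability table is an assumption: 0.4 for the top rank is from the
    # paper; the remaining values are illustrative.
    RANK_PROBS = np.array([0.4, 0.25, 0.15, 0.1, 0.1])

    def __init__(self, K=5):
        self.p = np.full(K, 1.0 / K)   # selection probabilities (Step 1)
        self.cr = np.zeros(K)          # accumulated rate of fitness change

    def select(self, rng):
        # Roulette-wheel selection over cumulative probabilities (Step 2).
        idx = int(np.searchsorted(np.cumsum(self.p), rng.random()))
        return min(idx, len(self.p) - 1)

    def record(self, k, f_offspring, f_pbest, eps=1e-12):
        # Step 3: accumulate the relative fitness change (negative if worse).
        self.cr[k] += (f_pbest - f_offspring) / (abs(f_pbest) + eps)

    def update(self):
        # Step 4 (every lp generations): rank variants by cr, descending,
        # assign the fixed probabilities, and reset the change rates.
        order = np.argsort(-self.cr)
        self.p[order] = self.RANK_PROBS[: len(order)]
        self.cr[:] = 0.0
```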

3.4. Framework of MSPSO

MSPSO integrates PSO, MCLPSO, LIPS, HPSO-TVAC, and FDR-PSO. It adopts two subpopulations in the early stage of the search process and a single whole population in the later stage. In the early stage, MSPSO uses one subpopulation that implements MCLPSO and another that implements the ensemble PSO. In the later stage, the whole population implements the ensemble PSO, and the Nelder–Mead variant is additionally used to improve the exploitation ability of MSPSO. In this way, both the population diversity and the convergence ability of the algorithm are improved in the early stage, and the convergence ability is further improved in the later stage. The pseudocode of MSPSO is given in Algorithm 2, and a simplified Python skeleton follows it. In lines 1-2, the population and parameters are initialized. Lines 3–10 give the steps of the early stage of MSPSO. In this stage, the whole population is divided into two subpopulations, one composed of μ1 individuals and the other composed of the remaining μ − μ1 individuals. Lines 5–7 indicate that MCLPSO is executed in the first subpopulation, while lines 8–10 indicate that the ensemble PSO is executed in the second subpopulation. Lines 11–16 give the steps of the later stage. In this stage, the whole population executes the ensemble PSO, followed by the Nelder–Mead variant. In lines 17–22, the best fitness value of the current population is obtained, and the best fitness value found so far is updated if necessary. The overall framework is shown in Figure 1.

Input: fitness function f, dimension n, population size μ, subpopulation size μ1, maximum number of generations Max_Gen, MCLPSO limit values limitgp1 and limitgp2, MCLPSO adjustment coefficient α, iterative parameter β, interval iteration lp.
(1) Generate an initial population consisting of μ individuals at random.
(2) Set lp = 10, Crk = 0, pk = 1/5.
(3) for t = 1 to Max_Gen do
(4)  if t ≤ β · Max_Gen then
(5)   for each particle in the first subpopulation do
(6)    Perform MCLPSO to generate an offspring.
(7)   end for
(8)   for each particle in the second subpopulation do
(9)    Use the improved adaptive probability adjustment method to generate an offspring and update the selection probabilities.
(10)   end for
(11)  else
(12)   for each particle in the whole population do
(13)    Use the improved adaptive probability adjustment method to generate an offspring and update the selection probabilities.
(14)   end for
(15)   Update the best three individuals using Algorithm 1.
(16)  end if
(17)  Obtain the fitness value fbest of the best individual xbest in the current population.
(18)  if fmin > fbest then
(19)   fmin = fbest
(20)   xmin = xbest
(21)  end if
(22) end for
Output: the best fitness value fmin.
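The control flow of Algorithm 2 can be sketched as follows. This is a skeleton only: the three injected callables (mclpso_step, ensemble_step, local_search) are hypothetical placeholders for the variant updates of Sections 3.1–3.3 and Algorithm 1, and the stage-switch condition t ≤ β·Max_Gen is our reading of the pseudocode.

```python
import numpy as np

def mspso_loop(f, swarm, mclpso_step, ensemble_step, local_search,
               mu1, beta, max_gen):
    # Skeleton of Algorithm 2. The three callables are placeholders:
    # mclpso_step(sub, t, max_gen) and ensemble_step(sub) update a sub-swarm
    # in place; local_search(swarm) applies the Nelder-Mead variant
    # (Algorithm 1) to the best three individuals.
    f_min, x_min = np.inf, None
    for t in range(1, max_gen + 1):
        if t <= beta * max_gen:          # early stage: two sub-swarms
            mclpso_step(swarm[:mu1], t, max_gen)
            ensemble_step(swarm[mu1:])
        else:                            # later stage: whole swarm + NM step
            ensemble_step(swarm)
            local_search(swarm)
        fitness = np.array([f(ind) for ind in swarm])
        b = int(np.argmin(fitness))
        if fitness[b] < f_min:           # lines 17-21 of Algorithm 2
            f_min, x_min = float(fitness[b]), swarm[b].copy()
    return f_min, x_min
```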

4. Experiments and Results

4.1. Benchmark Functions and Comparative Algorithms

CEC 2014 [25] benchmark functions are used to evaluate the performance of MSPSO. The benchmark problems in CEC 2014 include several novel features, such as novel basic problems, test problems composed by extracting features dimension-wise from several problems, graded levels of linkages, and rotated trap problems. In the CEC 2014 benchmark suite, F1–F3 are unimodal functions, F4–F16 are simple multimodal functions, F17–F22 are hybrid functions, and F23–F30 are composition functions. To evaluate the mean and standard deviation of the solution errors, we perform thirty independent runs of each algorithm on each problem in 10 and 30 dimensions. For 10 dimensions, each run lasts up to 100,000 function evaluations (FES); for 30 dimensions, up to 300,000 FES.

The performance of MSPSO is compared with that of eight other PSO variants: PSO [10], CLPSO [26], LIPS [21], HPSO-TVAC [22], FDR-PSO [23], EPSO [27], OSC-PSO [28], and A-PSO [29]. All the selected peer algorithms were proposed in the last decade. EPSO is an ensemble PSO. OSC-PSO drives particles into oscillatory trajectories. A-PSO introduces nonlinear dynamic acceleration coefficients, a logistic map, and a modified particle position update approach into PSO. To verify the effectiveness of all the improved strategies we propose, we also compare MSPSO with the original CLPSO algorithm. The parameter settings of these algorithms are listed in Table 1, and the parameter settings of MSPSO in different dimensions are given in Table 2. Higher-dimensional functions are more complex than lower-dimensional ones, so we use a larger population size in high dimensions to maintain population diversity. Parameters limitgp, α, and β affect the population diversity and convergence of MSPSO. The larger limitgp is, the greater the probability of updating particles in MCLPSO based on the global best position; likewise, the larger α is, the greater the probability of updating particle velocities in MCLPSO based on the mean velocity of the population. The smaller β is, the more often the ensemble PSO is applied to the whole population. The benchmark problems with 30 variables in CEC 2014 are more complex than those with 10 variables, so we use different parameter settings of MSPSO for the different dimensions.

Experimental results on the CEC 2014 suite with 10 and 30 dimensions are reported in Tables 3 and 4, respectively. The error is the absolute value of the difference between the best value found in a run (over the 30 runs) and the actual optimal value of the specific objective function.

The nonparametric statistical test has recently become an important method for comparing groups of evolutionary algorithms [30]. In this study, the Wilcoxon signed-rank test is employed to compare MSPSO with the other PSO variants at a significance level of 5%. For each algorithm, the last three rows of Tables 3 and 4 show the number of best/2nd best/worst rankings, the average ranking, and the counts of +/=/−, respectively. The algorithms are ranked according to their mean errors. The symbols "+," "=," and "−" indicate that MSPSO is significantly better than, similar to, and worse than the compared PSO variant, respectively. A sketch of this test procedure is given below.
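For reproducibility, the test can be run with SciPy as follows; the error arrays below are synthetic stand-ins for the 30 per-run errors of MSPSO and one peer algorithm on a single function, not the paper's data.

```python
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(0)
# Synthetic stand-ins for the 30 per-run solution errors on one function.
err_mspso = rng.lognormal(mean=0.0, sigma=0.5, size=30)
err_peer = rng.lognormal(mean=0.4, sigma=0.5, size=30)

stat, p = wilcoxon(err_mspso, err_peer)   # paired, two-sided by default
if p < 0.05:
    mark = "+" if err_mspso.mean() < err_peer.mean() else "-"
else:
    mark = "="                            # no significant difference
print(f"p = {p:.4f}, mark = {mark}")
```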

The simulation results on the 30 functions with 10 variables in CEC 2014 are shown in Table 3. The results show that MSPSO outperformed the other eight algorithms on functions F2, F3, F5, F6, F13, F17, F21, F24, F25, F27, and F30. On functions F8 and F26, the mean error of MSPSO was equal to that of the best competing algorithms. Specifically, on the unimodal functions, MSPSO generally outperformed the other algorithms: it is superior or equal to the other eight algorithms on all functions except F1, on which it ranked third. On the simple multimodal functions, MSPSO shows the best performance on four functions and a moderate performance relative to the comparative algorithms on the others. With regard to the hybrid functions, MSPSO ranked within the top 4 on all functions. As for the composition functions, MSPSO shows the best performance on five functions and ranked within the top 2 on all functions except F28. Among the other algorithms, EPSO and CLPSO perform well, ranking second and third, respectively. PSO, FDR-PSO, HPSO-TVAC, and OSC-PSO perform moderately; each wins on no more than 3 functions. LIPS and A-PSO also win on a few functions, but their average rankings are the lowest. To sum up, compared with the other eight competitors, MSPSO shows the best overall performance on all 30 functions in CEC 2014 with 10 variables in terms of both the number of best results and the average ranking, and it is significantly different from the other algorithms on most of the functions.

The simulation results on the 30 functions with 30 variables in CEC 2014 are shown in Table 4. The results show that MSPSO outperformed the other eight algorithms on functions F2, F3, F4, F5, F6, F7, F11, F15, F17, F20, and F27. On function F26, the mean error of MSPSO was equal to that of the best competing algorithms. Specifically, on the unimodal functions, MSPSO generally performs better than most of the other algorithms and is superior or equal to the other eight algorithms on all functions except F1. On the simple multimodal functions, MSPSO exhibits a better performance and is superior to EPSO on eight functions. Regarding the hybrid functions, MSPSO performs better than the other comparative PSO variants and ranked within the top 3 on all functions. As for the composition functions, MSPSO shows the best performance on two functions and a moderate performance relative to the comparative algorithms on the others. Among the other algorithms, EPSO and CLPSO also perform well, ranking second and third, respectively; the remaining algorithms win on no more than 3 functions, and the average ranking of A-PSO is the lowest. In summary, MSPSO again has the best overall performance among the nine algorithms on all functions in CEC 2014 with 30 variables, and it is significantly different from the other algorithms on most of the functions.

From Tables 3 and 4, we see that the two ensemble PSO algorithms (EPSO and MSPSO) perform better than the other types of PSO algorithms. This result can be explained by the theory of easy and hard fitness functions [17]. According to that theory, the hardest problem for one evolutionary algorithm could be the easiest for another. Thus, given an ensemble of different PSO algorithms, a hard problem might be solved easily by one of them. Of course, if a problem is hard (or easy) for all of them, using an ensemble does not bring much improvement.

The convergence speed is evaluated in Figures 2–8. From Figure 2, we can see that MSPSO achieves better performance than the other PSO variants. The convergence speed of MSPSO is not the fastest, but its optimization accuracy is higher than that of the other competitors on many functions. This is because, in the early stage of the search, the different particle generation strategies may interfere with the particles' ability to quickly find good positions, whereas in the later stage, the different strategies increase the chances of the particles finding good positions.

4.2. Application to the Hydrothermal Scheduling Problem

The hydrothermal scheduling problem [31] is a complex real-world optimization problem. Its main objective is to schedule the power generation of the thermal and hydro units in the system so as to meet the load demands while satisfying the constraints of the hydraulic system and the power network. To evaluate the performance of MSPSO in dealing with real-world problems, we apply it to this problem. In the hydrothermal scheduling problem, the decision variables are nonlinearly related, which makes it a major operational problem for hydrothermal systems. The objective is to minimize the fuel cost of the thermal units over 24 hours with four hydro units in the system, and the dimension of the problem is 96.

In order to meet the load requirements during the scheduling period, the total fuel cost of the thermal system operation is expressed by F. The objective function is given as follows:

$$F = \sum_{i=1}^{M} f_i\left(P_{Ti}\right).$$

In the previous formula, $P_{Ti}$ is the power generation of an equivalent thermal unit in the ith interval, and $f_i$ represents the cost function corresponding to $P_{Ti}$. M is the total number of intervals considered for the short-term planning. The cost function $f_i$ takes the usual quadratic form

$$f_i\left(P_{Ti}\right) = a_i P_{Ti}^2 + b_i P_{Ti} + c_i,$$

where $a_i$, $b_i$, and $c_i$ are the fuel cost coefficients of the thermal unit.
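As a small numeric sketch of this objective; the quadratic coefficients below are illustrative stand-ins, not the instance data used in the paper.

```python
import numpy as np

def thermal_fuel_cost(P_T, a, b, c):
    # F = sum_i f_i(P_Ti), with quadratic cost f_i(P) = a*P^2 + b*P + c.
    return float(np.sum(a * P_T**2 + b * P_T + c))

M = 24                              # hourly intervals in the scheduling horizon
P_T = np.full(M, 500.0)             # equivalent thermal generation per interval (MW)
print(thermal_fuel_cost(P_T, a=0.002, b=19.2, c=5000.0))
```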

MSPSO is compared with five other algorithms on three hydrothermal scheduling instances: CoBiDE [32], TLBO [33], ALC-PSO [34], DNS-PSO [35], and EPSO [27]. CoBiDE incorporates covariance matrix learning and a bimodal distribution parameter setting into DE. TLBO designs an optimization mechanism inspired by the influence of a teacher on learners, dividing the optimization process into a "Teacher Phase" and a "Learner Phase." ALC-PSO transplants an aging mechanism into PSO to overcome premature convergence. DNS-PSO employs a diversity-enhancing mechanism and neighborhood search strategies in PSO to achieve a trade-off between exploration and exploitation abilities.

The computational results on the hydrothermal scheduling instances are shown in Table 5. MSPSO outperformed the other five comparative algorithms on two of the three instances, which indicates that MSPSO is a good alternative for solving the hydrothermal scheduling problem.

4.3. The IEEE CEC 2010 Standard Test Functions Set

To further analyze the performance of MSPSO on large-scale global optimization problems, the CEC 2010 suite [36] is employed in the experiments. The performance of MSPSO is compared with PSO [2], the grey wolf optimizer (GWO) [37], the standard sine cosine algorithm (SCA) [38], and the salp swarm algorithm (SSA) [39]. All experiments are run 30 times in 1000 dimensions. The means and standard deviations of all algorithms are shown in Table 6, and the average rank and overall rank are recorded in its last two rows. Table 6 shows that MSPSO outperforms the other comparative algorithms on large-scale global optimization problems; for most CEC 2010 functions, MSPSO improves the accuracy by several orders of magnitude. Therefore, the experimental results demonstrate that MSPSO performs well on large-scale optimization problems.

5. Conclusions

This paper proposes a mixed strategy PSO algorithm called MSPSO. MSPSO uses the rate of fitness change, which measures the degree of improvement of the successful offspring generated by each variant, to guide the selection probabilities of the variants. Compared with previous PSO algorithms that use the number of previous successes and failures of each variant to adjust the selection probabilities, MSPSO increases the probability of selecting excellent variants. Furthermore, the proposed Nelder–Mead variant is introduced into MSPSO to improve its exploitation ability. The proposed algorithm is tested on the CEC 2014 benchmark suite with 10 and 30 variables, and the experimental results demonstrate that MSPSO achieves better overall solution accuracy than the other eight PSO algorithms. MSPSO is also applied to three instances of the hydrothermal scheduling problem, and the computational results show that it performs well on this real-world optimization problem. MSPSO is further tested on CEC 2010 with 1000 variables, and the results show that it performs well on large-scale optimization problems.

Our work points to a promising direction for designing efficient mixed strategy PSO algorithms: using the rate of fitness change to guide the selection probabilities of the variants. Applying the rate of fitness change to the design of other mixed strategy evolutionary algorithms is left as future work.

Data Availability

The data used to support the findings of this study are included within the article.

Disclosure

A short version of this study was accepted by the 2021 IEEE International Conference on Space-Air-Ground Computing (SAGC 2021); this study is an extended version of that conference paper.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this study.

Acknowledgments

This work was supported by the National Key R&D Program of China under Grant No. 2020YFB1710200, the National Natural Science Foundation of China under Grant Nos. 61872105 and 62072136, and the Educational Science Planning of Heilongjiang Province under Grant Nos. ZJB1421113 and GJB1421251.