Abstract

As a novel evolutionary optimization method, extremal optimization (EO) has been successfully applied to a variety of combinatorial optimization problems. However, the applications of EO to continuous optimization problems are relatively rare. This paper proposes an improved real-coded population-based EO method (IRPEO) for continuous unconstrained optimization problems. The key operations of IRPEO include generation of a real-coded random initial population, evaluation of individual and population fitness, selection of bad elements according to a power-law probability distribution, generation of a new population based on uniform random mutation, and updating of the population by accepting the new population unconditionally. The experimental results on 10 benchmark test functions show that IRPEO is competitive with or even better than various recently reported genetic algorithm (GA) versions with different mutation operations in terms of simplicity, effectiveness, and efficiency. Furthermore, the superiority of IRPEO over other evolutionary algorithms such as the original population-based EO, particle swarm optimization (PSO), and the hybrid PSO-EO is also demonstrated by the experimental results on some benchmark functions.

1. Introduction

It has been widely recognized that a variety of real-world complex engineering optimization problems can be formulated as continuous unconstrained optimization problems [1]. Moreover, benchmark unconstrained optimization functions are often used to evaluate the performance of various evolutionary optimization algorithms [2], for example, the genetic algorithm (GA) and its modified versions. This paper focuses on another novel evolutionary algorithm, called extremal optimization (EO), for continuous unconstrained optimization problems.

Originally inspired by the far-from-equilibrium dynamics of self-organized criticality (SOC) [3, 4], EO provides a novel insight into the optimization domain because it merely selects against bad elements, randomly or according to a power-law distribution, instead of favoring good ones [5, 6]. From the perspective of evolutionary computation, EO is much simpler than other popular evolutionary algorithms, such as the GA, because it has only selection and mutation operations and fewer adjustable parameters [7, 8]. As a consequence, the basic EO algorithm and its modified versions have been successfully applied to a variety of benchmark and real-world engineering optimization problems, such as graph partitioning [9], graph coloring [10], the travelling salesman problem [11, 12], the maximum satisfiability (MAX-SAT) problem [13, 14], and steel production scheduling [15]. A more comprehensive introduction to EO can be found in the surveys [16, 17].

However, the applications of EO to continuous optimization problems are relatively rare [18–23]. Sousa and Ramos [19] presented generalized EO (GEO) for continuous optimization problems, where each variable is encoded as a string of binary bits. Furthermore, GEO has been successfully applied to complex heat pipe design [20]. However, constraints were incorporated into the GEO algorithm simply by assigning a high fitness value (for a minimization problem) whenever the solution is infeasible. In [21], a modified algorithm called population-based EO (PEO) has been proposed for solving constrained optimization problems. The main advantage of PEO is its iterated optimization, based on the basic EO algorithm, starting from an initial population that consists of a set of individuals. Additionally, Chen et al. [22] have proposed a hybrid algorithm called PSO-EO by combining the exploration ability of particle swarm optimization (PSO) with the exploitation ability of EO. The effectiveness of PSO-EO has been demonstrated by the experimental results on 6 benchmark test functions. Another similar hybrid algorithm for continuous problems is based on an improved shuffled frog-leaping algorithm and EO [23]. Following this line of PEO, this paper extends the basic idea of PEO to continuous unconstrained optimization problems and presents an improved real-coded population-based EO (IRPEO) algorithm. The key operations of IRPEO include generation of a real-coded random initial population, evaluation of individual and population fitness, selection of bad elements according to a power-law probability distribution, generation of a new population based on uniform random mutation, and updating of the population by accepting the new population unconditionally. Compared with GEO, the proposed algorithm adopts a real-coded representation of the solutions and population-based iterated optimization over a set of solutions. The superiority of IRPEO over various GA versions [24] with different mutation operations is demonstrated by the experimental results on 10 continuous unconstrained optimization benchmark test functions. Furthermore, the experimental results on these benchmark functions have also shown that the proposed IRPEO provides better performance than other evolutionary algorithms such as PSO, the original population-based EO, and the hybrid PSO-EO algorithm [22].

The rest of this paper is organized as follows. The preliminaries on continuous optimization problems and EO are introduced in Section 2. Then, Section 3 presents the proposed IRPEO algorithm. Furthermore, the experimental results on the benchmark test functions are given to demonstrate the effectiveness of IRPEO in Section 4. Finally, the conclusion and the open issues of this paper are given in Section 5.

2. Preliminaries

2.1. Continuous Optimization Problems

An unconstrained continuous optimization problem [1] is generally defined in the following form:
\[
\min_{\mathbf{x}} f(\mathbf{x}), \quad \mathbf{x} = (x_1, x_2, \dots, x_n), \quad \mathbf{x}_{\min} \le \mathbf{x} \le \mathbf{x}_{\max}, \tag{1}
\]
where $f(\mathbf{x})$ is the objective function of the decision variables $\mathbf{x}$, and $\mathbf{x}_{\min}$ and $\mathbf{x}_{\max}$ are the vectors of minimum and maximum values of the decision variables, respectively. In other words, $x_j \in [x_{j\min}, x_{j\max}]$, $j = 1, 2, \dots, n$, where $x_{j\min}$ and $x_{j\max}$ are the minimum and maximum of the decision variable $x_j$, respectively.
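For concreteness, a problem of this form can be written down directly in code. The following minimal Python sketch uses the sphere function and arbitrary box bounds purely as illustrative assumptions; neither is taken from the benchmark set in Table 1.

```python
import numpy as np

# Illustrative instance of the problem form (1): minimize f(x) = sum_j x_j^2
# subject to x_min <= x <= x_max (the bounds here are arbitrary examples).
def sphere(x):
    return float(np.sum(x ** 2))

n = 10                              # number of decision variables
x_min = -100.0 * np.ones(n)         # vector of lower bounds
x_max = 100.0 * np.ones(n)          # vector of upper bounds
```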

2.2. EO

In general, the τ-EO [5, 6] algorithm and its modified versions consist of the following basic operations: initialization of a random solution, evaluation of global fitness and local fitness, selection of some bad local variables based on a power-law probability distribution, mutation of the selected variables and generation of a new solution, and updating of the solution by accepting the new solution unconditionally [7]. The flowchart of τ-EO for a minimization problem is presented in Figure 1.
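To make the flow in Figure 1 concrete, the sketch below implements a basic τ-EO loop for a single solution. It assumes the sphere function as the objective so that the local fitness of each variable can be taken as its squared value; this decomposition and all parameter values are illustrative assumptions rather than the paper's settings.

```python
import numpy as np

def tau_eo_sphere(n=10, tau=1.2, max_iter=5000, seed=0):
    """Minimal tau-EO sketch: minimize sum(x_j^2) with x_j in [-100, 100].

    The local fitness of variable j is taken as x_j**2, i.e., its contribution
    to the objective (an illustrative choice for this example only).
    """
    rng = np.random.default_rng(seed)
    lo, hi = -100.0, 100.0
    x = rng.uniform(lo, hi, n)                  # random initial solution
    best_x, best_f = x.copy(), float(np.sum(x ** 2))

    ranks = np.arange(1, n + 1)                 # rank 1 = worst variable
    p = ranks.astype(float) ** (-tau)
    p /= p.sum()                                # power-law selection probabilities

    for _ in range(max_iter):
        local = x ** 2                          # local fitness of each variable
        worst_first = np.argsort(-local)        # variable indices from worst to best
        k = worst_first[rng.choice(n, p=p)]     # pick a bad variable by the power law
        x[k] = rng.uniform(lo, hi)              # mutate only the selected variable
        f = float(np.sum(x ** 2))               # the new solution is accepted unconditionally
        if f < best_f:                          # keep track of the best-so-far
            best_x, best_f = x.copy(), f
    return best_x, best_f
```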

Remark 1. According to the seminal work [5, 6], the global fitness $C(S)$ of a solution $S$ for an optimization problem with $n$ optimized variables should be decomposed into $n$ equivalent degrees of freedom, that is, the local fitnesses $\lambda_i$, $i = 1, 2, \dots, n$. Furthermore, Liu et al. [12] give consistency and equivalence conditions between the global fitness and the local fitness.

Remark 2. The power-law-based probability selection [25] is described as follows:
\[
P_k \propto k^{-\tau}, \quad 1 \le k \le n, \tag{2}
\]
where $P_k$ is the probability that the $k$th-ranked variable (or element) is selected from the $n$ variables (or elements) for mutation and $\tau$ is a positive parameter controlling the power-law probability distribution.
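A direct way to implement (2) is to normalize the power-law weights over the ranks and sample one rank; the sketch below is a minimal illustration with an arbitrary value of τ.

```python
import numpy as np

def powerlaw_rank(n, tau, rng):
    """Sample a rank k in {1, ..., n} with probability P_k proportional to k**(-tau).

    Rank 1 denotes the worst element, so small ranks (bad elements) are the most
    likely to be selected for mutation."""
    k = np.arange(1, n + 1)
    p = k.astype(float) ** (-tau)
    p /= p.sum()                                 # normalize so the probabilities sum to 1
    return int(rng.choice(k, p=p))

rng = np.random.default_rng(42)
print(powerlaw_rank(n=20, tau=1.4, rng=rng))     # small ranks are returned most often
```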

3. The Proposed Algorithm for Continuous Unconstrained Optimization Problems

In this section, we propose an improved real-coded population-based EO (IRPEO) algorithm for continuous unconstrained optimization problems. The basic idea behind IRPEO is population-based iterated optimization consisting of the following operations: generation of a real-coded random initial population, evaluation of individual and population fitness, selection of bad elements according to a power-law probability distribution, generation of a new population based on uniform random mutation, and updating of the population by accepting the new population unconditionally. The proposed algorithm is described in the following steps.

Input. A continuous unconstrained optimization problem and the control parameters of IRPEO, including the power-law parameter $\tau$, the population size SP, and the maximum number of iterations $I_{\max}$.

Output. The best solution $S_{\text{best}}$ and the corresponding fitness $C_{\text{best}}$.

Step 1. Generate an initial population $P = \{S_1, S_2, \dots, S_{SP}\}$ that consists of SP solutions, where each solution is generated randomly as a set of real-coded values subject to the given domains, and set the iteration counter $t = 1$:
\[
S_i = (x_{i1}, x_{i2}, \dots, x_{in}), \quad i = 1, 2, \dots, SP. \tag{3}
\]
More specifically, each variable $x_{ij}$ in $S_i$ is written as
\[
x_{ij} = x_{j\min} + r\,(x_{j\max} - x_{j\min}), \quad j = 1, 2, \dots, n, \tag{4}
\]
where $r$ is a uniform random number between 0 and 1.
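A minimal NumPy sketch of this initialization step, assuming the bounds are supplied as vectors (the function and variable names are illustrative):

```python
import numpy as np

def init_population(sp, x_min, x_max, rng):
    """Generate SP real-coded solutions uniformly at random inside the box bounds,
    i.e., x_ij = x_jmin + r * (x_jmax - x_jmin) with r uniform in [0, 1)."""
    n = len(x_min)
    r = rng.random((sp, n))               # one random number per element
    return x_min + r * (x_max - x_min)    # shape (SP, n): one row per solution

rng = np.random.default_rng(1)
P = init_population(sp=30, x_min=-5.12 * np.ones(10), x_max=5.12 * np.ones(10), rng=rng)
```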

Step 2. Evaluate the fitness $C(S_i)$ of each solution $S_i$ according to the objective function of the continuous problem to be optimized, and the fitness of the population $C(P)$ according to the following equation:
\[
C(P) = \min_{1 \le i \le SP} C(S_i) \tag{5}
\]
(for a maximization problem, the minimum is replaced by the maximum).

Step 3. Rank the values of $C(S_i)$; that is, find a permutation $\pi$ of the labels $\{1, 2, \dots, SP\}$ such that $C(S_{\pi(1)}) \le C(S_{\pi(2)}) \le \dots \le C(S_{\pi(SP)})$ for minimization problems ($C(S_{\pi(1)}) \ge C(S_{\pi(2)}) \ge \dots \ge C(S_{\pi(SP)})$ for maximization problems) and obtain the best solution $S_{\text{best}} = S_{\pi(1)}$ and its fitness $C_{\text{best}} = C(S_{\pi(1)})$.
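Steps 2 and 3 amount to evaluating the objective row by row and sorting the fitness values. A minimal sketch for a minimization problem follows, with the sphere function as a stand-in objective (an illustrative assumption):

```python
import numpy as np

def evaluate_and_rank(P, objective):
    """Evaluate each solution in the population P (one row per solution) and
    rank them for a minimization problem (best solution first)."""
    C = np.array([objective(x) for x in P])   # individual fitness C(S_i)
    order = np.argsort(C)                     # permutation: order[0] is the best label
    best_idx = order[0]
    return C, order, P[best_idx], C[best_idx]

# Usage with an illustrative objective:
rng = np.random.default_rng(2)
P = rng.uniform(-100.0, 100.0, size=(30, 10))
C, order, S_best, C_best = evaluate_and_rank(P, lambda x: float(np.sum(x ** 2)))
```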

Step 4. Select a bad solution $S_b$ based on the power-law probability distribution defined in (2), where the rank $k$ in (2) is counted from the worst solution so that worse solutions are more likely to be chosen, and generate a new solution $S_b^{\text{new}}$ by adopting uniform random mutation. To be more precise, first generate a random number $r$ between 0 and 1 and then obtain the $j$th element of the new solution, that is, $x_{bj}^{\text{new}}$ ($j = 1, 2, \dots, n$), by
\[
x_{bj}^{\text{new}} = x_{j\min} + r\,(x_{j\max} - x_{j\min}). \tag{6}
\]
Denote the new population as $P^{\text{new}}$, in which $S_b$ is replaced by the new solution $S_b^{\text{new}}$ and the other solutions are kept unchanged.
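The selection-and-mutation step can be sketched as follows: a bad solution is drawn by the power-law rule of (2) over the ranking from Step 3 (rank 1 = worst), and the selected solution is replaced by the uniform random mutation of (6). This is a minimal illustration under stated assumptions, not a verified reimplementation of the authors' code; in particular, it re-draws a separate random number for every element of the selected solution.

```python
import numpy as np

def mutate_bad_solution(P, order, x_min, x_max, tau, rng):
    """Pick a bad solution by the power-law rule (rank 1 = worst) and replace it
    with a uniformly re-sampled solution; returns the new population.

    Assumption: every element of the selected solution is re-drawn with its own
    random number r in [0, 1)."""
    sp = P.shape[0]
    ranks = np.arange(1, sp + 1)
    prob = ranks.astype(float) ** (-tau)
    prob /= prob.sum()
    worst_first = order[::-1]                       # order[] is best-first, so reverse it
    bad = worst_first[rng.choice(sp, p=prob)]       # label of the selected bad solution
    P_new = P.copy()
    r = rng.random(P.shape[1])                      # one random number per element
    P_new[bad] = x_min + r * (x_max - x_min)        # uniform random mutation, cf. (6)
    return P_new

# Usage with an illustrative best-first ranking from Step 3:
rng = np.random.default_rng(3)
P = rng.uniform(-100.0, 100.0, size=(30, 10))
order = np.argsort([float(np.sum(x ** 2)) for x in P])
P_new = mutate_bad_solution(P, order, -100.0 * np.ones(10), 100.0 * np.ones(10), tau=1.05, rng=rng)
```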

Step 5. Evaluate the fitness $C(S_i^{\text{new}})$ of each solution in $P^{\text{new}}$ according to the objective function of the continuous problem to be optimized and rank the values of $C(S_i^{\text{new}})$; that is, find a permutation $\mu$ of the labels such that $C(S_{\mu(1)}^{\text{new}}) \le C(S_{\mu(2)}^{\text{new}}) \le \dots \le C(S_{\mu(SP)}^{\text{new}})$ for minimization problems (or $C(S_{\mu(1)}^{\text{new}}) \ge C(S_{\mu(2)}^{\text{new}}) \ge \dots \ge C(S_{\mu(SP)}^{\text{new}})$ for maximization problems).

Step 6. If $C(S_{\mu(1)}^{\text{new}}) < C_{\text{best}}$ for minimization problems (if $C(S_{\mu(1)}^{\text{new}}) > C_{\text{best}}$ for maximization problems), then set $S_{\text{best}} = S_{\mu(1)}^{\text{new}}$ and $C_{\text{best}} = C(S_{\mu(1)}^{\text{new}})$.

Step 7. Accept the new population $P = P^{\text{new}}$ unconditionally.

Step 8. Repeat Step 2 to Step 7 until the stopping criterion, for example, the maximum number of iterations $I_{\max}$, is satisfied.

Step 9. Output the best solution $S_{\text{best}}$ and the corresponding fitness $C_{\text{best}}$.
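Putting Steps 1–9 together, the following self-contained sketch runs the whole IRPEO loop on an illustrative minimization problem. The parameter values (SP, $I_{\max}$, τ), the sphere objective, and the choice to re-sample every element of the selected bad solution are assumptions made for this example, not the settings used in Section 4.

```python
import numpy as np

def irpeo(objective, x_min, x_max, sp=30, max_iter=2000, tau=1.05, seed=0):
    """Minimal IRPEO sketch for a minimization problem (Steps 1-9)."""
    rng = np.random.default_rng(seed)
    n = len(x_min)

    # Step 1: real-coded random initial population, one row per solution.
    P = x_min + rng.random((sp, n)) * (x_max - x_min)

    # Power-law selection probabilities over ranks (rank 1 = worst solution).
    prob = np.arange(1, sp + 1, dtype=float) ** (-tau)
    prob /= prob.sum()

    S_best, C_best = None, np.inf
    for _ in range(max_iter):
        # Steps 2-3 and 6 combined: evaluate, rank (best first), update best-so-far.
        C = np.array([objective(x) for x in P])
        order = np.argsort(C)
        if C[order[0]] < C_best:
            S_best, C_best = P[order[0]].copy(), float(C[order[0]])

        # Step 4: select a bad solution by the power law and mutate it uniformly.
        bad = order[::-1][rng.choice(sp, p=prob)]
        P_new = P.copy()
        P_new[bad] = x_min + rng.random(n) * (x_max - x_min)

        # Steps 5-7: the new population is accepted unconditionally.
        P = P_new

    # Final evaluation so the last accepted population is also taken into account.
    C = np.array([objective(x) for x in P])
    i = int(np.argmin(C))
    if C[i] < C_best:
        S_best, C_best = P[i].copy(), float(C[i])

    # Step 9: return the best solution found and its fitness.
    return S_best, C_best

# Usage on an illustrative objective (sphere function):
x_min, x_max = -100.0 * np.ones(10), 100.0 * np.ones(10)
S_best, C_best = irpeo(lambda x: float(np.sum(x ** 2)), x_min, x_max)
print(C_best)
```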

From the above description of the proposed algorithm, it is obvious that the parameters used in the IRPEO algorithm, including the population size SP, the maximum number of iterations $I_{\max}$, and the power-law coefficient $\tau$, play critical roles in controlling the performance of IRPEO. From the perspective of algorithm design, IRPEO is simpler than other reported popular algorithms, for example, GA [24], PSO [22], PEO [22], and PSO-EO [22], because IRPEO has fewer parameters to be tuned in practical experiments. More details concerning the parameters used in different evolutionary algorithms and the effects of SP, $I_{\max}$, and $\tau$ on the performance of IRPEO will be discussed in the next section.

The optimization dynamics of the proposed algorithm for two of the benchmark test functions [24] are illustrated in Figure 2. One test function is a maximization problem with the global optimum 1.000, while the other is a minimization problem with the global optimum 0.000. Obviously, IRPEO quickly finds a promising region and converges rapidly to the optimum (red dashed line) for both the minimization and maximization problems.

4. Experimental Results

4.1. Experimental Results

To demonstrate the superiority of the proposed IRPEO algorithm, 10 benchmark functions [24] and 2 additional benchmark functions [22], shown in Table 1, are chosen as test functions. These test functions include unimodal and multimodal functions. The performance of each algorithm on each benchmark test function is measured by statistical results including the best fitness, the average fitness, the worst fitness, and the standard deviation (SD) over 30 independent runs. It should be noted that all the experiments have been implemented in MATLAB on a 2.50 GHz PC with an i5-3210M processor and 2 GB RAM.

The control parameters of IRPEO, namely, SP, $I_{\max}$, and $\tau$, are fixed for the following experiments. The comparative performance of IRPEO against the recently reported GA variants with different mutation operations, including GA-ADM, GA-RM, GA-PLM, GA-NUM, GA-MNUM, and GA-PM [24], on the above test functions is shown in Table 2. Clearly, the proposed IRPEO outperforms these modified GA versions [24] and PEO [22] on most test functions in terms of average fitness and SD, so the comprehensive performance of IRPEO ranks first among these evolutionary algorithms. On the remaining test functions, IRPEO is worse only than GA-ADM and GA-MNUM while being better than the other four GA versions. For the same random mutation (RM) operation, the proposed IRPEO provides better performance than GA-RM [24] on all of these test functions. Furthermore, IRPEO is much simpler than these GA versions because IRPEO has only selection and mutation operations with fewer adjustable parameters to tune. In this sense, the proposed IRPEO is competitive with or even better than the recently reported GA versions with different mutation operations.

Table 3 gives the comparative performance of IRPEO against other reported evolutionary algorithms [22], including PSO-EO, PSO, PEO, and GA, on the test functions from [22]. It is evident that the proposed IRPEO is also superior to these algorithms in terms of the best, average, and worst fitness and SD.

4.2. Parameters versus Performance

The parameters used in these tested evolutionary algorithms in the last subsection are shown in Table 4. It is clear that the proposed IRPEO is simpler than other reported popular algorithms, for example, GA [24], PSO [22], PEO [22], and PSO-EO [22], because IRPEO has fewer parameters to be tuned in the practical experiments.

The effects of the parameters SP, $I_{\max}$, and $\tau$ on the performance of IRPEO for one representative test function are illustrated in Figures 3, 4, and 5, respectively. It should be noted that the performance of IRPEO is measured by the error between the statistical results (best, average, and worst fitness) under 10 independent runs and the optimum of the test function. Generally, the performance improves as the values of SP and $I_{\max}$ increase, but the improvement becomes indistinct once these values reach certain levels. Additionally, satisfactory performance is obtained when $\tau$ ranges from 1.02 to 1.06. The effects of these parameters on the performance of IRPEO for the other test functions can be analyzed in a similar way.

Remark 3. On the basis of the above analysis, the performance of IRPEO shown in Tables 2 and 3 can be further improved by tuning the parameters carefully, for example, by increasing the values of SP and $I_{\max}$ appropriately and choosing the optimal value of $\tau$ by trial and error.

5. Conclusion

In this paper, an improved real-coded population-based EO method (IRPEO) has been proposed to solve continuous unconstrained optimization problems. The proposed IRPEO is a population-based iterated optimization method consisting of the following operations: generation of a real-coded random initial population, evaluation of individual and population fitness, selection of bad elements according to a power-law probability distribution, generation of a new population based on uniform random mutation, and updating of the population by accepting the new population unconditionally. The experimental results on 10 continuous unconstrained optimization benchmark test functions have shown that the average performance of IRPEO is competitive with or even better than that of various GAs [24] with different mutation operations and the original PEO algorithm [22]. In addition, the superiority of IRPEO over other evolutionary algorithms such as PSO, the original PEO, and the hybrid PSO-EO algorithm [22] is also demonstrated by the experimental results on these benchmark functions. Furthermore, the effect of the adjustable parameters of IRPEO on its performance has been discussed; in fact, the IRPEO algorithm can be further enhanced by tuning these parameters carefully. In future work, more experiments on benchmark test functions and real-world engineering optimization problems will be carried out to further validate the superiority of IRPEO over other optimization algorithms. Moreover, the extension of the IRPEO algorithm to constrained optimization problems is another direction for future research.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

The authors gratefully acknowledge the helpful comments and suggestions of the reviewers and editor, which have improved the presentation. This work is partially supported by the National Natural Science Foundation of China (no. 51207112), Program of “Xinmiao” (Potential) Talents in Zhejiang Province (no. 2012R424044), and Training Programs of Innovation and Entrepreneurship for Undergraduates in Wenzhou University (no. DC2012060).