Abstract

This paper proposes an interactive two-archive method for solving many-objective optimization problems. Two updating strategies based on an aggregation-based framework are presented and incorporated into a two-archive framework. In addition, we extend this method by introducing an interactive mechanism in which evolutionary information is passed from the diversity archive to the convergence archive. Given the requirement to balance convergence and diversity, a mating selection method is proposed to regulate the evolutionary speed of the two archives collaboratively. The proposed algorithm is tested extensively on several problems against peer algorithms to validate its effectiveness. The results show that the proposed method can outperform several state-of-the-art evolutionary algorithms in handling many-objective optimization.

1. Introduction

Multi-objective optimization problems (MOPs), which refer to the task of optimizing more than one objective simultaneously, commonly exist in real-world applications, e.g., engineering design problems [1], software engineering problems [2], and water management [3]. Generally, a continuous MOP can be defined as follows:

minimize F(x) = (f1(x), f2(x), ..., fm(x)), subject to x ∈ Ω,

where x = (x1, ..., xn) is the decision vector, Ω ⊆ R^n is the decision space, and F: Ω → R^m is the objective function vector, mapping from the n-dimensional decision space to the m-dimensional objective space. As a specific category of MOPs, many-objective optimization problems (MaOPs) refer to MOPs with more than three objectives (m > 3). The reason for separating this category is that a large number of multi-objective evolutionary algorithms (MOEAs) encounter substantial difficulties when solving MaOPs. For example, two representative Pareto-based methods, NSGA-II [4] and SPEA2 [5], have been shown to be very effective in handling MOPs but degrade drastically on MaOPs due to dominance resistance [6]. Specifically, the proportion of nondominated solutions increases significantly in a higher-dimensional objective space, and the strictly defined dominance relation cannot discriminate between these solutions.

As the weakest preference structure for decision-makers, Pareto dominance can be defined as follows: a vector u is said to dominate another vector v, represented as u ≺ v, if f_i(u) ≤ f_i(v) for all i ∈ {1, ..., m} and f_j(u) < f_j(v) for at least one j ∈ {1, ..., m}. In order to address the issues caused by dominance resistance, a number of methods have been proposed to improve Pareto-based methods for MaOPs. The most intuitive attempt is to relax the original dominance relation, making solutions more comparable in a higher-dimensional objective space; several relaxed relations, such as fuzzy dominance [8] and others [7, 9], have been proposed. By enlarging the dominated area of a solution, these relaxed relations can increase the convergence ability to some extent. Nevertheless, setting proper values for the parameters in these methods is not an easy task, since the best values vary over problems and depend on the number of objectives.
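As a concrete illustration, the dominance test above can be sketched in a few lines of Python (a minimal sketch for minimization; the function name is ours):

```python
def dominates(u, v):
    """Pareto dominance for minimization: u dominates v if u is no worse
    in every objective and strictly better in at least one."""
    return all(a <= b for a, b in zip(u, v)) and any(a < b for a, b in zip(u, v))

print(dominates((1.0, 2.0), (1.0, 3.0)))  # True: equal in f1, better in f2
print(dominates((1.0, 3.0), (2.0, 2.0)))  # False: mutually nondominated
```

In a high-dimensional objective space, the second case (mutual nondominance) becomes overwhelmingly common, which is exactly the dominance-resistance phenomenon described above.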

Objective reduction is a direct way to handle dominance resistance by converting a MaOP into a MOP. For some real-life MaOPs, the objectives can be highly correlated. Thus, the basic idea behind these methods is to keep the essential objectives and discard the redundant ones. After objective reduction, if the remaining number of objectives is lower than four, any state-of-the-art MOEA can be applied directly. Various machine learning algorithms, such as principal component analysis (PCA), maximum variance unfolding (MVU) [10], and feature selection [11], can serve as tools for objective reduction.

Additionally, even an MOEA itself can be used to remove redundant objectives by considering different kinds of errors simultaneously [12]. The advantage of objective-reduction-based methods is that no extra mechanisms are required to promote the scalability of existing MOEAs. Yet, these reduction strategies are only valid when redundant objectives exist.

Indicator-based methods combine convergence and diversity performance into a single indicator that serves as the selection criterion in environmental selection. Theoretically, they can avoid the dominance resistance phenomenon, but in practice, it is doubtful that they can effectively maintain both the convergence and diversity of the population when dealing with MaOPs. The hypervolume estimation algorithm (HypE) [13] and the indicator-based evolutionary algorithm (IBEA) [14] can be considered two representatives of this category. Because of its strong theoretical qualities, HypE is commonly utilized in solving MaOPs, but its computational cost grows exponentially as the number of objectives grows. The indicator used in IBEA and its variant [15] is gaining popularity in MaOPs, as it requires neither a reference point nor other prior knowledge of the real PF and has a relatively low computational complexity. However, it exhibits good performance only in terms of convergence, whereas diversity tends to be poor; combining both convergence and diversity into a single indicator is not an easy task.

Another promising method for MaOPs is to decompose a MOP into several subproblems using a series of aggregation functions and then solve these subproblems simultaneously. These methods are referred to as decomposition-based or aggregation-based algorithms. The MOEA based on decomposition (MOEA/D) can be considered the most representative of this class [16]. Although MOEA/D was not originally proposed for MaOPs, it is still competitive in the many-objective scenario. So far, many variants of MOEA/D have been proposed from different aspects, e.g., adaptively adjusting weight vectors [17], exploiting the perpendicular distance from solutions to weight vectors [18], and enhancing diversity by a hybrid method [19].

The basic principle for designing an MOEA is to find a set of nondominated solutions that are both close to and well distributed along the Pareto front (PF), reflecting the two goals of the search process: convergence and diversity. Thus, some works began to separate these two basic goals into different subpopulations; this strategy focuses on the structure of selection instead of a specific selection procedure. The two-archive method can be considered representative of this idea [20]. As a hybrid method [21], a typical two-archive algorithm maintains a convergence archive (CA) that targets solutions with good convergence and a diversity archive (DA) that searches for solutions with good diversity; the two evolutionary processes occur separately. The original two-archive algorithm is unsuitable for MaOPs since, in the absence of any restrictions, the number of nondominated solutions in the CA would increase dramatically, failing to provide sufficient selection pressure [20]. In Two Archive 2 (TwoArch2), the structure and survival strategies were updated significantly: the sizes of the two archives are fixed, and dominance-based methods are not directly utilized, so as to avoid dominance resistance [22]. Cai et al. [23] proposed a new two-archive method built on an aggregation-based framework that combines the benefits of two-archive and aggregation-based methods. Besides, as the demands of convergence and diversity are decoupled into two different archives, two-archive techniques are well suited for tackling multimodal MOPs [24] and constrained MOPs [25].

Although two-archive methods have been extensively used for MaOPs, they still suffer from deficiencies in some cases. First, the two archives evolve their populations according to their own distinct standards, and there is no interaction between them. Second, different problems have different characteristics: some MOPs converge quickly, while others may require an emphasis on diversity. As a result, a two-archive method with separate yet fixed evolutionary strategies fails to balance convergence and diversity in some cases.

To address such issues, we propose an interactive two-archive method for aggregation-based MOEAs, termed iTwoArch. The main contributions of this work are summarized as follows:
(i) In iTwoArch, unlike its predecessor, replacement information on subproblems in the DA is transmitted to the CA, thereby avoiding redundant searches to some extent.
(ii) A mating selection is carefully designed according to the evolutionary speed of the two archives, which effectively enhances the capability of balancing convergence and diversity in iTwoArch.
(iii) Different kinds of test problems are used to verify the performance of the proposed method against several state-of-the-art peer algorithms.

The remainder of this paper is organized as follows: the proposed algorithm is described in detail in the next section. The third section presents the experimental design, test problems, and performance indicators for investigating the performance of the proposed iTwoArch, followed by experimental results and relevant discussions. Finally, the conclusion and some possible future research directions are given in the last section.

2. The Proposed Method

2.1. Main Idea

Algorithm 1 gives the general framework of the proposed iTwoArch, which, like many other algorithms [16, 23], works in a steady-state manner. First, a set of N uniformly spread weight vectors W = {w_1, ..., w_N} is generated in advance using Das and Dennis' [26] systematic approach, and the two-layer method is adopted when the number of objectives exceeds seven [23]. Then, the two populations are initialized separately. The ideal point z*, defined as the best scalar value found so far for each objective, is obtained from the two populations (Step 3). Next, the neighborhood of each subproblem is determined in Step 4: each B(i) stores the indexes of the T closest weight vectors to w_i. Steps 6-16 are iterated until the termination criterion is fulfilled. Specifically, in each iteration, two mating solutions are chosen according to the replacement status of the two archives. Once the parents are determined, simulated binary crossover (SBX) and polynomial mutation are executed sequentially to generate a new solution y. Finally, y is used to update the ideal point and the two archives.

1: W ← weightVectorsGenerator()
2: CA, DA ← initializePopulation()
3: z* ← initializeIdealPoint(CA, DA)
4: B ← initializeNeighborhood(W)
5: nc ← 0, nd ← 0
6: while termination criterion is not fulfilled do
7:  for i ← 1 to N do
8:   (p1, p2) ← matingSelection(i, δ, CA, DA, nc, nd)
9:   y′ ← crossover(p1, p2)
10:   y ← mutation(y′)
11:   updateIdealPoint(y, z*)
12:   k ← updateDA(y, z*, W, DA, nd)
13:   updateCA(y, z*, W, CA, nc, k)
14:  end for
15:  nc ← 0, nd ← 0
16: end while
17: return all the non-dominated solutions in CA ∪ DA
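The weight-vector generation in Step 1 relies on Das and Dennis' systematic approach. The following sketch (our own stars-and-bars implementation, not the authors' code) enumerates all m-dimensional vectors whose components are multiples of 1/H and sum to one:

```python
from itertools import combinations

def das_dennis(m, h):
    """Simplex-lattice weight vectors: all m-dim vectors with components
    in {0, 1/h, ..., h/h} summing to 1 (Das & Dennis systematic approach)."""
    vectors = []
    # stars-and-bars: choose m-1 divider positions among h+m-1 slots
    for dividers in combinations(range(h + m - 1), m - 1):
        prev, counts = -1, []
        for d in dividers:
            counts.append(d - prev - 1)  # stars between consecutive dividers
            prev = d
        counts.append(h + m - 2 - prev)  # stars after the last divider
        vectors.append(tuple(c / h for c in counts))
    return vectors

W = das_dennis(3, 4)  # C(4+3-1, 3-1) = C(6, 2) = 15 vectors
print(len(W))
```

For m = 3 and H = 4 this yields 15 vectors; the two-layer variant used for many objectives merges an outer (boundary) layer with an inner layer generated the same way and shrunk toward the center.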

After presenting the framework of the proposed algorithm, we would like to discuss the main similarities and differences between the improved version and the original TwoArchA [23]. As for similarities, both introduce a set of weight vectors to guide the search behavior in the two archives separately. Besides, both use a steady-state method to update the two archives, meaning that once an offspring is produced, it directly goes through the twin archives, which determine whether or not it replaces a specific solution. Compared with its predecessor, however, the characteristic procedures of the proposed algorithm mainly lie in the two updating steps and the mating selection. In our design, the evolution of the CA and the DA is based not only on different aims but also on interaction: the result of updating the DA is considered necessary for the evolution of the CA. Additionally, the mating selection in this study is intended to address two issues: (1) preventing two distant solutions from being recombined; (2) balancing convergence and diversity according to the states of the two archives. The following subsections present the two update mechanisms and the mating selection in detail.

2.2. Diversity Archive

The update mechanism of the DA is presented in detail in Algorithm 2. The main goal of the DA is to select well-diversified solutions; the solutions in this archive should be widely distributed over the PF. To this end, some characteristic procedures are designed.

 Input: offspring solution y, ideal point z*, weight vector set W, diversity archive DA, count of replacements in the current generation nd
 Output: index k of the matched sub-problem
1: k ← associate(y, z*, W)
2: if y dominates DA_k then
3:  DA_k ← y, nd ← nd + 1
4: else
5:  if y and DA_k are nondominated with each other then
6:   if d⊥(y, w_k) < d⊥(DA_k, w_k) then
7:    DA_k ← y, nd ← nd + 1
8:   end if
9:  else
10:   discard y
11:  end if
12: end if
13: return k

Specifically, when a new solution y is created, we first find the most suitable weight vector for y via association (Step 1 of Algorithm 2). The association procedure is presented in Algorithm 3: for each weight vector w_j, we compute the perpendicular distance from y to w_j as follows:

d⊥(y, w_j) = || f′(y) − (f′(y)^T w_j / ||w_j||²) w_j ||,

where f′(y) = f(y) − z* denotes the translated objectives (f′_i(y) = f_i(y) − z*_i, i = 1, ..., m). After the appropriate weight vector w_k is determined, the original solution DA_k associated with this weight competes with y, and the replacement happens in two cases: (1) y dominates DA_k (Step 3 in Algorithm 2); (2) y and DA_k are nondominated with each other, but y has a lower perpendicular distance to w_k (Step 7 in Algorithm 2).

The replacement count nd is increased by one whenever a replacement happens.

 Input: offspring solution y, ideal point z*, weight vector set W
 Output: index k of the matched sub-problem
1: for each w_j ∈ W do
2:  compute the perpendicular distance d⊥(y, w_j)
3: end for
4: k ← argmin_j d⊥(y, w_j)
5: return k
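Under the definitions above, the association step can be sketched in plain Python (a sketch with names of our choosing; the ideal point z is assumed to already hold the best value found per objective):

```python
import math

def perpendicular_distance(f, w, z):
    """Distance from the translated objective vector f - z to the line
    spanned by weight vector w."""
    ft = [fi - zi for fi, zi in zip(f, z)]              # translate by ideal point
    scale = sum(a * b for a, b in zip(ft, w)) / sum(b * b for b in w)
    proj = [scale * b for b in w]                        # projection onto w
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(ft, proj)))

def associate(f, W, z):
    """Index of the weight vector with the smallest perpendicular distance."""
    return min(range(len(W)), key=lambda j: perpendicular_distance(f, W[j], z))

# a point lying on the line of w = (1, 1) has zero perpendicular distance to it
print(associate((2.0, 2.0), [(1.0, 0.0), (1.0, 1.0), (0.0, 1.0)], (0.0, 0.0)))
```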
2.3. Convergence Archive

The main idea of the update mechanism in the CA is to evaluate convergence on the k-th subproblem: for each solution CA_j with j in the neighborhood B(k), replace CA_j if y is better than CA_j in terms of convergence. The update procedure for the convergence archive is shown in Algorithm 4. Similar to the mechanism of the DA, we use two standards to evaluate convergence. First, if y dominates CA_j, y can be considered a substitute for CA_j. In addition, if y has a smaller aggregation function value, i.e., g(y | w_j, z*) < g(CA_j | w_j, z*), y can also be considered the winner.

Like TwoArchA, we still use a modified version of the Tchebycheff function as the aggregation function in this study. A smaller Tchebycheff function value of a solution implies better convergence performance. The aggregation function for the j-th weight vector can be defined as follows:

g(x | w_j, z*) = max_{1≤i≤m} { |f_i(x) − z*_i| / w_j,i },

where f_i(x) − z*_i is the translated value of the i-th objective f_i(x).
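The aggregation can be written in a few lines (a sketch; the small eps guarding against zero weight components is our own addition):

```python
def tchebycheff(f, w, z, eps=1e-6):
    """Modified Tchebycheff aggregation: max over objectives of the
    translated objective value divided by the weight component."""
    return max(abs(fi - zi) / max(wi, eps) for fi, zi, wi in zip(f, z, w))

print(tchebycheff((0.2, 0.8), (0.5, 0.5), (0.0, 0.0)))  # 1.6
print(tchebycheff((0.1, 0.4), (0.5, 0.5), (0.0, 0.0)))  # 0.8
```

With w = (0.5, 0.5) and z* = (0, 0), the solution (0.1, 0.4) scores 0.8 while (0.2, 0.8) scores 1.6, so the former is the winner of that subproblem.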

It is worth noting that the index k of the best-matched subproblem is passed from the update procedure of the DA, which means that the perpendicular distance is used to determine the most suitable subproblem for the replacement scheme in the CA.

 Input: offspring solution y, ideal point z*, weight vector set W, convergence archive CA, count of replacements in the current generation nc, index k of the matched sub-problem
1: B(k) ← indexes of the T closest weight vectors to w_k
2: for each j ∈ B(k) do
3:  if y dominates CA_j then
4:   CA_j ← y, nc ← nc + 1
5:  else
6:   if g(y | w_j, z*) < g(CA_j | w_j, z*) then
7:    CA_j ← y, nc ← nc + 1
8:   end if
9:  end if
10: end for
11: return
2.4. Mating Selection

Algorithm 5 provides the pseudocode of the whole mating selection. Commonly, mating selection aims at selecting better solutions for variation. In two-archive-based methods, the mating solutions generally come from the two archives independently, based on the assumption that combining them could generate a better solution that inherits complementary abilities from both parents. Based on this idea, the mating selection in this work contains two stages.

First, taking advantage of the subregions defined by the weight vectors, we restrict the solutions to be selected within certain regions. This restriction may help alleviate a known issue in MaOPs, where recombining two distant solutions is unlikely to generate good offspring. Additionally, to enhance the exploration ability, we also allow, with a small probability, selection from the entire population (Step 4 in Algorithm 5).

The second stage is the collaborative process (Steps 7-12). Typically, when the two archives have a similar status, the two mating parents are picked from the CA and the DA separately. However, in some cases, the evolution of the DA is faster than that of the CA, which indicates an imbalance between convergence and diversity. Therefore, we need to control the speed of the evolutionary process in the two archives. To make the collaboration work, two issues need to be adequately addressed. First, the states of the two archives must be estimated. In our design, we use the replacement frequency as the criterion for state estimation: nc and nd, which count the replacements in the CA and the DA, respectively, can be directly considered indicators of the evolutionary state. Second, the control of the evolutionary process should be adequate. Here, we use a straightforward strategy: when the DA has a higher proportion of replacements, the mating selection is more likely to be carried out in the CA only (Step 9 in Algorithm 5).

 Input: index k of the sub-problem, probability δ of selection in the neighbourhood, CA, DA, nc, nd
 Output: the two selected solutions p1, p2
1: if rand() < δ then
2:  E ← B(k)
3: else
4:  E ← {1, ..., N}
5: end if
6: Randomly select two indexes i1 and i2 from E
7: ρ ← nd / (nc + nd)
8: if rand() < ρ then
9:  p1 ← CA_i1, p2 ← CA_i2
10: else
11:  p1 ← CA_i1, p2 ← DA_i2
12: end if
13: return p1, p2
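For illustration, the collaborative selection can be sketched in Python as below. All names are ours, and the replacement-ratio probability rho = nd/(nc + nd) is an assumed concrete form of the strategy described in the text, not necessarily the authors' exact formula:

```python
import random

def mating_selection(k, B, CA, DA, nc, nd, delta=0.9):
    """Two-stage mating selection sketch: neighborhood restriction first,
    then archive choice biased by the replacement counters nc and nd."""
    # Stage 1: with probability delta draw indexes from the neighborhood B[k],
    # otherwise from the whole population (occasional exploration).
    pool = B[k] if random.random() < delta else list(range(len(CA)))
    i1, i2 = random.sample(pool, 2)
    # Stage 2: when DA replaces more often than CA, favor convergence by
    # taking both parents from CA (assumed ratio -- an illustration).
    rho = nd / (nc + nd) if (nc + nd) > 0 else 0.5
    if random.random() < rho:
        return CA[i1], CA[i2]
    return CA[i1], DA[i2]
```

With nd much larger than nc, both parents almost always come from the CA, slowing the DA down; when the counters are balanced, one parent is drawn from each archive.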
2.5. Discussion

This subsection is devoted to discussing the main similarities and differences between the proposed method and its predecessors.

iTwoArch vs. TwoArch2 [22]: despite the fact that both use the two-archive framework to lead the solution set toward the PF, the updating procedure in each archive is distinct. In TwoArch2, the CA is maintained by an indicator-based method, whereas the DA is maintained using a Pareto-based method, necessitating a new Lp-norm-based diversity maintenance strategy. In the proposed iTwoArch, both the CA and the DA are maintained by decomposition-based selection.

iTwoArch vs. TwoArchA+ [23]: both employ a steady-state decomposition-based approach to updating two archives, but their detailed maintenance methods are quite different. The updating of two archives is done independently in TwoArchA+, with no interaction. However, with the proposed iTwoArch, the updated solution information in the DA would be passed to the CA update process to better assist the selection. In addition, there is no specific mating selection designed for TwoArchA+, yet in our iTwoArch, the mating selection is based on the frequency of updates for two archives, keeping the two updating procedures more balanced.

3. Experimental Design

This section introduces test problems, performance metrics, algorithms in comparison, and parameter settings for the experimental studies.

3.1. Test Problems

To test the effectiveness of the proposed iTwoArch, two well-known continuous benchmark suites, DTLZ1-4 [27] and WFG1-9 [28], are involved in our empirical studies. In particular, the number of objectives m for each problem is set to 3, 5, 8, 10, and 15. For the DTLZ problems, the number of decision variables is set to n = m + k − 1, where k = 5 for DTLZ1 and k = 10 for DTLZ2-4. As for the WFG problems, we set the number of decision variables to n = k + l, where the position-related parameter is k = 2 × (m − 1) and the distance-related parameter is l = 20. For a fair comparison, we apply the same conditions to each test problem.

3.2. Parameter Settings

For a fair comparison, we utilize the same settings for the parameters common to the proposed iTwoArch and its rivals. First, SBX and polynomial mutation are employed as the reproduction operators. The crossover probability is set to 1 and the mutation probability to 1/n, and the crossover and mutation distribution indexes are set to 30 and 20, respectively. Following the suggestions of the original works, the neighborhood size for the decomposition-based methods is set to 20, and the neighborhood selection probability δ is set to 0.9. Furthermore, the population size is kept the same for all algorithms to ensure a fair comparison; it is controlled by two parameters (H1 and H2), since the two-layer weight vector method is used in this work [29]. The detailed population sizes are summarized in Table 1.
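For reference, the two reproduction operators can be sketched as follows (a standard textbook formulation without the boundary-handling refinements found in library implementations; the per-variable mutation probability defaults to 1/n, the usual convention):

```python
import random

def sbx(p1, p2, eta=30.0):
    """Simulated binary crossover, applied variable-wise (no bound repair)."""
    c1, c2 = list(p1), list(p2)
    for i in range(len(p1)):
        u = random.random()
        if u <= 0.5:
            beta = (2.0 * u) ** (1.0 / (eta + 1.0))
        else:
            beta = (1.0 / (2.0 * (1.0 - u))) ** (1.0 / (eta + 1.0))
        c1[i] = 0.5 * ((1.0 + beta) * p1[i] + (1.0 - beta) * p2[i])
        c2[i] = 0.5 * ((1.0 - beta) * p1[i] + (1.0 + beta) * p2[i])
    return c1, c2

def polynomial_mutation(x, lower, upper, eta=20.0, pm=None):
    """Polynomial mutation with per-variable probability pm (default 1/n)."""
    n = len(x)
    pm = 1.0 / n if pm is None else pm
    y = list(x)
    for i in range(n):
        if random.random() < pm:
            u = random.random()
            if u < 0.5:
                d = (2.0 * u) ** (1.0 / (eta + 1.0)) - 1.0
            else:
                d = 1.0 - (2.0 * (1.0 - u)) ** (1.0 / (eta + 1.0))
            # perturb proportionally to the variable range, then clip to bounds
            y[i] = min(max(y[i] + d * (upper[i] - lower[i]), lower[i]), upper[i])
    return y
```

A useful sanity check: SBX is mean-preserving, i.e., c1[i] + c2[i] = p1[i] + p2[i] for every variable.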

3.3. Algorithms in Comparisons

Here, we choose five state-of-the-art many-objective optimization algorithms to comprehensively study the performance of the proposed algorithm. These algorithms cover different categories of methods for MaOPs, including two decomposition-based algorithms (MOEA/D and MOEA/D-AM2M), an algorithm that combines Pareto-based and decomposition-based selection (NSGA-III), and two algorithms based on two archives (TwoArchA+ and TwoArch2). Brief reviews and comments are listed as follows:
(i) MOEA/D [16]: MOEA/D can be considered a representative algorithm of the decomposition-based method. This study chooses the original MOEA/D with a modified version of the Tchebycheff function for a fair comparison.
(ii) MOEA/D-AM2M [30]: lacking prior knowledge of the PF, aggregation-based methods find it hard to initialize an appropriate set of weight vectors for a specific problem. To address this issue, MOEA/D-AM2M assumes that the current evolutionary population is an approximation of the PF and periodically resets the subregion settings and weight vectors. This resetting practice makes MOEA/D-AM2M suitable for some degenerate MaOPs.
(iii) NSGA-III [29]: in NSGA-III, the maintenance of diversity is also aided by a series of weight vectors (called reference lines in NSGA-III). Based on that, a niche-preservation operation is employed to identify promising candidates. Overall, NSGA-III prefers solutions that are nondominated but close to a set of well-distributed reference lines.
(iv) TwoArch2 [22]: TwoArch2 maintains two archives according to indicator-based selection and an Lp-norm-based diversity maintenance method. Mating solutions are picked from the two archives, which have different characteristics.
(v) TwoArchA+ [23]: a recent two-archive method for MaOPs. Similar to TwoArch2, the updating mechanisms of the CA and the DA are based on different rules yet under the same decomposition-based framework. To further control diversity, each subproblem's neighborhood is extended to the whole population.

All algorithms, including the proposed iTwoArch, are implemented in Java using the jMetal framework [31], and all experiments are conducted on a Lenovo ThinkCentre computer with an Intel(R) Core i5-8400 (2.8 GHz) processor and 64 GB of RAM. Each algorithm is independently executed 30 times on each test instance, and the average values of the evaluation metrics are recorded. Furthermore, the Wilcoxon signed-rank test [32] at the 5% significance level is performed on the metric values generated by each pair of competing algorithms to examine the statistical significance of the differences across instances.

3.4. Performance Metrics

We use the hypervolume (HV) indicator as the performance metric in the experiments. The good theoretical qualities of HV make it a relatively fair measure of algorithm performance, and it simultaneously measures an algorithm's convergence and diversity ability. The larger the HV value, the better the obtained solutions approximate the whole PF. Mathematically, HV is defined by Equation (4) as follows:

HV(S) = Leb( ∪_{x∈S} [f_1(x), r_1] × [f_2(x), r_2] × ... × [f_m(x), r_m] ),

where Leb(·) denotes the Lebesgue measure and r = (r_1, ..., r_m) is a reference point dominated by all Pareto-optimal objective vectors. Thus, before calculating HV, a reference point has to be assigned. In our experiments, the setting of the reference point for each test instance is the same as in the experiments on TwoArchA [23], and the presented HV values are all normalized by dividing by the volume of the box spanned by the reference point. For the problems with 15 objectives, the HV value is approximated by Monte Carlo simulation, with 10,000,000 sampled points to ensure accuracy.
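The Monte Carlo approximation works by sampling uniformly in the box bounded by the reference point and counting the samples dominated by at least one solution. A minimal sketch (ours; it assumes minimization with non-negative objectives and uses a modest sample count for illustration):

```python
import random

def hv_monte_carlo(front, ref, n_samples=100000, seed=1):
    """Estimate the hypervolume of `front` w.r.t. reference point `ref`
    (minimization): fraction of samples in [0, ref] dominated by the front,
    scaled by the volume of the sampling box."""
    rng = random.Random(seed)
    m = len(ref)
    hits = 0
    for _ in range(n_samples):
        p = [rng.uniform(0.0, r) for r in ref]
        # p is counted if some front member is no worse in every objective
        if any(all(f[i] <= p[i] for i in range(m)) for f in front):
            hits += 1
    box = 1.0
    for r in ref:
        box *= r
    return box * hits / n_samples
```

For the single point (0.5, 0.5) with reference point (1, 1), the exact HV is 0.25, and the estimate converges to it at a rate of O(1/sqrt(n_samples)).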

4. Empirical Result and Discussion

4.1. Performance Comparison on DTLZ Problems

The DTLZ1 problem has a linear PF and a large number of local optima, which challenges an algorithm's ability to converge toward the PF. According to the results in Table 2, iTwoArch shows better performance than the other algorithms in almost all instances; for the three-objective instance, TwoArch2 achieves the best result.

As a relatively simple problem, DTLZ2 is used to test an algorithm's diversity ability. It is clear that iTwoArch achieves the best performance. For the rest of the instances, the performance of all algorithms appears similar, except on the five-objective instance. Thus, to give more insight, Figure 1 plots the 3-dimensional radial coordinate visualization (3D-RadVis) [33] of the results on the five-objective instance. As shown, the proposed iTwoArch and MOEA/D-AM2M perform better in terms of both convergence and diversity. MOEA/D can achieve good convergence but struggles to maintain a set of well-distributed individuals. Finally, the solutions obtained by TwoArch2 fail to reach the boundary of the PF, and some of them are far from the optimal front.

The PF of DTLZ3 is exactly the same as that of DTLZ2, yet its search space contains a large number of local optima. The proposed iTwoArch obtains medium performance on all these instances. TwoArch2 gets the worst performance on this problem; its indicator-based convergence mechanism may lose efficiency on higher-dimensional problems. In contrast, MOEA/D-AM2M performs better on the 8- and 10-objective instances, which suggests that a suitable mechanism for adaptive weight adjustment may also help in escaping local optima.

Similar to DTLZ2, the main challenge of DTLZ4 also lies in maintaining the diversity of the population. From the statistical results, it can be observed that iTwoArch and NSGA-III have equivalent performance, and both algorithms are better than their competitors on the 15-objective instance. We can also find that both weight-based and dominance-based algorithms can achieve good performance on this problem, which means that considering DTLZ alone does not provide sufficient information. Thus, the remaining comparisons are devoted to the WFG problems, with more analysis of the behavior of our algorithm.

4.2. Performance Comparisons on WFG Problems

The WFG test suite poses powerful challenges for an algorithm by introducing several complexities [28]. Table 3 illustrates the comparison results of iTwoArch with the other peer algorithms in terms of HV values.

The WFG1 to WFG3 problems have mixed PFs, and it has been empirically shown that aggregation-based algorithms by themselves are not well suited for this kind of problem with irregular PFs. However, for the WFG1 problem, iTwoArch performs better than the other five algorithms on the three- to eight-objective test instances. TwoArch2 is found to achieve the best overall performance on the 10-objective instance, and it also shows remarkable performance on some instances of WFG2 and WFG3. The diversity mechanism in TwoArch2, which tends to fail to find boundary solutions on DTLZ, shows relatively competitive performance on the WFG1-3 problems. By introducing adaptive subregion division, MOEA/D-AM2M obtains the best performance on the 8- and 10-objective WFG2 problem, whose PF is composed of disconnected convex segments. As a connected version of WFG2, WFG3 has a linear and degenerate PF shape. It is worth noting that the aggregation-based methods (iTwoArch, TwoArchA+, MOEA/D, and MOEA/D-AM2M) are significantly outperformed by NSGA-III on the WFG3 problems. One possible reason is that the normalization mechanism in NSGA-III can effectively drive the population toward the PF.

Despite the WFG4-9 problems having the same hyper-ellipse PF shape, they have different characteristics. For such problems, the observation is similar: the proposed iTwoArch has a clear advantage over the other algorithms, but the performance of the algorithms varies not only with problem characteristics but also with the number of objectives. The WFG4 problem, which is multimodal, is designed to assess an algorithm's ability to escape from local optima. iTwoArch wins two cases on this problem, while TwoArchA+ gives the highest performance on the 3- and 15-objective instances. Interestingly, all aggregation-based algorithms are very competitive on this group of problems; only MOEA/D finds it hard to show a clear advantage. WFG5 is a deceptive problem, on which the two-archive-based methods and NSGA-III demonstrate much better performance than the other algorithms. Conversely, MOEA/D still scales poorly on the high-dimensional objective instances. Similar to the observations before, on the WFG6-9 problems, the proposed iTwoArch wins twice on average and NSGA-III does so once on the 15-objective instances. The deterioration of HV in MOEA/D indicates that a single aggregation-based method may struggle to solve high-dimensional WFG problems, yet a hybrid strategy may help to alleviate this disadvantage. For example, aided by subregion division and adaptive allocation strategies, MOEA/D-AM2M shows more competitiveness on WFG6-9. The primary reason could be that the subregions in MOEA/D-AM2M protect some solutions from being easily replaced and thereby enhance diversity. Besides, the balance between convergence and diversity can be efficiently achieved by incorporating two archives with different purposes, which makes iTwoArch and TwoArchA+ behave quite well on the WFG problems.

To intuitively investigate the effectiveness of the proposed iTwoArch, Figure 2 plots the parallel coordinates of the nondominated fronts with the median metric value obtained by iTwoArch and the five compared algorithms on the 10-objective WFG6, reflecting the distributions of the obtained solutions. iTwoArch shows good performance in terms of both convergence and diversity, since it reaches the upper and lower boundaries and obtains a better distribution over all 10 objectives.

4.3. Summary of Performance Comparison

In this subsection, we compare the proposed iTwoArch with all the state-of-the-art algorithms mentioned in Section 3.3. Table 4 summarizes the significance tests on the HV results between the proposed iTwoArch and the peer algorithms. For a specific row of this table, "B" ("W") in the second column denotes the number of instances on which the results of the proposed algorithm are significantly better (worse) than those of the compared one, and "E" denotes the number of instances on which there is no statistically significant difference between the two compared algorithms.

To better quantify the overall performance of the algorithms with respect to both the test problems and the number of objectives, we introduce the Conover-Inman procedure [34] to evaluate performance scores. Specifically, suppose K algorithms A_1, ..., A_K are included in the evaluation. The score of an algorithm A_i is obtained as score(A_i) = Σ_{j≠i} δ(A_j, A_i), where δ(A_j, A_i) = 1 if A_j is significantly better than A_i, and 0 otherwise. The score of A_i thus reveals how many other algorithms outperform it on the test instances; an algorithm is said to be good if it has a low score.
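The scoring rule can be expressed directly in code (a sketch with our own data layout: a square 0/1 matrix of pairwise significance outcomes):

```python
def performance_scores(better):
    """better[j][i] = 1 if algorithm j is significantly better than algorithm i
    on a given instance; the score of A_i counts how many rivals beat it."""
    k = len(better)
    return [sum(better[j][i] for j in range(k) if j != i) for i in range(k)]

# A0 beats both rivals, A1 beats A2: scores are 0, 1, 2 (lower is better)
print(performance_scores([[0, 1, 1], [0, 0, 1], [0, 0, 0]]))  # [0, 1, 2]
```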

From these summary results (Figure 3), we can draw some initial observations: (1) considering the overall comparison, the proposed iTwoArch achieves the most wins over all the problem instances considered in this study; (2) the proposed algorithm performs best on the DTLZ1-2, DTLZ4, WFG1, WFG4, and WFG6-9 problems; (3) iTwoArch performs best on the 5- and 10-objective instances and remains competitive on the 8- and 15-objective instances.

5. Conclusion

This paper proposes an improved two-archive evolutionary optimization algorithm in which convergence and diversity are addressed in two separate archives for effectively solving MaOPs. To achieve this, we first create two independent archiving procedures working in a steady-state manner: each newly created offspring goes through the two archives, which determine whether or not it survives. We then design an interactive mechanism between the two archives: the index of the best-matched subproblem in the CA is directly acquired from the update of the DA. This interaction may help produce comparatively higher-quality offspring. A mating selection is carefully designed to balance convergence and diversity between the two archives: the frequency of replacements in each archive is counted as an indicator of the speed of its evolutionary process, and according to this indicator, the two individuals for crossover are drawn from different archives in different situations, so that the evolutionary speed of the two archives can be controlled collaboratively. Through quantitative comparisons on different test problems with five different numbers of objectives against five state-of-the-art peer algorithms, the proposed iTwoArch is shown to perform considerably better in addressing MaOPs.

One major direction for future work is to further investigate the proposed iTwoArch on more problems with different characteristics, since the performance of an algorithm on MaOPs is restricted to specific problem features by virtue of the "no free lunch" theorem. In addition, it is also interesting to study how to deal with problems with irregular PFs, which will probably call for the development of new environmental selection mechanisms.

Data Availability

Data supporting this research article are available on request.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work is financially supported by the Natural Science Foundation of China under Grant 40872087.