Abstract

The aim of this research is to propose a new optimization method for the multiconstrained optimization of sparse linear arrays (subject to constraints on the number of elements, the array aperture, and the minimum distance between adjacent elements). The new method is a modified wolf pack algorithm based on quantum theory. In the new method, wolves are encoded by the Bloch spherical coordinates of quantum bits, updated by quantum rotation gates, and selectively mutated in an adaptive manner when they perform poorly. Because a point on the Bloch sphere carries three coordinates, the number of global optimum solutions is greatly expanded, and the global optimum can ultimately be found with a higher probability. Selective mutation enhances the robustness of the algorithm and improves the search speed. Furthermore, because each dimension of the Bloch spherical coordinates always lies in [−1, 1], the variables obtained by the solution space transformation necessarily satisfy the constraints on the array aperture and the minimum distance between adjacent elements. This effectively avoids infeasible solutions while the wolf pack positions are updated and mutated, reduces the judgment steps, and improves the optimization efficiency. The validity and robustness of the proposed method are verified by the simulation of two typical examples, and the optimization efficiency of the proposed method is higher than that of existing methods.

1. Introduction

Since the 1960s, the sparse array has been widely studied for its high target resolution and low cost. It has been successfully applied in fields such as interference arrays in radio astronomy, high-frequency ground radar, and satellite receiving antennas resistant to environmental interference [1–4]. Pattern synthesis, as a key technology of the array antenna, plays an important role in antijamming, interception rate, and parameter estimation [5–9].

The purpose of antenna pattern synthesis is to determine certain parameters of the array antenna so that the radiation characteristics of the array meet the design requirements [10]. These parameters include the number of elements, the element spacing, and the element excitation amplitudes and phase coefficients. Compared with uniform array synthesis, the optimization of nonuniform array placement has always been a difficult problem. To solve it, many synthesis methods have been proposed, such as dynamic programming [11], the fractional Legendre transform [12], simulated annealing [13], particle swarm optimization [14–16], and the genetic algorithm [17–21].

In the literature [7], nonuniform arrays are divided into two categories. One is the grid-based sparse array, in which the element spacing can only be an integer multiple of half a wavelength [11, 13, 14]. The other is the sparse array whose antenna elements are distributed randomly within a given aperture. In recent years, more attention has been paid to the second approach, which not only offers more freedom but also achieves a lower peak side-lobe level for the same number of elements and array aperture.

Because the genetic algorithm is suitable for nonlinear optimization problems, it has been widely used in antenna design and optimization in recent years [17–21]. However, the optimal placement of a sparse array is a very complex problem, and when the genetic algorithm is used to optimize the element positions, it is sometimes difficult to obtain a satisfactory solution in limited time. The main reasons are as follows: (1) the basic genetic algorithm converges slowly and easily falls into local optima when dealing with complex problems; (2) sparse array spacing optimization is a multiconstrained optimization problem, with constraints on the number of elements, the aperture, and the element spacing, and the infeasible solutions that arise while the algorithm runs seriously degrade its optimization efficiency. Aiming at the first problem, improved genetic algorithms are used in [17, 18, 20] to optimize the sparse array element spacing and amplitude weighting, and improved particle swarm optimization algorithms are used in [15, 16] to obtain better numerical solutions. To solve the second problem, a new optimization model is proposed in [17]: by introducing intermediate variables and designing matrix transformations, generalized crossover operators, and mutation operators to handle the constraints, infeasible solutions arising from gene recombination and mutation are effectively avoided under multiple constraints. In [18], the element spacing constraints are separated from the genetic operations by complex mapping criteria, which likewise avoids the occurrence of infeasible solutions.

The wolf pack algorithm (WPA) is a swarm intelligence optimization algorithm proposed by Yang in 2007, which imitates wolf predation behavior and prey allocation. Verified on typical test functions, the WPA has been shown to have better global optimization ability and faster search speed than the GA on complex nonlinear optimization problems, and it has achieved good results in optimal sensor placement [22] and optimal hydropower station dispatch [23]. However, the application of the WPA to array synthesis has not yet been discussed in the literature. This paper takes it as a research topic.

In recent years, quantum computing has attracted wide attention for its excellent performance. Many scholars have integrated quantum theory with intelligent optimization algorithms and proposed many efficient quantum evolutionary algorithms [24–28]. Inspired by this, we introduce quantum Bloch spherical coordinates into the WPA and propose a modified quantum wolf pack algorithm (MQWPA) based on Bloch spherical coordinates. The new algorithm is then applied to the multiconstraint synthesis of sparse linear arrays. By means of intermediate variables, the constrained optimization of the element spacing is transformed into an unconstrained optimization problem. At the same time, because each dimension of the Bloch spherical coordinates always lies in [−1, 1], the distance variables obtained by the solution space transformation satisfy the constraints on the array aperture and the minimum distance between adjacent elements. Therefore, the wolves never leave the feasible solution space during position updating and mutation, the algorithm no longer needs to test solutions for feasibility, and the optimization efficiency improves. Compared with existing methods in the literature, this paper presents a new scheme for the multiconstraint synthesis of one-dimensional sparse linear arrays with higher optimization efficiency.

2. The Problem Formulation

Pattern synthesis is a complex optimization problem, which determines the position, amplitude, and phase of the antenna array elements according to the shape or performance index of the given beam pattern. In engineering design, in order to reduce the mutual coupling between the elements and keep the narrow main lobe width, some constraints are imposed on the aperture of the array and the spacing of the elements. At this time, pattern synthesis becomes a multiconstrained, multivariable, and nonlinear global optimization problem [18].

In this paper, the problem of multiconstraint synthesis of the symmetric one-dimensional sparse linear array is discussed. The array structure is shown in Figure 1.

The number of elements of the symmetrical sparse linear array is 2N + 1 (), and all elements are identical and have no directivity.  represent the excitation and position of the n-th antenna element, respectively, and  is the scanning angle. Because the array has a symmetrical structure, we can set the position of the central element as the reference origin, and then  and . The pattern function of the array is then expressed as
where  equals  and  equals .
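Since the pattern function equation does not survive in this copy, a minimal numerical sketch of the standard symmetric-array pattern may help. It assumes unit, equal-amplitude excitations; the names `array_factor`, `d`, `theta`, and `theta0` are illustrative, not the paper's notation.

```python
import numpy as np

def array_factor(d, theta, theta0=0.0, wavelength=1.0):
    """Pattern of a symmetric (2N+1)-element linear array with unit,
    equal-amplitude excitations. `d` holds the N element positions on one
    side of the reference origin (the centre element sits at 0), `theta`
    the observation angles, `theta0` the scan angle (all in radians)."""
    k = 2.0 * np.pi / wavelength
    u = np.sin(theta) - np.sin(theta0)              # steering variable
    # By symmetry, each off-centre element contributes a cosine pair.
    return 1.0 + 2.0 * np.cos(k * np.outer(u, d)).sum(axis=1)
```

At the scan angle the steering variable vanishes and all 2N + 1 contributions add in phase, so the main-lobe peak equals the element count.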

Because the array is symmetrical, we only need to consider its structure for . With the aperture of the sparse array held constant, the distance between adjacent elements must be no less than  and the peak side-lobe level (PSLL) is to be minimized; the sparse array synthesis optimization model can thus be expressed as

When an intelligent algorithm is applied to the above optimization problem, a penalty function is usually introduced into the fitness function: by penalizing infeasible solutions, the whole population evolves toward feasible ones. This works when the constraints are few and simple but becomes difficult for complex constrained optimization problems. For this reason, this paper adopts the method of [17], in which intermediate variables replace the element spacings and the minimum-spacing-constrained problem is transformed into an unconstrained one. The specific steps are as follows.

Divide into two parts, and . Then, we can get

In order to ensure that the spacing between adjacent elements satisfies the following conditions,

The following relationships must be established:

Through the above operations, the problem of element spacing constraints is transformed into an unconstrained optimization problem of on interval . Then, the array synthesis optimization model can be transformed into the following form:
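The relations above are elided in this copy, but the substitution idea from [17] can be sketched numerically: sort the unconstrained intermediate variables and shift the n-th one by n times the minimum spacing, so the resulting positions automatically satisfy the adjacent-spacing constraint. The function name and symbols here are illustrative assumptions, not the paper's notation.

```python
import numpy as np

def positions_from_free_vars(c, d_min):
    """Map unconstrained intermediate variables `c` (one per off-centre
    element) to element positions that automatically respect the minimum
    adjacent spacing `d_min`: sort c, then add n*d_min to the n-th value.
    After sorting, consecutive differences of c are >= 0, so consecutive
    position differences are >= d_min by construction."""
    c = np.sort(np.asarray(c, dtype=float))
    n = np.arange(1, c.size + 1)
    return c + n * d_min
```

Any real vector `c` therefore yields a feasible spacing layout, which is what removes the need for constraint checks during the search.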

3. Proposed Method

3.1. Modified Wolf Pack Algorithm

The wolf pack algorithm (WPA) is a group search method that simulates the social hierarchy and predatory behavior of wolves in nature. Its heuristic optimization is realized mainly by simulating three kinds of intelligent pack behavior (wandering, summoning, and besieging), together with the rule for generating the leader wolf and the pack renewal mechanism. The algorithm moves the wolves' positions through these operators and evaluates each wolf's position by the value of an objective function. The algorithm has excellent computational robustness and global search ability [22].

The WPA optimizes the extrema of typical test functions more efficiently than the GA and PSO [22]. However, it still falls into local optima easily on multidimensional nonlinear problems. To improve its global optimization ability, this paper introduces quantum Bloch spherical coordinates into the WPA and proposes the MQWPA. The new algorithm encodes the wolves with three-dimensional Bloch spherical coordinates and performs stagnation detection so that poorly performing wolves are selectively mutated. This significantly improves population diversity, avoids premature convergence, and, through selective mutation, strikes a good balance between convergence speed and accuracy. The basic principle of the MQWPA is given below.

3.1.1. Initialization of Quantum Wolves

In the spherical coordinates of Bloch, a point can be determined by two angles and . See Figure 2.

Qubits in Bloch spherical coordinates are expressed as . In the MQWPA, the initial wolves are encoded by the Bloch spherical coordinates of qubits as follows:
where .  are random numbers in , ,  is the size of the wolf pack, and  is the dimension of the function to be optimized.
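Since the encoding formula is elided here, the standard Bloch-coordinate construction can be sketched as follows: draw random angles for each qubit and read off the triple (cos φ sin θ, sin φ sin θ, cos θ). The function name `init_wolves` and the angle ranges are illustrative assumptions consistent with Bloch-sphere geometry.

```python
import numpy as np

def init_wolves(m, n, rng=None):
    """Encode m wolves of dimension n by the Bloch spherical coordinates
    of qubits: for random phi in [0, 2*pi) and theta in [0, pi], each
    qubit yields (cos(phi)sin(theta), sin(phi)sin(theta), cos(theta)).
    Returns the three coordinate 'sheets' of the whole pack."""
    rng = np.random.default_rng() if rng is None else rng
    phi = rng.uniform(0.0, 2.0 * np.pi, size=(m, n))
    theta = rng.uniform(0.0, np.pi, size=(m, n))
    x = np.cos(phi) * np.sin(theta)
    y = np.sin(phi) * np.sin(theta)
    z = np.cos(theta)
    return x, y, z
```

Every qubit lies on the unit sphere, so each of the three coordinates is automatically confined to [−1, 1].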

3.1.2. Solution Space Transformation

In the MQWPA, each one-dimensional traversal space of the wolves in Bloch coordinates is . In order to calculate the fitness of the wolves, a solution space transformation is required, mapping the three coordinate positions occupied by each wolf from the unit space to the solution space of the optimization problem. Assuming the Bloch coordinate of the j-th qubit of the i-th wolf is , the corresponding solution space variable is
where  denotes the value range of the qubit, .
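The transformation formula is elided in this copy; the map commonly used in Bloch-coordinate encodings, a linear stretch of [−1, 1] onto the variable range [a, b], can be sketched as below. Treating this particular affine form as the paper's transformation is an assumption.

```python
import numpy as np

def to_solution_space(coord, a, b):
    """Linear map from a Bloch coordinate in [-1, 1] to the optimization
    variable range [a, b]: X = (b*(1 + coord) + a*(1 - coord)) / 2.
    Applied independently to each of the x, y, z coordinate sheets, so
    one wolf yields three candidate solutions."""
    coord = np.asarray(coord, dtype=float)
    return 0.5 * (b * (1.0 + coord) + a * (1.0 - coord))
```

Because the coordinate can never leave [−1, 1], the mapped variable can never leave [a, b], which is exactly why feasibility checks become unnecessary.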

From the above transformation relations, we can see that each solution of the optimization problem corresponds to three feasible solutions in Bloch coordinates. According to the proof in [25], this coding method significantly increases the diversity of the population, expands the number of global optimal solutions, and improves the probability of finding a global optimum. Therefore, compared with the standard WPA, the MQWPA can be expected to have better global optimization ability.

3.1.3. Quantum Bit Status Update

In the MQWPA, the qubits corresponding to the wolves also perform three kinds of intelligent behavior, namely, wandering, attacking, and besieging. In these processes, if a wolf in the population achieves a better fitness than the leader wolf, it replaces the leader wolf.

The position of the leader wolf in the process of wandering and attacking is

The location of the food is

Based on the above assumptions, the update behavior of the wolves can be described as follows:
① When the i-th wolf is a detecting wolf, the updating formula for the quantum bit amplitude and angular increment in the j-th dimension is as follows:
② When the i-th wolf is a fierce wolf, the updating formula for the quantum bit amplitude and angular increment in the j-th dimension is as follows:
where  denotes the number of iterations,  the walking step of the wolves,  the running step, and  the attack step used during the siege. . .  denotes the number of exploring directions for the wolves, and  is a random number uniformly distributed in [−1, 1].  is a compression factor, used to accelerate population convergence.
③ In the new algorithm, the wolf positions are moved by a quantum rotation gate. The quantum bit phase of each wolf position is updated by the following quantum rotation gate U:
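The update formulas themselves are elided in this copy, but their common shape in Bloch-coordinate algorithms can be sketched: the wandering/running/siege rules produce angular increments, the qubit's angles are rotated by those increments, and the new coordinate triple is read off. The direct angle rotation below stands in for the paper's rotation-gate formulation and is an illustrative assumption.

```python
import numpy as np

def rotate_qubit(phi, theta, d_phi, d_theta):
    """Move a qubit on the Bloch sphere by the angular increments
    (d_phi, d_theta) produced by a search rule, then read off the new
    coordinate triple (x, y, z). phi wraps around the equator; theta is
    clipped to the sphere's polar range [0, pi]."""
    phi_new = (phi + d_phi) % (2.0 * np.pi)
    theta_new = np.clip(theta + d_theta, 0.0, np.pi)
    return (np.cos(phi_new) * np.sin(theta_new),
            np.sin(phi_new) * np.sin(theta_new),
            np.cos(theta_new))
```

However the increments are chosen, the updated point stays on the unit sphere, so each coordinate stays in [−1, 1] and the mapped solution stays feasible.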

3.1.4. Quantum Bit State Selective Variation

Through analysis, we find that in each iteration the original algorithm randomly selects a certain number () of poorly performing wolves, eliminates them, and generates the same number of new wolves according to a normal distribution to keep the population size constant. This scheme prevents convergence stagnation near a local optimum to some extent, but it also increases the computational complexity. Because the wolves are initialized randomly, convergence stagnation does not necessarily occur during the search, and we can detect it by suitable means. When the test result is positive, it is known from the literature [16] that increasing population diversity through mutation is a good remedy: through mutation, the wolves can jump out of the local optimum and try to move closer to the global optimum (the food). When the test result is negative, the wolves keep their current information and continue the next search step without mutation. In this way, the global search ability of the algorithm is preserved and its search speed is improved.

In response to the above issues, this article proposes two improvements:
① In the iterative process of the algorithm, a convergence stagnation detection mechanism is introduced. The detection formula is as follows:
where  denotes the fitness of the head wolf and  denotes the average best fitness of the individual wolves. If the value of  still tends to 1 after () iterations and the algorithm has not terminated, the algorithm is considered stagnant.
② When the algorithm stagnates, the wolves are selectively mutated. According to the previous analysis, a wolf's mutation probability should be proportional to its distance from the leader, and this distance corresponds exactly to the wolf's fitness: in minimization, the smaller the fitness value, the smaller the distance. We therefore mutate according to the wolves' fitness levels: the greater a wolf's fitness value, the higher its mutation probability. This not only maintains the diversity of the population but also increases the iteration efficiency.
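The detection formula is elided in this copy; its described behavior, declaring stagnation when the leader-to-average fitness ratio stays near 1 for a fixed number of iterations, can be sketched as follows. The tolerance `tol` and the exact window semantics are illustrative assumptions.

```python
def is_stagnant(ratio_history, window, tol=1e-3):
    """Convergence-stagnation test: return True when the ratio of the
    head wolf's fitness to the pack's average best fitness has stayed
    within `tol` of 1 for the last `window` consecutive iterations."""
    if len(ratio_history) < window:
        return False
    return all(abs(r - 1.0) < tol for r in ratio_history[-window:])
```

Only when this test fires does the algorithm pay the cost of mutation, which is how the scheme saves work compared with unconditional renewal.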

First, we determine the mutation probabilities of the wolves. The mutation probability of the k-th wolf is as follows:
where m is the size of the wolf pack and the three fitness values of the k-th wolf are , , and , . Each wolf is assigned a random number in . If the random number is less than the mutation probability computed for that wolf, the wolf is mutated; otherwise, it remains in its original state.
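The probability formula is elided in this copy; the described rule, a mutation probability proportional to fitness (so that, in minimization, worse wolves mutate more often), can be sketched as below. The normalized-proportional form is an illustrative assumption.

```python
import numpy as np

def mutation_mask(fitness, rng=None):
    """Selective mutation: each wolf's mutation probability is taken
    proportional to its fitness value (in minimization, a larger value
    means a worse wolf, farther from the leader). A wolf mutates when a
    uniform draw in [0, 1) falls below its probability."""
    rng = np.random.default_rng() if rng is None else rng
    f = np.asarray(fitness, dtype=float)
    p = f / f.sum()                      # worse wolves get larger p
    return rng.uniform(size=f.size) < p
```

A wolf with probability 0 never mutates and one with probability 1 always does, so the leader's neighborhood is preserved while stragglers are shaken loose.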

In the MQWPA, mutation operation is realized by constructing operator V:

From the above formula, the mutation is in fact a large rotation of the qubit along the Bloch sphere, with rotation angles  and . Performing the mutation operation on the rotation angles with the quantum Hadamard gate effectively improves population diversity and avoids premature convergence.

The implementation steps of the improved algorithm can be summarized as follows:
Step 1: initialize the wolves based on the quantum Bloch coordinates.
Step 2: calculate the fitness of the wolves and select the leader wolf.
Step 3: perform the intelligent search of the wolves, including wandering, running, and besieging.
Step 4: update the leader wolf position; then, according to the stagnation test result, selectively mutate the poorly performing wolves.
Step 5: determine whether the required optimization accuracy or the maximum number of iterations has been reached. If so, output the three coordinate positions of the leader wolf and compare the fitness values corresponding to the three coordinates; the best of the three is the optimal solution of the problem. Otherwise, return to Step 2.

The specific flow chart is shown in Figure 3.

3.2. MQWPA for Antenna Pattern Synthesis

Through the extremum optimization experiments of various typical test functions, we find that the MQWPA is very suitable for solving complex nonlinear optimization problems. Therefore, this paper applies the MQWPA to the multiconstraint synthesis of the symmetric one-dimensional sparse linear array, hoping to get good results.

3.2.1. Wolf Colony Initialization and Fitness Function Creation

In order to optimize the pattern synthesis model represented by formula (6), a wolf population with  individuals and dimension  is used as the intermediate population. The wolves are encoded by the quantum bit Bloch spherical coordinates in the following way:
where .  are random numbers in , .  is the size of the wolf pack, and  is the dimension of the function to be optimized.

The initial wolves are randomly distributed on the Bloch sphere, and the traversal space of each wolf is [−1, 1]. According to the previous analysis, we know that each wolf corresponds to three approximate solutions in the feasible solution space of the optimization problem. In the multiconstraint synthesis of symmetric one-dimensional sparse linear arrays, the search region of the elements is

The Bloch coordinate of the j-th qubit on the i-th wolf is . According to (13), three approximate solutions in the corresponding feasible solution space can be obtained as follows:

We call the above three approximate solutions X, Y, and Z, respectively. That is to say, after initialization of wolves, the corresponding three feasible solution matrices in the feasible solution space are as follows:

In order to ensure that the spacing between adjacent elements exceeds the minimum spacing, the variables of each dimension of the wolves' intermediate solutions are sorted and then  is added, giving the actual spacing vectors DX, DY, and DZ. Analysis shows that the elements of DX, DY, and DZ all satisfy the constraint that the distance between adjacent elements is not less than .

Since the aim of linear array synthesis is to obtain the lowest peak side-lobe level (PSLL), the fitness function is constructed as follows:

The larger a wolf's fitness, the lower the PSLL obtained by the array synthesis.  denotes the peak value of the main lobe of the array pattern, and  is taken only over the side-lobe region of the pattern. The fitness functions corresponding to , , and  are , , and , respectively, and the best of the three is chosen as the result of the current iteration.
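The fitness formula is elided in this copy; a PSLL evaluation consistent with the description (ratio of the highest side-lobe to the main-lobe peak, in dB) can be sketched as below. The fixed main-lobe exclusion half-width in u is an illustrative assumption; in practice it would be set from the first pattern nulls.

```python
import numpy as np

def psll(d, wavelength=1.0, n_samples=2048, mainlobe_halfwidth=0.1):
    """Peak side-lobe level (dB) of the symmetric array defined by the
    one-sided element positions `d`, with unit equal-amplitude
    excitations, sampled over u = sin(theta) in [0, 1] for a broadside
    beam. More negative values mean better side-lobe suppression."""
    k = 2.0 * np.pi / wavelength
    u = np.linspace(0.0, 1.0, n_samples)
    af = np.abs(1.0 + 2.0 * np.cos(k * np.outer(u, d)).sum(axis=1))
    peak = af.max()                            # main-lobe peak
    sidelobe = af[u > mainlobe_halfwidth].max()
    return 20.0 * np.log10(sidelobe / peak)
```

A fitness for maximization can then be taken as the negated PSLL, so that lowering the side lobes raises the fitness, matching the text.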

3.2.2. Predatory Behavior of Wolves

The wolves generated in the MQWPA search heuristically through four main predatory behaviors: wandering, summoning, besieging, and competitive renewal. The wolves' positions are updated according to rules (11)–(14); the solution space transformation is then applied, the fitness is calculated, and the selective mutation of the wolves is carried out via stagnation detection.

Because each dimension lies in , that is, , , and , the wolves are updated according to (11)–(14), and the solution space transformation then yields the following results:

The variables of each dimension of the intermediate solutions are sorted by size, and  is added. From our analysis, the resulting element spacing variables necessarily satisfy the multiple constraints that the aperture is  and that the distance between adjacent elements is not less than . Thus, owing to the characteristics of Bloch spherical coordinates, infeasible solutions are effectively prevented during the update process, the judgment steps are reduced, and the optimization efficiency is improved.

3.2.3. Selective Variation

In order to avoid falling into a local optimum, a stagnation detection mechanism is introduced into the optimization process. When stagnation is detected, the wolves are selectively mutated. The mutation rotates the quantum bits along the Bloch sphere with a large amplitude, which effectively increases the population diversity.

4. Simulation Examples

The fractional-order Legendre transform method proposed in [12] synthesizes multiconstrained thinned linear arrays with the goal of reducing the PSLL. The literature [17] also obtained good optimization results using the MGA (modified genetic algorithm). In recent years, new efficient heuristic algorithms have been proposed and have achieved good results on complex optimization problems, such as the MQPSO (modified quantum particle swarm optimization) proposed in [26], which designs new position and dynamic parameter update strategies, greatly improving search efficiency, and which handles nonlinear optimization problems well. In order to verify the validity and robustness of the MQWPA, it is therefore compared with the MGA [17] and the MQPSO on two typical examples. The simulation environment is as follows: Lenovo G460, CPU: Intel(R) Core(TM) i3-380M 2.53 GHz, memory: 2.00 GB, hard disk capacity: 500 GB, software platform: Windows 7 (32-bit operating system), and simulation software: MATLAB 2012.

4.1. Example 1

A linear array with 17 elements is centrally symmetric with an aperture of . All elements are omnidirectional with equal amplitudes, and the beam direction is . The array is to be sparsely synthesized with the following requirements: the aperture is unchanged, the distance between adjacent elements is greater than half a wavelength, and the side-lobe level of the pattern is minimized.

The analysis shows that in this case . This paper uses the MQWPA for the optimization. The basic parameters are as follows: population size , maximum number of iterations 300, detecting wolf ratio factor , maximum number of wanderings , distance determination factor , step size factor , and update ratio factor . To avoid falling into local optima, the stagnation detection mechanism is adopted and the wolves are selectively mutated.

In order to verify the robustness of the proposed method, 10 simulation experiments were carried out independently and randomly for all three algorithms.

Table 1 lists the optimization results of the three algorithms. It can be seen from the table that the optimal array PSLL obtained by the MQWPA is the lowest, which is 0.1224 dB and 0.1014 dB lower than that obtained by the MGA and MQPSO, respectively.

Figures 4–6 show the best, worst, and average convergence curves of the three algorithms over the 10 simulation experiments. From the three figures, the best array PSLL obtained over the 10 MQWPA runs is −19.9194 dB and the worst is −19.7991 dB, a difference of only 0.1203 dB. By contrast, the best PSLL obtained by the MQPSO is 0.2701 dB lower than its worst, and the best PSLL obtained by the MGA is 0.4330 dB lower than its worst. The MQWPA thus has the best numerical stability. Moreover, the fitness convergence curves show that the MGA converges in about 70 generations and the MQPSO in about 50 generations, while the proposed MQWPA converges in about 25 generations. The MQWPA takes 244 seconds, far less than the other two algorithms. Clearly, the MQWPA combines good global optimization ability with high optimization efficiency in Example 1.

Although the MQWPA has excellent optimization performance, it involves relatively many parameters. After many simulations, we found that the distance determination factor  and the step size factor  have a great influence on the algorithm's performance in antenna pattern synthesis. Therefore, this paper analyzes these two parameters in particular.

Keeping the other parameters unchanged, we set  with  and continue to optimize Example 1 with the MQWPA. Each parameter combination was optimized 100 times and the results were averaged. The statistical results are shown in Figure 7.

It can be clearly seen from Figure 7 that when , the optimization effect of the MQWPA is the best and the PSLL of the array is the lowest.

Our analysis suggests that the distance determination factor  is an important parameter controlling the transition of the fierce wolves from running to besieging. When  is increased (below 500),  decreases, and the fierce wolves switch to the siege only when closer to the leader wolf. Only a small solution region near the leader wolf is then finely searched, which promotes fast convergence, reduces the average number of iterations, and improves the optimization accuracy. However, if  continues to increase (beyond 500), the decision distance becomes too small: it then becomes difficult for the fierce wolves to switch to the siege behavior, the number of iterations grows, and the optimization precision drops. When  increases further (), only a few fierce wolves enter the siege. For lack of a fine search of the good solution region, the optimization degrades, producing several poor results in the 100 optimization runs.

The step size factor  reflects the fineness of the wolf search in the solution space; the search fineness increases with . From the point of view of convergence accuracy, when  lies in [100, 800], increasing  improves the convergence accuracy of the algorithm. When  exceeds 800, further increases make the steps of the wolves in the solution domain too small; the algorithm can no longer traverse the good solution region effectively, the convergence accuracy drops, and the PSLL of the array rises.

The above analysis shows that the MQWPA adapts to a wide range of the parameters  and  in pattern synthesis. Its final performance is not greatly affected by slight parameter changes, which shows that parameter selection for the MQWPA is not difficult and that the algorithm is robust. To obtain the best pattern synthesis results, we take  and  in the following examples.

4.2. Example 2

The number of elements is 37 in a centrally symmetric linear array with an aperture of . All elements are omnidirectional with equal amplitudes, and the beam direction is . The array is to be synthesized with the aperture unchanged, the distance between adjacent elements greater than half a wavelength, and the side-lobe level of the pattern minimized.

Because of the symmetric distribution, there are 18 elements in the half aperture. The MQWPA is used for the optimization with  and , and the remaining parameters are the same as in Example 1. Ten independent random simulation experiments were again carried out, and the best of the 10 results for each of the three algorithms is shown in Table 2. The fitness convergence curves of the three algorithms are shown in Figures 8–10.

In the 10 independent experiments optimized by the MQWPA, the best PSLL is −21.2134 dB, the average PSLL is −21.1140 dB, and the worst PSLL is −21.0591 dB. It can be seen that the 10 arrays obtained by the MQWPA are better than those optimized by the MGA and MQPSO.

From Figure 10, the best PSLL obtained over the 10 MQWPA simulation runs is only 0.1543 dB lower than the worst. By contrast, the best PSLL of the 10 MQPSO runs is 0.8512 dB lower than the worst, and the best PSLL of the 10 MGA runs is 0.3871 dB lower than the worst. The MQWPA clearly has the best numerical stability. In addition, the MGA converges in about 100 generations and the MQPSO in about 60 generations, while the proposed MQWPA converges in about 40 generations. The MQWPA takes 540 seconds, far less than the other two algorithms. Among the three algorithms, the MQWPA not only attains the lowest PSLL but also converges fastest, giving it a clear advantage in the pattern synthesis of multiconstrained sparse linear arrays.

Through the simulation of the above typical examples, we find that the MQWPA can synthesize the multiconstrained sparse array robustly and efficiently.

The coding method based on Bloch spherical coordinates greatly expands the number of global optimal solutions and improves the probability of obtaining global optimal solutions. Selective mutation enhances the robustness of the algorithm and improves the search efficiency.

At the same time, because each dimension of the Bloch spherical coordinates always lies in [−1, 1], the variables obtained by the solution space transformation necessarily satisfy the aperture and minimum element spacing constraints. This effectively avoids infeasible solutions in the algorithm iterations, reduces the judgment steps, and improves the optimization speed. Therefore, compared with the MGA in [17] and the MQPSO in [26], the algorithm proposed in this paper achieves better accuracy and speed in the pattern synthesis of multiconstrained sparse linear arrays.

5. Conclusion

In this paper, the MQWPA based on quantum Bloch spherical coordinates is proposed for the multiconstrained optimization of sparse linear arrays (subject to constraints on the number of elements, the array aperture, and the minimum distance between adjacent elements). Multiconstrained sparse array synthesis is not only a nonlinear optimization problem but also a multiconstrained one: the constraints must be checked at every iteration, which takes a lot of time and makes the problem difficult to optimize. The MQWPA adopts a new encoding method, which not only expands the number of global optimal solutions and improves the probability of finding them but also effectively avoids infeasible solutions in the iterations, reduces the steps needed to check the multiple constraints, and improves the optimization efficiency. Several independent simulation experiments show that the MQWPA performs well on the multiconstrained optimization of sparse linear arrays: it has high convergence accuracy, fast convergence speed, and good numerical stability.

In the future, we can proceed along two lines. One is to further analyze the intelligent search strategy, the mutation operator, and the ranges of the important parameters of the WPA, and to continue tapping the optimization potential of the algorithm. The other is to apply the MQWPA to more antenna designs, such as circular, planar, and conformal arrays. In short, the new algorithm has good potential for further improvement as well as a wide range of applications, and it is well worth promoting.

Data Availability

The data used to support the findings of this study are included within the article. The simulation environment is described at the beginning of Section 4. If there are other data problems, you can contact the corresponding author.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this article.

Acknowledgments

This work was supported by the National Natural Science Foundation of China (no. U1813222), Tianjin Natural Science Foundation (no. 18JCYBJC16500), and Key Research and Development Project from Hebei Province (no. 19210404D).