Abstract

One of the most recently developed heuristic optimization algorithms is the dragonfly algorithm (DA), proposed by Mirjalili. The dragonfly algorithm has shown its ability to optimize different real-world problems. It has three variants. In this work, an overview of the algorithm and its variants is presented. Moreover, the hybridization versions of the algorithm are discussed. Furthermore, the results of applications that utilized the dragonfly algorithm in applied science are presented for the following areas: machine learning, image processing, wireless, and networking. The algorithm is then compared with some other metaheuristic algorithms. In addition, the algorithm is tested on the CEC-C06 2019 benchmark functions. The results show that the algorithm has strong exploration ability and that its convergence rate is better than that of other algorithms in the literature, such as PSO and GA. In general, in this survey, the strong and weak points of the algorithm are discussed. Furthermore, some future works that would help in improving the algorithm's weak points are recommended. This study is conducted with the hope of offering useful information about the dragonfly algorithm to researchers who want to study it.

1. Introduction

Computational intelligence (CI) is one of the newest areas of research. CI is a set of methodologies inspired by nature. CI-based techniques are useful for solving complex real-world problems when traditional methods are ineffective. Fuzzy logic, artificial neural networks, and evolutionary computation are parts of CI.

Evolutionary computation mainly concerns optimization problems including combinatorial, mixed, or continuous problems. Evolutionary strategies and genetic algorithms are examples of evolutionary computation. However, this field has extended its scope to cover other areas.

Swarm intelligence (SI), for example, emerged as part of evolutionary computation. However, the efficiency of SI-based algorithms has attracted many researchers in various areas, which has made SI a separate field [1]. Swarm-based algorithms produce low-cost, fast, and robust solutions to complex real-world problems [2]. In SI-based techniques, a number of agents form a population. Agents in a population interact with each other and their environment. Nature (particularly, biological systems) is a great inspiration for these algorithms [3]. In SI techniques, agents follow simple rules, and there is no central control structure dictating how individuals should behave. The interaction between agents gives rise to a global intelligent behaviour that is not known to the individual agents [4]. Recently, many SI-based algorithms have been proposed, most of them mimicking swarm and animal behaviours in nature. The most popular SI algorithms include particle swarm optimization (PSO), proposed by Kennedy and Eberhart [5]. PSO can be counted as a significant improvement in the field; it mimics the behaviour of bird flocks and fish schools, where a particle represents a single solution that has a position in the search space. Furthermore, at the beginning of the 1990s, Marco Dorigo completed his Ph.D. thesis on optimization and nature-inspired algorithms, in which he examined a novel idea known as the ant colony optimization (ACO) algorithm [6]. Chu et al. developed the cat swarm optimization (CSO) algorithm based on the behaviour of cats [7]. The grey wolf optimizer (GWO) was introduced by Mirjalili et al. [8]; GWO mimics the hunting behaviour of wolves. Later, the dragonfly optimization algorithm (DA) was proposed by the same author [9]; DA was mainly inspired by the hunting and migration behaviours of dragonflies. Another example is the differential evolution (DE) algorithm [10], a direct-search population-based technique inspired by the evolution of living species. In [11], the artificial bee colony (ABC) algorithm was proposed; ABC mimics the behaviours of honeybees, and its results showed that it has well-balanced exploitation and exploration abilities. The fitness dependent optimizer (FDO) was proposed in [12]. FDO was inspired by the reproductive swarming process of bees; however, it has no algorithmic connection with the artificial bee colony or honey bee colony algorithms and is instead based on the PSO framework. The Donkey and Smuggler Optimization (DSO) algorithm was proposed in [13]; DSO mimics the searching behaviour of donkeys, using the way donkeys search for and select routes as its inspiration. The firefly algorithm (FA) [14] is another metaheuristic algorithm; it simulates the flashing behaviour of fireflies and their bioluminescent communication, and it is counted as one of the most powerful techniques for solving constrained optimization and NP-hard problems.

The importance of metaheuristic algorithms and their use in many applications have encouraged researchers to publish survey papers on these algorithms, for example, a systematic and meta-analysis survey of the whale optimization algorithm [15]. In [16], a survey on the new generation of metaheuristic algorithms was presented. Another survey on nature-inspired metaheuristic algorithms and their domain specifications was presented in [17]. In [18], recent developments on modifications of the cuckoo search (CS) algorithm were reviewed. The CS was originally developed by Yang and Deb [19]. It mimics the brood parasitic behaviour of cuckoo species and utilizes the Lévy flight behaviour of some fruit flies and birds.

One of the most recent swarm-based algorithms is the dragonfly algorithm (DA). It has been successfully utilized in many different applications, and it is found to produce competitive and efficient results in almost all of them. Since its publication in 2016 until the end of working on this survey, it has been utilized to optimize many problems in different areas. Thus, this paper reviews the dragonfly algorithm as one of the most recent algorithms in the area.

This work first presents an overview of the DA. The variants of the algorithm are then described. Furthermore, the hybridization versions of the algorithm with other algorithms are addressed. Additionally, applications in applied science fields are discussed. Moreover, a comparison between the DA and some other metaheuristics is made, and the advantages and disadvantages of DA are discussed. The DA is also tested on the CEC-C06 2019 benchmark functions. Furthermore, the PSO, DE, and FA are tested on the traditional benchmark functions, and their results are compared with those of the DA. In addition, a discussion of some problems of DA is presented along with solutions and future works to make the algorithm work better. Finally, a conclusion is given.

2. Overview of DA

DA mimics the swarming behaviours of dragonflies. The reason for their swarming is either migration or hunting (dynamic swarm or static swarm, respectively). In a static swarm, small groups of dragonflies move over a small area to hunt other insects. Behaviours of this type of swarming include local movements and abrupt changes. In dynamic swarming, however, a massive number of dragonflies create a single group and move towards one direction for a long distance [20]. The aforementioned swarming behaviours are the main inspiration of DA. Static and dynamic swarming behaviours are, respectively, in line with the exploration and exploitation phases of a metaheuristic optimization algorithm. Figure 1 shows the behaviours of dragonflies in static and dynamic swarming. To direct artificial dragonflies to various paths, five swarming weights are used, namely, the separation weight (s), alignment weight (a), cohesion weight (c), food factor (f), and enemy factor (e), together with an inertia weight (ω). To explore the search space, high alignment and low cohesion weights are used; to exploit the search space, low alignment and high cohesion weights are used. Furthermore, to transfer between exploration and exploitation, the radii of the neighbourhoods are enlarged proportionally to the number of iterations. Tuning the swarming weights (s, a, c, f, e, and ω) adaptively during the optimization process is another way to balance exploration and exploitation. Mathematically, each of the aforementioned behaviours is expressed in equations (1)–(5).

The separation can be calculated as mentioned by Reynolds [21]:

Si = −∑_{j=1}^{N} (X − Xj) (1)

In equation (1), X indicates the position of the current individual, Xj is the position of the jth neighbouring dragonfly, N is the number of neighbouring individuals, and Si indicates the separation motion for the ith individual.

Equation (2) is used for calculating the alignment [9]:

Ai = (∑_{j=1}^{N} Vj) / N (2)

where Ai is the alignment motion for the ith individual and Vj is the velocity of the jth neighbouring dragonfly.

Cohesion is expressed as follows:

Ci = (∑_{j=1}^{N} Xj) / N − X (3)

where Ci is the cohesion for the ith individual, N is the neighbourhood size, Xj is the position of the jth neighbouring dragonfly, and X is the position of the current dragonfly individual.

Attraction towards the food source is computed as follows:

Fi = X+ − X (4)

where Fi is the food attraction for the ith dragonfly, X+ is the position of the food source, and X is the position of the current dragonfly individual. Here, the food source is the dragonfly with the best objective value found so far.

Distraction away from a predator is calculated as follows:

Ei = X− + X (5)

where Ei is the enemy distraction motion for the ith individual, X− is the enemy's position, and X is the position of the current dragonfly individual.

For position updating in the search space, artificial dragonflies use two vectors: the step vector ΔX and the position vector X. The step vector is analogous to the velocity vector in the PSO algorithm [5], and the position updating is also based mainly on the PSO framework. The step vector is defined in [9] as follows:

ΔX_{t+1} = (s Si + a Ai + c Ci + f Fi + e Ei) + ω ΔX_t (6)

where Si, Ai, and Ci represent the separation, alignment, and cohesion for the ith dragonfly; Fi is the food attraction for the ith individual; Ei is the enemy distraction for the ith dragonfly; s, a, c, f, and e are the corresponding weights; ω is the inertia weight; and t indicates the iteration counter.

When the step vector has been calculated, the position vector is updated as follows:

X_{t+1} = X_t + ΔX_{t+1} (7)

where t indicates the current iteration.

In order to raise the probability of exploring the whole decision space, a random move needs to be added to the search. When there are no neighbouring solutions, dragonflies use a random walk (Lévy flight) to fly through the search space, which increases their randomness, stochastic behaviour, and exploration.
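
To make equations (1)–(7) concrete, the following is a minimal NumPy sketch of one step/position update for a single dragonfly, assuming its neighbourhood, the food source, and the enemy have already been determined; the weight values in the usage example are placeholders, not values from [9].

import numpy as np

def dragonfly_update(X, dX, neigh_X, neigh_dX, food, enemy, s, a, c, f, e, w):
    # X, dX        : position and step vector of the current dragonfly
    # neigh_X      : positions of its N neighbours, shape (N, dim)
    # neigh_dX     : step (velocity) vectors of its neighbours, shape (N, dim)
    # food, enemy  : best and worst solutions found so far
    # s, a, c, f, e, w : separation, alignment, cohesion, food, enemy, and inertia weights
    S = -np.sum(X - neigh_X, axis=0)        # separation, equation (1)
    A = neigh_dX.mean(axis=0)               # alignment, equation (2)
    C = neigh_X.mean(axis=0) - X            # cohesion, equation (3)
    F = food - X                            # food attraction, equation (4)
    E = enemy + X                           # enemy distraction, equation (5)
    dX_new = s*S + a*A + c*C + f*F + e*E + w*dX   # step vector, equation (6)
    return X + dX_new, dX_new                     # new position, equation (7)

rng = np.random.default_rng(0)
X, dX = rng.random(2), rng.random(2)
neigh_X, neigh_dX = rng.random((3, 2)), rng.random((3, 2))
new_X, new_dX = dragonfly_update(X, dX, neigh_X, neigh_dX, rng.random(2), rng.random(2),
                                 0.1, 0.1, 0.7, 1.0, 1.0, 0.9)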

3. Convergence and Divergence of DA

For the transition between intensification and diversification, dragonflies adaptively change their weights. This guarantees the convergence of dragonfly individuals during the optimization process. As the optimization progresses, the neighbourhood area is expanded to adjust the flying paths; hence, at the final stage of optimization, the swarm becomes one group and converges to the global optimum. The best and the worst solutions found so far become the food source and the enemy, respectively. This drives convergence towards promising areas and divergence away from nonpromising areas of the search space.

4. Variants of DA

DA has three variants.

4.1. DA for Single-Objective Problems

In DA, at the beginning of the optimization process, a set of solutions is created randomly. The step and position vectors of the artificial dragonflies are initialized with stochastic values between the lower and upper bounds. In each iteration, the position and step vectors of each dragonfly are updated using equations (6) and (7); when a dragonfly has no neighbouring solutions, its position is updated with a Lévy flight instead. To update the step and position vectors of the dragonflies, their neighbourhoods are determined by calculating Euclidean distances. The position updating continues until the end criterion is met. Visual 1 shows the pseudocode of DA for single-objective problems.

The single-objective DA is the most popular of the three variants.

Initialize the dragonfly population Xi (i = 1, 2, …, n)
Initialize step vectors ΔXi (i = 1, 2, …, n)
while the end condition is not satisfied
 Calculate the objective values of all dragonflies
 Update the food source and enemy
 Update ω, s, a, c, f, and e
 Calculate S, A, C, F, and E using equations (1)–(5)
 Update the neighbouring radius
 If a dragonfly has at least one neighbouring dragonfly
  Update the step vector using equation (6)
  Update the position vector using equation (7)
 Else
  Update the position vector using a Lévy flight
 End if
 Check and correct the new positions based on the boundaries of variables
End while
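
For illustration, the following is a simplified, self-contained NumPy sketch of the loop in Visual 1. For brevity, it treats the whole swarm as one neighbourhood (so the neighbouring radius and the Lévy-flight branch never come into play) and uses an arbitrary, linearly decaying weight schedule; it is a sketch of the idea rather than a faithful reimplementation of [9].

import numpy as np

def da_minimize(obj, dim, lb, ub, n=30, max_iter=200, seed=0):
    # obj : objective function to minimise, obj(x) -> float
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, size=(n, dim))     # positions
    dX = np.zeros((n, dim))                    # step vectors
    food, food_fit = None, np.inf              # best solution so far
    enemy, enemy_fit = None, -np.inf           # worst solution so far
    for t in range(max_iter):
        fit = np.array([obj(x) for x in X])
        if fit.min() < food_fit:
            food_fit, food = fit.min(), X[fit.argmin()].copy()
        if fit.max() > enemy_fit:
            enemy_fit, enemy = fit.max(), X[fit.argmax()].copy()
        w = 0.9 - 0.5 * t / max_iter           # inertia weight (illustrative schedule)
        s = a = c = 0.1 * rng.random()         # swarming weights (placeholders)
        f, e = 2 * rng.random(), 0.1
        for i in range(n):
            S = -np.sum(X[i] - X, axis=0)      # equation (1), whole swarm as neighbours
            A = dX.mean(axis=0)                # equation (2)
            C = X.mean(axis=0) - X[i]          # equation (3)
            F = food - X[i]                    # equation (4)
            E = enemy + X[i]                   # equation (5)
            dX[i] = s*S + a*A + c*C + f*F + e*E + w*dX[i]   # equation (6)
            X[i] = np.clip(X[i] + dX[i], lb, ub)            # equation (7) + bound check
    return food, food_fit

# example: minimise the 5-dimensional sphere function
best, best_fit = da_minimize(lambda x: float(np.sum(x**2)), dim=5, lb=-10, ub=10)
print(best_fit)
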
4.2. DA for Binary Problems

In a binary search space, the position vector can take only the values 0 or 1. Hence, the position of search agents cannot be updated by simply adding a step vector to the position vector. Using a transfer function is the easiest way to convert a continuous SI technique into a binary algorithm [22]. A transfer function takes velocity (step) values as input and returns a number between 0 and 1 as output, which indicates the probability of changing the individual's position. Similar to continuous optimization, the function simulates sudden changes in particles with large velocities. To use the DA for binary problems (BDA), the following transfer function was used [22]:

T(ΔX) = |ΔX / √(ΔX² + 1)| (8)

Equation (8) is used to calculate the probability of changing the position of each artificial dragonfly. Equation (9) is then used to update the search agent's position in binary search spaces:

X(t + 1) = ¬X(t) if r < T(ΔX(t + 1)); X(t) otherwise (9)

where r is a random number in [0, 1].
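
As a brief sketch, the transfer-function idea in equations (8) and (9) can be written as follows, assuming the V-shaped function above; each bit is flipped when a uniform random number r falls below its flip probability.

import numpy as np

def transfer(dx):
    # V-shaped transfer function, equation (8): step value -> flip probability
    return np.abs(dx / np.sqrt(dx**2 + 1.0))

def binary_update(x, dx, rng):
    # binary position update, equation (9): flip each bit with probability T(dx)
    r = rng.random(size=x.shape)
    return np.where(r < transfer(dx), 1 - x, x)

rng = np.random.default_rng(0)
x = rng.integers(0, 2, size=8)    # current binary position vector
dx = rng.normal(size=8)           # step vector from equation (6)
print(binary_update(x, dx, rng))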

In BDA, it was assumed that all of the artificial dragonflies are in one swarm. Therefore, BDA simulates exploration and exploitation by adaptively tuning the swarming factors (s, a, c, f, and e) and the inertia weight (ω). Visual 2 presents the pseudocode of BDA.

Initialize the dragonfly population Xi (i = 1, 2, …, n)
Initialize step vectors ΔXi (i = 1, 2, …, n)
while the end condition is not satisfied
 Calculate the objective values of all dragonflies
 Update the food source and enemy
 Update ω, s, a, c, f, and e
 Calculate S, A, C, F, and E using equations (1)–(5)
 Update step vectors using equation (6)
 Calculate the probabilities using equation (8)
 Update position vectors using equation (9)
End while
4.3. The DA for Multiobjective Problems

Multiobjective problems have multiple objectives. The result of a multiobjective problem is a set called the Pareto optimal set, which contains the best trade-offs between the objectives [23].

In order to use the DA for multiobjective problems (MODA), an archive is first provided to save and retrieve the best Pareto optimal solutions during the optimization process. To update positions, the food source is selected from the archive, and the rest of the position-updating process is identical to that of DA.

Similar to the multiobjective particle swarm optimization (MOPSO) algorithm [24], to obtain a well-spread Pareto optimal front, the food source is chosen from the least populated region of the produced Pareto optimal front. In MODA, this is carried out by finding the best and worst objective values of the current Pareto optimal solutions and defining a hypersphere that covers all the solutions; in each iteration, the hypersphere is divided into equal sub-hyperspheres (segments). Once the segments are created, a roulette-wheel mechanism with the following probability for each segment is used for the selection process [25]:

Pi = c / Ni (10)

where c is a constant number greater than one and Ni is the number of Pareto optimal solutions obtained in the ith segment. Equation (10) gives MODA a higher probability of selecting the food source from the less populated segments.
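
A small sketch of the roulette-wheel selection in equation (10) follows, assuming each segment i of the Pareto front holds Ni archive members; empty segments are skipped.

import numpy as np

def select_food_segment(segment_counts, c=2.0, rng=None):
    # equation (10): P_i = c / N_i, so sparsely populated segments are favoured
    rng = rng or np.random.default_rng()
    counts = np.asarray(segment_counts, dtype=float)
    probs = np.zeros_like(counts)
    probs[counts > 0] = c / counts[counts > 0]    # empty segments get zero probability
    probs /= probs.sum()
    return rng.choice(len(counts), p=probs)

# segments holding 1, 5, and 10 Pareto optimal solutions:
# the least crowded segment is chosen most often
print(select_food_segment([1, 5, 10]))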

On the other hand, to select predators from the archive, the worst (most populated) hypersphere is chosen, so that the artificial dragonflies are prevented from searching around nonpromising areas. For this selection process, a roulette-wheel mechanism based on equation (11) is used, where c is again a constant number greater than one and Ni is the number of Pareto optimal solutions obtained in the ith segment.

In equation (11), through the roulette-wheel mechanism, the most crowded hyperspheres have a higher probability of being selected as enemies. To maintain the archive, if at least one of the archive residents dominates a candidate solution, the candidate is not allowed to enter the archive; conversely, the Pareto optimal solutions dominated by the candidate are removed from the archive and the candidate is added. If the archive is full, one or more solutions are removed from the most populated segments [25]. In addition to the parameters of DA, MODA has two new parameters: one defining the maximum number of hyperspheres and another specifying the archive size. The pseudocode of MODA is presented in Visual 3.
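
The archive rules described above can be sketched as follows, assuming all objectives are minimised; the crowding-based removal used when the archive is full is left out for brevity.

def dominates(a, b):
    # True if objective vector a Pareto-dominates b (minimisation)
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def update_archive(archive, candidate):
    # reject the candidate if any archive member dominates it; otherwise
    # remove the members it dominates and add it to the archive
    if any(dominates(member, candidate) for member in archive):
        return archive
    archive = [m for m in archive if not dominates(candidate, m)]
    archive.append(candidate)
    return archive

archive = []
for sol in [(1.0, 4.0), (2.0, 2.0), (3.0, 1.0), (2.5, 2.5)]:
    archive = update_archive(archive, sol)
print(archive)   # (2.5, 2.5) is dominated by (2.0, 2.0) and never enters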

5. Hybridization Versions of DA

In the metaheuristic context, hybridization refers to merging the powerful characteristics of two or more algorithms in order to provide a new, more powerful algorithm based on the features of the merged ones [26]. In the following sections, the hybridization versions of DA are discussed.

Initialize the dragonfly population Xi (i = 1, 2, …, n)
Initialize step vectors ΔXi (i = 1, 2, …, n)
Define the maximum number of hyperspheres (segments)
Define the archive size
while the end condition is not satisfied
 Calculate the objective values of all dragonflies
 Find the nondominated solutions
 Update the archive with respect to the obtained nondominated solutions
 If the archive is full
  Run the archive maintenance mechanism to omit one of the current archive members
  Add the new solution to the archive
 End if
 If any of the new solutions added to the archive is located outside the hyperspheres
  Update and reposition all of the hyperspheres to cover the new solution(s)
 End if
 Select a food source from the archive: food = SelectFood(archive)
 Select an enemy from the archive: enemy = SelectEnemy(archive)
 Update step vectors using equation (6)
 Update position vectors using equation (7)
 Check and correct the new positions based on the boundaries of variables
End while

Ranjini and Murugan [26] combined some features of PSO with DA and produced a new algorithm called the memory-based hybrid dragonfly algorithm (MHDA). In MHDA, two features are added to the DA to refine its performance. (1) An internal memory is added to record promising solutions. This internal memory has a great role in converging to the global optimum. With the internal memory, each dragonfly individual keeps track of its coordinates in the problem space that are associated with its best fitness value so far; in the PSO algorithm, this is named pbest. In each iteration, the best fitness value is compared with the search agent's fitness value in the current population, and the better solutions are saved in a DA-pbest matrix. The dragonfly individuals are also able to keep track of the best value found so far by any dragonfly in the neighbourhood, similar to the concept of gbest in PSO; here, DA-gbest is used to store this value. The exploitation capability of DA is enhanced by these two concepts, pbest and gbest. The internal memory feature gives greater performance compared with the conventional algorithm and provides the power to escape from local optima [27].
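
A minimal sketch of this memory idea follows, assuming minimisation: each dragonfly keeps its personal best (DA-pbest) and the swarm keeps the overall best (DA-gbest). The exact bookkeeping in [26] may differ.

import numpy as np

class SwarmMemory:
    # personal-best / global-best memory in the style of PSO (DA-pbest / DA-gbest)
    def __init__(self, positions, fitnesses):
        self.pbest = positions.copy()            # best position seen by each agent
        self.pbest_fit = fitnesses.copy()
        g = int(fitnesses.argmin())
        self.gbest = positions[g].copy()         # best position seen by the swarm
        self.gbest_fit = float(fitnesses[g])

    def update(self, positions, fitnesses):
        improved = fitnesses < self.pbest_fit    # agents that improved their pbest
        self.pbest[improved] = positions[improved]
        self.pbest_fit[improved] = fitnesses[improved]
        g = int(self.pbest_fit.argmin())
        if self.pbest_fit[g] < self.gbest_fit:   # refresh the global best if needed
            self.gbest = self.pbest[g].copy()
            self.gbest_fit = float(self.pbest_fit[g])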

(2) Iteration-level hybridization with PSO runs on the set of saved solutions. To enhance the optimization performance in the iteration-level hybridization approach, the two algorithms are executed iteratively in sequence [28]. Here, DA with internal memory is used first to extend the search and converge to a more promising area, and then the previously located area is exploited using PSO to find better solutions.

Thus, to reach globally optimal solutions, MHDA combines the exploration features of DA in the initial stage with the exploitation features of PSO in the final stage. Visual 4 shows the pseudocode of MHDA. MHDA's superior performance on unimodal functions demonstrated the fast convergence and accuracy of the algorithm, and the overall results showed its competitive performance and that it can be used to optimize hard problems.

Hence, compared with the original DA, the hybrid MHDA showed better performance because it provided a good balance between the exploration and exploitation capabilities offered by DA and PSO, respectively. The pbest and gbest of PSO were initialized using the DA-pbest and DA-gbest matrices, respectively.

The position and velocity equations of PSO were modified as given in equations (12) and (13), where pbest and gbest are the personal and global best positions in PSO, respectively, and k is the size of the swarm.

In [29], DA was combined with an extreme learning machine (ELM) to overcome the problems of gradient-based algorithms. In this technique, DA was used to optimally select the biases of the hidden layer, which improved the overall performance of ELM. The convergence of DA-ELM was expected within a small number of iterations. Moreover, the overfitting problems of the traditional ELM were overcome using the DA-ELM model. The results showed that, in general, DA-ELM could outperform both GA-ELM and PSO-ELM. It was also observed that DA has a good ability to search the feature space adaptively and can avoid the local minima that may cause premature convergence. Furthermore, DA achieved the minimum root mean square error, which showed its ability to find the optimal feature combination with less prediction error. The average computational time of DA was also, to some extent, comparable with that of PSO and GA. The compared and proposed models were trained using a thousand iterations.

Initialize the set of parameters:
 maximum iteration (Max-iter), maximum number of search agents (Nmax), number of search agents (N), number of dimensions (d), and upper and lower bounds of variables
Initialize the dragonfly population (X)
Initialize the step vectors (ΔX)
while maximum iterations not done
 For each dragonfly
  Calculate the fitness value
  If fitness value < DA-pbest
   Move the current value to the DA-pbest matrix
  End if
  If fitness value < DA-gbest
   Set the current value as DA-gbest
  End if
 End
 For each dragonfly
  Update the food source and enemy
  Update ω, s, a, c, f, and e
  Calculate S, A, C, F, and E using equations (1)–(5)
  Update the neighbouring radius
  If a dragonfly has at least one neighbouring dragonfly
   Update the velocity vector using equation (6)
   Update the position vector using equation (7)
  Else
   Update the position vector using a Lévy flight
  End if
  Check and correct new positions based on the boundaries of variables
 End
End while
------------------End of DA and start of PSO------------------
For each particle
 Initialize the particle with the DA-pbest matrix; set PSO-gbest as DA-gbest
End
while maximum iterations or minimum error criterion is not attained
 For each particle
  Calculate the fitness value
  If fitness value < PSO-pbest in history
   Set the current value as the new PSO-pbest
  End if
 End
 Choose the particle with the best fitness value of all the particles as the PSO-gbest
 For each particle
  Calculate the particle velocity according to equation (12)
  Update the particle position according to equation (13)
 End
End while

For the optimization process, RMSE was used as the fitness function and the number of iterations was used as the criterion to stop the process.

Reference [30] proposed a hybrid version of the dragonfly algorithm with support vector regression (SVR) for online voltage stability assessment. Parameter selection in SVR highly affects its performance; the important parameters include the penalty parameter C, the nonsensitivity coefficient ε, and the kernel parameters. The DA was used for the parameter setting of SVR, which improved the performance of the technique. For training the examined model (DFO-SVR), the voltage magnitudes produced from PMU buses under various operating conditions were used as inputs, and the least values of the voltage stability index (VSI) were used as output variables. Three statistical indices were utilized for evaluating the DFO-SVR model: the correlation coefficient (R), the root mean square error (RMSE), and the percentage of mean square error (PRMSE).

The search ranges of the parameters (C, γ, and ε) were [1, 1000], [0.0001, 0.1], and [0.1, 1], respectively. The optimal values of the C, ε, and γ parameters for this work are shown in Table 1.

The produced results proved that the proposed model has a good performance for prediction.

According to reference [31], determining an adequate cluster radius is required for generating fuzzy rules. In this work, the radius of the cluster was between 0.2 and 0.5. The predicted outputs for the IEEE 30-bus and Algerian 59-bus systems obtained using the DFO-SVR and ANFIS techniques were compared with the actual ones. The produced results proved that the prediction performance for both systems was better with the DFO-SVR technique than with the ANFIS technique.

In reference [32], a hybrid of the binary dragonfly algorithm with an enhanced particle swarm optimization algorithm was examined for solving the feature selection problem. The proposed approach was called the Hybrid Binary Dragonfly Enhanced Particle Swarm Optimization Algorithm (HBDESPO). The ability of DA to preserve diverse solutions and the ability of the enhanced PSO to converge to the best global solution produced a hybrid algorithm with better performance. In the examined hybrid technique, dual exploration was used and excessive exploitation was avoided, and the velocities of the two participating algorithms were updated independently. In the examined system, the K-nearest neighbour (KNN) classifier was used to ensure the robustness of the training data and to reach better feature combinations. The proposed algorithm was tested on 20 standard datasets from the UCI repository. The datasets were divided into three sets: training, validation, and testing. The value of K in the KNN was set to 5 based on trial and error.

The training set was used to evaluate the KNN on the validation set, with the proposed technique guiding the feature selection process; the training set was then used for the final evaluation of the best nominated features. The minimization problem for this work is given in equation (14), where ER(D) is the classifier's error rate, R represents the number of selected features, and C is the total number of features; β and α are constants controlling the weights of the classification error and the feature reduction, respectively. The settings of the optimizer and the global specific parameters are shown in Table 2.
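
As an illustration, a wrapper fitness of the general form in equation (14) can be computed as below with a 5-nearest-neighbour classifier; the split, the weighting (β = 0.99, α = 0.01), and the toy data are assumptions for this sketch and may differ from [32].

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

def fs_fitness(mask, X, y, beta=0.99, alpha=0.01, seed=0):
    # assumed form: fitness = beta * ER(D) + alpha * R / C, where ER(D) is the
    # validation error of a 5-NN classifier on the selected features, R the
    # number of selected features, and C the total number of features
    mask = np.asarray(mask, dtype=bool)
    if not mask.any():
        return 1.0                                # penalise empty feature subsets
    Xtr, Xva, ytr, yva = train_test_split(X[:, mask], y, test_size=0.3,
                                          random_state=seed, stratify=y)
    error = 1.0 - KNeighborsClassifier(n_neighbors=5).fit(Xtr, ytr).score(Xva, yva)
    return beta * error + alpha * mask.sum() / mask.size

# toy usage with random data (for illustration only)
rng = np.random.default_rng(0)
X, y = rng.normal(size=(100, 10)), rng.integers(0, 2, size=100)
print(fs_fitness(rng.integers(0, 2, size=10), X, y))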

The proposed technique was compared with the binary DA from [9] and the enhanced PSO from [22]. From this research work, it was concluded that the examined algorithm could provide high classification accuracy while keeping the ratio of selected features to a minimum. Moreover, small fitness values were reached, and the algorithm kept its stability across various runs. It was also concluded that the standard deviation values confirmed the robustness of the algorithm, as it repeatedly converged to similar solutions.

In DA, having an excessive number of social interactions may reduce the accuracy of the solution, cause the algorithm to fall easily into local optima, and create an imbalance between exploitation and exploration. To control these deficiencies, in reference [33], DA was merged with an improved version of the Nelder–Mead algorithm (INMDA). The reason for this hybridization was to strengthen the local search capability and prevent falling into local optima. INMDA consists of two stages. First, DA is used to search the solution space and gives the individuals the diversity needed to find the globally optimal solution. Second, the improved Nelder–Mead (INM) simplex method is used to find the best and worst points and to calculate the population centroid. One of the main features of INM is that the population's centroid is used to update positions, which improves the chances of jumping out of local optima. The efficiency of the proposed technique was tested using 19 unconstrained benchmark functions and 13 large-scale benchmark functions with dimensions of 30, 500, and 1000, respectively. Table 3 shows the parameter settings of INMDA; in the table, t represents the current iteration and T is the total number of iterations.

For the single-objective functions, the proposed technique was compared with other algorithms, such as the memory-based hybrid dragonfly algorithm (MHDA), DA, and PSO, and with recent SI-based optimization algorithms, such as ALO and WOA. For each optimization algorithm, 30 independent runs were used; the number of agents and the maximum number of iterations were 30 and 1000, respectively, following the literature [26]. Friedman's test and the Wilcoxon rank-sum test were used to statistically assess the significance of the experimental results on the 19 unconstrained benchmark functions. The results showed the strong performance of the proposed work in solving high-dimensional problems compared with algorithms such as DA and MHDA, which cannot easily be applied to high-dimensional problems because they encounter the "curse of dimensionality." The work concluded that INMDA offers superior performance compared with the other algorithms, which was attributed to the enhanced exploitation and exploration capabilities obtained from using reverse learning techniques.

Furthermore, to enhance the optimization performance of DA, reference [34] examined an improved version of the algorithm based on exponential-function adaptive steps and an elite opposition-based learning strategy. The elite individual was used to construct the opposite solutions through elite opposition-based learning; this technique expands the scope of the search area and is useful for improving the global exploration capability of DA. Furthermore, to replace the original stochastic step, an adaptive step based on an exponential function was designed. The improved work was named dragonfly with elite opposition-based learning and exponential function adaptive steps (EOEDA). The reason behind this modification was that DA sometimes has problems solving complex optimization problems, easily falls into local optima, and has a low convergence speed. The results proved that EOEDA had better convergence accuracy and a faster convergence speed.

In reference [35], features were selected using a new chaotic dragonfly algorithm (CDA). In CDA, chaotic maps were embedded into the search iterations of DA: ten chaotic maps were utilized to modify the main movement parameters of DA and thereby improve its convergence rate and efficiency. The proposed method was used to select features in a dataset extracted from the DrugBank database, which contains 6712 drugs; in this work, 553 biotransformed drugs were used. The proposed method was utilized to assess the toxicity of hepatic drugs. The proposed model, in general, consisted of three phases: data preprocessing, feature selection, and classification. In the second phase, CDA was utilized to pick the features, and the k-NN classifier was used to measure the goodness of the selected features. Table 4 presents the initial parameter settings of CDA.

Furthermore, the examined technique was evaluated using three different experiments. The results proved that using chaotic maps with the dragonfly algorithm produced better results. Another experiment was conducted by comparing the performance of CDA with Gauss's chaotic map against seven optimization methods, namely, PSO, ABC, GWO, CSA, SCA, SSA, and CSO. Table 5 shows the parameters of the participating algorithms. The results proved that CDA provided a better score in most of the cases, although in most cases CDA provided minimum stability. On the other hand, examining the p value showed that the produced results were statistically significant, which indicates the superiority of CDA compared with the aforementioned algorithms. To examine the selected features, an SVM classifier with various kernel methods was used, and the results showed the superiority of CDA compared with the other techniques. Moreover, in terms of computational time, it was noted that the computational time was reduced when using the feature selection algorithm and that CDA reduced the time remarkably compared with the original DA.

Khadanga et al. [36] proposed a method to control frequency in an islanded AC microgrid (MG). An MG is formed by integrating various sources, such as wind power generation, solar energy generation, and other renewable energy sources. In this work, ABC was merged with DA. The idea behind the hybrid ABC/DA (HAD) was to combine the exploration and exploitation abilities of the DA with the exploration ability of ABC. The proposed technique consists of three components: the dynamic and static swarming behaviour of the DA and the two phases of the global search in ABC. Global search is accomplished by the first component (the DA phase), local search by the second component (the onlooker phase), and further global search by the third component (the modified scout bee phase).

All the parameters of the original DA and ABC were adopted in the merged technique, with one additional parameter, Prob. The Prob parameter was used to balance the dragonfly bee phase and the onlooker bee phase and to balance exploration and exploitation; it was set to 0.1 in the experiments carried out in the work, based on another confirmed experiment. HAD considers D-dimensional solutions and a population size of N. For the worst case, the time complexity of the iterative process of the hybridized technique was analyzed as follows. In the first phase, the main operation is creating the initial population, with a time complexity of O(ND). In the second phase, the stopping criterion is judged, with a time complexity of O(1). In the third phase, the value of the rand parameter is judged: if rand is smaller than Prob, the dragonfly bee phase is performed; otherwise, the onlooker bee phase is performed, followed by a modified scout bee phase; the time complexity is O(N). In the fourth phase, the solutions are updated, with a time complexity of O(N). In the fifth phase, the iterations continue by returning to the second step. Hence, the time complexity of the examined technique is O(ND). Fifty iterations were used to evaluate the performance of the hybrid technique.

In terms of convergence speed, the results proved that the proposed technique provided better performance than the original DA and ABC, and in some cases the results were comparable. In another contribution, the examined technique was used to train the multilayer perceptron (MLP) neural network. The results showed that, in terms of convergence speed, achieving the best optimal value, accuracy, and avoiding local minima, the proposed hybrid technique was a better trainer for MLPs than the original versions of the participating algorithms.

6. Applications of DA

Due to the power of DA, numerous research applications in applied sciences have been conducted, for example, in machine learning, image processing, wireless and network applications, and some other areas. In this section, we present applications of the dragonfly algorithm in the aforementioned areas. The purpose of using the DA in applications in various areas, together with the results, is shown in Table 6.

6.1. Image Processing

In [37], two crucial factors of traditional ship classification, classifier design and feature selection, were joined together, and a novel ship classification model called BDA-KELM was proposed for high-resolution synthetic aperture radar (SAR) images. The dragonfly algorithm in a binary search space was used as the search technique, and the prediction accuracy of the subsequent classifier was used as the fitness function. Wrapper-based methods need too much computational time; to overcome this drawback, the kernel extreme learning machine (KELM) was used as the base classifier, and the DA searched simultaneously for the optimal parameter set of KELM (kernel parameter and penalty factor) and the optimal feature subset among the feature candidates. Integrating feature selection and classifier design on the basis of DA kept BDA-KELM simple. The experiment was conducted on TerraSAR-X SAR imagery; 60% of the samples were used for training and the rest as the testing dataset. The same datasets were used for ship classification with the most popular classification models (KNN, Bayes, back-propagation (BP) neural network, and support vector machine (SVM)). A number of experiments were conducted using the different classification models, and for each model the evaluation metrics, including recall, precision, and F1-score, were computed; each experiment was run 10 times. The classification results showed the accuracy of the proposed model, which was 97%; thus, the classification performance of the examined technique was better than that of the techniques in the literature.

Histogram thresholding is a technique widely used for segmenting grey-scale images; however, for colour images it is not trivial because of their multilevel structure [38, 39]. For this reason, in [40], the authors tried to overcome this problem by using the dragonfly optimization algorithm for multilevel segmentation (SADFO) of colour fundus images. The multilevel segmentation problem was formulated as an optimization problem, and DA was used to solve it: the threshold values were optimized for the chromatic channels of colour fundus images by exploring the solution space effectively and finding the global best solution. Kapur's entropy was used in the proposed technique. The results proved that the proposed method produces much better results compared with segmentation after converting the image to grey scale.

In medical images, watermarking is a hot topic that provides security for the secret code embedded in the images. Hemamalini and Nagarajan [41] examined a powerful watermarking technique that depends on the weight of the pixels. To determine effective pixels for watermarking, the discrete wavelet transform (DWT) was utilized for extracting the low- and high-frequency bands. DA was used to select the effective pixels according to an objective function based on the edge level, neighbourhood strength, gradient energy, and wavelet energy (ENeGW) of the pixels. Medical retinal images were used for the experiment, and patient data were used as the watermark. A comparative analysis was carried out in terms of performance metrics, namely, the correlation coefficient and the peak signal-to-noise ratio (PSNR). It was observed that the correlation coefficient values for the examined technique were greater than those of other techniques, such as random selection, PSO, and GA. The proposed work obtained correlations of 0.936719 for random noise, 0.974479 for salt-and-pepper noise, and 0.983073 for rotational noise. Similarly, the PSNR results for the examined dragonfly technique under random noise, salt-and-pepper noise, and rotational noise were higher, namely, 62.39155 dB, 62.95912 dB, and 63.02815 dB, respectively. For the existing methods, the PSNR values with respect to these noises were 59.59593 dB, 61.48404 dB, and 59.87195 dB for random selection, 60.20927 dB, 61.63731 dB, and 60.53219 dB for PSO, and 59.62668 dB, 61.46258 dB, and 59.90074 dB for the genetic algorithm, respectively.

Połap and Woźniak [42] used the dragonfly algorithm to explore the pixels in images and assess which of them represent significant components of objects; the technique therefore works as a detection model for finding interesting features. In this work, a fitness function was modelled to work as a detection tool that selects pixels related to the shapes of objects. In each iteration, the individuals in a given population were assessed for their adaptation to the environment. In the case of images, the essential search areas may vary in many places; hence, a set of three functions was examined. The results of the examined bioinspired extraction technique showed that utilizing different components of the fitness function resulted in a different selection of key points, because the different components focus on different aspects of the image. From this work, it was concluded that the proposed technique helped in efficient and fast object extraction from images.

6.2. Machine Learning

Parameters of the SVM, such as the kernel and penalty parameters, have a great impact on the accuracy and complexity of the classification model. In order to decrease classification errors, in [43] the dragonfly optimization algorithm was used for parameter optimization of the SVM: the values of the kernel and penalty parameters were sent by the DA for training the SVM on the training data. The bounds of the search range of the SVM penalty parameter were Cmin = 0.01 and Cmax = 35000, and the bounds of the search range of σ were σmin = 0.01 and σmax = 100 [44]. In this work, the effects of the number of solutions on the computational time and the testing error rate were investigated.
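
A sketch of the corresponding fitness evaluation is given below: a candidate (C, σ) pair within the stated bounds is scored by the cross-validated misclassification rate of an RBF-kernel SVM, which is what a population-based optimizer such as DA would minimise. The mapping from σ to scikit-learn's gamma and the example dataset are illustrative assumptions, not details from [43].

import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

C_MIN, C_MAX = 0.01, 35000.0          # search bounds for the penalty parameter
SIG_MIN, SIG_MAX = 0.01, 100.0        # search bounds for the kernel width sigma

def svm_error(params, X, y):
    # misclassification rate of an RBF-SVM for a candidate (C, sigma) pair
    C, sigma = np.clip(params, [C_MIN, SIG_MIN], [C_MAX, SIG_MAX])
    clf = SVC(C=C, kernel="rbf", gamma=1.0 / (2.0 * sigma**2))
    return 1.0 - cross_val_score(clf, X, y, cv=10).mean()   # 10-fold CV, as in the text

X, y = load_iris(return_X_y=True)
print(svm_error([10.0, 1.0], X, y))   # one candidate; DA would evaluate a whole swarm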

As shown in Figures 2(a) and 2(b), increasing the number of solutions decreased the misclassification rate, although the computational time increased. In addition to the number of dragonflies, the number of generations also affected the testing error rate and the computational time: increasing the number of generations reduced the error rate up to a point, after which further increases did not change the misclassification rate, while the computational time kept increasing with the number of generations.

For testing the datasets, the nonparametric Wilcoxon signed-rank test was used in this work, and 10-fold cross-validation was used to estimate the error rate.

Furthermore, it was shown that the classification error rates of the DA + SVM algorithm were lower than those of the PSO + SVM algorithm [45]. The reason is that the DA parameters could be altered iteratively, whereas the PSO parameters were fixed and had to be set in advance; thus, DA automatically made the best trade-off between exploration and exploitation. Furthermore, in comparison with the GA + SVM algorithm [46], the DA + SVM produced lower classification errors in most cases.

The data for Figures 2(a) and 2(b) are taken from [43] and are shown here as figures.

Hariharan et al. [47] proposed a technique combining wavelet packet-based features with feature selection based on an improved binary dragonfly optimization (IBDFO) algorithm for classifying various types of infant cry signals. Cry signals were obtained from two different databases, each containing a number of samples for different reasons for crying, as shown in Table 7. Mel Frequency Cepstral Coefficient (MFCC) features (16 features), Linear Predictive Coding (LPC) based cepstral features (56 features), and wavelet packet transform energy and nonlinear entropy features (496 features) were extracted. The IBDFO algorithm was used to overcome the dimensionality problems and choose the most salient features. A wrapper-based feature selection technique was proposed, and various types of infant cry were classified using an extreme learning machine kernel classifier with all features and with the highly informative features. A new updating mechanism and elitism were added to the basic binary dragonfly optimization algorithm (BDFO) to enhance its performance when optimizing the crying features. It was noted that, in the two-class experiment, the recognition accuracy of IBDFO improved considerably compared with BDFO.

It was also found that the results achieved by IBDFO in the seven-class experiments were better than those achieved by the other techniques (BDFO, GA, and PSO). The results indicated that the combination of feature extraction and selection techniques provided better classification accuracy.

In reference [48], a DA-based artificial neural network (ANN) model was utilized to estimate India's primary fuel demand. Two multilayer feedforward networks were used; each network comprised input, hidden, and output layers and was trained with DA. The networks also take socioeconomic indicators as inputs, for example, the population and the per capita gross domestic product (GDP). The connection weights of the ANN models were optimized by searching the problem space effectively to find the global best solution. The model proposed in this work requires the forecast year as input and then predicts the primary fuel demands. The forecast up to the year 2025 was compared with a regression model, and the forecast accuracy was calculated using the mean absolute percentage error (MAPE). This work showed that the proposed model was more accurate than the existing regression model.

Wavelength selection is a notable preprocessing issue in near-infrared (NIR) spectroscopy analysis and modelling. Chen and Wang [49] examined a new technique for wavelength selection based on the binary dragonfly algorithm, consisting of three typical frameworks: single-BDA, multi-BDA, and ensemble learning-based BDA settings. It was discovered that, for this problem, using the single-BDA could cause instability, whereas both the multi-BDA and the ensemble learning-based BDA techniques could improve stability; the key technical point was to reduce the randomization inherent in the BDA. In addition, the computational complexity of the multi-BDA was higher than that of the ensemble learning-based BDA, mainly due to the computation of the fitness function. For performance validation of the abovementioned techniques, the public gasoline NIR spectroscopy dataset was utilized, with the aim of finding the most representative wavelengths for predicting the octane content. The parameter values of the BDA are listed in Table 8. The results proved that, like traditional swarm optimization techniques, the BDA can be used to deal with the wavelength selection problem.

The difference between the multi-BDA and the single-BDA is that a voting strategy is used to aggregate the wavelength selection results of the multiple-time search. From this experiment, it was found that the number of selected wavelengths could be controlled by adjusting the vote percentage value (VP). It was obvious that selecting too few wavelengths would reduce the performance of the quantitative analysis model.
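
The voting strategy can be sketched as follows, assuming the binary selection masks of several independent BDA runs are available; a wavelength is kept if it was selected in at least a fraction VP of the runs.

import numpy as np

def aggregate_by_vote(selection_masks, vp=0.6):
    # selection_masks: (runs x wavelengths) binary matrix, one row per BDA run;
    # returns the indices of wavelengths selected in at least a fraction vp of runs
    votes = np.asarray(selection_masks).mean(axis=0)
    return np.where(votes >= vp)[0]

# three toy runs over six wavelengths; lowering vp keeps more wavelengths
runs = [[1, 0, 1, 1, 0, 0],
        [1, 0, 1, 0, 0, 1],
        [1, 1, 1, 1, 0, 0]]
print(aggregate_by_vote(runs, vp=0.6))   # -> wavelengths 0, 2, and 3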

Moreover, in the wavelength selection using the ensemble learning method and the BDA technique, a series of bootstrap sampling generators was added before the BDA, which is the main difference between this technique and the multi-BDA. In this work, it was shown that, although the sample size was reduced by bootstrap sampling, the performance of the quantitative analysis models with the selected features was close to that of the multi-BDA method; thus, this technique helped in reducing the computational complexity.

Designing infinite impulse response (IIR) filters conventionally depends on selecting the filter parameters among a huge number of possible combinations. The system identification problem requires adjusting the adaptive IIR filter coefficients with an algorithm until the adaptive filter is equivalent to the examined unidentified system. The filter design problem thus amounts to discovering the optimal set of parameters of the unknown model so that it closely matches the parameters of the benchmark filter. In reference [50], DA was used to design IIR filters instead of gradient-based methods such as least mean squares (LMS). Utilizing the DA prevented the search from getting trapped in local optima, estimated the coefficients close to their actual values, and found the minimum mean square error. Furthermore, the results proved the superiority of DA over PSO, CSO, and BA for this problem.

The Internet of vehicles (IoV) is used to connect vehicles with each other. As vehicular nodes are usually moving, the topology changes periodically, which causes major issues such as scalability, shortest-path routing, and dynamic topology changes. Clustering is one of the solutions to such problems. Aadil et al. [51] proposed a new method called the dragonfly-based clustering algorithm (CAVDO) to address the scalability of the IoV topology. Furthermore, a mobility-aware dynamic transmission range algorithm (MA-DTR) was used to adapt the transmission range based on traffic density. The proposed work was compared with comprehensive learning PSO (CLPSO) and ant colony optimization, and the results proved that in a number of cases CAVDO performed better: it performed best at high density, average at medium density, and worst at low density, whereas CLPSO performed well only at very low density.

6.3. Wireless and Network

Daely and Shin [52] utilized the dragonfly algorithm in two scenarios: (a) to predict the location of randomly deployed nodes in a designated area and (b) to localize nodes under different noise percentages of the distance measurement (Pn). In both scenarios, the localization was simulated using PSO and DA. In the first scenario, as shown in Figure 3, the simulation results showed that for range-based localization with varying Pn the dragonfly algorithm produced fewer errors. In the second scenario, different numbers of unknown nodes were used for the localization; the simulation results showed that the distances between the real and approximated nodes obtained by DA and PSO increased with increasing Pn (see Figure 4).

In a radio frequency identification (RFID) network, improving energy efficiency requires maximizing the network lifetime by minimizing the energy used by the RFID readers and balancing the energy use of every reader in the network. To enlarge the RFID network lifetime, DA was used in reference [53] to develop a centralized, protocol-based, energy-efficient clustering scheme. A high-energy node was used as the cluster head, which consumed less energy while transmitting aggregated data to the base station. The residual energy required to receive data from all the readers was defined as a threshold value, and readers with residual energy higher than the threshold became eligible cluster heads.

The data for both Figures 3 and 4 are taken from [52].

An optimal cluster head among all the eligible cluster heads was chosen using dragonfly clustering. The proposed work decreased cluster breakage by choosing the cluster head that had similar mobility but higher leftover energy; avoiding cluster breakage decreased energy consumption. In addition, data redundancy was avoided at the cluster head by aggregating the data. Hence, compared with the existing methods, the efficiency of the RFID network was increased. The steps for cluster-head selection and cluster formation are described below (see the sketch after this list):
Step 1: initialize the tags and readers Ri in the network
Step 2: assign potential scores (energy and mobility) to each tag node and reader
Step 3: find the threshold value by sending the readers' energy levels to the base station
Step 4: if Ri > threshold value, go to step 5; else go to step 6
Step 5: the reader is marked as an eligible cluster head
Step 6: the reader remains an ordinary reader in the network
Step 7: separation, alignment, and cohesion are calculated for each eligible head in the network using equations (1)–(3), respectively
Step 8: add the values found in step 7
Step 9: select the reader with the highest value from step 8 as the "optimal head"
Step 10: otherwise the readers "become ordinary readers" in the network
Step 11: cluster formation ends
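
The following is a compressed, purely structural sketch of steps 3–9 above: readers with residual energy above a threshold become eligible heads, and among them the one with the largest combined separation/alignment/cohesion magnitude is picked. The threshold and the way the swarm quantities map onto network readers are assumptions for illustration, not details from [53].

import numpy as np

def select_cluster_head(energies, positions, velocities):
    energies = np.asarray(energies, dtype=float)
    threshold = energies.mean()                      # stand-in for the base-station threshold (step 3)
    eligible = np.where(energies > threshold)[0]     # steps 4-5: readers above the threshold
    if eligible.size == 0:
        return None                                  # step 6: all remain ordinary readers
    scores = []
    for i in eligible:                               # steps 7-8: sum of the swarm quantities
        S = -np.sum(positions[i] - positions, axis=0)
        A = velocities.mean(axis=0)
        C = positions.mean(axis=0) - positions[i]
        scores.append(np.linalg.norm(S) + np.linalg.norm(A) + np.linalg.norm(C))
    return int(eligible[int(np.argmax(scores))])     # step 9: highest value becomes the optimal head

rng = np.random.default_rng(1)
print(select_cluster_head(rng.random(6), rng.random((6, 2)), rng.random((6, 2))))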

In the cloud environment, keeping the processing of multiple tasks stable is a difficult issue; therefore, a load balancing method is required to allocate tasks to virtual machines (VMs) without influencing the system's performance. Kumar et al. [54] provided a load balancing method named the fractional dragonfly load balancing algorithm (FDLA). In the proposed work, two selection probabilities and a fractional DA were examined. FDLA was implemented by integrating fractional calculus (FC) into the position-updating process of DA; equation (15) is the position-updating equation of the proposed work, where Y(t − 1), Y(t − 2), and Y(t − 3) represent the individual positions at iterations (t − 1), (t − 2), and (t − 3), respectively, and ΔY represents the step vector. The examined model used certain parameters of the physical machines (PMs) and VMs to select the tasks to be reallocated among the VMs for load balancing. Two probabilities were used to select the tasks: the task selection probability (TSP) and the VM selection probability (VSP). The objective function of the examined technique was based on three objectives, namely, the load of the VMs, the task migration cost, and the capacity of the VMs, and the aim was to maximize the solution's fitness. The values of the parameters used in the proposed work are shown in Table 9.

Three other techniques were compared with the examined technique in this paper: PSO [55], honey bee behaviour-inspired load balancing (HBB-LB) [56], and DA (applied instead of FDLA for balancing the load).

It was observed that the examined work provided the minimum load while allocating 14 tasks. Thus, the proposed FDLA obtained better performance than the other techniques, and the number of allocated tasks was reduced from 27 to 14 using FDLA compared with PSO and HBB-LB.

7. The Comparison between DA and Other Algorithms

In reference [57], the whale optimization algorithm (WOA), moth-flame optimization (MFO), and DA were compared. In that reference, these algorithms were implemented to find the optimal sizing and location of distributed generation in radial distribution systems to reduce the power loss in the network. For this, multiple DG units were allocated simultaneously and analyzed by considering two load power factors, i.e., unity and optimal. The 69- and 119-bus systems were used to test the algorithms, and four different cases were used for the simulations, as shown in Table 10.

The performance of the aforementioned optimization algorithms for cases 1, 2, 3, and 4 showed the superiority of the MFO algorithm compared with WOA and DA: it performed better and converged earlier for the mentioned objective functions.

Parmar and Darji [58] used DA, MFO, and WOA to optimize a nonlinear and stochastic optimization problem. The problem addressed in this work was finding the optimum allocation of capacity pricing and capacity between two interconnected electricity markets with nonidentical designs: one was an energy-only market, while the other was a capacity-plus-energy market. The objective was for the generation companies (GenCos) to allocate capacity optimally in a way that increases overall revenue; likewise, the independent system operator (ISO) acquires energy and capacity so as to reduce its purchase cost. In this paper, it was found that the maximum value of the GenCos' revenue, the capacity price, and the minimum value of the ISO's purchase cost increased with the recall cost, the recall probability, and the load forecasting error. Furthermore, it was concluded that the different algorithms required different numbers of iterations to converge, and it is worth mentioning that none of the algorithms proved superior on the examined problem.

Wongsinlatam and Buchitchon [59] focused on court case assignment, which has a great impact on improving the judicial system’s efficiency. The capability of the judicial system highly depends on the time and the efficiency with which court cases are processed. In this work, a mixed-integer linear programming (MILP) model was used to examine the case assignment issue in the justice court. The assignment problem comprised N cases to be assigned to M teams, where each team had the ability to handle all the cases. Nevertheless, due to case specification, personal capability, and work on other cases at the same time, the teams needed different amounts of time, and the aim was to maximize or minimize the assignment problem’s objective. To find the assignment problem’s optimal solution, DA and FA were used. Two problems with uniformly distributed parameters were examined: problem one (P1) with effectiveness rate μi = (1, 90), lower bound Li = (1, 30), and upper bound Ui = (1, 90); and problem two (P2) with effectiveness rate μi = (1, 90), lower bound Li = (1, 60), and upper bound Ui = (1, 90). The produced results showed that, compared with FA, DA required less CPU time to find the optimal solution and achieved a lower average percent deviation for maximizing effectiveness.
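
As a rough illustration of how such an assignment problem could be encoded for a metaheuristic such as DA or FA, the sketch below draws a random instance with uniformly distributed effectiveness rates and evaluates one candidate assignment of cases to teams; the instance generator, variable names, and aggregation rule are illustrative assumptions, not the exact formulation of [59].

```python
import numpy as np

rng = np.random.default_rng(0)

def random_instance(n_cases=50, n_teams=3, mu_range=(1, 90)):
    # Effectiveness rate of each team on each case, drawn uniformly
    # (illustrative stand-in for the uniformly distributed parameters of P1/P2).
    return rng.uniform(*mu_range, size=(n_cases, n_teams))

def fitness(assignment, effectiveness):
    # assignment[i] = index of the team that handles case i;
    # the fitness to maximize is the total effectiveness of the assignment.
    return effectiveness[np.arange(len(assignment)), assignment].sum()

eff = random_instance()
candidate = rng.integers(0, eff.shape[1], size=eff.shape[0])
print(fitness(candidate, eff))
```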

The results showed that, for 50 cases with 3, 4, and 5 justice teams (experimental parameters P1 (50 : 3, 4, 5) and P2 (50 : 3, 4, 5)), the results of DA were superior to those of FA.

In reference [60], the DA, moth-flame algorithm (MFA), and GWO techniques were examined to optimize the siting of capacitors in different radial distribution systems (RDSs). The loss sensitivity factor was considered to determine the candidate buses. To validate the efficiency and effectiveness of the examined optimization techniques, 33-, 69-, and 118-bus RDSs were considered. The main aim of this study was to minimize power loss and total cost while enhancing the voltage profile. The results of the aforementioned optimization techniques were later compared with PSO to prove the superiority of the techniques. To ensure fairness in comparing the results, the same initial population was selected for MFA, DA, GWO, and PSO for the 33-bus distribution system. The results proved that the DA-, GWO-, and MFA-based optimization methods were much superior to the PSO-based technique in terms of a smaller number of iterations and shorter convergence time for the examined study.

Furthermore, the MFA-, DA-, and GWO-based optimization showed a higher convergence rate for the 69-bus distribution system case. PSO was able to determine the optimal sizing and siting of the capacitors for the 33-bus system, but it could not accurately find the optimal solution for the 69-bus system. The metrics used to evaluate the performance of the algorithms are shown in Table 11.

Additionally, the three algorithms DA, GWO, and MFA were evaluated using statistical tests. The parameter settings were implemented as in the original references. Moreover, 35 iterations were used, the population size was 20, and each algorithm was run 30 times for each case. From the evaluation results, it was observed that DA, GWO, and MFA all had a tolerable root mean square error (RMSE); however, compared with the other two techniques, GWO produced the best values. Additionally, the stability of DA, GWO, and MFA was confirmed by the values of the standard deviation (STD).

8. Advantages and Disadvantages of DA

DA is one of the most recently developed algorithms in the area. As shown in the literature, it has been used to optimize various problems in different areas. One of the reasons that this algorithm has been able to contribute to different applications is that it is very simple and easy to implement, and it suits applications in many areas. Furthermore, selecting the predators from the worst (most populated) hypersphere of the archive prevents the artificial dragonflies from searching around nonpromising areas. Moreover, having few parameters to tune is another advantage of DA, and the convergence time of the algorithm is reasonable. Compared with other optimization algorithms, it is more robust, and it can easily be merged with other algorithms.

On the other hand, it does not have an internal memory, which can lead to premature convergence to a local optimum. This disadvantage was overcome in reference [26] by proposing a novel Memory-based Hybrid Dragonfly Algorithm (MHDA). Furthermore, DA easily becomes stuck in local optima because it has a high exploitation rate. The Levy flight mechanism was utilized to model the random flying behaviour of dragonflies in nature. The disadvantages of the Levy flight are overflowing the search area and interrupting the random flights because of its large search steps.
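
The heavy-tailed nature of the Levy step explains both behaviours. A minimal sketch of a Mantegna-style Levy step, with β = 1.5 and the 0.01 scaling factor assumed here, is given below; the exact constants used in the original DA implementation may differ.

```python
import math
import numpy as np

def levy_step(dim, beta=1.5):
    """Heavy-tailed Levy-flight step (Mantegna-style sketch; beta and scale are assumptions)."""
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
             / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    r1 = np.random.randn(dim) * sigma
    r2 = np.random.randn(dim)
    # Occasional very large steps are what may cause the search area to be overflowed.
    return 0.01 * r1 / np.abs(r2) ** (1 / beta)
```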

9. Results and Evaluations

In the original research work, three groups of classical benchmark functions were used to test the performance of the algorithm: unimodal (F1–F7), multimodal (F8–F13), and composite test functions (F14–F23). Furthermore, the Wilcoxon rank-sum test was used to show the statistical significance of the results, as shown in Table 12. The results of F1–F19 in Tables 12 and 13 are taken from the original research work, whereas the authors of this research work tested both PSO and DA to obtain the results of F20–F23 in both tables. DE and FA, however, were tested by the authors on all the benchmark functions (F1–F23).
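
For reference, two widely used representatives of the unimodal and multimodal groups can be written as follows; the sphere and Rastrigin functions are chosen here as illustrative examples, and the exact definitions used in the original study are those listed in its tables.

```python
import numpy as np

def sphere(x):
    # Unimodal: a single global minimum at the origin; mainly tests exploitation.
    return np.sum(x ** 2)

def rastrigin(x):
    # Multimodal: many local minima; mainly tests exploration.
    return 10 * x.size + np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x))

x = np.zeros(30)
print(sphere(x), rastrigin(x))   # both 0.0 at the global optimum
```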

As shown in Table 13, for the unimodal test functions, the results of FA and DE in general outperformed those of DA and PSO. This indicates that FA and DE have superior exploitation and greater convergence speed compared to DA and PSO. On the other hand, in comparison with PSO, DA showed superior results. Furthermore, the results of the references reviewed in this work confirm the high convergence speed of DA. For example, in reference [59], DA and FA were used to optimize the same problem, and the results of DA were superior to those of FA.

In addition, the results of the multimodal test functions demonstrated the high exploration of DA, which helps in covering the search space. Nonetheless, the results of DE were, in general, better for this group of test functions.

For the composite test functions, FA showed superior results compared to the other algorithms. DA took third place among the four: it was better than PSO and worse than the other two. This means that the balance of exploration and exploitation of the FA algorithm is superior; the reason is that the exploration rate of DA is higher than its exploitation rate.

For the statistical tests, the results of DA and PSO were used. As shown in Table 12, the p values for the unimodal test functions are less than 0.05, which means that the results are statistically significant. For most of the multimodal test functions, as shown in the table, the results were likewise statistically significant, with p values less than 0.05. Moreover, the statistical results of PSO and DA for most of the composite test functions were also significant, with p values less than 0.05.
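
Such a rank-sum comparison can be reproduced, for instance, with SciPy’s implementation of the Wilcoxon rank-sum test; the arrays below are placeholder per-run best fitness values and not the values reported in Table 12.

```python
import numpy as np
from scipy.stats import ranksums

# Placeholder per-run best fitness values for DA and PSO on one test function.
da_results = np.array([0.012, 0.015, 0.011, 0.017, 0.013, 0.014, 0.016, 0.012, 0.015, 0.013])
pso_results = np.array([0.041, 0.038, 0.045, 0.036, 0.040, 0.043, 0.039, 0.042, 0.037, 0.044])

stat, p_value = ranksums(da_results, pso_results)
# A p value below 0.05 indicates that the difference between the two result
# sets is statistically significant at the 5% level.
print(stat, p_value)
```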

In the original research work, DA was not evaluated for large-scale optimization problems using the CEC-C06 benchmark functions. For most real-world problems, time is not as important as providing an accurate answer. In addition, in reality, users run an algorithm for more than one trial; that is, they try to find the most successful technique for their scenario regardless of time. The 100-digit challenge, also known as the “CEC-C06 benchmark test functions,” examines this aspect of the numerical optimization process [61]. In this survey, the algorithm is tested using the CEC-C06 2019 test functions, which are shown in Table 14. All the CEC test functions are scalable. Furthermore, CEC04 to CEC10 are shifted and rotated and are set as minimization problems with a dimension of 10, whereas CEC01 to CEC03 are neither shifted nor rotated and have different dimensions.

Since in the original paper the results of DA were compared with PSO, the results of DA for the CEC-C06 test functions are likewise compared with PSO here. 100 iterations and 30 agents were used for both algorithms, and the results are presented in Table 15. As shown, the results of both algorithms are comparable on CEC03 and CEC10. However, DA provided better results on CEC01, CEC02, and CEC06; for the rest of the CEC functions, the results of PSO were better.
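
The comparison setup can be reproduced, in outline, with a small harness that runs each optimizer with 30 agents for 100 iterations on every CEC-C06 2019 function and records the best fitness found. In this sketch, `run_da`, `run_pso`, and `cec2019_suite` are assumed user-supplied implementations and are not provided here.

```python
import numpy as np

def compare(optimizers, functions, n_agents=30, n_iterations=100, n_runs=1, seed=0):
    """Run every optimizer on every benchmark and collect the best fitness found.

    optimizers : dict name -> callable(func, dim, bounds, n_agents, n_iterations, rng)
    functions  : dict name -> (func, dim, bounds)
    """
    rng = np.random.default_rng(seed)
    results = {}
    for opt_name, optimize in optimizers.items():
        for func_name, (func, dim, bounds) in functions.items():
            scores = [optimize(func, dim, bounds, n_agents, n_iterations, rng)
                      for _ in range(n_runs)]
            # All CEC-C06 2019 functions are treated as minimization problems.
            results[(opt_name, func_name)] = min(scores)
    return results

# Usage sketch (run_da, run_pso, and cec2019_suite are assumed to exist):
# table_15 = compare({"DA": run_da, "PSO": run_pso}, cec2019_suite)
```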

The results of the CEC-C06 2019 benchmark functions showed that the DA algorithm can be applied to large-scale optimization problems.

10. Discussion and Future Works

Unlike evolutionary algorithms and similar to other swarming techniques, the DA algorithm has few parameters to adjust, which makes it easier to implement. It provides good optimization capability. As shown in the aforementioned references, DA has become a strong metaheuristic algorithm for addressing complex problems in most cases, and it provides good convergence towards global optima. Furthermore, the superiority and effectiveness of this algorithm were demonstrated by the applications that utilized it.

DA uses a low cohesion weight and a high alignment weight for exploring the search space; for exploiting the search space, on the other hand, a low alignment weight and a high cohesion weight are used. Another technique to balance exploration and exploitation is tuning the swarming weights (s, a, c, f, e, and w) adaptively during the optimization process. To switch between exploration and exploitation, the neighbourhood radii, enlarged proportionally to the iteration number, can also be used. For small- and medium-scale problems DA can usually produce good results; for large-scale problems, however, it needs more effort.
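
One way to realize this adaptive tuning is sketched below, loosely following the iteration-dependent schedule used in published DA implementations; the specific constants (0.1, 0.9, 0.4, the factor of 2, and the radius formula) are assumptions for illustration rather than a definitive setting.

```python
import numpy as np

def adaptive_parameters(iteration, max_iterations, ub, lb):
    """Illustrative iteration-dependent DA parameters (constants are assumptions)."""
    # Scale factor that shrinks to zero over roughly the first half of the run.
    ratio = max(0.1 - iteration * (0.1 / (max_iterations / 2)), 0.0)
    w = 0.9 - iteration * ((0.9 - 0.4) / max_iterations)   # inertia weight
    s = 2 * np.random.rand() * ratio                        # separation weight
    a = 2 * np.random.rand() * ratio                        # alignment weight
    c = 2 * np.random.rand() * ratio                        # cohesion weight
    f = 2 * np.random.rand()                                # food attraction weight
    e = ratio                                               # enemy distraction weight
    # Neighbourhood radius grows with the iteration count, moving the swarm
    # from exploration towards exploitation.
    radius = (ub - lb) / 4 + (ub - lb) * (iteration / max_iterations) * 2
    return w, s, a, c, f, e, radius
```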

One of the difficulties that users may face with DA is that its position-updating rule is not correlated with the population centroid. This may cause trapping into local optima, difficulty in finding the global optimum, and solutions with low accuracy. As mentioned in [18], the performance of the cuckoo search algorithm was improved in a number of references by changing the Levy flight mechanism; hence, testing other strategies instead of the Levy flight in DA is highly recommended.

Moreover, the exploration and exploitation of the DA algorithm are mainly determined by alignment, separation, cohesion, attraction towards food sources, and distraction from enemy sources. This technique improves the diversity of solutions and strengthens the exploration of the algorithm. Nevertheless, the performance of the algorithm decreases when many exploration and exploitation operators are used, because they increase the convergence time and the risk of trapping into local optima. As discussed in [33], for complex optimization problems, DA easily falls into local optima and its convergence speed is low. However, for simple problems, the static swarming behaviour of DA increases the exploration level of the algorithm and helps in avoiding local optima. Furthermore, increasing the number of iterations results in a higher degree of exploitation and increases the accuracy in finding an approximate global optimum.

Additionally, the evaluation of the algorithm in the previous section showed that DA may have problems in balancing exploration and exploitation in some cases, because its exploration rate is high. In the early steps of the optimization process a high rate of exploration is desirable; however, it should decrease in the final steps of the process while the exploitation rate increases. On the other hand, the results of the unimodal test functions and of most of the reviewed research works confirm the superior convergence of the algorithm for simple to medium problems.

To overcome the bottlenecks of the algorithm, it has been hybridized with other algorithms; for instance, MHDA was proposed to overcome the problem of premature convergence to local optima. Despite the fact that DA and its hybridized versions have been able to provide good results in solving a number of complex optimization problems, some drawbacks still exist. In the dragonfly algorithm, attraction towards food and distraction from enemies provide a high capacity of exploration and exploitation during optimization. Nevertheless, the position-updating rule of DA is only weakly correlated with the centroid of the population from the previous generation, which may result in solutions with low accuracy, premature convergence to local optima, and difficulty in finding the global optimum. Hence, research studies are encouraged to find new techniques to update the positions of dragonflies. Another point that would help in improving the algorithm is balancing its exploration and exploitation phases, which would prevent the algorithm from becoming trapped in local optima. Furthermore, combining new search techniques with DA and examining new transfer functions for the binary DA are highly recommended. We also recommend integrating DA with other methods to dynamically tune the parameters during the optimization process, as this would provide a better balance between exploration and exploitation.

11. Conclusions

In this paper, one of the most recently developed algorithms was reviewed. The different variants of the algorithm, including the versions hybridized with other algorithms, were discussed. Furthermore, the convergence, exploration, and exploitation of the algorithm were addressed. In addition, a number of optimization problems and applications that used DA were reviewed. During the research, it was found that DA in most cases has a good ability to converge towards the global optimum and that it is able to optimize different and complex problems in various areas. Additionally, the results of the benchmark functions showed that DA has a great ability to solve simple to medium problems. However, for complex problems, it may face some difficulties during the optimization; these difficulties were overcome by merging the algorithm with other algorithms. Moreover, it was also shown that the balance of exploration and exploitation of the algorithm is not good in general, although in some applications this balance was reasonable. For example, for optimizing the parameters in the stress analysis of perforated orthotropic plates, DA outperformed GA and PSO and converged earlier, since it has higher exploration and exploitation rates. Additionally, DA was successfully used to develop a new method for solving economic dispatch incorporating solar energy; the results proved that the economy could be improved while the system loss was minimized at the same time. In a binary search space, DA was successfully used as a searching technique, and compared to GA and PSO, BDA showed a better searching ability and the ability to select features with more information. Furthermore, for multilevel segmentation of colour fundus images, DA showed its superiority over the other techniques, which required converting the image to grey scale before performing the segmentation.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.