Abstract

Artificial intelligence (AI) has evolved into a powerful tool that has widespread applications in computer vision such as computer-aided inspection, industrial control systems, and robot navigation. Monitoring the condition of machinery and mechanical components for the presence of faults with the aid of image-based automated analysis is one major application of computer vision. Diagnosing machinery faults from images can be made feasible with the adoption of deep learning and machine learning techniques. The primary objective of this study is to detect malfunctions in photovoltaic (PV) modules by utilizing a combination of deep learning and machine learning methodologies, with the assistance of RGB images captured via unmanned aerial vehicles. Six test conditions of PV modules, namely, good panel, snail trail, delamination, glass breakage, discoloration, and burn marks, were considered in the study. The overall experimentation was carried out in two phases: (i) deep learning phase and (ii) machine learning phase. In the initial deep learning phase, the final fully connected layer of six pretrained networks, namely, DenseNet-201, VGG19, ResNet-50, GoogLeNet, VGG16, and AlexNet, was utilized to extract PVM image features. During the machine learning phase, feature selection from the extracted features was carried out using the J48 decision tree algorithm. After feature selection, three families of classifiers, namely, tree-based, Bayes-based, and lazy-based, were applied to determine the best feature extractor-classifier pair. The combination of DenseNet-201 features with the k-nearest neighbour (IBK) classifier produced the highest overall classification accuracy of 100.00% among all the pretrained network features and classifiers considered.

1. Introduction

Photovoltaic energy production is one of the most favourable alternatives to conventional fossil fuels in the near future. Solar energy is acclaimed to be a clean and ecofriendly renewable source of energy. Pollution-free operation, abundant availability, and wide accessibility are factors that make solar energy preferred over other sources of renewable energy [1]. Solar energy captured by photovoltaic modules can be widely used in the production of sustainable energy. Globally, photovoltaic modules (PVM) have seen an increase in installation over the last three decades due to the drastic drop in manufacturing prices. Such a sudden drop in prices has grabbed the attention of capitalists, industrialists, and the scientific community [2]. However, numerous challenges persist in the PV industry, such as (i) the initial installation cost, (ii) the reliability of the PVM, (iii) the induction of various faults, and (iv) varying climatic conditions [3]. The change in outdoor climatic conditions can lead to various faults in the operating PVM, like glass breakage, burn marks, discoloration, snail trail, and delamination [3]. These faults hamper the lifespan and reliability of PVM and reduce their power output. According to recent reports, PVM suffer an annual power output loss of 18.9% due to the aforementioned faults [4]. To alleviate such scenarios, fault diagnosis is required to identify various faults and reduce their negative impacts. Timely and precise PVM fault diagnosis can extend the lifespan and improve the power output of PVM. Conventionally, trained professionals visually inspected PVM for faults, which was time-consuming, labour-intensive, and infeasible for installations spread over large geographical regions. However, with the evolution in technology, numerous nondestructive fault diagnosis techniques have come into existence that include electroluminescence imaging, photoluminescence imaging, electrical measurements, and thermographic assessments [5].

Unmanned aerial vehicles (UAVs) are now being used by many industrialists and capitalists to inspect the occurrence of faults in PVM. Due to technological advancements, UAVs have extended their usage to several fields like large-scale inspections, surveillance, traffic monitoring, photography, and weather monitoring. The application of UAVs offers several advantages such as reduced human interference, reduced time consumption, and ecofriendliness. Various applications of UAVs in fault diagnosis have been discussed in earlier publications. Fault diagnosis in PVM has been performed by equipping UAVs with thermal cameras [6]. Due to the high speed of the UAV and the poor image resolution of the thermal cameras, such a system is restricted in its ability to detect hotspots at higher temperatures. The formation of hot spots can be considered a sign of the existence of corrosion, microcracks, short circuits, partial shading, and solder bond failure. Since thermal pictures are made up of thermal radiation information (displayed as pseudo-colour images), it can be difficult to identify the above-mentioned fault occurrences from thermal images [7]. In view of the shortcomings listed above, thermal imaging cameras are substituted with high-resolution digital cameras so that even minor faults can be precisely spotted. High-resolution cameras mounted on UAVs collect true-colour images of the photovoltaic modules that help in identifying apparent visible defects like glass breakage, burn marks, discoloration, snail trail, and delamination. Several image processing methods have been proposed by various authors to identify PVM faults, namely, aerial triangulation [8], edge detection [9], extraction of correlated textural features [10], and image mosaicking [11]. However, the efficiency of all the above-mentioned techniques depends heavily on the quality of the UAV-acquired photos. The resolution of the collected UAV images may also be negatively impacted by a number of additional factors such as reflection, haze, wind speed, and vehicle vibration.

Convolutional neural networks (CNN) have demonstrated superior accuracy in several defect diagnostic applications while operating on low-resolution pictures. CNN are regarded as a potential method for extracting and classifying visual information in computer vision and serve as the building blocks of deep learning (DL). Owing to their superior image classification and feature extraction capabilities, deep learning methods are applied in various sectors including health care, robotics, and the automation industry [12]. Deep learning algorithms have outperformed more conventional approaches in classification problems. Numerous studies have shown how CNN can be applied to identify faults in PVM. The authors in [13] used deep learning-based intelligent fault pattern identification to detect and categorise five test conditions, employing the support vector machine (SVM) algorithm as the classification tool. With the help of a cascaded autoencoder (CASAE) architecture, metallic surface flaws were identified by the authors in [14]. The architecture was used to segment and locate the defect regions, while a compact CNN was used to classify the faults into the appropriate classes. CNN was used by Akram et al. to automatically identify flaws in electroluminescence images. Using a publicly available collection of images, the authors obtained a classification accuracy of 93.02% [15]. Deep learning techniques were used by Li et al. to perform fault diagnostics on PVM in large-scale PV farms. Deep learning was used to extract features from PVM images, which were then classified further [16].

According to recent research, there are two ways to create a CNN model architecture: building it from scratch or utilizing pretrained models available online. Recent research states that pretrained networks surpass models developed from scratch due to the following reasons: (i) pretrained models are developed by training with large volumes of data, (ii) ease of access in public repositories, (iii) customized usage, (iv) extensive compatibility, and (v) reliable result generation. Among the numerous pretrained network models, AlexNet [17], DenseNet-201 [18], GoogLeNet [19], ResNet-50 [20], VGG16 [21], and VGG19 [22] are commonly used in image classification. Nevertheless, the size of the training dataset and the duration of the training period affect the output of DL models. Therefore, larger datasets must be used so that CNN models learn the image characteristics properly. Dataset acquisition and the scarcity of data aligned towards a specific application can be challenging. One method for addressing such issues is dataset expansion through data augmentation. Many research papers use augmentation based on generative adversarial networks (GANs) to artificially expand datasets and reduce data shortages [23]. However, GAN-based algorithms include many convolutional layers, which can require more hardware and training time. Rapid and simple data augmentation techniques can therefore be an effective alternative.

Several machine learning (ML) techniques have been used to categorise PVM defects in the literature. Observations state that ML algorithms can deliver accurate results on numerical data with less training time and complexity [24]. The shortcomings of individual techniques can be reduced, and their strengths can be combined, to provide more reliable results. Such techniques, termed ensemble methodologies, are gaining increasing interest among researchers. Traditionally, I-V characteristics acquired at photovoltaic plants or different analytical models were used to obtain numerical data for ensemble learning models [25]. However, more research is needed on ensemble learning algorithms that leverage features extracted from images. Recent years have seen the introduction of fusing ML and DL techniques to achieve higher levels of accuracy. Some studies that adopted the aforementioned approach are as follows. Kaplan et al. detected Alzheimer's disease from CT and MR images with the aid of exemplar histogram-based features and eight different classifiers [26]. The authors used neighbourhood component analysis for feature selection and eight machine learning classifiers for classification. Similarly, Baygin et al. identified kidney stones automatically through exemplar Darknet19 features extracted from coronal CT images. A kNN classifier was used to classify the exemplar features and returned an accuracy of 99.22% [27]. In another study, Dogan et al. adopted an ensemble of pretrained residual networks to detect fire scenarios accurately. The model achieved accuracies of 98.91% and 99.15% using the SVM classifier [28]. The aforementioned studies provided the motivation for the current research. The following findings were inferred from the literature study:

(1) The use of thermal or electroluminescence images for fault classification has been the subject of major research works. However, the use of visible colour images has been sparse and only briefly considered. PVM thermographic analyses are only capable of identifying the existence of hot spots in PV modules.
(2) The literature states that deep learning- and machine learning-based approaches have been used to detect and categorise defects in PVM. However, the efficiency and potential of a combined strategy utilising deep learning and machine learning have been minimally investigated.
(3) Data collection is considered a major drawback, since public sources are unavailable and datasets are scarce.
(4) Too many convolutional layers in GAN-based data augmentation result in greater training time and hardware requirements. Hence, an alternative to GAN is necessary to artificially expand the collected dataset.
(5) The literature makes it clear that only a small number of defects have been considered while classifying and detecting faults in PVM. As a result, it is critical to develop a fault detection strategy that employs deep learning and ML approaches collectively to detect various faults.

1.1. Research Gap

Based on the findings portrayed in the literature survey, the following research gaps were identified:

(i) Conventionally, thermographic and electroluminescence images have been widely adopted to diagnose faults in PVM. However, RGB image-based visual fault detection has not been attempted.
(ii) Deep learning and machine learning algorithms have been used to perform classification and fault diagnosis tasks individually. However, combining machine learning and deep learning approaches remains unexplored.

The abovementioned research gaps can be addressed through the following means: (i) adopting RGB-based image acquisition to detect visible faults in PV modules, which can be made feasible by utilising a UAV equipped with a digital camera; (ii) fusing deep learning and machine learning techniques, since the literature survey states that deep learning techniques are excellent feature extractors, while machine learning techniques provide accurate results on numerical data.

1.2. Novelty and Contribution

The overall working of the proposed methodology is presented in Figure 1. The current work seeks to enhance the existing technology, and its important technological contributions are listed below:

(1) In the current work, PVM fault identification and diagnosis were carried out using RGB images (obtained from UAVs).
(2) Along with panels in good condition, several visual flaws such as delamination, snail trails, burn marks, glass breakage, and discolouration were considered in the study.
(3) The collection of captured aerial images was expanded using data augmentation with different transformation functions.
(4) A combination of ML and DL techniques was adopted in the study. Six pretrained networks, namely, DenseNet-201, VGG19, ResNet-50, GoogLeNet, VGG16, and AlexNet, were utilized to extract image features.
(5) The J48 decision tree algorithm was used for feature selection. Three families of classifiers, namely, tree-based, lazy-based, and Bayes-based, were adopted to perform image classification.

2. Experimental Studies

Detecting whether a PVM is in good or faulty condition is the primary aim of the current study. If the condition of the PVM is found to be faulty, the goal of the proposed technique is to identify the kind of fault. The first two blocks of the proposed methodology shown in Figure 1 are detailed in the sections below. The study took place in a laboratory environment using modules that were positioned on a platform.

2.1. Experimental Setup

A UAV-based surveillance scheme outfitted with a high-quality professional camera, a variety of on-board processors, a ground control centre, and sensors makes up the experimental setup [29]. The general operation of the UAV-based monitoring platform is shown in Figure 2. Aerial photographs of PVM were taken under laboratory conditions using a DJI Mavic 2 Zoom drone with an RGB professional camera that was remotely piloted using a handheld remote controller. The acquired photos were transmitted wirelessly to the ground control station and saved in a storage container. The acquired photos were then subjected to further offline processing using deep learning and ML techniques, followed by the analysis of the classification results. Table 1 displays the complete specification of the UAV utilised for the study. Five faulty and one fault-free test modules were examined for UAV image acquisition, and the modules were positioned at six different locations across the facility for image collection.

The drone was flown at a height of 1 to 5 meters above the PVM to collect image data. For the purpose of obtaining PVM images, the drone was operated twice for around 14 minutes with an interval between the sessions. For image data collection, each PVM test condition, namely, snail trails, glass breakage, delamination, discoloration, good panels, and burn marks, was set up at a different location across the facility. In around 2.3 minutes, 100 photos were taken for each module condition and saved into separate folders. The PVM utilized in the experimental investigation were produced by Udhaya Semiconductors Ltd. A comprehensive description of the PVM employed is given in Table 2, while a sample of the acquired images is presented in Figure 3. The measured values were determined under standard test conditions comprising a temperature of 25°C, an irradiance of 1000 W/m², and an air mass of AM1.5.

2.2. Experimental Procedure

The complete experiment was carried out in four stages: (1) dataset construction and acquisition, (2) CNN-based feature extraction, (3) J48-based feature selection, and (4) feature classification using various classifiers. In the current research, a UAV was used to collect image data from different PVM test conditions, namely, discoloration, glass breakage, snail trail, burn marks, delamination, and good panel. Figure 4 depicts the complete data collection apparatus. As shown in Figure 4, the handheld controller was employed to direct the UAV's flight and includes features for temporarily storing the captured PVM images. These images were subsequently uploaded to storage systems using data cables or memory cards. For the purpose of the experiment, a total of 600 images (100 images for each test condition) were gathered and divided into separate classes. Deep learning-based models, however, may underperform on small datasets. Hence, large datasets are essential for training the model, thereby enhancing classification accuracy through the proper retrieval of the most significant characteristics.

Acquiring an image dataset is considered a challenging task while employing CNN. The difficulties encountered during dataset creation can be resolved by data augmentation techniques. With the use of the data augmentation approach, the number of images was artificially boosted from 600 to 3150 (a homogeneous dataset containing 525 images for each condition). The images were subjected to a series of transformations such as blur, flip, rotation, noise, shift, zoom, and warp to expand the collected image data; a sketch of such a pipeline is given below. Table 3 lists the settings that were utilised to modify the images. The J48 decision tree (DT) algorithm was used to identify those features among the extracted features that are highly significant and contribute towards classification. The PVM test conditions are then classified into their appropriate classes using various classifiers based on the chosen features.
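As a concrete illustration of this step, the following Python sketch applies the listed transformations to a folder of images using the PIL and torchvision libraries. This is a minimal sketch under stated assumptions: the exact parameter values of Table 3 are not reproduced, so the angles, fractions, and kernel sizes below are placeholders, and the folder paths are hypothetical.

import os
from PIL import Image
import torchvision.transforms as T

# Placeholder transformation settings; the actual values used in the study are listed in Table 3.
augmentations = {
    "flip":   T.RandomHorizontalFlip(p=1.0),
    "rotate": T.RandomRotation(degrees=15),
    "shift":  T.RandomAffine(degrees=0, translate=(0.1, 0.1)),
    "zoom":   T.RandomAffine(degrees=0, scale=(0.8, 1.2)),
    "warp":   T.RandomAffine(degrees=0, shear=10),
    "blur":   T.GaussianBlur(kernel_size=5, sigma=(0.5, 2.0)),
    # Noise injection would be applied separately on the tensor representation and is omitted here.
}

def augment_folder(src_dir, dst_dir, copies_per_transform=1):
    """Apply each transformation to every image in src_dir and save the results to dst_dir."""
    os.makedirs(dst_dir, exist_ok=True)
    for name in os.listdir(src_dir):
        img = Image.open(os.path.join(src_dir, name)).convert("RGB")
        for tag, tf in augmentations.items():
            for i in range(copies_per_transform):
                tf(img).save(os.path.join(dst_dir, f"{tag}_{i}_{name}"))

# Hypothetical usage: expand the 100 raw images of one condition towards the 525 used per class.
# augment_folder("data/raw/snail_trail", "data/augmented/snail_trail")

Applying the six transformations once to each of the 100 originals already yields 700 images per class, which could then, for example, be subsampled to a balanced 525 per condition.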

2.3. Visual Faults in PVM

Faults in PVM can be induced by the adverse circumstances brought on by varying climatic conditions, high environmental uncertainty, and thermal loads. Such conditions can impact PVM performance, dependability, and operational lifespan. The most frequent visual fault occurrences in a PVM are described as follows, and a brief outline is provided in Table 4.

2.3.1. Glass Breakage

Glass breakage in PV modules occurs due to the influence of several factors like harsh climatic conditions, improper installation, poor packing, road shocks during transportation, thermal stresses, and sudden impacts. The front glass cover of a PVM is made of tempered glass which, even on complete breakage, does not stop the PVM from working. However, a module with broken glass fails to deliver the standard power output, since the breakage paves the way for moisture penetration that assists oxide formation and corrodes the PVM interconnects.

2.3.2. Burn Marks

Burn marks in a PV module can be induced by failed solder bonds, ribbon tears in PV cells, and localized heating. The presence of burn marks is a concern for the safety and reliability of PV modules.

2.3.3. Delamination

Delamination occurs in a PVM due to the loss of adhesion between the PV cell and the EVA encapsulant, which worsens under changing climatic conditions. Delamination can be observed predominantly at the corners and edges of a PVM and can incur water penetration, minimized radiation, increased reflection, electrical issues, and power loss. Increased exposure to moisture and heat can elevate the series resistance and salinity inside the PVM, thereby accelerating the degradation of EVA.

2.3.4. Discoloration

Discoloration is the appearance of browning or yellowing in PV cells that signifies a degradation phenomenon. The change in color of the encapsulant material is induced by photothermal (UV ray exposure) and thermal (heat exposure) mechanisms. The change in color alters the transmittance of light, which reduces sunlight absorption and leads to output power loss. The primary reason for the occurrence of discoloration is the chemical reaction inside the encapsulant material due to water and UV ray penetration at high temperatures.

2.3.5. Snail Trail

Snail trails are a visible indication of the presence of microcracks inside a PVM. In general, snail trails originate from the corners of a PV cell and gradually expand due to the thermal stresses imposed. The occurrence of snail trails is enhanced by exposure to UV radiation and high temperatures. The rate of snail trail formation accelerates during the summer due to an increase in the thermal stress applied on the PVM. Additionally, studies state that the selection of back cover and EVA materials can also play a major role in the formation of snail trails. Microcracks in a PV cell, in combination with UV radiation, mechanical load, and freezing, facilitate the prevalence of snail trails in PVM. A pale grey or black discoloration of the silver interconnects can also signify snail trail occurrence. Since the phenomenon is irreversible, panels with snail trails must be replaced.

3. Feature Extraction Based on CNN

The present section details the extraction and selection of the features utilised in the current research, as well as a brief overview of CNN. Several networks, including AlexNet, DenseNet-201, ResNet-50, GoogLeNet, VGG16, and VGG19, were used for feature extraction. The most significant features were identified using the J48 decision tree algorithm. Brief explanations of the adopted classifiers are also given below.

3.1. A General Summary of Convolutional Neural Networks

Deep learning has recently emerged as a powerful technology that has been utilised across a wide spectrum of machine vision and computational intelligence tasks. Deep learning algorithms are conceived and implemented using nonlinear convolutional neural networks (CNN). CNN are widely recommended feature extraction tools due to their remarkable capacity to extract high-level visual features and their compatibility with low-resolution pictures [35]. In addition to multiple hyperparameters and special layers, the CNN architecture includes the convolutional layer, the pooling layer, and the fully connected layer; a minimal sketch illustrating these layer types is given below. A brief description of the CNN layers is presented in Table 5.
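To make the three layer types of Table 5 concrete, the following minimal PyTorch sketch stacks convolutional, pooling, and fully connected layers into a small classifier. It is only an illustration of the architecture described here, not one of the networks used in the study, which relies on pretrained models instead.

import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    """A minimal CNN with the three layer types described in Table 5."""
    def __init__(self, num_classes=6):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),   # convolutional layer: learns local patterns
            nn.ReLU(),
            nn.MaxPool2d(2),                              # pooling layer: downsamples the feature maps
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 56 * 56, num_classes)  # fully connected layer: maps features to classes

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

# For a 3 x 224 x 224 RGB input, the feature maps are 32 x 56 x 56 before the fully connected layer.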

Throughout the training process, the convolutional and pooling layers continuously learn the patterns present in the images. In CNN models, the error backpropagation approach is used to automatically adjust the filters/weights, thereby minimising the occurrence of errors. In general, CNN models are built in such a way that they absorb important characteristics from the image input and contribute favourably to classification performance. To train and develop a CNN architecture from scratch, massive amounts of correctly labelled data are required. On the other hand, creating such vast datasets takes more time, requires high levels of human effort, and makes extensive use of data analytics. Despite the difficulties outlined above, several researchers have successfully validated the use of pretrained networks for specific applications. Since pretrained networks have been trained on millions of pictures, they offer remarkable feature extraction abilities. Several pretrained networks like VGG19, VGG16, AlexNet, ResNet-50, DenseNet-201, and GoogLeNet have been explored in the literature, and their pretrained versions are freely accessible on the internet.

3.2. Extraction of Features Using Pretrained Networks

The feature extraction procedure reduces the number of variables required to interpret and explain a substantially larger amount of data. To aid classification, a CNN automatically develops features that differentiate between the classes based on the dataset labels. Transfer learning has grown into an efficient approach for extracting features from and categorising new image datasets by incorporating only a few changes to the final few layers. A brief description of the adopted pretrained networks is presented in Table 6. In the current study, multiple pretrained networks, namely, VGG19, VGG16, AlexNet, ResNet-50, DenseNet-201, and GoogLeNet, were used to extract features from PVM aerial images. The features from every network were derived from the final (fully connected) layer, which yields 1000 features per image. The collected features were stored as a data file in ".csv" format for further processing; a sketch of this step is given below.
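The sketch below illustrates this extraction step, assuming a PyTorch/torchvision implementation; the paper does not state which framework was used, so this is one possible realisation rather than the authors' exact code. The 1000-dimensional output of the final fully connected layer is written, together with the class label, to a .csv file. The file name and image list are hypothetical.

import csv
import torch
from PIL import Image
from torchvision import models, transforms

# DenseNet-201 pretrained on ImageNet; the same pattern applies to the other five networks.
model = models.densenet201(weights=models.DenseNet201_Weights.DEFAULT)
model.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def extract_features(image_paths, labels, out_csv="densenet201_features.csv"):
    """Write one row per image: 1000 final-layer features followed by the class label."""
    with open(out_csv, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow([f"f{i}" for i in range(1000)] + ["label"])
        with torch.no_grad():
            for path, label in zip(image_paths, labels):
                x = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
                feats = model(x).squeeze(0)   # 1000-dimensional output of the final fully connected layer
                writer.writerow(feats.tolist() + [label])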

4. J48 Algorithm for Feature Selection

Feature selection involves identifying and selecting the most relevant features that can effectively aid in predicting the target classes. The addition of irrelevant data can degrade classifier performance and considerably increase computational complexity. As a result, the feature selection procedure eliminates less significant features in order to increase the performance of the classification algorithm. Decision tree (DT) algorithms are commonly employed in feature selection since they can effectively and consistently capture relevant information. A DT is a tree-like graphical representation used to generate classification rules. A tree is made up of a root, nodes, leaves, and branches. The classification attributes are depicted as nodes that are connected through branches from root to leaf. In a DT, the leaves represent the unique class labels, while the internal nodes represent the attributes on which the data are split. The branches reflect the sequential decisions that lead to the leaves [36]. Feature selection in a decision tree begins at the root and descends through the nodes until a pure leaf is located. At each decision node, highly relevant attributes that influence the categorisation are identified through appropriate estimation criteria. Among the known DT algorithms, J48 is frequently adopted for feature selection. In the current study, feature selection was conducted with the J48 DT algorithm on the extracted features of all six networks (VGG19, VGG16, AlexNet, ResNet-50, DenseNet-201, and GoogLeNet). The features appearing in the decision tree were recorded, and an analysis was performed to determine the optimal number of features necessary for classification. Further experimentation was performed to identify the optimal number of features among the decision tree features: the least significant features were removed step by step, and the classification accuracy was recorded at each step. The impact of the variations in feature combinations for every network can be observed in Figures 5(a)–5(f). The features (among the 1000 extracted features) that were selected by the J48 decision tree are presented in Table 7. The features in Table 7 are listed in order of significance, from high to low.

To understand the feature selection process, consider the GoogLeNet pretrained network as an example. Figure 5(c) shows the impact on classification accuracy for varying numbers of features. According to Figure 5(c), the classification accuracy varies between 63.23% and 96.54%. Notably, a high classification accuracy of 96.12% was obtained using only 23 selected features. Selecting 23 features can be considered optimal, since a larger number of features demands more sophisticated computational devices. Additionally, the computational complexity of the model increases, which results in larger consumption of time and money. The same process was repeated for all the pretrained networks; a sketch of an analogous selection procedure is given below.
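Since J48 is the Weka implementation of the C4.5 decision tree, the sketch below uses scikit-learn's DecisionTreeClassifier as a stand-in to show the same idea under that assumption: fit a tree, keep only the features it actually uses, and sweep the top-k subsets (evaluated here with a 1-nearest-neighbour classifier and tenfold cross-validation) to locate the accuracy knee, such as the 23 features retained for GoogLeNet. The .csv file name is hypothetical.

import numpy as np
import pandas as pd
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

df = pd.read_csv("googlenet_features.csv")                 # assumed output of the extraction step
X, y = df.drop(columns=["label"]).values, df["label"].values

# Fit an entropy-based decision tree (C4.5-like) and rank the features it uses.
tree = DecisionTreeClassifier(criterion="entropy", random_state=0).fit(X, y)
order = np.argsort(tree.feature_importances_)[::-1]
selected = [i for i in order if tree.feature_importances_[i] > 0]   # features appearing in the tree

# Sweep the top-k subsets and record cross-validated accuracy, analogous to Figure 5.
for k in range(1, len(selected) + 1):
    acc = cross_val_score(KNeighborsClassifier(n_neighbors=1), X[:, selected[:k]], y, cv=10).mean()
    print(f"top-{k} features: {acc:.4f}")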

5. Feature Classification

Feature classification is the procedure of categorising instances based on their selected features. The features are ranked in order of decreasing importance, with the most significant features listed first and the least significant ones last. During tree visualisation, the features of lesser relevance are ignored. In this study, three types of classifiers were used, namely, (i) tree-based, (ii) Bayes-based, and (iii) lazy-based classifiers. The selected features were separated into training data and testing data. The data were then tested on various classifiers for training accuracy, validation accuracy, and testing accuracy, based on which the best classifiers were selected. A short summary of the adopted classifiers is presented in Table 8.

6. Results and Discussions

In this section, the performance of multiple classifiers is evaluated using the features selected through the J48 decision tree algorithm for each pretrained network examined in the study. Snail trails, glass breakage, delamination, discoloration, good panels, and burn marks were the six PVM conditions considered. The image dataset was built with 3150 images (525 images per condition) using the data augmentation technique. A train-test split ratio of 80%-20% was adopted in the study, i.e., 420 training images and 105 testing images per condition. A tenfold cross-validation approach was utilized to determine the validation accuracy of the classifiers. The performance of tree-, lazy-, and Bayes-based classifiers on the features extracted and selected from the pretrained networks is evaluated in this section; a sketch of the evaluation protocol is given below.
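The classifier names used in the study (IBK, random forest, Bayes net, and so on) come from the Weka toolkit; the sketch below mirrors the evaluation protocol with scikit-learn analogues under that assumption: an 80%-20% split, tenfold cross-validation on the training portion for the validation accuracy, and a held-out test accuracy with a confusion matrix. The input file of selected features is hypothetical.

import pandas as pd
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.neighbors import KNeighborsClassifier        # analogue of IBK with k = 1
from sklearn.ensemble import RandomForestClassifier
from sklearn.naive_bayes import GaussianNB                 # analogue of the Bayes family
from sklearn.metrics import accuracy_score, confusion_matrix

df = pd.read_csv("densenet201_selected_features.csv")      # assumed file of J48-selected features
X, y = df.drop(columns=["label"]).values, df["label"].values
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y, random_state=0)

classifiers = {
    "IBK (1-NN)":    KNeighborsClassifier(n_neighbors=1),
    "Random forest": RandomForestClassifier(n_estimators=100, random_state=0),
    "Naive Bayes":   GaussianNB(),
}

for name, clf in classifiers.items():
    cv_acc = cross_val_score(clf, X_tr, y_tr, cv=10).mean()     # validation accuracy (tenfold CV)
    clf.fit(X_tr, y_tr)
    test_acc = accuracy_score(y_te, clf.predict(X_te))          # test accuracy on the 20% hold-out
    print(f"{name}: validation = {cv_acc:.4f}, test = {test_acc:.4f}")

# Confusion matrix of the best performer, analogous to Figures 6-8.
print(confusion_matrix(y_te, classifiers["IBK (1-NN)"].predict(X_te)))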

6.1. Performance Evaluation of Tree-Based Classifiers

In the present study, 17 tree-based classifiers were utilized to perform classification on the selected features obtained from the pretrained networks considered. The tree-based classifiers considered in the study are best first tree, cost-sensitive forest, decision stump, extra trees, forest penalizing attribute, functional tree, Hoeffding tree, J48, J48graft, least absolute deviation tree, logistic model tree, Naïve Bayes tree, random forest, optimized forest, random tree, reduced error pruning tree, and simple CART. The training, validation, and test accuracies of the adopted tree-based classifiers for every pretrained network are presented in Tables 9(a)–9(c).

To compare the obtained results, two parameters, namely, test accuracy and computational time, were taken into consideration. Based on the test accuracies in Table 9(c), one can infer the following. The random forest classifier performed well for all the pretrained network features, producing classification accuracies of approximately 99% and above. VGG-19 with random forest produced a classification accuracy of 99.84% in a computational time of 0.74 s, followed by DenseNet-201 (99.84% in 0.94 s), VGG-16 (99.68% in 0.63 s), ResNet-50 (99.68% in 15.24 s), AlexNet (99.52% in 0.02 s), and GoogLeNet (99.23% in 0.71 s), all with random forest. The confusion matrix in Figure 6 shows the resulting classification of VGG-19 features with the random forest algorithm. The diagonal elements of the confusion matrix indicate correctly classified cases, whereas the off-diagonal elements represent misclassified cases. The row and column labels of the confusion matrix represent the PVM test conditions. From Figure 6, one can observe that only one instance among the 630 test instances was misclassified when the random forest classifier was used with VGG19 features. Overall, the random forest algorithm produced the best results among the tree-based classifiers for all pretrained networks.

6.2. Performance Evaluation of Bayes-Based Classifiers

Bayes classifiers are probability-based classifiers that work on Bayes' theorem and are found to be effective on large datasets. In the present study, four different Bayes classifiers, namely, Naïve Bayes updateable, Naïve Bayes multinomial, Naïve Bayes, and Bayes net, were considered. The performance of the Bayes classifiers for the different pretrained networks is presented in Tables 10(a)–10(c).

Based on the testing accuracies in Table 10(c), it can be inferred that the pretrained network AlexNet gives the best results compared to the other pretrained networks, with a classification accuracy of 94.60% and a computational time of 0.18 s using the Bayes net classifier, followed by DenseNet-201 (94.13% in 0.04 s), ResNet-50 (93.65% in 0.05 s), VGG-19 (89.84% in 0.01 s), GoogLeNet (89.33% in 0.02 s), and VGG-16 (87.30% in 0.03 s), all with Bayes net. The confusion matrix in Figure 7 shows the classification of AlexNet features using the Bayes net classifier. Overall, using AlexNet with the Bayes net classifier, 539 of the 630 test instances were correctly categorised, whereas 91 instances were misclassified. Among the Bayes-based classifiers, the Bayes net classifier is therefore the most suitable candidate for real-time defect detection in a photovoltaic module.

6.3. Performance Evaluation of Lazy-Based Classifiers

k-nearest neighbour (IBK), K-star, and locally weighted learning (LWL) were the lazy classifiers considered in the present study. The classification performance of the adopted classifiers for the pretrained networks considered is presented in Tables 11(a)–11(c).

Based on the test accuracies in Table 11(c), one can infer that the pretrained network DenseNet-201 gives the best results compared to the other pretrained networks, with a classification accuracy of 100.00% and a computational time of 0.01 s using the IBK classifier, followed by AlexNet (99.84% in 0.01 s), VGG-19 (99.84% in 0.05 s), GoogLeNet (99.80% in 0.03 s), ResNet-50 (99.68% in 0.02 s), and VGG-16 (99.52% in 0.02 s), all with IBK. The confusion matrix in Figure 8 shows the classification of DenseNet-201 features with the IBK algorithm. The confusion matrix presented in Figure 8 shows that there were no misclassified instances for the IBK algorithm with DenseNet-201 features.

6.4. Evaluation of the Best Classifier for Every Pretrained Network Considered

The current study conducted a comprehensive analysis of the feature extraction ability of the pretrained networks and the effectiveness of ML classifiers in identifying PVM faults. Based on the results obtained in Sections 6.1–6.3, one can infer that random forest (tree-based), Bayes net (Bayes-based), and k-nearest neighbour (lazy-based) were the top-performing classifiers in their respective families. The obtained results are consolidated in Table 12.

The results portrayed in Table 12 show that the IBK classifier displays the highest classification accuracy for all pretrained network features except VGG16, for which random forest achieved the highest classification accuracy. Additionally, both IBK and random forest portrayed similar performance for the ResNet-50 and VGG19 features. However, considering the computational time consumed by both classifiers, IBK (0.02 s for ResNet-50 and 0.05 s for VGG19) consumed less time than random forest (15.24 s for ResNet-50 and 0.74 s for VGG19). Thus, one can conclude that the combination of DenseNet-201 features with the IBK classifier produces the most accurate and efficient classification; a timing sketch is given below.
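The accuracy-versus-time trade-off summarised in Table 12 can be reproduced with a small helper like the one below, which assumes the split and scikit-learn analogue classifiers sketched earlier in this section; the timings it reports are indicative only and will differ from the Weka values in the tables.

import time
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score

def timed_eval(clf, X_tr, y_tr, X_te, y_te):
    """Return test accuracy and the combined training plus prediction time in seconds."""
    start = time.perf_counter()
    clf.fit(X_tr, y_tr)
    preds = clf.predict(X_te)
    return accuracy_score(y_te, preds), time.perf_counter() - start

# Hypothetical usage with the split created in the earlier evaluation sketch:
# acc_rf,  t_rf  = timed_eval(RandomForestClassifier(random_state=0), X_tr, y_tr, X_te, y_te)
# acc_ibk, t_ibk = timed_eval(KNeighborsClassifier(n_neighbors=1), X_tr, y_tr, X_te, y_te)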

6.5. State-of-the-Art Performance Comparison

The performance of the proposed methodology can be assessed against the results delivered by various state-of-the-art techniques. Table 13 displays the state-of-the-art techniques adopted for classifying PVM fault conditions. The results portrayed show that the present methodology outperforms the state-of-the-art results.

7. Conclusion

The current study established the application of various types of classifiers to differentiate between different PVM test conditions using aerial images. Several deep learning pretrained network models were utilised to extract aerial image features, and the most noteworthy features were selected using the J48 decision tree algorithm. Three types of classifiers were used in the present study, namely, tree-based, Bayes-based, and lazy-based classifiers. The selected features were analysed by evaluating the classifier performance in terms of training accuracy, validation accuracy, and testing accuracy. The obtained results help in identifying the best pretrained network feature and classifier pair to detect real-time defects in a PVM. DenseNet-201 features along with the k-nearest neighbour (IBK) classifier produced the maximum classification accuracy of 100.00%. The obtained accuracy indicates that there were no misclassified instances, making the solution more feasible for real-time deployment. This research can result in enhanced energy output in PV power generation by decreasing system failure and downtime. The presented methodology can be deployed in real-time monitoring systems for instantaneous results. As future work, computationally efficient solutions can be developed to provide cost benefits for investors. Apart from the numerous advantages stated in the study, certain limitations persist, listed as follows:

(i) The developed model works effectively with the data acquired in this study; however, the adaptability of the model to different datasets has not yet been examined.
(ii) Prior knowledge of UAV operation is necessary to control the flight.
(iii) Acquisition of faulty panels can be a challenging task. Additionally, natural fault occurrences can take a long time to develop in a new product, and artificially simulating faults is questionable in the case of PVM.
(iv) Data acquisition is considered the prime challenge, followed by computational resources and capital, while developing deep learning models.
(v) The present work is oriented towards the detection of visual faults alone.

Data Availability

Data associated with the study can be obtained from the corresponding author upon request.

Conflicts of Interest

The authors declare no competing interest.