Abstract

Solar energy has been receiving increasing attention due to the wide range of functionality it now supports and its promise of reduced environmental pollution. However, a few challenges remain, one of which is regular maintenance, especially for roof-mounted solar panels whose efficiency may have degraded because of dust and dirt particle deposits. This paper is aimed at developing an autonomous drone equipped with a camera, a GPS sensor, a transceiver, and a conical pump tank holding water and lotus-effect liquid, for the purpose of maintenance cleaning of solar panels mounted on building rooftops. The proposed system combines the MobileNet and VGG-16 CNN techniques for object detection and recognition, which enables assessment of the solar panels, alongside the lotus-effect physical technique, which enables their maintenance. This would also remove the danger to the workers who have previously risked their lives carrying out inspection and maintenance at such high installations. A pump tank is designed and fabricated using a 3D printer to hold the water and lotus-effect liquid and is fitted on the drone. The proposed intelligent drone has been prototyped and deployed on-site at Taif University. The experimental results confirm the ability to detect dust and dirt particle deposits on solar panels while flying autonomously before proceeding to maintain them. The cleaning process using the lotus effect has increased the power efficiency of the solar panels by approximately 31%, while the accuracy of the MobileNet framework for detecting particle deposits on the solar panels and thereafter making maintenance decisions reached approximately 99%.

1. Introduction

The importance of relying globally on renewable energy sources such as solar panels has been increasingly recognized. Thus, researchers, engineers, and IT specialists are striving to employ advanced technology to achieve a better life for our planet. However, many researchers have identified a reduction in solar panels' efficiency due to solid particles deposited on their surfaces, which decrease the absorbed solar radiation on which the device depends for generating electricity or heat. Further, the efficiency reduction due to dirty surfaces can be as high as 60% if the installation is in a dusty or polluted atmosphere and is not cleaned frequently [1–3].

Thus, it is rational to have reliable tools that can save natural resources and adopt technology wisely, especially since it is widely accepted that organizations and research centers tend to reduce environmental pollution by reducing the use of depletable energy sources and replacing them with clean, renewable ones. Further, various pillars of the Fourth Industrial Revolution (4IR) have gained traction in both industry and academia for employing advanced technologies such as artificial intelligence (AI), Unmanned Aerial Vehicles (UAVs), and the Internet of Things (IoT) to propose innovative solutions towards green energy. Unquestionably, integrating UAVs with IoT and AI can widen the horizons of smart applications that could save time, resources, and even human lives [4, 5].

UAVs are sophisticated instruments that can be either preprogrammed or remotely controlled, and they are diverse in size, shape, and function. Further, a UAV system can be considered a satellite without the distance penalty, along with a list of advantages such as proficiency, reliability, flexibility, portability, applicability, rapid deployment, Line of Sight (LoS) connectivity, low latency, and low cost of manufacturing, launching, and maintenance. When coupling UAVs with an AI framework, more applications can benefit, such as remote sensing, high-resolution imaging, monitoring of disaster-relief missions, atmospheric studies, service delivery, surveillance, and empowering smart cities [6–8]. Using AI techniques with UAVs is extraordinarily valuable since it pairs real-time machine learning ability with the exploratory abilities of unmanned drones, offering ground-level operators a human-like eye in the sky. This integration not only helps computer vision and image processing from the aerial imaging perspective but also becomes an enabler of wireless communications via drones to fulfil the increasing and diverse requirements across a large range of application scenarios [9–12].

The term “lotus effect” refers to water resistance and self-cleaning; the self-cleaning property of waterproof surfaces is called the lotus effect. Figure 1 shows the functional principle of the lotus effect and how a droplet can slide off carrying dust and dirt, leaving the surface clean. The hydrophobic property of the lotus flower has been mimicked to obtain beneficial results in many areas, one of which is nanotechnology. The lotus effect can be applied to various surfaces, such as wood, metal, plastic, and glass, through chemical products for the sake of protection [13].

This paper is aimed at building and implementing an autonomous drone equipped with various payloads and paired with the MobileNet technique to enhance the efficiency of solar panels installed in high places such as the roofs of houses and buildings. This work's contributions are a noteworthy shift from existing work, as presented in the paper's aim and objectives at the end of the following section. The rest of this paper is organized as follows. Related studies are discussed in Section 2. The proposed model and implementation steps are presented in Section 3. Section 4 presents the simulation results and validation discussions. Then, Section 5 concludes with a discussion focused on future perspectives.

2. Related Works

Solar panels are affected daily by various environmental factors such as dust and dirt, which reduce their efficiency. One approach to maintaining them is to have trained workers carry out the cleaning process manually; yet high effort, long time, high cost, and risk are issues associated with this approach. More advanced solutions, such as robots or drones, continue to be presented. This section is aimed at presenting a representative illustration of the related research works in the literature, where a set of criteria has been used to source and review the works that meet this research scope. The criteria were as follows: (a) UAV platform type, (b) standalone topology, (c) aerial imaging, (d) AI framework if applicable, and (e) low-altitude missions. This section concludes by highlighting research gaps and our own research motivations.

Solar panels are exposed throughout the year to several weather-related factors such as dirt, dust accumulation, atmospheric pollution, and bird droppings. Thus, many solar panel cleaning techniques that could positively enhance a panel's performance have been discussed. Researchers believe that energy yield loss is caused by dust deposition on photovoltaic panels, as claimed in [14], which presents a comprehensive review of automated solar panel cleaning systems. Several methods were used, such as a pneumatic suction pump, an air pump, a water pump, drones, compressed-air jet spray, a self-cleaning brush, and central water pumping throughout the solar cell panels. The results of the various solar panel cleaning techniques emphasize the importance of having reliable cleaning tools. This study has opened various options to explore and carry out experimentally.

An intensive review is introduced in [15], which emphasizes the existence of different solutions for cleaning photovoltaic panels. Two of those solutions utilized a flying robot and an automatic arm (e.g., a microfiber brush) that sweeps the photovoltaic panels clean. Similar work is presented in [16], where self-inspection cleaning devices for solar panels based on machine vision were highlighted. The work discussed an automatic arm with a microfiber brush that can be fixed on the frame of the solar panels as a cleaning method. Additionally, an autonomous robot with wheels and fans is presented that allows the vehicle to fly and reach roof-mounted solar panels and then walk on the panels for an inspection mission.

Uddin et al. in [17] presented a design of a drone for cleaning the windows of high-rise buildings, where the drone can spray water and wash with a microfiber brush. The designed drone was also remotely controlled to give workers better safety and maintenance of the surrounding area. Experimental results show the applicability of using a UAV system for cleaning purposes in a cost-effective and efficient manner. Saurav and Jung in [18] emphasized the importance of using an autonomous drone for cleaning solar panels in the field. The aim was not only appropriate maintenance but also efficiency, safety, and cost-effectiveness, where these aerial vehicles can lift their own weight plus extra payload weight and fly autonomously while carrying out solar panel detection and cleaning.

Soman in [19] used a drone with RGB and near-infrared data for rooftop detection, a proposal that would be helpful in solar panel deployment. An experimental investigation is presented in [20] on cleaning solar panels in desert climates using a UAV along with a brush, a microfiber wiper, and a vacuum cleaner. The cleaning mission was carried out remotely using a remote control and camera. Results indicate the effectiveness of aerial cleaning, where the enhancement in the average power output of the cleaned solar panels was around 7%. A fully autonomous UAV is utilized in [21] to fly over solar panels for complete inspections. The drone carries a thermal camera to collect thermal and regular video that can be postprocessed via a stitching algorithm. Across three different cases, the proposed algorithm shows a good level of solar panel detection.

Drone-supported thermal imaging is used for estimating the efficiency of photovoltaic solar panels, as presented in [22]. The proposed model design focuses on diagnosing several defect types, all of which reduce the power output and, accordingly, the financial revenue. The drone, along with its detection framework, uses visual inspection, thermographic inspection, electrical measurement, and electroluminescence imaging. Results show the effectiveness of deploying these inspection techniques for predictive maintenance.

A drone equipped with a thermographic camera and a radiometric sensor to detect dust on solar panels is presented in [23]. Experimental results show the effectiveness of such an aerial method for monitoring the condition of photovoltaic solar panels for the sake of maintainability, operating costs, availability, reliability, safety, and energy efficiency. Further improvement is introduced in [24], where a novel convolutional neural network (CNN) method is employed to enhance detection. Results show a good level of accuracy under different real scenarios. Recently, CNNs have been explored for surface defect detection in strip steel, textiles, and buildings [25], where several intelligent inspection techniques have been discussed. A deep learning (DL) method for detecting faults in solar panels is presented in [26], where the proposed model includes a drone equipped with a thermal camera and a GPS sensor. The results of the proposed model show high performance in coupling an AI framework with drone aerial imaging.

Weimer et al. in [27] employed a CNN to detect surface defects in textile and steel datasets, exploring the impact of the CNN model's depth and breadth on test outcomes. Wang et al. in [28] developed a new CNN model structure. The model inputs all defect-free and defect samples and produces a 12-class classifier with six nondefective and six defective classes. However, since the dataset is limited, there is a risk of overfitting. The authors in [18] developed a robust and smart technique for recognizing and detecting solar panels experimentally. A TensorFlow-based real-time custom object detector along with its DL capabilities is used. The DL model, interfaced with a Raspberry Pi and a Jetson Nano, shows reasonable detection. However, although the proposal includes a drone, no real drone deployment has been shown.

A flaw identification algorithm based on transfer learning was suggested in [29] to accomplish weight sharing and reduce overfitting. Further, Pierdicca et al. in [30] utilized a CNN to recognize remote sensing pictures of solar cells and identify damaged cells in the solar cell module. Findings show that the CNN successfully identified solar cell defects. The drawback is that, owing to the low resolution of the remote sensing pictures of solar modules, the precision of the CNN in this work is only approximately 70%. Deitsch et al. in [31] used a CNN to identify various electroluminescence faults in solar cells. Compared to traditional machine vision methods, the algorithm in this study achieves 88.36% accuracy on the dataset, a 6% improvement. At the same time, the algorithm's detection speed fulfils the demands of real-time manufacturing.

Kim et al. in [32] addressed an AI framework coupled with a drone for real-time remote sensing and energy efficiency. Furthermore, Pierdicca et al. in [33] presented solAIr, in which a mask R-CNN was used for detecting anomalous cells in photovoltaic pictures captured by drones with a thermal infrared sensor. Moreover, Kim et al. in [34] applied the Canny edge operator and image segmentation techniques to analyse IRT pictures obtained by drone. All the mentioned techniques show respectable results in solar panel detection and inspection missions. Two self-developed panel-detection techniques are compared in [35], one based on classical techniques and the other on deep learning, both with a similar postprocessing phase. The first technique relies on edge detection and classification, whereas the second relies on convolutional neural networks trained on regions to identify a panel. The second technique uses deep learning to train on pictures that have undergone three distinct preprocessing processes. Table 1 shows a comparison of the related research windup against the proposed work.

Based on the related studies wound up in Table 1, the research gaps have been identified, from which we draw our own research motivations to inform our proposed model. The comparative analysis between the approaches presented in Table 1 highlights two main unresolved issues: (1) a fully autonomous aerial vehicle and (2) an effective cleaning approach applied during the intelligent aerial inspection. To the best of our knowledge, and based on the related studies, no work has coupled a fully autonomous drone with an AI framework to achieve two goals, namely, (1) flying the drone autonomously using VGG-16 CNN and (2) detecting objects/dust on solar panels for cleaning purposes using MobileNet, let alone considering the lotus effect for cleaning and long-term protection. Therefore, our proposed novel contribution is motivated to cover this gap. To illustrate, the process can be seen from two angles: software and hardware. The software consists of two brains: the first brain trains the drone to fly autonomously using VGG-16 CNN, while the second brain handles detection and recognition of solar panels using MobileNet. These two brains work on the go; hence, while the drone flies autonomously, it can also detect solar panels and recognize whether cleaning is needed. The hardware part, including the camera, GPS, and conical pump tank, is synchronized with the software part, so once the second brain recognizes that solar panels need cleaning, the conical tank pumps water towards the solar panels and then applies the lotus effect, whereas the GPS sensor reports the solar panels' coordinates to the control station.

This paper is mainly aimed at building and implementing an autonomous drone equipped with various payloads and paired with VGG-16 CNN and MobileNet techniques that work harmoniously together, not separately, to enhance the efficiency of roof-mounted solar panels, as well as decrease personnel effort and risk. To achieve our aim, these objectives need to be pursued:
(i) Determining drone and camera settings (e.g., flight path, altitude, camera focal length, and frame rate)
(ii) Developing an AI framework with the two software brains
(iii) Testing the efficiency of solar panels before and after cleaning and applying the lotus effect

From these objectives, we can derive the paper's contribution, which is predominantly a comprehensive framework consisting of a computer software technique (MobileNet and VGG-16 CNN) as well as a physical technique (lotus effect). This combination would not only enhance the efficiency of solar panels but also resolve the endangerment of human lives at such high installation places in a more efficient and timely fashion. MobileNet is used for its object detection and recognition capability to identify roof-mounted solar panels and evaluate their status, which guarantees real-time processing and thus reduces energy consumption on IoT devices, while VGG-16 CNN is used, for its simplicity and accuracy, to train the drone to fly autonomously. The lotus effect is applied for long-term protection, for which this work has designed a pump tank using a 3D printer.

3. Proposed Work

This paper is aimed at evaluating roof-mounted solar panels using an autonomous and intelligent drone. This has been done by coupling a fully autonomous drone with VGG-16 CNN and MobileNet techniques that work harmoniously together, not separately, to enhance the efficiency of roof-mounted solar panels, as well as decrease personnel effort and risk. Further, the proposed system is aimed not only at detecting issues on solar panels but also at cleaning them with water and the lotus effect for long-term protection. Figure 2 illustrates a block diagram of the proposed work architecture, which consists of a sky segment and a ground segment. The former contains the drone platform with its payloads, including a camera paired with the AI framework, a GPS sensor, a 3D conical tank designed and fabricated using a 3D printer to hold the water and lotus-effect liquid, and a transceiver module responsible for wireless communication between the drone and the ground segment before transmitting the gathered data to the cloud for further storage and analysis. The latter segment includes a ground control center (GCC), which acts as a focal point between the two segments and hosts gateways to external networks.

This section mainly focuses on the architecture of the proposed intelligent AI framework in two phases, as shown in Figure 3. Phase 1 deals with how to train a network to fly a drone autonomously using the VGG-16 CNN technique. Phase 2 deals with solar panel detection and recognition to identify roof-mounted solar panels and evaluate their status for cleaning purposes using the MobileNet technique.

Phase 1. It starts with two preflight settings. The drone setting includes the drone's flight path, speed, and altitude, while the camera setting includes focal length, shutter speed, frame rate, aperture, and configuration (e.g., vertical and oblique). These preflight settings must be tuned before drone flight execution, which in turn helps the drone fly autonomously. To achieve autonomous flying, a VGG-16 CNN network is trained, which also leads to precision aerial imagery. Researchers in [36] present an intensive study of the tools most frequently used to support autonomous flying and claim that the VGG-16 CNN is one of the most common approaches for training a drone to fly autonomously, due to its simplicity and accuracy. The considered VGG-16 network is programmed, trained, and calibrated with the camera using the VGG-16 network support package in MATLAB, which provides many built-in functions and toolboxes to interact with hardware and provide additional processing capabilities [37].
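To make this phase concrete, the following is a minimal sketch of adapting a pretrained VGG-16 backbone for yaw regression. It is written in Python/Keras as a stand-in for the MATLAB support package actually used in this work; the input size, head layers, and optimizer are illustrative assumptions, not the paper's exact configuration.

```python
# Minimal sketch: pretrained VGG-16 backbone with a yaw-regression head.
# Python/Keras stand-in for the MATLAB VGG-16 support package used here;
# head sizes and optimizer are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

base = tf.keras.applications.VGG16(
    weights="imagenet",         # reuse pretrained convolutional features
    include_top=False,          # drop the original 1000-class classifier
    input_shape=(224, 224, 3))  # VGG-16's native input resolution
base.trainable = False          # freeze the backbone initially

model = models.Sequential([
    base,
    layers.Flatten(),
    layers.Dense(256, activation="relu"),
    layers.Dense(1),            # regressor output: predicted yaw command
])

# Trained with the MSE between predicted and actual yaw, as discussed below
model.compile(optimizer="adam", loss="mse")
```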

Clearly, the proposed AI system for the autonomous drone allows flying and recording the current front view and yaw of the drone to generate more data for training and validation. Then, the recorded yaw is transformed into qualified yaw commands, before the VGG-16 CNN and a regressor are combined to train the neural network and control the drone path. The regressor helps in generalizing real-world drone path settings in the provided environment. The yaw data take into consideration the drone's movement budget, which is updated regularly to ensure safe navigation and then landing, while the yaw angle facilitates the drone's rotation when navigating at a fixed velocity.

To train the VGG-16 neural network, the state was reset and a random UAV starting position and random movement budget were chosen. The training episode continues as long as the movement budget is greater than zero and the UAV has not landed. The episode finishes when either the drone lands or the movement budget decreases to zero. Then, a new episode starts unless the maximum number of episodes is reached. The training function chosen here is backpropagation, considered for its speed of convergence on a moderate-sized feedforward NN, with accurate updates of weights and biases near an error minimum. Therefore, the Mean Squared Error (MSE) between the predicted yaw and the actual yaw can be calculated as per

$$\mathrm{MSE} = \frac{1}{N} \sum_{i=1}^{N} \left( y_i - \hat{y}_i \right)^2, \quad (1)$$

where $N$ refers to the number of training samples, $i$ refers to a sample, $y_i$ refers to the actual yaw, and $\hat{y}_i$ refers to the predicted yaw. The actual yaw can be computed as the angle between the current drone movement and the next drone movement to which it is heading. Thus, the actual yaw is considered successful if it is obstacle-free.
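For clarity, the episode logic above can be expressed as the following minimal, runnable sketch; all dynamics here (random_start_and_budget, predict_yaw, fly_step) are toy placeholders, not the actual simulator or trained network used in this work.

```python
# Runnable toy sketch of the training-episode loop and yaw MSE (equation (1)).
# The flight dynamics below are placeholders, not the paper's simulator.
import random

def random_start_and_budget():
    return (random.uniform(0, 100), random.uniform(0, 100)), random.randint(20, 60)

def predict_yaw(state):
    return random.uniform(-180.0, 180.0)   # stand-in for VGG-16 CNN + regressor

def fly_step(state, yaw):
    landed = random.random() < 0.02        # toy landing condition
    return state, landed

def mse(actual, predicted):
    """Mean Squared Error between actual and predicted yaw, equation (1)."""
    n = len(actual)
    return sum((a - p) ** 2 for a, p in zip(actual, predicted)) / n

MAX_EPISODES = 500                         # assumed cap on episodes
for episode in range(MAX_EPISODES):
    state, budget = random_start_and_budget()
    landed = False
    while budget > 0 and not landed:       # episode ends on landing or budget 0
        yaw = predict_yaw(state)
        state, landed = fly_step(state, yaw)
        budget -= 1                        # each movement consumes budget
```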

Phase 2. When a drone flies autonomously, better precision aerial imagery can be obtained. This motivates us to apply MobileNet for its object detection and recognition capability to identify roof-mounted solar panels and evaluate their status. This mission can be done by coupling the camera on board the drone with the lightweight MobileNet computing framework. Further, this computer vision tool goes beyond image processing, which can reduce noise or recognize objects based on scale, color, or position; instead, computer vision can understand the scene by identifying and classifying objects from images and videos [38, 39]. The MobileNet tool is small, with few parameters and operations; hence, it is widely recognized in mobile-based computer vision applications.

The MobileNet considered here is MobileNetV2, which sits between V1, with its low computation, and V3, with its power-consuming computation suited to large data. MobileNetV2 focuses on mobile and embedded vision applications to boost the efficiency of the NN by reducing the number of parameters without compromising on performance. The MobileNetV2 structure, as shown in Figure 3, is based on depthwise separable convolution, which splits a standard convolution into a depthwise convolution, applying a single filter to every input channel, and a pointwise (1×1) convolution, which merges the outputs of the depthwise convolution. This helps reduce the number of parameters, leading to improved network performance and low resource use.

Batch Normalization (BN) is applied to make the neural network faster and more stable by normalizing the samples in a minibatch throughout the training process. This also enables a higher learning rate, hence enhancing the model's generalization accuracy. The rectified linear unit (ReLU) is selected as the activation function for the convolutional layers attached to the flatten layer. The output feature map of the depthwise convolution is represented as per

$$\hat{\mathbf{G}}_{b,c,h,w} = \sum_{i,j} \hat{\mathbf{K}}_{i,j,c} \cdot \mathbf{F}_{b,\,c,\,h+i-1,\,w+j-1}, \quad (2)$$

where $\hat{\mathbf{K}}$ refers to the kernel of size $D_K \times D_K$ of the depthwise convolution, $\hat{\mathbf{G}}$ refers to the filtered output feature map, and $c$, $b$, $h$, and $w$ refer to the number of feature channels, batch size, height, and width of each feature map, respectively. The cost of computation for a depthwise convolution is represented as per equation (3), while the depthwise separable convolution computational cost is represented as per equation (4):

$$D_K \cdot D_K \cdot M \cdot D_F \cdot D_F, \quad (3)$$

$$D_K \cdot D_K \cdot M \cdot D_F \cdot D_F + M \cdot N \cdot D_F \cdot D_F, \quad (4)$$

where $D_K$ denotes the kernel size, $D_F$ denotes the feature map size, $M$ denotes the number of input channels, $N$ denotes the number of output channels, the first term represents the depthwise convolution, and the second term represents the pointwise convolution. By treating convolution as a two-step process of filtering and combining, the reduction in the cost of computation relative to a standard convolution is given as per

$$\frac{D_K \cdot D_K \cdot M \cdot D_F \cdot D_F + M \cdot N \cdot D_F \cdot D_F}{D_K \cdot D_K \cdot M \cdot N \cdot D_F \cdot D_F} = \frac{1}{N} + \frac{1}{D_K^2}, \quad (5)$$

where both the width multiplier ($\alpha$) and resolution multiplier ($\rho$) are introduced to minimize computational cost and control the image resolution, respectively, as per

$$D_K \cdot D_K \cdot \alpha M \cdot \rho D_F \cdot \rho D_F + \alpha M \cdot \alpha N \cdot \rho D_F \cdot \rho D_F, \quad (6)$$

where $\alpha$ and $\rho$ values range between 0 and 1. To note, when these two parameters are both equal to 1, the MobileNet baseline is achieved.
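To make the block structure concrete, the following is a minimal sketch of one depthwise separable unit with BN and ReLU, plus a helper that evaluates the cost ratio of equation (5). It is written in Python/Keras as a stand-in for the MATLAB implementation used in this work; the filter counts and input size are illustrative assumptions.

```python
# Sketch of one depthwise separable unit (depthwise conv -> BN -> ReLU ->
# pointwise 1x1 conv -> BN -> ReLU) as described above. Python/Keras
# stand-in; filter counts and input size are illustrative assumptions.
from tensorflow.keras import layers, Input, Model

def depthwise_separable_block(x, out_channels, stride=1):
    x = layers.DepthwiseConv2D(3, strides=stride, padding="same",
                               use_bias=False)(x)         # one filter per input channel
    x = layers.BatchNormalization()(x)
    x = layers.ReLU()(x)
    x = layers.Conv2D(out_channels, 1, use_bias=False)(x)  # pointwise merge
    x = layers.BatchNormalization()(x)
    return layers.ReLU()(x)

def separable_cost_ratio(dk, m, n, df):
    """Separable vs. standard convolution cost, equations (3)-(5)."""
    depthwise = dk * dk * m * df * df             # equation (3)
    separable = depthwise + m * n * df * df       # equation (4)
    standard = dk * dk * m * n * df * df
    return separable / standard                   # = 1/n + 1/dk**2, equation (5)

inp = Input(shape=(224, 224, 3))
model = Model(inp, depthwise_separable_block(inp, out_channels=64))
print(separable_cost_ratio(3, 32, 64, 112))       # ~0.127, i.e. ~8x cheaper
```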

MobileNetV2 has pretrained models in MATLAB, called the "MobileNet-v2 Support Package", which allowed us to modify these models to suit our proposed framework without training a network from scratch. The considered approach is applied to the concatenated feature maps to compound multiple features together while simultaneously decreasing the number of channels and the scale of the concatenated feature maps. The considered inception module has achieved immense success in computer vision missions due to its efficient and delicate design. The refined feature output can be one of the following three: cleaned solar panels, not cleaned solar panels, and no solar panels. Subsequently, three actions can be taken based on the output: no action, clean with water and apply the lotus visor, and search again, respectively.
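As a hedged illustration of this transfer-learning setup, the following Python/Keras sketch (a stand-in for the MATLAB "MobileNet-v2 Support Package") attaches a new three-class head to a pretrained MobileNetV2 backbone and maps each class to its corresponding action; the head sizes and optimizer are assumptions, not the paper's exact settings.

```python
# Sketch: pretrained MobileNetV2 backbone with a new three-class head
# (cleaned panel / not cleaned panel / no panel) and the action mapping
# described above. Head sizes and optimizer are assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

CLASSES = ["cleaned_panel", "not_cleaned_panel", "no_panel"]
ACTIONS = {
    "cleaned_panel": "no action",
    "not_cleaned_panel": "clean with water and apply lotus visor",
    "no_panel": "search again",
}

base = tf.keras.applications.MobileNetV2(
    weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False                       # keep the pretrained features

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(128, activation="relu"),
    layers.Dense(len(CLASSES), activation="softmax"),
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy", metrics=["accuracy"])
```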

4. Simulation and Discussion

After defining the main components of the proposed intelligent AI framework in two phases, this section starts by outlining the hardware specifications of the experimental setup and the testbed structure. Then, practical measurements and results are presented. Figure 4 shows the final drone installation with all of its hardware elements. The UAV consists of a 6-axis hexacopter frame kit, Electronic Speed Controllers (ESCs), 6 propellers, brushless DC motors, a DC electric source, a drone battery, and a Pixhawk controller using the Mission Planner software tool. A camera for aerial imaging, a GPS sensor for locating the drone, a transceiver module for connecting the drone with the GCC, and a battery for powering components are the four main parts connected to the Raspberry Pi 3 B+ microcontroller fitted on board the drone. The conical tank system for the water and lotus-effect liquid is fabricated using a 3D printer. For the sake of the experiment, the tank has been designed to meet the size and weight capability of the drone, yet scalability is possible.

The experiment took place at the Taif University campus on the 19th and 20th of March 2021 at 1 p.m., at altitudes ranging from 10 to 30 meters. The university's new campus is located at longitude 40.4867323 and latitude 21.3320348, in the Saysad District of Taif City, Saudi Arabia. The Mission Planner software was used when flying the drone at the university campus. Additionally, the weekend was chosen to minimize the risk of any possible accident when flying a drone over a number of people.

4.1. Experimental Setup

This section contains the experimental setup along with visual results of the designed drone and its AI framework, as shown in Figures 5–7. Before presenting the visual results of the experiment, a brief discussion is provided of the process cycle of utilizing an autonomous drone equipped with various payloads and coupled with the CNN framework to clean roof-mounted solar panels. The process can be summarized as follows (a minimal sketch of the decision cycle is given after this list):
(i) The designed drone was trained to fly autonomously by relying on the VGG-16 CNN network.
(ii) With the help of the GPS, the drone should be able to locate the solar panels and report that to the GCC.
(iii) The camera fitted on board the drone should be able to take pictures of the solar panels and send them to the cloud for deeper analysis and storage purposes.
(iv) By relying on the MobileNetV2 framework, the drone can determine whether solar panels are defective or not; thus:
(1) If the solar panels are clean, no action is needed.
(2) If the solar panels are not clean, clean them with water and apply the lotus visor.
(3) If there are no solar panels, relaunch the drone flight and search again.
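The sketch below expresses this per-frame decision cycle as runnable toy code; classify() is a stand-in for MobileNetV2 inference, and the coordinates are placeholders, not the testbed's actual values.

```python
# Runnable toy sketch of the per-frame decision cycle listed above.
# classify() is a stand-in for MobileNetV2 inference; the coordinates
# are placeholders, not the testbed's actual values.
import random

def classify(frame):
    return random.choice(["cleaned_panel", "not_cleaned_panel", "no_panel"])

def mission_step(frame, coords):
    label = classify(frame)
    if label == "cleaned_panel":
        return "no action"
    if label == "not_cleaned_panel":
        print(f"reporting dirty panel at {coords} to the GCC")
        return "clean with water, then apply the lotus visor"
    return "relaunch and search again"       # no panel detected in view

print(mission_step(frame=None, coords=(21.3320348, 40.4867323)))
```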

The size of a solar panel is 0.5 m in width by 1.2 m in length, so the drone sprays an appropriate amount of water to remove dust and dirt from the solar panels for 2–3 seconds. Then, an appropriate amount of the lotus shield is sprayed on the surface of the solar panel for 10 seconds to form a protective layer lasting around 9 months. The spraying process is done vertically from top to base. Figure 5 shows the two steps of cleaning the solar panels, where (a) the drone starts pumping water on the solar panel and (b) the drone applies the lotus effect on the solar panel for protection.

It is worth shedding light on the dataset that has been used in this work for training, testing, and validation. Due to the nonavailability of a sufficiently sized, good-quality solar panel image dataset on public dataset platforms, a primary dataset has been considered: a balanced, good-sized, good-resolution dataset of solar panel images taken at the Taif University campus. The dataset includes defective and nondefective images of solar panels to achieve effective training, CNN classification, and validation. Figure 6 shows a representative sample of the primary dataset. Many different samples of solar panels have been taken into consideration for better training, testing, and validation, covering different angles of view, different types of solar panels, and defective and nondefective solar panels. Our entry point to the simulation is this primary dataset of 350 samples, which were divided into three subsets: a training set, a testing set, and a validation set, where 245 samples were dedicated to the training phase, 52 to the testing phase, and 52 to the validation phase. The distribution of the total dataset was thus 70%, 15%, and 15% for training, testing, and validation, respectively; a split along these proportions is sketched below.
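For illustration, a simple split along these proportions could be written as follows; the shuffle seed and rounding behavior are assumptions (note that 15% of 350 is 52.5, which the paper reports as 52/52 for testing/validation).

```python
# Sketch of the 70/15/15 split over the 350-image primary dataset. The
# shuffle seed and rounding are assumptions; 15% of 350 is 52.5, which
# the paper reports as 52/52 for testing/validation.
import random

def split_dataset(samples, train=0.70, test=0.15, seed=42):
    samples = list(samples)
    random.Random(seed).shuffle(samples)
    n_train = round(len(samples) * train)            # 245 of 350
    n_test = round(len(samples) * test)              # ~52 of 350
    return (samples[:n_train],
            samples[n_train:n_train + n_test],
            samples[n_train + n_test:])

train_set, test_set, val_set = split_dataset(range(350))
print(len(train_set), len(test_set), len(val_set))   # 245 52 53
```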

Figure 7 illustrates the output results of applying the proposed MobileNetV2 for object detection and recognition missions. This mission can be done by coupling the camera on board the drone with the MobileNetV2 framework. In our case, the framework can distinguish between the solar panel and other elements either on the top of buildings (e.g., roof windows and dish antennas) or even directly on the solar panels (e.g., tree leaves and dust). Figure 8 shows the defective and nondefective solar panel output at the training and testing stage of the proposed MobileNetV2 network. Additionally, the network was trained and tested on different classes of objects, including leaves, dust, pens, and tools. Thus, it shows the ability to recognize whether an object is a solar panel or not and, if so, whether it is defective or nondefective. This is vital for a cleaning process without human intervention.

Various characteristics of the lotus effect attracted us to consider it in this work for cleaning solar panels and thus enhancing solar energy performance. Examples of these characteristics are preserving many materials for as long as possible, avoiding constant maintenance and cleaning work, decreasing the amount of water used in the cleaning process, and reducing cleaning time and effort. The panels have a dye to absorb visible light fully, and the nano glass minimizes the dust layer that would otherwise form over time, thus helping the sunlight reach the panels. Experimentally, applying the lotus-effect visor increased the power efficiency of the solar panels by up to 31%. Further, we found that 1 L of lotus-effect liquid could cover around 70 m2 of solar panel sheet. Furthermore, the current version of the drone can fly for up to 30 minutes and cover around 25 solar panels per trip, which represents around 30 m2 of solar photovoltaic (PV) panels.

4.2. Validation Discussion

This section highlights two main performance indicators of the proposed AI framework: first, the VGG-16 network loss and accuracy, and second, the MobileNetV2 accuracy, along with the confusion matrix. Figure 9 shows the VGG-16 network training and validation accuracy and loss. Clearly, the 98.6% accuracy of the autonomous drone is a very good result that gives confidence in the drone's intelligent navigation. Figure 10 displays the MobileNetV2 training accuracy against root mean square error (RMSE) and loss, showing an accuracy of 99.1% for object detection and recognition. Figure 11 presents a confusion matrix, one of the well-recognized performance indicators in classification. Images of solar panels are classified with high accuracy into defective or nondefective classes. This is vital for a cleaning process without human intervention.

Table 2 shows the confusion matrix performance of the proposed MobileNetV2 framework. The table shows the four performance indicators of the confusion matrix after the training, testing, and validation steps: indicator 1, sensitivity (out of all the actual positive classes, how many were predicted correctly); indicator 2, precision (out of all the classes predicted as positive, how many are actually positive); indicator 3, accuracy (out of all the classes, how many were predicted correctly); and indicator 4, the F1-score, which represents the harmonic mean of precision and recall. As can be seen in Table 2, the overall observation is that the proposed system is trained and classifies with a high F1-score, close to the value of 1. In turn, this confirms that the proposed MobileNetV2 framework works well with a high level of accuracy. These indicators can be computed from the confusion matrix counts as in the sketch below.
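The following sketch computes the four indicators from binary confusion matrix counts (defective vs. nondefective); the example counts are placeholders, not the values reported in Table 2.

```python
# Sketch of the four confusion matrix indicators described above; the
# example counts are placeholders, not the values reported in Table 2.
def confusion_metrics(tp, fp, fn, tn):
    sensitivity = tp / (tp + fn)                  # recall over actual positives
    precision = tp / (tp + fp)                    # correctness of predicted positives
    accuracy = (tp + tn) / (tp + fp + fn + tn)    # overall correctness
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return sensitivity, precision, accuracy, f1

print(confusion_metrics(tp=50, fp=1, fn=1, tn=52))  # placeholder counts
```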

To give our validation more depth, another performance indicator, the system efficiency, is used for verification, as presented in Figure 12. The figure demonstrates a line graph of the calculated input solar radiation (Prad) and the output electrical power (Pel), which refers to the measured current of the roof-mounted solar panels in relation to their voltage characteristics, at four stages: dusty condition, partially dusty condition, clean condition, and clean condition with lotus effect. Clearly, there is a direct correlation between dust deposition accumulating on the roof-mounted solar panel surface and the output power of the energy system. Thus, as the dust deposition increases, a noticeable reduction of the output power, and thus of the system efficiency, is observed. Another observation is that when the lotus effect is applied to the roof-mounted solar panels, the system efficiency increases. That is because, with the lotus effect, droplets slide off carrying dust and dirt, leaving the surface clean. This self-cleaning serves as a protection layer and hence gives better system efficiency. Mathematically, there is around a 31% improvement in the solar power efficiency between the dusty condition and the clean condition with the lotus effect.
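As a worked illustration of this comparison, the sketch below computes the efficiency as Pel/Prad for the dusty and lotus-cleaned conditions; the wattage values are placeholders chosen to reproduce the reported ~31% figure, not measurements from the experiment.

```python
# Worked illustration of the ~31% efficiency improvement reported above,
# with efficiency = Pel / Prad. The wattage values are placeholders chosen
# to reproduce the reported figure, not measurements from the experiment.
def efficiency(p_el, p_rad):
    return p_el / p_rad

P_RAD = 1000.0                             # assumed input solar radiation (W)
eta_dusty = efficiency(120.0, P_RAD)       # dusty condition (placeholder Pel)
eta_lotus = efficiency(157.2, P_RAD)       # clean + lotus effect (placeholder)
improvement = (eta_lotus - eta_dusty) / eta_dusty
print(f"improvement: {improvement:.0%}")   # -> 31%
```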

4.3. Comparison

This section presents a comparison between the proposed MobileNetV2 and other neural network techniques that have been widely used for object detection and recognition missions, as shown in Table 3. The techniques used are support vector machines (SVM), random forest, CNN ResNet 50, and SqueezeNet, which are considered representative examples of their groups. To note, the same dataset has been used for all techniques for the sake of unification and validation. Clearly, the table shows that the models differ in computation cost and accuracy rate, and that the proposed MobileNetV2 achieved better accuracy with less computation cost in comparison to the other models. That is due to the total number of parameters and neural layers, which affect the performance and computation cost.

5. Conclusion

This work seeks to help workers in the field of cleaning solar panels by providing unconventional methods of cleaning, hence enhancing the panels' efficiency. This paper is aimed at enhancing the performance of solar panels mounted on the tops of buildings by designing an autonomous and intelligent drone, where the drone is equipped with payloads and paired with an AI framework to fulfil specific functions, not only to enhance the performance of solar panels but also to resolve the endangerment of human lives at such high installation places. The proposed drone with its payloads and AI framework has successfully helped in detecting objects on the solar panels using the machine learning method, before the drone utilizes water from the 3D-designed conical tank to clean the solar panels and then sprays the lotus-effect liquid on them for long-term protection. The proposed framework shows great performance in training the drone to fly autonomously, as well as in detecting solar panels with a high level of accuracy. Further, the system performance for both clean and dusty panels has been evaluated, showing around a 31% improvement in the solar power efficiency. It is worth saying that the experiment of flying the drone was not easy or successful from the start; it took multiple attempts, as we faced some issues. These issues are open research topics that can broaden the horizon of many applications to consider in the future. Examples are the following:
(i) Station keeping (drone stability) under weather conditions (e.g., wind) is the key challenge, especially at lower altitudes, where the drone might lose connectivity or require more power consumption for tracking and positioning.
(ii) The drone's load capability affects the quantity of water and lotus-effect liquid carried, so it could be beneficial to use a larger drone and/or rely on a tethered drone.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare no conflicts of interest.

Acknowledgments

The authors are grateful to the Deanship of Scientific Research at Taif University, Kingdom of Saudi Arabia, for funding this project through Taif University Researchers Supporting Project number (TURSP-2020/265). The work of Amani Abdulrahman Albraikan was supported in part by Princess Nourah Bint Abdulrahman University Researchers Supporting Project Number (PNURSP2022R190). Furthermore, the authors are thankful to Aisha Abdullah, Azouf Alqethami, Aisha Hendawi, Wafaa Alotaibi, Maram Aljuaid, Kholod Almalki, Wejdan Alqethami, Reham Aljuaid, and Khadija Sherif for their cooperation.