Abstract

This study investigates artificial intelligence methods for sizing solar power systems, including standalone, grid-connected, and hybrid systems, in order to lessen their environmental impact. When all essential information is available, conventional sizing methods may be a feasible alternative; where such data are unavailable, typical procedures cannot be applied. The proposed artificial intelligence model, based on a multilayer perceptron (MLP), is employed for sizing solar systems. The model operates on current photovoltaic modules that incorporate hybrid-sizing models, so such modules need not be rejected entirely. In this work, the convergence speed of the proposed model for the single-diode, double-diode, and three-diode cases is used as the comparison factor to assess the performance of the proposed model.

1. Introduction

As a result of the global energy crisis, many scientists and engineers are focusing their attention on renewable energy sources [1]. These circumstances have compelled researchers to investigate new methods, materials [2], and strategies for converting sunlight into electrical energy or other forms of energy. The conversion of solar radiation into electrical energy is accomplished through the use of photovoltaic (PV) systems. The main difficulty in implementing solar systems is the high cost of their installation, and a large body of literature is devoted to making these systems more effective while also being less expensive. It has been demonstrated that artificial intelligence (AI) algorithms have a substantial impact on the performance of PV systems [3]. In photovoltaic systems, AI algorithms can be utilized for modelling, sizing, control, fault diagnostics, and output estimation, and the literature compares AI algorithms with classical algorithms for each type of application [4]. Figure 1 shows the AI applications in solar panels.

Slicing is the task of separating information about previous predictions; it helps the deep learning model predict the output details needed to construct the solar panel outputs. The control unit is the module that governs the solar tracking and energy management modules. Research into photovoltaic systems relies strongly on accurate modelling of solar cells, and the various applications of PV systems are shown schematically in Figure 2. To model a PV system, it is first necessary to determine the parameters of the system numerically. The solar cell circuits with a single diode and with two diodes are both equivalent circuits in terms of performance [5]. Each of the five properties of the single-diode model is represented by a single value: the diode saturation current, diode ideality factor, series resistance, shunt resistance, and photogenerated current [6]. The double-diode model has seven parameters. The modelling and sizing of PV systems depend on obtaining accurate estimates of these parameters. Several standard procedures for determining the parameters of solar cells have been published and are available in the literature [7].

An analytic-numerical technique for the five-parameter single-diode model is described in [8]. Initially, the analytic part acts as a starting point from which the numerical solution is constructed. A pattern search approach [9] for single-diode and double-diode models and photovoltaic modules has also been described. Traditionally used methodologies, on the other hand, are unable to precisely predict the properties of solar-electricity-generating modules, and many scientists have turned to AI technologies for parameter identification as a result [10–15]. The synergy between artificial intelligence and other technologies can be used to build extremely powerful computer systems. A frequent theme in these alternative techniques is the attempt to compensate for the flaws of traditional procedures. When designing computing systems, the goal is to make them better, more efficient, and more effective in certain situations; the ability to learn and extrapolate from current knowledge may be required to achieve this goal. With the right application of intelligent technologies, it is possible to develop usable systems that are superior to those developed using traditional methodologies.

Many wealthy countries have also instituted specific strategies to encourage the use of renewable resources. Renewable energy (RE) technologies such as photovoltaic (PV) technology are among the most promising options for current power production. Deep learning provides predictions based on earlier performance and helps to adjust the necessary actions in lagging locations. The major contribution of this paper is to enhance the energy levels of the photovoltaic panels' capacity. These improvement predictions are performed with the help of an artificial intelligence-based deep learning approach, so that the energy generation and management activities are carried out accordingly. The results of the proposed model reach a higher level than those of existing models.

An ANN is made up of a collection of small, interconnected processing units that essentially act as channels for the transmission of information. Each incoming connection carries both an input and a weight, and the output of a unit is calculated by summing all of the weighted values. Even though ANNs are implemented on computers, there are no preprogrammed jobs associated with them; instead, through training, they learn to recognize patterns in the datasets that are used as inputs. Once trained, new patterns can be submitted to them for prediction or classification. Artificial neural networks [16–20] can be trained from a variety of sources, including computer programmes, physical models, and real-world systems. An ANN can process a large number of inputs and produce outputs that are directly usable by designers. To develop artificial neural networks, it is necessary to have a thorough understanding of the human brain and nervous system, and processing units must be connected with links of various weights to generate a black-box representation of the system.

In an artificial neural network, the neurons in each layer are connected to those in the adjacent layers by a network of weighted connections. Data are fed into the network through the input layer, while the network's response to the data is produced at the output layer; there may be additional hidden layers between the input and output layers. In every hidden and output neuron, the weighted sum of its inputs is processed through a nonlinear transfer function to obtain the neuron's output [21–26].
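As an illustration of this layered weighted-sum computation, the following is a minimal NumPy sketch of a forward pass through one hidden layer and one output layer; the layer sizes, random weights, and tanh transfer function are arbitrary assumptions chosen for demonstration only, not the configuration used later in this paper.

```python
import numpy as np

def forward_pass(x, W_hidden, b_hidden, W_out, b_out):
    """Propagate an input vector through one hidden and one output layer.

    Each neuron forms the weighted sum of its inputs and passes it
    through a nonlinear transfer function (tanh here, as an example).
    """
    hidden = np.tanh(W_hidden @ x + b_hidden)    # weighted sum + nonlinearity
    output = W_out @ hidden + b_out              # linear output layer
    return output

# Example with arbitrary, randomly initialized weights (illustrative only).
rng = np.random.default_rng(0)
x = np.array([0.3, -1.2])                        # two input features
W_h, b_h = rng.normal(size=(4, 2)), np.zeros(4)  # 4 hidden neurons
W_o, b_o = rng.normal(size=(1, 4)), np.zeros(1)  # 1 output neuron
print(forward_pass(x, W_h, b_h, W_o, b_o))
```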

Even in the absence of predefined rules, a neural network is capable of processing massive volumes of data, and it can do so even when the input is skewed or otherwise inaccurate. Up to this point, traditional symbolic or logic-based strategies have been ineffective at providing these capabilities [27–30]. Users of conventional computer systems and artificial intelligence techniques will soon realize that neural computing can be used as an alternative or as a complement. Once the network has been trained, neural computing has the potential to provide a significant speed advantage over classical computing. When it comes to implementing changes, the capacity of a system to be trained using datasets rather than written code may be more cost-effective and convenient than rewriting code. In an ANN, the interconnection weights are altered as the means of gaining knowledge about the system. Many artificial neural networks (ANNs) are employed in the solution of problems such as pattern matching and data compression, and as a promising and rising technology, ANNs have become a popular tool for forecasting and prediction [31, 32].

Fuzzy techniques can be incorporated into neural networks to improve their performance. One possible solution is to allow a neural network to process ambiguous input; fuzzifying crisp input data before it goes through fuzzy neural processing is another way that has been proposed [33]. Modified neurons can be placed in the nodes at every layer of the network to translate fuzziness in the input into crispness at the output. Both the elements of the input vector and the weights connecting a node to the nodes in the preceding layer take fuzzy values, and both are defined by membership functions. Two membership functions are needed to complete the computation: one that reflects the weights of the fuzzy inputs using an updated summation method, and another that represents the original weighted integration. The output of the node can then be reduced to a crisp value by performing a centroid operation on the result [34, 35].
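To make the fuzzification and defuzzification steps concrete, the following sketch fuzzifies a crisp input with triangular membership functions and recovers a crisp output through a centroid operation; the membership shapes, labels, and numbers are illustrative assumptions rather than the exact scheme of the cited fuzzy-neural methods [33–35].

```python
import numpy as np

def triangular(x, a, b, c):
    """Triangular membership function with feet at a, c and peak at b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def centroid(universe, membership):
    """Crisp value as the centroid of an aggregated membership curve."""
    return np.sum(universe * membership) / np.sum(membership)

# Fuzzify a crisp input (e.g., a normalized quantity) into three fuzzy sets.
x = 0.62
low  = triangular(x, -0.5, 0.0, 0.5)
med  = triangular(x,  0.2, 0.5, 0.8)
high = triangular(x,  0.5, 1.0, 1.5)
print("memberships:", low, med, high)

# Defuzzify an aggregated output membership curve back to a crisp value.
u = np.linspace(0.0, 1.0, 101)
aggregated = np.maximum(med * triangular(u, 0.2, 0.5, 0.8),
                        high * triangular(u, 0.5, 1.0, 1.5))
print("crisp output:", centroid(u, aggregated))
```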

3. Proposed Method

Research and development on genetic and neural systems have seen a significant surge in recent years, particularly since the late 1980s. Recent research [15–20] has focused on the application of evolutionary algorithms to improve the operation and design of neural networks, and the combined use of genetic algorithms, artificial neural networks, and problem-solving methodologies is also being investigated. New evolutionary computing approaches can make neural networks easier to tune and design, and genetic algorithms can also be used to improve the performance of neural learning methods. The single-diode model consists of five properties, each represented by a single value: the diode saturation current, diode ideality factor, series resistance, shunt resistance, and photogenerated current. Sizing renewable energy power systems is a complicated procedure because of the difficulty of achieving coordination between renewable energy resources, generators, energy storage, and loads.
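Since the sizing work builds on diode-based PV models, the sketch below shows the standard single-diode relation between these parameters and the cell current, solved here by a simple fixed-point iteration; the numerical parameter values are placeholders and the solver choice is an assumption, not the method used in this paper.

```python
import numpy as np

def single_diode_current(V, Iph, I0, n, Rs, Rsh, T=298.15, iters=200):
    """Cell current of the single-diode model at terminal voltage V.

    Solves  I = Iph - I0*(exp((V + I*Rs)/(n*Vt)) - 1) - (V + I*Rs)/Rsh
    by fixed-point iteration (adequate for typical parameter ranges).
    """
    k, q = 1.380649e-23, 1.602176634e-19
    Vt = k * T / q                      # thermal voltage
    I = Iph                             # initial guess near the short-circuit current
    for _ in range(iters):
        I = Iph - I0 * (np.exp((V + I * Rs) / (n * Vt)) - 1.0) - (V + I * Rs) / Rsh
    return I

# Placeholder parameter values for illustration only.
print(single_diode_current(V=0.5, Iph=8.2, I0=1e-9, n=1.3, Rs=0.005, Rsh=150.0))
```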

The components of a PV power system include an array of solar cells, an energy storage component, and additional accessories. The photovoltaic (PV) array converts solar energy into direct current electricity, and PV modules are connected together to form an array. Electrical energy is stored in the storage component (typically a battery) and can be accessed later as needed. The control components of the system are in charge of regulating its operation; this includes a PV array tracker to maximize the quantity of energy received from the solar panels. With the help of a power processor, the photovoltaic array output is converted into a form that can be used by the end user. Figure 3(a) shows the flowchart for estimating the sizing parameters based on the MLP.

Fuel consumption and maintenance costs are minimized when the idle time of the engine-generator is reduced and the engine operates at its peak efficiency. Renewable energy sources, energy storage components, and engine-generators must all work together to ensure the system's stability, which requires careful coordination. Genetic algorithms are very useful for understanding artificial neural networks and problem-solving methodologies, and this creates a better understanding of solar panel power production management and energy optimization systems. For their operation to be successful, grid-connected systems must be tied to electricity supplied by a utility provider; in these configurations, the utility grid acts as both the system storage and the backup power supply. To comply with utility power quality regulations for these systems, DC-AC inverters must be employed.

According to Figure 3(a), the MLP can be used to estimate the sizing parameters. The model receives the location latitude and longitude as inputs and returns two hybrid-sizing parameters as outputs. The block diagram of the model is depicted in Figure 3(b). The number of important sizing parameters has been determined from the results of the hybrid-sizing approach. Based on this information, it is concluded that the relative error does not surpass 6%.
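As a small illustration of how that 6% bound can be checked, the snippet below computes the relative error between hypothetical MLP-predicted sizing parameters and reference values from the hybrid-sizing method; all numerical values are invented placeholders.

```python
import numpy as np

# Hypothetical reference (hybrid-sizing) and MLP-predicted sizing parameters.
reference = np.array([1.42, 0.87])
predicted = np.array([1.39, 0.91])

relative_error = np.abs(predicted - reference) / np.abs(reference) * 100.0
print(relative_error)                        # percent error per parameter
print(bool(np.all(relative_error <= 6.0)))   # within the reported 6% bound?
```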

Hybrid slicing is the function that manages the different entity-based or requirement-based slicing and is used to separate the data-type-based slicing modules. Predictions made by the model include classifications based on which dataset is used as the training set. Based on the results of fine-tuned hyperparameter studies, the proposed MLP design includes a hidden layer with 16 neurons and an output layer with 16 neurons.
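A minimal Keras sketch of such an MLP is given below. It assumes the two geographic inputs (latitude and longitude) and the two hybrid-sizing outputs described with Figure 3(b), together with the 16-neuron hidden layer of the proposed design; it is an illustrative reconstruction, not the authors' exact implementation.

```python
import tensorflow as tf

# Illustrative MLP: latitude and longitude in, two hybrid-sizing parameters out.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(2,)),                      # inputs: latitude, longitude
    tf.keras.layers.Dense(16, activation="relu"),    # hidden layer with 16 neurons
    tf.keras.layers.Dense(2),                        # assumed: two sizing outputs
])
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001), loss="mse")
model.summary()
```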

To study the effect of the number of neurons in one hidden layer on performance, the count is increased from 2 to 128, doubling at each step. Once the study has identified the optimal number of neurons (call it $N^{*}$), additional hidden layers (each with $N^{*}$ neurons) are added to examine how the prediction accuracy changes as the number of hidden layers increases. For prediction tasks, the sigmoid function is employed in MLP models that perform binary classification. In deep learning architectures, the sigmoid function is widely used in the output layers: input values over the real line are transformed into output values in the range [0, 1]. Because it compresses (squashes) all input values into this range, the sigmoid is also known as a squashing function. For gradient-based learning, the sigmoid is a natural option because it approximates a thresholding unit in a smooth and differentiable manner. The sigmoid function can be represented mathematically as

$$\sigma(x) = \frac{1}{1 + e^{-x}},$$

where $x$ is the data computed by the MLP layer.
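A minimal NumPy rendering of this function, confirming that arbitrary inputs are squashed into the interval (0, 1); the example inputs are arbitrary.

```python
import numpy as np

def sigmoid(x):
    """Logistic sigmoid: maps any real input into the open interval (0, 1)."""
    return 1.0 / (1.0 + np.exp(-x))

z = np.array([-5.0, -1.0, 0.0, 1.0, 5.0])   # example pre-activations
print(sigmoid(z))                            # values lie strictly between 0 and 1
```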

With the softmax function and $K$ classes, scenarios that require multi-class classification can be handled. The softmax transforms a vector of real-valued scores into a probability distribution whose components sum to one, and it is therefore normalized as follows:

$$\mathrm{softmax}(z)_i = \frac{e^{z_i}}{\sum_{j=1}^{K} e^{z_j}}, \qquad i = 1, \ldots, K.$$
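A numerically stable NumPy sketch of the softmax follows; the max-subtraction trick is an implementation detail assumed here, not something specified in the text.

```python
import numpy as np

def softmax(z):
    """Normalize a score vector into a probability distribution over K classes."""
    shifted = z - np.max(z)          # subtract the max for numerical stability
    exp_z = np.exp(shifted)
    return exp_z / np.sum(exp_z)

scores = np.array([2.0, 1.0, 0.1])   # example class scores
p = softmax(scores)
print(p, p.sum())                    # probabilities summing to 1
```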

The ReLU activation function is also implemented in our approach to some extent. In its most basic form, the ReLU is given by

$$\mathrm{ReLU}(x) = \max(0, x).$$
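An equally small NumPy sketch of the ReLU, with arbitrary example inputs:

```python
import numpy as np

def relu(x):
    """Rectified linear unit: passes positive inputs, zeroes negative ones."""
    return np.maximum(0.0, x)

print(relu(np.array([-2.0, -0.5, 0.0, 0.7, 3.0])))   # -> [0. 0. 0. 0.7 3.]
```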

In deep learning systems, ReLU has been the most frequently used activation function, and it has been responsible for some of the most cutting-edge results available to researchers to date. In contrast to the sigmoid and tanh activation functions, the ReLU activation function helps deep learning models achieve improved performance and generalization. Because this activation function is piecewise linear, models using it are easier to optimize with gradient descent. ReLU transforms the output of the previous hidden layer and serves as the activation function for the current hidden layer. Using ReLU, neural networks can be improved by accelerating their training, which in turn enhances their overall performance. On its positive side the ReLU has a constant gradient, unlike the saturating gradients of the logistic and hyperbolic tangent functions, while on its negative side the gradient is zero.

Since training then progresses at a faster rate, the positive components are updated more frequently. To mitigate overfitting concerns, we employ the early stopping technique with a patience of five epochs: the learning process is terminated after five consecutive epochs without improvement. Otherwise, the learning process is terminated after 10 epochs, unless otherwise specified. In the network implementation, the Adam optimizer is used with a batch size of 100 and a learning rate of 0.001.
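The training configuration described above could be expressed in Keras roughly as follows; the placeholder training arrays, the validation split, and the maximum epoch count are assumptions for illustration, and the model is the same sketch shown earlier rather than the authors' exact implementation.

```python
import numpy as np
import tensorflow as tf

# Placeholder training data: latitude/longitude inputs and two sizing targets.
X_train = np.random.rand(1000, 2).astype("float32")
y_train = np.random.rand(1000, 2).astype("float32")

model = tf.keras.Sequential([
    tf.keras.Input(shape=(2,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(2),
])

# Adam optimizer with the stated learning rate of 0.001.
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001), loss="mse")

# Early stopping with a patience of five epochs, as described above.
early_stop = tf.keras.callbacks.EarlyStopping(monitor="val_loss", patience=5,
                                              restore_best_weights=True)

model.fit(X_train, y_train,
          batch_size=100,            # batch size of 100, as stated
          epochs=200,                # assumed maximum; early stopping usually ends sooner
          validation_split=0.2,      # assumed validation fraction
          callbacks=[early_stop],
          verbose=0)
```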

4. Results and Discussions

The PC utilized to conduct this investigation has the following specifications: an Intel (R) Core (TM) M380 CPU running at 2.53 GHz, 2 GB of RAM, and a 64-bit operating system. To validate the MLP-based PV model, a polycrystalline KC200GT PV module is used. Information such as the PV module datasheet values, the number of search agents (whales), and the number of iterations is supplied as input to the algorithm. Initially, the algorithm generates a random sample of solutions in the domain of interest. The total number of candidate solutions in this study equals the number of search agents, which is set to 30. Figure 4 shows the convergence speed.

The fitness function is then evaluated for each position, after which the best fitness value is identified. According to the individual fitness of each whale, a control parameter decreasing from 2 to 0 is chosen and used to compute two other coefficients, and the location of each search agent is updated in response to these values. The value of this parameter controls whether the update follows a spiral or a circular path. The previous stages are repeated until a termination criterion is reached; for the sake of this investigation, the procedure was terminated after 500 iterations. As illustrated in Figure 4, where convergence takes around 2.8 s, the advantage of the rapid convergence speed is obvious. The algorithm code is developed using the Matlab programming language. According to the MLP results, the single-diode PV model yields the best optimal values, followed by the double-diode PV model and then the three-diode PV model.

When compared to models constructed using iterative and GA techniques, the MLP-based single-diode PV model presented in this research performs significantly better. The Matlab Optimization Toolbox, which is an integrated part of Matlab, was used to optimize the fitness function of a three-diode PV model using GA and SA techniques and to demonstrate the corresponding outcomes. MLP-based optimization is competitive when compared to other heuristic optimization strategies. Based on these findings, we may conclude that the MLP-derived PV model parameter values are within a suitable range and are equivalent to those obtained using other optimization methods such as genetic algorithms. Upon further examination of the curves under various temperatures and levels of irradiation, it was found that the three-diode PV model is accurate in all circumstances, and the performance results are shown in Tables 1–4.

5. Conclusions

As a result of these applications, it is necessary to emphasize the importance of using artificial intelligence for PV system sizing. Artificial intelligence techniques have demonstrated that it is feasible to properly and successfully size photovoltaic systems; in general, they can precisely size PV systems based on certain readily available data. It is undeniable that artificial intelligence-based solutions for PV system sizing are becoming increasingly popular, particularly in rural areas, and AI can be employed as a design tool to assist in determining the proper size of solar PV systems. No attempt has been made to depict every possible application of AI, and the examples offered here are by no means exhaustive in scope. This research investigates several AI solutions for sizing PV power systems, including standalone, grid-connected, and hybrid systems, to reduce their environmental impact. When all of the relevant information is available, conventional methods of sizing may be a viable option; where such data are unavailable, traditional approaches cannot be used. Thus, this work showed that the proposed model provides the best results when compared with existing models.

Data Availability

The data used to support the findings of this study are included within the article. Further data or information is available from the corresponding author upon request.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

The authors appreciate the support from Mizan Tepi University, Ethiopia, for the research and preparation of the manuscript.