Research Article  Open Access
Several New Third-Order and Fourth-Order Iterative Methods for Solving Nonlinear Equations
Abstract
In order to find the zeros of nonlinear equations, in this paper we propose a family of third-order and optimal fourth-order iterative methods. We have also obtained some particular cases of these methods. These methods are constructed through the weight function concept. The multivariate case of these methods has also been discussed. The numerical results show that the proposed methods are more efficient than some existing third- and fourth-order methods.
1. Introduction
Newton’s iterative method is one of the eminent methods for finding roots of a nonlinear equation $f(x) = 0$: $x_{n+1} = x_n - f(x_n)/f'(x_n)$, $n = 0, 1, 2, \ldots$. Recently, researchers have focused on improving the order of convergence by evaluating additional functions and first derivatives. In order to improve the order of convergence and the efficiency index, many modified third-order methods have been obtained by using different approaches (see [1–3]). Kung and Traub [4] presented a conjecture on the optimality of iterative methods, giving $2^{n-1}$ as the optimal order for a method with $n$ function evaluations per iteration. It means that the Newton iteration, with two function evaluations per iteration, is optimal, with efficiency index $2^{1/2} \approx 1.414$. By using the optimality concept, many researchers have tried to construct iterative methods of optimal higher order of convergence. The order of the methods discussed above is three with three function evaluations per full iteration. Clearly their efficiency index is $3^{1/3} \approx 1.442$, which is not optimal. Very recently, the concept of weight functions has been used to obtain different classes of third- and fourth-order methods; one can see [5–7] and the references therein.
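As a point of reference, the classical Newton iteration can be sketched as follows (a minimal illustration in Python; the test equation, starting point, and tolerance are our own illustrative choices, not part of the paper):

```python
def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    """Classical Newton iteration: x_{n+1} = x_n - f(x_n) / f'(x_n)."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:  # stop once the residual is small enough
            break
        x = x - fx / fprime(x)
    return x

# Example: solve x^2 - 2 = 0 starting from x0 = 1.5
root = newton(lambda x: x * x - 2, lambda x: 2 * x, 1.5)
```

Each step uses one evaluation of $f$ and one of $f'$, which is the two-evaluation pattern the Kung–Traub conjecture declares optimal for second order.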
This paper is organized as follows. In Section 2, we present a new class of third-order and fourth-order iterative methods by using the concept of weight functions; this class includes some existing methods and also provides some new ones. We have extended some of these methods to the multivariate case. Finally, we present some numerical examples and compare the performance of our proposed methods with some existing third- and fourth-order methods.
2. Methods and Convergence Analysis
First we give some definitions which we will use later.
Definition 1. Let $f$ be a real-valued function with a simple root $\alpha$ and let $\{x_n\}$ be a sequence of real numbers that converges towards $\alpha$. The order of convergence is $p$ if $\lim_{n \to \infty} (x_{n+1} - \alpha)/(x_n - \alpha)^p = C$, where $C$ is the asymptotic error constant and $C \neq 0$.
Definition 2. Let $n$ be the number of function evaluations of the new method. The efficiency of the new method is measured by the concept of the efficiency index [8, 9], defined as $E = p^{1/n}$, where $p$ is the order of convergence of the new method.
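The efficiency indices quoted in this paper can be checked directly from this definition (a small illustrative computation, not part of the original text):

```python
def efficiency_index(p, n):
    """Efficiency index E = p**(1/n): order p, n function evaluations per step."""
    return p ** (1.0 / n)

newton_ei = efficiency_index(2, 2)    # Newton: order 2, 2 evaluations -> ~1.414
third_ei = efficiency_index(3, 3)     # third order, 3 evaluations -> ~1.442
optimal4_ei = efficiency_index(4, 3)  # optimal fourth order, 3 evaluations -> ~1.587
```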
2.1. Third-Order Iterative Methods
To improve the order of convergence of Newton’s method, some modified methods are given by Grau-Sánchez and Díaz-Barrero in [10], Weerakoon and Fernando in [1], Homeier in [2], Chun and Kim in [3], and so forth. Motivated by these papers, we consider the following two-step iterative method (4), constructed from a weight function and a real constant. Now we find under what conditions it is of order three.
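The exact form of (4) depends on the chosen weight function. As a concrete member of this family of two-step third-order schemes, the arithmetic-mean method of Weerakoon and Fernando [1] can be sketched as follows (an illustrative Python implementation; the test equation $x^3 - 2 = 0$ is our own choice):

```python
def weerakoon_fernando(f, fp, x0, tol=1e-14, max_iter=50):
    """Two-step third-order method of Weerakoon and Fernando:
    y = x - f(x)/f'(x), then x_new = x - 2 f(x) / (f'(x) + f'(y))."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        y = x - fx / fp(x)            # Newton predictor
        x = x - 2 * fx / (fp(x) + fp(y))  # trapezoidal corrector
    return x

root = weerakoon_fernando(lambda x: x**3 - 2, lambda x: 3 * x**2, 1.0)
```

Like every method in this subsection, each step costs one evaluation of $f$ and two of $f'$.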
Theorem 3. Let $\alpha$ be a simple root of the function $f$ and let $f$ have a sufficient number of continuous derivatives in a neighborhood of $\alpha$. The method (4) has third-order convergence when the weight function satisfies the following conditions:
Proof. Suppose $e_n = x_n - \alpha$ is the error in the $n$th iteration and $c_k = f^{(k)}(\alpha)/(k!\, f'(\alpha))$, $k \geq 2$. Expanding $f(x_n)$ and $f'(x_n)$ around the simple root $\alpha$ with Taylor series, we have
Now it can be easily found that
By using (7) in the first step of (4), we obtain
At this stage, we expand around the root by taking (8) into consideration. We have
Furthermore, we have
By virtue of (10) and (4), we get
Hence, from (11) and (4) we obtain the following general error equation, which exhibits third-order convergence:
This proves the theorem.
Particular Cases. To find different third-order methods, we choose particular weight functions in (4).
Case 1. If we take in (4), then we get the formula: and its error equation is given by
Case 2. If we take in (4), then we get the formula: and its error equation is given by
Case 3. If we take in (4), then we get the formula: and its error equation is given by
Case 4. If we take in (4), then we get the formula: and its error equation is given by
Case 5. If we take in (4), then we get the formula: which is Heun’s formula [11].
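Heun’s third-order formula is commonly written as $x_{n+1} = x_n - (f(x_n)/4)\,[1/f'(x_n) + 3/f'(y_n)]$ with $y_n = x_n - (2/3)\, f(x_n)/f'(x_n)$; a minimal sketch (the test equation is our own illustrative choice):

```python
def heun(f, fp, x0, tol=1e-14, max_iter=50):
    """Heun-type third-order iteration:
    y = x - (2/3) f(x)/f'(x),
    x_new = x - (f(x)/4) * (1/f'(x) + 3/f'(y))."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        y = x - (2.0 / 3.0) * fx / fp(x)
        x = x - (fx / 4.0) * (1.0 / fp(x) + 3.0 / fp(y))
    return x

root = heun(lambda x: x * x - 3, lambda x: 2 * x, 2.0)
```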
Remark 4. By taking different values of the constant and the weight function in (4), one can get a number of third-order iterative methods.
2.2. Optimal Fourth-Order Iterative Methods
The order of convergence of the methods obtained in the previous subsection is three with three function evaluations (one function and two derivatives) per step. Hence their efficiency index is $3^{1/3} \approx 1.442$, which is not optimal. To get optimal fourth-order methods we consider the two-step scheme (22), built from two real-valued weight functions and a real constant. The weight functions should be chosen in such a way that the order of convergence reaches the optimal level four without using additional function evaluations. The following theorem indicates the required conditions on the weight functions and the constant in (22) for optimal fourth-order convergence.
Theorem 5. Let $\alpha$ be a simple root of the function $f$ and let $f$ have a sufficient number of continuous derivatives in a neighborhood of $\alpha$. The method (22) has fourth-order convergence when the constant and the weight functions satisfy the following conditions:
Proof. Using (6) in the first step of (22), we have Now we expand around the root by taking (24) into consideration. Thus, we have Furthermore, we have By virtue of (26) and (22), we obtain Finally, from (27) and (22) we obtain the following general error equation, which reveals the fourth-order convergence: This proves the theorem.
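As a concrete point of comparison, a classical optimal fourth-order scheme with the same evaluation pattern (one function and two derivative evaluations per step) is Jarratt’s method, which serves as the benchmark JM4 in Section 4; a sketch (the test equation is our own illustrative choice):

```python
def jarratt(f, fp, x0, tol=1e-14, max_iter=50):
    """Jarratt's optimal fourth-order method:
    y = x - (2/3) f(x)/f'(x),
    x_new = x - (f(x)/f'(x)) * (3 f'(y) + f'(x)) / (6 f'(y) - 2 f'(x))."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        fpx = fp(x)
        y = x - (2.0 / 3.0) * fx / fpx
        fpy = fp(y)
        x = x - (fx / fpx) * (3 * fpy + fpx) / (6 * fpy - 2 * fpx)
    return x

root = jarratt(lambda x: x * x - 5, lambda x: 2 * x, 2.0)
```

With order four and three evaluations per step, its efficiency index is $4^{1/3} \approx 1.587$, the optimal value by the Kung–Traub conjecture.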
Particular Cases
Method 1. If we take and , where , then the iterative method is given by and its error equation is given by
Method 2. If we take and , where , then the iterative method is given by and its error equation is given by
Method 3. If we take and , where , then the iterative method is given by and its error equation is given by
Method 4. If we take and , where , then the iterative method is given by and its error equation is
Method 5. If we take and , where , then the iterative method is given by which is the same as formula (11) of [12].
Method 6. If we take and , where , then the iterative method is given by and its error equation is
Remark 6. By taking different values of the constant and the weight functions in (22), one can obtain a number of fourth-order iterative methods.
3. Further Extension to the Multivariate Case
In this section, we extend some of the proposed third- and fourth-order methods to solve nonlinear systems; other methods can be extended similarly. The multivariate case of our third-order method (15) is given by (40), where the identity matrix and the Jacobian matrix of $F$ at the current iterate take the place of the scalar derivative. Let $x^{(0)}$ be any point in a neighborhood of the exact solution $x^*$ of the nonlinear system $F(x) = 0$. If the Jacobian matrix is nonsingular, then the Taylor series expansion for the multivariate case is given by (41)–(43). Here we denote the error in the $k$th iteration by $e^{(k)} = x^{(k)} - x^*$. The order of convergence of method (40) can be proved by the following theorem.
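As background for the multivariate extension, the basic Newton step $x^{(k+1)} = x^{(k)} - F'(x^{(k)})^{-1} F(x^{(k)})$, which forms the first step of (40), can be sketched for a 2×2 system as follows (a self-contained illustrative Python sketch; the example system is our own, and the 2×2 solver stands in for a general linear solve):

```python
def solve2(A, b):
    """Solve a 2x2 linear system A z = b by Cramer's rule."""
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [(b[0] * A[1][1] - b[1] * A[0][1]) / det,
            (A[0][0] * b[1] - A[1][0] * b[0]) / det]

def newton_system(F, J, x0, tol=1e-12, max_iter=50):
    """Multivariate Newton: x_{k+1} = x_k - J(x_k)^{-1} F(x_k),
    implemented as a linear solve rather than an explicit inverse."""
    x = list(x0)
    for _ in range(max_iter):
        Fx = F(x)
        if max(abs(v) for v in Fx) < tol:
            break
        dx = solve2(J(x), Fx)
        x = [x[0] - dx[0], x[1] - dx[1]]
    return x

# Example system: x^2 + y^2 = 4 and x*y = 1
F = lambda v: [v[0] ** 2 + v[1] ** 2 - 4, v[0] * v[1] - 1]
J = lambda v: [[2 * v[0], 2 * v[1]], [v[1], v[0]]]
sol = newton_system(F, J, [2.0, 0.5])
```

The higher-order multivariate methods of this section add a second step reusing the Jacobian at a predictor point, in direct analogy with the scalar two-step schemes.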
Theorem 7. Let $F: D \subseteq \mathbb{R}^n \to \mathbb{R}^n$ be sufficiently Fréchet differentiable in a convex set $D$ containing a root $x^*$ of $F(x) = 0$. Suppose that $F'(x)$ is continuous and nonsingular in $D$ and that $x^{(0)}$ is close to $x^*$. Then the sequence $\{x^{(k)}\}$ obtained by the iterative expression (40) converges to $x^*$ with order three.
Proof. For convenience of calculation, we substitute into the first step of (40). From (41), (42), and (43), we have
where , . Now from (46) and (44), we can obtain
where
By virtue of (47) the first step of the method (40) becomes
Taylor’s series expansion for the Jacobian matrix can be given as
Now
where
Taking the inverse of both sides of (51), we get
where
By multiplying (53) and (50), we get
where
and the values of the remaining coefficients are given below:
From multiplication of (47) and (55), we achieve
Substituting the above expression into the second step of (40), we get
The final error equation of method (40) is given by
Thus, we end the proof of Theorem 7.
The multivariate case of (33) is given by (61). The following theorem shows that this method has fourth-order convergence.
Theorem 8. Let $F: D \subseteq \mathbb{R}^n \to \mathbb{R}^n$ be sufficiently Fréchet differentiable in a convex set $D$ containing a root $x^*$ of $F(x) = 0$. Suppose that $F'(x)$ is continuous and nonsingular in $D$ and that $x^{(0)}$ is close to $x^*$. Then the sequence $\{x^{(k)}\}$ obtained by the iterative expression (61) converges to $x^*$ with order four.
Proof. For convenience of calculation, we substitute the required quantities into (61). From (46) and (50), we have From the above equation, we have With the help of (62) and (63), we can obtain By multiplying (64) by (58), we have where The final error equation of method (61) is given by which confirms the theorem.
4. Numerical Testing
4.1. Univariate Case
In this section, ten different test functions have been considered in Table 1 for the univariate case to illustrate the accuracy of the proposed iterative methods. The root of each nonlinear test function is also listed. All computations presented here have been performed in MATHEMATICA 8. Many areas of science and engineering require scientific computations of very high precision. We consider 1000-digit floating-point arithmetic using the “SetAccuracy” command. Here we compare the performance of our proposed methods with some well-established third-order and fourth-order iterative methods. In Table 2, we have denoted Heun’s method by HN3, our proposed third-order method (15) by M3, the fourth-order method (17) of [5] by SL4, the fourth-order Jarratt method by JM4, and the proposed fourth-order method by M4. The results are listed in Table 2.
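A standard way to verify the tabulated orders numerically is the computational order of convergence, $\rho \approx \ln|e_{n+1}/e_n| / \ln|e_n/e_{n-1}|$. The sketch below estimates it for Newton’s method, using Python’s decimal module for high-precision arithmetic in place of Mathematica’s SetAccuracy (an illustration only; the paper’s tables were produced in MATHEMATICA 8):

```python
from decimal import Decimal, getcontext
import math

getcontext().prec = 60  # 60-digit working precision

def coc_newton(x0, steps=4):
    """Run Newton on f(x) = x^2 - 2 and estimate the computational
    order of convergence from the last three errors."""
    root = Decimal(2).sqrt()
    x = Decimal(x0)
    errs = []
    for _ in range(steps):
        x = x - (x * x - 2) / (2 * x)  # Newton step for x^2 - 2
        errs.append(abs(x - root))
    e1, e2, e3 = errs[-3:]
    return math.log(float(e3 / e2)) / math.log(float(e2 / e1))

rho = coc_newton("1.5")  # expected to be close to 2 for Newton's method
```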
