Research Article | Open Access

Volume 2014 | Article ID 828409 | 11 pages | https://doi.org/10.1155/2014/828409

# Several New Third-Order and Fourth-Order Iterative Methods for Solving Nonlinear Equations

Revised 12 Dec 2013
Accepted 31 Dec 2013
Published 23 Feb 2014

#### Abstract

In this paper, we propose a family of third-order and optimal fourth-order iterative methods for finding the zeros of nonlinear equations, and we derive several particular cases of these methods. The methods are constructed through the weight function concept, and the multivariate case is also discussed. The numerical results show that the proposed methods are more efficient than some existing third- and fourth-order methods.

#### 1. Introduction

Newton’s iterative method is one of the most eminent methods for finding roots of a nonlinear equation $f(x) = 0$:
$$x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}, \quad n = 0, 1, 2, \ldots.$$
Recently, researchers have focused on improving the order of convergence by using additional evaluations of the function and its first derivative. In order to improve the order of convergence and the efficiency index, many modified third-order methods have been obtained by using different approaches (see ). Kung and Traub  conjectured that an iterative method using $n$ function evaluations per iteration can be of order at most $2^{n-1}$; a method attaining this bound is called optimal. It means that Newton’s iteration, with two function evaluations per iteration, is optimal, with $2^{1/2} \approx 1.414$ as the efficiency index. By using the optimality concept, many researchers have tried to construct iterative methods of optimal higher order of convergence. The order of the methods discussed above is three, with three function evaluations per full iteration. Clearly, their efficiency index is $3^{1/3} \approx 1.442$, which is not optimal. Very recently, the concept of weight functions has been used to obtain different classes of third- and fourth-order methods; one can see  and the references therein.
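Newton’s step above can be sketched in a few lines of Python (the test function $f(x) = x^3 - 2$ and all names here are illustrative choices, not from the paper):

```python
def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    """Newton's method: x_{n+1} = x_n - f(x_n)/f'(x_n)."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        x = x - fx / fprime(x)
    return x

# Example: the real cube root of 2 as the root of f(x) = x^3 - 2
root = newton(lambda x: x**3 - 2, lambda x: 3 * x**2, x0=1.0)
```

Each step costs one evaluation of $f$ and one of $f'$, which is what makes Newton’s method optimal in the Kung–Traub sense.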

This paper is organized as follows. In Section 2, we present a new class of third-order and fourth-order iterative methods by using the concept of weight functions, which includes some existing methods and also provides some new methods. We have extended some of these methods for multivariate case. Finally, we employ some numerical examples and compare the performance of our proposed methods with some existing third- and fourth-order methods.

#### 2. Methods and Convergence Analysis

First we give some definitions which we will use later.

Definition 1. Let $\alpha$ be a simple root of a real-valued function $f$ and let $\{x_n\}$ be a sequence of real numbers that converges to $\alpha$. The sequence has order of convergence $p$ if
$$\lim_{n \to \infty} \frac{x_{n+1} - \alpha}{(x_n - \alpha)^p} = c,$$
where $c \neq 0$ is the asymptotic error constant and $p \geq 1$.

Definition 2. Let $n$ be the number of function evaluations per iteration of the new method. The efficiency of the new method is measured by the concept of efficiency index [8, 9], defined as $E = p^{1/n}$, where $p$ is the order of convergence of the new method.
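Both definitions are easy to check numerically. The sketch below computes the efficiency index of Definition 2 and estimates the order of convergence from three consecutive iterates via the standard ratio $p \approx \ln|e_{n+1}/e_n| / \ln|e_n/e_{n-1}|$ (the function names are our own, not the paper's):

```python
import math

def efficiency_index(p, n):
    """Efficiency index E = p**(1/n): order p, n evaluations per iteration."""
    return p ** (1.0 / n)

def estimated_order(xs, alpha):
    """Computational estimate of the order of convergence from the last
    three iterates: p ~ ln|e_{n+1}/e_n| / ln|e_n/e_{n-1}|."""
    e = [abs(x - alpha) for x in xs[-3:]]
    return math.log(e[2] / e[1]) / math.log(e[1] / e[0])

# Newton (order 2, two evaluations): efficiency 2**(1/2) ~ 1.414
# Non-optimal third order (three evaluations): 3**(1/3) ~ 1.442
# Optimal fourth order (three evaluations): 4**(1/3) ~ 1.587
```

For a quadratically convergent sequence such as $e_{n+1} = e_n^2$, `estimated_order` returns a value close to 2.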

##### 2.1. Third-Order Iterative Methods

To improve the order of convergence of Newton’s method, some modified methods are given by Grau-Sánchez and Díaz-Barrero in , Weerakoon and Fernando in , Homeier in , Chun and Kim in , and so forth. Motivated by these papers, we consider the following two-step iterative method: where and is a real constant. Now we find under what conditions it is of order three.

Theorem 3. Let $\alpha$ be a simple root of the function $f$ and let $f$ have a sufficient number of continuous derivatives in a neighborhood of $\alpha$. The method (4) has third-order convergence when the weight function satisfies the following conditions:

Proof. Suppose $e_n = x_n - \alpha$ is the error in the $n$th iteration and $c_k = f^{(k)}(\alpha)/(k!\,f'(\alpha))$, $k = 2, 3, \ldots$. Expanding $f(x_n)$ and $f'(x_n)$ around the simple root $\alpha$ with Taylor series, we have Now it can be easily found that By using (7) in the first step of (4), we obtain At this stage, we expand around the root by taking (8) into consideration. We have Furthermore, we have By virtue of (10) and (4), we get
Hence, from (11) and (4) we obtain the following general equation, which has third-order convergence: This proves the theorem.

Particular Cases. To find different third-order methods we take in (4).

Case 1. If we take in (4), then we get the formula: and its error equation is given by

Case 2. If we take in (4), then we get the formula: and its error equation is given by

Case 3. If we take in (4), then we get the formula: and its error equation is given by

Case 4. If we take in (4), then we get the formula: and its error equation is given by

Case 5. If we take in (4), then we get the formula: which is Heun’s formula .
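The explicit formula was lost in extraction; the standard form of Heun’s third-order method, as commonly stated in the literature, is $y_n = x_n - \tfrac{2}{3} f(x_n)/f'(x_n)$, $x_{n+1} = x_n - \tfrac{f(x_n)}{4}\left(\tfrac{1}{f'(x_n)} + \tfrac{3}{f'(y_n)}\right)$. A minimal Python sketch under that assumption (the test function is illustrative):

```python
def heun3(f, fp, x0, tol=1e-14, max_iter=50):
    """Heun's third-order two-step method (standard literature form):
       y_n     = x_n - (2/3) * f(x_n)/f'(x_n)
       x_{n+1} = x_n - (f(x_n)/4) * (1/f'(x_n) + 3/f'(y_n))
    """
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        dfx = fp(x)
        y = x - (2.0 / 3.0) * fx / dfx
        x = x - (fx / 4.0) * (1.0 / dfx + 3.0 / fp(y))
    return x

# Illustrative run: the real cube root of 2
root = heun3(lambda x: x**3 - 2, lambda x: 3 * x**2, 1.0)
```

Each step uses one evaluation of $f$ and two of $f'$, matching the three-evaluation cost discussed in Section 2.2.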

Remark 4. By taking different values of and weight function in (4), one can get a number of third-order iterative methods.

##### 2.2. Optimal Fourth-Order Iterative Methods

The order of convergence of the methods obtained in the previous subsection is three, with three function evaluations (one function and two derivatives) per step. Hence their efficiency index is $3^{1/3} \approx 1.442$, which is not optimal. To get optimal fourth-order methods we consider where and are two real-valued weight functions with and is a real constant. The weight functions should be chosen in such a way that the order of convergence reaches the optimal level four without any additional function evaluations. The following theorem indicates the conditions required on the weight functions and the constant in (22) to get optimal fourth-order convergence.

Theorem 5. Let $\alpha$ be a simple root of the function $f$ and let $f$ have a sufficient number of continuous derivatives in a neighborhood of $\alpha$. The method (22) has fourth-order convergence when the constant and the weight functions satisfy the following conditions:

Proof. Using (6) and putting in the first step of (22), we have Now we expand around the root by taking (24) into consideration. Thus, we have Furthermore, we have By virtue of (26) and (22), we obtain Finally, from (27) and (22) we have the following general equation, which reveals the fourth-order convergence: This proves the theorem.

Particular Cases

Method 1. If we take and , where , then the iterative method is given by and its error equation is given by

Method 2. If we take and , where , then the iterative method is given by and its error equation is given by

Method 3. If we take and , where , then the iterative method is given by and its error equation is given by

Method 4. If we take and , where , then the iterative method is given by and its error equation is

Method 5. If we take and , where , then the iterative method is given by which is the same as formula (11) of .

Method 6. If we take and , where , then the iterative method is given by and its error equation is

Remark 6. By taking different values of and in (22), one can obtain a number of fourth-order iterative methods.

#### 3. Further Extension to Multivariate Case

In this section, we extend some of the third- and fourth-order methods proposed above to solve nonlinear systems; other methods can be extended similarly. The multivariate case of our third-order method (15) is given by where , ; similarly ; is the identity matrix; = , , ; and is the Jacobian matrix of at . Let be any point in the neighborhood of the exact solution of the nonlinear system . If the Jacobian matrix is nonsingular, then the Taylor series expansion for the multivariate case is given by where , and where is an identity matrix. From the previous equation we can find where , , and . Here we denote the error in the th iteration by . The order of convergence of method (40) is established by the following theorem.

Theorem 7. Let $F$ be sufficiently Fréchet differentiable in a convex set $D$ containing a root $\alpha$ of $F(x) = 0$. Suppose that $F'(x)$ is continuous and nonsingular in $D$ and the initial guess is close to $\alpha$. Then the sequence obtained by the iterative expression (40) converges to $\alpha$ with order three.

Proof. For the convenience of calculation, we replace by in the first step of (40). From (41), (42), and (43), we have where , . Now from (46) and (44), we can obtain
where
By virtue of (47) the first step of the method (40) becomes The Taylor series expansion of the Jacobian matrix can be given as Now where Taking the inverse of both sides of (51), we get where By multiplying (53) and (50), we get where and the values of , and are given below: Multiplying (47) and (55), we obtain Substituting the above equation into the second step of (40), we get The final error equation of method (40) is given by This completes the proof of Theorem 7.

The multivariate case of (33) is given by The following theorem shows that this method has fourth-order convergence.

Theorem 8. Let $F$ be sufficiently Fréchet differentiable in a convex set $D$ containing a root $\alpha$ of $F(x) = 0$. Suppose that $F'(x)$ is continuous and nonsingular in $D$ and the initial guess is close to $\alpha$. Then the sequence obtained by the iterative expression (61) converges to $\alpha$ with order four.

Proof. For the convenience of calculation we replace by and put , , and in (61). From (46) and (50), we have From the above equation we have With the help of (62) and (63), we can obtain By multiplying (64) and (58), we have where The final error equation of method (61) is given by which confirms the theorem.
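The explicit multivariate iteration formulas were lost in extraction, but every two-step scheme in this section is built on the classical multivariate Newton step $F'(x^{(k)})\,\Delta x = -F(x^{(k)})$. A baseline sketch of that building block (the test system and all names are illustrative, not from the paper):

```python
import numpy as np

def newton_system(F, J, x0, tol=1e-12, max_iter=50):
    """Multivariate Newton: solve J(x_k) dx = -F(x_k), then x_{k+1} = x_k + dx."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        Fx = F(x)
        if np.linalg.norm(Fx) < tol:
            return x
        # One linear solve per step instead of forming the Jacobian inverse
        x = x + np.linalg.solve(J(x), -Fx)
    return x

# Illustrative system: x^2 + y^2 = 1 and x = y, root (1/sqrt(2), 1/sqrt(2))
F = lambda v: np.array([v[0]**2 + v[1]**2 - 1.0, v[0] - v[1]])
J = lambda v: np.array([[2.0 * v[0], 2.0 * v[1]], [1.0, -1.0]])
sol = newton_system(F, J, [1.0, 0.5])
```

The third- and fourth-order extensions above add a second evaluation of the Jacobian at an intermediate point, just as the scalar methods add a second evaluation of $f'$.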

#### 4. Numerical Testing

##### 4.1. Single Variate Case

In this section, ten different test functions, listed in Table 1, are considered for the single-variate case to illustrate the accuracy of the proposed iterative methods. The root of each nonlinear test function is also listed. All computations presented here have been performed in MATHEMATICA 8. Many areas of science and engineering require scientific computations of very high precision; we therefore use 1000-digit floating-point arithmetic via the “SetAccuracy” command. Here we compare the performance of our proposed methods with some well-established third-order and fourth-order iterative methods. In Table 2, we denote Heun’s method by HN3, our proposed third-order method (15) by M3, the fourth-order method (17) of  by SL4, fourth-order Jarratt’s method by JM4, and our proposed fourth-order method by M4. The results are listed in Table 2.
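JM4 refers to Jarratt’s classical optimal fourth-order method. As commonly stated in the literature, it reads $y_n = x_n - \tfrac{2}{3} f(x_n)/f'(x_n)$, $x_{n+1} = x_n - \tfrac{1}{2} \tfrac{3f'(y_n) + f'(x_n)}{3f'(y_n) - f'(x_n)} \tfrac{f(x_n)}{f'(x_n)}$. A Python sketch under that assumption (the test function is illustrative):

```python
def jarratt4(f, fp, x0, tol=1e-14, max_iter=50):
    """Jarratt's optimal fourth-order method (three evaluations per step):
       y_n     = x_n - (2/3) * f(x_n)/f'(x_n)
       x_{n+1} = x_n - (1/2) * (3*f'(y_n) + f'(x_n))
                             / (3*f'(y_n) - f'(x_n)) * f(x_n)/f'(x_n)
    """
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        dfx = fp(x)
        u = fx / dfx
        dfy = fp(x - (2.0 / 3.0) * u)
        x = x - 0.5 * (3.0 * dfy + dfx) / (3.0 * dfy - dfx) * u
    return x

# Illustrative run: the real cube root of 2
root = jarratt4(lambda x: x**3 - 2, lambda x: 3 * x**2, 1.0)
```

With one evaluation of $f$ and two of $f'$ per step, its efficiency index is $4^{1/3} \approx 1.587$, the optimal value for three evaluations.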

| Guess | HN3 | M3 | SL4 | JM4 | M4 |
| --- | --- | --- | --- | --- | --- |