Research Article | Open Access

Volume 2010 |Article ID 341982 | 14 pages | https://doi.org/10.1155/2010/341982

# Convergence Analysis of Preconditioned AOR Iterative Method for Linear Systems

Revised: 21 Feb 2010
Accepted: 13 May 2010
Published: 08 Jul 2010

#### Abstract

$M$- and $H$-matrices appear in many areas of science and engineering, for example, in the solution of the linear complementarity problem (LCP) in optimization theory and in the solution of large systems for real-time changes of data in fluid analysis in the car industry. Classical (stationary) iterative methods used for the solution of linear systems have been shown to converge for this class of matrices. In this paper, we present some comparison theorems on the preconditioned AOR iterative method for solving the linear system. The comparison results show that the rate of convergence of the preconditioned iterative method is faster than that of the classical iterative method. Meanwhile, we apply the preconditioner to $H$-matrices and obtain a convergence result. Numerical examples are given to illustrate our results.

#### 1. Introduction

In numerical linear algebra, the theory of $M$- and $H$-matrices is very important for the solution of linear systems of algebraic equations by iterative methods (see, e.g., [1–14]). For example, (a) in the linear complementarity problem (LCP) (see [5] for specific applications), where we are interested in finding a $z \in \mathbb{R}^n$ such that $z \ge 0$, $Az + q \ge 0$, $z^T(Az + q) = 0$, with $A \in \mathbb{R}^{n \times n}$ and $q \in \mathbb{R}^n$ given, a sufficient condition for a solution to exist, and to be found by a modification of an iterative method, especially of SOR, is that $A$ is an $H$-matrix with positive diagonal entries [15]; (b) in fluid analysis, in car modeling design [16, 17], it was observed that large linear systems with an $H$-matrix coefficient $A$ are solved iteratively much faster if $A$ is postmultiplied by a suitable diagonal matrix $D$ so that $AD$ is strictly diagonally dominant. We consider the following linear system:
$$Ax = b, \qquad (1.1)$$
where $A$ is an $n \times n$ square matrix, and $x$ and $b$ are two $n$-dimensional vectors. For any splitting $A = M - N$ with the nonsingular matrix $M$, the basic iterative method for solving the linear system (1.1) is as follows:
$$x^{(k+1)} = M^{-1}Nx^{(k)} + M^{-1}b, \quad k = 0, 1, \ldots$$

Without loss of generality, let $a_{ii} = 1$ for $i = 1, \ldots, n$ and $A = I - L - U$, where $-L$ and $-U$ are the strictly lower triangular and strictly upper triangular parts of $A$, respectively. Then the iteration matrix of the AOR iterative method [18] for solving the linear system (1.1) is
$$T_{\gamma,\omega} = (I - \gamma L)^{-1}\left[(1-\omega)I + (\omega - \gamma)L + \omega U\right], \qquad (1.3)$$
where $\omega$ and $\gamma$ are nonnegative real parameters with $\omega \ne 0$.
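The iteration matrix above is easy to form numerically. The following sketch (numpy-based; the function names are ours, not the paper's) builds $T_{\gamma,\omega}$ for a matrix with unit diagonal and computes its spectral radius:

```python
import numpy as np

def aor_iteration_matrix(A, gamma, omega):
    """AOR iteration matrix for A = I - L - U with unit diagonal:
    T = (I - gamma*L)^{-1} [ (1-omega)I + (omega-gamma)L + omega*U ]."""
    n = A.shape[0]
    I = np.eye(n)
    L = -np.tril(A, -1)  # A = I - L - U, so L is minus the strict lower part
    U = -np.triu(A, 1)
    return np.linalg.solve(I - gamma * L,
                           (1 - omega) * I + (omega - gamma) * L + omega * U)

def spectral_radius(T):
    return max(abs(np.linalg.eigvals(T)))
```

Setting $\gamma = \omega = 1$ recovers the Gauss-Seidel iteration matrix, and $\gamma = 0$, $\omega = 1$ recovers the Jacobi iteration matrix, which is a convenient sanity check.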

To improve the convergence rate of the basic iterative methods, several preconditioned iterative methods have been proposed in [8, 12, 13, 19–24]. We now transform the original system (1.1) into the preconditioned form
$$PAx = Pb,$$
where $P$ is a nonsingular matrix. The corresponding basic iterative method is given in general by
$$x^{(k+1)} = M_P^{-1}N_P x^{(k)} + M_P^{-1}Pb, \quad k = 0, 1, \ldots,$$
where $PA = M_P - N_P$ is a splitting of $PA$.

Milaszewicz [19] presented modified Jacobi and Gauss-Seidel iterative methods by using the preconditioner $P = I + S$, where the only nonzero entries of $S$ are $s_{i1} = -a_{i1}$, $i = 2, \ldots, n$, so that premultiplication by $P$ eliminates the first column of $A$ below the diagonal.

The author of [19] suggests that if the original iteration matrix is nonnegative and irreducible, then performing Gaussian elimination on a selected column of the iteration matrix to make it zero will improve the convergence of the iteration.

In 2003, Hadjidimos et al. [4] considered a generalized preconditioner of the form $P(\alpha) = I + S_\alpha$, where the only nonzero entries of $S_\alpha$ are $(S_\alpha)_{i1} = -\alpha_i a_{i1}$, $i = 2, \ldots, n$, with $\alpha_i \ge 0$ constants. The selection of the $\alpha_i$'s is made from the nonnegative cone in such a way that none of the diagonal elements of the preconditioned matrix vanishes. They discussed the convergence of the preconditioned Jacobi and Gauss-Seidel methods when the coefficient matrix is an $M$-matrix.

In this paper, we consider the preconditioned linear system of the form
$$\tilde{A}x = \tilde{b}, \qquad (1.8)$$
where $\tilde{A} = P(\alpha)A$ and $\tilde{b} = P(\alpha)b$. It is clear that $\tilde{A} = A + S_\alpha A$. Thus, we obtain the equality
$$\tilde{A} = \tilde{D} - \tilde{L} - \tilde{U},$$
where $\tilde{D}$, $-\tilde{L}$, and $-\tilde{U}$ are the diagonal, strictly lower, and strictly upper triangular parts of the matrix $\tilde{A}$, respectively. If we apply the AOR iterative method to the preconditioned linear system (1.8), then we get the preconditioned AOR iterative method, whose iteration matrix is
$$\tilde{T}_{\gamma,\omega} = (\tilde{D} - \gamma\tilde{L})^{-1}\left[(1-\omega)\tilde{D} + (\omega-\gamma)\tilde{L} + \omega\tilde{U}\right]. \qquad (1.10)$$
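A minimal sketch of the first-column preconditioner is given below. This follows our reading of the elided formula, i.e., the generalized preconditioner of Hadjidimos et al. [4]; with all $\alpha_i = 1$ it reduces to Milaszewicz's elimination of the first column.

```python
import numpy as np

def first_column_preconditioner(A, alpha):
    """P(alpha) = I + S_alpha, where the only nonzero entries of S_alpha are
    (S_alpha)_{i1} = -alpha_i * a_{i1} for i = 2, ..., n."""
    n = A.shape[0]
    P = np.eye(n)
    P[1:, 0] = -np.asarray(alpha) * A[1:, 0]  # row i gets -alpha_i * a_{i1}
    return P
```

For a matrix with unit diagonal and $\alpha_i = 1$, the preconditioned matrix $P A$ has a zero first column below the diagonal, which is exactly the Gaussian-elimination effect described in [19].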

This paper is organized as follows. Section 2 presents preliminaries. Section 3 discusses the convergence of the preconditioned method and obtains comparison theorems with the classical iterative method when the coefficient matrix is an $L$-matrix. In Section 4 we apply the preconditioner to $H$-matrices and obtain a convergence result. In Section 5 we use numerical examples to illustrate our results.

#### 2. Preliminaries

We say that a vector $x$ is nonnegative (positive), denoted $x \ge 0$ ($x > 0$), if all its entries are nonnegative (positive). Similarly, a matrix $A$ is said to be nonnegative, denoted $A \ge 0$, if all its entries are nonnegative or, equivalently, if it leaves invariant the set of all nonnegative vectors. We compare two matrices $A \ge B$ when $A - B \ge 0$, and two vectors $x \ge y$ when $x - y \ge 0$. Given a matrix $A = (a_{ij})$, we define the matrix $|A| = (|a_{ij}|)$. It follows that $|A| \ge 0$ and that $|AB| \le |A|\,|B|$ for any two matrices $A$ and $B$ of compatible size.

Definition 2.1. A matrix $A = (a_{ij})$ is called an $L$-matrix if $a_{ii} > 0$ for $i = 1, \ldots, n$ and $a_{ij} \le 0$ for $i \ne j$. A matrix $A$ is called a nonsingular $M$-matrix if $A$ is an $L$-matrix and $A^{-1} \ge 0$.

Definition 2.2. A matrix $A$ is an $H$-matrix if its comparison matrix $\langle A \rangle$ is a nonsingular $M$-matrix, where $\langle A \rangle = (\langle a \rangle_{ij})$ is
$$\langle a \rangle_{ii} = |a_{ii}|, \qquad \langle a \rangle_{ij} = -|a_{ij}|, \quad i \ne j.$$
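Definition 2.2 can be checked mechanically. The sketch below (function names are ours) forms $\langle A \rangle$ and tests the nonsingular-$M$-matrix property via inverse-nonnegativity, which is one of the standard equivalent characterizations:

```python
import numpy as np

def comparison_matrix(A):
    """<A>: |a_ii| on the diagonal, -|a_ij| off the diagonal."""
    C = -np.abs(A)
    np.fill_diagonal(C, np.abs(np.diag(A)))
    return C

def is_nonsingular_m_matrix(Z, tol=1e-12):
    """Sign pattern a_ij <= 0 (i != j) plus Z^{-1} >= 0, one standard
    characterization of a nonsingular M-matrix."""
    off = Z - np.diag(np.diag(Z))
    if (off > tol).any():          # positive off-diagonal entry
        return False
    try:
        Zinv = np.linalg.inv(Z)
    except np.linalg.LinAlgError:  # singular: not a nonsingular M-matrix
        return False
    return bool((Zinv >= -tol).all())

def is_h_matrix(A):
    return is_nonsingular_m_matrix(comparison_matrix(A))
```

Note that an $H$-matrix may have off-diagonal entries of either sign; only the comparison matrix must have the $M$-matrix structure.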

Definition 2.3 (see [1]). The splitting $A = M - N$ is called an $H$-splitting if $\langle M \rangle - |N|$ is a nonsingular $M$-matrix, and an $H$-compatible splitting if $\langle A \rangle = \langle M \rangle - |N|$.

Definition 2.4. Let $A = M - N$. This decomposition is called a splitting of $A$ if $M$ is a nonsingular matrix. The splitting is called (a) convergent if $\rho(M^{-1}N) < 1$; (b) regular if $M^{-1} \ge 0$ and $N \ge 0$; (c) nonnegative if $M^{-1}N \ge 0$; (d) an $M$-splitting if $M$ is a nonsingular $M$-matrix and $N \ge 0$.

Lemma 2.5 (see [1]). Let $A = M - N$ be a splitting. If the splitting is an $H$-splitting, then $A$ and $M$ are $H$-matrices and $\rho(M^{-1}N) \le \rho(\langle M \rangle^{-1}|N|) < 1$. If the splitting is an $H$-compatible splitting and $A$ is an $H$-matrix, then it is an $H$-splitting and thus convergent.

Lemma 2.6 (Perron-Frobenius theorem). Let $A \ge 0$ be an irreducible matrix. Then the following hold: (a) $A$ has a positive eigenvalue equal to $\rho(A)$; (b) $A$ has a positive eigenvector corresponding to $\rho(A)$; (c) $\rho(A)$ is a simple eigenvalue of $A$.
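The lemma is easy to confirm numerically on a small irreducible nonnegative matrix (the matrix below is our illustrative choice; its eigenvalues are $2$ and $-1$):

```python
import numpy as np

# Nonnegative irreducible 2x2 matrix with eigenvalues 2 and -1.
A = np.array([[0.0, 2.0],
              [1.0, 1.0]])

vals, vecs = np.linalg.eig(A)
k = int(np.argmax(np.abs(vals)))
rho = vals[k].real              # the Perron root equals the spectral radius
v = vecs[:, k].real
v = v / v.sum()                 # rescale so the Perron eigenvector is positive
```

Here the dominant eigenvalue is real and positive, and its eigenvector can be scaled to be entrywise positive, exactly as parts (a) and (b) assert.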

Lemma 2.7 (see [3, 25]). Let $A = M - N$ be an $M$-splitting of $A$. Then $\rho(M^{-1}N) < 1$ (respectively, $\rho(M^{-1}N) = 1$) if and only if $A$ is a nonsingular (respectively, singular) $M$-matrix. If $M^{-1}N$ is irreducible, then there is a positive vector $x$ such that $M^{-1}Nx = \rho(M^{-1}N)x$.

Lemma 2.8 (see [5]). Let $A$ be a nonnegative matrix. Then the following hold. (a) If $\alpha x \le Ax$ for a vector $x \ge 0$, $x \ne 0$, then $\alpha \le \rho(A)$. (b) If $Ax \le \beta x$ for a vector $x > 0$, then $\rho(A) \le \beta$; moreover, if $A$ is irreducible and if $0 \ne \alpha x \le Ax \le \beta x$, equality excluded, for a vector $x \ge 0$, $x \ne 0$, then $\alpha < \rho(A) < \beta$ and $x > 0$.

#### 3. Convergence Theorems for $L$-Matrices

We first consider the convergence of the iteration matrix of the preconditioned linear system (1.8) when the coefficient matrix $A$ is an $L$-matrix.

In particular, we take the preconditioner $P(\alpha) = I + S_\alpha$ with $(S_\alpha)_{i1} = -\alpha_i a_{i1}$, $i = 2, \ldots, n$, and write
$$\tilde{A} = P(\alpha)A = \tilde{D} - \tilde{L} - \tilde{U},$$
where $\tilde{D}$, $-\tilde{L}$, and $-\tilde{U}$ are the diagonal, strictly lower triangular, and strictly upper triangular parts of the matrix $\tilde{A}$, respectively. Then the preconditioned AOR method is expressed with the iteration matrix
$$\tilde{T}_{\gamma,\omega} = (\tilde{D} - \gamma\tilde{L})^{-1}\left[(1-\omega)\tilde{D} + (\omega-\gamma)\tilde{L} + \omega\tilde{U}\right],$$
where $\tilde{D}$, $\tilde{L}$, and $\tilde{U}$ are obtained from $\tilde{A}$ as above.

Lemma 3.1. Let $A$ be an $L$-matrix. Then $\tilde{A} = P(\alpha)A$ is also an $L$-matrix.

Proof. Since $\tilde{A} = (I + S_\alpha)A$, a direct computation shows that the diagonal entries of $\tilde{A}$ remain positive and its off-diagonal entries remain nonpositive for the admissible values of the parameters $\alpha_i$. It is clear that $\tilde{A}$ is an $L$-matrix.

Lemma 3.2. Let $T_{\gamma,\omega}$ and $\tilde{T}_{\gamma,\omega}$ be defined by (1.3) and (1.10). Assume that $0 \le \gamma \le \omega \le 1$, $\omega \ne 0$. If $A$ is an irreducible $L$-matrix with $a_{1i}a_{i1} > 0$ and $\alpha_i a_{1i}a_{i1} < 1$ for $i = 2, \ldots, n$, then $T_{\gamma,\omega}$ and $\tilde{T}_{\gamma,\omega}$ are nonnegative and irreducible.

Proof. Since $A$ is irreducible, $L + U$ is also irreducible. Expanding $(I - \gamma L)^{-1} = I + \gamma L + \cdots + (\gamma L)^{n-1}$ shows that $T_{\gamma,\omega}$ can be written as
$$T_{\gamma,\omega} = (1-\omega)I + (\omega-\gamma)L + \omega U + R,$$
where $R$ is a nonnegative matrix. As $A$ is an irreducible $L$-matrix and $0 \le \gamma \le \omega \le 1$, $\omega \ne 0$, it is easy to show that $T_{\gamma,\omega}$ is nonnegative and irreducible. By assumption, the entries of $\tilde{L}$ and $\tilde{U}$ are all nonnegative, and thus $\tilde{T}_{\gamma,\omega}$ is nonnegative. Observe that $\tilde{T}_{\gamma,\omega}$ can be expressed analogously as the sum of $(1-\omega)I + (\omega-\gamma)\tilde{D}^{-1}\tilde{L} + \omega\tilde{D}^{-1}\tilde{U}$ and a nonnegative matrix. Since $0 \le \gamma \le \omega \le 1$, $\omega \ne 0$, and $\tilde{A}$ inherits the coupling pattern of $A$, this first summand is irreducible. Hence, $\tilde{T}_{\gamma,\omega}$ is irreducible.

Our main result in this section is as follows.

Theorem 3.3. Let $T_{\gamma,\omega}$ and $\tilde{T}_{\gamma,\omega}$ be defined by (1.3) and (1.10). Assume that $0 \le \gamma \le \omega \le 1$, $\omega \ne 0$. If $A$ is an irreducible $L$-matrix with $a_{1i}a_{i1} > 0$ and $\alpha_i a_{1i}a_{i1} < 1$ for $i = 2, \ldots, n$, then (a) $\rho(\tilde{T}_{\gamma,\omega}) \le \rho(T_{\gamma,\omega})$ if $\rho(T_{\gamma,\omega}) < 1$; (b) $\rho(\tilde{T}_{\gamma,\omega}) = \rho(T_{\gamma,\omega})$ if $\rho(T_{\gamma,\omega}) = 1$; (c) $\rho(\tilde{T}_{\gamma,\omega}) \ge \rho(T_{\gamma,\omega})$ if $\rho(T_{\gamma,\omega}) > 1$.

Proof. Let $T_{\gamma,\omega}$ be irreducible. It is clear that $M = \frac{1}{\omega}(I - \gamma L)$ is an $M$-matrix and $N = M - A \ge 0$, so $A = M - N$ is an $M$-splitting of $A$. From Lemma 2.7, there exists a positive vector $x$ such that
$$T_{\gamma,\omega}x = \lambda x, \qquad (3.8)$$
where $\lambda$ denotes the spectral radius of $T_{\gamma,\omega}$. Observe that $T_{\gamma,\omega} = (I - \gamma L)^{-1}[(1-\omega)I + (\omega-\gamma)L + \omega U]$; we have
$$[(1-\omega)I + (\omega-\gamma)L + \omega U]x = \lambda(I - \gamma L)x,$$
which is equivalent to
$$-\omega Ax = (\lambda - 1)(I - \gamma L)x.$$
Let $\tilde{A} = \tilde{D} - \tilde{L} - \tilde{U}$, where $\tilde{D}$, $-\tilde{L}$, and $-\tilde{U}$ are the diagonal, strictly lower, and strictly upper triangular parts of $\tilde{A}$, respectively. It is clear that $\tilde{A} = P(\alpha)A$, so
$$\tilde{T}_{\gamma,\omega}x - \lambda x = (\tilde{D} - \gamma\tilde{L})^{-1}\left[-\omega\tilde{A} - (\lambda - 1)(\tilde{D} - \gamma\tilde{L})\right]x, \qquad (3.10)$$
where $-\omega\tilde{A}x = (\lambda - 1)P(\alpha)(I - \gamma L)x$. From (3.8) and (3.10), we have
$$\tilde{T}_{\gamma,\omega}x - \lambda x = (\lambda - 1)(\tilde{D} - \gamma\tilde{L})^{-1}\left[P(\alpha)(I - \gamma L) - (\tilde{D} - \gamma\tilde{L})\right]x. \qquad (3.11)$$
Since $\alpha_i a_{1i}a_{i1} < 1$, $i = 2, \ldots, n$, the matrix $\tilde{D} - \gamma\tilde{L}$ is an $M$-matrix. Notice that $(\tilde{D} - \gamma\tilde{L})^{-1} \ge 0$ and $[P(\alpha)(I - \gamma L) - (\tilde{D} - \gamma\tilde{L})]x \ge 0$. If $\lambda < 1$, then from (3.11) we have $\tilde{T}_{\gamma,\omega}x \le \lambda x$. As $x > 0$, Lemma 2.8 implies that $\rho(\tilde{T}_{\gamma,\omega}) \le \lambda = \rho(T_{\gamma,\omega})$. For the cases $\lambda = 1$ and $\lambda > 1$, $\tilde{T}_{\gamma,\omega}x = x$ and $\tilde{T}_{\gamma,\omega}x \ge \lambda x$ are obtained from (3.11), respectively. Hence, Theorem 3.3 follows from Lemmas 2.8 and 3.2.
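Theorem 3.3 can be checked numerically on a small irreducible $L$-matrix. The sketch below compares the spectral radii of the basic and preconditioned iteration matrices for the Gauss-Seidel case $\gamma = \omega = 1$ with $\alpha_i = 1$ (the matrix and parameter choices are ours, for illustration only):

```python
import numpy as np

def aor_matrix(A, gamma, omega):
    """AOR iteration matrix for a matrix with unit diagonal, A = I - L - U."""
    n = A.shape[0]
    I = np.eye(n)
    L, U = -np.tril(A, -1), -np.triu(A, 1)
    return np.linalg.solve(I - gamma * L,
                           (1 - omega) * I + (omega - gamma) * L + omega * U)

def rho(T):
    return max(abs(np.linalg.eigvals(T)))

# Irreducible, strictly diagonally dominant L-matrix with unit diagonal.
A = np.array([[1.0, -0.3, -0.2],
              [-0.4, 1.0, -0.3],
              [-0.2, -0.3, 1.0]])

# First-column preconditioner P = I + S with alpha_i = 1.
P = np.eye(3)
P[1:, 0] = -A[1:, 0]
At = P @ A

# The preconditioned matrix no longer has a unit diagonal, so rescale
# by its diagonal before forming the iteration matrix (this leaves the
# Gauss-Seidel iteration matrix unchanged).
D = np.diag(np.diag(At))
At_unit = np.linalg.solve(D, At)

r_basic = rho(aor_matrix(A, 1.0, 1.0))       # classical Gauss-Seidel
r_prec = rho(aor_matrix(At_unit, 1.0, 1.0))  # preconditioned Gauss-Seidel
```

For this matrix both spectral radii are below one, and the preconditioned radius does not exceed the basic one, as case (a) of the theorem predicts.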

We next consider the case $\alpha_i = 1$, $i = 2, \ldots, n$; the convergence theorem is given as follows (see [26, 27]).

Theorem 3.4. Let $T_{\gamma,\omega}$ and $\tilde{T}_{\gamma,\omega}$ be defined by (1.3) and (1.10). Assume that $A$ is an irreducible $L$-matrix and that the submatrix $A_1$ obtained from $\tilde{A}$ by deleting the first row and the first column is irreducible. Then for $0 \le \gamma \le \omega \le 1$, $\omega \ne 0$, we have (a) $\rho(\tilde{T}_{\gamma,\omega}) < \rho(T_{\gamma,\omega})$ if $\rho(T_{\gamma,\omega}) < 1$; (b) $\rho(\tilde{T}_{\gamma,\omega}) = \rho(T_{\gamma,\omega})$ if $\rho(T_{\gamma,\omega}) = 1$; (c) $\rho(\tilde{T}_{\gamma,\omega}) > \rho(T_{\gamma,\omega})$ if $\rho(T_{\gamma,\omega}) > 1$.

Proof. Let $T_{\gamma,\omega}$ be irreducible. It is clear that $\frac{1}{\omega}(I - \gamma L)$ is an $M$-matrix and the associated $N \ge 0$, so this is an $M$-splitting of $A$. From Lemma 2.7, there exists a positive vector $x$ such that $T_{\gamma,\omega}x = \lambda x$, where $\lambda$ denotes the spectral radius of $T_{\gamma,\omega}$; as before, this is equivalent to $-\omega Ax = (\lambda - 1)(I - \gamma L)x$. Similar to the proof of the equality (3.11), we have
$$\tilde{T}_{\gamma,\omega}x - \lambda x = (\lambda - 1)(\tilde{D} - \gamma\tilde{L})^{-1}\left[P(I - \gamma L) - (\tilde{D} - \gamma\tilde{L})\right]x. \qquad (3.16)$$
Since $\alpha_i = 1$ for $i = 2, \ldots, n$, the first column of $\tilde{A}$ vanishes below the diagonal, and by computation $\tilde{T}_{\gamma,\omega}$ can be partitioned as
$$\tilde{T}_{\gamma,\omega} = \begin{pmatrix} 1-\omega & t^T \\ 0 & T_1 \end{pmatrix} + R, \qquad (3.17)$$
where $R$ is a nonnegative matrix, $t$ is an $(n-1)$-vector, and $T_1$ is the $(n-1) \times (n-1)$ block associated with the submatrix $A_1$. As $A$ is irreducible, at least one of $a_{1i}$ and $a_{i1}$, $i = 2, \ldots, n$, is nonzero. Since $A_1$ is irreducible, it is clear that $T_1$ is irreducible. Let
$$z = \tilde{T}_{\gamma,\omega}x - \lambda x. \qquad (3.18)$$
From (3.18) and (3.16), we know that the sign of $z$ is determined by the sign of $\lambda - 1$, and the first component of $z$ plays no role in the irreducible block. Write $x = (x_1, y^T)^T$, with $y$ a positive $(n-1)$-vector. From (3.16) and (3.17), we have
$$T_1 y - \lambda y \le 0 \quad \text{if } \lambda < 1, \qquad T_1 y - \lambda y \ge 0 \quad \text{if } \lambda > 1, \qquad (3.22)$$
with equality excluded. If $\lambda < 1$, then from (3.22), since $y$ is a positive vector and $T_1$ is irreducible, Lemma 2.8 gives $\rho(T_1) < \lambda$. Since the spectrum of $\tilde{T}_{\gamma,\omega}$ consists of the eigenvalue $1 - \omega < 1$ together with the spectrum of $T_1$, it is clear that $\rho(\tilde{T}_{\gamma,\omega}) < \lambda = \rho(T_{\gamma,\omega})$. For the case $\lambda > 1$, the reverse inequality is obtained from (3.22), again with equality excluded; hence $\rho(\tilde{T}_{\gamma,\omega}) > \rho(T_{\gamma,\omega})$ follows from Lemma 2.8 and the irreducibility of $T_1$. Finally, if $\lambda = 1$, then, since $A$ admits an $M$-splitting with spectral radius one, Lemma 2.7 shows that $A$ is a singular $M$-matrix; so $\tilde{A}$ is a singular $M$-matrix. Since the preconditioned splitting is an $M$-splitting of $\tilde{A}$, from Lemma 2.7 again we have $\rho(\tilde{T}_{\gamma,\omega}) = 1$, which completes the proof.

In Theorem 3.4, if we let $\gamma = \omega$, then we can obtain corresponding results for the preconditioned SOR method. Since the proof is similar to that of Theorem 3.4, we only state the convergence result.

Theorem 3.5. Let $T_{\gamma,\omega}$ and $\tilde{T}_{\gamma,\omega}$ be defined by (1.3) and (1.10). Assume that $A$ is an irreducible $L$-matrix and that the submatrix $A_1$ obtained from $\tilde{A}$ by deleting the first row and the first column is irreducible. Then for $\gamma = \omega$, $0 < \omega \le 1$, we have (a) $\rho(\tilde{T}_{\omega,\omega}) < \rho(T_{\omega,\omega})$ if $\rho(T_{\omega,\omega}) < 1$; (b) $\rho(\tilde{T}_{\omega,\omega}) = \rho(T_{\omega,\omega})$ if $\rho(T_{\omega,\omega}) = 1$; (c) $\rho(\tilde{T}_{\omega,\omega}) > \rho(T_{\omega,\omega})$ if $\rho(T_{\omega,\omega}) > 1$.

#### 4. AOR Method for $H$-Matrices

In this section, we consider the AOR method for $H$-matrices. For convenience, we retain the notation and definitions of Section 2.

Lemma 4.1 (see [7]). Let $A$ be an $H$-matrix with unit diagonal elements, and define the matrices $L$ and $U$ by $A = I - L - U$, where $-L$ and $-U$ are the strictly lower and strictly upper triangular components of $A$, respectively; then $\langle A \rangle = I - |L| - |U|$ is a nonsingular $M$-matrix and $\rho(|L| + |U|) < 1$. Let $x$ be a positive vector such that $\langle A \rangle x > 0$; then for $0 \le \gamma \le \omega \le 1$ with $\omega \ne 0$, the AOR splitting of $A$ is an $H$-splitting, $\rho(T_{\gamma,\omega}) < 1$, and the iteration (1.3) converges to the solution of (1.1).

Lemma 4.2. Let $A$ be an $H$-matrix, and let $\tilde{A} = P(\alpha)A$, where the notation is as in Lemma 4.1. Then for any admissible $\alpha$, $\tilde{A}$ is also an $H$-matrix.

Proof. The conclusion follows easily from Lemma 4.1 [7].

Lemma 4.3. Let $\tilde{A} = \tilde{M} - \tilde{N}$ be the AOR splitting of $\tilde{A}$. Then $\tilde{A} = \tilde{M} - \tilde{N}$ is an $H$-compatible splitting.

Proof. Let $\tilde{M} = \frac{1}{\omega}(\tilde{D} - \gamma\tilde{L})$ and $\tilde{N} = \frac{1}{\omega}[(1-\omega)\tilde{D} + (\omega-\gamma)\tilde{L} + \omega\tilde{U}]$, where $0 < \omega \le 1$ and $0 \le \gamma \le \omega$. Since $\tilde{D}$ is diagonal and $\tilde{L}$, $\tilde{U}$ are strictly triangular, we have $\langle\tilde{M}\rangle = \frac{1}{\omega}(|\tilde{D}| - \gamma|\tilde{L}|)$; observing that $1 - \omega \ge 0$ and $\omega - \gamma \ge 0$, we also have $|\tilde{N}| = \frac{1}{\omega}[(1-\omega)|\tilde{D}| + (\omega-\gamma)|\tilde{L}| + \omega|\tilde{U}|]$. Hence, we have
$$\langle\tilde{M}\rangle - |\tilde{N}| = |\tilde{D}| - |\tilde{L}| - |\tilde{U}| = \langle\tilde{A}\rangle,$$
that is, $\tilde{A} = \tilde{M} - \tilde{N}$ is an $H$-compatible splitting.

Theorem 4.4. Let the assumptions of Lemma 4.2 hold. Then for any $0 \le \gamma \le \omega \le 1$ with $\omega \ne 0$, we have $\rho(\tilde{T}_{\gamma,\omega}) < 1$.

Proof. By Lemmas 2.5, 4.2, and 4.3, the conclusion is easily obtained.
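Theorem 4.4 can likewise be illustrated numerically. The sketch below uses an $H$-matrix with off-diagonal entries of mixed signs (so it is not an $L$-matrix), applies the first-column preconditioner, and checks that the preconditioned AOR iteration still has spectral radius below one. The matrix, $\alpha$, and parameter values are our illustrative choices:

```python
import numpy as np

def aor_matrix(A, gamma, omega):
    """AOR iteration matrix for a matrix with unit diagonal, A = I - L - U."""
    n = A.shape[0]
    I = np.eye(n)
    L, U = -np.tril(A, -1), -np.triu(A, 1)
    return np.linalg.solve(I - gamma * L,
                           (1 - omega) * I + (omega - gamma) * L + omega * U)

def rho(T):
    return max(abs(np.linalg.eigvals(T)))

# H-matrix: its comparison matrix is strictly diagonally dominant,
# but some off-diagonal entries are positive.
A = np.array([[1.0, 0.25, -0.25],
              [0.25, 1.0, 0.25],
              [-0.25, 0.25, 1.0]])

# First-column preconditioner with alpha_i = 0.5.
P = np.eye(3)
P[1:, 0] = -0.5 * A[1:, 0]
At = P @ A

# Rescale the preconditioned matrix back to unit diagonal.
D = np.diag(np.diag(At))
At_unit = np.linalg.solve(D, At)

r = rho(aor_matrix(At_unit, 0.5, 0.9))  # parameters with 0 < gamma <= omega <= 1
```

Here the preconditioned matrix remains (generalized) diagonally dominant, so the iteration converges, consistent with the theorem.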

#### 5. Numerical Examples

In this section, we give three numerical examples to illustrate the results obtained in Sections 3 and 4.

Example 5.1. Consider a matrix $A$ of order $n$ whose entries satisfy the assumptions of Theorem 3.3. Numerical results for this matrix are given in Table 1.

Table 1

| $\omega$ | $\gamma$ | $\rho(T_{\gamma,\omega})$ | $\rho(\tilde{T}_{\gamma,\omega})$ | $\omega$ | $\gamma$ | $\rho(T_{\gamma,\omega})$ | $\rho(\tilde{T}_{\gamma,\omega})$ |
|---|---|---|---|---|---|---|---|
| 0.4 | 0.1 | 0.9983 | 0.9840 | 0.8 | 0.7 | 0.9952 | 0.9559 |
| 0.4 | 0.4 | 0.9980 | 0.9815 | 0.8 | 0.8 | 0.9949 | 0.9529 |
| 0.5 | 0.2 | 0.9977 | 0.9790 | 0.9 | 0.7 | 0.9946 | 0.9504 |
| 0.5 | 0.4 | 0.9975 | 0.9768 | 0.9 | 0.9 | 0.9938 | 0.9431 |
| 0.6 | 0.4 | 0.9970 | 0.9722 | 1 | 0.8 | 0.9936 | 0.9411 |
| 0.6 | 0.6 | 0.9966 | 0.9689 | 1 | 0.9 | 0.9931 | 0.9367 |

We consider Example 5.1 again: with the stated choice of parameters, it is easy to show that the coefficient matrix is an $M$-matrix. The initial approximation $x^{(0)}$ is taken as the zero vector, and $b$ is chosen so that a known vector is the solution of the linear system (1.1). The iteration is stopped once the error norm falls below a fixed tolerance.

All experiments were executed on a PC using the MATLAB programming package.

In order to show that the preconditioned method is superior to the basic method, we consider $\gamma = \omega = 1$, that is, the AOR method reduces to the Gauss-Seidel method. In Table 2, we report the CPU time (CPU) and the number of iterations (IT) for the basic and the preconditioned Gauss-Seidel methods. Here GS represents the restarted Gauss-Seidel method; the preconditioned restarted Gauss-Seidel method is denoted by PGS.

Table 2

| $n$ | IT(GS) | CPU(GS) | IT(PGS) | CPU(PGS) |
|---|---|---|---|---|
| 60 | 232 | 0.0780 | 229 | 0.0780 |
| 90 | 340 | 0.2030 | 337 | 0.2030 |
| 120 | 446 | 0.5000 | 443 | 0.4380 |
| 150 | 551 | 4.5780 | 548 | 4.5470 |
| 180 | 655 | 9.5930 | 652 | 9.5000 |
| 210 | 758 | 36.7190 | 755 | 30.0470 |

Example 5.2. Consider the two-dimensional convection-diffusion equation on the unit square with Dirichlet boundary conditions (see [28]).
When the central difference scheme on a uniform grid with $m \times m$ interior nodes is applied to the discretization of this convection-diffusion equation, we obtain a system of linear equations (1.1) whose coefficient matrix is expressed via the Kronecker product $\otimes$ of tridiagonal matrices, with step size $h = 1/(m+1)$.
It is clear that the matrix $A$ is an $M$-matrix, so it is an $H$-matrix. Numerical results for this matrix are given in Table 3.
From Table 3, it can be seen that the convergence rate of the preconditioned Gauss-Seidel iterative method is faster than that of the other preconditioned iterative methods for $H$-matrices. The iteration numbers are not changed by the change of $\alpha_i$, and the iteration time changes only slightly with $\alpha_i$. However, it is difficult to select the optimal parameters, and this needs further study.

Table 3

| $n$ |  | $\alpha_i = 0.5$ | $\alpha_i = 0.8$ | $\alpha_i = 1$ | $\alpha_i = 1.2$ | $\alpha_i = 2$ | $\alpha_i = 0$ |
|---|---|---|---|---|---|---|---|
| 64 | IT | 1 | 1 | 1 | 1 | 1 | 1 |
|  | CPU | 0.0601 | 0.0488 | 0.0587 | 0.0524 | 0.0501 | 0.0629 |
| 81 | IT | 1 | 1 | 1 | 1 | 1 | 1 |
|  | CPU | 0.0522 | 0.0504 | 0.0532 | 0.0524 | 0.0569 | 0.0635 |
| 100 | IT | 1 | 1 | 1 | 1 | 1 | 1 |
|  | CPU | 0.0577 | 0.0547 | 0.0486 | 0.0555 | 0.0563 | 0.0663 |
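The standard five-point central-difference matrix for such a problem can be assembled with Kronecker products. The sketch below uses the common model problem $-\Delta u + \beta(u_x + u_y) = f$; the paper's exact equation and constants are not recoverable from the text, so $\beta$ and $m$ here are purely illustrative.

```python
import numpy as np

def convection_diffusion_matrix(m, beta=1.0):
    """Central differences for -Laplace(u) + beta*(u_x + u_y) = f on the unit
    square with m x m interior nodes and h = 1/(m+1). The matrix is the sum
    of Kronecker products of tridiagonal matrices with the identity."""
    h = 1.0 / (m + 1)
    # One-dimensional stencil: diagonal 2, off-diagonals perturbed by convection.
    T = (2.0 * np.eye(m)
         - (1.0 + beta * h / 2.0) * np.eye(m, k=-1)
         - (1.0 - beta * h / 2.0) * np.eye(m, k=1))
    I = np.eye(m)
    return np.kron(I, T) + np.kron(T, I)
```

For moderate $\beta h$ the off-diagonal entries are nonpositive and the matrix is irreducibly diagonally dominant, so it is a nonsingular $M$-matrix, and hence an $H$-matrix, as the example asserts.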

Example 5.3. We consider a symmetric Toeplitz matrix $A$ of order $n$ whose entries satisfy the stated conditions; it is clear that $A$ is an $H$-matrix. The initial approximation $x^{(0)}$ is taken as the zero vector, and $b$ is chosen so that a known vector is the solution of the linear system (1.1); the iteration is stopped once the error norm falls below a fixed tolerance (see [29]).
We obtain Table 4 by using the preconditioner $P(\alpha)$. We report the CPU time (T) and the number of iterations (IT) for the basic and the preconditioned AOR methods. Here AOR represents the restarted AOR method; the preconditioned restarted AOR method is denoted by PAOR.

Table 4

| $n$ | $\omega$ | $\gamma$ | IT(AOR) | T(AOR) | IT(PAOR) | T(PAOR) |
|---|---|---|---|---|---|---|
| 90 | 0.9 | 0.5 | 15 | 0.3196 | 11 | 0.0390 |
| 120 | 0.9 | 0.5 | 15 | 0.1526 | 10 | 0.0306 |
| 180 | 0.9 | 0.5 | 15 | 0.1407 | 11 | 0.1096 |
| 210 | 0.9 | 0.5 | 15 | 0.2575 | 11 | 0.1920 |
| 300 | 0.9 | 0.5 | 15 | 1.2615 | 10 | 0.7709 |
| 400 | 0.9 | 0.5 | 15 | 3.2573 | 11 | 2.3241 |

Remark 5.4. In Example 5.3, we let $\omega = 0.9$ and $\gamma = 0.5$. From Table 4, if $\alpha$ is appropriate, the convergence of the preconditioned AOR iterative method can be improved. However, it is difficult to select the optimal parameters, and this needs further study.

#### Acknowledgments

The authors express their thanks to the editor, Professor Paulo Batista Gonçalves, and the anonymous referees, who made many useful and detailed suggestions that helped them correct some minor errors and improve the quality of the paper. This project was financially supported by the Natural Science Foundation of Shanghai (092R1408700), the Shanghai Priority Academic Discipline Foundation, the Ph.D. Program Scholarship Fund of ECNU 2009, the Foundation of the Zhejiang Educational Committee (Y200906482), and the Ningbo Natural Science Foundation (2010A610097).

#### References

1. A. Frommer and D. B. Szyld, “$H$-splittings and two-stage iterative methods,” Numerische Mathematik, vol. 63, no. 3, pp. 345–356, 1992.
2. H. Kotakemori, H. Niki, and N. Okamoto, “Convergence of a preconditioned iterative method for $H$-matrices,” Journal of Computational and Applied Mathematics, vol. 83, no. 1, pp. 115–118, 1997.
3. R. S. Varga, Matrix Iterative Analysis, Prentice Hall, Englewood Cliffs, NJ, USA, 1962.
4. A. Hadjidimos, D. Noutsos, and M. Tzoumas, “More on modifications and improvements of classical iterative schemes for $M$-matrices,” Linear Algebra and Its Applications, vol. 364, pp. 253–279, 2003.
5. A. Berman and R. J. Plemmons, Nonnegative Matrices in the Mathematical Sciences, vol. 9 of Classics in Applied Mathematics, SIAM, Philadelphia, Pa, USA, 1994.
6. L.-Y. Sun, “Some extensions of the improved modified Gauss-Seidel iterative method for $H$-matrices,” Numerical Linear Algebra with Applications, vol. 13, no. 10, pp. 869–876, 2006.
7. Q. Liu, “Convergence of the modified Gauss-Seidel method for $H$-matrices,” in Proceedings of the 3rd International Conference on Natural Computation (ICNC '07), vol. 3, pp. 268–271, Hainan, China, 2007.
8. Q. Liu, G. Chen, and J. Cai, “Convergence analysis of the preconditioned Gauss-Seidel method for $H$-matrices,” Computers & Mathematics with Applications, vol. 56, no. 8, pp. 2048–2053, 2008.
9. Q. Liu and G. Chen, “A note on the preconditioned Gauss-Seidel method for $M$-matrices,” Journal of Computational and Applied Mathematics, vol. 228, no. 1, pp. 498–502, 2009.
10. G. Poole and T. Boullion, “A survey on $M$-matrices,” SIAM Review, vol. 16, pp. 419–427, 1974.
11. S. Galanis, A. Hadjidimos, and D. Noutsos, “On an SSOR matrix relationship and its consequences,” International Journal for Numerical Methods in Engineering, vol. 27, no. 3, pp. 559–570, 1989.
12. X. Chen, K. C. Toh, and K. K. Phoon, “A modified SSOR preconditioner for sparse symmetric indefinite linear systems of equations,” International Journal for Numerical Methods in Engineering, vol. 65, no. 6, pp. 785–807, 2006.
13. G. Brussino and V. Sonnad, “A comparison of direct and preconditioned iterative techniques for sparse, unsymmetric systems of linear equations,” International Journal for Numerical Methods in Engineering, vol. 28, no. 4, pp. 801–815, 1989.
14. Y. S. Roditis and P. D. Kiousis, “Parallel multisplitting, block Jacobi type solutions of linear systems of equations,” International Journal for Numerical Methods in Engineering, vol. 29, no. 3, pp. 619–632, 1990.
15. B. H. Ahn, “Solution of nonsymmetric linear complementarity problems by iterative methods,” Journal of Optimization Theory and Applications, vol. 33, no. 2, pp. 175–185, 1981.
16. L. Li, personal communication, 2006.
17. M. J. Tsatsomeros, personal communication, 2006.
18. A. Hadjidimos, “Accelerated overrelaxation method,” Mathematics of Computation, vol. 32, no. 141, pp. 149–157, 1978.
19. J. P. Milaszewicz, “Improving Jacobi and Gauss-Seidel iterations,” Linear Algebra and Its Applications, vol. 93, pp. 161–170, 1987.
20. A. D. Gunawardena, S. K. Jain, and L. Snyder, “Modified iterative methods for consistent linear systems,” Linear Algebra and Its Applications, vol. 154–156, pp. 123–143, 1991.
21. T. Kohno, H. Kotakemori, H. Niki, and M. Usui, “Improving the modified Gauss-Seidel method for $Z$-matrices,” Linear Algebra and Its Applications, vol. 267, pp. 113–123, 1997.
22. D. J. Evans and J. Shanehchi, “Preconditioned iterative methods for the large sparse symmetric eigenvalue problem,” Computer Methods in Applied Mechanics and Engineering, vol. 31, no. 3, pp. 251–264, 1982.
23. F.-N. Hwang and X.-C. Cai, “A class of parallel two-level nonlinear Schwarz preconditioned inexact Newton algorithms,” Computer Methods in Applied Mechanics and Engineering, vol. 196, no. 8, pp. 1603–1611, 2007.
24. M. Benzi, R. Kouhia, and M. Tuma, “Stabilized and block approximate inverse preconditioners for problems in solid and structural mechanics,” Computer Methods in Applied Mechanics and Engineering, vol. 190, no. 49-50, pp. 6533–6554, 2001.
25. W. Li and W. Sun, “Modified Gauss-Seidel type methods and Jacobi type methods for $Z$-matrices,” Linear Algebra and Its Applications, vol. 317, no. 1–3, pp. 227–240, 2000.
26. J. H. Yun and S. W. Kim, “Convergence of the preconditioned AOR method for irreducible $L$-matrices,” Applied Mathematics and Computation, vol. 201, no. 1-2, pp. 56–64, 2008.
27. Y. Li, C. Li, and S. Wu, “Improving AOR method for consistent linear systems,” Applied Mathematics and Computation, vol. 186, no. 2, pp. 379–388, 2007.
28. M. Wu, L. Wang, and Y. Song, “Preconditioned AOR iterative method for linear systems,” Applied Numerical Mathematics, vol. 57, no. 5–7, pp. 672–685, 2007.
29. R. M. Gray, Toeplitz and Circulant Matrices: A Review, Stanford University, 2006.

Copyright © 2010 Qingbing Liu and Guoliang Chen. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.