Abstract

We define and investigate a globally convergent iterative method of sixth order of convergence, intended for computing the polar decomposition and the matrix sign function. Analyses of stability and computational complexity are brought forward. The behavior of the proposed algorithms is illustrated by numerical experiments.

1. Introduction

Recently, a large number of papers on iterative methods for calculating simple and multiple zeros of nonlinear equations have been published. There have also been several attempts to overcome the difficulties arising in the design of such iterative methods, or in their application to computing matrix functions (see [1, 2]). Our current work discusses an application of a high order iterative method for solving nonlinear scalar equations to the computation of the polar decomposition and the matrix sign function.

We now restate some preliminaries about the polar decomposition and the matrix sign function.

Let $\mathbb{C}^{m\times n}$ denote the linear space of all $m\times n$ complex matrices. The polar decomposition of a complex matrix $A\in\mathbb{C}^{m\times n}$ is defined as
$A = UH$,  (1)
wherein $H\in\mathbb{C}^{n\times n}$ is a Hermitian positive semidefinite matrix of the order $n$ and $U\in\mathbb{C}^{m\times n}$ is a subunitary matrix (partial isometry). Here, the inequality $m\ge n$ is assumed. A matrix $U$ is subunitary if $\|Ux\|_2=\|x\|_2$ for any $x\in\mathcal{R}(U^*)$, where $\mathcal{R}(U)$ and $\mathcal{N}(U)$ denote the linear space spanned by the columns of the matrix $U$ (range of $U$) and the null space of $U$, respectively. Note that if $\operatorname{rank}(A)=n$, then $U^*U=I_n$, and $U$ is an orthonormal Stiefel matrix.

It is assumed that the singular value decomposition (SVD) of $A$ has the following form:
$A = P\Sigma Q^*$,  (2)
wherein $P\in\mathbb{C}^{m\times m}$ and $Q\in\mathbb{C}^{n\times n}$ are unitary matrices and
$\Sigma = \begin{pmatrix}\Sigma_n\\ 0\end{pmatrix}$, $\Sigma_n=\operatorname{diag}(\sigma_1,\ldots,\sigma_n)$, $\sigma_1\ge\sigma_2\ge\cdots\ge\sigma_n\ge 0$.  (3)
The Hermitian factor $H$ is always unique and it can be expressed as $H=(A^*A)^{1/2}$, and the unitary factor $U$ is unique if $A$ is nonsingular. Note that, here, the exponent $1/2$ denotes the principal square root: the one whose eigenvalues lie in the right half-plane.

The polar decomposition may be easily constructed from an SVD of the given matrix $A$. However, the SVD is a substantial calculation that reveals much more about the structure of $A$ than the polar decomposition does. Constructing the polar decomposition from the SVD destroys this extra information and wastes the arithmetic work used to compute it. It is intuitively more appealing to use the polar decomposition as a preliminary step in the computation of the SVD.
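To make the preceding relations concrete, the following short Mathematica sketch (our illustration, not code from the paper) recovers both polar factors from a full SVD; the function name polarFromSVD and the full-rank assumption $m\ge n$ are ours.

polarFromSVD[a_?MatrixQ] := Module[{m, n, p, s, q},
  {m, n} = Dimensions[a];
  {p, s, q} = SingularValueDecomposition[N[a]];
  {p[[All, 1 ;; n]].ConjugateTranspose[q],       (* U = P [I_n; 0] Q^* *)
   q.s[[1 ;; n, 1 ;; n]].ConjugateTranspose[q]}  (* H = Q Sigma_n Q^* = (A^* A)^(1/2) *)
 ]

For instance, {u, h} = polarFromSVD[RandomReal[{-1, 1}, {5, 3}]] returns factors with u.h reproducing the input up to rounding.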

On the other hand, a related issue to the polar decomposition is the matrix sign decomposition, which is defined for a matrix $A\in\mathbb{C}^{n\times n}$ having no pure imaginary eigenvalues. The most concise definition of the matrix sign decomposition is given in [3]:
$A = SN$, $S=\operatorname{sign}(A)$, $N=(A^2)^{1/2}$.  (4)
Here, $S=\operatorname{sign}(A)$ is the matrix sign function, introduced by Roberts [4]. We herewith remark that, in this work, whenever we write about the computation of a matrix sign function, we mean a square matrix with no eigenvalues on the imaginary axis.

Now we briefly review some of the most important iterative methods for computing the matrix polar decomposition.

Among the many iterations available for computing the polar decomposition, the most practically useful are the scaled Newton iteration [5] and the recently proposed dynamically weighted Halley iteration [6]. The method of Newton introduced for computing the polar decomposition (via the unitary polar factor) is defined in [5] by the iterative scheme
$X_{k+1} = \frac{1}{2}\left(X_k + X_k^{-*}\right)$,  (5)
for the square nonsingular cases and by the following alternative for general rectangular cases [7]:
$X_{k+1} = \frac{1}{2}\left(X_k + (X_k^{\dagger})^{*}\right)$.  (6)
Note that $X^{\dagger}$ stands for the Moore-Penrose generalized inverse, $X^{-*}=(X^{-1})^{*}$, and $X_0 = A$.
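As an illustration, iteration (6) can be realized in a few lines of Mathematica; this is a minimal sketch under our own naming and stopping rule (a relative correction test in the Frobenius norm), not the authors' implementation.

newtonPolar[a_?MatrixQ, tol_ : 10^-12, kmax_ : 100] :=
 FixedPoint[(# + ConjugateTranspose[PseudoInverse[#]])/2 &, N[a], kmax,
  SameTest -> (Norm[#1 - #2, "Frobenius"] <= tol Norm[#2, "Frobenius"] &)]

Since PseudoInverse reduces to the ordinary inverse for nonsingular square input, the same code also covers (5).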

The cubically convergent method of Halley,
$X_{k+1} = X_k\left(3I + X_k^{*}X_k\right)\left(I + 3X_k^{*}X_k\right)^{-1}$, $X_0 = A$,  (7)
has been developed in [8] for computing the unitary polar factor.
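A corresponding sketch of Halley's iteration (7), again with an assumed relative-correction stopping test, reads as follows.

halleyPolar[a_?MatrixQ, tol_ : 10^-12, kmax_ : 100] :=
 Module[{id = IdentityMatrix[Last[Dimensions[a]]]},
  FixedPoint[
   Function[x, With[{s = ConjugateTranspose[x].x},
     x.(3 id + s).Inverse[id + 3 s]]],   (* X (3I + X^*X)(I + 3 X^*X)^(-1) *)
   N[a], kmax,
   SameTest -> (Norm[#1 - #2, "Frobenius"] <= tol Norm[#2, "Frobenius"] &)]]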

The particular formula (7) is also applicable to singular or rectangular matrices. This scheme has further been developed adaptively in [6]. In fact, the dynamically weighted Halley method (DWH) for computing the unitary polar factor has been introduced as follows:
$X_{k+1} = X_k\left(a_k I + b_k X_k^{*}X_k\right)\left(I + c_k X_k^{*}X_k\right)^{-1}$.  (8)
The parameters $a_k$, $b_k$, and $c_k$ are dynamically chosen to accelerate the convergence. They are computed by
$a_k = h(l_k)$, $b_k = (a_k-1)^2/4$, $c_k = a_k + b_k - 1$,  (9)
where
$h(l) = \sqrt{1+\gamma} + \frac{1}{2}\sqrt{8 - 4\gamma + \frac{8(2-l^2)}{l^2\sqrt{1+\gamma}}}$, $\gamma = \sqrt[3]{\frac{4(1-l^2)}{l^4}}$.  (10)
In (9), $l_0$ is a lower bound for the smallest singular value of $X_0$. Fortunately, once $l_0$ is obtained, effective and sharp bounds can be attained at no cost from the following recurrence:
$l_k = l_{k-1}\left(a_{k-1} + b_{k-1}l_{k-1}^2\right)/\left(1 + c_{k-1}l_{k-1}^2\right)$, $k\ge 1$.  (11)

An initial matrix $X_0$ must be employed in a matrix fixed-point type method so as to arrive at the convergence phase. Such a sharp initial approximation for the unitary factor can be expressed as
$X_0 = A/\alpha$,  (12)
whereas $\alpha$ is an estimate of $\|A\|_2$ (a safe choice of which is $\alpha = \|A\|_F$).

The rest of this paper unfolds as follows. Section 2 derives a new iterative scheme for solving nonlinear equations. Additionally, by an illustration via basins of attraction, we single out a scheme with global convergence. The global convergence of the constructed solver is verified analytically. In Section 3, we generalize the proposed nonlinear equation solver to an iterative method for finding the polar decomposition. Furthermore, we prove that the new scheme is convergent. The computational complexity is discussed in Section 4. In Section 5, we define an extension of the proposed method for the numerical computation of the matrix sign function and show its asymptotic stability. Section 6 is devoted to the application of the contributed methods to several numerical examples (in double precision arithmetic as well as in a high precision computing environment). Finally, Section 7 draws a conclusion of this paper.

2. Chebyshev-Halley Type Iteration and Its Extension

Many of the iterative methods for computing matrix functions can be deduced by applying a nonlinear equation solver to a special mapping. For example, applying Newton's method to the mapping
$F(X) := X^{*}X - I = 0$,  (13)
in which $I$ is the identity matrix of the appropriate size, results in the iterates (5) (note that the equivalent form of (13) for the matrix sign is $F(X) := X^{2} - I = 0$). This reveals a close relation between matrix functions and iterative root-finding methods [3].

Gutiérrez and Hernández in [9] developed a Chebyshev-Halley type scheme in Banach space for finding simple zeros of $f(x)=0$, which can be written as follows:
$x_{k+1} = x_k - \left(1 + \frac{1}{2}\,\frac{L_f(x_k)}{1-\theta L_f(x_k)}\right)\frac{f(x_k)}{f'(x_k)}$,  (14)
wherein $L_f(x_k) = \frac{f(x_k)f''(x_k)}{f'(x_k)^2}$, $\theta\in\mathbb{R}$, and the convergence order is cubic.
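For experimentation, the scalar family (14) can be encoded as a one-step map; theta = 0, 1/2, and 1 reproduce the classical Chebyshev, Halley, and super-Halley methods, respectively. The name chStep is our own.

chStep[f_, theta_][x_] := Module[{lf = (f[x] f''[x])/f'[x]^2},
  x - (1 + lf/(2 (1 - theta lf))) f[x]/f'[x]]

For example, NestList[chStep[Function[t, t^2 - 1], 1/2], 1.5, 4] converges rapidly to the zero 1.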

If we apply (14) to solve (13), we obtain a family of at least cubically convergent schemes for finding the polar decomposition. However, an important barrier arises: the nonglobal convergence of some members of this family.

On the other hand, Iannazzo in [10] showed that, for the so-called pure matrix iterations, the matrix convergence is governed by the scalar convergence. We remark that this holds for the matrix sign function when the scalars are the eigenvalues, while for the polar decomposition it holds when the scalars are the singular values. This is also well illustrated in the textbook [3].

Hence, we should find a member of (14) such that the derived method is new and also possesses global convergence. Toward this goal, we employ the theory of basins of attraction for (14) applied to the quadratic polynomial $p(z)=z^2-1$ on a square of the complex plane, where the maximal number of iterations is fixed to 100 and a small residual tolerance serves as the stopping criterion.

This is done in Figures 1–3 for different values of $\theta$. In Figures 1(a) and 2(a), the convergence in the process of solving $p(z)=0$ is local, and therefore convergence of the corresponding matrix iterations could happen only for very sharp initial matrices. Moreover, the method in Figure 3(b) behaves chaotically and also includes divergence points. The divergence points are contained in the black areas and, accordingly, such members are of no interest.
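The basin plots can be reproduced in Mathematica along the following lines; the square $[-2,2]\times[-2,2]$, the residual test, and the coloring are our assumptions, since the exact plotting parameters are not restated here.

basinPlot[g_, nmax_ : 100, eps_ : 10^-6] := DensityPlot[
  Module[{z = x + I y, k = 0},
   While[Abs[z^2 - 1] > eps && k < nmax, z = g[z]; k++];
   If[k >= nmax, 0, Sign[Re[z]]]],   (* 0 marks nonconvergence; otherwise +1 or -1 by the root reached *)
  {x, -2, 2}, {y, -2, 2}, PlotPoints -> 150]

A call such as basinPlot[chStep[Function[t, t^2 - 1], 1/2]] colors the two half-planes of Halley's member.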

We remark that the tested values of $\theta$ have been chosen randomly and experimentally. Clearly, the investigation of the remaining cases is left for future studies.

Among the six tested methods, extracted by different particular values of $\theta$ and illustrated in Figures 1–3, only the methods illustrated in Figures 1(b), 2(b), and 3(a) have global convergence. A deeper verification reveals that the methods used in Figures 1(b) and 3(a) are in fact Halley's method and its reciprocal, respectively. Since those are not new, the only novel and useful method which has global convergence and remains from the six considered members of (14) is the iterative method used in Figure 2(b); we write it as (15), that is, the family (14) with this particular choice of $\theta$.

To increase the order of convergence, let us now compose the method of Newton with iteration (14), corresponding to the value of $\theta$ chosen in (15), in order to derive a useful high order globally convergent iterative scheme for solving $f(x)=0$. This intention leads to
$y_k = x_k - \left(1 + \frac{1}{2}\,\frac{L_f(x_k)}{1-\theta L_f(x_k)}\right)\frac{f(x_k)}{f'(x_k)}$, $x_{k+1} = y_k - \frac{f(y_k)}{f'(y_k)}$,  (16)
wherein $L_f(x_k) = \frac{f(x_k)f''(x_k)}{f'(x_k)^2}$.
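In code, the composition (16) is simply one chStep followed by a Newton correction; the particular value of theta stands for the member selected via Figure 2(b).

newStep[f_, theta_][x_] := Module[{y = chStep[f, theta][x]},
  y - f[y]/f'[y]]   (* Newton correction applied to the third-order output *)

A quick check with the Halley member, NestList[newStep[Function[t, t^2 - 1], 1/2], 0.4 + 2. I, 4], illustrates the fast contraction toward the nearest zero.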

Theorem 1. Let $\alpha$ be a simple zero of a sufficiently differentiable function $f$ on an open interval $D$, which contains $x_0$ as an initial approximation of $\alpha$. Then the iterative expression (16) without memory has sixth order of convergence.

Proof. To save space and in order not to deviate from the main topic, we exclude the proof here; its steps are similar to those taken in [11].

The iterative method (16) reaches sixth-order convergence using five functional evaluations and thus achieves the efficiency index $6^{1/5}\approx 1.431$, which is higher than that of Newton's method, $2^{1/2}\approx 1.414$. We remark that the costs of one evaluation of the function and of its first and second derivatives are all assumed to be unity. Furthermore, the application of (16) to the polynomial equation $z^2-1=0$ possesses global convergence (except for the points lying on the imaginary axis). This is illustrated analytically in what follows.

In terms of fractal theory, it is necessary to find the global basins of attraction for a zero $z^{*}$:
$\mathcal{A}(z^{*}) = \{z_0\in\mathbb{C} : \lim_{n\to\infty} R^{n}(z_0) = z^{*}\}$,  (17)
where $R^{n}$ is the $n$-fold composition of the iteration function $R$. Here, using (16) for $p(z)=z^2-1$, we obtain the rational iteration function (18) (in its reciprocal form).

To check the global convergence of (16) in the case of the quadratic polynomial $p(z)=z^2-1$, with the zeros $\pm 1$, we start from the iteration (19) for an arbitrary $z_0\in\mathbb{C}$ and find the relation (20) between consecutive iterates, wherein $t_k = \frac{z_k-1}{z_k+1}$.

Let $\partial S$ denote the boundary of a set $S$. One of the basic notions in fractal theory connected to iterative processes and the convergence of an iteration function is the Julia set of the proposed operator $R$. Thus, when $t_0 = \frac{z_0-1}{z_0+1}$, we obtain the following from (20):
(1) If $|t_0| < 1$, then $\lim_{k\to\infty} t_k = 0$, and $\lim_{k\to\infty} z_k = 1$.
(2) If $|t_0| > 1$, then $\lim_{k\to\infty} |t_k| = \infty$, and $\lim_{k\to\infty} z_k = -1$.

Furthermore, the basins of attraction $\mathcal{A}(1)$ and $\mathcal{A}(-1)$ in the case of the operator $R$ are the half-planes on either side of the line $\operatorname{Re}(z)=0$ (the imaginary axis). Since $\pm 1$ are attractive fixed points of $R$, the Julia set is the boundary of the basins of attraction $\mathcal{A}(1)$ and $\mathcal{A}(-1)$; that is,
$\partial\mathcal{A}(1) = \partial\mathcal{A}(-1) = \{z\in\mathbb{C} : \operatorname{Re}(z)=0\}$.  (21)

The Julia set is just the line $\operatorname{Re}(z)=0$ for (19), and thus the new sixth-order method (18) is globally convergent. Therefore, the presented method has global behavior, even outside the square which is considered in Figure 1.

In addition, we remark that a globally convergent sixth-order method can be easily constructed by composing standard Halley and Newton methods.

The main contribution of this work lies in the next two sections, in which we extend (16) to iterative methods for computing the polar decomposition and the matrix sign function.

3. Extension to Polar Decomposition

This section is devoted to the extension of (16) to solve the matrix equation (13). This application enables us to obtain a fast sixth-order iterative matrix method for constructing the polar decomposition. Note that improvements in hardware and software have been indispensable here, since higher order methods produce approximations of great accuracy and require complicated convergence analysis, which is feasible only by symbolic computation. Subsequently, in this paper, we use Mathematica [12] to illustrate the speed of convergence.

To this end, applying (16) to (13), in conjunction with further simplifications, produces the rational iteration (22) in its reciprocal form, where $X_0$ is given by (12).

Now, we have a novel iterative fixed-point type method for finding the polar decomposition via calculating the unitary matrix .

Theorem 2. Assume that $A\in\mathbb{C}^{m\times n}$ is an arbitrary matrix. Then, the matrix iterates of (22) converge to the unitary polar factor $U$ using the initial matrix (12).

Proof. To prove the statement, we make use of the SVD of $A$ in the form (2), where the zero blocks in $\Sigma$ may be absent. Note that the steps of the proof are similar to those which have recently been taken in [2]. Define
$D_k = P^{*}X_k Q$.  (23)
Subsequently, from (22), we obtain the recursion (24) for $D_k$. Since $D_0 = \Sigma$ has only diagonal and zero elements, it follows by induction that every $D_k$ has the same structure and that the sequence $\{D_k\}$ is defined by the diagonal recursion (25). Accordingly, (25) represents $n$ uncoupled scalar iterations (26) acting on the singular values. Simple manipulations yield the relation (28). Since each singular value $\sigma_i$ is positive, (28) holds for each $k\ge 0$. It follows that the scalar iterates tend to 1 as $k\to\infty$; that is to say, (29) holds. Therefore, as $k\to\infty$, $X_k = PD_kQ^{*}\to U$. The proof is complete.

Theorem 3. Let $A\in\mathbb{C}^{m\times n}$ be an arbitrary matrix. Then, method (22) has sixth order of convergence for finding the unitary polar factor $U$ of $A$.

Proof. The proposed scheme (22) transforms the singular values of $A$ according to (30) and leaves the singular vectors invariant. In view of (30), it is enough to show that the convergence of the singular values to unity has sixth order, for $1\le i\le n$, as expressed by (31). Now, we attain the error relation (32). This reveals the sixth order of convergence of the new method (22). The proof is ended.

Note that, in 1991, Kenney and Laub [13] proposed a family of rational iterative methods for the matrix sign, based on Padé approximation. Their principal Padé iterations are globally convergent. Thus, convergent methods of arbitrary order are available for the matrix sign (and subsequently for the polar decomposition). Here, however, we have tried to propose another new and useful method for this purpose.

We now end this section by recalling an important approach for increasing the convergence speed of (22). The sixth-order convergence of iteration (22) ensures rapid convergence in the final stages of the iterates, but the speed of convergence can be slow at the beginning of the process, so it is helpful to scale the matrix before each cycle. An important scaling approach in the Frobenius norm, yielding the scaling factor (33), was derived in [14].

See [15] for more details. We remark that numerical experiments show improvements in the speed of convergence when applying (33).
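As an illustration of how such a scaling enters a cycle, the following lines compute a Frobenius-norm scaling factor of the type known to be optimal for the Newton iteration; whether this coincides exactly with (33) depends on the formula in [14], so it should be read as an assumption.

zeta[x_?MatrixQ] := Sqrt[Norm[PseudoInverse[x], "Frobenius"]/Norm[x, "Frobenius"]]
(* a scaled cycle replaces X_k by zeta[X_k] X_k before applying the iteration map *)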

Remark 4. Applying the scaling factor (33) to each cycle of (22) yields the accelerated form (34) of the new scheme.

4. Computational Complexity

In this section, we evaluate the computational efficiency of (22). To compare the behavior of different matrix methods for finding $U$, we recall the definition of the efficiency index,
$EI = \rho^{1/\mathcal{C}}$,  (35)
wherein $\mathcal{C}$ and $\rho$ denote the computational cost and the convergence order per cycle. Here, in order to have a fair comparison, and since there are matrix-matrix multiplications (denoted by mmm) and matrix inversion(s) per computing cycle of (6), (7), and (22), we extend (35) to the computational efficiency index (36), so as to incorporate all the existing cost factors of an algorithm. In (36), the three quantities involved denote the whole number of iterations required by a method to converge to $U$, the number of mmm per cycle, and the considered cost of matrix inversion per cycle, respectively.

On the other hand, it is assumed that the cost of one mmm is 1 unit, with the costs of one matrix inversion and of computing the Moore-Penrose inverse weighted relative to it (the latter as 1.4 units). We considered these weights empirically after testing many random matrices on a typical PC. We also neglect the cost of additions, subtractions, and so forth, because their costs are negligible in contrast to the costs of mmm and matrix inversion.

Now, the approximated CEI values for the studied methods are given by (37), wherein the three iteration counts are the whole numbers of iterations required by (6), (7), and (22), respectively, to converge in the same environment.

Finally, we assume that the number of iterations for (22) is fixed; the numbers of iterations for (6) and (7) are then roughly proportional multiples of it (close to the theoretical factors $\log_2 6\approx 2.6$ and $\log_3 6\approx 1.6$), since they have the second and the third orders of convergence in contrast to the sixth order of (22). We have obtained these factors empirically via solving many numerical examples.

The results of comparisons have now been drawn and illustrated in the bar chart of Figure 4. We here remark that NMP, HMP, and PMP stand for Newton's method for polar decomposition, Halley's method for polar decomposition, and the proposed method for polar decomposition, respectively. Such naming will be used from now on. In this figure, one may easily observe that when a higher number of iterations is required (occasionally for large scale matrices), the new method performs much better, because it requires a smaller number of matrix inversions. In Figure 4, for example, when the number of steps required for the convergence of (22) is 4, around 10 and 7 steps are required for (6) and (7) to converge. Note that we plotted the logarithm of the CEI in Figure 4 to make the distinctions clearly visible.

The only difficulty with our high order method is that if one cycle produces a result of lower accuracy than the double precision tolerance requires, then a further expensive cycle must be carried out to reach that tolerance. In our proposed algorithm (ALP, i.e., Algorithm for polar decomposition), we therefore first apply (22) to arrive at the convergence phase rapidly and accelerate the beginning of the process, and we then apply the simple method (6) for the last step only. This switching idea is illustrated in Algorithm 1.

(1) $X_0$ is given by (12)
(2) use (22) until the coarse tolerance is met
(3) if the final stopping criterion has already been met, go to step (5); else
(4) apply (6) to fulfill the stopping criterion
(5) end
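A compact sketch of ALP, with the one-cycle maps passed as arguments and with an assumed coarse switching tolerance tolSwitch, could look as follows; the criterion mirrors the relative correction test (48) used later.

alp[a_?MatrixQ, stepHigh_, stepNewton_, tolSwitch_ : 10^-4, tol_ : 10^-12] :=
 Module[{x = N[a], xNew},
  xNew = stepHigh[x];                 (* phase 1: high-order iteration (22) *)
  While[Norm[xNew - x, "Frobenius"] > tolSwitch Norm[xNew, "Frobenius"],
   x = xNew; xNew = stepHigh[x]];
  x = xNew; xNew = stepNewton[x];     (* phase 2: one or more cheap steps of (6) *)
  While[Norm[xNew - x, "Frobenius"] > tol Norm[xNew, "Frobenius"],
   x = xNew; xNew = stepNewton[x]];
  xNew]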

5. Extension for Matrix Sign

Herein, we show an application of the proposed iterative method (16) to finding the matrix sign function. In fact, there is a tight relationship between the matrix sign function and the polar decomposition. As introduced in Section 1, the sign of a nonsingular square matrix is an important matrix function with potential applications in different branches of mathematics; see [16, 17].

The iterative rule
$X_{k+1} = \frac{1}{2}\left(X_k + X_k^{-1}\right)$,  (38)
which is also known as the Newton method, is the most common and well-known way for finding the sign of a square nonsingular matrix. It converges quadratically when $X_0 = A$ is chosen as the initial matrix.
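A minimal realization of (38), analogous to newtonPolar above but with the plain inverse, is the following.

newtonSign[a_?MatrixQ, tol_ : 10^-12, kmax_ : 100] :=
 FixedPoint[(# + Inverse[#])/2 &, N[a], kmax,
  SameTest -> (Norm[#1 - #2, "Frobenius"] <= tol Norm[#2, "Frobenius"] &)]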

A general family of matrix iterations for finding the matrix sign function was discussed thoroughly in [13]. An example from this class of methods is the method of Halley, which is a modification of (7) and is defined by
$X_{k+1} = X_k\left(3I + X_k^{2}\right)\left(I + 3X_k^{2}\right)^{-1}$, $X_0 = A$.  (39)

Accordingly, the new scheme (22) can simply be used for finding the matrix sign function with a little modification, essentially replacing $X_k^{*}X_k$ by $X_k^{2}$; we denote the resulting iteration by (40).

Iteration (40) is rational. We investigate the stability of (40) for finding $S=\operatorname{sign}(A)$ in a neighborhood of the solution. In fact, we analyze how a small perturbation at the $k$th iterate is amplified or damped along the iterates.

Lemma 5. Under the same conditions as in Theorem 3, the sequence $\{X_k\}$ generated by (40) is asymptotically stable.

Proof. If $X_0$ is a function of $A$, then the iterates from (40) are all functions of $A$ and hence commute with $A$. Let $\Delta_k$ be a numerical perturbation introduced at the $k$th iterate of (40), and set $\tilde{X}_k = X_k + \Delta_k$ (41). Here, we perform a first-order error analysis; that is, we formally use the approximations $\Delta_k^{j}\approx 0$, $j\ge 2$, since $\Delta_k$ is close to the zero matrix. This formal approximation is meaningful if $\|\Delta_k\|$ is sufficiently small. We obtain the expansion (42), wherein identities of the type
$(B + \Delta)^{-1} \approx B^{-1} - B^{-1}\Delta B^{-1}$  (43)
were used (for any nonsingular matrix $B$ and a perturbation $\Delta$). Note that, after some algebraic manipulations and the approximation $X_k\approx S$ (valid for large enough $k$), one can verify the first-order recursion (44) for $\Delta_{k+1}$. We also used the equalities $S^{-1}=S$ and $S^{2}=I$, relative to the matrix sign function. We can now conclude that the perturbation at the $(k+1)$th iterate is bounded; that is, (45) holds. Therefore, the sequence $\{X_k\}$ generated by (40) is asymptotically stable. This ends the proof.

It is now not difficult to also show that (40) has sixth order of convergence, in the sense of the error inequality
$\|X_{k+1} - S\| \le \eta\,\|X_k - S\|^{6}$,  (46)
where $\eta$ is a constant depending on the matrix under consideration.

Here we remark that we have not given any discussion of backward stability for the polar decomposition in the sense of measuring the backward error $\|A - \hat{U}\hat{H}\|$ (backward stability is a much stronger notion than the stability discussed above).

An acceleration via scaling, similar to that of (34), is applicable to (40); see [18] for an interesting choice of the scaling parameter. In Figure 5, we have drawn the basins of attraction for (16) applied to complex polynomials $z^{n}-1$ of different orders $n$. Although Figure 5(a) shows the global convergence of our method for the matrix sign function (also shown theoretically at the end of Section 2), the application of the proposed method to the computation of the matrix sector function needs deeper care.

A similar implementation for (40) is advised and can be found in Algorithm 2 (we denote it by ALS, i.e., Algorithm for Sign).

(1) $X_0$ is given
(2) use (40) until the coarse tolerance is met
(3) if the final stopping criterion has already been met, go to step (5); else
(4) apply (38) to fulfill the stopping criterion
(5) end

6. Numerical Experiments

We have tested the contributed methods (22), denoted by PMP, and (40), denoted by PMS, using the programming package Mathematica 8 with machine precision. Apart from these schemes, several iterative methods, such as (6), denoted by NMP, and (7), denoted by HMP, which require one (pseudo)inverse per cycle, have been tested. We also use the algorithms ALP, ALS, NMS, and HMS. Note that a sixth-order method from the Padé family [13] for the matrix sign, namely, the [2/3]-Padé approximant, is defined as follows:
$X_{k+1} = \left(6X_k + 20X_k^{3} + 6X_k^{5}\right)\left(I + 15X_k^{2} + 15X_k^{4} + X_k^{6}\right)^{-1}$.  (47)
We denote this iteration by SMS (sixth-order method for sign). Although the computational complexity of this method is lower than that of PMS, the PMS produces results of higher accuracy in high precision computing, as will be observed in Example 2.

The stopping criterion in the performed numerical experiments is
$\|X_{k+1} - X_k\| / \|X_{k+1}\| \le \epsilon$,  (48)
wherein $\epsilon$ is the tolerance. A similar stopping criterion is used when finding the matrix sign function.

Example 1. In this experiment, we study the behavior of different methods for finding the unitary polar factor of a complex rectangular matrix which is randomly generated (with uniform distribution) using the code
SeedRandom[1234];
m = 400; n = 200;
A = RandomComplex[{-1 - I, 1 + I}, {m, n}];
The results of the comparison are arranged together in Table 1, applying the chosen tolerance. One can easily observe a clear reduction in the number of iterations and in the CPU time for finding the polar decomposition when using ALP.

A considerable increase of the accuracy of the approximations produced by the proposed method can be observed from the numerical results. To check the accuracy, we have computed and reported the residual norm of the last iteration in Table 1.

An important application of higher order methods is in high precision computing. In the next test, we consider an academical example with 128-digit floating point arithmetic. We do this purposely so as to check the convergence behavior of the different methods via the computational order of convergence (COC) based on (48). The COC can be approximated by
$\rho \approx \dfrac{\ln\left(\|X_{k+1} - X_k\| / \|X_k - X_{k-1}\|\right)}{\ln\left(\|X_k - X_{k-1}\| / \|X_{k-1} - X_{k-2}\|\right)}$,  (49)
in which the last three approximations for the polar decomposition or the matrix sign function are used.
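Given the correction norms d1, d2, d3 of the last three steps, (49) is a one-liner; the helper name coc is ours.

coc[d1_, d2_, d3_] := Log[d3/d2]/Log[d2/d1]
(* dk stands for Norm[X_(k+1) - X_k] recorded along the run *)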

Example 2 (academical test). In this experiment, we study the behavior of different methods for finding the matrix sign function of a fixed test matrix. The results of the comparison are reported in Figure 6 and Table 2, applying the stated tolerance. One can easily observe a clear reduction in the number of iterations ensured by the application of PMS. To check the accuracy of the results, we have computed and reported the residual norm.

Example 3. In this experiment, we compare the behavior of different methods accelerated via the scaling (33) for finding the unitary polar factors. We compare the Newton method (NMP), the accelerated Newton method (ANMP), PMP, and the accelerated proposed method for polar decomposition (34), denoted by APMP. We used the following six complex rectangular matrices (with uniform distribution):
m = 310; n = 300; number = 6;
SeedRandom[345];
Table[A[l] = RandomComplex[{-10 - 10 I, 10 + 10 I}, {m, n}];, {l, number}];
The results of the comparison are reported in Tables 3 and 4, applying the chosen tolerance in double precision. One can easily observe a clear reduction in the number of iterations, ensured by the acceleration via scaling (33).

It can be observed that, for each presented method (e.g., NMP), the number of iterations is the same for all of the matrices. It is also worth remarking that there is an upper bound on the maximal number of iterations of each method for computing the matrix sign and the polar decomposition in double precision arithmetic, as discussed fully in [3].

Example 4. Finally, to report the number of iterations when the input matrix is ill-conditioned, we consider the Hilbert matrix of dimension 10, whose condition number is approximately $1.6\times 10^{13}$ (in the 2-norm) [18]. The unitary polar factor is known to be the identity matrix in this case. Applying the aforementioned tolerance, we found that NMP, HMP, and PMP require 49, 31, and 19 cycles, respectively. This again shows the superiority of the proposed approach.
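This setting is easy to reproduce: HilbertMatrix[10] is built into Mathematica, its symmetry and positive definiteness force the unitary polar factor to be the identity, and the 2-norm condition number follows from the extreme singular values.

h = N[HilbertMatrix[10], 30];
First[#]/Last[#] &[SingularValueList[h]]   (* about 1.6*10^13 *)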

7. Concluding Remarks

The polar decomposition is an important theoretical and computational tool, notable for its approximation properties, its sensitivity to perturbations, and the variety of ways it can be computed.

In this paper, we have developed a high order method for solving nonlinear scalar equations and then extended it to an iterative method applicable to the computation of the matrix polar decomposition. It has been shown that the convergence of the method is global and that its convergence rate is six.

It has further been discussed how the proposed method could be applied in finding the matrix sign function. The presented scheme possesses asymptotic stability and it is very useful in a high precision computing environment. Some numerical tests have been provided to show the performance of the new methods.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

The second author gratefully acknowledges support from the Research Project 174013 of the Serbian Ministry of Science. The second and third authors gratefully acknowledge support from the Project “Applying Direct Methods for Digital Image Restoring” of the Goce Delčev University.