Abstract

The indefinite inner product defined by $[x, y]_J = y^*Jx$, where $J = \mathrm{diag}(j_1, \dots, j_n)$ with $j_k \in \{-1, +1\}$, arises frequently in applications such as the theory of relativity and the study of polarized light. This indefinite scalar product is referred to as a hyperbolic inner product. In this paper, we introduce three indefinite iterative methods: the indefinite Arnoldi's method, the indefinite Lanczos method (ILM), and the indefinite full orthogonalization method (IFOM). The indefinite Arnoldi's method is introduced as a process that constructs a J-orthonormal basis for the nondegenerate Krylov subspace. The ILM is introduced as a special case of the indefinite Arnoldi's method for J-Hermitian matrices. IFOM is presented as a process for solving linear systems of equations with J-Hermitian coefficient matrices. Finally, through numerical examples, the FOM, IFOM, and ILM processes are compared with each other in terms of the time required for solving linear systems and in terms of the number of iterations.

1. Introduction

Nowadays, iterative methods are used extensively for solving general large sparse linear systems in many areas of scientific computing because they are easier to implement efficiently on high-performance computers than direct methods.

Projection methods for solving systems of linear equations have been known for some time. The initial development was done by A. de la Garza [1].

One process by which an approximate solution of a linear system can be found is a projection method onto a subspace $\mathcal{K}$ and orthogonal to a subspace $\mathcal{L}$. This method is based on the requirements that the approximate solution belong to $x_0 + \mathcal{K}$ and that the new residual vector be orthogonal to $\mathcal{L}$ (for more details, refer to [2–5]). Around the early 1950s, the idea of Krylov subspace iteration was established by Cornelius Lanczos and Walter Arnoldi. Lanczos's method was based on two mutually orthogonal vector sequences, and his motivation came from eigenvalue problems. In that context, the most prominent feature of the method is that it reduces the original matrix to tridiagonal form. Lanczos later applied his method to solve linear systems, in particular, symmetric ones. Krylov subspace iterations, or Krylov subspace methods, are iterative methods which are used as linear system solvers and also as iterative solvers of eigenvalue problems. Krylov subspace methods build up Krylov subspaces and look for good approximations to the solution or to eigenvectors; this is done by keeping all computed approximations and combining them to obtain a better one.

In this paper, we introduce three iterative methods in a space with a hyperbolic inner product. These methods are the indefinite Arnoldi method, the indefinite Lanczos method (ILM), and the indefinite full orthogonalization method (IFOM), and we define new algorithms to run these hyperbolic versions. Through numerical examples, we compare these indefinite algorithms with their usual definite counterparts, in terms of the number of iterations and the time required to run the algorithms.

This paper is organized as follows: in Section 2, the indefinite Arnoldi's process is proposed to construct a J-orthonormal basis. In Section 3, we present IFOM for solving linear systems of equations, and in Section 4, we give the ILM; at the end of that section, several numerical examples are presented to compare running times and iteration counts. Counting the multiplication operations in the FOM, IFOM, and ILM algorithms and the conclusion are the last two sections, respectively.

2. Indefinite Arnoldi’s Method

We know that there are many applications which require a nonstandard scalar product, usually defined by $\langle x, y \rangle_J = y^*Jx$, where J is some nonsingular matrix. Many of these applications consider a Hermitian or skew-Hermitian J, and an indefinite scalar product is defined by a nonsingular Hermitian indefinite matrix J as $[x, y] = y^*Jx$. When J is the signature matrix $J = \mathrm{diag}(j_1, \dots, j_n)$, where $j_k = \pm 1$ for all k, the scalar product is referred to as hyperbolic and takes the following form:
$[x, y]_J = \sum_{k=1}^{n} j_k x_k \overline{y_k} = y^*Jx.$

Example 1. Let $x = (x_1, \dots, x_n)$, $y = (y_1, \dots, y_n) \in \mathbb{C}^n$, and define
$[x, y] = \sum_{k=1}^{r} x_{\sigma(k)}\overline{y_{\sigma(k)}} - \sum_{k=r+1}^{n} x_{\sigma(k)}\overline{y_{\sigma(k)}},$
where σ is a permutation of $\{1, 2, \dots, n\}$ and r is an arbitrary integer from 0 to n. It is easy to see that $[\cdot, \cdot]$ is an indefinite inner product and that its corresponding nonsingular Hermitian matrix is of the form $J = \mathrm{diag}(j_1, \dots, j_n)$ with $j_{\sigma(k)} = +1$ for $k \le r$ and $j_{\sigma(k)} = -1$ for $k > r$; that is, r is the number of $+1$ entries and $n - r$ is the number of $-1$ entries. Indeed, $[x, y] = y^*Jx$ for all $x, y \in \mathbb{C}^n$.
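For concreteness, here is a minimal NumPy sketch of the hyperbolic inner product; the function name j_inner and the representation of J by its ±1 diagonal are our own illustrative choices, not notation from the paper.

import numpy as np

def j_inner(x, y, j):
    # [x, y]_J = sum_k j_k * x_k * conj(y_k) = y^* J x,
    # where j holds the +/-1 diagonal entries of the signature matrix J.
    return np.sum(j * x * np.conj(y))

# r = 2 entries equal to +1 followed by n - r = 1 entry equal to -1:
j = np.array([1.0, 1.0, -1.0])
x = np.array([1.0, 2.0, 3.0])
print(j_inner(x, x, j))  # 1 + 4 - 9 = -4: a nonzero vector can have a negative "square length"

The last line illustrates the indefiniteness: unlike the Euclidean case, $[x, x]$ may be negative or even zero for a nonzero vector.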

Definition 1. If $\mathcal{S}$ is any nonzero subspace of $\mathbb{C}^n$, then a basis $\{v_1, \dots, v_m\}$ for $\mathcal{S}$ is said to be an orthogonal basis with respect to the indefinite inner product if $[v_i, v_j] = 0$ for $i \ne j$, and it is said to be an orthonormal basis if, in addition to orthogonality, $|[v_i, v_i]| = 1$ for $i = 1, \dots, m$. For the hyperbolic inner product, these bases are called J-orthogonal and J-orthonormal bases, respectively.
Let $\mathcal{V}$ be a vector space with an indefinite inner product $[\cdot, \cdot]$. A vector $v$ is called nonneutral if $[v, v] \ne 0$. Note first of all that any set of nonneutral vectors which is orthogonal in the sense of the indefinite inner product is necessarily linearly independent. To see this, suppose that $c_1v_1 + \cdots + c_kv_k = 0$, and hence, for $j = 1, \dots, k$,
$0 = [c_1v_1 + \cdots + c_kv_k, v_j] = c_j[v_j, v_j].$
Then, it follows that $c_j = 0$ for each j.
In this section, we construct the indefinite Arnoldi’s method and then we turn it into a practical algorithm.

Definition 2. Let a matrix A and a starting vector $v_1$ be given. Then, the m-dimensional Krylov subspace is spanned by a sequence of m column vectors:
$\mathcal{K}_m(A, v_1) = \mathrm{span}\{v_1, Av_1, A^2v_1, \dots, A^{m-1}v_1\}. \quad (7)$
It is well known that the construction of a basis for the Krylov subspace with Arnoldi's method leads to an upper Hessenberg matrix that describes the relation between the basis vectors. Each new basis vector can be constructed from the existing set as
$h_{j+1,j}v_{j+1} = Av_j - \sum_{i=1}^{j} h_{ij}v_i, \quad (8)$
where $h_{ij}$ follows from the orthogonality requirement and $h_{j+1,j}$ follows from the requirement that $\|v_{j+1}\|_2 = 1$. The expression (8) can be formulated in matrix notation as
$AV_m = V_{m+1}\bar{H}_m,$
where $\bar{H}_m$ is an $(m+1) \times m$ upper Hessenberg matrix. For further details, refer to [6]. Now, the purpose of this section is to construct a J-orthonormal basis for the Krylov subspace (7), and the indefinite Arnoldi's process is an algorithm that achieves this goal. Furthermore, this process is a tool that condenses the matrix A into Hessenberg form. The indefinite Arnoldi's algorithm for the computation of a J-orthonormal basis of the Krylov subspace (7) is shown in Algorithm 1.

(1)Choose a vector $x$ such that $[x, x] \ne 0$
(2)Define $v_1 = x/|[x, x]|^{1/2}$
(3)For $j = 1, 2, \dots, m$ Do:
(4)For $i = 1, 2, \dots, j$ Do:
(5)Compute $\delta_i = [v_i, v_i]$ and $h_{ij} = \delta_i[Av_j, v_i]$
(6)EndDo
(7)Compute $w_j = Av_j - \sum_{i=1}^{j} h_{ij}v_i$
(8)If $[w_j, w_j] = 0$, then stop
(9)$h_{j+1,j} = |[w_j, w_j]|^{1/2}$
(10)$v_{j+1} = w_j/h_{j+1,j}$
(11)EndDo
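To make these steps concrete, the following Python/NumPy sketch implements our reconstruction of Algorithm 1; all identifiers (indefinite_arnoldi, jip, delta) and the tolerance used to detect a numerically neutral vector are our own choices. The trailing check illustrates Propositions 1 and 2 below, assuming no breakdown occurs for the random instance.

import numpy as np

def indefinite_arnoldi(A, x, j, m):
    # j holds the +/-1 diagonal of the signature matrix J.
    jip = lambda u, w: np.sum(j * u * np.conj(w))      # [u, w]_J = w^* J u
    n = A.shape[0]
    V = np.zeros((n, m + 1), dtype=complex)
    H = np.zeros((m + 1, m), dtype=complex)
    delta = np.zeros(m + 1)
    s = jip(x, x).real
    assert s != 0, "the starting vector must be nonneutral"
    V[:, 0] = x / np.sqrt(abs(s))
    delta[0] = np.sign(s)
    for k in range(m):
        w = A @ V[:, k]
        for i in range(k + 1):
            H[i, k] = delta[i] * jip(w, V[:, i])       # h_{ik} = delta_i [A v_k, v_i]
            w = w - H[i, k] * V[:, i]                  # J-orthogonalize against v_i
        s = jip(w, w).real
        if abs(s) < 1e-14:                             # breakdown: w_k is (numerically) neutral
            return V[:, :k + 1], H[:k + 1, :k], delta[:k + 1]
        H[k + 1, k] = np.sqrt(abs(s))
        V[:, k + 1] = w / H[k + 1, k]
        delta[k + 1] = np.sign(s)
    return V, H, delta

# Check: V* J V = D (J-orthonormality) and V* J A V = D H (relation (12) below).
rng = np.random.default_rng(0)
n, m = 8, 5
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
jvec = np.concatenate([np.ones(4), -np.ones(4)])
V, H, d = indefinite_arnoldi(A, rng.standard_normal(n).astype(complex), jvec, m)
J, D, Vm = np.diag(jvec), np.diag(d[:m]), V[:, :m]
print(np.allclose(Vm.conj().T @ J @ Vm, D, atol=1e-10))
print(np.allclose(Vm.conj().T @ J @ A @ Vm, D @ H[:m, :m], atol=1e-10))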

Proposition 1. Assume that the indefinite Arnoldi's algorithm does not stop before the m-th step. Then, the vectors $v_1, v_2, \dots, v_m$ form a J-orthonormal basis of the Krylov subspace $\mathcal{K}_m(A, v_1)$.

Proof. By considering the following expression, which restates the steps of the algorithm, the proof is straightforward by induction:
$v_{j+1} = \frac{1}{h_{j+1,j}}\Big(Av_j - \sum_{i=1}^{j} h_{ij}v_i\Big), \qquad h_{ij} = \delta_i[Av_j, v_i]. \quad (10)$
Indeed, taking the hyperbolic inner product of both sides with $v_l$, $l \le j$, and using the induction hypothesis $[v_i, v_l] = \delta_i\delta_{il}$ gives $[v_{j+1}, v_l] = 0$, while the choice $h_{j+1,j} = |[w_j, w_j]|^{1/2}$ gives $|[v_{j+1}, v_{j+1}]| = 1$.

Proposition 2. Define the following:
(i)$H_m$, the $m \times m$ Hessenberg matrix whose nonzero entries $h_{ij}$ are defined by the indefinite Arnoldi's algorithm
(ii)$V_m$, the $n \times m$ matrix with column vectors $v_1, \dots, v_m$
(iii)$D_m = \mathrm{diag}(\delta_1, \dots, \delta_m)$, where $\delta_i = [v_i, v_i] = \pm 1$
Then, the following relations are valid:
$AV_m = V_mH_m + h_{m+1,m}v_{m+1}e_m^T, \quad (11)$
$D_mV_m^*JAV_m = H_m. \quad (12)$
In particular, if $m = n$, then $A = V_nH_nD_nV_n^*J$.

Proof. Indeed, in general, (11) is the matrix representation of (10):
$AV_m = V_mH_m + h_{m+1,m}v_{m+1}e_m^T. \quad (14)$
Now, to see (12), left-multiply relation (14) by $V_m^*J$. We obtain
$V_m^*JAV_m = V_m^*JV_mH_m + h_{m+1,m}V_m^*Jv_{m+1}e_m^T. \quad (15)$
On the other hand, given that the vectors $v_1, \dots, v_{m+1}$ build a J-orthonormal basis:
(1)According to the definition of $v_{m+1}$, it is J-orthogonal to $v_1, \dots, v_m$, i.e., $[v_{m+1}, v_i] = 0$ for $i = 1, \dots, m$. Thus, $V_m^*Jv_{m+1} = 0$.
(2)We have $[v_i, v_j] = \delta_i\delta_{ij}$ for $i, j = 1, \dots, m$. In other words, $V_m^*JV_m = D_m$.
Therefore, relation (15) can be summarized as follows:
$V_m^*JAV_m = D_mH_m, \quad (17)$
and by left-multiplying by $D_m$ (note that $D_m^2 = I$), we recover (12). Particularly, if $m = n$, relation (17) yields that $V_n^{-1} = D_nV_n^*J$, and thereby $A = V_nH_nD_nV_n^*J$, which is the last assertion. It is noteworthy that these concepts are used in [7] to solve an eigenvalue problem.

3. Indefinite Full Orthogonalization Method

The purpose of this section is to build an algorithm to solve the linear system
$Ax = b, \quad (21)$
where A is an $n \times n$ complex matrix and the inner product of the space is hyperbolic. Let $\mathcal{K}$ and $\mathcal{L}$ be two m-dimensional subspaces of $\mathbb{C}^n$. As mentioned in Section 1, one process by which an approximate solution of the linear system (21) can be found is a projection method onto the subspace $\mathcal{K}$ and orthogonal to $\mathcal{L}$. This method is based on the requirements that $\tilde{x}$ belong to $x_0 + \mathcal{K}$ and that the new residual vector be J-orthogonal to $\mathcal{L}$. Suppose that $V = [v_1, \dots, v_m]$ is an $n \times m$ matrix whose columns constitute a J-orthonormal basis of the nondegenerate subspace $\mathcal{K}$ and, similarly, $W = [w_1, \dots, w_m]$ is an $n \times m$ matrix whose columns form a J-orthonormal basis of the nondegenerate subspace $\mathcal{L}$, and suppose that the approximate solution of (21) is as follows:
$\tilde{x} = x_0 + Vy,$
where $x_0$ is an initial guess to the solution of (21). Then, we should have
$b - A\tilde{x} \perp_J \mathcal{L},$
where $\perp_J$ indicates orthogonality under the hyperbolic product $[\cdot, \cdot]$. Thus,
$W^*J(b - A\tilde{x}) = 0,$
and by letting $r_0 = b - Ax_0$, we have
$W^*J(r_0 - AVy) = 0.$

This leads to $W^*JAVy = W^*Jr_0$. Now, assuming that the matrix $W^*JAV$ is nonsingular, then
$y = (W^*JAV)^{-1}W^*Jr_0, \quad (26)$
and therewith, for the approximate solution $\tilde{x}$, we obtain
$\tilde{x} = x_0 + V(W^*JAV)^{-1}W^*Jr_0.$
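In code, one such projection step reads as follows; this is a hedged sketch of the two formulas above, with J passed as the full Hermitian matrix and the name projection_step being our own choice.

import numpy as np

def projection_step(A, b, x0, V, W, J):
    # x_tilde = x0 + V (W^* J A V)^{-1} W^* J r0, with r0 = b - A x0.
    r0 = b - A @ x0
    M = W.conj().T @ J @ A @ V          # assumed nonsingular
    y = np.linalg.solve(M, W.conj().T @ J @ r0)
    return x0 + V @ y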

Now, if $\mathcal{L} = \mathcal{K} = \mathcal{K}_m(A, r_0)$, the indefinite full orthogonalization method (IFOM) is a process which seeks an approximate solution $x_m = x_0 + V_my_m$ from the subspace $x_0 + \mathcal{K}_m$ with the proviso that
$b - Ax_m \perp_J \mathcal{K}_m.$

Thus, relation (26) changes to $y_m = (V_m^*JAV_m)^{-1}V_m^*Jr_0$, and by relation (12), it is equal to $y_m = (D_mH_m)^{-1}V_m^*Jr_0$. On the other hand, by defining $v_1 = r_0/\beta$, where $\beta = |[r_0, r_0]|^{1/2}$, we have $V_m^*Jr_0 = \beta\delta_1e_1$. Thereupon,
$y_m = \beta(D_mH_m)^{-1}\delta_1e_1 = \beta H_m^{-1}e_1, \quad (29)$
and finally,
$x_m = x_0 + V_my_m.$

Our explanations are summarized in Algorithm 2.

(1)Compute $r_0 = b - Ax_0$, $\beta = |[r_0, r_0]|^{1/2}$, and $v_1 = r_0/\beta$ and $\delta_1 = [v_1, v_1]$
(2)Define the $(m + 1) \times m$ matrix $\bar{H}_m = (h_{ij})$; set $\bar{H}_m = 0$
(3)For $j = 1, 2, \dots, m$ Do:
(4)Compute $w_j = Av_j$
(5)For $i = 1, \dots, j$ Do:
(6)$h_{ij} = \delta_i[w_j, v_i]$
(7)$w_j = w_j - h_{ij}v_i$
(8)EndDo
(9)Compute $h_{j+1,j} = |[w_j, w_j]|^{1/2}$. If $h_{j+1,j} = 0$, set $m := j$ and Goto 13
(10)$v_{j+1} = w_j/h_{j+1,j}$
(11)$\delta_{j+1} = [v_{j+1}, v_{j+1}]$
(12)EndDo
(13)Compute $y_m = \beta H_m^{-1}e_1$ and $x_m = x_0 + V_my_m$
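A compact NumPy sketch of Algorithm 2 follows; the identifiers and the breakdown tolerance are our own, and the final solve uses $y_m = \beta H_m^{-1}e_1$ from relation (29).

import numpy as np

def ifom(A, b, x0, j, m):
    # Indefinite FOM: j holds the +/-1 diagonal of the signature matrix J.
    jip = lambda u, w: np.sum(j * u * np.conj(w))
    n = A.shape[0]
    r0 = b - A @ x0
    beta = np.sqrt(abs(jip(r0, r0).real))
    V = np.zeros((n, m + 1), dtype=complex)
    H = np.zeros((m + 1, m), dtype=complex)
    delta = np.zeros(m + 1)
    V[:, 0] = r0 / beta
    delta[0] = np.sign(jip(V[:, 0], V[:, 0]).real)
    for k in range(m):
        w = A @ V[:, k]
        for i in range(k + 1):
            H[i, k] = delta[i] * jip(w, V[:, i])
            w = w - H[i, k] * V[:, i]
        s = jip(w, w).real
        if abs(s) < 1e-14:                 # breakdown: truncate to the current step
            m = k + 1
            break
        H[k + 1, k] = np.sqrt(abs(s))
        V[:, k + 1] = w / H[k + 1, k]
        delta[k + 1] = np.sign(s)
    e1 = np.zeros(m); e1[0] = 1.0
    y = np.linalg.solve(H[:m, :m], beta * e1)   # y_m = beta * H_m^{-1} e_1
    return x0 + V[:, :m] @ y

In exact arithmetic, the iterate returned by this sketch satisfies the residual identity of Proposition 3 below.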

Proposition 3. The residual vector of the approximate solution $x_m$ calculated by the IFOM algorithm is such that
$b - Ax_m = -h_{m+1,m}(e_m^Ty_m)v_{m+1}.$

Proof. By using relations (11) and (29), we have the following relations:
$b - Ax_m = b - A(x_0 + V_my_m) = r_0 - AV_my_m = \beta v_1 - V_mH_my_m - h_{m+1,m}(e_m^Ty_m)v_{m+1} = -h_{m+1,m}(e_m^Ty_m)v_{m+1},$
since $V_mH_my_m = \beta V_me_1 = \beta v_1$ by (29). We expect FOM to perform better than IFOM in terms of both the number of iterations and the running time because, compared to the FOM method, the IFOM method adds the multiplications by the entries of J (the signs $\delta_i$) in forming the entries of H. The following example verifies this expectation. In this example, we have used n instead of m; i.e., we have assumed that n is the maximum number of iterations of the algorithm, combined with a stopping requirement on the residual norm.

Example 2. Let A be an $n \times n$ tridiagonal matrix, let J be an arbitrary diagonal signature matrix, and suppose that b and $x_0$ are two vectors in $\mathbb{C}^n$ (see Figure 1). Suppose that the entries of these vectors and the nonzero entries of A have been randomly selected from zero to five. Then, the effects of the FOM and IFOM algorithms are as follows: (i) the IFOM algorithm (Algorithm 2) brings the linear system to the residual stopping condition after 149 iterations; (ii) the FOM algorithm does the same with 149 iterations, in a shorter time. Despite this superiority of FOM over IFOM, there is an important property of the IFOM approach that is shown in the next section.

4. Indefinite Lanczos Method

This section is devoted to the indefinite Lanczos method. As can be seen in the following, this method is a special case of the indefinite Arnoldi's method in the complex space, obtained when the matrix A is J-Hermitian.

Definition 3. A matrix A is said to be J-Hermitian (respectively, J-symmetric) when $JA = A^*J$ (respectively, $JA = A^TJ$), and in this case, we write $A^{[*]} = A$ (respectively, $A^{[T]} = A$), where $A^{[*]} = J^{-1}A^*J$.

Proposition 4. Assume that the indefinite Arnoldi's method is applied to a J-Hermitian matrix A. Then, the matrix $V_m^*JAV_m = D_mH_m$ obtained from the process is tridiagonal and Hermitian.

Proof. From the indefinite Arnoldi's method, we have relation (12): $D_mV_m^*JAV_m = H_m$. Thus, $V_m^*JAV_m = D_mH_m$. On the other hand, the fact that A is J-Hermitian yields $A^*J = JA$. Thus,
$(D_mH_m)^* = (V_m^*JAV_m)^* = V_m^*A^*JV_m = V_m^*JAV_m = D_mH_m.$
Therefore, the resulting matrix $D_mH_m$ of the indefinite Arnoldi's algorithm (Algorithm 1) applied to a J-Hermitian matrix A is both upper Hessenberg and Hermitian. In other words, $D_mH_m$ is a Hermitian tridiagonal matrix. The resulting matrix is denoted by $T_m$, its diagonal elements by $\alpha_j$, and its off-diagonal elements by $\beta_j$:
$T_m = \begin{pmatrix} \alpha_1 & \overline{\beta_2} & & \\ \beta_2 & \alpha_2 & \ddots & \\ & \ddots & \ddots & \overline{\beta_m} \\ & & \beta_m & \alpha_m \end{pmatrix}.$
In fact, we have the following: substituting $H_m = D_mT_m$ into relation (11) gives
$AV_m = V_mD_mT_m + h_{m+1,m}v_{m+1}e_m^T,$
and, reading off the j-th column, the above relation turns to the three-term recurrence
$Av_j = \delta_{j-1}\overline{\beta_j}v_{j-1} + \delta_j\alpha_jv_j + \delta_{j+1}\beta_{j+1}v_{j+1}.$
Thus, only the two most recent basis vectors enter the computation of $v_{j+1}$. By using the hyperbolic inner product with $v_{j+1}$ on both sides of this recurrence, we earn
$[Av_j, v_{j+1}] = \delta_{j+1}\beta_{j+1}[v_{j+1}, v_{j+1}] = \beta_{j+1}.$
This implies that $\beta_{j+1} = [Av_j, v_{j+1}]$ (and, similarly, $\alpha_j = [Av_j, v_j]$, which is real); therefore, $T_m$ is in the above form.
With this explanation, the hyperbolic version of the Hermitian Lanczos algorithm can be formulated as given in Algorithm 3.
Now, consider the linear system $Ax = b$ for which A is a J-Hermitian matrix, $x_0$ is an initial guess, and the indefinite Lanczos vectors $v_1, \dots, v_m$ together with the tridiagonal matrix $T_m$ are given. Then, the approximate solution obtained from an indefinite orthogonal projection method onto $\mathcal{K}_m$, similar to what was seen for the indefinite Arnoldi's method, is given by
$x_m = x_0 + V_my_m, \qquad y_m = \beta\delta_1T_m^{-1}e_1, \qquad \beta = |[r_0, r_0]|^{1/2}.$
Thus, using the above algorithm, Algorithm 4 can be considered as the indefinite Lanczos algorithm for solving a linear system with a J-Hermitian coefficient matrix.
Similar to what has already been proven for the IFOM algorithm, here also it can be seen that
$b - Ax_m = -\delta_{m+1}\beta_{m+1}(e_m^Ty_m)v_{m+1}.$
The advantage of ILM (the indefinite Lanczos method) is that, through different choices of the matrix J, it solves classes of linear systems with different coefficient matrices. The following examples explain more; in them, we use n instead of m. In other words, we assume that the maximum number of iterations of the algorithm is n. Besides, we use a stopping condition on the norm of the residual.

(1)Choose an initial vector $v_1$ such that $|[v_1, v_1]| = 1$
(2)Set $\beta_1 = 0$, $v_0 = 0$, $\delta_1 = [v_1, v_1]$
(3)For $j = 1, 2, \dots, m$; do:
(4)$w_j = Av_j - \delta_{j-1}\overline{\beta_j}v_{j-1}$
(5)$\alpha_j = [w_j, v_j]$
(6)$w_j = w_j - \delta_j\alpha_jv_j$
(7)Compute $[w_j, w_j]$. If $[w_j, w_j] = 0$, then stop
(8)$\beta_{j+1} = |[w_j, w_j]|^{1/2}$
(9)$\delta_{j+1} = \mathrm{sign}([w_j, w_j])$
(10)$v_{j+1} = w_j/(\delta_{j+1}\beta_{j+1})$
(11)EndDo
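Under our reconstruction of Algorithm 3, a NumPy sketch reads as follows (all identifiers are ours, and the breakdown tolerance is arbitrary); note in the loop body that only the two most recent basis vectors are touched.

import numpy as np

def indefinite_lanczos(A, v, j, m):
    # Hyperbolic Lanczos for a J-Hermitian A; j holds the +/-1 diagonal of J.
    jip = lambda u, w: np.sum(j * u * np.conj(w))
    n = A.shape[0]
    V = np.zeros((n, m + 1), dtype=complex)
    alpha = np.zeros(m, dtype=complex)
    beta = np.zeros(m + 1)                       # beta[k+1] couples v_k and v_{k+1}
    delta = np.zeros(m + 1)
    s = jip(v, v).real
    V[:, 0] = v / np.sqrt(abs(s))
    delta[0] = np.sign(s)
    for k in range(m):
        w = A @ V[:, k]
        if k > 0:                                # three-term recurrence: two subtractions only
            w = w - delta[k - 1] * np.conj(beta[k]) * V[:, k - 1]
        alpha[k] = jip(w, V[:, k])               # alpha_k = [A v_k, v_k] (real for J-Hermitian A)
        w = w - delta[k] * alpha[k] * V[:, k]
        s = jip(w, w).real
        if abs(s) < 1e-14:                       # breakdown
            return V[:, :k + 1], alpha[:k + 1], beta[:k + 1], delta[:k + 1]
        beta[k + 1] = np.sqrt(abs(s))
        delta[k + 1] = np.sign(s)
        V[:, k + 1] = w / (delta[k + 1] * beta[k + 1])
    return V, alpha, beta, delta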
(1)Compute $r_0 = b - Ax_0$, $\beta = |[r_0, r_0]|^{1/2}$, and $v_1 = r_0/\beta$
(2)Set $\beta_1 = 0$, $v_0 = 0$, $\delta_1 = [v_1, v_1]$, $T_m = 0$
(3)For $j = 1, 2, \dots, m$. Do
(4)$w_j = Av_j - \delta_{j-1}\overline{\beta_j}v_{j-1}$
(5)$\alpha_j = [w_j, v_j]$
(6)$w_j = w_j - \delta_j\alpha_jv_j$
(7)Compute $[w_j, w_j]$. If $[w_j, w_j] = 0$, then set $m := j$ and stop
(8)$\beta_{j+1} = |[w_j, w_j]|^{1/2}$
(9)$\delta_{j+1} = \mathrm{sign}([w_j, w_j])$
(10)$v_{j+1} = w_j/(\delta_{j+1}\beta_{j+1})$
(11)EndDo
(12)Set $T_m = \mathrm{tridiag}(\beta_j, \alpha_j, \overline{\beta_{j+1}})$ and $V_m = [v_1, \dots, v_m]$
(13)Compute $y_m = \beta\delta_1T_m^{-1}e_1$ and $x_m = x_0 + V_my_m$.
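Combining steps (12)–(13) of Algorithm 4 with the indefinite_lanczos helper from the previous sketch (assumed to be in scope; all names are ours), an illustrative solver is:

import numpy as np

def ilm_solve(A, b, x0, j, m):
    # Assumes A is J-Hermitian; j holds the +/-1 diagonal of J.
    jip = lambda u, w: np.sum(j * u * np.conj(w))
    r0 = b - A @ x0
    bta = np.sqrt(abs(jip(r0, r0).real))
    V, alpha, beta, delta = indefinite_lanczos(A, r0, j, m)
    K = alpha.shape[0]                                   # steps actually performed
    T = np.diag(alpha) + np.diag(beta[1:K], -1) + np.diag(np.conj(beta[1:K]), 1)
    e1 = np.zeros(K); e1[0] = 1.0
    y = np.linalg.solve(T, bta * delta[0] * e1)          # y_m = beta * delta_1 * T_m^{-1} e_1
    return x0 + V[:, :K] @ y
    # Sanity check for small n: with m = n and no breakdown, the result should
    # agree with np.linalg.solve(A, b) up to rounding.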

Example 3. Consider the linear system $Ax = b$ in which A is a block matrix whose blocks are symmetric matrices of equal order. By choosing a suitable signature matrix J, it can be seen that A is J-symmetric (see Figure 2). Now, suppose that two of the blocks are diagonal matrices and one is a tridiagonal matrix, with the entries of these blocks chosen arbitrarily from a fixed interval, and that b is a vector whose entries are selected from the same interval (see Figure 3). The performance of the algorithms is as follows: (i) the IFOM algorithm brings the linear system to the stopping condition after 123 iterations; (ii) the FOM does the same with 118 iterations; (iii) the ILM algorithm does the same with 130 iterations, in the shortest time of the three. It shows that ILM is more efficient than IFOM and FOM: although its number of iterations is higher, less time is required.

Example 4. Consider the assumptions of the previous example, except that the tridiagonal block now has complex entries, so that A is J-Hermitian and ILM still applies. Then: (i) by IFOM, the number of iterations is 140; (ii) by FOM, the number of iterations is 136; (iii) by ILM, the number of iterations is 154.

Example 5. Consider the linear system $Ax = b$ in which A is a block matrix whose blocks are symmetric matrices of equal order. By choosing a suitable signature matrix J, it can be seen that A is J-symmetric, and therefore, it is allowed to apply ILM. Now, suppose that: (i) three of the blocks are diagonal matrices with diagonal entries between zero and ten; (ii) two of the blocks are bidiagonal matrices whose main-diagonal entries and the entries just below them are chosen between zero and ten; (iii) b is a vector with entries between zero and ten; (iv) $x_0$ is an initial vector with arbitrary entries between zero and ten. Then, to reach the stopping condition, the following are needed: (i) by IFOM: 175 iterations; (ii) by FOM: 167 iterations; (iii) by ILM: 190 iterations, in the least time. As can be seen, the ILM method is superior to the FOM and IFOM methods because of the short length of the recursive relation in its algorithm (three terms; see Figure 4). In other words, at each step we do not need to orthogonalize against all earlier vectors of the related Krylov subspace.

5. Counting the Multiplication Operations in the FOM, IFOM, and ILM Algorithms

For two n-vectors $x, y \in \mathbb{C}^n$, we have
$[x, y]_J = \sum_{i=1}^{n} j_{ii}x_i\overline{y_i},$
where $x_i$ and $y_i$ are the i-th elements of the vectors, respectively, and $j_{ii}$ is the $(i, i)$-th entry of J. Thus, $2n$ multiplication operations are required to perform this indefinite inner product.
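In display form (our annotation of the count above):
$[x, y]_J = \sum_{i=1}^{n} \underbrace{(j_{ii}\,x_i)}_{1\ \text{mult.}}\cdot\underbrace{\overline{y_i}}_{1\ \text{more mult.}} \;\Longrightarrow\; 2n\ \text{multiplications},$
versus the n multiplications of the standard inner product $\sum_{i=1}^{n} x_i\overline{y_i}$.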

Using the above point, one can count the multiplication operations required to perform steps (3)–(12) of the IFOM algorithm and, likewise, of the FOM algorithm. In both, the m matrix-vector products contribute on the order of $mn^2$ multiplications, and the orthogonalizations contribute a further term of order $m^2n$; in IFOM, the inner-product part of this count is roughly doubled, since each indefinite inner product costs $2n$ multiplications instead of n.

However, the number of required multiplication operations to do steps (3)–(11) of the ILM algorithm is on the order of $mn^2$ plus only a term of order $mn$, because the three-term recurrence orthogonalizes each new vector against just two previous vectors.

Comparison of these counts shows that, except for the smallest values of m, the number of multiplications of the FOM algorithm exceeds that of the ILM algorithm, and by increasing the value of m, this difference also increases.

In the aforementioned algorithms, the inverse of the upper Hessenberg matrix $H_m$ or of the tridiagonal matrix $T_m$ must be applied. In this regard, it should be noted that inverting the tridiagonal matrix $T_m$ in the ILM algorithm requires fewer multiplications than inverting the Hessenberg matrix $H_m$ in the FOM algorithm.

What was said above shows that the m steps of ILM run faster than the m steps of the FOM algorithm, and this is evident in the examples of the preceding section.

6. Conclusion

The indefinite inner product defined by $[x, y]_J = y^*Jx$, $J = \mathrm{diag}(j_1, \dots, j_n)$, $j_k \in \{-1, +1\}$, arises frequently in applications. It is used, for example, in the theory of relativity and in the study of polarized light. More on the applications of such products can be found in [8–11]. These applications in other fields of science inspired us to revisit the Lanczos, FOM, and Arnoldi methods in an indefinite inner product space. The indefinite Arnoldi's method is a process that constructs a J-orthonormal basis for the nondegenerate Krylov subspace, and we proved that the bases so constructed share a particular structural property: $V_m^*JV_m = D_m$, a diagonal matrix of signs. In this paper, the IFOM process has been introduced, and a process useful for solving linear systems of equations with J-Hermitian coefficient matrices has also been constructed; this process is the Lanczos method restated in the hyperbolic inner product space. The FOM, IFOM, and ILM processes have been compared with each other in terms of the time required for solving linear systems, and the best one has been identified. Indeed, we show that the m steps of ILM run faster than those of FOM and IFOM.

Data Availability

All data used to support the findings of this study are accessible, and these data are cited at the relevant places within the text. The only exception is the MATLAB codes for drawing the figures of the paper, which are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that they have no conflicts of interest.