Abstract

In many applications, and generally in many dynamical differential systems, transferring the initial state of the system to a desired state in (almost) zero time is desirable but difficult to achieve. Theoretically, this can be achieved by using a linear combination of the Dirac δ-function and its derivatives. Obviously, such an input is physically unrealizable. However, we can approximate it by a combination of pulses of very high magnitude and very short duration. In this paper, the approximation process of the distributional behaviour of higher-order linear descriptor (regular) differential systems is presented. Thus, new analytical formulae based on linear algebra methods and the theory of generalized inverses are provided. Our approach is quite general, and some significant conditions are derived. Finally, a numerical example is presented and discussed.

1. Introduction

In several important applications across many fields of research, see for instance [1, 2], taking a given state of a linear system to a desired state in minimum time is highly desirable, though it remains a challenging problem in control and systems theory.

Significant attention has been given to this problem in the case of linear systems; see [1–3]. Recently, Kalogeropoulos et al. [4] further enriched these first approaches by relaxing some of the rather restrictive assumptions considered in [1, 2]. Afterwards, the method was also applied to the more general class of linear descriptor (regular) systems; see [5].

In this paper, a further extension of [5], in the class of linear descriptor (regular) equations, is provided. Compared with the existing literature, see [1–5], we solve this problem

(1) for higher-order linear differential descriptor (regular) systems (compare with [5]);
(2) using higher-order consistent initial conditions (compare with [1–5]);
(3) obtaining more analytical formulas, see Appendices A and B and Theorems 20 and 23 (compare with [1–5]);
(4) without using the controllability matrix (compare with [3]);
(5) applying analytical methods for the exact determination of the generalized inverses of the Vandermonde matrix and the matrices related to it; see [6].

To summarize, in this paper, we investigate how we can transfer the initial state of an open-loop, linear, higher-order descriptor (regular) differential system in (practically speaking, almost) zero time, that is,

with known initial conditions

where , and (i.e., is the algebra of matrices with elements in the field ) with (is the zero element of ), and . For the sake of simplicity, we set in the sequel and .

In order to solve this problem, the appropriate input vector has to be constructed as a linear combination of the Dirac δ-function and its derivatives; see [1, 2], and for more details consult [7]. Obviously, such an input is very hard to realize physically. However, we can approximate it by a combination of pulses of very high magnitude and very short duration.

Linear descriptor (singular or regular) differential systems have been extensively used in control theory; see for instance [8–10], and for more details [11].

A brief outline of the paper is as follows. Section 2 provides the motivation and the typical modelling features of the problem. Moreover, a classical approximate expression for the controller, that is, a linear combination of the Dirac δ-function and its derivatives based on the normal (Gaussian) probability density function, is used. Then, the need to determine the unknown coefficients is established. Section 3 is divided into four extensive subsections. In Section 3.1, the reduction of the higher-order system to first order is discussed. The first-order descriptor (regular) system is divided into a fast and a slow subsystem using the Weierstrass canonical form. Section 3.2 investigates and presents some analytical formulas based on the slow subsystem. In Section 3.3, the theory of -generalized inverses is used. Finally, some significant conditions for the solution of the slow subsystem are presented in Section 3.4. In Section 4, a necessary condition based on the fast subsystem is derived. Section 5 provides an interesting numerical application from physics, and Section 6 concludes the paper. Two appendices with the analytical calculations of two important integrals are also provided.

2. Preliminary Results—Matrix Pencil Framework

In this section, some preliminary results on matrix pencils and system theory are briefly presented. First, we assume that, for the -order linear differential system (1), the input can be a linear combination of the Dirac δ-function and its first derivatives as follows:

where (or ) is the -derivative of the Dirac δ-function, and , for , are the magnitudes of the delta function and its derivatives. Furthermore, we assume that the state of the system at time is

and at time , it achieves

With the following definitions, a brief presentation of the most important elements of matrix pencil theory is given.

Definition 1. Given and an indeterminate , the matrix pencil is called regular when and . In any other case, the pencil will be called singular.

Definition 2. The pencil is said to be strictly equivalent to the pencil if and only if there exist nonsingular and such that

In this paper, we consider the case that pencil is regular. Thus, the strict equivalence relation can be defined rigorously on the set of regular pencils as follows.

This is the set of elementary divisors (e.d.s) obtained by factorizing the invariant polynomials into powers of homogeneous polynomials irreducible over the field .

In the case where is regular, we have e.d.s of the following types:

(i) e.d.s of the type are called zero finite elementary divisors (z.f.e.d.);
(ii) e.d.s of the type , are called nonzero finite elementary divisors (nz.f.e.d.);
(iii) e.d.s of the type are called infinite elementary divisors (i.e.d.).

Let be elements of . Their direct sum, denoted by , is .

Then, the complex Weierstrass form of the regular pencil is defined by

Now, the Jordan type element, that is, , is uniquely defined by the set of f.e.d.

of and has the form

where

The blocks of the second type, see (7), are uniquely defined by the set of i.e.d.

of and has the form

where

Furthermore, is a nilpotent element of with index , where and are defined as

Moreover, for the matrices and , we have the parameterization

Since the state which we wish to reach is specified, we need to determine the unknown coefficients , for .

In practice, we cannot create an exact impulse function or its derivatives. However, if we use one of the approximations of the Dirac δ-function, we will be able to change the state in some minimum practical time, depending mainly upon how well we generate the approximations. Let the Dirac δ-function be viewed as the limit of a sequence of functions

where is called a nascent delta function. This limit is in the sense that

Some well-known nascent delta functions that are very useful in applications are the normal and Cauchy distributions, the rectangular function, the derivative of the sigmoid (or Fermi-Dirac) function, the Airy function, and so forth; see for instance [2, 5, 12–17]. The results given below are based on the normal function. Thus, by taking into consideration expression (18) and the normal (Gaussian) probability density function, we obtain

where .

So, the approximate expression for the controller (2) is given by

Then, we take the limit
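To make the Gaussian approximation above concrete, the following minimal Python sketch (ours, not part of the paper; the function names, the finite-difference step, and the grid are our own choices) builds the normal nascent delta and checks numerically that the pulse integrates to one as its width shrinks; derivatives of the pulse are approximated by central differences.

```python
import numpy as np

def nascent_delta(t, eps):
    """Gaussian nascent delta: a normal density with standard deviation eps."""
    return np.exp(-t**2 / (2.0 * eps**2)) / (eps * np.sqrt(2.0 * np.pi))

def nascent_delta_deriv(t, eps, k, h=1e-4):
    """k-th derivative of the nascent delta, via central differences."""
    if k == 0:
        return nascent_delta(t, eps)
    return (nascent_delta_deriv(t + h, eps, k - 1, h)
            - nascent_delta_deriv(t - h, eps, k - 1, h)) / (2.0 * h)

# Sanity check: the pulse integrates to 1 as eps shrinks.
t, dt = np.linspace(-1.0, 1.0, 200001, retstep=True)
for eps in (0.1, 0.01, 0.001):
    print(eps, nascent_delta(t, eps).sum() * dt)  # each value is close to 1
```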

In the next section, the main results are presented.

3. The Main Results

3.1. Order Reduction of System (1)

The section begins with the following important lemma.

Lemma 3. System (1) is divided into the following two subsystems:

Proof. Consider the transformation Substituting the previous expression into (1) and considering also (3), we obtain Multiplying by , we arrive at Now, we denote where .
Taking into account the following expressions, that is,
we arrive easily at (23) and (24).

System (23) is in the standard form of nonhomogeneous higher-order linear differential equations of Apostol-Kolodner type, which may be treated by more classical methods; see for instance [18] and references therein.

Thus, it is convenient to define new variables as

Then, we have the following system of ordinary differential equations:

Finally, (31) can be expressed by using vector-matrix equations

where ( denotes transposition) and the coefficient matrices and are given by

with corresponding dimension of , .
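As an illustration of this order reduction, here is a hedged Python sketch (our own construction; it assumes the higher-order equation has the form y^(r) = A_{r-1} y^(r-1) + ... + A_0 y + B v, since the displayed equations are not reproduced here) that assembles the block companion matrices:

```python
import numpy as np

def companion_form(A_coeffs, B):
    """First-order realization z' = A z + B_hat v of the r-th order system
    y^(r) = A_{r-1} y^(r-1) + ... + A_0 y + B v, with z = [y, y', ..., y^(r-1)].
    A_coeffs = [A_0, ..., A_{r-1}], each n x n; B is n x m."""
    r, n, m = len(A_coeffs), A_coeffs[0].shape[0], B.shape[1]
    A = np.zeros((r * n, r * n))
    for i in range(r - 1):                      # z_i' = z_{i+1}
        A[i*n:(i+1)*n, (i+1)*n:(i+2)*n] = np.eye(n)
    for i, Ai in enumerate(A_coeffs):           # last block row: the equation itself
        A[(r-1)*n:, i*n:(i+1)*n] = Ai
    B_hat = np.zeros((r * n, m))
    B_hat[(r-1)*n:, :] = B
    return A, B_hat
```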

The state of system (1) at time is and at time it achieves .

Now, considering (25), at time we have

and at time we obtain

Moreover, where for , and .

Furthermore,

3.2. The Solution of Subsystem (32)

In order to solve subsystem (32), the following definitions should be provided.

Definition 4. The characteristic polynomial of the matrix is given by with for and . Without loss of generality, we assume that , where and are the geometric and algebraic multiplicities of the given eigenvalue , respectively.

Generally speaking, the matrix is not diagonalizable. However, we can generate linearly independent vectors and a similarity transformation that takes into its Jordan canonical form, as the following definition clarifies.

Definition 5. There exists an invertible matrix such that , ; is the Jordan canonical form of the matrix . Analytically:
(i) the block diagonal matrix , where is a diagonal matrix whose diagonal elements are the eigenvalue , for ; consequently, the dimension of is ;
(ii) the block matrix , where
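For a concrete illustration of Definition 5, the following SymPy sketch (ours, on a hypothetical nondiagonalizable matrix) computes the similarity transformation and the Jordan canonical form:

```python
from sympy import Matrix

# Hypothetical matrix: eigenvalue 2 has algebraic multiplicity 2 but
# geometric multiplicity 1, so the matrix is not diagonalizable.
A = Matrix([[2, 1, 1],
            [0, 2, 0],
            [0, 0, 3]])

P, J = A.jordan_form()       # similarity transformation: A = P * J * P**(-1)
print(J)                     # a 2x2 Jordan block for 2 and a 1x1 block for 3
print(P * J * P.inv() == A)  # -> True
```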

According to the classical theory of ordinary differential equations, the solution of system (32) is given by the following lemma.

Lemma 6. The solution of subsystem (32) is given by with initial condition .

Proof. Consider the transformation , where is nonsingular, and .
Substituting (44) into (43), we obtain
Furthermore, we define , so that the last equation is transformed into Now, according to the relevant theory of first-order differential systems of the form (46), see for instance [16], and using also (44), the solution is expressed by (43).

Definition 7. The exponential matrix is defined as where for .
Furthermore,
where , and where are the Weyr characteristics via Ferrers diagrams, for and . Note that is the index of annihilation for the eigenvalue .
Now, taking (21) into consideration, the solution (22) is transformed into
or equivalently where .

Remark 1 (see [4, 5]). Given the large number of terms involved, in order to make our calculations tractable, we use the fact that and its derivatives tend to zero very rapidly with (note also that ). Thus, by letting , where is chosen large enough (i.e., ), the assumption stated above is valid, that is, and its derivatives , for .

Now, using Remark 1, we obtain , for and , as well. Moreover, for the analytic determination of the vector (3), we need to calculate the integral .

In this part of the paper, two subsystems are derived; see the following lemmas, and Appendices A and B.

Lemma 8. For the diagonal matrix , the following integral is given by (54):

Proof. See Appendix A.

Lemma 9. For the diagonal matrix , the following integral is given by (55): where

Proof. See Appendix B.

Now, we revisit (54); thus, we obtain Note that is the well-known Vandermonde matrix.
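Numerically, the Vandermonde matrix and the defining property of a generalized inverse can be checked as in the sketch below (ours, with hypothetical distinct eigenvalues; the Moore-Penrose pseudoinverse is used here only as one convenient generalized inverse):

```python
import numpy as np

lams = np.array([1.0, 2.0, 3.0])             # hypothetical distinct eigenvalues
V = np.vander(lams, N=5, increasing=True).T  # 5 x 3: row i holds lams**i
print(np.linalg.matrix_rank(V))              # -> 3 (full column rank)

Vg = np.linalg.pinv(V)                       # one particular generalized inverse
print(np.allclose(V @ Vg @ V, V))            # -> True: the defining property
```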

Combining (52)–(57), we obtain the following system:

where and with and . Furthermore, we obtain the initial condition

The system (58) can now be divided into two types of subsystems:

(S1)

(S2) for and .

Proposition 10. System (60) is solvable if for every nonzero element of the vectors , where , for every .
Moreover, if one of the elements of the vectors is zero, then the corresponding element of the th row of the vector should also be zero.

Proof. System (60) contains subsystems of the following type: or equivalently .
(i) If one of the coefficients is zero, for , then the corresponding element of the row of the vector should also be zero (in order to obtain a solution).
(ii) If every one of the coefficients is nonzero, for , then we have for every .
Consequently, system (60) is solvable if (62) is satisfied for every .

Now, we will work with subsystems (61) which can be written as follows:

The coefficient matrix can be transformed into the following equivalent matrix; see (67). Note that is a matrix. Moreover, from the elements , for , we can assume that . Thus, we obtain

To understand (67) better, the following example is helpful.

Example 1. We take the coefficient matrix We assume that and . Since , we are not interested in , .
Thus, under our assumption, we obtain
Afterwards, we multiply the 3rd row by and add it to the 2nd row. Moreover, we multiply the 3rd row by and add it to the 1st row. Then, the following equivalent matrix is derived: Now, we multiply the 2nd row by and add it to the 1st row. Thus, we obtain Finally, we multiply the matrix by the element and conclude with .
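The elementary row operations of this example can also be reproduced symbolically; the following sketch (ours) uses a hypothetical 3 x 4 matrix and hypothetical multipliers, since the example's actual entries are not reproduced above:

```python
from sympy import Matrix, Rational

# Hypothetical 3x4 coefficient matrix standing in for the one in Example 1.
M = Matrix([[1, 2, 2, 5],
            [0, 1, 3, 7],
            [0, 0, 4, 8]])

M[1, :] = M[1, :] - Rational(3, 4) * M[2, :]  # clear the (2,3) entry using row 3
M[0, :] = M[0, :] - Rational(1, 2) * M[2, :]  # clear the (1,3) entry using row 3
M[0, :] = M[0, :] - 2 * M[1, :]               # clear the (1,2) entry using row 2
M[2, :] = Rational(1, 4) * M[2, :]            # scale the last pivot to 1
print(M)  # -> [[1, 0, 0, -1], [0, 1, 0, 1], [0, 0, 1, 2]]
```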

Remark 2. Considering the results already presented, it is clear that there exists a nonsingular matrix , such that Thus, system (61) can be transformed into where

Proposition 11. Subsystems (74) are solvable when the elements of the vectors are included in the vector with the greatest dimension, that is, , where Equivalently, if one assumes that , then each of the vectors , , should be of the type with .

Proof. We take the subsystems as follows: The matrix has rows, has rows, and has rows. Without loss of generality, we assume that .
Looking carefully at the type of the matrices
we can easily verify that their first rows are identical. Thus, the relevant first rows of must also be identical. Analogously, the first rows of , , should be identical to the relevant first rows of , and so on until the th row.

Consequently, we now use the results of Propositions 10 and 11. Thus, system (81) is solvable if the following hold.

(i) The first elements of the coefficient vector are nonzero (practically speaking, without loss of generality, we assume that the first elements are nonzero).
(ii) The matrix with dimension , where .

(iii) The matrix with dimension , where .

(iv) The matrix with dimension , where .

Consequently, system (81) is transformed into the solvable system (83)

or equivalently,

where , since is nonsingular.

Remark 3. The matrices given by (67) can be transformed, via some row transformations, into the following: The matrix (84) is denoted by . Thus, there is a nonsingular matrix such that Then, subsystems (74) are transformed into where for every , and . Consequently, system (83) is transformed into

Comment 1. Without loss of generality, we can assume that the matrices do not contain zero rows. If some zero rows do exist, the solvability of system (87) is not affected; only the number of nonzero rows included in (87) changes.

Remark 4. System (87) can be further transformed into a more convenient system. Analytically, if we multiply the 1st row of the Vandermonde matrix , that is, , by (−1) and add it to the 1st row of each of , then is given by (88). (Note that we have assumed that the matrices , for , do not contain zero rows; see also Comment 1.)

We can easily see that the 1st row of matrix (88) can be rewritten as follows.

The element

Thus, the first row is presented as

Since the element , we can multiply (88) on the left by a properly chosen transformation matrix, so as to obtain

Finally, this subsection is concluded by considering system (91), where the matrices for are derived by taking into account a properly chosen left transformation matrix as follows:

where

where is the index of annihilation for the eigenvalue .

Consequently, the system (91) contains the following subsystems

where , for are nonsquare matrices.

In order to solve system (91), we should use the relevant theory of generalized inverses. Some basic elements are briefly presented in the next subsection.

3.3. Solving Subsystem (91)

This subsection begins with a brief presentation of the most basic results on -generalized inverses.

Definition 12. For a matrix , the matrix is called a -inverse of the matrix if

The following proposition provides the means for finding -inverses; see [19, 20] for proofs and details.

Proposition 13. Let , with and , , invertible such that Then, every , -inverse of matrix is written as follows: for arbitrary .

Definition 14. We denote any -inverse of the matrix by .

The following result is due to [6].

Theorem 15. Let , , . Then the matrix equation is consistent if and only if for some , and the general solution is given by for arbitrary .

The following characterization of the set of is due to [20].

Corollary 16. Let and , then any other -inverse of matrix is given by for some .

A special case of -inverses is the left/right inverse of a full column/row rank matrix, respectively. The proofs and further details are presented extensively in [19, 20].
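A minimal computational sketch of these notions (ours; the Moore-Penrose pseudoinverse is used because it is, in particular, a generalized inverse satisfying A G A = A, and the consistency test of Theorem 15 then follows directly):

```python
import numpy as np

def one_inverse(A, tol=1e-10):
    """A generalized inverse G of A with A @ G @ A = A (here the
    Moore-Penrose pseudoinverse, one convenient choice)."""
    return np.linalg.pinv(A, rcond=tol)

A = np.array([[1.0, 2.0], [2.0, 4.0], [0.0, 1.0]])  # 3 x 2, full column rank
G = one_inverse(A)
assert np.allclose(A @ G @ A, A)                    # the defining property

# Theorem 15 style consistency test for A x = b: solvable iff A @ G @ b == b.
b = A @ np.array([1.0, -1.0])     # consistent right-hand side by construction
print(np.allclose(A @ G @ b, b))  # -> True
```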

Definition 17. Consider the following matrices. (a) Let be a -matrix (the nonzero element is in the row and the column).
Thus, whenever a matrix is multiplied from the left by , its row is multiplied by the nonzero number .
(b) Let be an -matrix (the nonzero element is in the row and the column).
Thus, whenever a matrix is multiplied from the left by , its row is multiplied by the nonzero number and added to the row of .
(c) Let be an -matrix (the nonzero element is in the -row and the -column).
Thus, whenever a matrix is multiplied from the right by , its column is multiplied by the nonzero number .
(d) Let be an -matrix (the nonzero element is in the row and the column).
Thus, whenever a matrix is multiplied from the right by , its column is multiplied by the nonzero number and added to the column of .

Now, in order to calculate the generalized inverses of the Vandermonde matrix and the matrices related to it, we use the recent results of [6]. In this context, the following propositions are relevant.

Proposition 18. The -inverse of the Vandermonde matrix is given by where (the multiplication counts backwards, starting from etc.) and the general solution of system is given by for an arbitrarily chosen vector .

Proof. The proof is a straightforward application of Theorem 15. Analytically, In order for the matrix to be the -inverse of , we should prove the following equality Thus, Hence, the solution of system is given by (113).

Proposition 19. The -inverse of the matrix for is given by where for , and the general solution of systems is given by for an arbitrarily chosen vector for .

Proof. The proof is also a straightforward application of Theorem 15. Analytically, In order for the matrix to be the -inverse of , we should prove the following equality: Thus, Hence, the solution of system is given by (115).

3.4. The Solution of System (91)

This subsection concludes the whole discussion. The following theorem holds.

Theorem 20. System (91) is solvable, that is, where if the following conditions are satisfied: for and .

Proof. Following Propositions 18 and 19, we have:
(i) for the system , the solution is given by ;
(ii) for the systems , the solution is given by for an arbitrarily chosen vector , for .
There are infinitely many solutions, since the vector is arbitrary. Moreover, each of the above subsystems also contains the solution of system . Thus, if we assume that the desired solution is given by , then it should also satisfy the first system, that is, . Thus, we take
for every . So (123) has been derived. Analogously, the solution should also satisfy every subsystem , for .
Consequently, we have that
From the above equalities, the expressions (122)–(124) are obtained.

With this important theorem, the discussion of system (32) is complete. In the next section, we present the solution of subsystem (24).

4. The Solution of Subsystem (24)

The differential system of (15) is given by

We start this section by observing that, as is well known (see Section 2), there exists a such that , that is, is the annihilation index of . We obtain

or equivalently

Afterwards, we present a condition under which the already known solution (see Section 3) is acceptable.

We know that the nilpotent block matrix is given by

Now, taking into consideration that , it follows that for each

Lemma 21 (see [5]). All the elements of the block diagonal matrix are zero, apart from the elements at positions , , which are one. Note that is the maximum number of block matrices with the same dimension, that is, , where .

Theorem 22. The condition which makes the system (24) solvable and provides an acceptable choice of vector is given by

Proof. First, we obtain system (129), that is, . Then, for the solution of system (129), we take into account that Following Remark 1, we can assume that , with big enough, so that Moreover, we have and . Then, we take However, Thus, (132) is derived.
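Because the fast-subsystem argument turns on the annihilation index of the nilpotent matrix, the following small sketch (ours, on a hypothetical nilpotent Jordan block) computes that index numerically:

```python
import numpy as np

def annihilation_index(N, tol=1e-12):
    """Smallest q with N**q = 0; for an n x n nilpotent matrix, q <= n."""
    for q in range(1, N.shape[0] + 1):
        if np.max(np.abs(np.linalg.matrix_power(N, q))) <= tol:
            return q
    raise ValueError("matrix is not nilpotent")

N = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.0, 0.0, 0.0]])   # a single 3x3 nilpotent Jordan block
print(annihilation_index(N))      # -> 3
```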

A proof of Theorem 23, equivalent to that in [5], is provided.

Theorem 23. A necessary condition for obtaining the solution of system (24) is given by where are elements of the matrix , is a -inverse of the matrix , and for an arbitrarily chosen .

Proof. Considering the transformation (25), we obtain or equivalently by defining and finally System (141) can be expressed as follows: where The proof is concluded by using the results of Theorem 22.

5. An Illustrative Numerical Application

In this section, a numerical example of the above method is presented. We consider the second-order system

where the matrix coefficients are given by

and vector coefficient .

Let us suppose that the system at time is

and at time it achieves

From the regularity of , there exist nonsingular matrices in

such that

where are given by (17).

Consequently, system (144) is divided into the following subsystems:

Moreover, we obtain

Before we go further with the analytic determination of the unknown coefficients , we should verify the important condition (132). First, we calculate the inverse matrix of ,

Consequently, it is not difficult to verify that condition (132) holds, that is, .

Furthermore, the should be calculated. Thus

Now, the approximate expression for (151) is given by

System (150) is in the standard form of nonhomogeneous second-order linear differential equations of Apostol-Kolodner type. Thus, using the results of Section 3.1, we obtain

where and the coefficient matrices and are given by

Considering (25), at time we obtain

Moreover, Furthermore, There exists an invertible matrix

such that is the Jordan canonical form of the matrix . Analytically,

Now, taking into consideration (52) and Remark 1, the solution is given by

where Using (55), we obtain

or equivalently

Thus,

where

Moreover,

where

Thus, considering the results of Section 3, the -inverses of the matrices and are given by

Finally, the general solution is given by the expression

where the following conditions should also be satisfied:

Analytically,

where Following the above conditions, we can see that , and are arbitrary. Thus, we conclude this numerical example with the determination of the unknown coefficients , for

for arbitrary , for and , where is large enough (i.e., ) that the assumption is valid.

6. Conclusions

In this paper, an analytical method and several sufficient conditions were proposed and discussed for changing the state of a higher-order linear descriptor (regular) differential system in (almost) zero time by using the impulse function and its derivatives. The input vector has to be made up of a linear combination of the Dirac δ-function and its derivatives, approximated using the normal (Gaussian) probability density function. Using the tools of linear algebra and generalized inverses, exact calculations of the relevant matrices were obtained.

To date, no other approximating function has been used in the analysis of the problem studied in this paper. It would be of interest to use other approximations and obtain comparative results. In [14, 18, 21], there are only some hints about the minimum time. However, there is no known formula for calculating the minimum time in terms of other significant problem parameters, such as the volatility , and so forth.

Since we have applied an approximation of the impulse input, we transfer the initial state -close to a desired state. Consequently, natural questions are how close we can get, how we can calculate that distance, and whether it is possible to minimize it.

Appendices

A. The Analytic Calculation of Integral

Consider that

and finally, we obtain

with and . Moreover, we take for every

Afterwards, we calculate the integral .

Analytically, considering also (21), we obtain

for every . Thus,

where .

Remark 5. It is well known that Consequently, where is the inverse of the normal probability distribution function, which is also continuous.

Lemma 24. One has

Proof. First, we calculate the integral . Afterwards, we denote . Then, .
Consequently, we obtain
Then, Continuing our calculations, we take Moreover, considering also (A.10), we obtain Continuing the above procedure, we have to calculate the integral . Thus (we denote , then , and note that because, by the definition of the normal probability distribution, ; we reuse the transformation ), we obtain . Following the previous expressions, we assume that Now, in order to conclude the proof, the recursive formula should be proved, that is, Thus, we have (reusing the transformation ) However, . Consequently, we have proved that , and the proof is concluded.
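For orientation, the backbone of the recursive step above is the standard integration-by-parts identity for Gaussian moment integrals; a hedged reconstruction in our own notation (the paper's displays are not reproduced here) reads:

```latex
\int t^{\,n} e^{-t^{2}/(2\varepsilon^{2})}\,dt
  = -\varepsilon^{2}\, t^{\,n-1} e^{-t^{2}/(2\varepsilon^{2})}
  + (n-1)\,\varepsilon^{2} \int t^{\,n-2} e^{-t^{2}/(2\varepsilon^{2})}\,dt,
  \qquad n \ge 2,
```

obtained by taking u = t^{n-1} and dv = t e^{-t^2/(2ε^2)} dt, so that v = -ε^2 e^{-t^2/(2ε^2)}.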

Lemma 25. One has

Proof. We have seen that

Substituting (A.23) into (A.3), we obtain

Thus, combining (A.2) and (A.25), we obtain

Lemma 26. One has

Proof. Considering Remark 1 here, and denoting t K, with K big enough such that , we obtain (A.27).

This appendix finishes with (A.29), which is a consequence of (A.26) and (A.27):

B. The Analytic Calculation of Integral for

Consider that with and the matrix .

Thus,

Remark 6. According to (B.2), in order to be able to calculate the integral , we should calculate .

Lemma 27. One has However, before we prove Lemma 27, the following lemmas should first be considered.

Lemma 28. One has

Proof. After some basic calculations, we obtain . First, we calculate . Then, , and we obtain .
Afterwards, we denote .
And we take
In this part, we also use Remark 1, with as , with and . Thus . Moreover,
Now, we calculate
The , as we have seen (using Remark 1).
Then, we obtain
(using (B.9) and (A.10)).
Thus, we conclude
Now, we calculate (since and , then ). Thus, we obtain (using (B.9) and (A.12)) .
Consequently,
Generalizing the above expression, we take . Now, the recursive formula (B.17) should be proved. Thus, we obtain (using (B.16) and (A.16)) . The above expression concludes the recursive proof.

Lemma 29. One has

Proof. Consider that . First, we calculate . Following Remark 1, we assume that 0 and t with . Thus, . Moreover, we take . And . Then, using (B.9) and (B.23), we calculate that it equals 0.
Thus, we obtain
Afterwards, we calculate, using (B.12) and (B.25):
Hence, we conclude that
Now, we calculate, using (B.15) and (B.27):
Hence, we obtain
Combining equations (B.23)–(B.29), we obtain recursively (B.30). Thus, in order to conclude, we should prove . Hence (using (B.16) and (B.30)),
The proof is concluded by considering .

Lemma 30. One has

Proof. For 1, we have proved (B.16). For 2, we have also proved (B.30). We assume that (B.33) is true. Now, we have to prove the following expression for 1, that is,

Now, we return to the proof of Lemma 27.

Proof. Consider that Then, (considering (B.33))

Remark 7. In all the proofs of Lemmas 27–30, we have used for every and every . In this part of Appendix B, we return to expression (B.2) and calculate (using (A.23) and (B.3), and taking the limit as 0) what follows: Consequently, we obtain Combining (B.1) and (B.41), we obtain (B.42) for every , that is, where are the Weyr characteristics via Ferrers diagrams, for and . Note that is the index of annihilation for the eigenvalue . Note that the matrix (B.42) has blocks. Each one of the blocks can be written as follows:
Consequently, the matrix (B.42), by using (B.43), can be written as follows

Acknowledgments

The authors are very grateful to Editor Professor Amit Bhaya and the anonymous referees for their comments and suggestions, which highly improved the quality of the paper. The second author is also very grateful to Mr. Aristomenis Tsiomos, Mr. Chris Andrianoupolitis, Ms. Maria Kelertzi, Ms. Aggeliki Drakopoulou, and Mr. Michael Filakouris for their important moral support.