International Journal of Mathematics


Vol. 23, No. 2 (2012) 1250034 (34 pages)
© World Scientific Publishing Company
DOI: 10.1142/S0129167X12500346

POWER SERIES SOLUTIONS OF SINGULAR LINEAR SYSTEMS

JEN-YIN HAN
Mathematics Branch, Nei-Li High School
Nei-Li 320, Tao-Yuan County, Taiwan

CHIN-YUAN LIN
Department of Mathematics
National Central University
Chung-Li 320, Taiwan
cylin@math.ncu.edu.tw

Received 18 February 2011


Accepted 12 September 2011
Published 2 February 2012

By using exponential functions, a fundamental set of power series solutions for singular
linear systems of the first kind is explicitly computed.

Keywords: Weakly singular systems; singularity of the first kind; Jordan canonical forms;
exponential functions.

Mathematics Subject Classification 2010: 15A21, 34A30

1. Introduction
Let A ≠ 0 and B_k, k = 0, 1, 2, …, be n × n matrices with complex entries, where n ∈ N. Consider the singular linear system of n first-order differential equations

(d/dz) u(z) = (1/z) ( A + z Σ_{k=0}^{∞} z^{k} B_k ) u(z)   (1.1)

in a punctured disk {z ∈ C : 0 < |z| < r_0}, where r_0 is a given positive number. This system is called weakly singular [21], or a system with a singularity of the first kind at 0 [4, p. 111], [2, p. 17]. Here the series Σ_{k=0}^{∞} z^{k} B_k is assumed to be absolutely convergent; one example of such a series is an analytic matrix-valued function of z.


One [21, pp. 225–235] of the two traditional methods for solving (1.1) consists of trying a formal solution of the form

u(z) = Σ_{k=0}^{∞} z^{λ+k} c_k   (1.2)

or the form

u(z)|_{z=e^t} = Σ_{k=0}^{∞} z^{λ+k} p_k(t)|_{z=e^t},   (1.3)

and then substituting it into (1.1). After equating the coefficients of z^{λ+k} or e^{(λ+k)t} on both sides of (1.1), one can determine the constant complex number λ and the constant column vectors c_k, k = 0, 1, …, in (1.2), or the constant complex number λ and the column vector polynomial functions p_k(t), k = 0, 1, …, of t in (1.3), of degree less than some nonnegative integer n_0. The u(z) determined in this way will be an actual solution to (1.1) if it converges. The other method [4, pp. 108–127], [2, pp. 17–33] lies in trying the form Φ(z) = P(z) z^{R} as a formal fundamental matrix associated with solutions and then substituting it into (1.1), provided that no eigenvalues of A differ by positive integers. Here R is a Jordan canonical form of the matrix A, and

P(z) = Σ_{k=0}^{∞} z^{k} P_k,

in which the constant matrices P_k can be determined as in the first method, after equating the coefficients on both sides of (1.1). The general case, where eigenvalues of A may differ by positive integers, can be reduced to the former case by suitable transformations [4, pp. 120–122], [2, pp. 27–33], [9, 10]. The Φ(z) determined by this method will be an actual fundamental matrix if it converges. Thus, in either method, what remains is to verify the convergence of u(z) or Φ(z), and this is done in [21, pp. 228–232] for u(z) and in [4, pp. 117–118], [2, p. 19] for Φ(z). These theories generalize the results of Frobenius [4, pp. 132–135], where the nth-order regular singular equation is considered:

z^{n} y^{(n)} + z^{n−1} b_1(z) y^{(n−1)} + ⋯ + z b_{n−1}(z) y′ + b_n(z) y = 0.   (1.4)

Here b_k(z), k = 1, …, n, are analytic functions in the disk {z ∈ C : 0 ≤ |z| < r_0}, and Eq. (1.4) can be transformed into the form (1.1):

du/dz = (1/z) B(z) u,

by the substitutions [4, p. 124]

u_k = z^{k−1} y^{(k−1)},   k = 1, …, n,

in which B(z) is an analytic matrix function in the disk {z ∈ C : 0 ≤ |z| < r_0} and u has the components u_k. However, in cases where logarithmic terms are involved in the power series solutions of (1.4), the Frobenius method can sometimes lighten the load of calculations [4, pp. 132–135], [2, pp. 35–36]. See [12, pp. 91–92] for a historical account of singular point theory.
In this article, we are interested in the following problem: we shall solve (1.1) directly by using the exponential function [1, 3, 4, 17, 21]

e^{tA} ≡ Σ_{k=0}^{∞} (tA)^{k}/k!

and the results in [6] and in the paper [11]. We shall compute the power series solutions explicitly, whether or not logarithmic terms are involved. Furthermore, without substituting trial solutions and then performing calculations, the results here tell in advance whether or not logarithmic terms are involved. Compare our results to those obtained with the traditional methods [21, pp. 224–235], [4, pp. 108–127], [2, pp. 17–33], [9, 10], [4, pp. 132–135], [2, pp. 35–36]. To illustrate our results, four examples are presented in Sec. 6. Those examples can also help the reader to understand the subsequent proofs in this article. For information about numerical or algebraic computation of e^{tA}, the reader is referred to [7, 13, 18].
The rest of this article is organized as follows. Section 2 states the main results, and Sec. 3 proves Corollary 2.2. Section 4 proves Corollaries 2.3 and 2.4, and Sec. 5 proves the main Theorem 2.1. The proof of Corollaries 2.3 and 2.4 is done by citing the known results Propositions 4.2 and 4.3 and then deriving some useful properties from them. Finally, Sec. 6 contains the illustrative examples, and Appendix A explains how to calculate a Jordan basis easily by the method in [11].

2. Main Results
To achieve our goal, we reduce (1.1), by the substitution z = e^{t}, to the system

(d/dt) v(t) = ( A + e^{t} Σ_{k=0}^{∞} e^{kt} B_k ) v(t),   (2.1)

where v(t) = u(z)|_{z=e^t}. We further rewrite (2.1) as the system

(d/dt) w(t) = e^{−tA} ( e^{t} Σ_{k=0}^{∞} e^{kt} B_k ) e^{tA} w(t),   (2.2)

using the change of variables v(t) = e^{tA} w(t).

Theorem 2.1. For each element (or generalized eigenvector) w_0 in the Jordan canonical basis J of [11], the limit w(t), as j → ∞, of the successive approximations

w_j(t) = w_0 + ∫ e^{−tA} ( e^{t} Σ_{k=0}^{∞} e^{kt} B_k ) e^{tA} w_{j−1} dt,   j = 1, 2, …,   (2.3)

of (2.2) by indefinite integrals, is a power series solution to (2.2), and the n power series solutions w(t), corresponding to the n elements w_0 in the Jordan canonical basis J, form a fundamental set of solutions for (2.2). Thus, the set of the n functions

u(z) ≡ v(t)|_{t=ln(z)} = e^{tA} w(t)|_{t=ln(z)}

is a fundamental set of solutions for (1.1).

Remark 2.1. Here the integration constants in the indefinite integrals are all taken to be zero, and the Jordan basis J can be easily calculated by the method in [11], for which see Appendix A.
Also, the calculation of the indefinite integral in (2.3) uses the known formula

e^{tA} w_0 = e^{tλ_0} ( w_0 + [t(A−λ_0)/1!] w_0 + ⋯ + {[t(A−λ_0)]^{m_0−1}/(m_0−1)!} w_0 ),

if w_0 is a generalized eigenvector satisfying

(C1)   (A−λ_0)^{m_0} w_0 = 0  but  (A−λ_0)^{m_0−1} w_0 ≠ 0

for some positive integer m_0. Here (A−λ_0) means (A−λ_0 E), with E the n × n identity matrix, and this convention will be used throughout this article.
Moreover, if the difference l = β − α of two eigenvalues β and α of A is a positive integer and equals a sum

(C2)   l = [q_0(k_0+1)] + [q_1(k_1+1)] + ⋯ + [q_{s_0}(k_{s_0}+1)]

of some integers q_j(k_j+1), j = 0, 1, …, s_0, for some s_0, where k_j is the index of a term B_{k_j} ≠ 0 and the number (k_j+1) is added up repeatedly q_j times, q_j ∈ N, then the calculation of the indefinite integral in (2.3) also uses the additional known formula (see Proposition 4.2 in Sec. 4)

e^{tA} = Σ_{i=1}^{s} Σ_{k=0}^{m_i−1} t^{k} e^{tλ_i} M_{i,k}   (2.4)

and the additional properties (see Proposition 4.4 in Sec. 4) that the matrices M_{i,k} satisfy:

M_{i,l} M_{j,m} = 0  if i ≠ j;
M_{i,m} M_{i,l} = M_{i,l} M_{i,m} = [(A−λ_i)^{l}/l!] M_{i,m} = [(A−λ_i)^{m+l}/(l! m!)] M_{i,0}
   if 0 ≤ l, m ≤ m_i−1 and l+m ≤ m_i−1;   (2.5)
M_{i,m} M_{i,l} = 0  if l+m > m_i−1;  and
M_{i,k} M_{i,0} = M_{i,0} M_{i,k} = M_{i,k}  for i = 1, …, s.

Here λ_i, i = 1, …, s, are all the eigenvalues of A, with respective multiplicities m_i.
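As a quick numerical illustration of the truncated-series formula for e^{tA}w_0 above, the following sketch (assuming numpy and scipy; names are ours, and the 2 × 2 nilpotent matrix is the one reused later in Example 6.2) checks that the series terminates at m_0 − 1 terms for a generalized eigenvector:

import numpy as np
from scipy.linalg import expm

A  = np.array([[0.0, 1.0], [0.0, 0.0]])   # (A - 0*E)^2 = 0
w0 = np.array([0.0, 1.0])                 # generalized eigenvector: A w0 != 0, A(A w0) = 0, so m0 = 2
t  = 1.3
# exp(tA) w0 = e^{t*0} (w0 + t (A - 0) w0): the exponential series applied to w0 stops after m0 - 1 = 1 steps
print(np.allclose(expm(t * A) @ w0, w0 + t * (A @ w0)))   # True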


Corollary 2.2. Counting multiplicity, if no eigenvalues of A differ by nonnegative integers or, more generally, if neither (C1) with multiplicity m_0 ≥ 2 nor (C2) holds, then no logarithmic terms are involved in the power series solutions. In this case, all solutions are of the form

z^{λ_0} Σ_{k=0}^{∞} c_k z^{k},

where λ_0 is an eigenvalue of A and the c_k are constant vectors.

Corollary 2.3. Power series solutions involve logarithmic terms if either (C1) with multiplicity m_0 ≥ 2 or (C2) holds.

Remark 2.2. In the case where logarithmic terms are involved, a power series solution is not analytic on the whole punctured disk D ≡ {z ∈ C : 0 < |z| < r_0}, but analytic only on

D \ {z = a ∈ R : −r_0 < a < 0},

that is, D with the line segment on the negative x-axis deleted, starting from the origin and ending at the boundary of D. This is because each single-valued branch of the multi-valued logarithmic function ln(z) has this property.
Combining Corollaries 2.2 and 2.3, we have the following corollary.

Corollary 2.4. Every power series solution of (1.1) is of the form

z^{λ_0} { h_0(z) + [ln(z)] h_1(z) + ⋯ + [ln(z)]^{n_0} h_{n_0}(z) },   (2.6)

where λ_0 is a generalized eigenvalue of A with multiplicity m_0 ≥ 1, and h_j(z), j = 0, 1, …, n_0, for some nonnegative integer n_0, are vector-valued analytic functions of the form

Σ_{k=0}^{∞} c_k z^{k},

with the c_k constant vectors.

Remark 2.3. With regard to the four illustrative examples in Sec. 6, Example 6.1 deals with the case which satisfies neither condition (C1) with m_0 ≥ 2 nor condition (C2) and so, by Corollary 2.2, involves no logarithmic terms. Our calculations in this example are also compared with those by the traditional Frobenius method [4, pp. 108–127] of substitution. The other three examples involve logarithmic terms by Corollary 2.3: Example 6.2 satisfies (C1) with m_0 ≥ 2 but not (C2), Example 6.3 does not satisfy (C1) with m_0 ≥ 2 but satisfies (C2), and finally, Example 6.4, with matrices of order 4 × 4, satisfies both (C1) with m_0 ≥ 2 and (C2). It seems a difficult task to solve Example 6.4 with the traditional methods [21, pp. 224–235], [4, pp. 108–127], [2, pp. 17–33], [9, 10], [4, pp. 132–135], [2, pp. 35–36], and [9, 10, 12].


3. The Case Where Neither (C1) with m_0 ≥ 2 Nor (C2) is Satisfied

Proof of Corollary 2.2. We divide the proof into two steps.

Step 1 (How the w_j look). By the assumption on λ_0, we have e^{tA} w_0 = e^{λ_0 t} w_0, where w_0 is an eigenvector for λ_0. It follows from Eq. (2.3) that

w_1 = w_0 + ∫ e^{−tA} ( e^{t} Σ_{k_1=0}^{∞} e^{k_1 t} B_{k_1} ) e^{tA} w_0 dt
    = w_0 + Σ_{k_1=0}^{∞} [(k_1+1)+(λ_0−A)]^{−1} e^{t[(k_1+1)+(λ_0−A)]} B_{k_1} w_0,

w_2 = w_0 + ∫ e^{−tA} ( e^{t} Σ_{k_2=0}^{∞} e^{k_2 t} B_{k_2} ) e^{tA} w_1 dt
    = w_1 + Σ_{k_2=0}^{∞} [(k_2+1)+(k_1+1)+(λ_0−A)]^{−1} e^{t[(k_2+1)+(λ_0−A)]} B_{k_2}
          × Σ_{k_1=0}^{∞} [(k_1+1)+(λ_0−A)]^{−1} e^{t(k_1+1)} B_{k_1} w_0,

and, in general, that

w_j = w_0 + ∫ e^{−tA} ( e^{t} Σ_{k_j=0}^{∞} e^{k_j t} B_{k_j} ) e^{tA} w_{j−1} dt
    = w_{j−1} + Σ_{k_j=0}^{∞} [(k_j+1)+(k_{j−1}+1)+⋯+(k_1+1)+(λ_0−A)]^{−1} e^{t[(k_j+1)+(λ_0−A)]} B_{k_j}
          × Σ_{k_{j−1}=0}^{∞} [(k_{j−1}+1)+(k_{j−2}+1)+⋯+(k_1+1)+(λ_0−A)]^{−1} e^{t(k_{j−1}+1)} B_{k_{j−1}}
          × ⋯ × Σ_{k_1=0}^{∞} [(k_1+1)+(λ_0−A)]^{−1} e^{t(k_1+1)} B_{k_1} w_0,   j = 1, 2, ….

Thus the series takes the desired form, without logarithmic terms, once the substitution z = e^{t} is used.
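In the log-free case, the same solution can be generated coefficient by coefficient. The sketch below (Python with numpy; the function name is ours) does not implement the iteration (2.3) itself; it implements the equivalent recursion obtained by matching powers of z in (1.1), namely (λ_0 + m − A) c_m = Σ_{j<m} B_{m−1−j} c_j, which is solvable whenever λ_0 + m (m ≥ 1) is not an eigenvalue of A. The data of Example 6.1 below are used as a check.

import numpy as np

def series_coefficients(A, B, lam0, c0, N):
    """Coefficients c_0,...,c_N of a log-free solution u(z) = z^lam0 * sum_m c_m z^m
    of z u' = (A + sum_k z^{k+1} B_k) u.  Matching powers of z gives
    (lam0 + m - A) c_m = sum_{j<m} B_{m-1-j} c_j,  m >= 1."""
    n = A.shape[0]
    c = [np.array(c0, dtype=float)]
    for m in range(1, N + 1):
        rhs = sum(B[k] @ c[m - 1 - k] for k in range(min(len(B), m)))
        c.append(np.linalg.solve((lam0 + m) * np.eye(n) - A, rhs))
    return c

# Data of Example 6.1 (Sec. 6): eigenvalue lam0 = 0 with eigenvector (1, 0)^T.
A = np.array([[0.0, 1.0], [0.0, -3.0]])
B = [np.zeros((2, 2)), np.array([[0.0, 0.0], [1.0, 0.0]])]   # B_0 = 0, B_1
c = series_coefficients(A, B, lam0=0.0, c0=[1.0, 0.0], N=4)
print([ck[0] for ck in c])   # first components: 1, 0, 1/10, 0, 1/280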

Step 2. We next show the convergence of the w_j, from which it follows that the limit of w_j = w_j(t) is a solution of (2.2) and then that u(z) = e^{tA}[lim_{j→∞} w_j(t)]|_{t=ln(z)} is a solution of (1.1). This is because term-by-term differentiation is allowed.
Let

ε_0 = sup_{μ∈σ(A)} |−λ_0 + μ|,

where σ(A) is the spectrum of A, a finite set.
Choose a j_0 ∈ N such that j_0 > ε_0. It follows that, for k, m = 0, 1, 2, …,

‖(k + m + j_0 + λ_0 − A)^{−1}‖ ≤ (k + m + j_0)^{−1} [ 1 + ε_0/(k+m+j_0) + (ε_0/(k+m+j_0))^{2} + ⋯ ]
   = (k + m + j_0 − ε_0)^{−1} ≤ (m + j_0 − ε_0)^{−1}.   (3.1)

This will be used in estimating the quantity a_j below. Here the Neumann series (k + m + j_0 + λ_0 − A)^{−1} = (k + m + j_0)^{−1} Σ_{l=0}^{∞} [(−λ_0 + A)/(k+m+j_0)]^{l} was utilized [20].
For j = 1, 2, …, let

a_j = Σ_{k_j=0}^{∞} ‖[(k_j+1)+(k_{j−1}+1)+⋯+(k_1+1)+(λ_0−A)]^{−1}‖ ‖e^{t[(k_j+1)+(λ_0−A)]} B_{k_j}‖
      × Σ_{k_{j−1}=0}^{∞} ‖[(k_{j−1}+1)+(k_{j−2}+1)+⋯+(k_1+1)+(λ_0−A)]^{−1}‖ ‖e^{t(k_{j−1}+1)} B_{k_{j−1}}‖
      × ⋯ × Σ_{k_1=0}^{∞} ‖[(k_1+1)+(λ_0−A)]^{−1}‖ ‖e^{t(k_1+1)} B_{k_1}‖ ‖w_0‖,

where Σ_{k=0}^{∞} ‖e^{t(k+1)} B_k‖ converges by assumption. Because

‖w_{j_0+j} − w_{j_0+j−1}‖ ≤ a_{j_0+j},

we shall show that Σ_{j=0}^{∞} a_{j_0+j} converges by the ratio test, from which the convergence of w_{j_0+j} as j → ∞ follows.
Easy calculations in conjunction with (3.1) show that

a_{j_0+1} ≤ (1 + j_0 − ε_0)^{−1} ‖e^{t(λ_0−A)}‖ Σ_{k_{j_0+1}=0}^{∞} ‖e^{t(k_{j_0+1}+1)} B_{k_{j_0+1}}‖ a_{j_0},

a_{j_0+2} ≤ (2 + j_0 − ε_0)^{−1} ‖e^{t(λ_0−A)}‖ Σ_{k_{j_0+2}=0}^{∞} ‖e^{t(k_{j_0+2}+1)} B_{k_{j_0+2}}‖ a_{j_0+1},

and, in general, that

a_{j_0+j} ≤ (j + j_0 − ε_0)^{−1} ‖e^{t(λ_0−A)}‖ Σ_{k_{j_0+j}=0}^{∞} ‖e^{t(k_{j_0+j}+1)} B_{k_{j_0+j}}‖ a_{j_0+j−1},   j = 1, 2, ….

Since

a_{j_0+j}/a_{j_0+j−1} ≤ (j + j_0 − ε_0)^{−1} ‖e^{t(λ_0−A)}‖ Σ_{k=0}^{∞} ‖e^{t(k+1)} B_k‖ → 0

as j goes to infinity, the series Σ_{j=1}^{∞} a_{j_0+j} converges by the ratio test. Thus, from

‖w_{j_0+j} − w_{j_0+j−1}‖ ≤ a_{j_0+j},

it follows that the series

Σ_{j=1}^{∞} (w_{j_0+j} − w_{j_0+j−1}) = lim_{j→∞} (w_{j_0+j} − w_{j_0})

converges, and then so does the limit lim_{j→∞} w_j.
The proof is now complete.



4. The Case Where Either (C1) with m_0 ≥ 2 or (C2) is Satisfied

We cite the following known results, Lemma 4.1 and Propositions 4.2 and 4.3.

Lemma 4.1. The nth-order linear ordinary differential equation

(D − λ_1)^{m_1} ⋯ (D − λ_s)^{m_s} u(t) = 0,

where D = d/dt is a first-order differential operator, s ∈ N, λ_i ∈ C and m_i ∈ N for i = 1, …, s, λ_i ≠ λ_j for i ≠ j, and m_1 + ⋯ + m_s = n, has the set of functions

{ t^{k} e^{tλ_i} : i = 1, …, s;  k = 0, 1, …, (m_i−1) }

as a fundamental set of solutions.
See [3, 4] for a proof.

Proposition 4.2. Assume that the characteristic equation |A − λI| = 0 of A takes the form

a_0 (λ − λ_1)^{m_1} (λ − λ_2)^{m_2} ⋯ (λ − λ_s)^{m_s} = 0,   (4.1)

where s ∈ N, λ_i ≠ λ_j for i ≠ j, and m_1 + m_2 + ⋯ + m_s = n. This is always true [3, pp. 17–18].
Then there exist n unique square matrices

M_{i,k},   i = 1, 2, …, s;  k = 0, 1, …, (m_i−1),

such that

e^{tA} = Σ_{i=1}^{s} Σ_{k=0}^{m_i−1} t^{k} e^{tλ_i} M_{i,k}.

Furthermore, after relabeling the n functions

t^{k} e^{tλ_i},   i = 1, 2, …, s;  k = 0, 1, …, (m_i−1),

as the n functions

z_i(t),   i = 1, 2, …, n,

and relabeling the n matrices

M_{i,k},   i = 1, 2, …, s;  k = 0, 1, …, (m_i−1),

as the n matrices

N_j,   j = 1, 2, …, n,

it follows that the n matrices N_j, j = 1, …, n, can be computed as the n unique solutions of the algebraic equations

⎛ z_1(0)        z_2(0)        ⋯  z_n(0)        ⎞ ⎛ N_1 ⎞   ⎛ I       ⎞
⎜ z_1′(0)       z_2′(0)       ⋯  z_n′(0)       ⎟ ⎜ N_2 ⎟ = ⎜ A       ⎟
⎜ ⋮             ⋮                 ⋮             ⎟ ⎜ ⋮   ⎟   ⎜ ⋮       ⎟
⎝ z_1^{(n−1)}(0) z_2^{(n−1)}(0) ⋯ z_n^{(n−1)}(0) ⎠ ⎝ N_n ⎠   ⎝ A^{n−1} ⎠

A proof of Proposition 4.2 is given in [6, pp. 307–313], where the Cayley–Hamilton theorem is used.
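For small matrices, this system can be solved mechanically. The following sketch (assuming sympy; the function and variable names are ours, not from [6] or [11]) builds the z_i(t), solves for the M_{i,k}, and checks formula (2.4) against the built-in matrix exponential, using the 2 × 2 matrix that reappears in Example 6.3.

import sympy as sp

def exp_tA_decomposition(A):
    """Sketch of Proposition 4.2: exp(tA) = sum_i sum_k t^k e^{lambda_i t} M_{i,k},
    with the M_{i,k} obtained from the linear system with right-hand sides I, A, ..., A^{n-1}."""
    t = sp.symbols('t')
    n = A.rows
    funcs, labels = [], []
    for lam, mult in A.eigenvals().items():
        for k in range(mult):
            funcs.append(t**k * sp.exp(lam * t))
            labels.append((lam, k))
    # W[r, c] = r-th derivative of the c-th function at t = 0
    W = sp.Matrix(n, n, lambda r, c: sp.diff(funcs[c], t, r).subs(t, 0))
    Winv = W.inv()
    powers = [A**r for r in range(n)]
    return {lab: sum((Winv[c, r] * powers[r] for r in range(n)), sp.zeros(n, n))
            for c, lab in enumerate(labels)}

A = sp.Matrix([[0, 1], [0, -2]])            # matrix of Example 6.3
Ms = exp_tA_decomposition(A)
t = sp.symbols('t')
recon = sum((t**k * sp.exp(lam * t) * M for (lam, k), M in Ms.items()), sp.zeros(2, 2))
print(sp.simplify(recon - (t * A).exp()))   # zero matrix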
The matrices M_{i,k} in Proposition 4.2 satisfy the following important properties.

Proposition 4.3. The following is true for i = 1, …, s:

M_{i,k} = (1/k!) (A − λ_i I)^{k} M_{i,0},   k = 0, 1, …, (m_i−1),
(A − λ_i I)^{m_i} M_{i,0} = 0,   and
A M_{i,m_i−1} = λ_i M_{i,m_i−1}.

Thus each nonzero column vector of the matrix M_{i,m_i−1} is an eigenvector, and each column vector of the matrix M_{i,k} equals its corresponding column vector of the matrix M_{i,0}, pre-multiplied by the factor (1/k!) (A − λ_i I)^{k}.
Proposition 4.3 is proved in [11], which combines it with Lemma 4.1 to show the existence of a Jordan canonical form of a matrix.
We now derive, for our use, the additional properties of the matrices M_{i,k} stated in the following proposition.

Proposition 4.4. The matrices M_{i,k} satisfy the following:

M_{i,l} M_{j,m} = 0  if i ≠ j;
M_{i,m} M_{i,l} = M_{i,l} M_{i,m} = [(A−λ_i)^{l}/l!] M_{i,m} = [(A−λ_i)^{m+l}/(l! m!)] M_{i,0}
   if 0 ≤ l, m ≤ m_i−1 and l+m ≤ m_i−1;
M_{i,m} M_{i,l} = 0  if l+m > m_i−1;  and
M_{i,k} M_{i,0} = M_{i,k},   i = 1, …, s.

Proof. Making use of Propositions 4.3 and 4.2, we have

e^{tA} M_{j,m} = e^{tλ_j} Σ_{l=0}^{∞} {[t(A−λ_j)]^{l}/l!} M_{j,m} = e^{tλ_j} Σ_{l=0}^{m_j−1} {[t(A−λ_j)]^{l}/l!} M_{j,m}

by the first two identities in Proposition 4.3, while, by Proposition 4.2,

e^{tA} M_{j,m} = Σ_{l=0}^{m_j−1} t^{l} e^{tλ_j} M_{j,l} M_{j,m} + Σ_{i≠j, i=1}^{s} Σ_{l=0}^{m_i−1} t^{l} e^{tλ_i} M_{i,l} M_{j,m}.

Since

{ t^{k} e^{tλ_i} : i = 1, …, s;  k = 0, …, (m_i−1) }

is linearly independent by Lemma 4.1, we have

M_{i,l} M_{j,m} = 0  for i ≠ j;
M_{j,l} M_{j,m} = [(A−λ_j)^{l}/l!] M_{j,m}.

The rest follows from Proposition 4.3, and the proof is complete.

Corollary 2.3 will be proved by using the following Propositions 4.5 and 4.6.

Proposition 4.5. Power series solutions of (1.1) involve logarithmic terms if A satisfies (C1) with m_0 ≥ 2 but does not satisfy (C2).

Proof. We divide the proof into three steps, where Step 3 consists of two cases.

Step 1 (How the w_j in (2.3) look). By the assumption on λ_0, we have

e^{tA} w_0 = e^{tλ_0} ( w_0 + t(A−λ_0) w_0 + ⋯ + {[t(A−λ_0)]^{m_0−1}/(m_0−1)!} w_0 ) ≡ Σ_{m=0}^{m_0−1} ζ_m,

with ζ_m = e^{tλ_0} {[t(A−λ_0)]^{m}/m!} w_0, where w_0 is a generalized eigenvector in the Jordan basis J that satisfies

(A−λ_0)^{m_0−1} w_0 ≠ 0  but  (A−λ_0)^{m_0} w_0 = 0

for some integer m_0 ≥ 2.


It follows from Eq. (2.3) that

w_1 = w_0 + ∫ e^{−tA} ( Σ_{k=0}^{∞} e^{t(k+1)} B_k ) e^{tA} w_0 dt
    = w_0 + Σ_{m=0}^{m_0−1} ∫ e^{−tA} ( Σ_{k=0}^{∞} e^{t(k+1)} B_k ) ζ_m dt,

and, in general, that

w_j = w_0 + ∫ e^{−tA} ( Σ_{k=0}^{∞} e^{t(k+1)} B_k ) e^{tA} w_{j−1} dt,   j = 1, 2, 3, ….

By letting

η_{m,1} ≡ ∫ e^{−tA} ( Σ_{k=0}^{∞} e^{t(k+1)} B_k ) ζ_m dt,
η_{m,j} ≡ ∫ e^{−tA} ( Σ_{k=0}^{∞} e^{t(k+1)} B_k ) e^{tA} η_{m,j−1} dt,   j = 2, 3, …,   (4.2)

we see that

w_j = w_0 + Σ_{m=0}^{m_0−1} Σ_{l=1}^{j} η_{m,l},   j = 1, 2, ….   (4.3)

Remark 4.1. It is to be observed that the w_j in the proof of Corollary 2.2 is the w_j in (4.3) above with m_0 = 1.

Step 2 (The calculation of η_{m,l}).

By the formula of integration by parts, we have that

η_{m,1} = t^{m} Σ_{k=0}^{∞} [(k+1)+(λ_0−A)]^{−1} e^{t[(k+1)+(λ_0−A)]} B_k {(A−λ_0)^{m}/m!} w_0
        − t^{m−1} m Σ_{k=0}^{∞} [(k+1)+(λ_0−A)]^{−2} e^{t[(k+1)+(λ_0−A)]} B_k {(A−λ_0)^{m}/m!} w_0
        + t^{m−2} m(m−1) Σ_{k=0}^{∞} [(k+1)+(λ_0−A)]^{−3} e^{t[(k+1)+(λ_0−A)]} B_k {(A−λ_0)^{m}/m!} w_0
        + ⋯ + (−1)^{m} m! Σ_{k=0}^{∞} [(k+1)+(λ_0−A)]^{−(m+1)} e^{t[(k+1)+(λ_0−A)]} B_k {(A−λ_0)^{m}/m!} w_0
        = Σ_{l=0}^{m} (−1)^{l} {m!/(m−l)!} t^{m−l} Σ_{k=0}^{∞} [(k+1)+(λ_0−A)]^{−(l+1)} e^{t[(k+1)+(λ_0−A)]} B_k {(A−λ_0)^{m}/m!} w_0,   (4.4)

which is a polynomial p_{m,1}(t) = Σ_{l=0}^{m} γ_{m,l} t^{l} in t of degree m. This is due to the term ζ_m, which has the power m in t. Hence, in the second iteration, that of calculating η_{m,2}, the result will again be a polynomial p_{m,2}(t) in t of degree m, in which each γ_{m,l} t^{l} contributes a polynomial in t of degree l. The other η_{m,l}, l = 3, …, are calculated similarly. Thus the series involves logarithmic terms under the substitution t = ln(z), as we wish to prove.

Step 3. We next show that the m_0 series Σ_{l=1}^{∞} η_{m,l} converge, where m = 0, 1, …, (m_0−1). From this and (4.3), it will follow that the limit of the w_j = w_j(t) is a solution of (2.2) and that u(z) = e^{tA}[lim_{j→∞} w_j(t)]|_{t=ln(z)} is a solution of (1.1). This is because term-by-term differentiation is allowed. We consider two cases.

Case 1, where 1 > ε_0 ≡ sup_{μ∈σ(A)} |−λ_0 + μ|. Here σ(A) is the spectrum of A. As in proving Corollary 2.2, we have

‖[(k+m+1)+(λ_0−A)]^{−1}‖ ≤ [(m+1) − ε_0]^{−1},   k, m = 0, 1, 2, ….   (4.5)

This will be used in estimating the a_j, b_j, c_j and d_j below.

From Eq. (4.4) and the paragraph following it, we see that

Σ_{j=1}^{∞} η_{m,j} = Σ_{l=0}^{m} β_{m,l} t^{l}   (4.6)

is a polynomial in t of degree m whose coefficients β_{m,l} are series of functions. We shall show by the ratio test that the (m+1) series β_{m,l} converge, where l = 0, 1, …, m, from which the convergence of Σ_{j=1}^{∞} η_{m,j} follows. We shall treat in detail the coefficients β_{m,m}, β_{m,m−1}, β_{m,m−2}, and β_{m,0} of t^{m}, t^{m−1}, t^{m−2}, and t^{0}, respectively, from which the rest will be clear.
The following calculation of β_{m,l}, l = m, (m−1), (m−2), and 0, is done by using Eq. (4.4) and the explanatory paragraph following it.
The coefficient β_{m,m} of t^{m} is the series

Σ_{k_1=0}^{∞} [(k_1+1)+(λ_0−A)]^{−1} e^{t[(k_1+1)+(λ_0−A)]} B_{k_1} {(A−λ_0)^{m}/m!} w_0
+ Σ_{k_2=0}^{∞} [(k_2+1)+(k_1+1)+(λ_0−A)]^{−1} e^{t[(k_2+1)+(k_1+1)+(λ_0−A)]} B_{k_2}
      Σ_{k_1=0}^{∞} [(k_1+1)+(λ_0−A)]^{−1} B_{k_1} {(A−λ_0)^{m}/m!} w_0
+ Σ_{k_3=0}^{∞} [(k_3+1)+(k_2+1)+(k_1+1)+(λ_0−A)]^{−1} e^{t[(k_3+1)+(k_2+1)+(k_1+1)+(λ_0−A)]} B_{k_3}
      Σ_{k_2=0}^{∞} [(k_2+1)+(k_1+1)+(λ_0−A)]^{−1} B_{k_2}
      Σ_{k_1=0}^{∞} [(k_1+1)+(λ_0−A)]^{−1} B_{k_1} {(A−λ_0)^{m}/m!} w_0 + ⋯,

in which the jth term corresponds to the jth iteration, that of calculating η_{m,j}. The norm of this series is bounded by the series of nonnegative numbers Σ_{j=1}^{∞} a_j, where

a_j ≡ Σ_{k_j=0}^{∞} [(k_j+1)+(k_{j−1}+1)+⋯+(k_1+1) − ε_0]^{−1} ‖e^{t[(k_j+1)+(λ_0−A)]} B_{k_j}‖
      × Σ_{k_{j−1}=0}^{∞} [(k_{j−1}+1)+(k_{j−2}+1)+⋯+(k_1+1) − ε_0]^{−1} ‖e^{t(k_{j−1}+1)} B_{k_{j−1}}‖
      × ⋯ × Σ_{k_1=0}^{∞} [(k_1+1) − ε_0]^{−1} ‖e^{t(k_1+1)} B_{k_1}‖ ‖{(A−λ_0)^{m}/m!} w_0‖.

Here note that Σ_{k=0}^{∞} ‖e^{t(k+1)} B_k‖ converges by assumption.
Easy calculations combined with (4.5) show that

a_2 ≤ (2 − ε_0)^{−1} ‖e^{t(λ_0−A)}‖ Σ_{k_2=0}^{∞} ‖e^{t(k_2+1)} B_{k_2}‖ a_1,
a_3 ≤ (3 − ε_0)^{−1} ‖e^{t(λ_0−A)}‖ Σ_{k_3=0}^{∞} ‖e^{t(k_3+1)} B_{k_3}‖ a_2,

and, in general, that

a_{j+1} ≤ (j+1 − ε_0)^{−1} ‖e^{t(λ_0−A)}‖ Σ_{k_{j+1}=0}^{∞} ‖e^{t(k_{j+1}+1)} B_{k_{j+1}}‖ a_j,   j = 1, 2, ….

Since

a_{j+1}/a_j ≤ (j+1 − ε_0)^{−1} ‖e^{t(λ_0−A)}‖ Σ_{k=0}^{∞} ‖e^{t(k+1)} B_k‖ → 0

as j goes to infinity, as in Step 2 of the proof of Corollary 2.2, we have that Σ_{j=1}^{∞} a_j converges by the ratio test. Hence the series β_{m,m} converges.
The coefficient β_{m,m−1} of t^{m−1} is the series

−m Σ_{k_1=0}^{∞} [(k_1+1)+(λ_0−A)]^{−2} e^{t[(k_1+1)+(λ_0−A)]} B_{k_1} {(A−λ_0)^{m}/m!} w_0
− m Σ_{k_2=0}^{∞} [(k_2+1)+(k_1+1)+(λ_0−A)]^{−2} e^{t[(k_2+1)+(k_1+1)+(λ_0−A)]} B_{k_2}
      Σ_{k_1=0}^{∞} [(k_1+1)+(λ_0−A)]^{−1} B_{k_1} {(A−λ_0)^{m}/m!} w_0
− m Σ_{k_2=0}^{∞} [(k_2+1)+(k_1+1)+(λ_0−A)]^{−1} e^{t[(k_2+1)+(k_1+1)+(λ_0−A)]} B_{k_2}
      Σ_{k_1=0}^{∞} [(k_1+1)+(λ_0−A)]^{−2} B_{k_1} {(A−λ_0)^{m}/m!} w_0 − ⋯,

in which the term with the double summation Σ_{k_2=0}^{∞} Σ_{k_1=0}^{∞} is repeated twice (corresponding to the second iteration, that of calculating η_{m,2}), the term with the triple summation Σ_{k_3=0}^{∞} Σ_{k_2=0}^{∞} Σ_{k_1=0}^{∞} is repeated three times (corresponding to the third iteration, that of calculating η_{m,3}), and so on.

The norm of this series is bounded by the series of numbers m Σ_{j=1}^{∞} j b_j, where the j corresponds to the jth iteration, that of calculating η_{m,j}, and

b_j ≡ Σ_{k_j=0}^{∞} [(k_j+1)+(k_{j−1}+1)+⋯+(k_1+1) − ε_0]^{−1} ‖e^{t[(k_j+1)+(λ_0−A)]} B_{k_j}‖
      × Σ_{k_{j−1}=0}^{∞} [(k_{j−1}+1)+(k_{j−2}+1)+⋯+(k_1+1) − ε_0]^{−1} ‖e^{t(k_{j−1}+1)} B_{k_{j−1}}‖
      × ⋯ × Σ_{k_1=0}^{∞} [(k_1+1) − ε_0]^{−2} ‖e^{t(k_1+1)} B_{k_1}‖ ‖{(A−λ_0)^{m}/m!} w_0‖.

Here note that

‖[(l_2+1)+(λ_0−A)]^{−1}‖ ≤ [(l_1+1) − ε_0]^{−1}

for 0 ≤ l_1 < l_2.
As is the case with the a_j above, we have, by making use of (4.5),

(j+1) b_{j+1} / (j b_j) ≤ {(j+1)/j} (j+1 − ε_0)^{−1} ‖e^{t(λ_0−A)}‖ Σ_{k=0}^{∞} ‖e^{t(k+1)} B_k‖ → 0

as j goes to infinity, and so the series m Σ_{j=1}^{∞} j b_j converges by the ratio test. Thus the series β_{m,m−1} converges.
Similarly, the norm of the coefficient β_{m,m−2} of t^{m−2} is bounded by the series of numbers m(m−1) Σ_{j=1}^{∞} [1 + (j−1)2] c_j, where the j corresponds to the jth iteration, that of calculating η_{m,j}, the number 2 corresponds to the contribution from the extra two terms γ_{m,m} t^{m} and γ_{m,m−1} t^{m−1}, in addition to the term γ_{m,m−2} t^{m−2}, and

c_j ≡ Σ_{k_j=0}^{∞} [(k_j+1)+(k_{j−1}+1)+⋯+(k_1+1) − ε_0]^{−1} ‖e^{t[(k_j+1)+(λ_0−A)]} B_{k_j}‖
      × Σ_{k_{j−1}=0}^{∞} [(k_{j−1}+1)+(k_{j−2}+1)+⋯+(k_1+1) − ε_0]^{−1} ‖e^{t(k_{j−1}+1)} B_{k_{j−1}}‖
      × ⋯ × Σ_{k_1=0}^{∞} [(k_1+1) − ε_0]^{−3} ‖e^{t(k_1+1)} B_{k_1}‖ ‖{(A−λ_0)^{m}/m!} w_0‖.

This series of numbers is convergent by the ratio test, since, by making use of (4.5), as is the case with the a_j and b_j,

(1+2j) c_{j+1} / {[1+(j−1)2] c_j} ≤ {(1+2j)/[1+(j−1)2]} (j+1 − ε_0)^{−1} ‖e^{t(λ_0−A)}‖ Σ_{k=0}^{∞} ‖e^{t(k+1)} B_k‖ → 0

as j goes to infinity. Thus the series β_{m,m−2} converges.


Continuing in this way, we have that the norm of the coecient m,0 of t0 is

bounded by the series of numbers m! j=0 [1 + (j 1)m]dj , where the j corresponds
to the jth iteration of calculating m,j , the number m corresponds to the contribu-
tion from the extra m terms m,l tl , l = 1, 2, . . . , m, in addition to the term m,0 t0 ,
and


dj [(kj + 1) + (kj1 + 1) + + (k1 + 1) 0 ]1 et[(kj +1)+(0 A)] Bkj 
kj =0


[(kj1 + 1) + (kj2 + 1) + + (k1 + 1) 0 ]1 et(kj1 +1) Bkj1 
kj1 =0

  
 (A 0 )m 
[(k1 + 1) 0 ] m t(k1 +1)
e 
Bk1   w0 
m! .
k1 =0

This series of numbers is convergent again by the ratio test, since, by making use
of (4.5),
(1 + jm)dj+1
[1 + (j 1)m]dj

1 + jm
(j + 1 0 )1 et(0 A) et(k+1) Bk  0,
[1 + (j 1)m]
k=0

as j goes to innity. Thus the series m,0 converges, which is the coecient of t0 .


Case 2, where 1 ≤ ε_0. As in Step 2 of the proof of Corollary 2.2, there is a j_0 ∈ N such that j_0 > ε_0 and

‖[k + m + j_0 + (λ_0−A)]^{−1}‖ ≤ (m + j_0 − ε_0)^{−1}   (4.7)

for k, m = 0, 1, ….
Instead of considering the η_{m,j}, j = 1, 2, …, consider η̃_{m,j} ≡ η_{m,j_0+j}, j = 1, 2, …. These η̃_{m,j} = η_{m,j_0+j} will have the factors [(k+m) + j_0 + (λ_0−A)]^{−1}, since, in each iteration of calculating the η_{m,j} in (4.2), a factor Σ_{k=0}^{∞} e^{t(k+1)} builds up, and so the above factors appear after the j_0th iteration. Hence (4.7) can be used. It is then readily seen that the convergence of Σ_{j=1}^{∞} η̃_{m,j} = Σ_{j=1}^{∞} η_{m,j_0+j} follows from Case 1 in Step 3 above.
The proof is complete.

Remark 4.2. From the above proof of Proposition 4.5 or, more precisely, from (4.3), (4.2), (4.4), the paragraph following (4.4), and Remark 4.1, we have

v ≡ lim_{j→∞} e^{tA} w_j = e^{tλ_0} Σ_{i=0}^{m_0−1} {[t(A−λ_0)]^{i}/i!} w_0 + Σ_{l=1}^{∞} e^{t(λ_0+l)} p_l(t),   (4.8)

where w_j is either the w_j in the proof of Corollary 2.2 or the w_j in the above proof of Proposition 4.5, and p_l(t), l = 1, 2, …, are vector polynomials in t, all of degree less than some nonnegative integer n_0. Furthermore, the n_0 can be chosen so that, for the n elements w_0 in the Jordan basis J, the corresponding polynomials p_l(t) all have degrees less than n_0. Here notice that, in each iteration of calculating the indefinite integral in (2.3), a factor Σ_{k=0}^{∞} e^{t(k+1)} B_k builds up, and this contributes to the terms Σ_{l=1}^{∞} e^{t(λ_0+l)} p_l(t) above.

Proposition 4.6. Power series solutions of (1.1) involve logarithmic terms if A does not satisfy (C1) with m_0 ≥ 2 but satisfies (C2).

Proof. We divide the proof into three cases.

Case 1. Assume that λ_2 and λ_1 are the only two eigenvalues of A whose difference l = λ_2 − λ_1 is a positive integer, and that

l = (k_0 + 1).

Because A does not satisfy (C1) with m_0 ≥ 2, i.e. because A satisfies (C1) only with m_0 = 1, the formula (2.4) gives

e^{tA} = Σ_{i=1}^{s} e^{tλ_i} M_{i,0}.   (4.9)

This will be used in the following calculations.


Let w_0 be the eigenvector corresponding to the eigenvalue λ_1, for which e^{tA} w_0 = e^{tλ_1} w_0. The calculation

∫ e^{−tA} ( Σ_{k=0}^{∞} e^{t(k+1)} B_k ) e^{tA} w_0 dt
  = ∫ ( Σ_{i=1, i≠2}^{s} e^{−tλ_i} M_{i,0} + e^{−tλ_2} M_{2,0} ) ( e^{t(k_0+1)} B_{k_0} + Σ_{k=0, k≠k_0}^{∞} e^{t(k+1)} B_k ) e^{tλ_1} w_0 dt
  = t M_{2,0} B_{k_0} w_0 + Σ_{i=1, i≠2}^{s} Σ_{k=0}^{∞} [(k+1)+(λ_1−λ_i)]^{−1} e^{t[(k+1)+(λ_1−λ_i)]} M_{i,0} B_k w_0
      + Σ_{k=0, k≠k_0}^{∞} [(k+1)+(λ_1−λ_2)]^{−1} e^{t[(k+1)+(λ_1−λ_2)]} M_{2,0} B_k w_0
  ≡ χ_1 + χ_0

shows that χ ≡ χ_1 + χ_0 is a polynomial in t of degree 1, in which χ_1 = t M_{2,0} B_{k_0} w_0 is the term with t and χ_0 is the constant term in t. Thus, by letting

ξ_{m,1} ≡ ∫ e^{−tA} ( Σ_{k=0}^{∞} e^{t(k+1)} B_k ) ( Σ_{i=1}^{s} e^{tλ_i} M_{i,0} ) χ_m dt,
ξ_{m,j} ≡ ∫ e^{−tA} ( Σ_{k=0}^{∞} e^{t(k+1)} B_k ) e^{tA} ξ_{m,j−1} dt,   j = 2, 3, …,

as in (4.2) and (4.3), we have that the w_j in (2.3) satisfies

w_j = w_0 + χ + Σ_{m=0}^{1} Σ_{l=1}^{j−1} ξ_{m,l},   j = 2, 3, ….   (4.10)

Therefore, the convergence of the w_j follows as in the proof of Proposition 4.5. Here the formula (2.4) for e^{tA}, which is (4.9) in our case, is used only twice, in calculating χ and ξ_{m,1}, to obtain logarithmic terms under the substitution t = ln(z), and the properties in (2.5) are used only once, in simplifying the calculation of ξ_{m,1}.

Case 2. Assume the setting of Case 1 but now assume that l = λ_2 − λ_1 equals the sum in (C2). Then the χ of Case 1, a polynomial in t of degree 1, will not appear in the first iteration, that of calculating w_1, but will appear in the (Σ_{j=0}^{s_0} q_j)th iteration. At that point, the proof of Case 1 carries over.

Case 3. In addition to the pair (λ_1, λ_2), assume that there are other pairs of eigenvalues which possess the same property as the pair (λ_1, λ_2) does. Then the proof is completed by repeating Cases 1 and 2 for each such pair.

Remark 4.3. From the above proof of Proposition 4.6, we see that the quantity v_j ≡ e^{tA} w_j, where the w_j is from (4.10), is again of the form in (4.8).


Proof of Corollary 2.3. This follows at once from the proofs of Propositions 4.5 and 4.6.

Proof of Corollary 2.4. This is an immediate consequence of the proofs of Corollaries 2.2 and 2.3.

5. The Proof of the Main Theorem

Proof of Theorem 2.1. We divide the proof into two steps.

Step 1 (The existence of n power series solutions). It follows from the theorem of Jordan canonical forms of a square matrix [5, 8–11, 14–16] that A has a Jordan basis

J = { γ_{ik} : i = 1, …, s;  k = 0, 1, …, (m_i−1);  Σ_{i=1}^{s} m_i = n },

and that each element γ_{ik} in J satisfies

(A − λ_i)^{l_{i,k}} γ_{ik} = 0  but  (A − λ_i)^{l_{i,k}−1} γ_{ik} ≠ 0

for some λ_i in the set {λ_i}_{i=1}^{s} of the eigenvalues of A, and for some 1 ≤ l_{i,k} ≤ m_i. Here a Jordan basis J is easily calculated by using the method in [11], for which see Appendix A.
Corollaries 2.2 and 2.3 show that the n functions

u^{i,k}(z) ≡ v^{i,k}(t)|_{t=ln(z)} ≡ [ lim_{j→∞} e^{tA} w_j^{i,k}(t) ]|_{t=ln(z)},

each corresponding to a γ_{ik} in J, are n power series solutions to (1.1).

Step 2 (Linear independence of the n power series solutions). It remains to prove that the set of the n solutions v^{i,k}(t) is linearly independent. This basically follows from [21, pp. 234–235]. To see this, note that, corresponding to each γ_{ik} in the Jordan basis J, the power series solution v^{i,k}(t) takes the form

v^{i,k}(t) = lim_{j→∞} e^{tA} w_j^{i,k}(t)
           = e^{tλ_i} Σ_{ι=0}^{l_{i,k}−1} {[t(A−λ_i)]^{ι}/ι!} γ_{ik} + Σ_{l=1}^{∞} e^{t(λ_i+l)} p_l(t)
           ≡ e^{tλ_i} p^{i,k}(t) + O(e^{t(λ_i+1)}),

by (4.8) and Remark 4.3, where p^{i,k}(t) is a vector polynomial in t of degree (l_{i,k}−1), and p_l(t), l = 1, 2, …, are vector polynomials in t, all of degree less than some nonnegative integer n_0; furthermore, the n_0 is such that, for the n solutions v^{i,k}(t) corresponding to the elements of the Jordan basis J, the polynomials p_l(t) all have degrees less than n_0. Here notice that, in each iteration of calculating the indefinite integral in (2.3), a factor Σ_{k=0}^{∞} e^{t(k+1)} B_k builds up, and this contributes to the terms of O(e^{t(λ_i+1)}) above.
We claim that, for each 1 ≤ i ≤ s, the set {v^{i,k}}_{k=0}^{m_i−1} is linearly independent. To this end, suppose that

Σ_{k=0}^{m_i−1} c_k v^{i,k} = Σ_{k=0}^{m_i−1} c_k [ e^{tλ_i} p^{i,k}(t) + O(e^{t(λ_i+1)}) ] = 0,   (5.1)

and we will show that c_k = 0 for k = 0, 1, …, (m_i−1).

If l_{i,k} = 1 for all k, then divide (5.1) by e^{tλ_i} and let t = −τ(1 + √−1 y), where τ > 0 and y ∈ R. By letting τ → ∞, it follows that Σ_{k=0}^{m_i−1} c_k γ_{ik} = 0, and so c_k = 0 for all k, since {γ_{ik}}_{k=0}^{m_i−1} is linearly independent. Here we also used the fact that there is some polynomial p(τ) of degree n_1 > n_0, such that, for large τ and for all l ≥ 1,

|e^{tl} p_l(t)| ≤ e^{−τ} p(τ√(1+y²)) → 0

as τ → ∞.
If l_{i,k} > 1 for some k, then let l_max = max_{0≤k≤m_i−1} l_{i,k} and divide (5.1) by t^{l_max−1} e^{tλ_i}. As above, letting τ → ∞, it follows that c_k = 0 for those k's with l_{i,k} = l_max. This is because, corresponding to those k's, the vectors

[(A − λ_i)^{l_max−1}/(l_max−1)!] γ_{ik}

are linearly independent. Here we also used the fact that, for large τ and for all integers l ≥ 1,

|1/t^{l}| = ( 1/(τ√(1+y²)) )^{l} → 0

as τ → ∞. Repeating this process results in c_k = 0 for all k.


Now we suppose that

Σ_{i=1}^{s} Σ_{k=0}^{m_i−1} c_{ik} v^{i,k} = Σ_{i=1}^{s} Σ_{k=0}^{m_i−1} c_{ik} [ e^{tλ_i} p^{i,k}(t) + O(e^{t(λ_i+1)}) ] = 0,   (5.2)

and show that c_{ik} = 0 for all i, k.

To this end, choose a b ∈ R such that the s real numbers

ρ_i ≡ Re[ −(1 + √−1 b) λ_i ],   i = 1, …, s,

are all different. Here Re means the real part. This is possible since, for each i, the equation

x = Re[ −(1 + √−1 y) λ_i ]

is a straight line in the x–y plane, and these s lines have at most a finite number of intersection points. Thus, choosing a b ∈ R that is not the second coordinate of an intersection point will do.

Let t = −τ(1 + √−1 b), where τ > 0, and assume that λ_1 has the corresponding real part ρ_1 which is the largest among the s real numbers ρ_i, i = 1, …, s. Divide (5.2) by e^{tλ_1} if l_{1,k} = 1 for all 0 ≤ k ≤ m_1−1; otherwise divide (5.2) by t^{l_max−1} e^{tλ_1}, where l_max = max_{0≤k≤m_1−1} l_{1,k}. It is to be noted that, by letting τ → ∞, we have

|p(t) e^{t(λ_i−λ_1)+tj}| ≤ p(τ√(1+b²)) e^{τ(ρ_i−ρ_1)} → 0

for each 2 ≤ i ≤ s and for all nonnegative integers j, where p(t) is a polynomial in t of degree n_1 > n_0. Hence it will follow, as in the case of (5.1), that c_{1k} = 0 for all k = 0, 1, …, m_1−1. Repeating this process, in which we next assume that λ_2 has the real part ρ_2 which is the largest among the (s−1) real numbers ρ_i, i = 2, 3, …, s, will lead to c_{2k} = 0 for 0 ≤ k ≤ m_2−1. Continuing in this way, we will eventually have c_{ik} = 0 for all i = 3, 4, …, s and k = 0, 1, …, (m_i−1).

The proof is complete.

6. Examples

As illustration, consider the following four examples, which can also help the reader to understand the foregoing proofs in this article. Example 6.1 deals with the case which satisfies neither condition (C1) with m_0 ≥ 2 nor condition (C2) and so, by Corollary 2.2, involves no logarithmic terms. Our calculations in this example are also compared with those by the traditional Frobenius method [4, pp. 108–127] of substitution. The other three examples involve logarithmic terms by Corollary 2.3: Example 6.2 satisfies (C1) with m_0 ≥ 2 but not (C2), Example 6.3 does not satisfy (C1) with m_0 ≥ 2 but satisfies (C2), and finally, Example 6.4, with matrices of order 4 × 4, satisfies both (C1) with m_0 ≥ 2 and (C2). It seems a difficult task to solve Example 6.4 with the traditional methods [21, pp. 224–235], [4, pp. 108–127], [2, pp. 17–33], [9, 10], [4, pp. 132–135], [2, pp. 35–36], and [9, 10, 12].
The following calculations use the method in [11] to compute a Jordan basis, for the convenience of which we refer to Appendix A.

Example 6.1.

z² u″ + 4z u′ − z² u = 0,

which is from [19, p. 482].

Proof. We will solve this example both by our method and by the Frobenius method [4, pp. 108–127] of substitution.

Method 1 (Our method). The substitutions y_k = z^{k−1} u^{(k−1)}, k = 1, 2, lead to the associated weakly singular linear system

(d/dz) y = (1/z)(A + z² B_1) y,

where

y = (y_1, y_2)^T,   A = ( 0  1 ; 0  −3 ),   and   B_1 = ( 0  0 ; 1  0 ).

The eigenvalues of A are 0 and −3, and the difference l = 0 − (−3) = 3 is not a sum of the type in (C2); i.e. there is no positive integer q_1 such that 3 = q_1(k_1+1), where k_1 = 1. Thus, there are no logarithmic terms in the power series solutions, by Corollary 2.2.
The two matrices M_{1,0} and M_{2,0} in Proposition 4.2 that satisfy (2.4), i.e.

e^{tA} = M_{1,0} + e^{−3t} M_{2,0},

are

M_{1,0} = ( 1  1/3 ; 0  0 )   and   M_{2,0} = ( 0  −1/3 ; 0  1 ),   respectively.

It follows from [11] (see Appendix A) that a Jordan basis J is

{γ_{10}, γ_{20}} = { (1/3, 0)^T, (−1/3, 1)^T },

given by the second column of the augmented matrix

( M_{1,0} ; M_{2,0} ) = ( 1  1/3 ; 0  0 ; 0  −1/3 ; 0  1 ).

But we will use the rescaled Jordan basis

J = { 3γ_{10}, (−3)γ_{20} }

for easy comparison with the Frobenius method.

Carrying out the computation of power series solutions by Theorem 2.1, we have that

w_1 = w_0 + ∫ e^{−tA} e^{2t} B_1 e^{tA} w_0 dt
    = w_0 + ∫ e^{−tA} e^{2t} B_1 e^{tλ_0} w_0 dt
    = w_0 + (2−A)^{−1} e^{(2−A)t} B_1 w_0,
1250034-21
2nd Reading
January 30, 2012 15:51 WSPC/S0129-167X 133-IJM 1250034

J.-Y. Han & C.-Y. Lin

v_1 = e^{tA} w_1 = w_0 + (2−A)^{−1} e^{2t} B_1 w_0
    = (1, 0)^T + (z²/10)(1, 2)^T   if z = e^{t},

and

w_2 = w_0 + ∫ e^{−tA} e^{2t} B_1 e^{tA} w_1 dt = w_0 + ∫ e^{−tA} e^{2t} B_1 v_1 dt
    = w_1 + (4−A)^{−1} e^{(4−A)t} B_1 (2−A)^{−1} B_1 w_0,

v_2 = e^{tA} w_2 = v_1 + (4−A)^{−1} e^{4t} B_1 (2−A)^{−1} B_1 w_0
    = v_1|_{z=e^t} + (z⁴/280)(1, 4)^T   if z = e^{t},

where w_0 = 3γ_{10} = (1, 0)^T and λ_0 = 0. The other w_i and v_i are similarly computed. Therefore the resultant power series solution u_1, which is the first component of lim_{j→∞} v_j|_{z=e^t}, becomes

u_1 = 1 + z²/10 + z⁴/280 + ⋯.
On the other hand, corresponding to the eigenvector w_0 = (−3)γ_{20} = (1, −3)^T for the eigenvalue λ_0 = −3, we have

w_1 = w_0 + ∫ e^{−tA} e^{2t} B_1 e^{tA} w_0 dt = w_0 + ∫ e^{−tA} e^{2t} B_1 e^{−3t} w_0 dt
    = w_0 + (−1−A)^{−1} e^{(−1−A)t} B_1 w_0,

v_1 = e^{tA} w_1 = e^{−3t} w_0 + (−1−A)^{−1} e^{−t} B_1 w_0
    = z^{−3}(1, −3)^T + (z^{−1}/2)(−1, 1)^T   if z = e^{t},

and

w_2 = w_0 + ∫ e^{−tA} (e^{2t} B_1) e^{tA} w_1 dt
    = w_1 + (1−A)^{−1} e^{t(1−A)} B_1 (−1−A)^{−1} B_1 w_0,

v_2 = e^{tA} w_2 = v_1 + (1−A)^{−1} e^{t} B_1 (−1−A)^{−1} B_1 w_0
    = v_1|_{z=e^t} − (z/8)(1, 1)^T   if z = e^{t}.

The other w_j and v_j are similarly calculated. Therefore the resultant second power series solution u_2, which is the first component of lim_{j→∞} v_j|_{z=e^t}, becomes

u_2 = z^{−3} − (1/2)z^{−1} − z/8 + ⋯.
From the above computations, we see that, although the two eigenvalues 0 and −3 differ by the integer 3, there are no logarithmic terms in the power series solutions. This is consistent with the prediction by Corollary 2.2. However, if the Frobenius method is used, this result will not be known unless further substitutions and calculations are done. This is presented below.

Method 2 (The Frobenius method). The matrix A has the eigenvalues

{λ_1, λ_2} = {0, −3},

as in Method 1, and so the first trial series solution u_1 takes the form

u_1 = z^{λ_1} Σ_{k=0}^{∞} z^{k} c_k = Σ_{k=0}^{∞} z^{k} c_k.

After being substituted into the singular equation in this example, u_1 satisfies

u_1′ = Σ_{k=0}^{∞} k z^{k−1} c_k,   u_1″ = Σ_{k=0}^{∞} k(k−1) z^{k−2} c_k,

0 = z² u_1″ + 4z u_1′ − z² u_1 = Σ_{k=2}^{∞} k(k−1) z^{k} c_k + Σ_{k=1}^{∞} 4k z^{k} c_k − Σ_{k=0}^{∞} z^{k+2} c_k,

where

Σ_{k=0}^{∞} z^{k+2} c_k = Σ_{k=2}^{∞} z^{k} c_{k−2}.

It follows, by comparing the coefficients of z^{k} on both sides, that

k = 1:   c_1 = 0,
k ≥ 2:   c_k = c_{k−2} / [k(k+3)].


A few of the c_k's are:

c_1 = 0,
c_2 = (1/10) c_0,
c_3 = 0 = c_1 = c_5 = c_7 = ⋯,
c_4 = (1/28) c_2 = (1/280) c_0,
⋮

Thus, by setting c_0 = 1, it follows that

u_1 = 1 + (1/10) z² + (1/280) z⁴ + ⋯,

the same as that obtained by our method.
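The recurrence c_k = c_{k−2}/[k(k+3)] is easy to check mechanically. A minimal Python sketch (exact arithmetic via fractions; the variable names are ours):

from fractions import Fraction

# Coefficients of u_1 = sum_k c_k z^k from c_1 = 0 and c_k = c_{k-2} / (k (k + 3)), k >= 2.
c = [Fraction(1), Fraction(0)]          # c_0 = 1, c_1 = 0
for k in range(2, 9):
    c.append(c[k - 2] / (k * (k + 3)))
print(c[:6])   # [1, 0, 1/10, 0, 1/280, 0], matching u_1 = 1 + z^2/10 + z^4/280 + ...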
Similarly, the second trial series solution u_2 takes the form

u_2 = c u_1 ln(z) + z^{λ_2} Σ_{k=0}^{∞} z^{k} ĉ_k,   λ_2 = −3,

where the constants c and ĉ_k are to be determined.


After substitution into the singular equation in this example, u2 satises

1 
u2 = cu1 ln(z) + cu1 + (k 3)z k4 ck ,
z
k=0

2cu1 cu1 
u2 = cu1 ln(z) + 2 + (k 3)(k 4)z k5 ck ,
z z
k=0

0= z 2 u2 + 4zu2 2
z u2




= (2czu1 + 3cu1 ) + (k 3)(k 4)z k3
ck + 4(k 3)z k3
ck z k1 ck
k=0 k=0 k=0




= (2czu1 + 3cu1 ) + k(k 1)z k ck+3 + 4kz k ck+3 z k ck+1 ,
k=3 k=3 k=1

or satises
 
7c 11c 4
(2czu1 + 3cu1 ) = 3c + z 2 + z +
10 280


= 2c1 z 2 + [k(k + 3)ck+3 ck+1 ]z k .
k=1

Here the fact was used that u1 is a solution.


By comparing the coefficients of z^{k}, it follows that

k = −2:   −2ĉ_1 = 0,
k = 0:    3c = ĉ_1 = 0,
k ≥ −1, k ≠ 0:   k(k+3) ĉ_{k+3} = ĉ_{k+1}.

The number c and a few of the ĉ_k's are:

c = 0,
ĉ_1 = 0,
ĉ_2 = −(1/2) ĉ_0,
ĉ_4 = (1/4) ĉ_2 = −(1/8) ĉ_0,
ĉ_5 = (1/10) ĉ_3,
⋮

and so

u_2 = ĉ_0 ( z^{−3} − (1/2) z^{−1} − (1/8) z + ⋯ ) + ĉ_3 ( 1 + (1/10) z² + (1/280) z⁴ + ⋯ )
    ≡ ĉ_0 ψ_2 + ĉ_3 ψ_1.

Because ψ_1 = u_1, we can set ĉ_0 = 1 and ĉ_3 = 0 to obtain

u_2 = ψ_2 = z^{−3} − (1/2) z^{−1} − (1/8) z + ⋯,

which is the same as that obtained by our method.
From the above calculations, we see that u_1 has to be computed first in order to obtain u_2, in which c = 0 eventually. This is not the case with our method.

Example 6.2.

(d/dz) u = (1/z)(A + z² B_1) u,

where

A = ( 0  1 ; 0  0 )   and   B_1 = ( 0  0 ; −1  0 ).

This weakly singular system is associated with the Bessel equation of order zero.


Proof. The two eigenvalues of A are 0 and 0, and the two matrices M_{1,0} and M_{1,1} in Proposition 4.2 that satisfy (2.4), i.e.

e^{tA} = M_{1,0} + t M_{1,1},

are

M_{1,0} = ( 1  0 ; 0  1 )   and   M_{1,1} = ( 0  1 ; 0  0 ),   respectively.

It follows from [11] (see Appendix A) that a Jordan basis J is

{γ_{10}, γ_{11}} = { (0, 1)^T, (1, 0)^T },

given by the second column of the augmented matrix

( M_{1,0} ; M_{1,1} ) = ( 1  0 ; 0  1 ; 0  1 ; 0  0 ).

Since A² γ_{10} = A γ_{11} = 0, condition (C1) with m_0 = 2 ≥ 2 is satisfied, and so logarithmic terms are involved in the power series solutions, by Corollary 2.3.
Carrying out the computation of power series solutions by Theorem 2.1, we have

w_1 = w_0 + ∫ e^{−tA} e^{2t} B_1 e^{tA} w_0 dt = w_0 + ∫ e^{−tA} e^{2t} B_1 w_0 dt
    = w_0 + (2−A)^{−1} e^{(2−A)t} B_1 w_0,

and

v_1 = e^{tA} w_1 = w_0 + e^{2t} (2−A)^{−1} B_1 w_0,

where w_0 = γ_{11} = (1, 0)^T is an eigenvector for the eigenvalue λ_0 = 0. The other w_j and v_j are similarly calculated, and obviously there are no logarithmic terms.
However, corresponding to the generalized eigenvector w_0 = γ_{10} = (0, 1)^T, satisfying A² w_0 = 0, for the generalized eigenvalue λ_0 = 0, we have

w_1 = w_0 + ∫ e^{−tA} e^{2t} B_1 e^{tA} w_0 dt = w_0 + ∫ e^{−tA} e^{2t} B_1 (w_0 + tA w_0) dt


    = w_0 + (2−A)^{−1} e^{(2−A)t} B_1 w_0 + (2−A)^{−1} t e^{(2−A)t} B_1 A w_0 − (2−A)^{−2} e^{(2−A)t} B_1 A w_0,

and

v_1 = e^{tA} w_1 = w_0 + tA w_0 + (2−A)^{−1} e^{2t} B_1 w_0 + (2−A)^{−1} t e^{2t} B_1 A w_0 − (2−A)^{−2} e^{2t} B_1 A w_0.

The other w_j and v_j are similarly calculated.
Thus, a logarithmic term t = ln(z) is involved, and this is consistent with the prediction by Corollary 2.3.
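The single iteration above can also be reproduced symbolically. The following sketch (assuming sympy, with the data of this example; names are ours) carries out w_1 = w_0 + ∫ e^{−tA} e^{2t} B_1 e^{tA} w_0 dt and v_1 = e^{tA} w_1 for the generalized eigenvector; the bare t appearing in the output is exactly the ln(z) term predicted by Corollary 2.3.

import sympy as sp

t = sp.symbols('t')
A  = sp.Matrix([[0, 1], [0, 0]])
B1 = sp.Matrix([[0, 0], [-1, 0]])
w0 = sp.Matrix([0, 1])                      # generalized eigenvector, A**2 * w0 = 0

expA = (t * A).exp()                         # = I + t A, since A is nilpotent
integrand = (-t * A).exp() * sp.exp(2 * t) * B1 * expA * w0
w1 = w0 + integrand.applyfunc(lambda f: sp.integrate(f, t))   # integration constants taken as 0
v1 = sp.expand(expA * w1)
print(v1)    # the component containing a bare t carries the ln(z) term once t = ln(z)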

Example 6.3.

(d/dz) u = (1/z)(A + z² B_1) u,

where

A = ( 0  1 ; 0  −2 )   and   B_1 = ( 0  0 ; 1  0 ).

Proof. The eigenvalues of A are 0 and −2, and the difference l = 0 − (−2) = 2 equals (k_1+1), a sum of the type in (C2), where k_1 = 1. Thus, there are logarithmic terms in the power series solutions, by Corollary 2.3.
The two matrices M_{1,0} and M_{2,0} in Proposition 4.2 that satisfy (2.4), i.e.

e^{tA} = M_{1,0} + e^{−2t} M_{2,0},   (6.1)

are

M_{1,0} = ( 1  1/2 ; 0  0 )   and   M_{2,0} = ( 0  −1/2 ; 0  1 ),   respectively.

It follows from [11] (see Appendix A) that a Jordan basis J is

{γ_{10}, γ_{20}} = { (1/2, 0)^T, (−1/2, 1)^T },

given by the second column of the augmented matrix

( M_{1,0} ; M_{2,0} ) = ( 1  1/2 ; 0  0 ; 0  −1/2 ; 0  1 ).


Carrying out the computation of power series solutions by Theorem 2.1, we have

w_1 = w_0 + ∫ e^{−tA} e^{2t} B_1 e^{tA} w_0 dt = w_0 + ∫ e^{−tA} e^{2t} B_1 w_0 dt
    = w_0 + (2−A)^{−1} e^{(2−A)t} B_1 w_0,

and

v_1 = e^{tA} w_1 = w_0 + (2−A)^{−1} e^{2t} B_1 w_0,

where w_0 = γ_{10} = (1/2, 0)^T is an eigenvector for the eigenvalue λ_0 = 0. The other w_j and v_j are similarly calculated, and obviously there are no logarithmic terms.
However, corresponding to the eigenvector w_0 = γ_{20} = (−1/2, 1)^T for the eigenvalue λ_0 = −2, we need to use (6.1) and the additional properties in (2.5) (see Proposition 4.4):

M_{1,0}² = M_{1,0},   M_{2,0}² = M_{2,0},   M_{1,0} M_{2,0} = 0 = M_{2,0} M_{1,0}.   (6.2)

It follows that

w_1 = w_0 + ∫ e^{−tA} e^{2t} B_1 e^{tA} w_0 dt = w_0 + ∫ e^{−tA} e^{2t} B_1 e^{−2t} w_0 dt
    = w_0 + ∫ (M_{1,0} + e^{2t} M_{2,0}) e^{2t} B_1 e^{−2t} w_0 dt
    = w_0 + t M_{1,0} B_1 w_0 + (e^{2t}/2) M_{2,0} B_1 w_0,

v_1 = e^{tA} w_1 = (M_{1,0} + e^{−2t} M_{2,0}) w_1
    = e^{−2t} w_0 + t M_{1,0} B_1 w_0 + (1/2) M_{2,0} B_1 w_0,

and that

w_2 = w_0 + ∫ e^{−tA} e^{2t} B_1 e^{tA} w_1 dt = w_0 + ∫ e^{−tA} e^{2t} B_1 v_1 dt
    = w_1 + ∫ e^{−tA} e^{2t} B_1 ( t M_{1,0} B_1 w_0 + (1/2) M_{2,0} B_1 w_0 ) dt
    = w_1 + (1/2)(2−A)^{−1} e^{(2−A)t} B_1 M_{2,0} B_1 w_0
        + t (2−A)^{−1} e^{(2−A)t} B_1 M_{1,0} B_1 w_0 − (2−A)^{−2} e^{(2−A)t} B_1 M_{1,0} B_1 w_0,

v_2 = e^{tA} w_2
    = v_1 + (1/2)(2−A)^{−1} e^{2t} B_1 M_{2,0} B_1 w_0
        + t (2−A)^{−1} e^{2t} B_1 M_{1,0} B_1 w_0 − (2−A)^{−2} e^{2t} B_1 M_{1,0} B_1 w_0;

the other w_j and v_j are similarly calculated. Here (6.1) is used only twice, in the calculations of w_1 and w_2, to obtain logarithmic terms under the substitution t = ln(z), and (6.2) is used only once, in simplifying the calculation of w_2.
Thus, a logarithmic term t = ln(z) is involved, and this is consistent with the prediction by Corollary 2.3. However, if the Frobenius method is used, this result will not be known unless further substitutions and calculations are done.
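A quick numerical check of (6.1) and (6.2) for this example (a sketch assuming numpy and scipy; names are ours):

import numpy as np
from scipy.linalg import expm

A   = np.array([[0.0, 1.0], [0.0, -2.0]])
M10 = np.array([[1.0, 0.5], [0.0, 0.0]])
M20 = np.array([[0.0, -0.5], [0.0, 1.0]])

# (6.1): exp(tA) = M_{1,0} + exp(-2t) M_{2,0}, checked at a sample t
t = 0.7
print(np.allclose(expm(t * A), M10 + np.exp(-2 * t) * M20))      # True

# (6.2): the projection identities
print(np.allclose(M10 @ M10, M10), np.allclose(M20 @ M20, M20))  # True True
print(np.allclose(M10 @ M20, 0), np.allclose(M20 @ M10, 0))      # True True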

Example 6.4.

(d/dz) u = (1/z)(A + z³ B_2) u,

where

3 1 5 1 2 4 2 2

1 1 1 0 2 0 1 3
A=
0
, and B2 = .
0 2 1

2 2 3
3

0 0 1 0 2 6 3 7

Here A comes from [16, p. 270], and B2 from [8, p. 438].

Proof. The characteristic equation for A is

0 = |A − λ| = (λ − 2)² (λ + 1)²,

and the four eigenvalues are 2, 2, −1, and −1. Thus A satisfies both (C1) with m_0 = 2 ≥ 2 and (C2). This is because

(A + 1)² γ_{20} = 0  but  (A + 1) γ_{20} ≠ 0

(see below) and because the difference l = 2 − (−1) = 3 equals (k_2+1), a sum of the type in (C2), where k_2 = 2. Therefore, logarithmic terms are involved in the power series solutions, by Corollary 2.3.
The four matrices M_{1,0}, M_{1,1}, M_{2,0}, and M_{2,1} in Proposition 4.2 that satisfy (2.4), i.e.

e^{tA} = e^{2t} M_{1,0} + t e^{2t} M_{1,1} + e^{−t} M_{2,0} + t e^{−t} M_{2,1},   (6.3)


are

27 0 25 14 9 9 7 8

1
0 27 4 10 9 9 7
, M1,1 = 1
8
,
M1,0 =
27 0 0 0 0 90 0 0 0
0 0 0 0 0 0 0 0

0 0 25 14 0 0 13 13

1
0 0 4 10 0 0
, and M2,1 = 1
2 2
,
M2,0 =
27 0 0 27 0 9 0 0 9 9

0 0 0 27 0 0 9 9

respectively.
It follows from [11] (see Appendix A) that a Jordan basis

J = {γ_{10}, γ_{11}, γ_{20}, γ_{21}}

is given by


14 8 14 13


1
10 1 8 1 10 1 2

, , , ,

27
0 9 0 27 0 9 9




0 0 27 9
which is the fourth column of the augmented matrix ( M_{1,0} ; M_{1,1} ; M_{2,0} ; M_{2,1} ).
We shall only compute the power series solution for w_0 = γ_{20}, for which (A+1)² γ_{20} = (A+1) γ_{21} = 0 and

e^{tA} w_0 = e^{−t} [w_0 + t(A+1) w_0] = e^{−t} (w_0 + t γ_{21}).

The cases for w_0 equal to the other elements of J are similar.
Carrying out the computation by Theorem 2.1, we need to use (6.3) and the additional properties in (2.5) (see Proposition 4.4):

M_{1,0}² = M_{1,0},   M_{2,0}² = M_{2,0},   M_{1,1}² = 0,   M_{2,1}² = 0,
M_{1,0} M_{1,1} = M_{1,1} M_{1,0} = M_{1,1},   M_{2,0} M_{2,1} = M_{2,1} M_{2,0} = M_{2,1},   (6.4)
M_{1,k} M_{2,i} = M_{2,i} M_{1,k} = 0   for i, k = 0, 1.

It follows that

w_1 = w_0 + ∫ e^{−tA} e^{3t} B_2 e^{tA} w_0 dt
    = w_0 + ∫ [ e^{−2t} (M_{1,0} − t M_{1,1}) + e^{t} (M_{2,0} − t M_{2,1}) ] e^{3t} B_2 e^{−t} [w_0 + t(A+1) w_0] dt



    = w_0 + ∫ [ (M_{1,0} − t M_{1,1})(B_2 w_0 + t B_2 (A+1) w_0)
        + e^{3t} (M_{2,0} − t M_{2,1})(B_2 w_0 + t B_2 (A+1) w_0) ] dt
    = w_0 + t M_{1,0} B_2 w_0 + (t²/2)[M_{1,0} B_2 (A+1) w_0 − M_{1,1} B_2 w_0] − (t³/3) M_{1,1} B_2 (A+1) w_0
        + (1/3) e^{3t} M_{2,0} B_2 w_0 + ( (1/3) t e^{3t} − (1/9) e^{3t} ) [M_{2,0} B_2 (A+1) w_0 − M_{2,1} B_2 w_0]
        − ( (1/3) t² e^{3t} − (2/9) t e^{3t} + (2/27) e^{3t} ) M_{2,1} B_2 (A+1) w_0
    ≡ w_0 + {⋯},

that

v_1 = e^{tA} w_1 = e^{tA} w_0 + e^{tA} {⋯}
    = e^{tA} w_0 + [ e^{2t} (M_{1,0} + t M_{1,1}) + e^{−t} (M_{2,0} + t M_{2,1}) ] {⋯}
    = e^{−t} [w_0 + t(A+1) w_0] + (−1/3 + 1/2) t³ e^{2t} M_{1,1} B_2 (A+1) w_0
        + terms of order t² and lower,

that

w_2 = w_0 + ∫ e^{−tA} e^{3t} B_2 e^{tA} w_1 dt = w_0 + ∫ e^{−tA} e^{3t} B_2 v_1 dt
    = w_0 + (−1/3 + 1/2) t³ e^{t(5−A)} (5−A)^{−1} B_2 M_{1,1} B_2 (A+1) w_0
        + terms of order t² and lower,

and that

v_2 = e^{tA} w_2
    = e^{−t} [w_0 + t(A+1) w_0] + (−1/3 + 1/2) t³ e^{5t} (5−A)^{−1} B_2 M_{1,1} B_2 (A+1) w_0
        + terms of order t² and lower.

The other w_j and v_j are similarly calculated. Here (6.3) is used only twice, in calculating w_1 and w_2, to obtain logarithmic terms under the substitution t = ln(z), and (6.4) is used only once, in simplifying the calculation of w_2.
Thus, a logarithmic term of highest order t³ = [ln(z)]³ is involved, and this is consistent with the prediction by Corollary 2.3. However, it seems a difficult task to


obtain such a power series solution with the traditional methods [21, pp. 224–235], [4, pp. 108–127], [2, pp. 17–33], [9, 10], [4, pp. 132–135], [2, pp. 35–36], and [9, 10, 12].

Appendix A. How to Compute a Jordan Basis

Here we present the method from [11] for computing a Jordan basis. Referring to Proposition 4.2, we define:

Definition A.1. Suppose that p is a nonzero column vector in the matrix M_{i,0}. If (A − λ_i I)^{l} p = 0 but (A − λ_i I)^{l−1} p ≠ 0 for some 1 ≤ l ≤ m_i, then the set

S_p(l) = {p_l, p_{l−1}, …, p_1} ≡ { p, (A − λ_i I) p, …, (A − λ_i)^{l−2} p, (A − λ_i)^{l−1} p }

is called the cycle of generalized eigenvectors for A, corresponding to the generalized eigenvalue λ_i, that has the generator p, the initial vector p_1 = (A − λ_i)^{l−1} p, and the length l.

Definition A.2. The cycle

S_p(l) = { p, (A − λ_i) p, …, (A − λ_i)^{l−2} p, (A − λ_i)^{l−1} p }

is said to have the subcycles:

{ (A − λ_i)^{l−1} p, 0, …, 0 },
{ (A − λ_i)^{l−2} p, (A − λ_i)^{l−1} p, 0, …, 0 },
{ (A − λ_i)^{l−3} p, (A − λ_i)^{l−2} p, (A − λ_i)^{l−1} p, 0, …, 0 },
⋮
{ (A − λ_i) p, (A − λ_i)² p, (A − λ_i)³ p, …, (A − λ_i)^{l−1} p, 0 }.

Definition A.3. For each 1 ≤ i ≤ s, the matrices M_{i,k}, k = 0, 1, …, (m_i−1), are said to have the augmented matrix

F_i = ( M_{i,0} ; 1! M_{i,1} ; 2! M_{i,2} ; … ; (m_i−1)! M_{i,m_i−1} ).

Proposition A.1. Each nonzero column in F_i is a cycle of generalized eigenvectors for A, corresponding to the generalized eigenvalue λ_i, that has its corresponding column in the component matrix M_{i,0} of F_i as the generator and has some number 1 ≤ l ≤ m_i as the length.


Proof. For a proof, see [11].

Definition A.4. The subcycles of each cycle column in F_i can be regarded as additional columns in F_i, relative to the regular columns in F_i.

Here is the main result of [11].

Theorem A.2. For each 1 ≤ i ≤ s, perform column operations on the augmented matrix F_i in such a way that any one of the initial vectors of the regular column cycles in F_i that is linearly dependent on the others is reduced to zero. Continue the column operations until all the initial vectors of the regular column cycles in F_i are linearly independent. Here multiplying the additional columns in F_i by constants and adding them to regular columns in F_i is allowed; however, multiplying the regular columns by constants and adding them to additional columns is not allowed. Then, discarding the additional columns (those coming from subcycles) in F_i, the remaining column vectors in the component matrices k! M_{i,k}, k = 0, 1, …, (m_i−1), i = 1, …, s, of the augmented matrices F_i, i = 1, …, s, form a Jordan canonical basis for A.

Proof. For a proof, see [11].

Remark A.1. At each stage of the column operations, if we are able to pick regular columns in F_i, i = 1, …, s, that constitute exactly n linearly independent vectors in C^n, then we have obtained a Jordan canonical basis for A.
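For small examples, any basis produced by this column-operation procedure can be cross-checked against a computer algebra system. The sketch below (assuming sympy; names are ours) does not implement Theorem A.2; it simply computes a Jordan basis directly and verifies the chain relations, here for the matrix of Example 6.2.

import sympy as sp

A = sp.Matrix([[0, 1], [0, 0]])       # matrix of Example 6.2
P, J = A.jordan_form()                # columns of P form a Jordan basis, A = P J P^{-1}
print(J)                              # Matrix([[0, 1], [0, 0]])
lam = J[0, 0]
# within a Jordan block, (A - lambda) maps the second basis vector onto the first
print((A - lam * sp.eye(2)) * P.col(1) == P.col(0))           # True
print(sp.simplify(P * J * P.inv() - A) == sp.zeros(2, 2))     # True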

References

[1] T. M. Apostol, Mathematical Analysis (Addison-Wesley, 1974).
[2] W. Balser, Formal Power Series and Linear Systems of Meromorphic Ordinary Differential Equations (Springer-Verlag, New York, 2000).
[3] E. A. Coddington, Introduction to Ordinary Differential Equations (Prentice-Hall, Englewood Cliffs, New Jersey, 1961).
[4] E. A. Coddington and N. Levinson, Theory of Ordinary Differential Equations (McGraw-Hill, New York, 1955).
[5] C. G. Cullen, Matrices and Linear Transformations, 2nd edn. (Dover Publications, New York, 1972).
[6] C. G. Cullen, Linear Algebra and Differential Equations, 2nd edn. (PWS-KENT Publishing Company, Boston, Massachusetts, 1991).
[7] P. Davies and N. J. Higham, A Schur–Parlett algorithm for computing matrix functions, SIAM J. Matrix Anal. Appl. 25(2) (2003) 464–485.
[8] S. H. Friedberg, A. J. Insel and L. E. Spence, Linear Algebra, 2nd edn. (Prentice-Hall, New Jersey, 1989).
[9] F. R. Gantmacher, The Theory of Matrices, Vol. I (Chelsea, New York, 1959).
[10] F. R. Gantmacher, The Theory of Matrices, Vol. II (Chelsea, New York, 1959).
[11] J.-Y. Han and C.-Y. Lin, A new proof of Jordan canonical forms of a square matrix, Linear Multilinear Algebra 57(4) (2009) 369–386.
[12] Ph. Hartman, Ordinary Differential Equations (John Wiley and Sons, New York, 1964).
[13] N. J. Higham, The scaling and squaring method for the matrix exponential revisited, SIAM J. Matrix Anal. Appl. 26 (2005) 1179–1193.
[14] K. Hoffman and R. Kunze, Linear Algebra, 2nd edn. (Prentice-Hall, Englewood Cliffs, NJ, 1971).
[15] R. A. Horn and C. R. Johnson, Matrix Analysis (Cambridge University Press, New York, 1988).
[16] P. Lancaster and M. Tismenetsky, The Theory of Matrices with Applications, 2nd edn. (Academic Press, New York, 1985).
[17] C.-Y. Lin, Theory and Examples of Ordinary Differential Equations (World Scientific, Singapore, 2011).
[18] C. Moler and C. Van Loan, Nineteen dubious ways to compute the exponential of a matrix, twenty-five years later, SIAM Rev. 45 (2003) 3–49.
[19] R. K. Nagle and E. B. Saff, Fundamentals of Differential Equations and Boundary Value Problems, 2nd edn. (Addison-Wesley, New York, 1996).
[20] A. E. Taylor and D. Lay, Introduction to Functional Analysis (Wiley, 1980).
[21] W. Walter, Ordinary Differential Equations (Springer-Verlag, New York, 1998).
