
Linear Algebra and its Applications 297 (1999) 63–80

www.elsevier.com/locate/laa

Eigenvalues of tridiagonal pseudo-Toeplitz matrices
Devadatta Kulkarni, Darrell Schmidt, Sze-Kai Tsui *

Department of Mathematical Sciences, Oakland University, Rochester, MI 48309-4485, USA


Received 11 March 1997; accepted 11 May 1999
Submitted by C. Davis

Abstract
In this article we determine the eigenvalues of sequences of tridiagonal matrices that contain a Toeplitz matrix in the upper left block. © 1999 Published by Elsevier Science Inc. All rights reserved.

AMS classification: 15A18; 15A57; 33A65; 26C10

Keywords: Eigenvalues; Toeplitz matrices; Tridiagonal matrices; Chebyshev polynomials; Graphical method for location of roots

1. Introduction

Although Hermitian matrices are known to have real eigenvalues only, the
evaluation of these eigenvalues remains as misty as ever. For tridiagonal ma-
trices there are several known methods describing their eigenvalues such as
Gershgorin's theorem [5], Sturm sequences for Hermitian tridiagonal matrices
[1,4], etc. The eigenvalues of a tridiagonal Toeplitz matrix can be completely
determined [11]. Attempts have been made to resolve the eigenvalue problem for
matrices which are like tridiagonal Toeplitz matrices but not entirely Toeplitz
(see [2,3,12,13]). This paper falls in the same general direction of investigation.

*
Corresponding author. Partially supported by an Oakland University Research Fellowship 1995.
E-mail addresses: tsui@oakland.edu (S.-K. Tsui), kulkarni@oakland.edu (D. Kulkarni),
schmidt@oakland.edu (D. Schmidt)

0024-3795/99/$ - see front matter © 1999 Published by Elsevier Science Inc. All rights reserved.
PII: S0024-3795(99)00114-7

We study tridiagonal matrices which contain a Toeplitz matrix in the upper left block. We call them pseudo-Toeplitz to make a distinction from all the matrices studied before in [2,3,12], etc. The major feature of our treatment is the connection between the characteristic polynomials of these tridiagonal pseudo-Toeplitz matrices and the Chebyshev polynomials of the second kind, whereby we can locate the eigenvalues that fall in the intervals determined by the roots of some Chebyshev polynomials of the second kind. In other words, we use these intervals derived from roots of some Chebyshev polynomial as a reference to determine the eigenvalues of the original pseudo-Toeplitz matrix. In fact, we are able to determine the location of all eigenvalues of some tridiagonal pseudo-Toeplitz matrices which have either all entries real with a nonnegative product from each off-diagonal pair, or the entries on the main diagonal purely imaginary with a negative product from each off-diagonal pair (see Corollary 3.4). In Section 2, we lay down a basic tool for finding the eigenvalues of tridiagonal Toeplitz matrices, which is markedly different from the traditional approach used in [2,11,12]. In Section 3, we give a detailed account of the number of eigenvalues in each such interval whose end points are consecutive roots of a pair of Chebyshev polynomials related to the given tridiagonal pseudo-Toeplitz matrix. We also show that, for a sequence of (real) tridiagonal matrices with a positive product from each pair of off-diagonal entries, the eigenvalues of two consecutive matrices in the sequence interlace (see Proposition 3.1). Furthermore, we discuss a lower bound for the number of real eigenvalues for tridiagonal pseudo-Toeplitz matrices of a fixed dimension (see Theorem 3.6). In Section 4, we demonstrate examples of tridiagonal pseudo-Toeplitz matrices for which we can completely determine their real eigenvalues graphically.

These techniques have also been applied in infinite dimensional programming [10] and in numerical solutions of heat equations [3]. Standard references for the Chebyshev polynomials are [6,8,9].

2. Eigenvalues of tridiagonal Toeplitz matrices

It is known that the eigenvalues of tridiagonal Toeplitz matrices can be determined analytically. The method employs the boundary value difference equation [11]. In this section, we provide a different approach to the solution which will be extended to determine eigenvalues of several more general matrices in the later sections.

Let $T_n(a,b,c)$ be an $n \times n$ tridiagonal matrix defined by
$$T_n(a,b,c) = \begin{pmatrix} a & c & & 0 \\ b & \ddots & \ddots & \\ & \ddots & \ddots & c \\ 0 & & b & a \end{pmatrix}.$$

When $b = c = 1$ we denote $T_n(a,b,c)$ by $T_n(a)$. $T_n(a,b,c)$ is the same matrix denoted by $T_n^0(a,b,c)$ in the later sections. We denote the characteristic polynomial of $T_n(a)$ by $\phi_n(a)(\lambda)$, and it is related to the $n$th degree Chebyshev polynomial of the second kind. Indeed, expanding $\det(T_n(a) - \lambda I)$ by the last row, we have
$$\phi_n(a)(\lambda) = (a - \lambda)\,\phi_{n-1}(a)(\lambda) - \phi_{n-2}(a)(\lambda) \tag{1}$$
for $n \geq 2$, with $\phi_0(a)(\lambda) = 1$, $\phi_1(a)(\lambda) = a - \lambda$. Substituting $a - \lambda = 2x$, (1) becomes
$$\phi_n(a)(x) = 2x\,\phi_{n-1}(a)(x) - \phi_{n-2}(a)(x) \tag{2}$$
for $n \geq 2$, with $\phi_0(a)(x) = 1$, $\phi_1(a)(x) = 2x$. Thus, $\phi_n(a)(x)$ is the $n$th degree Chebyshev polynomial of the second kind, denoted by $U_n$. It is well known that
$$U_n(x) = \frac{\sin\bigl((n+1)\cos^{-1}x\bigr)}{\sin(\cos^{-1}x)} \quad \text{for } |x| \leq 1,$$
and the roots of $U_n(x)$ are $\cos(k\pi/(n+1))$, $k = 1, 2, \ldots, n$. Thus, we have the following proposition.

Proposition 2.1. The eigenvalues of $T_n(a)$ are
$$a + 2\cos(k\pi/(n+1)) \quad \text{for } k = 1, 2, \ldots, n.$$
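Both the recurrence (2) and the root formula just used are easy to check directly. The following sketch assumes NumPy is available; the helper name U and the sample degree are ours, not the paper's.

```python
# Evaluate U_n via the three-term recurrence (2) and confirm that
# cos(k*pi/(n+1)), k = 1,...,n, are its roots.
import numpy as np

def U(n, x):
    p_prev, p = np.ones_like(x), 2 * x       # U_0, U_1
    if n == 0:
        return p_prev
    for _ in range(n - 1):
        p_prev, p = p, 2 * x * p - p_prev    # U_{m+1} = 2x U_m - U_{m-1}
    return p

n = 7
roots = np.cos(np.arange(1, n + 1) * np.pi / (n + 1))
print(np.allclose(U(n, roots), 0.0))         # True
```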

Next, we relate $T_n(a,b,c)$ to $T_n(a)$. Note that
$$\frac{1}{\sqrt{bc}}\,T_n(a,b,c) = T_n\!\left(\frac{a}{\sqrt{bc}}, \frac{b}{\sqrt{bc}}, \frac{c}{\sqrt{bc}}\right),$$
and the eigenvalues of $\alpha T_n(a,b,c)$ are just $\alpha$ times the eigenvalues of $T_n(a,b,c)$. Thus, it suffices to consider the characteristic polynomial $\phi_n(a/\sqrt{bc}, b/\sqrt{bc}, c/\sqrt{bc})$ of $T_n(a/\sqrt{bc}, b/\sqrt{bc}, c/\sqrt{bc})$. As above, we can see that $\phi_n(a/\sqrt{bc}, b/\sqrt{bc}, c/\sqrt{bc})$ satisfies the same recurrence relation and initial conditions as $\phi_n(a/\sqrt{bc})$:
$$\phi_n(\lambda) = \left(\frac{a}{\sqrt{bc}} - \lambda\right)\phi_{n-1}(\lambda) - \phi_{n-2}(\lambda), \quad n \geq 2,$$
where $\phi_0(\lambda) = 1$ and $\phi_1(\lambda) = a/\sqrt{bc} - \lambda$. Thus,
$$\phi_n\!\left(\frac{a}{\sqrt{bc}}, \frac{b}{\sqrt{bc}}, \frac{c}{\sqrt{bc}}\right) = \phi_n\!\left(\frac{a}{\sqrt{bc}}\right) = U_n.$$
From Proposition 2.1 we know that the eigenvalues of $T_n(a/\sqrt{bc})$ are $a/\sqrt{bc} + 2\cos(k\pi/(n+1))$, $k = 1, 2, \ldots, n$. Hence, we have the following result.

Theorem 2.2. The eigenvalues of $T_n(a,b,c)$ are
$$a + 2\sqrt{bc}\,\cos(k\pi/(n+1)) \quad \text{for } k = 1, 2, \ldots, n.$$
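Theorem 2.2 is simple to confirm numerically. The following sketch (assuming NumPy, with arbitrarily chosen sample values) compares the closed form with a direct eigenvalue computation.

```python
# Compare the closed form of Theorem 2.2 with numerical eigenvalues.
import numpy as np

n, a, b, c = 8, 2.0, 3.0, 0.5          # sample values with bc > 0
T = np.diag([a] * n) + np.diag([b] * (n - 1), -1) + np.diag([c] * (n - 1), 1)

formula = np.sort(a + 2 * np.sqrt(b * c) * np.cos(np.arange(1, n + 1) * np.pi / (n + 1)))
numeric = np.sort(np.linalg.eigvals(T).real)
print(np.allclose(formula, numeric))    # True
```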

3. Main results

In this section we study the eigenvalues of those tridiagonal matrices the upper left block of which are Toeplitz matrices. That is, we consider the $(n+k) \times (n+k)$ tridiagonal matrix $T_n^k(a,b,c)$ whose upper left $n \times n$ block is $T_n(a,b,c)$, whose lower right $k \times k$ block is $B_k$, and whose remaining nonzero entries are the coupling entries $b_k$ in position $(n+1, n)$ and $c_k$ in position $(n, n+1)$, where
$$T_n(a,b,c) = \begin{pmatrix} a & c & & 0 \\ b & \ddots & \ddots & \\ & \ddots & \ddots & c \\ 0 & & b & a \end{pmatrix}, \qquad B_k = \begin{pmatrix} a_k & c_{k-1} & & 0 \\ b_{k-1} & \ddots & \ddots & \\ & \ddots & \ddots & c_1 \\ 0 & & b_1 & a_1 \end{pmatrix}.$$

In the previous section we have determined the eigenvalues of $T_n(a,b,c)$ completely. Now we consider $T_n^k(a/\sqrt{bc}, b/\sqrt{bc}, c/\sqrt{bc})$. We denote the characteristic polynomials of $T_n^k(a/\sqrt{bc}, b/\sqrt{bc}, c/\sqrt{bc})$ and $(1/\sqrt{bc})B_k$ by $\phi_n^k(\lambda)$ and $\psi_k(\lambda)$, $k \geq 1$, respectively, where $\phi_0^0(\lambda) = 1$, $\psi_0(\lambda) = 1$. Expanding $\det\bigl(T_n^k(a/\sqrt{bc}, b/\sqrt{bc}, c/\sqrt{bc}) - \lambda I\bigr)$ by the last $k$ rows using the Laplace development, we have for $n \geq 1$ and $k \geq 1$ that
$$\phi_n^k(\lambda) = \phi_n^0(\lambda)\,\psi_k(\lambda) - \frac{b_k c_k}{bc}\,\phi_{n-1}^0(\lambda)\,\psi_{k-1}(\lambda). \tag{3}$$
It follows from Eq. (2) in Section 2 that $\phi_n^0(x)$ is the $n$th degree Chebyshev polynomial $U_n(x)$ of the second kind with
$$\frac{a}{\sqrt{bc}} - \lambda = 2x. \tag{4}$$
If $\lambda$ is a root of (3) but not a common root of $U_{n-1}$ and $\psi_k$, then
$$\frac{U_n(\lambda)}{U_{n-1}(\lambda)} = \frac{(b_k c_k/bc)\,\psi_{k-1}(\lambda)}{\psi_k(\lambda)}.$$
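The factorization (3) can be sanity-checked numerically. Below is a sketch (assuming NumPy; variable names and the random sample are ours) that evaluates both sides of (3) at one value of $\lambda$, taking $bc = 1$ so that no rescaling is needed.

```python
# Check identity (3): phi_n^k = phi_n^0*psi_k - (b_k c_k / bc)*phi_{n-1}^0*psi_{k-1}.
import numpy as np

rng = np.random.default_rng(0)
n, k = 5, 3
a, b, c = 1.3, 1.0, 1.0                                   # bc = 1
a_j, b_j, c_j = (rng.normal(size=k) for _ in range(3))    # (x_k, ..., x_1) each

tridiag = lambda d, lo, up: np.diag(d) + np.diag(lo, -1) + np.diag(up, 1)
char = lambda M, lam: np.linalg.det(M - lam * np.eye(len(M)))

Tnk = tridiag([a]*n + list(a_j), [b]*(n-1) + list(b_j), [c]*(n-1) + list(c_j))
Tn, Tn1 = tridiag([a]*n, [b]*(n-1), [c]*(n-1)), tridiag([a]*(n-1), [b]*(n-2), [c]*(n-2))
Bk, Bk1 = tridiag(a_j, b_j[1:], c_j[1:]), tridiag(a_j[1:], b_j[2:], c_j[2:])

lam = 0.37
lhs = char(Tnk, lam)
rhs = char(Tn, lam) * char(Bk, lam) - (b_j[0] * c_j[0] / (b * c)) * char(Tn1, lam) * char(Bk1, lam)
print(np.isclose(lhs, rhs))                               # True
```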

Let $U_n(x)/U_{n-1}(x) = p_n(x)$, $n \geq 1$, and $p_0(x) = 1$. Then
$$p_n(x) = \frac{U_n(x)}{U_{n-1}(x)} = \frac{2x\,U_{n-1}(x) - U_{n-2}(x)}{U_{n-1}(x)} = 2x - \frac{1}{p_{n-1}(x)}, \quad n \geq 1,$$
and
$$p_n'(x) = 2 + \frac{p_{n-1}'(x)}{p_{n-1}(x)^2} = 2 + \sum_{k=1}^{n-1}\frac{2}{p_{n-1}^2\,p_{n-2}^2\cdots p_{n-k}^2} = 2 + \frac{2}{U_{n-1}^2}\sum_{k=1}^{n-1} U_{n-1-k}^2, \quad n \geq 1. \tag{5}$$

Thus, $p_n'(x) > 0$ for all $n \geq 1$ and for all $x$ in the domain of $p_n$. Next we denote, for $1 \leq j \leq k$,
$$g_j(\lambda) := \frac{(b_j c_j/bc)\,\psi_{j-1}(\lambda)}{\psi_j(\lambda)}.$$

We compare their graphs.


Let $\eta_1, \ldots, \eta_{n-1}$ be the zeros of $U_{n-1}$ and $\xi_1, \ldots, \xi_n$ be the zeros of $U_n$. It is known that $-1 < \xi_1 < \eta_1 < \xi_2 < \eta_2 < \cdots < \xi_{n-1} < \eta_{n-1} < \xi_n < 1$. Also denote $\eta_0 = \xi_0 = -1$ and $\eta_n = \xi_{n+1} = 1$. It follows from (5) that $p_n$ is strictly increasing in each interval $(\eta_{j-1}, \eta_j)$, $1 \leq j \leq n$. The graph of $p_n(x)$ is shown in Fig. 1.
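This qualitative picture of $p_n$ is easy to verify numerically. The sketch below (assuming NumPy; the helper U and the numeric thresholds are ours) samples $p_n$ on one interval between consecutive roots of $U_{n-1}$ and confirms that it increases from large negative to large positive values.

```python
# p_n = U_n / U_{n-1} is strictly increasing between consecutive roots of U_{n-1}.
import numpy as np

def U(m, x):
    p_prev, p = np.ones_like(x), 2 * x
    if m == 0:
        return p_prev
    for _ in range(m - 1):
        p_prev, p = p, 2 * x * p - p_prev
    return p

n = 6
eta = np.sort(np.cos(np.arange(1, n) * np.pi / n))          # roots of U_{n-1}
xs = np.linspace(eta[1] + 1e-6, eta[2] - 1e-6, 500)         # one interior interval
pn = U(n, xs) / U(n - 1, xs)
print(np.all(np.diff(pn) > 0), pn[0] < -100, pn[-1] > 100)  # True True True
```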
In order to describe the behavior of $g_j$, $1 \leq j \leq k$, we impose the following conditions for the ensuing paragraphs through Corollary 3.4:
$$\frac{a_j}{\sqrt{bc}},\ \frac{a}{\sqrt{bc}}\ \text{ are real and }\ \frac{b_j c_j}{bc} \geq 0, \quad 1 \leq j \leq k. \tag{6}$$
For $2 \leq j \leq k$, expanding the determinant that generates $\psi_j(\lambda)$ by the first row, we have
$$\psi_j(\lambda) = \left(\frac{a_j}{\sqrt{bc}} - \lambda\right)\psi_{j-1}(\lambda) - \frac{b_{j-1} c_{j-1}}{bc}\,\psi_{j-2}(\lambda).$$

Fig. 1. $y = p_n(x)$.

It follows that
$$g_j(\lambda) = \frac{(b_j c_j/bc)\,\psi_{j-1}(\lambda)}{\bigl(a_j/\sqrt{bc} - \lambda\bigr)\psi_{j-1}(\lambda) - (b_{j-1} c_{j-1}/bc)\,\psi_{j-2}(\lambda)} = \frac{b_j c_j/bc}{\bigl(a_j/\sqrt{bc} - \lambda\bigr) - g_{j-1}(\lambda)}$$
and
$$g_j'(\lambda) = \frac{(b_j c_j/bc)\bigl(1 + g_{j-1}'(\lambda)\bigr)}{\Bigl(\bigl(a_j/\sqrt{bc} - \lambda\bigr) - g_{j-1}(\lambda)\Bigr)^2}, \quad 2 \leq j \leq k, \qquad g_1'(\lambda) = \frac{b_1 c_1/bc}{\bigl(a_1/\sqrt{bc} - \lambda\bigr)^2}. \tag{7}$$

By induction, $g_k'(\lambda)$ is nonnegative, and hence $g_k'(x) \leq 0$ in view of (4). Due to (6) the tridiagonal matrices $(1/\sqrt{bc})B_k$ are similar to symmetric matrices and hence they have exactly $k$ real eigenvalues, counting multiplicities (see [7, p. 174]).

Next, we look into the situation where the eigenvalues of $(1/\sqrt{bc})B_k$ and the eigenvalues of $(1/\sqrt{bc})B_{k-1}$ are interlacing. For this we prefer to denote
$$A_k = \begin{pmatrix} a_1 & c_1 & & 0 \\ b_1 & \ddots & \ddots & \\ & \ddots & \ddots & c_{k-1} \\ 0 & & b_{k-1} & a_k \end{pmatrix}$$
as a sequence of tridiagonal matrices satisfying $b_j c_j > 0$, $1 \leq j \leq k-1$, and $a_j$, $1 \leq j \leq k$, are real. In this notation we have the following proposition.

Proposition 3.1. The eigenvalues of $A_k$ are distinct and interlace strictly with the eigenvalues of $A_{k-1}$ for $k \geq 2$.

Proof. The proof is by induction on $k$. We denote the characteristic polynomial of $A_k$ by $u_k(\lambda)$. The root of $u_1(\lambda)$ is $a_1$ and
$$u_2(\lambda) = (\lambda - a_1)(\lambda - a_2) - b_1 c_1.$$
It follows from $b_1 c_1 > 0$ that $u_2(\lambda)$ has one root in $(\max(a_1, a_2), \infty)$ and one root in $(-\infty, \min(a_1, a_2))$. Thus, the assertion holds for $k = 2$. Assume that the assertion holds for order $k - 1$. Let $\rho_1 < \rho_2 < \cdots < \rho_{k-1}$ be the eigenvalues of $A_{k-1}$, and $\zeta_1 < \cdots < \zeta_{k-2}$ be the eigenvalues of $A_{k-2}$, where $\rho_1 < \zeta_1 < \rho_2 < \cdots < \zeta_{k-2} < \rho_{k-1}$ by hypothesis. Now $u_{k-2}(\lambda) = (-1)^{k-2}\lambda^{k-2} + $ lower order terms, so that $u_{k-2}(\lambda) = \prod_{j=1}^{k-2}(\zeta_j - \lambda)$. It follows that $(-1)^{k-2-j} u_{k-2} > 0$ on $(\zeta_{k-2-j}, \zeta_{k-1-j})$, $0 \leq j \leq k-2$, where $\zeta_0 = -\infty$, $\zeta_{k-1} = \infty$. Since $\rho_{k-1-j} \in (\zeta_{k-2-j}, \zeta_{k-1-j})$, $(-1)^{k-j} u_{k-2}(\rho_{k-1-j}) > 0$, $0 \leq j \leq k-2$.

Expanding the determinant that generates $u_k(\lambda)$ by the last row yields
$$u_k(\lambda) = (a_k - \lambda)\,u_{k-1}(\lambda) - b_{k-1} c_{k-1}\,u_{k-2}(\lambda), \quad k \geq 2. \tag{8}$$
Then it follows from (8) and $b_{k-1} c_{k-1} > 0$ that $(-1)^{k-j} u_k(\rho_{k-1-j}) < 0$, $0 \leq j \leq k-2$. So $u_k$ has a zero in $(\rho_{j-1}, \rho_j)$, $2 \leq j \leq k-1$. It remains to show that $u_k$ has a zero in each of $(-\infty, \rho_1)$ and $(\rho_{k-1}, \infty)$. Observe that $(-1)^k u_k(\lambda) = \lambda^k + $ lower terms, so $(-1)^k u_k(\rho_{k-1}) < 0 < (-1)^k u_k(\lambda)$ for $\lambda$ sufficiently larger than $\rho_{k-1}$. Thus, $u_k$ has a zero in $(\rho_{k-1}, \infty)$. Also $u_k(\rho_1) < 0 < u_k(\lambda)$ for $\lambda$ sufficiently smaller than $\rho_1$. Thus, $u_k$ has a zero in $(-\infty, \rho_1)$. □
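A quick numerical illustration of Proposition 3.1 (assuming NumPy; the random sample is ours):

```python
# Eigenvalues of A_{k-1} strictly interlace those of A_k when b_j c_j > 0.
import numpy as np

rng = np.random.default_rng(1)
k = 6
diag = rng.normal(size=k)                    # a_1, ..., a_k
sub = rng.uniform(0.5, 2.0, size=k - 1)      # b_j
sup = rng.uniform(0.5, 2.0, size=k - 1)      # c_j, so b_j c_j > 0

A = np.diag(diag) + np.diag(sub, -1) + np.diag(sup, 1)
ev_k  = np.sort(np.linalg.eigvals(A).real)            # eigenvalues of A_k
ev_k1 = np.sort(np.linalg.eigvals(A[:-1, :-1]).real)  # eigenvalues of A_{k-1}
print(all(ev_k[i] < ev_k1[i] < ev_k[i + 1] for i in range(k - 1)))   # True
```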

Now, let $\zeta_1 \leq \cdots \leq \zeta_k$ be the roots of $\psi_k(x)$ and $\rho_1 \leq \cdots \leq \rho_{k-1}$ be the roots of $\psi_{k-1}(x)$. Then, it follows from Proposition 3.1 that $\zeta_1 < \rho_1 < \zeta_2 < \rho_2 < \cdots < \rho_{k-1} < \zeta_k$ if $b_j c_j/bc > 0$ for $1 \leq j \leq k$. However, if $b_j c_j = 0$ for some $1 \leq j \leq k$, then $\psi_k$ and $\psi_{k-1}$ have common root(s). Let $l$ be the largest index $j$ such that $b_j c_j = 0$. Then, expanding the determinant that yields $\phi_n^k(\lambda)$ by the last $l$ rows according to the Laplace development, we have
$$\phi_n^k(\lambda) = \tilde{\phi}_n^{k-l}(\lambda)\,\psi_l(\lambda),$$
where $\tilde{\phi}_n^{k-l}(\lambda)$ is the characteristic polynomial of $\tilde{T}_n^k$, which is the $(n+k-l)$-order square matrix in the upper left corner of $T_n^k$. $\tilde{T}_n^k$ is of the form $T_n^k$ if we reindex the entries in the lower right corner of $\tilde{T}_n^k$. We also note that $g_k(x)$, in its reduced form, has exactly $k - l$ poles and $k - 1 - l$ zeros. The $l$ real roots of $\psi_l(x)$ are roots of $\phi_n^k(\lambda)$. In this decoupled case, we may focus our attention on determining roots of $\tilde{\phi}_n^{k-l}(\lambda)$. We also have an equation analogous to (3):
$$\tilde{\phi}_n^{k-l}(\lambda) = \phi_n^0(\lambda)\,\tilde{\psi}_{k-l}(\lambda) - \frac{b_k c_k}{bc}\,\phi_{n-1}^0(\lambda)\,\tilde{\psi}_{k-1-l}(\lambda),$$

where, for $0 \leq j \leq k - l - 1$, $\tilde{\psi}_{k-j-l}(\lambda)$ is the characteristic polynomial of the matrix $(1/\sqrt{bc})\tilde{B}_{k-j-l}$, where
$$\tilde{B}_{k-j-l} = \begin{pmatrix} a_{k-j} & c_{k-j-1} & & 0 \\ b_{k-j-1} & \ddots & \ddots & \\ & \ddots & \ddots & c_{l+1} \\ 0 & & b_{l+1} & a_{l+1} \end{pmatrix}.$$
Thus $g_k$, in its reduced form, is $\tilde{g}_{k-l} = (b_k c_k/bc)\,\tilde{\psi}_{k-1-l}/\tilde{\psi}_{k-l}$.


It follows from the intermediate value theorem that $p_n(x)$ and $g_k(x)$, in its reduced form, must agree with each other at least once in every interval $(\eta_{j-1}, \eta_j)$, $1 \leq j \leq n$, for $p_n(x)$ is strictly increasing and $g_k'(x) \leq 0$ wherever $g_k(x)$ is defined in $(\eta_{j-1}, \eta_j)$, $1 \leq j \leq n$, from (7). If a pole $\zeta_i$ of $g_k(x)$, $1 \leq i \leq k - l$, is not a pole of $p_n(x)$, then $\zeta_i$ must fall in an interval $(\eta_{j-1}, \eta_j)$ for some $j$, $1 \leq j \leq n$. If $\zeta_i, \zeta_{i+1}, \ldots, \zeta_{i+r}$ are the poles of $g_k(x)$ that lie in $(\eta_{j-1}, \eta_j)$ for some $j$, then by the intermediate value theorem, $p_n(x)$ and $g_k(x)$ must agree exactly once in each of the intervals $(\eta_{j-1}, \zeta_i), (\zeta_i, \zeta_{i+1}), \ldots, (\zeta_{i+r}, \eta_j)$, giving rise to $r + 1$ roots of $\phi_n^k(\lambda)$ in Eq. (3). In this notation we have the following theorem.

Theorem 3.2. Suppose that $g_k(x)$ is in the reduced form. Then
(i) for $1 \leq j \leq n$, $(\eta_{j-1}, \eta_j)$ contains one more root of $\phi_n^k(x)$ than poles of $g_k(x)$;
(ii) $\phi_n^k(x)$ has $n + k$ real roots. Furthermore, these roots are distinct if $b_j c_j \neq 0$, $1 \leq j \leq k$.

Proof. (i) follows immediately from the discussion before the theorem. For (ii) it suffices to show that $\tilde{\phi}_n^{k-l}$ has $n + k - l$ real roots. We note that each common pole of $g_k(x)$ and $p_n(x)$ gives rise to a root of $\phi_n^k(x)$ in Eq. (3). We may now assume that $\tilde{g}_{k-l}(x)$ and $p_n(x)$ have no common poles. Thus, it follows from part (i) that $\tilde{\phi}_n^{k-l}(x)$ must have $n + k - l$ real roots. If $b_j c_j \neq 0$ for all $1 \leq j \leq k$, then it follows from Proposition 3.1 that $\phi_n^k(x)$ has $n + k$ distinct real roots. □

A similar analysis of the location of roots of $\phi_n^k(x)$ can be done with regard to the intervals $(\xi_{j-1}, \xi_j)$, $1 \leq j \leq n+1$, which is in the following theorem.

Theorem 3.3. Suppose $b_j c_j \neq 0$, $1 \leq j \leq k$. Each $(\xi_{j-1}, \xi_j)$ for $1 \leq j \leq n+1$ contains one more root of $\phi_n^k(x)$ than zeros of $g_k(x)$.

Proof. The graph of $p_n(x)$ in $(\xi_{j-1}, \xi_j)$ is depicted in Fig. 2.

Fig. 2. $y = p_n(x)$.

If $g_k(x)$ has no zeros in $(\xi_{j-1}, \xi_j)$, then it follows from Proposition 3.1 that $g_k(x)$ can have at most one pole, $\rho_l$, in $(\xi_{j-1}, \xi_j)$. In addition, if $g_k(x)$ has no pole in $(\xi_{j-1}, \xi_j)$, then the graph of $g_k(x)$ is strictly decreasing on this interval, and $g_k(x) > 0$ or $g_k(x) < 0$ for all $x$ in $(\xi_{j-1}, \xi_j)$, and hence $g_k(x)$ and $p_n(x)$ agree exactly once in $(\xi_{j-1}, \xi_j)$. If there is exactly one pole, $\zeta_i$, of $g_k(x)$ such that $\xi_{j-1} \leq \zeta_i \leq \xi_j$, then $g_k(x)$ is strictly decreasing and $g_k(x) < 0$ in $(\xi_{j-1}, \zeta_i)$, and is strictly decreasing and $g_k(x) > 0$ in $(\zeta_i, \xi_j)$. If $\xi_{j-1} \leq \zeta_i < \eta_{j-1}$, then $g_k(x)$ and $p_n(x)$ agree exactly once in $(\zeta_i, \xi_j)$. If $\eta_{j-1} < \zeta_i \leq \xi_j$, then $g_k(x)$ and $p_n(x)$ agree exactly once in $(\xi_{j-1}, \zeta_i)$. If $\zeta_i = \eta_{j-1}$, then $g_k(x)$ and $p_n(x)$ are not equal for any $x$ in $(\xi_{j-1}, \xi_j)$. But $\zeta_i = \eta_{j-1}$ is a root of $\phi_n^k(x)$.

In general, suppose $g_k(x)$ has $r$ zeros, $\rho_i < \rho_{i+1} < \cdots < \rho_{i+r-1}$, in $(\xi_{j-1}, \xi_j)$. Then $p_n(x)$ has no zeros in each of $(\xi_{j-1}, \rho_i), (\rho_i, \rho_{i+1}), \ldots, (\rho_{i+r-1}, \xi_j)$. Repeating the argument in the previous paragraph with the roles of $p_n(x)$ and $g_k(x)$ reversed, we conclude that there is exactly one root of $\phi_n^k(x)$ in each of the intervals $(\xi_{j-1}, \rho_i), (\rho_i, \rho_{i+1}), \ldots, (\rho_{i+r-1}, \xi_j)$, a total of $r + 1$ roots. □

Corollary 3.4. (i) If $b_j c_j \geq 0$, $1 \leq j \leq k$, $bc > 0$ and $a, a_j$ are real, $1 \leq j \leq k$, then $T_n^k(a,b,c)$ has $n + k$ real eigenvalues. Furthermore, these eigenvalues are distinct if $b_j c_j > 0$, $1 \leq j \leq k$.
(ii) If $bc < 0$, $b_j c_j < 0$, and $a, a_j$ are purely imaginary complex numbers for $1 \leq j \leq k$, then $T_n^k(a,b,c)$ has $n + k$ distinct eigenvalues.

Proof. If $x_0$ is a root of $\phi_n^k(x)$, then $a/\sqrt{bc} - 2x_0 = \lambda_0$ is a root of $\phi_n^k(\lambda)$, and thus $a - 2\sqrt{bc}\,x_0$ is an eigenvalue of $T_n^k(a,b,c)$. If $b_j c_j \geq 0$, $1 \leq j \leq k$, $bc > 0$ and $a, a_j$ are real, $1 \leq j \leq k$, then condition (6) is satisfied. If $bc < 0$ and $b_j c_j < 0$, and $a, a_j$ are purely imaginary complex numbers for $1 \leq j \leq k$, then condition (6) is also satisfied. The result follows from Theorem 3.2. The eigenvalues of $T_n^k(a,b,c)$ are of the form $a - 2\sqrt{bc}\,x_i$, $1 \leq i \leq n + k$, where the $x_i$'s are distinct real roots of $\phi_n^k(x)$. □
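A numerical illustration of Corollary 3.4(i) (assuming NumPy; the sample values are ours):

```python
# With bc > 0, b_j c_j > 0 and real diagonal entries, T_n^k(a,b,c) has
# n + k distinct real eigenvalues.
import numpy as np

n, k, a, b, c = 6, 2, 0.0, 2.0, 0.5
a_j, b_j, c_j = [1.0, -0.5], [1.5, 0.3], [0.4, 2.0]     # (x_k, ..., x_1) each

M = np.diag([a] * n + a_j) + np.diag([b] * (n - 1) + b_j, -1) \
    + np.diag([c] * (n - 1) + c_j, 1)

ev = np.sort(np.linalg.eigvals(M))
print(np.allclose(ev.imag, 0.0), np.all(np.diff(ev.real) > 1e-10))   # True True
```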

In the next section we demonstrate examples of $T_n^k(a,b,c)$ in which the eigenvalues can be determined completely by graphing in the case of $k = 1$ or $2$.

Next, we will determine a lower bound for the number of real eigenvalues of $T_n^k(a,b,c)$. For the rest of this section, we assume that $bc > 0$, $n \geq k$, and $a, a_j, b_j, c_j$, $1 \leq j \leq k$, are real. With these assumptions, all entries of $T_n^k(a/\sqrt{bc}, b/\sqrt{bc}, c/\sqrt{bc})$ and $T_n^k(a,b,c)$ are real, and we may assume that $bc = 1$. In order to facilitate an induction argument on $k$, we rename the entries in $T_n^k(a,b,c)$ by reversing $\{a_1, \ldots, a_k\}$, $\{b_1, \ldots, b_k\}$, $\{c_1, \ldots, c_k\}$ as $\{a_k, \ldots, a_1\}$, $\{b_k, \ldots, b_1\}$, $\{c_k, \ldots, c_1\}$, respectively, and thereby write $T_n^k(a,b,c)$ as the tridiagonal matrix with diagonal $(a, \ldots, a, a_1, a_2, \ldots, a_k)$, subdiagonal $(b, \ldots, b, b_1, \ldots, b_k)$ and superdiagonal $(c, \ldots, c, c_1, \ldots, c_k)$.

Expanding the determinant for $\phi_n^k(\lambda)$ by the last row we have
$$\phi_n^k(\lambda) = (a_k - \lambda)\,\phi_n^{k-1}(\lambda) - b_k c_k\,\phi_n^{k-2}(\lambda), \quad k \geq 2. \tag{9}$$

In case $k = 1$, we have
$$\phi_n^1(x) = (a_1 - a + 2x)\,\phi_n^0(x) - b_1 c_1\,\phi_{n-1}^0(x)$$
with $a - \lambda = 2x$. Thus, by Eq. (2), we have
$$\phi_n^1(x) = (a_1 - a)U_n(x) + 2x\,U_n(x) - b_1 c_1 U_{n-1}(x) = (a_1 - a)U_n(x) + U_{n+1}(x) + (1 - b_1 c_1)U_{n-1}(x).$$
Thus, $\phi_n^1(x)$ is a linear combination of $U_{n+1}(x)$, $U_n(x)$ and $U_{n-1}(x)$. In general, we show that $\phi_n^k(x)$ is a linear combination of $U_{n+k}(x), U_{n+k-1}(x), \ldots, U_n(x), U_{n-1}(x), \ldots, U_{n-k}(x)$ by induction on $k$. Suppose that $\phi_n^1(x), \ldots, \phi_n^{k-1}(x)$ satisfy the above assertion. From (9) we have
$$\phi_n^k(x) = (a_k - a + 2x)\,\phi_n^{k-1}(x) - b_k c_k\,\phi_n^{k-2}(x),$$
and so
$$\phi_n^k(x) = (a_k - a)\,\phi_n^{k-1}(x) - b_k c_k\,\phi_n^{k-2}(x) + 2x\,\phi_n^{k-1}(x). \tag{10}$$
By the induction hypothesis, the first two terms on the right side of (10) are linear combinations of $U_{n+k-1}(x), \ldots, U_{n-k+1}(x)$. The last term on the right side of (10) is of the form
$$\sum_{n-k+1 \leq j \leq n+k-1} \alpha_j\,2x\,U_j(x).$$

By (2), $2x\,U_j(x) = U_{j+1}(x) + U_{j-1}(x)$ and hence
$$\sum_j \alpha_j\,2x\,U_j(x) = \sum_j \alpha_j\bigl(U_{j+1}(x) + U_{j-1}(x)\bigr) = \sum_{n-k \leq j \leq n+k} \beta_j U_j(x)$$
with $\beta_j$ real. Thus, $\phi_n^k(x)$ is a linear combination of $U_{n-k}(x), \ldots, U_{n+k}(x)$. We summarize this result in a proposition.

Proposition 3.5. $\phi_n^k(x)$ is a linear combination of $U_{n+k}(x), U_{n+k-1}(x), \ldots, U_{n-k}(x)$ with real coefficients.
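The sketch below (assuming NumPy; helper names, sample points, and tolerances are ours) illustrates Proposition 3.5: fitting $\phi_n^k(x)$ against $U_0, \ldots, U_{n+k}$ leaves no components on $U_0, \ldots, U_{n-k-1}$.

```python
# Expand phi_n^k(x) in the U_j basis; coefficients of U_j for j < n-k vanish.
import numpy as np

def U_rows(m, x):
    vals = [np.ones_like(x), 2 * x]
    for _ in range(m - 1):
        vals.append(2 * x * vals[-1] - vals[-2])
    return np.array(vals[: m + 1])             # rows U_0(x), ..., U_m(x)

n, k = 6, 2
a, b, c = 0.0, 1.0, 1.0                         # bc = 1 and a = 0, so lambda = -2x
a_j, b_j, c_j = [0.7, -0.3], [0.5, 1.2], [2.0, 0.4]

M = np.diag([a] * n + a_j) + np.diag([b] * (n - 1) + b_j, -1) \
    + np.diag([c] * (n - 1) + c_j, 1)

x = np.linspace(-1, 1, 200)
phi = np.array([np.linalg.det(M - (a - 2 * t) * np.eye(n + k)) for t in x])

coeffs, *_ = np.linalg.lstsq(U_rows(n + k, x).T, phi, rcond=None)
print(np.allclose(coeffs[: n - k], 0.0, atol=1e-8))    # True
```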

Theorem 3.6. $\phi_n^k(\lambda)$ has at least $n - k$ real roots located in $[a - 2, a + 2]$.

Proof. Suppose that $\phi_n^k(x)$ has fewer than $n - k$ real roots in $[-1, 1]$, say $\zeta_1 \leq \zeta_2 \leq \cdots \leq \zeta_m$, $m < n - k$. Consider the polynomial $f(x) = \prod_{j=1}^m (x - \zeta_j)$ of degree $m$ on $[-1, 1]$. $f(x)$ can be written as a linear combination of $U_0(x), U_1(x), \ldots, U_m(x)$, i.e., $f(x) = \sum_{j=0}^m \alpha_j U_j(x)$. Next consider the weighted inner product
$$\langle \phi_n^k(x), f(x)\rangle = \int_{-1}^{1} (1 - x^2)^{1/2}\,\phi_n^k(x)\,f(x)\,dx,$$
which is nonzero since $\phi_n^k(x)$ and $f(x)$ are of either the same sign or the opposite sign over each of the intervals $(-1, \zeta_1), (\zeta_1, \zeta_2), \ldots, (\zeta_m, 1)$. On the other hand, it follows from Proposition 3.5 that
$$\phi_n^k(x) = \sum_{n-k \leq j \leq n+k} \beta_j U_j(x)$$
and hence
$$\langle \phi_n^k, f\rangle = \Bigl\langle \sum_{n-k \leq j \leq n+k} \beta_j U_j(x),\ \sum_{j=0}^m \alpha_j U_j(x)\Bigr\rangle = 0, \quad m < n - k,$$
a contradiction since the polynomials $U_j(x)$ are orthogonal with respect to this inner product. Hence, $\phi_n^k(x)$ has at least $n - k$ real roots in $[-1, 1]$, and therefore, $\phi_n^k(\lambda)$ has at least $n - k$ real roots in $[a - 2, a + 2]$. □

4. Examples

For the sake of simplicity, we require $bc > 0$ in this section.

We study the eigenvalues of the matrix $T_n^1(a,b,c)$, which corresponds to the case $k = 1$.


By examining the roots of the characteristic polynomial of $(1/\sqrt{bc})T_n^1(a,b,c)$ and using the substitution $a/\sqrt{bc} - \lambda = 2x$, we get from (3) that, for $n \geq 1$, the roots $x$ of $\phi_n^1(x)$ satisfy the equation
$$bc(e_1 + 2x)U_n(x) - b_1 c_1 U_{n-1}(x) = 0, \tag{11}$$
and if $x$ is not a common root of $U_{n-1}(x)$ and $e_1 + 2x$, then
$$\frac{U_n(x)}{U_{n-1}(x)} = \frac{b_1 c_1}{bc(e_1 + 2x)}, \tag{12}$$
where $e_1 = (a_1 - a)/\sqrt{bc}$ and $U_n(x)$ denotes the $n$th degree Chebyshev polynomial of the second kind. We have seen in Corollary 3.4 that if $b_1 c_1 > 0$ and $bc > 0$, $T_n^1(a,b,c)$ has $n + 1$ real distinct eigenvalues, obtained by studying the intersection of the graphs
$$g_1(x) = \frac{b_1 c_1/bc}{e_1 + 2x} \quad \text{with} \quad p_n(x) = \frac{U_n(x)}{U_{n-1}(x)}$$
in the $xy$-plane. By looking at the graph of $y = g_1(x)$, we can determine the location of eigenvalues of $T_n^1(a,b,c)$ precisely.

Let $\eta_0 < \xi_1 < \eta_1 < \xi_2 < \cdots < \eta_{i-1} < \xi_i < \eta_i < \cdots < \eta_{n-1} < \xi_n < \eta_n$, where $\eta_0 = -1$, $\eta_n = 1$ and $\xi_1, \xi_2, \ldots, \xi_n$ are the roots of $U_n(x)$ and $\eta_1, \eta_2, \ldots, \eta_{n-1}$ are the roots of $U_{n-1}(x)$. If $bc > 0$ and $(a - a_1)/(2\sqrt{bc})$ coincides with one of the $\eta_i$'s, it is a root of (11). Otherwise, we call the interval $(\eta_{i-1}, \eta_i)$ the distinguished interval if $\eta_{i-1} < (a - a_1)/(2\sqrt{bc}) < \eta_i$. With this notation we have the following result.
p
Theorem 4.1. If $bc > 0$ and $\eta_{i-1} < (a - a_1)/(2\sqrt{bc}) < \eta_i$ for some $i$, there is exactly one root of (12) in each of the $n - 1$ intervals $(\eta_{j-1}, \eta_j)$ where $j \neq i$, $1 \leq j \leq n$. If $b_1 c_1 > 0$, then there are precisely two additional roots of (12), exactly one lying in each of the intervals
$$\left(\eta_{i-1}, \frac{a - a_1}{2\sqrt{bc}}\right) \quad \text{and} \quad \left(\frac{a - a_1}{2\sqrt{bc}}, \eta_i\right).$$
If $b_1 c_1 < 0$, then there may be zero, one or two additional roots of (12) in the interval $(\eta_{i-1}, \eta_i)$.

Proof. Let $d_1$ and $d_2$ be the parts of the graph of $g_1(x)$ for $x < -e_1/2$ and for $x > -e_1/2$, respectively. We observe that if $\eta_{i-1} < -e_1/2 < \eta_i$, from Fig. 1 we see that $d_1$ meets each component of the graph $y = U_n(x)/U_{n-1}(x)$ once in the $i - 1$ intervals to the left of $(\eta_{i-1}, \eta_i)$, and $d_2$ meets each component once in the $n - i$ intervals to the right of $(\eta_{i-1}, \eta_i)$, producing $n - 1$ roots of (12). This holds whether $b_1 c_1 > 0$, in which case $y = g_1(x) = (b_1 c_1/bc)/(e_1 + 2x)$ is decreasing on each of the intervals $(-\infty, -e_1/2)$ and $(-e_1/2, \infty)$ as depicted in Fig. 3, or $b_1 c_1 < 0$. Now if $b_1 c_1 > 0$, the component of the graph of $y = U_n(x)/U_{n-1}(x)$ in the distinguished interval $(\eta_{i-1}, \eta_i)$ meets both $d_1$ and $d_2$, and we get two additional roots of (12) (see Fig. 1 along with Fig. 3). If $b_1 c_1 < 0$, the graph of $y = g_1(x)$ is increasing on $(-\infty, -e_1/2)$ and on $(-e_1/2, \infty)$. With $bc$, $a$ and $a_1$ fixed, $b_1$ and $c_1$ can be chosen so that $b_1 c_1 < 0$ and each of the three illustrations in Fig. 4 occurs. □

Fig. 3. $y = g_1(x)$.

Fig. 4. Intersections when $b_1 c_1 < 0$.
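Theorem 4.1 can also be illustrated numerically (a sketch assuming NumPy, with $bc = 1$ and sample values of our choosing): recovering the roots $x_i$ from the eigenvalues via $\lambda = a - 2x$ and counting them per interval $(\eta_{j-1}, \eta_j)$ shows one root in each nondistinguished interval and two in the distinguished one.

```python
# Count roots of (12) per interval (eta_{j-1}, eta_j) for k = 1, b_1 c_1 > 0.
import numpy as np

n, a, b, c, a1, b1, c1 = 7, 0.0, 1.0, 1.0, 0.6, 0.8, 0.5
M = np.diag([a] * n + [a1]) + np.diag([b] * (n - 1) + [b1], -1) \
    + np.diag([c] * (n - 1) + [c1], 1)

x_roots = (a - np.linalg.eigvals(M).real) / 2.0      # lambda = a - 2x, bc = 1

eta = np.sort(np.concatenate(([-1.0, 1.0], np.cos(np.arange(1, n) * np.pi / n))))
print(np.histogram(x_roots, bins=eta)[0])
# e.g. [1 1 2 1 1 1 1]: two roots in the distinguished interval containing -e_1/2
```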

Now we study the eigenvalues of the matrix $T_n^2(a,b,c)$, which is the case $k = 2$.

By examining the roots of the characteristic polynomial of $(1/\sqrt{bc})T_n^2(a,b,c)$ and using the substitution $a/\sqrt{bc} - \lambda = 2x$, we get from (3), for $n \geq 1$,
$$bc\bigl(4x^2 + 2(e_1 + e_2)x + e_1 e_2 - d\bigr)U_n(x) - b_2 c_2(e_1 + 2x)U_{n-1}(x) = 0. \tag{13}$$

If $x$ is not a common root of the factors in the two summands, then
$$\frac{U_n(x)}{U_{n-1}(x)} = \frac{b_2 c_2(e_1 + 2x)}{bc\bigl(4x^2 + 2(e_1 + e_2)x + e_1 e_2 - d\bigr)}, \tag{14}$$
where
$$e_1 = \frac{a_1 - a}{\sqrt{bc}}, \qquad e_2 = \frac{a_2 - a}{\sqrt{bc}}, \qquad d = \frac{b_1 c_1}{bc}$$
and $U_n(x)$ denotes the $n$th degree Chebyshev polynomial of the second kind. We have seen in Corollary 3.4 that if $b_2 c_2 > 0$, $b_1 c_1 > 0$ and $bc > 0$, $T_n^2(a,b,c)$ has $n + 2$ real distinct eigenvalues.

With the help of the graph of
$$y = g_2(x) = \frac{b_2 c_2(e_1 + 2x)}{bc\bigl(4x^2 + 2(e_1 + e_2)x + e_1 e_2 - d\bigr)},$$

we can determine the locations of roots of (14). Let $\eta_0 < \xi_1 < \eta_1 < \cdots < \eta_{i-1} < \xi_i < \eta_i < \cdots < \eta_{n-1} < \xi_n < \eta_n$, where $\eta_0 = -1$, $\eta_n = 1$ and $\xi_1, \xi_2, \ldots, \xi_n$ are the roots of $U_n(x)$ and $\eta_1, \eta_2, \ldots, \eta_{n-1}$ are the roots of $U_{n-1}(x)$. We set
$$h_1 = \frac{-(e_1 + e_2) - \sqrt{(e_1 - e_2)^2 + 4d}}{4}, \qquad h_2 = \frac{-(e_1 + e_2) + \sqrt{(e_1 - e_2)^2 + 4d}}{4}$$
and
$$D = (e_1 - e_2)^2 + 4d.$$
Note that $x = h_1$ and $x = h_2$ are vertical asymptotes of $y = g_2(x)$ if $D \geq 0$. If $D \geq 0$ and $h_1$ or $h_2$ is a root of $U_{n-1}(x)$, it is a root of (13); otherwise, let us denote by $J_1$ and $J_2$ the intervals to which $h_1$ and $h_2$ belong, respectively, amongst the intervals $(\eta_{i-1}, \eta_i)$ for $i = 1, 2, \ldots, n$. In this notation, we have the following result.

Fig. 5. $g_2(x) = \dfrac{b_2 c_2(e_1 + 2x)}{bc\bigl(4x^2 + 2(e_1 + e_2)x + e_1 e_2 - d\bigr)}$.

Theorem 4.2. If $D < 0$, then Eq. (14) has $n$ real roots, at least one lying in each interval $(\eta_{j-1}, \eta_j)$ for $j = 1, 2, \ldots, n$.

Proof. The result follows from looking at the graphs given in Fig. 5 and comparing them with the graph of $y = U_n(x)/U_{n-1}(x)$ in Fig. 1. □
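For a concrete instance of the $D < 0$ case, here is a sketch assuming NumPy; the sample values are ours, chosen with $e_1 = e_2 = 0$ and $d = -1$ so that $D = -4$.

```python
# With D < 0, g_2 has no real poles and (14) has a root in every (eta_{j-1}, eta_j).
import numpy as np

n, a, b, c = 7, 0.0, 1.0, 1.0
a2, a1, b2, c2, b1, c1 = 0.0, 0.0, 1.0, 1.0, 1.0, -1.0   # d = b1*c1/(b*c) = -1

M = np.diag([a] * n + [a2, a1]) + np.diag([b] * (n - 1) + [b2, b1], -1) \
    + np.diag([c] * (n - 1) + [c2, c1], 1)

x_roots = (a - np.linalg.eigvals(M)) / 2.0
x_real = np.sort(x_roots[np.abs(x_roots.imag) < 1e-9].real)

eta = np.sort(np.concatenate(([-1.0, 1.0], np.cos(np.arange(1, n) * np.pi / n))))
print(np.histogram(x_real, bins=eta)[0])    # every entry is at least 1
```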

Theorem 4.3. Suppose $D > 0$, and $h_1, h_2$ are not roots of $U_{n-1}(x)$.
(i) Then Eq. (14) has at least one real root in each interval $(\eta_{j-1}, \eta_j)$, $1 \leq j \leq n$, which is not distinguished, accounting for at least $n - 2$ or $n - 1$ roots of (14) depending on whether $J_1 \neq J_2$ or $J_1 = J_2$.
(ii) If $b_2 c_2 > 0$, $b_1 c_1 > 0$, then there are exactly $n + 2$ distinct real roots of (14) and each nondistinguished interval $(\eta_{i-1}, \eta_i)$, $1 \leq i \leq n$, contains exactly one root of (14). If $b_2 c_2 > 0$, $b_1 c_1 < 0$, then there are at least $n$ real roots of (14).
(iii) If $b_2 c_2 < 0$ and $J_1 \neq J_2$, then there are at least $n$ real roots of (14) when $b_1 c_1 < 0$, and at least $n - 2$ real roots of (14) when $b_1 c_1 > 0$.

Fig. 6. $y = g_2(x)$, $D > 0$.

Proof. (i) Fig. 6 depicts the graph of $y = g_2(x)$ for four cases. From Figs. 1 and 6, it can be seen that $g_2(x)$ and $p_n(x)$ have at least one intersection in any given nondistinguished interval $(\eta_{j-1}, \eta_j)$, $1 \leq j \leq n$, for $x = \eta_{j-1}$ and $x = \eta_j$ are vertical asymptotes of $p_n(x)$, and the $x$-axis is a horizontal asymptote of $g_2(x)$.
(ii) Suppose $b_2 c_2 > 0$ and $b_1 c_1 > 0$. Then $g_2'(x) < 0$ for all $x \neq h_1, h_2$. If $J_1 \neq J_2$, then each $J_j$, $j = 1, 2$, contains two roots, accounting for all $2 + 2 + (n - 2) = n + 2$ roots using (i). If $J_1 = J_2$, then $J_1$ contains three roots, accounting for all $3 + (n - 1) = n + 2$ roots using (i). In either case, all roots of (14) are accounted for, and thus all nondistinguished intervals contain exactly one root of (14) again. Suppose $b_2 c_2 > 0$ and $b_1 c_1 < 0$. The distinguished interval $J_1$ or $J_2$ that contains $h_2$ contains two roots (Fig. 6). If $J_1 \neq J_2$, this interval contains at least two roots of (14), accounting for $2 + (n - 2) = n$ roots, using (i). If $J_1 = J_2$, $J_1$ contains at least one root of (14) in $J_1 \cap (h_1, \infty)$, accounting for $1 + (n - 1) = n$ roots, by (i) again.
(iii) The number of real roots is at least $n - 2$ by Theorem 3.6. In addition, if $b_2 c_2 < 0$, $b_1 c_1 < 0$ and $J_1 \neq J_2$, then it can be seen from Fig. 6 that the distinguished interval that contains $h_1$ necessarily contains two roots of (14), accounting for $2 + (n - 2) = n$ roots. Finally, if $J_1 = J_2$, then one root of (14) is guaranteed in $J_1 \cap (-\infty, h_1)$, accounting for $1 + (n - 1) = n$ roots. □

Theorem 4.4. If $D = 0$, then there are at least $n$ distinct real roots of (13).

Proof. The graph of $g_2(x)$ is given in Fig. 7, where $x = h_1$ is the vertical asymptote of $g_2(x)$.
If $D = 0$, then $b_1 c_1 < 0$. If $h_1$ does not lie in the interval $(\eta_{j-1}, \eta_j)$, $1 \leq j \leq n$, then Eq. (14) has at least one root in $(\eta_{j-1}, \eta_j)$. If $h_1 \in (\eta_{j-1}, \eta_j)$, it can be seen easily from Fig. 7 that only one root is guaranteed in any open interval $(\eta_{j-1}, \eta_j)$ that contains $h_1$. Note that if $h_1$ coincides with $\eta_j$, then one of the intervals $(\eta_{j-1}, \eta_j)$ and $(\eta_j, \eta_{j+1})$ necessarily contains a root of (14) while the other might not contain a root. In this case, $h_1$ is a root of (13). □

Fig. 7. $y = g_2(x)$, $D = 0$.

Remark 4.5. The positions of the real roots discussed in Theorems 4.2-4.4 can be determined completely by the graphs of $p_n(x)$ and $g_2(x)$.

References

[1] K.E. Atkinson, An Introduction to Numerical Analysis, Wiley, New York, 1978.
[2] R.M. Beam, R.F. Warming, The asymptotic spectra of banded Toeplitz and quasi-Toeplitz matrices, SIAM J. Sci. Comput. 14 (4) (1993) 971-1006.
[3] B. Cahlon, D.M. Kulkarni, P. Shi, Stepwise stability for the heat equation with a nonlocal constraint, SIAM J. Numer. Anal. 32 (2) (1995) 571-593.
[4] J.M. Franklin, Matrix Theory, Prentice-Hall, New Jersey, 1968.
[5] G.H. Golub, C.F. van Loan, Matrix Computations, Johns Hopkins University Press, Baltimore, MD, 1989.
[6] U.W. Hochstrasser, Orthogonal polynomials, in: M. Abramowitz, I.A. Stegun (Eds.), Handbook of Mathematical Functions with Formulas, Graphs and Mathematical Tables, 1966, pp. 771-802.
[7] R.A. Horn, C.A. Johnson, Matrix Analysis, Cambridge University Press, Cambridge, 1985.
[8] A. Korsak, C. Schubert, A determinant expression of Chebychev polynomials, Canad. Math. Bull. 3 (1960) 243-246.
[9] T.J. Rivlin, Chebychev Polynomials, Wiley, New York, 1974.
[10] I.E. Schochetman, R.L. Smith, S.-K. Tsui, Solutions existence for infinite quadratic programming, the positive semi-definite case, in: A.V. Fiacco (Ed.), Mathematical Programming with Data Perturbations, Marcel Dekker, 1997, to appear.
[11] G.D. Smith, Numerical Solution of Partial Differential Equations: Finite Difference Methods, 3rd ed., Clarendon Press, Oxford, 1985.
[12] W.F. Trench, On the eigenvalue problem for Toeplitz band matrices, Linear Algebra Appl. 64 (1985) 199-214.
[13] S.-K. Tsui, Matrices related to orthogonal polynomials, Preprint, 1995.
