www.elsevier.com/locate/laa
Abstract
A new type of generalized matrix inverse is used to define generalized inverse matrix Padé approximants (GMPA). GMPA is introduced on the basis of the scalar product of matrices, and has the form of a matrix numerator and a scalar denominator. It differs from the existing matrix Padé approximants in that no multiplication of matrices is needed in the construction process. Some algebraic properties are discussed. Three representations of GMPA are provided: (i) explicit determinantal formulas for the denominator scalar polynomials and the numerator matrix polynomials; (ii) an ε-algorithm expression; (iii) a Thiele-type continued fraction expression. The equivalence relations among the above three representations are established. © 2001 Elsevier Science Inc. All rights reserved.
AMS classification: 65D05; 41A21; CR: G.1.2
Keywords: Scalar product of matrices; Generalized inverse; Matrix Padé approximation; Algebraic properties; Determinantal formula; Thiele-type continued fraction; ε-Algorithm
1. Introduction
Let f(z) be a given power series with matrix coefficients, i.e.,

f(z) = c_0 + c_1 z + c_2 z^2 + ⋯ + c_n z^n + ⋯,  c_i = (c_i^{(uv)}) ∈ C^{s×t},  z ∈ C,  (1.1)

where C^{s×t} consists of all s × t matrices with their elements in the complex plane C.

The work is supported by the National Natural Science Foundation of China (19871054).
E-mail address: guchqing@guomai.sh.cn (C. Gu).
A (right) matrix Padé approximant of f(z) is an expression of the form U(z)V(z)^{−1} such that

f(z)V(z) − U(z) = R(z),  (1.2)

where U(z) and V(z) are matrix polynomials of degrees at most m and n, respectively, and the expansion of U(z)V(z)^{−1} agrees with f(z) up to and including the term z^{m+n}. R(z) in (1.2) is referred to as the residual of the approximant. A left matrix Padé approximant of f(z) can be defined similarly.
The definition of a Padé approximant can be made more formal in a variety of ways. Typically, U(z) and V(z) are s × s polynomial matrices, and V(z) is further restricted by the condition that its constant term V(0) is invertible (cf. [5,7,27]). Labahn and Cabay called such approximants matrix Padé fractions, which are consistent with the scalar (p = 1) case (cf. [18]). In [25] they introduced and developed the notion of a matrix power series remainder sequence and its corresponding cofactor sequence, and presented an algorithm for constructing these sequences. Xu and Bultheel considered some possible definitions of matrix Padé approximants for a power series with rectangular matrix coefficients in [24]. They had to consider left and right approximants on account of the noncommutativity of matrix multiplication. Depending on the normalization of the denominator, they defined type I (constant term is the unit matrix) and type II (conditions on the leading coefficient) approximants. A uniform approach was given by Beckermann and Labahn for different concepts of matrix-type Padé approximants, such as descriptions of vector and matrix Padé approximants along with generalizations of simultaneous and Hermite Padé approximants. In [3], they introduced the definition of a power Hermite Padé approximant (PHPA), which takes right-hand (left-hand) and rectangular matrix Padé approximants, matrix Hermite Padé approximants and matrix simultaneous Padé approximants as its special cases (see [3, Example 2.12.4]).
Various vector rational interpolants were introduced by Graves-Morris (cf. [20–22]). The problem here is to approximate a number of functions by rational functions with a common denominator. Graves-Morris and Jenkins [22] first presented an axiomatic approach which uniquely defines vector Padé approximants and established their algebraic structure without reference to matrix-valued C-fractions; this is important for vector-valued rational approximation problems. In contrast to our GMPA approach, Graves-Morris and Roberts extended their approach from vector Padé approximants to matrix Padé approximants by exploiting an isomorphism between vectors and matrices by means of a Clifford algebra representation. By using modified Euclidean and Kronecker algorithms, they established the interconnections between vector Padé approximation and matrix Padé approximation in [23]. In this respect, Antoulas [1] gave a recursive method of updating the partial realization to include further Markov parameters, so making a major generalization of the generalized Euclidean algorithm. Bultheel and Van Barel [8] formulated a generalization of the Euclidean algorithm to treat the case of
2. Definition
Let a⃗ = (a_1, a_2, …, a_d), b⃗ = (b_1, b_2, …, b_d), a⃗, b⃗ ∈ C^d. The scalar product of the vectors a⃗ and b⃗ is given by a⃗ · b⃗ = Σ_{i=1}^{d} a_i b_i. The following definition is a natural extension from vectors to matrices; it differs from the trace inner product in the case of complex matrices.
Definition 2.1. Let A = (a_ij), B = (b_ij), A, B ∈ C^{s×t}. The scalar product of the matrices A and B is defined by

A · B = Σ_{i=1}^{s} Σ_{j=1}^{t} a_ij b_ij.  (2.1)

The norm of A is defined by

‖A‖ = ( Σ_{i=1}^{s} Σ_{j=1}^{t} |a_ij|^2 )^{1/2}.  (2.2)

Clearly,

A · Ā = Σ_{i=1}^{s} Σ_{j=1}^{t} |a_ij|^2 = ‖A‖^2.  (2.3)
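As a quick numerical illustration (a minimal sketch using numpy; the function name `sprod` is ours, not the paper's), the scalar product (2.1) and property (2.3) can be checked directly:

```python
import numpy as np

def sprod(A, B):
    # Scalar product of Definition 2.1: sum_{i,j} a_ij * b_ij (bilinear, no conjugation)
    return np.sum(A * B)

A = np.array([[1.0, 1j], [2.0, 0.0]])
norm2 = np.sum(np.abs(A) ** 2)              # ||A||^2 as in (2.2)
# property (2.3): A . conj(A) = ||A||^2
assert np.isclose(sprod(A, np.conj(A)), norm2)
```

For complex matrices this product is bilinear, which is exactly why A · Ā, rather than A · A, recovers the squared norm.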
Proof. As both A, B ≠ 0, it is easy to derive b/A = 1/C from A = bC. By the left-hand side of (2.6), we get from Definition 2.1 that bĀ/‖A‖^2 = C̄/‖C‖^2. Then

Ā = (‖A‖^2 / (b‖C‖^2)) C̄  or  A = (‖A‖^2 / (b‖C‖^2)) C.

Thus,

‖A‖^2 = A · Ā = ‖A‖^4 ‖C‖^2 / (b^2 ‖C‖^4).
Lemma 2.4 shows that we need not compute each inverse in the construction process of GMPA in the ε-algorithm form and in the Thiele-type continued fraction form.
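The statement of (2.4) is garbled in this copy, but Lemma 2.7 below implies a Samelson-type form A_r^{−1} = Ā/‖A‖^2. Under that assumption, a short numpy sketch:

```python
import numpy as np

def ginv(A):
    # Samelson-type generalized inverse assumed for (2.4): conj(A) / ||A||^2
    return np.conj(A) / np.sum(np.abs(A) ** 2)

A = np.array([[1.0, 2.0], [0.0, 1j]])
# with the bilinear scalar product of Definition 2.1, A . ginv(A) = 1
assert np.isclose(np.sum(A * ginv(A)), 1.0)
```

No matrix inversion is ever performed, only division by the real scalar ‖A‖^2, which is the point of Lemma 2.4.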
Definition 2.5. A matrix-valued polynomial N(x) = (a^{uv}(x)) ∈ C^{s×t} is said to be of degree m, denoted by N{N(x)} = m, if N{a^{uv}(x)} ≤ m for u = 1, 2, …, s, v = 1, 2, …, t, and N{a^{uv}(x)} = m for some u, v (1 ≤ u ≤ s, 1 ≤ v ≤ t).
Definition 2.6. A GMPA of type [n/2k] for the given power series (1.1) is the rational function

R(z) = P(z)/Q(z)  (2.7)

such that P(z) is a matrix polynomial and Q(z) is a real scalar polynomial satisfying:
(i) N{P(z)} ≤ n, N{Q(z)} ≤ 2k;  (2.8)
(ii) Q(z) | ‖P(z)‖^2;  (2.9)
(iii) Q(z)f(z) − P(z) = O(z^{n+1}),  (2.10)
where P(z) = (p^{(uv)}(z)) ∈ C^{s×t} and the norm ‖P(z)‖ of P(z) is as in (2.2).
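Conditions (i)–(iii) can be verified mechanically for a concrete pair. The sketch below (numpy; polynomial coefficient lists in ascending order) uses the [2/2] pair of Example 4.2, with the real coefficients as reconstructed in this copy:

```python
import numpy as np
from numpy.polynomial import polynomial as P

Q = [9.0, -12.0, 6.0]                          # Q(z) = 3(2z^2 - 4z + 3)
Pm = [[[9.0, -3.0, 3.0], [0.0]],               # row 1 of P(z): (3(z^2 - z + 3), 0)
      [[0.0, 9.0, -12.0], [9.0, -12.0, 6.0]],  # row 2 of P(z): (3z(3 - 4z), Q(z))
      [[0.0, 9.0, -3.0], [0.0]]]               # row 3 of P(z): (3z(3 - z), 0)

# (ii) Q divides ||P||^2 (entries are real here, so |p|^2 = p^2)
normP2 = [0.0]
for row in Pm:
    for p in row:
        normP2 = P.polyadd(normP2, P.polymul(p, p))
_, rem = P.polydiv(normP2, Q)
assert np.allclose(rem, 0.0)

# (iii) Q(z) f(z) - P(z) = O(z^{n+1}) with n = 2
c = [np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]]),
     np.array([[1.0, 0.0], [1.0, 0.0], [1.0, 0.0]]),
     np.array([[1.0, 0.0], [0.0, 0.0], [1.0, 0.0]])]
for u in range(3):
    for v in range(2):
        f_uv = [c[0][u, v], c[1][u, v], c[2][u, v]]
        diff = P.polysub(P.polymul(Q, f_uv), Pm[u][v])
        assert np.allclose(diff[:3], 0.0)      # coefficients of z^0, z^1, z^2 vanish
```

Each of the two assertions fails if either the divisibility condition (2.9) or the order condition (2.10) is violated, so the same loop can be reused to sanity-check any candidate (P, Q) pair.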
Lemma 2.7. Let A(z) ∈ C^{s×t} be a matrix polynomial, A(z) ≠ 0, and let B(z) be a real scalar polynomial. If (2.4) is applied to the rational function B(z)/A(z), then

B(z)/A(z) = B(z)Ā(z)/‖A(z)‖^2 = P(z)/Q(z),

where

P(z) = B(z)Ā(z),  Q(z) = ‖A(z)‖^2.
B_0(x_i) = A(x_i),  i = 0, 1, …, n,
B_1(x_0, x_1) = (x_1 − x_0)/(B_0(x_1) − B_0(x_0)),  (2.11)
3. Algebraic properties of GMPA

[Eqs. (3.1) and (3.2), a worked rational-function identity with 2 × 2 complex matrix coefficients, are garbled beyond recovery in the source.]
If

N{U} ≤ n + 2k,  (3.3)

then

order{U} ≥ n + 1

for some real scalar polynomial Q(z). From (3.2) and (3.3), we find that

N{Q} ≤ 2n,  order{Q} ≥ 2n + 2,

and

u = min{N{P}: (P, Q) ∈ M_{n,2k} or (P, Q) ∈ N_{n,2k}},
v = min{N{Q}: (P, Q) ∈ M_{n,2k} or (P, Q) ∈ N_{n,2k}}.

Note that (P, Q) ∈ N_{n,2k} means that Q | ‖P‖^2 holds while the degree conditions (2.8) may not hold, as compared with (P, Q) ∈ M_{n,2k}.
Lemma 3.2.
(i) There exists a unique (P, Q) ∈ M_{n,2k} or (P, Q) ∈ N_{n,2k} such that

N{P} = u,  N{Q} = v.  (3.4)

(ii) For any (P, Q) ∈ M_{n,2k}, there exists a scalar polynomial ω(z) so that (3.5) holds.

Proof. By the definition of u and v, it is known that there exist (P_i(z), Q_i(z)) ∈ M_{n,2k} or (P_i(z), Q_i(z)) ∈ N_{n,2k}, i = 1, 2, such that

N{P_1} = u,  N{Q_1} ≥ v,  N{P_2} ≥ u,  N{Q_2} = v.

[Eqs. (3.7)–(3.9) are garbled beyond recovery in the source.]
[Eqs. (3.10)–(3.15), comprising a matrix power series f(z) = c_0 + c_1 z + c_2 z^2 + ⋯ with 2 × 2 complex coefficients, the associated denominator Q(z) = 25(z + 1) in (3.12), and transpose relations involving f^T(z), are garbled in the source.]
Proof. By Definition 2.6, Q(z)f(z) − P(z) = O(z^{n+1}). According to the usual multiplication of matrices, we have

Q(z)P^T(z)f(z) − P^T(z)P(z) = O(z^{n+1}).  (3.16)

From the condition f(0) ≠ 0, it is known that f_r^{−1}(z) exists.
Example 3.7. Let g(z) = f(z)_r^{−1} for the power series (3.10); then f(0) ≠ 0. Find [4/4]_g.

Solution. By (3.11) and (3.12), we get

Q(z)_g = ‖P(z)‖^2 = 625(z + 1)^2 (3z^2 + 1),
P(z)_g = Q(z)P̄(z) = 625(z + 1)^3 [z + 1  z; 0  1 − z],

where P(z)_g, Q(z)_g satisfy:
(i) Q(z)_g g(z) − P(z)_g = O(z^5),
(ii) N{P(z)_g} = 4, N{Q(z)_g} = 4,
(iii) ‖P(z)_g‖^2 = 625(z + 1)^3 Q(z)_g, Q(z)_g | ‖P(z)_g‖^2.
So [4/4]_g = P(z)_g / Q(z)_g.
Theorem 3.8. Let f(z) be given by (1.1), z ∈ R, and

g(z) = z^{−m} [ f(z) − Σ_{i=0}^{m−1} c_i z^i ] = c_m + c_{m+1} z + c_{m+2} z^2 + ⋯.

If

m ≥ 1,  n − m ≥ 2k − 1,  (3.17)

then [n − m/2k]_g(z) exists and

[n − m/2k]_g(z) = z^{−m} { [n/2k]_f(z) − Σ_{i=0}^{m−1} c_i z^i }.

Proof. Let [n/2k]_f = P(z)/Q(z). Then

P(z) − Q(z) Σ_{i=0}^{m−1} c_i z^i = O(z^m) + O(z^{n+1}) = O(z^m).  (3.18)

Define

P_1(z) = z^{−m} [ P(z) − Q(z) Σ_{i=0}^{m−1} c_i z^i ].

Then by (3.17),

N{P_1(z)} ≤ n − m.

From (3.18), we obtain that

Q(z) [ f(z) − Σ_{i=0}^{m−1} c_i z^i ] − [ P(z) − Q(z) Σ_{i=0}^{m−1} c_i z^i ] = O(z^{n+1}).  (3.19)

From

z^{2m} ‖P_1(z)‖^2 = ‖P(z)‖^2 + Q^2(z) ‖ Σ_{i=0}^{m−1} c_i z^i ‖^2 − Q(z) [ P(z) · Σ_{i=0}^{m−1} c̄_i z^i + P̄(z) · Σ_{i=0}^{m−1} c_i z^i ]

and Q(z) | ‖P(z)‖^2, we obtain Q(z) | ‖P_1(z)‖^2. Hence [n − m/2k]_g(z) exists and

[n − m/2k]_g(z) = z^{−m} { [n/2k]_f(z) − Σ_{i=0}^{m−1} c_i z^i }.
4. Determinantal formulas of GMPA

Theorem 4.1. The denominator scalar polynomial Q(z) and the numerator matrix polynomial P(z) of the GMPA [n/2k]_f can be represented as

Q(z) = det
[ 0           L01          L02         ⋯  L0,2k−1       L0,2k
  L10         0            L12         ⋯  L1,2k−1       L1,2k
  L20         L21          0           ⋯  L2,2k−1       L2,2k
  ⋮           ⋮            ⋮               ⋮             ⋮
  L2k−1,0     L2k−1,1      L2k−1,2     ⋯  0             L2k−1,2k
  z^{2k}      z^{2k−1}     z^{2k−2}    ⋯  z             1 ]          (4.1)
and

P(z) = det
[ 0          L01         L02        ⋯  L0,n−1      L0,n
  L10        0           L12        ⋯  L1,n−1      L1,n
  L20        L21         0          ⋯  L2,n−1      L2,n
  ⋮          ⋮           ⋮              ⋮           ⋮
  Ln−1,0     Ln−1,1      Ln−1,2     ⋯  0           Ln−1,n
  c_0 z^n    Σ_{i=0}^{1} c_i z^{i+n−1}   Σ_{i=0}^{2} c_i z^{i+n−2}   ⋯   Σ_{i=0}^{n−1} c_i z^{i+1}   Σ_{i=0}^{n} c_i z^i ]    (4.2)
where

Lij = Σ_{l=0}^{j−i−1} c_{l+i+n−2k+1} · c̄_{j−l+n−2k} = Σ_{l=0}^{j−i−1} Σ_{u=1}^{s} Σ_{v=1}^{t} c_{l+i+n−2k+1}^{(uv)} c̄_{j−l+n−2k}^{(uv)},  j > i,  (4.3)

Lij = −Lji,  j < i,  (4.4)

where c_l = (c_l^{(uv)}).
The proof of (4.1) was given by Chuanqing [13]; it is an extension of that of Graves-Morris and Jenkins [22] from the vector case to the matrix case. The proof of (4.2) is given here for the first time.
Proof. (i) n = 2k. We express (P(z), Q(z)) as

Q(z) = Q_0 + Q_1 z + ⋯ + Q_{2k} z^{2k},  (4.5)
P(z) = P_0 + P_1 z + ⋯ + P_n z^n,  P_i ∈ C^{s×t}.  (4.6)
By Definition 2.6,

‖P(z)‖^2 / Q(z)  (4.7)

is a polynomial of degree at most 2n − 2k. [The vanishing-coefficient conditions (4.8)–(4.10), through the power z^{2n−2k+1}, are garbled in the source.] By (4.7) we find that (4.10) represents 2k + 1 (n = 2k) linear equations for Q_0, Q_1, …, Q_{2k} of (4.5), which may be expressed as

Σ_{j=0}^{2k} Lij Q_{2k−j} = 0,  i = 0, 1, …, 2k − 1,  (4.11)

Σ_{j=0}^{2k} L_{2k,j} Q_{2k−j} = 0,  (4.12)

where the coefficients of Q_{2k−j} in (4.11) are the Lij, as given by (4.3) and (4.4).
Eqs. (4.11) and (4.5) form a system of 2k + 1 non-homogeneous equations for Q_0, Q_1, …, Q_{2k}, expressed as

[ 0          L01        L02       ⋯  L0,2k−1    L0,2k
  L10        0          L12       ⋯  L1,2k−1    L1,2k
  L20        L21        0         ⋯  L2,2k−1    L2,2k
  ⋮          ⋮          ⋮             ⋮          ⋮
  L2k−1,0    L2k−1,1    L2k−1,2   ⋯  0          L2k−1,2k
  z^{2k}     z^{2k−1}   z^{2k−2}  ⋯  z          1 ]
  ·
( Q_{2k}, Q_{2k−1}, Q_{2k−2}, …, Q_0 )^T
  =
( 0, 0, …, 0, Q(z) )^T.   (4.13)
Accordingly, the numerator is

P(z) = c_0 Q_0 + (c_1 Q_0 + c_0 Q_1) z + ⋯ + ( Σ_{j=0}^{n} c_{n−j} Q_j ) z^n
     = ( Σ_{i=0}^{n} c_i z^i ) Q_0 + ( Σ_{i=0}^{n−1} c_i z^{i+1} ) Q_1 + ⋯ + c_0 z^n Q_n,  (4.14)

which is the expansion of the determinant (4.2) along its last row.
(ii) n ≠ 2k. For n < 2k, define

D_i = 0,  i = 0, 1, …, 2k − n − 1;  D_i = c_{i−2k+n},  i = 2k − n, 2k − n + 1, …, 2k.  (4.15)

For n > 2k, define

f̃(z) = Σ_{i=n−2k}^{∞} c_i z^{i+2k−n} = z^{2k−n} [ f(z) − Σ_{i=0}^{n−2k−1} c_i z^i ].  (4.16)
Example 4.2. Let f(z) = c_0 + c_1 z + c_2 z^2 with the 3 × 2 coefficients

f(z) = [1 0; 0 1; 0 0] + [1 0; 1 0; 1 0] z + [1 0; 0 0; 1 0] z^2.

Find [2/2]_f.

Solution. By (4.1) and (4.2), we get

Q(z) = det [0  3  4; −3  0  2; z^2  z  1] = 3(2z^2 − 4z + 3),

P(z) = det [0  3  4; −3  0  2; c_0 z^2  c_0 z + c_1 z^2  c_0 + c_1 z + c_2 z^2]
     = 3 [z^2 − z + 3  0; z(3 − 4z)  2z^2 − 4z + 3; z(3 − z)  0],

where (P(z), Q(z)) ∈ M_{2,2} satisfy:
(i) N{P} = 2, N{Q} = 2,
(ii) ‖P‖^2 = 3(11z^2 − 2z + 6) Q, Q | ‖P‖^2,
(iii) Q(z)f(z) − P(z) = O(z^3).
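The determinantal formula (4.1) can be exercised numerically. The sketch below (numpy) recomputes the Lij of (4.3)–(4.4) for n = 2k = 2 from the coefficients of Example 4.2 as reconstructed in this copy, and expands the bordered determinant along its last row:

```python
import numpy as np

c = [np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]]),
     np.array([[1.0, 0.0], [1.0, 0.0], [1.0, 0.0]]),
     np.array([[1.0, 0.0], [0.0, 0.0], [1.0, 0.0]])]

def L(i, j):
    # Eq. (4.3) with n = 2k = 2: L_ij = sum_{l=0}^{j-i-1} c_{l+i+1} . conj(c_{j-l});
    # Eq. (4.4): antisymmetry for j < i
    if i == j:
        return 0.0
    if i > j:
        return -L(j, i)
    return sum(np.sum(c[l + i + 1] * np.conj(c[j - l])).real for l in range(j - i))

top = np.array([[L(0, 0), L(0, 1), L(0, 2)],
                [L(1, 0), L(1, 1), L(1, 2)]])
# Q(z) = det of `top` bordered below by the row (z^2, z, 1): expand along that row
coef = [(-1.0) ** m * np.linalg.det(np.delete(top, m, axis=1)) for m in range(3)]
# coef = [coefficient of z^2, coefficient of z, constant term]
assert [round(x) for x in coef] == [6, -12, 9]   # Q(z) = 3(2z^2 - 4z + 3)
```

The same expansion with the partial-sum last row of (4.2) yields the matrix numerator P(z); only scalar 2 × 2 cofactors appear, so no matrix multiplication is needed, in line with the abstract's claim.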
To calculate the higher-order determinantal formulas (4.1) and (4.2), we introduce Cayley's theorem.
Lemma 4.3 (see [19]). Let A be a square matrix of even dimension. Let R, C denote the anti-symmetric matrices formed by altering only the rth row, column of A, respectively. Then

det A = Pf R · Pf C,  (4.17)

where Pf R denotes the Pfaffian formula of R.
According to (4.17), we obtain the following result in [14] (also see [19]).
Theorem 4.4. Define the Pfaffian formulas, respectively: Δ_1(z), built from the entries L12, L13, …, L_{2k,2k+1} bordered by the column of powers z^{2k}, …, z, 1, and Δ_2(z), built from the same entries bordered by the column of partial sums c_0 z^{2k}, Σ_{i=0}^{1} c_i z^{i+2k−1}, …, Σ_{i=0}^{2k} c_i z^i. Then hold:
(i) [Q(z) expressed through Δ_1(z); the display is garbled in the source]  (4.18)
(ii) [P(z) expressed through Δ_2(z); the display is garbled in the source]  (4.19)
For 2k = 4, with G_i = ‖c_i‖^2 and H_ij = c_i · c̄_j,

Q(z) = det
[ 0              G_1        2H_12        2H_13 + G_2    2H_14 + 2H_23
  −G_1           0          G_2          2H_23          2H_24 + G_3
  −2H_12         −G_2       0            G_3            2H_34
  −2H_13 − G_2   −2H_23     −G_3         0              G_4
  z^4            z^3        z^2          z              1 ]
= Q^Pf(z) Q^Pf(0),

where Q^Pf(z) denotes the Pfaffian formula associated with this bordered anti-symmetric array; abbreviate its entries row-wise as a, b, c, d; e, f, g, h; j, k [the displayed arrays for Q^Pf(z) are garbled in the source]. Similarly,

P(z) = det
[ 0              G_1        2H_12        2H_13 + G_2    2H_14 + 2H_23
  −G_1           0          G_2          2H_23          2H_24 + G_3
  −2H_12         −G_2       0            G_3            2H_34
  −2H_13 − G_2   −2H_23     −G_3         0              G_4
  c_0 z^4        c_0 z^3 + c_1 z^4   Σ_{i=0}^{2} c_i z^{i+2}   Σ_{i=0}^{3} c_i z^{i+1}   Σ_{i=0}^{4} c_i z^i ]
= P^Pf(z) P^Pf(0),  (4.20)

where

P^Pf(z) = (ah − bf + ce)(c_0 + c_1 z + c_2 z^2 + c_3 z^3 + c_4 z^4)
        + (de + ja − gb)(c_0 z + c_1 z^2 + c_2 z^3 + c_3 z^4)
        + (ak − gc + df)(c_0 z^2 + c_1 z^3 + c_2 z^4)
        + (jc − bk − hd)(c_0 z^3 + c_1 z^4) + (gh + ek − jf) c_0 z^4,
P^Pf(0) = (ah − bf + ce) c_0.

Notice that P(z) = P^Pf(z) P^Pf(0) uses the usual multiplication operation of matrices in (4.20).
Let

H(0, 2k, 2k − 1) =
[ 0          L01        L02       ⋯  L0,2k−1    L0,2k
  L10        0          L12       ⋯  L1,2k−1    L1,2k
  L20        L21        0         ⋯  L2,2k−1    L2,2k
  ⋮          ⋮          ⋮             ⋮          ⋮
  L2k−1,0    L2k−1,1    L2k−1,2   ⋯  0          L2k−1,2k ].
Theorem 4.6 (Existence). Let [n/2k]_f = P(z)/Q(z), where Q(z) and P(z) are given by (4.1) and (4.2), respectively, and n = 2k. Then [n/2k]_f exists if and only if

Q(0) = det H(0, 2k − 1, 2k − 1) ≠ 0.

Proof. By the construction of Q(z), we derive from (4.13) that

H(0, 2k, 2k − 1) ( Q_{2k}, Q_{2k−1}, …, Q_1, Q_0 )^T = 0,  (4.21)

and then

H(0, 2k − 1, 2k − 1) ( Q_{2k}, Q_{2k−1}, …, Q_1 )^T = −Q_0 ( L0,2k, L1,2k, …, L2k−1,2k )^T.  (4.22)

If Q(0) = det H(0, 2k − 1, 2k − 1) = 0, it holds from (4.22) that rank H(0, 2k, 2k − 1) < 2k. Hence, it follows from (4.1) that Q(z) ≡ 0, which contradicts definition (2.7) of GMPA.
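The criterion of Theorem 4.6 is easy to check in the setting of Example 4.2 (2k = 2), where H(0, 2k − 1, 2k − 1) = [0  L01; L10  0] with L01 = 3 as reconstructed in this copy:

```python
import numpy as np

H = np.array([[0.0, 3.0], [-3.0, 0.0]])   # H(0, 1, 1) for Example 4.2
# Q(0) = det H(0, 2k-1, 2k-1) = 9 != 0, so [2/2]_f exists
assert np.isclose(np.linalg.det(H), 9.0)
```

This agrees with Q(0) = 3(2·0 − 0 + 3) = 9 read off from the explicit Q(z) of Example 4.2.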
Example 4.7 [26]. Let

w(z) = [1 0; 0 1] + ⋯

[a 2 × 2 matrix power series whose z, z^2, z^3 coefficients involve the entries 1/2, 1/4, 1/4, 1/8; the display and the remainder of the example are garbled in the source].
5. ε-Algorithm of GMPA

By making use of the generalized inverse (2.4), the ε-algorithm of GMPA is defined by

ε_{−1}^{(j)} = 0,  j = 0, 1, 2, …,  (5.1)

ε_0^{(j)} = Σ_{i=0}^{j} c_i z^i,  j = 0, 1, 2, …,  (5.2)

ε_{k+1}^{(j)} = ε_{k−1}^{(j+1)} + (ε_k^{(j+1)} − ε_k^{(j)})_r^{−1},  j, k ≥ 0.  (5.3)

Theorem 5.1. If zero divisors are not encountered in (5.1)–(5.3), then

ε_{2k}^{(j)} = [j + 2k/2k]_f,  j, k ≥ 0.  (5.4)
Proof. For the zeroth column k = 0, the proof is obvious. From (5.1)–(5.3) and (2.4) it is derived that

ε_1^{(j)} = (ε_0^{(j+1)} − ε_0^{(j)})_r^{−1} = (c_{j+1} z^{j+1})_r^{−1} = c̄_{j+1} / (‖c_{j+1}‖^2 z^{j+1}),  (5.5)

ε_2^{(j)} = Σ_{i=0}^{j} c_i z^i + c_{j+1} z^{j+1} + z^{j+2} / ((c_{j+2})_r^{−1} − c_{j+1} z),  (5.6)

and

f(z) − ε_2^{(j)} = O(z^{j+3})  (z → 0).  (5.7)

Let g(z) = (c_{j+2})_r^{−1} − c_{j+1} z. Then N{‖g(z)‖^2} = 2, and (5.6) becomes

ε_2^{(j)} = ( ‖g(z)‖^2 ε_0^{(j+1)} + z^{j+2} ḡ(z) ) / ‖g(z)‖^2 = P_2(z)/Q_2(z),  (5.8)

with

N{Q_2} = 2.  (5.9)

From

‖P_2‖^2 = Q_2^2 ‖ε_0^{(j+1)}‖^2 + Q_2 z^{2(j+2)} + Q_2 ( ε_0^{(j+1)} · Ḡ + ε̄_0^{(j+1)} · G ),  G = z^{j+2} ḡ(z),

we get Q_2(z) | ‖P_2(z)‖^2.  (5.10)
Consequently ε_2^{(j)} = [j + 2/2]_f, and (5.4) holds for k = 1. Now suppose

[conditions (i) and (ii) are garbled in the source] and

(iii) ε_{2k}^{(j)} = [j + 2k/2k]_f.  (5.11)

From (5.1)–(5.3),

… = O(z^{j−2}) + O(z^{j−1}) = O(z^{j−1}),  (5.12)

ε_{2k+2}^{(j)} = O(z^{j+1}) − O((ε_{2k+1}^{(j)})_r^{−1}) = O(z^j),

so (5.9) is proved for k + 1. As (i) and (ii) stand, they are reduced fractions; suppose that

ε_{2k}^{(j+1)} = S(z)/T(z),  ε_{2k}^{(j)} = U(z)/V(z),  (5.13)

with [Eq. (5.14) garbled in the source] and

N{T} = 2k,  N{U} ≤ j + 2k,  N{V} = 2k.  (5.15)

From (5.14) and (5.15), we get N{F_c} ≤ 2k and N{V T} = 4k. Then it is derived that

M(z) = V(z)T(z)/F_c(z) = V(z)T(z) F̄_c(z) / ‖F_c(z)‖^2  (5.16)

is a matrix polynomial and N{M} ≥ 2k. It follows that

(ε_{2k}^{(j+1)} − ε_{2k}^{(j)})_r^{−1} = 1 / ( S(z)/T(z) − U(z)/V(z) ),  (5.17)

[Eq. (5.18) is garbled in the source], with

N{G} ≤ 2k + 1.  (5.19)

At any point ξ with S(ξ)/T(ξ) = ε_{2k}^{(j+1)}(ξ), it follows that

ε_{2k−1}^{(j+2)}(ξ) = ε_{2k−1}^{(j+1)}(ξ),  (5.20)

ε_{2k+1}^{(j)}(ξ) = ε_{2k−1}^{(j+1)}(ξ),  ε_{2k+1}^{(j+1)}(ξ) = ε_{2k−1}^{(j+2)}(ξ),  (5.21)

ε_{2k+1}^{(j)}(ξ) = ε_{2k+1}^{(j+1)}(ξ).  (5.22)

[Eqs. (5.23)–(5.28), involving G_0(z) ∈ C^{s×t}, are garbled in the source.]
Example 5.2. Let f(z) = c_0 + c_1 z + c_2 z^2 be the same as in Example 4.2. Find [2/2]_f by the ε-algorithm.

Solution. By (5.1)–(5.3) and (2.4),

ε_0^{(0)} = c_0,  ε_0^{(1)} = c_0 + c_1 z,  ε_0^{(2)} = c_0 + c_1 z + c_2 z^2,

ε_1^{(0)} = (1/(3z)) [1 0; 1 0; 1 0],  ε_1^{(1)} = (1/(2z^2)) [1 0; 0 0; 1 0],

ε_2^{(0)} = (1/(2z^2 − 4z + 3)) [z^2 − z + 3  0; z(3 − 4z)  2z^2 − 4z + 3; z(3 − z)  0] = [2/2]_f.
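The recursion (5.1)–(5.3) is short to run numerically. A sketch (numpy, evaluated at the single point z = 0.5; the generalized inverse is assumed in the Samelson form conj(A)/‖A‖^2, and the coefficients are those of Example 4.2 as reconstructed in this copy):

```python
import numpy as np

def ginv(A):
    # assumed Samelson-type generalized inverse for (2.4)
    return np.conj(A) / np.sum(np.abs(A) ** 2)

c = [np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]]),
     np.array([[1.0, 0.0], [1.0, 0.0], [1.0, 0.0]]),
     np.array([[1.0, 0.0], [0.0, 0.0], [1.0, 0.0]])]

z = 0.5
e0 = [sum(c[i] * z ** i for i in range(j + 1)) for j in range(3)]  # eps_0^{(j)}, Eq. (5.2)
e1 = [ginv(e0[j + 1] - e0[j]) for j in range(2)]                   # eps_1^{(j)}, Eq. (5.3)
e2 = e0[1] + ginv(e1[1] - e1[0])                                   # eps_2^{(0)}

Qz = 2 * z ** 2 - 4 * z + 3
R = np.array([[z * z - z + 3, 0.0],
              [z * (3 - 4 * z), Qz],
              [z * (3 - z), 0.0]]) / Qz          # [2/2]_f of Example 4.2
assert np.allclose(e2, R)                         # Theorem 5.1 with j = 0, k = 1
```

Only entrywise operations and one real scalar division per inverse occur, illustrating again that no matrix products or matrix inversions enter the construction.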
Now suppose

N = ε_{2k−2}^{(j+1)},  W = ε_{2k}^{(j−1)},  C = ε_{2k}^{(j)},  E = ε_{2k}^{(j+1)},  S = ε_{2k+2}^{(j−1)},

nw = ε_{2k−1}^{(j)},  ne = ε_{2k−1}^{(j+1)},  sw = ε_{2k+1}^{(j−1)},  se = ε_{2k+1}^{(j)},

where the lower-case entries occur in columns of odd index. According to the ε-algorithm (5.1)–(5.3), they are located in the ε-table as follows:

        N
    nw     ne
  W      C      E        (5.29)
    sw     se
        S

Lemma 5.3.

(N − C)_r^{−1} + (S − C)_r^{−1} = (W − C)_r^{−1} + (E − C)_r^{−1}.  (5.30)

Proof. As

(nw − ne) + (se − sw) = (se − ne) + (nw − sw),  (5.31)

it follows from (5.1)–(5.3) that (5.31) holds, and then (5.30) holds by (5.29).
By virtue of Lemma 5.3 and Theorem 5.1, we obtain the following result.

Theorem 5.4 (Wynn identity of GMPA). The identity

([j + 2k − 1/2k]_f − [j + 2k/2k]_f)_r^{−1} + ([j + 2k + 1/2k]_f − [j + 2k/2k]_f)_r^{−1}
= ([j + 2k − 1/2k − 2]_f − [j + 2k/2k]_f)_r^{−1} + ([j + 2k + 1/2k + 2]_f − [j + 2k/2k]_f)_r^{−1}

holds provided that the GMPAs involved are well defined by using (5.1)–(5.3) and the generalized inverse (2.4).
6. Continued fraction expression of GMPA

By making use of the generalized inverse (2.4), we construct the Thiele-type matrix-valued continued fraction

H(z) = B_0 + z/B_1 + z/B_2 + ⋯ + z/B_n + ⋯

with matrix elements B_i ∈ C^{s×t}, z ∈ R. The nth convergent of H(z) is defined by

R_n(z) = B_0 + z/B_1 + z/B_2 + ⋯ + z/B_n,  (6.1)

and it is evaluated by backward recursion.
As a result of [16], the recursive coefficient algorithm for (6.1), for the given power series (1.1), is as follows (D denotes differentiation with respect to z).

Coefficient algorithm:
A_0(z) = f(z),  B_0 = B_0(0) = A_0(0),
A_1(z) = (Df(z))_r^{−1} = conj(Df(z)) / ‖Df(z)‖^2,  B_1 = B_1(0) = A_1(0),
B_k = k (DA_{k−1}(z))_r^{−1} |_{z=0} = k conj(DA_{k−1}(z)) / ‖DA_{k−1}(z)‖^2 |_{z=0},  k ≥ 2.
Theorem 6.1 (Identification theorem). If zero divisors are not encountered in the construction of R_n(z) in (6.1) by using the coefficient algorithm with the generalized inverse (2.4), then

R_n(z) = [n/2k]_f = [2k/2k]_f for n = 2k, k = 0, 1, 2, …, and R_n(z) = [2k + 1/2k]_f for n = 2k + 1, k = 0, 1, 2, …  (6.2)

Proof. For n = 1,

R_1(z) = B_0 + z/B_1 = B_0 + z B̄_1/‖B_1‖^2 = (B_0 ‖B_1‖^2 + z B̄_1)/‖B_1‖^2 = P_1(z)/Q_1(z),  (6.3)

with

N{P_1} ≤ 1,  N{Q_1} = 0,  Q_1 | ‖P_1‖^2.  (6.4)
For n = 2k, k = 1,

R_2(z) = B_0 + z/(B_1 + z/B_2) = B_0 + z/S_1(z),  (6.5)

where S_1(z) = B_1 + z/B_2 = P̃_1(z)/Q̃_1(z) with

N{Q̃_1} = 0,  Q̃_1 | ‖P̃_1‖^2.  (6.6)
In general, suppose

R_n(z) = B_0 + z/B_1 + z/B_2 + ⋯ + z/B_n = B_0 + z/S_n(z),  (6.7)

where  (6.8)

S_n(z) = B_1 + z/B_2 + z/B_3 + ⋯ + z/B_n = P̃_{n−1}(z)/Q̃_{n−1}(z).

Using the inductive hypothesis (6.7), we obtain that

N{P̃_{n−1}} = n − 1 = 2k,  N{Q̃_{n−1}} = 2k,  Q̃_{n−1} | ‖P̃_{n−1}‖^2.
Then

R_n(z) = B_0 + z/S_n(z) = B_0 + z Q̃_{n−1} conj(P̃_{n−1}) / ‖P̃_{n−1}‖^2 = ( B_0 g_{n−1} + z Q̃_{n−1} conj(P̃_{n−1}) ) / g_{n−1} = P_n(z)/Q_n(z),  (6.11)

where g_{n−1} = ‖P̃_{n−1}(z)‖^2, and

N{Q_n} = 2k.  (6.12)

For

‖P_n‖^2 = ‖B_0‖^2 g_{n−1}^2 + z^2 Q̃_{n−1}^2 g_{n−1} + z g_{n−1} Q̃_{n−1} ( B_0 · P̃_{n−1} + B̄_0 · conj(P̃_{n−1}) ),

we get

Q_n | ‖P_n‖^2.  (6.13)
Thus (6.2) holds for n = 2k, k = 0, 1, 2, …

Example 6.2. Let f(z) = c_0 + c_1 z + c_2 z^2 be the same as in Example 4.2. By the coefficient algorithm,

R_2(z) = [1 0; 0 1; 0 0] + z / ( (1/3)[1 0; 1 0; 1 0] ) + z / ( −(1/2)[1 0; 4 0; 1 0] )
       = (1/(2z^2 − 4z + 3)) [z^2 − z + 3  0; z(3 − 4z)  2z^2 − 4z + 3; z(3 − z)  0] = [2/2]_f,

where

A_0(z) = f(z),  B_0 = B_0(0) = A_0(0) = [1 0; 0 1; 0 0],

A_1(z) = (Df(z))_r^{−1} = (1/(8z^2 + 8z + 3)) [1 + 2z  0; 1  0; 1 + 2z  0],

B_1 = B_1(0) = A_1(0) = (1/3) [1 0; 1 0; 1 0],

B_2 = 2 (DA_1(z))_r^{−1} |_{z=0},  with

DA_1(z) = ( −1/(8z^2 + 8z + 3)^2 ) [2(8z^2 + 8z + 1)  0; 8(2z + 1)  0; 2(8z^2 + 8z + 1)  0],
‖DA_1(z)‖^2 = 8[(8z^2 + 8z + 1)^2 + 8(2z + 1)^2] / (8z^2 + 8z + 3)^4,

so that

B_2 = −(1/2) [1 0; 4 0; 1 0].
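The backward recursion for (6.1) is equally direct. A sketch (numpy; the Samelson-form generalized inverse is assumed, and the Thiele coefficients are those of Example 6.2 as reconstructed in this copy):

```python
import numpy as np

def ginv(A):
    # assumed Samelson-type generalized inverse for (2.4)
    return np.conj(A) / np.sum(np.abs(A) ** 2)

B0 = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]])
B1 = np.array([[1.0, 0.0], [1.0, 0.0], [1.0, 0.0]]) / 3.0
B2 = np.array([[1.0, 0.0], [4.0, 0.0], [1.0, 0.0]]) / -2.0

def R2(z):
    # backward recursion: innermost tail first, one generalized inverse per level
    return B0 + z * ginv(B1 + z * ginv(B2))

z = 0.5
Qz = 2 * z ** 2 - 4 * z + 3
expected = np.array([[z * z - z + 3, 0.0],
                     [z * (3 - 4 * z), Qz],
                     [z * (3 - z), 0.0]]) / Qz
assert np.allclose(R2(z), expected)          # R_2(z) = [2/2]_f, as in Theorem 6.1
```

Evaluating from the innermost term outward keeps every "division" a division by a real scalar norm, which is what makes the backward recursion numerically cheap.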
We note that [2/2]_f in Example 4.2 is the same as that in Example 5.2 and in Example 6.2. This illustrates that the generalized inverse (2.4), which is based on the scalar product of matrices (2.1), is successful in matrix-valued rational approximation and interpolation. On the other hand, Lemma 2.4 shows that we need not compute each inverse in the construction process of GMPA in some cases.
Acknowledgement
The author would like to thank the referees for their corrections and many valuable suggestions.
References
[1] A.C. Antoulas, On recursiveness and related topics in linear systems, IEEE Trans. Automat. Control 31 (1986) 1121–1135.
[2] A.C. Antoulas, Rational interpolation and the Euclidean algorithm, Linear Algebra Appl. 108 (1988) 157–171.
[3] B. Beckermann, G. Labahn, A uniform approach for the fast computation of matrix-type Padé approximants, SIAM J. Matrix Anal. Appl. 15 (1994) 804–823.
[4] G.A. Baker, P.R. Graves-Morris, Padé Approximants, Parts I & II, Addison-Wesley, Reading, MA, 1981.
[5] N.K. Bose, S. Basu, Theory and recursive computation of 1-D matrix Padé approximants, IEEE Trans. Circuits Syst. CAS-27 (1980) 322–325.
[6] A. Bultheel, M. Van Barel, Padé techniques for model reduction in linear system theory: a survey, J. Comput. Appl. Math. 14 (1986) 401–438.
[7] A. Bultheel, Recursive algorithms for the matrix Padé table, Math. Comput. 35 (1980) 875–892.
[8] A. Bultheel, M. Van Barel, A matrix Euclidean algorithm and the matrix minimal Padé approximation problem, in: C. Brezinski (Ed.), Continued Fractions and Padé Approximants, North-Holland, Amsterdam, 1990, pp. 11–51.
[9] S. Cabay, G. Meleshko, A weakly stable algorithm for Padé approximants and the inversion of Hankel matrices, SIAM J. Matrix Anal. Appl. 14 (1993) 735–765.
[10] G. Chuanqing, Thiele-type and Lagrange-type generalized inverse rational interpolation for rectangular complex matrices, Linear Algebra Appl. 295 (1999) 7–30.
[11] G. Chuanqing, Bivariate Thiele-type matrix valued rational interpolants, J. Comput. Appl. Math. 80 (1997) 71–82.
[12] G. Chuanqing, C. Zhibing, Matrix valued rational interpolants and its error formula, Math. Numer. Sinica 17 (1995) 73–77.
[13] G. Chuanqing, Generalized inverse matrix valued Padé approximants, Numer. Sinica 19 (1997) 19–28.
[14] G. Chuanqing, Pfaffian formula for generalized inverse matrix Padé approximation and application, J. Numer. Meth. Comput. Appl. 19 (1998) 283–289.
[15] G. Chuanqing, Multivariate generalized inverse vector-valued rational interpolants, J. Comput. Appl. Math. 84 (1997) 137–146.
[16] G. Chuanqing, Z. Gongqin, Matrix valued rational approximation, J. Math. Res. Exposition 16 (1996) 301–306.
[17] A. Draux, Bibliography-Index, Report ANO-145, Univ. des Sciences et Techniques de Lille, November 1984.
[18] W.B. Gragg, The Padé table and its relation to certain algorithms of numerical analysis, SIAM Rev. 14 (1972) 1–61.
[19] P.R. Graves-Morris, G.A. Baker Jr., C.F. Woodcock, Cayley's theorem and its application in the theory of vector Padé approximants, J. Comput. Appl. Math. 66 (1996) 255–265.
[20] P.R. Graves-Morris, Vector valued rational interpolants I, Numer. Math. 41 (1983) 331–334.
[21] P.R. Graves-Morris, Vector valued rational interpolants II, IMA J. Numer. Anal. 4 (1984) 209–224.
[22] P.R. Graves-Morris, C.D. Jenkins, Vector valued rational interpolants III, Constr. Approx. 2 (1986) 263–289.
[23] P.R. Graves-Morris, D.E. Roberts, From matrix to vector Padé approximants, J. Comput. Appl. Math. 51 (1994) 205–236.
[24] X. Guoliang, A. Bultheel, Matrix Padé approximation: definitions and properties, Linear Algebra Appl. 137&138 (1990) 67–136.
[25] G. Labahn, S. Cabay, Matrix Padé fractions and their computation, SIAM J. Comput. 18 (1989) 639–657.
[26] C. Pestano-Gabino, C. Gonzalez-Concepcion, Rationality, minimality and uniqueness of representation of matrix formal power series, J. Comput. Appl. Math. 94 (1998) 23–38.
[27] Y. Starkand, Explicit formulas for matrix-valued Padé approximants, J. Comput. Appl. Math. 5 (1979) 63–65.