
zerosspectral
2008/5/15
On the zero-modules of spectral factors using state space methods: the role of a coupled Sylvester- and homogeneous linear equation

György Michaletzky

1 Abstract
The poles and zeros of a transfer function can be studied by various means. The main motivation of the present paper is to give a state-space description of the module-theoretic definition of zeros introduced and analyzed by Wyman et al. in [12] and [13], and to find the connections between the zero modules of a matrix-valued rational spectral density and its (left, rectangular) spectral factor without assuming that the spectral density is of full rank. This analysis is carried out for proper (in general acausal) spectral factors. The transformation of a spectral factor into a left-invertible transfer function using a tall inner function plays an important role in this analysis, eliminating in this way the generic zeros corresponding to the kernel of the spectral factor. As is well known, the zeros are connected to various invariant subspaces arising in geometric control; see e.g. Aling and Schumacher [1] for a complete description. The connections to these subspaces are also mentioned in the paper.

2 Introduction
In this paper the connections between the zero structure of a rational, matrix-valued spectral density and its spectral factor are analysed. The study of zeros of transfer functions already has a long history, during which various zeros were defined and various approaches were used to describe them. We cannot give a detailed description of this history, but the book written by H. Rosenbrock (70) should be cited here [9], as well as that of T. Kailath (80) [5]. One of the approaches used in these books to define the zeros of a transfer function is based on the Smith-McMillan form of these functions; these are the so-called transmission zeros. C. B. Schrader and M. K. Sain (89) in [10] give a survey on the notions and results concerning zeros of linear time-invariant systems, including invariant zeros, system zeros, input-decoupling zeros, output-decoupling zeros and input-output-decoupling zeros as well. The connection of these zeros to invariant subspaces appearing in geometric control theory was considered e.g. in A. S. Morse (73) [8] for strictly proper transfer functions, and for proper transfer functions, not assuming the minimality of the realization, in H. Aling and J. M.
Schumacher (84) [1], showing that the combined decomposition of the state space, considering Kalman's canonical decomposition and Morse's canonical decomposition in the same lattice diagram, corresponds to the various notions of multivariate zeros.

(Author's affiliation: Eötvös Loránd University, H-1111 Pázmány Péter sétány 1/C, Budapest, Hungary; e-mail: michgy@ludens.elte.hu)


The book written by J. Ball, I. Gohberg and L. Rodman [2] uses the concept of left (and right) zero pairs. This offers the possibility of analyzing, together with the position of the zeros, the corresponding zero directions as well.
The zeros play an important role in the theory of spectral factors. The connection between the zeros of spectral factors, splitting subspaces and the algebraic Riccati inequality was studied in A. Lindquist et al. (95) [6]. An important aspect of this paper was further analyzed by P. Fuhrmann and A. Gombani (98), where the concept of externalized zeros was introduced. (Interestingly, this concept can be formulated in the framework of dilation theory, as was pointed out by the author in [7].)
The starting point of the present paper is the module-theoretic approach to the zeros of multivariate transfer functions defined by B. F. Wyman and M. K. Sain (83) [11], and further analyzed by Wyman et al. in [12], [13]. In this extension the so-called Wedderburn-Forney spaces play an important role. The main result in [13] is that the number of zeros and poles of a rational transfer function coincide (even in the matrix case), assuming that the zeros are counted in the right way. It is well known that to define the multiplicity of a finite zero (or even an infinite zero) the Rosenbrock matrix provides an appropriate tool. But it is an easy task to construct (non-square) matrix-valued transfer functions with no finite (infinite) zeros. In such cases it might happen that there are rational functions mapped to the identically zero function by the transfer function. Then the functions in the kernel of the transfer function form an infinite-dimensional vector space over the field of scalars, but a finite-dimensional one over the field of rational functions. But defining the multiplicity of this zero-function as the corresponding dimension of the kernel subspace does not give a satisfactory result. To this aim the notion of minimal polynomial bases should be used, as in [3] by G. D. Forney.

3 Preliminaries and notation


The main motivation of the present paper is to give a matrix-theoretic description of the corresponding zero-concepts, i.e. to show how to compute these zero-modules starting from a state-space realization of the transfer function.
The following theorem can be considered as the starting point of the analysis. (See Theorem 3.1 in [4].)

Theorem 1. Let F(s) = D + C(sI − A)^{-1}B be a matrix-valued rational function of size p × m and consider a function g(s) = H(sI − Λ)^{-1}.

(i) Assume that there exists a matrix G and a (matrix-valued) polynomial Ψ such that the pair (Λ, G) is controllable and

F(gG + Ψ) is analytic at the eigenvalues of Λ.

Then, if the pair (C, A) is observable, there exists a matrix Π such that

$$\begin{bmatrix} A & B \\ C & D \end{bmatrix}\begin{bmatrix} \Pi \\ H \end{bmatrix} = \begin{bmatrix} \Pi\Lambda \\ 0 \end{bmatrix}. \tag{3.1}$$

(ii) Assume that there exists a matrix Π such that Π, H, Λ satisfy equation (3.1). If now (H, Λ) is an observable pair and (A, B) is a controllable pair, then there exists a matrix polynomial Ψ such that the function F(g + Ψ) is a polynomial; in particular, it is analytic at the eigenvalues of Λ.

The first equation in (3.1) is a Sylvester equation, while the second one is a homogeneous linear equation. They are coupled, leading to the coupled Sylvester and homogeneous linear equation mentioned in the title.
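Numerically, a pair (Π, H) satisfying (3.1) for a prescribed Λ can be found as a null space of one linear map, obtained by vectorizing the coupled equations. The following sketch is a hand-built illustration (the example system F(s) = (s+2)/(s+1) and the candidate zero Λ = −2 are our own choices, not taken from [4]):

```python
import numpy as np
from scipy.linalg import null_space

A = np.array([[-1.0]]); B = np.array([[1.0]])
C = np.array([[1.0]]);  D = np.array([[1.0]])   # F(s) = (s+2)/(s+1), zero at s = -2
Lam = np.array([[-2.0]])                        # candidate zero location Lambda
n, m, k = A.shape[0], B.shape[1], Lam.shape[0]

# vec(A Pi + B H - Pi Lam) = 0 and vec(C Pi + D H) = 0, column-major vectorization
top = np.hstack([np.kron(np.eye(k), A) - np.kron(Lam.T, np.eye(n)),
                 np.kron(np.eye(k), B)])
bot = np.hstack([np.kron(np.eye(k), C), np.kron(np.eye(k), D)])
N = null_space(np.vstack([top, bot]))
assert N.shape[1] >= 1          # nontrivial solution: -2 is indeed a zero of F
Pi = N[:n * k, 0].reshape((n, k), order='F')
H = N[n * k:, 0].reshape((m, k), order='F')
assert np.allclose(A @ Pi + B @ H, Pi @ Lam)    # Sylvester part of (3.1)
assert np.allclose(C @ Pi + D @ H, 0.0)         # homogeneous part of (3.1)
```

The null space is nontrivial exactly when the eigenvalues of Λ can occur as zeros of F with the corresponding directions, which is what Theorem 1 characterizes.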
Introducing the notation C^p[s] (and C^m[s]) for the space of p-vectors (m-vectors) of polynomials, we can write that in case (ii) the columns of g are in F^{-1}(C^p[s]) + C^m[s]. This condition should be compared with the zero module definition of Wyman and Sain (see [11]). The finite zero module Z_fin(F) is defined as follows:

$$Z_{\mathrm{fin}}(F) = \frac{F^{-1}\left(\mathbb{C}^p[s]\right) + \mathbb{C}^m[s]}{\ker F + \mathbb{C}^m[s]}.$$


In words, the numerator contains those rational input functions for which there exists a polynomial such that applying the transfer function F to the sum, a polynomial is obtained.
Note that equation (3.1) implies that

$$A\,\mathrm{Im}\,\Pi \subseteq \mathrm{Im}\,\Pi + \mathrm{Im}\,B, \qquad C\,\mathrm{Im}\,\Pi \subseteq \mathrm{Im}\,D,$$

implying that there exists a state feedback K such that

$$(A + BK)\,\mathrm{Im}\,\Pi \subseteq \mathrm{Im}\,\Pi \subseteq \ker(C + DK).$$

In other words, Im Π is an output-nulling controlled invariant subspace. It is well known that for a given system Σ = (A, B, C, D) there exists a maximal output-nulling controlled invariant subspace, denoted by V*(Σ). This can be obtained as the image of a maximal solution (Π_max, H_max, Λ_max) of equation (3.1). (The maximality is meant in the subspace inclusion sense for Im Π.) We might assume that the columns of Π_max provide a basis in V*(Σ).
Let us observe that even the maximal solution triplet (Π, H, Λ) of equation (3.1) is not unique. Although the subspace Im(Π_max) = V*(Σ) = V*(A, B, C, D) is determined by the realization of the transfer function F, so that without loss of generality we might fix a basis in it, determining in this way the matrix Π_max, even for a fixed Π the matrices Λ and H are not necessarily uniquely defined. Obviously, the differences R_0 = H_1 − H_2, Θ_0 = Λ_1 − Λ_2 of two solutions are solutions of the following equation:

$$\begin{bmatrix} A & B \\ C & D \end{bmatrix}\begin{bmatrix} 0 \\ R_0 \end{bmatrix} = \begin{bmatrix} \Pi_{\max}\Theta_0 \\ 0 \end{bmatrix}. \tag{3.2}$$
The following two statements give the next building blocks of the analysis of zero-modules in terms of state-space equations.
Theorem 2. Let (C, A) be an observable pair. Assume that the columns of the function H_fzk(sI − Λ_fzk)^{-1} provide a basis in Z(F) ⊕ W(ker F). Let Π_fzk be the corresponding solution of (3.1).
Consider now a maximal solution in terms of Θ_0 and R_0 of the equation

$$\begin{bmatrix} A & B \\ C & D \end{bmatrix}\begin{bmatrix} 0 \\ R_0 \end{bmatrix} = \begin{bmatrix} \Pi_{\max}\Theta_0 \\ 0 \end{bmatrix} \tag{3.3}$$

(the maximality is meant in the subspace inclusion sense for Im R_0). Let us introduce the notation

$$K_0(s) = R_0 + H_{\max}\left(sI - \Lambda_{\max}\right)^{-1}\Theta_0.$$

Then F K_0 = 0. Moreover, if for a proper rational q-tuple g the identity

$$F(s)g(s) = 0$$

holds, then there exists a rational function h(s) such that

$$g(s) = K_0(s)h(s),$$

i.e. the columns of K_0 generate the kernel of F over the field of rational functions.
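A minimal hand-built illustration (our own example, not from the paper): for the 1 × 2 function F(s) = [1, 1/(s+1)], the kernel over the rational functions is one-dimensional, and in the notation of Theorem 2 one may take R_0 = [0, −1]^T, H_max = [1, 0]^T, Λ_max = −1, Θ_0 = 1, so that K_0(s) = [1/(s+1), −1]^T:

```python
import numpy as np

# F(s) = [1, 1/(s+1)]; K0(s) = R0 + Hmax (sI - Lam_max)^{-1} Theta_0
def F(s):
    return np.array([[1.0, 1.0 / (s + 1.0)]])

def K0(s):
    return np.array([[0.0], [-1.0]]) + np.array([[1.0], [0.0]]) * (1.0 / (s + 1.0))

rng = np.random.default_rng(0)
for s in rng.uniform(0.5, 5.0, size=5):
    assert np.allclose(F(s) @ K0(s), 0.0)   # F K0 = 0 identically
```

Any rational kernel element of F is a rational multiple of this single column, as the theorem asserts.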

The kernel zero-module W(ker F) is defined as

$$W(\ker F) = \frac{\pi_-\left(\ker F\right)}{\ker F \cap s^{-1}\mathbb{C}^m[[s^{-1}]]},$$

where the mapping π_- renders to a rational function its strictly proper part, while C^m[[s^{-1}]] denotes the m-tuples of power series in s^{-1}.


Proposition 3. Under the assumptions of the previous theorem, the equivalence classes in W(ker F) are determined by the functions

$$H_{\max}\left(sI - \Lambda_{\max}\right)^{-1}\xi,$$

where ξ is any vector in

$$\langle \Lambda_{\max} \mid \Theta_0 \rangle = \mathrm{Im}\left[\Theta_0,\ \Lambda_{\max}\Theta_0,\ \Lambda_{\max}^2\Theta_0,\ \dots\right].$$

It can be proved that the subspace Π_max ⟨Λ_max | Θ_0⟩ coincides with the maximal output-nulling reachability subspace R*(Σ) of the given realization of F.
A subspace C is called an input-containing subspace if there exists an output injection L such that (A + LC)C ⊆ C and Im(B + LD) ⊆ C. The minimal input-containing subspace is denoted by C*(Σ). Note that R*(Σ) = V*(Σ) ∩ C*(Σ). (See e.g. Aling and Schumacher [1].)
The zero module at infinity is defined as

$$Z_\infty(F) = \frac{F^{-1}\left(s^{-1}\mathbb{C}^p[[s^{-1}]]\right) + s^{-1}\mathbb{C}^m[[s^{-1}]]}{\ker F + s^{-1}\mathbb{C}^m[[s^{-1}]]}.$$

I.e. those q-tuples of rational functions g should be considered for which there exists a strictly proper rational q-tuple h such that

$$F(g + h) \text{ is strictly proper,} \tag{3.4}$$

and g_1, g_2 with this property are considered to be equivalent if for some strictly proper q-tuple h the identity

$$F(g_1 - g_2 + h) = 0$$

holds.

Theorem 4. Assume that the pair (C, A) is observable. Then the equivalence classes in Z_∞(F) are determined by the vectors in C*(Σ), in the sense that for any ξ ∈ C*(Σ) there exists a finite input sequence producing no output but giving ξ as the next immediate state vector. The input sequence gives the coefficients of a polynomial in F^{-1}(s^{-1}C^p[[s^{-1}]]) + s^{-1}C^m[[s^{-1}]].
Two polynomials are taken to be equivalent if the difference of the corresponding vectors is in

$$R^*(\Sigma) = V^*(\Sigma) \cap C^*(\Sigma).$$

The last notion from the sequence of zero-modules we have to recall is the zero-module corresponding to the range of F (more precisely, to the defect of the range of F). This is denoted by W(Im F):

$$W(\mathrm{Im}\,F) = \frac{\pi_-\left(\mathrm{Im}\,F\right)}{\mathrm{Im}\,F \cap s^{-1}\mathbb{C}^p[[s^{-1}]]}.$$

Theorem 5.1 in [12] claims that the sum of the dimensions of these four zero modules is exactly the number of poles.
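As a quick sanity check of this count, consider the hand-built example F(s) = [1/(s+1), 1/(s+1)] (our own, not from [12]) with minimal realization A = −1, B = [1, 1], C = 1, D = [0, 0], so n = 1. Using the state-space descriptions given by Theorem 2, Proposition 3, Theorem 4 and Proposition 5:

```latex
% Output-nulling forces x = 0, so V^*(\Sigma) = \{0\} and R^*(\Sigma) = \{0\};
% \operatorname{Im} B already spans the state space, so C^*(\Sigma) = \mathbb{R}.
% The kernel is spanned by the constant vector (1,-1)^T, contributing nothing
% to W(\ker F); the only zero is the one at infinity:
\dim Z_{\mathrm{fin}}(F) + \dim Z_{\infty}(F) + \dim W(\ker F)
  + \dim W(\operatorname{Im} F) = 0 + 1 + 0 + 0 = 1 = n .
```

The single pole at s = −1 is thus balanced by the single zero at infinity, in accordance with Theorem 5.1 of [12].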
The state-space description of the zero-module W(Im F) is given by the following statement:

Proposition 5. Assume that the pair (C, A) is observable. Then the equivalence classes of W(Im F) are determined by the functions

$$C\left(sI - A\right)^{-1}\xi,$$

where ξ ∈ ⟨A | B⟩, and two functions given by the vectors ξ_1, ξ_2 are considered to be equivalent if

$$\xi_1 - \xi_2 \in V^*(\Sigma) + C^*(\Sigma).$$

In the statements above we have assumed that the pair (C, A) is observable. In the general case a full geometric description of the zeros is given by Aling and Schumacher in [1]. Under the assumption of observability their diagram simplifies to the following one, where B denotes the reachability subspace of the given realization, B = ⟨A | B⟩, and N the unobservability subspace.


[Lattice diagram (after Aling and Schumacher [1]): a Hasse diagram of invariant subspaces running from the state space X at the top down to {0} at the bottom. The intermediate nodes are built from V*, C* and B (among them V* + B, V* + C*, V*, (V* + C*) ∩ B, V* ∩ B, C* and V* ∩ C*), and the edges carry the labels i.d.syst., i.d.inv., co-range ind., trm. and kern.ind., indicating which kind of zero (input-decoupling, invariant, co-range, transmission, kernel) each quotient represents.]

4 Zero-modules of a spectral density


For a matrix A its adjoint will be denoted by A*, while for a matrix-valued function G(s) the notation G*(s) refers to its para-hermitian conjugate function (in the continuous-time sense), i.e. G*(s) = (G(−s̄))*.
From now on we are going to assume that for the matrix A the following property holds:

$$\sigma(A) \cap \sigma(-A^*) = \emptyset. \tag{4.5}$$

Denote by Φ the spectral density corresponding to the function F, considering it as a left spectral factor, i.e. Φ(s) = F(s)F*(s) = R + C(sI − A)^{-1}C̄* + C̄(−sI − A*)^{-1}C*. Then

$$\Phi(s) = R + \begin{bmatrix} C & \bar{C} \end{bmatrix}\left(sI - \begin{bmatrix} A & 0 \\ 0 & -A^* \end{bmatrix}\right)^{-1}\begin{bmatrix} \bar{C}^* \\ -C^* \end{bmatrix}, \tag{4.6}$$

where P is the unique solution of the Lyapunov equation

$$AP + PA^* + BB^* = 0 \tag{4.7}$$

and

$$\bar{C} = CP + DB^*, \qquad R = DD^*. \tag{4.8}$$

Also, the para-hermitian adjoint function has the realization

$$F^*(s) = D^* + B^*\left(sI + A^*\right)^{-1}\left(-C^*\right). \tag{4.9}$$
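The identities (4.6)-(4.8) are easy to verify numerically on the imaginary axis, where F*(iω) = F(iω)^H. The following sketch uses an arbitrarily chosen stable example system of our own:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Arbitrary stable example system (ours): F(s) = D + C (sI - A)^{-1} B
A = np.array([[-1.0, 0.5], [0.0, -2.0]])
B = np.array([[1.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.5]])

# (4.7): A P + P A* + B B* = 0  (solve_continuous_lyapunov solves A X + X A^H = Q)
P = solve_continuous_lyapunov(A, -B @ B.T)
Cbar = C @ P + D @ B.T          # (4.8)
R = D @ D.T

def F(s):
    return D + C @ np.linalg.inv(s * np.eye(2) - A) @ B

def Phi(s):                     # right-hand side of (4.6)
    return (R + C @ np.linalg.inv(s * np.eye(2) - A) @ Cbar.T
              + Cbar @ np.linalg.inv(-s * np.eye(2) - A.T) @ C.T)

for w in (0.0, 0.7, 3.0):
    assert np.allclose(Phi(1j * w), F(1j * w) @ F(1j * w).conj().T)
```

The partial-fraction identity behind (4.6) is exactly the Lyapunov equation (4.7): (sI − A)^{-1}BB*(−sI − A*)^{-1} = P(−sI − A*)^{-1} + (sI − A)^{-1}P.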

Since the function Φ is the product of two functions, we might expect that the zeros of both factors influence the zeros of Φ. Until now the zero-modules were considered with respect to the right action g ↦ Fg. Applying this to F*, i.e. g ↦ F*g, can be equivalently described by the left action of F: g ↦ g*F. The corresponding state-space equations can be obtained from the previous discussion in a straightforward manner. The subscript "left" will refer to the corresponding notions.
The following statement provides a connection between the various subspaces corresponding to the right and left zero-modules.


 
Proposition 6. Assume that F has the realization F(s) = D + C(sI − A)^{-1}B. Then

$$V^*_{\mathrm{left}}(\Sigma) = \left(C^*(\Sigma)\right)^\perp, \qquad C^*_{\mathrm{left}}(\Sigma) = \left(V^*(\Sigma)\right)^\perp.$$

To utilize this observation, first consider the various subspaces for the realization above of the spectral density Φ. To this aim let us introduce the notation

$$S = \begin{bmatrix} A & 0 & \bar{C}^* \\ 0 & -A^* & -C^* \\ C & \bar{C} & R \end{bmatrix}. \tag{4.10}$$

To describe the finite zero-module of Φ, the corresponding form of equation (3.1),

$$\begin{bmatrix} A & 0 & \bar{C}^* \\ 0 & -A^* & -C^* \\ C & \bar{C} & R \end{bmatrix}\begin{bmatrix} X \\ Y \\ Z \end{bmatrix} = \begin{bmatrix} XW \\ YW \\ 0 \end{bmatrix}, \tag{4.11}$$

should be considered.
Now observe that the para-hermitian symmetry Φ*(s) = Φ(s) is reflected in the block structure of S: swapping the two state blocks intertwines S with its adjoint. Consequently, if X, Y, Z, W give a solution of (4.11), then

$$S^* \begin{bmatrix} Y \\ -X \\ Z \end{bmatrix} = \begin{bmatrix} -YW \\ XW \\ 0 \end{bmatrix}, \tag{4.12}$$

or in other words

$$\left[Y^*,\ -X^*,\ Z^*\right] S = \left[-W^*Y^*,\ W^*X^*,\ 0\right]. \tag{4.13}$$
A special consequence of this is that if X, Y, Z, W and X_1, Y_1, Z_1, W_1 are solutions of (4.11), then

$$\left(X_1^*Y - Y_1^*X\right)W + W_1^*\left(X_1^*Y - Y_1^*X\right) = 0 \tag{4.14}$$

holds, giving in particular that ker(X_1^*Y − Y_1^*X) is W-invariant.
Now assume that the realization (4.6) is minimal and consider a maximal solution (X_max, Y_max, Z_max, W_max) of (4.11). Then the maximal output-nulling controlled invariant subspace for this realization of Φ will be

$$V^*(\Phi) = \mathrm{Im}\begin{bmatrix} X_{\max} \\ Y_{\max} \end{bmatrix} \tag{4.15}$$

and

$$V^*_{\mathrm{left}}(\Phi) = \mathrm{Im}\begin{bmatrix} Y_{\max} \\ -X_{\max} \end{bmatrix} = \begin{bmatrix} 0 & I \\ -I & 0 \end{bmatrix}\mathrm{Im}\begin{bmatrix} X_{\max} \\ Y_{\max} \end{bmatrix}. \tag{4.16}$$

Furthermore, using the orthogonality property stated in Proposition 6 we obtain that

$$C^*_{\mathrm{left}}(\Phi) = \ker\left[X_{\max}^*,\ Y_{\max}^*\right], \tag{4.17}$$

$$C^*(\Phi) = \ker\left[Y_{\max}^*,\ -X_{\max}^*\right] \tag{4.18}$$

$$\phantom{C^*(\Phi)} = \begin{bmatrix} 0 & I \\ -I & 0 \end{bmatrix}\ker\left[X_{\max}^*,\ Y_{\max}^*\right], \tag{4.19}$$

implying that

$$\dim V^*(\Phi) = \dim V^*_{\mathrm{left}}(\Phi), \qquad \dim C^*(\Phi) = \dim C^*_{\mathrm{left}}(\Phi), \qquad \dim V^*(\Phi) + \dim C^*(\Phi) = 2n.$$


Finally,

$$R^*(\Phi) = V^*(\Phi) \cap C^*(\Phi) = \begin{bmatrix} X_{\max} \\ Y_{\max} \end{bmatrix}\ker\left(X_{\max}^*Y_{\max} - Y_{\max}^*X_{\max}\right),$$

$$R^*_{\mathrm{left}}(\Phi) = V^*_{\mathrm{left}}(\Phi) \cap C^*_{\mathrm{left}}(\Phi) = \begin{bmatrix} Y_{\max} \\ -X_{\max} \end{bmatrix}\ker\left(X_{\max}^*Y_{\max} - Y_{\max}^*X_{\max}\right).$$

5 Elementary connections between the zeros of the spectral density and its spectral factors

There is an obvious connection between the kernel zero-module of Φ and that of F*.

Proposition 7. Assume that Φ = F F* holds. Then

$$\ker \Phi = \ker F^*, \quad\text{especially}\quad W(\ker \Phi) = W(\ker F^*).$$

If moreover the realizations of F and Φ are minimal, then

$$R^*(\Phi) = \begin{bmatrix} -P \\ I \end{bmatrix} R^{*0},$$

where R^{*0} denotes the maximal output-nulling reachability subspace corresponding to the left (adjoint) zero structure of F, cf. equation (5.20).

5.1 From spectral factors to spectral density


Lemma 8. Assume that (Θ⁰, H⁰, Λ⁰) is a solution of the following equation:

$$\left[\Theta^0,\ H^0\right]\begin{bmatrix} A & B \\ C & D \end{bmatrix} = \Lambda^0\left[\Theta^0,\ 0\right]. \tag{5.20}$$

Then X = −P(Θ⁰)*, Y = (Θ⁰)*, Z = (H⁰)* and W = −(Λ⁰)* are solutions of the equation

$$\begin{bmatrix} A & 0 & \bar{C}^* \\ 0 & -A^* & -C^* \\ C & \bar{C} & R \end{bmatrix}\begin{bmatrix} X \\ Y \\ Z \end{bmatrix} = \begin{bmatrix} XW \\ YW \\ 0 \end{bmatrix}, \tag{5.21}$$

thus, denoting by V⁰ the image of (Θ⁰)*,

$$\begin{bmatrix} -P \\ I \end{bmatrix} V^0 \subseteq V^*(\Phi).$$

This shows that the finite zeros and the zeros corresponding to the kernel module of F* appear directly in the zero structure of Φ. This is not surprising, due to the fact that right zero-modules are considered, i.e. the actions g ↦ F*g and g ↦ Φg = F F*g.

REMARK 1 Lemma 8 and identity (4.14) imply that

$$\left(X_{\max} + P Y_{\max}\right)\big\langle W_{\max} \mid \ker\left(X_{\max} + P Y_{\max}\right)\big\rangle \subseteq \ker \Theta^0_{\max}, \tag{5.22}$$

where P is the solution of the Lyapunov equation (4.7).

For the finite and kernel zeros of F (with respect to the action g ↦ Fg) to appear in the zero structure of Φ, the function F* should be applied to a function h in such a way that the output F*h hits the zero structure of F. This observation is reflected in the next statement, formulated in the more general form needed later:

Lemma 9. Assume that Π, H, Λ are solutions of the following equation:

$$\begin{bmatrix} A & B \\ C & D \end{bmatrix}\begin{bmatrix} \Pi \\ H \end{bmatrix} = \begin{bmatrix} \Pi\Lambda \\ 0 \end{bmatrix} \tag{5.23}$$

and furthermore L, K and U satisfy

$$\begin{bmatrix} -A^* & -C^* \\ B^* & D^* \end{bmatrix}\begin{bmatrix} L \\ K \end{bmatrix} = \begin{bmatrix} L\Lambda + U \\ H \end{bmatrix}. \tag{5.24}$$

Then for the matrices X = Π − PL, Y = L, Z = K the following equation holds:

$$\begin{bmatrix} A & 0 & \bar{C}^* \\ 0 & -A^* & -C^* \\ C & \bar{C} & R \end{bmatrix}\begin{bmatrix} X \\ Y \\ Z \end{bmatrix} = \begin{bmatrix} X\Lambda - PU \\ Y\Lambda + U \\ 0 \end{bmatrix}. \tag{5.25}$$

In particular, if U = 0 then the corresponding zero of F appears among the zeros of Φ. The problem of finding those triplets (Π, H, Λ) solving (5.23) for which there exists a triplet (L, K, U = 0) such that (5.24) holds would lead to solving a Riccati equation. To assure the solvability of the corresponding Riccati equation additional conditions are needed. We are going to show that this can be avoided by reducing the problem to left-invertible spectral factors of the same spectral density. To obtain this transformation a special Riccati equation should be solved, but it can be proven that this Riccati equation always has a solution. The following statement shows that in the case of left-invertibility the solvability of equations (5.23), (5.24) is always guaranteed.
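The Riccati equations alluded to here are of the standard continuous-time algebraic type and can be solved with stock numerical tools. Purely as an illustration of that machinery (the example data below are our own and not the spectral-factor equations of this section):

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Illustrative CARE: A' X + X A - X B R^{-1} B' X + Q = 0 (example data ours)
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])
X = solve_continuous_are(A, B, Q, R)
residual = A.T @ X + X @ A - X @ B @ np.linalg.inv(R) @ B.T @ X + Q
assert np.allclose(residual, 0.0, atol=1e-8)
```

The existence question discussed in the text is precisely about when such an equation, specialized to the spectral-factor data, admits a solution.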

Theorem 10. Assume that the given realizations of F and Φ are minimal. Consider an arbitrary solution (Π, H, Λ) of the equation (5.23).
If the system is left-invertible, then there exists a solution (L, K, U) of the equation (5.24) for which the columns of L are in C*_left(Σ), and the matrix U can be written in the form U = (Θ⁰_max)* V for some matrix V, where (Θ⁰_max, H⁰_max, Λ⁰_max) is a maximal solution of (5.20).

Using Lemma 8 the following immediate corollary is obtained.

Corollary 11. Under the conditions of the previous theorem,

$$[I,\ P]\, V^*(\Phi) \supseteq V^*(\Sigma). \tag{5.26}$$

5.2 From spectral density to spectral factors


Continuing with the elementary connections, the next statement shows that some converse of the previous constructions also holds:

Lemma 12. Assume that

$$\begin{bmatrix} A & 0 & \bar{C}^* \\ 0 & -A^* & -C^* \\ C & \bar{C} & R \end{bmatrix}\begin{bmatrix} X \\ Y \\ Z \end{bmatrix} = \begin{bmatrix} XW \\ YW \\ 0 \end{bmatrix} \tag{5.27}$$

holds. Define the matrices Π and H as follows:

$$\Pi = X + PY, \qquad H = B^*Y + D^*Z.$$

Then

$$\begin{bmatrix} -A^* & -C^* \\ B^* & D^* \end{bmatrix}\begin{bmatrix} Y \\ Z \end{bmatrix} = \begin{bmatrix} YW \\ H \end{bmatrix}, \qquad \begin{bmatrix} A & B \\ C & D \end{bmatrix}\begin{bmatrix} \Pi \\ H \end{bmatrix} = \begin{bmatrix} \Pi W \\ 0 \end{bmatrix}.$$

Consequently,

$$[I,\ P]\, V^*(\Phi) \subseteq V^*(\Sigma).$$
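In the scalar case Lemma 12 can be checked directly. For the hand-built example F(s) = (s+2)/(s+1) (our choice: A = −1, B = C = D = 1, so P = 1/2 and C̄ = 3/2), the value W = −2 is a zero of Φ = FF*, and the corresponding null vector of (5.27) indeed maps to a solution of the factor-level equation:

```python
import numpy as np
from scipy.linalg import null_space

A, B, C, D = -1.0, 1.0, 1.0, 1.0        # F(s) = (s+2)/(s+1)
P = 0.5                                  # AP + PA* + BB* = 0
Cbar, R = C * P + D * B, D * D           # (4.8)
S = np.array([[A, 0.0, Cbar],
              [0.0, -A, -C],
              [C, Cbar, R]])
W = -2.0                                 # a zero of Phi = F F*
# S [X, Y, Z]^T = [XW, YW, 0]^T  <=>  (S - diag(W, W, 0)) [X, Y, Z]^T = 0
v = null_space(S - np.diag([W, W, 0.0]))[:, 0]
X, Y, Z = v
Pi, H = X + P * Y, B * Y + D * Z         # the maps defined in Lemma 12
assert np.isclose(A * Pi + B * H, Pi * W)    # Sylvester part, as in (3.1)
assert np.isclose(C * Pi + D * H, 0.0)       # homogeneous part
```

So the zero W = −2 of the spectral density is pulled back, via Π = X + PY, to the zero at −2 of the spectral factor F, which is the content of the inclusion above.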


6 Left-invertible spectral factors


Summarizing and extending the results listed so far, for left-invertible spectral factors the following connections hold.


Theorem 13. Assume that F is left-invertible and the given realizations of F and Φ = FF* are minimal. Then dim C*(Φ) = dim C*(Σ) + dim C*⁰, where C*⁰ corresponds to the adjoint (left) zero structure. Furthermore, denoting by (X_max, Y_max, Z_max, W_max) the maximal solution of (4.11), the following relations hold:

(i) $$C^*(\Sigma) = [I,\ P]\,\ker\left[Y_{\max}^*,\ -X_{\max}^*\right], \qquad C^{*0} = \ker\left(X_{\max}^* + Y_{\max}^* P\right);$$

(ii) $$V^*(\Sigma) = \mathrm{Im}\left(X_{\max} + P Y_{\max}\right), \qquad V^{0} = Y_{\max}\,\ker\left(X_{\max} + P Y_{\max}\right).$$

7 General case
If F is not necessarily left-invertible, then there exists a tall inner function L such that F L is already left-invertible and moreover (F L)(F L)* = Φ, i.e. F L is a spectral factor of the same spectral density. This tall inner function can be constructed as follows.

Lemma 14. Let (C, A) be an observable pair. Consider maximal solutions of (3.1) and (3.3), assuming w.l.o.g. that the column vectors of the matrix R_0 are orthonormal. Then there exists a matrix Γ such that the function

$$K(s) = R_0 + \left(H_{\max} + R_0\Gamma\right)\left(sI - \left(\Lambda_{\max} + \Theta_0\Gamma\right)\right)^{-1}\Theta_0$$

is a tall inner (in the continuous-time sense) function.
Consider a square inner extension of K denoted by [K, L]. Then the function F L is left-invertible, and moreover provides a left spectral factor of Φ = F F*.
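For intuition, here is a minimal hand-built tall inner function (our own toy example, not the K produced by the lemma): K(s) = (1/√2) [1, (s−1)/(s+1)]^T satisfies K*(s)K(s) ≡ 1, which on the imaginary axis reads K(iω)^H K(iω) = 1:

```python
import numpy as np

def K(s):
    # 2x1 tall inner function: an isometry on the imaginary axis,
    # since |(s-1)/(s+1)| = 1 for s = i*omega
    return np.array([[1.0], [(s - 1.0) / (s + 1.0)]]) / np.sqrt(2.0)

for w in (0.0, 0.3, 1.0, 5.0):
    G = K(1j * w)
    assert np.allclose(G.conj().T @ G, 1.0)
```

Multiplying a spectral factor by such a column from the right leaves the product F F* unchanged, which is exactly why the lemma's K and its inner extension can be used to remove the kernel zeros without altering Φ.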

Let us point out that the role of the inner function L is to eliminate the kernel zero-module W(ker F) of the given realization of F. The corresponding zeros are transformed into the finite zero-module.

The following theorem gives a full description of the connections between the zero modules of Φ and its spectral factors.

Theorem 15. Consider a transfer function F with minimal realization. Form Φ = FF* and assume that (4.6) provides a minimal realization for Φ and moreover property (4.5) holds.
Assume that (X_max, Y_max, Z_max, W_max) determines a maximal solution of the equation

$$\begin{bmatrix} A & 0 & \bar{C}^* \\ 0 & -A^* & -C^* \\ C & \bar{C} & R \end{bmatrix}\begin{bmatrix} X_{\max} \\ Y_{\max} \\ Z_{\max} \end{bmatrix} = \begin{bmatrix} X_{\max}W_{\max} \\ Y_{\max}W_{\max} \\ 0 \end{bmatrix} \tag{7.28}$$

for which the kernel of $\begin{bmatrix} X_{\max} \\ Y_{\max} \end{bmatrix}$ is {0}. Then

(i) $V^*(\Sigma) = \mathrm{Im}\left(X_{\max} + P Y_{\max}\right)$ (or equivalently $C^{*0} = \ker\left(X_{\max}^* + Y_{\max}^* P\right)$);

(ii) $R^*(\Sigma) = \left(X_{\max} + P Y_{\max}\right)\left\langle W_{\max} \mid \ker\left(X_{\max} + P Y_{\max}\right)\right\rangle$, and $R^{*0} = R^*(\Sigma)$;

(iii) $V^{0} = \left(Y_{\max}\,\ker\left(X_{\max} + P Y_{\max}\right)\right) + R^*(\Sigma)$ (or equivalently $C^*(\Sigma) = R^*(\Sigma) + [I,\ P]\, C^*(\Phi)$).

Furthermore, the finite zeros of F and F* appear in the spectrum of W_max.


Bibliography

[1] H. Aling and J. M. Schumacher. A nine-fold canonical decomposition for linear systems. Int. J. Control, 39/4:779-805, 1984.

[2] J. Ball, I. Gohberg, and L. Rodman. Interpolation of Rational Matrix Functions. Birkhäuser, 1990.

[3] G. D. Forney. Minimal bases of rational vector spaces, with applications to multivariable linear systems. SIAM J. Control, 13/3:493-520, 1975.

[4] A. Gombani and Gy. Michaletzky. On the Nevanlinna-Pick interpolation problem: Analysis of the McMillan degree of the solutions. Linear Algebra Appl., 425:486-517, 2007.

[5] T. Kailath. Linear Systems. Prentice-Hall, Englewood Cliffs, NJ, 1980.

[6] A. Lindquist, Gy. Michaletzky, and G. Picci. Zeros of spectral factors, the geometry of splitting subspaces, and the algebraic Riccati inequality. SIAM J. Control Optim., 33:365-401, 1995.

[7] Gy. Michaletzky. Quasi-similarity of compressed shift operators. Acta Sci. Math. (Szeged), 69:223-239, 2003.

[8] A. S. Morse. Structural invariants of linear multivariable systems. SIAM J. Control, 11/3:446-465, 1973.

[9] H. H. Rosenbrock. State-space and Multivariable Theory. Thomas Nelson and Sons, 1970.

[10] C. B. Schrader and M. K. Sain. Research on system zeros: a survey. Int. J. Control, 50/4:1407-1433, 1989.

[11] B. F. Wyman and M. K. Sain. On the zeros of a minimal realization. Linear Algebra Appl., 50:621-637, 1983.

[12] B. F. Wyman, M. K. Sain, G. Conte, and A. M. Perdon. On the zeros and poles of a transfer function. Linear Algebra Appl., 122-124:123-144, 1989.

[13] B. F. Wyman, M. K. Sain, G. Conte, and A. M. Perdon. Poles and zeros of matrices of rational functions. Linear Algebra Appl., 157:113-139, 1991.

