PERTURBATION OF HAMILTONIAN SYSTEMS WITH PERIODIC COEFFICIENTS
MASTER 2 THESIS
Option: Numerical Analysis, Mathematics and Applications
Presented by
Charles ADEYEMI
Supervisors: E. BALOÏTCHA and M. DOSSO
The purpose of this work is to illustrate the effect of perturbations on Hamiltonian systems with periodic coefficients and to analyze their stability. A method from [5] for determining the monodromy matrix of the unperturbed system is given. We study the behavior of the eigenvalues of the solution of such a system under a perturbation that increases the Hamiltonian. The stability analysis of the perturbed system is based on the strong stability of the fundamental solution evaluated at the end of a period, with perturbation parameter values chosen at random. Since the matrix solution is symplectic, this analysis reduces to the study of the strong stability of symplectic matrices. The result from [19] on the perturbation theory of symplectic matrices under rank one perturbations is presented. Numerical examples are given for a Hamiltonian system with periodic coefficients, using both the Krein, Gelfand and Lidskii criterion and the Dosso and Sadkane criterion to analyze the strong stability of the monodromy matrix.
Key Words: Hamiltonian system, canonical system, matrizant, monodromy matrix, symplectic matrix, stability, strong stability, rank one perturbation, Hamiltonian perturbation
Résumé
The objective of this work is to illustrate the effect of perturbations on Hamiltonian systems with periodic coefficients and to analyze their stability. A method from [5] for determining the monodromy matrix of the unperturbed system is presented. We study the behavior of the eigenvalues of the monodromy matrix of the system under an increasing Hamiltonian perturbation. The stability analysis of the perturbed system is based on the strong stability of the fundamental solution evaluated at the end of a period, with perturbation parameter values chosen at random. Since the matrix solution is symplectic, this analysis leads to the study of the stability of symplectic matrices. The result on the perturbation theory of symplectic matrices under a rank one perturbation is recalled from [19]. Numerical examples with a Hamiltonian system with periodic coefficients are given, and we used the Krein, Gelfand and Lidskii criterion and the one proposed by Dosso and Sadkane to analyze the strong stability of the monodromy matrix.
This work is dedicated to the memory of my father Djiman ADEYEMI, to my mother Celestine BABATOUNDE and to my big father Julien BABATOUNDE.
Acknowledgements
First, I thank God almighty who has preserved my life till this day. May his name be glorified!
I would like to thank the holder of the International Chair in Mathematical Physics and Applications (ICMPA-UNESCO-CHAIR), Prof. M. Norbert HOUKONNOU, for having accepted me into his doctoral school and for his commitment to a better quality of training for young people, so as to bring our country into the international community of scientists.
I am grateful to my supervisors E. BALOÏTCHA and M. DOSSO for having agreed to supervise this work, and particularly for the pertinence of their remarks and suggestions. They succeeded in orienting my calculations while granting me great liberty of initiative, a rare privilege for a master's student.
The work presented here owes a lot to a number of people I have had the chance to meet in my life. It is always difficult to explain the beginning of a vocation, but it is certain that mine would not have come without them. My taste for science comes from contact with the many teachers and professors I met during my schooling. I also thank the teachers of the Master's program for the quality of the Physics and Mathematics teaching they gave me, particularly V. ADANHOUME, G. EDAH, A. KANFON and D. OUSAMARY for their advice.
I would like to thank my seniors at ICMPA, especially J. Allognon, M. Landry Dassoundo, Damien, Elias, Dr Sama and Lazare, for the advice and encouragement they gave me during the two years of training. I would like to thank my classmates G. Djossou, C. Hello, P. Adomassè, F. Mavoa, B. Natta, L. Gbaguidi and M. Sagbohan, with whom I had many scientific discussions, without forgetting Robinson and Minto for their unfailing availability in resolving computing problems during calculation and writing.
Once again, I thank G. DJOSSOU, F. MAVOA and B. NATTA for their moral, financial and material support when I was facing difficulties in the course of this research work, and particularly in my private life.
I have friendly thoughts for those who stood by my side, especially my friends Epiphane L., Bariyou M., Islam O., Emile A., Brice O., Gaspard L., Marius A., Vincent O., Victor O., Innocent A. and my cousins, particularly A. Sévérin, L. Fridaossi, F. Samuel, A. Parfaite, F. Innocent, L. Crépin, L. François, and all those who made my life better.
I would like to thank G. Adéléro, P. Adéguélou, B. Ogouayèni, V. Ifêchinan, M. Folahan and S. Adékambi for their very special support, without forgetting G. Ladélé.
I thank my dear uncles F. Babatoundé, R. Babatoundé and F. Babatoundé, and my aunt Colette Babatoundé for their help.
I could doubtless not have obtained my Master's degree at ICMPA without the regular financing and advice of my cousin François ADJIBODE since my first year at university. Therefore, I thank him very much.
I would also like to thank my cousin Jean ADEYEMI for his advice; he has never stopped helping me since my father's death. Peace to his soul!
I am grateful to my whole family, my brothers and my sisters, especially our elder sister Bernadette
ADEYEMI, for her inestimable financial support.
Finally, my eternal gratitude goes to my wife Mariette AGNILARA, for having so marvelously changed my single life by giving birth to a handsome boy, ADEYEMI B. J. Hubert, and for her patience in the difficult moments she went through with me.
the field of complex numbers or the field of real numbers. The real and imaginary parts of a complex number λ are denoted by

$$\mathrm{Re}(\lambda) = \frac{\lambda + \bar\lambda}{2}, \qquad \mathrm{Im}(\lambda) = \frac{\lambda - \bar\lambda}{2i},$$

respectively. The set of positive integers is denoted by N, and Z denotes the set of integers, the union of N, {0} and −N. The mean value of a matrix function F(t) is denoted by [F(t)]∇.
Let J ∈ R^{2N×2N} be a skew-symmetric and non-singular matrix. We have the following definition (see [7, 6, 14]). A standard example of J is

$$J = \begin{pmatrix} 0 & -I\\ I & 0 \end{pmatrix}, \qquad (1.1)$$

where each identity block has dimension N. In this chapter, we summarize some works already done on symplectic matrices, namely the strong stability of symplectic matrices (see [8, 4]) and the rank one perturbation of symplectic matrices (see [19]).
1) The eigenvalues of W occur in pairs λ and 1/λ when λ ∈ R.
2) Let x and y be two eigenvectors of W associated respectively with the eigenvalues λ and µ. If λµ̄ ≠ 1, then y^⋆Jx := (Jx, y) = 0. In particular, if |λ| ≠ 1, then (Jx, x) = 0 for x ∈ E_λ (E_λ being the eigenspace corresponding to λ).
3) Let P ∈ R^{2N×k_1} and Q ∈ R^{2N×k_2} be two matrices whose columns span invariant subspaces of W: WP = PA, WQ = QB. If the eigenvalues λ_i(A) and λ_j(B) of A and B are such that λ_i(A)λ̄_j(B) ≠ 1 for all i and j, then P^⋆JQ = 0.
4) If λ is a defective eigenvalue on the unit circle, then there exists an eigenvector x associated with λ such that (Jx, x) = 0.
5) There exists a non-singular matrix T_1 ∈ R^{2N×2N} and three matrices W_∞ ∈ R^{N_∞×N_∞}, W_1 ∈ R^{N_1×N_1} and W_0 ∈ R^{N_0×N_0}, with N_∞ = N_0 ≤ N and N_1 = 2(N − N_0), whose eigenvalues lie respectively outside the unit circle, on the unit circle and inside the unit circle, such that

$$T_1^{-1} W T_1 = \begin{pmatrix} W_\infty & 0 & 0\\ 0 & W_1 & 0\\ 0 & 0 & W_0 \end{pmatrix}$$

and

$$T_1^T J T_1 = \begin{pmatrix} 0 & 0 & -M^T\\ 0 & J_1 & 0\\ M & 0 & 0 \end{pmatrix}.$$
Ω = diag(w1 , . . . , wN ) with w1 ≥ · · · ≥ wN ≥ 1
Proof. 1) Let p(λ) be the characteristic polynomial (with real coefficients) of the 2n × 2n matrix W, with λ ≠ 0 and det W = 1. For item 2) we compute

$$y^\star Jx := (Jx, y) = (W^TJWx, y) = (JWx, Wy) = (J\lambda x, \mu y) = \lambda\bar\mu\,(Jx, y), \qquad (1.2)$$

hence (Jx, y) = 0 when λµ̄ ≠ 1; in particular (Jx, x) = 0 = (Jy, y) when |λ| ≠ 1. Returning to 1), write

$$p(\lambda) = \lambda^{k_0+k_1+k_2+\cdots+k_s}\Bigl(1-\frac{\lambda_0}{\lambda}\Bigr)^{k_0}\Bigl(1-\frac{\lambda_1}{\lambda}\Bigr)^{k_1}\Bigl(1-\frac{\lambda_2}{\lambda}\Bigr)^{k_2}\cdots\Bigl(1-\frac{\lambda_s}{\lambda}\Bigr)^{k_s},$$

where k_0 + k_1 + k_2 + ⋯ + k_s = 2n and λ_i ≠ λ_j (i, j = 0, 1, 2, …, s). In accordance with Theorem 1, if λ occurs as an eigenvalue so does 1/λ, and p(λ) = λ^{2n} p(1/λ).
PERTURBATION OF HAMILTONIAN SYSTEMS. Charles ADEYEMI, cipmout@yahoo.fr, CIPMA, © 2015.
Thus

$$\begin{aligned}
p(\lambda) &= \lambda^{2n}\Bigl(1-\frac{\lambda_0}{\lambda}\Bigr)^{k_0}\Bigl(1-\frac{\lambda_1}{\lambda}\Bigr)^{k_1}\Bigl(1-\frac{\lambda_2}{\lambda}\Bigr)^{k_2}\cdots\Bigl(1-\frac{\lambda_s}{\lambda}\Bigr)^{k_s}\\
&= \lambda^{2n}\lambda_0^{k_0}\Bigl(\frac{1}{\lambda_0}-\frac{1}{\lambda}\Bigr)^{k_0}\Bigl(1-\frac{\lambda_1}{\lambda}\Bigr)^{k_1}\Bigl(1-\frac{\lambda_2}{\lambda}\Bigr)^{k_2}\cdots\Bigl(1-\frac{\lambda_s}{\lambda}\Bigr)^{k_s}.
\end{aligned}$$

By identification,

$$p\Bigl(\frac{1}{\lambda}\Bigr) = \lambda_0^{k_0}\lambda_1^{k_1}\Bigl(\frac{1}{\lambda_0}-\frac{1}{\lambda}\Bigr)^{k_0}\Bigl(\frac{1}{\lambda_1}-\frac{1}{\lambda}\Bigr)^{k_1}\Bigl(1-\frac{\lambda_2}{\lambda}\Bigr)^{k_2}\cdots\Bigl(1-\frac{\lambda_s}{\lambda}\Bigr)^{k_s}.$$

Assuming that λ_1 = 1/λ_0 with λ_0 ≠ 0, we have

$$p\Bigl(\frac{1}{\lambda}\Bigr) = \frac{\lambda_0^{k_0}}{\lambda_0^{k_1}}\Bigl(\frac{1}{\lambda_0}-\frac{1}{\lambda}\Bigr)^{k_0}\Bigl(\lambda_0-\frac{1}{\lambda}\Bigr)^{k_1}\Bigl(1-\frac{\lambda_2}{\lambda}\Bigr)^{k_2}\cdots\Bigl(1-\frac{\lambda_s}{\lambda}\Bigr)^{k_s},$$

so that

$$\begin{aligned}
\lambda^{2n}\,p\Bigl(\frac{1}{\lambda}\Bigr) &= \lambda_0^{k_0-k_1}\Bigl(\frac{\lambda}{\lambda_0}-1\Bigr)^{k_0}(\lambda_0\lambda-1)^{k_1}(\lambda-\lambda_2)^{k_2}\cdots(\lambda-\lambda_s)^{k_s}\\
&= \lambda_0^{k_0-k_1}\,p(\lambda) \quad\text{(by assumption)}\\
&= p(\lambda),
\end{aligned}$$

which implies λ_0^{k_0−k_1} = 1, hence k_0 = k_1 and

$$p(\lambda) = \Bigl(\frac{\lambda}{\lambda_0}-1\Bigr)^{k_0}(\lambda_0\lambda-1)^{k_0}(\lambda-\lambda_2)^{k_2}\cdots(\lambda-\lambda_s)^{k_s}.$$

The case λ_0 = 1 is handled in the same way.
– If Y ∈ R^{N×N} is symmetric, then the matrix

$$\begin{pmatrix} I_N & Y\\ 0 & I_N \end{pmatrix}$$

is J-symplectic.
– If X, Y ∈ R^{N×N} satisfy XY^T = YX^T and X is non-singular, then the matrix

$$\begin{pmatrix} X & Y\\ 0 & X^{-T} \end{pmatrix}$$

is J-symplectic.
2) If W ∈ R2N ×2N is J-symplectic, then W T , W −T and W −1 are J-symplectic.
3) The product of J-symplectic matrices is a J-symplectic matrix.
Definition 2. We say that W is stable if ‖W^k‖ < ∞ for all k ∈ N and for any norm ‖·‖.
Since W^T JW = J, the condition ‖W^k‖ < ∞ is equivalent to ‖W^{−k}‖ < ∞ for all k ∈ N.
It is clear that a symplectic matrix is stable if and only if all its eigenvalues are on the unit circle and are not defective. The following definition characterizes another type of stability: strong stability [8, 5, 4].
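This characterization of stability can be sketched numerically. The helper below is our own (its name and tolerances are arbitrary choices): it declares W stable when every eigenvalue lies on the unit circle and its geometric multiplicity matches its algebraic multiplicity.

```python
import numpy as np

def is_stable(W, tol=1e-8):
    """Stability check: every eigenvalue on the unit circle and semi-simple
    (geometric multiplicity equal to algebraic multiplicity)."""
    lam = np.linalg.eigvals(W)
    n = W.shape[0]
    if np.any(np.abs(np.abs(lam) - 1.0) > tol):
        return False
    for l in lam:
        alg = int(np.sum(np.abs(lam - l) < tol))
        geo = n - np.linalg.matrix_rank(W - l * np.eye(n), tol=tol)
        if geo < alg:
            return False
    return True

t = 0.7  # a plane rotation is symplectic (N = 1) and stable ...
R = np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])
assert is_stable(R)
# ... while a defective Jordan block at the eigenvalue 1 is not
assert not is_stable(np.array([[1.0, 1.0], [0.0, 1.0]]))
```

The rank-based multiplicity test is the numerically simple route; a production implementation would need more care with clustered eigenvalues.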
Definition 3. We say that W is strongly stable if there exists ε > 0 such that every J-symplectic matrix W̃ verifying ‖W − W̃‖ ≤ ε is stable.
In other words, W is strongly stable if it remains stable when subjected to small perturbations that preserve symplecticity. This concept is more robust than stability, and it is also more difficult to characterize. Obviously, the eigenvalues of such a matrix must be on the unit circle and semi-simple, but this is not enough to characterize this type of stability. Indeed, the eigenvalues must in addition be either of the first kind or of the second kind, according to the following definition [8, 5, 4].
This definition is meaningful since the matrix iJ is Hermitian. Krein, Gelfand and Lidskii introduced this classification and used it to characterize the strong stability of symplectic matrices. The characterization is summarized in the following theorem, which we subsequently call the criterion of Krein, Gelfand and Lidskii (KGL criterion) [8, 5, 28].

Theorem 2 (KGL criterion). The matrix W is strongly stable if and only if all its eigenvalues are on the unit circle and are either of the first kind or of the second kind.

In addition to the conditions of this theorem, there must be a sufficient gap between the eigenvalues of the first and second kind. This theorem, deduced by M. Dosso in the course of his research, summarizes several other theorems; he proposed a synthesis of the results most important for its proof.
Note that any eigenvector x of W associated with an eigenvalue λ outside the unit circle satisfies (iJx, x) = 0. This allows the definition of an eigenvalue of mixed kind to be extended to eigenvalues outside the unit circle. When W has no eigenvalue of mixed kind, we have the following theorem [8, 5, 28].

Theorem 3. Let λ be an eigenvalue of W of non-mixed kind, i.e., λ is on the unit circle and either of the first kind or of the second kind. Then there exist constants δ > 0 and γ > 0 such that for any J-symplectic matrix W̃ satisfying ‖W − W̃‖ < δ, the eigenvalues λ̃ of W̃ in the neighborhood |λ − λ̃| < γ are on the unit circle and are semi-simple.
Proof. We reason by contradiction. Suppose there exists a sequence of matrices W_1, W_2, … → W and eigenvalues λ_1, λ_2, … → λ such that λ_m is an eigenvalue of W_m with |λ_m| ≠ 1, where λ is of modulus 1 but defective (i.e., not semi-simple). Let x_m be an eigenvector corresponding to λ_m with ‖x_m‖ = 1; then (Jx_m, x_m) = 0 for m = 1, 2, 3, …. The sequence (x_m) being bounded, we can extract a subsequence converging to a vector x, and by passing to the limit we obtain Wx = λx, ‖x‖ = 1 and (Jx, x) = 0. Hence λ is of mixed kind, a contradiction.
Concerning the eigenvalues of mixed kind, M. Dosso proposed a result in [4], generalized by the following theorem (see [5, 28] for the proof).

Theorem 4. If λ is an eigenvalue of mixed kind of a J-symplectic matrix W, then there exists a J-symplectic matrix W̃ close to W having eigenvalues outside and inside the unit circle in a neighborhood of λ.
Now let us come back to the proof of Theorem 2. If W is strongly stable, it is clear that all its eigenvalues are on the unit circle. These eigenvalues cannot be of mixed kind: otherwise, according to Theorem 4, W would be close to a J-symplectic matrix W̃ having eigenvalues outside and inside the unit circle, which contradicts the stability of W. Conversely, if all eigenvalues of W are on the unit circle and are of the first kind or of the second kind, Theorem 3 shows that the eigenvalues of small J-symplectic perturbations W̃ of W are on the unit circle and semi-simple. In other words, W is strongly stable. That ends the proof.
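As a hedged numerical sketch of the KGL classification (the helper `kgl_kinds` is our own, and it assumes the computed unit-circle eigenvalues are semi-simple), one can classify each eigenvalue by the sign of (iJx, x):

```python
import numpy as np

def kgl_kinds(W, J, tol=1e-8):
    """Classify eigenvalues of a symplectic W on the unit circle by the
    sign of (iJx, x): positive -> first kind, negative -> second kind."""
    lam, V = np.linalg.eig(W)
    kinds = []
    for l, x in zip(lam, V.T):
        s = np.real(np.conj(x) @ (1j * J) @ x)  # (iJx, x), real since iJ is Hermitian
        kind = 'first' if s > tol else ('second' if s < -tol else 'mixed')
        kinds.append((l, kind))
    return kinds

# rotation by t: the eigenvalues e^{±it} each have a definite (non-mixed) kind,
# so by the KGL criterion this symplectic matrix is strongly stable
t = 0.7
W = np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])
J = np.array([[0., -1.], [1., 0.]])
print(kgl_kinds(W, J))
```

As the text warns next, rounding errors in the computed eigenvectors can make this classification unreliable near eigenvalues of mixed kind.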
The KGL criterion is based solely on the numerical computation of eigenvalues and eigenvectors. The calculation of these quantities may be misled by inevitable rounding approximations. For example, it is possible that (Jx, x) ≠ 0 for a computed eigenvector associated with an eigenvalue of mixed kind, which is not consistent with Definition 4 (see [4]). To address this problem, S. K. Godunov and M. Sadkane proposed another classification, better suited to numerical computations, through the matrix S^{(o)} defined as follows [8, 5, 4, 16]:

$$S^{(o)} = \frac{1}{2}\,J\,(W - W^{-1}).$$
The matrix S^{(o)} is symmetric. It is singular if and only if W has eigenvalues ±1, which are of mixed kind. It has properties similar to those of J. Indeed, just as W satisfies W^T JW = J, we also have:

$$\begin{aligned}
W^T S^{(o)} W &= \frac{1}{2}\,W^T (JW - JW^{-1})\,W\\
&= \frac{1}{2}\bigl(W^T J W\,W - W^T J W^{-1} W\bigr)\\
&= \frac{1}{2}\bigl(JW - W^T J\bigr) \qquad (\text{using } W^TJW = J)\\
&= \frac{1}{2}\bigl(JW - JW^{-1}\bigr) \qquad (\text{since } W^TJ = JW^{-1})\\
&= S^{(o)}. \qquad (1.3)
\end{aligned}$$
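The symmetry of S^{(o)} and the invariance (1.3) are easy to confirm numerically; the plane rotation below is our own toy example with N = 1:

```python
import numpy as np

# a symplectic example with N = 1: rotation by t, with J = [[0, -1], [1, 0]]
t = 0.7
W = np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])
J = np.array([[0., -1.], [1., 0.]])

S0 = 0.5 * J @ (W - np.linalg.inv(W))    # the matrix S^(o)

assert np.allclose(S0, S0.T)             # S^(o) is symmetric
assert np.allclose(W.T @ S0 @ W, S0)     # invariance (1.3): W^T S^(o) W = S^(o)
```

For this rotation, S^{(o)} works out to −sin(t)·I, so the quadratic form (S^{(o)}x, x) is negative definite and both eigenvalues are of the same color in the sense of Definition 5 below.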
Moreover, if x is an eigenvector of W associated with an eigenvalue λ = e^{iθ} on the unit circle, then

$$\begin{aligned}
(S^{(o)}x, x) &= \frac{1}{2}\bigl((JW - JW^{-1})x,\, x\bigr)\\
&= \frac{1}{2}\bigl((\lambda - \lambda^{-1})\,Jx,\, x\bigr)\\
&= \frac{\lambda - \lambda^{-1}}{2i}\,(iJx, x)\\
&= \frac{e^{i\theta} - e^{-i\theta}}{2i}\,(iJx, x)\\
&= (iJx, x)\sin\theta. \qquad (1.4)
\end{aligned}$$

The sign of (S^{(o)}x, x) depends only on the sign of (iJx, x), since sin θ > 0 for 0 < θ < π. Let us use (S^{(o)}x, x) to give another definition (see [4]).
Definition 5. Let λ be a semi-simple eigenvalue of W lying on the unit circle. Then λ is an eigenvalue of red (green) color, or in short an r-eigenvalue (g-eigenvalue), if (S^{(o)}x, x) is positive (negative) on the eigenspace associated with λ.
This classification is more convenient for numerical calculations than that of Definition 4, since it deals with symmetric matrices and avoids complex vectors. The main difference between Definitions 4 and 5 can be explained with the following example. Suppose that λ = e^{iθ} and λ̄ = e^{−iθ} with 0 < θ < π are eigenvalues of W, respectively of the first and second kind, associated with eigenvectors x and x̄, where x̄ denotes the conjugate of x. Then for any linear combination z = αx + βx̄, we have
$$Q^{-1} W Q = W_1 \oplus \cdots \oplus W_p, \qquad (1.6)$$
$$Q^T J Q = J_1 \oplus \cdots \oplus J_p, \qquad (1.7)$$

where J_j is the anti-diagonal matrix with entries (−1)^0, …, (−1)^{n_j−1} running from the top-right to the bottom-left corner,

$$J_j = \begin{pmatrix} & & (-1)^0\\ & \cdot^{\,\cdot^{\,\cdot}} & \\ (-1)^{n_j-1} & & \end{pmatrix},$$

and W_j is the upper triangular Toeplitz matrix with [δ, 1, r_2, …, r_{n_j−1}] as its first row, denoted W_j = Toep(δ, 1, r_2, …, r_{n_j−1}):

$$W_j = \begin{pmatrix}
\delta & 1 & r_2 & \cdots & r_{n_j-1}\\
0 & \delta & 1 & \ddots & \vdots\\
\vdots & & \ddots & \ddots & r_2\\
& & & \ddots & 1\\
0 & \cdots & & 0 & \delta
\end{pmatrix}.$$
Moreover, r_k = 0 for odd k, and the parameters r_k for even k are real and uniquely determined by the recursive formula

$$r_2 = \frac{\delta}{2}, \qquad r_k = \frac{\delta}{2}\sum_{\nu=1}^{k/2-1} r_{2\nu}\, r_{2(k/2-\nu)}, \quad 4 \le k \le n_j.$$

A special case of Toeplitz matrix is J_{n_j}(λ) = Toep(λ_j, 1, 0, …, 0), the upper triangular n_j × n_j Jordan block associated with the eigenvalue δ = λ_j.
(2) Paired blocks associated with the eigenvalues λ_j = ±1 of W, of size m_j, where m_j ∈ N is odd:

$$W_j = \begin{pmatrix} J_{m_j}(\lambda_j) & 0\\ 0 & (J_{m_j}(\lambda_j))^{-T} \end{pmatrix}, \qquad
J_j = \begin{pmatrix} 0 & I_{m_j}\\ -I_{m_j} & 0 \end{pmatrix}. \qquad (1.8)$$
Although the canonical forms in Theorem 6 display the blocks associated with all possible eigenvalues of symplectic matrices, it is necessary to further investigate the blocks associated with the eigenvalues ±1. We start by presenting results on all possible skew-symmetric matrices J for which a Jordan block associated with the eigenvalue 1 is J-symplectic. To this end, we recall the following proposition, given as it is presented in [19].
Proposition 3. Let n = 2k, where k ∈ N, and let J_n(1) be the upper triangular Jordan block of size n with eigenvalue 1. Then the set

$$U_n = \{J \in \mathbb{C}^{n\times n} : (J_n(1))^T J\, J_n(1) = J \ \text{and}\ J^T = -J\}$$

is a vector space of dimension k. In particular, any J = [h_{ij}] ∈ U_n has the form

$$J = \begin{pmatrix} 0 & 0 & h_{1n}\\ 0 & J_{n-2} & h_n\\ -h_{1n} & -h_n^T & 0 \end{pmatrix}, \qquad
h_n = \begin{pmatrix} h_{2n}\\ \vdots\\ h_{n-1,n} \end{pmatrix},$$

where J_{n−2} ∈ U_{n−2}, and h_{1n} = −h_{2,n−1}; h_{jn} = −h_{j,n−1} − h_{j+1,n−1} for j = 2, 3, …, n−3; h_{n−2,n} = −h_{n−2,n−1}; and h_{n−1,n} ∈ C is arbitrary. Moreover, J is uniquely determined by h_{k,k+1}, …, h_{n−1,n}, and for each m = k, …, n−1 the entries h_{ij} depending on h_{m,m+1} are only those satisfying i + j ≥ 2m + 1 and min{i, j} ≤ m. In particular,

$$h_{jn} = (-1)^{j-1} h_{j,j+1} + \beta_{j+1,j}\,h_{j+1,j+2} + \cdots + \beta_{n-1,j}\,h_{n-1,n}$$

for some coefficients β_{ij}, i = j+1, …, n−1, for j = k, …, n−1.
and we have

$$J_2(1)^T J\, J_2(1)
= \begin{pmatrix}1 & 0\\ 1 & 1\end{pmatrix}\begin{pmatrix}0 & h_{12}\\ -h_{12} & 0\end{pmatrix}\begin{pmatrix}1 & 1\\ 0 & 1\end{pmatrix}
= \begin{pmatrix}0 & h_{12}\\ -h_{12} & h_{12}\end{pmatrix}\begin{pmatrix}1 & 1\\ 0 & 1\end{pmatrix}
= \begin{pmatrix}0 & h_{12}\\ -h_{12} & 0\end{pmatrix}
= J,$$
where e_i denotes the column vector with 1 in the i-th position and zeros elsewhere. Let us compute (J_n(1))^T J\, J_n(1). Writing the product blockwise and subtracting J, all blocks must vanish. The middle block gives

$$J_{n-2}(1)^T J_{n-2}\, J_{n-2}(1) - J_{n-2} = 0,$$

which implies that J_{n−2} ∈ U_{n−2}. Furthermore, we obtain

$$e_1 h_{1n} + J_{n-2}(1)^T J_{n-2}\, e_{n-2} + J_{n-2}(0)^T h_n = 0,$$
where

$$J_{n-2}(1)^T = \begin{pmatrix}
1 & 0 & 0 & \cdots & 0\\
1 & 1 & 0 & & \vdots\\
0 & 1 & 1 & \ddots & \\
\vdots & & \ddots & \ddots & 0\\
0 & \cdots & 0 & 1 & 1
\end{pmatrix}, \qquad
J_{n-2}(0)^T = \begin{pmatrix}
0 & 0 & 0 & \cdots & 0\\
1 & 0 & 0 & & \vdots\\
0 & 1 & 0 & \ddots & \\
\vdots & & \ddots & \ddots & 0\\
0 & \cdots & 0 & 1 & 0
\end{pmatrix},$$

$$J_{n-2} = \begin{pmatrix}
h_{2,2} & h_{2,3} & \cdots & h_{2,n-1}\\
h_{3,2} & h_{3,3} & \cdots & h_{3,n-1}\\
\vdots & & & \vdots\\
h_{n-1,2} & h_{n-1,3} & \cdots & h_{n-1,n-1}
\end{pmatrix}, \qquad
e_{n-2} = \begin{pmatrix}0\\ \vdots\\ 0\\ 1\end{pmatrix}, \qquad
h_{1n}e_1 = \begin{pmatrix}h_{1n}\\ 0\\ \vdots\\ 0\end{pmatrix},$$

and where J_{n−2}(1)^T, J_{n−2}(0)^T and J_{n−2} are all (n − 2) × (n − 2) matrices.
Hence

$$J_{n-2}(0)^T h_n = \begin{pmatrix}0\\ h_{2,n}\\ h_{3,n}\\ \vdots\\ h_{n-3,n}\\ h_{n-2,n}\end{pmatrix}, \qquad
J_{n-2}(1)^T J_{n-2}\, e_{n-2} = \begin{pmatrix}h_{2,n-1}\\ h_{2,n-1}+h_{3,n-1}\\ h_{3,n-1}+h_{4,n-1}\\ \vdots\\ h_{n-3,n-1}+h_{n-2,n-1}\\ h_{n-2,n-1}\end{pmatrix},$$

so that

$$e_1 h_{1n} + J_{n-2}(1)^T J_{n-2}\, e_{n-2} + J_{n-2}(0)^T h_n =
\begin{pmatrix}h_{1n}\\ 0\\ \vdots\\ 0\end{pmatrix} +
\begin{pmatrix}h_{2,n-1}\\ h_{2,n-1}+h_{3,n-1}\\ \vdots\\ h_{n-3,n-1}+h_{n-2,n-1}\\ h_{n-2,n-1}\end{pmatrix} +
\begin{pmatrix}0\\ h_{2,n}\\ \vdots\\ h_{n-3,n}\\ h_{n-2,n}\end{pmatrix} = 0.$$
By identification, this implies the following system:

$$\begin{cases}
h_{1n} = -h_{2,n-1}\\
h_{2,n} = -h_{2,n-1} - h_{3,n-1}\\
h_{3,n} = -h_{3,n-1} - h_{4,n-1}\\
\qquad\vdots\\
h_{n-3,n} = -h_{n-3,n-1} - h_{n-2,n-1}\\
h_{n-2,n} = -h_{n-2,n-1}
\end{cases}$$

Thus h_{jn} is uniquely determined for j = 1, …, n−2, and h_{n−1,n} is arbitrary (h_{n,n} = 0). Using the induction hypothesis on J_{n−2}, the claim concerning the entries depending on h_{m,m+1} follows directly from the expression of h_{jn}. That ends the proof.
$$\tilde Q^{-1} W \tilde Q = \hat W \oplus \tilde W, \qquad \tilde Q^{-T} J\, \tilde Q^{-1} = \hat J \oplus \tilde J,$$

where σ(Ŵ) = {λ, 1/λ}, σ(W̃) ⊆ C∖{λ, 1/λ}, and where Ŵ and Ĵ have the same size and the following forms:

(1) If λ ∉ {+1, −1}, then

$$\hat W = \begin{pmatrix} W_1 & 0\\ 0 & W_1^{-T} \end{pmatrix}, \qquad
\hat J = \begin{pmatrix} 0 & I_a\\ -I_a & 0 \end{pmatrix},$$

where

$$W_1 = \Bigl(\bigoplus_{j=1}^{l_1} J_{n_1}(\lambda)\Bigr) \oplus \Bigl(\bigoplus_{j=1}^{l_2} J_{n_2}(\lambda)\Bigr) \oplus \cdots \oplus \Bigl(\bigoplus_{j=1}^{l_m} J_{n_m}(\lambda)\Bigr).$$
Proof. Part (1) follows from Theorem 6. Consider the case λ = 1 and a single pair (J_{n_i}(1), J^{(i,j)}) of blocks as in (1.12). Obviously, J^{(i,j)} is skew-symmetric and invertible, and by Proposition 3 it follows that J_{n_i}(1) is J^{(i,j)}-symplectic. According to Theorem 6, there exists a non-singular matrix Q_{i,j} for the pair (J_{n_i}(1), J^{(i,j)}) such that

$$Q_{i,j}^{-1}\, J_{n_i}(1)\, Q_{i,j} = \mathrm{Toep}(1, 1, r_2, \ldots, r_{n_i-1})$$

and

$$Q_{i,j}^T\, J^{(i,j)}\, Q_{i,j} = J_i,$$

where r_2, …, r_{n_i−1} are as in 1) of Theorem 6. Then we can replace a pair (Toep(1, 1, r_2, …, r_{n_i−1}), J_i) of blocks of the form (i) by the equivalent pair (J_{n_i}(1), J^{(i,j)}) with blocks as in equation (1.13) in the canonical form of this theorem. Doing the same with n_i odd, a pair of blocks of the form (1.8) is replaced by the equivalent pair

$$\left(\begin{pmatrix} J_{n_i}(1) & 0\\ 0 & J_{n_i}(1) \end{pmatrix}, \begin{pmatrix} 0 & J^{(i,s)}\\ -J^{(i,s)} & 0 \end{pmatrix}\right)$$

with blocks as in equation (1.14). Taking −W instead of W gives the corresponding argument for the case λ = −1, which proves part 2). That ends the proof.
It is important to point out that n_1, the first and largest of the n_i, is not chosen at random: the n_i are the multiplicities of each eigenvalue in the minimal polynomial.
In accordance with Definition 7, the integer n_1 can take any of the values m_{r,1}. It is also important to note that in the case λ = −1 and n_i > 1, the block −W^{(i)} is not a Jordan matrix; it is a direct sum of the negatives of J_{n_i}(1). To obtain a form analogous to equation (1.12) but with Ŵ a Jordan matrix in the case λ = −1, in Corollary 1 we replace Ŵ by W^1 ⊕ ⋯ ⊕ W^m; replace J_{n_i}(1) in equations (1.13) and (1.14) by J_{n_i}(−1); and replace the requirements that J^{(i,j)} ∈ U_{n_i} in part 2a) and J^{(i,s)} ∈ V_{n_i} in part 2b) by the requirements that J^{(i,j)} ∈ ψ_{n_i} U_{n_i} ψ_{n_i} in part 2a) and J^{(i,s)} ∈ ψ_{n_i} V_{n_i} ψ_{n_i} in part 2b) [19, 18, 21], where ψ_n := J R_n is a unitary and Hermitian diagonal matrix, with

$$J = \begin{pmatrix} & & (-1)^0\\ & \cdot^{\,\cdot^{\,\cdot}} & \\ (-1)^{n-1} & & \end{pmatrix} \quad\text{and}\quad
R_n = \begin{pmatrix} & & 1\\ & \cdot^{\,\cdot^{\,\cdot}} & \\ 1 & & \end{pmatrix}.$$

We now have sufficient ingredients to prove the main result concerning structured rank one perturbations of J-symplectic matrices, where J^T = −J.
We now present the result describing the behavior of structured rank one perturbations of symplectic matrices. To do so, we start with the following lemma, which will be used to present this result (see [19]). Let W̃ = W + zy^T. Then

$$\tilde W^T J \tilde W = (W + zy^T)^T J (W + zy^T) = W^TJW + W^TJz\,y^T + (zy^T)^TJW + (zy^T)^TJz\,y^T.$$

As W is J-symplectic, W^TJW = J, and since J is skew-symmetric (so that z^TJz = 0), it follows (see [19]) that y = cW^TJz for some scalar c. Consequently,

$$\tilde W = W + c\,z(W^TJz)^T = W + c\,zz^TJ^TW = W - c\,zz^TJW = (I - c\,zz^TJ)W.$$

Writing −c = a², we obtain

$$\tilde W = (I + a^2 zz^TJ)W = (I + (az)(az)^TJ)W. \qquad (1.15)$$

Letting x = az, we obtain the form W̃ = (I + xx^TJ)W, which is the general additive rank one perturbation of the J-symplectic matrix W. To prove the converse, consider the matrix M = I + xx^TJ for an arbitrary vector x. A direct computation using J^T = −J and x^TJx = 0 shows that M^TJM = J, so M is J-symplectic. Consequently, the matrix (I + xx^TJ)W, as a product of two J-symplectic matrices, is J-symplectic (according to Theorem 1). This proves the symplecticity of W̃. That ends the proof.
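A small numerical sketch of the lemma (the vector x and the random seed below are arbitrary choices of ours): the matrix W̃ = (I + xx^T J)W stays J-symplectic, and the additive term xx^T JW has rank one.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 2
J = np.block([[np.zeros((N, N)), -np.eye(N)], [np.eye(N), np.zeros((N, N))]])

W = np.eye(2 * N)                             # trivially J-symplectic
x = 1e-3 * rng.standard_normal((2 * N, 1))    # a small, arbitrary vector

W_tilde = (np.eye(2 * N) + x @ x.T @ J) @ W   # the perturbed matrix of (1.15)

assert np.allclose(W_tilde.T @ J @ W_tilde, J)             # still J-symplectic
assert np.linalg.matrix_rank(W_tilde - W, tol=1e-10) == 1  # rank one update
```

Shrinking the scale factor on x makes the perturbation norm as small as desired, illustrating the remark that follows.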
From this lemma we conclude that the general additive rank one perturbations of a J-symplectic matrix W have the form W̃ = (I + xx^TJ)W. The norm of the additive perturbation xx^TJW can be arbitrarily small. Once the perturbed form of the matrix is known, we may pass to the determination of its Jordan canonical form. We must also point out the simplicity of the new eigenvalues of W̃. We then have the following theorem (see [19] for the proof).
Let W be a J-symplectic matrix whose Jordan canonical form is

$$\Bigl(\bigoplus_{j=1}^{l_1} J_{n_1}(\lambda)\Bigr) \oplus \Bigl(\bigoplus_{j=1}^{l_2} J_{n_2}(\lambda)\Bigr) \oplus \cdots \oplus \Bigl(\bigoplus_{j=1}^{l_m} J_{n_m}(\lambda)\Bigr) \oplus \mathcal{J}, \qquad (1.16)$$

where n_1 > ⋯ > n_m and where $\mathcal{J}$, with σ($\mathcal{J}$) ⊆ C∖{λ}, contains all Jordan blocks associated with eigenvalues different from λ. Furthermore, let x ∈ C^{2n} and B = xx^TJW.
(1) If λ ∉ {−1, 1}, then generically with respect to the components of x, the matrix W + B has the Jordan canonical form

$$\Bigl(\bigoplus_{j=1}^{l_1-1} J_{n_1}(\lambda)\Bigr) \oplus \Bigl(\bigoplus_{j=1}^{l_2} J_{n_2}(\lambda)\Bigr) \oplus \cdots \oplus \Bigl(\bigoplus_{j=1}^{l_m} J_{n_m}(\lambda)\Bigr) \oplus \tilde{\mathcal{J}}, \qquad (1.17)$$

where $\tilde{\mathcal{J}}$ contains all Jordan blocks of W + B associated with eigenvalues different from λ.
(2) If λ ∈ {−1, 1} and n_1 is even, then generically with respect to the components of x, the matrix W + B has the Jordan canonical form

$$\Bigl(\bigoplus_{j=1}^{l_1-1} J_{n_1}(\lambda)\Bigr) \oplus \Bigl(\bigoplus_{j=1}^{l_2} J_{n_2}(\lambda)\Bigr) \oplus \cdots \oplus \Bigl(\bigoplus_{j=1}^{l_m} J_{n_m}(\lambda)\Bigr) \oplus \tilde{\mathcal{J}}, \qquad (1.18)$$

where $\tilde{\mathcal{J}}$ contains all Jordan blocks of W + B associated with eigenvalues different from λ.
(3) If λ ∈ {−1, 1} and n_1 is odd, then generically with respect to the components of x, the matrix W + B has the Jordan canonical form

$$J_{n_1+1}(\lambda) \oplus \Bigl(\bigoplus_{j=1}^{l_1-2} J_{n_1}(\lambda)\Bigr) \oplus \Bigl(\bigoplus_{j=1}^{l_2} J_{n_2}(\lambda)\Bigr) \oplus \cdots \oplus \Bigl(\bigoplus_{j=1}^{l_m} J_{n_m}(\lambda)\Bigr) \oplus \tilde{\mathcal{J}}, \qquad (1.19)$$

where $\tilde{\mathcal{J}}$ contains all Jordan blocks of W + B associated with eigenvalues different from λ.
In summary, for a given complex J-symplectic matrix W, a rank one additive perturbation that results again in a J-symplectic matrix generically (with respect to the vector parameter representing the perturbation) destroys the biggest Jordan block for every eigenvalue of W, except in the case of the eigenvalue ±1 when the biggest Jordan block corresponding to this eigenvalue has odd size n_1. In this exceptional case, generically the two biggest blocks are destroyed and one block of size n_1 + 1 is created (corresponding to the same eigenvalue ±1). Moreover, generically the new eigenvalues, that is to say the eigenvalues of the perturbed matrix that are not eigenvalues of W, are all simple (see [19]).
where α_r and α_{r+N} are respectively the generalized coordinates and the generalized momenta. Hamilton's equations for this system may be written:

$$\frac{d\alpha_{r+N}}{dt} = -\sum_{i=1}^{2N} w_{i,r}(t)\,\alpha_i, \qquad
\frac{d\alpha_r}{dt} = \sum_{i=1}^{2N} w_{i,r+N}(t)\,\alpha_i. \qquad (2.2)$$
The above system of 2N equations, that is, N equations for the N generalized coordinates and N equations for the N generalized momenta, can be rewritten as follows:

$$\begin{cases}
\dfrac{d\alpha_{N+1}}{dt} = -w_{1,1}(t)\alpha_1 - \cdots - w_{N,1}(t)\alpha_N - w_{N+1,1}(t)\alpha_{N+1} - \cdots - w_{2N,1}(t)\alpha_{2N}\\[4pt]
\dfrac{d\alpha_{N+2}}{dt} = -w_{1,2}(t)\alpha_1 - \cdots - w_{N,2}(t)\alpha_N - w_{N+1,2}(t)\alpha_{N+1} - \cdots - w_{2N,2}(t)\alpha_{2N}\\
\qquad\vdots\\
\dfrac{d\alpha_{2N}}{dt} = -w_{1,N}(t)\alpha_1 - \cdots - w_{N,N}(t)\alpha_N - w_{N+1,N}(t)\alpha_{N+1} - \cdots - w_{2N,N}(t)\alpha_{2N}\\[4pt]
\dfrac{d\alpha_{1}}{dt} = w_{1,1+N}(t)\alpha_1 + \cdots + w_{N,1+N}(t)\alpha_N + w_{N+1,1+N}(t)\alpha_{N+1} + \cdots + w_{2N,1+N}(t)\alpha_{2N}\\[4pt]
\dfrac{d\alpha_{2}}{dt} = w_{1,2+N}(t)\alpha_1 + \cdots + w_{N,2+N}(t)\alpha_N + w_{N+1,2+N}(t)\alpha_{N+1} + \cdots + w_{2N,2+N}(t)\alpha_{2N}\\
\qquad\vdots\\
\dfrac{d\alpha_{N}}{dt} = w_{1,2N}(t)\alpha_1 + \cdots + w_{N,2N}(t)\alpha_N + w_{N+1,2N}(t)\alpha_{N+1} + \cdots + w_{2N,2N}(t)\alpha_{2N}
\end{cases} \qquad (2.3)$$
$$\begin{pmatrix}
-\frac{d\alpha_{N+1}}{dt}\\ -\frac{d\alpha_{N+2}}{dt}\\ \vdots\\ -\frac{d\alpha_{2N}}{dt}\\ \frac{d\alpha_{1}}{dt}\\ \frac{d\alpha_{2}}{dt}\\ \vdots\\ \frac{d\alpha_{N}}{dt}
\end{pmatrix}
=
\begin{pmatrix}
w_{1,1}(t) & w_{2,1}(t) & \cdots & w_{N,1}(t) & \cdots & w_{2N,1}(t)\\
w_{1,2}(t) & w_{2,2}(t) & \cdots & w_{N,2}(t) & \cdots & w_{2N,2}(t)\\
\vdots & & & \vdots & & \vdots\\
w_{1,N}(t) & w_{2,N}(t) & \cdots & w_{N,N}(t) & \cdots & w_{2N,N}(t)\\
w_{1,1+N}(t) & w_{2,1+N}(t) & \cdots & w_{N,1+N}(t) & \cdots & w_{2N,1+N}(t)\\
\vdots & & & \vdots & & \vdots\\
w_{1,2N}(t) & w_{2,2N}(t) & \cdots & w_{N,2N}(t) & \cdots & w_{2N,2N}(t)
\end{pmatrix}
\begin{pmatrix}\alpha_1\\ \alpha_2\\ \vdots\\ \alpha_{2N}\end{pmatrix} \qquad (2.4)$$
The left-hand side may also be written as the product of two matrices:

$$\begin{pmatrix}
-\frac{d\alpha_{N+1}}{dt}\\ \vdots\\ -\frac{d\alpha_{2N}}{dt}\\ \frac{d\alpha_{1}}{dt}\\ \vdots\\ \frac{d\alpha_{N}}{dt}
\end{pmatrix}
=
\underbrace{\begin{pmatrix} 0 & -I_N\\ I_N & 0 \end{pmatrix}}_{J_{2N}}
\begin{pmatrix}
\frac{d\alpha_{1}}{dt}\\ \vdots\\ \frac{d\alpha_{N}}{dt}\\ \frac{d\alpha_{N+1}}{dt}\\ \vdots\\ \frac{d\alpha_{2N}}{dt}
\end{pmatrix}.$$
Putting

$$x = \begin{pmatrix}\alpha_1\\ \alpha_2\\ \vdots\\ \alpha_{2N}\end{pmatrix}
\quad\text{and}\quad
H(t) = \begin{pmatrix}
w_{1,1}(t) & \cdots & w_{N,1}(t) & \cdots & w_{2N,1}(t)\\
\vdots & & \vdots & & \vdots\\
w_{1,N}(t) & \cdots & w_{N,N}(t) & \cdots & w_{2N,N}(t)\\
w_{1,1+N}(t) & \cdots & w_{N,1+N}(t) & \cdots & w_{2N,1+N}(t)\\
\vdots & & \vdots & & \vdots\\
w_{1,2N}(t) & \cdots & w_{N,2N}(t) & \cdots & w_{2N,2N}(t)
\end{pmatrix},$$

the system (2.4) becomes

$$J_{2N}\,\frac{dx(t)}{dt} = H(t)\,x(t). \qquad (2.5)$$
From now on, instead of J_{2N} we will simply write J, which is always of dimension 2N. In the following section, we present some results on canonical and Hamiltonian systems.
Through the last two Definitions 8 and 9, we notice that a canonical equation is a Hamiltonian equation with real matrices. The matrix function H(t) will be called the Hamiltonian for both canonical and Hamiltonian equations, and represents their coefficient. If H(t) = H_0 is a constant real matrix, the system is said to have constant coefficients; it is said to be variable if H(t) varies. When H(t) is moreover periodic, the system is said to have periodic coefficients. The coefficient H(t) in Example 2 is π-periodic and that of Example 3 is 2π/γ-periodic; consequently, those equations will be called canonical Hamiltonian systems with periodic coefficients.
$$J\,\frac{dX(t)}{dt} = H(t)\,X(t), \qquad X(0) = I_{2N}. \qquad (2.7)$$
We have the following important definitions (see [5]).

Definition 10. 1) The 2N × 2N matrix X(t) satisfying Equation (2.7) is called the matrizant of Equation (2.6).
2) A fundamental set of solutions of Equation (2.6) is any set of 2N solutions x_1(t), x_2(t), …, x_{2N}(t) which are linearly independent for any t ∈ R. The square matrix X(t) with columns x_1(t), x_2(t), …, x_{2N}(t) is called a fundamental matrix of Equation (2.6), and x_i(t) = X(t)x_i(0). We will call X(t) the fundamental solution of Equation (2.7).
3) The value at the period T of the fundamental matrix X(t) defined by the initial condition X(0) = I_{2N} is called the monodromy matrix, and its eigenvalues are the multipliers of Equation (2.6). The set of all multipliers is called the spectrum of the system (2.6).
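As an illustrative sketch of Definition 10 (the Mathieu-like Hamiltonian below is our own toy example, and SciPy's `solve_ivp` is assumed available), the monodromy matrix can be computed by integrating Equation (2.7) over one period:

```python
import numpy as np
from scipy.integrate import solve_ivp

J = np.array([[0., -1.], [1., 0.]])     # standard J for N = 1; here J^{-1} = -J
T = 2 * np.pi                           # period of the coefficients

def H(t):
    """A symmetric T-periodic Hamiltonian (a Mathieu-like toy example)."""
    return np.array([[1.0 + 0.3 * np.cos(t), 0.0], [0.0, 1.0]])

def rhs(t, y):
    # J dX/dt = H(t) X  is equivalent to  dX/dt = -J H(t) X
    X = y.reshape(2, 2)
    return (-J @ H(t) @ X).ravel()

sol = solve_ivp(rhs, [0.0, T], np.eye(2).ravel(), rtol=1e-10, atol=1e-12)
XT = sol.y[:, -1].reshape(2, 2)         # the monodromy matrix X(T)

assert np.allclose(XT.T @ J @ XT, J, atol=1e-6)   # X(T) is J-symplectic
multipliers = np.linalg.eigvals(XT)               # the multipliers of (2.6)
```

The multipliers are the eigenvalues of X(T); their product equals det X(T) = 1, consistent with the symplecticity established below.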
We recall the following proposition, which shows an important property of the fundamental solution of Equation (2.7) (see [5, 28]).

Proposition 4. The matrizant X(t) of the Hamiltonian equation (2.6) satisfies the identity

$$X(t)^\star J\, X(t) = J. \qquad (2.8)$$

Conversely, if a matrix X(t) satisfies condition (2.8) and is continuous, has an integrable piecewise-continuous derivative, and X(0) = I_{2N}, then it is the matrizant of some Hamiltonian equation.
Proof. Let X(t) be the matrizant of a Hamiltonian equation. Then (2.8) is true at t = 0, because X(0) = I_{2N}, that is to say X(0)^⋆ J X(0) = J. The derivative of the left-hand side of the equality is zero:

$$\begin{aligned}
\frac{d}{dt}\bigl(X^\star J X\bigr) &= \Bigl(\frac{d}{dt}X\Bigr)^{\!\star} J X + X^\star J\,\frac{d}{dt}X\\
&= (J^{-1}HX)^\star JX + X^\star J J^{-1} H X\\
&= X^\star H^\star (J^{-1})^\star J X + X^\star H X\\
&= X^\star H^\star (-J^{-1}) J X + X^\star H X\\
&= -X^\star H^\star X + X^\star H X\\
&= -X^\star H X + X^\star H X = 0, \qquad (2.9)
\end{aligned}$$

using (J^{-1})^⋆ = −J^{-1} and H^⋆ = H. Consequently, equation (2.8) is valid for all t.
Now for the converse, taking determinants on both sides of Equation (2.8), we obtain

$$\det(X^\star J X) = \det(J) \iff \det(X^\star)\det(J)\det(X) = \det(J) \iff \det(X^\star)\det(X) = 1 \iff |\det X|^2 = 1,$$

so that X^{-1} exists. Differentiating (2.8), we get
$$\Bigl(\frac{d}{dt}X\Bigr)^{\!\star} J X + X^\star J\,\frac{d}{dt}X = 0. \qquad (2.10)$$

Multiplying this identity on the left by (X^{-1})^⋆ and on the right by X^{-1}, we obtain

$$(X^{-1})^\star\Bigl(\frac{dX}{dt}\Bigr)^{\!\star} J + J\,\frac{dX}{dt}\,X^{-1} = 0 \iff \Bigl(J\,\frac{dX}{dt}\,X^{-1}\Bigr)^{\!\star} = J\,\frac{dX}{dt}\,X^{-1},$$

since J^⋆ = −J. The matrix H(t) := J (dX/dt) X^{-1} is therefore Hermitian, and X satisfies

$$J\,\frac{dX}{dt} = H(t)\,X,$$

which is a Hamiltonian equation. That ends the proof.
Since this proposition holds, that is to say X(t)^⋆ J X(t) = J, coming back to Definition 1 we can say that the matrizant X(t), and its value at the end of a period X(T), for a canonical system are J-symplectic, because X^T J X = J.
The identity (2.8) yields several properties of a Hamiltonian equation, first pointed out by Poincaré. We recall those properties through the following definition and proposition (see [28]).
Definition 11. Let Equation (2.6) be a Hamiltonian system. The adjoint system is defined by

$$\frac{dx}{dt} = -(J^{-1}H)^\star x. \qquad (2.11)$$

If J^{-1}H is a real matrix, then (J^{-1}H)^⋆ = (J^{-1}H)^T.
1) For any two solutions x_i, x_j of Equation (2.6), (Jx_i, x_j) ≡ constant.
2) Suppose that a Hamiltonian equation (2.6) has a linear first integral, i.e., an integral of the form ψ(t, x) = (x, v(t)). Then the vector function ϕ(t) = J^{-1}v(t) is a solution of Equation (2.6). Indeed, v satisfies the adjoint equation

$$\frac{dv}{dt} = -(J^{-1}H)^\star v = -H^\star (J^{-1})^\star v = H J^{-1} v,$$

and since v(t) = Jϕ(t), we have

$$J\,\frac{d\varphi}{dt} = H\varphi.$$
We cannot close this part without discussing the spectral aspect of the monodromy matrix X(T), which governs the stability of the Hamiltonian system. Symplectic matrices are often obtained from Hamiltonian differential systems with periodic coefficients.
We consider a canonical equation (2.6) with T-periodic coefficients ($J^T = -J$, $H(t+T) = H(t) = H(t)^T$, $t\in\mathbb{R}$). It follows from the identity $X^T J X = J$ that $JX = (X^T)^{-1}J$, and then
\[
X = J^{-1}(X^T)^{-1}J. \qquad (2.12)
\]
The multipliers of (2.6) are the roots of the equation $\det[X(T)-\lambda I_{2N}] = 0$, which is called the characteristic equation of this system.
Let $\{\lambda_\nu\}$ be a complete set of multipliers for equation (2.6). The matrices X(T) and $X(T)^T$ have the same eigenvalues, so the eigenvalues of the matrix $[X(T)^T]^{-1}$ are the numbers $\lambda_\nu^{-1}$. Thus it follows from (2.12) that the sets $\{\lambda_\nu\}$ and $\{\lambda_\nu^{-1}\}$ coincide.
We now consider the case of complex coefficients: a Hermitian T-periodic matrix-function H(t) ($H(t+T) = H(t) = H(t)^*$) and a skew-Hermitian matrix J ($J^* = -J$) in equation (2.6). Instead of (2.12) we have $X^* J X = J$, hence
\[
X = J^{-1}(X^*)^{-1}J, \qquad (2.13)
\]
and X(t) is in general a complex matrix. The set of eigenvalues of $X(t)^*$ is now $\{\bar\lambda_\nu\}$. Thus it follows from (2.13), as before, that the sets $\{\lambda_\nu\}$ and $\{\bar\lambda_\nu^{-1}\}$ coincide. If $|\lambda| \neq 1$, the points $\lambda$ and $\bar\lambda^{-1}$ are symmetric (in the sense of inversion) about the unit circle.
Thus the spectrum of a Hamiltonian equation is symmetric about the unit circle. In the case of a canonical equation, the spectrum is also symmetric about the real axis, so that the non-real multipliers not on the unit circle may be partitioned into quadruples $\lambda,\ \bar\lambda,\ \lambda^{-1},\ \bar\lambda^{-1}$.
Let $\psi(t) = X(t+T)$, where X(t) is the matrizant of
\[
\dot X(t) = J^{-1}H(t)X(t).
\]
Then we have
\[
\dot\psi(t) = \dot X(t+T) = J^{-1}H(t+T)X(t+T) = J^{-1}H(t)\psi(t),
\]
so $\psi(t)$ is also a solution of equation (2.14) and therefore a fundamental matrix solution. Since $\psi(t)$ and X(t) satisfy the same equation, there exists a constant matrix C such that $\psi(t) = X(t)C$, that is,
\[
X(t+T) = X(t)C.
\]
Letting $S(t) = X(t+T)X(T)^{-1}$, we have
\[
\dot S(t) = \dot X(t+T)X(T)^{-1} = J^{-1}H(t+T)X(t+T)X(T)^{-1} = J^{-1}H(t)S(t).
\]
Then S(t) and X(t) satisfy the same equation, and
\[
S(0) = X(T)X(T)^{-1} = I_{2N} = X(0),
\]
thus S(t) and X(t) coincide, and we write
\[
X(t+T)X(T)^{-1} = X(t) \iff X(t+T) = X(t)X(T).
\]
By identification, we see that C = X(T). That ends the proof.
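The periodicity relation $X(t+T) = X(t)X(T)$ can be verified numerically. The sketch below integrates the matrizant equation for a hypothetical T-periodic symmetric H(t) (an illustrative choice, not the thesis's example).

```python
import numpy as np
from scipy.integrate import solve_ivp

# Check X(t+T) = X(t) X(T) for dX/dt = J^{-1} H(t) X with a
# hypothetical 2*pi-periodic symmetric H(t).
J = np.array([[0.0, 1.0], [-1.0, 0.0]])
T = 2 * np.pi
def H(t):
    return np.array([[2.0 + np.cos(t), 0.2], [0.2, 1.0]])
def rhs(t, x):
    X = x.reshape(2, 2)
    return (np.linalg.solve(J, H(t)) @ X).ravel()
def matrizant(t):
    sol = solve_ivp(rhs, (0.0, t), np.eye(2).ravel(),
                    rtol=1e-10, atol=1e-12)
    return sol.y[:, -1].reshape(2, 2)
t0 = 1.3
lhs = matrizant(t0 + T)
rhs_mat = matrizant(t0) @ matrizant(T)      # X(t0) X(T)
assert np.allclose(lhs, rhs_mat, atol=1e-6)
```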
From $\frac{dX(t)}{dt}X(t)^{-1} = J^{-1}H(t)$: if $J^{-1}H(t)$ is an integrable piecewise-continuous matrix-function, then X(t) is a continuous and nonsingular matrix-function, and so is X(T).
Proposition 7. If X(T), a 2N×2N matrix, is nonsingular, then there exists at least one 2N×2N (complex) matrix TR such that
\[
X(T) = e^{TR}. \qquad (2.16)
\]
Proof. Suppose $\det X(T) \neq 0$. Then there exists a nonsingular matrix P such that
\[
P^{-1}X(T)P = X_1(T)\oplus\cdots\oplus X_s(T), \qquad X_j(T) = \delta_j I_{n_j} + N^{(j)}_{n_j},
\]
with the $N^{(j)}_{n_j}$ nilpotent matrices. Since X(T) is nonsingular, $\delta_j \neq 0$ and $\det X_j(T) \neq 0$. Then we have
\[
\begin{aligned}
\ln X(T) &= \ln\big(X_1(T)\oplus\cdots\oplus X_s(T)\big) \\
&= \ln X_1(T)\oplus\cdots\oplus\ln X_s(T) \\
&= \ln\Big[\delta_1 I_{n_1}\Big(I_{n_1}+\tfrac{1}{\delta_1}N^{(1)}_{n_1}\Big)\Big]\oplus\cdots\oplus\ln\Big[\delta_s I_{n_s}\Big(I_{n_s}+\tfrac{1}{\delta_s}N^{(s)}_{n_s}\Big)\Big] \\
&= \Big(I_{n_1}\ln\delta_1 + \ln\big(I_{n_1}+\tfrac{1}{\delta_1}N^{(1)}_{n_1}\big)\Big)\oplus\cdots\oplus\Big(I_{n_s}\ln\delta_s + \ln\big(I_{n_s}+\tfrac{1}{\delta_s}N^{(s)}_{n_s}\big)\Big) \\
&= \bigoplus_{i=1}^{s} I_{n_i}\ln\delta_i + \bigoplus_{i=1}^{s}\ln\Big(I_{n_i}+\tfrac{1}{\delta_i}N^{(i)}_{n_i}\Big) \\
&= TR. \qquad (2.17)
\end{aligned}
\]
If $\big\|\tfrac{1}{\delta_i}N^{(i)}_{n_i}\big\| < 1$, equation (2.17) gives
\[
TR = \bigoplus_{i=1}^{s} I_{n_i}\ln\delta_i + \bigoplus_{i=1}^{s}\sum_{k=1}^{\infty}\frac{(-1)^{k-1}}{k}\Big(\frac{1}{\delta_i}N^{(i)}_{n_i}\Big)^k. \qquad (2.18)
\]
If m denotes the integer such that $\big(\tfrac{1}{\delta_i}N^{(i)}_{n_i}\big)^{m+1} = 0$, equation (2.18) becomes
\[
TR = \bigoplus_{i=1}^{s} I_{n_i}\ln\delta_i + \bigoplus_{i=1}^{s}\sum_{k=1}^{m}\frac{(-1)^{k-1}}{k}\Big(\frac{1}{\delta_i}N^{(i)}_{n_i}\Big)^k. \qquad (2.19)
\]
Note that each $\ln\delta_i$ is defined only up to a multiple of $2\pi i k$, for all $k\in\mathbb{Z}$, so TR is not unique.
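In practice the Jordan-block series (2.17)-(2.19) is what a numerical matrix logarithm computes. A minimal sketch, using scipy's `logm` on a hypothetical nonsingular matrix:

```python
import numpy as np
from scipy.linalg import expm, logm

# Proposition 7 in practice: a nonsingular matrix has a (possibly
# complex) logarithm TR with X(T) = e^{TR}.
XT = np.array([[0.0, 1.0], [-1.0, 0.5]])   # nonsingular: det = 1
TR = logm(XT)                               # a matrix logarithm of X(T)
assert np.allclose(expm(TR), XT)            # X(T) = e^{TR} recovered
```

Since the logarithm is defined only up to $2\pi i k$ per eigenvalue, `logm` returns one branch (the principal one).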
We have the following theorem, the Floquet-Lyapunov theorem for Hamiltonian systems with periodic coefficients. It combines two results: the Floquet representation theorem for the general solution of such equations and the Lyapunov reducibility theorem (see [25, 28]). The combination is natural because there is no significant distinction between the two results.
Theorem 8. (Floquet-Lyapunov theorem) The matrizant X(t) of system (2.6) with T-periodic coefficients may be expressed in the form
\[
X(t) = U(t)e^{tR}, \qquad (2.20)
\]
where U(t) is a T-periodic 2N×2N matrix-function, continuous, with an integrable piecewise-continuous derivative, and R is a constant 2N×2N matrix. Conversely, if U(t) is a matrix-function with these properties and R an arbitrary constant matrix, then the matrix X(t) defined by equation (2.20) is the matrizant of an equation (2.6) with T-periodic coefficients.
Setting $U(t) = X(t)e^{-tR}$ with $e^{TR} = X(T)$, we have
\[
U(0) = I_{2N} = U(T),
\]
so that
\[
X(t) = U(t)e^{tR},
\]
which is the Floquet factorization. Now suppose that U(t) and TR defined above exist, and set $x = U(t)y$; then
\[
\dot x = \dot U y + U(t)\dot y, \qquad (2.23)
\]
and since R is constant and $y(0) = x(0)$ is defined, this substitution is the Lyapunov transformation (reduction). This completes the proof.
Equation (2.20) is a factorization of X(t) into a T-periodic matrix U(t), continuous with a continuous derivative, and the matrix exponential $e^{tR}$. This is the Floquet factorization, and the Floquet representation theorem states the existence of these factors; see [25].
Let equation (2.6) be a canonical equation with T-periodic coefficients. The solution of this equation is given by the following proposition.
Proposition 8. The matrizant X(t) of a canonical equation with T-periodic Hamiltonian H(t) may be expressed as $X(t) = U(t)e^{tQ}$, with U(t) a 2T-periodic real symplectic matrix-function having an integrable piecewise-continuous derivative, and Q a constant real matrix.
Proof. Assume that there exists a matrix R such that $X(T) = e^{RT}$. If X(T) is real while R is complex, then $X(T) = \overline{X(T)} = e^{\bar RT}$, and therefore
\[
X(T)^2 = e^{(R+\bar R)T} = e^{2QT}, \qquad Q = \frac{R+\bar R}{2},
\]
so there exists a real matrix Q such that $X(T)^2 = e^{2QT}$. Setting $U(t) = X(t)e^{-tQ}$, we get
\[
\begin{aligned}
U(t+2T) &= X(t+2T)e^{-(t+2T)Q} \\
&= X(t)X(2T)e^{-2TQ}e^{-tQ} \\
&= X(t)X(T)^2e^{-2QT}e^{-tQ} \\
&= X(t)e^{-tQ} = U(t), \qquad (2.26)
\end{aligned}
\]
so U(t) is 2T-periodic. The general solution can also be written $X(t) = e^{tQ}V(t)$. The same argument allows us to choose R = Q, but in the general case $V(t) \neq U(t)$.
By Theorem 8 and Proposition 11, solving a Hamiltonian system with periodic coefficients requires the determination of the monodromy matrix, so we must find numerical methods for computing X(T).
Several methods exist to determine the solution of a Hamiltonian system, as of any system, at the end of a period. The following paragraph gives a method to compute X(T), called "the method of double orthogonal sweep" [5]. It consists in subdividing the integration interval into 2n subintervals; in other words, each half-period is subdivided into n intervals, and the orthogonal sweep method is used on each half-period. The starting time is t = 0 if the integration is over the interval $[-\pi, \pi]$, or $t = \tfrac{T}{2}$ if the integration is over the interval $[0, 2\pi]$. In our case, we take the initial time $t = \tfrac{T}{2}$ to solve the system
\[
J\frac{dX(t)}{dt} = H(t)X(t),
\]
or, in its equivalent form,
\[
\frac{dX(t)}{dt} = J^{-1}H(t)X(t).
\]
Then, we consider the following linear system:
\[
\begin{cases}
\dfrac{dU(t)}{dt} = J^{-1}H(t)U(t), & \tfrac{T}{2}\le t\le T,\\[4pt]
\dfrac{dV(t)}{dt} = -J^{-1}H(t)V(t), & 0\le t\le \tfrac{T}{2},
\end{cases}
\]
with initial conditions $V(\tfrac{T}{2}) = U(\tfrac{T}{2}) = \tfrac{I_{2N}}{\sqrt 2}$. Some properties of U(t) and V(t) are given through the following lemma [5].
Lemma 2. The solutions U(t) and V(t) of the above system satisfy
\[
U(t)^T J U(t) = \tfrac12 J, \quad \tfrac{T}{2}\le t\le T; \qquad
V(t)^T J V(t) = \tfrac12 J, \quad 0\le t\le \tfrac{T}{2}.
\]
Proof:
\[
\begin{aligned}
\frac{d}{dt}\big(U(t)^T J U(t)\big) &= \frac{dU(t)^T}{dt}JU(t) + U(t)^T J\frac{dU(t)}{dt} \\
&= U(t)^T (J^{-1}H(t))^T JU(t) + U(t)^T J\big(J^{-1}H(t)U(t)\big) \\
&= U(t)^T H(t)^T J^{-T} JU(t) + U(t)^T H(t)U(t) \\
&= -U(t)^T H(t)^T U(t) + U(t)^T H(t)U(t) = 0, \qquad (2.27)
\end{aligned}
\]
and also
\[
\begin{aligned}
\frac{d}{dt}\big(V(t)^T J V(t)\big) &= -V(t)^T (J^{-1}H(t))^T JV(t) - V(t)^T J\big(J^{-1}H(t)V(t)\big) \\
&= V(t)^T H(t)^T V(t) - V(t)^T H(t)V(t) = 0. \qquad (2.28)
\end{aligned}
\]
Hence $U(t)^T J U(t)$ and $V(t)^T J V(t)$ are constant on their respective intervals, and according to the initial conditions,
\[
V\Big(\frac{T}{2}\Big)^T J V\Big(\frac{T}{2}\Big) = U\Big(\frac{T}{2}\Big)^T J U\Big(\frac{T}{2}\Big) = \frac{I_{2N}}{\sqrt 2}\,J\,\frac{I_{2N}}{\sqrt 2} = \frac12 J.
\]
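The invariant of Lemma 2 is easy to check numerically. The sketch below integrates the U-equation of the sweep for a hypothetical T-periodic symmetric H(t) (an illustrative choice) and verifies that $U(t)^T J U(t)$ stays equal to $\tfrac12 J$.

```python
import numpy as np
from scipy.integrate import solve_ivp

# With U(T/2) = I/sqrt(2) and dU/dt = J^{-1} H(t) U, the quantity
# U(t)^T J U(t) is constant and equals J/2 (Lemma 2).
J = np.array([[0.0, 1.0], [-1.0, 0.0]])
T = 2 * np.pi
def H(t):
    return np.array([[1.5 + np.sin(t), 0.1], [0.1, 1.0]])
def rhs(t, u):
    U = u.reshape(2, 2)
    return (np.linalg.solve(J, H(t)) @ U).ravel()
U0 = np.eye(2) / np.sqrt(2.0)
sol = solve_ivp(rhs, (T / 2, T), U0.ravel(), rtol=1e-10, atol=1e-12)
UT = sol.y[:, -1].reshape(2, 2)              # U(T)
assert np.allclose(UT.T @ J @ UT, J / 2, atol=1e-7)
```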
Putting $X(T) = U(T)(V(0))^{-1}$, the method gives us the monodromy matrix W = X(T). Let $t_k^1$ and $t_k^2$ ($1\le k\le n$) be the sequences defined by
\[
t_k^1 = \frac{T}{2}\Big(1+\frac{k-1}{n}\Big) \qquad\text{and}\qquad t_k^2 = \frac{T}{2}\Big(1-\frac{k-1}{n}\Big).
\]
The solution of the above system then leads, for all k = 1, 2, ..., n, to the resolution of
\[
\begin{cases}
\dfrac{dU_k(t)}{dt} = J^{-1}H(t_k^1+t)\,U_k(t),\\[4pt]
\dfrac{dV_k(t)}{dt} = -J^{-1}H(t_k^2-t)\,V_k(t),
\end{cases} \qquad (2.29)
\]
on the interval $]0, \tfrac{T}{2n}]$ (by one of the Runge-Kutta methods, for example). The initial condition $\begin{pmatrix}U_{k+1}(0)\\ V_{k+1}(0)\end{pmatrix}$ of equation (2.29) at iteration k+1 is obtained by normalizing the solution of iteration k at time $t = \tfrac{T}{2n}$; this normalization can be done with the QR algorithm (or the singular value decomposition (SVD) algorithm). At iteration k = n, we obtain the matrix
\[
W = X(T) = U_n\Big(\frac{T}{2n}\Big)\Big(V_n\Big(\frac{T}{2n}\Big)\Big)^{-1}.
\]
Proposition 9. For all n > 1, the matrix $W = U_n(\tfrac{T}{2n})\big(V_n(\tfrac{T}{2n})\big)^{-1}$ is J-symplectic and gives a good approximation of the monodromy matrix X(T) of the system $J\frac{dX(t)}{dt} = H(t)X(t)$.
Remark that
\[
\frac{dV_k^{-1}(t)}{dt} = -V_k^{-1}(t)\frac{dV_k(t)}{dt}V_k^{-1}(t) = V_k^{-1}(t)\,J^{-1}H(t_k^2-t),
\]
for all k = 1, 2, ..., n and all $t\in[0,\tfrac{T}{2n}]$, with $J^{-1}H(\tfrac{T}{2n}-t)\to 0$ as $t\to\tfrac{T}{2n}$. Thus the solution $W_n(t)$ of the above equation converges to the monodromy matrix X(T) of the equation
\[
J\frac{dX(t)}{dt} = H(t)X(t),
\]
that is to say $W_n(\tfrac{T}{2n}) \equiv X(T)$. That ends the proof.
Definition 12. System (2.6) is stable if each of its solutions x(t) remains bounded for all $t\in\mathbb{R}$.
Thus, based on Definition 10, we may give the following proposition [5, 4, 28].
Proposition 10. The stability (respectively, strong stability) of system (2.6) reduces to the stability (respectively, strong stability) of the J-symplectic matrix X(T).
Proof. A necessary condition for the stability of (2.6) is that the powers of X(T) remain bounded. Since X(T) is stable, $X^n(T)$ and $X^{-n}(T)$ are bounded (according to Definition 2), that is to say there exists a constant K such that
\[
\sup_{n\in\mathbb{Z}}\|X^n(T)\| \le K,
\]
and we can say that the eigenvalues of X(T) lie on the unit circle and are semi-simple.
But those conditions are not sufficient to guarantee that X(T) remains stable when the system is subjected to a Hamiltonian perturbation. Let us give another definition, of the strong stability of a Hamiltonian system [5, 4].
Definition 13. The system (2.6) is strongly stable if every sufficiently close Hamiltonian system with T-periodic coefficients is stable. In other words, if (2.6) is stable, then there exists $\varepsilon > 0$ such that every Hamiltonian system with T-periodic coefficients of the form
\[
J\frac{dx}{dt} = \widetilde H x
\]
that satisfies the condition
\[
\|H-\widetilde H\| \equiv \int_0^T |H(t)-\widetilde H(t)|\,dt < \varepsilon
\]
is stable.
In short, we have the following theorem on the monodromy matrix of any such system, which will be much used in the sequel; see [5, 28].
Theorem 9. The system (2.6) is strongly stable if and only if the J-symplectic matrix X(T) is strongly stable.
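The necessary condition of Proposition 10 (all multipliers on the unit circle) can be tested directly on a monodromy matrix. The sketch below uses a hypothetical constant positive-definite H, so the monodromy matrix is a matrix exponential; the choices of H and T are illustrative.

```python
import numpy as np
from scipy.linalg import expm

# Stability test sketch for a J-symplectic monodromy matrix:
# all eigenvalues (multipliers) must lie on the unit circle.
J = np.array([[0.0, 1.0], [-1.0, 0.0]])
H = np.array([[2.0, 0.0], [0.0, 1.0]])        # positive definite
XT = expm(2 * np.pi * np.linalg.solve(J, H))  # monodromy for T = 2*pi
eigs = np.linalg.eigvals(XT)
assert np.allclose(np.abs(eigs), 1.0)         # multipliers on unit circle
```

Strong stability further requires the multipliers to stay on the circle under small Hamiltonian perturbations, which this eigenvalue test alone does not decide.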
\[
J\frac{dz(t)}{dt} = H(t)z(t). \qquad (3.1)
\]
Let $E = \{H(t)\}$ and $\mathcal{E} = \{Z(t)\}$ be, respectively, the set of all integrable piecewise-continuous matrix-functions on [0, T] and the set of all matrix-functions that are matrizants of equation (3.1). Suppose that the correspondence defined above between E and $\mathcal{E}$ is a homeomorphism (a one-to-one bi-continuous mapping), that is to say for every $H_i(t)\in E$ there exists a unique $Z_i\in\mathcal{E}$ satisfying equation (3.1). We define a norm $\|\cdot\|$ on E and a metric $\Gamma(\cdot,\cdot)$ on $\mathcal{E}$ [28] to study the variation of Z(t) with respect to the variation of H(t):
\[
\|H\| = \int_0^T |H(t)|\,dt
\]
and
\[
\Gamma(Z_1, Z_2) = \sup_{0\le t\le T}|Z_1(t)-Z_2(t)| + \int_0^T |\dot Z_1(t)-\dot Z_2(t)|\,dt.
\]
Proof. We know that the general solution of an inhomogeneous equation is the sum of the general solution of the homogeneous equation and a particular solution of the inhomogeneous equation. According to the properties of the Hamiltonian equation, the homogeneous solution of equation (3.2) is given by
\[
z_h(t) = X(t)x(0).
\]
We now look for a particular solution of the form $z_p(t) = X(t)x(t)$ with
\[
x(t) = \int_0^t X(\tau)^{-1}J^{-1}f(\tau)\,d\tau,
\]
so that
\[
z(t) = z_h + z_p = X(t)x(0) + X(t)\int_0^t X(\tau)^{-1}J^{-1}f(\tau)\,d\tau
= X(t)\Big(x(0) + \int_0^t X(\tau)^{-1}J^{-1}f(\tau)\,d\tau\Big).
\]
Based on the above definitions, we may give an important proposition to study the variation of a system; see [28].
Proposition 12. Two coefficients $(H_1(t), H_2(t))\in E^2$ which are close together in the norm $\|\cdot\|$ determine two matrizants $(Z_1(t), Z_2(t))\in\mathcal{E}^2$ (according to equation (3.1) we have $H(t) = J\frac{dZ}{dt}Z^{-1}$) which are close in the metric $\Gamma(Z_1,Z_2)$, and conversely.
Proof. Consider $(H_1(t), H_2(t))\in E^2$ and $(Z_1(t), Z_2(t))\in\mathcal{E}^2$; we have
\[
H_2(t)-H_1(t) = J\big(\dot Z_2 Z_2^{-1}-\dot Z_1 Z_1^{-1}\big).
\]
Using the above definitions we obtain
\[
\begin{aligned}
\|H_2-H_1\| &= \int_0^T |H_2(t)-H_1(t)|\,dt \\
&= \int_0^T \big|J\big(\dot Z_2 Z_2^{-1}-\dot Z_1 Z_1^{-1}\big)\big|\,dt \\
&= \int_0^T \big|J\big(\dot Z_2 Z_2^{-1}-\dot Z_2 Z_1^{-1}+\dot Z_2 Z_1^{-1}-\dot Z_1 Z_1^{-1}\big)\big|\,dt \\
&\le \|J\|\int_0^T \big|\dot Z_2\big(Z_2^{-1}-Z_1^{-1}\big)+\big(\dot Z_2-\dot Z_1\big)Z_1^{-1}\big|\,dt \\
&\le \|J\|\Big\{\sup_{0\le t\le T}\big|Z_2^{-1}-Z_1^{-1}\big|\int_0^T |\dot Z_2|\,dt + \sup_{0\le t\le T}\big|Z_1^{-1}\big|\int_0^T \big|\dot Z_2-\dot Z_1\big|\,dt\Big\}, \qquad (3.6)
\end{aligned}
\]
so that $\|H_2-H_1\|\to 0$ when $\Gamma(Z_1,Z_2)\to 0$.
Now, to prove the converse, we consider $\frac{dZ_1}{dt} = J^{-1}H_1Z_1 = \dot Z_1$ and $\frac{dZ_2}{dt} = J^{-1}H_2Z_2 = \dot Z_2$. Thus $\frac{dZ_2}{dt}-\frac{dZ_1}{dt} = J^{-1}(H_2Z_2-H_1Z_1)$, which implies
\[
\frac{d(Z_2-Z_1)}{dt} = J^{-1}\big(H_2Z_2+H_2Z_1-H_2Z_1-H_1Z_1\big) = J^{-1}\big(H_2(Z_2-Z_1)+(H_2-H_1)Z_1\big). \qquad (3.7)
\]
Equation (3.7) has the form of equation (3.4); consequently its solution has the form (3.5). Then we have
\[
Z_2-Z_1 = Z_2(t)\int_0^t Z_2(\tau)^{-1}J^{-1}\big(H_2(\tau)-H_1(\tau)\big)Z_1(\tau)\,d\tau,
\]
\[
|Z_2-Z_1| \le |Z_2(t)|\int_0^t \big|Z_2(\tau)^{-1}J^{-1}\big(H_2(\tau)-H_1(\tau)\big)Z_1(\tau)\big|\,d\tau.
\]
If $H_2(\tau)$ is sufficiently close to $H_1(\tau)$, the right-hand side of this inequality is close to zero; therefore
\[
|Z_2-Z_1|\longrightarrow 0.
\]
That ends the proof.
Under a perturbation, a Hamiltonian system obviously changes into another system. The state of the new system depends on the size of the perturbation: the larger the perturbation, the further the system moves away from its origin. Consequently, if the perturbation is small, the system turns into a neighboring system. The following theorem describes the stability of the original system from the new one ([28], page 129).
Theorem 10. Suppose that a Hamiltonian equation (3.1) with Hamiltonian $H_0(t)$ has distinct multipliers, all on the unit circle. Then there exists $\varepsilon > 0$ such that the trivial solution of any equation (3.1) with Hamiltonian H(t) such that
\[
\|H-H_0\| = \int_0^T |H(t)-H_0(t)|\,dt < \varepsilon
\]
is stable.
Proof. Describe disjoint circles $\gamma_\nu$ around each of the multipliers $\lambda_\nu^{(0)}$ ($\nu = 1,\ldots,2N$) of the original system. If the matrix $H_0(t)$ changes to H(t) continuously in the sense of the norm (shown in the above proposition), the corresponding change in the matrizant Z(t) is also continuous. Consequently, the monodromy matrix Z(T) also changes continuously, and hence so do the multipliers. Thus, for sufficiently small $\varepsilon > 0$, the interior of each circle $\gamma_\nu$ contains exactly one multiplier of equation (3.1) with Hamiltonian H(t). By the symmetry properties of the spectrum, it can lie only on the unit circle. In fact, if there were a multiplier $\lambda$, say, inside the unit circle, then, since the spectrum of Z(T) is symmetric with respect to the unit circle, for sufficiently small $\varepsilon$ there would be another multiplier $\bar\lambda^{-1}$ outside the unit circle. But this contradicts the fact that equation (3.1) has only one multiplier inside each $\gamma_\nu$. Thus all multipliers of equation (3.1) are distinct and lie on the unit circle. That ends the proof.
It is necessary to recall that in our arguments we have assumed that when the matrix $H_0(t)$ is changed to H(t) by a perturbation, system (3.1) preserves its form. In other words, the T-periodic matrix H(t) (like $H_0(t)$) is Hermitian in the case of complex matrices and real symmetric in the case of real matrices. Under a perturbation that distorts this structure, the theorem is no longer valid.
At present, we try to examine the behavior of the multipliers under an increase in the Hamiltonian. The quantities $\sigma_j$ are given by
\[
\sigma_j = \frac{1}{(iJa_j, a_j)}\int_0^T (P(t)z_j, z_j)\,dt, \qquad (3.10)
\]
where the $a_j$ are suitably chosen eigenvectors of the monodromy matrix, $Z(T,0)a_j = \rho^{(0)}a_j$, and $z_j = Z(t,0)a_j$ are the corresponding solutions at $\lambda = 0$. Differentiating the eigenvalue relation at $\lambda = 0$ gives
\[
\Big(iJ\frac{dX(T,\lambda)}{d\lambda}\Big|_{\lambda=0}a_j(0), a_j(0)\Big) + \rho_0\Big(iJ\frac{da_j(\lambda)}{d\lambda}, a_j(0)\Big)
= \frac{d\rho_j(\lambda)}{d\lambda}\Big|_{\lambda=0}(iJa_j(0), a_j(0)) + \rho_0\Big(iJ\frac{da_j(\lambda)}{d\lambda}, a_j(0)\Big), \qquad (3.12)
\]
hence
\[
\Big(iJ\frac{dX(T,\lambda)}{d\lambda}\Big|_{\lambda=0}a_j(0), a_j(0)\Big) = \frac{d\rho_j(\lambda)}{d\lambda}\Big|_{\lambda=0}(iJa_j(0), a_j(0)),
\]
that is,
\[
\frac{d\rho_j(\lambda)}{d\lambda}\Big|_{\lambda=0} = \frac{1}{(iJa_j(0), a_j(0))}\Big(iJ\Big(\frac{dX(T,\lambda)}{d\lambda}\Big)_{\lambda=0}a_j(0), a_j(0)\Big).
\]
We can also rewrite Z(t) in (3.5), the solution of the perturbed system, in another way, as
\[
X(t,\lambda) = X(t,0)\Big(X(0) + \int_0^t X(\tau,0)^{-1}J^{-1}F(\tau)\,d\tau\Big).
\]
Now let us differentiate the perturbed equation with respect to $\lambda$; we obtain
\[
iJ\frac{\partial}{\partial t}\frac{\partial X(t,\lambda)}{\partial\lambda} = \frac{\partial H(t,\lambda)}{\partial\lambda}X(t,\lambda) + H(t,\lambda)\frac{\partial X(t,\lambda)}{\partial\lambda}.
\]
In our case $F(\tau) = \lambda P(\tau)X(\tau)$; consequently $X(t,\lambda)$ becomes
\[
X(t,\lambda) = X(t,0)\Big(X(0) + \int_0^t X(\tau)^{-1}J^{-1}\big(\lambda P(\tau)X(\tau)\big)\,d\tau\Big).
\]
After differentiating $X(t,\lambda)$ with respect to $\lambda$ and taking its value at the end of a period at $\lambda = 0$, we get
\[
\frac{dX(T,\lambda)}{d\lambda}\Big|_{\lambda=0} = X(T,0)\int_0^T X(\tau)^{-1}J^{-1}P(\tau)X(\tau)\,d\tau. \qquad (3.13)
\]
Inserting (3.13) into the expression of $\frac{d\rho_j(\lambda)}{d\lambda}\big|_{\lambda=0}$, we obtain
\[
\begin{aligned}
\frac{d\rho_j(\lambda)}{d\lambda}\Big|_{\lambda=0}
&= \frac{1}{(iJa_j(0),a_j(0))}\Big(iJX(T,0)\int_0^T X(\tau)^{-1}J^{-1}P(\tau)X(\tau)\,d\tau\, a_j(0),\, a_j(0)\Big) \\
&= \frac{i}{(iJa_j(0),a_j(0))}\Big(X(T,0)^{-T}J\int_0^T X(\tau)^{-1}J^{-1}P(\tau)X(\tau)\,d\tau\, a_j(0),\, a_j(0)\Big) \\
&= \frac{i}{(iJa_j(0),a_j(0))}\Big(J\int_0^T X(\tau)^{-1}J^{-1}P(\tau)X(\tau)\,d\tau\, a_j(0),\, X(T,0)^{-1}a_j(0)\Big) \\
&= \frac{i}{(iJa_j(0),a_j(0))}\Big(\int_0^T X(\tau)^{T}P(\tau)X(\tau)\,d\tau\, a_j(0),\, \rho_0^{-1}a_j(0)\Big) \\
&= \frac{i\,\bar\rho_0^{\,-1}}{(iJa_j(0),a_j(0))}\int_0^T \big(P(\tau)X(\tau)a_j(0),\, X(\tau)a_j(0)\big)\,d\tau \\
&= \frac{i\rho_0}{(iJa_j(0),a_j(0))}\int_0^T \big(P(\tau)x_j(\tau),\, x_j(\tau)\big)\,d\tau, \qquad (3.14)
\end{aligned}
\]
since $|\rho_0| = 1$ implies $\bar\rho_0^{\,-1} = \rho_0$. Putting
\[
\sigma_j = \frac{1}{(iJa_j(0),a_j(0))}\int_0^T \big(P(\tau)x_j(\tau), x_j(\tau)\big)\,d\tau,
\]
we have
\[
\frac{d\rho_j(\lambda)}{d\lambda}\Big|_{\lambda=0} = i\rho_0\sigma_j.
\]
It is also necessary to notice that all the quantities $(iJa_j, a_j)$ are nonzero real numbers of like sign, according to the kind of $\rho^{(0)}$. Since P(t) is positive definite, we have
\[
\int_0^T (P(t)x_j, x_j)\,dt > 0;
\]
consequently, the sign of $\sigma_j$ depends on the sign of $(iJa_j, a_j)$. We now examine the modulus of $\rho_j(\lambda)$.
DISCUSSION: Let $\rho^{(0)}$ be a definite multiplier of the first kind. Then $\sigma_1,\ldots,\sigma_r > 0$ and consequently:
• If $\mathrm{Im}\,\lambda > 0$, then $1-2\sigma_j\,\mathrm{Im}(\lambda) < 1$, thus $|\rho_j(\lambda)| < 1$.
• If $\mathrm{Im}\,\lambda < 0$, then $1-2\sigma_j\,\mathrm{Im}(\lambda) > 1$, thus $|\rho_j(\lambda)| > 1$.
In the case where $\rho^{(0)}$ is a definite multiplier of the second kind ($\sigma_1 < 0,\ldots,\sigma_r < 0$) we have:
• If $\mathrm{Im}\,\lambda > 0$, then $1-2\sigma_j\,\mathrm{Im}(\lambda) > 1$, thus $|\rho_j(\lambda)| > 1$.
• If $\mathrm{Im}\,\lambda < 0$, then $1-2\sigma_j\,\mathrm{Im}(\lambda) < 1$, thus $|\rho_j(\lambda)| < 1$.
In summary, the r multipliers which form an r-fold multiplier of the first kind at $\lambda = 0$ move into the interior of the unit circle as $\lambda$ moves into the upper half-plane ($\mathrm{Im}\,\lambda$ increases), and into the exterior of the unit circle as $\lambda$ moves into the lower half-plane ($\mathrm{Im}\,\lambda$ decreases). Contrary to this first case, if $\rho^{(0)}$ is a definite multiplier of the second kind, the r multipliers move into the interior of the unit circle as $\lambda$ moves into the lower half-plane, and into the exterior as $\lambda$ moves into the upper half-plane.
Let us examine the particular case of a real parameter $\lambda$ ($\mathrm{Im}\,\lambda = 0$); this is, of course, the case of our work. We still suppose that P(t) is positive definite for all t. Then if $\lambda > 0$, $H(t)+\lambda P(t) > H(t)$ and we say that there is an increase in the Hamiltonian; if $\lambda < 0$, $H(t)+\lambda P(t) < H(t)$ and we say that there is a decrease in the Hamiltonian.
Because $\rho^{(0)}$ is on the unit circle ($|\rho^{(0)}| = 1$), it may be written in exponential form:
\[
\rho^{(0)} = e^{i\varphi_0}.
\]
As the moduli of the $\rho_j(\lambda)$ are constant ($|\rho_j(\lambda)| = 1$), their variation is tied to their arguments.
To describe the behavior of the multipliers under an increase in the Hamiltonian we use the following theorem, known as the Krein theorem [28].
Theorem 12. With increase in the Hamiltonian, the multipliers of the first kind on the unit circle move along the unit circle in the sense of increasing argument, while those of the second kind do so in the sense of decreasing argument.
Proof: Set $\rho_j(\lambda) = e^{i\varphi_j(\lambda)}$; of course, if $\lambda = 0$ then $\rho_j(0) = \rho_0 = e^{i\varphi_0}$. We have
\[
\frac{d\rho_j}{d\lambda} = ie^{i\varphi_j(\lambda)}\frac{d\varphi_j}{d\lambda} = i\rho_j(\lambda)\frac{d\varphi_j}{d\lambda}
\implies \Big(\frac{d\rho_j}{d\lambda}\Big)_{\lambda=0} = i\rho_j(0)\Big(\frac{d\varphi_j}{d\lambda}\Big)_{\lambda=0},
\]
then
\[
\Big(\frac{d\varphi_j}{d\lambda}\Big)_{\lambda=0} = -\frac{i}{\rho(0)}\Big(\frac{d\rho_j}{d\lambda}\Big)_{\lambda=0}, \qquad (3.16)
\]
and, since $\big(\tfrac{d\rho_j}{d\lambda}\big)_{\lambda=0} = i\rho_0\sigma_j$,
\[
\Big(\frac{d\varphi_j}{d\lambda}\Big)_{\lambda=0} = \sigma_j.
\]
PERTURBATION OF HAMILTONIAN SYSTEM — Charles ADEYEMI, cipmout@yahoo.fr, CIPMA © 2015
Some perturbation theory methods of Hamiltonian systems with periodic coefficients (with a small parameter)
The Taylor expansion of $\varphi_j(\lambda)$ then reads
\[
\varphi_j(\lambda) = \varphi_j(0) + \lambda\Big(\frac{d\varphi_j}{d\lambda}\Big)_{\lambda=0} + O(|\lambda|^2),
\]
that is,
\[
\varphi_j(\lambda) = \varphi_j(0) + \sigma_j\lambda + O(|\lambda|^2). \qquad (3.18)
\]
If $\rho_0$ is of the first kind, that is to say $\sigma_j$ is positive for all j ($j = 1,\ldots,r$), then $\frac{d\varphi_j}{d\lambda} = \sigma_j > 0$ and these multipliers move along the unit circle in the sense of increasing argument. If $\rho_0$ is of the second kind ($\sigma_j < 0$), then these multipliers move along the unit circle in the sense of decreasing argument. This completes the proof.
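Krein's picture can be observed numerically: perturbing a Hamiltonian by $\lambda P$ with P positive definite makes the multipliers slide along the unit circle without leaving it. The sketch below uses hypothetical constant matrices H, P and period T, chosen only for illustration.

```python
import numpy as np
from scipy.linalg import expm

# Multipliers of dx/dt = J^{-1}(H + lam*P)x over one period T:
# they stay on the unit circle and move as lam increases.
J = np.array([[0.0, 1.0], [-1.0, 0.0]])
H = np.eye(2)
P = np.eye(2)                       # positive definite perturbation
T = 1.0
def multipliers(lam):
    return np.linalg.eigvals(expm(T * np.linalg.solve(J, H + lam * P)))
m0 = multipliers(0.0)
m1 = multipliers(0.1)
assert np.allclose(np.abs(m0), 1.0) and np.allclose(np.abs(m1), 1.0)
# the multipliers have moved along the circle, not off it
assert not np.allclose(np.sort_complex(m0), np.sort_complex(m1))
```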
\[
\frac{dx(t,\varepsilon)}{dt} = \big(C + \varepsilon A_1(t) + \varepsilon^2 A_2(t) + \cdots\big)x(t,\varepsilon). \qquad (3.23)
\]
Here $x(t,\varepsilon)$ is the unknown vector, C a constant matrix, and the $A_j(t)$ are piecewise-continuous matrix-functions integrable over the interval [0, T]. These matrices are square 2N×2N matrices and $x(t,\varepsilon)$ is a 2N-vector. We assume that equation (3.23) is Hamiltonian for real $\varepsilon$, that is to say C and $A_1(t), A_2(t),\ldots$ are J-anti-Hermitian for all $t\in\mathbb{R}^+$. In practice these matrices will themselves be real, but certain of the subsequent manipulations may make them complex. We also assume that the series
\[
C + \varepsilon A_1(t) + \varepsilon^2 A_2(t) + \cdots
\]
is convergent, so that the representation (3.23) is not meaningless. For the sake of generality, therefore, we shall allow the coefficients of equation (3.23) to be complex from the start, and point out the specific features of the real case. Let us return to the solution of equation (3.19), that is,
\[
X(t) = U(t)e^{tR}.
\]
It is evident that the stability or instability of the trivial solution (that is, boundedness or unboundedness of solutions as $t\to\infty$) depends entirely on the matrix R. For practical purposes, therefore, it is sufficient to characterize the matrix R, or even only its eigenvalues. We have the following definition; see [28].
Definition 15. The eigenvalues of the matrix R are known as the characteristic exponents of system (3.19).
In practice, however, it is frequently more convenient to start with an arbitrary fundamental set of solutions, considering instead a similar matrix $R_1$, which has the same eigenvalues as R, related by $R = S^{-1}R_1S$ (the Jordan canonical form, for example), where S is an invertible matrix. Let $\alpha$ be an eigenvalue of R and y a corresponding eigenvector. We have
\[
Ry = \alpha y;
\]
then, by $X(t) = U(t)e^{tR}$, the solution satisfying the initial condition $x(0) = y$ is
\[
x(t) = U(t)e^{t\alpha}y. \qquad (3.24)
\]
From equation (3.24), it follows that $x(T) = e^{\alpha T}x(0)$, and therefore the number $\rho = e^{\alpha T}$ is an eigenvalue of the matrix X(T), also called a multiplier of the equation
\[
\frac{dX(t)}{dt} = J^{-1}H(t)X(t). \qquad (3.25)
\]
If $\rho = e^{\alpha T}$, then $\alpha = \frac{1}{T}\ln\rho$. Thus the characteristic exponents $\alpha_\mu$ may be expressed in terms of the multipliers $\rho_\mu$ as
\[
\alpha_\mu = \frac{1}{T}\ln\rho_\mu, \qquad (\mu = 1, 2, \ldots, 2N). \qquad (3.26)
\]
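Relation (3.26) is a one-line computation; the sketch below checks on a hypothetical multiplier that a multiplier on the unit circle yields a purely imaginary characteristic exponent.

```python
import numpy as np

# alpha = (1/T) ln(rho): a unit-modulus multiplier gives a purely
# imaginary characteristic exponent (principal branch of the log).
T = 2.0
rho = np.exp(0.7j)                 # hypothetical multiplier, |rho| = 1
alpha = np.log(rho) / T            # characteristic exponent
assert np.isclose(alpha.real, 0.0)
assert np.isclose(np.exp(alpha * T), rho)
```

Because of the logarithm's branches, $\alpha$ is determined only modulo $\frac{2\pi i}{T}$, which is exactly the congruence that appears below.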
In the case where $\varepsilon = 0$, equation (3.23) becomes
\[
\frac{dx(t)}{dt} = Cx(t). \qquad (3.27)
\]
Definition 16. Let $z_1$, $z_2$ and $\omega$ be three numbers. We say that $z_1$ is congruent to $z_2$ modulo $\omega$ if there exists an integer k such that
\[
z_1 - z_2 = k\omega,
\]
and we write $z_1 \equiv z_2\ (\omega)$.
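Definition 16 translates directly into a predicate; the helper below is hypothetical (not from the thesis) and is used here with the modulus $\frac{2\pi i}{T}$ that appears in the congruence classes discussed next.

```python
import numpy as np

# z1 = z2 (mod w) iff (z1 - z2)/w is an integer.
def congruent(z1, z2, w, tol=1e-9):
    k = (z1 - z2) / w
    return abs(k - round(k.real)) < tol

T = 2 * np.pi
w = 2j * np.pi / T                  # the modulus 2*pi*i/T from the text
assert congruent(0.5j, 3.5j, w)     # the two numbers differ by 3*w
assert not congruent(0.5j, 0.9j, w)
```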
Suppose that equation (3.27) has a purely imaginary r-fold characteristic exponent $\alpha_0 = i\omega_0$, or simply an r-fold multiplier $\rho_0 = e^{i\omega_0 T}$. Then the matrix C has exactly r eigenvalues $\lambda_j$ congruent to $i\omega_0$ modulo $\frac{2\pi i}{T}$. In this case, for sufficiently small $\varepsilon$, equation (3.23) has exactly r characteristic exponents in the neighborhood of $i\omega_0$ which tend to $i\omega_0$ as $\varepsilon\to 0$. Let $\lambda_1,\ldots,\lambda_{2N}$ be the eigenvalues of the matrix C. According to equation (3.26), the multipliers of system (3.27) are
\[
\rho_\mu = e^{\lambda_\mu T}, \qquad (\mu = 1, 2, \ldots, 2N). \qquad (3.28)
\]
These values $\rho_\mu$ are not necessarily distinct. Divide the eigenvalues $\lambda_\mu$ into congruence classes corresponding to the r-fold multipliers of system (3.27), which we may consider as an unperturbed system. For each class $\lambda_{\mu_1},\ldots,\lambda_{\mu_r}$, let $\alpha$ be any number congruent to its members modulo $\frac{2\pi i}{T}$. Thus, for any fixed class,
\[
\lambda_\mu = \alpha + \frac{2\pi i}{T}m_\mu,
\]
where the $m_\mu$ (with $\mu = \mu_1,\ldots,\mu_r$) are integers. Notice also that for any other choice of $\alpha$, all the numbers $m_\mu$ for a fixed class are modified. We then have the following definition; see [28].
Definition 17. If at least one of the congruence classes $\lambda_{\mu_1},\ldots,\lambda_{\mu_r}$ consists of distinct numbers $\lambda_\eta \neq \lambda_\nu$ ($\eta,\nu = \mu_1,\ldots,\mu_r$), we shall say that the singular case holds for system (3.23). If each class $\lambda_{\mu_1},\ldots,\lambda_{\mu_r}$ consists of equal numbers, we shall say that the nonsingular case holds.
At present, we describe a method for computing the matrix of exponents $R(\varepsilon)$ and the matrizant $X(t,\varepsilon)$ of system (3.23) (see [28]); but first we give everything necessary for the description. According to the Floquet-Lyapunov theorem, the matrizant of system (3.15) may be expressed as
\[
X(t,\varepsilon) = U(t,\varepsilon)e^{tR(\varepsilon)}, \quad U(t,\varepsilon) = I + \varepsilon U_1(t) + \varepsilon^2 U_2(t) + \cdots, \quad R(\varepsilon) = R_0 + \varepsilon R_1 + \varepsilon^2 R_2 + \cdots. \qquad (3.30)
\]
The matrices I, $U_j$ and $R_j$ are square 2N×2N matrices, with the $R_j$ constant and the $U_j$ T-periodic with $U_j(0) = 0$. It is easy to see that $X(t,0) = e^{tC}$. The series (3.30) are convergent for small $\varepsilon$ in the nonsingular case, when the matrix C has no distinct eigenvalues congruent modulo $\frac{2\pi i}{T}$. In the singular case, when such eigenvalues exist, the representation (3.30) is meaningless, and one must reduce to the nonsingular case.
The reduction of the singular case to the nonsingular case is very simple. First, we construct a constant 2N×2N matrix $C_0$ satisfying the following conditions:
- $CC_0 = C_0C$;
- $S^{-1}C_0S = K_1\oplus\cdots\oplus K_s$ with
\[
K_\gamma = \frac{2\pi i m_\gamma}{T}I_{k_\gamma}, \qquad (\gamma = 1,\ldots,s). \qquad (3.31)
\]
Letting $\alpha$ be any number congruent to the numbers of the class modulo $\frac{2\pi i}{T}$, we define the integers $m_\gamma$ by
\[
\lambda^{(\gamma)} = \alpha + \frac{2\pi i m_\gamma}{T}. \qquad (3.32)
\]
Then the matrix $S^{-1}(C-C_0)S$ has the Jordan form obtained by replacing each $\lambda_\mu$ by the corresponding $\alpha$. Consequently, to each class of eigenvalues $\lambda_{\mu_1},\ldots,\lambda_{\mu_r}$ of C there corresponds an r-fold eigenvalue $\alpha$ of $R_0 = C - C_0$. Once $R_0$ is found, the first two conditions are obvious.
Let
\[
x = e^{tC_0}y. \qquad (3.33)
\]
Then
\[
\begin{aligned}
C_0e^{tC_0}y + e^{tC_0}\frac{dy}{dt} &= \big(C + \varepsilon A_1(t) + \varepsilon^2 A_2(t) + \cdots\big)e^{tC_0}y \\
\implies e^{tC_0}\frac{dy}{dt} &= \big((C-C_0)e^{tC_0} + \varepsilon A_1(t)e^{tC_0} + \varepsilon^2 A_2(t)e^{tC_0} + \cdots\big)y \\
\implies \frac{dy}{dt} &= \big(e^{-tC_0}(C-C_0)e^{tC_0} + \varepsilon e^{-tC_0}A_1(t)e^{tC_0} + \varepsilon^2 e^{-tC_0}A_2(t)e^{tC_0} + \cdots\big)y. \qquad (3.34)
\end{aligned}
\]
Since by definition C and $C_0$ commute, each matrix commutes with their commutator, that is,
\[
[[C_0, C], C] = 0 = [[C_0, C], C_0],
\]
so that each of them commutes with a function of the other, that is to say
\[
[e^{-tC_0}, C] = 0 = [C_0, e^{-tC}].
\]
In particular $e^{-tC_0}(C-C_0)e^{tC_0} = C - C_0 = R_0$, and (3.34) takes the form
\[
\frac{dy}{dt} = \big(R_0 + \varepsilon B_1(t) + \varepsilon^2 B_2(t) + \cdots\big)y, \qquad (3.35)
\]
with $B_l(t) = e^{-tC_0}A_l(t)e^{tC_0}$.
The $B_l(t)$ are T-periodic. In fact, the $A_l(t)$ are T-periodic and, by the construction (3.31) of $C_0$, $e^{TC_0} = I$, so $e^{\pm tC_0}$ is itself T-periodic.
The above reduction from the singular to the nonsingular case applies as stated to complex coefficients. In the case where the coefficients are real, the matrix $C_0$ must itself be made real. As the choice of the $\alpha$ (eigenvalues of $R_0$) imposes that of the $m_j$, we must choose these numbers in such a way that the set $\{\alpha\}$ is symmetric about the real axis.
Proof. First suppose that the eigenvalues $\lambda$ and $\bar\lambda$ of C ($\lambda \neq \bar\lambda$) belong to one class:
\[
\lambda - \bar\lambda = \frac{2im\pi}{T} \quad (\text{m even}) \implies 2\,\mathrm{Im}(\lambda) = \pm\frac{2m\pi}{T},
\]
and since $\lambda = \mathrm{Re}(\lambda) + i\,\mathrm{Im}(\lambda)$, the numbers $\lambda$ and $\mathrm{Re}(\lambda)$ are congruent modulo $\frac{2i\pi}{T}$. Then the corresponding $\alpha$ is real. That is to say, if $\lambda_j = \alpha + \frac{2im_j\pi}{T}$ is one of the numbers of the class, then $\bar\lambda_j = \alpha + \frac{2i\bar m_j\pi}{T}$ is in the same class, with $\bar m_j = -m_j$ (here $\bar m_j$ is not the complex conjugate of $m_j$ but the integer corresponding to $\bar\lambda_j$).
Now suppose that the eigenvalues $\lambda$ and $\bar\lambda$ are in different classes. Assign some number $\alpha$ to the class $\{\lambda\}$ and $\bar\alpha$ to $\{\bar\lambda\}$; then the set $\{\alpha\}$ is symmetric about the real axis. The classes containing $\lambda$ and $\bar\lambda$ consist of complex conjugate eigenvalues, and the symmetry of the set $\{\lambda\}$ implies that the corresponding numbers are the complex conjugates $\alpha$ and $\bar\alpha$ (with $\alpha \neq \bar\alpha$). The integers corresponding to the conjugates $\lambda$ and $\bar\lambda$ are $m_j$ and $-m_j$; that is to say, if $\lambda_j = \alpha + \frac{2im_j\pi}{T}$, then $\bar\lambda_j = \bar\alpha - \frac{2im_j\pi}{T}$.
By identification we get
\[
\begin{aligned}
C &= R_0,\\
\frac{dU_1}{dt} &= R_0U_1 - U_1R_0 + A_1 - R_1,\\
\frac{dU_2}{dt} &= R_0U_2 - U_2R_0 + A_1U_1 + A_2 - U_1R_1 - R_2,\\
\frac{dU_3}{dt} &= R_0U_3 - U_3R_0 + A_1U_2 + A_2U_1 + A_3 - U_1R_2 - U_2R_1 - R_3,\\
\frac{dU_4}{dt} &= R_0U_4 - U_4R_0 + A_1U_3 + A_2U_2 + A_3U_1 + A_4 - U_1R_3 - U_2R_2 - U_3R_1 - R_4, \qquad (3.36)\\
&\;\;\vdots\\
\frac{dU_j}{dt} &= R_0U_j - U_jR_0 + \big([A_1U_{j-1}+\cdots+A_{j-1}U_1] - [U_1R_{j-1}+\cdots+U_{j-1}R_1] + A_j\big) - R_j,\\
&\;\;\vdots
\end{aligned}
\]
The system (3.36) allows the successive determination of $(R_1, U_1); (R_2, U_2); \ldots; (R_j, U_j); \ldots$ The matrices $R_j$ are constant (independent of t) and are chosen in such a way that each $U_j$ is periodic. This choice is simpler if the original system has either $R_0 = 0$ or $R_0 = \varrho I$ with $\varrho$ a number.
Putting
\[
\Theta_j(t) = [A_1(t)U_{j-1}(t)+\cdots+A_{j-1}(t)U_1(t)] - [U_1(t)R_{j-1}+\cdots+U_{j-1}(t)R_1] + A_j(t), \qquad (3.37)
\]
and integrating this equation over the period and dividing by T, we get the mean value of each member. The mean values of the $U_j$ are null and, as $R_j$ is constant, we have
\[
\frac{1}{T}\big[U_j(T)-U_j(0)\big] = \frac{1}{T}\int_0^T\Theta_j(\tau)\,d\tau - R_j = 0 \implies R_j = \frac{1}{T}\int_0^T\Theta_j(\tau)\,d\tau.
\]
Putting
\[
R_j = [\Theta_j]_\nabla, \qquad (3.40)
\]
we finally get
\[
U_j(t) = \int_0^t \big(\Theta_j(\tau) - [\Theta_j]_\nabla\big)\,d\tau. \qquad (3.41)
\]
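In the easy case $R_0 = 0$, (3.40)-(3.41) say that $R_j$ is the period mean of $\Theta_j$ and $U_j$ the zero-mean antiderivative of $\Theta_j - R_j$. A scalar (1×1) sketch with a hypothetical T-periodic coefficient:

```python
import numpy as np

# With R0 = 0 and j = 1, Theta_1 = A_1: R_1 is the mean of A_1 over a
# period and U_1(t) = int_0^t (A_1 - R_1) dtau is T-periodic.
T = 2 * np.pi
n = 2000
t = np.linspace(0.0, T, n + 1)
dt = t[1] - t[0]
A1 = 1.5 + np.cos(t)                                  # T-periodic A_1(t)
R1 = ((A1[:-1] + A1[1:]) / 2).sum() * dt / T          # trapezoid mean
U1 = np.concatenate(([0.0],
                     np.cumsum((A1[:-1] + A1[1:]) / 2 * dt))) - R1 * t
assert np.isclose(R1, 1.5)              # mean of 1.5 + cos(t) is 1.5
assert abs(U1[-1] - U1[0]) < 1e-8       # U_1(T) = U_1(0): periodicity
```

Here $U_1(t) = \sin t$ analytically, and the periodicity check is exactly the condition that fixes $R_1$.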
In the general case, the computation of $U_j(t)$ and $R_j$ is not so easy: when $R_0$ is neither the null matrix nor the product of the identity matrix by a number, the mean value $R_0[U_j]_\nabla - [U_j]_\nabla R_0$ need not vanish. For this reason, we shall use another method [28] that facilitates the computation.
Now, instead of the matrizant, we shall determine some arbitrary fundamental matrix $\tilde Y(t,\varepsilon)$ of
\[
\frac{dy}{dt} = \big(R_0 + \varepsilon B_1(t) + \varepsilon^2 B_2(t) + \cdots\big)y \qquad (3.43)
\]
in the form
\[
\tilde Y(t,\varepsilon) = \tilde U(t,\varepsilon)e^{t\tilde R(\varepsilon)}, \qquad (3.44)
\]
related to the matrizant by
\[
\tilde U(t,\varepsilon) = U(t,\varepsilon)\tilde Y(0,\varepsilon); \qquad \tilde R(\varepsilon) = \tilde Y(0,\varepsilon)^{-1}R(\varepsilon)\tilde Y(0,\varepsilon), \qquad (3.45)
\]
with
\[
\tilde U(t,\varepsilon) = I_n + \varepsilon\tilde U_1 + \varepsilon^2\tilde U_2 + \varepsilon^3\tilde U_3 + \cdots, \qquad
\tilde R(\varepsilon) = \tilde R_0 + \varepsilon\tilde R_1 + \varepsilon^2\tilde R_2 + \varepsilon^3\tilde R_3 + \cdots,
\]
and
\[
\tilde Y(0,\varepsilon) = I_n + \varepsilon\tilde V_1 + \varepsilon^2\tilde V_2 + \varepsilon^3\tilde V_3 + \cdots.
\]
The coefficients $\tilde V_j$ will be chosen in such a way that the computations involved in the determination of the coefficients $\tilde U_j$ and $\tilde R_j$ are simplified. Substituting (3.44) into equation (3.43) and identifying as in the first method, we obtain
\[
\begin{aligned}
\tilde R_0 &= R_0,\\
\frac{d\tilde U_1}{dt} &= R_0\tilde U_1 - \tilde U_1R_0 + B_1 - \tilde R_1,\\
\frac{d\tilde U_2}{dt} &= R_0\tilde U_2 - \tilde U_2R_0 + B_1\tilde U_1 + B_2 - \tilde U_1\tilde R_1 - \tilde R_2,\\
\frac{d\tilde U_3}{dt} &= R_0\tilde U_3 - \tilde U_3R_0 + B_1\tilde U_2 + B_2\tilde U_1 + B_3 - \tilde U_1\tilde R_2 - \tilde U_2\tilde R_1 - \tilde R_3,\\
\frac{d\tilde U_4}{dt} &= R_0\tilde U_4 - \tilde U_4R_0 + B_1\tilde U_3 + B_2\tilde U_2 + B_3\tilde U_1 + B_4 - \tilde U_1\tilde R_3 - \tilde U_2\tilde R_2 - \tilde U_3\tilde R_1 - \tilde R_4, \qquad (3.46)\\
&\;\;\vdots\\
\frac{d\tilde U_j}{dt} &= R_0\tilde U_j - \tilde U_jR_0 + [B_1\tilde U_{j-1}+\cdots+B_{j-1}\tilde U_1] - [\tilde U_1\tilde R_{j-1}+\cdots+\tilde U_{j-1}\tilde R_1] + B_j - \tilde R_j,\\
&\;\;\vdots
\end{aligned}
\]
Proposition 14. If we suppose $Z^{(0)} = [\tilde U_j(t)]_\nabla = 0$, the matrix $\tilde V_j$ can be chosen equal to $\tilde U_j(0)$.
Consequently, equation (3.50) gives $L = \Phi^{(0)}$ since $R_0Z^{(0)} - Z^{(0)}R_0 = 0$. Now let us find a solution of equation (3.49) in the form
Integrating the j-th equation of system (3.46) over the period and dividing by T, we obtain
\[
\frac{1}{T}\big[\tilde U_j(T)-\tilde U_j(0)\big] = R_0[\tilde U_j]_\nabla - [\tilde U_j]_\nabla R_0 + [B_1\tilde U_{j-1}+\cdots+B_{j-1}\tilde U_1]_\nabla - [\tilde U_1\tilde R_{j-1}+\cdots+\tilde U_{j-1}\tilde R_1]_\nabla + [B_j]_\nabla - \tilde R_j.
\]
The conditions $[\tilde U_j(t)]_\nabla = 0$ and $\tilde U_j(T) = \tilde U_j(0)$ imply that
\[
\tilde U_1(t) \approx \sum_{m\neq 0} e^{im\frac{2\pi}{T}t}K_m(B_1^{(m)}). \qquad (3.53)
\]
For j = 2,
\[
\frac{d\tilde U_2}{dt} = R_0\tilde U_2 - \tilde U_2R_0 + B_1\tilde U_1 + B_2 - \tilde U_1\tilde R_1 - \tilde R_2,
\]
which gives
\[
\tilde R_2 = [B_2]_\nabla + \sum_{m\neq 0}\bar B_1^{(-m)}K_m(B_1^{(m)}) \qquad (3.54)
\]
and
\[
\Phi^{(m)} = B_2^{(m)} + [B_1\tilde U_1]^{(m)} - [\tilde U_1\tilde R_1]^{(m)}
= B_2^{(m)} - K_m(B_1^{(m)})\tilde R_1 + \sum_{m=k+h}B_1^{(h)}K_k(B_1^{(k)}), \qquad (3.55)
\]
where $B_1^{(h)}$ and $K_k(B_1^{(k)})$ denote the h-th and k-th Fourier coefficients of $B_1(t)$ and $\tilde U_1(t)$ respectively (with h, k not simultaneously null), so that $\sum_{m=k+h}B_1^{(h)}K_k(B_1^{(k)})$ denotes the (h+k)-th Fourier coefficient of $B_1(t)\tilde U_1(t)$. Consequently equation (3.51) leads to
\[
\tilde U_2(t) = \sum_{m\neq 0}e^{im\frac{2\pi}{T}t}K_m(\Phi^{(m)}). \qquad (3.56)
\]
For j = 3,
\[
\frac{d\tilde U_3}{dt} = R_0\tilde U_3 - \tilde U_3R_0 + B_1\tilde U_2 + B_2\tilde U_1 + B_3 - \tilde U_1\tilde R_2 - \tilde U_2\tilde R_1 - \tilde R_3
\]
\[
\implies 0 = [B_3]_\nabla + [B_1\tilde U_2]_\nabla + [B_2\tilde U_1]_\nabla - \tilde R_3
\implies \tilde R_3 = [B_3]_\nabla + [B_2\tilde U_1]_\nabla + [B_1\tilde U_2]_\nabla,
\]
that is,
\[
\tilde R_3 = [B_3]_\nabla + \sum_{m\neq 0}B_2^{(-m)}K_m(B_1^{(m)}) + \sum_{m\neq 0}B_1^{(-m)}K_m(\Phi^{(m)}), \qquad (3.57)
\]
and
\[
\tilde U_3(t) \approx \sum_{m\neq 0}e^{im\frac{2\pi}{T}t}K_m(\Phi_\bullet^{(m)}),
\]
where $\Phi_\bullet^{(m)}$ denotes the m-th Fourier coefficient of the matrix defined by
\[
\Phi_\bullet(t) = B_1\tilde U_2 + B_2\tilde U_1 + B_3 - \tilde U_1\tilde R_2 - \tilde U_2\tilde R_1,
\]
that is,
\[
\Phi_\bullet^{(m)} = B_3^{(m)} - K_m(B_1^{(m)})\tilde R_2 - K_m(\Phi^{(m)})\tilde R_1 + \sum_{i+j=m}B_1^{(i)}K_j(\Phi^{(j)}) + \sum_{h+k=m}B_2^{(h)}K_k(B_1^{(k)}).
\]
Theorem 13. Suppose that the nonsingular case holds for the system of differential equations. Then the system
$$
\frac{dy}{dt} = \big[R_0 + \varepsilon B_1(t) + \varepsilon^2 B_2(t) + \cdots\big]\, y
$$
has a fundamental matrix $\tilde{Y}(t,\varepsilon)$ expressible in the form $\tilde{Y}(t,\varepsilon) = \tilde{U}(t,\varepsilon)\, e^{t\tilde{R}(\varepsilon)}$, where the matrices $\tilde{U}(t+T,\varepsilon) \equiv \tilde{U}(t,\varepsilon)$ and $\tilde{R}(\varepsilon)$ are analytic functions of $\varepsilon$ at $\varepsilon = 0$, and the first coefficients of the expansions are
$$
\tilde{U}_1(t) \approx \sum_{m\neq 0} e^{im\frac{2\pi}{T}t}\, K_m\big(B_1^{(m)}\big),
$$
$$
\tilde{R}_2 = [B_2]_\nabla + \sum_{m\neq 0} \bar{B}_1^{(-m)} K_m\big(B_1^{(m)}\big),
$$
$$
\Phi^{(m)} = B_2^{(m)} - K_m\big(B_1^{(m)}\big)\tilde{R}_1 + \sum_{m=k+h} B_1^{(h)} K_k\big(B_1^{(k)}\big),
$$
$$
\tilde{U}_2(t) \approx \sum_{m\neq 0} e^{im\frac{2\pi}{T}t}\, K_m\big(\Phi^{(m)}\big),
$$
$$
\tilde{R}_3 = [B_3]_\nabla + \sum_{m\neq 0} B_2^{(-m)} K_m\big(B_1^{(m)}\big) + \sum_{m\neq 0} B_1^{(-m)} K_m\big(\Phi^{(m)}\big),
$$
or, for real coefficient matrices,
$$
\tilde{R}_3 = [B_3]_\nabla + 2\,\mathrm{Re}\sum_{m=1}^{\infty}\Big[B_2^{(-m)} K_m\big(B_1^{(m)}\big) + B_1^{(-m)} K_m\big(\Phi^{(m)}\big)\Big].
$$
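The second-order coefficient $\tilde{R}_2$ can be assembled numerically from the Fourier coefficients of $B_1$. The sketch below assumes a $2\times 2$ system with $R_0 = C = \begin{pmatrix}0&1\\-c^2&0\end{pmatrix}$ and $B_2 = 0$, and obtains each $K_m$ by solving the defining equation $im\frac{2\pi}{T}K_m - (R_0K_m - K_mR_0) = \Phi^{(m)}$ as a $4\times 4$ linear system:

```python
import numpy as np

# Sketch (assumptions: 2x2 system, R0 = [[0,1],[-c^2,0]], B2 = 0): assemble
# R2 = sum_{m != 0} conj(B1^(-m)) K_m(B1^(m)) from Theorem 13, where K_m solves
# i*m*(2*pi/T)*K - (R0 K - K R0) = Phi, written as a 4x4 linear system.
def solve_Km(Phi, m, c, T):
    R0 = np.array([[0.0, 1.0], [-c**2, 0.0]])
    I2 = np.eye(2)
    # column-major vec: vec(R0 K - K R0) = (kron(I, R0) - kron(R0^T, I)) vec(K)
    L = 1j * m * (2*np.pi/T) * np.eye(4) - (np.kron(I2, R0) - np.kron(R0.T, I2))
    k = np.linalg.solve(L, Phi.reshape(4, order='F'))
    return k.reshape(2, 2, order='F')

def R2_tilde(B1m, c, T):
    # B1m: dict {m: 2x2 complex Fourier coefficient of B1(t)}
    return sum(np.conj(B1m[-m]) @ solve_Km(B1m[m], m, c, T) for m in B1m if m != 0)

# Hill/Mathieu case f(t) = sin 2t with T = pi:
# B1^(1) = [[0,0],[-1/(2i),0]], B1^(-1) = [[0,0],[1/(2i),0]]
B1m = {1: np.array([[0, 0], [-1/2j, 0]]), -1: np.array([[0, 0], [1/2j, 0]])}
R2 = R2_tilde(B1m, c=np.sqrt(5.0), T=np.pi)
```

For $c^2 = 5$ this reproduces the value $\tilde{R}_2 = \frac{1}{8(c^2-1)}\begin{pmatrix}0&0\\-1&0\end{pmatrix}$ obtained analytically in the Hill example below.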
The matrices $Q$ and $G$ are of the size of the system, with $Q$ the unknown of the equation. The matrix $G$ is always known, and its components change according to the step: the values of $G$ at steps 1, 2 and 3 are respectively $B^{(m)}$, $\Phi^{(m)}$ and $\Phi_\bullet^{(m)}$.
Example 4. Let us now consider the Hill equation in the general form
$$
\frac{d^2 y}{dt^2} = -\big(c^2 + \varepsilon f(t)\big)\, y,
\tag{3.58}
$$
where $f(t)$ is an integrable piecewise-continuous $T$-periodic function and $c$ a real constant. The study of the perturbation of this equation depends on the value of $c$. If $c$ is zero, then equation (3.58) (involving a parameter) may be written as follows:
$$
\frac{dx}{dt} = \varepsilon D(t)\, x,
\tag{3.59}
$$
where $x = \begin{pmatrix} y \\ \dot{y} \end{pmatrix}$ and $D(t) = \begin{pmatrix} 0 & 1 \\ -f(t) & 0 \end{pmatrix}$.
This equation has the form of equation (3.36), but with $C = 0$, $B_1(t) = D(t)$ and $B_2(t) = \cdots = 0$. This value of $C$ implies that $R_0$ also vanishes, and so does $R_0 U_j - U_j R_0$; thus the computation of $U_j$ and $R_j$ is easy using the first method.
If $c^2$ is either of the type $\frac{\pi^2}{T^2}p^2$ (i.e. $c^2 = p^2$) or $c^2 \neq p^2$ (with $p \in \mathbb{Z}$), the problem must be treated differently. Throughout the following example we deal with the case $c^2 \neq p^2$ (with $p \in \mathbb{Z}$), and we come back to the other cases in the next section (numerical examples) for the details. In vector form, equation (3.58) becomes $\frac{dx}{dt} = (C + \varepsilon D(t))x$ with $C = \begin{pmatrix} 0 & 1 \\ -c^2 & 0 \end{pmatrix}$. The characteristic polynomial of the matrix $C$ is
$$
P(\lambda) = \lambda^2 + c^2,
$$
and its eigenvalues are $\lambda_1 = -ci$ and $\lambda_2 = ci$. The assumption $c^2 \neq p^2$ (with $p \in \mathbb{Z}$) guarantees the condition $\lambda_1 - \lambda_2 \neq i\frac{2\pi}{T}p$ (where $p$ is an integer), so the nonsingular case holds. Here the matrix $R_0$ is not null, and consequently $R_0 U_j - U_j R_0 \neq 0$; we shall evaluate the matrices $U_j$ and $R_j$ using the second method. In accordance with system (3.46) we get
$$
\tilde{R}_0 = C.
$$
At present, we must find the expression of the matrix $K_m(\Phi^{(m)})$. As we are in the case of matrices of size two, letting
$$
\Phi^{(m)} = \begin{pmatrix} b_{11} & b_{12} \\ b_{21} & b_{22} \end{pmatrix}
\quad\text{and}\quad
K_m(\Phi^{(m)}) = \begin{pmatrix} k_{11} & k_{12} \\ k_{21} & k_{22} \end{pmatrix},
$$
equation (3.28) becomes
$$
\begin{cases}
im\frac{2\pi}{T}k_{11} - k_{21} - c^2 k_{12} - b_{11} = 0\\[2pt]
im\frac{2\pi}{T}k_{12} - k_{22} + k_{11} - b_{12} = 0\\[2pt]
im\frac{2\pi}{T}k_{21} + c^2 k_{11} - c^2 k_{22} - b_{21} = 0\\[2pt]
im\frac{2\pi}{T}k_{22} + c^2 k_{12} + k_{21} - b_{22} = 0
\end{cases}
\tag{3.62}
$$
whose solution is
$$
k_{11}(\Phi) = \frac{1}{4\big(c^2 - m^2\frac{\pi^2}{T^2}\big)}\left[\frac{Ti}{m\pi}\Big(2m^2\frac{\pi^2}{T^2} - c^2\Big)b_{11} + c^2 b_{12} + b_{21} - \frac{Ti}{m\pi}c^2 b_{22}\right],
$$
$$
k_{12}(\Phi) = \frac{1}{4\big(c^2 - m^2\frac{\pi^2}{T^2}\big)}\left[\frac{Ti}{m\pi}\Big(2m^2\frac{\pi^2}{T^2} - c^2\Big)b_{12} + b_{22} - b_{11} + \frac{Ti}{m\pi}b_{21}\right],
$$
$$
k_{21}(\Phi) = \frac{1}{4\big(c^2 - m^2\frac{\pi^2}{T^2}\big)}\left[\frac{Ti}{m\pi}\Big(2m^2\frac{\pi^2}{T^2} - c^2\Big)b_{21} + c^2 b_{22} - c^2 b_{11} + \frac{Ti}{m\pi}c^4 b_{12}\right],
$$
$$
k_{22}(\Phi) = \frac{1}{4\big(c^2 - m^2\frac{\pi^2}{T^2}\big)}\left[\frac{Ti}{m\pi}\Big(2m^2\frac{\pi^2}{T^2} - c^2\Big)b_{22} - c^2 b_{12} - b_{21} - \frac{Ti}{m\pi}c^2 b_{11}\right].
\tag{3.63}
$$
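System (3.62) can also be solved numerically rather than by hand. A small sketch using a Kronecker-product formulation (the function and variable names are illustrative):

```python
import numpy as np

# Sketch: solve system (3.62), i.e. i*m*(2*pi/T)*K - (C K - K C) = Phi with
# C = [[0,1],[-c^2,0]], by rewriting it as a 4x4 linear system in vec(K).
def K_m(Phi, m, c, T):
    C = np.array([[0.0, 1.0], [-c**2, 0.0]])
    I2 = np.eye(2)
    # column-major vec: vec(C K - K C) = (kron(I, C) - kron(C^T, I)) vec(K)
    L = 1j * m * (2*np.pi/T) * np.eye(4) - (np.kron(I2, C) - np.kron(C.T, I2))
    return np.linalg.solve(L, Phi.reshape(4, order='F')).reshape(2, 2, order='F')

# Example: Phi = [[0,0],[-1/(2i),0]] with T = pi and c^2 = 5. The result matches
# the closed form (1/(8(c^2-1))) [[i, -1], [c^2-2, -i]] used later in (4.16).
K1 = K_m(np.array([[0, 0], [-1/2j, 0]]), m=1, c=np.sqrt(5.0), T=np.pi)
```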
For $j = 2$,
$$
\Phi^{(m)} = B_1^{(m)} = \begin{pmatrix} 0 & 0 \\ -f^{(m)} & 0 \end{pmatrix},
$$
where $f^{(m)}$ denotes the $m$th Fourier coefficient of the function $f$, defined by
$$
f^{(m)} = \frac{1}{T}\int_0^T f(t)\, e^{-im\frac{2\pi}{T}t}\, dt;
$$
if $f$ is real,
$$
f^{(-m)} = \frac{1}{T}\int_0^T f(t)\, e^{im\frac{2\pi}{T}t}\, dt = \bar{f}^{(m)}.
$$
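These coefficients can be approximated by the rectangle rule on a uniform grid, which is spectrally accurate for smooth periodic integrands. A sketch (the grid size is an arbitrary choice):

```python
import numpy as np

# Sketch: f^(m) = (1/T) * integral_0^T f(t) exp(-i m (2*pi/T) t) dt, approximated
# by the rectangle (mean-value) rule on a uniform grid -- exact to machine
# precision for trigonometric polynomials such as sin 2t.
def fourier_coeff(f, m, T, n=4096):
    t = np.arange(n) * (T / n)
    return np.mean(f(t) * np.exp(-1j * m * (2*np.pi/T) * t))

fm = fourier_coeff(lambda t: np.sin(2*t), m=1, T=np.pi)   # expected: 1/(2i) = -i/2
```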
Thus we have
$$
K_m\big(B_1^{(m)}\big) = \frac{f^{(m)}}{4\big(c^2 - m^2\frac{\pi^2}{T^2}\big)}
\begin{pmatrix} -1 & -\frac{Ti}{m\pi} \\ -\frac{Ti}{m\pi}\Big(2m^2\frac{\pi^2}{T^2} - c^2\Big) & 1 \end{pmatrix},
$$
$$
K_{-m}\big(B_1^{(-m)}\big) = \frac{f^{(-m)}}{4\big(c^2 - m^2\frac{\pi^2}{T^2}\big)}
\begin{pmatrix} -1 & \frac{Ti}{m\pi} \\ \frac{Ti}{m\pi}\Big(2m^2\frac{\pi^2}{T^2} - c^2\Big) & 1 \end{pmatrix}
= \overline{K_m\big(B_1^{(m)}\big)},
$$
the last equality holding because $f$ is real. Consequently,
$$
\bar{B}_1^{(-m)} K_m\big(B_1^{(m)}\big) = \frac{\big(f^{(m)}\big)^2}{4\big(c^2 - m^2\frac{\pi^2}{T^2}\big)}
\begin{pmatrix} 0 & 0 \\ 1 & \frac{Ti}{m\pi} \end{pmatrix},
\qquad
\bar{B}_1^{(m)} K_{-m}\big(B_1^{(-m)}\big) = \frac{\big(f^{(-m)}\big)^2}{4\big(c^2 - m^2\frac{\pi^2}{T^2}\big)}
\begin{pmatrix} 0 & 0 \\ 1 & -\frac{Ti}{m\pi} \end{pmatrix},
$$
since $B_2 = 0$ and $B^{(0)} = 0$. The computation of $\tilde{U}_j$ and $\tilde{R}_j$ could be continued, but since we are interested only in questions of stability, we halt at this step after determining $\tilde{U}_2$ and $\tilde{R}_2$. The approximations of $\tilde{R}(\varepsilon)$ and $\tilde{X}(t,\varepsilon)$ at this step then give:
$$
\tilde{R}(\varepsilon) \approx R_0 + \varepsilon^2 \tilde{R}_2,
\qquad
\tilde{X}(t,\varepsilon) \approx \big(I + \varepsilon\tilde{U}_1 + \varepsilon^2\tilde{U}_2\big)\, e^{t\tilde{R}(\varepsilon)}.
$$
Numerical examples
Let us now consider the Mathieu equation with a small real parameter $\varepsilon$:
$$
\frac{d^2 y}{dt^2} = -\big(c^2 + \varepsilon \sin 2t\big)\, y.
\tag{4.1}
$$
Here we first assume that $c^2$ is of the type $\frac{\pi^2}{T^2}p^2$, that is to say $c = \frac{\pi}{T}p$ (with $p$ even and $T$ the period). We rewrite equation (4.1) as the vector equation
$$
\frac{dx}{dt} = \big(C + \varepsilon D(t)\big)\, x,
\tag{4.2}
$$
where $x = \begin{pmatrix} y \\ \dot{y} \end{pmatrix}$, $C = \begin{pmatrix} 0 & 1 \\ -c^2 & 0 \end{pmatrix}$ and $D(t) = \begin{pmatrix} 0 & 0 \\ -\sin 2t & 0 \end{pmatrix}$.
The characteristic polynomial of the matrix $C$ is
$$
p(\lambda) = \lambda^2 + c^2,
$$
whose roots, which are naturally the eigenvalues of $C$, are $\lambda_1 = -ic$ and $\lambda_2 = ic$. As already announced at the beginning of this section, the function $\sin 2t$ is an integrable piecewise-continuous $\pi$-periodic function. The eigenvalues of $C$ then take the form $\lambda_1 = -ip$ and $\lambda_2 = ip$; they are congruent modulo $i\frac{2\pi}{T} = 2i$ but different ($\lambda_1 \neq \lambda_2$). Consequently, the singular case holds, and we now try to reduce it to the nonsingular case.
The Jordan form of the matrix $C$ is defined by
$$
S^{-1} C S = \begin{pmatrix} -ip & 0 \\ 0 & ip \end{pmatrix},
\quad\text{where}\quad
S = \begin{pmatrix} 1 & 1 \\ -ip & ip \end{pmatrix}
\quad\text{and}\quad
S^{-1} = \frac{1}{2}\begin{pmatrix} 1 & \frac{i}{p} \\ 1 & -\frac{i}{p} \end{pmatrix}.
$$
As the real parts of $\lambda_1$ and $\lambda_2$ are zero, their transformation to the form $\lambda_1 = \alpha + i\frac{2\pi}{T}m_1$ and $\lambda_2 = \alpha + i\frac{2\pi}{T}m_2$ imposes that $\alpha$, which would be the eigenvalue of the matrix $R_0 = C - C_0$, is zero. We now define the matrix $C_0$ by
$$
S^{-1} C_0 S = \begin{pmatrix} i2m_1 & 0 \\ 0 & i2m_2 \end{pmatrix},
$$
so that
$$
e^{tC_0} = S \begin{pmatrix} e^{-ipt} & 0 \\ 0 & e^{ipt} \end{pmatrix} S^{-1}
= \begin{pmatrix} \cos pt & \frac{1}{p}\sin pt \\ -p\sin pt & \cos pt \end{pmatrix},
\qquad
e^{-tC_0} = \begin{pmatrix} \cos pt & -\frac{1}{p}\sin pt \\ p\sin pt & \cos pt \end{pmatrix}.
\tag{4.4}
$$
Before applying equation (4.4), we must compute $B(t)$; we find that
$$
B(t) = e^{-tC_0} D(t)\, e^{tC_0}
= \begin{pmatrix}
\frac{1}{2p}\sin 2pt\,\sin 2t & \frac{1}{p^2}\sin 2t\,\sin^2 pt \\
-\sin 2t\,\cos^2 pt & -\frac{1}{2p}\sin 2t\,\sin 2pt
\end{pmatrix}.
\tag{4.5}
$$
We now compute the coefficients $U_j$ and $R_j$ as in the above example. Here the matrix $R_0 = 0$, so the computations become simpler.
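Both (4.4) and (4.5) can be spot-checked numerically. The sketch below computes the matrix exponential by eigendecomposition (valid here since $C_0$ is diagonalizable) and assumes $p = 2$:

```python
import numpy as np

# Sketch (p = 2 assumed): check e^{tC0} in (4.4) and B(t) = e^{-tC0} D(t) e^{tC0} in (4.5).
p = 2
C0 = np.array([[0.0, 1.0], [-p**2, 0.0]])

def expm(A):
    # matrix exponential via eigendecomposition (C0 is diagonalizable)
    d, V = np.linalg.eig(A.astype(complex))
    return V @ np.diag(np.exp(d)) @ np.linalg.inv(V)

def etC0(t):   # closed form (4.4)
    return np.array([[np.cos(p*t), np.sin(p*t)/p],
                     [-p*np.sin(p*t), np.cos(p*t)]])

def B(t):      # closed form (4.5)
    return np.array([[np.sin(2*p*t)*np.sin(2*t)/(2*p),  np.sin(2*t)*np.sin(p*t)**2/p**2],
                     [-np.sin(2*t)*np.cos(p*t)**2,      -np.sin(2*t)*np.sin(2*p*t)/(2*p)]])

t = 0.7
D = np.array([[0.0, 0.0], [-np.sin(2*t), 0.0]])
lhs = expm(-t*C0) @ D @ expm(t*C0)   # should agree with B(t)
```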
Letting
$$
R_1 = [B(t)]_\nabla \equiv (e_{ij})_{i,j=1}^2,
\qquad
U_1(t) = \int_0^t \big[B(s) - R_1\big]\, ds \equiv (f_{ij})_{i,j=1}^2,
$$
we obtain
$$
e_{11} = \frac{1}{\pi}\int_0^\pi \frac{1}{2p}\sin 2pt\,\sin 2t\, dt = 0,
\qquad
e_{12} = \frac{1}{\pi}\int_0^\pi \frac{1}{p^2}\sin 2t\,\sin^2 pt\, dt = 0,
$$
$$
e_{21} = \frac{1}{\pi}\int_0^\pi \big(-\sin 2t\,\cos^2 pt\big)\, dt = 0,
\qquad
e_{22} = \frac{1}{\pi}\int_0^\pi \Big(-\frac{1}{2p}\sin 2t\,\sin 2pt\Big)\, dt = 0,
$$
and
$$
f_{11} = \int_0^t \frac{1}{2p}\sin 2ps\,\sin 2s\, ds
= \frac{\sin 2pt\,\cos 2t - p\cos 2pt\,\sin 2t}{4p(p^2-1)},
$$
$$
f_{12} = \int_0^t \frac{1}{p^2}\sin 2s\,\sin^2 ps\, ds
= \frac{1}{4(p^2-1)} - \frac{\cos 2t}{4p^2} - \frac{\cos 2pt\,\cos 2t}{4p^2(p^2-1)} - \frac{\sin 2pt\,\sin 2t}{4p(p^2-1)},
$$
$$
f_{21} = -\int_0^t \sin 2s\,\cos^2 ps\, ds
= \frac{2-p^2}{4(p^2-1)} - \frac{\cos 2pt\,\cos 2t}{4(p^2-1)} - \frac{p\sin 2pt\,\sin 2t}{4(p^2-1)} + \frac{\cos 2t}{4},
$$
$$
f_{22} = -\int_0^t \frac{1}{2p}\sin 2ps\,\sin 2s\, ds
= -f_{11} = -\frac{\sin 2pt\,\cos 2t - p\cos 2pt\,\sin 2t}{4p(p^2-1)}.
$$
Organizing these results, we get
$$
R_1 = \begin{pmatrix} 0 & 0 \\ 0 & 0 \end{pmatrix},
\tag{4.6}
$$
$$
U_1(t) = \begin{pmatrix} f_{11} & f_{12} \\ f_{21} & f_{22} \end{pmatrix}.
$$
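A quick trapezoidal check of the antiderivative $f_{11}$ and of the vanishing mean $e_{11}$ (a sketch with $p = 2$; the grid size is an arbitrary choice):

```python
import numpy as np

# Sketch (p = 2 assumed): verify that f11 is the antiderivative of (1/(2p)) sin 2pt sin 2t
# and that its mean e11 over one period vanishes, using a fine trapezoidal rule.
p = 2

def integrand(t):
    return np.sin(2*p*t) * np.sin(2*t) / (2*p)

def f11(t):
    return (np.sin(2*p*t)*np.cos(2*t) - p*np.cos(2*p*t)*np.sin(2*t)) / (4*p*(p**2 - 1))

def trapezoid(g, a, b, n=200001):
    t = np.linspace(a, b, n)
    y = g(t)
    return (b - a) / (n - 1) * (y[0]/2 + y[1:-1].sum() + y[-1]/2)

val = trapezoid(integrand, 0.0, 0.9)             # should equal f11(0.9)
e11 = trapezoid(integrand, 0.0, np.pi) / np.pi   # mean over one period, should be 0
```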
Letting $R_2 = [B(t)U_1]_\nabla \equiv (g_{ij})_{i,j=1}^2$ and $U_2(t) = \int_0^t \big[B(s)U_1(s) - R_2\big]\, ds \equiv (h_{ij})_{i,j=1}^2$, we have
$$
B(t)U_1 = \begin{pmatrix}
\frac{1}{2p}\sin 2pt\,\sin 2t & \frac{1}{p^2}\sin 2t\,\sin^2 pt \\
-\sin 2t\,\cos^2 pt & -\frac{1}{2p}\sin 2t\,\sin 2pt
\end{pmatrix}
\begin{pmatrix} f_{11} & f_{12} \\ f_{21} & f_{22} \end{pmatrix},
$$
so that
$$
g_{11} = \frac{1}{\pi}\int_0^\pi \Big(\frac{f_{11}}{2p}\sin 2pt\,\sin 2t + \frac{f_{21}}{p^2}\sin 2t\,\sin^2 pt\Big)\, dt = 0,
$$
$$
g_{12} = \frac{1}{\pi}\int_0^\pi \Big(\frac{f_{12}}{2p}\sin 2pt\,\sin 2t + \frac{f_{22}}{p^2}\sin 2t\,\sin^2 pt\Big)\, dt
= -\frac{p^4 - 5p^2 - 4}{16p^2(p^6 - 6p^4 + 9p^2 - 4)},
$$
$$
g_{21} = \frac{1}{\pi}\int_0^\pi \Big(-f_{11}\sin 2t\,\cos^2 pt - \frac{f_{21}}{2p}\sin 2pt\,\sin 2t\Big)\, dt
= -\frac{5p^3 - p^5 - 4p}{16(p^6 - 6p^4 + 9p^2 - 4)},
$$
$$
g_{22} = \frac{1}{\pi}\int_0^\pi \Big(-f_{12}\sin 2t\,\cos^2 pt - \frac{f_{22}}{2p}\sin 2pt\,\sin 2t\Big)\, dt = 0.
$$
The entries of $U_2$ are then
$$
h_{11} = \int_0^t \Big(\frac{f_{11}}{2p}\sin 2ps\,\sin 2s + \frac{f_{21}}{p^2}\sin 2s\,\sin^2 ps\Big)\, ds
$$
$$
= -\frac{1-\cos 4pt}{128p^2(p^2-1)^2} + \frac{1-\cos 4t}{128p^2(p^2-1)^2} + \frac{1-\cos 2t}{16(p^2-1)} - \frac{1-\cos(4p-4)t}{256p^2(p-1)^2} + \frac{2(1-\cos 4t)}{128p^3}
$$
$$
+ \frac{1-\cos(4p-2)t}{16(p^2-1)(4p-2)} + \frac{1-\cos(2p-4)t}{32p^2(2p+2)(2p-4)} - \frac{1-\cos(4p+2)t}{16(p^2-1)(4p+2)} + \frac{1-\cos 6t}{192p^3(2p-2)}
$$
$$
- \frac{1-\cos(2p+4)t}{16p^2(2p+2)(2p+4)} + \frac{1-\cos 2t}{16p^3(2p+1)} - \frac{1-\cos 6t}{192p^3(2p-2)} + \frac{1-\cos(6p-4)t}{32p^2(2p-2)(6p-4)},
$$
$$
h_{12} = \int_0^t \Big(\frac{f_{12}}{2p}\sin 2ps\,\sin 2s + \frac{f_{22}}{p^2}\sin 2s\,\sin^2 ps + \frac{p^4-5p^2-4}{16p^2(p^6-6p^4+9p^2-4)}\Big)\, ds
$$
$$
= \frac{\sin(2p-2)t}{16p(p^2-1)(2p-2)} - \frac{1-\cos 6t}{48p^3(2p+2)} + \frac{\sin(2p+2)t}{16p(2p+2)(p^2-1)} - \frac{4\sin 4pt + \sin 4t}{256p^3(2p+2)(p^2-1)},
$$
$$
h_{21} = \int_0^t \Big(-f_{11}\sin 2s\,\cos^2 ps - \frac{f_{21}}{2p}\sin 2ps\,\sin 2s + \frac{5p^3-p^5-4p}{16(p^6-6p^4+9p^2-4)}\Big)\, ds
$$
$$
= -\frac{p\sin(2p-2)t}{8(p^2-1)(2p-2)} + \frac{p\big(1-\cos(2p+2)t\big)}{8(p^2-1)(2p+2)} - \frac{\sin 4t}{16p^2(2p+2)} + \frac{\sin 4pt}{32p(2p+2)}
$$
$$
+ \frac{\sin(4p+4)t}{16p(2p+2)(4p+4)} + \frac{\sin(2p-4)t}{16p(4p-4)(2p-2)} - \frac{\sin 4pt}{64p^2(2p-2)} + \frac{\sin(2p-2)t}{4p(2p-2)}
$$
$$
- \frac{\sin 4t}{32p(2p-2)} - \frac{\sin(2p+2)t}{4p(2p+2)} - \frac{\sin(2p-2)t}{16p(2p-4)} + \frac{\sin 2t}{64p^4(2p+2)} - \frac{\sin(2p+4)t}{8p(2p-2)(2p+4)}
$$
$$
+ \frac{\sin(6p+4)t}{32p^3(2p+2)(6p+4)} + \frac{\sin(2p+4)t}{16p(2p+4)} + \frac{\sin 6pt}{48p(2p-2)} + \frac{\sin(2p-4)t}{8p(2p-2)(2p-4)}
$$
$$
- \frac{\sin 2pt}{16p^2(2p-2)} - \frac{\sin 2pt}{16p^2(2p+2)} + \frac{\sin(2p+4)t}{8p(2p+2)(2p+4)} - \frac{p\sin(2p-4)t}{8p(2p+2)(2p-4)}
$$
$$
+ \frac{1-\cos 2pt}{16p(2p-2)} + \frac{\sin 6t}{48p^2(2p+2)} - \frac{1-\cos(6p-4)t}{8p(2p-2)(6p-4)} - \frac{\sin(8p+4)t}{16p(2p+2)(8p+4)}
+ \frac{5p^3-p^5-4p}{16(p^6-6p^4+9p^2-4)}\, t,
$$
$$
h_{22} = \int_0^t \Big(-f_{12}\sin 2s\,\cos^2 ps - \frac{f_{22}}{2p}\sin 2ps\,\sin 2s\Big)\, ds
$$
$$
= \frac{1-\cos(4p-2)t}{4(p^2-1)(4p-2)} + \frac{1-\cos 6t}{48p^3(2p+2)} + \frac{1-\cos(2p-4)t}{8p^2(2p+2)(2p-4)} - \frac{1-\cos(6p-4)t}{8p^2(2p-2)(6p-4)}
$$
$$
- \frac{1-\cos 2t}{16p^3(2p-2)} - \frac{1-\cos(4p-4)t}{8p^2(4p-4)} - \frac{1-\cos 4pt}{32p^3} - \frac{1-\cos(4p+2)t}{4(p^2-1)(4p+2)} + \frac{1-\cos 4pt}{32p^2(2p-2)}.
$$
Since we are interested only in questions of stability, we halt at this step after determining $U_2$ and $R_2$. The approximations of $R(\varepsilon)$ and $U(t,\varepsilon)$ at this step then give:
$$
R(\varepsilon) \approx \varepsilon^2 R_2.
\tag{4.8}
$$
Recapitulating the $U_j$ and $R_j$, we finally find that
$$
R(\varepsilon) = \begin{pmatrix}
0 & -\varepsilon^2\,\dfrac{p^4-5p^2-4}{16p^2(p^6-6p^4+9p^2-4)} \\[6pt]
-\varepsilon^2\,\dfrac{5p^3-p^5-4p}{16(p^6-6p^4+9p^2-4)} & 0
\end{pmatrix},
\tag{4.9}
$$
$$
X(T,\varepsilon) = W(\pi,\varepsilon)
= \begin{pmatrix}
1 & -\dfrac{4p\varepsilon^2}{16(p^2-1)} + \dfrac{\varepsilon^2(p^4-5p^2-4)\pi}{16(p^6-6p^4+9p^2-4)} \\[6pt]
\dfrac{4\varepsilon^2}{16(p^2-1)} + \dfrac{\varepsilon^2(-p^5+5p^3-4p)\pi}{16(p^6-6p^4+9p^2-4)} & 1
\end{pmatrix}
e^{\pi R(\varepsilon)}.
$$
One of the eigenvalues is of the first kind and the other of the second kind; there is therefore no eigenvalue of mixed kind, and consequently the system is strongly stable in accordance with the KGL criterion. We notice also that all eigenvalues are of green color. Therefore, in accordance with the classification proposed by Dosso and Sadkane, the system is strongly stable.
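The monodromy matrix used in these experiments can be recomputed by direct integration of (4.2) over one period. The sketch below uses a classical RK4 scheme (an assumed substitute for the double-orthogonal-sweep method of [5]) and repeats the symplecticity test $\|W^TJW - J\| \approx 0$ from the text:

```python
import numpy as np

# Sketch: monodromy matrix of dx/dt = (C + eps*D(t)) x over T = pi via classical RK4
# (an assumed scheme, not the method of [5]), plus the symplecticity check.
def monodromy(c2, eps, n=20000, T=np.pi):
    def A(t):
        return np.array([[0.0, 1.0], [-(c2 + eps*np.sin(2*t)), 0.0]])
    h = T / n
    X = np.eye(2)
    for i in range(n):
        t = i * h
        k1 = A(t) @ X
        k2 = A(t + h/2) @ (X + h/2*k1)
        k3 = A(t + h/2) @ (X + h/2*k2)
        k4 = A(t + h) @ (X + h*k3)
        X = X + (h/6) * (k1 + 2*k2 + 2*k3 + k4)
    return X

J = np.array([[0.0, 1.0], [-1.0, 0.0]])
W = monodromy(16.0, np.sqrt(np.pi)*1e-3)   # p = 4, eps = sqrt(pi)*1e-3 as in the text
```

Since the system is Hamiltonian ($JA(t)$ is symmetric), the computed $W$ is symplectic up to the integration error.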
• For $p = 4$ and $\varepsilon = \sqrt{\pi}\cdot 10^{-3}$, the computation gives
$$
W(\pi, \sqrt{\pi}\cdot 10^{-3}) = \begin{pmatrix} 0.9999 & -0.0000 \\ 4.7967 & 1.0000 \end{pmatrix}.
$$
One eigenvalue is of the first kind and the other of the second kind; there is thus no eigenvalue of mixed kind, and consequently the system is strongly stable in accordance with the KGL criterion. All eigenvalues do not have the specific color, but the quantity
$$
\tilde{R}_1 = [B(t)]_\nabla = \begin{pmatrix} 0 & 0 \\ 0 & 0 \end{pmatrix},
\tag{4.12}
$$
$$
B^{(1)} = \begin{pmatrix} 0 & 0 \\ -\frac{1}{2i} & 0 \end{pmatrix},
\qquad
B^{(-1)} = \begin{pmatrix} 0 & 0 \\ \frac{1}{2i} & 0 \end{pmatrix}.
\tag{4.14}
$$
The other coefficients are zero, so that the approximation of $B(t)$ is written as
$$
B(t) = \begin{pmatrix} 0 & 0 \\ -\frac{1}{2i} & 0 \end{pmatrix} e^{i2t}
+ \begin{pmatrix} 0 & 0 \\ \frac{1}{2i} & 0 \end{pmatrix} e^{-i2t}.
\tag{4.15}
$$
Recall that $T = \pi$. Inserting $B^{(1)}$ and $B^{(-1)}$ successively into equation (3.62), we obtain the Fourier coefficients of the matrix $\tilde{U}_1(t)$, that is,
$$
\tilde{U}_1^{(1)} = \frac{1}{8(c^2-1)}\begin{pmatrix} i & -1 \\ c^2-2 & -i \end{pmatrix},
\tag{4.16}
$$
$$
\tilde{U}_1^{(-1)} = \frac{1}{8(c^2-1)}\begin{pmatrix} -i & -1 \\ c^2-2 & i \end{pmatrix}.
\tag{4.17}
$$
It follows that
$$
\tilde{R}_2 = \frac{\bar{f}^{(1)} f^{(1)}}{2(c^2-1)}\begin{pmatrix} 0 & 0 \\ -1 & 0 \end{pmatrix}
= \frac{\frac{i}{2}\cdot\frac{1}{2i}}{2(c^2-1)}\begin{pmatrix} 0 & 0 \\ -1 & 0 \end{pmatrix}
= \frac{1}{8(c^2-1)}\begin{pmatrix} 0 & 0 \\ -1 & 0 \end{pmatrix},
\tag{4.19}
$$
and
$$
\tilde{U}_2(t) = 0.
$$
Indeed, $\sum_{h+k=\pm 1} B^{(h)} K_k\big(B^{(k)}\big)$ also gives zero, since $B^{(i)} = 0$ for all $i \neq \pm 1$. Being interested only in questions of stability, we halt at this step after determining $\tilde{U}_2$ and $\tilde{R}_2$. The approximations of $\tilde{R}(\varepsilon)$ and $\tilde{X}(t,\varepsilon)$ at this step then give:
$$
\tilde{R}(\varepsilon) \approx \begin{pmatrix} 0 & 1 \\ -c^2 - \frac{\varepsilon^2}{8(c^2-1)} & 0 \end{pmatrix},
\tag{4.20}
$$
$$
\tilde{X}(t,\varepsilon) = \left[\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}
+ \frac{\varepsilon}{4(c^2-1)}\begin{pmatrix} 0 & -1 \\ c^2-2 & 0 \end{pmatrix}\cos 2t
+ \frac{\varepsilon}{4(c^2-1)}\begin{pmatrix} -1 & 0 \\ 0 & 1 \end{pmatrix}\sin 2t\right] e^{t\tilde{R}(\varepsilon)}.
\tag{4.21}
$$
The value of $\tilde{X}(t,\varepsilon)$ at the end of a period, that is, the monodromy matrix, is
$$
\tilde{X}(T,\varepsilon) = \begin{pmatrix} 1 & -\frac{\varepsilon}{4(c^2-1)} \\ \frac{\varepsilon(c^2-2)}{4(c^2-1)} & 1 \end{pmatrix} e^{T\tilde{R}(\varepsilon)}.
\tag{4.22}
$$
$$
\|W^T J W - J\| = 8.2550\times 10^{-12} \approx 0,
$$
thus $W$ is $J$-symplectic. The eigenvalues and the corresponding quantities $(iJx_k, x_k)$ and $(S_0 x_k, x_k)$ are recorded in the following table.
One of the eigenvalues is of the first kind and the other of the second kind; there is therefore no eigenvalue of mixed kind, and consequently the system is strongly stable in accordance with the KGL criterion. We notice that all eigenvalues are of green color. Therefore, in accordance with the classification proposed by Dosso and Sadkane, the system is strongly stable.
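The kind test used throughout these examples can be sketched as follows (sign convention assumed: an eigenvalue of a symplectic matrix on the unit circle is of the first kind when $(iJx, x) > 0$ and of the second kind when $(iJx, x) < 0$):

```python
import numpy as np

# Sketch of the KGL kind test (sign convention assumed): classify each eigenvalue
# of a symplectic matrix W by the sign of the indefinite form (iJx, x) = x^* (iJ) x
# on its eigenvector; a vanishing form would indicate a mixed-kind situation.
def eigenvalue_kinds(W, J):
    vals, vecs = np.linalg.eig(W.astype(complex))
    kinds = []
    for lam, x in zip(vals, vecs.T):
        q = np.vdot(x, 1j * (J @ x)).real   # real since iJ is Hermitian
        kinds.append('first' if q > 0 else 'second' if q < 0 else 'mixed')
    return vals, kinds

# Example: a rotation matrix is symplectic, with both eigenvalues on the unit circle.
J = np.array([[0.0, 1.0], [-1.0, 0.0]])
theta = 0.3
W = np.array([[np.cos(theta), np.sin(theta)], [-np.sin(theta), np.cos(theta)]])
vals, kinds = eigenvalue_kinds(W, J)
```

The sign of $q$ does not depend on the eigenvector normalization, so the classification is well defined.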
• For $c^2 = 155$ and $\varepsilon = 109\times 10^{-5}$, we obtain
$$
X(\pi, 109\times 10^{-5}) = W = \begin{pmatrix} 0.1601 & 0.0793 \\ -12.2960 & -0.1567 \end{pmatrix},
$$
satisfying the equality
$$
\|W^T J W - J\| = 7.4256\times 10^{-8} \approx 0,
$$
thus $W$ is $J$-symplectic.
One of the eigenvalues is of the first kind and the other of the second kind; there is therefore no eigenvalue of mixed kind, and consequently the system is strongly stable in accordance with the KGL criterion. All eigenvalues are of red color; therefore, in accordance with the classification proposed by Dosso and Sadkane, the system is also strongly stable.
• For $c^2 = 199999$ and $\varepsilon = 0.052631578$, we obtain
$$
X(\pi, 0.052631578) = W = \begin{pmatrix} -4.4280 & 0.0089 \\ 276.8520 & -0.7853 \end{pmatrix}.
$$
The eigenvalues of $W$ ($\lambda_1 = 5.0139$, $\lambda_2 = 0.1994$) are not on the unit circle, and $W$ satisfies
$$
\|W^T J W - J\| = 1.7313\times 10^{-4}.
$$
Since the eigenvalues are not on the unit circle, the system is not stable.
We proposed a method to extract the monodromy matrix $X(T)$ of the original (unperturbed) system. This method, called the method of double orthogonal sweeps, is presented by M. Dosso and N. Coulibaly [5]. We explained the behavior of the multipliers of Hamiltonian systems under an increasing Hamiltonian; to do so, we expressed the multipliers as linear functions of the perturbation parameter. We presented how to perturb a Hamiltonian system with periodic coefficients, and we proposed two methods, called the first and the second method, to determine the matrizant of the perturbed system [28]. The choice of method depends upon the constant matrix $R_0$: if $R_0$ is the null matrix or the product of the identity matrix by a number, the problem is simpler, and we use the first method directly to evaluate the perturbed matrizant; if not, the computation becomes more difficult, and we need certain transformations leading to a simpler form. We illustrated the theory by two numerical examples. In each example, we introduced some values of the perturbation parameter, taken at random, to compute the monodromy matrix of the system, and we then computed the values of $(iJx, x)$ and $(S_0 x, x)$ to analyze the strong stability of the chosen Hamiltonian system with periodic coefficients.
In future work, we will deal with cases of dimension greater than two and search for a method to determine the radius of convergence of the series $R(\varepsilon)$.
[1] G. Alsmeyer, L. Matthias, Random Matrices and Iterated Random Functions, Münster, October 2011.
[3] Jeffrey DaCunha, Lyapunov Stability and Floquet Theory for Nonautonomous Linear Dynamic Systems on Time Scales, Dissertation, Baylor University, 2004.
[4] M. Dosso, Sur quelques algorithmes d'analyse de stabilité forte des matrices symplectiques, PhD Thesis (September 2006), Université de Bretagne Occidentale, École Doctorale SMIS, Laboratoire de Mathématiques, UFR Sciences et Techniques.
[5] M. Dosso, N. Coulibaly, Symplectic matrices and strong stability of Hamiltonian systems with periodic coefficients, J. of Mathematical Sciences: Advances and Applications, Vol. 28, 2014, pages 15-38.
[6] M. Dosso, N. Coulibaly and L. Samassi, Strong stability of symplectic matrices using a spectral dichotomy method, Far East Journal of Applied Mathematics 79(2) (2013), 73-110.
[7] M. Dosso and M. Sadkane, A spectral trichotomy method for symplectic matrices, Numer. Algor. 52 (2009), 187-312.
[8] M. Dosso, M. Sadkane, On the strong stability of symplectic matrices, Numerical Linear Algebra with Applications 20(2) (2013), 234-249.
[9] Mario Defilippi, Méthode numérique pour les systèmes différentiels à coefficients périodiques : application à un rotor industriel, Publication du L.M.A. répertoriée dans la base Pascal de l'I.N.I.S.T., n° 135 (November 1991).
[11] J. E. Gentle, Matrix Algebra: Theory, Computations and Applications in Statistics, New York, 2003.
[12] S. K. Godunov, Ordinary Differential Equations with Constant Coefficient, American Mathematical Soc., 1 Jan. 1997, 282 pp.
[13] S. K. Godunov, Stability of iterations of symplectic transformations, Siberian Math. J. 30 (1989), 54-63.
[14] S. K. Godunov and M. Sadkane, Spectral analysis of symplectic matrices with application to the theory of parametric resonance, SIAM J. Matrix Anal. Appl. 28 (2006), 1083-1096.
[15] S. K. Godunov and M. Sadkane, Some new algorithms for the spectral dichotomy methods, Linear Algebra Appl. 358 (2003), 173-194.
[17] F. Hiai and D. Petz, Introduction to Matrix Analysis and Applications, Publ. RIMS Kyoto University 48 (2012), 525-542.
[18] C. Mehl, On classification of normal matrices in indefinite inner product spaces, Electron. J. Linear Algebra 15 (2006), 30-83.
[19] Christian Mehl, Volker Mehrmann, André C. M. Ran and Leiba Rodman, Eigenvalue perturbation theory of classes of structured matrices under generic structured rank one perturbations: symplectic, orthogonal and unitary matrices, BIT 54 (2014), 219-255.
[20] Christian Mehl, Volker Mehrmann, André C. M. Ran and Leiba Rodman, Eigenvalue perturbation theory of classes of structured matrices under generic structured rank one perturbations, Linear Algebra Appl. 435 (2011), 687-716.
[21] C. Mehl, V. Mehrmann, A. C. M. Ran and L. Rodman, Jordan forms of real and complex matrices under rank one perturbations, Oper. Matrices 7 (2013), 351-398.
[22] Christian Mehl, Volker Mehrmann, André C. M. Ran and Leiba Rodman, Perturbation analysis of Lagrangian invariant subspaces of symplectic matrices.
[23] Pierre Montagnier, Christopher C. Paige and Raymond J. Spiteri, Real Floquet factors of linear time-periodic systems, Syst. Control Lett. 59(4), 251-262.
[24] André C. M. Ran and Michał Wojtylak, Eigenvalues of rank one perturbations of matrices, Linear Algebra Appl. 437 (2012), 589-600.
[25] Laurent Serlet, Onze leçons d'algèbre linéaire pour la licence de mathématiques, version 2004-2005.
[26] Steven H. Weintraub, Jordan Canonical Form: Theory and Practice, www.morganclaypool.com.
[27] H. Xu, An SVD-like matrix decomposition and its applications, Linear Algebra Appl. 368 (2003), 1-24.
[28] V. A. Yakubovich and V. M. Starzhinskii, Linear Differential Equations with Periodic Coefficients, Vol. 1, Wiley, New York, 1975.