
Problem Set 3

Alejandro Cárdenas, Daniel Díaz & Manuel Londoño (Physics students; codes 133476, 133483, 133622)


7/05/2012

1.
a)
Define a map $[0,1) \to [0,1)$ by $x_n \mapsto x_{n+1} = 3x_n \pmod{1}$. In analogy with the binary code introduced for the Bernoulli map, we can generate a symbolic code. The iteration function of the map is
\[
f(x) =
\begin{cases}
3x & 0 \le x < \tfrac{1}{3} \\
3x - 1 & \tfrac{1}{3} \le x < \tfrac{2}{3} \\
3x - 2 & \tfrac{2}{3} \le x < 1
\end{cases}
\]
We need a code that generates the sequence $(x_1, x_2, x_3, \ldots)$, which is called the orbit of the point $x_0$. Every number in this range can be written as
\[
x = \sum_{j=1}^{\infty} \frac{x_j}{3^j},
\]
with $x_j = 0, 1, 2$, and from these digits we get the symbolic code
\[
(x_1, x_2, x_3, \ldots, x_n, \ldots).
\]
But the sequence
\[
(2, 2, 2, \ldots, 2, \ldots)
\]
converges to the number 1, so this sequence is not allowed, because 1 lies outside the range. If we apply one iteration of the map we get

\[
\begin{aligned}
f(x) &= 3x \pmod{1} \\
&= 3 \sum_{j=1}^{\infty} \frac{x_j}{3^j} \pmod{1} \\
&= 3\left( \frac{x_1}{3} + \sum_{j=2}^{\infty} \frac{x_j}{3^j} \right) \pmod{1} \\
&= x_1 + \sum_{j=2}^{\infty} \frac{x_j}{3^{j-1}} \pmod{1} \\
&= \sum_{j=1}^{\infty} \frac{x_{j+1}}{3^{j}} \pmod{1},
\end{aligned}
\]
where the integer digit $x_1$ drops out because of the mod 1.

So we get the sequence
\[
(x_2, x_3, x_4, \ldots, x_{n+1}, \ldots),
\]
shifted by one digit. Now we want the code to become cyclic, so if the iteration produces $(2, 2, 2, \ldots, 2, \ldots)$ it must go to $(0, 0, 0, \ldots, 0, \ldots)$, just to guarantee convergence inside the range.
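As an illustration (a minimal sketch of our own, not part of the original solution; the function names are ours), the following Python snippet computes the ternary digits of a point and shows that one application of $f$ shifts the symbolic code by one digit:

    def shift_map(x):
        """One step of the ternary shift map f(x) = 3x mod 1 on [0, 1)."""
        return (3.0 * x) % 1.0

    def ternary_digits(x, n):
        """First n digits of the base-3 expansion of x in [0, 1)."""
        digits = []
        for _ in range(n):
            x *= 3.0
            d = int(x)        # digit in {0, 1, 2}
            digits.append(d)
            x -= d
        return digits

    x0 = 0.3141592653589793
    print(ternary_digits(x0, 8))             # code of x0: (x1, x2, ..., x8)
    print(ternary_digits(shift_map(x0), 7))  # code of f(x0): the same digits shifted by one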

b)
Using the equation of the map, $x_n \mapsto x_{n+1} = 3x_n \pmod{1}$, Figure 1 shows a solution with period 3.

Figure 1: Periodic solution


And we see that just two points generate period one, (0, 0.5), points on the line x0 = x
that intersects the map.

    Period N    Number of fixed points of f^N
    1           2
    2           8
    3           26

Table 1: Some examples of the number of fixed points of $f^N$.

Figure 2: Fixed points


A fixed point is a point whose orbit returns exactly to itself: $x_k = x_{k+1}$ for the first iterate, $x_k = x_{k+2}$ for the second, and so on. So for the $i$-th iterate we require
\[
(x_1, x_2, x_3, \ldots, x_n, \ldots) = (x_{1+i}, x_{2+i}, x_{3+i}, \ldots, x_{n+i}, \ldots),
\]
that is,
\[
x = f^{i}(x) = 3^{i} x \pmod{1}.
\]
The fixed points of $f^{i}$ are therefore the numbers satisfying $x + k = 3^{i} x$ for some integer $k$ with $0 \le k < 3^{i} - 1$, i.e.
\[
x = \frac{k}{3^{i} - 1}.
\]
Note that the points 0 and 0.5 are fixed points of every iterate $f^{N}$, so they appear for every period. All these periodic points are unstable, therefore there is an infinite (countable) set of unstable periodic orbits, and these orbits are dense in $[0,1)$, because the eigenvalue of the stability matrix, which in one dimension is simply the coefficient 3, is larger than one for every value of $x_n$.
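To check the counting in Table 1 (a small sketch of our own, not part of the original solution), the fixed points of $f^N$ can be enumerated as $x = k/(3^N - 1)$ and verified by iterating the map with exact rational arithmetic:

    from fractions import Fraction

    def f(x):
        """One step of the ternary shift map, with exact rational arithmetic."""
        return (3 * x) % 1

    def iterate(x, n):
        """Apply f n times."""
        for _ in range(n):
            x = f(x)
        return x

    def fixed_points_of_fN(N):
        """All x in [0,1) with f^N(x) = x, i.e. x = k/(3^N - 1) for k = 0..3^N - 2."""
        return [Fraction(k, 3**N - 1) for k in range(3**N - 1)]

    for N in (1, 2, 3):
        pts = fixed_points_of_fN(N)
        # each candidate really returns to itself after N steps (period N or a divisor of N)
        assert all(iterate(x, N) == x for x in pts)
        print(N, len(pts))   # prints 1 2, 2 8, 3 26, matching Table 1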

For any initial point $x_0$, the Lyapunov exponent is
\[
\lambda(f, x_0) = \lim_{n \to \infty} \frac{1}{n} \sum_{k=0}^{n-1} \ln \left| f'(x_k) \right|.
\]
Since $f'(x) = 3$ at every point of the orbit (in particular at any fixed point $x_f$),
\[
\lambda(f, x_0) = \lim_{n \to \infty} \frac{1}{n} \sum_{k=0}^{n-1} \ln 3
= \ln 3 \; \lim_{n \to \infty} \frac{1}{n}\, n
= \ln 3 .
\]
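A numerical estimate of the exponent from the divergence of two nearby orbits (our own sketch; the initial point, the offset and the number of steps are arbitrary) agrees with $\ln 3 \approx 1.0986$:

    import math

    def shift_map(x):
        """One step of x -> 3x mod 1."""
        return (3.0 * x) % 1.0

    # Estimate the Lyapunov exponent from the divergence of two nearby orbits
    # (n is kept small so the separation stays much smaller than 1).
    x, y = 0.1234, 0.1234 + 1e-12
    eps = y - x
    n = 20
    for _ in range(n):
        x, y = shift_map(x), shift_map(y)
    sep = abs(y - x)
    sep = min(sep, 1.0 - sep)          # distance on the circle [0, 1)
    print(math.log(sep / eps) / n)     # ~ 1.0986
    print(math.log(3.0))               # ln 3 = 1.0986...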

2.
a)
The Hamiltonian function for this system is
\[
H(l, \varphi) = \frac{l^2}{2I}.
\]
From the canonical Hamilton equations we obtain the equations of motion
\[
\dot{l} = -\frac{\partial H}{\partial \varphi} = 0 \;\Longrightarrow\; l = \text{const},
\qquad
\dot{\varphi} = \frac{\partial H}{\partial l} = \frac{l}{I}.
\]
Since the angular momentum is conserved, it keeps its initial value, $l = l(t_0) = l_0$, so
\[
\frac{d\varphi}{dt} = \frac{l_0}{I}.
\]
Integrating this equation gives the solution
\[
\varphi(t) = \frac{l_0}{I}\, t + c,
\qquad
\varphi(t_0) = \varphi_0 = \frac{l_0}{I}\, t_0 + c \;\Longrightarrow\; c = \varphi_0 - \frac{l_0}{I}\, t_0,
\]
so finally
\[
\varphi(t) = \frac{l_0}{I}\,(t - t_0) + \varphi_0,
\qquad
l = \text{const} = l_0 .
\]
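As a small numerical check (our own sketch; the values of $I$, $l_0$, $\varphi_0$ and $t$ are arbitrary), a simple Euler integration of $\dot{\varphi} = l_0/I$ reproduces the linear solution:

    # Free-rotor solution phi(t) = (l0/I)*(t - t0) + phi0, checked against a
    # plain Euler integration of d(phi)/dt = l0/I.  The numbers are arbitrary.
    I, l0, phi0, t0 = 2.0, 3.0, 0.5, 0.0

    def phi_exact(t):
        return (l0 / I) * (t - t0) + phi0

    def phi_euler(t, steps=10000):
        phi, dt = phi0, (t - t0) / steps
        for _ in range(steps):
            phi += (l0 / I) * dt      # d(phi)/dt = l/I, with l constant
        return phi

    t = 7.0
    print(phi_exact(t), phi_euler(t))  # agree to high accuracy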

b)
We have a system with one degree of freedom, so let us consider a probability density of the form
\[
D(\varphi, l, t_0) = D_0(\varphi, l) =
\begin{cases}
a_\varphi a_l & 0 \le \varphi \le 2\pi,\;\; l_0 - \tfrac{L}{2} \le l \le l_0 + \tfrac{L}{2} \\
0 & \text{any other case}
\end{cases}
\]
That is a rectangle in phase space, so the normalization integral becomes
\[
\int_0^{2\pi} d\varphi \int_{l_0 - L/2}^{l_0 + L/2} dl \; D_0(\varphi, l) = 1 .
\]
Doing the integration, the constant of this distribution function is fixed:
\[
2\pi L\, a_\varphi a_l = 1 \;\Longrightarrow\; a_\varphi a_l = \frac{1}{2\pi L}.
\]
Then the distribution function is
\[
D_0(\varphi, l) =
\begin{cases}
\dfrac{1}{2\pi L} & 0 \le \varphi \le 2\pi,\;\; l_0 - \tfrac{L}{2} \le l \le l_0 + \tfrac{L}{2} \\
0 & \text{any other case}
\end{cases}
\]
This is the normalized probability distribution for this system.
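A direct numerical check of the normalization (our own sketch; it assumes `scipy` is available, and the values of $l_0$ and $L$ are arbitrary):

    import numpy as np
    from scipy.integrate import dblquad

    l0, L = 5.0, 2.0

    def D0(l, phi):
        """Uniform density 1/(2*pi*L) on the rectangle 0 <= phi <= 2*pi, |l - l0| <= L/2."""
        inside = (0.0 <= phi <= 2.0 * np.pi) and (abs(l - l0) <= L / 2.0)
        return 1.0 / (2.0 * np.pi * L) if inside else 0.0

    # dblquad integrates D0(l, phi) over l (inner) and phi (outer); the result should be 1.
    val, err = dblquad(D0, 0.0, 2.0 * np.pi,
                       lambda phi: l0 - L / 2.0,
                       lambda phi: l0 + L / 2.0)
    print(val)   # ~ 1.0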

c)
We know the relation between the initial and the time-dependent distribution functions:
\[
D(\varphi'', l'', t'') = \det|M|\; D\big(\varphi'(\varphi''), l'(l''), t'\big),
\]
where the matrix $M$ is defined by
\[
M =
\begin{pmatrix}
\dfrac{\partial l'}{\partial l''} & \dfrac{\partial l'}{\partial \varphi''} \\[2ex]
\dfrac{\partial \varphi'}{\partial l''} & \dfrac{\partial \varphi'}{\partial \varphi''}
\end{pmatrix}.
\]
Assuming the solution found in part a), the relation between the coordinates at $t'$ and $t''$ is
\[
\varphi'' = \varphi' + \frac{l'}{I}(t'' - t'), \qquad l'' = l',
\]
or, inverted,
\[
\varphi' = \varphi'' - \frac{l''}{I}(t'' - t'), \qquad l' = l''.
\]
Then the matrix $M$ is
\[
M =
\begin{pmatrix}
1 & 0 \\[1ex]
-\dfrac{t'' - t'}{I} & 1
\end{pmatrix},
\qquad \det|M| = 1,
\]
so the time-dependent distribution function is
\[
D(\varphi'', l'', t'') = D\big(\varphi'(\varphi''), l'(l''), t'\big) = \frac{1}{2\pi L}.
\]
But now the limits for the angle coordinate are
\[
\frac{l''}{I}(t'' - t') \;\le\; \varphi'' \;\le\; 2\pi + \frac{l''}{I}(t'' - t'),
\]
which are functions of time, so the distribution function moves in phase space only along the angle coordinate. This is represented by the following sequence of images.

Figure 3: First step of the distribution function; this is the initial function.

Then at a later time $t$ we have the next image in phase space:

Figure 4: The distribution function at a time $t$.

Finally, in the limiting case in which the distribution function reaches $2\pi$, the distribution divides itself and starts again at the beginning of the interval in the angle coordinate:

Figure 5: The distribution divides itself and then starts over.
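The shearing and wrapping sketched in Figures 3-5 can be reproduced with a small simulation of our own (it assumes `numpy` and `matplotlib`; the values of $I$, $l_0$, $L$ and the times are arbitrary): points sampled from the initial rectangle are evolved with $\varphi'' = \varphi' + (l'/I)(t''-t') \bmod 2\pi$ and $l'' = l'$, and colouring each point by its initial angle makes the shear and the wrap at $2\pi$ visible:

    import numpy as np
    import matplotlib.pyplot as plt

    I, l0, L = 1.0, 5.0, 2.0
    rng = np.random.default_rng(1)

    # Points filling the initial rectangle 0 <= phi <= 2*pi, |l - l0| <= L/2.
    phi0 = rng.uniform(0.0, 2.0 * np.pi, 4000)
    l = rng.uniform(l0 - L / 2.0, l0 + L / 2.0, 4000)

    fig, axes = plt.subplots(1, 3, sharey=True, figsize=(10, 3))
    for ax, dt in zip(axes, (0.0, 0.4, 2.0)):
        phi = (phi0 + (l / I) * dt) % (2.0 * np.pi)   # only the angle moves; l is conserved
        ax.scatter(phi, l, c=phi0, s=1, cmap="hsv")   # colour = initial angle, shows the shear
        ax.set_title(f"t'' - t' = {dt}")
        ax.set_xlabel(r"$\varphi$")
    axes[0].set_ylabel(r"$l$")
    plt.tight_layout()
    plt.show()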

d)
Because the probability density is a constant between the phase-space limits found in part b) of this problem, we can regard it as a measure on phase space that tells us the possible states of the system on a curve of constant energy. If we take this measure as the probability density, we can say that the system covers all of its own phase space. Finally, since the probability function is normalized, the states of the system can be represented as a binary code, and from this we conclude that the system is ergodic in the sense of Boltzmann's ergodicity. This is only valid on the subset of phase space defined by the trajectory of the moving distribution function.
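The ergodicity statement can be probed numerically (our own sketch; the observable $\cos^2\varphi$ and the parameter values are ours): for fixed $l$, the time average of a function of the angle along one trajectory approaches its average over the circle:

    import numpy as np

    I, l = 1.0, 5.0
    omega = l / I

    def g(phi):
        """Test observable; its average over the circle is 1/2."""
        return np.cos(phi) ** 2

    def time_average(T, phi0=0.3, steps=200000):
        """Time average of g(phi(t)) along phi(t) = phi0 + omega*t (mod 2*pi)."""
        t = np.linspace(0.0, T, steps)
        return np.mean(g((phi0 + omega * t) % (2.0 * np.pi)))

    phase_average = np.mean(g(np.linspace(0.0, 2.0 * np.pi, 100000, endpoint=False)))
    print(time_average(T=2000.0))   # ~ 0.5
    print(phase_average)            # ~ 0.5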

3.
a)
The given equation represents a particle subject to a viscous force, which is not derived from a potential; such nonconservative systems exhibit the familiar arrow of time due to the irreversible dissipative effect. Traditional Lagrangian and Hamiltonian mechanics cannot be used directly with these nonconservative forces, so this is not a classical Hamiltonian system. The Lagrange equations with dissipation become
\[
\frac{d}{dt}\frac{\partial L}{\partial \dot{q}} - \frac{\partial L}{\partial q} + \frac{\partial z}{\partial \dot{q}} = 0,
\]
where $z$ is Rayleigh's dissipation function. So it takes two scalar functions, $L$ and $z$, to specify the equations of motion; the momentum and the Hamiltonian are defined in the same way as if no friction were present.
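As a consistency check (our own addition, not part of the original solution, assuming the viscous force has the linear form $-\gamma\dot{q}$ used in part b) and the free-particle Lagrangian $L = \tfrac{1}{2}m\dot{q}^{2}$), Rayleigh's function $z = \tfrac{1}{2}\gamma\dot{q}^{2}$ reproduces the equation of motion solved below:
\[
\frac{d}{dt}\frac{\partial L}{\partial \dot{q}} - \frac{\partial L}{\partial q} + \frac{\partial z}{\partial \dot{q}}
= m\ddot{q} + \gamma\dot{q} = 0 .
\]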

b)
If the only force present in the system is the friction force, the equation of motion for the problem is
\[
m\ddot{q} = -\gamma \frac{dq}{dt}.
\]
As we have a one-dimensional system, the coordinate $q$ should be treated as a scalar, not as a vector; this equation is equivalent to
\[
m\frac{dv}{dt} + \gamma v = 0,
\]
where we have set $\dot{q} = v$. Dividing by the mass, we obtain
\[
\frac{dv}{dt} + \frac{\gamma}{m}\, v = 0. \tag{1}
\]
This equation is satisfied by a solution of the form $v = A\exp(\lambda t)$, where $A$ is a constant that we may find from the initial conditions and which, since the exponential function is dimensionless, has dimensions of velocity. Replacing this solution in equation (1), we obtain
\[
\lambda A \exp(\lambda t) + \frac{\gamma}{m} A \exp(\lambda t) = 0 .
\]
For this to hold at every time we need
\[
A \exp(\lambda t)\left(\lambda + \frac{\gamma}{m}\right) = 0 \;\Longrightarrow\; \lambda = -\frac{\gamma}{m}.
\]
So the solution is of the form $v = A\exp(-\tfrac{\gamma}{m} t)$. The initial conditions tell us that at $t = t_0$, $\dot{q}(t_0) = \dot{q}_0$, or what is the same, $v(t_0) = v_0$; then we have
\[
v_0 = A \exp\!\left(-\frac{\gamma}{m} t_0\right) \;\Longrightarrow\; A = \frac{v_0}{\exp(-\tfrac{\gamma}{m} t_0)}.
\]
The final solution for $v$ is
\[
v = \frac{v_0}{\exp(-\tfrac{\gamma}{m} t_0)} \exp\!\left(-\frac{\gamma}{m} t\right)
= v_0 \exp\!\left(-\frac{\gamma}{m}(t - t_0)\right).
\]
Taking again $\dot{q} = v$, the expression becomes
\[
\frac{dq}{dt} = \dot{q}_0 \exp\!\left(-\frac{\gamma}{m}(t - t_0)\right).
\]
Integrating, we obtain
\[
q(t) - q(t_0) = \frac{m\dot{q}_0}{\gamma}\left(1 - \exp\!\left(-\frac{\gamma}{m}(t - t_0)\right)\right),
\]
so
\[
q(t) = q(t_0) + \frac{m\dot{q}_0}{\gamma}\left(1 - \exp\!\left(-\frac{\gamma}{m}(t - t_0)\right)\right). \tag{2}
\]
The momentum is just $p = m\dot{q}$:
\[
p(t) = m\dot{q}_0 \exp\!\left(-\frac{\gamma}{m}(t - t_0)\right). \tag{3}
\]

Developing equation (2), we get
\[
q(t) = q(t_0) + \frac{m\dot{q}_0}{\gamma} - \frac{m\dot{q}_0}{\gamma}\exp\!\left(-\frac{\gamma}{m}(t - t_0)\right)
= q(t_0) + \frac{m\dot{q}_0}{\gamma} - \frac{p(t)}{\gamma}.
\]
Now we have a time-independent relation between $p$ and $q$:
\[
p(q) = -\gamma\big(q - q(t_0)\big) + m\dot{q}_0 .
\]
Defining the initial condition for the momentum as $p(t_0) = p_0 = m\dot{q}_0$, we get
\[
p(q) = -\gamma\big(q - q(t_0)\big) + p_0 .
\]
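A quick numerical check of equations (2) and (3) (our own sketch; $m$, $\gamma$ and the initial data are arbitrary): integrating $m\,dv/dt = -\gamma v$, $dq/dt = v$ with a small time step reproduces the closed-form solution and the time-independent relation $p(q)$:

    import math

    m, gamma = 1.5, 0.8
    t0, q0, v0 = 0.0, 2.0, 3.0          # q(t0) and qdot(t0)

    def exact(t):
        q = q0 + (m * v0 / gamma) * (1.0 - math.exp(-gamma * (t - t0) / m))   # eq. (2)
        p = m * v0 * math.exp(-gamma * (t - t0) / m)                          # eq. (3)
        return q, p

    def numeric(t, steps=200000):
        dt = (t - t0) / steps
        q, v = q0, v0
        for _ in range(steps):
            v += -(gamma / m) * v * dt   # m dv/dt = -gamma v
            q += v * dt
        return q, m * v

    t = 4.0
    print(exact(t))
    print(numeric(t))
    # The time-independent relation p(q) = -gamma*(q - q0) + m*v0 also holds:
    q_t, p_t = exact(t)
    print(p_t, -gamma * (q_t - q0) + m * v0)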

c)
We have the initial distribution given as
\[
D(p', q', t') = \delta(q')\, \frac{1}{\sqrt{2\pi}\,\Delta p}\, \exp\!\left(-\frac{p'^2}{2(\Delta p)^2}\right),
\]
and we shall use the equation for the future distribution,
\[
D(p'', q'', t'') = \det|M|\; D\big(p'(p''), q'(q''), t'\big),
\]
where the matrix $M$ is defined by
\[
M =
\begin{pmatrix}
\dfrac{\partial p'}{\partial p''} & \dfrac{\partial p'}{\partial q''} \\[2ex]
\dfrac{\partial q'}{\partial p''} & \dfrac{\partial q'}{\partial q''}
\end{pmatrix}.
\]

Using equations (2) and (3) we obtain
\[
M =
\begin{pmatrix}
\exp\!\left(\frac{\gamma}{m}(t'' - t')\right) & 0 \\[1ex]
\dfrac{1}{\gamma}\left(1 - \exp\!\left(\frac{\gamma}{m}(t'' - t')\right)\right) & 1
\end{pmatrix},
\]
so $\det|M| = \exp\!\left(\frac{\gamma}{m}(t'' - t')\right)$. From equation (3) we get
\[
p' = p'' \exp\!\left(\frac{\gamma}{m}(t'' - t')\right),
\]
and from equation (2)
\[
q' = q'' + \frac{p''}{\gamma}\left(1 - \exp\!\left(\frac{\gamma}{m}(t'' - t')\right)\right).
\]

Now, for the future distribution we have
\[
D(p'', q'', t'') = \exp\!\left(\tfrac{\gamma}{m}(t'' - t')\right)\,
\delta\!\left(q'' + \frac{p''}{\gamma}\Big(1 - e^{\frac{\gamma}{m}(t'' - t')}\Big)\right)
\frac{1}{\sqrt{2\pi}\,\Delta p}\,
\exp\!\left(-\frac{p''^2\, e^{\frac{2\gamma}{m}(t'' - t')}}{2(\Delta p)^2}\right),
\]
which is again a Gaussian in $p''$, now with a time-dependent width $\Delta p_{t''}$:
\[
D(p'', q'', t'') =
\delta\!\left(q'' + \frac{p''}{\gamma}\Big(1 - e^{\frac{\gamma}{m}(t'' - t')}\Big)\right)
\frac{1}{\sqrt{2\pi}\,\Delta p_{t''}}\,
\exp\!\left(-\frac{p''^2}{2(\Delta p_{t''})^2}\right).
\]

Here $\Delta p_{t''}$ changes in time; its value is fixed by the conservation of total probability, which plays the role of Liouville's theorem for this dissipative system: since the initial distribution is normalized, the integral of the distribution over the phase-space coordinates must remain equal to 1 at every time. For the asymptotic case we take the limit $t'' \to \infty$. The factor $e^{\frac{2\gamma}{m}(t''-t')}$ in the exponent diverges, so the only possibility compatible with the normalization is $\Delta p_{t'' \to \infty} = 0$, which means that the Gaussian in $p''$ becomes a Dirac delta centered at zero. In the same limit the constant term in the argument of the position delta can be neglected next to the exponentially growing one, and the asymptotic distribution reduces to
\[
D(p'', q'', t'' \to \infty) = \delta\!\left(q'' - \frac{p''}{\gamma}\right)\delta(p'') .
\]

This is the expected result: since the system dissipates energy, after an infinite time we are certain to find the momentum at zero.
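The collapse of the momentum distribution can be illustrated with a simple Monte Carlo sketch of our own ($m$, $\gamma$, $\Delta p$ and the times are arbitrary): sample initial conditions from $D(p', q', t')$, evolve each one with equations (2) and (3), and watch the momentum spread shrink as $\Delta p\, e^{-\gamma (t''-t')/m}$, which follows directly from equation (3):

    import numpy as np

    m, gamma, dp = 1.0, 0.7, 2.0
    t_prime = 0.0
    rng = np.random.default_rng(0)

    # Initial ensemble: q' = 0 exactly (the delta function), p' Gaussian with width dp.
    p0 = rng.normal(0.0, dp, 200000)
    q0 = np.zeros_like(p0)

    def evolve(p0, q0, t):
        """Equations (2) and (3) applied to each sample of the ensemble."""
        decay = np.exp(-gamma * (t - t_prime) / m)
        q = q0 + (p0 / gamma) * (1.0 - decay)
        p = p0 * decay
        return p, q

    for t in (0.0, 1.0, 3.0, 10.0):
        p, q = evolve(p0, q0, t)
        print(f"t'' = {t:5.1f}:  std(p'') = {p.std():.4f}   "
              f"(predicted {dp * np.exp(-gamma * (t - t_prime) / m):.4f})")
    # As t'' grows, the momentum spread goes to zero: the distribution approaches
    # a delta function in p'', as found above.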

