
National Taiwan University

Advanced Digital Signal Processing

Term Paper (Tutorial)


Non-Linear Time Variant
System Analysis


R98942048  R01943018

Abstract
The equations governing the behavior of dynamic systems are usually nonlinear.
Even in cases where a linear approximation is justified, its range of validity is likely
to be limited. The engineer faced with the design or operation of dynamic systems,
especially the control engineer, must understand the various modes of operation that a
system may exhibit. Usually, a system is designed to yield operation in a certain mode
and, at the same time, suppression of some other modes. A typical example is the
design of a servo exhibiting asymptotic stability of the response to every constant
input but which cannot go into self-oscillations. Unlike linear systems, nonlinear
systems can exhibit different behavior at different signal levels. The fact that a system
is nonlinear, however, may not necessarily constitute a disadvantage. Nonlinearities
are frequently introduced to yield optimal performance in a system. It is the objective
of this tutorial to discuss some of the fundamental properties of nonlinear systems and
to illustrate some of the inherent problems, as well as considerations needed when
dealing with the analysis or design of non-linear time invariant systems.

Keywords
Nonlinear time-varying system, phase-space, stability analysis, approximate method, describing function, Krylov-Bogoliubov asymptotical method.


Contents
Abstract
1. Introduction
2. Introduction to Analysis of Nonlinear Systems
3. Approximate Analysis Methods
4. Stability of Nonlinear Systems
5. The Applications
6. Conclusion
7. References


1. Introduction
Every system can be characterized by its ability to accept an input, such as voltage, pressure, etc., and to produce an output in response to this input. An example
is a filter whose input is a signal corrupted by noise and interference and whose
output is the desired signal. So, a system can be viewed as a process that results in
transforming input signals into output signals.
First of all, we review the concept of systems by discussing the classification of systems according to the way the system interacts with the input signal. This interaction, which defines the model for the system, can be linear or nonlinear, time-invariant or time-varying, memoryless or with memory, causal or noncausal, stable or unstable, and deterministic or nondeterministic. We briefly review the properties of each of these classes.

1.1  Linear and Nonlinear Systems

When the system is linear, the superposition principle can be applied. This
important fact is the reason that the techniques of linear-system analysis have been so
well developed. The superposition principle can be stated as follows: if x1(t) -> y1(t) and x2(t) -> y2(t), then

a x1(t) + b x2(t) -> a y1(t) + b y2(t)

for any constants a and b. A system is linear if this relation holds, and is said to be nonlinear if it is not.
Example: Consider the voltage divider shown in Figure 1 with R1 = R2. For input x(t) and output y(t), this is a linear system. The input/output relation can be written as

y(t) = R2/(R1 + R2) x(t) = (1/2) x(t)

On the other hand, if R1 is a voltage-dependent resistor such that R1=x(t)R2, then the
system is nonlinear. The input/output relation in this case can be written as

y(t) = R2/(x(t)R2 + R2) x(t) = x(t)/(x(t) + 1)

Figure1-1
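The failure of superposition in the second divider can be checked numerically. The following Python sketch (an illustration added here, not part of the original example) compares the two input/output relations:

```python
import numpy as np

def linear_divider(x):
    # y(t) = x(t)/2, the divider with R1 = R2
    return 0.5 * x

def nonlinear_divider(x):
    # y(t) = x(t)/(x(t) + 1), the divider with R1 = x(t)*R2
    return x / (x + 1.0)

x1 = np.array([1.0, 2.0, 3.0])
x2 = np.array([0.5, 1.5, 2.5])

# Linear system: the response to x1 + x2 equals the sum of the responses.
print(np.allclose(linear_divider(x1 + x2),
                  linear_divider(x1) + linear_divider(x2)))      # True

# Nonlinear system: superposition fails.
print(np.allclose(nonlinear_divider(x1 + x2),
                  nonlinear_divider(x1) + nonlinear_divider(x2)))  # False
```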

1.2  Time-Varying and Time-Invariant Systems


A system is said to be time-invariant if a time shift in the input signal causes the same time shift in the output signal. If y(t) is the output corresponding to input x(t), a time-invariant system will have y(t - t0) as the output when x(t - t0) is the input. So, the rule used to compute the system output does not depend on the time at which the input is applied. On the other hand, if the output corresponding to x(t - t0) is not y(t - t0), we call the system time-variant or time-varying.
There are many examples of time-varying systems. For example, an aircraft is a time-varying system: its time-variant characteristics are caused by the different configurations of the control surfaces during takeoff, cruise, and landing, as well as by the constantly decreasing weight due to fuel consumption.
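A simple discrete-time check makes the distinction concrete. In the sketch below, y[n] = x[n]^2 is time-invariant, while y[n] = n*x[n] (a toy stand-in for a system whose gain drifts with time) is time-varying; the systems and signal values are illustrative choices, not from the original text:

```python
import numpy as np

def square(x):
    # y[n] = x[n]^2: the rule does not depend on n, so it is time-invariant
    return x**2

def ramp_gain(x):
    # y[n] = n * x[n]: the gain grows with time, so it is time-varying
    n = np.arange(len(x))
    return n * x

def shifted(x, n0):
    # delay the signal by n0 samples (zero-padded)
    return np.concatenate([np.zeros(n0), x[:-n0]])

x = np.array([1.0, -2.0, 3.0, 0.5, 0.0, 0.0])
n0 = 2

# Time-invariant: shifting the input shifts the output identically.
print(np.allclose(square(shifted(x, n0)), shifted(square(x), n0)))        # True

# Time-varying: the shifted input does not give the shifted output.
print(np.allclose(ramp_gain(shifted(x, n0)), shifted(ramp_gain(x), n0)))  # False
```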

1.3  Systems With and Without Memory

For most systems, the inputs and outputs are functions of the independent variable. A system is said to be memoryless if the present value of the output depends only on the present value of the input. For example, a resistor is a memoryless system: with input x(t) taken as the current and output y(t) taken as the voltage, the input/output relationship is y(t) = Rx(t), where R is the resistance. Thus, the value of y(t) at any instant depends only on the value of x(t) at that time. On the other hand, a capacitor is an example of a system with memory.


1.4  Causal System

A causal system is a system where the output depends on past and current inputs
but not future inputs. The idea that the output of a function at any time depends only
on past and present values of input is defined by the property commonly referred to as
causality. A system that has some dependence on input values from the future is
termed a noncausal or acausal system, and a system that depends only on future
input values is an anticausal system. Classically, nature or physical reality has been
considered to be a causal system.

1.5  Linear Time Invariant Systems

We have discussed a number of basic system properties. Linear time-invariant


systems play a fundamental role in signal and system analysis because many physical phenomena can be modeled by them. A linear time-invariant (LTI) system is completely characterized by its impulse response h(t): the output y(t) of an LTI system is the convolution of the input x(t) with the impulse response of the system,

y(t) = x(t) * h(t) = ∫ x(τ) h(t − τ) dτ
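In discrete time the same relation reads y[n] = Σ_k x[k] h[n−k], which NumPy's convolve computes directly. The first-order impulse response h[n] = 0.5^n used below is a hypothetical example:

```python
import numpy as np

# Impulse response of a (hypothetical) first-order discrete-time filter, h[n] = 0.5^n
a = 0.5
h = a ** np.arange(20)

x = np.array([1.0, 2.0, -1.0])        # input signal

# The output of the LTI system is the convolution of input and impulse response
y = np.convolve(x, h)

# Feeding the unit impulse through the system returns h itself
delta = np.array([1.0])
print(np.allclose(np.convolve(delta, h), h))   # True

# Linearity: convolution is linear in the input
x2 = np.array([0.5, 0.0, 3.0])
lhs = np.convolve(x + x2, h)
rhs = np.convolve(x, h) + np.convolve(x2, h)
print(np.allclose(lhs, rhs))                   # True
```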

1.6  Nonlinear Time Invariant Systems

With nonlinear systems, we cannot count on the above nice properties. A nonlinear time-invariant system is a system whose operator is time-invariant but depends on the input. For example, a square-law amplifier with y(t) = ax^2(t) is a nonlinear time-invariant system. Other examples are rectifiers, oscillators, phase-locked loops (PLL), etc. Note that all real electronic systems become practically nonlinear owing to saturation. Because of the difficulties involved in nonlinear analysis, approximation methods are commonly used.
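The harmonic distortion introduced by a square-law amplifier can be seen directly in the spectrum: since sin^2(ωt) = 1/2 − (1/2)cos(2ωt), a pure sinusoid at the input produces only a DC term and a second harmonic at the output. The following sketch (with parameters chosen for illustration) verifies this with an FFT:

```python
import numpy as np

N = 64                      # samples per record
k = 4                       # input frequency bin (exactly periodic in the record)
n = np.arange(N)
x = np.sin(2 * np.pi * k * n / N)

y = x**2                    # square-law amplifier output (gain a = 1)

Y = np.abs(np.fft.rfft(y)) / N
# sin^2 = 1/2 - (1/2) cos(2wt): energy appears only at DC and at bin 2k
print(np.argmax(Y[1:]) + 1)         # 8, i.e. 2*k: the second harmonic
print(round(Y[0], 3))               # 0.5, the DC term
```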


2. Introduction to Analysis of Nonlinear Systems


Nonlinear systems, with either inherent nonlinear characteristics or nonlinearities deliberately introduced to improve their dynamic characteristics, have found wide application in the most diverse fields of engineering. The principal task of nonlinear system analysis is to obtain a comprehensive picture, quantitative if possible, but at least qualitative, of what happens in the system if the variables are allowed, or forced, to move far away from the operating points. This is called the global, or in-the-large, behavior. Local, or in-the-small, behavior of the system can be analyzed on a linearized model of the system.
So, the local behavior can be investigated by rather general and efficient linear
methods that are based upon the powerful superposition and homogeneity principles.
If linear methods are extended to the investigation of the global behavior of a nonlinear system, the results can be erroneous both quantitatively and qualitatively, since the nonlinear characteristics may be essential but the linear methods may fail to reveal them. Therefore, there is a strong emphasis on the development of methods and techniques for the analysis and design of nonlinear systems.

2.1  The Phase-Space Approach

The phase-space, or more specifically the phase-plane, approach has been used
for solving problems in mathematics and physics at least since Poincaré. The
approach gives both the local and the global behavior of the nonlinear system and
provides an exact topological account of all possible system motions under various
operating conditions. It is convenient, however, only in the case of second-order
equations, and for high-order cases the phase-space approach is cumbersome to use.
Nevertheless, it is a powerful concept underlying the entire theory of ordinary
differential equations (linear or nonlinear, time varying or time invariant). It can be
extended to the study of high-order differential equations in those cases where a
reasonable approximation can be made to find an equivalent second-order equation.
However, this may lead to either erroneous conclusions about the essential system
behavior, such as stability and instability, or various practical difficulties such as time
scaling.
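As a concrete second-order example, the sketch below integrates the van der Pol oscillator and follows its phase-plane trajectory onto the limit cycle; the amplitude near 2 for μ = 1 is a classical result. The integrator and parameters are illustrative choices, not from the original text:

```python
import numpy as np

def vdp(state, mu=1.0):
    # van der Pol oscillator x'' - mu*(1 - x^2)*x' + x = 0 in phase variables (x, v)
    x, v = state
    return np.array([v, mu * (1.0 - x**2) * v - x])

def rk4_step(f, state, dt):
    # one classical fourth-order Runge-Kutta step
    k1 = f(state)
    k2 = f(state + 0.5 * dt * k1)
    k3 = f(state + 0.5 * dt * k2)
    k4 = f(state + dt * k3)
    return state + dt / 6.0 * (k1 + 2*k2 + 2*k3 + k4)

dt, steps = 0.01, 20000
state = np.array([0.1, 0.0])         # start near the (unstable) origin
traj = []
for _ in range(steps):
    state = rk4_step(vdp, state, dt)
    traj.append(state.copy())
traj = np.array(traj)

# After the transient, the phase-plane orbit is a closed limit cycle
# whose amplitude in x is close to 2 (a classical result for mu = 1).
amp = np.max(np.abs(traj[-2000:, 0]))
print(round(amp, 1))                 # approximately 2.0
```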


2.2  The Stability Analysis

The stability analysis of nonlinear systems, which is heavily based on the work
of Liapunov, is a powerful approach to the qualitative study of the system global
behavior. By this approach, the global behavior of the system is investigated utilizing
the given form of the nonlinear differential equations but without explicit knowledge
of their solutions. Stability is an inherent feature of wide classes of systems, thus
system theory is largely devoted to the stability concept and related methods of
analysis.
Stability analysis, however, does not constitute a complete satisfactory theory for
the design of nonlinear systems. The stability conditions, which are often hard to
determine, are sufficient but usually not necessary. This comes from the fact that the
given equations are reformulated for the application of the stability analysis. In that
reformulation certain information about the specific system characteristics is lost and,
unfortunately, the amount of information that is lost cannot be estimated.
For example, if a nonlinear system is found to be stable for a certain range of
parameter values, it is not possible to predict how far from that range the parameter
value can be chosen without affecting the system stability. Furthermore, the system
can be unstable and still be satisfactory for practical applications. For example, a
system can exhibit stable periodic oscillations and therefore be unstable. However, in
the application of the system, these oscillations may not be observed because their
amplitude is sufficiently small and the perturbations permanently acting on the system
are large enough to drive the system far from the periodic oscillations.
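The flavor of the Liapunov approach can be illustrated numerically: for a damped pendulum, the energy V = v^2/2 + (1 − cos x) satisfies dV/dt = −v^2 ≤ 0 along trajectories, so stability can be checked on a simulated trajectory without solving the equation analytically. The example system and tolerances below are illustrations added here:

```python
import math

# Damped pendulum: x' = v, v' = -sin(x) - v
# Candidate Lyapunov function: V = v^2/2 + (1 - cos x), the total energy.
# Along trajectories dV/dt = -v^2 <= 0, so V should never increase.

def step(x, v, dt=1e-3):
    # one forward-Euler step (small dt keeps the numerical drift negligible)
    return x + dt * v, v + dt * (-math.sin(x) - v)

def V(x, v):
    return 0.5 * v * v + (1.0 - math.cos(x))

x, v = 1.0, 0.0
values = [V(x, v)]
for _ in range(50000):
    x, v = step(x, v)
    values.append(V(x, v))

# Energy decays monotonically (up to tiny integration error)
print(all(b <= a + 1e-5 for a, b in zip(values, values[1:])))   # True
print(values[-1] < 0.05)          # True: the trajectory settles near the origin
```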

2.3  Approximate Methods

Approximate methods for solving problems in mathematical physics have been received with much interest by engineers and have quickly found wide application in diverse fields of system engineering. The basic merit of approximate methods is that they are direct and efficient, and they permit a simple evaluation of the solution for a wide class of problems arising in the analysis of nonlinear oscillations.
The application of computer techniques and system simulations has given strong
emphasis to those approximate methods which employ rather straightforward and
realizable solution procedures and calculations. These methods enable a simple
estimation of how different system structures and parameters influence the salient
system dynamic characteristics. The application of a computer simulation can then provide the actual solution of the design problem. If the system behavior is not satisfactory, or if the computer solution does not agree with the predicted characteristics, the approximate methods can again be applied to guide the next step in the system simulation and to achieve a better solution of the analysis problem. By iterating these two steps, that is, by applying the approximate methods and then the computer simulation, the design eventually converges to a final satisfactory solution. This philosophy in the analysis of nonlinear systems can give improved results not only for a specific system but also for the related class of systems, and thus has an important generality in system theory and application.
It is of particular significance to classify the nonlinear problem before a specific technique is applied to its solution. Thus it is necessary to evaluate the potential of both the exact and the approximate methods before they are tested on the actual problem. This involves engineering experience and ingenuity in choosing the appropriate design technique and procedure. If an exact method is to be applied, we should be aware of the fact that it may require that a sequence of simplifications be introduced in the original problem.
In the simplifications, certain vital characteristics of the original problem can be
lost---for example, the reduction of the order of a differential equation through neglect
of one of the system parameters. Then the approximate solution of the original
problem may represent more appropriately the actual situation and be of more use in
the design. In addition, the approximate methods normally yield more information
about the possible performance criteria trade-offs or the structural and parameter
changes that might enhance the overall system characteristics.
On the other hand, the exact methods can reveal various subtle phenomena in
nonlinear system behavior that cannot be discovered by the approximate methods. It
can be concluded that in a majority of practical problems both the exact and the
approximate methods should be applied to obtain a satisfactory solution of the
nonlinear system design problem, and the versatility of the designer in various
solution procedures is a prerequisite for a successful system analysis and design.
In the application of the approximate methods, a significant problem is the
estimation of their accuracy. A certain degree of accuracy is necessary to guarantee
the applicability of the method involved, and to ensure the validity of both the
qualitative conclusions and the quantitative results obtained by the approximate
analysis. The accuracy problem, however, involves various mathematical difficulties,
and the designer is forced to use simple and practical approximate methods despite
some pessimism about the validity of the methods and promising results have been
obtained in solving the accuracy problem.
Among the approximate methods used for the analysis of nonlinear oscillations,
the Krylov-Bogoliubov asymptotical method stands out because of its usefulness in
system engineering problems. The original method not only enables the determination
of steady-state periodic oscillations, but also gives in evidence the transient process
corresponding to small amplitude perturbations of the oscillations. The latter is of
particular interest in system design, where the transient process is often the ultimate
goal. The method, however, is applicable only to systems described by second-order nonlinear differential equations.
The approximate method to be used in the analysis of nonlinear systems along
with the parameter plane concept is the harmonic linearization method, often called
the describing function method or the method of harmonic balance. Harmonic linearization is heavily based on the Krylov-Bogoliubov approach, and can be applied to nonlinear systems described by high-order differential equations.

3. Approximate Analysis Methods for Nonlinear Systems



In this section, we will present several methods for approximately analyzing a


given nonlinear system. Because a closed-form analytic solution of a nonlinear
differential equation is usually impossible to obtain, methods for carrying out an
approximate analysis are very useful in practice. The methods presented here fall into three categories:
1. Describing function methods consist of replacing a nonlinear element within the system by a linear element and carrying out further analysis. The utility of these methods is in predicting the existence and stability of limit cycles, in predicting jump resonance, etc.
2. Numerical solution methods are specifically aimed at carrying out a numerical solution of a given nonlinear differential equation using a computer.
3. Singular perturbation methods are especially well suited for the analysis of systems where the inclusion or exclusion of a particular component changes the order of the system. For example, in an amplifier, the inclusion of a stray capacitance in the system model increases the order of the dynamic model by one.
The above three types of methods are only some of the many varieties of
techniques that are available for the approximate analysis of nonlinear systems.
Moreover, even with regard to the three subject areas mentioned above, the
presentation here only scratches the surface, and references are given, at appropriate
places, to works that treat the subjects more thoroughly.

3.1  Mathematical Description of Nonlinear Systems

In general a nonlinear system consists of linear and nonlinear elements. The


linear elements are described by linear differential equations. The nonlinear elements, which are normally very limited in number, are described by a nonlinear function or differential equation relating the input and output of the element. The nonlinear input-output relationship can have a rather arbitrary form. The parameter plane analysis to be presented is restricted to a certain class of these relationships.
In treating a real system as linear, we assume that the system is linear in a certain
range of operation. The signals appearing in various points of the system are such that
the superposition principle is justified. However, if signals in the system go beyond
the range of linear operation and, for example, become either very large or very small,
the characteristics of the system elements can be essentially different from the
linearized characteristics and the system must be treated as nonlinear. Such cases are
illustrated graphically by the characteristics shown in Figure2, where x denotes the
input to the element and the output is given by the value of the function F(x). If the
output of the element is denoted by y, the input-output relationship can be written
analytically as

y = F(x)


Figure3-1
Certain nonlinear characteristics can be given in analytical form. For example,
the characteristic of Figure 2 can be analytically described by

F(x) = kx for |x| ≤ S,  F(x) = c·sign(x) for |x| > S

The characteristic is linear with slope k = c/S for input magnitudes less than S, and it exhibits saturation for input magnitudes greater than S.
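The saturation characteristic is straightforward to implement; the sketch below (with hypothetical values S = 1, c = 2) evaluates F(x) on a few sample inputs:

```python
import numpy as np

def saturation(x, S=1.0, c=2.0):
    # F(x) = k*x for |x| <= S, with k = c/S, and F(x) = c*sign(x) for |x| > S
    k = c / S
    return np.where(np.abs(x) <= S, k * x, c * np.sign(x))

x = np.array([-3.0, -1.0, -0.25, 0.0, 0.5, 1.0, 4.0])
print(saturation(x))    # [-2.  -2.  -0.5  0.   1.   2.   2. ]
```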
In various practical applications the nonlinear characteristic is obtained
experimentally, and an adequate analytical expression cannot be justified. On the
other hand, some characteristics are conveniently expressed analytically, whereas a
graphical interpretation is not possible.
So far, only single-valued nonlinear characteristics have been discussed: in the characteristics of Figure2, to each value of the input x there corresponds one and only one value of the output y = F(x). The characteristics in Figure3, which have a hysteresis loop, are multi-valued nonlinear characteristics.

Figure3-2


The hysteresis property can be such that the loop dimensions depend on the
magnitude of the input signal. It is also to be noted that hysteresis type nonlinear
characteristics cannot be completely described by the function y=F(x) since the output
y inherently depends on the direction of change in the magnitude of the input x. If the
rate of change in x is greater than zero, the right-hand side loop represents the
nonlinear characteristic, and vice versa. Thus the adequate description of the
hysteresis type of nonlinearities should be expressed as

y = F(x, sign(sx)),  s = d/dt,

rather than as y = F(x).

Figure3-3
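Because a multi-valued characteristic depends on the history of the input, any implementation must carry internal state. The sketch below models an ideal relay with hysteresis (the thresholds and output level are illustrative choices added here); note that the same input value x = 0 can produce either output level, depending on the path taken:

```python
class RelayWithHysteresis:
    """Ideal relay with hysteresis: output +c once the input exceeds +h,
    -c once it falls below -h, and the previous value in between.
    The stored output is the 'memory' that a single-valued F(x) cannot capture."""

    def __init__(self, h=0.5, c=1.0):
        self.h, self.c = h, c
        self.y = -c          # assume the relay starts in the low state

    def __call__(self, x):
        if x > self.h:
            self.y = self.c
        elif x < -self.h:
            self.y = -self.c
        return self.y

relay = RelayWithHysteresis()
inputs = [0.0, 0.6, 0.0, -0.2, -0.6, 0.0, 0.4]
outputs = [relay(x) for x in inputs]
print(outputs)   # [-1.0, 1.0, 1.0, 1.0, -1.0, -1.0, -1.0]
```

The input x = 0 appears three times and yields −1, then +1, then −1: the output traces the left or right branch of the loop according to the direction of change in x, exactly as described above.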
Besides the analytical description of nonlinear elements and systems, it is
essential to consider the structure of the system, which is usually given in familiar
block diagram or signal flow graph form. The structure of the system displays certain
inherent features of nonlinear systems that are not apparent in the analytical
description.
The basic nonlinear system with one nonlinear element n is shown in Figure4. It should be noted that the function F(x, sx) associated with n does not necessarily represent the nonlinear element itself, which may be described by a nonlinear differential equation such as

Tsy + y = F(x, sx)
F(x, y)sy + y = Kx
F(x)sy + y = Kx

To make the analysis easier, the nonlinear function F(x, sx) may be isolated in the system, while all the linear relations are joined in the block G(s). For example, if the nonlinear element n is described by the first equation above, it can be split into two equations

(Ts + 1)y = z
z = F(x, sx)

Then the first equation is associated with the other linear elements of the system, and the function F(x, sx) is isolated in the block n. Naturally, the function F(x, sx) alone does not represent the nonlinear element n and therefore will be called the nonlinearity.
The linear elements may be coupled in an arbitrary way to make the equivalent transfer function G(s), whose order is not theoretically limited as far as the parameter plane analysis is concerned. However, certain restrictions on the nature of the function G(s) are imposed in order to justify the application of the approximate analysis. According to the block diagram of Figure4, the transfer function G(s) is
According to the block diagram of Figure4, the transfer function G(s) is

G(s) = C(s)/B(s)

The function f=f(t), which may be either a desired input signal or an undesired
perturbation, is applied somewhere in the linear part of the system. The block diagram
of Figure4 may represent a nonlinear system having two nonlinear elements connected in cascade, provided it is possible to isolate the two related nonlinearities and join them in one equivalent block.


3.2  Describing Function

Among the methods used for stability analysis and investigation of sustained nonlinear oscillations, sometimes called limit cycles, the describing function generally stands out because of its usefulness in engineering problems of control
system analysis. The describing function technique can be successfully applied to
systems other than control whenever the sustained oscillations, which are based on
some nonlinear phenomena, represent possible operating conditions.
The theoretical basis of the describing function analysis lies in the van der Pol
method of slowly varying coefficients as well as in the methods of harmonic balance
and equivalent linearization for solving certain problems of nonlinear mechanics. The
analysis has been further developed in the work of Goldfarb with the emphasis on
nonlinear phenomena in feedback systems.
To present the concept of the describing function method, a nonlinear time-invariant system with the block diagram of Figure4 is considered. The block n represents the isolated nonlinearity described by a given function F(x, sx). The linear part of the system is represented by a known transfer function G(s) = C(s)/B(s). The external forcing function f = f(t) is identically zero for all values of time t. Thus the free oscillations in the system are determined by the nonlinear homogeneous differential equation

B(s)x + C(s)F(x, sx) = 0,  s = d/dt

3.3  Krylov-Bogoliubov Asymptotical Method

Since the parameter plane analysis of nonlinear oscillations is based upon the concept and results of the Krylov-Bogoliubov asymptotical method, the fundamental aspects of the method are outlined here. Then the derivations involved in further extensions and applications of the method can be more easily followed. Furthermore, the method is highly applicable to practical problems of nonlinear oscillations and represents a basis for other approximate methods in nonlinear analysis, particularly the describing function technique.
The basis of approximate analysis of nonlinear oscillations is the small parameter
method introduced in connection with the three-body problem of celestial mechanics.
The fundamental concept and certain solution procedures have been postulated in a
general form by Poincaré. In this method a second-order nonlinear differential equation describing the oscillations is formulated so that it incorporates a small parameter. The parameter is small in the sense that it represents a number of sufficiently small absolute value.
For a zero value of the parameter the nonlinear equation reduces to a linear equation, the solution of which is a harmonic oscillation. The solution of the linear
equation is called the generating solution. The essential idea of the method is to
assume the solution of the nonlinear differential equation in the form of an infinite
power series.
Then, by substituting the solution into the original differential equation, a recursive system of linear nonhomogeneous differential equations with constant coefficients is obtained. Based upon the generating solution, the recursive system can be solved by elementary calculations up to a desired degree of accuracy. The small parameter method has proved useful for solving numerous problems in physics and the technical sciences.
By considering certain nonlinear phenomena in electron tube oscillators, van der Pol proposed the method of slowly varying coefficients for evaluation of the related periodic oscillations. This method is a variant of the small parameter method and is heavily based upon the consideration of the first harmonic in the Fourier series expansion of the nonlinear function, this being the keystone of the describing function analysis.
Furthermore, not only is the method convenient for the identification of periodic
solutions of second-order nonlinear differential equations, but it also places in
evidence the manner in which the possible periodic solutions are established, after
small amplitude perturbations, around the solution. The method, however, has been
based on a rather intuitive approach and only the first approximation has been
considered. From the approach it is not clear how the higher approximations can be
made.
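The first Krylov-Bogoliubov approximation can be made concrete on the van der Pol equation, for which averaging yields the well-known slow amplitude equation da/dt = (μ/2)a(1 − a^2/4). Integrating this equation (an illustration added here, with μ chosen arbitrarily) shows the amplitude settling at the limit-cycle value a = 2:

```python
# First Krylov-Bogoliubov approximation for the van der Pol equation
#   x'' - mu*(1 - x^2)*x' + x = 0,
# with x ~ a(t)*cos(t + phi): averaging yields the slow amplitude equation
#   da/dt = (mu/2) * a * (1 - a^2/4),
# whose nonzero equilibrium a = 2 is the limit-cycle amplitude.

mu, dt = 0.1, 0.01
a = 0.2                      # small initial amplitude
for _ in range(200000):      # integrate the averaged equation (forward Euler)
    a += dt * (mu / 2.0) * a * (1.0 - a * a / 4.0)

print(round(a, 3))           # 2.0: the predicted steady-state amplitude
```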


4. Stability of Nonlinear Systems


Here, we are going to introduce various methods for the input-output analysis of
nonlinear systems. The methods are divided into three categories:
1. Optimal Linear Approximants for Nonlinear Systems. This is a formalization
of a technique called the describing function technique, which is popular for a
quick analysis of the possibility of oscillation in a feedback loop with some
nonlinearities in the loop.
2. Input-output Stability. This is an extrinsic view of the stability of nonlinear systems, answering the question of when a bounded input produces a bounded output. This is to be compared with the intrinsic, state-space, or Lyapunov approach to stability.
3. Volterra Expansions for Nonlinear Systems. This is an attempt to derive a
rigorous frequency domain representation of the input output behavior of
certain classes of nonlinear systems.

4.1  Optimal Linear Approximants for Nonlinear Systems

In this section we will be interested in trying to approximate nonlinear systems by linear ones, with the proviso that the "optimal" approximating linear system varies as a function of the input. We start with single-input single-output nonlinear systems. More precisely, we view a nonlinear system, in an input-output sense, as a map N from C([0, ∞)), the space of continuous functions on [0, ∞), to C([0, ∞)). Thus, given an input u ∈ C([0, ∞)), we will assume that the output of the nonlinear system N is also a continuous function y_N(·) defined on [0, ∞):

y_N = N(u) ∈ C([0, ∞))
We will now optimally approximate the nonlinear system for a given reference input u0 ∈ C([0, ∞)) by the output of a linear system. The class of linear systems, denoted by W, which we will consider for optimal approximations is represented in convolution form as integral operators. Thus, for the input u0, the output of the linear system W is given by

y_L(t) = (W(u0))(t) = ∫_0^∞ w(τ) u0(t − τ) dτ

with the understanding that u0(t) = 0 for t < 0. The convolution kernel w(·) is chosen to minimize the mean squared error defined by

e(w) = lim_{T→∞} (1/T) ∫_0^T [ (W u0)(t) − (N u0)(t) ]^2 dt
The following assumptions will be needed to solve the optimization problem.
1. Bounded-Input Bounded-Output (b.i.b.o.) Stability. For a given bound on the input there exists a corresponding bound on the output; thus, a bounded input to the nonlinear system is assumed to produce a bounded output.
2. Causal, Stable Approximators. The class of approximating linear systems is assumed causal and bounded-input bounded-output stable, i.e., w(t) = 0 for t < 0, and

∫_0^∞ |w(τ)| dτ < ∞

This condition guarantees that a bounded input u(·) to the linear system W produces a bounded output.
3. Stationarity of Input. The input u0(·) is stationary, i.e.,

lim_{T→∞} (1/T) ∫_s^{s+T} |u0(t)|^2 dt

exists uniformly in s. The terminology of a stationary deterministic signal is due to Wiener in his theory of generalized harmonic analysis.
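Under these assumptions, the optimal kernel can be computed in closed form for simple cases. For a memoryless cubic nonlinearity N(u) = u^3 driven by u0(t) = A sin t, the optimal approximant reduces to a static gain g = ⟨u0 N(u0)⟩/⟨u0^2⟩ = (3/4)A^2, which is also the describing function gain of the cubic. The following sketch (an illustration added here, not from the original text) checks this numerically:

```python
import numpy as np

A = 2.0
t = np.linspace(0.0, 2.0 * np.pi, 100001)   # one period of u0(t) = A sin t
u0 = A * np.sin(t)
y = u0**3                                    # memoryless cubic nonlinearity

# Optimal static gain minimizing the mean squared error <(g*u0 - y)^2>:
#   g = <u0*y> / <u0^2>
g = np.mean(u0 * y) / np.mean(u0 * u0)

print(round(g, 3))          # 3.0, i.e. (3/4)*A^2 for A = 2
```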

4.1.1 Optimal Linear Approximations for Dynamic Nonlinearities: Oscillations in Feedback Loops
We have studied how harmonic balance can be used to obtain the describing function gain of simple nonlinear systems: memoryless nonlinearities, hysteresis, dead zones, backlash, etc. The same idea may be extended to dynamic nonlinearities.
Consider, for example,

y'' + (y')^3 + y = u

with forcing u = A sin(ωt), where primes denote differentiation with respect to time. If the nonlinear system produces a periodic output (this is a very nontrivial assumption, since several rather simple nonlinear systems behave chaotically under periodic forcing), then one may write the solution y(t) in the form

y(t) = Σ_{k=1}^∞ Y_k sin(k(ωt + φ_k))

Simplifying and equating first-harmonic terms yields

(1 − ω^2) Y_1 sin(ωt + φ) + (3/4) ω^3 Y_1^3 cos(ωt + φ) = A sin(ωt)

so that

((1 − ω^2) Y_1)^2 + ((3/4) ω^3 Y_1^3)^2 = A^2,   φ = −tan^{−1}[ 3 ω^3 Y_1^2 / (4(1 − ω^2)) ]

Thus, if one were to find the optimal linear, causal, b.i.b.o. stable approximant of the nonlinear system above, it would have Fourier transform at frequency ω given by what has been referred to as the describing function gain

(Y_1 / A) e^{jφ}
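Given A and ω, the amplitude Y_1 can be obtained from the first-harmonic balance condition by a simple root search. The sketch below (with hypothetical values A = 1, ω = 2, added here for illustration) uses bisection:

```python
def residual(Y1, A=1.0, w=2.0):
    # first-harmonic balance for y'' + (y')^3 + y = u with u = A sin(w t):
    #   ((1 - w^2) Y1)^2 + ((3/4) w^3 Y1^3)^2 = A^2
    return ((1 - w * w) * Y1)**2 + (0.75 * w**3 * Y1**3)**2 - A * A

# bisection on [0, 1]: residual(0) = -A^2 < 0 and residual grows with Y1
lo, hi = 0.0, 1.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if residual(mid) < 0.0:
        lo = mid
    else:
        hi = mid
Y1 = 0.5 * (lo + hi)

print(abs(residual(Y1)) < 1e-9)      # True: the balance equation is satisfied
print(0.0 < Y1 < 1.0)                # True
```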

4.2  Input-Output Stability

Up to this point, a great deal of the discussion has been based on a state-space description of a nonlinear system of the form

x' = f(t, x, u),  y = h(t, x, u)

or

x_{k+1} = f(k, x_k, u_k),  y_k = h(k, x_k, u_k).
One can also think of this equation from the input-output point of view. Thus, for example, given an initial condition x(0) = x_0 and an input u(·) defined on the interval [0, ∞), say piecewise continuous, and with suitable conditions on f(·) to make the differential equation have a unique solution on [0, ∞) with no finite escape time, it follows that there is a map N_{x_0} from the input u(·) to the output y(·). It is important to remember that the map depends on the initial state x_0 of the
system. Of course, if the vector field f and function h are affine in x, u then the
response of the system can be broken up into a part depending on the initial state and
a part depending on the input. More abstractly, one can just define a nonlinear system
as a map (possibly dependent on the initial state) from a suitably defined input space
to a suitable output space. The input and output spaces are suitably defined vector
spaces. Thus, the first topic in formalizing and defining the notion of a nonlinear
system as a nonlinear operator is the choice of input and output spaces. We will deal
with the continuous time and discrete time cases together. To do so, recall that a
function g(·): R+ → R (respectively g(·): Z+ → R) is said to belong to L_p[0, ∞) (respectively l_p(Z+)) if it is measurable and in addition

∫_0^∞ |g(t)|^p dt < ∞   (resp. Σ_{n=0}^∞ |g(n)|^p < ∞)

Also, the set of all bounded functions is referred to as L_∞[0, ∞) (respectively l_∞(Z+)). The L_p (l_p) norm of a function g ∈ L_p[0, ∞) is defined to be

||g||_p = ( ∫_0^∞ |g(t)|^p dt )^{1/p}   (resp. ||g||_p = ( Σ_{n=0}^∞ |g(n)|^p )^{1/p} )

and the L_∞ norm is defined to be

||g||_∞ = sup_t |g(t)|   (resp. sup_n |g(n)|).
Unlike norms of finite dimensional spaces, norms of infinite dimensional spaces are
not equivalent, and thus they induce different norms on the space of functions.
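As a small illustrative sketch (the sequence g below is an arbitrary assumed example), the ℓ_p norms defined above can be computed directly for a finite truncated sequence; the three classical choices p = 1, 2, ∞ already give three different values:

```python
def lp_norm(g, p):
    """l_p norm of a finite sequence g; p = float('inf') gives the sup norm."""
    if p == float('inf'):
        return max(abs(x) for x in g)
    return sum(abs(x) ** p for x in g) ** (1.0 / p)

g = [1.0, -0.5, 0.25, -0.125]      # an assumed, truncated test sequence
n1 = lp_norm(g, 1)                 # sum of absolute values
n2 = lp_norm(g, 2)                 # Euclidean norm
ninf = lp_norm(g, float('inf'))    # largest absolute value
print(n1, n2, ninf)
```

For infinite sequences and for functions on [0, ∞) the sums become series or integrals, and the resulting norms are not equivalent, which is exactly the point made above.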


4.3 Volterra Input-Output Representations

In this section we will restrict our attention to single input single output (SISO)
systems. The material in this section may be extended to multiple input multiple
output systems with a considerable increase in notational complexity deriving from
multilinear algebra in many variables. In an input-output context, linear time-invariant
systems of a very general class may be represented by convolution operators of the
form
    y(t) = ∫_{−∞}^t h(t − τ) u(τ) dτ.

Figure 4-1: A graphical interpretation of the Popov criterion

Here the fact that the integral has upper limit t models a causal linear system, and
the lower limit of −∞ models the lack of an initial condition in the system
description (hence, the entire past history of the system). In contrast to previous
sections, where the dependence of the input-output operator on the initial condition
was explicit, here we replace this dependence on the initial condition by having
the limits of integration go from −∞ rather than 0. In this section, we will
explore the properties of a nonlinear generalization of the form
    y(t) = Σ_{n=1}^∞ ∫···∫ h_n(t − σ₁, σ₁ − σ₂, …, σ_{n−1} − σ_n) u(σ₁) ··· u(σ_n) dσ₁ ··· dσ_n.

This is to be thought of as a polynomial or Taylor series expansion for the function y(·)
in terms of the function u(·). Historically, Volterra introduced the terminology
"function of a function," or actually "function of lines," and defined the derivatives of
such functions of functions, or functionals. Indeed, if F denotes the
operator (functional) taking input functions u(·) to output functions y(·), then the terms
listed above correspond to the summation of the n-th terms of the power series for F.
The first use of the Volterra representation in nonlinear system theory was by Wiener,
and hence representations of this form are referred to as Volterra-Wiener series. Our
development follows that of Boyd, Chua and Desoer, and of Rugh [13], which
give a nice extended treatment of the subject.
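To make the series concrete, the following is a minimal sketch of a discrete-time Volterra series truncated after the second-order term; the kernels h1 and h2 are hypothetical, chosen only so that the first-order and second-order contributions are easy to trace:

```python
def volterra2(u, h1, h2):
    """Evaluate a second-order truncated Volterra series:
    y[k] = sum_i h1[i]*u[k-i] + sum_{i,j} h2[i][j]*u[k-i]*u[k-j]."""
    M = len(h1)
    y = []
    for k in range(len(u)):
        # first-order (linear convolution) term
        acc = sum(h1[i] * u[k - i] for i in range(M) if k - i >= 0)
        # second-order (bilinear) term
        for i in range(M):
            for j in range(M):
                if k - i >= 0 and k - j >= 0:
                    acc += h2[i][j] * u[k - i] * u[k - j]
        y.append(acc)
    return y

h1 = [1.0, 0.5]                  # assumed first-order kernel
h2 = [[0.1, 0.0], [0.0, 0.0]]    # assumed second-order kernel: 0.1 * u[k]**2
u = [1.0, 2.0, 0.0]
y = volterra2(u, h1, h2)
print(y)
```

A Wiener-style identification procedure would estimate such kernels from input-output data; here they are simply assumed for illustration.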

4.4 Lyapunov Stability Theory

The study of the stability of dynamical systems has a very rich history. Many
famous mathematicians, physicists, and astronomers worked on axiomatizing the
concepts of stability. A problem, which attracted a great deal of early interest was the
problem of stability of the solar system, generalized under the title "the N-body
stability problem." One of the first to state formally what he called the principle of

~ 19 ~

"least total energy" was Torricelli (1608-1647), who said that a system of bodies was
at a stable equilibrium point if it was a point of (locally) minimal total energy. In the
middle of the eighteenth century, Laplace and Lagrange took the Torricelli principle
one step further: They showed that if the system is conservative (that is, it conserves
total energy, kinetic plus potential), then a state corresponding to zero kinetic energy
and minimum potential energy is a stable equilibrium point. In turn, several others
showed that Torricelli's principle also holds when the systems are dissipative, i.e.,
total energy decreases along trajectories of the system. However, the abstract
definition of stability for a dynamical system not necessarily derived from a
conservative or dissipative system, and a characterization of stability, were not made
till 1892 by the Russian mathematician/engineer Lyapunov, in response to certain open
problems in determining stable configurations of rotating bodies of fluids posed by
Poincaré.
At heart, the theorems of Lyapunov are in the spirit of Torricelli's principle. They
give a precise characterization of those functions that qualify as "valid energy
functions" in the vicinity of equilibrium points and the notion that these "energy
functions" decrease along the trajectories of the dynamical systems in question. These
precise concepts were combined with careful definitions of different notions of
stability to give some very powerful theorems.
For a general differential equation of the form
    ẋ = f(x, t),    x(t₀) = x₀,
where x ∈ ℝⁿ and t ≥ 0, the system is said to be linear if f(x, t) = A(t)x for some
A(·): ℝ₊ → ℝ^{n×n}, and nonlinear otherwise. We will assume that f(x, t) is piecewise
continuous with respect to t, that is, there are only finitely many discontinuity points in
any compact set. The notation B_h will be short-hand for B(0, h), the ball of radius
h centered at 0. Properties will be said to be true:
* Locally, if they are true for all x₀ in some ball B_h;
* Globally, if they are true for all x₀ ∈ ℝⁿ;
* Semi-globally, if they are true for all x₀ ∈ B_h with h arbitrary;
* Uniformly, if they are true for all t₀ ≥ 0.

4.4.1 Basic Theorems


Generally speaking, the basic theorem of Lyapunov states that when v(x, t) is a
p.d.f. or an l.p.d.f. and dv(x, t)/dt ≤ 0, then we can conclude stability of the equilibrium
point. The time derivative is
    dv(x, t)/dt = ∂v(x, t)/∂t + (∂v(x, t)/∂x) f(x, t).
The rate of change of v(x,t) along the trajectories of the vector field is also called
the Lie derivative of v(x,t) along f(x,t). In the statement of the following theorem
recall that we have translated the origin to lie at the equilibrium point under
consideration.
Table 4-1: Basic Theorems

      Conditions on v(x, t)      Conditions on −v̇(x, t)   Conclusions
  1   l.p.d.f.                   ≥ 0 locally               stable
  2   l.p.d.f., decrescent       ≥ 0 locally               uniformly stable
  3   l.p.d.f., decrescent       l.p.d.f.                  uniformly asymptotically stable
  4   p.d.f., decrescent         p.d.f.                    globally uniformly asymptotically stable
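A quick numerical sanity check of case 1 of Table 4-1 (the scalar system and Lyapunov candidate below are assumed purely for illustration): for ẋ = −x³ with v(x) = x²/2, the derivative along trajectories is v̇ = −x⁴ ≤ 0, so v should be non-increasing along any simulated trajectory:

```python
def simulate_v(x0, dt=1e-3, steps=5000):
    """Forward-Euler trajectory of x' = -x**3, recording v(x) = x**2 / 2."""
    x, vs = x0, []
    for _ in range(steps):
        vs.append(0.5 * x * x)
        x += dt * (-x ** 3)          # Euler step of the vector field
    return vs

vs = simulate_v(2.0)
monotone = all(vs[k + 1] <= vs[k] for k in range(len(vs) - 1))
print(monotone, vs[0], vs[-1])       # v decreases from 2.0 toward 0
```

This only checks stability (case 1); asymptotic convergence of the numerical trajectory is consistent with, but not proved by, the basic theorem alone.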

4.4.2 Exponential Stability Theorems

Assume that f(x, t): ℝ₊ × ℝⁿ → ℝⁿ has continuous first partial derivatives in x and is
piecewise continuous in t. Then the two statements below are equivalent:
1. x = 0 is a locally exponentially stable equilibrium point of
       ẋ = f(x, t),
   i.e., if x₀ ∈ B_h for h small enough, there exist m, α > 0 such that
       |φ(t, x₀, t₀)| ≤ m e^{−α(t−t₀)} |x₀|.
2. There exists a function v(x, t) and some constants h, α₁, α₂, α₃, α₄ > 0 such that
   for all x ∈ B_h, t ≥ 0,
       α₁|x|² ≤ v(x, t) ≤ α₂|x|²,
       dv(x, t)/dt ≤ −α₃|x|²,
       |∂v(x, t)/∂x| ≤ α₄|x|.
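The equivalence can be illustrated on an assumed scalar example: for ẋ = −2x the flow is φ(t, x₀) = e^{−2t} x₀, so the bound of statement 1 holds with m = 1, α = 2, while v(x) = x² satisfies statement 2 with α₁ = α₂ = 1, α₃ = 4, α₄ = 2. The sketch below checks the exponential envelope along a simulated trajectory:

```python
import math

x0, dt = 3.0, 1e-3
x, ok = x0, True
for k in range(1, 5001):
    x += dt * (-2.0 * x)                       # forward Euler step of x' = -2x
    bound = math.exp(-2.0 * dt * k) * abs(x0)  # envelope with m = 1, alpha = 2
    ok = ok and abs(x) <= bound + 1e-12        # trajectory stays under envelope
print(ok, x)
```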

4.4.3 LaSalle's Invariance Principle

LaSalle's invariance principle has two main applications:
1. It enables one to conclude asymptotic stability even when −v̇(x, t) is not an l.p.d.f.
2. It enables one to prove that trajectories of the differential equation starting in a
given region converge to one of many equilibrium points in that region.
However, the principle applies primarily to autonomous or periodic systems, which
are discussed in this section.
Let v: ℝⁿ → ℝ be continuously differentiable and suppose that
    Ω_c = { x ∈ ℝⁿ : v(x) ≤ c }
is bounded and v̇(x) ≤ 0 for all x ∈ Ω_c. Define S by
    S = { x ∈ Ω_c : v̇(x) = 0 },
and let M be the largest invariant set in S. Then whenever x₀ ∈ Ω_c, φ(t, x₀, 0)
approaches M as t → ∞.
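The principle can be seen at work on an assumed example: for ẋ₁ = x₂, ẋ₂ = −x₁ − x₂ with v(x) = (x₁² + x₂²)/2, one finds v̇ = −x₂², which is only negative semi-definite (it vanishes on the whole line x₂ = 0). The largest invariant set inside S = {x : x₂ = 0} is the origin, so the principle predicts convergence to (0, 0), which the sketch below checks numerically:

```python
def simulate(x1, x2, dt=1e-3, steps=20000):
    """Forward-Euler trajectory of x1' = x2, x2' = -x1 - x2."""
    for _ in range(steps):
        x1, x2 = x1 + dt * x2, x2 + dt * (-x1 - x2)
    return x1, x2

x1f, x2f = simulate(1.0, 0.0)   # the initial point lies ON the set {x2 = 0}
print(x1f, x2f)                 # both coordinates should be near zero at t = 20
```

Note that v̇ = 0 at the initial point, yet the trajectory immediately leaves {x₂ = 0} and decays to the origin, exactly as the invariance argument requires.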

4.4.4 Generalizations of LaSalle's Principle

LaSalle's invariance principle is restricted in applications because it holds only
for time-invariant and periodic systems. For extending the result to arbitrary
time-varying systems, two difficulties arise:
1. {x : v̇(x, t) = 0} may be a time-varying set.
2. The limit set of a trajectory is itself not invariant.
However, if we have the hypothesis that
    v̇(x, t) ≤ −ω(x) ≤ 0,
then the set S may be defined to be
    S = { x : ω(x) = 0 },
and we may state the following generalization of LaSalle's theorem as in the
following paragraph.
Assume that the vector field f(x, t) is locally Lipschitz continuous in x, uniformly
in t, in a ball of radius r. Let v(x, t) satisfy, for functions γ₁, γ₂ of class K,
    γ₁(|x|) ≤ v(x, t) ≤ γ₂(|x|).
Further, for some non-negative function ω(x), assume that
    v̇(x, t) = ∂v/∂t + (∂v/∂x) f(x, t) ≤ −ω(x) ≤ 0.
Then for all |x(t₀)| ≤ γ₂⁻¹(γ₁(r)), the trajectories x(·) are bounded and
    lim_{t→∞} ω(x(t)) = 0.

4.4.5 Instability Theorems

Lyapunov's theorems presented in the previous sections give sufficient conditions
for establishing the stability of an equilibrium point. In this section we will give some
sufficient conditions for establishing the instability of an equilibrium point.
The equilibrium point 0 is unstable at time t₀ if there exists a decrescent
function v: ℝⁿ × ℝ₊ → ℝ such that:
1. v̇(x, t) is an l.p.d.f.;
2. v(0, t) = 0, and there exist points x arbitrarily close to 0 such that v(x, t₀) > 0.


5. The Applications
5.1 View of Random Process
Recall that a system Y(ζ, t) = T[X(ζ, t)] is called memoryless iff the output Y(ζ, t)
is a function of the input X(ζ, t) only for the same time instant. For example, Y(ζ, t)
= X(ζ, t) and Y(ζ, t) = X²(ζ, t) are memoryless systems. Note that Y(ζ, t) =
X²(ζ, t) is nonlinear. We are here interested in memoryless nonlinear systems whose
input and output are both real-valued and can be characterized by Y(ζ, t) = g(X(ζ, t)),
where g(x) is a function of x.

Figure 5-1: A nonlinear memoryless system

For memoryless nonlinear systems,
* if X(ζ, t) is a strict-sense stationary process, so is Y(ζ, t);
* if X(ζ, t) is stationary of order N, so is Y(ζ, t);
* if X(ζ, t) is a wide-sense stationary process, Y(ζ, t) may not be
stationary in any sense.
Therefore, the second-moment description of X(ζ, t) is not sufficient for a
second-moment description of the output of a memoryless nonlinear system.
Examples of Nonlinearity:
* Full-Wave Square Law: g(x) = ax².

Figure 5-2: g(x) = ax²

* Half-Wave Linear Law: g(x) = ax·u(x), with u(x) a unit step function.

Figure 5-3: g(x) = ax·u(x)

* Hysteresis Law

Figure 5-4: Hysteresis Law

* Hard Limiter

Figure 5-5: Hard limiter

* Soft Limiter

Figure 5-6: Soft limiter
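For reference, the listed laws can be written out directly (a minimal sketch; the gain a and the limiter level are free parameters of the examples, and the hysteresis law is omitted because it needs internal state rather than a single function of the present input):

```python
def full_wave_square(x, a=1.0):
    return a * x * x                       # g(x) = a x^2

def half_wave_linear(x, a=1.0):
    return a * x if x > 0 else 0.0         # g(x) = a x u(x)

def hard_limiter(x, level=1.0):
    return level if x >= 0 else -level     # output takes only +/- level

def soft_limiter(x, a=1.0, level=1.0):
    return max(-level, min(level, a * x))  # linear region, then saturation

print(full_wave_square(-2.0), half_wave_linear(-2.0),
      hard_limiter(-0.5), soft_limiter(3.0))
```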
Most of the nonlinear analytical methods concentrate on the second-order
statistical description of input and output processes, namely autocorrelations and
power spectra. One famous approach is the direct method, which deals with
probability density functions and is well suited to the case where X(ζ, t) is Gaussian.
Consider the memoryless nonlinear system Y(ζ, t) = g(X(ζ, t)), where the first-order
and second-order densities of the input process X(ζ, t), namely f_X(x; t) and
f_X(x₁, x₂; t₁, t₂), are given.

Figure 5-7: A memoryless nonlinear system

Now, the following statistics of the output Y(ζ, t) can be obtained:
    f_Y(y; t) = Σ_{all roots} f_X(x_i; t) |J(x_i)|,
    E{Y(ζ, t)} = ∫_{−∞}^{∞} g(x) f_X(x; t) dx,
    R_Y(t₁, t₂) = E{g(X(ζ, t₁)) g(X(ζ, t₂))} = ∬ g(x₁) g(x₂) f_X(x₁, x₂; t₁, t₂) dx₁ dx₂.

Consider some examples below.

Full-Wave Square Law Device: Y(ζ, t) = aX²(ζ, t) with a > 0. For y ≥ 0,
    f_Y(y; t) = (1 / (2√(ay))) [ f_X(√(y/a); t) + f_X(−√(y/a); t) ],
and f_Y(y; t) = 0 for y < 0.
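This density formula can be sanity-checked numerically. The sketch below assumes a standard Gaussian input density f_X (an assumption made only for this check); the resulting f_Y must integrate to 1 and must have mean E{Y} = a·E{X²} = a:

```python
import math

def f_X(x):
    """Assumed N(0, 1) input density."""
    return math.exp(-x * x / 2.0) / math.sqrt(2.0 * math.pi)

def f_Y(y, a):
    """Full-wave square-law output density, valid for y > 0."""
    r = math.sqrt(y / a)
    return (f_X(r) + f_X(-r)) / (2.0 * math.sqrt(a * y))

a, dy = 2.0, 2e-4
ys = [dy * (k + 0.5) for k in range(100000)]    # midpoint grid on (0, 20]
total = sum(f_Y(y, a) * dy for y in ys)         # should be close to 1
mean = sum(y * f_Y(y, a) * dy for y in ys)      # should be close to a
print(total, mean)
```

The small deficits come from truncating the grid at y = 20 and from the integrable 1/√y singularity at the origin.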


Square-Law Detector:

Figure 5-8: Square-law detector


The zonal LPF passes one spectral band undistorted and rejects everything else.
Assume further that X(ζ, t) is a narrowband WSS Gaussian noise process n(ζ, t)
with zero mean and autocorrelation R_n(τ). Thus, X(ζ, t) can be expressed in the
polar form
    X(ζ, t) = n(ζ, t) = v(ζ, t) cos(2πf_c t + φ(ζ, t)).
For fixed t, v(ζ, t) and φ(ζ, t) are independent random variables, where v(ζ, t) is
Rayleigh distributed with
    f_v(v; t) = (v / R_n(0)) exp{ −v² / (2R_n(0)) },  v ≥ 0,
and f_v(v; t) = 0 for v < 0, and φ(ζ, t) is uniform in [0, 2π). Here, σ_n² = R_n(0) =
E{n²(ζ, t)} and E{v²(ζ, t)} = 2R_n(0). Note that the zonal LPF filters out all information
about frequency and phase. When the bandwidth of the zonal LPF is much smaller
than f_c, we have
    Y(ζ, t) = a v²(ζ, t) cos²(2πf_c t + φ(ζ, t))
            = (a/2) [ v²(ζ, t) + v²(ζ, t) cos(4πf_c t + 2φ(ζ, t)) ],
    Z(ζ, t) = (a/2) v²(ζ, t).
Consider further that S_n(f) = A for |f − f_c| < B/2 and |f + f_c| < B/2, and S_n(f) = 0
otherwise, with A, B > 0. The zonal bandwidth is assumed larger than 2B.

Figure 5-9

In this case, R_X(0) = R_n(0) = 2AB, and the following can be obtained:
    E{Yⁿ(ζ, t)} = aⁿ E{X²ⁿ(ζ, t)} = aⁿ ((2n)! / (2ⁿ n!)) (R_n(0))ⁿ,
    Var{Y(ζ, t)} = E{Y²(ζ, t)} − E{Y(ζ, t)}² = 3a²·4A²B² − (a·2AB)² = 8a²A²B²,
    E{Z(ζ, t)} = (a/2) E{v²(ζ, t)} = a R_n(0) = 2aAB.
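These closed-form moments lend themselves to a Monte Carlo check (a sketch; the parameter values a, A, B below are assumed). Drawing X as zero-mean Gaussian with variance R_n(0) = 2AB, the sample mean and variance of Y = aX² should approach 2aAB and 8a²A²B²:

```python
import random

random.seed(0)                          # deterministic run for the check
a, A, B = 0.5, 1.0, 2.0                 # assumed parameter values
sigma = (2.0 * A * B) ** 0.5            # sqrt(R_n(0))
N = 200000
ys = [a * random.gauss(0.0, sigma) ** 2 for _ in range(N)]
mean = sum(ys) / N
var = sum((y - mean) ** 2 for y in ys) / N
print(mean, var)                        # expect ~2aAB = 2.0 and ~8a^2A^2B^2 = 8.0
```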

5.2 Example of the Application of Lyapunov's Theorem

Consider the following model of an RLC circuit with a linear inductor, nonlinear
capacitor, and nonlinear resistor, as shown in Figure 5-10. This is also a model for a
mechanical system with a mass coupled to a nonlinear spring and nonlinear damper,
as shown in Figure 5-10. Using as state variables x₁, the charge on the capacitor
(respectively, the position of the block), and x₂, the current through the inductor
(respectively, the velocity of the block), the equations describing the system are
    ẋ₁ = x₂,    ẋ₂ = −f(x₂) − g(x₁).
Here f is a continuous function modeling the resistor current-voltage characteristic,
and g the capacitor charge-voltage characteristic (respectively, the friction and
restoring-force models in the mechanical analog). We will assume that f, g both model
locally passive elements, i.e., there exists a σ₀ > 0 such that
    σ f(σ) ≥ 0    ∀σ ∈ [−σ₀, σ₀],
    σ g(σ) ≥ 0    ∀σ ∈ [−σ₀, σ₀].


Figure 5-10: An RLC circuit and its mechanical analogue


The Lyapunov function candidate is the total energy of the system, namely,
    v(x) = x₂²/2 + ∫₀^{x₁} g(σ) dσ.
The first term is the energy stored in the inductor (kinetic energy of the body) and the
second term the energy stored in the capacitor (potential energy stored in the spring).
The function v(x) is an l.p.d.f., provided that g(x₁) is not identically zero on any
interval (verify that this follows from the passivity of g). Also,
    v̇(x) = x₂(−f(x₂) − g(x₁)) + g(x₁)x₂ = −x₂ f(x₂) ≤ 0
whenever |x₂| ≤ σ₀. This establishes the stability, but not the asymptotic
stability, of the origin. In point of fact, the origin is actually asymptotically stable, but
proving this needs the LaSalle invariance principle.
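The analysis can be checked numerically for an assumed pair of passive characteristics, f(σ) = σ + σ³ and g(σ) = σ³ (hypothetical choices that satisfy the passivity conditions). The energy v(x) = x₂²/2 + x₁⁴/4 should then decay along simulated trajectories:

```python
def f(s): return s + s ** 3        # assumed passive resistor/damper law
def g(s): return s ** 3            # assumed passive capacitor/spring law

def v(x1, x2):
    """Total energy: x2^2/2 plus the integral of g from 0 to x1, i.e. x1^4/4."""
    return 0.5 * x2 ** 2 + 0.25 * x1 ** 4

x1, x2, dt = 1.0, 0.0, 1e-3
vs = [v(x1, x2)]
for _ in range(10000):             # forward Euler up to t = 10
    x1, x2 = x1 + dt * x2, x2 + dt * (-f(x2) - g(x1))
    vs.append(v(x1, x2))
print(vs[0], vs[-1])               # the stored energy decays substantially
```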

5.3 Swing Equation

The dynamics of a single synchronous generator coupled to an infinite bus are
given by
    θ̇ = ω,    ω̇ = −M⁻¹Dω + M⁻¹(P − B sin(θ)).
Here θ is the angle of the rotor of the generator measured relative to a
synchronously spinning reference frame, and its time derivative is ω. Also, M is the
moment of inertia of the generator and D its damping, both in normalized units; P is
the exogenous power input to the generator from the turbine, and B the susceptance of
the line connecting the generator to the rest of the network, modeled as an infinite bus
(see Figure 5-11). A choice of Lyapunov function is the energy function
    v(θ, ω) = (1/2)Mω² − Pθ − B cos(θ),
whose derivative along trajectories is v̇ = −Dω² ≤ 0,
yielding the stability of the equilibrium point. As in the previous example, one cannot
conclude asymptotic stability of the equilibrium point from this analysis.
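A sketch of this analysis in simulation (all parameter values M, D, P, B below are assumed): since v̇ = −Dω² ≤ 0, the energy-type function should be non-increasing toward its value at the equilibrium θ* = asin(P/B), ω* = 0:

```python
import math

M, D, P, B = 1.0, 0.5, 0.5, 1.0            # assumed normalized parameters

def v(th, om):
    """Energy-type Lyapunov function for the swing equation."""
    return 0.5 * M * om * om - P * th - B * math.cos(th)

th, om, dt = math.asin(P / B) + 0.3, 0.0, 1e-3   # start near the equilibrium
v0 = v(th, om)
for _ in range(20000):                           # forward Euler up to t = 20
    th, om = (th + dt * om,
              om + dt * (-(D / M) * om + (P - B * math.sin(th)) / M))
vT = v(th, om)
print(v0, vT, th)    # vT < v0, and th settles near asin(P/B) ~ 0.5236
```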


Figure 5-11: A generator coupled to an infinite bus

6. Conclusion
The development of nonlinear methods faces real difficulties for various reasons.
There are no universal mathematical methods for the solution of nonlinear differential
equations, which are the mathematical models of nonlinear systems. The available
methods deal with specific classes of nonlinear equations and have only limited
applicability to system analysis. The classification of a given system and the choice of
an appropriate method of analysis are not at all an easy task. Furthermore, even in
simple nonlinear problems, there are numerous new phenomena qualitatively different
from those expected in linear system behavior, and it is impossible to encompass all
these phenomena in a single and unique method of analysis.
Although there is no universal approach to the analysis of nonlinear systems,
setting aside specialized techniques, we can still conclude that the nonlinear methods
generally fall under one of the three following approaches: the phase-space topological
techniques, the stability analysis methods, and the approximate methods of nonlinear
analysis. This classification of the nonlinear methods is rather subjective but can be
useful in systematizing their review.
Moreover, we introduced the stability of nonlinear systems. There are various
methods for analyzing the stability of nonlinear systems. Owing to limited space, we
only discussed some of the important ideas, including input-output analysis, Lyapunov
stability theory, and LaSalle's principle.


7. References
[1] Black H.S., Stabilized feedback amplifiers, Bell System Techn. J., 13, 1-18,
1934.
[2] Bogoliubov N.N., and Mitropolskiy Yu.A., Asymptotic Methods in the Theory of
Non-Linear Oscillations, New York, Gordon and Breach, 1961.
[3] Director, S.W., and Rohrer, R.A., Introduction to Systems Theory, McGraw-Hill,
New York, 1972.
[4] Doyle J.C., Francis B.A., Tannenbaum A.R., Feedback Control Theory,
Macmillan Publishing Company, New York, 1992.
[5] Dulac, H., Signals, Systems, and Transforms, 3rd ed., Prentice Hall, New-York,
2002.
[6] Gelb, A., and Velde, W.E., Multiple-Input Describing Functions and Nonlinear
System Design, McGraw-Hill, New York, 1968.
[7] Guckenheimer, J., Holmes, P., Nonlinear Oscillations, Dynamical Systems, and
Bifurcations of Vector Fields, 7th printing, Springer-Verlag, New-York, 2002.
[8] Nayfeh A.H., and Mook D.T., Nonlinear Oscillations, New York, John Wiley &
Sons, 1999.
[9] Haykin, S., and Van Veen, B., Signals and Systems, 2nd ed., New-York, Wiley &
Sons, 2002.
[10] Hilborn, R., Chaos and Nonlinear Dynamics: An Introduction for Scientists and
Engineers, 2nd ed., Oxford University Press, New-York, 2004.
[11] Jordan D.W., and Smith P., Nonlinear Ordinary Differential Equations: An
Introduction to Dynamical Systems, 3rd ed., New York, Oxford Univ. Press,
1999.
[12] Khalil H.K., Nonlinear systems, Prentice-Hall, 3rd. Edition, Upper Saddle River,
2002.
[13] Rugh W. J., Nonlinear System Theory: The Volterra/Wiener Approach, Baltimore,
Johns Hopkins Univ. Press, 1981.
[14] Samarskiy, A.A., and Gulin, A.V., Numerical Methods, Nauka, Moscow, 1989.
[15] Sandberg, I.W., On the response of nonlinear control systems to periodic input
signals, Bell Syst. Tech. J., 43, 1964.
[16] Sastry, S., Nonlinear Systems: Analysis, Stability and Control, Springer-Verlag,
New York, 1999.
[17] Shmaliy, Yu. S., Continuous-Time Signals, Springer, Dordrecht, 2006.
[18] Verhulst, F., Nonlinear Differential Equations and Dynamical Systems,
Springer-Verlag, Berlin, 1990.
[19] Wiener, N. Response of a Non-Linear Device to Noise, Report No. 129,
Radiation Laboratory, M.I.T., Cambridge, MA, Apr. 1942.
[20] Zames, G., Realizability conditions for nonlinear feedback systems, IEEE Trans.
Circuit Theory, CT-11, 186-194, 1964.

