
# 1 Gain Scheduling

The basic limitation of the design-via-linearization approach is that the controller is guaranteed to work only in some neighborhood of a single operating point. Gain scheduling is a technique that extends the validity of the design to a range of possible operating points. In many situations it is known how the dynamics of a system change with its operating point. It might even be possible to model the system in such a way that the operating points are parameterized by one or more variables, which we call scheduling variables. In such situations we may linearize the system at several equilibrium points, design a feedback controller at each point, and implement the resulting family of linear controllers as a single controller whose parameters are changed by monitoring the scheduling variables. Such a controller is called a gain-scheduled controller.

Idea:
- Linearize the system at a family of operating points, parameterized by the scheduling variables
- Design a linear controller for each equilibrium point to achieve the specified performance
- Design rules to switch from one controller to another
- Check non-local performance via analytical tools and simulation
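As a minimal sketch of this idea (everything below — the plant $\dot{x} = \sin x + u$, the pole location $-2$, and the setpoint sequence — is an illustrative assumption, not taken from the text), one can trim and linearize a scalar plant at each setpoint $\alpha$ and schedule the gain accordingly:

```python
import math

# Illustrative plant (an assumption of this sketch): dx/dt = sin(x) + u.
# For a setpoint alpha, the trim input is u_ss = -sin(alpha), and the
# linearization has slope df/dx = cos(alpha), so the scheduled gain
# K(alpha) = cos(alpha) + 2 places the frozen closed-loop pole at -2
# for every operating point.
def u_scheduled(x, alpha):
    u_ss = -math.sin(alpha)          # trim input at the operating point
    K = math.cos(alpha) + 2.0        # scheduled feedback gain
    return u_ss - K * (x - alpha)

dt = 1e-3
x = 0.0
for alpha in [0.5, 1.5, 2.5]:        # step through a family of operating points
    for _ in range(int(8.0 / dt)):   # let the system settle at each setpoint
        x += dt * (math.sin(x) + u_scheduled(x, alpha))
print(round(x, 3))                   # -> 2.5
```

At each setpoint the same design rule is reused; only the scheduling variable changes, so the frozen-time closed-loop pole stays at $-2$ across the whole family of operating points.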

Remark 1.5: The region of attraction of one equilibrium point contains the neighboring equilibrium points.

Consider the system

$$
\dot{x} = f(x, u, v, w), \qquad y = h(x, w), \qquad y_m = h_m(x, w)
$$

Here $x$ is the state, $u$ the control input, $v$ a measured exogenous input, $w$ a vector of unknown constant parameters and disturbances, $y$ the controlled output, and $y_m$ the measured output. We want to design an output feedback controller that achieves a small tracking error $e = y - r$ in response to the exogenous input

$$
\rho = \begin{bmatrix} r \\ v \end{bmatrix} \in D_\rho = D_r \times D_v
$$
We use integral control to achieve zero steady-state error when $\rho$ is constant, and rely on gain scheduling to achieve small error for slowly varying $\rho(t)$; $\rho$ serves as the scheduling variable. For the design of the integral control, we assume that there is a unique pair $(x_{ss}, u_{ss}) : D_\rho \times D_w \to D_x \times D_u$, continuously differentiable in $\rho$ and continuous in $w$, such that

$$
0 = f\left(x_{ss}(\rho, w),\, u_{ss}(\rho, w),\, v,\, w\right)
$$

$$
r = h\left(x_{ss}(\rho, w),\, w\right)
$$

for all $(\rho, w) \in D_\rho \times D_w$.

For a frozen value $\rho = \alpha$, we can use linearization to design an integral controller of the form

$$
\begin{aligned}
\dot{\sigma} &= e = y - r \\
\dot{z} &= F(\alpha)\, z + G_1(\alpha)\, \sigma + G_2(\alpha)\, y_m \\
u &= L(\alpha)\, z + M_1(\alpha)\, \sigma + M_2(\alpha)\, y_m + M_3(\alpha)\, e
\end{aligned}
$$

where the controller gains $F$, $G_1$, $G_2$, $L$, $M_1$, $M_2$ and $M_3$ are continuously differentiable functions of $\alpha$, designed such that

$$
A_c(\alpha, w) = \begin{bmatrix}
A + B M_2 C_m + B M_3 C & B M_1 & B L \\
C & 0 & 0 \\
G_2 C_m & G_1 & F
\end{bmatrix}
$$

is Hurwitz for all $(\alpha, w) \in D_\rho \times D_w$, where

$$
A = \frac{\partial f}{\partial x}, \qquad
B = \frac{\partial f}{\partial u}, \qquad
C = \frac{\partial h}{\partial x}, \qquad
C_m = \frac{\partial h_m}{\partial x}
$$
with all the Jacobian matrices evaluated at $x = x_{ss}(\alpha, w)$, $u = u_{ss}(\alpha, w)$, and the frozen value of $v$. The new element here is allowing the controller gains to depend on $\alpha$. $A_c$ is the linearization of the closed-loop system

$$
\begin{aligned}
\dot{x} &= f\left(x,\; L z + M_1 \sigma + M_2\, h_m(x, w) + M_3\, e,\; v,\; w\right) \\
\dot{\sigma} &= h(x, w) - r \\
\dot{z} &= F z + G_1 \sigma + G_2\, h_m(x, w)
\end{aligned}
$$

In the state feedback case, we can drop $z$ and its state equation and take $y_m = x$, $L = 0$, $M_1 = K_2$, $M_2 = K_1$ and $M_3 = 0$, where $K = \begin{bmatrix} K_1 & K_2 \end{bmatrix}$ is designed such that

$$
\begin{bmatrix}
A + B K_1 & B K_2 \\
C & 0
\end{bmatrix}
$$

is Hurwitz for every $(\alpha, w) \in D_\rho \times D_w$.

Remark 1.6: The fact that $A_c$ is Hurwitz for every frozen $\alpha$ does not guarantee stability of the closed-loop system when $\alpha = \alpha(t)$ is time varying.
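Remark 1.6 can be made concrete with a classical linear time-varying example (a standard textbook construction, not taken from this text): the frozen-time eigenvalues of $A(t)$ are constant and lie in the open left half plane, yet the time-varying system has exponentially growing solutions.

```python
import numpy as np

# Classical example (standard in the literature, not from this text).
# For a = 1.5 the frozen-time characteristic polynomial of A(t) is
# s^2 + 0.5 s + 0.5 for every t (eigenvalues -0.25 +/- j 0.66...),
# yet x(t) = e^{0.5 t} (cos t, -sin t)^T is an unbounded solution.
a = 1.5

def A(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[-1 + a*c*c,  1 - a*s*c],
                     [-1 - a*s*c, -1 + a*s*s]])

# Frozen-time stability: eigenvalues in the open left half plane for all t.
for t in np.linspace(0.0, 10.0, 7):
    assert max(np.linalg.eigvals(A(t)).real) < 0

# Yet the time-varying system is unstable: integrate with explicit Euler.
dt, T = 1e-3, 20.0
x = np.array([1.0, 0.0])
for k in range(int(T / dt)):
    x = x + dt * (A(k * dt) @ x)
print(np.linalg.norm(x))   # grows like e^{0.5 t}
```

Here the frozen-time eigenvalues never move, while the solution grows without bound; frozen-$\alpha$ Hurwitzness alone is not enough when the scheduling variable varies quickly.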

Example 1.5: Consider the second-order system

$$
\begin{aligned}
\dot{x}_1 &= \tan x_1 + x_2 \\
\dot{x}_2 &= x_1 + u \\
y &= x_2
\end{aligned}
$$

where $y$ is the measured signal; that is, $y_m = y$. We want $y$ to track a reference signal $r$, and we use $r$ as the scheduling variable. When $r = \alpha = \mathrm{const}$, the equilibrium equations have the unique solution

$$
x_{1ss} = -\tan^{-1}\alpha, \qquad x_{2ss} = \alpha, \qquad u_{ss} = \tan^{-1}\alpha
$$

We use the observer-based integral controller

$$
\begin{aligned}
\dot{\sigma} &= e = y - r \\
\dot{\hat{x}} &= A(\alpha)\,\hat{x} + B u + H(\alpha)\left(y - C\hat{x}\right) \\
u &= -K_1(\alpha)\,\hat{x} - K_2(\alpha)\,\sigma
\end{aligned}
$$

where

$$
A(\alpha) = \begin{bmatrix} 1+\alpha^2 & 1 \\ 1 & 0 \end{bmatrix}, \qquad
B = \begin{bmatrix} 0 \\ 1 \end{bmatrix}, \qquad
C = \begin{bmatrix} 0 & 1 \end{bmatrix}
$$

$$
K_1(\alpha) = \begin{bmatrix} (1+\alpha^2)(3+\alpha^2) + 3 + \dfrac{1}{1+\alpha^2} & \;\; 3+\alpha^2 \end{bmatrix}, \qquad
K_2(\alpha) = -\,\frac{1}{1+\alpha^2}
$$

$$
H(\alpha) = \begin{bmatrix} 10 + (4+\alpha^2)(1+\alpha^2) \\ 4+\alpha^2 \end{bmatrix}
$$

and $A$, $B$, $C$ are evaluated at $x_{1ss} = -\tan^{-1}\alpha$, $x_{2ss} = \alpha$ and $u_{ss} = \tan^{-1}\alpha$. With these gains, for every frozen $\alpha$ the state-feedback eigenvalues are the roots of $(s+1)(s^2+s+1)$ and the observer-error eigenvalues are the roots of $s^2+3s+9$. The next two plots illustrate the behavior of such a controller. The first shows the response of the closed-loop system to a sequence of reference steps when a fixed-gain controller, evaluated at $\alpha = 0$, is used.

For small reference inputs, the response is as good as the one with the gain-scheduled controller, but as the reference signal increases the performance deteriorates, and eventually the system goes unstable. The second shows the response of the closed-loop system under the unmodified gain-scheduled controller to the same sequence of steps as before.

While stability and zero steady-state tracking error are achieved, the transient response deteriorates rapidly as the reference signal increases. Such poor transient behavior could lead to instability by taking the state of the system out of the finite region of attraction, although instability was not observed in this example.
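The frozen-$\alpha$ design of Example 1.5 can be exercised in simulation. The sketch below is an illustrative reconstruction, not the text's own code: the gain formulas coded here are one Hurwitz-consistent reading of the scheduled gains, and the step value $r = 0.2$, the horizon, and the explicit Euler discretization are choices of this sketch.

```python
import numpy as np

# Sketch of Example 1.5 in closed loop; gains, step size, and horizon
# are choices/reconstructions of this illustration.
def gains(alpha):
    a2 = alpha**2
    K1 = np.array([(1 + a2)*(3 + a2) + 3 + 1/(1 + a2), 3 + a2])
    K2 = -1/(1 + a2)
    H  = np.array([10 + (4 + a2)*(1 + a2), 4 + a2])
    A  = np.array([[1 + a2, 1.0], [1.0, 0.0]])
    return K1, K2, H, A

B = np.array([0.0, 1.0])
C = np.array([0.0, 1.0])

dt, T = 1e-3, 30.0
r = 0.2                      # reference step; also the scheduling variable
x = np.zeros(2)              # plant state
xhat = np.zeros(2)           # observer state
sigma = 0.0                  # integrator state
for _ in range(int(T / dt)):
    y = x[1]
    e = y - r
    K1, K2, H, A = gains(r)                                # gains frozen at alpha = r
    u = -K1 @ xhat - K2 * sigma
    x = x + dt * np.array([np.tan(x[0]) + x[1], x[0] + u])  # nonlinear plant
    xhat = xhat + dt * (A @ xhat + B*u + H*(y - C @ xhat))  # observer
    sigma = sigma + dt * e                                  # integrator
print(abs(x[1] - r))          # tracking error after the transient
```

Because the integrator forces $e = 0$ at any closed-loop equilibrium, the printed error should be near zero whenever the step is small enough to stay in the region of attraction.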
If $\dot{y}_m$ is available or can be reasonably well estimated, modifications can be made that increase the stability of the system. In that case, we can represent the controller

$$
\begin{aligned}
\dot{\sigma} &= e = y - r \\
\dot{z} &= F(\alpha)\, z + G_1(\alpha)\, \sigma + G_2(\alpha)\, y_m \\
u &= L(\alpha)\, z + M_1(\alpha)\, \sigma + M_2(\alpha)\, y_m + M_3(\alpha)\, e
\end{aligned}
$$

as

$$
\begin{aligned}
\dot{z} &= F(\alpha)\, z + G(\alpha)\, \phi \\
u &= L(\alpha)\, z + M(\alpha)\, \phi + M_3(\alpha)\, e
\end{aligned}
$$

where

$$
\phi = \begin{bmatrix} \sigma \\ y_m \end{bmatrix}, \qquad
\dot{\phi} = \begin{bmatrix} e \\ \dot{y}_m \end{bmatrix}, \qquad
G = \begin{bmatrix} G_1 & G_2 \end{bmatrix}, \qquad
M = \begin{bmatrix} M_1 & M_2 \end{bmatrix}
$$

The controller is equivalent to

$$
\begin{aligned}
\dot{\zeta} &= F(\alpha)\, \zeta + G(\alpha)\, \dot{\phi} \\
\dot{\vartheta} &= L(\alpha)\, \zeta + M(\alpha)\, \dot{\phi} \\
u &= \vartheta + M_3(\alpha)\, e
\end{aligned}
$$
If the measurement $\dot{y}_m$ is not available, we can use the gain-scheduled controller

$$
\begin{aligned}
\dot{\zeta} &= F(\alpha)\, \zeta + G_1(\alpha)\, e + G_2(\alpha)\, \omega \\
\dot{\vartheta} &= L(\alpha)\, \zeta + M_1(\alpha)\, e + M_2(\alpha)\, \omega \\
u &= \vartheta + M_3(\alpha)\, e
\end{aligned}
$$
where $\dot{y}_m$ is replaced by its estimate $\omega$, provided by the filter

$$
\varepsilon\,\dot{q} = -q + y_m, \qquad \omega = \frac{1}{\varepsilon}\left(-q + y_m\right)
$$

where $\varepsilon$ is a sufficiently small positive constant and the filter is always initialized at $q(0)$ such that

$$
\left| q(0) - y_m(0) \right| \le \varepsilon k
$$

for some $k > 0$. Since $y_m$ is measured, we can always meet this initial condition. Furthermore, whenever the system is initiated from an equilibrium point, the condition above is automatically satisfied, since at equilibrium $q = y_m$. The filter acts as an approximate differentiator when $\varepsilon$ is sufficiently small, as can be seen from its transfer function

$$
\omega = \frac{s}{\varepsilon s + 1}\; y_m
$$

which approximates the differentiator transfer function $s\, y_m$ for frequencies much smaller than $1/\varepsilon$.
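The approximation can be checked numerically. In the sketch below, the test signal $y_m = \sin t$ and $\varepsilon = 0.01$ are assumptions of this illustration; the filter is initialized at $q(0) = y_m(0)$, and its output $\omega$ is compared with the exact derivative $\cos t$:

```python
import numpy as np

eps = 0.01                  # small filter constant (a choice of this illustration)
dt, T = 1e-4, 5.0
t, q = 0.0, 0.0             # q(0) = ym(0) = 0 meets the initialization condition
max_err = 0.0
for _ in range(int(T / dt)):
    ym = np.sin(t)                   # measured signal (illustrative choice)
    omega = (-q + ym) / eps          # estimate of d(ym)/dt
    if t > 0.5:                      # skip the fast initial transient
        max_err = max(max_err, abs(omega - np.cos(t)))
    q += dt * (-q + ym) / eps        # eps * dq/dt = -q + ym
    t += dt
print(max_err)                       # small: the filter tracks the derivative
```

After the fast transient decays, the residual error is of order $\varepsilon$: the signal's frequency (here $1$ rad/s) is far below $1/\varepsilon = 100$ rad/s, where the filter behaves like a differentiator.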

Example 1.6: (Continuation of Example 1.5)

Since $\dot{y}$ is not available, we implement the gain-scheduled controller with $\varepsilon = 0.01$. The next figure shows the response of the closed-loop system to a sequence of step changes in the reference signal. If the initial state is in the region of attraction of the new equilibrium point, the system reaches steady state at that point. Since our controller is based on linearization, it guarantees only local stabilization; therefore, in general, step changes in the reference signal have to be small. A large reference value can be reached by a sequence of small step changes, as in the figure, allowing enough time for the system to settle down after each step.

Another method is to change the reference slowly from one set point to another. The next figure shows the response of the closed-loop system to a slow ramp that takes the set point from zero to one over a period of 100 seconds.

This response is consistent with our conclusions about the behavior of gain-scheduled controllers under slowly varying scheduling variables. As the slope of the ramp increases, the tracking performance deteriorates, and if we keep increasing the slope, the system will eventually become unstable. So far, our analysis of the closed-loop system under a gain-scheduled controller has focused on the local behavior in the neighborhood of a constant operating point. Can we say more about the behavior of the nonlinear system? In applications of gain scheduling, the practice has been that one can schedule on time-varying variables as long as they are slow relative to the dynamics of the system. This practice is justified by the next theorem.
Theorem 1.1: Consider the closed-loop system under the stated assumptions. Suppose $\rho(t)$ is continuously differentiable, $\rho(t) \in S$ (a compact subset of $D_\rho$), and

$$
\left\lVert \dot{\rho}(t) \right\rVert \le \mu \quad \text{for all } t \ge 0
$$

Then there exist positive constants $k_1$, $k_2$, $k$, and $T$ such that if $\mu < k_1$ and the initial state is within distance $k_2$ of the initial equilibrium point, then

$$
\lVert e(t) \rVert \le k\mu, \qquad \forall\, t \ge T
$$

Furthermore, if $\rho(t) \to \rho_{ss}$ and $\dot{\rho}(t) \to 0$ as $t \to \infty$, then $e(t) \to 0$ as $t \to \infty$.

In other words, the theorem shows that if $\rho(t)$ is slowly varying and the initial state is not too far from the initial equilibrium point, then the tracking error remains bounded by a constant times $\mu$, and tends to zero when $\dot{\rho}(t)$ tends to zero.
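The slow-scheduling conclusion can be illustrated on a toy loop (the plant $\dot{x} = \sin x + u$, the gains, and the ramp rates below are assumptions of this sketch, not from the text): the same ramp traversed 20 times faster produces a much larger peak tracking error.

```python
import math

# Illustrative gain-scheduled tracking loop (an assumption of this sketch).
# Linearizing dx/dt = sin(x) + u at x = alpha gives df/dx = cos(alpha),
# so K1(alpha) = cos(alpha) + 2 with integral gain K2 = 1 places the
# frozen-alpha poles of the augmented (x, sigma) system at -1, -1.
def run(ramp_time):
    dt = 1e-3
    steps = int((ramp_time + 10.0) / dt)   # ramp, then time to settle
    x, sigma, max_err = 0.0, 0.0, 0.0
    for k in range(steps):
        t = k * dt
        r = 2.0 * min(t / ramp_time, 1.0)      # ramp from 0 to 2, then hold
        e = x - r
        u = -(math.cos(r) + 2.0) * x - sigma   # gains scheduled on r
        x += dt * (math.sin(x) + u)
        sigma += dt * e                        # integral action
        max_err = max(max_err, abs(e))
    return max_err

slow, fast = run(100.0), run(5.0)
print(slow, fast)   # slower scheduling => smaller peak tracking error
```

Consistent with the theorem, the peak error scales with the rate of the scheduling variable: the slow ramp keeps $\lvert e \rvert$ of the order of the ramp slope, while the fast ramp produces a much larger excursion.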