


Abstract. Sliding mode control is quickly becoming a popular research field due to its several favorable qualities, including robustness. Linear quadratic control has been one of the more popular and more traditional control techniques, partially due to its ease of implementation and its optimality. In this paper, we provide an introduction to sliding mode control. A technique is suggested to allow gradual introduction of a reference signal to the sliding plane. Finally, an inverted pendulum system is used as an example to compare linear quadratic regulation with sliding mode control.

Part 1. Introduction

The majority of this paper is dedicated to introducing sliding mode control and applying it to the inverted pendulum problem. While the title indicates no preference between linear quadratic regulation (LQR) and sliding mode control (SMC), there are numerous volumes of material available on linear quadratic regulation. Most of these are presented in a thorough and easy-to-digest form. For this reason, little time has been spent discussing LQR. Conversely, most sliding mode literature is in the form of journal articles and conference proceedings, and is therefore quite diverse in approach and nomenclature. Here, an attempt has been made to capture the general intent of SMC, even though the limited space does not allow an exhaustive report on available SMC research and findings. There is still a great deal to be done in SMC, and it seems that it will be a fertile research area for many years to come.

Part 2. The Linear Quadratic Regulator (LQR)

The linear quadratic regulator is an optimal and robust technique for MIMO control. Sources that discuss LQR in detail are available in [1] and [2]. In this report, we will primarily discuss time-invariant linear quadratic control; however, this will be briefly derived from the time-varying case.

1. Time-Varying Linear Optimal Control

Given the linear time-invariant system,

(1)
Date: 12/1/97.

\dot{x}(t) = A x(t) + B u(t)


we wish to minimize the performance index given by

(2) J = \frac{1}{2} x^T(T) S(T) x(T) + \frac{1}{2} \int_{t_0}^{T} \left[ x^T(t) Q x(t) + u^T(t) R u(t) \right] dt

where T denotes the final time for the control session. The linear solution that minimizes this index is given by some linear function of the states,

(3) u(t) = -K(t) x(t)

where

(4) K(t) = R^{-1} B^T S(t)

(5) -\dot{S}(t) = A^T S(t) + S(t) A - S(t) B R^{-1} B^T S(t) + Q

If S(T) is given, then the Riccati equation (Eq. 5) can be solved backwards in time from time T to time t_0. This equation, as well as the time-varying gain (Eq. 4), must be solved off-line and stored. In the field, a controller would use these stored gains to control the system.

2. Time-Invariant Linear Optimal Control

The gains produced in a time-varying controller generally have a long inert period, where the gain remains constant or near constant. This is followed by a short active period where the gains change, falling off to zero as the control session ends. This time-varying gain can be approximated simply by assuming that the controller will be in operation indefinitely, and therefore that the controller is always in the inert mode. This is a reasonable assumption and gives rise to the linear quadratic regulator. The steady-state version of the Riccati equation is called the algebraic Riccati equation. It can be found by assuming steady-state conditions have been reached and setting the rate of change of S to zero in Eq. 5. This gives us

(6) 0 = A^T S + S A - S B R^{-1} B^T S + Q

There is generally no analytical solution to the algebraic Riccati equation; however, most engineering mathematics software packages include a function to find a numerical solution. Since S appears quadratically in the equation, there are several possible solutions. The positive definite solution is chosen for the calculation of the optimal gain

(7) K = R^{-1} B^T S

3. Robustness Qualities

The LQR is, in general, a robust control mechanism. If we assume that Q and R are symmetric, then we can be assured ([2]) that the minimum singular values of the closed-loop system satisfy the following two inequalities:

(8) \sigma_{\min}\left[ I + G_{CL}(j\omega) \right] \geq 1

(9) \sigma_{\min}\left[ I + G_{CL}^{-1}(j\omega) \right] \geq \frac{1}{2}

where

(10) G_{CL} = K (sI - A)^{-1} B
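As the text notes, the algebraic Riccati equation is typically solved numerically. As a minimal sketch of mine (not from the paper), SciPy's solve_continuous_are returns the stabilizing positive definite solution S, from which the gain of Eq. 7 follows; the double-integrator plant below is a hypothetical example, chosen because its LQR gain has the known closed form K = [1, sqrt(3)] for Q = I, R = 1.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Hypothetical double-integrator plant: x1 = position, x2 = velocity.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
Q = np.eye(2)          # state weighting (symmetric, as Sec. 3 requires)
R = np.array([[1.0]])  # input weighting

# Solve 0 = A'S + SA - S B R^{-1} B' S + Q for the positive definite S (Eq. 6).
S = solve_continuous_are(A, B, Q, R)

# Optimal gain K = R^{-1} B' S (Eq. 7); control law u = -Kx (Eq. 3).
K = np.linalg.inv(R) @ B.T @ S

# The closed-loop matrix A - BK must be Hurwitz (all eigenvalues in the LHP).
closed_loop_eigs = np.linalg.eigvals(A - B @ K)
print(K, closed_loop_eigs.real)
```

The printed eigenvalues have negative real parts, illustrating the guaranteed stability of the LQR closed loop.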


This is indicative of several extremely useful robustness properties:

- Upward gain margin is infinite. This means that the system is stable for any scalar amplification of the system inputs, 1 < \alpha_i < \infty.
- Downward gain margin is at least \frac{1}{2}. This means that the system is stable for any scalar attenuation of the system inputs, \frac{1}{2} < \alpha_i < 1.
- Phase margin is at least 60°. This means that the system is stable for any scalar phase perturbation of the system inputs, -60° < \phi_i < 60°.

In effect, if a disturbance causes the i-th input to the system, u_i, to be of the form

(11) u_{di} = \alpha_i e^{j\phi_i} u_i

where

(12) \frac{1}{2} < \alpha_i < \infty

(13) -60° < \phi_i < 60°

then the LQR control law guarantees stability.

3.0.1. Note on Symmetry. The requirement that a matrix M is symmetric can always be satisfied for any quadratic function of that matrix,

(14) x^T M x

We can rewrite the matrix,

(15) \bar{M} = \frac{1}{2}\left(M + M^T\right)

so that

(16) x^T M x = x^T \bar{M} x

Since x^T M x is a scalar,

(17) x^T M x = \left(x^T M x\right)^T

(18) = x^T M^T x

therefore,

(19) x^T M x = x^T \left[\frac{1}{2}\left(M + M^T\right)\right] x

where \frac{1}{2}\left(M + M^T\right) is obviously symmetric.
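The identity above is easy to check numerically; this small sketch (mine, not part of the paper) verifies that a quadratic form is unchanged when M is replaced by its symmetric part.

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))   # an arbitrary, non-symmetric matrix
x = rng.standard_normal(4)

M_bar = 0.5 * (M + M.T)           # symmetric part of M (Eq. 15)

# x'Mx and x'M_bar x agree (Eq. 16), and M_bar is symmetric by construction.
q1 = x @ M @ x
q2 = x @ M_bar @ x
print(abs(q1 - q2))
```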

Part 3. Sliding Mode Control (SMC)

While LQ control uses a single linear control law to minimize some performance index, sliding mode control (similar to gain scheduling) uses more than one and is, in general, non-linear. The performance index is specified as a manifold of space called the sliding surface. A sliding mode controller sends the system states onto the sliding surface and keeps them there. The name sliding mode comes from the slightly inane realization that, once the system states are on the sliding surface, the system can be considered to be in sliding mode. Sliding mode control was originally developed in the Soviet Union. A survey of early literature can be found in [3]. Recently, papers are emerging from a more eclectic group of controls scientists. A survey of current literature can be found in [4]. This new research has rejuvenated and, in a way, popularized the notion

ANWER S. BASHI

Table 1: A survey of publications indicates an increase of interest in SMC.

Years    Approx. Number of Publications
94-97    1000
90-93    600
86-89    100

of sliding mode control, as can be seen from a tally of publications (shown in Table 1). The survey was conducted by searching the on-line engineering catalogs COMPENDEX and COMPENDEX PLUS for (sliding mode OR variable structure). Some publications were no doubt missed; however, the results do seem to be indicative of an increasing interest in sliding mode control.

4. The Sliding Surface

The sliding surface is generally a pre-specified linear manifold; however, some formulations of the SMC problem allow for adaptive [5, 6] or non-linear [7] sliding surfaces.

Axiom 4.1. Consider two hyper-tubes, the δ tube and the ε tube, of diameters δ > 0, ε > 0. Their central axes run in the direction of the s = 0 axis. In general, a sliding surface can be found to exist if, barring disturbance, for any x(0) the state trajectory cannot leave the ε tube after having entered into the δ tube. Usually δ ≤ ε; however, this is not always necessary.

In other words, once the state trajectory comes within δ of the s = 0 axis, it cannot escape to any distance greater than ε unless it is perturbed by a disturbance. We will consider here only linear sliding surfaces, as these are generally adequate for most purposes. Any n-dimensional linear manifold can be expressed as

(20) s = c^T x

where c is the cost associated with each corresponding state in x.

5. Sliding Mode Control Based on Ackermann's Gain

Ackermann and Utkin (one of the original developers and proponents of sliding mode control) have proposed a technique to automate sliding mode design [8]. Ackermann's formula is used to choose the sliding surface based on a desired feedback spectrum. The design procedure has been summarized in [8] as follows:

Choose the desired feedback spectrum, \{\lambda_1, \ldots, \lambda_n\}. Obtain the linear feedback control, u_a = -k^T x, from Ackermann's formula,

(21) k^T = e^T P(A)

where

(22) e^T = (0, \ldots, 0, 1) \left[ B, AB, \ldots, A^{n-1} B \right]^{-1}

(23) P(\lambda) = (\lambda - \lambda_1)(\lambda - \lambda_2) \cdots (\lambda - \lambda_n)


Design the dynamic part of the controller,

(24) \dot{z} = -B^T A x - B^T B u_a, \quad z(0) = -B^T x(0)

There is no clear explanation given concerning the reason for adding this dynamic subsystem; however, it may be seen as an augmentation to the original system that simplifies control on the sliding surface. The sliding surface equation is found,

(25) s = B^T x + z

Finally, the discontinuous control is designed,

(26) u = \begin{cases} -M(x,t) & s > 0 \\ M(x,t) & s < 0 \end{cases}

where, to ensure the existence of a sliding surface, we require

(27) M(x,t) > |u_a| + f_0(x,t)

This inequality satisfies the existence axiom, since we require our control to be greater than any known input to the plant as well as any dynamic trajectory deviation that the plant might initiate itself. Note that a large enough disturbance may violate this inequality and cause a stable system to become unstable, i.e. if

(28) |u_d| \geq M(x,t) - |u_a| - f_0(x,t)

then instability may result.

Example 5.1. Design a sliding mode control using Ackermann's formula for the system

(29) \begin{bmatrix} \dot{x}_1 \\ \dot{x}_2 \end{bmatrix} = \begin{bmatrix} 1 & 2 \\ 3 & 0 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} + \begin{bmatrix} 1 \\ 1 \end{bmatrix} u

(30) x(0) = [0, 0]^T

with closed loop poles at [-8, -6]. Looking at the eigenvalues of A, we can see that the system has poles at [3, -2] and so is inherently unstable. Using Ackermann's formula, we get

(31) k = \begin{bmatrix} 57 \\ 42 \end{bmatrix}

which allows us to design the dynamic subsystem,

(32) \dot{z} = -[1, 1] \begin{bmatrix} 1 & 2 \\ 3 & 0 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} - [1, 1] \begin{bmatrix} 1 \\ 1 \end{bmatrix} u_a

(33) z(0) = -[1, 1] \begin{bmatrix} 0 \\ 0 \end{bmatrix} = 0

which simplifies to

(34) \dot{z} = [116, 82] \begin{bmatrix} x_1 \\ x_2 \end{bmatrix}
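Ackermann's formula itself is straightforward to implement numerically. The sketch below is my own illustration, not the paper's code: it reuses the plant matrix A from Example 5.1 but a hypothetical input vector B = [0, 1]^T, since the formula requires the controllability matrix [B, AB] to be invertible.

```python
import numpy as np

def ackermann(A, B, poles):
    """Gain k such that eig(A - B k) = poles, via k = e_n' C^{-1} P(A) (Eqs. 21-23)."""
    n = A.shape[0]
    # Controllability matrix C = [B, AB, ..., A^{n-1} B]
    C = np.hstack([np.linalg.matrix_power(A, i) @ B for i in range(n)])
    # Desired characteristic polynomial P, evaluated at the matrix A
    P = np.eye(n)
    for p in poles:
        P = P @ (A - p * np.eye(n))
    e_n = np.zeros(n)
    e_n[-1] = 1.0
    return e_n @ np.linalg.inv(C) @ P

A = np.array([[1.0, 2.0], [3.0, 0.0]])
B = np.array([[0.0], [1.0]])          # hypothetical input matrix (controllable pair)
k = ackermann(A, B, [-8.0, -6.0])
print(k, np.linalg.eigvals(A - B @ k.reshape(1, -1)).real)
```

The printed closed-loop eigenvalues land on the requested spectrum [-8, -6], confirming the pole placement.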


6. Sliding Mode Control Based on Canonical Transformation

We can model the system \dot{x}(t) = A x(t) + B u(t) by an equivalent system

\dot{z}(t) = T A T^{-1} z(t) + T B u(t)

where T is a change of basis chosen such that the equivalent system is in control canonical form, i.e.

(35) T A T^{-1} = \begin{bmatrix} -a_1 & -a_2 & \cdots & -a_{n-1} & -a_n \\ 1 & 0 & \cdots & 0 & 0 \\ 0 & 1 & \cdots & 0 & 0 \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & \cdots & 1 & 0 \end{bmatrix}

(36) T B = \begin{bmatrix} 1 \\ 0 \\ 0 \\ \vdots \\ 0 \end{bmatrix}

We may then choose the sliding surface as in [9] to be s = G_t z, where the rate of convergence on the sliding surface is specified by the choice of G_t. G_t contains the coefficients of the polynomial with the desired convergence spectrum of the equivalent system on the sliding surface,

(37) G_t = [v_1, v_2, \ldots, v_n]

where v_1, v_2, \ldots, v_n can be found by using the binomial theorem [10] for an (n-1)th-order polynomial,

(38) v_1 z^{n-1} + v_2 z^{n-2} + \ldots + v_n = (z - \lambda)^{n-1}

Since this spectrum is with respect to the transformed system, we can also transform the basis of the sliding surface,

s = G_t z = G_t T x = G x

The control law then becomes [11]

u(t) = -(K_{eq} + K_{sw}) x(t)

where

(39) K_{eq} = (GB)^{-1} G (A - \lambda I)

(40) K_{sw,i} = \begin{cases} M(x,t) & GBs \, x_i > 0 \\ -M(x,t) & GBs \, x_i < 0 \end{cases}

M(x,t) is usually chosen to be the maximum allowable input, to try to meet the requirement that

(41) M(x,t) > |u| + f_0(x,t)


Example 6.1. Design a sliding mode controller using canonical transformation for the system

(42) \begin{bmatrix} \dot{x}_1 \\ \dot{x}_2 \end{bmatrix} = \begin{bmatrix} 1 & 2 \\ 3 & 0 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} + \begin{bmatrix} 1 \\ 1 \end{bmatrix} u

(43) x(0) = [0, 0]^T

with closed loop poles at [-8]. The T required to convert this system to control canonical form can be found to be

T = \begin{bmatrix} 1 & 2 \\ -1 & -4 \end{bmatrix}

We may now find G by solving the polynomial equation and transforming by T,

(44) (z + 8)(z + 6) = z^2 + 14z + 48

(45) G_t = [8, 1]

(46) G = G_t T = [7, 12]

Therefore, the sliding mode controller is defined by:

(47) u(t) = -(K_{eq} + K_{sw}) x(t)

(48) K_{eq} = (GB)^{-1} G (A - \lambda I) = [1.895, 1.3684]

(49) K_{sw,i} = \begin{cases} M(x,t) & (133 x_1 + 228 x_2) x_i > 0 \\ -M(x,t) & (133 x_1 + 228 x_2) x_i < 0 \end{cases}
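A transformation T to control canonical form can be computed from controllability matrices: if C = [B, AB, ...] for the original pair and C_c is the same matrix for the canonical pair, then T = C_c C^{-1}. This is a sketch of mine rather than the paper's procedure, and it uses a hypothetical input vector B = [0, 1]^T so that the pair is controllable.

```python
import numpy as np

def ctrb(A, B):
    """Controllability matrix [B, AB, ..., A^{n-1} B]."""
    n = A.shape[0]
    return np.hstack([np.linalg.matrix_power(A, i) @ B for i in range(n)])

A = np.array([[1.0, 2.0], [3.0, 0.0]])
B = np.array([[0.0], [1.0]])                  # hypothetical input matrix

# Characteristic polynomial of A: z^2 + a1 z + a2  ->  coefficients [1, a1, a2]
a = np.poly(A)                                # here [1, -1, -6]

# Control canonical pair (Eqs. 35-36): first row [-a1, -a2], shifted identity below
Ac = np.array([[-a[1], -a[2]], [1.0, 0.0]])
Bc = np.array([[1.0], [0.0]])

# Similarity transform mapping (A, B) to (Ac, Bc): T = ctrb(Ac, Bc) @ inv(ctrb(A, B))
T = ctrb(Ac, Bc) @ np.linalg.inv(ctrb(A, B))

print(T @ B)                       # equals Bc
print(T @ A @ np.linalg.inv(T))    # equals Ac
```

With T in hand, a surface G_t chosen in the canonical coordinates can be pulled back to the original states as G = G_t T, exactly as in Eq. 46.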

7. Continuous Sliding Mode Control

Sliding mode control requires an infinitely fast controller to ensure its robustness and disturbance rejection characteristics. In practice, this is of course impossible, and the ideal is replaced with the highest frequency the controller can manage. The number of transitions required becomes especially bad once the system is on the sliding mode. Let us assume that the system is at s = ε_1, where ε_k > 0 is some small real number. The controller will respond by blasting the system with a short bang until it reaches s = -ε_2. Since the system is now below the sliding surface, the controller will blast it the other way, to s = ε_3, and so on. What we end up with is an effect called chattering, where the controller is switching at very high frequencies. Needless to say, this is undesirable for most systems, since chattering induces wear in the control element and could also cause undesirable high-frequency dynamics in the system to surface. Replacing the ideal switching characteristics,

u = \begin{cases} -M(x,t) & s > 0 \\ M(x,t) & s < 0 \end{cases}

with a dead zone,

u = \begin{cases} -M(x,t) & s > ε \\ 0 & |s| \leq ε \\ M(x,t) & s < -ε \end{cases}


reduces the chatter somewhat. However, if this approximation is made, then the states can no longer be expected to converge to the sliding mode, but only to within some hyper-tube centered on the sliding mode. This tube may be made narrower by making ε smaller; however, this comes at the expense of requiring more controller action. A variable sliding gain M(x,t) is suggested in [12] that decreases the control action as the state trajectory approaches the sliding surface. This has been termed continuous sliding mode control, since the control actions thus generated are differentiable near the sliding surface. In [9], the following heuristic is chosen for dampening controller action:

M(x,t) = M \frac{|s|}{\|x\| + ε}

where M is the initial sliding gain value, and ε is a small positive constant.

Example 7.1. Dampen the controller generated in Example 6.1. We may arbitrarily set ε = 0.1. If simulation shows that a different value of ε provides better results, then ε is changed appropriately. Assuming our system can provide a 10V input, our control action becomes

(50) M(x,t) = \frac{10 \left| 133 x_1 + 228 x_2 \right|}{\sqrt{x_1^2 + x_2^2} + 0.1}

which is implemented in the sliding mode controller,

(51) u(t) = -(K_{eq} + K_{sw}) x(t)

(52) K_{eq} = [1.895, 1.3684]

(53) K_{sw,i} = \begin{cases} \frac{10 \left| 133 x_1 + 228 x_2 \right|}{\sqrt{x_1^2 + x_2^2} + 0.1} & (133 x_1 + 228 x_2) x_i > 0 \\ -\frac{10 \left| 133 x_1 + 228 x_2 \right|}{\sqrt{x_1^2 + x_2^2} + 0.1} & (133 x_1 + 228 x_2) x_i < 0 \end{cases}
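To see why the variable gain helps, consider this small simulation of mine (not from the paper): a crude first-order plant \dot{x} = u with sliding variable s = x, comparing the hard-switched control against a continuous gain of the form M|s|/(\|x\| + ε). Counting control sign reversals gives a rough measure of chatter.

```python
import numpy as np

def simulate(continuous, x0=1.03, M=5.0, eps=0.1, dt=0.01, steps=500):
    """Euler simulation of xdot = u with sliding variable s = x.

    Returns the number of control sign reversals (a chatter count).
    """
    x, switches, prev_sign = x0, 0, 0
    for _ in range(steps):
        s = x
        gain = M * abs(s) / (abs(x) + eps) if continuous else M
        u = -gain * np.sign(s)       # hard-switched law in the style of Eq. 26
        sgn = np.sign(u)
        if sgn != 0 and prev_sign != 0 and sgn != prev_sign:
            switches += 1
        if sgn != 0:
            prev_sign = sgn
        x += dt * u
    return switches

bang = simulate(continuous=False)   # ideal (chattering) controller
smooth = simulate(continuous=True)  # continuous sliding mode gain
print(bang, smooth)
```

The hard-switched controller reverses its control every sample once the state reaches the surface, while the continuous gain shrinks the control near s = 0 and converges without reversals.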

8. Other Forms of Sliding Mode Control

Since SMC is still a developing field, each researcher entering this field brings with him or her expertise from another field. For this reason, there are many different approaches to the SMC problem. A few of the more common approaches are noted here.

8.1. Quadratic Control. In [13], Utkin suggests a quadratic cost for SMC parameter design identical to the LQR cost function,

(54) J = \frac{1}{2} x^T(T) F(T) x(T) + \frac{1}{2} \int_{t_0}^{T} \left[ x^T(t) Q x(t) + u^T(t) R u(t) \right] dt

This proposed cost has not become very popular, due to the difficulty of adapting it to general SMC.

8.2. PID Control. In [14], a proportional plus derivative (PD) SMC controller is proposed which is used as the basis for automatic optimization with genetic algorithms in [15]. The general structure optimized in [15] makes use of a hard-switched proportional plus integral plus derivative sliding mode (PID-SMC) controller such as that given by

u = -P e - I \int e \, dt - D \frac{de}{dt}

where each gain switches between two values according to the sign of the sliding variable and the corresponding error term,

P = \begin{cases} P_1 & es < 0 \\ P_2 & es > 0 \end{cases} \qquad I = \begin{cases} I_1 & s < 0 \\ I_2 & s > 0 \end{cases} \qquad D = \begin{cases} D_1 & \dot{e}s < 0 \\ D_2 & \dot{e}s > 0 \end{cases}

with e representing the (negative) tracking error. This development allows a simpler transition to SMC for practicing engineers who are already familiar with the PID control law.

8.3. Fuzzy Control. Several papers have proposed fuzzy logic SMC controllers, including [16] and [17]. Non-linearities are dealt with very well by utilizing fuzzy logic control, and control chatter is done away with; however, the design procedure becomes significantly more complicated. In [18], genetic algorithms are employed quite successfully to design near-optimal fuzzy controllers for non-linear systems.

9. Reference Input

We have not yet discussed the introduction of a reference input to the SMC. This is generally an easy task due to the nature of the sliding surface. Basically, the sliding surface can be defined such that

(55) s_0 = c^T (x - x_d)

where x_d is the desired system state vector. The introduction of a reference input, however, may invalidate the convergence properties of the developed sliding mode controller, since we are introducing dynamics into the sliding surface that cannot, in general, be modeled without predefining the reference input. Consider producing a sliding mode controller for the system

(56) \begin{bmatrix} \dot{x}_1 \\ \dot{x}_2 \end{bmatrix} = \begin{bmatrix} 1 & 2 \\ 3 & 0 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} + \begin{bmatrix} 1 \\ 1 \end{bmatrix} u

with closed loop poles at -8. Our choice of poles is based on the fact that the system has poles at [3, -2]. If the system is in sliding mode with respect to s_0 and the reference states are suddenly changed, then the controller experiences a high-frequency transient that may not have been modeled. If the controller was not designed to deal with such high-frequency transients, the stability requirement that

(57) M(x,t) > |u_a| + f_0(x,t)

may be temporarily violated. For this reason, consideration of the reference signal is required when designing the controller. A technique for conditioning the reference



signal such that stability is maintained is suggested below. The introduction of a reference input is a subtopic of SMC that has received very little recognition.

9.1. Slew-Limiting Prefilter to Maintain Stability. The basic problem can be considered to be one of sliding surface adaptation. By using a reference input in calculating the sliding surface, we effectively disrupt normal sliding mode operation by changing the surface as the states travel in the sliding trajectory. This is not normally a problem unless we change the surface faster than the SMC can follow it. For this reason, we propose a slew-rate limiting filter to preprocess the reference input such that the sliding surface does not drift too quickly for stable control. We wish to limit high-frequency surface changes, so we can create a low-pass filter in the frequency domain,

(58) R_i'(s) = \frac{p_i^N}{(s + p_i)^N} R_i(s)

where N is the filter order, chosen to ensure a steep frequency roll-off. Choosing a high order for N may be necessary, since increasing the reference input's amplitude will also increase the high-frequency components. If the filter order is higher, and therefore more like the ideal brick-wall filter, then the reference's amplitude becomes less significant. A good choice of the pole p_i might be equal to the closed-loop pole of the SMC controller corresponding to the i-th state.

9.2. Reduced Order Prefilter. While it would be nice to employ an \infty-order filter, it is not generally the practice of engineers to squander electronic parts or power pointlessly. We would like to be able to determine the minimum order necessary for some reference input,

(59) -U_{max} < x_d(t) < U_{max}

In effect, if we specify bounds for our reference input, as if it were a disturbance to the sliding surface, then we can design a filter to ensure that the frequency components higher than p_i are sufficiently damped. A theoretical derivation for filter order may be proposed in a future paper, or an experimental heuristic found. The aim of this paper, however, is primarily to give the reader an introduction to sliding mode control and to compare its performance with linear quadratic control by solving the inverted pendulum problem. The author suggests the trial-and-error engineering method. Once the sliding mode controller is designed, it is a trivial task to change the number of filter stages between the reference input and the state vector.

Part 4. Extended Example: The Inverted Pendulum

In order to illustrate the performance of SMC, we will design a continuous SMC for the inverted pendulum problem. Its performance will be compared to that of an LQR under various conditions and degrees of mismatch.

10. Pendulum Model

The inverted pendulum is often used as a benchmark for controllers of all kinds. It is a nonlinear, unstable system, which makes it a challenge to control. Many different models have been developed for the inverted pendulum problem, including [9], [1], [19], and [20].


Figure 1. Force diagram of the inverted pendulum device.

Figure 2. Free body diagram of the cart and pendulum.

The inverted pendulum we are using has a very light pole (compared to the mass of the pendulum bob) and a relatively friction-free pivot for the pole. Because of this, pole friction and pole mass affect the overall model fidelity very little. Since they make the model considerably more complicated, we chose to go with a model which ignores them. The following model is formulated as presented in [21]. The model is shown in figs. 1 and 2.



In the horizontal direction, we have for the cart

(60) F - N = M\ddot{s} + b\dot{s}
N = F - M\ddot{s} - b\dot{s}

and for the pendulum

(61) N = m\ddot{s} + ml\ddot{\theta}\cos\theta - ml\dot{\theta}^2\sin\theta

Substituting eq. 60 into eq. 61,

F - M\ddot{s} - b\dot{s} = m\ddot{s} + ml\ddot{\theta}\cos\theta - ml\dot{\theta}^2\sin\theta

(62) F = (M + m)\ddot{s} + b\dot{s} + ml\ddot{\theta}\cos\theta - ml\dot{\theta}^2\sin\theta

We have two unknown states, s and θ. Their derivatives, \dot{s}, \ddot{s}, \dot{\theta}, and \ddot{\theta}, may easily be solved for if s and θ are known. So far, we only have one equation for two unknowns. Our second equation may be found by summing the forces in the plane perpendicular to the pendulum. This choice of axes saves us a lot of algebra, giving the following equation:

(63) P\sin\theta + N\cos\theta - mg\sin\theta = ml\ddot{\theta} + m\ddot{s}\cos\theta

We can further simplify this equation by summing the moments about the centroid of the pendulum,

-Pl\sin\theta - Nl\cos\theta = I\ddot{\theta}

(P\sin\theta + N\cos\theta) = -\frac{I\ddot{\theta}}{l}

and substituting for (P\sin\theta + N\cos\theta) in eq. 63, we get

-\frac{I\ddot{\theta}}{l} - mg\sin\theta = ml\ddot{\theta} + m\ddot{s}\cos\theta

-I\ddot{\theta} - mgl\sin\theta = ml^2\ddot{\theta} + ml\ddot{s}\cos\theta

(64) \left(I + ml^2\right)\ddot{\theta} + mgl\sin\theta = -ml\ddot{s}\cos\theta

And so, our nonlinear inverted pendulum model is described by eqs. 62 and 64:

(M + m)\ddot{s} + ml\ddot{\theta}\cos\theta - ml\dot{\theta}^2\sin\theta + b\dot{s} = F

\left(I + ml^2\right)\ddot{\theta} + mgl\sin\theta = -ml\ddot{s}\cos\theta

10.1. Linearization. The non-linearity of the problem forces us to either use tricky, nonlinear control techniques, or quasi-nonlinear control techniques, where a nonlinear system is modified in some way so as to allow a linear control scheme to be used. One of these techniques is linearization, where a linear approximation of the function is used. If a function f(θ) has n derivatives at θ = θ_0, then the polynomial expansion

\sum_{i=0}^{n} \frac{f^{(i)}(\theta_0)}{i!} \left(\theta - \theta_0\right)^i

is the nth-order series for f(θ) at θ_0. If a first-order approximation is used, we have a first-order polynomial, or a line. In many cases the higher-order terms can be considered negligible if the variable is close to θ_0. In our inverted pendulum problem, however, we are concerned only with values of θ that are close to π (i.e. pendulum in the upright position). It is simple to replace θ with π + φ, φ ≈ 0:

\sin(\pi + \phi) \approx -\phi

\cos(\pi + \phi) \approx -1

\ddot{\theta} = \frac{d^2(\pi + \phi)}{dt^2} = \ddot{\phi}

\dot{\theta} = \frac{d(\pi + \phi)}{dt} = \dot{\phi}

The pendulum equations become:

(65) (M + m)\ddot{s} + b\dot{s} - ml\ddot{\phi} = u

(66) \left(I + ml^2\right)\ddot{\phi} - mgl\phi = ml\ddot{s}

10.2. System Equations. If we want to obtain the linearized pendulum equations in state-space format, then we need to obtain eqs. 65 and 66 in terms of the state vector x,

x = \begin{bmatrix} s \\ \dot{s} \\ \phi \\ \dot{\phi} \end{bmatrix}, \quad \dot{x} = \begin{bmatrix} \dot{s} \\ \ddot{s} \\ \dot{\phi} \\ \ddot{\phi} \end{bmatrix}

This is achieved easily enough by rearranging the equations we have. Rearranging, eq. 66 becomes

\ddot{\phi} = \frac{ml\ddot{s} + mgl\phi}{I + ml^2}

We plug this value for \ddot{\phi} into eq. 65:

u = (M + m)\ddot{s} + b\dot{s} - ml\frac{ml\ddot{s} + mgl\phi}{I + ml^2}

(M + m)\ddot{s} = \frac{m^2l^2\ddot{s} + m^2gl^2\phi}{I + ml^2} - b\dot{s} + u

\ddot{s} = \frac{m^2l^2\ddot{s} + m^2gl^2\phi - \left(I + ml^2\right)b\dot{s} + \left(I + ml^2\right)u}{(M + m)\left(I + ml^2\right)}

Taking all the \ddot{s} terms to the left, we get

\left[1 - \frac{m^2l^2}{(M + m)\left(I + ml^2\right)}\right]\ddot{s} = \frac{m^2gl^2\phi - \left(I + ml^2\right)b\dot{s} + \left(I + ml^2\right)u}{(M + m)\left(I + ml^2\right)}

\frac{MI + Mml^2 + mI + m^2l^2 - m^2l^2}{(M + m)\left(I + ml^2\right)}\ddot{s} = \frac{m^2gl^2\phi - \left(I + ml^2\right)b\dot{s} + \left(I + ml^2\right)u}{(M + m)\left(I + ml^2\right)}

\ddot{s} = \frac{m^2gl^2}{(M + m)I + Mml^2}\phi - \frac{\left(I + ml^2\right)b}{(M + m)I + Mml^2}\dot{s} + \frac{I + ml^2}{(M + m)I + Mml^2}u

or, in state format, \dot{s} = \dot{s} together with the above. Similarly, if we start with eq. 65,

\ddot{s} = \frac{u - b\dot{s} + ml\ddot{\phi}}{M + m}

Plugging into eq. 66,

\left(I + ml^2\right)\ddot{\phi} - mgl\phi = ml\frac{u - b\dot{s} + ml\ddot{\phi}}{M + m}

\ddot{\phi} = \frac{mlu - mlb\dot{s} + m^2l^2\ddot{\phi} + (M + m)mgl\phi}{(M + m)\left(I + ml^2\right)}

Taking all the \ddot{\phi} terms to the left side, we get

\left[1 - \frac{m^2l^2}{(M + m)\left(I + ml^2\right)}\right]\ddot{\phi} = \frac{mlu - mlb\dot{s} + (M + m)mgl\phi}{(M + m)\left(I + ml^2\right)}

\ddot{\phi} = \frac{(M + m)mgl}{(M + m)I + Mml^2}\phi - \frac{mlb}{(M + m)I + Mml^2}\dot{s} + \frac{ml}{(M + m)I + Mml^2}u

Our state-space system equation becomes:

(67) \begin{bmatrix} \dot{s} \\ \ddot{s} \\ \dot{\phi} \\ \ddot{\phi} \end{bmatrix} = \begin{bmatrix} 0 & 1 & 0 & 0 \\ 0 & -\frac{(I + ml^2)b}{(M+m)I + Mml^2} & \frac{m^2gl^2}{(M+m)I + Mml^2} & 0 \\ 0 & 0 & 0 & 1 \\ 0 & -\frac{mlb}{(M+m)I + Mml^2} & \frac{(M+m)mgl}{(M+m)I + Mml^2} & 0 \end{bmatrix} \begin{bmatrix} s \\ \dot{s} \\ \phi \\ \dot{\phi} \end{bmatrix} + \begin{bmatrix} 0 \\ \frac{I + ml^2}{(M+m)I + Mml^2} \\ 0 \\ \frac{ml}{(M+m)I + Mml^2} \end{bmatrix} u


10.2.1. Numerical Values. Assume that our pendulum has the following characteristics:

M = 2.4 kg
m = 0.23 kg
b = 0.05 N/(m/s)
I = 0.0099 kg·m²
l = 0.36 m
g = 9.81 m/s²
F_max = 24 N
F_min = -24 N

Plugging into eq. 67, we get

A = \begin{bmatrix} 0 & 1 & 0 & 0 \\ 0 & -0.0203 & 0.6893 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & -0.0424 & 21.8933 & 0 \end{bmatrix}, \quad B = \begin{bmatrix} 0 \\ 0.4069 \\ 0 \\ 0.8486 \end{bmatrix}

These are the values we will use for A and B in calculating the LQR and SMC controller gains. Also, the maximum allowable input force to the pendulum is 24 N.

10.3. Transfer Function. If we take the Laplace transform of the equations of motion for this system (assuming the initial conditions are close enough to zero to ignore), we get

(68) (M + m)Xs^2 + bXs - ml\Phi s^2 = U

(69) \left(I + ml^2\right)\Phi s^2 - mgl\Phi = mlXs^2

where X, Φ, and U are the Laplace transforms of the displacement s, the angle φ, and the input u. We do not use S for the Laplace transform of the displacement due to the confusion that might arise between it and s, the Laplace operator. Rearranging eq. 69 in order to obtain the relation of X to Φ,

X = \left[\frac{I + ml^2}{ml} - \frac{g}{s^2}\right]\Phi

and substituting back into eq. 68, we have (after a little rearranging, factoring, and cancellation):

\frac{\Phi}{U} = \frac{\frac{ml}{q}s}{s^3 + \frac{b\left(I + ml^2\right)}{q}s^2 - \frac{(M + m)mgl}{q}s - \frac{bmgl}{q}}

where

q = (M + m)\left(I + ml^2\right) - (ml)^2
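The numerical entries of A and B follow directly from the symbolic expressions in eq. 67; a quick check of mine in Python reproduces them from the physical parameters.

```python
# Rebuild the entries of A and B in eq. 67 from the pendulum parameters.
M, m, b, I, l, g = 2.4, 0.23, 0.05, 0.0099, 0.36, 9.81

# Common denominator q = (M+m)(I+ml^2) - (ml)^2 = (M+m)I + M m l^2
q = (M + m) * (I + m * l**2) - (m * l)**2

a22 = -(I + m * l**2) * b / q    # coefficient of s-dot in the s-ddot equation
a23 = m**2 * g * l**2 / q        # coefficient of phi in the s-ddot equation
a42 = -m * l * b / q             # coefficient of s-dot in the phi-ddot equation
a43 = (M + m) * m * g * l / q    # coefficient of phi in the phi-ddot equation
b2 = (I + m * l**2) / q          # input coefficient in the s-ddot equation
b4 = m * l / q                   # input coefficient in the phi-ddot equation

print(round(a22, 4), round(a23, 4), round(a42, 4), round(a43, 4),
      round(b2, 4), round(b4, 4))
```

The printed values match the numeric A and B above to the quoted precision.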



Figure 3. SMC setup for the inverted pendulum.

11. Sliding Mode Controller for the Inverted Pendulum

Fig. 3 shows our SMC setup for the inverted pendulum. The output and disturbance are the same as in figs. 9 and 10. We will use an Ackermann SMC (sec. 5) with continuous control (sec. 7). We find through trial and error that the control spectrum given by λ = [-4, -4, -8, -8] produces desirable results. In order to observe the effects of initial conditions, we set

x(0) = \begin{bmatrix} 0.1 \\ 0.1 \\ 0.1 \\ 0.1 \end{bmatrix}

The procedure shown in sec. 5 can be followed, or the Matlab function acker may be used to obtain k (see program listing for zrate.m). The augmented system equation is found to be:

\dot{z} = -[108.9473, 81.7105, 273.3339, 64.2349] \, x

z(0) = -0.1256

giving

s = B^T x + z

u = \begin{cases} -M & B^T x + z > 0 \\ M & B^T x + z < 0 \end{cases}

We chose M = 24, since this is the maximum control force we can apply to the pendulum apparatus.


Figure 4. The complete continuous sliding mode controller.

11.1. Continuous Control. We chose ε = 0.1, resulting in the damped control input

u = \begin{cases} -\frac{24\left|B^T x + z\right|}{\|x\| + 0.1} & B^T x + z > 0 \\ \frac{24\left|B^T x + z\right|}{\|x\| + 0.1} & B^T x + z < 0 \end{cases}

The CSMC is shown in fig. 4.

11.2. Reference Input. In the discussion of sec. 9, we suggested that sudden changes in the sliding surface might cause instability. To test this, we will observe the behavior of the system with and without a slew-limiting filter. Since the slowest pole in the SMC is at s = -4, the filter takes the form

R'(s) = \frac{4^N}{(s + 4)^N} R(s)

where N is the order of the filter. The reference module with the filter in place is shown in fig. 5. A reference is introduced into the position state through a 2nd-order slew-limiting filter.
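As a sketch (mine, not the paper's Matlab model), the prefilter above can be implemented in discrete time as a cascade of N identical first-order sections, each with pole p = 4 rad/s. The cascade has unit DC gain, so the filtered reference still settles to the commanded value, while the initial step is smoothed into a bounded slew.

```python
import numpy as np

def slew_limit_filter(ref, p=4.0, N=2, dt=0.001):
    """Cascade of N first-order lags y' = -p*y + p*u, i.e. p^N/(s+p)^N (Eq. 58)."""
    stages = np.zeros(N)
    out = np.empty_like(ref)
    for k, u in enumerate(ref):
        for i in range(N):
            # Forward-Euler update of stage i; its input is the previous stage.
            stages[i] += dt * p * (u - stages[i])
            u = stages[i]
        out[k] = u
    return out

# A unit step in the position reference, filtered over 3 seconds.
t = np.arange(0, 3.0, 0.001)
r = np.ones_like(t)
rf = slew_limit_filter(r)
print(rf[-1], np.max(np.abs(np.diff(rf))) / 0.001)
```

The raw step has an unbounded rate of change; the filtered reference ramps up at a finite rate and converges to 1, which is exactly the behavior the slew-limiting prefilter is meant to provide.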



Figure 5. Reference introduced to the position through a 2nd order slew-limiting lter.

Figure 6. A combined SMC-estimator is developed to provide the missing states.

As in the LQR case, we will assume that the estimator has access to the true input to the system: the controller input plus the disturbance input, as shown in fig. 7.


Figure 7. The SMC-estimator system with correct input data available to the estimator.

13. Linear Quadratic Regulator for the Inverted Pendulum

Fig. 8 shows a typical LQR setup for the inverted pendulum. Several of the outputs of the system are saved to the workspace for later analysis and display (figs. 8 and 9). We assume here that our C matrix allows complete state observation,

A = \begin{bmatrix} 0 & 1 & 0 & 0 \\ 0 & -0.0203 & 0.6893 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & -0.0424 & 21.8933 & 0 \end{bmatrix}, \quad B = \begin{bmatrix} 0 \\ 0.4069 \\ 0 \\ 0.8486 \end{bmatrix}, \quad C = I_{4 \times 4}

This is a requirement of both LQR and SMC. Later, we will design a state estimator to provide estimates of the states from the actual observables of the inverted pendulum, s and φ. In order to design an LQR controller, we first have to decide upon our weighting matrices, Q and R. Since we have only one input, force, we can immediately (arbitrarily) set

R = 1

For the states, we will set

Q = \begin{bmatrix} \rho_x & 0 & 0 & 0 \\ 0 & \rho_x & 0 & 0 \\ 0 & 0 & \rho_y & 0 \\ 0 & 0 & 0 & \rho_y \end{bmatrix}



Figure 8. LQR setup for the inverted pendulum.

Figure 9. Several of the variables are recorded to the workspace for later display.

After experimental trial and error, we find that good values for \rho_x and \rho_y are

\rho_x = 100
\rho_y = 60

These values give us the fastest settling time without exceeding the limit set on the pendulum input (too often). The LQR gains can now be calculated using the Matlab function lqr:

K = lqr(A, B, Q, R)

which turns out to be

K = [-10.0000, -17.0407, 118.3489, 26.8734]
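The same gain can be reproduced without Matlab; this sketch of mine uses SciPy's ARE solver with the A and B above and Q = diag(100, 100, 60, 60), R = 1. Because the first column of A is zero, the ARE predicts that the magnitude of the position gain is exactly sqrt(Q11/R) = 10, matching the value quoted in the text.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0, 1,       0,       0],
              [0, -0.0203, 0.6893,  0],
              [0, 0,       0,       1],
              [0, -0.0424, 21.8933, 0]])
B = np.array([[0], [0.4069], [0], [0.8486]])
Q = np.diag([100.0, 100.0, 60.0, 60.0])
R = np.array([[1.0]])

S = solve_continuous_are(A, B, Q, R)      # positive definite ARE solution (Eq. 6)
K = np.linalg.inv(R) @ B.T @ S            # LQR gain (Eq. 7), control u = -Kx

print(K)
print(np.linalg.eigvals(A - B @ K).real)  # all negative: closed loop is stable
```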


Figure 10. Three different types of angular disturbance are introduced into the pendulum.

13.1. Reference Input. See [1] for details on introducing a reference input to a regulated system. However, in our case, the simplicity of having a single input allows us to forgo the calculations normally required. If our regulator gain for the reference state (the position, s) is -10, we can simply multiply our reference input for that state by -10. This would result in zero error when s equals the reference. We call this multiplication factor Nbar:

Nbar = -10

13.2. Disturbance. We want the disturbance entering the system to affect the angle of the pendulum, but none of the other states. This situation is more likely in actual trials. We achieve this by augmenting both the B matrix and the input,

B_a = \begin{bmatrix} 0 & b_1 \\ 0 & b_2 \\ 1 & b_3 \\ 0 & b_4 \end{bmatrix}, \quad u_a = \begin{bmatrix} w \\ u \end{bmatrix}

where b_1 \ldots b_4 are the 1st through 4th elements of the original B matrix, w is the angular disturbance, and u_a is the augmented input vector. Three different types of disturbance are introduced into the system (see fig. 10): Brownian, Gaussian, and step. The Brownian disturbance is a gradual drift, of the type produced by a strong wind hitting the pendulum. The Gaussian disturbance is similar to noise produced by sensor errors, and is generally accepted as normal noise. The step disturbance is of the type generated by, for example, a rival graduate student trying to knock the pendulum over when the instructor turns his back. While it might seem like overkill to include three kinds of disturbance, each has characteristics that reveal different features in the controller. For example, the Gaussian noise is zero mean for each ensemble, while the Brownian noise is zero mean only over many ensembles (it is not zero mean over each single ensemble). If


Figure 11. A combined linear quadratic regulator-estimator is developed to provide the missing states.

a controller can only handle zero mean noise, then this will be revealed. Similarly, the step disturbance shows how fast a controller can recover from an unmodelled discontinuous dynamic.

14. State Estimated Quadratic Regulation

We have assumed that all the pendulum states are available for measurement. In actuality, only s and φ are available, i.e.

C = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix}

We will design a state observer to estimate the missing states. See [1] for details on the development of state observers (or state estimators). Basically, the state observer problem is the dual of the linear regulator problem. We choose our observer poles to be at least twice as fast as the pendulum dynamics,

eig(A) = \begin{bmatrix} 0 \\ -0.019 \\ 4.6784 \\ -4.6797 \end{bmatrix}

We choose poles at P = [-10, -11, -12, -13]. Using the Matlab function place,

L = place([A - BK]', C', P)'
where denotes the transposition operator. Our combined LQ regulator-estimator can be seen in gs. 11 and 12. This combined regulator-estimator is closer to what would be implemented in actuality, and it can be shown that the observer does not considerably change the system dynamics (if the observer poles are mush faster than the system dynamics). However, if the input being supplied to the observer is incorrect (due to error) then the performance of the regulator-estimator will be considerably changed. The


Figure 12. A state estimator is shown. The state-space module calculates x_dot = (A - LC)x + I4*u; y = x.
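The key property of this construction can be checked numerically: the estimation error e = x - x_hat obeys e_dot = (A - LC)e, so it decays at the observer poles regardless of the input. The sketch below uses a toy double-integrator plant with an assumed gain L, not the paper's pendulum model:

```python
import numpy as np

# Toy double-integrator plant (an assumption for illustration only).
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
C = np.array([[1.0, 0.0]])
L = np.array([[5.0],
              [6.0]])                  # places eig(A - L C) at -2 and -3

# The estimation error obeys e_dot = (A - L C) e, independent of u.
Acl = A - L @ C
e = np.array([[1.0], [-1.0]])          # initial estimation error
dt = 1e-3
for _ in range(5000):                  # 5 s of Euler integration
    e = e + dt * (Acl @ e)

print(np.linalg.norm(e) < 1e-3)        # error has decayed  → True
```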

Figure 13. A fudged system might provide more insight into the properties of the controller.

effect of this is that the performance of the observer might mask the performance of the regulator. Instead of remaining strictly correct, we will assume that the observer has access to sensors that measure the angle of the pendulum (with noise). This results in the system shown in fig. 13.

15. Results. 15.1. Reference Conditioning. First, we test our theory about the introduction of a reference input. The sliding mode controller is simulated without the slew-limiting filter (fig. 14). The CSMC becomes unstable as soon as the sliding surface is (discontinuously) changed.



Figure 14. The CSMC becomes unstable as soon as the sliding surface is suddenly changed.

Applying a 2nd-order filter to the reference input, the CSMC follows the reference well, maintaining its stability, as seen in fig. 15.

15.2. Standard Sliding Mode Control. It would be interesting at this point to see how a standard SMC would perform on this problem. SMC assumes infinite switching speed; in practice this is never the case. An SMC was designed for the inverted pendulum (using Ackermann's gain) with fast switching action. Its performance is better than that of the CSMC when the reference changes (fig. 16). However, it is immediately obvious that this comes at the cost of much greater control authority. A slower-switching SMC is designed, which performs considerably worse (fig. 17).

15.3. Performance with Minor Disturbance. We simulate both controllers with the following angular disturbance (in radians): Brownian = 0.05, Gaussian = 0.2, Step = 0.1. The results of the LQR are shown in fig. 18. The CSMC results are shown in fig. 19. We see that the CSMC follows the reference a little more smoothly.
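A minimal numpy sketch of the three disturbance signals and the input augmentation of section 13.2 follows. The generation details (seed, sample count, step onset) and the B values are assumptions for illustration, not the paper's pendulum model; the disturbance magnitudes are those of section 15.3:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000                                 # simulation samples

# The three angular disturbances (radians), minor-disturbance magnitudes.
gaussian = 0.2 * rng.standard_normal(n)              # sensor-like white noise
brownian = np.cumsum(0.05 * rng.standard_normal(n))  # slow random drift
step = np.where(np.arange(n) >= n // 2, 0.1, 0.0)    # sudden constant push

# Augmented input matrix: disturbance w enters the angle state only.
B = np.array([[0.5], [1.0], [-1.5], [2.0]])          # placeholder B
B_a = np.hstack([np.array([[0.0], [0.0], [1.0], [0.0]]), B])

u, w = 2.0, step[-1]                     # control input and disturbance
u_a = np.array([[w], [u]])               # augmented input [w; u]

# B_a u_a equals B u with w added to the angle state alone.
print(np.allclose(B_a @ u_a,
                  B * u + w * np.array([[0.0], [0.0], [1.0], [0.0]])))
```

The Gaussian signal is zero mean within this single run; the Brownian one generally is not, which is exactly the distinction the text draws.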


Figure 15. The CSMC follows the reference successfully after conditioning the reference.
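One way to realize the slew-limiting 2nd-order prefilter is sketched below. The bandwidth wn, damping (critical), and time step dt are illustrative choices, not values taken from the paper; the point is that the filtered reference approaches a step smoothly, with bounded slope, instead of changing discontinuously:

```python
import numpy as np

# Critically damped 2nd-order prefilter:
#   rf'' + 2*wn*rf' + wn^2*rf = wn^2*r
wn, dt = 5.0, 1e-3                      # assumed bandwidth and step size
rf, rfd = 0.0, 0.0                      # filtered reference and its rate

r = 1.0                                 # unit step reference
history = []
for _ in range(4000):                   # 4 s of simulation
    rfdd = wn**2 * (r - rf) - 2.0 * wn * rfd
    rfd += dt * rfdd                    # semi-implicit Euler update
    rf += dt * rfd
    history.append(rf)

# The filtered reference settles on the step without overshoot.
print(abs(history[-1] - r) < 1e-2)      # → True
```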

Figure 16. An SMC with fast switching action.



Figure 17. An SMC with slow switching action.

Figure 18. Results of the LQR with minor disturbance.


Figure 19. Results of the CSMC with minor disturbance.

15.4. Performance with Major Disturbance. We simulate both controllers on the pendulum using the following disturbance: Brownian = 0.2, Gaussian = 0.5, Step = 0.5. The results of the LQR are shown in fig. 20. The CSMC results are shown in fig. 21. Neither of the controllers is able to converge completely to the position reference. This is because of the considerable disturbance being applied to the pendulum's angle. If we visualize the disturbance as coming from a finger pushing against the pendulum bob, we can see that if a controller insists on maintaining the pendulum position with no compromise, the result will be the pendulum toppling over. We can see, however, that the CSMC achieves much closer fidelity between the pendulum's position and the desired position.

15.5. Controller Mismatch. Two tests are performed in which the controller is mismatched to the actual system. This is done by designing the controller for the pendulum, and then changing the length of the pendulum. In the first test, the length of the pendulum is increased from 0.36 to 0.54. In the second, it is reduced to 0.18. The results for the LQR are shown in figs. 22 and 23; those for the CSMC are shown in figs. 24 and 25. Both controllers perform exceptionally well. In order to discriminate between the two controllers, we will instead try to mismatch them in another way. The



Figure 20. Results of the LQR with major disturbance.

Figure 21. Results of the CSMC with major disturbance.


Figure 22. The LQR is designed for a pendulum length of 0.36, and tested on one with length 0.54.

Figure 23. The LQR is designed for a pendulum length of 0.36, and tested on one with length 0.18.



Figure 24. The CSMC is designed for a pendulum length of 0.36, and tested on one with length 0.54.

Figure 25. The CSMC is designed for a pendulum length of 0.36, and tested on one with length 0.18.


Figure 26. The parameter alpha is increased until the LQR is no longer able to perform; alpha = 1.902.

controller is designed for the system x_dot = Ax + Bu and tested on the system x_dot = alpha*Ax + Bu. The stable range for alpha is then found. In the case of the LQR (fig. 26), 0 < alpha_LQR < 1.902. For the CSMC (fig. 27), 0 < alpha_CSMC < 3.360. It is interesting to note that, just before the controllers fail, they seem to be operating quite well. The degradation in performance comes suddenly.

16. Observations. Both the LQR and the CSMC were able to control the inverted pendulum in a robust fashion. The presence of a slew-limiting prefilter was seen to be necessary. In its absence, the CSMC became unstable when the sliding surface changed too suddenly. The CSMC had better disturbance-rejection qualities than the LQR. We see that, in all the situations tested, the CSMC generally maintained better fidelity between the pendulum states and the desired reference states than the LQR. However, it can also be seen that the CSMC demanded more controller action once the state was close to the desired reference. This controller action could be reduced by reducing epsilon.
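The mismatch sweep can be sketched on a toy second-order plant. A, B, K, and the sweep grid below are illustrative assumptions, not the paper's pendulum or its gains; the procedure (scale A, check closed-loop eigenvalues, find the stability boundary) is the same:

```python
import numpy as np

# Toy plant with one unstable open-loop pole (illustrative only).
A = np.array([[0.0, 1.0],
              [5.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
K = np.array([[10.0, 3.0]])             # stabilizing state feedback u = -Kx

def is_stable(a):
    # Closed loop under plant mismatch a:  x_dot = (a*A - B*K) x
    return np.all(np.linalg.eigvals(a * A - B @ K).real < 0)

# Sweep alpha upward until the closed loop first goes unstable.
alphas = np.arange(0.1, 5.0, 0.01)
stable = [a for a in alphas if is_stable(a)]
print(stable[0], stable[-1])            # the stable range found by the sweep
```

For this particular toy loop the boundary sits near alpha = 2; the paper's experiment does the same sweep on the simulated pendulum to obtain the 1.902 and 3.360 limits.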



Figure 27. The parameter alpha is increased until the CSMC is no longer able to perform; alpha = 3.360.

Based on the amount of mismatch that the controllers could suffer without instability arising, the CSMC seemed to be the more robust controller for the inverted pendulum. Several design decisions were made based on human judgement, such as the choice of Q for the LQR or the choice of controller poles for the CSMC. Care was taken to maintain impartiality in design, and both controllers were refined separately in order to ensure performance close to optimal. For this reason, it is reasonable to expect the performance of the controllers on the inverted pendulum to be indicative of the control algorithms' characteristics.

References
[1] G. F. Franklin, J. D. Powell, and M. L. Workman, Digital Control of Dynamic Systems. Addison Wesley, 1994.
[2] W. S. Levine, Ed., The Control Handbook. CRC Press/IEEE Press, 1996.
[3] V. I. Utkin, "Variable structure systems with sliding modes," IEEE Transactions on Automatic Control, vol. AC-22, pp. 212-222, April 1977.
[4] J. Y. Hung, W. Gao, and J. C. Hung, "Variable structure control: A survey," IEEE Transactions on Industrial Electronics, vol. 40, pp. 2-22, February 1993.
[5] J. J. E. Slotine and J. A. Coetsee, "Adaptive sliding controller synthesis for non-linear systems," International Journal of Control, vol. 43, no. 6, pp. 1631-1651, 1986.
[6] H.-S. Tan and M. Tomizuka, "An adaptive sliding mode vehicle traction controller design," in Proceedings of the 1990 American Control Conference, pp. 1856-1861, May 1990.
[7] P. Buttolo, P. Braathen, and B. Hannaford, "Sliding control of force reflecting teleoperation: Preliminary studies," PRESENCE, vol. 3, pp. 158-172, Spring 1994.
[8] J. Ackermann and V. Utkin, "Sliding mode control design based on Ackermann's formula." Made temporarily available as an Internet resource, 1996.


[9] H. N. Iordanou and B. W. Surgenor, "Experimental evaluation of the robustness of discrete sliding mode control versus linear quadratic control," IEEE Transactions on Control Systems Technology, vol. 5, pp. 254-260, March 1997.
[10] Staff of Research and Education Association, Handbook of Mathematical, Scientific, and Engineering Formulas, Tables, Functions, Graphs, Transforms. Research and Education Association, 1994.
[11] K. Furuta, "Sliding mode control of a discrete system," Systems & Control Letters, vol. 14, pp. 145-152, 1990.
[12] F. Zhou and D. G. Fisher, "Continuous sliding mode control," International Journal of Control, vol. 55, no. 2, pp. 313-327, 1992.
[13] V. I. Utkin and K. D. Yang, "Methods for constructing discontinuity planes in multidimensional variable structure systems," Automation and Remote Control, vol. 39, no. 10, pp. 1466-1470, 1978.
[14] P. K. Nandam and P. C. Sen, "Control laws for sliding mode speed control of variable speed drives," International Journal of Control, vol. 56, no. 5, pp. 1167-1186, 1992.
[15] Y. Li, K. C. Ng, D. J. Murray-Smith, G. J. Gray, and K. C. Sharman, "Genetic algorithm automated approach to the design of sliding mode control systems," International Journal of Control, vol. 63, pp. 721-739, March 1996.
[16] A. Ishigame, T. Furukawa, S. Kawamoto, and T. Taniguchi, "Sliding mode controller design based on fuzzy inference for non-linear systems," IEEE Transactions on Industrial Electronics, vol. 40, pp. 64-70, February 1993.
[17] A. Suyitno, F. Fujikawa, H. Kobayashi, and Y. Dote, "Variable-structured robust controller by fuzzy logic for servomotor," IEEE Transactions on Industrial Electronics, vol. 40, pp. 80-88, February 1993.
[18] K. C. Ng, Y. Li, D. J. Murray-Smith, and K. C. Sharman, "Genetic algorithms applied to fuzzy sliding mode controller design," in Proceedings of the 1st IEE/IEEE International Conference on Genetic Algorithms in Engineering Systems, pp. 220-225, 1995.
[19] A. G. Barto, R. S. Sutton, and C. W. Anderson, "Neuronlike adaptive elements that can solve difficult learning control problems," IEEE Transactions on Systems, Man, and Cybernetics, vol. SMC-13, pp. 834-846, 1983.
[20] G.-W. van der Linden and P. F. Lambrechts, "H-infinity control of an experimental inverted pendulum with dry friction," IEEE Control Systems, pp. 44-50, August 1993.
[21] The University of Michigan, "Control tutorials for Matlab." Internet resource, 1996. http://rclsgi.eng.ohio-state.edu/matlab/index.html.