
Lecture Notes

BASIC CONTROL THEORY


Module 1
Modelling of Dynamic Systems
(Linear system modelling, ODEs - continuous-time models, Laplace transform)

AUGUST 2005
Prepared by Dr. Hung Nguyen
TABLE OF CONTENTS

Table of Contents..............................................................................................................................i
List of Figures..................................................................................................................................ii
List of Tables ................................................................................................................................. iii
References ......................................................................................................................................iv
Objectives ........................................................................................................................................v

1. Mathematical Modelling of Dynamic Systems ...........................................................................1


2. Continuous Time Models – Ordinary Differential Equations .....................................................1
3. General Modelling Principles......................................................................................................2
3.1 Mechanical Systems .............................................................................................................2
Example 1 A Simple Pendulum..............................................................................................2
Example 2 A Mass-Damper System.......................................................................................2
3.2 Liquid Level Storage Systems ..............................................................................................4
Example 3 A Liquid Level Storage System ...........................................................................4
3.3 Electrical Systems.................................................................................................................7
Example 4 Closed-Loop RLC Circuit.....................................................................7
4. Review of Laplace Transform .....................................................................................................8
4.1 Laplace Transform ................................................................................................................8
4.2 Laplace Transform Theorems .............................................................................................10
4.3 Applications of Laplace Transform.....................................................................................13
Example 5.............................................................................................................................13
Example 6.............................................................................................................................15
Example 7.............................................................................................................................16
Example 8.............................................................................................................................17
Example 9.............................................................................................................................19
Example 10...........................................................................................................................19
Example 11...........................................................................................................................20
Example 12...........................................................................................................................21
4.4 Partial Fraction Expansion with MATLAB ........................................................................23
Example 13...........................................................................................................................23
Example 14...........................................................................................................................24
Summary of Module 1...................................................................................................................26
Exercises........................................................................................................................................26
Appendix – Numerical Integration Methods .................................................................................30
Sample Program in MATLAB ..................................................................................................31

LIST OF FIGURES

Figure 1 ...........................................................................................................................................3
Figure 2 ...........................................................................................................................................4
Figure 3 ...........................................................................................................................................4
Figure 4 ...........................................................................................................................................7
Figure 5 .........................................................................................................................................17
Figure 6 .........................................................................................................................................20
Figure 7 .........................................................................................................................................22
Figure 8 .........................................................................................................................................26
Figure 9 .........................................................................................................................................26
Figure 10........................................................................................................................................27
Figure 11 ........................................................................................................................................27
Figure 12........................................................................................................................................28
Figure 13........................................................................................................................................28
Figure 14........................................................................................................................................28
Figure 15........................................................................................................................................29
Figure A.1 ......................................................................................................................................32

LIST OF TABLES

Table 1 ...........................................................................................................................................2
Table 2 ...........................................................................................................................................9
Table 3 .........................................................................................................................................11

REFERENCES

Kamen, Edward and Heck, Bonnie S. (1997), Fundamentals of Signals and Systems Using
MATLAB®, Englewood Cliffs, New Jersey, USA.

Kuo, Benjamin C. (1995), Automatic Control Systems, Prentice-Hall International Inc., Upper
Saddle River, New Jersey, USA.

Ogata, Katsuhiko (1997), Modern Control Engineering, 3rd Edition, Prentice-Hall International
Inc., Upper Saddle River, New Jersey, USA.

Richards, R.J. (1993), Solving Problems in Control, Longman Group UK Ltd, Harlow, Essex,
UK.

Seborg, Dale E., Edgar, Thomas F. and Mellichamp, Duncan A. (2004), Process Dynamics and
Control, 2nd Edition, John Wiley & Sons, Inc., Hoboken, New Jersey, USA.

Taylor, D.A. (1987), Marine Control Practice, Anchor-Brendon Ltd, Tiptree, Essex, UK.

AIMS

1.0 Explain principles of modelling linear systems, methods of representing dynamic systems in
terms of mathematics, and solving ordinary differential equations.

LEARNING OBJECTIVES

1.1 Explain the way to use ordinary differential equations to model dynamic systems and
principles of modelling.

1.2 Apply physical laws to generate differential equations for certain systems such as mechanical
systems, liquid tank systems and electrical systems.

1.3 Explain Laplace transform, Laplace transform theorems and inverse Laplace transform.

1.4 Apply Laplace transform for solution of ordinary differential equations.

1. Mathematical Modelling of Dynamic Systems
In order to analyse and design a control system knowledge of its behaviour is required. This
behaviour will be considered in a mathematical sense and a system description using
mathematical terms is therefore required. The relationships between the variable quantities in a
system will then form what is called a mathematical model of the system. These mathematical
models are usually derived from applications of the laws of physics, e.g. Newton’s laws, laws of
energy and momentum conservation. The way to find a mathematical model of a system is called
modelling.

In studying control systems, a mathematical model of a dynamic system is defined as a set of
equations that represents the dynamics of the system accurately or, at least, fairly well. Note that
a mathematical model is not unique to a given system. A system may be represented in many
different ways and, therefore, may have many mathematical models, depending on one's
perspective.

The dynamics of many systems, whether they are mechanical, electrical, thermal, pneumatic,
hydraulic, and so on, may be described in terms of ordinary differential equations (continuous
time systems) or difference equations (discrete time systems). In this subject, only linear system
modelling is considered.

2. Continuous Time Models - Ordinary Differential Equations


A dynamic system is represented by an ordinary differential equation which is constructed based
on the physical phenomena related to the process in the system. The behaviour and characteristics of
the system are then analysed based on solutions of the ordinary differential equation. In general, a
first order differential equation is of the following form:

ẏ = f(y, u, t)    (1)

where u, y, and t are input, output and time, respectively. A second order differential equation is
of the following general form:

ÿ = f(ẏ, y, u, t)    (2a)

Higher order differential equations can be expressed in the following form:

y^(n) = f(y^(n−1), y^(n−2), …, y^(1), y, u^(m), u^(m−1), …, u^(1), u, t)    (2b)

In control engineering, the more convenient form of a general ODE, in which the left-hand side
consists of output(s) and the right-hand side consists of input(s), can be expressed by

an y^(n) + an−1 y^(n−1) + … + a1 y^(1) + a0 y = bm u^(m) + bm−1 u^(m−1) + … + b1 u^(1) + b0 u    (2c)

Solutions of many ordinary differential equations can be found by the classical mathematical
methods. High memory and fast speed computers and high performance programming languages,
however, allow solutions of difficult differential equations to be found by numerical methods.

3. General Modelling Principles


Modelling is very important for the analysis and design of control systems. Modelling is based on
physical laws such as Newton's laws, the law of conservation of energy, the law of conservation of
mass, etc. The following table summarizes general modelling principles (Seborg et al., 2004).

Table 1 A systematic approach for developing dynamic models

1. State the modelling objectives and the end use of the model. Then determine the required
levels of model detail and model accuracy.
2. Draw schematic diagram of the process and label all process variables.
3. List all of the assumptions involved in developing the model. Try to be parsimonious:
the model should be no more complicated than necessary to meet the modelling
objectives.
4. Determine whether spatial variations of process variables are important. If so, a partial
differential equation model will be required.
5. Write appropriate conservation equations (mass, component, energy, and so forth).
6. Introduce equilibrium relations and other algebraic equations (from thermodynamics,
transport phenomena, chemical kinetics, equipment geometry, etc.).
7. Perform a degrees of freedom analysis to ensure that the model equations can be solved.
8. Simplify the model. It is often possible to arrange the equations so that the output
variables appear on the left side and the input variables appear on the right side. This
model form is convenient for computer simulation and subsequent analysis.
9. Classify inputs as disturbance variables or as manipulated variables.

The following examples illustrate the modelling of linear systems.

3.1 Mechanical Systems

Mechanical systems include a very wide range of everyday items which may frequently be
reduced to a standard first- or second-order model. In combination with electrical components,
mechanical systems give rise in turn to electromechanical systems, e.g. motors and generators.
The following examples illustrate mechanical systems.

Example 1 – Simple pendulum

By laws of mechanics, the motion of a simple pendulum in Figure 1 can be expressed by the
following second order differential equation.

I d²θ(t)/dt² + MgL sin θ(t) = L x(t)    (3)

where g is the gravity constant and I is the moment of inertia given by I = ML².


Figure 1 Simple pendulum

Equation (3) is nonlinear. A linear differential equation for the simple pendulum can be
constructed as follows. If the magnitude of the angle θ(t) is small, sin θ(t) is approximately
equal to θ(t), and the above nonlinear equation can be approximated by the following
linear differential equation.

I d²θ(t)/dt² + MgL θ(t) = L x(t)    (4)

or
d²θ(t)/dt² + (MgL/I) θ(t) = (L/I) x(t)    (5)

The representation (4) or (5) is called a small-signal model since it is a good approximation to the
given system if θ( t ) is small. It is possible to derive an explicit expression for the output θ( t ) of
the small-signal model.

End of Example 1

Example 2 Mass-damper systems

Figure 2 shows a simple mass-damper system. A force is applied directly to the mass which is
separated from a fixed rigid surface by a light damper with damping coefficient λ . If the system
is initially at rest, derive the relationship between the movement of the mass and the
independent variable which is the forcing input.


Figure 2 Forced mass-damper system

For the configuration in Figure 2, equating net applied forces to the acceleration force on the
mass, or using d'Alembert's principle, gives the dynamic equation.

For an applied external force P and a damping force −λ dx/dt, the equation of motion is

P − λ dx/dt = m d²x/dt²    (6)
or

d²x/dt² + (λ/m) dx/dt = P/m    (7)

End of Example 2

3.2 Liquid Level Storage Systems

Example 3 Liquid Level Storage Model

A typical liquid storage process is shown in Figure 3 where qi and qo are volumetric inlet and
outlet flow rates, respectively.


Figure 3 Liquid level storage system

A mass balance yields:

d(ρV)/dt = ρqi − ρqo    (8)

Assume that liquid density ρ is constant and the tank is cylindrical with cross-sectional area A.
The volume of liquid in the tank can be expressed as V = Ah, where h is the liquid level (or head).
The above equation becomes

A dh/dt = qi − qo    (9)
dt

This equation appears to be a volume balance. However, in general, volume is not conserved for
fluids. This result occurs in this example due to the constant-density assumption. There are three
important variations of the liquid storage process:

1. The inlet or outlet flow rates might be constant; for example, the exit flow rate q might be kept
constant by a constant-speed, fixed-volume (metering) pump. An important consequence of this
configuration is that the exit flow rate is then completely independent of liquid level over a wide
range of conditions. Consequently, q = q̄, where q̄ is the steady-state value. For this situation,
the tank operates essentially as a flow integrator.

2. The tank exit line may function simply as a resistance to flow from the tank (distributed along
the entire line), or it may contain a valve that provides significant resistance to flow at a single
point. In the simplest case, the flow may be assumed to be linearly related to the driving force,
the liquid level, i.e.

h = qR (10)

where R is the resistance of the line (s/m²). Rearranging (10) gives the following flow-head
equation:

q = (1/R) h    (11)

Substituting this into equation (9) gives the first-order differential equation:

A dh/dt = qi − (1/R) h    (12)
or
A dh/dt + (1/R) h = qi    (13a)

This model shows the relationship between the level (h) and the inlet flow rate (qi).

3. A more realistic expression for flow rate q can be obtained when a fixed valve has been placed
in the exit line and turbulent flow can be assumed. The driving force for flow through the valve is
the pressure drop ∆P .

∆P = P − Pa (13b)

where P is the pressure at the bottom of the tank and Pa is the pressure at the end of the exit line.
It is assumed that Pa is the ambient pressure. If the valve is considered to be an orifice, a
mechanical energy balance, or the Bernoulli equation, can be used to derive the following relation.

q = k √((P − Pa)/ρ)    (13c)

where k is a constant. The value of k depends on the particular valve and the valve setting (how
much it is open).

The pressure P at the bottom of the tank is related to liquid level h by a force balance,

P = Pa + (ρg/gc) h    (13d)

where the acceleration of gravity g and conversion factor gc are constants. Substituting (13c) and
(13d) into (9) yields the dynamic model as follows.

A dh/dt = qi − K√h    (13e)

where K = k√(g/gc). This model is nonlinear due to the square-root term.

The liquid storage process discussed above could be operated by controlling the liquid level in
the tank or allowing the level to fluctuate without attempting to control it. For the latter case
(operation as a surge tank), it may be of interest to predict whether the tank would overflow or
run dry for particular variations in the inlet and outlet flow rates. Thus, the dynamics of the
process may be important even when automatic control is not utilized.

End of Example 3

3.3 Electrical Systems

An electrical circuit or network is another type of physical system. It comprises resistors,
capacitors and inductors, and usually one or more energy sources such as a battery or generator.

[Figure 4(a) element relationships: resistor vr = ir R, ir = vr/R; inductor vL = L diL/dt, iL = (1/L)∫vL dt; capacitor vC = (1/C)∫iC dt, iC = C dvC/dt. Figure 4(b): series R-L-C loop driven by a source voltage v.]
Figure 4 Electrical systems: (a) passive circuit element relationships, (b) closed-loop electrical
circuit

Example 4 Closed-loop electrical circuit

The resistors, inductors and capacitors in a circuit are considered passive elements and their
current-to-voltage relationships are shown in Figure 4(a). The active electrical elements can be
considered to exist as voltage sources or current sources, where the voltage or current is
considered to be constant throughout load changes. In analysing or determining the mathematical
model of an electrical circuit, use is made of Kirchhoff’s voltage law, which states that the algebraic
sum of the voltages around a closed loop is zero. Consider the circuit shown in Figure 4(b):

vr + vL + vC − v = 0 (14)

or

iR + L di/dt + (1/C)∫i dt = v    (15)

Depending upon the variable of interest, the above expression may be rearranged in a variety of
forms. If it were required to remove the integral from the expression, use can be made of the
relationship i = dq/dt, where q is the electrical charge accumulating on the capacitor.
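For instance (a standard manipulation, added here for completeness), substituting i = dq/dt into (15) gives a purely differential form:

L d²q/dt² + R dq/dt + (1/C) q = v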

End of Example 4

4. Review of Laplace Transform
4.1 Laplace Transform

Differential equations or mathematical models can be produced to represent all or part of a
control system. The system response will vary according to the input received and, in order to
determine this response, the differential equations must be solved. The form of the solution, since
time is often the independent variable, will usually include two terms. These are the steady state
and transient solutions. The steady state solution is obtained when all the initial conditions are
zero. The transient solution represents the effects of the initial conditions. Both of these parts of
the solution must be examined with respect to control systems. Classical mathematical techniques
create complex solutions for linear differential equations beyond first-order. Use can be made of
the Laplace transform technique to simplify the mathematics and also provide a solution in the
two-term form that is required. Transforming, in this situation, involves changing differential
equations into an algebraic equation in some new quality. A useful analogy can be made to the
use of logarithms where numbers are converted to a different form so that multiplication, division,
raising, to a power, etc., become addition, substraction or simple multiplication. At the end of
these computations the number obtained is inverse transformed or antilogged to return to the
original system of numbers. It should be noted that the Laplace transform can only be used with
constant coefficient linear differential equations. However, only this type of equation will be
considered (Taylor, D.A., 1987).

The Laplace transform is one of the mathematical tools used for the solution of linear ordinary
differential equations. In comparison with the classical method of solving linear differential
equations, the Laplace transform method has the following two attractive features.

• The homogeneous solution and the particular integral are obtained in one operation.
• The Laplace transform converts the differential equation into an algebraic equation in s. It
is then possible to manipulate the algebraic equation by simple algebraic rules to obtain
the solution in the s-domain. The final solution is obtained by taking the inverse Laplace
transform.

As a result of Laplace transformation, a function of time, f(t), becomes a function of a new
domain, F(s). Upper-case letters are always used for the transformed function. The quantity s is a
complex variable and takes the form s = σ + jω, where σ (sigma) and ω (omega) are real numbers
and j = √(−1). The operational symbol indicating a transform is L. The actual transformation
involves multiplying the function f(t) by e^(−st) and then integrating the product with respect to
time over the interval t = 0 to t = ∞, i.e.


L{f(t)} = ∫₀^∞ f(t) e^(−st) dt = F(s)    (16)

The function f(t) must be real and continuous over the time interval considered, otherwise the
Laplace transform cannot be used.

The inverse transform is indicated by the operator L-1 such that

f ( t ) = L−1{F(s)} (17)

It is usual to employ tables of transform pairs for f(t) and the corresponding F(s). A number of
examples will, however, be provided to indicate the Laplace transform technique (Table 2).

Table 2 Some Laplace transform pairs

Time function f(t)                         Laplace transform, F(s)

Unit impulse, δ(t)                         1
Unit step, 1                               1/s
Unit ramp, t                               1/s²
nth-order ramp, tⁿ                         n!/s^(n+1)
Exponential decay, e^(−αt)                 1/(s + α)
Exponential rise, 1 − e^(−αt)              α/(s(s + α))
Exponential × t, te^(−αt)                  1/(s + α)²
sin ωt                                     ω/(s² + ω²)
cos ωt                                     s/(s² + ω²)

Obtain the Laplace transforms of the functions

1) f(t) = A
2) f(t) = 1
3) f(t) = At
4) f(t) = Ae^(−αt)

Assume f(t) = 0 for t < 0 in all cases.

1) f(t) = A, i.e. a step function of magnitude A.

L{f(t)} = ∫₀^∞ A e^(−st) dt = −(A/s) e^(−st) |₀^∞    (18)

This step function is undefined at t = 0, but

∫ A e^(−st) dt = 0  (integrated from t = 0⁻ to t = 0⁺)    (19)

Thus,

L{f(t)} = A/s = F(s)    (20)

2) f(t) = 1, i.e. a unit step function.


L{f(t)} = ∫₀^∞ e^(−st) dt = −(1/s) e^(−st) |₀^∞ = 1/s = F(s)    (21)

3) f(t) = At, i.e. a ramp function

L{f(t)} = ∫₀^∞ At e^(−st) dt = [−(At/s) e^(−st)]₀^∞ + (A/s) ∫₀^∞ e^(−st) dt = A/s² = F(s)    (22)

4) f(t) = Ae^(−αt), i.e. an exponential decay.

L{f(t)} = ∫₀^∞ A e^(−αt) e^(−st) dt = A ∫₀^∞ e^(−(α+s)t) dt = A/(s + α) = F(s)    (23)
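These results can be cross-checked symbolically in MATLAB. The short sketch below is an optional verification only; it assumes the Symbolic Math Toolbox is available.

syms t s A alpha positive
laplace(A, t, s)                 % step of magnitude A   -> A/s
laplace(sym(1), t, s)            % unit step             -> 1/s
laplace(A*t, t, s)               % ramp At               -> A/s^2
laplace(A*exp(-alpha*t), t, s)   % exponential decay     -> A/(s + alpha)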

4.2 Laplace Transform Theorems

A number of theorems relating to Laplace transforms are used when solving differential
equations.

Linearity theorem. Where a function is multiplied by a constant, the Laplace transform of the
product is the constant multiplied by the function transform. Hence

L{Af ( t )} = AF(s) (24)

Where the sum of two functions is transformed, it becomes the sum of the Laplace transforms of
the individual functions. Hence

L{f 1 ( t ) ± f 2 ( t )} = F1 (s) ± F2 (s) (25)

This is sometimes referred to as the principle of superposition.

Differential theorem. The Laplace transform of the first derivative of a function f(t) is

L{df(t)/dt} = sF(s) − f(0)    (26)

where f(0) is the value of the function f(t) evaluated at time t = 0. The Laplace transform of the
second derivative of f(t) is

L{d²f(t)/dt²} = s²F(s) − sf(0) − df(0)/dt    (27)

where df(0)/dt is the value of the first derivative of the function at time t = 0.

Integration theorem. The Laplace transform of the integral of a function f(t) is

L{∫ f(t)dt} = F(s)/s + [∫ f(t)dt]t=0 /s    (28)

where [∫ f(t)dt]t=0 is the value of the integral of the function evaluated at time t = 0.

Initial value theorem. The value of the function f(t), as time t approaches zero, is given by

lim(t→0) f(t) = lim(s→∞) sF(s)    (29)

Final value theorem. The value of the function f(t), as time t approaches infinity, is given by

lim(t→∞) f(t) = lim(s→0) sF(s)    (30)
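As a quick worked check (added here for illustration), applying (30) to the exponential rise entry of Table 2, F(s) = α/(s(s + α)):

lim(t→∞) f(t) = lim(s→0) s·α/(s(s + α)) = lim(s→0) α/(s + α) = 1

which agrees with 1 − e^(−αt) → 1 as t → ∞.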

Time shift theorem. The Laplace transform of a time delayed function f ( t − τ) with respect to
the function f(t) is,

L{f ( t − τ)} = e −sτ F(s) (31)

where τ is the value of the time delay in seconds.

Table 3 Summary of theorems of the Laplace transforms

Multiplication by a constant:   L[kf(t)] = kF(s)

Sum and difference:   L[f1(t) ± f2(t)] = F1(s) ± F2(s)

Differentiation:   L[df(t)/dt] = sF(s) − f(0)
   L[dⁿf(t)/dtⁿ] = sⁿF(s) − s^(n−1)f(0) − s^(n−2)f^(1)(0) − … − sf^(n−2)(0) − f^(n−1)(0),
   where f^(k)(0) = d^k f(t)/dt^k evaluated at t = 0

Integration:   L[∫₀^t f(τ)dτ] = F(s)/s
   L[∫₀^t1 ∫₀^t2 … ∫₀^tn f(τ) dτ dt1 dt2 … dt(n−1)] = F(s)/sⁿ

Shift in time:   L[f(t − T)us(t − T)] = e^(−Ts) F(s)

Initial-value theorem:   lim(t→0) f(t) = lim(s→∞) sF(s)

Final-value theorem:   lim(t→∞) f(t) = lim(s→0) sF(s), provided sF(s) does not have poles on or to the right of the imaginary axis in the s-plane

Complex shifting:   L[e^(∓αt) f(t)] = F(s ± α)

Real convolution:   F1(s)F2(s) = L[∫₀^t f1(τ)f2(t − τ)dτ] = L[∫₀^t f2(τ)f1(t − τ)dτ] = L[f1(t) * f2(t)]

Where a control system is represented by an ordinary differential equation and time is the
independent variable it can be solved using Laplace transforms. The first step is to transform the
equation term by term, taking due account of any initial conditions. The transformed equations in
s can then be solved for the variable of interest. The equation in s must then be inversely
transformed to obtain the variable as a function of time. Simultaneous equations can be handled in
a similar way: the solution for each variable takes place in terms of s, and the values obtained are
inversely transformed.

The inverse transform L-1 is usually obtained by reference to a set of transform tables. Where this
is not immediately possible the function in s must be rearranged into a suitable form. Control
system response functions often appear as a ratio of polynomials, e.g.

F(s) = N(s)/D(s) = (am s^m + am−1 s^(m−1) + … + a1 s + a0)/(bn s^n + bn−1 s^(n−1) + … + b1 s + b0)    (32)

where m and n are real positive integers and all a's and b's are real constants. The highest power
of s in the denominator must be greater than that in the numerator, which is usually the case with
practical control systems. The partial fraction technique is the most commonly used approach
when solving these functions. The denominator polynomial must first be factorized, i.e. the roots
must be known. Hence

F(s) = N(s)/((s + r1)(s + r2)…(s + rn))    (33)

where −r1, −r2, …, −rn are the roots of D(s), which may be real or complex numbers. The
factors of the denominator, e.g. (s + r1), should be recognizable as denominators in the table of
transform pairs (Table 2).

Linear ordinary differential equations can be solved by the Laplace transform method with the
aid of the theorems on Laplace transform given in Table 3. The procedure is outlined as follows:

1. Transform the differential equation to the s-domain by Laplace transform using the Laplace
transform table.
2. Manipulate the transformed algebraic equation and solve for the output variable.
3. Perform partial-fraction expansion to the transformed algebraic equation.
4. Obtain the inverse Laplace transform from the Laplace transform table.

The following examples illustrate how to apply the Laplace transform method to the solution of
linear ODEs.

4.3 Applications of Laplace Transform

Example 5 Determine the inverse Laplace transform of the function F(s), where

F(s) = (s + 2)/(s(s + 1)²(s + 3))    (34)

Expanding F(s) into partial fractions:

F(s) = A/s + B/(s + 1) + C/(s + 1)² + D/(s + 3) = (s + 2)/(s(s + 1)²(s + 3))    (35)

where A, B, C and D are constants. The value of these constants must now be found by algebraic
methods. The evaluation of residues or the “cover-up rule” will be used.

If s is made equal to the value of any of the roots, i.e. 0, -1, or –3, then F(s) becomes infinite.
However, if both sides of the equation are multiplied by a factor (s + r) where r is the root, then a
function of s will be left which has a value at s = -r, or the value of F(s) if the factor (s + r) were
covered up. Hence

[(s + 2)/(s(s + 1)²(s + 3))] (s + 3) = [A/s + B/(s + 1) + C/(s + 1)² + D/(s + 3)] (s + 3)    (36)

Let s = -3, then

(−3 + 2)/(−3(−3 + 1)²) = D = −1/−12,  ∴ D = 1/12    (37)

Now

[(s + 2)/(s(s + 1)²(s + 3))] s = [A/s + B/(s + 1) + C/(s + 1)² + D/(s + 3)] s    (38)

Let s = 0, then

A = 2/3    (39)

Also,

[(s + 2)/(s(s + 1)²(s + 3))] (s + 1)² = [A/s + B/(s + 1) + C/(s + 1)² + D/(s + 3)] (s + 1)²    (40)

Let s = -1, then

(−1 + 2)/(−1(−1 + 3)) = −1/2 = C,  ∴ C = −1/2    (41)

It is now necessary to substitute the values of A, C, and D and evaluate the equation at some
convenient value, e.g. s = 1, in order to obtain B. Thus

F(s) = (s + 2)/(s(s + 1)²(s + 3)) = A/s + B/(s + 1) + C/(s + 1)² + D/(s + 3)    (42)

Substitute for A, C, and D,

(s + 2)/(s(s + 1)²(s + 3)) = 2/(3s) + B/(s + 1) − 1/(2(s + 1)²) + 1/(12(s + 3))    (43)

Let s = 1,

3/16 = 2/3 + B/2 − 1/8 + 1/48,  ∴ B = −3/4    (44)

All the constants can now be substituted into the original partial fraction expression such that

F(s) = (s + 2)/(s(s + 1)²(s + 3)) = 2/(3s) − 3/(4(s + 1)) − 1/(2(s + 1)²) + 1/(12(s + 3))    (45)

Each of the denominators can be readily inverse transformed by reference to a table of
transform pairs (Table 2).

L⁻¹{F(s)} = f(t) = 2/3 − (3/4)e^(−t) − (1/2)te^(−t) + (1/12)e^(−3t)    (46)

The form of the solution can be seen to be made up of a steady-state term, i.e. 2/3, and transient
terms, i.e. the exponentials, which will all die away as time increases towards infinity.
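The expansion can also be cross-checked numerically with MATLAB's residue command (described in Section 4.4); this is an optional sketch:

num = [1 2];                               % N(s) = s + 2
den = conv([1 0], conv([1 2 1], [1 3]));   % D(s) = s(s + 1)^2(s + 3)
[r, p, k] = residue(num, den)
% expected residues: 1/12 at s = -3, -3/4 and -1/2 at the double pole s = -1, 2/3 at s = 0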

End of Example 5

Example 6 Solving Ordinary Differential Equations by Laplace transform

Given the differential equation

d²y(t)/dt² + 3 dy(t)/dt + 2y(t) = 5us(t)    (47)

where us(t) is the unit-step function. The initial conditions are y(0) = −1 and
y^(1)(0) = dy(t)/dt|t=0 = 2. To solve the differential equation, the Laplace transform of both sides
of the above equation is first taken:

s²Y(s) − sy(0) − y^(1)(0) + 3sY(s) − 3y(0) + 2Y(s) = 5/s    (48)

Substituting the values of the initial conditions into this equation and solving for Y(s ) , the
following is obtained

Y(s) = (−s² − s + 5)/(s(s² + 3s + 2)) = (−s² − s + 5)/(s(s + 1)(s + 2))    (49)

Equation (49) can be expanded by partial-fraction expansion to give

Y(s) = 5/(2s) − 5/(s + 1) + 3/(2(s + 2))    (50)

Taking the inverse Laplace transform of this equation, the complete solution of the given
differential equation is obtained:

y(t) = 5/2 − 5e^(−t) + (3/2)e^(−2t),  t ≥ 0    (51)
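As an optional numerical check (a sketch, not part of the original text), the partial fractions in (50) can be reproduced with residue:

num = [-1 -1 5];                          % numerator of Y(s) in (49)
den = conv([1 0], conv([1 1], [1 2]));    % s(s + 1)(s + 2)
[r, p, k] = residue(num, den)
% expected residues: 3/2 at s = -2, -5 at s = -1, 5/2 at s = 0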
2 2

End of Example 6

Example 7 Complex-Conjugate Poles

Consider the following ordinary differential equation

ÿ + 2ẏ + 5y = 5u    (52)

where y and u are the output and the input (a unit impulse function, U(s) = 1). The initial conditions
are y(0) = 0, ẏ(0) = 0.

Taking the Laplace-transform on both sides of the above equation, we obtain

s²Y(s) + 2sY(s) + 5Y(s) = 5    (53)

or

Y(s) = 5/(s² + 2s + 5)    (54)

This function can be rewritten as

Y(s) = 5/((s + 1 − 2j)(s + 1 + 2j))    (55)

Y(s) can be expanded as

Y(s) = 5/((s + 1 − 2j)(s + 1 + 2j)) = A/(s + 1 − 2j) + B/(s + 1 + 2j)    (56)

The coefficients in (56) are determined as

A = (s + 1 − 2j)Y(s)|s=−1+2j = 5/(4j)    (57)
B = (s + 1 + 2j)Y(s)|s=−1−2j = −5/(4j)    (58)

Substituting the values of A and B into (56), we obtain

Y(s) = 5/((4j)(s + 1 − 2j)) + 5/((−4j)(s + 1 + 2j)) = (5/(4j)) [1/(s + 1 − 2j) − 1/(s + 1 + 2j)]    (59)

Taking the inverse Laplace transform on both sides of equation (59), and using e^(jat) = cos at + j sin at, gives

y(t) = (5/(4j)) e^(−t)(e^(2jt) − e^(−2jt)) = (5/(4j)) e^(−t)[(cos 2t + j sin 2t) − (cos(−2t) + j sin(−2t))]
     = (5/(4j)) e^(−t)(2j sin 2t) = (5/2) e^(−t) sin 2t,  t ≥ 0    (60)
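A quick symbolic cross-check (an optional sketch; it assumes the Symbolic Math Toolbox is available):

syms s t
ilaplace(5/(s^2 + 2*s + 5))    % should return (5*exp(-t)*sin(2*t))/2, i.e. (5/2)e^(-t) sin 2t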

End of Example 7

Example 8
A mechanical system shown in the following figure is at rest before the excitation force P sin ωt is
applied. Derive the complete solution y(t) using the Laplace transform. The displacement y is
measured from the equilibrium position. Assume that the system is under-damped. Use the values
P = 20 N, f = 10 Hz (ω = 2πf), m = 200 kg, b = λ = 100 Ns/m, k = 600 N/m, y(0) = 0 and ẏ(0) = 0.


Figure 5 Spring-mass-damper system

Solution
The equation of motion for the system is

mÿ + bẏ + ky = P sin ωt    (61)

Taking the Laplace transform of both sides with initial conditions y(0) = 0 and ẏ(0) = 0 yields

(ms² + bs + k)Y(s) = Pω/(s² + ω²)    (62)

or

Y(s) = [Pω/(s² + ω²)] · [1/(ms² + bs + k)]    (63)

Since the system is underdamped, Y(s) can be written as follows.

Y(s) = (Pω/m) · [1/(s² + ω²)] · [1/(s² + 2ωnξs + ωn²)]    (64)

where 0 < ξ < 1, ωn = √(k/m) and ξ = b/(2√(mk)). Y(s) can be expanded as
Y(s) = (Pω/m) [ (as + c)/(s² + ω²) + (−as + d)/(s² + 2ωnξs + ωn²) ]    (65)

By some calculations, it can be found that

a = −2ωnξ/[(ωn² − ω²)² + 4ωn²ξ²ω²],  c = (ωn² − ω²)/[(ωn² − ω²)² + 4ωn²ξ²ω²],
d = [4ξ²ωn² − (ωn² − ω²)]/[(ωn² − ω²)² + 4ωn²ξ²ω²]    (66)

Hence

⎡ − 2ξωn s + ω2n − ω2 ( ) ⎤
⎢ ⎥
Pω 1 ⎢ s 2 + ω2 ⎥
Y(s) = (67)
( )
m ω2n − ω2 2 + 4ω2n ξ 2 ω2 ⎢ 2ξωn (s + ξωn ) + 2ξ 2 ω2n − ω2n − ω2 ( )

⎢+ s 2 + 2ωn ξs + ω2n ⎥
⎣ ⎦

Taking the inverse Laplace transform yields

y(t) = (Pω/m) · {1/[(ωn² − ω²)² + 4ωn²ξ²ω²]} · { −2ωnξ cos ωt + [(ωn² − ω²)/ω] sin ωt
       + 2ωnξ e^(−ωnξt) cos(ωn√(1 − ξ²) t)
       + {[2ωn²ξ² − (ωn² − ω²)]/(ωn√(1 − ξ²))} e^(−ωnξt) sin(ωn√(1 − ξ²) t) }    (68)

Notes:
1. Substituting the given values into Equation (68), one can plot y(t) with an M-file; a sketch is given below.
2. One can solve Equation (61) by one of the numerical integration methods in the Appendix or with a Simulink model.
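A minimal plotting sketch for Note 1 (added for illustration; the time span and step are arbitrary choices):

% Plot y(t) of Equation (68) for the given values
P = 20; f = 10; w = 2*pi*f;              % forcing amplitude (N) and frequency (rad/s)
m = 200; b = 100; k = 600;               % mass, damping, stiffness
wn = sqrt(k/m); xi = b/(2*sqrt(m*k));    % natural frequency and damping ratio
wd = wn*sqrt(1 - xi^2);                  % damped natural frequency
D  = (wn^2 - w^2)^2 + 4*wn^2*xi^2*w^2;   % common denominator in (66)-(68)
t  = 0:0.001:10;
y  = (P*w/m)/D * ( -2*wn*xi*cos(w*t) + (wn^2 - w^2)/w*sin(w*t) ...
     + 2*wn*xi*exp(-wn*xi*t).*cos(wd*t) ...
     + (2*wn^2*xi^2 - (wn^2 - w^2))/wd * exp(-wn*xi*t).*sin(wd*t) );
plot(t, y); grid
xlabel('Time (s)'); ylabel('y (m)')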

End of Example 8
Example 9 Recalling the equation for the mass-damper system in Example 2

d²y/dt² + (λ/m) dy/dt = P/m    (69)

The constant force is P = 20 N, m = 200 kg, λ = 100 Ns/m. Applying the Laplace transform, find
the solution of equation (69) with initial conditions y(0) = 0 and ẏ(0) = 0.

Solution
Taking the Laplace transform of both sides with zero initial conditions yields

(s² + ωs)Y(s) = c/s    (70)

where c = P/m and ω = λ/m.

Equation (70) can be rewritten as

Y(s) = c/(s²(s + ω))    (71)

Taking the inverse Laplace transform yields

y(t) = (c/ω²)(ωt − 1 + e^(−ωt))    (72)
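A quick numerical cross-check of (72) against (69) (an optional sketch; variable names are illustrative):

P = 20; m = 200; lambda = 100;
c = P/m; w = lambda/m;                   % c = 0.1 m/s^2, w = 0.5 1/s
f = @(t, z) [z(2); c - w*z(2)];          % state z = [y; dy/dt] for equation (69)
[t, z] = ode45(f, [0 20], [0; 0]);       % numerical solution
ya = (c/w^2)*(w*t - 1 + exp(-w*t));      % analytic solution (72)
plot(t, z(:,1), t, ya, '--'); grid
xlabel('Time (s)'); ylabel('y (m)'); legend('ode45', 'analytic')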

End of Example 9

Example 10
Recalling the equation for the liquid level storage system in Example 3

A dh/dt + (1/R) h = qi    (73)

It is assumed that the inlet flow rate qi is constant and the tank starts off empty. Applying the
Laplace transform, find the solution for the liquid level (qi = 0.01 m³/s, A = 3 m² and R = 10 s/m²).

Solution
Equation (73) can be rewritten as follows.
AR dh/dt + h = Rqi    (74)

Taking the Laplace transform of both sides with zero initial conditions yields

(ARs + 1)H(s) = Rqi (1/s)    (75)

H(s) = Rqi/(s(ARs + 1)) = (qi/A)/(s(s + 1/(AR)))    (76)

Taking inverse Laplace transform yields

h(t) = qiR (1 − e^(−t/(AR)))    (77)
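With the given values, (77) has time constant AR = 30 s and a steady-state level qiR = 0.1 m, which matches the Euler simulation in the Appendix. A short evaluation sketch (added for illustration):

qi = 0.01; A = 3; R = 10;            % m^3/s, m^2, s/m^2
t = 0:0.5:200;
h = qi*R*(1 - exp(-t/(A*R)));        % analytic level response (77)
plot(t, h); grid
xlabel('Time (s)'); ylabel('Level (m)')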

End of Example 10

Example 11
Let’s consider the LRC circuit shown in Figure 6. The circuit consists of an inductance L (henry),
a resistance R (ohm) and a capacitance C (farad).


Figure 6 LRC circuit

Generate a differential equation to represent the relationship between the output voltage eo across
the capacitor and the supply voltage ei. It is assumed that the initial conditions are zero and the
system is under-damped, i.e. the damping ratio ξ satisfies 0 < ξ < 1 . Applying Laplace transform
find the solution.

Solution
Applying Kirchhoff’s voltage law to the system, the following equations are obtained:

L di/dt + Ri + (1/C)∫i dt = ei    (78)

(1/C)∫i dt = eo    (79)

Equations (78) and (79) can be rearranged into

LC d²eo/dt² + RC deo/dt + eo = ei    (80)

Taking the Laplace transform of both sides with zero initial conditions, and with ei a step function
of magnitude Ei, yields

(LCs² + RCs + 1)Eo(s) = Ei/s    (81)
or

Eo(s) = Ei/(s(LCs² + RCs + 1))    (82)

Equation (82) can be rewritten as

Eo(s) = (Ei/LC) · 1/(s(s² + 2ωnξs + ωn²))    (83)

where ωn = 1/√(LC) and ξ = 0.5R√(C/L).

Equation (83) can be rearranged as

Eo(s) = [Ei/(LCωn²)] · ωn²/(s(s² + 2ωnξs + ωn²))    (84)

Taking inverse Laplace transform of (84) yields

eo(t) = [Ei/(LCωn²)] [1 − (1/√(1 − ξ²)) e^(−ξωnt) sin(ωn√(1 − ξ²) t + θ)]    (85)

where θ = cos⁻¹ξ (since the system is assumed to be under-damped, i.e. 0 < ξ < 1).
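For illustration, (85) can be checked against a numerical step response. The component values below are arbitrary assumptions (they are not given in the text), and the tf/step commands require the Control System Toolbox:

R = 1; L = 0.1; C = 0.01; Ei = 1;     % illustrative values giving an under-damped response
xi = 0.5*R*sqrt(C/L)                  % damping ratio, about 0.16 here
G = tf(Ei, [L*C R*C 1]);              % Eo(s)/Ei(s), scaled by the step height Ei
step(G); grid                         % should match eo(t) in (85)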

End of Example 11

Example 12 Ship steering dynamic model

The ship steering dynamic model representing ship motions in the horizontal plane can be expressed
by Nomoto's first-order model as follows.

T dr/dt + r = Kδ    (86)
dψ/dt = r    (87)

where r is the yaw rate (rad/s), ψ is the yaw angle (rad), δ is the rudder angle (rad), and T and K are
constants known as ship manoeuvrability indices. Using the Laplace transform, find the solution
(yaw angle) for a constant rudder angle δ if T = 7.5 seconds, K = 0.11, and the initial conditions are
ψ(0) = 0, r(0) = 0.

Solution
Equations (86) and (87) can be rewritten as


T d²ψ/dt² + dψ/dt = Kδ    (88)

Figure 7 Ship steering dynamics

Taking the Laplace transform of both sides with zero initial conditions, and with δ constant, yields

s(Ts + 1)ψ(s) = Kδ/s    (89)

or

ψ(s) = Kδ/(s²(Ts + 1))    (90)

Equation (90) can be rewritten as

ψ(s) = (Kδ/T) · 1/(s²(s + 1/T))    (91)
Taking the inverse Laplace transform of (91) yields

ψ(t) = (Kδ/T) T² (t/T − 1 + e^(−t/T)) = KδT (t/T − 1 + e^(−t/T))    (92)
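A plotting sketch of (92) (added for illustration; the rudder angle value is an assumption, e.g. 10 degrees, since it is not specified in the text):

T = 7.5; K = 0.11;
delta = 10*pi/180;                       % assumed constant rudder angle (rad)
t = 0:0.1:60;
psi = K*delta*T*(t/T - 1 + exp(-t/T));   % yaw angle response (92), in rad
plot(t, psi*180/pi); grid
xlabel('Time (s)'); ylabel('Yaw angle (deg)')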

End of Example 12

4.4 Partial Fraction Expansion with MATLAB

MATLAB has a command to obtain the partial-fraction expansion of N(s)/D(s). Consider the transfer
function (see Module 8)

N(s)/D(s) = num/den = (bm s^m + bm−1 s^(m−1) + … + b1 s + b0)/(an s^n + an−1 s^(n−1) + … + a1 s + a0)    (93)

where some of the ai and bj may be zero, except an and bm. In MATLAB, the row vectors num and den
specify the coefficients of the numerator and denominator of the transfer function. That is,

num = [bm bm−1 … b0]
den = [an an−1 … a0]

The command

[r, p, k] = residue(num,den)

finds the residues, poles and direct terms of a partial fraction expansion of the ratio of two
polynomials N(s) and D(s). The partial fraction expansion of N(s)/D(s) is given by

N(s)/D(s) = r(1)/(s − p(1)) + r(2)/(s − p(2)) + … + r(n)/(s − p(n)) + k(s)    (94)

where k(s) is a direct term.

Example 13
Given the following transfer function:

N(s)/D(s) = (2s³ + 5s² + 3s + 6)/(s³ + 6s² + 11s + 6)    (95)

Find its partial fraction expansion using MATLAB.

Solution

The commands:

num = [2 5 3 6];
den = [1 6 11 6];

[r,p,k] = residue(num,den)

give the following result.

r=
-6.0000
-4.0000
3.0000
p=
-3.0000
-2.0000
-1.0000
k=
2

The partial fraction expansion of N(s)/D(s) is as follows.

N(s)/D(s) = (2s³ + 5s² + 3s + 6)/(s³ + 6s² + 11s + 6) = −6/(s + 3) − 4/(s + 2) + 3/(s + 1) + 2    (96)

It should be noted that the commands

r = [-6 -4 3];
p = [-3 -2 -1];
k = 2;
[num, den] = residue(r,p,k)

where r, p, k are as given in the previous MATLAB output, converts the partial fraction
expansion back to the polynomial ratio N(s)/D(s) as follows:

num =
2.0000 5.0000 3.0000 6.0000

den =
1.0000 6.0000 11.0000 6.0000

End of Example 13

Example 14
Expand the following polynomial fraction into partial fraction with MATLAB.

N(s)/D(s) = (s² + 2s + 3)/(s + 1)³ = (s² + 2s + 3)/(s³ + 3s² + 3s + 1)    (97)

Solution
M-file:

num = [1 2 3];
den = [1 3 3 1];
[r, p, k] = residue(num, den)

The result is as follows:

r=
1.0000
-0.0000
2.0000
p=
-1.0000
-1.0000
-1.0000
k=
[]

Partial fraction is

N(s)/D(s) = 1/(s + 1) + 0/(s + 1)² + 2/(s + 1)³    (98)

End of Example 14

SUMMARY OF MODULE 1

Module 1 is summarized as follows.

• Modelling of dynamic systems: ordinary differential equations, general principles of
modelling, modelling of mechanical systems, liquid level storage systems and electrical
systems
• Laplace transforms: definition, Laplace transforms and inverse Laplace transform
• Application of Laplace transforms to solve ordinary differential equations.

Exercises
1. Cart on surface: We have a cart as shown in the following figure. Assume that the rotational
inertia of the wheels is negligible and that the friction retarding the motion of the cart is
proportional to the cart's speed. Write a differential equation representing the relationship
between the force (u) and the motion (y) of the cart.


Figure 8 Cart on surface

2. The following figure shows two alternative applications of force to a simple mass-damper
system. In the first case the force is applied directly to the mass, which is separated from a fixed
rigid surface by a light damper with damping coefficient λ. In the second case the restraining
surface is absent and the force P is applied to this end of the damper system; in this latter case the
independent variable is not the force but the velocity of movement of this part of the damper.
If the system is initially at rest in each case, derive the relationship between the movement of the
mass and the independent variable which is the forcing input.


Figure 9 Forced mass-damper system

3. Spring-damper system: A light spring-damper coupling may be composed of parallel or
serially connected components. Although one end of the system may be fixed or connected to an
inertial mass, the modelling and movement may be initially investigated when there is movement
at each end of the system and no mass in the system. (One of the velocity terms may be set to
zero by fixing that end.) Derive a relationship in each case in Figure 10 between the difference in
velocity between the endpoints of the system and the forces applied to the system.


Figure 10 Forced spring-damper systems

4. Mass-spring-damper system: Many mechanical systems can be represented as a combination of
one or more mass, spring and damper configurations. Such a representation may be used for
suspension systems, machine tool vibrations, linkage of vehicles, etc. One such arrangement is
shown in Figure 11. Relate the movement of the mass to the applied force P, forming an overall
transfer function between this and the mass displacement. How does this system respond to a step
force of 20 N? (Use values of m = 200 kg, λ = 100 Ns/m, k = 600 N/m.)


Figure 11 Mass-spring-damper system

5. Newton’s Cooling Law: A hot block with temperature Tm is put in a room with a constant
ambient temperature Ta. Newton’s law of cooling states that the rate of heat loss from the block is
proportional to the temperature difference. Write down a differential equation to represent this
model (k = proportionality constant, Tm = 50°C, Ta = 0°C). What is the temperature at time t = 1/k?


Figure 12 Newton’s cooling law

6. Thermal system: Consider the system shown in Figure 13. It is assumed that the tank is
insulated to eliminate heat loss to the surrounding air. It is also assumed that there is no heat storage
in the insulation and that the liquid in the tank is perfectly mixed so that it is at a uniform temperature.
Thus, a single temperature is used to describe the temperature of the liquid in the tank and the
outflowing liquid.

[Figure: insulated tank with heater and mixer; cold liquid flows in, hot liquid flows out]

Figure 13 Thermal system

7. Pneumatic systems have been developed into low pressure pneumatic controllers for industrial
control systems and they have been used extensively in industrial processes.

[Figure: gas flows through a resistance R into a tank of capacity C; supply pressure (P - pi), pressure in tank (P - po)]

Figure 14 Pneumatic systems

8. Hydraulic systems have a complexity of model which depends on the degree to which
properties such as fluid compressibility are taken into account. However, if these terms are
omitted then these systems give rise to model equations of standard form. This is also true of
pneumatic systems, although in this case the properties of the fluid will have more marked effect
on both the system behaviour and on the results predicted by an oversimplified model. Such
systems may also be subject to minor leakage flows, e.g. within the actuators, which lead to
additional model terms which it is often hard to quantify.

[Figure: hydraulic supply and sump connections, force F, displacements x and y, mass M, spring k, damper λ]

Figure 15 Hydraulic press schematic

Appendix - Numerical Integration Methods
Given the following differential equation

ẏ = f(y, u, t)

Euler’s methods and Runge-Kutta methods for the solution of this ordinary differential equation are summarized as follows:

Simple Euler’s Method

A simple but important numerical integration scheme is Euler’s method, where the numerical
solution is computed from:

yn+1 = yn + h f(yn, tn)

Improved Euler’s Method

The improved Euler method includes an evaluation ŷn+1 = yn + h f(yn, tn) according to Euler’s
method. Then an approximation of f(yn+1, tn+1) at the time tn+1 is computed using ŷn+1. This
value is used to improve the accuracy of the numerical solution yn+1. The method is given by

k1 = f(yn, tn)
k2 = f(yn + h k1, tn + h)
yn+1 = yn + (h/2)(k1 + k2)

Modified Euler’s Method

The modified Euler method, also called the explicit midpoint rule, is derived in a similar way to
the improved Euler method. In the modified Euler method an approximation of f at (y(t + h/2),
t + h/2) is used to find the solution. This approximation is computed using Euler’s method to find
an estimate of y(t + h/2). The method is given by

k1 = f(yn, tn)
k2 = f(yn + (h/2) k1, tn + h/2)
yn+1 = yn + h k2

Second Order Runge-Kutta Method

The second-order Runge-Kutta method is summarized as follows.


k1 = h f(yn, tn)
k2 = h f(yn + k1, tn + h)
yn+1 = yn + (1/2)(k1 + k2)

Fourth Order Runge-Kutta Method

The fourth-order Runge-Kutta method is summarized as follows.


k1 = h f(yn, tn)
k2 = h f(yn + k1/2, tn + h/2)
k3 = h f(yn + k2/2, tn + h/2)
k4 = h f(yn + k3, tn + h)
yn+1 = yn + (1/6)(k1 + 2k2 + 2k3 + k4)

Sample program in MATLAB

Problem: Let’s consider a tank flow model

3 dh/dt + 0.1h = qi    (A.1)

If the inlet flow qi is maintained at a constant rate of 0.01 m³/s into the tank, which starts empty,
plot the change in level h of the liquid in the tank with time.

Sample M-file:

% Filename: TankFlowModel.m
% This M-file illustrates Euler's Methods
%
% Ordinary Differential Equation
% 3*hdot + 0.1*h = qi
% where h = level, qi = inlet flow rate (0.01m^3/s)
% Initial conditions: h(0) = 0
%

% Created by Hung Nguyen in June 2005


% Last modified in June 2005
% Copyright (C) 2005 Hung Nguyen
% Email: H.Nguyen@mte.amc.edu.au
%

% Set initial conditions:

h1 = 0; % initial level
index = 0; % index for counting
step = 0.1; % sampling time
N = 200; % final time (s)
qi = 0.01; % inlet flow rate 0.01m^3/s

% Euler's Simple Method:

for ii = 0.0:step:N

index = index + 1;
h1_dot = (qi - 0.1*h1)/3;
h1 = h1 + step*h1_dot;
data(index,1) = ii; % time (sec)
data(index,2) = qi; % inlet flow rate (m^3/s)
data(index,3) = h1; % level (m)

end

% Plot

plot(data(:,1),data(:,3));grid
xlabel('Time (second)');ylabel('Level (m)')

% End of program

Result:

[Plot: liquid level (m) rising from 0 towards approximately 0.1 m over 200 seconds]

Figure A.1 Step response

BASIC CONTROL THEORY
Module 3
Stability of Linear Control Systems and PID Control

SEPTEMBER 2005

Prepared by Dr. Hung Nguyen


TABLE OF CONTENTS

Table of Contents..............................................................................................................................i
List of Figures..................................................................................................................................ii
List of Tables ................................................................................................................................. iii
References ......................................................................................................................................iv
Objectives ........................................................................................................................................v

1. Stability of Linear Control Systems ............................................................................................1


1.1 Definitions of Stability..........................................................................................................1
1.2 Concepts of Stability.............................................................................................................1
Example 1...............................................................................................................................4
2. PID Control..................................................................................................................................6
2.1 What Is PID Control?............................................................................................................6
2.2 Control Actions .....................................................................................................................6
2.2.1 Proportional (P) Control Action ....................................................................................6
2.2.2 Integral (I) Control Action ............................................................................................7
2.2.3 Derivative (D) Control Action ......................................................................................8
2.2.4 Proportional Integral (PI) Control Action .....................................................................8
2.2.5 Proportional Derivative (PD) Control Action ...............................................................9
2.2.6 Proportional Integral Derivative PID Control Action ...................................................9
2.3 Types of PID Controller......................................................................................................10
2.3.1 Pneumatic PID Controllers .........................................................................................10
2.3.2 Hydraulic PID Controllers ..........................................................................................19
2.3.3 Electronic PID Controllers..........................................................................................25
2.4 Simulation of PID Control System (PID Autopilot)...........................................................30
2.4.1 Ship Manoeuvring Model ...........................................................................................30
2.4.2 Simulation of PID Autopilot (Nomoto’s First-Order Model) .....................................31
2.5 PID Controllers in Market ..................................................................................................37
2.5.1 DTZ4 Controller of Instronics Inc. .............................................................................37
2.5.2 More Information on the Internet................................................................................38
2.6 Examples.............................................................................................................................38
Example 2.............................................................................................................................38
Example 3.............................................................................................................................40
Summary of Module 3...................................................................................................................42
Exercises........................................................................................................................................43

LIST OF FIGURES

Figure 1 ...........................................................................................................................................5
Figure 2 ...........................................................................................................................................6
Figure 3 ...........................................................................................................................................7
Figure 4 ...........................................................................................................................................7
Figure 5 ...........................................................................................................................................8
Figure 6 ...........................................................................................................................................9
Figure 7 .........................................................................................................................................10
Figure 8 .........................................................................................................................................11
Figure 9 .........................................................................................................................................13
Figure 10........................................................................................................................................14
Figure 11 ........................................................................................................................................15
Figure 12........................................................................................................................................15
Figure 13........................................................................................................................................16
Figure 14........................................................................................................................................16
Figure 15........................................................................................................................................17
Figure 16........................................................................................................................................18
Figure 17........................................................................................................................................18
Figure 18........................................................................................................................................20
Figure 19........................................................................................................................................21
Figure 20........................................................................................................................................22
Figure 21........................................................................................................................................23
Figure 22........................................................................................................................................24
Figure 23........................................................................................................................................24
Figure 24........................................................................................................................................25
Figure 25........................................................................................................................................26
Figure 26........................................................................................................................................26
Figure 27........................................................................................................................................27
Figure 28........................................................................................................................................27
Figure 29........................................................................................................................................28
Figure 30........................................................................................................................................29
Figure 31........................................................................................................................................30
Figure 32........................................................................................................................................35
Figure 33........................................................................................................................................35
Figure 34........................................................................................................................................36
Figure 35........................................................................................................................................36
Figure 36........................................................................................................................................36
Figure 37........................................................................................................................................37
Figure 38........................................................................................................................................38
Figure 39........................................................................................................................................40
Figure 40........................................................................................................................................41

LIST OF TABLES

Table 1 ...........................................................................................................................................3

REFERENCES

Kamen, Edward and Bonnie S. Heck (1997), Fundamentals of Signals and Systems Using
MATLAB®, Prentice-Hall, Englewood Cliffs, New Jersey, USA.

Kuo, Benjamin C. (1995), Automatic Control Systems, Prentice-Hall International Inc., Upper
Saddle River, New Jersey, USA.

Ogata, Katsuhiko (1997), Modern Control Engineering, 3rd Edition, Prentice-Hall International
Inc., Upper Saddle River, New Jersey, USA.

Richards, R.J. (1993), Solving Problems in Control, Longman Group UK Ltd, Harlow, Essex,
UK.

Seborg, Dale E., Thomas F. Edgar and Duncan A. Mellichamp (2004), Process Dynamics and
Control, 2nd Edition, John Wiley & Sons, Inc., Hoboken, New Jersey, USA.

Taylor, D.A. (1987), Marine Control Practice, Anchor-Brendon Ltd, Tiptree, Essex, UK.

Web Dictionary of Cybernetics and Systems: http://pespmc1.vub.ac.be/ASC/indexASC.html (accessed on 22 August 2005).

AIMS

1.0 Explain concept of stability of linear control systems and principles of PID control.

LEARNING OBJECTIVES

1.1 Explain concept of stability of linear control systems.

1.2 Explain the control actions: P, I, D, PI, PD and PID controls.

1.3 Describe applications of PID control principles in pneumatic, hydraulic and electronic
controllers.

1.4 Apply the PID control principles in design of simple PID controllers.

1. Stability of Linear Control Systems

1.1 Definitions of Stability

In the Web Dictionary of Cybernetics and Systems (by F. Heylighen) there are some
definitions for the term ‘stability’ of a control system as follows:

1. Stability: The tendency of the variables or components of a system to remain within defined
and recognizable limits despite the impact of disturbances. (Young, p. 109)

2. Expanded or global stability: The ability of a system to persist and to remain qualitatively
unchanged in response either to a disturbance or to fluctuations of the system caused by a
disturbance. This idea of stability combines the concepts of traditional stability and Holling's
new concept of resilience. (Holling)

3. Stability: The capacity of an object or system to return to equilibrium after having been
displaced. Note with two possible kinds of equilibrium one may have a static (linear) stability
of rest or a dynamic (non-linear) stability of an endlessly repeated motion. (Iberall)

4. Stability: System is stable if, when perturbed, it returns to its original state. The more
quickly it returns, the more stable it is.

1.2 Concepts of Stability

Stability is probably the most important consideration when designing control systems. One
of the most important characteristics of a control system is that the output must follow the
desired signal as exactly as possible. The stability of a system is determined by the form of
the response to any input or disturbance. Absolute stability refers to whether a system is stable
or unstable. In a stable system the response to an input will arrive at and maintain some useful
value. In an unstable system the output will not settle at the desired value and may oscillate or
increase towards some high value or a physical limitation. For example, a system is stable if
the response to an impulse input approaches zero as time approaches infinity.

In the context of linear control systems, it is intuitively reasonable to define a linear system as stable if its output is bounded for every bounded input; this is bounded-input bounded-output (BIBO) stability, and such a system is said to be BIBO stable. This definition states that stability is a property of the system itself. The properties or dynamic behaviour of a
system are characterised by its transfer function G(s).

The input to a system will not affect or determine its stability. The components of the system
provide its characteristics, and hence determine stability. The solution to the differential
equation describing a system is made up of two terms, a transient response and a steady-state
response. For stability the transient response terms must all die away as time progresses. The exponents of these exponential terms must therefore be negative real numbers or complex numbers with negative real parts.

The stability or instability of a closed-loop control system is determined by the poles of its transfer function. The system is stable if the response y(t) remains bounded as time t tends to infinity. For most control engineering purposes an even stronger concept of stability is required:
Asymptotically stable: A system is said to be asymptotically stable if its response decays to
zero as t tends to infinity.

Marginally stable: as mentioned in a previous section, an undamped second-order system has a response that oscillates indefinitely. Such a system is stable but not asymptotically stable; it is known as marginally stable. A system that is neither stable nor marginally stable is said to be unstable.

For analysis and design purposes stability can be classified in absolute stability and relative
stability. As stated above, absolute stability refers to the condition of whether the system is
stable or unstable; it is a yes or no answer. Once the system is found to be stable, it is of
interest to determine how stable it is, and this degree of stability is a measure of relative
stability.

The stability of a system depends on the poles of its transfer function. Let's consider the following transfer function G(s) of a system:

G(s) = N(s)/D(s) = K(s^m + b1 s^(m-1) + … + b(m-1) s + bm) / (s^n + a1 s^(n-1) + … + a(n-1) s + an)    (1)

and the system response is determined, except for the residues, by the poles of G(s), that is by
the solutions of the following characteristic equation of the system

s^n + a1 s^(n-1) + … + a(n-1) s + an = 0    (2)

Equation (2) can be written as

(s − p1 )(s − p 2 )...(s − p n ) = 0 (3)

where p1, p2, …, pn are the roots of the characteristic equation (2), or the finite poles of the transfer function (1). The system is asymptotically stable if and only if the roots of the characteristic equation are negative (for real poles) or have negative real parts (for complex poles).

Consider the case of purely imaginary roots. Like complex roots, purely imaginary roots occur in conjugate pairs. Let r denote the multiplicity of such a root of (3). If r = 1, the corresponding response term is oscillatory with a constant amplitude. On the other hand, if r > 1 the corresponding response terms are oscillatory with increasing amplitude. Hence, for r = 1 the system is marginally stable, but for r > 1 the system is unstable.

Similarly, if there is a root of multiplicity r > 1 at the origin, then the system is unstable. Table
1 summarises the relationship between the roots of the characteristic equation (or the poles of
the transfer function) and the stability. From Table 1, it can be seen that:

• For bounded-input bounded-output stability, the roots of the characteristic equation
must all lie in the left half of the s-plane.
• A system that is BIBO stable is simply called stable, otherwise it is unstable.

Table 1 Relationship between poles (roots) and stability
(In the original table, each type of root s = σ + jω is also illustrated with an s-plane pole plot and the corresponding response graph y(t).)

Type of root                                        Remarks
Real and negative                                   Asymptotically stable
Real and positive                                   Unstable
Zero                                                Marginally stable
Conjugate complex with negative real part           Asymptotically stable
Conjugate imaginary (multiplicity r = 1)            Marginally stable
Conjugate imaginary (multiplicity r = 2)            Unstable
Conjugate with positive real part                   Unstable
Roots of multiplicity r = 2 at the origin           Unstable

There are several methods to determine the stability of a control system. They are outlined as follows.

• Routh-Hurwitz stability criterion: this is an algebraic method that provides information on the absolute stability of a linear time-invariant system whose characteristic equation has constant coefficients. The criterion tests whether any of the roots of the characteristic equation lie in the right half of the s-plane. The number of roots that lie on the jω-axis and in the right-half plane is also indicated.

• Nyquist stability criterion: this is a semi-graphical method that gives information on the difference between the number of poles and zeros of the closed-loop transfer function that lie in the right half of the s-plane, by observing the behaviour of the Nyquist plot of the loop transfer function.

• Bode diagram: this diagram is a plot of the magnitude (gain) of the open-loop transfer function in decibels (dB) and the phase of the open-loop transfer function in degrees, both versus frequency ω. The stability of the closed-loop system can be determined by observing the behaviour of these plots.

• Root-locus technique: this method investigates the trajectories of the roots of the characteristic equation – the root loci – as a certain system parameter varies. It is applicable to linear control systems.

• Lyapunov's stability criterion: this is the most general method for system stability analysis. It is applicable to both linear and non-linear systems of any order. For linear systems it provides both necessary and sufficient conditions, whereas for non-linear systems it provides only sufficient conditions for asymptotic stability. The method depends on defining a Lyapunov function, closely related to the system energy function.

Example 1

Let’s consider the following systems:

1. G(s) = (s + 1)/(s + 2)
   pole at s = -2 (real and negative)

2. G(s) = 9/(s^2 + 1.5s + 9)
   poles at s = -0.7500 ± 2.9047j (conjugate complex roots with negative real parts)

3. G(s) = s/(s^2 + 1)
   poles at s = ±j (conjugate imaginary, multiplicity r = 1)

4. G(s) = 1/(s^2 - 0.5s + 1)
   poles at s = 0.2500 ± 0.9682j (conjugate complex roots with positive real parts)

Figure 1 shows the step responses of these systems: Figures 1a and 1b are asymptotically stable systems, Figure 1c is a marginally stable system and Figure 1d is an unstable system.

[Step-response plots:
a) asymptotically stable system, G(s) = (s + 1)/(s + 2)
b) asymptotically stable system, G(s) = 9/(s^2 + 1.5s + 9)
c) marginally stable system, G(s) = s/(s^2 + 1)
d) unstable system, G(s) = 9/(s^2 - 1.5s + 9)]

Figure 1 Illustration of stable and unstable systems
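
As a minimal sketch (assuming MATLAB with the Control System Toolbox, which is also used for the simulations in Section 2.4), the poles and step responses of the four systems listed in Example 1 can be checked directly:

% Poles and step responses of the four systems in Example 1
G1 = tf([1 1],[1 2]);      % pole at s = -2                  -> asymptotically stable
G2 = tf(9,[1 1.5 9]);      % poles at -0.75 +/- 2.9047j      -> asymptotically stable
G3 = tf([1 0],[1 0 1]);    % poles at +/- j                  -> marginally stable
G4 = tf(1,[1 -0.5 1]);     % poles with positive real parts  -> unstable
systems = {G1, G2, G3, G4};
for k = 1:4
    fprintf('System %d poles:\n', k);
    disp(pole(systems{k}));            % roots of the characteristic equation
    subplot(2,2,k); step(systems{k}, 20); grid on
end

Any pole with a positive real part makes the step response grow without bound, which is what the unstable case in Figure 1d shows.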

2. PID Control

2.1 What Is PID Control?

PID stands for Proportional, Integral and Derivative control. It is also called three-term control, or Gain, Reset and Rate. The PID controller is probably the simplest of controllers, yet it is still widely used because of its effectiveness. In 1922 N. Minorsky introduced his three-term controller for the steering of ships; it is regarded as the first PID controller, and he also considered non-linear effects in the closed loop. PID control has since been developed further in forms such as non-linear PID control, time-varying-gain PID control and fuzzy PID control.

A PID controller has three terms: P, I and D. We will investigate how each term affects the system response.

2.2 Control Actions

Let’s consider a PID control system shown in Figure 2, in which u(t) is the reference input (or
set-point), y(t) is the output (or process variable), e(t) is the actuating error signal and u(t) is
the control signal. We examine how each term of the PID controller has effects on the whole
system response y(t).

[Block diagram: reference input (set-point) r(t) → error detector → actuating error e(t) → PID controller → control signal u(t) → actuator → plant → output (process variable) y(t); the output is measured by a sensor and fed back to the error detector]

Figure 2 Simple PID control system

2.2.1 Proportional (P) Control Action

For a controller with proportional control action, the relationship between the output of the
controller u(t) (control signal) and the actuating error signal e(t) is

u ( t ) = K P e( t ) (4)

or, in Laplace transformed quantities,

U(s)/E(s) = KP    (5)

where KP is termed the proportional gain.

R(s) E(s) U(s)


+_ KP

Figure 3 Block diagram of a proportional controller

Whatever the actual mechanism may be and whatever the form of the operating power, the
proportional controller is essentially an amplifier with an adjustable gain. A block diagram of
such a controller is shown in Figure 3.

2.2.2 Integral (I) Control Action

In a controller with integral control action, the value of the controller output u(t) is changed at
a rate proportional to the actuating error signal e(t). That is,

du(t)/dt = KI e(t)    (6)

thus

u(t) = KI ∫0^t e(t) dt    (7)

where KI is an adjustable constant (KI = KP/TI, TI is the integral time). The transfer function of
the integral controller is

U(s)/E(s) = KI/s    (8)

If the value of e(t) is doubled, then the value of u(t) varies twice as fast. For zero actuating error, the value of u(t) remains stationary. The integral control action is sometimes called reset control. Figure 4 shows a block diagram of such a controller.

R(s) E(s) KI U(s)


+_ s

Figure 4 Block diagram of an integral controller

2.2.3 Derivative (D) Control Action

In a controller with derivative control action, the value of the controller output u(t) is changed
at a rate proportional to the rate of the change of the actuating error signal e(t). That is,
u(t) = KD de(t)/dt    (9)

where KD is the derivative gain (KD = KP TD, TD is the derivative time), or the transfer function of the controller is

U(s)/E(s) = KD s    (10)

Note that the derivative control action can never be used alone because this control action is
effective only during transient periods. See the proportional derivative control action.

2.2.4 Proportional Integral (PI) Control Action

The control action of proportional integral controller known as PI controller is defined by

u(t) = KP e(t) + (KP/TI) ∫0^t e(t) dt    (11)

or the transfer function of the controller is

U(s)/E(s) = KP (1 + 1/(TI s))    (12)

where KP is the proportional gain, and TI is called the integral time. Both KP and TI are
adjustable. The integral time adjusts the integral control action, while a change in the value of
KP affects both the proportional and integral parts of the control action. The inverse of the
integral time TI is called the reset rate. The reset rate is the number of times per minute that
the proportional part of the control action is duplicated. Reset rate is measured in terms of
repeats per minute. Figure 5(a) shows a block diagram of a proportional-integral controller. If
the actuating error signal e(t) is a unit step function as shown in Figure 5(b), then the
controller output u(t) becomes as shown in Figure 5(c).
[(a) Block diagram: R(s) → E(s) → KP(1 + TI s)/(TI s) → U(s); (b) actuating error e(t) as a unit step; (c) controller output u(t): an immediate proportional step of KP followed by a ramp due to the integral action]

Figure 5 (a) block diagram of a proportional integral controller; (b) & (c) diagrams depicting
a unit-step input and the controller output

2.2.5 Proportional-Derivative (PD) Control Action

The control action of a proportional-derivative controller is defined by

u(t) = KP e(t) + KP TD de(t)/dt    (13)

and the transfer function is

U(s)/E(s) = KP (1 + TD s)    (14)

where KP is the proportional gain and TD is a constant called the derivative time. Both KP and TD are adjustable. The derivative control action, sometimes called rate control, is a mode in which the magnitude of the controller output is proportional to the rate of change of the actuating error signal. The derivative time TD is the time interval by which the rate action advances the effect of the proportional control action. Figure 6(a) shows a block diagram of a proportional-derivative controller. If the actuating error signal e(t) is a unit-ramp function as shown in Figure 6(b), then the controller output u(t) becomes as shown in Figure 6(c). As may be seen from Figure 6(c), the derivative control action can never anticipate any action that has not yet taken place.

While derivative control action has the advantage of being anticipatory, it has the
disadvantages that it amplifies signals and may cause a saturation effect in the actuator.

[(a) Block diagram: R(s) → E(s) → KP(1 + TD s) → U(s); (b) actuating error e(t) as a unit ramp; (c) controller output u(t): the PD action leads the proportional-only action by the derivative time TD]

Figure 6 (a) block diagram of a proportional derivative controller; (b) & (c) diagrams
depicting a unit-ramp input and the controller output

2.2.6 Proportional-Integral-Derivative (PID) Control Action

The combination of proportional control action, integral control action, and derivative control
action is termed proportional, integral and derivative control action, known as PID control.
This combined action has the advantages of each of the three individual control actions. The
equation of a controller with this combined action is given by

u(t) = KP e(t) + (KP/TI) ∫0^t e(t) dt + KP TD de(t)/dt    (15)

or the transfer function is

U(s)/E(s) = KP (1 + 1/(TI s) + TD s)    (16)

where KP is the proportional gain, TI is the integral time (seconds), and TD is the derivative
time (seconds). It should be noted that KI = KP/TI and KD = KPTD. In practice, TI and TD are
preferred to KI and KD.

The block diagram of a proportional, integral and derivative controller is shown in Figure 7(a).
If e(t) is a unit-ramp function as shown in Figure 7(b), then the controller output u(t) becomes
as shown in Figure 7(c).

[(a) Block diagram: R(s) → E(s) → KP(1 + TI s + TI TD s^2)/(TI s) → U(s); (b) actuating error e(t) as a unit ramp; (c) controller output u(t): the PID action compared with the PD and proportional-only actions]

Figure 7 (a) block diagram of a PID controller; (b) and (c) diagrams depicting a unit-ramp
input and the controller output
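
To illustrate the combined action of Equation (16), the following minimal sketch (hypothetical plant and gains, assuming the Control System Toolbox) compares closed-loop step responses under P, PI and PID control:

% Closed-loop effect of P, PI and PID actions on an illustrative plant
G    = tf(1,[1 3 2]);            % hypothetical plant 1/(s^2 + 3s + 2)
Kp   = 10;  Ti = 2;  Td = 0.5;   % illustrative controller settings
Cp   = pid(Kp);                  % P only
Cpi  = pid(Kp, Kp/Ti);           % PI:  KI = KP/TI
Cpid = pid(Kp, Kp/Ti, Kp*Td);    % PID: KD = KP*TD
step(feedback(Cp*G,1), feedback(Cpi*G,1), feedback(Cpid*G,1), 10);
legend('P','PI','PID'); grid on

For this plant the integral term removes the steady-state offset left by proportional control alone, while the derivative term damps the resulting oscillation.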

2.3 Types of PID Controller

The PID control principle described above can be applied in many types of PI and PID controllers, such as pneumatic PI controllers, hydraulic PI or PID controllers, and electronic PID controllers. Nowadays PID control has also been developed into newer forms such as digital, fuzzy, optimal and self-tuning PID controllers. In this section we investigate how the PID control principle is applied in some typical pneumatic, hydraulic and electronic PID controllers.

2.3.1 Pneumatic P-I-D Controllers

Pneumatic Actuating Valve: One characteristic of pneumatic controllers is that they almost exclusively employ pneumatic actuating valves. A pneumatic actuating valve can provide a large power output. In practical pneumatic actuating valves the valve characteristics may not be linear; that is, the flow may not be directly proportional to the valve stem position, and there may also be other non-linear effects such as hysteresis.

Let's consider the schematic diagram of a pneumatic actuating valve shown in Figure 8. Assume that the area of the diaphragm is A. Assume also that when the actuating error is zero, the control pressure is equal to Pc and the valve displacement is equal to Y.

Control pressure

Return spring

Valve stem

Figure 8 Schematic diagram of a pneumatic actuating valve

In the following analysis, small variations in the variables are considered and the pneumatic
actuating valve is linearised. Let’s define the small variation in the control pressure and the
corresponding valve displacement to be pc and y, respectively. Since a small change in the
pneumatic force applied to the diaphragm repositions the load, consisting of the spring,
viscous friction, and mass, the force balance equation becomes

m d^2y/dt^2 + b dy/dt + k y = A pc    (17)

where m = mass of the valve and valve stem


b = viscous friction coefficient
k = spring constant

If the forces due to the mass and viscous friction are negligibly small, then Equation (17) can be simplified to

ky = Ap c (18)

The transfer function between y and pc thus becomes

Y(s)/Pc(s) = A/k = Kc    (19)

If qi, the change in flow through the pneumatic actuating valve, is proportional to y, the change in the valve stem displacement, then

Qi(s)/Y(s) = Kq    (20)

The transfer function between qi and pc becomes

Qi(s)/Pc(s) = Kc Kq = Kv    (21)

where Kv = Kc Kq is a constant.

The standard control pressure for this kind of a pneumatic actuating valve is between 3 and 15
psig (approx. 20 kPa to 100 kPa). The valve stem displacement is limited by the allowable
stroke of the diaphragm and is only a few inches. If a longer stroke is needed, a piston-spring
combination may be employed.

In pneumatic valves, the static friction force must be limited to a low value so that excessive
hysteresis does not result. Because of the compressibility of air, the control action may not be
positive, that is an error may exist in the valve stem position. The use of a valve positioner
results in improvements in the performance of a pneumatic actuating valve.

Resistance and capacitance of pressure systems

Resistance is the ratio of the change in the gas pressure difference to the change in the gas flow rate:

R = d(ΔP)/dq    (22)

where d(∆P ) is a small change in the gas pressure and dq is a small change in the gas flow
rate. Computation of the value of the gas flow resistance may be quite time consuming. The
gas flow resistance R can be determined by experiments, for example, from a plot of the
pressure difference versus flow rate by calculating the slope of the curve at a given operating
condition as shown in Figure 9b.

The capacitance of the pressure vessel may be defined by


C = dm/dp = V dρ/dp    (23)

where C = capacitance, m-2


m = mass of gas in vessel, kg
p = gas pressure, kg/m2
V = volume of vessel, m3
ρ = density, kg/m3

[(a) Pressure vessel of capacitance C and volume V, with inflow pressure P + pi through a resistance R, vessel pressure P + po and flow rate q; (b) plot of pressure difference ΔP versus flow rate q, whose slope d(ΔP)/dq at the operating point gives the resistance R]

Figure 9 Schematic diagram of a pressure system

The capacitance of the pressure system depends on the type of expansion process involved. The capacitance can be calculated by use of the ideal gas law:

C = V/(n Rgas T)    (24)

where V = volume of vessel


n = polytropic exponent
Rgas = gas constant
T = absolute temperature of gas, K

The capacitance of a given vessel is constant if the temperature stays constant.
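
As a quick numerical illustration of Equation (24) (hypothetical values, isothermal expansion assumed):

% Capacitance of a gas-filled vessel from Eq. (24)
V    = 0.5;       % vessel volume, m^3            (assumed)
Rgas = 287;       % gas constant of air, J/(kg K)
T    = 293;       % absolute gas temperature, K   (assumed)
n    = 1.0;       % polytropic exponent (isothermal expansion)
C = V/(n*Rgas*T)  % capacitance, about 5.9e-6 kg per Pa of pressure change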

Let’s consider the system shown in Figure 9 above. If we assume only small deviations in the
variables from their respective steady state values, then this system may be considered linear.

Let’s define:

P = gas pressure in the vessel at steady state (before changes in pressure have occurred), Pa
pi = small change in inflow gas pressure, Pa
po = small change in gas pressure in the vessel, Pa
V = volume of the vessel, m3
m = mass of the gas in vessel, kg
q = mass gas flow rate, kg/sec
ρ = density of gas, kg/m3

For small values of pi and po, the resistance R given by Equation (22) becomes constant and may be written as

R = (pi − po)/q    (25)

The capacitance C is given by Equation (23), or

C = dm/dp    (26)

Since the pressure change dpo times the capacitance C is equal to the gas added to the vessel
during dt seconds, we obtain

Cdp o = qdt (27)

or

C dpo/dt = (pi − po)/R    (28)

which can be written as

RC dpo/dt + po = pi    (29)

If pi and po are considered the input and output, respectively, then the transfer function of the
system is

Po(s)/Pi(s) = 1/(RCs + 1)    (30)

where RC has the dimension of time and is called the time constant of the system.
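
The time-domain behaviour implied by Equation (30) can be sketched with a few lines of MATLAB (illustrative R and C values only):

% Response of the pressure system Po(s)/Pi(s) = 1/(RCs + 1) to a unit step in pi
R   = 2e5;  C = 5e-6;            % hypothetical resistance and capacitance
tau = R*C;                       % time constant, 1 s here
t   = linspace(0, 5*tau, 200);
po  = 1 - exp(-t/tau);           % analytical step response of a first-order lag
plot(t, po); grid on
xlabel('t (s)'); ylabel('p_o (per unit step in p_i)');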

Pneumatic Proportional Controller: Let's consider the pneumatic controller in Figure 10. Assuming small changes in the variables, we can draw a block diagram of this controller as shown in Figure 11. From the block diagram we can see that the controller is of proportional type. The addition of a restriction in the negative feedback path will modify the proportional controller into a proportional-derivative (PD) controller.

e
Ps Y+x a

Flapper
b

Pc+pc

Figure 10 Pneumatic proportional controller

E(s) b Y(s) Pc(s)
+_ K
a+b

B(s) a A
a+b ks

Figure 11 Block diagram of the pneumatic proportional controller

Pneumatic PD Controller: Let’s consider the pneumatic controller shown in Figure 12(a).
Assuming small changes in the actuating error, nozzle-flapper distance, and control pressure,
we can summarise the operation of this controller as follows: Let’s first assume a small step
change in e. Then the change in the control pressure pc will be instantaneous. The restriction
R will momentarily prevent the feedback bellows from sensing the pressure change pc. Thus
the feedback bellows will not respond momentarily, and the pneumatic actuating valve will
feel full effect of the movement of the flapper. As time goes on, the feedback bellows will
expand or contract. The change in the nozzle-flapper distance y and the change in the control
pressure pc can be plotted against time t as shown in Figure 12(b). At steady state, the
feedback bellows acts like an ordinary feedback mechanism. The curve pc versus t clearly
shows that this controller is of the proportional derivative type.

e e
Ps Y+x a

t
y
Flapper
b

t
pc
C
R
Pc+pc t
(a) (b)
Figure 12 Pneumatic PD controller

A block diagram corresponding to this pneumatic controller is shown in Figure 13. In the block diagram, K is a constant, A is the area of the bellows, and ks is the equivalent spring constant of the bellows. The transfer function between pc and e can be obtained from the block diagram as follows:

Pc(s)/E(s) = [bK/(a + b)] / [1 + (Ka/(a + b))·(A/ks)·(1/(RCs + 1))]    (31)
In such a controller the loop gain |KaA/[(a+b)ks(RCs+1)]| is normally very much greater than
unity. Thus the transfer function Pc(s)/E(s) can be simplified to give

Pc(s)/E(s) = KP (1 + TD s)    (32)

where KP = b ks/(a A) and TD = RC.

E(s) b Y(s) Pc(s)


+_ K
a+b

B(s) a A 1
a+b ks RCs + 1

Figure 13 Block diagram of the pneumatic PD controller

Thus delayed negative feedback, or the transfer function 1/(RCs+1) in the feedback path, modifies the proportional controller to a PD controller. Note that if the feedback valve is fully opened the control action becomes proportional. If the feedback valve is fully closed, the control action becomes narrow-band proportional (on-off).
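
A minimal sketch (dimensionless, illustrative numbers chosen so that the loop gain is large) showing that the exact relation (31) approaches the PD form (32) when the loop gain is much greater than unity:

% Frequency responses of Eq. (31) and its PD approximation Eq. (32)
K = 1e5; a = 1; b = 1; A = 1; ks = 10; R = 2; C = 1;   % illustrative values only
Kp = b*ks/(a*A);  Td = R*C;                            % Eq. (32) parameters
w  = logspace(-2, 2, 200);  s = 1j*w;
exact  = (b/(a+b))*K ./ (1 + (K*a*A/((a+b)*ks))./(R*C*s + 1));   % Eq. (31)
approx = Kp*(1 + Td*s);                                          % Eq. (32)
loglog(w, abs(exact), w, abs(approx), '--'); grid on
xlabel('\omega (rad/s)'); ylabel('|P_c/E|');
legend('exact, Eq. (31)', 'PD approximation, Eq. (32)');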

Pneumatic PI controller: Let’s consider the proportional controller shown in Figure 12(a)
again. Considering small changes in the variables, we can show that the addition of delayed
positive feedback will modify this proportional controller to a PI controller.
e
Ps X+x e
a

Flapper t
Ri y

b
t
C pc
C

Pc+pc I II
t
(a)
(b)
Figure 14 Pneumatic proportional-plus-integral controller

Let’s consider the pneumatic controller in Figure 14(a). The operation of this controller is as
follows: The bellows denoted by I is connected to the control pressure source without any
restriction.

Let’s assume a small step change in the actuating error. This will cause the back pressure in
the nozzle to change instantaneously. Thus a change in the control pressure pc also occurs
instantaneously. Due to the restriction of the valve in the path to bellows II, there will be a
pressure drop across the valve. As time goes on, air will flow across the valve in such a way
that the change in pressure in bellows II attains the valve pc. Thus bellows will expand or
contract as time elapses in such a way as to move the flapper an additional amount in the
direction of the original displacement e. This will cause the back pressure pc in the nozzle to
change continuously as shown in Figure 14(b).

Note that the integral control action in the controller takes the form of slowly cancelling the
feedback that the proportional control originally provided.

A block diagram of this controller under the assumption of small variations in the variables is
shown in Figure 15(a). A simplification of this block diagram is given in Figure 15(b).

E(s) Y(s) Pc(s)


b
+_ + K
a+b +

a A 1
a+b ks RCs + 1

a A
a+b ks

(a)

E(s) Y(s) Pc(s)


b
+_ K
a+b

1
_ RCs + 1
a A
a + b ks +

(b)
Figure 15 Block diagram of the pneumatic PI controller

The transfer function is

Pc(s)/E(s) = [bK/(a + b)] / [1 + (Ka/(a + b))·(A/ks)·(1 − 1/(RCs + 1))]    (32)

where K is a constant, A is the area of the bellows, and ks is the equivalent spring constant of the combined bellows.
If |KaARCs/[(a+b)ks(RCs+1)]| >> 1, which is usually the case, the transfer function can be
simplified to
Pc(s)/E(s) = KP (1 + 1/(TI s))    (33)

where KP = b ks/(a A) and TI = RC.

Pneumatic PID Controller: A combination of the pneumatic controllers shown in Figure


12(a) and 14(a) yields a proportional, integral and derivative (PID) controller. Figure 16
shows a schematic diagram of such a controller. Figure 17 shows a block diagram of this
controller under the assumption of small variations in the variables.
e
Ps X+x a

(Ri >> Rd)


Flapper
Ri

Rd
C C

Pc+pc

Figure 16 Pneumatic proportional-integral-derivative controller

E(s) Y(s) Pc(s)


b
+_ K
a+b

1
_ RCs + 1
a A
a + b ks +
1
RCs + 1

Figure 17 Block diagram of the pneumatic PID controller

The transfer function of this controller is

Pc(s)/E(s) = [bK/(a + b)] / [1 + (Ka/(a + b))·(A/ks)·((Ri C − Rd C)s)/((Rd C s + 1)(Ri C s + 1))]    (34)
By defining TI = RiC, TD = RdC and noting that under normal operation
|KaA(TI – TD)s/[(a+b)ks(TDs+1)(TIs+1)]| >> 1 and TI >> TD, we obtain:

Pc(s)/E(s) ≈ (b ks/(a A))·(TD s + 1)(TI s + 1)/((TI − TD)s)
           ≈ (b ks/(a A))·(TD TI s^2 + TI s + 1)/(TI s) = KP (1 + 1/(TI s) + TD s)    (35)

where KP = b ks/(a A).

Equation (35) indicates that the controller shown in Figure 16 is a PID controller.

2.3.2 Hydraulic PID Controllers

The operating pressure in hydraulic systems is somewhere between 145lbf/in.2 (1MPa) and
5000lbf/in.2 (35MPa). In some special applications, the operating pressure may go up to
10,000lbf/in.2 (70MPa). For the same power requirement, the weight and size of the hydraulic
unit can be made smaller by increasing the supply pressure. With high pressure hydraulic
systems, very large force can be obtained. Rapid-acting, accurate positioning of heavy loads is
possible with hydraulic systems. A combination of electronic and hydraulic systems is widely
used because it combines the advantages of both electronic control and hydraulic power.

Advantages and disadvantages of hydraulic systems:


Advantages:
1. Hydraulic fluid acts as a lubricant, in addition to carrying away heat generated in the
system to a convenient heat exchanger.
2. Comparatively small sized hydraulic actuators can develop large forces or torques.
3. Hydraulic actuators have a higher speed of response with fast starts, stops, and speed
reversals.
4. Hydraulic actuators can be operated under continuous, intermittent, reversing, and stalled
conditions without damage.
5. Availability of both linear and rotary actuators gives flexibility in design.
6. Because of low leakage in hydraulic actuators, the speed drop when loads are applied is small.

On the other hand, several disadvantages tend to limit their use:
1. Hydraulic power is not as readily available as electric power.
2. Cost of a hydraulic system may be higher than a comparable electrical system performing a
similar function.
3. Fire and explosion hazards exist unless fire-resistant fluids are used.
4. Because it is difficult to maintain a hydraulic system that is free from leaks, the system
tends to be messy.
5. Contaminated oil may cause failure in the proper functioning of a hydraulic system.
6. As a result of the non-linear and other complex characteristics involved, the design of
sophisticated hydraulic systems is quite involved.
7. Hydraulic circuits have generally poor damping characteristics. If a hydraulic circuit is not
designed properly, some unstable phenomena may occur or disappear, depending on the
operating condition.

Hydraulic Integral (I) Controller: Figure 18 shows a hydraulic servomotor. Operation of


this hydraulic servomotor is as follows. If input x moves the pilot valve to the right, port II is
uncovered, and so high-pressure oil enters the right side of the power piston. Since port I is

connected to the drain port, the oil in the left side of the power piston is returned to the drain.
The oil flowing into the power cylinder is at high pressure; the oil flowing out from the power
cylinder into the drain is at low pressure. The resulting difference in pressure on both sides of
the power piston will cause it to move to the left.

Oil under
pressure

Pilot valve
x

Port I Port II

Power cylinder
y

Figure 18 Hydraulic servomotor

Note that the rate of flow of oil q (kg/sec) times dt (sec) is equal to the power piston
displacement dy (m) times the piston area A (m2) times the density of oil ρ (kg/m3).
Therefore,

Aρdy = qdt (36)

Because of the assumption that the oil flow q is proportional to the pilot valve displacement x,
we have

q = K1x (37)

where K1 is a positive constant. From equations (36) and (37) we obtain

Aρ dy/dt = K1 x    (38)

The Laplace transform of this last equation, assuming a zero initial condition, gives

Aρ s Y(s) = K1 X(s)    (39)

or

Y(s)/X(s) = K1/(Aρ s) = K/s    (40)

where K = K1/(A ρ ). Thus, the hydraulic servomotor shown in Figure 18 acts as an integral
controller.

Hydraulic Proportional Controllers: The above integral servomotor can be modified into a proportional controller by means of a feedback link. The following figure shows a hydraulic proportional controller. The left side of the pilot valve is joined to the left side of the power piston by a link ABC. This link is a floating link rather than one moving about a fixed pivot.

The controller operates in the following way. If input e moves the pilot valve to the right, port II will be uncovered and high-pressure oil will flow through port II into the right side of the power piston and force this piston to the left. The power piston, in moving to the left, will carry the feedback link ABC with it, thereby moving the pilot valve to the left. This action continues until the pilot piston again covers ports I and II. A block diagram of the system can be drawn as in Figure 19(b). The transfer function between Y(s) and E(s) is given by:

Y(s)/E(s) = [(b/(a + b))·(K/s)] / [1 + (K/s)·(a/(a + b))] = bK/(s(a + b) + Ka)    (41)

Oil under
pressure
A
e
a
E(s) b X(s) K Y(s)
x
B a+b s

b a
a+b

y
C
(a) (b)

Figure 19 (a) Servomotor that acts as a proportional controller; (b) block diagram of the
servomotor

Note that under normal operating conditions we have Ka/[s(a + b)] >> 1, so this last equation can be simplified to

Y(s)/E(s) = b/a = KP    (42)
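
A short sketch (hypothetical numbers, assuming the Control System Toolbox) comparing the exact closed-loop response of Equation (41) with the constant gain KP = b/a of Equation (42):

% Hydraulic proportional controller: exact Eq. (41) versus approximation Eq. (42)
K = 50;  a = 0.2;  b = 0.1;              % illustrative servomotor and link data
Gexact = tf(b*K, [a+b, K*a]);            % Y(s)/E(s) = bK/((a+b)s + Ka)
step(Gexact, 1); hold on
plot([0 1], [b/a b/a], 'k--');           % K_P = b/a, Eq. (42)
grid on; legend('exact, Eq. (41)', 'K_P = b/a'); hold off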

Dashpots (damper) – differentiating element: The dashpot (also called a damper) shown in
the following figure acts as a differentiating element. Suppose that we introduce a step
displacement to the piston position x. Then the displacement y becomes equal to x
momentarily. Because of the spring force, however, the oil will flow through the resistance R

and the cylinder will come back to the original position. The block diagram is shown in
Figure 20 (b).

[(a) Dashpot: piston of area A with input displacement x and cylinder displacement y, spring of constant k, oil of density ρ flowing at rate q through a resistance R between chambers at pressures P1 and P2; (b) block diagram: X(s) → Ts/(Ts + 1) → Y(s), where T = RA^2 ρ/k]

Figure 20 (a) Dashpot, (b) block diagram of dashpot

Let us derive the transfer function between the displacement y and displacement x. Define the
pressures existing on the right and left sides of the piston as P1 (lb/in.2) and P2 (lb/in.2),
respectively. Suppose that the inertia force involved is negligible. Then the force acting on the
piston must balance the spring force. Thus

A(P1 − P2 ) = ky (43)

where A = piston area, in m2


k = spring constant, kg/m.

The flow rate q is given by

P1 − P2
q= (44)
R

where q = flow rate through the restriction, kg/sec


R = resistance to flow at the restriction, Nsec/(m2kg)

Since the flow through the restriction during dt seconds must equal the change in the mass of
oil to the left of the piston during the same dt seconds, we obtain:

qdt = Aρ(dx − dy ) (45)

where ρ = density, kg/m3 (we assume that the fluid is incompressible, i.e. ρ = constant). The last equation can be rewritten as

dx/dt − dy/dt = q/(Aρ) = (P1 − P2)/(RAρ) = ky/(RA^2 ρ)    (46)

or

dx/dt = dy/dt + ky/(RA^2 ρ)    (47)
Taking the Laplace transforms of both sides of this last equation, assuming zero initial
conditions, we obtain
sX(s) = sY(s) + (k/(RA^2 ρ)) Y(s)    (48)

The transfer function of this system thus becomes


Y(s)/X(s) = s/(s + k/(RA^2 ρ))    (49)

Let us define RA^2 ρ/k = T. Then

Y(s)/X(s) = 1/(1 + 1/(Ts)) = Ts/(Ts + 1)    (50)

Figure 20 (b) shows a block diagram representation for this system.
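
A brief sketch of the washout behaviour implied by Equation (50) (illustrative parameter values only):

% Dashpot as a differentiating (washout) element: Y(s)/X(s) = Ts/(Ts + 1)
R = 1e4;  A = 2e-3;  rho = 850;  k = 500;    % hypothetical dashpot data
T = R*A^2*rho/k;                             % time constant from Eq. (50)
t = linspace(0, 5*T, 200);
y = exp(-t/T);                               % response of y to a unit step in x
plot(t, y); grid on
xlabel('t (s)'); ylabel('y(t)');

A step in x appears immediately in y and then decays to zero, which is the derivative-like action exploited in the hydraulic PI and PD controllers below.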

Hydraulic PI Controller: Figure 21 shows a schematic diagram of a hydraulic proportional-integral controller. A block diagram of this controller is shown in Figure 22. The transfer function Y(s)/E(s) is given by:

Y(s)/E(s) = [(b/(a + b))·(K/s)] / [1 + (Ka/(a + b))·(Ts/(Ts + 1))]    (51)

Oil under
pressure
e

b
Spring (k) Area (A)
y

Density of oil ( ρ ) Resistance (R)

Figure 21 Schematic diagram of a hydraulic PI controller

In such a controller, under normal operation KaT/[(a + b)(Ts + 1)] >> 1, with the result that

Y(s)/E(s) = KP (1 + 1/(TI s))    (52)

where KP = b/a and TI = T = RA^2 ρ/k.

Thus the controller is a PI controller.

E(s) b X(s) K Y(s)


a+b s

a Z(s) Ts
a+b Ts + 1

Figure 22 Block diagram for hydraulic PI controller

Hydraulic PD Controller: Figure 23 shows a schematic diagram of a hydraulic proportional-plus-derivative controller. The cylinders are fixed in space and the pistons can move. For this system, notice that

k(y − z) = A(P2 − P1)    (53)
q = (P2 − P1)/R    (54)
q dt = ρA dz    (55)

Oil under
pressure
e
a

b
R
q
k
P2 P1 y
z

Density of oil ( ρ ) Area (A)

Figure 23 Schematic diagram of a hydraulic PD controller

Hence

y = z + (A/k) q R = z + (RA^2 ρ/k) dz/dt    (56)

E(s) b X(s) K Y(s)


a+b s

a Z(s) 1
a+b Ts + 1

Figure 24 Block diagram for hydraulic PD controller

or

Z(s)/Y(s) = 1/(Ts + 1)    (57)

where T = RA^2 ρ/k.

A block diagram for this system is shown in Figure 24. From the block diagram the transfer
function Y(s)/E(s) can be obtained as:

Y(s)/E(s) = [(b/(a + b))·(K/s)] / [1 + (a/(a + b))·(K/s)·(1/(Ts + 1))]    (58)

Under normal conditions we have aK/[(a + b)s(Ts + 1)] >> 1. Hence

Y(s)/E(s) = KP (1 + Ts)    (59)

where KP = b/a and T = RA^2 ρ/k.

2.3.3 Electronic PID Controllers

Let’s consider simple operational amplifier circuit. In this section, we will consider a simple
operational amplifier circuit.

Operational Amplifiers: Operational amplifiers, called op amps, are frequently used to


amplify signals in sensor circuit. Op amps are also frequency used in filters used for
compensation purposes. Figure 25 shows an op amps and Figure 26 shows the most usual
type of op amps: 741 Op Amp 8-pin DIP (dual-in-package). It is a common practice to choose

25
the ground as 0 volt and measure the input voltages v1 and v2 relative to the ground. The input
v1 to the minus terminal of the amplifier is inverted, and the input v2 to the plus terminal is not
inverted.

v1

v2
vo

Figure 25 Operational amplifier

Figure 26 The most usual 741 package

The total input to the amplifier thus becomes v2 − v1. Hence, for the circuit shown in Figure 25, we have

vo = K(v2 − v1)    (60)

where the inputs v1 and v2 may be dc or ac signals and K is the differential gain or voltage gain. The magnitude of K is approximately 10^5 to 10^6 for dc signals and for ac signals with frequencies less than approximately 10 Hz. It should be noted that the op amp amplifies the difference between the voltages v1 and v2. Such an amplifier is commonly called a differential amplifier. Since the gain of the op amp is very high, a negative feedback path from the output to the input is necessary to make the amplifier stable.

Inverting Amplifier: Consider the operational amplifier circuit in Figure 27. Let us obtain
the output voltage vo. The equation for this circuit can be obtained as follows:

i1 = (vi − v′)/R1;    i2 = (v′ − vo)/R2    (61)

i2 R2

i1 R1

v′
vi vo

Figure 27 Inverting operational amplifier circuit

Since only a negligible current flows into the amplifier, the current i1 must be equal to current
i2. Thus

(vi − v′)/R1 = (v′ − vo)/R2    (62)

Since K(0 − v′) = vo and K >> 1, v′ must be almost zero, or v′ ≅ 0. Hence we have

vi/R1 = −vo/R2    (63)

or

vo = −(R2/R1) vi    (64)

Thus the circuit is an inverting amplifier. If R1 = R2, then the op-amp circuit acts as a sign
converter.

Non-inverting amplifier: Figure 28 shows a non-inverting amplifier circuit. For this circuit,
we have
vo = K(vi − (R1/(R1 + R2)) vo)    (65)

where K is the differential gain of the amplifier.


R2

R1

vi vo

Figure 28 Non-inverting operational amplifier circuit

From Equation (65) we get

vi = (R1/(R1 + R2) + 1/K) vo    (66)

Since K >> 1, if R1/(R1 + R2) >> 1/K, then

vo = (1 + R2/R1) vi    (67)

This equation gives the output voltage vo. Since vo and vi have the same signs, the amplifier
circuit shown in Figure 28 is non-inverting.

Impedance approach for obtaining transfer functions. Consider the op-amp circuit shown in
Figure 29(a). Similar to the case of electrical circuits discussed above, the impedance
approach can be applied to op-amp circuits to obtain their transfer functions. For the circuit in
Figure 29(b), we have
C I(s)
i2
Z2(s)
I3 R2

i1 R1 I(s)
Z1(s)
v′ v′
vi vo Vi(s) Vo(s)

(b)
(a)

Figure 29 Inverting operational amplifier circuit

Vi(s) = Z1(s)I(s); Vo(s) = − Z 2 (s)I(s) (68)

Hence, the transfer function for the circuit is obtained as

Vo(s)/Vi(s) = −Z2(s)/Z1(s)    (69)

PID Controller: Figure 30 shows an electronic PID controller using operational amplifiers.
The transfer function is

E(s)/Ei(s) = −Z2/Z1    (70)

where Z1 = R1/(R1 C1 s + 1) and Z2 = (R2 C2 s + 1)/(C2 s).
Thus
E(s)/Ei(s) = −((R2 C2 s + 1)/(C2 s))·((R1 C1 s + 1)/R1)    (71)

Noting that

U(s)/E(s) = −R4/R3    (72)

Z1 Z2

R2 C2
C1
R4

R3
R1

Ei(s)
E(s) U(s)

Figure 30 Electronic PID controller

we have

U(s)/Ei(s) = (U(s)/E(s))·(E(s)/Ei(s)) = (R4/R3)·((R2 C2 s + 1)/(C2 s))·((R1 C1 s + 1)/R1)
           = (R4 R2/(R3 R1))·((R1 C1 s + 1)(R2 C2 s + 1)/(R2 C2 s))
           = (R4 R2/(R3 R1))·((R1 C1 + R2 C2)/(R2 C2) + 1/(R2 C2 s) + R1 C1 s)
           = (R4 (R1 C1 + R2 C2)/(R3 R1 C2))·(1 + 1/((R1 C1 + R2 C2)s) + (R1 C1 R2 C2/(R1 C1 + R2 C2)) s)    (73)

From equation (73) we obtain:

Kp = R4(R1 C1 + R2 C2)/(R3 R1 C2);    Ti = R1 C1 + R2 C2;    Td = R1 C1 R2 C2/(R1 C1 + R2 C2)    (74)

In terms of the proportional, integral and derivative gains we have:

Kp = R4(R1 C1 + R2 C2)/(R3 R1 C2);    Ki = R4/(R3 R1 C2);    Kd = R4 R2 C1/R3    (75)
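
As a minimal sketch, Equations (74) and (75) can be evaluated for an illustrative (hypothetical) set of component values:

% PID settings of the op-amp controller in Figure 30 from Eq. (74)-(75)
R1 = 100e3; R2 = 100e3; R3 = 10e3; R4 = 10e3;   % resistances, ohms (assumed)
C1 = 1e-6;  C2 = 10e-6;                         % capacitances, farads (assumed)
Kp = R4*(R1*C1 + R2*C2)/(R3*R1*C2)              % proportional gain (about 1.1)
Ti = R1*C1 + R2*C2                              % integral time, s (1.1 s)
Td = R1*C1*R2*C2/(R1*C1 + R2*C2)                % derivative time, s (about 0.09 s)
Ki = Kp/Ti,  Kd = Kp*Td                         % equivalent integral and derivative gains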

2.4 Simulation of PID Control System (PID Autopilot)

MATLAB/Simulink® allows the user to simulate a desired PID control system in M-files or in a Simulink model. In this section we design a PID autopilot system for controlling the course (heading or yaw angle) of a surface vessel and build a Simulink model for it.

2.4.1 Ship Manoeuvring Model

Let’s consider the following Nomoto’s first order model in Module 1.

Tr + r = Kδ (76)
 =r
ψ (77)

where T is time constant (seconds), K is constant, r is yaw rate (rad/seg), ψ is yaw angle (rad)
and δ is rudder angle (rad). It should be noted that in practice, for a ship the rudder angle is
limited in range of –35 degrees (port) to +35 degrees (starboard). The yaw angle is limited in
the range of 0 to 360 degrees (on the gyrocompass indicator). It is assumed that the ship
velocity is 12 knots.

From equations (76) and (77), we obtain the transfer function between the yaw angle ψ(s) and
the rudder angle δ(s):

ψ(s)/δ(s) = K/(s(Ts + 1))    (78)

As mentioned above, the PID autopilot has the transfer function as follows:

δ(s)/E(s) = KP + KI/s + KD s    (79)

Figure 31 shows the block diagram of the PID autopilot system.

[Block diagram: set course R(s) → error E(s) → PID autopilot (KP + KI/s + KD s) → rudder angle δ(s) → ship K/(Ts + 1) → yaw rate → 1/s → yaw angle ψ(s); gyrocompass gain Kg in the feedback path]

Figure 31 Block diagram of the PID autopilot system

From Figure 31, the total closed-loop transfer function between the yaw angle ψ(s) and the set
course R(s) is

ψ(s)/R(s) = [(KP + KI/s + KD s)·K/(s(Ts + 1))] / [1 + (KP + KI/s + KD s)·(K/(s(Ts + 1)))·Kg]    (80)

where Kg is the gyrocompass transfer function. For simplification, it is assumed that Kg = 1.


This equation can be rearranged as

ψ(s)/R(s) = (KP s + KI + KD s^2)·K / [s^2(Ts + 1) + (KP s + KI + KD s^2)·K·Kg]
          = (KP s + KI + KD s^2)·K / [T s^3 + (KD K Kg + 1)s^2 + KP K Kg s + KI K Kg]    (81)

From equation (81), it can be seen that the poles and zeros of the total system depend on the control gains (proportional gain KP, integral gain KI and derivative gain KD). Applying the stability criteria, we can determine values of the control gains so that the whole system is stable; we can also tune these control gains by computer simulation.
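
As a minimal check (assuming the Control System Toolbox and using the gains of the simulation program in Section 2.4.2), the closed-loop transfer function (81) can be formed and its poles inspected before running the full simulation:

% Closed-loop autopilot of Eq. (81): poles and step response of the set-course loop
K = 0.11;  T = 7.5;  Kg = 1;              % Nomoto model and gyrocompass gain
Kp = 0.75; Ki = 0.000004; Kd = 4.0;       % PID gains used in Section 2.4.2
Gship = tf(K, [T 1 0]);                   % psi(s)/delta(s) = K/(s(Ts + 1))
Cpid  = pid(Kp, Ki, Kd);
Gcl   = feedback(Cpid*Gship, Kg);         % Eq. (81)
disp(pole(Gcl));                          % all poles in the left half-plane -> stable
step(Gcl, 400); grid on                   % response to a unit step in set course

Note that this linear check ignores the rudder saturation included in the simulation program below.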

2.4.2 Simulation of PID Autopilot (Nomoto’s First-Order Model)

The MATLAB program consists of 3 M-files as follows:

1. Main program – PIDAutopilot_sim.m

% Main Program: PIDAutopilot_sim.m


% This program is to simulate the PID Autopilot system for a vessel
% using the Nomoto's first order model.
%
% Initial values:
%
% u = surge velocity (m/s)
% v = sway velocity (m/s)
% r = yaw velocity (rad/s)
% psi = yaw angle (rad)
% Xpos = position in x-direction (m)
% Ypos = position in y-direction (m)
% delta = actual rudder angle (rad)

% r = 0; psi = 0; Xpos = 0; Ypos = 0; delta = 0;

% State vector:
% X = [r psi Xpos Ypos delta]';

% Made by Hung Nguyen in 2004


% Last modified on 1 September 2005
% For subject: E07 267 Instrumentation and Process Control
% Copyright (C) 2005 Hung Nguyen <H.Nguyen@mte.amc.edu.au>
% Department of Maritime Engineering
% Australian Maritime College
% Launceston, Tasmania, Australia.
%

clear

fid = fopen('NomotoModel_2005_01.dat','w');
fprintf(fid,'Simulation date: 1-Sep-2005\n');
fprintf(fid,'time yawrate(deg/s) course(deg) Xpos(m) Ypos(m) rudder(deg)\n');

% Initial conditions:
X = [0 0 0 0 0]';

% Set course:
Setco = 60*pi/180; % in rad

% Initial values for computation of errors:


e2 = 0;
int_err = 0;
e = [0 0 0]';

%
N = 400;
h = 0.1;
index = 0;

for ii = 0.0:h:N

% Counting:
index = index + 1;

% Rudder input:
uu = PIDAutopilot(ii*h,e);

% Simple Euler Method:


Xdot = NomotoModelFunction(X,uu);
X = X + h*Xdot;

% Set yaw angle in range of 0-360:


if X(2) >= 2*pi;
X(2) = X(2)-2*pi;
end
if X(2) < 0;
X(2) = X(2) + 2*pi;
end

% Calculate errors:
e1 = Setco - X(2);
% Set the error in range of -180 +180:
if e1 >= pi
e1 = e1 - 2*pi;
end
if e1 <= -pi
e1 = e1 + 2*pi;
end

e(1) = e1;

int_err = int_err + h*e1; e(2) = int_err;


der_err = (e1 - e2)/h; e(3) = der_err;

e2 = e1; % update error for next step

% Store data:

data(index,1) = ii; % time
data(index,2) = X(1)*180/pi; % yaw rate (deg/s)
data(index,3) = X(2)*180/pi; % yaw angle (deg)
data(index,4) = X(3); % Xpos (m)
data(index,5) = X(4); % Ypos (m)
data(index,6) = X(5)*180/pi; % Rudder angle (deg)
data(index,7) = Setco*180/pi; % Set course (deg)

% to a file:

fprintf(fid,'%6.2f %12.8f %12.8f %12.8f %12.8f %12.8f\n',...
    ii,X(1)*180/pi,X(2)*180/pi,X(3),X(4),X(5)*180/pi);

end

fclose(fid);

% Draw graphics:

figure(1);
subplot(212);plot(data(:,1),data(:,6));grid
xlabel('Time (second)');ylabel('Rudder angle (deg)');

subplot(211);plot(data(:,1),data(:,7),data(:,1),data(:,3));grid
xlabel('Time (second)');ylabel('Setcourse and yaw angle (deg)');

figure(2);
plot(data(:,5),data(:,4));axis('equal');grid;

% End of PIDAutopilot_sim.m
%
2. Function of Nomoto’s 1st-order model: NomotoModelFunction.m

function xdot = NomotoModelFunction(x,ui)

% NomotoModelFunction.m
% This program is to simulate the Nomoto's first order model
%
% T*r_dot + r = K*delta
%
% delta_c - delta
% delta_dot = ---------------------------
% |delta_c - delta|*Trud + a
%
% delta_c = command rudder (rad)
% delta = rudder (rad)
%
% Made by Dr. Hung Nguyen in March 2004
% Modified on 1 September 2005
% Subject: E07 267 Instrumentation and Process Control
% Copyright (C) 2005 Hung Nguyen <H.Nguyen@mte.amc.edu.au>
% Department of Maritime Engineering,
% Australian Maritime College,
% Launceston, Tasmania, Australia
%

T = 7.5; K = 0.11; % maneuverability indices


U0 = 12*1852/3600; % ship speed (m/s)

r = x(1); % yaw rate (rad/s)

yaw = x(2); % yaw angle (rad)
delta = x(5); % rudder (rad)
delta_c = ui(1); % command rudder (rad)

% Return derivatives:

xdot = [ -1/T*r+K/T*delta
r
U0*cos(yaw)
U0*sin(yaw)
(delta_c-delta)/(abs(delta_c-delta)*11.9+1)];

% End of function "NomotoModelFunction"


%

3. Function of PID autopilot system: PIDAutopilot.m

function u1 = PIDAutopilot(k,e)

% PIDAutopilot.m
% This function is PID Autopilot for a surface ship
% using the Nomoto's first order model T*r_dot + r = K*delta
%
% Made by Hung Nguyen in 2004
% Last modified on 1 September 2005
% For subject: E07 267 Instrumentation and Process Control
% Copyright (C) 2005 Hung Nguyen <H.Nguyen@mte.amc.edu.au>
% Department of Maritime Engineering
% Australian Maritime College
% Launceston, Tasmania, Australia.
%

% Control gains and maximum rudder angle:

Kp = 0.75; Ki = 0.000004; Kd = 4.0;

maxrud = 10*pi/180;

% Rudder angle based on PID control law:

rudder = Kp*e(1) + Ki*e(2) + Kd*e(3);

if rudder >= maxrud


rudder = maxrud;
end

if rudder <= -maxrud


rudder = -maxrud;
end

u1 = [ rudder % rudder angle (rad)


0 % port and starboard stern plane (rad)
0 % top and bottom bow plane (rad)
0 % port bow plane (rad)
0 % starboard bow plane (rad)
0 ]; % propeller shaft speed (rpm)

% End of Function PIDAutopilot


%

Running this program, the following results are obtained:

[Plots: set course and yaw angle (deg) versus time (s); rudder angle (deg) versus time (s)]

Figure 32 Time series of yaw angle and rudder angle

[Plot: trajectory of the ship in the horizontal plane, Y-position versus X-position (m)]

Figure 33 Trajectory of the ship


Simulink model:

Figure 34 Simulink model for PID autopilot system

[Plots: set course and yaw angle (deg) versus time (s); rudder angle (deg) versus time (s)]

Figure 35 Time series of yaw angle and rudder angle (Simulink model)

[Plot: trajectory of the ship, Y-position versus X-position (m), from the Simulink model]

Figure 36 Trajectory of the ship (Simulink model)

2.5 PID Controllers in Market

2.5.1 DTZ4 Controller of Instronics Inc.

The DTZ4 Series of PID controllers offers front-panel selection of all operating functions except the output type. Programming flexibility includes selection of a second temperature set value, the sensor type, and all PID function parameters. Figure 37 shows the DTZ4 series of controllers.

Another selectable feature of the DTZ4 Series is fast or slow PID response. When set in the slow-response (PIDS) mode, very little, if any, overshoot will occur. If selected, auto-tune control will monitor the system output response and, after 3 response cycles, will automatically calculate and replace the PID constants with optimal process control parameters.

Figure 37 DTZ4 controllers of Instronics Inc. (Courtesy of Instronics Inc.)

Two independent alarm outputs, each with 9 alarm mode types, can be programmed for automatic or manual reset. If ramp up or ramp down to the set value (SV) is desired, all models have that capability. This feature allows the DTZ4 Series of PID controllers to be used in hot-runner applications.

Features of DTZ4 are as follows:


• Front panel selectable including sensor input.
• 2 temperature set values can be entered. Operator can then manually select between
the two separate set values.
• Fast or slow PID response selectable - In slow response overshoot eliminated.
• 2 independent alarm outputs, each with 9 operating modes
• Universal Input: 100-240 VAC 50/60 Hz, 100-240 VDC
• Powerful but inexpensive
• Indicating accuracy of ±0.3% Full Scale
• Diverse outputs like Relay, SSR driver, and 4-20mA analog.
• Multi-input type selectable 15 kinds of input modes.

2.5.2 More Information on the Internet

Following are some web sites on PID controllers.

http://www.library.cmu.edu/ctms/ctms/pid/pid.htm
http://www.process-controls.com/Instronics/Digitec_PID_Controllers.html
http://intelec.orcon.net.nz/controllers.htm
http://www.armfield.co.uk/pct20h_datasheet.html
http://www.predig.com/content/products/#4
http://www.tempatron.co.uk/digital_pid_controllers.htm
http://www.brighton-electronics.com/controllers.htm

2.6 Examples

Example 2
Given the following PID-type autopilot system using Nomoto's first-order model relating yaw angle to rudder angle,

[Block diagram: set course R → error E → controller C → rudder U → ship G → course Y; feedback B = H·Y]

Figure 38 PID autopilot


where G = K1/(s(T1 s + 1)) (K1 and T1 are constants), U is the input rudder (rad), Y is the output course (rad), E is the error, B is the feedback course and R is the set course (rad); C = KP + KI/s + KD s (KP, KI and KD are the control gains); and H = K2 (K2 is a constant feedback gain):

(i) Find open-loop transfer function (B/E) and its poles and zeros if K1 = 0.11, T1 = 7.5
seconds, KP = 2.5, KI = 0.25, KD = 5 and K2 = 3.

(ii) Find the total feedback transfer function (Y/R).
(iii) Find the steady state error if a set-point course (R) of 60 degrees is applied.
SOLUTIONS
(i) The open-loop transfer function is as follows:
O.L.T.F. = B/E = CGH = (KP + KI/s + KD s)·(K1/(s(T1 s + 1)))·K2 = (KD s^2 + KP s + KI)K1 K2 / (s^2(T1 s + 1))

Substituting the values of K1, T1, KP, KI, KD and K2, we have the O.L.T.F.:

O.L.T.F. = (5s^2 + 2.5s + 0.25) × 0.33 / (s^2(7.5s + 1))

Zeros: 5s^2 + 2.5s + 0.25 = 0;  z1,2 = (−2.5 ± √(2.5^2 − 4 × 5 × 0.25)) / (2 × 5) = −0.25 ± √1.25/10 ≈ −0.138, −0.362

Poles: s^2(7.5s + 1) = 0;  p1,2 = 0, p3 = −1/7.5
(ii) The total feedback (closed-loop) transfer function is as follows:

C.L.T.F. = Y/R = CG/(1 + CGH)
         = [((KD s^2 + KP s + KI)/s)·(K1/(s(T1 s + 1)))] / [1 + ((KD s^2 + KP s + KI)/s)·(K1/(s(T1 s + 1)))·K2]
         = (KD s^2 + KP s + KI)K1 / [s^2(T1 s + 1) + (KD s^2 + KP s + KI)K1 K2]
         = (KD s^2 + KP s + KI)K1 / [T1 s^3 + (KD K1 K2 + 1)s^2 + KP K1 K2 s + KI K1 K2]


(iii) The set-point signal is R(s) = R/s = (60/s)(π/180) rad, where R = 60° is the set course.

The error is E(s) = R(s)(1 − C.L.T.F)

E(s) = (R/s)[1 − (KDs^2 + KPs + KI)K1 / (T1s^3 + (KDK1K2 + 1)s^2 + KPK1K2s + KIK1K2)]

     = (R/s)[(T1s^3 + (KDK1K2 + 1)s^2 + KPK1K2s + KIK1K2 − (KDs^2 + KPs + KI)K1)
             / (T1s^3 + (KDK1K2 + 1)s^2 + KPK1K2s + KIK1K2)]

     = (R/s)[(T1s^3 + (KDK1K2 + 1 − KDK1)s^2 + (KPK1K2 − KPK1)s + (KIK1K2 − KIK1))
             / (T1s^3 + (KDK1K2 + 1)s^2 + KPK1K2s + KIK1K2)]

The steady-state error is SSE = lim(s→0) sE(s) = lim(s→0) s(R/s)(1 − C.L.T.F)

     = lim(s→0) R[(T1s^3 + (KDK1K2 + 1 − KDK1)s^2 + (KPK1K2 − KPK1)s + (KIK1K2 − KIK1))
                  / (T1s^3 + (KDK1K2 + 1)s^2 + KPK1K2s + KIK1K2)]

     = R(KIK1K2 − KIK1)/(KIK1K2) = R(K2 − 1)/K2   (values of R and K2 may be substituted here)

Note that if K2 = 1, SSE = 0 and the course follows the set course.
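The results above can be checked numerically with a short MATLAB script; this is a minimal
sketch, assuming the Control System Toolbox functions tf, feedback, minreal, zero, pole and
dcgain are available.

% Numerical check of Example 2 (Control System Toolbox assumed available)
K1 = 0.11; T1 = 7.5; KP = 2.5; KI = 0.25; KD = 5; K2 = 3;
s  = tf('s');
G  = K1/(s*(T1*s + 1));            % Nomoto first-order model (rudder to course)
C  = KP + KI/s + KD*s;             % ideal PID controller
H  = K2;                           % constant feedback gain
OLTF = minreal(C*G*H);             % open-loop transfer function B/E
zero(OLTF), pole(OLTF)             % zeros approx. -0.138 and -0.362; poles 0, 0, -0.133
CLTF = minreal(feedback(C*G, H));  % total feedback transfer function Y/R
R = 60*pi/180;                     % 60 degree set course in rad
SSE = R*(1 - dcgain(CLTF))         % steady-state error, equals R*(K2 - 1)/K2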
End of Example 2

Example 3
In the following liquid level control system, liquid level is measured and the level transmitter
(LT) output is sent to a PID feedback controller (LC) that controls liquid level by adjusting
volumetric flow rate qi. A current-to-pressure converter (I/P) with pneumatic supply is used to
convert the current signal into the pressure that activates the control diaphragm valve.
Assume:
[Figure: storage tank of cross-sectional area A with inlet flow qi admitted through a
pneumatically actuated diaphragm valve (driven via the I/P converter from the pneumatic
supply); the level h is measured by the level transmitter (LT), whose signal hm goes to the
level controller (LC); the outlet flow qo leaves through a line of resistance R.]

Figure 39 Liquid level control system

1. The liquid density ρ and the cross-sectional area A are constant.


2. The level transmitter, I/P converter, and control diaphragm valve have negligible
   dynamics and gains of Km, KIP, and Kv, respectively.
3. The level controller (LC) is a PID-type controller with adjustable control gains of
   KP, KI and KD. The set-point (input) and the output (control signal) are in % (full
   scale = 100%). The internal set-point signal (in mA) is assumed to be equal to Km times
   the set-point signal in % (Hsp (%)).
4. The flow-head relationship is linear, qo = h/R (R is resistance).
(i) Develop a differential equation (between the head h and the inlet flow rate qi) to describe
the tank system.
(ii) Write the transfer function (level and inlet flow rate) with assumption that the tank starts
off empty using Laplace transform.
(iii) Draw a block diagram for the whole system.
(iv) Write the total feedback transfer function.
(v) If the liquid in the tank system is gas oil with the density ρ of 820kg/m3, and the level h is
controlled at a set-point value of 150cm, calculate the outlet weight of liquid flowing for a
day (R = 0.5sec/m2, cross-sectional area A = 2m2) and outlet flow velocity (internal
diameter of outlet pipe is 100mm).

SOLUTIONS
(i) Based on the mass balance: the change in mass in the tank is equal to the difference
between the inlet mass and the outlet mass, we have:
dm/dt = wi − wo, or

ρA dh/dt = ρqi − ρqo

From the assumption that the head-flow relationship is linear, we have qo = h/R.
Therefore, the differential equation characterising the relationship between the level and the
inlet flow rate is:

A dh/dt + h/R = qi
(ii) Taking the Laplace transform with zero initial conditions (the tank starts off empty), the
transfer function is

G(s) = H(s)/Qi(s) = k/(Ts + 1)

where k = R and T = AR.

(iii) The block diagram for the whole liquid level control system is as follows

[Figure: set-point Hsp (%) scaled by Km to give the mA set-point; the comparator output is the
error E; the PID controller CPID, I/P converter gain KIP and valve gain Kv drive the tank G,
giving the level Y; the feedback signal is B = Km·Y.]
Figure 40 Block diagram of the level control system

where CPID = KP + KI/s + KDs = (KDs^2 + KPs + KI)/s and G = k/(Ts + 1).

(iv) Based on the above block diagram, the total feedback transfer function is

F.B.T.F = Y/Hsp = Km × [Y/(KmHsp)]

where KmHsp is the set-point signal in mA. For the loop from the mA set-point to the level,

Y/(KmHsp) = CPID·KIP·Kv·G / (1 + CPID·KIP·Kv·G·Km)

          = [(KDs^2 + KPs + KI)/s × KIP·Kv·k/(Ts + 1)]
            / [1 + (KDs^2 + KPs + KI)/s × KIP·Kv·k/(Ts + 1) × Km]

          = (KDs^2 + KPs + KI)KIP·Kv·k / [s(Ts + 1) + (KDs^2 + KPs + KI)KIP·Kv·k·Km]

          = (KDs^2 + KPs + KI)KIP·Kv·k
            / [(T + KD·KIP·Kv·k·Km)s^2 + (1 + KP·KIP·Kv·k·Km)s + KI·KIP·Kv·k·Km]

F.B.T.F = Y/Hsp = (KDs^2 + KPs + KI)KIP·Kv·k·Km
          / [(T + KD·KIP·Kv·k·Km)s^2 + (1 + KP·KIP·Kv·k·Km)s + KI·KIP·Kv·k·Km]
(v) The mass flow rate is wo = ρqo, where qo = h/R, so

wo = ρh/R = 820 × 1.5/0.5 = 2460 kg/s

The outlet weight of gas oil flowing through the outlet pipe in one day is:

Wday = wo × 3600 × 24 = 2460 × 3600 × 24 = 212 544 × 10^3 kg ≈ 212 544 tonnes.

The outlet flow velocity is as follows:

Ap = (π/4)(100 × 10^-3)^2 = 0.00785 m^2

Vo = qo/Ap = h/(R·Ap) = 1.5/(0.5 × 0.00785) ≈ 382 m/s
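A minimal MATLAB sketch of this level loop follows. The tank data (R, A) are those given in
part (v); the transmitter, I/P, valve and PID gains are left symbolic in the example, so the
values below are purely illustrative assumptions.

% Sketch of the Example 3 level loop (Km, KIP, Kv and the PID gains are assumed values)
Rv = 0.5; A = 2;                       % outlet resistance (s/m^2) and tank area (m^2)
k = Rv; T = A*Rv;                      % tank model parameters: G = k/(T*s + 1)
Km = 1; KIP = 1; Kv = 1;               % transmitter, I/P and valve gains (assumed)
KP = 4; KI = 0.5; KD = 1;              % PID gains (assumed)
s = tf('s');
G    = k/(T*s + 1);                    % tank: level / inlet flow
Cpid = KP + KI/s + KD*s;               % PID level controller
FBTF = Km*feedback(Cpid*KIP*Kv*G, Km); % Y/Hsp, as derived in part (iv)
step(FBTF), grid on                    % level response to a unit set-point change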

End of Example 3

SUMMARY OF MODULE 3
Module 3 is summarised as follows.

1. Concepts of stability: why stability is important for a control system; asymptotically
   stable and marginally stable systems; absolute stability and relative stability.

2. PID control: P control, PI control, PD control and PID control.

3. Types of PID controller: pneumatic PID controllers, hydraulic PID controllers and
   electronic PID controllers.

4. An example of designing a PID-type autopilot for ships.

5. An example of a commercial PID controller.

Exercises
Problems in this Module can be solved by computer (MATLAB/Simulink).

1. A control system consists of a tank with transfer function

G(s) = H(s)/Q(s) = 5/(2s + 1)

where H(s) is the output level and Q(s) is the input volumetric flow, and a pump with transfer
function

C(s) = Q(s)/U(s) = 4/(3s + 2)

where U(s) is the input (electric signal) and Q(s) is the output (volumetric flow), and a
flowmeter that has gain K = 4 and time delay Td = 20 sec. The pump is controlled with a PI
controller that has control gains KP and KI. Draw the block diagram for the feedback control
system (R is the set-point, i.e. the desired level) and find the total closed-loop transfer
function. Using MATLAB/Simulink, simulate the system and select values of KP and KI such
that the whole system is stable. A possible starting point is sketched below.
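A minimal MATLAB sketch for this problem is given below; the PI gains shown are only trial
assumptions and should be adjusted (the 20-second measurement delay limits how large they
can be made).

% Possible starting point (trial PI gains are assumptions to be tuned)
s  = tf('s');
Gt = 5/(2*s + 1);              % tank: level / volumetric flow
Gp = 4/(3*s + 2);              % pump: volumetric flow / electric signal
H  = 4*exp(-20*s);             % flowmeter: gain 4 with a 20 s time delay
Kp = 0.01; Ki = 0.0005;        % trial PI gains (assumed); increase until stability is lost
C  = Kp + Ki/s;                % PI controller
T  = feedback(C*Gp*Gt, H);     % closed-loop transfer function from set-point to level
step(T, 2000), grid on         % simulate; the delay makes the response slow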

2. A simplified version of the d.c. motor model is shown below. The input is voltage and the
output is the rotational speed (dθ/dt).

[Figure: input voltage u(t) in V passes through the blocks 1/(sL + R) (armature circuit) and
1/(sJ + Kf) (mechanical dynamics) to give the output speed; the back-emf is fed back and
subtracted from the input voltage.]

where L is the motor inductance (Henry)
      R is the motor electrical resistance (Ohm)
      K is the armature (motor) constant
      J is the moment of inertia of the rotor/load
      Kf is the viscous friction coefficient of the mechanical system

Confirm that the closed-loop transfer function is

G(s) = K / [(Ls + R)(Js + Kf) + K^2]
If the system is controlled with a proportional and integral controller with gains KP and KI,
what is the total system transfer function?
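As a numerical illustration only, the closed loop can be formed and simulated in MATLAB as
below; all motor parameters and PI gains are assumed values, since none are given in the
problem.

% Illustration with assumed motor data and trial PI gains
L = 0.5; R = 1; K = 0.01; J = 0.01; Kf = 0.1;   % motor parameters (assumed)
Kp = 100; Ki = 200;                              % trial PI gains (assumed)
s  = tf('s');
Gm = K/((L*s + R)*(J*s + Kf) + K^2);             % motor: speed / voltage (emf loop closed)
C  = Kp + Ki/s;                                  % PI speed controller
T  = feedback(C*Gm, 1);                          % total system transfer function (unity feedback)
step(T), grid on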

3. A marine engine running at 100rpm has a transfer function as follows:


1/[s(s + 5)]

where the input is fuel flow rate and the output is rotational speed. We control the fuel valve
with a d.c. motor which has a transfer function of
1/(s + 1)
where the input is voltage and the output is fuel flow rate. We implement a PI controller for
the motor with gains of KP and KI, respectively. A tachometer in the feedback loop measures
the rotational speed and feeds back with a gain of 5.

(a) Draw the system block diagram and calculate the transfer function. Using
MATLAB/Simulink, simulate the system and select values of KP and KI such that the system
is stable.

(b) At the lower speed of 70 rpm the engine transfer function is


1/[s(s + 2)]
Calculate the total system transfer function if the engine runs at 70 rpm. Using
MATLAB/Simulink, simulate the system at a speed of 70 rpm and select values of KP and KI
such that the system is stable. Refer to the M-file/s or Simulink model/s you created in (a).
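A minimal MATLAB sketch for part (a) is shown below; the PI gains are trial assumptions
only, and part (b) is obtained by replacing the engine model and re-tuning.

% Sketch for the engine speed loop at 100 rpm (trial PI gains are assumed values)
s   = tf('s');
Ge  = 1/(s*(s + 5));           % engine: rotational speed / fuel flow rate
Gm  = 1/(s + 1);               % d.c. motor (fuel valve): fuel flow rate / voltage
H   = 5;                       % tachometer feedback gain
Kp = 2; Ki = 0.5;              % trial PI gains (assumed)
C   = Kp + Ki/s;               % PI controller
T100 = feedback(C*Gm*Ge, H);   % closed-loop transfer function at 100 rpm
step(T100), grid on
% For part (b), set Ge = 1/(s*(s + 2)) and re-select Kp and Ki.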

4. Find the closed-loop transfer function (F(s) = Y(s)/R(s)) of the following system.

[Figure: feedback loop with input R, comparator, forward-path block G, output Y and feedback block H.]

where G(s) = K1/(1 + T1s) and H(s) = K2/(1 + T2s).

(a) Find the poles and zeros of the total closed-loop transfer function. Using
MATLAB/Simulink simulate the system (K1 = 1, K2 = 2, T1 = 5, T2 = 7).
(b) Assuming that the input of G is controlled by a PID controller (C) as shown in the
following block diagram, simulate the system and try to select values of control gains (Kp, Ki
and Kd) such that the system is stable.

[Figure: the same loop with a PID controller C inserted between the comparator and G.]
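A short MATLAB sketch for this problem is given below; the PID gains used in part (b) are
trial assumptions to be tuned.

% Poles and zeros of the closed loop (part a) and a trial PID loop (part b)
K1 = 1; K2 = 2; T1 = 5; T2 = 7;
s = tf('s');
G = K1/(1 + T1*s);  H = K2/(1 + T2*s);
F = minreal(feedback(G, H));        % F(s) = G/(1 + G*H)
zero(F), pole(F)                    % part (a)
Kp = 2; Ki = 0.5; Kd = 1;           % trial PID gains (assumed)
C  = Kp + Ki/s + Kd*s;
Fb = minreal(feedback(C*G, H));     % part (b)
step(F, Fb), grid on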

5. Consider the following Nomoto first-order manoeuvring model:

T d²ψ/dt² + dψ/dt = Kδ

where T = 7.5 seconds, K = 0.11 and δ is the rudder angle (rad). The rudder handling machine
is modelled by the following equation:

dδ/dt = (δc − δ) / (|δc − δ|·TRUD + a)

where δc is commanded rudder, TRUD = 11.9 seconds (time constant) and a = 1 (used to avoid
dividing by zero). The trajectory of the ship is expressed by the following equations:
dX/dt = u cos ψ − v sin ψ
dY/dt = u sin ψ + v cos ψ

where u is the surge velocity of the ship along the x-axis and v is the sway velocity along the
y-axis (the total speed is V = √(u² + v²)). It is assumed that the ship is running at a constant
speed, i.e. v = 0. The ship is
controlled by a PID autopilot that has control gains: KP, KI and KD. Using
MATLAB/Simulink, make M-file/s or Simulink model to simulate the ship system and try
different values of control gains.
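One possible MATLAB starting point is a simple Euler integration of the three equations
above; the autopilot gains, ship speed and set course below are assumptions chosen only for
illustration.

% Euler-integration sketch (gains, speed and set course are assumed values)
T = 7.5; K = 0.11; TRUD = 11.9; a = 1;        % Nomoto and rudder machine data
KP = 2.5; KI = 0.05; KD = 20;                 % trial autopilot gains (assumed)
u = 7; psid = 30*pi/180;                      % assumed surge speed (m/s) and set course (rad)
dt = 0.1; N = 6000;
psi = 0; r = 0; delta = 0; ei = 0; X = 0; Y = 0;
out = zeros(N, 3);                            % [X Y psi] history
for k = 1:N
    e  = psid - psi;  ei = ei + e*dt;         % heading error and its integral
    dc = KP*e + KI*ei - KD*r;                 % commanded rudder (derivative taken on yaw rate)
    ddelta = (dc - delta)/(abs(dc - delta)*TRUD + a);  % rudder handling machine
    rdot = (K*delta - r)/T;                   % Nomoto model: T*rdot + r = K*delta
    X = X + u*cos(psi)*dt;  Y = Y + u*sin(psi)*dt;     % trajectory with v = 0
    psi = psi + r*dt;  r = r + rdot*dt;  delta = delta + ddelta*dt;
    out(k,:) = [X Y psi];
end
plot(out(:,2), out(:,1)), grid on, xlabel('Y (m)'), ylabel('X (m)')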

6. Consider industrial automatic controllers whose control actions are proportional, integral,
proportional-plus-integral, proportional-plus-derivative, and proportional-plus-integral-plus-
derivative. The transfer functions of these controllers can be given, respectively, by

U(s)/E(s) = Kp

U(s)/E(s) = Ki/s

U(s)/E(s) = Kp[1 + 1/(Ti·s)]

U(s)/E(s) = Kp(1 + Td·s)

U(s)/E(s) = Kp[1 + 1/(Ti·s) + Td·s]
where U(s) is the Laplace transform of u(t), the controller output, and E(s) the Laplace
transform of e(t), the actuating error signal. Sketch u(t) versus t curves for each of the five
types of controllers when the actuating error signal is
(a) e(t) = unit-step function
(b) e(t) = unit-ramp function

In sketching curves, assume that the numerical values of Kp, Ki, Ti and Td are given as
Kp = proportional gain = 4
Ki = integral gain = 2
Ti = integral time = 2 seconds
Td = derivative time = 0.8 seconds
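A short MATLAB sketch of the step-error case may help in drawing the curves; it only
evaluates the formulas above for the given numerical values.

% Controller outputs for a unit-step actuating error e(t)
Kp = 4; Ki = 2; Ti = 2; Td = 0.8;
t = 0:0.01:5;  e = ones(size(t));          % unit-step error
uP  = Kp*e;                                % P:   u = Kp*e
uI  = Ki*t;                                % I:   u = Ki * (integral of e) = Ki*t
uPI = Kp*(e + t/Ti);                       % PI:  u = Kp*(e + (1/Ti)*integral of e)
% PD and PID add Kp*Td*de/dt, which is an impulse of area Kp*Td at t = 0 for a step input
plot(t, uP, t, uI, t, uPI), grid on
legend('P', 'I', 'PI'), xlabel('t (s)'), ylabel('u(t)')
% For the unit-ramp case set e = t; the derivative terms then equal the constant Kp*Td.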

7. Consider the system shown in the following figure. Show that the steady-state error in
following the unit-ramp input is B/K. This error can be made smaller by choosing B small
and/or K large. However, making B small and/or K large would have the effect of making the
damping ratio small, which is normally not desirable. Describe a method or methods to make
B/K small and yet make the damping ratio have a reasonable value (0.5 < ξ < 0.7).

[Figure: unity-feedback loop with input R(s), gain K and plant 1/(s(Js + B)) in the forward
path, output Y(s).]

8. The following figure shows three systems. System I is a positional servo system. System II
is a positional servo system with PD control action. System III is a positional servo system
with velocity feedback. Compare the unit-step, unit-impulse and unit-ramp responses of the
three systems. Which system is the best with respect to the speed of response and maximum
overshoot in the step response?

[System I: unity-feedback loop with forward path 5/(s(5s + 1)).]

[System II: unity-feedback loop with forward path 5(1 + 0.8s)/(s(5s + 1)).]

[System III: unity-feedback loop with gain 5 and plant 1/(s(5s + 1)) in the forward path and an
inner velocity-feedback loop of 0.8s around the plant.]

Lecture Notes

BASIC CONTROL THEORY


Module 4
Control Elements

SEPTEMBER 2005

Prepared by Dr. Hung Nguyen


TABLE OF CONTENTS

Table of Contents..............................................................................................................................i
List of Figures..................................................................................................................................ii
List of Tables ................................................................................................................................. iii
References ......................................................................................................................................iv
Objectives ........................................................................................................................................v

1. General Structure of a Control System........................................................................................1


2. Comparison Elements..................................................................................................................2
2.1 Differential Levers (Walking Beams) ...................................................................................2
2.2 Potentiometers ......................................................................................................................3
2.3 Synchros................................................................................................................................4
2.4 Operational Amplifiers .........................................................................................................5
3. Control Elements .........................................................................................................................7
3.1 Process Control Valves .........................................................................................................7
3.2 Hydraulic Servo Valve ........................................................................................................11
3.3 Hydraulic Actuators ............................................................................................................15
3.4 Electrical Elements: D.C. Servo Motors.............................................................................16
3.5 Electrical Elements: A.C. Servo Motors .............................................................................18
3.6 Hydraulic Control Element (Steering Gear) .......................................................................18
3.7 Pneumatic Control Elements ..............................................................................................19
4. Examples of Control Systems..................................................................................................22
4.1 Thickness Control System ..................................................................................................22
4.2 Level Control System .........................................................................................................23
Summary of Module 4...................................................................................................................23
Exercises........................................................................................................................................24

LIST OF FIGURES

Figure 4.1.........................................................................................................................................1
Figure 4.2.........................................................................................................................................3
Figure 4.3.........................................................................................................................................3
Figure 4.4.........................................................................................................................................4
Figure 4.5.........................................................................................................................................5
Figure 4.6a .......................................................................................................................................6
Figure 4.6b.......................................................................................................................................6
Figure 4.7.........................................................................................................................................8
Figure 4.8.........................................................................................................................................9
Figure 4.9.......................................................................................................................................10
Figure 4.10.....................................................................................................................................11
Figure 4.11 .....................................................................................................................................12
Figure 4.12.....................................................................................................................................13
Figure 4.13.....................................................................................................................................14
Figure 4.14.....................................................................................................................................15
Figure 4.15.....................................................................................................................................15
Figure 4.16.....................................................................................................................................17
Figure 4.17.....................................................................................................................................18
Figure 4.18.....................................................................................................................................19
Figure 4.19.....................................................................................................................................20
Figure 4.20.....................................................................................................................................21
Figure 4.21.....................................................................................................................................22
Figure 4.22.....................................................................................................................................22
Figure 4.23.....................................................................................................................................24
Figure 4.24.....................................................................................................................................24
Figure 4.25.....................................................................................................................................25
Figure 4.26.....................................................................................................................................25
Figure 4.27.....................................................................................................................................26

LIST OF TABLES

REFERENCES

Chesmond, C.J. (1990), Basic Control System Technology, Edward Arnold, UK.

Haslam, J.A., G.R. Summers and D. Williams (1981), Engineering Instrumentation and Control,
London, UK.

Kuo, Benjamin C. (1995), Automatic Control Systems, Prentice-Hall International Inc., Upper
Saddle River, New Jersey, USA.

Ogata, Katsuhiko (1997), Modern Control Engineering, 3rd Edition, Prentice-Hall International
Inc., Upper Saddle River, New Jersey, USA.

Richards, R.J. (1993), Solving Problems in Control, Longman Group UK Ltd, Harlow, Essex,
UK.

Seborg, Dale E., Thomas F. Edgar and Duncan A. Mellichamp (2004), Process Dynamics and
Control, 2nd Edition, John Wiley & Sons, Inc., Hoboken, New Jersey, USA.

Taylor, D.A. (1987), Marine Control Practice, Butterworths, UK.

AIMS

1.0 Explain general structure of a control system and its components.

LEARNING OBJECTIVES

1.1 Describe a general structure of a control system by a block diagram.

1.2 State the function of each block in a control system

1.3 Describe components of a control system: process, transducers, recorders, comparison


elements, controllers and final control elements

1. General Structure of a Feedback Control System
Automatic control systems, including their recording elements, may be represented by a general
block diagram as shown in the following figure.

[Figure: input r(t) → comparison element → error e(t) → controller → control element →
control u(t) → process → output y(t); the output is measured by a transducer and fed back as
the feedback signal to the comparison element, and is also displayed on a recorder.]
Figure 4.1 General structure of a feedback control system

Input: The input signal is also called the reference signal or set-point signal. It is the desired
value at which the output is to be kept. The set-point signal can be set by an operator or by a
control program.

Output: The output signal is also called process variable (PV). It is an actual signal. The output
signal is often measured by a transducer or transmitter and fed back to the comparison element in
the closed-loop control system. The output is indicated by a recorder or a display.

Error: The error signal is also called an actuating error. It is the difference between the set-point
signal and the measured output signal.

Process: The process block represents the overall process. All the properties and variables that
constitute the manufacturing or production process are a part of this block. The process is also
called a plant or a dynamic system in which the controlled variable is regulated as desired. The
dynamic behaviour of the process can be expressed by an ordinary differential equation. See
Modules 1 through 3.

Transducer: The transducer block represents whatever operations are necessary to determine the
present value of the controlled variables. The transducer block is also called the measurement
block. The transducer is used to measure the process variable or output and feeds the
measured output back to the comparator. The output of this block is a measured indication of the
controlled variable expressed in some other form, such as voltage, current, or a digital signal.

Recorder: The recorder or indicating device indicates or displays the measured output.

Comparison Element: The comparison element is also called a comparator that detects an error,
a difference between the set-point signal and the measured output signal. The comparison
elements compare the desired input with the output and generate an error signal. The comparison

element may be one of the following types: mechanical types such as differential levers, or
electrical types such as potentiometers, operational amplifiers and synchros.

Controller: The control block is the part of the loop that determines the changes in the
controlling variable that are needed to correct errors in the controlled variable. This block
represents the 'brains' of the control system. Its output is a control signal that changes the
value of the controlling variable in the process (plant or dynamic system) and thereby the
controlled variable. The controller acts on the actuating error and uses this information to
produce a control signal that drives the process. The controller often has two tasks: 1)
computing the control signal/s and 2) driving the system being controlled. There are many
types of controller, such as pneumatic, hydraulic, electrical and electronic controllers, and
hybrid controllers that combine two or more of the above types. In traditional analogue
control systems, the controller is essentially an analogue computer. In computer-based control
systems, the controller function is performed in software. There are several control algorithms,
such as PID control, optimal control, self-tuning control, neural network control and so on.

Control Element: The control element block is the part that converts the signal from the
controller into actual variations in the controlling variable. The control element is also called an
actuating element or an actuator in which the amplified and conditioned control signal is used to
regulate some energy source to the process. In practice, the control element is part of the process
itself, as it must be to bring about changes in the process variables.

2. Comparison Elements
Comparison elements compare the output or controlled variable with the desired input or
reference signal and generate an error or deviation signal. They perform the mathematical
operation of subtraction.

2.1 Differential Levers (Walking Beams)

Differential levers are mechanical comparison elements which are used in many pneumatic
elements and also in hydraulic control systems. They come in many varied and complex forms,
a typical example being illustrated in Figure 4.2, which shows a type used in Taylor's
Transcope pneumatic controllers.

For purposes of analysis a differential lever can be considered as a simple lever which is free to
pivot at points R, S and T as illustrated in Figure 4.3. From Figure 4.3 for small movements:

i) considering R fixed: if x moves to the right then

ε = [b/(a + b)] x    (4.1)

ii) considering T fixed: if y moves to the left then

ε = − [a/(a + b)] y    (4.2)

The total movement ε can be found by using the principle of superposition, which states that, for
a linear system, the total effect of several disturbances can be obtained by summation of the
effects of each individual disturbance acting alone. The total movement ε due to the motion of x
and y is therefore given by sum of (i) and (ii):

ε = [b/(a + b)] x − [a/(a + b)] y    (4.3)

In many cases it is arranged that a = b, so that the lever is symmetrical, and then

ε = (1/2)(x − y)    (4.4)

i.e. ε = ½ × error, or ε = ½ × deviation

It is important that the output movement at y is arranged to always be in the opposite direction to
the input x, i.e. a negative-feedback arrangement.

Figure 4.2 The motion plate for a Taylor's Transcope controller; Figure 4.3 The differential lever
2.2 Potentiometers

Potentiometers are used in many d.c. electrical positioning servo-systems. They consist of a pair
of matched resistance potentiometers operating on the null-balance principle. The sliders are
driven by the input and output shafts of the control system as illustrated in Figure 4.4.

Figure 4.4 Error detection by potentiometers

If the same voltage is applied to each of potentiometer windings, an error voltage is generated
which is proportional to the relative positions. We have

ε = KP(θi − θ0)    (4.5)

where θi = input-shaft position


θ 0 = output-shaft position
KP = potentiometer sensitivity (volts/degree)

When the input and output shafts are aligned, θi = θ0 and the error voltage ε is zero, i.e.
null balance is achieved.

2.3 Synchros

Synchros are the a.c. equivalent of potentiometers and are used in many a.c. electrical systems for
data transmission and torque transmission for driving dials. They are also used to compare input
and output rotations in a.c. electrical servo-systems and rotating hydraulic systems.

To perform error detection, two synchros are used: one in the mode of a control transmitter, and
the other as a control transformer, as shown in Figure 4.5.

The synchros have their stator coils equally spaced at 120o intervals. An a.c. voltage (often 115V
at 400Hz) is applied to the transmitter rotor, producing voltages in the stator coils (by transformer
action) which uniquely define the angular position of the rotor. These voltages are transmitted to
the stator coils of the transformer, producing a resultant magnetic field aligned in the same
direction as the transmitter rotor.

The transformer rotor acts as a “search coil” in detecting the direction of its stator field. The
maximum voltage is induced in the transformer rotor coil when the rotor axis is aligned with the
field. Zero voltage is induced when the rotor axis is perpendicular. The “in-line” position of the
input and output shafts therefore requires the transformer rotor coil to be at 90o to the transmitter
rotor coil.

Figure 4.5 Error detection by synchros

The output voltage is an amplitude-modulated signal which requires demodulating to produce the
following relationship for small misalignment angles:

Output = K (input-shaft position – output-shaft position)


= K (θ i − θ 0 )

where K = voltage gradient (volts/degree)

Compared to d.c. potentiometers, synchros have the following advantages:

a) a full 360o of shaft rotation is always available;


b) since they have no sliding contacts, their life expectancy is much higher, resolution is infinite,
and hence they do not have "noise" problems;
c) a.c. amplifiers can be employed and therefore there are no drift problems.

However, phase-sensitive rectifiers are necessary to sense direction.

2.4 Operational Amplifiers

Operational amplifiers, or "op-amps", are direct-coupled (d.c.) amplifiers with the following
special characteristics:

• High gain, 200 000 to 10^6;
• Phase reversal, i.e. the output voltage is of opposite sign to the input;
• High input impedance.
[Figure: inverting summing amplifier – inputs v1 through R1 and v2 through R2 to the
summing junction, feedback resistor Rf carrying current if, output vo.]

Figure 4.6a Error detection by an operational amplifier

The input current to the amplifier can be assumed to be negligible, and

i1 + i 2 = i f (4.6)

∴ (v1 − 0)/R1 + (v2 − 0)/R2 = (0 − v0)/Rf  and  v0 = −[(Rf/R1)v1 + (Rf/R2)v2]    (4.7)

If Rf = R1 = R2, v1 is made equal to input ( θ i ), and v2 is made equal to – output ( − θ 0 ), we have

v0 = −(θi − θ0) = −(error)    (4.8)

The negative sign can be removed by using an inverter (as shown in the following example).
Operational amplifiers are used in electrical control systems and as comparison elements in many
hydraulic positioning systems.

Example

In Figure 4.6a, Rf = 1 MΩ, R1 = R2 = 0.1 MΩ, v1 is a voltage proportional to the input
displacement θi, and v2 is a voltage proportional to the output displacement θ0 and is arranged
to be fed back in a negative sense. Assuming the proportionality constant is 1 V/degree,
determine the amplification through the op-amp and show how the sign of the error output
can be inverted.
[Figure: unity-gain inverter – input v0 through resistor R to the summing junction, feedback
resistor Rf, output ve.]

Figure 4.6b An inverter

We have

v0 = −[(1 MΩ/0.1 MΩ)θi − (1 MΩ/0.1 MΩ)θ0] = −10(θi − θ0)    (4.9)

The amplification is therefore 10.


The sign of the error can be inverted as shown in Figure 4.6b.

We have

i = if (4.10)
∴ (v0 − 0)/R = (0 − ve)/Rf    (4.11)

∴ ve = −(Rf/R)v0    (4.12)

and, if Rf is made equal to R,

v e = −v 0 (4.13)

3. Control Elements (Actuators)

Control elements are those elements in which the amplified and conditioned error signal is used
to regulate some energy source to the process.

3.1 Process-control Valves

In many process systems, the control element is the pneumatically actuated control valve,
illustrated in Figure 4.7, which is used to regulate the flow of some fluid.

A control valve is essentially a pressure-reducing valve and consists of two major parts: the
valve-body assembly and the valve actuator.

a) Valve actuators

The most common type of valve actuator is the pneumatically operated spring-and-diaphragm
actuator illustrated in Figure 4.7, which uses air pressure in the range 0.2bar to 1.0bar unless a
positioner is used which employs higher pressure to give larger thrusts and quicker action. The
air can be applied to the top (air-to-close) or the bottom (air-to-open) of the diaphragm,
depending on the safety requirements in the event of an air-supply failure.

b) Valve-body design

Most control-valve bodies fall into two categories: single-seated and double-seated.

+ Single-seated valves have a single valve plug and seat and hence can be readily designed for
tight shut-off with virtually zero flow in the closed position. Unless some balancing arrangement
is included in the valve design, a substantial axial stem force can be produced by the flowing
fluid stream.

+ Double-seated valves have two valve plugs and seats, as illustrated in Figure 4.7. Due to the
fluid entering the centre and dividing in both upward and downward directions, the
hydrodynamic effects of fluid pressure tend to cancel out and the valves are said to be
"balanced". Due to the two valve openings, flow capacities up to 30% greater than for the
same nominal size single-seat valve can be achieved. They are, however, more difficult to
design to achieve tight shut-off.

The valve plugs and seats – known as the valve “trim” – are usually sold as matched sets which
have been ground to a precise fit in the fully closed position.

Figure 4.7 A process-control valve

The valve plugs are of two main types: the solid plug and the skirted V-port plug, as illustrated in
Figure 4.8. All valves have a throttling action which causes a reduction in pressure. If the
pressure increases again too rapidly, air bubbles entrained in the fluid “implode”, causing rapid
wear on the valve plugs. This process is known as cavitation. The skirted V-port plugs have less
tendency to cause this rapid pressure recovery and are therefore less prone to cavitation.

Figure 4.8 Control valve plugs

c) Valve flow characteristics

The flow characteristic of a valve is the relationship between the rate of flow change and the
valve lift. The characteristics quoted by the manufacturers are theoretical or inherent flow
characteristics obtained for a constant pressure drop across the valve. The actual or installed
characteristics are different from the inherent characteristics since they incorporate the effects of
line losses acting in series with the pressure drop across the valve. The larger the line losses due
to pipe friction etc., the greater the effect on the characteristic.

Figure 4.9 Types of valve flow characteristics

Three main types of characteristic illustrated in Figure 4.9 are:

i) Quick-opening – the open port area increases rapidly with valve lift and the maximum flow
rate is obtained after about 20% of the valve lift. This is used for on-off applications.

ii) Linear – the flow is directly proportional to valve lift. This is used, for example, in bypass
service of pumps and compressors.

iii) Equal-percentage – the change in flow is proportional to the rate of flow just before the flow
change occurred; that is, an equal percentage of flow change occurs per unit valve lift. This is

used when major changes in pressure occur across the valve and where there is limited data
regarding flow conditions in the system.
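The three inherent characteristics can be sketched quickly in MATLAB; the rangeability value
and the square-root form used for the quick-opening curve below are illustrative assumptions
only.

% Inherent valve characteristics at constant pressure drop (rangeability is an assumed value)
l = linspace(0, 1, 200);            % fractional valve lift
R = 50;                             % rangeability (assumed typical value)
q_lin = l;                          % linear: flow proportional to lift
q_eqp = R.^(l - 1);                 % equal-percentage: dq/dl proportional to q
q_qo  = sqrt(l);                    % quick-opening (approximate square-root shape)
plot(l, q_lin, l, q_eqp, l, q_qo), grid on
xlabel('Fractional valve lift'), ylabel('Fractional maximum flow')
legend('Linear', 'Equal-percentage', 'Quick-opening')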

3.2 Hydraulic Servo Valve

In hydraulic control systems, the hydraulic energy from the pump is converted to mechanical
energy by means of a hydraulic actuator. The flow of fluid from the pump to the actuator in most
systems is controlled by a servo-valve.

A servo-valve is a device using mechanical motion to control fluid flow. There are three main
modes of control:

i) sliding – the spool valve


ii) seating – the flapper valve;
iii) flow-dividing – the jet-pipe valve.

a) Spool Valves

Spool valves are the most widely used type of valve. They incorporate a sliding spool moving in
a ported sleeve as illustrated in Figure 4.10. The valves are designed so that the output flow from
the valve, at a fixed pressure drop, is proportional to the spool displacement from the null
position.

Figure 4.10 A spool valve

Spool valves are classified according to the following criteria.

The number of “ways” flow can enter or leave the valve. A four-way valve is required for use
with double-acting cylinders.

The number of lands on the sliding spool. Three and four lands are the most commonly used as
they give a balanced valve, i.e. the spool does not tend to move due to fluid motion through the
valve.

The valve-centre characteristic, i.e. the relationship between the land width and the port opening.
The flow-movement characteristic is directly related to the type of valve centre employed.
Figure 4.11 illustrates the characteristics of the three possibilities discussed below.

Figure 4.11 Valve-centre characteristics

i) Critical-centre or line-on. The land width is exactly the same size as the port opening. This is
the ideal characteristic as it gives a linear flow-movement relationship at constant pressure drop.
It is very difficult to achieve in practice, however, and a slightly overlapped characteristic is
usually employed.

ii) Closed-centre or overlapped. The land width is larger than the port opening. If the overlap is
too large, a dead-band results, i.e. a range of spool movement in the null position which produces
no flow. This produces undesirable characteristics and can lead to steady-state errors and
instability problems.

iii) Open-centre or underlapped. The land width is smaller than the port opening. This means that
there is continuous flow through the valve, even in the null position, resulting in large power
losses. Its main application is in high-temperature environments, which require a continuous
flow of fluid to maintain reasonable fluid temperatures.

b) Flapper Valves

Flapper valves incorporate a flapper-nozzle arrangement. They are used in low-cost single-stage
valves for systems requiring accurate control of small flows. A typical arrangement is illustrated
in Figure 4.12.

Figure 4.12 A Dowty single-stage servo-valve

Control of flow and pressure in the service line is achieved by altering the position of the
diaphragm relative to the nozzle, by application of an electrical input current to the coil.
Increasing the nozzle gap causes a reduction in service-port pressure, since the flow to the return
line is increased.

c) Jet-pipe Valves

Jet-pipe valves employ a swivelling-jet arrangement and are only used as the first stage of some
two-stage electrohydraulic spool valves.

d) Two-stage electrohydraulic servo-valves

These are among the most commonly used valves. A typical arrangement is illustrated in Figure
4.13, which shows a Dowty series 4551 "M" range servo-valve. This incorporates a double
flapper-nozzle arrangement as the first stage, driving the second-stage spool.

Figure 4.13 A Dowty electrohydraulic servo-valve

The flapper of the first-stage hydraulic amplifier is rigidly attached to the mid-point of the
armature and is deflected by the current input to the coil. The flapper passes between two nozzles,
forming a double flapper-nozzle arrangement so that, as the flapper is moved, pressure increases
at one nozzle while reducing at the other. These two pressures are fed to opposite ends of the
main spool, causing it to move.

The second stage is a conventional four-way four-land sliding spool valve. A cantilever feedback
spring is fixed to the flapper and engages a slot at the centre of the spool. Spool displacement
causes a torque in the feedback wire which opposes the original input-signal torque on the
armature. Spool movement continues until these two torques are balanced, when the flapper, with
the forces acting on it in equilibrium, is restored to its null position between the nozzles.

3.3 Hydraulic Actuators

The hydraulic servo-valve is used to control the flow of high-pressure fluid to hydraulic actuators.
The hydraulic actuator converts the fluid pressure into an output force or torque which is used to
move some load.

There are two main types of actuator: the rotary and the linear, the latter being the most
commonly used.

Linear actuators are commonly known as rams, cylinders, or jacks, depending on their
application. For most applications a double-acting cylinder is required – these have a port on each
side of the piston so that the piston rod can be powered in each stroke direction, enabling fine
control to be achieved. A typical cylinder design is shown in Figure 4.14.

Figure 4.14 A linear actuator

Example

Figure 4.15 shows a diagrammatic hydraulic servo-valve/cylinder arrangement. Assuming that


the flow through the valve is directly proportional to the valve spool movement, and neglecting
leakage and compressibility effects in the cylinder, derive a simple transfer operator for this
system.

[Figure: servo-valve with spool displacement xv, supply and exhaust ports, controlling the
flow to a cylinder whose piston position is θ0.]

Figure 4.15 A servo-valve/cylinder arrangement

Referring to Figure 4.15:

For the servo-valve:

Volumetric flow rate through the valve v ∝ valve spool movement x v

v = Kv·xv    (4.14)

where Kv = valve characteristic

volumetric flow rate to the cylinder v = effective cylinder area × piston velocity

v = A × dθ0/dt    (4.15)

Using s operator (Laplace transform), we have

v = A·s·θ0    (4.16)

Substituting for v , we get

Kv·xv = A·s·θ0    (4.17)

Therefore the transfer operator is

θ0/xv = Kv/(As), i.e. an integrator, since 1/s corresponds to ∫dt.    (4.18)
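As a quick numerical illustration of equation (4.18), the integrating behaviour can be seen
from a step response in MATLAB; the Kv and A values below are assumed, not taken from the
text.

% Illustration of equation (4.18): the valve/cylinder acts as an integrator
Kv = 2e-3;  A = 5e-3;               % valve characteristic and piston area (assumed values)
G  = tf(Kv, [A 0]);                 % theta0/xv = Kv/(A*s)
step(G), grid on                    % a step valve opening gives a ramp in piston position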

3.4 Electrical Elements: D.C. Servo Motors

D.C servo-motors have the same operating principle as conventional d.c. motors but have special
design features such as high torque and low inertia, achieved by using long small-diameter rotors.

Two methods of controlling the motor torque are used:

a) field control – Figure 4.16(a)


b) armature control – Figure 4.16(b)

(a) Field control (b) Armature control

Figure 4.16 Control of d.c. servo-motors

a) Field Control

With field control, the armature current is kept approximately constant and the field current is
varied by the control signal. Since only small currents are required, this means that the field can
be supplied direct from electronic amplifiers, hence the special servo-motors are wound with a
split field and are driven by push-pull amplifiers.

Most of these systems are damped artificially by means of velocity feedback, which requires a
voltage proportional to speed. This is achieved by means of a tachogenerator which is built with
the motor in a common unit.

Field-controlled d.c. motors are used for low-power systems up to about 1.5kW and have the
advantage that the control power is small and the torque produced is directly proportional to the
control signal; however, they have a relatively slow speed of response.

b) Armature Control

With armature control, the field current is kept approximately constant and the armature
current is varied by the control signal.

Considerable development has taken place in the design of this type of motor for use in robot
drive systems. A common form in use is the disc armature motor (sometimes called a pancake
motor). This consists of a permanent-magnet field and a thin disc armature consisting of copper
tracks etched or laminated onto a non-metallic surface. These weigh less than conventional iron-
core motors, giving very good power-to-weight ratios and hence a fast speed of response. Power
outputs in the range 0.1 to 10 kW are typical.

3.5 Electrical Elements: A.C. Servo-motors

A.C. servo-motors are usually two-phase induction motors with the two stator coils placed at
right angles to each other, as shown schematically in Figure 4.17. The current in one coil is kept
constant, while the current in the other coil is regulated by an amplified control signal. This
arrangement gives a linear torque/control-signal characteristic over a limited working range.

They are usually very small low-power motors, up to about 0.25kW.

[Figure: two-phase a.c. servo-motor – a fixed reference winding supplied by the a.c. reference
voltage and a control winding supplied by the amplified control signal, driving the motor
shaft.]

Figure 4.17 A two-phase a.c. servo-motor

As with the d.c. motors in the previous section, servo-motor tachogenerator units are supplied to
facilitate the application of velocity feedback.

3.6 Hydraulic Control Element (Steering Gear)

Where a flowing liquid is used as the operation medium, this can be generally considered as
hydraulic control. Hydraulics is, however, usually concerned with the transmission of power,
rather than the transmission of signals.

Hydraulic systems enable the transfer of power over large distances with infinitely variable speed
control of linear and rotary motions. High static forces or torques can be applied and maintained
for long periods by compact equipment. The equipment itself is safe and reliable, and overload or
supply failure situations can be safeguarded against. Hydraulic operation of a ship’s steering gear
is usual and use is often made of hydraulic equipment for both mooring and carriage handling
deck machinery.

Hydraulic systems utilize pumps, valves, motors or actuators and various ancillary fittings. The
system components can be interconnected in a variety of different circuits, using low- or
medium-pressure oil.

Example of a hydraulic control system (Ship Steering Machine)

[Figure: relay-operated valves direct oil pressure (poil) to the port or starboard side of the
steering cylinder; a steering telemotor cylinder and floating lever provide the input and
feedback linkage to the rudder.]
Figure 4.18 Simplified diagram of a two stage hydraulic steering machine

3.7 Pneumatic Control Elements

Where a control signal is transmitted by the use of a gas this is generally known as pneumatics.
Air is the usual medium and the control signal may be carried by a varying pressure or flow. The
variable-pressure signal is most common and will be considered in relation to the devices used.
There are principally position-balance or force-balance devices.
Position balance relates to the balancing of linkages and lever movements and the nozzle-flapper
device is an example. Force balance relates to a balancing of forces and the only true example of
this is the stacked controller. Pivoted beams which are moved by bellows and nozzle-flappers are
sometimes considered as force-balance devices. Fluidics is the general term for devices where the
interaction of flows of a medium results in a control signal.

Air as a control medium is usually safe to use in hazardous areas, unless oxygen increases the
hazard. No return path is required as the air simply leaks away after use. It is freely and readily
available, although a certain amount of cleaning as well as compressing is required. The signal
transmission is slow by comparison with electronics, and the need for compressors and storage
vessels is something of a disadvantage. Pneumatic equipment has been extensively applied in
marine control systems and is still very popular.

Examples of Pneumatic Control Elements

Nozzle-flapper

The nozzle-flapper arrangement is used in many pneumatic devices and can be considered as a
transducer, a valve or an amplifier. It transduces a displacement into a pneumatic signal. The
flapper movement acts to close or open a restriction and thus vary air flow through the nozzle.
The very small linear movement of the flapper is then converted into a considerable control

pressure output from the nozzle. The arrangement is shown in Figure 4.19(a). A compressed air
supply is provided at a pressure of about 1 bar. The air must pass through an opening which is
larger than the orifice, e.g. about 0.40mm. The position of the flapper in relation to the nozzle
will determine the amount of air that escapes. If the flapper is close to the nozzle a high
controlled pressure will exist; if some distance away, then a low pressure. The characteristic
curve relating controlled pressure and nozzle-flapper distance is shown in Figure 4.19(b). The
steep, almost linear section of this characteristic is used in the actual operation of the device. The
maximum flapper movement is about 20 microns or micrometres in order to provide a fairly
linear characteristic. The nozzle-flapper arrangement is therefore a proportional transducer, valve
or amplifier. Since the flapper movement is very small it is not directly connected to a measuring
unit unless a feedback device is used.

[Figure: (a) supply air passes through an orifice to a nozzle facing the flapper, with
connections to the measuring unit and to the control valve, controller, etc. (closed system);
(b) controlled air pressure plotted against nozzle-flapper separation, showing the supply
pressure and the steep, nearly linear operating range.]

Figure 4.19 Nozzle-flapper mechanism: (a) arrangement; (b) characteristic

Bellows

The bellows is used in some pneumatic devices to provide feedback and also as a transducer to
convert an input pressure signal into a displacement. A simple bellows arrangement is shown in
Figure 4.20. The bellows will elongate when the supply pressure increases and some
displacement, x, will occur. The displacement will be proportional to the force acting on the base,

i.e. supply pressure × area. The actual amount of displacement will be determined by the spring-
stiffness of the bellows. Thus

(Supply pressure) × (Area of bellows) = (Spring-stiffness of bellows) × (Displacement)

The spring-stiffness and the bellows area are both constants and therefore the bellows is a
proportional transducer.

[Figure: bellows fixed at one end and supplied with air, producing a displacement x at the
free end.]

Figure 4.20 Bellows mechanism

In some feedback arrangements a restrictor is fitted to the air supply to the bellows. The effect of
this will be to introduce a time delay into the operation of the bellows. This time delay will be
related to the size of the restriction and the capacitance of the bellows.

In practice it is usual for bellows to be made of brass with a low spring-stiffness and to insert a
spring. The displacement may therefore be increased, and also the effects of any pressure
variations.

4. Examples of Control Systems

4.1 Thickness Control System

Propose a control system to maintain the thickness of plate produced by the final stand of rollers
in a steel rolling mill as shown in Figure 4.21.

a) The input will be desired plate thickness and the output will be the actual thickness.
b) The required thickness will be set by a dial control incorporating a position transducer which
produces an electrical signal proportional to the desired thickness. The output thickness will
have to be measured using a device such as β-ray thickness gauge with amplification to
provide a suitable proportional voltage.
c) With two voltage signals, an operational amplifier will be suitable as a comparison element.
d) The desired power for moving the nip roller will require hydraulic actuation.
e) A power piston regulated by an electro-hydraulic servo-valve will be suitable.

[Figure: the desired thickness is set on a rotary potentiometer; the measured thickness from
the β-gauge is amplified and compared with it; the error drives an electro-hydraulic servo
valve and power piston which position the nip roller.]

Figure 4.21 Thickness control system

4.2 Level Control System

Propose a control system to maintain a fixed fluid level in a tank. The flow is to be regulated on
the input side, and the output from the tank is flowing into a process with a variable demand.

a) The input will be the desired fluid level and the output the actual level.
b) Since the output is a variable level, a capacitive transducer will be suitable.

c) Since the system is a process type system, a commercial controller will be suitable and the
desired level will therefore be a set-point position on the controller. If a pneumatic controller
is chosen, the electrical signal from the capacitive level transducer will have to be converted
into a pneumatic signal by means of an electro-pneumatic converter.
d) The choice of a pneumatic controller means that the system will be electro-pneumatic.
e) A suitable control element will be an air-to-open pneumatically actuated control valve.

Figure 4.22 shows a simple arrangement for the level control system.

[Figure: a capacitive level transducer measures the tank level; its electrical signal is converted
by an electro-pneumatic converter and fed to a pneumatic set-point recorder and level
controller, which drives a process control valve on the inlet flow; the outlet flow goes to the
process.]

Figure 4.22 Level control system

SUMMARY OF MODULE 4

Module 4 is summarised as follows:

• General structure of a control system: process, transducer (measurement), recorder,


comparison element, controller, final control element blocks;
• Control components including comparison elements and final control elements
• Examples of control systems and their components: thickness control system and level
control system.

Exercises
1. Figure 4.23 shows a d.c. remote position control system:
[Figure: the input position is set on a potentiometer and compared with the output
potentiometer signal; the error is amplified and drives a d.c. motor which moves the load
through a reduction gearbox, giving the output position.]

Figure 4.23 A remote position control system

Figure 4.24 shows a block diagram for the remote position control system, where

[Figure: block diagram – input θi(t) through the input potentiometer Kp, amplifier G, motor
and load dynamics Km/(Js + Kf), integrator 1/s and reduction gearbox (ratio n) to the output
θ(t); the output potentiometer Kp closes the feedback loop.]

Figure 4.24 Block diagram for the remote position control system

Kp = potentiometer sensitivity (V/rad)


G = amplifier gain (V/V)
Km = motor constant (Nm/V)
J = equivalent inertia (kg m2)
Kf = equivalent viscous friction (Nms/rad)
n = gear ratio

Write the total feedback transfer function for the system.

2. Figure 4.25 shows an arrangement of an industrial heating and cooling system. Analyse the
system into its component parts and identify the function of each.

[Figure: a recorder and controller receives the thermocouple signal and positions a three-way
valve mixing cold and hot water; a fan and drain are also shown.]

Figure 4.25 Air-conditioning system

3. Figure 4.26 shows the arrangement of an electro-hydraulic servo system for manually
operating an aerodynamic control surface.
a) The input and output resistance potentiometers are transducers for converting linear
displacement into a voltage.
b) The differential amplifier is the comparison element generating the error signal.
c) The amplifier is the controller producing an amplified error signal.
d) The electro-hydraulic servo valve is the control element, controlling the flow of high pressure
oil to the actuator which moves the load.

[Figure: the required motion is set on an input potentiometer; a differential amplifier compares
it with the feedback potentiometer signal; the amplified error drives an electro-hydraulic servo
valve and actuator which move the load, producing the output motion.]

Figure 4.26 An electro-hydraulic servo system

4. Figure 4.27 shows a schematic diagram and a block diagram for a servo system. The
objective of this system is to control the position of the mechanical load in accordance with the
reference position.

[Figure: (a) schematic – the reference input r is set on an input potentiometer and compared
with the output potentiometer signal; the error voltage ev is amplified (K1) and applied to the
motor armature (resistance Ra, inductance La), producing torque T and angle θ, which drives
the load c through a gear train; the output potentiometer provides the feedback signal.
(b) block diagram – R(s) → comparator → E(s) → K0 → Ev(s) →
K1K2/[s(Las + Ra)(J0s + b0) + K2K3s] → Θ(s) → n → Y(s), with unity feedback.]

Figure 4.27 Servo system: a) schematic diagram and b) block diagram

a) Reduce the block diagram


b) Write a total feedback transfer function for the servo system.

Lecture Notes

BASIC CONTROL THEORY


Module 5
Control Applications in Marine and Offshore Systems

SEPTEMBER 2005
Prepared by Dr. Hung Nguyen
TABLE OF CONTENTS

Table of Contents..............................................................................................................................i
List of Figures................................................................................................................................ iii
List of Tables ..................................................................................................................................iv
References .......................................................................................................................................v
Objectives .......................................................................................................................................vi

1. Introduction .................................................................................................................................1
2. Pneumatic Control Systems.........................................................................................................1
2.1 Essential Requirements.........................................................................................................1
2.2 Basic Pneumatic Control Systems ........................................................................................2
3. Hydraulic Control Systems..........................................................................................................6
3.1 Hydraulic Servo Valve and Actuator.....................................................................................6
3.2 Applications of Hydraulic Servo Valve.................................................................................7
3.2.1 Speed Control System ...................................................................................................7
3.2.2 Hydraulic Steering Machine..........................................................................................8
4. Electrical and Electronic Control Systems ................................................................................10
4.1 Analogue Control Systems..................................................................................................10
4.2 Digital (Computer-based) Control Systems........................................................................12
4.3 PLCs (Sequence Control Systems) .....................................................................................13
4.3.1 The Processor Unit ......................................................................................................13
4.3.2 The Input/Output Section ............................................................................................14
4.3.3 The Programming Device ...........................................................................................14
5. Ship Autopilot Systems .............................................................................................................15
5.1 Mathematical Foundation for Autopilot Systems ...............................................................15
5.1.1 Autopilots of PID Type ...............................................................................................15
5.1.2 P Control .....................................................................................................................16
5.1.3 PD Control ..................................................................................................................17
5.1.4 PID Control .................................................................................................................18
5.2 Automatic Steering Principles.............................................................................................19
5.2.1 Proportional Control....................................................................................................19
5.2.2 Derivative Control.......................................................................................................21
5.2.3 Integral Control ...........................................................................................................22
5.3 Marine Autopilots in Market...............................................................................................23
5.3.1 Autopilot System PR-6000 (Tokimec) ........................................................................23
5.3.2 Autopilot System PR-2000 (Tokimec) ........................................................................23
5.3.3 Autopilot System PR-1500 (Tokimec) .......................................................................24
6. Dynamic Positioning Systems ...................................................................................................24
6.1 Basic Principles of Dynamic Positioning Systems .............................................................25
6.2 IMO DP Classifications .......................................................................................................27
7. Roll Stabilisation Systems .........................................................................................................28
7.1 Fin Stabilisation Systems ....................................................................................................29

7.2 Rudder Roll Stabilisation System .......................................................................................31
8. Trend of Control Systems ..........................................................................................................32
Summary of Module 5...................................................................................................................32
Exercises........................................................................................................................................33

LIST OF FIGURES

Figure 5.1.........................................................................................................................................3
Figure 5.2.........................................................................................................................................4
Figure 5.3.........................................................................................................................................5
Figure 5.4.........................................................................................................................................5
Figure 5.5.........................................................................................................................................6
Figure 5.6.........................................................................................................................................7
Figure 5.7.........................................................................................................................................7
Figure 5.8.........................................................................................................................................8
Figure 5.9.........................................................................................................................................9
Figure 5.10.......................................................................................................................................9
Figure 5.11 .....................................................................................................................................11
Figure 5.12.....................................................................................................................................11
Figure 5.13.....................................................................................................................................12
Figure 5.14.....................................................................................................................................12
Figure 5.15.....................................................................................................................................13
Figure 5.16.....................................................................................................................................14
Figure 5.17.....................................................................................................................................15
Figure 5.18.....................................................................................................................................15
Figure 5.19.....................................................................................................................................16
Figure 5.20.....................................................................................................................................20
Figure 5.21.....................................................................................................................................21
Figure 5.22.....................................................................................................................................21
Figure 5.23.....................................................................................................................................22
Figure 5.24.....................................................................................................................................22
Figure 5.25.....................................................................................................................................23
Figure 5.26.....................................................................................................................................23
Figure 5.27.....................................................................................................................................24
Figure 5.28.....................................................................................................................................25
Figure 5.29.....................................................................................................................................25
Figure 5.30.....................................................................................................................................26
Figure 5.31.....................................................................................................................................26
Figure 5.32.....................................................................................................................................29
Figure 5.33.....................................................................................................................................30
Figure 5.34.....................................................................................................................................31
Figure 5.35.....................................................................................................................................31

LIST OF TABLES

Table 5.1 ........................................................................................................................................27


Table 5.2 ........................................................................................................................................29

REFERENCES

AMC (unknown year), Lecture Notes on Automation, Australian Maritime College, Launceston

Chesmond, C.J. (1990), Basic Control System Technology, Edward Arnold, UK.

Fossen, T.I. (1994), Guidance and Control of Ocean Vehicles, John Wiley and Sons, UK.

Fossen, T.I. (2002), Marine Control Systems – Guidance, Navigation and Control of Ships, Rigs
and Underwater Vehicles, Marine Cybernetics, Trondheim, Norway.

Haslam, J.A., G.R. Summers and D. Williams (1981), Engineering Instrumentation and Control,
Edward Arnold, UK.

Kuo, Benjamin C. (1995), Automatic Control Systems, Prentice-Hall International Inc., Upper
Saddle River, New Jersey, USA.

Nguyen, H.D. (2000), Self-tuning Pole Assignment and Optimal Control Systems for Ships,
Doctoral Thesis, Tokyo University of Mercantile Marine, Tokyo, Japan.

Ogata, Katsuhiko (1997), Modern Control Engineering, 3rd Edition, Prentice-Hall International
Inc., Upper Saddle River, New Jersey, USA.

Perez, T. (2005), Ship Motion Control – Course Keeping and Roll Stabilisation Using Rudder
and Fins, Springer-Verlag, London.

Richards, R.J. (1993), Solving Problems in Control, Longman Group UK Ltd, Harlow, Essex,
UK.

Seborg, Dale E., Thomas F. Edgar and Duncan A. Mellichamp (2004), Process Dynamics and
Control, 2nd Edition, John Wiley & Sons, Inc., Hoboken, New Jersey, USA.

Taylor, D.A. (1987), Marine Control Practice, Butterworths, UK.

Tetley, L. & C. Calcutt (2001), Electronic Navigation System, Butterworths Heinemann, Woburn,
MA.

AIMS

1.0 Explain structures, operating principles of control systems in industries and maritime
engineering.

LEARNING OBJECTIVES

1.1 Describe operating principles of pneumatic control systems

1.2 Describe operating principles of hydraulic control systems

1.3 Describe operating principles of electrical and electronic systems including analogue control
systems, digital control systems and programmable logic controllers

1.4 Describe operating principles of autopilot systems for marine vehicles.

1.5 Describe operating principles of dynamic positioning systems

1.6 Describe operating principles of roll stabilisation systems

1. Introduction
As mentioned in the earlier modules, computer science, high performance programming
languages, I/O interface techniques and modern control theories allow very complicated
control systems to be designed for different purposes. A modern control system is a
combination of pneumatic, hydraulic and electronic elements. The trend in new control
systems is towards computer-based systems in which the control algorithms are implemented
in software. Control applications in industry and in marine and offshore systems include
the following:
• Pneumatic control systems
• Hydraulic control systems
• Electrical and electronic control systems, including analogue, digital control systems
and PLCs (programmable logic controllers)
• Surface vessels’ autopilot systems
• Manoeuvring, control and ship positioning systems
• Engine and machinery control systems
• Control systems for underwater vehicles and robotics
• Traffic guidance and control systems

In this Module, we deal with pneumatic, hydraulic, and electronic control systems and some
control applications in industries and maritime engineering such as ship autopilots, dynamic
positioning systems, roll stabilisation systems and so on.

2. Pneumatic Control Systems


In previous modules, we dealt with principles of PID pneumatic control systems. This section
gives more information about practical sides of pneumatic control systems.

2.1 Essential Requirements

a) The air must be free of:


• Oil,
• Moisture
• Dust.
b) Sources of contamination are
• Air intake – dust, oil vapour and water vapour
• Compressor – oil, water, carbon wear particles and corrosion products
c) Reduction of contaminants
• System design
• Correct operation
• Regular maintenance

System design: System to be used for control air supply only, preferably of a ring main type
and sized to suit the number of items of equipment in the circuit.

Design features:
(a) Compressor discharge pressure should be high enough to prevent condensation and
minimise power requirements. Instruments and controllers require a supply of about 1.5 bar
whilst some actuators may require 5 bar.

Operating at a high discharge pressure and reducing pressure at the instruments helps to dry
the air and reduce the size of components.

(b) Quantity: System should be sized to match maximum expected demand.


(i) match future expansion
(ii) allow 10% leakage factors
(iii) prevent excessive operation of compressors

(c) Quality (dryness): Often this is over-emphasised, resulting in increased costs. Most
instruments will accept, as a maximum:
(i) 500 ppm water vapour
(ii) 1 ppm solids > 1 micron
(iii) 1 g hydrocarbon per 100 m³

Dryness is achieved by
(a) siting of air intake
(b) sizing of air receiver
(c) auto drains
(d) sloping lines
(e) tapping from top of distribution manifold to instruments
(f) drains at low points
(g) use of either:
(i) Absorbent driers
(ii) Refrigerant driers

Compressor Types: Governed by the dryness and oil content of the air. Early instruments
required totally oil-free air; however, modern instruments will tolerate some oil. Improvements
in filtration systems have allowed the use of oil-lubricated compressors.

Filtration:
(a) Coalescing filters
(b) Bronze filters
(c) Air intake filters

Operation/Maintenance:
• Operate compressors at rated discharge pressures.
• Check moisture drains regularly.
• Filters to be changed at prescribed intervals.
• Ensure new instruments are correctly connected.
• Site compressor suction in as clean and dry an area as possible.
• Ensure automatic driers are functioning.
• Overhaul at prescribed intervals.

2.2 Basic Pneumatic Control Systems

Pneumatic control systems use compressed air to supply energy for the operation of valves,
motors, relays and other pneumatic control equipment. Consequently, the circuits consist of
air lines. Pneumatic control systems are made up of the following:
1. A source of clean, dry compressed air which is stored in a receiving tank at a pressure
capable of supplying all the pneumatic devices in the system with operating energy.

Pressure in the receiving tank is normally maintained between 2.5-10bar depending on
the system.
2. A pressure reducing station which reduces receiving tank pressure to a normal
operating pressure of 1-1.5 bar, again depending on system requirements.
3. Air lines which can either be copper or polyethylene tubing connect the air supply to
the controlling devices (thermostats and other controllers). These air lines are called
“mains”.
4. Controlling instruments such as thermostats, humidistats and pressure controllers are
used to position the control valves.
5. Intermediate devices such as relays and switches.
6. Air lines leading from the controlling devices to the controlled devices. These air lines
are called “branch lines”.
7. Controlled devices such as valves or damper actuators. These can either be called
operators or actuators.
A typical application of a pneumatic process controller is shown in Figure 5.1. The function
of the controller is to open and close the control valve so as to manipulate the inflow rate, in
the presence of fluctuations in outflow rate. It does this in order that the liquid level in the
tank, as measured by the transducer, shall match as closely as possible the desired value, as
determined by the manually adjusted set point.

[Figure: schematic diagram of the tank with the control valve and valve positioner on the manipulated inflow, level transducer on the tank, level controller with set point knob, and the outflow; block diagram showing the reference (set point = desired level), controller (control law, sensitivity), valve & positioner, plant/process and level transducer in a closed loop]

Figure 5.1 Schematic and block diagrams for closed-loop control of liquid level in a vessel,
using a pneumatic process controller.

The functions of the pneumatic controller are:

• To enable the set point signal to be generated


• To receive the feedback signal representing the measured level
• To generate an error signal by comparing the above two signals
• To amplify the error signal and to incorporate dynamic terms, in generating the
controller output signal.

The control law can incorporate one or more of the terms known as proportional action,
integral action, and derivative action which have been described in Module 3. Figure 5.2
shows a symbolic representation of a controller containing only proportional action (P
Controller). In practice, the PV and set point bellows may be coupled (differentially) to the
flapper through fairly complex linkage arrangements. The output pressure of the flapper-nozzle
amplifier responds, nonlinearly, to minute changes in flapper displacement. The air relay
behaves as a unity follower, so that its output pressure tracks the amplifier output pressure,
but with significant increase in volumetric flow capacity. The feedback bellows completes a
high gain negative feedback loop, and equilibrium is established by a force balance at the
flapper. Thus, the controller output pressure is proportional to the difference between the set
point and process variable pressures. The constant of proportionality may be adjusted by
manually changing the moment arm ratios of linkages (not shown) which couple the feedback
bellows to the flapper.

[Figure: PV bellows and set point bellows act on the flapper (flapper input displacement proportional to set point pressure minus PV pressure); a flapper-nozzle amplifier with supply air feeds an air relay, which gives the controller output pressure; a feedback bellows provides a flapper feedback displacement proportional to the output pressure]

Figure 5.2 Symbolic representation of a pneumatic process controller incorporating only
proportional action

Integral action may be incorporated by adding a series connected combination of variable
restriction and (integral action) bellows in the feedback path, as shown in Figure 5.3. The
restriction is analogous to a variable resistor and the bellows is analogous to a capacitor, so
that adjustment of the restriction will cause the integral action time constant to be ‘tuned’. If a
second, variable (derivative) restriction is added at point X in Figure 5.3, in series with the
proportional action bellows, adjustment of this restriction will cause the derivative action time
constant to be tuned.
[Figure: as Figure 5.2, with the PV and set point bellows acting on the flapper, the flapper-nozzle amplifier, proportional action and integral action bellows in the feedback path with a variable restriction between them, and the air relay producing the controller output pressure]

Figure 5.3 Symbolic representation of a pneumatic process controller incorporating
proportional and integral action

Figure 5.4 shows the faceplate of a typical pneumatic indicating process controller, and the
features shown are common to all general purpose analog process controllers.

[Figure: faceplate with a moving deviation indicator (percentage deviation = process variable - set point), fixed set point marker, set point (scale) adjustment knob, auto-manual mode changeover switch, controller output indicator (0-100 %) and a knob to control the controller output directly in the manual mode]

Figure 5.4 Faceplate of a typical pneumatic process controller showing instrument displays
and manual controls

3. Hydraulic Control Systems
3.1 Hydraulic Servo Valve and Actuator

Purely hydraulic controllers are really a controller-actuator combination. Their input signal is
a physical displacement that alters a servo valve.

[Figure: servo valve with fluid supply and drains, actuator and load; input movement X and output movement Y are connected by a link through point P, with lever lengths a and b]

Figure 5.5 Hydraulic servo valve and actuator

When the spool of the servo valve is exactly central, the supply of hydraulic oil is prevented
from reaching the actuator piston. The piston is held in place by the oil trapped between it and
the servo valve spool. If the link then receives an input force, it moves and oil under pressure
flows through the servo valve. This oil exerts a force on the actuator piston, making it move.
Movement of the actuator rod causes the link to move at Y. The link pivots about X and
moves at point P.

The movement at P represents negative feedback. For example, an initial upward movement
at X admits oil into the upper half of the actuator, forcing the piston down. This counters the
initial movement at X. For any given input (X), a new position of equilibrium (Y) is rapidly
reached.

The lengths a and b determine the amount of negative feedback and the relationship of Y to
X.

Ideally, this controller has a first-order characteristic equation. Thus, theoretically, it cannot
oscillate. In practice, however, the compressibility of the oil and the mass of the piston and oil
introduce a second-order (inertial) term.

This gives the possibility of an oscillatory response to step changes in input. The inherent
damping (friction and leakage) ensures that any oscillation dies away. But additional damping
(usually of the dashpot or vane type) may be necessary to prevent excessive overshoot and
long settling time.

3.2 Applications of Hydraulic Servo Valve

3.2.1 Speed Control System

Let’s consider a speed control system as shown in Figure 5.6. If the engine speed increases, the
sleeve of the fly-ball governor moves upward. This movement acts as the input to the
hydraulic controller. A positive error signal (upward motion of the sleeve) causes the power
piston to move downward, reduces the fuel-valve opening, and decreases the engine speed. A
block diagram for the system is shown in Figure 5.7.

[Figure: fly-ball governor (engine speed ω) with sleeve displacement e, floating lever with arms a1 and a2, valve displacement z, oil under pressure driving the power piston displacement y to the engine fuel valve, and a dashpot (b, k) providing feedback]

Figure 5.6 Speed control system

[Figure: block diagram with forward path a2/(a1+a2) followed by K/s from the input E(s) to the output Y(s), and feedback path bs/(bs+k) followed by a1/(a1+a2) producing Z(s)]

Figure 5.7 Block diagram for the speed control system in Figure 5.6

If the following condition applies

$$ \left| \frac{a_1}{a_1 + a_2} \cdot \frac{bs}{bs + k} \cdot \frac{K}{s} \right| \gg 1 \qquad (5.1) $$

the transfer function Y(s)/E(s) becomes

$$ \frac{Y(s)}{E(s)} = \frac{a_2}{a_1 + a_2} \cdot \frac{a_1 + a_2}{a_1} \cdot \frac{bs + k}{bs} = \frac{a_2}{a_1}\left(1 + \frac{k}{bs}\right) \qquad (5.2) $$

The speed controller is therefore of proportional-plus-integral (PI) type, with proportional gain
a2/a1 and integral time b/k.
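A quick numerical check of the approximation (5.1)-(5.2) can be made by comparing the exact
closed-loop response of the block diagram in Figure 5.7 with the PI form (5.2). The Python
sketch below uses assumed, purely illustrative values of a1, a2, b, k and K (they are not taken
from the notes); when the loop gain in (5.1) is large the two frequency responses agree closely.

    import numpy as np

    # Assumed illustrative parameters (not taken from the notes)
    a1, a2 = 0.2, 0.8      # lever-arm lengths
    b, k = 1.0, 0.5        # dashpot and spring constants of the feedback element
    K = 100.0              # hydraulic integrator gain

    def exact(s):
        """Exact closed-loop response of the block diagram in Figure 5.7."""
        prefilter = a2 / (a1 + a2)
        plant = K / s
        H = (a1 / (a1 + a2)) * (b * s / (b * s + k))   # feedback path
        return prefilter * plant / (1.0 + plant * H)

    def pi_approx(s):
        """Approximation (5.2): (a2/a1)*(1 + k/(b*s))."""
        return (a2 / a1) * (1.0 + k / (b * s))

    for w in (0.01, 0.1, 1.0):                         # frequencies in rad/s
        s = 1j * w
        print(f"w = {w}: exact {abs(exact(s)):.2f}, PI approx {abs(pi_approx(s)):.2f}")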

3.2.2 Hydraulic Steering Machine

The hydraulic servo valve is also applied in rudder handling systems. Figure 5.8 shows a hydraulic
steering machine. The ship actuator or the steering machine is usually controlled by an on-off
rudder control system. The on-off signals from the rudder controller are used to open and
close the port and starboard valves of the telemotor system.

[Figure: relay-operated port and starboard valves supplied with oil pressure, telemotor, steering cylinder, floating lever and rudder; lever positions (a), (b) and (c) are indicated]

Figure 5.8 Simplified diagram of a two-stage hydraulic steering machine

Assume that both the telemotor and floating lever are initially at rest in position (a). The
telemotor can be moved to position (b) by opening the port valve. Suppose that the rudder is
still in its original position corresponding to position (b); this will cause the steering cylinder
valve to open. Consequently, the floating lever will move to position (c) when the desired
rudder angle has been reached. The maximum opening of the steering cylinder valve, together
with the pump capacity, determines the maximum rudder speed. Figure 5.9 shows a block
diagram of the steering machine with its dynamics.

Van Amerongen (1982) suggested a simplified steering machine model for the rudder, as shown in Figure 5.10.
This representation is based on the telemotor being much faster than the main servo and that
the time constant Td is of minor importance compared with the influence of the rudder speed.

[Figure: block diagram in which the rudder control algorithm produces δc; a rudder servo K/(s(1+Tf s)) with limit δmax represents the telemotor system and a main servo K/(s(1+Td s)) produces the rudder angle δ; angle transducers close both loops]

Figure 5.9 Simplified diagram of the hydraulic steering machine

[Figure: the commanded rudder angle δc from the autopilot passes through a rudder (angle) limiter and a rudder rate limiter, followed by an integrator 1/s, to give the rudder angle δ]

Figure 5.10 Simplified diagram of the hydraulic steering machine

Generally, the rudder angle and rudder rate limiters in Figure 5.10 will typically be in the
ranges:

$$ \delta_{\max} = 35 \ \text{(deg)}, \qquad 2\tfrac{1}{3} \ \text{(deg/s)} \le \dot{\delta}_{\max} < 7 \ \text{(deg/s)} \qquad (5.3) $$

for most commercial ships. The requirement for the minimum average rudder rate is specified
by classification societies such as the American Bureau of Shipping (ABS), Det Norske Veritas
(DNV), Lloyd's Register, etc. It is required that the rudder can be moved from 35 degrees port to 35
degrees starboard within 30 seconds. Fossen reported that according to Eda and Crane (1965)
the minimum design rudder rate in dimensional terms should satisfy:

$$ \dot{\delta}_{\min} = 132.9\,(U/L) \ \text{(deg/s)} \qquad (5.4) $$

where U is the ship speed in m/s and L is the ship length in m. Recently, much faster steering
machines have been designed, with rudder speeds up to 15-20 (deg/s). A rudder speed of 5-20
(deg/s) is usually required for a rudder-roll stabilisation (RRS) system to work properly.

Another model of the rudder could be

$$ \dot{\delta} = \begin{cases} \dot{\delta}_{\max}\left(1 - e^{-(\delta_c - \delta)/\Delta}\right) & \text{if } \delta_c - \delta \ge 0 \\ -\dot{\delta}_{\max}\left(1 - e^{(\delta_c - \delta)/\Delta}\right) & \text{if } \delta_c - \delta < 0 \end{cases} \qquad (5.5) $$
The parameter ∆ will depend on the moment of inertia of the rudder. Typical values will be in
the range 3 ≤ ∆ ≤ 10 .

In Japan, the MMG (Mathematical Model Group) suggested the following steering machine:

$$ \dot{\delta} = \frac{\delta_c - \delta}{|\delta_c - \delta|\,T_{rud} + a} \qquad (5.6) $$

where Trud is the time constant of the rudder (seconds) and a is a small constant used to avoid
division by zero.
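The rate- and angle-limited steering machine of Figure 5.10 is straightforward to simulate. The
sketch below is a minimal, assumed implementation (all numerical values are illustrative, not
taken from the notes); it also evaluates the minimum design rudder rate (5.4) for an example
ship.

    import numpy as np

    # Assumed illustrative limits
    delta_max = 35.0        # rudder angle limit (deg)
    ddelta_max = 7.0        # rudder rate limit (deg/s)
    dt = 0.1                # integration step (s)

    def rudder_step(delta, delta_c):
        """One step of the angle- and rate-limited steering machine (Figure 5.10)."""
        delta_c = np.clip(delta_c, -delta_max, delta_max)                  # angle limiter
        rate = np.clip((delta_c - delta) / dt, -ddelta_max, ddelta_max)    # rate limiter
        return delta + rate * dt

    # Command a 30-degree rudder from amidships
    delta = 0.0
    for step in range(30):          # 3 seconds of simulated time
        delta = rudder_step(delta, 30.0)
    print(f"rudder after 3 s: {delta:.1f} deg")   # about 3 s x 7 deg/s = 21 deg

    # Minimum design rudder rate (5.4) for example values U = 8 m/s, L = 100 m
    U, L = 8.0, 100.0
    print(f"minimum rudder rate from (5.4): {132.9 * U / L:.1f} deg/s")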

4. Electrical and Electronic Control Systems


In earlier modules, we have dealt with very basic principles of PID electrical and electronic
control systems. Generally electrical and electronic control systems can be categorised into
three types: 1) analogue control systems; 2) digital (or discrete) control systems; and 3)
programmable logic controllers (PLCs). This section will outline three types of electrical and
electronic control systems.

4.1 Analogue Control Systems

The control system is interconnected using current or voltage signals. There is no standard
range. Typical common ranges are:

Current: 4-20mA, 10-50mA


Voltage : 0-10V, 1-5V, -5V-+5V, -10V-+10V

The tendency is to use 4-20 mA in the loop since the elevated zero (range) means fault
finding is easier and the current loop is less prone to signal noise.

Electronic controllers are more sensitive and have wider adjustment ranges on P, I and D actions,
but often require signal conversion to pneumatic at the final control element.

Figure 5.11 illustrates a three-term controller with separate sections. The controller consists of
comparator circuit, proportional controller (P-action), integrator (I-action), differentiator
(D-action) and power amplifier.

Figure 5.12 and Figure 5.13 illustrate an application of a PID controller to a flow control
system at the Australian Maritime College. In this system, the PID controller plays the role of
the comparison element and controller, in which the controlling signal is computed. To operate
the whole control system, it is necessary for the user to set the control gains (proportional
gain, integral gain (or integral time) and derivative gain (or derivative time)).

[Figure: three-term controller built from a comparator circuit (set point and sensor feedback signal giving the actuating error), proportional gain, integrator and differentiator stages, and a power amplifier]

Figure 5.11 Three-term controller with separate sections

[Figure: flow control bench with water tank, pump, orifice plate and D/P cell, controller (kPa/mA signals), C/P converter, actuator and control valve, globe valve, rotameter and isolating valves]

Figure 5.12 The flow control bench system arrangement (Control Engineering Lab, AMC)
[Figure: Bailey controller connected to the D/P cell through a square root circuit and a 250 Ω resistor in a 4-20 mA loop, powered from DC 24 V]

Figure 5.13 Connection diagram of the flow control bench system
(Control Engineering Lab, AMC)

4.2 Digital (Computer-based) Control Systems

Nowadays computers have been used in many control systems. The microprocessor-based
control systems allow the user to configure set point variables and a range of subsidiary
functions by means of a keyboard, and to display control variables digitally on various indicators.

Input signals can be digital or analogue and can be linearised as required. Some controllers
will accept multiple inputs at the same time, allowing improved control, as in gas flow control.
Output signals are by user specification: either solid-state or mechanical relay (digital), or
4-20 mA (analogue).

Computer interfacing by using optional boards (I/O (A/D-D/A) interface boards) allows
computer supervisory control. Figure 5.14 shows a computer-based (digital) control system
with I/O interfaces.

[Figure: computer (control software) connected to the process through I/O interfaces, with the DAC driving the final control element and the ADC reading the measurement]

Figure 5.14 Digital control system using computer (software) and I/O interfaces
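In a computer-based controller of this kind the control law is literally a few lines of software
executed once per sampling interval, between reading the ADC and writing the DAC. The sketch
below is a generic, assumed discrete PID routine in Python (illustrative only; it does not
represent any particular product's algorithm):

    class DiscretePID:
        """Positional PID algorithm executed once per sampling interval Ts."""

        def __init__(self, Kp, Ti, Td, Ts, u_min=4.0, u_max=20.0):
            self.Kp, self.Ti, self.Td, self.Ts = Kp, Ti, Td, Ts
            self.u_min, self.u_max = u_min, u_max    # e.g. a 4-20 mA output range
            self.integral = 0.0
            self.prev_error = 0.0

        def update(self, setpoint, measurement):
            error = setpoint - measurement                        # comparison element
            self.integral += error * self.Ts                      # I-action
            derivative = (error - self.prev_error) / self.Ts      # D-action
            self.prev_error = error
            u = self.Kp * (error + self.integral / self.Ti + self.Td * derivative)
            return min(max(u, self.u_min), self.u_max)            # clamp to output range

    # Example: loop executed every 0.5 s, output sent to the DAC / final control element
    pid = DiscretePID(Kp=2.0, Ti=10.0, Td=1.0, Ts=0.5)
    print(pid.update(setpoint=50.0, measurement=47.5))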
Figure 5.15 shows the block diagram of a computer-based (digital) autopilot system using a
recursive estimation algorithm in combination with the optimal control law (Nguyen, H.D.,
2000).

[Figure: the computing unit contains the RPE estimator (parameter vector θ), a state-space model (F, G), an optimal Riccati equation calculator and the state feedback control gain K; the autopilot applies u(t) = -Kx(t) through the steering machine (δ) to the ship (Shioji Maru), whose noisy outputs are fed back; set values and design settings enter the computing unit]

Figure 5.15 Block diagram of a computer-based autopilot system

4.3 PLCs (Sequence Control Systems)

PLC stands for Programmable Logic Controller. The first PLC was developed in 1968-1969.
The PLC has become an unqualified success. PLCs are now produced by over 50
manufacturers. Varying in size and sophistication, these electronic marvels are rapidly
replacing the hard-wired circuits that have controlled the process machines and driven
equipment of industry in the past. This section outlines operating principles of a PLC.

A programmable logic controller is a solid state device designed to perform the logic
functions previously accomplished by electro-mechanical relays, drum switches, mechanical
timers/counters, etc. for the control and operation of manufacturing process equipment and
machinery.
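The relay-replacement idea is easy to see in software form. The sketch below is an assumed,
generic illustration in Python (not actual ladder logic or a particular PLC's language) of a
classic start/stop "seal-in" rung evaluated once per scan cycle, exactly the kind of logic a
hard-wired relay circuit used to provide:

    def scan_cycle(start_pb, stop_pb, motor_running):
        """One PLC scan of a start/stop (seal-in) rung expressed as Boolean logic."""
        # Rung: (START OR MOTOR) AND NOT STOP  ->  MOTOR
        return (start_pb or motor_running) and not stop_pb

    # The processor repeats: read inputs, solve the logic, write outputs
    motor = False
    motor = scan_cycle(start_pb=True, stop_pb=False, motor_running=motor)   # start pressed
    motor = scan_cycle(start_pb=False, stop_pb=False, motor_running=motor)  # seal-in holds
    print(motor)   # True until the stop pushbutton is pressed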

A typical PLC consists of three components. These components are the processor unit, the
input/output section (I/O interface) and the programming device.

4.3.1 The Processor Unit (CPU)

The processor unit houses the processor which is the ‘brain’ of the system. This brain is a
microprocessor-based system which replaces control relays, counters, timers, sequencers, and
so forth and is designed so the user can enter the desired circuit in relay ladder logic. The
processor then makes all the decisions necessary to carry out the user program for control of a
machine or process. It can also perform arithmetic functions, data manipulation and
communication between the PC, remotely located PC’s, and/or computer systems. A DC
power supply is required to produce the low level DC voltage used by the processor. This
power supply can be housed in the processor unit or may be a separately mounted unit
depending on the model and/or the manufacturer. The processor can be referred to as a CPU
(Central Processing Unit).

4.3.2 The Input/output Section

The input/output section consists of input modules and output modules for communication
with peripherals (real world devices). The real world input devices may be pushbuttons, limit
switches, analogue sensors, selector switches and so on, while the real world output devices
could be hard wired to motor starters, solenoid valves, indicator lights, position valves, and
the like.

4.3.3 The Programming Device

The programming device may be called an industrial terminal, program development


terminal, programming panel or simply programmer. Regardless of their names, they all
perform the same function and are used to enter the desired program in relay ladder logic that
will determine the sequence of operation and ultimate control of the process equipment or
driven machinery. The programming device may be a hand held unit with an LED (light
emitting diode) display, an LCD (liquid crystal display), a desktop type with a CRT display or
other compatible computer terminals.

[Figure: (a) conceptual structure showing the CPU with programming device, input lines from switches, sensors, etc. and output lines to motors, relays, lamps, etc.; (b) photograph of a Mitsubishi PLC input/output section]

Figure 5.16 (a) Conceptual structure of a PLC-based control system;
(b) Mitsubishi PLC (input/output section)

There are several types of PLC. Each type has its own features. Figure 5.17 shows an example
of a PLC-based control system (AMC Control Lab). The PLC is a Mitsubishi MELSEC FX0N-
24MR-ES, programmed from a Pentium 4 PC running Windows XP.

[Figure: photograph of the installation showing the input/output section (see Figure 5.16(b)), the CPU and programming device, and the peripherals]

Figure 5.17 Mitsubishi PLC-based Control System (AMC Control Lab)

5. Ship Autopilot Systems


5.1 Mathematical Foundation for Autopilot Systems

Autopilots for course-keeping are normally based on feedback from a gyrocompass


measuring the heading. Heading rate measurements can be obtained by a rate sensor, gyro,
numerical differentiation of the heading measurement or a state estimator. This is common
practice in most control laws utilizing proportional, derivative and integral action. The control
objective for a course-keeping autopilot can be expressed as ψ d = constant.

This control objective is illustrated in Figure 5.18. In contrast, for course-changing
manoeuvres the dynamics of the desired heading should be considered in addition.

[Figure: feedback loop in which the desired heading ψd enters the PID-type autopilot, whose command δc drives the steering machine; the rudder angle δ acts on the ship to produce the heading ψ, with waves, wind and current acting as disturbances]

Figure 5.18 Autopilot for automatic heading

5.1.1 Autopilots of PID-Type

Most autopilots for ship steering are based on simple PID-control laws with fixed parameters.
To avoid the performance of the autopilot deteriorating in bad weather and when the
speed of the ship changes, HF rudder motions must be suppressed by proper wave filtering,
while a gain scheduling technique can be applied to remove the influence of the ship speed on
the hydrodynamic parameters. For simplicity, let the LF motion of a ship be described by
Nomoto’s 1st-order model:

$$ T\ddot{\psi} + \dot{\psi} = K\delta \qquad (5.8) $$

Based on this simple model the control laws of P, PD-, and PID-type using feedback from the
LF state estimates will be discussed. The performance and robustness of the autopilot can be
evaluated by using the simulation set-up shown in Figure 5.19. The proposed simulator
models 1st-order wave disturbances as measurement noise while wave drift forces, wind and
sea currents are treated as a constant disturbance.

[Figure: white noise w is shaped by a 2nd-order filter (gain Kw, parameters ωn and ζ) to produce the 1st-order wave disturbance ψH; wave drift, wind and current act as a constant disturbance; the autopilot output δc drives the steering machine and the ship (Nomoto's 1st-order model, gain K) to give the LF heading ψL, and the measured heading ψ = ψL + ψH is fed back]

Figure 5.19 Simplified simulation set-up for course-keeping autopilot

5.1.2 P-control

Let us first consider a proportional control law:

δ = K P (ψ d − ψ ) (5.9)

where KP > 0 is a regulator design parameter. Substitution of (5.9) into (5.8) yields the
closed-loop dynamics:

$$ T\ddot{\psi} + \dot{\psi} + KK_P\,\psi = KK_P\,\psi_d \qquad (5.10) $$

From this expression the eigenvalues are found to be:

$$ \lambda_{1,2} = \frac{-1 \pm \sqrt{1 - 4TKK_P}}{2T} \qquad (5.11) $$
Since 1 − 4TKKP < 0 for most ships, it is seen that the real part of the eigenvalues is given
as:

$$ \mathrm{Re}\{\lambda_{1,2}\} = -\frac{1}{2T} \qquad (5.12) $$
Consequently, the suggested P-controller will not stabilize an open-loop unstable ship (T < 0).
For stable ships (T > 0) the imaginary part of the closed-loop eigenvalues and thus the
oscillatory motion can be modified by adjusting the regulator gain KP. For instance, a
critically damped system is obtained by choosing:

$$ K_P = \frac{1}{4TK} \qquad (5.13) $$
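These results are easy to verify numerically. The Python sketch below uses assumed Nomoto
parameters (purely illustrative), forms the closed-loop characteristic polynomial from (5.10)
and confirms that the gain (5.13) places both eigenvalues at -1/(2T):

    import numpy as np

    K, T = 0.1, 50.0              # assumed Nomoto gain (1/s) and time constant (s)
    KP = 1.0 / (4 * T * K)        # critically damped gain, eq. (5.13)

    # Closed-loop characteristic polynomial from (5.10): T*s^2 + s + K*KP = 0
    eigs = np.roots([T, 1.0, K * KP])
    print("KP =", KP)                       # 0.05
    print("eigenvalues:", eigs)             # double eigenvalue at -0.01
    print("-1/(2T)    =", -1.0 / (2 * T))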

5.1.3 PD-control

Since the use of a P-controller is restricted to open-loop stable ships with a certain degree of
stability, another approach has to be used for marginally stable and unstable ships. A
stabilizing control law is obtained by simply including derivative action in the control law.
Consider a control law of PD-type in the form:

$$ \delta = K_P(\psi_d - \psi) - K_D\,\dot{\psi} \qquad (5.14) $$

Here KP > 0 and KD > 0 are the controller design parameters. The closed-loop dynamics
resulting from the ship dynamics and the PD-controller are:

$$ T\ddot{\psi} + (1 + KK_D)\dot{\psi} + KK_P\,\psi = KK_P\,\psi_d \qquad (5.15) $$

This expression simply corresponds to a 2nd-order system in the form:

$$ \ddot{\psi} + 2\zeta\omega_n\,\dot{\psi} + \omega_n^2\,\psi = \omega_n^2\,\psi_d \qquad (5.16) $$

with natural frequency ωn (rad/s) and relative damping ratio ζ . Combining (5.15) and (5.16)
yields:

$$ \omega_n = \sqrt{\frac{KK_P}{T}} \qquad \text{and} \qquad \zeta = \frac{1 + KK_D}{2\sqrt{TKK_P}} \qquad (5.17) $$

The relative damping ratio is typically chosen in the interval 0.8 ≤ ζ ≤ 1.0, whereas the
choice of ωn will be limited by the resulting bandwidth of the rudder ωδ (rad/s) and the ship
dynamics 1/T (rad/s) according to:

$$ \underbrace{\frac{1}{T}}_{\text{ship dynamics}} \; < \; \underbrace{\omega_n \sqrt{1 - 2\zeta^2 + \sqrt{4\zeta^4 - 4\zeta^2 + 2}}}_{\text{closed-loop bandwidth}} \; < \; \underbrace{\omega_\delta}_{\text{rudder servo}} \qquad (5.18) $$

For a critically damped ship ( ζ = 1 ) the closed-loop bandwidth ωb is related to the natural
frequency ωn of the closed-loop system (5.17) by a factor of 0.64, that is ωb = 0.64ωn .
Alternatively, we can solve (5.17) for KP and KD, which yields:

$$ K_P = \frac{T\omega_n^2}{K}, \qquad K_D = \frac{2T\zeta\omega_n - 1}{K} \qquad (5.19) $$

Here ωn and ζ can be treated as design parameters.
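Equations (5.17)-(5.19) translate directly into a small design calculation. The sketch below
(same assumed Nomoto parameters as above, all values illustrative) selects ζ and ωn, computes
KP and KD from (5.19), and checks that the resulting closed-loop bandwidth satisfies (5.18) for
an assumed rudder servo bandwidth:

    import numpy as np

    K, T = 0.1, 50.0            # assumed Nomoto parameters
    zeta, wn = 0.9, 0.06        # chosen relative damping and natural frequency (rad/s)
    w_rudder = 0.5              # assumed rudder servo bandwidth (rad/s)

    KP = T * wn**2 / K                      # eq. (5.19)
    KD = (2 * T * zeta * wn - 1.0) / K      # eq. (5.19)
    print(f"KP = {KP:.2f}, KD = {KD:.1f}")

    # Closed-loop bandwidth term in (5.18)
    wb = wn * np.sqrt(1 - 2 * zeta**2 + np.sqrt(4 * zeta**4 - 4 * zeta**2 + 2))
    print(f"1/T = {1/T:.3f} < wb = {wb:.3f} < rudder bandwidth = {w_rudder}")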

5.1.4 PID-Control

During autopilot control of a ship it is observed that a rudder off-set is required to maintain
the ship on constant course. The reason for this is a yaw moment caused by the rotating
propeller and the slowly-varying environmental disturbances. These are wave drift forces
(2nd-order wave disturbances) and LF components of wind and sea currents. However,
steady-state errors due to wind, current and wave drift can all be compensated for by adding
integral action to the control law. Consider the PID-control law:

$$ \delta = K_P(\psi_d - \psi) - K_D\,\dot{\psi} + K_I \int_0^t (\psi_d - \psi(\tau))\,d\tau \qquad (5.20) $$
where KP > 0, KD > 0 and KI > 0 are the regulator design parameters. Applying this control
law to Nomoto’s 1st-order model:

$$ T\ddot{\psi} + \dot{\psi} = K(\delta - \delta_0) \qquad (5.21) $$

where δ 0 is the steady-state rudder off-set, yields the following closed-loop characteristic
equation

$$ T\sigma^3 + (1 + KK_D)\sigma^2 + KK_P\,\sigma + KK_I = 0 \qquad (5.22) $$

Hence the triple (KP, KD, KI) must be chosen such that all the roots of this 3rd-order
polynomial have negative real parts, that is

Re{σ i } < 0 for (i = 1, 2, 3) (5.23)

This can be done by applying Routh’s stability criterion. Another simple intuitive way to do
this is by noticing that δ can be written as:

$$ \delta = K_P\left(1 + T_D s + \frac{1}{T_I s}\right)(\psi_d - \psi) \qquad (5.24) $$

where the derivative and integral time constants are TD = KD/KP and TI = KP/KI, respectively.
Hence, integral action can be obtained by first designing the PD-controller gains KD and KP
according to the previous discussions. This ensures that sufficient stability is obtained. The

next step is to include integral action by adjusting the integral gain KI. A rule of thumb can be
to choose:

$$ \frac{1}{T_I} \approx \frac{\omega_n}{10} \qquad (5.25) $$

which suggests that KI should be chosen as:

$$ K_I = \frac{\omega_n}{10} K_P = \frac{\omega_n^3 T}{10 K} \qquad (5.26) $$
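Continuing the assumed numerical example above, integral action can be added to the PD design
with the rule of thumb (5.25)-(5.26); the sketch also converts the gains to the time-constant
form of (5.24):

    K, T = 0.1, 50.0            # assumed Nomoto parameters (illustrative)
    zeta, wn = 0.9, 0.06        # design parameters chosen for the PD stage

    KP = T * wn**2 / K                      # from (5.19)
    KD = (2 * T * zeta * wn - 1.0) / K      # from (5.19)
    KI = wn * KP / 10.0                     # rule of thumb (5.26)

    TD = KD / KP                            # derivative time constant, eq. (5.24)
    TI = KP / KI                            # integral time constant, eq. (5.24)
    print(f"KP = {KP:.2f}, KD = {KD:.1f}, KI = {KI:.4f}")
    print(f"TD = {TD:.1f} s, TI = {TI:.1f} s (1/TI = wn/10 = {wn/10:.4f})")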

Now let’s consider some practical aspects of designing an autopilot system.

5.2 Automatic Steering Principles

Whatever type of system is fitted to a ship, the basic principles of operation remain the same.
Before considering the electronic aspects of an automatic steering system it is worthwhile
considering some of the problems faced by an automatic steering device.

In its simplest form an autopilot system compares the course-to-steer data, as set by the
helmsman, with the vessel’s actual course data derived from a gyro or magnetic repeating
compass, and applies rudder correction to compensate for any error detected between the two
input signals. Since the vessel’s steering characteristics will vary under a variety of
conditions, additional facilities must be provided to alter the autopilot parameters in a
similar way to that in which a helmsman would alter his actions under the same prevailing conditions.

For a vessel to hold a course as accurately as possible, the helm must be provided with data
regarding the vessel’s movement relative to the course to steer line. “Feedback” signals
provide this data, consisting of three sets of parameters:

• Position data: information providing positional error from the course line
• Rate data: rate of change of course data
• Accumulative error data: data regarding the cumulative build-up of error.

Three main control functions acting under the influence of one or more of the data inputs
listed above are: proportional control, derivative control and integral control.

5.2.1 Proportional Control

This electronic control signal causes the rudder to move by an amount proportional to the
positional error deviated from the course line. The effect on steering, when only proportional
control is applied, is to cause the vessel to oscillate either side of the required course, as
shown in Figure 5.20. The vessel would eventually reach its destination although the erratic
course steered would give rise to an increase in fuel expended on the voyage. Efficiency
would be downgraded and rudder component wear would be unacceptable.

Figure 5.20 An early electro-mechanical autopilot system using telemotors
(Tetley L. et al. 2001)

At the instant an error is detected, full rudder is applied, bringing the vessel to starboard and
back towards its course (Figure 5.21). As the vessel returns, the error is reduced and autopilot
control is gradually removed. Unfortunately the rudder will be amidships as the vessel
approaches its course, causing the vessel to overshoot and resulting in a southerly error. Corrective data is now
applied causing a port turn to bring the vessel back onto course. This action again causes an
overshoot, producing corrective data to initiate a starboard turn in an attempt to bring the
vessel back to its original course. It is not practical to calculate the actual distance of the
vessel from the course line at any instant. Therefore, the method of achieving proportional
control is by using a signal proportional to the rudder angle as a feedback signal.

Figure 5.21 The effect of applying proportional control only.

The vessel oscillates about the course to steer

5.2.2 Derivative Control

With this form of control, the rudder is shifted by an amount proportional to the “rate-of-
change” of the vessel’s deviation from its course. Derivative control is achieved by
electronically differentiating the actual error signal. Its effect on the vessel’s course is shown
in Figure 5.22.

Figure 5.22 The effect of applying derivative control only

Any initial change of course error is sensed causing a corrective starboard rudder command to
be applied. The rate-of-change decreases with the result that automatic rudder control
decreases and, at point X, the rudder returns to the midships position. The vessel is now
making good a course parallel to the required heading and will continue to do so until the
autopilot is again caused to operate by external forces acting on the vessel.

An ideal combination of both proportional and derivative control produces a more satisfactory
return to course, as shown in Figure 5.23.

Figure 5.23 Applying a combination of proportional and derivative control brings the vessel
back to on track.

The initial change of course causes the rudder to be controlled by a combined signal from
both proportional and derivative signals. As the vessel undergoes a starboard turn (caused by
proportional control only) there is a change of sign of the rate of change data causing some
counter rudder to be applied. When the vessel crosses its original course, the rudder is to port,
at some angle, bringing the vessel back to port. The course followed by the vessel is therefore
a damped oscillation. The extent of counter rudder control applied is made variable to allow
for different vessel characteristics. Correct setting of the counter rudder control should cause
the vessel to make good its original course. Counter rudder data must always be applied in
conjunction with the output of the manual “rudder” potentiometer, which varies the amount of
rudder control applied per degree of heading error.

Figure 5.24 (a) If “counter rudder” and “rudder” controls are set too high, severe oscillations
are produced before the equipment settles. (b) If “counter rudder” and “rudder” controls are
set too low, there will be little overshoot and sluggish return to the course.

Figure 5.24(a) and (b) show the effect on vessel steering when the counter rudder and rudder
controls are set too high and too low, respectively.

5.2.3 Integral Control

Data for integral control is derived by electronically integrating the heading error. The action
of this data offsets the effect of a vessel being moved continuously off course. Data signals
are produced by continuously sensing the heading error over a period of time and applying an
appropriate degree of permanent helm.

In addition to proportional control, derivative control and integral control, autopilots normally
have the yaw, trim, draft, rudder limit, and weather controls.

5.3 Marine Autopilots in Market

The marine autopilot receives signals from directional sensors such as the gyrocompass and
uses them to automatically control the helm for navigation.

The history of the TOKIMEC marine autopilots dates back to 1925 with the development of
the P1 single pilot and the P2 single pilot. Later TOKIMEC integrated the gyrocompass into
the autopilot unit ("GYLOT") and also developed the "Navigation Console" with radar and
other navigational instruments all built in. The performance and reliability of these products
obtained the overwhelming support of the marine market. Today, TOKIMEC offers the
following series of autopilots.

PR-6000 For medium to large vessels


PR-2000 For small to medium vessels
PR-1500 For small craft

5.3.1 Autopilot System PR-6000


This high-grade autopilot model was designed with three
fundamental concepts in mind: expanding and strengthening helm
functions, thorough safety considerations, and improved
reliability and more intelligent maintenance functions.
• IBS (Integrated Bridge System) compatibility
• Improved interface
• Complete range of controls
• Compatible with a variety of steering systems
• Safety considerations designed in
• Standard compatibility with digital output gyros

Figure 5.25 Autopilot system PR-6000
Courtesy of Tokimec Co. Ltd. (Japan)

5.3.2 Autopilot system PR-2000
This best-selling model has been installed in over 15,000 small and
medium vessels including fishing boats, coastal craft, and merchant
ships, and is renowned for its stable performance and ease of use.
• Designed for operational simplicity
• More affordable, with improved course holding ability
• Variety of system configurations (stand-alone model,
GYLOT model, console model)
• Compatible with a variety of steering systems
Figure 5.26 Autopilot system PR-2000
Courtesy of Tokimec Co. Ltd. (Japan)

5.3.3 Autopilot system PR-1500

Features of PR-1500 are as follows:

High-grade Functions; Easy to Operate


Wide range of automatic steering modes
• Work mode augments the conventional automatic steering function: Work mode
is used under operational conditions which differ from normal operation - such as
trawling and slow speeds, etc.
• Automatic navigation mode is standard feature: Standard NMEA interface
simplifies connections with GPS navigators or plotters.


Figure 5.27 Autopilot system PR-1500
Courtesy of Tokimec Co. Ltd. (Japan)

Easy steering and easy-to-view display
• Remote Azimuth Holding is standard feature: Remote Azimuth Holding can be
initiated during remote steering. When the vessel is turned on a planned course by
remote steering with the RAH switch activated and remote controller dial returned to
the neutral position, the course setting can be stored in memory.
• Evasive steering (Override function) standard feature: NFU steering can be
employed in any of the steering modes.
• Total information display: The large size LCD provides a comprehensive display of
data such as ship's heading, set course, rudder angle, and control constants.
• Self-diagnostic function simplifies maintenance: Internal checks of main functions
can be performed at the front panel.

6. Dynamic Positioning Systems


In the 1960s systems for automatic control of the horizontal position, in addition to the
course, were developed. Systems for the simultaneous control of the three horizontal motions
(surge, sway and yaw) are today commonly known as dynamic positioning (DP) systems.
More recently anchored positioning systems or position mooring (PM) systems have been
designed. For a free floating vessel in DP the thrusters are the prime actuators for station-
keeping, while for the PM system the assistance of thrusters is only complementary since
most of the position keeping is provided by a deployed anchor system.

DP systems have traditionally been a low-speed application,
where the basic DP functionality is either to keep a fixed
position and heading or to move slowly from one location to another. In addition,
specialized tracking functions for cables
and pipe-layers, and operations of remotely operated vehicles
(ROVs), have been included. The traditional autopilot and
way-point tracking functionalities have also been included in
modern DP systems. The trend today is that high-speed
operation functionality merges with classical DP functionality,
resulting in a unified system for all speed ranges and types of
operations.

Figure 5.28 DPS manufactured by Kawasaki Heavy Industries Co. Ltd.

The first DP systems were designed using conventional PID controllers in cascade with low
pass and/or notch filters to suppress the wave-induced motion component. Figure 5.28 shows
an illustration of a DP system (the control console unit) developed by Kawasaki Heavy
Industries Co. Ltd., Japan. Figure 5.29 shows a conceptual diagram of a DPS.

[Figure: the DPS (Advanced Propeller Technology) receives inputs from D-GPS/GNSS (position), gyrocompass (course), barometer (wind direction and speed) and log & sounder (speed, depth); its outputs drive the steering machine (rudder controller), CPP controller (pitch), propeller controller (revolution), side thruster controller and slave manoeuvring unit; indicators (monitors) display course, ship speed, position, rudder angle, pitch angle, revolution, wind direction and wind speed]

Figure 5.29 Conceptual diagram of a DPS

6.1 Basic Principles of Dynamic Positioning System

The basic forces and moments acting on a vessel operating in seawater are shown in Figure 5.30.
The ship steering dynamics are represented by an appropriate state space model as follows.

ẋ = Ax + Bu (5.27)
y = Cx + Du (5.28)

where x and u are the vector of state variables and the vector of control variables,
respectively, y is the vector of outputs and A, B, C and D are the matrices consisting of
parameters (coefficients).
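As a concrete illustration (using assumed, illustrative values rather than a real DP vessel
model), the Nomoto model (5.8) used earlier can be written in exactly this state-space form,
with state vector x = [ψ, r]ᵀ (heading and yaw rate) and input u = δ:

    import numpy as np

    K, T = 0.1, 50.0                 # assumed Nomoto gain and time constant

    A = np.array([[0.0, 1.0],
                  [0.0, -1.0 / T]])
    B = np.array([[0.0],
                  [K / T]])
    C = np.array([[1.0, 0.0]])       # measured output: heading psi
    D = np.zeros((1, 1))

    # Quick check of the frequency response C(sI - A)^(-1)B + D at s = j*0.05 rad/s
    s = 1j * 0.05
    G = C @ np.linalg.inv(s * np.eye(2) - A) @ B + D
    print(abs(G[0, 0]))              # compare with |K / (s (1 + T s))|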

In order to obtain the values of state variables and parameters, many measuring techniques,
filtering and identification procedures are applied. An example of a filtering procedure is
the Kalman filter. In order to compute the control signals for the DPS, a control law with a
criterion function is applied. For example, PID control or optimal control has been applied in
the DPS. Figure 5.31 shows a block diagram of a DPS. Interested readers can find more
information about the DPSs in Fossen (1994) and Fossen (2002).

Figure 5.30 Basic forces and moments (Kongsberg Maritime)

[Figure: positioning and heading set-points feed the reference computation and the control system (DPS); environmental loads due to wind, waves and currents and the 1st-order wave disturbance act on the vessel; D/RTK-GPS and compass measurements pass through an observer and wave filter, which feeds low-frequency estimates of position, heading, velocities and biases back to the controller]

Figure 5.31 Dynamic positioning system

Example: an optimal control algorithm (H. Nguyen, 2000)

In order to design an optimal control system (e.g. autopilot system or DPS), in general, it is
assumed that the ship to be controlled is described by a discrete-time state space (matrix)
model as follows.
x(t + 1) = F(θ)x(t) + G(θ)u(t)
y(t) = C(θ)x(t)                                                          (5.29)

where x(t) is the vector of state variables, u(t) is the vector of control variables, and F( θ ),
G( θ ) and C( θ ) are matrices formed by estimated parameters. The linear optimal control law
is to minimize the quadratic cost function (JO)

$$ J_O = \sum_{t=0}^{M-1} \left[ \varepsilon^T(t)\,Q\,\varepsilon(t) + u^T(t)\,R\,u(t) \right] \qquad (5.30) $$

where Q and R are weighting matrices (symmetric and positive definite matrices) weighting
the cost of heading errors against the control effort, ε( t ) is the ship heading error and u(t) is
the rudder angle. It should be noted that the ship model could also be identified as an MIMO ARX
(Auto-Regressive eXogenous) model, which can then be transformed into a state space model of the
form (5.29).

The solution to this problem can be found by applying Bellman's Principle of
Optimality: “An optimal policy, or optimal control strategy, has the property that, whatever
the initial state and initial decision, the remaining decisions must form an optimal control
strategy with respect to the state resulting from the first decision.”

Minimizing the cost function leads to solving a discrete-time matrix Riccati equation; the
control signals can then be calculated by the following expression.

u( t ) = −Kx( t ) (5.31)

where K is the state feedback control gain resulting from the solution of the Riccati equation.
Interested readers can find more information about optimal control in H. Nguyen (2000),
Fossen (1994) and Fossen (2002), and references therein.
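As a minimal illustration of this calculation (a sketch under assumed values, not the algorithm
actually implemented for the Shioji Maru), the backward recursion of the discrete-time Riccati
equation below computes a steady-state state feedback gain K for a small model of the form
(5.29); the control is then u(t) = -Kx(t) as in (5.31):

    import numpy as np

    # Assumed discrete-time model x(t+1) = F x(t) + G u(t) (illustrative numbers)
    F = np.array([[1.0, 0.1],
                  [0.0, 0.95]])
    G = np.array([[0.0],
                  [0.1]])
    Q = np.diag([1.0, 0.1])          # weighting on the state (e.g. heading error, yaw rate)
    R = np.array([[0.5]])            # weighting on the control effort (rudder)

    # Backward iteration of the discrete-time matrix Riccati equation
    P = Q.copy()
    for _ in range(500):
        K_gain = np.linalg.solve(G.T @ P @ G + R, G.T @ P @ F)   # state feedback gain
        P = F.T @ P @ F - F.T @ P @ G @ K_gain + Q

    print("state feedback gain K =", K_gain)   # control law: u(t) = -K x(t)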

6.2 IMO DP-Classifications

Dynamic positioning systems are classified by IMO as shown in Table 5.1.

Table 5.1 Classifications of DPS according to IMO

Description | IMO DP Class | ABS | LRS | DNV
Manual position control and automatic heading control under specified maximum environmental conditions | - | DPS-0 | DP (CM) | DNV-T
Automatic and manual position and heading control under specified maximum environmental conditions | Class 1 | DPS-1 | DP (AM) | DNV-AUT, DNV-AUTS
Automatic and manual position and heading control under specified maximum environmental conditions, during and following any single fault excluding loss of a compartment (two independent computer systems) | Class 2 | DPS-2 | DP (AA) | DNV-AUTR
Automatic and manual position and heading control under specified maximum environmental conditions, during and following any single fault including loss of a compartment due to fire or flood (at least two independent computer systems with a separate back-up system separated by A60 class division) | Class 3 | DPS-3 | DP (AAA) | DNV-AUTRO

7. Roll Stabilization Systems


According to Fossen (1994), the main reasons for using roll stabilizing systems on merchant
ships are to prevent cargo damage and to increase the effectiveness of the crew. From a safety
point of view it is well known that large roll motions cause people to make more mistakes
during operation due to sea sickness and tiredness. For naval ships certain operations such as
landing a helicopter or the effectiveness of the crew during combat are of major importance.
Therefore, roll reduction is an important area of research.

Several solutions have been proposed to accomplish roll reduction. The most widely used
systems are (Van der Klugt 1987):

Bilge keels: Bilge keels are fins in planes approximately perpendicular to the hull or near the
turn of the bilge. The longitudinal extent varies from about 25 to 50 percent of the
length of the ship. Bilge keels are widely used, are inexpensive but increase the hull
resistance. In addition to this they are effective mainly around the natural roll
frequency of the ship. This effect significantly decreases with the speed of the ship.
Bilge keels were first demonstrated in about 1870.

Anti-Rolling Tanks: The most common anti-rolling tanks in use are free-surface tanks, U-
tube tanks and diversified tanks. These systems provide damping of the roll motion
even at small speeds. The disadvantages of course are the reduction in metacentre
height due to free water surface effects and that a large amount of space is required.
The earliest versions were installed about 1874.

Fin Stabilizers: Fin stabilizers are a highly attractive device for roll damping. They provide
considerable damping if the speed of the ship is not too low. The disadvantages with
additional fins are increased hull resistance (except for some systems that are
retractable) and high costs associated with the installation. It is also required
that at least two new hydraulic systems be installed. It should be noted that fins are
not effective at low speed and that they cause drag and underwater noise. They were
first granted a patent by John I. Thornycroft in 1889.

Rudder-Roll Stabilisation (RRS): Roll stabilization by means of the rudder is relatively


inexpensive compared to fin stabilizers, has approximately the same effectiveness,
and causes no drag or underwater noise if the system is turned off. However, RRS
requires a relatively fast rudder to be effective, typically δ̇max = 5 − 20 (deg/s).
Another disadvantage is that the RRS will not be effective if the ship’s speed is low.

Table 5.2 Overall comparison of ship roll stabilizer systems (Sellars & Martin 1992)

Stabilizer type | General application | % roll reduction | Price ($ × 1000) | Installation | Remarks
FINS (small fixed) | Mega yachts, naval auxiliaries | 90 | 100-200 | Hull attachment; supply and install power and control cables | Speed loss; largest size about 2 m²
FINS (retractable) | Passenger, cruise, ferries, large Ro-Ro, naval combatants | 90 | 400-1500 | Hull attachment; supply and install power and control cables | Sizes range from 2 m² to about 15 m²
FINS (large fixed) | Naval combatants | 90 | 300-1300 | Hull attachment; supply and install power and control cables | Speed loss
TANKS (free surface) | Work vessels, ferries, small passenger and cargo ships | 75 | 30-50 | Install steelwork; supply and install power and control cables | Includes liquid level monitor
TANKS (U-tube) | Work vessels, Ro-Ro vessels | 75 | 200-300 | Install instrument cables | Includes heel control system
RRS | Small, high speed vessels | 50-75 | 50-250 | Install power and control cables | New development; more robust steering gear may be required
Bilge keels | Universal | 25-50 | … | Hull attachment | Speed loss

7.1 Fin Stabilization System

The motions of the ship in a seaway can result in various undesirable effects, examples of
which are human discomfort and cargo damage. Only the rolling of a ship can be effectively
reduced by stabilization. Active or fin stabilization will now be considered. Figure 5.32 shows
a typical fin stabilizer arrangement.

Figure 5.32 Typical fin stabiliser arrangement (aft view, showing the CG and the port and starboard fins)

The action of waves on a ship in a seaway results in rolling. If the rolling couple applied by the
sea can be opposed, then the vessel will be stabilized. The rolling acceleration, velocity, angle
and the natural list of the vessel must all be determined in order to provide a suitable control
signal to activate the fins. The stabilizing power results from the lift on aerofoil fins located
on opposite sides of the ship. The angle of the fins is controlled in order to produce an upward
force on one side of the vessel and a downward force on the other. The resulting couple
opposes the roll-inducing couple.
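
To make the stabilizing couple explicit, a standard first approximation can be used (an illustrative sketch only; the symbols below are not defined elsewhere in these notes):

F_{fin} = \tfrac{1}{2}\,\rho\,U^{2} A_{f}\,C_{L}(\alpha), \qquad M_{fin} \approx 2\,r_{f}\,F_{fin} = \rho\,U^{2} A_{f}\,C_{L}(\alpha)\,r_{f}

where ρ is the water density, U the ship speed, A_f the area of one fin, C_L(α) the lift coefficient at fin angle α and r_f the moment arm of the fin force about the roll axis. The U² dependence is the reason the stabilizing power falls off at low speed and disappears when the vessel is stopped.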

One type of control unit uses an angular accelerometer which continuously senses the rolling
accelerations of the ship (see Figure 5.33). The sensor is supported on air bearings to
eliminate friction; the air is supplied by a small oil-free compressor via filters and driers, and the
system is sealed in operation. The accelerometer output signal is proportional to the rolling
acceleration of the ship. This signal is first electronically integrated to give a rolling velocity
signal and then integrated again to give a roll angle signal. Each signal can be adjusted for
sensitivity and then all three are summed. The summed signal is fed to a moving-coil servo
valve located in the hydraulic machinery which drives the fins. The stroke of the
hydraulic pumps and the overall gain of the system can each be adjusted. A fin angle
transmitter is provided for each fin to provide feedback to the servo valve. This type of
stabilization provides roll reduction in excess of 90% at resonance, with low residual
roll over a wide range of frequencies. However, at low speed the stabilizing power
falls off, and when the vessel is stopped no stabilization is possible.
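
As a rough illustration of this signal chain (a minimal MATLAB sketch only, not the manufacturer's algorithm; the gains, sample time, fin angle limit and the dummy roll-acceleration signal are all assumed values):

% Sketch of the fin-stabilizer signal chain: measured roll acceleration is
% integrated once to velocity and again to angle; the three signals are
% weighted, summed and saturated to form the fin command.
dt      = 0.05;                         % sample time (s) - assumed
t       = 0:dt:60;
accRoll = 0.02*sin(0.6*t);              % dummy measured roll acceleration (rad/s^2)
Ka = 1.0; Kv = 2.0; Kp = 4.0;           % sensitivity (gain) settings - assumed
finMax  = 25*pi/180;                    % fin angle limit (rad) - assumed

velRoll = cumtrapz(t, accRoll);         % first integration  -> roll velocity
angRoll = cumtrapz(t, velRoll);         % second integration -> roll angle
finCmd  = Ka*accRoll + Kv*velRoll + Kp*angRoll;   % summed control signal
finCmd  = max(min(finCmd, finMax), -finMax);      % servo/fin angle saturation

plot(t, finCmd*180/pi), grid on, xlabel('Time (s)'), ylabel('Fin command (deg)')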

Figure 5.33 Stabilizer control system (block diagram: air-bearing accelerometer assembly with torque coil, sense and torque amplifiers, range filter and switches, velocity and roll-angle integrators with drift correctors, hand-roll input and gain adjustment, summing amplifier, attenuator and interlocks, port and starboard servo-amplifiers and moving-coil servo valves on the main pumps, pump control units, fin feedback potentiometers, fin angle indicators, and stabilizer-off timer and relays)

Figure 5.34 shows the block diagram of the Denny-Brown-AEG Fin Stabilizer.

Figure 5.34 Denny-Brown-AEG ship stabilisation system using fin angle control (block diagram: accelerometer, velocity integrator, roll integrator and summing amplifier; signal converter with log (speed) input; port and starboard servo valves, servo pumps and main pumps; fin angle transmitters providing feedback; the fin moments M FIN act against the sea moment M SEA on the ship)

7.2 Rudder Roll Stabilisation System

According to Perez (2005), rudder roll stabilisation is a technique based on the fact that the
rudder is located aft and below the centre of gravity of the vessel; the rudder therefore
produces not only a yaw moment but also a roll moment, as shown in Figure 5.35. RRS is
implemented as an extra feature of the course autopilot.

Figure 5.35 Rudder-induced rolling moment (aft view: the rudder force acts below the CG, producing both a yaw moment and a roll moment)
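
The same point can be written as a pair of moment equations (a hedged sketch; the notation is illustrative and is not taken from Perez 2005):

K_{rudder} \approx -\,z_{r}\,Y_{r}(\delta), \qquad N_{rudder} \approx x_{r}\,Y_{r}(\delta)

where Y_r(δ) is the sway force produced by rudder angle δ, z_r is the vertical distance from the centre of gravity down to the rudder's centre of pressure and x_r is the (aft) longitudinal distance. Because z_r is non-zero, the same rudder action that produces the yaw moment N used for steering also produces the roll moment K that RRS exploits.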

Most of the drawbacks of conventional active fin stabilisers and anti-roll tanks are overcome
by RRS. Provided that the speed of the ship and the rudder rate are sufficiently high, this
technique can be applied to different ship types: small and large naval vessels, patrol vessels,
ferries and some Ro-Ro vessels. The main advantages of RRS are as follows:

• Medium to high performance, typically in the range of 50-75% roll reduction (cf. Table 5.2)
• Relatively inexpensive
• No additional resistance in calm water conditions
• No large spaces required
• Can be combined with other stabilisers to achieve higher performance

Some disadvantages of RRS are as follows:

• Reduced effectiveness at low speeds. Nevertheless, the low-speed performance can be better
than that of fins, because the rudders are located in the race of the propellers and thus
operate in higher-speed flow than the fins.
• Drag is produced when the system is in use. Nevertheless, this can be less than the drag of fin
stabilisers, provided that turning of the ship is avoided.
• An upgrade of the rudder (steering) machinery may be needed to achieve the faster rudder
motion required for high performance.
• A sophisticated control system is needed to extend good performance to different sailing
conditions.

8. Trend of Control Systems


New technologies have advanced rapidly. Information technology (computers with high-speed
CPUs, high-performance software, networks and the Internet), wireless communication technology
and satellite technology allow very complicated control systems to be designed. In the future,
control systems will be integrated computer-based systems with high performance and multiple
functions. Figure 5.36 shows an example of a newly-developed dynamic positioning system.

Figure 5.36 Example of newly-developed dynamic positioning system (Courtesy of NUST)

SUMMARY OF MODULE 5
Module 5 is summarised as follows.

• Pneumatic control systems


• Hydraulic control systems
• Electrical and electronic control systems including analogue control systems,
programmable logic controllers (PLCs) and digital (computer-based) control systems.
• Autopilot systems for marine vehicles
• Dynamic positioning systems
• Fin stabilisation systems
• Rudder roll stabilisation systems.

Exercises
1. A damper-spring-mass system is shown in the following figure. Write a differential equation
for the relationship between the output displacement y(t) and the input force u(t). Use the
following numerical values: P = 20 N, m = 200 kg, λ = 100 Ns/m (the damper labelled b in the
figure), k = 600 N/m, and initial conditions: y(0) = 0 and ẏ(0) = 0.

[Figure: damper-spring-mass system with spring k, damper b, mass m, input force u(t) and output displacement y(t)]

Assuming that the output displacement is measured by a displacement transducer with a
sensitivity of Km = 5 and that the displacement is controlled by a PID controller with control gains
KP, KI and KD, draw a block diagram and write the closed-loop transfer function of the system.
Using MATLAB/Simulink, build a simulation of the system and find control gains such that
the system is stable.
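
A minimal starting point for the simulation part (a sketch only, requiring the Control System Toolbox; it assumes the usual plant form m·y'' + b·y' + k·y = u(t), which you should check against your own derivation, and the PID gains shown are arbitrary trial values, not the answer):

m = 200; b = 100; k = 600; Km = 5;      % values from the exercise (b is the damper, lambda in the text)
Gp = tf(1, [m b k]);                    % assumed plant Y(s)/U(s) = 1/(m s^2 + b s + k)
Kp = 2000; Ki = 200; Kd = 500;          % trial PID gains - to be tuned
C  = pid(Kp, Ki, Kd);                   % PID controller
Gcl = feedback(C*Gp, Km);               % closed loop with the transducer gain Km in the feedback path
isstable(Gcl)                           % check stability for the chosen gains
step(20*Gcl, 20), grid on               % response to the 20 N step force over 20 s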

2. Consider the position control system shown in the following figure. Write a MATLAB
program or Simulink model to obtain a unit-step response and a unit-ramp response of the system.
Plot curves x1(t) versus t, x2(t) versus t, x3(t) versus t, and e(t) versus t [where e(t) = r(t) – x1(t)]
for both the unit-step response and the unit-ramp response.

[Block diagram: unity-feedback position control system; the error e = r − x1 drives a forward path of cascaded blocks with intermediate signals x3 and x2 and output x1, which is fed back to form e. The legible block elements are 4, 5, 2, 1/s, 0.1s + 1 and 1/s.]
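
Whatever the exact block values, the MATLAB pattern for this exercise is the same (step and lsim are standard Control System Toolbox calls; the transfer function G below is only a placeholder, not the one in the figure, so substitute the model you derive from the diagram):

% Generic pattern for unit-step and unit-ramp responses of a unity-feedback system.
G   = tf(4, [1 2 0]);            % placeholder open-loop model: 4/(s(s + 2))
Gcl = feedback(G, 1);            % unity-feedback closed loop

t = 0:0.01:10;
figure, step(Gcl, t), grid on, title('Unit-step response')

r = t(:);                        % unit-ramp input r(t) = t
y = lsim(Gcl, r, t);             % ramp response
e = r - y;                       % error e(t) = r(t) - output
figure, plot(t, y, t, r, '--', t, e), grid on
legend('output', 'r(t)', 'e(t)'), title('Unit-ramp response and error')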

3*. Consider the hydraulic servo system shown in the following figure. Assuming that the load
reaction forces are not negligible, derive a mathematical model of the system. Assume also that
the mass of the power piston is included in the load mass m.
[Figure: hydraulic servo system; a pilot valve with displacement x and ports 1-4 connects the supply pressure ps and return pressure p0 to the power cylinder; oil flows q enter the cylinder chambers at pressures p1 and p2; the power piston drives the load (mass m, spring k, damper b) with displacement y]

4. Consider the liquid control system shown in the following figure. The controller is of the
proportional type with proportional control gain KP. The set point of the controller is fixed. Draw
a block diagram of the system, assuming that changes in the variables are small. Obtain the
transfer function between the level of the second tank and the disturbance input qd. Obtain the
steady state error when the disturbance qd is a unit step function.

[Figure: two-tank liquid level system; a proportional controller acts on the inflow qi to tank 1 (cross-sectional area A1, head h1, outflow resistance R1); the disturbance flow qd enters the system; tank 2 (cross-sectional area A2, head h2) discharges qo through resistance R2]

5*. Consider the liquid level control system shown in the following figure. The inlet valve is
controlled by a hydraulic integral controller. Assume that the steady-state pilot valve
displacement is X = 0 and the steady-state valve position is Y. We assume that the set point R
corresponds to the steady-state head H. The set point is fixed. Assume also that the disturbance
inflow rate qd, which is a small quantity, is applied to the water tank at t = 0. This disturbance
causes the head to change from H to H + h. This change results in a change in the outflow rate
by qo. Through the hydraulic controller, the change in head causes a change in the inflow rate
from Q to Q + qi. (The integral controller tends to keep the head constant as much as possible in
the presence of disturbances.) We assume that all changes are small quantities.

Assume the following numerical values for the system: C = 2 m², R = 0.5 sec/m², Kv = 1 m²/sec,
a = 0.25 m, b = 0.75 m, K1 = 4 sec⁻¹,

[Figure: liquid level control system; a float measures the head H + h and drives, through a lever with arms a and b, the pilot valve displacement x of a hydraulic servomotor; the servomotor sets the inlet valve position Y + y and hence the inflow Q + qi; the disturbance qd also enters the tank (cross-sectional area A, set point R), and the outflow Q + qo leaves through the outlet restriction]

obtain the response h(t) when the disturbance input qd is a unit-step function. Also obtain this
response h(t) with MATLAB or Simulink.

6. A surface ship is represented by the following Nomoto's manoeuvring model:

T (d²ψ/dt²) + dψ/dt = K δ

where ψ is the yaw angle (rad) and δ is the rudder angle (rad), T = 7.5 seconds (time constant of the
ship) and K = 0.11. The ship speed is constant, u = 15 knots (1 NM = 1,852.00 m). The position of
the ship is represented by the following model:

dx/dt = u cos ψ − v sin ψ
dy/dt = u sin ψ + v cos ψ

where u is the surge velocity and v is the sway velocity (assumed v = 0). The ship's heading is
controlled by a PID autopilot system with control gains KP, KI and KD. It is assumed that the
rudder angle for the PID autopilot is limited to the range -10° (port) to +10° (starboard), the rudder
rate to the range -5 deg/s to +5 deg/s, the error (between the actual yaw angle and the set course)
to the range -180° to +180°, and the yaw angle to the range 0-360°. Using one of the steering
machine models (for the rudder) in the previous section (Section 2.2.2), draw a block diagram and
write a MATLAB program or build a Simulink model for the PID autopilot.
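
A minimal MATLAB simulation skeleton for this exercise (a sketch under stated assumptions: fixed-step Euler integration, a first-order steering machine with a 1-second time constant standing in for the Section 2.2.2 model, an assumed 30° set course, and untuned trial PID gains):

% Sketch: PID heading autopilot for the first-order Nomoto model
% T*d(r)/dt + r = K*delta, with the rudder angle/rate limits from the exercise.
T  = 7.5; K = 0.11;                 % Nomoto parameters
u  = 15*1852/3600; v = 0;           % surge speed (m/s) from 15 knots, sway speed
dt = 0.1; t = 0:dt:600;             % time grid
Kp = 3; Ki = 0.01; Kd = 20;         % trial PID gains - assumed, to be tuned
Tr = 1.0;                           % assumed steering machine time constant (s)
psi_ref = 30*pi/180;                % assumed set course: 30 deg

psi = 0; r = 0; delta = 0; eInt = 0; x = 0; y = 0;
psiLog = zeros(size(t)); xLog = psiLog; yLog = psiLog;

for kk = 1:numel(t)
    e    = psi_ref - psi;                          % heading error (rad)
    e    = atan2(sin(e), cos(e));                  % wrap error to [-pi, pi]
    eInt = eInt + e*dt;
    deltaCmd = Kp*e + Ki*eInt - Kd*r;              % PID command (yaw rate r as derivative term)
    deltaCmd = max(min(deltaCmd, 10*pi/180), -10*pi/180);  % rudder angle limit (+/-10 deg)

    dDelta = (deltaCmd - delta)/Tr;                % first-order steering machine
    dDelta = max(min(dDelta, 5*pi/180), -5*pi/180);         % rudder rate limit (+/-5 deg/s)
    delta  = delta + dDelta*dt;

    rDot = (K*delta - r)/T;                        % Nomoto dynamics
    r    = r + rDot*dt;
    psi  = psi + r*dt;

    x = x + (u*cos(psi) - v*sin(psi))*dt;          % position kinematics
    y = y + (u*sin(psi) + v*cos(psi))*dt;
    psiLog(kk) = psi; xLog(kk) = x; yLog(kk) = y;
end

subplot(2,1,1), plot(t, psiLog*180/pi), grid on, ylabel('\psi (deg)')
subplot(2,1,2), plot(xLog, yLog), axis equal, grid on, xlabel('x (m)'), ylabel('y (m)')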

Basic Control Theory
(35 Hours)

Lecturer: Dr. Hung Nguyen

1. Prerequisites

Mathematical foundation: matrix operations, complex numbers, differential equations


Fundamentals of electrical and electronic engineering

2. Aims

The aims of the unit are:

• To provide students with basic control theory, knowledge and understanding of control systems
and their components and performance;
• To introduce students to control applications in marine and offshore industries.

3. Learning Outcomes

Upon the successful completion of this subject, students should be able to:

• Describe methods used to represent a dynamic system;


• Describe methods used in automatic control systems to reduce steady state error and deviation
during a disturbance;
• Describe linear first-order and second-order measuring and control systems, including PID
control systems;
• Explain the stability of the control systems and apply stability criteria to analyse stability of the
control system;
• Describe automatic control systems, which are common in the marine and offshore industries;
• Develop very basic skills of using MATLAB/Simulink to perform technical computation for
analysis of simple measuring and control systems.

4. Generic Graduate Attributes

The unit covers the following generic graduate attributes:

1. Ability to apply knowledge of basic science and engineering fundamentals (Knowledge):


through applications of technical and information skills related to control systems.

2. Ability to understand problem identification, formulation and solution (Problem solving skills):
through conceptualisation of problems, formulation of a range of solutions, interpretation and
analysis of experimental data, discussion of results and evaluation of experimental methods.

3. Ability to utilise a system approach to design and operational performance (Global
perspective): through the demonstration of awareness of the advantages and disadvantages of
control systems and of approaches to designing new systems or improving operational
performance in order to meet the client's demands and needs.

5. Learning Resources

5.1 Software

MATLAB/Simulink: www.mathworks.com

5.2 Textbook and References

Lecture Notes

Module 1: Modelling

1. Mathematical Modelling of Dynamic Systems 1


2. Continuous Time Models – Ordinary Differential Equations 1
3. General Modelling Principles 2
3.1 Mechanical Systems 2
Example 1 A Simple Pendulum 2
Example 2 A Mass-Damper System 2
3.2 Liquid Level Storage Systems 4
Example 3 A Liquid Level Storage System 4
3.3 Electrical Systems 7
Example 4 Closed-Loop RLC Circuit 7
4. Review of Laplace Transform 8
4.1 Laplace Transform 8
4.2 Laplace Transform Theorems 10
4.3 Applications of Laplace Transform 13
Example 5 13
Example 6 15
Example 7 16
Example 8 17
Example 9 19
Example 10 19
Example 11 20
Example 12 21
4.4 Partial Fraction Expansion with MATLAB 23
Example 13 23
Example 14 24
Summary of Module 1 26
Exercises 26
Appendix – Numerical Integration Methods 30
Sample Program in MATLAB 31

Module 2: Modelling (continued), Time Domain Analysis, Frequency Domain Analysis,
Time Delay, Steady State Error, Disturbances

1. Transfer Function, Zeros and Poles 1


1.1 Transfer Function 1
1.2 Zeros and Poles 3
2. Block Diagram 5
2.1 Block Diagram 5
2.2 Block Diagram Algebra 8
3. Dynamic Performance 11
3.1 Time Domain Analysis 13
3.1.1 Zero-order Systems 13
3.1.2 First-order Systems 16
3.1.3 Second-order Systems 20
3.1.4 Transient Response Analysis with MATLAB/Simulink 29
3.2 Frequency Domain Analysis 34
3.2.1 Frequency Response of Closed-loop Systems 35
3.2.2 Frequency Domain Specifications 36
3.2.3 Frequency Response of Second-order Systems 38
4. Time Delays 42
4.1 Systems with Time Delays 42
4.2 Approximation of Time Delay 44
5. Steady State Errors 46
5.1 Concept of Steady State Error 46
5.2 Definition of Steady State Error 44
5.3 Steady State Error and Final Value Theorem 47
5.4 Steady State Error with Step Input 50
5.5 Steady State Error with Ramp Input 51
6. Disturbances 53
6.1 Concept of Disturbances 53
6.2 Effects of Disturbance on the System Output 54
6.3 Methods to Reduce Disturbances 54
6.3.1 Reduction at the Source 56
6.3.2 Reduction by Local Feedback 56
6.3.3 Reduction by Feed-forward 57
6.3.4 Reduction by Prediction 58
Summary of Module 2 58
Exercises/Problems 59

Module 3: Stability and PID Control

1. Stability of Linear Control Systems 1


1.1 Definitions of Stability 1
1.2 Concepts of Stability 1
Example 1 4
2. PID Control 6

2.1 What Is PID Control? 6
2.2 Control Actions 6
2.2.1 Proportional (P) Control Action 6
2.2.2 Integral (I) Control Action 7
2.2.3 Derivative (D) Control Action 8
2.2.4 Proportional Integral (PI) Control Action 8
2.2.5 Proportional Derivative (PD) Control Action 9
2.2.6 Proportional Integral Derivative PID Control Action 9
2.3 Types of PID Controller 10
2.3.1 Pneumatic PID Controllers 10
2.3.2 Hydraulic PID Controllers 19
2.3.3 Electronic PID Controllers 25
2.4 Simulation of PID Control System (PID Autopilot) 30
2.4.1 Ship Manoeuvring Model 30
2.4.2 Simulation of PID Autopilot (Nomoto’s First-Order Model) 31
2.5 PID Controllers in Market 37
2.5.1 DTZ4 Controller of Instronics Inc. 37
2.5.2 More Information on the Internet 38
2.6 Examples 38
Example 2 38
Example 3 40
Summary of Module 3 42
Exercises 43

Module 4: Control Components

1. General Structure of a Control System 1


2. Comparison Elements 2
2.1 Differential Levers (Walking Beams) 2
2.2 Potentiometers 3
2.3 Synchros 4
2.4 Operational Amplifiers 5
3. Control Elements 7
3.1 Process Control Valves 7
3.2 Hydraulic Servo Valve 11
3.3 Hydraulic Actuators 15
3.4 Electrical Elements: D.C. Servo Motors 16
3.5 Electrical Elements: A.C. Servo Motors 18
3.6 Hydraulic Control Element (Steering Gear) 18
3.7 Pneumatic Control Elements 19
4. Examples of Control Systems 22
4.1 Thickness Control System 22
4.2 Level Control System 23
Summary of Module 4 23
Exercises 24

Module 5: Applications of Control

1. Introduction 1
2. Pneumatic Control Systems 1
2.1 Essential Requirements 1
2.2 Basic Pneumatic Control Systems 2
3. Hydraulic Control Systems 6
3.1 Hydraulic Servo Valve and Actuator 6
3.2 Applications of Hydraulic Servo Valve 7
3.2.1 Speed Control System 7
3.2.2 Hydraulic Steering Machine 8
4. Electrical and Electronic Control Systems 10
4.1 Analogue Control Systems 10
4.2 Digital (Computer-based) Control Systems 12
4.3 PLCs (Sequence Control Systems) 13
4.3.1 The Processor Unit 13
4.3.2 The Input/Output Section 14
4.3.3 The Programming Device 14
5. Ship Autopilot Systems 15
5.1 Mathematical Foundation for Autopilot Systems 15
5.1.1 Autopilots of PID Type 15
5.1.2 P Control 16
5.1.3 PD Control 17
5.1.4 PID Control 18
5.2 Automatic Steering Principles 19
5.2.1 Proportional Control 19
5.2.2 Derivative Control 21
5.2.3 Integral Control 22
5.3 Marine Autopilots in Market 23
5.3.1 Autopilot System PR-6000 (Tokimec) 23
5.3.2 Autopilot System PR-2000 (Tokimec) 23
5.3.3 Autopilot System PR-1500 (Tokimec) 24
6. Dynamic Positioning Systems 24
6.1 Basic Principles of Dynamic Positioning Systems 25
6.2 IMO DP Classifications 27
7. Roll Stabilisation Systems 28
7.1 Fin Stabilisation Systems 29
7.2 Rudder Roll Stabilisation System 31
8. Trend of Control Systems 32
Summary of Module 5 32
Exercises 33
