
CHAPTER - I LAPLACE TRANSFORM

1.1 INTRODUCTION
The analysis and design of many physical systems is based upon the solution of an ordinary linear differential equation with constant coefficients. This is, in fact, an idealization of the actual process. Nevertheless, in a defined operating region, many systems can be described by an ordinary linear differential equation. The procedure is illustrated in Fig. 1.1.

In general, physical laws are applied to the physical system to obtain its mathematical description. Since the operating region of many systems covers a small range, this mathematical description can, therefore, be linearized to give ordinary linear differential equations. Later, initial conditions, if any, can be added and the solution obtained to give information for design or analysis. The solution of such equations can be obtained either by the known methods for ordinary linear differential equations or by transform methods. The former method, substituting an assumed solution in the differential equation and then finding the values of the constants, is quite laborious. In the latter, the use of the Laplace transform changes the differential equation into an algebraic equation: the differential equation with time as the independent variable becomes an algebraic equation with s as the independent variable. The initial conditions are included automatically during the transformation, and the time solution is found by inverting the transformed equations. This method is also applicable to partial differential equations.

1.2 DEFINITION
Let f(t) be a function of t with the following properties:
1. f(t) is identically zero for t < 0
2. f(t) is continuous from the right at t = 0
Mathematically,
1. f(t) = 0 for t < 0
2. lim_{t→0+} f(t) = f(0)
Then the Laplace transform of f(t), denoted by L[f(t)], is defined as
L[f(t)] = F(s) = ∫_0^∞ e^{−st} f(t) dt   (1.1)

where s = σ + jω is a complex number. The Laplace transform in Eq. (1.1) exists if and only if the integral converges for some value of σ.

1.3 SUFFICIENT CONDITIONS FOR EXISTENCE OF LAPLACE TRANSFORMS
THEOREM 1.1: If f(t) is sectionally continuous in every finite interval 0 < t < N and of exponential order α for t > N, then its Laplace transform F(s) exists.
PROOF: In order to prove the theorem, let us first define a function of exponential order.
Definition 1.1: If real constants M > 0 and α exist such that for all t > N
|e^{−αt} f(t)| < M   or   |f(t)| < M e^{αt}   (1.2)
we say that f(t) is a function of exponential order α as t → ∞.
Now we have, for any positive number N,
∫_0^∞ e^{−st} f(t) dt = ∫_0^N e^{−st} f(t) dt + ∫_N^∞ e^{−st} f(t) dt   (1.3)
Since f(t) is sectionally continuous in every finite interval 0 ≤ t ≤ N, the first integral on the right exists. The second integral on the right also exists, since f(t) is of exponential order α for t > N. To see this, observe that
|∫_N^∞ e^{−st} f(t) dt| ≤ ∫_N^∞ |e^{−st} f(t)| dt   (1.4)
≤ ∫_0^∞ e^{−σt} |f(t)| dt,   s = σ + jω
< ∫_0^∞ e^{−σt} M e^{αt} dt = M/(σ − α),   σ > α   (1.5)
Thus the Laplace transform exists for Re{s} > α. If this sufficient condition is not satisfied, the Laplace transform of f(t) may or may not exist.

1.4 LAPLACE TRANSFORM OF FUNCTIONS
We consider here some elementary functions which are used in engineering problems.

Example 1.1  Unit Impulse Function

The unit impulse function or delta function, denoted by the symbol δ(t) (Fig. 1.2), is defined by the relationships
δ(t) = 0,  t ≠ 0
and
∫_a^b δ(t) dt = 1 if a ≤ 0 ≤ b, and 0 otherwise
Hence, δ(t − t₀) = 0 for t ≠ t₀, and it has the property that
∫_a^b δ(t − t₀) dt = 1 for a ≤ t₀ ≤ b, and 0 otherwise   (1.6)
which implies that δ(t − t₀) has infinite magnitude at t = t₀. In addition, for any function f(t),
∫_a^b f(t) δ(t − t₀) dt = f(t₀) for a ≤ t₀ ≤ b, and 0 otherwise   (1.7)

δ(t) is transformable for every s, since by Eq. (1.7)
∫_0^∞ e^{−st} δ(t) dt = 1   (1.8)
Hence, L[δ(t)] = 1.

Example 1.2  Unit Step Function
Define
u(t) = 1 for t ≥ 0, and 0 for t < 0   (1.9)
The graph of the function is shown in Fig. 1.3. Then,
U(s) = ∫_0^∞ 1·e^{−st} dt = [−e^{−st}/s]_0^∞ = 1/s
Hence,
L[u(t)] = 1/s,  Re{s} > 0   (1.10)

Example 1.3  Unit Ramp Function

The function is shown in Fig. 1.4 and defined as f(t) = t for t ≥ 0. Then,
F(s) = ∫_0^∞ t e^{−st} dt = [−t e^{−st}/s − e^{−st}/s²]_0^∞ = 1/s²,  Re{s} > 0   (1.11)
Therefore,
L[f(t)] = 1/s²   (1.12)

Example 1.4  Exponential Function

The graph of the function is shown in Fig. 1.5. Let
f(t) = e^{−at},  for t ≥ 0   (1.13)
Then,
F(s) = ∫_0^∞ e^{−at} e^{−st} dt = ∫_0^∞ e^{−(s+a)t} dt = [−e^{−(s+a)t}/(s + a)]_0^∞ = 1/(s + a),  Re{s} > −a   (1.14)

Example 1.5  Sinusoidal Function

The function f(t) = sin(at) can be written as
f(t) = sin(at) = (e^{jat} − e^{−jat})/(2j)
Then,
F(s) = (1/2j) ∫_0^∞ (e^{jat} − e^{−jat}) e^{−st} dt
     = (1/2j) [∫_0^∞ e^{−(s−ja)t} dt − ∫_0^∞ e^{−(s+ja)t} dt]
     = (1/2j) [1/(s − ja) − 1/(s + ja)] = a/(s² + a²),  Re{s} > 0   (1.15)

Example 1.6
Find the Laplace transform of f(t) if
f(t) = 4 for 0 ≤ t < 2, and 0 for t ≥ 2
By definition,
L[f(t)] = ∫_0^∞ f(t) e^{−st} dt = ∫_0^2 4 e^{−st} dt + ∫_2^∞ 0·e^{−st} dt
        = [−4 e^{−st}/s]_0^2 = (4/s)(1 − e^{−2s})   (1.16)

Example 1.7
Let f(t) = t². Then, integrating by parts,
F(s) = ∫_0^∞ t² e^{−st} dt = [−t² e^{−st}/s − 2t e^{−st}/s² − 2 e^{−st}/s³]_0^∞ = 2/s³,  Re{s} > 0   (1.17)
The reader should verify that lim_{t→∞} t² e^{−st} = 0.
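The elementary transform pairs derived above can be checked symbolically. The short sketch below is not part of the original text; it uses SymPy's laplace_transform to reproduce the results of Eqs. (1.10)–(1.17), with arbitrarily chosen variable names.

```python
# Cross-check of the elementary transform pairs (step, ramp, exponential,
# sine, t^2) using SymPy. Illustrative sketch only.
import sympy as sp

t, s = sp.symbols('t s', positive=True)
a = sp.symbols('a', positive=True)

pairs = {
    '1 (unit step)': sp.Integer(1),
    't (unit ramp)': t,
    'exp(-a*t)': sp.exp(-a*t),
    'sin(a*t)': sp.sin(a*t),
    't**2': t**2,
}

for name, f in pairs.items():
    F = sp.laplace_transform(f, t, s, noconds=True)
    print(f'L[{name}] = {sp.simplify(F)}')
# Expected: 1/s, 1/s**2, 1/(a + s), a/(a**2 + s**2), 2/s**3
```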

1.5 LAPLACE TRANSFORM OF DERIVATIVES
Let us now proceed to find the Laplace transform of derivatives. Define L[y(t)] = Y(s). We wish to find
L[ẏ(t)] = ∫_0^∞ ẏ(t) e^{−st} dt   (1.18)
(In this text, we will frequently use ẏ to represent dy/dt.) We reduce this equation by integrating by parts:
L[ẏ(t)] = [y(t) e^{−st}]_0^∞ − ∫_0^∞ y(t)(−s e^{−st}) dt = −y(0) + sY(s)   (1.19)
The only restriction is that y(t) must be such that
lim_{t→∞} y(t) e^{−st} = 0   (1.20)
The Laplace transforms of higher derivatives are easily deduced from Eq. (1.19) by letting y(t) = dz/dt. Thus
L[d²z/dt²] = ∫_0^∞ (d²z/dt²) e^{−st} dt = sL[dz/dt] − (dz/dt)(0)   (1.21)
Also,
L[dz/dt] = sZ(s) − z(0)
Combining the above equations,
L[d²z/dt²] = s²Z(s) − s z(0) − (dz/dt)(0)   (1.22)

Some such results are given in Table 1.2. Other properties are the subject of Chapter IV.

1.6 LAPLACE TRANSFORM APPLIED TO ORDINARY DIFFERENTIAL EQUATIONS

Example 1.8
Consider the circuit of Fig. 1.6.

The equation describing the electric current i(t) is
L₀ di/dt + R i = E   (1.23)
where L₀ is the inductance, R is the resistance, and E is a constant voltage source. Transforming Eq. (1.23),
L[L₀ di/dt + R i] = L[E]
L₀ L[di/dt] + R L[i] = L[E]   (1.24)
L₀[sI(s) − i(0)] + R I(s) = E/s   (1.25)
If we assume that the initial current i(0) in the inductor is zero, then
L₀ s I(s) + R I(s) = E/s
or
(sL₀ + R) I(s) = E/s   (1.26)
I(s) = E / [s(sL₀ + R)] = (E/L₀) / [s(s + R/L₀)]
Separating by the partial fraction method,
I(s) = (E/R) [1/s − 1/(s + R/L₀)]
Inverting I(s) into the time domain, we get
i(t) = (E/R)(1 − e^{−(R/L₀)t}),  t ≥ 0   (1.27)
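As a cross-check of Eq. (1.27), the circuit equation (1.23) can also be integrated numerically. The sketch below is not part of the original text; the component values E, R and L₀ are arbitrary assumptions chosen for illustration.

```python
# Numerically integrate L0 di/dt + R i = E and compare with Eq. (1.27).
import numpy as np
from scipy.integrate import solve_ivp

E, R, L0 = 10.0, 5.0, 2.0          # assumed values: volts, ohms, henries

def rl_circuit(t, i):
    # di/dt = (E - R i) / L0
    return (E - R * i[0]) / L0

t_eval = np.linspace(0.0, 3.0, 200)
sol = solve_ivp(rl_circuit, (0.0, 3.0), [0.0], t_eval=t_eval)

i_exact = (E / R) * (1.0 - np.exp(-(R / L0) * t_eval))   # Eq. (1.27)
print('max |numerical - analytical| =', np.max(np.abs(sol.y[0] - i_exact)))
```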

Example 1.9
Consider the differential equation
d²y/dt² + 4 dy/dt + 3y = sin(t)   (1.28)
Transforming,
L[d²y/dt² + 4 dy/dt + 3y] = L[sin t]
[s²Y(s) − s y(0) − (dy/dt)(0)] + 4sY(s) − 4y(0) + 3Y(s) = 1/(s² + 1)
If y(0) = 0 and (dy/dt)(0) = 0, then
Y(s) = 1 / [(s² + 1)(s² + 4s + 3)] = 1 / [(s + 1)(s + 3)(s² + 1)]   (1.29)
The time-domain solution of Eq. (1.29) can be obtained and is given below:
y(t) = (1/4) e^{−t} − (1/20) e^{−3t} − (1/√20) cos(t + 0.4636),  t ≥ 0   (1.30)
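Eq. (1.30) can be verified by inverting Y(s) of Eq. (1.29) symbolically. The sketch below is not from the original text.

```python
# Verify Eq. (1.30) with SymPy's inverse Laplace transform.
import sympy as sp

t, s = sp.symbols('t s', positive=True)
Y = 1 / ((s + 1) * (s + 3) * (s**2 + 1))
y = sp.inverse_laplace_transform(Y, s, t)
print(sp.simplify(y))
# Expected to match exp(-t)/4 - exp(-3*t)/20 - cos(t + atan(1/2))/sqrt(20),
# up to trigonometric rearrangement (atan(1/2) ≈ 0.4636 rad).
```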

PROBLEMS
1.1 Find the Laplace transforms of y(t) in the following differential equations.
a) d²y/dt² − dy/dt − 2y = 0,  y(0) = 1, (dy/dt)(0) = 2
b) d⁴y/dt⁴ − 2 d²y/dt² + y = 0,  (d³y/dt³)(0) = 0, (d²y/dt²)(0) = 1, (dy/dt)(0) = 0, y(0) = 1
c) d³y/dt³ + 7 d²y/dt² + 12 dy/dt = (1 + t)e^{−3t},  (d²y/dt²)(0) = 0, (dy/dt)(0) = 0, y(0) = 1
1.2 Find the Laplace transforms of the following functions of time.
a) f(t) = tⁿ
b) f(t) = sin²(ωt)
c) f(t) = t e^{−at}
d) f(t) = e^{−at} sin(ωt)
e) f(t) = t² sin(ωt)

CHAPTER - II APPLICATIONS TO PHYSICAL SYSTEMS


2.1 INTRODUCTION
The aim of this chapter is to give an appreciation of the equations of linear physical systems and their formulation in the Laplace domain. These transformed equations can then be analyzed by the s-plane techniques discussed later.

2.2 MECHANICAL SYSTEMS
The differential equations for mechanical systems are written by using Newton's Law, which states that for a translational system the sum of the forces acting on a body is equal to the mass times the linear acceleration of the body. If the forces on the body are balanced, i.e. the force in the positive direction equals the force in the negative direction, the mass will not move: the resultant of the forces is zero, hence the acceleration is zero. For rotational systems, the law states that the sum of the torques acting on a body is equal to the moment of inertia times the angular acceleration of the body.

Example 2.1  Translational System

Consider the translational mechanical system in Fig. 2.1, whose differential equation for an applied force F(t) is
M d²x/dt² = F(t) − kx − f dx/dt   (2.1)
or
M d²x/dt² + f dx/dt + kx = F(t)   (2.2)
where M is the mass of the body, k the stiffness coefficient, f the friction coefficient, and x denotes displacement. Transforming Eq. (2.2) and assuming that all initial conditions are zero and F(t) is a unit step function, we obtain
M s² X(s) + f s X(s) + k X(s) = 1/s
so that
X(s) = 1 / [s(M s² + f s + k)]   (2.3)

Example 2.2  Linear Rotational System

Consider the rotational system of Fig. 2.2. By Newton's Law:
J d²θ/dt² = −B dθ/dt + T − kθ
J d²θ/dt² + B dθ/dt + kθ = T   (2.4)
where J is the moment of inertia, B the friction coefficient, T the applied torque, and θ the angular displacement. Transforming Eq. (2.4) and assuming that θ(0) = 0 and θ̇(0) = 0, we obtain
Θ(s) = T(s) / (Js² + Bs + k)   (2.5)

Example 2.3  Coupled Translational System

Consider Fig. 2.3, whose differential equations are given below:
M₁ d²x₁/dt² + f dx₁/dt + (k₁ + k₂)x₁ − k₂x₂ = 0
M₂ d²x₂/dt² + k₂x₂ − k₂x₁ = F(t)
Transforming,
M₁s²X₁(s) + f s X₁(s) + (k₁ + k₂)X₁(s) − k₂X₂(s) = 0
M₂s²X₂(s) + k₂X₂(s) − k₂X₁(s) = F(s)   (2.6)
Eq. (2.6) can now be written in matrix form as follows:
[ M₁s² + f s + k₁ + k₂      −k₂        ] [X₁(s)]   [  0   ]
[ −k₂                        M₂s² + k₂ ] [X₂(s)] = [ F(s) ]   (2.7)
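The matrix equation (2.7) can be solved symbolically for X₁(s) and X₂(s). The sketch below is not from the original text; all symbols are taken from the example.

```python
# Solve the 2x2 matrix equation (2.7), equivalent to applying Cramer's rule.
import sympy as sp

s, F = sp.symbols('s F')
M1, M2, f, k1, k2 = sp.symbols('M1 M2 f k1 k2', positive=True)

A = sp.Matrix([[M1*s**2 + f*s + k1 + k2, -k2],
               [-k2,                      M2*s**2 + k2]])
b = sp.Matrix([0, F])

X1, X2 = A.LUsolve(b)
print(sp.simplify(X1))
print(sp.simplify(X2))
```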

2.3 ELECTRIC CIRCUITS
Consider the simple electric circuit shown in Fig. 2.4, which is made up of an inductive coil, a resistor, and a capacitor. A time-varying voltage is applied across terminals 1 and 2. This excitation produces the current i(t) and the voltage e₂(t) across terminals 3 and 4. The analysis of electrical circuits is based on two fundamental laws: Kirchhoff's voltage law and Kirchhoff's current law. The first law states that the sum of all voltages around any closed path of a circuit is zero. The second law states that the sum of all currents entering a node is zero. Both these laws apply to instantaneous values of voltages and currents. As the current i(t) flows through the elements of the circuit, it causes voltage drops opposed to the direction of current flow. The magnitudes of these voltage drops are:
1. across the inductor: e_L(t) = L di(t)/dt
2. across the resistor: e_R(t) = R i(t)
3. across the capacitor: e_C(t) = (1/C) ∫_0^t i(u) du + e_C(0)
where the voltage e_C(0) is caused by charges already on the capacitor at time t = 0. Applying Kirchhoff's voltage law to the closed path:
e₁(t) − e_L(t) − e_R(t) − e_C(t) = 0
Here, voltages which oppose the clockwise current flow have a negative sign. Substitution of the voltage drops yields
L di(t)/dt + R i(t) + (1/C) ∫_0^t i(u) du + e_C(0) = e₁(t)   (2.8)
This is an integro-differential equation defining the unknown function i(t). However, we wish to solve the system for e₂(t). The voltage across terminals 3 and 4 is equal to the voltage across the capacitor, namely
(1/C) ∫_0^t i(u) du + e_C(0) = e₂(t)   (2.9)

Equations (2.8) and (2.9) can now be Laplace transformed.

Example 2.4
Consider the passive network (Figs. 2.5 and 2.6) whose equations are to be derived by the mesh and nodal methods. The circuits for the mesh and nodal analyses are given in Figs. 2.5 and 2.6. The mesh equations for the network can be written as follows:
V(s) = (R₁ + 1/(C₁s)) I₁(s) − (1/(C₁s)) I₂(s)
0 = −(1/(C₁s)) I₁(s) + (1/(C₁s) + 1/(C₂s) + sL₁) I₂(s) − (1/(C₂s)) I₃(s)
0 = −(1/(C₂s)) I₂(s) + (R₂ + 1/(C₂s)) I₃(s)   (2.10)
Similarly, a set of nodal equations can be written from Fig. 2.6 as given below:
V(s)/R₁ = (1/R₁ + sC₁ + 1/(sL₁)) V₁(s) − (1/(sL₁)) V₂(s)
0 = −(1/(sL₁)) V₁(s) + (sC₂ + 1/(sL₁) + 1/R₂) V₂(s)   (2.11)

The above set of equations can now be solved for the unknown variables.

Example 2.5
An equivalent circuit for an active device is shown in Fig. 2.7. We derive its mesh equations in the Laplace domain. The following mesh equations for the circuit can easily be derived with the help of Kirchhoff's voltage law:
Vs(s) = (Rs + Rg) I₁(s) − Rg I₄(s)
−μVg(s) = (Rp + R_L) I₂(s) − R_L I₃(s)
0 = −R_L I₂(s) + (R_L + R₂ + 1/(sC)) I₃(s) − K R₂ I₄(s)   (2.12)
0 = −Rg I₁(s) − K R₂ I₃(s) + (Rg + K R₂ + 1/(sC_gk)) I₄(s)
Also, from Fig. 2.7,
Vg(s) = (1/(sC_gk)) I₄(s)   (2.13)

The matrix equation can be written as follows; the unknown variables can then be determined by using Cramer's rule:
[ Rs + Rg    0           0                       −Rg                     ] [I₁(s)]   [Vs(s)]
[ 0          Rp + R_L    −R_L                    μ/(sC_gk)               ] [I₂(s)]   [  0  ]
[ 0          −R_L        R_L + R₂ + 1/(sC)       −K R₂                   ] [I₃(s)] = [  0  ]   (2.14)
[ −Rg        0           −K R₂                   Rg + K R₂ + 1/(sC_gk)   ] [I₄(s)]   [  0  ]

2.4 A SIMPLE THERMAL SYSTEM
Consider a large bath whose water temperature is θᵢ degrees, with a thermometer indicating θ degrees immersed in the bath; T is the thermometer's lag constant. Newton's Law of Cooling states that the rate of change of the indicated temperature is proportional to the difference between the bath temperature and the measured temperature (Fig. 2.8). The equation is
dθ/dt = (1/T)(θᵢ − θ(t))   (2.15)
Transforming Eq. (2.15),
T[sΘ(s) − θ(0)] + Θ(s) = Θᵢ(s)   (2.16)
For a constant bath temperature θᵢ, Θᵢ(s) = θᵢ/s and
Θ(s) = θᵢ / [T s(s + 1/T)] + θ(0) / (s + 1/T)   (2.17)
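The first-order thermometer response implied by Eq. (2.17) can be recovered symbolically. The sketch below is not from the original text; T, θᵢ and θ(0) are kept as symbols.

```python
# Invert Eq. (2.17) with SymPy to get the thermometer response.
import sympy as sp

t, s = sp.symbols('t s', positive=True)
T, theta_i, theta0 = sp.symbols('T theta_i theta0', positive=True)

Theta = theta_i / (T * s * (s + 1/T)) + theta0 / (s + 1/T)   # Eq. (2.17)
theta = sp.inverse_laplace_transform(Theta, s, t)
print(sp.simplify(theta))
# Expected: theta_i + (theta0 - theta_i)*exp(-t/T), an exponential approach
# to the bath temperature with time constant T.
```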

2.5 A SIMPLE HYDRAULIC SYSTEM
Consider the system shown in Fig. 2.9, which consists of a tank of cross-sectional area A to which is attached a flow resistance R, such as a pipe. Assume that q₀, the volumetric flow rate through the resistance, is related to the head h by the linear relationship
q₀ = h/R   (2.18)
Liquid of constant density enters the tank with volumetric flow q(t). Determine the transfer function which relates head to flow. The mass balance around the tank is: mass flow in − mass flow out = rate of accumulation of mass in the tank, so that
q(t) − q₀(t) = d(A h(t))/dt   (2.19)
q(t) − q₀(t) = A dh/dt   (2.20)
Combining Eqs. (2.18) and (2.20) to eliminate q₀(t) gives
q − h/R = A dh/dt   (2.21)
Assuming zero initial conditions and transforming Eq. (2.21),
Q(s) = H(s)/R + A s H(s)   (2.22)
Q(s) = (1/R + As) H(s)
H(s) = R Q(s) / (1 + τs)   (2.23)
where τ = AR.

Equations for hydraulic systems with more than one tank can easily be obtained by following the above method.

2.6 A MODEL OF A SINGLE COMMODITY MARKET
Fig. 2.10 shows a diagram for the market of a single commodity. Three groups are involved in the market: the suppliers, the merchants, and the consumers. Each group is aggregated into one function. The merchant sets the price, purchases from the supplier, sells to the consumer, and maintains a stock of the commodity. The variables of the market are assumed to be continuous functions of time. They are:
1. The rate of flow of supply, s(t), measured in units of commodity per unit time.
2. The rate of flow of demand, d(t), measured in units of commodity per unit time.
3. The stock level, q(t), measured in units of commodity.

4. The price per unit of commodity, p(t).
In modelling this market, assumptions need to be made about the relationships among the above variables. First of all, any excess of supply over demand goes into stock. This means that
q(t) = q(0) + ∫_0^t [s(u) − d(u)] du   (2.24)
An excess of demand over supply is filled from stock. Negative values of q(t) will be interpreted as the amount of the commodity which has been sold, but not delivered because of shortage. The second assumption is that the merchant sets the price p(t) at each instant of time so as to make the rate of increase of p(t) proportional to the amount by which the actual stock q(t) deviates from an ideal stock level qᵢ. Mathematically,
ṗ(t) = A[qᵢ − q(t)]   (2.25)
where A is a positive constant. The third assumption is that both supply and demand depend on the price. In the simplest model, the supply s(t) increases linearly with price and the demand d(t) decreases linearly with price. Thus, the excess of supply over demand is a linearly increasing function of p,
s(t) − d(t) = B[p(t) − P_e]   (2.26)
where B is a positive constant and P_e is the equilibrium price for which supply equals demand. This function is shown in Fig. 2.11. A more elaborate model considers the anticipatory nature of the supplier as well as the consumer reaction. This provides for an increase in supply or demand proportional to the rate of change of price. With such a term added, Eq. (2.26) becomes
s(t) − d(t) = B[p(t) − P_e] + C ṗ(t)   (2.27)
The value of C may be positive or negative. For example, if it is assumed that the supplier increases his production because prices are on the rise and because he wishes to benefit from a greater profit margin, then C > 0. On the other hand, the consumer may purchase more than he needs because prices are rising and because he wishes to buy before the prices rise further; then C < 0. If both of these effects are present, C may be either positive or negative. Substituting Eq. (2.24) in Eq. (2.25) and differentiating the resulting equation, we obtain
d²p(t)/dt² = −A[s(t) − d(t)]   (2.28)
Substitution of Eq. (2.27) yields the following second-order differential equation
d²p(t)/dt² + AC dp(t)/dt + AB p(t) = AB P_e   (2.29)
which can be Laplace transformed.
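The price dynamics of Eq. (2.29) can be simulated directly. The sketch below is not from the original text; the numerical values of A, B, C, P_e and the initial conditions are arbitrary assumptions chosen for illustration.

```python
# Simulate p'' + A*C*p' + A*B*p = A*B*Pe and watch the price settle.
import numpy as np
from scipy.integrate import solve_ivp

A, B, C, Pe = 2.0, 1.0, 0.5, 10.0      # assumed constants
p0, pdot0 = 12.0, 0.0                   # assumed initial price and rate

def market(t, state):
    p, pdot = state
    pddot = A * B * Pe - A * C * pdot - A * B * p
    return [pdot, pddot]

sol = solve_ivp(market, (0.0, 20.0), [p0, pdot0], max_step=0.05)
print('final price ≈', sol.y[0, -1])    # settles toward Pe when C > 0
```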

2.7 A MODEL FOR CAR FOLLOWING
Consider the positions of two cars moving on a single-lane road where passing is not possible. The leading car has the position x(t), and the second car has the position y(t). Both x(t) and y(t) are measured from the same reference point and both increase with time. It follows that x(t) > y(t), and that x(t) = y(t) implies a collision of the cars. Now let x(t) be an arbitrary function of time. Assume that the initial distance is x(0) − y(0) = d and that the driver of the second car attempts to hold the distance d at all times. If the distance increases (decreases), the driver will accelerate (decelerate) in proportion to the deviation from d. Moreover, if the leading car has a higher (lower) speed, the driver will accelerate (decelerate) in proportion to the difference in speed. The actual acceleration d²y(t)/dt² is the result of both effects, that is
d²y(t)/dt² = A[x(t) − y(t) − d] + B[dx(t)/dt − dy(t)/dt]   (2.30)
where A and B are positive constants. Separation of x(t) and y(t) in this equation yields
d²y(t)/dt² + B dy(t)/dt + A y(t) = A x(t) + B dx(t)/dt − A d   (2.31)
Define f(t) = A x(t) + B ẋ(t) − A d. Now Eq. (2.31) is a typical second-order differential equation.

PROBLEMS
2.1 Write the Laplace-transformed differential equations for the three mechanical systems in Fig. 2.12.
2.2 Write the Laplace-transformed differential equation for the electric circuit in Fig. 2.13. Find the transfer function E₂(s)/E₁(s).
2.3 For the circuit in Fig. 2.14(a), let the circuit be initially inert and let
e(t) = t for 0 < t ≤ 1, and 0 for t > 1
Find the charge q(t) on the capacitor.
2.4 For the circuit in Fig. 2.14(b), let e(t) = cos(ωt). Find i_C(t), the current through the capacitor, for the initially inert circuit and write the equation in Laplace transform.
2.5 In the circuit shown in Fig. 2.14(c), the switch is closed at t = 0. Find i_L(t) and write the Laplace-transformed equations.


CHAPTER - III THE INVERSE LAPLACE TRANSFORM


3.1 INTRODUCTION
In the Laplace transform method, the behaviour of a physical system is determined via the following steps:
1. Derive the ordinary differential equations describing the system.
2. Obtain the initial conditions.
3. Laplace transform the differential equations, including the initial conditions.
4. Manipulate the algebraic equations and solve for the desired dependent variable.
5. Find the inverse Laplace transform to obtain the solution in the time domain.
We have, so far, reached the stage where we can obtain the desired variables. In this chapter, we consider the problem (step 5 above) of finding the original function f(t) from its image function F(s). In order that the transform calculus be useful, we must require uniqueness, that is,
L⁻¹[L[f(t)]] = f(t)
Uniqueness means that if two functions f(t) and g(t) have the same transform F(s), then f(t) and g(t) are identical functions. There is a theorem which states that two functions f(t) and g(t) that have the same Laplace transform can differ only by a null function n(t). A null function has the property
∫_0^t n(τ) dτ = 0,  for all t > 0
An example of a null function is
n(t) = 1 for t = 1, 2, 3, ..., and 0 otherwise
Null functions are highly artificial functions and are of no significance in applications. We can, therefore, say that the inverse Laplace transform of F(s) is essentially unique. In order to obtain the solution in the time domain, we have to invert the Laplace transform. There are three methods of obtaining time-domain solutions:
1. Laplace transform tables
2. Partial fractions
3. Inversion integral
The following discussion is limited to systems which have the following form:
Y(s) = (aₙsⁿ + a_{n−1}s^{n−1} + ... + a₁s + a₀) / (bₘsᵐ + b_{m−1}s^{m−1} + ... + b₁s + b₀)   (3.1)
The coefficients aᵢ and bᵢ are real and m, n = 0, 1, 2, .... The most obvious method of finding the inverse is to use Laplace transform tables, which can take care of a wide range of problems.

3.2 PARTIAL FRACTION METHOD
The rational fraction in Eq. (3.1) can be reduced to a sum of simpler terms, each of whose inverse Laplace transforms is available from the tables. As an example, consider the following.

Example 3.1
Consider
Y(s) = (s³ + 8s² + 26s + 22) / (s³ + 7s² + 14s + 8)   (3.2)
Step 1: Eq. (3.2) cannot be expanded into partial fractions because the degrees of numerator and denominator are equal; it should therefore first be divided, so that
Y(s) = 1 + (s² + 12s + 14) / (s³ + 7s² + 14s + 8) = 1 + Y₁(s)   (3.3)
Now Y₁(s) can be expanded into partial fractions.
Step 2: Factor the polynomial in the denominator of Y₁(s), so that
s³ + 7s² + 14s + 8 = (s + 1)(s + 2)(s + 4)
Step 3:
Y₁(s) = (s² + 12s + 14) / (s³ + 7s² + 14s + 8) = A/(s + 1) + B/(s + 2) + C/(s + 4)   (3.4)
The partial fraction expansion is complete when we have evaluated the constants A, B, and C. To evaluate A, multiply both sides by (s + 1), so that
(s + 1)(s² + 12s + 14) / [(s + 1)(s + 2)(s + 4)] = A + B(s + 1)/(s + 2) + C(s + 1)/(s + 4)   (3.5)
Since Eq. (3.5) holds for all values of s, we may let s = −1 and solve for A. Then
A = [(−1)² + 12(−1) + 14] / [(−1 + 2)(−1 + 4)] = 3/3 = 1
Similarly, letting s = −2,
B = [(−2)² + 12(−2) + 14] / [(−2 + 1)(−2 + 4)] = −6/−2 = 3
and letting s = −4,
C = [(−4)² + 12(−4) + 14] / [(−4 + 1)(−4 + 2)] = −18/6 = −3
so that
Y₁(s) = (s² + 12s + 14) / (s³ + 7s² + 14s + 8) = 1/(s + 1) + 3/(s + 2) − 3/(s + 4)   (3.6)
and
Y(s) = 1 + 1/(s + 1) + 3/(s + 2) − 3/(s + 4)   (3.7)
whose inverse transform is (from the tables)
y(t) = δ(t) + e^{−t} + 3e^{−2t} − 3e^{−4t},  t ≥ 0   (3.8)

Example 3.2
Consider another example,
Y(s) = (s² + 3s + 1) / [s(s + 1)³]   (3.9)

Because of the third-order pole at s = −1, we write the expansion
Y(s) = (s² + 3s + 1) / [s(s + 1)³] = A/s + B₁/(s + 1) + B₂/(s + 1)² + B₃/(s + 1)³   (3.10)
The two constants A and B₃ can be determined as before:
A = [s(s² + 3s + 1) / (s(s + 1)³)]_{s=0} = 1
B₃ = [(s + 1)³(s² + 3s + 1) / (s(s + 1)³)]_{s=−1} = 1
The other coefficients are found by differentiating. To find B₂, we differentiate once:
d/ds[(s + 1)³ Y(s)] = d/ds[(s² + 3s + 1)/s] = [s(2s + 3) − (s² + 3s + 1)]/s² = (s² − 1)/s²
B₂ = [(s² − 1)/s²]_{s=−1} = 0
For B₁, we differentiate again:
d²/ds²[(s + 1)³ Y(s)] = d/ds[(s² − 1)/s²] = [s²(2s) − (s² − 1)(2s)]/s⁴ = 2/s³
and, since the second derivative of (s + 1)³Y(s) equals 2B₁ at s = −1 (the A and B₃ terms vanish there, and the B₂ term differentiates away),
2B₁ = [2/s³]_{s=−1} = −2
so that B₁ = −1. Therefore
Y(s) = 1/s − 1/(s + 1) + 1/(s + 1)³   (3.11)
and from the transform tables
y(t) = 1 − e^{−t} + (t²/2) e^{−t},  t ≥ 0   (3.12)

Example 3.3
Consider another system with complex poles:
Y(s) = (s + 1) / [(s + 2){(s + 2)² + 2²}] = A/(s + 2) + (Bs + C)/[(s + 2)² + 4]   (3.13)

The first constant A is easily found:
A = [(s + 1)/((s + 2)² + 4)]_{s=−2} = −1/4   (3.14)
To evaluate B and C, we follow the method outlined in Eqs. (3.16)–(3.20). In that notation, Y₁(s) = (s + 1)/(s + 2), α = 2 and β = 2. Using Eq. (3.20),
Re{(1/j2) Y₁(−2 + j2)(s + 2 + j2)} = Re{(1/j2) · [(−2 + j2 + 1)/(−2 + j2 + 2)] · (s + 2 + j2)} = (1/4)(s + 6)
Thus
Y(s) = −1/[4(s + 2)] + (1/4)(s + 6)/[(s + 2)² + 2²]   (3.15)
This function is inverted directly from the tables as follows:
y(t) = L⁻¹[Y(s)] = L⁻¹[−1/(4(s + 2))] + L⁻¹[(1/4)(s + 2)/((s + 2)² + 2²)] + L⁻¹[(1/4)·4/((s + 2)² + 2²)]
y(t) = −(1/4)e^{−2t} + (1/4)e^{−2t}cos(2t) + (1/2)e^{−2t}sin(2t),  t ≥ 0

Method: Let
Y(s) = Y₁(s) / [(s + α)² + β²]   (3.16)

where Y₁(s) is a rational fraction which contains the remaining terms in Y(s). Then
Y(s) = Y₁(s)/[(s + α)² + β²] = A/[s + (α + jβ)] + B/[s + (α − jβ)] + ...   (3.17)
The first two terms can be combined as
{A[s + (α − jβ)] + B[s + (α + jβ)]} / [(s + α)² + β²]   (3.18)
But since Y(s) has real coefficients, A and B must be complex conjugates of each other and their sum must be real. In fact, the sum of a complex number (x + jy) and its conjugate (x − jy) is twice the real part, 2x = 2Re(x + jy). Since A[s + (α − jβ)] is the complex conjugate of B[s + (α + jβ)] for real s, we can write
Y(s) = 2Re{A[s + (α − jβ)]} / [(s + α)² + β²]   (3.19)
Evaluating the coefficient A,
A = [(s + α + jβ) Y₁(s)/((s + α)² + β²)]_{s = −α−jβ} = Y₁(−α − jβ)/(−2jβ)
so that
Y(s) = Re{(1/jβ) Y₁(−α + jβ)[s + α + jβ]} / [(s + α)² + β²]   (3.20)

Example 3.4
Find the inverse transform of

Y(s) = (s² + s − 10) / [(s + 1)²(s² + 2²)]   (3.21)
Y(s) is first expanded into partial fractions:
(s² + s − 10) / [(s + 1)²(s² + 2²)] = A/(s + 1)² + B/(s + 1) + (Cs + D)/(s² + 2²)   (3.22)
The coefficients are then evaluated as follows:
A = [(s² + s − 10)/(s² + 2²)]_{s=−1} = −2
The second coefficient, at s = −1,
B = d/ds[(s² + s − 10)/(s² + 2²)]_{s=−1} = −1
Now obtain C and D from Eq. (3.20), with Y₁(s) = (s² + s − 10)/(s + 1)², α = 0 and β = 2:
Y(s) = ... + Re{(1/j2) · [(−4 + j2 − 10)/(j2 + 1)²] · (s + j2)} / (s² + 2²) = ... + (s + 2)/(s² + 2²)   (3.23)
Therefore
Y(s) = −2/(s + 1)² − 1/(s + 1) + (s + 2)/(s² + 2²)   (3.24)
y(t) = −2t e^{−t} − e^{−t} + cos(2t) + sin(2t),  t ≥ 0   (3.25)
Finally, one may use the method familiar from high-school mathematics, in which one writes simultaneous algebraic equations in the unknown constants by equating coefficients of like powers of s.
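The partial-fraction expansions worked out above can be reproduced mechanically. The sketch below is not from the original text; it uses SymPy's apart() and inverse Laplace transform on the examples of this section.

```python
# Reproduce the expansions of Examples 3.1 and 3.4 with SymPy.
import sympy as sp

t, s = sp.symbols('t s', positive=True)

# Example 3.1, Eq. (3.2)
Y = (s**3 + 8*s**2 + 26*s + 22) / (s**3 + 7*s**2 + 14*s + 8)
print(sp.apart(Y, s))        # cf. Eq. (3.7): 1 + 1/(s+1) + 3/(s+2) - 3/(s+4)

# Example 3.4, Eq. (3.21)
Y4 = (s**2 + s - 10) / ((s + 1)**2 * (s**2 + 4))
print(sp.apart(Y4, s))                           # cf. Eq. (3.24)
print(sp.inverse_laplace_transform(Y4, s, t))    # cf. Eq. (3.25)
```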

3.3 INVERSION INTEGRAL METHOD
A method of finding inverse transforms which is distinct from the partial fraction method is the method of residues. Associated with each pole of a function of a complex variable is a particular coefficient in the series expansion of the function around the pole. The method of residues states: if F(s) is a rational fraction, then
L⁻¹[F(s)] = Σ over all poles of [residues of F(s)e^{st}]   (3.26)
where the residue at an nth-order pole at s = s₁ is given by
R_{s₁} = [1/(n − 1)!] d^{n−1}/ds^{n−1} [(s − s₁)ⁿ F(s)e^{st}]_{s=s₁}   (3.27)

Example 3.5
Find the inverse Laplace transform of
F(s) = (2s² + 3s + 3) / [(s + 1)(s + 2)(s + 3)]   (3.28)
The residues of F(s)e^{st} are
R₁ = [(2s² + 3s + 3)e^{st}/((s + 2)(s + 3))]_{s=−1} = e^{−t}
R₂ = [(2s² + 3s + 3)e^{st}/((s + 1)(s + 3))]_{s=−2} = −5e^{−2t}
R₃ = [(2s² + 3s + 3)e^{st}/((s + 1)(s + 2))]_{s=−3} = 6e^{−3t}
f(t) = L⁻¹[F(s)] = e^{−t} − 5e^{−2t} + 6e^{−3t}   (3.29)

Example 3.6
Find the inverse Laplace transform of
F(s) = 1/(s + α)⁴
L⁻¹[1/(s + α)⁴] = [1/3!] d³/ds³ [e^{st}]_{s=−α} = [t³/3!] e^{st}|_{s=−α}
f(t) = (t³/3!) e^{−αt}   (3.30)

3.4 POLES AND ZEROS
A rational algebraic function is one whose numerator and denominator can be factorized as below:
(s − z₁)(s − z₂)...(s − zₙ) / [(s − p₁)(s − p₂)...(s − pₘ)]   (3.31)
The roots z₁, ..., zₙ of the numerator are called the zeros of the function, and the roots p₁, ..., pₘ of the denominator are called the poles of the function. Consider
Y(s) = (s² + 2s + 1) / [(s + 1)(s + 2)²(s + 3)]   (3.32)
We have previously observed that y(t) will have terms like e^{−t}, e^{−2t}, e^{−3t}, which depend upon the nature of the poles.
Consider
(s + 1) / [s(s + 2)(s + 3)]
The location of the poles gives an idea of the nature of the behaviour of the solution. The response component due to a pole in the left-half plane dies out after some time. A pole on the imaginary axis (non-zero) gives an oscillatory response component. A pole in the right-half plane results in an ever-increasing response. The contribution of a zero towards the solution is in the form of amplitude and phase shift.
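The qualitative behaviour described above can be checked numerically by locating the poles of a given rational function. The sketch below is not from the original text; it classifies the poles of Eq. (3.32) with NumPy.

```python
# Locate and classify the poles of Y(s) in Eq. (3.32).
import numpy as np

# denominator of Eq. (3.32): (s + 1)(s + 2)^2 (s + 3)
den = np.polymul(np.polymul([1, 1], [1, 4, 4]), [1, 3])
poles = np.roots(den)

for p in poles:
    if p.real < 0:
        kind = 'decaying'
    elif p.real == 0:
        kind = 'oscillatory or constant'
    else:
        kind = 'growing'
    print(f'pole at {p.real:.2f}{p.imag:+.2f}j: {kind} response component')
```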
3.5 THE SOLUTION OF ORDINARY DIFFERENTIAL EQUATIONS WITH CONSTANT COEFFICIENTS

Example 3.7
Consider the differential equation
d²x/dt² + 3 dx/dt + 2x = 0,  x(0) = −2, ẋ(0) = 1   (3.33)
Transforming,
s²X(s) − s x(0) − ẋ(0) + 3sX(s) − 3x(0) + 2X(s) = 0
s²X(s) + 2s − 1 + 3sX(s) + 6 + 2X(s) = 0   (3.34)
(s² + 3s + 2)X(s) = −(2s + 5)
X(s) = −(2s + 5)/(s² + 3s + 2) = −(2s + 5)/[(s + 1)(s + 2)]   (3.35)
or
X(s) = 1/(s + 2) − 3/(s + 1)
L⁻¹[X(s)] = x(t) = e^{−2t} − 3e^{−t},  t ≥ 0   (3.36)

Example 3.8
Find the solution of the following differential equation:
d²x/dt² + 4x = 0,  for t > 0, x(0) = 2 and ẋ(0) = 4   (3.37)
s²X(s) − s x(0) − ẋ(0) + 4X(s) = 0   (3.38)
s²X(s) − 2s − 4 + 4X(s) = 0   (3.39)
X(s) = (2s + 4)/(s² + 4)   (3.40)
X(s) = 2s/(s² + 4) + 4/(s² + 4)   (3.41)
x(t) = 2cos(2t) + 2sin(2t)   (3.42)

Example 3.9
d²x/dt² + 4 dx/dt + 5x = 0,  for t > 0, x(0) = 0, ẋ(0) = 2
s²X(s) − s x(0) − ẋ(0) + 4sX(s) − 4x(0) + 5X(s) = 0
s²X(s) − 2 + 4sX(s) + 5X(s) = 0
X(s) = 2/(s² + 4s + 5) = 2/[(s + 2)² + 1]
x(t) = 2e^{−2t} sin(t)   (3.43)
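The solutions of Examples 3.7–3.9 can be checked with an ODE solver. The sketch below is not from the original text; it verifies Example 3.9 with SymPy's dsolve.

```python
# Check Example 3.9: x'' + 4x' + 5x = 0, x(0) = 0, x'(0) = 2.
import sympy as sp

t = sp.symbols('t', positive=True)
x = sp.Function('x')

sol = sp.dsolve(x(t).diff(t, 2) + 4*x(t).diff(t) + 5*x(t),
                x(t), ics={x(0): 0, x(t).diff(t).subs(t, 0): 2})
print(sol)   # expected: x(t) = 2*exp(-2*t)*sin(t), cf. Eq. (3.43)
```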


PROBLEMS
3.1 Find the inverse Laplace transforms of the following functions by partial fractions:
a) 1/[s(s + a)(s + b)]        b) 1/[s²(s + a)]
c) 1/{s[(s + a)² + b²]}       d) (s² + a²)/[s²(s² + b²)]
e) (s² + 3s + 1)/(s² + 5s + 1)  f) (s² + 3s + 1)/{s(s + 1)[(s + 1)² + 1]}
3.2 Find the inverse Laplace transforms using the inversion integral for the following:
a) (s + 2)/[s(s + b)]         b) (s + 1)/[s(s² + 1)²(s + 2)]
c) (s² + a²)/[s²(s + a)]      d) 1/(s + 1)⁵
3.3 Find the solution of each of the following differential equations:
a) dx/dt + a₁x = a₂ + a₃t,  x(0) = x₀
b) dx/dt + a₁x = a₂δ(t),  x(0) = x₀
3.4 Solve the following equations:
a) d²x/dt² + 5 dx/dt + 6x = 0,  for t > 0, x(0) = 0, (dx/dt)(0) = 1
b) d²x/dt² + ω²x = 0,  for t > 0, x(0) = a, (dx/dt)(0) = b
c) d⁴x/dt⁴ + 6 d³x/dt³ + 13 d²x/dt² + 12 dx/dt + 4x = δ(t),  for t > 0
d) d³x/dt³ + 2 d²x/dt² + 4 dx/dt + 4x = δ(t),  for t > 0
with all initial conditions zero for (c) and (d).
3.5 Find the time solution of the following differential equation:
d²x/dt² + 4 dx/dt + 4x = f(t),  for t > 0
where (i) f(t) = a u(t), (ii) f(t) = e^{−at}u(t).
3.6 Find the solution of the following integro-differential equations:
a) dx/dt + 4x + 4∫x dt = at
b) d²x/dt² + 2 dx/dt − x − 2∫x dt = u(t)
where all initial conditions are zero except x(0) = x₀.



CHAPTER - IV LAPLACE TRANSFORM THEOREMS


In this chapter we present important theorems which help in obtaining the solution of practical problems.

4.1 LINEARITY
Theorem 4.1: If a and b are constants and f(t) and g(t) are transformable functions, then
L[af(t) + bg(t)] = aL[f(t)] + bL[g(t)]   (4.1)
This theorem expresses the linearity of the Laplace transformation.
Proof:
∫_0^∞ [af(t) + bg(t)] e^{−st} dt = a∫_0^∞ f(t) e^{−st} dt + b∫_0^∞ g(t) e^{−st} dt = aF(s) + bG(s)   (4.2)

Example 4.1
L[3t + 5] = L[3t] + L[5] = 3/s² + 5/s = (5s + 3)/s²   (4.3)

Example 4.2
L[sinh(bt)] = L[(e^{bt} − e^{−bt})/2]   (4.4)
= (1/2)L[e^{bt}] − (1/2)L[e^{−bt}] = 1/[2(s − b)] − 1/[2(s + b)] = b/(s² − b²)   (4.5)

4.2 COMPLEX TRANSLATION

Theorem 4.2: If f(t) is transformable with transform F(s), then e^{at}f(t) is also transformable, where a is a real constant, and has the transform F(s − a), and conversely.
Proof: By definition,
F(s) = ∫_0^∞ f(t) e^{−st} dt,  Re{s} > c (say)   (4.6)
so that, by comparison,
∫_0^∞ e^{at} f(t) e^{−st} dt = ∫_0^∞ e^{−(s−a)t} f(t) dt = F(s − a),  Re{s} > c + a   (4.7)

Example 4.3
L[cos(ωt)] = s/(s² + ω²)
From the theorem,
L[e^{−at} cos(ωt)] = (s + a)/[(s + a)² + ω²]

Example 4.4
L[u(t)] = 1/s
From the theorem,
L[e^{bt}] = 1/(s − b)   (4.8)

Example 4.5
L[t] = 1/s²
From the theorem,
L[t e^{−at}] = 1/(s + a)²

4.3 REAL TRANSLATION

Theorem 4.3a: Real Translation (Right): If f(t) is a transformable function with transform F(s) and a is a non-negative real number, then f(t − a), or more correctly f(t − a)u(t − a), is a transformable function with transform e^{−as}F(s), and conversely. This theorem states that translation in the real domain corresponds to multiplication by an exponential in the transform domain.
Proof: Let g(t) = f(t − a)u(t − a), a > 0. Then
G(s) = ∫_0^∞ e^{−st} g(t) dt = ∫_0^∞ e^{−st} f(t − a)u(t − a) dt   (4.9)
Now let v = t − a:
G(s) = ∫_0^∞ e^{−s(v+a)} f(v)u(v) dv = e^{−as} ∫_0^∞ e^{−sv} f(v) dv = e^{−as} F(s)   (4.10)

Example 4.6
Let f(t) = u(t − a), a > 0. Now
L[u(t)] = 1/s
From the theorem,
L[f(t)] = L[u(t − a)] = e^{−as}/s   (4.11)

Theorem 4.3b: Real Translation (Left): If f(t) is a transformable function with transform F(s) and a is a non-negative real number for which f(t + a) = 0 for t < 0, then f(t + a) is transformable with transform e^{as}F(s).

4.4 REAL DIFFERENTIATION
Theorem 4.4: If f(t) and its derivative df(t)/dt are both transformable functions, then their transforms are related by the equation
L[df(t)/dt] = sL[f(t)] − f(0)   (4.12)
Hence, differentiation in the time domain corresponds to multiplication by s in the s-domain.
Proof:
L[df(t)/dt] = ∫_0^∞ (df/dt) e^{−st} dt   (4.13)
Integrating by parts,
= [e^{−st} f(t)]_0^∞ − ∫_0^∞ (−s e^{−st}) f(t) dt   (4.14)
Since f(t) is transformable, the first expression above vanishes at the upper limit and we obtain
L[df/dt] = −f(0) + sF(s)   (4.15)

Example 4.7
Let f(t) = e^{at} for t > 0. Then df/dt = a e^{at} for t > 0 and f(0) = 1, so
L[df/dt] = sL[e^{at}] − 1 = s/(s − a) − 1 = a/(s − a)   (4.16)
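Theorem 4.4 can be checked symbolically for a specific function. The sketch below is not from the original text; the value a = −3/2 is an arbitrary assumption.

```python
# Check L[df/dt] = s*F(s) - f(0) for f(t) = exp(a*t).
import sympy as sp

t, s = sp.symbols('t s', positive=True)
a = sp.Rational(-3, 2)                      # assumed value
f = sp.exp(a * t)

F = sp.laplace_transform(f, t, s, noconds=True)
lhs = sp.laplace_transform(sp.diff(f, t), t, s, noconds=True)
rhs = s * F - f.subs(t, 0)
print(sp.simplify(lhs - rhs))               # expected: 0
```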

Theorem 4.4 can be extended to higher-order derivatives. For example, if g(t) = df(t)/dt, we obtain
L[d²f/dt²] = L[dg/dt] = sL[g(t)] − g(0) = sL[df/dt] − (df/dt)(0)   (4.17)
so that
L[d²f/dt²] = s[sF(s) − f(0)] − (df/dt)(0) = s²F(s) − sf(0) − (df/dt)(0)   (4.18)
Thus, if f(t) and its first derivative exist, and if the nth derivative is transformable, then the kth derivatives are also transformable (k = 0, 1, 2, ..., n − 1) and the following relation holds, where f^{(n)}(t) = dⁿf/dtⁿ:
L[f^{(n)}(t)] = sⁿL[f(t)] − s^{n−1}f(0) − ... − f^{(n−1)}(0)
             = sⁿL[f(t)] − s^{n−1}f(0) − Σ_{k=1}^{n−1} s^{n−k−1} f^{(k)}(0)   (4.19)

4.5 REAL INTEGRATION
Theorem 4.5: Real Integration (Definite): If f(t) is transformable, its definite integral ∫_0^t f(u) du is transformable. The transforms are related by the equation
L[∫_0^t f(u) du] = (1/s) F(s)   (4.20)
Proof: Let g(t) = ∫_0^t f(u) du. Note that g(0) = 0 and dg/dt = f(t). From Theorem 4.4,
L[dg/dt] = sL[g(t)] − g(0)
so that
F(s) = sG(s),  G(s) = (1/s) F(s)   (4.21)
This states that integration in the real domain corresponds to division by s in the transform domain.

Example 4.8
Since
1 − cos(ωt) = ω ∫_0^t sin(ωu) du   (4.22)
it follows from Theorem 4.5 that
L[1 − cos(ωt)] = (ω/s) L[sin(ωt)] = ω²/[s(s² + ω²)]
Also, directly,
L[1 − cos(ωt)] = 1/s − s/(s² + ω²) = ω²/[s(s² + ω²)]

Example 4.9
Let f(t) have the transform F(s); then by Theorem 4.5, F(s)/s² is the transform of the function
g(t) = ∫_0^t ∫_0^τ f(u) du dτ
Changing the order of integration,
g(t) = ∫_0^t f(u) [∫_u^t dτ] du = ∫_0^t (t − u) f(u) du   (4.23)

Further, let f(t) have the transform F(s); then by Theorem 4.5, F(s)/s³ is the transform of the function
b(t) = ∫_0^t g(τ) dτ = ∫_0^t ∫_0^τ (τ − u) f(u) du dτ
     = ∫_0^t f(u) [∫_u^t (τ − u) dτ] du
     = ∫_0^t [(t − u)²/2!] f(u) du   (4.24)
If the results of the above two examples are tabulated as transform pairs, the extension of these results gives the following table.

Transform              Function
F(s)                   f(t)
(1/s) F(s)             ∫_0^t f(u) du
(1/s²) F(s)            ∫_0^t (t − u) f(u) du
...                    ...
(1/s^{n+1}) F(s)       ∫_0^t [(t − u)ⁿ/n!] f(u) du

Frequently, it is more desirable to deal with indefinite integrals, such as
g(t) = ∫_{−∞}^t f(u) du
or
g(t) = ∫_{−∞}^t f(u) du = ∫_{−∞}^0 f(u) du + ∫_0^t f(u) du = g(0) + ∫_0^t f(u) du
so that
L[g(t)] = g(0)/s + (1/s) L[f(t)]   (4.25)
Summarizing the above: if f(t) is transformable, its integral f^{(−1)}(t) = ∫_{−∞}^t f(u) du is transformable and the transforms are related by the equation
L[f^{(−1)}(t)] = L[∫_{−∞}^t f(u) du] = (1/s) L[f(t)] + f^{(−1)}(0)/s   (4.26)

4.6 COMPLEX DIFFERENTIATION
Theorem 4.6: If f(t) is transformable with transform F(s), then tf(t) is also transformable and has the transform −d[F(s)]/ds.
Proof: By definition,
F(s) = ∫_0^∞ e^{−st} f(t) dt   (4.27)
so that
dF/ds = d/ds ∫_0^∞ e^{−st} f(t) dt = ∫_0^∞ (d/ds)(e^{−st}) f(t) dt = ∫_0^∞ (−t) e^{−st} f(t) dt = −∫_0^∞ e^{−st}[t f(t)] dt
Hence
L[t f(t)] = −dF/ds   (4.28)

Example 4.10
Using the result of Example 1.2, L[u(t)] = 1/s. It follows that
L[t] = −d/ds (1/s) = 1/s²
L[t²] = −d/ds (1/s²) = 2!/s³
and
L[tⁿ] = (−1)ⁿ dⁿ/dsⁿ (1/s) = n!/s^{n+1}   (4.29)

Example 4.11
L[sin(ωt)] = ω/(s² + ω²)
L[t sin(ωt)] = −d/ds [ω/(s² + ω²)] = 2ωs/(s² + ω²)²   (4.30)

Example 4.12
L[t e^{−αt}] = −d/ds [1/(s + α)] = 1/(s + α)²   (4.31)

4.7 COMPLEX INTEGRATION
Theorem 4.7: If both f(t) and f(t)/t are transformable functions and the transform of f(t) is F(s), then the transform of f(t)/t is related to it by the equation
L[f(t)/t] = ∫_s^∞ F(σ) dσ   (4.32)
Hence division by t in the real domain is equivalent to integration in the transform domain.
Proof: By definition,
F(s) = ∫_0^∞ e^{−st} f(t) dt
∫_s^∞ F(σ) dσ = ∫_s^∞ ∫_0^∞ e^{−σt} f(t) dt dσ
Assuming the validity of changing the order of integration,
= ∫_0^∞ f(t) [∫_s^∞ e^{−σt} dσ] dt = ∫_0^∞ [f(t)/t] e^{−st} dt = L[f(t)/t]

Example 4.13
L[sin(ωt)] = ω/(s² + ω²)   (4.33)
It follows that
L[sin(ωt)/t] = ∫_s^∞ ω/(σ² + ω²) dσ = [tan⁻¹(σ/ω)]_s^∞ = π/2 − tan⁻¹(s/ω) = cot⁻¹(s/ω) = tan⁻¹(ω/s)   (4.34)

4.8 SECOND INDEPENDENT VARIABLE
Suppose that, for a particular α, f(t, α) is transformable; the transform will also (in general) be a function of that parameter. That is,
F(s, α) = ∫_0^∞ e^{−st} f(t, α) dt
If α is made to vary, then under suitable conditions
lim_{α→α₀} F(s, α) = lim_{α→α₀} ∫_0^∞ e^{−st} f(t, α) dt = ∫_0^∞ e^{−st} [lim_{α→α₀} f(t, α)] dt   (4.35)
∂F(s, α)/∂α = ∫_0^∞ e^{−st} [∂f(t, α)/∂α] dt
and
∫_{α₀}^{α₁} F(s, α) dα = ∫_{α₀}^{α₁} ∫_0^∞ e^{−st} f(t, α) dt dα = ∫_0^∞ e^{−st} [∫_{α₀}^{α₁} f(t, α) dα] dt   (4.36)
Hence the following theorem.
Theorem 4.8: If f(t, α) is a transformable function with respect to t with a second independent variable α, then under suitable conditions the following relations hold:
lim_{α→α₀} L[f(t, α)] = L[lim_{α→α₀} f(t, α)]
∂/∂α {L[f(t, α)]} = L[∂f(t, α)/∂α]
∫_{α₀}^{α₁} L[f(t, α)] dα = L[∫_{α₀}^{α₁} f(t, α) dα]   (4.37)

Example 4.14
Since
L[e^{−αt}] = 1/(s + α)   (4.38)
it follows from Theorem 4.8 that
L[u(t)] = L[lim_{α→0} e^{−αt}] = lim_{α→0} 1/(s + α) = 1/s   (refer to Example 1.2)
L[t e^{−αt}] = L[−∂e^{−αt}/∂α] = −∂/∂α [1/(s + α)] = 1/(s + α)²   (4.39)
L[(1 − e^{−αt})/t] = L[∫_0^α e^{−βt} dβ] = ∫_0^α 1/(s + β) dβ = [log(s + β)]_0^α = log[(s + α)/s]   (4.40)

4.9 PERIODIC FUNCTIONS
A function f(t) is periodic of period T if f(t + T) = f(t) for every t. Since
f(t + 2T) = f(t + T + T) = f(t + T) = f(t)   (4.41)
it follows that if f(t) has period T,
f(t + kT) = f(t)   (4.42)
Theorem 4.9: If f(t) is a transformable periodic function of period T, then its transform may be found by integration over the first period according to the formula
L[f(t)] = [∫_0^T e^{−st} f(t) dt] / (1 − e^{−sT})   (4.43)
Proof:
∫_0^∞ e^{−st} f(t) dt = ∫_0^T e^{−st} f(t) dt + ∫_T^{2T} e^{−st} f(t) dt + ... + ∫_{kT}^{(k+1)T} e^{−st} f(t) dt + ...
                      = Σ_{k=0}^{∞} ∫_{kT}^{(k+1)T} e^{−st} f(t) dt
Making the change of variable t = u + kT, so that dt = du,
∫_0^∞ e^{−st} f(t) dt = Σ_{k=0}^{∞} ∫_0^T e^{−s(u+kT)} f(u + kT) du   (4.44)
                      = Σ_{k=0}^{∞} e^{−skT} ∫_0^T e^{−su} f(u) du
                      = [∫_0^T e^{−su} f(u) du] / (1 − e^{−sT})   (4.45)

Example 4.15
Consider the periodic function of Fig. 4.1. Let f(t) be the square wave
f(t) = 1 for 2n ≤ t < 2n + 1, and 0 for 2n + 1 ≤ t < 2n + 2,  n = 0, 1, 2, ...
L[f(t)] = [1/(1 − e^{−2s})] ∫_0^2 f(t) e^{−st} dt
        = [1/(1 − e^{−2s})] ∫_0^1 e^{−st} dt
        = [1/(1 − e^{−2s})] [−e^{−st}/s]_0^1
        = (1 − e^{−s}) / [s(1 − e^{−s})(1 + e^{−s})]
        = 1 / [s(1 + e^{−s})]   (4.46)


Example 4.16
Consider the waveform of Fig. 4.2 and transform it into the Laplace domain:
f(t) = u(t) − 2u(t − 1) + u(t − 2)
L[f(t)] = 1/s − 2e^{−s}/s + e^{−2s}/s = (1 − e^{−s})²/s
Define the square wave as f₁(t) = Σ_{k=0}^{∞} f(t − 2k), with a period of T = 2, so that by Theorem 4.9
L[f₁(t)] = F₁(s) = (1 − e^{−s})² / [s(1 − e^{−2s})]

Example 4.17
The function f(t) = |sin t| is a rectified sine wave and is periodic with period T = π. A single period of f(t) corresponds to a half period of the function sin t. Using the symmetry of sin t, we obtain the first half period by shifting sin t to the right by π units and adding the result to sin t, so that
f₁(t) = sin t + sin(t − π)u(t − π)
L[f₁(t)] = F(s) = (1 + e^{−πs})/(s² + 1)
and
L[|sin t|] = (1 + e^{−πs}) / [(s² + 1)(1 − e^{−πs})]

4.10 CHANGE OF SCALE
Theorem 4.10: If the function f(t) is transformable with L[f(t)] = F(s), and a is a positive constant (or a variable independent of t and s), then
L[f(t/a)] = aF(as)   (4.47)
Hence division of the variable by a constant in the real domain results in multiplication of both the transform F(s) and the transform variable s by the same constant. We make the change of variable τ = t/a in the integral definition of the transform:
F(s) = ∫_0^∞ f(τ) e^{−sτ} dτ   (4.48)


After changing the variable, we obtain
F(as) = ∫_0^∞ f(t/a) e^{−as(t/a)} d(t/a)   (4.49)
Rearranging,
aF(as) = ∫_0^∞ f(t/a) e^{−st} dt   (4.50)

Example 4.18
Given the transform pair

(s + 50)/[(s + 50)² + 10⁴] = L[e^{−50t} cos(100t)]
in which t is in seconds. Suppose we wish to find the transform pair in which t is measured in milliseconds. This is accomplished by letting a = 10³:
L[e^{−50t/1000} cos(10⁻¹t)] = 1000(1000s + 50)/[(1000s + 50)² + 10⁴] = (s + 0.05)/[(s + 0.05)² + 10⁻²]   (4.51)

4.11 REAL CONVOLUTION
Suppose f(t) and g(t) have transforms F(s) and G(s) respectively. Is there a function h(t) with transform F(s)·G(s) and, if so, how is it related to f(t) and g(t)?
Theorem 4.11: If f(t) and g(t) have transforms F(s) and G(s) respectively, the product F(s)·G(s) is the transform of
h(t) = ∫_0^t f(v) g(t − v) dv   (4.52)

h(t) is termed the real convolution of the functions f(t) and g(t) and is usually written as f(t)*g(t). Since the convolution is symmetric,
h(t) = f(t)*g(t) = g(t)*f(t)   (4.53)
This theorem states that the product of two functions of s is the Laplace transform of the convolution of the two respective functions of time t.
Proof:
F(s)G(s) = ∫_0^∞ e^{−sv} f(v) dv ∫_0^∞ e^{−su} g(u) du = ∫_0^∞ ∫_0^∞ e^{−s(u+v)} f(v) g(u) du dv   (4.54)
For a fixed v, let t = u + v, dt = du; t = v at u = 0 and t = ∞ at u = ∞, so that
F(s)G(s) = ∫_0^∞ ∫_v^∞ e^{−st} f(v) g(t − v) dt dv
         = ∫_0^∞ ∫_0^∞ e^{−st} f(v) g(t − v) dv dt,  since g(t − v) = 0 for t < v
         = ∫_0^∞ e^{−st} h(t) dt
where
h(t) = ∫_0^∞ f(v) g(t − v) dv = ∫_0^t f(v) g(t − v) dv,  since g(t) = 0 for t < 0   (4.55)

Example 4.19
L[t] = 1/s² and L[sin(ωt)] = ω/(s² + ω²), so that
h(t) = ∫_0^t (t − u) sin(ωu) du = ∫_0^t u sin ω(t − u) du
has the transform
H(s) = (1/s²)·ω/(s² + ω²) = ω/[s²(s² + ω²)]   (4.56)
Evaluating the integral directly,
h(t) = [−(t − u) cos(ωu)/ω]_0^t − ∫_0^t cos(ωu)/ω du = t/ω − sin(ωt)/ω²
H(s) = (1/ω)(1/s²) − (1/ω²)·ω/(s² + ω²) = ω/[s²(s² + ω²)]   (4.57)

Example 4.20
We are given the transform
F(s) = s/[(s + a)(s² + 1)]
and are to find the inverse transform f(t). From the known pairs
1/(s + a) = L[e^{−at}];  s/(s² + 1) = L[cos t]
we deduce that
f(t) = e^{−at} * cos t = ∫_0^t e^{−a(t−u)} cos u du = e^{−at} ∫_0^t e^{au} cos u du
     = [1/(a² + 1)][a cos t + sin t − a e^{−at}]   (4.58)
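The convolution result of Eq. (4.58) can be checked numerically. The sketch below is not from the original text; the value a = 2 is an arbitrary assumption.

```python
# Numerical check of e^{-at} * cos t against Eq. (4.58).
import numpy as np

a = 2.0                                    # assumed
dt = 1e-3
t = np.arange(0.0, 5.0, dt)

f1, f2 = np.exp(-a * t), np.cos(t)
conv = np.convolve(f1, f2)[:len(t)] * dt   # Riemann-sum convolution

exact = (a * np.cos(t) + np.sin(t) - a * np.exp(-a * t)) / (a**2 + 1)  # Eq. (4.58)
print('max error =', np.max(np.abs(conv - exact)))
```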

4.12 FINAL VALUE THEOREM
Theorem 4.12: If f(t) and its derivative are transformable, if f(t) has the transform F(s), and if all the singularities of sF(s) are in the left half plane, then
lim_{s→0} sF(s) = lim_{t→∞} f(t)
This theorem states that the behaviour of sF(s) near the origin of the s-plane corresponds to the behaviour of f(t) for large t (t → ∞).
Proof:
lim_{s→0} sF(s) = lim_{s→0} [∫_0^∞ e^{−st} (df/dt) dt + f(0)]   (see Eq. (4.13))
               = ∫_0^∞ (df/dt) dt + f(0) = [f(t)]_0^∞ + f(0) = lim_{t→∞} f(t) − f(0) + f(0)
lim_{s→0} sF(s) = lim_{t→∞} f(t)   (4.59)

4.13 INITIAL VALUE THEOREM
Theorem 4.13: This theorem states that the behaviour of sF(s) near the point at infinity in the s-plane corresponds to the behaviour of f(t) near t = 0.
Proof:
sF(s) = ∫_0^∞ e^{−st} (df/dt) dt + f(0)   (4.60)
lim_{s→∞} sF(s) = lim_{s→∞} ∫_0^∞ e^{−st} (df/dt) dt + f(0) = ∫_0^∞ [lim_{s→∞} e^{−st}] (df/dt) dt + f(0)
lim_{s→∞} sF(s) = f(0)   (4.61)

Example 4.21
Let
F(s) = 1/[s(s + α)]
sF(s) = 1/(s + α)   (4.62)
lim_{s→0} sF(s) = 1/α = lim_{t→∞} f(t)
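The final value theorem can be checked symbolically for Example 4.21. The sketch below is not from the original text; α = 3 is an arbitrary assumption.

```python
# Check lim_{s->0} s*F(s) against lim_{t->oo} f(t) for F(s) = 1/(s(s+alpha)).
import sympy as sp

t, s = sp.symbols('t s', positive=True)
alpha = 3                                   # assumed
F = 1 / (s * (s + alpha))

final_from_s = sp.limit(s * F, s, 0)
f = sp.inverse_laplace_transform(F, s, t)
final_from_t = sp.limit(f, t, sp.oo)
print(final_from_s, final_from_t)           # both should equal 1/3
```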

Example 4.22
F(s) = s/(s² + ω²),  sF(s) = s²/(s² + ω²)
This does not satisfy the condition of Theorem 4.12, since its singularities lie on the imaginary axis.

4.14 EXTENSION TO COMPLEX FUNCTIONS
Theorem 4.14: If f(t) is a complex-valued transformable function, then its real and imaginary parts are transformable, and the operations of transforming and of taking real and imaginary parts are commutative. It follows from linearity that
L[Re f(t)] = Re L[f(t)]
L[Im f(t)] = Im L[f(t)]

Example 4.23
Consider the following transform pair:
L[e^{jωt}] = 1/(s − jω)   (4.63)
But by Theorem 4.14,
L[e^{jωt}] = L[cos(ωt) + j sin(ωt)] = L[cos(ωt)] + jL[sin(ωt)]
and since L[cos(ωt)] = s/(s² + ω²) and L[sin(ωt)] = ω/(s² + ω²),
L[e^{jωt}] = s/(s² + ω²) + jω/(s² + ω²) = (s + jω)/(s² + ω²) = 1/(s − jω),  s real   (4.64)


PROBLEMS
4.1 Use Theorem 4.1 to find the Laplace transform of the function a + bt + ct².
4.2 From the knowledge that L[3t + 5] = (5s + 3)/s², use Theorem 4.2 to find L[(3t + 5)e^{−t}].
4.3 Find the Laplace transform of the ramp function translated a units (a > 0) to the right: (t − a)u(t − a).
4.4 Since L[cos(ωt)] = s/(s² + ω²), use Theorem 4.6 to derive L[t cos(ωt)].
4.5 Use Theorems 4.2 and 4.6 to obtain L[t e^{−at} sin(ωt)] from L[sin(ωt)] = ω/(s² + ω²).
4.6 Starting with the transform pair
L⁻¹{1/(s[(s + α)² + β²])} = 1/(α² + β²) − [e^{−αt}/(β√(α² + β²))] sin(βt + tan⁻¹(β/α)),
use Theorem 4.8 to derive L⁻¹{1/[s(s + α)²]}.
4.7 Starting with the transform pair L⁻¹[1/(s + α)] = e^{−αt}, use Theorem 4.8 to derive the following:
a) L⁻¹[1/s]
b) L⁻¹[1/(s + α)²]
c) L⁻¹[1/sⁿ]
4.8 Use Theorem 4.9 to find the Laplace transform of the function shown in Fig. 4.3.
4.9 Use the convolution integral to find the following transforms:
a) L⁻¹[(1/s)·(1/(s + α))] = ∫_0^t u(τ) e^{−α(t−τ)} dτ
b) L⁻¹[(1/(s + jω))·(1/(s − jω))] = ∫_0^t e^{−jωτ} e^{+jω(t−τ)} dτ
4.10 Assuming all poles lie to the left of the imaginary axis, find the final and initial values of the time function whose transform is
F(s) = k(s + d) / (s³ + a₂s² + a₁s + a₀)
4.11 If L[f(t)] = F(s), find the transforms of the following functions:
a) f(t)u(t − α)
b) f(t)[u(t − α) − u(t − β)]
Assume that α and β are positive real numbers.
4.12 Find the inverse Laplace transforms of the following functions:
a) (1 − e^{−s})²/s
b) (1 − e^{−s}) / [s²(1 + e^{−s})]
4.13 The Laplace transform of the output voltage of an amplifier is given by
L[e₀(t)] = (a₃s³ + a₂s² + a₁s + a₀) / {s²(s + α)[(s + β)² + ω²]}
The input is a unit impulse. All constants (a₀, a₁, a₂, a₃, α, β, ω) are positive and real. Without carrying out the inverse transform, find:
a) the form of each term
b) the initial value of e₀
c) the initial values of the first two derivatives of e₀
d) the final value of e₀

CHAPTER - V S-PLANE ANALYSIS


5.1 INTRODUCTION
The Laplace transform offers a method of finding the solution of linear differential equations with constant coefficients. The method automatically includes the initial conditions. In s-plane analysis, many problems can be solved without the labor of inverting the Laplace-transformed equations. The necessary design information can be obtained by locating the roots of the characteristic equation on the s-plane and by observing how these roots vary as some parameter is changed.

5.2 SOLUTION OF A SECOND ORDER SYSTEM
M d²x/dt² + B dx/dt + Kx = f(t)   (5.1)
Let ζ = damping ratio and ωn = undamped natural frequency. The linear differential equation can be written as
d²x/dt² + 2ζωn dx/dt + ωn²x = f(t)/M = f₁(t)   (5.2)
Comparing the coefficients of Eq. (5.1) and Eq. (5.2),
2ζωn = B/M,  ωn² = K/M   (5.3)
giving
ωn = √(K/M)  and  ζ = B/(2√(KM))
The two parameters ζ and ωn are sufficient to describe the second-order equation. Assuming x(0) = 0, ẋ(0) = 0, the Laplace-transformed equation is
(s² + 2ζωn s + ωn²)X(s) = F₁(s)   (5.4)
The ratio X(s)/F(s), defined as the transfer function of the system, is
X(s)/F(s) = (1/M)/(s² + 2ζωn s + ωn²)   (5.5)
For F(s) = 1 (f(t) = δ(t)), the right-hand side of Eq. (5.5) is the transform of the impulse response of the system:
x(t) = [1/(2jMωn√(1 − ζ²))] [e^{(−ζωn + jωn√(1−ζ²))t} − e^{(−ζωn − jωn√(1−ζ²))t}]
     = [e^{−ζωn t}/(Mωn√(1 − ζ²))] sin(ωn√(1 − ζ²) t),  t ≥ 0   (5.6)
The impulse response is plotted in Figs. 5.1(a)-(c) for ωn = 10 and ζ = 0.1, 1, 2.
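The impulse responses corresponding to Figs. 5.1(a)-(c) can be generated numerically. The sketch below is not from the original text; M = 1 is an assumed normalization.

```python
# Impulse responses of the second-order system of Eq. (5.5)
# for the three damping ratios used in Figs. 5.1(a)-(c).
import numpy as np
from scipy import signal

M, wn = 1.0, 10.0
for zeta in (0.1, 1.0, 2.0):
    sys = signal.TransferFunction([1.0 / M], [1.0, 2.0 * zeta * wn, wn**2])
    t, x = signal.impulse(sys, T=np.linspace(0.0, 3.0, 600))
    print(f'zeta = {zeta}: peak impulse response = {x.max():.4f}')
```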
The characteristic equation is s² + 2ζωn s + ωn² = 0, and the quantity ζ is the ratio of the damping that exists in the second-order system to the critical damping. Critical damping in the second-order system is defined as the value of damping which produces two equal roots in the characteristic equation and separates the sub-critical and super-critical regions.
For the system of Fig. 5.2(a), the characteristic equation can be written
s² + (B/M)s + K/M = 0   (5.7)
and the roots are
sᵢ = −B/(2M) ± [B²/(4M²) − K/M]^{1/2}   (5.8)
Critical damping occurs for the value of B = B_c which makes the term under the radical of Eq. (5.8) equal to zero, and is found from the expression

Fig. 5.1(a) Impulse response of the vibration table: undamped natural frequency ωn = 10, damping ratio ζ = 0.1 (underdamped). Roots of the characteristic equation at s = −1 ± j9.95
Fig. 5.1(b) Impulse response of the vibration table: ωn = 10, ζ = 1 (critically damped). Double root at s = −10
Fig. 5.1(c) Impulse response of the vibration table: ωn = 10, ζ = 2 (overdamped). Roots at s = −2.68, −37.32

B_c²/(4M²) − K/M = 0   (5.9)
B_c = 2√(KM)   (5.10)
For this value of B, there exists a double real root at −B_c/(2M). From the definition, we find the damping ratio to be
ζ = (B/2M)/(B_c/2M) = B/(2√(KM))   (5.11)
The undamped natural frequency is the frequency of oscillation that occurs with zero damping. The roots of Eq. (5.7), if B = 0, are
sᵢ = ±√(−K/M) = ±j√(K/M)   (5.12)
Then the undamped natural frequency is ωn = √(K/M). The roots of the second-order equation are located on the s-plane as shown in Fig. 5.2(b). For constant ωn, as ζ is varied from 0 to 1, these roots move along a semicircle. At ζ = 0, the roots are located on the jω axis, and when ζ = 1 the roots are on the real axis. The radius is the natural frequency ωn, and the damping ratio is given by cos θ = ζ. For ζ > 1 (overdamped), both roots are real. In higher-order systems, where there are more than two roots, the system response is often dominated by the two least-damped roots, and the results of the second-order system can be extended to approximate the higher-order systems.

5.3 ROUTH HURWITZ STABILITY CRITERION
This method provides a simple and direct means for determining the number of roots of the characteristic equation with positive real parts (i.e. roots which lie in the right half s-plane). Although we cannot actually locate the roots, we can determine, without factorizing the characteristic equation, whether any of the roots lie in the right half plane and hence give rise to an unstable system. The transfer function of a linear system is a ratio of polynomials in s and can be written
H(s) = N(s)/D(s) = N(s)/(sⁿ + a_{n−1}s^{n−1} + a_{n−2}s^{n−2} + ... + a₁s + a₀)   (5.13)

For a stable system, all the aᵢ are positive real constants and all powers of s in the denominator polynomial are present. If any power is missing or any coefficient is negative, we know immediately that D(s) has roots in the right half plane or on the jω axis, and the system is unstable or marginally stable (this is a necessary condition). In that case, it is not necessary to continue unless we wish to determine the actual number of roots in the right half plane. The Routh-Hurwitz method centers about an array which is formed as follows:


sⁿ      : aₙ        a_{n−2}   a_{n−4}   a_{n−6}   ...
s^{n−1} : a_{n−1}   a_{n−3}   a_{n−5}   a_{n−7}   ...
s^{n−2} : b₁        b₂        b₃        b₄        ...
s^{n−3} : c₁        c₂        c₃        ...
...
s¹      : i₁
s⁰      : j₁
where the constants in the third row are formed by cross-multiplying as follows:
b₁ = (a_{n−1}a_{n−2} − aₙa_{n−3}) / a_{n−1}
b₂ = (a_{n−1}a_{n−4} − aₙa_{n−5}) / a_{n−1}
b₃ = (a_{n−1}a_{n−6} − aₙa_{n−7}) / a_{n−1}
b₄ = ...   (5.15)
We continue the pattern until the remaining b's are equal to zero. The next row is formed by cross-multiplying, using the s^{n−1} and s^{n−2} rows. These constants are evaluated as follows:
c₁ = (b₁a_{n−3} − b₂a_{n−1}) / b₁
c₂ = (b₁a_{n−5} − b₃a_{n−1}) / b₁
c₃ = (b₁a_{n−7} − b₄a_{n−1}) / b₁
c₄ = ...   (5.16)
This is continued until all remaining cᵢ are zero. The remaining rows, down to s⁰, are formed in a similar fashion. Each of the last two rows contains only one non-zero term. Having formed the array, we can determine the number of roots in the right half plane from the Routh-Hurwitz criterion: the number of roots of the characteristic equation with positive real parts is equal to the number of changes of sign of the coefficients in the first column. Hence, if all the terms of the first column have the same sign, the system is stable.

Example 5.1
Consider the following polynomial:
D₁(s) = s⁵ + s⁴ + 3s³ + 9s² + 16s + 10   (5.17)

We form the Routh-Hurwitz array:
s⁵ : 1        3        16
s⁴ : 1        9        10
s³ : [(1)(3) − (1)(9)]/1 = −6        [(1)(16) − (1)(10)]/1 = +6
s² : [(−6)(9) − (1)(6)]/(−6) = 10    [(−6)(10) − (1)(0)]/(−6) = 10
s¹ : [(10)(6) − (−6)(10)]/10 = 12
s⁰ : 10

There are two changes of sign in the first column, from +1 to −6 and from −6 to +10. Therefore, we conclude that there are two roots in the right half plane. In order to avoid labor in the calculations, the coefficients of any row may be multiplied or divided by a positive number without changing the signs of the first column. Two special cases may occur:
1. A zero in the first column.
2. A row in which all coefficients are zero.
Special Case 1: When the first term in a row is zero, but other terms in the row are not zero, one of the following two methods can be used to obviate the difficulty.
1. Replace the zero with a small positive number ε and proceed to compute the remaining terms in the array.
2. In the original polynomial, substitute 1/x for s and find the number of roots of x which have positive real parts. This is also the number of roots of s with positive real parts.

Example 5.2
We illustrate the above methods on the polynomial
D(s) = s⁵ + 3s⁴ + 4s³ + 12s² + 35s + 25   (5.18)
Method 1: The array is as follows.

59

s5 s
4

1 3 0

4 12
80 3 80 3

35 25

s3 s3 s2 s s
1

12 8 0 [(12 25
80 8 0 3

25 ) 25 ]/[12
80

The rst term in the s 2 row is negative as approximately s5 + s4 s


3 + 80 3

0 and the rst term in the s1 row becomes

. The signs of the rst column are

+ + + +

s2 s1 s
0

There are two changes of sign, hence, there are two roots in the right half plane. Method 2: We replace s by 1/x in the original polynomial 1 1 1 1 1 1 D( ) = ( )5 + 3( )4 + 4( ) 3 + 12( ) 2 + 35( ) + 25 x x x x x x 1 = (25x5 + 35x4 + 12x3 + 4x2 + 3x + 1) . 5 x and form the array x5 x x
4

(5.19)

25 35
320 35

12 4
80 35

3 1

x3 x2 x1 x0

4 19 4
35 19

1 1

1
60

Since there are two changes of sign, there are two roots of x with positive real parts. Hence, there are two roots of s with positive real parts. Special Case 2: When all coecients in any one row are zero, we make use of the following procedure: 1. The coecient array is formed in the usual manner until all zero coecient row appears. 2. The all zero row is replaced by the coecients obtained by dierentiating an auxiliary equation which is formed from the previous row. The roots of the auxiliary equation which are also the roots of the original equation occur in pairs and are the opposite of each other. The occurrence of all zero row usually means that two roots lie on the j axis. This condition occurs, however, any time the polynomial has two equal roots with opposite signs or two pairs of complex conjugate roots. Example 5.3 Consider the following polynomial: F (s) = s6 + 4s5 + 11s 4 + 22s 3 + 30s2 + 24s + 8 The coecient array is written s6 s5 s s
4

(5.20)

1 4
11 2

11 22 24
80 3 2 20 11

30 24 8

0
50 11

s3

multiply by

11 50

1 s
2

4 8 4 divide by 2

2 1

0 {2} . . . ... replaced

s0

The existence of all zero s1 row indicates the presence of equal roots of opposite sign. The auxiliary equation is s2 + 4 = 0 Dierentiating Eq. (5.21) 2s + 0 = 0
61

(5.21)

(5.22)

So the coecient pf 0 s0 is 2 Since there are no changes of sign in the rst column, there is no root which has positive real part. The roots are s = 2j (5.23) These roots give rise to an undamped sinusoidal response. Example 5.4 As a practical example of the use of the Routh Hurwitz method, consider the servomechanism of Fig. 5.3. whose block diagram is given in Fig. 5.4. The overall transfer function of the system is C(s) AKs Km = (5.24) R(s) am s 3 + (a + m )s 2 + s + aKsK m where A = Ks = Km = m = a = C(s) = R(s) = The amplier gain Potentiometer sensitivity Volts/radian Motor constant, radians/Volt/Sec Motor time constant Amplier time constant; sec Laplace transformed output position Laplace transformed input position

Suppose we wish to determine the eect of the amplier time constant a upon the system stability. To do this we shall nd the relationship between the variables for marginal stability (i.e. two equal imaginary roots). The Routh Hurwitz array is established s3 s s
2

a m (a + m ) (a + m ) AKs Km am a + m

1 AKs Km

We know that if all the coecients in a row are zero, we have two equal roots of opposite sign. We can obtain all zeros in a single row by setting the rst term in the s1 row equal to zero with the result a m 1 = KK A a + m s m (5.25)

Since Eq. (5.25) gives us the relation between A and a , we need not continue further. Eq. (5.25) is plotted in Fig. 5.5 which shows the eect of upon system stability. Note that the larger the amplier time constant the smaller the amplier gain (for stability).

62

5.4 ROOT LOCUS ANALYSIS
The root locus method is based upon knowledge of the location of the roots of the system (Fig. 5.6) with the feedback loop opened. In most cases these are easily determined from the loop transfer function, which for the root locus method is written as KGH = K G(s)H(s), where K is the constant portion of the loop gain, G(s) is the forward-loop transfer function, and H(s) is the feedback-loop transfer function. The root locus traces the location of the poles of the closed-loop transfer function C(s)/R(s) (the roots of 1 + K G(s)H(s) = 0) in the s-plane as K is varied from 0 to ∞. The loop transfer function KGH is written as a ratio of factored polynomials, e.g.,

    KGH = K1 (τ1 s + 1)(τ3 s + 1) / [s^n (τ2 s + 1)(τ4 s + 1)]        (5.26)

which may be rewritten in the form

    KGH = [K1 τ1 τ3/(τ2 τ4)] (s + 1/τ1)(s + 1/τ3) / [s^n (s + 1/τ2)(s + 1/τ4)]        (5.27)

Each term in KGH is a complex number and may be written in polar form as

    s + 1/τ1 = A1 e^{jφ1}        (5.28)

Hence the entire KGH function is a complex quantity and is written in polar form as

    KGH = K (A1 e^{jφ1})(A3 e^{jφ3}) / [(A0^n e^{jnφ0})(A2 e^{jφ2})(A4 e^{jφ4})]        (5.29)
        = K [A1 A3/(A0^n A2 A4)] e^{j[(φ1 + φ3) − (nφ0 + φ2 + φ4)]}        (5.30)
        = A e^{jψ}        (5.31)

where K = K1 τ1 τ3/(τ2 τ4). The algebraic equation from which the roots are determined is

    1 + KGH = 1 + A e^{jψ} = 0        (5.32)

which furnishes the two expressions

    angle of KGH:      arg KGH = ψ = (2k + 1)·180°,  k = 0, 1, 2, 3, ...        (5.33)
    magnitude of KGH:  |KGH| = A = 1        (5.34)

Eqns. (5.33) and (5.34) are the result of setting 1 + KGH = 0. The locus is plotted by finding all points s which satisfy Eq. (5.33). With the locus plotted, Eq. (5.34) is used to determine the gain K at points along the locus.

    R(s) = reference input
    C(s) = output (controlled variable)
    B(s) = feedback signal
    E(s) = R(s) − C(s) = error signal
    A(s) = actuating signal
    G(s) = C(s)/A(s) = open-loop (forward-path) transfer function
    C(s)/R(s) = G(s)/[1 + K G(s)H(s)] = closed-loop transfer function
    H(s) = feedback-path transfer function
    KG(s)H(s) = loop transfer function

The rules that are stated and demonstrated in this section enable the engineer to sketch the locus diagram rapidly. The following loop transfer function is used to demonstrate the method:

    KGH = K(s + 10) / [s(s + 5)(s + 2 + j15)(s + 2 − j15)]        (5.35)

(The poles and zeros of Eq. (5.35) are shown in Fig. 5.7.)

Rule 1. The continuous curves which comprise the locus start at each pole of KGH for K = 0. The branches of the locus, which are single-valued functions of gain, terminate on the zeros of KGH for K = ∞. In Eq. (5.35) there are four branches, starting from the poles located at s = 0, s = −5, and s = −2 ± j15 (see Fig. 5.11). Since there is only one finite zero, at s = −10, three of the branches must terminate at infinity. The rule can be expanded to read that the locus starts at the poles and terminates either on a finite zero or on a zero located at s = ∞. The gain K is usually positive and varies from 0 to ∞.

Rule 2. The locus exists at any point along the real axis where an odd number of poles plus zeros of KGH are found to the right of the point. In Fig. 5.7 the locus exists along the real axis from the origin to the point s = −5, and from s = −10 to minus infinity (see Fig. 5.11). Complex poles and zeros are ignored while applying this rule.

Rule 3. For large values of gain, the branches of the locus are asymptotic to the angles

    φk = (2k + 1)·180° / (p − z),   k = 0, 1, 2, ...        (5.36)

where p is the number of poles and z is the number of zeros. If the number of poles exceeds the number of zeros, some of the branches terminate on zeros located at infinity. Fig. 5.8 illustrates Rule 3 for the example. Here p = 4, z = 1, and the asymptotic angles are

    φ0 = 60°,   φ1 = 180°,   φ2 = −60°   (k = 0, 1, 2)

Rule 4. The starting point of the asymptotic lines is given by

    CG = (Σ poles − Σ zeros)/(p − z)

which is termed the centre of gravity of the roots. The angular directions to which the loci are asymptotic are given by Rule 3. For the example,

    Σ poles = (−5) + (−2 + j15) + (−2 − j15) + 0 = −9
    Σ zeros = −10,   p = 4,   z = 1

    CG = [(−9) − (−10)]/3 = 1/3        (5.37)

The asymptotic lines found in Eq. (5.36) start from the centre of gravity. These lines are placed on the s-plane as shown in Fig. 5.9. Since complex poles and zeros always appear in conjugate pairs, i.e., at equal vertical distances from the real axis, the centre of gravity always lies on the real axis.

Rule 5. The breakaway point σb for real-axis roots of Eq. (5.27) is found from the equation

    n/σb + 1/(σb + 1/τ2) + 1/(σb + 1/τ4) = 1/(σb + 1/τ1) + 1/(σb + 1/τ3)        (5.38)

where σb + 1/τ1 is the magnitude of the distance between the assumed breakaway point σb and the zero at −1/τ1. In the example, σb = −2.954 and −14.05.

Rule 6. Two roots leave (or strike) the real axis at the breakaway point at an angle of 90°. If n root loci are involved at a breakaway point, they are 180°/n apart.

Rule 7. The angle of departure from complex poles and the angle of arrival at complex zeros is found by using Eq. (5.33). The initial angle of departure of the roots from the complex poles is helpful in sketching root locus diagrams. The pole-zero configuration of Fig. 5.7 is redrawn in Fig. 5.10 for the purpose of finding the angle of departure from the complex pole at s = −2 + j15. The angles subtended by the poles and zeros at the pole in question are added (positive for zeros and negative for poles):

    −(θ1 + θ2 + θ3 + θ5) + φ4 = 180°
    −(97.6° + 90° + 78.7° + θ5) + 62° = 180°

The root locus procedure is based upon the location of the poles and zeros of KGH in the s-plane. These points do not move; they are merely the terminal points for the loci of the roots of 1 + KGH. If the locus crosses the imaginary axis for some gain K, the system becomes unstable at that value of K. The degree of stability is determined largely by the roots near the imaginary axis. The root locus sketch for the system of Eq. (5.35) is shown in Fig. 5.11. The rules for sketching the locus are based on the angle criterion of Eq. (5.33), which is repeated here:

    arg KGH = (2k + 1)·180°

After the locus is sketched and certain points are located more accurately, the values of gain that occur at points along the locus must be found. The gain K at a point s1 is evaluated from the magnitude criterion of Eq. (5.34):

    K = 1/|G(s1)H(s1)|        (5.39)

or, since |GH| is the product of the magnitudes of the distances from the zeros to the point at which the gain is to be evaluated, divided by the product of the distances from the poles,

    K = (product of pole distances)/(product of zero distances)        (5.40)

The magnitudes are measured directly from the root locus plot. If no zeros are present in the transfer function, the product of the zero distances is taken equal to unity. In the example, K ≈ 194 at the breakaway point s = −2.95. At the point where the locus crosses the imaginary axis, K = 751. This value of K (on the verge of instability) can also be determined from the Routh-Hurwitz criterion. A numerical sweep of the gain along these lines is sketched below.
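The gain values quoted above can be checked numerically. The following Python sketch sweeps the gain K for the loop transfer function of Eq. (5.35), forms the closed-loop characteristic polynomial, and reports the first gain at which a branch reaches the right half plane; it also evaluates Eq. (5.40) at the breakaway point. The gain grid is an arbitrary choice, and a value read from a hand-drawn locus may differ somewhat from the computed crossing gain.

import numpy as np

# Root-locus sweep for Eq. (5.35): KGH = K(s+10) / [s(s+5)(s+2+j15)(s+2-j15)].
num = np.poly([-10.0])                                   # zero at s = -10
den = np.poly([0.0, -5.0, -2 + 15j, -2 - 15j]).real      # open-loop poles

crossing_gain = None
for K in np.arange(1.0, 2000.0, 1.0):
    # Closed-loop characteristic polynomial: den(s) + K*num(s) = 0
    poly = den + K * np.pad(num, (len(den) - len(num), 0))
    poles = np.roots(poly)
    if np.all(poles.real < 0):
        continue                                         # still stable
    crossing_gain = K                                    # first unstable gain
    break
print("approximate j-axis crossing gain:", crossing_gain)

s_b = -2.95                                              # breakaway point from Rule 5
K_b = abs(np.polyval(den, s_b) / np.polyval(num, s_b))   # Eq. (5.40)
print("gain at the breakaway point:", K_b)               # close to 194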


PROBLEMS
5.1. Consider the system equation y'(t) + a y(t) = f(t). Find Y(s) for f(t) = 0, y(0) = 1, and obtain y(t) by inverse Laplace transforming Y(s). Plot y(t) for (i) a > 0, (ii) a < 0. Locate the poles of Y(s) in the s-plane. Discuss stability.

5.2. By means of the Routh-Hurwitz stability criterion, determine the stability of the systems which have the following characteristic equations:
a) s^3 + 20s^2 + 9s + 100 = 0
b) s^4 + 2s^3 + 6s^2 + 8s + 8 = 0
c) s^6 + 2s^5 + 8s^4 + 12s^3 + 20s^2 + 16s + 16 = 0
In each case, determine the number of roots in the right half plane, in the left half plane, and on the jω axis.

5.3. The characteristic equations for certain systems are given below.
a) s^4 + 22s^3 + 2s + K = 0
b) s^3 + (K + 0.5)s^2 + 4Ks + 50 = 0
In each case, determine the values of K which correspond to a stable system.

5.4. For each of the following transfer functions, locate the zeros and poles and draw root locus sketches for the closed-loop system G(s)/(1 + G(s)). Discuss the stability of each case.
a) G(s) = K/[s(2s + 1)]
b) G(s) = K(s + 1)/(s^2 + s + 10)
c) G(s) = K/[s(s + 1)(s^2 + s + 10)]

5.5. Draw the root locus of a unity-feedback system (H(s) = 1 in Fig. 5.6) with the open-loop transfer function

    G(s) = K/[s(s + 1)(s + 3.5)(s + 3 + j2)(s + 3 − j2)]

5.6. Sketch the root-locus diagrams of the system in Fig. 5.6. Discuss stability. The quantities KG and H are defined below.
a) KG = K/(s^2 + 2s + 100),  H = s/(s + 4)
b) KG = K(s + 2)/[s(s + 20)],  H = 2/s


CHAPTER - VI FOURIER SERIES


INTRODUCTION
In the early years of the 19th century the French mathematician J.B.J. Fourier was led to the discovery of certain trigonometric series during his research on heat conduction; these series now bear his name. Since that time Fourier series, and their generalizations to Fourier integrals and orthogonal series, have become an essential part of the background of scientists, engineers and mathematicians, from both an applied and a theoretical point of view. Such trigonometric series are now required in the treatment of many physical problems, for example in the theory of sound, heat conduction, electromagnetic waves, electric circuits and mechanical vibrations.

6.1 EULER-FOURIER FORMULAS
A function f(x) can be represented by a trigonometric series as follows:

    f(x) = a0/2 + Σ_{n=1}^{∞} (an cos nx + bn sin nx)        (6.1)

Let us assume that f(x) is known on the interval (−π, π) and that the coefficients an and bn are to be found. It is convenient to assume that the series is uniformly convergent, so that it can be integrated term by term from −π to π. Since

    ∫_{−π}^{π} cos nx dx = ∫_{−π}^{π} sin nx dx = 0   for n = 1, 2, ...        (6.2)

the calculation yields

    ∫_{−π}^{π} f(x) dx = π a0        (6.3)

The coefficient an is determined similarly. Thus, if we multiply Eq. (6.1) by cos nx, there results

    f(x) cos nx = (1/2) a0 cos nx + ... + an cos² nx + ...        (6.4)

where the missing terms are products of the form sin mx · cos nx, or of the form cos mx · cos nx with m ≠ n. It is easily verified that for integral values of m and n

    ∫_{−π}^{π} sin mx cos nx dx = 0, in general        (6.5)

and

    ∫_{−π}^{π} cos mx cos nx dx = 0, when m ≠ n        (6.6)

and hence integration of Eq. (6.4) yields

    ∫_{−π}^{π} f(x) cos nx dx = an ∫_{−π}^{π} cos² nx dx = π an        (6.7)

so that

    an = (1/π) ∫_{−π}^{π} f(x) cos nx dx        (6.8)

In Eq. (6.8), if n = 0,

    a0 = (1/π) ∫_{−π}^{π} f(x) dx        (6.9)

(That is the reason for writing the constant term as a0/2 rather than a0.) Similarly, multiplying Eq. (6.1) by sin nx and integrating yields

    bn = (1/π) ∫_{−π}^{π} f(x) sin nx dx        (6.10)

The formulas of Eq. (6.8) and Eq. (6.10) are called the Euler-Fourier formulas, and the series in Eq. (6.1) which results when an and bn are so determined is called the Fourier series of f(x).

Example 6.1
Represent the function f(x) = x by a Fourier series over the interval (−π, π).

    an = (1/π) ∫_{−π}^{π} x cos nx dx = 0        (6.11)
    bn = (1/π) ∫_{−π}^{π} x sin nx dx = −(2/n) cos nπ = (2/n)(−1)^{n+1}        (6.12)

Substituting in Eq. (6.1),

    f(x) = 2(sin x − sin 2x/2 + sin 3x/3 − ...)        (6.13)

6.1.1 PERIODIC FUNCTIONS
A function f(x) is said to be periodic if f(x + p) = f(x) for all values of x, where p is a non-zero constant. Any number p with this property is a period of f(x). For instance, sin nx has the periods 2π/n, 2π, 4π, .... Now each term of Eq. (6.13) has the period 2π, hence the sum also has the period 2π, and the sum is equal to x on the interval −π < x < π but not on the whole interval −∞ < x < ∞. It remains to be seen what happens at the points x = ±π, ±3π, ..., where the sum of the series exhibits an abrupt jump from π to −π. Upon setting x = ±π, ±3π, ... in Eq. (6.13), we see that every term is zero. Hence the sum is zero.
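The coefficients of Example 6.1, and the behaviour of the series at the jump, can be checked numerically. The Python sketch below evaluates the Euler-Fourier formulas (6.8) and (6.10) by quadrature; it is only an illustration of the formulas, not part of the original text.

import numpy as np
from scipy.integrate import quad

# Numerical check of Example 6.1 (f(x) = x on (-pi, pi)) using (6.8) and (6.10).
f = lambda x: x

def a(n):
    return quad(lambda x: f(x) * np.cos(n * x), -np.pi, np.pi)[0] / np.pi

def b(n):
    return quad(lambda x: f(x) * np.sin(n * x), -np.pi, np.pi)[0] / np.pi

for n in range(1, 5):
    print(n, round(a(n), 6), round(b(n), 6))   # a_n ~ 0, b_n ~ 2(-1)^(n+1)/n

# Partial sum of Eq. (6.13); at x = pi every sine term vanishes, so the
# series converges to 0 there rather than to f(pi) = pi.
x = np.pi
partial = sum(2 * (-1) ** (n + 1) / n * np.sin(n * x) for n in range(1, 200))
print("partial sum at x = pi:", partial)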

The term an cos nx + bn sin nx is sometimes called the nth harmonic, and a0/2 is called the fundamental or d-c term of the Fourier series.

Example 6.2
Find the Fourier series of the function defined by

    f(x) = 0   if −π ≤ x < 0
    f(x) = π   if 0 ≤ x ≤ π        (6.14)

    a0 = (1/π) [ ∫_{−π}^{0} 0 dx + ∫_{0}^{π} π dx ] = π        (6.15)
    an = (1/π) ∫_{0}^{π} π cos nx dx = 0,  for n ≥ 1        (6.16)
    bn = (1/π) ∫_{0}^{π} π sin nx dx = (1/n)(1 − cos nπ)        (6.17)

The factor (1 − cos nπ) assumes the following values as n increases:

    n            1   2   3   4   5   6
    1 − cos nπ   2   0   2   0   2   0

Determining bn from this table, we obtain the required Fourier series

    f(x) = π/2 + 2(sin x/1 + sin 3x/3 + sin 5x/5 + ...)        (6.18)

The successive partial sums are y0 = Example 6.3 Find the Fourier series of the function dened by n , < x < 0 f(x) = , 0 < x < f(x) = is obtained. +2 2 sin 2x sin 4x sin 6x + + +... 2 4 4 (6.21) 2 sin 3x , y1 = + 2 sin x, y 2 = + 2 sin x + etc. 2 2 2 3 (6.19)


6.2 REMARKS ON CONVERGENCE
In Eq. (6.1) each term has period 2π, and hence if f(x) is to be represented by the sum, f(x) must also have period 2π. Whenever we consider a series such as Eq. (6.1), we shall suppose that f(x) is given on the interval (−π, π) and that outside this interval f(x) is determined by the periodicity condition

    f(x + 2π) = f(x)        (6.22)

The term simple discontinuity is used to describe the situation that arises when the function suffers a finite jump at a point x = x0 (Fig. 6.2). Analytically this means that the two limiting values of f(x) as x approaches x0 from the right and from the left exist but are unequal, that is, f(x0+) ≠ f(x0−). A function f(x) is said to be bounded if |f(x)| < M holds for some constant M and for all x under consideration. For example, sin x is bounded, but the function f(x) = 1/x for x ≠ 0 (with f(0) = 0) is not, even though the latter is well defined for every value of x. It can be shown that if a bounded function has a finite number of maxima and minima and only a finite number of discontinuities, all its discontinuities are simple; that is, f(x+) and f(x−) exist at every value of x. The function sin(1/x) has infinitely many maxima near x = 0, and the discontinuity at x = 0 is not simple. The function defined by

    f(x) = x² sin(1/x),  x ≠ 0;   f(0) = 0

also has infinitely many maxima near x = 0, although it is continuous and differentiable for every value of x. The behaviour of these two functions is illustrated graphically in Fig. 6.3 and Fig. 6.4.

6.3 DIRICHLET'S THEOREM
Suppose f(x) is defined on the interval (−π, π), is bounded, and has only a finite number of maxima and minima and only a finite number of discontinuities. Let f(x) be defined for other values of x by the periodicity condition f(x + 2π) = f(x). Then the Fourier series for f(x) converges to

    (1/2)[f(x+) + f(x−)]        (6.23)

at every value of x, and hence it converges to f(x) at the points where f(x) is continuous. The conditions imposed on f(x) are called Dirichlet conditions, after the mathematician Dirichlet who discovered the theorem.

Example 6.4
Consider the Fourier series of the periodic function defined by

    f(x) = −π  for −π < x < 0
    f(x) = x   for 0 < x < π        (6.24)

74

    an = (1/π) [ ∫_{−π}^{0} (−π) cos nx dx + ∫_{0}^{π} x cos nx dx ] = (cos nπ − 1)/(π n²)        (6.25)

For n = 0,

    a0 = (1/π) [ ∫_{−π}^{0} (−π) dx + ∫_{0}^{π} x dx ] = −π/2

Similarly,

    bn = (1/π) [ ∫_{−π}^{0} (−π) sin nx dx + ∫_{0}^{π} x sin nx dx ] = (1/n)[1 − 2 cos nπ]        (6.26)

Therefore, the Fourier series is

    f(x) = −π/4 − (2/π)(cos x + cos 3x/3² + cos 5x/5² + ...)
           + 3 sin x − sin 2x/2 + 3 sin 3x/3 − sin 4x/4 + 3 sin 5x/5 − ...        (6.27)

By Dirichlet's theorem, equality holds at all points of continuity, since f(x) has been defined to be periodic. At the points of discontinuity x = 0 and x = ±π, the series converges to

    [f(0+) + f(0−)]/2 = −π/2   and   [f(π−) + f(π+)]/2 = 0

respectively. Either condition leads to the interesting expansion

    π²/8 = 1/1² + 1/3² + 1/5² + 1/7² + ...        (6.28)

as is seen by making the substitution in Eq. (6.27).

6.4 EVEN AND ODD FUNCTIONS
For many functions, the Fourier sine and cosine coefficients can be determined by inspection. A function f(x) is said to be even if

    f(−x) = f(x)        (6.29)

and odd if

    f(−x) = −f(x)        (6.30)

For example, cos x and x² are even, while x and sin x are odd.

Also,

    ∫_{−π}^{π} f(x) dx = 2 ∫_{0}^{π} f(x) dx   if f(x) is even        (6.32)

and

    ∫_{−π}^{π} f(x) dx = 0   if f(x) is odd        (6.33)

Products of even and odd functions obey the rules

    (even)(even) = even,   (even)(odd) = odd,   (odd)(odd) = even

The product sin nx cos mx is odd, and

    ∫_{−π}^{π} sin nx cos mx dx = 0        (6.34)

Theorem 6.1: If f(x), defined in the interval −π < x < π, is even, its Fourier series has cosine terms only and the coefficients are given by

    an = (2/π) ∫_{0}^{π} f(x) cos nx dx,   bn = 0        (6.35)

If f(x) is odd, the series has sine terms only and the coefficients are given by

    bn = (2/π) ∫_{0}^{π} f(x) sin nx dx,   an = 0        (6.36)

In order to see this, let f(x) be even. Then f(x) cos nx is a product of even functions, and therefore

    an = (1/π) ∫_{−π}^{π} f(x) cos nx dx = (2/π) ∫_{0}^{π} f(x) cos nx dx        (6.37)

On the other hand, f(x) sin nx is an odd function, so that

    bn = (1/π) ∫_{−π}^{π} f(x) sin nx dx = 0        (6.38)

Example 6.5
Consider the function in Fig. 6.5, where f(x) = x, −π < x < π.

Since the function is odd, the Fourier series reduces to a sine series:

    bn = (2/π) ∫_{0}^{π} x sin nx dx        (6.39)
       = (2/π) [ −x cos nx/n ]_{0}^{π} + (2/(πn)) ∫_{0}^{π} cos nx dx        (6.40)
       = (2/π) [ −x cos nx/n + sin nx/n² ]_{0}^{π} = (2/n)(−1)^{n+1}        (6.41)

    x = 2(sin x − sin 2x/2 + sin 3x/3 − sin 4x/4 + ...),   −π < x ≤ π        (6.42)

Example 6.6

Consider the function in Fig. 6.6 and write its Fourier series: f(x) = |x|. The function is even, hence

    an = (1/π) ∫_{−π}^{π} |x| cos nx dx = (2/π) ∫_{0}^{π} x cos nx dx        (6.43)
    a0 = (2/π) ∫_{0}^{π} x dx = π        (6.44)
    an = (2/π) [ x sin nx/n ]_{0}^{π} − (2/(πn)) ∫_{0}^{π} sin nx dx        (6.45)
       = (2/(π n²)) [(−1)^n − 1]        (6.46)

    |x| = π/2 − (4/π)(cos x/1² + cos 3x/3² + cos 5x/5² + ...),   −π ≤ x < π        (6.47)

Since |x| = x for x ≥ 0, the series in Eqs. (6.42) and (6.47) converge to the same function x when 0 ≤ x < π. The first expansion, Eq. (6.42), is called the Fourier sine series for x, and Eq. (6.47) is the Fourier cosine series. Any function f(x) defined in (0, π) which satisfies the Dirichlet conditions can be expanded in a sine series and in a cosine series on 0 < x < π. To obtain the sine series, we extend f(x) over the interval −π < x < 0 in such a way that the extended function is odd; the Fourier series for the extended function then consists of sine terms only. A short numerical comparison of the two expansions is sketched below.
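The Python sketch below compares partial sums of the sine series (6.42) and the cosine series (6.47) for f(x) = x on (0, π), using the closed-form coefficients derived above. The number of terms and the sample points are arbitrary choices.

import numpy as np

# Partial sums of (6.42) and (6.47) on 0 < x < pi.
N = 2000
x = np.linspace(0.1, np.pi - 0.1, 5)        # stay away from the end points

sine_series = sum(2 * (-1) ** (n + 1) / n * np.sin(n * x) for n in range(1, N + 1))
cosine_series = np.pi / 2 + sum(-4 / (np.pi * n**2) * np.cos(n * x)
                                for n in range(1, N + 1, 2))

print(np.round(x, 3))
print(np.round(sine_series, 3))     # both partial sums approximate x
print(np.round(cosine_series, 3))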

Example 6.7
Obtain a cosine series and also a sine series for sin x.

For the cosine series,

    an = (2/π) ∫_{0}^{π} sin x cos nx dx = 2(1 + cos nπ)/[π(1 − n²)],   n ≠ 1        (6.48)

For n = 1 the result of the integration is zero; hence

    sin x = 2/π − (4/π)[cos 2x/(2² − 1) + cos 4x/(4² − 1) + cos 6x/(6² − 1) + ...]        (6.49)

when 0 < x < π. Since the sum of the series is an even function, it converges to |sin x| rather than sin x when −π < x < 0. For the sine series, an = 0 and

    bn = (2/π) ∫_{0}^{π} sin x sin nx dx = 0 for n ≥ 2,   = 1 for n = 1        (6.50)

Hence the Fourier sine series for sin x is simply sin x. That this is not a coincidence is shown by the following.

6.4.1 UNIQUENESS THEOREM
If two trigonometric series of the form of Eq. (6.1) converge to the same sum for all values of x, then the corresponding coefficients are equal.

6.5 EXTENSION OF INTERVAL
The methods developed up to this point restrict the interval of expansion to (−π, π). In many problems it is desired to develop f(x) in a Fourier series that will be valid over a wider interval. By letting the length of the interval increase indefinitely, one may obtain an expansion valid for all x. To obtain an expansion valid on the interval (−T, T), change the variable from x to Tz/π. If f(x) satisfies the Dirichlet conditions on (−T, T), the function f(Tz/π) can be developed in a Fourier series in z:

    f(Tz/π) = a0/2 + Σ_{n=1}^{∞} an cos nz + Σ_{n=1}^{∞} bn sin nz        (6.51)

valid for −π ≤ z ≤ π. Since z = πx/T, the series in Eq. (6.51) becomes



    f(x) = a0/2 + Σ_{n=1}^{∞} an cos(nπx/T) + Σ_{n=1}^{∞} bn sin(nπx/T)        (6.52)

By applying Eq. (6.8) to Eq. (6.51),

    an = (1/π) ∫_{−π}^{π} f(Tz/π) cos nz dz = (1/T) ∫_{−T}^{T} f(x) cos(nπx/T) dx        (6.53)

and

    bn = (1/T) ∫_{−T}^{T} f(x) sin(nπx/T) dx        (6.54)

Example 6.8
Let

    f(x) = 0  for −2 < x < 0
    f(x) = 1  for 0 < x < 2

so that T = 2. Then

    a0 = (1/2) [ ∫_{−2}^{0} 0 dx + ∫_{0}^{2} 1 dx ] = 1        (6.55)
    an = (1/2) ∫_{0}^{2} cos(nπx/2) dx = (1/nπ) [ sin(nπx/2) ]_{0}^{2} = 0        (6.56, 6.57)
    bn = (1/2) ∫_{0}^{2} sin(nπx/2) dx = (1/nπ)(1 − cos nπ)        (6.58)

    f(x) = 1/2 + (2/π) [ sin(πx/2) + (1/3) sin(3πx/2) + (1/5) sin(5πx/2) + ... ]        (6.59)

These coefficients are checked numerically below.
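The following Python sketch evaluates the coefficient formulas (6.53) and (6.54) for the step of Example 6.8 by quadrature. It is a numerical illustration only; the function and the period T = 2 are those of the example, and the number of coefficients printed is arbitrary.

import numpy as np
from scipy.integrate import quad

# Numerical check of Example 6.8 on (-2, 2). Since f vanishes on (-T, 0),
# only the (0, T) part of the integrals in (6.53) and (6.54) contributes.
T = 2.0

def a(n):
    return quad(lambda x: np.cos(n * np.pi * x / T), 0, T)[0] / T

def b(n):
    return quad(lambda x: np.sin(n * np.pi * x / T), 0, T)[0] / T

print("a0 =", round(a(0), 6))                        # 1
for n in range(1, 6):
    # a_n ~ 0, b_n ~ (1 - cos(n*pi))/(n*pi)
    print(n, round(a(n), 6), round(b(n), 6))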

Subject to Dirichlet conditions, the function can be chosen arbitrarily on the interval (T , T ) and it is natural to enquire if a representation for arbitrary function on (, ) might be obtained by letting T . We shall see that such a representation is possible. The process leads to Fourier Integral Theorem, which has many practical applications. Assume that f(x) satises the Dirichlet conditions in every interval (T, T ), no matter how large, and that the integral
81

M =

|f(x)|dx

(6.60)

converges. As we have just seen f(x) is given by a0 X nx X nx + + an cos bn sin 2 T T n=1 n=1

f (x) = where 1 T Z
T

(6.61)

an =

f (t) cos
T

nt 1 dt, bn = T T

f (t) sin
T

nt dt T

(6.62)

Substituting these values of coecients in Eq. (6.61) 1 2T Z


T Z 1X T n(t x) dt f(t) cos T T T n=1

f (x) = Since, cos

f(t)dt +

(6.63)

nx nt nx n(t x) nt cos + sin sin = cos T T T T T

(6.64)

Moreover, R |f (x)|dx is assumed to be convergent Z 1 2T


T T

which obviously tends to zero as T is allowed to increase indenitely. Also, if the interval (T , T ) is made large enough, the quantity /T which appears in the integrands of the sum, can be made as small as desired. Therefore, the sum in Eq. (6.63) can be written as Z T 1X f (t) cos n(t x)dt n=2 T where =
T

Z T 1 M f (t)dt |f (t)|dt 2T 2T T

(6.65)

(6.66)

is very small.

The sum suggests the denition of the denite integral of the function Z
T

F () =

f(t) cos (t x)dt

in which the values of the function F () are calculated at the points n. For large values of T
82

Z dier little from Z

f(t) cos (t x)dt

(6.67)

f (t) cos (t x)dt

(6.68)

and it appears plausible that as T increases indenitely, the sum will approach the limit 1 Z

f(t) cos (t x)dt

(6.69)

If such is the case, then Eq. (6.63) can be written as f(x) = 1 Z


0

f (t) cos (t x)dt

(6.70)

The foregoing discussion is heuristic and cannot be regarded as a rigorous proof. However, the validity of the formula can be established rigorously, if the function f(x) satises the above conditions. This formula assumes a simpler form if f (x) is even or an odd function. Expanding the integrand of the integral: 1 Z
0

f(t) cos t cos xdt +

f (t) sin t sin xdt

(6.71)

If f(t) is odd, then f (t) cos t is an odd function. Similarly, f (t) sin t is even. So that f (x) = 2 Z
0

f(t) sin t sin xdt

(6.72)

when f(x) is odd. A similar argument shows that if f(x) is even, then 2 Z
0

f(x) =

f (t) cos t cos xdt

(6.73)

If f(x) is dened in (0, ), then both the above integrals be used. Since the Fourier Series converges to 1 [f(x+) + f (x)] at points of discontinuity, the Fourier 2 integeral also does. In particulars, for an odd function, the integral converges to zero at x = 0, this fact is veried by setting x = 0, in 2 Z
0

f (x) =

f(t) sin t sin xdt

83

Example 6.9 By f (x) = obtain the formula Z 2

f(t) cos t cos xdt

(6.74)

sin cos x d = if 0 x < 1 2 = if x = 1 4 = 0 if x > 1

(6.75)

We choose f (x) = 1 for 0 x < 1 and f (x) = 0 for x > 1., Then Z
1

f(t) cos tdt =


0

cos tdt =

sin , 6= 0

(6.76)

substituting in Eq. (6.76) Z


0

sin cos xdx = f(x) 2

(6.77)

Upon recalling the definition of f(x), we see that the desired result is obtained for 0 ≤ x < 1. The fact that the integral equals π/4 when x = 1 follows from

    [f(1−) + f(1+)]/2 = 1/2        (6.78)

6.6 COMPLEX FOURIER SERIES - FOURIER TRANSFORM
The Fourier series

    f(x) = a0/2 + Σ_{n=1}^{∞} (an cos nx + bn sin nx)

with

    an = (1/π) ∫_{−π}^{π} f(t) cos nt dt,
    bn = (1/π) ∫_{−π}^{π} f(t) sin nt dt

can be written, with the aid of the Euler formula

    e^{jθ} = cos θ + j sin θ        (6.79)

in an equivalent form, namely

    f(x) = Σ_{n=−∞}^{∞} Cn e^{jnx}        (6.80)

The coefficients Cn are defined by the equation

    Cn = (1/2π) ∫_{−π}^{π} f(t) e^{−jnt} dt        (6.81)

and the limit is interpreted by taking the sum from n to +n letting n . Thus, the index n runs through all positive and negative integral values including zero. This can be shown as below. If the series
X

f (x) =

Cn e jnx

n=

is uniformly convergent, we can obtain the above formula for Cn . Replace x by t and the dummy index n by m, so that
X

f(t) =

Cn ejmt

(6.82)

m=

Multiplying by ejnt
X

f (t)e jnt =

Cm e j(mn)t

(6.83)

m=

If we now integrate from −π to π, the terms with m ≠ n integrate to zero and the term with m = n gives 2πCn, so that

    Cn = (1/2π) ∫_{−π}^{π} f(t) e^{−jnt} dt        (6.84)

Example 6.10
Consider the function f(x) = e^x on (−π, π). Then

    2π Cn = ∫_{−π}^{π} e^t e^{−jnt} dt        (6.85)
          = ∫_{−π}^{π} e^{(1−jn)t} dt        (6.86)

    Cn = [e^{(1−jn)π} − e^{−(1−jn)π}] / [2π(1 − jn)]
       = (−1)^n sinh π (1 + jn) / [π(1 + n²)]        (6.87)

Hence,

    e^x = (sinh π/π) Σ_{n=−∞}^{∞} (−1)^n (1 + jn) e^{jnx}/(1 + n²)        (6.88)

Example 6.11 Consider the rectangular pulse train shown in Fig. 6.7 and draw its amplitude spectra.

The pulse width is τ and the period is T. Therefore, with f0 = 1/T and ω0 = 2πf0,

    Cn = (1/T) ∫_{−τ/2}^{τ/2} A e^{−jnω0 t} dt

Since the pulse is an even function,

    Cn = (2/T) ∫_{0}^{τ/2} A cos nω0 t dt
       = (2A/T) [ sin nω0 t/(nω0) ]_{0}^{τ/2}
       = (2A/T) sin(nω0 τ/2)/(nω0)
       = (Aτ/T) sinc(n f0 τ)

where

    sinc x = sin πx/(πx)

Therefore, the Fourier series is

    f(t) = (Aτ/T) Σ_{n=−∞}^{∞} sinc(n f0 τ) e^{jnω0 t}

The amplitude spectrum then is drawn below
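Although the figure itself is not reproduced here, the line spectrum is easy to tabulate. The Python sketch below evaluates cn = (Aτ/T) sinc(n f0 τ) for an assumed amplitude, pulse width and period; the specific numbers are illustrative, not from the text.

import numpy as np

# Amplitude spectrum of the rectangular pulse train of Example 6.11:
# c_n = (A*tau/T) * sinc(n*f0*tau), with sinc(x) = sin(pi x)/(pi x).
A, tau, T = 1.0, 1.0, 4.0
f0 = 1.0 / T
n = np.arange(-12, 13)
c = A * tau / T * np.sinc(n * f0 * tau)     # np.sinc uses the same normalized definition

for k, ck in zip(n, c):
    print(f"n = {k:3d}   f = {k * f0:5.2f}   |c_n| = {abs(ck):.4f}")
# Zeros of the envelope fall at multiples of 1/tau, i.e. at every (T/tau)-th harmonic.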

Let us now write the Fourier Integral Theorem as 1 A 2 Z


A

f (x) = lim

d
A

e j (xt) dt

(6.89)

when f(x) satises the Dirichlet conditions g() = then, 1 A 2 Z


A

1 2

e j f (t)dt

(6.90)

f(t) = lim The transform T dened by T (f) =

e jtg()d
A

(6.91)

1 2

e jt f(t)dt

(6.92)

is called the Fourier transform. It is one of the most powerful tools in modern analysis.
87

Although, the formulas of Eq. (6.91) and Eq. (6.92) are similar, the conditions on the functions f and g are quite dierent. A more symmetric theory can be based on a type of convergence known as mean convergence. Let gA (t) be an integrable function of t on each nite interval for each value of parameter A. It is said that g A(t) converges in mean to g(t) and we write g(t) = lim gA(t)
A

(6.93)

If it is true that lim Z


|g(t) gA (t)|2 dt = 0 Z
A

(6.94)

As an illustration g(t) = lim


A

e j tf (t)dt

(6.95)

means that Eq. (6.94) holds with gA (t) replaced by the integral on the right of Eq. (6.95). One can write g(t) in Eq. (6.95) as an integral from to +, if it is stated that the equation holds in the sense of mean convergence. 6.6.1 PLACHERELS THEOREM: Let f (t) and g() be integrable on every nite interval, and suppose Z Z |f (t)| 2 dt or |g()|d (6.96)

is nite, then if either of the equations Z 1 g() = f (t)e jt dt 2 Z 1 f (t) = g()ejt d 2

(6.97)

holds in the sense of mean convergence, so does the other, and the two integrals of Eq. (6.96) are equal. This is in the sense of ordinary Reimann integral. 6.7 ORTHOGONAL FUNCTIONS A sequence of functions n (x) is said to be orthogonal on the interval (a, b) if Z b m (x)n (x)dx = 0 for m 6= n
a

6= 0 for m = n

(6.98)

For example, the sequence 1 (x) = sin x, 2 (x) = sin 2x, n (x) = sin nx is orthogonal on (0, ) because
88

m (x)n (x)dx =
0

sin mx sin nxdx = 0 for m 6= n =

for m = n 2

(6.99)

The sequence 1, sin x, cos x, sin 2x, cos 2x is orthogonal on (0, 2) though not on (0, ) The formula for Fourier coecients is specially simple if the integeral has the value of 1 for m = n. The function n (x) are then said to be normalized and {(x)} is called an orthonormal set. If Z In other words Z For example, since Z
2 b

[n (x)]2 dx = 1
a

(6.100)

m (x)n (x)dx = 0 for m 6= n = 1 for m = n

(6.101)

1.dx = 2,
0

sin 2 nxdx = ,

cos2 nxdx =
0

for n 1 and the orthonormal set is (2) 1/2 , ()1 /2 sin x, ()1 /2 cos x, . . . , ()1/2 sin nx, ()1 /2 cos nx The product of two dierent functions in this set gives zero, but the square of each function gives 1 when integrated from 0 to 2. Let {n (x)} be an orthonormal set of functions on (a, b) and suppose that another function f (x) is to be expanded in the form f(x) = c1 1 (x) + c2 2 (x) + . . . + cn n (x) To determine the coecient cn , we multiply by n (x) getting f (x)n (x) = c1 1 (x)n (x) + c2 2 (x)n (x) + . . . + cn (n (x))2 If we formally integrate from a to b, the cross-product terms disappear, and hence
89

(6.102)

(6.103)

f (x)(x)dx =
a

Cn [n (x)] dx
a

(6.104)

The term to term integration is justied when the series is uniformly convergent and the functions are continuous. The foregoing procedure shows that if f(x) has an expansion of the desired type, the coecients cn must be given by Eq. (6.(104) is called Euler-Fourier formula, the coecients cn are called the Fourier coecients of f (x) with respect to {n (x)} and the resulting series f(x) = c1 1 (x) + c2 2 (x) + . . . + cn n (x) is called the Fourier series with respect to {n (x)}. 6.8 MEAN CONVERGENCE OF FOURIER SERIES If we try to approximate a function f(x) by another function pn (x), the quantity |f(x) pn (x)| or [f (x) pn (x)] 2 (6.106) (6.105)

gives a measure of the error in the approximation. The sequence pn (x) converges to f(x) whenever the expression of Eq. (6.106) approaches zero as n . These measures of error are appropriate for discussing convergence at any xed point x. But it is often useful to have a measure of error which applies simultaneously to a whole interval of x values, a x b. Such measure is easily found if we integrate Eq. (6.108) from a to b. Z
b

|f (x) pn (x)|dx or

[f (x) pn (x)] dx

(6.107)

These expressions are called mean error and mean-square error respectively. If either expression of Eq. (6.107) approaches zero at n , the sequence p(x) is said to converge in mean to f(x) and the mean convergence is used. The terminology is appropriate because if the integrals of Eq. (6.107) are multiplied by 1/(b a), the result is precisely the mean value of the corresponding expression of Eq. (6.106). Even though Eq. (6.107) involves an integration that is not present in Eq. (6.106) for Fourier series, it is much easier to discuss the mean square error and the corresponding mean convergence then the ordinary convergence. In the following discussion, we use f and n as abbreviation for f(x) and n (x) respectively and assume that f and n are integrable on a < x < b. If integrals are improper, the convergence of Rb Rb f 2 dx and a 2 dx is required. n a Let {n (x)} be a set of orthonormal functions on a x b, so that as in the preceding section Z

n (x)m (x)dx = 0 for m 6= n = 1 for m = n


90

(6.108)

We seek to approximate f(x) by a linear combination of (x). pn (x) = a 1 1 (x) + a2 2 (x) + . . . + an n (x) in such a way that mean square error of Eq. (6.107) is minimum. E= Z
b

[f (a 1 1 + a2 2 + . . . + an n )] dx = min

(6.109)

Upon expanding the term in brackets, we see that Eq. (6.109) yields. Z Z Z

E=

f 2 dx 2

(a1 1 + a2 2 + . . . + an n )fdx +
a

(a1 1 + a2 2 + . . . + a n n )dx (6.110)

If the Fourier coecients of f relative to k are denoted by ck . ck = The second integral in Eq. (6.110) is Z
b

k fdx
a

(6.111)

(a1 1 + a2 2 + . . . + an n )fdx = a1 c1 + a2 c2 + . . . + an cn
a

(6.112)

The third integral in Eq. (6.110) can be written as Z

(a 1 1 + a2 2 + . . . + an n )(a1 1 + a2 2 + . . . + an n )dx
a b

(6.113) (6.114) (6.115)

= a + a2 + . . . + a2 2 n

a2 2 + a2 2 + . . . + a2 2 dx 1 1 2 2 n n

a 2 1

Where the second group of terms involves cross products ij with i 6= j and such terms integrate to zero. Hence Eq. (6.110) yields. Z
b n X k=1 n X k=1

E=

f 2 dx 2

ak ck +

a2 k

(6.116)

for the mean square error in the approximation. In as much as 2a kck + a2 = c2 + (ak ck )2 k k The error E in Eq. (6.116) is also equal to Z b n n X X E= f 2 dx c2 + (a k ck )2 k
a k=1 k=1

(6.117)

91

Theorem 6.2: If {n (x)} is a set of orthonormal functions, the mean square error of Eq (6.109) can be written in the form of Eq (6.117) where ck are the Fourier coecients of f relative to k . From the two expressions of Eq. (6.112) and Eq. (6.117), we obtain a number of interesting and signicant theorems. In the rst place, the terms (ak ck )2 in Eq. (6.117) are positive unless ak = ck , in which case they are zero. Hence the choice of ak that make E minimum is obvious by ak = ck and we have the following. Corollary 1: The partial sum of the Fourier Series c1 1 + c2 2 + . . . + cn n , ck = Z
b

f k dx
a

(6.118)

gives a smaller mean square error Rb (f k )2 dx then is given by any linear combination a1 1 + a2 2 + . . . + an n upon setting a ak = ck in Eq. (6.117), we see that the minimum value of the error is min.E = Z
b

f 2 dx

n X k=1

c2 k

(6.119)

Now, the expression of Eq. (6.109) shows that E 0 because the integrand in Eq. (6.109) being a square, is not negative. Since E 0 for all choices of a k , it is clear that minimum of E (which arises when ak = ck is also greater or equal to zero. Hence, "Z or
b n X k= 1

f dx
n X k= 1

2 k

0,

c2 k

f 2 dx
a

(6.120)

Upon letting n , we obtain by the principle of monotone convergence. Rb Corollary 2: If ck = a fk dx are the Fourier coecients of f relative to the orthonormal P set , then the series k=1 c2 converges and satises the Bessel inequality k
n X k=1

c2 k

[f(x)] 2 dx
a

(6.121)

Since the general term of a convergent series must approach zero, we deduce the following from Corollary 2. Rb Corollary 3: The Fourier coecients cn = a f n dx tends to zero as n . For applications, it is important to know whether or not the mean square error approaches zero as n . Evidently, the error approaches zero for some choice of a0 s only,if the minimum error in Eq. (6.117) does so. Letting n in Eq. (6.1179), we get Parseval equality
92

Z as the condition for zero error.

f 2 dx

X k=1

2 Ck = 0

(6.122)

Corollary 4: If f is approximated by the partial sum of its Fourier series, the mean square error approaches zero as n if and only if Bessel inequality becomes Parsevals equality
X k=1

c2 = k

[f(x)] 2 dx
a

(6.123)

In other words, the Fourier series converges to f in the mean square sense if and only if Eq. (6.123) holds. If this happens for every choice of f, the set { n (x)} is said to be closed. A closed set then is a set which can be used for mean square approximation of arbitrary functions. It can be shown that the set of trigonometric functions cos nx, sin nx is closed on 0 < x < 2. A set {n (x)} is said to be complete if there is no non trivial function f(x) which is orthogonal to all the 0n s. That is, the set is complete if ck = implies that Z
b

f (x)(x)dx = 0 for k = 1, 2, 3, . . .
a

(6.124)

[f(x)]2 dx = 0
a

(6.125)

Now, whenever Eq. (6.123) holds, Eq. (6.124) yields Eq. (6.125) at once, hence we have, Corollary 5: Every closed set is complete. The converse is also true. This converse, however requires a more general integral than the Reimann. The generalized integral known is Lebesgue integral. The notions of closure and completeness have simple analogs in the elementary theory of vectors. Thus a set of vectors v1 , v2 , v 3 is said to be closed if every vector V can be written in the form. V = c1 v1 + c2 v2 + c3 v3 (6.126) for some choice of constants ck . The set of vectors v1 , v2 , v3 is said to be complete if there is no nontrivial vector orthogonal to all of them; that is the set is complete if the condition. v.vk = 0 k = 1, 2, 3 . . . (6.127)

In this case, it is obvious that closure and completeness are equivalent, for both conditions simply state that the three vectors v1 , v2 , v3 are not coplanar. 6.9 POWER IN A SIGNAL Consider two voltage sources connected in series across a 1 ohm resistance. Let one source have an emf of 10 cos 2t and other an emf of 5 cos 20t. These two voltages do not make a periodic function.
93

If the power dissipated in the resistance at any moment is to be calculated, we have

    p(t) = v²(t)/R = v²(t) = (10 cos 2πt + 5 cos 20πt)²        (6.128)
         = 100 cos² 2πt + 100 cos 2πt cos 20πt + 25 cos² 20πt
         = 50 + 12.5 + 50 cos 4πt + 12.5 cos 40πt + 50 cos 22πt + 50 cos 18πt        (6.129)

From Eq. (6.129) it is clear that 50 is the average power that would be dissipated in the load if the 1 Hz source acted alone, and 12.5 is the average power if the 10 Hz source acted alone. The total average power when both sources are present is the sum of the averages for the two sources acting alone. The instantaneous power is given by Eq. (6.129).

6.9.1 AVERAGE POWER IN A SIGNAL
Applying the Parseval equality,
    Pav = Σ_{n=−∞}^{∞} cn cn* = Σ_{n=−∞}^{∞} |cn|²        (6.130)

and the root-mean-square value of f(t) is

    r.m.s. = [ Σ_{n=−∞}^{∞} |cn|² ]^{1/2}        (6.131)

The expressions of Eq. (6.130) and Eq. (6.131) are for a two-sided spectrum. For the positive-frequency line spectrum,

    P = c0² + (1/2) Σ_{n=1}^{∞} |2cn|² = c0² + 2 Σ_{n=1}^{∞} |cn|²        (6.132)

The Fourier series for the rectangular pulse train in Example 6.11 was

    f(t) = (Aτ/T) Σ_{n=−∞}^{∞} sinc(n f0 τ) e^{jnω0 t}

where

    cn = (Aτ/T) sinc(n f0 τ),   c0 = Aτ/T

The ratio τ/T is called the duty cycle d.

Thus,

    cn = dA sinc(nd)

and the average power is

    Pav = Σ_{n=−∞}^{∞} (dA)² sinc²(nd)
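Parseval's relation for the pulse train can be verified directly: the sum of |cn|² must equal the time-averaged power A²d. A minimal Python check, with an assumed amplitude and duty cycle, is sketched below; the sum is truncated, so the agreement is approximate.

import numpy as np

# Check of Eq. (6.130) for the rectangular pulse train: the time-average power
# A^2*d must equal the sum of |c_n|^2 with c_n = d*A*sinc(n*d).
A, d = 1.0, 0.25                              # illustrative amplitude and duty cycle

n = np.arange(-20000, 20001)
cn = d * A * np.sinc(n * d)
print("sum of |c_n|^2 :", np.sum(cn**2))      # ~ A^2 * d
print("time average   :", A**2 * d)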

Example 6.12 Consider the train of sinusoidal pulses in Fig. 6.9. Draw its amplitude spectrum and write the expression for average power 1 T = The average power Pav is
X

cn =

/2

A cos c t cos ntdt

A [sin c(fc nf) + sin c(fc + nf) ] 2T

/2

P av =

c2 n

n=

and the amplitude spectrum is given in Fig. 6.10. Example 6.13 The triangular wave is shown in Fig. 6.11 along with its Fourier series. Draw the power spectrum of the function. The Fourier series is
1 X cn ej nt T n=

f (t) =

we can also dene cn as cn =

f (t)e
0

j 2nt T

dt

where c0 = 0, since the wave has no average value and sin n 2


n 2

cn =

f or n 6= 0

Since all ck are real, T = 2. Therefore, the series can be written in cosine terms. f (t) = c1 cos t + c2 cos 2t + c3 cos 3t + . . .
95

Moreover, sin 2 (n/2) is zero when n is even and unity when n is odd. cn , therefore can be 4 written as cn = ( n)2 for n = 1, 3, 5, . . . Then the sinusoidal form of the Fourier series. f (t) = 1 1 1 4 cos 5t + cos 7t + . . .) (cos t + cos 3t + 2 9 25 49

The power spectrum (also called line spectrum) is obtained by P avn = |cn | 2 16 4 = = 4 4 watts T2 (n)4 (2)2 n

The line spectrum is plotted in Fig.6.12.

6.10 PERIODIC SIGNAL AND LINEAR SYSTEMS If the input to a stable linear network or system is periodic, the steady state output signal is also periodic with the same period. That this is true can be easily demonstrated by use of transfer functions. If the system is linear, the transform of the output signal is related to the transform of the input signal by the equation. F0 (s) = H(s)Fin (s) (6.133)

where H(s) is the transfer function of the system. Strictly speaking, fin(t) cannot be periodic if it is to have a Laplace transform, but we can define

    fin(t) = (u(t)/T) Σ_{n=−∞}^{∞} cn e^{j2πnft},    cn = ∫_{0}^{T} f(t) e^{−j2πnt/T} dt

as the input signal. Its transform is

    Fin(s) = (1/T) Σ_{n=−∞}^{∞} cn/(s − j2πnf)        (6.134)

With H(s) the transfer function of a linear system,

    F0(s) = (1/T) Σ_{n=−∞}^{∞} cn H(s)/(s − j2πnf)        (6.135)

F0(s) will have poles in the left half plane because of the poles of H(s). These lead to transient terms. If we want only the steady-state part of the inverse transform, we need only the inverse transform of the j-axis poles. If f0(t) is the periodic portion only of the inverse transform, then

    f0(t) = (1/T) Σ_{n=−∞}^{∞} cn H(j2πnf) e^{j2πnft}        (6.136)

and the only effect the system has on the series is to alter the amount of each frequency by the transfer function evaluated at that frequency. The power spectrum of the output signal is given by

    Pav,n = |cn|² |H(j2πnf)|² / T²        (6.137)
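Equations (6.136) and (6.137) translate directly into a small computation: scale each line of the input spectrum by H(j2πnf) and reassemble the steady-state output. The Python sketch below does this for the pulse-train coefficients of Example 6.11 and an assumed first-order low-pass filter; all numerical values are illustrative choices, and the coefficients used are the normalized ones cn/T, so the explicit 1/T factor of Eq. (6.136) is already absorbed.

import numpy as np

# Steady-state response of a filter to a periodic input, per Eq. (6.136).
A, d, T = 1.0, 0.25, 1.0              # pulse amplitude, duty cycle, period (assumed)
tau_f = 0.1                           # filter time constant (assumed)
f1 = 1.0 / T                          # fundamental frequency

n = np.arange(-50, 51)
cn = d * A * np.sinc(n * d)           # normalized input line spectrum
H = 1.0 / (1.0 + 1j * 2 * np.pi * n * f1 * tau_f)   # H(j2*pi*n*f) for H(s)=1/(1+s*tau_f)
cn_out = cn * H                       # each line scaled by the transfer function

# Reassemble the periodic steady-state output from the filtered lines
t = np.linspace(0, 2 * T, 400)
v_out = np.real(sum(c * np.exp(1j * 2 * np.pi * k * f1 * t) for k, c in zip(n, cn_out)))

print("output line powers |c_n H|^2 for n = 0..5:", np.round(np.abs(cn_out[50:56]) ** 2, 5))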

Eqs.(6.136) and (6.137) represent the principal reasons for the use of Fourier series in signal analysis. The steady state eect of a lter on a signal can be seen if we compare the power spectrum of the signal with the frequency response of the lter. Multiplication of the two will produce the spectrum of the output signal. Example 6.14 The rectangular wave of Fig. 6.13 is the input signal (the current is ) of the RLC tank circuit in Fig. 6.14. Find the Fourier series of the input and output signals and their spectra. The Laplace transform of one period of input signal is P (s) = e sT /4 esT /4 s

98

Setting s =

j2n T

, we have cn = e j n/2 e jn/2


2 jn T

sin n 2 =T n If we divide by T , the Fourier series of the input signal is


X sin n 2 ej2 nt/T n n= n 2

is (t) =

Note that for small angles, sinx = x for small n, sin n = 2 (n = 0) = 1/2. The transfer function of the network is H(s) =
s c s RC

, making the d c term

1 s + ( ) + LC 1.77x10 6 c Q 0 = R = 1000 = 10 L 17.7x10 3 so the roots of the denominator are very nearly 2

s=

1 1 ( j) LC 20 = 5.65x103 (0.05 + j)

If the frequency response versus frequency is plotted, a sharp resonance will be seen at f = 900Hz. At this frequency H(s) = R = 1000. Since the fundamental frequency of the 1 periodic wave is T = 100Hz, the response of the network will be large at the ninth harmonic. Substitution of even harmonics (n even) will yield zero, so the input signal consists only of the frequencies 100, 300, 500, 700, 900, 1100, . . . Hz. If the numerator and denominator of H(s) are multiplied by H(s) = and with s= R 1 + ( RC )(s2 + s
RC s

, H(s) can be written

1 LC

j2n = j 200n T 1000 1 + ( j10 )(n 8n1 ) 9

H(j2000n) =

substituting n = 1, 3, 5, 7, 9, 11 and computing only the magnitude of H(s), we obtain 11.25, 37.5, 80.5, 193, 1000 and 240 respectively. The circuit is not an ideal band pass lter, because frequencies other than 900 Hz get through, but it certainly shows a preference for 900 H z.
100

The Fourier series of the output is 1000 sin n 2 i .e 1200n t f0 (t) = v0 t = h 1 + ( j10 (n 891 ) n 9) The power spectrum of the input signal is simply a set of lines with height 1/4 at f = 0 and 1/(n)2 at the odd harmonics. This is shown in Fig. 6.15. The dashed line in Fig. 6.15 is the magnitude of H(s) squared. Fig. 6.16 shows the power spectrum of the output signal. Note that it has no d c term and that the line at n = 1 is 1/ 2 times the square of the magnitude of H at n = 1; that is (11.25)2 = 12.8 2 calculating the others in a similar way. We have: P av3 = 15.8, P av5 = 26.3, P av7 = 76.7, P av9 = 1250, P av11 = 48.1. If the sum of the powers in these harmonics is calculated, the total is 1429.7 watts per ohm (the actual power is one thousandth of this, since R = 1000 and P = v2 /R, and so about 87.5 percent of the power is in the ninth harmonic. Actually, something less than this value is in the ninth harmonic, since the power in 13th, 15th, . . . harmonics would have to be calculated to obtain the total output power. Since the output is a voltage and ninth harmonic is dominant, the output voltage should be nearly a 900Hz sinusoidal with average peak amplitude of 2x100 0 = 70.7 volts. 9 The RLC is approximately a band pass lter, and the assumption that only the ninth harmonic is passed leads to the result that v(t) = 70.7 cos 1800 t in the steady state.

101

PROBLEMS 6.1 Evaluate R

cos mx cos nxdx for integral m and n by use of the identity. 2 cos A cos B = cos(A + B) + cos(A B)

6.2 Find the Fourier series for f (x) if f(x) = , 0, for < x < for < x 2
2

6.3 Find the Fourier series for the function dened by f(x) = 6.4 If f(x) = n 0, sin x, n x, 0, for < x < 0 for 0 < x for < x < 0 for 0 < x

Show that the corresponding Fourier series is


2 X cos(2n 1) X (1)n sin nx + 2 4 n=1 (2n 1) n n=1

6.5 Classify the following functions as even, odd or neither x2 , x sin x, 6.6 Show that if f(x) = then f (x) = 2 cos 2x cos 6x cos 10x + + + ... 4 12 32 52 x3 cos nx, x4 , ex , (x2 )(sin x) 2
2

x, x,

for 0 < x < for < x 2

6.7 If f (x) is an odd function on (T , T ) show that the Fourier series takes the form
X n= 1

f(x) =

bn sin

nt 2 , bn = T T

f(x) sin

nx dx T

6.8 Find the Fourier series for the following function: n for 0 < x < 2 f (x) = 8, 8, for 2 < x < 4 6.9 Write down the Fourier series for the waveforms shown in Figs. 6.17 (a)-(b)
103

6.10 For a one port network, it is given that i = 10 cos t + 5 cos(2t 45o ) v = 2 cos(t + 45o ) + cos(2t + 45o ) + cos(3t 60o ) a) what is the average power to the network. b) Plot the power spectrum. 6.11 By using the following equations e j = cos + j sin 1 2 Z

cn =

f (t)e jnt dt

show that, 2cn = an jbn , 2c0 = a0 , 2c n = an + jbn . 6.12 Determine whether f (t) is periodic. If it is, nd its period and its fundamental frequency. Whether it is periodic or not, write the function in the exponential form and list all frequencies contained within the function. f (t) = 5 + 7 cos(20t + 35o ) + 2 cos(200t 30o ) 6.13 The current source in the circuit shown in Fig. 6.18 is a square wave whose Fourier series is
X

is (t) = with cn = 0 for n even, and

cn ej2 nt/T

n=

n sin cn = A n2 2 for n odd a) Sketch i s(t), v0 (t). b) Find the Fourier series of v0 (t). c) Write the rst ve nonzero terms of cosine series for v 0 (t). d) Calculate Pav. for v0 (t) if R = 100. Plot the power spectrum. e) Since the square wave can be thought as successions of steps, the steady state term v0 (t) must be a succession of step responses. Without using the Laplace transform, determine the waveshape of steady state.
105

CHAPTER - VII THE FOURIER TRANSFORMS


INTRODUCTION
In the preceding chapter on Fourier series we showed that the period can be extended indefinitely for non-periodic signals; the resulting equations are called Fourier transform pairs. These transform pairs are extremely useful in dealing with electromagnetic radiation, signal transmission and filtering. The practical importance of the Fourier transform follows from the fact that no practical signal is mathematically periodic, since all real signals (speech, music, audio signals) have both beginnings and ends. Such signals may be strictly time limited, so that f(t) is identically zero outside a specified interval, or asymptotically time limited, so that f(t) → 0 as t → ±∞. If f(t) is square integrable over all time, that is,

    lim_{T→∞} ∫_{−T}^{T} |f(t)|² dt < ∞        (7.1)

then the frequency-domain description is provided by the transforms

    F(f) = F[f(t)] = ∫_{−∞}^{∞} f(t) e^{−j2πft} dt        (7.2)

and

    f(t) = ∫_{−∞}^{∞} F(f) e^{j2πft} df        (7.3)

Eqs. (7.2) and (7.3) are called the Fourier transform pair.

7.1 AVERAGE VALUE AND ENERGY IN A NON-PERIODIC SIGNAL
Since the signals here are non-periodic functions, the average of the signal is defined as

    <f(t)> = lim_{T→∞} (1/T) ∫_{0}^{T} f(t) dt        (7.4)

and the power as

    P = <f²(t)> = lim_{T→∞} (1/T) ∫_{0}^{T} f²(t) dt        (7.5)

The integral in Eq. (7.5) may remain finite while P → 0 as T → ∞. Since time-limited signals must have zero average power when averaged over infinite time, average power is not a useful measure here, and we turn to energy. By definition, the total energy E is the integral of the instantaneous power. Assuming that f(t) is applied to a one-ohm resistor,

    E = ∫_{−∞}^{∞} f²(t) dt        (7.6)

Using Parseval's theorem,

    E = ∫_{−∞}^{∞} f(t) [ ∫_{−∞}^{∞} F(f) e^{j2πft} df ] dt        (7.7)
      = ∫_{−∞}^{∞} F(f) [ ∫_{−∞}^{∞} f(t) e^{j2πft} dt ] df        (7.8)
      = ∫_{−∞}^{∞} F(f) F(−f) df = ∫_{−∞}^{∞} |F(f)|² df        (7.9)

Eq. (7.9) is called Rayleigh's energy theorem. If f(t) is a voltage waveform, then F(f) has the dimensions of volts per unit frequency and describes the distribution, or density, of the signal voltage in frequency. By like reasoning, |F(f)|² is the density of energy in the frequency domain. Define S(f) as the energy spectral density:

    S(f) = |F(f)|²        (7.10)

S(f) is positive and real. Moreover, if f(t) is real, F(f) is Hermitian and S(f) is an even function of frequency. The total energy is therefore

    E = ∫_{−∞}^{∞} S(f) df = 2 ∫_{0}^{∞} S(f) df        (7.11)
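Rayleigh's theorem can be checked numerically. For the one-sided exponential e^{−at} u(t) the two sides of Eq. (7.9) can be integrated and compared; the Python sketch below is an illustration only, with an arbitrary decay constant, and uses the standard transform 1/(a + j2πf) for this signal.

import numpy as np
from scipy.integrate import quad

# Numerical check of Eq. (7.9) for f(t) = exp(-a t) u(t).
a = 2.0

E_time = quad(lambda t: np.exp(-2 * a * t), 0, np.inf)[0]      # integral of f^2
S = lambda f: 1.0 / (a**2 + (2 * np.pi * f) ** 2)              # |F(f)|^2
E_freq = 2 * quad(S, 0, np.inf)[0]                             # Eq. (7.11)

print("time-domain energy     :", E_time)    # 1/(2a)
print("frequency-domain energy:", E_freq)    # same value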

7.2 LINE SPECTRA VS. CONTINUOUS SPECTRA
Consider a narrow frequency interval Δf centred at f1, that is, f1 − Δf/2 < |f| < f1 + Δf/2, and suppose that this interval includes the mth harmonic of a periodic signal, f1 = m f0. The frequency component of the periodic signal is

    cm e^{jω1 t} + c−m e^{−jω1 t} = 2|cm| cos(ω1 t + θm)

so that its average power is 2|cm|². For a non-periodic signal, the frequency component represented by the interval is approximately

    F(f1) e^{jω1 t} Δf + F(−f1) e^{−jω1 t} Δf = 2|F(f1)| Δf cos[ω1 t + arg F(f1)]

This interval contains energy approximately equal to 2|F(f1)|² Δf. Therefore, a line spectrum represents a signal that can be constructed from a sum of discrete frequency components, with the signal power concentrated at specific frequencies. On the other hand, a continuous spectrum represents a signal that is constructed by integrating over a continuum of frequency components, with the signal energy distributed continuously in frequency.

Example 7.1
Consider the time-limited pulse of Fig. 7.1, whose amplitude is A between −τ/2 and τ/2 and zero otherwise. Draw its amplitude spectrum and its energy spectral density.

    F(f) = ∫_{−τ/2}^{τ/2} A e^{−j2πft} dt = Aτ sin(πfτ)/(πfτ) = Aτ sinc(fτ)        (7.12)

    S(f) = A²τ² sinc²(fτ)        (7.13)

The graphs of F(f) and S(f) are given in Fig. 7.2 and Fig. 7.3. In this example, 1/τ can be taken as the measure of spectral width. If the pulse width is increased the spectral width is decreased, and vice versa; this phenomenon is called reciprocal spreading. Let us see what percentage of the total energy is contained in |f| < 1/τ. Using the energy spectral density,

    2 ∫_{0}^{1/τ} S(f) df = 2(Aτ)² ∫_{0}^{1/τ} sinc²(fτ) df        (7.14)


= 0.92A2 2 = 0.92E Thus over 90 percent of signal energy is contained in |f| < Example 7.2 Find and draw the amplitude spectrum of a Gaussain pulse of Fig. 7.4. f (t) = Ae (t/ ) Z
2

F (f) =

Ae ( t/ )

cos tdt

(7.15)

= 2A = Ae

0 (f )2

e(t/)2 cos tdt (7.16)

Eq. (7.16) is a Gaussian pulse in frequency as is shown in Fig. 7.5. 7.3 FOURIER TRANSFORM THEOREMS Almost all Laplace transform theorems are also Fourier transform theorems. Those that are the same will not be proved. 7.3.1 Multiplication by a scalar : If F (f ) is a Fourier transform of f(t), F (f ) is a Fourier transform of f(t). 7.3.2 If G(f ) and F (f) are Fourier transforms respectively of g(t) and f (t), the Fourier transform of [g(t) + f (t)] is [G(f ) + F (f )] 7.3.3 If F (f) is the Fourier transform of f(t), the Fourier transform of e j2f 0t f(t) is F (f f0 ). This is complex translation theorem of Laplace transforms with s = 2jf0 , f0 may be positive or negative real. 7.3.4 Real Translation: If F (f ) is the Fourier transform of f (t), the Fourier transform of f(t t0 ) is ej2 f t0 F (f), where t0 may be positive or negative real. 7.3.5 If F (f) is the Fourier transform of f(t), the Fourier transform of df (t)/dt is j2fF (f) Note that this assumes that F (f) behaves like 1/f n , n 1 as f becomes large. Actually F (f ) can approach a constant as f becomes innite. o nR t f(x)dx is 7.3.6 If F (f ) is Fourier transform of f(t), the Fourier transform of F (f)/(j2f), provided this division by f does not produce a pole at f = 0.
111

Thus F (f ) must vanish at f = 0 at least as rapidly as f. More exactly lim |


f 0

F (f ) |< f

7.3.7 If F (f) is the Fourier transform of f (t), the Fourier transform of tf(t) is (j/2)dF (f)/df . 7.3.8 If F (f) is the Fourier transform of f (t), F (f /a) is the transform of f(at) 7.3.9 If G(f) is the Fourier transform of g(t), g(f ) is the Fourier transform of G(t).This theorem is peculiar to the Fourier transform and for the rst time confuses the use of capital letters for transforms and lower case functions of time. Proof: We know that G(f) = and Z

g(x)e j2f xdx

(7.17)

g(t) =

G(y)ej2 ty dy

(7.18)

If in Eq. (7.18) t is replaced by f g(f) = Z G(y)e j2 f y dy

which is seen to be the denition of G(t). Example 7.3 a) Since F[p(t)] = A sin f b f

Where p(t) is a rectangular pulse of height A and width b and centered on the origin, replacing f by t in the transform and t by f in p(t) gives. A sin bt = p(f) = p(f ) t

since the pulse p(t) is symmetric about t = 0 axis. Thus the Fourier transform of A sin(bt)/t is A for |f| < b and 0 for |f| > b. b) Since F [eat u(t)] =
114

1 j 2f + a

then, F

1 = eaf u(f) j2t + a

The theorem actually works both ways since either t is replaced by f and f by t or t is replaced by f and f by t. For example, in the last pair above, put t for f on the right and obtain eatu(t). Now place f for t on the left and get the pair F [e at u(t)] = 1 j2f + a

7.3.10 If F (f) is the Fourier transform of f(t), the Fourier transform of f(t) is F (f ). Note that if f (t) is real, the conjugate F (f), F (f ), is equal to F (f). Proof: Since F (f) = then F (f ) = Z Z

f (x)ej2f x dx

f(x)e j2f xdx

If the dummy variable of integration is changed to y = x, dx = dy. For x = , y = +. For x = , y = . Z f(y)ej2 f y (dy) F (f) =

The sign of the last integral can be changed if limits of integration are reversed, so Z f (y)e j2 f y dy (7.19) F (f) =

But Eq. (7.19) is, by denition, the Fourier transform of f (t). Example 7.4 a) Since the Fourier transform of e tu(t) is 1 must be ( j2 f +) b) The Fourier transform of et cos t u(t) is j2f + s+ | = (s + )2 + 2 s=j2 f (j2f + )2 + 2 and the transform of e t cos(t)u(t) = e t cos (t)u(t) is simply
115
1 j2 f +

, the Fourier transform of et u(t)

j2f + (j 2f + )2 + 2 7.3.11 If f(t) is a real and even function of t that is, if f(t) = f (t) and if f(t) is transformable, its Fourier transform F (f ) is real and is an even function of f . Proof: Let the Fourier transform of f (t)u(t) be F (f ). Then by Theorem 10, the transform f(t)u(t) is F (f). But, f(t) = f(t)u(t) +f (t)u(t) since u(t)+ u(t) = 1,. The even property of f (t) permits the second f (t) in the last equation to be replaced by f(t), so f(t) = f(t)u(t) + f(t)u(t) and F (f) = F 0 (f ) + F 0 (f ) Clearly, F (f) is even because if f is replaced by f, the equation does not change. But since f(t)u(t) is real, F 0(f) = F (f) by Theorem 10. Then F (f) = F 0 (f ) + F (f) But the sum of any complex number and its conjugate is twice the real part of the number, so F (f ) = 2Re[F 0 (f)] 7.3.12 If f(t) is real and odd function of time that is,if f (t) = f (t), the transform of f(t), if it exists, is imaginary and an odd function of f . Proof: Evidently f (t) = f (t)u(t) + f (t)u(t) and because f(t) is odd f(t) = f (t)u(t) f (t)u(t) If F 0 0(f ) is the transform of f(t)u(t), the transform of f(t) is F (f ) = F 00 (f) F 00 (f ) which is seen to be odd. The fact that f (t)u(t) is real means that F 00 (f ) F 00 (f ), so F (f ) = F 00 (f) F 00 (f ) = 2jIm[F 00 (f )] Again there follows a corollary: If f (t) is an odd function of time and F (f ) is its transform, the imaginary part of the transform of f(t)u(t) is jF (f )/2
116
0

Note that any function can be expressed as sum of an even and an odd functions, since for f(t) neither even nor odd. f(t) = fe (t) + f0 (t) with f(t) + f (t) 2 f(t) f(t) 2 (7.20)

fe (t) = and f0 (t) =

(7.21)

(7.22)

Then Theorems 11 and 12 imply that if F (f) is the transform of f (t) F[fe (t)] = Re[F (f)] F[f0 (t)] = jIm[F (f)] 7.4 SUMMARY OF FOURIER TRANSFORM THEOREMS All the theorems in the preceding section are summarized by the following equations. Let f(t) and g(t) be transformable functions with transforms F (f) and G(f) respectively. Then F[af(t)] = aF (f) F[f (t) + g(t)] = F (f ) + G(f ) F ej2 f0t ft = F (f f0 ) F[f(t t0 )] = ej2 t0f F (f) df(t) = j2f F (f ) dt F (f) f(x)dx) = j2f
t

(7.23) (7.24)

(7.25)

(7.26)

(7.27)

(7.28)

(7.29)

provided that fF (f) is bounded as f F Z (7.30)

provided that F (f ) is bounded for f = 0


117

dF (f) df F(f ) a F[f(at)] = a F[G(t)] = g(f) F[G(t)] = g(f) F[f(t)] = F (f ) F[fe (t)] = Re[F (f)] F[fe (t)] = jI m[F (f)] F[tf (t)] = 7.5 THE INVERSE FOURIER TRANSFORM OF A RATIONAL FUNCTION

j 2

(7.31) (7.32) (7.33) (7.34) (7.35) (7.36) (7.37)

The inverse transform of a rational function of f can be found by using a procedure almost exactly like that used for the Laplace transform. If we use p = j2f instead of s, so that f = jp/2 and the Fourier transform can be written as a ratio of polynomials in p. The inverse transform of rational function p is then found by a partial fraction expansion exactly like that used to obtain inverse Laplace transform of function of s. The only new aspects of the procedure are as follows: 1. If there are poles on the j-axis of the p-plane, the function comes under the category of special Fourier Transforms and caution should be exercised. 2. The inverse transforms of the terms in the partial fraction expansion that have left half p-plane poles are exactly the same as those in the Laplace transform. 3. The Inverse transform of the terms in partial fraction expansion that have right half p-plane poles are also the same functions, as those in Laplace transform, but instead of multiplying them by u(t), we multiply by u(t). Example 7.5 Find the inverse Fourier Transform of F (f) = Setting f =
jp 2

1 24 2 f 2 + j 2f

F (f) = =

1 1 = 2 p2 + p (p + 1)(p 2)
1 3

p +1 p2 1 t f(t) = [e u(t) + e2 tu(t)] 3 Example 7.6 Find the Inverse Fourier Transform of F (f) = A f 4 + a4

1 3

118

where A and a are real positive constants. Replace f by jp/(2). Then A F (f) = jp 4
2

+ a4 (2)4 A = 4 p + (2a)4

The roots of the denominator are p4 = (2a)4 , p2 = j(2a)2 Since the square roots of +j are + j)/ 2, and the square roots j are (1 j)/ 2, (1 of the four roots are 2a(1 + j), 2a(1 j), 2a(1 + j ), and 2a(1 j). In factored form, we have F (f ) = 164 A (p + 2a j 2a)(p 2a j 2a)(p + 2a + j 2a)(p 2a + j 2a)

If this is expanded in the partial expansion, only the terms with j in the factors are required. "
1 6 4 5o 32 3a 3 j 2a 2 a 16 13 5o 32 3a 3

F (f) = 16 A = so, f (t) =

16 45o 16 45o A + conjugate 2a 3 p + 2a j 2a p 2a j 2a

p+

p+

2a

j 2a

+ conjugate

i A h 2 at cos( 2at 45o )u(t) + e 2 at cos( 2at + 45o )u(t) e a3

For negative t, the cosine in the second term can be written as cos( 2a|t| + 45) = cos( 2a|t| 45o) since cosine is an even function. Then f (t) may be written as f (t) = h i A 2 a|t| e cos 2a|t| 45o (t) 3 a

and it is seen to be an even function of t, as it should be with F (f) being real. 7.6 CONVOLUTION Suppose that a linear system is excited by an input signal fin (t), then the output f0 (t) will be some function that depends upon the transfer function of linear system as well as upon the input signal. As already discussed in the treatment of Laplace transform F0 (s) = Fin (s)H(s)
119

(7.38)

and f0(t) = Z

fin ( )h(t )d

(7.39)

In a similar manner, convolution for the Fourier transform can be defined.

Theorem 7.1: Let f1(t) and f2(t) be Fourier transformable with transforms F1(f) and F2(f) respectively. The Fourier transform of

    y(t) = ∫_{−∞}^{∞} f1(τ) f2(t − τ) dτ

is F1(f) F2(f).

Proof: By definition, the Fourier transform of y(t) is

    Y(f) = ∫_{−∞}^{∞} [ ∫_{−∞}^{∞} f1(τ) f2(x − τ) dτ ] e^{−j2πfx} dx

If the integrated integral is expressed as a double integral Z Z Y (f) = f1 ( )f2 (x )ej2 f x dxd

If the new variable = x is substituted for x, then dx = d and the limits on the x integration become and for the integration. But for xed , this will be same as and , so Z Z Y (f) = f1 ( )f2 ()e j2( + ) dd

But the integrand can now be separated into two functions, one in alone and one in alone Y (f) = Z

f1 ()e j2d d

f2 ()ej2 f d

But this is the product of two Fourier transforms F1 and F2 , so Y (f ) = F1 (f)F2 (f) Since F1 (f )F2 (f) and F2 (f)F1 (f ) must be equal, the order in which the functions are chosen is immaterial, Z Z f1 ()f2 (t )d = f2 ( )f1 (t )d
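Theorem 7.1 can be illustrated numerically by sampling: with a small time step the convolution integral becomes a discrete convolution, and the transform is approximated by dt times an FFT. The Python sketch below checks that the transform of the convolution equals the product of the transforms; the signals and the grid are arbitrary illustrative choices, and the FFT here is only a sampled approximation of the continuous transform.

import numpy as np

# Discrete illustration of Theorem 7.1.
dt = 0.01
t = np.arange(0, 10, dt)
f1 = np.exp(-t)                                   # f1(t) = e^-t  u(t)
f2 = np.exp(-2 * t)                               # f2(t) = e^-2t u(t)

y = np.convolve(f1, f2) * dt                      # y(t) = (f1 * f2)(t), sampled
N = 4096                                          # FFT length >= len(y)

lhs = np.fft.fft(y, N) * dt                       # transform of the convolution
rhs = (np.fft.fft(f1, N) * dt) * (np.fft.fft(f2, N) * dt)   # F1(f) F2(f)
print("max difference:", np.max(np.abs(lhs - rhs)))          # ~ machine precision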

120

Theorem 7.2: If f1(t) and f2(t) are Fourier transformable, with transforms F1(f) and F2(f) respectively, the Fourier transform of f1(t)f2(t) is given by
F[f1(t)f2(t)] = ∫_{-∞}^{∞} F1(y) F2(f - y) dy          (7.40)
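Theorem 7.1 is easy to verify numerically. The sketch below is an added illustration (the signals are arbitrarily chosen, not from the text): it convolves f1(t) = e^{-t}u(t) and f2(t) = e^{-2t}u(t) directly in the time domain and compares the result with the known closed form (e^{-t} - e^{-2t})u(t).

# Numerical illustration of Theorem 7.1 (added example; signals chosen arbitrarily).
import numpy as np

dt = 1e-3
t = np.arange(0.0, 10.0, dt)
f1 = np.exp(-t)          # f1(t) = exp(-t) u(t) on the sampled grid
f2 = np.exp(-2.0 * t)    # f2(t) = exp(-2t) u(t)

# Direct time-domain convolution; the factor dt approximates the integral.
y_direct = np.convolve(f1, f2)[: t.size] * dt
y_exact = np.exp(-t) - np.exp(-2.0 * t)

print("max discretization error:", np.max(np.abs(y_direct - y_exact)))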

Example 7.7 Let f1(t) = e^{-t}u(t) and f2(t) = e^{-2t}u(t + 2); convolve f1 and f2.
Let y(t) be the result of convolving f1 and f2. Then
y(t) = ∫_{-∞}^{∞} e^{-τ} u(τ) e^{-2(t-τ)} u(t - τ + 2) dτ
     = ∫_{-∞}^{∞} e^{-τ} e^{-2t} e^{2τ} u(τ) u(t - τ + 2) dτ = e^{-2t} ∫_{-∞}^{∞} e^{τ} u(τ) u(t - τ + 2) dτ
Let y = τ - 2, or τ = y + 2. Then
y(t) = e² e^{-2t} ∫_{-∞}^{∞} e^{y} u(y + 2) u(t - y) dy
But the integrand is zero for y > t, so u(t - y) can be dropped if the upper limit is changed to t. Then
y(t) = e² e^{-2t} ∫_{-∞}^{t} e^{y} u(y + 2) dy
The lower limit may now be changed to -2, and
y(t) = e² e^{-2t} u(t + 2) ∫_{-2}^{t} e^{y} dy = [e² e^{-t} - e^{-2t}] u(t + 2)
7.7 SOME SPECIAL FOURIER TRANSFORMS
7.7.1 THE UNIT IMPULSE
The unit impulse function δ(t) presents some difficulties. If the function is transformed directly,
F[δ(t)] = ∫_{-∞}^{∞} δ(x) e^{-j2πfx} dx = 1          (7.41)

as in the case of the Laplace transform. The difficulty arises when we attempt the inverse transform
δ(t) = ∫_{-∞}^{∞} e^{j2πty} dy

This can only be demonstrated indirectly, since the integral does not exist in the ordinary sense. If 1 is the Fourier transform of δ(t), then according to Eq. (7.34) δ(f) is the transform of 1. It is convenient to define δ(t) as the limiting form of any function of time whose transform approaches unity. There are many such functions, and a few are listed here.
1. The tall rectangular pulse of height 1/τ and width τ. Its transform is known to be (sin πfτ)/(πfτ), which certainly approaches unity for any f if τ is made sufficiently small.
2. The tall triangular pulse of height 1/τ and width 2τ, which has a transform of (sin πfτ)²/(πfτ)². This approaches unity for any fixed f so long as τ is made sufficiently small.
3. The function e^{-t²/2σ²}/(σ√(2π)) has a transform e^{-2π²σ²f²} and can be made to approach unity for any value of f, however large, if σ is made small enough. Then
δ(t) = lim_{σ→0} e^{-t²/2σ²}/(σ√(2π))          (7.42)
is another choice for the definition of the delta function.
4. Since the delta function is real and even, any transform that is real and even and that approaches unity in a limit can be used for the transform of δ(t). Consider the function [U(f + f1) - U(f - f1)], which is unity in the region -f1 < f < f1; this transform approaches unity for all f as f1 becomes infinite. For any finite f1 the inverse transform is
∫_{-f1}^{f1} e^{j2πty} dy = sin(2πf1t)/(πt)
Thus we might define
δ(t) = lim_{f1→∞} sin(2πf1t)/(πt)
Although this is a useful definition mathematically, it is somewhat awkward to visualize as a function of time. If the numerator and denominator are multiplied by 2f1, the relation reads
δ(t) = lim_{f1→∞} 2f1 sin(2πf1t)/(2πf1t)          (7.43)
which is known to have the value 2f1 at t = 0. Furthermore, the area under the function can be shown to be unity. Thus the function becomes infinitely high at t = 0 and it does have the proper area under it. However, the envelope of the function does not go to zero as f1 becomes infinite.

5. Another possibility is the transform pictured in Fig. 7.6. Again, as f1 becomes infinite, the triangle approaches unity for all f, though it never gets there for any f but zero. We need transform the positive portion only; then we take twice the real part, since the transform is real and an even function of f. In the positive region the function is given by
(1 - f/f1)[u(f) - u(f - f1)]
and its inverse transform is the same as the transform of
(1 - t/f1)[u(t) - u(t - f1)]
with f replaced by t. The Laplace transform of this triangle is
1/s - 1/(f1 s²) + e^{-sf1}/(f1 s²)
With s = j2πf, the Fourier transform is
1/(j2πf) + [1 - e^{-j2πf1f}]/[f1(2πf)²]
Only the real part is needed, and thus the 1/(j2πf) term can be dropped. Since
1 - e^{-j2πf1f} = 1 - cos(2πf1f) + j sin(2πf1f)
the real part of the transform is simply
[1 - cos(2πf1f)]/[f1(2πf)²] = 2 sin²(πf1f)/[f1(2πf)²]
Replacing f by t and doubling the result gives
4 sin²(πf1t)/[f1(2πt)²]
as the inverse transform of the function of Fig. 7.6.
6. There are many more definitions that could be constructed for the delta function. For instance, e^{-|f/f1|} is an even function of f and approaches unity as f1 becomes infinite. Its inverse transform can be found by noting that the Fourier transform of e^{-α|t|} is 2α/(α² + 4π²f²). If we write t for f and 1/f1 for α, we obtain the inverse transform of e^{-|f/f1|}, which is
(2/f1)/[(1/f1)² + 4π²t²]
Since f1 is to be made large, we might let ε = 1/(2πf1); as f1 becomes large, ε becomes small. Then the last expression becomes
2(2πε)/(4π²ε² + 4π²t²) = ε/[π(ε² + t²)]
Then
δ(t) = lim_{ε→0} ε/[π(ε² + t²)]          (7.44)
This definition agrees well with the initial concept of the impulse, since at t = 0 it has the value 1/(πε), which becomes large as ε → 0. Also, for t ≠ 0 the function approaches ε/(πt²), or zero, as ε → 0.
7.7.2 THE STEP FUNCTION
Since the Fourier transform of unity has been defined as δ(f), it should be possible to define the Fourier transform of the step function. Before we begin, note that the even part of the step function is
(1/2)[u(t) + u(-t)] = 1/2
so the real part of the Fourier transform of u(t) is δ(f)/2. The transform of u(t) does not exist in the strict sense. The transform of e^{-εt}u(t) does exist; it is 1/(j2πf + ε). This function approaches the step function as ε approaches zero, so the transform of the step function must be 1/(j2πf). But this has no real part, yet it has already been established that the real part is δ(f)/2. If what has been done is correct,
F[u(t)] = lim_{ε→0} 1/(j2πf + ε)
Multiplying numerator and denominator by (ε - j2πf), we have
F[u(t)] = lim_{ε→0} [ ε/(ε² + (2πf)²) - j2πf/(ε² + (2πf)²) ]
The limit of the imaginary part, as indicated before, is 1/(j2πf), but the real part is, after dividing numerator and denominator by 4π²,
Re{F[u(t)]} = lim_{ε→0} (ε/4π²)/[(ε/2π)² + f²]
If we set a = ε/(2π), then a → 0 as ε → 0, so
Re{F[u(t)]} = lim_{a→0} a/[2π(a² + f²)]
But this is exactly one half the expression for δ(f) given in Eq. (7.44) with t replaced by f. The real part of the transform is indeed δ(f)/2, and
F[u(t)] = δ(f)/2 + 1/(j2πf)          (7.45)
7.7.3 THE SIGNUM FUNCTION

The function signum of t, abbreviated sgn(t), is equal to +1 when t is positive, -1 when t is negative, and 0 when t = 0. Thus
sgn(t) = u(t) - u(-t)          (7.46)
This function is Fourier transformable in the sense that a constant and a step function are transformable, so that
F[sgn(t)] = [δ(f)/2 + 1/(j2πf)] - [δ(f)/2 - 1/(j2πf)] = 1/(jπf)          (7.47)
Poles on the jω axis in the p-plane are transformable, then; with p = j2πf,
F[(1/2) sgn(t)] = 1/p          (7.48)
and
F[(1/2) sgn(t) e^{j2πf0t}] = 1/(p - j2πf0)          (7.49)
Example 7.8 Find the inverse Fourier transform of
(4p⁴ - 2p³ + 6p² - 66p - 18) / [p(p + 1)(p - 2)(p² + 9)]
Writing the partial-fraction expansion of the above expression,
1/p + 2/(p + 1) - 1/(p - 2) + (1 - j)/(p - 3j) + conjugate
1. The left-half p-plane poles have the same inverse transform as in the Laplace transform; hence
F⁻¹[2/(p + 1)] = 2e^{-t} u(t)
2. The right-half p-plane poles have the same inverse transform as in the Laplace transform, but with u(t) replaced by -u(-t):
F⁻¹[-1/(p - 2)] = e^{2t} u(-t)
3. The jω-axis p-plane poles have the same inverse Laplace transform, but with u(t) replaced by sgn(t)/2. Hence
F⁻¹[1/p + (1 - j)/(p - 3j) + conjugate] = (sgn(t)/2)[1 + 2√2 cos(3t - 45°)]
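The partial-fraction expansion used in Example 7.8 can be confirmed with a computer-algebra system. This is an added verification sketch, not part of the text; note that the quadratic term returned by sympy corresponds to the conjugate pair (1 - j)/(p - 3j) + (1 + j)/(p + 3j).

# Verification of the partial-fraction expansion in Example 7.8 (added sketch).
import sympy as sp

p = sp.symbols('p')
F = (4*p**4 - 2*p**3 + 6*p**2 - 66*p - 18) / (p * (p + 1) * (p - 2) * (p**2 + 9))

print(sp.apart(F, p))
# Expected: 1/p + 2/(p + 1) - 1/(p - 2) + (2*p + 6)/(p**2 + 9),
# where (2p + 6)/(p^2 + 9) = (1 - j)/(p - 3j) + (1 + j)/(p + 3j).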

7.7.4 PERIODIC FUNCTION
Since the transform of e^{j2πf0t} is understood to be δ(f - f0), any periodic or almost periodic function that is expressible in an exponential series has a Fourier transform. Thus with
f(t) = Σ_{k=-∞}^{∞} Bk e^{j2πfk t}          (7.50)
F[f(t)] = F(f) = Σ_{k=-∞}^{∞} Bk δ(f - fk)          (7.51)

PROBLEMS 7.1 Find the Fourier transform of the functions pictured in Fig. 7.7. 7.2 Find the Fourier transforms of a) e a|t| h i n b) sin(t) u(t + 2 n ) u(t 2 ) , n is an integer

7.3 Using the transform of triangular pulse in Fig. 7.7(a) and Eq. (7.33), nd the Fourier transform of (sin2 at)/t2 . 7.4 Find the Fourier transforms of a) e at u(t) eat u(t) b)
1 a2 +t2

7.5 Find the Fourier transforms of a) e at u(t + b), a and b are positive b) eat [u(t) u(t b)] 7.6 Find the inverse Fourier transform of the following functions. a) b) c)
B f 2+a 2 jB f f 2+b 2 p 24 (p+1 )(p+ 2)(p3)

where B and b are real and p = j2f 7.7 Find the inverse Fourier transforms of a) b)
1 2f 2 f 4+
1.25f 2 + 0.25 2 4

1 (f +j)2

7.8 Find the inverse Fourier transforms of [ a ] f2 i a) h 2 2 2 a f 2+{ 2 } b) h


j af f 2+
a { 2 } 2 2

7.9 Convolve by direct integration, and check by transforms, the functions a) e 2t u(t) and e3tu(t) b) e tu(t) and sin 2t u(t) c) et u(t) and et u(t)

i2


7.10 Let f(t) be Laplace transformable with transform F (s) and having e = c, show that regardless of the sign of c, e bt f(t) has a transform, provided that b > c. Show that this Fourier transform is F (j2f + b). 7.11 Use the theorem on complex translation to demonstrate that 1 2j Z
b+j

L[f1 (t)f2 (t)] =

bj

F1 ()F2 (s )d

provided that b is greater than the abscissa of exponential order of either f1 or f2 . 7.12 The auto correlation function ( ) of a function f(t) is dened by ( ) = Z

f(x)f (x )dx

a) Show that ( ) is an even function of . b) Show that 2 (f ), the Fourier transform of (), is |F (f )|2 . c) Demonstrate that several functions can have the same correlation function. Hint: It is necessary to show only that several functions have dierent transforms but the same F (f )|2 . 7.13 Given that the Fourier transform of unity is (f ), nd the Fourier transforms of cos(2f0 t) and sin(2f0 t). 7.14 Find the inverse Fourier transforms of a) b) c)
p p2+ 2 p2+ 2 2 p(p+1)


CHAPTER - VIII APPLICATIONS OF THE FOURIER TRANSFORM


8.1 MODULATION Let S(t) be a signal whose spectrum lies in the vicinity of carrier frequency fc , a frequency high enough that radiation is economical and practical. The most general signal that satises this requirement is the function S(t) = A(t) cos [2fc t + (t)] (8.1)

Where A(t) is some arbitrary function of time whose spectrum does not exceed the frequency fc (it is usually small compared to fc) and (t) is signal whose maximum derivative does not exceed in magnitude the value 4fc /3 and whose spectrum does not exceed fc/2. If an audio or video signal Sm (t) is to be carried by the function of Eq. (8.1) either A(t) must be some function of Sm (t) or (t) must be some function of Sm (t). If A(t) is a function of Sm (t) and (t) is a constant, the result is called amplitude modulation. If (t) is a function of Sm (t) and A(t) is a constant, the result is called angle modulation. The two most common examples of angle-modulation are phase modulation (P M ) and frequency modulation (F M ). In the following subsections, the Fourier spectrum will be used interchangeably with line spectrum or the energy spectrum. This will simply be the transform of the signal, which is plotted graphically in magnitude, or will simply be the square root of the energy spectrum. In case of sinusoids, the Fourier spectrum will be a pair of impulses. 8.1.1 Amplitude Modulation An arbitrary message x(t) can represent the ensemble of all probable messages from a given source. Assume that the messages are bandlimited in , above which spectral content is negligible and unnecessary. X(f) = 0 for |f| > Also, let the message be scaled to have a magnitude not exceeding unity. |x(t)| < 1 or < x2 (t) > 1 The ensemble average then also satises X 2 (t) 1

(8.2)

(8.3)

(8.4)

(8.5)

The envelope of the modulated carrier has the same shape as the message waveform. The modulated signal is
Xc(t) = Ac[cos ωct + m x(t) cos ωct] = Ac[1 + m x(t)] cos ωct          (8.6)
where Ac cos ωct is the unmodulated carrier and m is the modulation index. The modulated amplitude is Ac(t) = Ac[1 + m x(t)], with fc much greater than the message bandwidth and m ≤ 1. When m = 1, 100 percent modulation takes place and the amplitude varies from 0 to 2Ac. If m > 1, overmodulation takes place, which results in carrier phase reversal and envelope distortion. The message signal and the modulated signal are shown in Fig. 8.1. The Fourier transform of Eq. (8.6) is
Xc(f) = (Ac/2)[δ(f - fc) + δ(f + fc)] + (mAc/2)[X(f - fc) + X(f + fc)]          (8.7)

The spectrum of the modulated signal is shown in Fig. 8.2. The properties of Xc(f) in Eq. (8.7) are as follows:
1. It is symmetric about the carrier frequency, with the amplitude an even function and the phase an odd function.
2. The transmission bandwidth BT required for an AM signal is exactly twice the message bandwidth.
The average transmitted power PT is
PT = <Xc²(t)> = Ac² <[1 + m x(t)]² cos² ωct>
   = (Ac²/2){ <1 + 2m x(t) + m² x²(t)> + <[1 + m x(t)]² cos 2ωct> }          (8.8)
Since fc is much greater than the message bandwidth, the second term averages to zero. If the d-c component of the message is also zero, then
PT = (Ac²/2)[1 + m² <x²(t)>]
If the message source is ergodic,
PT = (Ac²/2)(1 + m² x̄²) = Pc + 2PSB          (8.9)
where x̄² is the ensemble average and the carrier power Pc is (1/2)Ac². The power in each sideband is
PSB = (1/4) m² x̄² Ac² = (1/2) m² x̄² Pc ≤ (1/2) Pc
This implies that at least 50 percent of the total power resides in the carrier, which is wasted. The maximum voltage is Xc,max = 2Ac and, therefore, the peak instantaneous power is proportional to 4Ac².
Example 8.1 If x(t) = Am cos 2πfmt and the carrier is Ac cos ωct, find and draw the modulated signal.

xc(t) = Ac(1 + m Am cos ωmt) cos ωct
      = Ac cos ωct + (m Am Ac/2)[cos(ωc - ωm)t + cos(ωc + ωm)t]
The spectrum of the modulated signal is given in Fig. 8.3.
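A tone-modulated AM wave of this kind can also be generated and inspected numerically. The sketch below is an added illustration (all numerical values are arbitrary choices, not from the text); the FFT of the modulated signal shows the carrier and the two sidebands at fc ± fm.

# AM with tone modulation: spectrum shows carrier plus sidebands at fc +/- fm (added sketch).
import numpy as np

fs = 100_000.0                 # sampling rate, Hz (arbitrary)
N = 20_000
t = np.arange(N) / fs
fc, fm = 10_000.0, 500.0       # carrier and modulating frequencies (arbitrary)
m, Am, Ac = 0.5, 1.0, 1.0

x = Am * np.cos(2 * np.pi * fm * t)                    # message
xc = Ac * (1.0 + m * x) * np.cos(2 * np.pi * fc * t)   # AM wave, Eq. (8.6)

mag = np.abs(np.fft.rfft(xc)) / N
f = np.fft.rfftfreq(N, 1.0 / fs)
print("spectral lines near (Hz):", f[mag > 0.05])      # expect 9500, 10000, 10500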


8.1.2 Double Sideband Suppressed-Carrier Modulation (DSB)
The carrier-frequency component is independent of the message and represents wasted power; therefore it can be eliminated from the modulated wave without losing any information. Consider Eq. (8.6),
Xc(t) = Ac cos ωct + m x(t) Ac cos ωct
Dropping the first term and m in the above equation, we get
Xc(t) = x(t) Ac cos ωct          (8.10)
so that Xc(t) = 0 when x(t) = 0.
The average transmitted power for Eq. (8.10) is then
PT = 2 PSB = x̄² Ac²/2
The peak power is proportional to Ac². Then
Xc(f) = (Ac/2)[X(f - fc) + X(f + fc)]          (8.11)
The spectrum of the modulated signal is shown in Fig. 8.4. It can be seen that the bandwidth remains unchanged, i.e. BT is still twice the message bandwidth, so AM and DSB are quite similar in the frequency domain. However, they are quite different in the time domain.
8.1.3 Balanced Modulator for DSB
The DSB signal is obtained by using two AM modulators arranged in a balanced configuration to cancel out the carrier. The arrangement is shown in Fig. 8.5. Assuming that the AM modulators are identical save for the reversed sign of one input, the outputs are
Ac[1 + (1/2)x(t)] cos ωct   and   Ac[1 - (1/2)x(t)] cos ωct
Subtracting the two components, we get
Xc(t) = x(t) Ac cos ωct
8.1.4 Single Sideband Modulation
The upper and lower sidebands of AM and DSB are uniquely related by symmetry; given the amplitude and phase of one, we can always construct the other. Therefore, transmitting both bands is a waste of bandwidth. Total elimination of the carrier and one sideband from the AM spectrum produces SSB, for which
BT equals the message bandwidth   and   PT = PSB = x̄² Ac²/4
The arrangement for obtaining SSB is shown in Fig. 8.6 and the spectrum is shown in Fig. 8.7. SSB is widely used for transoceanic radio-telephone circuits and wire communication. A balanced SSB modulator is shown in Fig. 8.8.
Example 8.2 A modulating signal x(t) = cos 440πt + cos 880πt is multiplied by a carrier cos 2π×10⁶t. The resulting DSB-SC signal is filtered so that only the frequencies less than 1 MHz are retained. What will the output be when this signal is multiplied by cos 2π(10⁶ - 110)t and the frequencies above the audio range are filtered out? This will indicate what happens when music is received by a receiver with a drifting oscillator.
Solution: The DSB signal is given by
(cos 440πt + cos 880πt) cos 2π×10⁶t
= (1/2)[cos 2π(10⁶ + 220)t + cos 2π(10⁶ - 220)t] + (1/2)[cos 2π(10⁶ + 440)t + cos 2π(10⁶ - 440)t]
Filtering so that only frequencies below 1 MHz are retained, we get
(1/2)[cos 2π(10⁶ - 220)t + cos 2π(10⁶ - 440)t]          (8.12)
Multiplying Eq. (8.12) by cos 2π(10⁶ - 110)t and keeping only the audio-frequency terms, we get
Xc(t) = (1/4)[cos 2π(110)t + cos 2π(330)t] = (1/4)[cos 220πt + cos 660πt]
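The frequency arithmetic of Example 8.2 can be imitated numerically. The following is an added sketch (sampling parameters are arbitrary choices): it forms the lower-sideband signal, multiplies it by an oscillator offset by 110 Hz, low-pass filters by zeroing FFT bins above the audio range, and reports the recovered tone frequencies.

# SSB detection with a drifted oscillator, after Example 8.2 (added sketch).
import numpy as np

fs = 4_000_000.0
N = 400_000
t = np.arange(N) / fs
msg = np.cos(2 * np.pi * 220 * t) + np.cos(2 * np.pi * 440 * t)   # 220 Hz and 440 Hz tones
dsb = msg * np.cos(2 * np.pi * 1_000_000 * t)

# Keep only the lower sideband (frequencies below 1 MHz), as in Eq. (8.12).
f = np.fft.rfftfreq(N, 1.0 / fs)
ssb = np.fft.irfft(np.where(f < 1_000_000, np.fft.rfft(dsb), 0.0), n=N)

# Detect with an oscillator that has drifted down by 110 Hz, then keep the audio band.
audio = ssb * np.cos(2 * np.pi * (1_000_000 - 110) * t)
audio = np.fft.irfft(np.where(f < 20_000, np.fft.rfft(audio), 0.0), n=N)

mag = np.abs(np.fft.rfft(audio)) / N
print("recovered tones (Hz):", f[mag > 0.05])   # expect 110 and 330, not 220 and 440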

This indicates the deterioration in SSB caused by a carrier error of 110 Hz.
8.1.5 Demodulation or Detection
The process of separating a modulating signal from a modulated carrier is called detection. This is the inverse of modulation and requires time-varying or nonlinear devices. In normal AM detection, the modulating signal is recovered by applying Xc(t) to a half-wave rectifier. The output is then filtered to provide the desired modulating signal. A scheme of demodulation is shown in Fig. 8.9. The diode in Fig. 8.9 can be treated as a piecewise-linear device in which switching takes place at the carrier frequency fc.

Thus, Xc(t) = k[1 + mx(t)]cosct and


0 Xc(t) = k[1 + mx(t)]cosct.S(t)

(8.13)

(8.14)

The ltering is carried out by the low pass lter with time constant R1 C1 . The time constant is much larger than 1/fc and smaller than 1/. R2 C2 acts as a d.c. block to remove the bias of unmodulated carrier component. S(t) is the switching function. It can be shown that the output of Fig. 8.8. contains a component proportional to x(t) plus higher frequency terms. The capacitor serves to lter out the higher frequency terms. In case of SSB detection, the carrier must be supplied at the receiver before detection can take place. The sum of the signal and locally generated carrier could be rectied to select the components corresponding to the desired modulating signal. It is more common in practice to use the carrier to shift the SSB signal to required audio band by using a frequency converter. The problem of providing a carrier at the receiver of exactly the right frequency has been a block in the wide spread use of SSB. 8.1.6 Frequency Modulation. In this type of modulation, the frequency of the carrier is caused to vary according to the modulating signal x(t). Thus the frequency of the carrier is c + kx(t). Strictly speaking, we can talk of only sine(cosine) waves for understanding this type of modulation. If the angle varies linearly with time, the frequency can be expressed as the derivative of the angle. Thus fc (t) = cos(t) = cos(ct + 0 ) (8.15)

When (t) does not vary linearly, we can obviate this diculty by dening instantaneous radian frequency i to be the derivative of the angle as function of time i = d(t) dt (8.16)

If (t) is now made to vary in some manner with a modulating signal f (t), we call the resulting form of modulation as angle modulation. In particular, if (t) = c t + 0 + k1 x(t) (8.17) Where k1 is a constant of the system. Here the phase of carrier varies linearly with modulating signal. Now let the instantaneous frequency vary linearly with x(t) i = c + k 2x(t) idt = ct + 0 + k2

(t) =

x(t)dt

(8.18)

This gives rise to F M system. Both phase and frequency modulation are special cases of angle modulation.

Since F M is a nonlinear process, new frequencies are generated by the modulating process. As the simplest example, consider a sinusoidal modulating signal at fm X(t) = a cos m t The instantaneous radian frequency i i = c + cos m t where is a constant depending on the amplitude 0 a0 of the modulating signal and circuit converting variations in signal amplitude to corresponding variations in carrier frequency. Thus i varies around c at the rate of m and with maximum deviation , f = /2 gives maximum frequency of deviation called frequency deviation. The phase variation (t) for this special case Z sin m t + 0 (t) = idt = ct + m Here, 0 may be taken as zero by referring to an appropriate phase reference, so that Xc (t) = cos(ct + sin m t), = f = m fm (8.19) <<

is called modulation index and represents maximum phase shift of the carrier. Thus the bandwidth of F M depends on . The average power associated with the F M carrier is independent of modulating signal and is the same as average power of the modulating carrier. The average power over a cycle for 1 ohm is 1 T Where Z
T 2 Xc (t)dt = 0

1 T

cos2 (ct + sin m t)dt

(8.20)

1 fm Z 1 T 1 + cos(2ct + 2 sin m t dt = T 0 2 1 watt = 2 If the amplitude of the carrier is Ac, the average power is 1/2 A2 . This result is true for c general form of signals. T= 8.1.7 Narrowband F M In this case << /2. The equations for narrowband F M appear in the form of product modulator of AM and give rise to sideband frequencies equally displaced about the carrier. In this case is usually smaller than 0.2. So that

For << /2

Xc(t) = cos(ct + sin m t) = cos c t cos( sin m t) sin ct sin( sin m t)

(8.21)

cos( sin m t) = 1, and sin( sin m t) = sin m t Eq. (8.21) can now be written as Xc(t) = cos c} sin m t sin c t | {z t | {z }
Carr ier Sideband f requencie s

Thus the BW of narrowband F M is 2.fm . For general signal

Taking = 0 and

i = c + k2 x(t) Z Z (t) = i dt = c t + 0 + k2 x(t)dt x(t)dt = g(t) Xc(t) = cos [c t + k2 g(t)]

If k2 and the amplitude of G(t) are small, so that 2 Xc (t) = cos c t k2 g(t) sin ct |k2 g(t)| <<

(8.22)

The bandwidth is again 2fm , where fm is the highest frequency component of either g(t) or its derivative x(t). In F M , carrier and sideband terms are in phase quadrature. Whereas in AM, carrier and sidebands are in phase. This is demonstrated by Fig. 8.10. Xc (t) = cos ct sin m t sin ct = cos ct [cos(c m )t cos(c + m )t] 2 j c t = Re e (1 e j mt ej mt) narrow band F M 2 2 Xc (t) = cos ct + mx(t) cos ct (AM ) 8.1.8 Wideband FM The advantage of noise and interference reduction of F M over AM becomes signicant for >> /2. The bandwidth required to pass this signal becomes correspondingly large. Consider >> /2 Xc (t) = cos ct cos( sin m t) sin c t sin( sin m t)

Since is signicant cos( sin m t) ' 1 If we assume << 2 sin2 m t 2 2 << 6

6 and retain just the rst two terms, we get the additional term sin 2 m t cos ct in Xc(t)

so that Xc(t) = (1 + 2 ) cos ct [cos(m c)t cos(c + m )t] 4 2 (8.23)

2 [cos(c + 2m )t + cos(c 2m )t] 8

The amplitude spectrum of Eq. (8.23) is shown in Fig. 8.11.

Note that the carrier term has decreased somewhat with increasing . For a xed modulating frequency, is proportional to the amplitude of the modulating signal. Since the average power is constant, increase in bandwidth and sidebands is accompanied by decrease in power in the carrier and hence the amplitude of the carrier is decreasing. As increases further, we require more terms in power series expansion of both cos( sin m t) and sin( sin m t) and bandwidth begins to increase with . Consider, Xc(t) = cos(ct + sinm t) = cos c t cos( sin m t) sin( sin m t) sin c t Both cos( sin m t) and sin( sin m t) are periodic functions of m and each may be expanded in Fourier series of period 2/. Each term will have terms in m and all its harmonics and each harmonic multiplied by cosc t or sinc t gives rise to two side bands symmetrically situated about c . The sidebands for sin c t will be quadrature apart, whereas, sideband for cos ct will be in phase. Let us consider cos( sin m t) 1. For < 0.50, the curve can be represented by d.c. component plus a small component at twice the fundamental frequency.

2. For > /2, the function remains positive and appears as a d.c. component with some ripple superimposed. If we multiply this by cosct, the carrier term decreases with increasing and sidebands increase. 3. For > /2, the function takes on negative values, as increases positive and negative excursions become more rapid. So, at = /2 transition from more or less slowly varying periodic time function with most of spectral energy in its carrier to a rapidly varying function with the spectral energy spread over a wide range of frequencies. Consider sin( sin m t), the frequency components are all old integeral multipliers of m , so that they give rise to odd order sidebands about the carrier. We can decrease the carrier power (wasted power) considerably by increasing . Consider periodic complex exponential V (t) = ej sin m t, T T <t< 2 2 (8.24)

The real part of Eq. (8.24) gives the cosine terms and the imaginary part gives the sine terms. The Fourier coefficients are
Cn = (1/T) ∫_{-T/2}^{T/2} e^{j(β sin ωmt - ωnt)} dt          (8.25)
where
ωm = 2π/T,   ωn = nωm = 2πn/T
With the substitution x = ωmt,
Cn = (1/2π) ∫_{-π}^{π} e^{j(β sin x - nx)} dx          (8.26)
The integral in Eq. (8.26) can be evaluated only as an infinite series; it is called the Bessel function of the first kind and is denoted by
Jn(β) = (1/2π) ∫_{-π}^{π} e^{j(β sin x - nx)} dx
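The Bessel-function coefficients give the FM sideband amplitudes directly: with tone modulation, the standard expansion is Xc(t) = Σn Jn(β) cos(ωc + nωm)t. The sketch below is an added illustration (β is an arbitrary choice); it evaluates the coefficients with scipy and confirms that Σ Jn²(β) is unity, in agreement with the statement that the average FM power is independent of the modulation.

# FM sideband amplitudes from Bessel functions (added sketch; beta chosen arbitrarily).
import numpy as np
from scipy.special import jv

beta = 2.4                        # modulation index
n = np.arange(-20, 21)
amps = jv(n, beta)                # J_n(beta): amplitude of the sideband at fc + n*fm

for k, a in zip(n, amps):
    if abs(a) > 0.01:
        print(f"n = {k:+d}   J_n(beta) = {a:+.4f}")

print("sum of J_n^2 =", np.sum(amps**2))   # ~1.0: total power is unchanged by modulation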

A circuit for direct F M modulation is given in Fig. 8.12. 8.1.9 Frequency Demodulation The demodulation process must provide an output voltage (current ) whose amplitude is linearly proportional to the frequency of the input F M signal. This device is called a frequency discriminator. A circuit of a discriminator is shown in Fig. 8.13. 8.1.10 Pulse Modulation Consider a periodic function p(t) consisting of impulses occuring every Ts sec and having an area of Ts units. This function is shown in Fig. 8.14. The Fourier series of p(t) can be obtained by nding the Laplace transform of one period and letting s = j2k/Ts . But Laplace transform of one period is simply Ts . Then C k = Ts and the exponential series is

p(t) = (1/Ts) Σ_{k=-∞}^{∞} Ts e^{j2πkt/Ts} = Σ_{k=-∞}^{∞} e^{j2πkt/Ts}          (8.27)
If a function sm(t) is multiplied by p(t), the product will be a series of impulses separated by Ts sec, but whose areas are now Ts times the amplitude of sm(t) evaluated at the time of occurrence of each impulse. The result is simply a sampling of sm(t) at the sampling instants. Mathematically,
s0(t) = sm(t) p(t)          (8.28)
but
p(t) = Ts Σ_{r=-∞}^{∞} δ(t - rTs)          (8.29)
so
s0(t) = Σ_{r=-∞}^{∞} Ts sm(rTs) δ(t - rTs)          (8.30)
On using Eq. (8.27),
s0(t) = Σ_{k=-∞}^{∞} sm(t) e^{j2πkt/Ts}          (8.31)
The Fourier transform of Eq. (8.31) is
S0(f) = Σ_{k=-∞}^{∞} Sm(f - k/Ts)          (8.32)
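Equation (8.32) — the message spectrum repeated at multiples of 1/Ts — can be seen numerically. In this added sketch (all frequencies are arbitrary choices) a 100 Hz tone is impulse-sampled at 1 kHz, and the spectrum of the sampled signal shows copies at 100 Hz, 900 Hz, 1100 Hz, and so on.

# Spectrum replication under impulse sampling, Eq. (8.32) (added sketch; values arbitrary).
import numpy as np

fs_fine = 20_000.0                    # dense grid used to emulate continuous time
N = 20_000
t = np.arange(N) / fs_fine
sm = np.cos(2 * np.pi * 100 * t)      # message: a 100 Hz tone

Ts = 1.0 / 1000.0                     # sampling interval (1 kHz sampling rate)
step = int(round(Ts * fs_fine))       # fine-grid points per sampling interval
s0 = np.zeros_like(sm)
s0[::step] = Ts * sm[::step] * fs_fine   # impulses of area Ts*sm(rTs) on the fine grid

f = np.fft.rfftfreq(N, 1.0 / fs_fine)
mag = np.abs(np.fft.rfft(s0)) / N
print("spectral lines (Hz):", f[mag > 0.1])
# expect 100, 900, 1100, 1900, 2100, ... : Sm(f) repeated about multiples of 1/Ts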

Then, the spectrum of the sampled signal is the same as Sm (f ) translated to the right and left 1/T s, 2/T s, 3/T s etc. In addition to Sm (f ) itself when k = 0. Typical spectrum of Sm (f) and S0 (t) = Sm (T )P (t) are shown in Fig. 8.15. Note that 1/Ts 2fm , where fm is the maximum frequency component of Sm (f). It is essential and follows from the sampling theorem which states if the maximum frequency component in Sm (t) is fm , Sm (t) can be sampled at any rate greater than or equal to 2fm and original signal can be recovered from its samples by ltering. Note that although a low-pass lter will restore the original signal, a bandpass lter centered at f = 1/T s, 2/T s, . . ., and 2fm wide will produce the original AM SC . Then amplitude modulation can be produced by pulse modulation and ltering as well as by using nonlinear devices. Since impulses are impossible to produce, pulses of suitable width can be used for this purpose. In general, it can be shown that, if the pulse width is t, the Fourier transform of

the pulse is equal to the area under the pulse for all frequencies less than 1/(4t), provided that the pulse is symmetric about its centre. Even if the condition of symmetry is not met, there is still a pulse length which the spectrum is very nearly a constant upto some maximum frequency. For symmetric pulses, it is easier to prove: Let p(t) be a pulse of length t and p(t) be an even function of t. Then P (f ) = = Z
t/2

p(t)ej2f t dt
t/2 t/2

(8.33) Z
t/2

t/2

p(t) cos(2f t)dt j

p(t) sin(2f t)dt

t/2

Since p(t) is even, therefore its transform is real and the second integral vanishes. Then Z
t/2

P (f) =

p(t) cos 2ftdt


t/2

1 But for f < 41t , 2f < 2t , so even at the upper limit, cos 2f t = cos 1 = cos 14.3 = 0.969. 4 The integrand is very nearly equal to p(t) for all t within the range of integration. So, very nearly

P (f ) = for f <
1 4 t

t/2

p(t)dt = area of p(t)

t/2

Now, if the objective of sampling is to recover Sm (t) by ultimately using a low pass lter, it is only necessary that t < 1/(4fm ), where fm is maximum frequency contained in Sm (f ). If the sampling is to be used to produce AM , the rst repetition of the spectrum is the one that is used, so the carrier frequency and the sampling frequency are the same. The pulse has to be about 8 percent of the carrier period. With 360 deg to a cycle, the pulse should not be larger than 28.6 deg. wide. The plate modulator operates on this principle and is shown in Fig. 8.16. The modulating signal is placed in series with the d-c supply of the circuit, so the plate voltage on the triode is very nearly Ebb + Sm (t). The L C resonant circuit provides a low impedance path for the modulating signal and a high impedance to the carrier. The resonant circuit is also the lter that removes all but those frequencies near the carrier frequency. The triode is biased well below cuto, so when no carrier signal is applied to its grid, the triode does not conduct at all. The carrier is applied to the control grid with an amplitude that insures that the tube will conduct only when the crest of the carrier is reached. If it conducts, for all intents and puposes, it can be considered an impulse as far as the band of frequencies near fc is concerned. Further-more, the amount of current produced will be proportional to the plate voltage at the moment of conduction, so the pulse will have an area under it proportional to Ebb + sm(t). The current pulses produced then pass through the bandpass lter represented by the L C circuit, and the output voltage will be proportional to [Ebb + Sm (t)] cos 2fct If the maximum amplitude of Sm (t) never exceeds Ebb , this is an ordinary AM signal. The plate modulator has the added advantage that nearly 100 percent modulation can be achieved with relatively little distortion.

Another practical application of pulse modulation is time multiplexing. If it is possible to send a message by sending only the samples, is there not some use that can be made of the time in between the samples? Consider two signals Sm1 (t) and Sm2 (t). These might be two voice signal on a telephone line. Since 2.5KHZ is the highest frequency needed or used in voice transmission, put both the signals through 2.5KHZ low-pass lter and then sample each at 5KHZ rate. Now stagger the sampling pulses, so that those belonging to one message are alternated with those of the other message. Two such signals appear in Fig. 8.17(a) and the alternating sample pulses appear in Fig. 8.17(b) This pulse (note that its fundamental frequency is 10K HZ) and the pulses at the other end of the line are separated out by some type synchronized switches device or commutator, then each is passed through a 2.5KHZ low pass lter, and both signals are thus recovered simultaneously. Observe that the spectrum of the pulses that carry the sampled information must be transmitted by this channel without distortion, so the channel bandwidth must be determined not by signals being modulated but by the pulses. To use this bandwidth eciently, it is necessary to send more than two message simultaneously. 8.2. FILTERS Filters are an essential part in the design of linear systems and are used to modify the signal or eliminate the unwanted frequency band. In Sec. 8.1, we have used lters in communication systems. Indeed any communication system involves lters. The Fourier transform is not used to design the lter, but rather to establish the design criteria for them. It will tell us what is possible and what is not possible, and explain some characteristics of practical lters. A physically realizable lter is one whose impulse response is necessarily zero for t less than zero. However, in frequency domain, it is not easy to specify criteria for physical realizability. For example, if h(t) is the response of a realizable lter, than h(t) = 0 for t < 0. Its transform H(f ) can be expressed as H(f) = R(f) + jI(f) and Where R(f) and I(f) are the real and imaginary parts respectively. Eq. (8.35) is used in the lter theory. For example, a bandpass lter should have constant magnitude over the passband and zero magnitude outside the band. In this case magnitude of the transfer function has to be considered. Suppose a band limited siganl in |f | < fc is put through a lter, whose magnitude is constant over this frequency range, but can be any thing at all other frequencies. Let the transform of this signal be X (f) and the magnitude of the transfer function be A. Then the transform of the output signal will be X0 (f ) = AX (f)ej (f ) (8.36) However, if the device is to produce the signal without distortion, then X0 (f) should be proportional to X(f). This means that (f) = 0. But (f ) = j2ft0 would not be X0 (f ) = AX(f)e j2 f t0 (8.37) H(f ) = |H(f )|e j (f ) (8.35) (8.34)

has an inverse transform, which is the input signal changed in amplitude by A and delayed t0 sec. Assuming that time delay of t0 seconds is not objectionable, the criterion for distrortionless ltering is a phase function that is linear with frequency.

The ideal bandpass filter then has the following properties:
1. The magnitude of the transfer function is constant in the passband and zero otherwise.
2. The phase function θ(f) must be a linear function of frequency in the passband, with negative slope -2πt0.
As with all ideals, the ideal filter is unattainable. However, it can be approached arbitrarily closely if one is willing to increase the delay time t0. The criterion that the amplitude of the transfer function must meet to ensure that the impulse response will be zero for negative t is called the Paley-Wiener condition. This states that |H(f)| may be the magnitude of the Fourier transform of a function which is zero for t less than some time if and only if the integral
∫_{-∞}^{∞} |ln|H(f)|| / (1 + f²) df          (8.38)
converges, that is, if it is less than infinity. If the integral converges, there exists a phase function, not necessarily linear, that can be associated with |H(f)| so that its inverse transform is zero for negative t.
8.2.1 The Ideal Low-Pass Filter
The transfer function of the ideal low-pass filter is
H(f) = A[u(f + fc) - u(f - fc)] e^{-j2πft0}          (8.39)
where fc is the cutoff frequency and t0 is the delay time. The inverse transform of Eq. (8.39) is
h(t) = A sin[2πfc(t - t0)] / [π(t - t0)]          (8.40)
The function is shown in Fig. 8.18; it starts wiggling before t = 0 and hence is not realizable. The unit-step response of this filter is
A ∫_{-∞}^{t} sin[2πfc(τ - t0)] / [π(τ - t0)] dτ = A·S[2πfc(t - t0)]          (8.41)
where S(x) = ∫_{-∞}^{x} (sin u)/(πu) du denotes the sine-integral step function.
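The step response in Eq. (8.41) can be evaluated with the tabulated sine integral Si(x), since S(x) = 1/2 + Si(x)/π. The sketch below is an added illustration (A, fc, and t0 are arbitrary); it computes the response and reports the peak overshoot, which stays near 9 percent no matter how large fc is made — the Gibbs phenomenon discussed next.

# Step response of the ideal low-pass filter and its ~9% overshoot (added sketch).
import numpy as np
from scipy.special import sici

A, fc, t0 = 1.0, 10.0, 1.0           # arbitrary gain, cutoff (Hz), and delay (s)
t = np.linspace(0.0, 3.0, 20_001)

x = 2.0 * np.pi * fc * (t - t0)
si = np.sign(x) * sici(np.abs(x))[0]          # Si(x), using the fact that Si is odd
step_response = A * (0.5 + si / np.pi)        # A * S[2*pi*fc*(t - t0)] from Eq. (8.41)

print("peak value:", step_response.max())     # about 1.089 -> roughly 9% overshoot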

This function resembles a step function and is shown in Fig. 8.19. If fc is made large, the frequency of wiggles is large. Notice, however that no matter how large fc is made, there will always be an overshoot and an undershoot at the discontinuity. Even though the lter above is not realizable, it will be shown that realizable lters demonstrate this overshoot if the magnitude of the transfer function falls rapidly in the vicinity of fc . This peculiarity is known as Gibbs phenomenon. Observe that rise time of the step response is very nearly equal to the reciprocal of the slope of {2fc (t t0 )} at t = t0 . Then the rise time of the output is /a = 1/(2fc). If the input were a rectangle pulse instead of a step function, then output can be viewed as positive step followed by a negative step. If the output is to look anything like a pulse, then the step function rise time should not exceed one-half the pulse length. This leads to the rule of thumb: the bandwidth of a lter must be at least the reciprocal of the pulse length if the pulse is not to be seriously altered in amplitude. If the delay time t0 is large

sin{2fc(t t0 )} t t0 will be small for negative t. Therefore, we should be able to approximate the ideal lter by making the impulse response of a lter. A sin 2{fc(t t0 )} , t0 t t0 Unfortunately this function is not symmetrical about the time t = t0 , so when this is transformed, the result will not have linear phase. Symmetry can be achieved however by chopping the response of ideal lter for t > 2t0 as well. The resulting response is shown in Fig. 8.20. The transform of this function can be approached in two ways. The impulse response above can be viewed as the original impulse response multiplied by {u(t) u(t 2t0 )}. The transform of the resulting function is sin 2f t0 j2 f t0 e f and this can be convolved with the transform of the ideal lter to nd the transfer function of almost ideal lter. If Hai(f ) is the transfer function of this realizable approximation to the ideal lter, then Z
fc

Hai f = =

fc fc

Aej2 yt0e j2 (f y)t0 sin 2(f y)t0 dy (f y) A sin 2(f y)t0 j2 f t0 e dy (f y)

fc

Since the exponential and the constant under the integral sign are independent of y, they may be brought outside the integral sign. If the new variable x = f y is used, with dx = dy and the limit changed to f + fc and f fc Z f fc A j2 f t0 sin 2t0 x e dx x f + fc Z f + fc A sin 2t0 x = e j2 f t0 dx x f fc Hai(f ) = A{{2t0 (f + fc)} x{(2t0(f fc ))}}ej2 f t0 Haif =

(8.42) (8.43)

The magnitude and phase of Eq. (8.43) are plotted as a function of frequency in Fig. 8.21. Notice that since the function becomes negative for alternate intervals below fc and above fc , the phase function has discontinuities in it of magnitude every time a change in sign occurs.This in no way detracts from the linearity requirement of the phase.


We can now draw two conclusions from the result: 1. A lter will be function of the slope of the frequency response at the cuto frequency. The larger this slope, the larger will be the delay time. Further, the price for good low pass lter would appear as a delay in response. 2. The response of such lter will be accompanied by about 9 percent overshoot and undershoot at points where the input is discontinuity. 8.2.2 The High-Pass Filter A lter with transfer function Ae j2 f t0 for all frequencies is physically realizable. Therefore, if we subtract the transfer function for the low pass lter from the function given, the resulting transfer function will be physically realizable. Thus Hn (f ) = Ae j2 f t0 {1 {2t0 (f + fc) {2t0 (f fc)}}

will have an impulse response equal to an impulse value A at t = t0 , minus the impulse response of the low pass lter with the same cuto frequency. This appears in Fig. 8.22(a). Its integral will be step response which will be upside down version of the low pass step response, but with a positive discontinuity of A at t = t0 as shown in Fig. 8.22(b). The magnitude and phase functions for this lter are shown in Fig. 8.23. 8.2.3 The Bandpass Filter The bandpass lter may be thought of as the result of subtracting from a constant both a high pass and a low pass lter, the cuto frequency of the low pass lter being less than the high pass lter. A bandpass lter can be visualised from a low pass lter by simple translation to the left and the right of f0 Hz, where f0 is the centre frequency of the lter. This means that impulse response of the lter is the same as that of low pass lter multiplied by cos 2f t0 . In fact, this result can be made to apply approximately to any signal applied to a band lter. 8.2.4 Practical Filters The rst lter designs were used in audio work, so there was little concern for linear phase functions. If a bandpass or high pass lter is to be designed using lumped elements R, L, and C, it is necessary only to design the equivalent low pass lter with a cut o frequency of one rad/sec and one Ohm impedance level. We proceed as follows. Suppose we have been given a low pass lter constructed of R, L and C elements, and we know its circuit diagram. What would happen if all the inductances and capacitances were removed and replaced by inductances and capacitances half as large?. The lter will have the same characteristics, but the bandwidth will be doubled. Therefore, if L and C terms are reduced in size by a factor 0 a0 then the transfer function of the network will have the same amplitude and phase variation, but with the frequency axis multiplied by the constant 0 0 a . If we are satised with the cuto frequency, but dissatised with the impedance level, then the impedance level can be raised by a factor 0 b0 . This means that R and L terms are multiplied by 0 b0 but the C terms are divided by 0 b0. If, then, we design a low pass lter with cuto frequency 1/(2)Hz and a one ohm impedance level, then to convert it to low pass lter with cuto frequency fc and impedance level R0 we simply Multiply all resistances by R0 Multiply all inductances by R0 /(2fc )

Divide all capacitances by 2fc R0 If we wish to design a bandpass lter with impedance level R0 and bandpass fc then it is necessary to design the corresponding low pass lter with the same impedance level and cuto frequency fc and then place (1) in series with every inductance a capacitance that is in series resonant with it at the desired centre frequency f0 and (2) in parallel with every capacitance of the low pass lter an inductance that is parallel resonant with it at the center frequency f0 . This means that the impedance of an inductance is replaced by a series resonant circuit, then jL = j2f L is replaced by j2L j/2fc where C is related to L by C= 1 42 f02 L

if it is resonant with it at f0 . This amounts to replacing jL = j2f L by j2L(f f02 )). It can be shown in similar way that placing an L in parallel with each capacitance is equivalent to replacing the admittances j2f C by j2C (f f02 /f) where inductances in each case are related to C by 1 L= 2 2 4 f0 C in order that each pair will be resonant at the frequency f0 . Both of these operations can be expressed mathematically by saying that the frequency f is replaced by f f02 (f f0 )(f + f0 ) = f f

This is not quite equivalent to translation to the left or right for bandpass lters. For frequencies near f0 , however, the function (f f0 )(f + f0 )/f behaves like 2(f f0 ) and so if the cuto frequency of the original lowpass lter is fc, then, the upper cuto frequency of the bandpass lter will be about f0 + fc/2 and the lower cuto frequency near f0 fc /2. The transfer function has thus shrunk in size, but the resulting bandwidth is the same. Actually, the new cuto frequencies are related to each other by fH fL = fc

and

fH fL = f02 so that the bandwidth is exactly fc , but the centre frequency is at the geometric mean of the upper and lower cuto frequencies. Finally, if it is required to design a high pass lter with cuto frequency fc and impedence level R, then we need only design a low pass lter with the same level and cuto frequency and then replace all C terms by L terms that are resonant with the C terms at the cuto frequency fc and replace all L terms by C terms that are resonant with L terms at cuto frequency. The phase angle will change sign however. On either side of the frequency fc the variation of the impedance of the new elements with frequency will be just the opposite of those they replaced and so opposite transfer function will be obtained. It is as though the frequency f were replaced by fc2 /f. Since high pass and bandpass lters can be obtained from low pass lters, we will consider only the design of low pass lters and compare with the ideal lter.


8.2.5 Butterworth Filters
The low-pass Butterworth filter of order n has a transfer function with magnitude
|HBn(jω)| = 1/(1 + ω^{2n})^{1/2}          (8.44)
It is seen that the magnitude of the transfer function is 1/√2 at ω = 1 rad per sec, and this is called its cutoff frequency. It is also well known that
|HBn(jω)|² = HBn(jω) HBn*(jω) = 1/(1 + ω^{2n})          (8.45)
But the conjugate of HBn(jω) is HBn(-jω). Eq. (8.45) can now be written as
HBn(jω) HBn(-jω) = 1/(1 + ω^{2n})          (8.46)
If we now go backwards and put s = jω, or ω = -js, Eq. (8.46) will read
HBn(s) HBn(-s) = 1/[1 + (-js)^{2n}] = 1/[1 + (-1)ⁿ s^{2n}]          (8.47)
Now if HBn(s) is the transfer function of a realizable filter, then all its poles must be in the left half plane, and HBn(-s) must have all its poles in the right half plane. It is then necessary only to factor the denominator of Eq. (8.47), keep the left-half s-plane poles, and throw the others away. The roots of the denominator in the left half s-plane are among the (2n)th roots of -1 or +1, depending on whether n is even or odd. Thus the roots lie on the unit circle in the s-plane, and it can be shown that those lying in the left half plane are
sk = -sin[(1 + 2k)π/2n] + j cos[(1 + 2k)π/2n]   for k = 0, 1, 2, . . . , n - 1          (8.48)
If these roots are put into the appropriate factors, then
HBn(s) = 1/[(s - s0)(s - s1) · · · (s - s_{n-1})]          (8.49)
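The pole locations of Eq. (8.48) agree with what a standard design routine produces. The following added sketch uses scipy to build a third-order Butterworth filter with a 1 rad/sec cutoff, prints its poles (which should match -sin[(1+2k)π/6] + j cos[(1+2k)π/6]), and simulates its unit-step response, whose overshoot of roughly 8 percent is noted in the discussion of Example 8.3.

# Third-order Butterworth filter: poles from Eq. (8.48) and step response (added sketch).
import numpy as np
from scipy import signal

n = 3
k = np.arange(n)
poles_eq_848 = -np.sin((1 + 2*k) * np.pi / (2*n)) + 1j * np.cos((1 + 2*k) * np.pi / (2*n))

z, p, gain = signal.butter(n, 1.0, btype='low', analog=True, output='zpk')
print("poles from Eq. (8.48):", np.sort_complex(poles_eq_848))
print("poles from scipy     :", np.sort_complex(p))

t, y = signal.step(signal.lti(z, p, gain))
print("peak of step response:", y.max())    # about 1.08 -> roughly 8% overshoot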

Example 8.3
Find the transfer function and step response of a third-order Butterworth filter.
Solution: Since n = 3, the roots of HB3(s) are

s 0 = sin = s1 = = s2 = = Then the denominator polynomial is " and HB3 (s) =

+ j cos 6 6 1 3 +j 2 2 3 3 + j cos sin 6 6 1 5 5 + j cos sin 6 6 3 1/2 j 2

#" # 1 1 3 3 s+ j s+ +j (s + 1) = (s2 + s + 1)(s + 1) 2 2 2 2

1 s 3 + 2s2 + 2s + 1

The unit step response will be the inverse Laplace transform of HB 3 (s)/s; that is 1 h ih i 1 s(s + 1) s + 2 j 23 s + 1 + j 23 2 =
1 6 90o 1 1 3 + . .. + s s +1 s+ 1 j 3 2 2

plus the conjugate of the last term. This makes the response 2 3t + 900 ) 1 et + e t/2 cos( 2 3 2 3 t = 1 e t + et/2 sin 2 3 This response is shown in Fig. 8.24 along with the response of the ideal lter. The low frequency group delay of the Butterworth lter can be calculated by noting that s = j, the transfer function is HB3 (j) = n 1
1 2

(1 + j)

+ j(

3 2

on o 1 ) + j( + 23 ) 2

for which angle is



(f) = tan 1 tan1 2(

3 3 ) tan 1 2( + ) 2 2

The negative derivative of this with respect to will yield the group delay tg . That is tg = 1 d(f ) 2 2 = + h h i + d 1 + 2 1 + 4 3 2 1 + 4 + 2 i2

3 2

At = 0, this is 2 sec. and so the response of the ideal lter is drawn with this delay. The overshoot of the Butterworth lter is 8 percent, but there is no undershoot. Since the transfer function falls o like 1/f 3 for large frequencies, this is a factor of 23 per octave or 18db per octave. Example 8.4 1. Show that the transfer function V0 (s)/Is(s) for the circuit shown in Fig. 8.25 is a Butterworth third order lter. Plot the magnitude and phase of its transfer function. 2. Use the lter given to design a low pass lter with an impedance level of 10K and cuto frequency of 15KH z. 3. Use circuit of (2) to design a bandpass lter with a bandwidth of 15KHz but centered at 100 KHz with a 10k impedence level. 4. Use circuit of (2) to design a high pass lter with a cuto frequency of 15KHz and an impedence level of 10k. Solution: 1. By using circuit analysis, it can be shown that 1 + s/2 + 3s [1 + 4s(1 + s/2)/3] /2 Is (s) = V0 (s) 1 1 V0 (s) = 3 Is (s) s + 2s 2 + 2s + 1 which is the HB N (s) The magnitude and phase functions are |HB3 (j )| = and (f) = tan 1 + tan 1 ( Since HB3 (s) = 1 1 + 6

3 3 + tan1 2( + ) 2 2

1 (s + 1)(s2 + s + 1)

Then HB3 (j) = and (f ) could be written as

1 (1 + j)(1 2 + j)

(1 2 ) The magnitude and phase functions are shown in Fig. 8.26 (f) = tan 1 + tan1 2. Since the level is to be raised to 10k = 104 multiply R and L by 104 , divide C by 104 . The 1 frequency level is to go to 15.103 H z from 2 R = 1 to R = 10 L= 4 4.104 4 to L = h = 3 3.2.15.103 9

C= and c =
3 2

1/2 102 1 108 to C = 4 = f = 3 2 10 .2.15.10 6 6


102 2

goes to three times the latter, or C =

f .

The circuit diagram is shown in Fig. 8.27. 3. For a bandpass lter at f0 = 105 ; in series with C=
4 9

h, put a C such that

900 1 = f 42 f02 L 16 1 3 h = 4 2 f02 200

In Parallel with the 102 /(6)f capacitance put an L=

Finally, put an inductance of 1/3 of last value in parallel with the 102 /(2)f capacitance and obtain the circuit in Fig. 8.28. 4. For the highpass lter with cuto at 15kH z replace L and C terms by elements resonant 4 with them at 15kHz. L = h is replaced by C= 1 4 2 f02 L 102 = f 4
10 2 (6 )

C = 10 ) f is replaced by L = (2 circuit is shown is in Fig. 8.29. 8.2.6 Chebyshev Filters

2 (9)

h, and the

f capacitance becomes L =

2 3

h. The

The Chebyshev polynomials are defined by
Cn(x) = cos(n cos⁻¹ x)          (8.50)
It can be shown that these polynomials satisfy the recurrence formula
Cn(x) = 2x Cn-1(x) - Cn-2(x)          (8.51)
and so if the first two can be obtained, the others can also. Letting n = 0 in Eq. (8.50),
C0(x) = cos(0) = 1          (8.52)
Letting n = 1 gives
C1(x) = cos(cos⁻¹ x) = x          (8.53)
Now C2 can be found from Eq. (8.51):
C2(x) = 2x(x) - 1 = 2x² - 1          (8.54)
and
C3(x) = 2x(2x² - 1) - x = 4x³ - 3x          (8.55)
These polynomials are useful because in the interval -1 ≤ x ≤ 1 they oscillate back and forth between +1 and -1, and are always equal to +1 at x = 1 and to (-1)ⁿ at x = -1. The polynomials are odd if n is odd and even if n is even. The nth-order Chebyshev filter has the general form
|Hcn(jω)| = 1/√(1 + ε² Cn²(ω))          (8.56)
where ε is commonly chosen to be less than or equal to one. A choice of ε = 1 leads to a 3 db variation of the transfer function in the passband, and this is usually considered large. As in the case of the Butterworth filters, the magnitude squared is formed with ω set equal to -js; then the roots of the resulting denominator that lie in the left half s-plane can be shown to be at
sk = -[(a - 1/a)/2] sin[(1 + 2k)π/2n] + j[(a + 1/a)/2] cos[(1 + 2k)π/2n]          (8.57)
where k = 0, 1, . . . , n - 1 and
a = {1/ε + (1 + 1/ε²)^{1/2}}^{1/n}          (8.58)
These poles lie on an ellipse whose semi-major axis lies along the jω axis with half-length (a + 1/a)/2 and whose semi-minor axis lies along the real axis with half-length (a - 1/a)/2.


Example 8.5 Choose = 1/2 and determine the magnitude phase and step response of a third order Chebyshev low-pass lter.
2 Solution: Since C 3() = 43 3, then C3 ()/4 is 46 64 + 9 2 /4, and so the transfer function is

|Hc3 (j)| = with = 1/2. Then from Eq. (8.58)

1 (1 + 9 2 /4 64 + 46 )
1/2

5+1 a = { 5 + 2}1/3 = 2 making a 1/a = 1 and a + 1/a = 5. Then the roots in the left half plane are at 1 5 sin + j cos 2 6 2 6 1 15 = +j 4 4 1 1 s1 = sin = 2 2 2 1 15 s2 = s = j 0 4 4 s0 = Then the denominator of the transfer function is 1 1 15 15 1 )(s + + j ) (s + )(s + j 2 4 4 4 4 1 s 5s 1 = (s + )(s2 + + 1) = s 3 + s2 + + 2 2 4 2 Then H(s) has to be H(s) = 2 2s3 + 2s 2 + 5s + 1 4

where the denominator polynomial had to be multiplied by two to make the constant term unity. Since (f) is the negative of the angle of the denominator polynomial, then (f ) = tan 1 2 + tan1 2(1 2 )

If θ(ω) is differentiated with respect to ω and ω is set equal to zero, the delay time for low frequencies can be shown to be 2.5 sec. Fig. 8.30 shows the magnitude and phase functions, and Fig. 8.31 shows the step response along with that of the ideal filter having a 2.5 sec delay.
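For comparison with Example 8.5, the added sketch below designs the same third-order Chebyshev filter with scipy. With ε = 1/2 the passband ripple is 10·log10(1 + ε²) ≈ 0.97 dB; the resulting poles should match -1/4 ± j√15/4 and -1/2, and the monic denominator should match s³ + s² + (5/4)s + 1/2.

# Third-order Chebyshev low-pass with eps = 1/2, as in Example 8.5 (added sketch).
import numpy as np
from scipy import signal

eps = 0.5
ripple_db = 10.0 * np.log10(1.0 + eps**2)       # ~0.97 dB passband ripple
z, p, k = signal.cheby1(3, ripple_db, 1.0, btype='low', analog=True, output='zpk')

print("poles:", np.sort_complex(p))             # expect -1/2 and -1/4 +/- j*sqrt(15)/4
print("denominator:", np.poly(p).real)          # expect [1, 1, 1.25, 0.5]

t, y = signal.step(signal.lti(z, p, k))
print("peak of step response:", y.max())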

PROBLEMS 8.1 Let x(t) = cos 220t+ cos 440t. Multiply this signal by cos(2x106t) to produce AM SC , then multiply by cos{106 110)t} to detect the AM SC wave. Assuming that all but the audio frequencies are ltered out in the last step, what is the output signal? This will indicate what happens when music is received by an AM SC receiver with drifting oscillator. 8.2 Suppose that in Prob. 8.1 only the lower sideband is retained when x(t) = cos 220t + cos 440t is multiplied by cos(2x106 t). Thus only these frequencies whose magnitude are less than 1M Hz are retained. What will the output be now when signal is multiplied by cos{2(106 110)t} and the frequencies above audio range are ltered out. 8.3 The signal Xm (t) in Fig. 8.32 is phase modulated and the output is x(t) = A cos{6.73x107 t + bxm (t)} (a) What is the smallest bandwidth the phase modulated signal can have assuming that b, but nothing else can be altered at will {b is not equal to 0 of course } and xm (t) is 1KHz sinusoid. (b) If Xm (t) = sin377t and b = 1000, what is the maximum instantaneous frequency deviation? What is the carrier frequency? What is the bandwidth occupied by the signal? 8.4 The concerned regulatory agency has decreed that the maximum frequency deviation for F M stations will be 75KHz. a) If the maximum and minimum modulation frequencies it is desired to transmit are 15K Hz and 20Hz. then what range will have? b) What bandwidth will this require? c) If an AM SSB with vestigial carrier were used instead of F M for the same signals as in (a) what bandwidth will be required? What is the ratio of the F M bandwidth to the AM SSB bandwidth? 8.5 For the low-pass to bandpass conversion let us suppose that the lowpass lter has a cuto frequency at fc. Since the alteration of the circuit is equivalent to replacing f by f f02 /f the behaviour of the new circuit at f0 will be same as that of the old at f = 0, and the behaviour of the new circuit at the frequency f given by f f02 fc f must be the same as the behaviour of the old at fc = f. Let fH be the positive solution of f f02 = fc f and fL the positive solution f f02 = fc f a) Find fH and fL in terms of f0 and fc b) Show that fH fL = fc

2 c) Show that fH fL = f0

8.6 The constant k low-pass lter is shown in Fig. 8.33. This has a nominal impedence level of one ohm and a cuto frequency of 1/(2)Hz a) Design a high pass constant k lter with impedence level 5K and cuto frequency of 30Hz b) Design a bandpass lter with bandwidth 10KHz, centre frequency 80KHz and 10K impedence level. 8.7 If the constant k lter of Fig. 8.34 is terminated in one Ohm and driven by a source with a one Ohm internal impedence, as shown in Fig. 8.34 with R = 1, then nd the transfer function of the lter. Show in particular that with this resistance level the lter is a third order Butterworth. 8.8 Repeat Problem 8.9 but this time R = 2 in Fig. 8.34 and show that this is now a third order Chebyshev lter with = 1/2 8.9 Show that the circuit of Fig. 8.35 is a second order Butterworth lter. Find and Plot carefully its response. Does it have overshoot? 8.10 Show that the circuit appearing in Fig. 8.36 is Chebyshev second order lter with Find its step response. Does it have overshoot? = 3/4,


CHAPTER - IX Z-TRANSFORM
9.1 INTRODUCTION Digital signal processing has become an established method of dealing with electrical waveforms, and the associated theory of discrete time systems can often be employed in a number of science and technology disciplines. Typical applications of this technique are analysis of biomedical signals, vibration analysis, picture processing, analysis of seismic signals, speech analysis and sampled data control systems. The signals in sampled data system may be of the form of a periodic or an aperiodic pulse train with no information transmitted between two consecutive pulses. This train of pulses may be natural or man made through some sampling process. A simple but adequate model of the sampling process is one which considers a continuous input signal, x(t), to be sampled by a switch closing periodically for a short time, seconds, with a sampling interval T seconds (Fig. 9.1). Referring to Fig. 9.1, it is seen that the switch output is a train of nite width pulses. However, if the pulse width, , is negligible compared with the interval between successive samples, T , the output of the sampler can be considered to be a train of impulses with their height proportional to x(t) at the sampling instant (Fig. 9.2) The ideal sampling function T (t) represents a train of unit impulses, and is dened as T (t) =
X

n=

(t nT )

(9.1)

where (t) is the unit impulse function occurring at t = 0, and (t nT ) is a delayed impulse function occurring at t = nT . Therefore x (t) = x(t).T (t) (9.2)

The value of x(t) is needed only at t = nT and furthermore for a physical system x(t) = 0 for t < 0, therefore X x(nT )(t nT ) (9.3) x (t) =
n=0

Thus we see that x*(t) is a weighted sum of shifted unit impulses. Taking the Laplace transform of x*(t) directly from Eq. (9.3),
X*(s) = L[x*(t)] = Σ_{n=0}^{∞} x(nT) L[δ(t - nT)]          (9.4)
Since the Laplace transform of the unit impulse δ(t - nT) is e^{-nTs}, Eq. (9.4) becomes
X*(s) = Σ_{n=0}^{∞} x(nT) e^{-nTs}          (9.5)

We can also expand Eq. (9.1) as Fourier series, that is X C n ejnst T (t) =
n=

1 T T (t)ejnst dt T 0 and s is the sampling frequency equal to 2/T rad/sec. Since the area of an impulse is unity, then Z T t(t)e jnst dt = 1 Cn =
0 1 and therefore Cn = T , hence

where

T (t) =

Taking Laplace transform and using the associated shifting theorem, we obtain 1 X X (s) = L[x (t)] = X(s j ns) T n= therefore X (j) =
1 X X[j( ns)] T n=

We have seen in Fig. 9.2 that for the impulse modulator, x (t) = T (t)x(t), therefore 1 X x (t) = x(t)e jn st (9.6) T
n=

1 X jnst e T n=

(9.7)

Thus we see from the last equation that, as a result of impulse sampling, the frequency spectrum of x(t) is repeated ad infinitum at intervals of jω_s.

Let us now consider the frequency spectrum of x*(t). Referring to Fig. 9.3, if ω_s/2 is greater than the highest frequency component of x(t) (Fig. 9.3a), then the original signal can theoretically be recovered from the spectrum of x*(t) (Fig. 9.3b). In contrast, if ω_s/2 is not greater than the highest frequency component in the continuous signal (Fig. 9.3c), then folding of the frequency spectrum occurs, and consequently the original signal cannot be recovered from the sampled-data signal. The errors caused by the folding of the frequency spectra are generally referred to as aliasing errors, and may be avoided by increasing the sampling frequency.

It has been established that the sampled-data signal has an infinite number of complementary frequency spectra, which means that there must be an infinite number of associated pole-zero patterns in its s-plane representation. Consequently, the analysis of any sampled-data signal or system is extremely difficult when working in the s-plane. Fortunately, however, it is possible to use the z-transform instead, which gives a good mathematical description.

9.2 THE Z-TRANSFORM

The z-transform is simply a rule that converts a sequence of numbers into a function of the complex variable z, and it has properties that enable linear difference equations to be solved using straightforward algebraic manipulations. Suppose that we let z = e^{sT} = e^{(σ+jω)T}; then |z| = e^{σT} and ∠z = ωT, so that any point s_x in the s-plane transforms to a corresponding point z_x in the z-plane, as shown in Fig. 9.4.

Referring to Table 9.1, it is seen that the imaginary axis in the s-plane transforms to the circumference of the unit circle in the z-plane. When σ is negative, |z| < 1, and when σ is positive, |z| > 1. Hence a strip ω_s wide in the left-hand half of the s-plane transforms to the area inside the unit circle in the z-plane (Fig. 9.5).

Table 9.1  (σ = 0, s = jω)

    s                  z = e^{sT}
    0                  1∠0°
    jω_s/8             1∠45°
    jω_s/4             1∠90°
    j3ω_s/8            1∠135°
    jω_s/2             1∠180°
    j5ω_s/8            1∠225°
    j3ω_s/4            1∠270°
    j7ω_s/8            1∠315°
    jω_s = j2π/T       1∠360°

The most important effect of the z-transformation is that, since the poles and zeros of x*(t) are spaced at intervals of ω_s = 2π/T rad/sec in the jω direction, all sets of poles and zeros in the s-plane transform to a single set of poles and zeros in the z-plane. Let us consider Eq. (9.5),

    X*(s) = Σ_{n=0}^{∞} x(nT) e^{-nTs}

Since z = e^{sT}, the above equation can be written as

    X(z) = Σ_{n=0}^{∞} x(nT) z^{-n}    (9.8)

In general, any continuous function which possesses a Laplace transform also has a z-transform for the sampled function.

Example 9.1 Let x(t) = e^{-at}; find X(z) for sampling period T.

Solution: From Eq. (9.5), with x(nT) = e^{-anT},

    X*(s) = Σ_{n=0}^{∞} e^{-anT} e^{-nTs}    (9.9)
          = 1 / (1 - e^{-(s+a)T}),   |e^{-(s+a)T}| < 1    (9.10)

Substituting z = e^{sT},

    X(z) = [X*(s)]_{z=e^{sT}} = 1 / (1 - e^{-aT} z^{-1})    (9.11)
         = z / (z - e^{-aT}),   |z| > e^{-aT}    (9.12)
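A quick numerical check of Eqs. (9.8) and (9.12) can be made by truncating the defining sum. The sketch below is not part of the original text; it uses arbitrary values of a, T and z and assumes Python with NumPy.

```python
import numpy as np

# Sketch: compare the truncated sum of Eq. (9.8) with the closed form
# of Example 9.1, X(z) = z/(z - exp(-a*T)). a, T and z are arbitrary.
a, T, z = 2.0, 0.1, 1.5
N = 200                                    # truncation length
n = np.arange(N)
x = np.exp(-a * n * T)                     # x(nT) = e^{-anT}
X_sum = np.sum(x * z**(-n.astype(float)))  # Eq. (9.8), truncated
X_closed = z / (z - np.exp(-a * T))        # Eq. (9.12)
print(X_sum, X_closed)                     # the two values agree closely
```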

Example 9.2 Suppose that the input signal of a digital filter is x(t) = sin ωt. What is the z-transform of x*(t)?

Solution: x(nT) = sin nωT = (e^{jnωT} - e^{-jnωT}) / (2j); therefore, from Eq. (9.5) and Eq. (9.8),

    X(z) = Σ_{n=0}^{∞} [(e^{jnωT} - e^{-jnωT}) / (2j)] z^{-n}    (9.13)
         = (1/2j) [ Σ_{n=0}^{∞} (e^{jωT})^n z^{-n} - Σ_{n=0}^{∞} (e^{-jωT})^n z^{-n} ]    (9.14)

Now

    Σ_{n=0}^{∞} (e^{jωT})^n z^{-n} = z / (z - e^{jωT}),   for |z| > 1    (9.15)

and similarly

    Σ_{n=0}^{∞} (e^{-jωT})^n z^{-n} = z / (z - e^{-jωT}),   for |z| > 1    (9.16)

Therefore

    X(z) = (1/2j) [ z/(z - e^{jωT}) - z/(z - e^{-jωT}) ],   |z| > 1
         = (z/2j) (e^{jωT} - e^{-jωT}) / [z² - (e^{jωT} + e^{-jωT})z + 1]

    X(z) = z sin ωT / (z² - 2z cos ωT + 1)    (9.17)

Example 9.3 Suppose that the transfer function of a system is

    X(s) = 1 / [(s + a)(s + b)]    (9.18)

Find the corresponding z-transform.

Solution: Using the partial-fraction method,

    X(s) = 1 / [(s + a)(s + b)] = [1/(b - a)] [ 1/(s + a) - 1/(s + b) ]    (9.19)

Now from Table 9.2,

    X(z) = [1/(b - a)] [ z/(z - e^{-aT}) - z/(z - e^{-bT}) ]
         = z(e^{-aT} - e^{-bT}) / [(b - a)(z - e^{-aT})(z - e^{-bT})]    (9.20)

Table 9.2  Table of z-Transforms

    Laplace transform        Time function              z-Transform
    1                        unit impulse δ(t)          1
    1/s                      unit step u(t)             z/(z - 1)
    1/(1 - e^{-Ts})          δ_T(t) = Σ δ(t - nT)       z/(z - 1)
    1/s²                     t                          Tz/(z - 1)²
    1/s³                     t²/2                       T²z(z + 1)/[2(z - 1)³]
    1/s^{n+1}                tⁿ/n!                      lim_{a→0} [(-1)ⁿ/n!] ∂ⁿ/∂aⁿ [z/(z - e^{-aT})]
    1/(s + a)                e^{-at}                    z/(z - e^{-aT})
    1/(s + a)²               t e^{-at}                  T z e^{-aT}/(z - e^{-aT})²
    a/[s(s + a)]             1 - e^{-at}                (1 - e^{-aT})z/[(z - 1)(z - e^{-aT})]
    ω/(s² + ω²)              sin ωt                     z sin ωT/(z² - 2z cos ωT + 1)
    ω/[(s + a)² + ω²]        e^{-at} sin ωt             z e^{-aT} sin ωT/(z² - 2z e^{-aT} cos ωT + e^{-2aT})
    s/(s² + ω²)              cos ωt                     z(z - cos ωT)/(z² - 2z cos ωT + 1)
    (s + a)/[(s + a)² + ω²]  e^{-at} cos ωt             (z² - z e^{-aT} cos ωT)/(z² - 2z e^{-aT} cos ωT + e^{-2aT})
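The entries of Table 9.2 can be spot-checked in the same way as in Example 9.1, by truncating the defining sum. The sketch below is illustrative only (the parameter values are arbitrary and NumPy is assumed).

```python
import numpy as np

# Sketch: spot-check two entries of Table 9.2 by truncated summation
# of x(nT) z^{-n}. The values a, w, T and z are arbitrary test choices.
a, w, T, z, N = 1.0, 3.0, 0.2, 2.0, 400
n = np.arange(N)

# Entry t*e^{-at}  ->  T z e^{-aT} / (z - e^{-aT})^2
lhs = np.sum((n * T) * np.exp(-a * n * T) * z**(-n.astype(float)))
rhs = T * z * np.exp(-a * T) / (z - np.exp(-a * T))**2
print(lhs, rhs)

# Entry cos(wt)    ->  z (z - cos wT) / (z^2 - 2 z cos wT + 1)
lhs = np.sum(np.cos(w * n * T) * z**(-n.astype(float)))
rhs = z * (z - np.cos(w * T)) / (z**2 - 2 * z * np.cos(w * T) + 1)
print(lhs, rhs)
```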


9.3 THE INVERSE Z-TRANSFORM

Just as in the Laplace transform method, it is often desirable to obtain the time-domain response from the z-transform. This can be accomplished by one of the following methods:

1. The z-transform is manipulated into a partial-fraction expansion and the z-transform table is used to find the corresponding time function.
2. The z-transformed signal X(z) is expanded into a power series in powers of z^{-1}. The coefficient of z^{-n} corresponds to the value of the time function x(t) at the nth sampling instant.
3. The time function x(t) may be obtained from X(z) by the inversion integral. The value of x(t) at the sampling instant t = nT is given by

    x(nT) = (1/2πj) ∮_Γ X(z) z^{n-1} dz    (9.21)

where Γ is a circle of radius |z| = e^{cT}, centred at the origin of the z-plane, and c is of such a value that all the poles of X(z) are enclosed by the circle, i.e. Γ lies in the region of convergence of X(z).

It may be emphasized that only the values of x(t) at the sampling instants can be obtained from X(z), since X(z) contains no information on x(t) between sampling instants.

Example 9.4 Given the z-transform

    X(z) = (1 - e^{-aT}) z / [(z - 1)(z - e^{-aT})]    (9.22)

find the inverse z-transform x*(t).

1. Partial-Fraction Expansion Method. Equation (9.22) may be written as

    X(z) = z/(z - 1) - z/(z - e^{-aT})    (9.23)

From the z-transform table (Table 9.2), the corresponding time function at the sampling instants is

    x(nT) = 1 - e^{-anT}    (9.24)

hence

    x*(t) = Σ_{n=0}^{∞} x(nT) δ(t - nT) = Σ_{n=0}^{∞} (1 - e^{-anT}) δ(t - nT)    (9.25)

2. Power-Series Expansion. Expanding X(z) into a power series in z^{-1} by long division,

    X(z) = (1 - e^{-aT}) z^{-1} + (1 - e^{-2aT}) z^{-2} + (1 - e^{-3aT}) z^{-3} + ... + (1 - e^{-naT}) z^{-n} + ...    (9.26)

Correspondingly,

    x*(t) = 0·δ(t) + (1 - e^{-aT}) δ(t - T) + (1 - e^{-2aT}) δ(t - 2T) + ... + (1 - e^{-naT}) δ(t - nT) + ...
          = Σ_{n=0}^{∞} (1 - e^{-anT}) δ(t - nT)    (9.27)

3. Real Inversion Integral Method. From Eq. (9.21) we have

    x(nT) = (1/2πj) ∮_Γ X(z) z^{n-1} dz = Σ [residues of X(z) z^{n-1} at the poles of X(z)]
          = [(1 - e^{-aT}) zⁿ / (z - e^{-aT})]_{z=1} + [(1 - e^{-aT}) zⁿ / (z - 1)]_{z=e^{-aT}}
          = 1 - e^{-anT}    (9.28)
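The long division of Eq. (9.26) is easy to mechanise. The sketch below is not from the text; it divides the two polynomials in z^{-1} for illustrative values of a and T and recovers the samples 1 - e^{-anT} of Eq. (9.24).

```python
import numpy as np

# Sketch: power-series (long-division) expansion of X(z) from Eq. (9.22).
# a and T are arbitrary test values.
a, T, N = 0.8, 1.0, 8
E = np.exp(-a * T)
num = [0.0, 1.0 - E]                # numerator (1 - e^{-aT}) z^{-1}
den = [1.0, -(1.0 + E), E]          # (1 - z^{-1})(1 - e^{-aT} z^{-1})

num = num + [0.0] * (N - len(num) + 1)
coeffs = []
rem = num[:]                        # running remainder of the division
for k in range(N + 1):
    c = rem[k] / den[0]
    coeffs.append(c)
    for i, d in enumerate(den):
        if k + i <= N:
            rem[k + i] -= c * d

print(np.round(coeffs, 4))                                   # Eq. (9.26)
print(np.round(1.0 - np.exp(-a * T * np.arange(N + 1)), 4))  # Eq. (9.24)
```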

9.4 SOME IMPORTANT THEOREMS OF Z-TRANSFORMS

1. Linearity of the z-Transform

For all constants C1 and C2, the following property holds:

    Z[C1 f1 + C2 f2] = Σ_{n=0}^{∞} [C1 f1(nT) + C2 f2(nT)] z^{-n}
                     = C1 Σ_{n=0}^{∞} f1(nT) z^{-n} + C2 Σ_{n=0}^{∞} f2(nT) z^{-n}
                     = C1 Z[f1] + C2 Z[f2]    (9.29)

The region of convergence is at least the intersection of the regions of convergence of Z[f1] and Z[f2]. Thus Z is a linear operator on the space of all z-transformable functions f(nT), n = 0, 1, 2, ...

2. Shifting Theorem (Real Translation)

If Z[f] = F(z), then

    Z[f(t - nT)] = z^{-n} F(z)    (9.30)

where n is an integer.

Proof: By definition,

    Z[f(t - nT)] = Σ_{k=0}^{∞} f(kT - nT) z^{-k}
                 = Σ_{k=0}^{∞} f(kT - nT) z^{-(k-n)} · z^{-n}
                 = z^{-n} Σ_{k=0}^{∞} f(kT - nT) z^{-(k-n)}    (9.31)
                 = z^{-n} F(z)

since f(kT - nT) = 0 for k < n.

This theorem is very useful in the solution of difference equations. Following a similar procedure, we can easily obtain the z-transforms of the forward as well as the backward differences.

3. Complex Translation

    Z[e^{-at} f(t)] = [F(s + a)]* = F(z e^{aT})    (9.32)

Proof: By definition,

    Z[e^{-at} f(t)] = Σ_{n=0}^{∞} f(nT) e^{-anT} z^{-n}    (9.33)

If we let z1 = z e^{aT}, Eq. (9.33) becomes

    Z[e^{-at} f(t)] = Σ_{n=0}^{∞} f(nT) z1^{-n} = F(z1)    (9.34)

hence

    Z[e^{-at} f(t)] = F(z e^{aT})    (9.35)

Example 9.5 Apply the complex translation theorem to find the z-transform of t e^{-at}.

Solution: If we let f(t) = t, then

    F(z) = Z[t] = Tz / (z - 1)²    (9.36)

From Theorem 3,

    Z[t e^{-at}] = F(z e^{aT}) = T z e^{aT} / (z e^{aT} - 1)² = T z e^{-aT} / (z - e^{-aT})²    (9.37)


4. Initial Value Theorem

If the function f(t) has the z-transform F(z), and the limit of F(z) as z → ∞ exists, then

    lim_{t→0} f(t) = lim_{z→∞} F(z)    (9.38)

5. Final Value Theorem

If the function f(t) has the z-transform F(z), and (1 - z^{-1})F(z) has no poles on or outside the unit circle centred at the origin of the z-plane, then

    lim_{t→∞} f(t) = lim_{z→1} (1 - z^{-1}) F(z)    (9.39)

Example 9.6 Given

    F(z) = 0.792 z² / [(z - 1)(z² - 0.416z + 0.208)]    (9.40)

determine the initial and final values of f(t).

Initial value of f(t): From Theorem 4,

    lim_{t→0} f(t) = lim_{z→∞} F(z) = lim_{z→∞} 0.792 z² / [(z - 1)(z² - 0.416z + 0.208)] = 0    (9.41)

Therefore, the initial value of f(t) is zero.

Final value of f(t): From Theorem 5,

    lim_{t→∞} f(t) = lim_{z→1} (1 - z^{-1}) F(z)
                   = lim_{z→1} [(z - 1)/z] · 0.792 z² / [(z - 1)(z² - 0.416z + 0.208)]
                   = lim_{z→1} 0.792 z / (z² - 0.416z + 0.208) = 1    (9.42)

Therefore, the final value of f(t) is unity.
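Both limits of Example 9.6 can be confirmed numerically by evaluating F(z) far from the origin and (1 - z^{-1})F(z) close to z = 1; a minimal sketch:

```python
# Sketch: numerical check of Example 9.6 using Eq. (9.40).
F = lambda z: 0.792 * z**2 / ((z - 1.0) * (z**2 - 0.416 * z + 0.208))

print(F(1e6))                      # ~0: initial value, Theorem 4
z = 1.0 + 1e-8
print((1.0 - 1.0 / z) * F(z))      # ~1: final value, Theorem 5
```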

6. Real Convolution Theorem

If f1(t) and f2(t) have the z-transforms F1(z) and F2(z), then

    F1(z) F2(z) = Z[ Σ_{k=0}^{n} f1(kT) f2((n - k)T) ]    (9.43)

Proof: By definition,

    F1(z) F2(z) = [ Σ_{k=0}^{∞} f1(kT) z^{-k} ] F2(z)    (9.44)

But we know that

    z^{-k} F2(z) = Z[f2(t - kT)]    (9.45)

Hence

    F1(z) F2(z) = Σ_{k=0}^{∞} f1(kT) Z[f2(t - kT)]    (9.46)
                = Σ_{k=0}^{∞} f1(kT) Σ_{n=0}^{∞} f2((n - k)T) z^{-n}    (9.47)
                = Σ_{n=0}^{∞} [ Σ_{k=0}^{∞} f1(kT) f2((n - k)T) ] z^{-n}
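Eq. (9.43) is the statement that multiplying two z-transforms corresponds to convolving the two sample sequences. A small sketch with two arbitrary finite sequences (NumPy assumed) makes this concrete:

```python
import numpy as np

# Sketch illustrating Eq. (9.43): the coefficient-wise product of two
# truncated z-transforms (polynomials in z^{-1}) equals the z-transform
# of the discrete convolution of the two sequences. Sequences are arbitrary.
f1 = np.array([1.0, 0.5, 0.25, 0.125])        # f1(kT)
f2 = np.array([2.0, -1.0, 3.0])               # f2(kT)

prod_coeffs = np.polynomial.polynomial.polymul(f1, f2)  # product of transforms
conv = np.convolve(f1, f2)                              # sum_k f1(kT) f2((n-k)T)

print(prod_coeffs)
print(conv)        # identical, as Eq. (9.43) asserts
```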

7. Complex Differentiation (Multiplication by t)

If F(z) is the z-transform of f, then

    Z[t f] = -T z (d/dz) F(z)    (9.48)

Proof: By definition,

    Z[t f] = Σ_{n=0}^{∞} (nT) f(nT) z^{-n} = -T z Σ_{n=0}^{∞} f(nT) (-n z^{-n-1})    (9.49)

The term in the bracket is a derivative with respect to z, so

    Z[t f] = -T z Σ_{n=0}^{∞} f(nT) (d/dz) z^{-n}
           = -T z (d/dz) Σ_{n=0}^{∞} f(nT) z^{-n}
           = -T z (d/dz) F(z)    (9.50)

8. Differentiation with Respect to a Second Independent Variable

    Z[∂f(t, a)/∂a] = ∂F(z, a)/∂a    (9.51)

9. Limit with Respect to a Second Independent Variable

    Z[lim_{a→a0} f(t, a)] = lim_{a→a0} F(z, a)    (9.52)


10. Integration with Respect to a Second Independent Variable

    Z[ ∫_{a0}^{a} f(t, a) da ] = ∫_{a0}^{a} F(z, a) da    (9.53)

provided the integral is finite.

9.5 THE PULSE TRANSFER FUNCTION

The transfer function of the open-loop system in Fig. 9.6a is given as

    G(s) = C(s) / X(s)    (9.54)

For a system with sampled data, Fig. 9.6b illustrates a network G which is connected to a sampler S1 with sampling period T. Assume that S1 is an ideal sampler, so that x*(t) = Σ_n x(nT) δ(t - nT).

If a fictitious sampler S2, with the same sampling period T as that of S1, is placed at the output, the output of the switch S2 for a unit-impulse input is

    c*(t) = g*(t) = Σ_{n=0}^{∞} g(nT) δ(t - nT)    (9.55)

where c(nT) = g(nT) is defined as the weighting sequence of G. The signals x(t), x*(t), c(t) and c*(t) are illustrated in Fig. 9.7. Hence

    G*(s) = Σ_{n=0}^{∞} g(nT) e^{-nTs}    (9.56)

which is the pulse transfer function of the system G.

Once the weighting sequence of a network G is defined, the outputs c(t) and c*(t) of the system are obtained by the principle of superposition. Suppose that an arbitrary input x(t) is applied to the system of Fig. 9.6b at t = 0; the sampled input to G is then the sequence x(nT). At the time t = nT, the output sample c(nT) is the sum of the effects of all the samples x(nT), x((n-1)T), x((n-2)T), ..., x(0); that is,

    c(nT) = sum of the effects of the samples x(nT), x((n-1)T), ..., x(0)    (9.57)

or

    c(nT) = x(0) g(nT) + x(T) g((n-1)T) + x(2T) g((n-2)T) + ... + x((n-1)T) g(T) + x(nT) g(0)    (9.58)

Multiplying both sides of the last equation by e^{-nTs} and summing from n = 0 to n = ∞, we have

    Σ_{n=0}^{∞} c(nT) e^{-nTs} = Σ_{n=0}^{∞} x(0) g(nT) e^{-nTs} + Σ_{n=0}^{∞} x(T) g((n-1)T) e^{-nTs}
                                + ... + Σ_{n=0}^{∞} x((n-1)T) g(T) e^{-nTs} + Σ_{n=0}^{∞} x(nT) g(0) e^{-nTs}    (9.59)

or

    Σ_{n=0}^{∞} c(nT) e^{-nTs} = [x(0) + x(T) e^{-Ts} + x(2T) e^{-2Ts} + ...] Σ_{n=0}^{∞} g(nT) e^{-nTs}    (9.60)

from which

    Σ_{n=0}^{∞} c(nT) e^{-nTs} = [ Σ_{n=0}^{∞} x(nT) e^{-nTs} ] [ Σ_{n=0}^{∞} g(nT) e^{-nTs} ]    (9.61)

or simply

    C*(s) = X*(s) G*(s)    (9.62)

where G*(s) is defined as the pulse transfer function of G and is given by Eq. (9.56). Taking the z-transform of both sides of Eq. (9.62) yields

    C(z) = X(z) G(z)    (9.63)

9.6 Z-TRANSFORM OF SYSTEMS

1. Z-Transform of Cascaded Elements with a Sampling Switch Between Them

Fig. 9.8a illustrates a sampled-data system with cascaded elements G1 and G2. The two elements are separated by a second sampling switch S2, which is synchronized to S1. The z-transform relation between the output and input signals is derived as follows. The output signal of G1 is

    D(s) = G1(s) X*(s)    (9.64)

and the system output is

    C(s) = G2(s) D*(s)    (9.65)

Taking the pulsed transform of Eq. (9.64) yields

    D*(s) = G1*(s) X*(s)    (9.66)

and substituting D*(s) into Eq. (9.65), we have

    C(s) = G2(s) G1*(s) X*(s)    (9.67)

Taking the pulsed transform of the last equation,

    C*(s) = G2*(s) G1*(s) X*(s)    (9.68)

The z-transform of the above equation is

    C(z) = G1(z) G2(z) X(z)    (9.69)

2. Z-Transform of Cascaded Elements with No Sampling Switch Between Them

Fig. 9.8b illustrates a sampled-data system with two cascaded elements and no sampler between them. The z-transform relation between output and input is derived as follows. The transform of the continuous output is

    C(s) = G1(s) G2(s) X*(s)    (9.70)

The pulsed transform of the output is

    C*(s) = G1G2*(s) X*(s)    (9.71)

where

    G1G2*(s) = [G1(s) G2(s)]* = (1/T) Σ_{n=-∞}^{∞} G1(s + jnω_s) G2(s + jnω_s)    (9.72)

In general,

    G1G2*(s) ≠ G1*(s) G2*(s)    (9.73)

The z-transform of Eq. (9.71) is

    C(z) = G1G2(z) X(z)    (9.74)

Example 9.7 For the sampled-data systems in Fig. 9.8a and b, let G1(s) = 1/s, G2(s) = a/(s + a), and let x(t) be a unit step function. Find C(z) in both cases.

Solution: The output of the system in case (a) is

    C(z) = G1(z) G2(z) X(z)
         = [z/(z - 1)] · [az/(z - e^{-aT})] · [z/(z - 1)]
         = a z³ / [(z - 1)²(z - e^{-aT})]    (9.75)

The output in case (b) is

    C(z) = G1G2(z) X(z),   where G1(s)G2(s) = a/[s(s + a)]
         = [z(1 - e^{-aT}) / ((z - 1)(z - e^{-aT}))] · [z/(z - 1)]
         = z²(1 - e^{-aT}) / [(z - 1)²(z - e^{-aT})]    (9.76)
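The difference between the two configurations can be seen by expanding each C(z) of Example 9.7 into a power series in z^{-1} and comparing the output samples. The sketch below assumes a = 1 and T = 1 s, values not specified in the example.

```python
import numpy as np

# Sketch for Example 9.7: first output samples c(nT) for case (a),
# Eq. (9.75), and case (b), Eq. (9.76), via long division in z^{-1}.
a, T, N = 1.0, 1.0, 6
E = np.exp(-a * T)

def samples(num, den, N):
    """Power-series (long-division) expansion of num(z^-1)/den(z^-1)."""
    num = list(num) + [0.0] * (N + 1)
    out = []
    for k in range(N + 1):
        c = num[k] / den[0]
        out.append(c)
        for i, d in enumerate(den):
            if k + i <= N:
                num[k + i] -= c * d
    return np.array(out)

# Common denominator (z-1)^2 (z - e^{-aT}), written in powers of z^{-1}
den = np.array([1.0, -(2.0 + E), 1.0 + 2.0 * E, -E])

ca = samples([a], den, N)                 # case (a): a z^3 / [...]
cb = samples([0.0, 1.0 - E], den, N)      # case (b): (1 - e^{-aT}) z^2 / [...]
print(np.round(ca, 4))
print(np.round(cb, 4))                    # the two sample sequences differ
```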

3. General Closed-Loop Systems

The transfer function of a closed-loop sampled-data system can also be obtained by the procedure of the preceding sections. For the system shown in Fig. 9.9, the output transform is

    C(s) = G(s) E*(s)    (9.77)

The Laplace transform of the continuous error function is

    E(s) = X(s) - C(s) H(s)    (9.78)

or

    E(s) = X(s) - H(s) G(s) E*(s)    (9.79)

Taking the pulsed transform of the last equation, we have

    E*(s) = X*(s) - HG*(s) E*(s)    (9.80)

from which

    E*(s) = X*(s) / [1 + HG*(s)]    (9.81)

The output transform C(s) is obtained by substituting E*(s) from Eq. (9.81) into Eq. (9.77):

    C(s) = G(s) X*(s) / [1 + HG*(s)]    (9.82)

The pulsed transform of c(t) is

    C*(s) = G*(s) E*(s) = G*(s) X*(s) / [1 + HG*(s)]    (9.83)

Hence the z-transform of c(t) is

    C(z) = [G(z) / (1 + HG(z))] X(z)    (9.84)

9.7 LIMITATIONS OF THE Z-TRANSFORM METHOD

We have seen that the z-transform is a convenient tool for the treatment of discrete systems. However, it has certain limitations, and in certain cases care must be taken in its application.

1. The derivation of the z-transform is based on the assumption that the sampled signal is approximated by a train of impulses whose areas are equal to the input of the sampler at the sampling instants. This assumption is valid only if the sampling duration is small compared with the significant time constants of the system.

2. The z-transform C(z) specifies only the values of the time function c(t) at the sampling instants. Therefore, for any C(z), the inverse transform c(nT) describes c(t) only at the sampling instants t = nT.

3. In analysing sampled data by the z-transform method, the transfer function G(s) must have at least two more poles than zeros [or g(t) must not have a jump at t = 0]; otherwise the system response obtained by the z-transform method may be unrealistic or even incorrect.

9.8 STABILITY ANALYSIS

A sampled-data system is considered to be stable if the sampled output is bounded whenever a bounded input is applied. However, there may be hidden oscillations between sampling instants, which must be studied by special methods. The closed-loop transfer function of the sampled-data system in Fig. 9.9 is given as

    C*(s) / X*(s) = G*(s) / [1 + HG*(s)]    (9.85)

where 1 + HG*(s) = 0 is the characteristic equation of the system. The stability of the sampled-data system is entirely determined by the location of the roots of the characteristic equation. Specifically, none of the roots of the characteristic equation may lie in the right half of the s-plane, since such a root would yield an exponentially growing time function. In terms of the z-transform, the characteristic equation of the system is written as 1 + HG(z) = 0. Since the right half of the s-plane is mapped into the exterior of the unit circle in the z-plane, as shown in Fig. 9.10, the stability requirement is that all the roots of the characteristic equation must lie inside the unit circle. We shall not discuss all the stability techniques in detail, but briefly outline only two methods, namely the Routh-Hurwitz criterion and the root locus method.

1. The Routh-Hurwitz Criterion Applied to Sampled-Data Systems

The stability of a sampled-data system concerns the location of the roots of the characteristic equation with respect to the unit circle in the z-plane. A convenient method is to use the bilinear transformation

    r = (z + 1)/(z - 1)   or   z = (r + 1)/(r - 1)    (9.86)

where r is a complex variable, i.e. r = σ_r + jω_r. This transformation maps the interior of the unit circle in the z-plane into the left half of the r-plane; therefore, the Routh test may be performed on the polynomial in the variable r. The following example illustrates how the modified Routh test is performed for a sampled-data feedback system.

Example 9.8 Let the open-loop transfer function of a unity-feedback system with sampled error signal be of the form

    G(s) = 22.57 / [s²(s + 1)]    (9.87)

Solution: If the sampling period is 1 sec, the z-transform of G(s) is

    G(z) = 22.57 z(0.368z + 0.264) / [(z - 1)²(z - 0.368)]    (9.88)

The characteristic equation of the system may be written as

    z³ + 5.94z² + 7.7z - 0.368 = 0    (9.89)

Substitution of Eq. (9.86) into the last equation yields

    [(r + 1)/(r - 1)]³ + 5.94[(r + 1)/(r - 1)]² + 7.7(r + 1)/(r - 1) - 0.368 = 0    (9.90)

Simplifying Eq. (9.90), we get

    14.27r³ + 2.3r² - 11.74r + 3.13 = 0    (9.91)

The Routh tabulation of the last equation is

    r³ |  14.27   -11.74
    r² |   2.3      3.13
    r¹ | -31.1      0
    r⁰ |   3.13


Since there are two changes of sign in the first column of the tabulation, the characteristic equation has two roots in the right half of the r-plane, which corresponds to two roots outside the unit circle in the z-plane, and shows that the system is unstable.

2. The Root Locus Technique

The root locus technique used for the analysis and design of continuous-data systems can easily be adapted to the study of sampled-data systems. Since the characteristic equation of a simple sampled-data system may be represented in the form

    1 + HG(z) = 0    (9.92)

where HG(z) is a rational function of z, the root locus method may be applied directly to the last equation without modification. The significant differences from the continuous case are that the root loci of Eq. (9.92) are constructed in the z-plane, and that, in investigating the stability of a sampled-data system from the root locus plot, the unit circle rather than the imaginary axis should be observed. The construction rules for root loci discussed in Chapter 5 remain valid. The following example shows the construction of root loci for a sampled-data system.

Example 9.9 Consider a unity-feedback control system with sampled error signal; the open-loop transfer function of the system is given as

    G(z) = k z(1 - e^{-T}) / [(z - 1)(z - e^{-T})]    (9.93)

Draw the root loci of the system for T = 1 sec and T = 5 sec.

Solution: The characteristic equation of the system is 1 + G(z) = 0, whose root loci are to be determined as k is varied from 0 to ∞. If the sampling period T is 1 sec, G(z) becomes

    G(z) = 0.632 k z / [(z - 1)(z - 0.368)]    (9.94)

which has poles at z = 1 and z = 0.368 and a zero at the origin. The pole-zero configuration of G(z) is shown in Fig. 9.11a. The root loci must start at the poles (k = 0) and end at the zeros (k = ∞) of G(z). The complete root loci for T = 1 sec intersect the unit circle at z = -1, and the corresponding value of k at that point is 4.33.

If the sampling period is changed to T = 5 sec, G(z) becomes

    G(z) = 0.993 k z / [(z - 1)(z - 0.0067)]    (9.95)

The root loci for T = 5 sec are constructed in Fig. 9.11b. The marginal value of k for T = 5 sec is found to be 2.02, as compared with the marginal k of 4.33 for T = 1 sec.
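Both stability results can be cross-checked numerically: the root magnitudes of Eq. (9.89) confirm the Routh conclusion of Example 9.8, and a simple gain sweep reproduces the marginal k values of Example 9.9. A sketch assuming NumPy:

```python
import numpy as np

# (1) Example 9.8: roots of the characteristic polynomial, Eq. (9.89).
roots = np.roots([1.0, 5.94, 7.7, -0.368])
print(np.abs(roots))          # two magnitudes exceed 1 -> unstable,
                              # as found by the Routh test in the r-domain

# (2) Example 9.9: sweep k until the largest closed-loop pole magnitude
# of (z-1)(z-E) + k(1-E)z = 0 reaches the unit circle.
def marginal_k(T):
    E = np.exp(-T)
    for k in np.arange(0.01, 10.0, 0.01):
        poles = np.roots([1.0, k * (1.0 - E) - 1.0 - E, E])
        if np.max(np.abs(poles)) >= 1.0 - 1e-9:
            return k
    return None

print(marginal_k(1.0))        # about 4.33
print(marginal_k(5.0))        # about 2.03 (the text quotes 2.02)
```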


PROBLEMS

9.1 The following signals are sampled by an ideal sampler with sampling period T. Determine the sampler output x*(t) and evaluate the pulsed transform X*(s) by the Laplace transform method.
(a) x(t) = t e^{-t}
(b) x(t) = e^{-at} sin ωt   (a = constant)

9.2 Derive the z-transform of the following functions:
(a) 1 / [s³(s + 2)]
(b) 1 / [s(s + 2)²]
(c) {2.5, 1.2, 0.08, 8.9, 0.4}
(d) (1/4)ⁿ for n ≥ 0

9.3 Evaluate the inverse z-transform of

    G(z) = 0.5z / [(z - 1)(z - 0.5)]

by the following methods:
(a) the real inversion formula;
(b) partial fraction expansion;
(c) power series expansion.

9.4 Obtain the inverse z-transform of

    G(z) = z(z² + 2z + 1) / [(z² - z + 1)(z² + z + 1)]

9.5 A digital filter has the pulse transfer function

    G(z) = (z² - 0.05z - 0.05) / (z² + 0.1z - 0.2)

Determine:
(a) the locations in the z-plane of the filter's poles and zeros;
(b) whether or not the filter is stable;
(c) a general expression for the filter's impulse response;
(d) the filter's linear difference equation;
(e) the initial and final values of the output of the filter for a unit step input.

9.6 The characteristic equations of certain sampled-data systems are given below. Determine the stability of these systems.
(a) z³ + 5z² + 3z + 2 = 0
(b) 3z⁵ + 4z⁴ + z³ + 2z² + 5z + 1 = 0
(c) z³ - 1.5z² - 2z + 3 = 0

9.7 The sampled-data system shown below has the transfer function

    G(s) = K / [s(1 + 0.2s)]

Sketch the root locus diagram for the system for T = 1 sec and T = 5 sec. Determine the marginal value of k for stability in each case.

9.8 Obtain the initial and final values of the following functions:
(a) G(z) = 2 / [(1 - z^{-1})(1 - 0.2z^{-1})]
(b) G(z) = 1 / [(1 - z^{-1})(1 - 0.5z^{-1})]

9.9 For the open-loop sampled-data system given below,

    G(s) = 100 / [s(s² + 100)],   T = 0.1 sec,   x(t) = unit step.

Use the z-transform method to evaluate the output response.


CHAPTER X
APPLICATIONS OF THE Z-TRANSFORM

The z-transform method is an efficient tool for dealing with linear difference equations. In the following sections we demonstrate its usefulness in the analysis and design of networks, sampled-data control systems and digital filters. It may be mentioned that the field of application of the z-transform is not limited to these areas; with the introduction of digital computers in control and instrumentation, its scope has become almost unlimited.

10.1 Z-TRANSFORM METHOD FOR THE SOLUTION OF LINEAR DIFFERENCE EQUATIONS

Several methods, such as the classical method, the matrix method, the recurrence method and the transform method, exist for the solution of difference equations. In this section we apply the z-transform method (generating-function method) to the solution of a certain type of linear difference equation. The formulation of a difference equation can take several forms, such as the backward or forward form or the translational form. These equations are usually encountered in physical, economic and physiological systems. In the following, we formulate the equation of the current in any loop of a ladder network and find its solution by the z-transform method.

Consider the ladder network in Fig. 10.1. Assume that all resistances except R_L have the same value R. Suppose that it is required to find the current in the nth loop. In the classical approach we could set up the (k + 1) loop equations and solve for i_n, which would be a cumbersome process. By the z-transform method, however, we need formulate only one loop equation and two terminal equations. The equation for the (n + 1)th loop is

    -R i_n + 3R i_{n+1} - R i_{n+2} = 0    (10.1)

Instead of writing down the other k equations, we make the following observations:

1. Eq. (10.1) is true for any n except 0 and k, since the network is a repetitive structure and all loops except the two end loops are alike.
2. Eq. (10.1), together with the end conditions, is sufficient to describe the network.

Applying the z-transform to Eq. (10.1),

    -I(z) + 3[zI(z) - zi_0] - [z²I(z) - z²i_0 - zi_1] = 0    (10.2)

or

    (z² - 3z + 1) I(z) = z(z i_0 - 3i_0 + i_1)    (10.3)

    I(z) = z(z i_0 - 3i_0 + i_1) / (z² - 3z + 1)    (10.4)

From the 0th loop,

    2R i_0 - R i_1 = V,   i.e.   i_1 = 2i_0 - V/R

Substituting the value of i_1 into Eq. (10.4), we get


    I(z) = z[z i_0 - 3i_0 + 2i_0 - V/R] / (z² - 3z + 1)
         = i_0 z [z - (1 + V/(R i_0))] / (z² - 3z + 1)    (10.5)

From the tables of inverse z-transforms, we readily obtain i_n as follows:

    i_n = Z^{-1}[I(z)] = i_0 [ cosh nθ_0 + (1/2 - V/(R i_0)) (2/√5) sinh nθ_0 ]    (10.6)

where

    cosh θ_0 = 3/2,   sinh θ_0 = √5/2    (10.7)

and t = nT = n for T = 1 sec    (10.8)

The value of i_0 can be found by substituting Eq. (10.6) into the equation of the end loop and solving for i_0.
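The closed form of Eq. (10.6) can be checked against the loop recurrence directly. The sketch below assumes illustrative values V = 1, R = 1 and i_0 = 1, which are not taken from the text.

```python
import numpy as np

# Sketch: evaluate Eq. (10.6) and confirm that it satisfies the loop
# recurrence -i_n + 3 i_{n+1} - i_{n+2} = 0 and the 0th-loop condition.
V, R, i0 = 1.0, 1.0, 1.0
theta0 = np.arccosh(1.5)                       # cosh(theta0) = 3/2, Eq. (10.7)

n = np.arange(10)
i = i0 * (np.cosh(n * theta0)
          + (0.5 - V / (R * i0)) * (2.0 / np.sqrt(5.0)) * np.sinh(n * theta0))

residual = -i[:-2] + 3.0 * i[1:-1] - i[2:]     # Eq. (10.1) with R divided out
print(np.max(np.abs(residual)))                # ~0

print(i[1], 2.0 * i0 - V / R)                  # i1 = 2 i0 - V/R is reproduced
```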

10.2 SAMPLED-DATA CONTROL SYSTEM DESIGN IN THE Z-PLANE

The design and synthesis of sampled-data control systems is a subject of control theory; methods such as Bode plots, Nyquist plots, magnitude and phase plots and root locus plots are generally employed, and the synthesis is carried out in the z-plane. As a detailed discussion of control theory is beyond the scope of this book, and since our intention is to demonstrate the application of the z-transform in this field, we shall limit this section to design and synthesis by the root locus method only. This method is chosen because it was thoroughly discussed in Chapter 5 for continuous systems, so the reader will have no difficulty in following its application to sampled-data systems. It may be pointed out, however, that the z-transform method is equally useful and applicable in design and synthesis by Bode plots, Nyquist plots, and magnitude and phase plots, either by direct application or after a bilinear transformation.

10.2.1 Design in the z-Plane Using the Root Locus Method

The root locus plots constructed in the z-plane have properties quite similar to those of the root loci of continuous systems in the s-plane. Once the root loci of the characteristic equation are plotted in the z-plane, much knowledge concerning the transient response of the system can be obtained by observing the location of the roots for a particular loop gain k. In root locus design, the desirable characteristic-equation roots are found by reshaping the root loci of the original system through adjustment of the loop gain and the use of compensation networks. The most elementary problem in root locus design is the determination of the loop gain to yield suitable relative stability. The loop gain of the system can be adjusted to give appropriate dynamic performance as measured by the position of the complex poles with respect to the constant-damping-ratio curves inside the unit circle. However, no simple rules are available for determining appropriate compensation networks from the root locus diagram; therefore, design in the z-plane with the root locus usually involves a certain amount of trial and error.

In the design of continuous-data systems, the design usually falls into one of the following categories: (1) phase-lead compensation; (2) phase-lag compensation.

10.2.2 Phase-Lead Compensation

A simple phase-lead model in the w-domain is described by the transfer function

    D(w) = (1 + aτw) / (1 + τw),   a > 1    (10.9)

    w = (z - 1)/(z + 1)    (10.10)

where τ is a constant greater than or equal to zero. This transfer function produces a positive phase shift that may be added to the system phase shift in the vicinity of the gain crossover frequency to increase the phase margin. The pole-zero configuration of Eq. (10.9) is shown in Fig. 10.2a. Note that the pole and zero of D(w) always lie on the negative real axis in the w-plane, with the zero to the right of the pole. Substitution of w = (z - 1)/(z + 1) into Eq. (10.9) yields

    D(z) = [(aτ + 1)/(τ + 1)] · [z + (1 - aτ)/(1 + aτ)] / [z + (1 - τ)/(1 + τ)]    (10.11)

Since τ and a are both positive numbers, and since a > 1, the pole and zero of D(z) always lie on the real axis on or inside the unit circle in the z-plane, with the zero always to the right of the pole. A typical set of pole-zero configurations of D(z) is shown in Fig. 10.2b. An illustrative example follows.

Example 10.1 A sampled-data feedback control system with digital compensation is shown in Fig. 10.3. The controlled process of the system is described by the transfer function

    G1(s) = k / [s(s + 1)]    (10.12)

The sampling period is one second. The open-loop transfer function of the system without compensation is

    G_h0G1(z) = 0.368k(z + 0.717) / [(z - 1)(z - 0.368)]    (10.13)

The root locus diagram of the uncompensated system is plotted in Fig. 10.4.

Note that the complex-conjugate part of the root loci is a circle with centre at z = -0.717 and radius 1.37. The closed-loop system becomes unstable for all values of k greater than 2.43. Let us assume that k is set at this marginal value, so that the two characteristic-equation roots are on the unit circle as shown in Fig. 10.4. Suppose that the transfer function D(z) of the digital controller is of the form

    D(z) = [(aτ + 1)/(τ + 1)] · [z + (1 - aτ)/(1 + aτ)] / [z + (1 - τ)/(1 + τ)]    (10.14)

where, for phase-lead compensation, a > 1 and τ > 0.

The constant factor (aτ + 1)/(τ + 1) in D(z) is necessary, since the insertion of the digital controller should not affect the velocity error constant k_v while improving the stability of the system. In other words, D(z) must satisfy the condition

    lim_{z→1} D(z) = 1    (10.15)

The design problem now essentially involves the determination of appropriate values of a and τ so that the system is stabilized. At this point, however, it is not clear how the values of a and τ should be chosen. We know from the properties of root loci that an added open-loop zero has the effect of pulling the root loci toward it, whereas an additional open-loop pole tends to push the loci away; but no simple rule exists for telling which combination is the most effective for stabilizing the system, because of the unlimited number of possible combinations of the pole and zero of D(z). Several sets of values of a and τ are used in the following to illustrate the effects of phase-lead compensation.

As a first trial, let a = 6.06 and τ = 0.165. The transfer function of the digital controller is then

    D(z) = 1.72 z / (z + 0.717)    (10.16)

and the open-loop transfer function of the compensated system is

    D(z)G_h0G1(z) = 0.64kz / [(z - 1)(z - 0.368)]    (10.17)

The root loci of the compensated system are shown in Fig. 10.5 as loci (2). It may be seen that for k = 2.43 one of the roots of the characteristic equation is on the negative real axis outside the unit circle, and the system is unstable. This shows that, for the values chosen for a and τ, the compensated system is worse than the original system. Four other sets of values of a and τ are then tried, and the corresponding root loci of the compensated systems are plotted in Fig. 10.5 (only the positive complex-conjugate parts of the root loci are shown). The characteristic-equation roots of the compensated systems for k = 2.43 are indicated on the loci. The pole and zero locations of D(z) for the various compensators are tabulated in Table 10.1. From the root locus diagram we see that, among the five compensators, only a = 2, τ = 0.4 and a = 3, τ = 0.1 result in stable systems. Even then the damping ratios are less than 10 percent, which means that the overshoots would exceed 70 percent, which is not acceptable.

The general ineffectiveness of phase-lead compensation is to be expected in this problem, since the original system is on the verge of instability, a situation for which phase-lead compensation is not recommended. In the next section we shall see that phase-lag compensation is more satisfactory for improving the stability of this system.

Table 10.1  Pole and zero of D(z) = [(aτ + 1)/(τ + 1)] [z + (1 - aτ)/(1 + aτ)] / [z + (1 - τ)/(1 + τ)]

    τ        a       zero of D(z)     pole of D(z)
    0.1      3       -0.538           -0.818
    0.165    6.06     0               -0.717
    0.4      2       -0.111           -0.375
    1.0      2        0.333            0
    3.0      3        0.80             0.50
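The compensator pole, zero and gain follow directly from Eq. (10.14); the short sketch below evaluates them for the (τ, a) pairs tried in this example (for τ = 0.165, a = 6.06 it reproduces Eq. (10.16)).

```python
# Sketch: evaluate D(z) of Eq. (10.14) for each (tau, a) pair tried in the text.
settings = [(0.1, 3.0), (0.165, 6.06), (0.4, 2.0), (1.0, 2.0), (3.0, 3.0)]

for tau, a in settings:
    gain = (a * tau + 1.0) / (tau + 1.0)
    zero = -(1.0 - a * tau) / (1.0 + a * tau)
    pole = -(1.0 - tau) / (1.0 + tau)
    print(f"tau={tau:5.3f}  a={a:4.2f}  gain={gain:5.3f}  "
          f"zero={zero:6.3f}  pole={pole:6.3f}")
```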


10.2.3 Phase-Lag Compensation

A simple phase-lag model is given by the transfer function

    D(w) = (1 + aτw) / (1 + τw)    (10.18)

where 0 < a < 1 and 0 < τ < ∞. The pole-zero configurations of the phase-lag D(w) and D(z) are depicted in Fig. 10.6. Note that, since a is less than unity, the pole is always to the right of the zero.

Example 10.2 Consider the same system given in Example 10.1, with k = 2.43. The system is now to be stabilized by means of the simple lag compensator of Eq. (10.18).

First, we investigate the effect of phase-lag compensation on the root loci when small values of τ are chosen. With a = 0.5, the root loci of the system with phase-lag compensation are plotted for τ = 0.1, 0.4 and 1.0, as shown in Fig. 10.7 (only the positive complex-conjugate parts of the loci are shown). For small values of τ, the phase-lag compensation has made the system unstable, which indicates that the value of τ should be large.

Let us assume that the design specification requires the damping ratio of the complex closed-loop poles to be approximately 60 percent. Referring to locus (1) in Fig. 10.8, which is the root locus of the original system, we note that the complex closed-loop poles have a damping ratio of 60 percent when the loop gain is equal to 0.5. In essence, the phase-lag compensation can be regarded as a means of increasing the velocity error constant k_v by a ratio of 4.86 (= 2.43/0.5) while keeping the complex closed-loop poles relatively unchanged.

Since τ is to be very large, and since a is less than unity, the pole and zero of D(z) will appear as an integrating dipole near the point z = +1. Therefore, the complex-conjugate parts of the original root loci are not affected significantly by the addition of the integrating dipole, since from points on these loci the dipole and the pole of G_h0G1(z) at z = 1 appear as a single pole. In order to increase the velocity error constant by a factor of 4.86 (from k = 0.5 to k = 2.43), the constant ratio a of the integrating dipole should be chosen to be at least 1/4.86, preferably 1/5, to allow the dipole to contribute a slight phase lag near the new gain crossover. Thus we let a = 0.2, and τ is chosen to be 100. Substituting these values of a and τ into Eq. (10.11) yields the transfer function of the phase-lag controller

    D(z) = 0.191 (z - 0.905) / (z - 0.980)    (10.19)

The open-loop transfer function of the compensated system is

    D(z)G_h0G1(z) = 0.07k(z - 0.905)(z + 0.717) / [(z - 1)(z - 0.368)(z - 0.98)]    (10.20)

In Fig. 10.8, the root loci of the phase-lag-compensated system are shown as loci (2). Note that the complex roots for k = 2.43 on the compensated loci lie very close to the roots for k = 0.5 on the uncompensated loci. The root loci of the compensated system for a = 0.2 and τ = 50 are also plotted in Fig. 10.8 (loci 3); for k = 2.43, the complex characteristic-equation roots lie very close to those of loci (2). This shows that the precise location of the dipole is not critical, as long as it is close to the z = +1 point.
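A numerical check of the lag design: the closed-loop poles of Eq. (10.20) at k = 2.43 can be compared with the poles of the uncompensated system at k = 0.5. A sketch assuming NumPy:

```python
import numpy as np

# Sketch for Example 10.2: closed-loop poles of the lag-compensated system
# at k = 2.43 and of the uncompensated system at k = 0.5.
k = 2.43
den = np.polymul(np.polymul([1, -1], [1, -0.368]), [1, -0.98])
num = 0.07 * k * np.polymul([1, -0.905], [1, 0.717])
print(np.roots(np.polyadd(den, num)))       # all inside the unit circle

k0 = 0.5
den0 = np.polymul([1, -1], [1, -0.368])
num0 = 0.368 * k0 * np.array([1.0, 0.717])
print(np.roots(np.polyadd(den0, num0)))     # complex pair close to the above
```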


10.3 Z-TRANSFORM METHOD FOR THE DESIGN OF DIGITAL FILTERS

There are two types of digital filters, namely recursive and non-recursive filters. In this section we present the design of recursive filters, which are more economical in execution time and storage requirements than non-recursive filters. Recursive digital filters are more commonly referred to as infinite impulse response filters. The term recursive means that the output of the digital filter, y(nT), is computed using the present input x(nT) and previous inputs and outputs, namely x((n-1)T), x((n-2)T), ..., y((n-1)T), y((n-2)T), ..., respectively. The design of a recursive filter centres on finding the filter coefficients, the a's and b's of G(z), thereby yielding a pulse transfer function which is a rational function of z.

There are two main methods for the design of digital filters. The first is an indirect approach, in which a suitable prototype continuous filter transfer function G(s) is designed and subsequently transformed via an appropriate s-plane to z-plane mapping to give a corresponding digital filter pulse transfer function G(z). The mapping used in this section is the bilinear z-transform, though other transformation methods are also available. The second method is a direct approach, concerned with the z-plane representation of the digital filter; the derivation of G(z) is achieved by working directly in the z-plane. The direct approach is used in the design of frequency-sampling filters and filters based on squared magnitude functions.

10.3.1 Indirect Approach Using a Prototype Continuous Filter

Continuous filters, e.g. Butterworth and Chebyshev, were discussed in Chapter 8. The general equations for the Butterworth and Chebyshev filters are given below:

    |G(jω)|² = 1 / [1 + (ω/ω_c)^{2n}] = [1 / (1 + (-1)ⁿ s^{2n})]|_{s = jω/ω_c}   (Butterworth)    (10.21)

and

    |G(jω)|² = 1 / [1 + ε² Cₙ²(ω)]   (Chebyshev)    (10.22)

where |G(jω)|² is the squared magnitude of the filter's transfer function, ω_c is the cutoff frequency, n is the order of the filter, ω is the frequency, ε is a real number with ε << 1, and Cₙ(ω) is the Chebyshev polynomial.

The design of the digital filter is carried out by applying the bilinear transformation to the continuous filter obtained for the given specification. In the following we present one example of a low-pass filter design. It may be mentioned that, once a low-pass continuous filter is designed, it can readily be converted into a bandpass or highpass continuous filter, as already discussed in Chapter 8.

Example 10.3 Derive the digital equivalent of a Butterworth low-pass filter by bilinear transformation for the following specifications:

(1) Digital cutoff frequency f_cd = 100 Hz

(2) Sampling period T = 1 ms
(3) Amplitude attenuation of 20 dB at 400 Hz.

Obtain the amplitude response of G(z).

Solution: The order of the filter is determined by the ratio of the cutoff frequency to the 20 dB attenuation frequency, as follows:

    20 dB = 10 log₁₀[1 + (ω/ω_c)^{2n}]
    2 = log₁₀[1 + 4^{2n}]
    99 = 4^{2n}    (10.23)
    1.9956 = 2n log₁₀ 4
    n = 1.65    (10.24)

As n has to be an integer, n = 2 will satisfy the filter specification. The prototype continuous second-order filter is

    G(s) = 1 / (s² + √2 s + 1)    (10.25)

The analog cutoff frequency is obtained by pre-warping:

    ω_ca = (2/T) tan(ω_cd T/2)    (10.26)
         = [2/(1×10⁻³)] tan(2π×100×1×10⁻³/2)
         = 650 rad/sec    (10.27)

The transformation from the normalised low-pass to the required low-pass filter is achieved by substituting s/ω_ca for s in G(s):

    G(s)_pt = 1 / [(s/650)² + √2(s/650) + 1]   (the pre-warped transfer function)    (10.28)
            = 422500 / (s² + 919.24s + 422500)    (10.29)

For the bilinear z-transform,

    s = (2/T)(z - 1)/(z + 1) = 2000(z - 1)/(z + 1)    (10.30)

and it follows that

    s² = 4×10⁶ (z - 1)²/(z + 1)²    (10.31)


Substituting Eqs. (10.30) and (10.31) into Eq. (10.29), we obtain the digital filter pulse transfer function:

    G(z) = 422500 / [4×10⁶ (z - 1)²/(z + 1)² + 1838480 (z - 1)/(z + 1) + 422500]
         = 422500(z² + 2z + 1) / (6260980z² - 7155000z + 2584020)

or

    G(z) = (z² + 2z + 1) / (14.82z² - 16.935z + 6.116)    (10.32)

The frequency response of G(z) is obtained by substituting z = e^{jωT} in Eq. (10.32), which gives

    G(e^{jωT}) = (e^{j2ωT} + 2e^{jωT} + 1) / (14.82e^{j2ωT} - 16.935e^{jωT} + 6.116)    (10.33)
               = [(cos 2ωT + 2 cos ωT + 1) + j(sin 2ωT + 2 sin ωT)]
                 / [(14.82 cos 2ωT - 16.935 cos ωT + 6.116) + j(14.82 sin 2ωT - 16.935 sin ωT)]    (10.34)

or

    |G(e^{jωT})| = |(cos 2ωT + 2 cos ωT + 1) + j(sin 2ωT + 2 sin ωT)|
                   / |(14.82 cos 2ωT - 16.935 cos ωT + 6.116) + j(14.82 sin 2ωT - 16.935 sin ωT)|    (10.35)

The amplitude-vs-frequency response of Eq. (10.35) can be obtained for T = 1 msec and is shown in Fig. 10.9.
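The whole design of Example 10.3 can be reproduced numerically: prewarp the cutoff, build the analog prototype and carry out the bilinear substitution by polynomial algebra. The sketch below (NumPy assumed) normalises the result so the leading denominator coefficient is 1; this is the same filter as Eq. (10.32) up to a common scale factor.

```python
import numpy as np

# Sketch of Example 10.3: prewarping, second-order Butterworth prototype,
# and bilinear substitution s = (2/T)(z-1)/(z+1) done by polynomial algebra.
T, f_cd = 1e-3, 100.0
c = 2.0 / T
w_a = c * np.tan(np.pi * f_cd * T)               # Eq. (10.26): ~650 rad/s

p1 = np.array([1.0, -1.0])                       # (z - 1)
p2 = np.array([1.0,  1.0])                       # (z + 1)
num = w_a**2 * np.polymul(p2, p2)
den = (c**2 * np.polymul(p1, p1)
       + np.sqrt(2.0) * w_a * c * np.polymul(p1, p2)
       + w_a**2 * np.polymul(p2, p2))
num, den = num / den[0], den / den[0]            # same filter as Eq. (10.32),
print(num, den)                                  # normalised so den[0] = 1

def mag(f):
    z = np.exp(1j * 2 * np.pi * f * T)
    return np.abs(np.polyval(num, z) / np.polyval(den, z))

print(20 * np.log10(mag(100.0)))                 # ~ -3 dB at the cutoff
print(20 * np.log10(mag(400.0)))                 # well below -20 dB at 400 Hz
```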

10.3.2 Direct Approach Using Squared Magnitude Functions

A direct approach to the design of digital filters is to derive G(z) working in the z-plane. When designing digital filters using this direct approach, we seek functions that produce half of the poles within the unit circle, the other half being outside. These functions are known as mirror image polynomials (MIPs). Consider the magnitude squared function defined as

    |G(e^{jωT})|² = 1 / (1 + [Fₙ(ωT)]²)    (10.36)

We need suitable trigonometric functions for [Fₙ(ωT)]² such that, on substitution of z = e^{jωT}, an MIP in the z-plane results. One such function is cos(ωT/2), that is,

    cos²(ωT/2) = (1/2)(1 + cos ωT) = (1/2)[1 + (1/2)(e^{jωT} + e^{-jωT})]    (10.37)

and therefore, with z = e^{jωT},

    cos²(ωT/2) = (1/2)[1 + (1/2)(z + z^{-1})] = (z + 1)²/(4z)    (10.38)

Similarly,

    sin²(ωT/2) = -(z - 1)²/(4z)    (10.39)

Now consider

    |G(e^{jωT})|² = 1 / {1 + [sin²(ωT/2)/sin²(ω_cT/2)]ⁿ}    (10.40)

where ω_c is the desired angular cutoff frequency. Substituting z = e^{jωT} yields

    |G(z)|² = qⁿ / (qⁿ + pⁿ)    (10.41)

where q = sin²(ω_cT/2) and p = -(z - 1)²/(4z). The roots in the p-plane occur on a circle of radius q. Thus

    p_k = q e^{jθ_k},   k = 0, 1, 2, ..., (n - 1)    (10.42)

where

    θ_k = (2k + 1)π/n  for n even,    θ_k = 2kπ/n  for n odd    (10.43)

Having solved for p, we can find the corresponding factors in the z-plane by solving p = -(z - 1)²/(4z), that is,

    -4pz = (z - 1)² = z² - 2z + 1    (10.44)

therefore

    z² - 2z(1 - 2p) + 1 = 0

or

    z = (1 - 2p) ± [4p(p - 1)]^{1/2}    (10.45)

Hence it is seen that for every root in the p-plane, there will be two corresponding roots in the z-plane, as given in Eq. (10.45). Example 10.4 Consider the specications of the low pass lter in Example 10.3 and design the digital lter by direct method. Solution: 1 2x100 c T 1 = (1 cos cT ) = (1 cos ) = 0.0955 2 2 2 1000 1 1 2x400 T p = sin 2 ) = 0.9045, = 400x2 = (1 cos T ) = (1 cos 2 2 2 1000 q = sin 2 |G(z)| 2 = qn 1 1 1 h in = n = = , at = 400x2 p q + pn 1 + (9.47)n 1 + 0.9 045 1+ q 0.0 955
n

(10.46)

207

20 = 100 = 89 = n=

10 Log10 [1 + (9.47)n ] 1 + (9.47)n ] (9.47)n 2

(10.47)

(10.48)
2

Hence, we will choose a second order digital lter. Therefore k = 0 and k = 1, and 0 = and 1 = 3 2 Therefore P0 = 0.09556 Now applying Eq. (10.45). z 0 = 0.09556 Therefore and z 0 = 1.417 j0.649 (outside the uni-circle) Similarly 1/2 3 3 3 zi = 1 0.09556 4(0.09556 )[0.09556 1] 2 2 2 = (1 + j0.191) (0.417 + j0.459) Therefore z1 = 0.583 j0.268 (inside the unit circle) z1 = 1.417 + j0.649 (outside the unit circle) For stability we use the poles which are inside the unit circle, therefore we obtain G(z) = 1 [z (0.583 j0.268)][z (0.583 + j0.268)] o1/2 n 4(.09556 )[(0.09556 1]1/2 2 2 2 = (1 j0.191) (0.417 + j0.159)(10.50) z 0 = 0.583 + j 0.268 (inside the unit-circle) 2 andP1 = 0.09556 3 2

(10.49)

(10.50)

(10.51) (10.52)

(10.53)

Now taking |G(j )| = 1 at = 0, then z = ejT = 1. Therefore |G(1)| = |1 [1 (0.583 j 0.268)][1 (0.583 + j0.268)]| G(z) = 0.246 [(z (0.583 j0.268)][(z (0.583 + j0.268)] 0.246 = 2 z 1.166z + 0.4117
208

(10.54) (10.55) (10.56)

The frequency response of this filter can be obtained by substituting z = e^{jωT}. Thus

    G(e^{jωT}) = 0.246 / (e^{j2ωT} - 1.166e^{jωT} + 0.4117)    (10.57)
               = 0.246 / [(cos 2ωT - 1.166 cos ωT + 0.4117) + j(sin 2ωT - 1.166 sin ωT)]    (10.58)

The frequency response of Eq. (10.58) can now be plotted for T = 1 msec and is shown in Fig. 10.10.
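The direct design of Example 10.4 can likewise be reproduced numerically from Eqs. (10.42), (10.43) and (10.45). A sketch (NumPy assumed):

```python
import numpy as np

# Sketch of Example 10.4: form the p-plane roots, map each to a z-plane
# pair via Eq. (10.45), keep the poles inside the unit circle and
# normalise the gain at z = 1.
T, fc, n = 1e-3, 100.0, 2
q = np.sin(np.pi * fc * T) ** 2                      # q = sin^2(wc*T/2) ~ 0.0955

thetas = [(2 * k + 1) * np.pi / n for k in range(n)] # n even, Eq. (10.43)
poles = []
for th in thetas:
    p = q * np.exp(1j * th)
    for z in (1 - 2 * p) + np.array([1, -1]) * np.sqrt(4 * p * (p - 1)):
        if abs(z) < 1.0:                             # keep the stable root
            poles.append(z)

den = np.real(np.poly(poles))                        # z^2 - 1.166 z + 0.412
K = np.polyval(den, 1.0)                             # gives |G(1)| = 1, ~0.246
print(K, den)

def mag(f):
    z = np.exp(1j * 2 * np.pi * f * T)
    return abs(K / np.polyval(den, z))

print(20 * np.log10(mag(100.0)))                     # ~ -3 dB at the cutoff
print(20 * np.log10(mag(400.0)))                     # roughly -20 dB (about -19.6)
```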

Table 10.2 may be used to frequency-transform a low-pass filter into a highpass, bandpass or bandstop filter. Note also that it is possible to transform from low pass to low pass, that is, to shift the cutoff frequency. Furthermore, in using Table 10.2, Ω (rad/sec) is the desired cutoff frequency, ω₁ and ω₂ are the desired lower and upper cutoff frequencies respectively, ω_c is the cutoff frequency of the prototype low-pass filter, and T is the sampling period.

Table 10.2  Frequency transformations used with direct design methods

    Filter      Substitute for z^{-1}                                     Design formulae
    Lowpass     (z^{-1} - a)/(1 - a z^{-1})                               a = sin[(ω_c - Ω)T/2] / sin[(ω_c + Ω)T/2]
    Highpass    -(z^{-1} + a)/(1 + a z^{-1})                              a = -cos[(ω_c + Ω)T/2] / cos[(ω_c - Ω)T/2]
    Bandpass    -[z^{-2} - 2abz^{-1}/(b+1) + (b-1)/(b+1)]                 a = cos[(ω₂ + ω₁)T/2] / cos[(ω₂ - ω₁)T/2]
                 / [(b-1)z^{-2}/(b+1) - 2abz^{-1}/(b+1) + 1]              b = cot[(ω₂ - ω₁)T/2] · tan(ω_cT/2)
    Bandstop    [z^{-2} - 2az^{-1}/(b+1) + (1-b)/(1+b)]                   a = same as above
                 / [(1-b)z^{-2}/(1+b) - 2az^{-1}/(1+b) + 1]               b = tan[(ω₂ - ω₁)T/2] · tan(ω_cT/2)


PROBLEMS

10.1. An error-sampled control system has the block diagram shown in Fig. 10p-1. The transfer function of the controlled process is

    G1(s) = K / [s(s + 1)(s + 2)]

The sampling period is 0.5 sec.
(a) Sketch the root loci in the z-plane as a function of k.
(b) What is the marginal value of k for stability?
(c) When k is set at the marginal value for stability, design a digital compensator so that the damping ratio of the closed-loop control poles is equal to 0.707.

10.2. A sampled-data feedback control system with digital compensation is shown in Fig. 10p-1. The transfer function of the controlled process is

    G1(s) = k(s + 1) / [s(s + 2)(s + 3)]

The sampling period is 1 sec.
(a) Sketch the root loci in the z-plane as a function of k.
(b) What is the marginal value of k for stability?
(c) When k is set at the marginal value for stability, design a digital compensator so that the damping ratio of the closed-loop poles is equal to 0.65.

10.3. (a) Find the magnitude squared function of

    G(z) = (1 + z^{-1}) / (1 + 0.5z^{-1} + 0.5z^{-2})

(b) Construct a pole-zero diagram of G(z).
(c) Construct the pole-zero diagram of the magnitude squared function of G(z).

10.4. Consider the transfer function of an analog filter,

    G(s) = 1 / (s² + √2 s + 1)

Let the sampling period be 0.5 sec. Find the corresponding digital transfer function by the bilinear transformation method. Also find the pole-zero diagram of the resultant transfer function and sketch the magnitude characteristic.

10.5. Suppose that a low-pass Butterworth filter is required to satisfy the following specifications:
(i) the 3 dB cutoff point is at θ_c = 0.1 rad;
(ii) the 10 dB attenuation point;
(iii) the sampling period T is 10 sec (note that ω_c = θ_c/T and ω = θ/T).
Find
(a) the order of the Butterworth low-pass filter;
(b) the digital equivalent filter by bilinear transformation;
(c) the magnitude vs. frequency plot.

10.6. A low-pass digital filter is required to have 3 dB attenuation at 2 kHz and at least 20 dB attenuation at 5 kHz. Using the direct approach of squared magnitude functions, derive G(z) to satisfy the above specifications. Take the sampling frequency as 20 kHz. Sketch the magnitude vs. frequency plot.

