
Lecture Plan for MAT 203


Part-I: Vector Calculus

Lecture-1 Scalar fields, vector fields, Gradient and applications of gradient.


Lecture-2 Divergence of a vector field, Curl of a vector field and Vector identities
Lecture-3 Line integral and Green's theorem
Lecture-4 Surface integral
Lecture-5 Volume integrals, Gauss Divergence Theorem
Lecture-6 Stokes' Theorem

Part-II: Ordinary differential equations

Lecture-7 Introduction, Formation of first order first degree differential equations (FOFDDE)


Lecture-8 Variable separable, Homogeneous, Nonhomogeneous FOFDDE
Lecture-9 Exact and non-exact equations, linear differential equations (LDE), Bernoulli's equations
Lecture-10 Application of FOFDDE, Orthogonal trajectories
Lecture-11 General solution of y″(x) + py′(x) + qy(x) = 0, where p and q are constants, and solving homogeneous IVODE.
Lecture-12 Particular integral of y″(x) + py′(x) + qy(x) = R(x), where p and q are constants: Method of Undetermined Coefficients
Lecture-13 Particular integral of y″(x) + P(x)y′(x) + Q(x)y(x) = R(x): Method of Variation of Parameters.
Lecture-14 LDEs (linear differential equations) with variable coefficients which can be converted to LDEs with constant coefficients

Part-III: Laplace Transforms

Lecture-15 Introduction, Linearity property, Change of scale property


Lecture-16 Multiplication by e^(at), Multiplication by t^n, Division by t, Laplace Transform of Derivatives
Lecture-17 Laplace transforms of integrals, Second shifting, Laplace transforms of periodic functions, Convolution
Lecture-18 Inverse Laplace transforms and Applications

Part-IV: Series and Using Series to Solve Differential Equations

Lecture-19 Series, divergence test. Comparison and limit comparison tests


Lecture-20 Integral test, alternating series test
Lecture-21 Absolute convergence, root & ratio tests
Lecture-22 Power series, Taylor polynomials and series.

Lecture-23 Series to Solve Differential Equations, Frobenius Method

Lecture-24 Legendre and Bessel Differential Equations

Part-V: System of Linear differential equations

Lecture-25 Solving systems of linear equations

Lecture-26 Gauss elimination Method


Lecture-27 Eigenvalues, eigenvectors and characteristic polynomial.
Lecture-28 Systems of linear differential equations

Textbooks

1. For Part-I refer: Calculus: Early Transcendentals by James Stewart.


2. For the remaining parts: Advanced Engineering Mathematics by Erwin Kreyszig.
1 Part-I

1.1 Lecture-1

Overview (concept map of Part-I):

Differential Vector Calculus
- Gradient: grad φ = ∇φ = (∂φ/∂x) i + (∂φ/∂y) j + (∂φ/∂z) k, where ∇ = (∂/∂x) i + (∂/∂y) j + (∂/∂z) k.
- Unit outward normal: (∇φ)_p / |(∇φ)_p|.  [Input: a surface and a point p.]
- Directional derivative: (∇φ)_p · a/|a|.  [Input: a surface, a point p and a direction a.]
- Maximal directional derivative: (∇φ)_p · a/|a|, where a = (∇φ)_p.  [Input: a surface and a point p.]
- Angle between surfaces: cos θ = (∇φ₁)_p · (∇φ₂)_p / (|(∇φ₁)_p| |(∇φ₂)_p|).  [Input: two surfaces φ₁, φ₂ and their point of intersection p.]
- Divergence: div F := ∇ · F = ∂F₁/∂x + ∂F₂/∂y + ∂F₃/∂z, which is a scalar field; ∇ · F = 0 means F is called solenoidal.
- Curl: curl F := ∇ × F; ∇ × F = 0 means F is called irrotational.
Here F = F₁ i + F₂ j + F₃ k.

Integral Vector Calculus
- Line integral: ∫_C F · dr
- Vector surface integral: ∬_S F · n dS
- Volume integral: ∭_V F dV

Integral transformations
- Green's theorem: ∮_C M dx + N dy = ∬_R (∂N/∂x − ∂M/∂y) dx dy
- Gauss divergence theorem: ∬_S F · n dS = ∭_V div F dV
- Stokes' theorem: ∮_C F · dr = ∬_S curl F · n dS

Definition: The gradient of a scalar field φ is the vector field

grad φ = (∂φ/∂x) i + (∂φ/∂y) j + (∂φ/∂z) k

so gradient is a map

grad : scalar fields → vector fields

φ ↦ grad φ = (∂φ/∂x) i + (∂φ/∂y) j + (∂φ/∂z) k   (1)

It is common and useful to also use the symbolic notation

grad φ = ∇φ

where ∇, called nabla, is the vector differential operator

∇ = (∂/∂x) i + (∂/∂y) j + (∂/∂z) k

Example: The gradient of the scalar field φ(x, y, z) = xy + y cos z is

grad φ = y i + (x + cos z) j − y sin z k

and physical examples include the force on a particle

F = −∇V

in a potential energy field V(x, y, z).
Example: If f = x² + 2x²yz, find grad f. If g = x²yz + 2z, what is ∇f · ∇g?
Solution:

∇f = ∂/∂x (x² + 2x²yz) i + ∂/∂y (x² + 2x²yz) j + ∂/∂z (x² + 2x²yz) k
   = (2x + 4xyz) i + 2x²z j + 2x²y k

and

∇g = ∂/∂x (x²yz + 2z) i + ∂/∂y (x²yz + 2z) j + ∂/∂z (x²yz + 2z) k
   = 2xyz i + x²z j + (x²y + 2) k

and

∇f · ∇g = 2xyz(2x + 4xyz) + 2x²z(x²z) + 2x²y(2 + x²y)
        = 4x²yz + 8x²y²z² + 2x⁴z² + 4x²y + 2x⁴y²
Example: Let r = x i + y j + z k. That is, r is the position vector of the point P(x, y, z). In particular, we denote
r(t) = (x(t), y(t), z(t)) = x(t) i + y(t) j + z(t) k. Let r = |r| = √(x² + y² + z²), so r² = x² + y² + z².

∂(r²)/∂x = ∂(x² + y² + z²)/∂x
2r ∂r/∂x = 2x
∂r/∂x = x/r

Similarly we can prove ∂r/∂y = y/r and ∂r/∂z = z/r.
Find grad(r).
Solution:

grad(r) = ∇(r) = (∂r/∂x) i + (∂r/∂y) j + (∂r/∂z) k
        = (x/r) i + (y/r) j + (z/r) k
        = (1/r)(x i + y j + z k) = r/r
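The identity grad(r) = r/r is easy to spot-check numerically. Below is a small sketch (not part of the original notes) using central finite differences; the helper names `r_mag` and `grad` are illustrative choices:

```python
import math

def r_mag(x, y, z):
    # r = |r| = sqrt(x^2 + y^2 + z^2)
    return math.sqrt(x * x + y * y + z * z)

def grad(f, x, y, z, h=1e-6):
    # Central-difference approximation of (df/dx, df/dy, df/dz).
    return (
        (f(x + h, y, z) - f(x - h, y, z)) / (2 * h),
        (f(x, y + h, z) - f(x, y - h, z)) / (2 * h),
        (f(x, y, z + h) - f(x, y, z - h)) / (2 * h),
    )

# At p = (1, 2, 2) we have r = 3, so grad(r) should equal p/3.
p = (1.0, 2.0, 2.0)
g = grad(r_mag, *p)
expected = tuple(c / r_mag(*p) for c in p)
print(g, expected)
```

The same `grad` helper works for any differentiable scalar field, which makes it handy for checking the worked examples in this lecture.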

1.1.1 A Few Applications of Gradient


1. Finding a normal to a given surface at a given point.
The normal to a surface S := {f(x, y, z) = c} at a point P(x₀, y₀, z₀), where f(x₀, y₀, z₀) = c, is given by (∇f)(x₀, y₀, z₀).

Let S be a surface represented by f (x, y, z) = c = const, where f is differentiable.


As we know, such a surface is called a level surface of f, and for different values of c we get different level surfaces.
Now let C be a curve on S through a point P on S. As a space curve, C has a representation r(t) = [x(t), y(t), z(t)]. For C
to lie on the surface S, the components of r(t) must satisfy f(x, y, z) = c. That is,

f(x(t), y(t), z(t)) = c.

Now a tangent vector of C is r′(t) = [x′(t), y′(t), z′(t)], and the tangent vectors of all curves on S passing through P
will generally form a plane, called the tangent plane of S at P.
The normal to this plane (the straight line through P perpendicular to the tangent plane) is called the surface normal
to S at P. A vector in the direction of the surface normal is called a surface normal vector of S at P.
By differentiating f(x(t), y(t), z(t)) = c with application of the chain rule, we get

0 = df/dt
  = (∂f/∂x)(dx/dt) + (∂f/∂y)(dy/dt) + (∂f/∂z)(dz/dt)
  = ((∂f/∂x) i + (∂f/∂y) j + (∂f/∂z) k) · ((dx/dt) i + (dy/dt) j + (dz/dt) k)
  = (∇f) · (r′(t)).

Hence grad(f) is orthogonal to all tangent vectors r′ in the tangent plane, so that it is a normal vector of S at P.

Theorem 1.1. Let f be a differentiable scalar function in space, and let f(x, y, z) = c = const represent a surface S. If the
gradient of f at a point P of S is not the zero vector, then it is a normal vector of S at P.
Example: Find a unit normal to z = √(x² + y²) at (6, 8, 10). Note that this surface is f = 0 where f = z − √(x² + y²).
Solution: The normal to f = c is given by ∇f; here f = z − √(x² + y²) and the surface is f = 0. Writing ρ = √(x² + y²), we have

∇f = −(x/ρ) i − (y/ρ) j + k

so, at (6, 8, 10), ρ = 10 and the vector ∇f normal to the surface is

∇f = −(3/5) i − (4/5) j + k
This has length |∇f| = √(9/25 + 16/25 + 1) = √2, so the unit normal is

n = ∇f/|∇f| = −(3/(5√2)) i − (4/(5√2)) j + (1/√2) k

The actual shape of the surface is a cone with its point at the origin.
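This unit normal can be cross-checked numerically. A sketch under the same setup as the example (the helper `grad` is an illustrative finite-difference stand-in for ∇):

```python
import math

def f(x, y, z):
    # The cone written as a level surface: f = z - sqrt(x^2 + y^2) = 0
    return z - math.sqrt(x * x + y * y)

def grad(fn, x, y, z, h=1e-6):
    # Central-difference approximation of the gradient.
    return (
        (fn(x + h, y, z) - fn(x - h, y, z)) / (2 * h),
        (fn(x, y + h, z) - fn(x, y - h, z)) / (2 * h),
        (fn(x, y, z + h) - fn(x, y, z - h)) / (2 * h),
    )

g = grad(f, 6.0, 8.0, 10.0)
norm = math.sqrt(sum(c * c for c in g))
n = tuple(c / norm for c in g)          # unit normal to the cone at (6, 8, 10)
s = math.sqrt(2)
expected = (-3 / (5 * s), -4 / (5 * s), 1 / s)
print(n, expected)
```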

2. Angle between surfaces at their point of intersection.

It is known that the angle between two surfaces at their point of intersection is the same as the angle between their tangent
planes at that point, which in turn equals the angle between their normals.
Let P be a point of intersection of two surfaces f₁(x, y, z) = c₁ and f₂(x, y, z) = c₂. The angle θ between these two surfaces at
their point of intersection P is given by

cos θ = (∇f₁)_P · (∇f₂)_P / (|(∇f₁)_P| |(∇f₂)_P|).
3. Finding the directional derivative of a given function at a given point in a given direction. Probably the easiest way to
understand the gradient is to relate it to the directional derivative; it is easy to see that a sensible definition of the
derivative of φ in the direction given by a unit vector e = (e₁, e₂, e₃) is

D_e φ := lim_{h→0} (φ(x + he) − φ(x))/h

but, by expanding φ(x + he) = φ(x + he₁, y + he₂, z + he₃) using the Taylor expansion,

D_e φ = e · ∇φ = |e| |∇φ| cos θ,

where θ is the angle between the vectors e and ∇φ. Obviously the directional derivative is maximal for e in the same direction
as ∇φ, so the direction of the gradient gives the direction in which φ has its greatest variation, and the length of the gradient
is the directional derivative in that direction. Similarly, the gradient of φ is perpendicular to the level surfaces of φ, so
grad φ is perpendicular to the surface φ = constant.
But the value of φ(x, y) is constant along a level curve, so since v = r′(t) is a unit tangent vector to this curve, the rate of
change of φ in the direction of v is 0, i.e. D_v φ = 0. But we know that D_v φ = v · ∇φ = ‖v‖ ‖∇φ‖ cos θ, where θ is the angle
between v and ∇φ. So since ‖v‖ = 1, D_v φ = ‖∇φ‖ cos θ. So since ∇φ ≠ 0, D_v φ = 0 ⟹ cos θ = 0 ⟹ θ = 90°. In
other words, ∇φ ⊥ v, which means that ∇φ is normal to the level curve.

Figure 1: A level curve φ(x, y) = c in the xy-plane, with tangent vector v = r′(t) at a point of the curve.

Theorem 1.2. Let f(x, y) be a continuously differentiable real-valued function, with ∇f ≠ 0. Then:

(a) The gradient ∇f is normal to any level curve f(x, y) = c.

(b) The value of f(x, y) increases the fastest in the direction of ∇f.
(c) The value of f(x, y) decreases the fastest in the direction of −∇f.

 
Example 1.3. Find the directional derivative of f(x, y) = xy² + x³y at the point (1, 2) in the direction of v = (1/√2, 1/√2).

Solution: We see that ∇f = (y² + 3x²y, 2xy + x³), so

D_v f(1, 2) = v · ∇f(1, 2) = (1/√2, 1/√2) · (2² + 3(1)²(2), 2(1)(2) + 1³) = (10 + 5)/√2 = 15/√2
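The value 15/√2 can be confirmed numerically. A sketch (not part of the original notes) approximating D_v f = v · ∇f with central differences; `directional_derivative` is an illustrative helper name:

```python
import math

def f(x, y):
    return x * y**2 + x**3 * y

def directional_derivative(fn, x, y, v, h=1e-6):
    # D_v f = v . grad f, with grad f from central differences;
    # v is assumed to be a unit vector.
    fx = (fn(x + h, y) - fn(x - h, y)) / (2 * h)
    fy = (fn(x, y + h) - fn(x, y - h)) / (2 * h)
    return v[0] * fx + v[1] * fy

v = (1 / math.sqrt(2), 1 / math.sqrt(2))
d = directional_derivative(f, 1.0, 2.0, v)
print(d, 15 / math.sqrt(2))   # both approximately 10.6066
```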

Example: Find the directional derivative of z/(x² + y²) in the direction i.

Solution: The directional derivative along i is grad f · i, in other words the x-component of the grad. Furthermore,

∂/∂x [z/(x² + y²)] = −2xz/(x² + y²)²

Example: Let f = xyz. Work out grad f. What is the value of grad f at (2, 1, 2)? What is the directional derivative of f in the
i-direction at (2, 1, 2)?
Solution:

grad xyz = ∂(xyz)/∂x i + ∂(xyz)/∂y j + ∂(xyz)/∂z k
         = yz i + xz j + xy k

At (2, 1, 2), x = 2, y = 1 and z = 2, so

grad f = 2i + 4j + 2k

The directional derivative in the i direction is given by i · grad f = 2.
Example: Let f = x² + y² + z². Find grad f. What is the directional derivative of f in the direction of b = (1, 1, 1) at the
point (1, 1, 1)? Remember to use a unit vector when working out the directional derivative. What is the directional derivative
in the i direction at (1, 0, 0); what about the j direction?
Solution:

∇f = 2x i + 2y j + 2z k

The directional derivative of f in the direction of b = (1, 1, 1) is given by b̂ · ∇f, where

b̂ = b/|b|

is the unit vector in the b direction. Here |b| = √3, so

b̂ = (1/√3, 1/√3, 1/√3)

At (1, 1, 1), we have x = y = z = 1, so ∇f = 2i + 2j + 2k and hence

D_b̂ f = 2/√3 + 2/√3 + 2/√3 = 2√3

At (1, 0, 0), grad f = 2i, so the directional derivative in the i direction is two and in the j direction is zero.

Example 1.4. In which direction does the function f(x, y) = xy² + x³y increase the fastest from the point (1, 2)? In which
direction does it decrease the fastest?

Solution: Since ∇f = (y² + 3x²y, 2xy + x³), then ∇f(1, 2) = (10, 5) ≠ 0. A unit vector in that direction is v = ∇f/‖∇f‖ =
(2/√5, 1/√5). Thus, f increases the fastest in the direction of (2/√5, 1/√5) and decreases the fastest in the direction of
(−2/√5, −1/√5).

Though we proved Theorem 1.2 for functions of two variables, a similar argument can be used to show that it also applies to
functions of three or more variables. Likewise, the directional derivative in the three-dimensional case can also be defined by
the formula D_v f = v · ∇f.

Example 1.5. The temperature T of a solid is given by the function T(x, y, z) = e^(−x) + e^(−2y) + e^(4z), where x, y, z are
space coordinates relative to the center of the solid. In which direction from the point (1, 1, 1) will the temperature decrease
the fastest?
Solution: Since ∇T = (−e^(−x), −2e^(−2y), 4e^(4z)), the temperature will decrease the fastest in the direction of −∇T(1, 1, 1) =
(e^(−1), 2e^(−2), −4e^(4)).
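The steepest-descent direction −∇T can be checked the same way. A sketch (an illustrative aside, with `grad` again a finite-difference helper):

```python
import math

def T(x, y, z):
    # temperature field from Example 1.5
    return math.exp(-x) + math.exp(-2 * y) + math.exp(4 * z)

def grad(fn, x, y, z, h=1e-6):
    return (
        (fn(x + h, y, z) - fn(x - h, y, z)) / (2 * h),
        (fn(x, y + h, z) - fn(x, y - h, z)) / (2 * h),
        (fn(x, y, z + h) - fn(x, y, z - h)) / (2 * h),
    )

g = grad(T, 1.0, 1.0, 1.0)
steepest_descent = tuple(-c for c in g)   # direction of fastest decrease
expected = (math.exp(-1), 2 * math.exp(-2), -4 * math.exp(4))
print(steepest_descent, expected)
```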

4. Equation of a tangent plane to a surface at a point on the surface.

Suppose f has continuous partial derivatives. An equation of the tangent plane to the surface z = f(x, y) at the point
P(x₀, y₀, z₀) is

z − z₀ = f_x(x₀, y₀)(x − x₀) + f_y(x₀, y₀)(y − y₀).   (2)

In terms of the gradient:

An equation of the tangent plane to the surface z = f(x, y) at the point P(x₀, y₀, z₀) is

[(∇φ)(x₀, y₀, z₀)] · [(x − x₀) i + (y − y₀) j + (z − z₀) k] = 0

where φ = f(x, y) − z.

Geometrical Interpretation:
Suppose a surface S has equation z = f (x, y), where f has continuous first partial derivatives, and let P(x0 , y0 , z0 ) be a
point on S.
Let C1 and C2 be the curves obtained by intersecting the vertical planes y = y0 and x = x0 with the surface S. Then the
point P lies on both C1 and C2 .
Let T1 and T2 be the tangent lines to the curves C1 and C2 at the point P.
Then the tangent plane to the surface S at the point P is defined to be the plane that contains both tangent lines T1 and
T2 .
If C is any other curve that lies on the surface S and passes through P, then its tangent line at P also lies in the tangent
plane. Therefore you can think of the tangent plane to S at P as consisting of all possible tangent lines at P to curves
that lie on S and pass through P. The tangent plane at P is the plane that most closely approximates the surface S near
the point P.
Any plane that passes through the point P(x₀, y₀, z₀) has an equation of the form

A(x − x₀) + B(y − y₀) + C(z − z₀) = 0.

By dividing the equation by C and letting a = −A/C and b = −B/C, the above equation becomes

z − z₀ = a(x − x₀) + b(y − y₀).   (3)
If Equation 3 represents the tangent plane passing through the point P(x₀, y₀, z₀), then its intersection with the plane
y = y₀ is the tangent line T₁. Setting y = y₀ in Equation 3 gives

z − z₀ = a(x − x₀), y = y₀,

and we recognize these as the equations (in point-slope form) of a line with slope a. But we know that the slope of the
tangent T₁ is f_x(x₀, y₀). Therefore a = f_x(x₀, y₀). Similarly b = f_y(x₀, y₀).

Figure 2: Partial derivatives as slopes. (a) The tangent line L_x to the surface z = f(x, y) in the plane y = b has slope
∂f/∂x(a, b); (b) the tangent line L_y in the plane x = a has slope ∂f/∂y(a, b). Both lines pass through the point (a, b, f(a, b)).

Example 1.6. Find the equation of the tangent plane to the surface z = x² + y² at the point (1, 2, 5).
Solution: For the function f(x, y) = x² + y², we have ∂f/∂x = 2x and ∂f/∂y = 2y, so the equation of the tangent plane at the
point (1, 2, 5) is

2(1)(x − 1) + 2(2)(y − 2) − z + 5 = 0, or
2x + 4y − z − 5 = 0.
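A quick numeric sanity check of this tangent plane (a sketch, not from the notes): the plane should pass through the point of tangency, and the gap between plane and surface should be second-order small nearby.

```python
def plane(x, y):
    # z solved from 2x + 4y - z - 5 = 0
    return 2 * x + 4 * y - 5

def surface(x, y):
    return x**2 + y**2

print(plane(1, 2), surface(1, 2))        # both 5: the plane touches the surface there
gap = surface(1.01, 2.01) - plane(1.01, 2.01)
print(gap)                               # second-order small (about 0.0002)
```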

In a similar fashion, it can be shown that if a surface is defined implicitly by an equation of the form F(x, y, z) = 0, then
the tangent plane to the surface at a point (a, b, c) is given by the equation

(∂F/∂x)(a, b, c) (x − a) + (∂F/∂y)(a, b, c) (y − b) + (∂F/∂z)(a, b, c) (z − c) = 0.   (4)

Note that formula (2) is the special case of formula (4) where F(x, y, z) = f(x, y) − z.

Example 1.7. Find the equation of the tangent plane to the surface x² + y² + z² = 9 at the point (2, 2, −1).
Solution: For the function F(x, y, z) = x² + y² + z² − 9, we have ∂F/∂x = 2x, ∂F/∂y = 2y, and ∂F/∂z = 2z, so the equation of the
tangent plane at (2, 2, −1) is

2(2)(x − 2) + 2(2)(y − 2) + 2(−1)(z + 1) = 0, or

2x + 2y − z − 9 = 0.

4a. Linear approximation or the tangent plane approximation of f(x, y) at (a, b)

From Equation (2), the tangent plane to the graph of the function f(x, y) at the point (a, b, f(a, b)) is

z = f(a, b) + f_x(a, b)(x − a) + f_y(a, b)(y − b)

if f_x, f_y are continuous. The linear function whose graph is this tangent plane, namely

L(x, y) = f(a, b) + f_x(a, b)(x − a) + f_y(a, b)(y − b)   (5)

is called the linearization of f(x, y) at (a, b), and the approximation

f(x, y) ≈ f(a, b) + f_x(a, b)(x − a) + f_y(a, b)(y − b)

is called the linear approximation or tangent plane approximation of f at (a, b).

Example 1.8. Find the linearization of f(x, y) = xe^(xy) at (1, 0). Then use it to approximate f(1.1, −0.1).

Solution: The partial derivatives are f_x(x, y) = e^(xy) + xye^(xy) and f_y(x, y) = x²e^(xy), so

f_x(1, 0) = 1, f_y(1, 0) = 1.

The linearization is

L(x, y) = f(a, b) + f_x(a, b)(x − a) + f_y(a, b)(y − b) = f(1, 0) + f_x(1, 0)(x − 1) + f_y(1, 0)(y − 0) = x + y.

The corresponding linear approximation is

xe^(xy) ≈ x + y

so f(1.1, −0.1) ≈ 1.1 + (−0.1) = 1.
But the actual value is f(1.1, −0.1) = 1.1 e^((1.1)(−0.1)) ≈ 0.98542.
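The gap between the linearization and the true value is easy to reproduce. A sketch of the comparison (not part of the original notes):

```python
import math

def f(x, y):
    return x * math.exp(x * y)

def L(x, y):
    # linearization of f at (1, 0): L(x, y) = x + y
    return x + y

approx = L(1.1, -0.1)     # 1.0
actual = f(1.1, -0.1)     # 1.1 * e^(-0.11), about 0.98542
print(approx, actual)
```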

4b. Differential or total differential

Consider a function of two variables z = f(x, y), and suppose x changes from a to a + Δx and y changes from b to b + Δy. Then
the corresponding increment of z is

Δz = f(a + Δx, b + Δy) − f(a, b).

Definition 1.9. If z = f(x, y), then f is differentiable at (a, b) if Δz can be expressed in the form

Δz = f_x(a, b)Δx + f_y(a, b)Δy + ε₁Δx + ε₂Δy,

where ε₁ and ε₂ → 0 as (Δx, Δy) → (0, 0).

Theorem 1.10. If the partial derivatives f_x and f_y exist near (a, b) and are continuous at (a, b), then f is differentiable at
(a, b).

Let z = f(x, y) be a differentiable function. Then the differential dz, also called the total differential, is defined by

dz = f_x(a, b) dx + f_y(a, b) dy = (∂z/∂x) dx + (∂z/∂y) dy.   (6)

Example 1.11. (a) If z = f(x, y) = x² + 3xy − y², find the differential dz.

(b) If x changes from 2 to 2.05 and y changes from 3 to 2.96, compare the values of Δz and dz.
Solution:
(a) dz = (∂z/∂x) dx + (∂z/∂y) dy = (2x + 3y) dx + (3x − 2y) dy.
(b) Putting x = 2, dx = Δx = 0.05, y = 3 and dy = Δy = −0.04, we get dz = 0.65, whereas Δz = f(2.05, 2.96) − f(2, 3) = 0.6449.
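Both numbers from Example 1.11 can be reproduced directly. A short sketch (an illustrative aside):

```python
def f(x, y):
    return x**2 + 3 * x * y - y**2

# differentials at (2, 3) with dx = 0.05, dy = -0.04
x, y, dx, dy = 2, 3, 0.05, -0.04
dz = (2 * x + 3 * y) * dx + (3 * x - 2 * y) * dy   # = 13 * 0.05 + 0 * (-0.04) = 0.65
delta_z = f(x + dx, y + dy) - f(x, y)              # = 0.6449
print(dz, delta_z)
```

The closeness of dz to Δz is exactly the linear-approximation idea of 4a applied to increments.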

5. The stationary points or critical points of a scalar field are points where the gradient of the field is zero.

It is known that a function has local minima, local maxima or saddle points (points where the function has neither a
local maximum nor a local minimum and the gradient of the function is zero) only at stationary points. Consequently the
gradient is used to find local maxima or local minima of a function.

Theorem 1.12. Let f(x, y) be a real-valued function such that both ∂f/∂x(a, b) and ∂f/∂y(a, b) exist. Then a necessary
condition for f(x, y) to have a local maximum or minimum at (a, b) is that ∇f(a, b) = 0.

Example 1.13. The function f(x, y) = xy has a critical point at (0, 0): ∂f/∂x = y = 0 ⟹ y = 0, and ∂f/∂y = x = 0 ⟹ x = 0,
so (0, 0) is the only critical point. But clearly f does not have a local maximum or minimum at (0, 0), since any disk around
(0, 0) contains points (x, y) where the values of x and y have the same sign (so that f(x, y) = xy > 0 = f(0, 0)) and different
signs (so that f(x, y) = xy < 0 = f(0, 0)). In fact, along the path y = x in R², f(x, y) = x², which has a local minimum at
(0, 0), while along the path y = −x we have f(x, y) = −x², which has a local maximum at (0, 0). So (0, 0) is an example of a
saddle point, i.e. it is a local maximum in one direction and a local minimum in another direction.

1.2 Lecture-2
Now, the issue is how to define the derivatives of scalar and vector fields. In practice, there are three differential
operators used; these will be defined and, hopefully, by looking at examples it will become clearer why these
particular operators are the ones that are important physically and mathematically. One of them is the gradient, which
we have already discussed. The remaining two differential operators act on vector fields: the divergence sends a vector field
to a scalar field and, as we will see, the curl sends a vector field to another vector field.
Definition: The divergence of a vector field F = F₁ i + F₂ j + F₃ k is

div F := ∂F₁/∂x + ∂F₂/∂y + ∂F₃/∂z

or, in the symbolic notation,

div F = ∇ · F

Hence

div : vector fields → scalar fields

F ↦ div F = ∇ · F   (7)
Example: The divergence of the vector field F = (xy, sin z, z) is div F = y + 0 + 1 = 1 + y.
We will see that it is significant when a vector field has no divergence:
Definition: A vector field is called solenoidal if it has zero divergence.
In electromagnetism the magnetic field B is solenoidal by the Maxwell equations, and in fluid flow the continuity
equation for an incompressible liquid has a solenoidal velocity field. In fact, the continuity equation is a good way of
getting a handle on how the divergence works. Consider a compressible fluid with density field ρ(x, y, z; t) and velocity
field u(x, y, z; t): at a given time t and at a given point (x, y, z), ρ gives the density of the fluid and u gives its velocity.
The field ρu is the mass transport and the continuity equation is

∂ρ/∂t = −div(ρu)

so the amount of fluid at a point changes according to the divergence of the mass transport field; hence, roughly speaking,
we can think of the divergence as giving the net accumulation of the vector field at the point.
Definition: The curl of a vector field F = F₁ i + F₂ j + F₃ k is the vector field given by the map

curl : vector fields → vector fields

F ↦ curl F   (8)

with

curl F = (∂F₃/∂y − ∂F₂/∂z) i + (∂F₁/∂z − ∂F₃/∂x) j + (∂F₂/∂x − ∂F₁/∂y) k

and, in the symbolic notation, this is

curl F = ∇ × F

This is the easiest way to remember the formula, using the determinant formula for the cross product:

curl F = det | i     j     k    |
             | ∂/∂x  ∂/∂y  ∂/∂z |
             | F₁    F₂    F₃   |

Example: If F = xy i + y² j + z k, then applying the formula above gives

curl F = −x k
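The sign of the z-component here is easy to get wrong, so a numeric spot-check helps. A sketch (not part of the original notes) evaluating the curl formula with central differences at a sample point:

```python
def F(x, y, z):
    # F = xy i + y^2 j + z k
    return (x * y, y * y, z)

def curl(field, x, y, z, h=1e-6):
    # central-difference version of the curl formula
    def d(i, axis):
        e = [0.0, 0.0, 0.0]
        e[axis] = h
        return (field(x + e[0], y + e[1], z + e[2])[i]
                - field(x - e[0], y - e[1], z - e[2])[i]) / (2 * h)
    return (
        d(2, 1) - d(1, 2),   # dF3/dy - dF2/dz
        d(0, 2) - d(2, 0),   # dF1/dz - dF3/dx
        d(1, 0) - d(0, 1),   # dF2/dx - dF1/dy
    )

c = curl(F, 2.0, 3.0, 1.0)
print(c)    # approximately (0, 0, -2), i.e. -x k at x = 2
```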

Again, it is not easy at first to get a picture of what the curl does. One rough idea is that it measures the rotation at
a point of the vectors in a vector field. Certainly, this is what happens when you take the curl of the rotational field.
Consider the velocity field

u = w × r

where r = (x, y, z) is the position vector and w = (w₁, w₂, w₃) is some constant vector. Now, u is perpendicular to both r
and w and the length of u is constant on circles around w, hence the velocity field corresponds to rotation around w. We
will take its curl. First,

u = w × r = det | i   j   k  |
                | w₁  w₂  w₃ |
                | x   y   z  |
            = (w₂z − w₃y) i + (w₃x − w₁z) j + (w₁y − w₂x) k   (9)

Now, substituting this into the curl formula, we get

∇ × u = 2w
Definition: A vector field is called irrotational if it has zero curl.
The gradient of a scalar field is irrotational:

curl grad φ = 0

or, using the symbolic notation, ∇ × ∇φ = 0. This is proved by calculation. Using subscripts to denote components, so
v = (v₁, v₂, v₃) for any vector, we have

(∇ × ∇φ)₁ = ∂y(∇φ)₃ − ∂z(∇φ)₂
          = ∂y∂zφ − ∂z∂yφ = 0   (10)

and the other components follow in the same way. We have used the useful notation where

∂x = ∂/∂x

and so on.

1.2.1 Conservative fields

A smooth vector field F is conservative if and only if there exists a smooth scalar field φ such that

F = grad φ

φ is often called a potential for F.

Since curl grad φ = 0 for any scalar field φ, curl F = 0 is a necessary condition for F to be conservative. It isn't
sufficient, however; this is something we will return to. But, for now, we notice that it makes it easy to spot fields that
aren't conservative: for example, if F = (−y, x, 0), then

∇ × F = (0, 0, 2)

On the other hand, it is easy to see that F = r is conservative because

∇(r²/2) = (x, y, z)

and therefore φ = r²/2 is a potential for F. Of course, curl r = 0.
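The non-conservative example can be checked at any sample point, since the curl of a rotation field is the same everywhere. A sketch (an illustrative aside, reusing a finite-difference curl helper):

```python
def F(x, y, z):
    # F = (-y, x, 0): the standard rotation field in the xy-plane
    return (-y, x, 0.0)

def curl(field, x, y, z, h=1e-6):
    def d(i, axis):
        e = [0.0, 0.0, 0.0]
        e[axis] = h
        return (field(x + e[0], y + e[1], z + e[2])[i]
                - field(x - e[0], y - e[1], z - e[2])[i]) / (2 * h)
    return (d(2, 1) - d(1, 2), d(0, 2) - d(2, 0), d(1, 0) - d(0, 1))

c = curl(F, 0.5, -1.2, 3.0)
print(c)   # approximately (0, 0, 2): nonzero curl, so F cannot be conservative
```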

1.2.2 Vector identities

There are a number of useful identities involving grad, div and curl. These are usually proved by direct calculation:
expand out the various terms.
Let φ and ψ be scalar fields and F and G be vector fields. Then

1.
∇(φψ) = φ∇ψ + ψ∇φ
This is a direct consequence of the product rule.

2.
∇ · (φF) = ∇φ · F + φ∇ · F
and again this follows by just expanding it out:

∇ · (φF) = ∂x(φF₁) + ∂y(φF₂) + ∂z(φF₃)
         = F₁∂xφ + F₂∂yφ + F₃∂zφ + φ(∂xF₁ + ∂yF₂ + ∂zF₃)
         = ∇φ · F + φ∇ · F

3.
∇ × (φF) = ∇φ × F + φ∇ × F
This can easily be proved too; just check, say, the x-component by direct calculation.

4.
∇ · (F × G) = (∇ × F) · G − F · (∇ × G)
(OR)
div(F × G) = G · curl F − F · curl G
If F and G are irrotational vectors, then F × G is solenoidal.

5.
∇ × (F × G) = (∇ · G)F + (G · ∇)F − (∇ · F)G − (F · ∇)G
This is one of the harder ones to prove since it involves the unusual operator

(G · ∇) = G₁∂x + G₂∂y + G₃∂z.

6.
∇(F · G) = F × (∇ × G) + G × (∇ × F) + (F · ∇)G + (G · ∇)F
This is also given as an exercise.

7.
∇ · (∇ × F) = 0
or: the curl of a vector field is solenoidal. This is one of the important vector identities, which hints at some of the
beautiful constructions in differential geometry. It is easy enough to prove by direct calculation.

8.
∇ × ∇φ = 0

9.
∇ × (∇ × F) = ∇(∇ · F) − ∇²F
where

∇² = ∇ · ∇ = ∂²/∂x² + ∂²/∂y² + ∂²/∂z²

is the Laplacian, an operator which occurs frequently in physically significant equations. Obviously

∇²F = (∇²F₁, ∇²F₂, ∇²F₃)
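Identity 7, div(curl F) = 0, can be spot-checked numerically by nesting finite differences. A sketch (not part of the original notes); the test field F is an arbitrary illustrative choice:

```python
def F(x, y, z):
    # an arbitrary smooth test field, chosen only for illustration
    return (x * y * z, x + y * y, z * x)

def curl(field, x, y, z, h=1e-5):
    def d(i, axis):
        e = [0.0, 0.0, 0.0]
        e[axis] = h
        return (field(x + e[0], y + e[1], z + e[2])[i]
                - field(x - e[0], y - e[1], z - e[2])[i]) / (2 * h)
    return (d(2, 1) - d(1, 2), d(0, 2) - d(2, 0), d(1, 0) - d(0, 1))

def div(field, x, y, z, h=1e-5):
    def d(i, axis):
        e = [0.0, 0.0, 0.0]
        e[axis] = h
        return (field(x + e[0], y + e[1], z + e[2])[i]
                - field(x - e[0], y - e[1], z - e[2])[i]) / (2 * h)
    return d(0, 0) + d(1, 1) + d(2, 2)

# divergence of the curl of F at a sample point: should vanish
value = div(lambda x, y, z: curl(F, x, y, z), 1.3, -0.7, 2.1)
print(value)
```

Nested central differences lose some precision, so the result is only zero to within a small numerical tolerance.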

1.3 Lecture-3
1.3.1 Line Integrals of Scalar Valued Functions
In single-variable calculus you learned how to integrate a real-valued function f(x) over an interval [a, b] in R¹. This
integral (usually called a Riemann integral) can be thought of as an integral over a path in R¹, since an interval (or
collection of intervals) is really the only kind of path in R¹. You may also recall that if f(x) represented the force
applied along the x-axis to an object at position x in [a, b], then the work W done in moving that object from position
x = a to x = b was defined as the integral:

W = ∫_a^b f(x) dx
Now we will see how to define the integral of a function (either real-valued or vector-valued) of two variables over a
general path (i.e. a curve) in R2 . This definition will be motivated by the physical notion of work. We will begin with
real-valued functions of two variables.
In physics, the intuitive idea of work is that

Work = Force × Distance.

Suppose that we want to find the total amount W of work done in moving an object along a curve C in R² with a smooth
parametrization x = x(t), y = y(t), a ≤ t ≤ b, with a force f(x, y) which varies with the position (x, y) of the object and
is applied in the direction of motion along C (see Figure 3 below).
Figure 3: Curve C: x = x(t), y = y(t) for t in [a, b], partitioned at parameter values t = a, ..., t = t_i, t = t_{i+1}, ..., t = b;
over [t_i, t_{i+1}] the arc length Δs_i is approximately √(Δx_i² + Δy_i²).

We will assume for now that the function f(x, y) is continuous and real-valued, so we only consider the magnitude
of the force. Partition the interval [a, b] as follows:

a = t₀ < t₁ < t₂ < ... < t_{n−1} < t_n = b, for some integer n ≥ 2

As we can see from Figure 3, over a typical subinterval [t_i, t_{i+1}] the distance Δs_i traveled along the curve is
approximately √(Δx_i² + Δy_i²), by the Pythagorean Theorem. Thus, if the subinterval is small enough then the work done in
moving the object along that piece of the curve is approximately

Force × Distance ≈ f(x_i*, y_i*) √(Δx_i² + Δy_i²),   (11)

where (x_i*, y_i*) = (x(t_i*), y(t_i*)) for some t_i* in [t_i, t_{i+1}], and so

W ≈ Σ_{i=0}^{n−1} f(x_i*, y_i*) √(Δx_i² + Δy_i²)   (12)

is approximately the total amount of work done over the entire curve. But since

√(Δx_i² + Δy_i²) = √((Δx_i/Δt_i)² + (Δy_i/Δt_i)²) Δt_i,

where Δt_i = t_{i+1} − t_i, then

W ≈ Σ_{i=0}^{n−1} f(x_i*, y_i*) √((Δx_i/Δt_i)² + (Δy_i/Δt_i)²) Δt_i.   (13)

Taking the limit of that sum as the length of the largest subinterval goes to 0, the sum over all subintervals becomes the
integral from t = a to t = b, Δx_i/Δt_i and Δy_i/Δt_i become x′(t) and y′(t), respectively, and f(x_i*, y_i*) becomes
f(x(t), y(t)), so that

W = ∫_a^b f(x(t), y(t)) √(x′(t)² + y′(t)²) dt.   (14)
The integral on the right side of the above equation gives us our idea of how to define, for any real-valued function
f(x, y), the integral of f(x, y) along the curve C, called a line integral:

Definition 1.14. For a real-valued function f(x, y) and a curve C in R² parametrized by x = x(t), y = y(t), a ≤ t ≤ b,
the line integral of f(x, y) along C with respect to arc length s is

∫_C f(x, y) ds = ∫_a^b f(x(t), y(t)) √(x′(t)² + y′(t)²) dt.   (15)
The symbol ds is the differential of the arc length function

s = s(t) = ∫_a^t √(x′(u)² + y′(u)²) du,   (16)

which you may recognize as the length of the curve C over the interval [a, t], for all t in [a, b]. That is,

ds = s′(t) dt = √(x′(t)² + y′(t)²) dt,   (17)

by the Fundamental Theorem of Calculus.


For a general real-valued function f(x, y), what does the line integral ∫_C f(x, y) ds represent? The preceding
discussion of ds gives us a clue. You can think of differentials as infinitesimal lengths. So if you think of f(x, y) as the
height of a picket fence along C, then f(x, y) ds can be thought of as approximately the area of a section of that fence
over some infinitesimally small section of the curve, and thus the line integral ∫_C f(x, y) ds is the total area of that
picket fence (see Figure 4).

Figure 4: Area of shaded rectangle = height × width ≈ f(x, y) ds

Example 1.15. Use a line integral to show that the lateral surface area A of a right circular cylinder of radius r and
height h is 2πrh. (Figure: the base circle C: x² + y² = r² in the xy-plane, with constant height h = f(x, y) in the
positive z direction.)
Solution: We will use the right circular cylinder with base circle C given by x² + y² = r² and with height h in the
positive z direction. Parametrize C as follows:

x = x(t) = r cos t,  y = y(t) = r sin t,  0 ≤ t ≤ 2π


Let f(x, y) = h for all (x, y). Then

A = ∫_C f(x, y) ds = ∫_a^b f(x(t), y(t)) √(x′(t)² + y′(t)²) dt
  = ∫_0^{2π} h √((−r sin t)² + (r cos t)²) dt
  = h ∫_0^{2π} r √(sin² t + cos² t) dt
  = rh ∫_0^{2π} 1 dt = 2πrh
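Definition 1.14 translates almost verbatim into a numerical scheme. Below is a sketch (not part of the original notes) that approximates ∫_C f ds with a midpoint rule and finite-difference tangents, and recovers the cylinder area 2πrh; `line_integral_scalar` is an illustrative helper name:

```python
import math

def line_integral_scalar(f, x, y, a, b, n=10000):
    # midpoint-rule approximation of the scalar line integral of f
    # over C: (x(t), y(t)), a <= t <= b, using ds = sqrt(x'^2 + y'^2) dt
    total, dt = 0.0, (b - a) / n
    for i in range(n):
        t = a + (i + 0.5) * dt
        h = 1e-6
        xp = (x(t + h) - x(t - h)) / (2 * h)   # x'(t)
        yp = (y(t + h) - y(t - h)) / (2 * h)   # y'(t)
        total += f(x(t), y(t)) * math.hypot(xp, yp) * dt
    return total

r, height = 2.0, 5.0
A = line_integral_scalar(lambda x, y: height,
                         lambda t: r * math.cos(t),
                         lambda t: r * math.sin(t),
                         0.0, 2 * math.pi)
print(A, 2 * math.pi * r * height)    # both approximately 62.83
```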

Note in Example 1.15 that if we had traversed the circle C twice, i.e. let t vary from 0 to 4π, then we would have
gotten an area of 4πrh, i.e. twice the desired area, even though the curve itself is still the same (namely, a circle of
radius r). Also, notice that we traversed the circle in the counterclockwise direction. If we had gone in the clockwise
direction, using the parametrization

x = x(t) = r cos(2π − t),  y = y(t) = r sin(2π − t),  0 ≤ t ≤ 2π,   (18)

then it is easy to verify that the value of the line integral is unchanged.
In general, it can be shown that reversing the direction in which a curve C is traversed leaves ∫_C f(x, y) ds unchanged,
for any f(x, y). If a curve C has a parametrization x = x(t), y = y(t), a ≤ t ≤ b, then denote by −C the same curve as C
but traversed in the opposite direction. Then −C is parametrized by

x = x(a + b − t),  y = y(a + b − t),  a ≤ t ≤ b,   (19)

and we have

∫_{−C} f(x, y) ds = ∫_C f(x, y) ds.
Notice that our definition of the line integral was with respect to the arc length parameter s. We can also define

∫_C f(x, y) dx = ∫_a^b f(x(t), y(t)) x′(t) dt   (20)

as the line integral of f(x, y) along C with respect to x, and

∫_C f(x, y) dy = ∫_a^b f(x(t), y(t)) y′(t) dt   (21)

as the line integral of f(x, y) along C with respect to y.

1.3.2 Line Integrals of Vector Valued Functions


In the derivation of the formula for a line integral, we used the idea of work as force multiplied by distance. However,
we know that force is actually a vector. So it would be helpful to develop a vector form for a line integral. For this,
suppose that we have a function F(x, y) defined on R² by

F(x, y) = P(x, y) i + Q(x, y) j

for some continuous real-valued functions P(x, y) and Q(x, y) on R². Such a function F is called a vector field on R².
It is defined at points in R², and its values are vectors in R². For a curve C with a smooth parametrization x = x(t),
y = y(t), a ≤ t ≤ b, let

r(t) = x(t) i + y(t) j
be the position vector for a point (x(t), y(t)) on C. Then r′(t) = x′(t) i + y′(t) j and so

∫_C P(x, y) dx + ∫_C Q(x, y) dy = ∫_a^b P(x(t), y(t)) x′(t) dt + ∫_a^b Q(x(t), y(t)) y′(t) dt
                                = ∫_a^b (P(x(t), y(t)) x′(t) + Q(x(t), y(t)) y′(t)) dt
                                = ∫_a^b F(x(t), y(t)) · r′(t) dt

by definition of F(x, y). Notice that the function F(x(t), y(t)) · r′(t) is a real-valued function on [a, b], so the last
integral on the right looks somewhat similar to our earlier definition of a line integral. This leads us to the following
definition:
Definition 1.16. For a vector field F(x, y) = P(x, y) i + Q(x, y) j and a curve C with a smooth parametrization x = x(t),
y = y(t), a ≤ t ≤ b, the line integral of F along C is

∫_C F · dr = ∫_C P(x, y) dx + ∫_C Q(x, y) dy   (22)
           = ∫_a^b F(x(t), y(t)) · r′(t) dt,   (23)

where r(t) = x(t) i + y(t) j is the position vector for points on C.

We use the notation dr = r′(t) dt = dx i + dy j to denote the differential of the vector-valued function r. The line integral in Definition 1.16 is often called a line integral of a vector field to distinguish it from the line integral in Definition 1.14, which is called a line integral of a scalar field. For convenience we will often write
\[
\int_C P(x,y)\,dx + \int_C Q(x,y)\,dy = \int_C P(x,y)\,dx + Q(x,y)\,dy\,,
\]
where it is understood that the line integral along C is being applied to both P and Q. The quantity P(x, y) dx + Q(x, y) dy is known as a differential form. For a real-valued function f(x, y), the differential of f is df = (∂f/∂x) dx + (∂f/∂y) dy. A differential form P(x, y) dx + Q(x, y) dy is called exact if it equals df for some function f(x, y).
Recall that if the points on a curve C have position vector r(t) = x(t) i + y(t) j, then r′(t) is a tangent vector to C at the point (x(t), y(t)) in the direction of increasing t (which we call the direction of C). Since C is a smooth curve, r′(t) ≠ 0 on [a, b] and hence
\[
\mathbf T(t) = \frac{\mathbf r'(t)}{\lVert\mathbf r'(t)\rVert}
\]
is the unit tangent vector to C at (x(t), y(t)).
Theorem 1.17. For a vector field F(x, y) = P(x, y) i + Q(x, y) j and a curve C with a smooth parametrization x = x(t), y = y(t), a ≤ t ≤ b and position vector r(t) = x(t) i + y(t) j,
\[
\int_C \mathbf F\cdot d\mathbf r = \int_C \mathbf F\cdot\mathbf T\,ds\,, \qquad(24)
\]
where T(t) = r′(t)/‖r′(t)‖ is the unit tangent vector to C at (x(t), y(t)).
If the vector field F(x, y) represents the force moving an object along a curve C, then the work W done by this force is
\[
W = \int_C \mathbf F\cdot\mathbf T\,ds = \int_C \mathbf F\cdot d\mathbf r\,. \qquad(25)
\]

Example 1.18. Evaluate ∫_C (x² + y²) dx + 2xy dy, where:
(a) C : x = t, y = 2t, 0 ≤ t ≤ 1
(b) C : x = t, y = 2t², 0 ≤ t ≤ 1
(Both curves run from (0, 0) to (1, 2).)
Solution:
(a) Since x′(t) = 1 and y′(t) = 2, then
\[
\begin{aligned}
\int_C (x^2+y^2)\,dx + 2xy\,dy &= \int_0^1 \bigl((x(t)^2+y(t)^2)\,x'(t) + 2x(t)y(t)\,y'(t)\bigr)\,dt \\
&= \int_0^1 \bigl((t^2+4t^2)(1) + 2t(2t)(2)\bigr)\,dt \\
&= \int_0^1 13t^2\,dt = \left.\frac{13t^3}{3}\right|_0^1 = \frac{13}{3}
\end{aligned}
\]
(b) Since x′(t) = 1 and y′(t) = 4t, then
\[
\begin{aligned}
\int_C (x^2+y^2)\,dx + 2xy\,dy &= \int_0^1 \bigl((x(t)^2+y(t)^2)\,x'(t) + 2x(t)y(t)\,y'(t)\bigr)\,dt \\
&= \int_0^1 \bigl((t^2+4t^4)(1) + 2t(2t^2)(4t)\bigr)\,dt \\
&= \int_0^1 (t^2+20t^4)\,dt = \left.\left(\frac{t^3}{3}+4t^5\right)\right|_0^1 = \frac{1}{3}+4 = \frac{13}{3}
\end{aligned}
\]
So in both cases, if the vector field F(x, y) = (x² + y²) i + 2xy j represents the force moving an object from (0, 0) to (1, 2) along the given curve C, then the work done is 13/3. This may lead you to think that work (and more generally, the line integral of a vector field) is independent of the path taken. However, as we will see, this is not always the case.
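As a numerical sanity check on Example 1.18, both parametrized integrals can be approximated by applying the midpoint rule to the pulled-back integrand. This is a minimal sketch, not part of the original notes; the helper name `line_integral` and its quadrature rule are our own choices:

```python
def line_integral(P, Q, x, y, dx, dy, a, b, n=20000):
    """Midpoint-rule approximation of the line integral of P dx + Q dy
    along C: (x(t), y(t)), a <= t <= b, with derivatives dx(t), dy(t)."""
    h = (b - a) / n
    total = 0.0
    for i in range(n):
        t = a + (i + 0.5) * h
        total += (P(x(t), y(t)) * dx(t) + Q(x(t), y(t)) * dy(t)) * h
    return total

P = lambda x, y: x**2 + y**2
Q = lambda x, y: 2 * x * y

# (a) C: x = t, y = 2t   so x'(t) = 1, y'(t) = 2
Ia = line_integral(P, Q, lambda t: t, lambda t: 2 * t,
                   lambda t: 1.0, lambda t: 2.0, 0.0, 1.0)
# (b) C: x = t, y = 2t^2 so x'(t) = 1, y'(t) = 4t
Ib = line_integral(P, Q, lambda t: t, lambda t: 2 * t**2,
                   lambda t: 1.0, lambda t: 4 * t, 0.0, 1.0)
print(Ia, Ib)  # both close to 13/3
```

Both values agree with the hand computation above to quadrature accuracy.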

Although we defined line integrals over a single smooth curve, if C is a piecewise smooth curve, that is,
C = C1 ∪ C2 ∪ … ∪ Cn
is the union of smooth curves C1, …, Cn, then we can define
\[
\int_C \mathbf F\cdot d\mathbf r = \int_{C_1} \mathbf F\cdot d\mathbf r_1 + \int_{C_2} \mathbf F\cdot d\mathbf r_2 + \dots + \int_{C_n} \mathbf F\cdot d\mathbf r_n\,,
\]
where each r_i is the position vector of the curve C_i.



Example 1.19. Evaluate ∫_C (x² + y²) dx + 2xy dy, where C is the polygonal path from (0, 0) to (0, 2) to (1, 2).
Solution: Write C = C1 ∪ C2, where C1 is the curve given by x = 0, y = t, 0 ≤ t ≤ 2 and C2 is the curve given by x = t, y = 2, 0 ≤ t ≤ 1. Then
\[
\begin{aligned}
\int_C (x^2+y^2)\,dx + 2xy\,dy &= \int_{C_1} (x^2+y^2)\,dx + 2xy\,dy + \int_{C_2} (x^2+y^2)\,dx + 2xy\,dy \\
&= \int_0^2 \bigl((0^2+t^2)(0) + 2(0)t(1)\bigr)\,dt + \int_0^1 \bigl((t^2+4)(1) + 2t(2)(0)\bigr)\,dt \\
&= \int_0^2 0\,dt + \int_0^1 (t^2+4)\,dt \\
&= \left.\left(\frac{t^3}{3}+4t\right)\right|_0^1 = \frac{1}{3}+4 = \frac{13}{3}
\end{aligned}
\]
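The piecewise computation above can also be checked numerically, one smooth piece at a time; this sketch (the helper `segment` is our own, not from the notes) sums the two contributions:

```python
def segment(xf, yf, dxf, dyf, a, b, n=10000):
    """Midpoint-rule value of the integral of (x^2 + y^2) dx + 2xy dy
    along one smooth piece (xf(t), yf(t)), a <= t <= b."""
    h = (b - a) / n
    s = 0.0
    for i in range(n):
        t = a + (i + 0.5) * h
        x, y = xf(t), yf(t)
        s += ((x**2 + y**2) * dxf(t) + 2 * x * y * dyf(t)) * h
    return s

# C1: x = 0, y = t, 0 <= t <= 2  (so dx = 0, dy = 1)
I1 = segment(lambda t: 0.0, lambda t: t, lambda t: 0.0, lambda t: 1.0, 0.0, 2.0)
# C2: x = t, y = 2, 0 <= t <= 1  (so dx = 1, dy = 0)
I2 = segment(lambda t: t, lambda t: 2.0, lambda t: 1.0, lambda t: 0.0, 0.0, 1.0)
print(I1, I1 + I2)  # I1 = 0, total ≈ 13/3
```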

1.3.3 Properties of Line Integrals


1. Orientation Dependence
We know that for line integrals of real-valued functions (scalar fields), reversing the direction in which the integral is taken along a curve does not change the value of the line integral:
\[
\int_{-C} f(x,y)\,ds = \int_C f(x,y)\,ds\,, \qquad(26)
\]
where −C denotes the curve C traversed in the opposite direction.

For line integrals of vector fields, however, the value does change. To see this, let F(x, y) = P(x, y) i + Q(x, y) j be a vector field, with P and Q continuously differentiable functions. Let C be a smooth curve parametrized by x = x(t), y = y(t), a ≤ t ≤ b, with position vector r(t) = x(t) i + y(t) j (we will usually abbreviate this by saying that C : r(t) = x(t) i + y(t) j is a smooth curve). We know that the curve C traversed in the opposite direction, −C, is parametrized by x = x(a + b − t), y = y(a + b − t), a ≤ t ≤ b. Then


\[
\begin{aligned}
\int_{-C} P(x,y)\,dx &= \int_a^b P(x(a+b-t),\,y(a+b-t))\,\frac{d}{dt}\bigl(x(a+b-t)\bigr)\,dt \\
&= \int_a^b P(x(a+b-t),\,y(a+b-t))\,\bigl(-x'(a+b-t)\bigr)\,dt &&\text{(by the Chain Rule)}\\
&= \int_b^a P(x(u),y(u))\,\bigl(-x'(u)\bigr)\,(-du) &&\text{(by letting } u = a+b-t\text{)}\\
&= \int_b^a P(x(u),y(u))\,x'(u)\,du \\
&= -\int_a^b P(x(u),y(u))\,x'(u)\,du\,, \quad\text{since } \int_b^a = -\int_a^b\,,
\end{aligned}
\]
so
\[
\int_{-C} P(x,y)\,dx = -\int_C P(x,y)\,dx\,,
\]
since we are just using a different letter (u) for the parameter along C. A similar argument shows that
\[
\int_{-C} Q(x,y)\,dy = -\int_C Q(x,y)\,dy\,,
\]
and hence
\[
\begin{aligned}
\int_{-C} \mathbf F\cdot d\mathbf r &= \int_{-C} P(x,y)\,dx + \int_{-C} Q(x,y)\,dy \\
&= -\int_C P(x,y)\,dx - \int_C Q(x,y)\,dy \\
&= -\left(\int_C P(x,y)\,dx + \int_C Q(x,y)\,dy\right)\\
\int_{-C} \mathbf F\cdot d\mathbf r &= -\int_C \mathbf F\cdot d\mathbf r\,. \qquad(27)
\end{aligned}
\]

The above formula can be interpreted in terms of the work done by a force F(x, y) (treated as a vector) moving an
object along a curve C: the total work performed moving the object along C from its initial point to its terminal
point, and then back to the initial point moving backwards along the same path, is zero. This is because when
force is considered as a vector, direction is accounted for.
The preceding discussion shows the importance of always taking the direction of the curve into account when
using line integrals of vector fields. For this reason, the curves in line integrals are sometimes referred to as
directed curves or oriented curves.
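The sign flip in formula (27) is easy to observe numerically: parametrizing curve (b) from Example 1.18 backwards via t → a + b − t negates the integral. A sketch (helper names are ours):

```python
def integral(xf, yf, dxf, dyf, a, b, n=20000):
    """Midpoint-rule value of the integral of (x^2 + y^2) dx + 2xy dy."""
    h = (b - a) / n
    s = 0.0
    for i in range(n):
        t = a + (i + 0.5) * h
        x, y = xf(t), yf(t)
        s += ((x**2 + y**2) * dxf(t) + 2 * x * y * dyf(t)) * h
    return s

a, b = 0.0, 1.0
# Forward direction: x = t, y = 2t^2
fwd = integral(lambda t: t, lambda t: 2 * t**2,
               lambda t: 1.0, lambda t: 4 * t, a, b)
# Reversed direction: substitute t -> a + b - t; derivatives flip sign
rev = integral(lambda t: a + b - t, lambda t: 2 * (a + b - t)**2,
               lambda t: -1.0, lambda t: -4 * (a + b - t), a, b)
print(fwd, rev)  # rev ≈ -fwd
```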
2. Parametrization Independence
Recall that our definition of a line integral required that we have a parametrization x = x(t), y = y(t), a ≤ t ≤ b for the curve C. But as we know, any curve has infinitely many parametrizations. So could we get a different value for a line integral using some other parametrization of C, say, x = x̃(u), y = ỹ(u), c ≤ u ≤ d? If so, this would mean that our definition is not well-defined. Luckily, it turns out that the value of a line integral of a vector field is unchanged as long as the direction of the curve C is preserved by whatever parametrization is chosen:
Theorem 1.20. Let F(x, y) = P(x, y) i + Q(x, y) j be a vector field, and let C be a smooth curve parametrized by x = x(t), y = y(t), a ≤ t ≤ b. Suppose that t = α(u) for c ≤ u ≤ d, such that a = α(c), b = α(d), and α′(u) > 0 on the open interval (c, d) (i.e. α(u) is strictly increasing on [c, d]). Then ∫_C F · dr has the same value for the parametrizations x = x(t), y = y(t), a ≤ t ≤ b and x = x̃(u) = x(α(u)), y = ỹ(u) = y(α(u)), c ≤ u ≤ d.

Proof. Since α(u) is strictly increasing and maps [c, d] onto [a, b], the function t = α(u) has an inverse function u = α⁻¹(t) defined on [a, b] such that c = α⁻¹(a), d = α⁻¹(b), and du/dt = 1/α′(u). Also, dt = α′(u) du, and by the Chain Rule
\[
\tilde x\,'(u) = \frac{d\tilde x}{du} = \frac{d}{du}\bigl(x(\alpha(u))\bigr) = \frac{dx}{dt}\,\frac{dt}{du} = x'(t)\,\alpha'(u)
\;\Longrightarrow\; x'(t) = \frac{\tilde x\,'(u)}{\alpha'(u)}\,,
\]
so making the substitution t = α(u) gives
\[
\int_a^b P(x(t),y(t))\,x'(t)\,dt = \int_{\alpha^{-1}(a)}^{\alpha^{-1}(b)} P(x(\alpha(u)),\,y(\alpha(u)))\,\frac{\tilde x\,'(u)}{\alpha'(u)}\,\bigl(\alpha'(u)\,du\bigr)
= \int_c^d P(\tilde x(u),\tilde y(u))\,\tilde x\,'(u)\,du\,,
\]
which shows that ∫_C P(x, y) dx has the same value for both parametrizations. A similar argument shows that ∫_C Q(x, y) dy has the same value for both parametrizations, and hence ∫_C F · dr has the same value.

Notice that the condition α′(u) > 0 in Theorem 1.20 means that the two parametrizations move along C in the same direction. That was not the case with the reverse parametrization for C: for u = a + b − t we have t = α(u) = a + b − u, so α′(u) = −1 < 0.
Example 1.21. Evaluate the line integral ∫_C (x² + y²) dx + 2xy dy from Example 1.18, along the curve C : x = t, y = 2t², 0 ≤ t ≤ 1, where t = sin u for 0 ≤ u ≤ π/2.
Solution: First, we notice that 0 = sin 0, 1 = sin(π/2), and dt/du = cos u > 0 on (0, π/2). So by Theorem 1.20 we know that if C is parametrized by
\[
x = \sin u\,,\quad y = 2\sin^2 u\,,\quad 0 \le u \le \pi/2\,,
\]
then ∫_C (x² + y²) dx + 2xy dy should have the same value as we found in Example 1.18, namely 13/3. And we can indeed verify this:
\[
\begin{aligned}
\int_C (x^2+y^2)\,dx + 2xy\,dy &= \int_0^{\pi/2} \bigl((\sin^2 u + (2\sin^2 u)^2)\cos u + 2(\sin u)(2\sin^2 u)\,4\sin u\cos u\bigr)\,du \\
&= \int_0^{\pi/2} \bigl(\sin^2 u + 20\sin^4 u\bigr)\cos u\,du \\
&= \left.\left(\frac{\sin^3 u}{3} + 4\sin^5 u\right)\right|_0^{\pi/2} = \frac{1}{3}+4 = \frac{13}{3}
\end{aligned}
\]
In other words, the line integral is unchanged whether t or u is the parameter for C.
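Theorem 1.20 can likewise be observed numerically by evaluating the same line integral under both parameters; this is a sketch (the helper `integral` is our own):

```python
import math

def integral(xf, yf, dxf, dyf, a, b, n=20000):
    """Midpoint-rule value of the integral of (x^2 + y^2) dx + 2xy dy."""
    h = (b - a) / n
    s = 0.0
    for i in range(n):
        t = a + (i + 0.5) * h
        x, y = xf(t), yf(t)
        s += ((x**2 + y**2) * dxf(t) + 2 * x * y * dyf(t)) * h
    return s

# Parameter t: x = t, y = 2t^2, 0 <= t <= 1
I_t = integral(lambda t: t, lambda t: 2 * t**2,
               lambda t: 1.0, lambda t: 4 * t, 0.0, 1.0)
# Parameter u (t = sin u): x = sin u, y = 2 sin^2 u, 0 <= u <= pi/2
I_u = integral(lambda u: math.sin(u), lambda u: 2 * math.sin(u)**2,
               lambda u: math.cos(u),
               lambda u: 4 * math.sin(u) * math.cos(u), 0.0, math.pi / 2)
print(I_t, I_u)  # both ≈ 13/3
```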

By a closed curve, we mean a curve C whose initial point and terminal point are the same, i.e. for C : x = x(t), y = y(t), a ≤ t ≤ b, we have (x(a), y(a)) = (x(b), y(b)).

3. Independence of Path and Gravitational/Conservative fields


A simple closed curve is a closed curve which does not intersect itself. Note that any closed curve can be regarded as a union of simple closed curves (think of the loops in a figure eight). We use the special notation
\[
\oint_C f(x,y)\,ds \qquad\text{and}\qquad \oint_C \mathbf F\cdot d\mathbf r
\]
to denote line integrals of scalar and vector fields, respectively, along closed curves.

The following theorem gives a necessary and sufficient condition for path independence:
Theorem 1.22. In a region R, the line integral ∫_C F · dr is independent of the path between any two points in R if and only if ∮_C F · dr = 0 for every closed curve C which is contained in R.
C
H
Proof. Suppose that ∮_C F · dr = 0 for every closed curve C which is contained in R. Let P1 and P2 be two distinct points in R. Let C1 be a curve in R going from P1 to P2, and let C2 be another curve in R going from P1 to P2. Then C = C1 ∪ (−C2) is a closed curve in R (from P1 back to P1), and so ∮_C F · dr = 0. Thus,
\[
\begin{aligned}
0 &= \oint_C \mathbf F\cdot d\mathbf r \\
&= \int_{C_1} \mathbf F\cdot d\mathbf r + \int_{-C_2} \mathbf F\cdot d\mathbf r \\
&= \int_{C_1} \mathbf F\cdot d\mathbf r - \int_{C_2} \mathbf F\cdot d\mathbf r\,,
\end{aligned}
\]
and so ∫_{C₁} F · dr = ∫_{C₂} F · dr. This proves path independence.
R
Conversely, suppose that the line integral ∫_C F · dr is independent of the path between any two points in R. Let C be a closed curve contained in R. Let P1 and P2 be two distinct points on C. Let C1 be a part of the curve C that goes from P1 to P2, and let C2 be the remaining part of C, traversed from P1 to P2. Then by path independence we have
\[
\begin{aligned}
\int_{C_1} \mathbf F\cdot d\mathbf r &= \int_{C_2} \mathbf F\cdot d\mathbf r \\
\int_{C_1} \mathbf F\cdot d\mathbf r - \int_{C_2} \mathbf F\cdot d\mathbf r &= 0 \\
\int_{C_1} \mathbf F\cdot d\mathbf r + \int_{-C_2} \mathbf F\cdot d\mathbf r &= 0\,, \text{ so}\\
\oint_C \mathbf F\cdot d\mathbf r &= 0
\end{aligned}
\]
since C = C1 ∪ (−C2).
Clearly, the above theorem does not give a practical way to determine path independence, since it is impossible to
check the line integrals around all possible closed curves in a region. What it mostly does is give an idea of the
way in which line integrals behave, and how seemingly unrelated line integrals can be related (in this case, a
specific line integral between two points and all line integrals around closed curves).

Example 1.23. Let f(x, y, z) = z and let C be the curve in R³ parametrized by
\[
x = t\sin t\,,\quad y = t\cos t\,,\quad z = t\,,\quad 0 \le t \le 8\pi\,.
\]
Evaluate ∫_C f(x, y, z) ds. (Note: C is called a conical helix.)
Solution: Since x′(t) = sin t + t cos t, y′(t) = cos t − t sin t, and z′(t) = 1, we have
\[
\begin{aligned}
x'(t)^2 + y'(t)^2 + z'(t)^2 &= (\sin^2 t + 2t\sin t\cos t + t^2\cos^2 t) + (\cos^2 t - 2t\sin t\cos t + t^2\sin^2 t) + 1 \\
&= t^2(\sin^2 t + \cos^2 t) + \sin^2 t + \cos^2 t + 1 \\
&= t^2 + 2\,,
\end{aligned}
\]
so since f(x(t), y(t), z(t)) = z(t) = t along the curve C, then
\[
\begin{aligned}
\int_C f(x,y,z)\,ds &= \int_0^{8\pi} f(x(t),y(t),z(t))\,\sqrt{x'(t)^2 + y'(t)^2 + z'(t)^2}\,dt \\
&= \int_0^{8\pi} t\,\sqrt{t^2+2}\,dt \\
&= \left.\tfrac{1}{3}\bigl(t^2+2\bigr)^{3/2}\right|_0^{8\pi} = \tfrac{1}{3}\Bigl((64\pi^2+2)^{3/2} - 2\sqrt{2}\Bigr)\,.
\end{aligned}
\]
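The closed form just obtained can be checked against a direct quadrature of ∫ t√(t² + 2) dt; a sketch (the helper name is ours, and we reuse the simplification ‖r′(t)‖ = √(t² + 2) derived above):

```python
import math

def helix_ds_integral(n=200000):
    """Midpoint-rule value of the scalar line integral of z ds along the
    conical helix x = t sin t, y = t cos t, z = t, 0 <= t <= 8*pi,
    using the simplification ||r'(t)|| = sqrt(t^2 + 2)."""
    a, b = 0.0, 8 * math.pi
    h = (b - a) / n
    s = 0.0
    for i in range(n):
        t = a + (i + 0.5) * h
        s += t * math.sqrt(t * t + 2) * h
    return s

approx = helix_ds_integral()
exact = ((64 * math.pi**2 + 2)**1.5 - 2 * math.sqrt(2)) / 3
print(approx, exact)  # the two agree closely
```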

1.3.4 The fundamental theorem for line integrals

For a more practical method for determining path independence, we first need a version of the Chain Rule for multivariable functions; we restate the theorem for two variables below.
Theorem 1.24. (Chain Rule) If z = f(x, y) is a continuously differentiable function of x and y, and both x = x(t) and y = y(t) are differentiable functions of t, then z is a differentiable function of t, and
\[
\frac{dz}{dt} = \frac{\partial z}{\partial x}\frac{dx}{dt} + \frac{\partial z}{\partial y}\frac{dy}{dt} \qquad(28)
\]
at all points where the derivatives on the right are defined.

We will now use this Chain Rule to prove the following sufficient condition for path independence of line integrals:
Theorem 1.25. Let F(x, y) = P(x, y) i + Q(x, y) j be a vector field in some region R, with P and Q continuously differentiable functions on R. Let C be a smooth curve in R parametrized by x = x(t), y = y(t), a ≤ t ≤ b. Suppose that there is a real-valued function f(x, y) such that ∇f = F on R. Then
\[
\int_C \mathbf F\cdot d\mathbf r = \int_C \nabla f\cdot d\mathbf r = f(B) - f(A)\,, \qquad(29)
\]
where A = (x(a), y(a)) and B = (x(b), y(b)) are the endpoints of C. Thus, the line integral is independent of the path between its endpoints, since it depends only on the values of f at those endpoints.
Proof. By definition of ∫_C F · dr, we have
\[
\begin{aligned}
\int_C \mathbf F\cdot d\mathbf r &= \int_a^b \bigl(P(x(t),y(t))\,x'(t) + Q(x(t),y(t))\,y'(t)\bigr)\,dt \\
&= \int_a^b \left(\frac{\partial f}{\partial x}\frac{dx}{dt} + \frac{\partial f}{\partial y}\frac{dy}{dt}\right) dt &&\text{(since } \nabla f = \mathbf F \text{ means } \tfrac{\partial f}{\partial x} = P \text{ and } \tfrac{\partial f}{\partial y} = Q\text{)}\\
&= \int_a^b \frac{d}{dt}\,f(x(t), y(t))\,dt &&\text{(by the Chain Rule in Theorem 1.24)}\\
&= f(x(t), y(t))\Big|_a^b = f(B) - f(A)
\end{aligned}
\]
by the Fundamental Theorem of Calculus.


Theorem 1.25 can be thought of as the line integral version of the Fundamental Theorem of Calculus. A real-valued function f(x, y) such that ∇f(x, y) = F(x, y) is called a potential for the vector field F. A conservative vector field is one which has a potential.

Example 1.26. Recall from Examples 1.18 and 1.19 that the line integral ∫_C (x² + y²) dx + 2xy dy was found to have the value 13/3 for three different curves C going from the point (0, 0) to the point (1, 2). Use Theorem 1.25 to show that this line integral is indeed path independent.
Solution: We need to find a real-valued function f(x, y) such that
\[
\frac{\partial f}{\partial x} = x^2 + y^2 \qquad\text{and}\qquad \frac{\partial f}{\partial y} = 2xy\,.
\]
Suppose that ∂f/∂x = x² + y². Then we must have f(x, y) = (1/3)x³ + xy² + g(y) for some function g(y). So ∂f/∂y = 2xy + g′(y) satisfies the condition ∂f/∂y = 2xy if g′(y) = 0, i.e. g(y) = K, where K is a constant. Since any choice for K will do (why?), we pick K = 0. Thus, a potential f(x, y) for F(x, y) = (x² + y²) i + 2xy j exists, namely
\[
f(x,y) = \tfrac{1}{3}x^3 + xy^2\,.
\]
Hence the line integral ∫_C (x² + y²) dx + 2xy dy is path independent.
Note that we can also verify that the value of the line integral of F along any curve C going from (0, 0) to (1, 2) will always be 13/3, since by Theorem 1.25
\[
\int_C \mathbf F\cdot d\mathbf r = f(1,2) - f(0,0) = \tfrac{1}{3}(1)^3 + (1)(2)^2 - (0+0) = \tfrac{1}{3}+4 = \tfrac{13}{3}\,.
\]
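The potential found in Example 1.26 can be verified numerically: central differences of f should reproduce the components of F, and f(1, 2) − f(0, 0) should give 13/3. A sketch (the sample points are arbitrary choices of ours):

```python
def f(x, y):
    # Candidate potential from Example 1.26
    return x**3 / 3 + x * y**2

def F(x, y):
    # The vector field (P, Q) = (x^2 + y^2, 2xy)
    return (x**2 + y**2, 2 * x * y)

# Check that grad f ≈ F at a few sample points via central differences
h = 1e-6
max_err = 0.0
for (x, y) in [(0.5, 1.0), (1.0, 2.0), (-0.3, 0.7)]:
    fx = (f(x + h, y) - f(x - h, y)) / (2 * h)  # ≈ ∂f/∂x
    fy = (f(x, y + h) - f(x, y - h)) / (2 * h)  # ≈ ∂f/∂y
    P, Q = F(x, y)
    max_err = max(max_err, abs(fx - P), abs(fy - Q))

val = f(1, 2) - f(0, 0)
print(max_err, val)  # max_err tiny, val = 13/3
```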

A consequence of Theorem 1.25 in the special case where C is a closed curve, so that the endpoints A and B are the same point, is the following important corollary:
Corollary 1.27. If a vector field F has a potential in a region R, then ∮_C F · dr = 0 for any closed curve C in R (i.e. ∮_C ∇f · dr = 0 for any real-valued function f(x, y)).

Example 1.28. Evaluate ∮_C x dx + y dy for C : x = 2 cos t, y = 3 sin t, 0 ≤ t ≤ 2π.
Solution: The vector field F(x, y) = x i + y j has a potential f(x, y):
\[
\begin{aligned}
\frac{\partial f}{\partial x} = x &\;\Longrightarrow\; f(x,y) = \tfrac{1}{2}x^2 + g(y)\,, \text{ so}\\
\frac{\partial f}{\partial y} = y &\;\Longrightarrow\; g'(y) = y \;\Longrightarrow\; g(y) = \tfrac{1}{2}y^2 + K
\end{aligned}
\]
for any constant K, so f(x, y) = ½x² + ½y² is a potential for F(x, y). Thus,
\[
\oint_C x\,dx + y\,dy = \oint_C \mathbf F\cdot d\mathbf r = 0
\]
by Corollary 1.27, since the curve C is closed (it is the ellipse x²/4 + y²/9 = 1).
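The conclusion of Example 1.28 can also be confirmed by direct quadrature around the ellipse; a sketch (the helper name is ours):

```python
import math

def circulation(n=20000):
    """Midpoint-rule value of the closed line integral of x dx + y dy
    for the ellipse x = 2 cos t, y = 3 sin t, 0 <= t <= 2*pi."""
    h = 2 * math.pi / n
    s = 0.0
    for i in range(n):
        t = (i + 0.5) * h
        x, y = 2 * math.cos(t), 3 * math.sin(t)
        dx, dy = -2 * math.sin(t), 3 * math.cos(t)
        s += (x * dx + y * dy) * h
    return s

print(circulation())  # ≈ 0, as Corollary 1.27 predicts
```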

Example 1.29. Let F(x, y, z) = x i + y j + 2z k be a vector field in R³. Using the same curve C from Example 1.23, evaluate ∫_C F · dr.
Solution: It is easy to see that f(x, y, z) = x²/2 + y²/2 + z² is a potential for F(x, y, z) (i.e. ∇f = F). So by Theorem 1.25 we know that
\[
\begin{aligned}
\int_C \mathbf F\cdot d\mathbf r &= f(B) - f(A)\,, \text{ where } A = (x(0), y(0), z(0)) \text{ and } B = (x(8\pi), y(8\pi), z(8\pi)), \text{ so}\\
&= f(8\pi\sin 8\pi,\, 8\pi\cos 8\pi,\, 8\pi) - f(0\sin 0,\, 0\cos 0,\, 0) \\
&= f(0,\, 8\pi,\, 8\pi) - f(0, 0, 0) \\
&= 0 + \frac{(8\pi)^2}{2} + (8\pi)^2 - (0+0+0) = 96\pi^2\,.
\end{aligned}
\]

1.3.5 Green's Theorem

We will now see a way of evaluating the line integral of a smooth vector field around a simple closed curve. A vector field F(x, y) = P(x, y) i + Q(x, y) j is smooth if its component functions P(x, y) and Q(x, y) are smooth. We will use Green's Theorem (sometimes called Green's Theorem in the plane) to relate the line integral around a closed curve to a double integral over the region inside the curve:
Theorem 1.30. (Green's Theorem) Let R be a region in R² whose boundary is a simple closed curve C which is piecewise smooth. Let F(x, y) = P(x, y) i + Q(x, y) j be a smooth vector field defined on both R and C. Then
\[
\oint_C \mathbf F\cdot d\mathbf r = \iint_R \left(\frac{\partial Q}{\partial x} - \frac{\partial P}{\partial y}\right) dA\,, \qquad(30)
\]
where C is traversed so that R is always on the left side of C.


Proof. We will prove the theorem in the case of a simple region R, that is, where the boundary curve C can be written as C = C1 ∪ C2 in two distinct ways:
C1 = the curve y = y1(x) from the point X1 to the point X2  (31)
C2 = the curve y = y2(x) from the point X2 to the point X1,  (32)
where X1 and X2 are the points on C farthest to the left and right, respectively; and
C1 = the curve x = x1(y) from the point Y2 to the point Y1  (33)
C2 = the curve x = x2(y) from the point Y1 to the point Y2,  (34)
where Y1 and Y2 are the lowest and highest points, respectively, on C. Let a and b denote the x-coordinates of X1 and X2, and let c and d denote the y-coordinates of Y1 and Y2. Since

y = y1(x) along C1 (as x goes from a to b) and y = y2(x) along C2 (as x goes from b to a), we have
\[
\begin{aligned}
\oint_C P(x,y)\,dx &= \int_{C_1} P(x,y)\,dx + \int_{C_2} P(x,y)\,dx \\
&= \int_a^b P(x, y_1(x))\,dx + \int_b^a P(x, y_2(x))\,dx \\
&= \int_a^b P(x, y_1(x))\,dx - \int_a^b P(x, y_2(x))\,dx \\
&= -\int_a^b \bigl(P(x, y_2(x)) - P(x, y_1(x))\bigr)\,dx \\
&= -\int_a^b \Bigl(P(x,y)\Big|_{y=y_1(x)}^{y=y_2(x)}\Bigr)\,dx \\
&= -\int_a^b \int_{y_1(x)}^{y_2(x)} \frac{\partial P(x,y)}{\partial y}\,dy\,dx &&\text{(by the Fundamental Theorem of Calculus)}\\
&= -\iint_R \frac{\partial P}{\partial y}\,dA\,.
\end{aligned}
\]

Likewise, integrate Q(x, y) around C using the representation C = C1 ∪ C2 given by (33) and (34). Since x = x1(y) along C1 (as y goes from d to c) and x = x2(y) along C2 (as y goes from c to d), we have
\[
\begin{aligned}
\oint_C Q(x,y)\,dy &= \int_{C_1} Q(x,y)\,dy + \int_{C_2} Q(x,y)\,dy \\
&= \int_d^c Q(x_1(y), y)\,dy + \int_c^d Q(x_2(y), y)\,dy \\
&= -\int_c^d Q(x_1(y), y)\,dy + \int_c^d Q(x_2(y), y)\,dy \\
&= \int_c^d \bigl(Q(x_2(y), y) - Q(x_1(y), y)\bigr)\,dy \\
&= \int_c^d \Bigl(Q(x,y)\Big|_{x=x_1(y)}^{x=x_2(y)}\Bigr)\,dy \\
&= \int_c^d \int_{x_1(y)}^{x_2(y)} \frac{\partial Q(x,y)}{\partial x}\,dx\,dy &&\text{(by the Fundamental Theorem of Calculus)}\\
&= \iint_R \frac{\partial Q}{\partial x}\,dA\,, \text{ and so}
\end{aligned}
\]

\[
\begin{aligned}
\oint_C \mathbf F\cdot d\mathbf r &= \oint_C P(x,y)\,dx + \oint_C Q(x,y)\,dy \\
&= -\iint_R \frac{\partial P}{\partial y}\,dA + \iint_R \frac{\partial Q}{\partial x}\,dA \\
&= \iint_R \left(\frac{\partial Q}{\partial x} - \frac{\partial P}{\partial y}\right) dA\,.
\end{aligned}
\]

Though we proved Green's Theorem only for a simple region R, the theorem can also be proved for more general regions (say, a union of simple regions).

Example 1.31. Evaluate ∮_C (x² + y²) dx + 2xy dy, where C is the boundary (traversed counterclockwise) of the region R = {(x, y) : 0 ≤ x ≤ 1, 2x² ≤ y ≤ 2x}.
Solution: R is the region between the parabola y = 2x² and the line y = 2x, which meet at (0, 0) and (1, 2). By Green's Theorem, for P(x, y) = x² + y² and Q(x, y) = 2xy, we have
\[
\begin{aligned}
\oint_C (x^2+y^2)\,dx + 2xy\,dy &= \iint_R \left(\frac{\partial Q}{\partial x} - \frac{\partial P}{\partial y}\right) dA \\
&= \iint_R (2y - 2y)\,dA = \iint_R 0\,dA = 0\,.
\end{aligned}
\]

We actually already knew that the answer was zero, since the vector field F(x, y) = (x² + y²) i + 2xy j has a potential function f(x, y) = (1/3)x³ + xy², and so ∮_C F · dr = 0 by Corollary 1.27.

Example 1.32. Let F(x, y) = P(x, y) i + Q(x, y) j, where
\[
P(x,y) = \frac{-y}{x^2+y^2} \qquad\text{and}\qquad Q(x,y) = \frac{x}{x^2+y^2}\,,
\]
and let R = {(x, y) : 0 < x² + y² ≤ 1}. For the boundary curve C : x² + y² = 1, traversed counterclockwise, a direct computation gives
\[
\oint_C \mathbf F\cdot d\mathbf r = 2\pi\,.
\]
But
\[
\frac{\partial Q}{\partial x} = \frac{y^2 - x^2}{(x^2+y^2)^2} = \frac{\partial P}{\partial y}
\;\Longrightarrow\;
\iint_R \left(\frac{\partial Q}{\partial x} - \frac{\partial P}{\partial y}\right) dA = \iint_R 0\,dA = 0\,.
\]
This would seem to contradict Green's Theorem. However, note that R is not the entire region enclosed by C, since the point (0, 0) is not contained in R. That is, R has a "hole" at the origin, so Green's Theorem does not apply.
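The circulation 2π claimed in Example 1.32 is easy to verify by quadrature around the unit circle; a sketch (the helper name is ours):

```python
import math

def vortex_circulation(n=20000):
    """Midpoint-rule value of the closed line integral of F · dr for
    F = (-y, x)/(x^2 + y^2) around the unit circle, counterclockwise."""
    h = 2 * math.pi / n
    s = 0.0
    for i in range(n):
        t = (i + 0.5) * h
        x, y = math.cos(t), math.sin(t)
        dx, dy = -math.sin(t), math.cos(t)
        r2 = x * x + y * y
        s += ((-y / r2) * dx + (x / r2) * dy) * h
    return s

print(vortex_circulation())  # ≈ 2π, despite ∂Q/∂x = ∂P/∂y away from the origin
```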

If we modify the region R to be the annulus R = {(x, y) : 1/4 ≤ x² + y² ≤ 1}, and take the boundary C of R to be C = C1 ∪ C2, where C1 is the unit circle x² + y² = 1 traversed counterclockwise and C2 is the circle x² + y² = 1/4 traversed clockwise, then it can be shown that
\[
\oint_C \mathbf F\cdot d\mathbf r = 0\,.
\]
We would still have ∬_R (∂Q/∂x − ∂P/∂y) dA = 0, so for this R we would have
\[
\oint_C \mathbf F\cdot d\mathbf r = \iint_R \left(\frac{\partial Q}{\partial x} - \frac{\partial P}{\partial y}\right) dA\,,
\]
which shows that Green's Theorem holds for the annular region R.

It turns out that Green's Theorem can be extended to multiply connected regions, that is, regions like the annulus in Example 1.32, which have one or more regions cut out from the interior, as opposed to discrete points being cut out. For such regions, the outer boundary and the inner boundaries are traversed so that R is always on the left side.

Figure 5  Multiply connected regions: (a) Region R with one hole; (b) Region R with two holes

The intuitive idea for why Green's Theorem holds for multiply connected regions is shown in Figure 5 above. The idea is to cut "slits" between the boundaries of a multiply connected region R so that R is divided into subregions which do not have any holes. For example, in Figure 5(a) the region R is the union of the regions R1 and R2, which are divided by the slits indicated by the dashed lines. Those slits are part of the boundary of both R1 and R2, and we traverse them in the manner indicated by the arrows. Notice that along each slit the boundary of R1 is traversed in the opposite direction as that of R2, which means that the line integrals of F along those slits cancel each other out. Since R1 and R2 do not have holes in them, Green's Theorem holds in each subregion, so that
\[
\oint_{\text{bdy of } R_1} \mathbf F\cdot d\mathbf r = \iint_{R_1} \left(\frac{\partial Q}{\partial x} - \frac{\partial P}{\partial y}\right) dA
\qquad\text{and}\qquad
\oint_{\text{bdy of } R_2} \mathbf F\cdot d\mathbf r = \iint_{R_2} \left(\frac{\partial Q}{\partial x} - \frac{\partial P}{\partial y}\right) dA\,.
\]
But since the line integrals along the slits cancel out, we have
\[
\oint_{C_1 \cup C_2} \mathbf F\cdot d\mathbf r = \oint_{\text{bdy of } R_1} \mathbf F\cdot d\mathbf r + \oint_{\text{bdy of } R_2} \mathbf F\cdot d\mathbf r\,,
\]
and so
\[
\oint_{C_1 \cup C_2} \mathbf F\cdot d\mathbf r = \iint_{R_1} \left(\frac{\partial Q}{\partial x} - \frac{\partial P}{\partial y}\right) dA + \iint_{R_2} \left(\frac{\partial Q}{\partial x} - \frac{\partial P}{\partial y}\right) dA = \iint_R \left(\frac{\partial Q}{\partial x} - \frac{\partial P}{\partial y}\right) dA\,,
\]
which shows that Green's Theorem holds in the region R. A similar argument shows that the theorem holds in the region with two holes shown in Figure 5(b).
We know from Corollary 1.27 that when a smooth vector field F(x, y) = P(x, y) i + Q(x, y) j on a region R (whose boundary is a piecewise smooth, simple closed curve C) has a potential in R, then ∮_C F · dr = 0. And if the potential f(x, y) is smooth in R, then ∂f/∂x = P and ∂f/∂y = Q, and so we know that
\[
\frac{\partial^2 f}{\partial y\,\partial x} = \frac{\partial^2 f}{\partial x\,\partial y}
\;\Longrightarrow\;
\frac{\partial P}{\partial y} = \frac{\partial Q}{\partial x} \text{ in } R.
\]
Conversely, if ∂P/∂y = ∂Q/∂x in R then
\[
\oint_C \mathbf F\cdot d\mathbf r = \iint_R \left(\frac{\partial Q}{\partial x} - \frac{\partial P}{\partial y}\right) dA = \iint_R 0\,dA = 0\,.
\]

Applications of Green's Theorem

For a simply connected region R (i.e. a region with no holes) in R², the following statements are equivalent:

1. F(x, y) = P(x, y) i + Q(x, y) j has a smooth potential f(x, y) in R;
2. ∫_C F · dr is independent of the path for any curve C in R;
3. ∮_C F · dr = 0 for every simple closed curve C in R;
4. ∂P/∂y = ∂Q/∂x in R (in this case, the differential form P dx + Q dy is exact).
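Condition 4 is the easiest of the four to test in practice, even numerically via finite differences. A sketch (the helper `curl_z` and the sample point are our own choices) applied to the two fields seen in this lecture:

```python
def curl_z(P, Q, x, y, h=1e-5):
    """Central-difference approximation of ∂Q/∂x - ∂P/∂y at (x, y)."""
    dQdx = (Q(x + h, y) - Q(x - h, y)) / (2 * h)
    dPdy = (P(x, y + h) - P(x, y - h)) / (2 * h)
    return dQdx - dPdy

# The field from Example 1.18: conservative on all of R^2
c1 = curl_z(lambda x, y: x**2 + y**2, lambda x, y: 2 * x * y, 0.7, -1.2)

# The vortex field from Example 1.32: also satisfies the condition,
# but its domain (the punctured plane) is not simply connected,
# so condition 4 alone does not make it conservative there
c2 = curl_z(lambda x, y: -y / (x**2 + y**2),
            lambda x, y: x / (x**2 + y**2), 0.7, -1.2)
print(c1, c2)  # both ≈ 0
```

This illustrates why the simple-connectedness hypothesis matters: the vortex field passes the test of condition 4 yet has nonzero circulation around the origin.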

1.4 Lecture-4
We learned how to integrate along a curve. We will now learn how to perform integration over a surface in R³, such as a sphere or a paraboloid. Recall how we identified points (x, y, z) on a curve C in R³, parametrized by x = x(t), y = y(t), z = z(t), a ≤ t ≤ b, with the terminal points of the position vector

r(t) = x(t)i + y(t)j + z(t)k for t in [a, b].

The idea behind a parametrization of a curve is that it transforms a subset of R1 (normally an interval [a, b]) into a
curve in R2 or R3 (see Figure 6).

Figure 6  Parametrization of a curve C in R³

Similar to how we used a parametrization of a curve to define the line integral along the curve, we will use a parametrization of a surface to define a surface integral. We will use two variables, u and v, to parametrize a surface Σ in R³: x = x(u, v), y = y(u, v), z = z(u, v), for (u, v) in some region R in R² (see Figure 7).

Figure 7  Parametrization of a surface Σ in R³

In this case, the position vector of a point on the surface is given by the vector-valued function

r(u, v) = x(u, v)i + y(u, v)j + z(u, v)k for (u, v) in R.

The parametrization of Σ can be thought of as transforming a region in R² (in the uv-plane) into a 2-dimensional surface in R³. This parametrization of the surface is sometimes called a patch, based on the idea of "patching" the region R onto Σ in the grid-like manner shown in Figure 7.

Let
r(u, v) = x(u, v) i + y(u, v) j + z(u, v) k
be a vector-valued function defined on a region R (called the parameter domain) in the uv-plane. So x, y, and z, the component functions of r, are functions of the two variables u and v with domain R.
The set of all points Σ = {(x, y, z) ∈ R³ | x = x(u, v), y = y(u, v), z = z(u, v), (u, v) ∈ R} is called a parametric surface, and the equations
x = x(u, v), y = y(u, v), z = z(u, v)
are called parametric equations of Σ. Each choice of u and v gives a point on Σ; by making all choices, we get all of Σ.

Example 1.33. 1. Identify the surface with the given vector equation:
\[
\mathbf r(u,v) = (u+v)\,\mathbf i + (3-v)\,\mathbf j + (1+4u+5v)\,\mathbf k\,.
\]
Solution:
The parametric equations of the surface are x = u + v, y = 3 − v, z = 1 + 4u + 5v.
Now z = 1 + 4(u + v) + v = 1 + 4x + 3 − y ⟹ 4x − y − z + 4 = 0.
The surface represents a plane.
2. The parametric equations of the sphere x² + y² + z² = a² of radius a are
\[
x = a\sin\varphi\cos\theta\,,\quad y = a\sin\varphi\sin\theta\,,\quad z = a\cos\varphi\,,
\]
where 0 ≤ φ ≤ π and 0 ≤ θ ≤ 2π.

3. The vector function that represents the plane passing through the point P0 with position vector r0 and containing two non-parallel vectors a and b is
\[
\mathbf r(u,v) = \mathbf r_0 + u\,\mathbf a + v\,\mathbf b\,,
\]
where u, v are real numbers.
(Let P be a point in the plane; the vector from P0 to P (i.e., P0P = OP − OP0) is some multiple of a plus some multiple of b. (Can you see why?) We can express this as OP − OP0 = u a + v b ⟹ OP = r = r0 + u a + v b, where u, v are real numbers.)
The real numbers u and v are the parameters for this parametrization of the plane. The idea of the parametrization is that as the parameters u and v sweep through all real numbers, the point r = P sweeps out the plane. In other words, it is a two-dimensional analogue of the parametrization of a line. If r = (x, y, z), r0 = (x0, y0, z0), a = (a1, a2, a3), b = (b1, b2, b3), then
\[
x = x_0 + ua_1 + vb_1\,,\quad y = y_0 + ua_2 + vb_2\,,\quad z = z_0 + ua_3 + vb_3\,.
\]

4. Find a parametric representation of the plane that passes through the point (1, 2, 3) and contains the vectors i + j − k and i − j + k.
Solution:
x = 1 + u + v, y = 2 + u − v, z = 3 − u + v.

5. Identify the surface with the given vector equation:
\[
\mathbf r(u,v) = 2\sin u\,\mathbf i + 3\cos u\,\mathbf j + v\,\mathbf k\,,\quad 0 \le v \le 2\,.
\]
Solution:
The parametric equations of the surface are x/2 = sin u, y/3 = cos u, z = v, 0 ≤ v ≤ 2, so
\[
\frac{x^2}{4} + \frac{y^2}{9} = 1\,,\quad 0 \le z \le 2\,.
\]
The surface represents an elliptic cylinder.
Definition 1.34. If a parametric surface Σ is given by a vector function r(u, v), then there are two useful families of curves that lie on Σ, one family with u constant and the other with v constant. These families correspond to vertical and horizontal lines in the uv-plane.
If we keep u constant by putting u = u0, then r(u0, v) becomes a vector function of the single parameter v and defines a curve C1 lying on Σ.
Similarly, if we keep v constant by putting v = v0, we get a curve C2 given by r(u, v0) that lies on Σ.
We call these curves grid curves.
Tangent Plane

We now find the tangent plane to a parametric surface Σ traced out by a vector function r(u, v) = x(u, v) i + y(u, v) j + z(u, v) k at a point P0 with position vector r(u0, v0).
If we keep u constant by putting u = u0, then r(u0, v) becomes a vector function of the single parameter v and defines a curve C1 lying on Σ.
The tangent vector to C1 at P0 is obtained by taking the partial derivative of r with respect to v:
\[
\mathbf r_v = \frac{\partial x}{\partial v}(u_0,v_0)\,\mathbf i + \frac{\partial y}{\partial v}(u_0,v_0)\,\mathbf j + \frac{\partial z}{\partial v}(u_0,v_0)\,\mathbf k\,.
\]
Similarly, if we keep v constant by putting v = v0, we get a grid curve C2 given by r(u, v0) that lies on Σ, and its tangent vector at P0 is
\[
\mathbf r_u = \frac{\partial x}{\partial u}(u_0,v_0)\,\mathbf i + \frac{\partial y}{\partial u}(u_0,v_0)\,\mathbf j + \frac{\partial z}{\partial u}(u_0,v_0)\,\mathbf k\,.
\]
Definition 1.35. A parametrized surface r(u, v) = f(u, v) i + g(u, v) j + h(u, v) k is smooth (it has no "corners") if r_u, r_v are continuous and r_u × r_v is not 0 on the parameter domain.
For a smooth surface, the tangent plane is the plane that contains the tangent vectors r_u and r_v, and the vector r_u × r_v is a normal vector to the tangent plane.
Since r(u, v) is a function of two variables, we define the partial derivatives ∂r/∂u and ∂r/∂v for (u, v) in R by
\[
\begin{aligned}
\frac{\partial\mathbf r}{\partial u}(u,v) &= \frac{\partial x}{\partial u}(u,v)\,\mathbf i + \frac{\partial y}{\partial u}(u,v)\,\mathbf j + \frac{\partial z}{\partial u}(u,v)\,\mathbf k\,, \text{ and}\\
\frac{\partial\mathbf r}{\partial v}(u,v) &= \frac{\partial x}{\partial v}(u,v)\,\mathbf i + \frac{\partial y}{\partial v}(u,v)\,\mathbf j + \frac{\partial z}{\partial v}(u,v)\,\mathbf k\,.
\end{aligned}
\]
Example 1.36. Find the equation of the tangent plane to the given parametric surface at the specified point:
x = u + v, y = 3u², z = u − v at (2, 3, 0).
Solution:
r(u, v) = (u + v) i + 3u² j + (u − v) k, so r_u = i + 6u j + k and r_v = i − k, and
\[
\mathbf r_u \times \mathbf r_v = \begin{vmatrix} \mathbf i & \mathbf j & \mathbf k \\ 1 & 6u & 1 \\ 1 & 0 & -1 \end{vmatrix} = -6u\,\mathbf i + 2\,\mathbf j - 6u\,\mathbf k\,.
\]
(u + v, 3u², u − v) = (2, 3, 0) ⟹ u = 1, v = 1, so n = −6 i + 2 j − 6 k. The equation of the tangent plane is
\[
-6(x-2) + 2(y-3) - 6(z-0) = 0\,.
\]
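The normal vector in Example 1.36 comes from a single 3-vector cross product, which is simple to compute directly; a sketch (the helper `cross` is our own):

```python
def cross(a, b):
    """Cross product of two 3-vectors given as tuples."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

# r(u, v) = (u + v, 3u^2, u - v); the point (2, 3, 0) corresponds to u = v = 1
u = 1.0
ru = (1.0, 6 * u, 1.0)   # (∂x/∂u, ∂y/∂u, ∂z/∂u)
rv = (1.0, 0.0, -1.0)    # (∂x/∂v, ∂y/∂v, ∂z/∂v)
n = cross(ru, rv)
print(n)  # (-6.0, 2.0, -6.0), the normal used in the tangent-plane equation
```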

1.4.1 Surface Area

If a smooth parametric surface Σ is given by the equation r(u, v) = x(u, v) i + y(u, v) j + z(u, v) k, (u, v) ∈ R, and Σ is covered just once as (u, v) ranges throughout the parameter domain R, then the surface area of Σ is
\[
A(\Sigma) = \iint_R |\mathbf r_u \times \mathbf r_v|\,dA\,, \qquad(35)
\]
where r_u = ∂x/∂u i + ∂y/∂u j + ∂z/∂u k and r_v = ∂x/∂v i + ∂y/∂v j + ∂z/∂v k.
Sketch of the proof of formula (35):

In fact, those gridlines in R lead us to how we will define a surface integral over Σ. Along the vertical gridlines in R, the variable u is constant. So those lines get mapped to curves on Σ, and the variable u is constant along the position vector r(u, v). Thus, the tangent vector to those curves at a point (u, v) is ∂r/∂v. Similarly, the horizontal gridlines in R get mapped to curves on Σ whose tangent vectors are ∂r/∂u.
Now take a point (u, v) in R as, say, the lower left corner of one of the rectangular grid sections in R, as shown in Figure 7. Suppose that this rectangle has a small width and height of Δu and Δv, respectively. The corner points of that rectangle are (u, v), (u + Δu, v), (u + Δu, v + Δv) and (u, v + Δv). So the area of that rectangle is ΔA = Δu Δv.
Then that rectangle gets mapped by the parametrization onto some section of the surface Σ which, for Δu and Δv small enough, will have a surface area (call it dσ) that is very close to the area of the parallelogram which has adjacent sides r(u + Δu, v) − r(u, v) (corresponding to the line segment from (u, v) to (u + Δu, v) in R) and r(u, v + Δv) − r(u, v) (corresponding to the line segment from (u, v) to (u, v + Δv) in R).
We have
\[
\frac{\partial\mathbf r}{\partial u} \approx \frac{\mathbf r(u+\Delta u, v) - \mathbf r(u,v)}{\Delta u}\,, \qquad\text{and}\qquad
\frac{\partial\mathbf r}{\partial v} \approx \frac{\mathbf r(u, v+\Delta v) - \mathbf r(u,v)}{\Delta v}\,,
\]
and so the surface area element dσ is approximately
\[
\bigl\lVert (\mathbf r(u+\Delta u, v) - \mathbf r(u,v)) \times (\mathbf r(u, v+\Delta v) - \mathbf r(u,v)) \bigr\rVert
\approx \left\lVert \Bigl(\Delta u\,\frac{\partial\mathbf r}{\partial u}\Bigr) \times \Bigl(\Delta v\,\frac{\partial\mathbf r}{\partial v}\Bigr) \right\rVert
= \left\lVert \frac{\partial\mathbf r}{\partial u} \times \frac{\partial\mathbf r}{\partial v} \right\rVert \Delta u\,\Delta v\,.
\]
Thus, the total surface area S of Σ is approximately the sum of all the quantities ‖∂r/∂u × ∂r/∂v‖ Δu Δv, summed over the rectangles in R. Taking the limit of that sum as the diagonal of the largest rectangle goes to 0 gives
\[
S = \iint_R \left\lVert \frac{\partial\mathbf r}{\partial u} \times \frac{\partial\mathbf r}{\partial v} \right\rVert du\,dv\,. \qquad(36)
\]
We will write the double integral on the right using the special notation
\[
\iint_\Sigma d\sigma = \iint_R \left\lVert \frac{\partial\mathbf r}{\partial u} \times \frac{\partial\mathbf r}{\partial v} \right\rVert du\,dv\,. \qquad(37)
\]
This is a special case of a surface integral over the surface Σ, where the surface area element dσ can be thought of as 1 dσ.

1.4.2 Surface area of the graph of a function

For the special case of a surface Σ with equation z = f(x, y), where (x, y) lies in R and f has continuous partial derivatives, we take x and y as parameters. The parametric equations are
x = x, y = y, z = f(x, y),
so
\[
\mathbf r_x = \mathbf i + \frac{\partial f}{\partial x}\,\mathbf k\,,\qquad \mathbf r_y = \mathbf j + \frac{\partial f}{\partial y}\,\mathbf k\,,
\]
and
\[
\mathbf r_x \times \mathbf r_y = \begin{vmatrix} \mathbf i & \mathbf j & \mathbf k \\ 1 & 0 & \frac{\partial f}{\partial x} \\ 0 & 1 & \frac{\partial f}{\partial y} \end{vmatrix} = -\frac{\partial f}{\partial x}\,\mathbf i - \frac{\partial f}{\partial y}\,\mathbf j + \mathbf k\,.
\]
Thus
\[
A(\Sigma) = \iint_R |\mathbf r_x \times \mathbf r_y|\,dA = \iint_R \sqrt{1 + \left(\frac{\partial f}{\partial x}\right)^2 + \left(\frac{\partial f}{\partial y}\right)^2}\,dA = \iint_R \sqrt{1 + \left(\frac{\partial z}{\partial x}\right)^2 + \left(\frac{\partial z}{\partial y}\right)^2}\,dA\,.
\]
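The graph-area formula is easy to exercise numerically on a case with a known answer: the plane z = x + y over the unit square has area √3, since the integrand is the constant √(1 + 1 + 1). A sketch (the helper `graph_area` is our own):

```python
import math

def graph_area(fx, fy, n=400):
    """Midpoint-rule value of the double integral of sqrt(1 + fx^2 + fy^2)
    over the unit square 0 <= x, y <= 1, where fx, fy are ∂f/∂x, ∂f/∂y."""
    h = 1.0 / n
    s = 0.0
    for i in range(n):
        for j in range(n):
            x, y = (i + 0.5) * h, (j + 0.5) * h
            s += math.sqrt(1 + fx(x, y)**2 + fy(x, y)**2) * h * h
    return s

# Plane z = x + y over the unit square: fx = fy = 1, area = sqrt(3)
A = graph_area(lambda x, y: 1.0, lambda x, y: 1.0)
print(A, math.sqrt(3))
```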

1.4.3 Surface Integrals

Replacing 1 in Equation (35) by a general real-valued function f(x, y, z) defined on R³, we have the following:
Definition 1.37. Let Σ be a surface in R³ parametrized by x = x(u, v), y = y(u, v), z = z(u, v), for (u, v) in some region R in R². Let r(u, v) = x(u, v) i + y(u, v) j + z(u, v) k be the position vector for any point on Σ, and let f(x, y, z) be a real-valued function defined on some subset of R³ that contains Σ. The surface integral of f(x, y, z) over Σ is
\[
\iint_\Sigma f(x,y,z)\,d\sigma = \iint_R f(x(u,v), y(u,v), z(u,v)) \left\lVert \frac{\partial\mathbf r}{\partial u} \times \frac{\partial\mathbf r}{\partial v} \right\rVert du\,dv\,. \qquad(38)
\]
In particular, the surface area S of Σ is
\[
S = \iint_\Sigma 1\,d\sigma\,. \qquad(39)
\]

Example 1.38. A torus T is a surface obtained by revolving a circle of radius a in the yz-plane around the z-axis, where the circle's center is at a distance b from the z-axis (0 < a < b); the revolved circle is (y − b)² + z² = a². Find the surface area of T.
Solution: For any point on the circle, the line segment from the center of the circle to that point makes an angle u with the y-axis in the positive y direction. And as the circle revolves around the z-axis, the line segment from the origin to the center of that circle sweeps out an angle v with the positive x-axis. Thus, the torus can be parametrized as:
\[
x = (b + a\cos u)\cos v\,,\quad y = (b + a\cos u)\sin v\,,\quad z = a\sin u\,,\quad 0 \le u \le 2\pi\,,\; 0 \le v \le 2\pi\,.
\]
So for the position vector
\[
\mathbf r(u,v) = (b + a\cos u)\cos v\,\mathbf i + (b + a\cos u)\sin v\,\mathbf j + a\sin u\,\mathbf k
\]
we see that
\[
\begin{aligned}
\frac{\partial\mathbf r}{\partial u} &= -a\sin u\cos v\,\mathbf i - a\sin u\sin v\,\mathbf j + a\cos u\,\mathbf k \\
\frac{\partial\mathbf r}{\partial v} &= -(b + a\cos u)\sin v\,\mathbf i + (b + a\cos u)\cos v\,\mathbf j + 0\,\mathbf k\,,
\end{aligned}
\]
and so computing the cross product gives
\[
\frac{\partial\mathbf r}{\partial u} \times \frac{\partial\mathbf r}{\partial v} = -a(b + a\cos u)\cos v\cos u\,\mathbf i - a(b + a\cos u)\sin v\cos u\,\mathbf j - a(b + a\cos u)\sin u\,\mathbf k\,,
\]
which has magnitude
\[
\left\lVert \frac{\partial\mathbf r}{\partial u} \times \frac{\partial\mathbf r}{\partial v} \right\rVert = a(b + a\cos u)\,.
\]

Thus, the surface area of T is

\[ S = \iint_\Sigma 1\, d\sigma
     = \int_0^{2\pi} \int_0^{2\pi} \left\| \frac{\partial r}{\partial u} \times \frac{\partial r}{\partial v} \right\| du\, dv
     = \int_0^{2\pi} \int_0^{2\pi} a(b + a\cos u)\, du\, dv \]
\[  = \int_0^{2\pi} \left( abu + a^2 \sin u \right) \Big|_{u=0}^{u=2\pi}\, dv
     = \int_0^{2\pi} 2\pi ab\, dv
     = 4\pi^2 ab . \]
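The value 4π²ab can be double-checked numerically. The following sketch (added for illustration, not part of the original notes; it assumes plain Python with the standard library) approximates the surface-area integral with a midpoint rule:

```python
import math

def torus_area(a, b, n=1000):
    # Midpoint-rule approximation of S = ∫0^{2π} ∫0^{2π} a(b + a cos u) du dv.
    # The integrand does not depend on v, so the v-integral just contributes 2π.
    h = 2 * math.pi / n
    total = sum(a * (b + a * math.cos((i + 0.5) * h)) for i in range(n))
    return total * h * 2 * math.pi

a, b = 1.0, 2.0
print(abs(torus_area(a, b) - 4 * math.pi**2 * a * b) < 1e-6)  # True
```

The agreement is essentially to machine precision, since the midpoint rule integrates cos u exactly over a full period.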

Since ∂r/∂u and ∂r/∂v are tangent to the surface Σ (i.e. lie in the tangent plane to Σ at each point on Σ), their cross product ∂r/∂u × ∂r/∂v is perpendicular to the tangent plane to the surface at each point of Σ. Thus,

\[ \iint_\Sigma f(x, y, z)\, d\sigma = \iint_R f(x(u, v), y(u, v), z(u, v))\, \|\mathbf{n}\|\, du\, dv , \]

where \mathbf{n} = \frac{\partial r}{\partial u} \times \frac{\partial r}{\partial v}. We say that \mathbf{n} is a normal vector to Σ.
Recall that normal vectors to a plane can point in two opposite directions. By an outward unit normal vector to a surface Σ, we will mean the unit vector that is normal to Σ and points away from the top (or outer part) of the surface. With this idea in mind, we make the following definition of a surface integral of a 3-dimensional vector field over a surface:

Definition 1.39. Let Σ be a surface in R^3 and let F(x, y, z) = f_1(x, y, z)\mathbf{i} + f_2(x, y, z)\mathbf{j} + f_3(x, y, z)\mathbf{k} be a vector field defined on some subset of R^3 that contains Σ. The surface integral of F over Σ is

\[ \iint_\Sigma F \cdot d\sigma = \iint_\Sigma F \cdot \mathbf{n}\, d\sigma , \tag{40} \]

where, at any point on Σ, \mathbf{n} is the outward unit normal vector to Σ.



Now

\[ \iint_\Sigma F \cdot d\sigma = \iint_\Sigma F \cdot \mathbf{n}\, d\sigma
   = \iint_\Sigma F \cdot \frac{r_u \times r_v}{|r_u \times r_v|}\, d\sigma
   = \iint_D F(r(u, v)) \cdot \frac{r_u \times r_v}{|r_u \times r_v|}\, |r_u \times r_v|\, dA , \]

where D is the parameter domain. Thus we have

\[ \iint_\Sigma F \cdot d\sigma = \iint_D F(r(u, v)) \cdot (r_u \times r_v)\, dA . \]

Further, if a surface Σ is given by z = g(x, y), we can take x, y as parameters and write F(r(u, v)) \cdot (r_u \times r_v) as -P \frac{\partial g}{\partial x} - Q \frac{\partial g}{\partial y} + R, where F = P\mathbf{i} + Q\mathbf{j} + R\mathbf{k}. Consequently,

\[ \iint_D F(r(u, v)) \cdot (r_u \times r_v)\, dA = \iint_D \left( -P \frac{\partial g}{\partial x} - Q \frac{\partial g}{\partial y} + R \right) dA . \]

Note in the above definition that the dot product inside the integral on the right is a real-valued function, and hence we can use Definition 1.37 to evaluate the integral.
Example 1.40. Evaluate the surface integral \iint_\Sigma F \cdot d\sigma, where F(x, y, z) = yz\mathbf{i} + xz\mathbf{j} + xy\mathbf{k} and Σ is the part of the plane x + y + z = 1 with x \ge 0, y \ge 0, and z \ge 0, with the outward unit normal \mathbf{n} pointing in the positive z direction.

[Figure: the plane x + y + z = 1 in the first octant, with outward unit normal n.]
Solution: Since the vector v = (1, 1, 1) is normal to the plane x + y + z = 1 (why?), dividing v by its length yields the outward unit normal vector \mathbf{n} = \left( \frac{1}{\sqrt{3}}, \frac{1}{\sqrt{3}}, \frac{1}{\sqrt{3}} \right). We now need to parametrize Σ. Projecting Σ onto the xy-plane yields the triangular region R = { (x, y) : 0 \le x \le 1, 0 \le y \le 1 - x }. Thus, using (u, v) instead of (x, y), we see that

\[ x = u , \quad y = v , \quad z = 1 - (u + v) , \qquad 0 \le u \le 1 , \ 0 \le v \le 1 - u \]

is a parametrization of Σ over R (since z = 1 - (x + y) on Σ). So on Σ,

\[ F \cdot \mathbf{n} = (yz, xz, xy) \cdot \left( \tfrac{1}{\sqrt{3}}, \tfrac{1}{\sqrt{3}}, \tfrac{1}{\sqrt{3}} \right)
   = \tfrac{1}{\sqrt{3}} (yz + xz + xy)
   = \tfrac{1}{\sqrt{3}} ((x + y)z + xy) \]
\[ = \tfrac{1}{\sqrt{3}} ((u + v)(1 - (u + v)) + uv)
   = \tfrac{1}{\sqrt{3}} ((u + v) - (u + v)^2 + uv) \]

for (u, v) in R, and for r(u, v) = x(u, v)\mathbf{i} + y(u, v)\mathbf{j} + z(u, v)\mathbf{k} = u\mathbf{i} + v\mathbf{j} + (1 - (u + v))\mathbf{k} we have

\[ \frac{\partial r}{\partial u} \times \frac{\partial r}{\partial v} = (1, 0, -1) \times (0, 1, -1) = (1, 1, 1) \quad \Rightarrow \quad \left\| \frac{\partial r}{\partial u} \times \frac{\partial r}{\partial v} \right\| = \sqrt{3} . \]

Thus, integrating over R using vertical slices gives

\[ \iint_\Sigma F \cdot d\sigma = \iint_\Sigma F \cdot \mathbf{n}\, d\sigma
   = \iint_R (F(x(u, v), y(u, v), z(u, v)) \cdot \mathbf{n}) \left\| \frac{\partial r}{\partial u} \times \frac{\partial r}{\partial v} \right\| dv\, du \]
\[ = \int_0^1 \int_0^{1-u} \tfrac{1}{\sqrt{3}} \left( (u + v) - (u + v)^2 + uv \right) \sqrt{3}\, dv\, du
   = \int_0^1 \left( \frac{(u + v)^2}{2} - \frac{(u + v)^3}{3} + \frac{u v^2}{2} \right) \Bigg|_{v=0}^{v=1-u} du \]
\[ = \int_0^1 \left( \frac{1}{6} + \frac{u}{2} - \frac{3u^2}{2} + \frac{5u^3}{6} \right) du
   = \left( \frac{u}{6} + \frac{u^2}{4} - \frac{u^3}{2} + \frac{5u^4}{24} \right) \Bigg|_0^1 = \frac{1}{8} . \]
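The value 1/8 in Example 1.40 can be confirmed numerically. This is a quick sanity check added for illustration (not part of the original notes), approximating the double integral of the integrand (u + v) - (u + v)^2 + uv over the triangle with a nested midpoint rule:

```python
def flux(n=400):
    # Nested midpoint rule for the integral of (u+v) - (u+v)^2 + u*v
    # over the triangle 0 <= u <= 1, 0 <= v <= 1 - u; the exact value is 1/8.
    hu = 1.0 / n
    total = 0.0
    for i in range(n):
        u = (i + 0.5) * hu
        hv = (1.0 - u) / n          # inner step adapts to the slanted boundary
        for j in range(n):
            v = (j + 0.5) * hv
            s = u + v
            total += (s - s * s + u * v) * hv
    return total * hu

print(abs(flux() - 1 / 8) < 1e-5)  # True
```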


1.5 Lecture-5
Computing surface integrals can often be tedious, especially when the formula for the outward unit normal vector at each point of Σ changes. The following theorem provides an easier way in the case when Σ is a closed surface, that is, when Σ encloses a bounded solid in R^3. For example, spheres, cubes, and ellipsoids are closed surfaces, but planes and paraboloids are not.
Theorem 1.41. (Divergence Theorem) Let Σ be a closed surface in R^3 which bounds a solid S, and let F(x, y, z) = f_1(x, y, z)\mathbf{i} + f_2(x, y, z)\mathbf{j} + f_3(x, y, z)\mathbf{k} be a vector field defined on some subset of R^3 that contains Σ. Then

\[ \iint_\Sigma F \cdot d\sigma = \iiint_S \operatorname{div} F\, dV , \tag{41} \]

where

\[ \operatorname{div} F = \frac{\partial f_1}{\partial x} + \frac{\partial f_2}{\partial y} + \frac{\partial f_3}{\partial z} \tag{42} \]

is called the divergence of F.

The proof of the Divergence Theorem is very similar to the proof of Green's Theorem, i.e. it is first proved for the simple case when the solid S is bounded above by one surface, bounded below by another surface, and bounded laterally by one or more surfaces. The proof can then be extended to more general solids.

Example 1.42. Evaluate \iint_\Sigma F \cdot d\sigma, where F(x, y, z) = x\mathbf{i} + y\mathbf{j} + z\mathbf{k} and Σ is the unit sphere x^2 + y^2 + z^2 = 1.

Solution: We see that div F = 1 + 1 + 1 = 3, so

\[ \iint_\Sigma F \cdot d\sigma = \iiint_S \operatorname{div} F\, dV = \iiint_S 3\, dV = 3 \iiint_S 1\, dV = 3\, \operatorname{vol}(S) = 3 \cdot \frac{4\pi (1)^3}{3} = 4\pi . \]
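The flux 4π can also be computed directly from the definition, without the Divergence Theorem. The sketch below (an illustrative addition, assuming plain Python) uses the spherical parametrization r(u, v) = (sin u cos v, sin u sin v, cos u), for which F · (r_u × r_v) works out to sin u:

```python
import math

def sphere_flux(n=1000):
    # Direct flux of F = x i + y j + z k through the unit sphere:
    # with r(u,v) = (sin u cos v, sin u sin v, cos u), F . (r_u x r_v) = sin u,
    # so the flux is ∫0^{2π} ∫0^{π} sin u du dv.
    h = math.pi / n
    total = sum(math.sin((i + 0.5) * h) for i in range(n))
    return total * h * 2 * math.pi  # the v-integral contributes 2π

print(abs(sphere_flux() - 4 * math.pi) < 1e-4)  # True
```

Both routes agree, as the Divergence Theorem predicts.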

In physical applications, the surface integral \iint_\Sigma F \cdot d\sigma is often referred to as the flux of F through the surface Σ. For example, if F represents the velocity field of a fluid, then the flux is the net quantity of fluid that flows through the surface per unit time. A positive flux means there is a net flow out of the surface (i.e. in the direction of the outward unit normal vector \mathbf{n}), while a negative flux indicates a net flow inward (in the direction of -\mathbf{n}).

The term divergence comes from interpreting div F as a measure of how much a vector field diverges from a point. This is best seen by using another definition of div F which is equivalent to the definition given by formula (42). Namely, for a point (x, y, z) in R^3,

\[ \operatorname{div} F(x, y, z) = \lim_{V \to 0} \frac{1}{V} \iint_\Sigma F \cdot d\sigma , \tag{43} \]

where V is the volume enclosed by a closed surface Σ around the point (x, y, z). In the limit, V \to 0 means that we take smaller and smaller closed surfaces around (x, y, z), which means that the volumes they enclose are going to zero. It can be shown that this limit is independent of the shapes of those surfaces. Notice that the limit being taken is of the ratio of the flux through a surface to the volume enclosed by that surface, which gives a rough measure of the flow leaving a point, as we mentioned. Vector fields which have zero divergence are often called solenoidal fields.
The following theorem is a simple consequence of formula (43).

Theorem 1.43. If the flux of a vector field F is zero through every closed surface containing a given point, then div F = 0 at that point.

Proof. By formula (43), at the given point (x, y, z) we have

\[ \operatorname{div} F(x, y, z) = \lim_{V \to 0} \frac{1}{V} \iint_\Sigma F \cdot d\sigma \quad \text{for closed surfaces } \Sigma \text{ containing } (x, y, z) \]
\[ = \lim_{V \to 0} \frac{1}{V} (0) \quad \text{by our assumption that the flux through each } \Sigma \text{ is zero} \]
\[ = \lim_{V \to 0} 0 = 0 . \]

1.6 Lecture-6

We will now discuss a generalization of Green's Theorem in R^2 to orientable surfaces in R^3, called Stokes' Theorem. A surface Σ in R^3 is orientable if there is a continuous vector field N in R^3 such that N is nonzero and normal to Σ (i.e. perpendicular to the tangent plane) at each point of Σ. We say that such an N is a normal vector field.
For example, the unit sphere x^2 + y^2 + z^2 = 1 is orientable, since the continuous vector field
N(x, y, z) = x\,\mathbf{i} + y\,\mathbf{j} + z\,\mathbf{k} is nonzero and normal to the sphere at each point. In fact, -N(x, y, z) is another normal vector field. We see in this case that N(x, y, z) is what we have called an outward normal vector, and -N(x, y, z) is an inward normal vector. These outward and inward normal vector fields on the sphere correspond to an "outer" and "inner" side, respectively, of the sphere. That is, we say that the sphere is a two-sided surface. Roughly, "two-sided" means "orientable". Other examples of two-sided, and hence orientable, surfaces are cylinders, paraboloids, ellipsoids, and planes.
You may be wondering what kind of surface would not have two sides. An example is the Möbius strip, which is constructed by taking a thin rectangle and connecting its ends at the opposite corners, resulting in a twisted strip.

[Figure: a thin rectangle whose ends, with corners labeled A and B, are joined after a half-twist so that A meets B.]
If you imagine walking along a line down the center of the Möbius strip, then you arrive back at the same place from which you started, but upside down! That is, your orientation changed even though your motion was continuous along that center line. Informally, thinking of your vertical direction as a normal vector field along the strip, there is a discontinuity at your starting point (and, in fact, at every point) since your vertical direction takes two different values there. The Möbius strip has only one side, and hence is non-orientable.
For an orientable surface Σ which has a boundary curve C, pick a unit normal vector \mathbf{n} such that if you walked along C with your head pointing in the direction of \mathbf{n}, then the surface would be on your left. We say in this situation that \mathbf{n} is a positive unit normal vector and that C is traversed \mathbf{n}-positively. We can now state Stokes' Theorem:
Theorem 1.44. (Stokes' Theorem) Let Σ be an orientable surface in R^3 whose boundary is a simple closed curve C, and let F(x, y, z) = P(x, y, z)\mathbf{i} + Q(x, y, z)\mathbf{j} + R(x, y, z)\mathbf{k} be a smooth vector field defined on some subset of R^3 that contains Σ. Then

\[ \oint_C F \cdot dr = \iint_\Sigma (\operatorname{curl} F) \cdot \mathbf{n}\, d\sigma , \tag{44} \]

where

\[ \operatorname{curl} F = \left( \frac{\partial R}{\partial y} - \frac{\partial Q}{\partial z} \right) \mathbf{i} + \left( \frac{\partial P}{\partial z} - \frac{\partial R}{\partial x} \right) \mathbf{j} + \left( \frac{\partial Q}{\partial x} - \frac{\partial P}{\partial y} \right) \mathbf{k} , \tag{45} \]

\mathbf{n} is a positive unit normal vector over Σ, and C is traversed \mathbf{n}-positively.

Proof. As the general case is beyond the scope of this text, we will prove the theorem only for the special case where Σ is the graph of z = z(x, y) for some smooth real-valued function z(x, y), with (x, y) varying over a region D in R^2.

[Figure: the surface Σ: z = z(x, y) with positive unit normal n and boundary curve C, lying over the region D in the xy-plane with boundary curve C_D.]
Projecting Σ onto the xy-plane, we see that the closed curve C (the boundary curve of Σ) projects onto a closed curve C_D which is the boundary curve of D. Assuming that C has a smooth parametrization, its projection C_D in the xy-plane also has a smooth parametrization, say

\[ C_D : \ x = x(t) , \quad y = y(t) , \quad a \le t \le b , \]

and so C can be parametrized (in R^3) as

\[ C : \ x = x(t) , \quad y = y(t) , \quad z = z(x(t), y(t)) , \quad a \le t \le b , \]

since the curve C is part of the surface z = z(x, y). Now, by the Chain Rule (Theorem 1.24), for z = z(x(t), y(t)) as a function of t, we know that

\[ z'(t) = \frac{\partial z}{\partial x}\, x'(t) + \frac{\partial z}{\partial y}\, y'(t) , \]

and so

\[ \oint_C F \cdot dr = \oint_C P(x, y, z)\, dx + Q(x, y, z)\, dy + R(x, y, z)\, dz \]
\[ = \int_a^b \left( P\, x'(t) + Q\, y'(t) + R \left( \frac{\partial z}{\partial x}\, x'(t) + \frac{\partial z}{\partial y}\, y'(t) \right) \right) dt \]
\[ = \int_a^b \left( \left( P + R \frac{\partial z}{\partial x} \right) x'(t) + \left( Q + R \frac{\partial z}{\partial y} \right) y'(t) \right) dt \]
\[ = \oint_{C_D} \tilde{P}(x, y)\, dx + \tilde{Q}(x, y)\, dy , \]

where

\[ \tilde{P}(x, y) = P(x, y, z(x, y)) + R(x, y, z(x, y)) \frac{\partial z}{\partial x}(x, y) , \quad \text{and} \]
\[ \tilde{Q}(x, y) = Q(x, y, z(x, y)) + R(x, y, z(x, y)) \frac{\partial z}{\partial y}(x, y) \]

for (x, y) in D. Thus, by Green's Theorem applied to the region D, we have

\[ \oint_C F \cdot dr = \iint_D \left( \frac{\partial \tilde{Q}}{\partial x} - \frac{\partial \tilde{P}}{\partial y} \right) dA . \tag{46} \]

Thus,

\[ \frac{\partial \tilde{Q}}{\partial x} = \frac{\partial}{\partial x} \left( Q(x, y, z(x, y)) + R(x, y, z(x, y)) \frac{\partial z}{\partial y}(x, y) \right) , \]

so by the Product Rule we get

\[ \frac{\partial \tilde{Q}}{\partial x} = \frac{\partial}{\partial x} \left( Q(x, y, z(x, y)) \right) + \left( \frac{\partial}{\partial x} R(x, y, z(x, y)) \right) \frac{\partial z}{\partial y}(x, y) + R(x, y, z(x, y)) \frac{\partial}{\partial x} \left( \frac{\partial z}{\partial y}(x, y) \right) . \]

Now, by the Chain Rule we have

\[ \frac{\partial}{\partial x} \left( Q(x, y, z(x, y)) \right) = \frac{\partial Q}{\partial x} \frac{\partial x}{\partial x} + \frac{\partial Q}{\partial y} \frac{\partial y}{\partial x} + \frac{\partial Q}{\partial z} \frac{\partial z}{\partial x}
   = \frac{\partial Q}{\partial x} \cdot 1 + \frac{\partial Q}{\partial y} \cdot 0 + \frac{\partial Q}{\partial z} \frac{\partial z}{\partial x}
   = \frac{\partial Q}{\partial x} + \frac{\partial Q}{\partial z} \frac{\partial z}{\partial x} . \]

Similarly,

\[ \frac{\partial}{\partial x} \left( R(x, y, z(x, y)) \right) = \frac{\partial R}{\partial x} + \frac{\partial R}{\partial z} \frac{\partial z}{\partial x} . \]

Thus,

\[ \frac{\partial \tilde{Q}}{\partial x} = \frac{\partial Q}{\partial x} + \frac{\partial Q}{\partial z} \frac{\partial z}{\partial x} + \left( \frac{\partial R}{\partial x} + \frac{\partial R}{\partial z} \frac{\partial z}{\partial x} \right) \frac{\partial z}{\partial y} + R \frac{\partial^2 z}{\partial x\, \partial y}
   = \frac{\partial Q}{\partial x} + \frac{\partial Q}{\partial z} \frac{\partial z}{\partial x} + \frac{\partial R}{\partial x} \frac{\partial z}{\partial y} + \frac{\partial R}{\partial z} \frac{\partial z}{\partial x} \frac{\partial z}{\partial y} + R \frac{\partial^2 z}{\partial x\, \partial y} . \]

In a similar fashion, we can calculate

\[ \frac{\partial \tilde{P}}{\partial y} = \frac{\partial P}{\partial y} + \frac{\partial P}{\partial z} \frac{\partial z}{\partial y} + \frac{\partial R}{\partial y} \frac{\partial z}{\partial x} + \frac{\partial R}{\partial z} \frac{\partial z}{\partial y} \frac{\partial z}{\partial x} + R \frac{\partial^2 z}{\partial y\, \partial x} . \]
So subtracting gives

\[ \frac{\partial \tilde{Q}}{\partial x} - \frac{\partial \tilde{P}}{\partial y}
   = \left( \frac{\partial Q}{\partial z} - \frac{\partial R}{\partial y} \right) \frac{\partial z}{\partial x}
   + \left( \frac{\partial R}{\partial x} - \frac{\partial P}{\partial z} \right) \frac{\partial z}{\partial y}
   + \left( \frac{\partial Q}{\partial x} - \frac{\partial P}{\partial y} \right) \tag{47} \]

since \frac{\partial^2 z}{\partial x\, \partial y} = \frac{\partial^2 z}{\partial y\, \partial x} by the smoothness of z = z(x, y). Hence, by equation (46),

\[ \oint_C F \cdot dr = \iint_D \left( -\left( \frac{\partial R}{\partial y} - \frac{\partial Q}{\partial z} \right) \frac{\partial z}{\partial x}
   - \left( \frac{\partial P}{\partial z} - \frac{\partial R}{\partial x} \right) \frac{\partial z}{\partial y}
   + \left( \frac{\partial Q}{\partial x} - \frac{\partial P}{\partial y} \right) \right) dA \tag{48} \]

after factoring out a -1 from the terms in the first two products in equation (47).
Now, recall that the vector N = -\frac{\partial z}{\partial x}\mathbf{i} - \frac{\partial z}{\partial y}\mathbf{j} + \mathbf{k} is normal to the tangent plane to the surface z = z(x, y) at each point of Σ. Thus,

\[ \mathbf{n} = \frac{N}{\|N\|} = \frac{-\frac{\partial z}{\partial x}\mathbf{i} - \frac{\partial z}{\partial y}\mathbf{j} + \mathbf{k}}{\sqrt{1 + \left(\frac{\partial z}{\partial x}\right)^2 + \left(\frac{\partial z}{\partial y}\right)^2}} \]

is in fact a positive unit normal vector to Σ. Hence, using the parametrization r(x, y) = x\mathbf{i} + y\mathbf{j} + z(x, y)\mathbf{k}, for (x, y) in D, of the surface Σ, we have \frac{\partial r}{\partial x} = \mathbf{i} + \frac{\partial z}{\partial x}\mathbf{k} and \frac{\partial r}{\partial y} = \mathbf{j} + \frac{\partial z}{\partial y}\mathbf{k}, and so \left\| \frac{\partial r}{\partial x} \times \frac{\partial r}{\partial y} \right\| = \sqrt{1 + \left(\frac{\partial z}{\partial x}\right)^2 + \left(\frac{\partial z}{\partial y}\right)^2}. So we see that,

using formula (45) for curl F, we have

\[ \iint_\Sigma (\operatorname{curl} F) \cdot \mathbf{n}\, d\sigma = \iint_D (\operatorname{curl} F) \cdot \mathbf{n} \left\| \frac{\partial r}{\partial x} \times \frac{\partial r}{\partial y} \right\| dA \]
\[ = \iint_D \left( \left( \frac{\partial R}{\partial y} - \frac{\partial Q}{\partial z} \right) \mathbf{i} + \left( \frac{\partial P}{\partial z} - \frac{\partial R}{\partial x} \right) \mathbf{j} + \left( \frac{\partial Q}{\partial x} - \frac{\partial P}{\partial y} \right) \mathbf{k} \right) \cdot \left( -\frac{\partial z}{\partial x}\mathbf{i} - \frac{\partial z}{\partial y}\mathbf{j} + \mathbf{k} \right) dA \]
\[ = \iint_D \left( -\left( \frac{\partial R}{\partial y} - \frac{\partial Q}{\partial z} \right) \frac{\partial z}{\partial x} - \left( \frac{\partial P}{\partial z} - \frac{\partial R}{\partial x} \right) \frac{\partial z}{\partial y} + \left( \frac{\partial Q}{\partial x} - \frac{\partial P}{\partial y} \right) \right) dA , \]

which, upon comparing to equation (48), proves the Theorem.


Note: The condition in Stokes' Theorem that the surface Σ have a (continuously varying) positive unit normal vector \mathbf{n} and a boundary curve C traversed \mathbf{n}-positively can be expressed more precisely as follows: if r(t) is the position vector for C and T(t) = r'(t)/\|r'(t)\| is the unit tangent vector to C, then the vectors T, \mathbf{n}, T \times \mathbf{n} form a right-handed system.

Also, it should be noted that Stokes' Theorem holds even when the boundary curve C is piecewise smooth.

Example 1.45. Verify Stokes' Theorem for F(x, y, z) = z\mathbf{i} + x\mathbf{j} + y\mathbf{k} when Σ is the paraboloid z = x^2 + y^2 such that z \le 1.

[Figure: the paraboloid z = x^2 + y^2 capped at z = 1, with boundary curve C and positive unit normal n.]

Solution: The positive unit normal vector to the surface z = z(x, y) = x^2 + y^2 is

\[ \mathbf{n} = \frac{-\frac{\partial z}{\partial x}\mathbf{i} - \frac{\partial z}{\partial y}\mathbf{j} + \mathbf{k}}{\sqrt{1 + \left(\frac{\partial z}{\partial x}\right)^2 + \left(\frac{\partial z}{\partial y}\right)^2}} = \frac{-2x\,\mathbf{i} - 2y\,\mathbf{j} + \mathbf{k}}{\sqrt{1 + 4x^2 + 4y^2}} , \]

and curl F = (1 - 0)\mathbf{i} + (1 - 0)\mathbf{j} + (1 - 0)\mathbf{k} = \mathbf{i} + \mathbf{j} + \mathbf{k}, so

\[ (\operatorname{curl} F) \cdot \mathbf{n} = \frac{-2x - 2y + 1}{\sqrt{1 + 4x^2 + 4y^2}} . \]

Since Σ can be parametrized as r(x, y) = x\mathbf{i} + y\mathbf{j} + (x^2 + y^2)\mathbf{k} for (x, y) in the region D = { (x, y) : x^2 + y^2 \le 1 }, then

\[ \iint_\Sigma (\operatorname{curl} F) \cdot \mathbf{n}\, d\sigma = \iint_D (\operatorname{curl} F) \cdot \mathbf{n} \left\| \frac{\partial r}{\partial x} \times \frac{\partial r}{\partial y} \right\| dA
   = \iint_D \frac{-2x - 2y + 1}{\sqrt{1 + 4x^2 + 4y^2}} \sqrt{1 + 4x^2 + 4y^2}\, dA
   = \iint_D (-2x - 2y + 1)\, dA , \]

so switching to polar coordinates gives

\[ = \int_0^{2\pi} \int_0^1 (-2r\cos\theta - 2r\sin\theta + 1)\, r\, dr\, d\theta
   = \int_0^{2\pi} \int_0^1 (-2r^2\cos\theta - 2r^2\sin\theta + r)\, dr\, d\theta \]
\[ = \int_0^{2\pi} \left( -\tfrac{2}{3} r^3 \cos\theta - \tfrac{2}{3} r^3 \sin\theta + \tfrac{1}{2} r^2 \right) \Big|_{r=0}^{r=1}\, d\theta
   = \int_0^{2\pi} \left( -\tfrac{2}{3} \cos\theta - \tfrac{2}{3} \sin\theta + \tfrac{1}{2} \right) d\theta \]
\[ = \left( -\tfrac{2}{3} \sin\theta + \tfrac{2}{3} \cos\theta + \tfrac{\theta}{2} \right) \Big|_0^{2\pi} = \pi . \]

The boundary curve C is the unit circle x^2 + y^2 = 1 lying in the plane z = 1, which can be parametrized as x = \cos t, y = \sin t, z = 1 for 0 \le t \le 2\pi. So

\[ \oint_C F \cdot dr = \int_0^{2\pi} \left( (1)(-\sin t) + (\cos t)(\cos t) + (\sin t)(0) \right) dt
   = \int_0^{2\pi} \left( -\sin t + \frac{1 + \cos 2t}{2} \right) dt \qquad \left( \text{here we used } \cos^2 t = \frac{1 + \cos 2t}{2} \right) \]
\[ = \left( \cos t + \frac{t}{2} + \frac{\sin 2t}{4} \right) \Big|_0^{2\pi} = \pi . \]

So we see that \oint_C F \cdot dr = \iint_\Sigma (\operatorname{curl} F) \cdot \mathbf{n}\, d\sigma, as predicted by Stokes' Theorem.
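The common value π of the two integrals in Example 1.45 can be checked numerically. The sketch below (added for illustration, not part of the original notes) evaluates the line integral with a midpoint rule; over a full period of trigonometric functions this rule is extremely accurate:

```python
import math

def circulation(n=1000):
    # ∮_C F . dr with C: (cos t, sin t, 1) and F = z i + x j + y k;
    # the integrand reduces to (1)(-sin t) + (cos t)(cos t) = -sin t + cos^2 t.
    h = 2 * math.pi / n
    total = sum(-math.sin((i + 0.5) * h) + math.cos((i + 0.5) * h) ** 2
                for i in range(n))
    return total * h

print(abs(circulation() - math.pi) < 1e-9)  # True
```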

The line integral in the preceding example was far simpler to calculate than the surface integral, but this will not
always be the case.

Example 1.46. Let Σ be the elliptic paraboloid z = \frac{x^2}{4} + \frac{y^2}{9} for z \le 1, and let C be its boundary curve. Calculate \oint_C F \cdot dr for F(x, y, z) = (9xz + 2y)\mathbf{i} + (2x + y^2)\mathbf{j} + (-2y^2 + 2z)\mathbf{k}, where C is traversed counterclockwise.

Solution: The surface is similar to the one in Example 1.45, except now the boundary curve C is the ellipse \frac{x^2}{4} + \frac{y^2}{9} = 1 lying in the plane z = 1. In this case, using Stokes' Theorem is easier than computing the line integral directly. As in Example 1.45, at each point (x, y, z(x, y)) on the surface z = z(x, y) = \frac{x^2}{4} + \frac{y^2}{9} the vector

\[ \mathbf{n} = \frac{-\frac{\partial z}{\partial x}\mathbf{i} - \frac{\partial z}{\partial y}\mathbf{j} + \mathbf{k}}{\sqrt{1 + \left(\frac{\partial z}{\partial x}\right)^2 + \left(\frac{\partial z}{\partial y}\right)^2}} = \frac{-\frac{x}{2}\mathbf{i} - \frac{2y}{9}\mathbf{j} + \mathbf{k}}{\sqrt{1 + \frac{x^2}{4} + \frac{4y^2}{81}}} \]

is a positive unit normal vector to Σ. And calculating the curl of F gives

\[ \operatorname{curl} F = (-4y - 0)\mathbf{i} + (9x - 0)\mathbf{j} + (2 - 2)\mathbf{k} = -4y\,\mathbf{i} + 9x\,\mathbf{j} + 0\,\mathbf{k} , \]

so

\[ (\operatorname{curl} F) \cdot \mathbf{n} = \frac{(-4y)\left(-\frac{x}{2}\right) + (9x)\left(-\frac{2y}{9}\right) + (0)(1)}{\sqrt{1 + \frac{x^2}{4} + \frac{4y^2}{81}}} = \frac{2xy - 2xy + 0}{\sqrt{1 + \frac{x^2}{4} + \frac{4y^2}{81}}} = 0 , \]

and so by Stokes' Theorem

\[ \oint_C F \cdot dr = \iint_\Sigma (\operatorname{curl} F) \cdot \mathbf{n}\, d\sigma = \iint_\Sigma 0\, d\sigma = 0 . \]
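The curl computation in Example 1.46 is easy to reproduce symbolically. This is an illustrative sketch (not part of the original notes) using the sympy library, assuming it is available:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
# Components of F = (9xz + 2y) i + (2x + y^2) j + (-2y^2 + 2z) k
P, Q, R = 9*x*z + 2*y, 2*x + y**2, -2*y**2 + 2*z
curl = (sp.diff(R, y) - sp.diff(Q, z),
        sp.diff(P, z) - sp.diff(R, x),
        sp.diff(Q, x) - sp.diff(P, y))
print(curl)  # (-4*y, 9*x, 0)

# (curl F) . n is proportional to the dot product with (-x/2, -2y/9, 1)
dot = curl[0]*(-x/2) + curl[1]*(-sp.Rational(2, 9)*y) + curl[2]
print(sp.simplify(dot))  # 0
```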
R
In physical applications, for a simple closed curve C the line integral \oint_C F \cdot dr is often called the circulation of F around C. For example, if E represents the electrostatic field due to a point charge, then it turns out that curl E = 0, which means that the circulation \oint_C E \cdot dr = 0 by Stokes' Theorem. Vector fields which have zero curl are often called irrotational fields.

In fact, the term curl was created by the 19th century Scottish physicist James Clerk Maxwell in his study of electromagnetism, where it is used extensively. In physics, the curl is interpreted as a measure of circulation density. This is best seen by using another definition of curl F which is equivalent to the definition given by formula (45). Namely, for a point (x, y, z) in R^3,

\[ \mathbf{n} \cdot (\operatorname{curl} F)(x, y, z) = \lim_{S \to 0} \frac{1}{S} \oint_C F \cdot dr , \tag{49} \]

where S is the surface area of a surface Σ containing the point (x, y, z) and with a simple closed boundary curve C and positive unit normal vector \mathbf{n} at (x, y, z). In the limit, think of the curve C shrinking to the point (x, y, z), which causes Σ, the surface it bounds, to have smaller and smaller surface area. That ratio of circulation to surface area in the limit is what makes the curl a rough measure of circulation density (i.e. circulation per unit area).
Suppose we have a vector field F(x, y, z) which is always
parallel to the xy-plane at each point (x, y, z) and that the vectors grow larger the further the point (x, y, z) is from the
y-axis. For example, F(x, y, z) = (1 + x^2)\,\mathbf{j}. Think of the vector field as representing the flow of water, and imagine
dropping two wheels with paddles into that water flow. Since the flow is stronger (i.e. the magnitude of F is larger) as
you move away from the y-axis, then such a wheel would rotate counterclockwise if it were dropped to the right of the
y-axis, and it would rotate clockwise if it were dropped to the left of the y-axis. In both cases the curl would be nonzero
(curl F(x, y, z) = 2x k in our example) and would obey the right-hand rule, that is, curl F(x, y, z) points in the direction

of your thumb as you cup your right hand in the direction of the rotation of the wheel. So the curl points outward (in the positive z-direction) if x > 0 and points inward (in the negative z-direction) if x < 0. Notice that if all the vectors had the same direction and the same magnitude, then the wheels would not rotate and hence there would be no curl (which is why such fields are called irrotational, meaning no rotation). Finally, by Stokes' Theorem, we know that if C is a simple closed curve in some solid region S in R^3 and if F(x, y, z) is a smooth vector field such that curl F = 0 in S, then

\[ \oint_C F \cdot dr = \iint_\Sigma (\operatorname{curl} F) \cdot \mathbf{n}\, d\sigma = \iint_\Sigma 0 \cdot \mathbf{n}\, d\sigma = \iint_\Sigma 0\, d\sigma = 0 , \]

where Σ is any orientable surface inside S whose boundary is C (such a surface is sometimes called a capping surface for C). So similar to the two-variable case, we have a three-dimensional version for solid regions in R^3 which are simply connected (i.e. regions having no holes):

The following statements are equivalent for a simply connected solid region S in R^3:

1. F(x, y, z) = P(x, y, z)\mathbf{i} + Q(x, y, z)\mathbf{j} + R(x, y, z)\mathbf{k} has a smooth potential \mathcal{F}(x, y, z) in S

2. \int_C F \cdot dr is independent of the path for any curve C in S

3. \oint_C F \cdot dr = 0 for every simple closed curve C in S

4. \frac{\partial R}{\partial y} = \frac{\partial Q}{\partial z}, \ \frac{\partial P}{\partial z} = \frac{\partial R}{\partial x}, \ \text{and} \ \frac{\partial Q}{\partial x} = \frac{\partial P}{\partial y} \ \text{in } S \ (\text{i.e., curl } F = 0 \text{ in } S)

Statement 4 is also a way of saying that the differential form P\, dx + Q\, dy + R\, dz is exact.

Example 1.47. Determine if the vector field F(x, y, z) = xyz\,\mathbf{i} + xz\,\mathbf{j} + xy\,\mathbf{k} has a potential in R^3.

Solution: Since R^3 is simply connected, we just need to check whether curl F = 0 throughout R^3, that is,

\[ \frac{\partial R}{\partial y} = \frac{\partial Q}{\partial z} , \quad \frac{\partial P}{\partial z} = \frac{\partial R}{\partial x} , \quad \text{and} \quad \frac{\partial Q}{\partial x} = \frac{\partial P}{\partial y} \]

throughout R^3, where P(x, y, z) = xyz, Q(x, y, z) = xz, and R(x, y, z) = xy. But we see that

\[ \frac{\partial P}{\partial z} = xy , \quad \frac{\partial R}{\partial x} = y \quad \Longrightarrow \quad \frac{\partial P}{\partial z} \ne \frac{\partial R}{\partial x} \ \text{for some } (x, y, z) \text{ in } R^3 . \]

Thus, F(x, y, z) does not have a potential in R^3.
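The partial-derivative check in Example 1.47 can be carried out symbolically. This is an illustrative sketch (not part of the original notes) assuming the sympy library is available:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
P, Q, R = x*y*z, x*z, x*y   # the components of F in Example 1.47
# For a potential to exist we would need, among other equalities,
# dP/dz = dR/dx throughout R^3 -- but the two derivatives differ:
print(sp.diff(P, z))  # x*y
print(sp.diff(R, x))  # y
```

Since xy ≠ y in general (e.g. at x = 2, y = 1), curl F ≠ 0 and no potential exists.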


2 PART-II 49

2 Part-II

2.1 Lecture-7

A differential equation is an equation involving derivatives. An ordinary differential equation (ODE) is a differential equation involving a function, or functions, of only one variable. If the ODE involves the nth (and lower) derivatives it is said to be an nth order ODE. Let y be a function of one variable x; for neatness, we will try to always use x as the independent variable and a prime for the derivative. An equation of the form

\[ h_1(x, y(x), y'(x)) = 0 \tag{50} \]

is a first order ODE, and

\[ h_2(x, y(x), y'(x), y''(x)) = 0 \tag{51} \]

is second order. A function satisfying the ODE is called a solution of the ODE. Let us see an example you may not have seen:

\[ \frac{dx}{dt} + x = 2\cos t . \tag{52} \]
dt
Here x is the dependent variable and t is the independent variable. Equation (52) is a basic example of a differential equation. In fact it is an example of a first order differential equation, since it involves only the first derivative of the dependent variable. This equation arises from Newton's law of cooling where the ambient temperature oscillates with time.

Solving the differential equation means finding x in terms of t. That is, we want to find a function of t, which we will call x, such that when we plug x, t, and dx/dt into (52), the equation holds. It is the same idea as it would be for a normal (algebraic) equation of just x and t. We claim that

\[ x = x(t) = \cos t + \sin t \]

is a solution. How do we check? We simply plug x into equation (52)! First we need to compute dx/dt. We find that dx/dt = -\sin t + \cos t. Now let us compute the left hand side of (52):

\[ \frac{dx}{dt} + x = (-\sin t + \cos t) + (\cos t + \sin t) = 2\cos t . \]
Yay! We got precisely the right hand side. But there is more! We claim x = \cos t + \sin t + e^{-t} is also a solution. Let us try,

\[ \frac{dx}{dt} = -\sin t + \cos t - e^{-t} . \]

Again plugging into the left hand side of (52),

\[ \frac{dx}{dt} + x = (-\sin t + \cos t - e^{-t}) + (\cos t + \sin t + e^{-t}) = 2\cos t . \]

And it works yet again!

So there can be many different solutions. In fact, for this equation all solutions can be written in the form

\[ x = \cos t + \sin t + C e^{-t} \]

for some constant C. It turns out that solving differential equations can be quite hard. There is no general method that solves every differential equation. We will generally focus on how to get exact formulas for solutions of certain differential equations.
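Plugging the claimed general solution into the equation can also be done symbolically. The following is an illustrative sketch (not part of the original notes), assuming the sympy library is available:

```python
import sympy as sp

t, C = sp.symbols('t C')
x = sp.cos(t) + sp.sin(t) + C*sp.exp(-t)    # the claimed general solution
residual = sp.diff(x, t) + x - 2*sp.cos(t)  # dx/dt + x - 2 cos t, should vanish
print(sp.simplify(residual))  # 0
```

The residual is identically zero for every value of the constant C, exactly as claimed.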

A first order ODE is an equation of the form

\[ \frac{dy}{dx} = f(x, y) , \]

or just

\[ y' = f(x, y) . \]

In general, there is no simple formula or procedure one can follow to find solutions. In the next few lectures we will look at special cases where solutions are not difficult to obtain. In this section, let us assume that f is a function of x alone, that is, the equation is

\[ y' = f(x) . \tag{53} \]
We could just integrate (antidifferentiate) both sides with respect to x,

\[ \int y'(x)\, dx = \int f(x)\, dx + C , \]

that is

\[ y(x) = \int f(x)\, dx + C . \]

This y(x) is actually the general solution. So to solve (53), we find some antiderivative of f(x) and then we add an arbitrary constant to get the general solution.
Now is a good time to discuss a point about calculus notation and terminology. Calculus textbooks muddy the waters by talking about the integral as primarily the so-called indefinite integral. The indefinite integral is really the antiderivative (in fact the whole one-parameter family of antiderivatives). There really exists only one integral and that is the definite integral. The only reason for the indefinite integral notation is that we can always write an antiderivative as a (definite) integral. That is, by the fundamental theorem of calculus we can always write \int f(x)\, dx + C as

\[ \int_{x_0}^{x} f(t)\, dt + C . \]

Hence the terminology to integrate when we may really mean to antidifferentiate. Integration is just one way to compute the antiderivative (and it is a way that always works; see the following examples). Integration is defined as the area under the graph; it only happens to also compute antiderivatives. For the sake of consistency, we will keep using the indefinite integral notation when we want an antiderivative, and you should always think of the definite integral.
Example 2.1. Find the general solution of y' = 3x^2.

Elementary calculus tells us that the general solution must be y = x^3 + C. Let us check: y' = 3x^2. We have gotten precisely our equation back.

Normally, we also have an initial condition such as y(x_0) = y_0 for some two numbers x_0 and y_0 (x_0 is usually 0, but not always). We can then write the solution as a definite integral in a nice way. Suppose our problem is y' = f(x), y(x_0) = y_0. Then the solution is

\[ y(x) = \int_{x_0}^{x} f(s)\, ds + y_0 . \tag{54} \]

Let us check! We compute y' = f(x) (by the fundamental theorem of calculus) and, by Jupiter, y is a solution. Is it the one satisfying the initial condition? Well, y(x_0) = \int_{x_0}^{x_0} f(x)\, dx + y_0 = y_0. It is!

Do note that the definite integral and the indefinite integral (antidifferentiation) are completely different beasts.
The definite integral always evaluates to a number. Therefore, (54) is a formula we can plug into the calculator or a
computer, and it will be happy to calculate specific values for us. We will easily be able to plot the solution and work
with it just like with any other function. It is not so crucial to always find a closed form for the antiderivative.

Example 2.2. Solve

\[ y' = e^{-x^2} , \quad y(0) = 1 . \]

By the preceding discussion, the solution must be

\[ y(x) = \int_0^x e^{-s^2}\, ds + 1 . \]

Here is a good way to make fun of your friends taking second semester calculus. Tell them to find the closed form solution. Ha ha ha (bad math joke). It is not possible (in closed form). There is absolutely nothing wrong with writing the solution as a definite integral. This particular integral is in fact very important in statistics.
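Even without a closed form, the definite-integral solution is perfectly computable. The sketch below (an illustrative addition, not part of the original notes; it assumes the exponent is -s^2, the Gaussian integrand that the reference to statistics suggests) evaluates y(1) with a midpoint rule and compares against the error function from Python's standard library:

```python
import math

def y(x, n=2000):
    # y(x) = ∫_0^x e^{-s^2} ds + 1, approximated with a midpoint rule
    h = x / n
    return sum(math.exp(-((i + 0.5) * h) ** 2) for i in range(n)) * h + 1.0

# The integral does have a standard name: ∫_0^x e^{-s^2} ds = (√π / 2) erf(x)
closed = math.sqrt(math.pi) / 2 * math.erf(1.0) + 1.0
print(abs(y(1.0) - closed) < 1e-6)  # True
```

So a calculator (or computer) is perfectly happy to produce specific values of the solution, as discussed above.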
Using this method, we can also solve equations of the form

\[ y' = f(y) . \]

Let us write the equation in Leibniz notation,

\[ \frac{dy}{dx} = f(y) . \]

Now we use the inverse function theorem from calculus to switch the roles of x and y to obtain

\[ \frac{dx}{dy} = \frac{1}{f(y)} . \]

What we are doing seems like algebra with dx and dy. It is tempting to just do algebra with dx and dy as if they were numbers. And in this case it does work. Be careful, however, as this sort of hand-waving calculation can lead to trouble, especially when more than one independent variable is involved. At this point we can simply integrate,

\[ x(y) = \int \frac{1}{f(y)}\, dy + C . \]

Finally, we try to solve for y.
Example 2.3. Previously, we guessed y' = ky (for some k > 0) has the solution y = Ce^{kx}. We can now find the solution without guessing. First we note that y = 0 is a solution. Henceforth, we assume y \ne 0. We write

\[ \frac{dx}{dy} = \frac{1}{ky} . \]

We integrate to obtain

\[ x(y) = x = \frac{1}{k} \ln|y| + D , \]

where D is an arbitrary constant. Now we solve for y (actually for |y|),

\[ |y| = e^{kx - kD} = e^{-kD} e^{kx} . \]

If we replace e^{-kD} with an arbitrary constant C, we can get rid of the absolute value bars (which we can do as D was arbitrary). In this way, we also incorporate the solution y = 0. We get the same general solution as we guessed before, y = Ce^{kx}.
Example 2.4. Find the general solution of y' = y^2.

First we note that y = 0 is a solution. We can now assume that y \ne 0. Write

\[ \frac{dx}{dy} = \frac{1}{y^2} . \]

We integrate to get

\[ x = -\frac{1}{y} + C . \]

We solve for y to obtain the general solution

\[ y = \frac{1}{C - x} \quad \text{or} \quad y = 0 . \]

Note the singularities of the solution. If for example C = 1, then the solution blows up as we approach x = 1. Generally, it is hard to tell from just looking at the equation itself how the solution is going to behave. The equation y' = y^2 is very nice and defined everywhere, but the solution is only defined on some interval (-\infty, C) or (C, \infty).
Variable separable

Let us suppose that the equation is

\[ y' = f(x) g(y) , \]

for some functions f(x) and g(y). Let us write the equation in Leibniz notation,

\[ \frac{dy}{dx} = f(x) g(y) . \]

Then we rewrite the equation as

\[ \frac{dy}{g(y)} = f(x)\, dx . \]

Now both sides look like something we can integrate. We obtain

\[ \int \frac{dy}{g(y)} = \int f(x)\, dx + C . \]

If we can find closed form expressions for these two integrals, we can, perhaps, solve for y.
Example 2.5. Take the equation

\[ y' = xy . \]

First note that y = 0 is a solution, so assume y \ne 0 from now on. Write the equation as \frac{dy}{dx} = xy, then

\[ \int \frac{dy}{y} = \int x\, dx + C . \]

We compute the antiderivatives to get

\[ \ln|y| = \frac{x^2}{2} + C , \]

or

\[ |y| = e^{\frac{x^2}{2} + C} = e^{\frac{x^2}{2}} e^C = D e^{\frac{x^2}{2}} , \]

where D > 0 is some constant. Because y = 0 is a solution and because of the absolute value, we can actually write

\[ y = D e^{\frac{x^2}{2}} \]

for any number D (including zero or negative).

We check:

\[ y' = D x e^{\frac{x^2}{2}} = x \left( D e^{\frac{x^2}{2}} \right) = xy . \]

Yay!

We should be a little bit more careful with this method. You may be worried that we were integrating in two different variables. We seemed to be doing a different operation to each side. Let us work this method out more rigorously. Take

\[ \frac{dy}{dx} = f(x) g(y) . \]

We rewrite the equation as follows. Note that y = y(x) is a function of x and so is \frac{dy}{dx}!

\[ \frac{1}{g(y)} \frac{dy}{dx} = f(x) . \]

We integrate both sides with respect to x:

\[ \int \frac{1}{g(y)} \frac{dy}{dx}\, dx = \int f(x)\, dx + C . \]

We can use the change of variables formula:

\[ \int \frac{1}{g(y)}\, dy = \int f(x)\, dx + C . \]

And we are done.

2.1.1 Implicit solutions

It is clear that we might sometimes get stuck even if we can do the integration. For example, take the separable equation

\[ y' = \frac{xy}{y^2 + 1} . \]

We separate variables,

\[ \frac{y^2 + 1}{y}\, dy = \left( y + \frac{1}{y} \right) dy = x\, dx . \]

We integrate to get

\[ \frac{y^2}{2} + \ln|y| = \frac{x^2}{2} + C , \]

or perhaps the easier looking expression (where D = 2C)

\[ y^2 + 2\ln|y| = x^2 + D . \]

It is not easy to find the solution explicitly as it is hard to solve for y. We will, therefore, leave the solution in this form and call it an implicit solution. It is still easy to check that implicit solutions satisfy the differential equation. In this case, we differentiate with respect to x to get

\[ y' \left( 2y + \frac{2}{y} \right) = 2x . \]

It is simple to see that the differential equation holds. If you want to compute values for y, you might have to be tricky. For example, you can graph x as a function of y, and then flip your paper. Computers are also good at some of these tricks, but you have to be careful.

We note above that the equation also has a solution y = 0. In this case, it turns out that the general solution is y^2 + 2\ln|y| = x^2 + C together with y = 0. These outlying solutions such as y = 0 are sometimes called singular solutions.

Example 2.6. Solve x^2 y' = 1 - x^2 + y^2 - x^2 y^2, y(1) = 0.

First factor the right hand side to obtain

\[ x^2 y' = (1 - x^2)(1 + y^2) . \]

We separate variables, integrate and solve for y:

\[ \frac{y'}{1 + y^2} = \frac{1 - x^2}{x^2} , \]
\[ \frac{y'}{1 + y^2} = \frac{1}{x^2} - 1 , \]
\[ \arctan(y) = -\frac{1}{x} - x + C , \]
\[ y = \tan\left( -\frac{1}{x} - x + C \right) . \]

Now solve for the initial condition, 0 = \tan(-2 + C), to get C = 2 (or C = 2 + \pi, etc.). The solution we are seeking is, therefore,

\[ y = \tan\left( -\frac{1}{x} - x + 2 \right) . \]
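The solution of Example 2.6 is easy to verify by direct substitution. This is an illustrative sketch (not part of the original notes), assuming the sympy library is available:

```python
import sympy as sp

x = sp.symbols('x')
y = sp.tan(-1/x - x + 2)   # the solution found in Example 2.6
# The residual x^2 y' - (1 - x^2)(1 + y^2) should vanish identically,
# and y should satisfy the initial condition y(1) = 0.
residual = x**2 * sp.diff(y, x) - (1 - x**2) * (1 + y**2)
print(sp.simplify(residual))  # 0
print(y.subs(x, 1))           # 0
```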
Example 2.7. Find the general solution to y' = \frac{x y^2}{3} (including singular solutions).

First note that y = 0 is a solution (a singular solution). So assume that y \ne 0 and write

\[ \frac{3}{y^2}\, y' = x , \]
\[ \frac{-3}{y} = \frac{x^2}{2} + C , \]
\[ y = \frac{-3}{x^2/2 + C} = \frac{-6}{x^2 + 2C} . \]
The equation

\[ y' = (x - y + 1)^2 \]

is neither separable nor linear. What can we do? How about trying to change variables, so that in the new variables the equation is simpler. We will use another variable v, which we will treat as a function of x. Let us try

\[ v = x - y + 1 . \]

We need to figure out y' in terms of v', v and x. We differentiate (in x) to obtain v' = 1 - y'. So y' = 1 - v'. We plug this into the equation to get

\[ 1 - v' = v^2 . \]

In other words, v' = 1 - v^2. Such an equation we know how to solve:

\[ \frac{dv}{1 - v^2} = dx . \]
So

\[ \frac{1}{2} \ln \left| \frac{v + 1}{v - 1} \right| = x + C , \]
\[ \left| \frac{v + 1}{v - 1} \right| = e^{2x + 2C} , \]

or \frac{v + 1}{v - 1} = D e^{2x} for some constant D. Note that v = 1 and v = -1 are also solutions.
Now we need to "unsubstitute" to obtain

\[ \frac{x - y + 2}{x - y} = D e^{2x} , \]

and also the two solutions x - y + 1 = 1, i.e. y = x, and x - y + 1 = -1, i.e. y = x + 2. We solve the first equation for y:

\[ x - y + 2 = (x - y) D e^{2x} , \]
\[ x - y + 2 = D x e^{2x} - y D e^{2x} , \]
\[ -y + y D e^{2x} = D x e^{2x} - x - 2 , \]
\[ y \left( D e^{2x} - 1 \right) = D x e^{2x} - x - 2 , \]
\[ y = \frac{D x e^{2x} - x - 2}{D e^{2x} - 1} . \]

Note that D = 0 gives y = x + 2, but no value of D gives the solution y = x.
Substitution in differential equations is applied in much the same way that it is applied in calculus. You guess. Several different substitutions might work. There are some general things to look for. We summarize a few of these in a table:

    When you see        Try substituting
    y y'                v = y^2
    y^2 y'              v = y^3
    (cos y) y'          v = sin y
    (sin y) y'          v = cos y
    y' e^y              v = e^y

Usually you try to substitute in the most complicated part of the equation with the hopes of simplifying it. The above table is just a rule of thumb. You might have to modify your guesses. If a substitution does not work (it does not make the equation any simpler), try a different one.
Homogeneous equations

Another type of equations we can solve by substitution are the so-called homogeneous equations. Suppose that we can write the differential equation as

\[ y' = F\left( \frac{y}{x} \right) . \]

Here we try the substitution

\[ v = \frac{y}{x} \quad \text{and therefore} \quad y' = v + x v' . \]

We note that the equation is transformed into

\[ v + x v' = F(v) \quad \text{or} \quad x v' = F(v) - v \quad \text{or} \quad \frac{v'}{F(v) - v} = \frac{1}{x} . \]

Hence an implicit solution is

\[ \int \frac{1}{F(v) - v}\, dv = \ln|x| + C . \]
Example 2.8. Solve
x2 y0 = y2 + xy, y(1) = 1.
2 PART-II 56

We put the equation into the form y' = (y/x)^2 + y/x. Now we substitute v = y/x to get the separable equation
x v' = v^2 + v - v = v^2,
which has a solution
∫ (1/v^2) dv = ln|x| + C,
-1/v = ln|x| + C,
v = -1/(ln|x| + C).
We unsubstitute:
y/x = -1/(ln|x| + C),
y = -x/(ln|x| + C).
We want y(1) = 1, so
1 = y(1) = -1/(ln|1| + C) = -1/C.
Thus C = -1 and the solution we are looking for is
y = x/(1 - ln|x|).
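A quick sympy sanity check (a sketch, taking x > 0 so that |x| = x): the solution y = x/(1 - ln x), with the extraction's dropped minus signs restored, satisfies both the ODE and the initial condition.

```python
import sympy as sp

x = sp.symbols('x', positive=True)
y = x / (1 - sp.log(x))   # the solution found above
# Check the ODE x^2 y' = y^2 + x y and the initial condition y(1) = 1
res1 = sp.simplify(x**2 * sp.diff(y, x) - y**2 - x*y)
assert res1 == 0
assert y.subs(x, 1) == 1
```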
Non-homogeneous FOFDDE reduced to variable separable
Consider the non-homogeneous equation dy/dx = (a1 x + b1 y + c1)/(a2 x + b2 y + c2), where a1, b1, c1, a2, b2, c2 are all constants.
Case-1: If a1/a2 ≠ b1/b2, then choose x = X + h, y = Y + k. Thus
dy/dx = dY/dX = (a1 X + b1 Y + (a1 h + b1 k + c1))/(a2 X + b2 Y + (a2 h + b2 k + c2)).
Now it is possible to choose unique h, k such that a1 h + b1 k + c1 = a2 h + b2 k + c2 = 0. Then
dY/dX = (a1 X + b1 Y)/(a2 X + b2 Y),
which is homogeneous.
Case-2: If a1/a2 = b1/b2, then take z = a1 x + b1 y.

2.2 Lecture-8
2.2.1 Exact Differential equations
The (total) differential of a function f(x, y) is denoted by df and is defined as
df = (∂f/∂x) dx + (∂f/∂y) dy.
Consider the differential equation M(x, y)dx + N(x, y)dy = 0. Suppose there exists a function f(x, y) such that
∂f/∂x = M(x, y),   ∂f/∂y = N(x, y).
Then
0 = M dx + N dy = (∂f/∂x) dx + (∂f/∂y) dy = df,
i.e., df = 0; by integrating, f(x, y) = c is a solution to M dx + N dy = 0. A differential equation M dx + N dy = 0 is said to be
an exact differential equation if there exists a function f(x, y) such that df = M dx + N dy.

Example 2.9. 1. x dy + y dx = 0 is exact as d(xy) = x dy + y dx.

2. e^y dx + (x e^y + 2y) dy = 0 is exact as e^y dx + (x e^y + 2y) dy = d(x e^y + y^2).


Necessary condition for exactness
Suppose M dx + N dy = 0 is an exact differential equation. That is, there exists a function f such that M dx + N dy =
(∂f/∂x) dx + (∂f/∂y) dy. Thus we have ∂f/∂x = M(x, y) and ∂f/∂y = N(x, y). Then
∂M/∂y = ∂/∂y (∂f/∂x)
      = ∂²f/∂y∂x = ∂/∂x (∂f/∂y)
      = ∂N/∂x.
Sufficiency (that is, if ∂M/∂y = ∂N/∂x, then how to find f)
Let ∂M/∂y = ∂N/∂x.
Suppose f(x, y) = ∫ M(x, y) dx + g(y); then ∂f/∂x = M(x, y).
So, now we need to find such a g(y) so as to make f satisfy ∂f/∂y = N(x, y).
That is, ∂f/∂y = ∂/∂y (∫ M(x, y) dx) + dg/dy = N.
So g'(y) = N - ∂/∂y ∫ M(x, y) dx, which can be integrated WRT y thus: g(y) = ∫ (N - ∂/∂y ∫ M(x, y) dx) dy, which
works as long as the integrand is a function of y only.

This will happen if its derivative WRT x is equal to zero. We need to make sure of that, so we try it out:

∂/∂x (N - ∂/∂y ∫ M(x, y) dx) = ∂N/∂x - ∂²/∂x∂y (∫ M(x, y) dx)
 = ∂N/∂x - ∂/∂y (∂/∂x ∫ M(x, y) dx)
 = ∂N/∂x - ∂M/∂y = 0.
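The recipe above (f = ∫M dx + g(y), then pick g so that ∂f/∂y = N) is mechanical enough to script. A sympy sketch on Example 2.9(2), M = e^y, N = x e^y + 2y:

```python
import sympy as sp

x, y = sp.symbols('x y')
# Example 2.9(2): M dx + N dy = 0 with M = e^y, N = x e^y + 2y
M = sp.exp(y)
N = x*sp.exp(y) + 2*y

# Exactness test: dM/dy == dN/dx
assert sp.simplify(sp.diff(M, y) - sp.diff(N, x)) == 0

# Recipe from the text: f = ∫M dx + g(y), with g'(y) = N - d/dy ∫M dx
F = sp.integrate(M, x)                    # x*exp(y)
gprime = sp.simplify(N - sp.diff(F, y))   # 2*y, a function of y only
f = F + sp.integrate(gprime, y)           # x*exp(y) + y**2
assert sp.simplify(f - (x*sp.exp(y) + y**2)) == 0
```

The solution of the DE is then f(x, y) = x e^y + y^2 = c, as in the example.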
x y

Definition 2.10. A nonzero function I(x, y) is called an integrating factor for the differential equation M(x, y)dx +
N(x, y)dy = 0 if the differential equation I(x, y)M(x, y)dx + I(x, y)N(x, y)dy = 0 is exact.

Note that an integrating factor is not unique.


Example 2.11. Consider the DE x dy - y dx = 0. Here M = -y, N = x; clearly ∂M/∂y ≠ ∂N/∂x, but
0 = (1/y^2)(x dy - y dx) = (x/y^2) dy - (1/y) dx,
which is an exact DE; hence 1/y^2 is an integrating factor. It is easy to see 1/x^2, 1/(xy) are also integrating factors.

Example 2.12. I(x, y) = xy2 is an integrating factor of (3y2 + 5x2 y)dx + (3xy + 2x3 )dy = 0.

Theorem 2.13. The function I(x, y) is an integrating factor for M(x, y)dx + N(x, y)dy = 0 if and only if it is a solution
to the partial differential equation
N (∂I/∂x) - M (∂I/∂y) = (∂M/∂y - ∂N/∂x) I.
Proof. I M dx + I N dy = 0 is exact if and only if
∂(IM)/∂y = ∂(IN)/∂x,
if and only if
I (∂M/∂y) + M (∂I/∂y) = I (∂N/∂x) + N (∂I/∂x),
if and only if
N (∂I/∂x) - M (∂I/∂y) = (∂M/∂y - ∂N/∂x) I.

1. There exists an integrating factor that is dependent only on x if and only if (∂M/∂y - ∂N/∂x)/N = f(x), a function of x
only. In such a case, an integrating factor is I(x) = e^{∫ f(x) dx}.

Proof. I is an integrating factor and I is a function of x alone if and only if
N (dI/dx) = (∂M/∂y - ∂N/∂x) I,
if and only if
(1/I)(dI/dx) = (∂M/∂y - ∂N/∂x)/N = f(x).
Since ln I = ∫ f(x) dx, hence I(x) = e^{∫ f(x) dx}.

2. There exists an integrating factor that is dependent only on y if and only if (∂N/∂x - ∂M/∂y)/M = g(y), a function of y
only. In such a case, an integrating factor is I(y) = e^{∫ g(y) dy}.

2.3 Lecture-9
A DE of the form dy/dx + p(x) y = Q(x) is called a linear differential equation. If Q(x) = 0 then dy/dx + p(x) y = 0 is in
variable separable form. If Q(x) ≠ 0 then dy/dx + p(x) y = Q(x) can be written as (p(x) y - Q(x)) dx + dy = 0. Consequently
(∂M/∂y - ∂N/∂x)/N = p(x) is a function of x alone. Hence IF = e^{∫ p(x) dx} is an integrating factor. It is easy to see that the
general solution of this DE is
y · IF = ∫ (IF) Q(x) dx + c.

Example 2.14. Solve y' + y cot x = 2x cosec x.

Solution: It is of the form dy/dx + p(x) y = Q(x) where p(x) = cot x and Q(x) = 2x cosec x.
Integrating factor (IF) = e^{∫ p(x) dx} = e^{∫ cot x dx} = e^{ln sin x} = sin x.
General solution is
y · IF = ∫ Q(x) IF dx + c.
Thus y sin x = ∫ 2x dx + c, or
y sin x = x^2 + c.
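A short sympy check (a sketch) that y = (x^2 + c)/sin x really satisfies y' + y cot x = 2x cosec x for every constant c:

```python
import sympy as sp

x, c = sp.symbols('x c')
y = (x**2 + c) / sp.sin(x)   # from y sin x = x^2 + c
residual = sp.simplify(sp.diff(y, x) + y*sp.cot(x) - 2*x*sp.csc(x))
assert residual == 0
```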

Example 2.15. Solve y' + y = e^{e^x}.

Solution: Here P(x) = 1, IF = e^x and the GS is y e^x = ∫ e^x e^{e^x} dx + c; upon simplifying we get

y e^x = e^{e^x} + c.

Bernoulli equations

There are some forms of equations where there is a general rule for substitution that always works. One such
example is the so-called Bernoulli equation:¹
y' + p(x) y = q(x) y^n.
This equation looks a lot like a linear equation except for the y^n. If n = 0 or n = 1, then the equation is linear and we
can solve it. Otherwise, the substitution v = y^{1-n} transforms the Bernoulli equation into a linear equation. Note that n
need not be an integer.
Example 2.16. Solve
x y' + y(x + 1) + x y^5 = 0,   y(1) = 1.
First, the equation is Bernoulli (p(x) = (x + 1)/x and q(x) = -1). We substitute
v = y^{1-5} = y^{-4},   v' = -4 y^{-5} y'.

In other words, (-1/4) y^5 v' = y'. So

x y' + y(x + 1) + x y^5 = 0,
(-x y^5 / 4) v' + y(x + 1) + x y^5 = 0,
(-x/4) v' + y^{-4}(x + 1) + x = 0,
(-x/4) v' + v(x + 1) + x = 0,
and finally
v' - (4(x + 1)/x) v = 4.
Now the equation is linear. We can use the integrating factor method.
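Both halves of this reduction can be machine-checked. A sympy sketch: sympy's classifier also recognizes the equation as Bernoulli, and substituting y' = (-1/4) y^5 v' (from v = y^{-4}) and dividing by -x y^5 / 4 reproduces the linear equation above. Here Y is a symbol standing in for y(x) and vp for v'(x):

```python
import sympy as sp

x, vp = sp.symbols('x vp')   # vp stands for v'(x)
y = sp.Function('y')

# Example 2.16: x y' + y(x+1) + x y^5 = 0 is a Bernoulli equation
ode = sp.Eq(x*y(x).diff(x) + y(x)*(x + 1) + x*y(x)**5, 0)
hints = sp.classify_ode(ode, y(x))
assert 'Bernoulli' in hints

# Check the reduction: with v = y^(-4), y' = (-1/4) y^5 v'
Y = sp.Symbol('Y', positive=True)   # stands for y(x)
lhs = x*(-sp.Rational(1, 4)*Y**5*vp) + Y*(x + 1) + x*Y**5
reduced = sp.expand(lhs * (-4) / (x * Y**5))
# Should match v' - 4(x+1)/x * v = 4 with v = Y^(-4)
target = vp - 4*(x + 1)/x * Y**-4 - 4
assert sp.simplify(reduced - target) == 0
```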

2.4 Lecture-10
2.4.1 Law of Natural growth
Let x(t) be the population at any time t. Assume that the population grows at a rate directly proportional to the amount
of population present at that time. Then the DE governing this phenomenon is the first order first degree linear equation
dx/dt = kx, where k is the proportionality constant. Here k > 0 since there is a growth phenomenon. The solution of the DE is
x(t) = c e^{kt}.
Example 2.17. Suppose there are 100 bacteria at time 0 and 200 bacteria at time 10s. How many bacteria will there
be 1 minute from time 0 (in 60 seconds)?
¹ There are several things called Bernoulli equations; this is just one of them. The Bernoullis were a prominent Swiss family of mathematicians.
These particular equations are named for Jacob Bernoulli (1654–1705).

First we have to solve the equation. We claim that a solution is given by

P(t) = Cekt ,

where C is a constant. Let us try:


dP
= Ckekt = kP.
dt
And it really is a solution.
OK, so what now? We do not know C and we do not know k. But we know something. We know that P(0) = 100,
and we also know that P(10) = 200. Let us plug these conditions in and see what happens.

100 = P(0) = C e^{k·0} = C,
200 = P(10) = 100 e^{10k}.
Therefore, 2 = e^{10k} or k = (ln 2)/10 ≈ 0.069. So we know that
P(t) = 100 e^{(ln 2)t/10} ≈ 100 e^{0.069t}.
At one minute, t = 60, the population is P(60) = 6400.

2.5 Law of natural decay

The DE dm/dt = -km where k > 0 describes decay phenomena, where it is assumed that the material m(t) at any time t
decays at a rate proportional to the amount present. The solution is m(t) = c e^{-kt}. If initially at t = 0 the amount m0 is
present, then m(t) = m0 e^{-kt}.
Example 2.18. Radium decomposes at a rate proportional to the quantity of radium present. Suppose it is found that in 25 years
approximately 1.1% of a certain quantity of radium has decomposed. Determine approximately how long it will take for
one half of the original amount of radium to decompose.
Solution: (1 - 1.1/100) m0 = m(25) = m0 e^{-25k}. Then (1/2) m0 = m0 e^{-kt} gives t ≈ 1565 years approximately.
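The arithmetic can be reproduced directly (a sketch): solve for k from the 25-year data, then for the half-life. Exact evaluation lands at about 1567 years, in line with the notes' rounded ≈1565.

```python
import math

# m(25) = (1 - 1.1/100) m0 = m0 e^{-25k}  =>  k = -ln(0.989)/25
k = -math.log(0.989) / 25
# Half of the original amount remains when e^{-k t} = 1/2:
t_half = math.log(2) / k   # about 1.57e3 years
```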

2.5.1 Newton's law of cooling


Physical experiments show that the time rate of change dT/dt of the temperature T of a body is proportional to the
difference between T and the temperature TA of the (ambient) surrounding medium:
dT/dt = k(TA - T).
Example 2.19. Suppose Bob made a cup of coffee, and the water was boiling (100 degrees Celsius) at time t = 0
minutes. Suppose Bob likes to drink his coffee at 70 degrees. Let the ambient (room) temperature be 26 degrees.
Furthermore, suppose Bob measured the temperature of the coffee at 1 minute and found that it dropped to 95 degrees.
When should Bob start drinking?
Let T be the temperature of coffee, let TA be the ambient (room) temperature. Then for some k the temperature of
coffee is:
dT
= k(TA T ).
dt

For our setup TA = 26, T(0) = 100, T(1) = 95. We separate variables and integrate (C and D will denote arbitrary
constants):
(1/(T - TA)) dT/dt = -k,
ln(T - TA) = -kt + C,   (note that T - TA > 0)
T - TA = D e^{-kt},
T = D e^{-kt} + TA.

That is, T = 26 + D e^{-kt}. We plug in the first condition 100 = T(0) = 26 + D and hence D = 74. We have T =
26 + 74 e^{-kt}. We plug in 95 = T(1) = 26 + 74 e^{-k}. Solving for k we get k = -ln((95 - 26)/74) ≈ 0.07. Now we solve for the
time t that gives us a temperature of 70 degrees. That is, we solve 70 = 26 + 74 e^{-0.07t} to get t = -ln((70 - 26)/74)/0.07 ≈ 7.43
minutes. So Bob can begin to drink the coffee at about 7 and a half minutes from the time Bob made it. Probably about
the amount of time it took us to calculate how long it would take.
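The two numerical steps (finding k, then t) are easy to reproduce; a minimal sketch:

```python
import math

TA = 26
# From 95 = 26 + 74 e^{-k}:
k = -math.log((95 - TA) / 74.0)
# From 70 = 26 + 74 e^{-k t}:
t = -math.log((70 - TA) / 74.0) / k
```

Here k comes out near 0.07 and t near 7.43 minutes, matching the values above.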

2.5.2 Orthogonal Trajectories of curves


Given one (first) family of curves, if there exists another (second) family of curves such that each curve of the first
family cuts each curve of the second family at right angles, then the first family is said to be the orthogonal trajectories of
the second family, and vice versa.
Example 2.20. 1. Meridians and parallels on world globe.
2. Lines of heat flow and isothermal curves.
3. Curves of electric force and equi-potential lines.
4. Curves of steepest descent and contour lines on a map.
Find Orthogonal trajectories (OT) of each of the following one parameter family of curves.
1. xy = c
Solution: The DE is y + x (dy/dx) = 0. That is, dy/dx = -y/x. The DE of the corresponding OT is dy/dx = x/y. Solving, we get y^2 - x^2 = c.

2. Show that the family of parabolas y^2 = 4cx + 4c^2 is self-orthogonal.

Solution: First we will find the DE corresponding to the given family of curves:

2y y' = 4c,

or c = y y'/2. By substituting c into the given equation we get y^2 = 2x y y' + 4(y y'/2)^2. Thus

y^2 = 2x y y' + y^2 (y')^2.

It is easy to check that by replacing y' by -1/y' we get the same DE.
Example 2.21. Suppose a car drives at a speed e^{t/2} meters per second, where t is time in seconds. How far did the car
get in 2 seconds (starting at t = 0)? How far in 10 seconds?
Let x denote the distance the car traveled. The equation is

x' = e^{t/2}.

We can just integrate this equation to get that

x(t) = 2e^{t/2} + C.
We still need to figure out C. We know that when t = 0, then x = 0. That is, x(0) = 0. So

0 = x(0) = 2e^{0/2} + C = 2 + C.

Thus C = -2 and
x(t) = 2e^{t/2} - 2.
Now we just plug in to get where the car is at 2 and at 10 seconds. We obtain

x(2) = 2e^{2/2} - 2 ≈ 3.44 meters,   x(10) = 2e^{10/2} - 2 ≈ 294 meters.

Example 2.22. Suppose that the car accelerates at a rate of t^2 m/s^2. At time t = 0 the car is at the 1 meter mark and is
traveling at 10 m/s. Where is the car at time t = 10?
Well this is actually a second order problem. If x is the distance traveled, then x' is the velocity, and x'' is the
acceleration. The equation with initial conditions is

x'' = t^2,   x(0) = 1,   x'(0) = 10.

What if we let x' = v? Then we have the problem

v' = t^2,   v(0) = 10.

Once we solve for v, we can integrate and find x.

2.6 Lecture-11
Theorem 2.23 (Superposition). Suppose y1 and y2 are two solutions of the homogeneous equation y'' + p(x)y' + q(x)y = 0. Then
y(x) = C1 y1(x) + C2 y2(x)
also solves y'' + p(x)y' + q(x)y = 0 for arbitrary constants C1 and C2.
That is, we can add solutions together and multiply them by constants to obtain new and different solutions. We call
the expression C1 y1 + C2 y2 a linear combination of y1 and y2. Let us prove this theorem; the proof is very enlightening and illustrates how
linear equations work.
Proof: Let y = C1 y1 + C2 y2. Then

y'' + p y' + q y = (C1 y1 + C2 y2)'' + p(C1 y1 + C2 y2)' + q(C1 y1 + C2 y2)
 = C1 y1'' + C2 y2'' + C1 p y1' + C2 p y2' + C1 q y1 + C2 q y2
 = C1 (y1'' + p y1' + q y1) + C2 (y2'' + p y2' + q y2)
 = C1 · 0 + C2 · 0 = 0.

The proof becomes even simpler to state if we use the operator notation. An operator is an object that eats functions and
spits out functions (kind of like what a function is, but a function eats numbers and spits out numbers). Define the
operator L by
L y = y'' + p y' + q y.

The differential equation now becomes L y = 0. The operator (and the equation) L being linear means that L(C1 y1 +
C2 y2) = C1 L y1 + C2 L y2. The proof above becomes

L y = L(C1 y1 + C2 y2) = C1 L y1 + C2 L y2 = C1 · 0 + C2 · 0 = 0.

Two different solutions to the second equation y'' - k^2 y = 0 are y1 = cosh(kx) and y2 = sinh(kx). Let us remind
ourselves of the definitions, cosh x = (e^x + e^{-x})/2 and sinh x = (e^x - e^{-x})/2. Therefore, these are solutions by superposition as they
are linear combinations of the two exponential solutions.
The functions sinh and cosh are sometimes more convenient to use than the exponential. Let us review some of
their properties.

cosh 0 = 1,   sinh 0 = 0,
(d/dx) cosh x = sinh x,   (d/dx) sinh x = cosh x,
cosh^2 x - sinh^2 x = 1.

2.6.1 General solution of y''(x) + p y'(x) + q y(x) = 0, where p and q are constants
Let y = e^{mx} be a solution of
y''(x) + p y'(x) + q y(x) = 0. (55)
Then e^{mx}(m^2 + pm + q) = 0, so m^2 + pm + q = 0. The equation

m^2 + pm + q = 0 (56)

is called the auxiliary equation or characteristic equation of (55).

Case | Roots of (56) | Basis of (55) | General solution of (55)
I | Distinct real roots m1, m2 | e^{m1 x}, e^{m2 x} | y = c1 e^{m1 x} + c2 e^{m2 x}
II | Real double root m = -p/2 | e^{-px/2}, x e^{-px/2} | y = c1 e^{-px/2} + c2 x e^{-px/2}
III | Complex conjugate roots m1 = -p/2 + iω, m2 = -p/2 - iω | e^{-px/2} cos ωx, e^{-px/2} sin ωx | y = c1 e^{-px/2} cos ωx + c2 e^{-px/2} sin ωx, i.e. y = e^{-px/2}(c1 cos ωx + c2 sin ωx), where ω = sqrt(q - p^2/4)

Example: y'' + 3y' + 2y = 0 has auxiliary equation λ^2 + 3λ + 2 = 0 with roots λ1 = -1, λ2 = -2, so the general
solution is
y(x) = C1 e^{-x} + C2 e^{-2x} (57)
This corresponds to overdamping.
Example: y'' + 2y' + y = 0 has auxiliary equation λ^2 + 2λ + 1 = 0 with two equal roots λ = -1 and so the general
solution is
y(x) = (C1 + C2 x) e^{-x} (58)

Example: If the auxiliary equation is λ^2 + λ + 1 = 0 with complex roots λ = -1/2 ± (√3/2) i, the general complex
solution is
y(x) = C1 e^{(-1/2 + i√3/2)x} + C2 e^{(-1/2 - i√3/2)x} (59)
where C1 and C2 are complex constants. The general real solution can be obtained by imposing the constraint
C2 = C1* (the complex conjugate):
y(x) = e^{-x/2} C1 (cos(√3 x/2) + i sin(√3 x/2)) + c.c. (60)
Writing C1 = (1/2)(A - iB) where A and B are real constants gives
y(x) = e^{-x/2} (A cos(√3 x/2) + B sin(√3 x/2)) (61)
This is the underdamped case; it still oscillates.

2.7 Lecture-12
Particular integral of y''(x) + p y'(x) + q y(x) = R(x), where p and q are constants: Method of Undetermined
Coefficients
All solutions, or the general solution, of

y''(x) + P(x) y'(x) + Q(x) y(x) = R(x) (62)

are given by
y(x) = C1 y1(x) + C2 y2(x) + y_p(x) (63)
where y1, y2 are linearly independent solutions of the corresponding homogeneous equation

y''(x) + P(x) y'(x) + Q(x) y(x) = 0 (64)

and y_p(x) is a solution of the full equation. C1 and C2 are arbitrary constants. y_p(x) is called a particular integral. The
general solution is sometimes written
y(x) = y_c(x) + y_p(x) (65)
where y_c(x) = C1 y1(x) + C2 y2(x) is called the complementary function. It is the general solution of the homogeneous
form of the ODE.
The trick is to somehow, in a smart way, guess one particular solution to

y'' + 5y' + 6y = 2x + 1. (66)

Note that 2x + 1 is a polynomial, and the left hand side of the equation will be a polynomial if we let y be a polynomial
of the same degree. Let us try
y_p = Ax + B.
We plug in to obtain

y_p'' + 5y_p' + 6y_p = (Ax + B)'' + 5(Ax + B)' + 6(Ax + B) = 0 + 5A + 6Ax + 6B = 6Ax + (5A + 6B).

So 6Ax + (5A + 6B) = 2x + 1. Therefore, A = 1/3 and B = -1/9. That means y_p = x/3 - 1/9 = (3x - 1)/9. Solving the
complementary problem (exercise!) we get

y_c = C1 e^{-2x} + C2 e^{-3x}.

Hence the general solution to (66) is

y = C1 e^{-2x} + C2 e^{-3x} + (3x - 1)/9.
Now suppose we are further given some initial conditions. For example, y(0) = 0 and y'(0) = 1/3. First find y' =
-2C1 e^{-2x} - 3C2 e^{-3x} + 1/3. Then
0 = y(0) = C1 + C2 - 1/9,   1/3 = y'(0) = -2C1 - 3C2 + 1/3.

We solve to get C1 = 1/3 and C2 = -2/9. The particular solution we want is

y(x) = (1/3) e^{-2x} - (2/9) e^{-3x} + (3x - 1)/9 = (3e^{-2x} - 2e^{-3x} + 3x - 1)/9.
Check that y really solves the equation (66) and the given initial conditions.
Note: A common mistake is to solve for constants using the initial conditions with y_c and only add the particular
solution y_p after that. That will not work. You need to first compute y = y_c + y_p and only then solve for the constants
using the initial conditions.
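The suggested check of y = (3e^{-2x} - 2e^{-3x} + 3x - 1)/9, with the dropped minus signs restored, can be automated; a sympy sketch:

```python
import sympy as sp

x = sp.symbols('x')
y = (3*sp.exp(-2*x) - 2*sp.exp(-3*x) + 3*x - 1) / 9
# Equation (66): y'' + 5y' + 6y = 2x + 1, with y(0) = 0, y'(0) = 1/3
res = sp.simplify(sp.diff(y, x, 2) + 5*sp.diff(y, x) + 6*y - (2*x + 1))
assert res == 0
assert y.subs(x, 0) == 0
assert sp.diff(y, x).subs(x, 0) == sp.Rational(1, 3)
```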

For example: consider

y'' + 3y' + 2y = e^x. (67)
Trying y_p = C e^x where C is a constant gives

(1 + 3 + 2) C e^x = e^x (68)

so
C = 1/6 (69)
and, therefore,
y_p(x) = (1/6) e^x (70)
is a particular integral. To obtain the general solution we require the general solution of the homogeneous
equation, see the earlier example, y_c = C1 e^{-x} + C2 e^{-2x}. Thus, the general solution is
y(x) = y_c + y_p = C1 e^{-x} + C2 e^{-2x} + (1/6) e^x (71)

If the right hand side is a solution of the homogeneous ODE this doesn't work.

For example: consider y'' + 3y' + 2y = e^{-x}. Trying y_p = C e^{-x} gives (1 - 3 + 2) C e^{-x} = 0, which cannot equal e^{-x}. Much as in the equal-root case for
the homogeneous equation, try y_p = C x e^{-x}. Differentiating,

y_p' = C e^{-x} - C x e^{-x},

y_p'' = -2C e^{-x} + C x e^{-x}. (72)

Inserting these into the ODE gives

C x e^{-x} (1 - 3 + 2) + C e^{-x} (-2 + 3) = e^{-x} (73)

so that C = 1, or y_p = x e^{-x}. The general solution can be written

y = (C1 + x) e^{-x} + C2 e^{-2x} (74)

Now, for more general cases, we exploit the linearity. If f is a sum of exponentials, as, for example, sin x =
(e^{ix} - e^{-ix})/(2i), just add up the particular integrals corresponding to each exponential term; so, for
example, if
y'' + 3y' + 2y = sin x. (75)
We solve
y'' + 3y' + 2y = (1/(2i)) e^{ix} (76)
and
y'' + 3y' + 2y = -(1/(2i)) e^{-ix} (77)
and add the two solutions.
A right hand side consisting of exponentials, sines, and cosines can be handled similarly. For example,

y'' + 2y' + 2y = cos(2x).

Let us find some y p . We start by guessing the solution includes some multiple of cos(2x). We may have to also add a
multiple of sin(2x) to our guess since derivatives of cosine are sines. We try

y p = A cos(2x) + B sin(2x).

We plug y_p into the equation and we get

-4A cos(2x) - 4B sin(2x) - 4A sin(2x) + 4B cos(2x) + 2A cos(2x) + 2B sin(2x) = cos(2x).

The left hand side must equal the right hand side. We group terms and we get that -4A + 4B + 2A = 1 and -4B - 4A +
2B = 0. So -2A + 4B = 1 and 4A + 2B = 0, and hence A = -1/10 and B = 1/5. So

y_p = A cos(2x) + B sin(2x) = (2 sin(2x) - cos(2x))/10.
Similarly, if the right hand side contains exponentials we try exponentials. For example, for

L y = e^{3x},

we will try y = A e^{3x} as our guess and try to solve for A.


When the right hand side is a multiple of sines, cosines, exponentials, and polynomials, we can use the product rule
for differentiation to come up with a guess. We need to guess a form for y_p such that L y_p is of the same form, and has
all the terms needed to match the right hand side. For example,

L y = (1 + 3x^2) e^{-x} cos(πx).

For this equation, we will guess

y_p = (A + Bx + Cx^2) e^{-x} cos(πx) + (D + Ex + Fx^2) e^{-x} sin(πx).

We will plug in and then hopefully get equations that we can solve for A, B, C, D, E, F. As you can see this can make for
a very long and tedious calculation very quickly. C'est la vie!
There is one hiccup in all this. It could be that our guess actually solves the associated homogeneous equation. That
is, suppose we have
y'' - 9y = e^{3x}.
We would love to guess y = A e^{3x}, but if we plug this into the left hand side of the equation we get

y'' - 9y = 9A e^{3x} - 9A e^{3x} = 0 ≠ e^{3x}.

There is no way we can choose A to make the left hand side be e^{3x}. The trick in this case is to multiply our guess by x to
get rid of duplication with the complementary solution. That is, first we compute y_c (solution to L y = 0):

y_c = C1 e^{-3x} + C2 e^{3x},

and we note that the e^{3x} term is a duplicate with our desired guess. We modify our guess to y = A x e^{3x} and notice there
is no duplication anymore. Let us try. Note that y' = A e^{3x} + 3A x e^{3x} and y'' = 6A e^{3x} + 9A x e^{3x}. So
y'' - 9y = 6A e^{3x} + 9A x e^{3x} - 9A x e^{3x} = 6A e^{3x}.
So 6A e^{3x} is supposed to equal e^{3x}. Hence, 6A = 1 and so A = 1/6. Thus we can now write the general solution as
y = y_c + y_p = C1 e^{-3x} + C2 e^{3x} + (1/6) x e^{3x}.
It is possible that multiplying by x does not get rid of all duplication. For example,
y'' - 6y' + 9y = e^{3x}.
The complementary solution is y_c = C1 e^{3x} + C2 x e^{3x}. Guessing y = A x e^{3x} would not get us anywhere. In this case we
want to guess y_p = A x^2 e^{3x}. Basically, we want to multiply our guess by x until all duplication is gone. But no more!
Multiplying too many times will not work.
Finally, what if the right hand side has several terms, such as
L y = e^{2x} + cos x.
In this case we find u that solves L u = e^{2x} and v that solves L v = cos x (that is, do each term separately). Then note that
if y = u + v, then L y = e^{2x} + cos x. This is because L is linear; we have L y = L(u + v) = L u + L v = e^{2x} + cos x.
Term in R(x) | Choice for y_p
K e^{ax} | A e^{ax}
K cos ωx or K sin ωx | A cos ωx + B sin ωx
K x^n (n = 0, 1, 2, ...) | A0 + A1 x + A2 x^2 + ... + An x^n
K e^{ax} cos ωx or K e^{ax} sin ωx | e^{ax}(A cos ωx + B sin ωx)
K x^n cos ωx or K x^n sin ωx | (A0 + A1 x + A2 x^2 + ... + An x^n) cos ωx + (B0 + B1 x + B2 x^2 + ... + Bn x^n) sin ωx
K e^{ax} x^n cos ωx or K e^{ax} x^n sin ωx | e^{ax}[(A0 + A1 x + A2 x^2 + ... + An x^n) cos ωx + (B0 + B1 x + B2 x^2 + ... + Bn x^n) sin ωx]

Important: The above table holds only when NO term in the trial function shows up in the complementary solution.
If any term in the trial function does appear in the complementary solution, the trial function should be multiplied by
x to make the particular solution linearly independent from the complementary solution. If the modified trial function
still has common terms with the complementary solution, another x must be multiplied until no common term exists.

Pros and Cons of the Method of Undetermined Coefficients:

This method is very easy to perform. However, it has two limitations.
This method can be applied only to linear differential equations with constant coefficients.
And the non-homogeneous term can only contain simple functions such as e^{ax}, cos ωx, sin ωx and x^n, so that the trial
function can be effectively guessed.

2.8 Lecture-13
Variation of parameters
The method of undetermined coefficients will work for many basic problems that crop up. But it does not work all
the time. It only works when the right hand side of the equation Ly = f (x) has only finitely many linearly independent
derivatives, so that we can write a guess that consists of them all.
Definition 2.24. Let y1 and y2 be two functions of x. Then the Wronskian determinant or, briefly, the Wronskian is
denoted by W(y1, y2), and is defined as

W(y1, y2) = det [ y1, y2 ; y1', y2' ] = y1 y2' - y1' y2. (78)

If y1 and y2 are two independent functions of x, then W(y1, y2) ≠ 0.


Example 2.25.
1. W(e^{m1 x}, e^{m2 x}) = e^{m1 x} · m2 e^{m2 x} - m1 e^{m1 x} · e^{m2 x} = (m2 - m1) e^{(m1 + m2)x}.
2. W(e^{mx}, x e^{mx}) = e^{mx}(m x e^{mx} + e^{mx}) - m e^{mx} · x e^{mx} = e^{2mx}.
3. W(cos ax, sin ax) = cos ax · a cos ax + a sin ax · sin ax = a.

2.8.1 Particular integral of y''(x) + P(x)y'(x) + Q(x)y(x) = R(x):

Method of Variation of parameters.
Let y_h = c1 y1 + c2 y2 be the complementary function of y''(x) + P(x)y'(x) + Q(x)y(x) = R(x). The method of variation
of parameters involves replacing the constants c1 and c2 (here regarded as parameters in y_h) by functions A(x) and B(x) to be
determined so that a particular integral of y''(x) + P(x)y'(x) + Q(x)y(x) = R(x) is given by

y_p = A(x) y1 + B(x) y2.

It is easy to show that A(x) and B(x) are computed by the formulas
A = -∫ (y2 R / W) dx   &   B = ∫ (y1 R / W) dx,
where W = W(y1, y2), the Wronskian of y1 and y2.
Example 2.26. Solve (D^2 + 1)y = cosec x.
Solution: The general solution of (D^2 + 1)y = 0 is y_c = c1 cos x + c2 sin x. First we compute the Wronskian:

W = W(cos x, sin x) = cos^2 x + sin^2 x = 1.

Now the particular solution is y_p = A cos x + B sin x, where
A = -∫ (y2 R / W) dx = -∫ (sin x cosec x / 1) dx = -x,
B = ∫ (y1 R / W) dx = ∫ (cos x cosec x / 1) dx = ln|sin x|.
GS is
y = y_c + y_p = c1 cos x + c2 sin x - x cos x + ln|sin x| sin x.
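The general solution can be verified wholesale; a sympy sketch (taking 0 < x < π so that |sin x| = sin x):

```python
import sympy as sp

x, c1, c2 = sp.symbols('x c1 c2')
y = c1*sp.cos(x) + c2*sp.sin(x) - x*sp.cos(x) + sp.log(sp.sin(x))*sp.sin(x)
# (D^2 + 1)y should equal cosec x
residual = sp.simplify(sp.diff(y, x, 2) + y - sp.csc(x))
assert residual == 0
```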

Consider
y'' + y = tan x.
Note that each new derivative of tan x looks completely different and cannot be written as a linear combination of the
previous derivatives. We get sec^2 x, 2 sec^2 x tan x, etc.
This equation calls for a different method. We present the method of variation of parameters, which will handle any equation of the form
L y = f(x), provided we can solve certain integrals. For simplicity, we will restrict ourselves to second order constant
coefficient equations, but the method will work for higher order equations just as well (the computations will be more tedious).
The method also works for equations with nonconstant coefficients, provided we can solve the associated homogeneous
equation.
Perhaps it is best to explain this method by example. Let us try to solve the equation
L y = y'' + y = tan x.
First we find the complementary solution (solution to L y_c = 0). We get y_c = C1 y1 + C2 y2, where y1 = cos x and y2 = sin x.
Now to try to find a solution to the nonhomogeneous equation we try
y_p = y = u1 y1 + u2 y2,
where u1 (= A) and u2 (= B) are functions and not constants. We are trying to satisfy L y = tan x. That gives us one
condition on the functions u1 and u2. Compute (note the product rule!)
y' = (u1' y1 + u2' y2) + (u1 y1' + u2 y2').
We can still impose one more condition at our discretion to simplify computations (we have two unknown functions, so
we should be allowed two conditions). We require that (u1' y1 + u2' y2) = 0. This makes computing the second derivative
easier.
y' = u1 y1' + u2 y2',
y'' = (u1' y1' + u2' y2') + (u1 y1'' + u2 y2'').
Since y1 and y2 are solutions to y'' + y = 0, we know that y1'' = -y1 and y2'' = -y2. (Note: If the equation was instead
y'' + p(x)y' + q(x)y = 0 we would have y_i'' = -p(x)y_i' - q(x)y_i.) So
y'' = (u1' y1' + u2' y2') - (u1 y1 + u2 y2).
We have (u1 y1 + u2 y2) = y and so
y'' = (u1' y1' + u2' y2') - y,
and hence
y'' + y = L y = u1' y1' + u2' y2'.
For y to satisfy L y = f(x) we must have f(x) = u1' y1' + u2' y2'.
So what we need to solve are the two equations (conditions) we imposed on u1 and u2:

u1' y1 + u2' y2 = 0,
u1' y1' + u2' y2' = f(x).

We can now solve for u1' and u2' in terms of f(x), y1 and y2. We will always get these formulas for any L y = f(x), where
L y = y'' + p(x)y' + q(x)y. There is a general formula for the solution we can just plug into, but it is better to just repeat
what we do below. In our case the two equations become
u1' cos(x) + u2' sin(x) = 0,
-u1' sin(x) + u2' cos(x) = tan(x).

Hence

u1' cos(x) sin(x) + u2' sin^2(x) = 0,

-u1' sin(x) cos(x) + u2' cos^2(x) = tan(x) cos(x) = sin(x).

And thus

u2' (sin^2(x) + cos^2(x)) = sin(x),
u2' = sin(x),
u1' = -sin^2(x)/cos(x) = -tan(x) sin(x).

Now we need to integrate u1' and u2' to get u1 and u2.

u1 = ∫ u1' dx = ∫ -tan(x) sin(x) dx = (1/2) ln|(sin(x) - 1)/(sin(x) + 1)| + sin(x),
u2 = ∫ u2' dx = ∫ sin(x) dx = -cos(x).

So our particular solution is

y_p = u1 y1 + u2 y2 = (1/2) cos(x) ln|(sin(x) - 1)/(sin(x) + 1)| + cos(x) sin(x) - cos(x) sin(x)
    = (1/2) cos(x) ln|(sin(x) - 1)/(sin(x) + 1)|.

The general solution to y'' + y = tan x is, therefore,

y = C1 cos(x) + C2 sin(x) + (1/2) cos(x) ln|(sin(x) - 1)/(sin(x) + 1)|.
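Since |(sin x - 1)/(sin x + 1)| = (1 - sin x)/(1 + sin x), the particular solution can be checked symbolically; a sympy sketch:

```python
import sympy as sp

x = sp.symbols('x')
yp = sp.cos(x)/2 * sp.log((1 - sp.sin(x))/(1 + sp.sin(x)))
residual = sp.simplify(sp.diff(yp, x, 2) + yp - sp.tan(x))
assert residual == 0
```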

2.9 Lecture-14
2.9.1 LDEs(linear differential equations) with variable coefficients which can be converted to LDEs with con-
stant coefficients
a0 (ax + b)^2 (d^2y/dx^2) + b0 (ax + b)(dy/dx) + c0 y = R, (79)
where a, b, a0, b0, c0 are constants. Idea: change of independent variable from x to t with the following assignment:

ax + b = e^t (or t = ln(ax + b)).

Or
x = (e^t - b)/a.
With this transformation, y also becomes a function of t. Also we can express d^2y/dx^2, dy/dx in terms of d^2y/dt^2, dy/dt, and Equation (79)
becomes

a0 a^2 (d^2y/dt^2) + (b0 a - a0 a^2)(dy/dt) + c0 y = R((e^t - b)/a).

Observe that the above equation is an LDE with constant coefficients.


If b = 0, then (79) becomes
a0 a^2 x^2 (d^2y/dx^2) + b0 a x (dy/dx) + c0 y = R
and is called a Cauchy-Euler equation.
The Euler equation
For a(x)y''(x) + b(x)y'(x) + c(x)y(x) = 0 there is no general solution method when the coefficients aren't constants. One important
case that can be solved is Euler's equation,

α x^2 y'' + β x y' + γ y = 0 (80)

where α, β, γ are constants. This equation arises when studying Laplace's equation, the most important partial differential
equation. Euler's equation is solved by transforming it into the constant coefficient case using a change of variable:

x = e^z (81)

Using
1 = dx/dx = (d/dx) e^z = e^z (dz/dx) (82)
so dz/dx = e^{-z}; this gives
x (dy/dx) = e^z (dy/dz)(dz/dx) = dy/dz (83)
and
x^2 (d^2y/dx^2) = x^2 (d/dx)(dy/dx) = x^2 (d/dx)((1/x)(dy/dz))
 = -(dy/dz) + (d^2y/dz^2) (84)
so Euler's equation becomes
α (d^2y/dz^2) + (β - α)(dy/dz) + γ y = 0 (85)
which has constant coefficients. The auxiliary equation is

α λ^2 + (β - α) λ + γ = 0, (86)

with general solution

y_c = C1 e^{λ1 z} + C2 e^{λ2 z} = C1 x^{λ1} + C2 x^{λ2} (87)
where λ1 and λ2 are roots of the auxiliary equation. If λ1 = λ2 then

y_c = C1 e^{λ1 z} + C2 z e^{λ1 z} = C1 x^{λ1} + C2 (log x) x^{λ1} (88)

for x > 0.
Example 2.27. Solve x^2 y'' - 3x y' + 5y = x^2 sin(log x).
Soln: Take x = e^t, t = ln x and dx/dt = e^t.
dy/dx = (dy/dt)(dt/dx) = e^{-t}(dy/dt) = (1/x)(dy/dt)
d^2y/dx^2 = (d/dx)(e^{-t} dy/dt)
 = (d/dt)(e^{-t} dy/dt)(dt/dx)
 = (e^{-t}(d^2y/dt^2) - e^{-t}(dy/dt)) e^{-t}
 = e^{-2t}((d^2y/dt^2) - (dy/dt))

Thus the equation x^2 y'' - 3x y' + 5y = x^2 sin(log x) becomes
(d^2y/dt^2) - 4(dy/dt) + 5y = e^{2t} sin t.
By solving we get
y(t) = e^{2t}(c1 cos t + c2 sin t) - (t e^{2t}/2) cos t.
By replacing t by ln x we get

y(x) = x^2 (c1 cos(ln x) + c2 sin(ln x)) - (x^2/2) ln x cos(ln x).
2
Example 2.28. Solve (2x + 3)^2 y'' + (2x + 3) y' - 2y = 24x^2.
Solution: Take 2x + 3 = e^t and proceed as above; we get

y(x) = c1 u^{-1/2} + c2 u + (3/5) u^2 - 6u ln u - 27,

where u = 2x + 3.

3 Part-III
3.1 Lecture-15
3.1.1 Introduction
The Laplace transform can be used to solve differential equations. Besides being a direct and efficient alternative to
the variation of parameters and undetermined coefficients methods, the Laplace method is particularly advantageous for
input terms that are piecewise-defined, periodic, or impulsive.
Definition 3.1. The Laplace integral or Laplace transform of a function f(t) defined for 0 ≤ t < ∞ is denoted by
L[f(t)] and is defined as
L[f(t)] = ∫_0^∞ e^{-st} f(t) dt = F[s].

Note in the integral ∫_0^∞ e^{-st} f(t) dt the integration is done with respect to t and the limits 0 and ∞ are substituted (not as
direct substitution) for t. Hence L[f(t)] is a function of s, denoted by F[s].


Proposition 3.2. L[1] = 1/s, where s > 0.
Solution:
L[1] = ∫_0^∞ e^{-st} · 1 dt
 = [-e^{-st}/s]_0^∞
 = -lim_{t→∞} e^{-st}/s + 1/s
 = 1/s.
Proposition 3.3. L[e^{at}] = 1/(s - a), where s > a.
Solution:
L[e^{at}] = ∫_0^∞ e^{-st} e^{at} dt
 = ∫_0^∞ e^{-(s-a)t} dt
 = [-e^{-(s-a)t}/(s - a)]_0^∞
 = -lim_{t→∞} e^{-(s-a)t}/(s - a) + 1/(s - a)
 = 1/(s - a).
Similarly we can prove L[e^{-at}] = 1/(s + a).

Proposition 3.4. Find the Laplace transform of

f(t) = 1 for 0 ≤ t < 2,   f(t) = t - 2 for 2 ≤ t.

Solution:
L[f(t)] = ∫_0^∞ e^{-st} f(t) dt
 = ∫_0^2 e^{-st} f(t) dt + ∫_2^∞ e^{-st} f(t) dt
 = ∫_0^2 e^{-st} · 1 dt + ∫_2^∞ e^{-st}(t - 2) dt   [Recall ∫uv = u∫v - ∫(u' ∫v)]
 = [-e^{-st}/s]_0^2 + ([-(t - 2)e^{-st}/s]_2^∞ + (1/s)∫_2^∞ e^{-st} dt)
 = (1/s)(1 - e^{-2s}) + (1/s^2) e^{-2s}.
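The two integrals can be reproduced with sympy to confirm the closed form; a sketch (with s > 0 so both integrals converge):

```python
import sympy as sp

t = sp.symbols('t', positive=True)
s = sp.symbols('s', positive=True)
I1 = sp.integrate(sp.exp(-s*t)*1, (t, 0, 2))
I2 = sp.integrate(sp.exp(-s*t)*(t - 2), (t, 2, sp.oo))
F = sp.simplify(I1 + I2)
expected = (1 - sp.exp(-2*s))/s + sp.exp(-2*s)/s**2
assert sp.simplify(F - expected) == 0
```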

3.1.2 Linearity property


Theorem 3.5. L[a f(t) + b g(t)] = a L[f(t)] + b L[g(t)].
Proof.
L[a f(t) + b g(t)] = ∫_0^∞ e^{-st}(a f(t) + b g(t)) dt
 = a ∫_0^∞ e^{-st} f(t) dt + b ∫_0^∞ e^{-st} g(t) dt
 = a L[f(t)] + b L[g(t)].

Proposition 3.6. L[sinh at] = a/(s^2 - a^2).

Solution:

L[sinh at] = L[(e^{at} - e^{-at})/2]
 = (1/2) L[e^{at}] - (1/2) L[e^{-at}]   (by linearity property)
 = (1/2)(1/(s - a) - 1/(s + a))
 = (1/2) · 2a/(s^2 - a^2)
 = a/(s^2 - a^2).
Proposition 3.7. L[cosh at] = s/(s^2 - a^2).

Solution:

L[cosh at] = L[(e^{at} + e^{-at})/2]
 = (1/2) L[e^{at}] + (1/2) L[e^{-at}]   (by linearity property)
 = (1/2)(1/(s - a) + 1/(s + a))
 = (1/2) · 2s/(s^2 - a^2)
 = s/(s^2 - a^2).

Proposition 3.8. $\mathcal{L}[\cos at] = \frac{s}{s^2+a^2}$ and $\mathcal{L}[\sin at] = \frac{a}{s^2+a^2}$.
Solution:
\[ \mathcal{L}[e^{iat}] = \frac{1}{s-ia} = \frac{1}{s-ia}\cdot\frac{s+ia}{s+ia} = \frac{s+ia}{s^2+a^2}, \]
so
\[ \mathcal{L}[\cos at + i\sin at] = \frac{s+ia}{s^2+a^2}, \qquad \mathcal{L}[\cos at] + i\,\mathcal{L}[\sin at] = \frac{s}{s^2+a^2} + i\,\frac{a}{s^2+a^2} \quad \text{(by linearity)}. \]
Now, by comparing real and imaginary parts on both sides we get
\[ \mathcal{L}[\cos at] = \frac{s}{s^2+a^2} \quad \text{and} \quad \mathcal{L}[\sin at] = \frac{a}{s^2+a^2}. \]
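The real/imaginary-part argument above can be checked directly with Python's complex arithmetic (a small sketch; the values of $s$ and $a$ are arbitrary test points):

```python
# Read L[cos at] and L[sin at] off the real and imaginary parts of
# L[e^{iat}] = 1/(s - ia), exactly as in the derivation above.
s, a = 3.0, 2.0
z = 1.0 / complex(s, -a)                         # 1/(s - ia)
assert abs(z.real - s / (s**2 + a**2)) < 1e-12   # matches L[cos at]
assert abs(z.imag - a / (s**2 + a**2)) < 1e-12   # matches L[sin at]
```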

Table of some trigonometric formulas

$\sin(A+B) = \sin A\cos B + \cos A\sin B$
$\cos(A+B) = \cos A\cos B - \sin A\sin B$
$\sin(A+B) + \sin(A-B) = 2\sin A\cos B$
$\sin(A+B) - \sin(A-B) = 2\cos A\sin B$
$\cos(A+B) + \cos(A-B) = 2\cos A\cos B$
$\cos(A+B) - \cos(A-B) = -2\sin A\sin B$
$\sin 2\theta = 2\sin\theta\cos\theta$
$\cos 2\theta = \cos^2\theta - \sin^2\theta = 2\cos^2\theta - 1 = 1 - 2\sin^2\theta$
$\cos 3\theta = 4\cos^3\theta - 3\cos\theta$
$\sin 3\theta = 3\sin\theta - 4\sin^3\theta$
Proposition 3.9. Find $\mathcal{L}[\cos kt\sin kt]$.
Solution:
\[ \mathcal{L}[\cos kt\sin kt] = \mathcal{L}\left[\frac{\sin 2kt}{2}\right] = \frac{1}{2}\mathcal{L}[\sin 2kt] \quad \text{(by linearity)} = \frac{1}{2}\cdot\frac{2k}{s^2+4k^2}. \]

Proposition 3.10. Find $\mathcal{L}[\cos^2(kt)]$.
Solution:
\[ \mathcal{L}[\cos^2(kt)] = \mathcal{L}\left[\frac{1+\cos 2kt}{2}\right] = \frac{1}{2}\left(\mathcal{L}[1] + \mathcal{L}[\cos 2kt]\right) \quad \text{(by linearity)} = \frac{1}{2}\left(\frac{1}{s} + \frac{s}{s^2+4k^2}\right). \]

Proposition 3.11. Find $\mathcal{L}[\sin^3(kt)]$.
Solution:
\[ \mathcal{L}[\sin^3(kt)] = \mathcal{L}\left[\frac{3\sin kt - \sin 3kt}{4}\right] = \frac{3}{4}\mathcal{L}[\sin kt] - \frac{1}{4}\mathcal{L}[\sin 3kt] \quad \text{(by linearity)} = \frac{3}{4}\cdot\frac{k}{s^2+k^2} - \frac{1}{4}\cdot\frac{3k}{s^2+9k^2}. \]
Proposition 3.12. Find $\mathcal{L}[\cos^3(kt)]$.
Solution:
\[ \mathcal{L}[\cos^3(kt)] = \mathcal{L}\left[\frac{3\cos kt + \cos 3kt}{4}\right] = \frac{3}{4}\mathcal{L}[\cos kt] + \frac{1}{4}\mathcal{L}[\cos 3kt] \quad \text{(by linearity)} = \frac{3}{4}\cdot\frac{s}{s^2+k^2} + \frac{1}{4}\cdot\frac{s}{s^2+9k^2}. \]
Theorem 3.13 (Existence of $\mathcal{L}[f(t)]$). Let $f(t)$ be piecewise continuous on every finite interval in $t \ge 0$ and satisfy $|f(t)| \le M e^{\gamma t}$ for some constants $M$ and $\gamma$. Then $\mathcal{L}[f(t)]$ exists for $s > \gamma$ and $\lim_{s\to\infty}\mathcal{L}[f(t)] = 0$.
Proposition 3.14. Find $\mathcal{L}[t^b]$, where $b > -1$ (so that the integral below converges).
Solution: By definition $\mathcal{L}[t^b] = \int_0^\infty e^{-st} t^b\,dt$.
Let $st = x$. Then $t = \frac{x}{s}$ and $dt = \frac{dx}{s}$. Also $t = 0 \Rightarrow x = 0$ and $t = \infty \Rightarrow x = \infty$.
Hence $\mathcal{L}[t^b] = \int_0^\infty e^{-x}\left(\frac{x}{s}\right)^b \frac{dx}{s} = \frac{1}{s^{b+1}}\int_0^\infty e^{-x} x^b\,dx$. Thus
\[ \mathcal{L}[t^b] = \frac{\Gamma(b+1)}{s^{b+1}}, \qquad \text{where } \Gamma(\alpha) = \int_0^\infty e^{-x} x^{\alpha-1}\,dx. \]
We also have $\Gamma(\alpha+1) = \alpha\,\Gamma(\alpha)$, $\Gamma(\frac{1}{2}) = \sqrt{\pi}$ and $\Gamma(n+1) = n!$, where $n$ is a positive integer. If $n$ is a positive integer, then
\[ \mathcal{L}[t^n] = \frac{\Gamma(n+1)}{s^{n+1}} = \frac{n!}{s^{n+1}}. \]
Exercise: using the shift theorem (Lecture-16), find the Laplace transform of $f(t) = e^{2t} t^2$; this is worked out in Proposition 3.21.

Proposition 3.15. Find $\mathcal{L}[t^{1/2}]$ and $\mathcal{L}[t^{-1/2}]$.
Solution:
\[ \mathcal{L}[t^{1/2}] = \frac{\Gamma(\frac{1}{2}+1)}{s^{\frac{1}{2}+1}} = \frac{\frac{1}{2}\Gamma(\frac{1}{2})}{s^{3/2}} = \frac{\sqrt{\pi}}{2 s^{3/2}}, \qquad \mathcal{L}[t^{-1/2}] = \frac{\Gamma(-\frac{1}{2}+1)}{s^{-\frac{1}{2}+1}} = \frac{\Gamma(\frac{1}{2})}{s^{1/2}} = \sqrt{\frac{\pi}{s}}. \]
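These $\Gamma$-function values are easy to sanity-check with `math.gamma` (a quick sketch; $s = 4$ is an arbitrary test point):

```python
import math

# Gamma(1/2) = sqrt(pi), hence L[t^{1/2}] = Gamma(3/2)/s^{3/2} = sqrt(pi)/(2 s^{3/2}).
assert abs(math.gamma(0.5) - math.sqrt(math.pi)) < 1e-12

s = 4.0
assert abs(math.gamma(1.5) / s**1.5 - math.sqrt(math.pi) / (2 * s**1.5)) < 1e-15

# Gamma(n+1) = n! for positive integers n:
assert math.isclose(math.gamma(5), 24.0)   # 4! = 24
```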

3.1.3 Change of scale property

Theorem 3.16. If $\mathcal{L}[f(t)] = F[s]$, then $\mathcal{L}[f(at)] = \frac{1}{a}F[\frac{s}{a}]$.
Proof. By definition, $\mathcal{L}[f(at)] = \int_0^\infty e^{-st} f(at)\,dt$.
Let $at = v$. Then $t = \frac{v}{a}$ and $dt = \frac{dv}{a}$. Also $t = 0 \Rightarrow v = 0$ and $t = \infty \Rightarrow v = \infty$. Thus
\[ \mathcal{L}[f(at)] = \int_0^\infty e^{-s\frac{v}{a}} f(v)\,\frac{dv}{a} = \frac{1}{a}\int_0^\infty e^{-(\frac{s}{a})v} f(v)\,dv. \]
Hence $\mathcal{L}[f(at)] = \frac{1}{a}F[\frac{s}{a}]$.

3.2 Lecture-16
3.2.1 First Shifting Rule (or) Multiplication by $e^{at}$ (or) $s$-shifting Rule

Theorem 3.17. If $\mathcal{L}[f(t)] = F[s]$, then $\mathcal{L}[e^{at} f(t)] = F[s-a]$.
Proof. $\mathcal{L}[e^{at} f(t)] = \int_0^\infty e^{-st} e^{at} f(t)\,dt = \int_0^\infty e^{-(s-a)t} f(t)\,dt = F[s-a]$.

Proposition 3.18. Find $\mathcal{L}[e^{at}\sin bt]$.
Solution: We know that $\mathcal{L}[\sin bt] = \frac{b}{s^2+b^2}$. Now by the first shifting rule $\mathcal{L}[e^{at}\sin bt] = \left[\frac{b}{s^2+b^2}\right]_{s\to(s-a)}$ (i.e., $s$ is replaced by $s-a$). Thus
\[ \mathcal{L}[e^{at}\sin bt] = \frac{b}{(s-a)^2+b^2}. \]
Similarly, one can prove
\[ \mathcal{L}[e^{-at}\sin bt] = \frac{b}{(s+a)^2+b^2}, \qquad \mathcal{L}[e^{at}\cos bt] = \frac{s-a}{(s-a)^2+b^2}, \qquad \mathcal{L}[e^{-at}\cos bt] = \frac{s+a}{(s+a)^2+b^2}. \]
Proposition 3.19. Find $\mathcal{L}[e^{at}\sinh bt]$.
Solution: We know that $\mathcal{L}[\sinh bt] = \frac{b}{s^2-b^2}$. Now by the first shifting rule $\mathcal{L}[e^{at}\sinh bt] = \left[\frac{b}{s^2-b^2}\right]_{s\to(s-a)}$ (i.e., $s$ is replaced by $s-a$). Thus
\[ \mathcal{L}[e^{at}\sinh bt] = \frac{b}{(s-a)^2-b^2}. \]
Similarly, one can prove
\[ \mathcal{L}[e^{-at}\sinh bt] = \frac{b}{(s+a)^2-b^2}, \qquad \mathcal{L}[e^{at}\cosh bt] = \frac{s-a}{(s-a)^2-b^2}, \qquad \mathcal{L}[e^{-at}\cosh bt] = \frac{s+a}{(s+a)^2-b^2}. \]
Proposition 3.20. Find $\mathcal{L}[e^{at} t^n]$, where $n$ is a positive integer.
Solution: We know that
\[ \mathcal{L}[t^n] = \frac{n!}{s^{n+1}}. \]
Now by the first shifting rule $\mathcal{L}[e^{at} t^n] = \left[\frac{n!}{s^{n+1}}\right]_{s\to(s-a)}$ (i.e., $s$ is replaced by $s-a$). Thus
\[ \mathcal{L}[e^{at} t^n] = \frac{n!}{(s-a)^{n+1}}. \]
Similarly we can prove $\mathcal{L}[e^{-at} t^n] = \frac{n!}{(s+a)^{n+1}}$.

Proposition 3.21. Find $\mathcal{L}[t^2 e^{2t}]$.
Solution: Recall the first shift theorem says
\[ \mathcal{L}[e^{at} f(t)] = F(s-a), \]
where $\mathcal{L}[f(t)] = F[s]$. Now, we know that
\[ \mathcal{L}[t^2] = \frac{2!}{s^3} = \frac{2}{s^3}, \]
so, by the shift theorem,
\[ \mathcal{L}[e^{2t} t^2] = \frac{2}{(s-2)^3}. \]
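A numerical sanity check of this shifted transform (a sketch with the arbitrary choice $s = 4$, valid since the transform needs $s > 2$):

```python
import math

# Trapezoidal approximation of the integral of e^{-st} (e^{2t} t^2)
# over [0, T], compared against the closed form 2/(s-2)^3.
s, T, n = 4.0, 40.0, 100_000
h = T / n
total = 0.0
for k in range(1, n):        # the integrand vanishes (numerically) at both endpoints
    t = k * h
    total += math.exp(-(s - 2.0) * t) * t * t
total *= h
assert abs(total - 2.0 / (s - 2.0)**3) < 1e-6
```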
Proposition 3.22. Find $\mathcal{L}[\cosh at\cosh bt]$.
Solution: Start by splitting up $\cosh at$:
\[ \cosh at = \frac{e^{at}+e^{-at}}{2}, \]
and so, by linearity,
\[ \mathcal{L}[\cosh at\cosh bt] = \frac{1}{2}\mathcal{L}[e^{at}\cosh bt + e^{-at}\cosh bt] = \frac{1}{2}\mathcal{L}[e^{at}\cosh bt] + \frac{1}{2}\mathcal{L}[e^{-at}\cosh bt]. \]
Use the shift theorem (if $\mathcal{L}[f(t)] = F[s]$ then $\mathcal{L}[e^{at}f(t)] = F[s-a]$) along with the known Laplace transform of $\cosh bt$,
\[ \mathcal{L}[\cosh bt] = \frac{s}{s^2-b^2}, \]
to get
\[ \mathcal{L}[\cosh at\cosh bt] = \frac{1}{2}\left(\frac{s-a}{(s-a)^2-b^2} + \frac{s+a}{(s+a)^2-b^2}\right). \]

Proposition 3.23. Find $\mathcal{L}[5 + 3e^{6t} - 7e^{-2t} + 2\sin 5t - 7\sinh 3t + 8\cos 9t - 6\cosh 8t - 9t^4 - 4t^{1/2} + e^{5t}t^4]$.
Solution: By applying the linearity property, the transform equals
\[ 5\mathcal{L}[1] + 3\mathcal{L}[e^{6t}] - 7\mathcal{L}[e^{-2t}] + 2\mathcal{L}[\sin 5t] - 7\mathcal{L}[\sinh 3t] + 8\mathcal{L}[\cos 9t] - 6\mathcal{L}[\cosh 8t] - 9\mathcal{L}[t^4] - 4\mathcal{L}[t^{1/2}] + \mathcal{L}[e^{5t}t^4] \]
\[ = \frac{5}{s} + \frac{3}{s-6} - \frac{7}{s+2} + \frac{2\cdot 5}{s^2+25} - \frac{7\cdot 3}{s^2-9} + \frac{8s}{s^2+81} - \frac{6s}{s^2-64} - \frac{9\cdot 4!}{s^5} - \frac{4\,\Gamma(\frac{1}{2}+1)}{s^{\frac{1}{2}+1}} + \frac{4!}{(s-5)^5}. \]

3.2.2 Multiplication by $t^n$

Theorem 3.24. If $\mathcal{L}[f(t)] = F[s]$, then $\mathcal{L}[t^n f(t)] = (-1)^n\frac{d^n}{ds^n}F[s]$.
Proof. We prove the result by induction on $n$. We have $F[s] = \mathcal{L}[f(t)] = \int_0^\infty e^{-st} f(t)\,dt$. Differentiate this equation on both sides with respect to $s$; by the Leibniz rule,
\[ \frac{d}{ds}F[s] = \int_0^\infty \frac{\partial}{\partial s}\left(e^{-st}\right) f(t)\,dt = \int_0^\infty (-t e^{-st}) f(t)\,dt, \]
or equivalently
\[ (-1)^1\frac{d}{ds}F[s] = \int_0^\infty e^{-st}\left(t f(t)\right)dt = \mathcal{L}[t f(t)]. \]
So the result is true for $n = 1$. Now, by induction, assume that the result is true for $n = m$, that is,
\[ \mathcal{L}[t^m f(t)] = (-1)^m\frac{d^m}{ds^m}F[s], \]
or equivalently $(-1)^m\frac{d^m}{ds^m}F[s] = \int_0^\infty e^{-st}(t^m f(t))\,dt$. Differentiate this equation on both sides with respect to $s$ and apply the Leibniz rule to get
\[ \frac{d}{ds}\left((-1)^m\frac{d^m}{ds^m}F[s]\right) = \int_0^\infty \frac{\partial}{\partial s}\left(e^{-st}\right)(t^m f(t))\,dt. \]
Upon simplification we get $(-1)^{m+1}\frac{d^{m+1}}{ds^{m+1}}F[s] = \mathcal{L}[t^{m+1} f(t)]$. Thus the result is true for $n = m+1$. Hence by induction it is true for every $n$.
Proposition 3.25. Find $\mathcal{L}[t\sin at]$.
Solution:
\[ \mathcal{L}[t\sin at] = \mathcal{L}[t^1\sin at] = (-1)^1\frac{d}{ds}\mathcal{L}[\sin at] = -\frac{d}{ds}\left[\frac{a}{s^2+a^2}\right] = -a\,\frac{d}{ds}(s^2+a^2)^{-1} = -a\left[(-1)(s^2+a^2)^{-2}\cdot 2s\right] = \frac{2as}{(s^2+a^2)^2}. \]
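The formula $\mathcal{L}[t\sin at] = 2as/(s^2+a^2)^2$ can also be verified numerically (a sketch; $a = 1$, $s = 2$ are arbitrary test values):

```python
import math

# Trapezoidal approximation of the integral of e^{-st} (t sin at) over [0, T],
# compared against 2as/(s^2 + a^2)^2.
a, s, T, n = 1.0, 2.0, 40.0, 100_000
h = T / n
total = 0.0
for k in range(1, n):        # integrand is 0 at t = 0 and negligible at t = T
    t = k * h
    total += math.exp(-s * t) * t * math.sin(a * t)
total *= h
assert abs(total - 2 * a * s / (s**2 + a**2)**2) < 1e-6
```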
Proposition 3.26. Find $\mathcal{L}[t\sinh at]$.
Solution:
\[ \mathcal{L}[t\sinh at] = \mathcal{L}[t^1\sinh at] = (-1)^1\frac{d}{ds}\mathcal{L}[\sinh at] = -\frac{d}{ds}\left[\frac{a}{s^2-a^2}\right] = -a\,\frac{d}{ds}(s^2-a^2)^{-1} = -a\left[(-1)(s^2-a^2)^{-2}\cdot 2s\right] = \frac{2as}{(s^2-a^2)^2}. \]
Proposition 3.27. Find $\mathcal{L}[t\cos at]$.
Solution:
\[ \mathcal{L}[t\cos at] = \mathcal{L}[t^1\cos at] = (-1)^1\frac{d}{ds}\mathcal{L}[\cos at] = -\frac{d}{ds}\left[\frac{s}{s^2+a^2}\right] = -\frac{(s^2+a^2)\frac{d}{ds}(s) - s\,\frac{d}{ds}(s^2+a^2)}{(s^2+a^2)^2} = -\frac{(s^2+a^2) - 2s^2}{(s^2+a^2)^2} = \frac{s^2-a^2}{(s^2+a^2)^2}. \]

Proposition 3.28. Find $\mathcal{L}[t\cosh at]$.
Solution:
\[ \mathcal{L}[t\cosh at] = \mathcal{L}[t^1\cosh at] = (-1)^1\frac{d}{ds}\mathcal{L}[\cosh at] = -\frac{d}{ds}\left[\frac{s}{s^2-a^2}\right] = -\frac{(s^2-a^2)\frac{d}{ds}(s) - s\,\frac{d}{ds}(s^2-a^2)}{(s^2-a^2)^2} = -\frac{(s^2-a^2) - 2s^2}{(s^2-a^2)^2} = \frac{s^2+a^2}{(s^2-a^2)^2}. \]
Alternate solution:
\[ \mathcal{L}[t\cosh at] = \mathcal{L}[\cosh at\cdot t] = \mathcal{L}\left[\frac{e^{at}+e^{-at}}{2}\,t\right] = \frac{1}{2}\left(\mathcal{L}[e^{at}t] + \mathcal{L}[e^{-at}t]\right) \quad \text{(by linearity)} \]
\[ = \frac{1}{2}\left(\left[\frac{1}{s^2}\right]_{s\to(s-a)} + \left[\frac{1}{s^2}\right]_{s\to(s+a)}\right) \quad \text{(by the first shifting rule)} = \frac{1}{2}\left(\frac{1}{(s-a)^2} + \frac{1}{(s+a)^2}\right) = \frac{1}{2}\cdot\frac{(s+a)^2+(s-a)^2}{(s^2-a^2)^2} = \frac{s^2+a^2}{(s^2-a^2)^2}. \]

3.2.3 Division by $t$

Theorem 3.29. If $\mathcal{L}[f(t)] = F[s]$, then $\mathcal{L}\left[\frac{f(t)}{t}\right] = \int_s^\infty F[u]\,du$.
Proof. By the definition of $\mathcal{L}[f(t)]$ we have
\[ \int_s^\infty F[u]\,du = \int_s^\infty \int_0^\infty e^{-ut} f(t)\,dt\,du. \]
Since $u$ and $t$ are independent variables, we can interchange the order of integration:
\[ \int_s^\infty F[u]\,du = \int_0^\infty \left(\int_s^\infty e^{-ut}\,du\right) f(t)\,dt = \int_0^\infty \left[\frac{e^{-ut}}{-t}\right]_{u=s}^{\infty} f(t)\,dt = \int_0^\infty e^{-st}\,\frac{f(t)}{t}\,dt = \mathcal{L}\left[\frac{f(t)}{t}\right]. \]

Proposition 3.30. Find $\mathcal{L}\left[\frac{e^{-at}-e^{-bt}}{t}\right]$.
Solution:
\[ \mathcal{L}\left[\frac{e^{-at}-e^{-bt}}{t}\right] = \int_s^\infty \mathcal{L}[e^{-at}-e^{-bt}]\,du = \int_s^\infty \left(\mathcal{L}[e^{-at}] - \mathcal{L}[e^{-bt}]\right)du = \int_s^\infty \left(\frac{1}{u+a} - \frac{1}{u+b}\right)du \]
\[ = \left[\ln(u+a) - \ln(u+b)\right]_s^\infty = \left[\ln\frac{u+a}{u+b}\right]_s^\infty = \left[\ln\frac{1+(a/u)}{1+(b/u)}\right]_s^\infty = -\ln\frac{s+a}{s+b} = \ln\frac{s+b}{s+a}. \]
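The division-by-$t$ formula can be confirmed numerically (a sketch; $a$, $b$, $s$ are arbitrary, and the integrand is given its limiting value $b - a$ at $t = 0$):

```python
import math

# Numerical check of L[(e^{-at} - e^{-bt})/t] = ln((s+b)/(s+a)).
a, b, s = 1.0, 3.0, 2.0

def f(t):
    if t == 0.0:
        return b - a                     # limiting value of the integrand at t = 0
    return (math.exp(-a * t) - math.exp(-b * t)) / t

T, n = 40.0, 100_000
h = T / n
total = 0.5 * (f(0.0) + math.exp(-s * T) * f(T))
for k in range(1, n):
    t = k * h
    total += math.exp(-s * t) * f(t)
total *= h
assert abs(total - math.log((s + b) / (s + a))) < 1e-5
```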
Proposition 3.31. Find $\mathcal{L}\left[\frac{\sin^2(at)}{t}\right]$.
Solution:
\[ \mathcal{L}\left[\frac{\sin^2 at}{t}\right] = \int_s^\infty \mathcal{L}[\sin^2 at]\,du = \int_s^\infty \frac{1}{2}\left(\mathcal{L}[1] - \mathcal{L}[\cos 2at]\right)du = \frac{1}{2}\int_s^\infty \left(\frac{1}{u} - \frac{u}{u^2+4a^2}\right)du \]
\[ = \frac{1}{2}\left[\ln u - \frac{1}{2}\ln(u^2+4a^2)\right]_s^\infty = \frac{1}{4}\left[\ln\frac{u^2}{u^2+4a^2}\right]_s^\infty = \frac{1}{4}\left[\ln\frac{1}{1+(4a^2/u^2)}\right]_s^\infty = -\frac{1}{4}\ln\frac{s^2}{s^2+4a^2} = \frac{1}{4}\ln\frac{s^2+4a^2}{s^2}. \]

3.2.4 Laplace Transform of Derivatives

Theorem 3.32.
\[ \mathcal{L}[f'(t)] = s\mathcal{L}[f(t)] - f(0), \qquad \mathcal{L}[f''(t)] = s^2\mathcal{L}[f(t)] - sf(0) - f'(0). \]
More generally,
\[ \mathcal{L}[f^{(n)}(t)] = s^n\mathcal{L}[f(t)] - s^{n-1}f(0) - s^{n-2}f'(0) - \cdots - f^{(n-1)}(0), \]
where $f^{(n)}(t)$ is the $n$-th derivative of $f(t)$.
Proof.
\[ \mathcal{L}[f'(t)] = \int_0^\infty e^{-st} f'(t)\,dt = \left. e^{-st} f(t)\right|_0^\infty + s\int_0^\infty e^{-st} f(t)\,dt = -f(0) + s\mathcal{L}[f(t)]. \]
Now
\[ \mathcal{L}[f''(t)] = s\mathcal{L}[f'(t)] - f'(0) = s\left[s\mathcal{L}[f(t)] - f(0)\right] - f'(0) = s^2\mathcal{L}[f(t)] - sf(0) - f'(0). \]
By induction we can prove that
\[ \mathcal{L}[f^{(n)}(t)] = s^n\mathcal{L}[f(t)] - s^{n-1}f(0) - s^{n-2}f'(0) - \cdots - f^{(n-1)}(0). \]

3.3 Lecture-17
3.3.1 Laplace transforms of integrals

Theorem 3.33. If $\mathcal{L}[f(t)] = F[s]$, then $\mathcal{L}\left[\int_0^t f(u)\,du\right] = \frac{F[s]}{s}$.
Proof. Let $g(t) = \int_0^t f(u)\,du$. Then $g'(t) = f(t)$ and $g(0) = 0$. By the Laplace transform of derivatives,
\[ F[s] = \mathcal{L}[f(t)] = \mathcal{L}[g'(t)] = s\mathcal{L}[g(t)] - g(0) = s\mathcal{L}[g(t)]; \]
consequently,
\[ \frac{F[s]}{s} = \mathcal{L}\left[\int_0^t f(u)\,du\right]. \]

Proposition 3.34. Find $\mathcal{L}\left[\int_0^t e^{-u}\cos u\,du\right]$.
Solution:
\[ \mathcal{L}[\cos t] = \frac{s}{s^2+1}, \qquad \mathcal{L}[e^{-t}\cos t] = \frac{s+1}{(s+1)^2+1}, \]
\[ \mathcal{L}\left[\int_0^t e^{-u}\cos u\,du\right] = \frac{1}{s}\cdot\frac{s+1}{(s+1)^2+1}. \]
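This one can be checked numerically because the inner integral has a closed form, $\int_0^t e^{-u}\cos u\,du = \frac{1}{2}\left(1 + e^{-t}(\sin t - \cos t)\right)$ (a standard antiderivative, differentiated back to confirm). A sketch with the arbitrary choice $s = 2$:

```python
import math

# The inner integral in closed form:
#   g(t) = (1 + e^{-t}(sin t - cos t))/2,  with g'(t) = e^{-t} cos t, g(0) = 0.
# Its transform should equal (1/s)(s+1)/((s+1)^2 + 1).
s, T, n = 2.0, 40.0, 100_000

def g(t):
    return (1.0 + math.exp(-t) * (math.sin(t) - math.cos(t))) / 2.0

h = T / n
total = 0.5 * (g(0.0) + math.exp(-s * T) * g(T))
for k in range(1, n):
    t = k * h
    total += math.exp(-s * t) * g(t)
total *= h
assert abs(total - (s + 1) / (s * ((s + 1)**2 + 1))) < 1e-6
```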

Proposition 3.35. Find $\mathcal{L}\left[\int_0^t e^{u}\,\frac{\sin u}{u}\,du\right]$.
Solution:
\[ \mathcal{L}[\sin t] = \frac{1}{s^2+1}, \qquad \mathcal{L}\left[\frac{\sin t}{t}\right] = \int_s^\infty \mathcal{L}[\sin t]\,du = \int_s^\infty \frac{du}{u^2+1} = \left.\tan^{-1}u\right|_s^\infty = \cot^{-1}s, \]
\[ \mathcal{L}\left[e^{t}\,\frac{\sin t}{t}\right] = \cot^{-1}(s-1). \]
Hence
\[ \mathcal{L}\left[\int_0^t e^{u}\,\frac{\sin u}{u}\,du\right] = \frac{1}{s}\cot^{-1}(s-1). \]
Proposition 3.36. Find $\mathcal{L}\left[\int_0^t u e^{-u}\sin 4u\,du\right]$.
Solution:
\[ \mathcal{L}[\sin 4t] = \frac{4}{s^2+16}, \qquad \mathcal{L}[e^{-t}\sin 4t] = \frac{4}{(s+1)^2+16} = \frac{4}{s^2+2s+17}, \]
\[ \mathcal{L}[t e^{-t}\sin 4t] = (-1)\frac{d}{ds}\mathcal{L}[e^{-t}\sin 4t] = -4\,\frac{d}{ds}(s^2+2s+17)^{-1} = 4(s^2+2s+17)^{-2}\,\frac{d}{ds}(s^2+2s+17) = \frac{8(s+1)}{(s^2+2s+17)^2}. \]
Hence
\[ \mathcal{L}\left[\int_0^t u e^{-u}\sin 4u\,du\right] = \frac{1}{s}\cdot\frac{8(s+1)}{(s^2+2s+17)^2}. \]
 
Proposition 3.37. Find $\mathcal{L}\left[\sinh ct\int_0^t e^{au}\sinh bu\,du\right]$.
Solution:
\[ \mathcal{L}[\sinh bt] = \frac{b}{s^2-b^2}, \qquad \mathcal{L}[e^{at}\sinh bt] = \frac{b}{(s-a)^2-b^2}, \qquad \mathcal{L}\left[\int_0^t e^{au}\sinh bu\,du\right] = \frac{1}{s}\cdot\frac{b}{(s-a)^2-b^2}. \]
Hence
\[ \mathcal{L}\left[\sinh ct\int_0^t e^{au}\sinh bu\,du\right] = \frac{1}{2}\,\mathcal{L}\left[e^{ct}\int_0^t e^{au}\sinh bu\,du\right] - \frac{1}{2}\,\mathcal{L}\left[e^{-ct}\int_0^t e^{au}\sinh bu\,du\right] \]
\[ = \frac{1}{2}\left(\frac{1}{s-c}\cdot\frac{b}{(s-c-a)^2-b^2} - \frac{1}{s+c}\cdot\frac{b}{(s+c-a)^2-b^2}\right). \]

3.3.2 Second shifting theorem or $t$-shifting

Unit step function: denoted by $u(t-a)$ or $u_a(t)$ and defined as
\[ u(t-a) = \begin{cases} 0 & t \le a \\ 1 & t > a. \end{cases} \]
It is very useful for writing discontinuous functions as a single expression. We illustrate with the following examples.

Example 3.38. Let
\[ f(t) = \begin{cases} f_1(t) & 0 < t < a \\ f_2(t) & t > a. \end{cases} \]
Then it is easy to check that
\[ f(t) = f_1(t) + (f_2(t) - f_1(t))\,u(t-a). \]
Similarly, if
\[ f(t) = \begin{cases} f_1(t) & 0 < t < a \\ f_2(t) & a < t < b \\ f_3(t) & t > b, \end{cases} \]
then it is easy to check that
\[ f(t) = f_1(t) + (f_2(t) - f_1(t))\,u(t-a) + (f_3(t) - f_2(t))\,u(t-b). \]

Theorem 3.39. Let $g(t) = f(t-a)\,u(t-a)$, that is,
\[ g(t) = \begin{cases} 0 & 0 < t < a \\ f(t-a) & t > a. \end{cases} \]
Then $\mathcal{L}[g(t)] = \mathcal{L}[f(t-a)\,u(t-a)] = e^{-as}\mathcal{L}[f(t)]$.
Proof.
\[ \mathcal{L}[g(t)] = \int_0^\infty e^{-st} g(t)\,dt = \int_a^\infty e^{-st} f(t-a)\,dt. \]
Put $t - a = x$; this implies $dt = dx$, $t = a \Rightarrow x = 0$ and $t = \infty \Rightarrow x = \infty$. Then
\[ \mathcal{L}[g(t)] = \int_0^\infty e^{-s(a+x)} f(x)\,dx = e^{-as}\int_0^\infty e^{-sx} f(x)\,dx = e^{-as}\mathcal{L}[f(t)]. \]

Proposition 3.40. Find $\mathcal{L}[u(t-a)]$.
Solution:
\[ \mathcal{L}[u(t-a)] = \mathcal{L}[1\cdot u(t-a)] = \mathcal{L}[f(t-a)\,u(t-a)], \ \text{where } f(t) = 1, \]
\[ = e^{-as}\mathcal{L}[1] \quad \text{(by the second shifting theorem)} = \frac{e^{-as}}{s}. \]

Proposition 3.41. Find $\mathcal{L}[4\sin(t-5)\,u(t-5)]$.
Solution: We know that
\[ \mathcal{L}[4\sin t] = \frac{4}{s^2+1}. \]
Hence by the second shifting theorem
\[ \mathcal{L}[4\sin(t-5)\,u(t-5)] = e^{-5s}\mathcal{L}[4\sin t] = \frac{4e^{-5s}}{s^2+1}. \]

Proposition 3.42. Find $\mathcal{L}[f(t)]$, where
\[ f(t) = \begin{cases} e^{-t} & 0 < t < 3 \\ 0 & t > 3. \end{cases} \]

Solution:
\[ f(t) = e^{-t} + (0 - e^{-t})\,u(t-3) = e^{-t} - e^{-t}\,u(t-3) = e^{-t} - e^{-(t-3+3)}\,u(t-3) = e^{-t} - e^{-3}e^{-(t-3)}\,u(t-3), \]
\[ \mathcal{L}[f(t)] = \mathcal{L}[e^{-t}] - e^{-3}\mathcal{L}[e^{-(t-3)}\,u(t-3)] = \frac{1}{s+1} - e^{-3}\left[e^{-3s}\mathcal{L}[e^{-t}]\right] \quad \text{(by the second shifting theorem)} = \frac{1}{s+1}\left[1 - e^{-3(s+1)}\right]. \]
Proposition 3.43. Find $\mathcal{L}[f(t)]$, where
\[ f(t) = \begin{cases} 1 & 0 < t < 2 \\ 2 & 2 < t < 4 \\ 3 & 4 < t < 6 \\ 0 & t > 6. \end{cases} \]
Solution:
\[ f(t) = 1 + (2-1)\,u(t-2) + (3-2)\,u(t-4) + (0-3)\,u(t-6) = 1 + u(t-2) + u(t-4) - 3u(t-6), \]
\[ \mathcal{L}[f(t)] = \mathcal{L}[1] + \mathcal{L}[u(t-2)] + \mathcal{L}[u(t-4)] - 3\mathcal{L}[u(t-6)] = \frac{1}{s}\left[1 + e^{-2s} + e^{-4s} - 3e^{-6s}\right]. \]

3.3.3 Laplace transforms of periodic functions

A function $f(t)$ is said to be a periodic function of period $T > 0$ if
\[ f(t) = f(t+T) = f(t+2T) = \cdots = f(t+nT). \]
Theorem 3.44. Let $f(t)$ be a periodic function with period $T$. Then
\[ \mathcal{L}[f(t)] = \frac{\int_0^T e^{-st} f(t)\,dt}{1 - e^{-Ts}}. \]
Proof.
\[ \mathcal{L}[f(t)] = \int_0^\infty e^{-st} f(t)\,dt = \int_0^T e^{-st} f(t)\,dt + \int_T^{2T} e^{-st} f(t)\,dt + \int_{2T}^{3T} e^{-st} f(t)\,dt + \cdots = \sum_{r=0}^{\infty}\int_{rT}^{(r+1)T} e^{-st} f(t)\,dt. \]
Consider $\int_{rT}^{(r+1)T} e^{-st} f(t)\,dt$, where $r \in \{0,1,2,\dots\}$. Put $t = x + rT$; this implies $dt = dx$, $t = rT \Rightarrow x = 0$ and $t = (r+1)T \Rightarrow x = T$. Then, using $f(x+rT) = f(x)$,
\[ \mathcal{L}[f(t)] = \sum_{r=0}^{\infty}\int_0^T e^{-s(x+rT)} f(x+rT)\,dx = \sum_{r=0}^{\infty} e^{-rsT}\int_0^T e^{-sx} f(x)\,dx = \int_0^T e^{-sx} f(x)\,dx\left[\sum_{r=0}^{\infty} e^{-rsT}\right] \]
\[ = \int_0^T e^{-sx} f(x)\,dx\left[1 + e^{-sT} + e^{-2sT} + e^{-3sT} + \cdots\right] = \frac{1}{1 - e^{-Ts}}\int_0^T e^{-st} f(t)\,dt. \]

Proposition 3.45. Find $\mathcal{L}[f(t)]$, where
\[ f(t) = \begin{cases} 1 & 0 \le t < 2 \\ -1 & 2 \le t < 4, \end{cases} \qquad f(t+4) = f(t). \]
Solution: Note that $f(t)$ is a periodic function with period 4.
\[ \mathcal{L}[f(t)] = \frac{1}{1-e^{-4s}}\int_0^4 e^{-st} f(t)\,dt = \frac{1}{1-e^{-4s}}\left(\int_0^2 e^{-st}\,dt - \int_2^4 e^{-st}\,dt\right) = \frac{1-e^{-2s}}{s\left(1+e^{-2s}\right)}. \]
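The periodic-function theorem can be checked by summing the full Laplace integral period by period; each period of this square wave contributes $e^{-4sr}(1-e^{-2s})^2/s$ in closed form (a sketch; $s = 0.7$ and the 60-period cutoff are arbitrary, the geometric tail being negligible):

```python
import math

# Square wave: +1 on [0,2), -1 on [2,4), period 4.  Summing e^{-4sr} times the
# one-period contribution should reproduce (1 - e^{-2s})/(s(1 + e^{-2s})).
s = 0.7
one_period = (1 - math.exp(-2 * s))**2 / s
total = sum(math.exp(-4 * s * r) * one_period for r in range(60))
expected = (1 - math.exp(-2 * s)) / (s * (1 + math.exp(-2 * s)))
assert abs(total - expected) < 1e-12
```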

3.3.4 Convolution

$\mathcal{L}[f(t)]\,\mathcal{L}[g(t)] = \mathcal{L}[(f*g)(t)]$, where $(f*g)(t) = \int_0^t f(x)g(t-x)\,dx$. That is, if $\mathcal{L}[f(t)] = F[s]$ and $\mathcal{L}[g(t)] = G[s]$, then
\[ \mathcal{L}^{-1}\left[F[s]\,G[s]\right] = (f*g)(t) = \int_0^t f(x)g(t-x)\,dx. \]

Proposition 3.46. Find $\mathcal{L}^{-1}\left[\frac{1}{(s-a)(s-b)}\right]$.
Solution: By the convolution theorem
\[ \mathcal{L}^{-1}\left[\frac{1}{(s-a)(s-b)}\right] = \mathcal{L}^{-1}\left[\frac{1}{s-a}\cdot\frac{1}{s-b}\right] = e^{at}*e^{bt} = \int_0^t e^{ax}e^{b(t-x)}\,dx = e^{bt}\int_0^t e^{(a-b)x}\,dx = e^{bt}\left[\frac{e^{(a-b)x}}{a-b}\right]_0^t = \frac{e^{at}-e^{bt}}{a-b}. \]
Proposition 3.47. Find $\mathcal{L}^{-1}\left[\frac{s}{(s^2+a^2)^2}\right]$.
Solution: By the convolution theorem
\[ \mathcal{L}^{-1}\left[\frac{s}{(s^2+a^2)^2}\right] = \mathcal{L}^{-1}\left[\frac{s}{s^2+a^2}\cdot\frac{1}{a}\,\frac{a}{s^2+a^2}\right] = \cos at * \frac{1}{a}\sin at = \frac{1}{a}\int_0^t \cos ax\,\sin a(t-x)\,dx. \]
For the complete solution we need to evaluate the above integral (using the product-to-sum formula from the trigonometric table, it works out to $\frac{t\sin at}{2a}$).


Proposition 3.48. Find $\mathcal{L}^{-1}\left[\frac{s^2}{(s^2+a^2)(s^2+b^2)}\right]$.
Solution: By the convolution theorem
\[ \mathcal{L}^{-1}\left[\frac{s^2}{(s^2+a^2)(s^2+b^2)}\right] = \mathcal{L}^{-1}\left[\frac{s}{s^2+a^2}\cdot\frac{s}{s^2+b^2}\right] = \cos at * \cos bt = \int_0^t \cos ax\,\cos b(t-x)\,dx. \]
For the complete solution we need to evaluate the above integral.


Proposition 3.49. Find $\mathcal{L}^{-1}\left[\frac{1}{s^2(s-2)}\right]$.
Solution: $\mathcal{L}^{-1}\left[\frac{1}{s^2(s-2)}\right] = \mathcal{L}^{-1}\left[\frac{1}{s^2}\cdot\frac{1}{s-2}\right]$. By convolution, this equals $(f*g)(t)$ where $f(t) = t$, $g(t) = e^{2t}$ ($t \ge 0$). Hence it is sufficient to find $(f*g)(t)$. From the definition of convolution,
\[ (f*g)(t) = \int_0^t f(x)g(t-x)\,dx = \int_0^t x e^{2(t-x)}\,dx = \int_0^t x e^{2t}e^{-2x}\,dx = e^{2t}\int_0^t x e^{-2x}\,dx. \]
Use integration by parts with $u = x$, $dv = e^{-2x}dx$, so $du = dx$, $v = -\frac{1}{2}e^{-2x}$:
\[ = e^{2t}\int_0^t u\,dv = e^{2t}\left([uv]_0^t - \int_0^t v\,du\right) = e^{2t}\left(\left[-\frac{x}{2}e^{-2x}\right]_0^t + \frac{1}{2}\int_0^t e^{-2x}\,dx\right) \]
\[ = e^{2t}\left(-\frac{t}{2}e^{-2t} + \frac{1}{2}\left[-\frac{1}{2}e^{-2x}\right]_0^t\right) = e^{2t}\left(-\frac{t}{2}e^{-2t} - \frac{1}{4}e^{-2t} + \frac{1}{4}\right) = -\frac{t}{2} - \frac{1}{4} + \frac{1}{4}e^{2t}. \]
Proposition 3.50. Use the convolution theorem to find the function $f(t)$ with
\[ \mathcal{L}[f] = \frac{1}{s^2(s-4)}. \]
Solution: We know $\mathcal{L}[t] = \frac{1}{s^2}$ and $\mathcal{L}[e^{4t}] = \frac{1}{s-4}$. From the convolution theorem, we see
\[ \mathcal{L}[f] = \frac{1}{s^2(s-4)} = \mathcal{L}[t]\,\mathcal{L}[e^{4t}] = \mathcal{L}[t*e^{4t}], \]
so that $f(t)$ is the convolution $t*e^{4t}$:
\[ f(t) = \int_0^t x e^{4(t-x)}\,dx = \int_0^t x e^{4t}e^{-4x}\,dx = e^{4t}\int_0^t x e^{-4x}\,dx. \]
Use integration by parts with $U = x$, $dV = e^{-4x}dx$, so $dU = dx$, $V = -\frac{1}{4}e^{-4x}$:
\[ = e^{4t}\int_0^t U\,dV = e^{4t}\left([UV]_0^t - \int_0^t V\,dU\right) = e^{4t}\left(\left[-\frac{x}{4}e^{-4x}\right]_0^t + \frac{1}{4}\int_0^t e^{-4x}\,dx\right) \]
\[ = e^{4t}\left(-\frac{t}{4}e^{-4t} + \frac{1}{4}\left[-\frac{1}{4}e^{-4x}\right]_0^t\right) = e^{4t}\left(-\frac{t}{4}e^{-4t} - \frac{1}{16}e^{-4t} + \frac{1}{16}\right) = -\frac{t}{4} - \frac{1}{16} + \frac{1}{16}e^{4t}. \]
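The convolution integral above can be evaluated numerically and compared with this closed form (a sketch; the test points and the step count are arbitrary):

```python
import math

# Numerically evaluate (t * e^{4t})(t) = integral of x e^{4(t-x)} over [0, t]
# and compare with the closed form e^{4t}/16 - t/4 - 1/16 found above.
def conv(t, n=20_000):
    h = t / n
    total = 0.5 * (0.0 + t)              # endpoint values: 0 at x = 0, t*e^0 at x = t
    for k in range(1, n):
        x = k * h
        total += x * math.exp(4.0 * (t - x))
    return total * h

for t in (0.5, 1.0, 2.0):
    closed = math.exp(4 * t) / 16 - t / 4 - 1 / 16
    assert abs(conv(t) - closed) < 1e-4
```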

3.4 Lecture-18

Proposition 3.51. Find $f(t) = \mathcal{L}^{-1}\left[\frac{6}{s(s+3)}\right]$ both by partial fractions and by convolution.
Solution: Let $F = \frac{6}{s(s+3)}$. First, by partial fractions, let
\[ \frac{1}{s(s+3)} = \frac{A}{s} + \frac{B}{s+3}; \]
then
\[ 1 = A(s+3) + Bs, \]
and choosing $s = 0$ gives $A = 1/3$, while choosing $s = -3$ gives $B = -1/3$. This means that
\[ F = \frac{2}{s} - \frac{2}{s+3} \]
and, so,
\[ f(t) = 2 - 2e^{-3t}. \]
By the convolution theorem,
\[ F = \frac{6}{s(s+3)} = \frac{6}{s}\cdot\frac{1}{s+3} = \mathcal{L}[6]\,\mathcal{L}[e^{-3t}] = \mathcal{L}[6*e^{-3t}], \]
using $\mathcal{L}[f]\,\mathcal{L}[g] = \mathcal{L}[f*g]$. So we need to work out $6*e^{-3t}$. Remember the formula for the convolution:
\[ f*g = \int_0^t f(x)g(t-x)\,dx. \]
In this case it doesn't make much difference, but we can use $f*g = g*f$ and work out $e^{-3t}*6$ instead of $6*e^{-3t}$; it is a tiny bit easier:
\[ e^{-3t}*6 = \int_0^t 6e^{-3x}\,dx = \left[-2e^{-3x}\right]_0^t = 2 - 2e^{-3t}, \]
as before.

Proposition 3.52. Solve
\[ y'' - 4y' + 3y = 6t - 8 \]
with initial conditions $y(0) = y'(0) = 0$.
Solution: If we write $Y = \mathcal{L}[y(t)]$, the Laplace transform is
\[ s^2Y - 4sY + 3Y = \frac{6}{s^2} - \frac{8}{s}, \qquad (s^2-4s+3)Y = \frac{6}{s^2} - \frac{8}{s}, \qquad Y = \frac{6}{s^2(s^2-4s+3)} - \frac{8}{s(s^2-4s+3)}. \]
Now we have to put this into a form which allows us to take the inverse transform. The second term isn't so bad. Since $s^2-4s+3 = (s-1)(s-3)$ we write
\[ \frac{1}{s(s-1)(s-3)} = \frac{A}{s} + \frac{B}{s-1} + \frac{C}{s-3}, \qquad 1 = (s-1)(s-3)A + s(s-3)B + s(s-1)C. \]
Thus, choosing $s = 0$ gives $A = 1/3$, $s = 1$ gives $B = -1/2$ and choosing $s = 3$ gives $C = 1/6$. Thus
\[ \frac{1}{s(s^2-4s+3)} = \frac{1}{3s} - \frac{1}{2(s-1)} + \frac{1}{6(s-3)}. \]
The other expansion is harder because it has a repeated root: in
\[ \frac{1}{s^2(s-1)(s-3)} \]
the $s$ factor appears as a square. To deal with this you have to include a $1/s$ term and a $1/s^2$ term in the partial fraction expansion:
\[ \frac{1}{s^2(s-1)(s-3)} = \frac{A}{s} + \frac{B}{s^2} + \frac{C}{s-1} + \frac{D}{s-3}, \qquad 1 = s(s-1)(s-3)A + (s-1)(s-3)B + s^2(s-3)C + s^2(s-1)D. \]
Now taking $s = 0$ gives $B = 1/3$, $s = 1$ gives $C = -1/2$ and $s = 3$ gives $D = 1/18$. There is no convenient choice of $s$ that gives $A$ on its own, so we just substitute in any other value, $s = 2$ say, and by putting in the values of $B$, $C$ and $D$ we get
\[ 1 = -2A - \frac{1}{3} + 2 + \frac{2}{9} \]
and hence $A = 4/9$. Thus
\[ \frac{1}{s^2(s-1)(s-3)} = \frac{4}{9s} + \frac{1}{3s^2} - \frac{1}{2(s-1)} + \frac{1}{18(s-3)}. \]
Now we can put everything together:
\[ Y = 6\left(\frac{4}{9s} + \frac{1}{3s^2} - \frac{1}{2(s-1)} + \frac{1}{18(s-3)}\right) - 8\left(\frac{1}{3s} - \frac{1}{2(s-1)} + \frac{1}{6(s-3)}\right), \]
and if we do the algebra we find
\[ Y = \frac{2}{s^2} + \frac{1}{s-1} - \frac{1}{s-3}, \]
which means that
\[ y = 2t + e^t - e^{3t}, \]
and you can check that this does solve the original equation.
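That closing check can be done explicitly by substituting the solution and its exact derivatives back into the equation (a small sketch; the interior test points are arbitrary):

```python
import math

# Substitute y = 2t + e^t - e^{3t} back into y'' - 4y' + 3y = 6t - 8.
def y(t):   return 2 * t + math.exp(t) - math.exp(3 * t)
def yp(t):  return 2 + math.exp(t) - 3 * math.exp(3 * t)
def ypp(t): return math.exp(t) - 9 * math.exp(3 * t)

assert abs(y(0.0)) < 1e-12 and abs(yp(0.0)) < 1e-12      # initial conditions
for t in (0.1, 0.5, 1.3):
    assert abs(ypp(t) - 4 * yp(t) + 3 * y(t) - (6 * t - 8)) < 1e-9
```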
Proposition 3.53. Solve $f' - f = u(t-1)$ with $f(0) = 1$.
Solution: Taking the Laplace transform of the equation gives
\[ (s-1)F = 1 + \frac{e^{-s}}{s}, \quad \text{where } F = \mathcal{L}[f], \]
and so
\[ F = \frac{1}{s-1} + \frac{e^{-s}}{s(s-1)} = \frac{1}{s-1} + e^{-s}\left(-\frac{1}{s} + \frac{1}{s-1}\right). \]
Since
\[ \mathcal{L}^{-1}\left[-\frac{1}{s} + \frac{1}{s-1}\right] = -1 + e^t, \]
we can use the second shift theorem to find that
\[ f = e^t + \left(-1 + e^{t-1}\right)u(t-1). \]

Proposition 3.54. Solve $2\frac{df}{dt} = 1$ with initial condition $f(0) = 4$.
Solution: Using linearity of $\mathcal{L}$, plus the property of Laplace transforms of derivatives, we get
\[ \mathcal{L}\left[2\frac{df}{dt}\right] = \mathcal{L}[1], \qquad 2\mathcal{L}\left[\frac{df}{dt}\right] = \frac{1}{s}, \qquad 2sF[s] - 8 = \frac{1}{s}. \]
This means that
\[ F[s] = \frac{4}{s} + \frac{1}{2s^2} \]
and, since $\mathcal{L}[t^n] = n!/s^{n+1}$,
\[ f(t) = 4 + \frac{1}{2}t. \]
To verify that this solves the equation, note that $f(0) = 4$ as required and $f' = 1/2$.

Proposition 3.55. Using the Laplace transform solve the differential equation
\[ f'' - 4f' + 3f = 1 \]
with initial conditions $f(0) = f'(0) = 0$.
Solution: First, take the Laplace transform of the equation. Since $f'(0) = f(0) = 0$, if $\mathcal{L}[f] = F[s]$ then $\mathcal{L}[f'] = sF[s]$ and $\mathcal{L}[f''] = s^2F[s]$. Thus, the subsidiary equation is
\[ s^2F - 4sF + 3F = \frac{1}{s}, \]
and so
\[ (s^2-4s+3)F = \frac{1}{s}, \qquad F = \frac{1}{s}\cdot\frac{1}{s^2-4s+3} \tag{89} \]
and, since $s^2-4s+3 = (s-3)(s-1)$, this gives
\[ F = \frac{1}{s(s-3)(s-1)}. \]
Before we can invert this, we need to do a partial fraction expansion:
\[ \frac{1}{s(s-3)(s-1)} = \frac{A}{s} + \frac{B}{s-3} + \frac{C}{s-1}, \qquad 1 = A(s-3)(s-1) + Bs(s-1) + Cs(s-3). \tag{90} \]
So substituting in $s = 0$ we get $A = 1/3$, $s = 3$ gives $B = 1/6$ and $s = 1$ gives $C = -1/2$. Hence
\[ F = \frac{1}{3s} + \frac{1}{6(s-3)} - \frac{1}{2(s-1)} \]
and so
\[ f(t) = \frac{1}{3} + \frac{1}{6}e^{3t} - \frac{1}{2}e^t. \]
Proposition 3.56. Using the Laplace transform solve the differential equation
\[ f'' - 4f' + 3f = 2e^t \]
with initial conditions $f(0) = f'(0) = 0$.
Solution: This time we have $\mathcal{L}[2e^t] = 2/(s-1)$ on the right-hand side. This means that the subsidiary equation is
\[ (s^2-4s+3)F = \frac{2}{s-1}, \]
so
\[ F = \frac{2}{(s-1)^2(s-3)}. \]
We need to do partial fractions again, but this is one of those cases with a repeated root:
\[ \frac{1}{(s-1)^2(s-3)} = \frac{A}{s-1} + \frac{B}{(s-1)^2} + \frac{C}{s-3}, \]
and multiplying across,
\[ 1 = A(s-1)(s-3) + B(s-3) + C(s-1)^2, \]
so $s = 1$ gives $B = -1/2$ and $s = 3$ gives $C = 1/4$. No value of $s$ gives $A$ on its own, so we try $s = 2$:
\[ 1 = -A + \frac{1}{2} + \frac{1}{4}, \]
which means that $A = -1/4$. Hence (after multiplying the expansion by 2)
\[ F = -\frac{1}{2(s-1)} - \frac{1}{(s-1)^2} + \frac{1}{2(s-3)} \]
and
\[ f = -\frac{1}{2}e^t - te^t + \frac{1}{2}e^{3t}. \]
Proposition 3.57. Using the Laplace transform solve the differential equation
\[ f'' - 4f' + 3f = 0 \]
with initial conditions $f(0) = 1$ and $f'(0) = 1$.
Solution: In this example there are non-zero initial conditions. Since
\[ \mathcal{L}[f'] = sF - f(0), \tag{91} \]
\[ \mathcal{L}[f''] = s^2F - sf(0) - f'(0), \tag{92} \]
the subsidiary equation in this case is
\[ s^2F - s - 1 - 4sF + 4 + 3F = 0, \]
so
\[ (s^2-4s+3)F = s - 3. \]
Hence
\[ F = \frac{s-3}{(s-1)(s-3)} = \frac{1}{s-1} \]
and
\[ f(t) = e^t. \]
Proposition 3.58. Using the Laplace transform solve the differential equation
\[ y'' - 2ay' + a^2y = 0 \]
with initial conditions $y'(0) = 1$ and $y(0) = 0$, where $a$ is some real constant.
Solution: Taking the Laplace transform we get
\[ s^2Y - 1 - 2asY + a^2Y = 0 \]
and hence
\[ Y = \frac{1}{(s-a)^2}, \]
which means that
\[ y = te^{at}. \]
Proposition 3.59. Using the Laplace transform solve the differential equation
\[ f'' + f' - 6f = e^{-3t} \]
with initial conditions $f(0) = f'(0) = 0$.
Solution: So, as before, the subsidiary equation is
\[ s^2F + sF - 6F = \frac{1}{s+3}, \]
or
\[ F = \frac{1}{(s+3)^2(s-2)}. \]
As before, we do partial fractions:
\[ \frac{1}{(s+3)^2(s-2)} = \frac{A}{s+3} + \frac{B}{(s+3)^2} + \frac{C}{s-2}, \qquad 1 = A(s+3)(s-2) + B(s-2) + C(s+3)^2. \tag{93} \]
$s = -3$ gives $B = -1/5$ and $s = 2$ gives $C = 1/25$. Putting in $s = 1$ we find
\[ 1 = -4A + \frac{1}{5} + \frac{16}{25} \]
and so $A = -1/25$. Putting all this together says that
\[ f = -\frac{1}{25}e^{-3t} - \frac{t}{5}e^{-3t} + \frac{1}{25}e^{2t}. \]
Proposition 3.60. Using the Laplace transform solve the differential equation
\[ f'' + 6f' + 13f = 0 \]
with initial conditions $f(0) = 0$ and $f'(0) = 1$.
Solution: So, taking the Laplace transform of the equation we get
\[ s^2F - 1 + 6sF + 13F = 0 \]
and, hence,
\[ F = \frac{1}{s^2+6s+13}. \]
Now, using the quadratic formula, we get $s^2+6s+13 = 0$ if
\[ s = \frac{-6 \pm \sqrt{36-52}}{2} = -3 \pm 2i, \]
which means
\[ s^2+6s+13 = (s+3-2i)(s+3+2i). \]
Next, we do the partial fraction expansion,
\[ \frac{1}{s^2+6s+13} = \frac{A}{s+3-2i} + \frac{B}{s+3+2i}, \]
and multiplying across we get
\[ 1 = A(s+3+2i) + B(s+3-2i); \]
therefore we choose $s = -3+2i$ to get
\[ A = \frac{1}{4i} = -\frac{i}{4} \]
and $s = -3-2i$ to get
\[ B = \frac{1}{-4i} = \frac{i}{4}, \]
and so
\[ F = -\frac{i}{4}\cdot\frac{1}{s+3-2i} + \frac{i}{4}\cdot\frac{1}{s+3+2i}. \]
If we take the inverse transform,
\[ f = -\frac{i}{4}e^{(-3+2i)t} + \frac{i}{4}e^{(-3-2i)t} = -\frac{i}{4}e^{-3t}\left(e^{2it} - e^{-2it}\right) = \frac{1}{2}e^{-3t}\sin 2t. \tag{94} \]
Proposition 3.61. Using the Laplace transform solve the differential equation
\[ f'' + 6f' + 13f = e^t \]
with initial conditions $f(0) = 0$ and $f'(0) = 0$.
Solution: Taking the Laplace transform of the equation gives
\[ s^2F + 6sF + 13F = \frac{1}{s-1}, \]
so that
\[ F = \frac{1}{(s-1)(s+3+2i)(s+3-2i)}. \]
We write
\[ \frac{1}{(s-1)(s+3+2i)(s+3-2i)} = \frac{A}{s+3-2i} + \frac{B}{s+3+2i} + \frac{C}{s-1}, \]
giving
\[ 1 = A(s-1)(s+3+2i) + B(s-1)(s+3-2i) + C(s+3-2i)(s+3+2i). \]
$s = -3+2i$ gives
\[ 1 = A(-4+2i)(4i) = A(-8-16i), \]
so
\[ A = \frac{1}{-8-16i} = \frac{1}{-8-16i}\cdot\frac{-8+16i}{-8+16i} = \frac{-8+16i}{320} = \frac{-1+2i}{40}. \]
In the same way, $s = -3-2i$ leads to
\[ B = \frac{-1-2i}{40} \]
and, finally, $s = 1$ gives
\[ C = \frac{1}{20}. \]
Putting all this together we get
\[ F = \frac{-1+2i}{40}\cdot\frac{1}{s+3-2i} + \frac{-1-2i}{40}\cdot\frac{1}{s+3+2i} + \frac{1}{20}\cdot\frac{1}{s-1} \]
and so
\[ f = \frac{-1+2i}{40}e^{(-3+2i)t} + \frac{-1-2i}{40}e^{(-3-2i)t} + \frac{1}{20}e^t = \frac{1}{40}e^{-3t}\left[(-1+2i)e^{2it} + (-1-2i)e^{-2it}\right] + \frac{1}{20}e^t. \tag{95} \]
We then substitute in
\[ e^{2it} = \cos 2t + i\sin 2t, \qquad e^{-2it} = \cos 2t - i\sin 2t \tag{96} \]
to end up with
\[ f = -\frac{1}{20}e^{-3t}\left[\cos 2t + 2\sin 2t\right] + \frac{1}{20}e^t. \]
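Solutions found this way are easy to double-check by substituting back into the equation; the sketch below uses central finite differences in place of exact derivatives (the step size $h$ and the test points are arbitrary choices):

```python
import math

# Check f = e^t/20 - e^{-3t}(cos 2t + 2 sin 2t)/20 against
# f'' + 6f' + 13f = e^t, with f(0) = 0, using finite differences.
def f(t):
    return math.exp(t) / 20 - math.exp(-3 * t) * (math.cos(2 * t) + 2 * math.sin(2 * t)) / 20

h = 1e-5
assert abs(f(0.0)) < 1e-12                    # initial condition f(0) = 0
for t in (0.3, 0.9, 1.7):
    fp  = (f(t + h) - f(t - h)) / (2 * h)
    fpp = (f(t + h) - 2 * f(t) + f(t - h)) / h**2
    assert abs(fpp + 6 * fp + 13 * f(t) - math.exp(t)) < 1e-4
```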
Proposition 3.62. Use Laplace transform methods to solve the differential equation
\[ f'' + 2f' - 3f = \begin{cases} 1, & 0 \le t < c \\ 0, & t \ge c \end{cases} \]
subject to the initial conditions $f(0) = f'(0) = 0$.
Solution: Taking Laplace transforms of both sides, and using the tables for the Laplace transform of the right-hand-side function, leads to
\[ (s^2+2s-3)F = \frac{1 - e^{-cs}}{s}, \]
\[ F = \frac{1 - e^{-cs}}{s(s^2+2s-3)} = (1 - e^{-cs})\,\frac{1}{s(s-1)(s+3)} = (1 - e^{-cs})\left(\frac{A}{s} + \frac{B}{s-1} + \frac{C}{s+3}\right). \tag{97} \]
Concentrating on the partial fractions part, we have
\[ \frac{1}{s(s-1)(s+3)} = \frac{A}{s} + \frac{B}{s-1} + \frac{C}{s+3}, \qquad 1 = A(s-1)(s+3) + Bs(s+3) + Cs(s-1). \]
$s = 0$: $1 = -3A$, so $A = -\frac{1}{3}$. $s = 1$: $1 = 0 + 4B + 0$, so $B = \frac{1}{4}$. $s = -3$: $1 = 0 + 0 + 12C$, so $C = \frac{1}{12}$. Hence we have
\[ F = (1 - e^{-cs})\left(-\frac{1}{3}\cdot\frac{1}{s} + \frac{1}{4}\cdot\frac{1}{s-1} + \frac{1}{12}\cdot\frac{1}{s+3}\right). \]
From the tables, we know that
\[ \mathcal{L}\left[-\frac{1}{3} + \frac{1}{4}e^t + \frac{1}{12}e^{-3t}\right] = -\frac{1}{3}\cdot\frac{1}{s} + \frac{1}{4}\cdot\frac{1}{s-1} + \frac{1}{12}\cdot\frac{1}{s+3}, \]
and then, using the second shift theorem,
\[ f(t) = -\frac{1}{3} + \frac{1}{4}e^t + \frac{1}{12}e^{-3t} - H_c(t)\left(-\frac{1}{3} + \frac{1}{4}e^{t-c} + \frac{1}{12}e^{-3(t-c)}\right). \]
Proposition 3.63. Use Laplace transform methods to solve the differential equation
\[ f'' + 2f' - 3f = \begin{cases} 0, & 0 \le t < 1 \\ 1, & 1 \le t < 2 \\ 0, & t \ge 2 \end{cases} \]
subject to the initial conditions $f(0) = 0$ and $f'(0) = 0$.
Solution: The thing here is to rewrite the right-hand side of the equation in terms of Heaviside (unit step) functions. Remember the definition of the Heaviside function:
\[ H_a(t) = u(t-a) = \begin{cases} 0 & t < a \\ 1 & t \ge a, \end{cases} \]
so the Heaviside function is zero until $a$ and then it is one. The right-hand side is zero until $t = 1$, then it is one until $t = 2$, and then it is zero again. Consider $H_1(t) - H_2(t)$: this is zero until you reach $t = 1$, then the first Heaviside function switches on while the other one remains zero. Things stay like this until you reach $t = 2$; then the second Heaviside function switches on as well and you get $1 - 1 = 0$. Thus
\[ H_1(t) - H_2(t) = \begin{cases} 0, & 0 \le t < 1 \\ 1, & 1 \le t < 2 \\ 0, & t \ge 2. \end{cases} \]
Now, using
\[ \mathcal{L}[H_a(t)] = \frac{e^{-as}}{s}, \]
we take the Laplace transform of the differential equation:
\[ s^2F + 2sF - 3F = \frac{e^{-s}}{s} - \frac{e^{-2s}}{s}. \]
This gives
\[ (s^2+2s-3)F = \frac{1}{s}\left(e^{-s} - e^{-2s}\right), \qquad F = \frac{e^{-s} - e^{-2s}}{s(s-1)(s+3)}. \tag{98} \]
Now, if you look at the solution to problem sheet 4, question 3 (or at Proposition 3.62 above), you'll see that
\[ \frac{1}{s(s-1)(s+3)} = -\frac{1}{3s} + \frac{1}{4(s-1)} + \frac{1}{12(s+3)}, \]
and we know that
\[ \mathcal{L}\left[-\frac{1}{3} + \frac{1}{4}e^t + \frac{1}{12}e^{-3t}\right] = -\frac{1}{3s} + \frac{1}{4(s-1)} + \frac{1}{12(s+3)}. \]
In other words, if it wasn't for the exponentials we would know $f$. However, we know from the second shift theorem that the effect of the exponential $e^{-as}$ is to change $t$ to $t-a$ and to introduce an overall factor of $H_a(t)$. Thus
\[ f = H_1(t)\left(-\frac{1}{3} + \frac{1}{4}e^{t-1} + \frac{1}{12}e^{-3(t-1)}\right) - H_2(t)\left(-\frac{1}{3} + \frac{1}{4}e^{t-2} + \frac{1}{12}e^{-3(t-2)}\right). \]
Proposition 3.64. Use Laplace transform methods to solve the differential equation
\[ f'' + 2f' - 3f = \delta(t-1) \]
subject to the initial conditions $f(0) = 0$ and $f'(0) = 1$.
Solution: The only thing that is unusual is that there is a delta function. We take the Laplace transform using
\[ \mathcal{L}[\delta(t-a)] = e^{-as}, \]
hence
\[ (s^2+2s-3)F - 1 = e^{-s}. \]
Now, if we do partial fractions on $1/(s^2+2s-3)$ we get
\[ \frac{1}{s^2+2s-3} = -\frac{1}{4(s+3)} + \frac{1}{4(s-1)}. \]
Hence
\[ F = \left(1 + e^{-s}\right)\left(-\frac{1}{4(s+3)} + \frac{1}{4(s-1)}\right). \]
Since
\[ \mathcal{L}\left[-\frac{1}{4}e^{-3t} + \frac{1}{4}e^t\right] = -\frac{1}{4(s+3)} + \frac{1}{4(s-1)}, \]
then, by the second shift theorem, we have
\[ f = -\frac{1}{4}e^{-3t} + \frac{1}{4}e^t + H_1(t)\left(-\frac{1}{4}e^{-3(t-1)} + \frac{1}{4}e^{t-1}\right). \]

Table of Laplace Transforms ($f(t)$ for $t \ge 0$ and its transform $\mathcal{L}[f(t)]$)

$\mathcal{L}[1] = \frac{1}{s}$
$\mathcal{L}[e^{at}] = \frac{1}{s-a}$, $\quad \mathcal{L}[e^{-at}] = \frac{1}{s+a}$
$\mathcal{L}[t^n] = \frac{n!}{s^{n+1}}$ ($n = 0,1,\dots$), $\quad \mathcal{L}[t^b] = \frac{\Gamma(b+1)}{s^{b+1}}$
$\mathcal{L}[\sin at] = \frac{a}{s^2+a^2}$, $\quad \mathcal{L}[\cos at] = \frac{s}{s^2+a^2}$
$\mathcal{L}[\sinh at] = \frac{a}{s^2-a^2}$, $\quad \mathcal{L}[\cosh at] = \frac{s}{s^2-a^2}$
$\mathcal{L}[u(t-a)] = \frac{e^{-as}}{s}$, $\quad \mathcal{L}[\delta(t-a)] = e^{-as}$
$\mathcal{L}[f(t-a)\,u(t-a)] = e^{-as}\mathcal{L}[f(t)]$
$\mathcal{L}^{-1}\left[F[as+b]\right] = \frac{1}{a}e^{-bt/a}f\!\left(\frac{t}{a}\right)$
$\mathcal{L}[a f(t) + b g(t)] = a\mathcal{L}[f(t)] + b\mathcal{L}[g(t)]$
$\mathcal{L}[f'(t)] = s\mathcal{L}(f) - f(0)$, $\quad \mathcal{L}[f''(t)] = s^2\mathcal{L}(f) - sf(0) - f'(0)$
$\mathcal{L}[t^n f(t)] = (-1)^n\frac{d^n}{ds^n}F[s]$, $\quad \mathcal{L}\left[\frac{f(t)}{t}\right] = \int_s^\infty F[u]\,du$
$\mathcal{L}\left[\int_0^t f(u)\,du\right] = \frac{F[s]}{s}$
$\mathcal{L}[f(t)] = \frac{\int_0^T e^{-st}f(t)\,dt}{1-e^{-Ts}}$, where $f(t+T) = f(t)$
$\mathcal{L}[f(t)]\,\mathcal{L}[g(t)] = \mathcal{L}[(f*g)(t)]$, where $(f*g)(t) = \int_0^t f(x)g(t-x)\,dx$
