
Math 126 : Partial Differential Equations Notes

Notes for the Spring 2018 Math 126 Course taught by Professor Jao
prepared by Joshua Lin (email: joshua.z.lin@gmail.com)
May 3, 2018

1 Introduction and Overview


These are personal notes for the Math 126 class. As such, the assumed background is exactly
the background I have right now.

2 Review of Multivariable Calculus


We denote the closure of a set K ⊂ Rd as:

K̄ = K ∪ {limit points of K}

where x is a limit point of K if there exists a sequence {xn} in K\{x} such that lim xn = x.
The boundary of K is the set of points x such that every ball Br (x) intersects both K and
its complement.

Definition 2.1 (The Jacobian).


Given a diffeomorphism (a bijection with continuous partial derivatives in both directions)
Φ : U ⊂ Rd → V ⊂ Rd, the Jacobian of Φ is the matrix-valued function:
 
JΦ(x) = ( ∂Φi/∂xj )ij,   i, j = 1, . . . , d

Theorem 2.1 (Using the Jacobian).


Let Φ : U → V be a diffeomorphism. Then:
∫V f(x) dx = ∫U f(Φ(x)) |det JΦ(x)| dx
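As a quick numerical sanity check of the change-of-variables formula (my own sketch, not from the notes), we can integrate a function over the unit disk V directly and via the polar diffeomorphism Φ(r, θ) = (r cos θ, r sin θ), whose Jacobian determinant is r:

```python
import numpy as np

# Integrate f over the unit disk V two ways: directly on a grid, and via the
# polar-coordinate map Phi(r, theta) = (r cos(theta), r sin(theta)), whose
# Jacobian determinant is |det JPhi| = r.
f = lambda x, y: np.exp(-(x**2 + y**2))

n = 1000
xs = np.linspace(-1, 1, n)
dx = xs[1] - xs[0]
X, Y = np.meshgrid(xs, xs)
inside = X**2 + Y**2 <= 1.0
direct = np.sum(f(X, Y) * inside) * dx * dx

rs = np.linspace(0, 1, n)
dr = rs[1] - rs[0]
ts = np.linspace(0, 2 * np.pi, n, endpoint=False)
dt = ts[1] - ts[0]
R, T = np.meshgrid(rs, ts)
pulled_back = np.sum(f(R * np.cos(T), R * np.sin(T)) * R) * dr * dt

print(direct, pulled_back)  # both approach pi*(1 - 1/e) ~ 1.9859
```

Both Riemann sums converge to π(1 − e^{−1}); the factor R in the second sum is exactly the |det JΦ(x)| from the theorem.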

Now, for some things we should remember.

Theorem 2.2 (Green’s Identities).
Let Ω ⊂ Rd have smooth boundary. For any smooth functions u, v:

uΔv = ∇ · (u∇v) − ⟨∇u, ∇v⟩

so we have Green’s Identities:


∫Ω uΔv dx = ∫∂Ω u⟨∇v, n̂⟩ dS − ∫Ω ⟨∇u, ∇v⟩ dx

∫Ω (uΔv − vΔu) dx = ∫∂Ω (u⟨∇v, n̂⟩ − v⟨∇u, n̂⟩) dS

Theorem 2.3 (Useful Vector Formulas).


In spherical coordinates, we have:

∇f = (∂f/∂r) r̂ + (1/r)(∂f/∂θ) θ̂ + (1/(r sin θ))(∂f/∂φ) φ̂

∇ · v = (1/r²) ∂(r² vr)/∂r + (1/(r sin θ)) ∂(sin θ vθ)/∂θ + (1/(r sin θ)) ∂vφ/∂φ

∇²f = (1/r²) ∂(r² ∂f/∂r)/∂r + (1/(r² sin θ)) ∂(sin θ ∂f/∂θ)/∂θ + (1/(r² sin²θ)) ∂²f/∂φ²

3 Convolution and Approximate Identities

Definition 3.1 (Convolutions).


Let u, v be bounded functions on Rd such that at least one has compact support. Then
the convolution is defined:
u ∗ v(x) = ∫Rd u(x − y) v(y) dy

Theorem 3.1 (Properties of Convolutions).


u ∗ v = v ∗ u, and if u is smooth then u ∗ v is smooth, with ∂(u ∗ v) = (∂u) ∗ v.

Definition 3.2 (Mollification).
Let η ∈ Cc∞(Rd) (smooth with compact support) satisfy ∫ η(x) dx = 1. For each 0 <
h ≤ 1, define ηh(x) = h^{−d} η(h^{−1}x); note that ∫ ηh(x) dx = 1 as well. Then {ηh}h is called
an approximate identity or mollifier.
For a function f, define the mollification of f as

fh(x) = (f ∗ ηh)(x)

We can understand mollification as 'blurring' the function in a smooth manner; think
Gaussian blurs. Note that as h → 0, the mollifier gets steeper and steeper, and we "lose
less information", in the sense that:

Theorem 3.2.
Let f be a continuous function on Rd. Then f ∗ ηh is smooth for each h, and

limh→0 (f ∗ ηh)(x) = f(x)   for all x

The proof essentially follows from continuity of f .
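To see Theorem 3.2 in action, here is a small numerical sketch (my own illustration, using the standard bump function): mollify f(x) = |x|, which is continuous but not smooth, and watch fh(0) → f(0) as h → 0.

```python
import numpy as np

# Mollify f(x) = |x| with eta_h(x) = h^{-1} eta(x/h), where eta is the
# standard bump supported on (-1, 1), normalized to have integral 1.
def eta(x):
    # clamp the argument so the formula never divides by zero; np.where
    # zeroes everything outside (-1, 1) anyway
    inside = np.abs(x) < 1
    safe = np.minimum(x**2, 1 - 1e-12)
    return np.where(inside, np.exp(-1.0 / (1.0 - safe)), 0.0)

xs = np.linspace(-3, 3, 6001)
dx = xs[1] - xs[0]
Z = np.sum(eta(xs)) * dx          # normalizing constant, the integral of eta

f = np.abs(xs)                    # continuous, but with a kink at 0
for h in [1.0, 0.5, 0.1, 0.02]:
    eta_h = eta(xs / h) / (h * Z)                    # still integrates to 1
    f_h = np.convolve(f, eta_h, mode="same") * dx    # (f * eta_h) on the grid
    print(h, f_h[len(xs) // 2])   # value at x = 0; tends to f(0) = 0
```

Each fh is smooth even though f is not, and the printed values shrink toward 0 as h does.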

4 Distributions
We usually think about functions, as, well, functions. But instead, we can think of them
as 'distributions', in the following sense. Suppose I take f; then it is interesting to see the
behaviour of integrals of the form:
∫ f(x)φ(x) dx

as φ ranges over some well-behaved set of functions. The question we want to ask is: given
these integrals, can we recover f? The following says yes, in some sense:

Theorem 4.1.
Let f1 , f2 be continuous on Rd , and suppose that:
∫ f1(x)φ(x) dx = ∫ f2(x)φ(x) dx

for all smooth, compactly supported φ. Then f1 = f2.

The proof follows by considering a subcollection of φ, namely an approximate identity,


and using the ideas discussed in the previous section. Now, we might want to expand our
brains, by thinking not just about functions as distributions, but other more spooky kinds
of distributions. First, some terminology:

Definition 4.1.
A test function is a smooth, compactly supported function. The collection of test functions
is denoted D(Rd); it is a vector space under pointwise operations.

Definition 4.2.
A distribution on Rd is a linear functional l on D(Rd) which also satisfies the following
continuity property: if φ, φn ∈ D(Rd), and
(1) - there exists R > 0 such that supp(φn), supp(φ) ⊂ BR(0) for all n (the supports are
uniformly bounded), and
(2) - limn→∞ maxx∈Rd |Dα φn(x) − Dα φ(x)| = 0 for all multiindices α (i.e. the derivatives
of φn converge uniformly to the derivatives of φ), then lim l(φn) = l(φ).

You might be wondering what on earth this continuity condition imposes on the distri-
butions. And I would say to you: What a good question! Moving on now.
Note that for any continuous f , we can associate a distribution, where
lf : φ ↦ ∫ f(x)φ(x) dx

and φ is a test function. However, we have enlarged our brains, because we now have exotic
distributions that don't correspond to functions, e.g. the Dirac delta function:

δ : φ → φ(0)

Note that any mollifier limits to a delta function in the space of distributions, to be precise:

Definition 4.3.
A sequence of functions fn converges to l ∈ D0 (Rd ) in the sense of distributions if the
associated distributions lfn satisfy:

limn→∞ lfn(φ) = l(φ)   for each φ ∈ D(Rd)

Now, we have various operations on distributions, which are motivated by thinking about
their 'functional counterparts' (i.e. imagine everything is a function, think about how it
should behave, and then impose this as a rule on the space of distributions).

Definition 4.4.
We can multiply distributions by smooth functions:

(hl)(φ) = l(hφ)

We can differentiate distributions:

(∂xi l)(φ) = l(−∂xi φ)

We can translate distributions:

(Ty l)(φ) = l(T−y φ)

where Ty f (x) = f (x − y). Finally, we can convolve with a test function η to get a
function of x:
(l ∗ η)(x) = l(rTx η)
where r is the reflection rg(x) = g(−x).
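As a quick example of the derivative rule (my addition, not from lecture): let H be the Heaviside step function on R (H(x) = 1 for x ≥ 0, zero otherwise), and lH its associated distribution. For any test function φ,

(∂x lH)(φ) = −lH(∂x φ) = −∫0^∞ φ′(x) dx = φ(0) = δ(φ)

(using that φ vanishes at infinity, having compact support). So the distributional derivative of the step function is the Dirac delta.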

5 Characteristics for First Order Linear Equations


Now, time to start learning some PDE’s! Woohoo! Feel the excitement bubble in your veins!
Let’s consider a first order linear PDE:

a(t, x)ut + b(t, x)ux + c(t, x)u = f (t, x), u : Rt × Rx → R

Observe that:
a(t, x)ut + b(t, x)ux = (a(t, x), b(t, x)) · (ut , ux )
represents the directional derivative of u in the (a, b) direction. So, to solve our PDE, consider
characteristics of our PDE, which are curves γ(s) that satisfy:

γ̇(s) = (ṫ(s), ẋ(s)) = (a(γ(s)), b(γ(s)))

i.e. they are curves that follow the vector (a, b) along the plane. Now, if we define z(s) =
u(γ(s)), i.e. the value of u on the curve γ as parametrized by s, then we find that:

ż(s) = (ut (γ(s)), ux (γ(s))) · γ̇(s)

so all in all we have:


ż(s) + c(γ(s))z(s) = f (γ(s))
which is an ODE, which we can (hopefully) solve easily! This framework is fairly general;
in the special case c = 0 with a, b constant, the PDE is called the 'Linear Transport Equation',
since the initial value of u gets transported along vectors of the form (a, b) across
the entire plane (up to an integral of f along the straight characteristics).
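For concreteness, a tiny worked example (in my own notation): solve ∂t u + c ∂x u = 0 with constant c and initial data u(0, x) = g(x). The characteristics solve ṫ(s) = 1, ẋ(s) = c, so they are the lines (s, x0 + cs); along each one ż = 0, i.e. u is constant on every line x − ct = x0. Hence

u(t, x) = g(x − ct)

which is the initial profile translated rigidly at speed c.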

6 Laplace’s Equation

Definition 6.1 (Laplace’s Equation).


Laplace’s equation is
Δu = ∇²u = Σi ∂²u/∂xi² = 0

As a first (slightly strange) observation, note that Laplace's equation is invariant under rotations,
i.e. if u is a solution and S is a rotation, then u ◦ S is also a solution of Laplace’s equation.
Now, to find our first solution (on the entire space), let's look for radial solutions u(x) =
v(r). Plugging this into our equation (for radial functions, Δu = v″(r) + ((d − 1)/r) v′(r)),
we find that

v(r) = A/r^{d−2} + B   for d ≥ 3,
v(r) = A log r + B     for d = 2

is a solution to Laplace’s equation. Normalizing in a mysterious way, we find:

Definition 6.2 (The Fundamental Solution of Laplace’s Equation).


Φ(x) = (1/((d − 2)cd)) · 1/|x|^{d−2}   for d ≥ 3,
Φ(x) = −(1/(2π)) log |x|               for d = 2,

where cd is the surface area of the unit sphere in Rd, is a particular solution, which is called
the Fundamental Solution of −Δ.

Now, we can consider Φ as a distribution, and we find that:

Theorem 6.1 (The Fundamental Solution as a distribution).

−Δ lΦ = δ0

We essentially prove this using Green's Identities, stated in the earlier section on multivariable
calculus (which is just another flavor of Stokes' Theorem, I believe).
Now, as a corollary to the above fact, we have:

Theorem 6.2 (The 'Fundamental Solution' to Poisson's Equation).


If f ∈ D(Rd), then u = Φ ∗ f solves Poisson's Equation −Δu = f.

We get this, once again, by using Green's Identities, and also using bounds on the fundamental
solution (far away from the origin it decays like ..., etc.).
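Formally, this theorem is a one-line computation in the language of distributions (my gloss, combining Theorem 6.1 with the convolution rules of Section 4):

−Δ(Φ ∗ f) = (−ΔΦ) ∗ f = δ0 ∗ f = f

and the honest proof justifies each of these steps using Green's Identities and the decay bounds. Now, we start investigating general harmonic functions: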

Theorem 6.3 (Mean Value Formulas).
Let Ω ⊂ Rd be open and u ∈ C 2 (Ω) be harmonic. Then
u(x) = (1/A(∂Br(x))) ∫∂Br(x) u dS = (1/vol(Br(x))) ∫Br(x) u dx

for each ball Br (x) ⊂ Ω.

Essentially, we integrate u over the surfaces and volumes of balls, and show that these integrals
do not change as a function of the radius of the ball, using its harmonic nature, Green's
identities, the Divergence Theorem, and such.
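In a bit more detail (my own write-up of the standard computation): fix x and set

φ(r) = (1/A(∂Br(x))) ∫∂Br(x) u dS = (1/A(∂B1(0))) ∫∂B1(0) u(x + rz) dS(z)

Differentiating in r under the integral and then applying the Divergence Theorem (z is the outward normal on the unit sphere),

φ′(r) = (1/A(∂B1(0))) ∫∂B1(0) ⟨∇u(x + rz), z⟩ dS(z) = (1/A(∂Br(x))) ∫Br(x) Δu(y) dy = 0

so φ is constant in r, and φ(r) → u(x) as r → 0, which gives the surface-average formula; integrating it in r gives the solid-average formula.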

Theorem 6.4 (Converse to Mean Value).


If u ∈ C²(Ω) satisfies

u(x) = (1/A(∂Br(x))) ∫∂Br(x) u dS

for all Br(x) ⊂ Ω, then Δu = 0.

Essentially, assume not; then pick a point with nonzero Laplacian, and take some integrals
over balls close to that point.

Theorem 6.5 (Strong Maximum Principle).


Assume Ω ⊂ Rd is open, bounded, and connected. Suppose u ∈ C²(Ω) ∩ C(Ω̄) is
harmonic on Ω. Then if there exists x0 ∈ Ω such that

u(x0) = maxx∈Ω̄ u(x)

then u is constant.
As a corollary, we have the weak version:

Theorem 6.6 (Weak Maximum Principle).


In the setting of the theorem above,

maxΩ̄ u = max∂Ω u

The proof for this one is vaguely interesting.


Proof: Let M = maxΩ̄ u. Then:

Ω = {x ∈ Ω|u(x) < M } ∪ {x ∈ Ω|u(x) = M } = A ∪ B

u is continuous, so A is open. Now suppose that B is nonempty, and pick a point x0 at which
u attains its max, and pick a neighbourhood (ball) around x0. Then, if u is not identically M
on this neighbourhood, there is a point inside where u is submaximal, so the average of u over
the ball is less than M; but this contradicts the mean value formula! Hence u is identically M
on this neighbourhood. So B is open.

Since Ω is assumed to be connected, B nonempty implies A is empty, so u(x) = M on all
of Ω. ∎

Definition 6.3 (Dirichlet Problems).


A boundary value problem of the form

−Δu = f on Ω;   u|∂Ω = g

where f, g are known, is called a Dirichlet boundary value problem.

The maximum principle tells us there is at most one solution to a Dirichlet problem: if u1, u2
both solve it, their difference is harmonic with zero boundary data, so by the weak maximum
principle it is ≤ 0 everywhere, and by symmetry ≥ 0 as well. Now, we have Green's
Representation Formula:

Definition 6.4 (Green’s Representation Formula).


u(x) = ∫∂Ω ( Γ(x, y)⟨∇u(y), n̂y⟩ − u(y)⟨∇y Γ(x, y), n̂y⟩ ) dSy − ∫Ω Γ(x, y) Δu(y) dy

where

Γ(x, y) = Φ(x − y)

The whole point is that we can evaluate these integrals given f, g (i.e. notice that in the
surface integral we know u = g, and in the volume integral we know Δu = f).

7 The Heat Equation


The Heat Equation is the following PDE:

Definition 7.1 (Heat Equation).

∂t u − Δx u = 0;   (t, x) ∈ (0, ∞) × Ω,   Ω ⊂ Rd open

Usually, we want to solve an initial value problem, where u has been defined for t = 0.
Now, we have some observations:

Theorem 7.1 (Some Observations:).


If u solves ∂t u − Δu = 0 on (0, ∞) × Rd, then so does uλ(t, x) = u(λ²t, λx) for any λ > 0.
Furthermore, if u solves the heat equation, then ∫ u(t, x) dx is constant in t (total
heat is conserved). Finally, we have invariance under rotations, so Δ(u ◦ S) = (Δu) ◦ S.

We find a solution that obeys the symmetry and the scaling laws we observed:

Definition 7.2 (The Fundamental Solution).
Φ(t, x) = (1/(4πt)^{d/2}) e^{−|x|²/4t},   t > 0

and Φ = 0 for t < 0, is a solution.
Note that this fundamental solution has been chosen to be normalised: ∫ Φ(t, x) dx = 1 for
each t > 0. To solve our initial value problem, just as with Laplace's equation, we take a
convolution and our worries disappear:

Definition 7.3 (Cauchy Problem Solution).


If f ∈ C(Rd) is bounded, then

u(t, x) = (f ∗ Φ(t, ·))(x) = ∫Rd Φ(t, x − y) f(y) dy

solves our problem (where f is our initial data for u).
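Here is a small numerical sketch of this formula in d = 1 (my own illustration), which also exhibits the conservation of total heat from Theorem 7.1:

```python
import numpy as np

# Solve the 1-d Cauchy problem for the heat equation by convolving the
# initial data f with the heat kernel Phi(t, x) = (4 pi t)^{-1/2} e^{-x^2/4t}.
xs = np.linspace(-20, 20, 4001)
dx = xs[1] - xs[0]
f = np.where(np.abs(xs) < 1, 1.0, 0.0)   # a box of initial heat, total mass 2

def u(t):
    Phi = np.exp(-xs**2 / (4 * t)) / np.sqrt(4 * np.pi * t)
    return np.convolve(f, Phi, mode="same") * dx

for t in [0.01, 0.1, 1.0]:
    ut = u(t)
    print(t, ut.max(), np.sum(ut) * dx)  # peak decays, total stays ~ 2.0
```

The maximum decays while the spatial integral stays (numerically) constant, as the observations above predict.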

Now, when we talk about the heat equation, it’s often convenient to talk about the
following subsets:

Definition 7.4 (Parabolic Cylinder and Parabolic Boundary).


For T > 0, let ΩT denote the "parabolic cylinder"

ΩT = (0, T] × Ω

and the parabolic boundary:

ΓT = Ω̄T \ ΩT

so that the parabolic boundary contains "the bottom and sides of ΩT but not the top".

Of course, we also have a maximum principle:

Theorem 7.2 (Weak Maximum Principle).


Assume u ∈ C²₁(ΩT) ∩ C(Ω̄T) (C² in space and C¹ in time) solves the heat equation in
ΩT. Then:

maxΩ̄T u = maxΓT u

Proof: We have a calculus proof! For ε > 0, we consider subsolutions (in this case strict):

uε = u − εt;   (∂t − Δ)uε = −ε < 0 on ΩT

(in other situations we might instead look at supersolutions, (∂t − Δ)v ≥ 0).
Note that uε has no local maxima in ΩT, which essentially comes from noting that at a
maximum the Hessian must be nonpositive and the first derivatives vanish, forcing
(∂t − Δ)uε ≥ 0 there. Now, we suppose for contradiction that u attains its maximum at a
point of ΩT; for ε small enough this yields a maximum of uε in ΩT, a contradiction.

This leads to a calculus-based proof of the maximum principle for the Laplacian. Suppose
that Ω is contained in a region |x| < M, and let uε = u + εe^{x1}. Then Δuε = εe^{x1} > 0
everywhere, so uε cannot have an interior maximum (at one, the Hessian would be nonpositive,
making Δuε ≤ 0). So if u had an interior maximum exceeding its boundary maximum, then
for ε small so would uε, a contradiction.

8 Duhamel's Principle
There is a different way of thinking about PDEs than the one we have been using. In
particular, instead of thinking about u(t, x) as the object of interest, we can think of u(t, ·)
as the object of interest (the functions themselves, mapping x to the heat value). Then, we
have a 'first order' system:
d/dt u(t, ·) = Δu(t, ·)
where Δ is thought of as a linear operator on the function space. (Note: for happiness, we
consider only bounded, continuous functions.) Once we start thinking in this funky way,
we can introduce 'propagators': for any initial data g, we let u be the unique solution to:

u(s, ·) = g;   (∂t − Δ)u = 0

Then we let the propagator be:

S(t, s)g = u(t, ·)

which 'propagates' the initial condition g from time s to time t. The obvious rules follow
(e.g. S(a, b)S(b, c) = S(a, c)). Now, we can appeal to Duhamel's formula, which tells us:
Theorem 8.1.
If we want to solve the system:

ż = A(t)z + f (t); z(0) = z0

then our solution is:

z(t) = S(t, 0)z0 + ∫0^t S(t, s)f(s) ds

where S is the propagator for the homogeneous system (i.e. the system with no f, which
is assumed to be easy to solve).

Now, we are in good shape, because this tells us how to solve the inhomogeneous heat
equation:

u(0, x) = g(x);   (∂t − Δx)u = f
since we know what the propagator is for the heat equation:
[S(t, s)g](x) = u(t, x) = ∫Rd Φ(t − s, x − y) g(y) dy

(i.e. we know how to solve the homogeneous system for the heat equation). The Duhamel
formula is very easy to interpret for the heat equation: it just says that the solution is to
propagate the initial data, and also propagate all the 'f's that appear along the way (you
can think of them as 'adding heat'; they are 'additional initial data').
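Unpacking the two displays above into a single formula (my own consolidation), the solution of the inhomogeneous problem is

u(t, x) = ∫Rd Φ(t, x − y) g(y) dy + ∫0^t ∫Rd Φ(t − s, x − y) f(s, y) dy ds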

9 The Wave Equation
Now, we move onto another PDE, the wave equation:

Definition 9.1 (The Wave Equation).


The free wave equation takes the form:

□u = (∂t² − Δx)u = 0,   u : Rt × Rdx → R

where □ is the D'Alembertian. We also want to think about the inhomogeneous equation:

□u = f

Since it is second order, an initial value problem comes with the following data:

□u = f in (0, ∞) × Rd

u = g, ∂t u = h on {0} × Rd

Note that the equation is reversible, i.e. if u(t, x) is a solution, then u(−t, x) is also
a solution. Now, suppose we are in the simplest one dimensional case. Then we have
D’Alembert’s formula:

Theorem 9.1 (D'Alembert's formula for the 1D wave equation).

u(t, x) = (1/2)[g(x + t) + g(x − t)] + (1/2) ∫_{x−t}^{x+t} h(y) dy

In words, you can think of the solution as an average of g over the boundary of the ball
centered at x with radius t (the radius representing how far information has been able to
propagate), together with an integral of the initial velocity h over the interior of that ball.
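A quick numerical sanity check of D'Alembert's formula (my own sketch): with g(x) = sin x and h = 0, the formula collapses to u(t, x) = sin x cos t, a standing wave, by the angle-addition identity.

```python
import numpy as np

# D'Alembert with g = sin, h = 0: u(t, x) = (1/2)(g(x+t) + g(x-t)),
# which should equal the separated solution sin(x) cos(t).
g = np.sin
t = 0.7
xs = np.linspace(-3, 3, 7)

u = 0.5 * (g(xs + t) + g(xs - t))   # the h-integral term vanishes
print(np.max(np.abs(u - np.sin(xs) * np.cos(t))))  # ~ 1e-16
```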
Now, we can find solutions in higher dimensions: Kirchhoff's formula for three-dimensional
space, and Poisson's formula for two-dimensional space. To prove these, roughly, for the
three-dimensional case we introduce spherical averages of u, g, and h, centered at a fixed
spatial point with a given radius, and derive a new set of equations these averages obey.
We solve those, and we get a solution. 3D is good because we have nice 'Stokes-like
theorems'. To reduce down to 2D, we view the solution as a solution in 3D with no
dependence on the third coordinate.
We can consider the backwards light cone of a point in space. In even dimensions, the value
at the point depends on the values inside and on the cone; in odd dimensions, it depends
only on the values on the cone.
Note that Duhamel’s principle also works for the wave equation:

Theorem 9.2 (Duhamel for the wave equation).
Suppose we want to solve:

□u = f,   with u = 0, ∂t u = 0 as initial data.

Then (because it's now a second time derivative, things are a little bit different), if we
let us(t, x) be a solution to:

□us = 0;   us(s, x) = 0,   ∂t us(s, x) = f(s, x)

then we can 'integrate these solutions together':

u(t, x) = ∫0^t us(t, x) ds

We can do a lot with energy methods as well.

Definition 9.2 (Energy of the wave equation).


If we define the energy density as:

e(t, x) = (1/2)|∂t u(t, x)|² + (1/2)|∇x u(t, x)|²

then, at least for u that vanish quickly enough away from the origin (and certainly for u
that vanish on the boundary and are defined only on a bounded set), the integral of this
energy density over space (at constant time) is invariant in time.
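The computation behind this invariance is short (a sketch I am adding; assume □u = 0 and that boundary terms vanish). Writing E(t) = ∫ e(t, x) dx,

d/dt E(t) = ∫ (∂t u · ∂t²u + ⟨∇x u, ∇x ∂t u⟩) dx = ∫ ∂t u · (∂t²u − Δx u) dx = 0

where the middle step integrates the gradient term by parts.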

Something immediately cool: if we have two solutions to a wave equation with the same data,
subtract them and take the energy. They agree on initial data, so the difference has zero
energy at t = 0, hence zero energy for all time; then ∂t and ∇x of the difference vanish
identically, so the difference is constant, and since it starts at zero it is zero everywhere:
uniqueness of solutions.

10 Separation of variables/Fourier series


So we've been talking a lot about initial conditions. Those are great (specifying the situation
at t = 0), but what if you didn't have initial conditions, but rather boundary
conditions? Specifically, there are two kinds of boundary conditions:

Definition 10.1 (Boundary condition types).


Dirichlet Boundary Condition: u(t, x) is specified for x ∈ ∂Ω
Neumann Boundary Condition: ∇u(t, x) · ~n(x) is specified for x ∈ ∂Ω

The homogeneous case is usually good (where either of the two quantities is zero on the
boundary). Here, just to be clear, the boundary refers to the spatial boundary. The key
idea with separation of variables is to look for solutions

u(t, x) = T (t)X(x)

separate out the time part from the spatial part, and set each equal to a constant. You then
hope you can build up solutions from these separated solutions. Usually you can.
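Here is the standard example, worked in my own words: take the heat equation ∂t u = ∂x²u on (0, L) with homogeneous Dirichlet conditions u(t, 0) = u(t, L) = 0. Substituting u = T(t)X(x) and dividing gives

T′(t)/T(t) = X″(x)/X(x) = −λ

a function of t alone equal to a function of x alone, so both equal a constant −λ. The boundary conditions force X(x) = sin(nπx/L) with λ = (nπ/L)², and then T(t) = e^{−(nπ/L)²t}, so each

un(t, x) = e^{−(nπ/L)²t} sin(nπx/L)

is a separated solution; we then try to match the initial data by superposing them. Superpositions of this kind are exactly Fourier series, and we have a nice theorem: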

Theorem 10.1 (Fourier).


If F : [−L, L] → C is periodic (that is, F(−L) = F(L)) and smooth, then we can write
F(x) as the Fourier series

F(x) = Σ_{n=−∞}^{∞} cn e^{inπx/L},   where cn = (1/(2L)) ∫_{−L}^{L} F(y) e^{−inπy/L} dy
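A numerical sanity check of the theorem (my own sketch): compute the cn by quadrature for a smooth periodic F and confirm the truncated series reproduces F.

```python
import numpy as np

# Fourier coefficients c_n = (1/2L) * int_{-L}^{L} F(y) e^{-i n pi y / L} dy
# for a smooth 2L-periodic F, then reconstruction from the truncated series.
L = 1.0
ys = np.linspace(-L, L, 4000, endpoint=False)
dy = ys[1] - ys[0]
F = np.exp(np.cos(np.pi * ys))   # smooth and 2L-periodic

N = 20
ns = np.arange(-N, N + 1)
c = np.array([np.sum(F * np.exp(-1j * n * np.pi * ys / L)) * dy / (2 * L)
              for n in ns])

F_rec = np.real(sum(c[k] * np.exp(1j * ns[k] * np.pi * ys / L)
                    for k in range(len(ns))))
print(np.max(np.abs(F - F_rec)))  # near machine precision: for smooth F
                                  # the coefficients decay extremely fast
```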

Now that we have started talking about Fourier series, we are really thinking about
orthonormal bases in function spaces. We have Bessel's inequality/Parseval's identity:

Theorem 10.2 (Bessel/Parseval).


If {vn}n∈Z is an orthonormal set (not necessarily a basis) in an inner product space X,
then for any f ∈ X:

Σn |⟨f, vn⟩|² ≤ ||f||²

with equality (Parseval) if {vn} is an orthonormal basis.

This is pretty obvious: it is essentially just expansion in an orthonormal basis.

