Notes for the Spring 2018 Math 126 Course taught by Professor Jao
prepared by Joshua Lin (email: joshua.z.lin@gmail.com)
May 3, 2018
K̄ = K ∪ {limit points of K}
where x is a limit point of K if there exists a sequence {xn } in K\{x} such that lim xn = x.
The boundary of K is the set of points x such that every ball Br (x) intersects both K and
its complement.
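As a quick worked example (not in the original notes), take K = (0, 1) ⊂ R:

```latex
\bar{K} = (0,1) \cup \{0,1\} = [0,1],
\qquad
\partial K = \{0,1\},
```

since every ball around 0 or 1 intersects both (0, 1) and its complement.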
Theorem 2.2 (Green’s Identities).
Let Ω ⊂ Rd have smooth boundary. For any smooth functions u, v:
Definition 3.2 (Mollification).
Let η ∈ Cc∞ (Rd ) (smooth with compact support) satisfy ∫ η(x) dx = 1. For each 0 < h ≤ 1, define ηh (x) = h^{−d} η(h^{−1}x); note that ∫ ηh (x) dx = 1 as well. Then {ηh }h is called an approximate identity or mollifier.
For a function f , define the mollification of f as
fh (x) = (f ∗ ηh )(x)
Theorem 3.2.
Let f be a continuous function on Rd . Then f ∗ ηh is smooth for each h, and fh → f uniformly on compact sets as h → 0.
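A small numerical sketch of this (a hypothetical illustration, not from the notes; the helper names `eta` and `mollify` and the test point are made up here): mollifying f(x) = |x| with the standard bump function smooths out the kink, and the error at the kink shrinks with h.

```python
import math

def eta(x):
    # the standard bump function, supported in (-1, 1) (unnormalised)
    return math.exp(-1.0 / (1.0 - x * x)) if abs(x) < 1 else 0.0

def mollify(f, x, h, n=400):
    # numerically approximate (f * eta_h)(x) = integral of f(x - y) eta_h(y) dy,
    # where eta_h(y) = h^{-1} eta(y / h) is renormalised to have unit mass
    dy = 2.0 * h / n
    ys = [-h + (i + 0.5) * dy for i in range(n)]
    mass = sum(eta(y / h) / h for y in ys) * dy
    return sum(f(x - y) * eta(y / h) / h for y in ys) * dy / mass

# f(x) = |x| is continuous but not differentiable at 0; the mollification
# f_h is smooth, and f_h(0) -> f(0) = 0 as h -> 0
for h in [0.5, 0.1, 0.02]:
    print(h, mollify(abs, 0.0, h))
```

The printed errors shrink roughly linearly in h, consistent with fh → f uniformly on compact sets.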
4 Distributions
We usually think about functions, as, well, functions. But instead, we can think of them as 'distributions', in the following sense. Suppose I take f ; then it is interesting to see the behaviour of integrals of the form
∫ f (x)φ(x) dx
as φ ranges over some well-behaved set of functions. The question we want to ask is: given these integrals, can we recover f ? The following says yes in some sense:
Theorem 4.1.
Let f1 , f2 be continuous on Rd , and suppose that
∫ f1 (x)φ(x) dx = ∫ f2 (x)φ(x) dx
for all test functions φ. Then f1 = f2 .
Definition 4.1.
A test function is a smooth, compactly supported function; the collection of test functions is denoted D(Rd ), and is a vector space under pointwise operations.
Definition 4.2.
A distribution on Rd is a linear functional l on D(Rd ) which also satisfies the following continuity property: if φ, φn ∈ D(Rd ), and
(1) - there exists R > 0 such that supp(φn ), supp(φ) ⊂ BR (0) for all n (the supports are uniformly bounded),
(2) - limn→∞ maxx∈Rd |Dα φn (x) − Dα φ(x)| = 0 for all multiindices α (i.e. the derivatives of φn converge uniformly to the derivatives of φ),
then lim l(φn ) = l(φ).
You might be wondering what on earth this continuity condition imposes on the distributions. And I would say to you: What a good question! Moving on now.
Note that for any continuous f , we can associate a distribution
lf : φ → ∫ f (x)φ(x) dx
where φ is a test function. However, we have enlarged our brains, because we now have exotic distributions that don't correspond to functions, e.g. the Dirac delta function:
δ : φ → φ(0)
Note that any mollifier limits to a delta function in the space of distributions, to be precise:
Definition 4.3.
A sequence of functions fn converges to l ∈ D′(Rd ) in the sense of distributions if the associated distributions lfn satisfy lfn (φ) → l(φ) for every test function φ.
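A numerical illustration of the mollifier-to-delta statement above (hypothetical, not from the notes; here φ = cos stands in for a test function, since only its values near 0 matter for small h):

```python
import math

def eta(x):
    # the standard bump function, supported in (-1, 1) (unnormalised)
    return math.exp(-1.0 / (1.0 - x * x)) if abs(x) < 1 else 0.0

def pair(phi, h, n=400):
    # approximate the pairing l_{eta_h}(phi) = integral of eta_h(x) phi(x) dx,
    # with eta_h(x) = h^{-1} eta(x / h) renormalised to unit mass
    dx = 2.0 * h / n
    xs = [-h + (i + 0.5) * dx for i in range(n)]
    mass = sum(eta(x / h) / h for x in xs) * dx
    return sum(eta(x / h) / h * phi(x) for x in xs) * dx / mass

for h in [0.5, 0.1, 0.02]:
    print(h, pair(math.cos, h))  # tends to cos(0) = 1 = delta applied to cos
```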
Now, we have various operations on distributions, which are motivated by thinking about their 'functional counterparts' (e.g. imagine everything is a function, think about how it should behave, and then impose this as a rule on the space of distributions).
Definition 4.4.
We can multiply distributions by smooth functions:
(hl)(φ) = l(hφ)
We can translate distributions: (Ty l)(φ) = l(T−y φ), where Ty f (x) = f (x − y). Finally, we can convolve with a test function η to get a
function of x:
(l ∗ η)(x) = l(Tx rη)
where r is the reflection rg(x) = g(−x).
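As a sanity check (a standard computation, sketched here for completeness): convolving δ with a test function η just recovers η, since the convolution asks δ to evaluate y ↦ η(x − y) at y = 0:

```latex
(\delta * \eta)(x) = \delta\bigl( y \mapsto \eta(x - y) \bigr) = \eta(x - 0) = \eta(x).
```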
Observe that:
a(t, x)ut + b(t, x)ux = (a(t, x), b(t, x)) · (ut , ux )
represents the directional derivative of u in the (a, b) direction. So, to solve our PDE, consider characteristics of our PDE, which are curves γ(s) that satisfy
γ′(s) = (a(γ(s)), b(γ(s)))
i.e. they are curves that follow the vector (a, b) along the plane. Now, if we define z(s) = u(γ(s)), i.e. the value of u on the curve γ as parametrized by s, then we find that
z′(s) = γ′(s) · ∇u(γ(s)) = (a ut + b ux )(γ(s))
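A concrete sketch (a hypothetical constant-coefficient example, not from the notes): for ut + c ux = 0 with u(0, x) = g(x), the characteristics are the lines s ↦ (s, x0 + cs), u is constant along them, and so u(t, x) = g(x − ct).

```python
import math

c = 2.0
g = lambda x: math.exp(-x * x)  # some initial data

def u(t, x):
    # trace the characteristic through (t, x) back to time 0
    return g(x - c * t)

# u is constant along the characteristic s -> (s, x0 + c*s):
x0 = 0.3
print([u(s, x0 + c * s) for s in (0.0, 0.5, 1.0)])  # all equal g(x0)
```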
6 Laplace’s Equation
As a first (slightly strange) note, note that Laplace’s equation is invariant under rotations,
i.e. if u is a solution and S is a rotation, then u ◦ S is also a solution of Laplace’s equation.
Now, to find our first solution (on the entire space), let’s look for radial solutions u(x) =
v(r). Plugging this into our equation, we find that
v(r) = A/r^{d−2} + B,  d ≥ 3
v(r) = A log r + B,  d = 2
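The computation behind this is the standard radial reduction (sketched here for completeness): in radial coordinates, the Laplacian of u(x) = v(r) is v′′ + (d − 1)v′/r, so

```latex
v''(r) + \frac{d-1}{r}\, v'(r) = 0
\;\Longrightarrow\; \frac{v''}{v'} = \frac{1-d}{r}
\;\Longrightarrow\; v'(r) = C r^{\,1-d},
```

and integrating v′ once more gives r^{2−d} (up to constants) when d ≥ 3, and log r when d = 2.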
−∆Φ = δ0
We essentially prove this using Green's Identities, stated in the first section on multivariable calculus, which is just another flavor of Stokes' Theorem, I believe.
Now, as a corollary to the above fact, we have:
Which we get, once again, by using Green's Identities, and also using bounds on the fundamental solution (far away from the origin it decays like ... etc). Now, if we start investigating general harmonic functions:
Theorem 6.3 (Mean Value Formulas).
Let Ω ⊂ Rd be open and u ∈ C²(Ω) be harmonic. Then
u(x) = (1/A(∂Br (x))) ∫_{∂Br (x)} u dS = (1/vol(Br (x))) ∫_{Br (x)} u dx
Essentially, we integrate u over surfaces and volumes of balls, and show that this integral doesn't change as a function of the radius of the ball, by using its harmonic nature, Green's identities, the Divergence Theorem, and such.
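A quick numerical sanity check (a hypothetical example, not from the notes): u(x, y) = x² − y² is harmonic, and its average over circles around (1, 2) equals u(1, 2) = −3 regardless of the radius.

```python
import math

def u(x, y):
    # a harmonic function: u_xx + u_yy = 2 - 2 = 0
    return x * x - y * y

def circle_average(cx, cy, r, n=10000):
    # average u over n equally spaced points on the circle of radius r
    total = sum(u(cx + r * math.cos(2 * math.pi * k / n),
                  cy + r * math.sin(2 * math.pi * k / n)) for k in range(n))
    return total / n

for r in [0.5, 1.0, 1.3]:
    print(r, circle_average(1.0, 2.0, r))  # always u(1, 2) = -3
```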
Essentially, assume not. Then pick a point with nonzero Laplacian, and take some integrals in balls close to that point.
If u attains its maximum at an interior point of the connected domain Ω, then u is constant.
As a corollary, the weak version:
maxΩ̄ u = max∂Ω u
u is continuous, so A is open. Now suppose that B is nonempty, and pick a point x0 such that u attains its maximum M at x0 , and pick a neighbourhood around x0 . Then, if u is not identically M on this neighbourhood, there is a point inside where it is submaximal, so the average of u over the ball is less than M ; but this contradicts the mean value formula! Hence u is identically M on this neighbourhood. So B is open.
Since Ω is assumed to be connected, B nonempty implies A is empty. So u(x) = M on the whole domain.
−∆u = f on Ω; u|∂Ω = g
The maximum principle tells us there is at most one solution to a Dirichlet Problem.
Now, we have Green’s Representation Formula:
The whole point being that we can evaluate these integrals given f, g (i.e. notice that in the surface integral we know u = g, and in the volume integral we know ∆u = f ).
Usually, we want to solve an initial value problem, where u has been defined for t = 0.
Now, we have some observations:
We find a solution that obeys the symmetry and the scaling laws we observed:
Definition 7.2 (The Fundamental Solution).
Φ(t, x) = (4πt)^{−d/2} e^{−|x|²/4t} ; t > 0
and Φ = 0 for t < 0 is a solution.
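A finite-difference sanity check (hypothetical, in d = 1, not from the notes) that Φ solves the heat equation Φt = Φxx for t > 0:

```python
import math

def Phi(t, x):
    # the d = 1 heat kernel: (4 pi t)^{-1/2} exp(-x^2 / 4t)
    return math.exp(-x * x / (4.0 * t)) / math.sqrt(4.0 * math.pi * t)

# central finite differences at an arbitrary point (t, x) = (0.7, 0.4)
t, x, eps = 0.7, 0.4, 1e-4
phi_t = (Phi(t + eps, x) - Phi(t - eps, x)) / (2.0 * eps)
phi_xx = (Phi(t, x + eps) - 2.0 * Phi(t, x) + Phi(t, x - eps)) / eps ** 2
print(phi_t, phi_xx)  # nearly equal
```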
Note that this fundamental solution has been chosen to be normalised. To solve our initial
problem, just as with Laplace’s equation, we take a convolution and our worries disappear:
Now, when we talk about the heat equation, it’s often convenient to talk about the
following subsets:
ΩT = (0, T ] × Ω
Proof: We have a calculus proof! We can consider subsolutions (in this case strict):
uε = u − εt; (∂t − ∆)uε = −ε < 0 on ΩT
and in other situations we might want to look at supersolutions
(∂t − ∆)v ≥ 0
Note that uε has no local maxima in ΩT , which essentially comes from noting that at a maximum the Hessian must be nonpositive, and that first derivatives vanish. Now, we just consider by contradiction a maximum of u in ΩT and show that, for small ε, it gives a maximum of uε in ΩT , a contradiction.
This leads to a calculus-based proof of the maximum principle for the Laplacian. Suppose that Ω is contained in a region |x| < M , and let uε = u + ε e^{x1}. Then ∆uε = ε e^{x1} > 0 everywhere, so uε cannot have an interior maximum (at an interior maximum the Hessian, and hence the Laplacian, is nonpositive). So assume that u has an interior maximum; for small ε this leads to an interior maximum of uε , a contradiction.
8 Duhamel's Principle
There is a different way of thinking about PDEs than the one we have been using. In particular, instead of thinking about u(t, x) as the object of interest, we can think of u(t, ·) as the object of interest (the functions themselves, that map x to the heat value). Then, we have a 'first order' system:
d/dt u(t, ·) = ∆u(t, ·)
where ∆ is thought of as a linear operator on the function space. (Note, for happiness, we only consider bounded, continuous functions.) Once we start thinking in this funky way,
we can introduce ’propagators’, in the sense that for any g initial data, we can let u be the
unique solution to:
u(s, ·) = g; (∂t − ∆)u = 0
Then we let the propagator be:
S(t, s)g = u(t, ·)
which ’propagates’ the initial condition g from time s to time t. The obvious rules follow
(i.e. S(a, b)S(b, c) = S(a, c)). Now, we can appeal to Duhamel’s formula, which tells us:
Theorem 8.1 (Duhamel's formula).
To solve the inhomogeneous system
d/dt u(t, ·) = ∆u(t, ·) + f (t, ·); u(0, ·) = g,
the solution is given by
u(t, ·) = S(t, 0)g + ∫_0^t S(t, s)f (s, ·) ds
where S is the propagator for the homogeneous system (i.e. the system with no f , which is assumed to be easy to solve).
Now, we are in good shape, because this tells us how to solve the inhomogeneous heat equation:
u(0, x) = g(x); (∂t − ∆x )u = f
since we know what the propagator is for the heat equation:
[S(t, s)g](x) = u(t, x) = ∫_{Rd} Φ(t − s, x − y)g(y) dy
(i.e. we know how to solve the homogeneous system for the heat equation). The Duhamel formula is very easy to interpret for the heat equation: it just says that the solution is to propagate the initial data, and also propagate all the 'f 's that appear along the way (you can think of them as 'adding heat'; they are 'additional initial data').
9 The Wave Equation
Now, we move onto another PDE, the wave equation:
□u = (∂t² − ∆x )u = 0, u : Rt × Rdx → R
where □ is the D'Alembertian. We also want to think about the inhomogeneous equation:
□u = f
Since it is second order, an initial value problem gives the following data:
□u = f in (0, ∞) × Rd
u = g, ∂t u = h on {0} × Rd
Note that the equation is reversible, i.e. if u(t, x) is a solution, then u(−t, x) is also
a solution. Now, suppose we are in the simplest one dimensional case. Then we have
D’Alembert’s formula:
u(t, x) = ½ [g(x + t) + g(x − t)] + ½ ∫_{x−t}^{x+t} h(y) dy
In words, you can think of the solution as an average of g (over the surface of a ball centered at x with radius t, representing how far information has been able to propagate), along with an integral of the initial velocity h over the interior of the ball.
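A sanity check (a hypothetical example, not from the notes): with g(x) = sin(x) and h = 0, d'Alembert's formula gives u(t, x) = ½[sin(x + t) + sin(x − t)] = sin(x) cos(t), which indeed solves utt = uxx.

```python
import math

def u(t, x):
    # d'Alembert's formula with g = sin, h = 0
    return 0.5 * (math.sin(x + t) + math.sin(x - t))

# check the wave equation by central finite differences at (0.8, 1.1)
t, x, eps = 0.8, 1.1, 1e-4
u_tt = (u(t + eps, x) - 2.0 * u(t, x) + u(t - eps, x)) / eps ** 2
u_xx = (u(t, x + eps) - 2.0 * u(t, x) + u(t, x - eps)) / eps ** 2
print(u_tt - u_xx)                           # approximately 0
print(u(t, x) - math.sin(x) * math.cos(t))   # zero up to roundoff
```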
Now, we can find solutions in higher dimensions: Kirchhoff's formula for three dimensional space, and Poisson's formula for two dimensional space. To prove these, roughly, for the three dimensional case we introduce averages of u, g, and h, centered at a fixed spatial point of a given radius, and we come up with a new set of equations these averages obey. We solve these, and we get a solution. 3D is good because we have nice 'Stokes-like theorems'. To reduce down to 2D, we view the solution as a solution in 3D with no dependence on the third coordinate.
We can consider the backwards light cone of a point in space. In even dimensions, the value at the point depends on the values inside and on the cone; in odd dimensions, it depends only on the values on the cone.
Note that Duhamel’s principle also works for the wave equation:
Theorem 9.2 (Duhamel for the wave equation).
Suppose we want to solve:
u = 0, ∂t u = 0 as initial data; □u = f
Then (because it’s now a second time derivative, things are a little bit different), if we
let us (t, x) be a solution to:
Something immediately cool: If we have two solutions to a wave equation, subtract them,
and take the energy. They agree on initial data, so zero energy, so the difference must be
zero everywhere, so uniqueness of solution.
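The energy computation being used here, sketched for completeness (a standard calculation, assuming enough decay at spatial infinity to integrate by parts): for a solution of □u = 0, set

```latex
E(t) = \frac{1}{2} \int_{\mathbb{R}^d} \bigl( u_t^2 + |\nabla u|^2 \bigr)\, dx,
\qquad
E'(t) = \int u_t u_{tt} + \nabla u \cdot \nabla u_t \, dx
      = \int u_t \bigl( u_{tt} - \Delta u \bigr)\, dx = 0.
```

So the energy is conserved; a solution with zero data has E(t) = 0 for all t, forcing ut = 0 and ∇u = 0.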
The homogeneous case is usually good (where either of the two quantities is zero on the boundary). Here, just to be clear, the boundary refers to the spatial boundary. The key idea with separation of variables is to look for solutions
u(t, x) = T (t)X(x)
separate out the time parts from the spatial parts, set each to a constant. And then you
hope you can build up solutions from these separated solutions. Usually you can. We have
a nice theorem:
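A standard instance of the recipe (a hypothetical example, not worked in the notes): the heat equation ut = uxx on (0, π) with u(t, 0) = u(t, π) = 0. Substituting u = T(t)X(x):

```latex
\frac{T'(t)}{T(t)} = \frac{X''(x)}{X(x)} = -\lambda
\;\Longrightarrow\;
X_n(x) = \sin(nx),\quad \lambda_n = n^2,\quad T_n(t) = e^{-n^2 t},
```

so the candidate solutions are superpositions u(t, x) = Σn bn e^{−n²t} sin(nx), with bn the Fourier sine coefficients of the initial data.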
Now that we have started talking about Fourier series, we are really thinking about orthonormal bases in function spaces. We have Bessel's inequality/Parseval's identity:
Σn |⟨f, vn ⟩|² ≤ ‖f ‖²
with equality if {vn } is a basis.