
Image Quality Metrics

Image quality metrics


Mutual information (cross-entropy) metric
Intuitive definition
Rigorous definition using entropy

Example: two-point resolution problem


Example: confocal microscopy

Square error metric


Receiver Operator Characteristic (ROC)

Heterodyne detection
MIT 2.717
Image quality metrics p-1

Linear inversion model

[Block diagram: object (physical attributes f) → hardware channel (field propagation) → detection → measurement g]

inversion problem:
determine f, given the measurement

g =H f

noise-to-signal ratio (NSR) = (noise variance) / (average signal power) = σ²/1 = σ²

normalizing signal power to 1


MIT 2.717
Image quality metrics p-2

Mutual information (cross-entropy)

[Block diagram: object (physical attributes f) → hardware channel (field propagation) → detection → measurement g]

C = (1/2) Σ_{k=1..n} ln(1 + λ_k² / σ²)

λ_k : eigenvalues of H

MIT 2.717
Image quality metrics p-3

The significance of eigenvalues

rank of measurement
(aka how many dimensions the measurement is worth)

C = (1/2) Σ_{k=1..n} ln(1 + λ_k² / σ²)

[Bar plot: eigenvalues of H in decreasing order, λ_1² ≥ λ_2² ≥ … ≥ λ_{n-1}² ≥ λ_n²; the noise level σ² sets the floor below which eigenvalues no longer contribute]

MIT 2.717
Image quality metrics p-4

Precision of measurement

C = (1/2) Σ_{k=1..n} ln(1 + λ_k²/σ²)
  = … + (1/2) ln(1 + λ_{t-2}²/σ²) + (1/2) ln(1 + λ_{t-1}²/σ²) + (1/2) ln(1 + λ_t²/σ²) + …

noise floor: λ_t² < σ² < λ_{t-1}²

The λ_{t-2} term gives the precision of the (t-2)th measurement;
in the λ_t term the argument is ≈ 1, so it contributes ≈ 0.

E.g. 0.5470839348: these digits are worthless if λ²/σ² ≈ 10⁻⁵

MIT 2.717
Image quality metrics p-5

Formal definition of cross-entropy (1)

Entropy in thermodynamics (discrete systems):


log2[how many are the possible states of the system?]
E.g. two-state system: fair coin, outcome=heads (H) or tails (T)
Entropy = log₂ 2 = 1
Unfair coin: seems more reasonable to weigh the two states
according to their frequencies of occurrence (i.e., probabilities)

Entropy = - Σ_states p(state) log₂ p(state)
MIT 2.717
Image quality metrics p-6

Formal definition of cross-entropy (2)

Fair coin: p(H) = 1/2; p(T) = 1/2

Entropy = -(1/2) log₂(1/2) - (1/2) log₂(1/2) = 1 bit

Unfair coin: p(H) = 1/4; p(T) = 3/4

Entropy = -(1/4) log₂(1/4) - (3/4) log₂(3/4) = 0.81 bits

Maximum entropy ⇔ Maximum uncertainty

MIT 2.717
Image quality metrics p-7
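The two coin entropies above can be verified with a few lines of Python (an added sketch, not part of the original slides; the probabilities are the ones quoted above):

```python
import math

def entropy_bits(probs):
    """Shannon entropy in bits: -sum p log2 p, skipping zero-probability states."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

print(entropy_bits([0.5, 0.5]))    # fair coin   -> 1.0 bit
print(entropy_bits([0.25, 0.75]))  # unfair coin -> ~0.811 bits
```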

Formal definition of cross-entropy (3)

Joint Entropy ≡
log₂[how many are the possible states of a combined variable
obtained from the Cartesian product of two variables?]

Joint Entropy(X, Y) = - Σ_{x∈X} Σ_{y∈Y} p(x, y) log p(x, y)

E.g. Joint Entropy(F , G ) = ?

[Block diagram: object f (physical attributes) → hardware channel (field propagation) → detection → measurement]

MIT 2.717
Image quality metrics p-8

Formal definition of cross-entropy (4)

Conditional Entropy ≡
log₂[how many are the possible states of a combined variable
given the actual state of one of the two variables?]

Cond. Entropy(Y | X) = - Σ_{x∈X} Σ_{y∈Y} p(x, y) log p(y | x)

E.g. Cond. Entropy(G | F ) = ?

[Block diagram: object f (physical attributes) → hardware channel (field propagation) → detection → measurement]

MIT 2.717
Image quality metrics p-9

Formal definition of cross-entropy (5)

[Block diagram: object (physical attributes) → hardware channel (field propagation) → detection → measurement]

Noise adds uncertainty to the measurement wrt the object
⇒ eliminates information from the measurement wrt the object

MIT 2.717
Image quality metrics p-10

Formal definition of cross-entropy (6)

[Diagram (representation by Seth Lloyd, 2.100): Entropy(F), the information contained in the object, and Entropy(G), the information contained in the measurement, overlap in C(F, G), the cross-entropy (aka mutual information); Cond. Entropy(F | G) is the uncertainty added due to noise; Cond. Entropy(G | F) is the information eliminated due to noise]

MIT 2.717
Image quality metrics p-11

Formal definition of cross-entropy (7)

[Diagram: Joint Entropy(F, G) is the union of Entropy(F) and Entropy(G); it decomposes into Cond. Entropy(F | G), the overlap C(F, G), and Cond. Entropy(G | F)]

MIT 2.717
Image quality metrics p-12

Formal definition of cross-entropy (8)

[Diagram: F, the information source (object), passes through the Physical Channel (transform), which is subject to a Corruption source (Noise), to G, the information receiver (measurement)]

C(F, G) = Entropy(F) - Cond. Entropy(F | G)
        = Entropy(G) - Cond. Entropy(G | F)
        = Entropy(F) + Entropy(G) - Joint Entropy(F, G)
MIT 2.717
Image quality metrics p-13
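The three equivalent expressions can be checked numerically on any discrete joint distribution. A minimal added sketch (the 2×2 joint probabilities below are made up for illustration):

```python
import numpy as np

p_fg = np.array([[0.40, 0.10],     # hypothetical joint pmf p(f, g), sums to 1
                 [0.05, 0.45]])

def H(p):
    """Entropy in bits of a probability array (zeros contribute nothing)."""
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

p_f = p_fg.sum(axis=1)             # marginal of F
p_g = p_fg.sum(axis=0)             # marginal of G

H_F, H_G, H_FG = H(p_f), H(p_g), H(p_fg)
C = H_F + H_G - H_FG               # cross-entropy (mutual information)

# the other two identities from the slide, using H(F|G) = H(F,G) - H(G) etc.
print(np.isclose(C, H_F - (H_FG - H_G)))   # C = Entropy(F) - Cond. Entropy(F|G)
print(np.isclose(C, H_G - (H_FG - H_F)))   # C = Entropy(G) - Cond. Entropy(G|F)
print(C)
```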

Entropy & Differential Entropy

Discrete objects (can take values among a discrete set of states)


definition of entropy

Entropy = - Σ_k p(x_k) log₂ p(x_k)

unit: 1 bit (= entropy value of a YES/NO question with 50% uncertainty)

Continuous objects (can take values from among a continuum)
definition of differential entropy

Diff. Entropy = - ∫ p(x) ln p(x) dx

unit: 1 nat (= diff. entropy value of a significant digit in the
representation of a random number, divided by ln 10)

MIT 2.717
Image quality metrics p-14

Image Mutual Information (IMI)

[Block diagram: object f (physical attributes) → hardware channel (field propagation) → detection → measurement]

Assumptions:
(a) F has Gaussian statistics
(b) white additive Gaussian noise (waGn), i.e.
    g = H f + w
    where w is a Gaussian random vector with diagonal correlation matrix

Then

C(F, G) = (1/2) Σ_{k=1..n} ln(1 + λ_k² / σ²)

λ_k : eigenvalues of H

MIT 2.717
Image quality metrics p-15
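A minimal numerical sketch of this formula (added here; the example matrix H and noise variance σ² are arbitrary choices, not from the slides):

```python
import numpy as np

def imi_from_eigenvalues(H, sigma2):
    """C(F,G) = (1/2) sum ln(1 + lambda_k^2 / sigma^2), in nats,
    for square H with a Gaussian object and white additive Gaussian noise."""
    lam = np.linalg.eigvals(H)
    return 0.5 * np.sum(np.log(1.0 + np.abs(lam)**2 / sigma2))

H = np.array([[1.0, 0.3],
              [0.3, 1.0]])                    # example system matrix
print(imi_from_eigenvalues(H, sigma2=0.01))   # nats
```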

Mutual information &

degrees of freedom

rank of measurement: n

[Bar plot: eigenvalues of H in decreasing order, λ_1² ≥ λ_2² ≥ … ≥ λ_{n-1}² ≥ λ_n², against the noise level σ²]

mutual information
C = (1/2) Σ_{k=1..n} ln(1 + λ_k² / σ²)

As noise increases
  one rank of H is lost whenever σ² overcomes a new eigenvalue
  the remaining ranks lose precision
MIT 2.717
Image quality metrics p-16

Example: two-point resolution

Finite-NA imaging system, unit magnification

[Figure: two point-sources f_A, f_B (object), separated by Δx, imaged onto two point-detectors Ã, B̃ giving measurements g_A, g_B. Classical view: intensities emitted → noiseless intensity at the detector plane → intensities measured]

MIT 2.717
Image quality metrics p-17

Cross-leaking power
g_A = f_A + s f_B
g_B = s f_A + f_B
s = sinc²(Δx)

[Plot: cross-leaked power s between detectors Ã and B̃]

MIT 2.717
Image quality metrics p-18

IMI for two-point resolution problem

H = [ 1  s ]
    [ s  1 ]

det(H) = 1 - s²

λ₁ = 1 + s
λ₂ = 1 - s

H⁻¹ = 1/(1 - s²) [  1  -s ]
                 [ -s   1 ]

C(F, G) = (1/2) ln(1 + (1 - s)²/σ²) + (1/2) ln(1 + (1 + s)²/σ²)

MIT 2.717
Image quality metrics p-19
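An added sketch of this result: sweep the cross-leak s and evaluate C(F, G) from the two eigenvalues 1 ± s, assuming a hypothetical noise variance σ² = 0.01 (SNR = 100):

```python
import numpy as np

sigma2 = 0.01                       # assumed noise variance, SNR = 1/sigma2 = 100
s = np.linspace(0.0, 0.99, 100)     # cross-leaking power: s -> 0 well separated, s -> 1 merged

# eigenvalues of H = [[1, s], [s, 1]] are 1 + s and 1 - s, so
C = 0.5 * np.log(1 + (1 - s)**2 / sigma2) + 0.5 * np.log(1 + (1 + s)**2 / sigma2)

print(C[0], C[-1])                  # the IMI drops as the two sources merge (s -> 1)
```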

IMI vs source separation

(SNR) = 1/σ²

[Plot: IMI C(F, G) vs. source separation Δx for several SNR values, between the limits s → 1 (sources unresolved) and s → 0 (sources well separated)]

MIT 2.717
Image quality metrics p-20

IMI for rectangular matrices (1)

g = H f    (H rectangular)

underdetermined
(more unknowns than
measurements)

overdetermined
(more measurements
than unknowns)

eigenvalues cannot be computed, but instead


we compute the singular values of the
rectangular matrix
MIT 2.717
Image quality metrics p-21

IMI for rectangular matrices (2)

recall the pseudo-inverse:

f̂ = (Hᵀ H)⁻¹ Hᵀ g

Hᵀ H is a square matrix; the inversion operation is associated with the rank of Hᵀ H

eigenvalues of Hᵀ H ↔ singular values of H (squared)
MIT 2.717
Image quality metrics p-22

IMI for rectangular matrices (3)

[Block diagram: object f (physical attributes) → hardware channel, under/over-determined → detection → measurement]

C = (1/2) Σ_{k=1..n} ln(1 + s_k² / σ²)

s_k : singular values of H

MIT 2.717
Image quality metrics p-23
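For rectangular H the same sum runs over singular values, which numpy's SVD returns directly. An added sketch (the random matrices and σ² are placeholders):

```python
import numpy as np

def imi_rectangular(H, sigma2):
    """IMI in nats for a rectangular system matrix H, using its singular values."""
    s = np.linalg.svd(H, compute_uv=False)
    return 0.5 * np.sum(np.log(1.0 + s**2 / sigma2))

rng = np.random.default_rng(0)
H_under = rng.standard_normal((4, 8))   # underdetermined: more unknowns than measurements
H_over  = rng.standard_normal((8, 4))   # overdetermined: more measurements than unknowns
print(imi_rectangular(H_under, 0.01), imi_rectangular(H_over, 0.01))
```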

Confocal microscope

[Figure: confocal microscope, comprising object, beam splitter, pinhole in front of an intensity detector; the pinhole selects a virtual slice of the object]

Small pinhole:
  good depth resolution
  poor light efficiency
Large pinhole:
  poor depth resolution
  good light efficiency

MIT 2.717
Image quality metrics p-24

Depth resolution vs. noise

Object structure: point sources along the optical axis, mutually incoherent, separated by the sampling distance

Imaging method: CFM, object scanned along the optical axis; intensity measurements taken in correspondence with the object points (NA = 0.2)

MIT 2.717
Image quality metrics p-25

Depth resolution

vs. noise & pinhole size

[Plot: depth resolution vs. noise and pinhole size; pinhole size in units of the Rayleigh distance]

MIT 2.717
Image quality metrics p-26

IMI summary

It quantifies the number of possible states of the object that the


imaging system can successfully discern; this includes
the rank of the system, i.e. the number of object dimensions that
the system can map
the precision available at each rank, i.e. how many significant
digits can be reliably measured at each available dimension
An alternative interpretation of IMI is the game of 20 questions: how
many questions about the object can be answered reliably based on the
image information?
IMI is intricately linked to image exploitation for applications, e.g.
medical diagnosis, target detection & identification, etc.
Unfortunately, it can be computed in closed form only for additive
Gaussian statistics of both object and image; other more realistic
models are usually intractable
MIT 2.717
Image quality metrics p-27

Other image quality metrics

Mean Square Error (MSQ) between object and image

E = Σ_k (f_k - f̂_k)²

f_k : object samples;  f̂ : result of inversion

e.g. pseudoinverse minimizes MSQ in an overdetermined problem

obvious problem: most of the time, we don't know what f is!
more when we deal with Wiener filters and regularization
Receiver Operator Characteristic
measures the performance of a cognitive system (human or
computer program) in a detection or estimation task based on the
image data

MIT 2.717
Image quality metrics p-28

Receiver Operator Characteristic

Target detection task


Example: medical diagnosis,
H0 (null hypothesis) =
no tumor
H1 = tumor
TP = true positive (i.e. correct
identification of tumor)
FP = false positive (aka false
alarm)

MIT 2.717
Image quality metrics p-29
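As an illustration of how an ROC curve is traced (an added sketch, not from the slides): for a Gaussian no-tumor/tumor model like the one in Problem Set 2 below, sweep the decision threshold and record the false-alarm and detection rates; the means and variance chosen here are arbitrary.

```python
import numpy as np
from math import erfc, sqrt

V1, V2, sigma = 0.0, 1.5, 1.0            # assumed means under H0 (no tumor) and H1 (tumor)

def gauss_tail(thr, mean, s):
    """P(U > thr) for U ~ N(mean, s^2)."""
    return 0.5 * erfc((thr - mean) / (s * sqrt(2)))

# each threshold gives one (FP, TP) point of the ROC curve
for thr in np.linspace(-2.0, 4.0, 7):
    FP = gauss_tail(thr, V1, sigma)      # false-alarm rate, P(declare tumor | no tumor)
    TP = gauss_tail(thr, V2, sigma)      # detection rate,   P(declare tumor | tumor)
    print(f"threshold {thr:+.1f}: FP = {FP:.3f}, TP = {TP:.3f}")
```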

Introduction to Inverse Problems


What is an image? Attributes and Representations
Forward vs Inverse
Optical Imaging as Inverse Problem
Incoherent and Coherent limits
Dimensional mismatch: continuous vs discrete
Singular vs ill-posed
Ill-posedness: a 2×2 example

MIT 2.717
Intro to Inverse Problems p-1

Basic premises
What you see or imprint on photographic film is a very narrow
interpretation of the word image
Image is a representation of a physical object having certain attributes
Examples of attributes
Optical image: absorption, emission, scatter, color wrt light
Acoustic image: absorption, scatter wrt sound
Thermal image: temperature (black-body radiation)
Magnetic resonance image: oscillation in response to radiofrequency EM field
Representation: a transformation upon a matrix of attribute values
Digital image (e.g. on a computer file)
Analog image (e.g. on your retina)

MIT 2.717
Intro to Inverse Problems p-2

How are images formed


Hardware
elements that operate directly on the physical entity
e.g. lenses, gratings, prisms, etc. operate on the optical field
e.g. coils, metal shields, etc. operate on the magnetic field
Software
algorithms that transform representations
e.g. a radio telescope measures the Fourier transform of the source
(representation #1); inverse Fourier transforming leads to a
representation in the native object coordinates (representation
#2); further processing such as iterative and nonlinear algorithms
lead to a cleaner representation (#3).
e.g. a stereo pair measures two aspects of a scene (representation
#1); a triangulation algorithm converts that to a binocular image
with depth information (representation #2).
MIT 2.717
Intro to Inverse Problems p-3

Who does what


In optics,
standard hardware elements (lenses, mirrors, prisms) perform a
limited class of operations (albeit very useful ones); these
operations are
linear in field amplitude for coherent systems
linear in intensity for incoherent systems
a complicated mix for partially coherent systems
holograms and diffractive optical elements in general perform a
more general class of operations, but with the same linearity
constraints as above
nonlinear, iterative, etc. operations are best done with software
components (people have used hardware for these purposes but it
tends to be power inefficient, expensive, bulky, unreliable hence
these systems seldom make it to real life applications)
MIT 2.717
Intro to Inverse Problems p-4

Imaging channels

[Diagram: Information generators (wave sources, wave scatterers) → imaging / communication / storage channels → processing elements (physics, algorithms) → users (humans, humanoids)]

GOAL: Maximize information flow

MIT 2.717
Intro to Inverse Problems p-5

Generalized (cognitive) representations


Classical inverse problem view-point
  a situation of interest (e.g. is there a tank in the scene?) is encoded into a scene
  the optical system produces a (geometrically similar) image
  cognitive processing delivers the answer (YES/NO)

Non-imaging or generalized sensor view-point
  the situation of interest is encoded into a scene
  the optical system produces an information-rich light intensity pattern
  processing delivers the answer (YES/NO), plus other functions if necessary (requires resource reallocation)

Advantages: optimum resource allocation
            better reliability
            adaptive, attentive operation

MIT 2.717
Intro to Inverse Problems p-6

Forward problem
[Block diagram: object (physical attributes) → hardware channel (field propagation) → detection → measurement]

The Forward Problem answers the following question:
  Predict the measurement, given the object attributes

MIT 2.717
Intro to Inverse Problems p-7

Inverse problem
[Block diagram: object (physical attributes) → hardware channel (field propagation) → detection → measurement → object representation]

The Inverse Problem answers the following question:
  Form an object representation, given the measurement

MIT 2.717
Intro to Inverse Problems p-8

Optical Inversion
[Figure: amplitude object (dark A on bright background) → free-space (Fresnel) propagation → lens → free-space (Fresnel) propagation → lens → free-space (Fresnel) propagation → array of point-wise sensors (camera) → array of intensity measurements → amplitude representation]

MIT 2.717
Intro to Inverse Problems p-9

Optical Inversion: coherent

Nonlinear problem

f(x, y) : object amplitude

I(x, y) = | ∫∫ f(x′, y′) h_coh(x - x′, y - y′) dx′ dy′ |²

intensity measurement at the output plane

Note: I could make the problem linear if I could measure


amplitudes directly (e.g. at radio frequencies)
MIT 2.717
Intro to Inverse Problems p-10

Optical Inversion: incoherent

Linear problem

I_obj(x, y) : object intensity

I_meas(x, y) = ∫∫ I_obj(x′, y′) h_incoh(x - x′, y - y′) dx′ dy′

intensity measurement at the output plane

MIT 2.717
Intro to Inverse Problems p-11

Dimensional mismatch
The object is a continuous function (amplitude or intensity)
assuming quantum mechanical effects are at sub-nanometer scales, i.e.
much smaller than the scales of interest (100nm or more)
i.e. the object dimension is uncountably infinite
The measurement is discrete, therefore countable and finite
To be able to create a 1-1 object representation from the
measurement, I would need to create a 1-1 map from a finite set of
integers to the set of real numbers. This is of course impossible
the inverse problem is inherently ill-posed
We can resolve this difficulty by relaxing the 1-1 requirement
therefore, we declare ourselves satisfied if we sample the object
with sufficient density (Nyquist theorem)
implicitly, we have assumed that the object lives in a finite-dimensional space, although it looks like a continuous function
MIT 2.717
Intro to Inverse Problems p-12

Singularity and ill-posedness


Under the finite-dimensional object assumption, the linear inverse problem
is converted from an integral equation to a matrix equation

g(x, y) = ∫∫ f(x′, y′) h(x - x′, y - y′) dx′ dy′    ⇒    g = H f

If the matrix H is rectangular, the problem may be overconstrained or


underconstrained
If the matrix H is square and has det(H)=0, the problem is singular; it
can only be solved partially by giving up on some object dimensions
(i.e. leaving them indeterminate)
If the matrix H is square and det(H) is non-zero but small, the
problem may be ill-posed or unstable: it is extremely sensitive to errors
in the measurement g
MIT 2.717
Intro to Inverse Problems p-13

Resolution: a toy problem


[Figure: finite-NA imaging system. Two point-sources A, B (object), separated by Δx, imaged onto two point-detectors Ã, B̃ (measurement). Classical view.]

MIT 2.717
Intro to Inverse Problems p-14

Cross-leaking power
I_A = J_A + s J_B
I_B = s J_A + J_B

MIT 2.717
Intro to Inverse Problems p-15

Ill-posedness in two-point inversion


H = [ 1  s ]
    [ s  1 ]

det(H) = 1 - s²

H⁻¹ = 1/(1 - s²) [  1  -s ]
                 [ -s   1 ]

MIT 2.717
Intro to Inverse Problems p-16
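The instability can be made concrete numerically (an added sketch with made-up numbers): as s → 1, the condition number of H blows up and a tiny measurement error produces a large error in the recovered object.

```python
import numpy as np

for s in (0.5, 0.9, 0.99, 0.999):
    H = np.array([[1.0, s],
                  [s, 1.0]])
    f_true = np.array([1.0, 2.0])              # hypothetical object
    g = H @ f_true                              # noiseless measurement
    g_noisy = g + np.array([1e-3, -1e-3])       # tiny measurement error
    f_rec = np.linalg.solve(H, g_noisy)         # naive inversion
    print(f"s = {s}: cond(H) = {np.linalg.cond(H):8.1f}, "
          f"object error = {np.linalg.norm(f_rec - f_true):.3e}")
```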

Applications of Statistical Optics

Radio Astronomy
Michelson Stellar Interferometry
Rotational Shear Interferometer (RSI)

Optical Coherence Tomography (OCT)

MIT 2.717
Apps of Stat Optics p-1

Radio Telescope

(Very Large Array, VLA)

www.nrao.edu

MIT 2.717
Apps of Stat Optics p-2

27 antennae (parabolic dishes, diameter 25 m, weight 230 t each)
Y radius ranges between 1 km and 36 km
wavelengths 90 cm to 7 mm
resolution 200 to 1.5 arcsec in the smallest configuration; 6 to 0.05 arcsec in the largest configuration
signals are multiplied and correlated at the central station to obtain the coherence function Γ(x, y)
the van Cittert-Zernike theorem is used to invert the observations
and obtain the source intensity I(θ, φ),
e.g. a constellation of galaxies

VLA images

These four images are combined radio-optical


images of a large solar flare that occurred on
17 June 1989. The red-orange background
images are optical images (H-Alpha) and the
superimposed contours show radio emission
as seen with the VLA at a frequency of 4.9
GHz. The four images are from four different
times during the event, showing the
progression toward maximum radio emission
(bottom right). This soft X-ray flare was
accompanied by a coronal mass ejection.

from www.aoc.nrao.edu
MIT 2.717
Apps of Stat Optics p-3

The two H alpha ribbons correspond to the


"footpoints" of an arcade of magnetic loops
which arch NE/SW. The magnetic field is
strongest toward the NW, where prominent
sunspots appear dark in H alpha. Early in the
event, the magnetically stronger footpoint
emits radio waves first (a), followed by
magnetically conjugate footpoints to the SW
(b). The entire magnetic arch connecting the
two footpoints then emits (c,d).

VLA images

from www.aoc.nrao.edu

MIT 2.717
Apps of Stat Optics p-4

This is a radar image of Mars, made with the Goldstone-VLA radar system
in 1988. Red areas are areas of high radar reflectivity. The south polar ice
cap, at the bottom of the image, is the area of highest reflectivity. The other
areas of high reflectivity are associated with the giant shield volcanoes of the
Tharsis ridge. The dark area to the West of the Tharsis ridge showed no
detectable radar echoes, and thus was dubbed the "Stealth" region.

VLA images

The center of the Milky Way

from www.aoc.nrao.edu

MIT 2.717
Apps of Stat Optics p-5

VLA images

from www.aoc.nrao.edu

The galaxy M81 is a spiral galaxy about 11 million light-years from Earth. It is about
50,000 light-years across. This VLA image was made using data taken during three of
the VLA's four standard configurations for a total of more than 60 hours of observing
time. The spiral structure is clearly shown in this image, which shows the relative
intensity of emission from neutral atomic hydrogen gas. In this pseudocolor image,
red indicates strong radio emission and blue weaker emission.

MIT 2.717
Apps of Stat Optics p-6

from www.aoc.nrao.edu

MIT 2.717
Apps of Stat Optics p-7

This pair of images illustrates the need to study celestial objects at different
wavelengths in order to get "the whole picture" of what is happening with those
objects. At left, you see a visible-light image of the M81 Group of galaxies.
This image largely shows light coming from stars in the galaxies. At right, a
radio image, made with the VLA, shows the hydrogen gas, including streamers
of gas connecting the galaxies. From the radio image, it becomes apparent that
this is an interacting group of galaxies, not isolated objects.

Michelson Stellar Interferometer

Optical version of the van Cittert-Zernike theorem

Since multiplication cannot be performed directly, it is done through interference
(Young's interferometer)
Extreme requirements on mechanical and thermal stability (better than λ/100
between the two arms)
Alternative: intensity interferometer (or Hanbury Brown-Twiss interferometer)
MIT 2.717
Apps of Stat Optics p-8

from www.physics.usyd.edu.au/astron/susi

Hanbury Brown Twiss

interferometer

from www.physics.usyd.edu.au/astron/susi

MIT 2.717
Apps of Stat Optics p-9

The Rotational Shear Interferometer

beam splitter

folding mirror
sensor array

folding mirror

dither
translation
stage
input aperture

rotating object
MIT 2.717
Apps of Stat Optics p-10

by David J. Brady, Duke University


www.fitzpatrick.duke.edu/disp/

Experimental RSI implementation (University of Illinois)

mirror tilt flex stages

Princeton Instruments camera


shutter

camera
cooling fan

platform linear bearings


long-travel platform (2)

MIT 2.717
Apps of Stat Optics p-11

Aerotech stage
by David J. Brady, Duke University
www.fitzpatrick.duke.edu/disp/

Close-up view of the Interferometer Section of the RSI

shutter
input
aperture
magnetic
coupling
90
shearing
mirror

90 dither
mirror
beamsplitter
mirror
support

flexure
stage

MIT 2.717
Apps of Stat Optics p-12

by David J. Brady, Duke University


www.fitzpatrick.duke.edu/disp/

Mobile RSI

(University of Illinois and
Distant Focus Corporation)

MIT 2.717
Apps of Stat Optics p-13

by David J. Brady, Duke University


www.fitzpatrick.duke.edu/disp/

EXPERIMENTAL RESULTS: 3-D
2-D spatial / 1-D spectral RSI reconstruction

Experimental Setup

Color Composite Image

Red (590-650 nm)

Green (520-570 nm)

Blue (430-500 nm)

by David J. Brady, Duke University


www.fitzpatrick.duke.edu/disp/
MIT 2.717
Apps of Stat Optics p-14

The Rotational Shear Interferometer

beam splitter

folding mirror
sensor array

folding mirror

dither
translation
stage
input aperture

rotating object
MIT 2.717
Apps of Stat Optics p-15

by David J. Brady, Duke University


www.fitzpatrick.duke.edu/disp/

What does the RSI measure?


Arm 2

Special case:
=90o

Input field

To Camera

Input field
MIT 2.717
Apps of Stat Optics p-16

Folding prism
at Arm 1

Arm 1

Folding prism
at Arm 2 (=90o)

Arms 1 & 2
combined
at camera plane

Intensity on the RSI Sensor Plane

The field on arm 1 is:

E₁(x, y) = E₀( x cos(θ/2) + y sin(θ/2),  x sin(θ/2) - y cos(θ/2) )

The field on arm 2 is:

E₂(x, y) = E₀( x cos(θ/2) - y sin(θ/2),  -x sin(θ/2) - y cos(θ/2) )

I_s(x, y) = |E₁ + E₂|²
          = |E₁|² + |E₂|² + E₁* E₂ + E₁ E₂*
          = I₁ + I₂ + Γ(Δx, Δy, x̄, ȳ, τ) + Γ*

( Δx = 2y sin(θ/2),  Δy = 2x sin(θ/2),  x̄ = 2x cos(θ/2) + x₀,  ȳ = y₀ - 2x cos(θ/2),  τ = 2δ/c )

MIT 2.717
Apps of Stat Optics p-17

by David J. Brady, Duke University


www.fitzpatrick.duke.edu/disp/

Coherence imaging using the RSI


RSI

Re[ Γ(x_k, y_l, x_i, y_j, τ) ] + dc

Interference on CCD

S(x_k, y_l, x_i, y_j, ν)

J(x_k, y_l, x_i, y_j) = Γ(x_k, y_l, x_i, y_j, τ = 0)

Re[ Γ(x̄, ȳ, x_i, y_j, τ₀) ] + dc

4-D Fourier transform relationship

Γ(Δx, Δy, q, τ)  ↔  S[x′, y′, z′, ν]

MIT 2.717
Apps of Stat Optics p-18

by David J. Brady, Duke University


www.fitzpatrick.duke.edu/disp/

Example RSI Images

2 point

sources

Experimental
Mutual Intensity

MIT 2.717
Apps of Stat Optics p-19

by David J. Brady, Duke University


www.fitzpatrick.duke.edu/disp/

Welcome to ...

2.717J/MAS.857J
Optical Engineering

MIT 2.717J
wk1-b p-1

This class is about

Statistical Optics
models of random optical fields, their propagation and statistical
properties (i.e. coherence)
imaging methods based on statistical properties of light: coherence
imaging, coherence tomography
Inverse Problems
to what degree can a light source be determined by measurements
of the light fields that the source generates?
how much information is transmitted through an imaging
system? (related issues: what does _resolution_ really mean? what
is the space-bandwidth product?)

MIT 2.717J

wk1-b p-2

The van Cittert-Zernike theorem

radio
waves

Very Large Array (VLA)

Galaxy, ~100 million

light-years away

image

Cross-Correlation
+
Fourier
transform
MIT 2.717J
wk1-b p-3

optical image

Image credits:
hubble.nasa.gov
www.nrao.edu

Optical coherence tomography

Coronary artery

Image credits:
www.lightlabimaging.com

Intestinal polyps
MIT 2.717J
wk1-b p-4

Esophagus

Inverse Radon transform

(aka Filtered Backprojection)

The hardware
The principle

Magnetic Resonance Imaging (MRI)


Image credits:
www.cis.rit.edu/htbooks/mri/

www.ge.com

MIT 2.717J
wk1-b p-5

The image

You can take this class if

You took one of the following classes at MIT


2.996/2.997 during the academic years 97-98 and 99-00
2.717 during fall 00
2.710 during fall 01

OR

You have taken a class elsewhere that covered Geometrical Optics,


Diffraction, and Fourier Optics
Some background in probability & statistics is helpful but not
necessary

MIT 2.717J

wk1-b p-6

Syllabus (summary)

Review of Fourier Optics, probability & statistics 4 weeks


Light statistics and theory of coherence 2 weeks
The van Cittert-Zernike theorem and applications of statistical optics
to imaging 3 weeks
Basic concepts of inverse problems (ill-posedness, regularization) and
examples (Radon transform and its inversion) 2 weeks
Information-theoretic characterization of imaging channels 2 weeks
Textbooks:
J. W. Goodman, Statistical Optics, Wiley.
M. Bertero and P. Boccacci, Introduction to Inverse Problems in
Imaging, IoP publishing.

MIT 2.717J

wk1-b p-7

What you have to do

4 homeworks (1/week for the first 4 weeks)


3 Projects:
Project 1: a simple calculation of intensity statistics from a model
in Goodman (~2 weeks, 1-page report)
Project 2: study one out of several topics in the application of
coherence theory and the van Cittert-Zernike theorem from
Goodman (~4 weeks, lecture-style presentation)
Project 3: a more elaborate calculation of information capacity of
imaging channels based on prior work by Barbastathis & Neifeld
(~4 weeks, conference-style presentation)
Alternative projects ok
No quizzes or final exam

MIT 2.717J

wk1-b p-8

Administrative

Broadcast list will be set up soon

Instructor's coordinates
George Barbastathis
Please do not phone
Office hours TBA
Class meets
Mondays 1-3pm (main coverage of the material)
Wednesdays 2-3pm (examples and discussion)
presentations only: Wednesdays 7pm-??, pizza served

MIT 2.717J

wk1-b p-9

The 4F system

[Figure: 4F system, lenses of focal lengths f1 and f2]

g1(x, y) at the object plane
G1( x/(λ f1), y/(λ f1) ) at the Fourier plane
g1( -x f1/f2, -y f1/f2 ) at the image plane

MIT 2.717J
wk1-b p-10

The 4F system

[Figure: 4F system, lenses of focal lengths f1 and f2]

g1(x, y) at the object plane
G1(u, v) at the Fourier plane, with u = sinθ_x/λ = x/(λ f1) and v = sinθ_y/λ = y/(λ f1)
g1( -x f1/f2, -y f1/f2 ) at the image plane

MIT 2.717J
wk1-b p-11

The 4F system with FP aperture

[Figure: 4F system with a circular aperture of radius R at the Fourier plane]

g1(x, y) at the object plane
circ(r/R) G1( x/(λ f1), y/(λ f1) ) at the Fourier plane: aperture-limited
(g1 ∗ h)( -x f1/f2, -y f1/f2 ) at the image plane: blurred (i.e. low-pass filtered)

MIT 2.717J
wk1-b p-12

The 4F system with FP aperture

Transfer function:                      Impulse response:
circular aperture, circ(r/R)            Airy function, jinc( r R / (λ f2) )

MIT 2.717J
wk1-b p-13

Coherent vs incoherent imaging

[Diagram: a Coherent optical system maps field in → field out; an Incoherent optical system maps intensity in → intensity out]

MIT 2.717J
wk1-b p-14

Coherent vs incoherent imaging

Coherent impulse response
  (field in → field out):                        h(x, y)
Coherent transfer function
  (FT of field in → FT of field out):            H(u, v) = FT{h(x, y)}
Incoherent impulse response
  (intensity in → intensity out):                h̃(x, y) = |h(x, y)|²
Incoherent transfer function
  (FT of intensity in → FT of intensity out):    H̃(u, v) = FT{h̃(x, y)} = H(u, v) ⋆ H(u, v)

|H̃(u, v)| : Modulation Transfer Function (MTF)
H̃(u, v) : Optical Transfer Function (OTF)

MIT 2.717J
wk1-b p-15
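The relation H̃(u, v) = H ⋆ H can be checked numerically. The 1-D sketch below (an illustration added here, with an assumed cutoff value) takes a rect coherent transfer function and forms the incoherent OTF as its normalized autocorrelation, reproducing the triangle with doubled cutoff shown on the next slide.

```python
import numpy as np

du = 0.01
u = np.arange(-5.0, 5.0 + du / 2, du)       # frequency axis
uc = 1.0                                    # assumed coherent cutoff frequency

H = (np.abs(u) <= uc).astype(float)         # coherent transfer function: rect with cutoff uc

acf = np.correlate(H, H, mode="full")       # autocorrelation of the CTF
otf = acf / acf.max()                       # normalized incoherent OTF
lags = np.arange(-(H.size - 1), H.size) * du

def otf_at(f):
    return otf[np.argmin(np.abs(lags - f))]

print(otf_at(0.0))      # 1.0 at zero frequency
print(otf_at(uc))       # ~0.5 at the coherent cutoff uc (triangle shape)
print(otf_at(2 * uc))   # ~0 at 2*uc: the incoherent cutoff is twice the coherent one
```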

Coherent vs incoherent imaging

[Figure: 4F system with an aperture of half-width a at the Fourier plane]

Coherent illumination: H(u) is a rect with cutoff uc, where uc = a/(λ f1)
Incoherent illumination: H̃(u) is a triangle with cutoff 2uc

MIT 2.717J
wk1-b p-16

Aberrations: geometrical

Paraxial
(Gaussian)
image point

Non-paraxial rays
overfocus

Spherical aberration
Origin of aberrations: nonlinearity of Snell's law (n sinθ = const., whereas a linear
relationship would have been nθ = const.)
Aberrations cause practical systems to perform worse than diffraction-limited
Aberrations are best dealt with using optical design software (Code V, Oslo,
Zemax); optimized systems usually resolve ~3-5λ (~1.5-2.5 μm in the visible)
MIT 2.717J

wk1-b p-17

Aberrations: wave

Aberration-free impulse response: h_diffraction-limited(x, y)

Aberrations introduce additional phase delay to the impulse response:

h_aberrated(x, y) = h_diffraction-limited(x, y) · e^{i φ_aberration(x, y)}

Effect of aberrations on the MTF:

[Plot: |H̃(u)| vs. u between -2uc and 2uc; the aberrated MTF lies below the unaberrated (diffraction-limited) MTF]

MIT 2.717J
wk1-b p-18

Optics Overview

MIT 2.71/2.710
Review Lecture p-1

What is light?

Light is a form of electromagnetic energy detected through its


effects, e.g. heating of illuminated objects, conversion of light to
current, mechanical pressure (Maxwell force) etc.
Light energy is conveyed through particles: photons
ballistic behavior, e.g. shadows
Light energy is conveyed through waves
wave behavior, e.g. interference, diffraction
Quantum mechanics reconciles the two points of view, through the
wave/particle duality assertion

MIT 2.71/2.710
Review Lecture p-2

Particle properties of light

Photon=elementary light particle

Mass = 0
Speed c = 3×10⁸ m/sec
According to Special Relativity, a mass-less particle travelling
at light speed can still carry momentum!

Energy E = hν
h = Planck's constant = 6.6262×10⁻³⁴ J·sec

ν relates the dual particle & wave nature of light;
ν is the temporal oscillation frequency of the light waves

MIT 2.71/2.710
Review Lecture p-3

Wave properties of light

λ : wavelength (spatial period)
k = 2π/λ : wavenumber
ν : temporal frequency
ω = 2πν : angular frequency
E : electric field
1/ν : temporal period

MIT 2.71/2.710
Review Lecture p-4

Wave/particle duality for light

Photon = elementary light particle

Mass = 0
Speed c = 3×10⁸ m/sec
Energy E = hν

c = λν    (dispersion relation; holds in vacuum only)

h = Planck's constant = 6.6262×10⁻³⁴ J·sec
ν = frequency (sec⁻¹)
λ = wavelength (m)

MIT 2.71/2.710
Review Lecture p-5

Light in matter

light in vacuum                          light in matter

Speed c = 3×10⁸ m/sec                    Speed c/n
                                         n : refractive index (or index of refraction)

Absorption coefficient α = 0             Absorption coefficient α
                                         (energy decay coefficient: after distance L, e^(-2αL))

E.g. vacuum n = 1, air n ≈ 1; glass n ≈ 1.5; glass fiber has α ≈ 0.25 dB/km = 0.0288/km

MIT 2.71/2.710
Review Lecture p-6

Materials classification

Dielectrics
typically electrical isolators (e.g. glass, plastics)
low absorption coefficient
arbitrary refractive index
Metals
conductivity ⇒ large absorption coefficient
Lots of exceptions and special cases (e.g. artificial dielectrics)
Absorption and refractive index are related through the Kramers-Kronig
relationship (imposed by causality)

[Plot: absorption and refractive index vs. frequency, related by Kramers-Kronig]
MIT 2.71/2.710
Review Lecture p-7

Overview of light sources

Laser

non-Laser
Thermal: polychromatic,
spatially incoherent
(e.g. light bulb)
Gas discharge: monochromatic,
spatially incoherent
(e.g. Na lamp)
Light emitting diodes (LEDs):

monochromatic, spatially
incoherent

Continuous wave (or cw):

strictly monochromatic,
spatially coherent
(e.g. HeNe, Ar+, laser diodes)
Pulsed: quasi-monochromatic,
spatially coherent
(e.g. Q-switched, mode-locked)

~nsec

~psec to few fsec

pulse duration

mono/poly-chromatic = single/multi color

MIT 2.71/2.710
Review Lecture p-8

Monochromatic, spatially coherent

light

[Plot: a nice, regular sinusoid of period 1/ν]

λ, ν well defined
  stabilized HeNe laser: good approximation
  most other cw lasers: rough approximation
  pulsed lasers & non-laser sources: need more complicated description

Incoherent: random, irregular waveform

MIT 2.71/2.710
Review Lecture p-9

The concept of a monochromatic

ray

t=0
(frozen)

direction of
energy propagation:
light ray
wavefronts

In homogeneous media,

light propagates in rectilinear paths

MIT 2.71/2.710
Review Lecture p-10

The concept of a monochromatic

ray

t=t
(advanced)

direction of
energy propagation:
light ray
wavefronts

In homogeneous media,

light propagates in rectilinear paths

MIT 2.71/2.710
Review Lecture p-11

The concept of a polychromatic ray

t=0
(frozen)

wavefronts

energy from
pretty much
all wavelengths
propagates along
the ray

In homogeneous media,

light propagates in rectilinear paths

MIT 2.71/2.710
Review Lecture p-12

Fermat principle

The light ray path is chosen to minimize the path integral

∫ n(x, y, z) dl

compared to alternative paths

(aka minimum path principle)

Consequences: law of reflection, law of refraction

MIT 2.71/2.710
Review Lecture p-13

The law of reflection

[Figure: point source P, observation point P′, mirror; O is the point of reflection and P″ is the mirror image of P]

a) Consider the virtual source P″ instead of P
b) Any alternative path P″O′P′ is longer than P″OP′
c) Therefore, light follows the symmetric path POP′.

MIT 2.71/2.710
Review Lecture p-14

The law of refraction

reflected
refracted

incident

n sin θ = n′ sin θ′    (Snell's Law of Refraction)


MIT 2.71/2.710
Review Lecture p-15

Optical waveguide

n1.00

TIR

n=1.51
n=1.5105
n=1.51

TIR

Planar version: integrated optics


Cylindrically symmetric version: fiber optics
Permit the creation of light chips and light cables, respectively, where
light is guided around with few restrictions
Materials research has yielded glasses with very low losses (<0.25dB/km)
Basis for optical telecommunications and some imaging (e.g. endoscopes)
and sensing (e.g. pressure) systems
MIT 2.71/2.710
Review Lecture p-16

Refraction at a spherical surface

point
source

MIT 2.71/2.710
Review Lecture p-17

Imaging a point source

point
source

point
image

Lens

MIT 2.71/2.710
Review Lecture p-18

Model for a thin lens

point object
at 1st FP

1st FP

focal length f
plane wave (or parallel ray bundle);
image at infinity

MIT 2.71/2.710
Review Lecture p-19

Model for a thin lens

point image
at 2nd FP

focal length f
plane wave (or parallel ray bundle);
object at infinity

MIT 2.71/2.710
Review Lecture p-20

Huygens principle

Each point on the wavefront


acts as a secondary light source
emitting a spherical wave
The wavefront after a short
propagation distance is the
result of superimposing all
these spherical wavelets

MIT 2.71/2.710
Review Lecture p-22

optical
wavefronts

Why imaging systems are needed

Each point in an object scatters the incident illumination into a spherical wave,
according to the Huygens principle.
A few microns away from the object surface, the rays emanating from all
object points become entangled, delocalizing object details.
To relocalize object details, a method must be found to reassign (focus) all
the rays that emanated from a single point object into another point in space
(the image.)
The latter function is the topic of the discipline of Optical Imaging.

MIT 2.71/2.710
Review Lecture p-23

Imaging condition: ray-tracing

thin lens (+)

2nd FP

image
(real)

1st FP
chief

object

ray

Image point is located at the common intersection of all rays which


emanate from the corresponding object point
The two rays passing through the two focal points and the chief ray
can be ray-traced directly
The real image is inverted and can be magnified or demagnified
MIT 2.71/2.710
Review Lecture p-24

Imaging condition: ray-tracing

thin lens (+)

image

xo

2nd FP
1st FP
chief

object

so

xi

ray

si

Lens Law:                   1/s_o + 1/s_i = 1/f

Lateral magnification:      M_x = x_i / x_o = -s_i / s_o

Angular magnification:      M_a = -s_o / s_i

Energy conservation:        M_x M_a = 1

MIT 2.71/2.710
Review Lecture p-25
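A small numerical illustration of the lens law and the magnification relations (an added sketch; the focal length and object distance are arbitrary example values):

```python
def thin_lens(so, f):
    """Return image distance si, lateral magnification Mx and angular magnification Ma
    for a thin lens obeying 1/so + 1/si = 1/f."""
    si = 1.0 / (1.0 / f - 1.0 / so)
    Mx = -si / so        # lateral magnification
    Ma = -so / si        # angular magnification
    return si, Mx, Ma

si, Mx, Ma = thin_lens(so=30.0, f=10.0)   # e.g. object 30 cm from a 10 cm lens
print(si, Mx, Ma, Mx * Ma)                # si = 15 cm, Mx = -0.5, Ma = -2, product = 1
```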

Imaging condition: ray-tracing

thin lens (+)


image
(virtual)
2nd FP
1st FP

object

chief ray

The ray bundle emanating from the system is divergent; the virtual
image is located at the intersection of the backwards-extended rays
The virtual image is erect and is magnified
When using a negative lens, the image is always virtual, erect, and
demagnified
MIT 2.71/2.710
Review Lecture p-26

Tilted object:

the Scheimpflug condition

The object plane and the image plane

intersect at the plane of the thin lens.

MIT 2.71/2.710
Review Lecture p-27

Lens-based imaging

MIT 2.71/2.710
Review Lecture p-28

Human eye
Photographic camera

Magnifier
Microscope
Telescope

The human eye

Remote object (unaccommodated eye)

Near object (accommodated eye)


MIT 2.71/2.710
Review Lecture p-29

The photographic camera

meniscus
lens
or (nowadays)
zoom lens
digital imaging

MIT 2.71/2.710
Review Lecture p-31

Film
or
detector array (CCD or CMOS)

The pinhole camera

opaque
screen

image

pinhole

object
The pinhole camera blocks all but one ray per object point from reaching the
image space, so an image is formed (i.e., each point in image space corresponds to
a single point from the object space).
Unfortunately, most of the light is wasted in this instrument.
Besides, light diffracts if it has to go through small pinholes as we will see later;
diffraction introduces undesirable artifacts in the image.
MIT 2.71/2.710
Review Lecture p-35

Field of View (FoV)

FoV=angle that the chief ray from an object can subtend


towards the imaging system

MIT 2.71/2.710
Review Lecture p-36

Numerical Aperture

medium of
refr. index n

θ : half-angle subtended by
the imaging system from
an axial object

Numerical Aperture
(NA) = n sin θ
Speed: (f/#) = 1/(2 NA)
pronounced "f-number", e.g.
f/8 means (f/#) = 8.

MIT 2.71/2.710
Review Lecture p-37

Resolution

How far can two distinct point objects be

before their images cease to be distinguishable?

MIT 2.71/2.710
Review Lecture p-38

Factors limiting resolution in an

imaging system

Diffraction
Aberrations
  intricately related; assessment of image quality depends on the degree
  that the inverse problem is solvable (i.e. its condition)
  (→ 2.717 sp02 for details)
Noise
  electronic noise (thermal, Poisson) in cameras
  multiplicative noise in photographic film
  stray light
  speckle noise (coherent imaging systems only)
Sampling at the image plane
  camera pixel size
  photographic film grain size
MIT 2.71/2.710
Review Lecture p-39

Point-Spread Function

Light distribution
near the Gaussian
(geometric) focus

= PSF

Point source
(ideal)

(rotationally
symmetric
wrt optical axis)

Δx ≈ 1.22 λ / NA

Δz ≈ 2 λ / NA²

The finite extent of the PSF causes blur in the image


MIT 2.71/2.710
Review Lecture p-40

Diffraction limited resolution

[Plot: light intensity (arbitrary units) vs. lateral coordinate at the image plane (arbitrary units), for two point objects at decreasing spacing]

Point objects are just resolvable when

Δx ≈ 1.22 λ / (NA)     (Rayleigh resolution criterion)

MIT 2.71/2.710
Review Lecture p-41
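A quick numerical example of the criterion (added here; the wavelength and NA values are just for illustration):

```python
wavelength = 0.5e-6        # 500 nm, visible light
for NA in (0.1, 0.5, 0.95):
    dx = 1.22 * wavelength / NA          # Rayleigh resolution, as on the slide
    print(f"NA = {NA}: resolvable spacing ~ {dx * 1e6:.2f} um")
```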

Wave nature of light

Diffraction

broadening of
point images

diffraction grating

Interference

Michelson interferometer

Fabry-Perot interferometer

Interference filter
(or dielectric mirror)

Polarization: polaroids, dichroics, liquid crystals, ...

MIT 2.71/2.710
Review Lecture p-42

Diffraction grating

[Figure: incident plane wave diffracted by a grating of period Λ into orders m = +3, +2, +1, 0 (the straight-through order or DC term), -1, -2, -3]

Grating spatial frequency: 1/Λ
Angular separation between diffracted orders: ≈ λ/Λ

Condition for constructive interference:

ΔΦ = 2πm,  i.e.  Λ sin θ = m λ     (m integer: diffraction order)

MIT 2.71/2.710
Review Lecture p-43
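A short sketch evaluating the grating equation Λ sin θ = mλ for the propagating orders (added here; the values of Λ and λ are arbitrary examples):

```python
import math

wavelength = 0.633e-6      # 633 nm (example value)
period = 2.0e-6            # grating period Lambda (example value)

for m in range(-3, 4):
    s = m * wavelength / period          # sin(theta_m) from Lambda*sin(theta) = m*lambda
    if abs(s) <= 1:                      # only |sin| <= 1 corresponds to a propagating order
        print(f"m = {m:+d}: theta = {math.degrees(math.asin(s)):+.1f} deg")
    else:
        print(f"m = {m:+d}: evanescent (no propagating order)")
```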

Grating dispersion

Anomalous
(or negative)
dispersion

polychromatic

(white)

light

MIT 2.71/2.710
Review Lecture p-44

Glass prism:
normal dispersion

Fresnel diffraction formulae

g_in(x, y)  →  g_out(x, y)

g_out(x, y; z) = [ e^{i 2πz/λ} / (iλz) ] ∫∫ g_in(x′, y′) exp{ iπ [ (x - x′)² + (y - y′)² ] / (λz) } dx′ dy′

G_in(u, v)  →  G_out(u, v)

G_out(u, v; z) = e^{i 2πz/λ} G_in(u, v) exp{ -iπλz (u² + v²) }

MIT 2.71/2.710
Review Lecture p-45

Fresnel diffraction

as a linear, shift-invariant system

Thin transparency t(x, y):
  g2(x, y) = g1(x, y) t(x, y)

Impulse response:
  h(x, y) = [ e^{i 2πz/λ} / (iλz) ] exp{ iπ (x² + y²) / (λz) }

Convolution (output amplitude):
  g3(x, y) = g2(x, y) ∗ h(x, y)

Fourier transform (plane wave spectrum): G2(u, v)

Transfer function:
  H(u, v) = e^{i 2πz/λ} exp{ -iπλz (u² + v²) }

Multiplication:
  G3(u, v) = G2(u, v) H(u, v)

MIT 2.71/2.710
Review Lecture p-46
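The transfer-function form maps directly onto an FFT implementation. A minimal 1-D sketch (added here; the grid, wavelength, and distance are arbitrary, and adequate sampling of the Fresnel kernel is assumed):

```python
import numpy as np

def fresnel_propagate(g_in, dx, wavelength, z):
    """Propagate a sampled 1-D field g_in (sample spacing dx) over distance z
    using the Fresnel transfer function H(u) = exp(i 2 pi z/lambda) exp(-i pi lambda z u^2)."""
    u = np.fft.fftfreq(g_in.size, d=dx)                   # spatial frequencies
    H = np.exp(1j * 2 * np.pi * z / wavelength) * np.exp(-1j * np.pi * wavelength * z * u**2)
    return np.fft.ifft(np.fft.fft(g_in) * H)

# example: a 100 um slit illuminated by a unit plane wave, propagated 5 cm at 633 nm
x = np.arange(-2048, 2048) * 1e-6                         # 1 um samples
g0 = (np.abs(x) < 50e-6).astype(complex)
g1 = fresnel_propagate(g0, dx=1e-6, wavelength=633e-9, z=0.05)
print(np.sum(np.abs(g0)**2), np.sum(np.abs(g1)**2))       # energy is conserved (|H| = 1)
```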

MASSACHUSETTS INSTITUTE OF TECHNOLOGY

2.717J/MAS.857J Optical Engineering


Problem Set #1

Spring '02

Posted Feb. 6, 2002 | Due Wednesday Feb. 13, 2002


Notation: (u, v) are the spatial frequencies conjugate to the Cartesian coordinate pair
(x, y). H(u, v) is the optical transfer function (OTF).

1. A unit amplitude, normally incident, monochromatic plane wave illuminates an


object of maximum linear dimension D, situated immediately in front of a larger
positive lens of focal length f (see Figure 1). Due to a positioning error, the
intensity distribution is measured across a plane at a distance f - Δ behind the
lens. How small must Δ be if the measured intensity distribution is to accurately
represent the Fraunhofer diffraction pattern of the object?

Figure 1

2. An infinite periodic square-wave grating with transmittivity as shown in Figure 2A
is placed at the input of the optical system of Figure 2B. Both lenses are positive,
F1, and have focal length f. The grating is illuminated with monochromatic, spatially
coherent light of wavelength λ and intensity I0. The spatial period of the grating is
X 4. The element at the Fourier plane of the system is a nonlinear transparency with
the intensity transmission function shown in Figure 2C, where the threshold and
saturating intensities are Ithr = Isat = 0.1 I0.

2.a) To carry out the calculation analytically, you need to neglect the Airy patterns
forming at the Fourier plane and pretend they are uniform bright dots. Explain why
this assumption is justified and what effects it might have.

2.b) Derive and plot the intensity distribution at the output plane using the
above assumption.

[Figure 2A: square-wave amplitude transmittivity t(x), alternating between 1 and 0, with features of width X/2]

[Figure 2B: optical system with illumination, input plane, nonlinear transparency at the Fourier plane, and output plane]

[Figure 2C: transparency characteristic I_out vs. I_in, with saturation level I_sat and threshold I_thr]

3. A commonly cited quantity determining the seriousness of aberrations of an optical
system is the Strehl number D, which is defined as the ratio of the light intensity at
the maximum of the point-spread function of the system with aberrations to that same
maximum for that system in the absence of aberrations (i.e., the diffraction-limited
case); both maxima are assumed to exist on the optical axis.

3.a) Prove that D is equal to the normalized volume under the optical transfer
function of the aberrated imaging system; that is, prove

D = [ ∫∫_{-∞}^{+∞} H_aberrated(u, v) du dv ] / [ ∫∫_{-∞}^{+∞} H_dif-lim(u, v) du dv ]

3.b) Argue that D is a real number and that D ≤ 1 always.

4. Sketch the u and v cross-sections of the optical transfer function of an incoherent
imaging system having as a pupil function the two-pinhole combination shown in
Figure 4. Assume w ≪ d. Do not use Matlab for this calculation. Explain briefly the
appearance of your sketches, and be sure to label the various cutoff frequencies and
center frequencies.

[Figure 4: pupil consisting of two pinholes of width 2w, separated by 2d along the y axis]

5. An object with square-wave amplitude transmittance identical to the grating of
Figure 2A is imaged by a lens with a circular pupil function. The focal length of the
lens is 10 cm, the fundamental frequency of the square wave is 1/X = 100 cycles/mm,
the object distance is 20 cm, and the wavelength is 1 μm. What is the minimum lens
diameter that will yield any variations of intensity across the image plane for the
cases of

5.a) Coherent object illumination

5.b) Incoherent object illumination

Hint: The Fourier series expansion of the square wave of Figure 2A is

t(x) = (1/2) Σ_{n=-∞}^{+∞} sinc(n/2) exp{ i 2π n x / X }

where sinc(ζ) ≡ sin(πζ)/(πζ).

MASSACHUSETTS INSTITUTE OF TECHNOLOGY

2.717J/MAS.857J Optical Engineering


Problem Set #2

Spring '02

Posted Feb. 13, 2002 | Due Wednesday Feb. 20, 2002


1. How to emulate a perfect coin. Given a biased coin such that the probability
of heads (H) is α, we emulate a perfect coin as follows: throw the biased coin
twice; interpret HT (T = tails) as success and TH as failure; if neither event
occurs, repeat the throws until a decision is reached.

1.a) Show that this model leads to Bernoulli trials with p = 1/2.

1.b) Find the distribution and the expectation value of the number of throws
required to reach a decision.

2. Birthdays. For a group of n people, find the expected number of days of the
year which are birthdays of exactly k people. Assume the year is 365 days long
and that all the arrangements are equally probable. What is the result for n = 23
(the number of players in two opposing soccer teams plus the referee) and k = 2?
Do you find that surprising?

3. Misprints. A book of n pages contains on average λ misprints per page. Estimate
the probability that at least one page will contain more than k misprints.

4. Detection threshold. We seek to determine if a tumor is present in tissue
from the voltage U measured between two strategically placed electrodes. In the
absence of a tumor, U is Gaussian with mean V1 and variance σ²; i.e., the "prior"
distribution is

p_U(u | no tumor) = [1/(√(2π) σ)] exp{ -(u - V1)² / (2σ²) }.

In the presence of a tumor, U is Gaussian with mean V2 > V1 and the same variance
σ²; i.e.

p_U(u | tumor) = [1/(√(2π) σ)] exp{ -(u - V2)² / (2σ²) }.

We seek a "detection threshold" V0 such that if U > V0 we conclude that a
tumor is present, whereas if U < V0 we conclude that there is no tumor. Clearly,
our decision is in error if (i) we concluded that there is no tumor whereas in
actuality a tumor is present, i.e. a "miss"; (ii) we concluded that there is a
tumor whereas in actuality there is no tumor present, i.e. a "false alarm." We
define the probability of error (PE) as the sum of the probability of a miss and
the probability of a false alarm.

4.a) Show that the PE is minimized if we select

V0 = (V1 + V2) / 2

4.b) Using the optimum threshold, calculate the PE in terms of the "error function"

erf(z) = (2/√π) ∫_0^z e^{-t²} dt.

Notes: (1) The above-described process of selecting a detection threshold is known
as "Bayes decision." (2) The erf definition above is after Abramowitz & Stegun,
Handbook of Mathematical Functions, Dover 1972 (p. 297). The constants and
integral limits are sometimes defined differently in the literature.

5. Normalization. Let {X_k} be a sequence of mutually independent random variables
with a common distribution. Suppose that the X_k assume only positive values and
that EV{X_k} = a and EV{X_k⁻¹} = b exist. Let

S_n = X_1 + … + X_n.

Prove that EV{S_n⁻¹} is finite and that

EV{ X_k / S_n } = 1/n    for k = 1, …, n.

6. Unbiased estimator. Let X_1, …, X_n be mutually independent random variables
with a common distribution; let its mean be μ, its variance σ². Let

X̄ = (X_1 + … + X_n) / n.

Prove that

EV{ [1/(n - 1)] Σ_{k=1}^{n} (X_k - X̄)² } = σ².

(Note: In statistics, X̄ is called an unbiased estimator of μ = EV{X}, and
Σ_k (X_k - X̄)² / (n - 1) is an unbiased estimator of σ².)

MASSACHUSETTS INSTITUTE OF TECHNOLOGY

2.717J/MAS.857J Optical Engineering


Problem Set #3

Spring '02

Posted Feb. 27, 2002 | Due Monday Mar. 11, 2002


1. The probability of an insect laying k eggs is Poisson with expectation value k̄;
the probability of an egg developing is p. Assuming mutual independence of the
eggs, show that the probability of a total of n survivors is given by the Poisson
distribution with expectation value k̄p.

2. Goodman problem 2-6.


3. Goodman problem 2-9.
4. Goodman problem 2-10.
5. Goodman problem 2-11.
6. Let X(t) be a random process describing the location X of a particle as a function
of time t ≥ 0. The 1st-order statistics of this random process are described by the
function

p_X(x; t) = [1/√(2πDt)] exp{ -(x - vt)² / (2Dt) }

where v and D are real, positive numbers.

6.a) How do the mean and variance of X behave as time evolves?

6.b) Show that p_X satisfies

∂p_X/∂t = -v ∂p_X/∂x + (D/2) ∂²p_X/∂x².

This is known as the Fokker-Planck equation for this random process.

6.c) Can you describe a physical system which should follow these statistics?
What is the physical meaning of v and D in your system? (Hint: the Fokker-Planck
equation is also known under a different name; what is then p_X replaced by?)

MASSACHUSETTS INSTITUTE OF TECHNOLOGY

2.717J/MAS.857J Optical Engineering


Problem Set #4

Spring '02

Posted Mar. 16, 2002 | Due Wednesday Mar. 27, 2002

1. Goodman 3-5.
2. A space-domain linear, shift-invariant system has impulse response

h(x) = rect(x/a).

The system is driven by white noise. Find the autocorrelation function of the
output process.
3. Consider the random process

X(t) = a e^{j(Ωt - Φ)}

where a is a fixed amplitude, the frequency Ω is a random variable with probability
density function p_Ω(ω), and the phase delay Φ is independent of Ω and uniform in
the interval (-π, π). Show that X(t) is wide-sense stationary with zero mean and
power spectrum

G_X(ν) = 2π a² p_Ω(2πν).
4. A harmonic source of electromagnetic waves, emitting signal e^{jω₀t}, is at distance
r₀ from a stationary observer at t = 0. The oscillation frequency ω₀ is a deterministic
constant. The source moves with constant but random velocity V relative to the
observer. The random variable V has probability density function p_V(v). The
observer receives a Doppler-shifted signal

S(t) = a exp{ jω₀ [ t - (r₀ + Vt)/c ] }.

4.a) What is the power spectrum of the received signal? (Hint: Use the result
of the previous problem.)

4.b) What do you conclude about the power spectrum of the light produced by
a collection of gas molecules moving at random and emitting at identical
frequencies?

4.c) What is an appropriate probability density function for V for this case?

[Figure: electron of mass m bound to a nucleus of mass M by a spring of constant k; the incident field E_in drives the electron and the scattered field E_out is radiated]
5. Why is the sky blue? In 1899, Lord Rayleigh observed that when we look at the
sky, we see light scattered from particles in the atmosphere, primarily nitrogen.
He then proposed the model shown above in order to quantify the scattering
process. The figure shows an electron with mass m bound to the nucleus with
a spring with spring constant k and friction coefficient b. The nucleus has mass
M ≫ m. A force is applied to the electron due to the electric field of the
incident sunlight. The scattered field is then proportional to the acceleration of
the electron.

5.a) Formulate a one-dimensional model for the scattering process described
above. (Hint: model the nucleus as immobile.)

5.b) Assuming that the power spectral density of sunlight is pretty much constant
over the entire electromagnetic spectrum, derive an expression for the power
spectrum of the scattered light.

5.c) Explain the blue color of the sky, given that the spring constant for nitrogen
is k ≈ 140 N/m and the mass of the electron is m = 9.11 × 10⁻³¹ kg.