

Gradient-Directed Multiexposure Composition


Wei Zhang, Wai-Kuen Cham
IEEE Transactions on Image Processing, Vol. 21, No. 4
April 2012
Reporter: ChengTing, Sheu
Advisor: Dr. Yuan-Kai Wang
Intelligent System Lab. of EE Dept., Fu Jen Univ.
May 31, 2015

Outline
Introduction
Related Work
Algorithm

Experimental Result
Conclusions and Discussions

What is HDR?

Each exposure can be designed to capture a certain dynamic range.

Generating an HDR image from a stack of differently exposed images aims at:

Recovering the full dynamic range

Making all the details present in the scene visible in one image.

HDR imaging of scenes with moving objects is more challenging and prone to ghosting artifacts.

Exposure Fusion

The process of creating a low dynamic range (LDR) image from a series of bracketed exposures.

A per-pixel weighting map is estimated for each of the multiexposure images.

Contributions & Strengths

The gradient direction changes reveal object movement and thus can help
account for the ghosting problem in dynamic scenes.

Lower computational complexity, since neither radiometric camera calibration nor tone mapping is required.

It allows for lighting changes and can be naturally extended to other tasks such as flash and no-flash photography.

Static HDR

Attempts to produce the desired tone-mapped-like HDR image directly by exposure composition in the image domain.

These methods skip the typical HDR process, and no intermediate HDR
image needs to be generated.

Requires that the target scene is completely still; otherwise, ghosting artifacts appear in the areas where motion occurs.

Dynamic HDR

Detect the motion regions and then produce a ghost-free HDR result by
removing the contributions of these regions in the composite radiance map.

Find out the regions where ghosting artifacts may occur due to object
motion.

Composite the desired radiance with the guidance of a reference image, preselected automatically or manually.

Overview

Taking multiple exposures and combining them together as (a code sketch follows below):

\[ H(x, y) = \sum_{i=1}^{N} W^{i}(x, y)\, I^{i}(x, y) \]

The weighting map of each exposure is estimated by a gradient-based quality assessment system.

For dynamic scenes, assessments on visibility and consistency are both required.

For static scenes, only visibility is necessary.
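
As a rough illustration of the composition formula above, here is a minimal Python sketch; the function and variable names are illustrative (not from the paper), and the weight maps are assumed to be already normalized per pixel.

```python
import numpy as np

def compose(exposures, weights):
    """Blend a stack of exposures with per-pixel weight maps.

    exposures: list of N float arrays, each H x W x 3, values in [0, 1]
    weights:   list of N float arrays, each H x W, assumed to sum to 1
               across the N maps at every pixel
    """
    fused = np.zeros_like(exposures[0])
    for I_i, W_i in zip(exposures, weights):
        # broadcast the H x W weight map over the color channels
        fused += W_i[..., None] * I_i
    return fused
```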

Gradient-Based Image Quality Assessment

The first derivatives of a 2-D Gaussian filter g in the x- and y-directions are used to extract the gradient information:

\[ I_x^i(x, y) = I^i(x, y) * \frac{\partial g(x, y; \sigma_d)}{\partial x}, \qquad I_y^i(x, y) = I^i(x, y) * \frac{\partial g(x, y; \sigma_d)}{\partial y} \]

The gradient magnitude reflects the maximum change in pixel values.

The angle points out the direction corresponding to the maximum change.
\[ m^i(x, y) = \sqrt{I_x^i(x, y)^2 + I_y^i(x, y)^2}, \qquad \theta^i(x, y) = \arctan\frac{I_y^i(x, y)}{I_x^i(x, y)} \]
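
A small sketch of this gradient extraction step, assuming grayscale float images; the sigma_d value and the use of arctan2 are implementation choices, not taken from the paper.

```python
import numpy as np
from scipy import ndimage

def gradient_features(I, sigma_d=1.0):
    """Gradient magnitude and direction of one grayscale exposure I (H x W, float)."""
    # Convolution with the x/y first derivatives of a 2-D Gaussian,
    # realized via gaussian_filter's `order` argument.
    I_x = ndimage.gaussian_filter(I, sigma=sigma_d, order=(0, 1))  # d/dx (columns)
    I_y = ndimage.gaussian_filter(I, sigma=sigma_d, order=(1, 0))  # d/dy (rows)
    m = np.hypot(I_x, I_y)        # gradient magnitude
    theta = np.arctan2(I_y, I_x)  # gradient direction; arctan2 avoids division by zero
    return m, theta
```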

Visibility Assessment

The gradient magnitude becomes larger when a pixel gets better exposed.

It will gradually decrease as the pixel approaches over- or underexposure.

\[ V^i(x, y) = \frac{m^i(x, y)}{\sum_{i=1}^{N} m^i(x, y)} \]
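
A corresponding sketch of the visibility normalization; the small epsilon guarding against division by zero in flat regions is an added implementation detail.

```python
import numpy as np

def visibility_weights(magnitudes, eps=1e-8):
    """Visibility maps V_i: gradient magnitudes normalized across the N exposures.

    magnitudes: array of shape (N, H, W)
    """
    M = np.asarray(magnitudes, dtype=np.float64)
    return M / (M.sum(axis=0, keepdims=True) + eps)
```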


Consistency Assessment

Most scenes encountered in practice are nonstatic.


Naively compositing the inconsistent content that appears in the motion areas yields an unpleasant result ruined by ghosting artifacts.
Image gradients are mainly due to the local changes in 3-D geometric
shape and reflectance.
If the content changes due to object movement, the gradient direction will vary accordingly. The direction difference between exposures i and j is therefore measured over a (2l+1) x (2l+1) local window:

\[ d_{ij}(x, y) = \frac{1}{(2l+1)^2} \sum_{k_x=-l}^{l} \sum_{k_y=-l}^{l} \left| \theta^i(x+k_x, y+k_y) - \theta^j(x+k_x, y+k_y) \right| \]
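
A sketch of this windowed direction-difference measure; averaging the absolute angle difference with a box filter is the assumed reading of the formula, and angle wrap-around is ignored here for simplicity.

```python
import numpy as np
from scipy import ndimage

def direction_difference(theta_i, theta_j, l=2):
    """Average absolute gradient-direction difference over a (2l+1) x (2l+1) window."""
    diff = np.abs(theta_i - theta_j)
    # uniform_filter computes the local mean, i.e. the window sum divided by (2l+1)^2
    return ndimage.uniform_filter(diff, size=2 * l + 1)
```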


Consistency Assessment

A moving object is captured at any given position in only a relatively small number of images.
The stationary parts of the scene that predominantly exist in
the sequence are what the photographer is interested in.
\[ S^i(x, y) = \sum_{j=1}^{N} \exp\!\left( -\frac{d_{ij}(x, y)^2}{2\sigma_s^2} \right) \]

\[ C^i(x, y) = \frac{S^i(x, y)\, \delta^i(x, y)}{\sum_{i=1}^{N} S^i(x, y)\, \delta^i(x, y)}, \qquad \delta^i(x, y) = \begin{cases} 1, & \text{if } I^i(x, y) \text{ is well exposed} \\ 0, & \text{otherwise} \end{cases} \]
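
A sketch of the consistency computation; sigma_s and the well-exposedness thresholds are assumed placeholder values, not the paper's settings.

```python
import numpy as np
from scipy import ndimage

def consistency_weights(thetas, exposures, l=2, sigma_s=0.2, low=0.01, high=0.99):
    """Consistency maps C_i built from the pairwise direction differences d_ij.

    thetas:    (N, H, W) gradient directions
    exposures: (N, H, W) intensities in [0, 1] for the well-exposedness indicator
    """
    N = thetas.shape[0]
    S = np.zeros(thetas.shape, dtype=np.float64)
    for i in range(N):
        for j in range(N):
            # windowed direction difference d_ij (same box filter as above)
            d_ij = ndimage.uniform_filter(np.abs(thetas[i] - thetas[j]),
                                          size=2 * l + 1)
            S[i] += np.exp(-d_ij ** 2 / (2.0 * sigma_s ** 2))
    # delta_i: 1 where the pixel is neither under- nor overexposed
    delta = ((exposures > low) & (exposures < high)).astype(np.float64)
    num = S * delta
    return num / (num.sum(axis=0, keepdims=True) + 1e-8)
```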


The final weights

The final weights in dynamic scenes are calculated by combining the visibility and consistency measures as:
\[ W^i(x, y) = \frac{V^i(x, y)\, C^i(x, y)}{\sum_{i=1}^{N} V^i(x, y)\, C^i(x, y)} \]
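
A minimal sketch of this combination step; the epsilon term is an added safeguard against division by zero.

```python
import numpy as np

def final_weights(V, C, eps=1e-8):
    """Final weight maps W_i = V_i * C_i, renormalized across the N exposures.

    V, C: arrays of shape (N, H, W)
    """
    VC = V * C
    return VC / (VC.sum(axis=0, keepdims=True) + eps)
```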

After exposure correction, these weights become much lower; thus, a desirable result in which the clouds are preserved is obtained and the ghosting disappears.


The final weights

Since pixels in the sky region are overexposed in most exposures, the
weights obtained without exposure correction are high.
The high weights favor overexposure and suppress the occurrence of
clouds in the composite result.
To eliminate the outlier weights, we employ cross-bilateral filtering to
refine the results by using the input exposures as control images.
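
A possible sketch of this weight refinement, assuming the opencv-contrib ximgproc module is available; the filter parameters are placeholders, not values from the paper.

```python
import cv2
import numpy as np

def refine_weights(weights, exposures, d=9, sigma_color=0.1, sigma_space=9.0):
    """Edge-aware refinement of each weight map, guided by its own exposure.

    weights:   list of H x W float weight maps
    exposures: list of H x W float control images in [0, 1]
    """
    refined = []
    for W_i, I_i in zip(weights, exposures):
        guide = I_i.astype(np.float32)  # control image, same depth as the source
        src = W_i.astype(np.float32)
        # joint (cross) bilateral filter: smooths W_i while respecting edges in I_i
        out = cv2.ximgproc.jointBilateralFilter(guide, src, d,
                                                sigma_color, sigma_space)
        refined.append(out)
    return refined
```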


Experimental Result - Dynamic scene 1

Experimental Result - Dynamic scene 2

Experimental Result - Dynamic scene 3

Experimental Result - Static scene

Experimental Result - Flash and no-flash photography

Conclusions
Frees users from the tedious radiometric calibration and tone-mapping steps.


Common limitations of HDR imaging:
Blurring artifacts caused by camera shake.
Severe sensor noise.
Trees blowing in the wind.

At least three images are required for dynamic scenes.


There is no such limitation for static scenes.


Creative Commons 3.0
Website: islab.tw