
Digital Image Processing

CSC331
Image Enhancement

Summary of previous lecture
• Mask processing
• Linear smoothing operation
• Median filter
• Sharpening spatial filter

Today's lecture
• 1st and 2nd order derivatives
• Laplacian filter
• Unsharp masking and high-boost filtering

Sharpening
• The term sharpening refers to techniques for
enhancing intensity transitions.
• In images, the borders between objects are
perceived because of intensity changes: the
crisper the intensity transitions, the sharper the
image.
• The intensity transitions between adjacent pixels
are related to the derivatives of the image.
• Hence, operators (possibly expressed as linear
filters) able to compute the derivatives of a digital
image are of great interest.
Sharpening spatial filter
• Averaging over an image blurs it and removes
detail; this averaging operation is equivalent to
integration.
• The opposite operation, differentiation, makes
the image sharper.
• We therefore need derivative operators.

First derivative of an image
• Since the image is a discrete function, the traditional
definition of the derivative cannot be applied.
• Hence, a suitable operator has to be defined such that
it satisfies the main properties of the first derivative:
– 1. it is equal to zero in regions where the intensity is
constant;
– 2. it is different from zero for an intensity transition;
– 3. it is constant on ramps where the intensity transition is
constant.
• The natural derivative operator is the difference
between the intensities of neighboring pixels (spatial
differentiation).
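The difference operator above can be sketched in a few lines of NumPy on a hypothetical 1-D intensity profile (constant region, ramp, constant region, downward step):

```python
import numpy as np

# Hypothetical 1-D intensity profile: constant, ramp, constant, step.
f = np.array([5, 5, 5, 6, 7, 8, 8, 8, 2, 2], dtype=float)

# Discrete first derivative: difference of neighboring pixels,
# df/dx ~ f(x + 1) - f(x).
d1 = f[1:] - f[:-1]
print(d1)
```

The result is zero on the constant regions, constant (1) along the ramp, and large (-6) at the step, matching the three properties listed above.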

Second derivative of an image
• This operator satisfies the following properties:
– 1. it is equal to zero where the intensity is constant;
– 2. it is different from zero at the beginning of a step (or a
ramp) in the intensity;
– 3. it is equal to zero on constant-slope ramps.
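A quick sketch of the discrete second derivative, f(x+1) + f(x-1) - 2 f(x), on the same kind of hypothetical profile:

```python
import numpy as np

# Hypothetical 1-D profile: constant, ramp, constant, step.
f = np.array([5, 5, 5, 6, 7, 8, 8, 8, 2, 2], dtype=float)

# Discrete second derivative: f(x + 1) + f(x - 1) - 2 f(x).
d2 = f[2:] + f[:-2] - 2 * f[1:-1]
print(d2)
```

Note the nonzero values at the start and end of the ramp, zero along the constant-slope ramp itself, and the double (opposite-sign) response at the step.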

Observations
• 1st order derivatives produce thicker edges
• 2nd order derivatives have a stronger response to fine
detail
• 1st order derivatives have a stronger response to a
gray-level step
• 2nd order derivatives produce a double response to step
changes
• Both 1st and 2nd order derivatives produce negative
pixel values
– Shift the output image for display
– Some applications use only the absolute value
– Overall, 2nd order derivatives are better for most cases

• First-order derivative edge detection. Generally, the
first-order derivative operators are very sensitive to
noise and produce thicker edges.
• Second-order derivative edge detection. If there is a
significant spatial change in the second derivative, an
edge is detected. Second-order derivative operators are
more sophisticated methods for automated
edge detection, but are still very noise-sensitive.
• Since differentiation amplifies noise, smoothing is
recommended before applying derivative operators.
• Image derivatives are also used in motion estimation and
object tracking in video.

Laplacian operator

• Usually the sharpening filters make use of second-order
operators.
– A second-order operator is more sensitive to intensity variations
than a first-order operator.
• Besides, partial derivatives have to be considered for images.
– The derivative at a point depends on the direction along which it
is computed.
• Operators that are invariant to rotation are called isotropic.
– Rotating and then differentiating (or filtering) has the same effect as
differentiating and then rotating.
• The Laplacian is the simplest isotropic derivative operator
(with respect to the principal directions):

∇²f = ∂²f/∂x² + ∂²f/∂y²
    ≈ f(x+1, y) + f(x−1, y) + f(x, y+1) + f(x, y−1) − 4 f(x, y)

Laplacian filter

(a) and (c): Isotropic results for increments of 90°
(b) and (d): Isotropic results for increments of 45°

Unsharp masking and high-boost
filtering
• The technique known as unsharp masking is
commonly used in graphics to make
images sharper.
• It consists of:
– 1. defocusing (blurring) the original image;
– 2. obtaining the mask as the difference between
the original image and its defocused copy;
– 3. adding the mask to the original image.
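The three steps above can be sketched as follows, assuming NumPy/SciPy; a box blur stands in for the defocusing step, and `high_boost` is a hypothetical helper name. With weight k = 1 this is unsharp masking; k > 1 gives high-boost filtering:

```python
import numpy as np
from scipy.ndimage import uniform_filter  # SciPy assumed; any blur would do

def high_boost(img, k=1.0, size=3):
    """Unsharp masking (k = 1) / high-boost (k > 1):
    1. defocus (blur) the original,
    2. mask = original - blurred,
    3. result = original + k * mask."""
    blurred = uniform_filter(img.astype(float), size=size, mode='nearest')
    mask = img - blurred
    return img + k * mask

img = np.array([[10, 10, 50, 50],
                [10, 10, 50, 50],
                [10, 10, 50, 50]], dtype=float)
print(high_boost(img, k=1.0))
```

Flat regions are unchanged (mask is zero there); only the transition columns are pushed apart, which is what makes the edge look crisper.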

Mask of High Boost

What are edges in an image?
• Edges are significant local
changes of intensity in an
image.
• Edges typically occur on the
boundary between two
different regions in an image.
• Intuitively, an edge corresponds to
singularities in the image
(i.e., where the pixel value
changes abruptly).
• Edge detection finds large intensity
transitions between pixels.

Example intensity patch (an edge where values jump from 0 to larger values):
0   0   0  33
0   0  45  78
0  45  23  33
0   0  42  76
0   0   0  38

Goal of edge detection
• Produce a line drawing of a scene from an image
of that scene.
• Important features can be extracted from the
edges of an image (e.g., corners, lines, curves).
• These features are used by higher-level computer
vision algorithms (e.g., recognition).

Where is the edge?

Edge easy to find


Where is the edge?

Where is the edge? Is it a single pixel wide or multiple pixels wide?


What causes intensity changes?
• Various physical events cause intensity changes.
• Geometric events
– object boundary (discontinuity in depth and/or surface color and
texture)
– surface boundary (discontinuity in surface orientation and/or surface
color and texture)
• Non-geometric events
– specularity (direct reflection of light, such as a mirror)
– shadows (from other objects or from the same object)
– inter-reflections

Edge descriptors
• Edge normal: unit vector in the direction of maximum
intensity change.
• Edge direction: unit vector perpendicular to the edge
normal.
• Edge position or center: the image position at which the
edge is located.
• Edge strength: related to the local image contrast along the
normal.

Modeling intensity changes
• Edges can be modeled
according to their intensity
profiles.
• Step edge: the image
intensity abruptly changes
from one value on one side of
the discontinuity to a
different value on the
opposite side.
• Ramp edge: a step edge
where the intensity change is
not instantaneous but occurs
over a finite distance.

Modeling intensity changes
• Ridge edge: the image intensity abruptly
changes value but then returns to the starting
value within some short distance (usually
generated by lines).

Modeling intensity changes
• Roof edge: a ridge edge where the intensity
change is not instantaneous but occurs over a
finite distance (usually generated by the
intersection of surfaces).

The four steps of edge detection
1. Smoothing: suppress as much noise as possible, without
destroying the true edges.
2. Enhancement: apply a filter to enhance the quality of the
edges in the image (sharpening).
3. Detection: determine which edge pixels should be
discarded as noise and which should be retained (usually,
thresholding provides the criterion used for detection).
4. Localization: determine the exact location of an edge (sub-
pixel resolution might be required for some applications,
that is, estimate the location of an edge to better than the
spacing between pixels). Edge thinning and linking are
usually required in this step.

Edge detection using derivatives
• Calculus describes changes of continuous
functions using derivatives.
• An image is a 2D function, so operators
describing edges are expressed using partial
derivatives.
– Points which lie on an edge can be detected by:
– (1) detecting local maxima or minima of the first
derivative
– (2) detecting the zero-crossing of the second
derivative
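Both criteria can be illustrated in one dimension; a minimal sketch assuming NumPy, on a hypothetical ramp-edge profile:

```python
import numpy as np

# Hypothetical 1-D ramp-edge intensity profile.
f = np.array([0, 0, 1, 3, 7, 9, 10, 10], dtype=float)

d1 = np.gradient(f)   # first derivative: local maximum at the edge centre
d2 = np.gradient(d1)  # second derivative: sign change (zero-crossing) there

edge_by_max = int(np.argmax(np.abs(d1)))        # criterion (1)
zero_crossings = np.where(np.diff(np.sign(d2)))[0]  # criterion (2)
print(edge_by_max, zero_crossings)
```

Both criteria locate the same edge position, which is why first- and second-derivative detectors agree on clean data and differ mainly in their noise behaviour.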

Gradient Operators
• Motivation: detect changes; a change in the pixel value produces a large gradient.

Pipeline: image x(m, n) → gradient operator → g(m, n) → thresholding → edge map I(m, n)

I(m, n) = 1 if |g(m, n)| ≥ th, and 0 otherwise
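A minimal sketch of this pipeline, assuming NumPy/SciPy; the Sobel operator stands in for the generic gradient operator, and `edge_map` is a hypothetical helper name:

```python
import numpy as np
from scipy.ndimage import sobel  # SciPy assumed available

def edge_map(x, th):
    """I(m, n) = 1 if |g(m, n)| >= th, else 0,
    where g is the gradient magnitude of image x."""
    gx = sobel(x.astype(float), axis=1)  # horizontal derivative
    gy = sobel(x.astype(float), axis=0)  # vertical derivative
    g = np.hypot(gx, gy)                 # gradient magnitude
    return (g >= th).astype(int)

# Tiny example: a vertical step edge between columns 1 and 2.
x = np.array([[10, 10, 80, 80],
              [10, 10, 80, 80],
              [10, 10, 80, 80]], dtype=float)
print(edge_map(x, th=100))
```

The threshold th is the tuning knob: too low and noise pixels survive, too high and weak edges are lost.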

First order derivatives using the
gradient operator
• An image gradient is a directional change in the intensity or color in
an image. Image gradients may be used to extract information from
images.
• The term gradient (or color gradient) is also used for a gradual blend
of color, an even gradation from low to high values, such as a ramp
from white to black.

• Image gradients can be used to extract information
from images.
• Each pixel of a gradient image measures the change in
intensity of that same point in the original image, in a
given direction. To get the full range of direction,
gradient images in the x and y directions are
computed.
• One of the most common uses is in edge detection.
After gradient images have been computed, pixels with
large gradient values become possible edge pixels. The
pixels with the largest gradient values in the direction
of the gradient become edge pixels, and edges may be
traced in the direction perpendicular to the gradient
direction.
• Image gradients can also be used for robust feature
and texture matching.
Gradient Representation

• The gradient is a vector which has magnitude and direction:

∇f = [∂f/∂x, ∂f/∂y]

• Magnitude: indicates edge strength.

|∇f| = √( (∂f/∂x)² + (∂f/∂y)² )

or, as a cheaper approximation,

|∇f| ≈ |∂f/∂x| + |∂f/∂y|

• Direction: indicates edge direction.
– i.e., the gradient direction is perpendicular to the edge direction
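A one-pixel numeric sketch of magnitude, the absolute-value approximation, and direction, assuming NumPy and sample partial-derivative values chosen for illustration:

```python
import numpy as np

gx, gy = 3.0, 4.0  # sample partial derivatives at one pixel (illustrative values)

mag = np.hypot(gx, gy)         # exact magnitude: sqrt(3^2 + 4^2) = 5
approx = abs(gx) + abs(gy)     # cheaper approximation: 3 + 4 = 7
theta = np.degrees(np.arctan2(gy, gx))  # direction of maximum change, in degrees
print(mag, approx, theta)
```

The approximation overestimates the true magnitude (here 7 vs 5) but avoids the square root, which mattered on older hardware.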
Approximation

• Consider the arrangement of pixels in the 3 × 3 neighborhood about the pixel (i, j).

• The partial derivatives can be computed by convolving with the masks:

∂f/∂x:  −1  0  1      ∂f/∂y:  −1  −c  −1
        −c  0  c               0   0   0
        −1  0  1               1   c   1

• The constant c determines the emphasis given to pixels closer to the
center of the mask.
Prewitt Operator

• Setting c = 1, we get the Prewitt operator:

Gx:  −1  0  1      Gy:  −1  −1  −1
     −1  0  1            0   0   0
     −1  0  1            1   1   1

Sobel Operator

• Setting c = 2, we get the Sobel operator:

Gx:  −1  0  1      Gy:  −1  −2  −1
     −2  0  2            0   0   0
     −1  0  1            1   2   1
Example (using Prewitt operator)

Horizontal derivative dI/dx and vertical derivative dI/dy of an example image.
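The masks parameterised by c can be built and applied in a few lines, assuming NumPy/SciPy; `gradient_masks` is a hypothetical helper name:

```python
import numpy as np
from scipy.ndimage import convolve  # SciPy assumed available

def gradient_masks(c):
    """3x3 derivative masks: c = 1 gives Prewitt, c = 2 gives Sobel."""
    gx = np.array([[-1, 0, 1],
                   [-c, 0, c],
                   [-1, 0, 1]], dtype=float)
    return gx, gx.T  # vertical mask is the transpose of the horizontal one

prewitt_x, prewitt_y = gradient_masks(1)
sobel_x, sobel_y = gradient_masks(2)

# dI/dx of a vertical step edge: large |response| only at the transition.
img = np.array([[10, 10, 80, 80],
                [10, 10, 80, 80],
                [10, 10, 80, 80]], dtype=float)
dIdx = convolve(img, sobel_x, mode='nearest')
print(np.abs(dIdx))
```

Sobel's extra centre weight (c = 2) gives slightly better noise suppression than Prewitt at the same cost, which is why it is the more common default.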
Isotropic property of gradient
magnitude
• The magnitude of the gradient detects edges in all directions:

|∇I| = √( (dI/dx)² + (dI/dy)² )
Sobel operator using first order
derivatives

• Combining different spatial
enhancement methods leads to better-quality
images.
• For instance:
– use the Laplacian to highlight fine detail, and the
gradient to enhance prominent edges
– a smoothed version of the gradient image can be
used to mask the Laplacian image
– increase the dynamic range of the gray levels with
a gray-level transformation
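One way the first two bullets can be combined is sketched below, assuming NumPy/SciPy; the exact pipeline (normalised gradient mask, box smoothing) is an illustrative choice, not a fixed recipe, and `combined_enhance` is a hypothetical helper name:

```python
import numpy as np
from scipy.ndimage import convolve, sobel, uniform_filter  # SciPy assumed

def combined_enhance(img):
    """Laplacian highlights fine detail; a smoothed gradient masks it so
    noise in flat regions is suppressed; the masked detail is added back."""
    f = img.astype(float)
    lap = convolve(f, np.array([[0, 1, 0],
                                [1, -4, 1],
                                [0, 1, 0]], dtype=float), mode='nearest')
    grad = np.hypot(sobel(f, axis=0), sobel(f, axis=1))
    smoothed_grad = uniform_filter(grad, size=3, mode='nearest')
    mask = smoothed_grad / (smoothed_grad.max() + 1e-12)  # normalise to [0, 1]
    return f - mask * lap  # subtract: the Laplacian kernel centre is negative
```

In completely flat regions the gradient mask is zero, so the image passes through unchanged; detail is added only where the smoothed gradient says an edge is nearby.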

