
MC0086-DIGITAL IMAGE PROCESSING

1. Write short notes on:


a) Imaging in the Microwave Band
The dominant application of imaging in the microwave band is radar. The unique feature of
imaging radar is its ability to collect data over virtually any region at any time, regardless of
weather or ambient lighting conditions. Some radar waves can penetrate clouds, and under
certain conditions can also see through vegetation, ice, and extremely dry sand. In many cases,
radar is the only way to explore inaccessible regions of the Earth's surface.

An imaging radar works like a flash camera in that it provides its own illumination (microwave
pulses) to illuminate an area on the ground and take a snapshot image. Instead of a camera
lens, a radar uses an antenna and digital computer processing to record its images. In a radar
image, one can see only the microwave energy that was reflected back toward the radar
antenna. Fig. 1.9 shows a spaceborne radar image covering a rugged mountainous area of
Southeast Tibet, about 90 km east of the city of Lhasa. In the lower right corner is a wide valley
of the Lhasa River, which is populated by Tibetan farmers and yak herders and includes the
village of Menba. Mountains in this area reach about 5800 m (19,000 ft) above sea level, while
the valley floors lie about 4300 m (14,000 ft) above sea level. Note the clarity and detail of the
image, unencumbered by clouds or other atmospheric conditions that normally interfere with
images in the visual band.
b) Imaging in the Radio Band
As in the case of imaging at the other end of the spectrum (gamma rays), the major applications
of imaging in the radio band are in medicine and astronomy.
In medicine radio waves are used in magnetic resonance imaging (MRI). This technique places
a patient in a powerful magnet and passes radio waves through his or her body in short pulses.
Each pulse causes a responding pulse of radio waves to be emitted by the patient's tissues.
The location from which these signals originate and their strength are determined by a
computer, which produces a two-dimensional picture of a section of the patient. MRI can
produce pictures in any plane. Fig. 1.10 shows MRI images of a human knee and spine.

2. Explain the properties and uses of the electromagnetic spectrum.
The electromagnetic spectrum can be expressed in terms of wavelength, frequency, or energy.
Wavelength (λ) and frequency (ν) are related by the expression

λ = c / ν

where c is the speed of light (2.998 × 10^8 m/s). The energy of the various components of the
electromagnetic spectrum is given by the expression

E = hν

where h is Planck's constant. The units of wavelength are meters, with the terms microns
(denoted μm and equal to 10^-6 m) and nanometers (10^-9 m) being used frequently. Frequency
is measured in Hertz (Hz), with one Hertz being equal to one cycle of a sinusoidal wave per
second.
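
As a quick numerical illustration of these relations (the constants below are standard physical values, not taken from the text):

```python
# Illustrative sketch: converting wavelength to frequency and photon energy
# using lambda = c / nu and E = h * nu.
C = 2.998e8        # speed of light, m/s
H = 6.626e-34      # Planck's constant, J*s

def photon_energy(wavelength_m):
    """Return (frequency in Hz, photon energy in joules) for a given wavelength."""
    nu = C / wavelength_m      # lambda = c / nu  =>  nu = c / lambda
    return nu, H * nu          # E = h * nu

# Example: violet (0.43 um) versus red (0.79 um) light; the shorter wavelength
# carries more energy per photon.
for name, wl in [("violet", 0.43e-6), ("red", 0.79e-6)]:
    nu, e = photon_energy(wl)
    print(f"{name}: frequency = {nu:.3e} Hz, photon energy = {e:.3e} J")
```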

Electromagnetic waves can be visualized as propagating sinusoidal waves of wavelength λ,
or they can be thought of as a stream of massless particles, each traveling in a wavelike pattern
and moving at the speed of light. Each massless particle contains a certain amount (or bundle)
of energy, and each bundle of energy is called a photon. Since energy is proportional to frequency,
the higher-frequency (shorter wavelength) electromagnetic phenomena carry more energy per
photon. Thus, radio waves have photons with low energies; microwaves have more energy than
radio waves, infrared still more, then visible, ultraviolet, X-rays, and finally gamma rays, the
most energetic of all. This is the reason that gamma rays are so dangerous to living organisms.
Light is a particular type of electromagnetic radiation that can be seen and sensed by the
human eye. The visible band of the electromagnetic spectrum spans the range from
approximately 0.43 μm (violet) to about 0.79 μm (red). For convenience, the color spectrum is
divided into six broad regions: violet, blue, green, yellow, orange, and red. No color (or other
component of the electromagnetic spectrum) ends abruptly, but rather each range blends
smoothly into the next. The colors that humans perceive in an object are determined by the
nature of the light reflected from the object. A body that reflects light and is relatively balanced in
all visible wavelengths appears white to the observer. However, a body that favors reflectance
in a limited range of the visible spectrum exhibits some shades of color. For example, green
objects reflect light with wavelengths primarily in the 500 to 570 nm range while absorbing most
of the energy at other wavelengths. Light that is void of color is called achromatic or
monochromatic light. The only attribute of such light is its intensity, or amount. The term gray
level is generally used to describe monochromatic intensity because it ranges from black, to
grays, and finally to white. Chromatic light spans the electromagnetic energy spectrum from
approximately 0.43 to 0.79 μm, as noted previously.

3. Differentiate between Monochromatic photography and Color photography.
Monochromatic Photography
The most common material for photographic image recording is silver halide emulsion, depicted
in Fig. 5.3. In this material, silver halide grains are suspended in a transparent layer of gelatin
that is deposited on a glass, acetate or paper backing. If the backing is transparent, a
transparency can be produced, and if the backing is a white paper, a reflection print can be
obtained. When light strikes a grain, an electrochemical conversion process occurs, and part of
the grain is converted to metallic silver. A development center is then said to exist in the grain.
In the development process, a chemical developing agent causes grains with partial silver
content to be converted entirely to metallic silver. Next, the film is fixed by chemically removing
unexposed grains. The photographic process described above is called a nonreversal process.
It produces a negative image in the sense that the silver density is inversely proportional to the
exposing light. A positive reflection print of an image can be obtained in a two-stage process
with nonreversal materials. First, a negative transparency is produced, and then the negative
transparency is illuminated to expose negative reflection print paper. The resulting silver density
on the developed paper is then proportional to the light intensity that exposed the negative
transparency. A positive transparency of an image can be obtained with a reversal type of film.
This film is exposed and undergoes a first development similar to that of a nonreversal film. At
this stage in the photographic process, all grains that have been exposed to light are converted
completely to metallic silver. In the next step, the metallic silver grains are chemically removed.
The film is then uniformly exposed to light, or alternatively, a chemical process is performed to
expose the remaining silver halide grains. Then the exposed grains are developed and fixed to
produce a positive transparency whose density is proportional to the original light exposure.
Color Photography
Modern color photography systems utilize an integral tripack film, as illustrated in Fig. 5.4, to
produce positive or negative transparencies. In a cross section of this film, the first layer is a
silver halide emulsion sensitive to blue light. A yellow filter following the blue emulsion prevents
blue light from passing through to the green and red silver emulsions that follow in consecutive
layers and are naturally sensitive to blue light. A transparent base supports the emulsion layers.
Upon development, the blue emulsion layer is converted into a yellow dye transparency whose
dye concentration is proportional to the blue exposure for a negative transparency and inversely
proportional for a positive transparency. Similarly, the green and red emulsion layers become
magenta and cyan dye layers, respectively. Color prints can be obtained by a variety of
processes. The most common technique is to produce a positive print from a color negative
transparency onto nonreversal color paper. In the establishment of a mathematical model of the
color photographic process, each emulsion layer can be considered to react to light as does an
emulsion layer of a monochrome photographic material. To a first approximation, this
assumption is correct. However, there are often significant interactions between the emulsion
and dye layers. Each emulsion layer possesses a characteristic sensitivity, as shown by the
typical curves of Fig. 5.5. The integrated exposure of each layer is obtained by weighting the
spectral distribution of the exposing light by that layer's spectral sensitivity and scaling by a
proportionality constant; the constants dR, dG, dB are adjusted so that the exposures are equal
for a reference white illumination and so that the film is not saturated. In the chemical development
process of the film, a positive transparency is produced with three absorptive dye layers of cyan,
magenta and yellow dyes.
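
A small numerical sketch of how such integrated exposures might be computed (the spectral curves and constants below are made-up placeholders, not the actual sensitivities of Fig. 5.5):

```python
import numpy as np

# Hypothetical spectral data: wavelengths in nm, exposing light C(lambda) and
# per-layer sensitivities S_R, S_G, S_B. These arrays are illustrative stand-ins.
wavelengths = np.linspace(400, 700, 301)                 # visible band, nm
C = np.ones_like(wavelengths)                            # flat reference "white" illuminant
S_R = np.exp(-((wavelengths - 610) / 40.0) ** 2)         # red-sensitive layer
S_G = np.exp(-((wavelengths - 540) / 40.0) ** 2)         # green-sensitive layer
S_B = np.exp(-((wavelengths - 460) / 40.0) ** 2)         # blue-sensitive layer

def integrated_exposure(C, S, d):
    """Exposure = d * integral of C(lambda) * S(lambda) d(lambda), trapezoidal rule."""
    return d * np.trapz(C * S, wavelengths)

# Choose d_R, d_G, d_B so the three exposures come out equal for the reference white.
raw = [np.trapz(C * S, wavelengths) for S in (S_R, S_G, S_B)]
d_R, d_G, d_B = [raw[0] / r for r in raw]

print(integrated_exposure(C, S_R, d_R),
      integrated_exposure(C, S_G, d_G),
      integrated_exposure(C, S_B, d_B))
```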
4. Define and explain the concepts of Dilation and Erosion.
Dilation
With dilation, an object grows uniformly in spatial extent. Generalized dilation is expressed
symbolically as

G(j, k) = F(j, k) ⊕ H(j, k)

where F(j, k), for 1 ≤ j, k ≤ N, is a binary-valued image and H(j, k), for 1 ≤ j, k ≤ L, where L is an
odd integer, is a binary-valued array called a structuring element. For notational simplicity, F(j, k)
and H(j, k) are assumed to be square arrays. Generalized dilation can be defined mathematically
and implemented in several ways. The Minkowski addition definition is

G(j, k) = ⋃_{(r, c) ∈ H} T_{r, c}{ F(j, k) }

where T_{r, c}{·} denotes translation by r rows and c columns.

It states that G(j,k) is formed by the union of all translates of F(j,k) with respect to itself in which
the translation distance is the row and column index of pixels of H(j,k) that are logical 1s. Fig. 6.3
illustrates the concept.
Erosion
With erosion an object shrinks uniformly. Generalized erosion is expressed symbolically as
G(j, k) = F(j, k) ⊖ H(j, k)

where H(j, k) is an odd-size L × L structuring element. Generalized erosion is defined to be

G(j, k) = ⋂_{(r, c) ∈ H} T_{r, c}{ F(j, k) }
The meaning of this relation is that erosion of F(j,k) by H(j,k) is the intersection of all translates of F(j,k) in
which the translation distance is the row and column index of pixels of H(j,k) that are in the logical one
state. Fig. 6.4 illustrates this. Fig. 6.5 illustrates generalized dilation and erosion.
[Figure: example binary arrays F(j, k) and H(j, k), and the resulting array G(j, k).]
Dilation is commutative:

A ⊕ B = B ⊕ A

but, in general, erosion is not commutative:

A ⊖ B ≠ B ⊖ A

Dilation and erosion are opposite in effect; dilation of the background of an object behaves like erosion of
the object. This statement can be quantified by the duality relationship

A ⊖ B = ~( ~A ⊕ B̃ )

where ~A denotes the complement of A and B̃ is the reflection of B about its origin.
Dilation and erosion are often applied to an image in concatenation. Dilation followed by erosion is called
a close operation. It is expressed symbolically as

G(j, k) = F(j, k) • H(j, k)

where H(j, k) is an L × L structuring element. The close operation is defined as

G(j, k) = [ F(j, k) ⊕ H(j, k) ] ⊖ H̃(j, k)

where H̃(j, k) denotes the reflection of H(j, k) about its center.
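
A compact NumPy sketch of these definitions may help make them concrete (illustrative only; it implements dilation as a union of translates and erosion as an intersection of translates, as described above, with the translation offsets taken relative to the center of the structuring element, which is an assumption about where the origin of H lies):

```python
import numpy as np

def translate(F, r, c):
    """Translate binary image F by r rows and c columns, padding with zeros."""
    G = np.zeros_like(F)
    rows, cols = F.shape
    G[max(0, r):rows + min(0, r), max(0, c):cols + min(0, c)] = \
        F[max(0, -r):rows + min(0, -r), max(0, -c):cols + min(0, -c)]
    return G

def dilate(F, H):
    """Generalized dilation: union of translates of F over the logical-1 pixels of H."""
    L = H.shape[0]
    G = np.zeros_like(F)
    for r in range(L):
        for c in range(L):
            if H[r, c]:
                G |= translate(F, r - L // 2, c - L // 2)
    return G

def erode(F, H):
    """Generalized erosion: intersection of translates of F over the logical-1 pixels
    of H (for a symmetric H the sign of the translation does not matter)."""
    L = H.shape[0]
    G = np.ones_like(F)
    for r in range(L):
        for c in range(L):
            if H[r, c]:
                G &= translate(F, r - L // 2, c - L // 2)
    return G

def close(F, H):
    """Close operation: dilation followed by erosion with the reflected element."""
    return erode(dilate(F, H), H[::-1, ::-1])

# Example: a 3 x 3 square object dilated and eroded by a 3 x 3 structuring element.
F = np.zeros((7, 7), dtype=np.uint8)
F[2:5, 2:5] = 1
H = np.ones((3, 3), dtype=np.uint8)
print(dilate(F, H))
print(erode(F, H))
```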

5. What is meant by Image Feature Evaluation? What are the two quantitative approaches used
for the evaluation of image features?
Introduction
An image feature is a distinguishing primitive characteristic or attribute of an image. Some
features are natural in the sense that such features are defined by the visual appearance of an
image, while other, artificial features result from specific manipulations of an image. Natural
features include the luminance of a region of pixels and gray scale textural regions. Image
amplitude histograms and spatial frequency spectra are examples of artificial features. Image
features are of major importance in the isolation of regions of common property within an image
(image segmentation) and subsequent identification or labeling of such regions (image
classification). Image segmentation thus provides the regions on which image classification techniques operate.

There are two quantitative approaches to the evaluation of image features: prototype
performance and figure of merit. In the prototype performance approach for image classification,
a prototype image with regions (segments) that have been independently categorized is
classified by a classification procedure using various image features to be evaluated. The
classification error is then measured for each feature set. The best set of features is, of course,
that which results in the least classification error.
The prototype performance approach for image segmentation is similar in nature. A prototype
image with independently identified regions is segmented by a segmentation procedure using a
test set of features. Then, the detected segments are compared to the known segments, and
the segmentation error is evaluated. The problems associated with the prototype performance
methods of feature evaluation are the integrity of the prototype data and the fact that the
performance indication is dependent not only on the quality of the features but also on the
classification or segmentation ability of the classifier or segmenter.
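
A minimal sketch of the prototype-performance idea (the nearest-class-mean classifier and the synthetic feature sets are illustrative assumptions, not from the text): the feature set that separates the independently categorized regions better yields the lower classification error.

```python
import numpy as np

def classification_error(features, labels):
    """Train a nearest-class-mean classifier on the prototype data and return
    its error rate on that data (a crude stand-in for the evaluation step)."""
    classes = np.unique(labels)
    means = np.array([features[labels == c].mean(axis=0) for c in classes])
    dists = np.linalg.norm(features[:, None, :] - means[None, :, :], axis=2)
    predicted = classes[np.argmin(dists, axis=1)]
    return np.mean(predicted != labels)

# Two synthetic classes of "regions"; feature set A separates them better than
# feature set B, so it should produce the lower classification error.
rng = np.random.default_rng(1)
labels = np.repeat([0, 1], 50)
feature_set_a = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(3.0, 1, (50, 2))])
feature_set_b = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(0.5, 1, (50, 2))])
print("error with feature set A:", classification_error(feature_set_a, labels))
print("error with feature set B:", classification_error(feature_set_b, labels))
```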
The figure-of-merit approach to feature evaluation involves the establishment of some functional
distance measurements between sets of image features such that a large distance implies a low
classification error, and vice versa. Faugeras and Pratt have utilized the Bhattacharyya distance
figure-of-merit for texture feature evaluation. The method should be extensible for other features
as well. The Bhattacharyya distance (B-distance for simplicity) is a scalar function of the
probability densities of features of a pair of classes, defined as

B(S1, S2) = -ln{ ∫ [ p(x | S1) p(x | S2) ]^(1/2) dx }

where x denotes a vector containing individual image feature measurements with conditional
density p(x | S1) for class S1, and similarly p(x | S2) for class S2.
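
When the feature densities are modeled as multivariate Gaussians, the B-distance has a well-known closed form; the sketch below uses that standard result (the Gaussian model and the example numbers are assumptions for illustration, not part of the text).

```python
import numpy as np

def bhattacharyya_distance(mu1, cov1, mu2, cov2):
    """B-distance between two multivariate Gaussian feature densities.

    Standard closed form:
      B = 1/8 (mu1-mu2)^T [(cov1+cov2)/2]^-1 (mu1-mu2)
          + 1/2 ln( det((cov1+cov2)/2) / sqrt(det(cov1) det(cov2)) )
    """
    mu1, mu2 = np.asarray(mu1, float), np.asarray(mu2, float)
    cov1, cov2 = np.asarray(cov1, float), np.asarray(cov2, float)
    cov_avg = (cov1 + cov2) / 2.0
    diff = mu1 - mu2
    term1 = diff @ np.linalg.solve(cov_avg, diff) / 8.0
    term2 = 0.5 * np.log(np.linalg.det(cov_avg) /
                         np.sqrt(np.linalg.det(cov1) * np.linalg.det(cov2)))
    return term1 + term2

# Example: two hypothetical classes described by two features each;
# a larger B-distance implies a lower achievable classification error.
print(bhattacharyya_distance([0.0, 0.0], np.eye(2), [2.0, 1.0], 1.5 * np.eye(2)))
```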

6. Explain Region Splitting and Merging with an example.

Region Splitting and Merging
Sub-divide an image into a set of disjoint regions and then merge and/or split the regions in an
attempt to satisfy the conditions stated in section 10.3.1.

Let R represent the entire image region and select a predicate P. One approach for segmenting R is to
subdivide it successively into smaller and smaller quadrant regions so that, for any resulting region Ri,
P(Ri) = TRUE. We start with the entire region R. If P(R) = FALSE, the image is divided into
quadrants. If P is FALSE for any quadrant, we subdivide that quadrant into sub-quadrants, and
so on. This particular splitting technique has a convenient representation in the form of a so-called
quadtree (that is, a tree in which each node has exactly four descendants), as shown in Fig. 10.4.
The root of the tree corresponds to the entire image and each node corresponds to a subdivision;
in that figure, only one of the quadrants was subdivided further.
If only splitting were used, the final partition would likely contain adjacent regions with identical
properties. This drawback may be remedied by allowing merging as well as splitting. Satisfying
the constraints of section 10.3.1 requires merging only adjacent regions whose combined pixels
satisfy the predicate P. That is, two adjacent regions Rj and Rk are merged only if P(Rj ∪ Rk) = TRUE.
The procedure can be summarized as follows (a sketch of the splitting stage in code follows this list):
1. Split into four disjoint quadrants any region Ri for which P(Ri) = FALSE.
2. Merge any adjacent regions Rj and Rk for which P(Rj ∪ Rk) = TRUE.
3. Stop when no further merging or splitting is possible.
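
A compact sketch of the quadtree splitting stage described above (the homogeneity predicate used here, a variance threshold, is an illustrative assumption; the merge step would follow as in steps 2 and 3):

```python
import numpy as np

def predicate(region):
    """Hypothetical homogeneity predicate P: TRUE when the region's intensity
    variance falls below a threshold (an illustrative choice, not from the text)."""
    return np.var(region) < 50.0

def quadtree_split(image, r0=0, c0=0, size=None, min_size=2):
    """Recursively split a square region into quadrants until P(Ri) = TRUE
    or the minimum region size is reached; returns (row, col, size) leaves."""
    if size is None:
        size = image.shape[0]
    region = image[r0:r0 + size, c0:c0 + size]
    if predicate(region) or size <= min_size:
        return [(r0, c0, size)]
    half = size // 2
    leaves = []
    for dr, dc in [(0, 0), (0, half), (half, 0), (half, half)]:
        leaves += quadtree_split(image, r0 + dr, c0 + dc, half, min_size)
    return leaves

# Example: an 8 x 8 image whose right half is much brighter than its left half.
# The whole image fails P, so it splits once into four homogeneous quadrants.
rng = np.random.default_rng(0)
img = rng.normal(0.0, 1.0, (8, 8))
img[:, 4:] += 100.0
for leaf in quadtree_split(img):
    print(leaf)
```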
