SGU 3643
LECTURE 4
Visual Interpretation
Elements of Visual Interpretation:
tone
shape
size
pattern
texture
shadow, and
association
Pre-Processing
Preprocessing functions involve those operations that are
normally required prior to the main data analysis and
extraction of information, and are generally grouped as
radiometric or geometric corrections.
Radiometric corrections include correcting the data for sensor
irregularities and unwanted sensor or atmospheric noise, and
converting the data so they accurately represent the reflected
or emitted radiation measured by the sensor.
Geometric corrections include correcting for geometric
distortions due to sensor-Earth geometry variations, and
conversion of the data to real world coordinates (e.g. latitude
and longitude) on the Earth's surface.
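As an illustration of mapping pixel positions to real-world coordinates, a six-parameter affine geotransform (the convention used by GDAL, for example) can be sketched as follows; the origin and pixel size below are made up:

```python
# Affine geotransform mapping pixel (row, col) to map coordinates.
# The six coefficients follow the GDAL convention:
# (origin_x, pixel_width, row_rotation, origin_y, col_rotation, pixel_height)
def pixel_to_map(gt, row, col):
    x = gt[0] + col * gt[1] + row * gt[2]
    y = gt[3] + col * gt[4] + row * gt[5]
    return x, y

# Illustrative example: 30 m pixels, north-up image, origin at (500000, 4600000)
gt = (500000.0, 30.0, 0.0, 4600000.0, 0.0, -30.0)
print(pixel_to_map(gt, row=10, col=20))  # → (500600.0, 4599700.0)
```

Real geometric correction also involves resampling the image to the new grid; the transform above only shows the coordinate conversion step.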
Image Enhancement
To improve the appearance of the
imagery to assist in visual
interpretation and analysis.
Examples of enhancement
functions include contrast
stretching to increase the
tonal distinction between
various features in a scene,
and spatial filtering to
enhance (or suppress) specific
spatial patterns in an image.
Image Transformation
Image transformations are operations similar in
concept to those for image enhancement.
However, unlike image enhancement operations
which are normally applied only to a single
channel of data at a time, image transformations
usually involve combined processing of data from
multiple spectral bands.
Arithmetic operations (i.e. subtraction, addition,
multiplication, division) are performed to combine
and transform the original bands into "new"
images which better display or highlight certain
features in the scene.
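A minimal sketch of such band arithmetic, using two small hypothetical bands (the pixel values and the NIR/red band choice are illustrative, not from any particular sensor):

```python
import numpy as np

# Two hypothetical spectral bands (e.g. red and near-infrared).
red = np.array([[50, 100], [150, 200]], dtype=np.float64)
nir = np.array([[100, 100], [75, 250]], dtype=np.float64)

# Band ratio: division highlights spectral differences between the two bands.
# A small epsilon guards against division by zero in dark pixels.
ratio = nir / (red + 1e-6)

# Normalised difference of the two bands (NDVI when they are NIR and red):
ndvi = (nir - red) / (nir + red + 1e-6)
```

Each arithmetic combination produces a "new" single-band image that can be displayed or analysed like any other.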
Image Classification
Two generic approaches are used most often, namely supervised
and unsupervised classification.
Preprocessing
Image pre-processing has a number
of component parts:
Radiometric correction
Geometric correction
Noise removal
Radiometric Correction
Radiometric corrections may be necessary due to
variations in scene illumination and viewing
geometry, atmospheric conditions, and sensor
noise and response.
Each of these will vary depending on the specific
sensor and platform used to acquire the data and
the conditions during data acquisition.
Also, it may be desirable to convert and/or
calibrate the data to known (absolute) radiation or
reflectance units to facilitate comparison between
data sets.
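A common form of such calibration is a linear conversion of raw digital numbers (DN) to at-sensor radiance; a minimal sketch, with made-up gain and offset coefficients (real values come from the sensor's calibration metadata):

```python
import numpy as np

# Convert raw digital numbers (DN) to at-sensor spectral radiance with a
# linear calibration: L = gain * DN + offset.
# The gain/offset values below are illustrative, not from a real sensor.
def dn_to_radiance(dn, gain, offset):
    return gain * dn.astype(np.float64) + offset

dn = np.array([[0, 128], [200, 255]], dtype=np.uint8)
radiance = dn_to_radiance(dn, gain=0.8, offset=-2.0)
```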
Striping
Dropped lines
Image Enhancement
Enhancements are used to make it easier for visual
interpretation and understanding of imagery.
The advantage of digital imagery is that it allows us to
manipulate the digital pixel values in an image
There are three main types of image enhancement:
Contrast enhancement
Spatial feature enhancement
Multi-image enhancement
Contrast Enhancement
Image Histogram
In raw imagery, the useful data often populates only a
small portion of the available range of digital values
(commonly 8 bits or 256 levels).
A histogram is a graphical representation of the brightness
values that comprise an image.
The brightness values (i.e. 0-255) are displayed along the x-axis
of the graph.
The frequency of occurrence of each of these values in the
image is shown on the y-axis.
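A histogram of this kind can be computed directly from the pixel values; a small sketch using NumPy on a tiny made-up 8-bit image:

```python
import numpy as np

# Build the histogram of an 8-bit image: counts of each brightness value 0-255.
img = np.array([[0, 0, 10], [10, 10, 255]], dtype=np.uint8)
counts, _ = np.histogram(img, bins=256, range=(0, 256))

# counts[v] is the number of pixels with brightness value v
```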
Histogram Stretch
By manipulating the range of digital values in an image,
graphically represented by its histogram, we can apply
various enhancements to the data.
There are many different techniques and methods of
enhancing contrast and detail in an image; we will cover
only a few common ones here.
Linear Stretch
Histogram Equalised Stretch
Linear Stretch
A linear stretch involves identifying lower and upper
bounds from the histogram (usually the minimum and
maximum brightness values in the image) and applying a
transformation to stretch this range to fill the full range.
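A minimal sketch of such a min-max linear stretch (the sample pixel values are arbitrary; the sketch assumes the image is not constant):

```python
import numpy as np

# Linear contrast stretch: map [min, max] of the input to the full 0-255 range.
def linear_stretch(img):
    lo, hi = float(img.min()), float(img.max())
    scaled = (img.astype(np.float64) - lo) / (hi - lo) * 255.0
    return scaled.round().astype(np.uint8)

# Raw values occupy only 84-153; after stretching they span the full 0-255.
raw = np.array([[84, 100], [120, 153]], dtype=np.uint8)
stretched = linear_stretch(raw)
```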
Spatial Filtering
A common filtering procedure
involves moving a 'window'
of a few pixels in dimension
(e.g. 3x3, 5x5, etc.) over
each pixel in the image,
applying a mathematical
calculation using the pixel
values under that window,
and replacing the central
pixel with the new value.
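The moving-window operation described above can be sketched as a simple 3x3 averaging filter (the NumPy implementation and the toy image are illustrative; borders are left unchanged for simplicity):

```python
import numpy as np

# Moving-window (3x3) average filter: replace each interior pixel with the
# mean of the window centred on it. Border pixels are left unchanged here.
def mean_filter_3x3(img):
    out = img.astype(np.float64).copy()
    for r in range(1, img.shape[0] - 1):
        for c in range(1, img.shape[1] - 1):
            out[r, c] = img[r - 1:r + 2, c - 1:c + 2].mean()
    return out

img = np.zeros((5, 5))
img[2, 2] = 9.0            # single bright pixel
smooth = mean_filter_3x3(img)   # the bright spike is spread out and dampened
```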
A low-pass filter is
designed to emphasise
larger, homogeneous
areas of similar tone and
reduce the smaller detail
in an image. Thus, low-pass
filters generally
serve to smooth the
appearance of an image.
A high-pass filter does
the opposite, and serves
to sharpen the
appearance of fine detail
in an image.
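A minimal sketch of a high-pass filter as a 3x3 kernel whose coefficients sum to zero (the kernel values are a common textbook choice, not taken from the lecture): flat areas produce zero output, while edges and fine detail are emphasised.

```python
import numpy as np

# High-pass (sharpening) kernel: coefficients sum to zero, so uniform areas
# map to zero and abrupt brightness changes are emphasised.
kernel = np.array([[-1, -1, -1],
                   [-1,  8, -1],
                   [-1, -1, -1]], dtype=np.float64)

def convolve_3x3(img, kernel):
    out = np.zeros_like(img, dtype=np.float64)
    for r in range(1, img.shape[0] - 1):
        for c in range(1, img.shape[1] - 1):
            out[r, c] = (img[r - 1:r + 2, c - 1:c + 2] * kernel).sum()
    return out

flat = np.full((5, 5), 7.0)                 # uniform area: response is zero
edge = np.zeros((5, 5)); edge[:, 2:] = 10.0  # vertical brightness step
```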
Directional or edge
detecting filters highlight
linear features, such as
roads or field boundaries.
These filters can also be
designed to enhance
features which are
oriented in specific
directions and are useful
in applications such as
geology, for the detection
of linear geologic
structures.
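Directional filtering can be sketched with Sobel-style kernels (an illustrative choice; many other directional kernels exist). The vertical kernel responds strongly to vertical linear features, and its transpose to horizontal ones:

```python
import numpy as np

# Sobel-style directional kernels: the first responds to vertical features,
# its transpose to horizontal ones.
sobel_vertical = np.array([[-1, 0, 1],
                           [-2, 0, 2],
                           [-1, 0, 1]], dtype=np.float64)
sobel_horizontal = sobel_vertical.T

def convolve_3x3(img, kernel):
    out = np.zeros_like(img, dtype=np.float64)
    for r in range(1, img.shape[0] - 1):
        for c in range(1, img.shape[1] - 1):
            out[r, c] = (img[r - 1:r + 2, c - 1:c + 2] * kernel).sum()
    return out

# A vertical edge: strong response from the vertical kernel, none from
# the horizontal one.
img = np.zeros((5, 5)); img[:, 3:] = 50.0
v = convolve_3x3(img, sobel_vertical)
h = convolve_3x3(img, sobel_horizontal)
```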
Colour Composites
A colour composite is a colour image produced
through optical combination of multiband images
by projection through filters.
True Colour Composite:
A colour imaging process whereby the colour of
the image is the same as the colour of the object
imaged.
False Colour Composite:
A colour imaging process which produces an
image of a colour that does not correspond to the
true colour of the scene (as seen by our eyes).
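Digitally, a false colour composite amounts to stacking bands into the display's red, green, and blue channels; the classic NIR→red, red→green, green→blue assignment below is an illustrative choice (it renders healthy vegetation in shades of red), and the pixel values are made up:

```python
import numpy as np

# Hypothetical co-registered bands of a two-pixel scene.
nir   = np.array([[200, 180]], dtype=np.uint8)
red   = np.array([[60, 70]], dtype=np.uint8)
green = np.array([[80, 90]], dtype=np.uint8)

# Standard false colour assignment: NIR→red, red→green, green→blue.
false_colour = np.dstack([nir, red, green])  # shape (rows, cols, 3) RGB image
```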
Band Ratioing
Advantages:
Ratio images are not affected by variations in scene
illumination, can discriminate subtle spectral differences not
obvious in grey-scale and colour composite images, and are
easy to produce.
Disadvantages:
Noise is accentuated by the division, and thus has to be
removed before ratioing.