
Fundamentals of Remote Sensing

Course Outline: Fundamentals of Remote Sensing
• Overview of the Fundamentals of Remote Sensing;
• Understanding Popular Remote Sensing Software (ERDAS IMAGINE);
• Understanding Various Remote Sensing Platforms;
• Selecting and Downloading Satellite Images;
• Understanding Various Image Bands: Landsat 1-5 MSS, Landsat 4-5 TM, Landsat 7 ETM+, Landsat 8 OLI/TIRS;
• Layer Stacking;
• Mosaicking, Creating an AOI, Subsetting;
• Displaying an Image Spectral Profile;
• Image Display;
• Image Stretching;
• Histogram Equalization;
• Image Rectification: Radiometric Correction, Geometric Correction and Atmospheric Correction;
• Image Enhancement: Spatial Enhancement, Radiometric Enhancement, and Spectral Enhancement;
• Image Processing and Classification: Interactive Supervised Classification; Maximum Likelihood Classification; ISO Cluster Unsupervised Classification; Class Probability; Principal Components; Image Thresholding; Density Slicing;
• Application of Image Indices;
• Accuracy Assessment;
• Applications of Remote Sensing in Change Detection, Hazard, Risk and Disaster Analysis, etc.
Remote Sensing
• Introduction
• The science of remote sensing has
emerged as one of the most fascinating
subjects over the past three decades.
Earth observation from space through
various remote sensing instruments has
provided a vantage means of monitoring
land surface dynamics, natural resources
management, and the overall state of the
environment itself. (Joseph, 2005)
• Remote sensing is defined, for
our purposes, as the
measurement of object properties
on the earth’s surface using data
acquired from aircraft and
satellites.
• It is therefore an attempt to
measure something at a distance,
rather than in situ. While remote-
sensing data can consist of
discrete point measurements or a
profile along a flight path, we are
most interested here in
measurements over a two-
dimensional spatial grid, i.e.
images.
• Remote sensing systems, particularly
those deployed on satellites, provide a
repetitive and consistent view of the earth
that is invaluable to monitoring the earth
system and the effect of human activities
on the earth. (Schowengerdt, 2006)
• What is remote sensing?
• Remote means away from or at a
distance, sensing means detecting
a property or characteristics.
Thus the term Remote Sensing
refers to the examination,
measurement and analysis of an
object without being in contact with it.
• Remote sensing can be
broadly defined as the
collection and
interpretation of
information about an
object, area, or event
without being in physical
contact with the object.
• Using our eyes to read or look
at any object is also a form of
remote sensing.
• However, remote sensing
includes not only what is visual,
but also what can’t be seen
with the eyes, including sound
and heat.
• “Science and art of obtaining
information about an object, area or
phenomenon through an analysis of
data acquired by a device that is not
in direct contact with the area, object
or phenomenon under investigation”.
– Lillesand, Thomas M. and Ralph W. Kiefer,
Remote Sensing and Image Interpretation,
John Wiley and Sons, Inc., 1979, p. 1.
• Why has remote sensing been
developed?
• Remote sensing has a very long
history dating back to the end of the
19th century when cameras were first
made airborne using balloons and
kites.
• The advent of aircraft further
enhanced the opportunities to
take photographs from the air. It
was realized that the airborne
perspective gave a completely
different view to that which was
available from the ground.
• How is remote sensing carried out today?
• Today, remote sensing is carried out using airborne
and spaceborne methods using satellite
technology.
• Furthermore, remote sensing not only uses film
photography, but also digital camera, scanner and
video, as well as radar and thermal sensors.
• Whereas in the past remote sensing was limited to
what could be seen in the visual part of the
electromagnetic spectrum, the parts of the
spectrum which cannot be seen with the human
eye can now be utilized through special filters,
photographic films and other types of sensors.
• What is it used for?
• The most notable early application was probably
aerial reconnaissance during the First World War.
• Aerial photography allowed the positions of the
opposing armies to be monitored over wide areas,
relatively quickly, and more safely than a ground
based survey. Aerial photographs would also have
allowed rapid and relatively accurate updating of
military maps and strategic positions.
• Today, the benefits of remote sensing are heavily
utilized in environmental management which
frequently has a requirement for rapid, accurate and
up-to-date data collection.
Scope and Benefits of Remote
Sensing
• Remote sensing has many advantages over
ground-based survey in that large tracts of land
can be surveyed at any one time, and areas of
land (or sea) that are otherwise inaccessible
can be monitored.
• The advent of satellite technology and
multispectral sensors has further enhanced this
capability, with the ability to capture images of
very large areas of land in one pass, and by
collecting data about an environment that would
normally not be visible to the human eye.
Advantages of remote sensing
• Provides a regional view (large areas)
• Provides repetitive looks at the same area
• Remote sensors "see" over a broader portion of the
spectrum than the human eye
• Sensors can focus in on a very specific bandwidth
in an image or a number of bandwidths
simultaneously
• Provides geo-referenced, digital data
• Some remote sensors operate in all seasons, at
night, and in bad weather
Remote sensing applications
• Land-use mapping
• Agriculture applications (crop condition, yield
prediction, soil erosion).
• Telecommunication planning
• Environmental assessment and monitoring
(urban growth, hazardous waste)
• Hydrology and coastal mapping
• Urban planning
• Emergencies and Hazards
• Global change monitoring (atmospheric ozone depletion,
deforestation, global warming).
• Meteorology (atmosphere dynamics, weather prediction).
• Forestry
• Renewable natural resources (wetlands, soils, forests,
oceans).
• Nonrenewable resource exploration (minerals, oil,
natural gas).
• Mapping (topography, land use, civil engineering).
• Military surveillance and reconnaissance (strategic
policy, tactical assessment).
Components of Remote Sensing
The process involves an interaction between
incident radiation and the targets of interest.
The following seven elements are involved in
remote sensing:
• Energy Source or Illumination (A)
• Radiation and the Atmosphere (B)
• Interaction with the Target (C)
• Recording of Energy by the Sensor (D)
• Transmission, Reception, and Processing (E)
• Interpretation and Analysis (F)
• Application (G)
The Elements of Remote Sensing
Energy Source or Illumination (A) - the first
requirement for remote sensing is to have an
energy source which illuminates or provides
electromagnetic energy to the target of interest.
Radiation and the Atmosphere (B) - as the
energy travels from its source to the target, it
will come in contact with and interact with
the atmosphere it passes through. This
interaction may take place a second time as
the energy travels from the target to the
sensor.
Interaction with the Target (C) - once the
energy makes its way to the target through
the atmosphere, it interacts with the target
depending on the properties of both the
target and the radiation.
Recording of Energy by the Sensor (D) - after
the energy has been scattered by, or emitted from
the target, we require a sensor (remote - not in
contact with the target) to collect and record the
electromagnetic radiation.
Transmission, Reception, and
Processing (E) - the energy recorded by
the sensor has to be transmitted, often in
electronic form, to a receiving and
processing station where the data are
processed into an image (hardcopy and/or
digital).
Interpretation and Analysis (F) - the processed
image is interpreted, visually and/or digitally or
electronically, to extract information about the
target which was illuminated.
Application (G) - the final element of the
remote sensing process is achieved when
we apply the information we have been able
to extract from the imagery about the target
in order to better understand it, reveal some
new information, or assist in solving a
particular problem.
• Principles of Remote Sensing
• Remote sensing has been defined in many
ways. It can be thought of as including traditional
aerial photography, geophysical
measurements such as surveys of the earth’s
gravity and magnetic fields and even seismic
sonar surveys. However, in a modern context,
the term remote sensing usually implies digital
measurements of electromagnetic energy
often for wavelengths that are not visible to
the human eye. The basic principles of Remote
Sensing may be listed as below:
1. Electromagnetic energy has been classified by
wavelength and arranged to form the
electromagnetic spectrum.
2. As electromagnetic energy interacts with the
atmosphere and the surface of the Earth, the
most important concept to remember is the
conservation of energy (i.e., the total energy is
constant).
3. As electromagnetic waves travel, they encounter
objects (discontinuities in velocity) that reflect
some energy like a mirror and transmit some
energy after changing the travel path.
4. The distance (d) an electromagnetic wave
travels in a certain time (t) depends on the
velocity of the material (v) through which the
wave is traveling; d = vt.
5. The velocity (c), frequency (f), and wavelength
(λ) of an electromagnetic wave are related by
the equation: c = fλ.
6. The analogy of a rock dropped into a pond
can be drawn as an example to define wave
front.
7. It is quite appropriate to look at the amplitude
of an electromagnetic wave and think of it as
a measure of the energy in that wave.

8. Electromagnetic waves lose energy
(amplitude) as they travel because of several
phenomena.
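As a quick worked example of principle 5, the frequency of any electromagnetic wave follows from its wavelength. A minimal Python sketch (the wavelength is an illustrative value only):

```python
# Worked example of principle 5 (c = f * lambda), assuming propagation
# in a vacuum; the wavelength below is an illustrative value only.
C = 3.0e8  # approximate speed of light, m/s

def frequency_hz(wavelength_m: float) -> float:
    """Frequency of an electromagnetic wave with the given wavelength."""
    return C / wavelength_m

# Red light at 0.65 micrometers:
print(f"{frequency_hz(0.65e-6):.2e} Hz")  # about 4.6e14 Hz
```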
• Electromagnetic Radiation (EMR)
Spectrum
• The sensors on remote sensing
platforms usually record
electromagnetic radiation.
Electromagnetic radiation (EMR) is
energy transmitted through space in
the form of electric and magnetic
waves (Star and Estes, 1990).
• Remote sensors are made up of
detectors that record specific
wavelengths of the
electromagnetic spectrum. The
electromagnetic spectrum is the
range of electromagnetic radiation
extending from cosmic waves to
radio waves (Jensen, 1996).
Radiation
• Electromagnetic energy is emitted in waves.
• The amount of radiation emitted from an object depends on its temperature.
Planck Curve
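One standard way to quantify this temperature dependence (an addition here, not stated above) is Wien's displacement law, which locates the peak of the Planck curve. A minimal sketch:

```python
# Wien's displacement law: lambda_max = b / T, where b is Wien's constant.
# This illustrates why the hot sun peaks in visible light while the much
# cooler Earth emits mainly in the thermal infrared.
WIEN_B = 2898.0  # Wien's constant in micrometer-kelvins (approximate)

def peak_wavelength_um(temperature_k: float) -> float:
    """Wavelength of maximum blackbody emission, in micrometers."""
    return WIEN_B / temperature_k

print(peak_wavelength_um(5800.0))  # sun:   ~0.50 um (visible light)
print(peak_wavelength_um(300.0))   # earth: ~9.7 um (thermal infrared)
```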
• All types of land cover (rock
types, water bodies, etc.) absorb
a portion of the electromagnetic
spectrum, giving a
distinguishable signature of
electromagnetic radiation.
• Armed with the knowledge of which
wavelengths are absorbed by certain
features and the intensity of the
reflectance, you can analyze a
remotely sensed image and make
fairly accurate assumptions about the
scene. Figure 3 illustrates the
electromagnetic spectrum (Suits, 1983;
Star and Estes, 1990).
Electromagnetic energy
• The electromagnetic (EM) spectrum is the
continuous range of electromagnetic radiation,
extending from gamma rays (highest frequency
& shortest wavelength) to radio waves (lowest
frequency & longest wavelength) and including
visible light.
• The EM spectrum can be divided into seven
different regions —— gamma rays, X-rays,
ultraviolet, visible light, infrared, microwaves
and radio waves.
Use of Microwave Satellite Imagery to Analyse
Tropical Cyclones
• Remote sensing involves the measurement of
energy in many parts of the electromagnetic
(EM) spectrum. The major regions of interest in
satellite sensing are visible light, reflected and
emitted infrared, and the microwave regions.
The measurement of this radiation takes place in
what are known as spectral bands. A spectral
band is defined as a discrete interval of the EM
spectrum. For example, the wavelength range of
0.4 µm to 0.5 µm (µm = micrometers, or 10⁻⁶ m)
is one spectral band.
Visible Spectrum
• It is important to recognize how small the visible
portion is relative to the rest of the spectrum.
There is a lot of radiation around us which is
"invisible" to our eyes, but can be detected by
other remote sensing instruments and used to
our advantage.
• The visible wavelengths cover a range from
approximately 0.4 to 0.7 µm. The longest visible
wavelength is red and the shortest is violet.
• This is the only portion of the spectrum we can
associate with the concept of colors.
Infrared Spectrum
• Radiation in the reflected IR region is used for
remote sensing purposes in ways very similar to
radiation in the visible portion.
• The reflected IR covers wavelengths from
approximately 0.7 µm to 3.0 µm.
• The thermal IR region is quite different than the
visible and reflected IR portions, as this energy
is essentially the radiation that is emitted from
the Earth's surface in the form of heat.
• The thermal IR covers wavelengths from
approximately 3.0 µm to 100 µm.
SWIR and LWIR
• The near-infrared and middle-infrared
regions of the electromagnetic spectrum
are sometimes referred to as the short
wave infrared region (SWIR). This is to
distinguish this area from the thermal or
far infrared region, which is often referred
to as the long wave infrared region
(LWIR). The SWIR is characterized by
reflected radiation whereas the LWIR is
characterized by emitted radiation.
Absorption / Reflection Spectra
• When radiation interacts with matter, some
wavelengths are absorbed and others are
reflected. To enhance features in image
data, it is necessary to understand how
vegetation, soils, water, and other land
covers reflect and absorb radiation. The
study of the absorption and reflection of
EMR waves is called spectroscopy.
Spectroscopy
There are two types of spectroscopy:
1.absorption spectra—the EMR wavelengths
that are absorbed by specific materials of
interest
2.reflection spectra—the EMR wavelengths
that are reflected by specific materials of
interest
Absorption Spectra
• Absorption is based on the molecular
bonds in the (surface) material. Which
wavelengths are absorbed depends upon
the chemical composition and crystalline
structure of the material. For pure
compounds, these absorption bands are
so specific that the SWIR region is often
called an infrared fingerprint.
Atmospheric Absorption
• In remote sensing, the sun is the radiation
source for passive sensors. However, the
sun does not emit the same amount of
radiation at all wavelengths. Figure 4
shows the solar irradiation curve, which is
far from linear.
• Solar radiation must travel through the Earth’s
atmosphere before it reaches the Earth’s surface. As it
travels through the atmosphere, radiation is affected by
four phenomena (Elachi, 1987):
• absorption—the amount of radiation absorbed by the
atmosphere
• scattering—the amount of radiation scattered away from
the field of view by the atmosphere
• scattering source —divergent solar irradiation scattered
into the field of view
• emission source —radiation re-emitted after absorption
• Reflectance Spectra
• After rigorously defining the incident
radiation (solar irradiation at target), it is
possible to study the interaction of the
radiation with the target material. When an
electromagnetic wave (solar illumination in
this case) strikes a target surface, three
interactions are possible (Elachi, 1987):
• Reflection
• Transmission
• Scattering
• Types of Platforms and Scanning Systems
• The vehicle or carrier for a remote sensor
to collect and record energy reflected or
emitted from a target or surface is called a
platform. The sensor must reside on a
stable platform removed from the target
or surface being observed. Platforms for
remote sensors may be situated on the
ground, on an aircraft or balloon (or some
other platform within the Earth's
atmosphere), or on a spacecraft or
satellite outside of the Earth's atmosphere.
• Typical platforms are satellites and
aircraft, but they can also include radio-
controlled aeroplanes, balloon kits for
low altitude remote sensing, as well as
ladder trucks or 'cherry pickers' for
ground investigations. The key factor in
selecting a platform is its altitude, which,
together with the instantaneous field of view
(IFOV) of the sensor on board, determines
the ground resolution.
• Ground-based sensors are often used to
record detailed information about the
surface which is compared with
information collected from aircraft or
satellite sensors. In some cases, this can
be used to better characterize the target
which is being imaged by these other
sensors, making it possible to better
understand the information in the imagery.
• Ground based sensors may be placed
on a ladder, scaffolding, tall building,
cherry-picker, crane, etc.
• Aerial Platforms
• Aerial platforms are primarily stable wing
aircraft, although helicopters are
occasionally used. Aircraft are often used
to collect very detailed images and
facilitate the collection of data over
virtually any portion of the Earth's surface
at any time.
• Satellite Platforms
• In space, remote sensing is sometimes conducted from the
space shuttle or, more commonly, from satellites.
Satellites are objects which revolve around another object
- in this case, the Earth.
• For example, the moon is a natural satellite, whereas man-
made satellites include those platforms launched for remote
sensing, communication, and telemetry (location and
navigation) purposes.
• Because of their orbits, satellites permit repetitive coverage
of the Earth's surface on a continuing basis. Cost is often a
significant factor in choosing among the various platform
options.
Salient features of some important satellite platforms

Feature                        Landsat 1,2,3   Landsat 4,5   SPOT     IRS-1A            IRS-1C
Nature                         Sun Syn         Sun Syn       Sun Syn  Sun Syn           Sun Syn
Altitude (km)                  919             705           832      904               817
Orbital period (minutes)       103.3           99            101      103.2             101.35
Inclination (degrees)          99              98.2          98.7     99                98.69
Temporal resolution (days)     18              16            26       22                24
Revolutions                    251             233           369      307               341
Equatorial crossing (AM)       09.30           09.30         10.30    10.00             10.30
Sensors                        RBV, MSS        MSS, TM       HRV      LISS-I, LISS-II   LISS-III, PAN, WiFS
Landsat Satellite Orbit
• The Landsat 8 and Landsat 7 satellites
both maintain a near-polar, sun-
synchronous orbit, following the World
Reference System (WRS-2). They each
make an orbit in about 99 minutes,
complete over 14 orbits per day, and
provide complete coverage of the Earth
every 16 days.
Landsat Image
• Landsat represents the world's longest
continuously acquired collection of
space-based moderate-resolution land
remote sensing data. Four decades of
imagery provides a unique resource for
those who work in agriculture, geology,
forestry, regional planning, education,
mapping, and global change research.
Landsat images are also invaluable for
emergency response and disaster relief.
• The Landsat Multispectral Scanner
(MSS) was carried on Landsats 1-5, and
images consist of four spectral bands with
60 meter spatial resolution. The
approximate scene size is 170 km north-
south by 185 km east-west (106 mi by 115
mi).
Landsat 1-5 Multispectral Scanner (MSS)
Specific band designations differ from Landsats 1, 2, and 3 to Landsats 4 and 5.

Landsat 1-3                    Landsat 4-5                    Wavelength (micrometers)   Resolution (meters)
Band 4 - Green                 Band 1 - Green                 0.5-0.6                    60*
Band 5 - Red                   Band 2 - Red                   0.6-0.7                    60*
Band 6 - Near Infrared (NIR)   Band 3 - Near Infrared (NIR)   0.7-0.8                    60*
Band 7 - Near Infrared (NIR)   Band 4 - Near Infrared (NIR)   0.8-1.1                    60*
• Landsat 4-5 Thematic Mapper (TM)
• The Landsat Thematic Mapper
(TM) sensor was carried on Landsat 4 and
Landsat 5, and images consist of six
spectral bands with a spatial resolution of
30 meters for Bands 1 to 5 and 7, and one
thermal band (Band 6). The approximate
scene size is 170 km north-south by 183
km east-west (106 mi by 114 mi).
Landsat 4-5 Thematic Mapper (TM) Band Designations
TM Band 6 was acquired at 120-meter resolution, but products are resampled to 30-meter pixels.

Bands                                    Wavelength (micrometers)   Resolution (meters)
Band 1 - Blue                            0.45-0.52                  30
Band 2 - Green                           0.52-0.60                  30
Band 3 - Red                             0.63-0.69                  30
Band 4 - Near Infrared (NIR)             0.76-0.90                  30
Band 5 - Shortwave Infrared (SWIR) 1     1.55-1.75                  30
Band 6 - Thermal                         10.40-12.50                120* (30)
Band 7 - Shortwave Infrared (SWIR) 2     2.08-2.35                  30

• Landsat 7: The Landsat Enhanced Thematic
Mapper Plus (ETM+) sensor is carried on
Landsat 7, and images consist of seven spectral
bands with a spatial resolution of 30 meters for
Bands 1-5 and 7. The resolution for Band 8
(panchromatic) is 15 meters. All bands can
collect one of two gain settings (high or low) for
increased radiometric sensitivity and dynamic
range, while Band 6 collects both high and low
gain for all scenes (Bands 61 and 62). The
approximate scene size is 170 km north-south
by 183 km east-west (106 mi by 114 mi).
Landsat 7: Enhanced Thematic Mapper Plus (ETM+) Band Designations

Bands                                    Wavelength (micrometers)   Resolution (meters)
Band 1 - Blue                            0.45-0.52                  30
Band 2 - Green                           0.52-0.60                  30
Band 3 - Red                             0.63-0.69                  30
Band 4 - Near Infrared (NIR)             0.77-0.90                  30
Band 5 - Shortwave Infrared (SWIR) 1     1.55-1.75                  30
Band 6 - Thermal                         10.40-12.50                60* (30)
Band 7 - Shortwave Infrared (SWIR) 2     2.09-2.35                  30
Band 8 - Panchromatic                    0.52-0.90                  15

• Landsat 8: Operational Land Imager (OLI) and
Thermal Infrared Sensor (TIRS) images consist of
nine spectral bands with a spatial resolution of 30
meters for Bands 1 to 7 and 9. The ultra blue Band
1 is useful for coastal and aerosol studies. Band 9
is useful for cirrus cloud detection. The resolution
for Band 8 (panchromatic) is 15 meters. Thermal
bands 10 and 11 are useful in providing more
accurate surface temperatures and are collected at
100 meters. The approximate scene size is 170 km
north-south by 183 km east-west (106 mi by 114
mi).
Landsat 8: Operational Land Imager (OLI) and Thermal Infrared Sensor (TIRS) Band Designations

Bands                                    Wavelength (micrometers)   Resolution (meters)
Band 1 - Ultra Blue (coastal/aerosol)    0.435-0.451                30
Band 2 - Blue                            0.452-0.512                30
Band 3 - Green                           0.533-0.590                30
Band 4 - Red                             0.636-0.673                30
Band 5 - Near Infrared (NIR)             0.851-0.879                30
Band 6 - Shortwave Infrared (SWIR) 1     1.566-1.651                30
Band 7 - Shortwave Infrared (SWIR) 2     2.107-2.294                30
Band 8 - Panchromatic                    0.503-0.676                15
Band 9 - Cirrus                          1.363-1.384                30
Band 10 - Thermal Infrared (TIRS) 1      10.60-11.19                100* (30)
Band 11 - Thermal Infrared (TIRS) 2      11.50-12.51                100* (30)

Band       Landsat 5 / Landsat 7          Landsat 8
Band 1     Blue                           Ultra Blue (coastal/aerosol)
Band 2     Green                          Blue
Band 3     Red                            Green
Band 4     Near Infrared (NIR)            Red
Band 5     Shortwave Infrared (SWIR) 1    Near Infrared (NIR)
Band 6     Thermal                        Shortwave Infrared (SWIR) 1
Band 7     Shortwave Infrared (SWIR) 2    Shortwave Infrared (SWIR) 2
Band 8     -                              Panchromatic
Band 9     -                              Cirrus
Band 10    -                              Thermal Infrared (TIRS) 1
Band 11    -                              Thermal Infrared (TIRS) 2
Visualization
Pixels, Images and colors
Color Composite Images
• In displaying a color composite image, three
primary colors (red, green and blue) are used.
When these three colors are combined in
various proportions, they produce different
colors in the visible spectrum. Associating each
spectral band (not necessarily a visible band) to
a separate primary color results in a color
composite image.
Many colors can be formed by combining the three primary
colors (Red, Green, Blue) in various proportions.
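To make the idea concrete, the sketch below builds a composite digitally, assuming three co-registered NumPy band arrays as hypothetical inputs; it is a minimal illustration, not the display pipeline of any particular software.

```python
import numpy as np

def to_byte(band: np.ndarray) -> np.ndarray:
    """Linearly rescale one band to 0-255 for display (simple min-max
    rescale for brevity; a percentile clip is more common in practice)."""
    lo, hi = float(band.min()), float(band.max())
    scale = (hi - lo) if hi > lo else 1.0
    return ((band - lo) / scale * 255.0).astype(np.uint8)

def color_composite(r: np.ndarray, g: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Stack three co-registered bands into an (rows, cols, 3) RGB array."""
    return np.dstack([to_byte(r), to_byte(g), to_byte(b)])
```

For a Landsat 5 natural color composite, bands 3, 2 and 1 would be passed as r, g and b (see the band combination tables below); any other assignment produces a false color composite.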
False Color Composite
• The display color assignment for any band of a
multispectral image can be done in an entirely
arbitrary manner. In this case, the color of a
target in the displayed image does not have any
resemblance to its actual color. The resulting
product is known as a false color composite
image. There are many possible schemes of
producing false color composite images.
However, some schemes may be more suitable
for detecting certain objects in the image.
Natural Color Composite
• When displaying a natural color composite
image, the spectral bands (some of which may
not be in the visible region) are combined in
such a way that the appearance of the displayed
image resembles a visible color photograph, i.e.
vegetation in green, water in blue, soil in brown
or grey, etc. Many people refer to this composite
as a "true color" composite. However, this term
may be misleading since in many instances the
colors are only simulated to look similar to the
"true" colors of the targets
Color             Landsat 5 / Landsat 7 Bands   Landsat 8 Bands
Color Infrared    4, 3, 2                       5, 4, 3
Natural Color     3, 2, 1                       4, 3, 2
False Color       5, 4, 3                       6, 5, 4
False Color       7, 5, 3                       7, 6, 4
False Color       7, 4, 2                       7, 5, 3
Band Combinations for Landsat 8
Natural Color                        4, 3, 2
False Color (urban)                  7, 6, 4
Color Infrared (vegetation)          5, 4, 3
Agriculture                          6, 5, 2
Atmospheric Penetration              7, 6, 5
Healthy Vegetation                   5, 6, 2
Land/Water                           5, 6, 4
Natural With Atmospheric Removal     7, 5, 3
Shortwave Infrared                   7, 5, 4
Vegetation Analysis                  6, 5, 4
Common platforms and wavelengths
• A platform is the vehicle or carrier on which
remote sensors are borne. In meteorology,
platforms are used to house sensors that
obtain data for remote sensing purposes,
and are classified according to their altitude
and the events to be monitored.
• Aircraft and satellites are the common
platforms for remote sensing of the earth
and its natural resources. Aerial
photography in the visible portion of the
electromagnetic wavelength was the
original form of remote sensing, but
technological developments have enabled
the acquisition of information at other
wavelengths including near infrared,
thermal infrared and microwave.
• Collection of information over a large
number of wavelength bands is
referred to as multispectral or
hyperspectral data.
• The development and deployment of
manned and unmanned satellites have
enhanced the collection of remotely
sensed data and offer an inexpensive
way to obtain information over large areas.
• The capacity of remote sensing to identify
and monitor land surfaces and
environmental conditions has expanded
greatly over the last few years and
remotely sensed data will be an essential
tool in natural resource management.
• Satellite sensors have been
designed to measure responses
within particular spectral bands to
enable the discrimination of the
major Earth surface materials.
• Scientists will choose a particular spectral
band for data collection depending on
what they wish to examine. The design of
satellite sensors is based on the
absorption characteristics of Earth surface
materials across all the measurable parts
in the EM spectrum.
Types of Satellite Platforms
 Spy satellites - optical observation platforms
 Telecommunications - telephone and satellite
TV for example
 Earth observation / weather
 National science platforms (Landsat, Hubble,
International Space Station)
 Navigation and GPS
 Weather Satellite: Geostationary Operational
Environmental Satellite
 Land use and land cover Satellite
Remote sensors
• The instruments used to measure the
electromagnetic radiation reflected/emitted
by the target under study are usually
referred to as remote sensors. There are
two classes of remote sensor:
1. Passive remote sensors.
2. Active remote sensors.
Passive Sensing
• Passive remote sensors: Sensors which
sense natural radiation, either emitted or
reflected from the earth, are called passive
sensors. The sun is a very convenient
source of energy or radiation for remote
sensing. The sun's energy is either
reflected, as it is for visible wavelengths,
or absorbed and then re-emitted, as it is
for thermal infrared wavelengths.
• Remote sensing systems which measure
energy that is naturally available are called
passive sensors. Passive sensors can
only be used to detect energy when the
naturally occurring energy is available. For
all reflected energy, this can only take place
during the time when the sun is illuminating
the Earth. There is no reflected energy
available from the sun at night. Energy that
is naturally emitted (such as thermal
infrared) can be detected day or night, as
long as the amount of energy is large
enough to be recorded.
Active Sensing
• Active remote sensors: Sensors which provide
their own electromagnetic radiation of a specific
wavelength or band of wavelengths to illuminate
the earth's surface are called active sensors.
• Active sensors, on the other hand, provide their
own energy source for illumination. The sensor
emits radiation which is directed toward the
target to be investigated. The radiation reflected
from that target is detected and measured by the
sensor. Advantages for active sensors include
the ability to obtain measurements anytime,
regardless of the time of day or season.
Active sensors can be used for examining
wavelengths that are not sufficiently
provided by the sun, such as microwaves,
or to better control the way a target is
illuminated. However, active systems
require the generation of a fairly large
amount of energy to adequately illuminate
targets. Some examples of active sensors
are a laser fluorosensor and a synthetic
aperture radar (SAR).
Orbits and swaths
• Many satellites are designed to follow a
north-south orbit which, in conjunction
with the earth’s rotation (west-east),
allows them to cover most of the earth’s
surface over a period of time. These are
Near-polar orbits. Many of these satellites
orbits are also Sun-synchronous such
that they cover each area of the world at a
constant local time of day. Near-polar
orbits also mean that the satellite travels
northward on one side of the earth and
southward on the second half of its orbit.
• These are called
Ascending and
Descending passes.
• The surface directly
below the satellite is
called the Nadir point.
Steerable sensors on
satellites can view an
area (off nadir) before
and after the orbit
passes over a target.
Satellite sensor characteristics
• The basic functions of most satellite sensors
are to collect information about the reflected
radiation along a pathway, also known as the
field of view (FOV), as the satellite orbits the
Earth.
• The smallest area of ground that is sampled is
called the instantaneous field of view (IFOV).
The IFOV is also described as the pixel size of
the sensor. This sampling or measurement
occurs in one or many spectral bands of the
EM spectrum.
• Several remote sensing satellites are
currently available, providing imagery
suitable for various types of applications.
Each sensor is characterized by
the wavelength bands employed in
image acquisition, such as:
Spectral resolution,
radiometric resolution,
spatial resolution, and
temporal resolution.
Characteristics of sensors
Resolution
• Resolution is a broad term commonly
used to describe:
– the number of pixels you can display on
a display device, or
– the area on the ground that a pixel
represents in an image file.
• These broad definitions are inadequate when describing
remotely sensed data. Four distinct types of resolution
must be considered:
 spectral—the specific wavelength intervals that a
sensor can record
 spatial—the area on the ground represented by each
pixel
 radiometric—the number of possible data file values
in each band (indicated by the number of bits into
which the recorded energy is divided)
 temporal—how often a sensor obtains imagery of a
particular area
• These four domains contain separate information that
can be extracted from the raw data.
Spectral resolution
• Spectral resolution refers to the specific wavelength intervals
in the electromagnetic spectrum that a sensor can record
(Simonett et al, 1983). For example, band 1 of the Landsat
TM sensor records energy between 0.45 and 0.52 μm in the
visible part of the spectrum.
• Wide intervals in the electromagnetic spectrum are referred to
as coarse spectral resolution, and narrow intervals are
referred to as fine spectral resolution. For example, the SPOT
panchromatic sensor is considered to have coarse spectral
resolution because it records EMR between 0.51 and 0.73
μm. On the other hand, band 3 of the Landsat TM sensor has
fine spectral resolution because it records EMR between 0.63
and 0.69 μm (Jensen, 1996).
Spatial resolution
• Spatial resolution is a measure of the smallest object that
can be resolved by the sensor, or the area on the ground
represented by each pixel (Simonett et al, 1983). The finer
the resolution, the lower the number. For instance, a
spatial resolution of 79 meters is coarser than a spatial
resolution of 10 meters.
– Scale : The terms large-scale imagery and small-scale imagery
often refer to spatial resolution. Scale is the ratio of distance on a
map as related to the true distance on the ground (Star and Estes,
1990).
– Large-scale in remote sensing refers to imagery in which each pixel
represents a small area on the ground, such as SPOT data, with a
spatial resolution of 10 m or 20 m. Small scale refers to imagery in
which each pixel represents a large area on the ground, such as
Advanced Very High Resolution Radiometer (AVHRR) data, with a
spatial resolution of 1.1 km.
– This terminology is derived from the fraction used to
represent the scale of the map, such as 1:50,000.
Small-scale imagery is represented by a small
fraction (one over a very large number). Large-scale
imagery is represented by a larger fraction (one over
a smaller number). Generally, anything smaller than
1:250,000 is considered small-scale imagery.
Spatial Resolution
• Instantaneous Field of View:
• Spatial resolution is also described as the instantaneous field of view
(IFOV) of the sensor, although the IFOV is not always the same as the
area represented by each pixel. The IFOV is a measure of the area
viewed by a single detector in a given instant in time (Star and Estes,
1990). For example, Landsat MSS data have an IFOV of 79 × 79
meters, but there is an overlap of 11.5 meters in each pass of the
scanner, so the actual area represented by each pixel is 56.5 × 79
meters (usually rounded to 57 × 79 meters).
• Even though the IFOV is not the same as the spatial resolution, it is
important to know the number of pixels into which the total field of view
for the image is broken. Objects smaller than the stated pixel size may
still be detectable in the image if they contrast with the background, such
as roads, drainage patterns, etc.
• On the other hand, objects the same size as the stated
pixel size (or larger) may not be detectable if there are
brighter or more dominant objects nearby. In Figure 8, a
house sits in the middle of four pixels. If the house has a
reflectance similar to its surroundings, the data file
values for each of these pixels reflect the area around
the house, not the house itself, since the house does not
dominate any one of the four pixels. However, if the
house has a significantly different reflectance than its
surroundings, it may still be detectable.
Radiometric resolution
• Radiometric resolution refers to the dynamic range, or
number of possible data file values in each band. This is
referred to by the number of bits into which the recorded
energy is divided. For instance, in 8-bit data, the data file
values range from 0 to 255 for each pixel, but in 7-bit
data, the data file values for each pixel range from 0 to
127.
• In Figure 9, 8-bit and 7-bit data are illustrated. The
sensor measures the EMR in its range. The total
intensity of the energy from 0 to the maximum amount
the sensor measures is broken down into 256 brightness
values for 8-bit data, and 128 brightness values for 7-bit
data.
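A minimal sketch of what the bit depth means in practice, with a synthetic signal standing in for the measured energy:

```python
import numpy as np

def quantize(values: np.ndarray, bits: int) -> np.ndarray:
    """Requantize measured energy into 2**bits brightness levels."""
    levels = 2 ** bits                       # 256 for 8-bit, 128 for 7-bit
    x = (values - values.min()) / (values.max() - values.min())
    return np.round(x * (levels - 1)).astype(np.uint16)

signal = np.linspace(0.0, 1.0, 1000)         # stand-in for sensor output
print(quantize(signal, 8).max() + 1)         # 256 possible values (0-255)
print(quantize(signal, 7).max() + 1)         # 128 possible values (0-127)
```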
Temporal resolution
• Temporal resolution is a measure of the repeat
cycle or frequency with which a sensor revisits
the same part of the Earth’s surface. The
frequency will vary from several times per day,
for a typical weather satellite, to 8 to 20 times a
year for a moderate ground resolution satellite,
such as Landsat TM. The frequency
characteristics will be determined by the design
of the satellite sensor and its orbit pattern.
• A panchromatic image consists of only
one band. It is usually displayed as a grey
scale image, i.e. the displayed brightness
of a particular pixel is proportional to the
pixel digital number which is related to the
intensity of solar radiation reflected by the
targets in the pixel and detected by the
detector. Thus, a panchromatic image may
be similarly interpreted as a black-and-
white aerial photograph of the area,
though at a lower resolution.
• Multispectral and hyperspectral images
consist of several bands of data. For
visual display, each band of the image
may be displayed one band at a time as a
grey scale image, or in combination of
three bands at a time as a color
composite image. Interpretation of a
multispectral color composite image will
require the knowledge of the spectral
reflectance signature of the targets in the
scene.
Purposes of Platforms
• Aerial photography
Aerial photography has been used in agricultural
and natural resource management for many
years. These photographs can be black and
white, color, or color infrared. Depending on the
camera, lens, and flying height these images
can have a variety of scales. Photographs can
be used to determine spatial arrangement of
fields, irrigation ditches, roads, and other
features or they can be used to view individual
features within a field.
• SATELLITE IMAGE
• Infrared images can detect stress in crops
before it is visible with the naked eye. Healthy
canopies reflect strongly in the infrared spectral
range, whereas plants that are stressed will
reflect a dull color. These images can tell a
farmer that there is a problem but do not tell
him what is causing the problem. The stress
might be from lack of water, insect damage,
improper nutrition or soil problems, such as
compaction, salinity or inefficient drainage. The
farmer must assess the cause of the stress from
other information. If the dull areas disappear on
subsequent pictures, the stress could have been
lack of water that was eased with irrigation.
• If the stress continues it could be a sign of
insect infestation. The farmer still has to
conduct in-field assessment to identify the
causes of the problem. The development
of cameras that measure reflectance in a
wider range of wavelengths may make it
possible to better quantify plant stress. The
use of these multi-spectral cameras is
increasing, and they will become an
important tool in precision agriculture.
Sources of Free Satellite Image
• GLOVIS: The USGS Global
Visualization Viewer
• NASA EOSDIS Worldview
• GLCF: Landsat Imagery
• Landsat Viewer
• EarthExplorer
Digital Image processing and analysis
• The most common image processing
functions can be placed into the following
four categories:
1. Preprocessing
2. Image Enhancement
3. Image Transformation
4. Image Classification and Analysis
Preprocessing
• Preprocessing functions involve those operations that are
normally required prior to the main data analysis and
extraction of information, and are generally grouped as
radiometric or geometric corrections. Some standard
correction procedures may be carried out in the ground
station before the data is delivered to the user. These
procedures include radiometric correction to correct for
uneven sensor response over the whole image and
geometric correction to correct for geometric distortion due
to Earth's rotation and other imaging conditions (such as
oblique viewing).
Radiometric corrections
• Radiometric correction is a preprocessing
method to reconstruct physically calibrated
values by correcting the spectral errors and
distortions caused by sensors, sun angle,
topography and the atmosphere. The Line
Dropout figure in the next slide shows a typical
system error which results in missing or
defective data along a scan line. Dropped lines
are normally corrected by replacing the line
with the pixel values in the line above or below,
or with the average of the two.
Geometric corrections
• Geometric corrections include correcting for geometric
distortions due to sensor-Earth geometry variations, and
conversion of the data to real world coordinates (e.g.
latitude and longitude) on the Earth's surface. The
systematic or predictable distortions can be corrected by
accurate modeling of the sensor and platform motion
and the geometric relationship of the platform with the
Earth. Unsystematic or random errors, in contrast,
are corrected by performing geometric registration
of the imagery to a known ground coordinate system.
The geometric registration process
can be made in two steps:
• identifying the image coordinates (i.e. row, column) of
several clearly discernible points, called ground control
points (GCPs), in the distorted image and matching them
to their true positions in ground coordinates (e.g. latitude,
longitude measured from a map). Polynomial equations
are used to convert the source coordinates to rectified
coordinates, using 1st- and 2nd-order transformations. The
coefficients of the polynomial are calculated by the least
squares regression method, which helps in relating any
point in the map to its corresponding point in the image.
• resampling: this process is used to
determine the digital values to place in the
new pixel locations of the corrected output
image. There are three common methods
for resampling: nearest neighbour, bilinear
interpolation, and cubic convolution
(Lillesand et al., 2008).
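A minimal sketch of the registration step, assuming four invented GCPs: a 1st-order (affine) polynomial is fitted by least squares and then maps any image (column, row) position to ground coordinates.

```python
import numpy as np

# Hypothetical GCPs: (column, row) in the raw image vs (easting, northing)
# on the ground; all coordinates below are made up for illustration.
image_xy = np.array([[10, 12], [250, 30], [40, 300], [260, 280]], float)
ground_xy = np.array([[500010.0, 899988.0], [500250.0, 899970.0],
                      [500040.0, 899700.0], [500260.0, 899720.0]])

# 1st-order polynomial: X = a0 + a1*col + a2*row (and likewise for Y).
A = np.column_stack([np.ones(len(image_xy)), image_xy])
coef_x, *_ = np.linalg.lstsq(A, ground_xy[:, 0], rcond=None)
coef_y, *_ = np.linalg.lstsq(A, ground_xy[:, 1], rcond=None)

def image_to_ground(col: float, row: float) -> tuple:
    """Apply the fitted affine transform to one image coordinate."""
    v = np.array([1.0, col, row])
    return float(v @ coef_x), float(v @ coef_y)

print(image_to_ground(10, 12))  # should land near the first GCP
```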
Image Enhancement
• Image enhancement is the conversion of the original
imagery to a form that is easier to interpret, improving its
spectral quality for feature extraction or image
interpretation. It is useful to examine the image
histograms before performing any image enhancement.
The x-axis of the histogram is the range of the available
digital numbers, i.e. 0 to 255. The y-axis is the number of
pixels in the image having a given digital number.
Examples of enhancement functions include:
contrast stretching
• contrast stretching to increase the tonal
distinction between various features in a
scene. The most common types of
enhancement are: a linear contrast stretch, a
linear contrast stretch with saturation, and a
histogram-equalized stretch (a code sketch
follows the filtering item below).
• filtering is commonly used to restore imagery
by removing noise, to enhance the imagery for
better interpretation, and to extract features
such as edges and lineaments. The most
common types of filters are: mean, median,
low-pass, high-pass (Fig.), and edge detection.
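Both enhancement families above reduce to simple array operations. A minimal sketch, assuming a single-band NumPy array and an 8-bit display range; the filters use SciPy's standard ndimage functions:

```python
import numpy as np
from scipy import ndimage

def linear_stretch(band: np.ndarray, clip_percent: float = 2.0) -> np.ndarray:
    """Linear contrast stretch with saturation: clip the darkest and
    brightest histogram tails, then rescale to the full 0-255 range."""
    lo, hi = np.percentile(band, [clip_percent, 100.0 - clip_percent])
    stretched = np.clip((band - lo) / (hi - lo), 0.0, 1.0)
    return (stretched * 255.0).astype(np.uint8)

band = np.random.rand(100, 100)                    # stand-in image band
enhanced = linear_stretch(band)

smoothed = ndimage.uniform_filter(band, size=3)    # mean (low-pass) filter
despeckled = ndimage.median_filter(band, size=3)   # median filter
high_pass = band - smoothed                        # high-pass: edge detail
edges = ndimage.sobel(band)                        # simple edge detection
```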
Image Transformation
• Image transformations usually involve combined
processing of data from multiple spectral bands. Arithmetic
operations (i.e. subtraction, addition, multiplication,
division) are performed to combine and transform the
original bands into "new" images which better display or
highlight certain features in the scene. Some of the most
common transforms applied to image data are image
ratioing, which involves ratios or differences of
combinations of two or more bands aimed at enhancing
target features, and principal components analysis (PCA).
The objective of the PCA transformation is to reduce the
dimensionality (i.e. the number of bands) in the data, and
compress as much of the information in the original bands
as possible into fewer bands.
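A minimal sketch of both transforms on a hypothetical six-band stack: a simple band ratio, and a PCA computed from the band covariance matrix:

```python
import numpy as np

# bands: shape (n_bands, rows, cols); random stand-in data for illustration.
bands = np.random.rand(6, 100, 100)

# Band ratioing: element-wise division of two bands.
ratio = bands[3] / (bands[2] + 1e-10)  # small epsilon avoids divide-by-zero

# PCA: rotate the band space so most variance falls in the first components.
flat = bands.reshape(len(bands), -1)            # (n_bands, n_pixels)
flat = flat - flat.mean(axis=1, keepdims=True)  # center each band
cov = np.cov(flat)                              # (n_bands, n_bands)
eigvals, eigvecs = np.linalg.eigh(cov)          # eigenvalues, ascending
order = np.argsort(eigvals)[::-1]               # sort descending
components = (eigvecs[:, order].T @ flat).reshape(bands.shape)
# components[0] now holds the first principal component image.
```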
Image Classification
• Information extraction is the last step toward the final output
of the image analysis. After pre-processing the remotely
sensed data is subjected to quantitative analysis to assign
individual pixels to specific classes. Classification uses
pixels of known identity to classify the remainder of the
image, which consists of pixels of unknown identity.
After classification is complete, it
is necessary to evaluate its accuracy by comparing the
categories on the classified images with the areas of known
identity on the ground. The final result of the analysis
consists of maps (or images), data and a report. These
three components of the result provide the user with full
information concerning the source data, the method of
analysis and the outcome and its reliability.
• There are two basic methods of classification:
supervised and unsupervised classification.
• In supervised classification, the spectral
features of some areas of known land cover
types are extracted from the image. These
areas are known as the "training areas".
Every pixel in the whole image is then
classified as belonging to one of the classes
depending on how close its spectral features
are to the spectral features of the training
areas. Figure 6-15. shows the scheme of
supervised classification.
Fig 6-15
• Training Stage. The analyst identifies the
training area and develops a numerical
description of the spectral attributes of the
class or land cover type (Fig. 6-16.).
During the training stage the location, size,
shape and orientation of the training areas
for each class are determined.
Fig 6-16
• Classification Stage. Each unknown pixel in the image is compared to
the spectral signatures of the thematic classes and labeled as the class
it most closely "resembles" digitally. The mathematical methods most
commonly used in classification are the following:
– Minimum Distance: an unknown pixel is classified by
computing the distance from its spectral position to each of the
category means and assigning it to the class with the closest mean
(see the sketch after this list).
– Parallelepiped Classifier: for each class, estimates of the maximum
and minimum intensity in each band are determined. The
parallelepipeds are constructed so as to enclose the scatter in each
theme. Then each pixel is tested to see if it falls inside any of the
parallelepipeds. This classifier has the limitation that a pixel falling
outside every parallelepiped remains unclassified.
– Maximum Likelihood Classifier: an unknown pixel is classified by
calculating, for each class, the probability that it lies in that class,
and assigning it to the most probable class.
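A minimal sketch of the minimum distance rule from the list above; the class means and pixel values are toy numbers, not real training statistics:

```python
import numpy as np

def minimum_distance_classify(pixels: np.ndarray,
                              class_means: np.ndarray) -> np.ndarray:
    """Assign each pixel to the class whose mean spectral signature is
    closest in Euclidean distance.

    pixels:      (n_pixels, n_bands) spectral values
    class_means: (n_classes, n_bands) means from the training areas
    """
    # Distance from every pixel to every class mean: (n_pixels, n_classes)
    d = np.linalg.norm(pixels[:, None, :] - class_means[None, :, :], axis=2)
    return d.argmin(axis=1)

# Toy example with two bands and two classes:
means = np.array([[0.1, 0.6], [0.4, 0.2]])    # e.g. vegetation vs bare soil
pix = np.array([[0.12, 0.55], [0.38, 0.25]])
print(minimum_distance_classify(pix, means))  # [0 1]
```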
• In unsupervised classification, the computer program
automatically groups the pixels in the image into
separate clusters, depending on their spectral features.
Each cluster will then be assigned a land cover type by
the analyst. This method of classification does not utilize
training data. This classifier involves algorithms that
examine the unknown pixels in the image and aggregate
them into a number of classes based on the natural
groupings or cluster present in the image. The classes
that result from this type of classification are spectral
classes (Fig. 6-17.).
Fig 6-17
• There are several mathematical strategies to represent
the clusters of data in spectral space. For example:
IsoData Clustering (Iterative Self-Organising Data
Analysis Technique). It repeatedly performs an entire
classification and recalculates the statistics. The
procedure begins with a set of arbitrarily defined cluster
means, usually located evenly through the spectral
space. After each iteration new means are calculated
and the process is repeated until there is little
difference between iterations. This method produces
good results even for data that are not normally distributed
and is also not biased by any section of the image.
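A sketch of the iterative core of this procedure, which is essentially a k-means loop; full ISODATA also splits and merges clusters between iterations, which is omitted here:

```python
import numpy as np

def kmeans_cluster(pixels: np.ndarray, k: int, n_iter: int = 20) -> np.ndarray:
    """Iterative core of ISODATA-style clustering: assign pixels to the
    nearest cluster mean, recompute the means, and repeat."""
    pixels = np.asarray(pixels, dtype=float)   # (n_pixels, n_bands)
    rng = np.random.default_rng(0)
    means = pixels[rng.choice(len(pixels), size=k, replace=False)].copy()
    for _ in range(n_iter):
        d = np.linalg.norm(pixels[:, None, :] - means[None, :, :], axis=2)
        labels = d.argmin(axis=1)              # nearest cluster per pixel
        for j in range(k):
            if np.any(labels == j):            # recompute each cluster mean
                means[j] = pixels[labels == j].mean(axis=0)
    return labels

labels = kmeans_cluster(np.random.rand(500, 4), k=5)  # toy 4-band pixels
```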
• The other one is Sequential Clustering. In this method
the pixels are analysed one at a time, pixel by pixel and
line by line. The spectral distance between each
analysed pixel and the previously defined cluster means is
calculated. If the distance is greater than some threshold
value, the pixel begins a new cluster; otherwise it
contributes to the nearest existing cluster, in which case
the cluster mean is recalculated. Clusters are merged if too
many of them are formed, by adjusting the threshold
value of the cluster means.
• Segmentation
• A supervised classification is based on the values of
single pixels and does not utilize the spatial information
within an object. Because of the complexity of surface
features and the limitation of spectral information, the
results of traditional (pixel-based) classification methods
often contain errors and class confusion. Nowadays
there are newer methods based on groups of pixels.
• Segmentation is a process by which pixels are grouped
into segments according to their spectral similarity.
Segment-based classification is an approach that classifies
an image based on these image segments. One
segmentation process employs a watershed delineation
approach to partition input imagery based on its
variance. A derived variance image is treated as a surface
image, allocating pixels to particular segments based on
variance similarity (IDRISI TAIGA). Figure 6-18 shows the
result of segmentation. The results of land cover
classifications derived from remotely sensed data can be
compared in Figure 6-19. The object-oriented
classification produced more accurate results than the
other method.
Fig. 6-18. The result of segmentation using different similarity tolerance
(10,30):the larger the tolerance value, the fewer the image segments in the
output.
Fig. 6-19. The result of the pixel-based and the segment-based classification.
Image processing and analysis
• Many image processing and analysis
techniques have been developed to aid the
interpretation of remote sensing images and to
extract as much information as possible from
the images. The choice of specific techniques
or algorithms to use depends on the goals of
each individual project. The key steps in
processing remotely sensed data are
Digitizing of Images, Image Calibration,
Geo-Registration, and Spectral Analysis.
Data Correction
There are several types of errors that can be
manifested in remotely sensed data. Among
these are line dropout and striping. These
errors can be corrected to an extent in GIS
by radiometric and geometric correction
functions.
• Line Dropout
• Line dropout occurs when a detector either completely
fails to function or becomes temporarily saturated during
a scan (like the effect of a camera flash on a human
retina). The result is a line or partial line of data with
higher data file values, creating a horizontal streak until
the detector(s) recovers, if it recovers.
• Line dropout is usually corrected by replacing the bad
line with a line of estimated data file values. The
estimated line is based on the lines above and below it.
• You can correct line dropout using the 5 × 5 Median
Filter from the Radar Speckle Suppression function. The
Convolution and Focal Analysis functions in the ERDAS
IMAGINE Image Interpreter also correct for line
dropout.
Line Dropout
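A minimal sketch of this correction, assuming the dropped line numbers have already been identified (it implements the neighbor-averaging rule above, not the ERDAS IMAGINE functions just mentioned):

```python
import numpy as np

def repair_line_dropout(band: np.ndarray, bad_rows: list) -> np.ndarray:
    """Replace each dropped scan line with the average of the lines
    above and below it (edge lines are copied from their one neighbor)."""
    fixed = band.astype(float).copy()
    for r in bad_rows:
        above = fixed[r - 1] if r > 0 else fixed[r + 1]
        below = fixed[r + 1] if r < len(fixed) - 1 else fixed[r - 1]
        fixed[r] = (above + below) / 2.0
    return fixed
```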
• Striping
• Striping or banding occurs if a detector goes
out of adjustment—that is, it provides
readings consistently greater than or less
than the other detectors for the same band
over the same ground cover.
• Use ERDAS IMAGINE Image Interpreter or
ERDAS IMAGINE Spatial Modeler for
implementing algorithms to eliminate striping.
The ERDAS IMAGINE Spatial Modeler
editing capabilities allow you to adapt the
algorithms to best address the data.
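As an illustration of the idea (an assumption here, not the specific ERDAS algorithm), a simple destriping step normalizes each detector's lines to the overall image mean:

```python
import numpy as np

def destripe(band: np.ndarray, n_detectors: int = 16) -> np.ndarray:
    """Shift each detector's lines so their mean matches the image mean.
    Assumes scan lines cycle through the detectors in order (as in a
    whiskbroom scanner such as the Landsat TM, with 16 detectors/band)."""
    out = band.astype(float).copy()
    target = out.mean()
    for d in range(n_detectors):
        rows = out[d::n_detectors]             # every n-th line: detector d
        out[d::n_detectors] = rows - rows.mean() + target
    return out
```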
The following slides present each index and its
reference.
Built-up index
NDBI = (SWIR – NIR)/(SWIR + NIR)
This index highlights urban areas, where there is
typically a higher reflectance in the shortwave-infrared
(SWIR) region compared to the near-infrared (NIR)
region. Applications include watershed runoff
predictions and land-use planning.
The NDBI was originally developed for use with Landsat TM
bands 5 and 4. However, it will work with any multispectral
sensor with a SWIR band between 1.55-1.75 µm and a NIR band
between 0.76-0.9 µm.
Reference: Zha, Y., J. Gao, and S. Ni. "Use of Normalized Difference Built-Up Index in
Automatically Mapping Urban Areas from TM Imagery." International Journal of
Remote Sensing 24, no. 3 (2003): 583-594.
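A minimal NumPy sketch of the NDBI as defined above, with the inputs assumed to be co-registered reflectance arrays (e.g. Landsat TM bands 5 and 4):

```python
import numpy as np

def ndbi(swir: np.ndarray, nir: np.ndarray) -> np.ndarray:
    """NDBI = (SWIR - NIR) / (SWIR + NIR); higher values suggest built-up
    areas. A small epsilon guards against division by zero."""
    swir, nir = swir.astype(float), nir.astype(float)
    return (swir - nir) / (swir + nir + 1e-10)
```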
• Vegetation Indices
• Different bands of a multi-spectral image
may be combined to accentuate the
vegetated areas. One such combination is
the ratio of the near-infrared band to the
red band. This ratio is known as the Ratio
Vegetation Index (RVI):
RVI = NIR / Red = (Layer 4 / Layer 3) for
Landsat 5 and 7
• Since vegetation has high NIR reflectance
but low red reflectance, vegetated areas
will have higher RVI values compared to
non-vegetated areas. Another commonly
used vegetation index is the Normalized
Difference Vegetation Index (NDVI)
computed by
NDVI = (NIR - Red)/(NIR + Red)
= (Layer 4 - Layer 3)/(Layer 4 + Layer 3), for Landsat 5 and 7
= (Layer 5 - Layer 4)/(Layer 5 + Layer 4), for Landsat 8
• The NDVI itself thus varies between -1.0 and +1.0.
• NDVI of an area containing a dense vegetation canopy
will tend toward high positive values (say 0.3 to 0.8);
• free-standing water (e.g., oceans, seas, lakes and
rivers), which has a rather low reflectance in both
spectral bands (at least away from shores), results
in very low positive or even slightly negative
NDVI values;
• soils, which generally exhibit a near-infrared spectral
reflectance somewhat larger than the red, tend
to generate rather small positive NDVI
values (say 0.1 to 0.2).
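Both vegetation indices above reduce to a few lines of array arithmetic. A minimal sketch, with hypothetical band arrays as inputs (for Landsat 5/7 pass bands 4 and 3; for Landsat 8, bands 5 and 4):

```python
import numpy as np

def rvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """Ratio Vegetation Index: NIR / Red."""
    return nir.astype(float) / (red.astype(float) + 1e-10)

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """Normalized Difference Vegetation Index, in the range -1 to +1."""
    nir, red = nir.astype(float), red.astype(float)
    return (nir - red) / (nir + red + 1e-10)

# Rough reading of the output (per the values above): dense canopy
# ~0.3-0.8, bare soil ~0.1-0.2, open water near zero or slightly negative.
```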
Water Indices
Several multi-band techniques are used for
water extraction; these are the NDWI, MNDWI,
NDMI, WRI, and AWEI. The Normalized
Difference Water Index (NDWI) was developed
to extract water features from Landsat imagery
(McFeeters, 1996).
The NDWI is expressed as follows:
NDWI = (Green − NIR)/(Green + NIR),
where Green is a green band such as TM band 2,
and NIR is a near infrared band such as TM band 4.
In 2006, Xu proposed a modified NDWI,
substituting an MIR band for the NIR band
(Xu, 2006). The modified NDWI can be
expressed as follows:
MNDWI = (Green − MIR)/(Green + MIR),
where MIR is a middle infrared band such as
TM band 5.
Note that Gao (1996) also named an NDWI for
remote sensing but used a different band
composite:
NDWI_Gao = (NIR − MIR)/(NIR + MIR).
Wilson et al. (2002) proposed a Normalized
Difference Moisture Index (NDMI), which has a
band composite identical to Gao's NDWI. This
index contrasts the near-infrared (NIR) band 4,
which is sensitive to the reflectance of leaf
chlorophyll content, with the mid-infrared (MIR)
band 5, which is sensitive to the absorbance of
leaf moisture.
• In addition, the Water Ratio Index,
WRI = (Green + Red)/(NIR + MIR) (Shen
and Li, 2010), and
• the Automated Water Extraction Index,
AWEI = 4 × (Green - MIR) - (0.25 × NIR
+ 2.75 × SWIR) (Feyisa et al., 2014), are
used in surface water extraction and
change detection.
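The water indices above follow the same pattern and can be collected in one minimal sketch; the inputs are assumed to be co-registered reflectance bands, and the AWEI coefficients follow the formula as given above:

```python
import numpy as np

def _nd(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Generic normalized difference with a divide-by-zero guard."""
    a, b = a.astype(float), b.astype(float)
    return (a - b) / (a + b + 1e-10)

def ndwi(green, nir):            # McFeeters (1996)
    return _nd(green, nir)

def mndwi(green, mir):           # Xu (2006)
    return _nd(green, mir)

def ndwi_gao(nir, mir):          # Gao (1996); same composite as the NDMI
    return _nd(nir, mir)

def wri(green, red, nir, mir):   # Shen and Li (2010)
    return (green + red) / (nir + mir + 1e-10)

def awei(green, mir, nir, swir): # Feyisa et al. (2014), as given above
    return 4.0 * (green - mir) - (0.25 * nir + 2.75 * swir)
```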
