Dr. Irmgard Niemeyer · Geomonitoring Group · Institute for Mine-Surveying and Geodesy · TU Bergakademie Freiberg
Reiche Zeche, Fuchsmühlenweg 9 · D-09599 Freiberg, Germany
Tel./Fax +49 3731 39-3591/-3601 · irmgard.niemeyer@tu-freiberg.de · www.geomonitoring.tu-freiberg.de
1. Introduction: Advantages of object-based satellite imagery analysis
2. Segmentation
3. Object features
4. Feature extraction
5. Classification
6. Processes
7. Change detection
8. Examples
1. Advantages of object-based imagery analysis
1. Advantages of object-based imagery analysis
What if we had
• the borders of real-world objects,
• thematic information from the land register,
• a surface model?
1. Advantages of object-based imagery analysis
Human and computer vision
Human visual system:
⇑ identification and understanding of objects and their context
⇓ estimation of grey values, distances and areas
[Figure: multispectral reflection signature over wavelength; image cube with i = 1…I rows, j = 1…J columns/pixels, n = 1…N bands]
1. Advantages of object-based imagery analysis
Pixel-based image analysis
[Figure: multispectral, N-dimensional feature space spanned by Band 1, Band 2, …, Band N, with a pixel cluster labelled "vegetation"]
1. Advantages of object-based imagery analysis
Pixel-based classification of high-resolution images?
© Definiens
1. Advantages of object-based imagery analysis
[Figure: object-based approaches: image object hierarchy from the pixel level through object levels 1, 2 and 3]
[Baatz et al. 2000]
1. Advantages of object-based imagery analysis
http://www.definiens.com
1. Advantages of object-based imagery analysis
• Beyond purely spectral information, image objects contain many additional attributes that can be used for classification: shape, texture and, via the object network, a whole set of relational/contextual information.
• Multiresolution segmentation separates adjacent regions in an image as
long as they are significantly contrasted—even when the regions
themselves are characterized by a certain texture or noise.
• Thus, even textured image data can be analyzed.
• Each classification task has its specific scale. Only image objects of an
appropriate resolution permit analysis of meaningful contextual
information.
• Multiresolution segmentation provides the possibility to easily adapt
image object resolution to specific requirements, data and tasks.
• Homogeneous image objects provide a significantly increased signal-to-noise ratio, compared to single pixels, with respect to the attributes used for classification.
[eCognition User Guide 4, Definiens 2004]
1. Advantages of object-based imagery analysis
• Thus, independent of the multitude of additional information, the
classification is more robust.
• Segmentation drastically reduces the sheer number of units to be
handled for classification. Even if a lot of intelligence is applied to the
analysis of each single image object, the classification works relatively
fast.
• Using the possibility to produce image objects in different resolutions,
a project can contain a hierarchical network with different object
levels of different resolutions. This structure represents image
information on different scales simultaneously.
• Thus, different object levels can be analyzed in relation to each other. For instance, image objects can be classified according to the detailed composition of their sub-objects.
• The object-oriented approach, which first extracts homogeneous regions and then classifies them, avoids the salt-and-pepper effect typical of pixel-wise classification results.
[eCognition User Guide 4, Definiens 2004]
2. Segmentation
[Definiens Developer 7 User Guide]
2. Segmentation: Chessboard Segmentation
Splits the pixel domain or an image object domain into square image objects. A square grid of fixed size, aligned to the image's left and top borders, is applied to all objects in the domain, and each object is cut along the grid lines.
[Definiens Developer 7 Reference Book]
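The grid-cutting step above can be sketched in a few lines. This is an illustrative NumPy version (not the Definiens implementation), labelling each pixel with the square grid cell that contains it:

```python
import numpy as np

def chessboard_segmentation(height, width, object_size):
    """Label image: one unique label per square grid cell
    (grid aligned to the top-left corner, as described above)."""
    rows = np.arange(height) // object_size        # grid row of each pixel
    cols = np.arange(width) // object_size         # grid column of each pixel
    n_cols = -(-width // object_size)              # number of squares per row
    return rows[:, None] * n_cols + cols[None, :]  # unique label per square

labels = chessboard_segmentation(6, 8, object_size=4)  # 2x2 grid of objects
```

Squares at the right and bottom image borders simply come out smaller, matching the fixed-grid behaviour described above.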
2. Segmentation: Quad Tree Based Segmentation
Splits the pixel domain or an image object domain into a quad-tree grid formed by square objects. A quad-tree grid consists of squares with side lengths that are powers of 2, aligned to the image's left and top borders; it is applied to all objects in the domain, and each object is cut along the grid lines. The quad-tree structure is built so that each square first has the maximum possible size and second fulfils the homogeneity criterion as defined by the mode and scale parameter.
[Definiens Developer 7 Reference Book / User Guide]
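A toy version of this splitting rule, assuming a square power-of-2 image and taking the max-min intensity range as the homogeneity measure (the actual mode/scale criteria in the software are richer):

```python
import numpy as np

def quadtree_segment(img, scale):
    """Split a square 2^k x 2^k image into the largest squares whose
    intensity range (max - min) stays within the scale threshold."""
    out = np.zeros(img.shape, dtype=int)
    next_label = [0]

    def split(y, x, size):
        block = img[y:y + size, x:x + size]
        if size == 1 or block.max() - block.min() <= scale:
            out[y:y + size, x:x + size] = next_label[0]  # homogeneous square
            next_label[0] += 1
        else:                                            # cut into quadrants
            h = size // 2
            for dy in (0, h):
                for dx in (0, h):
                    split(y + dy, x + dx, h)

    split(0, 0, img.shape[0])
    return out

img = np.zeros((4, 4)); img[2:, 2:] = 10
labels = quadtree_segment(img, scale=1)  # three flat quadrants + one bright
```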
2. Segmentation: Quad Tree Based Segmentation
Result of quad tree based segmentation with mode color and scale 100
2. Segmentation: Multiresolution Segmentation
Applies an optimization procedure which locally minimizes the average heterogeneity of image objects for a given resolution. It can be applied on the pixel level or an image object level domain.
The segmentation procedure works according to the following rules, representing a mutual-best-fitting approach:
• The segmentation procedure starts with single image objects of 1 (one)
pixel size and merges them in several loops iteratively in pairs to larger
units as long as an upper threshold of homogeneity is not exceeded
locally. This homogeneity criterion is defined as a combination of
spectral homogeneity and shape homogeneity. You can influence this
calculation by modifying the scale parameter. Higher values for the scale
parameter result in larger image objects, smaller values in smaller image
objects.
• As the first step of the procedure, the seed looks for its best-fitting
neighbour for a potential merger.
[Definiens Developer 7 User Guide]
2. Segmentation: Multiresolution Segmentation
• If best-fitting is not mutual, the best candidate image object becomes the
new seed image object and finds its best fitting partner.
• When best fitting is mutual, image objects are merged.
• In each loop, every image object in the image object level will be
handled once.
• The loops continue until no further merger is possible.
[Definiens Developer 7 Reference Book]
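The mutual-best-fitting loop can be illustrated on a 1-D toy problem, where each segment is summarized by its mean value and heterogeneity is just the value difference (a deliberate simplification of the combined spectral/shape criterion):

```python
def best_neighbour(vals, i):
    """Adjacent segment with the smallest value difference to segment i."""
    candidates = [j for j in (i - 1, i + 1) if 0 <= j < len(vals)]
    return min(candidates, key=lambda j: abs(vals[i] - vals[j]))

def mutual_best_fit_pass(vals, scale):
    """One merging loop: fuse neighbouring pairs that choose each other
    as best fit and whose difference stays within the scale threshold."""
    out, i = [], 0
    while i < len(vals):
        j = best_neighbour(vals, i)
        if j == i + 1 and best_neighbour(vals, j) == i \
                and abs(vals[i] - vals[j]) <= scale:
            out.append((vals[i] + vals[j]) / 2)  # merged object mean
            i += 2                               # each object handled once
        else:
            out.append(vals[i])
            i += 1
    return out

merged = mutual_best_fit_pass([1.0, 1.2, 5.0, 5.1], scale=0.5)
```

Repeating such passes until no merge happens corresponds to "the loops continue until no further merger is possible" above.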
2. Segmentation: Multiresolution Segmentation
Result of multiresolution segmentation with scale 50, shape 0.1 and compactness 0.5
2. Segmentation: Multiresolution Segmentation
• Segmentation procedure should produce highly homogeneous segments
for the optimal separation and representation of image regions.
• The average size of image objects must be adaptable to the scale of
interest.
• Resulting image objects should be of more or less the same magnitude.
• Segmentation procedure should be universal and applicable to a large
number of different data types and problems.
• Segmentation results should be reproducible.
• Segmentation procedure should therefore be as fast as possible.
• Region merging
• Fusion of pixels according to a homogeneity criterion
• Minimization of the weighted heterogeneity of objects
• Segmentation parameters: scale parameter, homogeneity criterion (colour/shape)
2. Segmentation: More Techniques
• Contrast Split Segmentation: Segments an image or an image object into dark
and bright regions. The contrast split algorithm segments an image (or image
object) based on a threshold that maximizes the contrast between the resulting
bright objects (consisting of pixels with pixel values above threshold) and dark
objects (consisting of pixels with pixel values below the threshold).
• Spectral Difference Segmentation: Merges neighbouring objects according to
their mean layer intensity values. Neighbouring image objects are merged if the
difference between their layer mean intensities is below the value given by the
maximum spectral difference. This algorithm is designed to refine existing
segmentation results.
• Contrast Filter Segmentation: Uses pixel filters to detect potential objects by
contrast and gradient and create suitable object primitives. An integrated
reshaping operation modifies the shape of image objects to help form coherent
and compact image objects. The resulting pixel classification is stored in an
internal thematic layer. Each pixel is classified as one of the following classes: no
object, object in first layer, object in second layer, object in both layers, ignored
by threshold. Finally a chessboard segmentation is used to convert this thematic
layer into an image object level.
[Definiens Developer 7 Reference Book]
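The threshold search behind contrast split segmentation can be sketched as follows, using the difference of mean intensities as the contrast measure (the software offers several measures; this choice is an assumption for illustration):

```python
import numpy as np

def contrast_split_threshold(img):
    """Return the threshold that maximizes the contrast between bright
    objects (pixels above it) and dark objects (pixels at or below it)."""
    pixels = img.ravel()
    best_t, best_contrast = None, -np.inf
    for t in np.unique(pixels)[:-1]:            # every candidate threshold
        dark = pixels[pixels <= t]
        bright = pixels[pixels > t]
        contrast = bright.mean() - dark.mean()  # assumed contrast measure
        if contrast > best_contrast:
            best_contrast, best_t = contrast, t
    return best_t

t = contrast_split_threshold(np.array([[10, 12], [90, 95]]))
```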
2. Segmentation: Object levels
[Definiens Developer 7 User Guide]
2. Segmentation: Recommendations
Produce Image Objects that Suit the Purpose
• Always produce image objects at the largest possible scale that still
distinguishes different image regions (as large as possible and as fine
as necessary). There is some tolerance in the scale of the image objects
representing a consistently classified area, because the classification
itself has an equalizing effect. The separation of different regions is
more important than the scale of the image objects.
• Use as much colour criterion as possible while keeping the shape
criterion as high as necessary to produce image objects of the best
border smoothness and compactness. The reason is that a high degree of
shape criterion works at the cost of spectral homogeneity. However, the
spectral information is, in the end, the primary information contained in
image data. Using too much shape criterion can therefore reduce the
quality of segmentation results.
[Definiens Developer 7 Reference Book]
3. Object features
Image objects have spectral, shape, and hierarchical characteristics.
These characteristic attributes are called Features in Definiens software.
Features are used as the source of information to define the inclusion-or-exclusion parameters used to classify image objects.
There are two major types of features:
• Object features are attributes of image objects, for example the area
of an image object.
• Global features are not connected to an individual image object, for
example the number of image objects of a certain class.
[Definiens Developer 7 Reference Book]
3. Object features: Membership functions
[eCognition User Guide 4, Definiens 2004]
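As background for the membership functions referenced here: fuzzy classification maps a feature value to a membership degree in [0, 1]. A generic trapezoidal membership function (the parameter values below are invented for illustration) looks like:

```python
def trapezoid(x, a, b, c, d):
    """Membership 0 below a, rising to 1 at b, 1 up to c, falling to 0 at d."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    if x < b:
        return (x - a) / (b - a)   # rising edge
    return (d - x) / (d - c)       # falling edge

# e.g. membership of class "vegetation" as a function of a mean-NDVI feature
m = trapezoid(0.5, a=0.2, b=0.4, c=0.8, d=0.9)
```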
3. Object features
• Object Features
• Class-Related Features
• Scene Features
• Process-Related Features
• Customized
• Metadata
• Feature Variables
[Definiens Developer 7 User Guide]
4. Feature extraction: Automation
SEaTH: A semi-automatic feature recognition tool
[Nussbaum et al. 2006, Marpu et al. 2008]
Based on training samples, the feature-analyzing tool SEaTH (SEparability and THresholds) identifies the significant object features.
4. Feature extraction: Automation
Output:
• Object features providing the optimal separability of the object classes (Jeffries-Matusita distance)
• Feature thresholds for the optimal separability
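The separability measure named above can be written out for the one-dimensional case: assuming normally distributed feature values per class, the Jeffries-Matusita distance follows from the Bhattacharyya distance between the two class distributions:

```python
import math

def jeffries_matusita(m1, v1, m2, v2):
    """JM distance between two 1-D normal class models N(m1, v1), N(m2, v2).
    Ranges from 0 (identical classes) to 2 (completely separable)."""
    # Bhattacharyya distance for two univariate normal distributions
    b = (m1 - m2) ** 2 / (4 * (v1 + v2)) \
        + 0.5 * math.log((v1 + v2) / (2 * math.sqrt(v1 * v2)))
    return 2 * (1 - math.exp(-b))

j = jeffries_matusita(m1=0.1, v1=0.01, m2=0.7, v2=0.02)
```

SEaTH ranks candidate object features by this distance and then derives the decision threshold between the two fitted distributions.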
5. Classification
Classification algorithms analyze image objects according to defined criteria
and assign each of them to the class that best meets these criteria.
When editing processes, you can choose from the following classification
algorithms:
• Assign class: assigns a class to image objects with certain features.
• Classification: uses the class description to assign a class.
• Hierarchical classification: uses the class description as well as the
hierarchical structure of classes.
• Advanced classification algorithms: designed to perform a specific
classification task, such as finding extrema or identifying connections
between objects.
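A minimal sketch of the "assign class" idea: each image object receives the first class whose feature condition it fulfils. All feature names, classes and thresholds here are invented for illustration:

```python
# Hypothetical rule set: (class name, condition on the object's features)
RULES = [
    ("water",      lambda f: f["mean_nir"] < 50),
    ("vegetation", lambda f: f["ndvi"] > 0.4),
    ("built-up",   lambda f: f["rectangular_fit"] > 0.8),
]

def assign_class(features, rules=RULES):
    """Return the first class whose condition the object satisfies."""
    for name, condition in rules:
        if condition(features):
            return name
    return "unclassified"

label = assign_class({"mean_nir": 120, "ndvi": 0.6, "rectangular_fit": 0.3})
```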
6. Processes
• Definiens Developer provides an artificial language for developing
advanced image analysis algorithms. These algorithms use the principles
of object oriented image analysis and local adaptive processing. This is
achieved by processes.
• A single process is the elementary unit of a rule set providing a solution to
a specific image analysis task. Processes are the main working tools for
developing rule sets.
• The main functional parts of a single process are the algorithm and the
image object domain. A single process allows the application of a specific
algorithm to a specific region of interest in the image. All conditions for
classification as well as region of interest selection may incorporate
semantic information.
• Processes may have an arbitrary number of child processes. The process
hierarchy formed in this way defines the structure and flow control of the
image analysis. Arranging processes containing different types of
algorithms allows the user to build a sequential image analysis routine.
[Definiens Developer 7 User Guide]
6. Processes
The algorithm defines the operation the process will perform. This can be
generating image objects, merging or splitting image objects, classifying
objects etc.
• Segmentation algorithms
• Classification algorithms
• Variables operation algorithms
• Reshaping algorithms
• Level operation algorithms
• Interactive operation algorithms
• Sample operation algorithms
• Image layer operation algorithms
• Thematic layer operation algorithms
• Export algorithms
• Workspace automation algorithms
• Process-related operation algorithms
[Definiens Developer 7 User Guide]
6. Processes: Image Object Domain
The image object domain describes the region of interest in the image
object hierarchy on which the algorithm of the process will be executed.
Examples of image object domains are the entire image, an image object
level, or all image objects of a given class.
[Definiens Developer 7 User Guide]
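The pairing of algorithm and image object domain, plus child processes, can be modelled schematically (a simplified illustration, not the Definiens API):

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Process:
    algorithm: Callable                       # operation applied per object
    domain: Callable                          # predicate: is object in ROI?
    children: List["Process"] = field(default_factory=list)

    def execute(self, objects):
        for obj in objects:
            if self.domain(obj):              # restrict to the domain
                self.algorithm(obj)
        for child in self.children:           # then run child processes
            child.execute(objects)

objects = [{"class": "vegetation", "visits": 0},
           {"class": "water", "visits": 0}]
p = Process(algorithm=lambda o: o.update(visits=o["visits"] + 1),
            domain=lambda o: o["class"] == "vegetation")
p.execute(objects)
```

Nesting `Process` instances via `children` mirrors the process hierarchy and flow control described above.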
7. Change Detection
Approaches: Changes of
• the spectral or texture pixel values:
arithmetic procedures, regression, change vector, ...
• transformed pixel values:
principal component analysis, multivariate alteration detection, ...
• the class memberships of the pixels:
comparison of classifications, multitemporal classification, ...
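The first family of approaches (changes of spectral pixel values) includes change vector analysis; a minimal sketch, where the change magnitude is the Euclidean norm of the per-pixel spectral difference vector:

```python
import numpy as np

def change_vector_magnitude(x1, x2):
    """x1, x2: (rows, cols, bands) images of the two dates; returns the
    per-pixel length of the spectral change vector."""
    return np.linalg.norm(x2.astype(float) - x1.astype(float), axis=-1)

mag = change_vector_magnitude(np.zeros((2, 2, 3)), np.ones((2, 2, 3)))
```

Thresholding this magnitude yields a change mask; the direction of the vector can additionally indicate the kind of change.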
7. Change Detection
Change measure: change of the image object
• layer features (mean, stdev, ratio, texture, …)
• shape features (area, direction, position, …)
• relations among neighbouring, sub- and super-objects
• object class membership
[Figure: image objects at time 1 and time 2]
7. Change Detection: Object extraction
[Figure: segmentation levels at time 1 and time 2]
[Niemeyer et al. 2008]
7. Change Detection: Feature extraction
7. Change Detection: Bitemporal segmentation
Procedure
[Figure: bitemporal segmentation procedure]
[Listner 2008]
7. Change Detection: Bitemporal segmentation
[Figure: OrthoEngine]
[Figure: scenes from July 2002 and July 2003]
7. Change Detection: Combined pixel-/object-based approach
Pre-processing
• pan-sharpening by wavelet
transformation
[Figure: comparison of different fusion techniques (PC Spectral Sharpening, ARSIS)]
7. Change Detection: Combined pixel-/object-based approach
(Credit: DigitalGlobe)
time 1, time 2
D = aᵀX − bᵀY
• Canonical correlation, MAD analysis.
• The fully automatic scheme gives regularized iterated MAD variates
(MADs), invariant to linear/affine transformations, orthogonal.
• Implemented as an ENVI extension [Canty 2006], download at:
http://www.fz-juelich.de/ief/ief-ste/datapool/page/210/canty_7251.zip
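The MAD construction sketched above can be written with a plain canonical correlation analysis; a simplified illustration without the iteration/reweighting of the fully automatic scheme:

```python
import numpy as np

def mad_variates(X, Y):
    """X, Y: (n_pixels, n_bands) arrays of the two acquisitions.
    Returns the MAD variates D = a^T X - b^T Y, least correlated pair first."""
    n = X.shape[1]
    Xc, Yc = X - X.mean(0), Y - Y.mean(0)
    C = np.cov(np.hstack([Xc, Yc]).T)
    Cxx, Cxy, Cyy = C[:n, :n], C[:n, n:], C[n:, n:]
    # canonical correlations: eigenproblem for the X-side projections a
    M = np.linalg.solve(Cxx, Cxy) @ np.linalg.solve(Cyy, Cxy.T)
    rho2, A = np.linalg.eig(M)                # squared canonical correlations
    mads = []
    for k in np.argsort(rho2.real):           # low correlation = most change
        a = A[:, k].real.copy()
        b = np.linalg.solve(Cyy, Cxy.T @ a)   # matching Y-side projection
        a /= np.sqrt(a @ Cxx @ a)             # unit variance of a^T X
        b /= np.sqrt(b @ Cyy @ b)             # unit variance of b^T Y
        mads.append(Xc @ a - Yc @ b)
    return np.column_stack(mads)

X = np.random.default_rng(0).normal(size=(200, 3))
mad = mad_variates(X, X.copy())               # identical scenes: no change
```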
[Figure: change masks at the automatic threshold, ×2 and ×3]
[Figure: multiresolution segmentation, Level 1]
7. Change Detection: Combined pixel-/object-based approach
Object extraction: chessboard segmentation & subsequent multiresolution segmentation
[Figure: process flow]
7. Change Detection: Combined pixel-/object-based approach
Change information given by the MAD transformation of the pixel DNs
[Figure: class hierarchy for the classification of changes]
7. Change Detection: Combined pixel-/object-based approach
[Figure: combined pixel-/object-based change detection at industrial sites: thematic information (object classes "buildings" and "streets"), change detection using MAD, change information; segmentation levels and feature views (layer features, shape features) for the image data at time 1 and time 2] (Credit: DigitalGlobe)
7. Change Detection based on object features
Multivariate Alteration Detection [Nielsen 2007, Nielsen et al. 1998]
7. Change Detection based on object features
Multivariate Alteration Detection
• As a consequence, the difference image D
contains the maximum spread in its pixel
intensities and therefore maximum change
information.
• For a given number of bands N, the procedure returns N eigenvalues, N
pairs of eigenvectors and N orthogonal (uncorrelated) difference images,
referred to as the MAD variates.
• Multivariate autocorrelation factor (MAF)
transformation (Minimum noise fraction
transformation)
• Automatic thresholding (probability
mixture model)
7. Change Detection based on object features
• The fuzzy cluster membership of an object calculated in FMLE is the
a-posteriori probability p(C|f) of an object (change) class C, given the
object feature f.
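The a-posteriori membership p(C|f) mentioned above follows from Bayes' rule; a 1-D sketch with Gaussian class models (priors, means and variances would be estimated iteratively by FMLE; the numbers below are invented):

```python
import math

def posterior_memberships(f, classes):
    """classes: list of (prior, mean, variance) per (change) class.
    Returns p(C|f) for each class; the values sum to 1."""
    likelihoods = [
        p * math.exp(-(f - m) ** 2 / (2 * v)) / math.sqrt(2 * math.pi * v)
        for p, m, v in classes
    ]
    total = sum(likelihoods)                 # evidence p(f)
    return [l / total for l in likelihoods]  # Bayes' rule

# two change classes: "no change" around 0, "change" around 3 (MAD units)
m = posterior_memberships(2.5, [(0.7, 0.0, 1.0), (0.3, 3.0, 1.0)])
```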
7. Change Detection based on object features
MAD transformation based on object features
7. Change Detection based on object features
FMLE classification of MAD components
[Figure: FMLE classification results based on layer features and shape features]