
Short Course Remote Sensing 2009

Part 1: Image Analysis for Remote Sensing

Object-based Image Analysis

Dr. Irmgard Niemeyer · Geomonitoring Group · Institute for Mine-Surveying and Geodesy · TU Bergakademie Freiberg
Reiche Zeche, Fuchsmühlenweg 9 · D-09599 Freiberg, Germany
Tel./Fax +49 3731 39-3591/-3601 · irmgard.niemeyer@tu-freiberg.de · www.geomonitoring.tu-freiberg.de
1. Introduction: Advantages of object-based satellite imagery analysis
2. Segmentation
3. Object features
4. Feature extraction
5. Classification
6. Processes
7. Change detection
8. Examples

1. Advantages of object-based imagery analysis

What if we had
• the borders of real-world objects,
• thematic information on land register,
• a surface model?

1. Advantages of object-based imagery analysis
Human and computer vision
Human visual system:
⇑ identification and understanding of objects and their context
⇓ estimation of grey values, distances and areas

Digital image processing system:
⇓ identification and understanding of objects and their context
⇑ estimation of grey values, distances and areas

→ "Enhancement" of the human eye?
→ Improvement of software techniques towards human image understanding?!
1. Advantages of object-based imagery analysis
Pixel-based image analysis

[Figure: multispectral reflection signature vs. wavelength]

Image matrix X = (x_ijn), with
i = 1 … I rows,
j = 1 … J lines/pixels,
n = 1 … N bands
1. Advantages of object-based imagery analysis
Pixel-based image analysis
Multispectral, N-dimensional feature space: the feature space spanned by bands 1 … N is used for classification.

[Figure: clusters of vegetation, soils and water in the Band 1/Band 2 feature space; Albertz 2001, modified]
1. Advantages of object-based imagery analysis

Pixel-based classification
of high resolution images?
© Definiens

1. Advantages of object-based imagery analysis

Object-based approaches: hierarchical object levels

[Figure: pixel level → object level 1 → object level 2 → object level 3; Baatz et al. 2000]
1. Advantages of object-based imagery analysis

http://www.definiens.com

1. Advantages of object-based imagery analysis
• Beyond purely spectral information, image objects contain a lot of
additional attributes which can be used for classification: shape,
texture and—operating over the network—a whole set of relational /
contextual information.
• Multiresolution segmentation separates adjacent regions in an image as
long as they are significantly contrasted—even when the regions
themselves are characterized by a certain texture or noise.
• Thus, even textured image data can be analyzed.
• Each classification task has its specific scale. Only image objects of an
appropriate resolution permit analysis of meaningful contextual
information.
• Multiresolution segmentation provides the possibility to easily adapt
image object resolution to specific requirements, data and tasks.
• Homogeneous image objects provide a significantly increased signal-to-noise ratio, compared to single pixels, for the attributes used in classification.
[eCognition User Guide 4, Definiens 2004]
1. Advantages of object-based imagery analysis
• Thus, independent of the multitude of additional information, the
classification is more robust.
• Segmentation drastically reduces the sheer number of units to be
handled for classification. Even if a lot of intelligence is applied to the
analysis of each single image object, the classification works relatively
fast.
• Using the possibility to produce image objects in different resolutions,
a project can contain a hierarchical network with different object
levels of different resolutions. This structure represents image
information on different scales simultaneously.
• Thus, different object levels can be analyzed in relation to each other. For instance, image objects can be classified according to the detailed composition of their sub-objects.
• The object-oriented approach, which first extracts homogeneous regions and then classifies them, avoids the salt-and-pepper effect of spatially finely distributed pixel-based classification results.
[eCognition User Guide 4, Definiens 2004]
2. Segmentation

[Definiens Developer 7 User Guide]
2. Segmentation: Chessboard Segmentation

Split the pixel domain or an image object domain into square image objects: a square grid of fixed size, aligned to the image's left and top borders, is applied to all objects in the domain, and each object is cut along the grid lines.

Result of chessboard segmentation with scale 20

[Definiens Developer 7 Reference Book]
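As a worked illustration, here is a minimal sketch of chessboard segmentation as a label image; the function name and interface are assumptions for this example, not part of the Definiens software.

```python
import numpy as np

def chessboard_segmentation(shape, scale):
    """Assign each pixel the ID of the square tile (side length `scale`)
    it falls into; tiles are aligned to the image's top-left corner."""
    rows, cols = np.indices(shape)
    tiles_per_row = -(-shape[1] // scale)          # ceil(cols / scale)
    return (rows // scale) * tiles_per_row + (cols // scale)

labels = chessboard_segmentation((100, 120), scale=20)
print(labels.max() + 1, "square objects")          # 5 * 6 = 30
```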
2. Segmentation: Quad Tree Based Segmentation
Split the pixel domain or an image object domain into a quad tree grid formed by square objects. A quad tree grid consists of squares whose side lengths are powers of 2, aligned to the image's left and top borders; the grid is applied to all objects in the domain, and each object is cut along the grid lines. The quad tree structure is built such that each square (1) has the maximum possible size and (2) fulfils the homogeneity criterion defined by the mode and scale parameters. A recursive sketch follows below.
[Definiens Developer 7 Reference Book / User Guide]
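A minimal recursive sketch of this idea, assuming a square image with power-of-2 side length and using the grey-value range within a square as a stand-in homogeneity criterion (the software's actual modes differ):

```python
import numpy as np

def quadtree_segments(img, scale, x0=0, y0=0, size=None):
    """Split a square region into four quadrants until each square's
    grey-value range is within `scale`, keeping squares as large as possible."""
    if size is None:
        size = img.shape[0]                     # assumes square, power-of-2 side
    block = img[y0:y0 + size, x0:x0 + size]
    if size == 1 or float(block.max()) - float(block.min()) <= scale:
        return [(x0, y0, size)]                 # homogeneous enough: keep square
    half = size // 2
    segs = []
    for dy in (0, half):                        # otherwise recurse into quadrants
        for dx in (0, half):
            segs += quadtree_segments(img, scale, x0 + dx, y0 + dy, half)
    return segs

img = np.random.randint(0, 256, (64, 64))
print(len(quadtree_segments(img, scale=100)), "square objects")
```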
2. Segmentation: Quad Tree Based Segmentation

Result of quad tree based segmentation with mode color and scale 100

2. Segmentation: Multiresolution Segmentation
Apply an optimization procedure which locally minimizes the average
heterogeneity of image objects for a given resolution. It can be applied on
the pixel level or an image object level domain.
The segmentation procedure works according to the following rules, representing a mutual-best-fitting approach:
• The segmentation procedure starts with single image objects of 1 (one)
pixel size and merges them in several loops iteratively in pairs to larger
units as long as an upper threshold of homogeneity is not exceeded
locally. This homogeneity criterion is defined as a combination of
spectral homogeneity and shape homogeneity. You can influence this
calculation by modifying the scale parameter. Higher values for the scale
parameter result in larger image objects, smaller values in smaller image
objects.
• As the first step of the procedure, the seed looks for its best-fitting
neighbour for a potential merger.

[Definiens Developer 7 User Guide]
2. Segmentation: Multiresolution Segmentation
• If best-fitting is not mutual, the best candidate image object becomes the
new seed image object and finds its best fitting partner.
• When best fitting is mutual, image objects are merged.
• In each loop, every image object in the image object level will be
handled once.
• The loops continue until no further merger is possible; the sketch below illustrates this merge loop.
[Definiens Developer 7 Reference Book]
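To make the mutual-best-fitting rules concrete, here is a runnable toy sketch under strongly simplifying assumptions (one band, a pure colour criterion, scale² as the local threshold); none of these names or data structures come from the Definiens software.

```python
# Regions on a region adjacency graph carry (pixel count, sum, sum of squares);
# the merge cost is the increase in n*variance caused by a merge.

def het(n, s, q):
    return q - s * s / n                      # n * variance of a region

def cost(r1, r2):
    n, s, q = (a + b for a, b in zip(r1, r2))
    return het(n, s, q) - het(*r1) - het(*r2)

def best(r, regions, adj):
    return min(adj[r], key=lambda t: cost(regions[r], regions[t]))

def merge(regions, adj, r, t):
    regions[r] = tuple(a + b for a, b in zip(regions[r], regions[t]))
    for u in adj.pop(t):                      # rewire t's neighbours to r
        adj[u].discard(t)
        if u != r:
            adj[u].add(r)
            adj[r].add(u)
    adj[r].discard(t)
    del regions[t]

def segment(regions, adj, scale):
    merged = True
    while merged:                             # loop until no merger is possible
        merged = False
        for r in list(regions):
            if r not in regions or not adj[r]:
                continue
            t = best(r, regions, adj)
            if best(t, regions, adj) == r and cost(regions[r], regions[t]) < scale ** 2:
                merge(regions, adj, r, t)     # mutual best fit below threshold
                merged = True
    return regions

# Four pixels in a row with values 10, 11, 50, 52: expect two objects
vals = [10.0, 11.0, 50.0, 52.0]
regions = {i: (1, v, v * v) for i, v in enumerate(vals)}
adj = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
print(segment(regions, adj, scale=5))
```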
2. Segmentation: Multiresolution Segmentation

h(s) = w_c · h_c(s) + w_s · h_s(s)

h_c(s) = 1/(n − 1) · Σ_{i=1..n} (x_i − x̄)²

h_s(s) = w_smooth · h_smooth(s) + w_comp · h_comp(s)

h_smooth(s) = l / b
h_comp(s) = l / √n

with c = color, s = shape, comp = compactness,
l = border length, b = perimeter of the object's bounding box, n = number of pixels in the object.
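A direct transcription of these formulas, to show how the weights interact; the function and the example numbers are illustrative assumptions:

```python
import numpy as np

def homogeneity(variance, l, b, n, w_color=0.9, w_compact=0.5):
    """h(s) from the formulas above.
    variance: spectral variance of the object (colour heterogeneity h_c)
    l: border length; b: bounding-box perimeter; n: pixel count.
    Assumes w_s = 1 - w_color and w_smooth = 1 - w_compact."""
    h_smooth = l / b
    h_comp = l / np.sqrt(n)
    h_shape = (1 - w_compact) * h_smooth + w_compact * h_comp
    return w_color * variance + (1 - w_color) * h_shape

# A compact 10x10 square object: l = 40, b = 40, n = 100
print(homogeneity(variance=25.0, l=40, b=40, n=100))
# = 0.9 * 25 + 0.1 * (0.5 * 1.0 + 0.5 * 4.0) = 22.75
```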
2. Segmentation: Multiresolution Segmentation

Result of multiresolution segmentation with scale 50, shape 0.1 and compactness 0.5

2. Segmentation: Multiresolution Segmentation
• Segmentation procedure should produce highly homogeneous segments
for the optimal separation and representation of image regions.
• The average size of image objects must be adaptable to the scale of
interest.
• Resulting image objects should be of more or less the same magnitude.
• Segmentation procedure should be universal and applicable to a large
number of different data types and problems.
• Segmentation results should be reproducible.
• Segmentation procedure should be as fast as possible.

• Region merge
• Fusion of pixels by a criterion of homogeneity.
• Minimisation of the weighted heterogeneity of objects
• Segmentation Parameter: Scale Parameter, Homogeneity criterion
(Color/shape)
2. Segmentation: More Techniques
• Contrast Split Segmentation: Segments an image or an image object into dark and bright regions. The contrast split algorithm segments an image (or image object) based on a threshold that maximizes the contrast between the resulting bright objects (pixels with values above the threshold) and dark objects (pixels with values below the threshold); a threshold-search sketch follows after this list.
• Spectral Difference Segmentation: Merges neighbouring objects according to
their mean layer intensity values. Neighbouring image objects are merged if the
difference between their layer mean intensities is below the value given by the
maximum spectral difference. This algorithm is designed to refine existing
segmentation results.
• Contrast Filter Segmentation: Uses pixel filters to detect potential objects by
contrast and gradient and create suitable object primitives. An integrated
reshaping operation modifies the shape of image objects to help form coherent
and compact image objects. The resulting pixel classification is stored in an
internal thematic layer. Each pixel is classified as one of the following classes: no
object, object in first layer, object in second layer, object in both layers, ignored
by threshold. Finally, a chessboard segmentation is used to convert this thematic layer into an image object level.
[Definiens Developer 7 Reference Book]
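A minimal sketch of the threshold search behind contrast split segmentation, using the difference of group means as the contrast measure (the software offers several measures; this choice and all names are assumptions):

```python
import numpy as np

def contrast_split_threshold(values, n_candidates=64):
    """Scan candidate thresholds and keep the one maximizing the contrast
    between bright (above) and dark (below) pixel groups."""
    candidates = np.linspace(values.min(), values.max(), n_candidates)[1:-1]
    def contrast(t):
        bright, dark = values[values > t], values[values <= t]
        if bright.size == 0 or dark.size == 0:
            return -np.inf
        return bright.mean() - dark.mean()
    return max(candidates, key=contrast)

pixels = np.concatenate([np.random.normal(40, 5, 500),     # dark region
                         np.random.normal(180, 10, 500)])  # bright region
print(contrast_split_threshold(pixels))                     # between the modes
```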
2. Segmentation: Object levels

[Definiens Developer 7 User Guide]
2. Segmentation: Recommendations
Produce Image Objects that Suit the Purpose
• Always produce image objects of the biggest possible scale which still
distinguishes different image regions (as large as possible and as fine
as necessary). There is a tolerance concerning the scale of the image
objects representing an area of a consistent classification due to the
equalization achieved by the classification. The separation of different
regions is more important than the scale of image objects.
• Use as much colour criterion as possible while keeping the shape
criterion as high as necessary to produce image objects of the best
border smoothness and compactness. The reason for this is that a high
degree of shape criterion works at the cost of spectral homogeneity.
However, the spectral information is, at the end, the primary
information contained in image data. Using too much shape criterion can
therefore reduce the quality of segmentation results.

[Definiens Developer 7 Reference Book]
3. Object features
Image objects have spectral, shape, and hierarchical characteristics.
These characteristic attributes are called Features in Definiens software.
Features are used as a source of information to define the inclusion-or-
exclusion parameters used to classify image objects.
There are two major types of features:
• Object features are attributes of image objects, for example the area
of an image object.
• Global features are not connected to an individual image object, for
example the number of image objects of a certain class.

[Definiens Developer 7 Reference Book]
3. Object features: Membership functions

[eCognition User Guide 4, Definiens 2004]
3. Object features

• Object Features
• Class-Related Features
• Scene Features
• Process-Related Features
• Customized
• Metadata
• Feature Variables

[Definiens Developer 7 User Guide]
4. Feature extraction: Automation
SEaTH: a semi-automatic feature recognition tool
[Nussbaum et al. 2006, Marpu et al. 2008]
Based on training samples, the feature analysing tool SEaTH (SEparability and THresholds) identifies the significant object features.

Bayes' statistical approach: solving p(x|C1) p(C1) = p(x|C2) p(C2) for x.
4. Feature extraction: Automation

SEaTH: a semi-automatic feature recognition tool

Output:
• Object features providing the optimal separability of the object classes (Jeffries-Matusita distance)
• Feature thresholds for the optimal separability

A sketch of both computations follows below.
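For two classes whose feature distributions are modelled as 1-D Gaussians from the training samples, both outputs can be computed in closed form; this is a hedged sketch of the underlying maths, not SEaTH's actual code:

```python
import numpy as np

def jeffries_matusita(m1, s1, m2, s2):
    """JM separability of two Gaussian class models; ranges over [0, 2],
    values near 2 mean the feature separates the classes well."""
    b = (0.125 * (m1 - m2) ** 2 / ((s1 ** 2 + s2 ** 2) / 2)
         + 0.5 * np.log((s1 ** 2 + s2 ** 2) / (2 * s1 * s2)))   # Bhattacharyya
    return 2.0 * (1.0 - np.exp(-b))

def decision_threshold(m1, s1, m2, s2, p1=0.5, p2=0.5):
    """Solve p(x|C1) p(C1) = p(x|C2) p(C2) for x: taking logs of the two
    Gaussian densities yields a quadratic a x^2 + b x + c = 0."""
    a = 1 / (2 * s2 ** 2) - 1 / (2 * s1 ** 2)
    b = m1 / s1 ** 2 - m2 / s2 ** 2
    c = (m2 ** 2 / (2 * s2 ** 2) - m1 ** 2 / (2 * s1 ** 2)
         + np.log((p1 * s2) / (p2 * s1)))
    if np.isclose(a, 0.0):                     # equal variances: one crossing
        return np.array([-c / b])
    roots = np.roots([a, b, c])
    return roots[np.isreal(roots)].real        # real crossing point(s)

# Hypothetical training statistics of one object feature for two classes
m1, s1 = 0.30, 0.05
m2, s2 = 0.55, 0.08
print("JM distance:", jeffries_matusita(m1, s1, m2, s2))
print("threshold(s):", decision_threshold(m1, s1, m2, s2))
```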
5. Classification
Classification algorithms analyze image objects according to defined criteria and assign each object to the class that best meets these criteria.

When editing processes, you can choose from the following classification algorithms:
• Assign class: assigns a class to image objects with certain features.
• Classification: uses the class description to assign a class.
• Hierarchical classification: uses the class description as well as the hierarchical structure of classes.
• Advanced classification algorithms: designed to perform a specific classification task, such as finding extrema or identifying connections between objects.

6. Processes
• Definiens Developer provides an artificial language for developing
advanced image analysis algorithms. These algorithms use the principles
of object oriented image analysis and local adaptive processing. This is
achieved by processes.
• A single process is the elementary unit of a rule set providing a solution to
a specific image analysis task. Processes are the main working tools for
developing rule sets.
• The main functional parts of a single process are the algorithm and the
image object domain. A single process allows the application of a specific
algorithm to a specific region of interest in the image. All conditions for
classification as well as region of interest selection may incorporate
semantic information.
• Processes may have an arbitrary number of child processes. The process hierarchy formed in this way defines the structure and flow control of the image analysis. Arranging processes containing different types of algorithms allows the user to build a sequential image analysis routine; a minimal structural sketch follows below.
[Definiens Developer 7 User Guide]
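A toy sketch of the process idea (algorithm + image object domain + child processes); every name here is an assumption for illustration, not the Definiens rule-set language:

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class ImageObject:
    features: Dict[str, float]
    label: str = "unclassified"

@dataclass
class Process:
    name: str
    domain: Callable[[ImageObject], bool]      # image object domain: where to run
    algorithm: Callable[[ImageObject], None]   # operation to perform there
    children: List["Process"] = field(default_factory=list)

    def execute(self, objects: List[ImageObject]) -> None:
        for obj in filter(self.domain, objects):
            self.algorithm(obj)
        for child in self.children:            # hierarchy = flow control
            child.execute(objects)

# One child process: assign the class "building" to bright unclassified objects
objs = [ImageObject({"mean_intensity": v}) for v in (40.0, 180.0, 220.0)]
classify = Process(
    name="assign class: building",
    domain=lambda o: o.label == "unclassified" and o.features["mean_intensity"] > 150,
    algorithm=lambda o: setattr(o, "label", "building"),
)
root = Process("root", lambda o: False, lambda o: None, children=[classify])
root.execute(objs)
print([o.label for o in objs])                 # ['unclassified', 'building', 'building']
```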
6. Processes

The algorithm defines the operation the process will perform. This can be
generating image objects, merging or splitting image objects, classifying
objects etc.
• Segmentation algorithms
• Classification algorithms
• Variables operation algorithms
• Reshaping algorithms
• Level operation algorithms
• Interactive operation algorithms
• Sample operation algorithms
• Image layer operation algorithms
• Thematic layer operation algorithms
• Export algorithms
• Workspace automation algorithms
• Process-related operation algorithms
[Definiens Developer 7 User Guide]
6. Processes: Image Object Domain
The image object domain describes the region of interest where the
algorithm of the process will be executed in the image object hierarchy.
Examples for image object domains are the entire image, an image
object level or all image objects of a given class.

[Definiens Developer 7 User Guide]
7. Change Detection

• Change detection is the process of identifying and quantifying temporal differences in the state of an object or phenomenon [Singh 1989].
• When using satellite imagery from two acquisition times, each image pixel or object from the first time is compared with the corresponding pixel or object from the second time in order to derive the degree of change between the two times.
• Most commonly, differences in radiance values are taken as a measure of change.
• A variety of digital change detection techniques has been developed over the past three decades.
• Differences in radiance values indicating significant ("real") changes have to be larger than radiance changes caused by other factors. The aim of pre-processing is therefore to correct radiance differences caused by variations in solar illumination, atmospheric conditions, sensor performance and geometric distortion.
7. Change Detection
Change measure: change of the image pixel
• grey values,
• texture,
• transformed values,
• class membership.

Approaches: changes of
• the spectral or texture pixel values: arithmetic procedures, regression, change vector analysis, ...
• transformed pixel values: principal component analysis, multivariate alteration detection, ...
• the class memberships of the pixels: comparison of classifications, multitemporal classification, ...
7. Change Detection

Change measure: change of the image object
• layer features (mean, stdev, ratio, texture, ...)
• shape features (area, direction, position, ...)
• relations among neighbouring, sub- and super-objects
• object class membership

[Figure: image objects at time 1 and time 2]
7. Change Detection: Object extraction

Given image data from two acquisition times, the segmentation could be carried out

1. on the basis of the bitemporal data set,
2. by applying the segmentation parameters to the image data of one date and assigning the object borders to the image data of the other date,
3. separately for the two times.

[Figure: segmentation levels for time 1 and time 2 under each of the three strategies]
[Niemeyer et al. 2008]
7. Change Detection: Feature extraction

• Common segmentation (1, 2):
  - apparently time-invariant object features (e.g. shape features)
  - time-variant object features (e.g. layer features)
• Separate segmentation (3):
  - time-variant object features
  - but: problems in the extraction of no-change objects due to overall variations
7. Change Detection: Bitemporal segmentation

[Niemeyer et al. 2009, from Listner 2008]
7. Change Detection: Bitemporal segmentation

Procedure
• For testing the plausibility of the merges, Listner (2008) suggested two different techniques, the so-called threshold test and the local best-fitting test.
• Splitting of the segments was done either by global or by universal segment adjustment.
• Implementation: the procedure was programmed in Matlab using the DIPimage toolbox.
7. Change Detection: Bitemporal segmentation

[Figure: global segment adjustment vs. universal segment adjustment; Listner 2008]
7. Change Detection: Bitemporal segmentation

[Figures: bitemporal segmentation results; Niemeyer et al. 2009, from Listner 2008]
7. Change Detection: Combined pixel-/object-based approach

[Figure: pre-processing workflow (OrthoEngine)]

Input data: QuickBird, Esfahan Nuclear Centre
• PAN (0.6 m), MS (2.4 m)
• July 2002 and July 2003
7. Change Detection: Combined pixel-/object-based approach

Very high resolution optical imagery

Pre-processing:
• image-to-image registration by contour matching or image correlation
• radiometric normalisation using no-change pixels
• pan-sharpening by wavelet transformation
Comparison of different fusion techniques

[Figure: 2002 image and pan-sharpened 2003 images using PC spectral sharpening, ARSIS, Gram-Schmidt spectral sharpening and resolution merge]
7. Change Detection: Combined pixel-/object-based approach

[Figure: pre-processed QuickBird images, July 2002 and July 2003 (Credit: DigitalGlobe)]

[Nussbaum & Niemeyer 2007]
7. Change Detection: Combined pixel-/object-based approach
Change detection
Multivariate Alteration Detection (MAD) [Nielsen 2007, Nielsen et al. 1998]

[Figure: time 1 and time 2 images (Credit: DigitalGlobe)]

D = aᵀX − bᵀY

• Canonical correlation analysis, MAD analysis.
• A fully automatic scheme gives regularized iterated MAD variates (MADs), invariant to linear/affine transformations and mutually orthogonal.
• Implemented as an ENVI extension [Canty 2006], download at:
http://www.fz-juelich.de/ief/ief-ste/datapool/page/210/canty_7251.zip
[Figure: MAD components 1 (red), 2 (green), 3 (blue), shown with the automatic threshold, threshold × 2 and threshold × 3]
Multiresolution segmentation on the basis of the 2003 pan-sharpened MS image

[Figure: segmentation levels 1, 3 and 5]
7. Change Detection: Combined pixel-/object-based approach
Object extraction
Chessboard segmentation &
subsequent multiresolution
segmentation

Process flow

[Figure: AST_07 data, pixel level and object level]
7. Change Detection: Combined pixel-/object-based approach
Change information given by the pixel's DN: MAD transformation
Change information given by the object's features

[Figure: MAD 2 (red), MAD 3 (green), MAD 4 (blue). Grey indicates "no change"; different types of changes are represented by different MADs and thus colours.]
Class hierarchy: classification of changes
7. Change Detection: Combined pixel-/object-based approach

ASTER July 2000 / ASTER July 2001

• Classification (using SEaTH for "industrial sites") → thematic information: industrial sites
• Change detection (using MAD) → change information: changes given by MADs 2, 3, 4 (grey: no change; colours: different types of changes)
• Combination → changes within industrial areas

[Niemeyer & Nussbaum 2006]
7. Change Detection: Combined pixel-/object-based approach
Semantic classification using SEaTH [Nussbaum et al. 2006, Marpu et al. 2008]

July 2002 / July 2003
[Figure: object classes "buildings" and "streets" for both dates (Credit: DigitalGlobe)]

[Nussbaum & Niemeyer 2007]


7. Change Detection: Combined pixel-/object-based approach
Semantic classification of changes

(Credit: DigitalGlobe)

[Nussbaum & Niemeyer 2007]


7. Change Detection: Combined pixel-/object-based approach
Semantic classification using SEaTH [Nussbaum et al. 2006, Marpu et al. 2008]

[Nussbaum & Niemeyer 2007]


7. Change Detection: Combined pixel-/object-based approach
MAD transformation and semantic classification of changes

[Nussbaum & Niemeyer 2007]


7. Change Detection based on object features

Workflow: pre-processing → object extraction → feature extraction → change detection → clustering of change objects → post-classification processing

• Input: image data, time 1 and time 2
• Object extraction: segmentation levels
• Feature extraction: feature views, i.e. layer features and shape features
• Change detection and clustering: map of changes
• Post-classification processing: improved map of changes
7. Change Detection based on object features
Multivariate Alteration Detection [Nielsen 2007, Nielsen et al. 1998]

• Linear combination of the intensities of all N object features in the first image, acquired at time t1 and represented by the random vector F:

U = aᵀF = a₁F₁ + a₂F₂ + ... + a_N F_N

• Linear combination of all N object features in the second image, acquired at time t2 and represented by the random vector G:

V = bᵀG = b₁G₁ + b₂G₂ + ... + b_N G_N

• Scalar difference image:

D = U − V = aᵀF − bᵀG

• Determination of a and b such that the positive correlation between U and V is minimized (a generalized eigenvalue problem); a sketch follows below.
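A compact sketch of the MAD computation via canonical correlation analysis; this simplified version (no iteration or regularization, assumed helper names) mirrors the equations above, not the authors' released ENVI/IDL code:

```python
import numpy as np
from scipy.linalg import eigh

def mad_variates(F, G):
    """F, G: (observations x N features) arrays at times t1 and t2.
    Returns the N uncorrelated MAD variates D = U - V, columns ordered by
    decreasing variance (i.e. ascending canonical correlation)."""
    F = F - F.mean(axis=0)
    G = G - G.mean(axis=0)
    n = F.shape[0]
    Sff = F.T @ F / (n - 1)
    Sgg = G.T @ G / (n - 1)
    Sfg = F.T @ G / (n - 1)
    # Generalized eigenproblem: Sfg Sgg^-1 Sgf a = rho^2 Sff a
    rho2, A = eigh(Sfg @ np.linalg.solve(Sgg, Sfg.T), Sff)
    # b = Sgg^-1 Sgf a / rho gives the paired canonical directions
    B = np.linalg.solve(Sgg, Sfg.T) @ A / np.sqrt(np.maximum(rho2, 1e-12))
    return F @ A - G @ B

# Hypothetical object features: 500 unchanged objects plus 20 changed ones
rng = np.random.default_rng(1)
F = rng.normal(size=(520, 4))
G = F + rng.normal(scale=0.1, size=F.shape)
G[:20] += 3.0                        # simulated change in the first 20 objects
D = mad_variates(F, G)
print(D.shape, D.var(axis=0))        # variances decrease across the columns
```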
7. Change Detection based on object features
Multivariate Alteration Detection
• As a consequence, the difference image D contains the maximum spread in its pixel intensities and therefore maximum change information.
• For a given number of bands N, the procedure returns N eigenvalues, N pairs of eigenvectors and N orthogonal (uncorrelated) difference images, referred to as the MAD variates.
• Multivariate autocorrelation factor (MAF) transformation (minimum noise fraction transformation)
• Automatic thresholding (probability mixture model)
7. Change Detection based on object features

Change detection: change information given by object MADs

[Niemeyer et al. 2009]

7. Change Detection based on object features

Unsupervised classification of changes

• The MAD transformation enhances different types of changes within the object level rather than classifying them.
• Clustering by fuzzy maximum likelihood estimation (FMLE) (Gath and Geva, 1989).
• Advantage: FMLE can form elongated clusters and clusters of widely varying memberships.
• The fuzzy cluster membership of an object calculated in FMLE is the a-posteriori probability p(C|f) of an object (change) class C, given the object feature f. A clustering sketch follows below.

[Figure: MAD transformation of objects with automatic thresholds; MAD 1 (R), MAD 2 (G), MAD 4 (B)]
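FMLE is closely related to expectation-maximization for a Gaussian mixture with full covariance matrices (which is what permits elongated clusters); as a hedged stand-in for the Gath-Geva algorithm, a mixture model already yields the posterior memberships p(C|f):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Hypothetical MAD variates of image objects (n_objects x n_variates)
rng = np.random.default_rng(0)
mads = np.vstack([rng.normal(0.0, 1.0, (300, 3)),    # no-change cluster
                  rng.normal(4.0, 1.5, (40, 3))])    # one change cluster

# Full covariances allow elongated, differently shaped clusters
gmm = GaussianMixture(n_components=6, covariance_type="full",
                      random_state=0).fit(mads)
labels = gmm.predict(mads)             # hard assignment to a change class
posteriors = gmm.predict_proba(mads)   # fuzzy memberships p(C|f) per object
print(posteriors.shape)                # (340, 6)
```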
7. Change Detection based on object features
MAD transformation based on object features: layer features vs. shape features

[Figure: MAF/MAD components 3 (red), 4 (green) and 5 (blue); grey indicates no change, different colours indicate different types of changes]
7. Change Detection based on object features
FMLE classification of MAD components: layer features vs. shape features

[Figure: clustering for 6 classes]
References
Bachmann, F., Marpu, P.R. & Niemeyer, I., 2008: An Architecture based on Neural Networks. MatGeoS 1st Workshop on Mathematical Geosciences, Freiberg (Germany), June 2008.

Blaschke, T., Lang, S. & Hay, G. (eds.), 2008: Object-Based Image Analysis. Spatial Concepts for Knowledge-Driven Remote Sensing Applications. Lecture Notes in Geoinformation and Cartography. Springer, Berlin.

Canty, M.J., 2006: Image Analysis, Classification and Change Detection in Remote Sensing, with Algorithms for ENVI/IDL. Taylor & Francis, CRC Press.

Definiens, 2004: eCognition User Guide 4. Definiens, Munich.

Definiens, 2008: Definiens Developer 7 Reference Book. Definiens, Munich.

Definiens, 2008: Definiens Developer 7 User Guide. Definiens, Munich.

John, A., 2008: Statistische Detektion und Analyse von Bildobjekt- und Bildelementveränderungen. Research Paper, Geomonitoring Group, Institute of Mine-Surveying and Geodesy, TU Bergakademie Freiberg.

Kristinsdóttir, B., 2008: Implications of Invariant Moments for Texture Analysis, Segmentation and Classification. Diploma Thesis, Geomonitoring Group, Institute of Mine-Surveying and Geodesy, TU Bergakademie Freiberg.

Listner, C., 2008: Bildsegmentierung für die objektbasierte Änderungsdetektion digitaler Satellitenbilder. Master Thesis, Geomonitoring Group, Institute of Mine-Surveying and Geodesy, TU Bergakademie Freiberg.

Marpu, P.R., Nussbaum, S., Niemeyer, I. & Gloaguen, R., 2008a: A Procedure for Automatic Object-based Classification. In: Blaschke, T., Lang, S. & Hay, G. (eds.), Object-Based Image Analysis. Spatial Concepts for Knowledge-Driven Remote Sensing Applications. Lecture Notes in Geoinformation and Cartography. Springer, Berlin, pp. 169-184.

Marpu, P.R., Bachmann, F. & Niemeyer, I., 2008b: A class-dependent neural network architecture for object-based classification. GEOBIA 2008, Calgary, 6-7 August 2008.

Nielsen, A.A., 2007: The Regularized Iteratively Reweighted MAD Method for Change Detection in Multi- and Hyperspectral Data. IEEE Transactions on Image Processing 16(2), pp. 463-478.

Nielsen, A.A., Conradsen, K. & Simpson, J.J., 1998: Multivariate alteration detection (MAD) and MAF processing in multispectral, bitemporal image data: New approaches to change detection studies. Remote Sensing of Environment 64, pp. 1-19.

Niemeyer, I., Bachmann, F., Bratskikh, A., John, A., Kristinsdóttir, B. & Listner, C., 2009: Techniques for object-based image analysis. In: Publikationen der Deutschen Gesellschaft für Photogrammetrie, Fernerkundung und Geoinformation (DGPF) e.V., Band 17 (in print).

Niemeyer, I., Marpu, P.R. & Nussbaum, S., 2008: Change Detection using Object Features. In: Blaschke, T., Lang, S. & Hay, G. (eds.), Object-Based Image Analysis. Spatial Concepts for Knowledge-Driven Remote Sensing Applications. Lecture Notes in Geoinformation and Cartography. Springer, Berlin, pp. 185-201.

Niemeyer, I. & Nussbaum, S., 2006: Change detection - the potential for nuclear safeguards. In: Avenhaus, R., Kyriakopoulos, N., Richard, M. & Stein, G. (eds.), Verifying Treaty Compliance. Limiting Weapons of Mass Destruction and Monitoring Kyoto Protocol Provisions. Springer, Berlin, pp. 335-348.

Nussbaum, S. & Niemeyer, I., 2007: Automated extraction of change information from multispectral satellite imagery. ESARDA Bulletin 36, pp. 19-25.

Nussbaum, S., Niemeyer, I. & Canty, M.J., 2006: SEaTH - A new tool for automated feature extraction in the context of object-based image analysis. In: Proc. 1st International Conference on Object-based Image Analysis (OBIA 2006), Salzburg, 4-5 July 2006, ISPRS Vol. XXXVI-4/C42.

Singh, A., 1989: Digital change detection techniques using remotely-sensed data. International Journal of Remote Sensing 10(6), pp. 989-1002.
