[Figure: grid of image pixels, each labelled pure or mixed]
Matched Filtering
Often called partial unmixing.
There is no need to find the spectra of all endmembers in the
scene to get an accurate analysis.
Originally developed to compute abundances of
targets that are relatively rare in the scene.
Matched filtering filters the input image for good
matches to a chosen target spectrum by maximizing
the response of the target spectrum within the data
and suppressing the response of everything else.
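As an illustration, a simple covariance-based matched filter can be sketched in Python. This is a minimal sketch, not the exact formulation used in any particular software package; the normalization so that a pixel equal to the target scores 1 follows the common matched-filter convention, and all names are illustrative:

```python
import numpy as np

def matched_filter(image, target):
    """Score each pixel for its match to `target`.

    image  : (num_pixels, num_bands) array of pixel spectra
    target : (num_bands,) target spectrum
    Returns scores near 1 for pixels matching the target and
    near 0 for typical background pixels.
    """
    mu = image.mean(axis=0)               # background mean spectrum
    cov = np.cov(image, rowvar=False)     # background covariance
    cov_inv = np.linalg.inv(cov)
    d = target - mu
    denom = d @ cov_inv @ d               # normalization: target scores 1
    return (image - mu) @ cov_inv @ d / denom

# Synthetic scene: 500 background pixels plus one target pixel.
rng = np.random.default_rng(0)
background = rng.normal(0.0, 0.1, size=(500, 4))
target = np.array([1.0, 0.5, 0.2, 0.8])
image = np.vstack([background, target])   # last pixel is the target
scores = matched_filter(image, target)
print(round(float(scores[-1]), 2))        # prints 1.0
```

Note how the target's response is maximized while the background, summarized by its mean and covariance, is suppressed.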
Soft Classification
Each pixel may have multiple, partial class
memberships.
It is an alternative to hard classification because of its
ability to deal with mixed pixels.
Membership functions allocate to each pixel a real
value between 0 and 1, i.e. a membership grade.
Sub-pixel-scale information is typically represented in
the output of a soft classification by the strength of
membership a pixel displays to each class.
This is used to reflect the relative proportions of the classes
in the area represented by the pixel.
Soft classifiers
Most common soft classifiers are:
Maximum likelihood classification
Fuzzy c-means
Possibilistic c-means
Noise clustering
Fuzzy set theory based approaches
Artificial neural networks
Decision Trees
The membership grade of a pixel in class m is obtained by normalizing the class likelihoods:

\mu_m(X) = \frac{p_m(X)}{\sum_{j=1}^{c} p_j(X)}

Where
X is a vector of DN values of the unclassified pixel;
p_i is the likelihood of the ith LC class (i = 1 to c), whereas
p_m is the likelihood of LC class m, given by

p_m(X) = -\frac{1}{2} \ln |N_m| - \frac{1}{2} (X - \bar{x}_m)^{T} N_m^{-1} (X - \bar{x}_m)

where N_m and \bar{x}_m are the covariance matrix and mean vector of class m.
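As an illustration, normalized class likelihoods can be sketched in Python. This is a minimal sketch assuming two Gaussian classes with known statistics and using the Gaussian density itself as the likelihood; all names are illustrative:

```python
import numpy as np

def gaussian_likelihood(x, mean, cov):
    """Multivariate normal density N(x; mean, cov)."""
    n = len(mean)
    diff = x - mean
    cov_inv = np.linalg.inv(cov)
    norm = np.sqrt((2 * np.pi) ** n * np.linalg.det(cov))
    return np.exp(-0.5 * diff @ cov_inv @ diff) / norm

def soft_ml_memberships(x, means, covs):
    """Membership grade of pixel x in each class: p_i / sum_j p_j."""
    p = np.array([gaussian_likelihood(x, m, c) for m, c in zip(means, covs)])
    return p / p.sum()

# Two illustrative classes with identity covariance.
means = [np.array([0.0, 0.0]), np.array([3.0, 3.0])]
covs = [np.eye(2), np.eye(2)]
mu = soft_ml_memberships(np.array([1.5, 1.5]), means, covs)
print(mu)  # pixel equidistant from both classes -> equal memberships
```

A pixel midway between the two class means receives a membership grade of 0.5 in each class, reflecting an evenly mixed pixel.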
The fuzzy c-means (FCM) classifier minimizes the objective function:

J_{fcm}(U, V) = \sum_{i=1}^{c} \sum_{k=1}^{N} \mu_{ki}^{m} \, D(x_k, v_i)

where

D(x_k, v_i) = \| x_k - v_i \|_A^2 = (x_k - v_i)^{T} A (x_k - v_i)

subject to the constraints:

\sum_{i=1}^{c} \mu_{ki} = 1 \text{ for all } k; \quad \sum_{k=1}^{N} \mu_{ki} > 0 \text{ for all } i; \quad 0 \le \mu_{ki} \le 1 \text{ for all } k, i

Where
U is the N \times c membership matrix,
V = (v_1, \ldots, v_c) is the collection of the vectors with the information class centers v_i,
\mu_{ki} is the class membership value of pixel x_k in class i,
c and N are the total numbers of information classes and pixels respectively,
A is a weight matrix,
m is a weighting exponent (or fuzzifier), 1 < m < \infty.
From the objective function of the FCM the membership values can be calculated as:

\mu_{ki} = \frac{1}{\sum_{j=1}^{c} \left( \frac{D(x_k, v_i)}{D(x_k, v_j)} \right)^{1/(m-1)}}

where D(x_k, v_j) is the distance of pixel x_k from class center v_j,
and the class centers as:

v_i = \frac{\sum_{k=1}^{N} \mu_{ki}^{m} \, x_k}{\sum_{k=1}^{N} \mu_{ki}^{m}}
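As an illustration, the FCM membership and center updates can be sketched in Python. This is a minimal sketch assuming A is the identity matrix, so D is the squared Euclidean distance; the initialization, iteration count, and all names are illustrative:

```python
import numpy as np

def fcm(X, c, m=2.0, iters=50, seed=0):
    """Fuzzy c-means: returns (U, V) with U of shape (N, c)."""
    rng = np.random.default_rng(seed)
    N = X.shape[0]
    U = rng.random((N, c))
    U /= U.sum(axis=1, keepdims=True)        # memberships sum to 1 per pixel
    for _ in range(iters):
        Um = U ** m
        # Center update: v_i = sum_k u_ki^m x_k / sum_k u_ki^m
        V = (Um.T @ X) / Um.sum(axis=0)[:, None]
        # Distance D(x_k, v_i) = ||x_k - v_i||^2
        D = ((X[:, None, :] - V[None, :, :]) ** 2).sum(axis=2)
        D = np.fmax(D, 1e-12)                # guard against division by zero
        # Membership update: u_ki = 1 / sum_j (D_ki / D_kj)^(1/(m-1))
        inv = D ** (-1.0 / (m - 1))
        U = inv / inv.sum(axis=1, keepdims=True)
    return U, V

# Two well-separated synthetic clusters of 20 pixels each.
X = np.vstack([np.zeros((20, 2)), np.ones((20, 2)) * 5])
U, V = fcm(X, c=2)
print(U.sum(axis=1))   # each pixel's memberships sum to 1
```

Pixels at a cluster center receive a membership close to 1 in that class, while pixels between the centers would receive intermediate grades.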
The possibilistic c-means (PCM) classifier relaxes the sum-to-one constraint and minimizes the objective function:

J_{pcm}(U, V) = \sum_{i=1}^{c} \sum_{k=1}^{N} \mu_{ki}^{m} \, D(x_k, v_i) + \sum_{i=1}^{c} \eta_i \sum_{k=1}^{N} (1 - \mu_{ki})^{m}

From the objective function of the PCM the membership values can be calculated as:

\mu_{ki} = \frac{1}{1 + \left( \frac{D(x_k, v_i)}{\eta_i} \right)^{1/(m-1)}}

where

\eta_i = K \, \frac{\sum_{k=1}^{N} \mu_{ki}^{m} \, D(x_k, v_i)}{\sum_{k=1}^{N} \mu_{ki}^{m}}
The noise clustering (NC) classifier introduces an additional noise class, c+1, assumed to lie at a constant distance \delta^2 from every pixel, and minimizes the objective function:

J_{nc}(U, V) = \sum_{i=1}^{c} \sum_{k=1}^{N} \mu_{ki}^{m} \, D(x_k, v_i) + \sum_{k=1}^{N} \delta^2 \, (\mu_{k,c+1})^{m}

with membership values:

\mu_{ki} = \frac{1}{\sum_{j=1}^{c} \left( \frac{D(x_k, v_i)}{D(x_k, v_j)} \right)^{1/(m-1)} + \left( \frac{D(x_k, v_i)}{\delta^2} \right)^{1/(m-1)}}, \quad 1 \le i \le c

\mu_{k,c+1} = \frac{1}{\sum_{j=1}^{c} \left( \frac{\delta^2}{D(x_k, v_j)} \right)^{1/(m-1)} + 1}
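As an illustration, the NC membership update can be sketched in Python. This is a minimal sketch that treats the noise class as an extra column at the fixed distance \delta^2; the distance and \delta^2 values are illustrative, as are all names:

```python
import numpy as np

def nc_memberships(D, delta2, m=2.0):
    """Memberships for c real classes plus one noise class.

    D      : (N, c) distances of pixels to class centers
    delta2 : fixed noise distance delta^2
    """
    # Append the noise class as a constant-distance column c+1.
    Dall = np.hstack([D, np.full((D.shape[0], 1), delta2)])
    inv = Dall ** (-1.0 / (m - 1))
    return inv / inv.sum(axis=1, keepdims=True)

D = np.array([[0.5, 4.0],
              [9.0, 9.0]])        # second pixel is an outlier
U = nc_memberships(D, delta2=1.0)
print(U.round(2))                 # last column = noise-class membership
```

The memberships still sum to 1 across the c+1 classes, but the outlier pixel is absorbed mostly by the noise class rather than distorting the real class centers.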
Advantages of ANN
They are non-parametric classifiers, i.e. they do not require any assumption about the
statistical distribution of the data.
They offer a high computation rate, achieved through massive parallelism resulting from a
dense arrangement of interconnections (weights) and simple processors (neurones), which
permits real-time processing of very large datasets.
Disadvantages of ANN
ANNs are semantically poor: it is difficult to gain any understanding of how the
result was achieved.
The training of an ANN can be computationally demanding and slow.
ANNs are perceived to be difficult to apply successfully. It is difficult to select the type
of network architecture, the initial values of parameters such as the learning rate and
momentum, the number of iterations required to train the network, and the choice of
initial weights.
Super-resolution Mapping:
Although soft classification is informative and
meaningful, it fails to account for the actual spatial
distribution of class proportions within the pixel.
Super-resolution mapping (or sub-pixel mapping) is a
step forward.
Super-resolution mapping considers the spatial
distribution within and between pixels in order to
produce maps at sub-pixel scale.