TABLE I
ACQUISITION DATES OF LANDSAT 8, SENTINEL-2, AND LANDSAT 7 SCENES
B. Field Data
Training and validation data were collected during different field campaigns, for different purposes, from December 2016 until March 2017. A total of 34 500 sampling points were randomly selected and collected (see Fig. 1) using handheld GPS receivers (Trimble Juno 5d, Garmin eTrex Vista, eTrex 10, and eTrex 20). The UTM/WGS84 projected coordinate system was adopted during acquisition. Of the total, 22 000 points were used during the class identification process and 12 500 points were used for accuracy assessment.

Fig. 1. Study area showing the distribution of land-use classes and sampling points. The inset shows the location of the study area (Manicaland province) in Zimbabwe.
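The paper does not state how the points were partitioned between the two uses; purely for illustration, such a 22 000/12 500 split could be drawn at random (the seed and procedure below are assumptions, not the authors' method):

```python
import numpy as np

# Illustrative only: a random 22 000 / 12 500 partition of the 34 500 points.
rng = np.random.default_rng(seed=42)
indices = rng.permutation(34_500)      # one index per sampling point
train_idx = indices[:22_000]           # class identification
test_idx = indices[22_000:]            # accuracy assessment (12 500 points)
```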
Fig. 2. Generic flowcharts for comparison between pixel-level (left side) and decision-level (right side) fusion processes.
B. Preprocessing

1) Landsat 8 OLI: The downloaded Landsat 8 OLI tiles are radiometrically corrected. Cloud and cloud-shadow removal is implemented using the Fmask algorithm developed and improved by Zhu et al. [45], which is also available in ENVI [46]. Atmospheric correction is applied using the QUick Atmospheric Correction (QUAC) module. Cloud-free scenes are then mosaicked, and areas reserved for wildlife national parks, safaris, forests, and urban areas are masked out using the 2015 land-use shapefile from the Surveyor General's office in Zimbabwe.

2) Sentinel-2 MSI: The Sentinel-2 mission operates a pair of satellites, Sentinel-2A and Sentinel-2B, and therefore achieves a temporal resolution of five days at the equator in cloud-free conditions. Of the downloaded Sentinel-2 Level-1C scenes, only four were usable, with cloud cover below 50%. The Level-1C scenes are first converted to top-of-canopy reflectance using the Sen2cor atmospheric correction module in SNAP before geometric resampling. All bands are then geometrically resampled to 20 m spatial resolution, which is chosen to preserve the red-edge bands.

Cloud and cloud-shadow removal is performed in ENVI using the ready-to-use decision tree of depth three presented by Hollstein et al. [47]. Cloud-free images are mosaicked, and reserved areas where agriculture is not expected to take place are masked out using the 2015 land-use shapefile.

For pixel-level fusion, bands B2, B3, B4, B8, B11, and B12 are further resampled to 30 m in ArcGIS. For decision-level fusion, the base classifications of Sentinel-2 are done on B2, B3, B4, B5, B6, B7, B8, B8a, B11, and B12 (at 20 m resolution).
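For illustration, the nearest-neighbor resampling described above (performed here in SNAP and ArcGIS) can be sketched with the open-source rasterio library; the file names are placeholders, not the paper's data:

```python
import rasterio
from rasterio.enums import Resampling

# Resample a hypothetical 20 m Sentinel-2 SWIR band to 30 m with
# nearest-neighbor, which preserves the original pixel values.
with rasterio.open("S2_B11_20m.tif") as src:
    scale = 20.0 / 30.0                      # 20 m -> 30 m
    new_height = int(src.height * scale)
    new_width = int(src.width * scale)
    data = src.read(
        out_shape=(src.count, new_height, new_width),
        resampling=Resampling.nearest,
    )
    # Adjust the georeferencing transform to the new grid.
    transform = src.transform * src.transform.scale(
        src.width / new_width, src.height / new_height
    )
    profile = src.profile
    profile.update(height=new_height, width=new_width, transform=transform)

with rasterio.open("S2_B11_30m.tif", "w", **profile) as dst:
    dst.write(data)
```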
3) Landsat 7 ETM+: Radiometric correction is applied to the downloaded tiles. Gap filling is applied, and then cloud and shadow removal is done using the Fmask algorithm [45], [46]. Atmospheric correction is applied using the QUick Atmospheric Correction (QUAC) module. Cloud-free tiles are then mosaicked, and the 2015 land-use shapefile from the Surveyor General's office in Zimbabwe is used to mask out areas reserved for national parks and cities. The mosaicked images are then ready for pixel-level merging and ensemble classification.

TABLE II
CORRESPONDING SPECTRAL BANDS OF LANDSAT 8, SENTINEL-2, AND LANDSAT 7 USED FOR PIXEL-BASED FUSION

C. Fusion of Raw Data (Pixel-Level Integration)

According to Schmitt and Zhu [20], complementary integration is the fusion of homogeneous sensor data with different information content to reduce information gaps. Combining sensors with different but complementary measurement ranges, e.g., images with different cloud coverage or complementary spectral ranges, is crucial for creating a single composite image. A resampling process is implemented to achieve data matching, or co-registration, which may be necessary not only in the spatial domain.

Mandanici and Bitelli [48] conducted a preliminary study comparing Sentinel-2 and Landsat 8 imagery for potential combined use; their results revealed that the corresponding Sentinel-2 and Landsat 8 bands have very highly correlated relative spectral response functions (RSRFs), and therefore the pixel-level fusion between the two sensors is justified. Because the RSRFs of the instruments are not identical, however, some differences are expected in the recorded radiometric values.

For this study, the total number of Landsat 8 scenes exceeds that of either Landsat 7 or Sentinel-2, so Landsat 8 automatically becomes the target image, while both Landsat 7 and Sentinel-2 are employed as reference images; Jin et al. [49] define the meanings of target and reference images in detail. It is crucial to note, however, that the Sentinel-2 spectral bands are narrower than the Landsat ones. Table II lists the corresponding spectral bands used in the pixel-level fusion process.
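The body of Table II did not survive extraction. The mapping below is a reconstruction of the standard OLI/MSI/ETM+ band correspondences, consistent with the bands named in this section, and should be checked against the original table:

```python
# Reconstructed Table II (assumption, not recovered from the source):
# standard band pairings of Landsat 8 OLI, Sentinel-2 MSI, Landsat 7 ETM+.
BAND_CORRESPONDENCE = {
    #  region      L8 OLI   S2 MSI   L7 ETM+
    "blue":       ("B2",    "B2",    "B1"),
    "green":      ("B3",    "B3",    "B2"),
    "red":        ("B4",    "B4",    "B3"),
    "nir":        ("B5",    "B8",    "B4"),
    "swir1":      ("B6",    "B11",   "B5"),
    "swir2":      ("B7",    "B12",   "B7"),
}
```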
Fig. 3 illustrates the methodology followed to execute the pixel-level integration. The cloud and cloud-shadow mask derived from the Landsat 8 imagery is used to identify the areas to be filled with cloud-free pixels from the reference images. Landsat 7 and Landsat 8 have the same 30 m spatial resolution, whereas the mosaicked Sentinel-2 data (20 m spatial resolution) are further resampled to 30 m using the nearest neighbor technique prior to fusion. Cloud-free Sentinel-2 and Landsat 7 pixels are introduced into the areas of no information in the Landsat 8 dataset, and the fusion is achieved by creating mosaics [17]. Once the cloud/shadow mask (Landsat 8) and the layover/shadow masks (Landsat 7 and Sentinel-2) have been produced, the fusion is performed.
The quality of the fused image is assessed visually; Fig. 4 exhibits a sample of the mosaic between Landsat and Sentinel-2, where the yellow pixels are Sentinel-2 pixels that replaced the eliminated clouded pixels of the Landsat imagery.

Fig. 4. Cloud-free pixel fusion by mosaicking Landsat and Sentinel-2 imagery.
However, a general rule of thumb says that, prior to fusing the images, it is crucial to first produce the best single-sensor geometry and radiometry (geocoding, filtering, line and edge detection, etc.) and only then fuse the images; any spatial enhancement performed before image fusion will benefit the resulting fused images [17]. Fig. 5 shows the mosaicked (processed) cloud-free images for Landsat 8, Sentinel-2, Landsat 7, and the pixel-level fused raw image. The white gaps indicate a lack of data and the reserved areas indicated in Fig. 1.
Fig. 5. Mosaicked raw images without clouds. (a) Landsat 8. (b) Sentinel-2. (c) Landsat 7. (d) Pixel-level fused image.
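In essence, the mosaicking step is a per-pixel selection: wherever the Landsat 8 mask flags a pixel, the co-registered reference pixel is substituted. A minimal sketch, assuming co-registered 30 m arrays and boolean cloud/shadow masks (all data here are synthetic stand-ins, not the paper's scenes):

```python
import numpy as np

def mosaic_fill(target, target_bad, reference, reference_bad):
    """Replace flagged target pixels with clear reference pixels.

    target, reference: (bands, rows, cols) co-registered reflectance arrays.
    *_bad: (rows, cols) boolean cloud/shadow masks (True = unusable pixel).
    """
    fused = target.copy()
    fill = target_bad & ~reference_bad        # fill only where reference is clear
    fused[:, fill] = reference[:, fill]
    return fused, target_bad & reference_bad  # mask of remaining gaps

# Toy demonstration: Landsat 8 as target; Sentinel-2 and then Landsat 7
# as references, mirroring the order described above.
rng = np.random.default_rng(0)
l8, s2, l7 = (rng.random((6, 100, 100)) for _ in range(3))
l8_bad, s2_bad, l7_bad = (rng.random((100, 100)) < 0.2 for _ in range(3))
fused, gaps = mosaic_fill(l8, l8_bad, s2, s2_bad)
fused, gaps = mosaic_fill(fused, gaps, l7, l7_bad)
```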
In ENVI, the Jeffries–Matusita distance is squared so that it ranges between 0 and 2; consequently, one should take the square root of this output.
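For two classes modeled as multivariate Gaussians, this quantity follows from the Bhattacharyya distance B as JM^2 = 2(1 - exp(-B)). A small sketch of the computation, with hypothetical class statistics standing in for training-ROI estimates:

```python
import numpy as np

def jeffries_matusita(m1, c1, m2, c2):
    """ENVI-style squared Jeffries-Matusita separability (range 0-2) of two
    Gaussian classes given their band-mean vectors and covariance matrices."""
    d = (np.asarray(m1) - np.asarray(m2)).reshape(-1, 1)
    c = 0.5 * (np.asarray(c1) + np.asarray(c2))          # pooled covariance
    b = (float(d.T @ np.linalg.inv(c) @ d) / 8.0         # Bhattacharyya distance
         + 0.5 * np.log(np.linalg.det(c)
                        / np.sqrt(np.linalg.det(c1) * np.linalg.det(c2))))
    return 2.0 * (1.0 - np.exp(-b))

# Hypothetical two-band class statistics:
jm2 = jeffries_matusita([0.20, 0.35], np.eye(2) * 0.01,
                        [0.25, 0.30], np.eye(2) * 0.02)
jm = np.sqrt(jm2)   # the square root recommended above
```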
According to Apan et al. [55], accurate mapping at the crop-species level is possible when the following ideal conditions are met:
1) the study area is limited to a relatively small region where biophysical conditions (mainly soil, water regimes, and topography) are homogeneous;
2) the study area contains no crops planted at significantly different planting dates.
In practice, these two conditions are rare in nature and could perhaps only be found on experimental plots, or when the focus of the study is on within-field variability mapping. Spectral separability distances are expected to correlate with classification results: a higher degree of spectral overlap in the data can lead to poorer accuracy regardless of the number of bands available.

E. Ensemble Classification

This research implements the parallel and concatenation approach (see Fig. 6), in which the classification results generated by the individual classifiers are used as input to the succeeding classifier [36], [51]. Thus, the results obtained through each classifier are passed on to the next classifier until a result is obtained through the final classifier in the chain [51]. The ensemble classifier comprises MLC, Support Vector Machine (SVM), and Spectral Information Divergence (SID).
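A minimal sketch of such a probability-stacking chain, using scikit-learn as a stand-in for the ENVI workflow described here: quadratic discriminant analysis approximates MLC (both are Gaussian maximum-likelihood models), while SID, which has no off-the-shelf scikit-learn equivalent, is omitted and would require a custom estimator:

```python
import numpy as np
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
from sklearn.ensemble import StackingClassifier
from sklearn.svm import SVC

# Synthetic stand-in data: rows are pixel spectra, labels are crop classes.
rng = np.random.default_rng(0)
X = rng.random((500, 10))            # 10 spectral bands
y = rng.integers(0, 4, 500)          # 4 crop classes

ensemble = StackingClassifier(
    estimators=[
        ("mlc", QuadraticDiscriminantAnalysis()),   # Gaussian ML stand-in for MLC
        ("svm", SVC(kernel="rbf", probability=True)),
    ],
    final_estimator=SVC(kernel="rbf"),
    stack_method="predict_proba",    # pass a posteriori probabilities onward
)
ensemble.fit(X, y)
labels = ensemble.predict(X)
```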
The preliminary ensemble-classified images of Landsat 8 and Landsat 7 are integrated more easily since their spatial resolution is the same (30 m). Sentinel-2 is first resampled to a spatial resolution of 30 m using the nearest neighbor technique in ArcMap. Decision-level fusion is then performed on the Landsat 7, Landsat 8, and Sentinel-2 images, which have the same spatial resolution and are accurately aligned.
G. Comparison of Classification Results

Based on the confusion matrices, the overall accuracies are compared, and the statistical significance of the kappa coefficients is determined using a Z-test. A composite measure of producer's and user's accuracies (the F1-score) is calculated for each class to compare the two fusion methods.

The planted areas from the pixel-level and decision-level classification results are also compared, and a regression line is fitted to determine the relationship between the two results.
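The composite measure referred to above is the harmonic mean of producer's accuracy (recall) and user's accuracy (precision) for each class:

```python
def f1_score_class(producers_acc, users_acc):
    """Harmonic mean of producer's (recall) and user's (precision) accuracy."""
    return 2 * producers_acc * users_acc / (producers_acc + users_acc)

# e.g., a class mapped with 80% producer's and 90% user's accuracy:
print(round(f1_score_class(0.80, 0.90), 3))   # 0.847
```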
V. RESULTS

A. Overall Classification Accuracies

The overall accuracies and kappa coefficients obtained from the confusion matrices are used to analyze the quality of the classified data; the results obtained from the three individual base classifiers and from the combinations of integrated classifications are illustrated in Table III. The kappa coefficient considers the whole error matrix rather than only the diagonal elements on which overall accuracy is based.

TABLE III
CLASSIFICATION ACCURACIES OF LANDSAT 8, SENTINEL-2, LANDSAT 7, AND FUSED IMAGES

SVM outperformed the other base classifiers on all four classified datasets. It performed best on the Sentinel-2 images, achieving an accuracy of 93.63%, followed by 78.58% on the pixel-level fused image. MLC achieved 88.52% on the Sentinel-2 dataset, followed by 77.46% on the pixel-level fused image. SVM thus performed slightly better than MLC on all the datasets and has proved to be a powerful tool for discriminating the various major crops grown in summer using remote sensing. According to Zhang et al. [56] and Szuster et al. [57], SVM usually yields slightly better results than MLC for land-cover analysis; the same has been shown by this research. SID is the weakest performer but still performed fairly well, with overall accuracy values between 53% and 60%.

To improve the overall classification accuracy, the base classifiers are combined using their a posteriori class-membership probabilities, which are further classified using SVM. In all the dataset configurations implemented, the highest accuracy is achieved by combining the three classifiers (MLC + SVM + SID). Combining MLC + SVM also improves the overall accuracy values, but they remain slightly lower than those of MLC + SVM + SID. Although SID has the lowest individual accuracy, combining it with either MLC or SVM still effects a slight increase in the ultimate overall accuracy, even though the magnitude of the improvement is small. Generally, the ensemble classifier accomplished better results than its base counterparts. This experiment exploits the idea that diverse classifiers perform differently even when the same training data are used; they can provide complementary information about the patterns to be determined, thereby increasing the effectiveness of the overall recognition process.

Classifying the fused single-composite raw image achieved an accuracy of 82.53% and a kappa coefficient of 0.80, whereas the integration of the MLC + SVM + SID classified thematic maps from Landsat 8, Landsat 7, and Sentinel-2 achieved an overall accuracy of 85.44% with a kappa coefficient of 0.84. The decision-level fused map therefore attained a better result than the pixel-level thematic map. The overall accuracy results show that decision-level data fusion enables better crop identification than classifying a single composite pixel-level integrated image, indicating that fusion at the decision level does not lead to significant data loss; instead, there is an improvement in the overall classification accuracy.

B. Statistical Significance of Kappa Coefficients

The Z-test is a statistical test of the difference in accuracy values; it involves comparing the accuracy of a new classifier against one derived from the application of a standard classifier [58]. The Z-test is performed to assess significant differences between the kappa coefficients.
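A sketch of the pairwise form of this test, assuming each kappa estimate comes with a variance derived from its confusion matrix (the kappa values and variances below are hypothetical, not the paper's):

```python
import math

def kappa_z(k1, var1, k2, var2):
    """Pairwise Z statistic for two independent kappa estimates."""
    return abs(k1 - k2) / math.sqrt(var1 + var2)

# Illustrative numbers only; in practice the variances come from the
# confusion matrices of the two classifications being compared.
z = kappa_z(0.85, 1.0e-4, 0.82, 1.0e-4)
significant = z > 1.96            # 95% confidence level
print(round(z, 2), significant)   # 2.12 True
```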
[24] M. A. Peña, R. Liao, and A. Brenning, "Using spectrotemporal indices to improve the fruit-tree crop classification accuracy," ISPRS J. Photogramm. Remote Sens., vol. 128, pp. 158–169, 2017.
[25] D. Lu and Q. Weng, "A survey of image classification methods and techniques for improving classification performance," Int. J. Remote Sens., vol. 28, no. 5, pp. 823–870, Mar. 2007.
[26] A. Rahman and S. Tasnim, "Ensemble classifiers and their applications: A review," Int. J. Comput. Trends Technol., vol. 10, no. 1, pp. 31–35, Apr. 2014.
[27] T. Matsuyama, "Knowledge-based aerial image understanding systems and expert systems for image processing," IEEE Trans. Geosci. Remote Sens., vol. GE-25, no. 3, pp. 305–316, May 1987.
[28] Y. Lu, "Knowledge integration in a multiple classifier system," Appl. Intell., vol. 6, no. 2, pp. 75–86, 1996.
[29] L. I. Kuncheva and C. J. Whitaker, "Measures of diversity in classifier ensembles," Mach. Learn., vol. 51, no. 2, pp. 181–207, 2003.
[30] N. M. Baba, M. Makhtar, S. Abdullah, and M. K. Awang, "Current issues in ensemble methods and its applications," J. Theoretical Appl. Inf. Technol., vol. 81, no. 2, pp. 266–276, 2015.
[31] R. Ranawana and V. Palade, "Multi-classifier systems: Review and a roadmap for developers," Int. J. Hybrid Intell. Syst., vol. 3, pp. 35–61, 2006.
[32] M. Pal, "Ensemble learning with decision tree for remote sensing classification," World Acad. Sci. Eng. Technol., vol. 36, pp. 258–260, 2007.
[33] R. Polikar, "Ensemble learning," Scholarpedia, vol. 1, no. 1, pp. 1–34, 2009.
[34] X. Liu, A. K. Skidmore, H. van Oosten, and H. Heine, "Integrated classification systems," in Proc. 2nd Int. Symp. Operationalization Remote Sens., Aug. 1999, pp. 16–20.
[35] Y. I. Lu, "Knowledge integration in a multiple classifier system," Appl. Intell., vol. 6, no. 2, pp. 75–86, 1996.
[36] R. Ranawana and V. Palade, "Multi-classifier systems: Review and a roadmap for developers," Int. J. Hybrid Intell. Syst., vol. 3, pp. 35–61, Apr. 2006.
[37] L. Lam, "Classifier combinations: Implementations and theoretical issues," in Multiple Classifier Systems (Lecture Notes in Computer Science, vol. 1857). Berlin, Germany: Springer, 2000, pp. 77–86.
[38] S. Le Hégarat-Mascle, I. Bloch, and D. Vidal-Madjar, "Application of Dempster–Shafer evidence theory to unsupervised classification in multisource remote sensing," IEEE Trans. Geosci. Remote Sens., vol. 35, no. 4, pp. 1018–1031, Jul. 1997.
[39] X. Zhang, X. Feng, and W. Ke, "Integration of classifiers for improvement of vegetation category identification accuracy based on image objects," New Zealand J. Agricultural Res., vol. 50, no. 5, pp. 1125–1133, Dec. 2007.
[40] J. Wang, G. Su, J. Chen, and Y. Moon, "CPGL: A classification method combining PCA and group lasso method," in Proc. IEEE 17th Int. Conf. Image Process., 2010, pp. 4529–4532.
[41] I. Kanellopoulos, G. G. Wilkinson, and J. Megier, "Integration of neural network and statistical image classification for land cover mapping," in Proc. IEEE Int. Geosci. Remote Sens. Symp., 1993, pp. 511–513.
[42] T. K. Ho, J. J. Hull, and S. N. Srihari, "Decision combination in multiple classifier systems," IEEE Trans. Pattern Anal. Mach. Intell., vol. 16, no. 1, pp. 66–75, Jan. 1994.
[43] A. Chemura, O. Mutanga, and J. Odindi, "Empirical modeling of leaf chlorophyll content in coffee (Coffea arabica) plantations with Sentinel-2 MSI data: Effects of spectral settings, spatial resolution, and crop canopy cover," IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens., vol. 10, no. 12, pp. 5541–5550, Dec. 2017.
[44] G. Forkuor, K. Dimobe, I. Serme, and J. E. Tondoh, "Landsat-8 vs. Sentinel-2: Examining the added value of Sentinel-2's red-edge bands to land-use and land-cover mapping in Burkina Faso," GIScience Remote Sens., vol. 55, pp. 1–24, Aug. 2017.
[45] Z. Zhu, S. Wang, and C. E. Woodcock, "Improvement and expansion of the Fmask algorithm: Cloud, cloud shadow, and snow detection for Landsats 4–7, 8, and Sentinel 2 images," Remote Sens. Environ., vol. 159, pp. 269–277, 2015.
[46] Harris Geospatial Solutions, "ENVI calculate cloud mask using Fmask task," 2018. [Online]. Available: http://www.harrisgeospatial.com/docs/envicalculatecloudmaskusingfmasktask.html. Accessed on: Apr. 9, 2018.
[47] A. Hollstein, K. Segl, L. Guanter, M. Brell, and M. Enesco, "Ready-to-use methods for the detection of clouds, cirrus, snow, shadow, water and clear sky pixels in Sentinel-2 MSI images," Remote Sens., vol. 8, no. 666, pp. 1–18, 2016.
[48] E. Mandanici and G. Bitelli, "Preliminary comparison of Sentinel-2 and Landsat 8 imagery for a combined use," Remote Sens., vol. 8, no. 12, p. 1014, 2016.
[49] S. Jin et al., "Automated cloud and shadow detection and filling using two-date Landsat imagery in the USA," Int. J. Remote Sens., vol. 34, no. 5, pp. 1540–1560, Mar. 2013.
[50] P. Ukrainski, "Supervised classification help. ROI separability," 50° North, 2016. [Online]. Available: 50northspatial.org. Accessed on: Dec. 25, 2017.
[51] P. Du, J. Xia, W. Zhang, K. Tan, Y. Liu, and S. Liu, "Multiple classifier system for remote sensing image classification: A review," Sensors, vol. 12, no. 12, pp. 4764–4792, Apr. 2012.
[52] M. J. Canty, Image Analysis, Classification and Change Detection in Remote Sensing, 3rd ed. New York, NY, USA: Taylor & Francis, 2014.
[53] M. J. Canty, Image Analysis, Classification, and Change Detection in Remote Sensing, 2nd ed. New York, NY, USA: Taylor & Francis, 2009.
[54] X. Lin, S. Yacoub, J. Burns, and S. Simske, "Performance analysis of pattern classifier combination by plurality voting," Pattern Recognit. Lett., vol. 24, no. 12, pp. 1959–1969, 2003.
[55] A. Apan, R. Kelly, T. Jensen, D. Butler, W. Strong, and B. Basnet, "Spectral discrimination and separability analysis of agricultural crops and soil attributes using ASTER imagery," in Proc. 11th Australas. Remote Sens. Photogramm. Conf., Brisbane, Australia, Jan. 2002, vol. 2, pp. 396–411.
[56] Y. Zhang, J. Ren, and J. Jiang, "Combining MLC and SVM classifiers for learning based decision making: Analysis and evaluations," Comput. Intell. Neurosci., vol. 2015, pp. 1–8, 2015.
[57] B. W. Szuster, Q. Chen, and M. Borger, "A comparison of classification techniques to support land cover and land use analysis in tropical coastal zones," Appl. Geogr., vol. 31, no. 2, pp. 525–532, Apr. 2011.
[58] K. Jia et al., "Land cover classification of Landsat data with phenological features extracted from time series MODIS NDVI data," Remote Sens., vol. 6, no. 11, pp. 11518–11532, 2014.
[59] Ministry of Agriculture, Mechanisation and Irrigation Development, "First round crop and livestock assessment report (2016–2017)," Harare, Zimbabwe, 2017.
[60] E. Nirmala and V. Vaidehi, "Comparison of pixel-level and feature level image fusion methods," in Proc. Int. Conf. Comput. Sustain. Global Develop., 2015, pp. 743–748.
[61] R. S. Blum, Z. Xue, and Z. Zhang, "An overview of image fusion," in Multi-Sensor Image Fusion and Its Applications, R. S. Blum and Z. Liu, Eds. New York, NY, USA: Taylor & Francis, 2006, pp. 2–35.
[62] Q. Tao and R. Veldhuis, "Hybrid fusion for biometrics: Combining score-level and decision-level fusion," in Proc. IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recognit. Workshops, 2008.

Juliana Useya received the B.Sc. (Hons.) degree in geoinformatics and surveying from the University of Zimbabwe, Zimbabwe, in 2007, and the M.Sc. degree in geoinformatics from the Faculty of ITC, University of Twente, Enschede, The Netherlands, in 2011. She is currently working toward the postgraduate degree in the Department of Remote Sensing and GIS, College of GeoExploration Science and Technology, Jilin University, China. Her research interests include multisensor classification and synthetic aperture radar application techniques.

Shengbo Chen received the Ph.D. degree in earth exploration and information technology from Jilin University, Changchun, China, in 2000. He is a full professor at the College of GeoExploration Science and Technology, Jilin University, China. His current research interests include the application of remote sensing in geology, agriculture, and exploration of the lunar system.