
Accepted Manuscript

Template for high-resolution river landscape mapping using UAV technology

Miloš Rusnák, Ján Sládek, Anna Kidová, Milan Lehotský

PII: S0263-2241(17)30653-X
DOI: https://doi.org/10.1016/j.measurement.2017.10.023
Reference: MEASUR 5025

To appear in: Measurement

Received Date: 26 June 2017


Revised Date: 10 October 2017
Accepted Date: 11 October 2017

Please cite this article as: M. Rusnák, J. Sládek, A. Kidová, M. Lehotský, Template for high-resolution river
landscape mapping using UAV technology, Measurement (2017), doi: https://doi.org/10.1016/j.measurement.
2017.10.023

This is a PDF file of an unedited manuscript that has been accepted for publication. As a service to our customers
we are providing this early version of the manuscript. The manuscript will undergo copyediting, typesetting, and
review of the resulting proof before it is published in its final form. Please note that during the production process
errors may be discovered which could affect the content, and all legal disclaimers that apply to the journal pertain.
Template for high-resolution river landscape mapping using UAV technology

Miloš Rusnák a, Ján Sládek a,b, Anna Kidová a, Milan Lehotský a

Mgr. Miloš Rusnák, PhD., geogmilo@savba.sk (corresponding author)
a Institute of Geography, Slovak Academy of Sciences, Štefániková 49, 814 73 Bratislava, Slovakia

Mgr. Ján Sládek, PhD., geogslad@savba.sk
a Institute of Geography, Slovak Academy of Sciences, Štefániková 49, 814 73 Bratislava, Slovakia
b GEOTECH Bratislava, s.r.o., Černyševského 26, 851 01 Bratislava

Ing. Anna Kidová, PhD., geogkido@savba.sk
a Institute of Geography, Slovak Academy of Sciences, Štefániková 49, 814 73 Bratislava, Slovakia

RNDr. Milan Lehotský, CSc., geogleho@savba.sk
a Institute of Geography, Slovak Academy of Sciences, Štefániková 49, 814 73 Bratislava, Slovakia

Abstract
This paper presents a template for high-resolution mapping of a river landscape by Unmanned Aerial Vehicle (UAV) technology, comprising the following five steps: (i) reconnaissance of the mapped site; (ii) pre-flight field work; (iii) flight mission; (iv) quality check and processing of aerial data; and (v) operations above the processed layers and landform (object) mapping (extraction). A small multirotor UAV (HiSystem Hexakopter XL) equipped with a Sony NEX 6 camera and a standard 16-50 mm lens was used for image capture and for the workflow design application. Images were processed in Agisoft PhotoScan software; georeferencing was ensured with 20 Ground Control Points (GCPs), and 18 check points were used for accuracy assessment. Three imaging methods were combined for 3D model creation of the study area: (i) nadir, (ii) oblique and (iii) horizontal. This minimized geometric error and captured topography under treetop cover and overhanging banks.

Key words: UAV, fluvial landscape mapping, workflow, UAV photogrammetry, data
extraction
1. Introduction
Geomorphological mapping requires the collection of primary landform data and depends on scale and on changes in the studied object's attributes. Landform identification is normally based on an object classification framework using field work and remotely sensed data with temporal and spatial accessibility, flexibility and accuracy [1]. This methodology provides a very powerful tool for landform mapping [2,3,4]. River morphology is one of the most dynamic landscape entities; its mapping therefore requires the generation of precise datasets and topography for linking processes, patterns and spatio-temporal volumetric changes [5,6,7]. Remote sensing techniques combined with field survey thus provide the basic information source in modern fluvial geomorphology [8].
The last decade has seen rapid development in the use and availability of unmanned aerial vehicles (UAVs) for obtaining Earth-surface spatial information [9,10], and various UAV platforms are used in landscape research. While fixed-wing systems are suitable for mapping large areas, multirotors are superior for small, linear and narrow localities, and UAVs equipped with digital cameras, lidar, or combinations of the two [11] are optimal for data acquisition in landscape research at resolutions from 0.5 to 2 cm [6,14,15,16,17]. A digital camera payload is commonly used in UAV technology, and the development of easily applied commercial or open-source software and inexpensive platforms enables topography reconstruction and monitoring at high temporal and spatial resolution [18].
Turner et al. [12] highlighted that UAV photogrammetry is the most suitable technology for real-time or near-real-time landscape monitoring, although a photogrammetrically derived digital elevation model (DEM) is limited by its inability to capture surface topography beneath vegetation [19].
Technological advances in UAVs enable production of high-quality elevation models and orthophotomosaics with spatial resolution down to 10 cm and vertical error within 50 cm (Table 1). The UAV is therefore an ideal tool to assess riparian forest dynamics [20,32] and to analyze post-flood river channel morphology [22,28], planform changes, meander neck cut-offs [27] and lateral channel shifts [21]. UAV photogrammetry generates the high-resolution topographic data essential to detect morphological changes [18,26,28,35], bankfull stages [33], riparian vegetation dynamics and canopy height models [20,22,32]. While it provides excellent data quality, including data for grain size identification [34,36] and topographic reconstruction of submerged channels [6,23,29,30], evaluation of topographic accuracy and error range still requires validation.
Previous research has addressed data acquisition and processing from UAV images [24,25,26,30,31,32], and an appropriate scheme is now required that encompasses all aspects of flight mission logistics, data acquisition and processing, together with their nested, multi-scale and component aspects. Workflow approaches to UAV imaging and data processing consist of the following six main steps: (i) UAV preparation, (ii) calibration, (iii) ground control point (GCP) targeting, (iv) flight mission, (v) image processing and (vi) point cloud and orthophoto processing and analysis [24,25]. Other authors have also emphasized photogrammetric processing [13,37,38] and have designed workflows up to the data post-processing stage for image classification, validation and calculation of submerged topography [6,25,31,32]. In addition, Miřijovský and Langhammer [26] have described the legislative process in a workflow diagram, and this step has become an integral part of UAV planning and drone operation [39].
The aim of this paper is to compile a template for the application of UAV technology in riverine landscape mapping. Our illustrative example, a knickzone reach of a braided-wandering river, incorporates the foregoing concepts. This reach was selected because its landform and land cover diversity enables the combination of different methods in discriminating riverine landscape objects for scalar mapping and assessment.
Table 1
Rivers with different planform settings studied by UAV technology, with platform types and dataset accuracies.
Study | River | Planform setting | UAV platform | Camera | Research | UAV accuracy
Lejot et al. [6] | Drôme River; Ain River (France) | shifting gravel bed rivers | Pixy drone | Canon Powershot G5/Canon EOS 500N Reflex | bathymetric model of river bed | spatial resolution 5-7 cm; vertical residuals ±50 cm
Dunford et al. [20] | Drôme River (France) | shifting gravel bed rivers | Pixy drone | Canon Powershot G5/Canon EOS 5D | riparian forest automatic classification | spatial resolution 6.8-21.8 cm
Vericat et al. [21] | Feshire River (Scotland, UK) | braided gravel bed | SkyHookHelikite balloon | Ricoh Caplio R5 | generation of photo-mosaics for river monitoring | spatial resolution 5-10 cm; errors 25-5 cm
Hervouet et al. [22] | Drôme River (France) | braided | Pixy drone/ULAV paraglider | Canon Powershot G5/Canon EOS 5D/Sony DSLR-A350 | vegetation dynamics | spatial resolution 3-16 cm; RMSE 0.10 m
Flener et al. [23] | Pulmanki River (Finland) | river bend with point bar | Minicopter Maxi-Joker 3DD/Align T-Rex 700E | Nikon D500/Nikon D5100 | creation of seamless topography model combining mobile LiDAR, UAV photography and optical bathymetric modelling | DEM: vertical 0.03/0.05 m (2010/2011), RMSE 0.0152/0.088 m; bathymetry: vertical 0.12/-0.5 m, RMSE 0.221/0.163 m
Flynn and Chapra [24] | The Clark Fork stream (US) | meandering | DJI | GoPro Hero 3 | mapping aquatic vegetation - Cladophora | spatial error 0.3 m
Javernick et al. [25] | Ahuriri River (New Zealand) | braided | Robinson R22 helicopter | Canon + 28 mm lens | complex DEM of river channel (ground and bathymetry) | ground RMSE 0.20 m; bathymetry RMSE 0.31 m
Miřijovský and Langhammer [26] | Javoří Brook (Czech Republic) | meandering | Mikrokopter HexaXL | Canon EOS 500D | morphology changes of channel (volumetric changes) | spatial resolution 2 cm; RMSE 3.7 cm
Miřijovský et al. [27] | Morava River (Czech Republic) | meandering | Mikrokopter HexaXL | Canon EOS 500D | monitoring cut-off of meander | vertical RMSE 0.093 m
Tamminga et al. [28] | Elbow River (Canada) | braided/wandering | Aeryon Scout quadcopter/eBee fixed-wing | Photo 3S camera/Canon IXUS 127 HS | effect of floods on channel morphology (volumetric changes) | spatial resolution 5/4 cm; RMSE (2012) 0.088 m (exposed)/0.098 m (submerged); RMSE (2013) 0.047 m/0.0952 m
Tamminga et al. [29] | Elbow River (Canada) | braided/wandering | Aeryon Scout quadcopter | Photo 3S camera | assessment of reach-scale habitat and morphology (DEM - exposed and submerged areas) | vertical RMSE 8.8 cm (exposed areas), 11.9 cm (submerged)
Woodget et al. [30] | Arrow River; Coledale Beck River (UK) | small stream | Draganflyer X6 | Panasonic Lumix DMC-LX3 | quantitative assessment of submerged channel areas and topographic mapping | mean error from -0.029 to 0.111 m
Casado et al. [31] | Dee River (Wales, UK) | meandering | QuestUAV Q-200 | Sony NEX 7/Panasonic Lumix DMC-LX7 | hydromorphological automatic classification of river channel | spatial resolution 2.5/5/10 cm; RMSE 0.0451/0.1574/3.0574 m
Michez et al. [32] | Salm River; Houille River (Belgium) | - | Gatewing X100 | Ricoh GR3 | riparian forest automatic classification | reprojection error 0.72/0.86 pixel
Niedzielski et al. [33] | Ścinawka River (Poland) | alluvial meandering with cutbanks | swinglet CAM fixed-wing | integrated RGB camera | observing river stages | orthophoto resolution 3 cm
Cook [18] | Daan River (Taiwan) | bedrock gorge | DJI Phantom 2 | Canon IXUS 135/Canon Powershot 4000IS | topography identification and topography changes | mean difference -0.9 cm/-0.85 cm; RMS difference 39.4 cm/45.9 cm
Langhammer et al. [34] | Javoří brook (Czech Republic) | meandering | Mikrokopter HexaXL | Canon EOS 500D | optical digital granulometry | vertical RMSE 0.015/0.021 m
Marteau et al. [35] | Eben Gill stream (England, UK) | - | DJI Phantom I | GoPro Hero 3+ | geomorphic change detection for river restoration monitoring (volumetric analyses) | orthophoto spatial resolution 0.025 m; error 0.025/0.030/0.017 m
Woodget and Austrums [36] | Coledale Beck River (UK) | gravel bed river | Draganflyer X6 | Panasonic Lumix DMC-LX3 | grain size relation to image texture and topographic roughness | spatial resolution 1 cm (orthophoto), 2 cm (DEM)

2. Workflow design
Data acquisition for riverine landscape mapping using a low-cost UAV is divided into five main steps (Fig. 1): (i) reconnaissance of the mapped site - identification of potential problems and dangers in the flight mission and of takeoff and landing points; (ii) pre-flight field work - the placement and targeting of ground control points (GCPs) for precise georeferencing; (iii) flight mission - aerial imaging of the study area; (iv) quality check and processing of aerial data - data accuracy assessment and software data processing; and (v) operations above processed layers and landform (object) mapping (extractions) - comprising visualization, landform identification and morphometric analysis. The essential field research elements of legislative processes and of UAV permissions and regulations were certified prior to the flight; these are designated step zero (0).
Fig. 1. Scheme for mapping riverine landscape. Workflow consists of the following steps: (i)
reconnaissance of the mapped site, (ii) pre-flight field work, (iii) flight mission, (iv) quality
check and processing of aerial data, (v) operations above processed layers and landform
(object) mapping (extractions). Prior to field mapping, it is necessary to check UAV
regulations and permission for field mapping and flight missions (step zero).

2.1. Step 0: Legislation and regulation


UAV mapping requires compliance with legislation and the application of general field survey rules. These include survey permission in protected areas and on private properties, as well as specific requirements for areas such as designated military zones. Provision is also made for the various regulations affecting drone operation, including pilot training courses and licensing.
Stöcker et al. [39] emphasize that UAV legal requirements are an integral part of the
preparation phase for UAV flights and summarize different legislation applications in
individual countries and national and international legislation in the following 6 main
regulation spheres: (1,2) applicability and technical prerequisites that cover regulated UAV
weight, range, control and safety systems; (3) operational limitations center on flight height,
definition of 'non-flight zones', the line of sight and urban zone restrictions; (4) administration
procedures involve flight permission, notification, approval from aviation authorities,
registration number, insurance, image control by military authorities, company permission
and an appropriate license for aerial work; (5) human resource requirements include operator training and licensing; and (6) ethical constraints cover privacy and data protection. Because regulations vary between countries, it is essential to follow pre-flight instructions from aviation authorities and to ensure that the flight mission complies with the applicable regulations, or to commission the flight mission from licensed companies and state institutions.

2.2. Step 1: Reconnaissance of the mapped site


Field reconnaissance and visual assessment of the study area identify the flight course, obstacles and takeoff and landing sites. The area to be mapped and the UAV's technical limitations in operation time and line-of-sight determine the number of flight sectors and the number of take-off/landing points. Electrical wiring and vegetation create fixed obstacles and shield the GPS signal, so the visual survey identifies potential urban and economic risk areas and helps the operator select the flight mode (autonomous/manual), if this is not nationally predefined.

2.3. Step 2: Pre-flight field work


Ground control point (GCP) distribution and accuracy are key factors in precise UAV mapping [13,40,41,42], because this technique is 5 to 10 times more accurate than direct georeferencing by the internal GNSS of the UAV [37]. Three or more GCPs may be required for georeferencing [43,44,45], and errors increase with limited GCP numbers and with distance from the GCPs [17,46]. PhotoScan software requires at least 10 GCPs for model referencing [47], and these should not be distributed in lines or create regular patterns such as equilateral triangles [48]. A further relevant condition for GCP placement is their arrangement at different vertical levels, including river banks, in-channel structures, floodplain benches, terraces and bluffs.
The next paramount feature of pre-flight work is defining the UAV and camera parameter settings based on the flight plan design. Miřijovský and Langhammer [26] recorded the calculation equations for flight height, focal length, scale and ground sample distance. Camera parameters are particularly important; the shutter speed must be set for the prevailing light conditions, at approximately 1/1000 s, to produce sharp images. The camera can be pre-calibrated prior to the flight or self-calibrated during software processing. Difficulties, however, arise from a low number of images, strip-imaging and the sole use of nadir photography, because these can lead to model deformation and the 'doming' effect due to radial distortion [40,49,50]. All these parameters must be considered in flight planning and control to ensure the accuracy of the final topography.
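As a minimal illustration of the flight height and ground sample distance (GSD) relations referred to above, the pinhole-camera scale equation can be sketched as follows; the sensor values used below (23.5 mm sensor width, 4912 px image width, typical of an APS-C camera such as the Sony NEX 6) are illustrative assumptions, not parameters reported in this study.

```python
def ground_sample_distance(sensor_width_m, image_width_px, focal_m, height_m):
    """GSD (m/px) from the pinhole-camera scale relation: GSD = pitch * H / f."""
    pixel_pitch = sensor_width_m / image_width_px
    return pixel_pitch * height_m / focal_m

def flight_height_for_gsd(sensor_width_m, image_width_px, focal_m, gsd_m):
    """Invert the relation to plan the flight height needed for a target GSD."""
    pixel_pitch = sensor_width_m / image_width_px
    return gsd_m * focal_m / pixel_pitch

# Assumed APS-C sensor (23.5 mm wide, 4912 px) with a 16 mm lens
gsd = ground_sample_distance(0.0235, 4912, 0.016, 80.0)  # GSD at 80 m AGL (~2.4 cm)
h = flight_height_for_gsd(0.0235, 4912, 0.016, 0.05)     # AGL needed for a 5 cm GSD
```

Such a pre-flight calculation helps choose the AGL per sector before the mission rather than discovering an inadequate resolution during processing.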

2.4. Step 3: Flight mission


Combining nadir, oblique and horizontal imagery in manual or autonomous flights is recommended to achieve precision and model texture quality (Fig. 2). This combination improves self-calibration and model precision, thus minimizing model deformation [40,50]. The oblique photographs provide the link between nadir imaging and horizontal imaging from the ground or from low UAV flight paths, and ensure the capture of overhanging river banks and bank topography obscured by treetop cover. Sunlight illumination and water surface reflectance also influence flight mission planning and timing.

Fig. 2. Three methods of data acquisition for geometry building and creation of the 3D river landscape model: (a) nadir, (b) oblique and (c) horizontal.

2.5 Step 4: Quality check and processing of aerial data


Image quality control prevents unexpected and inaccurate results from data processing, as well as 3D processing failure caused by inadequate flight lines and image overlap. First, the general coverage, image exposure and sharpness are inspected after each flight mission. Second, a quality-check level ensures software detection of criss-cross image overlap by recording the number of identical points visible in different images. The final check level identifies optical and motion blurring, as does the image blur analysis in Agisoft PhotoScan [47]. Sieberth et al. [51] have previously presented automatic methods for identifying blurred images in image sets. Low-quality images must therefore be excluded from geometric model analysis. The accepted set of photographs then undergoes photogrammetric software processing.
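A common screening heuristic for the blur check described above is the variance of a Laplacian response: sharp imagery produces strong local intensity changes, blurred imagery does not. The sketch below is a deliberately simplified stand-in for the dedicated detection methods of Sieberth et al. [51] and for PhotoScan's own blur analysis; any exclusion threshold would have to be tuned empirically per camera and flight.

```python
import numpy as np

def laplacian_variance(gray):
    """Variance of a 4-neighbour Laplacian over an image; low values suggest blur."""
    g = np.asarray(gray, dtype=float)
    lap = (-4 * g[1:-1, 1:-1]
           + g[:-2, 1:-1] + g[2:, 1:-1]
           + g[1:-1, :-2] + g[1:-1, 2:])
    return lap.var()

# A sharp checkerboard pattern scores far higher than a featureless (blurred) frame
sharp = np.indices((64, 64)).sum(axis=0) % 2 * 255.0
flat = np.full((64, 64), 128.0)
assert laplacian_variance(sharp) > laplacian_variance(flat)
```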
Advances in technology and image analysis enable user-friendly black-box operation and model generation [49,50]. Specialized research has described the process of image analysis, SfM photogrammetry, multi-view stereopsis techniques and model geometry generation, with image alignment, dense point cloud generation, geometry and texture building and georeferencing [9,13,25,50]. In addition, open-source, commercial and on-line SfM software solutions are now available for data processing, including Bundler, PMVS, Pix4D Mapper, Agisoft PhotoScan, Photosynth and ARC3D.
It is crucial to assess precision and accuracy following the final 3D model geometry computation; this is essential if topographic models are to support successful modeling and computation of volumetric changes. Common reference sources for spatial DEM accuracy are terrestrial laser scanner (TLS) data [18,42,52,53], lidar data [9,14,17], and total station, DGPS and RTK GPS check points [12,13,16,21,37,54].
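Accuracy against such check points is typically summarized as a per-axis root mean square error (RMSE). A minimal sketch, with hypothetical residuals used purely for illustration:

```python
import math

def rmse(errors):
    """Root mean square error of a list of residuals (in metres)."""
    return math.sqrt(sum(e * e for e in errors) / len(errors))

# Hypothetical (dx, dy, dz) residuals at independent check points, in metres
residuals = [(0.012, -0.020, 0.031), (-0.015, 0.018, -0.027),
             (0.010, -0.022, 0.024), (-0.013, 0.025, -0.030)]
rmse_x = rmse([r[0] for r in residuals])
rmse_y = rmse([r[1] for r in residuals])
rmse_z = rmse([r[2] for r in residuals])
```

Reporting RMSE separately per coordinate, as done in Section 5.1, exposes the typically larger vertical error of photogrammetric models.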

2.6. Step 5: Operations above processed layers and landform (object) mapping (extractions)
The final step processes the generated data in orthophotomosaic, raster, TIN DEM and point cloud formats. In fluvial geomorphology, it is also necessary to identify topographical surfaces under vegetation. Combined 2D and 3D datasets enable the use of classification techniques and point cloud filtration for precise object identification, especially for banks under tree canopies, overhanging banks and vertical structures. The final operations are strongly influenced by the study aims and the nature of the studied objects. In fluvial studies, UAV-generated topography models detect morphological changes [18,26,28,35], identify submerged area topography [6,23,25,30] and vegetation dynamics [22,32], classify objects from the orthophotomosaic [20,24,31] and analyze optical granulometry [34,36].

3. Workflow application: case study area and technical equipment


Imaging data of a 1.6 km long knickzone reach of the braided-wandering Belá River in northern Slovakia (Fig. 3) were chosen to illustrate the template's individual steps. This reach has a system of bars with a well-developed main channel, a bedrock outcrop, vertically diverse forested banks, a forested floodplain with side arms, a terrace, a 30 m high undercut bluff and variable land cover.

Fig. 3. Localization of the Belá River: (a) in Slovakia, (b) catchment hillshade relief with position of the study reach for UAV monitoring and (c) orthophoto of the forested floodplain and in-channel structure (water surface - blue, gravel bars - grey, island - green). Visualisation of the riparian landscape (d) from a 30 m high UAV flight and (e) the multirotor platform HiSystem Hexakopter XL with Sony NEX 6 camera. (orthophoto: EUROSENSE Slovakia; GIS data: Geodesy, Cartography and Cadastre Authority of the Slovak Republic (122-24-99-2012)).
The small multirotor UAV (HiSystem Hexakopter XL, Fig. 3e) equipped with a Sony NEX 6 camera and a standard 16-50 mm lens was used for image capture. All images were taken at 16 mm focal length, providing an 82° field of view. The UAV was equipped with First Person View (FPV) to image targets and visually check the flight path; this transmits live video from either the main camera or a small on-board CCD camera to the ground station, with an external monitor and Google Maps for position localization. Agisoft PhotoScan software running on a standard four-core Intel i7 workstation with 32 GB RAM and a 2 GB graphics card then processed the images.

4. Methods
4.1 Image acquisition and processing
Field reconnaissance identified six suitable take-off and landing sites on the valley floor at gravel bar centers. These provided the best conditions for UAV visual flight control, with maximal direct visual contact for flights over floodplain forest and terraces, and defined the six imaging sectors highlighted in Figure 4a. A total of 38 GCPs was measured by RTK GPS (GG03 antenna with Leica Zeno 5) with 1-2 cm accuracy (Figs. 4b,c,d). Images were obtained by the Sony NEX 6 camera with automatically computed aperture, and the exposure time was set at shutter speeds of 1/1200 in good sunlight and 1/800 in cloudy conditions. The autonomous flight mode of the Hexakopter for take-off, landing and GPS-stabilized waypoint flight could not be used for imaging because the GPS signal was shielded by the tree canopy.
Fig. 4. Localization of (a) the 6 take-off points and 6 sectors of image acquisition, (b) the distribution of ground control points (GCPs) in the study area, (c) a detailed GCP on the gravel bar and (d) GPS point targeting.

Nadir, oblique and horizontal imaging were used for 3D model creation over 75 minutes of flights. Nadir imaging was realized in all sectors at an average flight height of 80 m AGL (above ground level), and at 50 m above the terrace surface in sector 4. Oblique imaging at 30°-45° was applied in sectors 2, 3 and 4, where treetops covered the banks and bank line. An AGL of 20 m was used for oblique imaging of sectors 2 and 3, while sector 4 (bluff imaging) was flown at 80 m, 60 m and 50 m AGL. Nadir and oblique imaging were complemented by horizontal imaging, which provided ground-level photos of river banks under tree canopies and of bluffs from the UAV at 30 m and 20 m AGL. Aerial image data were processed in Agisoft PhotoScan using 2,413 aerial images and S-JTSK (the unified trigonometric cadastral network) coordinates of 20 GCPs; the remaining 18 points served as check points for accuracy assessment. The data exported from PhotoScan provided a textured point cloud, an RGB orthophotomosaic in red, green and blue bands with 5 cm pixel resolution and a digital elevation model of the channel and floodplain with 6.46 cm resolution.

4.2 Data analysis and processing


The orthophotomosaic was used for manual classification of land cover categories and landforms in ArcGIS software (Fig. 5). At the first level, landforms were classified into three main geomorphic components: braidplain, floodplain and terrace. Second-level classification then divided the braidplain into water area, bar area and island categories, and land cover was classified into seven categories: water area, bedrock, gravel bar, low vegetation, shrub vegetation, tree vegetation and large woody debris (LWD). The result was a process-oriented manually classified (POMC) layer with two landform levels and seven land cover categories.
The photogrammetrically derived point cloud created a surface envelope, so vegetation cover had to be identified and filtered to obtain the surface topography. Unfortunately, since the high-density photogrammetrically derived points originate from digital camera imaging with only one "return" from the screened object, it was impossible to use filtering algorithms based on the first and last returns supplied by airborne lidar. Photogrammetric point cloud classification is therefore based on iterative inspection of the angles and height differences between classified points. For quasi-vertical banks, bluffs and landslide surfaces, the classification results proved unreliable without operator input.
The Terrasolid TerraScan extension in Microstation software was used for semi-automatic point cloud classification into the following six classes (Fig. 8): (1) high vegetation (>5 m); (2) middle vegetation (1.5-5 m); (3) low vegetation (0.2-1.5 m); (4) ground; (5) water; and (6) LWD. This classification of vegetation cover was harmonized with American Society for Photogrammetry and Remote Sensing (ASPRS) classification standards. Woody debris was included in the low vegetation class, and its extraction was therefore based on the previously manually-vectorised woody debris land cover class in the POMC layer. The water area points were also difficult to classify by automatic methods because the flat water surface was deformed by rippling; this led to classification of the water surface as ground, small vegetation or noise, so the water areas were classified, like the woody debris class, on the basis of polygons created from the POMC layer. Class corrections were verified by the moving profile method across the whole study area.
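The height thresholds above can be sketched as a simple rule applied to each point's height above a local ground model. This is a deliberately simplified stand-in for the iterative TerraScan routines; as noted in the text, water and LWD points are resolved from POMC polygons rather than from height alone.

```python
def classify_by_height(height_above_ground):
    """Assign a TerraScan-style class from height above ground (m).
    Simplified sketch: real workflows derive the ground surface iteratively
    and classify water/LWD from auxiliary POMC polygons."""
    h = height_above_ground
    if h < 0.2:
        return "ground"
    if h < 1.5:
        return "low vegetation"
    if h <= 5.0:
        return "middle vegetation"
    return "high vegetation"

labels = [classify_by_height(h) for h in (0.05, 0.8, 3.2, 12.0)]
# → ["ground", "low vegetation", "middle vegetation", "high vegetation"]
```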
Fig. 5. Vectorisation and identification of objects in the 3-level database with its 3 main
components: braidplain, floodplain and terrace (level 1). Objects specification at level 2
(braidplain area, the most specific part of the fluvial landscape). The database was completed
by land cover information at level 3.

The RGB orthophotomosaic was classified in ArcGIS software by supervised Maximum Likelihood Classification (MLC). Training site signatures identified five land cover classes: water area, bar surface, vegetation, LWD and bare surface. ISO cluster analysis and dendrogram tools tested, separated and validated the potential number of classification categories, and training samples were manually digitized via spectral RGB diversity into the five author-predefined classes. Final post-classification processing to reduce noise and misclassification comprised focal statistics filtering, smoothing by the boundary clean toolset and removal of small isolated regions of pixels (Fig. 6).
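The focal-statistics noise reduction step can be illustrated with a 3×3 majority filter, a simplified analogue of the ArcGIS focal statistics and boundary clean tools used here:

```python
from collections import Counter

def majority_filter(grid):
    """3x3 majority (focal) filter to suppress isolated misclassified pixels.
    Border cells are left unchanged in this simplified sketch."""
    rows, cols = len(grid), len(grid[0])
    out = [row[:] for row in grid]
    for i in range(1, rows - 1):
        for j in range(1, cols - 1):
            window = [grid[r][c] for r in (i - 1, i, i + 1)
                                 for c in (j - 1, j, j + 1)]
            out[i][j] = Counter(window).most_common(1)[0][0]
    return out

# A lone "LWD" pixel inside a vegetation patch is replaced by the majority class
patch = [["veg"] * 5 for _ in range(5)]
patch[2][2] = "LWD"
cleaned = majority_filter(patch)
assert cleaned[2][2] == "veg"
```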
Fig. 6. The five classes identified by (a) automatic supervised maximum likelihood classification (MLC), (b) the result of post-classification processing: noise and misclassification removal by application of the focal statistics tool, the boundary clean toolset and removal of small isolated regions of pixels, (c) the reference data source for quality assessment (POMC) and (d) the spatial distribution of automatic classification errors on the orthophotomosaic.

5. Results
5.1. UAV data: accuracy and precision
Figure 7 highlights that the high resolution of the UAV orthophotomosaic, combined with nadir, oblique and horizontal imaging, enabled vectorization of landforms and land cover with one dimension greater than 10 cm, including vertically oriented bluffs and banks with overhanging tree canopy. The aligned GCP precision computed by Agisoft PhotoScan was 0.336 pixel (0.08 m), and the average root mean square error (RMSE) of all GCPs after alignment was 0.01915 m (z = 0.10093 m; x = 0.01474 m; y = 0.02272 m). The mean difference at the check points was 0.02642 m, and the RMSE was 0.02836 m for the vertical coordinate, 0.02459 m for the X coordinate and 0.02812 m for Y.
Fig. 7. Differences between (a) the 2002 orthoimage with 1 m pixel resolution, (b) the 2006 orthoimage with 40 cm/pixel, (c) the 2015 orthoimage with 20 cm/pixel and (d) UAV photogrammetry (2015) with 5 cm pixels. (orthophoto: EUROSENSE Slovakia).

The point cloud was exported with RGB point information, which provided photorealistic 3D visualization (Fig. 8a), and with class information for terrain, vegetation and the separately defined classes (Figs. 8b,c). Filtering gave the DSM combining points of all classes (Fig. 8d) and the DTM with only ground-class points (Fig. 8e). Vertical differences between the DTM and DSM, representing vegetation cover height, varied from -4.38 to +34.09 m, depending on floodplain tree cover (Fig. 8f). With the exception of islands, the braidplain area carries low vegetation of 50 cm to 1 m, yielding a less precise topography model because of the low number of points remaining after vegetation points are filtered out.
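The DSM/DTM differencing behind Figure 8f is a straightforward raster subtraction yielding a canopy (vegetation) height model. The elevation tiles below are hypothetical values for illustration only; in real data, small negative differences can also occur over water or from filtering noise, as the -4.38 m minimum reported above shows.

```python
import numpy as np

# Hypothetical 3x3 tiles of a DSM (surface envelope) and DTM (ground), in metres
dsm = np.array([[710.2, 712.5, 709.8],
                [708.1, 706.4, 707.9],
                [705.9, 706.0, 706.3]])
dtm = np.array([[705.1, 705.3, 705.2],
                [705.0, 705.9, 705.4],
                [705.8, 705.9, 706.1]])

chm = dsm - dtm  # DEM of difference: vegetation cover height per cell
assert abs(chm.max() - 7.2) < 1e-6  # tallest cover in this illustrative tile
```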
Fig. 8. (a) Photorealistic colorized point cloud landscape model of the study area, (b) classified point cloud (water - blue, ground - grey, vegetation - green, LWD - orange) and (c) point cloud of the surface model containing only ground and water classes. Raster-generated (d) envelope surface model (DSM) with vegetation, (e) digital terrain model (DTM) and (f) DEM of differences between these models.

5.2. Automatic supervised classification


The accuracy of the MLC supervised classification of RGB characteristics was assessed by the combination toolset and cross-tabulation against the manually-classified POMC reference layer. The POMC layer was adjusted by merging its data into the same five classes: (1) water area; (2) bar surface; (3) vegetation; (4) woody debris; (5) bare surface. The accuracy assessment results are presented in the Table 2 confusion matrix, which gives an overall accuracy of 81.39% and a KAPPA index of agreement of 0.62. Proficient automated classification was achieved for the "vegetation" and "bar surface" classes. Considering the spatial error distribution in Figure 6d, the water area class showed lower accuracy due to vegetation cover overhanging the bank line, visible bedrock under the water level classified as bare surface, and gravelled channel bed classified as bar surface. The bare surface and woody debris classes also exhibited low accuracy. Errors were concentrated at the study area margins, which are characterized by lower image quality, especially in areas with a visible channel bed structure and in the vegetation-channel and water area-bar surface contact zones (Fig. 6d).

Table 2
Confusion matrix and accuracy measures (classification accuracy, overall classification
accuracy and KAPPA index) between classified and reference data for supervised Maximum
Likelihood Classification.
classified data (rows) × reference data (columns):

class | water area | bar surface | vegetation | woody debris | bare surface | classification accuracy
water area | 0.78 | 0.02 | 0.15 | 0.01 | 0.03 | 78.44 %
bar surface | 0.04 | 0.89 | 0.03 | 0.01 | 0.04 | 88.52 %
vegetation | 0.02 | 0.02 | 0.94 | 0.01 | 0.01 | 94.05 %
woody debris | 0.26 | 0.08 | 0.43 | 0.18 | 0.05 | 17.70 %
bare surface | 0.37 | 0.19 | 0.23 | 0.03 | 0.17 | 17.36 %

overall classification accuracy: 0.81 (81.39 %)
Kappa index of agreement: 0.62
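The overall accuracy and kappa statistics reported in Table 2 follow the standard confusion-matrix formulas. A minimal sketch, using a hypothetical two-class count matrix for illustration (not the study's data, whose table reports row-normalized proportions):

```python
import numpy as np

def accuracy_and_kappa(cm):
    """Overall accuracy and Cohen's kappa from a confusion matrix of counts
    (rows: classified data, columns: reference data)."""
    cm = np.asarray(cm, dtype=float)
    n = cm.sum()
    po = np.trace(cm) / n                                 # observed agreement
    pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n**2   # chance agreement
    return po, (po - pe) / (1 - pe)

# Hypothetical pixel counts for two classes
oa, kappa = accuracy_and_kappa([[90, 10],
                                [20, 80]])
# → oa = 0.85, kappa = 0.70
```

Kappa discounts chance agreement, which is why it is lower than overall accuracy here, as in Table 2 (0.62 vs. 81.39%).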

5.3. Channel landforms mapping and woody debris volume calculation


Classified elevation models identified the study area landforms and their vertical diversity. DTM longitudinal profiles determined the vertical differentiation of the abandoned arms, which lie approximately 60 to 100 cm above the main channel water level (Fig. 9), and the DTM also documented the secondary channel incision of approximately 150 cm relative to the main channel in the southern part of the knickzone, accompanied by an outcrop of claystones. In addition, the combined DTM and high-precision orthophotomosaic identified the vertical diversity of bars (Fig. 9c). The point bar's three levels (Fig. 9e) define three accretion stages which differ in the grain size of the deposited material (Fig. 9d). The oldest level, No. 3, has an average altitude of 706.256 m a.s.l. and is stabilized by low vegetation cover and sandy deposits, while bar level 2 has an average altitude of 705.671 m a.s.l. and the youngest bar level 1 lies at 705.001 m a.s.l. The orthophotomosaic also highlights that the youngest bar level surface is formed of finer material than level 2.
Fig. 9. (a) Identification of the Belá River main channel and several abandoned side arms
and (b) longitudinal profiles calculated from UAV photogrammetry 3D models. Identification
of (c) bar topography and (d) several vertical bar levels (e) on the cross-section plotted on the
point bar.
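The longitudinal profiles in Fig. 9b can be extracted by sampling the DTM grid along a channel line; a minimal sketch, assuming the DTM is available as a NumPy array (the synthetic dipping-plane DTM here is purely illustrative):

```python
import numpy as np

def sample_profile(dtm, start, end, n=200):
    """Sample elevations along a straight profile between two (row, col)
    points of a DTM grid using bilinear interpolation."""
    rows = np.linspace(start[0], end[0], n)
    cols = np.linspace(start[1], end[1], n)
    r0, c0 = np.floor(rows).astype(int), np.floor(cols).astype(int)
    r1 = np.clip(r0 + 1, 0, dtm.shape[0] - 1)
    c1 = np.clip(c0 + 1, 0, dtm.shape[1] - 1)
    fr, fc = rows - r0, cols - c0
    z = (dtm[r0, c0] * (1 - fr) * (1 - fc) + dtm[r1, c0] * fr * (1 - fc)
         + dtm[r0, c1] * (1 - fr) * fc + dtm[r1, c1] * fr * fc)
    dist = np.hypot(rows - rows[0], cols - cols[0])  # distance in pixels
    return dist, z

# Synthetic DTM: a plane dipping gently downstream (values are illustrative)
dtm = np.fromfunction(lambda r, c: 706.0 - 0.01 * c, (50, 50))
d, z = sample_profile(dtm, (10, 0), (10, 49))
print(z[0], z[-1])  # elevation at profile start and end
```

Multiplying `dist` by the grid cell size would give the along-profile distance in metres.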

In addition to vertical landform differentiation, the point cloud enables calculation of woody
debris volume (Fig. 10); this calculation example was chosen because woody debris plays a
very important role in braided channel development. The high-resolution orthophotomosaic and
point cloud allow precise debris identification, including calculation of its length, location
in the study area and orientation to the water flow (Fig. 10a). The elevation model determines
vertical profiles of roots and trunks (Fig. 10b), which create flow obstructions; the point
cloud captured woody debris 30 cm high (profile 1), a 10 cm high trunk (profile 2) and a 1 m
high root and woody debris bulk (profile 3), corresponding to the actual terrain situation.
The total air and wood mass volume in the woody debris was computed by the 'cut method'
volume analysis tool in Topolyst software. A total of 516 woody debris accumulations with a
931 m3 volume were registered in the study area (Fig. 10d), with a 134.9 m3 maximum
accumulation, 1.8 m3 average and 0.0195 m3 median. This highlights that 75 % of all
accumulations were smaller than 0.319 m3 (Fig. 10c).
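The 'cut method' tool itself belongs to Topolyst, but the underlying operation is a cut/fill analysis of the difference between the debris surface and the planar base surface; a minimal equivalent sketch (the grid values are hypothetical):

```python
import numpy as np

def cut_volume(dsm, base, cell_size):
    """'Cut' volume between a surface model and a planar base: the sum of
    positive elevation differences multiplied by the cell area.
    Elevations in metres, cell_size in metres."""
    diff = np.asarray(dsm, dtype=float) - np.asarray(base, dtype=float)
    return np.sum(np.clip(diff, 0.0, None)) * cell_size ** 2

# Hypothetical 4x4 patch: a 0.5 m high debris pile on a flat 705.0 m surface
base = np.full((4, 4), 705.0)
dsm = base.copy()
dsm[1:3, 1:3] += 0.5                          # four cells raised by 0.5 m
vol = cut_volume(dsm, base, cell_size=0.05)   # 5 cm grid
print(vol)  # 4 cells * 0.5 m * 0.0025 m2 = 0.005 m3
```

Clipping negative differences to zero is what excludes debris buried below the planar surface, as described for the Belá River dataset.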
It is essential in woody debris estimation to quantify the wood mass volume by identifying the
wood/air volume ratio in the different types of accumulated material: isolated wood pieces,
jams of logs, branches, root boles, twigs and shrubs, and whole trees. An appropriate
approach for estimating the mass of woody debris accumulations, calculating a density index
for woody debris and estimating the air proportion in log jams and shrubs from aerial
photography was developed by Thévenet et al. [55] and applied by Piégay et al. [56] and
Wyžga and Zawiejska [57].
Fig. 10. Woody debris accumulation identification (a) on the orthophotomosaic and digital
elevation model with (b) three cross-sections plotted on woody debris. Statistical distribution
(c) of values calculated from 516 woody debris spots and (d) spatial distribution of woody
debris accumulations in the study area.

6. Discussion
6.1. Workflow design
The template for UAV technology application to riverine landscape mapping consists of five
steps. These encompass complying with UAV regulations, reconnaissance of the mapped site,
pre-flight field work, the flight mission, data processing and landform mapping/object
classification. Studies applying UAV technology typically describe the process of image
acquisition and processing, including the UAV platform, its technical parameters, the number
of images, camera settings and processing software, presenting UAV as the best tool for case
study data acquisition. Research also emphasizes problems in the methods used for final data
post-processing, especially in image classification [31], vegetation segmentation and
classification [32], refraction correction [30] and submerged topography model generation
[23,25]. Other reports highlight photogrammetric processing, the calibration process, GCP
distribution and parameter settings for accuracy assessment of UAV-derived models
[13,37,38,40,41]. Further articles describe the legislation and regulations governing the
image data acquisition phase. Here, Tonkin et al. [16] recorded aviation authority
line-of-sight requirements, Flynn et al. [24] conducted flights in accordance with the
Model Aircraft Operating Standards Advisory Circular, and Casado et al. [31] described data
collection under UK Civil Aviation Authority regulation. Rango and Laliberte [58] detailed
the UAV flight planning required by US Federal Aviation Administration (FAA) law in the
National Airspace System, and Stöcker et al. [39] provided a wide overview of UAV
regulations in 19 countries with the common goal of minimizing risks in UAV operations.
The development of a workflow template as an appropriate tool for mapping objects in the
riverine landscape requires familiarity with all the complexities involved in applying UAV
technology to this process, and it ensures a check system for identifying and controlling all
steps in model creation. The workflow design must cover the multi-component aspects of field
data acquisition and all technical requirements in order to achieve precise data generation.
Advances in UAV image processing and progress in multi-view stereopsis techniques, together
with the universal availability and low cost of UAVs as an easy data collection source, have
led to UAV photogrammetry being used as a black-box data generation technology. The
combination of the following essential steps is fundamental in UAV mapping: (i) data
collection under legislative and regulatory frameworks to minimize risks, (ii) photogrammetric
processing with high-accuracy GCPs, combined nadir/oblique imaging and geometric model
accuracy assessment and (iii) a final data processing methodology.

6.2. Advantages and limitations of UAV riparian landscape mapping


New developments in UAV technology have enabled the creation of high-precision models of
the actual landscape with appropriate information density and position referencing, especially
in the vertical dimension [59]. Miřijovský and Langhammer [26] emphasize the great
reliability of UAVs in fluvial geomorphological research and illustrate the differences between
the standard schematization of a study area during classic field research, with its errors
from subjective observation, and UAV mapping. This new mapping platform ensures less
time-consuming data acquisition, and its greatest advantages include real-time capture of the
landscape state and of subsequent changes elucidated by time series data. This determines the
actual processes involved in these changes and is especially applicable in capturing and
monitoring flood events and landslide movements. The precise results record small objects at
a few centimeters resolution, thus improving data identification, extraction and
classification, with the additional benefit of vertical information and improved spatial data
processing analysis.
A further advantage of this technology is the coupling of several data models generated from
one flight. Three hours of UAV mapping on the Belá River (six flights of 12 minutes duration
and approximately one hour of GCP targeting) generated DEMs, orthophotos, point clouds and
nadir and tilted aerial photos for complex modeling of the landscape. While UAV mapping is
less expensive and more operational than airborne lidar, with equal spatial information
density [15], its optical data acquisition cannot penetrate the vegetation cover to receive
signals from the terrain topography [12]. The technology is also limited by the vegetation
cover of the surface envelope, with significant errors recorded especially in landscapes
covered by small trees and shrubs [15], vegetated areas [17] and areas continuously covered
by low vegetation which cannot be classified or removed from the surface [19]. In contrast,
a highly precise DTM can capture the ground terrain in sparsely vegetated areas by optical
sensors after point cloud filtering.
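The point cloud filtering step can be illustrated with a deliberately naive minimum-elevation grid filter (real ground classifiers, e.g. progressive morphological filters, are more sophisticated; all coordinates and thresholds here are hypothetical):

```python
import numpy as np

def ground_filter(points, cell=0.5, tol=0.15):
    """Naive ground filter: keep points within `tol` metres of the lowest
    point in each `cell` x `cell` grid cell, discarding vegetation returns."""
    points = np.asarray(points, dtype=float)      # columns: x, y, z
    keys = np.floor(points[:, :2] / cell).astype(int)
    ground = []
    for key in {tuple(k) for k in keys}:
        mask = np.all(keys == key, axis=1)
        cell_pts = points[mask]
        zmin = cell_pts[:, 2].min()
        ground.append(cell_pts[cell_pts[:, 2] <= zmin + tol])
    return np.vstack(ground)

# Hypothetical cloud: flat gravel near 705 m with one shrub return at 706.2 m
pts = np.array([[0.1, 0.1, 705.00],
                [0.2, 0.3, 705.05],
                [0.3, 0.2, 706.20],   # vegetation return
                [0.9, 0.8, 705.10]])
print(len(ground_filter(pts)))  # 3 of 4 points kept; the shrub return is removed
```

This also makes the limitation explicit: under continuous low vegetation no point in a cell reaches the true ground, so no filter of this kind can recover it.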
The ground level for data classification in our study area involves non-densely vegetated,
sparsely vegetated or bare surfaces, so the classified point cloud greatly improves topographic
surface model accuracy. Precise compilation of a floodplain sparsely covered by the point
cloud plays a crucial role in landform mapping. However, interpolation during DTM generation
is impossible when the floodplain is covered by dense vegetation, so geodynamic analysis with
UAV photogrammetry requires studying the effect of vegetation on vertical errors. UAV lidar
[11] can be used for point cloud acquisition in areas where the ground must be reached under
the vegetation canopy, but the disadvantages of small UAV lidars are their relatively high
price and weight of more than 2 kg, which limits their use on small UAV platforms.

6.3. UAV technology in the riparian landscape of the Belá River


Multiple imaging combining classic nadir images with oblique and horizontal ground imaging
is crucial in mapping bank heights, bank lines, bluffs and valley walls. The point cloud,
combined with the calculation of woody debris volume, provides precise data for land cover
classification at high spatial resolution. Here, debris below the planar surface and buried in
gravel is excluded, so woody debris volume is calculated from the differences between
woody-debris-classified points and the planar surface.
The Belá River is subdivided into two reaches: (1) the northern reach, dominated by gravel
bar and island areas with woody debris accumulated in abandoned arms created by material
accumulation up to 1 m above the present river flow, and (2) the lower reach, affected by the
anthropogenic construction of a small hydropower plant and artificial canal, with evident
secondary channel incision [60]. This river has a riverine landscape with floodplain tree
cover, so multiple imaging combining classic orthogonal, oblique and ground horizontal
imaging is essential here. This provides workflow advantages in identifying bank lines and the
surface under tree canopies overhanging the river banks. The accuracy of the classic
approaches to bank delimitation from aerial photos, in both models and orthophotos, is
severely affected when the banks are inclined at an angle rather than forming a vertical
cliff. It is therefore important to create a methodology which applies landform delimitation
in 3-dimensional space, employing 3D assessment of landscape objects rather than classic
planar geoscience analysis in GIS.

7. Conclusion
The presented template for the application of UAV technology in riverine landscape mapping
encompasses all technical aspects of flight mission logistics, data acquisition and processing.
It also includes the nested, multiscale and multi-component aspects of riverine landscape
object discrimination and their scalar mapping and assessment, which together provide a
successful tool for researching this landscape type. Our proposed five-step workflow
comprises (0) legislation and regulation control, (i) reconnaissance of the mapped site, (ii)
pre-flight field work, (iii) flight mission, (iv) quality check and processing of aerial data
and (v) operations on processed layers and landform (object) mapping (extraction). The
combination of all these essential steps is fundamental to UAV-based identification of bank
lines and morphological changes.
Finally, it is of utmost importance to recognize that the accuracy of the classic approaches
to bank delimitation from aerial photos and UAV orthophotos is compromised where banks are
inclined at an angle rather than forming a vertical cliff. This, however, can be successfully
counteracted by a methodology which applies landform delimitation in 3-dimensional space,
thus employing 3D assessment of landscape objects.

Acknowledgments
This research was supported by the Science Grant Agency (VEGA) of the Ministry of Education
of the Slovak Republic and the Slovak Academy of Sciences, grant 02/0020/15. The flight
mission was accomplished in collaboration with GEOTECH Bratislava s.r.o. and the aerial
orthoimage was provided by EUROSENSE Slovakia, s.r.o. The authors are grateful to R.
Marshall for manuscript language review.

References
[1] J.M. Wheaton, K.A. Fryirs, G. Brierley, S.G. Bangen, N. Bouwes, G. O'Brien,
Geomorphic mapping and taxonomy of fluvial landforms. Geomorphol. 248 (2015) 273-295.
doi: http://dx.doi.org/10.1016/j.geomorph.2015.07.010.
[2] E.J. Milton, D.J. Gilvear, I.D. Hooper, Investigating change in fluvial systems using
remotely sensed data, in: A.M. Gurnell, G.E. Petts (Eds.), Changing River Channels, John
Wiley and Sons, Chichester, 1995, pp. 276–301.
[3] R.G. Bryant, D.J. Gilvear, Quantifying geomorphic and riparian land cover changes either
side of a large flood event using airborne remote sensing: River Tay, Scotland, Geomorphol.
29 (1999) 307–321. doi: http://dx.doi.org/10.1016/S0169-555X(99)00023-9.
[4] D. Gilvear, C. Davids, A.N. Tyler, The use of remotely sensed data to detect channel
hydromorphology; River Tummel, Scotland, River Res. Appl. 20 (2004) 795–811. doi:
http://dx.doi.org/10.1002/rra.792.
[5] J.M. Hooke, C.E. Redmond, Use of Cartographic Sources for Analyzing River Channel
Change with Examples from Britain, in: G.E. Petts (Ed.), Historical Change of Large Alluvial
Rivers: Western Europe. John Wiley and Sons, Chichester, 1989, pp. 79–93.
[6] J. Lejot, C. Delacourt, H. Piégay, T. Fournier, M.L. Trémélo, P. Allemand, Very high
spatial resolution imagery for channel bathymetry and topography from an unmanned
mapping controlled platform, Earth Surf. Process. Landf. 32 (2007) 1705-1725. doi:
http://dx.doi.org/10.1002/esp.1595.
[7] D. Gilvear, R. Bryant, Analysis of aerial photography and other remotely sensed data, in:
M. Kondolf, H. Piégay (Eds.), Tools in geomorphology, Wiley, Chichester, 2003, pp. 135-
170. doi: http://dx.doi.org/10.1002/0470868333.ch6.
[8] A.M. Gurnell, J.L. Peiry, G.E. Petts, Using Historical Data in Fluvial Geomorphology, in:
M. Kondolf, H. Piégay (Eds.), Tools in geomorphology, Wiley, Chichester, 2003, pp. 77-103.
[9] M.A. Fonstad, J.T. Dietrich, B.C. Courville, J.L. Jensen, P.E. Carbonneau, Topographic
structure from motion: a new development in photogrammetric measurement, Earth Surf.
Process. Landf. 38 (2013) 421-430. doi: http://dx.doi.org/10.1002/esp.3366.
[10] J. Sládek, M. Rusnák, Nizkonákladové mikro-UAV techológie v geografii (nová metóda
zberu priestorových dát), Geografický Časopis 65(3) (2013) 269–285.
[11] T. Sankey, J Donager, J. McVay, J.B. Sankey, UAV lidar and hyperspectral fusion for
forest monitoring in the southwestern USA, Remote Sens. Environ. 195 (2017) 30-43. doi:
http://dx.doi.org/10.1016/j.rse.2017.04.007.
[12] D. Turner, A. Lucieer, M. de Jong, Time Series Analysis of Landslide Dynamics Using
an Unmanned Aerial Vehicle (UAV), Remote Sens. 7 (2015) 1736–1757. doi:
http://dx.doi.org/10.3390/rs70201736.
[13] S. Harwin, A. Lucieer, Assessing the accuracy of georeferenced point clouds produced
via multi-view stereopsis from unmanned aerial vehicle (UAV) imagery, Remote Sens. 4
(2012) 1573–1599. doi: http://dx.doi.org/10.3390/rs4061573.
[14] Ch.H. Hugenholtz, K. Whitehead, O.W. Brown, T.E. Barchyn, B.J. Moorman, A.
LeClair, K. Riddell, T. Hamilton, Geomorphological mapping with a small unmanned aircraft
system (sUAS): Feature detection and accuracy assessment of a photogrammetrically-derived
digital terrain model, Geomorphol. 194 (2013) 16-24. doi:
http://dx.doi.org/10.1016/j.geomorph.2013.03.023.
[15] U. Niethammer, M.R. James, S. Rothmund, J. Travelletti, M. Joswig, UAV-based remote
sensing of the Super-Sauze landslide: Evaluation and results, Eng. Geol. 128 (2012) 2-11. doi:
http://dx.doi.org/10.1016/j.enggeo.2011.03.012.
[16] T.N. Tonkin, N.G. Midgley, D.J. Graham, J.C. Labadz, The potential of small unmanned
aircraft systems and Structure-from-Motion for topographic surveys: a test of emerging
integrated approaches at Cwm Idwal, North Wales. Geomorphol. 226 (2014) 35–43. doi:
http://dx.doi.org/10.1016/j.geomorph.2014.07.021.
[17] F. Clapuyt, V. Vanacker, K. Van Oost, Reproducibility of UAV-based earth topography
reconstructions based on Structure-from-Motion algorithms, Geomorphol. 260 (2016) 4-15.
doi: http://dx.doi.org/10.1016/j.geomorph.2015.05.011.
[18] K.J. Cook, An evaluation of the effectiveness of low-cost UAVs and structure from
motion for geomorphic change detection, Geomorphol. 278 (2017) 195-208. doi:
http://dx.doi.org/10.1016/j.geomorph.2016.11.009.
[19] M. Rusnák, J. Sládek, J. Buša, V. Greif, Suitability of digital elevation models generated
by UAV photogrammetry for slope stability assessment (case study of landslide in Svätý
Anton, Slovakia), Acta Sci.Pol. Form. Cir. 15(4) (2016) 439-449. doi:
http://dx.doi.org/10.15576/ASP.FC/2016.15.4.439.
[20] R. Dunford, K. Michel, M. Gagnage, H. Piégay, M.L. Trémélo, Potential and constraints
of unmanned aerial vehicle technology for the characterization of Mediterranean riparian
forest, Int. J. Remote Sens. 30 (2009) 4915-4935. doi:
http://dx.doi.org/10.1080/01431160903023025.
[21] D. Vericat, J. Brasington, J. Wheaton, M. Cowie, Accuracy assessment of aerial
photographs aquired using lighter-than-air-blimps: Low-cost tools for mapping river
corridors, River Res. Appl. 25 (2009) 985-1000. doi: http://dx.doi.org/10.1002/rra.1198.
[22] A. Hervouet, R. Dunford, H. Piégay, B. Belletti, M.L. Trémélo, Analysis of post-flood
recruitment patterns in braided-channel rivers at multiple scales based on an image series
collected by unmanned aerial vehicles, ultralight aerial vehicles, and satellites, GIsci Remote
Sens. 48 (2011) 50-73. doi: http://dx.doi.org/10.2747/1548-1603.48.1.50.
[23] C. Flener, M. Vaaja, A. Jaakkola, A. Krooks, H. Kaartinen, A. Kukko, E. Kasvi, H.
Hyyppä, P. Alho, Seamless mapping of river channels at high resolution using mobile LiDAR
and UAV-photography, Remote Sens. 5 (2013) 6382–6407. doi:
http://dx.doi.org/10.3390/rs5126382.
[24] K.F. Flynn, S.C. Chapra, Remote Sensing of Submerged Aquatic Vegetation in a
Shallow Non-Turbid River Using an Unmanned Aerial Vehicle, Remote Sens. 6 (2014)
12815-12836. doi: http://dx.doi.org/10.3390/rs61212815.
[25] L. Javernick, J. Brasington, B. Caruso, Modeling the topography of shallow braided
rivers using Structure-from-Motion photogrammetry. Geomorphol. 213 (2014) 166-182. doi:
http://dx.doi.org/10.1016/j.geomorph.2014.01.006.
[26] J. Miřijovský, J. Langhammer, Multitemporal Monitoring of the Morphodynamics of a
Mid-Mountain Stream Using UAS Photogrammetry, Remote Sens. 7 (2015) 8586-8609. doi:
http://dx.doi.org/10.3390/rs70708586.
[27] J. Miřijovský, M.Š. Michalková, O. Petyniak, Z. Máčka, M. Trizna, Spatiotemporal
evolution of a unique preserved meandering system in Central Europe - The Morava River
near Litovel, Catena 127 (2015) 300-311. doi: http://dx.doi.org/10.1016/j.catena.2014.12.006.
[28] A.D. Tamminga, B.C. Eaton, Ch.H. Hugenholtz, UAS-based remote sensing of fluvial
change following an extreme flood event, Earth Surf. Process. Landf. 40 (2015) 1464-1476.
doi: http://dx.doi.org/10.1002/esp.3728.
[29] A. Tamminga, C. Hugenholtz, B. Eaton, M Lapointe, Hyperspatial remote sensing of
channel reach morphology and hydraulic fish habitat using an unnmanned aerial vehicle
(UAV): A first assessment in the context of river research and management, River Res. Appl.
31 (2015) 379- 391. doi: http://dx.doi.org/10.1002/rra.2743.
[30] A.S. Woodget, P.E. Carbonneau, F. Visser, I.P. Maddock, Quantifying submerged fluvial
topography using hyperspatial resolution UAS imagery and structure from motion
photogrammetry. Earth Surf. Process. Landf. 40 (2015) 47-64. doi:
http://dx.doi.org/10.1002/esp.3613.
[31] M.R. Casado, R.B. Gonzales, R. Wright, P. Bellamy, Quantifying the Effect of Aerial
Imagery Resolution in Automated Hydromorphological River Characterisation. Remote Sens.
8 (2016) 650. doi: http://dx.doi.org/10.3390/rs8080650.
[32] A. Michez, H. Piégay, J. Lisein, H. Claessens, P Lejeune, Classification of riparian forest
species and health condition using multi-temporal and hyperspatial imagery from unmanned
aerial system. Environ. Monit. Assess. 188 (2016) 146. doi: http://dx.doi.org/10.1007/s10661-
015-4996-2.
[33] T. Niedzielski, M. Witek, W. Spallek, Observing river stages using unmanned aerial
vehicles, Hydrol. Earth Syst. Sc. 20 (2016) 3193-3205. doi: http://dx.doi.org/10.5194/hess-20-
3193-2016.
[34] J. Langhammer, T. Lendzioch, J. Miřijovský, F. Hartvich, UAV-Based Optical
Granulometry as Tool for Detecting Changes in Structure of Flood Depositions, Remote Sens.
9 (2017) 240-261. doi: http://dx.doi.org/10.3390/rs9030240.
[35] B. Marteau, D. Vericat, Ch. Gibbins, R.J. Batalla, D.R. Green, Application of Structure-
from-Motion photogrammetry to river restoration. Earth Surf. Process. Landf. 42(3) (2016)
503-515. doi: http://dx.doi.org/10.1002/esp.4086.
[36] A.S. Woodget, R. Austrums, Subaerial gravel size measurement using topographic data
derived from a UAV-SfM approach. Earth Surf. Process. Landf. 42(9) (2017) 1434-1443. doi:
http://dx.doi.org/10.1002/esp.4139
[37] D. Turner, A Lucieer, Ch. Watson, An Automated Technique for generating Georectified
Mosaics from Ultra-High Resolution Unmanned Aerial Vehicle (UAV) Imagery, Based on
Structure from Motion (SfM) Point Clouds. Remote Sens. 4 (2012) 1392-1410. doi:
http://dx.doi.org/10.3390/rs4051392.
[38] A.S. Laliberte, M.A. Goforth, C.M. Steele, A. Rango, Multispectral Remote Sensing
from Unmanned Aircraft: Image Processing Workflows and Applications for Rangeland
Environments, Remote Sens. 3 (2011) 2529-2551. doi: http://dx.doi.org/10.3390/rs3112529.
[39] C. Stöcker, R. Bennett, F. Nex, M. Gerke, J. Zevenbergen, Review of the Current State
of UAV Regulations, Remote Sens. 9 (2017) 459. doi: http://dx.doi.org/10.3390/rs9050459.
[40] S. Harwin, A. Lucieer, J. Osborn, The Impact of the Calibration Method on the Accuracy
of Point Clouds Derived Using Unmanned Aerial Vehicle Multi-View Stereopsis, Remote
Sens. 7 (2015) 11933-11953. doi: http://dx.doi.org/10.3390/rs70911933.
[41] M.R. James, S. Robson, S. d'Oleire-Oltmanns, U. Niethammer, Optimising UAV
topographic surveys processed with structure-from-motion: Ground control quality, quantity
and bundle adjustment. Geomorphol. 280 (2017) 51-66. doi:
http://dx.doi.org/10.1016/j.geomorph.2016.11.021.
[42] M.R. James, S. Robson, Straightforward reconstruction of 3D surfaces and topography
with a camera: accuracy and geoscience application. J. Geophys. Res. 117 (2012) F03017.
doi: http://dx.doi.org/10.1029/2011JF002289.
[43] T.N. Tonkin, N.G. Midgley, Ground-Control Networks for Image Based Surface
Reconstruction: An Investigation of Optimum Survey Designs Using UAV Derived Imagery
and Structure-from-Motion Photogrammetry, Remote Sens. 8 (2016) 786. doi:
http://dx.doi.org/10.3390/rs8090786.
[44] K. Kraus, Photogrammetry. Geometry from images and laser scans. 2nd ed., DeGruyter,
Berlin, 1997.
[45] C. McGlone, E. Mikhail, J. Bethel, Manual of Photogrammetry. 5th ed., ASPRS,
Bethesda, 2004.
[46] F. Agüera-Vega, F. Carvajal-Ramírez, P. Martínez-Carricondo, Assessment of
photogrammetric mapping accuracy based on variation ground control points number using
unmanned aerial vehicle, Measurement 98 (2017) 221-227. http://dx.doi.org/
10.1016/j.measurement.2016.12.002.
[47] Agisoft PhotoScan User Manual. <http://www.agisoft.com/pdf/photoscan-
pro_1_3_en.pdf> (accessed 10.03.2017).
[48] J. Miřijovský, Bezpilotní systémy – sběr dat a využití ve fotogrammetrii, first ed., UP
Publishing, Olomouc, 2013. ISBN 978-80-244-3923-5
[49] C.S. Fraser, Automatic Camera Calibration in Close Range Photogrammetry,
Photogramm. Eng. Remote Sens. 79 (2013) 381-388.
[50] M.R. James, S. Robson, Mitigating systematic error in topographic models derived from
UAV and ground-based image networks, Earth Surf. Process. Landf. 39 (2014) 1413-1420.
doi: http://dx.doi.org/10.1002/esp.3609.
[51] T. Sieberth, R. Wackrow, J.H. Chandler, Automatic detection of blurred images in UAV
image sets, ISPRS J. Photogramm. Remote Sens. 122 (2016) 1-16. http://dx.doi.org/
10.1016/j.isprsjprs.2016.09.010.
[52] M.J. Westoby, J. Brasington, N.F. Glasser, M.J. Hambrey, J.M. Reynolds, “Structure
from Motion” photogrammetry: a low-cost, effective tool for geoscience applications.
Geomorphol. 179 (2012) 300–314. doi: http://dx.doi.org/10.1016/j.geomorph.2012.08.021.
[53] H. Obanawa, Y. Hayakawa, H. Saito, C. Gomez, Comparison of DSMs derived from
UAV-SfM method and terrestrial laser scanning. J. Jpn. Soc. Photogramm. Remote Sens. 53
(2014) 67–74. doi: http://dx.doi.org/10.4287/jsprs.53.67.
[54] M.M. Ouédraogo, A. Degré, C. Debouche, J. Lisein, The evaluation of unmanned aerial
system-based photogrammetry and terrestrial laser scanning to generate DEMs of agricultural
watersheds. Geomorphol. 214 (2014) 339–355. doi:
http://dx.doi.org/10.1016/j.geomorph.2014.02.016.
[55] A. Thévenet, A. Citterio, H. Piégay, A new methodology for the assessment of large
woody debris accumulations on highly modified rivers (Example of two French piedmont
rivers), Regul. River. 14 (1998) 467-483. doi: http://dx.doi.org/10.1002/(SICI)1099-
1646(1998110)14:6<467::AID-RRR514>3.0.CO;2-X.
[56] H. Piégay, A. Thévenet, A. Citterio, Input, storage and distribution of large woody debris
along a mountain river continuum, the Drôme River, France, Catena 35 (1999) 19-39. doi:
http://dx.doi.org/10.1016/S0341-8162(98)00120-9.
[57] B. Wyžga, J. Zawiejska, Wood storage in a wide mountain river: case study of the
Czarny Dunajec, Polish Carpathians, Earth Surf. Process. Landf. 30 (2005) 1475-1494. doi:
http://dx.doi.org/10.1002/esp.1204.
[58] A. Rango, A.S. Laliberte, Impact of flight regulations on effective use of unmanned
aircraft systems for natural resources applications, J. Appl. Remote Sens. 4(1) (2010). doi:
http://dx.doi.org/10.1117/1.3474649.
[59] B. Kršlák, P. Blišťan, A. Pauliková, P. Puškárová, Ľ. Kovanič, J. Palková, V.
Zelizňáková, Use of low-cost UAV photogrammetry to analyze the accuracy of a digital
elevation model in a case study, Measurement 91 (2016) 276-287. doi: http://dx.doi.org/
10.1016/j.measurement.2016.05.028.
[60] A. Kidová, M. Lehotský, M. Rusnák, Geomorphic diversity in the braided-wandering
Belá River, Slovak Carpathians, as a response to flood variability and environmental changes,
Geomorphol. 272 (2016) 137-149. doi: https://doi.org/10.1016/j.geomorph.2016.01.002.

Figure Captions
Fig. 1. Scheme for mapping riverine landscape. Workflow consists of the following steps: (i)
reconnaissance of the mapped site, (ii) pre-flight field work, (iii) flight mission, (iv) quality
check and processing of aerial data, (v) operations above processed layers and landform
(object) mapping (extractions). Prior to field mapping, it is necessary to check UAV
regulations and permission for field mapping and flight missions (step zero).

Fig. 2. Three methods of data acquisition for geometry building and creation of the 3D river
landscape model: (a) nadir, (b) oblique and (c) horizontal.

Fig. 3. Localization of the Belá River: (a) in Slovakia, (b) catchment hillshade relief with
position of the study reach for UAV monitoring and (c) orthophoto of the forested floodplain
and in-channel structure (water surface - blue, gravel bars - grey and islands - green).
Visualization of the riparian landscape (d) from a 30 m high UAV flight and (e) presentation
of the multirotor platform HiSystem Hexakopter XL with Sony NEX 6 camera. (orthophoto:
EUROSENSE Slovakia, GIS data: Geodesy, Cartography and Cadastre Authority of the Slovak
Republic (122-24-99-2012)).

Fig. 4. Localization of (a) 6 take-off points and 6 sectors of image acquisition, (b)
distribution of ground control points (GCPs) in the study area, (c) detail of a GCP on the
gravel bar and (d) GPS point targeting.

Fig. 5. Vectorisation and identification of objects in the 3-level database with its 3 main
components: braidplain, floodplain and terrace (level 1). Object specification at level 2 (the
braidplain area, the most specific part of the fluvial landscape). The database was completed
by land cover information at level 3.

Fig. 6. The five classes identified by (a) automatic supervised maximum likelihood
classification (MLC), (b) result of post-classification processing: noise and misclassification
removal by application of the focal statistics tool, the boundary clean toolset and removal of
small isolated regions of pixels, (c) reference data source for quality assessment (POMC) and
(d) spatial distribution of automatic classification errors on the orthophotomosaic.

Fig. 7. Differences between (a) the orthoimage from 2002 with 1 m pixel resolution, (b) from
2006 with 40 cm/pixel, (c) from 2015 with 20 cm/pixel and (d) from UAV photogrammetry
(2015) with 5 cm pixels. (orthophoto: EUROSENSE Slovakia).

Fig. 8. (a) Photorealistic colorized point cloud landscape model of the study area, (b)
classified point cloud (water - blue, ground - grey, vegetation - green, LWD - orange) and (c)
point cloud of the surface model containing only the ground and water classes. Raster-
generated (d) envelope surface model (DSM) with vegetation, (e) terrain digital elevation
model (DTM) and (f) DEM of differences between these models.

Fig. 9. (a) Identification of the Belá River main channel and several abandoned side arms
and (b) longitudinal profiles calculated from UAV photogrammetry 3D models. Identification
of (c) bar topography and (d) several vertical bar levels (e) on the cross-section plotted on the
point bar.

Fig. 10. Woody debris accumulation identification (a) on the orthophotomosaic and digital
elevation model with (b) three cross-sections plotted on the woody debris. Statistical
distribution (c) of values calculated from 516 woody debris spots and (d) spatial distribution
of woody debris accumulations in the study area.
Graphical abstract
Highlights
The five-step template for high-resolution mapping of river landscape.
Multiscale and multi-component aspects of riverine landscape object discrimination.
Point cloud classification and landform vertical diversity.
The accuracy of the supervised classification of a UAV-generated orthophotomosaic.
Volume assessment of objects based on a 3D model.
