
Geo-Visualization in 4D environment - Simulation of floods over an

Urban Area

Thesis submitted in partial fulfillment


of the requirements for the degree of

Master of Science (by Research)


in
Computer Science and Engineering

by

Vishal Tiwari
201002025
vishal.tiwari@research.iiit.ac.in

Lab for Spatial Informatics


International Institute of Information Technology
Hyderabad - 500032, INDIA
April 2017
Copyright © Vishal Tiwari, 2017
All Rights Reserved
To my family and friends.
Acknowledgments

I would like to express my very profound gratitude to my advisor Dr. K. S. Rajan for his continuous guidance and support, and for providing a nurturing environment for exploration and research. His constant encouragement and support have been a source of motivation throughout the years of my research. I could not have imagined having a better advisor and mentor.
To my parents, siblings and friends, for providing me with unfailing support and continuous encouragement throughout my years of study.

Abstract

Visualization of geographical data is one of the important aspects of a geographical information system. While 2D visualization techniques have been employed for decades, capturing dynamic phenomena like floods over static urban areas within a space-time framework is gaining importance in GIS. This is because viewing geospatial data in higher (3 or 4) dimensions reveals insightful information about events that may have been limited or missing before. The dynamics of certain phenomena are better visualized and understood when the three dimensions of space and the time dimension (both forward and backward) are employed to highlight their trajectories and object states over the region of interest; for instance, phenomena like ocean currents, atmospheric systems, airplane tracking, GPS tracks on terrain, etc.
In the recent past, with the availability of three-dimensional geospatial data and the advancements in GPU processing power, various computer graphics applications have emerged, including geospatial applications like virtual globes, city visualization, etc. While visualization of space-time dependent phenomena like floods over natural surfaces has been attempted, doing so in the presence of 3D non-natural objects, and developing it from a GIS perspective, is still a challenge.
From a geospatial perspective, depicting a space-time process requires not only the time state information of the phenomenon but also its integration with the surface model. This raises the challenge of capturing the phenomenon in the right geospatial context, i.e., the projection system. Other challenges include the ability to transform the data model to visualize the static or dynamic objects over the terrain, and creating a computational framework that can integrate the process models with such geospatial visualization approaches.
We make use of the publicly available 3D Berlin CityGML dataset for creating static urban models. As CityGML is an information model rather than a visualization model, using it for rendering, and integrating it with virtual globes, is non-trivial. We handle these issues by using 3DCityDB along with a tiling-based approach, which helps in rendering large CityGML datasets. Further, to make the building models more realistic, we map them with real-world textures. We achieve this by automated draping of building textures extracted from geo-tagged images captured by a cell phone camera with a built-in GPS; the camera pose and position recorded with each image are used to tag it to the corresponding building footprint, automating the process. The challenge of integrating the dynamic phenomenon with the static urban model is handled by creating digital surface models of the region of interest and using them as the base for the hydrological simulation.
Some attempts have been made to use animation to depict dynamic phenomena like floods, snow avalanches, etc. While these approaches do provide a good visualization of the effect, they are based on simulated scenarios that emphasize a smooth visual appearance of the depicted phenomenon. In the process, they overlook the locational interactions that a near-real process simulation of the phenomenon captures. In this work, we present a 4D GIS system to visualize space-time dependent phenomena: a simulated hydrological water flow model over an urban area. This work uses the calculated water depth information to present a near-real visual rendering, with emphasis on the visual interaction of the 3D objects (buildings) with the phenomenon captured in 4D (space and time). The developed system is built on NASA's WorldWind globe and uses a depth-filling algorithm to generate the time-step water depth maps that form the dynamic layer. The urban scene is derived from a static CityGML LOD2 buildings layer overlaid on a digital elevation model. The dynamic flow visualization is enabled through an appropriate color mapping scheme, so that the user can get a fair sense of the water depth at various areas of the city over the time period.
While visualization helps to understand the phenomenon and its progress, it is also important to provide an appropriate mechanism to derive or extract the relevant technical information from such a system. Towards this, analytical tools have been incorporated in the developed system: querying for water depth at any given time, display of hydrographs showing the variation of height over time at a given location, and a slider to control the time parameter of the system.
As the system uses 3DCityDB as the storage model, it is highly extensible for answering complex queries such as "which buildings in a specific area of the city will get flooded, and when?". Such queries are not yet built in and are left for further work. The static model represents only the buildings and does not take into account other city features like city furniture and vegetation. Further, the system does not capture the effect of the dynamic surface on the static model. Such interactions can be considered for future work.
Contents

Chapter Page

1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.1 GIS Visualization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.2 Motivation and context . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.3 Thesis Objective and Research Questions . . . . . . . . . . . . . . . . . . . . . . 3
1.4 Summary of Contribution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.5 Outline of the Thesis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5

2 Literature Review . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
2.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
2.2 3D Geo-spatial visualization experiments . . . . . . . . . . . . . . . . . . . . . . 6
2.2.1 Anaglyph 3D . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
2.2.2 Grass NVIZ . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
2.2.3 Relief Shading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
2.3 Buildings and Terrain Models . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
2.3.1 Naive Rendering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
2.3.2 VRML and X3D . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
2.3.3 Virtual Terrain Project . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
2.4 Virtual Globes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
2.4.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
2.4.2 Features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
2.4.2.1 Precision . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
2.4.2.2 Accuracy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
2.4.2.3 Curvature . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
2.4.2.4 Massive datasets . . . . . . . . . . . . . . . . . . . . . . . . . . 14
2.4.2.5 Multithreading . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
2.4.3 Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
2.4.3.1 NASA World Wind . . . . . . . . . . . . . . . . . . . . . . . . 15
2.4.3.2 OsgEarth . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
2.4.3.3 Cesium . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18


2.5 CityGML . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
2.5.1 About . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
2.5.2 Naive CityGML Building Rendering . . . . . . . . . . . . . . . . . . . . . 21

3 Texture extraction for automated texture draping of 3D geo-spatial objects . . . . . . . 23


3.1 Introduction and Background . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
3.1.1 Scene Graphs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
3.1.2 Chan-vese segmentation method . . . . . . . . . . . . . . . . . . . . . . . 24
3.2 Methodology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
3.2.1 Creating white models . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
3.2.2 Chan Vese based approach for texture extraction . . . . . . . . . . . . . . 26
3.2.3 Automated tagging of texture . . . . . . . . . . . . . . . . . . . . . . . . 28
3.3 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29

4 Interactive 4D Visualization of Dynamic environments on World Wind Globe - Simulated


Floods over an Urban Area . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
4.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
4.2 Framework . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
4.3 CityGML Rendering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
4.3.1 CityGML . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
4.3.2 Building Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
4.3.3 Dataset . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
4.3.4 3DCityDB and Schema . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
4.3.5 Rendering using 3DCityDB . . . . . . . . . . . . . . . . . . . . . . . . . 40
4.4 Digital Surface Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
4.5 Time series water depth map . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
4.6 Dynamic layer Rendering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
4.6.1 Analytic Surface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
4.6.2 Time based Rendering . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
4.6.2.1 Difference and interpolation . . . . . . . . . . . . . . . . . . . . 54
4.6.2.2 Coloring and legend . . . . . . . . . . . . . . . . . . . . . . . . 55
4.6.3 Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
4.7 Graphical User Interface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
4.7.1 Tools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
4.8 Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
4.9 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60

5 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62

Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
List of Figures

Figure Page

2.1 Anaglyph of a terrain surface generated using ILWIS . . . . . . . . . . . . . . . . 7


2.2 NVIZ visualizing a DEM data . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
2.3 Relief Shading of a 90m DEM . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
2.4 VRML created Virtual Globe . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
2.5 World Wind Architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
2.6 World Wind globe . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
2.7 osgEarth . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
2.8 Cesium Architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
2.9 Cesium Globe . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
2.10 Naive CityGML rendering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22

3.1 Flow Diagram of the system . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25


3.2 Created scene graph . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
3.3 Chan-Vese based texture Extraction . . . . . . . . . . . . . . . . . . . . . . . . . 27
3.4 Texture Draped 3D buildings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
3.5 Texture Draped 3D buildings - top view . . . . . . . . . . . . . . . . . . . . . . . 31
3.6 Texture Draped 3D buildings - semi street view . . . . . . . . . . . . . . . . . . . 31

4.1 Framework . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
4.2 Multiple level of details as described by Biljecki . . . . . . . . . . . . . . . . . . . 39
4.3 Simplified schema of building model from 3DCityDB.org . . . . . . . . . . . . . . 41
4.4 Initial Tile selection, green showing the fetched tiles . . . . . . . . . . . . . . . . . 43
4.5 Camera movement, red shows removed, and green fetching new tiles . . . . . . . . 44
4.6 Top level view of non-texture buildings . . . . . . . . . . . . . . . . . . . . . . . 45
4.7 Street level view of non texture buildings . . . . . . . . . . . . . . . . . . . . . . 46
4.8 Top level view of texture buildings . . . . . . . . . . . . . . . . . . . . . . . . . . 47
4.9 Street level view of texture buildings . . . . . . . . . . . . . . . . . . . . . . . . . 48
4.10 Rectangular region selection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
4.11 DEM of the bounding box . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50


4.12 DSM of the bounding box . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51


4.13 Time series depth maps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
4.14 Analytic Surface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
4.15 Top view of the time series visualization . . . . . . . . . . . . . . . . . . . . . . . 56
4.16 Slant view of the time series visualization . . . . . . . . . . . . . . . . . . . . . . 57
4.17 Street view of the time series visualization . . . . . . . . . . . . . . . . . . . . . . 58
4.18 Graphical User Interface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
4.19 HydroGraphs and depth Query . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
Chapter 1

Introduction

1.1 GIS Visualization


As concisely described by the National Geographic Society, a Geographic Information System (GIS) [1] is a computer system for capturing, storing, checking, and displaying data related to positions on the Earth's surface. GIS can show many different kinds of data on one map as layers, which enables people to more easily see, analyze, and understand patterns and relationships in the spatial domain.
Visualization is one of the important pillars of a geographical information system [1]. The users of a GIS need to view geographic data collected from various sources for analysis and decision-making across disciplines: policy making, traffic analysis, city planning [2] and so on. Statistical or data-based visualization of geospatial data can help these stakeholders better understand and appreciate the data, and aid in decision-making. Thus the aim of GIS visualization systems is to organize geospatial data into a coherent visual display that can be readily interpreted and understood.
Visualization takes various forms and covers a broad range of analytic tools and techniques, such as data and statistical visualization. Data visualization corresponds to the display of data that is too complex to process manually, where the final imagery is the product of an algorithm applied to a large data-set [3]; typical examples include network analysis, flood simulation, etc. Statistical visualization, on the other hand, puts the results of an analysis of spatial data into a statistical form for better understanding of the results, for example a graph, or a color classification of the vegetation distribution of a forest.
GIS has two main kinds of spatial data formats [4]. The vector data format consists of points, polylines and polygons, which can be used to represent cities, roads or vegetation area boundaries. The other type is the raster data format, which contains information at the pixel level; for example, a Digital Elevation Model contains the elevation data of a region on the surface of the earth.
All GIS systems support visualization of these two data types as 2D visualizations. Vector data, being a collection of geometries, is rendered after appropriate transformation between Coordinate Reference Systems, via high-level abstract rendering APIs that use OpenGL or DirectX for the lowest-level drawing calls. Similarly, for raster data visualization, the value of each pixel is mapped onto a grayscale or a color ramp in RGB space and displayed as an image in the projected display window.
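The per-pixel mapping described above can be sketched as follows; the linear blue-to-red ramp and the nested-list image representation are illustrative choices, not tied to any particular GIS package:

```python
def raster_to_rgb(raster, low=(0, 0, 255), high=(255, 0, 0)):
    """Map each raster cell value linearly onto a color ramp (here blue
    for the minimum to red for the maximum) in RGB space, the way a GIS
    display pipeline turns a raster band into an image."""
    flat = [v for row in raster for v in row]
    vmin, vmax = min(flat), max(flat)
    span = (vmax - vmin) or 1.0  # avoid division by zero on a flat raster
    image = []
    for row in raster:
        image.append([
            tuple(int(round(l + (h - l) * (v - vmin) / span))
                  for l, h in zip(low, high))
            for v in row
        ])
    return image
```

Grayscale display is the special case where both ramp endpoints lie on the gray axis, e.g. `low=(0, 0, 0)` and `high=(255, 255, 255)`.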

1.2 Motivation and context


All GIS systems support 2D visualization, and it is thus the most common kind. While 2D visualization techniques have been employed for decades, it is important to view the data in the higher dimensions of space and time. This is because viewing geospatial data in higher (3 or 4) dimensions reveals insightful information about events that may have been limited or missing before.
When it comes to 3D visualization, there are two perspectives: geospatial and non-geospatial. In the non-geospatial viewpoint, the focus is on designing algorithms for faster rendering and more appealing graphics in local space. In the geospatial perspective, visualizations are created in a georeferenced space from a GIS point of view, i.e., one can interact with the system and derive or extract relevant information.
3D is the new trend in the visualization of vector and raster geospatial data. For vector data, most data is extruded using a specified attribute of the shapefile. For example, if the vector data represent building footprints and have height as one of the attributes, then extruding the footprints to their heights results in basic building models, and thus a 3D visualization of that data. For raster data, either a grid-based or a TIN-based (Triangulated Irregular Network) [5] [6] approach is used to create 3D surfaces. For example, natural terrain surfaces can be rendered as TIN models from Digital Elevation Models.
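As a sketch of this extrusion, assuming a footprint given as a closed ring of (x, y) vertices and a height attribute read from the shapefile (both hypothetical inputs here):

```python
def extrude_footprint(footprint, height):
    """Extrude a 2D building footprint (closed ring of (x, y) vertices)
    to a given height, producing the quad faces of the walls plus the
    roof polygon -- the basic block model a naive 3D pipeline renders."""
    base = [(x, y, 0.0) for x, y in footprint]
    top = [(x, y, height) for x, y in footprint]
    walls = []
    n = len(footprint)
    for i in range(n):
        j = (i + 1) % n  # wrap around to close the ring
        walls.append([base[i], base[j], top[j], top[i]])
    return walls, top
```

Each wall quad and the roof polygon can then be handed to the rendering API as individual faces.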
When it comes to a spatiotemporal phenomenon, its dynamics are better visualized and understood when the three dimensions of space and the time dimension (both forward and backward) can be employed to highlight its trajectory and object states over the region of interest; for instance, phenomena like ocean currents, atmospheric systems, airplane tracking, GPS tracks on terrain, etc. Thus it is important to move towards an interactive system capable of rendering time-varying
phenomena in 3D space. We call such a system a 4D GIS system. Similar visualization systems have been developed, but they either lack the geospatial perspective or do not utilize the time dimension to animate the phenomenon they are trying to simulate. For example, the system of Leskins[7] creates visualizations of standing floods over a city area, where the stakeholders are policy makers; it attempts to bridge the gap between hydrological experts and policy makers but lacks time-dynamic flood visualization. The Integrated Tsunami Research and Information System (ITRIS)[8] is a 3D GIS system that visualizes tsunami travel charts on a virtual globe but does not create dynamic visualizations. Bender [9] presents a tool for creating static floods on top of terrains and buildings but lacks the GIS aspect.
As spatio-temporal visualizations render a dynamic phenomenon on top of static 3D urban models, different systems use different data models. For example, Leskins[7] used Lidar data for building rendering, while Bender[9] and Jiang[10] use shapefiles. Each of these methods faces problems, ranging from the computational cost of Lidar to the very simplistic models of shapefile rendering. Thus there is a need for a standardized data model, like the CityGML data model. Creating urban models from CityGML data has challenges of its own: as CityGML is an information model rather than a visualization model, using it for rendering is non-trivial.
Moving away from the traditional 2D GIS towards a system capable of visualizing static 3D objects like buildings, integrated with a dynamic phenomenon, is not supported by current GIS systems, and developing it is still a challenge. This is because depicting a space-time process requires not only the time state information of the phenomenon but also its integration with the surface model. This leads to the challenge of capturing the phenomenon in the right geospatial context, i.e., the projection system. This data capture is out of the scope of our work; attempts at capturing such a process using the GRASS overland flow hydrologic simulation [11] were made but were unsuccessful for technical reasons. Instead, we use a simple algorithm to capture such a process for demonstration purposes, which is discussed in chapter 4. Other challenges include the ability of such a system to transform the data model to visualize the static and dynamic objects over the terrain, and building a computational rendering framework that can integrate the process models with geospatial visualization techniques.

1.3 Thesis Objective and Research Questions


The thesis is centered on various visualization challenges in the GIS domain, moving away from the traditional 2D system towards 3D. It then defines 4D visualizations and shows how such 4D GIS systems can be built by moving away from traditional animation-based approaches. Before reaching the 4D visualization stage, some problems need to be addressed: how can information data models like CityGML be used for rendering urban models? How can these white models be draped with textures on their surfaces to create more realistic models of the city? How can a dynamic phenomenon be combined with the generated city models? These problems are summarized as follows:

1. How can 4D visualization systems be built for rendering dynamic phenomena on top of static urban 3D models?

2. Can such systems be interactive and queryable at a given time instance, thus moving from a purely visual presentation of information to a queryable information-model approach?

3. Can information models like CityGML be infused into the current virtual globes to create
static city models?

4. How can these white building models be made more realistic by draping actual textures of
the buildings, and can these textures be automatically extracted from geo-tagged images?

1.4 Summary of Contribution


The main contribution of the thesis is a 4D Geographic Information System that can render dynamic phenomena like floods on top of a 3D urban model. This system moves away from the traditional animation-based approach, where 4D visualization was achieved by combining snapshots of different time intervals and joining them to form a video- or GIF-like output. This research has resulted in the following contributions:

1. We present a 4D system that can visualize dynamic phenomena like floods on top of urban models from a GIS point of view. This system is queryable and interactive throughout the start and end time of the simulation, and thus moves away from the traditional systems mentioned before.

2. The system is capable of rendering buildings from the CityGML data model. As the CityGML model is primarily meant for storing information rather than for visualization, our system structures the data in a way that handles the rendering of city building data robustly, irrespective of the size of the data.

3. We propose a solution to the problem of automated texture draping of white models, so as to reduce the manual task of tagging each of the building surfaces with a texture.

1.5 Outline of the Thesis
• Chapter 2: This chapter starts by introducing different ways of visualizing 3D geospatial data and then discusses different visualization formats. It follows with a discussion of virtual globes and the advantages they bring when designing 3D GIS systems, along with some examples of virtual globes. It then gives a naive approach for rendering small CityGML data-sets.

• Chapter 3: The chapter proposes a method for automated texture draping on buildings from
geo-tagged images.

• Chapter 4: This chapter explains in detail the interactive 4D GIS system that visualizes dynamic environments, with sections describing the different components of the system. It starts with the rendering of the CityGML data model and how 3DCityDB comes in handy, followed by the creation of the time series depth maps required for dynamic visualizations. In the end, it explains how the temporal rendering is achieved.

Chapter 2

Literature Review

2.1 Introduction

The previous chapter discussed the importance of visualization in GIS systems and how increasing the dimensionality of visualization aids in understanding and analysing the spatial world, which in turn helps in better decision making. This has only become possible due to advancements in the processing power of Graphics Processing Units (GPUs). The increase in the number of triangles these GPUs can process per second has changed how geographic information systems render geospatial data, and new optimized rendering algorithms have further helped this boost.
In this chapter we look into various aspects of 3D visualization in GIS systems, followed by a series of experiments with existing technology that led us to use virtual globes as the base for our 4D visualization system.

2.2 3D Geo-spatial visualization experiments

The following are some of the 3D visualization approaches we explored, which led us to the importance of virtual globes and how useful they are in building 3D GIS systems.

2.2.1 Anaglyph 3D

Anaglyph[12] images are composed of two differently filtered colored images, one for each eye. The colors are chromatically opposite, typically red and cyan, and the images are viewed through correspondingly colored glasses, producing a 3D stereoscopic effect. The Integrated Land and Water Information System (ILWIS)[13] provides functions that take a set of stereo pair images and convert them to anaglyphs by marking the fiducial points on the stereo images. An example can be seen in figure 2.1.

Figure 2.1 Anaglyph of a terrain surface generated using ILWIS

Apart from aerial images or hill shades, anaglyphs are used for urban infrastructure as well, for example from building stereo images; such a system is developed by geoWeb3D [14].
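The red-cyan compositing behind an anaglyph can be sketched per pixel as below; this assumes the stereo pair is already registered (as ILWIS does via the fiducial points) and represents images as nested lists of (r, g, b) tuples:

```python
def anaglyph(left, right):
    """Combine two registered RGB images (2D grids of (r, g, b) tuples)
    into a red-cyan anaglyph: the red channel comes from the left-eye
    view, the green and blue channels from the right-eye view."""
    return [
        [(l[0], r[1], r[2]) for l, r in zip(lrow, rrow)]
        for lrow, rrow in zip(left, right)
    ]
```

Viewed through red-cyan glasses, each eye then sees only its own view, which the brain fuses into depth.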

2.2.2 Grass NVIZ

NVIZ[15] is a GRASS[16] module that helps the user render surfaces in 3D space. It can also drape the surface with color and texture, and overlay vector data on top of the 3D surface. Figure 2.2 shows an example of NVIZ.

Figure 2.2 NVIZ visualizing a DEM data

2.2.3 Relief Shading

Shaded relief maps [17] highlight features on the surface such as mountains, valleys, plateaus, and canyons. Regions of valleys and plateaus have flat features and appear smooth and continuous, while mountains, slopes and canyons appear rougher and more irregular. A relief shade can be seen in figure 2.3. Relief shades are created from elevation data; different shades can be obtained depending on the resolution of the DEM used, the vertical Z-exaggeration, and the position of the light source, which is defined by the azimuth and altitude [18].
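A minimal sketch of how such a shade is computed from the DEM, azimuth and altitude; this uses Horn's finite-difference slope and aspect estimate, a common choice but not necessarily the one behind figure 2.3:

```python
import math

def hillshade(dem, cellsize, azimuth_deg=315.0, altitude_deg=45.0, z_factor=1.0):
    """Shaded-relief value (0-255) for each interior cell of a DEM grid,
    using Horn's 3x3 finite-difference estimate of slope and aspect."""
    az = math.radians(360.0 - azimuth_deg + 90.0)  # compass to math angle
    alt = math.radians(altitude_deg)
    rows, cols = len(dem), len(dem[0])
    shade = [[0] * cols for _ in range(rows)]  # border cells stay 0
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            # Horn's weighted differences over the 3x3 neighborhood
            dzdx = ((dem[r-1][c+1] + 2*dem[r][c+1] + dem[r+1][c+1]) -
                    (dem[r-1][c-1] + 2*dem[r][c-1] + dem[r+1][c-1])) / (8.0 * cellsize)
            dzdy = ((dem[r+1][c-1] + 2*dem[r+1][c] + dem[r+1][c+1]) -
                    (dem[r-1][c-1] + 2*dem[r-1][c] + dem[r-1][c+1])) / (8.0 * cellsize)
            slope = math.atan(z_factor * math.hypot(dzdx, dzdy))
            aspect = math.atan2(dzdy, -dzdx)
            # illumination: light direction dotted with the surface normal
            v = (math.sin(alt) * math.cos(slope) +
                 math.cos(alt) * math.sin(slope) * math.cos(az - aspect))
            shade[r][c] = max(0, min(255, int(round(255.0 * v))))
    return shade
```

Raising `z_factor` exaggerates the vertical scale, which is how the Z-exaggeration mentioned above changes the resulting shade.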

2.3 Buildings and Terrain Models


To render a phenomenon over an urban area, we first need to create the urban scene, consisting of buildings on top of a terrain, over which the dynamic phenomenon can then be rendered.

Figure 2.3 Relief Shading of a 90m DEM

2.3.1 Naive Rendering

Building rendering was first done using naive extrusion of shapefiles [19] that have height as one of their attributes. The shapefile is a vector data format from which the contained polygons, polylines or points can be read using the GDAL library [20]. These geometries can then be passed to OpenGL for rendering. Similarly, for rendering the terrain, elevation data sources like the SRTM [21] DEM can be used: for each height value a point is stored, and the terrain is rendered using triangle strips. These approaches to building and terrain rendering are quite primitive and naive for the following reasons:

• Buildings can be quite complex structures in 3D space, while shapefiles were designed to store 2D vector data; using shapefiles for rendering buildings results in box-like structures and loses features like roofs.

• Rendering the terrain using naive algorithms drastically drops the frame rate when working with large data sets. Thus level-of-detail based terrain rendering algorithms should be used for better performance.

• This model lacks proper overlay of building and terrain models. The geometry coordinates should be converted to a common projection system before rendering, so that it is possible to overlay the buildings and terrain, and the buildings should then be properly placed on top of the terrain.

• Performance degrades when working with large data-sets. Thus, for better performance, tiling of building and terrain data needs to be implemented. This divides the data so that only the part required, depending on the camera position, is rendered.
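The naive triangle-strip terrain pass criticized above can be sketched as follows; this is an illustrative layout only, and real code would hand each strip to OpenGL as a GL_TRIANGLE_STRIP:

```python
def grid_triangle_strips(heights, cellsize):
    """Turn a DEM grid into one triangle strip of (x, y, z) vertices per
    row pair: alternate a vertex from row i and row i+1, so that every
    vertex after the first two completes one new triangle."""
    rows, cols = len(heights), len(heights[0])
    strips = []
    for i in range(rows - 1):
        strip = []
        for j in range(cols):
            strip.append((j * cellsize, i * cellsize, heights[i][j]))
            strip.append((j * cellsize, (i + 1) * cellsize, heights[i + 1][j]))
        strips.append(strip)
    return strips
```

Every cell of the grid becomes two triangles, which is exactly why the vertex count, and hence the frame rate, degrades so quickly for large DEMs without level-of-detail or tiling.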

2.3.2 VRML and X3D

The Virtual Reality Modeling Language [22] is a standard file format for representing 3D vector graphics, in which the vertices and edges of a 3D polygon can be specified along with color, transparency and texture. The GeoVRML [23] [24] standard builds on top of VRML and enables geo-referenced data, like aerial images and triangulated terrains, to be visualized in a web browser with a VRML plugin. Scripts or programs (written in Java or ECMAScript) can be linked to events and executed when a particular action is performed. All this is achieved by writing standard text files.
An example of a VRML shape can be seen below:

#A Cylinder
Shape {
  appearance Appearance {
    material Material {
      diffuseColor 0.75 0.5 1.0
      specularColor 0.7 0.7 0.8
      shininess 0.1
    }
  }
  geometry Cylinder {
    height 0.2
    radius 3.0
  }
}

GeoVRML was later introduced to add support for geo-referenced data. However, the high programming effort of writing VRML to create virtual globes made it difficult for users to use or extend it. The VRML community later changed the name to X3D [25], standardising the format by expressing it in XML with well-defined schemas, which is less error prone and easier to debug than the VRML files. An example of a VRML virtual globe is shown in figure 2.4.
X3D supports multi-texture rendering, shading and lighting, along with normal maps. Later
versions of X3D support optimizations like binary space partitioning (BSP), quadtrees, octrees
and culling at the different rendering levels of a scene.
A number of tools built on these formats can render urban scenes and offer an edge over naive
rendering approaches, for example geo-referencing of data for overlaying building and terrain
data, and rendering of continuous LOD terrains. However, they still lack features such as tiling of
large data-sets, placement of building models on top of terrain, and the generalization algorithms
needed for building GIS applications.

The corresponding X3D node signature for a cylinder is:

Cylinder : X3DGeometryNode {
  SFNode  [in,out] metadata NULL [X3DMetadataObject]
  SFBool  []       bottom   TRUE
  SFFloat []       height   2    (0,∞)
  SFFloat []       radius   1    (0,∞)
  SFBool  []       side     TRUE
  SFBool  []       solid    TRUE
  SFBool  []       top      TRUE
}

Figure 2.4 VRML created Virtual Globe

2.3.3 Virtual Terrain Project

The Virtual Terrain Project [26] aims to provide a platform for developing tools that make it
easy to reconstruct any part of the world in an interactive 3D form. This goal requires the
synergetic convergence of fields like CAD, GIS, computer graphics, visual simulation, surveying
and remote sensing. VTP provides a set of open-source software tools, including an interactive
runtime environment (VTP Enviro), that support the visualization of various rendering models.
For its runtime environment, VTP creates a scene graph and exposes APIs via vtlib, which can
be used to manipulate the scene graph and thereby change the rendered scene. The vtlib library
is a C++ library that can render terrains from geospatial elevation data and add other 3D models
on top of them. It uses GDAL [20] for geospatial data processing and OpenSceneGraph [27] for
3D rendering of objects. It stores DEM data in its vtdata model and uses vtEngine to tessellate
the elevation data into terrain meshes, further supporting continuous level-of-detail rendering
algorithms. A vtScene object in the library is a node of a scene graph that is to be rendered on
screen; rendering itself is done through the OpenSceneGraph APIs, which issue the OpenGL calls.
The Virtual Terrain Project can thus be thought of as an underlying API on which a virtual globe
could be built: it provides a terrain rendering engine with a number of level-of-detail terrain
rendering algorithms, but it lacks basic features that virtual globes provide, which are discussed
in the next section.

2.4 Virtual Globes

2.4.1 Introduction
A virtual globe [28] is a 3D software representation of the Earth or another planet. It lets the
user move freely around a virtual environment of the Earth. Virtual globes differ from regular
globes in that they support viewing at many levels of detail: from the globe as seen from space,
down to the valleys, troughs and crests of the mountains in the Himalayas, and further to the
buildings and roads of a city. They can also represent attributes attached to specific areas of the
globe, a standard feature of a Geographic Information System (GIS).
Virtual globes are known for their powerful rendering of very large elevation, imagery and
vector data-sets. These data-sets are generally so huge that they are hosted on dedicated data
servers, from which the globes retrieve them for rendering and visualization. They are of the
scale of hundreds of terabytes and are continuously growing.

2.4.2 Features
Cozzi and Ring, in their book on designing a virtual globe [29], note several challenges from a
rendering point of view; some of these are discussed in this section.

2.4.2.1 Precision

Virtual globes let the user view the globe at various scales, from the globe as a whole down to
street level. They therefore require a large view distance and large world coordinates. Trying to
render such massive scenes using close near planes, very distant far planes and large
single-precision floating-point coordinates leads to z-fighting artifacts and jittering.

2.4.2.2 Accuracy

Another challenge comes when we try to model the Earth accurately. Treating the Earth as a
perfect sphere simplifies matters, but the Earth's radius is about 21 km larger at the equator than
at the poles. It is important to take this into account, or errors are introduced when we try to
position space assets or models.
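The 21 km figure follows directly from the WGS84 defining constants (standard public values):

```python
# WGS84 defining constants
a = 6378137.0               # semi-major (equatorial) radius, metres
f = 1 / 298.257223563       # flattening
b = a * (1 - f)             # semi-minor (polar) radius

print(round((a - b) / 1000, 1))  # → 21.4 km difference between equator and poles
```

Positioning code that assumes a sphere therefore mislocates objects by up to this amount near the poles, which is why virtual globes model the Earth as an ellipsoid.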

2.4.2.3 Curvature

The curvature of the Earth, whether modeled with a sphere or a more accurate representation,
presents additional challenges compared to many graphics applications where the world is
extruded from a plane: lines in a planar world become curves on the Earth, oversampling can
occur as latitude approaches 90◦ and -90◦, a singularity exists at the poles, and special care is
often needed to handle the International Date Line.

2.4.2.4 Massive datasets

Real-world elevation and aerial imagery datasets are massive and obviously won't fit in GPU
memory, system memory, or even on a single hard disk. Virtual globes have data servers that
store these massive datasets and fetch data based on view parameters, using a technique called
out-of-core rendering.
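Out-of-core fetching is keyed by tile indices computed from the view. For Web-Mercator tile schemes, the standard XYZ ("slippy map") indexing can be sketched as follows (the coordinates in the example are illustrative):

```python
import math

def tile_indices(lat_deg, lon_deg, zoom):
    """XYZ (slippy-map) tile indices for a WGS84 coordinate in Web Mercator."""
    n = 2 ** zoom
    x = int((lon_deg + 180.0) / 360.0 * n)
    y = int((1.0 - math.asinh(math.tan(math.radians(lat_deg))) / math.pi) / 2.0 * n)
    return x, y   # note: TMS numbering flips the row, y_tms = n - 1 - y

print(tile_indices(17.445, 78.349, 12))  # a point near Hyderabad at zoom 12
```

Given the camera's footprint on the globe, the engine requests only the tiles whose indices fall inside it, at a zoom level matched to the viewing distance.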

2.4.2.5 Multithreading

In virtual globes, multithreading is an essential part of the 3D engine. As the viewer moves,
a virtual globe needs to perform various tasks at the same time, such as paging data in by making
WMS requests to the data servers, rendering the terrain and processing data. If everything were
done in the rendering thread, the application would stall; therefore, virtual globe resources are
loaded and processed in one or more separate threads.
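A minimal sketch of this pattern: a background worker drains a request queue and fills a cache, while the rendering thread only consults the cache (the tile ids and the fetch function below are made up):

```python
import queue
import threading

def fetch_tile(tile_id):
    # stand-in for a network request (e.g. a WMS/TMS fetch); hypothetical
    return "data-for-" + tile_id

def loader(requests, loaded):
    """Background worker: drain the request queue and fill the cache."""
    while True:
        tile_id = requests.get()
        if tile_id is None:          # sentinel: shut down
            break
        loaded[tile_id] = fetch_tile(tile_id)
        requests.task_done()

requests = queue.Queue()
loaded = {}                          # cache consulted by the render thread
worker = threading.Thread(target=loader, args=(requests, loaded), daemon=True)
worker.start()

for tile in ["tile_3_1_2", "tile_3_1_3"]:   # made-up tile ids
    requests.put(tile)
requests.join()                      # demo only: a real render loop never blocks here
requests.put(None)                   # stop the worker
worker.join()

print(sorted(loaded))
```

In a real engine the render loop simply skips tiles that are not yet in the cache and draws a coarser parent instead, so the frame rate never depends on network latency.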

2.4.3 Examples
In the following section we will explore some open source virtual globes in details, covering
their architecture in detail.

2.4.3.1 NASA World Wind

World Wind [30] [31] is a desktop virtual globe developed by NASA in 2003 for use on
personal computers; a web-based version is now also available.
World Wind can display 3D geographical information within Java applications [32]. An
application places one or more WorldWindow objects in its user interface; each WorldWindow
renders the 3D geographical content and other information related to the application.
World Wind components are extensible. The API is defined primarily by interfaces, so
components can be selectively replaced by alternative implementations, for example one's own
terrain tessellation algorithm. Concrete classes can also be replaced or extended. Extensibility is
one of the fundamental objectives of World Wind. Figure 2.5 shows the architecture diagram of
the World Wind globe.

Figure 2.5 World Wind Architecture

Figure 2.6 shows the world wind globe.

• Model:

– Globe: It represents the Earth's shape and terrain. It has a tessellator that generates the
terrain and is also responsible for LOD-based rendering of the terrain.

Figure 2.6 World Wind globe

– Layer: A layer of objects on the globe for the application. It applies aerial imagery to
the terrain, places building polygons on top of it, or holds any World Wind renderable,
very similar to layers in a GIS.

• View: It gives the view of a model and can be changed by input events from the user.

• Scene Controller: Controls the rendering of the globe. It is associated with model and view.

In a typical application, the globe that creates the terrain is handled by World Wind itself, while
the layers represent the geographical data that the application visualizes on the globe. The two
are combined into a model, which is passed to a WorldWindow object created by the application.
The WorldWindow's scene controller subsequently manages the display of the globe and its
layers, along with an interactive view that defines the user's view of the scene.

• Data Retrieval and Offline Mode: As virtual globes handle massive amounts of data, the
data cannot be stored entirely on the local disk; World Wind therefore retrieves data from
data servers and caches it locally for better performance. The size of the cache is managed
by the application, and all data retrieval is handled by a separate thread running in the
background. An offline mode is also available that disables fetching data over the network
and uses only cached data for rendering.

• Picking and Selection: World Wind can determine the displayed objects at a given screen
position, typically the cursor position, in a WorldWindow. It can also determine the geo-
graphic position beneath the cursor. Both of these operations are performed automatically.
The results are delivered to the application via Select events, and can also be queried from
the WorldWindow.

2.4.3.2 OsgEarth

OsgEarth [33] is an open-source virtual globe that provides a geospatial SDK and a terrain
engine; it is written in C++ and maintained by Pelican Mapping. OsgEarth supports the
development of 3D geospatial applications on top of OpenSceneGraph and can also visualize
terrain models directly from source data.
OsgEarth drives such a visualization from an earth file, in which one defines a data model
called a map. A map is a container for all the aerial imagery, elevation data, vector models and
other feature layers. A MapNode, attached to the root node of the OSG scene, holds the map data
and the osgEarth-related nodes. A TerrainEngineContainer node is attached to the MapNode and
has a TerrainEngine child, which is responsible for creating the terrain. A Decorator node,
attached to the TerrainEngineContainer and also linked to the TerrainEngine, is responsible for
draping textures and surface properties and for clamping objects on top of the terrain. Multiple
model or feature nodes can be attached to the MapNode so as to render them on osgEarth; each
Models node may have multiple model nodes attached to it, containing the vector coordinate
data. Earth files are handy because they can be handled dynamically by the provided SDK. A
sample earth file can be seen below.

<map name="Boston Demo" type="geocentric" version="2">


<image name="readymap_imagery" driver="tms">
<url>http://readymap.org/readymap/tiles/1.0.0/22/</url>
</image>
<elevation name="readymap_elevation" driver="tms">
<url>http://readymap.org/readymap/tiles/1.0.0/9/</url>
</elevation>

<model name="buildings" driver="feature_geom">
<features name="buildings" driver="ogr">
<url>TO_PATH.shp</url>
<build_spatial_index>true</build_spatial_index>
<resample min_length="2.5"/>
</features>
<layout tile_size="500">
<level name="default" max_range="20000" style="buildings"/>
</layout>
</model>
<external>
<viewpoints>
<viewpoint name="Sample Viewpoint" heading="24.261" height="0
</viewpoints>
<sky driver="simple" hours="14.0"/>
</external>
</map>

2.4.3.3 Cesium

Cesium [34] is a web-based virtual globe. It uses WebGL to render graphics in the browser. A
web-based virtual globe has the advantage of high portability: it saves the effort of setting up an
application on an operating system and is independent of the underlying OS.
Cesium is structured like a graphics engine, but as one moves up the layers of abstraction the
classes become more specific to the virtual globe domain. Figure 2.8 shows the architecture of
the virtual globe engine.

Figure 2.7 osgEarth

At the bottom of the stack is the Renderer, a WebGL abstraction layer that manages resources,
issues the draw calls to render polygons, and tracks all the state needed to execute them, for
example shaders, uniforms, VBOs and VAOs.

The next layer on top of the Renderer is the Scene, which is responsible for rendering a frame:
it requests commands from the higher levels of Cesium, culls and orders them, and dispatches
them to the Renderer.

The top layer in the stack, built on the Renderer and the Scene, is the Primitives, which
represent real-world objects that are rendered by creating commands and providing them to the
Scene. Among the primitives is a globe primitive responsible for rendering the terrain, imagery
and animated water; it uses a quadtree data structure for hierarchical level-of-detail rendering of
the terrain.

Figure 2.8 Cesium Architecture

2.5 CityGML

As mentioned above, shapefiles are not suitable for representing building structures; a different
data model is therefore needed for rendering buildings. One such model is the CityGML [35]
format, an XML-based structured data standard from the Open Geospatial Consortium (OGC).

2.5.1 About

City Geography Markup Language is an information model for representing 3D urban objects.
It is a concept for modeling and exchanging 3D city models and is becoming popular across the
globe. It is an open, XML-based data model, implemented as an application schema on top of
Geography Markup Language version 3.1.1, an international standard for the exchange of
geospatial data issued by the Open Geospatial Consortium (OGC).

Figure 2.9 Cesium Globe

2.5.2 Naive CityGML Building Rendering

In the very basic CityGML rendering, we extract the building geometries and put everything
in a single layer. The layer is thus created as a whole, and buildings have no individual identity.
The rendering process starts by extracting the geometries of the different parts of a building: the
GroundSurface, WallSurface and RoofSurface elements, and the solids that are part of the LOD1
building model. Each of these LOD2 surfaces defines an exterior polygon in a LinearRing
element, which must be extracted to obtain the coordinates of each surface. The CityGML data
is parsed using the citygml4j library; citygml4j [36] is an open-source library written in Java that
makes reading, writing and processing CityGML data easy and helps in developing
CityGML-aware software.
The extracted data is stored in a data model defined in the system. Each instance of the data
model contains four kinds of lists, one for each surface class. To support multiple CityGML files,
multiple instances of these data models are created. These data models are used to create
RenderableLayers. A RenderableLayer is a World Wind class containing a set of geometries
along with their material properties, which is rendered by the underlying OpenGL rendering
APIs. The geometries are projected from the local projection system to the world coordinate
system, i.e. latitude and longitude. This is an important step in CityGML visualization, as a
wrong conversion, or no projection at all, would render the buildings at the wrong location on
the globe or produce otherwise unusual results. The conversion is done using the Java binding of
the Geospatial Data Abstraction Library (GDAL) [20], a translator library for raster and vector
geospatial data formats. The result can be seen in figure 2.10.

Figure 2.10 Naive CityGML rendering

The above approach has a number of issues. One major problem is the size of the data-set: as
the number of buildings increases, performance degrades. The time required just to read a very
large CityGML file on a single thread shoots up and consumes a large amount of memory, and
even after the data is read, the frame rate remains very low on large data-sets. Secondly, as the
model contains four lists, one each for wall, roof, ground and solid surfaces, there is no sense of
individual buildings in the data model that renders the urban scene. Thirdly, there was no support
for draping textures on these models.
We improve on this by integrating the 3DCityDB [37] schema into our solution; this is
discussed in chapter 4.

Chapter 3

Texture extraction for automated texture draping of 3D geo-spatial objects

In the previous chapter, we saw various aspects of 3D visualization in the GIS space and of
rendering an urban area with buildings and terrain. We then covered the basic principles behind
virtual globes and the challenges they solve, which make them powerful and useful platforms for
building applications.
Toward the goal of 4D visualization, we also want a more realistic virtual world: an urban area
whose buildings have their real-world textures draped on them, making the scene more visually
appealing and giving the user familiarity with the area.
In this chapter we address the problem of visualizing vector building data on top of a terrain,
with real-world textures extracted from geo-tagged camera images.

3.1 Introduction and Background


Visualization is one of the key aspects of GIS, as it lets us see data with geospatial properties
and the results of analyses performed on that data. Since we live in a three-dimensional world,
having the extra dimension available for viewing maps brings more perspective to a GIS, and 3D
visualization provides a whole new way of looking at data that 2D platforms cannot offer. In
addition, advances in the rendering capability of graphics processing units and the development
of fast rendering algorithms have boosted the growth of 3D applications over the past two
decades. Advanced technologies for collecting 3D data, from laser scanning of cities to projects
like SRTM that aim to map the elevation of the Earth globally, have created the need for systems
to process and visualize this data. Such data can be used to visualize urban areas, help in urban
planning, simulate dynamic phenomena like floods, and support querying of the virtual world.
Many commercial systems exist in this space, like the 3D Analyst extension of ArcView (ESRI)
or IMAGINE VirtualGIS (ERDAS), as well as open-source projects like osgEarth, the Virtual
Terrain Project and many more [38][39].
One of the challenges today is that these systems require a lot of manual intervention to obtain
the final urban model: after the buildings are rendered, textures may have to be manually
extracted from building images and then manually draped onto the buildings. In this chapter we
address this problem and try to automate the process. We also handle a limitation of osgEarth,
which cannot apply an individual texture to an individual building and instead selects a texture at
random.
In the following subsections, we look at the scene graph used to create the urban scene and
the segmentation method used for extracting building textures.

3.1.1 Scene Graphs

A scene graph is a data structure used by vector drawing applications and games. It is a
collection of nodes organized in a graph or tree, where a node can have multiple children. One
advantage of a scene graph is that an effect applied to a parent propagates to all its children down
the tree. This is very useful, as a collection of small shapes can be grouped into one object: for
example, if an object made of several small components needs to be moved, adding a single
transform node above that object's sub-nodes moves the whole object.
One widely used scene graph technology is OpenSceneGraph [27], which provides a set of
basic APIs to create scene graphs, traverse them, and render the scene created by the developer.

3.1.2 Chan-vese segmentation method

The Chan-Vese algorithm [40] [41] is an example of a geometric active contour model. It
begins by defining a contour in the image plane as the initial segmentation, and this contour is
then evolved according to an evolution equation. The aim of the method is to evolve the contour
so that it stops on the boundary of the foreground region. There are various ways of defining this
equation; the rate of contour movement may depend on the local curvature at a given point or on
the gradient of the image at that point.
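As a rough illustration of the underlying idea, the piecewise-constant two-phase model repeatedly estimates the mean intensities c1 (inside) and c2 (outside the contour) and reassigns each pixel to whichever mean fits it better. The sketch below implements only this data term, omitting the curvature regularization of the full Chan-Vese model:

```python
import numpy as np

def two_phase_segment(img, iters=10):
    """Minimal piecewise-constant two-phase segmentation (no curvature term)."""
    mask = img > img.mean()          # initial partition of the image
    for _ in range(iters):
        c1 = img[mask].mean()        # mean intensity inside the contour
        c2 = img[~mask].mean()       # mean intensity outside
        # reassign each pixel to the closer of the two means
        new_mask = (img - c1) ** 2 < (img - c2) ** 2
        if np.array_equal(new_mask, mask):
            break                    # converged
        mask = new_mask
    return mask

# synthetic test image: a bright building-like block on a dark background
img = np.zeros((20, 20))
img[5:15, 5:15] = 1.0
seg = two_phase_segment(img)
print(seg.sum())   # number of pixels labelled as foreground
```

The full algorithm additionally penalises contour length, which is what suppresses the small noisy blobs that the cleaning steps described later have to remove.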

3.2 Methodology

The proposed method consists of three parts, which are as follows: a) creation of the white
models, b) a Chan-Vese based method for extracting building textures, and c) automated tagging
of textures. Figure 3.1 shows the proposed approach.

Figure 3.1 Flow Diagram of the system

3.2.1 Creating white models

White models are simply 3D representations of the buildings without any texture draped on
them. OsgEarth provides a number of services for retrieving data over the web; we use TMS
(Tile Map Service) [42] [43] to retrieve aerial images and DEM data for terrain rendering from
the ReadyMap server. OsgEarth has a map data model which stores all the other data models as
layers, mainly elevation, imagery and model layers. This map data model is used to create a
MapNode, which is responsible for rendering all the different data layers. A terrain engine node
created under the MapNode renders the terrain using a GPU-based level-of-detail (LOD)
algorithm, which divides the terrain dataset into rectangular patches of different resolutions. The
elevation data required for the terrain is obtained by making a TMS request for SRTM data over
the area of interest to the ReadyMap server; draping of the aerial imagery on top of the terrain is
also handled by the terrain engine and is likewise obtained via TMS requests.
All vector data sits under the model layer in the map data model. A Model node created under
the MapNode holds the different models or vector datasets, one per node; each model can be a
dataset of buildings, roads, lakes, forests, etc. The 3D models of the buildings are created by
extruding the building polygons to the building heights, which are stored as an attribute in the
vector data. One thing that has to be taken care of is the projection system of the vector data: the
layers are reprojected to the regional UTM projection so that the different models overlay
properly. The scene graph created by osgEarth can be seen in figure 3.2.

Figure 3.2 Created scene graph

3.2.2 Chan Vese based approach for texture extraction

A number of approaches have been suggested in the past for extracting building textures using
various segmentation techniques. Our segmentation approach is similar to the method proposed
by M. Turker and E. Sumer [44], but instead of watershed segmentation we use the
active-contour based segmentation approach of Chan, T.F., and Vese, L.A. [40][41]. We assume
that the image contains only one building (as seen in figure 3.3), which is the object we want to
segment out and which occupies the majority of the image area. Using the multiphase contour
segmentation approach, with both starting contours set to the outer image boundary, we obtain
the building region excluding the windows. This image requires some cleaning because of the
presence of unwanted blobs; morphological and median filtering operations are applied for this
purpose. We then fill the enclosed window holes by finding connected components and adding
the small window components to the building segment based on the pixel count of each
component, thus obtaining the segmented building region. After segmentation, Canny edge
detection is applied to the segmented region to extract the edges, followed by morphological
operations to obtain prominent edges. The four corner points closest to the image boundary are
selected and used to form the bounding rectangle for the texture. Some area inside the rectangle
near an edge may remain unfilled or contain holes, so a geometric rectification is performed to
obtain the final texture.
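One plausible reading of the corner-selection step, picking the segmented-mask pixels nearest the four image corners to bound the facade, can be sketched as follows (a simplified illustration; the real pipeline additionally applies edge detection and geometric rectification):

```python
import numpy as np

def corner_points(mask):
    """Pick the four mask pixels closest to the image corners, as (row, col)."""
    h, w = mask.shape
    ys, xs = np.nonzero(mask)
    pts = np.stack([ys, xs], axis=1)
    corners = np.array([[0, 0], [0, w - 1], [h - 1, 0], [h - 1, w - 1]])
    picked = []
    for c in corners:
        d = ((pts - c) ** 2).sum(axis=1)   # squared distance to this corner
        picked.append(tuple(pts[d.argmin()]))
    return picked

mask = np.zeros((10, 10), dtype=bool)
mask[2:8, 3:9] = True                      # a rectangular building segment
print(corner_points(mask))
```

The quadrilateral spanned by these four points is what gets warped into an axis-aligned rectangle to produce the final facade texture.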

Figure 3.3 Chan-Vese based texture Extraction

3.2.3 Automated tagging of texture

We want to apply a specific texture to each building in our model layer, for which we need a
mapping from the texture dataset to the corresponding buildings. As the textures are geo-tagged,
we know the location from which each image was taken, and we also have the camera pose.
From these parameters, a straightforward approach is to find all the polygonal footprints in the
2D map that intersect the ray originating from the camera position, and to select the building
closest to that position. This mapping is stored in an XML file where each texture carries one
tag, the id of the building; an illustration can be seen in figure 3.4. If a building is not mapped to
any texture, a random texture from a texture set is applied to it.
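The ray-footprint test can be sketched in 2D: intersect the camera ray with every footprint edge and keep the building hit at the smallest ray parameter (the footprints and camera pose below are made-up values):

```python
def ray_segment_t(px, py, dx, dy, ax, ay, bx, by):
    """Ray parameter t >= 0 where ray p + t*d crosses segment a-b, else None."""
    rx, ry = bx - ax, by - ay
    denom = dx * ry - dy * rx          # cross(d, r); zero means parallel
    if abs(denom) < 1e-12:
        return None
    t = ((ax - px) * ry - (ay - py) * rx) / denom   # distance along the ray
    u = ((ax - px) * dy - (ay - py) * dx) / denom   # position along the segment
    return t if t >= 0 and 0 <= u <= 1 else None

def nearest_building(camera, direction, footprints):
    """Return the id of the footprint first hit by the camera ray, or None."""
    px, py = camera
    dx, dy = direction
    best_id, best_t = None, float("inf")
    for bid, poly in footprints.items():
        n = len(poly)
        for i in range(n):
            (ax, ay), (bx, by) = poly[i], poly[(i + 1) % n]
            t = ray_segment_t(px, py, dx, dy, ax, ay, bx, by)
            if t is not None and t < best_t:
                best_id, best_t = bid, t
    return best_id

footprints = {                              # hypothetical building footprints
    "bldg_A": [(5, -1), (7, -1), (7, 1), (5, 1)],
    "bldg_B": [(12, -1), (14, -1), (14, 1), (12, 1)],
}
print(nearest_building((0, 0), (1, 0), footprints))  # → bldg_A, the closer hit
```

Taking the smallest t ensures that a facade hidden behind a nearer building is never tagged with the photograph.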
While creating the scene graph, we take the vector layer and create a separate node under the
model node for each building. Each building carries its own id as a tag, which is matched in the
XML file to obtain that building's texture, thus yielding the draped buildings. Below is a sample
of the catalog.xml file.

<?xml version="1.0" ?>


<resources name="texlib-us">
<!-- tiled repeatables -->
<skin name="Texture1" tags="IIIT4">
<url>extractedTexture/building4.jpg</url>
<tiled>true</tiled>
<image_width>20</image_width>
<image_height>30</image_height>
<texture_mode>modulate</texture_mode>
</skin>
</resources>

Some of the EXIF data of a geo-tagged image is shown below:

Tag                        Value
YCbCr Positioning          Centered
Compression                JPEG compression
Color Space                sRGB
Pixel Y Dimension          2448
Exif Version               Exif Version 2.2
Pixel X Dimension          3264
ISO Speed Ratings          100
F-Number                   f/2.4
GPS Image Direction        325
GPS Image Direction        M
North or South Latitude    N
Latitude                   17, 26, 45.1300
East or West Longitude     E
Longitude                  78, 20, 56.7535
Altitude Reference         Sea level
Altitude                   526.000
GPS Time (Atomic Clock)    07:46:38.00
Name of GPS Processi       ASCII
GPS Date                   2014:10:12

The results of texture extraction can be seen in figure 3.5 and 3.6.

3.3 Conclusion

This chapter demonstrates the use of a Chan-Vese based method to extract textures from
geo-tagged images and automatically drape them over the corresponding faces of the 3D white
models. The method gives near-realistic models because it maps buildings to their real-world
textures. In addition, while draping, the textures are scaled to the dimensions of the building
face by osgEarth itself.
One of the challenges faced was obtaining images for all the faces of a building and draping
each onto the corresponding wall. Extracting multiple building faces from a single image, and
tagging them to the corresponding building faces, also still needs to be taken care of. Further,
improved segmentation methods may be needed to handle images that contain multiple building
faces or occlusions due to trees and other road furniture.

Figure 3.4 Texture Draped 3D buildings

Figure 3.5 Texture Draped 3D buildings - top view

Figure 3.6 Texture Draped 3D buildings - semi street view

Chapter 4

Interactive 4D Visualization of Dynamic environments on World Wind Globe - Simulated Floods over an Urban Area

4.1 Introduction

In the recent past, advancements in GPU processing power have led to various computer
graphics applications, including adoption for geospatial applications like building or city
visualisation [45] [46]. While 3D visualization of dynamic phenomena like floods over natural
surfaces has been attempted, doing so in the presence of 3D non-natural objects is still a
challenge. In addition, to make such visualizations interactive it is necessary to incorporate the
time dimension along with the 3D water flows over the digital surface model.
In this chapter a 4D GIS system is presented to visualize a dynamic phenomenon: a simulated
hydrological water-flow model over an urban area. This work moves away from
animation-based approaches and instead uses the computed water-depth information to present a
near-real visual rendering. The developed system is built on NASA's World Wind globe and
takes as input the time-stepped water-depth maps generated by a depth-filling algorithm; these
form the dynamic layer. The urban scene is derived from a static CityGML LOD2 building layer
overlaid on the digital elevation map. The dynamic flow visualization is enabled through an
appropriate colour-mapping scheme, so that the user gets a fair sense of the water depth across
the city over the time period. In addition, the system provides analytical tools such as querying
the water depth at any time step, hydrographs showing the variation of depth over time, and a
slider to control the time parameter of the system.
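A colour-mapping scheme of this kind can be sketched as a simple linear ramp from transparent through light to dark blue (the depth range and colours here are illustrative, not the ones used by the system):

```python
def depth_to_rgba(depth, max_depth=5.0):
    """Map water depth (m) to an RGBA colour: shallow = light blue, deep = dark blue."""
    if depth <= 0:
        return (0, 0, 0, 0)                  # no water: fully transparent
    f = min(depth / max_depth, 1.0)          # normalise to [0, 1]
    r = int(180 * (1 - f))                   # fade red and green out with depth
    g = int(220 * (1 - f))
    b = 255
    a = int(90 + 165 * f)                    # deeper water is more opaque
    return (r, g, b, a)

print(depth_to_rgba(0.5))    # shallow water: pale, semi-transparent blue
print(depth_to_rgba(10.0))   # clamped at max_depth: opaque dark blue
```

Keeping shallow water semi-transparent lets the underlying imagery and buildings remain visible, which is what makes the flood extent readable against the static urban scene.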
As mentioned in chapter 2, World Wind is a Java-based free and open-source virtual globe
that is highly extensible. Its APIs are primarily defined by Java interfaces, so it is easy to
implement one's own version of an API or extend an existing implementation. World Wind also
has a renderable class called AnalyticSurface that can be extended to render dynamic surfaces.
For these reasons, our work uses NASA's World Wind virtual globe to create a 4D visualization
system for changing environments.
Previous work on visualizing dynamic environments includes the flood-water visualisations of
Leskins [7], who develop realistic visualisations of standing floods over a city area for decision
makers, bridging the gap between the hydrological and simulation experts and the decision
makers, but lack the ability to show time-dynamic floods.
The Integrated Tsunami Research and Information System (ITRIS) [8] is another 3D GIS
system; it visualizes tsunami simulations and their results by creating a tsunami travel chart on a
virtual globe.
A web-based 3D flood-inundation visualization system [10] finds the inundated area using the
area's DEM [47] and marks it based on a target flood-level height.
One drawback of these systems is that they do not utilize the time dimension to create a
near-real-time animation of the dynamic environment they simulate.
On the other hand, Bender [9] presents a tool for static floods on top of rendered terrains and
buildings, but it also lacks the GIS aspects mentioned before. The absence of proper colour
mapping of water depth, analysis tools and other GIS capabilities calls for new systems that use
these simulated data-sets to create interactive, animated water surfaces shown over time from a
GIS perspective, over non-natural structures.
In this work a 4D interactive system is presented that is capable of rendering time-varying
phenomena in 3D space. In the case of floods, we would like to see how the water level rises in
various local regions of the area of interest over time. We would also like to visualize the 3D
space in which this phenomenon takes place, i.e. render the urban area, and the visualization
should account for how the phenomenon is affected by the urban environment. 4D visualization
of a dynamic phenomenon gives insight into how an environment evolves with time in 3D space
and thus helps in understanding the dynamics of the phenomenon. Moving beyond standard 3D
visualizations, which are static in nature, 4D visualizations help the user understand the complete
flow or process of which the static 3D snapshots are a part. Thus, for spatio-temporal data there
is a need to move from static 3D visualization to time-based 4D visualization.
Leskens [7], in visualizing floods, uses a LiDAR point cloud as the 3D static urban model. Bender and Jiang in their papers have used shapefiles for buildings. Different systems use different approaches to render the urban area, and this leads to a number of issues. For example, rendering a point cloud is highly compute-intensive because of the huge data sets involved, and is not a standard approach for rendering buildings. Bender and Jiang use shapefiles, which is a very generic format for 2D vector data and cannot store complex building structures. For lack of an open standard format, different tools come up with different approaches to render urban data. This issue of not using a standard file format to represent an urban area can be overcome with CityGML, a standard model for describing 3D objects with respect to their geometry, topology, semantics and appearance, standardized by the OGC. For this reason we use this data format as our input model to render urban areas.

4.2 Framework

The following is an overview of the 4D GIS system. The system takes a CityGML file as input, processes it, and extracts the relevant building data, ignoring other city data like vegetation, tunnels, pipelines, etc. Using this data it renders the buildings as a layer on top of the terrain. Since the building positions do not change, this is the static layer. The next step is the extraction of the elevation data of the static urban area, on which the hydrological simulation will take place, resulting in a time series of depth maps of the area. These depth maps are taken as the input to create the interactive dynamic flood layer. During elevation extraction, a Digital Surface Model (DSM) [47] is created which takes the building surfaces into account, rather than just the underlying elevation model. Once the user has the DSM of the area, he/she runs it through any simulation system to get the time series of depth maps of the area.
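The DSM step above can be sketched as follows. This is an illustrative Python fragment (the system itself is written in Java); the grid layout, the `dem` array and the rectangular footprints are hypothetical assumptions for the example, not the thesis implementation:

```python
def burn_buildings(dem, buildings):
    """Create a simple DSM by raising DEM cells covered by building footprints.

    dem       -- 2D list of ground elevations (metres)
    buildings -- list of (row0, row1, col0, col1, height) rectangular footprints
    """
    dsm = [row[:] for row in dem]  # copy the DEM so the input stays untouched
    for r0, r1, c0, c1, height in buildings:
        for r in range(r0, r1):
            for c in range(c0, c1):
                # building top = ground elevation + building height
                dsm[r][c] = dem[r][c] + height
    return dsm
```

The resulting grid is what a hydrological simulation would receive, so that water flows around buildings instead of through them.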
For rendering the 4D visualization, interpolation between two consecutive depth maps is carried out, depending on the sampling rate provided by the user. Two cases may occur. In the first, the difference between consecutive depth maps is small, so there is no need for sampling between the two depth maps, as there would be little difference in the interpolated values; the visualization renders the surface using the depth map data directly. This is when the sampling rate should be set to one. In the second case, the difference between two depth maps is comparatively large. Here the sampling rate tells the algorithm how many snapshots need to be created between two depth maps, resulting in the dynamic visualization of the flood layer. The framework of the system can be seen in figure 4.1.
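As a rough illustration of this sampling scheme (a Python sketch with hypothetical names; the thesis system is implemented in Java), linear interpolation between two consecutive depth maps with sampling rate n can be written as:

```python
def interpolate_frames(d0, d1, n):
    """Yield n frames blending depth map d0 towards d1 (flat lists of depths).

    n == 1 reproduces d0 only (no intermediate sampling);
    larger n inserts n-1 linearly interpolated snapshots before d1.
    """
    for k in range(n):
        alpha = k / n  # fraction of the way from d0 to d1
        yield [(1 - alpha) * a + alpha * b for a, b in zip(d0, d1)]
```

With n = 1 the animation simply steps from one depth map to the next, matching the first case described above.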

34
Figure 4.1 Framework

35
4.3 CityGML Rendering
As seen in the framework, we first need to render the city layer, on top of which the 4D visualization of dynamic surface layers is rendered. The following sections describe how we solve this problem.

4.3.1 CityGML
We use CityGML data for rendering LOD2 buildings. To achieve this we first need to extract the data associated with the building model of the data-set. The building model is represented in five levels of detail, from LOD0 to LOD4.
An example of a structured CityGML file can be seen in listing 4.1. It shows a building with a ground polygon.

Listing 4.1 Sample CityGML data


<?xml version="1.0" encoding="utf-8"?>
<CityModel>
  <gml:boundedBy>
    <gml:Envelope srsDimension="3" srsName="EPSG:CODE">
      <gml:lowerCorner>x0 y0 z0</gml:lowerCorner>
      <gml:upperCorner>x1 y1 z1</gml:upperCorner>
    </gml:Envelope>
  </gml:boundedBy>
  <cityObjectMember>
    <bldg:Building gml:id="BUILDING_ID">
      <bldg:roofType>Flat</bldg:roofType>
      <bldg:yearOfConstruction>1999</bldg:yearOfConstruction>
      <bldg:function>Residential</bldg:function>
      <bldg:storeysAboveGround>5</bldg:storeysAboveGround>
      <bldg:boundedBy>
        <bldg:GroundSurface>
          <bldg:lod2MultiSurface>
            <gml:MultiSurface>
              <gml:surfaceMember>
                <gml:Polygon gml:id="POLYGON_ID">
                  <gml:exterior>
                    <gml:LinearRing>
                      <gml:posList>0.0 0.0 0.0 -2.0 5.0 0.0 3.0 7.0 0.0 5.0 2.0 0.0 0.0 0.0 0.0</gml:posList>
                    </gml:LinearRing>
                  </gml:exterior>
                </gml:Polygon>
              </gml:surfaceMember>
            </gml:MultiSurface>
          </bldg:lod2MultiSurface>
        </bldg:GroundSurface>
      </bldg:boundedBy>
    </bldg:Building>
  </cityObjectMember>
</CityModel>

The data above shows a single ground surface of one building in a CityGML dataset. The CityModel tag contains schema definitions of the various models of the CityGML and GML standards as defined by the OGC (not shown in the data). The second element, gml:boundedBy, contains an envelope that encloses the city, given by its lower and upper corners. Then follows a series of cityObjectMembers, each containing one or more of the data models mentioned before. The sample data shows a Building model containing some building properties along with a single GroundSurface city object defining its polygon geometry.
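A minimal sketch of pulling such a polygon's coordinates out of a CityGML-like file is given below. This is illustrative Python using only the standard library; the namespace URI and the self-contained snippet are assumptions made for the example, and the actual system parses CityGML via citygml4j and 3DCityDB:

```python
import xml.etree.ElementTree as ET

GML = "http://www.opengis.net/gml"  # assumed GML namespace for this sketch

def extract_footprints(citygml_text):
    """Return every gml:posList in the document as a list of (x, y, z) tuples."""
    root = ET.fromstring(citygml_text)
    footprints = []
    for poslist in root.iter("{%s}posList" % GML):
        values = [float(v) for v in poslist.text.split()]
        # group the flat coordinate stream into 3D points
        footprints.append(list(zip(values[0::3], values[1::3], values[2::3])))
    return footprints
```

Running this over the listing's posList would yield the ring of 3D vertices making up the ground polygon.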

4.3.2 Building Model

The building model is represented in five levels of detail, from LOD0 to LOD4. LOD0 contains only the outline of the building footprint, i.e. no walls or roofs, just the ground surface. LOD1 represents buildings as solid blocks, i.e. ground surface plus walls. LOD2 adds more detail in the form of roof shapes rather than plain blocks. LOD3 adds windows and doors, and LOD4 is the most detailed representation, containing interior details for the floors and rooms of the building. In our work we have experimented with LOD1 and LOD2.

37
4.3.3 Dataset

We started off with a small dataset of Waldbruecke, Germany, which has around 600 buildings in its CityGML dataset, containing LOD1 and LOD2 buildings. This data-set can be found on the official CityGML website [48].
We also wanted to experiment with large datasets, so we turned to the work of Filip Biljecki [49], who is able to generate random CityGML data. We have used this randomly generated data for visualization purposes, so as to have more control over the number of buildings we work with. In his work he further sub-categorizes the different levels of detail of the building model; figure 4.2 shows this subdivision. Since we can specify the number of buildings to generate, this helps us test our framework on huge data-sets.
However, the generated data looks artificial and random, and thus does not give the look and feel of a real city. We have therefore also worked with datasets from the Berlin 3D virtual city project [50], which include textures for the different building components.

4.3.4 3DCityDB and Schema

3DCityDB [37] [51] is a free and open source database designed to store and analyze 3D city data. It is a relational schema that models CityGML; since the schema closely follows the CityGML modeling standards, storing CityGML data is efficient and easy. It is implemented on top of spatially enhanced relational DBMSs, i.e. Oracle with the spatial/location option and PostgreSQL with the PostGIS [52] extension. These spatial DBMSs support geographical objects and provide the capability to perform spatial queries. 3DCityDB also provides tools for importing and exporting CityGML data; the highly efficient importers use multiple cores to process very large CityGML data-sets in parallel.
It further provides features for exporting the data to the KML [53] [54] and COLLADA [55] formats for visualization, though the conversion to such formats often results in a loss of detail.
As it is based on powerful spatial DBMSs such as the PostGIS extension of PostgreSQL, it provides a large number of spatial operations, which can be used for tile creation, assignment of buildings to tiles, querying of tiles based on a point, and so forth.
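To illustrate the kind of tiling logic these spatial operations support, here is a simplified pure-Python sketch with hypothetical names and a uniform square grid; in the actual system the equivalent work is done with PostGIS spatial queries:

```python
def tile_of(x, y, origin, tile_size):
    """Return the (col, row) index of the grid tile containing point (x, y)."""
    ox, oy = origin
    return (int((x - ox) // tile_size), int((y - oy) // tile_size))

def assign_buildings(buildings, origin, tile_size):
    """Group building ids by tile, keyed on the centroid of their envelope.

    buildings -- dict of id -> (min_x, min_y, max_x, max_y) envelope
    """
    tiles = {}
    for bid, (x0, y0, x1, y1) in buildings.items():
        key = tile_of((x0 + x1) / 2, (y0 + y1) / 2, origin, tile_size)
        tiles.setdefault(key, []).append(bid)
    return tiles
```

Keying on the envelope centroid keeps each building in exactly one tile, which matches a tile-based fetch of buildings around the camera.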
Single or multiple CityGML classes are mapped to a single table in the database, with the class attributes converted to columns of the same name and the table taking the name of the class. The types of these columns vary with the database on which the schema is built, i.e. they differ between Oracle and PostgreSQL.

38
Figure 4.2 Multiple levels of detail as described by Biljecki

39
The core model contains all the classes and sub-classes of the CityGML file. The cityobject table contains the objects of subclasses like building, roofSurface, etc. The field names are identical to the CityGML attributes, with the addition of metadata such as modification details. Each object also has an envelope field, a rectangular region defined by the minimum and maximum coordinates of the object. The citymodel table aggregates the city objects into one group, and the tables cityobjectgroup and group_to_cityobject group different city objects together.
The geometries are stored in the surface_geometry table. Planar surfaces are stored in the GEOMETRY attribute of the respective database, while solid, i.e. three-dimensional, geometries are stored in the SOLID_GEOMETRY attribute. Each surface may have an associated texture or color, which is stored in the appearance model.
The appearance model contains the appearance table, which holds one or more themes for the city objects or the objects of a city model. The surface_data table contains the data for each surface_geometry object, and the textureparam table contains the texture coordinate lists for mapping the textures onto their corresponding geometries.
In the building model, the building table has a sub-class relation with cityobject through the column cityobject_id, which means that for each building there is a corresponding row in the cityobject table. Buildings are aggregated using the field BUILDING_PARENT_ID, which points to the id of the parent building in the same table. Most fields in the table are similar to the attributes of the counterpart CityGML class. Buildings are related to their corresponding boundary surfaces via the THEMATIC_SURFACE table. A small subset of the building tables is shown in figure 4.3.

4.3.5 Rendering using 3DCityDB

Compared with the naive approach of reading the CityGML data directly with the citygml4j [36] library, 3DCityDB's command line interface (CLI) helped remove the direct dependency on citygml4j, and because it uses multiple cores to read very large CityGML data in parallel, it reduced the time required for processing the file. Note that even though we do not use the citygml4j library directly, the 3DCityDB importer tool uses citygml4j under the hood to process the CityGML data.

40
Figure 4.3 Simplified schema of building model from 3DCityDB.org

41
Figure 4.17 Street view of the time series visualization

4.7 Graphical User Interface

The GUI consists of a WorldWind window, which shows the virtual globe and the other static and dynamic visualizations. The GUI can be seen in figure 4.18, which highlights its various components. The top left shows two layer panels: a data layer panel, which contains the building and dynamic layers created by the application, and a panel listing the aerial imagery data-sets provided by the World Wind services. Next there is a time panel containing three buttons, one for playing and pausing the animation and two for controlling its speed. There is also a time slider, which allows the user to jump to any specific time of the simulation to look at the visualization or perform queries at that point in time. A tool box at the top contains buttons for importing CityGML data and for preparing the animation, i.e. running the depth filling algorithm; a button for exporting the digital surface model, which is required for creating the time series depth maps; and buttons for querying water depth on the globe and for creating hydrographs of the simulation.

Figure 4.18 Graphical User Interface

4.7.1 Tools

The system provides tools for querying the water depth at a point on the terrain at the current time of the visualization, and for creating hydrographs, i.e. graphs showing the variation of the depth values of a specified point over time. These graphs are created from the calculated depth maps, which give the depth values at the different time intervals of the simulation. A user can select multiple points in the region and compare the rise, and rate of rise, of water at various places in the area. An example can be seen in figure 4.19. Note that the hydrographs are all linear; this is because the method used to generate the depth maps was linear.
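Conceptually, a hydrograph is just the time series of one grid cell read across the stack of depth maps. A hedged Python sketch (hypothetical names; the actual tool is part of the Java GUI):

```python
def hydrograph(depth_maps, row, col):
    """Extract the water-depth time series of one grid cell.

    depth_maps -- list of 2D grids, one per simulation time step
    """
    return [frame[row][col] for frame in depth_maps]
```

Comparing `hydrograph(maps, r1, c1)` against `hydrograph(maps, r2, c2)` is exactly the multi-point comparison the tool offers.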

59
Figure 4.19 HydroGraphs and depth Query

4.8 Results
The CityGML data of Lichtenberg, Berlin was used to generate the city buildings on an open source virtual globe, NASA's World Wind, as seen in figures 4.2 and 4.7. Time series depth maps are created to show the time-varying visualization, as seen in figures 4.15, 4.16 and 4.17. The framework is written in Java, and the open source PostGIS extension of PostgreSQL with the 3DCityDB schema was used for storing the building data. The visualization is done using the JOGL wrapper of OpenGL (also used by NASA World Wind). These visualizations run on a system with 4 GB of memory, using an Intel HD 3000 graphics card for rendering. This shows the importance of the open source libraries and software that make the development of such applications possible.

4.9 Conclusion
In this chapter we have approached the problem of visualizing dynamic environments using the time dimension, and built a system capable of handling such environments, demonstrated on a flood simulation.

60
A couple of drawbacks concern the performance of the system, in terms of memory usage and frame rates. We have implemented tile-based building rendering, which is suitable for street or near-street-level views; but when the user tries to see the entire city as a whole, not all the buildings are rendered because of the huge volume of building data. This problem can be solved by a level-of-detail approach, which would simplify the geometries at such large scales, and can be considered for future work. Also, as the area of the simulation increases, the interpolation time goes up, since the number of pixels that need to be interpolated also grows, affecting performance. This could be addressed with a parallel approach, where groups of pixels are interpolated at the same time by multiple cores.
Finally, the dynamic layer uses a naive approach to render the surface tessellation; this can be improved with level-of-detail based tessellation for very large areas.

61
Chapter 5

Conclusions

In this work, we address the question of how to build a 4D Geographic Information System capable of rendering a dynamic phenomenon on top of a static urban model in an interactive and queryable manner. We have taken floods as the case study of a dynamic phenomenon, and used the Berlin 3D CityGML data-set for rendering the static urban models. Initial attempts at building rendering used naive methods of parsing the geometries and rendering them on the virtual globe; later we scaled the rendering to very large CityGML datasets by making use of 3DCityDB and a tiling-based rendering approach. We also address the question of how the static building models can be automatically draped with texture, so as to make the models more realistic and reduce the manual labor involved in texture draping. To this end we demonstrated a Chan-Vese based method to extract textures from geo-tagged images and automatically drape them over the corresponding faces of the 3D white models. This method gives a near-realistic result because it maps the buildings with their real-world textures.
The static urban model and the dynamic phenomenon interact via the creation of the Digital Surface Model. The generated DSM is used to produce time series depth maps of the region of interest by passing it to a hydrological model or to our depth filling algorithm. These time series depth maps are fed to the analytical surface for the generation of the dynamic layer, resulting in the interactive visualization of the dynamic phenomenon on top of the urban model.
The system moves away from the existing animation- or visualization-only approaches, where rendering is done in a local, non-interactive space and the primary goal is more appealing graphics rather than using them as a means to extract meaningful information. Our work thus moves towards a 4D GIS where visualizations are made in a geo-referenced space and the user controls the phenomenon and can answer basic queries at any particular moment in time.
A couple of limitations from the system architecture point of view are as follows:

• The tessellation algorithm for the dynamic surface is quite primitive and can be optimized
for better performance on large dynamic areas.

• The analytic surface update is done for every grid point in a sequential order. This can be
improved by taking advantage of parallelism by GPUs or multi-core CPU processors.

The urban model is limited to the visualization of the building models only and does not include other city features like city furniture, vegetation, etc.; this is left for future work.
The rendering of buildings from the CityGML data model uses tile-based rendering to fetch a subset of the data for better performance, using a D8 or D24 neighborhood approach to render the buildings in the vicinity of the camera. Building rendering can further be improved by incorporating a 3D generalization approach, where the detail of the buildings varies with the camera zoom level: more zoomed-in levels would show the buildings in more detailed structure, while a far-away camera would show less detail.
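As an illustration of the D8 neighborhood idea mentioned above (a Python sketch with assumed tile indices; the thesis system performs this selection in Java over 3DCityDB tiles), the set of tiles to fetch around the camera's tile is:

```python
def d8_tiles(col, row):
    """Return the camera's tile plus its 8 surrounding tiles (D8 neighborhood)."""
    return [(col + dc, row + dr)
            for dr in (-1, 0, 1)
            for dc in (-1, 0, 1)]
```

A D24 variant would simply widen the offsets to range over -2..2, giving a 5x5 block of 25 tiles.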
Further, as the system uses 3DCityDB as the storage model for the urban models, it is readily extensible to answering complex queries like "which buildings in a specific area of the city will get flooded, and when?". Such queries are not yet built in and are left for future work. They can also be linked to the visualization in the GIS system, for example by marking the set of buildings where the flood reaches a threshold height, or by putting a red flag on top of such buildings.
Further, the system does not capture the effect of the dynamic surface on the static model; such interactions can be considered for future work.

63
Related Publications

1. Vishal Tiwari, K. S. Rajan


A Chan Vese based method of texture extraction for automated texture draping of 3D geospa-
tial objects.
2015 IEEE International Geoscience and Remote Sensing Symposium (IGARSS).
July 26-31, 2015; Milan, Italy

2. Vishal Tiwari, K. S. Rajan


Interactive 4D Visualization of Dynamic environments on WorldWind Globe - Simulated
Floods over an Urban Area.
Springer Journal of Open Geo-spatial Data, Software and Standards.
(Under Review)

64
Bibliography

[1] Michael F Goodchild. Geographical information science. International journal of geograph-


ical information systems, 6(1):31–45, 1992.

[2] Piotr Jankowski. Integrating geographical information systems and multiple criteria decision-
making methods. International journal of geographical information systems, 9(3):251–273,
1995.

[3] Jasrul Nizam Ghazali, Amirrudin Kamsin, NE Mastorakis, V Mladenov, Z Bojkovic,


D Simian, S Kartalopoulos, A Varonides, C Udriste, E Kindler, et al. A real time simulation
and modeling of flood hazard. In WSEAS International Conference. Proceedings. Mathemat-
ics and Computers in Science and Engineering, number 12. WSEAS, 2008.

[4] Roger S Bivand, Edzer Pebesma, and Virgilio Gómez-Rubio. Hello world: Introducing spatial
data. In Applied Spatial Data Analysis with R, pages 1–16. Springer, 2013.

[5] Thomas K Peucker, Robert J Fowler, James J Little, and David M Mark. The triangulated
irregular network. In Amer. Soc. Photogrammetry Proc. Digital Terrain Models Symposium,
volume 516, page 532, 1978.

[6] Robert J Fowler and James J Little. Automatic extraction of irregular network digital terrain
models. ACM SIGGRAPH Computer Graphics, 13(2):199–207, 1979.

[7] Johannes G Leskens, Christian Kehl, Tim Tutenel, Timothy Kol, Gerwin de Haan, Guus
Stelling, and Elmar Eisemann. An interactive simulation and visualization tool for flood
analysis usable for practitioners. Mitigation and Adaptation Strategies for Global Change,
pages 1–18, 2015.

65
[8] Viacheslav K Gusiakov. An integrated tsunami research and information system: Application
for mapping of tsunami hazard and risk assessment. In Solutions to Coastal Disasters 2008:
Tsunamis, pages 27–38. ASCE.

[9] Jan Bender, Dieter Finkenzeller, Peter Oel, et al. Hw3d: A tool for interactive real-time
3d visualization in gis supported flood modelling. In Proceedings of the 17th International
Conference on Computer Animation & Social Agents, Geneva (Switzerland), pages 7–9, 2004.

[10] Rengui Jiang, Jiancang Xie, Jianxun Li, and Tianqing Chen. Analysis and 3d visualization of
flood inundation based on webgis. In E-Business and E-Government (ICEE), 2010 Interna-
tional Conference on, pages 1638–1641. IEEE, 2010.

[11] https://grass.osgeo.org/grass72/manuals/r.sim.water.html/. [GRASSSIM].

[12] Maged N Kamel Boulos and Larry R Robinson. Web gis in practice vii: stereoscopic 3-d
solutions for online maps and virtual globes. International Journal of Health Geographics,
8(1):1, 2009.

[13] Ben GH Gorte. Tools for advanced image processing and gis using ilwis. ITC, 1994.

[14] http://www.geoweb3d.com/new-feature/3d-stereo/. [geoWeb3D].

[15] Alexandre Sorokine. Implementation of a parallel high-performance visualization technique


in grass gis. Computers & Geosciences, 33(5):685–695, 2007.

[16] Markus Neteler, M Hamish Bowman, Martin Landa, and Markus Metz. Grass gis: A multi-
purpose open source gis. Environmental Modelling & Software, 31:124–130, 2012.

[17] AE Barnes. What a relief shade can be. AAPG Explorer, 8, 2002.

[18] Everette B Hill Sr. Process for making shade relief maps and the map made thereby, April 10
1979. US Patent 4,148,580.

[19] ESRI ESRI. Shapefile technical description. An ESRI White Paper, 1998.

[20] Frank Warmerdam. The geospatial data abstraction library. In Open Source Approaches in
Spatial Data Handling, pages 87–104. Springer, 2008.

[21] Jakob J Van Zyl. The shuttle radar topography mission (srtm): a breakthrough in remote
sensing of topography. Acta Astronautica, 48(5):559–565, 2001.

66
[22] Klaus-Peter Beier. Virtual reality: A short introduction. Retrieved February, 2:2004, 2004.

[23] Jianghui Ying, Denis Gracanin, and Chang-Tien Lu. Web visualization of geo-spatial data
using svg and vrml/x3d. In Image and Graphics (ICIG’04), Third International Conference
on, pages 497–500. IEEE, 2004.

[24] Jianqin Zhang, Jianhua Gong, Hui Lin, Gang Wang, JianLing Huang, Jun Zhu, Bingli Xu,
and Jack Teng. Design and development of distributed virtual geographic environment system
based on web services. Information Sciences, 177(19):3968–3980, 2007.

[25] Don Brutzman and Leonard Daly. X3D: extensible 3D graphics for Web authors. Morgan
Kaufmann, 2010.

[26] Ben Discoe. Virtual terrain project. URL: http://www. vterrain. org/(last date accessed: 01
July 2005), 2002.

[27] Robert Osfield, Don Burns, et al. Open scene graph, 2004.

[28] Declan Butler. Virtual globes: The web-wide world. Nature, 439(7078):776–778, 2006.

[29] P. Cozzi and K. Ring. 3D Engine Design for Virtual Globes. CRC Press, 2011.

[30] David G Bell, Frank Kuehnel, Chris Maxwell, Randy Kim, Kushyar Kasraie, Tom Gaskins,
Patrick Hogan, and Joe Coughlan. Nasa world wind: Opensource gis for mission operations.
In 2007 IEEE Aerospace Conference, pages 1–9. IEEE, 2007.

[31] Patrick HOGAN. Nasa world wind. In Geological Society of America Abstracts with Pro-
grams, volume 39, page 42, 2006.

[32] P Hogan and J Coughlan. Nasa world wind, open source 4d geospatial visualization platform: *.net & java. In AGU Fall Meeting Abstracts, volume 1, page 1333, 2006.

[33] Jun Zhu and Jin Hong Wang. Interactive virtual globe service system based on osgearth. In
Applied Mechanics and Materials, volume 340, pages 680–684. Trans Tech Publ, 2013.

[34] Analytical Graphics Inc. Cesium: WebGL virtual globe and map engine. Retrieved January 17, 2015.

[35] Thomas H Kolbe, Gerhard Gröger, and Lutz Plümer. Citygml: Interoperable access to 3d city
models. In Geo-information for disaster management, pages 883–899. Springer, 2005.

67
[36] Claus Nagel. Citygml4j, 2013.

[37] Thomas H Kolbe, Claus Nagel, and Javier Herreruela. 3d city database for citygml. Adden-
dum to the 3D City Database Documentation Version, 2(1), 2013.

[38] Yotam Livny, Zvi Kogan, and Jihad El-Sana. Seamless patches for gpu-based terrain render-
ing. The Visual Computer, 25(3):197–208, 2009.

[39] Frank Losasso and Hugues Hoppe. Geometry clipmaps: terrain rendering using nested regular
grids. In ACM Transactions on Graphics (TOG), volume 23, pages 769–776. ACM, 2004.

[40] Tony F Chan and Luminita A Vese. Active contours without edges. IEEE Transactions on
image processing, 10(2):266–277, 2001.

[41] Luminita A Vese and Tony F Chan. A multiphase level set framework for image segmentation
using the mumford and shah model. International journal of computer vision, 50(3):271–293,
2002.

[42] Yun Feng Nie, Hu Xu, and Hai Ling Liu. The design and implementation of tile map service.
In Advanced Materials Research, volume 159, pages 714–719. Trans Tech Publ, 2011.

[43] Hailing Liu and Yunfeng Nie. Tile-based map service geowebcache middleware. In Intelligent
Computing and Intelligent Systems (ICIS), 2010 IEEE International Conference on, volume 1,
pages 692–697. IEEE, 2010.

[44] M Turker and E Sumer. Automatic retrieval of near photo-realistic textures from single
ground-level building images. Int. Arch. Photogramm. Remote Sens. Spatial Inform. Sci,
38(4).

[45] Siyka Zlatanova, A Rahman, and Morakot Pilouk. 3d gis: current status and perspectives.
International Archives of Photogrammetry Remote Sensing and Spatial Information Sciences,
34(4):66–71, 2002.

[46] Siyka Zlatanova, Alias Abdul Rahman, and Morakot Pilouk. Trends in 3d gis development.
Journal of Geospatial Engineering, 4(2):71–80, 2002.

[47] David Francis Maune. Digital elevation model technologies and applications: the DEM users
manual. Asprs Publications, 2007.

[48] http://www.citygml.org/. [CityGML].

68
[49] https://3d.bk.tudelft.nl/biljecki/Random3Dcity.html. [Random3DCity].

[50] ftp://download-berlin3d.virtualcitymap.de/citygml/. [berlin-3D].

[51] TH Kolbe, G König, C Nagel, and A Stadler. 3dcitydb-documentation. Institute for Geodesy
and Geoinformation Science, Technische Universität Berlin, page v2, 2009.

[52] Regina O Obe and Leo S Hsu. PostGIS in action. Manning Publications Co., 2015.

[53] T Wilson. Ogc keyhole markup language, 2.2. 0. Open GIS Consortium, 2008.

[54] Deborah Nolan and Duncan Temple Lang. Keyhole markup language. In XML and Web
Technologies for Data Sciences with R, pages 581–618. Springer, 2014.

[55] Mark Barnes. Collada. In ACM SIGGRAPH 2006 Courses, page 8. ACM, 2006.

[56] Tom G Farr and Mike Kobrick. Shuttle radar topography mission produces a wealth of data.
Eos, Transactions American Geophysical Union, 81(48):583–585, 2000.

[57] ftp://download-berlin3d.virtualcitymap.de/citygml/. [Online].

[58] Michael F Goodchild. Geographic information system. In Encyclopedia of Database Systems,


pages 1231–1236. Springer, 2009.

[59] Chaowei Yang, Menas Kafatos, David W Wong, Henry D Wolf, and Ruixin Yang. Geographic
information system, May 25 2010. US Patent 7,725,529.

[60] Stan Aronoff. Geographic information systems: a management perspective. 1989.

[61] Paul Longley. Geographic information systems and science. John Wiley & Sons, 2005.

[62] Russell G Congalton. Remote sensing and geographic information system data integration:
error sources and. Photogrammetric Engineering & Remote Sensing, 57(5):677–687, 1991.

[63] Tim Bahaire and Martin Elliott-White. The application of geographical information systems
(gis) in sustainable tourism planning: A review. Journal of Sustainable Tourism, 7(2):159–
174, 1999.

[64] Richard Kingston, Steve Carver, Andrew Evans, and Ian Turton. Web-based public partic-
ipation geographical information systems: an aid to local environmental decision-making.
Computers, environment and urban systems, 24(2):109–125, 2000.

69
[65] Oliver Kersting and Jürgen Döllner. Interactive 3d visualization of vector data in gis. In Pro-
ceedings of the 10th ACM international symposium on Advances in geographic information
systems, pages 107–112. ACM, 2002.

[66] Gilberto Câmara, Ricardo Cartaxo Modesto Souza, Ubirajara Moura Freitas, and Juan Gar-
rido. Spring: Integrating remote sensing and gis by object-oriented data modelling. Comput-
ers & graphics, 20(3):395–403, 1996.

[67] David Koller, Peter Lindstrom, William Ribarsky, Larry F Hodges, Nick Faust, and Gregory
Turner. Virtual gis: A real-time 3d geographic information system. In Proceedings of the 6th
conference on Visualization’95, page 94. IEEE Computer Society, 1995.

[68] Dave Shreiner, Bill The Khronos OpenGL ARB Working Group, et al. OpenGL programming
guide: the official guide to learning OpenGL, versions 3.0 and 3.1. Pearson Education, 2009.

[69] Kris Gray. Directx 9 programmable graphics pipeline. 2003.

[70] Edward Verbree, Gert Van Maren, Rick Germs, Frederik Jansen, and Menno-Jan Kraak. In-
teraction in virtual world views-linking 3d gis with vr. International Journal of Geographical
Information Science, 13(4):385–396, 1999.

[71] Volker Coors. 3d-gis in networking environments. Computers, Environment and Urban Sys-
tems, 27(4):345–357, 2003.

[72] Alexander Köninger and Sigrid Bartel. 3d-gis for urban purposes. Geoinformatica, 2(1):79–
103, 1998.

[73] Shinji Masumoto, Venkatesh Raghavan, Go Yonezawa, Tatsuya Nemoto, and Kiyoji Shiono.
Construction and visualization of a three dimensional geologic model using grass gis. Trans-
actions in GIS, 8(2):211–223, 2004.

[74] Jürgen Döllner, Konstantin Baumann, and Henrik Buchholz. Virtual 3D city models as foun-
dation of complex urban information spaces. na, 2006.

[75] Ernesto Rodrı́guez, CS Morris, JE Belz, EC Chapin, JM Martin, W Daffer, and Scott Hensley.
An assessment of the srtm topographic products. 2005.

[76] Le Yu and Peng Gong. Google earth as a virtual globe tool for earth science applications
at the global scale: progress and perspectives. International Journal of Remote Sensing,
33(12):3966–3986, 2012.

[77] L Boschetti, DP Roy, and CO Justice. Using nasa’s world wind virtual globe for interac-
tive internet visualization of the global modis burned area product. International Journal of
Remote Sensing, 29(11):3067–3072, 2008.

[78] Declan G De Paor and Steven J Whitmeyer. Geological and geophysical modeling on virtual
globes using kml, collada, and javascript. Computers & Geosciences, 37(1):100–110, 2011.

[79] Vishal Tiwari and KS Rajan. A chan vese based method of texture extraction for automated
texture draping of 3d geospatial objects. In 2015 IEEE International Geoscience and Remote
Sensing Symposium (IGARSS), pages 5007–5010. IEEE, 2015.

[80] Mathias Walker, Pirmin Kalberer, and AG Sourcepole. Comparison of open source virtual
globes. FOSS4G2010, 2010.

[81] Hugues Hoppe. Smooth view-dependent level-of-detail control and its application to terrain
rendering. In Visualization’98. Proceedings, pages 35–42. IEEE, 1998.

[82] KyoHyouk Kim and Jie Shan. Building footprints extraction of dense residential areas from
lidar data. In Annual Conference of the American Society for Photogrammetry and Remote
Sensing, Milwaukee, WI, 2011.

[83] Florent Lafarge, Xavier Descombes, Josiane Zerubia, and Marc Pierrot-Deseilligny. Auto-
matic building extraction from dems using an object approach and application to the 3d-city
modeling. ISPRS Journal of Photogrammetry and Remote Sensing, 63(3):365–381, 2008.

[84] Don Burns and Robert Osfield. Tutorial: open scene graph a: introduction tutorial: open
scene graph b: examples and applications. In Virtual Reality, 2004. Proceedings. IEEE, pages
265–265. IEEE, 2004.

[85] Michael Flaxman. Using the virtual terrain project to plan real cities: alternative futures for
hangzhou, china. In ACM SIGGRAPH 2002 conference abstracts and applications, pages
275–275. ACM, 2002.

[86] Jian Chen, Arleen A Hill, and Lensyl D Urbano. A gis-based model for urban flood inunda-
tion. Journal of Hydrology, 373(1):184–192, 2009.

[87] Helena Mitasova, Chris Thaxton, Jaroslav Hofierka, Richard McLaughlin, Amber Moore, and
Lubos Mitas. Path sampling method for modeling overland water flow, sediment transport,
and short term terrain evolution in open source gis. Developments in Water Science, 55:1479–
1490, 2004.

