Joe Parker
Computer Science BSc
Newcastle University
Supervisor:
Dr Gary Ushaw
Page 2 of 80
Abstract
Procedural generation is a field of computing wherein assets ranging from graphical textures to
background game music are created algorithmically at runtime. The implementation of
procedural subsystems in modern applications allows developers and level designers to spend
less time performing trivial tasks, leaving the computer to handle the pseudo-random creation and
placement of assets in order to mimic the effect of realism.
In this paper the background of procedural generation is explored pertaining to the subject of
virtual environments. Various algorithms used to create fractal noise patterns are analysed and
used to assign height values to a two-dimensional grid of vertices defined as a heightmap, which
can be translated into a virtual landscape mesh appropriate for a game world or simulation.
Finally, the algorithmic implementation is refined so that the generated terrain can be controlled
and modified in real-time by defining a set of user-input parameters detailing the properties of
the desired landscape.
Declaration
I declare that this dissertation represents my own work, except where otherwise stated.
Table of Contents

1. Introduction – p6
   1.1 Project Aim – p6
   1.2 Dissertation Structure – p6
   1.3 Problem Statement – p7
   1.4 Purpose – p8
   1.5 Objectives – p10
   1.6 Project Cycle – p12
2. Background – p13
   2.1 Heightmaps – p13
   2.2 Procedural Content Definition – p18
   2.3 Commercial Examples – p19
       2.3.1 Rogue (1980) – p19
       2.3.2 Minecraft (2011) – p21
       2.3.3 Space Engine (2015) – p22
   2.4 Fractal Algorithms – p25
       2.4.1 Midpoint Displacement – p25
       2.4.2 Diamond-Square – p28
       2.4.3 Perlin Noise – p32
       2.4.4 Simplex Noise – p36
3. Methodology – p39
   3.1 System Design – p39
   3.2 Class Hierarchy – p40
   3.3 Implementation – p41
       3.3.1 Diamond-Square – p41
       3.3.2 Perlin Noise – p44
   3.4 Algorithm Refinement – p49
   3.5 Interface and Controls – p54
4. Evaluation – p56
   4.1 Testing Strategy – p56
   4.2 Performance Analysis – p57
   4.3 Improvements – p60
   4.4 Algorithm Comparison – p62
5. Conclusion – p64
   5.1 Satisfaction of Objectives – p64
   5.2 Improvements and Further Investigation – p65
   5.3 Final Reflections – p66
6. References – p70
7. Appendices – p75
   7.1 – p75
   7.2 – p75
   7.3 – p75
   7.4 – p75
   7.5 – p78
   7.6 – p80
   7.7 – p82
1. Introduction
1.1
Project Aim
To create and investigate procedural landscapes commonly seen in modern computer games and
simulations.
1.2
Dissertation Structure
This paper is separated into five individual sections, each with several subsections. The first,
Introduction, outlines the project's purpose, details the problem that I have chosen to investigate and
begins the development of a potential solution. The overall project aim, stated above, blankets the
dissertation goal and is further divided into the four specific objectives detailed in section 1.5.
Section two, Background, explores my research relating to the topic, the problem and past proposed solutions. All background study in this area is investigated with regard to the
project aim and proposal outlined in section one.
Section three, Methodology, discusses the design choices I made when approaching the
project and how these choices affected my ability to develop a workable solution in the most effective
manner. Following this, the actual program implementation is detailed, including any modifications,
refinements or alternative strategies employed during the development cycle.
Section four, Evaluation, discusses the testing methods used to monitor the system and
analyse it in accordance with the overall project aim. The parameters of evaluation are explained before
both the overall solution and each individually implemented algorithm are tested in comparison.
The final section of discussion, Conclusion, provides a summative review of the dissertation.
Most importantly, this section asks whether the overall aim was met, in addition to each individual
objective. The entire development cycle is critically assessed and areas of improvement are discussed,
finalising with suggestions for future work pertaining to the project and its subject of investigation.
1.3
Problem Statement
With computational power ever-increasing and high-performance personal computers and games
consoles frequenting a greater number of households than ever before, industries developing virtual
realities (scaling from recreational to military specifications) are often employing new or repurposed
strategies for doing so. One relatively unexplored approach to creating computerized content which has
become much more evident in recent years is the application of runtime-generated assets. Producing
content on-the-fly has seen a gain in usage in modern computer games and simulations, although it still
stands as an area of development that remains largely uncharted, with a potential yet to be realised. As
Smith (2014) states, "Some of the most delightful moments in games come where the game delivers
surprising content that follows the theme of what the player has seen previously, such that the player must
learn how it works and build new game strategies."
Runtime-generated resources can concern anything from the background music in a
computer game to graphical models, or any other asset outside of the core game engine or AI. In each of
these areas, content produced in this way is typically created algorithmically. Although the hope of
coming up with an exact definition for the term "procedural content generation" would be futile [Togelius
et al. 2011], it can be justified that player-made levels or features should be excluded. Other assets, such
as adaptive environments, can be argued for or against, depending on whether their particular
deployment would be considered an extension of AI or a discrete procedural subsystem. I will further
discuss a concrete definition of procedural content generation in section 2.2.
Depending on the type of media being generated, pre-existing algorithms may be used as-is or refined
to suit the needs of a particular project, meaning that a large scope for design and performance
improvement remains. While procedural generation can save both system memory and development time,
the main limitation of this approach comes in the form of reduced accuracy or detail. Furthermore, the
lack of control over pseudo-randomness and its potential outputs may deter many developers
from investing in a procedural subsystem. A final consideration is the cost of
performance, which is heavily dependent on the type of system in place and the implementation of the
designer's chosen algorithms. In programs where performance is key, such as computer games, this can
pose a large problem. While the idea of infinite landscapes or highly complex large-scale game universes
is less of a fantasy than ever before, there is still a way to go to match the speed and stability of games
that load pre-built landscapes directly from memory. In contrast, looking at non-game projects such as
flight or military simulations, the lack of fidelity or accuracy can outweigh the potential benefits of
runtime generation. The effort to master algorithmically-produced content brings forward the possibility
of real-world accurate models; imagine a military training simulation that could create a completely
different scenario each time it is loaded, without any compromise in accuracy, realism or
performance. Relating specifically to the area of runtime-generated terrain, there are a small number of
commonly applied algorithms and methods, each with its own individual applications. In this
dissertation I will be focusing particularly on the implementation, refinement and further exploration of
the following algorithms: Diamond-Square and Perlin Noise.
1.4
Purpose
Automatically generated terrain is one of the most identifiable examples of algorithmically
produced content in computer games, due in no small part to the nature of its observable visual
implementation. Some modern games, including Markus "Notch" Persson's acclaimed 2011 title
Minecraft and the open-source indie platformer Spelunky, have heightened the profile of this "promising
but underused" [Smelik et al. 2009] approach to world design. In computer games development, however,
but underused [Smelik et al. 2009] approach to world design. In computer games development, however,
the idea of both large-scale and real-world accurate landscapes remains largely unrealised. To address this
I intend to research and implement practical solutions to tackle some of the issues that deter developers
from taking algorithmic approaches to representing simulated game worlds. I will be expanding beyond
current work in the field of algorithmic terrain generation by concentrating on topographical aspects in
addition to the implementation of assets relating to real-world geography. Rather than analysing graphical
fidelity, these features will be evaluated based on locational accuracy and conformity to user-defined
parameters stating whether their existence is sensible and reliable in accordance with natural topography.
Focusing on the subject of pseudo-realistic world visualisation, this study hopes to promote the
understanding of this area of computer graphics. Although the concept of procedural graphics and levels
has been explored, a great deal of further investigation is required in order for it to be a viable and
interesting alternative to manual creation in games and simulations. Cook (2015) explains: "Procedural
content generation feels a bit like violating laws of the universe – creating something from nothing, again
and again. Of course it's nothing like that really, but you get a rush from seeing something appear
suddenly." [Moss, R. 2015]. Aside from some of the earlier described reasons for adopting an algorithmic
approach to content generation, the most obvious advantage is the removal of the requirement to employ a
human developer or game artist [Shaker et al. 2015]. A developer needs experience or training, as well as
being expensive to hire and relatively slow to produce quality content; a procedural subsystem is none of
these. It is also very important to note the exact role of runtime-generated content in the computer games
industry. Game artists and level designers aren't looking for an algorithmic solution to replace them at the
push of a button, or even a system that will completely generate high-detail game assets with minimal
effort. Procedural subsystems should be in place in order to take control of the tedious, repetitive and
menial tasks that a computer can perform to an acceptable level of fidelity. For example, a system in place
to generate hundreds of appropriately sized and shaped buildings with plain facades in a city simulation
would aid asset artists and designers by allowing them to focus their time on adding small and meticulous
details. A greater amount of time spent on intricacies equates to a more satisfying product; where the
realism of a completely man-made landscape is replaced by the authenticity of a pseudo-random
foundation with a much finer outer appearance. Ubisoft's Chris Preston (2010) summarises: "It's
extremely hard to procedurally generate the kind of subtlety, detail and uniqueness that a human designer
can put into an object. Whether that's gameplay nuances, visual detail or whatever. Not impossible, just
very hard, and there aren't many examples of it having been done really well. It's the opposite of
randomness or natural effects – objects in our world are so specifically and carefully designed by people
to deliver on a given purpose."
Nowadays, AAA titles (the highest-budget, bestselling and most publicized computer games) are
developed by teams of hundreds of developers between office spaces spreading across international
studios. A typical open world blockbuster project will employ the use of up to 600 people working
across ten or more studios in various areas of development [Weber, R. 2013], with the Ubisoft Reflections
studio enlisting a team of 90 people to work as merely a third of the total development staff of the 2014
title Watch Dogs [Ponce, T. 2013]. The results of this are ventures that have a high monetary input and
calculated staffing and development costs. Thus, in order for a project of this magnitude to achieve a
profit margin designers have become increasingly reluctant to take risks or offer diversity in both
engineering and invention. As a result, indie development studios are currently the ones exploring
procedural content generation, owing to its perceived risks and instability, which holds back the progress of
development in the area. Despite this, it can be argued that large studios undertaking the use of
algorithmic approaches would have a significant competitive advantage [Shaker et al. 2015]. The
requirement for a lower number of human staff in the realms of resource creation, game art and level
design (which are often the positions that necessitate the highest pay-share of development teams) would
result in faster-completed projects with the additional advantage of a lower total expense.
1.5
Objectives
The project's primary objective is to develop one or more examples of automatically generated
terrain by investigating similar implementations and studying their successes and failures. After reviewing
a number of algorithms I will refine and implement my own interpretations of some of the most
commonly-adopted in order to record their performance and accuracy in regards to the generation of
graphical landscapes. This will be evaluated both independently and in tandem with the following specific
objectives:
Objective 1: Investigate at least two common methods and algorithms for producing runtime-generated
game worlds and landscapes.
In order to produce an effective solution to the problem outlined, investigation of several possible
solutions is required. My research into previous approaches to runtime-generated content will allow me to
see the benefits and drawbacks of individual past solutions, in addition to the refinements and
improvements that may have been made between iterations or in subsequent projects. I have chosen to
identify at least two algorithms suitable for terrain generation so that I will be able to analyse and evaluate
their differences in terms of performance and scalability. The methods I aim to base my work on are the
Diamond-Square algorithm and Perlin Noise. These algorithms and some of their implementations in
industry are discussed in detail in section 2.
Objective 2: Create an example in software of a large-scale procedural landscape using the discovered
methods.
Once initial background research has been performed it is crucial that the framework for my own
project is developed and provides a stable testing environment for my own procedural generation system.
The second objective will be considered achieved once an algorithmically-generated heightmap has been
implemented in code using at least two of the researched algorithms. Whether or not the objective is
satisfied will be determined by the performance of the algorithms and the ability of the program to run at
a playable frame rate.
Objective 3: Refine the solution to account for adding landscape features with real-world accuracy
governed by topographical patterns.
Following the creation of a working prototype and initial evaluation, further improvements and
additions are to be produced to account for realistic topographical features. This iteration involves
refining the algorithms to restrict how terrain is formed and evaluating it comparatively against a set of
user-defined parameters conforming to real-world rules of how and where topographical landscape
features are formed.
Objective 4: Monitor and evaluate the performance of the program, using optimisation techniques to
implement possible improvements.
The final objective of the project is a summative assessment of the finished program. The
objective will be measured based on the performance of the implementation and by comparing revisions
of the different algorithms. Additional functionality controlling topographical features and providing a
solution addressing the issue of real-world accuracy of the produced terrain will define a success of the
overall project aim.
1.6
Project Cycle
The development of the project and evaluation of implemented algorithms requires a thoroughly
planned project timeline. Development of the software solution follows the Rolling Wave plan [Larman,
C. 2004]. This method of planning has a focus on being adaptive; wherein detailed tasks are yet to be
determined and thus only major deliverables are currently specified. This allows the project to be planned
and improved iteratively, with an emphasis on analysis and self-feedback, making it particularly suitable
for a research and creation-based project [Sharma, R. 2013]. Undertaking a project of this scope using an
agile development method requires a substantial concentration on documentation, which will benefit the
refinement of the solution and its ongoing evaluation when analysing iterations comparatively.
2. Background
2.1
Heightmaps
In computerised graphical simulations, landscapes built by game artists or level designers can
take on a variety of forms when represented in system memory. One such form, and the focus of this
project, is a type of multidimensional array known as a heightmap. Typically a heightmap is composed of
a type of dot matrix data graphic called a raster image, which stores coloured pixel data in a rectangular
structure. A well-known category of raster image is a bitmap file, in which a single bit on the image
relates to a single bit on screen – this is called a single-bit raster [Foley, J D. 1995]. Aside from bit-for-bit
data (categorised by image width and height), a raster image also stores a colour depth attribute,
signifying the number of bits, and hence the number of colours, each pixel can represent.
Figure 1 – A simple bitmap representation of a heightmap. Terrain height data is stored on a per-pixel
basis as a normalised decimal value between 0 and 1, represented on the raster image as grey values between black
and white. (Source: Reallusion Forums, 2011).
Figure 2 – The heightmap pixel data from Figure 1 imported as a terrain model using iClone 5 Pro. The
software reads in the bit values from the heightmap and translates them into y-coordinates for the vertex data of the
terrain mesh, where 0 equates to the minimum height of the landscape and 1 translates to maximum height. As such,
notice how the pure white spot on the raster image creates a mesa when converted to a landscape mesh. (Source:
Reallusion Forums, 2011).
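The conversion described in the caption can be sketched as a simple linear mapping. This is a hypothetical helper for illustration only, not the iClone implementation; the function name and parameters are assumptions:

```python
# Hypothetical helper: map a normalised pixel value in [0, 1] to a terrain
# y-coordinate, where 0.0 becomes the minimum landscape height and 1.0 the
# maximum (so a pure white pixel produces the highest point, e.g. a mesa).
def pixel_to_height(value, min_height, max_height):
    return min_height + value * (max_height - min_height)

peak = pixel_to_height(1.0, min_height=-10.0, max_height=50.0)   # -> 50.0
floor = pixel_to_height(0.0, min_height=-10.0, max_height=50.0)  # -> -10.0
```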
So far we have considered the use of heightmaps as a data structure to represent terrain, with their
inherent advantages being obvious – they offer fast and efficient data access, are simple to implement and
just as simple for a user or developer to understand. As previously described, a heightmap will typically
store a value for each x,z position of a terrain model in either a two-dimensional array or one long list of values
(where the width and height are known in order to demarcate each z-step). The actual data value stored in
the array will simply be the y value (terrain height) at that offset point. As such, a heightmap data
structure storing a single numerical floating-point value at each x,z position takes up a very
small amount of space in memory, allowing for fast reading and writing at runtime even for a large-scale
terrain model, depending on the total number of vertices being rendered. No other information, such as
texture or colour data, is represented by a heightmap and as such these must be implemented by the
rendering system separately. This maintains the speed of the actual loading of terrain height values from
memory, meaning textures and other features including level-of-detail (LOD) are applied by the rendering
context being used (e.g. OpenGL, Microsoft DirectX) and can even be leveraged to the advantage of the
program design.
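The storage scheme described above can be sketched as follows. This is a minimal illustration of the data structure, not the dissertation's actual implementation; the class and method names are assumptions:

```python
# A minimal heightmap: a flat list of y-values indexed by (x, z).
# Because the grid dimensions are known, each 2D position maps to a single
# offset in the list, keeping the memory footprint to one float per vertex.
class Heightmap:
    def __init__(self, width, depth, initial=0.0):
        self.width = width    # number of vertices along x
        self.depth = depth    # number of vertices along z
        self.data = [initial] * (width * depth)

    def _index(self, x, z):
        # the known width demarcates each z-step in the flat list
        return z * self.width + x

    def get(self, x, z):
        return self.data[self._index(x, z)]

    def set(self, x, z, y):
        self.data[self._index(x, z)] = y

# Usage: a 4x4 grid storing normalised heights in [0, 1].
hm = Heightmap(4, 4)
hm.set(2, 1, 0.75)
```

Texture and colour data are deliberately absent, as the text notes: only the height samples live in this structure, and everything else is supplied by the rendering context.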
For example, a game loading a large world from heightmap data in square patches would be able
to load the y-axis data for the landscape from stitched bitmaps very quickly. Texture data could be applied
afterwards, and only to areas near the camera, being updated as required. This approach could
prove advantageous in scenarios such as an online game or racing simulation, where loading the shape of the
landscape as fast as possible is more integral to the overall gameplay experience than having
all of the detail and texture data loaded at the same time as the actual map. Additionally, textures stored
and applied separately from the terrain height data can be added on a per-pixel basis rather than per-vertex,
which avoids low-quality-looking ground textures. These defects are the result of textures
stretched across two adjacent vertices that have a large y-axis discrepancy.
Figure 3 – A comparison between texture application methods showing the potential advantages of a
thoughtfully-implemented heightmap-based approach; wherein texture data is stored independently of vertex
position data. The left image shows a landscape mesh loaded from a heightmap with textures applied afterwards by
the rendering platform on a per-pixel basis, using tiling to seamlessly repeat undersized patterns. The right image
shows a landscape mesh where texture coordinates are loaded onto the same vertices as the heightmap position
data, resulting in stretched and unrealistic patterning. (Source: Reallusion, 2012).
Despite this, heightmaps are not without their disadvantages. The most notable drawback of the
data structure lies in its own simplicity: since only one y value is stored at each x,z index,
multiple terrain heights at any one position of the terrain mesh are impossible. In practical terms, this
means that topographical structures such as caves or overhanging cliff-faces are incapable of being read
from a heightmap into a mesh. At any one ground coordinate, a cave would have to store a height
coordinate for its floor, its roof and the ground level above it, in addition to any other cave systems above or
underneath at the same x,z position. Obviously this makes the implementation of complex underground
systems far from a possibility, but it even limits the ability to represent simple landscape features such as a
rock arch, since no single point can overlap another in vertical space.
In order to combat this downside, a number of alternatives to a heightmap-based
approach are available – they are discussed here, but are not the focus of this project. Perhaps the simplest refinement,
which could prove to be quick to implement but remains limited, is that of an n-layer height field.
Typically implemented as a 3-layer height field in older computer games, these are essentially three
individual heightmaps layered on top of each other. As discussed previously, this allows for the uppermost
to represent the ground, the middle layer to represent cave roofs and the lowest map to represent cave
floors. This basic adjustment lacks a great deal of scalability and only allows the representation of a
couple of individual levels before efficiency and performance costs begin to outweigh the benefits of such
a simple implementation. One other solution is a vector field, which stores a vector at each position in the
array, unlike a heightmap which stores a scalar quantity. Vector fields can be applied to Euclidean space in
n-dimensions by assigning a vector to each point [Galbis et al. 2012], and so are able to appropriately
hold data values to represent virtual landscapes. Since a vector is able to represent a direction as well as a
magnitude, vector fields are able to define a vertical offset of the model in three-dimensional space and
thus can describe landscape features such as cliffs and overhangs with little extra effort. With this in mind,
the extra storage space required to hold vector quantities at each point on the map is a considerable investment
when considering large-scale projects, especially since human artists and level designers may be more
capable of simply adding these overhanging landscape features by hand. A simple heightmap foundation
with complex features added by hand allows for the combination of a low-memory and fast loading
setting with the addition of intricate and man-made topography.
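The n-layer idea can be sketched as parallel heightmaps sharing one grid index. This is a hypothetical illustration of the 3-layer height field described above, not taken from any particular engine; all names are assumptions:

```python
# A 3-layer height field: three flat grids sharing the same (x, z) index.
# The uppermost layer represents the ground surface, the middle layer cave
# roofs, and the lowest layer cave floors, as described in the text.
class LayeredHeightField:
    CAVE_FLOOR, CAVE_ROOF, GROUND = 0, 1, 2

    def __init__(self, width, depth):
        self.width, self.depth = width, depth
        self.layers = [[0.0] * (width * depth) for _ in range(3)]

    def set(self, layer, x, z, y):
        self.layers[layer][z * self.width + x] = y

    def get(self, layer, x, z):
        return self.layers[layer][z * self.width + x]

    def has_cave(self, x, z):
        # a cave exists where the roof sits strictly above the floor
        return self.get(self.CAVE_ROOF, x, z) > self.get(self.CAVE_FLOOR, x, z)

# Usage: carve a single cave under a hill at (3, 3).
field = LayeredHeightField(8, 8)
field.set(LayeredHeightField.GROUND, 3, 3, 0.9)
field.set(LayeredHeightField.CAVE_ROOF, 3, 3, 0.5)
field.set(LayeredHeightField.CAVE_FLOOR, 3, 3, 0.2)
```

Note the scalability limit the text mentions: each extra vertical level requires a whole additional grid, so the memory cost grows linearly with the number of layers regardless of how few cells actually contain caves.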
A final, more advanced solution is the use of voxels – a data representation of a 3D block in
space. Voxels, a portmanteau of the words "volume" and "element" (much like picture elements, or pixels,
and texture elements, or texels) [Foley et al. 1990], represent a single sample on a three-dimensional grid.
As with heightmaps, the space between samples is not stored, only the attributes of the sample itself,
which may be a single or multiple values. This means that any intermediary data is added afterwards; for
example by the use of interpolation. Since every point in a world space grid is stored as voxel data,
topographical features such as caves, rock arches and overhangs can be represented. In many modern
computer games, voxels are rendered as cubes at their position in grid space, leading to a unique blocky
aesthetic that has risen to great popularity in no small part due to the success of the 2011 open-world
sandbox game Minecraft. The fact that a voxel engine gives an abstraction of the game world in terms of
a grid can prove to be a great benefit in some projects, as Lexaloffle Games' Joseph White (2011)
explains: "The most immediate [property of voxels] is that it makes everything very malleable; the world
can be represented as one big voxel map which is easy to manipulate. For example, you can arbitrarily
subtract big chunks from it without worrying about resolving a lot of tricky geometry." [Kuchera, B.
2011].
Rendering a voxel map is typically done via a process known as ray casting, whereby rays are
shot at the surface of the map and intersections are determined algorithmically, or otherwise by converting
the voxel data into polygons in preparation for traditional rendering methods. Both of these functions
have relatively high computational costs and the storage space required to hold voxel data in memory is
massively greater than that of heightmap data. Therefore, while voxels provide a solution to the problem
of a single height value, they do not come without their own drawbacks, and as such both approaches are
individually suited to different projects.
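The contrast with a heightmap can be made concrete with a minimal occupancy grid. This is a hypothetical sketch of the voxel representation discussed above, not a production voxel engine; all names are assumptions:

```python
# A minimal voxel grid: one solid/air flag per (x, y, z) sample.
# Unlike a heightmap, several solid samples may share the same (x, z)
# column, so overhangs, arches and caves are directly representable.
class VoxelGrid:
    def __init__(self, size_x, size_y, size_z):
        self.sx, self.sy, self.sz = size_x, size_y, size_z
        self.solid = [False] * (size_x * size_y * size_z)

    def _index(self, x, y, z):
        return (z * self.sy + y) * self.sx + x

    def set_solid(self, x, y, z, value=True):
        self.solid[self._index(x, y, z)] = value

    def is_solid(self, x, y, z):
        return self.solid[self._index(x, y, z)]

# Usage: a rock arch over column (x=1, z=0) – solid at y=0 and y=2,
# with air at y=1, something a single-height heightmap cannot encode.
grid = VoxelGrid(4, 4, 4)
grid.set_solid(1, 0, 0)
grid.set_solid(1, 2, 0)
```

The storage cost the text warns about is visible even here: this 4x4x4 grid holds 64 samples, where a 4x4 heightmap of the same footprint would hold only 16 values.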
2.2
Procedural Content Definition
With procedural techniques applicable to game assets, textures or even music, it is important to define what exactly encompasses procedural
content. In particular, it is helpful to delineate resources and subsystems that can definitely be certified as
outside the area of procedural content generation. The term has been summarised by Togelius et al. (2011)
as "the algorithmical creation of game content with limited or indirect user input". It is noted by the
authors that this definition does not contain the words "random" or "adaptive", despite common semantic
associations with the subject, as procedural generation methods could be both, either or none.
Specifically, one main part of the term procedural content generation which must be defined is
the word "content". For our description, content will refer to game assets outside of the core game engine,
including (but not limited to): maps, levels, textures, scripts, objects, characters and even music or sound
effects. Since all of these resources are independent of the game engine, they can be created
algorithmically as an addition to the core game without altering any mechanical framework. Importantly,
we can declare with certainty that the definition of the term for the scope of this project will not cover any
area of player or non-player character AI. The reason for this is not least the fact that there is much
more research already performed within the field of artificial intelligence in games than has been
applied to the more specialist subject of procedural content generation [Togelius et al. 2015], and thus it
proves greatly more beneficial to narrow down the definition and create a dichotomy between these two
algorithmic and behaviour-based areas of computing. In order to add to the concreteness of our definition,
some examples of what are (and aren't) considered to be procedural content generation in games and
simulations are described here:
Procedural generation:
An online character editor allowing a user to sculpt and modify the body and facial
2.3
Commercial Examples
2.3.1
Rogue (1980)
In the early history of computer games and simulations, due to limitations in memory, content
often had to be generated through algorithms [Lee, J. 2014]. One of the earliest examples of algorithmic
random level generation is 1980's Rogue, developed by Michael Toy, Glenn Wichman and Ken Arnold.
The game created dungeons rendered using ASCII text characters, using an algorithmic approach in order
to produce entirely new maps during run-time. The game offered no tutorial system for new players and
the algorithm employed by the game engine to create dungeon rooms on-the-fly made no checks to ensure
that a level was winnable [Campbell, D. 2010], which combined with the aspect of a permanent game
over state upon death of the player character made for an unforgiving but enterprising new gameplay
experience. One of the original developers of the game, Wichman (1997), declares: "I think Rogue's
biggest contribution, and one that still stands out to this day, is that the computer itself generated the
adventure in Rogue. Every time you played, you got a new adventure." Ever since, the game has spawned
an entirely new subgenre in computer gaming [Zapata, S. 2007], wherein a typical game features
dungeon-crawl roleplaying gameplay, procedurally generated environments, randomised items and
perma-death. This subtype of RPG-style adventure game has since been termed "roguelike" [King, A.
2015]. Wichman (2007) states: "I can't remember if I thought we would spawn a genre of games. But I
was aware at the time we first unleashed Rogue, that we had created something unlike anything else out
there."
Figure 4 – A screenshot of Rogue (1980), showing a simple dungeon composed of rooms and
interconnecting pathways; all generated randomly at runtime. (Source: The Married Gamers, 2010).
2.3.2
Minecraft (2011)
Developed by Swedish programmer Markus Persson as a clone of the voxel-based digging game
Infiniminer [Goldberg et al. 2013], the now-ubiquitous block-building game Minecraft was released for
public download in its earliest stage in 2009. In 2013 the game attained the accolade of 5th best-selling
computer game of all time [O'Brien, C. 2013]. The game was created by Persson alone, with no large
studio or backing investors to help develop or promote the project. Shortly after beginning development,
Persson assembled and founded the games studio Mojang Specifications (now known as Mojang AB –
Swedish for "Gadget Ltd"), who were responsible for publishing the first stable release of the game
(Minecraft 1.0) in 2011 [Persson, M. 2010].
Between the years of 2009 and 2011, Minecraft moved from the stage named "Indev" (short for "in
development") to the next iteration, "Infdev" ("infinite development"). It was during this cycle that
world sizes were increased to be infinite on the horizontal plane, and the point at which Persson was
impelled to search for an algorithmic approach to terrain generation. In its earliest stages of production,
the game utilised two-dimensional Perlin Noise heightmaps to construct procedural terrain; one for
overall elevation, another for terrain roughness and a final one for local detail [Persson, M. 2011]. More
information on the Perlin Noise algorithm is presented in detail in section 2.4.3. Since the implementation
followed a heightmap-based approach, it suffered from the limitations described earlier in section 2.1.
Specifically, the engine presented no way of generating overhanging landscape features. Persson
acknowledged this by switching over to a three-dimensional variation of the Perlin Noise function. This
allows the engine to comb over the landscape on different levels, first by shaping broad topography such
as mountains, hills and plains, before adding noise layers to represent details such as lakes and foliage
[Fingas, J. 2015]. By combining the new algorithm with the existing 2D elevation and noise maps, while
introducing new sampling and interpolation methods [Persson, M. 2011], a procedural generation engine
was created which would set the standard for open world voxel-based games to come. The game also
allows players to set a "seed" when generating a new map; a starting point for the engine's random number
generator. This ensures that any map created on any computer with the same level seed results in an
identical landscape and player starting position [Bergensten, J. 2011].
Minecraft also makes use of a biome system, which relates to my further investigation of
topographical accuracy in procedural terrain generation. The quasi-infinite game world is segmented into
randomly partitioned habitats; each with its own temperature, terrain and weather subsystems. By
ensuring that snow blocks only spawn atop mountain ranges, and that mountains themselves only appear
in certain regions, Minecraft demonstrates a genuine attempt at pseudo-realism.
Figure 5: A screenshot of Minecraft Beta version 1.4 (2011). The procedural generation subsystem builds
endless voxel-based landscapes based on a random seed. Land patches are loaded in chunks (16x16 horizontal
blocks) using a biome system to ensure that meteorological and geographical features are non-contradictory.
(Source: The Scientific Gamer, 2010).
2.3.3
Space Engine (2015)
Space Engine is "a virtual recreation of the known universe that you can explore and travel
around without the use of a supercomputer" [Tamblyn, T. 2014]. Development of Space Engine was
initiated in 2005 [Sapozhkov, A. 2012], before it was released publicly for the first time in 2010. The
project has a single developer, Russian programmer and astronomer Vladimir Romanyuk [Tamblyn, T.
2014], and maps the entirety of the universe's known star systems while using procedural content
generation to add more beyond the scope of the discovered cosmos. According to Romanyuk (2016),
"The entire Hipparcos catalog of stars, as well as all known extrasolar planets, over ten thousand
galaxies, and all of the most prominent objects in our Solar system are included. This adds up to over
130,000 objects. As for procedural objects, there are more galaxies and star systems in Space Engine
than exist in reality in all of the observable universe."
Figure 6: A screenshot showing a procedurally generated solar system in Space Engine (2015). The game
engine populates space around the player camera with celestial bodies, each with their own atmospheric system and
attributes; all of which are moderated in order to be accurate to the known laws of space and physics. (Source:
Eurogamer, 2015).
One of the most innovative features implemented in Space Engine is the process by which
planets, stars and nebulae are created. Specifically, the engine generates, stores and handles their unique
datasets in order to calculate and retain their physical and scientific properties with meticulous technical
accuracy. Take the example from Figure 6 of a procedurally generated star outside of the documented
universe, named by the game simply as RS 8409-2710-8-14246776-51 8.7. The star is classified as a cold
titan with a temperature of 102.1 Kelvin, in addition to having its own size, mass, age, gravity,
atmospheric pressure, axial tilt, orbital patterns and more [Ellison, C. 2015]. While producing and storing
this data for such an insurmountable number of astronomical objects (which are seeded to be at exactly
the same universal locations and with the same attributes no matter where the simulation is run) is an
impressive feat in itself, it is the way in which Romanyuk uses this pseudo-random data to create entire
galaxies filled with stars, each of which conforming to scientific laws, that makes Space Engine a
remarkable example of an application employing procedural generation without compromising realism.
According to Romanyuk (2011), the engine uses "an exact model of the Earth's atmosphere" developed by
Eric Bruneton, which has been adapted for other planetary objects. All of the generated star data follows a
series of rules closely modelled on reality; terrain generation is based upon not only the object's physical
properties, but also its temperature, state, proximity to other stars and even any gravitational tidal forces
[Hooper, T. 2015]. This is combined with an implementation of the Perlin Noise algorithm in order to
produce a virtual universe filled with endless stars, each with their own procedural and quasi-realistic
terrain. It is this type of parameterised modelling that should be added to my own solution in order to
satisfy objective three.
Figure 7: A screenshot of a planet's surface in Space Engine (2015). The procedural terrain subsystem
uses an implementation of the Perlin Noise function in order to produce fractal landscapes on a planetary scale. All
pseudo-random generation follows a carefully designed procedural ruleset so that stars and other astronomical
phenomena are realistically positioned in space with accurate properties and surface terrain. (Source: The Torch:
Entertainment Guide, 2015).
2.4
Fractal Algorithms
So far it has been established that a 2D data structure known as a heightmap can be used to
generate 3D terrain. In a procedural system, the heightmap must be populated algorithmically, with the
goal being to ensure all values in the dataset are randomized without being too distant at neighbouring
points. This type of pseudo-random pattern is known as noise, multiple octaves of which can be
combined to create fractal noise. Several algorithms have been devised to create noise suitable for
procedural generation; they are evaluated here.
2.4.1
Midpoint Displacement
The midpoint displacement algorithm is a function for generating semi-controllable random noise
in a single dimension. It is the precursor to the diamond-square algorithm, which is discussed in section
2.4.2 and implemented in my program. Midpoint displacement is most easily visualised when run on a
line connecting two points in space. The function works by finding the centre-point of the line and
displacing it by a random arbitrary amount along the y-axis, then repeating for each side of the midpoint
while reducing the displacement amount each time in order to create a smoothing effect. This is
reiterated as many times as is required.
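The procedure above can be sketched as a short recursive routine. This is an illustrative sketch rather than the implementation developed later in this dissertation; the function and parameter names are assumptions, and the smoothness constant is applied as the 2^(-S) range multiplier discussed below.

```cpp
#include <cmath>
#include <cstdlib>
#include <vector>

// Recursively displace the midpoint of heights[lo..hi] (hi - lo a power of
// two), reducing the random range by a factor of 2^(-smoothness) at each
// recursion level. Illustrative sketch only; names are assumptions.
void midpointDisplace(std::vector<float>& heights, int lo, int hi,
                      float range, float smoothness) {
    if (hi - lo < 2) return;                      // no midpoint left to set
    int mid = (lo + hi) / 2;
    float r = 2.0f * rand() / RAND_MAX - 1.0f;    // random value in [-1, 1]
    heights[mid] = 0.5f * (heights[lo] + heights[hi]) + r * range;
    range *= std::pow(2.0f, -smoothness);         // shrink displacement range
    midpointDisplace(heights, lo, mid, range, smoothness);
    midpointDisplace(heights, mid, hi, range, smoothness);
}
```

Seeding the two endpoints and calling the function on a line of 2^n + 1 samples fills in every intermediate height.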
Figure 8: An example run of the midpoint displacement algorithm in one-dimensional space. During each
iteration every edge between two points is split in half, and its centre-point is raised or lowered by a random
amount in the y direction. At each step the extent of random displacement is lowered in order to create the effect of
noise while retaining the overall shape of the first iteration. (Source: Aschenblog: Thoughts on Code and
Fabrication, 2014).
The defining parameter of the midpoint displacement algorithm is the smoothness constant,
which directly correlates to the displacement of the midpoint at each step. Typically this value falls into
the range of 0 to n, where n is an arbitrary unsigned number, and as such for smoothness S we can assume
{S | 0 ≤ S ≤ n}. In order for a higher value of S to create a greater disparity between adjacent points it
should be inverted in the implementation. One way to achieve this is by applying its value to an
expression such as 2^(-S). In standard implementations a value for smoothness between 0 and 1 is used, since
any amount greater than one will show exponentially less difference as 2^(-S) tends towards zero, meaning
that a result between 1.0 and 0.5 is produced; wherein the lowest smoothness value outputs 1.0 and the
highest outputs 0.5 [Martz, P. 1997]. This final value is used as a multiplier to decrease the random
displacement at each iteration of the algorithm (i.e. a smoothness value of 0 would result in the random
offset range never being reduced, thus producing a jagged output).
Figure 9: The output of the midpoint displacement algorithm after several iterations with three different
values used for smoothness (H). As demonstrated, a higher smoothness value equates to a smaller multiplier of the
random displacement at each step when put through an expression such as 2^(-H), meaning the change in height offset
between nearby points is lower and as such a smoother line is produced. Conversely, a lower smoothness value
results in a greater disparity between the height offset of neighbouring points, thus creating a noisier output.
(Source: Game Programmer, 2014).
2.4.2
Diamond-Square
Introduced by Alain Fournier, Don Fussell and Loren Carpenter in 1982 [Fournier et al. 1982],
the Diamond-Square algorithm was developed as an improvement over fractional Brownian motion; a
continuous-time stochastic process [Mandelbrot, B. 1968]. This original model was applicable to the
representation of multidimensional terrains, but is greatly improved by the diamond-square function's
ability to produce a satisfactory approximation to fractional Brownian motion in a faster time [Fournier
et al. 1982]. Although described as flawed in the years following its proposal for creating noticeable
creases [Miller, G. 1986], diamond-square has seen moderate use due to its simple implementation and
low computational cost. The algorithm is otherwise known as the random midpoint displacement fractal,
cloud fractal or the plasma fractal [Beard, D. 2010].
The diamond-square algorithm generates noise using recursive subdivision. It runs based on the
theory of midpoint displacement, extending the idea from one dimension into two and thus making it
ideal for the population of a heightmap. While the midpoint displacement algorithm works on the idea of
repeatedly splitting a line into even segments and adding a random height offset, diamond-square
continuously splits a square into an even number of smaller square segments, populating the corners and
centre-point of each new square with a y offset at each iteration. It requires a fixed grid with a width and
height of size 2^n + 1 in order to prevent problems during recursion, wherein each sub-square must have an
exact midpoint and therefore a width of odd parity. After the initial corner points of the grid are seeded,
the algorithm finds the midpoint of any corners by calculating their average (the diamond step), then
produces four smaller squares by finding the midpoint of each diamond shape created (the square step).
These steps are repeated every iteration until every vertex on the grid has been adjusted, with the amount
of height displacement reduced at each stage controlled by a smoothness value as described previously.
This means that naturally the algorithm performs most effectively with a square grid, however it can be
adapted to fit a rectangular map with some effort. Although the function is recursive by nature, it can also
be implemented as an iterative routine [Martz, P. 1997].
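An iterative form of the algorithm can be sketched as follows. This is a minimal illustration of the diamond and square steps described above, not the NCLGL-based implementation presented in section 3.3.1; all names and parameter choices are illustrative, and edge diamonds are handled by averaging only the corners that exist.

```cpp
#include <cmath>
#include <cstdlib>
#include <vector>

// Iterative diamond-square on a (2^n + 1) x (2^n + 1) grid. The four corners
// are seeded to maxY / 2 and the random range shrinks by 2^(-smoothness)
// after every diamond + square pass. Illustrative sketch only.
std::vector<float> diamondSquare(int n, float maxY, float smoothness) {
    int size = (1 << n) + 1;
    std::vector<float> h(size * size, 0.0f);
    auto at  = [&](int x, int z) -> float& { return h[z * size + x]; };
    auto rnd = [&](float range) {
        return (2.0f * rand() / RAND_MAX - 1.0f) * range;
    };

    // Seed the four corner vertices.
    at(0, 0) = at(size - 1, 0) = at(0, size - 1)
             = at(size - 1, size - 1) = maxY / 2.0f;

    float range = maxY / 2.0f;
    for (int step = size - 1; step > 1; step /= 2) {
        int half = step / 2;
        // Diamond step: centre of each square = average of its four corners.
        for (int z = half; z < size; z += step)
            for (int x = half; x < size; x += step)
                at(x, z) = (at(x - half, z - half) + at(x + half, z - half) +
                            at(x - half, z + half) + at(x + half, z + half)) / 4.0f
                           + rnd(range);
        // Square step: centre of each diamond = average of available corners.
        for (int z = 0; z < size; z += half)
            for (int x = (z / half % 2 == 0) ? half : 0; x < size; x += step) {
                float sum = 0.0f; int count = 0;
                if (x >= half)       { sum += at(x - half, z); ++count; }
                if (x + half < size) { sum += at(x + half, z); ++count; }
                if (z >= half)       { sum += at(x, z - half); ++count; }
                if (z + half < size) { sum += at(x, z + half); ++count; }
                at(x, z) = sum / count + rnd(range);
            }
        range *= std::pow(2.0f, -smoothness);  // reduce displacement per pass
    }
    return h;
}
```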
Figure 10: A visualization of the diamond-square algorithm during execution. The image shows the
function demonstrated in two dimensions over five stages, where a red circle denotes a new vertex and a blue
circle denotes a vertex which has already been set: i) Initial corner vertices are seeded with a random value in a
specified range. ii) Diamond step. iii) Square step. iv) Diamond step. v) Square step. After five stages (two total
iterations of diamond step plus square step), all vertices have been set to a value and therefore the algorithm has
reached its stopping condition. (Source: JavaWorld, 1998).
As seen in Figure 10, the number of vertices being set increases dramatically during each
subdivision. At each iteration, a single square is turned into 4 squares, followed by 16, 64, 256, then 1024
and so on. Specifically, the number of squares yielded is equal to 4^i, where i denotes the number of
complete iterations performed in total. For a grid of width 2^n + 1, the total number of vertices to be set by
the algorithm is (2^n + 1)^2. Therefore to generate a heightmap with a width of 257 units a total of 66,049
unique vertices must be stored, equating to 393,216 indices when using an index buffer. This presents a
drawback in terms of scalability, since in this example the next available width after 257 would be 513,
resulting in over a million indices stored in graphics memory, although the actual data being stored is
fairly small. Depending on the program implementation, the large disparity between potential grid sizes
may not be suitable. In terms of complexity however, diamond-square runs in linear time T(n) = O(n), as
opposed to alternative fractal-based approaches such as the Fourier transformation which has a typical
execution time of T(n) = O(n log(n)) [Bird et al. 2013] and thus runs in linearithmic time.
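The counts quoted above can be checked directly: a grid of width 2^n + 1 holds (2^n + 1)^2 vertices and (2^n)^2 quads, each drawn as two indexed triangles of three indices. A minimal sketch (function names are illustrative):

```cpp
#include <cstdint>

// Vertex count for a (2^n + 1)-wide square heightmap grid.
uint64_t vertexCount(int n) { uint64_t w = (1ull << n) + 1; return w * w; }

// Index count when each of the (2^n)^2 quads is two indexed triangles.
uint64_t indexCount(int n)  { uint64_t q = 1ull << n;       return 6 * q * q; }
```

For n = 8 (width 257) these give 66,049 vertices and 393,216 indices; stepping up to n = 9 (width 513) already exceeds 1.5 million indices, illustrating the scalability concern.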
One concern presented by the algorithm in its most basic form is the way in which it deals with
edge cases. Any vertex located on an edge of the grid has only three diamond corners from which it can
be averaged in the square step. Three solutions to this are to either use a random value, use the average
of only the three points or to wrap the heightmap and get the value from the aligned vertex on the
opposite side of the grid [Mijailovic, V. 2015]. This means that when taking a diamond corner from the
other side of the grid we can assume that both edge vertices will be set to equal values, therefore allowing
the heightmap to be seamlessly stitched to another to give the impression of infinitely repeating terrain;
this works as long as the four corner values are seeded to the same amount on each grid. Another small
advantage of this is that since an edge value will be the same on both sides of the grid, the function can be
implemented such that these vertices are set simultaneously and so fewer unique values need to be
calculated in total [Martz, P. 1997].
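The wrapping strategy can be sketched as a small indexing helper, assuming (hypothetically) a grid whose last row and column duplicate the first, so reads that fall off one side come back in on the opposite side:

```cpp
// Wrap a (possibly out-of-range) grid coordinate onto a size-wide grid whose
// opposite edges hold identical values. The wrap period is size - 1 because
// the last column repeats the first. Illustrative helper only.
int wrapIndex(int i, int size) {
    int period = size - 1;
    return ((i % period) + period) % period;  // handles negative i as well
}
```

A square-step lookup at (x - half, z) near the left edge would then read at(wrapIndex(x - half, size), z), keeping opposite edges consistent for seamless tiling.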
While fairly simple to implement and computationally inexpensive, the algorithm can produce
various visual artifacts [Mogensen, T. 2009]. Aside from the aforementioned issue of dealing with edge
cases, further problems are inherent in the function's reliance on square shapes to produce its y offset
values, as scrutinized by Miller (1986). Since the amount of random offset is reduced at each step, the
potential for high peaks and funnels in the output terrain exists primarily during the first few
diamond and square stages. This may occur if the amount of random offset is reduced too rapidly, or the
random number generator creates values at adjacent points with a large discrepancy. It is near impossible
to eliminate these sharp points using the basic implementation of the algorithm alone, although they can
be mitigated to the point of being unnoticeable by using an appropriate value for the smoothness
parameter. Another consideration is the lack of controllability that diamond-square presents. If a designer
wishes to generate a mountain peak at a certain point, it is not enough to simply set that vertex to a large
height before running the algorithm since it may only be passed over by the function at a late iteration
level. In this case the point is considered too late, resulting in a spike in the terrain. To prevent this, values
around the point must also be seeded by hand [Mijailovic, V. 2015], a time-consuming process that may
simply eliminate the point of using procedural generation in the first place.
Figure 11: An example of artifacts present as a result of the diamond-square algorithm's edge cases not
being properly accommodated, resulting in steep cliffs where values are calculated independently of adjacent
vertices. (Source: StackExchange, 2012).
Figure 12: An example of sharp spikes and dips in a landscape generated by the diamond-square
algorithm. Each artifact exists at a location that is the square corner or midpoint of another artifact, where that
value is set with too great a disparity to its adjacent vertices. (Source: Fractal Forums, 2016).
2.4.3
Perlin Noise
Developed by Ken Perlin in 1983 after working on the 1982 Disney science fiction film Tron
[Kerman, P. 2006], the eponymous Perlin noise function was first described in a formal document in 1985
[Perlin, K. 1985]. Since its introduction as a type of gradient noise, it has seen use in visual effects for
films, graphical textures, procedural terrain and other media. Unlike diamond-square, Perlin noise's
gradient details are all of a uniform size, allowing for a much more controllable and scalable output. In
one-dimensional terms, the algorithm works by plotting pseudorandom points which are interpolated to
create smooth graphical functions. This can be done a number of times with various amplitudes and
frequencies and summed to produce a final fractal noise function; in which case each further noise
function that is added is known as an octave. Noise functions created this way have properties defining
their shape, allowing the final output to be more controllable than something created by a variation of the
midpoint displacement algorithm (such as diamond-square). These are defined as amplitude and
frequency; where amplitude is the difference between possible maximum and minimum y values and
frequency is 1/wavelength, with wavelength being the difference between neighbouring points in the x
direction [Elias, H. 1998].
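The octave summation described above can be sketched as a small routine, with the base noise function left as a parameter so any smooth noise source can be substituted. Note this sketch doubles the frequency per octave for simplicity, whereas the figure below uses a factor of four; both are valid choices of lacunarity.

```cpp
#include <cmath>
#include <functional>

// Sum several octaves of a base noise function. Each octave scales the
// amplitude by the persistence factor and doubles the frequency, producing
// fractal noise. Illustrative sketch; names are assumptions.
float fractalNoise(std::function<float(float)> noise, float x,
                   int octaves, float persistence) {
    float total = 0.0f, amplitude = 1.0f, frequency = 1.0f;
    for (int i = 0; i < octaves; ++i) {
        total += noise(x * frequency) * amplitude;
        amplitude *= persistence;  // lower persistence -> smoother output
        frequency *= 2.0f;         // each octave contributes finer detail
    }
    return total;
}
```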
Figure 13: An example of the output of Perlin noise in one dimension, combining three noise functions
into a final pattern. For each subsequent octave, the amplitude is divided by four and the frequency is multiplied by
four. By changing the amount of amplitude per frequency (a relation named by Mandelbrot as persistence)
different results can be created, such as smooth large parabolas or straight and jagged lines. (Source: Hugo Elias,
1998).
Perlin's function uses a technique involving the creation of a lattice of values with random gradients in
order to produce gradient noise, although an alternative is to create value noise by interpolation of
random values. Either of these can be combined in octaves to result in fractal noise. Value noise is briefly
discussed here as an alternative implementation. After the noise function has output a set of values for
each point on the line, interpolation is used to fill and smooth the intermediary space between them.
Any method of interpolation can be used, each of which has a trade-off between accuracy of curvature
and runtime complexity. For an extremely basic implementation of the algorithm, linear interpolation
produces a very fast result but with poor visual fidelity. Other methods include cosine and cubic
interpolation; each producing a more accurate curve than the last but with a greater decrease in
computational performance. Another method for smoothing noise is to average each point with its
neighbouring vertices in order to apply a blurring filter. When extended into two or more dimensions, this
reduces the effect of squareness but comes at the cost of reducing the contrast of the noise.
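The linear and cosine variants can be sketched in a few lines; cubic interpolation is omitted for brevity. These are standard formulations rather than code from this project:

```cpp
#include <cmath>

// Linear interpolation between samples a and b with blend factor t in [0, 1].
float lerp(float a, float b, float t) { return a + (b - a) * t; }

// Cosine interpolation: remap t onto a half-cosine curve before blending,
// giving a smoother transition than lerp at a small extra cost.
float cosineInterp(float a, float b, float t) {
    float ft = (1.0f - std::cos(t * 3.14159265f)) * 0.5f;
    return a + (b - a) * ft;
}
```

Both agree at the endpoints and at the midpoint; the cosine version differs in between, where its zero endpoint derivatives remove the visible kinks of linear blending.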
Figure 14: A comparison between different interpolation methods applied to create value noise. The top
function is interpolated linearly based on a value between 0 and 1 between each two points, resulting in a fast but
unsmoothed output. The middle function uses cosine interpolation, producing much more visually pleasing results
compared with the former at a slight loss of speed. The final function makes use of cubic interpolation, which
calculates curved edges based on the points before and after the two being interpolated between. It can be difficult to
implement and is computationally expensive, while producing results not noticeably more accurate than the cosine
method for most applications. (Source: Hugo Elias, 1998).
Figure 15: An example of gradient noise, the alternative to the value noise described previously and the type
of noise produced by Perlin's algorithm. Rather than interpolating between points by using their set values, each
point is assigned a pseudo-random gradient (represented by a red line). The space between vertices is interpolated
using the closest linear gradients from each neighbouring point and a function with a zero first derivative (such as
Hermite blending: 3t^2 - 2t^3 in Ken Perlin's original implementation). (Source: Linköping University, 2005).
When gradient noise is extended into two dimensions the array of gradient vectors becomes a
grid, which then becomes a lattice in three dimensions and so forth. A point on the grid (P) then gets
assigned a value computed from the scalar product between the point's distance vectors to the closest four
surrounding grid points and their random gradient vectors [Biagioli, A. 2014]. The contribution of the
gradient values from these four points is blended using a method similar to bilinear interpolation
[Gustavson, S. 2005], resulting in a final noise value for the point P. In 2D space, any point P has four
surrounding grid points, while in 3D space there will be eight surrounding points (requiring a form of
trilinear interpolation to determine the interpolant). This can be generalised to n dimensions, where the
number of surrounding grid points for P is always 2^n. The gradient vectors assigned to these grid points
must be calculated in a pseudorandom manner and at a high speed so as not to bottleneck the performance
of the algorithm. Pseudorandomness is used so that it can be certain that neighbouring gradients have
dissimilar values, producing consistent noise with the imitation of true randomness. Perlin noise
addresses this by using a set of two pre-computed tables indexing both permutations (0 to 255 in a
random order) and gradients [Perlin, K. 1985].
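The lattice, permutation table and gradient blending described above can be sketched as a compact 2D implementation. This follows the general structure of Perlin's published approach but is an illustrative simplification (for instance, only four diagonal gradients are used, and the 3t^2 - 2t^3 Hermite fade stands in for the improved quintic curve); it is not the implementation developed in this project.

```cpp
#include <algorithm>
#include <cmath>
#include <numeric>
#include <random>
#include <vector>

// A minimal 2D gradient-noise sketch: a shuffled permutation table hashes
// each lattice corner to a pseudo-random gradient; the dot products with the
// four corner distance vectors are blended with a smooth fade curve.
struct PerlinNoise2D {
    std::vector<int> perm;

    explicit PerlinNoise2D(unsigned seed) : perm(512) {
        std::vector<int> p(256);
        std::iota(p.begin(), p.end(), 0);            // permutation of 0..255
        std::shuffle(p.begin(), p.end(), std::mt19937(seed));
        for (int i = 0; i < 512; ++i) perm[i] = p[i & 255];  // doubled table
    }

    // Hermite fade 3t^2 - 2t^3: zero first derivative at t = 0 and t = 1.
    static float fade(float t) { return t * t * (3.0f - 2.0f * t); }

    // Hash a lattice corner to one of four diagonal gradients and dot it
    // with the distance vector (x, y).
    static float grad(int hash, float x, float y) {
        switch (hash & 3) {
            case 0:  return  x + y;
            case 1:  return -x + y;
            case 2:  return  x - y;
            default: return -x - y;
        }
    }

    float noise(float x, float y) const {
        int xi = (int)std::floor(x) & 255, yi = (int)std::floor(y) & 255;
        float xf = x - std::floor(x), yf = y - std::floor(y);
        float u = fade(xf), v = fade(yf);
        int aa = perm[perm[xi] + yi],     ab = perm[perm[xi] + yi + 1];
        int ba = perm[perm[xi + 1] + yi], bb = perm[perm[xi + 1] + yi + 1];
        // Bilinear-style blend of the four corner contributions.
        float x1 = grad(aa, xf, yf)
                 + u * (grad(ba, xf - 1, yf) - grad(aa, xf, yf));
        float x2 = grad(ab, xf, yf - 1)
                 + u * (grad(bb, xf - 1, yf - 1) - grad(ab, xf, yf - 1));
        return x1 + v * (x2 - x1);
    }
};
```

At every integer lattice point the distance vector to the nearest corner is zero, so the noise value there is exactly zero, which is a useful sanity check for any gradient-noise implementation.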
Figure 16: A visualisation of the calculation of a final noise value for a single point P using Perlin noise
in two dimensions with a grid data structure. Each point on the grid has a pseudorandom 2D gradient vector
assigned to it, typically from a pre-computed array. The 2D distance vectors between P and its four closest
neighbouring grid points are calculated (indicated by the green arrows), which are then each applied to a respective
scalar product operation with the four gradient vectors (represented by the cyan arrows). Bilinear interpolation is
used to determine the weighted influence of each vector, resulting in a final noise value for P. (Source: ECGLAB
Wiki, 2011).
Perlin noise has a time complexity of T(n) = O(2^n), where n denotes the number of dimensions
in which the algorithm is implemented. Note that this complexity scaling directly correlates to the number of
surrounding grid points that an arbitrary point P is influenced by, i.e. the number of dot product
operations which must be computed. Perlin noise is typically implemented in 2D, 3D or even 4D space,
but the function can theoretically be defined for any number of dimensions. Where the function is
represented in an extra dimension it may be used to represent time. An example implementation of this
could be a 3D landscape that is animated by the noise to create rippling ocean waves [Biagioli, A. 2014].
2.4.4
Simplex Noise
Developed as an improvement of Perlin noise by the same author in 2001, simplex noise modifies
the original algorithm in order to be less computationally expensive, remove artifacts by providing
complete visual isotropy and scale into higher dimensions at a smaller performance cost [Perlin, K.
2001]. The refined algorithm works based on the principle of simplicial tessellation of N-space
[Gustavson, S. 2005], replacing the quadrilateral grid shapes that surround an arbitrary point with
simplices (defined as the simplest N-dimensional polytope that can surround the point). In 2D space this
means that grid points are positioned in the shape of tessellated equilateral triangles rather than squares;
since every two triangles can be aligned to create a rhombus and thus fill up an equivalent area. Beyond
two dimensions these triangular structures become more visually complex and less equilateral in form. In
3D the simplex grid becomes a tetragonal disphenoid honeycomb, while in four dimensions the simplex
shape used is a five-cornered pentatope [Gustavson, S. 2005]. For n dimensions, the shape always has
n + 1 corners to be considered rather than 2^n, resulting in a much more efficient computation of noise
between a point and its surrounding vertices. This refinement results in an overall time
complexity of T(n) = O(n^2), as opposed to the original Perlin function's T(n) = O(2^n), meaning that the
improved algorithm scales with greater efficiency into higher dimensions [Perlin, K. 2001].
Figure 17: An example of a simplex grid in two dimensions, where the area of squares is replaced by
rhombi (consisting of two equilateral triangles). An arbitrary point with x,y coordinates is transformed into u,v
coordinates to be identified in the simplex grid. Note that the grid is skewed so that the direction vectors v and u are
symmetrical along the line y = x. A point such as (3,4) on the simplex grid directly translates to (3,4) on the square
grid. This reduces the number of multiplications required to transform between grid states from four to one.
(Source: Kristian Nielsen, 2015).
Since the simplex grid rhombi directly relate to grid squares, it is fairly easy to compute which
rhombus a point lies in. The next step is to determine which of the two triangular halves of the rhombus
the point is inside of. To achieve this, the fractional part of the point's x coordinate is compared with its y
coordinate. If the x coordinate is greater, the point must lie in the lower triangle; otherwise, if the y
coordinate is larger the point must be inside of the upper triangle. One of the limiting factors in the design
of both the original Perlin noise function and similar value noise functions was the use of interpolation to
determine contributions from surrounding grid points. Simplex noise solves this by instead using a falloff
function [Perlin, K. 2001], chosen so that the influence from each corner reaches zero before crossing the
boundary to the next simplex [Gustavson, S. 2005]. In two dimensions this function is radial, while being
spherical in three dimensions, and so on. This speeds up the calculation of derivatives by simply summing
the independent contribution from each corner to get a final noise value for a point. The falloff function
implemented should work in n dimensions and have zero derivatives when appropriate in order to bound
each simplex when calculating corner influences [Flick, J. 2014].
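The suitability of f(x) = (1 - x^2)^3 can be checked by differentiating directly: with t = 1 - x^2, f'(x) = -6x t^2 and f''(x) = -6t^2 + 24x^2 t, both of which vanish at x = 1 where t = 0. The sketch below encodes the function and its first two derivatives so this boundary property can be verified; it is a worked check, not project code.

```cpp
#include <cmath>

// Simplex-style falloff candidate f(x) = (1 - x^2)^3 and its derivatives.
float falloff(float x)       { float t = 1.0f - x * x; return t * t * t; }
float falloffDeriv(float x)  { float t = 1.0f - x * x; return -6.0f * x * t * t; }
float falloffDeriv2(float x) {
    float t = 1.0f - x * x;
    return -6.0f * t * t + 24.0f * x * x * t;  // d/dx of -6x(1 - x^2)^2
}
```

Both derivatives are zero at the simplex boundary x = 1, so summing independent corner contributions introduces no discontinuity in value or slope.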
Figure 18: A comparison between similar falloff functions with different exponents. The red function, f(x)
= (1 - x^2), has non-zero first and second derivatives at x = 1. The orange function, f(x) = (1 - x^2)^2, still has a single
non-zero derivative at x = 1. Finally, the blue function, f(x) = (1 - x^2)^3, has both zero derivatives, making it an ideal
falloff function for simplex noise implementation. (Source: Catlike Coding, 2014).
3. Methodology
3.1
System Design
Following the agile development method, the software solution is implemented in the C++
programming language using the OpenGL rendering context. Additional library resources are integrated
using the NCLGL framework (Newcastle Game Library) for Microsoft Visual Studio and the Simple
OpenGL Image Library (SOIL), allowing for the implemented procedural generation algorithms to be
represented visually at each cycle of development. The solution aims to be modular and additive,
allowing for runtime modification of function parameters to produce visually different segments of
terrain. Each improvement or refinement of a function is added as an independent modification, meaning
earlier iterations are still available for comparison during the evaluation stage. I decided against using
middleware that provides premade fractal noise functionality, such as libnoise [Bevins, J. 2003], as the
aim of the project is to define these algorithms myself in order to discuss their low-level differences.
C++ has been chosen as the implementation language after analysing the system specification. It
is the language of choice in the computer games industry, making the source code directly portable to
graphical simulations where it could be of use or further development. Technically, the language allows
for intricate memory management and provides the ability to make use of specialised features such as
operator overloading. While the control and storage of data in memory is important for optimizing
algorithms, it is of even greater concern when considering procedural generation, which itself aims to
minimize memory usage by generating content at runtime. Memory limitations are, after all, one of the
main reasons for which procedural generation was implemented for games and simulations originally
[Yannakakis et al. 2011]. The latter feature, operator overloading, is particularly useful when
writing functions involving complex vector equations. By using NCLGL's existing set of vector classes,
time can be saved when manipulating their components whilst improving the readability of code in
subsequent functions. Microsoft Visual Studio 2015 has been chosen as the development environment for
the software for its powerful debugging and analysis tools, which will be vital to the performance testing,
refinement and final evaluation of the project.
3.2
Class Hierarchy
The program is arranged into a series of classes with a hierarchical structure. NCLGL creates an
OpenGL rendering context which feeds data into the window to be rasterized on screen. Specific details
of the render data are implemented in the Renderer class, which initializes meshes and shader files. Textures
can also be loaded in (and out) by making use of the functionality of the SOIL extension.
Figure 19: A representation of NCLGL's class structure. The OGLRenderer class creates an OpenGL
rendering context which the Renderer class derives from. Objects are loaded into the program in the form of Meshes
and Shaders. (Source: Newcastle University, 2015).
Objects sent to the renderer are created using the Mesh class data structure. A Mesh encapsulates
an array of 3D position vectors, 2D texture coordinates, vertex colours and indices. This allows for the
use of OpenGL's index buffer, equating to large performance savings with large data structures (such as
heightmaps). The HeightMap class is an extension of the Mesh class, storing a single array of vertex
position data with modifiable texture, colour and index attributes.
3.3
Implementation
3.3.1
Diamond-Square
HeightMap() {
    generateArray();
    initialiseCorners();
    DiamondSquare(array);
}

DiamondSquare(array) {
    diamondStep();
    squareStep();
    splitIntoQuarters();
    reduceDisplacement();
    for (subArray : quarters) {
        DiamondSquare(subArray);
    }
}
In order to provide an abstraction of the implementation, a Square class is added which stores the
data for a square object represented by its corner vertices. This allows the grid to be quartered at each step
into sub-squares which are provided as arguments to the recursive function call. My recursive
interpretation involves successive calls back to the function on smaller subarrays until a stopping
condition is met. For the first implementation of the function the absolute distance between top left and
Page 42 of 80
top right corners is calculated at each step, with the function stopping if this distance is too small to be
further subdivided.
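The recursive scheme above can be sketched in C++ roughly as follows. This is an illustrative reconstruction rather than the dissertation's actual source: the grid representation, the randomOffset helper and the displacement decay factor of 2^(-smoothness) are assumptions, chosen so that a smoothness near 1 damps the random displacement strongly while a value near 0 barely reduces it, matching the behaviour described in Figure 21.

```cpp
#include <cmath>
#include <cstdlib>
#include <vector>

using Grid = std::vector<std::vector<float>>;

// Uniform pseudorandom value in [-range, range].
static float randomOffset(float range) {
    return ((float)std::rand() / RAND_MAX) * 2.0f * range - range;
}

// Recursive diamond-square over the sub-square with top-left corner (x0, y0)
// and side length 'size'. Recursion stops when the square cannot subdivide.
void diamondSquare(Grid& g, int x0, int y0, int size, float disp, float smoothness) {
    if (size < 2) return;                       // stopping condition
    int half = size / 2, x1 = x0 + size, y1 = y0 + size;
    // Diamond step: centre of the square from its four corners.
    g[y0 + half][x0 + half] =
        (g[y0][x0] + g[y0][x1] + g[y1][x0] + g[y1][x1]) / 4.0f + randomOffset(disp);
    // Square step (simplified): edge midpoints from their two end corners.
    g[y0][x0 + half] = (g[y0][x0] + g[y0][x1]) / 2.0f + randomOffset(disp);
    g[y1][x0 + half] = (g[y1][x0] + g[y1][x1]) / 2.0f + randomOffset(disp);
    g[y0 + half][x0] = (g[y0][x0] + g[y1][x0]) / 2.0f + randomOffset(disp);
    g[y0 + half][x1] = (g[y0][x1] + g[y1][x1]) / 2.0f + randomOffset(disp);
    // Recurse into the four quarters with reduced displacement.
    float d = disp * std::pow(2.0f, -smoothness);
    diamondSquare(g, x0, y0, half, d, smoothness);
    diamondSquare(g, x0 + half, y0, half, d, smoothness);
    diamondSquare(g, x0, y0 + half, half, d, smoothness);
    diamondSquare(g, x0 + half, y0 + half, half, d, smoothness);
}

// Build a (2^n + 1) x (2^n + 1) heightmap, seeding the corners to maxY / 2
// as in the first implementation.
Grid makeHeightMap(int n, float maxY, float smoothness) {
    int w = (1 << n) + 1;
    Grid g(w, std::vector<float>(w, 0.0f));
    g[0][0] = g[0][w - 1] = g[w - 1][0] = g[w - 1][w - 1] = maxY / 2.0f;
    diamondSquare(g, 0, 0, w - 1, maxY / 2.0f, smoothness);
    return g;
}
```

The revised version described later would simply seed each corner to a random value in [0, maxY] instead of the fixed maxY / 2.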
Figure 20 A screenshot of the first implementation of the diamond-square algorithm on a 65x65 array of
vertices. The image shows the output after several iterations until the first corner of the grid is reached.
The diamond-square function takes two modifiable parameters to provide some level of control
over the output. The first, maxY, represents the maximum possible y value of the heightmap. In my initial
implementation, the four corner values are seeded to exactly half of this value at the start of the function,
creating flat terrain. The amount of noise is controlled by the smoothness parameter (a value between 0
and 1), which works as described in section 2.4.1. The main drawback of this approach is that the noise
output by the function can only be determined post-execution. If further parameters could be added to
provide more of an estimation as to the final output, a developer would have a much greater degree of
control over the landscape produced without having to model anything by hand; all while retaining the
pseudorandomness of a procedural approach. I address this limitation in my implementation and
refinement of Perlin Noise, outlined in section 3.3.2.
Figure 21 A comparison between complete runs of the diamond-square algorithm with corner values set
to maxY / 2 and two different smoothness values. Both terrain meshes have a globally flat appearance due to the
seeding of corner values. The top heightmap uses a smoothness value of 0.1, resulting in a very rocky surface in all
regions since the random displacement is barely reduced at each iteration. The lower image has a smoothness value
of 1, greatly reducing the amount of random displacement at each step and therefore smoothing the final output.
After the first development cycle, the implementation of diamond-square underwent various
revisions in order to increase the amount of automation and reduce dependencies on human-input
parameters. Diamond-square offers less fine-grained control than gradient noise functions, so it should
ideally rely as little as possible on human input. As seen in Figure 21, changing the smoothness and maxY values
provides similar outcomes to modifying the persistence and amplitude parameters of fractal noise. These
values alone, however, are not enough for a designer to reliably control and predict the general output of
the program. I addressed this in the revised implementation of the function, seeding each initial grid
corner to a random height between 0 and maxY, promoting the formation of more curved landscapes.
However, as stated by Miller (1986), the algorithm has a natural tendency to create more noticeable
perturbations at the locations of square and diamond corner points, producing some unavoidable artifacts
in the function's elementary implementation.
Figure 22 The issue of flat yet noisy terrain is solved by seeding the initial four corner points to random
height values within a desired range. However, this does not eliminate the presence of artifacts as a result of the
square-like perturbation pattern of the algorithm.
3.3.2
Perlin Noise
It should be noted that in this section "Perlin noise" refers to my own implementation of the
algorithm based on the principles defined by both Ken Perlin's original noise function and his improved
simplex noise. Rather than implement each of these similar algorithms separately, a single definition
was created and iteratively refined throughout the development cycle. In order to implement a variation of
the Perlin noise algorithm, I extended the HeightMap class and added a further class, PerlinNoise, used as
a data structure to represent a noise object. Unlike Perlin's improved implementation (see Appendix 7.5),
I avoided the use of a permutation table, described as an array of integers between 0 and 255 in a
randomized order. My interpretation of Perlin noise is based on the following pseudocode:
HeightMap() {
    generateArray;
    for(vertex : array) {
        PerlinNoise(vertex);
    }
}

PerlinNoise(vertex) {
    for(each neighbouring point) {
        setRandomGradient;
        calculateScalarProduct;
    }
    interpolate;
}
As before, an array of vector position data is generated before being populated by the noise
function. Perlin noise operates on a single point in the array at a time, using the pseudorandom gradient
values of its neighbouring points to determine a final value weighted by interpolation. Since the gradients
are calculated pseudorandomly but deterministically, we can be certain that a single point will have the same gradient
whenever the function is called. This is extremely important since the function is called independently for
every vertex in the array and therefore each gradient must be consistent. An obvious difference in this
interpretation is that, unlike diamond-square, there is no reliance on recursion. Although as a design
choice the application of recursion can result in concise code, it can be difficult to visualize or implement
for more complex procedures.
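A minimal stand-alone version of this idea, with deterministic pseudorandom gradients per lattice point, scalar products against the displacement vectors, and weighted interpolation, might look like the following. The coordinate-hashing scheme and its mixing constants are assumptions for illustration, not the dissertation's exact method:

```cpp
#include <cmath>
#include <cstdint>

// Deterministic pseudorandom unit gradient for integer lattice point (ix, iy).
// Hashing the coordinates guarantees the same gradient on every call, which
// matters because the noise function is evaluated per vertex independently.
static void latticeGradient(int ix, int iy, float& gx, float& gy) {
    uint32_t h = (uint32_t)ix * 374761393u + (uint32_t)iy * 668265263u;
    h = (h ^ (h >> 13)) * 1274126177u;
    float angle = (h & 0xFFFF) * (6.2831853f / 65536.0f);  // map hash to [0, 2*pi)
    gx = std::cos(angle);
    gy = std::sin(angle);
}

static float fade(float t) { return t * t * (3.0f - 2.0f * t); }  // smoothstep
static float lerp(float a, float b, float t) { return a + t * (b - a); }

// Gradient noise at a continuous point (x, y); output roughly in [-1, 1].
float gradientNoise(float x, float y) {
    int x0 = (int)std::floor(x), y0 = (int)std::floor(y);
    float fx = x - x0, fy = y - y0;
    float n[4];
    for (int i = 0; i < 4; ++i) {            // four neighbouring lattice points
        int cx = x0 + (i & 1), cy = y0 + (i >> 1);
        float gx, gy;
        latticeGradient(cx, cy, gx, gy);
        // Scalar product of the gradient and the displacement vector to (x, y).
        n[i] = gx * (x - cx) + gy * (y - cy);
    }
    float u = fade(fx), v = fade(fy);
    return lerp(lerp(n[0], n[1], u), lerp(n[2], n[3], u), v);
}
```

Because each gradient is derived purely from its lattice coordinates, every per-vertex call sees consistent gradients, and the noise is exactly zero at lattice points, where every displacement vector vanishes.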
Figure 23 A run of the first Perlin noise implementation to populate the heightmap array. Since
parameters are defined randomly at this stage, the outcome of the program cannot be predicted. Note that the
resulting terrain mesh here is visually very similar to that produced by the diamond-square algorithm in Figure 21.
In Ken Perlin's original implementation, pre-computed lookup tables were used to define the
gradient values for grid points surrounding an arbitrary point P when determining the weighted
contribution from each corner. These tables sacrificed memory for improved performance and the
guarantee that any grid point would always have the same gradient vector, although my implementation
determines corner weightings without the requirement for either a permutation table or gradient array. For
a point P, values surrounding P in all directions are given a noise value based on an independent noise
function; ensuring that a grid point always has the same value. The details of this procedure are outlined
in Figure 24.
Figure 24 A diagrammatic visualisation of the weighting system in place in the Perlin noise
implementation. For an arbitrary point P at local coordinates (x,y), the noise values of the surrounding points are
calculated. The bottom-left corner P(x,y) takes its value from the weights of the surrounding points in all directions.
The bottom-right corner P(x+1,y) ignores the contributions from points to its left and takes greater influence from
noise values to its right. Likewise, the top-left corner ignores those below it and considers those above it. Finally,
the top-right corner takes weights from those above and to the right, ignoring any below or to the left. Once all
corners have been weighted they are interpolated into a final noise value for P.
The output of the noise function at this stage of development is close to that of diamond-square,
due in part to the lack of control over the output of the algorithm in this version of the implementation.
However, there are no artifacts as a result of random height displacement, such as those that occur with
variations of the midpoint displacement algorithm. Another advantage over diamond-square is the fact
that the absolute grid width and height are not required to be of size 2^n + 1. This has two inherent benefits:
firstly, a square heightmap can be generated at any non-zero size up to an
arbitrary bound w. For instance, with w = 1500, a grid could have a width anywhere from 1 to
1500 (w total possibilities). In contrast, a square grid limited by diamond-square's 2^n + 1 expression has
only 10 possible widths up to the same bound (3, 5, 9, 17, 33, 65, 129, 257, 513 and 1025). Secondly, since Perlin noise
doesn't require splitting the array into sub-squares, the width and height of the grid do not necessarily
need to be equal. Gradient noise functions can be applied to any surface shape, with the actual noise
algorithm remaining the same while other specific implementation details may differ. These factors alone
mean that even in its most basic form Perlin noise offers much improved scalability and flexibility over
diamond-square.
Figure 25 A comparison between binary representations of diamond-square and Perlin noise maps
created by the program. Both functions have been run across 4,225 vertices (a fixed grid width of 2^6 + 1) with y offset
values coloured black and white where y is less than or greater than the average height, respectively. The left image
shows the output of the diamond-square algorithm, whilst the noise map on the right is created by the primary
implementation of Perlin noise.
As discussed and visualized in Figure 25, in its first implementation Perlin noise produces output
that is similar to midpoint displacement in terms of random variation, while avoiding many of the
drawbacks associated with the latter algorithm. Like diamond-square, the ability to tile
heightmaps can be added trivially. Since Perlin noise uses pseudorandom gradients to produce
quasi-isotropic sub-patterns, there is a point at which noise tessellations will show repetitions when applied to a
very large scale grid. This would only ever be evident if the frequency of the noise wave was severely
increased, and according to Perlin (2000), Fortunately, it is ok for the noise function to repeat, as long
as it repeats only after long distance. After a distance of a few hundred units, it doesn't matter if the same
pattern appears again. The noise function has no large-scale features, so by the time an observer is
zoomed out far enough to see a repeat pattern, the whole noise texture is too small to see anyway.
[Perlin, K. 2000].
Figure 26 A slice through the output binary map of the Perlin noise function at a high frequency and
over a large scale grid. As indicated by the red squares, the pattern begins to show signs of repetition due to the use
of a set of pseudorandom computed gradients. The repeating patterns would not be visible in the noise at a lower
frequency or smaller scale. In the case of a landscape, each white bump in the image would be the equivalent of a
hill or mountain range when scaled appropriately, meaning that patterns that appear to repeat nearby would
typically be unnoticeable in a game world or simulation.
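One common way to obtain tileable noise of this kind is to wrap the integer lattice coordinates modulo a chosen period before hashing, so that the gradients (and hence the noise) repeat exactly at the tile boundary. A sketch of the idea follows; the hash itself is illustrative, not the program's actual function:

```cpp
#include <cstdint>

// Wrap an integer lattice coordinate into [0, period), handling negatives.
int wrapCoord(int coord, int period) {
    int m = coord % period;
    return m < 0 ? m + period : m;
}

// Any deterministic per-point hash becomes periodic once its inputs are
// wrapped, so gradients derived from it repeat exactly every 'period'
// lattice cells, making the resulting noise tile seamlessly.
uint32_t latticeHash(int ix, int iy, int period) {
    uint32_t x = (uint32_t)wrapCoord(ix, period);
    uint32_t y = (uint32_t)wrapCoord(iy, period);
    uint32_t h = x * 374761393u + y * 668265263u;
    return (h ^ (h >> 13)) * 1274126177u;
}
```

Choosing a large period keeps the repetition distance long, consistent with Perlin's observation that repeats only matter over short distances.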
3.4
Algorithm Refinement
In order to make use of the various parameters that offer some level of control over the Perlin
noise function, the original implementation needed to be reconsidered. To achieve objectives 3 and 4
(outlined in section 1.5), which are to account for pseudo-natural topographical landscape features and to
refine the solution to lower computational costs where possible, the structure of the function was
reviewed. During the next stage of the development cycle, I revised the pseudocode for the
implementation of the Perlin noise function to the following:
HeightMap(properties) {
    generateArray;
    generatePerlinNoise(properties);
    for(vertex : array) {
        getNoiseAt(vertex);
    }
}

PerlinNoise(properties) {
    setProperties;
    getSeed;
    for(each octave) {
        GenerateNoise(seed);
    }
}

GenerateNoise(seed) {
    for(each neighbouring point) {
        setRandomGradientFromSeed;
        calculateScalarProduct;
    }
    interpolate;
}
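The octave loop amounts to fractal summation: each octave doubles the frequency while persistence scales its amplitude. The following is a hedged sketch of how the five parameters might combine; baseNoise here is a stand-in hash-based noise source, not the program's real seeded gradient-noise call:

```cpp
#include <cmath>
#include <cstdint>

// Placeholder single-layer noise in roughly [-1, 1]; in the real program this
// would be the seeded gradient-noise function.
float baseNoise(float x, float y, uint32_t seed) {
    uint32_t h = (uint32_t)(int32_t)(x * 127.0f) * 374761393u
               ^ (uint32_t)(int32_t)(y * 311.0f) * 668265263u ^ seed;
    h = (h ^ (h >> 13)) * 1274126177u;
    return (h & 0xFFFF) / 32768.0f - 1.0f;
}

// Fractal noise: sum 'octaves' layers, doubling the frequency of each layer
// and scaling its contribution by 'persistence'.
float fractalNoise(float x, float y, float amplitude, float frequency,
                   int octaves, float persistence, uint32_t seed) {
    float total = 0.0f, amp = amplitude, freq = frequency;
    for (int o = 0; o < octaves; ++o) {
        total += baseNoise(x * freq, y * freq, seed + o) * amp;
        amp  *= persistence;   // persistence < 1 damps higher octaves
        freq *= 2.0f;          // each octave doubles the frequency
    }
    return total;
}
```

Note that with a single octave the persistence factor is never applied, which is why modifying persistence then has no visible effect on the output.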
In this secondary definition an instance of a PerlinNoise object is created which stores the values
for the properties of the noise produced. This allows for these properties to be translated into the noise
function to determine the final output. The implementation accounts for five parameters: persistence,
amplitude, frequency, octaves and seed. Most of these have been detailed previously, but are reiterated
here [Bevins, J. 2003]:

Amplitude: The maximum possible y offset value that the noise function can output at an
arbitrary point. Modifying amplitude visually stretches the noise in the y direction.

Frequency: The number of cycles per unit length that the noise function outputs.

Octaves: The number of noise layers generated, each at an increased frequency and combined
with the previous octave. Octaves are combined to form fractal noise.

Persistence: The amount of amplitude that is diminished or retained at each consecutive
octave. If noise is generated with a single octave, modifying the persistence will show no effect.

Seed: The value used to initialise the pseudorandom gradients, so that identically-seeded maps
produce identical output.
Figure 27 A comparison between two heightmaps produced by the refined Perlin noise function at
different frequencies. Note that both terrain meshes share the same maximum and minimum possible height,
roughness and seed (meaning that the output on the right is the result of the left output being stretched along the y
axis).
The introduction and configuration of these parameters allows for runtime modification of
landscape meshes in the program and for the user to determine the general appearance of terrain they wish
to produce. As seen in Figure 27, the difference between a rocky surface and rolling flatlands can be
determined by frequency alone, with additional properties being changeable in order to fine-tune the
final output. Pertaining to my third objective, to accurately control topography by defining a set of
user-input parameters, this implementation demonstrates that the objective has been met.
Figure 28 An example of the improved Perlin noise function output to a non-square quadrilateral grid. In
this instance the grid has a length of 200 points and a width of 10 points, resulting in a modelled representation of a
one-dimensional noise wave that can be manipulated and traversed in three dimensions.
Figure 29 A comparison between the output of the refined Perlin noise function over four successive
octave levels with added colour zoning. In the top-left image the map is generated over a single octave with an
arbitrary amplitude and a frequency of 0.1. Each additional octave increases the frequency, creating a noisier
output, which is then combined with all previous octaves to retain the original shape of the terrain. The persistence
in this example is set at 0.9 throughout.
To succeed in meeting objective four, I experimented with different functions used to calculate
the gradient value of each grid corner, concluding that the implementation described in Figure 24 was
producing desirable results without the need for hard-coded permutation or gradient tables. To further
improve performance I investigated the use of different interpolation methods, evaluating and
comparing the computational cost of each implementation. The interpolation function used in
the final solution performs faster than cosine interpolation without the inaccuracies of
linear interpolation (see Appendix 7.3).
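The trade-off between the candidate methods can be seen side by side. The "improved" function below is the standard smoothstep remapping, a common trigonometry-free substitute for cosine interpolation; whether it matches the exact function in Appendix 7.3 is an assumption:

```cpp
#include <cmath>

// Linear interpolation: fastest, but produces visible creases in the surface.
float lerpInterp(float a, float b, float t) { return a + t * (b - a); }

// Cosine interpolation: smooth, but calls a trigonometric function per sample.
float cosineInterp(float a, float b, float t) {
    float w = (1.0f - std::cos(t * 3.14159265f)) * 0.5f;
    return a + w * (b - a);
}

// Smoothstep-style interpolation: remaps t with a cubic before blending,
// approximating the cosine curve using only multiplications.
float smoothInterp(float a, float b, float t) {
    float w = t * t * (3.0f - 2.0f * t);
    return a + w * (b - a);
}
```

All three agree at the endpoints and at t = 0.5; smoothstep tracks the cosine curve to within about 0.01 elsewhere, at the cost of two multiplications instead of a trigonometric call.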
3.5
Interface and Controls
Although the implementation satisfies the appropriate objectives in terms of being able to algorithmically create and
model virtual landscapes using procedural generation, the only way for a user to modify the terrain's
properties at present is via access to the source code. To address this, the program makes use of NCLGL's
input classes in order to allow full 3D navigation around the rendering environment. Additionally, a text
interface is displayed onto the window which responds to user input and navigation. This enables the user
to modify the properties of a heightmap in real time to create instantly generated and rendered landscapes.
Figure 30 A screenshot of the final program and its user interface. The heightmap rendered on screen
can be directly modified during runtime by altering the parameters in the text interface. The landscape model
automatically deletes and re-generates procedurally using the final implementation of the improved Perlin noise
function and the newly input parameters.
When any single parameter is modified the entire terrain model is deleted and repopulated before
the next draw call, meaning that the landscape mesh seamlessly updates as its attributes are altered. It also
provides contextual feedback by detailing the properties of a displayed map at any given point in time, so
that a user or developer can work with the virtual landscape and edit it visually at runtime. Additional
controls have been added to cycle through rendering modes, such as solid colour zones, grid mesh or
point mesh, in addition to the option to automatically generate a landscape using random parameters
within a fixed range. Overall this toolset aims to provide a visual abstraction of the fundamental
algorithms behind the process of procedural generation, while offering a solution to allow for the creation
and modification of procedural landscapes at runtime.
4. Evaluation
4.1
Testing Strategy
All individual tests are performed using the final version of the program across two test
environments, as detailed in Table 1. For each test, four variables are measured: heightmap generation
time, heightmap render time, and per-vertex generation and render times. Further tests measuring process
memory usage and CPU utilization are run where appropriate. The Visual Studio 2015 local debugging
environment offers a toolset allowing these two aspects to be monitored during execution. In
order to measure the speed of the algorithms and responsiveness of the rendering platform, I implemented
a system timer which periodically records the time taken for a call to each algorithm to result in a
completely populated vertex array, in addition to the amount of time required for the vertices to be
buffered. For each test, the window camera remains in a stationary position, ensuring that the same
vertices of a mesh are within the view frustum during each test so that rendering performance is
measured consistently. The user interface is also disabled during tests so as not to influence results.
Unless stated otherwise, each test is performed 100 times with the final average being calculated, wherein
for each run a heightmap is generated with random properties within a finite set of values. The time taken
to determine these random properties is not included in the measured generation time.
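A timer of the kind described can be sketched with std::chrono. This is an illustrative stand-in; the dissertation's own timer implementation is not shown:

```cpp
#include <chrono>

// Measure the wall-clock time of a callable in milliseconds, mirroring the
// per-run measurement of generation and buffering times described above.
template <typename F>
double timeMs(F&& work) {
    auto start = std::chrono::steady_clock::now();
    work();
    auto end = std::chrono::steady_clock::now();
    return std::chrono::duration<double, std::milli>(end - start).count();
}
```

Averaging timeMs over 100 runs, and timing only the generation call itself rather than the setup of the random properties, matches the procedure described above.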
Table 1
Testing Environment Specifications
4.2
Performance Analysis
In order to test the performance of the solution, a benchmark test is run on each test machine to
create a comparator for further tests. For this benchmark, the generation function is run a total of 1,000
times for a heightmap with a fixed width and height of 100 vertices.
Table 2
Benchmark Test (Width 100, Run 1000 times)
Figure 31 Benchmark test memory usage and CPU utilization during runtime.
The results of the benchmark test indicate that the program uses a total pool of 38MB of system
memory in order to store the data for the heightmap and local variables, an amount that is consistent for
the duration of the test. The CPU utilization shows peaks each time a heightmap is generated and periodic
dips after creation. In order to test the scalability of the solution, the test has been repeated for several
different grid widths (17, 500 and 1000 vertices).
Table 3
Performance Test (Width 17, Run 100 times)
Table 4
Performance Test (Width 500, Run 100 times)
Table 5
Performance Test (Width 1000, Run 100 times)
From the test data it can be concluded that the final algorithm scales quadratically with grid
width. As Figure 32 shows, the time taken to populate the vertex array with noise values
increases greatly at each step, because the total number of vertices accommodated by the
function is equal to w^2 for a width of size w, and therefore a quadratic increase is expected for the
solution. When considering the time taken to process the noise function per vertex, however, it can be
concluded from the data that the algorithm performs linearly in the number of vertices.
[Figure 32 chart data: generation times of 16.5 at width 17, 33.35 at width 100, 901.28 at width 500 and 3291.22 at width 1000.]
Figure 32 A chart of the relationship between heightmap generation times for different grid width intervals.
Render times barely increase with the size of the grid, despite several hundred thousand more
vertices being sent to the renderer at width 1000 compared with width 17. This is likely due to
the usage of OpenGL's index buffer, allowing for vertices to be accessed multiple times. Interestingly, the
test with the width set to 17 provided surprising results in some areas. Despite being a much smaller grid than
the benchmark test, the time taken to generate and render each vertex (and the array as a whole) was
slower. Subsequent repeat tests support the conclusion that the algorithmic implementation offers
diminishing returns for small grid sizes.
Figure 33 A comparison between memory and processor usage during execution for different grid widths (17,
100, 500 and 1000 from top-left to bottom-right). As the grid width (w) doubles, the amount of memory required
roughly quadruples, since the total amount of vertex data stored is equal to w^2.
4.3
Improvements
When refining the solution to improve the performance of the algorithm in order to meet
objective four, my main area of investigation was the calculation of the weightings of gradients surrounding a
point P in the function. The implemented method removes the need for pre-computed permutation and
gradient tables, lowering memory costs at the expense of a small amount of computational overhead. I
added functions for various interpolation methods in order to find a compromise between accuracy and
performance which would be acceptable for the program requirements. As discussed in section 2.4.3, a
popular choice for interpolation is the cosine method, which offers sufficient accuracy at a
reasonable speed [Bourke, P. 1999]. Cubic and Hermite interpolation were not considered, given that the
performance of the algorithm is of much greater consideration than accuracy.
The problem encountered with this approach is that the native cosine function provided by the C++
language's cmath library is comparatively slow to compute, which can be improved by manual replacement. I
refined the solution by implementing an improved interpolation function (see Appendix 7.3) with no
reliance on expensive mathematical operations such as trigonometric functions or square rooting. As
detailed in Table 6 and Table 7, this results in an overall performance increase, while identically-seeded
results are visually similar in terms of surface accuracy (see Figure 34).
Table 6
Cosine Interpolation Test (Width 100, Run 1000 times)
Table 7
Improved Interpolation Test (Width 100, Run 1000 times)
Figure 34 A comparison between identically-seeded Perlin noise heightmaps with different interpolation methods
applied. The left image uses a cosine interpolation function to determine noise weightings while the right image
uses the improved interpolation function. Despite the cosine method being computationally slower, the visual
outputs are to an equally appropriate standard for a game landscape. Notice that the cosine function does show
some signs of smoother interpolation, as indicated by the coloration of the central island.
4.4
Algorithm Comparison
Due to its satisfaction of the third objective, the implementation of Perlin noise is used as the
default algorithm in the final terrain generation program; allowing for runtime modification of the
generated landscape model. Since the project is developed additively, however, the diamond-square
implementation and earlier revisions can be accessed. After performance, memory and computational
analysis, I found that the performance of diamond-square at moderate-to-high grid sizes was severely
underwhelming compared with Perlin noise. This is surprising considering the speed usually associated with
diamond-square; as Makin (2016) concludes, "Perlin noise is fast enough for runtime pre-computation
and provides pretty decent results - in fact for pre-computed grids Perlin noise is considerably better than
diamond-square unless you considerably enhance the diamond-square algorithm. But if you need
realtime computation of height (y) given a location (x,z) then on either GPU or CPU a good diamond-square
implementation should be considerably faster - even if the algorithm is somewhat enhanced
though maybe not to the point of being as good aesthetically as Perlin noise."
Table 8
Diamond-Square Test (Width 17, Run 100 times)
Table 9
Diamond-Square Test (Width 65, Run 100 times)
As is evident from the results when compared with Perlin noise (Table 2 through Table 5), the
implementation of the diamond-square algorithm cannot compete in terms of performance, although with
further work it could be revised to the point of being faster than a gradient noise function [Archer, T.
2011]. This outcome can be attributed to a poor implementation of the algorithm in the program, since
much more time was invested into the refinement of the Perlin noise function. Despite this, the increased
memory requirements for diamond-square, in addition to the limited set of grid widths and lack of
controllability, make Perlin noise a much more suitable function for the final program.
5. Conclusion
5.1
Satisfaction of Objectives
In order to meet the project's aim, to create and investigate procedural landscapes commonly
used in modern computer games and simulations, there were four individual objectives outlined in
section 1.5 that needed to be met.
Objective 1 is satisfied completely, as the usage of procedural generation and each intended
fractal algorithm has been researched thoroughly. I extended this investigation by approaching the wider
topic of the role of algorithmically-generated content in computing and its potential applications, benefits
and drawbacks. This pertains to the subject of objective 3, wherein I have looked into the possibilities of
procedural generation when concerning complex or intricate assets, and whether or not it is sensible to
use algorithms to replace human designers in some situations.
The second objective has been partially completed. Although algorithmic terrain generation has
been implemented and can be defined on a large scale, I am not content with the performance or output of
the diamond-square implementation. However, the program is able to load and modify parameterised
landscapes during runtime with no stutter in performance, meaning that the objective by definition has
been met.
Objective 3 has been satisfied in the respect that the program allows landscapes to be controlled
in order to conform to a set of user-defined parameters. After some investigation it has become obvious
that most unique geographical features (those that should be completely prevented from appearing in
particular landscape regions) are too complex or intricate to be added by procedural generation alone.
While the program can control where mountains, hills and oceans exist by reading the parameters input
by the user, features such as caves and cliffs would require a great deal of complex implementation just to
be generated algorithmically; even before their location and sensibility are taken into account. The main
issue with approaching topographical accuracy with an algorithmic solution is that the complexity and
performance costs typically outweigh the benefits of using a procedural subsystem in the first place. It is
generally more sensible for the subsystem to generate a base mesh from which intricate and complex
details can be added by hand. As Lange (2000) states, "Even the best simulation is only a representation
of the real world. A virtual walk-through is not the same and will never be the same as a real walk in
nature."
The final objective has been achieved in regards to the completed solution having gone through
several stages of refinement in order to provide an uncompromising outcome that meets the intended
requirements of the program; allowing the user to model and control the topographical properties of
procedural and virtual landscapes at runtime. By investigating various algorithmic implementations in
addition to comparing and improving important sub-functions such as interpolation, I have succeeded in
ensuring that the final program is optimized where possible to fulfill its requirements more effectively.
5.2
Reflecting on the development process, there are aspects of the project that could have been
undertaken more advantageously. The agile development method was successful in
ensuring that each implemented algorithm was refined and that the performance and capabilities of the
program improved iteratively. However, if the project were to be redone it would be more practical to
implement each algorithm simultaneously, improving each one as development continues. This would
avoid the assumption that each successive algorithm is necessarily an improvement on the previous, resulting
in a more balanced and thorough comparison during the evaluation stage. If the diamond-square function
had received the same amount of revision and expansion as Perlin noise did in the program, it would
likely have been a viable alternative for terrain modelling in the final solution.
Secondly, the capabilities of the program could ideally be extended beyond a basic framework for
landscape visualisation and real-time modification. With further investigation into heightmap stitching
and edge-matching it would be possible to piece together the kind of endless landscapes for which
procedural generation has gained a reputation in modern games.
A final improvement would be to expand upon the subject of my third objective, concerning the
placement and natural accuracy of topographical features by the procedural subsystem. As discussed
earlier, I have concluded that a landscape region cannot be defined by surface values alone, and that the
addition of vegetation and water systems would be vital in constructing a terrain generator that can
account for real-world accuracy of geographical feature sets. Further work in this area, such as the
node-based hydrological terrain framework produced by the University of Lyon [Génevaux et al. 2013],
encourages an advancement towards creating lifelike virtual environments without the need for
human-designed intricacies.
5.3
Final Reflections
The project aimed to investigate the way procedural landscapes are generated in modern
simulations and apply a combination of successful techniques to create a solution that could model
user-defined pseudo-random terrain based on the principles of fractional Brownian motion. Various
algorithms for random noise generation have been researched and evaluated. The final solution is able to
generate terrain conforming to a set of user-defined parameters with the ability to be modified at runtime,
demonstrating the possibilities presented by a procedural approach to environment design, wherein a
potentially infinite number of virtual worlds can be generated without ever restarting the application or
having to load external assets.
Beyond this, the question of where procedural generation has its place in computer simulations
has been raised. I have investigated how closely an algorithmically-generated landscape can
come to representing a real environment, in addition to how intricate topography can be created
pseudo-randomly. It is evident that while procedural subsystems can be implemented to a great degree of
complexity in order to account for such features, doing so realistically defeats the point of using a
procedural solution in the first place.
6. References
Togelius, J. Kastbjerg, E. Schedl, D & Yannakakis G N. (2011). What is Procedural Content Generation?
Mario on the borderline. In: Proceedings of the 2nd Workshop on Procedural Content Generation
in Games. IT University of Copenhagen. Retrieved from:
http://www.ccs.neu.edu/course/cs5150f14/readings/togelius_what.pdf
Smelik, R M. Kraker, K J & Groenewegen, S A. (2009). A Survey of Procedural Methods for Terrain
Modelling. TNO Defence, Security and Safety. Retrieved from:
http://graphics.tudelft.nl/~ruben/RMSmelik3AMIGAS09.pdf
Togelius, J. Shaker, N & Nelson, M J. (2015). Procedural Content Generation in Games: A Textbook and
an Overview of Current Research. Retrieved from: http://pcgbook.com/wp-content/uploads/chapter01.pdf
Weber, R. (2013). On Reflections: First interview with the Ubisoft Studios new MD. Retrieved from:
http://www.gamesindustry.biz/articles/2014-02-26-on-reflections-first-interview-with-the-ubisoft-studios-new-md
Ponce, T. (2013). AAA game development teams are too damn big. Retrieved from:
http://www.destructoid.com/aaa-game-development-teams-are-too-damn-big-247366.phtml
Lethem, J. (2010). Procedural City Generation with City Zoning. Newcastle University.
Larman, C. (2004). Agile and iterative development: a manager's guide. Boston: Addison-Wesley.
Sharma, R. (2013). Basics of Rolling Wave Planning. Retrieved from:
http://www.brighthubpm.com/project-planning/48953-basics-of-rolling-wave-planning/
Gelperin, D. (2008). Exploring Agile. Proceedings of the 2008 international workshop on Scrutinizing
agile practices or shoot-out at the agile corral. New York, NY: ACM.
Collet, P. (2014). Raytracer. Retrieved from: http://p-col.org/blog/raytracer/index.php
Sapaiev, V. (2015). Modelling by numbers: Supplementary (Terrain). Retrieved from:
https://unionassets.com/blog/modelling-by-numbers-supplementary-terrain-304
Lange, E. (2000). The limits of realism: perceptions of virtual landscapes. Institute of National, Regional
and Local Planning, Swiss Federal Institute of Technology. Retrieved from:
http://www.geogra.uah.es/patxi/lange01_Viz_realism.pdf
Foley, J D. (1995). Computer Graphics: Principles and Practice. Addison-Wesley Professional.
Galbis, A & Maestre, M. (2012). Vector Analysis Versus Vector Calculus. Springer.
Foley, J D. van Dam, A. Hughes, J F & Feiner, S K. (1990). Spatial-partitioning representations; Surface
detail. Computer Graphics: Principles and Practice. The Systems Programming Series. Addison-Wesley.
Kuchera, B. (2011). We <3 voxels: why Voxatron is an exciting indie shooter. Retrieved from:
http://arstechnica.com/gaming/2011/01/we-3-voxels-why-voxatron-is-an-exciting-indie-shooter/
Smith, G. (2014). The Future of Procedural Content Generation in Games. Northeastern University,
Playable Technologies Group. Retrieved from:
https://pdfs.semanticscholar.org/5edd/7d97907122eff6c96c3c3ea6bcda02563ce6.pdf
Beard, D. (2010). Terrain Generation Diamond Square Algorithm. Daniel Beard's Programming Blog.
Retrieved from: https://danielbeard.wordpress.com/2010/08/07/terrain-generation-and-smoothing/
Lee, J. (2014). How Procedural Generation Took Over The Gaming Industry. Retrieved from:
http://www.makeuseof.com/tag/procedural-generation-took-gaming-industry/
Zapata, S. (2007). Glenn Wichman (Rogue). The Temple of the Roguelike. Retrieved from:
http://www.roguetemple.com/interviews/glenn_wichman_interview/
King, A. (2015). The Key Design Elements of Roguelikes. Retrieved from:
http://gamedevelopment.tutsplus.com/articles/the-key-design-elements-of-roguelikes--cms-23510
Campbell, D. (2010). Go Hard, Go Rogue. Retrieved from: http://www.themarriedgamers.net/go-hard-go-rogue/
Goldberg, D & Larsson, L. (2013). The Amazingly Unlikely Story of How Minecraft Was Born.
Minecraft: The unlikely tale of Markus 'Notch' Persson and the game that changed everything.
Retrieved from: http://www.wired.com/2013/11/minecraft-book/
Persson, M. (2010). Hiring some people, getting an office, and all that! The Word of Notch. Retrieved
from: http://notch.tumblr.com/post/1075326804/hiring-some-people-getting-an-office-and-all
Persson, M. (2011). Terrain generation, Part 1. The Word of Notch. Retrieved from:
http://notch.tumblr.com/post/3746989361/terrain-generation-part-1
O'Brien, C. (2013). How Minecraft became one of the biggest video games in history. Los Angeles Times.
Retrieved from: http://articles.latimes.com/2013/sep/03/business/la-fi-tn-how-minecraft-video-games-20130822
Fingas, J. (2015). Here's how Minecraft creates its gigantic worlds. Retrieved from:
http://www.engadget.com/2015/03/04/how-minecraft-worlds-are-made/
Bergensten, J. (2011). A short demystification of the map seed. Retrieved from:
http://mojang.com/2011/02/a-short-demystification-of-the-map-seed/
Sapozhkov, A. (2012). Interview with the Space Engine developer Vladimir Romanyuk. Retrieved
from: http://www.elite-games.ru/conference/viewtopic.php?t=56775&sid=5b476f64debe8ad35f2ba327b865d1cf
Tamblyn, T. (2014). Man Builds Massive Virtual Universe You Can Download And Explore. The
Huffington Post. Retrieved from: http://www.huffingtonpost.co.uk/2014/10/21/universe-simulator-program_n_6019968.html
Romanyuk, V. (2016). Space Engine Frequently Asked Questions. Retrieved from:
http://en.spaceengine.org/index/faq/0-29
Ellison, C. (2015). 2014: A Space Engine. Retrieved from: http://www.eurogamer.net/articles/2015-01-31-cara-ellison-on-2014-a-space-engine
Hooper, T. (2015). AtomP (P)Reviews: Space Engine. The Torch: Entertainment Guide. Retrieved from:
http://thetorchentertainmentguide.com/atomp-previews-space-engine/
Moss, R. (2015). Creative AI: Procedural generation takes game development to new worlds. Retrieved
from: http://www.gizmag.com/creative-ai-procedural-game-development-angelina/35874/
Fournier, A. Fussell, D & Carpenter, L. (1982). Computer rendering of stochastic models.
Communications of the ACM. 25 (6).
Mandelbrot, B & van Ness, J. (1968). Fractional Brownian Motions, Fractional Noises and Applications.
SIAM Review. 10 (4).
Miller, G. (1986). The definition and rendering of terrain maps. Proceedings of the 13th annual conference
on Computer graphics and interactive techniques. New York, NY: ACM.
Mogensen, T. (2009). Planet Map Generation by Tetrahedral Subdivision. University of Copenhagen.
Retrieved from: http://www.diku.dk/~torbenm/Planet/PSIslides.pdf
Martz, P. (1997). Generating Random Fractal Terrain. Retrieved from:
http://www.gameprogrammer.com/fractal.html
Bird, K. Dickerson, T & George, J. (2013). Techniques for Fractal Terrain Generation. Retrieved from:
http://web.williams.edu/Mathematics/sjmiller/public_html/hudson/Dickerson_Terrain.pdf
Mijailovic, V. (2015). A Graph-Based Approach to Procedural Terrain. KTH Royal Institute of
Technology. Retrieved from: http://www.gecode.org/~schulte/teaching/theses/TRITA-ICT-EX-2015:72.pdf
Biagioli, A. (2014). Understanding Perlin Noise. Retrieved from:
http://flafla2.github.io/2014/08/09/perlinnoise.html
Perlin, K. (1985). An Image Synthesizer. Proceedings of the 12th annual conference on Computer
graphics and interactive techniques. New York, NY: ACM.
Kerman, P. (2006). Macromedia Flash 8 @work: Projects and Techniques to Get the Job Done.
Indianapolis, Ind: Sams.
Elias, H. (1998). Perlin Noise. Retrieved from: http://freespace.virgin.net/hugo.elias/models/m_perlin.htm
Perlin, K. (2001). Noise Hardware. In: Real-Time Shading. Retrieved from:
http://www.csee.umbc.edu/~olano/s2002c36/ch02.pdf
Gustavson, S. (2005). Simplex noise demystified. Linköping University, Sweden. Retrieved from:
http://staffwww.itn.liu.se/~stegu/simplexnoise/simplexnoise.pdf
Flick, J. (2014). Simplex Noise, keeping it simple. Retrieved from:
http://catlikecoding.com/unity/tutorials/simplex-noise/
Yannakakis, G N & Togelius, J. (2011). Experience-Driven Procedural Content Generation. IEEE
Transactions on Affective Computing. 2 (3).
Perlin, K. (2000). Making Noise. Retrieved from:
https://web.archive.org/web/20151103110936/http://www.noisemachine.com/talk1/
Bevins, J. (2003). Glossary. libnoise. Retrieved from: http://libnoise.sourceforge.net/glossary/index.html
Bourke, P. (1999). Interpolation methods. Retrieved from:
http://paulbourke.net/miscellaneous/interpolation/
Archer, T. (2011). Procedurally Generating Terrain. Morningside College. Retrieved from:
http://micsymposium.org/mics_2011_proceedings/mics2011_submission_30.pdf
Génevaux, J-D. Galin, E. Guérin, E. Peytavie, A & Beneš, B. (2013). Terrain generation using procedural
models based on hydrology. ACM Transactions on Graphics (TOG). 32 (4).
7. Appendices
7.1
7.2
7.3
7.4
static int p[B + B + 2];
static float g3[B + B + 2][3];
static float g2[B + B + 2][2];
static float g1[B + B + 2];
static int start = 1;
b00 = p[ i + by0 ];
b10 = p[ j + by0 ];
b01 = p[ i + by1 ];
b11 = p[ j + by1 ];
sx = s_curve(rx0);
sy = s_curve(ry0);
#define at2(rx,ry) ( rx * q[0] + ry * q[1] )
q = g2[ b00 ] ; u = at2(rx0,ry0);
q = g2[ b10 ] ; v = at2(rx1,ry0);
a = lerp(sx, u, v);
q = g2[ b01 ] ; u = at2(rx0,ry1);
q = g2[ b11 ] ; v = at2(rx1,ry1);
b = lerp(sx, u, v);
return lerp(sy, a, b);
}
float noise3(float vec[3])
{
int bx0, bx1, by0, by1, bz0, bz1, b00, b10, b01, b11;
float rx0, rx1, ry0, ry1, rz0, rz1, *q, sy, sz, a, b, c, d, t, u, v;
register int i, j;
if (start) {
start = 0;
init();
}
setup(0, bx0,bx1, rx0,rx1);
setup(1, by0,by1, ry0,ry1);
setup(2, bz0,bz1, rz0,rz1);
i = p[ bx0 ];
j = p[ bx1 ];
b00 = p[ i + by0 ];
b10 = p[ j + by0 ];
b01 = p[ i + by1 ];
b11 = p[ j + by1 ];
t = s_curve(rx0);
sy = s_curve(ry0);
sz = s_curve(rz0);
#define at3(rx,ry,rz) ( rx * q[0] + ry * q[1] + rz * q[2] )
q = g3[ b00 + bz0 ] ; u = at3(rx0,ry0,rz0);
q = g3[ b10 + bz0 ] ; v = at3(rx1,ry0,rz0);
a = lerp(t, u, v);
q = g3[ b01 + bz0 ] ; u = at3(rx0,ry1,rz0);
q = g3[ b11 + bz0 ] ; v = at3(rx1,ry1,rz0);
b = lerp(t, u, v);
c = lerp(sy, a, b);
q = g3[ b00 + bz1 ] ; u = at3(rx0,ry0,rz1);
q = g3[ b10 + bz1 ] ; v = at3(rx1,ry0,rz1);
a = lerp(t, u, v);
q = g3[ b01 + bz1 ] ; u = at3(rx0,ry1,rz1);
q = g3[ b11 + bz1 ] ; v = at3(rx1,ry1,rz1);
b = lerp(t, u, v);
d = lerp(sy, a, b);
return lerp(sz, c, d);
}
static void normalize2(float v[2])
{
float s;
s = sqrt(v[0] * v[0] + v[1] * v[1]);
v[0] = v[0] / s;
v[1] = v[1] / s;
}
static void normalize3(float v[3])
{
float s;
s = sqrt(v[0] * v[0] + v[1] * v[1] + v[2] * v[2]);
v[0] = v[0] / s;
v[1] = v[1] / s;
v[2] = v[2] / s;
}
static void init(void)
{
int i, j, k;
for (i = 0 ; i < B ; i++) {
p[i] = i;
g1[i] = (float)((random() % (B + B)) - B) / B;
for (j = 0 ; j < 2 ; j++)
g2[i][j] = (float)((random() % (B + B)) - B) / B;
normalize2(g2[i]);
for (j = 0 ; j < 3 ; j++)
g3[i][j] = (float)((random() % (B + B)) - B) / B;
normalize3(g3[i]);
}
while (--i) {
k = p[i];
p[i] = p[j = random() % B];
p[j] = k;
}
for (i = 0 ; i < B + 2 ; i++) {
p[B + i] = p[i];
g1[B + i] = g1[i];
for (j = 0 ; j < 2 ; j++)
g2[B + i][j] = g2[i][j];
for (j = 0 ; j < 3 ; j++)
g3[B + i][j] = g3[i][j];
}
}
7.5
public final class ImprovedNoise {
   static public double noise(double x, double y, double z) {
      int X = (int)Math.floor(x) & 255,                  // FIND UNIT CUBE THAT
          Y = (int)Math.floor(y) & 255,                  // CONTAINS POINT.
          Z = (int)Math.floor(z) & 255;
      x -= Math.floor(x);                                // FIND RELATIVE X,Y,Z
      y -= Math.floor(y);                                // OF POINT IN CUBE.
      z -= Math.floor(z);
      double u = fade(x),                                // COMPUTE FADE CURVES
             v = fade(y),                                // FOR EACH OF X,Y,Z.
             w = fade(z);
      int A = p[X  ]+Y, AA = p[A]+Z, AB = p[A+1]+Z,      // HASH COORDINATES OF
          B = p[X+1]+Y, BA = p[B]+Z, BB = p[B+1]+Z;      // THE 8 CUBE CORNERS,

      return lerp(w, lerp(v, lerp(u, grad(p[AA  ], x  , y  , z   ),  // AND ADD
                                     grad(p[BA  ], x-1, y  , z   )), // BLENDED
                             lerp(u, grad(p[AB  ], x  , y-1, z   ),  // RESULTS
                                     grad(p[BB  ], x-1, y-1, z   ))),// FROM  8
                     lerp(v, lerp(u, grad(p[AA+1], x  , y  , z-1 ),  // CORNERS
                                     grad(p[BA+1], x-1, y  , z-1 )), // OF CUBE
                             lerp(u, grad(p[AB+1], x  , y-1, z-1 ),
                                     grad(p[BB+1], x-1, y-1, z-1 ))));
   }
   static double fade(double t) { return t * t * t * (t * (t * 6 - 15) + 10); }
   static double lerp(double t, double a, double b) { return a + t * (b - a); }
   static double grad(int hash, double x, double y, double z) {
      int h = hash & 15;                      // CONVERT LO 4 BITS OF HASH CODE
double u = h<8 ? x : y,
v = h<4 ? y : h==12||h==14 ? x : z;
return ((h&1) == 0 ? u : -u) + ((h&2) == 0 ? v : -v);
}
static final int p[] = new int[512], permutation[] = { 151,160,137,91,90,15,
131,13,201,95,96,53,194,233,7,225,140,36,103,30,69,142,8,99,37,240,21,10,23,
190, 6,148,247,120,234,75,0,26,197,62,94,252,219,203,117,35,11,32,57,177,33,
88,237,149,56,87,174,20,125,136,171,168, 68,175,74,165,71,134,139,48,27,166,
77,146,158,231,83,111,229,122,60,211,133,230,220,105,92,41,55,46,245,40,244,
102,143,54, 65,25,63,161, 1,216,80,73,209,76,132,187,208, 89,18,169,200,196,
135,130,116,188,159,86,164,100,109,198,173,186, 3,64,52,217,226,250,124,123,
5,202,38,147,118,126,255,82,85,212,207,206,59,227,47,16,58,17,182,189,28,42,
223,183,170,213,119,248,152, 2,44,154,163, 70,221,153,101,155,167, 43,172,9,
129,22,39,253, 19,98,108,110,79,113,224,232,178,185, 112,104,218,246,97,228,
251,34,242,193,238,210,144,12,191,179,162,241, 81,51,145,235,249,14,239,107,
49,192,214, 31,181,199,106,157,184, 84,204,176,115,121,50,45,127, 4,150,254,
138,236,205,93,222,114,67,29,24,72,243,141,128,195,78,66,215,61,156,180
};
static { for (int i=0; i < 256 ; i++) p[256+i] = p[i] = permutation[i]; }
}
7.6
------------------
System Information
------------------
Time of this report: 5/7/2016, 21:10:55
Machine name: JOENOTEBOOK
Operating System: Windows 10 Home 64-bit (10.0, Build 10586) (10586.th2_release_sec.1603281908)
Language: English (Regional Setting: English)
System Manufacturer: Dell Inc.
System Model: Inspiron 7537
BIOS: A13
Processor: Intel(R) Core(TM) i5-4200U CPU @ 1.60GHz (4 CPUs), ~1.6GHz
Memory: 6144MB RAM
Available OS Memory: 6042MB RAM
Page File: 6041MB used, 1542MB available
Windows Dir: C:\WINDOWS
DirectX Version: 12
DX Setup Parameters: Not found
User DPI Setting: Using System DPI
System DPI Setting: 96 DPI (100 percent)
DWM DPI Scaling: Disabled
Miracast: Available, with HDCP
Microsoft Graphics Hybrid: Supported
DxDiag Version: 10.00.10586.0000 64bit Unicode
------------
DxDiag Notes
------------
Display Tab 1: No problems found.
Display Tab 2: No problems found.
Sound Tab 1: No problems found.
Input Tab: No problems found.
--------------------
DirectX Debug Levels
--------------------
Direct3D: 0/4 (retail)
DirectDraw: 0/4 (retail)
DirectInput: 0/5 (retail)
DirectMusic: 0/5 (retail)
DirectPlay: 0/9 (retail)
DirectSound: 0/5 (retail)
DirectShow: 0/6 (retail)
---------------
Display Devices
---------------
Card name: Intel(R) HD Graphics Family
Manufacturer: Intel Corporation
Chip type: Intel(R) HD Graphics Family
DAC type: Internal
Device Type: Full Device
Device Key: Enum\PCI\VEN_8086&DEV_0A16&SUBSYS_05FA1028&REV_09
Display Memory: 5014 MB
Dedicated Memory: 1992 MB
Shared Memory: 3021 MB
Current Mode: 1366 x 768 (32 bit) (60Hz)
Monitor Name: Generic PnP Monitor
Monitor Model: unknown
Monitor Id: LGD03DF
Native Mode: 1366 x 768(p) (60.044Hz)
Output Type: Internal
Driver Name:
igdumdim64.dll,igd10iumd64.dll,igd10iumd64.dll,igd12umd64.dll,igdumdim32,igd10iumd32,igd10iumd
32,igd12umd32
Driver File Version: 10.18.0015.4274 (English)
Driver Version: 10.18.15.4274
Card name: NVIDIA GeForce GT 750M
Manufacturer: NVIDIA
Chip type: GeForce GT 750M
DAC type: Integrated RAMDAC
Device Type: Render-Only Device
Device Key: Enum\PCI\VEN_10DE&DEV_0FE4&SUBSYS_05FA1028&REV_A1
Display Memory: 5014 MB
Dedicated Memory: 1992 MB
Shared Memory: 3021 MB
Current Mode: n/a
Driver Name:
nvd3dumx,nvwgf2umx,nvwgf2umx,nvwgf2umx,nvd3dum,nvwgf2um,nvwgf2um,nvwgf2um
Driver File Version: 10.18.0013.5362 (English)
Driver Version: 10.18.13.5362
7.7
------------------
System Information
------------------
Time of this report: 5/5/2016, 15:46:16
Machine name: JP-DESKTOP
Operating System: Windows 10 Home 64-bit (10.0, Build 10586) (10586.th2_release_sec.1602231728)
Language: English (Regional Setting: English)
System Manufacturer: Gigabyte Technology Co., Ltd.
System Model: H87-HD3
BIOS: BIOS Date: 04/17/13 22:05:19 Ver: 04.06.05
Processor: Intel(R) Core(TM) i5-4570 CPU @ 3.20GHz (4 CPUs), ~3.2GHz
Memory: 8192MB RAM
Available OS Memory: 8068MB RAM
Page File: 4312MB used, 5035MB available
Windows Dir: C:\WINDOWS
DirectX Version: 12
DX Setup Parameters: Not found
User DPI Setting: Using System DPI
System DPI Setting: 96 DPI (100 percent)
DWM DPI Scaling: Disabled
Miracast: Available, with HDCP
Microsoft Graphics Hybrid: Not Supported
DxDiag Version: 10.00.10586.0000 64bit Unicode
------------
DxDiag Notes
------------
Display Tab 1: No problems found.
Sound Tab 1: No problems found.
Sound Tab 2: No problems found.
Sound Tab 3: No problems found.
Input Tab: No problems found.
--------------------
DirectX Debug Levels
--------------------
Direct3D: 0/4 (retail)
DirectDraw: 0/4 (retail)
DirectInput: 0/5 (retail)
DirectMusic: 0/5 (retail)
DirectPlay: 0/9 (retail)
DirectSound: 0/5 (retail)
DirectShow: 0/6 (retail)
---------------
Display Devices
---------------
Card name: NVIDIA GeForce GTX 970
Manufacturer: NVIDIA
Chip type: GeForce GTX 970
DAC type: Integrated RAMDAC
Device Type: Full Device
Device Key: Enum\PCI\VEN_10DE&DEV_13C2&SUBSYS_31601462&REV_A1
Display Memory: 8042 MB
Dedicated Memory: 4008 MB
Shared Memory: 4034 MB
Current Mode: 2560 x 1440 (32 bit) (59Hz)
Monitor Name: AOC 2770G4
Monitor Model: Q2770
Monitor Id: AOC2770
Native Mode: 2560 x 1440(p) (59.951Hz)
Output Type: Displayport External
Driver Name:
nvd3dumx.dll,nvwgf2umx.dll,nvwgf2umx.dll,nvwgf2umx.dll,nvd3dum,nvwgf2um,nvwgf2um,nvwgf2um
Driver File Version: 10.18.0013.5850 (English)
Driver Version: 10.18.13.5850