
UNIVERSITY POLITEHNICA of BUCHAREST

The Faculty of Automatic Control and Computer Science


Computer Science Department

BACHELOR THESIS
Development of a Game Engine for
3D Platformers

Scientific advisers:
l.dr.ing. Victor Asavei
l.dr.ing. Andreea Urzic

Author:
Gabriel Ivnic

Bucharest, 2014

Contents
ABSTRACT ..........................................................................................................................................6
1 INTRODUCTION ............................................................................................................................7
GAME ENGINES .............................................................................................................................7
PLATFORM GAMES.........................................................................................................................8
2 STATE OF THE ART ........................................................................................................................9
3 IMPLEMENTATION ...................................................................................................................... 10
OVERVIEW ................................................................................................................................. 10
ENGINE PORTABILITY .......................................................................................................................... 10
TOOLS AND FRAMEWORKS .................................................................................................................. 10
3RD PARTY SDKS AND LIBRARIES ............................................................................................................ 10
ENGINE ARCHITECTURE .................................................................................................................. 11
CORE ENGINE STRUCTURE ................................................................................................................... 11
ENGINE MAIN LOOP ........................................................................................................................... 12
THE INPUT SYSTEM ............................................................................................................................ 12
INPUT PROCESSING ............................................................................................................................ 13
MOUSE MOVEMENT AND ACTION PROCESSING ....................................................................................... 13
GAME SPEED AND FRAME TIME ............................................................................................................ 14
WORLD INSTANCE ............................................................................................................................. 14
DOUBLE BUFFERING ........................................................................................................................... 14
RESOURCE MANAGEMENT .............................................................................................................. 15
THE RESOURCE MANAGER ................................................................................................................... 15
ASSET MANAGEMENT ......................................................................................................................... 15
SCENE MANAGEMENT ........................................................................................................................ 15
TEXTURE MANAGEMENT ..................................................................................................................... 16
PHYSICS MANAGER ............................................................................................................................ 16
SHADER MANAGER ............................................................................................................................ 16
AUDIO MANAGER .............................................................................................................................. 16
FONT MANAGER ................................................................................................................................ 17
PHYSICS SYSTEM .......................................................................................................................... 17
COLLISIONS SYSTEM ........................................................................................................................... 17
SYNCHRONIZATION BETWEEN WORLDS .................................................................................................. 18
SIMULATION TIME AND SPEED .............................................................................................................. 18
GAME OBJECT REPRESENTATION ...................................................................................... 19
COMPONENT COMMUNICATION ........................................................................................... 19
THE BASE OBJECT CLASS ...................................................................................................... 19
GAME ENTITIES ................................................................................................................................. 20
TRANSFORM COMPONENT .................................................................................................................. 20
MESH COMPONENT ........................................................................................................................... 20
RENDERER COMPONENT ..................................................................................................................... 21
SHADER COMPONENT......................................................................................................................... 21
PHYSICS COMPONENT ........................................................................................................................ 21
INPUT COMPONENT ........................................................................................................................... 21
EXTENDING COMPONENTS ................................................................................................................ 21
COMMUNICATION SYSTEM ............................................................................................................. 22
MENU SYSTEM ............................................................................................................................ 23
MENU CONFIGURATION ...................................................................................................................... 23
RUN-TIME REPRESENTATION AND BEHAVIOR........................................................................................... 23
RENDERING AND POST PROCESSING .................................................................................................. 24
THE RENDERING PIPELINE .................................................................................................................... 25
MVP TRANSFORMATIONS ................................................................................................................... 26
DEFERRED RENDERING AND FRAMEBUFFERS ........................................................................................... 26
SHADOW MAPPING ............................................................................................................................ 27
SCREEN SPACE AMBIENT OCCLUSION .................................................................................................... 28
LIGHT MODEL ................................................................................................................................... 30
4 PLATFORM GAME DEMO ............................................................................................................ 33
STARTUP ................................................................................................................................... 33
THE GAME LOOP .......................................................................................................................... 33
RUNTIME STATES ............................................................................................................................... 34
THE GAME WORLD ....................................................................................................................... 34
CAMERA SYSTEM ......................................................................................................................... 34
AVATAR AND CHARACTER MECHANICS .............................................................................................. 35
GAMEPLAY MECHANICS ................................................................................................................. 35
GAME MENU.............................................................................................................................. 36
5 RESULTS ..................................................................................................................................... 37
OVERVIEW ................................................................................................................................. 37
MEMORY PERFORMANCE............................................................................................................... 37
RUN-TIME PERFORMANCE .............................................................................................................. 38
6 CONCLUSIONS AND FUTURE WORK ............................................................................................. 39
BIBLIOGRAPHY ................................................................................................................................. 40

Notations and Abbreviations

AO       Ambient Occlusion
API      Application Programming Interface
GBuffer  Geometry Buffer
GLEW     The OpenGL Extension Wrangler Library
GPU      Graphics Processing Unit
Havok    Havok Physics Library
OpenGL   Open Graphics Library
SSAO     Screen Space Ambient Occlusion
UBO      Uniform Buffer Object
VAO      Vertex Array Object
VBO      Vertex Buffer Object
XML      Extensible Markup Language

Abstract
Game engines are complex pieces of software used to design and create games for different
platforms. Designing and writing a game engine is clearly not an easy task. Building a state-of-the-art
engine that can rival those currently available requires a big team of engineers and
artists and, most probably, a time span of at least 3-4 years. The expected quality and portability
dictate the complexity and requirements of such a project. Less complex, game-specific
engines can still be developed by small teams, but they are usually built around a single
product or with a very specific set of features and design constraints.
This project concentrates on the architectural design and development of a basic game
engine for creating 3D platformers. A short game demo is implemented in order to
demonstrate and explain built-in features as well as game-specific mechanics.

Chapter 1

Introduction
Game Engines
Game engines are software frameworks specially created for designing and
developing video games. They usually consist of different modules and visual
development tools needed for content creation, resource management and packing, debugging,
platform-specific tasks, etc.
Since the very first video game, games have evolved a lot, becoming complex pieces of software
that constantly push hardware and software design limits. Writing this kind of software
again and again for every new game would not have been possible without a great set of
reusable software components and development tools.
Game engines consist of well-defined, loosely coupled modules or specific subsystems that can
be easily integrated and used during the development of video games. These systems usually offer
abstraction layers over different platform APIs.
The typical core modules of a game engine include a rendering engine for 2D and 3D graphics, a
physics engine, an animation system, resource and game asset management, sound, scripting,
artificial intelligence, scene graphs, visual effects, game cameras, networking capabilities,
profiling and debugging tools, and many more. The most popular game engines offer far more
than just implementations of these modules by providing a complete set of tools and high-level
methods of interacting with them. Commercially available game engines usually offer front-end
design tools, interactive 3D or 2D world creation applications, game-specific modules and a
huge variety of visual effects and graphics enhancements.
Game engines are usually crafted for a specific game genre. There is no such thing as a universal
engine that can be used for every kind of video game. At the core, each engine can offer the
same functionality and could theoretically be used for any kind of game implementation, but
the tools and modules provided are usually designed with a specific game genre in mind. It is
almost impossible to offer a really efficient and well-designed architecture for all kinds of
games at the same time.
Game genres include: First Person Shooters (FPS), Role Playing Games (RPG), Real Time Strategy
(RTS), racing, simulators, puzzle, fighting, sports, platformers, Massive Multiplayer Online
games, and the list can go on.

Platform Games
A platform game (or platformer) is a type of video game that involves guiding an avatar to
jump between suspended platforms and over other kinds of obstacles to advance the game.
Obstacles can be both environmental hazards and AI-controlled enemies.
The first platform games appeared in the early 1980s as 2D side-scrolling video games. Later on,
around the 1990s, they were largely replaced by 3D platformers. Besides the existence of
platforms and a characteristic side-view camera angle, platformers were never viewed as a pure
genre. Most games frequently couple them with elements from other genres, such as action
elements, adventure elements, puzzle elements or even role-playing game elements.
Platformer games were very popular up until the 21st century, when the genre started to decline
in favor of more advanced Triple-A games. In later years, the explosion of indie game
development revived the popularity of the genre by giving birth to numerous bold
combinations of game mechanics and elements. Both 2D and 3D platformers are very popular
among indie developers because they require far fewer resources to create.
With the advance of mobile gaming and technology, the genre saw once again a rise in
popularity. The existence of touchscreen interfaces and integrated gyroscopes offered once
again the possibility to design and innovate on gameplay elements and mechanics
while also keeping the classic platformer characteristics.
Typical games from the 2D era include Donkey Kong, Mario, Space Panic, Jump Bug, Super Mario
Bros., etc. From the 3D era some notable entries are Super Mario 64, Crash Bandicoot,
Rayman 2, Sonic the Hedgehog, Super Mario Galaxy, etc.
In terms of technology requirements, platformer games usually have a lot in common with
third-person character-based games. High-fidelity full-body animation of the playable avatar
is usually a strict requirement. Another common requirement is a complex camera collision
system that ensures the view point does not clip through the scene geometry.

Figure 1. Giana Sisters: Twisted Dreams - modern 3D platformer (2012)

Chapter 2

State of the art


In the last few years, many game engines have pushed the boundary of visual quality for what is
possible to achieve in a real-time application. Modern engines are no longer just simple game
engines or rendering engines. State-of-the-art engines have evolved to the point where they
offer a full creation pipeline, capable of delivering Hollywood-movie-like image quality. Art
direction has become a true core experience for many new Triple-A games. Achieving natural-looking,
photorealistic images using physically based rendering is becoming a standard for high-end
games, while advanced post-effect temporal stability, filter quality and accuracy are no
longer terms used only to describe movie production, but also real-time applications.
Compared to films, photorealistic image quality for real-time systems is really challenging
to accomplish because of the limited time budget of 16.6 ms per frame, which is typically orders
of magnitude less than what can be found in movie productions.
A full cinematic experience, advanced lighting, effects like bloom, depth of field, lens
distortion, extended color information, HDR rendering, surface light scattering, volumetric fog,
smoke and haze with varying participating-media density, light shafts, volumetric lighting and
shadows, transparency and reflection, and geometry reconstruction through tessellation are just a
few keywords that define the rendering state of current game engines. Besides that, full
integration with animation tracking systems, audio composition and particle effect generation
tools, real-time editing and composing of a scene, full physics simulation, and behavior and AI
scripting are some features that make for a great content creation pipeline.
Of course, none of these would have been possible without a great software architecture that
can push the limits of the available hardware capabilities and use them efficiently.
The motivation for this thesis was to research and implement core features and rendering
techniques used by modern game engines. High efficiency as well as integration and
communication between different subsystems were a main focus of this thesis. Moreover, a
modular and extensible architecture was targeted from the start.
In order to support advanced rendering techniques and provide good runtime
efficiency, only modern graphics API features were used: VAOs, UBOs, VBOs, layout location
bindings and other OpenGL 3.3+ features.

Chapter 3

Implementation
Overview
Engine portability
The engine is written in C++ and is primarily designed to run on Windows systems (Windows 7,
Windows 8 or 8.1) that support the OpenGL API, version 3.2 and upwards. It should also run
on Mac OS or Linux distributions, since the OpenGL API is supported by all major platforms, but
compiling and running on these platforms was not attempted.

Tools and frameworks


The project is built and maintained using Microsoft Visual Studio 2012. Because the engine has
to support a lot of multimedia assets and resources, a set of software tools was needed to
handle these kinds of resources.
The 3ds Max (2014, 2015) student version was used for modeling, creating and exporting various
3D assets as well as texture files. The Havok Content Tools plugin for 3ds Max was used for
exporting Havok Physics data files. Data files contain the physics properties of 3D objects
and the physics world, needed for the physics simulation.
Sound editing and transcoding were performed using the free version of the JetAudio software.
For simple image manipulation and editing, FastStone Image Viewer was used.

3rd party SDKs and libraries


Building a game engine prototype in a short amount of time would not be possible without using
a set of libraries and 3rd party SDKs. For example, the OpenGL API does not provide run-time
mechanisms for determining which extensions are supported by the running platform, nor
simple cross-platform methods for creating windows, handling input or even managing an OpenGL
context. Thus, we can understand why there was a need for solid cross-platform libraries that
offer this kind of high-level API.
Window creation, context management, input and event handling are managed by GLFW, a
multi-platform OpenGL library. The GLEW extension library is used for determining run-time
OpenGL extension support.
For run-time image decoding and encoding, the stb_image header library is used. It provides fast
methods for reading and writing the most popular file formats, such as JPEG, PNG, GIF, TGA, BMP
and PSD.
In order to represent anything inside a 3D world we need to rely on common vector and matrix
operations, which are performed by GLM, the OpenGL Mathematics library for 3D software.
3D assets are loaded using the Open Asset Import Library (Assimp). Internally, Assimp stores data
as a hierarchy of nodes, thus offering good support for scene management and data retrieval.
It supports most of the available file formats, including animation systems. Assimp also comes
with a great viewer tool that offers a fast way to load and examine 3D assets.
The pugixml library was used for parsing XML resource files.
The physics engine component is provided by the free version of the Havok Physics library. The
Havok middleware suite consists of 7 modules, of which the 2 most important, Animation Studio
and Physics Engine, are also offered for free under a special license. Together with the Content
Tools plugin, used to export physics system information from 3ds Max or Autodesk
Maya, they offer a complete middleware solution for integration into game engines.
Support for text and font rendering is provided by the FreeType library. For high-level
processing, glyph caching to texture files and distance field rendering, the freetype-gl
library was used.
Audio integration is done using SFML (Simple and Fast Multimedia Library). The library provides
support for buffered sounds, usually used for sound effects, as well as music streaming from file
for background music, audio logs, etc.

Engine architecture
Core engine structure
The static Engine class represents the heart of the engine, being responsible for the startup
and update loop of the program. When the program starts it first creates a window and a
rendering context and then attaches the GLFW callbacks.
Because GLFW offers support for multiple windows, window creation and management are not
handled by the Engine class but by the WindowManager. During the initialization process a new
WindowObject is requested. Each window must be initialized either in full screen or in windowed
mode, because GLFW does not provide support for both at the same time. The window then
registers itself to the InputSystem for mouse actions and key input, as well as to the
WindowManager for window-related events such as reshape.
After the context and all callbacks are initialized, a new World object can be set and the
main loop can run.


Engine main loop


Games are real-time applications, which means that the world must be updated as fast as it can
be. The following actions must be performed between each 2 consecutive frames (a minimal
sketch of the loop follows):

GLFW event polling
  o Treat window events such as resize or running mode (windowed/full screen)
  o Process new input actions - keyboard and mouse
Compute the new delta time
Update the world instance
Buffer swapping
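
As a rough illustration, the loop body could look like the following sketch. The World type and
its Update method are placeholders, not the engine's actual API; only the GLFW calls are real.

#include <GLFW/glfw3.h>

struct World { void Update(float dt) { /* update all game systems */ } };

void RunMainLoop(GLFWwindow* window, World& world)
{
    double lastTime = glfwGetTime();
    while (!glfwWindowShouldClose(window)) {
        glfwPollEvents();                                       // window + input events
        double now = glfwGetTime();
        float deltaTime = static_cast<float>(now - lastTime);   // frame delta time
        lastTime = now;
        world.Update(deltaTime);                                // advance the world instance
        glfwSwapBuffers(window);                                // double buffering swap
    }
}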

The input system


All input events are handled by the InputSystem manager, which acts as a publish-subscribe
system. In order to receive input events, an observer class must be registered to the
InputSystem.
ObjectInput is an abstract class that must be implemented by the desired receiver object.
When constructed, it registers itself to the InputSystem. It offers methods for
handling most input event cases: key pressed, key released, an update method for continuous
actions, mouse move, scroll and click events.
The input system was designed to be really easy to use while also offering a loosely coupled
system. When an event is triggered it is dispatched to all observers active at that specific
time.
Because certain sets of keys or mouse events must be used multiple times in different
situations, a rule-based system was designed. The input system is organized into multiple input
rules, each one of them describing a set of active input groups. The same group can be active
for multiple rules.
While input rules are meant for application-level behavior, groups represent object-level
behavior.
Examples of application-level behavior:

Application menus
Debug console
Main gameplay
Game engine as a level editor
Other systems that should rebind most of the keys or mouse events
  o Text input
  o Mini games
  o All kinds of custom in-game systems (inventory, skill profilers, etc.)
  o Interactive cutscenes

As we can see, collisions between identical key bindings can easily be avoided by using
different input rules.
An input group represents a set of key events and mouse actions used by a certain object. This
can be very useful for activating and deactivating sets of key bindings while still keeping the
same overall input behavior (rule).
Use scenarios:

A character's special power (spell) can only be used after 30 seconds of charging. During
the charging time it doesn't make sense to handle input events from the activation key
The character is stunned and can no longer use special abilities for 5 seconds

Common examples of groups can be:

Controls for the main character
Camera control
Certain debug controls
Gameplay control groups
  o Selecting certain items from the inventory
  o Performing different game-specific actions (spells/activating abilities/changing
weapons, etc.)

Each observer object belongs to only one input group. A sketch of a custom observer is shown
below.
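
A minimal sketch of a custom observer. The virtual method names on ObjectInput (OnKeyPress,
Update) and its self-registering constructor are illustrative assumptions; the real interface
may differ.

// Assumed shape of the engine's abstract observer (simplified for illustration).
class ObjectInput {
public:
    ObjectInput() { /* registers itself to the InputSystem */ }
    virtual ~ObjectInput() = default;
    virtual void OnKeyPress(int keyCode, int mods) {}
    virtual void Update(float deltaTime) {}
};

class Camera;  // forward declaration

// A custom observer: one-shot key actions plus continuous movement each frame.
class DebugCameraInput : public ObjectInput {
public:
    explicit DebugCameraInput(Camera* cam) : camera(cam) {}
    void OnKeyPress(int keyCode, int mods) override {
        // one-shot actions, e.g. toggling wireframe mode
    }
    void Update(float deltaTime) override {
        // continuous actions, e.g. moving the camera while a key is held
    }
private:
    Camera* camera;
};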

Input processing
Since GLFW does not provide methods to handle multiple keyboard events at the same time, a
buffer is used for storing the keys pressed at a moment in time.
By default GLFW provides the following states for keyboard keys:

GLFW value      Meaning
GLFW_RELEASE    The key has been released
GLFW_PRESS      The key was pressed once
GLFW_REPEAT     Generated by continuously pressing a key

Each key is identified by its GLFW keycode. The key buffer is updated each frame, thus offering
the possibility to implement continuous actions such as movement (character, camera).

Mouse movement and action processing


The mouse movement callback offers information about the current cursor position as well as
the delta distance since the last frame. The system pointer is captured inside the context
window and repositioned in the center of the window after each frame. The system cursor can be
hidden so that it can be replaced with a custom application cursor.

Game speed and frame time


Above 24 frames per second, the human eye can no longer make a clear distinction between
individual frames (images), and because of the fast sequence the illusion of motion is created.
24 frames per second is the standard for movies and videos, but real-time applications such as
games must target at least 30 frames per second. This is usually considered the
lowest limit for a game to be playable.
Even though the human eye and brain are capable of processing and distinguishing motion at a
far greater rate, hardware limits usually push developers to consider 60 fps the desired limit
for fast-paced action. Nowadays we even have monitors capable of displaying up to 120 frames
per second, but the video processing power needed to drive them is roughly double that needed
at 60 fps. Thus, game engines usually sacrifice very high frame rates for better image quality
in order to offer a better overall experience.
Each second the engine must update the image at least 60 times, and during this time all
dynamic objects and events must also be updated. The time between 2 consecutive frames is not
necessarily constant, so in order to achieve frame-independent simulation, the delta time
between frames is computed and used as a scale factor for the game speed, and the next frame
is updated based on this value.
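
For example, scaling movement by the delta time keeps the perceived speed constant regardless
of frame rate. A trivial sketch, with made-up variable names:

#include <glm/glm.hpp>

// Frame-independent movement: the same speed at 30 fps and at 120 fps.
glm::vec3 Integrate(const glm::vec3& position, const glm::vec3& velocity, float deltaTime)
{
    return position + velocity * deltaTime;   // meters + (meters/second * seconds)
}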

World Instance
The World class represents the base object for the game world. Its constructor requests that
the global Manager load all resources specified inside the configuration files. After that,
game-specific initialization code is run.
Each frame, the engine calls the Update method of the World instance that was set as
current.

Double buffering
The engine uses double buffering, which means that each frame is first drawn in memory while
the previous one is sent to the screen to be displayed. Thus, double buffering refers to
having 2 drawing buffers at the same time. While frame N, which resides in buffer A, is
sent to the screen, frame N+1 is being drawn in buffer B. After the process is
complete, the result from buffer B is sent to the screen and buffer A is used to
render a new frame. Swapping the 2 buffers after each frame prevents the engine
from writing to the buffer that is being displayed.


Resource management
A resource management system is an essential core feature found in all game engines. It should
offer a fast and robust way to store, edit and retrieve information for all kinds of game
assets, resources and configuration settings. The management system must handle both off-line
resources, in terms of loading and unloading all kinds of specific asset files, and run-time
resource manipulation.
For great flexibility, a database system or configuration files must be used. Both methods offer
run-time loading, storing and editing of various configuration settings and data files, while
also providing a great way to organize resources.
The most commonly used file formats are INI, CFG, JSON and XML files.
This implementation uses XML files.

The resource manager


In order to initialize a new game world, a configuration file must be specified first. The file
should contain the paths to, and the names of, the files that need to be loaded by the resource
system.
The resource manager is not a single unified system that handles all kinds of data; instead it
is composed of a collection of managing subsystems, each one of them handling a
different class of resource data. All subsystems (managers) are singleton objects instantiated
by the central Manager so that they can be easily retrieved at run-time.
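
A minimal sketch of such a central access point. The subsystem types and accessor names are
placeholders for illustration; the thesis's actual class layout may differ.

// Placeholder subsystem types; the real managers hold their own state.
struct ResourceManager {};
struct TextureManager  {};
struct AudioManager    {};

// Hypothetical central Manager exposing singleton subsystems.
class Manager {
public:
    static ResourceManager& Resources() { static ResourceManager r; return r; }
    static TextureManager&  Textures()  { static TextureManager  t; return t; }
    static AudioManager&    Audio()     { static AudioManager    a; return a; }
    // ... SceneManager, ShaderManager, FontManager, PhysicsManager
};

// Usage anywhere in the engine, e.g.:  Manager::Textures() ...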

Asset management
The ResourceManager deals with 3D asset loading and game object bindings. Besides loading,
it also provides fast retrieval of stored data by using unique resource IDs.
3.3.2.1 The asset resource files
These files describe which 3D assets must be loaded (path to file, file name, unique object
name) and custom game object bindings.
Game object bindings are defined based on a 3D mesh, a Havok physics file that describes the
object's physics properties, a transform property for setting the default position, scale and
rotation, as well as other properties, like whether it should cast shadows or not. This way we
can easily define custom game objects that need to be instantiated multiple times in the scene.

Scene management
The SceneManager loads the scene file and offers run-time management for world objects:
add, remove and update. The scene file is used to instantiate the world environment with
static objects and simple dynamic ones that are affected by gravity. The same file is also used
for defining deferred light locations and power factors.
In order to create a new instance, a game object reference and a transform filter must be
specified.

Texture management
Textures are created when the scene is imported. Each texture is identified by a unique name
and/or an ID. During 3D asset loading, the specified texture maps are loaded by the
TextureManager. The TextureManager stores a vector of all the loaded textures and a hashmap
for retrieving Texture objects by unique name. If a texture file is shared between multiple 3D
assets it is loaded only once, cached, and then used for all assets that request that file
during loading (see the sketch below).
Texture files are loaded using the stb_image header library. Image data is uploaded to the GPU
video memory as an OpenGL texture, then only the texture ID is kept in system
memory.
If a mesh component requires a specific texture that cannot be found or loaded, a default
checker texture is provided for the rendering process.
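
A sketch of the caching idea, assuming hypothetical helper names; the decode-and-upload step
(stb_image plus glTexImage2D) is elided.

#include <string>
#include <unordered_map>

// Hypothetical texture cache: each file is decoded and uploaded only once.
class TextureCache {
public:
    unsigned int Get(const std::string& file) {
        auto it = cache.find(file);
        if (it != cache.end())
            return it->second;                        // already uploaded: reuse the GL id
        unsigned int id = LoadWithStbAndUpload(file); // decode + GPU upload (not shown)
        cache[file] = id;
        return id;
    }
private:
    unsigned int LoadWithStbAndUpload(const std::string& file);
    std::unordered_map<std::string, unsigned int> cache;  // file name -> OpenGL texture id
};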

Physics manager
The physics manager loads the Havok data files needed by the physics system for simulation
purposes. The manager is also responsible for physics object instantiation. Because
simulation objects can share physics properties such as shapes, instantiation is done with
minimal additional memory.

Shader manager
In order to share as many resources as possible, a shader manager was also implemented. A
shader file represents just one programmable stage of the graphics pipeline. In order to achieve
different special effects or post-processing effects, or simply to render an object on the
screen, multiple shader files must be combined into one shader program. A shader program can be
constructed from a combination of various shaders: vertex, fragment, geometry, tessellation
evaluation, tessellation control or compute shaders. All, except the compute shader, are custom
programs for the GPU rendering pipeline.
A configuration file is used for defining shader programs. Each XML <shader> element represents
a new shader program constructed from the specified files. An example entry is sketched below.
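
An entry could look like the following sketch. The element and attribute names are assumptions
made for illustration; the actual schema used by the engine may differ.

<!-- Hypothetical shader program definition: one program from two stage files. -->
<shader name="deferred_geometry">
    <vertex>shaders/geometry.vert</vertex>
    <fragment>shaders/geometry.frag</fragment>
</shader>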

Audio manager
Audio processing is provided by the SFML library, which utilizes the cross-platform Open Audio
Library (OpenAL). SFML offers the possibility to define buffered sounds as well as streams for
playing large audio files that don't fit into memory. Buffered sounds are meant to be used for
sound effects that should suffer no lag when played (footsteps, jumping sound effects, gun
shots, etc.), while music streams are served on the fly and are mainly targeted at non-critical
gameplay sounds like background music, audio logs, etc.

Font manager
The text component is as important as every other rendering feature available in a game
engine. Text rendering is heavily used in games for menus, dialogue subtitles, gameplay
information or hints, inventory systems, displaying information about world objects, and the
list can go on.
The most widely used library for text rendering with OpenGL is FreeType. The FreeType library is
freely available and was designed to be small, efficient, highly customizable and portable. It
is capable of producing high-quality output for most vector and bitmap font formats, but sadly
offers a really low-level and quite complicated API.
The freetype-gl library was written on top of FreeType and offers a great high-level API while
also adding font management features. It also offers the possibility to render aliasing-free
text by computing a distance field for the font texture.
Common font glyphs are cached in a texture, then the distance field is computed. The
FontManager loads and keeps a list of all loaded fonts, while text rendering is done by the text
component object.

Physics system
The physics system is provided by the Havok Physics engine.

Collisions system
Usually game objects can be separated into 2 main categories: decorative ghost objects that are
not influenced by the physics system, and props or objects for which physics simulation is
needed.
Havok Physics offers both single-threaded and multi-threaded simulation, the latter being
used in this engine. The physics world is represented as a box whose size is set before
initialization. Havok offers support for many types of physics objects: dynamic, fixed, bullet,
debris, keyframed, etc.
Objects' physics properties are imported using the Havok serialization system. Physics files are
exported from 3ds Max using the Havok Content Tools plugin. The plugin allows the creation of
rigid bodies as well as defining custom constraints between objects. Exporting can be done in a
readable XML format or as a binary file. Before exporting, various filters can be applied in
order to transform or select just a part of the data.

Most 3D assets that need to interact with the world physics system have a corresponding
physics property file that is loaded at initialization. Of course, a lot of dynamic interaction
is not imported from data files, because certain kinds of properties or interaction
configurations cannot be saved on disk.

Synchronization between worlds

The physics simulation needs to be updated constantly in order to get smooth and realistic
results. The physics world and the rendered world are two completely separate entities, nothing
being shared between them. Because of that, a new step must be added that keeps these two
worlds in sync.
The synchronization process is actually a two-step update. Most objects need to be updated
based on the physics simulation, which usually means getting the transform properties
(position, rotation, scale) and recomputing object model matrices.
In a game there are also multiple objects that have an impact on the physics world by creating
different forces that drive the simulation further. These forces usually happen at certain
points in time, based on predefined game mechanics or player input, and must be applied to the
physics simulation. We can consider this the second step in the synchronization process,
sketched below.
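
A rough sketch of the per-frame two-step update. The PhysicsWorld and PhysicsComponent types
and their method names are invented wrappers for illustration; the real engine talks to Havok's
own classes.

#include <vector>

// Illustrative placeholder types standing in for the engine's real classes.
struct PhysicsComponent {
    void ApplyPendingForces();    // push gameplay/input forces into the simulation
    void UpdateOwnerTransform();  // copy simulated position/rotation back to the entity
};
struct PhysicsWorld { void Step(float dt); };  // wraps the Havok world step

void SyncPhysics(PhysicsWorld& world, std::vector<PhysicsComponent*>& comps, float dt)
{
    for (auto* pc : comps) pc->ApplyPendingForces();   // forces from the game side
    world.Step(dt);                                    // advance the simulation
    for (auto* pc : comps) pc->UpdateOwnerTransform(); // results back to the game side
}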

Simulation time and speed


The synchronization process happens after each frame. The simulation must be stepped either
with a constant value, which makes the simulation speed depend on the frame time, or
with the actual delta time between two consecutive frames. Theoretically, if the rendering
process offered a constant frame rate, the first method would be much better, because the
simulation outcome would be constant between each two consecutive frames. Sadly, it is usually
impossible to guarantee a minimum frame rate, so in order to avoid simulation speed issues the
delta time between frames can be used as the stepping value. This way we can guarantee a
frame-independent simulation speed. The only problem is that large variations or drops
in frame rate will produce the same kind of variation inside the physics simulation, and not a
slow-motion effect like in the first case.
A high frame rate produces smooth animation, while a sudden drop may result in a
teleportation-like event. The problem is not only visual; it can also impact the gameplay.
Usually, very few dynamic objects are checked for continuous collision - only bullets or other
known fast-travelling objects critical to the gameplay mechanics - because the limited
processing resources don't allow for more. When a frame drop happens and certain conditions are
met, objects that don't check for continuous collision might appear to have travelled through
thin objects, like walls for example.
Having that in mind, even if a constant frame rate is hard to achieve, we also have to ensure a
minimal variation between consecutive frames.


Game object representation

From the start, the engine architecture was designed to offer a high level of customization
while also keeping different subsystems decoupled from each other.
The core class that defines game entities is based on a component system. The actual design of
the components is inspired by the Component Pattern article from Game Programming Patterns
and by Unity's class structure.
A component-based system offers many advantages over a classic representation, but adds some
complexity too. Using components we can easily define a great variety of objects, while also
adding the possibility to share components across different objects. It also reduces a game
entity to a simple container of components (sketched below), thus creating a loosely coupled
system which makes the code easier to maintain and extend. Another great feature is the fact
that components can be swapped at run-time in order to change an entity's behavior or
functionality.
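
A minimal sketch of the container idea, with simplified names; the engine's actual GameObject
holds typed component slots rather than a generic lookup.

#include <memory>
#include <vector>

class GameObject;  // forward declaration

// Base class for all components; each component knows its owner entity.
struct Component {
    GameObject* owner = nullptr;
    virtual ~Component() = default;
};

// A game entity reduced to a simple container of components.
class GameObject {
public:
    void Add(std::unique_ptr<Component> c) {
        c->owner = this;                       // owner pointer enables proxy communication
        components.push_back(std::move(c));
    }
    template <typename T>
    T* Get() {                                 // first component of the requested type, or null
        for (auto& c : components)
            if (T* t = dynamic_cast<T*>(c.get())) return t;
        return nullptr;
    }
private:
    std::vector<std::unique_ptr<Component>> components;
};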

Component communication
The fact that components are part of the same object implies that they are part of a larger
whole and need to coordinate. Coordination means communication.
Communication can be done in multiple forms, depending on what benefits are pursued by
each specific implementation. If further component decoupling is intended, a message system
might be the best choice. Sadly, an event queue or messaging system adds a considerable
amount of overhead over classic direct communication. Of course, in order to still
preserve the decoupled state of the components, indirect communication is the best approach.
One common method of indirect communication is through the owner object. Since, by design,
a component is just a way of encapsulating a specific part of an entity's behavior, it should
always be owned by a certain entity.
By defining an owner object for each component, we can use it as a proxy communication
system. For most purposes this is good enough, because communication is usually done
between the components owned by the same entity. Sadly, a system that allows
communication between 2 game entities, or any other 2 objects that share no common
properties, can only be implemented efficiently by using a global messaging system like a
topic-based event observer or event queues.

The base object class

The engine utilizes C++ inheritance to define most of the game entity classes.
Almost all classes are derived from the base Object class. This offers some nice advantages,
like the possibility to use C++ polymorphism in the global event system in order to have
two-way bindings.


Game entities
The GameObject class represents a base game entity. Entities are not necessarily objects
that end up visible on the screen. They can have multiple behaviors or usages based on how
they were defined and constructed. A game entity is defined as a set of primary components,
none of them being mandatory.
Examples of game entities: a controllable character, different props, dynamic objects, static
objects, light sources, virtual objects like trigger volumes or phantoms (from Havok), particle
emitters, and the list can go on.
The following game components were created in order to define a broad range of behaviors:

Mesh component
Render component
Shader component
Physics component
Transform component
ObjectInput component

Besides these, new components can be designed and integrated really quickly into the game
engine. Other examples of common components can be: a particle system, an audio source, a
motion effect, etc.

Transform component
A transform component is used to store 3D world positioning information for an entity:
position, rotation and scale. Rotation is the only property stored in two formats: as a
quaternion as well as Euler angles. Quaternions are used for computation and the physics
synchronization process, while Euler angles are meant to be just a user-friendly interface for
programmers and artists. Thus, default rotations as well as interaction with objects are done
using Euler angles.
The transform component offers common methods to easily update or reposition game
entities. This API is intended to be used by a simple real-time game editor.

Mesh component
The mesh component represents a 3D or 2D asset that needs to be rendered. A mesh isn't just
a single renderable entry; it can also be formed from multiple entries that act like a single
one - a group of flowers and some grass around them, for example.
A mesh object contains a number of entries that need to be rendered as well as their
corresponding material objects. A material contains information about a specific texture and
lighting properties.


Renderer component
Ideally, lighting would be computed in real time for all objects in the scene, no matter if they
are static or dynamic in behavior. Since many rendering effects are way too expensive from a
computational perspective, a renderer component was created. This component is used to
enable or disable various rendering flags for each game entity separately - for example,
whether an object should cast shadows, reflect light or emit light.

Shader component
A shader component holds the information about which program must be used by the object
when it is rendered. This component is mostly used to define base objects that are constantly
rendered using only one shader program. A shader component can also be passed as a
parameter to the render method in order to use that specific program. This is intentional,
because an asset might be rendered multiple times for different rendering tasks.

Physics component
The physics component represents the link between a physics world entity and a game entity.
As a base object, it is used for the synchronization process between the simulation and the
game world. After each simulation step, the component requests the changes and updates entity
properties according to the data received from the simulation world.

Input component
An input component is used for getting access to input system events. When created, it
registers itself to the input system in order to receive common input events: keyboard or
mouse events. The input component by itself does nothing besides the registration process. It
must be extended into a new custom input component that handles the desired events.
Examples of game entities that utilize an input component: the main character, the debug
camera, the menu system, the debug system, etc.

Extending components
The components presented are just base implementations for interacting with a specific
subsystem. By default, they ensure that basic requirements and functionality are met. In order
to describe complex behaviors, each component can be extended further into a new custom one.
Doing so, unique mechanics and behaviors can be defined very quickly and easily, as in the
sketch below.
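
For illustration, reusing the Component and GameObject sketch from earlier: a custom component
that spins its owner each frame. The TransformComponent stand-in and its Rotate helper are
assumptions, not the engine's real API.

// Minimal stand-in reusing the Component/GameObject sketch above.
struct TransformComponent : Component {
    void Rotate(float x, float y, float z);  // assumed Euler-angle rotation helper
};

// A custom component: spins its owner entity at a fixed rate each frame.
struct SpinComponent : Component {
    float degreesPerSecond = 90.0f;
    void Update(float deltaTime) {
        if (auto* t = owner->Get<TransformComponent>())   // owner acts as a proxy
            t->Rotate(0.0f, degreesPerSecond * deltaTime, 0.0f);
    }
};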


Communication system
The run-time resource manager plays a very important role in the communication system,
because many requests end up being treated by it. Since the resource manager was not
designed as a unified system, but instead as a collection of subsystems that handle specific
kinds of data, a fast way to retrieve each subsystem was needed. Thus, a static global object
was created with the purpose of managing and providing fast access to each subsystem.
Common moments when the manager can be invoked:

Loading and unloading off-line resources
The asset manager and physics manager are requested to provide the necessary data in
order to create and spawn new objects into the scene
The audio manager has to play a specific sound or start streaming a new audio file
The font manager needs to create a new text block
The simulation world needs to be updated (add/remove entries)
All objects need information about off-line or run-time resources

Of course, direct access to the resource system won't solve all the problems we may face in a
game. There's still the need to send messages between 2 ordinary objects, or between isolated
components of the same entity. Solving both with the same method would not
provide an efficient system, so instead two different approaches were used: the owner
proxy and the central event queue.
The proxy method was described in section 3.5.1, Component communication. Basically, all
communication is done through the component's owner entity.
The second method offers greater flexibility, but at the cost of speed. A global event system
was implemented, which offers the possibility to emit and subscribe to events. Events are
treated synchronously, in the order they are received.
Emitting an event is as easy as calling the event system to broadcast an event with a specified
topic ID and object data.
To receive events, a game entity must derive from the EventListener base class. After that it
must subscribe itself to the EventSystem on all the specific topics expected. When an event is
broadcast, the OnEvent function of the listener is called with the appropriate topic and the
data sent. The data sent is just a pointer to a base engine Object instance. Since the receiver
knows what it expects, it can cast the object back to the expected type. This way the message
system can also be used as a two-way binding between game entities.
Ex: entity A sends itself as an event to an entity B, which uses the received message to update
itself and then calls A to update, by using the reference provided in the message data.
The event system provides an efficient implementation for "fire and forget" types of events
while also offering a completely decoupled system. A small sketch follows.
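
A minimal sketch of the topic-based event flow described above. The class and method names
mirror the text (EventListener, EventSystem, OnEvent) but the bodies are assumptions made for
illustration.

#include <map>
#include <vector>

struct Object {};  // stand-in for the engine's base Object class

// Listener interface: receivers derive from this and implement OnEvent.
struct EventListener {
    virtual void OnEvent(int topic, Object* data) = 0;
    virtual ~EventListener() = default;
};

class EventSystem {
public:
    static void Subscribe(int topic, EventListener* l) { Topics()[topic].push_back(l); }
    static void Emit(int topic, Object* data) {
        for (EventListener* l : Topics()[topic])   // synchronous, in subscription order
            l->OnEvent(topic, data);
    }
private:
    static std::map<int, std::vector<EventListener*>>& Topics() {
        static std::map<int, std::vector<EventListener*>> t;
        return t;
    }
};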


Menu system
A game engine should offer support for menus. Creating and configuring new menus should be
fairly easy, as should rendering and using them at run-time.
Usually menus can be seen as a collection of entries that perform certain actions. Some of
these entries redirect the user to other pages (menus) while others perform certain actions.
There is a wide variety of common interaction methods used in games, such as:

simple one-action entries (Ex: resume/exit buttons)
2-state entries - Yes/No (Ex: render shadows)
cycling through a predefined set of entries (Ex: setting the resolution)
text input entries (Ex: setting the character name)
slider entries (Ex: setting the audio volume)
multi-option entries (Ex: selecting 2 items out of 6)

Menu configuration
In order to offer dynamic configuration for menus, a hierarchical XML structure was created.
Menus are defined as pages that contain multiple entries. Each page is uniquely identified by a
string ID, while each entry must specify a type and an action. Currently only 3 types of
behavior are supported: action, page and toggle.
The action entry will emit the specified action ID when activated. The page entry will change
the active rendered page, while the toggle entry will emit a Yes/No value each time it is
triggered. A sketch of such a configuration follows.
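
The structure could look like this sketch. The tag and attribute names are illustrative
assumptions, not the exact schema used by the engine.

<!-- Hypothetical menu page with the three supported entry types. -->
<page id="main_menu">
    <entry type="action" name="Resume"  action="RESUME_GAME"/>
    <entry type="toggle" name="Shadows" action="TOGGLE_SHADOWS"/>
    <entry type="page"   name="Options" target="options_menu"/>
</page>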

Run-time representation and behavior

The menu pages and all their entries are created at run-time based on the configuration file.
Pages are stored inside a map in order to offer full customization over the rendering decisions.
Thus, the described hierarchy flow is computed in real time using a stack.
Menus are rendered at run-time using the name value provided by each entry. A default
rendering method is provided, which can be replaced with a custom one based on game design
decisions.

Figure 2. Menu Components


Rendering and post processing


The rendering process carries great responsibility in every game engine. The visual
representation of the world, as well as all kinds of post-processing effects, are handled by the
rendering process.
Because rendering a life-like world with all the lighting properties observed in the real world
is not possible with current hardware processing power, or at least not in real time, many
rendering techniques that can efficiently simulate different visual aspects were developed. Most
techniques are not based on real-life physics formulas, but instead on different tricks or
simplified math that can offer an acceptable level of quality.
State-of-the-art game engines offer a great variety of rendering features and post-processing
effects, like:

shading - describes how the color and brightness of a surface varies with lighting
texture mapping - a method used to apply detail to a surface
bump mapping - a method to simulate small-scale bumpiness on a surface
parallax mapping - creates the illusion of apparent depth on different surfaces
normal mapping - a technique used to fake the lighting of bumps and dents for low-polygon
meshes
shadow techniques - a lot of different methods were created for adding the illusion of
light obstruction by different objects: shadow mapping, shadow volumes, etc.
light reflection
transparency, translucency, refraction, diffraction
indirect illumination - also known as global illumination in most engines
depth of field - a blur effect for objects that are in front of or behind the object in focus
motion blur - object edges appear blurry due to high-speed motion
fogging - a variably dense medium that can affect light intensity and scattering

Besides these methods for adding real-life features to a virtual world, there are also a lot of
post-processing effects created for 2D image processing. Some very popular effects used in
games are: edge detection, blur filters, algorithms that influence image colors (grayscale,
night vision, etc.), distortion effects, compositing, image masks, adding noise or sharpening,
and the list can go on.
Everything we see is due to the fact that light is reflected by the objects around us. On the
other hand, display screens emit light that is captured by our eyes and then processed by the
brain. What we actually see is based on how light was emitted. There are certain techniques to
trick the brain into thinking that a 3D world is in front of our eyes, but what we actually see
most of the time is a 2D rendering of a 3D scene.
Since the beginnings of graphics visualization, many rendering algorithms have been researched.
The most advanced ones use scene lighting data (ray tracing, photon mapping, beam
tracing, path tracing) in order to reproduce an image as close to real life as possible. Sadly,
these kinds of algorithms are extremely expensive from a computational perspective and
impractical for real-time rendering. Most real-time systems, like games, use the rasterization
technique for the rendering process. In this approach, scene objects are projected onto an image
plane without advanced optical effects, hence the need for all the post-processing and advanced
rendering features that add realism to the scene.

The rendering pipeline


In order to represent scene data as a 2D raster image, a sequence of rendering steps must
be performed by the graphics card. In OpenGL, the graphics API chosen for this implementation,
the rendering pipeline is composed of seven stages [11]:

Prepare vertex array data
Vertex processing
  o vertex shader
  o optional tessellation stages
  o optional geometry shader
Vertex post-processing
  o transform feedback
  o clipping
  o viewport transforms
Primitive assembly
Rasterization
Fragment processing
  o fragment shaders
Fragment tests
  o scissor test
  o stencil test
  o depth test
  o blending
  o logical operation
  o write mask

Figure 3: Diagram of the Rendering Pipeline.
Programmable shader stages are colored in blue [11]

The latest version of OpenGL offers 5 programmable shader stages: Vertex Shader,
Tessellation Control Shader, Tessellation Evaluation Shader, Geometry Shader and Fragment
Shader. There is also a sixth programmable shader, the Compute Shader, but it is not part of
the rendering pipeline.


MVP transformations
MVP stands for Model-View-Projection and refers to the matrices used by OpenGL to
transform geometric data for rendering. The simplest way to represent objects in
a 3D space is by using Cartesian coordinates together with rotation and scale information.
The vertex transformation process is essential for rendering. Using a set of three
4x4 matrices, geometric data is transformed by the graphics pipeline from local (model)
coordinates to world space by the Model matrix, then to view space (eye space) by
the View matrix, and then to clip space by the Projection matrix. After that, coordinates are
normalized by dividing by the 4th (w) component resulting from the previous 3 transforms, and
window coordinates are then computed using the viewport transformation.

Figure 4. OpenGL Transformation [1]

The model matrix contains the translation, rotation and scale factors relative to the 3 basic
coordinate axes. The model matrix is obtained by multiplying the translation matrix by the
rotation matrix and then by the scaling matrix. When a property is changed, the Model matrix
must be updated accordingly.
The view matrix ensures the transition to the eye-space coordinate system. The matrix is
computed using the following camera properties: position, the up vector and the center point
towards which it is looking.
The projection matrix defines the viewing volume (the frustum), i.e. how vertex data is
projected onto the screen. Games usually use a perspective transformation, but it is also
possible to use an orthogonal transformation. A GLM sketch of these transformations is shown
below.
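
A sketch of building these matrices with GLM; all parameter values (field of view, near/far
planes, up vector) are placeholders.

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <glm/gtc/quaternion.hpp>

glm::mat4 ComputeMVP(const glm::vec3& position, const glm::quat& rotation,
                     const glm::vec3& scale, const glm::vec3& cameraPos,
                     const glm::vec3& cameraTarget, float aspect)
{
    // Model: translation * rotation * scale, as described above.
    glm::mat4 model = glm::translate(glm::mat4(1.0f), position)
                    * glm::mat4_cast(rotation)
                    * glm::scale(glm::mat4(1.0f), scale);
    // View: camera position, target point and up vector.
    glm::mat4 view = glm::lookAt(cameraPos, cameraTarget, glm::vec3(0, 1, 0));
    // Projection: a perspective frustum (fov, aspect ratio, near, far).
    glm::mat4 proj = glm::perspective(glm::radians(60.0f), aspect, 0.1f, 1000.0f);
    return proj * view * model;
}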

Deferred rendering and framebuffers


By default the scene is rendered to the default framebuffer provided by the graphics API.
Because most video graphic features cant be achieved efficiently in a single rendering step,
additional stages are required. The process of rendering a scene in multiple stages is called
deferred rendering.
In deferred rendering the final image is produced by applying shading at the end. Deferred
rendering offers a considerable performance boost because shading computation is done in
screen-space. Most advanced real-time rendering techniques are based on various deferred
rendering methods.

26

Chapter 3. Implementation
For a multi-stage rendering process, additional memory is required. Modern graphics hardware
offers the possibility to define and use multiple framebuffers in the video card's
memory. Besides a default depth and stencil buffer, a framebuffer can provide one or
multiple render targets. Each render target is represented in video card memory as a
texture with a predefined size and color bit depth.
This engine implementation offers the possibility to define multiple framebuffers with a variable
number of render textures. Framebuffers are used for storing different scene properties like
positions, normals, texture coordinates, depth information, etc., which are used later for various
rendering techniques like shadow mapping, screen space ambient occlusion, post-processing
filters, etc.
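As a sketch of how such a framebuffer could be created with raw OpenGL calls (the resolution and texture formats here are illustrative assumptions, not the engine's actual configuration):

    // Framebuffer with two color render targets (e.g. positions and normals)
    // plus a depth attachment.
    GLuint fbo, positionTex, normalTex, depthTex;
    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);

    glGenTextures(1, &positionTex);
    glBindTexture(GL_TEXTURE_2D, positionTex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16F, 1280, 720, 0, GL_RGBA, GL_FLOAT, nullptr);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST); // completeness
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, positionTex, 0);

    glGenTextures(1, &normalTex);
    glBindTexture(GL_TEXTURE_2D, normalTex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16F, 1280, 720, 0, GL_RGBA, GL_FLOAT, nullptr);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT1, GL_TEXTURE_2D, normalTex, 0);

    glGenTextures(1, &depthTex);
    glBindTexture(GL_TEXTURE_2D, depthTex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, 1280, 720, 0,
                 GL_DEPTH_COMPONENT, GL_FLOAT, nullptr);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, depthTex, 0);

    // Render into both color attachments simultaneously.
    GLenum drawBuffers[] = { GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1 };
    glDrawBuffers(2, drawBuffers);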

Shadow mapping
Real-time shadow rendering is based on the shadow mapping technique. Only one global light
source (similar to the sun) affects scene shadows.
The shadow mapping technique is based on testing whether a pixel is visible from the light
emitting source. By comparing the distance to the pixel from the light source's point of view with
the distance of the corresponding pixel from the light source's depth buffer, a decision is made
on whether it is occluded or not.
Computing shadows is a two-step rendering process. In the first step all objects that cast
shadows are rendered from the light's point of view in order to fill the depth buffer. Because the
light acts like the sun, which is a directional light source, an orthographic projection is used for
the scene transformation. This means that the distances from objects to the light source will not
affect shadow size.
The texture size for storing depth information from the light source is 4096 x 4096 pixels. At
this resolution shadow quality is quite good, but the edges are very hard. If the storing
resolution is lower, an aliasing effect will appear, which is certainly not a desired effect.
Unfortunately, blurring the edges in the conventional way, by applying a filter kernel on a
texture, is not possible because occlusion results are not stored anywhere.
Percentage Closer Filtering (PCF) is a technique used to offer a smooth transition between
shadowed and lit regions. Instead of taking one sample in order to determine if a pixel is
lit or not, several samples around the pixel are taken and an averaged result is computed.
Samples are taken using a custom kernel. Large kernels offer better quality but with a
serious drop in performance, because the number of texture fetches needed grows
quadratically (N * N).
Assuming that the kernel size is N, we would need to fetch N * N texels for every pixel on
the screen in order to decide the pixel value. Most of the time these texture fetches are
not even needed, unless a pixel is very close to the edge of the shadow, more exactly at a
distance below N. Because taking all the samples for every pixel on the screen would result in a
serious performance drop, a small set of them can be used in order to decide whether a pixel is
close to a shadowed area or not. Thus, initially only 5 samples are used: the pixel being computed and
the 4 corners of the kernel. If all of them are lit, or all of them are shadowed, we conclude that
no further tests must be performed and return the answer immediately. Otherwise the average
sum is computed and an interpolated value is returned as the shadow value.
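A CPU-side sketch of this early-out PCF follows; in the engine this logic lives in the fragment shader, and the shadow map access and depth comparison below are simplified assumptions:

    #include <algorithm>
    #include <vector>

    // Returns 0 (fully shadowed) .. 1 (fully lit) for one screen pixel.
    float pcfShadow(const std::vector<float>& shadowMap, int mapSize,
                    int x, int y, float fragmentDepth, int kernel /* = N */)
    {
        auto lit = [&](int sx, int sy) {
            sx = std::clamp(sx, 0, mapSize - 1);
            sy = std::clamp(sy, 0, mapSize - 1);
            return fragmentDepth <= shadowMap[sy * mapSize + sx] ? 1.0f : 0.0f;
        };

        int h = kernel / 2;
        // Early-out probe: the center pixel plus the 4 kernel corners.
        float probe = lit(x, y) + lit(x - h, y - h) + lit(x + h, y - h)
                    + lit(x - h, y + h) + lit(x + h, y + h);
        if (probe == 0.0f || probe == 5.0f)
            return probe / 5.0f;   // fully shadowed or fully lit, stop here

        // Near a shadow edge: average the full N x N kernel.
        float sum = 0.0f;
        for (int j = -h; j <= h; ++j)
            for (int i = -h; i <= h; ++i)
                sum += lit(x + i, y + j);
        return sum / float((2 * h + 1) * (2 * h + 1));
    }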

Figure 5. Shadow mapping rendering

Screen Space Ambient Occlusion


Ambient occlusion is a rendering technique used to approximate global illumination by
simulating the shadowing caused by objects that block the ambient lighting. The technique is not
a physically accurate method for global illumination but, generally speaking, it offers a great
quality improvement to rendered images.

Figure 6. Stanford Dragon rendered without Ambient Occlusion (left) and with AO (right)

Ambient occlusion techniques offer shadows with wide and smooth gradations.
In order to perform the AO test, each pixel's lighting factor is computed based on a surrounding
hemisphere. Ideally, for a point on a surface, the occlusion value should be computed by
integrating the visibility function over the hemisphere with respect to the infinitesimal
projected solid angle.
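For reference, a standard way of writing this integral (consistent with the ambient occlusion literature, e.g. [9]) is:

    A(p) = \frac{1}{\pi} \int_{\Omega} V(p, \omega) \, (N \cdot \omega) \, d\omega

where V(p, \omega) is the visibility function, equal to 1 if no geometry blocks direction \omega and 0 otherwise, and the N \cdot \omega term accounts for the projected solid angle.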

Because this is not practical to compute, a variety of techniques are used to approximate the
result. Using the Monte Carlo method, a certain number of rays are cast in the hemisphere
from the point p and then tested for intersection with the scene geometry.

Screen space ambient occlusion is a basic technique used for simulating AO in real-time
applications. The computation is done in eye space using the following buffers: depth, view
normals and view positions. Thus, the final result will depend on the viewing angle relative to
the scene position. Ambient occlusion is performed in real time for every frame.
Because ray tracing is too expensive for real-time systems, the ray directions are pre-computed
on the CPU and sent to the fragment shader at run-time. Using fixed-angle rays would affect image
quality, so slight variations are added to each cast ray by using random values which are
provided by a noise texture. The rays are more concentrated around the normal vector of the
tested point and more dispersed towards the bottom of the hemisphere.
Using a ray direction, a specific point in space inside the hemisphere is computed. If the distance
between it and the computed view-space depth (usually from a G-buffer) is less than the
tested radius, a weight value is computed based on the difference between the view normals
of the two points.
The final AO factor is computed by averaging all weighted values for each cast ray. The
resulting image is stored in a new render target because a blurring step is required for softening
the shadow effect. The AO resolution is independent of the scene rendering resolution; by using a
lower resolution than the default rendering one, a more blurred effect can be obtained.
The number of rays used for performing the AO test can be configured by the application, with
more rays resulting in better quality at the cost of performance.
The SSAO effect is added during the final composition stage of the frame.
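The CPU-side hemisphere kernel described above could be generated along these lines (a sketch; the distribution and scaling details are illustrative assumptions):

    #include <glm/glm.hpp>
    #include <random>
    #include <vector>

    std::vector<glm::vec3> generateSSAOKernel(int sampleCount)
    {
        std::mt19937 rng{std::random_device{}()};
        std::uniform_real_distribution<float> dist(0.0f, 1.0f);
        std::vector<glm::vec3> kernel;
        kernel.reserve(sampleCount);

        for (int i = 0; i < sampleCount; ++i) {
            // Random direction in the upper hemisphere (z >= 0), i.e. biased
            // around the tangent-space normal.
            glm::vec3 sample(dist(rng) * 2.0f - 1.0f,
                             dist(rng) * 2.0f - 1.0f,
                             dist(rng));
            sample = glm::normalize(sample) * dist(rng);

            // Scale so that more samples cluster close to the tested point.
            float scale = float(i) / float(sampleCount);
            scale = 0.1f + scale * scale * 0.9f;   // lerp(0.1, 1.0, scale^2)
            kernel.push_back(sample * scale);
        }
        return kernel;
    }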

Figure 7. Screen Space Ambient Occlusion

Light model
Scene lighting is performed on a per-fragment basis using the Phong model. Compared to
Gouraud shading, which computes light information for each processed vertex, the Phong model
provides a better approximation of shading for smooth surfaces, because the reflection model is
computed for each resulting pixel on the screen.
The Phong model describes the way surfaces reflect light as a combination of the diffuse reflection
of rough surfaces, the specular reflection of shiny surfaces and a uniform ambient
component.

Figure 8. Phong model [10]

For each point on a surface, illumination can be computed using the Phong model equation:

    I_p = k_a i_a + \sum_{m \in \text{lights}} \left( k_d (L_m \cdot N) \, i_{m,d} + k_s (R_m \cdot V)^{\alpha} \, i_{m,s} \right)

where:
- k_s, k_d, k_a represent the specular, diffuse and ambient reflection constants of a specific material
- \alpha is the shininess constant of the material, larger for smoother and highly reflective surfaces
- L_m is the direction from the point on the surface to each light source
- N is the normal at the point on the surface
- R_m is the direction of a perfectly reflected ray of light
- V is the direction pointing towards the viewer
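Evaluated for a single light in C++ with GLM, the same equation could be sketched as follows (in the engine it runs per fragment in the shader; all parameter names are illustrative):

    #include <glm/glm.hpp>
    #include <algorithm>
    #include <cmath>

    glm::vec3 phong(glm::vec3 N, glm::vec3 L, glm::vec3 V,
                    glm::vec3 lightColor, glm::vec3 ambient,
                    glm::vec3 kd, glm::vec3 ks, float shininess)
    {
        N = glm::normalize(N);
        L = glm::normalize(L);   // direction towards the light source
        V = glm::normalize(V);   // direction towards the viewer

        float diff = std::max(glm::dot(N, L), 0.0f);       // diffuse term (L . N)
        glm::vec3 R = glm::reflect(-L, N);                 // perfectly reflected ray
        float spec = std::pow(std::max(glm::dot(R, V), 0.0f), shininess); // (R . V)^alpha

        return ambient + (kd * diff + ks * spec) * lightColor;
    }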
3.8.6.1 Forward and deferred rendering
Forward rendering is the standard rendering technique, in which the output image is created
using a single rendering pass; texturing and lighting are thus done in the same rendering step.
In a forward rendering technique, the light equation is computed for every rasterized pixel on the
screen. If no efficient occlusion culling is performed, or scene rendering is not done in
front-to-back order so that the benefits of early-z rejection are minimal, the illumination model
will be computed multiple times for many fragments, which will greatly reduce rendering
performance.
Without scene optimizations, the cost of computing the illumination model for one light is
directly impacted by the scene complexity. Besides that, rendering complexity will further
increase with each additional light source added to the scene. In practice a forward
technique will show its limits after placing a relatively small number of light sources.
Deferred rendering is a screen-space shading technique that offers a great speed improvement
as well as support for up to thousands of light sources. In deferred rendering each light source
affects just those objects that are placed inside the light's area of effect. Moreover, rendering
performance is not affected by scene geometry complexity, because lighting is done in screen
space. That means that, for every pixel on the screen, the Phong model (diffuse and specular
lighting) is computed only if the corresponding point in space is affected by one or more light
sources. Deferred shading complexity is only affected by the rendering resolution and the
number of lights that affect each fragment.
3.8.6.2 Deferred shading
The deferred shading technique was created in order to separate shading from the
geometry rendering pass. Thus, the first pass is used for saving screen-space scene data for
later use. Multiple rendering targets are used for storing the diffuse color, world normals,
world positions and depth information, which will be used later for shading.
In a second pass, all light volumes are rendered and, for each computed fragment, the
corresponding scene point is obtained from the world-position buffer; if it is affected by a
light source, the shading value is computed. Since multiple light sources can affect the
same pixel, rendering is done with an additive (one plus one) blend equation, in order to sum all
resulting shading values for a specific pixel.
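The corresponding blend state could be set up as in this minimal sketch, using standard OpenGL calls:

    // Additive blending for the light volume pass: each light's
    // contribution is summed into the shading render target.
    glEnable(GL_BLEND);
    glBlendEquation(GL_FUNC_ADD);   // result = source + destination
    glBlendFunc(GL_ONE, GL_ONE);    // the "one plus one" equation
    glDepthMask(GL_FALSE);          // light volumes must not write depth
    // ... render the light volumes here ...
    glDisable(GL_BLEND);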

In a final step, the diffuse component is combined with the shading component as well as
the constant ambient value.

Figure 9. Normals (left) and world positions (right) used for computing lighting

Figure 10. Deferred lighting results


Chapter 4

Platform game demo


Startup
When creating a new game, a configuration file for the resource system must be provided. The
configuration file is used during the startup phase to retrieve the locations of all the manager
resource files that need to be loaded. After the resource manager is started and all resources
are loaded, the game initialization phase begins.
During the initialization phase, all the game-specific and rendering components must be
defined: the camera type, the projection used, and the framebuffers for the different rendering
techniques that will be employed.
This stage is also used to initialize the gameplay world. Additional objects can be created and
added to the scene, such as a controlled character, dynamic objects, etc.

The game loop


The gameplay loop is driven by the Update function of the world. Each frame, the game
state is updated and then the world is rendered.
During the state update, input events are processed and handled, custom events are handled,
audio events are processed, the simulation world is stepped and then all objects are updated
accordingly.
The rendering process offers both forward rendering and deferred rendering with support for
multiple light sources.
The following steps are performed in order to render the scene. Object geometry is
rendered for the shadow pass, then the normal scene is rendered and the results are stored
inside multiple rendering targets. Light volumes are processed for the shading pass and the
result is stored in another rendering texture. After that, screen space ambient occlusion is
performed. A final composition step is required for combining the results into the final image.
An optional debug rendering step for framebuffer inspection can be rendered as an overlay in
the top-right corner.
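The per-frame flow can be outlined as in the following sketch; the class, subsystem and method names are illustrative assumptions, not the engine's actual API:

    #include <memory>
    #include <vector>

    // Illustrative stand-ins for the engine's game objects and world.
    struct GameObject {
        virtual void Update(float dt) {}
        virtual ~GameObject() = default;
    };

    struct World {
        std::vector<std::unique_ptr<GameObject>> objects;

        void Update(float deltaTime) {
            // input events, custom events, audio events and the physics
            // step would be processed here
            for (auto& object : objects)
                object->Update(deltaTime);   // update each game object
        }

        void Render() {
            // shadow pass -> geometry pass (render targets) -> lighting pass
            // -> SSAO pass -> final composition -> optional debug overlay
        }
    };

    void RunGameLoop(World& world, float frameTime, bool& running) {
        while (running) {
            world.Update(frameTime);  // step the game state
            world.Render();           // draw the frame
        }
    }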


Runtime states
Like any other application, a game can be represented as a multi-state system. The runtime state
object is used to define game runtime states such as gameplay, in-game menu, debug state, etc.
By using these states, certain rules can be applied to the update and rendering of the world.
For example, when the runtime state enters the game menu state, which means that the user
wants to open the menu, the gameplay simulation and update are stopped and the user input
rules are changed in order to handle menu events.
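A minimal sketch of such a state switch (with assumed names) could look like this:

    enum class RuntimeState { Gameplay, GameMenu, Debug };

    void OnStateChanged(RuntimeState state, bool& simulationPaused)
    {
        switch (state) {
        case RuntimeState::GameMenu:
            simulationPaused = true;   // stop gameplay simulation and update
            // input handling would be rebound to menu events here
            break;
        case RuntimeState::Gameplay:
            simulationPaused = false;  // resume the simulation
            break;
        case RuntimeState::Debug:
            // apply debug-specific update and rendering rules
            break;
        }
    }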

The game world


The world design follows the typical 3D platformer model, which means a linear world spread
along the X axis. The scene is initialized from the configuration file, while dynamic elements
are created at run-time.
Multiple 3D platforms were created in 3D Studio Max. Platforms represent the basic building
block used for creating the level path. By welding together two or more platforms, unlimited
level design choices become available.
Dynamic objects like the classic collectible coins from 3D platform games can be created very
easily. We can consider them simple static objects: we can easily define and place them in the
world using the scene file, and then, at runtime, additional behaviors can be added.

Camera system
As a 3D side scrolling platformer, the camera should remain outside of the physics world almost
all the time. Level design might force the camera to move through some objects during
gameplay; for this kind of situation, effects like a semi-transparent environment can be used.
The avatar moves from left to right in the world by jumping over different platforms. Because
just a small fraction of the world is visible at a certain point, the game camera needs to follow
the avatar wherever it goes. Smooth camera transitions can be obtained by using another proxy
object for camera positioning. The proxy is set to follow the avatar while the camera implements
a smooth animation towards the updated position of the proxy object.
For this demo, a rigid implementation that moves with the player is provided in order to always
keep the avatar centered on the screen.
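A smooth-follow variant based on the proxy idea could be sketched as follows (the camera offset and smoothing factor are illustrative assumptions):

    #include <glm/glm.hpp>
    #include <cmath>

    glm::vec3 UpdateCameraPosition(const glm::vec3& cameraPos,
                                   const glm::vec3& avatarPos,
                                   float deltaTime)
    {
        // The proxy tracks the avatar exactly; the camera eases towards
        // the proxy with an exponential smoothing factor.
        glm::vec3 proxyPos = avatarPos + glm::vec3(0.0f, 2.0f, 10.0f); // offset
        float smoothing = 1.0f - std::exp(-5.0f * deltaTime);
        return glm::mix(cameraPos, proxyPos, smoothing);
    }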


Avatar and character mechanics


The avatar is the main actor in gameplay. Usually most of the actions performed by the human
player are transferred to this character. There are two main actions that must be provided in
every platformer game: moving left and right, and jumping.
The avatar is a dynamic character that also interacts with the physics world, therefore all its
actions must also have an impact in the physics world. Havok Physics provides two different
methods for creating a controllable character: the character proxy and the character rigid body,
the latter being the one used in this implementation. The character rigid body is a special type
of rigid body that provides a character state machine for the avatar's controlling scheme.
The state machine offers default implementations for basic avatar states like: on ground, in air,
jumping, climbing. The jumping mechanism is implemented using this state machine. The
physics world updates the state machine according to the avatar's position, so no extra
management is necessary.
Avatar collision with the environment is performed using a capsule shape.
By default, the avatar is free to move on both the X and Z axes, while the negative Y axis
represents world gravity, and rotations around all 3 axes are possible. Most 3D platformer games
restrict movement and rotation to just a specific subset. Restrictions are usually influenced by
level design or certain gameplay mechanics. Like most classic 3D platformers, level design is
based on a linear world created along the X axis. Using Havok's constraint system, the avatar's
movement was restricted to the X axis only, while rotations around the X and Z axes were blocked.

Gameplay mechanics
The game implements the simple and classic platformer mechanic. A certain number of light
orbs (this implementation's equivalent of the coin) must be collected in order to advance to
the next level.
As in most games, the gameplay mechanics rely heavily on the collision system. The Havok
physics engine offers the possibility to define phantom objects, or trigger volumes, that can be
used to track when two objects are colliding. For example, when the avatar enters the coin's
phantom object an event is triggered, therefore we can play a sound effect, collect the coin and
update the UI accordingly.
The physics engine also offers the ability to define key-framed objects, which are well suited for
creating movable platforms. The character state machine can be used for triggering various
events like a jumping sound or footsteps. Most gameplay behaviors usually represent just the
effect in the cause-and-effect concept, while the physics engine is the one that usually
provides the cause, or at least the methods to capture these events.
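In engine-agnostic terms, collecting an orb through a trigger volume might look like the sketch below; the callback interface is an illustrative assumption, not the actual Havok API:

    #include <functional>

    struct GameObject { bool isAvatar = false; };

    struct TriggerVolume {
        std::function<void(GameObject&)> onEnter;  // fired on overlap events
    };

    void SetupOrbTrigger(TriggerVolume& orbTrigger, int& collectedOrbs)
    {
        orbTrigger.onEnter = [&collectedOrbs](GameObject& other) {
            if (!other.isAvatar) return;  // only the avatar collects orbs
            ++collectedOrbs;              // collect the orb
            // a sound effect would be played and the UI updated here
        };
    }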


Game Menu
The game menu is constructed based on the provided configuration file. The default rendering
method can be replaced with a custom one, specific to the game. By default the menu is rendered
using an identity view matrix with a negative Z translation. Depth testing is disabled during
rendering in order to preserve the menu as a UI overlay no matter how it is transformed.
While in the menu, the gameplay is frozen until it is resumed.
The first menu page contains the usual resume and exit game buttons, as well as two additional
links to the Options and Debug pages. The Options page offers the possibility to toggle a few
rendering settings on and off.

Figure 11. Gameplay and menu rendering


Chapter 5

Results
Overview
Great game engines are very complex and well written. In order to obtain great efficiency and
performance, a lot of optimization techniques are used.
This engine was designed around a modular system that offers great customization
possibilities. The end result is an engine capable of rendering a scene using different visual
techniques like deferred lighting, real-time shadows and screen space ambient occlusion, while
also maintaining good performance. The integrated physics engine provides a simulation world
for all game aspects like collisions, trigger volumes and other custom physics solvers. Game
objects are highly customizable through the usage of components, thus allowing rapid
prototyping and fast behavior changes at run-time. The resource system offers a great
way to define, load and populate the game scene. The scene environment can be provided
using configuration files, without the need to recompile the code each time.
The architecture design offers the possibility to decouple many subsystems and treat them as
separate entities. In order to solve the communication problem, a global messaging system
was added. Because events are handled synchronously, many problems can occur that can crash
the application. For example, if an event deletes an object that is part of an update loop, the
application might crash because the null object might not be handled or the loop is invalidated.
Because of that, removing or adding new objects is done at the end of a frame.
The engine was not meant to provide everything, but instead to offer an overview of the design
decisions and implementation solutions for the required modules.

Memory performance
The engine tries to reuse memory as much as it can. Resources are loaded only once and
shared between multiple actors. Mesh files as well as physics world shapes are shared between
instances of the same object. Moreover, the component model allows sharing of different
components between objects that have the same behavior.
Example: all 1000 barrels rendered on the screen use the same Vertex Array Object
from the video card memory, as well as the same material component and texture data.
In order to keep track of the resources, unique identifiers are used, which allow fast retrieval
of resource data. For easy sharing of resources, a cloning method is provided for the
GameObject class. Cloned objects will try to share as much data as possible, like the behavior
object, audio component and text component, as well as the simulation shape or geometry mesh.
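A minimal sketch of this sharing-on-clone idea (with assumed names) is:

    #include <memory>

    struct Mesh {};
    struct Material {};

    struct GameObject {
        std::shared_ptr<Mesh> mesh;          // shared geometry data
        std::shared_ptr<Material> material;  // shared material component

        GameObject Clone() const {
            GameObject copy;
            copy.mesh = mesh;          // share the mesh, don't duplicate it
            copy.material = material;  // share the material component
            return copy;
        }
    };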

Run-time performance
Run-time performance can be affected by the complexity of the scene, the computational cost on
the CPU side, the number of draw calls made to the GPU, as well as the complexity of
post-processing features.
Fetching texels from a texture is usually really expensive and will drastically limit
rendering performance when many texture samples are performed. For example, the blur filter
used for screen space ambient occlusion or the PCF method used for softening shadow edges
use a kernel of a specified size. A kernel of size 4 x 4 means 16 sampled regions around a pixel.
If this computation must be performed for every single pixel on the screen, the cost of rendering
jumps dramatically. Larger kernels come with a quadratic increase in cost, which is certainly
not preferred.
The number of lights in the scene can also have an impact on performance, but only if
hundreds of them are rendered at a time or they overlap. Usually a deferred technique such as
the one implemented here will provide enough performance for lighting even complex scenes
where a great number of light sources are used.


Chapter 6

Conclusions and future work


Even though the game engine is far from being a complete solution and provides very few
features compared to state of the art engines, the modular design it was built around offers
a lot of possibilities for future upgrades. Subsystems can be rewritten without affecting the
overall functionality, and new components can be added at any time without breaking the
application logic.
For future work the engine can be extended to provide a real-time scene editor. An input
component for positioning, rotating and scaling a selected object is already provided and can be
used from the keyboard. A good solution should involve using the mouse for selection and
manipulation, as well as the possibility to affect a group of objects at the same time. Of course,
the game editor would not make sense without the possibility to export scene data to a file.
Other significant future work will involve building a particle system, an animation system that
integrates well with the physics simulation, adding new texturing features like normal mapping
and parallax occlusion mapping, and improving scene management and rendering performance
by using a culling mechanism and instanced rendering.
The engine can be used as a great starting point for fast prototyping of new ideas such as game
mechanics, game components, rendering techniques, as well as new engine modules.


Bibliography
[1] OpenGL Insights, Taylor & Francis Group, LLC, 2012
[2] Advances in Real-Time Rendering, http://advances.realtimerendering.com/s2014, Siggraph 2014, 05.09.2014
[3] Jorge Jimenez, Next Generation Post Processing in Call of Duty: Advanced Warfare, http://www.iryoku.com/next-generation-post-processing-in-call-of-duty-advanced-warfare, Siggraph 2014, 30.08.2014
[4] OpenGL Transformation, http://www.songho.ca/opengl/gl_transform.html, 2013, 02.07.2014
[5] Jason Gregory, Game Engine Architecture, A K Peters/CRC Press, 1st edition, 2009
[6] Wolfgang Engel, GPU Pro 3: Advanced Rendering Techniques, A K Peters/CRC Press, 1st edition, 2012
[7] Fabio Pellacini, Shadow Map Antialiasing, GPU Gems 3, 2007
[8] Fabien Sanglard, Shadow Mapping, http://fabiensanglard.net/shadowmapping, 2009, 14.05.2014
[9] Jared Hoberock and Yuntao Jia, High-Quality Ambient Occlusion, GPU Gems 3, 2007
[10] Phong Reflection Model, http://en.wikipedia.org/wiki/Phong_reflection_model, 2014, 07.08.2014
[11] Rendering Pipeline Overview, http://www.opengl.org/wiki/Rendering_Pipeline_Overview, 2012, 02.09.2014
[12] Havok User Guide, Telekinesys Research Ltd., 2013


Appendix A

Figure 12. Engine Objects and Components


Appendix B

Figure 13. Engine Core Modules

Figure 14. Identifiers

