
Course project Procedural Content Generation ITU 2010

Procedural Content Generation


a course at the IT University of Copenhagen

Project:

Sempiternus:
Puzzle-game Level Generation

Teacher:
Julian Togelius

By:

Stefan Holm Madsen (shma@itu.dk)


Prakash Prasad (prpr@itu.dk)
Procedural Content Generation (Julian Togelius) Fall 2010 IT University of Denmark
Stefan Holm Madsen (shma) Prakash Prasad (prpr)

Abstract
The objective of this project is to identify viable strategies for generating levels for a tile-based puzzle game. While postulating such a level-generation methodology, we examine the general process of converting game mechanics into construction mechanics.

The Game
The experiment subject we have chosen is a game produced during the Nordic Game Jam 2010 - Monkey of
Puppets. The game challenges players to achieve the level objectives while avoiding certain obstacles. The
game level is represented as an indexed, tile-based system where each tile is either passable or impassable,
i.e. the entities in the game can traverse the passable tiles only. Apart from the player avatar, there are two
non-offensive entities (objectives) in the game - a key and a door. Every level has exactly one of each. The
challenge in a level is provided by the enemy avatars, which work on a line-of-sight logic: they will chase the
player’s avatar to the point where it was last seen. Once they lose sight of the player, they wander back to
their starting position.

The existing levels that the game comes packaged with usually have linear solvability - i.e. the player tackles
each obstacle one at a time before moving on to the next. All the levels included in the game were created in
advance by a level designer. Using either random or constructive Procedural Content Generation (PCG)
methodology, we will try to create challenging and interesting levels.

Problem Space
As per the description of the game, we can draw out a high-level list of entities to be placed in a procedurally
generated level for it to satisfy the requirements of a playable and interesting level:
● World Map
● Player start position
● Position of intermediate level objective - the Key
● Position of final level objective - the Door
● Location of one enemy

In order to attain this target, we’ll have to keep certain considerations in mind:
● Connectivity - While the world map is being generated, it must be ensured that each of the level
objectives is reachable from the player’s starting position.
● Distance to Objective - The player experience concerning level objectives can be predicted to a
certain extent. It would be uninteresting to place objectives right next to the player’s start location.
A good rule of thumb is to place the objectives at an equal distance from each other; this gives the
level a much more focused and predictable flow.
● Stochastic vs. Deterministic - The level generation is based on a deterministic approach, even
though the levels appear random in nature - each map can be regenerated at any time given a specific
ID, represented by an integer. Determinism is also a requirement for computing good test results when
evaluating levels, as a purely random approach would make it impossible to consistently replicate the
already approved levels.
● Random vs. Planned - The method for placing the different components of the level can be a simple
random generation process. This process actually has a lot of merit that might not be clear at first
glance: a random search space forces the exploration of level-construction rules that are unfamiliar to
us, which can result in interesting game levels. On the other hand, random levels have a higher failure
rate and a larger simulation space - i.e. an evaluation method (such as a simulation-based evaluation)
also needs to be able to understand the level’s complexity to successfully evaluate the generated level.
● Game Mechanics - Certain features of the level to be generated depend on the rules of the game
and the mechanics available to the player. In the game under consideration, the player can exploit
certain locations on the map to hide from the enemy. A constructive method that deliberately creates
such locations would make the level non-trivial and enhance the quality of the generated levels.

Agent Based Procedural Generation


In this section we explore some possible methods that can be employed to generate levels for the puzzle
game. These methods are organized in an agent-based approach where each agent takes control of a
certain task and operates on the data it has received from the previous agent.

Agent 1 - Level Construction (Turtle)


This is the first agent used for generating each level. The tile map representing the level is an indexed matrix
that can have either a passable tile-type or a non-passable tile-type. Some of the possible strategies that can
be used for this purpose are:

● Random Tile Picker - One way to create the level layout is to randomly choose each tile from the
possible data set, with every tile type having an equal probability of being selected. This method
generates highly unpredictable patterns (Figure 1). However, we cannot predict any of the characteristic
features of such a pattern, such as connectivity, so many generated maps may not be up to the
standards required for the game. Hence, maps generated through this process need to be post-processed
with a convexity check, to hide tiles that can never be reached, and a connectivity algorithm, to connect
the major parts of the map. A major advantage of such an agent, however, is the high fidelity of the
shapes generated in the level, which can lead to several level designs that a more constructive method
would never have considered.

● Probabilistic Tile Picker - This method resembles the previous agent, except that each tile’s
“passability” probability dictates whether that tile becomes passable or impassable. Once the tile type
for an index has been decided, the probabilities of the tiles in its vicinity are updated: tiles around a
passable tile have a higher probability of being passable. This can reduce the occurrence of concave or
discontinuous portions of the map, but does not guarantee their elimination. Hence, as with the
previous approach, additional algorithms are needed to ensure the generation of a playable map.

Figure 1 - Random Map Generator
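The probabilistic tile picker described above can be sketched as follows. This is an illustrative sketch only: the grid size, base probability and neighbour boost are assumed values, not taken from the game.

```python
import random

def probabilistic_map(seed, width=20, height=15, base_p=0.5, boost=0.15):
    """Sketch of the probabilistic tile picker: each tile is drawn with a
    'passability' probability, and a passable outcome raises the probability
    of its not-yet-decided neighbours (base_p and boost are assumed values)."""
    rng = random.Random(seed)                      # deterministic per seed
    prob = [[base_p] * width for _ in range(height)]
    grid = [["#"] * width for _ in range(height)]  # '#' impassable, '.' passable
    for y in range(height):
        for x in range(width):
            if rng.random() < prob[y][x]:
                grid[y][x] = "."
                # passable tiles make undecided neighbours more likely passable
                for nx, ny in ((x + 1, y), (x, y + 1)):
                    if nx < width and ny < height:
                        prob[ny][nx] = min(1.0, prob[ny][nx] + boost)
    return grid
```

Because the generator is seeded, the same seed always reproduces the same map, matching the deterministic requirement stated earlier.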


● Constructive Agent - While the methods discussed so far decide the tile type for each index
individually, another way to tackle the problem is to formulate constructive paradigms that we want the
level to contain. This process is comparable to rogue-like level generators such as the Dungeon-Building
Algorithm described by Mike Anderson. The advantage of such an approach is that the generated levels
are more likely to contain desired features, such as certain archetypes (rooms, corridors), while
connectivity between all parts of the map is ensured. However, such generation methods lose the
capability of generating levels with high fidelity. In this particular case, we use a turtle-like level
generator that moves around in an impassable level and leaves a passable trail behind it (Figure 2). The
turtle can generate one of two archetypes - a room or a corridor.

Figure 2 - Map generation by a turtle agent

Constructive Turtle Agent


initialize the random number generator with the seed number
select N = number of level archetypes to be generated, random between 10 and 20
select start position of turtle
loop up to N times:
    with 10% chance, construct a room:
        draw a room of size (6, 8)
        update position of turtle to end of room
    with 90% chance, construct a corridor:
        choose a new direction (North, South, East, West)
        choose a corridor size
        move turtle to new position at the end of the corridor
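The turtle pseudocode above can be made concrete with a minimal, runnable sketch. The grid size, corridor length range and centred start position are assumptions for illustration; only the archetype probabilities (10% room of size 6×8, 90% corridor) and the archetype count (10 to 20) come from the pseudocode.

```python
import random

PASSABLE, IMPASSABLE = ".", "#"

def turtle_generate(seed, width=40, height=30):
    """Carve a level by walking a turtle that leaves a passable trail."""
    rng = random.Random(seed)                     # deterministic per seed
    grid = [[IMPASSABLE] * width for _ in range(height)]
    x, y = width // 2, height // 2                # assumed: turtle starts centred
    n_archetypes = rng.randint(10, 20)

    for _ in range(n_archetypes):
        if rng.random() < 0.10:                   # 10%: draw a 6x8 room
            for dy in range(6):
                for dx in range(8):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < height and 0 <= nx < width:
                        grid[ny][nx] = PASSABLE
            x = min(x + 7, width - 1)             # move turtle to end of room
        else:                                     # 90%: carve a corridor
            dx, dy = rng.choice([(0, -1), (0, 1), (1, 0), (-1, 0)])
            for _ in range(rng.randint(3, 10)):   # assumed corridor length
                nx, ny = x + dx, y + dy
                if not (0 <= nx < width and 0 <= ny < height):
                    break                         # stop at the map boundary
                x, y = nx, ny
                grid[y][x] = PASSABLE
    return grid
```

Since every archetype extends the trail from the turtle's current position, all carved tiles are connected by construction, which is the connectivity guarantee claimed for this agent.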

Agent 2 - Object placement (Navigation Map)


Once the level map has been generated by the turtle agent, we can move on to placing entities on it. Keeping in
mind that we want to place the objectives at equal distances from each other, we use a weighted graph to
represent the cost of movement between tiles.

Non-offensive Entity Placement


pick start point of path
get longest distance from start, P1
get second longest distance from start, P2
find the location on map equidistant from P1 and P2
i.e. distance is P3 = ( P2 - P1 ) / 2

Since these three points are equidistant (or approximately equidistant) from each other, they can serve as the
placement locations for the game entities. In the game under consideration, the objective is for the player to
get the key and then proceed to the door. This means we can place the entities either as a sequence of
progressing objectives or as branches to be completed individually.
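The placement idea above can be sketched with breadth-first distances. This is a simplification: the report uses a weighted graph of movement costs, while the sketch below assumes unit cost per step, and places the door at the farthest reachable tile and the key roughly midway between start and door.

```python
from collections import deque

def bfs_distances(grid, start):
    """Breadth-first distances (in steps) from `start` over passable '.' tiles."""
    dist = {start: 0}
    queue = deque([start])
    while queue:
        x, y = queue.popleft()
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (0 <= ny < len(grid) and 0 <= nx < len(grid[0])
                    and grid[ny][nx] == "." and (nx, ny) not in dist):
                dist[(nx, ny)] = dist[(x, y)] + 1
                queue.append((nx, ny))
    return dist

def place_objectives(grid, start):
    """Door at the farthest reachable tile; key at the tile whose distances to
    start and door are most nearly equal (the equal-distance rule above)."""
    d_start = bfs_distances(grid, start)
    door = max(d_start, key=d_start.get)          # farthest point from start
    d_door = bfs_distances(grid, door)
    candidates = (set(d_start) & set(d_door)) - {start, door}
    key = min(candidates, key=lambda t: abs(d_start[t] - d_door[t]))
    return key, door
```

On a straight five-tile corridor, for example, this puts the door at the far end and the key in the middle tile.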

As we’ll describe later on, the level generation process places an enemy that the player needs to navigate
around. If we arrange the entities in a branching layout, the player has to pass the same enemy twice on one
branch while encountering no enemies on the other. With the alternative method (progressive objectives), the
player only has to get past the enemy once.

Agent 3 - Enemy placement and avoidance


This agent is an evolutionary algorithm that evaluates the “fitness” of an enemy location.

Placing enemy
The position of the enemy is found through a random number generator. Since the available map positions
are finite, there is a slight chance that the enemy is placed at a position that has already been used in a
previous map-testing loop. The enemy positions must therefore be tracked so that a “loop” does not occur
and an already evaluated position is not tested again.
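The bookkeeping just described can be sketched with a set of already-tried positions; the function name and signature are illustrative, not from the project's codebase.

```python
import random

def next_enemy_position(grid, tried, rng):
    """Pick an untested passable tile for the enemy, or None when the finite
    search space is exhausted, so no position is ever evaluated twice."""
    candidates = [(x, y)
                  for y, row in enumerate(grid)
                  for x, tile in enumerate(row)
                  if tile == "." and (x, y) not in tried]
    if not candidates:
        return None                  # every position has been evaluated
    pos = rng.choice(candidates)
    tried.add(pos)                   # remember, so this position is not retried
    return pos
```

Returning `None` gives the outer evaluation loop a clean termination condition instead of the "loop" the text warns about.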

Hiding place generation


This method is built around the fact that the bot/player can “hide” from the enemy’s field of view; in other
words, the method attempts to generate hiding positions that will allow the bot to escape its upcoming demise.

Figure 3 - Bot tracks its point of death

Using the path calculated by the navigation system, the bot moves towards its points of interest (objectives).
The illustration above (Figure 3) shows what happens when the bot meets an enemy just around a corner: the
bot tries to escape, but since there are no immediate hiding positions it can run to, it is killed by the enemy.

Armed with the information about the location of the conflict, we can attempt to generate a hiding position for
our bot. The enemy’s “line of sight” seeking logic will always follow a player/bot to the point where it was
spotted. On reaching that spot, the enemy rescans the area within its field of view for the player/bot. Only
then will the enemy lose interest and return to its point of origin. This suggests a hiding-position approach
that generates hiding positions just outside the enemy’s field of view, very near the bot’s point of death
(Figure 4).

Figure 4 - Generation of hiding locations

The approach considers both the inbound and outbound paths viable options for generating a hiding
location, because the method has no knowledge of whether the bot might return through this specific path at
another point in time. It simply adds a certain number of hiding locations in an attempt to “fix” the level’s
playability.
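The map-fixing step can be sketched as carving an alcove next to the point of death, provided it lies outside the enemy's sight. The representation of the field of view as a precomputed set of visible tiles is an assumption for illustration.

```python
def add_hiding_spot(grid, death_tile, enemy_fov):
    """Carve one impassable tile adjacent to the point of death into a hiding
    alcove, provided it is outside the enemy's field of view.
    `enemy_fov` is assumed to be a precomputed set of visible (x, y) tiles."""
    x, y = death_tile
    for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
        in_bounds = 0 <= ny < len(grid) and 0 <= nx < len(grid[0])
        if in_bounds and grid[ny][nx] == "#" and (nx, ny) not in enemy_fov:
            grid[ny][nx] = "."        # open an alcove the enemy cannot see
            return (nx, ny)
    return None                       # no viable alcove next to this tile
```

Because the carved tile is outside the field of view, an enemy rescanning from the spotting point will not see the bot, matching the seeking logic described above.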

Evaluating Generated Levels


Part of the objective of generating a level is to somehow evaluate the level’s performance. This quantification
of the “quality of the level” is essential for categorizing a level as acceptable or not. Some features of the
generated map that can feed into this fitness value are:
● Can the level be solved?
○ Can the player traverse the level from the starting position to the key, and from there to the
door?
● Is the level interesting for a player?
○ What is the amount of time that it takes for the player to finish the level?
○ Is it “hard” to complete?

Evaluation of a map
Because of the exhaustive-search nature of this PCG process, it is extremely tedious for a human player
to evaluate the generated levels. Hence, an AI bot was made that can be simulated through a given level,
adhering to the rules and mechanics available to a human player. This bot performs several tasks:
● Provides feedback on whether or not a map is solvable
○ If the level is unsolvable, it is simply discarded.
● If the level is solvable, several additional metrics are evaluated:
○ How many times is the bot killed by the enemy in this specific map seed?
○ How many times has the bot performed an exhaustive navigation search, in the sense that it
could not circumnavigate the enemy?
○ How many “steps” has the bot taken in its current mission?
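The metrics listed above can be collected in a simple record and turned into an accept/reject decision. The structure and the step thresholds below are assumptions for illustration; only the death limit of 10 is stated later in this report.

```python
from dataclasses import dataclass

@dataclass
class LevelStats:
    """Metrics gathered by the simulation bot for one map seed."""
    solvable: bool           # could the bot reach key and door at all?
    deaths: int              # times the enemy killed the bot
    exhausted_searches: int  # failed attempts to circumnavigate the enemy
    steps: int               # steps from start to the final objective

def acceptable(stats, min_steps=40, max_steps=400, max_deaths=10):
    """Classify a level: unsolvable maps are discarded outright, as are maps
    that are too 'small', too 'big', or that kill the bot too often
    (min_steps/max_steps are assumed cutoffs)."""
    if not stats.solvable:
        return False
    if stats.deaths > max_deaths:          # 'tagged for destruction'
        return False
    return min_steps <= stats.steps <= max_steps
```

Keeping the metrics in one record makes it easy to extend the fitness function later, e.g. to penalize excessive hiding time.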

A special task assigned to the bot is to re-evaluate the level in certain cases and attempt to manipulate the
map so as to make it solvable. This includes allowing the bot to directly affect the map layout and re-evaluate
the modified map. Using the bot, a test system for evaluating a level’s performance can be assembled through
the following procedure (Figure 5).

Figure 5 - Map evaluation process overview

Heuristics provided by the Bot


The evaluation mechanics are based on a simple logic concerning the bot’s interaction with the game world.
As shown in the illustration below (Figure 6), the bot simply moves towards its points of interest, but goes into
hiding if an enemy spots it. Once it is safe (or the hiding timer expires), the bot resumes its original course.
Once the bot has reached its current point of interest, it is given the next point of interest in the level (if
present).

Figure 6 - Bot behavior

Level completeness
It is only logical that if we can create a path from the player’s starting position to its points of interest - in
this case the key and the door - while completely ignoring the enemy for now, then the level can be solved. If
the level was generated using the turtle agent, the methodology has already ensured that this heuristic is
always satisfied.
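The completeness heuristic is a plain reachability check, which can be sketched with a breadth-first flood fill over the passable tiles (a minimal sketch; the project may use its weighted navigation graph instead):

```python
from collections import deque

def is_complete(grid, start, key, door):
    """Level-completeness heuristic: the level passes if the key and the door
    are both reachable from the start, ignoring the enemy entirely."""
    seen, queue = {start}, deque([start])
    while queue:
        x, y = queue.popleft()
        for n in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            nx, ny = n
            if (0 <= ny < len(grid) and 0 <= nx < len(grid[0])
                    and grid[ny][nx] == "." and n not in seen):
                seen.add(n)           # flood fill over passable tiles
                queue.append(n)
    return key in seen and door in seen
```

A corridor blocked by an impassable tile between start and door fails the check, while a fully open corridor passes it.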

Level solvable statistics


The above heuristic does not guarantee that the level can be solved: once the enemy is placed by the agent,
the level may become unsolvable. These cases can then be evaluated by the bot, as described above, and the
information can be used to assess a level from different aspects, e.g. attempting to derive a certain level of
difficulty.

If the statistics show that the bot simply cannot solve the level given the enemy’s position, it is
straightforward to conclude that this specific configuration makes the level unsolvable - despite the map
itself having good completeness.

Another important aspect is how “big” the map is for the player. This is also recorded by the bot as the
number of steps it took from the start to the final objective, including how much “time” was spent hiding from
the enemy. This can be used to classify a level as too “small” or too “big” if the step count is too low or too
high, respectively. Moreover, if the bot spends unreasonably large amounts of time hiding from the enemy,
the level may be unsuitable for selection.

Large Number of Deaths


If the enemy kills the bot, the accumulated death count is stored. If the count exceeds 10, something is
wrong with the map and it is tagged for destruction. The state machine (not illustrated) also has an option to
move the enemy’s position around the map before marking the map as dead.

Run-Time Failure
In the case that an unexplained run-time failure is observed, we flag the map as erroneous and simply
discard it.

Exhausted
In the special case that the bot is unable to get around the enemy - experience has shown that the bot may
skitter back and forth at a corner with an enemy near - the parameters of the game, combined with the
generated level, lead to a stalemate. If this occurrence is not interrupted, it results in an infinite loop, so the
evaluation needs to be reset to force the simulation process forward.
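A simple guard against this stalemate is a step budget on the simulation, sketched below; the budget value and the callback-style interface are assumptions, not the project's actual implementation.

```python
def run_with_step_budget(step_fn, reached_goal, max_steps=1000):
    """Advance the bot one step at a time and abort the evaluation once a
    step budget is exceeded, instead of looping forever on a stalemate."""
    for step in range(max_steps):
        if reached_goal():
            return step            # finished within budget
        step_fn()                  # advance the simulation by one bot step
    return None                    # stalemate: reset/discard this evaluation
```

A `None` result signals the evaluation loop to reset, exactly the interruption the text calls for.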

Conclusion
In conclusion, let us analyze the process described in this paper according to the generalization proposed by
Togelius et al. for search-based PCG:

Online vs. Offline - All constituent agents of the process described above can run online. Even the
generate-and-test agent can produce results in real time for a playable level.
Necessary vs. Optional - Since the level data is needed to play the game, it is categorized as
necessary content.
Random Seed vs. Parameter-based - The PCG process is based on random number generators
seeded by a particular number. The agent that places the non-offensive components on the map is the
only parameterized agent, i.e. it looks for the two farthest points from a pre-stated starting point.
Stochastic vs. Deterministic - The PCG methodology is deterministic. Since we are using a search-based
PCG technique, the content needs to be re-creatable in order for the evolutionary algorithm to be able
to retrace its way back to a candidate.
Constructive vs. Generate-and-test - The first two agents, the turtle and the asset-placement agent,
are constructive. The enemy-placement agent, however, revolves around interaction with the bot, and
the bot’s feedback may have further effects on the map that require retesting to ensure completeness -
ergo, agent 3 uses the generate-and-test paradigm.

The best approach for procedurally generating levels in this project was discussed back and forth, along with
how it would affect the different tasks planned to be solved by the multi-agent paradigm. Having multiple
agents perform different tasks is a great way to manage the sometimes complicated context of PCG; it can be
compared to having a bullet-point breakdown of the various tasks to be done before the result is presentable.

Using the turtle approach in combination with a room-growing function did sometimes turn out pretty
interesting levels, which was the initial goal of this project. However, even though interesting levels can be
generated, the generator has no knowledge of whether the generated map is actually playable. Adding in the
enemy mechanic showed on many occasions that the level design was not really fit for playing, and thus a lot
of maps are discarded on this basis alone.

This could perhaps have been solved by incorporating some knowledge of the enemy mechanics into the
initial map-building step, or by allowing a much more drastic approach by the third agent (which places the
enemy) to make sure the enemy ends up in a position that is “annoying” but manageable to circumnavigate.

References
J. Togelius, G. N. Yannakakis, K. O. Stanley, and C. Browne, “Search based procedural content generation,” in
Proc. of the European Conference on Applications of Evolutionary Computation (EvoApplications), vol. 6024.
Springer LNCS, 2010, pp. 141–150.

J. Togelius and J. Schmidhuber, “An experiment in automatic game design,” in Proc. of the IEEE Symposium on
Computational Intelligence and Games, 2008, pp. 111–118.

M. Anderson, “Dungeon-Building Algorithm,” available online at
http://roguebasin.roguelikedevelopment.org/index.php?title=Dungeon-Building_Algorithm, URL visited on
2010-12-08.

P. Prasad, M. Hvidtfeldt, A. Loza, H. H. Hvoslef, “Monkey of Puppets,” Global Game Jam, available online at
http://www.globalgamejam.org/2010/monkey-puppets, URL visited on 2010-12-08.

The turtle-inspired algorithm is based on knowledge of the graphics-based turtle algorithm as well as the
L-system algorithm: http://en.wikipedia.org/wiki/Turtle_graphics and
http://en.wikipedia.org/wiki/Lindenmayer_system
