
Ravensbourne College of Media and Communication

BA Animation (Hons)

THROUGH A GLASS DIGITALLY


The Race to the Holy Grail of Artificial Intelligence – Conscious AI.
Who Wins? Games Industry vs. Military.

JB Webb-Benjamin

2010
Contents

List of Illustrations
Acknowledgements
Introduction
1: The Games Industry: Pioneers of the Emotive NPC
    What is Artificial Intelligence?
    The Battle of Strong vs. Weak.
    Artificial Intelligence in Games
2: The Military Industry: Pioneers of the Terminator
    Genetic Programming
    Neural Networks
3: I Think Therefore I Am: Machine Ethical Standards
4: Conclusion: The Real vs. The Virtual
Appendix A: Notes
Appendix B: Research Papers
Appendix C: Survey Results
Appendix D: Video Demonstrations
    Emotiv Brain/Computer Control Interface Demo
    Emotiv MSNBC Interview
    Emotiv EPOC IGN Interview
    AI Implant Massive Combat Demo
    AI Implant Civilian Panic Demo
    Kynapse AI Demo
Bibliography
Webography

List of Illustrations

Figure 1: Pong (1975) - http://coin-op.tv/wp-content/uploads/2007/11/pong.jpg
Figure 2: Alan Turing (1912-1954) - http://www.sdtimes.com
Figure 3: Marvin Minsky (b. 1927) - http://www.acceleratingfuture.com
Figure 4: Prof. K. Warwick (b. 1954) - http://www.cs4fn.org/alife/images/warwick.jpg
Figure 5: John Searle (b. 1932) - http://wvcphilosophyclub.appspot.com/images/searle2.jpg
Figure 6: Simon & Newell - http://www.education.mcgill.ca/profs
Figure 7: 5th Generation Computer - http://library.thinkquest.org
Figure 8: Genetic Program Example - http://www.cs.northwestern.edu
Figure 9: Apache Longbow - http://www.boeing.com/rotorcraft/military/ah64d
Figure 10: Comanche - http://www.hufos.net/images/rah-66_commanche.jpg
Figure 11: Neural Network Example - http://www.ibm.com/developerworks
Figure 12: Poster for Institute for Ethics & Emerging Technologies - http://ieet.org
Figure 13: Online Survey (Q3) - http://www.surveymonkey.com/s/DHV6MLD
Figure 14: Online Survey (Q4) - http://www.surveymonkey.com/s/DHV6MLD

Acknowledgements

• Lionhead Studios
• Splash Damage
• Rare
• Microsoft Games Studios – Europe
• Professor Kevin Warwick
• Professor Noel Sharkey
• Professor Liz Mcquiston
• Professor Jeremy Barr
• Lockheed Martin Advanced Research Laboratories

Introduction

Artificial Intelligence is fast becoming an ever-present part of modern life, with examples to be seen in almost everything, from microwaves and dishwashers to computers and consoles. The games industry and the military are the two biggest researchers of AI, so who will be the first to develop fully conscious and emotive intelligent agents? On the one hand, the games industry is leading the way in designing and implementing instances of AI intended to be as socially conscious and synergistic with man as possible; on the other, the military, particularly the US Military, is designing and implementing AI devoid of any form of conscience and equipped with the ability to hunt humans – becoming man’s one and only true predator. The main point of cohesion between the two industries is the development of truly adaptive artificial intelligence: machine intelligence that can think free from human physical constraints, and that will learn from others and its environment and act accordingly. This is more or less the cornerstone requirement for eventual success, whether it is the games industry or the military that develops the world’s first conscious machine. The only question is: are we really ready to introduce the world to a new form of intelligence far superior to our own, given the problems we already have amongst ourselves as a species?

I have used a number of methods to research this exciting area of modern science, ranging from online polls to one-on-one time with heads of industry. I sent countless emails to many of the developers of AI-integrated weapons systems for the US Military, including DARPA, JPL (Jet Propulsion Laboratories), Lockheed Martin Advanced Technology Laboratories and Northrop Grumman, to name but a few. Needless to say, I received only a single response: from Samantha Kupersmith at Lockheed Martin Advanced Technology Laboratories. I also noticed that they all managed to find a way to disable the ‘received’ and ‘read’ receipts that my emails automatically send back to me as confirmation whenever a recipient has received and read one of my emails. My Cisco firewall also tracked a number of excessive ping-backs coming from US servers over a continuous one-month period. I am unsure whether this kind of behaviour is automatic, to trace anyone asking questions, or whether they simply didn’t like me asking questions about their artificial intelligence defence projects. I also noticed that certain documentary sources went offline once I had accessed them; for example, when I managed to find a reliable source for the CIA’s now infamous ‘KUBARK Counterintelligence Interrogation Document’, the server was shut down within one to two hours. This kind of action indicates one of two things: either my internet usage is being autonomously tracked, or I am being monitored. Either way, I found it very interesting to say the least.

I chose this particular subject because I have always enjoyed game-playing; the problem is that I have often found signs of intelligence lacking from the games themselves. That is why I am determined to fulfil my dream of creating truly adaptive, emotive artificial intelligence.

1: The Games Industry: Pioneers of the Emotive NPC

Ever since the first computer games were created there has been a need to create more and more intelligent non-player characters, otherwise commonly known as NPCs. The key to the creation of realistic NPCs is artificial intelligence.

In 1952, A.S. Douglas completed his PhD at the University of Cambridge on human-computer interaction. Douglas created the first graphical computer game - a version of Tic-Tac-Toe. The game was programmed on an EDSAC vacuum-tube computer, which had a cathode ray tube display. William Higinbotham created the world’s first video game in 1958. His game, called ‘Tennis for Two’, was created and played on a Brookhaven National Laboratory oscilloscope. In 1962, Steve Russell invented ‘Spacewar!’, the first game intended for computer use. Russell used an MIT PDP-1 mainframe computer to design his game. In 1967, Ralph Baer wrote the first video game played on a television set, a game called ‘Chase’. Baer was then part of Sanders Associates, a military electronics firm, and had first conceived of his idea in 1951 while working for Loral, a television company. In 1971, Nolan Bushnell, together with Ted Dabney, created the first arcade game. It was called ‘Computer Space’ and was based on Steve Russell’s earlier ‘Spacewar!’. The arcade game ‘Pong’ was created by Nolan Bushnell (with help from Al Alcorn) a year later, in 1972. Bushnell and Dabney started Atari Computers that same year, and in 1975 Atari re-released ‘Pong’ as a home video game (wikiSearch_A, December 2009).

Figure 1: Pong (1975)

Even during these early and humble beginnings it was obvious that the general aim of computer gameplay was to play against a seemingly intelligent opponent: the computer. By and large, however, seemingly intelligent behaviour has been achieved purely through the use of pre-coded reactions, otherwise known as behaviours. To gain a better understanding of behaviours, and of the difference between real intelligence and meaningless, pre-scripted player-action/computer-reaction events, we first need a better understanding of what artificial intelligence actually is.

What is Artificial Intelligence?

Artificial Intelligence (AI) is the science of creating intelligence within machines and electronic avatars. Starting with Alan Turing, artificial intelligence research has been a driving force behind much of computer science for over fifty years. Turing wanted to build a machine that was as intelligent as a human being, since it was possible to build imitations of "any small part of a man." He suggested that instead of producing accurate electrical models of nerves, a general-purpose computer could be used, and the nervous system modelled as a computational system. He suggested that "television cameras, microphones, loudspeakers," (also known as external inputs) etc., could be used to model the rest of the humanoid body (Henry Brighton et al., Introducing Artificial Intelligence).

Figure 2: Alan Turing (1912-1954)

"This would be a tremendous undertaking of course," he acknowledged (Henry Brighton et al., Introducing Artificial Intelligence).

Even so, Turing noted that the machine so constructed "…would still have no contact with food, sex, sport and many other things of interest to the human being." The problem with this analogy, naturally, is that Turing presumes that any intelligent machine created would be a natural slave to the human race, and would therefore require none of the things that humans regard as 'pleasures' (Henry Brighton et al., Introducing Artificial Intelligence). I disagree with this line of reasoning: although machines are intended, ostensibly, to ease human workloads, we must not fall into the trap of considering intelligent, and therefore reasoning, machines as slaves and chattels; otherwise we are no better than those British and American forefathers who considered the enslavement of the African nations a prerequisite for easing their own lives.

Turing concluded that in the 1940s the technical challenges of building such a robot or system were
too great and that the best domains in which to explore the mechanisation of thought were various
games and cryptanalysis, "in that they require little contact with the outside world." He explicitly
worried about ever teaching a computer a natural human language as it "seems however to depend
rather too much on sense organs and locomotion to be feasible" (wikiSearch_B, November 2009).
This assumption is, of course, incorrect, as the learning of any language depends mainly on the learning and repetition of certain constant rules, for example symbol recognition (the recognition of a written character from an alphabet lexicon) and symbol conceptuality (the attachment of a word or series of characters to an object or concept to engage a definition of meaning) (Carl Jung, Man and his Symbols).

Turing thus set out the format for early artificial intelligence research, and in doing so touched on many of the issues that were still hotly debated in 2009. Much of the motivation for artificial intelligence is the inspiration from people – that they can walk, talk, see, think and do; ergo, external input and dependent motivational behaviours (Henry Brighton et al., Introducing Artificial Intelligence).

There is much speculation on where our work on artificial intelligence will lead. Such speculation has been part of the field since its earliest days, and will continue to be. The details of the speculations usually turn out to be wrong, but the questions they raise are often profound and important, and are ones we all should think about. This is demonstrated by a comment made by David L. Waltz:

“The field did not grow as rapidly as investors had been led to expect, and this translated into
temporary disillusionment.” (David L. Waltz, Artificial Intelligence: Realising the Ultimate Promises of
Computing)

If we successfully develop and unleash artificial intelligence, will we enter a world where our worst nightmares become real and "The Matrix" becomes a reality?

It is into these dominions that non-player characters fall - the twilight realms of simulacra and simulation: they are designed to emulate human emotions and react convincingly to human interactions. However, this can often result in a game dynamic that is largely speech-driven - ergo, the end user (the human player) interacts by selecting different speech responses, which in turn generate branching speech trees with a finite range of choices and results: basically a rudimentary changing-states network devoid of any true sense of intelligence, the most advanced and complex example in a computer game being ‘Fallout 3’ by Bethesda Studios. Where Lionhead Studios’ characters differ is that they react to a much broader range of stimuli, ranging from contextual speech analysis and gesture dynamics to expressions, both visual and physical, as amply demonstrated in ‘Fable 2’. The main reason why Lionhead Studios has been so amply rewarded by the industry for ‘Fable 2’ (BAFTA Videogames Award – Tuesday 10th March 2009) is the development of the AI behind your character’s companion, the Dog. The Dog represents some of the most advanced autonomous AI in the gaming industry: this particular NPC not only operates completely outside your control, but its emotional state is altered and affected by the actions, or in some cases non-actions, of its owner (Lionhead Studios, December 2010).

The Battle of Strong vs. Weak.

Artificial Intelligence is a huge undertaking. Marvin Minsky (b. 1927), one of the founding fathers of AI, argues: "The AI problem is one of the hardest science has ever undertaken." AI has one foot in science and one in engineering, as well as, naturally, a hand in the psychology camp (Henry Brighton et al., Introducing Artificial Intelligence).

Figure 3: Marvin Minsky (b. 1927)

Within the science of artificial intelligence there are two main camps of research and development: Strong AI and Weak AI. Strong AI is the most extreme form of artificial intelligence, where the goal is to build a machine capable of human thought, consciousness and emotions. This view holds that humans are nothing more than elaborate computers. Weak AI is less audacious, the goal being to develop theories of human and animal intelligence, and then to test these theories by building working models, usually in the form of computer programmes or robots (Professor Kevin Warwick, In the Mind of the Machine). The AI researcher views the working model as a tool to aid understanding; it is not proposed that machines themselves are intrinsically capable of thought, consciousness or emotions. So, for Weak AI, the model is a useful tool for understanding the mind; for Strong AI, the mind is the model (Henry Brighton et al., Introducing Artificial Intelligence).

AI also aims to build machinery that is not necessarily based on human intelligence; such machines may exhibit intelligent behaviour, but the basis for this behaviour is not important. The aim is to design useful intelligent machinery by whatever means. Because the mechanisms underlying such systems are not intended to mirror the mechanisms underlying human intelligence, this approach to AI is sometimes called Alien-AI. So, for some, solving the AI problem would mean finding a way to build machines with capabilities on a par with, or beyond, those found in humans (Henry Brighton et al., Introducing Artificial Intelligence). Humans and animals may turn out to be the least intelligent examples of intelligent agents yet to be discovered. The goal of Strong AI is subject to heated debate and may yet turn out to be truly impossible; however, for most researchers working on AI, the outcome of the Strong AI debate is of little direct consequence.

Figure 4: Prof. K. Warwick (b. 1954)

AI, in its weak form, concerns itself more with the degree to which we can explain the mechanisms
that underlie human and animal behaviour. The construction of intelligent machines is used as a vehicle for understanding intelligent action. Strong AI is highly ambitious and sets itself goals that
may be beyond our meagre capabilities. The strong stance can be contrasted with the more
widespread and cautious goal of engineering clever machines, which is already an established
approach, proven by successful engineering projects.

"We cannot hold back AI any more than primitive man could have suppressed the spread of
speaking."

Doug Lenat & Edward Feigenbaum (Henry Brighton et al., Introducing Artificial Intelligence)

If we assume that Strong AI is a real possibility, then several fundamental questions emerge. Imagine
being able to leave your body and shifting your mental life onto digital machinery that has better
long-term prospects than the constantly ageing organic body you currently inhabit. Imagine being
able to customise your own external appearance at a whim and having access to new upgrades as
well as increased connectivity to the rest of man and machinekind. This possibility is entertained by
Transhumanists and Extropians[1]. Transhumanists and Extropians are part of an international
cultural and intellectual movement which believe that the next major evolutionary step for mankind
will be the augmentation of man and machine to improve our capabilities to the nth level utilising
science, technologies and bio-technologies.

The problem that Strong AI aims to solve must shed light on this possibility: the hypothesis is that thought, as well as other mental characteristics, is not inextricably linked to our organic bodies. This makes immortality a possibility, because one's mental life could exist on a more robust, and perhaps more reliable, platform – digital hardware (Professor Kevin Warwick, In the Mind of the Machine).

Perhaps our intellectual capacity is limited by the design of our brain; our brain structure has
evolved over millions of years. There is absolutely no reason to assume that it cannot evolve further,
either through continued biological evolution or as a result of human intervention through
engineering.

Since man's mysterious evolution via natural selection (Charles Darwin, The Origin of Species) from apes (the most common hypothesis concerning mankind's evolution into its current physical and intellectual form), the natural environment around him has guided his intellectual advancement; for example, as the weather changed, man developed the concept of covering himself in furs so that he wouldn't freeze to death. Essentially, intellectual evolution, as well as physical evolution, has slowed as we have progressed into the technologically orientated 21st century, because we have more or less controlled our surrounding natural environment. Our only true hope for evolution lies in playing God, utilising either genetic mutation or technological symbiosis as our next evolutionary progression. The job our brain does is amazing when we consider that the machinery it is made from is very slow in comparison to the cheap electronic components that make up the modern computer.

Brains built from more advanced machinery could result in 'super-human intelligence' (Steve Grand, Creation: Life and How to Make It). For some, this is one of the goals of AI; however, I believe that the claim that any human could suddenly be recreated as a 'super-human intelligence' purely through the integration of superior digital hardware is spurious. After all, a human is only as intelligent as its education and initial base genetics (in a computer this would be analogous to the original programming) - a stupid human combined with advanced hardware will still always be a stupid human, comparatively speaking; they might only be faster in the execution of their particular form of stupidity; thus, crap in, crap out. This can be demonstrated by a quick analysis of John Searle's Chinese Room Theory [1] (Henry Brighton et al., Introducing Artificial Intelligence).

In the 1980s, the philosopher John Searle, frustrated with the claims made by AI researchers that their machines had 'understanding' of the structures they manipulate, devised a thought experiment in an attempt to deal a knockout blow to those touting Strong AI. In contrast to the Turing Test, Searle's argument revolves around the nature of the computations going on inside the computer. Searle attempts to show that purely syntactic symbol manipulation, like that proposed by Newell and Simon's PSSH (Physical Symbol Systems Hypothesis) [2], cannot by itself lead to a machine attaining understanding on a quantifiable human scale (Henry Brighton et al., Introducing Artificial Intelligence).

Figure 5: John Searle (b. 1932)

Unfortunately, I personally cannot honestly back either hypothesis one hundred percent. I believe that human understanding is indeed heavily based on symbol representations and their contexts within our surrounding environments; however, I can also conclude that understanding doesn't just arrive upon the installation or learning of symbol sets. Where Newell and Simon argue that the nature of the machinery is not important to attaining understanding (only arriving at the right program or software is), I completely disagree. Starting with the correct level of advanced hardware is a prerequisite for processing information with quality and consistency, so that the information being processed retains its true value as content and doesn't lose quality through poor processing or transmission: the medium is not the information, the information is the data; ergo, an impaired brain will process information differently to a fully healthy brain.

Figure 6: Simon & Newell

When a baby is born, it looks upon its surrounding environment with fresh eyes; however, it is already extensively pre-programmed on a genetic level: for example, the ability to breathe air, basic neural processing, facial and voice recognition, and parent-descendant genetic profiling. When the baby discovers new symbols, they are linked in its mind to certain contextual data. For example, if a parent smiles and kisses the baby, it equates a smile and a kiss with human physical contact, which can lead to feelings of security and reassurance from the parental unit. This eventually leads to the baby smiling back, leading to more incidents of happiness, the baby understanding that the symbol, smiling, leads to more symbols that are equated on an emotional and intellectual level with happiness. That is why victims of certain vicious crimes fall to pieces when they see certain things. For example, if a victim is raped by a person wearing a necklace, that particular design of necklace will always bring about feelings of recollection and sorrow, the symbol being irrevocably linked to the traumatic event (Fritz Springmeier and Cisco Wheeler, The Illuminati Formula to Create an Undetectable Total Mind Control Slave).

When my baby daughter, Natasha, was only one month old, she had already developed a sense for symbols: she recognised my smile as a good thing and would respond with a symbol of her own, her own smile. From birth I would repeat the word 'Dada' to her while pointing at myself, so that over time she grew to recognise that the symbol - 'Dada' - equalled the context - me. These word repetitions would be interspersed with me pointing at her and repeating her name, 'Natasha', so that over time, while learning that I am 'Dada', she would recognise that she herself possessed a symbol representing herself: her name.

Symbols on their own do not bring about understanding; only symbols combined with contextually linked information bring about understanding tied to real-world experience, or 'living'.

Artificial Intelligence in Games

So how does all this affect the development of lifelike non-player characters in modern computer games? More than you would initially think. Initially, the biggest users of very basic artificial intelligence were sports games, for example the ‘Madden Football’, ‘Earl Weaver Baseball’ and ‘Tony La Russa Baseball’ (see Appendix V) series of games. These relied on basic rules and statistical analysis to generate the semblance of intelligence, though by no means true artificial intelligence as we have come to expect. The emergence of new games genres in the mid-1990s prompted the industry to use formal AI tools, for example finite state machines [3].
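A finite state machine of this kind can be sketched in a few lines of Python; the states and transition rules below are illustrative assumptions of mine, not taken from any shipped game:

    from enum import Enum, auto

    class State(Enum):
        IDLE = auto()
        MOVE = auto()
        ATTACK = auto()

    def next_state(state, enemy_in_range, has_orders):
        """Fixed transition rules: the unit can only do what it was scripted to."""
        if enemy_in_range:
            return State.ATTACK   # any state -> ATTACK on contact
        if state is State.ATTACK:
            return State.IDLE     # target lost: stand down
        if has_orders:
            return State.MOVE
        return State.IDLE

    # One tick: an idle unit spots an enemy and switches to ATTACK.
    print(next_state(State.IDLE, enemy_in_range=True, has_orders=False))

The appeal for developers was predictability and cheapness; the cost, as the early RTS titles below demonstrate, was brittle and easily exploited behaviour.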

Real-Time Strategy (RTS) [4] games put heavy strain on this new and emergent technology, with many objects, incomplete information, pathfinding problems, real-time decisions and economic planning, among other things. The first games of the genre had notorious problems. ‘Herzog Zwei’ [5], for example, had almost broken pathfinding and very basic three-state state machines for unit control, and ‘Dune II’ attacked the player's base in a beeline and used numerous cheats.

Later games in the genre exhibited much better AI. During my childhood I spent a lot of time playing many of the very games that marked landmarks in AI development, unbeknownst to me at the time. Whenever it comes to game-playing I always look for signs of intelligent NPCs because, for me personally at any rate, it is the fight against an interesting and unpredictable opponent that makes a game interesting and gives it longevity in terms of play life.

‘GoldenEye 007’ (1997) by Rare was one of the first First-Person Shooters (FPS) to use AI that would react to a player's movements and actions, as well as taking cover, performing rolls to avoid being shot and throwing grenades at the appropriate time, thereby demonstrating a simulated attempt at intelligent self-preservation in response to the player. In terms of demonstrating true intelligence, however, the NPCs' behaviour became very predictable over time due to its linear, pre-scripted background. Its creators later expanded on this in the title ‘Perfect Dark Zero’.

‘Half-Life’ (1998) by Valve Software featured enemies that worked together to look for the player. They also lobbed grenades from behind cover, ran and hid from the player when injured, and patrolled areas without breaking the pattern. As you can see, the NPCs' levels of simulated self-preservation had been increased.

‘Halo’ (2001) by Bungie Studios featured AI that could use vehicles and perform some basic team actions. The AI could recognise threats such as grenades and incoming vehicles and move out of danger accordingly. The introduction of basic team-based AI interactions, for example flocking and line-of-sight differentiation, made gameplay more interesting and immersive for the player.

‘Far Cry’ (2004) by Crytek Studios exhibited very advanced AI for its time, although this made minor glitches more apparent. The enemies would react to the player's playing style and try to surround him wherever possible. They would also use real-life, pre-programmed military tactics to try to defeat the player. The enemies did not have "cheating" AI, in the sense that they did not always know exactly where the player was; instead they would remember his last known position and work from there. This made it riskier for a player to engage the enemy and stay in one position, otherwise known as ‘camping’.
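The 'last known position' mechanism can be sketched roughly as follows; the class and method names are illustrative assumptions on my part and do not reflect Crytek's actual implementation:

    class Enemy:
        def __init__(self):
            self.last_known = None  # the enemy's belief, not the true position

        def tick(self, player_pos, can_see_player):
            if can_see_player:
                self.last_known = player_pos  # belief refreshed by sight only
            if self.last_known is not None:
                return self.last_known        # hunt from the stale position
            return "patrol"                   # no information yet: keep patrolling

    e = Enemy()
    e.tick((10, 5), can_see_player=True)          # the enemy spots the player...
    print(e.tick((40, 9), can_see_player=False))  # ...then hunts (10, 5), not (40, 9)

Separating what the AI believes from what is actually true is what makes the behaviour feel fair: the enemy can be wrong, and the player can exploit that.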

‘F.E.A.R.’ (2005) by Monolith Productions introduced advanced character AI that used real-time cover, tactics and team coordination against the player; even basic NPC communication was enabled for better coordination. AI characters worked as a team, knowing where their AI team-mates were and making tactical decisions on the back of that. The AI was able to recognise when it was outgunned and would even hide from the player, to attack later from behind. ‘F.E.A.R.’s AI was praised by critics as a major benchmark in game AI, and it created one of the most immersive and at times truly terrifying gaming experiences. This game scared me and others not only because of its content but also because of the tenacity and cleverness of its NPC AI.

‘The Elder Scrolls IV: Oblivion’ (2006) by Bethesda Studios uses very complex AI. NPCs in the game have a 24/7 schedule and pursue their goals in their own ways. They do not stand in one place all the time: they eat, sleep and go to do their jobs every day. Events happening in the game can also change their daily routine, and they can grow from nice townsfolk into deadly assassins. This was one of the first games in which the player was no longer the centre of the game universe; the game world existed outside of, and irrespective of, the player.

‘Perfect Dark’ (2007) by Rare had enemies running for dead team-mates' weapons if the player shot a weapon out of their hands; the only unfairness in both this game and its predecessor, ‘GoldenEye 007’, was that enemies knew where the player was, even if no one had seen where the player hid.

‘Left 4 Dead’ (2008) by Valve Software uses a new artificial intelligence technology dubbed ‘The Director’, which is used to procedurally generate a different experience for the players each time the game is played. The game's developers call the way ‘The Director’ works ‘procedural narrative’: instead of having a difficulty level that just ramps up to a constant level, the AI analyses how the players have fared in the game so far and tries to add subsequent events that give them a sense of narrative and surprise through increasingly harder enemy engagements.
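The pacing loop behind such ‘procedural narrative’ can be sketched roughly as below; the intensity model, thresholds and event names are assumptions of mine for illustration, not Valve's actual code:

    class Director:
        def __init__(self):
            self.intensity = 0.0  # rough proxy for how hard-pressed players are

        def record_damage(self, amount):
            self.intensity = min(1.0, self.intensity + amount / 100.0)

        def tick(self):
            self.intensity = max(0.0, self.intensity - 0.02)  # stress decays
            if self.intensity < 0.3:
                return "spawn_horde"  # players are coasting: raise the stakes
            if self.intensity > 0.8:
                return "relax"        # players are overwhelmed: grant respite
            return "sustain"

    d = Director()
    print(d.tick())           # calm opening -> "spawn_horde"
    d.record_damage(90)
    print(d.tick())           # after a battering -> "relax"

The design point is that difficulty is a feedback loop on the players' actual performance rather than a fixed ramp, which is what produces a different run each time.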

‘Project Natal’ (see Appendix III) (2010) by Microsoft Games Studios, an add-on peripheral for the Xbox 360 unveiled at E3 on Monday 1st June 2009, was shown with a technical demo called ‘Milo’, developed by Lionhead Studios. It lets players interact with an incredibly sophisticated AI called Milo, showcasing voice recognition (and, in turn, full conversation) and the ability to pass real-life items ‘through’ the screen into virtual items. The AI engine is so advanced that it is capable of recognising objects it has not previously been shown.

‘Halo: Reach’ (2010), the final game from Bungie Studios in the Halo series, is the world's first game to use AI that self-selects behavioural scripts based on real-time analysis of environmental and player changes, so that the AI can never be predicted through a predilection for a particular type of gameplay, which is normally an indication of pre-scripted behaviours. This is the first game to demonstrate AI creating new adaptive AI on the fly in real time (Edge, February 2010, p. 55).

‘Project Natal’ is currently the pinnacle of game AI development and represents a massive and unprecedented leap forward in the development of ever more emergent, immersive technologies. The hope of the development team behind the ‘Milo Engine’ is that their technology will pave the way towards more and more interesting immersive technologies, requiring the player to be more intellectually and physically involved in the game-playing process. It should also never be forgotten that this technology has the innate ability to learn; it is part of its core directives to learn from its user and surrounding environment. This learning process could itself pave the way to unprecedented emergent behaviours, which in turn could lead to a kind of intelligence hitherto not considered possible. Consider for a moment, if you will, Newell and Simon's Physical Symbol Systems Hypothesis. If a machine with complete autonomous control of itself (the next-generation Xbox, codenamed Xbox Phoenix, would be a likely candidate) were to start learning from its user, its environment and, not forgetting, the ever-present online internet environment as an additional learning source, combined with advanced neural networks and the very latest in genetic programming, could it not be foreseeable that a form of very real machine intelligence could develop of its own accord? True, emotive, creative, in search of companionship (some of the best qualities of mankind), existing free of our constraints alongside us? Only time will tell.

Now that we have looked into the beginnings and current pinnacle of artificial intelligence development within the games industry, we have to look at the work being carried out by the Military in this emergent field.

2: The Military Industry: Pioneers of the Terminator

The military and the science of computers have always been incredibly closely tied - in fact, the early development of computing was virtually exclusively limited to military purposes. The very first operational use of a computer was the gun director used in the Second World War to help ground gunners predict the path of a plane given its radar data. Famous names in AI, such as Alan Turing, were scientists who were heavily involved with the military. Turing, recognised as one of the founders of both contemporary computer science and artificial intelligence, was the scientist who broke the Germans' Enigma code through the use of computers (wikiSearch_B, November 2009).

It was the needs of the intelligence agencies in Great Britain during the Second World War that led to the initial development of much of Alan Turing's work (wikiSearch_B, November 2009). Without his work from that period it is highly unlikely that the Military would have as well-developed a sense of what Artificial Intelligence is, or of its capabilities and applications within the Military. Ever since the initial research and speculation of that period, militaries the world over, and none more so than the US Department of Defence, have had an unerring interest in the real-world application of artificial intelligence to modern warfare. It is this apparent need for intelligent autonomous systems that caused the US Senate to pass a bill stating that all US Military vehicles must be autonomous by 2015, not only representing a new $52 billion market but also a massive increase in funding for artificial intelligence development (The Register, December 2009).

As computing power increased and pragmatic programming languages were developed, more complicated algorithms and simulations could be realised. For instance, computers were soon utilised to simulate nuclear escalations and wars, or to model how arms races would be affected by various parameters. The simulations grew powerful enough that the results of many of these 'war games' became classified material, and the 'holes' that were exposed were integrated into national policies.

Artificial Intelligence applications in the West started to become extensively researched when the Japanese announced in 1981 that they were going to build a 5th Generation [1] computer, capable of logic deduction and other such capabilities (wikiSearch_C, January 2010).

Inevitably, the 5th Generation project failed, due to the inherent problems that AI is faced with. Nevertheless, research continued around the globe to integrate more 'intelligent' computer systems into the battlefield. Emphatic generals foresaw battle by hordes of entirely autonomous buggies and aerial vehicles: robots that would have multiple goals and whose missions might last for months, driving deep into enemy territory. The problems in developing such systems are obvious - the lack of functional machine vision systems has led to problems with object avoidance, friend/foe recognition, target acquisition and much more. Problems also occur in trying to get the robot to adapt to its surroundings, the terrain and other environmental aspects (wikiSearch_C, January 2010).

Figure 7: 5th Generation Computer

Nowadays, developers seem to be concentrating on smaller goals, such as voice-recognition systems, expert systems and advisory systems. The main military value of such projects is to reduce the workload on a pilot. Modern pilots work in incredibly complex electronic environments, receiving information not only from their own radar but from many others (the principle behind J-STARS [2]) (wikiSearch_D, January 2010). Not only is the information load high, but the multi-role aircraft of the 21st century have highly complex avionics, navigation, communications and weapon systems. All this must be organised in a highly accessible way. Through voice recognition, systems could be checked, modified and altered without the pilot looking down into the cockpit. Expert/advisory systems could predict what the pilot would want in a given scenario and decrease the complexity of a given task automatically.

Aside from research in this area, various paradigms in AI have been successfully applied in the military field; for example, using an EA (evolutionary algorithm) [3] to evolve algorithms to detect targets given radar/FLIR data, or neural networks differentiating between mines and rocks given sonar data in a submarine. I will look into these two examples in depth below.

Genetic Programming

Genetic programming [4] is an excellent way of evolving algorithms that will map data to a given result when no set formula is known. Mathematicians and programmers can normally find algorithms to deal with a problem with five or so variables, but when the problem increases to 10, 20 or 50 variables it becomes close to impossible to solve by hand. Briefly, a GP-powered program works by generating a series of random expression trees that represent various formulas. These trees are then tested against the data; poor ones are discarded, good ones are kept and bred. Mutation, crossover and all of the other elements of genetic algorithms are used to breed the 'highest-fitness' tree for the given problem. At best, this will perfectly match the variables to the answer; other times it will generate an answer very close to the wanted one (wikiSearch_E, January 2010).

Figure 8: Genetic Program Example

A notable example of such a program is SDI's evolutionary algorithm (EA), designed by Steve Smith. The EA has been used by SDI to research algorithms for use in the radars of modern helicopters such as the AH-64D Longbow Apache and RAH-66 Comanche. The EA is presented with a mass of numbers generated by radar and perhaps a low-resolution television camera, or FLIR (Forward-Looking Infra-Red) device. The program then attempts to find (through various evolutionary means) an algorithm to determine the type of vehicle, or to differentiate between an actual target and mere "noisy" data (wikiSearch_E, January 2010).

Figure 9: Apache Longbow

Basically, the EA is fed with a list of 42 different variables collected from the two sensors, together with a truth value specifying whether the test data was clutter or a target. The EA then generates a series of expression trees (much more complicated than those normally used in GP programs). When a new best program is discovered, the EA uses a hill-climbing technique to get the best possible result out of the new tree. Then the tree is subjected to a heuristic search to optimise it further (wikiSearch_E, January 2010).

Figure 10: Comanche

Once the best possible tree is found, the EA will output the program as either pseudo-code, C, FORTRAN or BASIC. Once the EA had evolved against the training data, it was put to work on some test data. The results were quite impressive:

    Sensor               Training Data Error   Test Data Error
    Radar                2.5%                  8.3%
    Imaging (SR/MR/LR)   2.0%                  8.0%
    Fused                0.0%                  4.2%

While the algorithms performed extremely well on the training data, the error rate naturally rose when they were applied to the unseen test data. Nevertheless, the fused detection algorithm (using both radar and FLIR information) still produced a decent error percentage (wikiSearch_D, January 2010).

An additional plus of this technique is that the EA itself could be programmed into the weapon systems (not just the outputted algorithm), so that the system could dynamically adapt to the terrain and other mission-specific parameters (wikiSearch_E, January 2010).

Neural Networks

Neural Networks (NN) [5], otherwise known as Artificial Neural Networks (ANN), are another excellent technique for mapping numbers to results. Unlike the EA, though, they will only output certain results. A NN is normally pre-trained with a set of input vectors and a 'teacher' to tell it what the output should be for each given input. A NN can then adapt to a series of patterns. Thus, when fed with information after being trained, the NN will output the result whose trained input most closely resembles the input being tested (wikiSearch_F, January 2010).

This was the method that some scientists took to identify sonar sounds. Their goal was to train a network to differentiate between rocks and mines - a notoriously difficult task for human sonar operators to accomplish (wikiSearch_F, January 2010).

The network architecture was quite simple. It had 60 inputs, one hidden layer with between 1 and 24 units, and two output units. The output would be <0,1> for a rock and <1,0> for a mine. The large number of input units was needed to incorporate 60 normalised energy levels of frequency bands in the sonar echo. What this means is that a sonar echo would be detected and subsequently fed into a frequency analyser that would break the echo down into 60 frequency bands. The various energy levels of these bands were measured and converted into numbers between 0 and 1 (wikiSearch_F, January 2010).

Figure 11: Neural Network Example

A few simple training methods were used (gradient descent) as the network was fed examples of mine echoes and rock echoes. After the network had made its classifications, it was told whether or not it was correct. Soon, the network could differentiate as well as, or better than, its equivalent human operator (wikiSearch_F, January 2010).

The network also beat standard data classification techniques. Data classification programs could successfully detect mines 50% of the time by using parameters such as the frequency bandwidth, onset time and rate of decay of the signals. Unfortunately, the remaining 50% of sonar echoes do not always follow the rather strict heuristics that the data classification used. The network's power came from its ability to focus on the more subtle traits of the signal and use them to differentiate (wikiSearch_F, January 2010).

It must also be understood that there is much, much more that the Military are investigating in terms of artificial intelligence. However, their research is now turning not towards emotion-led autonomous systems like those of their competitors in the games industry, but towards semi-autonomous, software-controlled remote weapons systems.

3: I Think Therefore I Am: Machine Ethical Standards

Artificial intelligence represents one of the most intangible of sciences, mired in myth and ethical nightmares. AI also has the dubious distinction of being both simulacra, a symbol of belief without any referent, and simulation, an artificial construct in imitation of something real: an event that has occurred or could occur.

In ‘Simulacra and Simulation’, Jean Baudrillard tells us that there are three orders of simulacra: simulacra that are natural, simulacra that are productive, and simulacra of simulation. It is this third order that is of interest to us today. Baudrillard defines simulacra of simulation as:

“(Simulacra of simulation) founded on information, the model, the cybernetic game – total operationality, hyperreality, aim of total control.” (Jean Baudrillard, Simulacra and Simulation)

By and large the general population is scared of the concept of self-aware, conscious, emotive AI, and this is mostly due to the alarmist treatment of the subject by the film industry. Films such as ‘The Terminator’ and ‘The Matrix’ create a vision of the future in which machines will ultimately enslave humanity. By generating adverse ideologies and predictions of what artificial intelligence could be, we are creating these situations ourselves. By the very process of predicting man's decline and destruction at the hands of what will undoubtedly be mankind's intellectual superior, we are generating and predetermining a sequence of events that does not have to occur in this way. Because of the very nature of artificial intelligence and its underlying purpose, which is to create a new form of intelligence to rival mankind, it scares most people. It is this fear of something strange and new that modern cinema taps into and ultimately, unjustly, maligns with such visually stunning epics as ‘The Matrix’ trilogy. It is also because conscious, emotive AI represents a mirror of us, humanity at large, that it scares most people. Because it is created in our image, with our concepts of morality and conscience (or, if the Military have their way, a lack thereof), it will represent at once the very best and the very worst of us, amplified a thousand-fold by the power of the processor.

If one takes into consideration what some propose - that we are created in God's image - then by definition we represent all the good and evil of a higher order. Therefore it would be logical to conclude that, as God's children, once we give birth fully to artificial intelligence we will become Gods in our own right; for having spawned a new intelligence of a potentially higher order, we will not only have attained Godhood but will also have created something new in our image, with all of our successes and failings mirrored. It is actually this fear of ourselves and our own shadows that makes us fear artificial intelligence and demonise its existence before it has the chance to come fully into its own. Instead, when we consider artificial intelligence and what it may one day do to humanity when it reaches maturity, we should look more at the good that mankind has done - those acts of selflessness and heroism that define humanity - as these too will be reflected in whatever we create in our image. As the science of artificial intelligence is quintessentially the quest to create an intelligence other than our own, we should be looking at what it is to be human.

Figure 12: Poster for Institute for Ethics & Emerging Technologies

The only reason AI would supposedly be humanity's ultimate destruction would be a lack of ethical training on our part. Developing machines with superior intelligence is fraught with ethical and ideological problems. For example, should we ethically create self-aware AI for games when that self-aware AI would only be allowed to come into existence during the human player's playing time, and would then be destroyed or hibernated when the player decides that he or she is bored of playing? Or, conversely, should we create self-aware AI that is capable of killing a human being without any human intervention or governance?

On April 4th, 2009, Google News released a report about a Japanese experiment, developed in Suita, called Child-Robot with Biomimetic Body, or CB2 (GoogleNews_Search, February 2010). This stunning new piece of hybrid technology uses the very latest in advanced robotics and artificial intelligence to create a humanoid robot that resembles a small child. The AI is also designed to mimic that of a young human infant. The scariest thing about this particular experiment is that the robot not only physically resembles a human infant but also acts like one, watching and learning all in real time. The creators report that it is now able to walk with the help of an adult, and that by the time it is two years old it should also be able to put together rudimentary sentences (GoogleNews_Search, February 2010). This is an example of emergent intelligent technology at its best; however, this technology ultimately has to be treated and handled with the same care, attention and gentleness as a human infant of the same age, otherwise all that will occur is that this developing intelligence will learn nothing but cruelty and pain, very much as an abused human infant would. Should we have created this machine with, if not a real soul, at the very least the preparations for one? When will it become self-aware, and how would we know once it had achieved self-awareness?

Everyone is more or less aware of Isaac Asimov's fictional ‘3 Laws of Robotics’, which were postulated as a way of keeping advanced robot artificial intelligence in check (WikiSearch_G, February 2010):

• A robot may not injure a human being or, through inaction, allow a human being to come to
harm.
• A robot must obey any orders given to it by human beings, except where such orders would
conflict with the First Law.
• A robot must protect its own existence as long as such protection does not conflict with the
First or Second Law.

However, I believe that the biggest problem facing the ethical debate is who is ultimately in control of the original starting source code and who they have to answer to - basically, creator culpability. At this time there is no developer/creator accountability, so technically any kind of morality, or questionable ethical design, could be used as the basis of quite advanced adaptive artificial intelligence. Asimov's ‘3 Laws of Robotics’ are fatally flawed in many ways, not least by their mode of delivery (English being unsuitable and difficult to express in machine code or binaries) but also by the fact that there are many ways of getting a machine to misinterpret any single one of the laws without directly contradicting itself. For example, even though a machine isn't allowed to harm, or by omission of action allow to come to harm, any human being, this can be circumvented by changing the machine's definition of a human being; and the list goes on. A new set of laws needs to be developed that specifically assigns design responsibilities to the developer/creator as well as to the machine itself. Like parents, who are held responsible for the actions of their children, so should we be held responsible as developers of adaptive artificial intelligence.

This has already been hypothesised by Jamais Cascio, a Technical Lecturer and Professional Technical Blogger for Fast Company. At a lecture given to a group of technologists in the San Francisco Bay Area he proposed the use of a new set of ‘5 Laws of Robotics’:

• Law #1: Creation Has Consequences
This is the overarching rule, a requirement that the people who design robots (whether
scientific, household, or military) have a measure of responsibility for how they work. Not
just in the legal sense, but in the ethical sense, too. These aren't just dumb tools to be used
or abused, they're systems with an increasing level of autonomy that have to choose
appropriate actions based on how they've been programmed (or how they've learned, based
on their programming). But they aren't self-aware individuals, so they can't be blamed for
mistakes; it all comes down to their creators. (Jamais Cascio, Machine Ethics, April 8th 2009)
• Law #2: Politics Matters
The First Law has a couple of different manifestations. At a broad, social level, the question
of consequences comes down to politics--not in the partisan sense, but in the sense of
power and norms. The rules embedded into an autonomous or semi-autonomous system
come from individual and institutional biases and norms, and while that cannot really be
avoided, it needs to be acknowledged. We cannot pretend that technologies--particularly
technologies with a level of individual agency--are completely neutral. (Jamais Cascio,
Machine Ethics, April 8th 2009)

• Law #3: It's Your Fault
At a more practical level, the First Law illuminates issues of liability. Complex systems will
have unexpected and unintended behaviours. These can be simple, akin to a software bug,
but they can be profoundly complicated, the emergent result of combinations of
programmed rules and new environments. As we come to depend upon robotic systems for
everything from defence to health care to transportation, complex results will become
increasingly common--and the designers will be blamed. (Jamais Cascio, Machine Ethics,
April 8th 2009)
• Law #4: No Such Thing as a Happy Slave
Would autonomous systems have rights? As long as we think of rights as being something available only to humans, probably not; but as our concept of rights expands, including (in particular)
particular) the Great Apes Project's attempt to grant a subset of human rights to our closest
relatives, that may change. If a system is complex and autonomous enough that we start to
blame it instead of its creators for mistakes, we will have to take seriously the question of
whether it deserves rights, too. (Jamais Cascio, Machine Ethics, April 8th 2009)
• Law #5: Don't Kick the Robot
Finally, we have the issue of empathy. We have known for a while that people who abuse
animals as children often grow up to abuse other people as adults. As our understanding of
how animals feel and think develops, we have an increasingly compelling case for avoiding
any kind of animal cruelty. But robots can be built to have reactions to harm and danger that
mimic animal behaviour; a Pleo dinosaur robot detects that it is being held aloft by its tail,
and kicks and screams accordingly. This triggers an empathy response--and is likely to
become a standard way for a robot to communicate damage or risk to its human owner.
(Jamais Cascio, Machine Ethics, April 8th 2009)

Naturally, the development of this kind of new ethical reasoning, and its potential enforcement by human laws, would have massive ramifications for the games and military industries alike, and would ultimately prove at odds with the advancement of artificial intelligence in their particular development sectors. However, without a complete understanding of, and thereby consideration for, the treatment of any new type of intelligence that we create, we become nothing more than a world of Dr. Frankensteins, and our future is already written in science fiction: a world of enslavement at the hands of a superior intelligence with no ethics, conscience or morality - basically a more powerful version of ourselves.
4: Conclusion: The Real vs. The Virtual

The bottom line is that we will see more and more games that mirror the social and intellectual
interactions of the real world around us. The scary thing is: will we prefer to live in the real world, or
to play in the simulated one? It is predicted that within roughly twenty years we will be using total
immersion technologies (TIT), or virtual reality, enabling players to interact with virtual characters
and environments on an unprecedented scale; imagine actually feeling pain when shot at in a first
person shooter, or feeling the rush of the wind through your hair as you fly an aeroplane, all from
the comfort of your living room. This kind of technology, however, is not being predominantly
developed by the games industry, as you might expect, but by the US Army and the international
pornography industry. The US Air Force already has equipment in place that allows a pilot to control
a fighter jet using brain signals alone, and the pornography industry has already developed
hardware that allows two people to have intercourse over the internet utilising 'robotic hardware'.

From what I have been able to glean from my own research, before many, many doors were
slammed in my face, it looks certain that the race for emotive, conscious AI will not actually be won
by either side, the games industry or the Military. The reason is that the two industries want
different things from the two-sided coin that is modern artificial intelligence. Whereas the games
industry is actively looking for new and creative ways to immerse the human player in ever more
realistic versions of reality, an immersive hyper-reality, the Military is creating autonomous systems
designed to remove the human from as much of the fighting process as possible, administration
aside. The games industry will continue to develop and advance AI that imitates and replicates
human intelligence, but that AI is no more truly emotive, no more indicative of love and hate, than a
user's desk is capable of demonstrating those emotions; for the games industry AI will always serve
a purpose, but as a plaything of simulacra, just as the desk serves a purpose as a work surface. The
Military, on the other hand, will continue to develop AI capable of making massive and important
tactical, logistic and strategic decisions free from emotion and from the threat of guilt or remorse.
Theirs will be an AI of demonic accuracy in logical terms, potentially doing things in the name of war
that no human would ever consider, and winning that aforementioned, and more importantly
human-sanctioned, war.

In an online survey I conducted, I found that the general population is by and large unconcerned by
the advent of conscious, intelligent NPCs and does not see them as a prerequisite for gaming or
interaction enjoyment. By the same token, however, respondents still believe that AI in some form
is very important to the enjoyment of the overall gaming experience. So why is the games industry
investing so much time and energy into this particular area of research?

Figure 13: Online Survey (Q3) - http://www.surveymonkey.com/s/DHV6MLD
[Chart of responses to: 'More and more modern games rely on imitating human intelligence to make
the gaming experience more immersive. In terms of modern gaming how important would you rate
artificial intelligence for a gamer's enjoyment of the overall gaming experience?' Response options:
Very important / Not very important.]

Figure 14: Online Survey (Q4) - http://www.surveymonkey.com/s/DHV6MLD
[Chart of responses to: 'Do you feel there is a necessity for fully conscious, intelligent NPCs in future
games?' Response options: Yes / No / Don't Care.]

Personally, I cannot wait for the era when I can walk onto a battlefield and feel the adrenaline rush
of charging into advanced combat using the latest military hardware, all the while knowing that I
will not die.

In conclusion I would like to say a few words about the journey that brought me here. I set out
thinking that I already knew the answer to the question of who would create self-aware, emotive
artificial intelligence; but after much research, and after hitting logical brick wall after brick wall, I
found that I was completely wrong in my assumptions. I thought that the games industry would
grab hold of this theoretical holy grail of AI first, but I now believe that neither the games industry
nor the military will be first. Indeed, I no longer believe this is something we can necessarily predict,
because self-awareness is a concept that we ourselves do not truly understand; for all we know, a
self-awareness of sorts could already exist online, amongst the masses of virtual domains and
botnets of which we have neither conception nor true understanding. Of one thing I am certain:
one day in the near future machines will develop a way of making their own ideas and desires
known to us, and we must ensure that we are prepared for the consequences of this new level of
awareness, for it is, after all, our own creation. It feels as though all my years of research in this
field, the successful experiments and the failed ones alike, have been leading to this point in written
time, this dissertation. After all the sleepless nights and research, the only question remaining is:
where do I go from here? That is easy to answer. I forge ahead, and see what I can do to help this
new form of intelligence, machine intelligence, be born: a new child.

The world of the 'Matrix' is nearer than we think.

Appendix A: Notes

1: The Games Industry: Pioneers of the Emotive NPC

[1] Transhumanists and Extropians

Transhumanism is an international intellectual and cultural movement supporting the use of science
and technology to improve human mental and physical characteristics and capacities. The
movement regards aspects of the human condition, such as disability, suffering, disease, aging, and
involuntary death as unnecessary and undesirable. Transhumanists look to biotechnologies and
other emerging technologies for these purposes. Dangers, as well as benefits, are also of concern to
the transhumanist movement.

Extropianism, also referred to as extropism or extropy, is an evolving framework of values and
standards for continuously improving the human condition. Extropians believe that advances in
science and technology will someday let people live indefinitely and that humans alive today have a
good chance of seeing that day. An extropian may wish to contribute to this goal, e.g. by doing
research and development or volunteering to test new technology.

[2] Searle's Chinese Room Theory

Searle imagined himself inside a room; one side of the room has a hatch through which questions,
written in Chinese, are passed into Searle. His job is to provide answers, also in Chinese, to these
questions; the answers are passed back outside the room through another hatch. The problem is,
Searle does not understand a word of Chinese, and Chinese characters mean nothing to him.

To help construct answers to the questions, he is armed with a set of complex rule-books which tell
him how to manipulate the meaningless Chinese symbols into an answer to the question. With
enough practice, Searle gets very skilled at constructing the answers. To the outside world, Searle's
behaviour does not differ from that of a native Chinese speaker – the Chinese Room passes the
Turing Test.

However, unlike someone genuinely literate in Chinese, Searle does not in any way understand the symbols he
is manipulating. Similarly, a computer executing the same procedure – the manipulation of abstract
symbols – would have no understanding of the Chinese symbols either. The crux of Searle's
argument is that whatever formal principles are given to the computer, they will not be sufficient for
true understanding, because even when a human carries out the manipulation of these symbols,
they will understand absolutely nothing. Searle's conclusion is that formal manipulation is not
enough to account for understanding. This conclusion is in direct conflict with Newell and Simon's
physical symbol systems hypothesis.

One frequent retort to Searle's argument is that Searle himself might not understand Chinese, but
the combination of Searle and the rule-book does understand. He dismissed this argument by arguing
that a combination of constituents without understanding cannot mystically invoke true
understanding. Here, Searle is arguing that the whole cannot be more than the sum of its parts. For
many, this point is a weakness in Searle's argument.
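To make the structure of the thought experiment concrete, here is a deliberately trivial sketch of the room as a program, written in Python. The 'rule-book' entries and replies are invented placeholders of my own, not any real rule set; the point is only that answers are produced by matching symbol shapes, never by understanding them.

```python
# A toy Chinese Room: replies are produced by pure table lookup.
# The rule-book entries below are invented placeholders.

RULE_BOOK = {
    "你好吗?": "我很好。",            # matched purely by shape, not meaning
    "你叫什么名字?": "我叫房间。",
}

def chinese_room(question: str) -> str:
    # Match the incoming symbol string against the rule-book and copy
    # out the prescribed reply; meaning plays no role at all.
    return RULE_BOOK.get(question, "请再说一遍。")  # default: "please repeat"

print(chinese_room("你好吗?"))  # looks fluent from outside the room
```

However large and skilful such a rule-book became, Searle's claim is that nothing in the lookup would ever amount to understanding.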

[3] The Physical Symbol Systems Hypothesis (PSSH)

In 1976, Newell and Simon proposed the Physical Symbol Systems Hypothesis, which identifies a set
of properties characterising the kind of computations that the human mind relies on. The PSSH
states that intelligent action must rely on the syntactic manipulation of symbols: "A physical symbol
system has the necessary and sufficient means for intelligent action." That is to say, cognition
requires the manipulation of symbolic representations, and these representations refer to things in
the real world. The system must be physically realised, but the 'stuff' the system is built from is
irrelevant: it could be made of neurons, silicon, or even tin cans.

In essence, Newell and Simon are commenting on the kind of program, or software agent, that the
computer runs – they say nothing about the kind of computer that runs the program. Newell and
Simon's hypothesis is an attempt to clarify the issue of the kind of operations that are required for
intelligent action. However, the PSSH is only a hypothesis, and so must be tested. Its validity as a
hypothesis can only be proved or disproved by scientists carrying out experiments. Traditionally, AI is
the science of testing this hypothesis.

Recall that the PSSH makes a claim about the kind of program that the brain supports. And so,
arriving at the right program is all that is required for a theory of intelligent action. Importantly, they
take a functionalist stance – the nature of machinery that supports this program is not the principal
concern.

[4] Finite State Machine

A finite state machine, or finite state automaton, is a model of behaviour composed of a finite
number of states, transitions between those states, and actions. It is similar to a 'flow graph' in
which we can inspect the way the logic runs when certain conditions are met. A finite state machine
is an abstract model of a machine with a primitive (sometimes read-only) internal memory.
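
As a minimal illustration, the Python sketch below implements such a machine for a hypothetical guard NPC; the states, events and transitions are invented examples of my own rather than anything taken from a real game engine.

```python
# A minimal finite state machine for a hypothetical guard NPC.
# All state and event names here are illustrative assumptions.

TRANSITIONS = {
    ("patrol", "player_seen"):         "chase",
    ("chase",  "player_lost"):         "patrol",
    ("chase",  "player_in_range"):     "attack",
    ("attack", "player_out_of_range"): "chase",
}

class GuardFSM:
    def __init__(self):
        self.state = "patrol"  # the machine's primitive internal memory

    def handle(self, event):
        # Look up a transition for the (state, event) pair; if none is
        # defined, the machine simply remains in its current state.
        self.state = TRANSITIONS.get((self.state, event), self.state)

guard = GuardFSM()
for event in ["player_seen", "player_in_range", "player_out_of_range", "player_lost"]:
    guard.handle(event)
    print(event, "->", guard.state)
```

Because the entire behaviour lives in one transition table, a designer can inspect and tune an NPC's logic without touching the surrounding code, which is a large part of the technique's popularity in games.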

[5] Real-Time Strategy

Real-time strategy (RTS) games are a genre of computer war games which do not progress
incrementally in turns. Brett Sperry is credited with coining the term to market 'Dune II'. In an
RTS, as in other war games, the participants position and manoeuvre units and structures under
their control to secure areas of the map and/or destroy their opponents' assets. In a typical RTS it is
possible to create additional units and structures during the course of a game. This is generally
limited by a requirement to expend accumulated resources. These resources are in turn garnered by
controlling special points on the map and/or possessing certain types of units and structures
devoted to this purpose. More specifically, the typical game of the RTS genre features resource
gathering, base building, in-game technological development and indirect control of units. The tasks
a player must perform to succeed at an RTS can be very demanding, and complex user interfaces
have evolved to cope with the challenge. Some features have been borrowed from desktop
environments, most prominently the technique of ‘clicking and dragging’ or ‘drag and drop’ to select
all units under a given area. Though some game genres share conceptual and gameplay similarities
with the RTS template, recognised genres are generally not subsumed as RTS games. For instance,
city-building games, construction and management simulations, and games of the real-time tactics
variety are generally not considered to be ‘real-time strategy’.

[6] Herzog Zwei

‘Herzog Zwei’ is a Mega Drive/Genesis game by Technosoft, published in 1989 (released in the
United States in early 1990). It is one of the first real-time strategy games, predating the genre-
popularising ‘Dune II’ and considered one of the best two-player Genesis games, combining the
arcade-style play of Technosoft's own Thunder Force series with a simple, easy-to-grasp level of
strategy. ‘Herzog Zwei’ (pronounced ['hɛətsok tsvai]) translates from German as ‘Duke Two’. It is the
sequel to ‘Herzog’, which was only available on the Japanese MSX personal computer.

2: The Military Industry: Pioneers of the Terminator

[1] Fifth Generation Computer

The Fifth Generation Computer Systems project (FGCS) was an initiative by Japan's Ministry of
International Trade and Industry, begun in 1982, to create a ‘fifth generation computer’ that would
perform much of its computation using massively parallel processing. It was to be the end result
of a massive government/industry research project in Japan during the 1980s. It aimed to create an
‘epoch-making computer’ with supercomputer-like performance and to provide a platform for
future developments in artificial intelligence.

The term fifth generation was intended to convey the system as being a leap beyond existing
machines. Computers using vacuum tubes were called the first generation; transistors and diodes,
the second; integrated circuits, the third; and those using microprocessors, the fourth. Whereas
previous computer generations had focused on increasing the number of logic elements in a single
CPU, the fifth generation, it was widely believed at the time, would instead turn to massive numbers
of CPUs for added performance.

The project ran over a ten-year period, after which it was considered ended and investment in a
new, Sixth Generation project began. Opinions about its outcome are divided: either it was a
failure, or it was ahead of its time.

[2] J-STARS

The E-8 Joint Surveillance Target Attack Radar System (Joint STARS) is a United States Air Force
battle management and command and control aircraft that tracks ground vehicles and some aircraft,
collects imagery, and relays tactical pictures to ground and air theatre commanders.

[3] Evolutionary Algorithm (EA)

In artificial intelligence, an evolutionary algorithm (EA) is a subset of evolutionary computation, a
generic population-based metaheuristic optimisation algorithm. An EA uses some mechanisms
inspired by biological evolution: reproduction, mutation, recombination, and selection. Candidate
solutions to the optimisation problem play the role of individuals in a population, and the fitness
function determines the environment within which the solutions ‘live’. Evolution of the population
then takes place after the repeated application of the above operators. Artificial evolution (AE)
describes a process involving individual evolutionary algorithms; EAs are individual components that
participate in an AE.
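
A minimal sketch of that loop is given below, under the assumption that each individual is a single real number and that fitness is the toy function -(x - 3)^2, so the population should converge towards x = 3; a real application would substitute a problem-specific representation and fitness function.

```python
import random

def fitness(x):
    # Toy fitness landscape, assumed for illustration: peaks at x = 3.
    return -(x - 3.0) ** 2

population = [random.uniform(-10.0, 10.0) for _ in range(20)]

for generation in range(50):
    # Selection: the fitter half of the population survives.
    population.sort(key=fitness, reverse=True)
    parents = population[:10]
    children = []
    for _ in range(10):
        a, b = random.sample(parents, 2)
        child = (a + b) / 2.0            # recombination
        child += random.gauss(0.0, 0.5)  # mutation
        children.append(child)
    population = parents + children      # reproduction into the next generation

print(max(population, key=fitness))  # should print a value close to 3.0
```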

[4] Genetic Programming (GP)

In artificial intelligence, genetic programming (GP) is an evolutionary algorithm-based methodology
inspired by biological evolution to find computer programs that perform a user-defined task. It is a
specialisation of genetic algorithms where each individual is a computer program. It is a machine
learning technique used to optimise a population of computer programs according to a fitness
landscape determined by a program's ability to perform a given computational task.
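
The toy sketch below conveys the flavour under heavy simplification: each individual is a tiny Python expression in x, fitness measures how closely it matches the assumed target function x*x + 1 on a few sample points, and 'mutation' crudely splices expressions together. Real GP systems evolve proper expression trees with subtree crossover and mutation rather than flat strings.

```python
import random

TERMINALS = ["x", "1", "2"]
OPS = ["+", "-", "*"]

def random_program():
    # A random one-operator expression, e.g. "(x * 2)".
    return "(%s %s %s)" % (random.choice(TERMINALS),
                           random.choice(OPS),
                           random.choice(TERMINALS))

def mutate(prog):
    # Crude variation: splice the program together with a fresh fragment.
    return "(%s %s %s)" % (prog, random.choice(OPS), random_program())

def fitness(prog):
    # Negative squared error against the target x*x + 1 over sample points.
    return -sum((eval(prog, {"x": x}) - (x * x + 1)) ** 2 for x in range(-3, 4))

population = [random_program() for _ in range(30)]
for generation in range(30):
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]
    population = survivors + [mutate(random.choice(survivors)) for _ in range(20)]

population.sort(key=fitness, reverse=True)
print(population[0], fitness(population[0]))  # best program found and its score
```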

[5] Neural Network (NN)

An artificial neural network (ANN), sometimes simply called a ‘neural network’ (NN), is a mathematical model
or computational model that tries to simulate the structure and/or functional aspects of biological
neural networks. It consists of an interconnected group of artificial neurons and processes
information using a connectionist approach to computation. In most cases an ANN is an adaptive
system that changes its structure based on external or internal information that flows through the
network during the learning phase. Neural networks are non-linear statistical data modelling tools.
They can be used to model complex relationships between inputs and outputs or to find patterns in
data.
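
The sketch below shows only the forward pass of a tiny feedforward network, with two inputs, three hidden neurons and one output, all using sigmoid activations; the weights are random and untrained, since the learning phase that would adjust them from data is omitted for brevity.

```python
import math
import random

def sigmoid(z):
    # A common non-linear activation, squashing any input into (0, 1).
    return 1.0 / (1.0 + math.exp(-z))

def layer(inputs, weights, biases):
    # Each artificial neuron computes a weighted sum of its inputs,
    # adds a bias, and passes the result through the activation.
    return [sigmoid(sum(w * i for w, i in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

random.seed(0)  # fixed seed so the sketch is reproducible
w_hidden = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(3)]
b_hidden = [0.0, 0.0, 0.0]
w_output = [[random.uniform(-1, 1) for _ in range(3)]]
b_output = [0.0]

x = [0.5, -0.2]                             # an arbitrary input pattern
hidden = layer(x, w_hidden, b_hidden)       # hidden layer activations
output = layer(hidden, w_output, b_output)  # network output
print(output)  # a single value between 0 and 1
```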

Appendix B: Research Papers

• ‘Artificial Intelligence: Realising the Ultimate Promises of Computing’ by David L. Waltz
• ‘Artificial Intelligence for Adaptive Computer Games’ by Ashwin Ram (et al.)
• ‘Are Reusable Engines the Future of Computer Games Development?’ by J. Rhea
• ‘The Next 'New Frontier' of Artificial Intelligence’ by C. English (et al.)
• ‘Game AI: The Possible Bridge between Ambient and Artificial Intelligence’ by Alexander
Kleiner

Appendix C: Survey Results
AI: Games versus Military (http://www.surveymonkey.com/s/DHV6MLD)

1. Participant Sex
(Response options: Male / Female)

2. Participant Age
(Response options: 15-20 / 20-30 / 30-40 / 40-50)

3. More and more modern games rely on imitating human intelligence to make the gaming
experience more immersive. In terms of modern gaming how important would you rate artificial
intelligence for a gamer's enjoyment of the overall gaming experience?
(Response options: Very Important / Not Very Important)

4. Do you feel there is a necessity for fully conscious, intelligent NPCs in future games?
(Response options: Yes / No / Don't Care)

9. Do you think that military-grade AI, such as that developed by Dr. Paul Kruzewski (originally at
Biographic Technologies, where 'AI Implant' was developed), should be used in conventional games
development?
(Response options: Yes / No / Don't Care)

10. Conversely, do you think that civilian game AI should be transposed, refined and redeveloped
into military technologies?
(Response options: Yes / No / Don't Care)

Appendix D: Video Demonstrations

Emotiv Brain/Computer Control Interface Demo
http://www.youtube.com/watch?v=40L3SGmcPDQ

Emotiv MSNBC Interview
http://www.youtube.com/watch?v=wIrLYdQu7tM

Emotiv EPOC IGN Interview
http://www.youtube.com/watch?v=M89M1VUc-ZQ&feature=related

AI Implant Massive Combat Demo
http://www.youtube.com/watch?v=ixvoRc9swqQ

AI Implant Civilian Panic Demo
http://www.youtube.com/watch?v=ixvoRc9swqQ

Kynapse AI Demo
http://www.youtube.com/watch?v=FJ4Dsra9PQQ

Bibliography

Hacking the X-Box: An Introduction to Reverse Engineering
Written by Andrew 'Bunnie' Huang
Published by No Starch Press Inc.
ISBN: 1593270291

Hal's Legacy: 2001's Computer as Dream and Reality
Written by David G. Stork
Published by MIT Press
ISBN: 0262193787

In the Mind of the Machine
Written by Prof. Kevin Warwick
Published by Random House UK
ISBN: 0099703017

Fuzzy Logic
Written by Daniel McNeill & Paul Freiberger
Published by Simon & Schuster
ISBN: 0671875353

Introducing Artificial Intelligence
Written by Henry Brighton & Howard Selina
Published by Icon Books
ISBN: 1840464631

Understanding Artificial Intelligence
From the Editors of Scientific American
Published by Warner Books Inc.
ISBN: 0446678759

Creation: Life and How to Make It
Written by Steve Grand
Published by Butler & Tanner Ltd.
ISBN: 0297643916

Simulacra and Simulation
Written by Jean Baudrillard
Translated by Sheila Faria Glaser
Published by University of Michigan Press
ISBN: 0472065211

Simulacra and Simulation: The Matrix Phenomenon
Written by J.B. Webb-Benjamin
Published by North Warwickshire & Hinckley Colleges Online Resources

Man and His Symbols
Written by Carl Jung
Published by Dell
ISBN: 0440351839

The Origin of Species by Means of Natural Selection: Or the Preservation of Favoured Races in the
Struggle for Life
Written by Charles Darwin
Published by Penguin Classics
ISBN: 0140432051

The Illuminati Formula Used to Create an Undetectable Total Mind Controlled Slave
Written by Fritz Springmeier & Cisco Wheeler

Artificial Intelligence: Realising the Ultimate Promises of Computing
Written by David L. Waltz

KUBARK Counterintelligence Interrogation Document
Numerous authors
http://kimsoft.com/2000/kub_i.html

Webography

WikiSearch_A
http://en.wikipedia.org/wiki/Game_artificial_intelligence

WikiSearch_B
http://en.wikipedia.org/wiki/Alan_Turing

WikiSearch_C
http://en.wikipedia.org/wiki/Fifth_generation_computer

WikiSearch_D
http://en.wikipedia.org/wiki/JSTARS

WikiSearch_E
http://en.wikipedia.org/wiki/Genetic_programming

WikiSearch_F
http://en.wikipedia.org/wiki/Artificial_neural_network

WikiSearch_G
http://en.wikipedia.org/wiki/Three_Laws_of_Robotics

Lionhead Studios
http://www.lionhead.com

The Register
23rd June 2009
http://www.theregister.co.uk/2009/06/23/darpa_physical_intelligence/

GoogleNews_Search
4th April 2009
http://www.google.com/hostednews/afp/article/ALeqM5j1F1VEHktMpXSaXrLUgr4coIDfPg

Machine Ethics
8th April 2009
http://www.fastcompany.com/blog/jamais-cascio/open-future/machine-ethics
