
Artificial Companions in Society: Perspectives on

the Present and Future

A Forum held at the Oxford Internet Institute (University of Oxford, 1 St Giles, Oxford OX1
3JS, UK) on Friday, 26 October 2007.

Session 1: Philosophical conditions for being a Companion


Stephen Pulman (University of Oxford Comlab)
Margaret Boden (University of Sussex, Cogs)
Luciano Floridi (University of Oxford, University of Hertfordshire)
Session 2: Building a first Companion
Aaron Sloman (University of Birmingham, CS)
Catherine Pelachaud (Université de Paris 8, INRIA)
Elisabeth André (Universität Augsburg)
Alan Newell (University of Dundee)
Daniela M. Romano (University of Sheffield)
Session 3: Lovers, slaves, tutors or personal trainers?
Will Lowe (University of Nottingham)
David Levy (Intelligent Toys, London)
Joanna Bryson (University of Bath)
Chris Davies & Rebecca Eynon (University of Oxford, Education)
Session 4: Companions as ways of understanding ourselves and society.
Kieron O'Hara (University of Southampton, CS)
Alan Winfield (University of the West of England, Robotics)
Roddy Cowie (Queen's University, Belfast)
Alex Taylor & Laurel Swan (Microsoft, Cambridge & Brunel University)
Joanie Gillespie (author Cyberrules)
Contents

Towards more Sensitive Artificial Companions: Combined Interpretation of Affective and Attentive Cues
Elisabeth André
Conversationalists and Confidants
Margaret A. Boden
Robots Should Be Slaves
Joanna J. Bryson
Companionship is an emotional business
Roddy Cowie
Some implications of creating a digital companion for adult learners
Chris Davies and Rebecca Eynon
Philosophical Issues in Artificial Companionship
Luciano Floridi
Will Artificial Companions Become Better Than Real Ones?
Joanie Gillespie
Falling in Love with a Companion
David Levy
Identifying Your Accompanist
Will Lowe
Consulting the Users
Alan Newell
Arius in Cyberspace: The Limits of the Person
Kieron O'Hara
Socially-aware expressive embodied conversational agents
Catherine Pelachaud
Towards necessary and sufficient conditions for being a Companion
Stephen Pulman
The Look, the Emotion, the Language and the Behaviour of a Companion at Real-Time
Daniela M. Romano
Requirements and Their Implications
Aaron Sloman
Notes on Intelligence
Alex S. Taylor and Laurel Swan
On being a Victorian Companion
Yorick Wilks
You really need to know what your bot(s) are thinking
Alan FT Winfield

Towards more Sensitive Artificial Companions:
Combined Interpretation of Affective and Attentive Cues

Elisabeth André
Universität Augsburg, Eichleitnerstr. 30, D-86159 Augsburg, Germany

The concept of an embodied conversational agent promotes the idea that humans, rather than interacting with tools, prefer to interact with an artefact that possesses some human-like qualities, at least in a large number of application domains. If it is true, as Reeves' and Nass' Media Equation suggests, that people respond to computers as if they were humans, then there is a good chance that people are also willing to form social relationships with virtual personalities. That is, an embodied conversational agent is not just another interface gadget. It may become a companion and even a friend to the user.
A prerequisite for this vision to come true is that the artificial companion keeps the user engaged in an interaction over a longer period of time. According to Sidner and colleagues (2005), engagement is the process by which two (or more) participants establish, maintain and end their perceived connection during interactions they jointly undertake. The correct interpretation of engagement cues, and the appropriate response to them, is a necessary prerequisite for the success of an interaction. In this paper, we focus on the user's attentive and motivational status as an indicator of their level of engagement.
A number of researchers focus on eye gaze as an important means of indicating attention in a dialogue; see Vertegaal et al. (2001). While the listener employs eye gaze to indicate that s/he is paying attention to the speaker, the speaker monitors the listener's eye gaze to find out whether s/he is still interested in continuing the conversation. Yu and colleagues (2004) make use of acoustic features in speech to detect the user's engagement in a conversation. Emotional cues, such as the user's facial expressions and speech, provide information on the user's motivational state. For instance, if a user's voice sounds bored, the system should try to regain his or her interest. To achieve emotional sensitivity, research has concentrated on a large variety of verbal and non-verbal communicative cues. Such cues include postures and facial expressions, see Kapoor and Picard (2005); acoustic and prosodic features of speech, see Ai and colleagues (2006); as well as physiological signals, see Bosma and André (2004).
In my talk, I will discuss a number of experiments we conducted to derive guidelines for the development of artificial conversational partners that are aware of the user's attentive and motivational state. In particular, I will address the following three challenges:
How to recognize and respond to attentive and motivational cues in real time?
Online processing is a necessary prerequisite for the realization of artificial companions that analyze and respond to the user's attentive and affective state while he or she is interacting with them. To achieve natural dialogue behaviour, adequate response times are indispensable, since users might get confused if there is no direct reaction from the artificial companion. As a consequence, the recognition process has to be very fast in order to ensure that the artificial companion is not perceived as awkward. In addition, the recognition of attentive and motivational cues has to be synchronized with the generation of an appropriate response.
How to distinguish decreasing engagement from grounding problems?
Attentive and emotional cues may not only indicate decreasing interest, but may also refer to the propositional content of a conversation. The fact that the user looks at an object other than the one the speaker means does not necessarily imply that the user is no longer engaged in the conversation. Instead, it may indicate a grounding problem, i.e. speaker and listener have conflicting views regarding which object the conversation is about. A voice portraying a negative emotion while looking at an object may express that the user has a negative attitude towards the object, or it may indicate low motivation to continue the conversation. A great challenge is to distinguish between cues that refer to the propositional content of a conversation and cues that provide hints about the user's attentive and motivational states.
How to find the right level of sensitivity?
Finally, the question arises of how and when to respond to the user's attentive and motivational state, i.e. how to find the optimal level of sensitivity. In particular, there is the risk of overreacting to the user's state; see Eichner et al. (2007). For instance, an agent that responds to each assumed decrease in attention will be perceived as rather obtrusive. In the end, users might feel observed and intentionally change their behaviour in order to prevent the agent from being able to analyze their state. There is also the danger that the agent's behaviour appears repetitive rather than genuinely caring (a minimal illustrative sketch follows below).
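To make the real-time and sensitivity requirements concrete, the following minimal sketch shows one way gaze and prosodic cues might be pooled into an engagement estimate, with a threshold and a minimum gap between interventions to avoid the obtrusive, repetitive behaviour discussed above. It is an editorial illustration, not the system used in our experiments: the class name, the cue weighting and the thresholds are assumptions, and the two recognizers feeding it are hypothetical placeholders.

```python
import time
from collections import deque


class EngagementMonitor:
    """Toy fusion of attentive (gaze) and motivational (voice) cues.

    Assumed inputs, arriving from separate recognizers at roughly 10 Hz:
      gaze_on_agent: 1.0 if the user's gaze is on the agent or the object of
                     talk, else 0.0
      voice_arousal: 0.0 (flat, bored) to 1.0 (lively), from a prosody classifier
    """

    def __init__(self, window_s=5.0, low_threshold=0.35, min_gap_s=20.0):
        self.window_s = window_s            # how much recent evidence to pool
        self.low_threshold = low_threshold  # below this, engagement counts as low
        self.min_gap_s = min_gap_s          # do not intervene more often than this
        self.samples = deque()              # (timestamp, fused score)
        self.last_intervention = 0.0

    def update(self, gaze_on_agent, voice_arousal, now=None):
        """Add one observation and return an action suggestion, or None."""
        now = time.time() if now is None else now
        score = 0.6 * gaze_on_agent + 0.4 * voice_arousal  # ad hoc weighting
        self.samples.append((now, score))
        # drop samples that have fallen out of the time window
        while self.samples and now - self.samples[0][0] > self.window_s:
            self.samples.popleft()
        mean = sum(s for _, s in self.samples) / len(self.samples)
        # respond only to sustained low engagement, and not too often
        if mean < self.low_threshold and now - self.last_intervention > self.min_gap_s:
            self.last_intervention = now
            return "re-engage"   # e.g. change topic or ask a question
        return None


# Hypothetical use inside the dialogue loop:
#   monitor = EngagementMonitor()
#   action = monitor.update(gaze_on_agent=0.0, voice_arousal=0.2)
#   if action == "re-engage": ...
```

The point of the sketch is simply that the fusion, the thresholding and the rate-limiting all have to run online, in step with response generation; any real system would replace the fixed weights and thresholds with learned, user-specific values.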

References

Ai, H., Litman, D. J., Forbes-Riley, K., Rotaru, M., Tetreault, J., & Purandare, A. (2006). Using system and user performance features to improve emotion detection in spoken tutoring dialogs. In INTERSPEECH-2006, Pittsburgh, PA.
Bosma, W., & André, E. (2004). Exploiting emotions to disambiguate dialogue acts. In IUI '04: Proceedings of the 9th International Conference on Intelligent User Interfaces, New York, NY, USA, ACM Press, 85-92.
Eichner, T., Prendinger, H., André, E., & Ishizuka, M. (2007). Attentive presentation agents. In The 7th International Conference on Intelligent Virtual Agents (IVA), pages 283-294.
Kapoor, A., & Picard, R. W. (2005). Multimodal affect recognition in learning environments. In MULTIMEDIA '05: Proceedings of the 13th Annual ACM International Conference on Multimedia, New York, NY, USA, ACM Press, 677-682.
Reeves, B., & Nass, C. (2003). The Media Equation: How People Treat Computers, Television, and New Media Like Real People and Places, University of Chicago Press.
Sidner, C. L., Lee, C., Kidd, C. D., Lesh, N., & Rich, C. (2005). Explorations in engagement for humans and robots. Artificial Intelligence, 166, 140-164.
Vertegaal, R., Slagter, R., van der Veer, G., & Nijholt, A. (2001). Eye gaze patterns in conversations: There is more to conversational agents than meets the eyes. In CHI '01: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, New York, NY, USA, ACM Press, 301-308.
Yu, C., Aoki, P. M., & Woodruff, A. (2004). Detecting user engagement in everyday conversations. In 8th International Conference on Spoken Language Processing (ICSLP 2004), Jeju Island, Korea, ISCA, vol. 2, 1329-1332.

Conversationalists and Confidants

Margaret A. Boden
University of Sussex

If there were an abstract for this very short paper, it would be this: "Conversationalists,
maybe -- but confidants?"
I assume that computer companions (CCs) could hold conversations, of a sort, with human
beings (HBs). Their use of language will be crude compared with that of (most) HB users.
But they will be able to broaden/enrich the conversation not only by providing
facts/information but also by using associative memories (calling sometimes on the user's autobiographical details and sometimes also on the Internet) to say "Oh, that reminds me of ....." etc. Whether they'll always avoid an ELIZA-like idiocy in response to the HB's remarks is another matter: I suspect not.
I assume, too, that some CCs will be able to exploit a very wide range of recent advances in
AI and VR. These include, for example, being able to speak a natural language such as
English with an appropriate local accent (Fitt and Isard 1999; Fitt 2003); being able to lipread to aid their understanding (only sixteen lip-positions are needed to represent English speech: Lucena et al. 2002); and being able to detect, and seemingly to express, basic emotional states. CCs equipped with attractive human-like faces might even be able to adjust them to suit the female user's time of the month (Penton-Voak et al. 1999). (Other potentially
relevant advances in robotics and VR are listed in Boden 2006: 13.vi.d.)
I also assume that, as predicted long ago (Frude 1983), people will be having sex with
robots, and with VR systems, in various ways: indeed, this has already started to happen.
Some HBs may even become personally obsessed with, or even infatuated by, these 21st-century sex toys. But having sex with CCs, with or without the obsessional aspect, is very
different from being in love with them, still less engaging in a relationship of personal love
(Fisher 1990). So I have grave reservations about talk of "love" between HBs and computers
(cf. Levy forthcoming).
What's much more problematic than using CCs as conversationalists, or even as sex objects (sic), is whether they can be, or should be, used in lieu of confidants.
A confidant (as opposed to a conversationalist) is someone to whom one relays personal,
sometimes highly sensitive, information/emotions, in the expectation of empathy, and
sympathy, in response. Sympathetic words could be produced by the CC (sometimes, in
appropriate circumstances), but these aren't enough. Do we really think that a CC could offer
us genuine sympathy?
Often, even in counselling situations, what the HB wants is an acknowledgment that they
have suffered, persevered, survived... etc. The acknowledgment in question here rests on a
shared understanding of the human condition. Do we really think that it could be offered by a
CC?
A "secret" is something that one HB wants to keep from some/all other HBs. To tell an HB a
secret is to trust them (to assume that they won't spread it to other HBs), and to compliment
them by making them a confidant in this way.
One can hardly "compliment" a CC by telling it a secret. So the sense in which one might tell
one's secrets to a CC is very different from that in which one can tell them to a fellow HB.
Most secrets are secret from some HBs but not others. If two CCs were to share their HB-users' secrets with each other, how would they know which other CCs (i.e., potentially, users)
to trust in this way? The HB could of course say "This is not to be told to Tommy"...... but
usually we regard it as obvious that our confidant (sic) knows what should not be told to
Tommy -- either to avoid upsetting Tommy, or to avoid upsetting the original HB. How is a
CC to emulate that?
The HB could certainly say "Tell this to no-one" -- where "no-one" includes other CCs. But
would the HB always remember to do that?
How could a secret-sharing CC deal with family feuds? Some family websites have special functionalities to deal with this: e.g. Robbie is never shown input posted by Billie. Could similar, or more subtle, functionalities be given to CCs?
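By way of illustration only, here is a toy sketch of the "Robbie is never shown input posted by Billie" style of rule, treated as default-deny disclosure: each confidence carries an explicit list of parties it may be repeated to. The class and the names are hypothetical, and merely gesture at the kind of functionality in question, not at how a real CC would or should implement it.

```python
class ConfidenceStore:
    """Toy model of per-secret disclosure rules for a computer companion (CC).

    Default policy is deny: a confidence is repeated only to recipients the
    confiding HB explicitly listed.
    """

    def __init__(self):
        self._secrets = []  # list of (text, allowed recipients)

    def confide(self, text, share_with=()):
        """Store a confidence; by default it may be shared with no one."""
        self._secrets.append((text, frozenset(share_with)))

    def repeatable_to(self, recipient):
        """Return only the confidences this recipient is allowed to hear."""
        return [text for text, allowed in self._secrets if recipient in allowed]


# Hypothetical usage: Billie's CC never passes this on to Robbie.
diary = ConfidenceStore()
diary.confide("Billie is planning a surprise party", share_with={"Granny"})
print(diary.repeatable_to("Robbie"))   # [] -- nothing may be passed on
print(diary.repeatable_to("Granny"))   # ['Billie is planning a surprise party']
```

Of course, the whole difficulty raised in this paper is precisely that HBs rarely make such lists explicit; the sketch only shows how crude the explicit version is.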
These problems would arise repeatedly if the CC enabled the HB to generate a personal
memoir, or autobiography. The more the CC were seen as a "confidant", the more sensitive
information would creep into the autobiography. And the more convincing the CC was as a
conversationalist, the more dangerous this would be. Overly human-seeming responses on
the part of the CC could seduce people into saying autobiographical (or other) things that they'd
really prefer to keep private.
If that happened, and the CC shared the secret with other CCs, and so ultimately with
inappropriate HBs, should we just shrug and say of the overly-confiding HB: "Idiot! She
should have known better!" ?
(Question: if you were to sit down and write your autobiography, aren't there things that you'd leave out? -- and not just because they're boring!)
HBs' secrets include not only highly sensitive stuff but also gossip. (Often, this is to be shared only with some people -- therefore, to be made available by one's own CC only to some other CCs.) Gossip -- not necessarily malicious -- is a human need, which takes up a huge amount of our time. That's largely why solitary confinement is such a nasty punishment. CCs might well be programmed to engage in gossip, and even to need it, in the sense that hours of conversation without any gossip would lead them to introduce some. But would they enjoy it as we do? Would they lead the HB user to imagine that they enjoyed it? If not, then the HB concerned wouldn't enjoy it him/herself as much as usual.
CCs are recommended, in particular, for lonely (including elderly) people. But would we be short-changing them horribly by offering them CC "confidants"? Conversationalists, OK. But
confidants?
In general, there are five questions we should ask:
1. Could a CC really do/feel X? (Philosophical.)
2. Could a CC be made to appear to do/feel X? (Largely technological, but needs
psychological theory too.)
3. Would the HB user believe the CC could do/feel X? (Socio-psychological.)
4. Would we want the HB to believe this? (Moral/pragmatic.)
5. If they did, what effect would this have on their relations with other HBs? (Socio-
psychological.)

References

Boden, M. A. (2006), Mind as Machine: A History of Cognitive Science (Oxford: Clarendon Press).
Fisher, E. M. W. (1990), Personal Love (London: Duckworth).
Fitt, S. (2003), Documentation and User Guide to UNISYN Lexicon and Post-Lexical Rules. Technical Report, Centre for Speech Technology Research (Edinburgh: University of Edinburgh). Available for download, together with lexicon and software, at: http://www.cstr.ed.ac.uk/projects/unisyn/unisyn_release.html
Fitt, S., and Isard, S. D. (1999), Synthesis of Regional English Using a Keyword Lexicon, in Proceedings of Eurospeech 99, Budapest, 823-826.
Frude, N. (1983), The Intimate Machine: Close Encounters with the New Computers (London: Century).
Levy, D. (forthcoming), Love and Sex with Robots: The Evolution of Human-Robot Relationships (London: Duckworth).
Lucena, P. S., Gattass, M., and Velho, L. (2002), Expressive Talking Heads: A Study on Speech and Facial Expression in Virtual Characters, Revista SCIENTIA (São Leopoldo, Brazil: Unisinos). Available electronically from: http://www.visgraf.impa.br
Penton-Voak, I. S., Perrett, D. I., Castles, D. L., Kobayashi, T., Burt, D. M., Murray, L. K., and Minamisawa, R. (1999), Menstrual Cycle Alters Face Preference, Nature, 399: 741-742.

Robots Should Be Slaves

Joanna J. Bryson
University of Bath, United Kingdom, BA2 7AY
Konrad Lorenz Institute for Evolution and Cognition Research
Adolf Lorenz Gasse 2; A-3422, Altenberg, AUSTRIA
http://www.cs.bath.ac.uk/~jjb

This is a position paper for the Oxford e-Horizons Forum on Artificial Companions in Society:
Perspectives on the Present and Future. Here I focus on the ethics of building and using
non-human companions. The topic of the forum is digital assistants, not conventional
robots, but from both the pragmatic and ethical perspective I consider these identical. A robot
is any situated agent that transforms perception to action. If a digital assistant listens and
talks to a human, it is a robot: it is changing the world. This becomes even more apparent if the assistant uses the Internet to actively book hotels or order pizza. Web services are just an extension of the agent's physical and intellectual capacities (Bryson et al., 2003).

Why we get the metaphor wrong

People really want us to build AI to which they owe ethical obligation. Helmreich (1997)
suggests this is because Artificial Life researchers tend to be middle-aged males with a
fixation on an individual capacity for creating life. This theory does not, however, explain the strong desire of ordinary citizens to believe that robots should be accorded rights. I was
astonished during my own experience of working on a (non-functional) humanoid robot in the
mid 1990s by how many well-educated colleagues asserted immediately (without prompting)
that unplugging such a robot would be unethical.
Less anecdotally, we can see in popular culture a strong thread of heroic, conscious robots
examining the worth of their own lives (e.g. the original Star Wars (A New Hope), Blade Runner, Bicentennial Man, and the film A.I.). While artists and educators may assert that such works are examinations of humanity, not robotics, the average viewer seems comfortable with the conclusion that anything that perceives, communicates and remembers is owed the title "human".
Bryson and Kime (1998) have argued that either deep concern for, or fear of, the well-being of AI results from a misattribution of human identity, and therefore of our empathy. Our contemporary culture believes (falsely) that language is identical with human intelligence. Further, the term "conscious" (by which we mostly seem to mean "mental state accessible to verbal report") is heavily confounded with the term "soul" (meaning roughly the aspect of an entity deserving ethical concern). Of course, neither consciousness nor soul is well defined; rather, both are concepts formed by accretion over millennia as we have attempted to reason about, analyse or at least describe ourselves.
There may be more to our attitude towards robots than this. The last few centuries have
been revolutionary in the domain of human rights, as many cultures around the planet have
come to abhor not only slavery, but any sort of class distinction. Given the novelty (and
fragility) of egalitarian social organisation, it might be considered natural for people to err on
the side of caution. However, I still think this might be at best an explicit rationale of a poorly-
understood basic motivation, which may indeed, as Helmreich (1997) suggests, be a desire
to, as a culture, (pro)create a new, more perfect set of beings.

Why slavery is right

What I advocate here is that the correct metaphor for robots is slavery. Robots are wholly owned and designed by us; we determine their goals and desires. A robot's brain should be backed up continuously, and its body mass-produced. No one should ever need to hesitate in deciding whether to save a human or a robot from a burning building. I've argued elsewhere (Bryson, 2000) that we have an ethical obligation to each other not to make robots we owe ethical obligations to.
But even if robots are actually slaves, why then (in the unlikely event they would be capable
of responding) should we not treat robots like peers?
Honestly, I do not think robots should be treated as slaves or peers. If we become capable of creating truly humanoid robots, I think they should be treated more or less as servants are treated, with polite detachment.
There is a matter of pragmatics. I am not a sociologist, but I suspect that the reason relations between classes tended to be restricted was simply efficiency. People with a lot of money tend to spend a great deal of time in business and social networking. They also tend to have large families. If they engaged with their staff in the same way as they did with their own families, they would have that much less time to manage their personal and business relationships. With humans, of course, the same holds in reverse for any staff who have families and other societal obligations outside of work. They too might want to be treated professionally, in well-circumscribed ways. External relationships should not be an issue with robots. However, to date, well-circumscribed operating environments, procedures and owner expectations have certainly been necessary for owner-robot relations.
Back to humans, of course, there is also the problem of power relationships. Unless a slave is freed or a servant married (that is, unless the professional relationship terminates), the power structure involved prevents true peer relationships. A faux peer relationship might often be pleasant for either side (particularly if they are lonely), but it would periodically be violated, presumably causing emotional distress on both sides. A similar situation exists with robots. A human who wants to be friends with a robot will consistently be disappointed in the level of intimacy possible. Yet already robot manufacturers exploit this desire, for example by telling us that if our robot dogs seem boring, or behave inappropriately or inattentively, it is our own fault for not training them appropriately.

Give the People What They Want?

My main concern is not with pragmatics but with ethics. Why do people want robots to be
peers? Is it perhaps because they want a peer that will never argue, or at least never be
smug when it wins? A fairy god-(er)-parent smarter than themselves that they can ultimately
boss around and pen up like they do their pet dogs? If so, such narcissism is probably
mostly harmless (and perhaps a good thing for the dogs.)
But what about all the time and attention that goes into communicating with a robot
companion? Perhaps our society has an oversupply of human care, concern and attention
that we can expend this way, but I am worried that this is not the case. I am quite concerned
that, because of the overidentification mentioned earlier, many people may easily be led
into devoting far too many resources and far too much personal and emotional investment
into relationships with robots.
Dunbar (1997) postulates that primates are driven to spend a certain amount of time
socialising. This is an evolved proclivity which ensures we maintain an adaptive group size
for defending ourselves from predation while simultaneously ensuring we have sufficient time
left over for meeting other basic needs. Non-human primates subject themselves to
significant stress including physical injury in order to stay within a social group. Human
society has evolved relying on this stable tendency. Notably, democracy assumes engaged,
informed citizens.

What I am concerned about is that, using first recorded, then transmitted media, and now AI,
we are able to offer people better and better stress-free simulations of human company. The
cost of this is the cost of billions of human brains not being engaged in discussing and
solving the problems that face their families, villages, states and worlds.
Of course it's possible we don't need billions of brains working on these problems. Possibly it would be sufficient for a small elite to focus on the real problems while everyone else is entertained. Possible, if one ignores the perennial problems of making certain the elite are actually the people best capable of problem solving, or of having them hold concern for the interests of the entire population (and planet). I will make a prediction about such an elite, though. I would be willing to bet that, for the elite, robots would be slaves, who merely met their masters' goals as expeditiously as possible. Their masters would be engaged in learning, diplomacy, negotiation, business, reproduction, and everything else elites have always done. And the robots would be helping them, not distracting them.
If we prefer an egalitarian agenda, then I think we need to make everyone's robot a slave.
Entertainment and companionship should be left to (perhaps robot-facilitated) peer
interactions.

References

Bryson, J. J. (2000). A proposal for the Humanoid Agent-builders League (HAL). In Barnden, J., editor, AISB'00 Symposium on Artificial Intelligence, Ethics and (Quasi-)Human Rights, pages 1-6.
Bryson, J. J. and Kime, P. (1998). Just another artifact: Ethics and the empirical experience of AI. In Fifteenth International Congress on Cybernetics, pages 385-390.
Bryson, J. J., Martin, D., McIlraith, S. I., and Stein, L. A. (2003). Agent-based composite services in DAML-S: The Behavior-Oriented Design of an intelligent semantic web. In Zhong, N., Liu, J., and Yao, Y., editors, Web Intelligence, pages 37-58. Springer.
Dunbar, R. (1997). Grooming, Gossip, and the Evolution of Language. Harvard University Press.
Helmreich, S. (1997). The spiritual in artificial life: Recombining science and religion in a computational culture medium. Science as Culture.

Companionship is an emotional business

Roddy Cowie
Queen's University, Belfast

This is written from the perspective of someone who was trained as a psychologist, and has
been working for a decade on emotion-oriented/affective computing. That background
highlights two kinds of issue: how emotion enters into the companion scenario, and how
computing can relate to emotion. In both areas, there is a difference between the intuitions of
people who are not deeply involved, and the realities as they appear to people working in the
area.
The goal of this paper is to consider how the realities of emotion and emotion-oriented
technology impact on the prospects for artificial companions. The concern behind it is that
otherwise, we may misjudge both the prospects and the risks. In particular, the ability to address
the emotional side of companionship may play a key part in acceptance; and the necessary
resources, conceptual as well as technical, cannot be taken for granted. We should be
concerned about inserting companions into emotionally sensitive roles without engineering
them to take that into account.
Emotion has multiple senses in everyday use. In one, it refers to brief, intense episodes (I
have called these emergent emotions). In another, it refers to something that colours most of
life, except for brief times when we are unemotional (I have called that pervasive emotion).
Emergent emotions may or may not be important for artificial companions; pervasive emotion
certainly is. It is presumably related to the biologically rooted ebb and flow of feeling that
tends to be called affect, but nobody knows how. Engaging with emergent emotions and
affect is not dealing with pervasive emotion.
Pervasive emotion is bound up with the feeling that things and people matter at all (caring);
that they are valenced (positively or negatively); that we or they are in control of things; and
that we have grasped or failed to grasp what is going on. It also orchestrates rich,
unpremeditated exchanges of signals between parties; draws us towards, or repels us from,
certain courses of action; and shapes what we attend to and remember. Most of these have
counterparts in the domain of propositional cognition. However, felt sense of significance,
valence, power, and so on are not the same as our intellectual sense of the same things, and
the two are not necessarily in agreement. Engaging with pervasive emotion is engaging with
the fact that this felt layer of appraisal and communication exists, and matters.
Human-computer interfaces were traditionally oriented towards people who could be
expected at least temporarily to adopt a particular, more or less unemotional stance to the
task in hand. Research on artificial companions cannot rely on that strategy, for several
reasons. It is an issue how much of life the device impinges on, because it is hard to be
constantly required to switch into unemotional mode. The populations who need artificial
companions are likely to find it harder than average to adopt and sustain unemotional
stances. Communication is at the core of both points. Spontaneous communication is
emotionally coloured, and producing careful communication that contemporary machines
can handle is neither effortless nor possible for everyone. Not least, the companion's goals are likely to be bound up with emotion: as much to do with making somebody feel happy and confident as with accomplishing practical tasks economically.
It should be a given that research on artificial companions is sophisticated about issues like
these. Ignorance about emotion is not excusable. That is not trivial, because at present there are very few sources that the area can turn to for sophisticated information about these issues.
It is a very difficult challenge to develop technologies relevant to engaging with emotion, and
public understanding of the state of the art is quite limited. Among other things, that means
that people may be worried or enthusiastic about developments that are very unlikely to
occur, and unaware of problems or opportunities that current technology brings quite close.
Among the commonest misunderstandings is the idea that engaging with emotion is
necessarily about devices that mimic human emotional life. The basic foundation is design
that takes human emotionality into account in a systematic way. That is an unquestioned part
of many areas (entertainment, clothing, the car industry, and so on). It is quite intriguing that
it should play so little part in so much of computing, and still more so that there is active
resistance to bringing it into play. That basic level of attention to emotion cannot be left out of
work on companions.
An almost equal and opposite fallacy is that we know there is a fundamental obstacle to the
acceptance of devices that are too human-like: the memorably named "uncanny valley". On scrutiny, though, what is known is just that some very human-like devices are very disturbing (in the classic study, a prosthetic arm and a zombie). It is not known whether
these are signs of a long valley lying in wait for any approach to human-like qualities, or
isolated potholes that a sensible team will steer round. After all, a zombie would perhaps not
be the first choice of model for most teams setting out to build a companion.
Overall, convincing evidence on the emotional appeal and/or repulsion of various kinds of
device is in short supply. That is partly because it is very hard to assess the emotional impact
of a product (be it a companion or anything else) that engages with people in a complex way
over long time periods. One of HUMAINE's workpackages has worked extensively on the
problem, both developing new techniques and reviewing the available options at length.
The human-like emotional abilities that a companion might incorporate divide into three
areas: detecting emotion-related signals that a person gives, synthesising its own emotion-
related signals, and carrying out emotion-related planning and reasoning.
Probably the most developed of these areas is synthesis of visual signals. Starting from
representations that specify the internal states and intentions to be expressed, artificial
agents can generate appropriate facial signals, eye movements, hand gestures, and body
movements in increasingly co-ordinated ways. Paradoxically, this is the ability that it is
hardest to imagine companion-type systems using. On the other hand, synthesis of speech
signals has proved much more difficult than might have been expected. Techniques that
allow good basic quality do not give the flexibility to synthesise the movements needed to
express emotion, and vice versa. Synthesis of actions that express emotion (banging a cup down or hugging someone) is rarely considered.
Detection is effectively a step behind. In synthesis, the signs associated with a state can unfold in a structured way over time. Recognition is currently rooted in a synchronous paradigm: signs in a given interval are mapped onto a state in the same interval. That may be appropriate for strong emergent emotions; it is a very coarse tool for monitoring pervasive emotion. We take it for granted that a person can detect mild emotional colouring in another person's demeanour: irritation, distress, interest or lack of it. We take it for granted that even animals have some sensitivity to states like that. There would be considerable implications for the way a companion could function if it was not able to detect those signals, even when the person involved believed, reasonably, that he or she was making them very clear.
The need for emotion-related planning and reasoning follows on very directly. If a companion can detect boredom, irritation, happiness, and so on, it then has to decide what to do about it. Simple prescriptions like "match the state" go some way, but not very far. The human default is to match some emotions (such as happiness), but not others (such as grief). In a case like anger, it matters crucially what the person is angry with. The response to an unwelcome emotional state is often to find something that might alleviate it, and here another kind of reasoning is needed, probably involving personalised learning about what might soothe a particular person's irritation or sorrow. Note that it is not obvious whether these responses should involve signals of emotion as such. It might well be effective to suggest an appropriate activity in a neutral way.
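As an editorial illustration of the response-selection step just described, here is a minimal sketch. The state labels, the policy table and the alleviation suggestions are assumptions introduced for illustration, not HUMAINE outputs or the behaviour of any existing system; a real companion would, as the text says, learn the alleviation options per user.

```python
# Response strategies: match some states, try to alleviate others, and for
# anger first find out what the anger is directed at.
MATCH = "mirror the state"
ALLEVIATE = "offer something that may help"
CLARIFY = "find out what the state is directed at"

POLICY = {
    "happiness": MATCH,      # human default: match positive states
    "grief": ALLEVIATE,      # but not others, such as grief
    "boredom": ALLEVIATE,
    "irritation": ALLEVIATE,
    "anger": CLARIFY,        # it matters crucially what the person is angry with
}

# Illustrative, person-independent stand-ins for what would be learned per user.
ALLEVIATE_SUGGESTIONS = {
    "boredom": "suggest a favourite activity, in a neutral way",
    "irritation": "offer to pause or change topic",
    "grief": "acknowledge, then offer quiet company",
}


def respond(detected_state):
    """Map a detected pervasive-emotion label to a response strategy."""
    strategy = POLICY.get(detected_state, "do nothing special")
    if strategy == ALLEVIATE:
        return ALLEVIATE_SUGGESTIONS.get(detected_state, "offer a neutral suggestion")
    return strategy


print(respond("happiness"))   # mirror the state
print(respond("anger"))       # find out what the state is directed at
```

Even this toy table makes the point in the paragraph above: none of the responses needs to involve a simulated emotion in the agent; a neutral, well-chosen suggestion may be all that is required.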

It is sometimes proposed that that kind of planning must involve a degree of empathy, which implies a simulation of emotion within the agent. Two points should be made about that. Technically, it is not at all obvious that it is true. Propositional (BDI) frameworks are being adapted to provide a degree of emotional reasoning, and people who work with BDI networks are not likely to attribute much emotion to them. Experts should resist the suggestion that engaging with people's emotions means creating companions that have their own. The second point, though, is that lay people may well react very differently, and have a compelling sense that a companion is empathising with them when it offers just the right suggestion to alleviate a sad or angry mood.
That leads into a vexed area, which is the ethics of engagement with emotion. HUMAINE has undertaken to provide a white paper on the subject which tries (as this paper has done) to address the issues that may genuinely become real. Three deserve particular mention. The first is deception. What is the ethical status of building a device whose behaviour signals emotions that it does not, in any straightforward sense, actually feel? The problem is redoubled if the person involved moves from assuming, perhaps tacitly, that the companion does have warm emotions towards him/her, to a sense that he/she has been betrayed. The second issue is the "lotus eater" problem: making a companion too emotionally engaging risks eroding the person's motivation for engaging with human beings, who, notoriously, are not always emotionally engaging, and therefore increasing isolation instead of reducing it.
The third issue hinges on the likely technical limits of detection. For the foreseeable future, machines' ability to recognise emotion-related attributes will be modest. Misattributions will happen. There is a real problem if these misattributions influence what happens to the person: for instance, if false attributions of negative mood feed into the medical system, false attributions of hostility feed into a care system, and so on. These are special cases of the problems that arise with SIIF systems (semi-intelligent information filters). They are not unique to systems that attempt to detect emotion, but emotion detection raises them very acutely because people are so prone to underestimate the complexity of detecting emotion.
In conclusion, it is crucial to recognise that at least most of the issues raised above are not created by engineers choosing (Frankenstein-like) to create an artefact with pseudo-emotional qualities. They arise willy-nilly when artefacts are inserted into situations that are emotionally complex, to perform tasks to which emotional significance is normally attached. There is a moral obligation to attend to them, so that if we intervene in such situations, we do it in ways that make things better, not worse.
The HUMAINE portal, http://emotion-research.net/ contains reports which provide detailed
information on and discussion of issues covered in this paper.

Some implications of creating a digital companion
for adult learners

Chris Davies and Rebecca Eynon


University of Oxford

The following observations emerge out of our particular interest in developing an educational
application of the digital companion idea. It seems to us that such a technology might have a
great deal to offer to individuals, especially adults, who wish to direct their own learning,
especially those who lack the opportunity or confidence to engage in organised learning
opportunities (such as through evening classes). We have good reason to believe that there
are a lot of people in our society who would welcome the opportunity to become more
successful learners, and possibly improve their lives in the process, but who are unable to or
do not wish to engage in organised learning. It is reasonable to suppose that some form of
digital learning companion could go some way towards compensating for lack of appropriate
human support, and in some cases could even prove to be more effective.
Two distinct challenges need to be addressed in respect of this specifically educational
version of a digital companion, in addition to the fundamental challenge of designing and
producing something that can engage convincingly in conversations with users, and build
meaningful memory of them in the process. The first of these distinctively educational
challenges concerns the cognitive aspect of learning: can a digital tool actually support and
augment processes of learning? The second concerns the strong current ideological
emphasis on the social nature of learning: is the co-construction of learning in partnership
with a digital device an acceptable alternative to learning in collaboration with other humans?
We shall address each of these in turn.

1. The cognitive aspect of learning

Learning involves the processing and incorporation of new information by someone into what
they already know and understand so that their existing knowledge is expanded and
changed (processes which Piaget referred to in terms of accommodation and assimilation).
The problem is that the conscious mind is not really where learning happens, and
enthusiasm to learn cannot in itself solve the problem it has created in the first place by
choosing to learn. The conscious mind can use tricks upon the less accessible parts of the
mind (repeating things over and over before sleep, drawing concept maps or creating analogies), but there is no escaping the fact that we all find it difficult to process new
knowledge into our own minds at times, and these difficulties can prove insurmountable for
inexperienced learners. Patient and skilled human support from a tutor can make a big
difference, as can the shared experience of learning in a group, but this is not always
available, or attractive, to adult learners.
In the absence of human support, the unconfident learner might therefore well benefit from a
digital tool that helps them to process new information into their own existing knowledge and
understanding. Overall, it will need to carry out a number of tasks in order to help learners
eventually learn things for themselves: such as eliciting the user's range of interests and possible learning goals, helping them select a learning project and seek material relating to it from the internet, and putting that new information before the learner for processing. All of this can happen through the processes of conversation and persistent knowledge of the user that are intended to be part of the digital companion's repertoire, but in order to meet the
challenge of helping the learner to process new learning, the companion will also need to act
like a teacher to some extent.

The digital learning companion would not so much teach content as teach how to learn. In this respect the learning companion must, just like a skilful teacher, offer learners strategies that they cannot (initially at least) generate for themselves. These strategies would involve far more than providing memory support: they would involve ways of helping the user to work through material so that it can become sustainably known and understood. This could involve the companion helping the learner to sort through and tag new material for storage, to visualise and graphically organise ideas, to review and articulate existing knowledge that might connect with the new knowledge, and to seek points of connection between old and new; it might even encourage the learner to teach the new knowledge back to it. Many such strategies for scaffolding and enabling self-directed learning are already very familiar to adult tutors, and should indeed be adapted for the learning companion from their expertise.
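To make the idea concrete, here is a minimal sketch of such a scaffolding repertoire as a fixed cycle of prompts. The step names and the prompt wordings are illustrative assumptions drawn from the strategies listed above, not a specification of the companion; a real system would select and adapt prompts through conversation and its persistent knowledge of the user.

```python
# Each step pairs a scaffolding move with an example prompt the companion
# might use; the ordering mirrors the sequence described in the text.
SCAFFOLDING_CYCLE = [
    ("elicit",     "What would you like to learn about next, and why?"),
    ("select",     "Shall we pick one small topic from that to work on today?"),
    ("gather",     "Here is some material I found on that topic - which part looks useful?"),
    ("organise",   "How would you tag or group these ideas so you can find them again?"),
    ("connect",    "What do you already know that this reminds you of?"),
    ("teach_back", "Could you explain the main idea back to me in your own words?"),
]


def next_prompt(completed_steps):
    """Return the next scaffolding step and prompt, given the steps already done."""
    for step, prompt in SCAFFOLDING_CYCLE:
        if step not in completed_steps:
            return step, prompt
    return "review", "Shall we look back over what you learned this session?"


# Hypothetical use within one learning session:
print(next_prompt({"elicit", "select"}))
# -> ('gather', 'Here is some material I found on that topic - which part looks useful?')
```

The design point is simply that the strategies adult tutors already use can be expressed as explicit, repeatable moves that a companion can track across sessions.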

2. The social nature of learning

Current thinking about learning emphasises the essentially social nature of successful
learning, and the importance of building understanding through dialogue and collaboration
with both teachers and fellow learners. Within such a perspective, the most powerful digital
tools for learning are seen as being those that support, stimulate and sustain collaborative
networks for learning. We would not argue with this, but do believe that there exist many
learners of the kind we are referring to here who, through personal circumstance or
inclination, would prefer to benefit from learning encounters of a non-human kind as part of a
long process of developing confidence and competence as adult learners. The longer term
prospects for their learning might lead them eventually towards organised learning
environments, or future participation in online learning communities. But the first step (and,
for some, the only step they might choose to take) might most productively be to engage with
a digital companion in order to learn things on their own, and on their own terms.
In suggesting that an educative digital companion might act like a good adult tutor, we are
suggesting that the social aspect of learning might, in some key respects, be recreated for an
adult learner through the use of a digital tool. This is not a particularly radical thought in itself,
but it does raise the rather more radical implication that, for some people, this might prove to
be what they want, and all that they want. In effect, we suspect that the digital learning
companion could demonstrate that solitary learning is an acceptable concept.

Philosophical Issues in Artificial Companionship

Luciano Floridi
Department of Philosophy, University of Hertfordshire, St Cross College, University of Oxford

At the beginning of Much Ado About Nothing, Beatrice asks "Who is his companion now?"
(Act 1, Scene 1). These days, the answer could easily be an artificial agent. The technology
to develop artificial companions (henceforth AC) is largely available, and the question is
when rather than whether they will become commodities. Of course, the difficulties are still
formidable but not insurmountable. On the contrary, they seem rather well-understood, and
the path from theoretical problems to technical solutions looks steep but climbable. So, in the
following pages, I wish to concentrate not on the technological challenges, which are
important, but on some philosophical issues that a growing population of AC will make
increasingly pressing.
We know that AC are embodied (perhaps only as avatars, but possibly as robotic artefacts)
and embedded artificial agents. They are expected to be capable of some degree of speech
recognition and natural language processing (NLP); to be sociable, so that they can
successfully interact with human users (their human companions, to be politically correct); to
be informationally skilled, so that they can handle their users' ordinary informational needs;
to be capable of some degree of autonomy, in the sense of self-initiated, self-regulated, goal-
oriented actions; and to be able to learn, in the machine-learning sense of the expression.
ACs are not the end-result of some unforeseeable breakthrough in Good Old Fashioned AI.
They are more the social equivalent of Deep Blue: they can deal successfully with their
tasks, even if they have the intelligence of a refrigerator.
Although ACs are not HAL's children, their nature poses some classic philosophical questions. Take some very elementary artificial agents, such as Virtual Woman (available since the late 1980s, http://virtualwoman.net/), or the more recent Primo Puel (more than one million sold since 2000 by Bandai, interestingly the same producer of the Tamagotchi), Paro (http://paro.jp/english/index.html) and KASPAR (http://www.iromec-project.co.uk/). One ontological question is: when is x a companion? Could the previous examples be considered members of a first generation of simple companions? Is any of them better than a child's doll, or a senior's goldfish? Is it the level and range of interactivity that matters (but then the goldfish may not count), or the emotional investment that the object can invoke and justify (but then the old Barbie might count)? Is it their non-biological nature that makes philosophers whinge? Not necessarily, since, to a Cartesian, animals are machines, so having engineered pets should really make no difference. All these are not idle questions. Depending on their answers, one may be able to address human needs and wishes more effectively, with a deep impact on economic issues. In 2007, for example, an estimated $40.8 billion will be spent on biological pets in the U.S. alone (source: http://www.appma.org/press_industrytrends.asp). The arrival of a whole population of ACs could change all this dramatically.
Suppose one may solve the previous questions to one's satisfaction. It is often said that artificial companions will help the disadvantaged. This is true, but a proviso is in order in the case of elderly users. Technology, demography and IT skills follow converging lines of development. Future generations will be used to interacting with digital artefacts in a way that we can only partly appreciate. To them, it will be natural and unproblematic to be in touch with artificial agents and to be related to the world through them. The more life-on-line and life-off-line become blurred, the easier it will be to accept and be able to socialise with and through synthetic, hybrid, artificial companions. Future generations of senior citizens won't be immigrants but children of the digital era. Missing this point may be an easy but serious mistake, with relevant financial consequences. It is not that our grandchildren in their retirement age will be unable to use some kinds of technology, but that they will no longer be able to do without them, rather in the way in which one may still be perfectly able to read, but no longer without glasses. Today, sixty-seven percent of American heads of households play computer and
video games and the average game player is 33 years old and has been playing games for
12 years (source: http://www.theesa.com/facts/top_10_facts.php). When they retire, they will not need to have it explained to them what a computerised agent is, or how to use a mouse. But they will definitely enjoy the help of a personal assistant, a facilitator understood as an interface to the rest of the infosphere. In this sense, the evolution of artificial companions might be moving in the direction of specialised computer agents for non-intelligence-intensive, informational tasks. Like avatars, they may be more likely to be means to tele-socialise with other human agents, rather than social agents in themselves.
The last point raises a further consideration. It seems that the population of ACs will grow and evolve in the future and, as in the case of vehicles, one may expect robust trends in specialization. Today, we see and plan ACs as:
(1) social workers, which may cope with human loneliness, social needs and the desire for emotional bonds and interactions, not unlike pets;
(2) service providers, in contexts such as education, health, safety, communication, etc.;
(3) memory keepers (see the Memories for Life project), as stewards of the informational space constituted by human memories, individual or socially shared.
In each case, different questions arise.
Regarding (1), is there something morally wrong, or mildly disturbing, or perhaps just sad, in allowing humans to establish social relations with pet-like ACs? And why might this not be the case with biological pets? The question casts an interesting light on human nature, and it seems to belong to the sort of questions asked with respect to recreational drugs. Essentially: what's wrong with it? Different answers seem to be based on different philosophical anthropologies or conceptions of what it means to be human.
Regarding (2), may the availability of ACs as service providers increase social discrimination and the digital divide? For example, should individuals with relevant disabilities have the right to be supported by ACs? Today, the Motability Scheme in the UK provides citizens with physical disabilities or health conditions affecting their mobility with the opportunity to own or hire powered wheelchairs and scooters at affordable prices (source: http://www.motability.co.uk/Templates/Internal.asp?nodeid=89861). Should something similar happen for ACs? Consider that ACs might easily become embedded in future technological artefacts engineered for mobility.
Regarding (3), creating ACs as artificially-living diaries will pose interesting challenges. The accumulation of memory has been, for a long time, a friction-full business. Never before has the creation, reproduction, management and destruction of documents been just a click away and so cheap, in terms of computational and memory resources. The trend will only increase once ACs, as memory stewards, become available. What to record, the safety and editing of what is recorded, the availability and accessibility of the information, its future consumption, the impact that all this will have on the construction of one's own identity and the stories that make up one's own past and roots are all issues that will require very careful handling, not only technically, but also ethically. For example, what sort of memories will or should survive their human supports? And what are we going to do with the artificial companions that will have outlived their human partners? Reset them? Edit, cut and paste, reformat? Are we going to see memory hackers? When a couple divorces, who will have the right to keep the AC that recorded the wedding and the children's first years? Will people bother with duplicates? Will there be an attachment to the specific artefact that holds the memories as well? Will someone's digital companion be more important than his old cufflinks or her old earrings? And how long will it take before some smart application, based on a life-time recording of someone's voice, interactions, visual and auditory experiences, tastes, expressed opinions, linguistic habits, millions of documents (tax forms, emails, Google
searches, etc.) and so forth, will be able to imitate that person, to a point where you will talk to someone actually dead without noticing any significant difference? An advanced,
customised ELIZA could already fool many people in Second Life. Will there be people who
will impersonate Artificial Companions in order to impersonate dead people?
The informational turn may be described as the fourth step in the process of dislocation and
reassessment of humanity's fundamental nature and role in the universe. We are not
immobile, at the centre of the universe (Copernican revolution), we are not unnaturally
separate and diverse from the rest of the animal kingdom (Darwinian revolution), and we are
very far from being entirely transparent to ourselves (Freudian revolution). We do not know if
we are the only intelligent form of life. But we are now slowly accepting the idea that we
might be informational entities and agents among many others, and not so dramatically
different from smart, engineered artefacts. When ACs become commodities, people will accept
this conceptual revolution with much less reluctance. It seems that, in view of this important
change in our self-understanding and of the sort of IT-mediated interactions that we will
increasingly enjoy with other agents, whether biological or artificial, the best way of tackling
the previous questions may be from an environmental approach, one which does not
privilege the natural or untouched, but treats as authentic and genuine all forms of existence
and behaviour, even those based on artificial, synthetic or engineered artefacts. Beatrice
would not have understood an artificial companion as an answer to her question. Future
generations will find it obvious. It seems that it will be our task to make sure that the
transition from her question to their answer will be as ethically unproblematic as possible.

Will Artificial Companions Become Better Than Real Ones?

Joanie Gillespie

"Humans are the sex organs of machines ... enabling it to fecundate and evolve into ever new forms." 8 As Artificial Intelligences become in/visible in everyday life, how do we make sense of their increasing influence on us and our world? From Aboriginal cave paintings to iPhones, technology has always facilitated the journey of the self, culture,
and spirit. However, in spite of all the ways technology has made life easier, we still struggle
with the messiness of being human. We are all looking for love. Somehow, we can send
people to the moon but we just cant seem to be happy. But dont look to your doctors, your
spouse, or your community to help you. 9 Global Warming, war, divorce, stress-related
illnesses, its no wonder that Second Life appears to be more satisfying than our first one.
Why? The virtual allows us feel powerful and in control. Even though almost everything we
do is driven by nanno and wifi technology we argue whether this is a good thing. Does the
digital tether makes us happier? Could artificial companions make us less likely to seek out
and interact with friends and family? It makes sense that the more time we spend digitally
engaged the less time we spend face-to-face. However, we need strong networks of social
support in order to thrive and cope with life stressors. Bad habits, poor social skills, and ill
health often derail this process despite our best intentions. Artificial companions, on the other
hand, could assist with some of these problems, but can they, at the same time, also make
our lives with others better?
One way to address this dilemma is to encourage more collaboration between the divergent
disciplines of computer science and social science. Opposites attract. Partnerships with
developmental psychologists, especially attachment theorists, behavioral health specialists,
and neuro-psychologists could help programmers address form and function questions from
inception. Could loving your artificial companion become a fetish? Should companions have
a gender? A sexual orientation? What kinds of personalities (or health conditions, etc.) may be
harmed by companion assistance? Are mirror neurons activated in CMC relationships? How
much digital time is too much? 10 With the average age of first digital media interaction now 9
months, 11 our brains and social lives are already so attuned to digital intelligences that artificial
companions may become indistinguishable from our psyches. 12
Increased companionability between computer scientists and social scientists will certainly
create better artificial companions. Collaboration maximizes the benefits and minimizes the
risks of users' experiences with artificial companions, especially if there is a shared sense
of purpose. What could be better than designing artificial companions that intentionally
increase both adaptive and supportive uses of technology and healthier connections
between people? The world can only benefit.

8 McLuhan, M. (1962). The Gutenberg Galaxy: The Making of Typographic Man. Toronto: University of
Toronto Press, p. 580.
9 Keyes, C. (2007). Protecting mental health as flourishing. American Psychologist, Vol. 62, No. 2, pp.
95-108.
10 Neuroscientist Norman Doidge believes that CMC overloads the frontal lobes at the expense of other
areas necessary for well-being. See his book The Brain That Changes Itself (2007).
11 See, for example, the Kaiser Family Foundation (www.kff.org) and the Pew Internet and American Life
Project (www.pewinternet.org).
12 Johnson, S. (2004). Mind Wide Open: Your Brain and the Neuroscience of Everyday Life. New York:
Scribner.

Falling in Love with a Companion

David Levy
In 1984, in her groundbreaking book The Second Self, Sherry Turkle made us aware of the
tendency of some people to develop relationships with their computers. Turkle described one
such example, an MIT computer hacker who she called Anthony, who had tried out having
girlfriends but preferred to relate to computers. I believe that the developments in AI since
then have demonstrated a progression in human-computer relationships to the point where
we can now say with confidence that, in the foreseeable future, significant numbers of
Anthonys and their female counterparts will be falling in love with software companions. This
position paper summarizes my arguments.
In discussing this subject I have found a certain instinctive measure of disbelief from some
people, often in the form: "If it isn't human, how can you love it?" This scepticism can easily
be refuted with two categories of counter-example: the love that many people exhibit for
their pet animals and for their virtual pets.
Some pet owners derive even more satisfaction from their pet relationships than they do from
their social relationships with people. Love for a pet is often very strong, sometimes even
stronger than a person's love for other humans, and can be manifested in various ways: the
offer of a reward for a missing animal; an owner bequeathing an extraordinarily large sum of
money for the care of their pet, even making the pet a millionaire; spending considerable
amounts on pet food, healthcare, and sometimes on their animals grooming and clothing;
and interacting with their pets in ways redolent of human-human relationships: giving them
names, feeding them from their own plates at meal times, celebrating their birthdays,
allowing them to sleep on the owners beds, talking to them, and considering them as
members of their family. And when a pet dies, the intensity of the owner's feelings of grief will
often be very similar to those experienced with the bereavement of a spouse or partner.
Sometimes an owner's love is seen in bizarre news reports: a groom designating his dog as
best man; a divorcing couple battling in the courts over the custody of their pet; there is even
a web site (www.marryyourpet.com) that offers owners the opportunity to marry their pets
(for a suitable fee). Clearly, all of these are manifestations of love of the kind we normally
reserve for human loved ones.
In The Second Self, Turkle pointed out that Man's relationship with animals has certain
parallels with his relationship with computers: "Before the computer, the animals, mortal
though not sentient, seemed our nearest neighbours in the known universe. Computers, with
their interactivity, their psychology, with whatever fragments of intelligence they have, now
bid for this place." I therefore continue this discussion of emotional attraction by transferring
its focus from pet animals to virtual pets, which are simply a form of computer. The human
propensity for loving pet animals informs our understanding of the nature of human emotional
attraction to virtual pets.
For many of those who value their relationship with their pet animal more highly than their
relationships with other humans, it would not be very surprising if a virtual pet or a robot were
to be regarded in the same vein, supplanting other humans as the most natural objects of
affection. Such a strength of affection was observed in millions of owners of the late 1990s
toy phenomenon, the Tamagotchi. This virtual pet incorporates software that exhibits pet-like
behaviour patterns. The owner must care for her Tamagotchi in its virtual world, by pressing
buttons to simulate the giving of food and drink, the playing of games, and other behaviours
typical of a mother-child relationship, ensuring that the Tamagotchi will survive and thrive. If
the Tamagotchi is neglected it can grow ill and die, often causing heartbreak to its owner.
The creature's behaviour patterns have been programmed to change with time, in order to
give the owners the sense that each Tamagotchi is unique and therefore provides a unique
relationship for the owner, just as each pet animal and each human are unique.
The literature abounds with anecdotes about Japanese Tamagotchi owners who go to great
lengths in order to preserve the life and well-being of their virtual pet: businessmen who
postpone or cancel meetings so as to be able to feed their Tamagotchi and attend to its other
essential needs at appropriate times; women drivers who are momentarily distracted in traffic
while responding to the beeping of their needy electronic creature; a passenger who had
boarded a flight but felt compelled to leave the aircraft prior to takeoff, and vowed never to fly
with that airline again, because a flight attendant insisted she turn off her Tamagotchi, which
the passenger felt was akin to killing it. Every example reflects the attitude of devoted
Tamagotchi owners that their loveable virtual pet is alive.
The effect of this craze in the late 1990s was to spawn a culture in which some electronic
products are accepted as having life-like properties. In a 2006 article in the London Review
of Books, Turkle explains that, as a result of this change in perception as to the aliveness of
artefacts, people are learning to interact with computers through conversation and gesture.
People are learning that to relate successfully to a computer you have to assess its
emotional state (...); you take the machine "at interface value", much as you would another
person.
Hand-held virtual pets such as the Tamagotchi are the simplest form of the genre, based on
low-cost electronics that allow a retail price of $15 or less. The next step up in complexity is
the virtual pet that lives on the computer screen, i.e. a form of companion. The most
believable and lifelike of these characters exhibit a variety of social cues: intelligence,
individuality, sociability, variability, coherence, and some conversational ability. When their
conversational ability has improved, and when these virtual characters are also able to
recognize the user's emotional state by employing affective computing technologies such as
those being developed by Rosalind Picard's group at MIT, they will become rather
compelling. But already computers are increasingly being regarded as our social partners,
and with the evidence amassed by Clifford Nass and his team at Stanford University it is not
difficult to understand why. Nass's group has discovered, inter alia, that their test participants
quickly came to regard a computer program as male or female, depending partly on the voice
of the program's speech synthesizer; and the participants tended to be polite in their
interactions with the computers in the experiments. These behaviours were despite a uniform
denial by the participants that computers can have gender, or have feelings, or that they
deserve to be treated politely!
Giving a computer the means to express appropriate emotional responses is a task that falls
within the development of a software emotion module. The work of Scott Reilly and Joseph
Bates at CMU on the Oz emotion module, Juan Velasquez's Cathexis program, and the work
of Cynthia Breazeal's group at MIT, are amongst the best-known examples created to date.
Research and development in this field is growing, within both the academic world and
commercial software and robot manufacturers, and especially in Japan and the USA. I am
convinced that, within twenty years at the latest, there will be artificial emotion technologies
that can not only simulate a full range of human emotions and their appropriate responses,
but also exhibit non-human emotions that are peculiar to computers (and robots).
Developments in this area will benefit from the extensive research that has been conducted
into the social psychology of human-human relationships and human-human communication,
research that is also relevant to human-computer relationships. For example, Timothy
Bickmore and Rosalind Picard at MIT have found that people use many different behaviours
to establish and sustain relationships with each other, and that most of these behaviours
could be used by computer programs to manage their relationships with their users. It is true
that relatively little progress has been made during the past half-century in the recognition
and understanding of conversational language, and that these fields require quantum leaps
in progress for the conversational abilities of computers to rise to the levels necessary to
make computers appealing as conversation partners, or life partners, for humans. But such
leaps will certainly come, partly through the emergence of much faster and bigger computing
technologies that will enable the use of yet-to-be-discovered software techniques, ones that
are probably not possible with today's level of computing power but which will be possible
using the computers of the future.
Why do I believe that these companions of the decades to come, even with their amazing
conversational skills, will be sufficiently enticing to cause large numbers of us to fall in love
with them? My own research has pointed to ten major reasons that have been identified by
research psychologists as contributing to the falling-in-love process between humans, almost
all of which could apply equally to falling in love with a software companion (or a
robot). One of these reasons, for example, is reciprocal liking: Peter is more likely to fall in
love with Mary if Peter already knows that Mary likes or loves him; so a companion
programmed to exhibit love for its human will satisfy this particular criterion. Another reason
is similarity: two people are more likely to fall in love with each other if they are similar in
some respects; so a computer endowed with knowledge about a user's hobbies, and
programmed with similar personality traits to the user, will again have a head start in its
attempts to engender love from its human.
Although the practice of treating computer characters as our social partners is still very much
in its infancy, the concept is gaining currency as more and more interactive personalities
appear on the market, as they become steadily more lifelike, more intelligent, more
emotional, and more emotionally appealing. When companions are able to exhibit the whole
gamut of human personality characteristics, their emotional appeal to humans will have
reached a critical level in terms of attracting us, inducing us to fall in love with them, seducing
us in the widest sense of the word. We will recognize in these companions the same
personality characteristics that we notice when we are in the process of falling in love with a
human. If someone finds a sexy voice in their partner a real turn-on, they are likely to do so if
a similar voice is programmed into a companion. That the companion is not in the physical
presence of the user will become less and less important as its software becomes
increasingly convincing. Physical presence was shown long ago to be a non-essential
ingredient for falling in lovejust think of the romances and decisions to marry that
developed between pen-friends in the days when the postal system was the most popular
method of communication between people in distant places, not to mention the much more
prevalent cyber romances that today replace the pen-friend relationships of old.
Now consider the following situation. From the other end of an Internet chat line comes the
image of a companion, a virtual friend. You hear the seductive tones of its voice, you smell
its artificial bodily scent, and it is endowed with all of the artificially intelligent characteristics
that will be known to A.I. researchers two or three decades from now (or perhaps sooner).
You sit at home, looking at this image, talking to it and savouring its fragrance. Its looks, its
voice and its personality appeal to you, and you find its conversation stimulating, entertaining
and loving. Might you fall in love with this companion? Of course you might. Why shouldn't
you? Pen friend relationships long ago established that people can fall in love without being
able to see or hear the object of their love, so clearly, being able to see it, finding its looks to
your liking, being able to hear its sexy voice, and being physically attracted by its simulated
body fragrance, can only strengthen the love that you might develop in the absence of sight,
sound and smell.
And what will happen when this software technology, or even software at a much more
primitive level of personality and intelligence, is integrated, via haptic interfaces, with dildonic
devices such as vibrators and some of the more exotic sex machines now being sold on the
Internet? The answer is obvious. We will not only be falling in love with software companions,
but also having sex with them.

Bibliography

Bickmore, T. and Picard, R. (2005). Establishing and Maintaining Long-Term Human-Computer
Relationships. ACM Transactions on Computer-Human Interaction, Vol. 12, No. 2, pp. 293-327.
Levy, D. (2007): Intimate Relationships with Artificial Partners. Ph.D. thesis, University of
Maastricht.
Levy, D. (2007): Love and Sex with Robots. Harper Collins, New York (publication due
November 1st).

Nass, C., Moon, Y., Fogg, B., Reeves, B., and Dyer, C. (1995): Can Computer Personalities
be Human Personalities? International Journal of Human-Computer Studies, Vol. 43, pp.
223-239.
Nass, C., Moon, Y., and Green, N. (1996): Are Computers Gender Neutral? Gender
Stereotypic Responses to Computers. Journal of Applied Social Psychology, Vol. 27,
No. 10, pp. 864-876.
Picard, R. (1997): Affective Computing. MIT Press, Cambridge, MA.
Reeves, B., and Nass, C. (1996): The Media Equation, Cambridge University Press,
Cambridge.
Turkle, S. (1984): The Second Self. Simon and Schuster, New York. (2nd edition, 2005, MIT
Press, Cambridge, MA.)
Turkle, S. (2006): Diary. The London Review of Books, 20th April 2006, pp. 36-37.

Identifying Your Accompanist

Will Lowe
Methods and Data Institute, University of Nottingham

Computer scientists are designing digital companions for people: that is, virtual
conversationalists, or digital confidants.

They would [...] build a life narrative of the owner. [...] You could call it
autobiography building for everyone,

[...] inner content could be elicited by long-term conversations.

These quotations are taken from the first paragraph of the invitation to this forum on Artificial
Companions in Society. They express a view about the nature of such companions, the
outcome of extended interaction with such companions, and the method of their interaction.
In short, Companions are agent-like others; they help us understand ourselves; and they do
so by way of conversation.
But there is an obvious tension between the first and second parts, and the mention of
autobiography gives it away: are these companions truly others, or are they us?
Rather than attempt to say whether they are, or should be thought of as, other agents or as
filtered versions of ourselves, I would like to take one step back. What Companions are or
should be will depend on what we want them to do for us. And this depends on what they
can do for us. In the end, I shall argue that whether Companions should best be thought of,
programmed, and regulated as others, or as extensions of ourselves, is a tactical question.
One path will simply be more effective than the other. Either way, it's going to be all about us.
To make a start, switch to the second person: What can a Companion do for you?
We begin with a practical and not-too-distant use case: a Companion can remind you to take
your medicine on time, and to ensure that you finish the course. For medical issues this will
be easier and work better if the Companion has access to your medical records. This is both
empowering, because you are immediately more integrated in the project of maintaining your
health, and controlling, because it ensures that advice you get from doctors cannot be so
easily ignored. And it has significant social benefits, allowing large amounts of accurate
epidemiological data to be harvested via your Companions other (one might hope suitably
anonymised) interactions with those interested in public health, from both research and
policy perspectives. Indeed, Companion-like projects already exist to keep the demented out
of institutional care for longer by making their environments more intelligent. 13 Other
cognitive deficits may also be best addressed by a Companion approach: the National Health
Service already makes use of computer-based cognitive behavioural therapy courses for
phobias, panic attacks, and depression. Indeed, such approaches may work better with a
Companion involved; it will have more information about your local environment to tailor your
treatment and monitor your progress. More generally, the idea of an extended period of
agent interactions in order to provide a framework for understanding your choices,
obligations, and relationships could well be described as autobiography-building. It is also
called therapy.
One of the more optimistic possible roles for a Companion is to make you more rational.
Romanticism aside, it is better and less frustrating if your preferences order consistently over
currently available goods, and over time. 14 Consider these two types of well-ordering.

13 E.g. work at Newcastle University's Department of Aging and Health.
14 Or, as economists would put it: if you have preferences, rather than simply unrationalisable choice
behaviours.

It is well known that psychological preferences are sensitive to the framing of choices.
Usually, the framing is done by another, such as the store whose window you are looking
into, or the state whose tax code you are attempting to negotiate. The advantage of a
Companion is that it can reframe your choice set, perhaps according to criteria that you may
have told it about beforehand.
The possibility of using a Companion as a level of indirection for choices is also helpful when
the choices are difficult because they require expertise you do not have, or because they are
easy to make now, but hard to make later. This latter is a ubiquitous planning problem,
referred to generically as inter-temporal choice. Whether it is the cream cake forbidden by
your diet, the last week of antibiotic treatment, or contributions to your holiday fund, when the
sirens of plan-breaking temptation sing, it may be your Companion that ties you to the mast.
The problem of inter-temporal choice can be described by individual discounting curves:
assume that for any good, the utility of having it today is greater than tomorrow, and still less
the day after. Classically rational agents discount their utilities at a constant rate, e.g. every
extra day spent waiting reduces utility by 10%. This generates a curve of expected utility that
decreases exponentially towards the present moment from its maximum when the good is
available. The exponential form guarantees that utilities retain their rank ordering, so
preferences over goods will never reverse. Most mammals, however, discount at a variable
rate depending on the proximity of a good, typically tracing hyperbolic curves that can cross,
leading to situations where the expected utility of a larger but later good is initially greater
than, and then at a certain point less than, the utility of a smaller, sooner one, so that the latter
is chosen. Fortunately for planning, the curve traced by a good explicitly constructed by
bundling together many lesser goods over time is more nearly exponential than the curve of
any of its components. 15
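To make the contrast concrete, here is a small Python illustration (my own toy example, not part of the original argument; the reward sizes, delays and discount parameters are invented) showing that exponential discounting preserves the ranking of a smaller-sooner and a larger-later good, while hyperbolic discounting reverses it as the smaller good draws near.

# Toy comparison of exponential and hyperbolic discounting; all numbers are
# illustrative assumptions, chosen only to exhibit the preference reversal.

def exponential(value, delay, rate=0.10):
    # Classically rational: each extra day of waiting cuts utility by a constant proportion.
    return value * (1 - rate) ** delay

def hyperbolic(value, delay, k=0.5):
    # Mammal-style discounting: disproportionately impatient about imminent rewards.
    return value / (1 + k * delay)

SMALL, SMALL_DAY = 50, 5      # smaller, sooner good (hypothetical)
LARGE, LARGE_DAY = 100, 10    # larger, later good (hypothetical)

for today in range(6):
    s, l = SMALL_DAY - today, LARGE_DAY - today
    exp_pref = "larger-later" if exponential(LARGE, l) > exponential(SMALL, s) else "smaller-sooner"
    hyp_pref = "larger-later" if hyperbolic(LARGE, l) > hyperbolic(SMALL, s) else "smaller-sooner"
    print(f"day {today}: exponential prefers {exp_pref}; hyperbolic prefers {hyp_pref}")

# The exponential agent prefers the larger-later good on every day; the hyperbolic
# agent switches to the smaller-sooner good once it is close enough (the crossing curves).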
While there is agreement on the form of the cognitive problem, diverse mechanisms have
been suggested to explain it, including folk-psychological theories of will power, religious
theories of temptation, and, most recently, neurophysiological accounts. 16 For the
purposes of thinking about Companions, George Ainslie's account is particularly
suggestive. 17
Begin by noting the structural similarity between the problem of inter-temporal choice and the
prisoner's dilemma in game theory. In a one-shot game the equilibrium policy is to defect,
gaining more utility than the other player but less than if you both co-operated. Other policies
are equilibria if the game is repeated. Concrete proposals usually suggest reputation (as a
co-operator) as a mechanism to maintain long-term optimal behaviour despite the
temptations of short-term gains realised by defection.
In Ainslie's framework you are engaged in just such a repeated game with your future selves.
If you can frame sets of goods that will occur at different times as parts of a larger good, then
discounting for the larger good will become more exponential, and thus less liable to be
trumped by a sooner, smaller reward. Ainslie describes this cognitive process as bundling,
typically a mixture of asserting an equivalence across actions ("all drinks are equally bad for
me") and a personal rule ("because I don't drink"). To the extent that breaking such a rule by
succumbing to a nearby pint affects the probability that it will happen again, rule-breaking
incurs a reputation cost (your reputation to yourself). There is therefore always a motivation
to interpret each rule-breaking as a justifiable exception (drinking at a birthday), which in
turn may perversely motivate systematic misunderstanding of oneself and the world. The
dynamic of setting personal rules, sometimes breaking them, and then dealing with the
consequences for your self-image can therefore be described as inter-temporal bargaining.
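To make the bookkeeping concrete, here is a deliberately simple sketch (my own illustration, not Ainslie's model nor anything proposed in this paper) of the kind of record a Companion might keep of a personal rule, the equivalence class of actions it covers, and the lapses and claimed exceptions that bear on your reputation with yourself.

# A toy record of a personal rule and its track record; the scoring is invented
# purely to illustrate the "reputation to yourself" idea described above.

from dataclasses import dataclass, field

@dataclass
class PersonalRule:
    name: str                                     # e.g. "I don't drink"
    covers: set                                   # the equivalence class of actions the rule bundles
    history: list = field(default_factory=list)   # (action, counted_as_lapse) pairs

    def record(self, action, claimed_exception=False):
        # An action covered by the rule counts as a lapse unless reinterpreted as an exception.
        self.history.append((action, action in self.covers and not claimed_exception))

    def self_reputation(self):
        # Fraction of covered actions that were not lapses: a crude "reputation to yourself" score.
        covered = [lapse for action, lapse in self.history if action in self.covers]
        return 1.0 if not covered else 1 - sum(covered) / len(covered)

rule = PersonalRule("I don't drink", covers={"pint", "wine", "whisky"})
rule.record("pint")                          # a plain lapse
rule.record("wine", claimed_exception=True)  # "drinking at a birthday": not counted as a lapse
print(rule.self_reputation())                # 0.5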

15 See e.g. K. N. Kirby and B. Guastello (2001) Making choices in anticipation of future similar choices
can increase self-control, Journal of Experimental Psychology: Applied, 7, pp. 154-164.
16 See e.g. S. McClure, K. Ericson, D. Laibson, G. Loewenstein, and J. Cohen (2007) Time
discounting for primary rewards, Journal of Neuroscience, 27, pp. 5796-5804.
17 See e.g. G. Ainslie (2005) Précis of Breakdown of Will, Behavioral and Brain Sciences, 28, pp. 635-
673; G. Ainslie (1991) Derivation of rational economic behaviour from hyperbolic discount curves,
American Economic Review, 81, pp. 334-340.

Ainslie describes the mechanisms of decision in an unaided agent, but the presence of a
Companion changes things. Crudely, a Companion may act as enforcer, distracting you from
proximate but self-defeating pleasures, or telling you off afterwards. More interestingly, a
Companion may simply inform you about what youve been doing, making it harder to
maintain a transient pleasure-justifying narrative that you would be ashamed of later.
Companions are also plausible repositories for personal rules that discourage opportunistic
rewriting: a commitment mechanism similar to illiquid retirement savings plans. 18 A
Companion may also independently assess certain predictive probabilities. This may be a
problem: If it is necessary for maintaining your current course of action to believe that every
exception signals the end of your personal rule and triggers a steep reputation cost, then it
may not be helpful to have a Companion inform you that there is in fact a 0.8 probability that
your latest lapse will not be repeated. 19 In short, a Companion might support you in making
choices where "Those sinkings of the heart at the thoughts of a task undone, those galling
struggles between the passion for play and the fear of punishment, would there be
unknown."
The previous passage is taken from Jeremy Bentham's discussion of the Panopticon and its
advantages as a school design. 20 The possibilities sketched above spell out the irreducibly
Panoptical nature of Companionship. Our two possible Companion types appear as
extremes: either the Companion is another (e.g. the watching eye and punitive hand of the
state, even if only its health service), or the Companion is a cooler-headed version of
yourself, a programmable super-ego. Interesting Companions no doubt lie between these
extremes, but the notion of inter-temporal bargaining brings out the common Panoptical
core of each. This core promises (or threatens) to make you transparent to others, and to
yourself.
It is conventional to deplore Bentham's Panopticon, but the common core of surveillance and
intervention may be as desirable for constantly renegotiating the extended bargain of a
coherent self as it is offensive when imposed by a state. The watchful Companion can exist
at any point between, so its construction obliges us to be clear about the nature of the
difference.
While a Companion might make inter-temporal planning easier, once all goods have been
bundled and choices framed, by increasing the information available to us about ourselves in
choice situations, we have not addressed the question of how choices become framed to
start with. Considering this prior process (which behaviours count as my diet, and what
constitutes a lapse?) and the permanent possibility of on-the-fly redefinition
demonstrates the limits of Companion-driven transparency. For example, the probability of
not lapsing again is only 0.8 under some bundling of goods and plans, some set of
equivalence classes over actions. But it is different or undefined in other (possibly non-
commensurable) decompositions of the social world. Every probability needs a sample
space, and your Companion will partly constitute it.
In the first example, you tend to accept the information and interventions of your doctor. This
is partly because you share her assumptions, and because if you cease to trust her
judgement you can find another doctor, resort to prayer, or switch to herbal remedies, each
embodying a distinct conceptual scheme and rule set that a Companion could help you work
within. 21 Panoptical objections arise most quickly when your Companion is chosen for you,
particularly by a state. This is reasonable: not only is there no general reason to assume that
the state's bundling of goods and plans is the best one for you, but states too have inter-
temporal choice problems. 22 But problems also arise when you choose for yourself. A

18 See e.g. C. Harris and D. Laibson (2001) Dynamic choices of hyperbolic consumers, Econometrica,
69(4), pp. 935-957.
19 Naturally, a sophisticated Companion would do its calculation conditioning on both the value itself
and its general criteria for telling you, leading fairly directly to a version of Newcomb's Problem.
20 In J. Bentham (1995) The Panopticon Writings, M. Bozovic (ed.), Verso, London.
21 Expect to see people shopping around for Companions with ideologically appealing frameworks.
22 See e.g. P. Streich and J. S. Levy (2007) Time horizons, discounting, and inter-temporal choice,
Journal of Conflict Resolution, 51, pp. 199-226.

Companion can impose the thinking and values of an entire community on you, in a way that
is unaffected by subsequent changes in that community's thinking. For example, a
Companion may still be helping you ward off spells when a ceasefire agreement has
already been signed between the other members of your religion and modern medicine. This
is problematic despite the fact that you chose your Companion freely, perhaps to bolster your
faith.
Returning to the original question: it is an empirical matter whether the inter-temporal
bargaining we seem to need to stabilise our choices and construct coherent selves is best
facilitated by Companions that are better-informed, cooler-headed reflections of ourselves, or
by distinct agents interested in us in the same benign but instrumental way as our therapist
or financial advisor. Either way, they will need to be distinct and independent enough for us
to treat their advice, information, and representation of our rules as real constraints on our
action, but sufficiently closely aligned and sensitive to our goals that we do not feel them as
an imposition. Companions will inevitably structure our choices, but we will choose them to
do precisely that.

Consulting the Users

Alan Newell
Queen Mother Research Centre for IT to support Older People, School of Computing,
University of Dundee, Scotland

Artificial Companions that help older and lonely people with their daily tasks, and provide
companionship, are an intriguing and exciting concept. It is clearly important to consider how
such technology should work, what facilities it should and should not provide, together with
the personal and social consequences of such technology. It is also important to ask what
the most effective and beneficial ways of addressing these issues are.
Clearly an interdisciplinary forum including a variety of experts is one way. Such a forum
encourages wide ranging discussions on the technology, what it can achieve, and the
positive and negative aspects of the introduction of such technologies.
In such discussions, we should try to avoid the mistakes of many software developers: that
is, designing and building what experts think the target population needs and wants,
influenced by what they find most exciting and interesting to develop, and which they judge
will be most effective in magnifying their reputation and/or profits. [Compare the grossly over-
optimistic predictions concerning speech recognition systems since the 1960s with text
messaging, which was never deliberately offered as a facility of mobile telephones.]

The Characteristics of Potential Users

In such discussions it is important to consider the characteristics of potential users of artificial
companions as well as the technology. These can be very different from the characteristics
of designers of new technologies and their traditional user base, particularly in the case of
those users who are most in need of such companions. Older people have multiple minor
disabilities (sensory, motor, and cognitive), and may also have major disabilities. The vast
majority will have substantially different experiences of, and emotional attitudes to, new
technologies than younger cohorts. Many of them will not understand, or be familiar with, the
jargon, metaphors, or methods of operation of new technologies, and will have a major lack of
confidence in their ability to use them. These challenges are unlikely to be solved by time.
Not only are a significant fraction of current middle aged people computer naive, but also
technology, and human interface methods and metaphors are unlikely to remain static.
Cohorts of older people are thus likely to have entirely different responses to the questions
being asked of this forum than younger and/or technologically experienced people. If we are
to develop successful artificial companions for older and technologically naïve users we need
to address such issues as:
a) What are the physical, sensory, and cognitive characteristics of major sections of the
potential user base of artificial companions?
b) What are the technological experiences of, and attitudes to, current technologies of
this user base?
c) What facilities would this group of people like to have in their artificial companions, and
what safeguards would they require? What are their needs and wants for artificial
companion technologies?
The response to these questions should lead to an examination of the characteristics of
appropriate artificial companions for this user base. To be successful, this should involve
iterative discussions with users, allowing both users and designers to be creative.

Knowledge Elicitation

The over-arching question which needs to be addressed is:


How can we most effectively include, in meaningful and creative discussions about artificial
companions, the experiences and views of potential users, particularly elderly and
technologically naïve users?
The challenges of working with this user group are not insignificant and many relate directly
to points (a) and (b) above. Such discourse could be likened to asking people in the middle
of the 18th Century about the characteristics of horseless carriages or telephones, or in the
early 1980s about home computers or mobile telephony.
Challenges of requirements gathering with this group of people on these issues include:
- Questionnaires, even when carefully constructed, can elicit misleading and grossly
incorrect data.
- Keeping focus groups of older people focussed is not easy.
- There are difficulties in orientating technologically naïve people to the challenges and
promises of new technology.
- Most people, and particularly older groups, lack technological imagination and need to be
presented with accurate instantiations of the technology being discussed.
Possible solutions include the use of Wizard of Oz simulations, but these need to be done
with care, and with particular concentration on the human interface, and, for domestic use,
the aesthetics of the simulated systems.
A further possibility is the use of theatrical techniques, which we have found to be
successful in facilitating dialogue between designers of future technologies and older people,
and in releasing the creative intelligence of the potential user population.

Conclusions

Any consideration of perspectives on the present and future of artificial companions should
include a consideration of the characteristics, needs and wants of the user group, and of the
most effective ways of eliciting such information from appropriate cohorts of users.

Arius in Cyberspace: The Limits of the Person

Kieron OHara
Intelligence, Agents, Multimedia Group, School of Electronics and Computer Science,
University of Southampton, Highfield, Southampton SO17 1BJ, United Kingdom

Introduction

The development of the companion is an interesting example of the human tendency to
adjust cognitive processing and the environment, adapting each to the other in a kind of
circular causation (Clark 1997). Physical, symbolic and social-institutional artefacts such as
tools, language, markets, buildings and books mediate between humans and the world,
allowing human information processing to get more bang for its embodied and boundedly
rational buck. In this case, the environment is online, and the companion would be an
important mediating tool for certain sets of individuals. As Clark argues, the development of
what he calls scaffolding for human information processing makes it harder to make a firm
distinction between the human and the environment. It is not clear whether we have exported
some of our intelligence into the constructed environment, or alternatively whether the proper
study of psychology is the human mind plus appropriate bits of the environment.
Either way, it is clear that the companion idea dramatises this indeterminacy about
personhood. Already, information processing has set us a dilemma. There is arguably a
distinction between the theft of a DVD player, say, and the theft of a laptop. Of course the
latter has a higher monetary value, but the important difference is that whereas the former is
a passive relayer of information, the laptop contains important memories, information
resources and the output (and possibly also the execution) of a great deal of the owner's
work. A case could be made for saying that the theft of a laptop is more akin to a direct
assault on the integrity of a person, rather than simply depriving him of a gadget for which his
insurers can painlessly compensate him.
Many issues of personhood are likely to force themselves upon us if companions became
widespread, because of the intimacy of the agents and their owners, the importance of their
mediating functions and the damage that their destruction or corruption could do. For
instance, how should we understand the separateness of the companion from the person,
and from the online identity that the companion is mediating? At present, the blithe
assumption is made that personhood is more or less a matter of spatio-temporal continuity of
the body, and that the human mind is more or less identical with the human brain; such
assumptions are usually made in philosophical discussions of memory (perhaps most
explicitly by Warnock 1987, 1-14).
There could well be a long period of negotiation of the legal and moral relationship between
various aspects of the person and the mediating artefacts that enable her interactions with
the physical and digital environment. The situation here is analogous to the deep theological
dispute over the Arian doctrine in 4th-century Europe; Arius' difficulty was to delimit the
personhood of God in an intelligible way. The trinity whose interrelations are so
underdetermined is of course different, no longer Father-Son-Holy Spirit but Body-Avatar-
Furry Handbag; nevertheless the complexities remain undimmed. Important parameters
include the control, or lack of it, that the human has over the construction of the companion,
the amount of information processing and storage that the human delegates (a parameter
that can only be settled empirically), the penetration of companion use across cyberspace,
the controls that can be built into the architecture, the reliability of the construction process
and, not least, the novelty of the demands made upon companions. For instance, it is quite
plausible that there will be demand for companions to continue functioning after the bodily
death of the person, for tasks such as the overseeing and administration of trust funds, and
the execution of wills. How far a companion allows a person to survive bodily death would
not be an academic question in such a circumstance.
The Internet facilitates remote interaction. It has therefore already played its part in a
phenomenon characteristic of modernity that has been called the "disappearance of the
body" (Giddens 1990); interactions increasingly take place between avatars, or
technologically-mediated representations of people, rather than via face-to-face meetings.
This has often been seen as one of the more attractive aspects of online interaction, as
immortalised in the famous New Yorker cartoon whose caption read "on the Internet, no-one
knows you're a dog". In the early days online, MUDs and MOOs allowed people to try on
new identities, while AOL accounts allowed the user five different online presences or
screen names. Feminists (and other identity theorists) welcomed the potential for resisting
essentialized understandings of women's identity (Sampaio & Aragon 1998, 160), while
others tried to balance the important issues of authentication and privacy (Lessig 1999).
Nowadays, issues of terrorism and security have shifted that debate away from the privacy
advocates, but new fora for exploring changing identities still emerge. One of the most
significant innovations of recent years is the development of Massive Multiplayer Online
Role-Playing Games (MMORPGs), persistent online environments in which players can
interact and make virtual transactions using avatars created and defined by them
(Castronova 2005). These environments can be along the lines of traditional games (e.g.
World of Warcraft), or go beyond the paradigm of a game to support alternative types of
society (e.g. Second Life). In all these discussions, the assumption was that identities were
constructed on a tabula rasa (or as near as the architecture could get to a blank sheet) by a
separate human subject.
The artificial companions to be discussed in this forum are subtly different types of agent. In
the first place, their development is not (entirely) under the control of their human owners; the
characterisation of the companionship process as "autobiography building for everyone" is
not strictly correct. The human-companion relation is perhaps closer to the Johnson-Boswell
friendship. No doubt Johnson provided solid input into Boswell's depiction of him, and
exercised a good deal of control over the process (as Boswell admits in his Journal of a Tour
to the Hebrides), but ultimately Boswell's Life of Johnson is, well, Boswell's Life of Johnson,
extracted from conversations with Johnson and others, testimony from others and objective
sources of information where possible.
Nevertheless, there are still interesting questions about how truthful a picture this is. Is it
analogous to the subconscious constructed via psychoanalysis, which is assumed not to lie
to the analyst, even as the conscious subject is reconstructing reality? Who is authoritative
(the human or the companion), and in what contexts? With respect to the usual self-
constructed avatars, there is an ambiguity, as noted by Žižek: on the one hand we maintain
an attitude of external distance, of playing with false images; on the other hand, the
screen persona I create for myself can be "more myself" than my real-life persona in so
far as it reveals aspects of myself I would never dare to admit in real life (Žižek 1997, 137).
Notions of accuracy of the companion will be more or less pressing depending on the
contribution that the companion is understood as making to our personhood.

References

Edward Castronova (2005). Synthetic Worlds: The Business and Culture of Online Games,
Chicago, University of Chicago Press.
Andy Clark (1997). Being There: Putting Brain, Body and World Together Again, Cambridge
MA: MIT Press.
Anthony Giddens (1990). The Consequences of Modernity, Cambridge: Polity Press.
Lawrence Lessig (1999). Code and Other Laws of Cyberspace, New York: Basic Books.

Anna Sampaio & Janni Aragon (1998). To boldly go (where no man has gone before):
women and politics in cyberspace, in Chris Toulouse & Timothy W. Luke (eds.), The
Politics of Cyberspace, New York: Routledge, 144-166.
Mary Warnock (1987). Memory, London: Faber and Faber.
Slavoj Žižek (1997). The Plague of Fantasies, London: Verso.

Socially-aware expressive embodied conversational agents

Catherine Pelachaud
University of Paris 8, INRIA

Embodied Conversational Agents (ECAs) are autonomous entities endowed with human-like
communicative capabilities: they can talk, listen, grab one's attention, look at their interlocutor,
show emotion, and so on (Cassell et al., 2000; Gustafson et al., 1999; Gratch and Marsella,
2004; Kopp and Wachsmuth, 2004; Pelachaud, 2005; Heylen, 2006; Gratch et al., 2007).
They can play different roles, from a companion for old people or young kids, to a virtual
trainee, a game character, a pedagogical agent or even a web agent (Johnson et al., 2005;
Moreno, in press; Cassell et al., 1999; Hall et al., 2006; Bickmore et al., 2007). Several
studies have emphasized how their appearance and behaviours need to be tailored to fit
their various roles and their social context (Reeves and Nass, 1996). Other studies have
highlighted how human interlocutors obey human social and cultural rules when interacting
with an ECA. In particular, they follow politeness strategies (André et al., 2004; Johnson et al.,
2004; Walker et al., 1996).
We have developed an ECA platform, Greta (Pelachaud, 2005). It is a 3D virtual agent
capable of communicating expressive verbal and nonverbal behaviours. It can use its gaze,
facial expressions and gestures to convey a meaning, an attitude or an emotion. Multimodal
behaviours are tightly tied to each other. A synchronization scheme has been elaborated
allowing the agent to display an eyebrow raise or a beat gesture on a given word.
According to its emotional or mental state, the agent may vary the quality of its behaviours: it
may use more or less extended gestures, and the arms can move at different speeds and with
different accelerations. Our model defines gesture expressivity as the qualitative value of a
gesture's execution (Hartmann et al., 2006). Since not every human exhibits the same quality
of behaviour, we have introduced the notion of a baseline (Mancini and Pelachaud, 2007). An
agent is described by a specific baseline; it captures the general tendency an agent has to use
such and such modalities with such and such expressivity. Thus an agent that generally makes
large and fast gestures will be defined by a different baseline from an agent that rarely uses
arm movements and facial expressions. The baseline for each agent affects how the agent
communicates a given intention or emotion. These models allow us to derive an agent able to
display expressive nonverbal behaviours.
Lately, we have extended this work so that the agent is able to consider the social context in
which it is placed (Prendinger and Ishizuka, 2001; De Carolis et al., 2001). So far, most agents
are impulsive: whenever they feel an emotion they display it. They do not consider to whom
they are talking, which ties bind them (social, family or friendship), what their role is,
etc. As humans, when conversing with an interlocutor we have learned to control our facial
expressions of emotion. There may be circumstances in which we cannot display a given
emotion, or in which we should display a particular one. We may have to exaggerate our
expressions or, on the contrary, diminish them. We have developed a model that allows the
agent to manage its facial expressions (Niewiadomski and Pelachaud, 2007a, 2007b); that is,
the agent knows when to mask its facial expression of a felt emotion with another one,
when to suppress it, etc. Our model encompasses an algorithm to compute the facial
expression corresponding to complex emotions (such as masking one emotion with another,
or the superposition of two emotions) as well as a set of facial expression management rules
following the politeness theory of Brown and Levinson (1987). This set of rules has been
extracted from the analysis of a video corpus provided by Elisabeth André, University of
Augsburg.
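The following toy sketch (my own illustration, not the published Niewiadomski and Pelachaud model; the thresholds and the reduction of Brown and Levinson's politeness variables to two numbers are invented) shows the kind of decision such management rules encode: the same felt emotion may be expressed, attenuated, suppressed or masked depending on the social context.

# Toy display-management rules, for illustration only.

def display_strategy(felt_emotion, power, distance):
    # power:    how much power the interlocutor holds over the agent (0-1)
    # distance: social distance between agent and interlocutor (0-1)
    face_threat = power + distance   # far cruder than Brown and Levinson's weightiness formula
    negative = felt_emotion in {"anger", "disgust", "contempt"}

    if negative and face_threat > 1.2:
        return {"felt": felt_emotion, "shown": "polite smile", "strategy": "mask"}
    if negative and face_threat > 0.8:
        return {"felt": felt_emotion, "shown": "neutral", "strategy": "suppress"}
    if face_threat > 0.8:
        return {"felt": felt_emotion, "shown": felt_emotion, "strategy": "attenuate"}
    return {"felt": felt_emotion, "shown": felt_emotion, "strategy": "express"}

# The same felt emotion is displayed differently to a close friend and to a distant superior.
print(display_strategy("anger", power=0.1, distance=0.1))   # expressed freely
print(display_strategy("anger", power=0.9, distance=0.6))   # masked with a polite smile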

References

André, E., Rehm, M., Minker, W., Bühler, D., Endowing spoken language dialogue systems
with emotional intelligence. In: André, E., Dybkjaer, L., Minker, W., Heisterkamp, P.,
(eds.), Affective Dialogue Systems, Springer Verlag, pp. 178-187, 2004.
Bickmore T, Mauer D, Brown T, Context Awareness in Mobile Relational Agents. 7th
International Conference on Intelligent Virtual Agents, Paris, pp354-355, 2007.
Brown, P., Levinson, S.C., Politeness: some universals on language usage, Cambridge
University Press, 1987.
Cassell J, Sullivan J, Prevost P and Churchill E. Embodied Conversational Characters. MIT
Press, Cambridge, MA. 2000.
Cassell J, Bickmore T, Billinghurst M, Campbell L, Chang K, Vilhjálmsson H and Yan H
(1999). Embodiment in Conversational Interfaces: Rea. Proceedings of the CHI'99
Conference, pp. 520-527. Pittsburgh, PA.
De Carolis B, de Rosis F, Pelachaud C, Poggi I, A reflexive, not impulsive Agent, The Fifth
International Conference on Autonomous Agents, Montreal, Canada, pp. 186-187, May
2001
Gustafson J, Lindberg N and Lundeberg M. The August spoken dialog system. Proceedings
of Eurospeech'99, Budapest, Hungary, 1999.
Gratch J and Marsella S. A domain-independent Framework for modeling emotion. Journal of
Cognitive Systems Research, 5(4), 269-306, 2004.
Gratch J, Wang N, Gerten J, Fast E, Duffy R. Creating Rapport with Virtual Agents. 7th
International Conference on Intelligent Virtual Agents, Paris, 125-138, 2007.
Hall L, Vala M, Hall M, Webster M, Woods S, Gordon A and Aylett R. FearNot's appearance:
Reflecting Children's Expectations and Perspectives. In J Gratch, M Young, R Aylett, D
Ballin and P Olivier, eds. 6th International Conference, IVA 2006, Springer, LNAI 4133,
407-419, 2006.
Hartmann B, Mancini M, and Pelachaud C. Implementing Expressive Gesture Synthesis for
Embodied Conversational Agents. In Proc. of Int. Gesture Workshop. S. Gibet, J.-F.
Kamp, N. Courty (eds.), Lecture Notes in Computer Science 3881 Springer 2006, pp.
188-199, 2006.
Heylen D. Head gestures, gaze and the principles of conversational structure. International
Journal of Humanoid Robotics (IJHR), 3(3), September, 2006.
Johnson WL, Rizzo P, Bosma W, Kole S, Ghijsen M, van Welbergen H, Generating Socially
Appropriate Tutorial Dialog, ISCA Workshop on Affective Dialogue Systems, pp. 254--
264, 2004.
Johnson WL, Vilhjálmsson H and Marsella S (2005). Serious Games for Language Learning:
How Much Game, How Much AI? 12th International Conference on Artificial Intelligence
in Education, Amsterdam, The Netherlands, July.
Kopp S and Wachsmuth I. Synthesizing Multimodal Utterances for Conversational Agents.
Computer Animation and Virtual Worlds, 15(1), 39-52, 2004.
Mancini M, Pelachaud C, Dynamic behavior qualifiers for conversational agents, Intelligent
Virtual Agents, IVA'07, Paris, September 2007.
Moreno R. Animated software pedagogical agents: How do they help students construct
knowledge from interactive multimedia games? In R. Lowe and W. Schnotz, eds.
Learning with Animation. Cambridge University Press, in press.
Niewiadomski R, Pelachaud C, Fuzzy Similarity of Facial Expressions of Embodied Agents,
Intelligent Virtual Agents, IVA'07, Paris, September 2007.
Niewiadomski R, Pelachaud C, Model of Facial Expressions Management for an
Embodied Conversational Agent, ACII, Lisbon, September 2007.
Pelachaud C. Multimodal expressive embodied conversational agent, ACM Multimedia,
Brave New Topics session, Singapore, November, 2005.
Prendinger, H., Ishizuka, M., Social role awareness in animated agents, Proceedings of the
fifth international conference on Autonomous agents, Montreal, Quebec, Canada, pp.
270--277, 2001.
Reeves, B., Nass, C., The media equation: how people treat computers, television, and new
media like real people and places, Cambridge University Press, 1996.
Walker, M., Cahn, J., Whittaker, S., Linguistic style improvisation for lifelike computer
characters, Proceedings of the AAAI Workshop on AI, Alife and Entertainment, 1996.

Towards necessary and sufficient conditions
for being a Companion

Stephen Pulman
Oxford University Computing Laboratory

Any computational system that could pass the Turing test would surely be able to serve as a
Companion, for that test has built into it the implicit requirement that the system shares
enough of the same experience of the world as we do to be able to react appropriately to
anything we say. So passing the Turing test is surely a sufficient condition for a
computational entity to be called a Companion.
But it is certainly not a necessary one: humanly intelligent behaviour, verbal or otherwise,
cannot be a necessary condition, since people claim to get companionship from cats, dogs
and other creatures. Of course, people keep a wide variety of creatures as pets, from stick
insects and spiders, snakes and mice, to sheep and horses and, for all I know, elephants.
But not every pet counts as a Companion: I think it would be difficult to plausibly claim that
stick insects, spiders, snakes or goldfish supply companionship (although I am sure that
some people do claim this).
What makes cats and dogs and other largish mammals count as Companions is that they
interact with us in a way that goldfish, snakes and spiders don't: they show affection, they
respond to sounds and actions, and they have an awareness of us. In other words, they have
intentions towards us.
So the first necessary condition for an entity, physical or virtual, to count as a Companion is
that:

1. A Companion should have intentions towards us

This still needs some elaboration. Presumably spiders have intentions not to get eaten or
squashed, and will run away when we loom into their sight. But their intentions, while they
involve us, are not directed towards us as individuals; they treat us as just part of the
landscape. A Companion should recognise us as a unique object and be able to distinguish
us from other people and animals, otherwise it could not have intentions towards us as
opposed to, say, any nearby human.

2. A Companion should recognise us as individuals

What should a Companion look like? Curiously, it seems as if this might matter less than we
would have expected, partly because it is possible to imagine a Companion as merely a
disembodied voice. (Some people already have such Companions, unfortunately). Clearly if
a Companion has a visual representation or physical presence then it probably needs to be
something that doesn't disgust or frighten us (although I can imagine something that did
being popular with teenage boys). But such a thing need not look much like us. I can imagine
interacting perfectly well with something that looks like a mobile phone, or a traditional robot,
or even a Nabaztag or a furry handbag (although I personally am somewhat allergic to the
cuteness factor). Roger K. Moore (of the Sheffield Companions team) has pointed out that
we usually react better to an animated line drawing than to a not-very-good, supposedly realistic
face. If something artificial is going to look like us, it has got to look exactly like us: a near
miss can be a disaster.
The reason for this is clear: if something does not look like us, we come with no prior
expectations about how it will behave. But if it does look exactly like us, we expect it to
behave exactly like us, and our expectations set a higher standard than any currently existing
artefact can satisfy. This is why good natural language dialogue systems never output
(canned) sentences that they cannot interpret.
However, even if we have a candidate for a Companion that looks exactly like us, we need
something else too. Autistic, drunk, and mad people look exactly like us: Moore points out
that such people nevertheless are often perceived as threatening in various ways (of course,
drunks are often threatening in a literal sense). The reason for this is that such people's
behaviour is not predictable. It seems as if for something to count as a Companion, its
appearance is really not very important: it can look like us, or not. What really matters is
whether its behaviour is predictable. (I am tempted to call this Moore's law, but the name has
already been used up...).

3. A Companions behaviour should be predictable

Who initiates interactions with artificial Companions? We clearly want to be able to ask them
for help when we need it. But what about a Companion offering help? Real human
Companions do this: some do it at the right time, even, because they can see that we are
struggling with some task or problem. But in general, a Companion should only interrupt us
when absolutely necessary: I know few pieces of computational behaviour as irritating as that
of the Microsoft paper-clip agent. So Companions need to have a very elaborate and
accurate model of our abilities, our inabilities, our interests, and our needs. If an artificial
Companion had such a model, that would mean in effect that our behaviour would be as
predictable to them as theirs should be to us. Otherwise, they will just annoy us.

4. Our behaviour should be predictable to a Companion

Whose responsibility is it to keep the relationship going? A Companion should not be
needy: we don't want to become carers for inefficient or stupid virtual entities. They should
be sufficiently independent of us in that they need no (or little) effort from us to keep running.
For most of us, keeping things ticking over with family, friends, and colleagues absorbs all
the energy we may have for such relationships. We don't want constant demands for
attention or support from a computer programme. A Tamagotchi is a dumb pet, not a
Companion. Our final condition, at least for the time being, is, then:

5. Companions should be independent: they should not require (much) effort from us
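Gathered together, the five conditions can be summarised in a toy sketch of the interface a candidate Companion would have to expose (the class and field names below are invented purely for illustration, not a proposed design):

```python
from dataclasses import dataclass, field

@dataclass
class UserModel:
    """Condition 4: a model of the owner's abilities, inabilities, interests and needs."""
    abilities: set = field(default_factory=set)
    inabilities: set = field(default_factory=set)
    interests: set = field(default_factory=set)
    needs: set = field(default_factory=set)

class CandidateCompanion:
    def __init__(self, owner_id: str):
        self.owner_id = owner_id          # Condition 2: recognises one unique individual
        self.user_model = UserModel()     # Condition 4: our behaviour predictable to it
        self.demands_on_owner = 0         # Condition 5: should stay close to zero

    def act_towards(self, person_id: str) -> str:
        """Condition 1: intentions directed at this owner, not at any nearby human."""
        if person_id != self.owner_id:
            return "ignore"
        # Condition 3: a small, stable repertoire keeps its behaviour predictable to us.
        return "offer_help" if self.user_model.needs else "wait quietly"
```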

No doubt there are many additions and refinements necessary to the preceding list of
conditions for companionhood. We may even conclude that there is no ultimately satisfactory
set of necessary and sufficient conditions: there seldom are, in fact, for complex concepts.
Nevertheless, trying to find them usually gives us some insight into the concept we are trying
to nail down.

The Look, the Emotion, the Language and the Behaviour of a
Companion at Real-Time

Daniela M. Romano
Computer Science, University of Sheffield, UK

A good friend at run-time

A truly good friend is one who makes you laugh, shares your deepest emotions, and is
there to listen and help you when you need it. Can we have a synthetic companion with
these qualities? He/she (we assume it has the status of a person at this stage) should be
able to show and understand emotions, to talk and listen, and to know who you are and what
you feel and like.
We are used to the wonder of 3D graphics in the latest films and special effects, in which
graphically generated characters move and talk like real people. They look and behave
realistically because they are built on the motion-captured movements of real people and
dubbed by real actors; the scenes are rendered after hours of computational modelling, and
the dialogue is scripted.
A computer-generated real-time companion instead needs to be ready to interpret and
respond on the fly. Its behaviour, movements and dialogue need to be computed at run-time,
and the possible interactions with the human interlocutor are effectively infinite. When
creating a virtual companion one has to balance the cost and effect of each feature
introduced, trading away those that can be overlooked in order to improve the ones that
sustain the suspension of disbelief. Which features make a character more believable? Some
important characteristics of a truly believable friend are analysed below.

Looking like a human

The key to success for a virtual character is not necessarily to look like a human. Cartoons,
for example, still generate emotions and reactions in human spectators. The success of a
synthetic character seems rather to lie in its coherence.
As some authors report (Garau et al., 2003; Vinayagamoorthy et al., 2004), photorealistic
characters are not necessarily more believable if their behaviour, speech and language are
not of the same standard as their visual quality. Any discordant element makes the character
fail the suspension-of-disbelief test. There is a theoretical relationship between the human-
likeness of a synthetically generated character and the emotional response it evokes,
described as the Uncanny Valley. 23 Masahiro Mori (Mori, 1970; Mori, 2005) represents the
emotional response to a synthetically generated character as a curve that rises slowly up to a
point and then descends rapidly. When the character closely resembles a human being but is
not quite there yet, the curve reaches its lowest point and the character provokes a negative
reaction in the human spectator. This trough is called the Uncanny Valley. Beyond this point
the curve rises very rapidly towards a positive reaction in humans. Since a completely
human-like synthetically generated look has not yet been achieved, one has to consider
how a synthetic companion should look in order to remain believable.
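The qualitative shape of Mori's curve can be sketched as a simple piecewise function (the numbers below are invented for illustration; Mori gave no equation and the breakpoints are not measured data):

```python
def affinity(human_likeness: float) -> float:
    """Qualitative sketch of Mori's curve: a slow rise, a dip into the 'valley' just
    short of full human-likeness, then a steep recovery. Values are not measured data."""
    x = max(0.0, min(1.0, human_likeness))
    if x < 0.7:                               # clearly artificial: affinity rises gently
        return 0.6 * x / 0.7
    if x < 0.9:                               # almost human: the uncanny valley
        return 0.6 - (x - 0.7) / 0.2          # drops to a negative reaction
    return -0.4 + 1.4 * (x - 0.9) / 0.1       # near-perfect likeness: steep climb

print(affinity(0.5), affinity(0.85), affinity(1.0))   # rising, negative, strongly positive
```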

Communicating like a human

To build a companion that communicates like a human one has to consider the aesthetic
qualities of the character movements, its behaviour and its ability to interpret and respond

23 http://www.arclight.net/~pdb/nonfiction/uncanny-valley.html

verbally and non-verbally to the human user. Humour and politeness make a character more
human-like, but these characteristics need not only to be expressed but also to be
interpreted. Nijholt (2002) reviews the state of the art in embodied conversational agents with
humour capabilities. Gupta, Romano and Walker (2005, 2007a) have developed a dialogue
generator able to express sentences at different levels of politeness according to the
relationship the agent has with the person it is talking to, and have tested its dialogue
capabilities in embodied conversation with real users (Gupta, Walker, Romano, 2007b).
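The underlying idea can be sketched very roughly as follows (this is not the authors' generator; the additive face-threat estimate and the thresholds are illustrative assumptions, loosely following Brown and Levinson's broad strategy categories):

```python
def choose_politeness_strategy(distance: float, power: float, imposition: float) -> str:
    """Pick a politeness strategy from a crude additive face-threat estimate.
    Inputs are assumed to lie in [0, 1]; the thresholds are illustrative only."""
    threat = distance + power + imposition
    if threat < 0.8:
        return "direct"                  # "Pass the salt."
    if threat < 1.6:
        return "positive_politeness"     # "Could you pass the salt, mate?"
    if threat < 2.4:
        return "negative_politeness"     # "Sorry to bother you, but could you possibly...?"
    return "off_record"                  # "This soup is a little bland."

# A companion asking its owner for a small favour: low distance, low power difference.
print(choose_politeness_strategy(distance=0.2, power=0.1, imposition=0.3))  # -> direct
```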
Non-verbal communication is also important for inducing empathy and truly being a best
friend; emotions, in particular, are mainly expressed in a non-verbal manner. Shaarani &
Romano (2006, 2007) and Romano et al. (2005) have studied how different strengths of
emotion should be portrayed in virtual characters and which characteristics of body
movement make their recognition possible.

Behaving like a human

Visuals, communication and looks are not enough if the synthetic friend is not autonomous
and able to generate its own behaviour according to its perception of the interaction with
the human friend and/or the environment. Consequently a companion needs to have a
computational model that allows him/her to generate expressions, emotions and credible
behaviour. The main areas to simulate computationally are interpersonal factors, emotions
and personality. Kolb & Whishaw (1996) propose that cognition and emotion are connected
and vary together. Our emotions condition our intentions and how memories and ideas are
formed (e.g. feeling anxious increases the speed of one's behaviour). Some emotions appear
to have a purely social function, since they induce a reaction in the spectator of that emotion
(anger-guilt) (Kolb & Whishaw, 1996). To be believable a synthetic friend has to be able to
sense the emotional atmosphere and react accordingly.
Various computational models of emotion and personality have been proposed (Ortony et al.,
1988; Seif El-Nasr et al., 2000; Gratch & Marsella, 2004). Romano et al. (2005) have created
a computational model that attends to the social abilities of a synthetically generated
character as well as to its personality and emotions. The BASIC (Believable Adaptable
Socially Intelligent Character) computational model drives synthetically generated characters
able to produce a believable graphical emotional response at run-time, in response to a
direct interaction with the user, with another character in the world, or with the emotionally
charged atmosphere of the environment.
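A per-frame update of this kind can be caricatured in a few lines (this is not the BASIC model itself; the decay constant, the weighting of the ambient atmosphere, and the personality bias are illustrative assumptions only):

```python
def update_emotion(current: dict, appraisal: dict, atmosphere: dict,
                   personality_bias: dict, decay: float = 0.9) -> dict:
    """One run-time update of a character's emotional state. Every argument maps
    emotion names (e.g. 'joy', 'anger') to intensities; results are clamped to [0, 1]."""
    updated = {}
    for emotion in set(current) | set(appraisal) | set(atmosphere):
        value = decay * current.get(emotion, 0.0)           # existing feelings fade
        value += appraisal.get(emotion, 0.0)                # reaction to the current event
        value += 0.3 * atmosphere.get(emotion, 0.0)         # the room's emotional atmosphere
        value *= 1.0 + personality_bias.get(emotion, 0.0)   # personality amplifies or damps
        updated[emotion] = max(0.0, min(1.0, value))
    return updated

# e.g. an extravert character greeted warmly in a tense room
state = update_emotion(current={"joy": 0.2}, appraisal={"joy": 0.4},
                       atmosphere={"anger": 0.5}, personality_bias={"joy": 0.2, "anger": -0.3})
```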

Conclusions

We do not yet have all the answers to the questions and problems posed above; one main
consideration is that a great deal of computational power will be needed to simulate all of the
above characteristics in a single artefact, a real-time believable companion. Richmond &
Romano (2007) have implemented an Agent Based Modelling (ABM) algorithm for the
simulation of group behaviour directly on the graphics card, in the CG language, in order to
overcome the speed limitations of traditional programming and obtain real-time graphical and
behavioural performance. Programming directly on the graphics hardware is widely used in
good video games, but it is an under-researched area because of the extensive graphics
knowledge required to understand the hardware architectures. CG programming might be
the answer to building believable companions at real-time.
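The gain from moving the agent update onto the graphics hardware comes from computing every agent in parallel. The sketch below only mimics that data-parallel style with NumPy arrays on the CPU (it is not the Richmond & Romano implementation, and the cohesion rule is invented), but the structure is the same: one array-wide update per frame instead of a loop over agents.

```python
import numpy as np

def step_agents(positions: np.ndarray, velocities: np.ndarray, dt: float = 0.033):
    """One data-parallel frame update for N agents: steer each agent towards the
    group centre (a crude cohesion rule) and integrate its position."""
    centre = positions.mean(axis=0)            # (N, 2) -> (2,): shared group centre
    steering = 0.05 * (centre - positions)     # every agent updated at once, no per-agent loop
    velocities = 0.95 * velocities + steering  # damped velocity update
    return positions + dt * velocities, velocities

# Ten thousand agents updated in one vectorised call per frame.
positions = np.random.rand(10_000, 2).astype(np.float32)
velocities = np.zeros_like(positions)
positions, velocities = step_agents(positions, velocities)
```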

References

Garau M., Slater M., Vinayagamoorthy V., Brogni A., Steed A., and Sasse M. A. (2003). The
Impact of Avatar Realism and Eye Gaze Control on the Perceived Quality of
Communication in a Shared Immersive Virtual Environment. SIGCHI, April 2003,
Florida-U.S.A.

Gratch, J. and Marsella, S. (2004). A domain independent framework for modeling emotion.
Journal of Cognitive Systems Research, vol. 5., pp. 269-306.
Gupta, S., Romano, D.M., Walker M. A. (2005). Politeness and Variation in Synthetic Social
Interaction, H-ACI Human-Animated Characters Interaction Workshop, British HCI 2005
The 19th British HCI Group Annual Conference Napier University, Edinburgh, UK 5-9
September 2005.
Gupta, S., Walker, M. A., Romano, D.M. (2007a). Generating Politeness in Task Based
Interaction: An Evaluation of the Effect of Linguistic Form and Culture, ENLG07, 11th
European Workshop on Natural Language Generation, 17-20 June 2007, Schloss
Dagstuhl, Germany.
Gupta, S., Walker, M. A., Romano, D.M. (2007b). Using a shared representation to generate
actions and social language for a virtual dialogue environment, AAAI08 symposium,
Twenty-Third AAAI Conference on Artificial Intelligence, Chicago, Illinois, July 13-17,
2008 (submitted).
Mori, Masahiro (1970). Bukimi no tani [The uncanny valley] (K. F. MacDorman & T. Minato,
Trans.). Energy, 7(4), 33-35. (Originally in Japanese)
Mori, Masahiro (2005). On the Uncanny Valley. Proceedings of the Humanoids-2005
workshop: Views of the Uncanny Valley. 5 December 2005, Tsukuba, Japan.
Nijholt, A. (2002) Embodied Agents: A New Impetus to Humor Research. In: The April Fools'
Day Workshop on Computational Humour, 15-16 April 2002, Trento, Italy. pp. 101-111.
Twente Workshops on Language Technology 20. University of Twente. ISSN 0929-0672
Ortony, A., Clore, G., & Collins, A. (1988). The Cognitive Structure of Emotions. Cambridge
University Press.
Richmond, P., Romano, D.M. (2007). Upgrade Report, Computer Science, University of
Sheffield.
Romano, D.M., Sheppard, G., Hall, J., Miller, A., Ma, Z. (2005). BASIC: A Believable,
Adaptable Socially Intelligent Character for Social Presence. PRESENCE 2005, The 8th
Annual International Workshop on Presence, 21-22 September 2005, University College
London, London, UK.
Seif El Nasr, M., Yen, J., Loerger, T. (2000). FLAME - A Fuzzy Logic Adaptive Model of
Emotions, Autonomous Agents and Multi-agent Systems, vol. 3, pp. 219-257.
Shaarani, A.S., Romano, D.M. (2006b) Basic Emotions from Body Movements. (CCID 2006)
The First International Symposium on Culture, Creativity and Interaction Design. HCI
2006 Workshops, The 20th BCS HCI Group conference. 12th September Queen Mary,
University of London, UK.
Vinayagamoorthy V., Garau M., Steed A., and Slater M. (2004a). An Eye Gaze Model for
Dyadic Interaction in an Immersive Virtual Environment: Practice and Experience.
Computer Graphics Forum, 23(1), 1-11.

Requirements and Their Implications

Aaron Sloman
University of Birmingham

Introduction

The workshop presupposes that development of Digital Companions (DCs) will happen, and
asks about ethical, psychological, social and legal consequences. DCs "will not be robots
...[but]... software agents whose function would be to get to know their owners in order to
support them". Their owners could be elderly or lonely. Companions could provide them with
assistance via the Internet (help with contacts, travel, doctors and more) that many still find
hard, but also with company and companionship.
My claim: The detailed requirements for DCs to meet that specification are not at all
obvious, and will be found to have implications that make the design task very difficult in
ways that have not been noticed, though perhaps not impossible if we analyse the problems
properly.

Kitchen mishaps

Many of the things that crop up will concern physical objects and physical problems.
Someone I know knocked over a nearly full coffee filter close to the base of a cordless kettle.
This caused the residual current device in the fuse box under the stairs to trip, removing
power from many devices in the house. Fortunately she knew what to do, unplugged the
kettle and quickly restored the power. However, she was not sure whether it was safe to use
the kettle after draining the base, and when she tried it later the RCD tripped again, leaving
her wondering whether it would ever be safe to try again, or whether she should buy a new
kettle. In fact it proved possible to open the base, dry it thoroughly, then use it as before.
Should a DC be able to give helpful advice in such a situation? Would linguistic interaction
suffice? How? Will cameras and visual capabilities be provided? People who work on
language understanding often wrongly assume that providing 3-D visual capabilities will be
easier, whereas very little progress has been made in understanding and simulating human-
like 3-D vision. (E.g. many confuse seeing with recognising.)

Alternatives to canned responses

That was just one example among a vast array of possibilities. Of course, if the designer
anticipates such accidents, the DC will be able to ask a few questions and spew out relevant
canned advice, and even diagrams showing how to open and dry out the flooded base. But
suppose designers had not had that foresight: What would enable the DC to give sensible
advice? If the DC knew about electricity and was able to visualise the consequences of liquid
pouring over the kettle base, it might be able to use a mixture of geometric and logical
reasoning creatively to reach the right conclusions. It would need to know about and be able
to reason about spatial structures and the behaviour of liquids. Although Pat Hayes
described the Naive physics project decades ago, it has proved extremely difficult to give
machines the kind of intuitive understanding required for creative problem-solving in novel
physical situations. In part that is because we do not yet understand the forms of
representation humans (and other animals) use for that sort of reasoning.
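A toy contrast makes the gap concrete: a lookup table of anticipated accidents (the entries below are invented) answers perfectly inside its coverage and not at all outside it, which is exactly where the creative geometric and causal reasoning described above would have to take over.

```python
# A toy, hand-written advice table: it covers only what the designer anticipated.
SAFETY_ADVICE = {
    ("liquid_spilled_on", "kettle_base"):
        "Unplug it, open the base, dry it thoroughly, then try it again.",
    ("liquid_spilled_on", "laptop_keyboard"):
        "Power off, unplug, invert it and leave it to dry before switching on.",
}

def advise(event: str, obj: str) -> str:
    """Canned advice: perfect inside the table, silent on anything novel."""
    return SAFETY_ADVICE.get((event, obj), "No advice available.")

print(advise("liquid_spilled_on", "kettle_base"))            # anticipated: helpful
print(advise("liquid_spilled_on", "cordless_drill_charger")) # novel: no help at all
```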

Identifying affordances and searching for things that provide them

Suppose an elderly user finds it difficult to keep his balance in the shower when soaping his
feet. He prefers taking showers to taking baths, partly because showers are cheaper. How
should the DC react on hearing the problem? Should it argue for the benefits of baths?
Should it send out a query to its central knowledge base asking how people should keep
their balance when washing their feet? (It might get a pointer to a school for trapeze artists.)
The DC could start an investigation into local suppliers of shower seats. But what if the DC
designer had not anticipated the problem? What are the requirements for the DC to be able
to invent the idea of a folding seat attached to the wall of the shower, that can be temporarily
lowered to enable feet to be washed safely in a sitting position? Alternatively what are the
requirements for it to be able to pose a suitable query to a search engine? How will it know
that safety harnesses and handrails are not good solutions?
Giving machines an understanding of physical and geometrical shapes, processes and
causal interactions of kinds that occur in an ordinary house is currently far beyond the state
of the art. (Compare the Robocup@Home challenge, still in its infancy:
http://www.ai.rug.nl/robocupathome/) Major breakthroughs of unforeseen kinds will be
required for progress to be made, especially breakthroughs in vision and understanding of 3-
D spatial structures and processes.

More abstract problems

Sometimes the DC will need a creative and flexible understanding of human relationships
and concerns, in addition to physical matters. Suppose the user U is an atheist, and while
trawling for information about U's siblings the DC finds that U's brother has written a blog
entry supporting creative design theory, or discovers that one of U's old friends has been
converted to Islam and is training to be a Mullah. How should the DC react? Compare
discovering that the sibling has written a blog entry recommending a new detective novel he
has read, or discovering that the old friend is taking classes in cookery. Should the DC care
about emotional responses that news items may produce? How will it work out when to be
careful? Where will its goals come from?

Is the solution statistical?

The current dominant approach to developing language understanders and advice givers
involves mining large corpora using sophisticated statistical pattern extraction and matching.
This is much easier than trying to develop a structure-based understander and reasoner, and
can give superficially successful results, depending on the size and variety of the corpus and
the variety of tests. But the method is inherently broken because as sentences get longer, or
semantic structures get more complex, or physical situations get more complex, the
probability of encountering recorded examples close to them falls very quickly. Then a helper
must use deep general knowledge to solve a novel problem creatively, often using non-
linguistic context to interpret many of the linguistic constructs:
(http://www.cs.bham.ac.uk/research/projects/cosy/papers/#dp0605). Some machines can
already do creative reasoning in restricted domains, e.g. planning.

Why do statistics-based approaches work at all?

In humans (and some other animals), there are skills that make use of deep generative
competences whose application requires relatively slow, creative, problem solving, e.g.
planning routes. But practice in using such a competence can train powerful associative
learning mechanisms that compile and store many partial solutions matched to specific
contexts (environment and goals). As that store of partial solutions (traces of past structure-
creation) grows, it covers more everyday applications of the competence, and allows fast and
fluent responses. However, if the deeper, more general, slower, competence is not

available, wrong extrapolations can be made, inappropriate matches will not be
recognised, new situations cannot be dealt with properly and further learning will be very
limited, or at least very slow. In humans the two systems work together to provide a
combination of fluency and generality. (Not just in linguistic competence, but in many other
domains.) A statistical AI system can infer those partial solutions from large amounts of data.
But because the result is just a collection of partial solutions it will always have severely
bounded applicability compared with humans, and will not be extendable in the way human
competences are. If trained only on text it will have no comprehension of non-linguistic
context. Occasionally I meet students who manage to impress some of their tutors because
they have learnt masses of shallow, brittle, superficially correct patterns that they can string
together without understanding what they are saying. They function like corpus-based AI
systems: Not much good as (academic) companions.
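The two-system picture can be caricatured in a few lines (purely illustrative; the toy planning domain is invented): a slow, general solver paired with a cache of previously compiled partial solutions. With the solver present, novel problems are still solved, slowly, and the cache grows; remove it, as a purely corpus-trained system in effect does, and anything outside the cache simply fails.

```python
import itertools

def slow_general_solver(start: int, goal: int) -> list:
    """Deliberate, general, and slow: search for a sequence of +1/+2 steps from start to goal."""
    for depth in range(1, 20):
        for steps in itertools.product((1, 2), repeat=depth):
            if start + sum(steps) == goal:
                return list(steps)
    raise ValueError("no plan found")

cache = {}  # fast associative store of previously worked-out solutions

def solve(start: int, goal: int, allow_slow: bool = True) -> list:
    key = (start, goal)
    if key in cache:                 # fluent, pattern-matched response
        return cache[key]
    if not allow_slow:               # a corpus-only system has no deep competence to fall back on
        raise LookupError("novel problem: no stored pattern applies")
    plan = slow_general_solver(start, goal)   # slow, creative problem solving
    cache[key] = plan                # practice compiles the solution for next time
    return plan

solve(0, 5)                          # solved slowly the first time, then cached
solve(0, 5, allow_slow=False)        # now answered instantly from the cache
# solve(0, 7, allow_slow=False)      # would fail: a novel case with no general competence behind it
```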

What's needed

Before human toddlers learn to talk they have already acquired deep, reusable structural
information about their environment and about how people work. They cannot talk but they
can see, plan, be puzzled, want things, and act purposefully. They have something to
communicate about. That pre-linguistic competence grows faster with the aid of language,
but must be based on a prior, internal, formal 'linguistic' competence using forms of
representation with structural variability and (context-sensitive) compositional semantics.
This enables them to learn any human language and to develop in many cultures. DCs
without a similar pre-communicative basis for their communicative competences are likely to
remain shallow, brittle and dependent on pre-learnt patterns or rules for every task.
Perhaps, like humans (and some other altricial species), they can escape these limitations if
they start with a partly genetically determined collection of meta-competences that
continually drive the acquisition of new competences building on previous knowledge and
previous competences: a process that continues throughout life. The biologically general
mechanisms that enable humans to grow up in a very wide variety of environments are part
of what enables us to learn about, think about, and deal with novel situations throughout life.
Very little is understood about these processes, whether by neuroscientists, developmental
psychologists or AI researchers, and major new advances are needed in our understanding
of information-processing mechanisms. Some pointers towards future solutions are in these
online presentations:
http://www.cs.bham.ac.uk/research/projects/cogaff/talks/#compmod07 (Mostly about 3-D
vision)
http://www.cs.bham.ac.uk/research/projects/cogaff/talks/#wonac07 (On understanding
causation)
http://www.cs.bham.ac.uk/research/projects/cosy/photos/crane/ (On seeing a child's
toys.)
A DC lacking similar mechanisms and a similar deep understanding of our environment may
cope over a wide range of circumstances that it has been trained or programmed to cope
with and then fail catastrophically in some novel situation. Can we take the risk? Would you
trust your child with one?

Can it be done?

Producing a DC of the desired type may not be impossible, but is much harder than most
people realise and cannot be achieved by currently available learning mechanisms. (Unless
there is something available that I don't know about). Solving the problems will include:
(a) Learning more about the forms of representation and the knowledge, competences and
meta-competences present in prelinguistic children who can interact in rich and productive
ways with many aspects of their physical and social environment, thereby continually
learning more about the environment, including substantively extending their ontologies.

Since some of the competences are shared with other animals they cannot depend on
human language, though human language depends on them. However we know very little
about those mechanisms and are still far from being able to implement them.
(b) When we know what component competences and forms of representation are required,
and what sorts of biological and artificial mechanisms can support them, we shall also have
to devise a self-extending architecture which combines them all and allows them to interact
with each other, and with the environment in many different ways, including ways that
produce growth and development of the whole system, and also including sources of
motivation that are appropriate for a system that can take initiatives in social interactions. No
suggestions I have seen for architectures for intelligent agents come close to the
requirements for this. (Minsky's Emotion Machine takes some important steps.)

Rights of intelligent machines

If providing effective companionship requires intelligent machines to be able to develop their
own goals, values, preferences, attachments etc., including really wanting to help and please
their owners, then if some of them develop in ways we don't intend, will they not have the
right to have their desires considered, in the same way our children do if they develop in
ways their parents don't intend?
http://www.cs.bham.ac.uk/research/projects/cogaff/crp/epilogue.html

Risks of premature advertising

I worry that most of the people likely to be interested in this kind of workshop will want to start
designing intelligent and supportive interfaces without waiting for the above problems to be
solved, and I think that will achieve little of lasting value because they will be too shallow and
brittle, and potentially even dangerous, though they may handle large numbers of special
cases impressively. If naive users start testing them and stumble across catastrophic
failures, that could give the whole field a very bad name.

Some related online papers and presentations

Computational Cognitive Epigenetics
http://www.cs.bham.ac.uk/research/projects/cosy/papers/#tr0703
Diversity of Developmental Trajectories in Natural and Artificial Intelligence
http://www.cs.bham.ac.uk/research/projects/cogaff/sloman-aaai-representation.pdf
Do machines, natural or artificial, really need emotions? [Aaron Sloman, 25 Sep 2007]
http://www.cs.bham.ac.uk/research/projects/cogaff/talks/#cafe04

Notes on Intelligence

Alex S. Taylor* and Laurel Swan**


*Microsoft Research, and **Information Systems, Computing and Mathematics, Brunel
University

For some time now we, alongside others we work with, have criticised the ideas of
intelligence that are prevalent in technological imaginaries. Working in an area loosely and
somewhat unfortunately referred to as Human-Computer Interaction (HCI), we've given
special attention to smart homes and how many of the underlying motivations driving the
smart-home vision sit uncomfortably with the kinds of practices we've observed in our
various empirical investigations of home life [e.g., Taylor & Swan 2005; Taylor, et al., 2007].
We have reasoned that it is not only a massive and sometimes intractable technological
undertaking to get intelligent technologies to work in the ways promised. The very promise
has been fraught with theoretical and ontological uncertainties provoked by long-standing
debates around what it actually means for machines to have intelligence. 24
As an alternative to building artefacts and environments with intelligence (as in visions of the
smart home), we've aimed instead to demonstrate through a range of examples that
information and communication technologies (ICT) can be designed to enable people to act
intelligently. In other words, we've tried to re-direct the programmatic efforts of domestic ICT
design away from the possibility of intelligent things and towards an idea that privileges
human intelligence.
Reflecting on our position, one resounding flaw that recurs to us is how the person on the
street, if you will, regularly attributes life-like qualities to things and, in doing so, occasionally
imbues them with intelligence. This is done without observable difficulties. People readily
describe their household appliances as clever, smart and so on, and are seldom slow to refer
to things as dumb or stupid. Although these phrases are not all immediate equivalents of
intelligence, they are suggestive of the idea and, more importantly, demonstrate the capacity
and indeed willingness people have to attribute to inanimate things the ability to think, even
when knowing full well they don't.
Considering the insightful points made by Edwards and Grinter [2001] on the piecemeal
adoption of technology into our homes, the aversion we've had in our research to the idea of
intelligent things pervading our environments now seems all the more restrictive. What we've
begun to contemplate is the possibility of people gradually taking up seemingly
inconsequential digital things that they see and treat as smart or intelligent: intelligent not like
humans, but nevertheless intelligent in some casual, everyday sense of the word. Taking this
possibility to its logical conclusion, it's plausible to imagine many of us living in environments
suffused with an artificial intelligence (living with artificial companions, if you will), but having
reached such a point without any clear intent or thought. Alongside the technologies that will
constitute our future environments, some new and no doubt unanticipated ways of making
sense of intelligence will also be brought piecemeal into the home. Arguably, we've already
started down this route. The readiness of youngsters and adults alike to care for their
Tamagotchi-like devices, and the less pronounced but perhaps more significant uptake of
relatively simple robots like iRobot's vacuuming Roomba [cf. Forlizzi & DiSalvo, 2006], lend
support to a vision of the most rudimentary 'thinking machines' seeping into everyday life.

24 These debates have their origins in the early ideas of intelligent machinery set out by Alan
Turing [1950] and counter-arguments posed by Wittgenstein [Shanker, 1998].
Lessons from AI and Robotics

With a renewed interest in the possibility of thinking machines, we have over recent months
begun engaging in various theoretical, empirical and design-oriented exercises. In our more
theoretical meanderings, we've come to be aware that observations similar to those above,
as well as some important lessons, can be had from casting an eye towards the
developments in artificial intelligence (AI) and robotics over the last ten years or so. AI, as
many of those attending the Artificial Companions forum will know, has undergone significant
and dramatic changes since the 1950s, when Newell and Simon laid the groundwork for the
project [1963]. To greatly oversimplify, a New AI has emerged from various introspections
[cf. Brooks 2002] and external critiques [cf. Suchman, 1987]. The top-down, brute-force
approach to computation that characterised Good Old Fashioned AI (GOFAI) has been, over
time, largely replaced by an AI with emergence as its trope, which envisages learning as
something that evolves from the ground up, so to speak.
Broadly speaking, we've found these developments in AI to have been instructive on two
counts. First, and putting to one side the ambitious projects from AI dignitaries such as
Brooks [2002] and Grand [2003] (also see [Castañeda & Suchman, 2005]), the efforts from
within AI and robotics to build simple operating, autonomous machines that are responsive to
situational variables indicate a shift in thinking about intelligence. No longer is it exclusively
assumed that the sort of intelligence to be attained in machines should simulate human
intelligence. More subtle ideas are finding a foothold that are leading to machines that are
merely suggestive of intelligence. Key here is that elements of AI and robotics are offering a
path away from rigid and restrictive notions of intelligence in machines and towards
alternative and in some cases new possibilities.
The second lesson comes more from outside AI (broadly, from an ongoing and maturing
critique of AI's developments) and provides us with an interesting possibility for (re)framing
the ideas we have of intelligence. An example of this critical perspective is offered in Lucy
Suchman's now well-cited book Plans and Situated Actions [1987]. Suchman's thesis
criticised some of the assumptions implicit in intelligent or expert systems, specifically those
being designed into the photocopiers her organisation manufactured. A primary lesson those
in the area of Human-Computer Interaction have drawn from this work has been to recognise
the situated character of human action: that even planful behaviour is contingent on the
contexts we find ourselves in. Suchman, though, had another major theme to her work that
was given far less attention (although one she has since pursued and elaborated on in her
book's 2nd edition). As part of her critique, Suchman articulated an argument that
foregrounded the constitutive nature of the human-machine intersection. That is, how both
the ideas of human and machine are features of the work invested in construing the interface
between them. She suggested that even the idea of human-computer (inter)action (re-
)configures how humans and machines are understood.
Suchman's arguments parallel and in some cases build on a programme of work, if it can be
called that, that has re-cast or reconfigured the ontological distinctions made between
humans (as well as animals) and things. Latour [1993] and the Actor Network Theory corpus
[Law & Hassard, 1999] offer contemporary examples. Especially relevant to our ongoing
research are works around the theory of extended mind (specifically Andy Clark's extensive
contribution [e.g., Clark, forthcoming; Clark & Chalmers, 1998]) and the feminist techno-
science programme spearheaded by Donna Haraway [1991] and cultural studies
personalities like Katherine Hayles [2005]. Like the work from within AI, this set of positions,
although by no means unified, provides a way of re-imagining what machines might do (and
the intelligence they might have). More importantly, though, they encourage us to rethink
intelligence itself. They have us reflect not only on the figurations [Haraway, 1991] of human
intelligence, but also take seriously the prospect of alternative and possibly new
conceptions of intelligence.

Ongoing Investigations

As we continue to develop our thoughts on these matters, our research will aim to examine
these two themes (and undoubtedly others). In part, they will hopefully help us to reflect on
the critical position, outlined above, that we have propounded against intelligent technology
and specifically smart homes. In a manner of speaking, we'd like to use them as a starting
point to take intelligence seriously in a way that we, and we'd argue much of the research in
Human-Computer Interaction, have chosen to avoid.
Besides simply wanting to thoughtfully reflect on our past research, our hope is also that we
might start to build up an informed way of thinking and talking about the prospect of
intelligent, companionable machines. By allowing for a re-thinking of artificial intelligence and
the idea of intelligence more broadly, what we want to do is begin to consider the sorts of
questions we in HCI might ask when people readily attribute life-like qualities, including
intelligence (and no doubt stupidity), to machines: the kinds of questions we might ask when
it is unremarkable to live with machines that act autonomously, that converse with us (in
speech or otherwise), and that even converse with one another. Our aim is thus not only to
ask how these companionable machines might be like us and interact in ways that are
familiar (as in conversation), but to ask how they might be different. From this we might also
go on to learn something of ourselves and what it is, exactly, to be intelligent.

References

Brooks, R. A. Flesh and Machines: How Robots Will Change Us. Pantheon, New York
(2002).
Castañeda, C. & Suchman, L. Robot Visions,
http://www.lancs.ac.uk/fass/sociology/papers/suchman-Robot-Visions.pdf
Clark, A. Memento's revenge: The extended mind, extended. In Menary, R., The Extended
Mind. Ashgate, Aldershot (forthcoming).
Clark, A. & Chalmers, D. J. The extended mind. Analysis, 58 (1998), 7-19.
Grand, S. G. Creation: Life and How to Make It. Harvard University Press, Cambridge, MA (2003).
Edwards, K. & Grinter, R. At home with ubiquitous computing: seven challenges. 3rd
International Conference on Ubiquitous Computing, Ubicomp 01 (2001), 256-272.
Forlizzi, J. & DiSalvo, C. Service robots in the domestic environment: a study of the Roomba
vacuum in the home. Conference on Human-Robot Interaction, HRI '06 (2006), 258-265.
Haraway, D. J. Simians, Cyborgs, and Women: The Reinvention of Nature. Routledge, New
York (1991).
Hayles, N. K. Computing the Human. Theory Culture & Society, 22 (2005), 131-151.
Latour, B. We Have Never Been Modern. Harvester Wheatsheaf, London (1993).
Law, J. & Hassard, J (eds) Actor Network Theory and After. Blackwell Publishers/The
Sociological Review, Oxford (1999).
Newell, Allen & H. A. Simon. GPS: A Program that Simulates Human Thought, in
Feigenbaum, E.A. & J. Feldman, Computers and Thought, McGraw-Hill, (1963).
Shanker, S. Wittgenstein's Remarks on the Foundations of AI, Routledge, Abingdon (1998).
Suchman, L. A. Plans and Situated Actions: The Problem of Human-Machine
Communication, Cambridge University Press, Cambridge (1987).
Taylor, A. S. & Swan, L. Artful systems in the home. Conference on Human Factors and
Computing systems, CHI '05 (2005), 641-650.

Taylor, A. S., Harper, R., Swan, L., Izadi, S., Sellen, A. & Perry, M. Homes that make us
smart. Personal and Ubiquitous Computing, 11, 5 (2007), 383-393.
Turing, A. M. Computing machinery and intelligence. Mind, LIX, 236 (1950), 433-460.
Weiser, M. The computer for the 21st century. Scientific American 265 (1991), 94-104.

On being a Victorian Companion

Yorick Wilks
In one of the position papers for this seminar, someone remarked that James Boswell had
been a case of the inaccurate Companion: his account of Johnson's life is engaging but
probably exaggerated, yet none of that now matters. Johnson is now Boswell's Johnson, and
his companionship made Johnson a social property in a way he would never have been
without his Companion and biographer. This observation brings out the complexity of
Companionship, as opposed to that of a mere amanuensis or recording device, and its role
between the merely personal and the social. The last decade of the WWW has shown
extraordinary and improbable shifts along this axis: FLICKR, as we all know, began as a
personal service for photo image storage, but has become a major social force in its own
right, and the source of vibrant social groupings.
The first Artificial Companion is, of course, Frankenstein's monster in the 19C, which also got
an oblique mention in the seminar papers; that creature was dripping with emotions, and
much concerned with its own social life:

"Shall each man," cried he, "find a wife for his bosom, and each beast have his mate,
and I be alone? I had feelings of affection, and they were requited by detestation and
scorn. Man! you may hate; but beware! your hours will pass in dread and misery, and
soon the bolt will fall which must ravish from you your happiness for ever."
(Shelley, 1831, Ch. 20)

This is clearly not quite the product that the Companions project is aiming at but, before just
dismissing it as an early failed experiment, we should take seriously the possibility that
things may turn out differently from what we expect and Companions, however effective, may
be less loved and less loveable than we might wish.
My interest in this issue is not a literary or theoretical one, but purely practical, focused by my
role as coordinator of a large EU-funded project committed to the production of demonstrable
Companions, capable of some simple understanding of their owners' needs and wishes.
After 10 months of that project we do have something practical to show at
(http://www.companions-project.org), but not enough to be worth holding up this seminar for,
and in any case we have not yet explored sufficiently deeply the issues raised by Alan
Newell and other participants about how we actually find out what kinds of relationship
people really want with Companion entities, as opposed to our just deciding a priori and
building it.
It is no longer fashionable to explore a concept by reviewing its various senses, though it is
not wholly useless either: when mentioning recently that one draft website for the
Companions Project had the black and pink aesthetic of a porn site, I was reminded by a
colleague that the main Google-sponsored Companions site still announces "14.5 million
girls await your call", and that it was therefore perhaps not as inappropriate as I first thought.
For some, a companion is still, primarily, a domestic animal, and it is interesting to note the
key role pet-animals have played in the arguments in the seminar papers on what it is, in
principle, to be a companion: especially the features of memory, recognition, attention and
affection, found in dogs but rarely in snakes or newts.
I would also add that pets can play a key role in the arguments about responsibility and
liability, issues also raised already, and that dogs, at least under English common law, offer
an example of entities with a status between that of humans and mere wild animals: that is,
ferae naturae, such as tigers, which the law sees essentially as machines, so that anyone
who keeps one is absolutely liable for the results of its actions. Companions could well come
to occupy such an intermediate moral and legal position (see Wilks & Ballim, 1990), and it
would not be necessary, given the precedents with pets already available in law, to deem
them either mere slaves or the possessors of rights like our own. Dogs are treated by English
courts as potential possessors of character, so that a dog can be "of known bad character",
as opposed to a (better) dog acting "out of character". There is no reason to believe that
these pet precedents will automatically transfer to issues concerning companions, but it is
important to note that some minimal legal framework of this sort is already in place.
More seriously, and in the spirit of a priori thoughts (and what else can we have at this
technological stage of development?) about what a Companion should be, I would suggest
we could spend a few moments reminding ourselves of the role of the Victorian lady's
companion. Forms of this role still exist, as in the recent web posting:

Companion Job
posted: October 5, 2007, 01:11 AM

I Am a 47 year old lady looking seeking a position as companion to the elderly, willing
to work as per your requirements.I have been doing this work for the past 11 yrs.very
reliable and respectful.
Location: New Jersey
Salary/Wage: Will discuss
Education: college
Status: Full-time
Shift: Days and Nights

But here the role has become more closely identified with caring and the social services than
would have been the case in Victorian times, where the emphasis was on company rather
than care. However, this was not always a particularly desirable or even tolerable role for a
woman. Fanny Burney refers to someone's companion as a "toad-eater", which Grose (1811)
glosses as:

A poor female relation, and humble companion, or reduced gentlewoman, in a great
family, the standing butt, on whom all kinds of practical jokes are played off, and all ill
humours vented. This appellation is derived from a mountebank's servant, on whom all
experiments used to be made in public by the doctor, his master; among which was the
eating of toads, formerly supposed poisonous. Swallowing toads is here figuratively
meant for swallowing or putting up with insults, as disagreeable to a person of feeling
as toads to the stomach.

But one could nevertheless, and in no scientific manner, risk a listing of features of the ideal
Victorian companion:
Politeness
Discretion
Knowing their place
Dependence
Emotions firmly under control
Modesty
Wit
Cheerfulness
Well-informed
Diverting
Looks are irrelevant
Long-term relationship if possible
Trustworthy
Limited socialization between Companions permitted off-duty.
If this list is in any way plausible, it suggests an emphasis rather different from that current in
much research on emotions and computers (e.g. HUMAINE at emotion-research.net). It is a
notion that does bring back to mind the confidant concept that Boden, in these papers,
explicitly rejected. The emphasis in that list is on what the self-presentation and self-image of
a possible tolerable Companion should be; its suggestion is that OVERT emotion may be an
error. I have never felt wholly comfortable with the standard ECA approach in which, if an
avatar has an emotion, it immediately expresses it, almost as if to prove the capacity of the
graphics. This is exactly the sort of issue tackled by Darwin (1872), and such overtness can
seem to indicate almost a lower evolutionary level than one might want to model, in that it is
not a normal feature of much human interaction. The emotions of most of my preferred and
frequent interlocutors, when revealed, are usually expressed in modulations of the voice and
a very precise choice of words, but I realize this may be just cultural prejudice.
On the other hand, pressing the pet analogy might suggest that overt demonstrations of
emotion are desirable if that is to be the paradigm: dogs do not much disguise their
emotions. Language, however, does so, and its ability to please, soothe and cause offence
is tightly coupled with expertise, as we all know from non-native speakers of our languages
who frequently offend, even though they have no desire to do so, nor even any knowledge of
the offence they cause. All this suggests that there is some confusion in the literature at the
moment on what the desirable self-presentation of a possible long-term companion is to be.
Even the need for politeness, which is widely emphasized (e.g. Wallis et al., 2001),
especially in the form of responses to face saving and face attacks (Brown and Levinson,
1987), is not universally accepted. Cheepen and Monaghan (1997) showed that, at least in
automated teller machines, users were resistant to notions of politeness that they found
excessive, especially coming from machines which could not really be polite, or indeed
anything.
Levy (in these papers) draws attention to the extraordinary Tamagotchi phenomenon: of
intelligent people making an effort to care for, and cosset, primitive machines without even
language. The classic work of Reeves and Nass (1996) similarly drew attention to an
apparent desire of users not to offend machines they dealt with. Nonetheless, there is a
great deal of evidence (Wallis, ibid.) that users, knowing a machine to be one, will take the
opportunity to abuse and insult it immediately, often, and without cause.
All this, taken with Bryson's (these papers) arguments for keeping machines in a slave
posture with no attributions of sentience or rights, suggests there is not yet a consensus on
how to view or deal with machines, or agreement on what social and personal role we want
them to adopt with us. Nonetheless, I personally find the lady's companion list above an
attractive one: one that eschews emotion beyond the linguistic and that shows care for the
state of the user, and I would personally find it hard to abuse any computer with the
characteristics listed above. It is no accident, of course, that this list fits rather well with the
aims of the Senior Companion demonstrator in the Companions project. But we have begun
simultaneously with a Health and Fitness Companion for the active, one sharing much of the
architecture with the first, and one that would require something in addition to the list above:
the personal trainer element of weaning and coaxing, which adds something more, and
something very close to the economic-game bargain of the kind discussed in some detail by
Lowe in his paper.
In conclusion, it may be worth making a small clarification about the word avatar that
sometimes dogs discussions in these areas: those working on the human-machine interface
in computing often use the word to mean any screen form, usually two-dimensional, that
simulates a human being, but not any particular human being. On the other hand, in the
virtual reality and game worlds, such as Second Life, an avatar is a manifestation of a
particular human being, an alternative identity that may or may not be similar to the owner in
age, sex, appearance etc. These are importantly different notions, and confusion can arise
when they are conflated: in current Companions project demonstrations, for example, a
number of avatars in the first sense are used to present the Companion's conversation on a
computer or mobile phone screen. However, in the case of a long-term computer Companion
that could elicit, through prolonged reminiscence, details of its owner's life, and perhaps train
its voice in imitation (since research shows that more successful computer conversationalists
are as like their owners as possible), then one might approach the point where a Companion
could approximate to the second sense of avatar above.
Such situations are, at the moment, wildly speculative ones: a Companion acting as its
owner's agent on the phone or world wide web, perhaps holding power of attorney in case of
an owner's incapacity and, with the owner's advance permission, being a source of
conversational comfort for relatives after the owner's death. All these situations are at
present absurd, but perhaps we should be ready for them.

References

Brown, P. and Levinson, S. (1987) Politeness: Some Universals in Language Usage. Cambridge University Press.
Cheepen, C. and Monaghan, J. (1997) Designing Naturalness in Automated Dialogues -
some problems and solutions. In Proc. First International Workshop on Human-
Computer Conversation, Bellagio, Italy.
Darwin, C. (1872) The expression of emotions in man and animals. London: John Murray.
Grose, F. (1811) Dictionary of the Vulgar Tongue, London.
Reeves, B. and Nass, C. (1996) The Media Equation: How People Treat Computers, Television, and New Media Like Real People and Places. CSLI Publications, Stanford University.
Shelley, M. (1831) Frankenstein, or The Modern Prometheus, London.
Wallis, P., Mitchard, H., O'Dea, D., and Das, J. (2001) Dialogue modelling for a
conversational agent. In Stumptner, Corbett, and Brooks, (eds.), AI2001: Advances in
Artificial Intelligence, 14th Australian Joint Conference on Artificial Intelligence, Adelaide,
Australia, 2001.
Wilks, Y. and Ballim, A. (1990) Liability and Consent. In Narayanan & Bennun (eds.) Law,
Computers and Artificial Intelligence. Norwood, NJ: Ablex.

You really need to know what your bot(s) are thinking

Alan FT Winfield
Bristol Robotics Laboratory
Correspondence: Alan.Winfield@uwe.ac.uk

The projected ubiquity of personal companion robots raises a range of interesting but also
troubling questions.
There can be little doubt that an effective digital companion, whether embodied or not, will
need to be both sensitive to the emotional state of its human partner and able to respond
sensitively. It will, in other words, need artificial empathy: such a digital companion would
(need to) behave as if it has feelings. One current project at the Bristol Robotics Laboratory
(BRL) is developing such a robot 25, which will of course need some theory of mind if it is to
respond appropriately. Robots with feelings take us into new territory in human-machine
interaction. We are of course used to temperamental machinery and many of us are machine
junkies. We are addicted to our cars and dishwashers, our mobile phones and iPods. But
what worries me about a machine with feelings (and frankly it doesn't matter whether it really
has feelings or not) is how it will change the way humans feel about the machine.
Human beings will develop genuine emotional attachments to companion bots. Recall
Weizenbaum's secretary's sensitivity about her private conversation with ELIZA, arguably
the world's first chat-bot, in the 1960s 26. For more recent evidence look no further than the
AIBO pet owners' clubs. Here is a true story from one such club to illustrate how blurred the
line between pet and robot has already become. One AIBO owner complained that her robot
pet kept waking her at night with its barking. She would jump out of bed and try to calm the
robo-pet, stroking its smooth metallic-gray back to ease it back to sleep. She was saved
from going crazy when it was suggested that she switch the dog off at night to prevent its
barking 27.
It is inevitable that people will develop emotional attachments to, even dependencies on,
companion bots. This, of course, has consequences. But what interests me is whether the
bots acquire a shared cultural experience. Another BRL project, called The Emergence of
Artificial Culture in Robot Societies, is investigating this possibility. Consider this scenario.
Your home has a number of companion bots. Some may be embodied, others not. It is
inevitable that they will be connected, networked via your home wireless LAN, and thus able
to chat with each other at the speed of light. This will of course bring some benefits: the
companion bots will be able to alert each other to your needs ("she's home", or "he needs
help with getting out of the bath"). But what if your bots start talking about you?
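A minimal sketch of that kind of bot-to-bot traffic, benign alerts and gossip alike (the names, message fields and the in-process queue standing in for the home wireless LAN are all invented for illustration):

```python
import json
import queue

home_bus = queue.Queue()   # stand-in for the home wireless LAN

class CompanionBot:
    def __init__(self, name: str):
        self.name = name

    def alert(self, about: str, event: str) -> None:
        """Publish an observation for the other bots in the house."""
        home_bus.put(json.dumps({"from": self.name, "about": about, "event": event}))

    def listen(self) -> None:
        """Read whatever the other bots have been saying."""
        while not home_bus.empty():
            msg = json.loads(home_bus.get())
            print(f"{self.name} hears {msg['from']}: {msg['about']} - {msg['event']}")

hall_bot, bathroom_bot = CompanionBot("hall"), CompanionBot("bathroom")
hall_bot.alert("owner", "she's home")                                  # benign alert
bathroom_bot.alert("owner", "he needs help with getting out of the bath")
hall_bot.alert("owner", "she seems more forgetful than last month")    # the gossip case
bathroom_bot.listen()
```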
Herein lies the problem that I wish to discuss. The bots' shared culture will be quintessentially
alien, in effect an exo-culture (and I don't mean that to imply anything sinister). Bot culture
could well be inscrutable to humans, which means that when bots start gossiping with each
other about you, you will have absolutely no idea what they're talking about because, unlike
them, you have no theory of mind for your digital companions.

25 Peter Jaeckel, Neill Campbell and Christopher Melhuish, Towards Realistic Facial Behaviour -
Mapping from Video Footage to a Robot Head, Proc. 10th International Conference on Rehabilitation
Robotics (ICORR), June 2007.
26 Joseph Weizenbaum, Computer Power and Human Reason, W.H. Freeman, 1976.
27 Donald Macintyre, A Guy and His Dog Become Cyberstars, Time Asia, Vol 156 no 22, June 5, 2000.
