learning topic as well as pedagogical and communicative aspects that are essential for accelerating the learning process by uniquely adapting to the learner's states [34].
Reliable detection of emotional states from neurophysiological sensors remains a challenge, particularly in the context of learning and training environments; the motivation for doing so, however, is compelling. One example, although it did not use any neurophysiological assessments of affect, demonstrates that with reliable attribution of affective states during a learning process, learning-relevant states such as boredom, frustration, confusion, flow, and delight can be inferred [35], [36]. From this work, the researchers proposed that detection of these states affords the opportunity for intelligent systems to mitigate negative affective states that act as a detriment to learning and to optimize oscillation between flow and surprise states that keep a learner both challenged and engaged [36].
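To make this sense-infer-adapt loop concrete, the following minimal Python sketch illustrates how an intelligent tutoring system might act on inferred affective states. The detect_affect placeholder and the intervention rules are illustrative assumptions, not the method of [35], [36].

```python
import random

# Learning-relevant affective states discussed in [35], [36].
STATES = ["boredom", "frustration", "confusion", "flow", "delight"]

def detect_affect(sensor_window):
    """Hypothetical affect detector. A real system would run a trained
    classifier over neurophysiological features; here it returns a
    random state purely for illustration."""
    return random.choice(STATES)

def select_intervention(state):
    """Toy policy: mitigate negative states, sustain challenge in flow."""
    if state in ("boredom", "frustration"):
        return "offer_novel_or_easier_material"
    if state == "confusion":
        return "provide_hint"
    if state == "flow":
        return "increase_difficulty_slightly"
    return "continue"  # delight: no change needed

# Adaptive loop: sense, infer the learner's state, adapt the environment.
for window in range(3):  # stand-in for a stream of sensor windows
    state = detect_affect(window)
    print(window, state, "->", select_intervention(state))
```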
Utilization of SCAN research technologies and techniques in this context is still relatively in its infancy. In fact, many studies of emotion and learning have relied on methods such as self-report or expert judgment to infer affective states (see, e.g., [36]). While only a brief overview has been provided, empirical support in SCAN is growing for the study of emotion in relation to cognition, demonstrating that emotion plays a significant role in learning, memory, attention, social understanding, and behavioral responses (see [37] for review). It is likely that only through such efforts (i.e., adopting a SCAN-based approach) can technologies truly be centered on learners and adapt the learning environment in real time. In doing so, and as advances are made in SCAN, insights will be gained to better understand how the dynamics of mental and affective states contribute to learning. This can inform the improvement of educational and training support systems by accounting for social, cognitive, affective, and behavioral processes, as well as their neurophysiological underpinnings, as individuals learn by themselves as well as when they learn in groups [14], [36]. Last, in service of developing a roadmap to help guide future research toward the integration of SCAN with learning and training, Table I provides a set of research questions for R&D in pursuit of such an integration.
IV. APPLICATIONS OF SOCIAL COGNITIVE AND AFFECTIVE NEUROSCIENCE TO HUMAN-ROBOT INTERACTION
In addition to learning and training, many of today's business, industry, and military organizations are adopting increasingly intelligent robotic technologies with which humans must interact and collaborate on a daily basis. For example, astronauts collaborate with robots and intelligent agents to complete their missions and, in the military, robots and unmanned vehicles provide military teams with better intelligence and keep them safe from harm. Although humans are increasingly interacting with these technologies, little research, until recently, has focused on developing technologies that have the ability to fully reciprocate in the interaction. To do so, such technologies must possess the social intelligence required to understand and respond appropriately to the complex social dynamics of human behavior (see, e.g.,
TABLE I
QUESTIONS FOR A RESEARCH ROADMAP TO FURTHER THE INTEGRATION OF SCAN WITH LEARNING AND TRAINING R&D
[Learning and training questions organized under three categories: Emotion, Theory of Mind, and Joint Action.]
a robot is, the more the activated brain regions correlated with those that are activated during human-human interactions (i.e., the medial frontal cortex and the right temporoparietal junction).
As a potential intermediary step in developing robots capable of displaying and recognizing the full social expressivity of humans, advances in brain-computer interfaces (BCI, sometimes referred to as brain-controlled interfaces) are relevant. One of the pioneering efforts in this area implanted electrodes into the frontal and parietal lobes of a macaque monkey's brain as a means of developing a brain-controlled robotic arm [55]. Of course, such invasive techniques are not typically employed in humans, but advances have been made in using EEG coupled with signal processing techniques to control robotic arms [56], as well as a means for controlling the navigation of mobile robots (see [57] for review). More relevant for the present purposes, some work in BCI has focused on incorporating the detection of emotions through EEG into human-machine systems [58]. While the reliable detection of emotions from EEG is limited and requires extensive training of the algorithms, the technique highlights the possibility for a machine to possess such a beneficial capability. With detection capabilities realized, the next logical step is control. In this sense, the detection of emotion or other socially relevant states can be utilized as an input control mechanism for robots to display certain social cues.
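As an illustration of this detection-to-control idea, the following minimal Python sketch maps the output of a toy EEG "emotion" classifier onto a robot display command. The band-power features, the classification rule, and the cue names are illustrative assumptions; a working BCI would require the training and signal processing described in [56] and [58].

```python
import numpy as np

def bandpower_features(eeg_window, fs=256):
    """Crude band-power features from a 1-D EEG window via FFT.
    A real pipeline would use multiple channels, artifact rejection,
    and a trained classifier."""
    freqs = np.fft.rfftfreq(eeg_window.size, d=1.0 / fs)
    power = np.abs(np.fft.rfft(eeg_window)) ** 2
    bands = {"alpha": (8, 13), "beta": (13, 30)}
    return {name: power[(freqs >= lo) & (freqs < hi)].mean()
            for name, (lo, hi) in bands.items()}

def classify_emotion(features):
    """Hypothetical rule standing in for a trained classifier:
    relatively high beta power is read as arousal/engagement."""
    return "engaged" if features["beta"] > features["alpha"] else "calm"

def cue_for(emotion):
    """Map the detected state to a social cue for the robot to display."""
    return {"engaged": "lean_forward_and_nod", "calm": "neutral_posture"}[emotion]

# Example: one second of synthetic EEG drives one display command.
window = np.random.randn(256)
print(cue_for(classify_emotion(bandpower_features(window))))
```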
At a more general level, a useful framework for furthering research along these lines is that of distinguishing social cues and signals (see, e.g., [59]). This is helpful for understanding the social exchange that occurs during human interaction with robots because it specifies social cues as those that are observable to humans, and conceivably robots, in the environment and links these with unobservable social signals that are indicative of mental states. An example of social cues here is lips in the shape of a smile, bared teeth, and wide eyes, which could be interpreted as the social signal of happiness. Recent research has identified the social cues that are indicative of untrustworthy behavior in human-human interactions and then programmed a robot to display these cues, which resulted in a similar distrust of the robot [60]. Other work has shown that manipulating the social cues a robot displays, even when that robot is minimally expressive (nonhumanoid), affects the types of intentional and emotional states (i.e., social signals) that are attributed to it [59], [61]. However, such investigations have not yet incorporated neurophysiological assessments. In doing so, further levels of specificity may be gained with regard to understanding the ways in which humans perceive and ultimately understand robots [40]. While a detailed framework for how to conduct the type of research suggested here is beyond the scope of this paper, an outline for conducting research within this framework is forthcoming [62].
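To make the cue/signal distinction concrete, the brief Python sketch below represents it as a mapping from observable cues to the unobservable signals they are taken to indicate. The specific cue-signal pairs and function names are hypothetical illustrations, not the framework being developed in [62].

```python
# Observable social cues (detectable by humans, and conceivably robots)
# mapped to the unobservable social signals (mental states) they are
# taken to indicate. The pairs below are illustrative assumptions, not
# empirical results from [59]-[61].
CUE_TO_SIGNAL = {
    ("smiling_lips", "bared_teeth", "wide_eyes"): "happy",
    ("averted_gaze", "increased_distance"): "discomfort",
}

def infer_signals(observed_cues):
    """Return every signal whose full cue set appears in the observation."""
    observed = set(observed_cues)
    return [signal for cues, signal in CUE_TO_SIGNAL.items()
            if observed.issuperset(cues)]

print(infer_signals(["smiling_lips", "bared_teeth", "wide_eyes", "nodding"]))
# -> ['happy']
```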
In addition, SCAN is needed both to inform the design of computational models of social cognition in robots and to empirically evaluate the physiological mechanisms employed when humans interact with such robots. In doing so, technological systems can be developed that are able to understand the humans that operate or work with them, as well as to appropriately convey the social cues necessary for humans to easily understand those systems. This area of research is just
TABLE II
QUESTIONS FOR A RESEARCH ROADMAP TO FURTHER THE INTEGRATION OF SCAN WITH HRI R&D
[Human-robot interaction questions organized under three categories: Emotion, Theory of Mind, and Joint Action.]
The last example application of SCAN to the design of technologies is to enhance team performance in complex domains. The goal here is to draw attention to novel constructs worth examining neurophysiologically to serve as another metric of team performance and, at some point, a source of real-time performance feedback. Such efforts can, in turn, revolutionize the way in which teams interact when they have an enriched knowledge of each other's mental and emotional states as they dynamically shift throughout the execution of complex tasks.

Traditionally, team performance and team cognition research has relied upon a knowledge-based approach that posits that teams require shared mental models of their task and of their team in order for effective coordination to occur [63]. However, relying solely on the knowledge of the team at any given
TABLE III
QUESTIONS FOR A RESEARCH ROADMAP TO FURTHER THE INTEGRATION OF SCAN WITH TEAM PERFORMANCE R&D
[Team performance questions organized under three categories: Emotion, Theory of Mind, and Joint Action.]
[34] R. Sottilare and B. Goldberg, "Designing adaptive computer-based tutoring systems to accelerate learning and facilitate retention," Cognit. Tech., vol. 17, no. 1, pp. 19–33, 2012.
[35] S. D'Mello, "Monitoring affective trajectories during complex learning," in Encyclopedia of the Sciences of Learning, 2012, pp. 2325–2328.
[36] S. D'Mello and A. Graesser, "Dynamics of affective states during complex learning," Learn. Instruction, vol. 22, no. 2, pp. 145–157, 2012.
[37] E. A. Phelps, "Emotion and cognition: Insights from studies of the human amygdala," Annu. Rev. Psychol., vol. 57, pp. 27–53, 2006.
[38] T. J. Wiltshire, D. C. Smith, and J. R. Keebler, "Cybernetic teams: Towards the implementation of team heuristics in HRI," in Virtual, Augmented and Mixed Reality: Designing and Developing Augmented and Virtual Environments. Berlin, Germany: Springer, 2013, pp. 321–330.
[39] R. I. Dunbar and S. Shultz, "Evolution in the social brain," Science, vol. 317, no. 5843, pp. 1344–1347, 2007.
[40] E. J. C. Lobato, T. J. Wiltshire, and S. M. Fiore, "A dual-process approach to human-robot interaction," in Proc. Hum. Factors Ergonom. Soc. Annu. Meet., vol. 57, 2013, pp. 1263–1267.
[41] T. J. Wiltshire, D. Barber, and S. M. Fiore, "Towards modeling social cognitive mechanisms in robots to facilitate human-robot teaming," in Proc. Hum. Factors Ergonom. Soc. Annu. Meet., vol. 57, 2013, pp. 1273–1277.
[42] E. Phillips, S. Ososky, J. Grove, and F. Jentsch, "From tools to teammates: Toward the development of appropriate mental models for intelligent robots," in Proc. Hum. Factors Ergonom. Soc. Annu. Meet., 2011, pp. 1481–1485.
[43] T. J. Wiltshire, E. J. C. Lobato, F. G. Jentsch, and S. M. Fiore, "Will (dis)embodied LIDA agents be socially interactive?" J. Artif. Gen. Intell., vol. 4, no. 2, pp. 42–47, 2013.
[44] U. Kurup and C. Lebiere, "What can cognitive architectures do for robotics?" Biol. Inspired Cognit. Archit., vol. 2, pp. 88–99, 2012.
[45] E. I. Barakova and T. Lourens, "Mirror neuron framework yields representations for robot interaction," Neurocomputing, vol. 72, pp. 895–900, 2009.
[46] S. Gallagher, "Direct perception in the intersubjective context," Consciousness Cognit., vol. 17, no. 2, pp. 535–543, 2008.
[47] G. Pezzulo, M. Candidi, H. Dindo, and L. Barca, "Action simulation in the human brain: Twelve questions," New Ideas Psychol., vol. 31, no. 3, pp. 270–290, 2013.
[48] F. Van Overwalle, "Social cognition and the brain: A meta-analysis," Hum. Brain Mapping, vol. 30, no. 3, pp. 829–858, 2009.
[49] M. R. Wilkinson, L. J. Ball, and R. Cooper, "Arbitrating between theory-theory and simulation theory: Evidence from a think-aloud study of counterfactual reasoning," in Proc. 32nd Annu. Conf. Cognit. Sci. Soc., 2010, pp. 1008–1013.
[50] M. R. Wilkinson and L. J. Ball, "Dual processes in mental state understanding: Is theorising synonymous with intuitive thinking and is simulation synonymous with reflective thinking?" in Proc. 35th Annu. Conf. Cognit. Sci. Soc., 2013, pp. 3771–3776.
[51] C. Breazeal, J. Gray, and M. Berlin, "An embodied cognition approach to mindreading skills for socially intelligent robots," Int. J. Robot. Res., vol. 28, pp. 656–680, 2009.
[52] E. Sahin, M. Cakmak, M. R. Dogar, E. Ugur, and G. Ucolik, "To afford or not to afford: A new formalization of affordances toward affordance-based robot control," Adapt. Behav., vol. 15, no. 4, pp. 447–472, 2007.
[53] L. M. Oberman, J. P. McCleery, V. S. Ramachandran, and J. A. Pineda, "EEG evidence for mirror neuron activity during the observation of human and robot actions: Toward an analysis of the human qualities of interactive robots," Neurocomputing, vol. 70, no. 13, pp. 2194–2203, 2007.
[54] S. Krach, F. Hegel, B. Wrede, G. Sagerer, F. Binkofski, and T. Kircher, "Can machines think? Interaction and perspective taking with robots investigated via fMRI," PLoS ONE, vol. 3, no. 7, pp. 1–11, 2008.
[55] J. Carmena, M. Lebedev, R. Crist, J. O'Doherty, D. Santucci, D. Dimitrov, P. Patil, C. S. Henriquez, and M. Nicolelis, "Learning to control a brain-machine interface for reaching and grasping by primates," PLoS Biol., vol. 1, no. 2, pp. 193–208, 2003.
[56] J. R. Millan, F. Renkens, J. Mourino, and W. Gerstner, "Noninvasive brain-actuated control of a mobile robot by human EEG," IEEE Trans. Biomed. Eng., vol. 51, no. 6, pp. 1026–1033, Jun. 2004.
[57] L. Bi, X. A. Fan, and Y. Liu, "EEG-based brain-controlled mobile robots: A survey," IEEE Trans. Human-Mach. Syst., vol. 43, no. 2, pp. 161–176, Mar. 2013.
[58] G. Molina, T. Tsoneva, and A. Nijholt, "Emotional brain-computer interfaces," Int. J. Auton. Adapt. Commun. Syst., vol. 6, no. 1, pp. 9–25, 2013.
[59] S. M. Fiore, T. J. Wiltshire, E. J. C. Lobato, F. G. Jentsch, W. H. Huang, and B. Axelrod, "Towards understanding social cues and signals in human-robot interaction: Effects of robot gaze and proxemic behavior," Front. Psychol., vol. 4, no. 859, pp. 1–15, 2013.
[60] D. DeSteno, C. Breazeal, R. H. Frank, D. Pizarro, J. Baumann, L. Dickens, and J. J. Lee, "Detecting the trustworthiness of novel partners in economic exchange," Psychol. Sci., vol. 23, no. 12, pp. 1549–1556, 2012.
[61] T. J. Wiltshire, E. J. C. Lobato, A. Wedell, W. Huang, B. Axelrod, and S. M. Fiore, "Effects of robot gaze and proxemic behavior on perceived social presence during a hallway navigation scenario," in Proc. Hum. Factors Ergonom. Soc. Annu. Meet., 2013, pp. 1273–1277.
[62] T. J. Wiltshire, S. L. Snow, E. J. C. Lobato, and S. M. Fiore, "Leveraging social judgment theory to examine the relationship between social cues and signals in human-robot interactions," in Proc. Hum. Factors Ergonom. Soc. Annu. Meet., to be published.
[63] J. A. Cannon-Bowers, S. I. Tannenbaum, E. Salas, and C. E. Volpe, "Defining competencies and establishing team training requirements," in Team Effectiveness and Decision Making in Organizations, R. A. Guzzo and E. Salas, Eds. San Francisco, CA, USA: Jossey-Bass, 1995, pp. 333–381.
[64] N. J. Cooke, J. C. Gorman, C. W. Myers, and J. L. Duran, "Interactive team cognition," Cognit. Sci., vol. 37, no. 2, pp. 255–285, 2013.
[65] P. Silva, J. Garganta, D. Araujo, K. Davids, and P. Aguiar, "Shared knowledge or shared affordances? Insights from an ecological dynamics approach to team coordination in sports," Sports Med., vol. 43, pp. 1–8, 2013.
[66] H. Bekkering, E. R. De Bruijn, R. H. Cuijpers, R. Newman-Norlund, H. T. Van Schie, and R. Meulenbroek, "Joint action: Neurocognitive mechanisms supporting human interaction," Topics Cognit. Sci., vol. 1, no. 2, pp. 340–352, 2009.
[67] J. Zaki and K. Ochsner, "The need for a cognitive neuroscience of naturalistic social cognition," Ann. N.Y. Acad. Sci., vol. 1167, no. 1, pp. 16–30, 2009.
[68] A. J. Strang, G. J. Funke, S. M. Russell, A. W. Dukes, and M. S. Middendorf, "Physio-behavioral coupling in a cooperative team task: Contributors and relations," J. Exp. Psychol.: Hum. Perception Perform., vol. 40, no. 1, pp. 145–158, 2014.
[69] R. Kanso, M. Hewstone, E. Hawkins, M. Waszczuk, and A. C. Nobre, "Power corrupts cooperation: Cognitive and motivational effects in a double EEG paradigm," Soc. Cognit. Affective Neurosci., vol. 9, no. 2, pp. 218–224, 2014.
[70] C. Morawetz, E. Kirilina, J. Baudewig, and H. R. Heekeren, "Relationship between personality traits and brain reward responses when playing on a team," PLoS ONE, vol. 9, no. 1, pp. 1–10, 2014.
[71] R. H. Stevens, T. L. Galloway, P. Wang, and C. Berka, "Cognitive neurophysiologic synchronies: What can they contribute to the study of teamwork?" Hum. Factors, vol. 54, no. 4, pp. 489–502, 2012.
[72] A. D. Likens, P. G. Amazeen, R. Stevens, T. Galloway, and J. C. Gorman, "Neural signatures of team coordination are revealed by multifractal analysis," Soc. Neurosci., vol. 9, no. 3, pp. 219–234, 2014.
[73] J. C. Gorman, E. E. Hessler, P. G. Amazeen, N. J. Cooke, and S. M. Shope, "Dynamical analysis in real time: Detecting perturbations to team communication," Ergonomics, vol. 55, no. 8, pp. 825–839, 2012.