IEEE TRANSACTIONS ON HUMAN-MACHINE SYSTEMS

Social Cognitive and Affective Neuroscience in Human-Machine Systems:
A Roadmap for Improving Training, Human-Robot Interaction, and Team Performance

Travis J. Wiltshire and Stephen M. Fiore
Abstract: This paper builds on recent advances in social cognitive and affective neuroscience (SCAN) and illustrates their relevance to the development of novel human-machine systems. Advances in this area are crucial for understanding and exploring the social, cognitive, and neural processes that arise during human interactions with complex sociotechnological systems. Overviews of the major areas of SCAN research, including emotion, theory of mind, and joint action, are provided as the basis for describing three applications of SCAN to human-machine systems research and development. Specifically, this paper provides three examples to demonstrate the broad interdisciplinary applicability of SCAN and the ways it can contribute to improving a number of human-machine systems through further research in this vein. These include applying SCAN to learning and training, informing the field of human-robot interaction (HRI), and, finally, enhancing team performance. The goal is to draw attention to the insights that can be gained by integrating SCAN with ongoing human-machine system research and to provide guidance to foster collaborations of this nature. Toward this end, we provide a systematic set of notional research questions for each detailed application within the context of the three major emphases of SCAN research. In turn, this study serves as a roadmap for preliminary investigations that integrate SCAN and human-machine system research.

Index Terms: Accelerated learning, human-robot interaction, joint action, social cognition, social cognitive and affective neuroscience, team performance.

Manuscript received August 19, 2013; revised March 17, 2014 and June 22, 2014; accepted July 15, 2014. This work was supported in part by the Army Research Laboratory and accomplished under Cooperative Agreement Number W911NF-10-2-0016. Views contained here are the authors' and should not be interpreted as representing official policies, either expressed or implied, of the Army Research Laboratory, the U.S. Government, or the University of Central Florida. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation herein. This paper was recommended by Associate Editor M. Dorneich.

T. J. Wiltshire is with the Institute for Simulation & Training, University of Central Florida, Orlando, FL 32826 USA (e-mail: twiltshi@ist.ucf.edu).

S. M. Fiore is with the Philosophy Department and the Institute for Simulation & Training, University of Central Florida, Orlando, FL 32826 USA (e-mail: sfiore@ist.ucf.edu).

Digital Object Identifier 10.1109/THMS.2014.2343996

I. INTRODUCTION

The integration of recent advances in social cognitive and affective neuroscience (SCAN) into the development of novel human-machine systems is an important trend for understanding and exploring the social, cognitive, and neural processes that arise during both human-human and human-machine interactions. Although SCAN is a discipline relatively in its infancy, applications from this field have the potential to substantively advance the development of future human-machine systems. More specifically, in this paper, three examples are provided to demonstrate the broad interdisciplinary applicability of SCAN and the ways in which it can contribute to improving a number of human-machine systems with further research. These include descriptions of how to 1) improve technologies to better adapt to students and trainees to facilitate better learning outcomes in shorter amounts of time; 2) inform the field of human-robot interaction (HRI) broadly by providing insights from human-human interactions to model social cognitive mechanisms in robots and by providing techniques for investigating the ways in which social cognitive mechanisms are employed when humans interact with robots; and, finally, 3) draw attention to novel constructs, worth examining neurophysiologically, which will serve as parallel metrics of team performance.
These three topics were strategically chosen because not only are they representative topics of study within human-machine systems research, but their interdisciplinary nature also illustrates the broad applicability of SCAN. Further, these examples share underlying similarities with a large proportion of human-machine systems in that their use requires training, there is often some form of machine intelligence or automation, and, typically, teams or groups are involved.
By demonstrating the interdisciplinary value of SCAN in these applications, our goal is to spur future interdisciplinary collaborations between SCAN researchers and the researchers and developers of complex human-machine systems. For the present purposes, SCAN researchers are considered those typically trained in the tools and techniques of cognitive neuroscience who aim to discover basic truths about the neural underpinnings of human social cognition and affect (see, e.g., [1] and [2]), whereas researchers and developers of human-machine systems are considered those in the fields of augmented cognition and operational neuroscience. Typically, individuals in these fields focus on coupling information from neurophysiological sensors regarding the cognitive state of a human with a complex technological system to adapt the functions of that system to augment human performance in applied settings (see, e.g., [3] and [4]). In this way, the type of integration proposed here facilitates a more ecological approach to the design of human-machine systems through a systematic balance of basic and applied research aims and techniques (see [5]).
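To make the closed-loop coupling described above concrete, the following minimal Python sketch illustrates the general pattern of mapping a neurophysiological state estimate onto a system adaptation. The feature names, thresholds, and adaptation labels are hypothetical placeholders for illustration, not elements of any particular augmented cognition system.

```python
# Minimal sketch of the closed-loop adaptation idea described above: a
# neurophysiological estimate of the operator's state drives a system
# adaptation. All names and thresholds are illustrative placeholders,
# not an API from the augmented cognition literature.

from dataclasses import dataclass

@dataclass
class OperatorState:
    workload: float    # 0.0 (low) to 1.0 (high), e.g., derived from EEG band power
    engagement: float  # 0.0 (low) to 1.0 (high)

def estimate_state(eeg_features: dict) -> OperatorState:
    """Toy state estimator; a real system would use validated indices or a
    trained classifier rather than a single band-power ratio."""
    ratio = eeg_features["beta_power"] / (
        eeg_features["alpha_power"] + eeg_features["theta_power"])
    return OperatorState(workload=min(1.0, ratio / 2.0),
                         engagement=min(1.0, ratio))

def adapt_system(state: OperatorState) -> str:
    """Map the estimated operator state to a system adaptation."""
    if state.workload > 0.8:
        return "offload_secondary_tasks"   # reduce demands when overloaded
    if state.engagement < 0.3:
        return "increase_task_challenge"   # re-engage a disengaged operator
    return "no_change"

# One pass of the loop with made-up feature values.
features = {"alpha_power": 4.0, "theta_power": 3.0, "beta_power": 9.0}
print(adapt_system(estimate_state(features)))
```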
While researchers in augmented cognition and operational neuroscience utilize many of the same neurophysiological measures as those in SCAN (e.g., electroencephalogram, functional magnetic resonance imaging, and functional near-infrared spectroscopy), so far such efforts have not emphasized the importance of the social cognitive and affective dimensions. That is, much human-machine system research focuses on the technological and interface design aspects (see [6]) and task-specific cognitive processes. Nonetheless, researchers in this area acknowledge this need and state that the influence of the individual's non-task-related states must be factored into any applied setting (see [4, p. B192]). As such, we suggest that
addressing this gap is critical given that most human-machine systems are inherently socially situated in the sense that they are composed of groups, teams, and organizations working toward their shared goals, not only through task-relevant interactions, but also through social interactions. In this way, the teams that make up human-machine systems are fundamentally social systems (see, e.g., [7]). Further, some accounts argue that cognition, generally, is socially situated and inextricably linked to the social world [8]. In other words, consideration of the social cognitive and affective dimensions as they relate to performance in human-machine systems is quite novel and represents a gap that has both theoretical and practical implications for advancing research. Accordingly, this paper is structured as follows. First, an overview of social cognition and SCAN is provided. Then, in the subsequent sections, the three example applications of SCAN, described above, are further elaborated upon. Finally, a short discussion is provided to help articulate the next steps for the integration proposed herein.
II. SOCIAL COGNITIVE AND AFFECTIVE NEUROSCIENCE

For over a decade now, a growing body of research has been


dedicated to explicating the neurophysiological underpinnings
involved in human social cognition. Specifically, social cognition is thought to encompass the perceptual, behavioral, and
cognitive processes involved in human social interactions with a
specific focus on determining how humans are able to interpret
another person's intentional, affective, and other mental states
in order to respond appropriately [9]. Social cognition, then, incorporates person perception, self-perception, and interpersonal
knowledge for use in understanding the mental states of others
to facilitate successful interactions [10], [11]. This differs from
the more general study of cognition in that the social domain
(e.g., other persons) is the object of cognition, often for the
explicit purposes of adaptive regulation of social interactions
[8].
SCAN, then, is an interdisciplinary field that seeks to explain social cognitive and affective processes and, when first
introduced as a field, adhered to three levels of analysis [12].
Specifically, SCAN seeks to explain human behavior at a social
level by accounting for constructs that are typically studied in
social psychology such as attitudes, culture, and motivations;
at a cognitive level by focusing on the ways in which social information is processed; and, finally, at the neural level through
examination of the neural substrates associated with the aforementioned social and cognitive processes [12]. Last, the study of
emotions is intertwined with both the social and cognitive levels of analysis and with the underlying neural substrates [13]. In
short, SCAN applies methods from neuroscience and integrates
these with methods from social and cognitive psychology to
study the social and emotional aspects of human cognition and
behavior.
While the above distinctions were useful for introducing
SCAN as a field, a recent overview of the work in SCAN occurring over the past decade provides greater utility in understanding this field for present purposes. Specifically, Chatel-Goldman
et al. [11] illustrate SCAN research as clustering into the

following three domains: emotion, joint action, and theory of


mind. Here, theory of mind research is akin to our prior description of social cognition in that it incorporates the study of the
cognitive processes underpinning how humans understand the
mental states of other people (see [14] and [15]). Joint action occurs "when two or more people coordinate their actions in space and time" [16, p. 59]. Finally, emotions are typically a combination of a positive or negative affective state and a certain
level of arousal (see, e.g., [17]). Indeed, these three constructs
are of critical importance to the present argument, and they are
referred back to throughout the paper when appropriate.
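To make this working definition concrete, the following small sketch represents an affective state as a valence-arousal pair, in the spirit of the circumplex model cited above [17]; the quadrant labels are a deliberate simplification for illustration only.

```python
# A small illustration of the valence-arousal view of emotion used above
# (in the spirit of the circumplex model [17]). The quadrant labels are a
# deliberate simplification for illustration only.

def describe_affect(valence: float, arousal: float) -> str:
    """valence and arousal are assumed to be normalized to [-1, 1]."""
    if valence >= 0 and arousal >= 0:
        return "positive, activated (e.g., excited)"
    if valence >= 0 and arousal < 0:
        return "positive, deactivated (e.g., calm)"
    if valence < 0 and arousal >= 0:
        return "negative, activated (e.g., frustrated)"
    return "negative, deactivated (e.g., bored)"

print(describe_affect(valence=-0.6, arousal=0.7))  # negative, activated (e.g., frustrated)
```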
Not only does SCAN research cluster around the aforementioned three dimensions (emotion, joint action, and theory of mind), but it also studies these constructs as stages of social information processing [11]. The stages range from social cognitive processes that are more implicit and automatic to those that are explicit and controlled. Indeed, as with much cognitive research, there is broad support in SCAN that social cognitive processes emerge along this implicit-to-explicit continuum (see [1] for review). While understanding the stages of social information processing is important, it is also necessary to properly situate the field of SCAN before detailing the applications of SCAN to research on human-machine systems.
Recently, there has been a shift in SCAN due to major critiques against the field. Much of SCAN research has tended to
examine the social cognitive processes of individuals passively
observing social stimuli (e.g., viewing a video of a conversation), whereas by definition, the field seeks to make claims about
social interactions involving the active exchange of social information between at least two individuals (e.g., engaging in a
conversation), if not more [18]-[20]. This distinction is an
important one because it has been a tradition in SCAN to conduct
research investigating social phenomena in isolation, partially
as a function of limitations of neuroimaging techniques. The
critical issue is that this could lead to results and explanations of
social cognitive and affective processes that are fundamentally
limited to an individual level, or at least different from true interpersonal interaction. Indeed, recent work has discussed evidence
that if those social phenomena were studied in the context of two
or more participants engaging in an actual interaction, then the
results would differ from studies conducted with participants in
isolation (see, e.g., [11], [21], [22]).
This shift in SCAN has been characterized by a focus on
understanding social cognitive and affective processes during
actual social interactions as opposed to in isolation [20]. De
Jaegher et al. [23] define these types of social interaction as:
". . . complex phenomena involving different dimensions of verbal and nonverbal behavior, varying contexts, numbers of participants, and frequently technological mediation. They involve strict timing demands, involve reciprocal and joint activity, exhibit a mixture of discrete and continuous events at different time scales and . . . it involves engagement between agents" (p. 442).

Clearly, a larger repertoire of cognitive processes is at play during actual social interaction, at least when contrasted with isolated studies of participants passively judging social stimuli.


This limitation of SCAN has only recently begun to be acknowledged and addressed in the literature. For example, a rather exhaustive outline for research on actual social interaction in SCAN was provided by Schilbach et al. [21] in what they termed "second-person neuroscience." Indeed, their description is quite similar to the previously described definition of actual social interaction, but they frame it in a way that sketches the experimental landscape for SCAN along the dimensions of participation in the interaction, engagement in the interaction, and the number of individuals involved.
Ultimately, for SCAN to be of great utility to the research and development (R&D) of novel human-machine systems, interaction-based approaches to SCAN must mature. In particular, as SCAN continues to assess and detail the ways in which humans understand each other's emotions, attribute certain mental states to others (theory of mind), and choose the behaviors they exhibit during interaction (joint action), its utility will become increasingly translatable from basic to applied research. Put succinctly, the relevance of SCAN to the R&D of human-machine systems is an enriched understanding not only of the social cognitive and affective dimensions, but also of behavioral and neural dynamics at individual as well as group and team levels [14].
SCAN can convey such knowledge through its systematic
study at social, cognitive, and neural levels; research focus
on emotions, joint action, and theory of mind; and explication of various stages of social information processing. It is the
aim of this paper to demonstrate how such an understanding
can be applied to advance the development of future human-machine systems. For each of the example applications that
follow, discussion of the SCAN dimensions is limited to emotions for learning and training, theory of mind with regard to
HRI, and joint action with regard to team performance. Our
discussion is limited in this sense to cover a breadth of applications; however, at the conclusion of each section, a systematic
set of research questions is proposed that spans the three SCAN
dimensions.
III. APPLICATIONS OF SOCIAL COGNITIVE AND AFFECTIVE
NEUROSCIENCE TO LEARNING AND TRAINING

Given that one focus area in human-machine systems design is developing programs and simulations that can be used for learning, education, and training [24], and that each of these is inherently human-centered, SCAN holds the potential to significantly advance the development of such technologies. In this section, the application of SCAN to learning, education, and training is first detailed at a broad level. Then, a specific application of SCAN to the design of a human-machine system is detailed: articulating how SCAN can inform intelligent and adaptive tutoring systems.
At a high level, findings from SCAN suggest that the cognitive
processes typically involved in education and training, such as learning, memory, attention, and decision-making, are in fact
highly interdependent with social and emotional processes [25].
That is, while typical approaches to learning emphasize the
cognitive or behavioral component, it is often the case that social

and emotional aspects are overlooked. This gap is problematic


given the interdependence of emotional and social cognitive
processes with the traditional forms of cognition involved in
learning. Indeed, Immordino-Yang and Damasio [25] provide a
compelling review of neuropsychological evidence supporting
this notion.
Importantly, to redress this gap, SCAN can provide new ways
to move beyond traditional approaches to learning and training.
Such approaches have followed the industrial model and are
designed to minimize the variance around some instructional
standard [26]. That is, each learner must meet a certain standard, as measured by knowledge-based tests, and everyone is instructed in the same way to form the basis of such knowledge. However, recent shifts in the field, such as learner- and human-centered approaches, have moved away from this traditional approach.
Learner-centered approaches to instruction and/or education
shift away from the traditional role of teacher as a lecturer
and student as a passive receiver of information to a paradigm
characterized by teacher as a mediator and student as an active engager with the information, often in collaboration with
other students [27]. Such an approach radically reconceptualizes the training and instructional environment by placing a
greater emphasis on social and cultural aspects involved in the
collaborative engagement with other learners, a greater concern
for fostering life-long learning abilities, and additionally, an effective use of technological systems to support learning [28].
It is within this shift in learning and training that SCAN research, synthesized with the design of technological systems, will contribute substantially to advancing the state of the art.
More specifically, a subfield characterized by the aforementioned aspects of learner-centered approaches is referred to as accelerated learning, which emphasizes developing novel ways to reduce training time and improve proficiency and retention, as well as putting learners on a life-long path to the development of expertise that is adaptive and flexible (see, e.g., [29]-[31]). This type of expertise is contrasted with routine expertise, in which an individual is able to fluently execute rules and procedures for their domain, but often only in a rigid way. Accelerated learning is thus desirable given that it focuses on more rapidly developing within the learner the capacities to make sense of, and perform effectively in, complex environments that are inherently dynamic and contain ill-defined problems [31], [32].
One proposed mechanism for accelerated learning is the development of computer-based tutoring systems that model cognitive and affective states of the learner to optimize the presentation of the learning material, either on the computer or through training simulations. A representative example of this is the Generalized Intelligent Framework for Tutoring [33]. These types of systems tend to rely on technologies similar to those used in SCAN, such as neuroimaging and physiological sensors, in order to assess behavioral, physiological, affective, and learning competency states as they occur during learning [34]. This model of the learner's states, in turn, feeds into a set of other modules of the system that contain domain-specific knowledge for a given


learning topic, as well as pedagogical and communicative aspects that are essential for accelerating the learning process by uniquely adapting to the learner's states [34].
Reliable detection of emotional states from neurophysiological sensors remains a challenge, particularly in the context of learning and training environments; however, the motivation for doing so is promising. One example, although it did not use any neurophysiological assessments of affect, demonstrates that, with reliable attribution of affective states during a learning process, learning-relevant states such as boredom, frustration, confusion, flow, and delight can be inferred [35], [36]. From this work, the researchers proposed that detection of these states affords the opportunity for intelligent systems to mitigate negative affective states that act as a detriment to learning and to optimize oscillation between flow and surprise states that keep a learner both challenged and engaged [36].
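To illustrate how such detection might be acted upon, the following minimal sketch shows an affect-sensitive pedagogical policy of the kind described above. The state labels follow the learning-relevant states discussed in [35] and [36], but the interventions, confidence threshold, and function names are hypothetical placeholders rather than the policy of any specific tutoring system.

```python
# Sketch of an affect-sensitive pedagogical policy along the lines described
# above. The state labels follow the learning-relevant states discussed in
# [35], [36]; the interventions and names are hypothetical placeholders, not
# the policy of any specific tutoring system.

PEDAGOGICAL_ACTIONS = {
    "boredom":     "increase difficulty or switch to a more interactive activity",
    "frustration": "offer a hint and temporarily reduce problem difficulty",
    "confusion":   "present a worked example targeting the current concept",
    "flow":        "continue the current activity without interruption",
    "delight":     "introduce the next, more challenging concept",
}

def select_intervention(detected_state: str, confidence: float,
                        threshold: float = 0.7) -> str:
    """Only intervene when the affect detector is sufficiently confident;
    otherwise keep the current instructional plan."""
    if confidence < threshold or detected_state not in PEDAGOGICAL_ACTIONS:
        return "keep current instructional plan"
    return PEDAGOGICAL_ACTIONS[detected_state]

print(select_intervention("frustration", confidence=0.85))
```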
Utilization of SCAN research technologies and techniques in this context is a practice relatively in its infancy. In fact, many studies of emotion and learning have relied on methods such as self-report or expert judgment to infer affective states (see, e.g., [36]). While only a brief overview has been provided here, empirical support in SCAN is growing for the study of emotion in relation to cognition, demonstrating that emotion plays a significant role in learning, memory, attention, social understanding, and behavioral responses (see [37] for review). It is likely that only through such efforts (i.e., adopting a SCAN-based approach) can technologies truly be centered on learners and adapt the learning environment in real time. In doing so, and as advances are made in SCAN, insights will be gained to better understand how the dynamics of mental and affective states contribute to learning. This can inform the improvement of educational and training support systems by accounting for social, cognitive, affective, and behavioral processes, as well as their neurophysiological underpinnings, as individuals learn both alone and in groups [14], [36]. Last, in service of developing a roadmap that can help guide future research toward the integration of SCAN with learning and training, Table I offers a set of research questions as a guide for R&D in pursuit of such an integration.
IV. APPLICATIONS OF SOCIAL COGNITIVE AND AFFECTIVE
NEUROSCIENCE TO HUMANROBOT INTERACTION

In addition to learning and training, many of today's business, industry, and military organizations are adopting increasingly intelligent robotic technologies with which humans must interact and collaborate on a daily basis. For example, astronauts collaborate with robots and intelligent agents to complete their missions and, in the military, robots and unmanned vehicles provide military teams with better intelligence and keep them safe from harm. Although humans are increasingly interacting with these technologies, little research, until recently, has focused on developing technologies that have the ability to fully reciprocate in the interaction. To do so, such technologies must have the social intelligence required to understand and respond appropriately to the complex social dynamics of human behavior (see, e.g., [38]).

TABLE I
QUESTIONS FOR A RESEARCH ROADMAP TO FURTHER THE INTEGRATION OF SCAN WITH LEARNING AND TRAINING R&D

Learning and Training Questions

Emotion
What are the neural correlates of learning-relevant states such as boredom, frustration, confusion, surprise, and flow?
What are the types of neurophysiological sensors that can be used to detect these?
What are the limitations that such sensors would place on various learning environments?
What are the effects of being in certain states on learning and training outcomes?
Is there an optimum temporal structuring of states to accelerate learning and, ultimately, adaptive expertise?
How should adaptive tutoring systems detect and adapt the learning experience to the optimum temporal structuring of affective states?

Theory of Mind
During what types of learning activities are neural regions associated with theory of mind active?
Towards whom does a learner engage in mental state attribution during a learning episode (e.g., instructor, other learners, characters within instructional content)?
Can the neural correlates of misattribution (i.e., misunderstanding an other) be identified?
How would (mis)understanding others' mental states affect learning outcomes?
How can theory of mind mechanisms be instantiated in adaptive and intelligent tutoring systems so as to better understand the learner?

Joint Action
During what types of learning activities are neural regions associated with joint action activated (e.g., discussion, problem-based learning, simulation-based training)?
Can similar activation of shared representations between a learner and an instructor serve as an index that material has been learned or at least understood accurately?
Can an intelligent tutoring system store expert joint action neural activation patterns and provide feedback to a learner as they practice a task?
To what degree does predictive action simulation facilitate upcoming learning performance and, if so, can an intelligent tutoring system prompt a learner to do so?

Given that human social intelligence has been shown to


be one of the driving evolutionary factors that contributes to our species' success [39], leveraging insights from SCAN for computationally modeling such social mechanisms could substantially extend the capabilities of current technologies such that HRI becomes more natural, effective, personal, and fulfilling (see, e.g., [40] and [41]).
The current state of the art in HRI is that most robots are
viewed as tools that are only used to accomplish a given task
and are primarily teleoperated systems often requiring more
than one human operator [42]. It has recently been argued that a
transition is needed in which robots will gain the social cognitive
mechanisms that are necessary to be perceived as a teammate
and, thus, be capable of reciprocal and bidirectional modes of
communication that will facilitate complex collaborations between robots and humans (see, e.g., [38] and [43]). However, the
ways of providing robots with such mechanisms are quite diverse, although one of the most promising approaches is biological inspiration (see, e.g., [44] and [45]), in other words, approaches that rely on findings from human cognitive science for guidance.
Naturally, the ways in which SCAN can inspire computational cognitive models are vast; therefore, to briefly illustrate
the point here at a high level, SCAN can inform the modeling of


social cognitive processes by guiding the design of similar types


of mechanisms in robots [41]. More generally, Kruse [4] posits that "Fundamental investigations [in operational neuroscience] will also enable artificial intelligence and cognitive computational systems to leverage how the brain actually functions for inspiration, mimicry, and modeling" (p. B191). More specifically, work in SCAN is converging on the idea that, in social interactions, humans have mechanisms for gaining an immediate understanding of others' mental states such as intentions and emotions, possess the ability to make inferences regarding others' mental states, and are able to mentally simulate situations to better understand current mental states and predict future states based on simulated responses (see, e.g., [14] and [46]-[50]). Such research, integrating findings from psychology, neuroscience, and philosophy, should prove useful in developing computational cognitive models (see, e.g., [14] and [20]).
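As one illustration of how such findings might be rendered computationally, the following sketch performs a simple Bayesian update over a partner's candidate goals given an observed action, one common way of approximating mental state inference; the goals, likelihoods, and numbers are invented for illustration and are not drawn from the cited studies.

```python
# Illustrative sketch of mental-state inference of the kind such computational
# cognitive models might perform: a simple Bayesian update over a partner's
# candidate goals given an observed action. The goals, likelihoods, and values
# are invented for illustration and are not drawn from the cited studies.

def update_goal_beliefs(prior: dict, likelihood: dict, observed_action: str) -> dict:
    """Posterior P(goal | action) is proportional to P(action | goal) * P(goal)."""
    unnormalized = {g: prior[g] * likelihood[g].get(observed_action, 1e-6)
                    for g in prior}
    total = sum(unnormalized.values())
    return {g: v / total for g, v in unnormalized.items()}

# Two candidate goals for an interaction partner and how likely each goal is
# to produce each observable action.
prior = {"hand_over_tool": 0.5, "reach_for_own_use": 0.5}
likelihood = {
    "hand_over_tool":    {"extend_arm_toward_me": 0.8, "turn_away": 0.1},
    "reach_for_own_use": {"extend_arm_toward_me": 0.2, "turn_away": 0.6},
}

beliefs = update_goal_beliefs(prior, likelihood, "extend_arm_toward_me")
print(beliefs)  # belief shifts toward "hand_over_tool"
```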
Indeed, research aimed at modeling social cognitive mechanisms in robots has already begun to apply such findings from
SCAN and cognitive science, albeit in a somewhat fragmented
manner. For example, Breazeal et al. [51] developed a mindreading system for an embodied robot that was able to use
its own perceptual, motor, and cognitive system to better understand and simulate future states of humans with which it was
interacting. In another example, Barakova and Lourens [45] implemented a mirror neuron framework in robots based on findings in SCAN. Through the creation of this system, the robot's mirror neuron system was able to synchronize its movement
and artificial neuronal firing patterns when interacting with another robot. However, efforts such as these are scarce, as most
research in robotics has typically focused more on interaction
with the physical environment rather than the social environment. That is, much research in robotics focuses on how to
develop robots that can manipulate objects or navigate around
obstacles (see, e.g., [52]) as opposed to understanding and responding to humans during social interaction.
Taken together, the synthesis of SCAN into the design and
development of complex human-machine systems will guide
efforts to develop the next generation of robotic teammates with
social cognitive mechanisms. While the challenge here is certainly complex, it should prove to be a worthwhile endeavor.
Recent efforts have even begun to provide recommendations for
how to begin addressing this issue (see [41] and [43]). However, as detailed earlier, SCAN is only beginning to scratch the
surface of understanding the many facets of social cognition beyond an individual level. Thus, as SCAN research progresses,
approaches to modeling social cognitive mechanisms in robots
should develop in tandem.
Although SCAN can be used to inform the design of cognitive
models to be instantiated in robots, it can also be used to better
understand the way in which humans perceive and interact with
robots. Some research has in fact already been conducted along
these lines. For example, certain brain regions associated with the human mirror neuron system show increased activation during social interactions whether the interaction partner is a human or a robot, at least when observing goal-directed behaviors [53]. Further, Krach et al. [54] found that the more humanoid a robot is, the more the activated brain regions correlate with those activated during human-human interactions (i.e., the medial frontal cortex and the right temporoparietal junction).
As a potential intermediary step in developing robots capable
of displaying and recognizing the full social expressivity of humans, advances in brain-computer interfaces (BCI, sometimes
referred to as brain-controlled interfaces) are relevant. One of
the pioneering efforts in this area implanted electrodes into the
frontal and parietal lobes of a macaque monkey's brain as a
means of developing a brain-controlled robotic arm [55]. Of
course, such invasive techniques are not typically employed in
humans, but advances have been made for using EEG coupled
with signal processing techniques to control robotic arms [56], as well as for controlling the navigation of mobile robots
(see [57] for review). More relevant for the present purposes,
some work in BCI has focused on incorporating the detection
of emotions through EEG into human-machine systems [58].
While the reliable detection of emotions from EEG is limited
and requires extensive training of the algorithms, the technique
highlights the possibility for a machine to possess such a beneficial capability. With detection capabilities realized, the next
logical step is control. In this sense, the detection of emotion or
other socially relevant states can be utilized as an input control
mechanism for robots to display certain social cues.
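A minimal sketch of this detection-as-control idea is given below: decoded operator emotion labels, assumed to come from an EEG-based classifier that is not shown, are smoothed and mapped to a social cue for the robot to display. The labels, smoothing rule, and cue names are hypothetical placeholders rather than an existing BCI or robot interface.

```python
# Minimal sketch of using decoded operator emotion as a control input for a
# robot's social cue display, as suggested above. Decoded labels are assumed
# to come from an EEG-based classifier (not shown); the smoothing rule, cue
# names, and robot interface are hypothetical.

from collections import Counter, deque

class EmotionCueController:
    """Debounce noisy emotion decodes so the robot's displayed cue only
    changes when one label has dominated the recent window."""

    def __init__(self, window: int = 5):
        self.recent = deque(maxlen=window)
        self.current_cue = "neutral_expression"
        self.cue_map = {
            "frustrated": "attentive_posture_and_slower_speech",
            "confused": "questioning_head_tilt_and_repeat_instruction",
            "content": "neutral_expression",
        }

    def update(self, decoded_emotion: str) -> str:
        self.recent.append(decoded_emotion)
        label, count = Counter(self.recent).most_common(1)[0]
        if count >= len(self.recent) * 0.6 and label in self.cue_map:
            self.current_cue = self.cue_map[label]
        return self.current_cue

controller = EmotionCueController()
for decode in ["content", "frustrated", "frustrated", "frustrated", "frustrated"]:
    cue = controller.update(decode)
print(cue)  # attentive_posture_and_slower_speech
```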
At a more general level, a useful framework to further research along these lines is that of distinguishing social cues
and signals (see, e.g., [59]). This is helpful for understanding
the social exchange that occurs during human interaction with
robots because it specifies social cues as those that are observable to humans, and conceivably robots, in the environment and
links these with unobservable social signals that are indicative
of mental states. An example of social cues is lips in the shape of a smile, bared teeth, and wide eyes, which could be interpreted as the social signal of "happy." Recent research has identified the social cues that are indicative of untrustworthy behavior in human-human interactions and then programmed a robot to display these cues, which resulted in a similar distrust of the robot [60]. Other work has shown that manipulating the social cues a robot displays, even when that robot is minimally expressive (nonhumanoid), affects the types of intentional and emotional states (i.e., social signals) that are attributed to it [59], [61]. However, such investigations have not
yet incorporated neurophysiological assessments. In doing so,
further levels of specificity may be gained with regard to understanding the ways in which humans perceive and ultimately
understand robots [40]. While a detailed framework for how to
conduct the type of research suggested here is beyond the scope
of this paper, an outline for how to conduct research within this
framework is forthcoming [62].
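The cue/signal distinction can also be made concrete with a toy mapping from sets of observable cues to a candidate inferred signal; the cue vocabulary below mirrors the smile example in the text, everything else is invented for illustration, and real attributions are, of course, far noisier.

```python
# Toy illustration of the cue/signal distinction discussed above: observable
# social cues are matched against cue patterns associated with unobservable
# social signals. The cue and signal vocabulary is invented for illustration
# (the smile example mirrors the text); real attributions are far noisier.

SIGNAL_PATTERNS = {
    "happy":         {"smiling_lips", "bared_teeth", "wide_eyes"},
    "untrustworthy": {"leaning_away", "crossed_arms", "face_touching"},
}

def infer_signal(observed_cues: set) -> tuple:
    """Return the best-matching signal and the fraction of its cues observed."""
    best_signal, best_overlap = "unknown", 0.0
    for signal, pattern in SIGNAL_PATTERNS.items():
        overlap = len(observed_cues & pattern) / len(pattern)
        if overlap > best_overlap:
            best_signal, best_overlap = signal, overlap
    return best_signal, best_overlap

print(infer_signal({"smiling_lips", "wide_eyes"}))  # ('happy', 0.66...)
```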
In addition, SCAN is needed not only to inform the design of computational models of social cognition in robots, but also to empirically evaluate the physiological mechanisms employed when humans interact with such robots. In doing so, technological systems can be developed that are able to understand the humans that operate or work with them, as well as appropriately convey the social cues necessary for humans to easily understand those systems. This area of research is just developing but, as stated before, represents a key application of SCAN that can contribute substantively to advancing human-machine system interaction more broadly. Last, in service of developing a roadmap that can further the integration of SCAN with HRI, we provide a series of representative research questions that can guide R&D in this area (see Table II).


TABLE II
QUESTIONS FOR A RESEARCH ROADMAP TO FURTHER THE INTEGRATION OF SCAN WITH HRI R&D

Human-Robot Interaction Questions

Emotion
What are the neural correlates of emotions that people experience when interacting with robots?
Is there a way to provide those emotional indices to inform the robot's selection of adaptive interaction behavior?
Should a robot display the social cues indicative of emotions similar to the ways humans do?
If a robot displays emotions similar to humans, do neurophysiological data suggest it is perceived similarly to a human displaying that emotion?
Do people prefer a robot to be emotionally expressive both psychologically and neurologically?
How can a neurocomputational model of emotions be developed and, if so, would this benefit, or negatively affect, the robot's interaction with humans?

Theory of Mind
What are the core social cognitive mechanisms and underlying neural structures needed to develop neurocomputational theory of mind capabilities in a robot?
Should a robot even have a social cognitive system that is similar to that of a human, or are there other more efficient ways to instantiate this?
Does a robot that interprets the social environment similarly to the way a human does facilitate intuitive understanding and, in turn, more effective interaction?
Are there robust neural correlates of perceived social signals as a function of observed social cues?

Joint Action
How can a neurocomputational model of joint action be developed based upon SCAN studies of interaction dynamics?
Would such a model facilitate better, more coordinated joint action between humans and robots and, perhaps, between robots too?
What is the appropriate level of synchrony versus complementarity across HRI joint action tasks?
What are the neural correlates for each of the types of emergent and planned coordination?
Is each of the types of emergent and planned coordination useful to instantiate in robots in terms of contributing to developing adaptive interaction capabilities?

V. APPLICATIONS OF SOCIAL COGNITIVE AND AFFECTIVE
NEUROSCIENCE TO MODELING AND IMPROVING
TEAM PERFORMANCE

The last example application of SCAN to the design of technologies is to enhance team performance in complex domains.
The goal here is to draw attention to novel constructs worth examining neurophysiologically to serve as another metric of team
performance and, at some point, a source of real-time performance feedback. Such efforts can, in turn, revolutionize the way
in which teams interact when they have an enriched knowledge
of each other's mental and emotional states as they dynamically
shift throughout the execution of complex tasks.
Traditionally, team performance and team cognition research
has relied upon a knowledge-based approach that posits that
teams require shared mental models of their task and of their
team in order for effective coordination to occur [63]. However, relying solely on the knowledge of the team at any given

moment only provides a discrete snapshot and index of team


performance. In light of this limitation and similar to some of
the limitations of traditional SCAN research detailed above, recent theorizing and experimentation has called for an interactive
view of team cognition that situates team cognition in the activity of a team as they dynamically perform their tasks over time
and under varying situations [64]. There have even been questions raised as to whether team members need to have shared
knowledge at all [65]. Although the answer to this question likely depends on both the team and the task, Knoblich et al. [16] provide some insights in their review of joint action research. These authors distinguish between two types of coordination that comprise joint action: emergent coordination and planned coordination. Whereas emergent coordination is an unintentional form of coordination that arises from spontaneous mechanisms of entrainment, affordances, perception-action matching, and predictive action simulations, planned coordination is an intentional form of coordination that often relies on shared representations and the ability to take the perspectives of teammates [16]. As such, each of these constructs illustrates an aspect of team performance, at least in joint action teams, whose detection could be improved by drawing on SCAN methodologies.
For example, predictive action simulation, i.e., a mechanism
used to imagine and think through potential actions that an
individual or another may take, has been found to be a function of the mirror neuron system in cases of joint action [66] and
perhaps a much broader network [47]. Another example is that
of shared representations, a notion akin to the psychological
construct of shared mental models. The core idea here is that
team members (or those involved in a social interaction) may
develop shared representations that manifest neurally if they are
experiencing or understanding an event in a similar way, even if
only one of those individuals is observing and imagining what
the other might be experiencing (see [47] and [67]). While these
mechanisms of joint action manifest neurally and behaviorally,
it is the interdependence and coupling of these mechanisms that
is important to examine at inter- and intraindividual levels of
study on team performance (see, e.g., [68]).
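As a simple illustration of how such inter-individual coupling is often quantified, the following sketch correlates two people's time-aligned neural signals; it assumes preprocessed, artifact-free, and equally sampled series, and it is an illustration rather than the analysis used in the cited studies.

```python
# A simple sketch of one way inter-individual neural coupling of the sort
# described above is often quantified: correlating two people's time-aligned
# neural signals. It assumes preprocessed, artifact-free, equally sampled
# series; it is an illustration, not the analysis used in the cited studies.

import math

def pearson_correlation(x: list, y: list) -> float:
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Toy, time-aligned signals from two team members viewing the same event.
member_a = [0.1, 0.4, 0.9, 0.7, 0.3, 0.2]
member_b = [0.0, 0.5, 0.8, 0.6, 0.4, 0.1]

print(f"coupling index: {pearson_correlation(member_a, member_b):.2f}")
```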
Taken together, each of the levels of analysis included in
SCAN is important for understanding team cognition and team
performance. In general, recent work in SCAN has shown that
social context and emotions have an impact on the way in which
teams interact and, thus, perform collectively (see, e.g., [69]
and [70]). As with our earlier examples, the states studied in SCAN that are relevant to team performance can be detected, and thus R&D of complex human-machine systems can benefit from the application of SCAN methodologies. One of the potential ways human-machine systems can benefit from SCAN is through a contribution to the development of novel ways to model and interpret team performance in real time. One related example was shown by Stevens et al. [71], who investigated neurophysiologic synchronies (NS) of engagement, as measured by EEG, in simulated submarine piloting and navigation teams. Results showed that examination of NS may serve as a
metric for determining the cognitive status of team members in
close to real time. This study was extended using multifractal


analysis of the EEG data as a means to demonstrate that neurophysiological measures can be indicative of cascading effects across scales of analysis [72]. In other words, this analysis of the EEG data corroborated the analysis of team behavior, but could be conducted in a fraction of the time required for detailed behavioral analyses.
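A rough sketch of the general idea behind such a team-level synchrony measure is given below: each member's engagement is discretized, combined into a team "symbol" per time step, and the variability of recent symbols is tracked. This illustrates the concept only and is not the procedure used by Stevens et al. [71].

```python
# Rough sketch of the general idea behind a team-level neurophysiologic
# synchrony measure such as the NS of engagement discussed above: discretize
# each member's engagement, form a team "symbol" per time step, and track the
# entropy of recent symbols. This is an illustration of the concept, not the
# procedure used by Stevens et al. [71].

import math
from collections import Counter

def discretize(value: float) -> str:
    """Bin a normalized engagement value into low/medium/high."""
    return "L" if value < 0.33 else ("M" if value < 0.66 else "H")

def team_symbols(member_series: list) -> list:
    """member_series: one list of engagement values per team member."""
    return ["".join(discretize(m[t]) for m in member_series)
            for t in range(len(member_series[0]))]

def symbol_entropy(symbols: list) -> float:
    counts = Counter(symbols)
    n = len(symbols)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Toy engagement streams for a three-person team (one value per time step).
team = [
    [0.8, 0.7, 0.2, 0.9, 0.8],
    [0.7, 0.6, 0.3, 0.8, 0.7],
    [0.9, 0.7, 0.1, 0.9, 0.9],
]
symbols = team_symbols(team)
print(symbols, round(symbol_entropy(symbols), 2))
```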
Naturally, a neurophysiological metric of engagement is
just one of the many neurophysiological factors that contribute
to team performance in addition to a host of other social,
behavioral, and affective dimensions. For example, research by
Gorman et al. has used dynamical analyses to assess patterns of
team communication in simulated unmanned vehicle command
and control in order to serve as another real-time behavioral
metric of team performance [73]. As with much of the research
detailed above, combining neurophysiological and behavioral
metrics of team performance is just beginning to develop.
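As a minimal illustration of a real-time behavioral metric in this spirit, the following sketch computes how evenly recent speaking turns are distributed across team members within a sliding window; it is an illustration only and is not the dynamical analysis used by Gorman et al. [73].

```python
# Minimal sketch of a simple real-time behavioral metric in the spirit of the
# communication analyses mentioned above: how evenly recent speaking turns are
# distributed across team members in a sliding window. This is an illustration
# only and is not the dynamical analysis used by Gorman et al. [73].

from collections import Counter

def participation_evenness(turns: list, window: int = 10) -> float:
    """Return 1.0 when recent turns are spread evenly across speakers and
    values near 0.0 when one speaker dominates the window."""
    recent = turns[-window:]
    counts = Counter(recent)
    k = len(counts)
    if k <= 1:
        return 0.0
    max_share = max(counts.values()) / len(recent)
    # Rescale so an even split maps to 1.0 and total dominance maps to 0.0.
    return (1.0 - max_share) / (1.0 - 1.0 / k)

turns = ["pilot", "navigator", "pilot", "pilot", "engineer",
         "pilot", "pilot", "navigator", "pilot", "pilot"]
print(round(participation_evenness(turns), 2))
```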
Of course, the examination of a single or even a handful of
constructs at a time helps to make such challenges tractable.
However, the field will ultimately advance toward more
sophisticated and nuanced metrics that simultaneously provide
insights regarding the social cognitive and affective states of a
team, the neural indices of these states, and how these impact performance on a team's task.
Once research can reliably establish such real-time metrics,
these can be used to reconceptualize team performance. This can
help better model and inform the design of team support systems
as well as the types of training that team members require when
their task environments become enriched with such data. An
example of this can be given in the context of distributed teams.
SCAN information could be incorporated into displays to help
indicate whether team members hold shared representations of
a situation, or even indicate something as simple as being upset. In short, by drawing on SCAN, designers and engineers of human-machine systems can better identify those social cognitive and affective dimensions necessary for performance and advance the way the teams of the future accomplish their objectives. Last, in service of developing a roadmap to encourage research integrating SCAN with team performance, in Table III we provide a set of research questions that can guide the interdisciplinary integration of methods and technologies in support of improving team performance.
VI. DISCUSSION

In this paper, it was argued that the integration of SCAN into the design and development of novel human-machine systems is crucial for advancing the contributions of future technologies. Although SCAN is still a developing discipline in itself, applications from this field will be substantive if integrated with the technologies and techniques utilized across a number of disciplines. Specifically, in this paper, three examples were provided to demonstrate the broad applicability of SCAN. First, this included leveraging insights and techniques from SCAN to develop technologies that are able to better assess and adapt to students and accelerate learning so that outcomes are accomplished in shorter amounts of time and learners are put on a path toward expertise acquisition.

TABLE III
QUESTIONS FOR A RESEARCH ROADMAP TO FURTHER THE INTEGRATION OF SCAN WITH TEAM PERFORMANCE R&D

Team Performance Questions

Emotion
Can systems be developed to unobtrusively detect emotional states in team members and display these as they are relevant to effective team performance?
What are effective ways to display team member emotional states such that they do not detract from the task?
What types of system adaptation or team member interventions can be used to mitigate detected affective states that pose a threat to effective team performance?
Would neurophysiological sensors even be the most effective technique to detect team affective states in real time?
Are there patterns of emotional states that, in proper sequence, keep team members engaged and vigilant in their tasks?

Theory of Mind
Are there patterns of neural activation that can serve as an index that team members have appropriately understood each other?
How can a neurocomputational model that detects members' mental states (e.g., intentions) be used to effectively convey team member states to the team in real time and afford effective performance?
Under what types of conditions/tasks would such models be necessary for performance (e.g., distributed)?

Joint Action
What other indices besides engagement are appropriate to model and detect using neurophysiological sensors?
Can similar activation of shared representations between team members serve as an index that common ground is established?
How can a real-time feedback system convey SCAN data such that teams are able to assess whether they are executing the appropriate joint actions to reach their shared goals?
Can each of the types of emergent and planned coordination be detected in real time and, if so, can a system provide easily interpreted feedback to team members to allow them to adapt their performance as necessary?

Second, SCAN can inform the field of HRI by leveraging findings from human-human interactions, not only to model social cognitive mechanisms in robots so that they can interact more naturally and effectively, but also to provide methods for investigating the ways social cognitive mechanisms are employed by humans when they interact with robots and other
autonomous systems. Finally, SCAN can be used to enhance
the performance of teams working in complex domains through
the instantiation of novel techniques for modeling and interpreting real-time neurophysiological and behavioral indices of team
cognition and performance.
For each of these illustrative examples, only support for one of
the dimensions of SCAN was provided. Specifically, emotions
were emphasized with regard to learning and training, theory
of mind was emphasized with regard to HRI, and joint action
was emphasized with regard to team performance. Ultimately,
the systematic study of each of these SCAN dimensions across
each of the applications is necessary to encapsulate the vision
of integrating SCAN with R&D in human-machine systems.
Although by no means can claims be made that this paper alone can accomplish such an integration, the goal here was to begin a dialogue and to guide researchers of human-machine systems to consider how SCAN might benefit them. The major contribution of this paper has been to provide an overview of SCAN and highlight some of the applications it may have in other domains. Of course, this paper is limited in the sense that it is illustrative and meant to draw attention to these issues rather than exhaustive.


For the efficacious integration of SCAN with human-machine


systems, a research roadmap is needed. In each of our sections, so as to help guide future research toward the integration
envisioned herein, we provided an integrated set of research
questions meant to foster interdisciplinary collaborations that
incorporate SCAN methodologies. Notably, the answer to some
of these questions may already reside in extant literature; however, to the best of our knowledge, the questions themselves
or the answers have not been collated and structured as such.
In particular, many more questions likely exist, and we encourage others to elaborate on this initial attempt.
However, what is unique here is the structuring of the questions
across the three dimensions of SCAN that are widely studied
(emotion, theory of mind, and joint action) and the three applications reviewed in this paper (learning and training, HRI, and
team performance). Future efforts may even want to distinguish
between the social, cognitive, and neural levels of analysis or
the stages of social information processing. Nonetheless, the answers to these questions would be of broad utility to researchers
and developers in human-machine systems, as well as allow
SCAN researchers to begin to think about what is necessary for
their work to transition from basic to applied (cf., [5]).
In addition, when fully leveraged by the designers of complex human-machine systems, SCAN holds a large potential
to contribute to the advancement of the ways in which humans
interact with their environment, their technology, and even each
other. In particular, integrating SCAN with research in the fields
of augmented cognition and operational neuroscience affords
a multilevel, systematic, and ecological approach to the study
of human cognition and performance that will support effective human-machine interaction in complex real-world environments. Up until this point, much of the research in augmented cognition and operational neuroscience has focused on narrow, task-specific constructs when, in fact, a variety of constructs
falling within dimensions of SCAN could ultimately affect human interaction with technological systems. Of course, further
research is needed to assess exactly which SCAN constructs are
worth examining and how performance is affected by these. It
has been the aim of this paper to spur interest, discussion, and
research toward this end.
REFERENCES
[1] M. D. Lieberman, Social cognitive neuroscience: A review of core processes, Annu. Rev. Psychol., vol. 58, pp. 259289, 2007.
[2] A. B. Satpute and M. B. Lieberman, Integrating automatic and controlled
processes into neurocognitive models of social cognition, Brain Res., vol.
1079, no. 1, pp. 8697, 2006.
[3] K. M. Stanney, D. D. Schmorrow, M. Johnston, S. Fuchs, D. Jones, K.
S. Hale, and P. Young, Augmented cognition: An overview, Rev. Hum.
Factors Ergonom., vol. 5, no. 1, pp. 195224, 2009.
[4] A. A. Kruse, Operational neuroscience: Neurophysiological measures
in applied environments, Aviation, Space, Environ. Med., vol. 78, pp.
B191B194, 2007.
[5] J. M. Flach, The ecology of human-machine systems I: Introduction,
Ecol. Psychol., vol. 2, no. 3, pp. 191205, 1990.
[6] K. J. Vicente and J. Rasmussen, The ecology of human-machine systems II: Mediating direct perception in complex work domains, Ecol.
Psychol, vol. 2, no. 3, pp. 207249, 1990.
[7] G. Salomon and T. Globerson, When teams do not function the way they
ought to, Int. J. Edu. Res., vol. 13, no. 1, pp. 8999, 1989.

[8] G. R. Semin and E. R. Smith, Socially situated cognition in perspective,


Soc. Cognit., vol. 31, no. 2, pp. 125146, 2013.
[9] C. D. Frith and U. Frith, Mechanisms of social cognition, Annu. Rev.
Psychol, vol. 63, pp. 287313, 2012.
[10] J. S. Beer and K. N. Ochsner, Social cognition: A multi level analysis,
Brain Res., vol. 1079, no. 1, pp. 98105, 2006.
[11] J. Chatel-Goldman, J. L. Schwartz, C. Jutten, and M. Congedo, Non-local
mind from the perspective of social cognition, Front. Hum. Neurosci., vol.
7, no. 107, pp. 17, 2013.
[12] K. N. Ochsner and M. D. Lieberman, The emergence of social cognitive
neuroscience, Amer. Psychol., vol. 56, no. 9, pp. 717734, 2001.
[13] A. Olsson and K. N. Ochsner, The role of social cognition in emotion,
Trends Cognit. Sci, vol. 12, no. 2, pp. 6571, 2008.
[14] V. Bohl and W. van den Bos, Toward an integrative account of social cognition: Marrying theory of mind and interactionism to study the interplay
of type 1 and type 2 processes, Front. Hum. Neurosci., vol. 6, no. 274,
pp. 115, 2012.
[15] D. Mier, S. Lis, K. Neuthe, C. Sauer, C. Esslinger, B. Gallhofer, and P.
Kirsch, The involvement of emotion recognition in affective theory of
mind, Psychophysiology, vol. 47, no. 6, pp. 10281039, 2010.
[16] G. Knoblich, S. Butterfill, N. Sebanz, Psychological research on joint
action: Theory and data, Psychol. Learn. Motivation-Adv. Res. Theory,
vol. 54, pp. 59101, 2011.
[17] J. A. Russell, A circumplex model of affect, J. Personality Soc. Psychol.,
vol. 39, no. 6, pp. 11611178, 1980.
[18] C. E. Forbes and J. Grafman, Social neuroscience: The second phase,
Front. Hum. Neurosci., vol. 7, no. 20, pp. 15, 2013.
[19] U. J. Pfeiffer, B. Timmermans, K. Vogeley, C. D. Frith, and L. Schilbach,
Towards a neuroscience of social interaction, Front. Hum. Neurosci.,
vol. 7, no. 22, pp. 12, 2013.
[20] M. Przyrembel, J. Smallwood, M. Pauen, and T. Singer, Illuminating
the dark matter of social neuroscience: Considering the problem of social
interaction from philosophical, psychological, and neuroscientific perspectives, Front. Hum. Neurosci., vol. 6, no. 190, pp. 115, 2012.
[21] L. Schilbach, B. Timmermans, V. Reddy, A. Costall, G. Bente, T. Schlicht,
and K. Vogeley, Toward a second-person neuroscience, Behav. Brain
Sci., vol. 36, no. 4, pp. 393414, 2013.
[22] K. Tylen, M. Allen, B. K. Hunter, and A. Roepstorff, Interaction versus observation: Distinctive modes of social cognition in human brain
and behavior? A combined fMRI and eye-tracking study, Front. Hum.
Neurosci., vol. 6, no. 331, pp. 111, 2012.
[23] H. D. De Jaegher, E. D. Di Paolo, and S. Gallagher, Can social interaction
constitute social cognition? Trends Cognit. Sci., vol. 14, no. 10, pp. 441
447, 2010.
[24] R. C. Clark and R. E. Mayer, Simulations and games in E-learning, in ELearning and the Science of Instruction: Proven Guidelines for Consumers
and Designers of Multimedia Learning, 2nd ed. Asslar, Germany: Pfeiffer,
2008, ch. 15, pp. 345378.
[25] M. H. Immordino-Yang and A. Damasio, We feel, therefore we learn:
The relevance of affective and social neuroscience to education, Mind,
Brain, Edu., vol. 1, no. 1, pp. 310, 2007.
[26] K. Robinson, Mind the gap: The creative conundrum, Crit. Quart., vol.
43, no. 1, pp. 4145, 2001.
[27] L. B. Nilson, Teaching at Its Best: A Research-Based Resource for College
Instructors. New York, NY, USA: Wiley, 2010.
[28] D. A. Norman and J. C. Spohrer, Learner-centered education, Commun.
ACM, vol. 39, no. 4, pp. 2427, 1996.
[29] R. R. Hoffman, D. H. Andrews, and P. J. Feltovich, What is Accelerated
Learning?, Cognit. Technol., vol. 17, no. 1, pp. 710, 2012.
[30] R. R. Hoffman, P. J. Feltovich, S. M. Fiore, G. Klein, and D. Ziebell,
Accelerated learning (?), IEEE Intell. Syst., vol. 24, no. 2, pp. 1822,
Mar./Apr. 2009.
[31] R. R. Hoffman, P. Ward, P. J. Feltovich, L. DiBello, S. M. Fiore,
and D. H. Andrews, Accelerated Expertise: Training for High Proficiency in a Complex World. New York, NY, USA: Psychology Press,
2014.
[32] T. J. Wiltshire, K. Neville, M. Lauth, C. Rinkinen, and L. F. Ramirez,
Applications of cognitive transformation theory: Examining the role of
sensemaking in the instruction of air traffic control students, J. Cognit.
Eng. Decis. Making, vol. 8, no. 3, pp. 219247, 2014.
[33] R. A. Sottilare, K. W. Brawner, B. S. Goldberg, and H. K. Holden. (2012, Jul.). The generalized intelligent framework for tutoring (GIFT). Army Research Laboratory. [Online]. Available: https://litelab.arl.army.mil/system/files/2012_07_Sottilare_IFest_GIFT%20brief%20plus%20panel%20briefs_0.pdf
[34] R. Sottilare and B. Goldberg, "Designing adaptive computer-based tutoring systems to accelerate learning and facilitate retention," Cognit. Technol., vol. 17, no. 1, pp. 19–33, 2012.
[35] S. D'Mello, "Monitoring affective trajectories during complex learning," in Encyclopedia of the Sciences of Learning, 2012, pp. 2325–2328.
[36] S. D'Mello and A. Graesser, "Dynamics of affective states during complex learning," Learn. Instruction, vol. 22, no. 2, pp. 145–157, 2012.
[37] E. A. Phelps, "Emotion and cognition: Insights from studies of the human amygdala," Annu. Rev. Psychol., vol. 57, pp. 27–53, 2006.
[38] T. J. Wiltshire, D. C. Smith, and J. R. Keebler, "Cybernetic teams: Towards the implementation of team heuristics in HRI," in Virtual Augmented and Mixed Reality: Designing and Developing Augmented and Virtual Environments. Berlin, Germany: Springer, 2013, pp. 321–330.
[39] R. I. Dunbar and S. Shultz, "Evolution in the social brain," Science, vol. 317, no. 5843, pp. 1344–1347, 2007.
[40] E. J. C. Lobato, T. J. Wiltshire, and S. M. Fiore, "A dual-process approach to human-robot interaction," in Proc. Hum. Factors Ergonom. Soc. Annu. Meet., 2013, vol. 57, pp. 1263–1267.
[41] T. J. Wiltshire, D. Barber, and S. M. Fiore, "Towards modeling social cognitive mechanisms in robots to facilitate human-robot teaming," in Proc. Hum. Factors Ergonom. Soc. Annu. Meet., 2013, vol. 57, pp. 1273–1277.
[42] E. Phillips, S. Ososky, J. Grove, and F. Jentsch, "From tools to teammates: Toward the development of appropriate mental models for intelligent robots," in Proc. Hum. Factors Ergonom. Soc. Annu. Meet., 2011, pp. 1481–1485.
[43] T. J. Wiltshire, E. J. C. Lobato, F. G. Jentsch, and S. M. Fiore, "Will (dis)embodied LIDA agents be socially interactive?," J. Artif. Gen. Intell., vol. 4, no. 2, pp. 42–47, 2013.
[44] U. Kurup and C. Lebiere, "What can cognitive architectures do for robotics?," Biol. Inspired Cognit. Archit., vol. 2, pp. 88–99, 2012.
[45] E. I. Barakova and T. Lourens, "Mirror neuron framework yields representations for robot interaction," Neurocomputing, vol. 72, pp. 895–900, 2009.
[46] S. Gallagher, "Direct perception in the intersubjective context," Consciousness Cognit., vol. 17, no. 2, pp. 535–543, 2008.
[47] G. Pezzulo, M. Candidi, H. Dindo, and L. Barca, "Action simulation in the human brain: Twelve questions," New Ideas Psychol., vol. 31, no. 3, pp. 270–290, 2013.
[48] F. Van Overwalle, "Social cognition and the brain: A meta-analysis," Hum. Brain Mapping, vol. 30, no. 3, pp. 829–858, 2009.
[49] M. R. Wilkinson, L. J. Ball, and R. Cooper, "Arbitrating between theory-theory and simulation theory: Evidence from a think-aloud study of counterfactual reasoning," in Proc. 32nd Annu. Conf. Cognit. Sci. Soc., 2010, pp. 1008–1013.
[50] M. R. Wilkinson and L. J. Ball, "Dual processes in mental state understanding: Is theorising synonymous with intuitive thinking and is simulation synonymous with reflective thinking?," in Proc. Annu. Conf. Cognit. Sci. Soc., 2013, pp. 3771–3776.
[51] C. Breazeal, J. Gray, and M. Berlin, "An embodied cognition approach to mindreading skills for socially intelligent robots," Int. J. Robot. Res., vol. 28, pp. 656–680, 2009.
[52] E. Sahin, M. Cakmak, M. R. Dogar, E. Ugur, and G. Ucolik, "To afford or not to afford: A new formalization of affordances toward affordance-based robot control," Adapt. Behav., vol. 15, no. 4, pp. 447–472, 2007.
[53] L. M. Oberman, J. P. McCleery, V. S. Ramachandran, and J. A. Pineda, "EEG evidence for mirror neuron activity during the observation of human and robot actions: Toward an analysis of the human qualities of interactive robots," Neurocomputing, vol. 70, no. 13, pp. 2194–2203, 2007.
[54] S. Krach, F. Hegel, B. Wrede, G. Sagerer, F. Binkofski, and T. Kircher, "Can machines think? Interaction and perspective taking with robots investigated via fMRI," PLoS ONE, vol. 3, no. 7, pp. 1–11, 2008.
[55] J. Carmena, M. Lebedev, R. Crist, J. O'Doherty, D. Santucci, D. Dimitrov, P. Patil, P. C. Henriquez, and M. Nicolelis, "Learning to control a brain-machine interface for reaching and grasping by primates," PLoS Biol., vol. 1, no. 2, pp. 193–208, 2003.
[56] J. R. Millan, F. Renkens, J. Mourino, and W. Gerstner, "Noninvasive brain-actuated control of a mobile robot by human EEG," IEEE Trans. Biomed. Eng., vol. 51, no. 6, pp. 1026–1033, Jun. 2004.
[57] L. Bi, X. A. Fan, and Y. Liu, "EEG-based brain-controlled mobile robots: A survey," IEEE Trans. Human-Mach. Syst., vol. 43, no. 2, pp. 161–176, Mar. 2013.
[58] G. Molina, T. Tsoneva, and A. Nijholt, "Emotional brain-computer interfaces," Int. J. Auton. Adapt. Commun. Syst., vol. 6, no. 1, pp. 9–25, 2013.
[59] S. M. Fiore, T. J. Wiltshire, E. J. C. Lobato, F. G. Jentsch, W. H. Huang, and B. Axelrod, "Towards understanding social cues and signals in human-robot interaction: Effects of robot gaze and proxemic behavior," Front. Psychol., vol. 4, no. 859, pp. 1–15, 2013.
[60] D. DeSteno, C. Breazeal, R. H. Frank, D. Pizarro, J. Baumann, L. Dickens, and J. J. Lee, "Detecting the trustworthiness of novel partners in economic exchange," Psychol. Sci., vol. 23, no. 12, pp. 1549–1556, 2012.
[61] T. J. Wiltshire, E. J. C. Lobato, A. Wedell, W. Huang, B. Axelrod, and S. M. Fiore, "Effects of robot gaze and proxemic behavior on perceived social presence during a hallway navigation scenario," in Proc. Hum. Factors Ergonom. Soc. Annu. Meet., 2013, pp. 1273–1277.
[62] T. J. Wiltshire, S. L. Snow, E. J. C. Lobato, and S. M. Fiore, "Leveraging social judgment theory to examine the relationship between social cues and signals in human-robot interactions," in Proc. Hum. Factors Ergonom. Soc. Annu. Meet., to be published.
[63] J. A. Cannon-Bowers, S. I. Tannenbaum, E. Salas, and C. E. Volpe, "Defining competencies and establishing team training requirements," in Team Effectiveness and Decision Making in Organizations, R. A. Guzzo and E. Salas, Eds. San Francisco, CA, USA: Jossey-Bass, 1995, pp. 333–381.
[64] N. J. Cooke, J. C. Gorman, C. W. Myers, and J. L. Duran, "Interactive team cognition," Cognit. Sci., vol. 37, no. 2, pp. 255–285, 2013.
[65] P. Silva, J. Garganta, D. Araujo, K. Davids, and P. Aguiar, "Shared knowledge or shared affordances? Insights from an ecological dynamics approach to team coordination in sports," Sports Med., vol. 43, pp. 1–8, 2013.
[66] H. Bekkering, E. R. De Bruijn, R. H. Cuijpers, R. Newman-Norlund, H. T. Van Schie, and R. Meulenbroek, "Joint action: Neurocognitive mechanisms supporting human interaction," Topics Cognit. Sci., vol. 1, no. 2, pp. 340–352, 2009.
[67] J. Zaki and K. Ochsner, "The need for a cognitive neuroscience of naturalistic social cognition," Ann. N.Y. Acad. Sci., vol. 1167, no. 1, pp. 16–30, 2009.
[68] A. J. Strang, G. J. Funke, S. M. Russell, A. W. Dukes, and M. S. Middendorf, "Physio-behavioral coupling in a cooperative team task: Contributors and relations," J. Exp. Psychol.: Hum. Perception Perform., vol. 40, no. 1, pp. 145–158, 2014.
[69] R. Kanso, M. Hewstone, E. Hawkins, M. Waszczuk, and A. C. Nobre, "Power corrupts cooperation: Cognitive and motivational effects in a double EEG paradigm," Soc. Cognit. Affective Neurosci., vol. 9, no. 2, pp. 218–224, 2014.
[70] C. Morawetz, E. Kirilina, J. Baudewig, and H. R. Heekeren, "Relationship between personality traits and brain reward responses when playing on a team," PLoS ONE, vol. 9, no. 1, pp. 1–10, 2014.
[71] R. H. Stevens, T. L. Galloway, P. Wang, and C. Berka, "Cognitive neurophysiologic synchronies: What can they contribute to the study of teamwork?," Hum. Factors, vol. 54, no. 4, pp. 489–502, 2012.
[72] A. D. Likens, P. G. Amazeen, R. Stevens, T. Galloway, and J. C. Gorman, "Neural signatures of team coordination are revealed by multifractal analysis," Soc. Neurosci., vol. 9, no. 3, pp. 219–234, 2014.
[73] J. C. Gorman, E. E. Hessler, P. G. Amazeen, N. J. Cooke, and S. M. Shope, "Dynamical analysis in real time: Detecting perturbations to team communication," Ergonomics, vol. 55, no. 8, pp. 825–839, 2012.