
EXPERT OPINION

Editor: Daniel Zeng, University of Arizona and Chinese Academy of Sciences, zengdaniel@gmail.com

Man versus Machine or Man + Machine?
Mary (Missy) Cummings, Duke University and MIT

In developing any complex system that involves the integration of human decision making and an automated system, the question often arises as to where, when, and how much humans and automation should be in the decision-making loop. Allocating roles and functions between the human and computer is critical in defining efficient and effective system architectures. However, despite the recognition of this problem more than 60 years ago, in this case by NASA (see Figure 1), little progress has been made in balancing role and function allocation across humans and computers.

The problem of human-automation role allocation isn't an academic exercise or limited to a few highly specialized domains such as NASA. The rise of drones (or unmanned aerial vehicles) and the problems with remote human supervision are an extension of well-documented human-automation interaction problems in fly-by-wire systems in commercial aviation. Mining industries increasingly use automation to augment and in some cases outright replace humans, and robots that require human interaction are on the battlefield and in surgical settings. While these applications might seem far from everyday life, Google's recent announcement to introduce driverless cars to the mass market in 2017 and the race to develop in-home robots will make the human-automation allocation issue and associated computing demands ubiquitous.

The predominant engineering viewpoint across these systems is to automate as much as possible and minimize the amount of human interaction. Indeed, many controls engineers see the human as a mere disturbance in the system that can and should be designed out. Others may begrudgingly recognize that humans must play a role in such systems, either for regulatory requirements or low-probability event intervention (such as problems in nuclear reactors).

But how do we know what's the right balance between humans and computers in these complex systems? Engineers and computer scientists often seek clear design criteria, preferably quantitative and directive. Most engineers and computer scientists have little to no training in human interaction with complex systems and don't know how to address the inherent variability that accompanies all human performance. Thus, they desire a set of rules and criteria that reduce the ambiguity in the design space, which for them typically means reducing the role of humans or at least constraining human behavior.

A Brief Historical Perspective

In 1951, a National Research Council committee attempted to characterize human-computer interaction (then called human-machine interaction) prior to developing a national air traffic control system.1 The result was a set of heuristics about the relative strengths and limitations of humans and computers (see Table 1), sometimes referred to as what "men are better at" and what "machines are better at" (MABA-MABA).

The heuristic role allocation approach, exemplified in Table 1, has been criticized as attempting to determine points of substitution, because, for example, such approaches provide engineers with justification (possibly erroneously) for how to replace the human with automation.2 For traditional engineers with no training in human-automation interaction, this is exactly what they're trained to do: reduce disturbances and variability in a system and make it more predictable. Indeed, they're trying to "capitalize on the strengths [of automation] while eliminating or compensating for the weaknesses,"2 and this is an important piece of ethnographic information critical for understanding why traditional engineers and computer scientists are so attracted by such representations.

1541-1672/14/$31.00 © 2014 IEEE. IEEE Intelligent Systems. Published by the IEEE Computer Society.
Figure 1. The role allocation conundrum for the Apollo missions. (Photos provided courtesy of The Charles Stark Draper
Laboratory, Inc.)

Table 1. Fitts' list,1 which characterizes human-computer interaction.

Attribute | Machine | Human
Speed | Superior | Comparatively slow
Power output | Superior in level and consistency | Comparatively weak
Consistency | Ideal for consistent, repetitive action | Unreliable; learning and fatigue are factors
Information capacity | Multichannel | Primarily single channel
Memory | Ideal for literal reproduction; access restricted and formal | Better for principles and strategies; access is versatile and innovative
Reasoning/computation | Deductive; tedious to program; fast and accurate; poor error correction | Inductive; easier to program; slow, accurate, and good error correction
Sensing | Good at quantitative assessment; poor at pattern recognition | Wide ranges; multifunction; judgment
Perceiving | Copes with variation poorly; susceptible to noise | Copes with variation better; susceptible to noise
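Read as a design rule, Table 1 collapses into a lookup: identify a function's dominant attribute, then assign the whole function to the human or the machine. A toy sketch makes that substitution mindset concrete (the attribute keys are paraphrases of Table 1, not Fitts' original wording):

```python
# Illustrative only: Fitts' list rendered as a naive "points of substitution"
# allocator. Assigning a whole function from a single attribute is exactly
# the oversimplification the MABA-MABA critique targets.
FITTS_LIST = {
    "speed": "machine",
    "consistency": "machine",
    "information_capacity": "machine",
    "deductive_reasoning": "machine",
    "inductive_reasoning": "human",
    "pattern_recognition": "human",
    "judgment": "human",
}

def allocate(dominant_attribute: str) -> str:
    """Return 'human' or 'machine' for a function's dominant attribute."""
    return FITTS_LIST[dominant_attribute]

print(allocate("pattern_recognition"))  # human
```

Nothing in such a table captures context, workload, or uncertainty, which is why heuristic lists like this invite erroneous wholesale replacement of the human.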

In part to help traditional engineers and computer scientists understand the nuances of how humans could interact with a complex system in a decision-making capacity, Levels of Automation (LOAs) were proposed. LOAs generally refer to the role allocation between automation and the human, particularly in the analysis and decision phases of a simplified information processing model of acquisition, analysis, decision, and action.3,4 Such LOAs can range from a fully manual system with no computer intervention to a fully automated system where the human is kept completely out of the loop, and this framework was later expanded to include 10 LOAs (see Table 2).

For LOA scales like that exemplified in Table 2, at the lower levels the human is typically actively involved in the decision-making process. As the levels increase, the automation plays a more active role in decisions, increasingly removing the human from the decision-making loop. This scale addresses authority allocation (for example, who has the authority to make the final decision), and to a much smaller degree, it addresses types of collaborative interaction between the human and computer. Raja Parasuraman and his colleagues later clarified that the LOAs could be applied across the primary information processing functions of perception, cognition, and action, and not strictly to the act of deciding, but again using the same 10 levels.4

Other taxonomies have proposed alternate heuristic-based LOAs, attempting to highlight less rigid and more dynamic allocation structures,5 as well as address the ability of humans and computers to coach and guide one another. For example, Mica Endsley6 incorporated artificial intelligence into a five-point LOA scale.
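The ten-level scale summarized in Table 2 can be sketched as an ordered type. The level names below are paraphrases, and the "human in the decision loop" cutoff at level 5 is one illustrative reading of the scale, not a definitive threshold:

```python
from enum import IntEnum

# A sketch of the ten levels of automation in Table 2. Names are paraphrases.
class LOA(IntEnum):
    NO_ASSISTANCE = 1          # human takes all decisions and actions
    OFFERS_ALTERNATIVES = 2
    NARROWS_ALTERNATIVES = 3
    SUGGESTS_ONE = 4
    EXECUTES_IF_APPROVED = 5   # human approval required before acting
    VETO_WINDOW = 6            # executes unless the human vetoes in time
    EXECUTES_THEN_INFORMS = 7
    INFORMS_IF_ASKED = 8
    INFORMS_IF_IT_DECIDES = 9
    FULLY_AUTONOMOUS = 10      # ignores the human

def human_in_decision_loop(level: LOA) -> bool:
    """One reading: up to level 5 the human is the deciding authority;
    from level 6 on, the automation acts first."""
    return level <= LOA.EXECUTES_IF_APPROVED

print(human_in_decision_loop(LOA.SUGGESTS_ONE))          # True
print(human_in_decision_loop(LOA.EXECUTES_THEN_INFORMS)) # False
```

In this reading, levels 1 through 5 keep the final decision with the human, while levels 6 through 10 progressively remove the human from the loop.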

September/October 2014, www.computer.org/intelligent


Table 2. Levels of automation.4

Level | Automation description
1 | The computer offers no assistance: the human must make all decisions and take all actions.
2 | The computer offers a complete set of decision/action alternatives, or
3 | narrows the selection down to a few, or
4 | suggests one alternative, and
5 | executes that suggestion if the human approves, or
6 | allows the human a restricted time to veto before automatic execution, or
7 | executes automatically, then necessarily informs the human, and
8 | informs the human only if asked, or
9 | informs the human only if it, the computer, decides to.
10 | The computer decides everything and acts autonomously, ignoring the human.

Such LOA scales have been criticized for their primary focus on an exclusive role and function allocation between humans and computers, and less on the collaborative possibilities between the two.7 However, as noted previously, engineers and designers of such systems desire some way to determine just when and how to design either exclusive or shared functions between humans and computers, and while imperfect and never intended to be rigid design criteria,8 the notion of LOAs helped such professionals conceptualize a design space, as well as gave them a language to discuss competing design philosophies.

A New Look at an Old Problem

After more than a decade of attempting to train traditional engineers and computer scientists to consider the human early in the design process, in addition to exposing the students to the previously discussed lists and debates surrounding them, the most useful representation I've found that elicits the "aha" moment most educators are looking for is depicted in Figure 2.

First, a map is needed that links information processing behaviors and cognition to increasingly complex tasks, which is best exemplified through Jens Rasmussen's taxonomy of skill-, rule-, and knowledge-based behaviors9 (SRK; see Figure 2). One addition to the SRK taxonomy is my representation of uncertainty via the y axis. Uncertainty occurs when a situation can't be precisely determined, often due to a lack of or degraded information with potentially many unknown variables. Both external (environmental) and internal (operator performance variability and the use of stochastic algorithms) sources of uncertainty can drive system uncertainty higher.

Figure 2. Role allocation for information processing behaviors (skill, rule, knowledge, and expertise) and the relationship to uncertainty. (The figure arrays skills, rules, knowledge, and expertise against increasing uncertainty on the y axis, with the x axis spanning the relative strengths of computer versus human information processing.)

For Rasmussen,9 skill-based behaviors are sensory-motor actions that are highly automatic, typically acquired after some period of training. Indeed, he says, "motor output is a response to the observation of an error signal representing the difference between the actual state and the intended state in a time-space environment" (p. 259). This is exactly what controls engineers are taught in basic control theory.

In Figure 2, an example of skill-based control for humans is the act of flying an aircraft. Student pilots spend the bulk of training learning to scan instruments so they can instantly recognize the state of an aircraft and adjust if the intended state isn't the same as the actual state (which is the error signal controls engineers are attempting to minimize). Once this set of skills is acquired, pilots can then turn their attention (which is a scarce resource, particularly under high workload) to higher cognitive tasks.
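Rasmussen's error-signal description of skill-based behavior maps directly onto the feedback loop controls engineers build. A minimal proportional controller, with a gain and altitude numbers chosen purely for illustration, shows the observe-and-correct cycle:

```python
# Skill-based behavior as error nulling: each cycle observes the difference
# between intended and actual state and issues a corrective command.
def control_step(intended: float, actual: float, gain: float = 0.5) -> float:
    error = intended - actual   # the "error signal" in Rasmussen's terms
    return gain * error         # proportional corrective command

altitude, target = 9500.0, 10000.0
for _ in range(20):             # each iteration is one observe-correct cycle
    altitude += control_step(target, altitude)
print(round(altitude))          # 10000
```

With a gain below 1, the error shrinks geometrically toward zero, which is the convergence an autopilot (or a well-trained pilot's instrument scan) achieves continuously.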



Up the cognitive continuum in Figure 2 are rule-based behaviors, which are effectively those human actions guided by subroutines, stored rules, or procedures. Rasmussen likens rule-based behavior to following a cookbook recipe (p. 261).9 Difficulties for humans in rule-based environments often come from not recognizing the correct goal in order to select the correct procedure or set of rules.

In Figure 2, in the aviation example, pilots spend significant amounts of time learning to follow procedures. For example, when an engine light illuminates, pilots recognize that they should consult a manual to determine the correct procedure (since there are far too many procedures to be committed to memory), and then follow the steps to completion. Some interpretation is required, particularly for multiple system problems, which is common during a catastrophic failure such as the loss of thrust in one engine. Recognizing which procedure to follow isn't always obvious, particularly in warning systems where one aural alert can indicate different failure modes.

For Rasmussen, the highest level of cognitive control is that of knowledge-based behaviors, where mental models built over time aid in the formulation and selection of plans for an explicit goal.9 The landing of USAir 1549 in 2009 in the Hudson River, as Figure 2 shows, is an example of a knowledge-based behavior in that the captain had to decide whether to ditch the aircraft or attempt to land it at a nearby airport. Given his mental model, the environment, and the state of the aircraft, his quick mental simulation made him choose the ditching option.

However, this same accident highlights the importance of the need for a collaborative approach between the human and the machine, in that when a complete engine failure occurs in the Airbus A320, the fly-by-wire system automatically trims the plane, computes the ideal glide speed, and readjusts pitch position for landing, which is difficult for pilots to maintain. A single press of the DITCHING button seals the aircraft for water entry. This mutually supportive flight control environment was critical to the successful outcome of this potentially catastrophic event.

I added a fourth behavior to the SRK taxonomy, that of expertise, to demonstrate that knowledge-based behaviors are a prerequisite for gaining expertise in a particular field, and this can't be achieved without significant experience in the presence of uncertainty. So while a person can be knowledgeable about a task through repetition, they become experts when they must exercise their knowledge under vastly different conditions. For example, one pilot who has flown thousands of hours with no system failures isn't as much of an expert as one who has had to respond to many system failures over the same time period. Moreover, judgment and intuition, concepts that often make traditional engineers uncomfortable since they lack a formal mathematical representation, are the key behaviors that allow experts to quickly assess a situation in a fast and frugal method,10 without necessarily and laboriously comparing all possible plan outcomes.

Figure 2 depicts role and function allocation between humans and computers/automation/machines. Such assignments aren't just a function of the type of behavior, but also the degree of uncertainty in the system. It should be noted that these behaviors don't occur in discrete stages with clear thresholds, but rather are on a continuum.

For complex systems with embedded automation, uncertainty can arise from exogenous sources such as the environment; for example, birds in the general vicinity of an airport that might, on rare occasion, be ingested in an engine. However, uncertainty can also be introduced from endogenous sources, either from human behaviors or computer/automation behaviors. As evidenced by the Air France 447 crash in 2009, where the pitot-static system gave erroneous information to the pilots due to icing, sensors can degrade or outright fail, introducing possibly unknown uncertainty into a situation. In this case, where the plane crashed because of pilot error, the pilots couldn't cope with the uncertainty since they hadn't gained the appropriate knowledge or expertise.

Skill-Based Tasks

When considering role allocation between humans and computers, it's useful to consider who or what can perform the skill-, rule-, knowledge-, and expertise-based behaviors required for a given objective and associated set of tasks. For many skill-based tasks, like flying an aircraft, automation in general easily outperforms humans. By flying, I mean the act of keeping the aircraft on heading, altitude, and airspeed; that is, keeping the plane in balanced flight on a stable trajectory.

Ever since the introduction of autopilots and, more recently, digital fly-by-wire control, computers have been far more capable of keeping planes in stable flight for much longer periods of time than if flown manually by humans. Vigilance research is quite clear in this regard, in that it's very difficult for humans to sustain focused attention for more than 20–30 minutes, and sustained attention is precisely what's needed for flying, particularly for long-duration flights.
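The engine-light example above is rule-based behavior in miniature: once the failure is correctly diagnosed, execution reduces to a table lookup. A hypothetical sketch (the alerts and checklist steps are invented for illustration, not taken from any real flight manual):

```python
# Rule-based behavior as if-then dispatch: a hypothetical mapping from a
# diagnosed failure to a checklist of procedure steps.
PROCEDURES = {
    "engine_fire": ["throttle to idle", "fuel cutoff", "discharge fire bottle"],
    "engine_failure": ["maintain airspeed", "attempt restart", "plan landing"],
}

def select_procedure(diagnosed_failure: str) -> list[str]:
    """Return the stored checklist, or fall back to manual troubleshooting."""
    return PROCEDURES.get(diagnosed_failure, ["troubleshoot manually"])

print(select_procedure("engine_failure")[0])  # maintain airspeed
```

The brittle step is choosing the key, not reading the value: as the article notes, one aural alert can indicate several failure modes, so the hard interpretive work happens before this lookup ever runs.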



There are other domains where the superiority of automation skill-based control is evident, such as autonomous trucks in mining industries. These trucks are designed to shuttle between pickup and drop-off points and can operate 24/7 in all weather conditions, since they aren't hampered by reduced vision at night and in bad weather. These trucks are so predictable in their operations that some uncertainty must be programmed into them, or else they repeatedly drive over the same tracks, creating ruts in the road that make it difficult for manned vehicles to negotiate.

For many domains and tasks, automation is superior in skill-based tasks because, given Rasmussen's earlier definition, such tasks are reduced to motor memory with a clear feedback loop to correct errors between a desired outcome and the observed state of the world. In flying and driving, the bulk of the work is a set of motor responses that become routine and nearly effortless with practice. The automaticity that humans can achieve in such tasks can, and arguably should, be replaced with automation, especially given human limitations such as vigilance, fatigue, and the ~0.5-second neuromuscular lag present in every human.

The possibility of automating skill-based behaviors (and, as we will later see, all behaviors) depends on the ability of the automation to sense the environment, which for a human happens typically through sight, hearing, and touch. This isn't trivial for computers, but for aircraft, through the use of accelerometers and gyroscopes, inertial and satellite navigation systems, and engine sensors, the computer can use its sensors to determine with far greater precision and reliability whether the plane is in stable flight and how to correct in microseconds if there's an anomaly. This ability is why military and commercial planes have been landing themselves for years far more precisely and smoothly than humans. The act of landing requires the precise control of many dynamic variables, which the computer can do repeatedly without any influence from a lack of sleep or reduced visibility. The same is true for cars that can parallel park by themselves.

However, as previously mentioned, the ability to automate a skill-based task is highly dependent on the ability of the sensors to sense the environment and make adjustments accordingly, correcting for error as it arises. For many skill-based tasks, like driving, vision (both foveal and peripheral) is critical for correct environment assessment. Unfortunately, computer vision still lags far behind human capabilities in many respects, although there's significant research underway in this area. Ultimately, this means that for a skill-based task to be a good candidate for automation, uncertainty should be low and sensor reliability high, which is difficult for many computer vision applications in dynamic environments. This is why even the most advanced forms of robotic surgery are still just teleoperation, where the doctor is remotely guiding instruments but still in direct control. Currently, robotic surgical tools don't have mature sensors that allow for the closure of the control feedback loop with a high degree of reliability, like those of autopilots. And while some tasks in the driving domain can be automated because of their skill-based nature (like parallel parking), seemingly simple tasks like following the gestures of a traffic cop are extremely difficult for a driverless car due to immature computer vision systems, which don't cope well with uncertainty.

Rule-Based Tasks

As depicted in Figure 2, skill-based behaviors and tasks are the easiest to automate, since by definition they're highly rehearsed and automatic behaviors with inherent feedback loops. Rule-based behaviors for humans, however, require higher levels of cognition, since interpretation must occur to determine, given some stimulus, which set of rules or procedures must be applied to attain the desired goal state.

By the very nature of their if-then-else structures, rule-based behaviors are also potentially good candidates for automation, but again, uncertainty management is key. Significant aspects of process control plants, including nuclear reactors, are highly automated because the rules for making changes are well established and based on first principles, with highly reliable sensors that accurately represent the physical plant's state.

Path planning is also very rule-based in that, given rules about traffic flow (either in the air or on the road), the most efficient path can be constructed. However, uncertainty in such domains makes path planning a less ideal candidate for complete automation. When an automated path planner is given a start and end goal, for the most part the route generated is the best path in terms of the least time (if that is the operator's goal). However, many possibilities exist that the automation may not have information about, and that cause such a path to be either suboptimal or even infeasible, such as in the case of accidents or bad weather.

It is at this rule-based level where there's significant opportunity for humans to collaborate with automation to achieve a better solution than either could alone.
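The path-planning point can be made concrete with a minimal shortest-path search (Dijkstra's algorithm). The road network, costs, and the blocked segment below are invented for illustration:

```python
import heapq

# A minimal shortest-path planner. The "optimal" route is only optimal with
# respect to the costs the planner was given: marking one segment as blocked
# (information the automation may not have) changes the answer entirely.
def shortest_path(graph, start, goal):
    queue, seen = [(0.0, start, [start])], set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, edge_cost in graph.get(node, {}).items():
            if nxt not in seen:
                heapq.heappush(queue, (cost + edge_cost, nxt, path + [nxt]))
    return float("inf"), []

roads = {"A": {"B": 1.0, "C": 4.0}, "B": {"D": 1.0}, "C": {"D": 1.0}}
print(shortest_path(roads, "A", "D"))  # (2.0, ['A', 'B', 'D'])
roads["B"]["D"] = float("inf")         # an accident the planner learns of late
print(shortest_path(roads, "A", "D"))  # (5.0, ['A', 'C', 'D'])
```

The algorithm is exact, yet its answer is only as good as its cost model, which is precisely the opening for a human who knows about the accident or the weather.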



While fast and able to handle complex computation far better than humans, computer optimization algorithms, which work primarily at the rule-based level, are notoriously brittle in that they can only take into account those quantifiable variables identified in the design stages that were deemed to be critical. In complex systems with inherent uncertainties (such as weather impacts or enemy movement), it isn't possible to include a priori every single variable that could impact the final solution.

Moreover, it's not clear exactly what characterizes an optimal solution in such uncertain scenarios. Often, in these domains, the need to generate an optimal solution should be weighed against a satisficing11 solution. Because constraints and variables are often dynamic in complex environments, the definition of optimal is also a constantly changing concept. In cases of time pressure, having a solution that's good enough, robust, and quickly reached is often preferable to one that requires complex computation and extended periods of time, and which might not be accurate due to incorrect assumptions.

Another problem for automation of rule-based behaviors is similar to the one for human selection of the right rule or procedure for a given set of stimuli. Automation will reliably execute a procedure more consistently than any human, but the assumption is that the computer selects the correct procedure, which is highly dependent on the sensing aspect. This is where obstacle detection and avoidance, particularly for driverless cars, is critical. If the automated sensors detect an obstacle, then procedures will be executed for avoidance or braking or both. Indeed, it has been shown that cars equipped with radar can automatically brake much more effectively than a human can.12 However, the sensing aspect is a significant problem for this futuristic technology, which isn't as reliable in bad weather with precipitation and standing water on roadways.

Knowledge-Based Tasks and Expertise

The most advanced form of cognitive reasoning occurs in domains where knowledge-based behaviors and expertise are required. Coincidentally, these settings are also typically where uncertainty is highest, as Figure 2 shows. While rules may assist decision makers (whether human or computer) in aspects of knowledge-based decisions, such situations are by definition vague and ambiguous, and mathematically optimal solutions are unavailable.

It's precisely in these situations where the human power of induction is critical. Judgment and intuition are critical in these situations, as these are the weapons needed to combat uncertainty. Because of the aforementioned brittleness problems in the programming of computer algorithms and the inability to replicate the intangible concept of intuition, knowledge-based reasoning, and especially true expertise, are for now outside the realm of computers. However, there's currently significant research underway to change this, particularly in the machine learning community, but progress is slow.

IBM's Watson, 90 servers each with a 3.5-gigahertz core processor, is often touted as a computer with knowledge-based reasoning, but people confuse the ability of a computer to search vast databases to generate formulaic responses with knowledge. For Watson, which leverages natural language processing and pattern matching through machine learning, uncertainty is low. Indeed, because Watson leverages statistical reasoning, it can bound answers with confidence intervals.

A more near-term example of human-computer collaboration for knowledge-based medical decision making is the Athena Decision Support System, which implements guidelines for hypertension and opioid therapies.13 This system harnesses the power of computer search and filtering but also allows doctors the ability to guide the computer based on their own experiences.

A limitation of pattern-matching approaches is the overreliance on supervised learning, in that labels must be assigned (typically by humans) for a computer to recognize a pattern. Not only is it possible for humans to introduce error in this process, it raises the question of whether a computer can detect a pattern or event it has never seen before, or that's slightly different from a pattern it has seen before.

There has been increasing interest in using semisupervised and unsupervised machine learning algorithms that don't use labels, and thus generate groups of patterns in the absence of such bias. However, with regard to replicating human learning in terms of object recognition, unsupervised machine learning for computers is still quite immature. In a recent major "breakthrough," an unsupervised algorithm was able to cluster and successfully recognize cats in unlabeled images with only 15.8 percent accuracy, which was reported to be an improvement of 70 percent over the current state of the art.14 For computer vision applications, robust, fast, and efficient perception will be needed before computers can reliably be trusted in perception-based tasks.

With such brittleness, it will be some time before computers can truly begin to approach the expertise of humans, especially in situations of high uncertainty.
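The gap between labeled and unlabeled learning described above can be sketched with k-means, a classic unsupervised algorithm. The two-dimensional points here are invented stand-ins for image features:

```python
import random

# Unsupervised grouping in miniature: k-means finds structure in unlabeled
# points, but nothing in the algorithm names the clusters. Attaching "cat"
# to a cluster still takes a human, which is the labeling gap at issue.
def kmeans(points, k, iters=50, seed=0):
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: (p[0] - centers[i][0]) ** 2
                                                + (p[1] - centers[i][1]) ** 2)
            groups[nearest].append(p)
        centers = [(sum(p[0] for p in g) / len(g), sum(p[1] for p in g) / len(g))
                   if g else centers[i] for i, g in enumerate(groups)]
    return centers, groups

data = [(0.1, 0.2), (0.0, 0.1), (0.2, 0.0),   # one tight group
        (5.0, 5.1), (5.2, 4.9), (4.9, 5.0)]   # another tight group
centers, groups = kmeans(data, 2)
print(sorted(len(g) for g in groups))  # [3, 3]
```

The algorithm recovers the two groups, but it outputs only "cluster 0" and "cluster 1"; deciding which one is "cats" remains outside the computation.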



Table 3. Degree of automation as a function of a desired behavior.

Cognitive behavior/task | Degree of automation
Skill-based | Best candidate for automation, assuming reliable sensors for state and error feedback
Rule-based | Possible candidate for automation, if the rule set is well established and tested
Knowledge-based | Some automation can be used to help organize, filter, and synthesize data
Expertise | Human reasoning is superior, but can be aided by automation as a teammate

But this isn't to say there's no role for computers in knowledge-based reasoning. Again, this area is ripe for more development in human-computer collaboration. IBM's first commercial application of Watson will be aiding nurses and doctors in diagnoses, which falls squarely in the domain of expert decision makers.

While Paul Fitts and his colleagues were perhaps overly focused on mutually exclusive assignment of human and machine roles, their basic premise more than 60 years ago should be interpreted through the lens of collaborative systems and the behaviors that need to be supported. The modified SRK taxonomy presented here isn't meant to be a replacement for earlier role and function allocation efforts, but rather a different lens through which to think about system design. The intent is to provide engineers and computer scientists with a principled framework by which to formulate critical questions, such as the following:

• Can my sensors provide all the data I need at a high enough reliability to approximate trained human skill sets?
• Is there a high degree of uncertainty in either my environment or my sensors, which would necessitate human supervision?
• Can humans augment and improve either sensor or reasoning deficiencies, and how would this occur without overloading the human?
• Can automation reasoning be improved through human guidance and coaching?
• Can automation be leveraged to help the human reduce uncertainty, particularly when knowledge and expertise are needed? The reverse should also be explored, in that the human may be able to reduce uncertainty for the automation.

As Table 3 shows, skill-based behaviors are the best candidates for automation, assuming significant sensor performance assumptions can be met, but rule- and knowledge-based reasoning are better suited for human-computer collaboration. Systems should be designed so that humans harness the raw computational and search power of computers for state-space reduction, but also allow them the latitude to apply inductive reasoning for potentially creative, out-of-the-box thinking. As a team, the human and computer are far more powerful than either alone, especially under uncertainty.

In a 2005 competition against the Hydra chess computer, two novices with three computers beat the computer and other grandmasters aided by single computers. Arguably, chess is an environment of low uncertainty (particularly for computers that can search a large but finite set of possible outcomes). However, in a real-world and highly uncertain command-and-control environment of one operator controlling multiple robots in a search-and-find task, it has been shown that allowing the human to coach a highly automated system produces results up to 50 percent better than if the automation were left to its own devices.15 Collaboration between humans and computers, particularly in knowledge-based domains where complementary strengths can be leveraged, holds much future potential.

Last, role and function allocation is as much art as science. The complexity of systems with embedded autonomy supporting dynamic human goals suffers from the "curse of dimensionality."8 As a result, these systems will never have closed-form solutions and will be intractable from a mathematical perspective. But because of the necessary mix of art and science in designing such systems, both industry and academia should recognize the need for a new breed of engineer/computer scientist. Such a person should have an appreciation for human psychology and performance characteristics, but at the same time understand control theory, Bayesian reasoning, and stochastic processes.
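The coaching result cited above suggests a simple pattern: let the automation optimize, but let the human inject constraints the cost model lacks. A toy sketch, with an invented greedy robot-to-task assigner and an invented operator veto:

```python
# "Coaching" in miniature: the automation assigns robots to tasks greedily by
# cost; a human operator, knowing something the cost model doesn't (say, that
# a sector is unreachable), vetoes one pairing and the plan changes. All
# numbers and the veto are invented for illustration.
def greedy_assign(costs, forbidden=frozenset()):
    """costs[robot][task] -> cost; returns {robot: task}, skipping vetoed pairs."""
    assignment, taken = {}, set()
    for robot in costs:
        options = [(c, task) for task, c in costs[robot].items()
                   if task not in taken and (robot, task) not in forbidden]
        if options:
            cost, task = min(options)
            assignment[robot] = task
            taken.add(task)
    return assignment

costs = {"r1": {"t1": 2, "t2": 9}, "r2": {"t1": 3, "t2": 4}}
print(greedy_assign(costs))  # {'r1': 't1', 'r2': 't2'}
# Operator knows r1 can't actually reach t1 (e.g., debris): veto that pair.
print(greedy_assign(costs, forbidden={("r1", "t1")}))  # {'r1': 't2', 'r2': 't1'}
```

The automation still does the search; the human contributes exactly the kind of unmodeled, knowledge-based information the article argues computers lack.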



References

1. P.M. Fitts, "Human Engineering for an Effective Air Navigation and Traffic Control System," tech. report, Nat'l Research Council, 1951.
2. S.W.A. Dekker and D.D. Woods, "MABA-MABA or Abracadabra? Progress on Human-Automation Coordination," J. Cognition, Technology and Work, vol. 4, 2002, pp. 240–244.
3. R. Parasuraman, "Designing Automation for Human Use: Empirical Studies and Quantitative Models," Ergonomics, vol. 43, 2000, pp. 931–951.
4. R. Parasuraman, T.B. Sheridan, and C.D. Wickens, "A Model for Types and Levels of Human Interaction with Automation," IEEE Trans. Systems, Man, and Cybernetics—Part A: Systems and Humans, vol. 30, 2000, pp. 286–297.
5. D.B. Kaber et al., "Adaptive Automation of Human-Machine System Information-Processing Functions," Human Factors, vol. 47, Winter 2005, pp. 730–741.
6. M. Endsley, "The Application of Human Factors to the Development of Expert Systems for Advanced Cockpits," Proc. Human Factors and Ergonomic Soc. 31st Ann. Meeting, 1987, pp. 1388–1392.
7. Defense Science Board, The Role of Autonomy in DoD Systems, US Dept. of Defense, 2012.
8. T.B. Sheridan, "Function Allocation: Algorithm, Alchemy, or Apostasy?" Int'l J. Human-Computer Studies, vol. 52, 2000, pp. 203–216.
9. J. Rasmussen, "Skills, Rules, and Knowledge: Signals, Signs, and Symbols, and Other Distinctions in Human Performance Models," IEEE Trans. Systems, Man, and Cybernetics, vol. 13, no. 3, 1983, pp. 257–266.
10. G. Gigerenzer, P.M. Todd, and the ABC Research Group, Simple Heuristics That Make Us Smart, Oxford Univ. Press, 1999.
11. H.A. Simon et al., Decision Making and Problem Solving, Nat'l Academy Press, 1986.
12. Insurance Institute for Highway Safety, "More Good News about Crash Avoidance: Volvo City Safety Reduces Crashes," Status Report, vol. 48, no. 3, 2013; www.iihs.org/iihs/news/desktopnews/more-good-news-about-crash-avoidance-volvo-city-safety-reduces-crashes.
13. A. Advani et al., "Developing Quality Indicators and Auditing Protocols from Formal Guideline Models: Knowledge Representation and Transformations," Proc. AMIA Ann. Symp., 2003, pp. 11–15.
14. Q.V. Le et al., "Building High-Level Features Using Large Scale Unsupervised Learning," Proc. 29th Int'l Conf. Machine Learning, 2012; http://icml.cc/2012/papers/73.pdf.
15. M.L. Cummings et al., "The Impact of Human-Automation Collaboration in Decentralized Multiple Unmanned Vehicle Control," Proc. IEEE, vol. 100, 2012, pp. 660–671.

Mary (Missy) Cummings is the director of the Humans and Autonomy Laboratory at Duke University. She is also a principal investigator in the MIT Computer Science and Artificial Intelligence Laboratory. Her research interests include human-autonomous system collaboration, human-systems engineering, and the ethical and social impact of technology. Cummings has a PhD in systems engineering from the University of Virginia. She's an IEEE Senior Member. Contact her at m.cummings@duke.edu.

Selected CS articles and columns are also available for free at http://ComputingNow.computer.org.
