
S516: Human-Computer Interaction
Hamid R. Ekbia
SLIS Indiana University
Lecture 3: Human Cognition

Outline
 Cognition: Basic concepts
 Mentalist vs. Situated
 Theoretical frameworks
 Information Appliances
 Mentalist vs. Situated

I. Introduction
a. Since interaction design is human-centered, we need to attain a basic understanding of
human cognition, and to apply this understanding to:
i. Inform the design of technologies
ii. Extend human capabilities
iii. Compensate for human weaknesses
b. This is merely a crash course on “cognition,” with the main focus on the relationship between
cognition and technology.
i. Norman and Clark provide compatible but also contrasting views of this
relationship.
1. They both agree that technologies should be human-centered, but one
advocates “visibility in use” and the other advocates
“transparency-in-function”
a. Norman looks at visibility in terms of the user’s mental model,
their knowledge and understanding of artifacts, and the
psychology of use
b. Clark is more interested in the “symbiosis” between the
technology and the user, which results in “low cognitive and
economic costs;” he is not interested in our understanding the
technology
2. What is going on here? How can we reconcile these two views?
ii. Before examining these questions, let us get some preliminaries out of the way.

II. Cognition
a. Roughly speaking, cognition is what goes on in our heads when we “think”
i. Broadly understood, this includes attending, remembering, learning, talking,
imagining, reading, and so on.
ii. These are often interdependent, and do not happen in isolation, but let us focus
on some cognitive activities that are more relevant to design
b. Attention is the process of selecting things to focus on from among a range of possibilities;
it is facilitated by:
i. Goals: what you want to do or find out
1. Think of the difference between driving home versus sightseeing
ii. Information presentation: how information is displayed
1. Think of the difference between a phone book and a scrap piece of paper
with handwritten phone numbers, or between Google and other search
engines
c. Perception: the process of acquiring information from the environment through vision,
hearing, touching, etc.
i. Perception is almost always multimodal — that is, it involves different senses and
media


ii. The way the environment is organized and information is presented to us
makes a great deal of difference in how we perceive things
d. Memory: the capability to maintain and recall previous information, people, experiences,
relationships, etc.
i. It is very selective; a lot of filtering is involved (we don’t commit to memory
everything that we encounter)
1. This happens through attention and subsequent encoding and
interpretation
ii. It is contextual; we remember things within the specific context of their
happening, not in isolation
1. Ex. Seeing your neighbor in an unfamiliar place
iii. We are better at recognition than recall
1. Interfaces that give priority to recognition are better
a. Ex.: file management systems, bookmarks, passwords and
PINs
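The recognition-over-recall point can be made concrete with a toy command interface (a hypothetical sketch; the commands and menu here are invented for illustration, not taken from the lecture):

```python
# Recall vs. recognition in a toy interface: a command line demands exact
# recall of command names, while a menu only asks the user to recognize them.

COMMANDS = {"copy": "Copy selection", "paste": "Paste clipboard", "undo": "Undo last action"}

def recall_interface(typed: str) -> str:
    """Command-line style: the user must reproduce the exact name from memory."""
    return COMMANDS.get(typed, "error: unknown command")

def recognition_interface(choice: int) -> str:
    """Menu style: the options are visible, so recognizing one is enough."""
    menu = sorted(COMMANDS)  # displayed to the user as a numbered list
    if not 0 <= choice < len(menu):
        return "error: no such menu item"
    return COMMANDS[menu[choice]]

# A misremembered name fails under recall but not under recognition:
print(recall_interface("cpy"))      # error: unknown command
print(recognition_interface(0))     # Copy selection  (menu item 0 is "copy")
```

The asymmetry is the design lesson: the recall interface fails on any slip of memory, while the recognition interface only requires telling the options apart.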
e. Learning:
i. There are different ways of learning – for example,
1. Learning by following instructions
2. Learning by doing
ii. The most effective method of learning depends on the task
f. Reading, speaking, and listening
i. They all use language, but there are key differences
1. Written language has a longer lifetime and is more grammatical
2. Reading can be quicker than listening
3. Listening requires less cognitive effort
ii. Which mode is preferable
1. Depends on the task, the user, and their skills, preferences, and
capabilities
g. Reflective cognition
i. This involves planning, reasoning, decision-making, problem-solving, etc.

III. Mentalist vs. Situated

Mentalist                                              Situated
centralized                                            distributed
abstract (i.e., the body is irrelevant)                embodied
individual (i.e., a solitary mind)                     embedded
rational (i.e., reasoning, logical)                    emotional
detached (i.e., separated from perception and action)  integrated

IV. Cognition and Design


a. There are immediate implications for design from this cursory look at cognition.


b. A new application called Scrybe is a recent example of how simple principles can be
applied to good design: http://iscrybe.com/main/index.php
c. But cursory looks and intuitions are risky, because they might mislead us. We need a more
systematic understanding of cognition and its relation to technology. Some of the
proposed “frameworks” of thinking about this relationship are as follows.

V. Frameworks
There are different ways of conceptualizing cognition. Here we discuss three:

a. Mental Models
i. Previously, we saw how transparent conceptual models about systems can
facilitate learning by the users
ii. People also develop a “mental model” of how to use a system and how the system
works
1. The mental model guides the user in carrying out their tasks
iii. Traditionally, cognitive psychologists consider mental models as internal
constructions of some aspects of the real world that allow people to develop
expectations, and even make predictions and inferences
1. Mental models are sometimes called “scripts” (in AI), cultural schemas
or models (in anthropology and socio-linguistics)
a. For example, a fast-food-restaurant script
iv. People sometimes develop and rely on the wrong mental model
1. For example, when they walk into an unfamiliar situation (think of a
non-American who walks into a fast-food restaurant for the first time)

2. Or when they use a model based on a “general valve theory”: more is
more, which works for water taps and radio controls, but not for
thermostats
3. Incorrect mental models seem to be common, although I am not sure if
we need to resort to mental models in order to explain these behaviors;
they might well be driven by other aspects of human psychology: habits,
frustrations, reflexive responses, etc.
a. Pushing the elevator button multiple times
b. Bashing away at keys to make the frozen cursor move on the
screen
c. Hitting the top of the TV

4. However, as Westbrook (2006) says, mental models are important
because, for individuals who hold them, they “have a value and reality all
their own.”

v. In the ideal case, people’s mental models should match the conceptual model of
the designer, but how can we accomplish this?
1. Educate them better?
2. Make the systems more transparent?
a. How does Clark characterize transparency?
i. Are iMacs transparent?
b. How much transparency is optimal?
c. Here are some ways of doing this:
i. Useful feedback


ii. Easy-to-understand and easy-to-follow instructions
iii. Online help and manuals
vi. The ATM Activity
The group activity on ATMs that we did in class today was more thoroughly
conducted by Payne (1991) in the form of interviews with students. The results of
this study are summarized in Payne (2003):
1. There’s a huge variety in people’s beliefs about the design of ATMs –
the lesson:
a. “Users of machines are eager to form explanatory models, and
they readily go beyond available data to infer models that are
consistent with their experiences.” (p. 140)
2. People’s models are fragmentary; they are collections of beliefs about
parts of the system, processes, or behaviors, rather than unified models
of the whole design; they are built on different metaphors and analogies
– the lesson:
a. “Users’ mental models of single processes or operations might
be worthwhile topics for study and for practical intervention
(in design or instruction).” (ibid)
3. People’s mental models have practical implications
a. For instance, if they believe that it is not possible to type ahead
during machine pauses, they wait for the whole duration of the
pause, needlessly slowing down the transaction.
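The type-ahead point can be illustrated with a toy timing model (a hypothetical sketch, not Payne's actual study; the pause and keystroke times are made-up numbers): a machine that buffers keystrokes during its pauses lets typing overlap the machine's "thinking," while a user who waits out every pause pays for both in sequence.

```python
# Toy model of an ATM dialogue: keystrokes typed during machine pauses are
# buffered, so with type-ahead the typing overlaps the machine's pauses.

def transaction_time(pauses, keystroke_time, n_keys, type_ahead):
    """Total transaction time for n_keys keystrokes and the given machine pauses."""
    typing = n_keys * keystroke_time
    if type_ahead:
        # Typing overlaps the pauses: the slower of the two streams dominates.
        return max(sum(pauses), typing)
    # The user waits out every pause before typing: the times simply add up.
    return sum(pauses) + typing

pauses = [2.0, 3.0, 2.5]  # seconds of machine "thinking" between prompts
print(transaction_time(pauses, 0.5, 8, type_ahead=True))   # 7.5
print(transaction_time(pauses, 0.5, 8, type_ahead=False))  # 11.5
```

The gap between the two numbers is exactly the cost of the mistaken mental model: a user who believes type-ahead is impossible behaves as if the second formula were the only one available.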
b. Sensemaking
i. Westbrook (2006) provides a “socio-cognitive approach” to mental models,
which “recognizes the critical roles of social context, personal situation, and
affective influences” (p.564).
ii. Probably most useful is the study of mental models of academic
information-seeking behavior reported in the paper
1. First, participants were asked to document their mental models of
grocery shopping
a. Steps involved — e.g., select items, pay, etc.
b. Sequence of steps — e.g., pick up frozen items last
c. Available options — e.g., samples at the cheese counter, but
not at the pharmacy
d. Available support — e.g., employees with tags
e. Social norms — e.g., staying in line
2. Then, in the course of a semester, they were asked to develop mental
models of information seeking with a focus on components and
relationships among them
3. The study found three patterns of mental models, summarized in
Table 2 of the paper (p. 573).

c. Information Processing
i. According to this view, minds are information processors
1. Information goes through different processing stages, which act upon
different types of representation such as images, words, rules, etc.
2. Processes include comparing and matching
ii. This was (and still is) an influential view in classical software engineering


1. Where people break things up into data flows that go through
processing modules from the input to the output
2. The famous Data Flow Diagrams represent this view
iii. A famous example of this is the Model Human Processor (Card et al., 1983),
which is purely mentalistic
1. This runs into problems
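The information-processing picture can be sketched as a small pipeline (an illustrative sketch only; the stage names and the lexicon are invented for this example, not taken from Card et al.):

```python
# A minimal data-flow pipeline in the spirit of classical information
# processing: input passes through successive stages, each of which
# transforms the representation (here: raw text -> tokens -> matches -> response).

def perceive(raw: str) -> list[str]:
    """Perceptual stage: segment raw input into tokens."""
    return raw.lower().split()

def compare_and_match(tokens: list[str], known: set[str]) -> list[str]:
    """Matching stage: keep only tokens that match stored representations."""
    return [t for t in tokens if t in known]

def respond(matches: list[str]) -> str:
    """Response stage: produce output from the matched representation."""
    return f"recognized {len(matches)} item(s): {', '.join(matches)}"

# Wire the stages together, as a Data Flow Diagram would depict them:
LEXICON = {"cat", "dog", "house"}
output = respond(compare_and_match(perceive("The CAT chased a dog"), LEXICON))
print(output)  # recognized 2 item(s): cat, dog
```

Each function stands in for one box of a Data Flow Diagram; the mentalistic assumption is that all of these stages live entirely inside the head.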

d. External Cognition
i. To overcome the problems of mentalistic models, we need to bring external
representations and “scaffolds” into the picture
ii. This has three benefits
1. Reducing memory load
a. These could be physical objects or representational media
such as calendars, shopping lists, etc.
2. Offloading computation
a. Doing math on paper
b. The difference between Arabic and Roman numerals
3. Annotating and cognitive tracing
a. Annotating involves modifying representations to reflect
changes that need to be marked
i. Ex. Crossing out items from a list
b. Tracing involves externally manipulating items into different
orders and structures
i. Ex. Card games, Scrabble, or more recent
visualization techniques
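The Arabic-vs-Roman contrast under "offloading computation" can be made concrete: positional notation turns addition into a mechanical column procedure, while Roman numerals admit no such procedure and must first be decoded into quantities (a small illustrative sketch):

```python
# Why positional (Arabic) notation offloads computation: column-by-column
# addition with carries is purely mechanical, whereas Roman numerals have
# no column procedure and must first be decoded into quantities.

def add_positional(a: str, b: str) -> str:
    """Add two non-negative Arabic-numeral strings column by column, with carries."""
    width = max(len(a), len(b))
    a, b = a.zfill(width), b.zfill(width)
    carry, digits = 0, []
    for da, db in zip(reversed(a), reversed(b)):
        carry, d = divmod(int(da) + int(db) + carry, 10)
        digits.append(str(d))
    if carry:
        digits.append(str(carry))
    return "".join(reversed(digits))

ROMAN = {"I": 1, "V": 5, "X": 10, "L": 50, "C": 100, "D": 500, "M": 1000}

def roman_to_int(s: str) -> int:
    """Decode a Roman numeral; subtractive pairs (IV, XL, ...) are handled."""
    total = 0
    for ch, nxt in zip(s, s[1:] + " "):
        v = ROMAN[ch]
        total += -v if ROMAN.get(nxt, 0) > v else v
    return total

print(add_positional("47", "385"))                       # 432
print(roman_to_int("XLVII") + roman_to_int("CCCLXXXV"))  # 432
```

The first function is the algorithm a schoolchild executes on paper; the external representation (aligned columns) carries part of the computation. The second shows that with Roman numerals the work cannot even begin until the notation is translated away.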
e. Distributed Cognition
i. This approach is similar to external cognition in that it studies cognition across
individuals, artifacts, and internal and external representations.
ii. As such, the flow of information is understood as propagation and transformation
through different representational media (see Figure 3.14 on page 130 of RPS
2007).

VI. Information Appliances


a. Clark advocates the framework of external cognition, and actually makes a bigger claim –
namely, that the boundary of our minds goes beyond the skull and the skin
i. His notion of transparency has a lot to do with this claim
1. Transparent technologies have been with us for a long time
2. Passage to transparency is an evolutionary process that extends over time
(as shown by the difference between a wristwatch and a dictionary)
ii. But Clark’s view also has to do with the relationship between physical and virtual
1. He discusses technologies that are built to blur the boundaries between
the virtual and the physical
b. To fully understand the potentials and implications of this view let us take a look at the
notion of “information appliance,” introduced by Norman as something that:
i. Supports a specific activity through the storage, processing, or transfer of
information


ii. Talks to other appliances


iii. Is poised to be taken for granted – i.e., transparent
c. The notion of “transparent” defined in the above manner is somewhat different from
the meanings of the term that we have seen before.
d. In fact, it seems that there’s much going on here, and we need to unpack the term by
looking at its different connotations such as:
i. Understandable (the movie projector in Norman’s example)
ii. Open-hood (command-line interfaces)
iii. Taken for granted (the wristwatch)
iv. Easily accessible (the dictionary)
1. Ready-to-hand
v. Seamlessly connected (the internet)
vi. Pervasive (mobile technologies)
vii. Hidden or embedded (modern appliances)
1. “out of sight”
viii. Pseudo-neural and unconscious (image messaging?)
1. “out of mind”
ix. Personalized (the wearable remembrance agent)
x. Automatic (the auto-complete feature of many software environments, such as email)
xi. Constantly running ( )
xii. Out of control ( )
1. Not present-at-hand (and not tangible?)
xiii. Dynamic appliances (that adapt to the user’s needs)
xiv. Skilled (a self-customizing machine)
e. Now, we can look back at the physical-virtual dichotomy
i. As we have seen before, some people advocate technologies that would allow us
to apply our experience with the physical world to the virtual world. The idea is
to make technologies “extra-visible” -- for example
1. Tangible User Interfaces such as the marble answering machine, the
Sensetable at MIT, and the cello’s bow-using interface
ii. Others try to overlay the physical world with virtual interfaces
1. As in Augmented Reality
iii. Yet others like to erase the distinction altogether
1. As in mixed reality interfaces
iv. Clark suggests that the differences between these visions “show up only, if at all,
at the very extremes, where some Information Appliances will indeed be out of
sight and out of mind” (p. 56).
v. On the basis of this, he argues that we should not ask, “Which way is the best?,”
because different tools serve different needs and purposes
1. We want to understand some but not others
vi. This reminds me of a distinction that some sociologists make among technologies
when they discuss the human-machine relationship. This provides a useful
contrast with the views we have seen so far. I have written about this in my book,
Artificial Dreams, in a discussion of chess and Deep Blue, and would like to share it
with you here:

VII. Mentalist vs. Situated


The constructivist view, held by a group of sociologists, suggests that the boundary between
humans and machines is permeable at least insofar as humans are willing to act like machines. Humans
determine where the boundary is, and they can even erase it by choosing to act in certain ways. Collins
and Kusch (1998) have developed an elaborate theory of the shape of actions with the purpose of
establishing new boundaries between humans and machines. According to this theory, human actions
fall into two general categories. In polimorphic (from “polis” and “morph,” roughly meaning “socially
shaped”) actions such as voting, greeting, praying, shopping, or writing a love letter, humans draw upon
their understanding of society to perform the action. These are formative actions in the sense that they
each constitute a form of life. Praying, for instance, is different in the Islamic and Catholic forms of life,
partly making them distinguishable from each other. The variable, intentional, or institutional character
of polimorphic actions makes them essentially non-mimicable for things like machines that are not part
of the pertinent society or form of life. By contrast, in mimeomorphic (or machinelike) actions such as
blinking or swinging a golf club, there is a one-to-one mapping between the action and observable
behaviors, making them mimicable by machines. In other words, machines can act intelligently to the
extent that humans, as social beings, are willing and able to act mechanically.
During the last few centuries, chess has transformed from a courting activity of aristocracy to a
regulated championship match in public spectacle. This can be partly understood as a shift from a
polimorphic action to a mimeomorphic one where humans are made to act, to a certain degree, in a
machine-like fashion. Unfortunately, Collins and Kusch do not discuss chess directly, making it difficult
to understand their view on the questions raised here. However, one can make inferences about chess
from their discussions of other topics. Collins and Kusch (1998: 119-120) classify machines in two
ways: first according to what they do, and second according to how they work. With respect to what they
do, the authors suggest that machines are of three types: as tools they amplify our ability to do what we
can already do; as proxies they replace us by doing what we already do (perhaps better than we do it);
and as novelties they do types of things that we could never do without them. According to this typology,
hammers and word processors are tools, thermostats and medical expert systems count as proxies (see
Chapter III), and fridge-freezers and rockets are novelties, as are a laser and a virtual-reality headset.
Asserting that the boundaries between these categories are relative to our standpoint and to the level of
analysis — “we can always turn a tool into a proxy by taking a lower vantage point,” and vice versa
(ibid) — the main thrust of their argument is that “what have been taken to be proxies should really be
thought of as tools” (p. 121). Pocket calculators, according to Collins and Kusch, do not do arithmetic;
they do only a tiny part of arithmetic.
The main argument behind the above assertion is what Collins and Kusch call “Repair,
Attribution, and all That” or “RAT”: “We think a pocket calculator does arithmetic after the fashion of a
proxy, or even an agent, because we continually repair its errors and attribute high-level arithmetical
agency to it” — for instance, when we interpret the answer 6.9999996 of a calculator to the product
“7/11 x 11” as 7. This is what the authors mean by “repair” — a common phenomenon when we deal
with machines and pets, but also in our interactions with fellow human beings. What makes the human
case different is that the pattern of skill and repair work is roughly symmetrical between the parties. By
the same token, the attribution of agency to machines would make sense if the RAT is balanced between
us and them, a condition that is hardly attained with present-day computers — unless, of course, we
either “put social knowledge into computers” or we eliminate the need for it by acting in a machine-like
fashion. In short, according to Collins and Kusch (1998: 125), “machines can only be proxies where
whatever we want to bring about can be brought about through mimeomorphic actions.”
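The calculator example of "repair" is easy to reproduce. The sketch below mimics a pocket calculator that truncates its display after seven decimal places (an assumption about the device, not something the text specifies): 7/11 × 11 then comes out as 6.9999996, and the "repair" is the rounding to 7 that we perform without noticing.

```python
from decimal import Decimal, ROUND_DOWN

# Mimic a pocket calculator that truncates after seven decimal places
# (an assumed display width): 7/11 * 11 does not come back as exactly 7,
# and the human "repair" is the silent rounding that attributes 7 to it.
SEVEN_PLACES = Decimal("0.0000001")

quotient = (Decimal(7) / Decimal(11)).quantize(SEVEN_PLACES, rounding=ROUND_DOWN)
raw = quotient * Decimal(11)

print(raw)         # 6.9999996 -- what the machine "says"
print(round(raw))  # 7         -- what we attribute to it (the RAT at work)
```

The second line is the repair work: arithmetical agency is attributed to the calculator only because we routinely absorb its errors, which is exactly Collins and Kusch's point.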
The notion of RAT provides a useful way of understanding human and computer interaction.
We saw explicit examples of repair work by the IBM team on Deep Blue between and throughout the


games — e.g., when the team fine-tuned the machine’s strategy according to the opponent’s play. To be
sure, there have been many other such instances of “repair” work that are not discussed here. All of this seems to
lend support to the idea that Deep Blue might indeed be a tool, rather than a proxy, according to
Collins and Kusch’s categories. But there are two caveats. First, the balance between Deep Blue and
humans in terms of their respective repair work is far beyond what we can say of pocket calculators. In
fact, in an interesting twist, it was the humans who played as proxies of Deep Blue (e.g., when they
physically performed the move decided by the machine). Second, even human beings when they
interact with each other do not have the proclaimed balance and symmetry in the repair work. Rather,
their contribution seems to follow the contours of authority, status, and expertise — think of the
interactions between a nurse and a patient in a hospital, a manager and an employee, a mechanic and a
car owner, a parent and a child, and so on.


a. The Cyborg Family Tree
[Diagram: a branching classification of cyborgs. The top-level split is
Natural Born vs. Artificially Created; branch labels include Penetrative /
Non-Penetrative, Neurally Controlled / Electronically Controlled,
Implant/Transplant (Mechanical Implants, Computer Implants), Genetically
Engineered, Biological / Non-Biological, Physical / Virtual, Tele-Presence,
Communication Devices, Tele-Manipulation, Androbots, Robots, Softbots,
Thought Control, and Teleportation.]
