
John Benjamins Publishing Company

This is a contribution from Sign Language & Linguistics 17:2


© 2014. John Benjamins Publishing Company
This electronic file may not be altered in any way.
The author(s) of this article is/are permitted to use this PDF file to generate printed copies to be
used by way of offprints, for their personal use only.
Permission is granted by the publishers to post this file on a closed server which is accessible
only to members (students and faculty) of the author's/s' institute. It is not permitted to post
this PDF on the internet, or to share it on sites such as Mendeley, ResearchGate, Academia.edu.
Please see our rights policy on https://benjamins.com/#authors/rightspolicy
For any other use of this material prior written permission should be obtained from the
publishers or through the Copyright Clearance Center (for USA: www.copyright.com).
Please contact rights@benjamins.nl or consult our website: www.benjamins.com

Phonological and morphological faces


Disgust signs in German Sign Language
Eeva A. Elliott (a) and Arthur M. Jacobs (a,b)

(a) Department of Experimental and Neurocognitive Psychology, Freie Universität Berlin, Germany / (b) Dahlem Institute for Neuroimaging of Emotion (D.I.N.E.), Berlin, Germany

In this study, we use a corpus to verify the observation that signs for emotion related concepts
are articulated with congruent facial movements in German Sign Language.
We propose an account of the function of these facial movements in the language that also explains the function of mouthings and other
facial movements at the lexical level. Our data, taken from 20 signers in three
different conditions, show that for the disgust related signs, a disgust related
facial movement with temporal scope only over the individual sign occurred
in most cases. These movements often occurred in addition to disgust related
facial movements that had temporal scope over the entire clause. Using the Facial
Action Coding System, we found some variability in how exactly the facial movement was instantiated, but most commonly, it consisted of tongue protrusion
and an open mouth. We propose that these lexically related facial movements be
regarded as an additional layer of communication with both phonological and
morphological properties, and we extend this proposal to mouthings as well. The
relationship between this layer and manual lexical items is analogous in some
ways to the gesture-word relationship, and the intonation-word relationship.
Keywords: facial expressions, non-manual features, phonology, mouthings

1. Introduction: The phenomenon

It has been reported for ASL that manual signs for some emotion related concepts temporally co-occur with facial movements that have the congruent meaning (Liddell 1980; McIntire & Reilly 1988; Reilly, McIntire & Bellugi 1990). For
example, the sign sad in ASL is reported to consist both of a manual component
and a temporally co-occurring frown. It is this phenomenon, that is, the frown
part of sad, that we attempt to explain in our study.
Sign Language & Linguistics 17:2 (2014), 123–180. doi 10.1075/sll.17.2.01ell
issn 1387-9316 / e-issn 1569-996x © John Benjamins Publishing Company


The function of facial movements such as the frown in sad, whether phonological, morphological, or something else, is as yet unclear. The indeterminate nature of these facial movements is reflected in the terminological choices
in the literature. The above cited authors use neither the term phonological nor morphological for these facial movements. Rather, they refer to them with
some form of the term 'lexically related'. For German Sign Language (Deutsche
Gebärdensprache: DGS), they are described under the header 'phonology' in
Happ & Vorköper (2006).
Emotion concept signs often appear to have a lexically related facial movement, although they are not the only semantic category of signs to have one; an
example of a non-emotion related sign in ASL with a facial component is not-yet
(Liddell 1980: 17). In this study, we only focus on facial movements that are lexically related to signs for emotion concepts. We focus on this phenomenon for the
following reason: There is a long tradition of research on facial movements associated with emotions going back at least as far as Darwin (1904 [1872]). However,
despite over a century of scientific investigation on the subject, why certain facial
movements are often associated with emotions is still a controversial issue (Russell
& Fernandez-Dols 1997; Ekman 2004; Wierzbicka 1999; Izard 2010). Some of the
main controversies surround the questions of whether facial movements are indices of emotions or rather of intentions, of identifying what is cultural and what
is innate, and of what the scientific definition of emotion should be. Comparing
emotion related facial movements with their lexically related facial movement
counterparts in sign languages may contribute to clarifying what exactly is unique
to each in their form, function, and meaning.
1.1 The goal of our investigation
The goal of our investigation is to understand the function of facial movements that (a) temporally co-occur with emotion related signs and (b) are semantically congruent with the manual component (e.g. sad + frown). Are they phonemes with some morphological properties, as we propose below?
To achieve this goal using a corpus of DGS disgust signs, we ask the following
three empirical questions:
1. Does an emotion related facial movement consistently temporally co-occur
with disgust signs produced as single signs?
2. Does an emotion related facial movement consistently temporally co-occur
with disgust signs produced in direct and reported sentence types?
3. Which facial muscles are used by each signer when producing an emotion
related facial movement temporally co-occurring with a disgust sign?



The structure of the paper is as follows: In Section 1, we present some background


information regarding emotion theories and the meanings of facial movements.
From the literature, we conclude that there is a level of communication dedicated
to signaling first person present tense emotions and/or intentions. In Section 2, we
provide background information on co-speech gestures and intonational units and
how they relate to words. This evidence leads us to the conclusion that there are at
least four layers of communication that can co-occur and provide complementary
information resulting in an overall rich but unified signal in face-to-face conversation. In Section 3, we summarize findings on lexically related facial movements in
sign languages, specifically of facial movements that are lexically related to signs
for emotion concepts, and compare these facial movements to the four levels of
communication previously presented. In Section 4, we conclude from the available evidence that the main difference between the types of lexically related facial
movements lies in their phonological and morphological functions, but that they
all belong to the same class of phenomenon which we propose is an additional
layer of communication analogous in some ways to co-speech gesture or intonation. Our research methods and results are described in Sections 5 and 6. Finally,
in Section 7, we discuss the significance of our results and propose that the facial
movements that were timed to the disgust signs in our data set have a phonological function, and that they also carry some meaning.
1.2 The meaning of facial movements: A semiotic approach
It is a readily observable fact that humans make facial movements that appear
to serve no other instrumental purpose than communication. These movements
include smiling, frowning, and nose wrinkling amongst others. It is known that
these facial movements appear within the first two months of life, before gestures
and words (Izard et al. 1995), and it would seem that some of them are innate since
they are also observed in the congenitally blind (Matsumoto & Willingham 2009),
who cannot have learnt them through seeing others produce them.
One group of theories referred to in Russell and Fernandez-Dols (1997: 11) as
the facial expression program assumes that some facial expressions mean that the
person making them is experiencing an emotion (e.g. Ekman 1972, 1992, 2004).
In the sign language linguistics literature, the facial expression program seems to
be the theory most often referred to when comparing facial movements that are
part of sign languages with those that are not part of sign languages (such as e.g.
emotional expressions) (e.g. Anderson & Reilly 1997; de Vos et al. 2009; McIntire
& Reilly 1988; Reilly 2005).
A contrasting theory is that of Fridlund (1997) called the behavioral ecology
view, which assumes that facial expressions mean that the person making them



intends something. For example, a face in which one's teeth are bared does not
mean 'I am angry'; it means 'I intend to aggress'. For a short review on the topic
see Elliott & Jacobs (2013).
Our approach to the meaning of facial movements, based on many of the results
of Wierzbicka's (1999) semantic analysis of facial movements, is as follows: whichever framework is adopted, we take it as uncontroversial to state that for humans,
facial movements serve as semiotic units (form-meaning pairings) that are used to
communicate information about oneself (whether one's intentions or emotions) in
the present. This communication system about ones present emotions/intentions
appears to have some innate facial movement components and develops before
other modes of communication such as gesture and speech/sign. It is commonly
observed to be used both in the presence and absence of gestures and speech/sign.
It appears implicit both to the facial expression program and the behavioral
ecology view that facial semiotic units are in the first person present tense orientation. For example, the bared teeth face could mean 'I am angry' or 'I intend to
aggress', but what is common to both expressions is the template 'I X now',
where X is an emotion or intention. Wierzbicka (1999: 175) makes this semantic
feature explicit in her analysis of the meaning of facial movements and it is easily
verifiable by observation. When we smile or frown at each other, or when we hear
an unexpected loud noise and rapidly raise our brows and widen our eyes, it never
signifies 'you are happy/sad/afraid' or 'I will be happy/sad/afraid'. Wierzbicka notes
that in this semantic feature, facial movements are akin to exclamations, interjections, and performative verbs. This is also true when they are used as enactments.
For example, if I am telling someone about a past event, and I say 'She went [nose
wrinkle] when she saw the food', even though I am referring to someone else's past
reaction to food through an enactment, the enactment itself is to be interpreted in
present tense first person orientation, in which I represent the character 'she' and
the movements I make in character are to be interpreted as if in the present. The
facial movements used in speaker attitude depiction and role-shift in ASL also appear to have a first person present tense orientation (Liddell 1980: 53–58).
1.3 Facial movements in sign languages
Research into sign languages has shown that facial movements are not only used
for communication of emotion/intention in the first person present tense orientation they can also become part of signing. Facial and head movements are
used in sign languages at all levels of structure. At the lexical level, some signs
have a facial component in their citation form (Liddell 1980; Woll 2001). There
are facial morphemes (Lewin & Schembri 2011; Crasborn et al. 2008; McIntire
& Reilly 1988; Liddell 1980) such as, for instance, the ASL adverbial th meaning



carelessly. Moreover, facial movements mark relative clauses, content questions,


and conditionals, amongst others, although there is some controversy whether
these markings should be regarded as syntactic or prosodic (cf. Aarons et al. 1992;
Baker-Shenk 1983; Dachkovsky & Sandler 2009; Liddell 1980; Neidle et al. 2000;
Nespor & Sandler 1999a; Sandler 1999b; Wilbur 2009; Wilbur & Patschke 1999).
Finally, signers also use the face to gesture (Sandler 2009).
What are the differences and similarities between facial movements used to
communicate first person present tense emotions/intentions and those that are
lexically related to signs for emotion concepts? In order to answer this question,
we think it is necessary to draw attention to the multi-modal and multi-dimensional nature of face-to-face communication. By multi-dimensional we mean that
two or more types of related information are simultaneously being transmitted,
and by multi-modal we mean that one or more physically independent sets of
articulators are being used to send a message that can be detected by one or more
sensory modalities.
2. Multiple layers of information in communication
There are multiple layers of information in communication. The layers include
lexical items, prosodic units, emotion/intention signals, and gestures. Emotion/
intention signals can be used without the accompaniment of speech/sign, however, it is common that in face-to-face communication all four information types
co-occur. The fact that the act of human communication is multi-dimensional and
multi-modal, containing information encoded in gestures (Sandler 2009; McNeill
1992; Kelly, Özyürek & Maris 2010; Müller & Posner 2002), information encoded in intonational units (Sandler 1999b; Nespor & Sandler 1999; Dachkovsky &
Sandler 2009; Ladd 1996, Crasborn & van der Kooij 2013), information encoded in
facial movements (Ekman 2004; Fridlund 1997; Eibl-Eibesfeldt 1975; Wierzbicka
1999), used together with the information encoded in lexical items and syntactic
constructions, is receiving increasing documentation.
We label the four information types (1) emotion/intention, (2) gesture, (3)
intonation, and (4) words. We propose that these labels correspond to (1) information about emotions and/or intentions in the first person present tense orientation
(Wierzbicka 1999); (2) information about direction, path, manner, size, and shape
(McNeill 1992); (3) information about sentence type, speech act, topic, and focus
(Dachkovsky & Sandler 2009); (4) information about language-specific semantic
categories (Wierzbicka 1996). Table 1 summarizes the four information types and
their properties based on current findings. In the following, we describe in more



Table 1. Information type layers and properties

Label | Information type | Structural properties | Relation to lexical items | Articulators
Emotion/intention | Emotions and intentions in the first person present tense orientation | No duality of patterning; non-combinatoric; possibly innate | Not timed to lexical items | Face, vocal folds, hands
Gesture | Imagistic information about direction, path, manner, size, and shape | No duality of patterning; non-combinatoric; idiosyncratic | Timed to the peak syllable of a lexical item in a clause | Hands, face
Intonation | Sentence type, speech act, topic and focus | No duality of patterning; combinatoric; conventionalized | Timed to onsets and offsets of lexical items in prosodic constituents | Vocal folds, face
Words | Culture-specific semantic categories | Duality of patterning; combinatoric; conventionalized | | Hands, vocal tract

detail the four information types, what is known about their properties, and how
tightly they seem to be connected to lexical items.
Emotion/intention signals appear early in acquisition, before speech and
gesture (Izard et al. 1995). Besides the face, they can be encoded in the voice
(Pittam & Scherer 1993) and the hands (Reilly, McIntire & Seago 1992). There is
cross-cultural evidence (Ekman 1972) and evidence from the congenitally blind
(Matsumoto & Willingham 2009) suggesting that some of the facial movements
for this information type are innate and universal. These types of units do not seem
to temporally align with lexical items as seen in our data below (see Figure 13) and
in Baker-Shenk (1983), for example.
We use the term gesture in McNeill's sense as referring to hand movements
that accompany speech and are idiosyncratic and spontaneous (McNeill 1992: 37).
According to McNeill (1992), co-speech gestures encode visual-imagistic information such as direction, path, manner, size, and shape. They do not display duality of patterning. (The term duality of patterning (Hockett 1960) is used to refer
to the combined properties of the meaninglessness of form units, and their independent patterning. Some forms, such as e.g. the smile, cannot be broken down
into smaller units and are always associated with the same meaning.) They are
non-combinatoric. They are idiosyncratic in the sense that there are no standards
of well formedness for a particular gesture. Gestures are tightly synchronized with
speech. There is usually one gesture per clause. The preparation phase (hands



moving into position) of a gesture anticipates speech, and then synchronizes with
it during the stroke (the execution of the gesture). The stroke is timed to end at or
before, but not after, the peak syllable (McNeill 1992: 42, 85). Speech-gesture synchrony is not disrupted by stuttering (Mayberry et al. 1998) or delayed auditory
feedback (McNeill 1992: 273–294).
Intonational units following Ladd's (1996) definition encode post-lexical information, namely information about sentence type, speech act, and information
structure such as topic and focus. They can be encoded by facial movements or
vocal fold movements (Dachkovsky & Sandler 2009; Krahmer & Swerts 2009;
Crasborn & van der Kooij 2013). The properties of intonational units, according
to a componential model (e.g. Bartels 1999; Dachkovsky & Sandler 2009), are as
follows: there exists a finite set of intonational primitives, and these primitives
encode a meaning. The primitives of intonation, like gestures, do not display duality of patterning, but unlike gestures they are combinatoric. Intonational units
appear to be timed to word onsets and offsets and their scope to be determined
by reference to the relevant level in the prosodic hierarchy. Furthermore, they are
conventionalized (Dachkovsky & Sandler 2009).
The words of a language capture the semantic categories created by a particular culture. These categories can vary greatly between cultures, even for semantic
domains that are thought to be based on universal human experiences such as
color vision or feelings (Wierzbicka 1996, 1999). Words generally display duality
of patterning, although this may not be a strictly universal property of words as
previously assumed (de Boer, Sandler & Kirby 2012; Aryani et al. 2013). Words
are combinatoric; that is, they combine with each other in hierarchical patterns to
form more complex semiotic structures.
In what layer of information do lexically related facial movements, particularly
facial movements that are lexically related to signs for emotion concepts, belong?
Do they encode information about emotions/intentions in the first person present
tense orientation? What are their combinatoric properties? Are they idiosyncratic
like gestures, or more conventionalized like words and intonational units? In the
following section, we discuss the findings that are relevant to these questions.
3. Lexically related facial movements: phonemes or morphemes?
In this section, we present extant findings on the properties of facial movements
that are lexically related to signs for emotion concepts and argue that so far the evidence suggests that they are phonological elements (mental representations of
form units whose primary function is to provide perceptually salient cues for identification of the sign) with some morphological (meaning-bearing) properties.



More precisely, the movements themselves are of course the phonetic instantiations of phonological elements (mental representations). In our arguments below,
we attempt to make clear how a facial movement can have both a phonological and
morphological function and provide examples of other elements both in signed
and spoken languages that also display such dual functional properties.
3.1 Properties of facial movements lexically related to signs for emotion
concepts
What is known about the properties of facial movements that are lexically related
to signs for emotion concepts? It is reported that they are related to manual lexical
signs in the following way: (a) they consistently co-occur with particular manual
signs (Liddell 1980; Reilly, McIntire & Bellugi 1990; McIntire & Reilly 1988); (b)
some signs are considered ill-formed without the facial movement (Sandler &
Lillo-Martin 2006: 61); (c) some facial movements can act as minimally distinctive features (Sandler & Lillo-Martin 2006: 61). They differ from facial adverbials
by the fact that they are not systematically used to modify the meaning of entire
classes of signs. They differ from depictions of speaker attitude and from role shift
in their temporal scope and in that they are produced in the presence of a manual
component with a congruent meaning (Liddell 1980).
Property (a), consistency of co-occurrence, might suggest that these facial
movements are phonological elements. However, if the consistency of co-occurrence is less than 100%, such movements might rather be optional reinforcers
(Liddell 1980: 16). Property (b), ill-formedness, can be regarded as a stronger form
of consistency of co-occurrence in which the consistency is 100% or very close
to 100% when allowing for production errors. If property (b) can be verified for
particular manual-facial pairings, we think this is a reason to regard that facial
movement as a phonological element. Similarly, the finding of minimal pairs (c),
the classic diagnostic for phonological features, is also a reason to regard a facial
movement as a phonological element.
Is there a reason to consider a facial movement that has been established as
necessary for the well-formedness of a sign as not being a phonological element,
i.e. part of the stored mental representation of the form of the sign? We think not,
but if one defines phonological elements as discrete meaningless units, as is generally held (e.g. Sandler & Lillo-Martin 2006), then facial movements that have a
meaning, such as smiles and frowns, cannot be considered phonemes.
We think there are good reasons not to make the theoretical assumption
that phonemes are necessarily meaningless. Rather, we regard phonemes as units
whose primary function is to create perceptual distinctions but which may sometimes also be associated with a meaning. As we already mentioned above, studies



on both signed and spoken languages (de Boer, Sandler & Kirby 2012; Aryani et
al. 2013) suggest that the smallest units of a language are not always meaningless.
The manual parameters of handshape, location, and movement, regarded as the
discrete meaningless units of sign language phonology, behave like independently
meaningful morphemes in classifier constructions (Sandler 2009: 261), and the
distribution of certain handshapes within the lexicon has been shown to be better
explained by their meaning (i.e. their morphological potential) than their form
(Fuks & Tobin 2008). Evidence cited in Blevins (2012) indicates that for spoken
languages, too, there are cases in the lexicon in which duality of patterning is not
found; that is, there exist morphemes that cannot be further broken down into
meaningless units. One example from Blevins (2012) is the phonological feature
of palatalization, which consistently means 'uncontrolledness/child-likeness' in
Japanese mimetic constructions.
Corpus data will help in establishing consistency of co-occurrence, establishing environments in which particular facial movements appear, establishing
degree of conventionalization of facial movements, and establishing whether all
emotion-concept related signs are produced with facial movements.
3.2 Crasborn et al.'s (2008) typology of mouth movements
In this section, we describe Crasborn et al.'s (2008) corpus based findings on mouth
actions. Crasborn et al. (2008) created a typology of mouth actions based on data
signed by six people from three different European sign languages (British, Dutch,
and Swedish). They found that between 50–80% of the manual signs across the three
languages had a facial movement component. They divided these facial movements
into five categories: mouthings, adverbials, whole face actions, enacting mouth actions, and semantically empty mouth actions. Mouthings are the lip movements
used in sign languages that have been adopted from the articulation of words from
ambient spoken languages. Adverbials are facial movements that are used to modify temporally co-occurring manually articulated verbs. Whole face actions are facial movements that use both upper and lower face muscles and would include
some movements that are lexically related to signs for emotion concepts. Enacting
mouth actions include, for example, depictions of kissing or chewing. Semantically
empty types are mouth movements that are neither enactments nor mouthings and
that do not seem to have any semantic content. The frequency ranking of each type
taken from Crasborn et al. (2008: 51–52) is given in Table 2 below:


Table 2. Frequency ranking of facial actions in Crasborn et al. (2008: 51–52)

Mouthings | Adverbials | Whole face | Enacting + Empty combined
39–57% | 14–30% | 16–20% | 8–14%

The frequency scores above demonstrate that mouth and whole face movements
temporally co-occurring with lexical signs are ubiquitous, suggesting they have an
important function. But what is this function?
3.3 Functions of facial movements that temporally co-occur with manual
lexical items
Before discussing functions, a word on the notation conventions we use for mouthings is necessary. Mouthings are written using the visemic transcription proposed
in Elliott et al. (2012) and represented in Table 3 below. It is similar to Keller's
(2001) kinematic description of mouthings. This notation is fairly transparent as
upper case Roman letters, familiar to any English reader, are used to represent
classes of phonemes that share visual appearance. For example the viseme /p/, described as lips compressed, maps onto the phonemes /p, b, m/. Thus, readers can
identify the viseme intended in the transcription, with minimal reference to the
table, by attempting to articulate the letter. We transcribe the mouthing component to the right of the sign gloss with a + symbol, e.g. for the DGS sign bruder
(brother) the gloss is bruder + p-ut-.
Table 3. Visemic transcription of mouthings

Phoneme (CELEX notation) | Viseme | Description
bmp | p | Lips compressed
dnstz | | Lips slightly apart with tongue in contact with teeth
@Nghkrx | | Relaxed medium opening of mouth
| | Open mouth, tongue contacts alveolar ridge and drops
fv | | Pouting of the lips while teeth stay together
Iij | | Spreading of lips slightly open
Ee | | Medium opening of spread lips
| | Wide opening of mouth
&OQo | | Rounding of lips
UYuy | | Pouting of lips
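The many-to-one phoneme-to-viseme mapping behind this transcription scheme can be sketched as a simple lookup table. This is our own illustrative sketch, not code from Elliott et al. (2012); the class names (lips_compressed, etc.) are mnemonic labels invented here, since only the viseme /p/ ('lips compressed', covering /p, b, m/) is named explicitly in the text.

```python
# Illustrative sketch of the many-to-one phoneme-to-viseme mapping in Table 3.
# Each CELEX phoneme symbol maps to the visual appearance class it shares
# with other phonemes. Class labels are our own mnemonics, not the paper's
# viseme symbols (only /p/ "lips compressed" is named in the text).

VISEME_CLASSES = {
    "lips_compressed": set("bmp"),          # the viseme /p/ -> phonemes /p, b, m/
    "tongue_on_teeth": set("dnstz"),
    "relaxed_medium_opening": set("@Nghkrx"),
    "lip_pout_teeth_together": set("fv"),
    "slight_spread": set("Iij"),
    "medium_spread": set("Ee"),
    "rounded": set("&OQo"),
    "pouted": set("UYuy"),
}

# Invert the table: one entry per phoneme symbol.
PHONEME_TO_VISEME = {
    ph: viseme for viseme, phs in VISEME_CLASSES.items() for ph in phs
}

def to_visemes(celex_phonemes: str) -> list:
    """Map a CELEX phoneme string to its sequence of viseme classes."""
    return [PHONEME_TO_VISEME[ph] for ph in celex_phonemes if ph in PHONEME_TO_VISEME]

# /b/, /m/, /p/ all collapse to the same visible mouth shape:
assert to_visemes("b") == to_visemes("m") == to_visemes("p")
```

The collapse of distinct phonemes onto one viseme is exactly why the transcription is many-to-one: a reader of the lips cannot recover which of /p, b, m/ was intended.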



3.3.1 Facial movements with a phonological function


The only type of facial action that did not appear to have independent semantic
content was the semantically empty type, comparable to Woll's (2001) echo-phonology. An example of this type from British Sign Language (Woll 2001) is succeed + pa. In this sign, the hands start in contact and move apart, and simultaneously the lips start in contact and move apart. Since empty-types are meaningless,
they can easily be treated as phonological features given the common theoretical
assumption that the phonological level consists of a finite set of discrete, meaningless units. However, this type comprised the smallest category. It is far more common for facial movements in sign language to have semantic content independent
of what the hands convey.
3.3.2 Facial movements with a morphological function
Facial adverbials have the function of adding new semantic information to verbs.
3.3.3 Multi-function facial movements
Mouthings, enacting movements, and whole face movements appear to carry out
both phonological and morphological functions.
Mouthings inherit the meaning of the spoken language word they are derived
from. Since they are minimal meaning bearing units, they are morphemes; however, they usually seem to behave like phonological features in that they add phonological content, but do not seem to add new semantic content to the lexical
signs they combine with. For example, the DGS sign wurm (worm) is produced
with the semantically congruent mouthing /fu-p/ in our corpus. Mouthings can
also be the minimally distinctive feature of a minimal sign pair. For instance, in
DGS bruder (brother) and schwester (sister) are both symmetrical bimanual
signs with a g-handshape and contact between the two hands. The only difference
between them is in the mouthing /p-ut-/ and /sfett-/. However, mouthings can
also form compound signs with a manual component, for example, eat + p-et
meaning eating bread (Crasborn et al. 2008), in which case they add new semantic content.
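The minimal-pair role of mouthings described above can be made concrete with a small sketch of lexicon entries in which the mouthing is part of the stored form. The record layout and the manual parameter labels ("g", "neutral_space", "symmetrical_contact") are our own hypothetical names; only the shared g-handshape with contact and the mouthings /p-ut-/ and /sfett-/ come from the text.

```python
# Sketch (our illustration, not any existing lexicon format) of a minimal
# sign pair: DGS BRUDER and SCHWESTER share all manual parameters and are
# kept apart only by the mouthing field of their stored form.

from dataclasses import dataclass

@dataclass(frozen=True)
class SignForm:
    handshape: str
    location: str
    movement: str
    mouthing: str  # visemic transcription, part of the phonological form

BRUDER = SignForm("g", "neutral_space", "symmetrical_contact", "p-ut-")
SCHWESTER = SignForm("g", "neutral_space", "symmetrical_contact", "sfett-")

def manual_parameters(sign: SignForm) -> tuple:
    """The hand-only part of the form, excluding the face."""
    return (sign.handshape, sign.location, sign.movement)

# Manually identical; only the mouthing distinguishes the two signs.
assert manual_parameters(BRUDER) == manual_parameters(SCHWESTER)
assert BRUDER != SCHWESTER
```

Treating the mouthing as just another field of the stored form is one way of cashing out the claim that such facial movements are phonological: they participate in the same minimal-pair contrasts as handshape, location, and movement.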
Enacting movements and whole face movements appear to have the mixed
morphological/phonological profile of mouthings. An example from British Sign
Language of an enacting movement apparently used primarily as a phonological
feature (adding perceptual information but no new semantic content) is a chewing
mouth movement occurring with the manual part of the sign chew. An example
of an enacting movement used morphologically is a shouting-mouth movement
made with the manual sign run meaning to run while shouting. Whole face
movements are like enacting movements except that they are not just limited to
movements of the mouth. Besides reporting that they were the third most frequent



mouth movement in their data, the authors of the cited study did not consider
them further as their research questions focused on the mouth only.
For the purpose of our study, the most important finding from Crasborn et al.
(2008) is that not only mouthings have a mixed morphological/phonological profile. Enacting movements behave this way, too; that is, in some cases, they appear
to be part of the phonological form of the word, e.g. chew + chew_mouth above.
In other cases, they function as morphemes by adding new semantic content, e.g.
run + shouting_mouth above.
3.4 Lexically related facial movements as a class
Based on the similarities in behavior of mouthings and enacting actions (both
mouth-only and whole-face movements), we propose that these two facial actions
are one class of phenomenon: elements that have the function of adding phonological and semantic information to lexical items. Furthermore, we propose that
empties and adverbials also belong to this class of elements, but they are extreme
points on a phonological-morphological continuum within this class. The elements in this class vary in (a) the degree in which they function as phonological
elements (adding perceptual information) or morphological elements (adding semantic information) and (b) their origin (ambient spoken language movements,
instrumental and emotion/intention facial movements, echoes of manual movements) as presented in Table 4 below.
Table 4. Properties of facial movements related to single lexical signs

Face Movement   Function                     Origin
Empties         Phonological                 Echoes of hand movements
Mouthings       Phonological/Morphological   Spoken languages
Enactings       Phonological/Morphological   Instrumental and emotion related facial movements
Adverbials      Morphological                Instrumental and emotion related facial movements

4. Placing lexically related facial movements in an information layer


Table 5 below summarizes the properties of lexically related facial movements. By
asking the questions: (a) what information does the facial movement convey, (b)
does it display duality of patterning, (c) is it combinatoric, (d) is it conventionalized, and (e) what is its temporal alignment with lexical signs, we place each


lexically related facial movement in its appropriate information layer. Since


empties only convey phonological information, they belong to the form part of
words in the words layer. Since adverbials convey information about culture-specific semantic categories of manner and are combinatoric and conventionalized,
they belong in the words layer.
Enactings and mouthings, however, act both like phonemes and like words
with limited combinatoric possibilities. We propose that they comprise their own
layer of information which can be seen as a second lexicon temporally parallel to
the words layer.
Table 5. Properties of facial movements related to single lexical items

Enactings
  Information transmitted: culture-specific semantic categories of emotion and action, plus phonological information
  Structural properties: no duality of patterning; non-combinatoric; conventionalization unknown
  Relation to lexical sign: timed to lexical sign
  Articulators: facial muscles

Mouthings
  Information transmitted: culture-specific semantic categories of any kind, plus phonological information
  Structural properties: duality of patterning; non-combinatoric; conventionalized
  Relation to lexical sign: timed to lexical sign
  Articulators: mouth

Empties
  Information transmitted: phonological information
  Structural properties: duality of patterning; non-combinatoric; conventionalized
  Relation to lexical sign: timed to lexical sign
  Articulators: mouth

Adverbials
  Information transmitted: culture-specific semantic categories of manner
  Structural properties: no duality of patterning; combinatoric; conventionalized
  Relation to lexical sign: timed to lexical sign
  Articulators: facial muscles

We wish to make clear why enactings convey culture-specific semantic categories rather than emotions/intentions. By culture-specific semantic categories we
mean that lexicons reflect the conceptual categories created by a particular speech
community. Even if some emotions or instrumental actions (such as chewing and
biting) are universal, not all cultures name these acts or experiences. For example,
the Dani people studied by Ekman do not have words for what Ekman proposes
are the six basic universal emotions: happiness, sadness, anger, disgust, fear, and
surprise (Ekman 1975: 39, cited in Wierzbicka 1999: 25). Therefore, using a frown
to refer to a concept of sadness, rather than to the current state of the one making
the frown, is a culture-specific act of semantic categorization.


Furthermore, there is some evidence that emotion related enactings have undergone some semantic bleaching and lost their first person present tense orientation, unlike role-shift elements and attitude depicting enactments. Emotion related enactings occur in the citation forms of the lexical items in question (i.e. even
when not in a context in which there could be a first-person subject to predicate
sadness over in the present). Therefore, the frown component of sad would appear
to simply mean sad as opposed to I am sad. Additionally, three DGS consultants confirmed that emotion signs such as traurig (sad) and ekel (disgust) are
made with semantically congruent facial movements even in negated sentences
such as I am not sad. Enactments of instrumental movements similarly remain
present even when the sign is negated as per Liddell (1980: 17) for the ASL sign
bite. Additionally, enactings have a different temporal relationship to lexical items
than emotion/intention signals.
5. Methods
5.1 Defining an emotion related sign
We needed to define emotion related sign for the purposes of our study. As there
is currently no consensus on the scientific definition of emotion, we decided to
take into account two dominant types of emotion theory: basic emotions theories such as Ekman (1972, 1992, 2004) and dimensional theories such as Russell
(1980). Using the basic emotions approach, we defined emotion related signs as
signs corresponding to Ekman's basic emotions: happiness, sadness, anger, disgust, fear, and surprise. Using the dimensional approach, we defined an emotion related sign as one with a particular rating on the dimensions valence and arousal, as done e.g. in Hofmann et al. (2009) and Võ et al. (2009). In our complete corpus, we used both definitions, but in this paper, we only present the disgust items
from the basic emotions data. We then created our emotion signs corpus following
many suggestions in Johnston & Schembri (2006).
5.2 Collecting emotion related signs
In order to see whether speakers of DGS make semantically corresponding facial movements with emotion related signs, we use both a translation task (from
German to DGS) and a free speech task. A translation task is practical but artificial. A free speech task is natural, but less practical because we cannot guarantee
that DGS speakers will produce a large number of emotion related signs in their
free speech. In this paper, we only present the translation task data associated with


one of Ekman's proposed basic emotions, namely disgust, translated as Ekel in German.
With a corpus of DGS emotion related signs, we can see whether all or only
some are made with semantically corresponding facial movements (e.g. sad +
frown, happy + smile, disgust + nose wrinkle). We can also measure consistency of co-occurrence as the ratio of occurrences of a particular emotion related sign (e.g. disgust) co-occurring with a semantically congruent facial movement to the total occurrences of that sign: (disgust + nose wrinkle) / disgust.
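This ratio can be computed mechanically from coded tokens. The sketch below uses invented token records (our actual annotations are ELAN/FACS files, not Python dictionaries) purely to illustrate the measure.

```python
# Consistency of co-occurrence for one sign type: the share of its tokens
# that carry a semantically congruent facial action unit. Token records
# below are invented for illustration.

def consistency(tokens, sign, congruent_au):
    """Proportion of `sign` tokens coded with `congruent_au` (e.g. AU9 nose wrinkle)."""
    sign_tokens = [t for t in tokens if t["sign"] == sign]
    if not sign_tokens:
        return 0.0
    with_face = [t for t in sign_tokens if congruent_au in t["aus"]]
    return len(with_face) / len(sign_tokens)

tokens = [
    {"sign": "disgust", "aus": {"AU9", "AU25"}},
    {"sign": "disgust", "aus": {"AU25"}},
    {"sign": "disgust", "aus": {"AU9", "AU19"}},
    {"sign": "hate", "aus": {"AU19"}},
]

print(consistency(tokens, "disgust", "AU9"))  # 2 of 3 disgust tokens -> 0.666...
```

A value of 1.0 would correspond to the 100% consistency used below as a diagnostic for phonological status.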
We presented our participants with single German emotion words (as defined
above) and asked them to translate them into DGS (condition 3). We also presented them with the same emotion words (sometimes changed to verbal or adjectival form to make the sentence construction possible) in direct speech (condition
1) and reported speech (condition 2). In order to try to minimize interference
from the source language, we follow Dachkovsky & Sandler's (2009) practice of
instructing participants to look at the source language sentence, to pause and consider what would be the natural way to express that idea in the target language, and
only then to respond. They were told to try not to be influenced by the structure
of German or Manually Coded German. The eliciting data for disgust signs are
provided in (1) (see Appendix 4 for all the basic emotions eliciting materials).
(1) a. Ich bin angeekelt von Würmern.
       'I am disgusted by worms.'
    b. Mein Freund hat gesagt, dass er angeekelt von Würmern ist.
       'My friend has said that he is disgusted by worms.'
    c. Ekel
       'disgust'

5.3 Participants
There were 20 participants (nine male; average age 26, Min = 23, Max = 39). Fifteen reported that DGS is their first language. Eight acquired DGS from birth, seven were early acquirers (between ages 1 and 5), and four acquired DGS after the age of five (Max = 21). Data on age of acquisition is missing for one participant. Nine participants had at least one deaf parent. They were recruited by two Deaf native DGS
signers through personal contacts and advertisements. They received monetary
compensation for their time.


5.4 Elicitation procedure


A native DGS signer carried out the interviews for the corpus. Participants
were interviewed at their homes or at the Humboldt University's Deaf Studies Department's studio. Participants were seated in front of a camera and received
an explanation of the procedure. They were then presented the eliciting materials
starting with the basic emotions sections, then the free speech section, and finally
the dimensional section.
5.5 Analysis procedure
We hoped to elicit lexical items that denote the concept disgust and wished to see
whether they would be consistently accompanied by facial movements unique to
the temporal scope of the manual lexical item even if embedded in a sentence. In
order to achieve this, we firstly specified a time window of analysis and then coded
the facial movements that occurred within this time window.
5.6 Selecting the time window of analysis
If facial movements that are lexically related to signs for emotion concepts are part
of the phonology of the lexical item, we expect them to have the two following
properties: (i) temporal scope only over the morpho-syntactic or possibly the phonological word and (ii) a 100% consistency of co-occurrence with a particular lexical sign. Bear in mind that we define the term phonological as mental representation of form units. We use the two properties above only as diagnostic heuristics
for the phonological status of the facial movements, not as sufficient conditions for
being a stored mental representation of the form of a lexical item.
5.7 Filtering out facial movements with temporal scope over a clause
We expect that if a facial movement on an emotion sign is not phonological, it
will be absent (from the temporal scope of the single sign) in sentences that invite
emotion depiction. It is reported that self-attributed emotion and role-shift facial
movements occur over entire clauses in ASL (Liddell 1980), and similar reports
exist for DGS (Happ & Vorköper 2006). If a clause with self-attributed emotion
or role-shift includes an emotion related sign (e.g. My friend said, he is disgusted
by worms), we wanted to know if there would be a semantically congruent facial
movement occurring with the emotion related sign despite potential facial movements over larger constituents.


We analyzed the time window of the morpho-syntactic sign: between onset


and offset of the disgust related signs. Onset was defined as the first frame in which
the hands are detected to move into the preparation phase and offset as the last
frame in which the hands complete the retraction phase of the sign. Facial movements that were already present before onset were not coded unless they gained
in intensity, reached an apex, and decreased in intensity within the specified time
window. Facial movements that had their onset within the time window analyzed
but reached their apex after offset were not coded.
By only coding facial movements that had their onsets, apexes, and offsets
within the time window of the morpho-syntactic sign, we avoid coding facial
movements that are timed to the entire clause or any other constituent larger than
the morpho-syntactic sign. Therefore, if the reports that depictive and role shift elements have temporal scope over entire clauses are accurate, we can be quite confident that we did not include them in our analysis. So, for example, if the entire
sentence is signed with a disgust expression to depict the attitude of the speaker, or
as part of role-shift, but the time window of a disgust related sign has no additional
specific movements, we coded zero movements.
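The coding rule just described amounts to a simple filter on event times. The sketch below, with invented onset/apex/offset times in milliseconds, illustrates it; the dictionary format is an assumption for illustration, not our annotation format.

```python
# Filter sketch: an action unit counts as timed to the lexical sign only if
# its onset, apex, and offset all fall inside the sign's onset-offset
# window. All times (in ms) are invented for illustration.

def timed_to_sign(au, sign_on, sign_off):
    """True if the AU's onset, apex, and offset all lie inside the sign window."""
    return sign_on <= au["onset"] <= au["apex"] <= au["offset"] <= sign_off

aus = [
    {"name": "AU9",  "onset": 120, "apex": 300, "offset": 500},  # inside window
    {"name": "AU4",  "onset": 0,   "apex": 300, "offset": 900},  # spans the clause
    {"name": "AU25", "onset": 400, "apex": 700, "offset": 900},  # apex after sign offset
]
kept = [a["name"] for a in aus if timed_to_sign(a, 100, 600)]
print(kept)  # ['AU9']
```

Clause-level movements such as the AU4 brow lowerer in this toy example are thereby excluded, as intended.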
Given the evidence in Crasborn et al. (2008) that in a minority of cases mouthings and empties do spread beyond the morpho-syntactic word, and given that
we hypothesize that mouthings, empties, and enacting actions, which include facial movements that are lexically related to signs for emotion concepts, belong
to the same class of phenomenon, we may miss a few facial movements that are
lexically related to signs for emotion concepts by choosing the morpho-syntactic
sign as our time window of analysis. However, we think it is more desirable to risk
excluding a few facial movements that are lexically related to signs for emotion
concepts than to risk including facial movements that are not lexically related to
signs for emotion concepts.
5.8 Facial muscles
For each eliciting condition, lexical signs that were transcribed as denoting a disgust concept were analyzed with the Facial Action Coding System (FACS; Ekman et al. 2002b). FACS is a system with which to code all visibly detectable movements of the face based on the facial muscles that cause them. Each movement caused by a facial muscle, or in some cases a group of muscles, is called an action unit (AU) and assigned a number. For example, the raising of the lip corners, done primarily by the zygomaticus major muscle, is assigned AU12. By
using FACS, we can be precise in calculating the consistency of co-occurrence of a
particular facial movement with a particular lexical item.


FACS, in addition to allowing us to code our data in a manner which can


be replicated by other researchers, also allows us to compare our findings with
Ekman et al.'s (2002a) prototypical emotion faces. Ekman et al. (2002a) propose
that there is a family of facial movements that are associated with each of their
proposed basic emotions (happiness, sadness, anger, disgust, fear, and surprise).
Comparing facial movements that are lexically related to signs for emotion concepts to the prototypical emotion faces in Ekman et al. (2002a) may give us some
indication regarding the similarities and differences between lexically related facial movements and emotion/intention facial movements as far as form is concerned. However, we stress that Ekman's theory of emotional facial expressions is
by no means uncontested.
We used ELAN software for annotating the videos. ELAN is freely available at
http://tla.mpi.nl/tools/tla-tools/elan/.
6. Results
6.1 Tokens in corpus
Over all three conditions across 20 participants, we elicited a total of 63 disgust
related sign tokens. The reason that there are more tokens than eliciting conditions
multiplied by participants (3 × 20 = 60) is that in five trials, the elicited sentence
construction consisted of repetition of the disgust sign, and there were two trials
in which participants did not respond. The 63 tokens were categorized into 12
unique types according to their manual phonological features of handshape, location, and movement. Transcriptions of the sentences are available in Appendix 2.
In order to determine the denotations of the 12 types, we consulted a deaf native DGS signer with a background in Deaf Studies. We asked our consultant in
which region of Germany the sign is used, how frequently he thinks it is used on a
scale of 1 to 5 (with 5 being very frequent), and how he would describe its meaning.
This semantic analysis is of course only preliminary and requires support from future lexicographic and corpus studies in DGS. Table 6 below gives the 12 different
sign types, a description of their manual phonological component, a description
of their meaning, their subjective frequency (as reported by consultant), and their
empirical frequency (tokens/total tokens). In choosing the English labels for the
different DGS signs, we considered the transcription choices of the transcribers
(see Appendix 3), the meaning of the signs as described by our consultant, and
their phonology.
We examined the frequency distribution of each sign type per condition to
see whether any of the 12 sign types are uniquely associated with one of the three

eliciting conditions (direct speech, reported speech, single word).

Table 6. Disgust related sign types in DGS

hate (11 tokens)
  Form: 5-handshape; thumb contacts chin; moves outwards
  Region: across Germany; subjective frequency: 4
  Meaning: Used to describe dislike of a task or thing; for example, dislike of a job.

disgust (11 tokens)
  Form: 5-handshape, bi-manual; hands held at face level, palm outwards; local movement
  Region: across Germany; subjective frequency: 3
  Meaning: Used to mean that something is unpleasant to see, e.g. spiders or worms.

throat (10 tokens)
  Form: 5-handshape; hand contacts throat; moves towards throat
  Region: Berlin; subjective frequency: 4
  Meaning: Used mainly to describe dislike of a food, or to say that a person does not look good.

yuck
  Form: I-handshape; neutral signing space; sideways movement away from body
  Region: across Germany; subjective frequency: 4
  Meaning: Has its origin in the German interjection iiii ('ew/yuck'). Used to describe dislike of something, but not used in reference to humans.

goose-bumps
  Form: 5-handshape; contacts non-dominant hand; moves towards shoulder
  Region: across Germany; subjective frequency: 4–5
  Meaning: Usually transcribed as gänsehaut (lit. 'goose flesh'). Used to describe being cold, being frightened, and being disgusted. Often associated with eating or touching something unpleasant.

polite
  Form: B-handshape, palm inwards; contacts chin; repeated outward movement
  Region: across Germany; subjective frequency: 3
  Meaning: A slightly more polite way to say that one does not like something, e.g. food.

nausea
  Form: 5-handshape, bi-manual; at stomach level; upward movement
  Region: across Germany; subjective frequency: 2
  Meaning: Used to say that one dislikes a person or a food. Usually transcribed as schlecht ('nausea'); however, it does not mean physical nausea; there is another sign in DGS for physical nausea.

vomit
  Form: 5-handshape, bi-manual, palm upwards; mouth area; outward movement
  Region: Hamburg; subjective frequency: –
  Meaning: Often transcribed as erbrechen ('vomit'), but it does not mean to physically vomit. There is another DGS sign for physical vomiting.

touch
  Form: F-handshape; neutral signing space; local movement
  Region: Saxony; subjective frequency: 2
  Meaning: Describes the sensation of disgust one would have touching something unpleasant.

neck
  Form: 5-handshape with middle finger bent; contacts neck; inward movement
  Region: across Germany; subjective frequency: 3
  Meaning: Used to describe not liking something, but in a more factual and emotionally detached manner.

slang
  Form: 5-handshape with middle finger bent; neutral signing space; downward movement
  Region: ? (used by youth); subjective frequency: 3
  Meaning: Usually transcribed as hass ('hate'). Used for example to say one dislikes a particular food.

pff
  Form: A-handshape with extended thumb; thumb contacts chin; outward movement
  Region: Cologne; subjective frequency: 2
  Meaning: Used to say of a person or a thing that they look bad. Made with a pff mouth action.

Total: 63 tokens

Figure 1 below shows that most sign types (7/12) appeared at least once in each condition.
The five signs that only appear in one or two of the conditions were yuck,
polite, neck, slang, and pff. Since yuck is a high frequency sign in our corpus
(fourth most frequent sign; 11% of the tokens), its complete absence from the
single sign condition appears meaningful. yuck seems to be an interjection, like the German iiii it is associated with. This may be the reason why participants did



[Figure 1 omitted: bar chart of sign type counts per eliciting condition (direct speech, reported speech, single word)]
Figure 1. Distribution of sign types per condition

not produce it when translating the single German word Ekel. However, in a sentence the interjection appears to be able to serve as a verb or predicative adjective.
The sign polite (frequency = 6%) occurred both in the single word condition
and in a clause. slang (frequency = 3%) and pff (frequency = 1%) only occurred
in clauses. neck (frequency = 3%) only appeared in the single word condition, and
never in a clause.
We also examined the distribution of the 20 participants per sign type in order
to see if a particular sign was uniquely associated with a particular signer. Figure 2
below shows that 75% of the sign types were produced by more than one signer.

[Figure 2 omitted: bar chart of the number of participants using each sign type]
Figure 2. Distribution of signers per sign type

[Figure 3 omitted: bar chart of the number of participants by how many sign types they used]
Figure 3. Distribution of signers per number of sign types used

The three signs that were uniquely associated with one participant (vomit, touch, and pff) were those associated with a particular region in Germany. As
shown in Table 6 above, vomit is a Hamburg variant, and the participant using
vomit is originally from Hamburg. Similarly, touch was signed by a participant
originally from Saxony, and pff by a participant originally from Cologne. The
other sign that was reported to be region-specific (throat) was signed by five different participants; however, this is not surprising since the region it is reported to
be specific to is Berlin, which is where the interviews took place.
To check to what extent individual signers were consistent in their sign type
choice across conditions, we looked at the frequency distribution of number of
sign types used per participant. Figure 3 above shows that 70% of the participants
used two or more sign types across the conditions.
The frequency distribution of our sign type data in the three eliciting conditions and across participants suggests that DGS has at least seven disgust related
concepts used across Germany and five that appear to be region-specific or slang.
The sign yuck only appears in a sentence context (i.e. in our first two conditions).
Based on our preliminary semantic analysis, the various signs appear to distinguish between whether a sense of disgust is due more to seeing, touching or tasting, and also whether the referent is human or non-human. Note that empirically,
all the sign types (except perhaps neck) are compatible with the meaning of a
disgust sensation towards a non-human referent as they all (again except neck)
appeared at least once in a clause in which the object of disgust is a worm.
The iconic structure of the signs also suggests differential emphases on seeing,
touching and tasting. Iconicity is a form-meaning resemblance. For example, the
ASL sign tree visually resembles an image of a prototypical tree. Elements of the
phonological form are mapped onto elements of the schematic image of the tree in


an analogue building process (Taub 2001): branches are mapped onto the hand;
trunk is mapped onto the arm; ground is mapped onto the non-dominant arm.
Less concrete concept signs, such as religion, can also resemble their meaning
through a metonymic or metaphorical relationship between the iconically represented image and the meaning. The DGS sign religion is made with an alternating movement of tip of the middle finger of one hand contacting the middle of
the palm of the other hand. This is an iconic metonym of the concept religion as it
associates Christ's crucifixion wounds with the general concept of religion.
The signs hate, throat, polite, nausea, and vomit all select a particular
image of vomiting as an iconic metaphor for the concept disgust. Features of the
schematic image of vomiting are mapped onto phonological features such that
location of bodily sensation is mapped onto the location features stomach, throat
or mouth. The feature of vomited matter is mapped onto the handshape, and the
trajectory of the vomit is mapped onto the movement feature. The sign touch
is made with the f-handshape which resembles an index finger-thumb grip one
might use when picking up something that one would rather not touch. For the
sign disgust, the 5-handshape placed at approximately face level creates a sight
barrier which is an iconic depiction of 'don't want to see'.
Figure 4 below shows the sign types from left to right: hate, disgust, throat,
yuck, goose-bumps, nausea, vomit, neck, pff. Not all sign types are represented due to lack of permissions from participants.

Figure 4. Disgust sign types in DGS


6.2 Reliability of coding


The agreement ratio between the two FACS coders on a subset of the data (20
videos, 32% of the data) was 0.6 using the agreement formula provided in the
FACS manual. As this was rather low, the two coders arbitrated their scores. The
main source of disagreement was over whether jaw opening occurred due to relaxation or activation of muscles (AU26 jaw drop vs. AU27 mouth stretch), the
source of a down-turning of lip corners (AU15 lip corner depressor vs. AU20 lip
stretch), and the source of wrinkling on the chin (presence or absence of AU17
chin raiser). The judgment of the more experienced coder was accepted in most
cases, and the agreement score after rescoring by the less experienced coder was
0.87. The remaining videos were then recoded by one of the coders in accordance
with results of the arbitration.
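A simple agreement index of the kind provided in the FACS manual divides twice the number of AUs both coders scored by the total number of AUs the two coders scored. The sketch below (with invented codings, not our data) illustrates this form of calculation.

```python
# Agreement index for two FACS codings of one video: twice the count of
# AUs scored by both coders, divided by the total AUs scored by the two
# coders. The AU sets are invented for illustration.

def facs_agreement(coder1, coder2):
    """Agreement between two sets of AU codes; 1.0 means perfect agreement."""
    agreed = coder1 & coder2
    total = len(coder1) + len(coder2)
    return 2 * len(agreed) / total if total else 1.0

coder1 = {"AU9", "AU19", "AU25", "AU26"}
coder2 = {"AU9", "AU19", "AU25", "AU27"}  # disagrees on AU26 vs. AU27
print(facs_agreement(coder1, coder2))  # 6/8 = 0.75
```

Disagreements like the AU26/AU27 pair in this toy example mirror the jaw drop vs. mouth stretch confusions reported above.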
6.3 Length of lexical signs
The mean duration in milliseconds of the time window we analyzed with FACS
(the length of the morpho-syntactic sign) was 827ms (SD 287, Min 351, Max
1769) including all three conditions and 769ms (SD 263, Min 351, Max 1769)
when excluding the single word condition. We initially thought that the variation in length of the lexical signs is due mainly to sentence position, since sentence-final signs have longer retraction phases than sentence-medial signs because the hands return to a resting position instead of staying in signing space. Sentence-final disgust signs are indeed longer on average (Mean 866ms, SD 292) than sentence-medial disgust signs (Mean 646ms, SD 155.6), but this bi-categorical model only explained 18% of the variance (R² = 0.18), indicating that there are other factors that have a bigger impact on sign length than sentence position.
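For a two-level predictor such as sentence position, variance explained can be computed as eta-squared, the between-group sum of squares over the total sum of squares (for two groups this equals the squared point-biserial correlation). The durations below are invented for illustration and do not reproduce the reported values.

```python
# Variance in sign duration explained by a binary predictor (sentence-final
# vs. sentence-medial), computed as eta-squared: between-group sum of
# squares divided by total sum of squares. Durations (ms) are invented.

def eta_squared(group_a, group_b):
    all_vals = group_a + group_b
    grand = sum(all_vals) / len(all_vals)
    ss_total = sum((x - grand) ** 2 for x in all_vals)
    ss_between = sum(
        len(g) * ((sum(g) / len(g)) - grand) ** 2 for g in (group_a, group_b)
    )
    return ss_between / ss_total

final = [820, 900, 880, 860]   # sentence-final durations (invented)
medial = [640, 660, 630, 650]  # sentence-medial durations (invented)
print(round(eta_squared(final, medial), 2))  # 0.96 for these toy numbers
```

With the real corpus durations, the same computation would yield the reported 0.18.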
6.4 Presence of action units in time window of analysis
We assessed whether AUs timed specifically to the morpho-syntactic manual sign
occurred. For 62/63 tokens at least one action unit that was unique to the time
window of a disgust related sign occurred. The average number of action units per token was 5 (SD 2, Mode = 3, Min = 1, Max = 9). The movements made were strong and clear; the average intensity of an action unit on a scale of 1 to 5, with 5 meaning maximal possible intensity, was 3.5 (SD 0.3, Min = 3, Max = 4). This
established that for all but one disgust related sign, there was an AU timed to the
morpho-syntactic word, but this does not establish whether it looks like a disgust
related face. Mouthings and emotion related facial movements that do not have a


meaning congruent to disgust, such as a smile, need to be excluded. Our data set
is available in Appendix 1.
The action units started at onset or, in 4/62 cases, at the stroke phase of disgust
signs and faded with the retraction. It was clear that the face and the mouth in
particular, were engaged in rapid movements timed to the segmental actions of the
hands throughout the sentences and not just on disgust signs. Average mouthing
rate over all three conditions, calculated as number of signs containing a mouthing
divided by total signs, was 0.41 (SD 0.17). That means that for almost every second
sign, the mouth was engaged in our data set. This mouthing rate for DGS is similar to that reported in Ebbinghaus & Heßmann (2001).
6.5 Mouthings
In four cases, the facial action that occurred with disgust signs was a mouthing
or mouth gesture, coded as AU50, with no other detectable movements. In nine
cases, there were both mouthings and additional facial actions. There were two
different mouthings that occurred: /sle-t/ and /e-el/ derived from the German
words schlecht (nausea) and Ekel (disgust), respectively. There was one case of
the DGS mouth gesture pff . Table 7 below gives the mouthed and mouth gestured
signs, with data on participant, eliciting condition, mouthing type, and any additional action units that co-occurred with the mouthing. The table shows that 7/12
sign types were mouthed at least once. The most frequently mouthed sign type
was nausea. Nine different participants used a mouthing on a disgust sign at least
once. Most occurrences of mouthing (8/12 mouthed signs) happened in the single
word condition.
Figure 5 below of the clause worm index1 pff (German gloss wurm ich
ekelig) shows an example of a mouth gesture occurring over a disgust related
sign with no additional action units. It is clear from the figure that facial movements with temporal scope over larger constituents than the morpho-syntactic
signs for disgust occurred, even though we did not code them. In this example,
there are three groups of facial movement with regards to temporal scope. First,
there is a facial movement with temporal scope over the entire clause: the brows
go down at the stroke phase of the first sign worm (frame 1) and stay down until
the end of the clause (frame 3). Second, there is a facial movement with temporal
scope over two signs: the lip corners are pulled laterally and slightly down from
the articulation of the second sign index1 (frame 2) and stay in that position until
the end of the clause (frame 3). Third, there are facial movements that only have
temporal scope over individual signs: there is a mouthing over the first sign worm
(frame 1), and there is a mouth gesture pff (expulsion of air through compressed
lips) over the disgust related sign we labeled pff (frame 3). In coding the action

units that occurred over the disgust related sign, we excluded any pre-existing facial movements such as the brow furrow and lateral pulling of the lips. We only coded AU50 for the mouth gesture pff.

Table 7. Occurrence of mouthings with disgust related signs

Participant   Eliciting condition   Sign label   Mouthing
–             single                neck         e-el
–             single                polite       e-el
13            single                hate         e-el
18            reported              yuck         e-el
18            single                neck         e-el
19            single                disgust      e-el
20            reported              nausea       sle-t
20            single                nausea       e-el
–             direct                nausea       sle-t
–             direct                nausea       sle-t
–             direct                pff          pff
–             single                throat       e-el
–             single                polite       e-el

Co-occurring action units (marked per row in the full table): AU4 brow lowerer, AU7 lids tight, AU9 nose wrinkle, AU10 upper lip raiser, AU15 lip corner depressor, AU17 chin raiser, AU19 tongue show, AU43 eye closure.

Figure 5. Mouth gesture over disgust sign in the sentence worm index1 pff (German
gloss wurm ich ekelig)


6.6 Emotion related action units


Above we showed that facial movements unique to the time window of the disgust
signs occurred in all but one case. In four cases, this was a mouthing or mouth
gesture with no additional facial movements. Can the remaining 58 cases of facial
movements temporally co-occurring with disgust related signs be regarded as facial
movements meaning disgust? To answer this question, we provide a description of
the particular action units that occurred in our data.
Figure 6 below shows that 18 different action units occurred over all 63 tokens.
No action unit had 100% consistency of co-occurrence when all sign types are
considered together. The only two action units to have a consistency of
co-occurrence of at least 50% were AU25 lips part and AU19 tongue show.
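The consistency measure used here is simple arithmetic over the coded tokens: the percentage of tokens whose coding includes a given action unit. A minimal sketch, using invented illustrative codings rather than our corpus data, might look like this:

```python
from collections import Counter

# Hypothetical FACS codings: each token is the set of action units (AUs)
# observed over one disgust related sign. These sets are illustrative
# stand-ins, not the actual corpus codings.
tokens = [
    {"AU19", "AU25", "AU26"},
    {"AU19", "AU25"},
    {"AU25", "AU26", "AU15"},
    {"AU19", "AU25", "AU26", "AU9"},
]

def au_consistency(tokens):
    """Percentage of tokens over which each action unit occurs."""
    counts = Counter(au for token in tokens for au in token)
    return {au: 100 * n / len(tokens) for au, n in counts.items()}

consistency = au_consistency(tokens)
# Action units reaching the 50% consistency-of-co-occurrence threshold
frequent = {au for au, pct in consistency.items() if pct >= 50}
```

On real data, the same computation would be run once over all tokens and again per sign type to produce the distributions shown in the figures below.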

Figure 6. Frequency distribution of AUs over all disgust related tokens

To see whether there was 100% consistency of co-occurrence for an action unit
to a sign type, we present the frequency distributions of the action units per sign
type. Figure 7 shows the four most frequent signs together. None of the four
highest frequency sign types has a 100% consistency of co-occurrence with any
action unit. hate, disgust, and throat show similar AU frequency distribution
patterns, but yuck seems to have a unique profile. Specifically, AU19 tongue show
is completely absent from yuck, while it is the second most consistent AU for the
other three signs. Note, however, that there is an unequal number of tokens per
sign type, and participant numbers and eliciting conditions are not constant for
each type; therefore, strong conclusions about patterns cannot be drawn.


Figure 7. Frequency distribution for sign types hate, disgust, throat, and yuck


In Figure 8 below, we present the frequency distributions for the mid-frequency
sign types (4–5 occurrences out of 63): goose-bumps, polite, and nausea. From
the figure, it would seem that nausea is 100% consistent with AU50, but this code
is for mouthing. Table 7 above shows that the mouthing that co-occurred with
nausea was inconsistent: in one case, it was /e-el/, while in the others, it was /sle-t/.
The low frequency (3/63) sign types vomit and touch are shown in Figure 9
below. It seems as if there is 100% consistency of co-occurrence for several AUs


Figure 8. Frequency distribution of AUs for goose-bumps, polite, and nausea



Figure 9. Frequency distribution of AUs for vomit and touch


per type, however, the same participant signed all the vomit tokens, and the same
participant signed all the touch tokens. Therefore, it is impossible to attribute
any consistencies to the sign type, as they could equally well be due to the habitual
facial movements of the signer. Also, bear in mind that the frequencies are calculated over three tokens only.
The remaining low frequency sign types neck, slang, and pff have only 1–2
tokens each; therefore, a chart of the frequency distributions of their co-occurring
AUs would not be meaningful.
Figure 10 below shows the occurrence of a disgust related AU configuration
over a disgust related sign. When signing the sentence index1 worm
index1 disgust (German gloss ich wurm ich ekelig), the signer made three
different facial movements with regard to temporal scope. First, there is a facial
movement with temporal scope over the entire clause: the eyes are narrowed from
the first sign index1 (frame 1) and stay this way until the end of the clause (frame 4).
Second, there are facial movements with temporal scope over several signs: the
brows are lowered from the second sign worm (frame 2) and stay in that position
until the end of the clause (frame 4); the lip corners are pulled down from the
third sign index1 (frame 3) and stay in that position until the end of the clause
(frame 4). Third, there are facial movements that only have temporal scope over
individual signs: there is a mouthing over the second sign worm (frame 2), and
there is tongue protrusion through an open mouth over the last sign disgust

Figure 10. Emotion related action units over disgust sign in the sentence index1 worm
index1 disgust (German gloss ich wurm ich ekelig)


(frame 4). Note that when coding the action units for this disgust related sign, we
excluded all the pre-existing action units such as the lowered brows.
6.7 Comparison to Ekman's prototypical disgust face
According to Ekman et al. (2002a: 174), the prototypical disgust face could be any
of the six following action unit combinations:
i. AU9 nose wrinkle
ii. AU9 nose wrinkle + AU16 lower lip depress + AU15 lip corner depressor +
AU26 jaw drop
iii. AU9 nose wrinkle + AU17 chin raiser
iv. AU10 upper lip raiser
v. AU10 upper lip raiser + AU16 lower lip depress + AU15 lip corner depressor + AU26 jaw drop
vi. AU10 upper lip raiser + AU17 chin raiser
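For illustration, these six combinations can be encoded as sets and matched against a token's coded action units. The following sketch is our own gloss of this comparison, not part of Ekman et al.'s procedure:

```python
# Ekman et al.'s (2002a) six prototypical disgust AU combinations,
# encoded as frozensets so that matching is order independent.
EKMAN_DISGUST = [
    frozenset({"AU9"}),
    frozenset({"AU9", "AU16", "AU15", "AU26"}),
    frozenset({"AU9", "AU17"}),
    frozenset({"AU10"}),
    frozenset({"AU10", "AU16", "AU15", "AU26"}),
    frozenset({"AU10", "AU17"}),
]

def matches_ekman_disgust(observed_aus):
    """True if the observed AUs contain any prototypical combination."""
    observed = set(observed_aus)
    return any(combo <= observed for combo in EKMAN_DISGUST)
```

On this encoding, a face coded only with AU19, AU25, and AU26 matches none of the prototypical combinations, since every combination requires AU9 or AU10.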
As shown in Figure 6, all of these action units occurred at least once in our data.
However, AU9 nose wrinkle and AU10 upper lip raiser, which are the core of
Ekman's prototypical disgust faces, are not the highest frequency action units in
our data. The core disgust face for our lexical items is a combination of AU19
tongue show, AU25 lips part, and AU26 jaw drop. Figure 11 below contrasts
Ekman's prototypical disgust face (version ii) with the prototypical disgust face we
found associated with disgust signs in our data:

Figure 11. Facial movements most typically associated with disgust related signs in DGS
(left picture) compared to Ekman's prototypical disgust face (right picture)


6.8 What meaning do the facial movements in our data convey?


The iconic structure of the facial movements suggests that they convey a disgust
related meaning: AU19 tongue show together with an open mouth resembles the
act of removing a bad thing from one's mouth. This is an iconic metaphor for
the feeling disgust, in the same way that the manual component of the DGS sign
vomit is an iconic metaphor for the feeling disgust. The case is similar for the
other movements we found. The mouth related movements (AUs 25, 26, 15, 20,
10, 9, 17, 21, 27, 16) can all be interpreted as meaning want to remove bad thing
from mouth. The eye lid movements (AUs 7, 6, 43) all serve to decrease eye aperture up until total closure (AU43 is eye closure), which provides a meaning of approximately dont want to see (the disgusting thing), which is an iconic metaphor
of aversion. Brow lowering (AU 4) is generally associated with negative feelings.
The brow raising actions (AUs 1, 2) are possible exceptions as they are associated with the meaning of want to see more, which is an iconic metaphor of want
to know more (Wierzbicka 1999: 201) and so add meaning that is not typically
compatible with disgust. They were among the three least frequent action units
in our data.
6.9 The token with no facial movements
We examined the one disgust sign that occurred in our data with no facial movements unique to the time window of analysis. Figure 12 below shows that when
producing the sentence worm index1 yuck (German gloss wurm ich ekel), the
signer made three different facial movements with regard to temporal scope. First,
there is a facial movement with temporal scope over the entire clause: the brows go
down and the eyes are closed from the onset of the first sign worm (frame 1) and
stay this way until the end of the clause (frame 3). Second, there is a facial movement with temporal scope over two signs: the lip corners are pulled laterally and
slightly down and the teeth are exposed from the articulation of the second sign
index1 (frame 2) and stay in that position until the end of the clause (frame 3).

Figure 12. No action units over disgust sign in the sentence worm index1 yuck
(German gloss wurm ich ekel)


Third, there is a facial movement that only has temporal scope over an individual
sign: there is a mouthing over the first sign worm. This is the only sign in this
clause to have a facial action unique to its temporal scope. The disgust related
sign in this clause (yuck) has no additional marking besides the pre-existing brow
lowering, eye closure, and mouth corner pulling. One might interpret the mouth
movements over index1 and yuck (frames 2 and 3) as the spreading of a phonological facial feature over the prosodic word domain; however, it does not seem
to be the case that the first person pronoun was cliticized onto yuck since there
was no reduction in form of the pronoun comparable to that described in Sandler
(1999a). This issue remains to be addressed by future research.
6.10 Smiles
In six cases, a smile had onset before or during the sign analyzed, but it remained
on the face well after the offset of the sentence, for 3864 ms on average. This smile seemed
to be directed at the interviewer and to express the amusement of the participant
with the task. Figure 13 below shows that when producing the sentence index1
worm throat (German gloss ich wurm ekel), the signer made three different
facial movements with regard to temporal scope. First, there is a facial movement with temporal scope over the entire clause: the brows go down and the eyes
are narrowed from the onset of the first sign worm (frame 1) and stay this way
until the end of the clause (frame 4). Second, there are facial movements that only
have temporal scope over individual lexical signs: the sign worm (frame 2) has
a mouthing, and the sign throat has tongue protrusion. Third, there is a facial

Figure 13. An action unit with temporal scope larger than the clause in the sentence
index1 worm throat (German gloss ich wurm ekel)


movement with temporal scope over a time period longer than the clause: the lip
corners are pulled upwards in a smile from the start of the clause (frame 1) and
the upward pull is maintained even though antagonistic muscles pull the mouth in
other directions in frames 2 and 3; the upward pull remains as the hands return to
their resting position in frame 4; the upward pull remains for some seconds after
the hands have reached their resting position (frames 5 and 6).
6.11 The answers to our three questions
Does an emotion related facial movement consistently temporally co-occur with
disgust signs as single words? Emotion related movements occurred with 95%
consistency in the single word condition. In one case in the single word condition,
a disgust sign occurred with a mouthing but no emotion related facial movement.
Does an emotion related facial movement consistently temporally co-occur
with disgust signs in direct and reported sentence types? An emotion related facial movement occurred with 82% consistency in direct sentence types and 100%
consistency in reported sentence types. In one direct speech sentence, a disgust
sign occurred with no facial movement that was unique to the time window of the
morpho-syntactic sign. In three direct speech cases, a sign occurred with a mouthing or mouth gesture but no emotion related facial movement.
Which facial muscles are used by each signer when producing an emotion
related facial movement temporally co-occurring with a disgust sign? 18 different
action units occurred in our data and none of them was 100% consistent with any
sign type. The most frequent facial actions were AU25 lips part, AU19 tongue
show, and AU26 jaw drop.
What can this data tell us about the function of the facial movements that are
lexically related to signs for emotion concepts? We answer this question in the
discussion section below.
7. Discussion

7.1 Morphological function


The disgust related facial movements in our data have morphological properties in
the sense that they are meaning-bearing units. The protrusion of the tongue with
an open mouth is itself a semiotic unit that seems to be an iconic metaphor for the
concept disgust. In this study, we did not look for evidence that they can function like adverbials and form compounds with lexical signs that mean something
other than disgust. Rather, we wanted to establish with corpus data that they exist


at all on disgust related signs. Our data confirms that most disgust related signs
are articulated with a disgust related face that is unique to the time frame of the
morpho-syntactic sign. However, in order to check whether compounds are possible, we did ask three native DGS consultants whether it is possible to combine
a disgust face or sad face with the verb laufen (run/walk) to mean 'run while
being sad/disgusted', and they confirmed that it is.
7.2 Phonological function
As we stated above, our diagnostic heuristics for the phonological status of a facial
movement are: temporal scope only over the morpho-syntactic or possibly the
phonological word and 100% consistency of co-occurrence with a particular lexical sign. We found that for almost all our disgust related tokens (62/63), there was
a facial movement unique to the time span of the morpho-syntactic sign, and that
in 58/63 cases, there was a facial movement configuration unique to the time span
of the morpho-syntactic sign that could mean disgust. However, no one particular
movement was 100% consistent with any particular disgust related sign. Does this
mean that disgust related signs in DGS do not have a facial phonological component?
Given that disgust related movements occurred on the morpho-syntactic disgust related signs even on top of already existing attitude depicting disgust related
movements, as shown in Figures 10 and 13 above, we think it is highly likely that
the disgust signs in our corpus are specified for a facial phonological component.
The variation in the articulation of the facial component could be due to at
least two factors. Since tongue protrusion with open mouth was the most common
facial movement associated with disgust, it might be the case that the facial phonological component of some disgust signs in DGS is in the process of lexicalization
and that in the future, there will be more homogeneity in production per lexical
item across the DGS speaking community.
It is also possible that the variations in production of the disgust related faces
reflect allophones of a schematic disgust face phoneme.
As we continue to analyze the many emotion signs in our corpus, we will see
whether other emotion signs are more consistently produced with emotion related
faces, or whether disgust signs are unique in this respect.
The phonological function of disgust facial movements can be tested in future
single sign identification tasks in which it can be seen whether the presence of a
facial component facilitates participant reaction time and reduces error rates compared to signs with no facial component.


7.3 Intonational function


Is it possible that the disgust related faces in our corpus are units from the intonation level? To the best of our knowledge, they do not have the function of marking
sentence type, speech act, or topic. However, Crasborn & Van der Kooij (2013)
found evidence that various facial and non-facial movements such as mouthings,
eye gaze, and head nods are associated with the marking of focused constituents in
Sign Language of the Netherlands, and Waleschkowski (2009) found that contrastive focus in DGS is associated with head-nod, forward head tilt, raised brows, and
widened eyes. Are the disgust related movements in our data focus markers?
As we gathered our sentences using a translation task, and not with question–answer pairs as are used in information structure studies, we do not have a reliable
way of determining the information structure of our sentences. Consequently, we
cannot definitively answer this question. It may be the case that they can be used as
focus markers, although we expect them to appear on non-focused emotion signs
if they also have a phonological function.
Another possibility is that focused emotion signs could be marked through
the intensity of the emotion related facial movement rather than the movement
itself. According to our DGS consultant, the facial movements that are lexically related to signs for emotion concepts can vary in their intensity. This can be tested in
future studies by comparing non-focused emotion signs to focused emotion signs.
7.4 To what layer of information do the facial movements in our data belong?
As exemplified in Figure 13 above, several layers of information are transmitted
simultaneously during face-to-face communication. In Figure 13, the smile that
was on the signer's face from the beginning of the clause until after it had ended is a
semiotic unit from the emotions/intentions layer. It conveys that the participant
was amused with the translation task. The signs that were articulated by the signer
belong to the words layer, and convey the semantic categories of DGS. The brow
furrow and eye narrowing that had temporal scope over the entire clause seem
to indicate the attitude of the speaker. Such facial movements may belong to the
intonation layer, as they provide some information on how to interpret the sentence they co-occur with, like question markers. However, this requires further
study. Lastly, there were mouthings and tongue protrusion that only had temporal
scope over individual signs. We propose that these elements are units in a parallel lexicon layer that convey phonological information and information about
culture-specific semantic categories, that is, they are phonological units with some
morphological properties.


However, one may ask: if the disgust related faces are phonemes, why do they
not belong to the words layer, as part of the form of the word? If they are morphemes
that convey the same type of information as words, why are they not simply words?
As we stated in our introduction, there are probably more than four layers of
information in face-to-face conversation, and there may be some semiotic units
which are on the border between two layers. Enactings and mouthings have slightly different properties from any of the four information types that we mentioned
in this paper, as shown in Table 5 above. Therefore, we place them in their own
information type layer and call it the parallel lexicon. Do the disgust faces in our
data, which should belong to the enacting category, also show the properties of
enactings reported in the literature? In this study, we did not look for evidence
that these movements can combine with each other in hierarchical structures, or
with signs other than disgust related signs. From our evidence alone, it could well
be that the disgust related faces in our data should be regarded as phonological
features that are part of the form of words in the words layer, like empties, except
that they also happen to have semantic content. However, if, as attested by our
informants, they can also combine with non-disgust words, we think that these
semiotic units, together with mouthings, make up their own sub-system of communication that is interdependent with words.
7.5 Interdependent systems
Are the disgust facial movements in our data paralinguistic? In the non-manuals
literature, it is common to find a distinction made between linguistic and paralinguistic markers, or some variation of these terms. Reilly and her colleagues
(see Reilly (2005) for an overview), for instance, use the following four oppositions: grammatical vs. non-grammatical, linguistic vs. affective, linguistic vs. communicative, grammatical vs. communicative. Crasborn & Van der Kooij (2013)
make the following distinctions: linguistic, paralinguistic (which includes emotion/intention signals), and extra-linguistic (non-communicative actions). The
linguistic/paralinguistic distinction is not incompatible with our analyses, but it
would be an unnecessary add-on from our perspective. Given that there is still
no consensus on what exactly is emotion or paralinguistic on the one hand, and
what is linguistic on the other hand, we prefer to avoid such labels and rather
examine the various co-existing communicative signals and attempt to categorize
them according to differences and similarities in properties such as form-meaning
relationship, combinatoricity, etc. We do this so as to not prematurely exclude relevant communicative behavior from analysis, as has historically happened in the
case of sign languages, as in the case of intonation and co-speech gestures (Liddell


2003: 358) and in the case of iconicity (Taub 2001). We recognize that some of
our basic theoretical assumptions are not compatible with all linguistic theories.
However, like Liddell, we think
[…] that spoken and signed languages both make use of multiple types of semiotic elements in the language signal, but that our understanding of what constitutes
language has been too narrow. (Liddell 2003: 362)

The systems we listed in our information type layers (by 'system' we mean a set
of units that interact with each other, such that the interacting set together serves
a particular function) are to some degree independent in the sense that
they each serve a unique function. However, some of them, such as intonation
and words, interact with each other and in fact seem interdependent in that the
interpretation of a sentence (e.g. as question, statement, irony) is not possible
without both types of information. Together intonational units and lexical units
serve a more general communicative function; therefore, they are sub-systems of
the same larger system. Co-speech gestures also have a special communicative
function, and they interact with words. Co-speech gesture and words then are
also sub-systems of a larger communicative system. Apart from these four layers,
there are also emblematic gestures, the parallel lexicon layer that we propose in
this paper, and other kinds of communicative behavior which have not yet been
intensively studied such as the conversational facial movements made in spoken
languages. Which of the sub-systems should be considered linguistic and which
paralinguistic depends on one's theory of language and one's theory of emotion,
but the behaviors of these systems are describable facts.
7.6 Economy, effort, and redundancy
Are the facial movements we found on the disgust related signs redundant? It is
proposed in Hohenberger & Happ (2001) that mouthings are redundant in DGS.
Since the facial movements on the disgust signs in our data repeat the meaning of the manual part of the
lexical item, just like mouthings often do, we consider the question: are they redundant?
In discussing the meaning of redundancy, Hohenberger & Happ (2001) note
that even though languages are designed to be economic, there is redundancy built
into them at all levels. The redundancy is there to guarantee information transfer.
This kind of redundancy can perhaps be compared to the redundancy of a human's second kidney. They argue, however, that there can also exist too much redundancy, which they term profligacy. They state that given the economic design
of languages, profligate elements will eventually disappear.
The economy of language is described by Zipf as the Principle of Least Effort:


In simple terms, the Principle of Least Effort means, for example, that a person
in solving his immediate problems will view these against the background of
his future problems, as estimated by himself. Moreover he will strive to solve his
problems in such a way as to minimize the total work that he must expend in
solving both his immediate problems and his probable future problems. That in
turn means that the person will strive to minimize the probable average rate of his
work-expenditure (over time). And in so doing he will be minimizing his effort, by
our definition of effort. Least effort, therefore, is a variant of least work. (Zipf 2012
[1949]: 1; emphases in original)

We are not convinced that mouthings are profligate elements in sign languages.
They appear in other sign languages besides DGS (Crasborn et al. 2008), and they
still have not fallen out of use as attested by the mouthing rate (40%) in the disgust
section of our corpus. Furthermore, besides fulfilling a redundant function of the
good sort (ensuring transmission of the signal by adding salient perceptual cues,
i.e. functioning as phonological elements), they also seem to take on morphological and information structure functions (Crasborn et al. 2008; Crasborn & Van
der Kooij 2013).
Like mouthings, we think that the enacting actions, including the disgust related facial movements in our data, ensure successful transmission of the signed
signal, and perhaps also take on additional functions. By investing more work in
creating a robust signal, one decreases the chances of having to repeat oneself or
of being catastrophically misunderstood, therefore minimizing effort as defined
by Zipf: probable average rate of work-expenditure over time.
Beyond the articulatory work needed to ensure information transfer in communication (see e.g. Tobin 1997), one can invest even more articulatory work on
some words than others, and this indicates to the interlocutor what information is
most important. Gussenhoven (2002) calls this the Effort Code; however, his use
of the word effort is not to be confused with Zipf's definition of effort. In Zipf's
terms, Gussenhoven's effort is work.
To the best of our knowledge, enacting facial actions and mouth gestures
have never come under suspicion of profligacy because they are not derived from
spoken languages and are therefore considered more native to sign languages.
However, many mouth gestures and all enacting actions are derived from emotion/intention faces or instrumental facial movements, most of which a human
masters before she produces her first words (Izard et al. 1995). We regard these
items to be borrowings into DGS to the same extent as mouthings.


7.7 The relation between information type and articulator


As we stated in our introduction, the layers of communication are systems specialized in communicating particular types of information. We stress that the abstract
notion of information type, rather than the various articulators, is what motivates
the layering. The fact that words, intonation, and emotion/intention signals can
all be transmitted by a variety of articulators attests to this claim. However, the
relationship between articulator and information type is not random, and obviously, if there are not at least two independent articulators available, simultaneous
transmission of information types cannot occur.
We suggest that the hands are particularly well suited for transmitting information about path, manner, size, and shape and that is why they come into the service of a system specialized for transmitting this type of information. The lexicons
and classifier construction systems of sign languages also take advantage of the
natural ability of the hands to analogously encode path, manner, size, and shape.
The face seems to be well suited for transmitting information about a human's
emotions/intentions. By using these facial movements depictively, one can increase their referential range to non-present, non-self situations, while still
retaining their first person present tense orientation. Furthermore, we suggest that
signers might be using these emotion related facial movements to refer to concepts
of emotion that are bleached of the first person present tense orientation.
The small muscles of the vocal tract are well suited for rapid movements. These
rapid movements can create sounds. Sounds are ill suited for analogously encoding information about the visual world, which seems to be the sensory domain
most important for humans. This is probably why iconic forms in spoken language
lexicons are much less frequent than in sign language lexicons.
The intonation system of spoken languages, however, seems to be built on
iconic metaphors since voice pitch is naturally associated with certain states of a
human and their breathing cycle. Gussenhoven (2002) proposes three such metaphors, although he calls them metaphors of biological conditions rather than
iconic metaphors, which is a term adopted from Taub (2001). For example, high
tones are associated with beginnings and low tones with ends because at the beginning of an exhalation sub-glottal air pressure is higher than towards the end.
Some articulators can be moved faster than others, which means that some can transmit more information per unit of time. One might then think that spoken languages, which use the fast vocal muscles, transmit more information per time unit than sign languages. However, a comparison of speech rate and proposition rate between English and ASL suggests that this is not the case (Klima & Bellugi 1979). ASL and English have the same proposition rate, despite having different sign/word rates. Equal proposition rate is achieved by simultaneously transmitting more information in ASL compared to English. This is done, for example, by transmitting facial adverbials simultaneously with the manual verb they modify, by encoding subject and object through the location features of verbs, and by using depictive facial movements to indicate role and attitude. There is also some evidence for a stable information transmission rate across spoken languages despite large differences in their morphologies (Pellegrino, Coupé & Marsico 2011).
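The trade-off behind equal proposition rates can be made concrete with a small arithmetic sketch. All figures below are illustrative placeholders, not Klima & Bellugi's measurements:

```python
# Illustrative (hypothetical) rates: English produces more words per second
# than ASL produces signs per second, but each ASL sign packs in more of a
# proposition (location-encoded arguments, simultaneous facial adverbials),
# so propositions per second can come out equal.

english_words_per_s = 4.0    # hypothetical word rate
asl_signs_per_s = 2.5        # hypothetical sign rate

words_per_proposition = 8.0  # hypothetical: sequential, analytic encoding
signs_per_proposition = 5.0  # hypothetical: simultaneous layering packs more in

english_props_per_s = english_words_per_s / words_per_proposition
asl_props_per_s = asl_signs_per_s / signs_per_proposition

print(english_props_per_s)  # 0.5
print(asl_props_per_s)      # 0.5
```

With these invented numbers, a lower sign rate combined with fewer signs per proposition yields the same proposition rate as the faster word stream.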
8. Conclusion
In this article, we explored the phenomenon of signs for emotion concepts being consistently articulated with their congruent facial movement, such as the ASL SAD + frown. This phenomenon indicates that signers are using semiotic units that humans generally use to communicate emotions or intentions, i.e. facial expressions, in a novel symbolic way. For this reason, understanding the function of these symbolic facial movements is relevant to both emotion and language theories.
We propose that emotion related facial movements that are consistently produced with the manual part of emotion concept signs, such as the frown part of SAD, should be regarded as phonological elements with some morphological (meaning bearing) properties.
Furthermore, we propose that these emotion related facial movements, together with other enacting facial movements, such as the biting mouth made with the ASL sign BITE, and mouthings constitute a single class of phenomena. The function of this class of elements is to convey phonological information, as well as information about culture-specific semantic categories, together with a manual component in sign languages.
In order to explore this phenomenon in German Sign Language (Deutsche Gebärdensprache: DGS), we collected a corpus of emotion related signs. In this article, we presented our findings on signs for the concept disgust from our corpus.
We examined whether disgust related signs are consistently produced with
a disgust facial movement when signed as single words, when signed in direct
speech, and when signed in reported speech. We examined which facial muscles
were used in production of disgust related facial movements using the Facial
Action Coding System.
Our results showed 12 different disgust related sign types in our corpus and 63 disgust related tokens in total. We found that the disgust sign tokens were produced with disgust related facial movements at 82%, 100%, and 95% consistency in the direct speech, reported speech, and single word conditions, respectively.
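A consistency figure of this kind is a per-condition proportion over coded tokens. The sketch below assumes a simplified record format of (condition, disgust face present?) pairs, which is our own illustration rather than the actual coding files:

```python
from collections import defaultdict

# Hypothetical token codings: (eliciting condition, disgust face present?)
tokens = [
    ("direct_speech", True), ("direct_speech", False),
    ("reported_speech", True), ("reported_speech", True),
    ("single_word", True), ("single_word", True),
]

# condition -> [tokens with a disgust face, total tokens]
counts = defaultdict(lambda: [0, 0])
for condition, has_face in tokens:
    counts[condition][1] += 1
    if has_face:
        counts[condition][0] += 1

consistency = {c: with_face / total for c, (with_face, total) in counts.items()}
print(consistency)  # {'direct_speech': 0.5, 'reported_speech': 1.0, 'single_word': 1.0}
```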


By only coding the action units that were timed to start and end within the
time frame of the morpho-syntactic sign, we were able to filter out facial movements that have temporal scope over the entire clause or over constituents larger
than single signs.
The disgust related movements with temporal scope only over the lexical sign for disgust were often made in addition to disgust related movements with temporal scope over larger constituents, which appear to convey speaker/role attitude. That is, often the brows would be lowered and the lip corners turned down for most of the clause, and during the production of the disgust sign, additional facial movements were made, such as tongue show. We propose that the function of these sign-internal facial movements is not to depict attitude but rather to add phonological information to the disgust sign in order to aid identification by the interlocutor.
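The filtering step described above amounts to keeping only action-unit events whose onset and offset both fall inside the sign's time window. A minimal sketch of this interval test (the field names are our own, not from the actual coding files):

```python
from typing import NamedTuple

class AUEvent(NamedTuple):
    au: int        # FACS action unit number
    start_ms: int  # onset time
    end_ms: int    # offset time

def aus_within_sign(events, sign_start_ms, sign_end_ms):
    """Keep only AUs timed to start and end within the sign's window,
    filtering out movements scoping over the clause or larger constituents."""
    return [e for e in events
            if e.start_ms >= sign_start_ms and e.end_ms <= sign_end_ms]

events = [
    AUEvent(4, 0, 2500),     # brow lowerer spanning the whole clause
    AUEvent(19, 800, 1400),  # tongue show only during the disgust sign
    AUEvent(25, 750, 1450),  # lips part only during the disgust sign
]
print([e.au for e in aus_within_sign(events, 700, 1500)])  # [19, 25]
```

The clause-scoped AU4 is excluded because its onset precedes the sign window, mirroring how clause-level attitude marking was separated from sign-internal movements.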
We found that 18 different action units were used in total in our data set. No action unit was 100% consistent with any particular disgust sign type; however, the most common disgust face was an open-mouthed tongue show (AUs 19, 26, 25).
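Identifying the most common disgust face is a matter of counting AU combinations across tokens. A sketch with invented token codings (not our actual data set):

```python
from collections import Counter

# Hypothetical per-token AU sets (frozensets, so they are hashable and countable)
token_aus = [
    frozenset({19, 25, 26}),  # open-mouthed tongue show
    frozenset({19, 25, 26}),
    frozenset({9, 10}),       # nose wrinkle + upper lip raise
    frozenset({19, 25, 26}),
]

most_common_face, count = Counter(token_aus).most_common(1)[0]
print(sorted(most_common_face), count)  # [19, 25, 26] 3
```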
Data from consultants suggest that such facial movements can also function as a modifier on signs other than disgust signs. We propose that this facial movement element is part of an information layer temporally parallel to words/signs, in the same way that intonation and gesture exist as information layers temporally parallel to words/signs and interdependent with them.

Acknowledgments
This work was supported by the International Max Planck Research School "The Life Course: Evolutionary and Ontogenetic Dynamics" (LIFE) and the Cluster of Excellence "Languages of Emotion", Freie Universität Berlin.

References
Aarons, Debra, Ben Bahan, Judy Kegl & Carol Neidle. 1992. Clausal structure and a tier for grammatical marking in American Sign Language. Nordic Journal of Linguistics 15. 103–142.
Anderson, Diane E. & Judy S. Reilly. 1997. The puzzle of negation: How children move from communicative to grammatical negation in ASL. Applied Psycholinguistics 18(4). 411–429. DOI: 10.1017/S0142716400010912
Aryani, Arash, Markus Conrad & Arthur M. Jacobs. 2013. Extracting salient sublexical units from written texts: Emophon, a corpus-based approach to phonological iconicity. Frontiers in Psychology 4.654. DOI: 10.3389/fpsyg.2013.00654
Baker-Shenk, Charlotte. 1983. A micro-analysis of the non-manual components of questions in American Sign Language. PhD dissertation, University of California, Berkeley, CA.
Bartels, Christine. 1999. The intonation of English statements and questions: A compositional interpretation. New York/London: Garland Publishing.
Blevins, Juliette. 2012. Duality of patterning: Absolute universal or statistical tendency? Language and Cognition 4. 275–296. DOI: 10.1515/langcog-2012-0016
Boer, Bart de, Wendy Sandler & Simon Kirby (eds.). 2012. New perspectives on duality of patterning: Introduction to the special issue. Language and Cognition 4. 251–259. DOI: 10.1515/langcog-2012-0014
Crasborn, Onno & Els van der Kooij. 2013. The phonology of focus in Sign Language of the Netherlands. Journal of Linguistics 49(3). 515–565. DOI: 10.1017/S0022226713000054
Crasborn, Onno, Els van der Kooij, Dafydd Waters, Bencie Woll & Johanna Mesch. 2008. Frequency distribution and spreading behavior of different types of mouth actions in three sign languages. Sign Language & Linguistics 11(1). 45–67. DOI: 10.1075/sll.11.1.04cra
Dachkovsky, Svetlana & Wendy Sandler. 2009. Visual intonation in the prosody of a sign language. Language and Speech 52(2/3). 287–314. DOI: 10.1177/0023830909103175
Darwin, Charles. 1904 [1872]. The expression of the emotions in man and animals. London: John Murray.
Ebbinghaus, Horst & Jens Heßmann. 2001. Sign language as multidimensional communication: Why manual signs, mouthings, and mouth gestures are three different things. In Penny Boyes Braem & Rachel Sutton-Spence (eds.), The hands are the head of the mouth, 133–152. Hamburg: Signum.
Eibl-Eibesfeldt, I. 1975. Similarities and differences between cultures in expressive movements. In Robert A. Hinde (ed.), Non-verbal communication, 297–312. Cambridge: Cambridge University Press.
Ekman, Paul. 1972. Universals and cultural differences in facial expressions of emotion. In James K. Cole (ed.), Nebraska symposium on motivation, 207–283. Lincoln, NE: University of Nebraska Press.
Ekman, Paul. 1975. The universal smile: Face muscles talk every language. Psychology Today, September. 35–39.
Ekman, Paul. 1992. An argument for basic emotions. Cognition and Emotion 6. 169–200. DOI: 10.1080/02699939208411068
Ekman, Paul. 2004. Emotional and conversational nonverbal signals. In Jesús M. Larrazabal & Luis A. Pérez Miranda (eds.), Language, knowledge, and representation, 39–50. Dordrecht: Kluwer Academic Publishers. DOI: 10.1007/978-1-4020-2783-3_3
Ekman, Paul, Wallace V. Friesen & Joseph C. Hager. 2002a. Facial action coding system: Investigator's guide. Salt Lake City, UT: A Human Face.
Ekman, Paul, Wallace V. Friesen & Joseph C. Hager. 2002b. Facial action coding system: The manual. Salt Lake City, UT: A Human Face.
Elliott, Eeva A., Mario Braun, Michael Kuhlmann & Arthur M. Jacobs. 2012. A dual-route cascaded model of reading by deaf adults: Evidence for grapheme to viseme conversion. Journal of Deaf Studies and Deaf Education 17(2). 227–243. DOI: 10.1093/deafed/enr047
Elliott, Eeva A. & Arthur M. Jacobs. 2013. Facial expressions, emotions, and sign languages. Frontiers in Psychology 4. DOI: 10.3389/fpsyg.2013.00115
Fridlund, Alan J. 1997. The new ethology of human facial expressions. In James A. Russell & José-Miguel Fernández-Dols (eds.), The psychology of facial expression, 103–129. Cambridge: Cambridge University Press. DOI: 10.1017/CBO9780511659911.007
Fuks, Orit & Yishai Tobin. 2008. The signs B and B-bent in Israeli Sign Language according to the theory of phonology as human behavior. Clinical Linguistics & Phonetics 22(4–5). 391–400. DOI: 10.1080/02699200801916808
Gussenhoven, Carlos. 2002. Intonation and interpretation: Phonetics and phonology. In Proceedings of the 1st International Conference on Speech Prosody, Aix-en-Provence, France, 47–57.
Happ, Daniela & Marc-Oliver Vorköper. 2006. Deutsche Gebärdensprache: Ein Lehr- und Arbeitsbuch. Frankfurt am Main: Fachhochschulverlag.
Hockett, Charles. 1960. The origin of speech. Scientific American 203(3). 88–96. DOI: 10.1038/scientificamerican0960-88
Hofmann, Markus, Lars Kuchinke, Sascha Tamm, Melissa L. H. Vo & Arthur M. Jacobs. 2009. Affective processing within 1/10th of a second: High arousal is necessary for early facilitative processing of negative but not positive words. Cognitive, Affective & Behavioral Neuroscience 9(4). 389–397. DOI: 10.3758/9.4.389
Hohenberger, Annette & Daniela Happ. 2001. The linguistic primacy of signs and mouth gestures over mouthings: Evidence from language production in German Sign Language (DGS). In Penny Boyes Braem & Rachel Sutton-Spence (eds.), The hands are the head of the mouth, 153–190. Hamburg: Signum.
Izard, Carroll E. 2010. The many meanings/aspects of emotion: Definitions, functions, activation, and regulation. Emotion Review 2(4). 363–370. DOI: 10.1177/1754073910374661
Izard, Carroll E., Christina A. Fantauzzo, Janine M. Castle, O. Maurice Haynes, Maria F. Rayias & Priscilla H. Putnam. 1995. The ontogeny and significance of infants' facial expressions in the first 9 months of life. Developmental Psychology 31(6). 997–1013. DOI: 10.1037/0012-1649.31.6.997
Johnston, Trevor & Adam Schembri. 2006. Issues in the creation of a digital archive of a signed language. In Linda Barwick & Nicholas Thieberger (eds.), Sustainable data from digital fieldwork, 7–16. Sydney: Sydney University Press.
Keller, Jörg. 2001. Multimodal representations and the linguistic status of mouthings in German Sign Language (DGS). In Penny Boyes Braem & Rachel Sutton-Spence (eds.), The hands are the head of the mouth, 191–230. Hamburg: Signum.
Kelly, Spencer D., Aslı Özyürek & Eric Maris. 2010. Two sides of the same coin: Speech and gesture mutually interact to enhance comprehension. Psychological Science 21(2). 260–267. DOI: 10.1177/0956797609357327
Klima, Edward & Ursula Bellugi. 1979. The rate of speaking and signing. In Edward Klima & Ursula Bellugi (eds.), The signs of language, 181–194. Cambridge, MA: Harvard University Press.
Krahmer, Emiel & Marc Swerts. 2009. Audiovisual prosody – Introduction to the Special Issue. Language and Speech 52. 129–133. DOI: 10.1177/0023830909103164
Ladd, D. Robert. 1996. Intonational phonology. Cambridge: Cambridge University Press.
Lewin, Donna & Adam Schembri. 2011. Mouth gestures in British Sign Language. Sign Language & Linguistics 14(1). 94–114. DOI: 10.1075/sll.14.1.06lew
Liddell, Scott K. 1980. American Sign Language syntax. The Hague: Mouton.
Liddell, Scott K. 2003. Grammar, gesture, and meaning in American Sign Language. Cambridge: Cambridge University Press. DOI: 10.1017/CBO9780511615054
Matsumoto, David & Bob Willingham. 2009. Spontaneous facial expressions of emotion of congenitally and noncongenitally blind individuals. Journal of Personality and Social Psychology 96(1). 1–10. DOI: 10.1037/a0014037
Mayberry, Rachel, Joselynne Jacques & Gayle DeDe. 1998. What stuttering reveals about the development of the gesture-speech relationship. New Directions for Child Development 79. 77–87. DOI: 10.1002/cd.23219987906
McIntire, Marina L. & Judy S. Reilly. 1988. Nonmanual behaviors in L1 and L2 learners of American Sign Language. Sign Language Studies 61. 351–375. DOI: 10.1353/sls.1988.0034
McNeill, David. 1992. Hand and mind: What gestures reveal about thought. Chicago: Chicago University Press.
Müller, Cornelia & Roland Posner (eds.). 2002. The semantics and pragmatics of everyday gestures: The Berlin conference. Berlin: Weidler Verlag.
Neidle, Carol, Judy Kegl, Dawn MacLaughlin, Ben Bahan & Robert G. Lee. 2000. The syntax of American Sign Language: Functional categories and hierarchical structure. Cambridge, MA: MIT Press.
Nespor, Marina & Wendy Sandler. 1999. Prosody in Israeli Sign Language. Language & Speech 42(2/3). 143–176. DOI: 10.1177/00238309990420020201
Pellegrino, François, Christophe Coupé & Egidio Marsico. 2011. A cross-language perspective on speech information rate. Language 87(3). 539–558. DOI: 10.1353/lan.2011.0057
Pittam, Jeffery & Klaus R. Scherer. 1993. Vocal expression and communication of emotion. In Michael Lewis & Jeannette M. Haviland (eds.), Handbook of emotions, 185–197. New York/London: Guilford Press.
Reilly, Judy S. 2005. How faces come to serve grammar: The development of nonmanual morphology in American Sign Language. In Brenda Schick, Mark Marschark & Patricia E. Spencer (eds.), Advances in the sign language development of deaf children, 262–290. Oxford: Oxford University Press. DOI: 10.1093/acprof:oso/9780195180947.003.0011
Reilly, Judy S., Marina McIntire & Ursula Bellugi. 1990. The acquisition of conditionals in American Sign Language: Grammaticized facial expressions. Applied Psycholinguistics 11(4). 369–392. DOI: 10.1017/S0142716400009632
Reilly, Judy S., Marina McIntire & Howie Seago. 1992. Affective prosody in American Sign Language. Sign Language Studies 75. 113–128. DOI: 10.1353/sls.1992.0035
Russell, James A. 1980. A circumplex model of affect. Journal of Personality and Social Psychology 39(6). 1161–1178. DOI: 10.1037/h0077714
Russell, James A. & José-Miguel Fernández-Dols (eds.). 1997. The psychology of facial expression. Cambridge: Cambridge University Press. DOI: 10.1017/CBO9780511659911
Sandler, Wendy. 1999a. Cliticization and prosodic words in a sign language. In Tracy Hall & Ursula Kleinhenz (eds.), Studies in the phonological word, 223–255. Amsterdam: John Benjamins. DOI: 10.1075/cilt.174.09san
Sandler, Wendy. 1999b. Prosody in two natural language modalities. Language and Speech 42(2–3). 127–142. DOI: 10.1177/00238309990420020101
Sandler, Wendy. 2009. Symbiotic symbolization by hand and mouth in sign language. Semiotica 174. 241–275.
Sandler, Wendy & Diane Lillo-Martin. 2006. Sign language and linguistic universals. Cambridge: Cambridge University Press. DOI: 10.1017/CBO9781139163910
Taub, Sarah. 2001. Language from the body: Iconicity and metaphor in American Sign Language. New York: Cambridge University Press. DOI: 10.1017/CBO9780511509629
Tobin, Yishai. 1997. Phonology as human behaviour. Durham & London: Duke University Press.
Vo, Melissa L. H., Markus Conrad, Lars Kuchinke, Karolina Urton, Markus J. Hofmann & Arthur M. Jacobs. 2009. The Berlin affective word list reloaded (BAWL-R). Behaviour Research Methods 41(2). 534–538. DOI: 10.3758/BRM.41.2.534
Vos, Connie de, Els van der Kooij & Onno Crasborn. 2009. Mixed signals: Combining linguistic and affective functions of eyebrows in questions in Sign Language of the Netherlands. Language and Speech 52(2/3). 315–339. DOI: 10.1177/0023830909103177
Waleschkowski, Eva. 2009. Focus in German Sign Language. Poster presented at Workshop Nonmanuals in Sign Languages, University of Frankfurt/Main.
Wierzbicka, Anna. 1996. Semantics: Primes and universals. New York: Oxford University Press.
Wierzbicka, Anna. 1999. Emotions across languages and cultures. Cambridge: Cambridge University Press. DOI: 10.1017/CBO9780511521256
Wilbur, Ronnie B. 2009. Effects of varying rate of signing on ASL manual signs and nonmanual markers. Language and Speech 52(2/3). 245–285.
Wilbur, Ronnie B. & Cynthia Patschke. 1999. Syntactic correlates of brow raise in ASL. Sign Language & Linguistics 2(1). 3–41.
Woll, Bencie. 2001. The sign that dares to speak its name: Echo phonology in British Sign Language (BSL). In Penny Boyes Braem & Rachel Sutton-Spence (eds.), The hands are the head of the mouth, 87–98. Hamburg: Signum.
Zipf, George Kingsley. 2012 [1949]. Human behaviour and the principle of least effort: An introduction to human ecology. Mansfield Centre: Martino Fine Books.


Appendices

Appendix 1. Facial Action Coding System raw data

[Table not recoverable from this extraction. The appendix lists, for each of the 63 disgust related sign tokens, the token number, participant, eliciting condition, sign ID, and sign label (hate, disgust, throat, yuck, goose-bumps, polite, nausea, vomit, touch, neck, slang, pff), together with the FACS action units (AU1–AU50) coded within the sign's time window and their intensities, and notes (e.g. mouthings of ekel or schlecht, AU12s extending beyond the word window, the mouth gesture "pff" coded as AU50).]

Appendix 2. Transcription of DGS data

[Table not recoverable from this extraction. The appendix gives, for each participant (1–20) and eliciting condition (direct_speech, reportive_speech, single_word), the total number of signs in the sentence, the sentence length in ms, and, per sign, an English gloss over a German gloss (e.g. worm/wurm, friend/freund, disgust/ekel) with the sign's length in ms.]

Appendix 3. Gloss choices of transcribers for each disgust related sign type
     Sign Type      Transcription Variations
1    hate           hass, hass, hass, hass, ekel, hass, hass, hass
2    disgust        ekelig, ekel, ekel, ekel, ekel, ekel, ekel, ekel, ekel
3    throat         ekel, ekel, ekel, ekel
4    yuck           ekelig, ekelig, iiii, igitt, igitt, fa, fa
5    goose-bumps    ekel, ekel, ekelig
6    polite         hass, ekel
7    nausea         schlecht, unwohl, schlecht
8    vomit          erbrech, erbrech
9    touch          ekel, ekel
10   neck           n/a
11   slang          hass, ekelig
12   pff            ekelig

Note: Only the tokens elicited in conditions 1 and 2 were transcribed.

Appendix 4. Eliciting materials for basic emotions signs


1. Ich bin verärgert über meinen Chef.
I am angry at my boss.
2. Mein Freund hat gesagt, dass er über seinen Chef verärgert ist.
My friend said that he is angry at his boss.
3. Ich bin angeekelt von Würmern.
I am disgusted by worms.
4. Mein Freund hat gesagt, dass er angeekelt von Würmern ist.
My friend said that he is disgusted by worms.
5. Ich habe Angst vor dem Tod.
I am afraid of death.
6. Meine Schwester hat gesagt, dass sie Angst vor dem Tod hat.
My sister said that she is afraid of death.
7. Ich bin glücklich, meinen Freund wieder zu sehen.
I am happy to see my friend again.
8. Meine Mutter hat gesagt, dass sie glücklich ist, ihre Freundin wieder zu sehen.
My mother said that she is happy to see her friend again.
9. Ich bin traurig, weil mein Hund gestorben ist.
I am sad because my dog died.
10. Mein Bruder hat gesagt, dass er traurig ist, weil sein Hund gestorben ist.
My brother said that he is sad because his dog died.
11. Ich bin überrascht von der Nachricht.
I am surprised by the news.
12. Meine Schwester hat gesagt, dass sie überrascht von der Nachricht ist.
My sister said that she is surprised by the news.

Corresponding author's address

Dr. Eeva A. Elliott
Department of Experimental and Neurocognitive Psychology
Freie Universität Berlin
Habelschwerdter Allee 45
14195 Berlin
Germany
astarteva@gmail.com
