
The Linguistic Coding of verbal and non-verbal backchannels:

A Preliminary Approach

Initial Linguistic Codes- Recap:

- Continuers: maintaining the flow of discourse
- Convergence Tokens: marking agreement and disagreement
- Engaged Response Tokens: a high level of engagement, with the participant responding on an affective level to the interlocutor
- Information Receipt Tokens: marking points of the conversation where adequate information has been received

Limitations & Future Requirements:


A key concern of this project is to explore how, in conversation, verbal and visual realisations of backchannels interact within and across such categories (and beyond). However, in its present form, this coding scheme is suited only to audio data, that is, to spoken linguistic backchannel forms: its definitions provide no means of identifying and encoding the non-verbal backchannels seen in the image data, nor of integrating that information with the verbal data. We therefore need to develop a more comprehensive coding scheme that can be used for both verbal and non-verbal forms.
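One way such a combined scheme could be represented is sketched below. This is purely illustrative: the class and field names are our own labels for the four codes and two modalities described above, not part of the project's actual scheme, and the example timings are invented.

```python
from dataclasses import dataclass
from enum import Enum

class Function(Enum):
    """The four initial linguistic codes."""
    CONTINUER = "continuer"
    CONVERGENCE_TOKEN = "convergence token"
    ENGAGED_RESPONSE = "engaged response"
    INFORMATION_RECEIPT = "information receipt"

class Modality(Enum):
    VERBAL = "verbal"          # e.g. "yeah", "mmm"
    NONVERBAL = "non-verbal"   # e.g. a head nod

@dataclass
class Backchannel:
    form: str            # "yeah", "mmm", "nod", ...
    modality: Modality
    function: Function
    start: float         # seconds from the start of the recording
    end: float

    @property
    def duration(self) -> float:
        return self.end - self.start

# A nod and a verbal token can then be coded in one structure and
# compared by their (hypothetical) timestamps.
nod = Backchannel("nod", Modality.NONVERBAL, Function.CONTINUER, 12.30, 12.85)
print(round(nod.duration, 2))  # 0.55
```

Keeping modality as a field of the same record type, rather than using separate schemes, is what lets verbal and non-verbal tokens be queried and aligned together.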

Linguistic & Methodological Approach:

PRAAT is a computer program for analysing, synthesising, and manipulating speech, so it can be used to explore the phonetic patterns of backchannels.
- Developed by researchers at the Institute of Phonetic Sciences, University of Amsterdam
- Free to download online and available for general use
- Note on technical restrictions in using our data in PRAAT
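The three measures we take from PRAAT (duration, pitch, intensity) can be illustrated with a minimal stand-alone sketch. This is not PRAAT's algorithm: PRAAT estimates pitch by autocorrelation, whereas the zero-crossing estimator below is only a crude stand-in, and the sampling rate and synthetic tone are assumptions made so the example needs no audio file.

```python
import math

FS = 16_000  # assumed sampling rate (Hz) for this sketch

def make_tone(freq_hz: float, dur_s: float, amp: float = 0.5) -> list[float]:
    """Synthesise a pure tone standing in for a short backchannel token."""
    n = int(FS * dur_s)
    return [amp * math.sin(2 * math.pi * freq_hz * t / FS) for t in range(n)]

def duration_s(samples: list[float]) -> float:
    return len(samples) / FS

def f0_zero_crossings(samples: list[float]) -> float:
    """Crude F0 estimate: positive-going zero crossings per second.
    (PRAAT uses autocorrelation; this is only an approximation.)"""
    crossings = sum(1 for a, b in zip(samples, samples[1:]) if a < 0 <= b)
    return crossings / duration_s(samples)

def intensity_db(samples: list[float], ref: float = 1.0) -> float:
    """RMS level in dB relative to a reference amplitude."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20 * math.log10(rms / ref)

tone = make_tone(200.0, 0.4)            # a 0.4 s token at 200 Hz
print(duration_s(tone))                 # 0.4
print(f0_zero_crossings(tone))          # roughly 200 for this tone
print(round(intensity_db(tone), 1))
```

In practice these values would be read off each annotated backchannel interval in PRAAT rather than computed by hand; the point is only what "duration, pitch, intensity" mean operationally.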

Methodological Approach:
Aim: to explore any potential relationships between the presence of non-verbal (i.e. head nod) and verbal backchannels, and the duration, pitch and intensity of these.
- We have focused specifically on our training data
- We have explored whether there are (in)consistencies:
  - Across all instances, with and without nods
  - Between those occurring with head nods and those occurring without
  - Within the groups of yeah and mmm (the most frequent backchannels) specifically
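The with-nod versus no-nod comparison amounts to a simple two-group summary. The sketch below shows the shape of that comparison on invented durations; these numbers are not the project's data.

```python
from statistics import mean, stdev

# Illustrative backchannel durations in seconds -- invented, not project data.
with_nod    = [0.42, 0.55, 0.61, 0.38, 0.70]
without_nod = [0.25, 0.31, 0.44, 0.29, 0.36]

def summarise(label: str, xs: list[float]) -> None:
    print(f"{label}: mean={mean(xs):.2f}s sd={stdev(xs):.2f}s n={len(xs)}")

summarise("with nod", with_nod)
summarise("no nod", without_nod)

# The quantity of interest: do nod-accompanied tokens differ in duration?
diff = mean(with_nod) - mean(without_nod)
print(f"mean difference: {diff:.2f}s")
```

With only 50 tokens per group, any such difference would of course need a significance test before being interpreted.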

Sample information:
Sample size = 100 verbal backchannels, from 275 total

- 50 co-occur with head nods (43 female, 7 male)
- 50 occur without head nods (41 female, 9 male)
- 84 are spoken by the female supervisor
- 16 are spoken by the male supervisee

We are aware that the results will vary according to who is speaking, as the phonetic characteristics of the male student will differ from those of the female supervisor.
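One standard way to control for the speaker difference just noted is to normalise pitch within each speaker before comparing across speakers, e.g. by z-scoring. The sketch below is illustrative; the pitch values are invented.

```python
from statistics import mean, stdev

def zscores(values: list[float]) -> list[float]:
    """Normalise within one speaker so values are comparable across speakers."""
    m, s = mean(values), stdev(values)
    return [(v - m) / s for v in values]

# Invented pitch values (Hz), reflecting only that female pitch is
# typically higher than male pitch.
female_pitch = [210.0, 230.0, 250.0, 220.0]
male_pitch   = [110.0, 120.0, 130.0, 115.0]

# After z-scoring, both speakers' backchannels sit on the same scale,
# so nod/no-nod comparisons are not confounded by speaker identity.
all_normalised = zscores(female_pitch) + zscores(male_pitch)
```

With only two speakers in the sample, speaker and role (supervisor vs supervisee) remain confounded either way; normalisation only removes the gross pitch-range difference.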

Sample Data- Backchannel Length (secs):


[Chart: backchannel length (seconds, 0–3) against backchannel number (1–49); separate series for backchannels with a nod and with no nod.]

Sample Data Results- Length in detail (secs):


Male Data: [Chart: backchannel length (seconds, 0–1.4) against backchannel number (1–9); series for with nod and no nod.]

Female Data: [Chart: backchannel length (seconds, 0–3) against backchannel number (1–43); series for with nod and no nod.]

Sample Data Results- Pitch (Hz) & Intensity (dB)

[Charts: pitch (Hz, 0–400) and intensity (dB, 0–80) against backchannel number (1–49); series for with nod and no nod.]

Sample Data Results- Yeah in detail:


- Focus on FUNCTION, linking back to the initial linguistic codes
- 31 tokens of yeah in the sample: 16 with nods, 15 with no nods
- 6 male (3 with nods, 3 without), 25 female (13 with, 12 without)
- 101 occurrences in total, making yeah the second most frequent backchannel (mmm = 102 occurrences)
- Similar investigations have been conducted across each of the backchannel forms
- Extensions and future plans

Sample Data Results- Yeah Focus:


[Charts: length (seconds, 0–1.4), pitch (Hz, 0–400) and intensity (dB, 0–80) against backchannel number (1–16); series for with nod and no nod.]

Sample Data Results- Yeah Function Results


[Charts: length (secs, 0–1.4), pitch (Hz, 0–400) and intensity (dB, 0–80) against backchannel number (1–18); series for Continuer and Convergence Token.]

All Sample Data- Function Focus: Length (sec)


[Chart: length (secs, 0–3) against backchannel number (1–61); series for Continuer, Convergence Token, Engaged Response and Information Receipt.]

All Sample Data- Function Focus: Pitch (Hz)


[Chart: pitch (Hz, 0–400) against backchannel number (1–61); series for Continuer, Convergence Token, Engaged Response and Information Receipt.]

All Sample Data- Function Focus: Intensity (dB)


[Chart: intensity (dB, 0–80) against backchannel number (1–61); series for Continuer, Convergence Token, Engaged Response and Information Receipt.]

Future Developments:
- To look in more detail at:
  - The actual timings of the verbal and non-verbal backchannels
  - Where they occur in discourse
  - Between which words, and after what lengths of pause, they occur

This will allow us to examine whether there is a link between, for example, the time at which the interlocutor makes a statement, the time at which a response is made, and the value/function of that response.
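The timing questions above reduce to two interval operations: does a nod overlap a verbal token, and what is the lag between the end of the interlocutor's statement and the start of the response? A minimal sketch, with invented timestamps:

```python
# All times in seconds; the intervals below are invented for illustration.

def overlaps(a_start: float, a_end: float,
             b_start: float, b_end: float) -> bool:
    """True if the two time intervals share any span."""
    return a_start < b_end and b_start < a_end

def lag(statement_end: float, response_start: float) -> float:
    """Signed gap between a statement ending and the response beginning."""
    return response_start - statement_end

nod = (12.30, 12.85)           # hypothetical head-nod interval
verbal = (12.40, 12.70)        # hypothetical "yeah"
statement_end = 12.25          # hypothetical end of interlocutor's statement

print(overlaps(*nod, *verbal))                   # True: nod spans the token
print(round(lag(statement_end, verbal[0]), 2))   # 0.15 s response lag
```

Grouping such lags by the function code of each response would then show whether, say, Continuers come faster than Convergence Tokens.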

Future Developments- Data:


Although for the basic PRAAT explorations we have used the supervision data, we have also collected multiple other forms of data for future investigation.
