
Using Auditory Displays in Public Places

Eoin Brazil

Auditory displays allow us to make better use of our auditory sense. Auditory displays
are particularly suited to mobile and ubiquitous applications as they can free up the visual
modality. Sound has been used in interfaces to draw attention to events and to support
peripheral awareness [1, 2]; to give users confirmation that their actions have succeeded
or failed [3]; to monitor processes [4, 5]; and for notification [6].

Auditory icons are representations of everyday sounds (environmental sounds) designed to convey information about computer events [7]. Auditory icons represent
distinct events or items: the relationship is a mapping from an everyday sound-producing event to a computer event. How readily an auditory icon is identified depends directly on the analogy drawn between the everyday world and the model world. This metaphorical association with the interacting object builds on our everyday listening skills. Gaver has described and investigated auditory icon comprehension in ecological terms [8, 9].
Auditory icons can be parameterised, as discussed by Gaver [10], to reflect the relevant dimensions of the interacting object; for example, a large file would sound large. This parameterisation allows an auditory icon to be varied along a number of dimensions and greatly increases the amount of information that can be conveyed perceptually rather than symbolically.
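To make this concrete, here is a minimal sketch (in Python; the parameter names, ranges, and the log-scale mapping are illustrative assumptions of ours, not Gaver's actual synthesis parameters) of how a file's size might be mapped onto the parameters of an impact-style auditory icon so that a large file sounds large:

    import math

    def file_icon_parameters(size_bytes):
        """Map file size onto impact-sound parameters (illustrative ranges)."""
        # Normalise size on a log scale spanning roughly 1 byte to 1 GB.
        norm = min(max(math.log10(max(size_bytes, 1)) / 9.0, 0.0), 1.0)
        return {
            "pitch_hz": 800 - 600 * norm,   # larger objects ring at lower pitch
            "decay_s": 0.1 + 0.9 * norm,    # larger objects resonate longer
            "amplitude": 0.3 + 0.7 * norm,  # larger objects hit harder
        }

    print(file_icon_parameters(10 * 1024 ** 2))  # a 10 MB file sounds mid-sized

Because each event attribute drives its own acoustic dimension, several attributes can be conveyed within a single icon.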
The identification of an auditory icon is tied to the interpretation of its metaphorical association, and identification becomes more difficult when auditory icons are presented concurrently: the more icons presented, the greater the risk of altering or damaging the metaphorical associations. Modifications can be made to improve auditory icon identification, but they are constrained because the metaphorical association must be preserved to maintain the mapping between events and sounds. Everyday sounds carry an inherent meaning, learnt through our everyday activities and dependent upon context. Mynatt [11] discussed the recognition problem: choosing the right sounds for an interface is an art with many hidden dangers and depends upon the skills of the designer. These dangers include sounds that are confusing when heard without context, or that suggest different scenarios when the temporal sequencing of the sounds is simply rearranged.
Research by Ballas and Howard [12] found that semantic context is an important factor in sound interpretation and that auditory perception is directed towards awareness of the sources of sounds, i.e. the events producing them. They also stated that the function of auditory perception is to recognise events rather than to process acoustic patterns. This has led to ongoing research in the Interaction Design Centre (IDC), where we are investigating interaction design and auditory display with a focus on mapping human activity to actions rather than objects. To further this new approach we have conducted several investigations into the identification of everyday sounds, using a listening test approach as in the work of several other researchers. Fernström et al. [13] presented an analysis of a set of descriptions for the sounds within the framework of interaction design and auditory display. In analysing these descriptions, Ballas' method of causal uncertainty [14] was used, which provides a confusion metric for a sound that has several possible identifications. Ballas et al. found that the identification time for everyday non-speech sounds is a function of the logarithm of the number of alternative interpretations of a sound [15]. The causal uncertainty for sound i is defined as

H_{cu_i} = -\sum_{j=1}^{n} p_{ij} \log_2 p_{ij}

where p_{ij} is the proportion of all responses for sound i sorted into event category j, and n is the number of response categories for sound i.
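As a concrete illustration, here is a minimal sketch (in Python; the response data and category labels are hypothetical) of how causal uncertainty can be computed from categorized listening-test responses:

    import math
    from collections import Counter

    def causal_uncertainty(categorized_responses):
        """Ballas' H_cu for one sound, given one category label per response."""
        counts = Counter(categorized_responses)
        total = len(categorized_responses)
        # H_cu = -sum_j p_ij * log2(p_ij) over the n response categories
        return -sum((c / total) * math.log2(c / total) for c in counts.values())

    # Ten hypothetical responses to one sound, sorted into event categories.
    responses = ["footsteps"] * 6 + ["book drop"] * 3 + ["door knock"]
    print(causal_uncertainty(responses))  # ~1.30 bits; 0.0 would mean no confusion

Sounds with low H_cu values are those listeners identify consistently, and they can be shortlisted as semantically distinct candidates for an auditory display.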

The process for evaluating this type of experiment involves analysing the collected responses, which are sorted, categorized, and checked for correctness. From the text responses, the action and object segments are extracted and categorized; the categories concentrate on how the objects/materials interacted and on which objects/materials were used in the interaction(s). The collected data set, with all responses, categorizations, and measurements of causal uncertainty, can also be used to suggest possible uses of sounds in interaction design, somewhat similar to Barrass' method of collecting stories [16] about when sound is useful in everyday life. From a designer's point of view it is interesting to note that the responses from the listening tests contain information about how people describe everyday sounds as well as measurements of causal uncertainty. With the results of the listening tests, we can begin to suggest possible auditory displays and metaphors for interaction design. Based on Barrass' TaDa approach, we can perform a task and data analysis that lets us select sounds that communicate the dimensions and directions giving users adequate feedback about their actions in relation to a system, as well as about the system's status and events. The measurements of causal uncertainty allow us to choose sounds that are semantically distinct. Our earlier research concentrated on individual sounds presented sequentially, but as designers may wish to convey more than the single event or message contained within a single auditory icon, our current investigations examine concurrently presented auditory icons.
Peep [17] is an example of a concurrent auditory icon system; it is used by network administrators for network monitoring. A typical scenario maps the number of users on the network to the number of birds singing, so that an administrator can directly hear how many users are logged in at any one time, while serious or negative events are mapped to gunshots or thunder. This practical scenario for concurrent auditory icons applies equally to any monitoring or notification task.
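A minimal sketch of this idea (in Python; the sound file names, the cap on voices, and the load threshold are illustrative assumptions rather than Peep's actual configuration) could look as follows:

    def soundscape_for(network_state):
        """Return the sound streams to play concurrently for a network state."""
        layers = []
        # One birdsong voice per logged-in user, capped to keep the scene legible.
        layers += ["birdsong.wav"] * min(network_state["users"], 8)
        if network_state["load"] > 0.8:
            layers.append("thunder.wav")  # sustained condition: overload
        layers += ["gunshot.wav"] * len(network_state["alerts"])  # discrete serious events
        return layers

    state = {"users": 3, "load": 0.9, "alerts": ["disk_full"]}
    print(soundscape_for(state))
    # ['birdsong.wav', 'birdsong.wav', 'birdsong.wav', 'thunder.wav', 'gunshot.wav']

The ambient layers (birdsong, thunder) support peripheral awareness, while the discrete sounds (gunshots) draw attention to individual events.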

Macrotemporal properties of sounds are also important for identification; for example, the sound of a single isolated footstep can be heard as a book being dropped on a table, while if several footsteps are heard there is seldom any doubt about the source. Brian Gygi [18] investigated how people identify everyday sounds and found that when all spectral (timbral) content was removed, the macrotemporal structures of the sounds still afforded identification (22-46%). Another factor that contributes to confusion is context; for example, the sound of rain is often mistaken for the sound of frying. These factors must also be taken into account when designing modifications to an auditory icon. In designing auditory displays it is possible to create sonifications and auditory icons that are psychoacoustically correct and quite efficient, yet unpleasant to listen to. Sound design [19], the craft of Foley artists, and theories from acousmatic music should all be drawn upon to inform work in auditory displays and to help design aesthetics into them.
Our research into auditory interfaces aims to give users a direct experience of the activity they are engaging in, especially in 'eyes busy' situations. We draw on research into classification [20] and on our earlier work on browsing [21-23], allowing it to inform our approach as part of an activity-centric model of interaction. Our hypothesis is that users interact directly with the interface, and that mapping users' interactions to the action rather than the object better reflects their needs and understandings. We also believe that the event mapping should not be discrete but continuous. Many interactions are continuous processes, but they also evolve and change, so we must support both a set of practices and the evolution of the process. This follows both Dourish's [24] and Schön's [25] views on the "conversation with materials" out of which new forms of action and meaning emerge. By moving our focus of attention from "context" (the set of descriptive features of a soundscape or auditory scene) to "practice" (forms of engagement with those sounds), we highlight the importance of the meanings that people find in the world and of their actions, in terms of the consequences and interpretations of those actions for themselves and others. To increase our understanding as designers of interactive sonifications, we need a greater depth of empirical work under controlled conditions before this particular activity-centric model can be used in real-world applications.
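To illustrate the distinction, the sketch below (in Python; the scrolling example and parameter ranges are our own illustrative assumptions) contrasts a discrete event trigger with a continuous mapping that lets the sound evolve with the action itself:

    # Discrete mapping: one fixed sound per event, however the action unfolds.
    def discrete_feedback(event):
        return {"scroll": "click.wav", "delete": "crumple.wav"}.get(event)

    # Continuous mapping: an ongoing activity parameter (scroll velocity,
    # 0.0-1.0) drives the sound continuously, frame by frame.
    def continuous_feedback(scroll_velocity):
        v = min(max(scroll_velocity, 0.0), 1.0)
        return {"sound": "friction_loop.wav",
                "rate": 0.5 + 1.5 * v,   # faster scrolling -> faster friction
                "gain": 0.2 + 0.8 * v}   # faster scrolling -> louder friction

    print(discrete_feedback("scroll"))  # the same click for every scroll
    print(continuous_feedback(0.25))    # feedback that tracks the gesture

The continuous version gives the user a direct experience of the activity as it unfolds, which is the quality our activity-centric model aims for.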

References:
[1] M. Barra, T. Cillo, A. De Santis, U. F. Petrillo, A. Negro, and V. Scarano, "Multimodal Monitoring
of Web Servers," IEEE Multimedia, pp. 32-41, 2002.
[2] B. S. Mauney and B. N. Walker, "Creating Functional and Liveable Soundscapes for Peripheral
Monitoring of Dynamic Data," presented at International Conference on Auditory Display (ICAD-04),
Sydney, Australia, 2004.
[3] S. A. Brewster, P. C. Wright, A. J. Dix, and A. D. N. Edwards, "The sonic enhancement of
graphical buttons," presented at Interact'95, Lillehammer, Norway, 1995.
[4] M. C. Albers, "The Varese system, hybrid auditory interfaces, and satellite-ground control: Using
auditory icons and sonification in a complex, supervisory control system," presented at Second
International Conference on Auditory Display, Santa Fe, N.M., 1994.
[5] J. Cohen, "Monitoring Background Activities," in Auditory Display: Sonification, Audification
and Auditory interfaces, G. Kramer, Ed. Reading, MA, USA: Addison-Wesley Publishing Company, 1994,
pp. 499-532.
[6] W. W. Gaver and R. Smith, "Auditory icons in large-scale collaborative environments," presented
at INTERACT'90, 1990.
[7] W. W. Gaver, "Using and Creating Auditory Icons," in Auditory Display: Sonification,
Audification and Auditory interfaces, G. Kramer, Ed. Reading, MA, USA: Addison-Wesley Publishing
Company, 1994, pp. 417-446.
[8] W. W. Gaver, "How do we hear in the world? Explorations of ecological acoustics," Ecological
Psychology, vol. 5, pp. 285-313, 1993.
[9] W. W. Gaver, "What in the world do we hear? An ecological approach to auditory source
perception," Ecological Psychology, vol. 5, pp. 1-29, 1993.
[10] W. W. Gaver, "Synthesizing Auditory Icons," presented at Interchi'93, 1993.
[11] E. D. Mynatt, "Designing with auditory icons," presented at Second International Conference on
Auditory Display (ICAD '94), Santa Fe, New Mexico, 1994.
[12] J. A. Ballas and J. H. Howard, "Interpreting the language of environmental sounds," Environment
and Behavior, vol. 19, pp. 91-114, 1987.
[13] J. M. Fernström, E. Brazil, and L. Bannon, "HCI Design and Interactive Sonification for Fingers
and Ears," IEEE Multimedia, vol. 12, pp. 36-44, 2005.
[14] J. A. Ballas, "Common factors in the identification of an assortment of brief everyday sounds," J.
of Experimental Psychology: Human Perception and Performance, vol. 19, pp. 250-267, 1993.
[15] J. A. Ballas, M. J. Sliwinsky, and J. P. Harding, "Uncertainty and response time in identifying non-
speech sounds," Journal of the Acoustical Society of America, vol. 79, 1986.
[16] S. Barrass, "Auditory Information Design," PhD thesis, Australian National University, 1997.
[17] M. Gilfix and A. Couch, "Peep (The Network Auralizer): Monitoring Your Network with Sound,"
presented at LISA XIV, New Orleans, USA, 2000.
[18] B. Gygi, "Factors in the Identification of Environmental Sounds," PhD thesis, Department of
Psychology, Indiana University, 2001.
[19] D. Sonnenschein, Sound Design: The Expressive Power of Music, Voice, and Sound Effects in
Cinema, 1st ed. Studio City, CA, USA: Michael Wiese Productions, 2001.
[20] G. C. Bowker and S. L. Star, Sorting things out: Classification and its consequences. Cambridge,
MA: MIT Press, 2000.
[21] E. Brazil, M. Fernström, G. Tzanetakis, and P. Cook, "Enhancing Sonic Browsing Using Audio
Information Retrieval," presented at International Conference on Auditory Display, Kyoto, Japan, 2002.
[22] E. Brazil and J. M. Fernström, "Audio Information Browsing With The Sonic Browser," presented
at International Conference on Multiple Views in Exploratory Visualisation (CMV2003), London, UK,
2003.
[23] M. Fernström and E. Brazil, "Sonic Browsing: an Auditory Tool for Multimedia Asset
Management," presented at International Conference on Auditory Display (ICAD 2001), Espoo, Finland, 2001.
[24] P. Dourish, Where the Action Is: The Foundations of Embodied Interaction. Cambridge, MA: MIT
Press, 2001.
[25] D. A. Schön, The Reflective Practitioner: How Professionals Think in Action. New York: Basic
Books, 1983.
