
Study of a 3D audio-motor coupling with an electromagnetic motion capture device

Thomas Hoellinger1, Johanna Robertson1,2, Malika Auvray1, Agnès Roby-Brami1,2, & Sylvain Hanneton1
1 Laboratoire de Neurophysique et Physiologie, CNRS UMR 8119, Université Paris 5
2 Département de Médecine Physique et Rééducation, Hôpital Raymond Poincaré, Garches, France

INTRODUCTION

Sensory substitution devices convert information normally processed by one sensory modality (e.g., vision) into stimulation provided through another sensory modality (e.g., audition) [1]. Many studies have shown the feasibility of converting visual information into sounds [2]. However, the sensorimotor parameters involved in the learning of sensory substitution devices remain unknown.

The aim of the study reported here is twofold:

1) To investigate participants' ability to localize a source within a 3D visual-to-auditory environment and to investigate whether performance depends on the placement of the sensor (hand versus head).

2) To investigate the kinematics of the participants' head and hand while adapting to this new system.

MATERIAL & METHODS

Participants
10 blindfolded sighted participants (age range: 25-50 years) took part in this experiment. None of the participants suffered from hearing deficits (as assessed by auditory tests) or orthopaedic or neurological deficits.

Experimental Set-Up

We used an electromagnetic motion capture device (Polhemus) connected to a 3D audio rendering system (OpenAL) that provides auditory feedback of movements.
The model works with a listener (sensor) and an audio source. The sound received by the listener depends on its position and orientation with respect to the source. In this way the sound varies in intensity (Eq. 1) and in frequency (computed by the audio device). The source was a «buzzing» sound with significant harmonics between 100 and 2000 Hz.
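To make the coupling concrete, here is a minimal sketch of one update step with the OpenAL API: the listener (the "virtual ears") follows the tracked sensor, while the buzzing source stays fixed at the target. This is only an illustration, not the authors' code; read_sensor_pose is a hypothetical stand-in for the Polhemus interface (which the poster does not detail), and positions are assumed to be expressed in metres.

#include <AL/al.h>

/* Hypothetical wrapper around the motion-capture driver: fills in the sensor
   position and the "at"/"up" vectors derived from its orientation. */
extern void read_sensor_pose(float pos[3], float at[3], float up[3]);

/* One update step of the audio-motor coupling: the listener (the "virtual
   ears", attached to the hand or the head) follows the tracked sensor,
   while the buzzing source stays fixed at the target position. */
static void update_listener_from_sensor(void)
{
    float pos[3], at[3], up[3];
    read_sensor_pose(pos, at, up);

    alListener3f(AL_POSITION, pos[0], pos[1], pos[2]);

    float orientation[6] = { at[0], at[1], at[2], up[0], up[1], up[2] };
    alListenerfv(AL_ORIENTATION, orientation);
}

The fixed source itself would be created once (alGenSources, alSource3f with AL_POSITION, alSourcei with AL_LOOPING, alSourcePlay); only the listener needs to be updated at the sampling rate of the tracker.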
Task

The aim for the participants was to catch a fixed audio source with their hand. The sound was perceived by the participants through "virtual ears" (the listener) located either on their hand ("hand mode") or on their head ("head mode"). The sound received by the participants was modulated according to the position and orientation of the "virtual ears" (Fig. 1). For each mode, the experiment consisted of 3 blocks of 9 trials, giving rise to a total of 54 trials per participant.

Sound spatialisation modeling

According to the model used, the intensity decreases and the sound is slightly muffled as the listener sensor moves away from the source.
The sound attenuates according to Equation 1 (Eq. 1) below:

G(d) = Dref / (Dref + R · (d − Dref))     (Eq. 1)

where d is the distance to the source, G is the gain applied to the sound, R (= 1.3) is the roll-off factor and Dref (= 5 cm) is a "reference distance" (the distance that gives a unitary gain). The sound perceived is also influenced by the relative orientation of the listener to the source (Fig. 1), including further effects of binaural sound perception directly modeled and computed by the audio device (SoundMax integrated digital audio by Analog Devices Inc.).
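For illustration, Eq. 1 matches OpenAL's inverse-distance attenuation model, so the same behaviour can be obtained by configuring the source rather than computing the gain by hand. The sketch below assumes distances are expressed in metres (so Dref = 5 cm becomes 0.05) and uses the unclamped model, which the poster does not specify; as a numerical check, at d = 50 cm Eq. 1 gives G = 5 / (5 + 1.3 × 45) ≈ 0.08.

#include <AL/al.h>

/* Gain of Eq. 1: G(d) = Dref / (Dref + R * (d - Dref)),
   with Dref = 0.05 m (5 cm) and roll-off factor R = 1.3. */
static float source_gain(float d)
{
    const float d_ref   = 0.05f;
    const float rolloff = 1.3f;
    return d_ref / (d_ref + rolloff * (d - d_ref));
}

/* The same attenuation applied by the audio engine itself: OpenAL's
   inverse-distance model with matching reference distance and roll-off. */
static void configure_attenuation(ALuint source)
{
    alDistanceModel(AL_INVERSE_DISTANCE);
    alSourcef(source, AL_REFERENCE_DISTANCE, 0.05f);
    alSourcef(source, AL_ROLLOFF_FACTOR, 1.3f);
}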
Fig. 1: Modeling of the sound perceived by the participant relative to the mode and the orientation of the listener (located on the hand in «hand mode» and on the head in «head mode»). [Figure: listener sensor with X, Y, Z axes, shown attached to the hand and to the head.]

RESULTS

Performances

Fig. 2: The participants' level of performance was significantly higher in the "hand mode" (the less "natural" mode) than in the head mode. In addition, we observed for both modes a block effect with a significant increase in the success rate (F[2,18]=5.87, p<.01) and a significant decrease in the trial duration (F[2,18]=6.26, p<0.01), demonstrating an adaptation to the task. [Panels A and B: success rate (%) and trial duration (s) across blocks 1-3, hand mode versus head mode.]

Kinematics

Fig. 3: We observed a tendency for oscillatory movements of the hand to decrease when approaching the target. We suggest that this pattern of low-amplitude oscillations close to the target is related to the success of the strategy used to accomplish the task. [Panels A-D: right-left displacements and azimuth rotation over time (s), hand mode versus head mode; traces show hand movements, head movements, and the start of the hand movements.]
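One possible way to quantify the pattern described in Fig. 3 would be to compare the peak-to-peak right-left excursion of the hand far from versus close to the target. The helper below is only an illustrative sketch: it assumes the sampled right-left positions and the corresponding distances to the target are available as plain arrays, which is not something the poster specifies.

#include <stddef.h>

/* Peak-to-peak right-left excursion over the samples whose distance to the
   target lies inside [d_min, d_max). Returns 0 when no sample qualifies. */
static float peak_to_peak(const float *right_left, const float *dist_to_target,
                          size_t n, float d_min, float d_max)
{
    float lo = 0.0f, hi = 0.0f;
    int found = 0;
    for (size_t i = 0; i < n; ++i) {
        if (dist_to_target[i] < d_min || dist_to_target[i] >= d_max)
            continue;
        if (!found || right_left[i] < lo) lo = right_left[i];
        if (!found || right_left[i] > hi) hi = right_left[i];
        found = 1;
    }
    return found ? hi - lo : 0.0f;
}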
DISCUSSION

The study reported here revealed that auditory feedback can be used in order to guide reaching movements.
Participants' performance was higher using the hand mode than using the head mode.
The participants' performance significantly improved across trials for the two modes (hand and head modes), as assessed both by an increase in success rate and by a decrease in task duration.
Further analyses of the kinematic differences between hand and head have to be conducted in order to investigate the participants' sensorimotor strategies more precisely.
This study thus reveals that healthy participants are able to adapt to a new audio-motor environment and to acquire new sensorimotor strategies.
This possibility raises hope for the use of audio feedback for hand guidance in rehabilitation (see Robertson et al. poster).

REFERENCES

[1] Bach-y-Rita, P., & Kercel, S. W. (2003). Sensory substitution and the human-machine interface. Trends in Cognitive Sciences, 7, 541-546.

[2] Auvray, M., Hanneton, S., & O'Regan, J. K. (2007). Learning to perceive with a visuo-auditory substitution system: Localization and object recognition with The Voice. Perception, 36, 416-430.

ACKNOWLEDGEMENTS: This work was supported by a PHRC 2003 grant ("Comprendre et réduire le handicap moteur"). Agnès Roby-Brami is supported by INSERM.

Thomas_hoellinger@univ-paris5.fr
www.neurophys.biomedicale.univ-paris5.fr/neuromouv
