
Neuropsychologia 47 (2009) 17–22

Contents lists available at ScienceDirect

Neuropsychologia
journal homepage: www.elsevier.com/locate/neuropsychologia

Visual stimuli can impair auditory processing in cochlear implant users


François Champoux a,c, Franco Lepore a,b,d, Jean-Pierre Gagné a,c, Hugo Théoret a,b,d,∗

a Centre de Recherche en Neuropsychologie et Cognition, Université de Montréal, Montréal, Québec, Canada
b Département de Psychologie, Université de Montréal, Montréal, Québec, Canada
c École d’Orthophonie et d’Audiologie, Université de Montréal, Montréal, Québec, Canada
d Hôpital Ste-Justine de Montréal, Montréal, Québec, Canada

Article info

Article history:
Received 11 March 2008
Received in revised form 25 August 2008
Accepted 26 August 2008
Available online 9 September 2008

Keywords:
Cochlear implantation
Audio-visual interaction
Multisensory segregation
Auditory perception

Abstract

It has been shown that visual stimulation can activate cortical regions normally devoted to auditory processing in deaf individuals. This neural activity can persist even when audition is restored through the implantation of a cochlear implant, raising the possibility that cross-modal plasticity can be detrimental to auditory performance in cochlear implant users. To determine the influence of visual information on auditory performance after restoration of hearing in deaf individuals, the ability to segregate conflicting auditory and visual information was assessed in fourteen cochlear implant users with varied degrees of expertise and an equal number of participants with normal hearing matched for gender, age and hearing performance. An auditory speech recognition task was administered in the presence of three incongruent visual stimuli (color-shift, random-dot motion and lip movement). For proficient cochlear implant users, auditory performance was equal to that of controls in the three experimental conditions where visual stimuli were presented simultaneously with auditory information. For non-proficient cochlear implant users, performance did not differ from that of matched controls when the auditory stimulus was paired with a visual stimulus that was color-shifted. However, significant differences were observed between the non-proficient cochlear implant users and their matched controls when the accompanying visual stimuli consisted of a moving random-dot pattern or incongruent lip movements. These findings raise several questions with regard to the rehabilitation of cochlear implant users.

© 2008 Elsevier Ltd. All rights reserved.

1. Introduction

In deaf individuals, visual stimulation can activate cortical regions normally devoted to auditory processing. For example, in pre- or post-lingually deaf patients, auditory activity has been reported in response to visual presentation of sign language in the superior temporal gyrus and association auditory cortex (MacSweeney et al., 2002; Nishimura et al., 1999) and in the left planum temporale during observation of lip movement (with or without visual phonetics) (Sadato et al., 2005). Cross-modal plasticity in the auditory cortex of deaf subjects has moreover been demonstrated with the use of purely visual stimuli. Finney, Fine, and Dobkins (2001) and Finney, Clementz, Hickok, and Dobkins (2003) reported activation of primary, secondary and association auditory areas during the observation of moving dot patterns or moving sinusoidal luminance gratings in early-deafened individuals. Furthermore, the N1 component of the visually evoked event-related potential (ERP) recorded in response to motion stimuli was found to be larger and more anteriorly distributed in pre-lingually deaf participants compared to normally hearing subjects, whereas ERPs recorded in response to a color change produced no significant difference between hearing and deaf participants (Armstrong, Neville, Hillyard, & Mitchell, 2002).

In some cases, deafness can be reversed through the use of a cochlear implant (CI), which converts auditory signals into electrical impulses delivered to the auditory nerve (see Mens, 2007, for a review). Cross-modal plasticity of the kind described above, where auditory areas are recruited for visual processing, appears to be an important factor in predicting the auditory performance of CI users. Lee et al. (2001) have suggested that visual-to-auditory cross-modal plasticity is an important factor limiting hearing ability in non-proficient CI users. The level of hypometabolism detected with positron emission tomography in the temporal cortex of preoperative, pre-lingually deafened CI users is correlated with subsequent speech recognition performance (Lee et al., 2001, 2007), raising the possibility that cross-modal plasticity can be detrimental to auditory performance. This hypothesis recently received experimental support when greater cross-modal plasticity was shown in

∗ Corresponding author at: Département de Psychologie, Université de Montréal, CP 6128, Succ. Centre-Ville, Montréal, Quebec, Canada H3C 3J7. Tel.: +1 514 343 6362; fax: +1 514 343 5787. E-mail address: hugo.theoret@umontreal.ca (H. Théoret).

doi:10.1016/j.neuropsychologia.2008.08.028

Table 1
Clinical profile of CI patients

Participant Sex Age Age at onset of deafness (years) Cause of deafness Deafness duration (years) CI duration (years)

P1 F 19 0–16 (progressive) Unknown 1–16 3
P2 F 43 27–38 (progressive) Unknown 2–13 3
P3 F 54 25–52 (progressive) Unknown 1–27 2
P4 M 59 49 (sudden) Unknown 1 9
P5 M 66 0–64 (progressive) Unknown 1–64 2
P6 F 58 30–52 (progressive) Hereditary 1–22 6
P7 F 65 20–40 (progressive) Unknown 22–42 3
NP1 M 54 11–51 (progressive) Unknown 1–40 3
NP2 F 58 6–11 (progressive) Unknown 44–49 3
NP3 M 48 0–30 (progressive) Hereditary 15–45 3
NP4 F 54 5–51 (progressive) Infectious 1–46 3
NP5 F 65 0–40 (progressive) Unknown 1–60 5
NP6 F 69 0–50 (progressive) Infectious 1–66 3
NP7 M 43 4–40 (progressive) Unknown 1–36 3

P: proficient CI users; NP: non-proficient CI users.
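As a rough check on group composition, the deafness durations in Table 1 can be summarized numerically. The sketch below is illustrative only (it is not part of the original analysis): it takes the upper bound of each participant's "Deafness duration" range as the estimate, and reproduces the Discussion's observation that the mean duration of deafness is greater in the non-proficient group.

```python
# Illustrative only: estimated deafness duration (years) per participant,
# taken as the upper bound of the "Deafness duration" range in Table 1.
proficient = {"P1": 16, "P2": 13, "P3": 27, "P4": 1, "P5": 64, "P6": 22, "P7": 42}
non_proficient = {"NP1": 40, "NP2": 49, "NP3": 45, "NP4": 46,
                  "NP5": 60, "NP6": 66, "NP7": 36}

def mean_duration(group):
    """Mean of the estimated deafness durations (years) for one group."""
    return sum(group.values()) / len(group)

mean_p = mean_duration(proficient)       # ≈ 26.4 years
mean_np = mean_duration(non_proficient)  # ≈ 48.9 years
```

Using the lower bounds (or midpoints) of the ranges yields the same ordering between the two groups.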

CI users with limited speech perception abilities compared to proficient CI users and normally hearing participants (Doucet, Bergeron, Lassonde, Ferron, & Lepore, 2006). In this study, visual evoked potentials were recorded during presentation of high-contrast sinusoidal gratings forming a concentric pattern that transformed itself into a five-pointed star, giving the impression of movement from one stimulus to the next (transformational apparent movement). Comparison between CI users and control participants revealed a more anteriorly distributed P2 component in those participants (one quarter of the participants were post-lingually deafened) that were less efficient in processing auditory speech cues.

In light of the possibility that redirection of visual input to auditory cortical areas may hinder hearing performance in CI users, one may wonder how these two modalities interact during multisensory perception. This is particularly relevant considering that much of speech understanding occurs in a multisensory environment in which both visual and auditory cues are present. Given the apparent invasion of auditory cortex by visual information, it could be hypothesized that visual information interferes with auditory processing, leading to poor speech recognition in non-proficient CI users. If this were the case, a simple prediction can be made: auditory performance of non-proficient CI users will be reduced when concurrent visual stimuli are presented, whereas it will remain largely unaffected in proficient CI users. Recent findings suggest that a great proportion of CI users are able to adequately integrate congruent audio-visual information (Bergeson & Pisoni, 2004; Geers, 2004). CI users and normally hearing individuals equally benefit from seeing a speaker's articulatory movements to understand speech. Rouger, Fraysse, Deguine, and Barone (2008) also report that CI users are better at integrating congruent audio-visual information. However, the specific ability to segregate incongruent auditory and visual information has been poorly investigated. Recent studies suggest that CI users have atypical audio-visual fusion processing capabilities. In children with a CI, as opposed to normally hearing individuals, multisensory perception is dominated by vision when they are presented with McGurk-like stimuli (Rouger et al., 2007; Schorr, Fox, van Wassenhove, & Knudsen, 2005), which involve incongruent audio-visual information. The ability to segregate conflicting auditory and visual input, however, has not been studied in this population.

The aim of the present study was to determine the effect of visual stimulation on auditory performance after restoration of hearing in deaf individuals. Specifically, we wished to investigate the link between the ability to segregate conflicting auditory/visual information and auditory proficiency with the CI. An auditory speech recognition task was administered in the presence of three different incongruent visual stimuli (color-shift, random-dot motion and lip movement) in CI users and in individuals with normal hearing matched on gender, age and hearing performance. In light of the reported extensive cross-modal reorganization present in CI users with poor speech perception abilities, we predicted that auditory speech recognition would be significantly degraded in non-proficient CI users when visual stimuli were simultaneously presented.

2. Methods

2.1. Participants

Seventeen CI users (8 males) between 19 and 69 years of age, and an equal number of participants with normal hearing matched for gender and age (±5 years), participated in the study. All CI users had received their implant at least one year before they took part in the investigation. The clinical profile of each CI user is presented in Table 1. All participants suffered from profound bilateral hearing loss (pure-tone detection thresholds at 80 dB HL or greater at octave frequencies ranging from 0.5 to 4 kHz) and were post-lingually deafened. The principal communication mode in all CI users was oral/lip-reading. The ethics review boards of the Institut Raymond-Dewar as well as of the Centre Hospitalier Mère-Enfant Sainte-Justine approved the study and all the participants gave their written informed consent.

2.2. Stimuli and design

A female talker was videotaped while saying 160 consonant–vowel–consonant–vowel bi-syllabic words. The production of each stimulus word began and ended in a neutral, closed mouth position, for a total duration of about 500 ms. In the first condition (see Fig. 1), each test word was presented in an audio-only condition (video screen off). In the second condition, the monitor screen was colored green for 250 ms (pre-stimulus condition), followed by the auditorily presented test word simultaneously paired with an orange–green shift in the visual stimulus, each color being presented for 250 ms. For the third stimulus condition, static random dots were displayed on the screen for 250 ms (pre-stimulus condition), followed by a test word that was paired with the movement of the random-dot display (dot size, 0.2°; speed, 7°/s; dot density, 2.7%; dot luminance, 590 candelas/m² against a black background; percent dots moving coherently, 87%; duration, 500 ms), adapted from the stimuli described by Finney et al. (2001). In the final condition, a face appeared for 250 ms (pre-stimulus condition), followed by a test word that was paired with the video sequence of the talker uttering a different (incongruent) consonant–vowel–consonant–vowel bi-syllabic word lasting approximately 500 ms. In all cases, after visual–auditory stimulation, the pre-stimulus visual condition appeared for 250 ms. In all conditions, temporal synchrony between the visual stimulus and the auditory utterance was achieved by aligning the burst corresponding to the beginning of the test word in the auditory condition with the beginning of the visual stimulus change (color, movement or video sequence).

2.3. Procedure

Forty words were presented in each of the four experimental conditions in one block of 160 trials. The stimuli were presented in a random order using Presentation software (Neurobehavioral Systems Inc., San Pablo, CA). The auditory stimuli (bi-syllabic words) were always presented at a comfortable listening level via two loudspeakers positioned at ear level and located on each side of a 17 in. video monitor that was positioned at the participant's eye level. The participants were asked to watch the screen and listen to the talker and report what they had heard. They were

informed that when incorporated into a trial, the accompanying visual utterance of
the speaker would always be incongruent with the auditory signal and their task was
only to report the heard word. An experimenter was present in the chamber during
the entire procedure to ensure that participants were always looking at the screen
before stimulus presentation and to monitor oculomotor behavior during stimulus
presentation.
The participants assigned to the control group heard each auditory test stimulus
in the presence of a broadband noise. According to Rouger et al. (2007), this auditory
degradation permits the direct comparison of performance at equivalent ranges of
non-optimal auditory performance. To degrade the auditory performance of normal
hearing subjects, we first used a masking paradigm with white noise at different
signal-to-noise ratios. For this group of participants, prior to the actual test session,
the noise level was adjusted so that the level of performance of the participant
(in the auditory-alone condition) was the same as the level of performance of the
matched participant in the experimental group. Another list of bi-syllabic words (not
used in the experiment proper) was used to determine the signal-to-noise ratio at
which a participant would perform the experimental tasks. The noise was generated with Cool Edit Pro software (version 1.2; Syntrillium Software Corporation, San Jose, CA). During the experimental task, the noise was presented continuously via the
two loudspeakers used to present the test words. With this procedure, variations in
performance between a CI user and the paired-control subject with normal-hearing
were less than 10% in the auditory-alone condition.
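The noise-titration logic described above can be sketched as a search over signal-to-noise ratio. The sketch below is a hypothetical reconstruction (the actual adjustment was done interactively with a separate word list, not with a closed-form model): it bisects over SNR until the control participant's score falls within a tolerance of the matched CI user's auditory-alone score, assuming word recognition increases monotonically with SNR.

```python
def find_matching_snr(measure_performance, target_score,
                      lo=-10.0, hi=30.0, tol=0.5):
    """Bisect over signal-to-noise ratio (dB) until the control listener's
    word-recognition score (percent correct) is within `tol` percentage
    points of the matched CI user's auditory-alone score.
    `measure_performance(snr)` must be monotonically increasing;
    the target must be reachable within [lo, hi]."""
    while hi - lo > 0.1:
        mid = (lo + hi) / 2.0
        score = measure_performance(mid)
        if abs(score - target_score) <= tol:
            return mid
        if score < target_score:
            lo = mid  # too much noise: raise the SNR
        else:
            hi = mid  # too easy: lower the SNR
    return (lo + hi) / 2.0
```

In practice `measure_performance` would be a short behavioral test at the candidate noise level; here it simply stands in for that measurement.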
For each participant, the proportion of correct responses obtained in the auditory-alone condition was used to define each individual's performance level (i.e., the proficiency to use the CI). As determined prior to data collection, participants were included in the proficient group if their performance on the auditory task was above 75%. Moreover, the auditory-alone condition served as the reference point used to compute the percent decrement in performance obtained in each of the three experimental conditions in which a visual stimulus was presented along with the auditory stimulus (i.e., percent decrease in performance = 100 × (score in the auditory-alone condition − score in the audio-visual condition)/score in the auditory-alone condition). In analyzing the results, no attempt was made to take into account any other related variable such as the age of onset of the hearing loss, severity of the hearing loss, duration of the hearing loss, or communication mode used prior to implantation. Based on pre-experimental evaluation, we found no significant difference between non-proficient and proficient CI users in the ability to discriminate bi-syllabic words in the congruent audio-visual or visual (lip-reading) conditions (P > 0.05).

Fig. 2. Plot of auditory performance for the controls (light column) and the CI users (dark column).

3. Results

The performance level of three CI users in the auditory-alone condition was almost nil. Hence, the results obtained from these participants, as well as their matched normal-hearing subjects, were not considered in the data analyses. The CI users were divided into two groups. The individuals who obtained a performance level greater than or equivalent to 80% (n = 7) were operationally defined as proficient CI users (i.e., their scores ranged from 80 to 97.5% correct). The remaining subjects (scores ranging from 30 to 70% correct) were deemed non-proficient CI users (n = 7). Auditory performance for each group of participants is presented in Fig. 2. Variations in performance between CI users and their paired controls were less than 10%. There was no significant difference between proficient CI users and their paired controls, or between non-proficient CI users and their paired controls (P > 0.05). However, there was a significant difference in auditory performance between the proficient and non-proficient groups (P < 0.05), which was expected since groups were defined on the basis of their performance on this task.

The mean decrease in performance obtained for each experimental condition in which a visual stimulus accompanied the auditory stimulus (Fig. 1) was computed for the group of proficient CI users and the group of non-proficient CI users, as well as their respective controls (Fig. 3). To determine the ability to segregate conflicting auditory/visual information, two separate 2 × 3 mixed ANOVAs with group (control subjects, CI users) as a between-subjects factor and visual condition (color-shift, moving random-dots and lip movements) as a within-subjects factor were conducted. For the proficient performers, the analysis failed to reveal a significant difference between matched control subjects and CI users for any of the three experimental conditions (Fig. 3A). For the non-proficient performers, there were main effects of visual condition (F = 48.005, P < 0.001) and group (F = 9.331, P = 0.010). The interaction between factors was also significant (F = 6.202, P = 0.007). Post-hoc analysis failed to reveal a significant difference (t = −0.10, P = 0.924) between the performance of the non-proficient CI users and that of their matched controls when the auditory stimulus was paired with a visual stimulus
Fig. 1. Illustration of the experimental procedure. Each condition began (A) and ended (C) in a static neutral position. In all audio-visual conditions (B), auditory stimuli (D)
were simultaneously presented with a visual stimulus change (color, movement or video sequence).
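The percent-decrement measure defined in the Methods can be written compactly. The sketch below (function names are ours, not from the original analysis) implements that formula together with the pre-defined proficiency criterion, with scores expressed as percent correct.

```python
def percent_decrease(auditory_alone, audio_visual):
    """Percent decrement relative to the auditory-alone reference:
    100 * (auditory_alone - audio_visual) / auditory_alone."""
    return 100.0 * (auditory_alone - audio_visual) / auditory_alone

def is_proficient(auditory_alone_score, criterion=75.0):
    """Participants scoring above the criterion (75% correct) in the
    auditory-alone condition were classed as proficient CI users."""
    return auditory_alone_score > criterion

# Example: a drop from 80% to 60% correct is a 25% decrement.
decrement = percent_decrease(80.0, 60.0)  # 25.0
```

Note that the decrement is expressed relative to each participant's own auditory-alone score, so participants with different baseline levels can be compared on the same scale.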
consisting of a color-shift (Fig. 3B). However, significant differences were observed between the two groups of subjects when the accompanying visual stimuli consisted of moving random-dot patterns (t = 2.653, P = 0.038) and incongruent lip movements (t = 2.60, P = 0.040).

The performance level of every control participant and CI user is plotted along the x-axis in Fig. 4. The percent decrement in performance calculated for each participant (compared to the auditory-alone condition), in each of the three experimental conditions, is presented on the y-axis of each panel. For the control participants, there was no correlation between the performance level and the three audio-visual conditions (P > 0.05). For the CI users, when the accompanying visual stimulus consisted of the color-shifted stimulus, there was no correlation (r = 0.001, P = 0.998) between the performance level obtained when the visual stimulus was presented and the participant's proficiency to use the CI. However, when the accompanying visual stimulus consisted of a moving random-dot pattern, the correlation between the decrease in performance for that condition and the proficiency to use the CI was statistically significant (r = −0.56, P = 0.038). Similarly, when the accompanying visual stimulus consisted of a non-congruent visual speech utterance, a strong and statistically significant correlation between the decrement in performance and the participant's proficiency to use the CI was found (r = −0.92, P < 0.001).

Inspection of the individual data revealed the existence of an outlier in the control group during the lip motion condition. The omission of this data point did not change the result of the statistical analysis (P was still > 0.05 for the group and interaction effects when comparing proficient controls and CI users). A significant correlation between auditory proficiency and decrement in performance emerges in the controls when this case is removed (r = −0.58, P < 0.05).

Fig. 3. Plot of performance decrease for each audio-visual experimental condition for proficient (A) and non-proficient (B) CI users. The decrements in performance are plotted for three different audio-visual conditions.

Fig. 4. Percent decrease in performance for each accompanying visual condition (y-axis) as a function of the individual performance level (x-axis) for control-paired participants (A: light squares) and CI users (B: dark squares). Separate plots are presented for each audio-visual condition under investigation. Larger symbols reflect two participants with the same x and y values. Participants were included in the proficient group if their performance on the auditory task was above 75% (vertical dotted line). (×) The outlier in the control group during the lip motion condition. The horizontal dotted line represents the correlation between auditory proficiency and decrement in performance when the outlier is removed.

4. Discussion

The present study showed that the presentation of visual stimuli significantly impairs concurrent auditory word recognition in non-proficient CI users, whereas it remains unchanged in proficient CI users and normally hearing subjects. This effect occurred whether or not linguistic cues were present in the visual stimuli, and despite the fact that every CI user was control-matched on auditory performance by adding broadband noise to the auditory stimuli. Importantly, the presence of a color-shifted visual stimulus did not modify word recognition performance in any of the groups.

Proficient and non-proficient CI users were selected on the basis of auditory performance. We found no significant difference

between groups, however, on congruent audio-visual and unimodal visual conditions. This is consistent with previous work showing that CI users have enhanced speechreading skills, which are preserved several years after implantation (Rouger et al., 2007). This enhanced ability enables most CI users to achieve a good comprehension level when auditory and visual cues are available. Accordingly, our data suggest that when the auditory signal is degraded, non-proficient CI users use the additional visual cues to increase their speech understanding to levels comparable to those of proficient CI users.

Degraded auditory input associated with CI use is also unlikely to explain the significant reduction in word recognition ability, since every CI user was carefully matched to a control participant on precisely the task that was used to evaluate visual effects on hearing performance. Predictably, even in "proficient" controls, performance in the lip motion condition decreases by approximately 20%. It has been claimed repeatedly that ambiguity from weak sensory input can be compensated for by a second sensory system, a principle known as "inverse effectiveness" (see Meredith & Stein, 1986). For example, stronger visual input resulting from a speaker's articulatory movement provides important complementary information in noisy environments. This perceptual enhancement can occur in normally hearing individuals even at a small signal-to-noise ratio (Ross, Saint-Amour, Leavitt, Javitt, & Foxe, 2007). Since increasing background noise decreases performance in the control group, these results suggest that normally hearing participants may be influenced by the presence of a speaker's articulatory movements in an auditory discrimination task, although not as much as CI users. However, the fact that CI users were differently influenced by the presence of visual input compared to their matched controls argues against a simple effect of auditory performance.

The pattern of results across tasks and groups could be explained by a variety of factors. It should be pointed out, for example, that the three incongruent tasks more than likely did not require the same attentional resources. Indeed, the dot motion and lip motion tasks are both more salient and more complex than the color shift, as well as being longer in duration, making these stimuli more likely to capture attention. It may also be argued that cortical reorganization secondary to loss of auditory input (Armstrong et al., 2002; Finney et al., 2001, 2003; Nishimura et al., 1999; Petitto et al., 2000; Sadato et al., 2005) is a determining factor in subsequent auditory performance with the CI and in the cross-modal effects on behavior reported here. Specifically, in line with the idea that the level of cross-modal plasticity prior to cochlear implantation can partly explain the ability to process the re-introduced modality (Doucet et al., 2006; Lee et al., 2001, 2007), our data suggest that the partial takeover of auditory cortex by visual input renders non-proficient CI users more vulnerable to behaviorally detrimental sensory interactions in auditory cortex. Imaging studies are needed to confirm this hypothesis by showing that hearing performance impairments are correlated with the degree to which incongruent visual information activates auditory cortex.

A question that arises from our data is what specific property of the visual stimulus conflicts with the auditory input in non-proficient CI users. Had an effect only been observed during the lip movement condition, reliance on visual cues for understanding speech prior to implantation and the well-known effect of visual speech on cortical activity (Lee et al., 2001, 2007) would have been the parsimonious explanation of the pattern of responses found here. However, the fact that a simple moving stimulus devoid of linguistic information was sufficient to elicit cross-modal conflict suggests a more complex explanation. Using moving dot patterns similar to those used in the present study, Finney et al. (2001) showed right-hemisphere-dominant visually induced auditory cortex activation in deaf individuals. The authors argued that the specialization of the right auditory cortex for processing motion (Baumgart, Gaschler-Markefski, Woldorff, Heinze, & Scheich, 1999) is somehow co-opted by vision following auditory deprivation (Finney et al., 2001), a phenomenon that finds an echo in the blind (Weeks et al., 2000). The motion-processing dorsal pathway of deaf individuals appears to be more prone to plastic changes than the ventral pathway (Armstrong et al., 2002; Mitchell & Maslin, 2007). This suggests that motion elements in the two types of visual stimuli (both were dynamic and changed position in space as a function of time) may have significantly contributed to the reduced word recognition ability in non-proficient CI users. This also supports our hypothesis that cross-modal invasion of auditory areas partly explains the drop in performance, since we found no word recognition impairment when CI users were presented with chromatic, ventral pathway-sensitive stimuli that do not appear to produce cross-modal plasticity in the auditory cortex of deaf individuals (Armstrong et al., 2002; Mitchell & Maslin, 2007). Many stimulus features varied across the three tasks used in the current study, such as the degree of saliency and complexity of the stimulus as well as the amount of visual change present in the array. The impact of specific stimulus parameters on performance remains to be fully investigated, but further studies should provide important cues as to what property of a visual stimulus impairs auditory perception in CI users.

It must be pointed out that numerous elements may interact with the level of cross-modal plasticity in defining the level of performance achieved with a CI and the vulnerability to behavioral cross-modal conflict. For example, it is generally recognized that post-lingually deafened individuals are excellent candidates for a CI (see Giraud, Truy, & Frackowiak, 2001). Also, it has been shown that auditory stimuli activate the primary auditory cortex in pre-lingually deafened people, while they activate both the primary and secondary auditory cortices in individuals who are post-lingually deafened (Naito et al., 1997). A direct relationship has also been reported between post-implantation auditory word recognition scores and the duration of deafness (Lee et al., 2001). Moreover, cross-modal plasticity may be influenced by the communication strategies (i.e., familiarity with lipreading or sign language) used before implantation. In the present investigation, the onset of hearing loss was before the age of six in 6 of the 7 participants in the non-proficient group, compared to only two in the proficient group. Furthermore, the mean duration of deafness, estimated from the age of onset, was greater in non-proficient CI users. This is consistent with the notion that early auditory deprivation increases the level of cross-modal plasticity in deaf individuals (Giraud et al., 2001; Lee et al., 2001) and that redirection of visual input to auditory cortical areas may hinder hearing performance in CI users (Doucet et al., 2006). In the current study, such variables were not entered into the study design or data analysis. Rather, we wished to determine how a single functional end-point that bears significant importance in the daily activities of CI users (auditory word recognition ability) was influenced by vision. Nevertheless, the present findings raise several questions related to the rehabilitation of deaf individuals who are fitted with electronic devices to improve their auditory perception. Often, professionals, parents of children with a profound hearing loss, and individuals with hearing loss who are candidates for a CI inquire whether individuals with hearing loss should be encouraged to make use of visual cues to communicate (e.g., lipreading, cued speech, sign language). In order to achieve the best level of speech comprehension, optimal rehabilitation programs should be individualized. For example, a speech-perception training program might be adapted according to the nature of the brain reorganization that occurs following the hearing loss. An individual for whom a substantial amount of visual information is processed at the level of the auditory cortices might benefit from a training that
differs from an individual for whom the auditory cortices are not (or not substantially) involved in the processing of visual information. In the future, clinical neuroimaging, coupled with behavioral investigations such as the one presented here, might provide insight into the type of rehabilitation strategy (and perhaps even the type of communication mode) that is best suited for an individual with significant hearing loss.

Acknowledgments

This study was supported by grants from the NSERC and CIHR. The authors wish to thank the Institut Raymond-Dewar for help in recruiting participants. We also thank Corinne Tremblay for her help in collecting data.

References

Armstrong, B. A., Neville, H. J., Hillyard, S. A., & Mitchell, T. V. (2002). Auditory deprivation affects processing of motion, but not color. Brain Research Cognitive Brain Research, 14, 422–434.
Baumgart, F., Gaschler-Markefski, B., Woldorff, M. G., Heinze, H. J., & Scheich, H. (1999). A movement-sensitive area in auditory cortex. Nature, 400, 724–726.
Bergeson, T. R., & Pisoni, D. B. (2004). Audiovisual speech perception in deaf adults
Lee, D. S., Lee, J. S., Oh, S. H., Kim, S. K., Kim, J. W., Chung, J. K., et al. (2001). Cross-modal plasticity and cochlear implants. Nature, 409, 149–150.
Lee, H. J., Giraud, A. L., Kang, E., Oh, S. H., Kang, H., Kim, C. S., et al. (2007). Cortical activity at rest predicts cochlear implantation outcome. Cerebral Cortex, 17, 909–917.
MacSweeney, M., Woll, B., Campbell, R., McGuire, P. K., David, A. S., Williams, S. C., et al. (2002). Neural systems underlying British Sign Language and audio-visual English processing in native users. Brain, 125, 1583–1593.
Mens, L. H. (2007). Advances in cochlear implant telemetry: Evoked neural responses, electrical field imaging, and technical integrity. Trends in Amplification, 11, 143–159.
Meredith, M. A., & Stein, B. E. (1986). Spatial factors determine the activity of multisensory neurons in cat superior colliculus. Brain Research, 365, 350–354.
Mitchell, T. V., & Maslin, M. T. (2007). How vision matters for individuals with hearing loss. International Journal of Audiology, 46, 500–511.
Naito, Y., Hirano, S., Honjo, I., Okazawa, H., Ishizu, K., Takahashi, H., et al. (1997). Sound-induced activation of auditory cortices in cochlear implant users with post- and prelingual deafness demonstrated by positron emission tomography. Acta Oto-Laryngologica, 117, 490–496.
Nishimura, H., Hashikawa, K., Doi, K., Iwaki, T., Watanabe, Y., Kusuoka, H., et al. (1999). Sign language ‘heard’ in the auditory cortex. Nature, 397, 116.
Petitto, L., Zatorre, R., Gauna, K., Nikelski, E., Dostie, D., & Evans, A. (2000). Speech-like cerebral activity in profoundly deaf people processing signed languages: Implications for the neural basis of human language. Proceedings of the National Academy of Sciences, 97, 13961–13966.
Ross, L. A., Saint-Amour, D., Leavitt, V. M., Javitt, D. C., & Foxe, J. J. (2007). Do you see what I am saying? Exploring visual enhancement of speech comprehension in noisy environments. Cerebral Cortex, 17, 1147–1153.
and children following cochlear implantation. In G. Calvert, C. Spence, & B. E. Rouger, J., Lagleyre, S., Fraysse, B., Deneve, S., Deguine, O., & Barone, P. (2007). Evi-
Stein (Eds.), Handbook of multisensory processes (pp. 749–772). Cambridge: MIT dence that cochlear-implanted deaf patients are better multisensory integrators.
Press. The Proceedings of the National Academy of Sciences, 104, 7295–7300.
Doucet, M. E., Bergeron, F., Lassonde, M., Ferron, P., & Lepore, F. (2006). Cross-modal Rouger, J., Fraysse, B., Deguine, O., & Barone, P. (2008). McGurk effects in cochlear-
reorganization and speech perception in cochlear implant users. Brain, 129, implanted deaf subjects. Brain Research, 1188, 87–99.
3376–3383. Sadato, N., Okada, T., Honda, M., Matsuki, K.-I., Yoshida, M., Kashikura, K.-I., et al.
Finney, E. M., Fine, I., & Dobkins, K. R. (2001). Visual stimuli activate auditory cortex (2005). Cross-modal integration and plastic changes revealed by lip movement,
in the deaf. Nature Neuroscience, 4, 1171–1173. random-dot motion and sing languages in the hearing and the deaf. Cerebral
Finney, E. M., Clementz, B. A., Hickok, G., & Dobkins, K. R. (2003). Visual stimuli Cortex, 15, 1113–1122.
activate auditory cortex in deaf subjects: Evidence from MEG. NeuroReport, 14, Schorr, E. A., Fox, N. A., van Wassenhove, V., & Knudsen, E. I. (2005). Auditory-visual
1425–1427. fusion in speech perception in children with cochlear implants. The Proceedings
Geers, A. E. (2004). Speech, language, and reading skills after early cochlear implan- of the National Academy of Sciences, 102, 18748–18750.
tation. Archives of Otolaryngology-Head & Neck Surgery, 130, 634–638. Weeks, R., Horwitz, B., Aziz-Sultan, A., Tian, B., Wessinger, C. M., Cohen, L. G., et al.
Giraud, A. L., Truy, E., & Frackowiak, R. (2001). Imaging plasticity in cochlear implant (2000). A positron emission tomographic study of auditory localization in the
patients. Audiology & Neurotology, 6, 381–393. congenitally blind. The Journal of Neuroscience, 20, 2664–2672.