DOCTOR OF THE UNIVERSITY OF BORDEAUX
Defended on 19/05/2016
Members of the jury:

To my daughter, Valentine
Research unit
Institut de Neurosciences Cognitives et Intégratives d'Aquitaine, UMR CNRS 5287, Université de Bordeaux, Zone nord, Bât. 3A, 3e étage, 146 rue Léo Saignat, 33076 Bordeaux cedex, France
Title:
Auditory sensitivity to harmonicity, in the presence or absence of cochlear deficit.
Abstract:
The auditory system fuses into one percept simultaneous pure tones whose frequencies
are harmonically related, even if these pure tones are perfectly resolved by the cochlea.
This harmonic fusion contributes to auditory scene analysis requiring segregation of
simultaneous harmonic complex tones. Two studies reported here aimed to clarify its
mechanism in normal-hearing listeners. The first study measured the discrimination of
frequency ratios close to the octave (2:1) for simultaneous or consecutive pure tones. The
results show that the simultaneous octave is recognized by a mechanism which is
insensitive to the direction of deviations from the octave, whereas this is not the case for
the sequential octave. A second study explored this difference between the simultaneous
and sequential octaves by means of subjective judgments on the degree of fusion or the
perceptual affinity between pure tones presented simultaneously or sequentially. The
results indicate that harmonic fusion and pitch affinity are not directly related phenomena.
The third study measured the detection of a change in frequency ratio for simultaneous
pure tones in listeners with normal hearing or mild to moderate cochlear hearing loss.
The results indicate that sensitivity to harmonicity is a robust phenomenon, sometimes
altered by cochlear lesions but resistant to severe deficits in frequency discrimination.
However, while inharmonicity detection is asymmetric in normal-hearing listeners, it
becomes symmetric in the presence of cochlear hearing loss, suggesting that the
underlying mechanism is different in the latter case.
Keywords:
Harmonic fusion, octave, cochlear hearing loss, frequency ratio.
CONTENTS

1 FOREWORD AND ACKNOWLEDGMENTS
2 PITCH PERCEPTION AND HARMONIC FUSION IN NORMAL-HEARING LISTENERS
2.1 Processing of a harmonic complex tone by the peripheral auditory system
2.2 The interest of studying harmonic fusion
2.3 Psychoacoustic data
2.3.1 Perceptual consequences of removing components from a complex tone
2.3.2 Perceptual consequences of shifting one or more components of a complex tone
2.4 The major models of pitch perception and harmonic fusion
2.4.1 Potential cues for pitch coding
2.4.2 Pattern-recognition models
2.4.3 Autocorrelation models
2.5 Neurophysiological data
2.5.1 Auditory nerve, cochlear nucleus and inferior colliculus
2.5.2 Cortex
2.5.3 Summary of current neurophysiological data
3 ARTICLE « AUDITORY DISCRIMINATION OF FREQUENCY RATIOS: THE OCTAVE SINGULARITY »
3.1 Introduction
3.2 Experiment 1
3.2.1 Method
3.2.2 Results
3.2.3 Discussion
3.3 Experiment 2
3.3.1 Rationale
3.3.2 Method
3.3.3 Results and discussion
3.4 General Discussion
3.4.1 How can the octave be recognized as a simple frequency ratio?
3.4.2 Harmonic versus melodic octaves: Is there a perceptual link?
4 ARTICLE « HARMONIC FUSION AND PITCH AFFINITY: IS THERE A DIRECT LINK? »
4.1 Introduction
4.2 Method
4.2.1 Participants
4.2.2 Conditions and stimuli
4.2.3 Procedure
4.3 Results
4.4 Discussion
5 SENSORINEURAL HEARING LOSS AND PITCH PERCEPTION
5.1 Classification and clinical manifestations of hearing loss
5.2 Pitch perception of complex tones in the presence of sensorineural hearing loss
5.2.1 Degradation of frequency selectivity
5.2.2 Sensitivity to temporal fine structure
6 ARTICLE « THE EFFECT OF COCHLEAR DAMAGE ON THE SENSITIVITY TO HARMONICITY »
6.1 Introduction
6.2 Materials and methods
6.2.1 Stimuli, task, and experimental conditions
6.2.2 Procedure
6.2.3 Subjects
6.3 Results
6.4 Discussion
7 GENERAL CONCLUSION
8 BIBLIOGRAPHY
1 FOREWORD AND ACKNOWLEDGMENTS

This thesis is the fruit of more than three years of work carried out within the
research team, which I subsequently pursued for more than two years alongside my
in the department of Pr. Vincent Darrouzet. I particularly wish to thank my thesis supervisor, Laurent Demany, for his involvement in this work and his immense availability at my side despite the heavy constraints imposed by my activity
between patient care and research, and greatly facilitated our work on patients with hearing loss. The hearing-impaired subjects who took part in our study are patients regularly seen in our department's clinic, which fosters a strong commitment on their part and a privileged relationship with them. I thank Pr. Vincent Darrouzet and Pr. René Dauman, who made possible this close collaboration between the laboratory
work, and new projects are already planned in continuation of our work on

The experiments we conducted gave rise to three articles. The first was published in 2013 in Journal of Experimental Psychology: Human Perception and Performance. The second has just been published in Hearing Research, and the manuscript of the last article reported in this thesis was recently submitted for publication. We chose to present in this thesis the three original manuscripts in English in their entirety (except for the abstracts); the sections

I sincerely thank Pr. Christian Lorenzi, with whom we were able to exchange views constructively on this work on several occasions and who agreed to be
agreed to assess this work as a rapporteur, and Dr. Olivier Macherey, who will bring
2 PITCH PERCEPTION AND HARMONIC FUSION IN NORMAL-HEARING LISTENERS

complex but periodic. This is the case, for example, of the vowels of speech and of the sounds produced by most musical instruments. A periodic complex tone can be constructed, on the mathematical principle of a Fourier transform, by
f0) or at integer multiples of the fundamental frequency (2*f0, 3*f0, etc.). If the components of a complex tone are not integer multiples of f0, the complex tone

This formal definition must nevertheless be qualified, as it is not perceptually valid for very low f0s and/or for very incomplete harmonic series. For example, a tone containing the components 100, 210 and 320 Hz is perceived as inharmonic, because one may roughly consider that the auditory system detects that the components of ranks 2 and 3 are not integer multiples of 100 Hz. Yet this series could be conceived as the components of ranks 10, 21 and 32 of an f0 of 10 Hz, and it would then remain harmonic according to the definition given above. But in that case, on the one hand, the f0 (10 Hz) lies outside the human hearing range, and on the other hand the

how the pitch of complex tones can be determined, it is essential to picture the first stages of their processing by the peripheral auditory system.

440 Hz, 880 Hz, 1320 Hz, …, 7480 Hz, 7920 Hz. This complex tone constitutes the input signal for the peripheral auditory system. It can be represented in two ways: (1) as a variation of pressure over time, which amounts to a temporal representation of the sound wave and brings out the period equal to 1/f0, i.e., about 2.3 ms (Figure 1A); (2) by the set of its frequency components, which amounts to
suited to the peripheral auditory system, one of whose major properties is the
spectral components of a complex tone. The cochlea can be viewed as a juxtaposition of band-pass filters whose absolute bandwidth (in Hz) increases with center frequency (Figure 1C). Low-rank harmonics are thus said to be resolved, that is, individualized
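The growth of auditory-filter bandwidth with center frequency, and its consequence for harmonic resolvability, can be sketched numerically. The short Python illustration below uses the Glasberg & Moore (1990) ERB formula; the resolvability criterion (neighboring harmonics spaced by more than one ERB) is a deliberately rough simplification for illustration, not the analysis used in this thesis:

```python
# Sketch: which harmonics of a 440-Hz complex tone come out "resolved"?
# ERB formula from Glasberg & Moore (1990); the one-ERB-spacing
# criterion below is a rough illustrative simplification.

def erb(f_hz):
    """Equivalent rectangular bandwidth (Hz) of the auditory filter at f_hz."""
    return 24.7 * (4.37 * f_hz / 1000.0 + 1.0)

def is_resolved(rank, f0):
    """Crude criterion: harmonic `rank` is resolved if the spacing between
    neighboring harmonics (= f0) exceeds the ERB at its frequency."""
    return f0 > erb(rank * f0)

f0 = 440.0
status = {n: is_resolved(n, f0) for n in range(1, 19)}
# Low ranks come out resolved and high ranks unresolved, with a
# transition around rank 8, consistent with Figure 1D.
```

With this criterion the transition falls at rank 9 for a 440-Hz f0; the exact crossover depends on the criterion chosen, but the qualitative picture (resolved low ranks, unresolved high ranks) is the one the text describes.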
Figure 1. Representations of a harmonic complex tone (with an f0 of 440 Hz, composed of the first 18 harmonics) and of its processing by the peripheral auditory system. (A) Temporal representation of the sound wave, showing the fundamental period of 2.3 ms (the reciprocal of 440 Hz). (B) Spectral representation of the stimulus, showing the 18 harmonics ranging from 440 Hz to 7920 Hz. (C) Bank of band-pass filters illustrating the cochlear filtering mechanism. (D) Excitation pattern expressing the mean excitation level at the output of the auditory filters as a function of characteristic frequency. Low-rank harmonics are resolved, being clearly identifiable as a salient peak in the excitation pattern; by contrast, unresolved high-rank harmonics can no longer be individualized. (E) Representation of the basilar-membrane vibration at the output of four auditory filters. The two left-hand curves correspond to the filters centered on the resolved harmonics of ranks 1 and 4; the basilar membrane is stimulated by a sinusoidal wave. The two right-hand curves correspond to the filters centered on the 8th and 12th harmonics, respectively, which are unresolved. In this case, the basilar membrane is stimulated by a complex wave resulting from the combination of several successive harmonics interacting within the same auditory filter. After Oxenham (2005).
The signal obtained at the filter outputs can be analyzed in the form of a pattern
unresolved. Whether or not the harmonics are resolved has a direct consequence on the shape of the wave that stimulates the basilar membrane in the region considered. Figure
in this case that the portion of the basilar membrane centered on the characteristic frequency, 440 Hz for harmonic 1 (or 1760 Hz for harmonic 4), is stimulated by a sinusoidal wave at that frequency, the adjacent components not passing through the same filter. By contrast, when the harmonics are not resolved, as for the two right-hand curves, the corresponding region of the basilar membrane is stimulated by a
adjacent components passing through the filter, these components being integer multiples of the frequency f0.
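The periodicity that survives in such an unresolved region can be checked with a short numerical sketch (Python with NumPy; the harmonic ranks 8–12 and the 200-Hz f0 are illustrative choices, not stimuli from this thesis): summing several consecutive harmonics yields a complex wave whose envelope peaks repeat at the fundamental period 1/f0.

```python
import numpy as np

# Sum of consecutive harmonics 8..12 of f0 = 200 Hz, in cosine phase:
# components this closely spaced fall within one auditory filter at high
# frequencies, and their sum beats at the fundamental period 1/f0.
f0 = 200.0
t = np.arange(0.0, 0.02, 1.0 / 48000.0)          # 20 ms sampled at 48 kHz
x = sum(np.cos(2 * np.pi * n * f0 * t) for n in range(8, 13))

# The waveform reaches its maximum whenever all five cosines align,
# i.e. once per fundamental period (every 5 ms), even though no
# component at 200 Hz is physically present in the spectrum.
search = (t > 0.001) & (t < 0.009)               # look away from t = 0
t_peak = t[search][np.argmax(x[search])]
print(round(t_peak * 1000, 2))                   # ~5.0 ms = 1/f0
```

This is the envelope cue carried by unresolved harmonics that the temporal models discussed below exploit.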
spectral decomposition of complex tones. Yet a harmonic tone is perceived by a normal-hearing listener as a coherent entity with a single pitch (f0), emitted by a single source (Bregman & Doehring, 1984), and not as a juxtaposition of several pure tones. The auditory system appears able to identify that frequency components are harmonically related to one another and to fuse them into a single sound (Kalluri, Depireux, & Shamma, 2008). This concept is classically called harmonic fusion. In the case of periodic complex tones,
a single auditory unit (i.e., harmonic fusion proper), and the attribution of a
pitch).

the different sound sources in the presence of background noise or in multi-talker situations (Carlyon & Gockel, 2008), and subjects with sensorineural hearing loss are
in a multi-talker conversation, for example, the auditory system must often process two
the perceptual grouping of the components of each source may rely largely on harmonic fusion. Work on the segregation of double vowels
several arguments suggest that it is difficult to isolate the role played by harmonicity from the other cues in this type of paradigm. First of all, subjects
is identical (Culling & Darwin, 1993). Moreover, segregation between simultaneous vowels is better in the presence of an f0 difference, even a small one (less than a
semitone), even though subjects hear no pitch difference between the vowels for f0 differences smaller than 4 semitones (de Cheveigné, Kawahara, Tsuzaki, & Aikawa, 1997; Assmann & Paschall, 1998). If, as is generally thought, sensitivity to
seem to indicate that other cues can be used to segregate the vowels under this type of experimental condition (Chintanpalli & Heinz, 2013). For example, when the f0s of the two vowels are close, there are interactions between the components
1994).

harmonicity is not the only usable cue for segregating two complex tones
2012). A recent study looked specifically at the role played by harmonicity
Kawahara, & Ellis, 2013). The authors manipulated the degree of harmonicity of the speech signal in isolation, without altering its other characteristics. Their results confirm that segregation is harder when the speech is made inharmonic, even if the
is difficult with ecological stimuli, which are by definition relatively complex. In this work we chose simpler stimuli, composed of three or four pure tones at octave intervals (and therefore normally perfectly resolved by a healthy cochlea when presented simultaneously), so as to minimize the coexistence of cues of
to harmonicity through the perception of the pitch of complex tones, and these data are therefore of great interest for studying harmonic fusion indirectly.
The mechanisms underlying harmonic fusion can be approached through the study of stimuli whose structure is altered relative to a strict harmonic series.
lowest-frequency component (f0) of a harmonic series, the pitch perceived by a normal-hearing listener remains f0 (Ohm, 1843; Schouten, 1940). This is true even in the presence of
(Licklider, 1954; Patterson, 1969). These data indicate that the pitch perception of harmonic tones does not rest on the presence of excitation in the region of the

In fact, several low-rank harmonics and several high-rank harmonics can be removed without altering the overall perception of pitch. This has led many authors to speak of virtual pitch for complex tones, illustrating the fact that
frequency component physically present, as opposed to the spectral pitch of pure tones. It suffices to retain a few resolved harmonics for the pitch of the complex tone to be clearly perceived and to remain equal to f0; the listener will perceive this spectral modification
those between the 2nd and 5th ranks seem to play a predominant role in the
of the harmonic complex (Ritsma, 1962; Plomp, 1967; Moore, Glasberg, & Peters, 1985). More recently, the work of Dai (2000) qualified this notion by indicating that the
whatever their rank within the complex. For example, for a complex tone with an f0 of 100 Hz, the sixth harmonic was dominant, whereas for a complex tone with an f0 of 200 Hz, the third harmonic was dominant. In any case, it is clear that
complex tones. When only unresolved harmonics are present, the perceived pitch is more ambiguous and less salient, reflecting the lack of precision
harmonics fuse with one another; but how will a complex tone be perceived when one of its components is slightly shifted, that is, when its frequency is somewhat higher or lower than its "correct" harmonic value? Moore, Peters & Glasberg (1985) measured the minimum shift of a component of a complex tone needed for a listener to detect an inharmonicity.
For low-rank harmonics (up to ranks 4 to 6), this value is on average equal to
shifted component seems to be "excluded" from the rest of the fused complex and forms a distinct auditory object. For higher-rank harmonics, which are less well resolved, the interaction between components within a single auditory filter produces beats that subjects can detect as soon as a component is slightly shifted relative to
(Plomp, 1967; Viemeister, Rickert, & Stellmack, 2001) is a crucial point in any study of harmonicity: it allows a subject to distinguish a harmonic tone from an inharmonic one without using the harmonic-fusion cue as such. It is therefore essential to eliminate these beats in order to assess in isolation
respective frequencies f1 and f2, separated by an interval close to an octave. Even if these
that there exists on the basilar membrane a region where the two excitation patterns (PE1 and PE2, respectively) overlap, in other words an interaction zone (ZI) between adjacent patterns, which responds to two pure tones simultaneously. When the interval is harmonic (in our example, exactly equal to an octave), the wave resulting from the combination of the two pure tones is a periodic complex wave (with period 1/f1), which produces no sensation of beating (Figure 2A). By contrast, when the interval between f1 and f2 is inharmonic (in our example, slightly larger than an octave), the overlap region is stimulated by a complex wave whose period is much longer (Figure 2B). A fluctuation appears in the temporal fine structure of the wave, linked to a continuous phase shift between the two components, and

pitch of the complex tone as a whole. For low-rank harmonics, the pitch of the complex varies in the same direction as, and in proportion to, the shift
Glasberg, & Peters, 1985); beyond that, the component no longer has a significant impact on the pitch of the complex. This seems to indicate that a component can still contribute to the pitch of a complex tone even when the auditory system regards it as inharmonic (the inharmonicity-detection threshold being roughly equal to 1.1%).
two distinct perceptual mechanisms, even though they are intertwined; the one underlying harmonic fusion seems to be less tolerant of small deviations from harmonicity than

Extending this idea of a dissociation between the two mechanisms, Roberts & Brunstrom (2000) advanced the hypothesis that the fusion mechanism is mainly based on the detection of a regularity in the spectrum, and would operate even in the absence of
would rest above all on harmonicity, while tolerating deviations of some components from their perfectly harmonic values. Starting from the principle that the more perceptually fused a complex tone is, the easier it is to detect a shifted component within it, Roberts & Brunstrom (2001) assessed the degree of fusion of two types of
The first correspond to a harmonic series whose spectrum has been globally "shifted" (for example, a harmonic series with an f0 of 200 Hz becomes, after a 25% shift: 250, 450, 650, 850 Hz, etc.). The second correspond to a harmonic series whose spectrum has been "stretched", that is, in which the spacing between two consecutive components grows progressively with component rank (for example, a harmonic series with an f0 of 100 Hz becomes, after 3% stretching: 100, 203, 309, 418 Hz, etc.). Their data indicate that a tone with a shifted spectrum remains surprisingly fused (though to a lesser degree than a harmonic tone), even for values
to 3%) does not lead to a perceptual separation of the components. These results are consistent with the idea that the auditory system is sensitive to a certain spectral regularity between adjacent components, and that the overall strength of fusion of a tone
Roberts & Brunstrom (2003) nevertheless specified, in a more recent study, that this grouping mechanism based on spectral regularity does not hold for the first component (the lowest-frequency component) of a complex tone
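The two manipulations can be reproduced exactly from the numerical examples given above. Here is a minimal Python sketch (the function names are illustrative, not taken from Roberts & Brunstrom):

```python
# Spectrally shifted vs. stretched series, matching the numerical
# examples in the text (Roberts & Brunstrom, 2001). Function names
# are illustrative, not from the original study.

def shifted_series(f0, shift, n_comp):
    """Harmonic series of f0 translated upward by `shift` * f0."""
    return [n * f0 + shift * f0 for n in range(1, n_comp + 1)]

def stretched_series(f0, stretch, n_comp):
    """Series in which each successive inter-component spacing grows
    by a factor (1 + stretch) relative to the previous one."""
    comps = [f0]
    for n in range(1, n_comp):
        comps.append(comps[-1] + f0 * (1 + stretch) ** n)
    return comps

# 200-Hz series shifted by 25%  -> 250, 450, 650, 850 Hz, ...
# 100-Hz series stretched by 3% -> 100, 203, 309, 418 Hz, ...
print(shifted_series(200.0, 0.25, 4))                        # [250.0, 450.0, 650.0, 850.0]
print([round(c) for c in stretched_series(100.0, 0.03, 4)])  # [100, 203, 309, 418]
```

Note the key structural difference: shifting preserves equal spacing between components (spectral regularity) while destroying harmonicity; stretching destroys both, progressively.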
2.4 The major models of pitch perception and harmonic fusion

2.4.1 Potential cues for pitch coding

The two types of representation (a place representation based on the detection of an excitation maximum
proposed to explain the coding of the pitch of pure tones can naturally be applied to
resolved harmonics interacting within a single auditory filter brings out other cues that the auditory system can use to extract the pitch of complex tones. The interaction
harmonic) produces at the output of the auditory filter an envelope fluctuation that follows the fundamental frequency (cf. Figure 1E). In the auditory-nerve fibers whose characteristic frequencies
will most often be equal to the period of the envelope fluctuation (1/f0) or to integer multiples of that period. The auditory system can therefore have access to a
unresolved harmonics. Moore et al. (2003; 2006) showed that, depending on the rank of the unresolved harmonics, normal-hearing listeners could probably use two different kinds of information. For harmonics between the 8th and 12th ranks, subjects seem to retain some sensitivity to the temporal fine structure of the wave
of the envelope can be exploited, which leads to imprecision in estimating the interval separating two successive local maxima, and hence to ambiguity in the
pitch relying on this information from unresolved harmonics. Even if such a model can account for the perception of the pitch of a complex tone whose fundamental frequency is absent, it is not consistent with the fact that harmonics
& Peters, 1985). Indeed, a complex tone containing only resolved harmonics evokes a more salient and more precise pitch than a complex tone containing only unresolved harmonics. It was necessary to propose other models that would also take into account the information from the resolved harmonics. In the interest of
2.4.2 Pattern-recognition models

Such models were proposed notably by Goldstein (1973), Terhardt (1974) and, more recently, Shamma & Klein (2000). They consider that pitch extraction relies on the information carried by the resolved harmonics of the complex tone. In a first stage, the peripheral auditory system performs a spectral analysis that individualizes the various low-rank harmonics contained in the complex. Note that this type of model accepts equally well that the frequency components are individualized on the basis of a place code (the excitation maximum) or of a temporal code based on phase locking. In a second stage, the model assumes the existence of a

This mechanism is often illustrated by the image of a template: to recognize a harmonic series and extract its pitch, the central auditory system would choose, from a bank of internal templates, the one whose "markers" (regularly spaced on a
lowest frequency of the template (or equal to the spacing between the template's markers), and
Figure 3. Schematic representation of a spectral pattern-recognition model. The spectrum of a stimulus composed of the three harmonics of ranks 3, 4 and 5 of an absent fundamental frequency of 100 Hz is shown in the lower part of the figure. The model assumes the existence of harmonic templates in the central auditory system; the two templates corresponding to f0s of 100 Hz and 110 Hz, respectively, are symbolized by broken lines at the top of the figure. The "100 Hz" template fits the components of the stimulus perfectly, unlike the "110 Hz" template. According to this model, the components matching the markers of the template would be fused together into a single sound whose perceived pitch is that of the lowest frequency accepted by the chosen template (or, equivalently, equal to the spacing between the markers). After Plack (2005).
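The template idea in Figure 3 can be sketched as a small search procedure. The following is a deliberately naive illustration, not Goldstein's optimal processor or Terhardt's actual model: it scans candidate f0s from high to low and returns the first template whose markers account for every component.

```python
# Naive harmonic-template matcher illustrating Figure 3. This is an
# illustrative toy, not the model of Goldstein (1973) or Terhardt (1974).

def best_template_f0(components, tol=0.005):
    """Scan candidate f0s (1-Hz steps, highest first) and return the first
    one whose integer multiples fall within `tol` (relative) of every
    component -- i.e. the template whose markers fit the whole spectrum."""
    for f0 in range(int(min(components)), 29, -1):
        if all(abs(f - round(f / f0) * f0) <= tol * f for f in components):
            return f0
    return None

# Harmonics 3, 4 and 5 of a missing 100-Hz fundamental (Figure 3):
print(best_template_f0([300.0, 400.0, 500.0]))   # 100
# An octave complex, 440 + 880 + 1320 Hz -> template at 440 Hz:
print(best_template_f0([440.0, 880.0, 1320.0]))  # 440
```

Scanning from high to low implements the "lowest frequency accepted by the chosen template" reading of Figure 3: the 100-Hz template wins over sub-harmonic candidates such as 50 Hz because it is found first.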
of pattern recognition. First of all, it has never been possible to demonstrate a substrate
compatible with these models (Wang, 2013; cf. section 2.5.2). Moreover, how such templates could be generated in the central auditory system is also a matter of debate. The most commonly accepted hypothesis is that of progressive learning by the auditory system under the influence of our natural acoustic environment, rich in harmonic complex tones (Terhardt, 1974). Shamma and Klein (2000) even
coincidence, in which any broadband stimulus (including noise) would be sufficient
called into question by data suggesting that an "innate" sensitivity to harmonic relations between resolved components can be demonstrated in very young infants (from the age of three months) (Demany & Armand, 1984; Clarkson & Rogers, 1995), even though learning during the fetal period and the very first months of life cannot be ruled out. Another argument is raised against these models,
Even if, as we have seen, unresolved harmonics produce a less salient and more ambiguous pitch sensation, that sensation is nonetheless present and allows, for example, the recognition of simple melodies (Houtsma & Smurzynski, 1990).

2.4.3 Autocorrelation models

The
predominant, have the advantage of taking into account both the information from
and that from unresolved harmonics (as exclusively temporal models do) by
The first model of this type was proposed by Licklider (1951) and was later enriched notably by Meddis & Hewitt (1991), Meddis and O'Mard (1997), and Bernstein & Oxenham (2005).

the filter output is represented for the four channels centered on the characteristic frequencies 200, 400, 800 and 1200 Hz (i.e., the resolved harmonics of ranks 1, 2, 4 and 6), as well as for the channel centered on the characteristic frequency 3200 Hz, which responds to the interaction of several unresolved harmonics (Figure 4, right). Recall that the neurons whose
phase with the corresponding harmonic, whereas in unresolved regions they
whatever the frequency channel considered, the various neural response patterns all contain inter-spike intervals equal to 5 ms (the reciprocal of the fundamental frequency, 200 Hz), even if other intervals are also present (Figure 4, right).
Figure 4. The left part of the figure shows the waveform of a harmonic complex tone with an f0 of 200 Hz. The spectral analysis performed by the cochlea is illustrated for five cochlear filters, or frequency channels, centered on different characteristic frequencies (middle of the figure): four correspond to the resolved harmonics of ranks 1, 2, 4 and 6 (200, 400, 800 and 1200 Hz, respectively), while the fifth channel, centered on 3200 Hz, responds to several unresolved harmonics. The response of each channel is shown in the right part of the figure. The 5 ms time interval (reciprocal of the 200 Hz fundamental frequency) is indicated by the dotted lines. This interval has the particularity of being represented in the responses of all channels. It equals the period of the sinusoidal waveform (or an integer multiple of that period) in the channels responding to resolved harmonics, and it equals the period of the envelope's amplitude fluctuation in the channels responding to unresolved harmonics. Autocorrelation models rely on the use of this across-channel temporal information. After Plack (2005).
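The across-channel regularity described in this caption can be checked with a short arithmetic sketch. This is an illustration only, not a physiological model; the function name and the choice of channels are ours.

```python
# Illustrative arithmetic sketch (not a physiological model): a channel
# phase-locked to harmonic h of f0 can show interspike intervals that are
# integer multiples of 1/(h*f0). The smallest interval common to all such
# channels is then 1/f0, i.e. 5 ms for f0 = 200 Hz.

def common_interval_ms(f0_hz, harmonic_ranks, max_mult=50):
    """Smallest interval (ms) present in every channel's set of candidate
    interspike intervals (integer multiples of each harmonic's period)."""
    candidate_sets = []
    for rank in harmonic_ranks:
        period_ms = 1000.0 / (rank * f0_hz)
        candidate_sets.append({round(m * period_ms, 6)
                               for m in range(1, max_mult + 1)})
    return min(set.intersection(*candidate_sets))

# Channels tuned to the resolved harmonics of ranks 1, 2, 4 and 6 (Figure 4)
print(common_interval_ms(200.0, [1, 2, 4, 6]))  # 5.0
```

The same computation with any other subset of harmonic ranks still yields the fundamental period, which is the point exploited by the autocorrelation models discussed next.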
Autocorrelation relies on comparing a signal with delayed copies of itself: when a given delay is strongly represented in the signal, the correlation between the signal and its copies is high. Applied to a channel's response pattern, it yields a function (Figure 5A) in which the successive peaks correspond to the period of the waveform processed by that channel. Summing across channels takes account of the common intervals arising from resolved and unresolved harmonics (Figure 5B).
Figure 5. (A) Autocorrelogram of a harmonic complex tone with a fundamental frequency of 200 Hz. Each horizontal line represents the autocorrelation function obtained from the neural response pattern in a given frequency channel. One can see graphically that for delays that are multiples of 5 ms (reciprocal of a 200 Hz frequency), there is strong correlation across the responses of all channels. (B) Global autocorrelation function (SACF) obtained by summing the functions of the different frequency channels. The salient peaks, every 5 ms, indicate the delays at which across-channel correlation is strong. The auditory system could use this temporal information to extract the corresponding pitch of the complex (200 Hz), from resolved as well as unresolved harmonics. After Yost (2009).
The first peak visible in this function (which is also the largest) corresponds to the delay of maximal correlation across all the channels (here, 5 ms), and the subsequent peaks of strong correlation recur every 5 ms. The SACF thus provides a representation of the periodicity, and hence the pitch, of complex tones.
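As a rough illustration of how a summary autocorrelation could recover the 5 ms period, here is a minimal sketch. It is not the Meddis & Hewitt implementation: each channel is crudely approximated by a half-wave-rectified sinusoid at its harmonic frequency, the sampling rate and duration are arbitrary choices of ours, and candidate lags below 1 ms are excluded to skip the trivial near-zero-lag peak (real models likewise restrict the candidate period range).

```python
import math

FS = 16000   # sampling rate in Hz (arbitrary choice for this sketch)
DUR = 0.05   # 50 ms of signal

def channel_output(freq_hz):
    # Half-wave rectification crudely mimics a neural discharge pattern.
    return [max(0.0, math.sin(2 * math.pi * freq_hz * n / FS))
            for n in range(int(FS * DUR))]

def autocorr(x, lag):
    return sum(a * b for a, b in zip(x, x[lag:]))

def sacf_peak_lag_ms(f0_hz, ranks, min_lag_ms=1.0, max_lag_ms=10.0):
    channels = [channel_output(r * f0_hz) for r in ranks]
    lags = range(int(FS * min_lag_ms / 1000), int(FS * max_lag_ms / 1000))
    # Summary autocorrelation: sum the per-channel functions at each lag.
    sacf = [sum(autocorr(ch, lag) for ch in channels) for lag in lags]
    best = max(range(len(sacf)), key=sacf.__getitem__)
    return 1000.0 * lags[best] / FS

print(sacf_peak_lag_ms(200.0, [1, 2, 4, 6]))  # 5.0 ms, the period of f0
```

The lag at which the summed function peaks is common to all channels, whereas lags such as 2.5 ms peak only in a subset of them, which is why the summary function selects the fundamental period.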
Roberts & Brunstrom (2001) drew on this type of model to propose an account of how the auditory system might detect an inharmonic component.
Figure 6. Each of the two panels comprises an autocorrelogram (upper part of the panel) and the corresponding SACF (lower part). In the autocorrelogram, the degree of correlation as a function of delay is represented on a gray scale (the darkest areas corresponding to strong correlation). (A) The analyzed stimulus is a perfectly harmonic complex tone comprising the first 12 components of a 200 Hz fundamental frequency. The autocorrelogram shows a vertical "column" corresponding to strong across-channel correlation at a delay of 5 ms, found again as a peak in the SACF. (B) Stimulus identical to the previous one, except that the fourth spectral component is made inharmonic by a -4% frequency shift. This inharmonicity produces a "notch" in the vertical column of the autocorrelogram (black arrow), as well as an additional ripple in the SACF peak (open arrow), which could reflect cues used by the auditory system to detect the component's inharmonicity and perceptually exclude it from the complex. After Roberts & Brunstrom (2001).
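As a small numerical illustration, the mistuning depicted in panel B (fourth component at 768 Hz instead of 800 Hz) can be expressed with a hypothetical helper of ours, not taken from the article:

```python
# Hypothetical helper: a component's deviation from the nearest harmonic
# of f0, in percent of that harmonic's frequency.

def mistuning_percent(component_hz, f0_hz):
    rank = max(1, round(component_hz / f0_hz))  # nearest harmonic rank
    harmonic = rank * f0_hz
    return 100.0 * (component_hz - harmonic) / harmonic

print(mistuning_percent(768.0, 200.0))  # -4.0 : the shifted component (Fig. 6B)
print(mistuning_percent(800.0, 200.0))  #  0.0 : the harmonic case (Fig. 6A)
```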
What the authors describe as a "notch" appears clearly on the autocorrelogram. The consequences of shifting the fourth frequency component (768 Hz instead of 800 Hz) are shown in Figure 6B. Two modifications can be noted: on the one hand, the appearance of a notch in the vertical column of the autocorrelogram; on the other hand, an alteration of the SACF (an additional ripple on the descending slope of the peak). The neural translation of these cues is not known; nevertheless, they could serve the auditory system as indications of inharmonicity.
Autocorrelation models can thus account for the extraction of pitch on the sole basis of unresolved harmonics. Nevertheless, certain shortcomings can be pointed out. First, as with pattern-matching models, autocorrelation models struggle to explain why resolved harmonics are associated with a much more precise and salient pitch sensation than unresolved harmonics, which shows up, for example, in better f0-discrimination performance (Carlyon & Shackleton, 1994), even though explanations that we will not detail here have been put forward (de Cheveigné & Pressnitzer, 2006). It should be noted that other types of models exploiting temporal information have also been proposed (Moore, 1982; de Cheveigné, 1993; Patterson). The debate over models based on temporal coding therefore remains open. Hypotheses reconciling the two theories, by assuming that resolved and unresolved harmonics are processed by distinct mechanisms, have also been proposed (Carlyon, 1998). Over the last twenty years, temporal models have found favor with many authors,
but recent work has underlined the importance of spatial (place) representation in pitch coding. Oxenham, Bernstein & Penagos (2004) delivered, to a cochlear region tonotopically dedicated to high frequencies, fine temporal information similar to that which a low-frequency sinusoid would produce. Their results indicate that the tonotopic position of the components of a complex tone is decisive in the perception of its pitch.
Neurophysiological studies of complex tones have seldom addressed the segregation of sound sources directly. Nevertheless, some data are informative: studies on the coding of simultaneously presented complex tones, or on the coding of a mistuned component, can be related to the psychoacoustic data.
2.5.1 Auditory nerve, cochlear nucleus and inferior colliculus
Larsen, Cedolin & Delgutte (2008) compared the precision of two methods of analyzing auditory-nerve responses (in the cat) for estimating the f0s of two simultaneous complex tones. The first method exploited spectral information, based on the discharge rate of auditory-nerve fibers as a function of their characteristic frequency. The second method relied on temporal information, namely the distribution of the intervals separating action potentials. The authors showed that both methods could provide precise estimates of the two f0s, although the temporal estimates were less good for characteristic frequencies above 2-4 kHz, probably reflecting the degradation of phase locking at high frequencies. These data indicate that the information carried by the auditory nerve is sufficient, in both the spectral and the temporal domain, to allow the auditory system to segregate two simultaneous complex tones.
Older work had examined temporal coding in the auditory nerve of guinea pigs (Palmer, 1988). The stimuli notably included pairs of vowels with close f0s (100 and 125 Hz) presented simultaneously. The results showed that, depending on the fibers' characteristic frequency, the response pattern could reflect either one of the two f0s or individual harmonic components. Palmer estimated that the temporal information was precise enough to allow unambiguous identification of the two f0s and to support the segregation of the two vowels. Keilson et al. (1997) had obtained comparable results, showing that responses were synchronized with one or the other of the f0s of two vowels presented
simultaneously, sketching a neurophysiological basis for a temporally based model of segregation. In another study, pairs of complex tones were used in which the interval separating the two f0s was either consonant or dissonant. As had been observed with vowels, the results indicated that for consonant intervals the temporal response pattern could synchronize with one or the other of the f0s. In contrast, for dissonant intervals, the nerve's response followed the slow envelope fluctuations observable at the output of the auditory filters, linked to the interaction between close, unresolved components. In humans, such slow envelope fluctuations are associated with the perception of beats and roughness. Recordings in response to simultaneous complex tones in the chinchilla showed that the cochlear nucleus appeared to contain two types of response: some synchronized with individual components of the complex, as Sinex et al. had observed, and others following the envelope fluctuations produced by unresolved components. Sinex & Li (2007) had already demonstrated this type of response in the inferior colliculus. This led the authors to propose the hypothesis that the cochlear nucleus is an important relay in differentiating the responses to harmonic versus inharmonic stimuli (even though, as we have seen, responses synchronized to slow envelope fluctuations have also been observed at the level of the auditory nerve by other investigators). Responses induced by the stimulus's inharmonicity thus entail a marked qualitative distinction.
2.5.2 Cortex
In one study, listeners were exposed to complex tones composed of the harmonics of ranks 4 to 11 of a series with an f0 of 250 Hz (the fundamental itself being absent). These eight components were presented either monaurally or dichotically, in which case the right ear received a complex tone composed of the four odd-rank harmonics (1250, 1750, 2250 and 2750 Hz) and the left ear one composed of the four even-rank harmonics (1000, 1500, 2000 and 2500 Hz). The authors verified that the pitch perceived when one of these four-component complexes was presented alone was close to 500 Hz. Their data showed that the same region of auditory cortex was activated in response to the monaural stimuli (f0 = 250 Hz) and to the dichotic stimuli described above. This cortical region was also the one that responded preferentially to a 250 Hz pure tone. These results seemed to prove (1) that the same cortical region was activated in response to stimuli with a spectral pitch or a virtual pitch equal to 250 Hz, and (2) that binaural harmonic fusion, long established by psychoacoustic experiments (Cramer & Huggins, 1958), has a cortical counterpart.
More recently, Bendor & Wang (2005), using multi-unit recordings in the monkey, identified a cortical region containing neurons that appeared to code pitch specifically. They focused on a population of neurons located near the anterolateral border of primary auditory cortex, in a region tonotopically dedicated to low frequencies. These neurons responded both to pure tones and to harmonic complex tones of the same pitch, even in the absence of the fundamental frequency. Yet these same neurons did not respond to the harmonics when these were presented in isolation. Moreover, the cortical region identified here in the monkey corresponded to the region where pitch-related activity has been reported in the human brain (Patterson, Uppenkamp, Johnsrude, & Griffiths, 2002; Penagos, Melcher, & Oxenham, 2004). This work seemed to herald a decisive advance in our knowledge of human auditory perception, namely the discovery of a zone specialized for pitch. Doubts remain, however, as concerns its transposability to the human cortex. Hall & Plack (2009) advanced the hypothesis that the activity demonstrated in humans in the lateral portion of Heschl's gyrus was tied above all to the nature of the stimuli used in the majority of previous studies, rather than resulting directly from the coding of stimulus pitch. Using functional MRI, they studied the cortical responses to a variety of pitch-evoking stimuli (including iterated rippled noise, but also pure tones and complex tones with resolved and unresolved harmonics). Their results showed that this supposedly "pitch-specific" zone was, on the one hand, much more extensive and more posterior than the lateral portion of Heschl's gyrus, and that, on the other hand, inter-individual variability was very high; in some subjects it was even located in cortical zones remote from Heschl's gyrus.
Alain et al. (2002) recorded auditory evoked potentials in response to harmonic complex tones in which one component was spectrally shifted. Their results showed that when the inharmonicity was sufficient to cause the perceptual exclusion of the component from the rest of the complex, a late negative wave (termed ORN, for object-related negativity) appeared in the evoked potentials. This wave, found in active as well as passive listening conditions, would according to the authors be the marker of an automatic mechanism by which the central auditory system detects inharmonicity, triggered when the perceived stimulus deviates from a harmonic pattern.
Related observations were made in the primary auditory cortex of the monkey by Fishman & Steinschneider (2010). Their results revealed two kinds of neural response to a component shifted relative to the rest of the complex. First, they observed a local increase (in the zone corresponding to the shifted component) of the neural discharge rate in response to inharmonic stimuli relative to harmonic stimuli. This increase was larger the more the component was shifted, which was consistent with the human psychoacoustic data on perceptual pop out. Second, responses synchronized to slow envelope fluctuations were present for inharmonic stimuli, as had been observed at the level of the auditory nerve. These results might above all reflect the perceptual salience, or pop out, of the shifted component, rather than the inharmonicity-detection mechanism itself. The authors indeed emphasized that with inharmonic stimuli whose spectrum was globally stretched (similar to those used by Roberts & Brunstrom (2001)), and which are not associated with perceptual pop out, the increase in neural discharge rate was no longer observed.
One important property of auditory-system neurons is their ability to respond selectively to a given frequency (Kiang, Sachs, & Peake, 1967). However, some neurons of the auditory system, notably in central structures (inferior colliculus, reticular nucleus, cortex), seem to respond to other frequencies in addition to their characteristic frequency (Portfors & Wenstrup, 2002). Kadia and Wang (2003) assessed such "multi-peaked" neurons in the primary auditory cortex of the marmoset (as opposed to "single-peaked" neurons, which have only one preferred frequency). Some of these multi-peaked neurons respond to pure tones (presented in isolation) whose frequencies are harmonically related, and could thus be directly involved in sensitivity to harmonicity. Moreover, among the 80% of cortical neurons that are "single-peaked" (when tested with a single pure tone), some seem able to detect harmonic ratios in response to two simultaneous
Figure 7. Example recording from a "single-peaked" neuron of primary auditory cortex in the marmoset in response to a pair of simultaneous pure tones S1 and S2. The frequency of S1 is fixed at the neuron's characteristic frequency (CF, 1.47 kHz). The frequency of S2 is varied (between 0.12 and 5.88 kHz). The percentage change in discharge rate is plotted as a function of the frequency of S2. The reference value (0%, horizontal dotted line) corresponds to the discharge rate in response to S1 presented alone. The simultaneous presentation of the two pure tones has a strong facilitatory effect when S2 has a frequency equal to 2CF (i.e., an octave interval between S1 and S2). After Wang (2013).
pure tones, one of which is at their characteristic frequency. Figure 7 illustrates the variations in discharge rate of such a neuron in response to a pair of simultaneous pure tones, denoted S1 (fixed frequency = 1.47 kHz, the neuron's characteristic frequency) and S2 (variable frequency). The discharge rate is markedly higher when S2 and S1 are separated by an octave; the response appears to be facilitated by harmonic frequency combinations, as it even exceeds that obtained with S1 presented alone. This type of neuron could play a decisive role in the extraction and fusion of harmonically related components.
Neurophysiological data are thus available that can provide credible support for both types of models. Neural networks supporting a template-based model are conceivable, for example by combining the activity of several neurons whose preferred frequencies are harmonically related. Autocorrelation models, for their part, can in theory be supported by neural networks comprising parallel lines with different processing delays. Structures performing a comparable temporal analysis, but dedicated to the comparison of interaural delay in spatial localization tasks, have been described in mammals at the level of the inferior colliculus (McAlpine, Jiang, & Palmer, 2001). As for the anatomical locus of the structure involved in pitch coding, it has long been known that it lies at a level capable of processing information from both ears, since a subject can extract pitch from two harmonics that do not include the fundamental frequency, each presented to a different ear (Houtsma & Goldstein, 1972). Whether there exists a cortical zone dedicated to pitch perception is still debated.
In general, dissonant stimuli give rise to more sustained neural activity and to response patterns that are more complex and less coherent across auditory channels than consonant stimuli (Micheyl & Oxenham, 2010). Concerning harmonic fusion itself, the available data are not sufficient to establish through which neural mechanisms, and at which level of the auditory system, it is achieved. Two kinds of constraints can explain the current limits of our knowledge of this phenomenon: (1) as soon as harmonic stimuli are used, it is difficult to study the fusion phenomenon in isolation; (2) in the case of inharmonic stimuli, interactions between components produce beats at multiple levels of the auditory pathways (auditory nerve, cochlear nucleus, inferior colliculus, cortex), which can potentially inform the auditory system of the presence of an inharmonicity. The question that remains open is whether this information is really used as a cue for fusing the components that do not produce such beats (Micheyl & Oxenham, 2010), or whether it merely represents a neural "artifact" tied to the physical properties of inharmonic sounds.
3 ARTICLE "AUDITORY DISCRIMINATION OF FREQUENCY RATIOS: THE OCTAVE SINGULARITY"
If a normal-hearing listener is able to detect that the frequency components of a complex tone are harmonically related, and so to fuse them together, it is because the auditory system is sensitive to the frequency ratios between those components. As we have seen, the mechanisms of harmonic fusion remain largely unknown, even though this aspect of auditory perception probably plays an important role in auditory scene analysis, and the neural substrate used by the auditory system to fuse harmonics has not been formally identified. One of the reference spectral models was proposed by Terhardt (1974). It holds that fusion proceeds in two stages. First, the system extracts the "spectral" pitch of each component of the complex tone, as if they were pure tones presented in isolation. Second, the intervals separating these pitches are compared with internal templates (acquired through natural exposure to harmonic sounds); the components that correctly match the selected template are fused together into a single percept with "virtual" pitch f0. According to Terhardt, these same templates would be used to analyze the melodic relations between successive pure tones. This would notably explain why two consecutive pure tones separated by an octave show a particular perceptual affinity.
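The two-stage template idea can be caricatured in a few lines of code. This is a deliberately simplified sketch of ours, not Terhardt's actual model (which involves learned templates and spectral-pitch weighting): here the template is an exact harmonic series, the 3% tolerance is arbitrary, and ties between subharmonic candidates are broken in favor of the highest f0 as a crude stand-in for the weighting.

```python
# Simplified template-matching sketch in the spirit of Terhardt's model.

def best_template_f0(spectral_pitches_hz, candidate_f0s_hz, tolerance=0.03):
    def match_count(f0):
        count = 0
        for f in spectral_pitches_hz:
            rank = max(1, round(f / f0))          # nearest template marker
            if abs(f - rank * f0) <= tolerance * rank * f0:
                count += 1
        return count
    # Ties between subharmonics (e.g. 100 Hz also fits 600/800/1000 Hz)
    # are broken in favor of the highest candidate f0.
    return max(candidate_f0s_hz, key=lambda f0: (match_count(f0), f0))

# Components 3f, 4f and 5f of a missing fundamental at 200 Hz
pitches = [600.0, 800.0, 1000.0]
candidates = [float(c) for c in range(100, 501, 10)]
print(best_template_f0(pitches, candidates))  # 200.0, the "virtual" pitch
```

The candidate whose template accounts for the most spectral pitches plays the role of the fused percept's virtual pitch f0.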
In our first study, the stimuli were sets of four pure tones, equally spaced on a logarithmic frequency scale. The subjects' task was to discriminate a standard frequency ratio (equal to 0.97 octave) from a slightly larger target ratio (between 0.98 and 1.04 octave). The pure tones could be presented either simultaneously (condition Simul) or consecutively (condition Consec). The results show that in the Simul condition, the subjects' performance is not a monotonic function of the target ratio: as the target ratio is gradually increased from 0.98 to 1.04 octave, performance initially improves (up to a ratio of 1.0 octave), then worsens for ratios slightly above 1.0 octave, and finally improves again. In the Consec condition, performance increases monotonically with the target ratio, with no particular effect of ratio simplicity, which is consistent with data already reported in the literature. For instance Dobbins & Cuddy (1982), in a study of melodic octave perception, showed that the physical frequency ratio and its perceptual correlate are related by a monotonic function: as soon as a subject detects a difference between two frequency ratios, he or she can order them on a magnitude scale. This does not seem to be the case in the Simul condition, where our results indicate that the octave is associated with a singular percept: the listener can detect a very small deviation from the octave without being able to specify the direction of that deviation. In other words, the listener detects that a frequency ratio is not perfectly harmonic, but cannot tell whether it is too large or too small. A control experiment ruled out the hypothesis that the results obtained in the Simul condition were due to beat cues arising from peripheral interactions between the tones. Our results, like those of Demany & Semal (1990), run counter to the theoretical properties of harmonic templates as proposed by Terhardt's model, according to which the harmonic octave (condition Simul) and the melodic octave (condition Consec) should have yielded equivalent results in the two conditions. They seem, on the other hand, compatible with a mechanism sensitive to perfect temporal synchrony across the different frequency channels containing harmonically related pure tones.
AUDITORY DISCRIMINATION OF FREQUENCY RATIOS: THE
OCTAVE SINGULARITY
3.1 Introduction
Human listeners are, to some extent, sensitive to the frequency ratios formed by
pure tones. For pure tones presented sequentially, this is shown by the fact that a melody
retains its perceptual identity when it is transposed in the frequency domain, that is, when
the frequencies of the tones are multiplied by a common factor (Attneave & Olson, 1971).
Moreover, a familiar melody is easier to recognize when it is presented with the correct
frequency ratios than when the ratios are substantially distorted, even if, in the latter case,
the global melodic contour is preserved (Dowling & Fujitani, 1971). For pure tones presented simultaneously, one indication that listeners are also sensitive to frequency ratios is the phenomenon of "harmonic fusion".
A sum of simultaneous pure tones with frequencies forming a harmonic series (f, 2f, 3f,
4f, etc.) is perceived as a single sound, with a pitch corresponding to f, rather than as a chord of separate tones. If one of the components of such a stimulus is
mistuned by a few percent from the "correct" frequency, then this component pops out
perceptually, while the other components remain fused (Moore, Glasberg, & Peters,
1986 ; Hartmann, McAdams, & Smith, 1990 ; Lin & Hartmann, 1998 ; Roberts &
Brunstrom, 1998). Since the components of a harmonic series are related by simple
frequency ratios (i.e., ratios of small whole numbers), the phenomenon of harmonic fusion can be viewed as evidence of a sensitivity to ratio simplicity for simultaneous tones. However, in the case of sequentially presented tones, there is also evidence that people
are, to some extent, sensitive to frequency-ratio simplicity. In particular, several studies
have suggested that successive pure tones which are one octave apart, and thus form a
frequency ratio of 2:1, are perceptually similar (Deutsch, 1973 ; Idson & Massaro, 1978). The present study was concerned with the perceptual differences between the frequency ratios formed by pure tones presented either simultaneously or sequentially, and more particularly with the influence of ratio simplicity on ratio discrimination. With regard to this influence, three
hypotheses are a priori possible; they are hereafter referred to as H0, H1, and H2.
According to H0, as indicated by the evidence mentioned above, the discriminability of frequency ratios does not depend on their simplicity. Under this hypothesis, the discriminability of two ratios is determined only by the magnitude of their difference (Levelt, van de Geer, & Plomp, 1966 ; Plomp, Wagenaar, & Mimpen, 1973). Contrary to H0, H1 posits that ratio simplicity does influence ratio discriminability.
However, both H0 and H1 posit that ratio discrimination takes place along a single perceptual continuum, on which ratios are represented in an orderly fashion, with "small" ratios (i.e., ratios close to 1) at one end and large ratios at
the other end. This means that for any set of three ratios, r1, r2, and r3, such that r1 <
r2 < r3, it will always be easier to discriminate r1 from r3 than to discriminate r1 from r2
or r2 from r3. According to H2, on the other hand, ratio simplicity has a deeper impact: simplicity is perceived as a dimension distinct from ratio magnitude. Thus, H2 posits that ratio discrimination takes place along two distinct
perceptual continua representing simplicity and magnitude, rather than along a single continuum. As a consequence, one might find that for three ratios, r1, r2, and r3, such that r1 < r2 < r3, discriminating
r1 from r3 is more difficult than discriminating r1 from r2. In other words, H2 implies that
comparative judgments of frequency ratios may not always obey the transitivity principle.
The discriminability of frequency ratios formed by simultaneous pure tones has been investigated in a few previous studies (Viemeister & Fantini, 1987 ;
Demany & Semal, 1988 ; 1992 ; Schellenberg & Trehub, 1996b ; Schellenberg, 2001 ;
Trainor, 1997 ; Stoelinga & Lutfi, 2010). All of these studies indicate that, for simultaneous
tones, discriminating two frequency ratios is often much easier when the standard ratio is
simple (so that the stimulus is harmonic) than when this is not the case. The results are
therefore globally inconsistent with hypothesis H0; they are consistent with both H1 and
H2. However, the simplicity effects observed in most of these studies can in fact be
interpreted in a rather trivial way. Due to the limited frequency resolution of the cochlea
(Moore, 2003, chap. 3), two simultaneous pure tones which are presented to the same ear
and which are not both soft and well-separated in frequency will interact in the cochlea.
As a result, some auditory-nerve fibers will respond to both tones and will encode
properties of the global waveform. If the frequency ratio of the two tones is simple, e.g.,
2:1 or 3:2, then the global waveform has a short periodicity, in the pitch range. If, on the
other hand, the frequency ratio deviates slightly from a simple value such as 2:1 or 3:2,
then the global waveform has a much longer periodicity, which may be audible as a
periodic beat (Plomp, 1967 ; Plomp, 1976 ; Viemeister, Rickert, & Stellmack, 2001).1 This
1Such beats are called “beats of mistuned consonances”. They differ from the better-known beats
produced by pairs of pure tones with small frequency differences. However, for the sake of
concision, they will be most often referred to simply as “beats” in the following.
provides a cue for the discrimination between simple ratios and ratios close to a simple
value.
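To first order, the rate of these beats follows directly from the deviation of the frequency ratio from the simple value. The sketch below (our own illustration of this standard description, for the near-octave case) makes the relation explicit:

```python
# "Beats of mistuned consonances": for tones f1 and f2 with f2 close to 2*f1,
# the slow periodicity of the global waveform corresponds to |f2 - 2*f1|
# cycles per second.

def mistuned_octave_beat_rate(f1_hz, f2_hz):
    return abs(f2_hz - 2.0 * f1_hz)

print(mistuned_octave_beat_rate(500.0, 1000.0))  # 0.0  : exact octave, no beat
print(mistuned_octave_beat_rate(500.0, 1010.0))  # 10.0 : slight mistuning -> 10 Hz beat
```

The closer the ratio is to exactly 2:1, the slower the beat, so the cue vanishes precisely at the simple ratio and grows with the mistuning.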
The cue in question was probably available in most of the studies quoted above.
However, Demany and Semal (1988 ; 1992) endeavored to eliminate it by presenting the
two components of each stimulus dichotically and at a low sound pressure level (SPL),
namely 45 dB. The present research was again specifically concerned with the auditory
processing of frequency relations across peripheral frequency channels, rather than the byproducts of within-channel interactions. Therefore, care was taken to minimize peripheral interactions between the tones and the audibility of beats resulting from such interactions. In addition, a control experiment was carried out to assess the role of any residual interaction cues.
For pure tones presented sequentially, previous studies have yielded conflicting results.
Houtsma (1968) measured discrimination thresholds for ratios in the vicinity of one
octave. One of the three listeners whom he tested had extensive musical training and all
listeners received extensive training in the discrimination task before data collection. The thresholds obtained were roughly the same for standard ratios ranging from 1.9 (about 11 semitones) to 2.1 (about 13 semitones).
(1978) performed similar experiments, but they used a different range of frequency ratios
(2.5 - 5.5 semitones) and they compared the thresholds of musicians and nonmusicians.
In the nonmusical group, they obtained results akin to those of Houtsma, in that
thresholds were approximately independent of the standard ratio; ratio simplicity had no
effect. In the musical group, on the other hand, performance was markedly better than
for nonmusicians overall and it was systematically worse for ratios very close to simple
values (4:3, 5:4, or 6:5) than for ratios roughly halfway between these simple values. The
latter finding was interpreted as evidence for categorical perception of musical intervals
based on musical acculturation and practice. The authors concluded that "there is no
evidence for the existence of natural perceptual boundaries for melodic musical intervals"
(Burns & Ward, 1978). In the same vein and in a broader perspective, McDermott,
Keebler, Micheyl, and Oxenham (2010) have claimed that, contrary to a common belief,
melodic intervals are not perceived more precisely than, and differently from, relations along other perceptual dimensions. However, the findings of Burns and Ward (1978) and Houtsma (1968) are at odds with results reported
more recently by Schellenberg and Trehub (1994 ; 1996a ; 1996b) and Schellenberg (2001).
In these four studies, it was found that discriminating a melodic interval corresponding
to a simple frequency ratio (3:2 or 4:3) from a somewhat smaller or larger interval
(corresponding to a complex ratio) is easier when the simple ratio is presented before the
complex ratio than when the two ratios are presented in the reverse order. This asymmetry
was observed not only in musically educated adults (Schellenberg, 2001), but also in 6-
year-old children (Schellenberg & Trehub, 1996a) and 6-month old infants (Schellenberg
& Trehub, 1996b). Even younger infants appear to perceive two melodies as similar when
they differ from each other by octave transpositions of some of their components
(Demany & Armand, 1984). The results of Schellenberg and Trehub (1996b) and Demany
and Armand (1984) clearly suggest that some melodic intervals are more "natural" than others.
In sum, none of the three hypotheses labeled above as H0, H1 and H2 seems to
be definitely ruled out by the current experimental evidence relating to the discrimination
of melodic intervals. Nevertheless, the idea that, in the melodic domain, frequency-ratio
discriminability could depend on ratio simplicity per se is challenged by the fact that the
simplest frequency ratio formed by non-identical successive tones, i.e., 2:1, can be
perceived as mistuned: when listeners are required to adjust two successive pure tones
one octave apart, the obtained frequency ratio is often slightly but significantly larger than
2:1, especially for high-frequency tones (Ward, 1954 ; Walliser, 1969 ; Sundberg &
Lindqvist, 1973 ; Demany & Semal, 1990 ; Hartmann, 1993). The neural basis of this octave enlargement phenomenon remains unclear, although several explanations have been proposed (Terhardt, 1974 ; Ohgushi, 1983 ; Hartmann, 1993 ; McKinney & Delgutte,
1999).
The current study stemmed from introspective observations that we made about
the perception of stimuli consisting of four synchronous pure tones presented at a low
SPL and regularly spaced on a logarithmic frequency scale. Each stimulus was defined by
two parameters: Fmin, the frequency of the lowest tone, and Oct, the frequency spacing of
the tones expressed in octaves. In our stimulus set, Oct varied in the range 0.9 - 1.1. We
focused our attention on the perceptual correlate of changes in Oct while Fmin was varied
concomitantly and randomly, in a wide range. Due to the randomness of the variations in
Fmin, there was essentially no correlation between Oct and the global pitch of the stimuli
or the frequency of individual tones. We firstly observed that the stimuli obtained for two
values of Oct equidistant from 1, and not too close to each other (e.g., 0.9 and 1.1),
were discriminable on the basis of a spectral "compactness" cue. This perceptual variable
was a monotonic function of Oct: a stimulus with a relatively small value of Oct, e.g., 0.9,
sounded more compact than a stimulus with a relatively large value of Oct, e.g., 1.1. It was
also apparent, however, that a different perceptual variable, namely spectral fusion, was
more efficient for the detection of changes from Oct = 1 to Oct ≠1. For Oct = 1, the four
tones were perceptually fused into a single sound; this fusion decreased as Oct deviated
from 1. Since fusion decreased for both positive and negative deviations from 1, this cue
was not a monotonic function of Oct, in contrast to the compactness cue. Overall,
therefore, our informal observations suggested that within a small range of Oct values
centered on 1, changes in Oct were detectable on the basis of two different perceptual
variables, each of them being more efficient than the other in certain conditions. This suggested a possible test of hypothesis H2, which can be described as follows. Let Octreference be an Oct value slightly smaller than 1 and moderately
discriminable from 1. Suppose that, as suggested by our introspective observations: (1) the
discrimination between Octreference and 1 essentially rests on the fusion cue; (2) fusion
decreases as soon as Oct exceeds 1. It can then be expected that Octreference will be easier to
discriminate from 1 than from Oct values slightly larger than 1. However, even larger Oct
values will be discriminable from Octreference on the basis of the compactness cue. So, the psychometric function relating discrimination performance to progressively larger values of Oct might show an initial rise followed by a fall and then a
second rise. Such a nonmonotonic psychometric function would provide strong evidence
for H2.
This was the rationale underlying the first half of Experiment 1, our main
experiment. The second half of this experiment was identical to the first, except that the
pure tones forming the stimuli were now presented successively rather than simultaneously. A further control experiment (Experiment 2) was intended to check, as mentioned above, that the data collected in the main experiment were free of beat cues resulting from cochlear interactions.
3.2 Experiment 1
3.2.1 Method
Participants. The tested listeners were six students in their twenties (three male,
three female). They all had normal hearing. Some of them were amateur musicians, but none had received a thorough musical education. All listeners were paid for their participation.
Stimuli, conditions and task. Each stimulus consisted of four diotically presented
pure tones. In the "simultaneous" condition, these four tones were synchronous. In the "sequential" condition, the four tones were presented consecutively, with 100-ms silent inter-stimulus intervals (ISIs) between them. In both conditions, each
tone had a total duration of 100 ms and was gated on and off with 20-ms raised cosine
functions. The tones had a nominal SPL of 31 dB for listeners L1 and L2, and 46 dB for
the four remaining listeners.2 The four components of each stimulus were equally spaced
on a logarithmic frequency scale. Their spacing, Oct, was expressed in octaves. On each
stimulus presentation, the frequency of the lowest tone (Fmin) was selected randomly
between 125 and 250 Hz; the corresponding probability distribution was rectangular,
frequency being scaled logarithmically. The starting phases of the tones were random.
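The stimulus parameterization described above — a log-uniform Fmin between 125 and 250 Hz and a constant log-frequency spacing of Oct octaves — can be sketched as follows. This is our own illustration (function name and defaults are ours), not code from the study:

```python
import math
import random

def stimulus_frequencies(oct_spacing, lo=125.0, hi=250.0):
    """Return the four component frequencies (Hz) of one stimulus.

    Fmin is drawn from a rectangular distribution on a logarithmic
    frequency scale between `lo` and `hi`, and successive components
    are spaced by `oct_spacing` octaves, as in the experiment.
    """
    fmin = lo * 2.0 ** (random.random() * math.log2(hi / lo))
    return [fmin * 2.0 ** (k * oct_spacing) for k in range(4)]
```

For Oct = 1, this yields an exact harmonic series of octaves (e.g., frequencies in ratio 1:2:4:8); for Oct = 0.97 or 1.03, the spacing is uniformly compressed or stretched on the log-frequency axis.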
On every trial, four successive stimuli were presented, as illustrated in Figure 8 for
the "simultaneous" condition. There was a silent ISI of 300 ms between the first two
stimuli (S1 and S2) and between the last two stimuli (S3 and S4). A longer ISI – 600 ms –
separated S3 from S2, thus segmenting the whole sequence into two pairs of stimuli. In
three of the four stimuli, Oct had a fixed reference value (Octreference) of 0.97. In the
remaining stimulus, which was either S2 or S4, at random, Oct was larger; this target value
of Oct (Octtarget) could be equal to 0.98, 0.99, 1.00, 1.01, 1.02, 1.03, or 1.04. The listener’s
task was to indicate if the odd man out was S2 or S4, by making a mouse click on one of
two virtual buttons on a computer screen.

2 We had initially planned to use 46-dB tones for each listener. However, during the practice
sessions, it appeared that ceiling effects were likely to be obtained in listeners L1 and L2 if they were
tested with tones at this level. It was thus decided to test these listeners using softer tones, making
the task more difficult.

Figure 8. Schematic representation of listeners' task in the "simultaneous" condition of Experiment 1. On
each trial, four stimuli (S1, S2, S3, S4) were successively presented (their relative timing is not faithfully
depicted in this Figure). Each stimulus consisted of four synchronous pure tones, equally spaced on a
logarithmic frequency scale. The spacing of the tones was the same (namely, 0.97 octave) for three stimuli,
and was larger for the remaining stimulus, which was either S2 or S4. Listeners had to identify the odd man
out as S2 or S4. The frequency of the lowest component of each stimulus was randomly chosen between
125 and 250 Hz. The "sequential" condition of Experiment 1 was similar, except that in this case the four
components of each stimulus were presented consecutively.

Immediately after this response, the button
was colored in white if the response was correct, or in black otherwise. Response time
was not limited. Within each block of trials, a response triggered the next trial after a delay
of about 1 s.
The stimuli were generated at a 44.1-kHz sampling rate, using 24-bit digital-to-
analog converters (RME). They were presented via headphones (Sennheiser HD 650).
Harmonic distortion at the output of the headphones was assessed using an artificial ear
(Bruel & Kjaer 4153 equipped with a flat-plate coupler; microphone model: 4134) and a
spectrum analyzer (Stanford Research, SR 780). For nominally pure tones ranging in
frequency from 125 to 1000 Hz and produced at 80-90 dB SPL, it was found that each harmonic distortion product was negligible in level. Listeners were tested individually in a soundproof booth (Gisol, Bordeaux). Trials were run in blocks of 50, during which Octtarget had a fixed
value, known by the listener. Each experimental session consisted of seven blocks of trials,
one block for each of the seven possible values of Octtarget. These seven blocks were
randomly ordered.
For each listener, the experiment proper was preceded by at least six practice
sessions in the "simultaneous" condition. Then, 10 formal sessions were run in that
condition. They were followed by one or two practice sessions in the "sequential"
condition, and then 10 formal sessions in that condition. Overall, therefore, the formal
data resulted from 500 trials per listener for each combination of condition and Octtarget
value.
3.2.2 Results
Before considering the results obtained in each individual listener, we shall focus
on the mean data. Figure 9 shows the overall proportion of correct responses obtained in
each condition ("simultaneous": black disks; "sequential": white disks) for each Octtarget
value. Performance was not markedly better in one condition than the other, but the effect
of Octtarget on performance clearly differed across the two conditions. In the "sequential"
condition, the mean psychometric function obtained was approximately a straight line, performance increasing monotonically with Octtarget. In the "simultaneous" condition, by contrast, the mean psychometric function obtained was clearly nonmonotonic; it showed a peak for Octtarget
= 1, followed by a dip for Octtarget = 1.02; this outcome is consistent with hypothesis H2. The data were submitted to a repeated-measures ANOVA with the factors Condition
X Octtarget, which revealed a significant main effect of Octtarget, F(6, 30) = 14.80, p < 0.001,
and a significant interaction between Condition and Octtarget, F(6, 30) = 4.81, p = 0.0015.
There was no significant main effect of Condition, F(1, 5) = 0.03, p = 0.88. Given the
existence of a reliable interaction between Condition and Octtarget, we then submitted the data obtained in each condition to a separate ANOVA with the single factor Octtarget.

Figure 9. Results obtained in the "simultaneous" and "sequential" conditions of Experiment 1. Each data
point represents the mean performance of six listeners. The thin horizontal line indicates the chance level
of performance (50 % of correct responses).

In each case, not surprisingly, the effect of Octtarget appeared to be significant, F(6,
30) ≥ 8.45, p < 0.001. Our main goal was to determine if there were significant cubic
trends in the effect of Octtarget, as predicted by H2. There was indeed a significant trend of
this type for the "simultaneous" condition, t(30) = 5.11, p < 0.001, but not for the "sequential" condition.
The data provided by each listener are plotted in Figure 10, where the curves
represent the fits of two models of perceptual processing, one model for each condition. These models are described in detail
in the Appendix to this manuscript. It is especially noticeable in Figure 10 that there was
essentially no correlation, across listeners, between the overall level of performance in the
two conditions. For instance, while listener L1 performed markedly better in the
"simultaneous" condition than in the "sequential" condition, the reverse was found in
listener L2.
Figure 10. Individual results of the six listeners (L1 - L6) in Experiment 1. Each panel plots the proportion of correct responses (ordinate, 0.6 - 0.9) as a function of Octtarget (abscissa, 0.98 - 1.04).
3.2.3 Discussion
Did performance in Experiment 1 really reflect an ability to discriminate between frequency ratios, rather than between frequencies? In order
to minimize the latter possibility, we had randomized the frequency of the lowest
component of each stimulus, as pointed out above and illustrated in Figure 8. However,
was the randomization range (1 octave) large enough? To answer that question, we simulated the behavior of a virtual listener insensitive to changes in frequency ratio but producing optimal responses
on the basis of frequency comparisons between stimuli. This virtual listener votes for
stimulus S2 if the frequency of the highest component of S2 is higher than the frequency
of the highest component of S4, and votes for S4 otherwise. We found that the proportion of correct responses achieved by this virtual listener increases with Octtarget – Octreference. This proportion is 0.664 for Octtarget = 1.03 and 0.688 for Octtarget = 1.04.
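The proportions just cited can be recovered analytically. A minimal sketch (our own formulation, not code from the study): the virtual listener is correct whenever the target's highest component exceeds the reference's, i.e., whenever U − V > −3(Octtarget − Octreference), where U and V are the positions, uniform on the 1-octave randomization range, of the two Fmin values on a log2 scale.

```python
def virtual_listener_p_correct(oct_target, oct_reference=0.97):
    """Probability that the frequency-comparing virtual listener responds
    correctly. With log2(Fmin) uniform over a 1-octave range, the highest
    component's log2-frequency is U + 3*Oct (U uniform on [0, 1]), so the
    listener errs only when V - U >= 3*(oct_target - oct_reference)."""
    d = 3.0 * (oct_target - oct_reference)
    if d >= 1.0:
        return 1.0
    # P(V - U >= d) = (1 - d)^2 / 2 for two independent uniforms on [0, 1]
    return 1.0 - (1.0 - d) ** 2 / 2.0
```

For Octtarget = 1.03 and 1.04 this gives 0.664 and 0.688, matching the values above; for Octtarget equal to Octreference it gives chance (0.5), as it should.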
In the "sequential" condition, for these two values of Octtarget, five of the six listeners tested
in the experiment (all participants except L3) performed better than the virtual listener;
therefore, those five listeners must have based their judgments on comparisons between frequency ratios rather than comparisons between frequencies. L3, on the other hand, may have been unable to perform the task as intended. In the
"simultaneous" condition, for all listeners except again L3, the nonmonotonic effect of
Octtarget on performance is of course at odds with the idea that performance was merely based on frequency comparisons. In the "simultaneous" condition, the pure tones forming the stimuli interacted in the cochlea, whereas this was precluded in the "sequential" condition. As pointed out above, cochlear interactions could in principle have allowed listeners to discriminate frequency ratios on the basis of beat cues. Beats were liable to be elicited by all of the
stimuli used in the "simultaneous" condition, except those composed of tones exactly one
octave apart. This can potentially account for the fact that, in the "simultaneous"
condition, a local peak in discriminability was observed for Octtarget = 1. In order to limit
cochlear interactions, we presented the tones at a low SPL, namely 46 dB for four listeners
and 31 dB for the remaining two listeners. To further investigate the extent of interactions
between the tones, we computed "excitation patterns" for the stimuli of the
"simultaneous" condition, using the model of Moore, Glasberg and Baer (1997) revised
by Glasberg and Moore (2006). Figure 11 shows the simulated excitation pattern obtained
for the harmonic stimulus formed of tones at 125, 250, 500, and 1000 Hz when the SPL
per tone was 46 dB (upper solid curve) and 31 dB (lower solid curve). Note that 125 Hz
was the lowest possible tone frequency in the experiment, and that in this case cochlear
interactions between tones were maximized (because the relative bandwidth of the
peripheral auditory filters tends to increase at low frequencies). The dashed curve in
Figure 11 shows the excitation corresponding to the absolute threshold of hearing (in
normal young listeners). This curve thus represents a detection floor for the excitation
patterns: portions of the patterns that fall below it are, in principle, undetectable by a
normal-hearing listener. It can be seen that in the case of the lower-level excitation pattern,
any detectable excitation originates from a single tone. At the higher level, however, some regions of detectable excitation receive contributions from two neighboring tones, so that cochlear interactions cannot be entirely excluded.
Figure 11. The two solid curves represent the cochlear excitation pattern produced by a sum of four
pure tones at 125, 250, 500, and 1000 Hz when the SPL of each tone is 46 dB (upper curve) and 31 dB
(lower curve). The dashed curve represents the amount of excitation produced by a just-detectable pure
tone, as a function of its frequency. Frequency, on the abscissa, is scaled in accordance with the frequency
dependence of the auditory filters’ equivalent rectangular bandwidth (ERB; cf. Moore, 2003, chap. 3). The
three curves were calculated on the basis of the model proposed by Moore, Glasberg, and Baer (1997) and
revised by Glasberg and Moore (2006).
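The claim above that the relative bandwidth of the peripheral auditory filters increases toward low frequencies can be checked with the published ERB approximation of Glasberg and Moore (1990); this sketch uses only that one formula, not the full excitation-pattern model:

```python
def erb_hz(f_hz):
    # ERB of the auditory filter centred on f_hz (Glasberg & Moore, 1990).
    return 24.7 * (4.37 * f_hz / 1000.0 + 1.0)

# Relative bandwidth (ERB / centre frequency) for the components of the
# lowest harmonic stimulus used in the experiments (125-1000 Hz):
relative_bw = {f: erb_hz(f) / f for f in (125.0, 250.0, 500.0, 1000.0)}
```

The relative bandwidth falls from about 0.31 at 125 Hz to about 0.13 at 1000 Hz, which is why the overlap between the excitation patterns of neighboring components is largest at the low-frequency end of the stimulus range.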
As a matter of fact, investigations on the audibility of beats of mistuned
consonances in monaural dyads of pure tones have suggested that such beats are not
audible when each of the tones is below 50 dB SPL (Plomp, 1967 ; 1976 ; Viemeister,
Rickert, & Stellmack, 2001). Nevertheless, it was important to check that beats resulting
from cochlear interactions were indeed never detectable in Experiment 1. This was the purpose of Experiment 2.
3.3 Experiment 2
3.3.1 Rationale
In a previous study, the ability of listeners to detect deviations from the octave interval was investigated using dyads of simultaneous
pure tones. On each trial, listeners were presented with two successive dyads, one
composed of tones exactly one octave apart and one in which the interval formed by the
tones was either slightly larger (positive mistuning) or slightly smaller (negative
mistuning); the task was to indicate which of the two dyads was a mistuned octave. The
two components of each dyad were presented at a low sensation level in a pink-noise
background, so that they were unlikely to interact in the cochlea. Unexpectedly, the
experimental results showed that negative mistunings were easier to detect than positive
mistunings of the same size in terms of relative frequency deviation.3 Although the origin
of this perceptual asymmetry was not elucidated, one could infer from it that the listeners
did not detect mistuning by detecting beats resulting from cochlear interactions: Had this
been the case, no significant effect of the sign of mistuning was expected since the
3 For mistunings matched in terms of absolute, rather than relative, frequency deviation, the
observed perceptual advantage of negative mistunings would have been very slightly (and negligibly)
larger.
waveform obtained by adding two tones with frequencies f and (2f + ε) Hz is essentially the same, with respect to its slow fluctuations, for positive and negative values of ε: in both cases it fluctuates at a rate of |ε| Hz.
We thought that if the same perceptual asymmetry was observable for stimuli similar to
those used in Experiment 1, this would provide evidence that, in the latter experiment as
well as in the previous study, mistuned octaves did not elicit significant cochlear beats.
3.3.2 Method
As in Experiment 1, each stimulus consisted of four synchronous pure tones equally spaced on a logarithmic frequency scale, gated on and off
with 20-ms raised cosine functions, and presented diotically at the same nominal SPL.
Once more, the frequency of the lowest component of each stimulus was selected
randomly between 125 and 250 Hz, and each component had a random initial phase.
However, the stimuli now had a total duration of 500 ms instead of 100 ms; this increase in duration was intended to make any beats easier to detect. As in Experiment 1, four successive stimuli (S1, S2, S3, S4) were presented on each trial and listeners were requested
to identify the odd man out, which was equiprobably either S2 or S4. Again, the odd man
out differed from the other three stimuli with respect to the frequency ratio of
neighboring tones (Oct). The silent ISIs separating the four stimuli were the same as in
Experiment 1, and correct-answer feedback was again provided visually following each
response.
The main novelty of Experiment 2 was that Oct now had a reference value
(Octreference) of exactly 1 octave. On each trial, therefore, the odd man out was the only
stimulus for which Oct differed from 1 octave. In one condition, named the "stretching"
condition, the value of Oct in the odd man out (Octtarget) was always larger than 1 octave.
In another condition, named the "compression" condition, Octtarget was always smaller than
1 octave. In both conditions, Octtarget was varied adaptively within blocks of trials, in order to estimate a mistuning threshold, defined as the value of Δoct = | Octtarget – Octreference |, for which the probability of a correct response was 0.75. This was done using
the "weighted up-down" paradigm described by Kaernbach (1991). At the onset of each
block of trials, Δoct was large, well above the expected threshold. Then, Δoct was decreased
following each correct response and increased following each incorrect response. A block
ended after the 14th reversal in the variation of Δoct . Until the 4th reversal, Δoct was
multiplied by 2.25 when it was increased, and divided by the cube root of the same factor
when it was decreased. After the 4th reversal, Δoct was either multiplied by 1.5 or divided
by the cube root of this factor. The threshold measure obtained in a block of trials was
computed as the geometric mean of all the Δoct values used from the 5th reversal on.
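The adaptive rule just described can be sketched as a simulation. This is our own illustration of the weighted up-down rule, not the software used in the experiment; `p_detect`, a psychometric function mapping Δoct to the probability of a correct response, is a placeholder supplied by the caller:

```python
import math
import random

def run_weighted_staircase(p_detect, start=0.05, n_reversals=14):
    """Simulate one block of the weighted up-down procedure (Kaernbach, 1991)
    described in the text: Delta_oct is multiplied by 2.25 after an incorrect
    response and divided by 2.25**(1/3) after a correct one until the 4th
    reversal; thereafter the factor is 1.5. The block ends at the 14th
    reversal, and the threshold is the geometric mean of the Delta_oct
    values used from the 5th reversal on."""
    delta = start
    reversals = 0
    last_direction = 0       # +1 = last change was upward, -1 = downward
    track = []               # Delta_oct values used after the 4th reversal
    while reversals < n_reversals:
        correct = random.random() < p_detect(delta)
        direction = -1 if correct else +1
        if last_direction != 0 and direction != last_direction:
            reversals += 1
        last_direction = direction
        if reversals >= 4:
            track.append(delta)
        factor = 2.25 if reversals < 4 else 1.5
        delta = delta * factor if direction > 0 else delta / factor ** (1.0 / 3.0)
    return math.exp(sum(math.log(d) for d in track) / len(track))
```

Because the upward step is three times the downward step on a logarithmic axis, the procedure converges on the Δoct value yielding 75 % correct responses, as intended.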
"stretching" condition and the "compression" condition; the switch occurred after each
threshold measurement.
The component tones of the stimuli were varied in SPL from 46 dB (the highest
SPL of the tones in Experiment 1) to 76 dB, or vice versa, in 10-dB steps. The SPL was
fixed within sessions. At each SPL and for each listener, 10 threshold measurements were made in each condition.
The apparatus used in this experiment was the same as in Experiment 1. Seven
listeners (four male, three female) were tested. Three of them were listeners L4, L5 and
L6 in Experiment 1. The four additional listeners (L7 - L10) included three students in
their twenties and author LD (aged 58 and designated as L7); these four listeners had
normal hearing, at least for the frequency range covered by the stimuli; none of them had
received a thorough musical education, but two (L7 and L10) had substantial previous
experience with psychoacoustics. For each of the seven listeners, the experiment proper was preceded by practice sessions.
3.3.3 Results
Figure 12 displays the results. Seven panels show, as a function of SPL, the mean
threshold measured in each listener for the two conditions ("stretching" versus
"compression"). The individual results are averaged in the rightmost panel. Note firstly
that SPL had a strong effect on thresholds: as the stimuli increased in SPL, thresholds
systematically decreased (except in the case of L5 for the highest SPL). This trend was
predictable since an increase in SPL increased the potential influence of beat cues
originating from cochlear interactions between the tones. In line with the hypothesis that
the task was performed using beat cues when the SPL was high, there was no significant difference between the two conditions at 66 dB, t(6) < 1, as well as at 76 dB, t(6) = 1.25, p = 0.26. By contrast, there
was a marginally significant effect of condition at 56 dB, t(6) = 2.02, p = 0.06, and the
effect of condition was definitely significant at 46 dB, t(6) = 3.97, p = 0.007. At these two
lower SPLs, thresholds were better in the "compression" condition than in the "stretching" condition. It should be underlined that, in comparison with the reference value of Oct (1 octave), these thresholds
represent quite small frequency deviations, although listeners' performance was even
better at higher SPLs. If the thresholds obtained at 46 dB had been markedly higher, then
their dependence on the sign of mistuning might have been accounted for under the
hypothesis that, even for this low SPL, mistuning detection rested on beat detection.
Figure 12. Results of Experiment 2: mistuning thresholds measured in the "stretching" condition and the
"compression" condition as a function of the SPL of the tones. The data obtained from each of the seven
listeners (L4, L5, ... L10) are displayed in a separate panel. In these seven panels, each data point is the mean
of 10 threshold measurements and the error bars represent ± 1 standard error of the mean. In the rightmost
panel, the results of the seven listeners are averaged. In each panel, for clarity, the data points corresponding
to the two mistuning conditions are slightly shifted horizontally (to the left for the "stretching" condition
and to the right for the "compression" condition).
A negative mistuning, by reducing the frequency distance between the tones, increased the possibility of cochlear interactions, whereas the opposite was true for a positive mistuning. To assess this contrast for 46-dB tones and mistunings such as the thresholds mentioned above, we
used again the excitation-pattern model of Moore et al. (1997). We assumed that the
magnitude of a cochlear interaction between two tones was directly reflected by the height
of the excitation-pattern trough between their frequencies. The height of the troughs
increased as Oct decreased. According to the model, this effect was strongest for the third
trough, representing a potential interaction of the upper two tones; on average, the third
trough was about 1.7-dB higher for the Oct value corresponding to a just-detectable
compression than for the Oct value corresponding to a just-detectable stretching. Such a
small difference does not seem able to account for the very substantial dependence of thresholds on the sign of mistuning. Note, in addition, that if cochlear interactions were responsible for this dependence, then increasing the SPL of the tones by 10 dB should have markedly increased the magnitude of these interactions.
According to the model of Moore et al. (1997), the third trough in the excitation pattern
was about 12 dB higher for 56-dB tones than for 46-dB tones. In spite of this, the
thresholds measured in the "stretching" condition with 56-dB tones did not differ
significantly from those measured in the "compression" condition for 46-dB tones, t(6) =
1.14, p = 0.30. Clearly, therefore, the perceptual asymmetry observed in the present
experiment for 46-dB tones does not seem consistent with the hypothesis that, at this
level, deviations from one octave were detected by means of beat cues originating from cochlear interactions.
It has been reported that, for some listeners, a mistuned octave composed of
simultaneous pure tones can elicit the perception of faint beats when the two tones are
presented to opposite ears (Thurlow & Bernstein, 1957 ; Thurlow & Elfner, 1959 ; Tobias,
1963 ; Feeney, 1997); according to Thurlow (Thurlow & Bernstein, 1957 ; Thurlow &
Elfner, 1959), this is possible even for tones at a sensation level as low as 30 dB. These dichotic beats of mistuned consonances (DBMCs) appear to reflect fluctuations of perceived localization by the binaural system (Feeney, 1997). Since in our two experiments the
stimuli were presented diotically, it is conceivable that beat detection was facilitated by
the binaural mechanism responsible for the audibility of DBMCs. However, the
perceptual asymmetry disclosed by Experiment 2 does not seem easier to fit in with this
"central beats" hypothesis than with the "peripheral beats" hypothesis considered above.
Given the octave enlargement phenomenon mentioned earlier in this article, it might be expected, in theory, that simultaneous and soft pure tones
approximately one octave apart fuse maximally when the octave is slightly stretched rather
than perfectly tuned from the physical point of view. This bias could give rise to a detection advantage for positive mistunings of the octave.
However, the results of Experiment 1 fail to provide evidence for the bias in question.
Of course, negative mistunings could still be better detected than positive mistunings if
maximum fusion was systematically obtained for the physically perfect octave: it may
simply be, for some reason, that fusion decreases more rapidly for negative deviations than for positive deviations.
Although it is now understood why two simultaneous complex tones (for instance two vowels or violin
sounds) are perceived as more consonant when their fundamental frequencies are in a
simple frequency ratio than when this is not the case (Plomp, 1976 ; McDermott, Lehr,
& Oxenham, 2010), the roots of harmonic fusion for simultaneous pure tones are still
unclear. Moreover, the reason why an affinity is perceived between sequentially presented
pure tones one octave apart is also unknown. As discussed below, both of these
phenomena might originate from a learning mechanism, or they might stem from innate
properties of the auditory system. In the present study, we sought to clarify the processes underlying these phenomena. More specifically, we sought to test the idea that the same processes are at work for simultaneous tones and for
consecutive tones. We assessed the effect of ratio simplicity on ratio discrimination, for
simultaneous and sequentially presented tones. Markedly different results were obtained
in these two situations (Experiment 1), in spite of the absence of significant cochlear
interactions between the tones when they were simultaneous (Experiment 2). For simultaneous tones, we found that ratio discrimination was affected by ratio simplicity. Moreover, we found that ratio simplicity – or, more precisely, distance to the
simple ratio 2:1 – was in itself the physical correlate of one discrimination cue. This result
fitted with a hypothesis called "H2" in the introduction of our paper; according to H2, frequency-ratio discrimination takes place along two distinct perceptual dimensions, respectively representing ratio simplicity and ratio magnitude. For sequentially presented tones, on the other hand, no effect of ratio simplicity was found. Could this negative result stem from some inadequacy of our stimuli? The tones that we presented either simultaneously or sequentially were the
same in these two conditions; this was of course mandatory for a fair comparison between
the two conditions. One might think, however, that the melodic sequences produced in
the "sequential" condition were such that the frequency ratios of the tones could not be
perceived optimally. Against this concern, it may be noted first that the speed of the
sequences (five tones per second) was far from amusical: this speed is that of a series of
eighth notes in "allegro" tempo; such series are very common in music. More crucially, it
must be emphasized that, overall, listeners did not perform more poorly in the "sequential" condition than in the "simultaneous" condition. This is clearly at odds with the idea that frequency-ratio perception was disadvantaged in the
"sequential" condition.
The fact that ratio simplicity had no effect in the "sequential" condition is not
particularly surprising in the light of previous research. Although, for consecutive tones,
Schellenberg and Trehub (1994 ; 1996a ; 1996b) found an advantage of ratio simplicity in
ratio discrimination, Houtsma (1968) reported that ratio discrimination thresholds do not
vary significantly when the standard ratio varies from 1.9 to 2.1. Moreover, Dobbins and
Cuddy (1982), who used stimuli similar to those employed by Houtsma (1968), concluded
from their frequency-ratio categorization experiment that the function relating physical frequency ratios to their perceptual representations shows no singularity at simple ratios (in musicians as well as non-musicians). It may be that, for consecutive tones, ratio simplicity affects
ratio discrimination only if special stimuli and/or experimental procedures are employed,
while simplicity effects are easier to demonstrate with simultaneous tones. We must also
point out that while the results obtained in our "sequential" condition are inconsistent
with the hypothesis referred to above as H2, they are not clearly inconsistent with H1. H2
posits that frequency-ratio discrimination takes place along two distinct perceptual dimensions, respectively representing ratio simplicity and ratio magnitude. According to H1, ratio magnitude is the only discrimination cue; simplicity is not in itself a discrimination cue; however,
discriminability is better near simple ratios than near complex ratios. Our measurement
of psychometric functions with a reference ratio of 0.97 octave was not well suited for a test of H1 (that is, for assessing an effect of the proximity of the simple ratio 2:1 on ratio discrimination).
Given that ratio simplicity had no effect for sequentially presented tones, and that a simultaneous presentation of the same tones did not produce extra cues
resulting from cochlear interactions, it is remarkable that in the latter condition we did
obtain an effect of ratio simplicity. This contrast reflects the fact that ratio simplicity
produces quite different perceptual effects in the melodic and harmonic domains. Two
successive pure tones forming a melodic octave are perceived as having related pitches,
but, at least for normal adult listeners, they are easily distinguishable from each other.
Indeed, with respect to pitch, their difference is not less obvious than their affinity. By
contrast, two simultaneous pure tones forming an octave interval are perceptually fused
into a single entity. In the auditory system, this fusion may well take place below the level
at which representations of pitch per se are extracted from pure tones (as well as from
complex tones). An analogous point was recently made by Borchert, Micheyl, and
Oxenham (2011). In their study, listeners had to detect slight differences in pitch between two complex tones filtered into two separate, non-overlapping spectral regions. When the tones were
presented consecutively, performance was poor; listeners found the task difficult because
of the large difference in timbre between the tones. However, when the tones were
presented synchronously, performance was markedly better and the task was subjectively
much easier, because it could then be performed by using a fusion cue rather than by comparing pitches.
Terhardt (1974) argued that harmonic fusion originates from a learning process. He suggested that, early in life,
humans initially perceive simultaneous pure tones forming a harmonic series as separate
auditory entities, but that subsequently such tones are fused because of their frequent co-
occurrence in the natural acoustic environment (and especially their systematic co-occurrence in vocal sounds). Shamma and Klein (2000) also took the view that harmonic
fusion is learnt, and they put forth a physiologically based model whereby this learning
could occur even in the absence of harmonic inputs. Although the spectral structure of
the stimuli used here was always dissimilar to that of natural harmonic sounds (since the
components of the latter sounds do not form only octave intervals), the frequency-ratio
simplicity effect that we obtained with simultaneous tones could in principle be based on
a learning process.
However, harmonicity is not a mandatory condition for the perceptual coherence of simultaneous pure tones, as shown
by Roberts and his co-workers (Brunstrom & Roberts, 2000 ; Roberts & Bailey, 1996 ;
Roberts & Brunstrom, 1998 ; 2001 ; 2003). When the elements of a harmonic series (e.g.,
200, 400, 600, ... 2000 Hz) are all shifted in frequency by the same amount in Hertz (e.g.,
50 Hz), their perceptual coherence is somewhat reduced but not dramatically, although
they no longer form a harmonic series. Their constant frequency spacing is sufficient to
produce perceptual coherence, as indicated by the fact that if a single element of the
inharmonic set is slightly shifted in frequency, thus making the frequency spacing locally
irregular, the shifted element tends to stand out perceptually. Given the rarity, in our
acoustic environment, of inharmonic stimuli such as the one just described, their perceptual coherence can hardly be attributed to a learning process.4 The coherence of these inharmonic stimuli, as well as harmonic fusion, may instead originate from the
temporal coding of frequency by the auditory system. Because the successive action potentials fired by an auditory-nerve fiber in response to a pure tone tend to occur at a particular phase of the tone, these successive neural spikes are generally separated by time intervals
close to the period of the tone and its integer multiples (McKinney & Delgutte, 1999).
This implies that simultaneous pure tones forming simple frequency ratios will produce,
in separate auditory-nerve fibers, spikes separated by common time intervals. For two
pure tones one octave apart, indeed, all the time intervals present in the neural response
to the lower-frequency tone should also be present, approximately, in the neural response
to the higher-frequency tone. Relations of that kind may be recognized at a higher level of the auditory system by mechanisms performing temporal analyses of neural
responses (Licklider, 1951 ; Meddis & Hewitt, 1991 ; 1992 ; Patterson, Handel, Yost, &
Datta, 1996 ; Yost, 1996 ; Cariani & Delgutte, 1996 ; Cariani, 2001 ; de Cheveigné,
2005). Such a mechanism can in principle explain harmonic fusion, as well as many aspects
of pitch perception, including the perceptual affinity of consecutive tones one octave apart.
4 One can also discard the idea that their perceptual coherence originates from special intrinsic
properties of their spatial (tonotopic) representation in the auditory system: pure tones equally
spaced in Hertz do not produce equally spaced neural excitations.
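The interval-based account sketched above can be illustrated with idealized phase-locked spike trains, in which interspike intervals are integer multiples of the stimulus period. The frequencies and interval counts below are chosen for illustration only:

```python
def interspike_intervals(freq_hz, n_multiples):
    """Idealized set of interspike intervals (s) evoked by a pure tone:
    integer multiples of the period, as implied by phase locking."""
    period = 1.0 / freq_hz
    return {round(k * period, 12) for k in range(1, n_multiples + 1)}

lower = interspike_intervals(200.0, 8)    # hypothetical 200-Hz tone
upper = interspike_intervals(400.0, 16)   # tone one octave higher

# Every interval present in the response to the lower tone also occurs in
# the response to the higher tone, but not conversely:
assert lower <= upper and not upper <= lower
```

For a pair of tones slightly mistuned from the octave, this inclusion relation breaks down, which is the kind of deviation an interval-matching mechanism could in principle detect.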
Therefore, the perceptual discrimination of an octave from slightly larger or
smaller frequency ratios could rest upon the use of temporal information. Alternatively,
this perceptual discrimination might be based on the use of spatial information. In the
latter view, consistent with the fact that the auditory system has a tonotopic organization,
the octave would be identified from the specific tonotopic distance between the excitations produced
by pure tones. For simultaneous tones, our data would then imply, rather paradoxically,
that deviations from this specific distance can be detected without being recognized as
upward or downward deviations. Indeed,
the fact that performance decreased when the target frequency ratio was made larger than
one octave implied that the (positive) sign of this mistuning from one octave could not
be discriminated from the (negative) sign of the mistuning of the reference frequency ratio
(0.97 octave).
Terhardt (1974) argued that harmonic fusion stems from a learning process. In his theory, two
simultaneous pure tones one octave apart are fused merely because they co-occur in
frequently heard sounds, such as the vowels of speech. Terhardt also suggested that, in
consequence of our frequent exposition to vowels and other periodic complex tones, we
acquire not only harmonic octave templates but also melodic octave templates. More
generally, the learning process responsible for harmonic fusion would also account for
the affinity perceived between successive pure tones forming simple frequency ratios,
such as the octave. The results reported here do not support the latter hypothesis. If, as
supposed by Terhardt, melodic and harmonic octaves were recognized by means of the
same internal templates, then these templates should show similar properties in melodic
and harmonic conditions. However, our results imply that the harmonic octave is
recognized by a mechanism which is insensitive to the sign of deviations from the octave,
whereas this is apparently not the case for the melodic octave.
As pointed out in the introductory section of this article, Terhardt (1970, 1971, 1974; Terhardt, Stoll, &
Seewann, 1982) argued that the harmonic complex tones typical of our auditory
environment are such that their spectral components partially mask each other in the
auditory periphery. As a result, the pitches evoked by these spectral components would
be slightly different from those evoked by physically identical pure tones presented in
isolation. The acquired octave templates would incorporate these small pitch
shifts; this would account for the fact that, in general, a melodic octave is heard as
optimally tuned for a physical frequency ratio slightly larger than 2:1.5 Here, we used
complex tones composed of pure tones which did not mask each other significantly, as
shown by Experiment 2. Thus, the pitch-shift effects invoked by Terhardt should not
have occurred for these stimuli. In consequence, Terhardt’s theory predicted that
maximum fusion would be obtained for frequency ratios slightly larger than one octave.
However, in Experiment 1, all listeners but one (L6) performed better when the target frequency ratio
was exactly one octave than when it was equal to 1.01 octave, as can be seen in Figure 10.
Admittedly, the target frequency ratios may have been too coarsely sampled to reveal an
enlargement smaller than 0.01 octave. Note, moreover, that when listeners are required to
set two successive tones one octave apart, only very small enlargement effects (< 1 %) are
5 Terhardt’s explanation for the octave enlargement effect is based on the assumption that the pitch
of a pure tone is coded spatially (tonotopically) in the auditory system. Ohgushi (1983) and
McKinney and Delgutte (1999) have proposed an alternative explanation, assuming instead that pure
tone pitch depends on the temporal coding of frequency in the auditory nerve.
found at low frequencies, in spectral regions covered by the stimuli used here (cf. Fig. 1).
Nevertheless, as pointed out above, our results clearly question the validity of
Terhardt’s hypothesis on the origin of tonal affinity and the perception of the melodic
octave. Two other problems for this hypothesis have been uncovered previously. First,
the pitch shifts invoked by Terhardt are apparently difficult to replicate; it has been suggested that these pitch shifts do not
really exist (Peters, Moore, & Glasberg, 1983 ; Hartmann & Doty, 1996). Second, the
precision with which harmonic octave dyads can be identified becomes quite poor when
the frequency of the lower tone exceeds about 1000 Hz, whereas this is not the case for
melodic octave dyads (Demany & Semal, 1990 ; Demany, Semal, & Carlyon, 1991); if
sensitivity to melodic "octaveness" were derived from experience with harmonic complex
sounds, how could it be more acute than sensitivity to harmonic octaveness? However,
the latter objection applies only to the perception of melodic octaves at high frequencies.
This is also largely true for the former objection, insofar as the octave enlargement effect
is quite small and barely detectable at low frequencies. In contrast, the present research
suggests that harmonic and melodic octaves are perceptually recognized as "special"
4 ARTICLE « HARMONIC FUSION AND PITCH AFFINITY: IS THERE A DIRECT LINK? »
The studies described above (Bonnard, Micheyl, Semal, Dauman, &
Demany, 2013) provide important information about the mechanisms of harmonic
fusion; in particular, they indicate that a model such as the one proposed by Terhardt
(1974), resting on templates usable for both the harmonic and the melodic
octave, seems hardly compatible with the results obtained. It is more likely, on the
contrary, that harmonic fusion and the perceived pitch affinity between pure tones at the
octave rely on distinct mechanisms. In these experiments, which rested
on an objective measure of performance, the participants presumably tried to use the
most effective cues available. In the harmonic (or simultaneous) condition, the results
were compatible with the use of two combined cues, harmonic fusion and a cue that we
called "compactness", whereas in the melodic (or
sequential) condition, the participants seemed to use only one cue, compactness. It is likely
that the paradigm used in the melodic condition did not favor the use of a cue
resting directly on perceived pitch affinity, especially if that cue proved
unreliable. Two questions remained open:
(1) Is the sensation of maximal fusion between simultaneous pure tones (separated by an
interval close to the octave) obtained for the same frequency ratio as the maximal perceived
affinity between successive pure tones? In the melodic condition, it is established
that the maximal perceived affinity is obtained for an interval slightly larger than the
physical octave (e.g., Dobbins & Cuddy, 1982; McKinney & Delgutte, 1999). Terhardt (1971), whose model rests on
templates learned through exposure to physically harmonic intervals, explains
this phenomenon by the presence of cochlear interactions between the components of
complex tones, interactions which slightly shift the
spectral pitches of the components (with a repulsion effect between the pitches), leading to the
creation of "harmonic" templates slightly larger than the physical octave. According to
this model, maximal fusion between simultaneous pure tones should be obtained for a
frequency ratio slightly larger than a physical octave, in proportions
comparable to what is measured in the case of the melodic octave enlargement. Our
previous data seem to indicate that the fusion cue was maximal for a frequency
ratio exactly equal to 1.0 octave (cf. section 3.2.2); nevertheless, an enlargement smaller than
0.01 octave, which is entirely plausible according to the data on octave enlargement
(McKinney & Delgutte, 1999), could not be detected with that protocol.
(2) How can one explain the asymmetry observed in Experiment 2 (cf. section 3.3), namely
the easier detection of inharmonicity for frequency ratios slightly
smaller than the octave than for ratios slightly larger? This asymmetry
bears on the mechanism of harmonic fusion and has already been reported in other
work (Demany, Semal, & Carlyon, 1991; Borchert, Micheyl, & Oxenham, 2011). One
element of an answer could come from subjective judgments of fusion, revealing whether the
position of the fusion maximum, or an asymmetry of its slopes on either
side, parallels the asymmetry of sensitivity to harmonicity.
The study presented in the next section (Bonnard, Dauman, Semal, & Demany,
2016) was designed to determine whether harmonic fusion and the perceived pitch affinity
between pure tones at the octave are two directly related perceptual phenomena. The stimuli
were composed of three pure tones whose frequency ratio ranged from 0.96 to 1.04
octaves. The results indicate that the frequency ratio
giving a sensation of maximal consonance or fusion between the components (in
the simultaneous condition) is very close to the physical octave, whereas the maximal
pitch affinity between successive tones is obtained for a frequency ratio
significantly larger than one octave, and this affinity decreases more rapidly above
the octave than below it. These differences indicate that harmonic fusion and pitch
affinity are not directly related phenomena. The section below presents the
original English manuscript of the article "Harmonic fusion and pitch affinity: Is there a
direct link?", which has just been published in Hearing Research (Bonnard, Dauman, Semal, &
Demany, 2016).
HARMONIC FUSION AND PITCH AFFINITY: IS THERE A
DIRECT LINK?
4.1 Introduction
A sum of simultaneous pure tones whose frequencies form a harmonic series (small-integer
frequency ratios) is normally heard as a single sound, evoking a single pitch sensation,
even though the component tones may be largely "resolved" (i.e., segregated) in the
cochlea (Moore, Glasberg, & Peters, 1986; Carlyon, Demany, & Semal, 1992; Roberts &
Bailey, 1996; Lin & Hartmann, 1998; Kalluri, Depireux, & Shamma, 2008). This
perceptual phenomenon, which is thought to play a major role in the auditory analysis of
everyday acoustic scenes (Bregman, 1990), has been called
"harmonic fusion". In addition, there is ample evidence that pure tones presented
successively rather than simultaneously are perceived as more similar in pitch when they
have a harmonic frequency ratio, especially 2:1 (one octave), than when their frequency
ratio is not harmonic. The perceptual affinity of tones
one octave apart ─ often called "tone chroma" following Bachem (1937) ─ has been
demonstrated using tasks requiring musical judgments (…, 1977; Idson & Massaro, 1978; Kallman & Massaro, 1979; Massaro, Kallman, & Kelly,
1980), as well as using tasks that did not require musical judgments (Deutsch, 1973; Borra,
Versnel, Kemner, van Opstal, & van Ee, 2013); moreover, this affinity has been observed
not only in human adults but also in three-month-old human infants (Demany & Armand,
1984), in rhesus monkeys (Wright, Rivera, Hulse, Shyan, & Neiworth, 2000), and in rats
(Blackwell & Schlosberg, 1943). It is natural to hypothesize that the perceived affinity in
pitch of non-simultaneous pure tones one octave apart is directly related to their
propensity to fuse when they are presented simultaneously. The present study was
designed to test this hypothesis, henceforth called hypothesis H. According to
hypothesis H, crucially, simultaneous pure tones one octave apart are perceptually fused
after an initial identification of their individual pitches by the central auditory system.
Fusion may then occur because the relation between these pitches is recognized as an
octave due to the existence of internal templates of the pitch intervals formed by
harmonically related simultaneous tones. The existence of an internal template for the
simultaneous octave could explain why a perceptual affinity is perceived between non-simultaneous pure tones one octave apart.
The mechanism of harmonic fusion has been the subject of intense speculation
since the time of Helmholtz (1877/1954) and is still unclear (for reviews, see Carlyon &
Gockel, 2008; Micheyl & Oxenham, 2010). As shown by Roberts and his coworkers
(Roberts & Bregman, 1991; Roberts & Bailey, 1996; Roberts & Brunstrom, 1998, 2001;
Brunstrom & Roberts, 2000), harmonicity is not a necessary condition for the perceptual
fusion of simultaneous and peripherally resolved pure tones: regularity of spectral spacing
is sufficient to induce some amount of fusion. For instance, a sum of sinusoids with
frequencies equal to 230, 430, 630, ... 2430 Hz tends to be heard as a coherent sound. The
components of this inharmonic stimulus are difficult to hear out individually. Disrupting
the spectral regularity by shifting the frequency of one component by a few percent, in
contrast, makes the shifted component perceptually salient (Roberts &
Brunstrom, 1998). This is inconsistent with the idea that spectral fusion is entirely
determined by harmonicity.
Roberts and Brunstrom (2001) argued that spectral fusion, as a whole, can be better
explained by models of the autocorrelation type, such as those
proposed by Licklider (1951) and Meddis and Hewitt (1991). The crucial assumption of
Roberts and Brunstrom is that, at odds with hypothesis H, the neural information used
for spectral fusion (or segregation) is not a neural representation of the frequencies or
pitches of the stimulus components, but instead a more direct
representation of the stimulus components, in the time domain; spectral fusion and pitch
perception would thus be independent of each other,
contrary to the idea that "Nature gave us pitch to sort out the world" (Hartmann, 1996, p.
3502). Nevertheless, harmonically related tones are perceptually fused to a greater extent
than tones with a regular spectral spacing but without harmonic relations (Roberts &
Bailey, 1996; Roberts & Brunstrom, 1998; Brunstrom & Roberts, 2000). This gain in
fusion caused by harmonicity might be due to the existence of harmonic templates in the
central auditory system and might be related to the perception of pitch affinity. Therefore,
the findings of Roberts and his coworkers do not rule out hypothesis H.
The classical pitch perception model proposed by Terhardt (1974) assumes that
harmonic fusion is not innate and that human listeners acquire harmonic templates in
early life through repeated exposure to periodic complex tones. Before this learning
process, the resolved spectral components of a periodic complex tone are supposed to be
perceived individually; each of them then evokes a separate "spectral pitch". The
templates acquired during the learning process represent intervals between spectral
pitches rather than frequency intervals. That is an important feature of the model because,
according to Terhardt (1970, 1971, 1974), the pitch evoked by a partial in a periodic
complex such as those typically heard by humans tends to be slightly different from the
pitch evoked by the same pure tone presented in isolation. In consequence of these small
pitch shifts, supposedly due to the existence of cochlear interactions between the partials
of a complex tone, the acquired harmonic pitch intervals correspond to frequency ratios
which, for pure tones presented successively rather than simultaneously, are slightly
different from small-integer ratios. In this way, Terhardt provided a tentative explanation
for the "octave enlargement" phenomenon: for human
listeners, successive pure tones must typically have a frequency ratio slightly exceeding 2:1
in order to form a perfectly tuned melodic octave (Ward, 1954; Walliser, 1969; Sundberg
& Lindqvist, 1973; Dobbins & Cuddy, 1982; Demany & Semal, 1990; Hartmann, 1993).
Terhardt (1970, 1971, 1974) argued that as the cochlear interactions between partials
result in repulsions between their pitches, these interactions must be the source of the
octave enlargement. In Terhardt's theory, therefore, both harmonic fusion and the
perceived affinity of successive tones forming approximately harmonic frequency
ratios stem from one and the same learning process. This is in line with hypothesis H.
The same basic idea was more recently supported by Schwartz et al. (2003) and Ross et
al. (2007).
Terhardt's theory has been challenged on several grounds. First, it has been
suggested that the pitch shifts supposedly explaining the octave enlargement
phenomenon do not really exist (Peters, Moore, & Glasberg, 1983; Hartmann & Doty,
1996). Second, listeners appear to have difficulty in identifying the sign (positive or
negative) of small but detectable deviations from an octave interval for simultaneous pure
tones (Bonnard, Micheyl, Semal, Dauman, & Demany, 2013) but this is not the case for
successive pure tones (Dobbins & Cuddy, 1982; Bonnard, Micheyl, Semal, Dauman, &
Demany, 2013); this difference seems to be at odds with the assumption that the same
internal octave templates are used to detect mistunings with simultaneous and successive
tones. Third, sensitivity to the harmonicity of simultaneous pure tones becomes
extremely poor above about 2 kHz (Demany & Semal, 1988, 1990;
Demany, Semal, & Carlyon, 1991), and it is also extremely hard to identify, within complex
tones with many harmonics, a mistuned component when its frequency exceeds 2 kHz
(Hartmann, McAdams, & Smith, 1990); in contrast, melodic octaves can still be
recognized very accurately up to 4 kHz (Ward, 1954; Demany & Semal, 1990); how could
this be the case if the internal templates of the melodic octave were entirely derived from
experience with harmonic complex sounds? Nonetheless, the experimental evidence that
we just summarized does not definitely discredit Terhardt's theory, and more generally
hypothesis H.
In a study by Borchert, Micheyl, and Oxenham (2011), listeners had to detect a small difference in fundamental frequency (F0) between
two simultaneous groups of harmonics occupying different spectral regions. According to
Borchert et al., the perceptual cue used in this task was perceived fusion. Detection
performance was poorer when the spectrally higher group of harmonics had the higher
F0 than when it had the lower F0. Thus, positive mistunings were less well detected than
negative mistunings. The authors suggested that this asymmetry had the same origin as
the octave enlargement effect. A similar asymmetry had been previously reported by
Demany et al. (1991), who found that for two simultaneous pure tones, positive deviations
from an octave interval (perfectly tuned from the physical point of view) are less well
detected than negative deviations. However, these authors did not interpret their findings
in the same manner as Borchert et al. (2011). In fact, the studies of Demany et al. and
Borchert et al. measured listeners' ability to discriminate (physically)
harmonic sounds from (physically) inharmonic sounds, but they provide no information
about the relationship between frequency ratio and perceived fusion because they did not
assess perceived fusion directly. So, the observed asymmetry in the detection of
inharmonicity is not necessarily related to the octave enlargement effect, and it has no clear
implication with respect to hypothesis H.
If hypothesis H is correct, then simultaneous pure tones approximately one octave apart should be
maximally fused when the perceived affinity of their pitches is maximal; moreover, fusion
and pitch affinity should decay similarly above and below the pitch interval corresponding
to this maximum. In the experiment reported here, we measured how
perceived fusion ─ assessed with simultaneously presented tones ─ and pitch affinity ─
assessed using sequential presentation ─ depend on frequency ratio, for frequency ratios
in the vicinity of 2:1. An important point is that, in the two experimental conditions
("simultaneous" versus "sequential"), the tones were presented at a low sensation level
within a background of threshold-equalizing noise (Moore, Huss, Vickers,
Glasberg, & Alcántara, 2000). Given the frequency selectivity of the auditory system (see,
e.g., Moore, 2012), the cochlear representation of a tone in the "simultaneous" condition
was affected to a much greater extent by the noise than by the tonal context. Thus, the
cochlear representation of any tone was presumably identical in the two conditions. The
pitch of the tones may have been affected by the noise, but it was reasonable to assume
that the relation between the frequency of a tone and its pitch was exactly the same in the
two conditions.
4.2 Method
4.2.1 Participants
The experiment has been carried out in accordance with The Code of Ethics of
the World Medical Association (Declaration of Helsinki) for experiments involving
humans.
The participants were 20 students (8 male, 12 female) without known hearing problems.
They were aged between 20 and 25 y. They gave informed consent at the start of the
experiment and were paid an hourly wage for their participation. Most of them had
received a substantial musical education. Each participant was asked for the number of
years during which he or she had played a musical instrument for at least 3 h per week;
this number served as an index of musical experience.
4.2.2 Conditions and stimuli
In the "simultaneous" condition, each stimulus consisted of three
synchronous pure tones mixed with threshold-equalizing noise (Moore, Huss, Vickers,
Glasberg, & Alcántara, 2000). We used triplets rather than pairs of tones because
harmonic triplets (or even more complex harmonic stimuli) are markedly more liable to
be heard "synthetically" (i.e., fused) than to be heard "analytically", while this is not true
for harmonic pairs (Laguitton, Demany, Semal, & Liégeois-Chauvel, 1998). The noise had
cut-off frequencies of 100 and 4000 Hz, an overall level of 62 dB SPL, and a duration of
700 ms; it was turned on and off with 20-ms raised-cosine amplitude ramps. The tones,
also turned on and off with 20-ms raised-cosine ramps, had a duration of 300 ms; they
started 300 ms after the noise onset and ended 100 ms before the noise offset. Each tone
was presented at 15 dB SL, i.e., 15 dB above the masked detection threshold measured for such tones in normal-hearing
listeners in a previous study. Within any stimulus, the frequency ratio of the highest and
intermediate tones was equal to the frequency ratio of the intermediate and lowest tones.
This frequency ratio ("R") had 11 possible values, corresponding to 0.96, 0.97, 0.98, 0.99,
0.995, 1, 1.005, 1.01, 1.02, 1.03, and 1.04 octaves. The frequency of the lowest tone was
selected at random between 200 and 400 Hz, using a uniform probability distribution on
a log-frequency scale. On each trial, the participant was presented with two successive
stimuli, separated by a 300-ms silence. These two stimuli, whose lowest components were
randomized independently, differed also with respect to R. The participant's task was to
indicate which of the two stimuli gave the stronger sensation of fusion between the tones,
i.e., sounded more like a single sound.
In the "sequential" condition, the same tones, noise, and R values were used, but
this time the three component tones of each stimulus were presented one after the other,
without any overlap or pause between them. The three tones each had, again, a duration
of 300 ms and 20-ms onset and offset ramps. They were presented in ascending frequency
order for half of the participants and in descending frequency order for the other half.
The noise started 300 ms before the first tone and ended 100 ms after the third tone. The
two stimuli presented on a given trial were again separated by a 300-ms silence.
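The construction of the tone frequencies described above (a log-uniform random lowest frequency, with equal frequency ratios between successive tones) can be sketched as follows; this is an illustrative reimplementation, not the authors' code, and the function name and defaults are ours.

```python
import math
import random

def triplet_frequencies(r_octaves, fmin=200.0, fmax=400.0, rng=random):
    """Frequencies (Hz) of the three tones of one stimulus.

    The lowest frequency is drawn log-uniformly between fmin and fmax, and
    successive tones are separated by r_octaves octaves, so that the ratios
    highest/intermediate and intermediate/lowest are equal (sketch only).
    """
    f_low = math.exp(rng.uniform(math.log(fmin), math.log(fmax)))
    step = 2.0 ** r_octaves        # e.g., r_octaves = 1.02 -> slightly > 2:1
    return (f_low, f_low * step, f_low * step * step)
```

For R = 1 octave, the three tones form an exact 1:2:4 harmonic series starting between 200 and 400 Hz.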
Participants had to indicate which stimulus gave the stronger sensation of pitch affinity,
or similarity, between the tones. This task, as well as the task used in the "simultaneous"
condition, was subjective; accordingly, no response feedback was provided.
4.2.3 Procedure
The stimuli were delivered diotically through Sennheiser HD 650 headphones. Responses
("stimulus 1" or "stimulus 2") were given by means of a mouse click on one of two virtual
buttons displayed on a computer screen.
In each of the two conditions, trials were run in blocks of 55, during which each
of the 11 R values was compared once with each of the other 10 R values. The order of
these 55 comparisons was randomized, as well as the order of the two R values within a given
trial. Forty test blocks, preceded by practice blocks, were
run for each participant in each of the two conditions. This generally required 10
experimental sessions per participant. The condition was fixed within a session and
alternated across sessions. The condition used in the first session was counterbalanced
across participants.
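The block structure described above (each of the 55 unordered comparisons of the 11 R values occurring exactly once per block, in randomized order) can be sketched as follows; this is an illustrative reconstruction of the design, not the authors' code.

```python
import itertools
import random

def block_of_trials(r_values, rng=random):
    """One block of trials: every unordered pair of R values exactly once,
    in random order, with the within-pair presentation order also
    randomized (sketch of the design described in the text)."""
    pairs = [list(p) for p in itertools.combinations(r_values, 2)]
    rng.shuffle(pairs)             # randomize the order of the comparisons
    for pair in pairs:
        rng.shuffle(pair)          # randomize which R value comes first
    return pairs

r_values = [0.96, 0.97, 0.98, 0.99, 0.995, 1, 1.005, 1.01, 1.02, 1.03, 1.04]
print(len(block_of_trials(r_values)))  # 55 trials per block
```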
4.3 Results
From the data collected in the test blocks, for each experimental condition we
computed the percentage of trials on which each R value was voted for when it was
compared with all other R values. The patterns of choice rate for each participant are
displayed in Figure 13, using open symbols for the "simultaneous" condition and closed
symbols for the "sequential" condition.
Figure 13. Choice rates of the 11 frequency ratios (R values). Open symbols: "simultaneous" condition;
closed symbols: "sequential" condition. Each of the 20 panels displays the data obtained from a given
participant (identified by capital letters). In the "sequential" condition, frequency order was ascending for
the participants referred to in the 10 upper panels, and descending for the other 10 participants. Each data
point represents the outcome of 400 trials.
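The choice-rate computation described above can be sketched as follows; this is an illustrative reimplementation in which the trial-record layout is an assumption, not the authors' data format.

```python
from collections import defaultdict

def choice_rates(trials):
    """Percentage of comparisons won by each R value.

    `trials` is a sequence of (chosen_r, rejected_r) pairs; each R value's
    rate is the share of the comparisons involving it on which it was
    voted for (sketch of the computation described in the text)."""
    wins = defaultdict(int)
    appearances = defaultdict(int)
    for chosen, rejected in trials:
        wins[chosen] += 1
        appearances[chosen] += 1
        appearances[rejected] += 1
    return {r: 100.0 * wins[r] / appearances[r] for r in appearances}

# Toy example: R = 1.0 wins both of its comparisons, 0.96 wins one of two.
print(choice_rates([(1.0, 0.96), (1.0, 1.04), (0.96, 1.04)]))
```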
Figure 14. Choice rate (averaged across participants) as a function of R and frequency order (ascending vs. descending) in the "sequential" condition. Each data point represents the outcome of 4000 trials.
Figure 15. Choice rate (averaged across participants) as a function of R in the two experimental conditions ("simultaneous" vs. "sequential"). Each data point represents the outcome of 8000 trials.
For the "sequential" condition, an analysis of
variance (ANOVA) revealed a highly significant main effect of R (F(10, 180) = 44.3, p <
0.001), but no significant interaction between frequency order and R (F(10, 180) = 0.26,
p > 0.90). The results obtained for the two frequency orders were thus pooled for an
overall comparison of the two experimental conditions.
Figure 15 shows the relevant mean data. A new repeated-measures ANOVA indicated a
significant interaction between condition and R (F(10, 190) = 6.87, p < 0.001). However,
it can be seen in Figure 15 that this effect is partly due to the fact that the variation of
choice rate as a function of R was, overall, larger in the "simultaneous" condition than in
the "sequential" condition, where the mean choice rates spanned a narrower range
(31−63 %). Such a difference could a priori arise entirely
from larger inconsistencies between participants in the "sequential" condition, but it appears
Figure 16. Standard deviations (SD) of the choice rates as a function of R in the two experimental conditions. Each data point represents a given participant. The data provided by participants AG and BM were ignored in our regression analyses.
Figure 17. Apices of the choice rate patterns in the two experimental conditions. Open and closed circles represent individual participants, the heights of the rectangles represent means across participants, and the error bars represent ± 1 standard error of the mean.
instead that within-participant variation of choice rate was smaller in this condition. This
is shown by Figure 16, in which we plot for each participant the standard deviation of the
choice rates in the two conditions. The smaller variation of choice rate in the "sequential"
condition thus reflects a genuinely weaker effect of R in that condition.
In both conditions, the patterns of choice rate typically had a V or
U shape. One of our aims was to determine if the apices of these patterns (the tips of the
Vs or Us) were reached for the same frequency ratios in the two experimental conditions.
The other aim was to determine if, in the two conditions, the patterns had a similar degree
of asymmetry about their apices. To these aims, we fitted two regression lines to each
pattern: one line with positive slope was fitted to the data obtained from the lowest R
value (0.96 octave) up to the R value for which choice rate was maximal; the other line,
with negative slope, was fitted to the data obtained from the R value for which choice
rate was maximal up to the highest R value (1.04 octaves)6. The frequency ratio
corresponding to the apex of a pattern was then taken as the abscissa value at the crossing
point of the two regression lines, and the asymmetry of a pattern was quantified by
summing the slopes of the two regression lines7. For two participants, identified as AG
and BM in Figures 13 and 16, the fitting procedure was not suitable in one of the
experimental conditions. The range of these participants' choice rates in the problematic
condition was atypically low; consistent with this, one of the two regression lines captured
not more than about half of the variance (44 % for AG and 52 % for BM; these were the
lowest two proportions of variance accounted for, out of 80 cases). We thus discarded
the data from AG and BM in the regression analyses. For the remaining 18 participants,
the average proportion of variance accounted for by the regression lines was 0.961 in the
"simultaneous" condition and 0.945 in the "sequential" condition; this means that the
fits were generally very good.
Figure 17 displays the individual and mean values of the apex estimates in each
condition. The two means (1.0016 octaves in the "simultaneous" condition and 1.0063
octaves in the "sequential" condition) differ significantly from each other (t(17) = 3.16, p
= 0.006). In the "sequential" condition, there was no reliable effect of frequency order
on the apex estimates (t(16) < 1). The overall mean obtained in the "sequential" condition
was significantly higher than 1 (t(17) = 4.46, p < 0.001), which confirms the so-called
octave enlargement effect. In the "simultaneous" condition, the
very small deviation of the mean from 1 was also statistically significant, at the 0.05 level
6 In one case (out of 40), the pattern of choice rate had two equally high peaks, for adjacent values of
R. In that case, both of the corresponding data points were taken into account for the fitting of each
regression line.
7 An alternative way to quantify the asymmetry of a pattern was to compute the ratio of the absolute
values of the slopes of the two regression lines. We also used this method, and performed statistical
tests on the logarithms of the obtained ratios. The outcomes of these tests were similar to what was
found when the asymmetry index was the sum of the slopes.
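The apex and asymmetry computations described above can be sketched as follows. This is an illustrative reimplementation, not the authors' code: it assumes a single maximum lying strictly inside the range of R values (the two-peak case of footnote 6 is not handled).

```python
def _linear_fit(xs, ys):
    """Ordinary least-squares line: returns (slope, intercept)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

def apex_and_asymmetry(r_values, choice_rates):
    """Apex (abscissa of the crossing point of the two regression lines)
    and asymmetry index (sum of the two slopes) of a choice-rate pattern,
    following the analysis described in the text (sketch only)."""
    i_max = choice_rates.index(max(choice_rates))
    # Rising line: lowest R up to the maximum (inclusive).
    a1, b1 = _linear_fit(r_values[: i_max + 1], choice_rates[: i_max + 1])
    # Falling line: maximum (inclusive) up to the highest R.
    a2, b2 = _linear_fit(r_values[i_max:], choice_rates[i_max:])
    apex = (b2 - b1) / (a1 - a2)   # solve a1*x + b1 == a2*x + b2
    return apex, a1 + a2           # asymmetry > 0: steeper below the apex
```

On a toy pattern with a steeper rising branch, the asymmetry index comes out positive, as in the "simultaneous" condition.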
Figure 18. Asymmetry indices for the choice rate patterns obtained in the two experimental conditions. Each data point represents a given participant. See the text for details about the computation of these indices.
Figure 19. Distribution of the within-participant discrepancies between the apices (abscissa) and the asymmetry indices (ordinate) of the choice rate patterns obtained in the two experimental conditions. On each axis, the discrepancy computed for a given participant is the absolute value of the difference between the figures obtained in the two conditions, divided by the standard deviation of the absolute values obtained for the 18 participants.
(t(17) = 2.33, p = 0.033). There was no significant correlation between the apex estimates
obtained in the two conditions.
Figure 18 displays the asymmetry indices of the patterns, computed by summing, as
explained above, the slopes of the associated regression lines. Each data point represents
a given participant. In the "simultaneous" condition, the average value of the asymmetry
indices was significantly larger than 0 (t(17) = 3.50, p = 0.003), which means that the
patterns of choice rate typically had a steeper slope below their apex (low R values) than
above it (high R values). In the "sequential" condition, the patterns of choice rate also had
a significant asymmetry (t(17) = 2.68, p = 0.016), that did not depend on frequency order
(t(16) < 1); but this asymmetry was opposite to that found in the "simultaneous"
condition, as shown in Figure 18 by the fact that the data points are mainly located in the
lower-right quadrant. Thus, there was a significant difference between the two conditions
with respect to the shape of the patterns (t(17) = 3.96, p = 0.001). In the "simultaneous"
condition, a significant positive correlation was found between the apex estimates and the
asymmetry indices (r(34) = 0.50, p = 0.034); in the "sequential" condition, however, this
correlation was not significant (r(34) = 0.22, p = 0.38). The asymmetry observed in the
"simultaneous" condition is in the direction expected from two previous studies described
in the Introduction (Demany, Semal, & Carlyon, 1991; Borchert, Micheyl, & Oxenham,
2011): both of these studies showed that it is easier to detect inharmonicity for negative
deviations from harmonicity than for positive deviations. To our knowledge, on the other
hand, no prediction could be derived from the literature as to the shapes of the patterns
in the "sequential" condition.
As shown by Figure 19, there was no evidence in our data that the participants
fell into two clearly distinct groups, one group for whom harmonic fusion and pitch
affinity would vary in essentially the same way with R and one group for whom the effect
of R would be quite different in the two experimental conditions. In this Figure, each data
point represents again a given participant. The abscissa is a measure of the within-
participant discrepancy between the apex estimates made in the two conditions, and the
ordinate is a similar measure for the discrepancy between the two asymmetry estimates.
Both measures were computed by taking, for each participant, the absolute value of the
difference between the two estimates, and then dividing it by the standard deviation of
the absolute values obtained for the 18 participants (in order to match the units used on
the two axes). We found a modest but significant correlation between the two measures
(r(34) = 0.53, p = 0.024). More importantly, it can be seen that the data points are not
distributed into two clearly distinct clusters. Participant AD, however, was an extreme
case on both axes and could be reasonably considered as an outlier. As in addition his
pattern of choice rate in the "sequential" condition was not very clear-cut (see Figure 13),
it was useful to check that the main conclusions drawn above remained valid if his data
were discarded. We did find that, without AD, there was still a statistically significant
difference between the apices (t(16) = 3.03, p = 0.008) and the asymmetries (t(16) = 3.67,
p = 0.002) of the patterns of choice rate obtained in the two experimental conditions.
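The normalization used for the two axes of Figure 19 can be sketched as follows; this is an illustrative reimplementation, and the use of the population (rather than sample) standard deviation is an assumption.

```python
def normalized_discrepancies(estimates_a, estimates_b):
    """Per-participant |difference| between two sets of estimates, divided
    by the standard deviation of those absolute differences across
    participants, so that different measures can share comparable units
    (sketch of the computation described in the text)."""
    diffs = [abs(a - b) for a, b in zip(estimates_a, estimates_b)]
    n = len(diffs)
    mean = sum(diffs) / n
    sd = (sum((d - mean) ** 2 for d in diffs) / n) ** 0.5  # population SD
    return [d / sd for d in diffs]
```

After this scaling, the normalized discrepancies have unit standard deviation on each axis, which is what makes the apex and asymmetry discrepancies directly comparable in the scatter plot.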
Let us finally note that, in the present study, we found no obvious and strong
relation between the participants' behavior and the extent of their musical experience
(assessed by the index defined in section 2.1.). For instance, the musical experience of
participants AG and BM, who provided poor data and had to be discarded from the
regression analyses, was apparently not weaker than that of participants ALA or TL, whose data were among the cleanest.
4.4 Discussion
The present study was intended to test the hypothesis (termed "H") that the
perceptual fusion of simultaneous pure tones approximately one octave apart is directly
related to the perceived affinity of their pitches. It was reasoned that if H were correct,
then similar patterns of choice rate should be obtained in the two experimental conditions,
the "simultaneous" and "sequential" conditions. This prediction was clearly falsified: on
average, the patterns of choice rate differed significantly between the two conditions, not
only with respect to their apex but also with respect to shape. For some individual
participants, admittedly, the two measured patterns were quite similar, as shown by Figure
13; our prediction was falsified for the group as a whole, but not for each participant. As
each pattern was certainly affected by random variables and the number of participants
was relatively large, the similarity of the two patterns obtained from a given participant
was to some extent due to chance alone; the extent of the "true" similarity is not easy to
assess precisely. There were undoubtedly genuine individual differences between the
participants. However, as indicated by our last figure, the participants did not appear to
belong to two obviously separate groups, one group for whom hypothesis H would be
patently wrong and one group for whom this hypothesis would seem to be true or at least
plausible.
Before concluding that H is wrong, however, a potential flaw in the rationale must be considered. The rationale was based on the assumption that a given tone had exactly the same pitch in the two experimental conditions.
As the component tones of each stimulus were relatively far apart in frequency
and presented at only 15 dB SL within broadband noise, the neural representation of any
tone at the level of the auditory nerve was undoubtedly identical in the two conditions: it is unlikely that any component could exert a significant partial masking effect of cochlear origin on another component of the stimulus. Conceivably, however, the simultaneous tones interacted at a higher level of the auditory system, in such a way that their pitches slightly differed from those evoked in the absence of contextual tones. However, the literature does not support this conjecture. The pitch-shift effect reported by Gockel et al. (2004; 2009a; 2009b) reflects an interaction between the pitch percepts
evoked by simultaneous complex (rather than pure) tones, and this interaction is
pronounced only when the concurrent pitches are similar with respect to height and differ
greatly in salience. So, the phenomenon in question does not seem to be relevant here. Admittedly, the pitch of a mistuned harmonic within a complex tone can shift by more than the mistuning, even if the mistuned harmonic and its closest spectral neighbor are far
apart in frequency. However, this effect can be interpreted as a consequence of the fact
that the pitch of a non-mistuned harmonic is not consciously audible under normal
circumstances, due to harmonic fusion (de Cheveigné, 1997). Lin and Hartmann (1998),
following Hartmann and Doty (1996) and Peters et al. (1983), suggested that the pitch of
a non-mistuned harmonic does not differ significantly from the pitch of the same tone in
isolation. In the "simultaneous" condition of the present study, it was found that
physically in-tune octaves elicited a very nearly optimal sensation of fusion (see Figure
17). Thus, it is reasonable to think that when perceived fusion was approximately
maximal, the tones were not affected by significant pitch shifts due to their simultaneity. Whereas the "simultaneous" condition did not require explicit comparisons between the pitches of stimulus components, because these pitches were not consciously heard out, explicit pitch comparisons were required in the "sequential"
condition. In addition, the "sequential" condition called for short-term (or "echoic")
memory, whereas this was not the case for the "simultaneous" condition. Could it be that,
in the "sequential" condition, memory processes biased pitch, in a manner accounting for
our main results? In this respect, one should recall that in the "sequential" condition,
frequency order (ascending versus descending) had no significant effect. This implies that
the pitch memory traces did not drift in a fixed direction. It is conceivable that the trace
of a tone slightly drifted upward when a subsequent tone was higher, and downward when
a subsequent tone was lower. As such trends would reduce the perceived pitch interval between the tones, they could in principle account for the shift between the apices of the patterns obtained in the "simultaneous"
and "sequential" conditions. However, it is more difficult to account in similar terms for
the difference between the shapes of the patterns. Therefore, the most parsimonious interpretation of our results is that hypothesis H is wrong.
Harmonic fusion may originate either from innate properties of the auditory system or, as hypothesized by, e.g., Terhardt (1974), from a learning process. Harmonic complex tones, such as voiced speech sounds, are ubiquitous in the acoustic environment of any normal adult listener; thus, if a learning process is involved, it is presumably culture-
free and based instead on the existence of universal physical regularities. In contrast, some
authors (e.g., Burns & Houtsma, 1999) have posited that pitch affinity is strongly culture-
dependent and specifically originates from exposure to music. The fact that we obtained
different results in our two experimental conditions is consistent with this view.
Nevertheless, the results obtained in the "sequential" condition are unlikely to reflect
nothing but arbitrary cultural conventions. To begin with, the octave enlargement effect is difficult to explain from a purely cultural perspective, as is the fact that people perceive the sequential octave more precisely than any other melodic interval (even when the stimuli are pure tones; Rakowski, 1990), while sequential
octaves are actually quite rare in music (Vos & Troost, 1989). In the Introduction, we
mentioned other findings discrediting the idea that the psychological affinity of
sequentially presented pure tones one octave apart is a cultural artifact rather than a
genuinely sensory phenomenon. The affinity in question, and the octave enlargement
effect, may originate from innate physiological processes, in particular the temporal
coding of frequency by the auditory system (Ohgushi, 1983 ; Hartmann, 1993; McKinney
& Delgutte, 1999). This hypothesis is compatible with the present results, even under the assumption that harmonic fusion also depends on the temporal coding of frequency by the auditory system. If the fusion of harmonically related pure tones is not fully predictable from the perceived relations between their
pitches, then pitch is unlikely to be the basis of this fusion. In other words, our results are
at odds with the idea that the process of harmonic fusion begins with a measure of
"spectral" pitches (the pitch elicited by each tone) and ends up with a "suppression" of these pitches. Instead, harmonic fusion could be based on the analysis of neural information that does not represent pitch
itself. As pointed out in the Introduction, simultaneous pure tones without harmonic
relations are nevertheless fused if they have a regular spectral spacing in Hertz (Roberts
& Bailey, 1996). Spectral pitch is most unlikely to play any role in this fusion which, on
the other hand, can be understood in the framework of a temporal autocorrelation model
of auditory processing (Roberts & Brunstrom, 2001). Perceptually, fusion is stronger for
harmonically related tones than for tones with a regular spectral spacing but no harmonic
relationships (Roberts & Bailey, 1996 ; Roberts & Brunstrom, 1998 ; Brunstrom &
Roberts, 2000). It could be thought that the stronger fusion stems from comparisons
between spectral pitches, but in fact the autocorrelation model put forward by Roberts
and Brunstrom (2001) can account for this gain in fusion without computations of
spectral pitch. In the light of the present study, developing this model in order to make it account for the present findings appears to be a promising direction.
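The general idea behind such temporal models can be illustrated with a toy summed-autocorrelation computation. The sketch below is a minimal stand-in for this class of model, not Roberts and Brunstrom's (2001) actual implementation: each pure tone is autocorrelated separately (as if in its own peripheral channel) and the normalized functions are summed, so a harmonic pair shows a strong peak at the common period while an inharmonic pair does not. The sampling rate, durations, and frequencies are arbitrary choices of ours:

```python
import numpy as np

FS = 16000  # sampling rate (Hz), arbitrary

def summed_acf(freqs, dur=0.1):
    """Sum of per-tone normalized autocorrelation functions, a crude
    stand-in for a summed autocorrelogram across peripheral channels."""
    t = np.arange(int(FS * dur)) / FS
    total = np.zeros(len(t))
    for f in freqs:
        x = np.sin(2 * np.pi * f * t)
        acf = np.correlate(x, x, mode="full")[len(x) - 1:]
        total += acf / acf[0]          # normalize so that lag 0 == 1
    return total

def peak_at_period(freqs, f0):
    """Summed ACF value at the lag of the candidate common period 1/f0."""
    return summed_acf(freqs)[round(FS / f0)]

octave = peak_at_period([400.0, 800.0], 400.0)     # harmonic pair (2:1)
stretched = peak_at_period([400.0, 919.0], 400.0)  # ~1.2-octave ratio
# The harmonic pair yields a much larger peak at the common period.
```

Here the harmonicity of a tone pair is read off as the height of a single summed-ACF peak, without any intermediate computation of spectral pitch.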
5 SENSORINEURAL HEARING LOSS AND PITCH PERCEPTION
Hearing losses can be classified along two main dimensions: degree of severity and underlying mechanism. In clinical practice, severity is measured by pure-tone air-conduction audiometry, which estimates the detection threshold for pure tones of various frequencies (generally one measurement per octave between 125 and 8000 Hz). A subject's absolute threshold at a given frequency is expressed in dB HL (Hearing Level), relative to the average threshold of otologically normal young adults, taken as the norm in humans (0 dB HL). The mean hearing loss is obtained by averaging the absolute thresholds at 500, 1000, 2000 and 4000 Hz.
Hearing losses also fall into two broad categories according to the site of the lesion. Conductive hearing losses result from impaired transmission of sound from the external environment to the inner ear. Audiometrically, they are characterized by normal bone-conduction thresholds, reflecting the integrity of the neurosensory structures, whereas air-conduction thresholds are elevated. The structures involved are the external auditory canal and the middle ear (first and foremost the tympano-ossicular system), and the main consequence is an attenuation of the intensity of perceived sounds, which may be accompanied to some extent by distortion. Sensorineural hearing losses, in contrast, result from damage to the inner ear (cochlear origin), but possibly also to the auditory nerve or to the central nervous system up to the auditory cortex (retrocochlear origin). The common feature of all "pure" sensorineural hearing losses is that air- and bone-conduction thresholds are equally affected. Several factors account for the difficulty of drawing sharp boundaries between these categories of hearing loss:
(1) A given etiology of hearing loss may be associated with different lesions within a single structure, and may sometimes affect several structures simultaneously. A good example is presbycusis, the most frequent cause of sensorineural hearing loss in adults, which can affect, in isolation or in combination, the hair cells, the stria vascularis, the basilar membrane and the auditory neurons (Schuknecht, 1993; Nelson & Hinojosa, 2003). The conceptually clear boundary between endocochlear and retrocochlear hearing loss can therefore be difficult to establish, all the more so as the diagnostic tools currently available in routine clinical practice often fail to identify the precise anatomical and pathophysiological origin of a hearing loss.
(2) The auditory periphery does not operate in isolation from the central and cognitive processes involved in the processing of auditory information. Thus, an initially strictly endocochlear lesion may, sooner or later, lead to a reorganization of the central auditory pathways. (3) A selective loss of certain auditory-nerve fibers (those with low spontaneous discharge rates and high response thresholds) can lead to supra-threshold perceptual abnormalities without altering absolute thresholds. Affected listeners may report intelligibility difficulties that are amplified in background noise even though they are considered audiometrically normal: this is the concept of hidden hearing loss (Plack, Barker, & Prendergast, 2014). Pure-tone audiometry in quiet is indeed a simple detection task that says nothing about suprathreshold processing. Data obtained in humans and in animals suggest that these neuropathic lesions may be secondary to noise exposure or aging, and that they degrade the precision of the temporal coding of sounds (Kujawa & Liberman, 2009; Clinard & Tremblay, 2013; Maison, Usubuchi, & Liberman, 2013; Sergeyenko, Lall, Liberman, & Kujawa, 2013).
This lesional heterogeneity partly explains the large inter-individual variability found in many studies of these patients, even with strict audiometric inclusion criteria. Nevertheless, certain auditory signs are frequently encountered in the clinic. In addition to the elevation of absolute thresholds, patients typically report difficulty understanding speech in noisy situations (group meals, public transport, etc.). Subjects often report that they hear but do not understand.
Bearing in mind the caveats stated above, it can be accepted that, apart from indisputable focal retrocochlear lesions (auditory-nerve tumors, central lesions), sensorineural hearing losses generally have a predominantly cochlear component. In the remainder of this thesis, it is agreed that the term sensorineural hearing loss (or simply hearing loss) refers to a deficit of predominantly cochlear origin, even if this deficit is not isolated. The perceptual disorders reported by patients have, moreover, been explored primarily through studies of the cochlear mechanisms themselves.
The physiology of a healthy cochlea relies largely on an active mechanism involving the outer hair cells (OHCs), which is responsible for the nonlinearity, sensitivity and sharp frequency selectivity of basilar-membrane responses. Animal experiments using an aminoglycoside (kanamycin), whose toxicity is mainly directed against the OHCs, showed that normal response thresholds and the frequency specificity of auditory-nerve fibers depend directly on OHC integrity (Evans & Harrison, 1976). Liberman, Dodds & Learson (1986) also used ototoxic lesions, in combination with noise-trauma lesions, to obtain several types of damage selectively affecting the OHCs or the inner hair cells (IHCs). They then studied the tuning curves obtained from the responses of single auditory-nerve fibers, a labeling technique making it possible to identify the IHC with which the recorded neuron made its synapse. In a normal cochlea, the tuning curve takes the form of a "V" whose tip is very sharp and extends down to low stimulation levels (below 10 dB SPL), indicating that each fiber is sharply tuned to a characteristic frequency and responds from low intensity levels onward. When the OHCs alone are damaged (Figure 20A, dotted line), the "V" profile of the tuning curve is greatly flattened, without a sharply tuned tip. When damage affects only the IHCs (Figure 20B, dotted line), a uniform elevation of thresholds is observed, without a change in the overall shape of the curve. These results imply that the OHCs are responsible for an active mechanism that sharpens tuning and enhances sensitivity, whereas an isolated IHC lesion reduces the overall efficiency of transduction, elevating thresholds homogeneously without degrading the active mechanism. It should be noted that this
Figure 20. (Top) Electron micrograph of a normal mammalian cochlea, showing the single row of IHCs and the three rows of OHCs that were selectively damaged in the experimental manipulations. (A) Tuning curve obtained with an isolated OHC lesion (dotted line), compared with a normal curve in the absence of cochlear damage (solid line). (B) Tuning curve obtained with an isolated IHC lesion (dotted line), compared with a normal curve in the absence of cochlear damage (solid line). Adapted from Moore (2007), after the data of Liberman, Dodds & Learson (1986).
latter situation is rare outside an experimental context, the OHCs being more fragile and in general damaged earlier and more extensively than the IHCs. In clinical practice, the recording of evoked otoacoustic emissions makes it possible to assess, to some extent, the integrity of the OHCs: their absence reveals an alteration of the active mechanism (Kemp, 2002). This is notably the case in hearing loss of cochlear origin.
One of the methods used in psychoacoustics to assess a subject's frequency selectivity consists of estimating the shape of his or her auditory filters. To this end, one can measure the detection threshold for a pure tone presented within a notched-noise masker: a broadband noise with a spectral "notch" on either side of the frequency of the tone to be detected. If the auditory filters are broadened (and frequency selectivity therefore degraded), a large amount of noise passes through the filter centered on the frequency of the target tone. At the output of this filter, the signal-to-noise ratio is less favorable the broader the filter and the narrower the spectral notch. By varying the width of the spectral notch, the width of the auditory filters can thus be estimated.
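The logic of the method can be sketched numerically. The snippet below uses the rounded-exponential (roex) filter shape common in this literature, with illustrative sharpness values chosen by us; it shows that a broader filter admits more of the notched noise, and that the admitted noise decreases more slowly as the notch widens:

```python
import numpy as np

def roex(g, p):
    """Rounded-exponential (roex) filter weighting; g is the normalized
    frequency deviation |f - fc| / fc, and larger p means a narrower filter."""
    return (1.0 + p * g) * np.exp(-p * g)

def noise_through_filter(notch_hz, p, fc=1000.0, span_hz=800.0):
    """Relative noise power passing a roex filter centered on fc, for a
    flat-spectrum noise covering fc +/- span_hz with a spectral notch of
    +/- notch_hz around fc (computed on a 1-Hz frequency grid)."""
    f = np.arange(fc - span_hz, fc + span_hz + 1.0)
    outside_notch = np.abs(f - fc) >= notch_hz
    g = np.abs(f - fc) / fc
    return float(np.sum(roex(g, p)[outside_notch]))

# Illustrative sharpness values: p = 25 (sharp, "normal") vs p = 8 (broad).
narrow = [noise_through_filter(w, 25) for w in (0.0, 100.0, 200.0, 300.0)]
broad = [noise_through_filter(w, 8) for w in (0.0, 100.0, 200.0, 300.0)]
# 'broad' exceeds 'narrow' at every notch width, so masked thresholds stay
# higher, and fall more slowly with notch width, for broadened filters.
```

Fitting the measured thresholds as a function of notch width to such a filter shape is what yields the estimate of filter width.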
Many authors have measured the width of the auditory filters in hearing-impaired subjects. Overall, auditory-filter width tends to increase with the degree of hearing loss, even though large inter-individual variability prevents a reliable prediction of filter width from a given subject's absolute thresholds. Using the notched-noise method, Glasberg & Moore (1986) estimated the shapes of the auditory filters (centered on 1 kHz) of subjects with a unilateral cochlear hearing loss (Figure 21). In this way, differences between healthy and damaged ears can be attributed to primarily sensory factors (and not to central disorders specific to each subject). Their results indicate that the auditory-filter shapes of normal-hearing ears are very homogeneous across subjects and exhibit good frequency selectivity (narrow filters). By contrast, with cochlear lesions, the shapes vary more across individuals, with an overall tendency toward filter broadening (predominantly on the low-frequency side). This implies that low-frequency noises are particularly potent maskers for (and therefore particularly disturbing to) these patients.
Figure 21. Shapes of the auditory filters centered on 1 kHz for five subjects with normal hearing in one ear (top of the figure) and a cochlear hearing loss in the other ear (bottom). The response is given relative to the peak of each curve, arbitrarily set at 0 dB; this amounts to assuming that the amplitude of a 1-kHz pure tone is identical at the input and at the output of the filter (0 dB attenuation). After Glasberg & Moore (1986).
A marked broadening of the auditory filters presupposes severe OHC damage, but combined lesions of several structures may coexist.
The fact that the broadening of an auditory filter degrades the signal-to-noise ratio within that filter underlines the link between frequency selectivity and the difficulty that hearing-impaired subjects have understanding speech in background noise. Henry & Heinz (2012) confirmed, in a physiological study, that broadened cochlear tuning degrades the temporal coding of a pure tone in auditory-nerve fibers when noise is present: the amount of noise passing through the corresponding auditory filter is too high, disrupting phase locking to the pure tone.
Cochlear filter broadening also has consequences for the perception of the pitch of complex tones. Low-rank harmonics are normally very well resolved by the cochlea of a healthy listener, and they are essential in determining the pitch of a complex tone (the so-called dominant region). With cochlear lesions, the low-rank harmonics are less well resolved, or not resolved at all. Figure 22 (Moore, 2007) shows the excitation pattern of a complex tone composed of harmonics 4 to 10 of a 200-Hz fundamental frequency, for a healthy listener (solid line) and for cochlear filters three times broader than normal (dotted line). The successive peaks corresponding to each harmonic in the normal pattern disappear when the filters are broadened.
Figure 22. Excitation patterns of a complex tone composed of harmonics 4 to 10 of a 200-Hz fundamental frequency. The model assumes auditory filters of normal width (solid line) or three times broader than normal (dotted line), as might be found with cochlear lesions. The level of each frequency component is 80 dB SPL. The distinct successive peaks in the excitation pattern corresponding to the resolved harmonics are absent when the auditory filters are broadened. After Moore (2007).
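The loss of resolvability illustrated by Figure 22 can be reproduced qualitatively with a toy excitation-pattern computation. The sketch below stands in for the auditory filters with Gaussians matched to the ERB formula of Glasberg and Moore (1990); the bandwidth scaling and the ripple measure are simplifications of ours, not Moore's (2007) model:

```python
import numpy as np

def excitation_pattern(components_hz, broadening=1.0):
    """Excitation level (dB) across center frequencies, using Gaussian
    stand-ins for auditory filters whose ERB follows Glasberg & Moore
    (1990); 'broadening' scales every bandwidth (3.0 mimics Figure 22)."""
    cfs = np.arange(700.0, 2100.0, 5.0)
    erb = broadening * 24.7 * (4.37 * cfs / 1000.0 + 1.0)
    sigma = erb / np.sqrt(2.0 * np.pi)   # Gaussian with the same ERB
    power = np.zeros_like(cfs)
    for f in components_hz:
        power += np.exp(-0.5 * ((f - cfs) / sigma) ** 2)
    return cfs, 10.0 * np.log10(power)

def ripple_db(broadening):
    """Peak-to-valley depth (dB) in the middle of the pattern: large when
    the harmonics are resolved, much smaller when filters are broadened."""
    harmonics = [200.0 * k for k in range(4, 11)]   # harmonics 4 to 10
    cfs, level = excitation_pattern(harmonics, broadening)
    mid = (cfs > 900.0) & (cfs < 1900.0)
    return float(level[mid].max() - level[mid].min())
```

With these assumptions, the ripple for normal-width filters is several decibels deeper than for filters three times broader, echoing the contrast between the solid and dotted patterns of Figure 22.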
For a healthy listener, a complex tone containing only unresolved harmonics is generally associated with an imprecise and weakly salient pitch sensation. One may therefore expect that a cochlear lesion is accompanied by a more ambiguous percept and by degraded pitch discrimination for complex tones. Many studies have assessed the f0-discrimination abilities of hearing-impaired listeners by measuring the smallest detectable f0 difference between two successive complex tones (Moore & Peters, 1992; Arehart, 1994; Moore & Moore, 2003). As with many psychoacoustic measures in hearing-impaired listeners, these studies found large inter-individual variability, both in overall performance and in the effect of spectral manipulations of the stimuli. Nevertheless, several studies reported better f0 discrimination for complex tones containing only high-rank harmonics (>5) than for tones containing only low-rank harmonics. In some cases, adding low-rank harmonics to a tone containing high-rank harmonics even degrades performance. This suggests that, in these hearing-impaired subjects, pitch is determined mainly by the high-rank harmonics, and that the interaction between low-rank harmonics processed by abnormally broad auditory filters (which induces more complex responses at the filter outputs) could impair the extraction of f0.
In normal-hearing listeners, f0 discrimination degrades when the low-rank harmonics of a complex tone are successively removed. It is accepted that this degradation appears when the complex tone no longer contains enough resolved harmonics to allow a precise estimate of f0. One may therefore expect this breakpoint in f0 discrimination to be reached sooner in subjects whose auditory filters are broadened, since fewer harmonics are properly resolved. The authors of one such study found a correlation between the harmonic rank corresponding to the breakpoint and the subjects' auditory-filter widths (the rank being lower the broader the auditory filters). These data indicate that harmonic resolvability limits the pitch perception of complex tones in these listeners.
At the output of the auditory filters, the sound wave can be analyzed in terms of two kinds of temporal information: the envelope and the temporal fine structure (TFS). The coding of temporal fine structure in the auditory nerve relies on phase locking to the individual cycles of the wave's rapid fluctuations; it is degraded at high frequencies, classically above 4 to 5 kHz. TFS plays an important role in the pitch perception of complex tones (cf. section 2.4.1), where it contributes to the coding of the resolved components, but also of some unresolved components up to about the 12th rank (beyond which only sensitivity to the envelope remains). Several studies have assessed sensitivity to temporal fine structure in subjects with cochlear lesions. This work relied on tasks involving the detection of interaural phase differences (which depend on the coding of TFS), on the discrimination of stimuli whose TFS differed but whose envelopes were identical (Lacher-Fougère & Demany, 1998; Hopkins & Moore, 2007), or on the comprehension of speech sequences in which only the envelope or only the TFS information was preserved (Lorenzi, Gilbert, Carn, Garnier, & Moore, 2006). The results indicate that hearing losses of cochlear origin degrade sensitivity to TFS. Moreover, in some hearing-impaired subjects, this degradation can be found even in a frequency region where the absolute audiometric threshold is within normal limits (Lacher-Fougère & Demany, 2005; Lorenzi, Debruille, Garnier, Fleuriot, & Moore, 2009); we will return to this point later in the chapter.
The origin of the TFS-processing deficit associated with cochlear lesions is not clearly established. The most "natural" hypothesis seems to be that of degraded phase locking in the auditory nerve, which some authors have reported after ototoxic or noise-induced cochlear damage (Woolf, Ryan, & Bone, 1981; Miller, Schilling, Franck, & Young, 1997). Other authors have reached diametrically opposed conclusions, finding no degradation of phase locking in the auditory nerve secondary to a cochlear lesion (Harrison & Evans, 1979; Kale, Micheyl, & Heinz, 2014). Several hypotheses have therefore been proposed that do not require phase locking itself to be primarily degraded. One of them, as suggested by Henry & Heinz (2012) or Moore (2008), rests on the idea that the information available at the output of abnormally broad auditory filters is more complex and, as it were, "contaminated" by background noise or by neighboring spectral components, which may make it more difficult for the auditory system to decode the TFS information. This hypothesis is nevertheless challenged by recent data from Hopkins & Moore (2011) indicating that sensitivity to TFS is only very weakly correlated with auditory-filter broadening; it declines, however, with age, even in the absence of audiometric deficit.
In summary, cochlear lesions are associated with a degraded coding of the pitch of complex tones, linked to the degradation of the information conveyed in the peripheral channels and to the poorer resolution of low-rank harmonics. These disorders can potentially have important consequences for the auditory system's ability to fuse the frequency components of a complex tone into a single percept. This motivated our experimental study of sensitivity to harmonic fusion in subjects with cochlear hearing loss.
6 ARTICLE « THE EFFECT OF COCHLEAR DAMAGE ON THE SENSITIVITY TO HARMONICITY »
Sensorineural hearing losses are the most frequent cause of hearing impairment in adults, far ahead of conductive losses. The results of an epidemiological study (the Epidemiology of Hearing Loss Study), involving more than 3700 people aged 48 to 92 years, illustrate the public-health stakes that these pathologies represent (Cruickshanks, et al., 1998). In this cohort, the prevalence of sensorineural hearing loss (defined here as a mean audiometric threshold > 25 dB over the frequencies 500, 1000, 2000 and 4000 Hz, with an air-bone gap < 15 dB) reached 37.2%, ten times that of conductive hearing loss.
Despite technical progress in recent years, patients fitted with hearing prostheses (hearing aids or cochlear implants) continue to report difficulty in this type of situation (meetings, restaurant meals, etc.). A normal listener, by contrast, possesses a remarkable ability to separate, without effort, the outputs of simultaneously active sound sources. The success of this "auditory scene analysis" (ASA) is partly due to the fact that the successive elements produced by a given source generally possess physical features that distinguish that source from the others. It is now well established that, in normal-hearing listeners, ASA obeys a number of grouping principles. The first is that spectral components of different frequencies appearing at the same instant are taken to come from a single source rather than from distinct sources. The second is that spectral components with harmonic relations among them are likewise taken to come from a single source rather than from distinct sources. Harmonic fusion is thus, in the daily life of a normal listener, an important tool for ASA (cf. Section 2.2.2). Paradoxically, however, while sensorineural hearing losses are known to impair ASA, and while they are also known to degrade the ability to differentiate neighboring frequencies, no publication to date has been devoted to the consequences of cochlear lesions for harmonic fusion. Clarifying on the one hand the mechanisms of harmonic fusion, and on the other hand the consequences of cochlear damage for this mechanism, was the purpose of the study. While it seems likely that a massive deficit in frequency discrimination must entirely abolish sensitivity to harmonicity, the effect of a relatively mild deficit is harder to predict. Our approach was based on previous work on sensitivity to harmonicity in normal-hearing subjects (Demany & Semal, 1988; Demany & Semal, 1992). This work showed that the difference limen for a frequency ratio between simultaneous pure tones is smaller (hence better) when the standard ratio is exactly one octave (2:1) than when the standard ratio is slightly larger or slightly smaller than an octave. This effect reflects sensitivity to the harmonicity of the octave ratio. This type of task makes it possible to measure sensitivity to harmonicity without requiring subjective judgments.
Twenty subjects participated in this study: twelve normal-hearing subjects and eight subjects with a mild to moderate endocochlear hearing loss. For each subject, the threshold for detecting a change in the frequency ratio between simultaneous pure tones was measured with an adaptive forced-choice procedure. The standard frequency ratio was either harmonic (2:1, i.e., 1 octave) or inharmonic (0.8 or 1.2 octave). The tones were presented at a low intensity level (at most 15 dB SL) and within a Threshold-Equalizing Noise (Moore, Huss, Vickers, Glasberg, & Alcántara, 2000) in order to minimize peripheral interactions. In the experimental conditions, the frequencies of the tones varied from trial to trial, which forced the subjects to compare frequency ratios in order to perform well: tracking frequency ratios, rather than frequencies, was the optimal strategy.
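As an illustration of how an adaptive forced-choice track of this general kind operates, here is a minimal simulation. The 2-down/1-up rule, the step sizes, and the simulated observer are illustrative assumptions of ours, not necessarily the exact procedure used in the study:

```python
import math
import random
import statistics

def observer(delta, true_threshold=0.1):
    """Simulated listener: P(correct) grows from chance (0.5) for tiny
    frequency-ratio deviations to ~1 well above 'true_threshold'."""
    p = 1.0 - 0.5 * math.exp(-((delta / true_threshold) ** 2))
    return random.random() < p

def two_down_one_up(trial, start=1.0, n_reversals=12):
    """2-down/1-up adaptive track with multiplicative steps; it converges
    on the 70.7%-correct point of the psychometric function (Levitt, 1971)."""
    delta, streak, last_move, reversals, factor = start, 0, 0, [], 2.0
    while len(reversals) < n_reversals:
        move = 0
        if trial(delta):
            streak += 1
            if streak == 2:
                streak, move = 0, -1      # two correct in a row -> harder
        else:
            streak, move = 0, +1          # any error -> easier
        if move:
            if last_move and move != last_move:
                reversals.append(delta)
                if len(reversals) == 4:
                    factor = math.sqrt(2.0)  # finer steps after 4 reversals
            last_move = move
            delta = delta / factor if move < 0 else delta * factor
    # Threshold estimate: geometric mean of the last 8 reversal points.
    return math.exp(statistics.fmean([math.log(d) for d in reversals[-8:]]))

random.seed(7)
threshold = two_down_one_up(observer)   # converges near the 70.7% point
```

The same machinery can track a deviation from any standard ratio (1 octave, 0.8 octave, or 1.2 octave) simply by redefining what `delta` is applied to.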
The results indicate that, in both groups of subjects, the thresholds were smaller when the standard frequency ratio was harmonic than when it was inharmonic. This effect, reflecting sensitivity to harmonicity, was weaker in some hearing-impaired subjects, but remained observable even in subjects whose frequency-discrimination performance was severely degraded. The results of the two groups differed, however, in one respect. In normal-hearing subjects, in agreement with our previous results, negative deviations from the octave (i.e., "compressions" of the frequency ratio) were better detected than positive deviations (i.e., "stretchings"); in the hearing-impaired subjects, by contrast, no such asymmetry was found. Sensitivity to harmonicity thus appears to be a particularly robust phenomenon, even if it can be partially impaired in some hearing-impaired subjects. The asymmetry of inharmonicity detection in normal-hearing subjects, which was not found in hearing-impaired subjects, seems to indicate that the mechanism of inharmonicity detection is different in the presence of a hearing loss of cochlear origin. The hypothesis that the subjects used a beat cue arising from peripheral interactions was ruled out.
THE EFFECT OF COCHLEAR DAMAGE ON THE SENSITIVITY
TO HARMONICITY
6.1 Introduction
A harmonic complex tone is a sum of pure tones whose frequencies are integer multiples of a common fundamental frequency (F0). Sounds of this type are abundant in speech and music. A harmonic complex tone is normally heard as a single sound, with a precise pitch, even when some or all of its spectral components are resolved by the cochlea. If one or more components are mistuned by frequency shifts, they may no longer be
perceptually fused with the rest of the sound, and this will allow the detection of
inharmonicity. For listeners with normal hearing (NH), inharmonicity is readily detectable
for mistunings of a few percent (Moore, Peters, & Glasberg, 1985 ; Moore, Glasberg, &
Peters, 1986 ; Demany & Semal, 1990 ; Hartmann, McAdams, & Smith, 1990 ; Carlyon,
Demany, & Semal, 1992 ; Roberts & Brunstrom, 1998 ; Borchert, Micheyl, & Oxenham,
2011 ; Bonnard, Micheyl, Semal, Dauman, & Demany, 2013 ; Bonnard, Dauman, Semal,
& Demany, 2016). Being able to discriminate inharmonicity from harmonicity is thought to be useful for auditory scene analysis: natural auditory scenes often contain simultaneous harmonic sounds, produced by separate sources at different F0s, and the listener generally has to segregate these sounds from each other (Bregman, 1990). Whereas sensitivity to harmonicity has been studied extensively in NH listeners, the consequences of cochlear hearing loss for sensitivity to harmonicity are still poorly known. In previous studies of hearing-impaired (HI) listeners (Arehart, King, & McLean-Mudgett, 1997; Arehart, Rossi-Katz, & Swensson-Prutsman, 2005; Summers & Leek, 1998), the subjects' task was to identify
pairs of stationary vowels which were presented simultaneously and had either the same F0 or different F0s. In NH listeners, identification performance improves with increasing F0 difference. In HI listeners, this F0 effect has been more variable across subjects and
experiments. However, sensitivity to harmonicity may not be the only source of the
observed F0 effects in these experiments (Culling & Darwin, 1994; Chintanpalli & Heinz,
2013). One issue is that the F0 effect on vowel identification does not markedly increase
for F0 differences exceeding about 1 semitone (see de Cheveigné et al. 1997 for a synthesis
of the available data), whereas the minimum F0 difference required to hear a pitch difference between such stimuli is much smaller (Assmann & Paschall, 1998). In the present study, we used much simpler stimuli to investigate the consequences of cochlear damage for sensitivity to harmonicity. Inharmonicity can be detected in two different ways, depending on the relative frequency distance of the stimulus components.
Any harmonic complex tone is also a periodic stimulus, with a period equal to the inverse
of F0. This periodicity disappears if the stimulus is made inharmonic by a slight frequency
shift in one of its components; the shift creates a low-frequency oscillation in the
waveform. When the components are close enough to stimulate the same cochlear region,
the low-frequency oscillation in the waveform can be encoded by single neurons; it can
be heard as a "beat" (Plomp, 1967 ; Viemeister, Rickert, & Stellmack, 2001). This is, for
NH listeners, one way to detect inharmonicity. If, on the other hand, the stimulus
components are relatively far apart –as is the case, for example, for components with a
frequency ratio of 2:1, i.e., one octave– then the excitation patterns of the components
will not overlap significantly in the cochlea, especially if in addition the components are
all presented at a low level. In such conditions, for NH listeners, mistunings do not
produce audible beats (Plomp, 1967; Viemeister, Rickert, & Stellmack, 2001; Bonnard, Micheyl, Semal, Dauman, & Demany, 2013); inharmonicity is instead detected as a loss of perceptual fusion. The mechanisms of this form of inharmonicity detection are still uncertain, but a central mechanism must be involved to
relate neural activities produced in separate peripheral channels by single spectral
components. The study reported here was focused on the latter, "cross-channel", form of
sensitivity to harmonicity.
Cross-channel sensitivity to harmonicity (CCSH) and cross-channel inharmonicity detection (CCID) may be based on a two-stage process in which the first stage is a measurement of
the spectral components' frequencies, from their representation in the auditory periphery,
and the second stage makes use of templates of the harmonic frequency ratios in the
central auditory system (see, e.g., Duifhuis, Willems, & Sluyter, 1982). In accordance with
such a view, it has been found that some neurons in the auditory cortex of non-human primates are selectively sensitive to harmonic relationships between spectral components (2013). The central harmonic templates could be inborn, but they could also originate from
a learning process (Terhardt, 1974; Shamma & Klein, 2000). However, CCSH and CCID
do not necessarily require harmonic templates: they might instead stem from the temporal
coding of frequency in the auditory nerve, due to neural phase-locking . The spikes elicited
in an auditory-nerve fiber by a pure tone with frequency f are separated by time intervals
approximately equal to 1/f, 2/f, 3/f, etc. (Rose, Brugge, Anderson, & Hind, 1967;
McKinney & Delgutte, 1999). Thus, the inter-spike intervals appearing in response to two
pure tones with a harmonic relationship are partly the same, while that is not true in the
inharmonic case. This commonality could be picked up by a mechanism comparing
inter-spike intervals across peripheral channels (Licklider, 1951; Meddis & Hewitt, 1991;
Roberts & Brunstrom, 2001). Such a model is supported, indirectly, by some
psychophysical data (Roberts & Brunstrom, 2001; Bonnard, Dauman, Semal, & Demany, 2016).
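The shared-interval argument can be illustrated with a small numerical sketch (not from the thesis; the frequencies, number of intervals, and matching tolerance are arbitrary assumptions chosen only for the demonstration):

```python
# Idealized inter-spike intervals for a fiber phase-locked to frequency f are
# multiples of the period: 1/f, 2/f, 3/f... For a harmonic pair of tones, many
# intervals coincide across the two channels; for a mistuned pair, almost none.

def interval_multiples(f, n=20):
    """First n idealized inter-spike intervals (in seconds) for frequency f."""
    return [k / f for k in range(1, n + 1)]

def shared_intervals(f1, f2, tol=1e-4):
    """Number of intervals of f1 matched by an interval of f2 within tol seconds."""
    i2 = interval_multiples(f2)
    return sum(any(abs(a - b) < tol for b in i2) for a in interval_multiples(f1))

print(shared_intervals(200.0, 400.0))  # octave pair: many shared intervals
print(shared_intervals(200.0, 433.0))  # mistuned octave: almost no shared intervals
```

A central coincidence mechanism comparing such interval distributions across channels would thus respond strongly to harmonic pairs and weakly to inharmonic ones.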
Whatever the mechanism of CCID, this perceptual ability is limited, from the
point of view of signal detection theory, by the sum of two "internal noises": (1) a noise
reflecting the limited accuracy of frequency coding in the auditory periphery, and (2) a
noise reflecting the relative fuzziness of the central harmonic templates or the limited
precision of the central cross-channel comparison of temporal information.
The latter, central, noise must be substantial since thresholds for CCID are normally larger
than thresholds for the detection of a frequency change in a spectral component (Moore,
Peters, & Glasberg, 1985). In case of cochlear damage, the frequency-coding noise must
be abnormally large since frequency discrimination thresholds are typically elevated (see
Moore & Carlyon, 2005 for a review); this deficit may stem from a broadening of the
peripheral auditory filters, reducing the accuracy of tonotopic frequency coding (Glasberg
& Moore, 1986; Strelcyk & Dau, 2009; Hopkins & Moore, 2011) and/or from a
degradation of the temporal (phase-locking) coding of frequency in the auditory nerve
(1998; 2005; Lorenzi, Gilbert, Carn, Garnier, & Moore, 2006; Strelcyk & Dau, 2009;
Hopkins & Moore, 2007; 2011; King, Hopkins, & Plack, 2014). On the other hand, the
consequences of cochlear damage for the central noise involved in CCID are unknown.
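The two-noise account sketched above can be made concrete with a toy model (entirely illustrative; the noise magnitudes are arbitrary assumptions, and the two internal noises are assumed independent so that their variances add):

```python
import math

def ccid_threshold(sigma_peripheral, sigma_central):
    """Combined internal noise limiting CCID (arbitrary units)."""
    return math.sqrt(sigma_peripheral ** 2 + sigma_central ** 2)

# Arbitrary illustrative values: central noise dominant in the normal ear.
normal = ccid_threshold(sigma_peripheral=1.0, sigma_central=3.0)
mild_loss = ccid_threshold(sigma_peripheral=2.0, sigma_central=3.0)
severe_loss = ccid_threshold(sigma_peripheral=10.0, sigma_central=3.0)
print(normal, mild_loss, severe_loss)
# Doubling the peripheral noise barely moves the combined limit (~3.16 -> ~3.61),
# but a tenfold increase makes the peripheral noise the dominant limit (~10.4).
```

Under this conceptualization, a slight frequency-discrimination deficit should leave CCID nearly normal, whereas a severe one should dominate it.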
It may be hypothesized that this central noise is not affected at all by cochlear damage. If
so, for a HI listener showing only a slight deficit in frequency discrimination, thresholds
for CCID should be normal, because CCID will be mainly limited by the central noise
alone, as is the case for a NH listener. However, for a HI listener showing a severe
deficit in frequency discrimination, thresholds for CCID should be elevated, because
CCID will then be limited mainly by the abnormally large frequency-coding noise.
When the frequency-coding noise is very large,
CCSH and CCID become meaningless in practice; it might actually be that they are
completely abolished. This is a reasonable prediction under the assumption that CCSH
and CCID depend on central templates originating from a learning process: these central
templates may disappear if the input from the auditory periphery is no longer able to
activate them reliably.
The methodology of the present study was inspired by previous experiments
on the detection of fluctuations in the frequency ratio of two simultaneous pure tones by
NH listeners (Demany & Semal, 1988; 1992). In these experiments, fluctuations around
a frequency ratio of exactly one octave (2:1) were found to be generally more detectable
than fluctuations around inharmonic ratios differing slightly from one octave, even in the
absence of possible beat cues arising from cochlear interactions between the tones. Thus,
for standard ratios ranging from 0.9 to 1.1 octaves, the fluctuation detection threshold
was minimal at exactly 1.0 octave; this pattern provided both qualitative
evidence for CCSH and an estimate of the CCID threshold. In the same vein, we
measured here thresholds for the detection of a change in a standard frequency ratio
which could be equal to either 0.8, 1.0, or 1.2 octaves. In order to eliminate beat cues, the
tones were presented at a low sensation level (at most 15 dB SL) within wideband noise.
In order to force the listeners to detect changes in a frequency ratio, rather than simply
frequency changes, the frequency register of the stimuli was randomly varied across the
listening intervals of each trial. However, this was not done in some conditions, so as to
obtain, for comparison, measures of simple frequency discrimination.
The experiment was carried out in accordance with the Code of Ethics of the World
Medical Association (Declaration of Helsinki) for experiments involving humans.
6.2.1 Stimuli, task, and experimental conditions
On each trial, the subject was presented with four successive sums of three
synchronous pure tones, as illustrated in Figure 23. These four tone triplets (T1, T2, T3,
T4), gated on and off with 20-ms raised-cosine ramps, had each a duration of 200 ms and
were mixed with threshold-equalizing noise (Moore et al., 2000) filtered between 100 and
4000 Hz. T1 and T2 were presented within a first burst of noise, and T3 and T4 within a
second burst. Both bursts were gated on and off in the same manner as the triplets. Each
noise burst started 200 ms before the first triplet mixed with it, and ended 200 ms after
the second triplet. T1 and T2, as well as T3 and T4, were separated by a 400-ms pause,
and the two noise bursts were separated by a 300-ms silence.

Figure 23. Schema of the stimulus sequences used in the Rov conditions of the experiment. On each trial,
four successive triplets of tones (T1, T2, T3, T4) were presented, within two bursts of noise. One of the
four triplets (T2 or T4, at random) differed from the other three triplets with respect to the frequency
spacing of its components. The task was to indicate if the different triplet was T2 or T4. In this example,
the different triplet is T2.

The overall level of the noise
was typically equal to 62 dB SPL, and the sensation level (SL) of the tones in the noise
never exceeded 15 dB. More details on the level of the stimuli are provided below in the
"Procedure" section and in Table I. The stimuli were presented monaurally, to the more
sensitive ear for the HI subjects and to the preferred ear for the NH subjects.
The three components of each triplet were uniformly spaced on a logarithmic
frequency scale; this frequency spacing of the components was a variable termed FS and
measured in octaves. Three of the four triplets presented in a given trial did not differ
from each other with respect to FS. The remaining triplet, with a different FS value, was
equiprobably T2 or T4, and the subject's task was to indicate if this "target" triplet was
the second or the fourth one. Responses were given by means of mouse clicks on two
virtual buttons. Response time was unlimited and visual feedback ("yes" or "no") was
provided following each response. In a given block of trials, the standard value of FS ‒
i.e., the FS characterizing three triplets in a trial ‒ was fixed, while the target FS varied
adaptively from trial to trial in order to estimate a
discrimination threshold for the standard FS. In the main experimental conditions, the
frequency of the lowest component of every triplet was selected randomly between 200
and 400 Hz, using a rectangular probability distribution on a logarithmic frequency scale;
this "roving rule", illustrated in Figure 23, prevented subjects from achieving a good
performance if they based their judgments on differences in frequency rather than in FS.
In two other conditions, the lowest components of T1 and T3 were again selected
randomly and independently between 200 and 400 Hz, but the two triplets presented in
any noise burst were constrained to have the same lowest component; on any trial,
therefore, one pair of triplets consisted of identical stimuli while the other pair consisted
of different stimuli; frequency discrimination was sufficient to perform the task optimally.
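The triplet construction described above can be sketched as follows (a minimal Python sketch, not the authors' code; details beyond those stated in the text, such as equal component amplitudes, are assumptions):

```python
import math, random

SR = 44100  # sampling rate (Hz), as stated later in the text

def roved_lowest_frequency(lo=200.0, hi=400.0):
    """Log-uniform draw, i.e. rectangular on a logarithmic frequency scale."""
    return lo * (hi / lo) ** random.random()

def triplet(fs_octaves, f_low, dur=0.2, ramp=0.02):
    """Sum of three synchronous pure tones separated by fs_octaves octaves,
    200 ms long, gated with 20-ms raised-cosine onset/offset ramps."""
    n = int(dur * SR)
    freqs = [f_low * 2 ** (k * fs_octaves) for k in range(3)]
    samples = []
    for i in range(n):
        t = i / SR
        s = sum(math.sin(2 * math.pi * f * t) for f in freqs)
        if t < ramp:                      # raised-cosine onset ramp
            s *= 0.5 * (1 - math.cos(math.pi * t / ramp))
        elif t > dur - ramp:              # raised-cosine offset ramp
            s *= 0.5 * (1 - math.cos(math.pi * (dur - t) / ramp))
        samples.append(s)
    return freqs, samples

freqs, wave = triplet(fs_octaves=1.0, f_low=roved_lowest_frequency())
print(freqs)  # e.g. [f, 2f, 4f] for a 1-octave spacing
```

In the Rov conditions each triplet would receive its own draw of the lowest frequency, so that only FS, not absolute frequency, is informative.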
Table I. Information on the members of the HI group. The audiometric data (absolute thresholds
in dB HL for the tested ear) are followed by the sound pressure level of the threshold-equalizing noise
(TEN) in the six experimental conditions, and the detection threshold of 200-ms pure tones at six
frequencies in this noise.
The roving rule was used in four conditions, labeled "Rov0.8‒", "Rov1‒", "Rov1+"
and "Rov1.2+". In these labels, the number denotes the standard value of FS in octaves
and the final sign indicates if the target FS values were smaller than the standard FS ("‒")
or larger than it ("+"). In two of these conditions (Rov1‒ and Rov1+), CCID was helpful
because the standard FS (1 octave) was harmonic while the target FS values were
inharmonic. In the other two conditions (Rov0.8‒ and Rov1.2+), CCID was much less
helpful, because the standard FS was itself inharmonic.
When the roving rule was not used, the standard FS value was always 1 octave but
the target FS values could be either smaller (condition NoRov1‒) or larger (condition
NoRov1+) than the standard FS.

6.2.2 Procedure

Thresholds were estimated by means of an adaptive, weighted up-down procedure targeting the
78-% correct point on the psychometric function (García-Pérez, 2011). In each block of
trials, the absolute value of the difference between the standard and target FS
("|deltaFS|") was initially set to 0.5 octave. Then, |deltaFS| was decreased following
each correct response and increased following each incorrect response, with one
reservation: |deltaFS| always had to be smaller than the standard FS. A block of trials
ended after the 14th reversal in the variation of |deltaFS|. Up to the 4th reversal,
|deltaFS| was normally multiplied by 2.25 when it was increased, and divided by 1.2595
when it was decreased; if multiplying |deltaFS| by 2.25 was not possible due to the ceiling
constraint mentioned above, |deltaFS| was made equal to the standard FS, minus 0.001
octave. After the 4th reversal, the factors of 2.25 and 1.2595 were replaced by,
respectively, 1.5 and 1.1223. The threshold measured in a block of trials was defined as
the geometric mean of all the |deltaFS| values used from the 5th reversal on.
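The staircase rule described above can be sketched as follows (not the authors' code; the simulated observer is a hypothetical stand-in for a listener, introduced only to exercise the procedure):

```python
import math, random

def run_block(standard_fs, answer_is_correct,
              up=(2.25, 1.5), down=(1.2595, 1.1223), n_reversals=14):
    """Weighted up-down staircase as described in the text (a sketch)."""
    delta = 0.5              # initial |deltaFS|, in octaves
    last_direction = 0       # +1 = last change was an increase, -1 = a decrease
    reversals = 0
    tracked = []             # |deltaFS| values used from the 5th reversal on
    while reversals < n_reversals:
        if reversals >= 4:   # the first 4 reversals are discarded
            tracked.append(delta)
        correct = answer_is_correct(delta)
        direction = -1 if correct else +1
        if last_direction != 0 and direction != last_direction:
            reversals += 1
        last_direction = direction
        up_f, down_f = (up[0], down[0]) if reversals < 4 else (up[1], down[1])
        if correct:
            delta /= down_f
        else:
            delta *= up_f
            if delta >= standard_fs:      # ceiling: |deltaFS| < standard FS
                delta = standard_fs - 0.001
    # threshold = geometric mean of the tracked |deltaFS| values
    return math.exp(sum(math.log(d) for d in tracked) / len(tracked))

# Hypothetical simulated observer: P(correct) grows with |deltaFS| (arbitrary model).
observer = lambda d: random.random() < min(0.99, 0.5 + 2.0 * d)
print(run_block(standard_fs=1.0, answer_is_correct=observer))
```

The asymmetric step factors are what place the convergence point near the 78-% correct level rather than at 50 %.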
Subjects were tested individually in a sound-attenuating booth. They were paid for
their participation (except for the first author, who was a member of the NH group). The
stimuli were generated at a sampling rate of 44.1 kHz, via a 24-bit digital-to-analog
converter.
For the NH subjects, the experiment consisted of seven sessions, run on different
days. Session 1 began with absolute threshold measurements at 0.125, 0.25, 0.5, 0.75, 1,
2, 3, 4, and 8 kHz, using the standard method of limits. We then measured in this session
the detection threshold of 200-ms pure tones in the threshold-equalizing noise, set to 62
dB SPL. This was done using a two-interval forced-choice adaptive paradigm similar to
the one used for the measurement of FS discrimination thresholds. The tones to be
detected in the noise had frequencies of 0.2, 0.3, 0.5, 1, 2, and 3 kHz. As expected, it was
found that their masked detection threshold in dB SPL did not significantly depend on
frequency: the average value of their within-subject standard deviation across frequency
was only 1.4 dB. Their overall mean was 45.0 dB, and the between-subjects standard
deviation 1.9 dB. In the three following sessions (sessions 2-4), FS discrimination
thresholds were measured using tones with a subject-dependent SPL set to 15 dB above
the mean (across frequency) of the detection thresholds found in session 1. Sessions 2
and 3 were devoted to the four Rov conditions while session 4 was devoted to the two
NoRov conditions. In each condition, three threshold measurements were made; they were
followed by a fourth measurement if the geometric standard deviation of the three initial
measures exceeded a limit set to 1.4; in that case, the most extreme of the four
measurements was finally discarded. Sessions 5-7 were organized in the same way as
sessions 2-4, but in these three final sessions the tones were set to 5 dB (instead of 15 dB)
above the masked detection thresholds measured in session 1.
For the HI subjects, the procedure was generally similar, but the level of the tones
in the six experimental conditions did not change across sessions. In session 1, again, the
detection threshold of tones at 0.2, 0.3, 0.5, 1, 2, and 3 kHz in the threshold-equalizing
noise was measured while the noise was set to 62 dB SPL. For some listeners, thresholds
were found to be substantially frequency dependent. When this frequency effect, defined
as the difference between the highest and lowest thresholds, did not exceed 5 dB, the level
of the tones in the subsequent and main part of the experiment was set to 15 dB above
the mean detection threshold, as for the NH subjects in sessions 2-4. When the frequency
effect was between 5 and 10 dB, the level of the tones was set to 5 dB above
the worst detection threshold. When the frequency effect exceeded 10 dB, the noise was
increased in level and the detection thresholds remeasured. If, in spite of this increase in
the noise level, the frequency effect still exceeded 10 dB, the experiment stopped at that
point and the listener was discarded; this occurred for three listeners. Otherwise, the FS
discrimination thresholds were measured using the increased noise level and a tone level
determined by the rules described above. For each of our HI subjects, therefore, the
frequency effect in the final measurements did not exceed 10 dB.
For all but one HI subject, three sessions were run to obtain three discrimination
threshold estimates per experimental condition, as was done for the NH subjects at a
nominal SL of 15 and then 5 dB. In the case of the remaining HI subject (MP),
performance was so unstable in the Rov conditions that we decided to collect 10 ‒ instead
of only three ‒ threshold estimates in each of these four conditions, without excluding
any estimate; this required six sessions, before the final session devoted to the NoRov
conditions.
6.2.3 Subjects
The HI subjects were required to show, for the tested ear, an approximately uniform hearing loss,
with absolute threshold elevations between 30 and 50 dB from 0.25 to 3 kHz
at the tested ear. Their ages ranged from 22 to 61 y, with a mean of 42.2 y. There was no
significant difference between the NH and HI groups with respect to mean age (t(17.6)
= 1.04, p = 0.31). This is particularly noteworthy as there is evidence that age has some
effect on inharmonicity detection.
6.3 Results
For each subject, experimental condition, and nominal SL of the tones (in the case
of the NH group), we computed the geometric mean of the threshold estimates. Figure
24(a) shows the results obtained in the Rov conditions after averaging (again geometrically)
across subjects in each group and pooling the data collected in the Rov1‒ and Rov1+
conditions; the error bars represent ± 1 geometric standard deviation across subjects. For
the NH group (cyan and green curves), thresholds were only slightly lower, i.e., better, at
the higher SL. More importantly, thresholds were several
times lower when the standard FS was 1 octave (conditions Rov1‒ and Rov1+) than when
the standard FS was inharmonic (conditions Rov0.8‒ and Rov1.2+). A similar V-shaped
pattern was obtained for the HI group (black curve), but in this case the average threshold
obtained at 1 octave was 2.4 times higher than the corresponding figure for the NH group.
A repeated-measures ANOVA was performed on logarithmic
transforms of the NH subjects' thresholds in the four Rov conditions, using a planned
comparison contrasting Rov1‒ and Rov1+ with Rov0.8‒ and Rov1.2+. SL was an
additional within-subject factor in the ANOVA. The overall main effect of Rov condition
was found to be highly significant (F(3, 33) = 82.91, p < 0.0001), as well as the
"harmonicity effect" tested by the planned comparison (t(33) = 15.41, p < 0.0001). The
Figure 24. (a) Mean performance of the hearing-impaired group (black lines) and the normal-hearing group
(green lines: SL = 5 dB; cyan lines: SL = 15 dB) in the Rov conditions of the experiment. The ordinate scale
is logarithmic and the disks represent geometric means. The results obtained in the Rov1‒ and Rov1+
conditions (standard FS = 1 octave) have been pooled. The error bars represent ± 1 geometric standard
deviation across subjects. Thick red lines represent the performance expected from a subject responding at
random. (b) Individual performance of each hearing-impaired subject in the Rov conditions. The subjects
are identified by pairs of letters as in Table I. Again, the results obtained in the Rov1‒ and Rov1+ conditions
(standard FS = 1 octave) have been pooled.
main effect of SL was also significant (F(1, 11) = 6.94, p = 0.023), but there was no
significant interaction between SL and Rov condition (F(3, 33) = 0.71, p = 0.55).
For the HI group, a similar analysis of the results obtained in the four Rov
conditions revealed again a significant overall effect of this factor (F(3, 21) = 15.91, p <
0.0001), and it was also found that thresholds were significantly better when the standard
FS was 1 octave than when it differed from 1 octave (t(21) = 6.73, p < 0.0001).
Figure 24(b) shows the performance of each HI subject for the three standard FS,
conditions Rov1‒ and Rov1+ being pooled. Although performance in this group varied
considerably across subjects, especially when the standard FS was 1 octave, a majority of
the subjects displayed better thresholds for the harmonic standard FS than for the inharmonic ones.
This effect was statistically tested in each subject, using as entries the three (or 10, in the
case of subject MP) threshold estimates collected in each condition. The outcome of the
planned one-tailed comparison contrasting Rov1‒ and Rov1+ with Rov0.8‒ and Rov1.2+
was significant with p < 0.001 for five subjects: MO, BF, QB, AN, and LC. For the
remaining subjects, EL, MP, and NM, the p value was equal to, respectively, 0.017, 0.018,
and 0.041. These three subjects showed only a weak effect of harmonicity. It should be
noted, however, that in the case of NM, the weakness of the effect may be largely
artifactual. On any trial, as mentioned in the Methods section, |deltaFS| could not exceed
the value of the standard FS. Thus, performance could not be infinitely poor in conditions
Rov0.8‒ and Rov1.2+. In order to assess this limitation, we computed the thresholds
expected in each Rov condition from a subject responding at random on every trial. The
results are depicted by thick red lines in Figure 24. It is clear that in conditions Rov0.8‒
and Rov1.2+, NM performed essentially at the chance level. Therefore, her "true"
thresholds in these two conditions may have been even higher than the measured ones,
and her real effect of harmonicity larger than the measured one.
The analyses reported above were focused on the Rov conditions. We also
analyzed, for the NH group, the thresholds obtained in the four
experimental conditions in which the standard FS was 1 octave, namely, Rov1‒, Rov1+,
NoRov1‒ and NoRov1+. There were three within-subject factors in this ANOVA: roving
condition (Rov versus NoRov), sign of deltaFS (negative versus positive), and SL (15 dB
versus 5 dB). Each of these factors was found to have a significant main effect (roving
condition: F(1, 11) = 12.75, p = 0.004; sign of deltaFS: F(1, 11) = 55.00, p < 0.0001; SL:
F(1, 11) = 20.14, p < 0.001). Interestingly, two interactions were found to be significant:
roving condition X SL (F(1, 11) = 6.03, p = 0.032), and roving condition X sign of deltaFS
(F(1, 11) = 9.53, p = 0.010). The first of these interactions is depicted in Figure 25. It can
be seen that the advantage of the higher SL for performance was larger in the NoRov
conditions than in the Rov conditions.

Figure 25. The effect of SL and roving on the performance of the normal-hearing group when the standard
FS was 1 octave. Again, the ordinate scale is logarithmic and each disk represents a geometric mean. The
error bars represent within-subject geometric standard errors; they were computed using the
Cousineau-Morey procedure (Baguley, 2012).

Figure 26. Performance of each group of subjects in the four experimental conditions for which the
standard FS was 1 octave (Rov1‒, Rov1+, NoRov1‒, and NoRov1+). The error bars represent again
within-subject geometric standard errors, computed using the Cousineau-Morey procedure.

The NH group's thresholds in these four conditions are displayed in Figure
26, and we also present in this figure the corresponding results for the HI group. For the
NH group, performance was definitely better in condition Rov1‒ than in condition Rov1+,
but similar in conditions NoRov1‒ and NoRov1+. For the HI group, on the other hand,
the sign of deltaFS had little effect on performance, whatever the roving condition. A
repeated-measures ANOVA of the data summarized in Figure 26, using hearing status
as a between-subjects factor, revealed a significant
interaction between hearing status and sign of deltaFS (F(1, 18) = 10.82, p = 0.004), in
addition to a main effect of hearing status (F(1, 18) = 9.77, p = 0.006). This ANOVA
showed no significant three-way interaction between hearing status, roving
condition, and sign of deltaFS (F(1, 18) = 2.23, p = 0.153). Nevertheless, the advantage
of condition Rov1‒ over condition Rov1+ was found to be significantly greater in the NH
group than in the HI group (F(1, 18) = 10.37, p = 0.005). It must finally be pointed out
that there was no interaction between hearing status and roving condition (F(1, 18) =
0.10, p = 0.76).
As shown by Figure 24(b), very large individual differences were found among the
HI subjects in the Rov1 conditions. This finding presumably reflects large individual
differences in CCID. However, the HI subjects also differed widely from each other in
the NoRov conditions, testing frequency discrimination instead of CCID. We pointed out
above that, in case of cochlear damage, both abilities should be limited in part by an
abnormally large frequency-coding noise. This predicts, in the HI group, a positive correlation between performance
in Rov1 and performance in NoRov. Figure 27 displays the corresponding data. The
predicted positive correlation was indeed present in the HI group; for the NH group, in
contrast, the correlation was much smaller and not statistically significant (r = 0.310, p =
0.33).
6.4 Discussion
This study indicates that mild or moderate cochlear damage may reduce CCSH
in some listeners. However, in the light of our results, CCSH appears to be remarkably
robust: it was present, at least to a small extent, in each of our HI subjects; it was not
abolished even in the subjects showing the highest thresholds in the Rov1 conditions.
One conceivable explanation of the latter finding was that, for the subjects in
question, performance was poor simply because the tones were difficult to detect in the
noise, due to an inadequate adjustment of their level after the preliminary phase of the
experiment. We carried out a post-hoc test of this trivial hypothesis in the four HI subjects
who had displayed the poorest thresholds in the Rov1 conditions: NM, MP, EL, and LC
(see Figure 24(b)). The stimuli were similar to those previously used in the Rov1
conditions. On each trial, again, triplets of tones were presented within two successive
noise bursts, as illustrated in Figure 23. This time, however, only one triplet was presented
within one of the two noise bursts: either triplet T2 or triplet T4 was missing; a random
choice was made between these two options and the subject had to indicate if the missing
triplet was T2 or T4. For each subject, crucially, the noise and the tones had the same SPL
as before. There were 200 trials per subject. The proportions of correct responses given
by NM, MP, EL, and LC were respectively equal to 0.965, 1, 0.995, and 1. This outcome
rules out the hypothesis that the tones were insufficiently audible for these subjects.
As reported above, the HI group also
differed from the NH group with respect to the relation between performance in the two
Rov1 conditions. For the NH group, negative deviations from a 1-octave frequency ratio
(i.e., "compressions" of this ratio) were easier to detect than positive deviations
("stretchings"). For the HI group, on the other hand, the sign of the deviations did not
matter. The asymmetry observed in the NH group was expected as it is consistent with
the results of four previous studies (Demany, Semal, & Carlyon, 1991; Borchert, Micheyl,
& Oxenham, 2011; Bonnard, Micheyl, Semal, Dauman, & Demany, 2013; Bonnard,
Dauman, Semal, & Demany, 2016). However, this asymmetry remains unexplained. Its
absence in the HI group may suggest that inharmonicity was detected in different ways
by the two groups. In this regard, we have to consider the possibility that, for at least
some HI subjects, detection was based on beat cues of peripheral origin and therefore
had nothing to do with CCSH. Beat cues might have arisen from cochlear interactions
between stimulus components. By using tones widely spaced in frequency and presenting
them at a low SL within noise, we minimized the likelihood of such interactions; however,
they were not completely ruled out in the HI group since cochlear damage reduces
cochlear frequency selectivity (Glasberg & Moore, 1986; see also Rosen, 1987). Another
possible conjecture is that some HI subjects detected beats based on the generation of
aural harmonics (Plomp, 1967), although the middle ears of these subjects were intact and
the tones were presented at moderate sound pressure levels.
In fact, any version of the beat-detection hypothesis is certainly wrong, for the
following reason. Beats are best detectable when their rate is between about 1
and 4 Hz; they become less and less detectable at higher rates (Riesz, 1928). Consequently,
inharmonicity detection based on beat cues is more difficult for large amounts of
inharmonicity than for smaller ones. The average threshold of the HI subjects in the Rov1
conditions was 0.115 octave. For this amount of inharmonicity, the minimum rate of a
beat (i.e., the beat rate obtained for a lowest tone at 200 Hz) was 31 Hz, well above the
optimal rate for detection. Therefore, it can hardly be doubted that the V-shaped patterns
displayed by the HI subjects in the Rov conditions (Figure 24(b)) genuinely reflect CCSH.
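The beat-rate figure quoted above can be checked with a few lines of arithmetic (the pairing of the aural harmonic at 2 × 200 Hz with the mistuned upper tone follows the reasoning in the text):

```python
# Values from the text: lowest tone at 200 Hz, mean HI threshold of 0.115 octave.
f_low = 200.0
mistuning = 0.115                            # octaves away from a 2:1 ratio
aural_harmonic = 2 * f_low                   # second harmonic of the lower tone, 400 Hz
compressed = f_low * 2 ** (1 - mistuning)    # upper tone, compressed octave
stretched = f_low * 2 ** (1 + mistuning)     # upper tone, stretched octave
beats = (abs(aural_harmonic - compressed), abs(stretched - aural_harmonic))
print(min(beats))  # ~31 Hz, far above the 1-4 Hz range where beats are salient
```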
The average threshold of the NH group in the Rov1 conditions is also inconsistent with
a beat-detection account, for the same reason.
The fact that inharmonicity detection was asymmetric for the NH group but
symmetric for the HI group is not inconsistent with the idea that both groups'
performance in the Rov1 conditions was based on CCSH. As pointed out in the
Introduction, CCSH may be explained in two different ways: it could be based on the
existence of templates of the harmonic frequency ratios in the central auditory system,
or on the temporal
information provided by the auditory nerve thanks to neural phase-locking. These two
possible schemes are not mutually exclusive, but it may be that one of them sets the limits
of CCID for NH listeners while these limits are primarily set by the other scheme in case
of cochlear damage. The present study does not allow us to put forward justifiably a more
specific hypothesis. Under either scheme, the limits of CCID can be thought of as
set by the sum of a peripheral frequency-coding noise and a central noise. The data
depicted in Figure 25 are consistent with this conceptualization. They show that, for the
NH group, our manipulation of tone level had little effect on CCID while it significantly
affected frequency discrimination. Decreasing the level of the tones by 10 dB, from
15 to 5 dB SL, presumably increased the frequency-coding noise. However, this increase
in the frequency-coding noise was expected to have little effect on CCID if, as could
reasonably be assumed, the main part of the total noise limiting CCID was central in origin.
In the present study, CCSH was assessed with
a single harmonic frequency ratio, the octave. It might be objected that the octave is a
special case. Indeed, in the human auditory cortex, there seem to be neuronal populations
specifically sensitive to octave relationships.
Psychophysical studies have suggested that two simultaneous pure tones one octave apart
are perceptually fused to a larger extent than pairs of pure tones forming other harmonic
frequency ratios (Demany & Semal, 1992; Roberts & Brunstrom, 2003). There is also
evidence that when they are heard one after the other, two pure tones at an octave distance
are perceived as similar (Deutsch, 1973; Demany & Armand, 1984; Borra, Versnel,
Kemner, van Opstal, & van Ee, 2013); it has often been stated that such tones share a
pitch quality, called "tone chroma". One might think that the octave advantage apparent
in Figure 24 is partly caused by the special pitch relation of pure tones forming an octave
interval, and therefore that we did not study here a "pure" form of CCSH. However, in
the light of a recent study conducted in our laboratory (Bonnard, Dauman, Semal, &
Demany, 2016), this hypothesis appears to be wrong. We showed in the latter study that
pitch affinity and fusion do not depend in exactly the same way on frequency ratio, and
hence are probably not directly related. Thus, even though the fusion of simultaneous
pure tones one octave apart may be particularly strong, this fusion can be reasonably
attributed to harmonicity itself rather than to a special pitch relation between the tones.
7 GENERAL CONCLUSION

The experimental data presented in this thesis provide elements of an answer to the
questions raised above. (1) Our experimental paradigm probed the subjects' ability to
detect the degree of harmonic fusion of the stimuli in a "global" manner, over a wide
frequency range and without precisely identifying the components involved. Such an
ability may respond differently to a cochlear lesion that is relatively homogeneous along
the basilar membrane than to a more heterogeneous lesion, alternating severely damaged
zones and preserved frequency "islands". This type of frequency dependence has already
been demonstrated in other auditory tasks (Gabriel, Koehnke, & Colburn, 1992). (2) To
extend these results, a new study is planned, with the aim of specifically measuring the
ability of subjects with cochlear hearing loss to detect a mistuned component within a
harmonic complex tone. On each trial, the subject will be presented with a complex tone
made of four components, followed by a single pure tone serving as a probe. The
complex tone will be harmonic or contain a mistuned component; the frequency of the
probe could be very close to that of the mistuned component, or very close to that of
another component.

The data collected in normal-hearing subjects show that harmonic fusion and the
pitch affinity of melodic pure tones are not two directly related phenomena. This
observation is difficult to explain with models assuming that the extraction of spectral
pitch relies on templates. Our results seem instead compatible with the idea that harmonic
fusion precedes the extraction of the spectral pitch of the components, or that these two
processes operate in parallel. Even if our data do not directly warrant such a conclusion,
they suggest that, while cochlear damage degrades the resolution of the frequency
components (and thus their individual analysis), harmonic fusion remains relatively robust.
8 BIBLIOGRAPHY
Alain, C., Schuler, B. M., & McDonald, K. L. (2002). Neural activity associated with distinguishing
concurrent auditory objects. J. Acoust. Soc. Am., 111, pp. 990-995.
Arehart, K. H. (1994). Effects of harmonic content on complex-tone fundamental-frequency
discrimination in hearing-impaired listeners. J. Acoust. Soc. Am., 95, pp. 3574-3585.
Arehart, K., King, C., & McLean-Mudgett, K. (1997). Role of fundamental frequency differences
in the perceptual separation of competing vowel sounds by listeners with normal hearing
and listeners with hearing loss. J. Speech Lang. Hear. Res., 40, pp. 1434-1444.
Arehart, K., Rossi-Katz, J., & Swensson-Prutsman, J. (2005). Double-vowel perception in listeners
with cochlear hearing loss: Differences in fundamental frequency, ear of presentation,
and relative amplitude. J. Speech Lang. Hear. Res., 48, pp. 236-252.
Assmann, P., & Paschall, D. (1998). Pitches of concurrent vowels. J. Acoust. Soc. Am., 103, pp.
1150-1160.
Attneave, F., & Olson, R. K. (1971). Pitch as a medium: A new approach to psychophysical
scaling. Am. J. Psychol., 84, pp. 147-166.
Bachem, A. (1937). Various types of absolute pitch. J. Acoust. Soc. Am., 9, pp. 146-151.
Bendor, D., & Wang, X. (2005). The neuronal representation of pitch in primate auditory cortex.
Nature, 436, pp. 1161-1165.
Bernstein, J. G., & Oxenham, A. J. (2005). An autocorrelation model with place dependence to
account for the effect of harmonic number on fundamental frequency discrimination. J.
Acoust. Soc. Am., 117, pp. 3816-3831.
Bernstein, J. G., & Oxenham, A. J. (2006). The relationship between frequency selectivity and
pitch discrimination: Sensorineural hearing loss. J. Acoust. Soc. Am., 120, pp. 3929-3945.
Bharadwaj, H., Verhulst, S., Shaheen, L., Liberman, M., & Shinn-Cunningham, B. (2014).
Cochlear neuropathy and the coding of supra-threshold sound. Front. Syst. Neurosci., 8,
26.
Blackwell, H., & Schlosberg, H. (1943). Octave generalization, pitch discrimination, and loudness
thresholds in the white rat. J. Exp. Psychol., 33, pp. 407-419.
Bonnard, D., Dauman, R., Semal, C., & Demany, L. (2016). Harmonic fusion and pitch affinity:
Is there a direct link? Hear. Res., 333, pp. 247-254.
Bonnard, D., Micheyl, C., Semal, C., Dauman, R., & Demany, L. (2013). Auditory discrimination
of frequency ratios: The octave singularity. J. Exp. Psychol. Hum. Percept. Perform., 39,
pp. 788-801.
Borchert, E., Micheyl, C., & Oxenham, A. (2011). Perceptual grouping affects pitch judgments
across time and frequency. J. Exp. Psychol. Hum. Percept. Perform., 37, pp. 257-269.
Borra, T., Versnel, H., Kemner, C., van Opstal, A., & van Ee, R. (2013). Octave effect in auditory
attention. Proc. Natl. Acad. Sci. U. S. A., 110, pp. 15225-15230.
Bregman, A. S. (1990). Auditory Scene Analysis. Cambridge, MA: MIT Press.
Bregman, A. S., & Doehring, P. (1984). Fusion of simultaneous tonal glides: The role of
parallelness and simple frequency relations. Percept. Psychophys., 36, pp. 251-256.
Brunstrom, J. M., & Roberts, B. (2000). Separate mechanisms govern the selection of spectral
components for perceptual fusion and for the computation of global pitch. J. Acoust.
Soc. Am., 107, pp. 1566-1577.
Bureau International d'Audiophonologie. (1997). Recommandation biap 02/1 bis. Retrieved
January 29, 2016 from biap.org: http://www.biap.org/fr/recommandations/65-ct-2-
classification-des-surdites
Burns, E., & Houtsma, A. (1999). The influence of musical training on the perception of
sequentially presented mistuned harmonics. J. Acoust. Soc. Am., 106, pp. 3564-3570.
Burns, E., & Ward, W. (1978). Categorical perception phenomenon or epiphenomenon: Evidence
from experiments in the perception of melodic musical intervals. J. Acoust. Soc. Am., 63,
pp. 456-468.
Cariani, P. (2001). Temporal codes, timing nets, and music perception. Journal of New Music
Research, 30, pp. 107-135.
Cariani, P., & Delgutte, B. (1996). Neural correlates of the pitch of complex tones. I. Pitch and
pitch salience. J. Neurophysiol., 76, pp. 1698-1716.
Carlyon, R. P. (1998). Comments on "A unitary model of pitch perception" [J. Acoust. Soc. Am.
102, 1811-1820 (1997)]. J. Acoust. Soc. Am., 104, pp. 1118-1121.
Carlyon, R. P., & Gockel, H. E. (2008). Effects of harmonicity and regularity on the perception
of sound sources. In W. A. Yost, A. N. Popper, & R. R. Fay (Eds.), Auditory perception
of sound sources (pp. 191-213). New York: Springer.
Carlyon, R. P., & Shackleton, T. M. (1994). Comparing the fundamental frequencies of resolved
and unresolved harmonics: Evidence for two pitch mechanisms? J. Acoust. Soc. Am., 95,
pp. 3541-3554.
Carlyon, R., Demany, L., & Semal, C. (1992). Detection of across-frequency differences in
fundamental frequency. J. Acoust. Soc. Am., 91, pp. 279-292.
Chintanpalli, A., & Heinz, M. (2013). The use of confusion patterns to evaluate the neural basis
of concurrent vowels identification. J. Acoust. Soc. Am., 134, pp. 2988-3000.
Clinard, C., & Tremblay, K. (2013). Aging degrades the neural encoding of simple and complex
sounds in the human brainstem. J. Am. Acad. Audiol., 24, pp. 590-599.
Cramer, E. M., & Huggins, W. H. (1958). Creation of pitch through binaural interaction. J. Acoust.
Soc. Am., 30, pp. 413-417.
Cruickshanks, K. J., Wiley, T. L., Tweed, T. S., Klein, B. E., Klein, R., Mares-Perlman, J. A., &
Nondahl, D. M. (1998). Prevalence of hearing loss in older adults in Beaver Dam,
Wisconsin. The Epidemiology of Hearing Loss Study. Am. J. Epidemiol., 148, pp. 879-
886.
Culling, J., & Darwin, C. (1993). Perceptual separation of simultaneous vowels: Within and across-
formant grouping by F0. J. Acoust. Soc. Am., 93, pp. 3454-3467.
Culling, J., & Darwin, C. (1994). Perceptual and computational separation of simultaneous vowels:
Cues arising from low-frequency beating. J. Acoust. Soc. Am., 95, pp. 1559-1569.
Dai, H. (2000). On the relative influence of individual harmonics on pitch judgment. J. Acoust.
Soc. Am., 107, pp. 953-959.
de Cheveigné, A. (1993). Separation of concurrent harmonic sounds: Fundamental frequency
estimation and a time-domain cancellation model of auditory processing. J. Acoust. Soc.
Am., 93, pp. 3271-3290.
de Cheveigné, A. (1997). Harmonic fusion and pitch shifts of mistuned partials. J. Acoust. Soc.
Am., 102, pp. 1083-1087.
de Cheveigné, A. (2005). Pitch perception models. In C. J. Plack, A. J. Oxenham, R. R. Fay, &
A. N. Popper (Eds.), Pitch: Neural coding and perception (pp. 169-233). New York, NY: Springer.
de Cheveigné, A., & Pressnitzer, D. (2006). The case of the missing delay lines: synthetic delays
obtained by cross-channel phase interaction. J. Acoust. Soc. Am., 119, pp. 3908-3918.
de Cheveigné, A., Kawahara, H., Tsuzaki, M., & Aikawa, K. (1997). Concurrent vowel
identification: I. Effects of relative amplitude and F0 difference. J. Acoust. Soc. Am., 101,
pp. 2839-2847.
Demany, L., & Armand, F. (1984). The perceptual reality of tone chroma in early infancy. J.
Acoust. Soc. Am., 76, pp. 57-66.
Demany, L., & Semal, C. (1988). Dichotic fusion of two tones one octave apart: Evidence for
internal octave templates. J. Acoust. Soc. Am., 83, pp. 687-695.
Demany, L., & Semal, C. (1990). Harmonic and melodic octave templates. J. Acoust. Soc. Am.,
88, pp. 2126-2135.
Demany, L., & Semal, C. (1992). Detection of inharmonicity in dichotic pure-tone dyads. Hear.
Res., 61, pp. 161-166.
Demany, L., Semal, C., & Carlyon, R. (1991). On the perceptual limits of octave harmony and
their origin. J. Acoust. Soc. Am., 90, pp. 3019-3027.
Deutsch, D. (1973). Octave generalization of specific interference effects in memory for tonal
pitch. Percept. Psychophys., 13, pp. 271-275.
Dobbins, P. A., & Cuddy, L. L. (1982). Octave discrimination: An experimental confirmation of
the "streched" subjective octave. J. Acoust. Soc. Am., 72, pp. 411-415.
Dowling, W., & Fujitani, D. (1971). Contour, interval, and pitch recognition in memory for
melodies. J. Acoust. Soc. Am., 49, pp. 524-531.
Dowling, W., & Hollombe, A. (1977). The perception of melodies distorted by splitting into
several octaves: Effects of increasing proximity and melodic contour. Percept.
Psychophys., 21, pp. 60-64.
Duifhuis, H., Willems, L., & Sluyter, R. (1982). Measurement of pitch in speech: An
implementation of Goldstein's theory of pitch perception. J. Acoust. Soc. Am., 71, pp.
1568-1580.
Evans, E. F., & Harrison, R. V. (1976). Correlation between outer hair cell damage and
deterioration of cochlear nerve tuning properties in the guinea pig. J. Physiol., 256, pp.
43-44.
Feeney, M. (1997). Dichotic beats of mistuned consonances. J. Acoust. Soc. Am., 102, pp. 2333-
2342.
Fishman, Y. I., & Steinschneider, M. (2010). Neural correlates of auditory scene analysis based on
inharmonicity in monkey primary auditory cortex. J. Neurosci., 30, pp. 12480-12494.
Gabriel, K., Koehnke, J., & Colburn, H. (1992). Frequency dependence of binaural performance
in listeners with impaired binaural hearing. J. Acoust. Soc. Am., 91, pp. 336-347.
García-Pérez, M. A. (2011). A cautionary note on the use of the adaptive up-down method. J.
Acoust. Soc. Am., 130, pp. 2098-2107.
Glasberg, B., & Moore, B. (1986). Auditory filter shapes in subjects with unilateral and bilateral
cochlear impairments. J. Acoust. Soc. Am., 79, pp. 1020-1033.
Glasberg, B., & Moore, B. (2006). Prediction of absolute thresholds and equal-loudness contours
using a modified loudness model. J. Acoust. Soc. Am., 120, pp. 585-588.
Gockel, H., Carlyon, R., & Plack, C. (2004). Across-frequency interference effects in fundamental
frequency discrimination: Questioning evidence for two pitch mechanisms. J. Acoust.
Soc. Am., 116, pp. 1092-1104.
Gockel, H., Carlyon, R., & Plack, C. (2009a). Further examination of pitch discrimination
interference between complex tones containing resolved harmonics. J. Acoust. Soc. Am.,
125, pp. 1059-1066.
Gockel, H., Hafter, E., & Moore, B. (2009b). Pitch discrimination interference: The role of ear of
entry and octave similarity. J. Acoust. Soc. Am., 125, pp. 324-327.
Goldstein, J. L. (1973). An optimum processor theory for the central formation of the pitch of
complex tones. J. Acoust. Soc. Am., 54, pp. 1496-1516.
Hall, D. A., & Plack, C. (2009). Pitch processing sites in the human auditory brain. Cereb. Cortex,
19, pp. 576-585.
Halsey, K., Fegelman, K., Raphael, Y., Grosh, K., & Dolan, D. F. (2005). Long-term effects of
acoustic trauma on electrically evoked otoacoustic emission. J. Assoc. Res. Otolaryngol.,
6, pp. 324-340.
Harrison, R. V., & Evans, E. F. (1979). Some aspects of temporal coding by single cochlear fibres
from regions of cochlear hair cell degeneration in the guinea pig. Arch. Otorhinolaryngol.,
224, pp. 71-78.
Hartmann, W. (1988). Pitch perception and the segregation and integration of auditory entities.
In G. M. Edelman, W. E. Gall, & W. M. Cowan (Eds.), Auditory function: Neurobiological
bases of hearing (pp. 623-645). New York, NY: Wiley.
Hartmann, W. (1993). On the origin of the enlarged melodic octave. J. Acoust. Soc. Am., 93, pp.
3400-3409.
Hartmann, W. (1996). Pitch, periodicity, and auditory organization. J. Acoust. Soc. Am., 100, pp.
3491-3502.
Hartmann, W., & Doty, S. (1996). On the pitches of the components of a complex tone. J. Acoust.
Soc. Am., 99, pp. 567-578.
Hartmann, W., McAdams, S., & Smith, B. (1990). Hearing a mistuned harmonic in an otherwise
periodic complex tone. J. Acoust. Soc. Am., 88, pp. 1712-1724.
Helmholtz, H. (1877/1954). Die Lehre von den Tonempfindungen als Physiologische Grundlage
für die Theorie der Musik. (English translation: On the Sensations of Tone as a
Physiological Basis for the Theory of Music). New York: Dover.
Henry, K. S., & Heinz, M. G. (2012). Diminished temporal coding with sensorineural hearing loss
emerges in background noise. Nat. Neurosci., 15, pp. 1362-1364.
Hopkins, K., & Moore, B. C. (2007). Moderate cochlear hearing loss leads to a reduced ability to
use temporal fine structure information. J. Acoust. Soc. Am., 122, pp. 1055-1068.
Hopkins, K., & Moore, B. C. (2011). The effects of age and cochlear hearing loss on temporal
fine structure sensitivity, frequency selectivity, and speech reception in noise. J. Acoust.
Soc. Am., 130, pp. 334-349.
Houtsma, A. (1968). Discrimination of frequency ratios. J. Acoust. Soc. Am., 44, p. 383 (abstract).
Houtsma, A. J., & Goldstein, J. L. (1972). The central origin of the pitch of pure tones: Evidence
from musical interval recognition. J. Acoust. Soc. Am., 51, pp. 520-529.
Houtsma, A. J., & Smurzynski, J. (1990). Pitch identification and discrimination for complex tones
with many harmonics. J. Acoust. Soc. Am., 87, pp. 304-310.
Idson, W., & Massaro, D. (1978). A bidimensional model of pitch in the recognition of melodies.
Percept. Psychophys., 24, pp. 551-565.
Kadia, S., & Wang, X. (2003). Spectral integration in A1 of awake primates: Neurons with single-
and multipeaked tuning characteristics. J. Neurophysiol., 89, pp. 1603-1622.
Kaernbach, C. (1991). Simple adaptive testing with the weighted up-down method. Percept.
Psychophys., 49, pp. 227-229.
Kale, S., Micheyl, C., & Heinz, M. G. (2013). Effects of sensorineural hearing loss on temporal
coding of harmonic and inharmonic tone complexes in the auditory nerve. Adv. Exp.
Med. Biol., pp. 109-118.
Kallman, H., & Massaro, D. (1979). Tone chroma is functional in melody recognition. Percept.
Psychophys., 26, pp. 32-36.
Kalluri, S., Depireux, D. A., & Shamma, S. A. (2008). Perception and cortical neural coding of
harmonic fusion in ferrets. J. Acoust. Soc. Am., 123, pp. 2701-2716.
Keilson, S. E., Richards, V. M., Wyman, B. T., & Young, E. D. (1997). The representation of
concurrent vowels in the cat anesthetized ventral cochlear nucleus: Evidence for a
periodicity-tagged spectral representation. J. Acoust. Soc. Am., 102, pp. 1056-1071.
Kemp, D. T. (2002). Otoacoustic emissions, their origin in cochlear function, and use. Br. Med.
Bull., 63, pp. 223-241.
Kiang, N., Sachs, M., & Peake, W. (1967). Shapes of tuning curves for single auditory-nerve fibers.
J. Acoust. Soc. Am., 42, pp. 1341-1342.
King, A., Hopkins, K., & Plack, C. (2014). The effect of age and hearing loss on interaural phase
difference discrimination. J. Acoust. Soc. Am., 135, pp. 342-351.
Kujawa, S. G., & Liberman, M. C. (2009). Adding insult to injury: Cochlear nerve degeneration
after "temporary" noise-induced hearing loss. J. Neurosci., 29, pp. 14077-14085.
Lacher-Fougère, S., & Demany, L. (1998). Modulation detection by normal and hearing-impaired
listeners. Audiology, 37, pp. 109-121.
Lacher-Fougère, S., & Demany, L. (2005). Consequence of cochlear damage for the detection of
interaural phase differences. J. Acoust. Soc. Am., 118, pp. 2519-2526.
Laguitton, V., Demany, L., Semal, C., & Liégeois-Chauvel, C. (1998). Pitch perception: A
difference between right- and left-handed listeners. Neuropsychologia, 36, pp. 201-207.
Larsen, E., Cedolin, L., & Delgutte, B. (2008). Pitch representations in the auditory nerve: Two
concurrent complex tones. J. Neurophysiol., 100, pp. 1301-1319.
Levelt, W., van de Geer, J., & Plomp, R. (1966). Triadic comparisons of musical intervals. Br. J.
Math. Stat. Psychol., 19, pp. 163-179.
Liberman, M. C., Dodds, L. W., & Learson, D. A. (1986). Structure-function correlation in noise-
damaged ears: A light and electron-microscopic study. In R. J. Salvi, D. Henderson, R. P.
Hamernik, & V. Colletti (Eds.), Basic and Applied Aspects of Noise-Induced Hearing
Loss (pp. 163-176). New York: Plenum.
Licklider, J. C. (1951). A duplex theory of pitch perception. Experientia, 7, pp. 128-133.
Lin, J., & Hartmann, W. (1998). The pitch of a mistuned harmonic: Evidence for a template
model. J. Acoust. Soc. Am., 103, pp. 2608-2617.
Lorenzi, C., Debruille, L., Garnier, S., Fleuriot, P., & Moore, B. C. (2009). Abnormal processing
of temporal fine structure in speech for frequencies where absolute thresholds are
normal. J. Acoust. Soc. Am., 125, pp. 27-30.
Lorenzi, C., Gilbert, G., Carn, H., Garnier, S., & Moore, B. (2006). Speech perception problems
of the hearing impaired reflect inability to use temporal fine structure. Proc. Natl. Acad.
Sci. USA, 103, pp. 18866-18869.
Maison, S., Usubuchi, H., & Liberman, M. (2013). Efferent feedback minimizes cochlear
neuropathy from moderate noise exposure. J. Neurosci., 33, pp. 5542-5552.
Massaro, D., Kallman, H., & Kelly, J. (1980). The role of tone height, melodic contour, and tone
chroma in melody recognition. J. Exp. Psychol. Hum. Learn. Mem., 6, pp. 77-90.
McAlpine, D., Jiang, D., & Palmer, A. R. (2001). A neural code for low-frequency sound
localization in mammals. Nat. Neurosci., 4, pp. 396-401.
McDermott, J., Keebler, M., Micheyl, C., & Oxenham, A. (2010). Musical intervals and relative
pitch: Frequency resolution, not interval resolution, is special. J. Acoust. Soc. Am., 128,
pp. 1943-1951.
McDermott, J., Lehr, A., & Oxenham, A. (2010). Individual differences reveal the basis of
consonance. Curr. Biol., 20, pp. 1035-1041.
McDermott, J., Schemitsch, M., Kawahara, H., & Ellis, D. (2013). Harmonicity and spectral
sparsity in speech segregation. Association for Research in Otolaryngology, 36th annual
midwinter meeting, 36, p. 244. Baltimore.
McKinney, M., & Delgutte, B. (1999). A possible neurophysiological basis of the octave
enlargement effect. J. Acoust. Soc. Am., 106, pp. 2679-2692.
Meddis, R., & Hewitt, M. (1992). Modeling the identification of concurrent vowels with different
fundamental frequencies. J. Acoust. Soc. Am., 91, pp. 233-245.
Meddis, R., & Hewitt, M. J. (1991). Virtual pitch and phase sensitivity of a computer model of the
auditory periphery. I: Pitch identification. J. Acoust. Soc. Am., 89, pp. 2866-2882.
Meddis, R., & O'Mard, L. (1997). A unitary model of pitch perception. J. Acoust. Soc. Am., 102,
pp. 1811-1820.
Micheyl, C., & Oxenham, A. J. (2010). Pitch, harmonicity and concurrent sound segregation:
Psychoacoustical and neurophysiological findings. Hear. Res., 266, pp. 36-51.
Miller, R. L., Schilling, J. R., Franck, K. R., & Young, E. D. (1997). Effects of acoustic trauma on
the representation of the vowel /ε/ in cat auditory nerve fibers. J. Acoust. Soc. Am., 101,
pp. 3602-3616.
Moerel, M., De Martino, F., Santoro, R., Ugurbil, K., Goebel, R., Yacoub, E., & Formisano, E.
(2013). Processing of natural sounds: Characterization of multipeak spectral tuning in
human auditory cortex. J. Neurosci., 33, pp. 11888-11898.
Moore, B. (2003). An Introduction to the Psychology of Hearing. Amsterdam, Netherlands:
Elsevier.
Moore, B. (2012). An Introduction to the Psychology of Hearing. Leiden, Netherlands: Brill.
Moore, B. C. (1982). An Introduction to the Psychology of Hearing (2nd ed.). London: Academic
Press.
Moore, B. C. (2007). Cochlear Hearing Loss: Physiological, Psychological and Technical Issues
(2nd ed.). Chichester: John Wiley & Sons Ltd.
Moore, B. C. (2008). The role of temporal fine structure processing in pitch perception for
normal-hearing and hearing-impaired people. J. Assoc. Res. Otolaryngol., 9, pp. 399-406.
Moore, B. C., & Peters, R. W. (1992). Pitch discrimination and phase sensitivity in young and
elderly subjects and its relationship to frequency selectivity. J. Acoust. Soc. Am., 91, pp.
2881-2893.
Moore, B. C., Glasberg, B. R., & Peters, R. W. (1985). Relative dominance of individual partials
in determining the pitch of complex tones. J. Acoust. Soc. Am., 77, pp. 1853-1860.
Moore, B. C., Glasberg, B. R., & Shailer, M. J. (1984). Frequency and intensity difference limens
for harmonics within complex tones. J. Acoust. Soc. Am., 75, pp. 550-561.
Moore, B. C., Huss, M., Vickers, B. R., Glasberg, B. R., & Alcántara, J. I. (2000). A test for the
diagnosis of dead regions in the cochlea. Br. J. Audiol., 34, pp. 205-224.
Moore, B. C., Peters, R. W., & Glasberg, B. R. (1985). Thresholds for the detection of
inharmonicity in complex tones. J. Acoust. Soc. Am., 77, pp. 1861-1867.
Moore, B., & Carlyon, R. (2005). Perception of pitch by people with cochlear hearing loss and by
cochlear implant users. In C. J. Plack, A. J. Oxenham, R. R. Fay, & A. N. Popper (Eds.),
Pitch: Neural Coding and Perception (pp. 234-277). New York, NY: Springer.
Moore, B., Glasberg, B., & Baer, T. (1997). A model for the prediction of thresholds, loudness,
and partial loudness. J. Audio Eng. Soc., 45, pp. 224-240.
Moore, B., Glasberg, B., & Peters, R. (1986). Thresholds for hearing mistuned partials as separate
tones in harmonic complexes. J. Acoust. Soc. Am., 80, pp. 479-483.
Moore, G. A., & Moore, B. C. (2003). Perception of the low pitch of frequency-shifted complexes.
J. Acoust. Soc. Am., 113, pp. 977-985.
Nelson, E. G., & Hinojosa, R. (2003). Presbyacusis: A human temporal bone study of individuals
with flat audiometric patterns of hearing loss using a new method to quantify stria
vascularis volume. Laryngoscope, 113, pp. 1672-1686.
Noreña, A., Tomita, M., & Eggermont, J. (2003). Neural changes in cat auditory cortex after a
transient pure-tone trauma. J. Neurophysiol., 90, pp. 2387-2401.
Ohgushi, K. (1983). The origin of tonality and a possible explanation of the octave enlargement
phenomenon. J. Acoust. Soc. Am., 73, pp. 1694-1700.
Oxenham, A. J., Bernstein, J. G., & Penagos, H. (2004). Correct tonotopic representation is
necessary for complex pitch perception. Proc. Natl. Acad. Sci. U.S.A., 101, pp. 1421-
1425.
Palmer, A. R. (1988). The representation of concurrent vowels in the temporal discharge patterns
of auditory nerve fibers. In H. Duifhuis, J. W. Horst, & H. P. Wit (Eds.), Basic Issues in
Hearing (pp. 244-251). London: Academic Press.
Pantev, C., Elbert, T., Ross, B., Eulitz, C., & Terhardt, E. (1996). Binaural fusion and the
representation of virtual pitch in the human auditory cortex. Hear. Res., 100, pp. 164-
170.
Patterson, R. D., Allerhand, M., & Giguère, C. (1995). Time-domain modelling of peripheral
auditory processing: A modular architecture and a software platform. J. Acoust. Soc.
Am., 98, pp. 1890-1894.
Patterson, R. D., Uppenkamp, S., Johnsrude, I. S., & Griffiths, T. D. (2002). The processing of
temporal pitch and melody information in auditory cortex. Neuron, 36, pp. 767-776.
Patterson, R., Handel, S., Yost, W., & Datta, A. (1996). The relative strength of the tone and noise
components in iterated rippled noise. J. Acoust. Soc. Am., 100, pp. 3286-3294.
Penagos, H., Melcher, J. R., & Oxenham, A. J. (2004). A neural representation of pitch salience in
nonprimary human auditory cortex revealed with functional magnetic resonance imaging.
J. Neurosci., 24, pp. 6810-6815.
Peters, R., Moore, B., & Glasberg, B. (1983). Pitch of components of a complex tone. J. Acoust.
Soc. Am., 73, pp. 924-929.
Plack, C. J. (2005). The Sense of Hearing. New York: Taylor & Francis.
Plack, C., Barker, D., & Prendergast, G. (2014). Perceptual consequences of "hidden" hearing
loss. Trends Hear., 18, pp. 1-11.
Plomp, R. (1967). Beats of mistuned consonances. J. Acoust. Soc. Am., 42, pp. 462-474.
Plomp, R. (1976). Aspects of tone sensation. London, UK: Academic Press.
Plomp, R., Wagenaar, W., & Mimpen, A. (1973). Musical interval recognition with simultaneous
tones. Acustica, 29, pp. 101-109.
Portfors, C., & Wenstrup, J. (2002). Excitatory and facilitatory frequency response areas in the
inferior colliculus of the mustached bat. Hear. Res., 168, pp. 131-138.
Rakowski, A. (1990). Intonation variants of musical intervals in isolation and in musical context.
Psychol. Mus., 18, pp. 60-72.
Riesz, R. (1928). Differential intensity sensitivity of the ear for pure tones. Phys. Rev., 31, pp. 867-
875.
Roberts, B., & Bailey, P. (1996). Spectral regularity as a factor distinct from harmonic relations in
auditory grouping. J. Exp. Psychol. Hum. Percept. Perform., 22, pp. 604-614.
Roberts, B., & Bregman, A. (1991). Effects of the pattern of spectral spacing on the perceptual
fusion of harmonics. J. Acoust. Soc. Am., 90, pp. 3050-3060.
Roberts, B., & Brunstrom, J. (1998). Perceptual segregation and pitch shifts of mistuned
components in harmonic complexes and in regular inharmonic complexes. J. Acoust.
Soc. Am., 104, pp. 2326-2338.
Roberts, B., & Brunstrom, J. M. (2001). Perceptual fusion and fragmentation of complex tones
made inharmonic by applying different degrees of frequency shift and spectral stretch. J.
Acoust. Soc. Am., 110, pp. 2479-2490.
Roberts, B., & Brunstrom, J. M. (2003). Spectral pattern, harmonic relations, and the perceptual
grouping of low-numbered components. J. Acoust. Soc. Am., 114, pp. 2118-2134.
Rose, J., Brugge, J., Anderson, D., & Hind, J. (1967). Phase-locked response to low-frequency
tones in single auditory nerve fibers of the squirrel monkey. J. Neurophysiol., 30, pp. 769-
793.
Rosen, S. (1987). Phase and the hearing impaired. In M. E. H. Schouten (Ed.), The Psychophysics
of Speech Perception (pp. 481-488). Dordrecht, Netherlands: Martinus Nijhoff.
Ross, D., Choi, J., & Purves, D. (2007). Musical intervals in speech. Proc. Natl. Acad. Sci. U. S.
A., 104, pp. 9852-9857.
Schellenberg, E. (2001). Asymmetries in the discrimination of musical intervals: Going out of tune
is more noticeable than going in tune. Music. Percept., 19, pp. 223-248.
Schellenberg, E., & Trehub, S. (1994). Frequency ratios and the discrimination of pure tone
sequences. Percept. Psychophys., 56, pp. 472-478.
Schellenberg, E., & Trehub, S. (1996a). Children’s discrimination of melodic intervals. Dev.
Psychol., 32, pp. 1039-1050.
Schellenberg, E., & Trehub, S. (1996b). Natural musical intervals: Evidence from infant listeners.
Psychol. Sci., 7, pp. 272-277.
Schouten, J. F., Ritsma, R. J., & Cardozo, B. L. (1962). Pitch of the residue. J. Acoust. Soc. Am.,
34, pp. 1418-1424.
Schuknecht, H. F. (1993). Pathology of the ear. Philadelphia: Lea and Febiger.
Schwartz, A., McDermott, J., & Shinn-Cunningham, B. (2012). Spatial cues alone produce
inaccurate sound segregation: The effect of interaural time differences. J. Acoust. Soc.
Am., 132, pp. 357-368.
Schwartz, D., Howe, C., & Purves, D. (2003). The statistical structure of human speech sounds
predicts musical universals. J. Neurosci., 23, pp. 7160-7168.
Sergeyenko, Y., Lall, K., Liberman, M., & Kujawa, S. (2013). Age-related cochlear synaptopathy:
An early-onset contributor to auditory functional decline. J. Neurosci., 33, pp. 13686-13694.
Shamma, S., & Klein, D. (2000). The case of the missing pitch templates: How harmonic templates
emerge in the early auditory system. J. Acoust. Soc. Am., 107, pp. 2631-2644.
Sinex, D. G. (2008). Responses of cochlear nucleus neurons to harmonic and mistuned complex
tones. Hear. Res., 238, pp. 39-48.
Sinex, D. G., & Li, H. (2007). Responses of inferior colliculus neurons to double harmonic tones.
J. Neurophysiol., 98, pp. 3171-3184.
Sinex, D. G., Guzik, H., Li, H., & Henderson Sabes, J. (2003). Responses of auditory nerve fibers
to harmonic and mistuned complex tones. Hear. Res., 182, pp. 130-139.
Stoelinga, C., & Lutfi, R. (2010). Discrimination of frequency ratios. Poster presented at the 23rd
Midwinter Research Meeting of the Association for Research in Otolaryngology,
(Abstract 333). Anaheim, CA.
Strelcyk, O., & Dau, T. (2009). Relations between frequency selectivity, temporal fine structure
processing, and speech reception in impaired hearing. J. Acoust. Soc. Am., 125, pp. 3328-
3345.
Summers, V., & Leek, M. (1998). F0 processing and the separation of competing speech signals
by listeners with normal hearing and with hearing loss. J. Speech Lang. Hear. Res., 41,
pp. 1294-1306.
Sundberg, J., & Lindqvist, J. (1973). Musical octaves and pitch. J. Acoust. Soc. Am., 54, pp. 922-
929.
Terhardt, E. (1970). Oktavspreizung und Tonhöhenverschiebung bei Sinustönen. Acustica, 22,
pp. 345-351.
Terhardt, E. (1971). Pitch shifts of harmonics, an explanation of the octave enlargement
phenomenon. Proceedings of the 7th International Congress on Acoustics, 3, pp. 621-
624. Budapest.
Terhardt, E. (1974). Pitch, consonance, and harmony. J. Acoust. Soc. Am., 55, pp. 1061-1069.
Terhardt, E., Stoll, G., & Seewann, M. (1982). Algorithm for extraction of pitch and pitch salience
from complex tonal signals. J. Acoust. Soc. Am., 71, pp. 679-688.
Thurlow, W., & Bernstein, S. (1957). Simultaneous two-tone pitch discrimination. J. Acoust. Soc.
Am., 29, pp. 515-519.
Thurlow, W., & Elfner, L. (1959). Pure-tone cross-ear localization effects. J. Acoust. Soc. Am.,
31, pp. 1606-1608.
Tobias, J. (1963). Application of a “relative” procedure to a problem in binaural-beat perception.
J. Acoust. Soc. Am., 35, pp. 1442-1447.
Trainor, L. (1997). Effect of frequency ratio on infants' and adults' discrimination of simultaneous
intervals. J. Exp. Psychol. Hum. Percept. Perform., 23, pp. 1427-1438.
Tramo, M. J., Cariani, P. A., Delgutte, B., & Braida, L. D. (2001). Neurobiological foundations for
the theory of harmony in western tonal music. Ann. NY Acad. Sci., 930, pp. 92-116.
Viemeister, N., & Fantini, D. (1987). Discrimination of frequency ratios. In W. A. Yost & C. S.
Watson (Eds.), Auditory Processing of Complex Sounds (pp. 47-56). Hillsdale, NJ: Erlbaum.
Viemeister, N., Rickert, M., & Stellmack, M. (2001). Beats of mistuned consonances: Implications
for auditory coding. In D. Breebaart, A. Houtsma, A. Kohlrausch, V. Prijs, & R.
Schoonhoven (Eds.), Physiological and Psychophysical Bases of Auditory Function (pp.
113-120). Maastricht: Shaker Publishing.
Vos, P., & Troost, J. (1989). Ascending and descending melodic intervals: Statistical findings and
their perceptual relevance. Music. Percept., 6, pp. 383-396.
Walliser, K. (1969). Über die Spreizung von empfundenen Intervallen gegenüber mathematisch
harmonischen Intervallen bei Sinustönen. Frequenz, 23, pp. 139-143.
Wang, X. (2013). The harmonic organization of auditory cortex. Front. Syst. Neurosci., 7, p. 114.
Ward, W. (1954). Subjective musical pitch. J. Acoust. Soc. Am., 26, pp. 369-380.
Wingfield, A., & Peelle, J. (2015). The effects of hearing loss on neural processing and plasticity.
Front. Syst. Neurosci., 9, 35.
Woolf, N. K., Ryan, A. F., & Bone, R. C. (1981). Neural phase-locking properties in the absence
of outer hair cells. Hear. Res., 4, pp. 335-346.
Wright, A., Rivera, J., Hulse, S., Shyan, M., & Neiworth, J. (2000). Music perception and octave
generalization in rhesus monkeys. J. Exp. Psychol. Gen., 129, pp. 291-307.
Yost, W. (1996). Pitch strength of iterated rippled noise. J. Acoust. Soc. Am., 100, pp. 3329-3335.