
Do You Hear What I Hear?

How our individual human experience and learned behaviour


shapes our perception of sound as music.

Dissertation
Lorraine Bruce

Canterbury Christ Church University


Department of Music and Performing Arts
Bachelor of Music

Word count: 9,831


Submission date: 2 May 2014
Contents

Abstract ........................................................................................................................ 3

Introduction ................................................................................................................... 4

1. How We Hear ................................................................................................... 6

1.1 The Hearing Mechanism .................................................................................... 7

2. How We Hear Sound as Music ................................................................... 12

2.1. Organised Sound ............................................................................................. 12

2.2. Pattern Perception ........................................................................................... 15

2.2.1. Gestalt Principles of Perception ................................................................. 16

2.3. Auditory Streaming …....................................................................................... 19

2.4. A Neural Basis for Pitch and Pattern Perception ............................................. 20

2.4.1. Chroma-specific - The Auditory Phenomenon of Absolute Pitch ................... 20

2.5. Sound as Event ................................................................................................. 22

2.5.1. An Ecological Approach ............................................................................ 26

2.5.2. Bregman's Theory of Auditory Scene Analysis applied to music ................27

3. How We Listen to Music ............................................................................... 29

3.1. Listening to the Environment: An Ecological Approach .................................. 31

3.2. Differences between Musicians and Non-Musicians ........................................ 33

4. How We Perceive Sound as Music ............................................................ 36

4.1. Do You Hear What I Hear? ............................................................................. 37

4.2. Conclusion ...................................................................................................... 38

Bibliography ............................................................................................................... 41

Abstract

This dissertation argues that whilst our inherent auditory cognition allows us to hear

and differentiate between sounds, it is ultimately our individual human experience

and learned behaviour that shapes our perception of sound as music. The mechanism

of hearing itself, referred to as auditory cognition, allows us to hear sound as music

and is an important and expanding area of research within the field of experimental

psychology and psychoacoustics. But in order to fully appreciate certain aspects of

our own musical behaviour, such as our different and very often unique, perception of

sound as music, we need to gain a better understanding of the psychology of hearing,

and the complex field of psychoacoustics.

Keywords

Sound, music, perception, listening, response, auditory cognition, psychoacoustics,

learned behaviour, human experience.

Introduction
As a 52-year-old student in the final year of my Music Degree (BMus) at Canterbury

Christ Church University (CCCU), one of the most noticeable changes I have seen in

myself since I started my degree, is the way I now listen to music, and how that has

influenced my response to, and perception of, what I hear. Prior to starting my course

I enjoyed listening to many different genres of music, but my response and perception

was undoubtedly emotive rather than informed. However, over the past three years of

studying music at CCCU I have learned how to analyse music, and I now listen from

a different perspective, that of a musician. But what truly shapes my individual and

often unique perception of music, as opposed to that of my peers, friends and family,

has proved an exciting and often surprising area of research for me. This is my

inspiration behind the dissertation 'Do You Hear What I Hear?'

Being adopted from birth with no family history, I have always been interested

in the nature versus nurture dichotomy. Having also been brought up in a non-musical family,

the latter is especially significant in trying to understand where my latent passion and

desire for music came from, and I believe this holds the key to why our perception of

music varies so considerably from person to person. Consequently, whilst I agree that

we are all born with the same basic cognitive skills to recognize and respond to

sound as music (Hallam 2010a), I believe that the way in which we respond, and our

individual perception, is a direct result of our cultural discourse, upbringing,

environment, and learned behaviour; in other words, our human experience to date

(Blacking 1973; Blacking 1975; Meyer 1956).

Research in the psychology of hearing and in psychoacoustics has so far concentrated

on how we hear sound as event, known as 'Auditory Scene Analysis', and the

differences between how we listen to everyday sounds and how we listen to music.

The latter has also given rise to a more ecological approach to the phenomenon of

human audition and how our environment influences what we hear. Interestingly,

our environment has also influenced many composers, such as John

Cage (1912-1992) and Olivier Messiaen (1908-1992), who incorporated

everyday sounds into their compositions. John Cage's experimental work Train

(Bologna, 1978), and Messiaen's Oiseaux exotiques (1956), are two such examples.

By embracing a philosophical and pragmatic approach to current studies and

theories, together with a critical analysis of research in the areas of human audition,

auditory perception and psychoacoustics, this dissertation reasons that whilst the

human hearing mechanism enables us to hear external stimuli as sound, it is not the

source of the sound, or the event in which it was heard alone, but our individual

human experience and learned behaviour, particularly as musicians, that ultimately

shapes the way we process, analyse, synthesize and interpret sound as music.

Chapter 1

How We Hear

Research and science explain that it is our auditory cognition, the mechanism of the

brain that allows us to hear, that deciphers external acoustic stimuli and enables us to

hear them as sounds, characterised in terms of their individual sensory qualities of

pitch, loudness, and timbre. But if music is essentially an acoustic experience, then

the former alone does not explain our individual and sometimes unique perception of

what we hear. The way in which we hear is fundamentally a subconscious process,

and an important and integral part of how we are continuously monitoring the world

around us (Brownell 1997; Hodges and Sebald 2011).

Conversely, as Hallam (2010b) explains, listening to music requires

concentration, and focussing on specific musical elements, which triggers multiple

cognitive functions in the brain, sometimes simultaneously, and often in complex but

unified and incredibly fast sequences. Gaver (1993a) also distinguishes between how

we listen to everyday sounds, such as a car backfiring or a clap of thunder, and how

we listen to a piece of music, which requires an awareness of certain acoustic

characteristics such as timbre, pitch and loudness. The way the human brain

structures its extremely multifaceted environment, and organises and perceives sound

as music, has been the subject of countless studies over the years, and is

now the centre of focus for the rapidly growing areas of research into psychoacoustics

and auditory perception.

But before reviewing current literature, research, and models such as the

theory of Auditory Scene Analysis (ASA), as developed by A.S. Bregman in 1990, or

ecological theories as presented in Gaver (1993a) and Clarke (2005), it is essential to

first fully understand the anatomy and physiology of the hearing mechanism itself,

without which we wouldn't be able to facilitate acoustical communication,

fundamental to the existence of humankind, and the ability to hear music (Fastl and

Zwicker 2006).

1.1 The Hearing Mechanism

The human hearing mechanism comprises the external, middle and inner ear. The

external part of the ear is made up of the earflap known as the pinna, and the auditory

canal (see Figure 1). The outer ear is essentially responsible for catching external

acoustic stimuli, referred to as sound waves, and passing them down the auditory

Figure 1. The major anatomical features of the outer ear, also known as the pinna and the auricle
(Warren 2008, p.6).

canal to the ear drum (see Figure 2). It is also important in determining where the

various sounds originate from, either in front, to the side or behind us. The sounds

that the outer ear catches are a result of rapid changes in air pressure, produced in a

number of different ways. For example, from the vibration of our own vocal folds or

a small object such as an insect flapping its wings, to the air emitted from a loud

police siren or the noisy turbulent sound of air escaping through a small aperture

(Warren 2008). The ear drum, also known as the tympanic membrane, is extremely

sensitive to these changes in pressure, and vibrates in response to the intensity and

frequency of the incoming sound wave, from a faint whisper to a loud explosion.

According to Donaldson and Duckert (1991) the eardrum's sensitivity is due to the

thinnest part of its membrane being only 0.055 mm thick. Meanwhile, the auditory

canal, an air-filled S-shaped cavity about 2.5 cm long, acts as a resonator to amplify

the sounds as they travel along the ear canal to the eardrum (Tan et al 2010).

The eardrum is connected to the inner ear by the ossicles, three small bones

known as the hammer, anvil and stirrup, the latter also being the smallest bone in the

human anatomy. The Eustachian tube which connects the middle ear to the nose, is

designed to ensure the correct balance of pressure to allow the eardrum to vibrate

freely. The vibration of the eardrum then causes the ossicles to move up and down

like mechanical levers, increasing the pressure of the sound wave caused by the

external stimuli, and transferring it to the oval window and cochlea, the main structure

of the inner ear (see Figure 2). The former process is necessary in order to maximize

Figure 2. The middle and inner ear (Hodges and Sebald 2011, p.98).

the energy of the sound that finally enters the fluids of the inner ear. This process can be

affected by rapid changes in altitude, such as a plane coming in to land, the onset of a

cold, or certain diseases which can also lead to hearing impairment or loss (Brownell

1997).

The workings of the middle ear are especially complex, and if a sound wave

hits the oval window, which divides the air-filled middle ear from the liquid-filled

inner ear, without first being amplified by the auditory canal, eardrum and the

ossicles, then less than one percent of the sound's energy would be passed to the

fluid-filled cochlea. And it is the energy of the sound that is converted into a neural

impulse and transmitted via the neural pathways in the cochlea to the auditory cortex

of the brain, the organ we use for all human cognition, including the perception of

sound as music (Tan et al 2010).
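The scale of this impedance mismatch can be sketched with the standard acoustic transmission formula. The impedance values below are rough textbook assumptions for air and water (a stand-in for cochlear fluid), not measurements taken from the sources cited here:

```python
# Fraction of acoustic power transmitted across an air-to-fluid boundary,
# a common approximation for why the middle ear's amplification is needed.
# Impedances are approximate specific acoustic impedances in rayls.
Z_AIR = 415.0      # air at room temperature (assumed value)
Z_FLUID = 1.48e6   # water, standing in for cochlear fluid (assumed value)

def transmission_coefficient(z1, z2):
    """Power transmission coefficient for a plane wave at normal incidence."""
    return 4 * z1 * z2 / (z1 + z2) ** 2

t = transmission_coefficient(Z_AIR, Z_FLUID)
print(f"{t:.2%} of the sound's energy crosses without amplification")
```

On these figures only about a tenth of one percent of the energy would cross unaided, which is consistent with the "less than one percent" figure above; the lever action of the ossicles exists precisely to bridge this mismatch.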

'No bigger than the tip of the little finger' (Stevens and Warshofsky 1965, p.

43) and measuring approximately 3.5 centimetres when unrolled, the human cochlea

is one of the most amazing examples of efficiency in the human body (see Figure 3).

Figure 3. The structure of the cochlea when unrolled (Tan et al 2010, p.46).

The cochlea is a snail-like structure, made up of three parallel tubes known as

the median, tympanic and vestibular canals. The median, which is filled with an

incompressible fluid called endolymph, and separated from the other canals by two

flexible membranes, is of particular significance as it contains the structures that

enable auditory perception to take place. The surface of the basilar membrane, which

separates the median from the tympanic canal, is where we find the organ of Corti,

which contains all the neural apparatus required to detect sound (see Figure 4).

Figure 4. Cross section of the cochlea, rotated 90 degrees, showing the organ of Corti, and the inner
and outer hair cells (Tan et al 2010, p.47).

Named after the Italian anatomist Alfonso Giacomo Gaspare Corti (1822-1876),

who discovered the sensory end organ of hearing in 1851 in Würzburg (Kley 1986),

the organ of Corti consists of approximately 3,500 inner, and 20,000 outer, extremely sensitive

hair cells (Donaldson and Duckert 1991). These hair cells play a vital part in the

complex process known as transduction, which occurs during the sensation of sound,

converting the sound wave into electro-chemical energy. This energy is then

transmitted to the brain for analysis; electro-chemical signals being the only kind that

the brain recognises. The auditory nerve connects the inner ear to the brain,

facilitating two way communication with the brain by way of thousands of fibres that

carry information along ascending and descending pathways from the cochlea to the

brain and back (Hodges and Sebald 2011). However, the neural pathways leading to

the auditory cortex in the brain are tangled and multifaceted, and despite recent

findings from physiological research into these connections, their significance for our

experience of complex sound patterns like music, is still not clear (Tan et al. 2010 p.

49).
Notwithstanding the above, extensive research into the area of

neurophysiology has shown that the cochlea responds differently to sounds, encoding

them according to their frequency. Consequently two theories have emerged as to

how sounds are encoded as pitch: one for frequencies over 5,000 Hz and one for lower

frequencies. These theories, known as the Place and the Time theories, are now

thought to be essential when accounting for the perception of pitch (Bendor 2011).

Both theories centre on the displacement of the basilar membrane. The Place theory

suggests that the way in which the basilar membrane responds to this displacement is

by conducting a form of Fourier analysis (see Schneiderman 2011) to divide the

components of a complex tone into segments. In this context pitch perception would

be determined by the place on the basilar membrane excited by each frequency. The Time

theory on the other hand suggests that the perception of pitch is measured by the

energy of the sound wave as it travels through the ear, and the time between the

consecutive displacements of the basilar membrane. According to Pickles (2008, p.

273) timing is the major factor when encoding pitches below 5 kHz, but the origin of

the sound and information on place is needed for encoding pitches above 5 kHz.

(Temporal and neural bases for pitch perception are discussed in Chapter 2.3.)
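The kind of frequency decomposition that the Place theory attributes to the basilar membrane can be illustrated with a toy Fourier analysis. The complex tone below, with partials at 220, 440 and 660 Hz, is an illustrative assumption, not a stimulus from the studies cited:

```python
import math

# A one-second complex tone: a 220 Hz fundamental plus two harmonics.
SAMPLE_RATE = 8000
N = SAMPLE_RATE  # one second of samples

def complex_tone(t):
    return (1.0 * math.sin(2 * math.pi * 220 * t)
            + 0.5 * math.sin(2 * math.pi * 440 * t)
            + 0.25 * math.sin(2 * math.pi * 660 * t))

samples = [complex_tone(n / SAMPLE_RATE) for n in range(N)]

def energy_at(freq, samples, sample_rate):
    """Correlate the signal with a sinusoid at `freq` (one Fourier component)."""
    re = sum(s * math.cos(2 * math.pi * freq * n / sample_rate)
             for n, s in enumerate(samples))
    im = sum(s * math.sin(2 * math.pi * freq * n / sample_rate)
             for n, s in enumerate(samples))
    return math.hypot(re, im) / len(samples)

# The three partials stand out; a frequency absent from the tone does not.
for freq in (220, 440, 660, 300):
    print(freq, "Hz ->", round(energy_at(freq, samples, SAMPLE_RATE), 3))
```

On the Place theory, each of those detected components would excite a different place along the basilar membrane; the sketch simply makes explicit the separation-by-frequency that the theory presupposes.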

Chapter 1 has so far described how the hearing mechanism works, and the

significant part that the outer, middle and inner ear play in human audition. It has

also explained how the energy of a sound wave is changed into a neural impulse and

transmitted to the brain for analysis, and how it is ultimately experienced by the

listener as sound, and some sound as music (Tan et al. 2010). The following chapters

focus on how we hear sound both as an event and as music, the different ways of

listening to music as opposed to everyday sounds, and the differences between how

musicians and non-musicians perceive sound.

Chapter 2

How We Hear Sound as Music

According to Hodges and Sebald (2011) a better understanding of the human hearing

mechanism, as discussed in Chapter 1, can also heavily influence the way in which

we analyse and perceive general musical behaviour. But to understand how we hear

sound as music, as opposed to everyday sounds, requires not only knowledge of the

hearing process, but also of our perceptual organisation of sound and the relationship

between its physical and perceptual attributes, as shown in Table 1. Both ideologies

have proved to be of particular interest in the rapidly expanding area of research

known as psychoacoustics, the scientific study of sound perception and how we

respond subjectively to what we hear.

Physical Attribute      Perceptual Attribute

Frequency               Pitch
Amplitude               Loudness
Signal Shape            Timbre
Time                    Duration

Table 1. The relationship between the physical and the perceptual attributes of sound.

2.1 Organised Sound

The term 'Organised Sound' was conceptualised by the innovative French composer

Edgard Varèse (1883-1965) to describe his method of grouping together certain

timbres and rhythms, which fast became a whole new way of defining music. Varèse

believed that noise simply equated to a sound that one didn't like, and he challenged

the traditional concept of noise by transforming it into music; which he saw as an

organised collection of noises for which the individual composer was responsible for

presenting in such a way, as to be enjoyable for the listener (Ouellette 1973).

Similarly our musical behaviour and the way we hear sound as music, is

guided by our ability to perceptually organise external sensory stimuli as sound,

characterised by musical elements such as pitch, harmony, interval, rhythm, melody

and timbre. Huron (2001) describes these elements as 'fundamental dimensions of

music' which all play an important role in auditory perception. However, research has

shown that the relationship between frequency and pitch demonstrates the strongest

connection when studying neuroscience and music. The Italian mathematician and

physicist Giovanni Battista Benedetti (1530-1590) discovered in the sixteenth century that it was the

frequency of the vibration emitted from the source of a sound that elicits the aural

sensation we call pitch (McDermott and Oxenham 2008; Tan et al 2010).

It should be noted however that whilst pitch and frequency both centre around

the same natural phenomenon, that of naturally occurring periodic sounds (Schwartz

and Purves 2004), frequency is external to our auditory cognition, and therefore

objective. Pitch on the other hand is innate, and therefore highly subjective (Hodges

and Sebald 2011). These differences are highlighted in the auditory streaming

paradigm which looks at how the sequence or pattern of notes within a piece of

music can influence our perception of what we are hearing. This is illustrated in

Figure 1, where the interval between the two notes is only two semitones.

A study carried out by Pressnitzer and Hupe (2006) showed that when the notes are

played repeatedly at a fast tempo, alternating from high to low, they would normally

be perceived as a single melody line. But when the difference in frequency is far

greater, for example eleven semitones as shown in Figure 2, the listener tends to

segregate the notes into two separate streams, hearing the high and low notes as

independent lines rather than a single integrated melody.

Figure 1. Two semitones difference (Pressnitzer and Hupe 2006).

Figure 2. Eleven semitones difference (Pressnitzer and Hupe 2006).
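Under equal temperament, those interval sizes correspond to fixed frequency ratios, which can be computed directly. The 440 Hz reference pitch below is an assumption for illustration:

```python
def semitone_shift(freq_hz, semitones):
    """Frequency reached by moving a number of equal-tempered semitones,
    using the standard ratio of 2**(1/12) per semitone."""
    return freq_hz * 2 ** (semitones / 12)

a4 = 440.0  # assumed reference pitch
print(round(semitone_shift(a4, 2), 1))   # two semitones above A4
print(round(semitone_shift(a4, 11), 1))  # eleven semitones above A4
```

A two-semitone gap is a frequency ratio of only about 1.12, whereas eleven semitones is a ratio of nearly 1.89; it is this much larger separation in frequency that pushes the listener from hearing one alternating melody towards hearing two segregated streams.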

Cross (1999) explores this paradigm further by examining how we experience

part of a simple piece of Western tonal music, taken from El Noi de la Mare, an old

Catalan folk song (see Figure 3.). He suggests that the reason we hear a line of notes

as a melody, is so we can make sense of what we are hearing. In El Noi de la Mare

the melody, although relatively evident, is heard as one integrated line. But on close

inspection of the music, what the listener is actually hearing, and subsequently

processing, is a sequence of detached pitches travelling in time. The notion that the

listener can be confronted with a sequence of varying pitches like this, and yet hear it

as one line of melody, has fascinated philosophers and scientists alike, for centuries.

Figure 3. The first eight bars of the Catalan folk song El Noi de la Mare, by Miguel Llobet (1878-
1938)(Cross 1999).

2.2 Pattern Perception

According to Lora (1979) the study of 'human pattern perception', which requires the

observation of learned and musical behaviour, is multifaceted. The topic has been

explored from a number of perspectives including psychoacoustics, the physical

sciences, and from a sociological and anthropological approach in the social sciences

and humanities. Research and studies in these areas have produced a range of

purportedly objective scientific findings, creating yet further disagreement between

science and the arts. But as Lora notes this polarisation is both unrealistic and

unnecessary, as any scientific or empirical studies that measure human response can

never be entirely objective.

Consequently, Lora (1979) stresses that the study of human pattern perception

must be viewed as an interdisciplinary one. This would allow for a balanced

perspective on this synthesis of human emotion and intellect to be established, and

substantiated. A case in point is the theory of Auditory Scene Analysis (ASA), one of

the more important recent advances in psychoacoustics, developed in 1990 by Albert S.

Bregman (b.1936). As Cross (1999) explains, ASA is also used to explore human

pattern perception and explain why our experience of a sequence of pitches varies

according to differences in frequency. When consecutive pitches in a melody are

close to one another in pitch space, (see Figure 1) on hearing the second pitch our

inferred ASA mechanism is immediately activated and assigns it to the same source

as the first one. This is the same mechanism that enables us to process where a sound

has come from. For example, when we first hear a sound and its amplitude is greater

in the left ear than the right, we can immediately deduce that the source of the sound,

the object that is making the noise, is to our left. It could however be argued that in

the perception of patterns, and pitch recognition, it is in fact our cognitive ability to

match our learned musical behaviour, stored in our long-term memory, against the

incoming external sensory stimuli, that facilitates the process of ASA (McAdams and

Bigand 1993). The former also substantiates the argument that our individual

perception of sound as music is ultimately the result of our human experience to date.

The model of ASA as developed by Bregman (1990) has been used

increasingly by neuroscientists and musicians to explore and understand how we

simplify and interpret complex auditory acoustic scenes into singular events or

objects. The model is explored in more detail in section 2.5 of this Chapter.

2.2.1. Gestalt Principles of Perception

Another discipline often used to explain pattern perception is the Gestalt theory,

conceived by German psychologist Max Wertheimer (1880-1943) in the early

twentieth century and further developed by fellow psychologists Kurt Koffka (1896-

1941) and Wolfgang Köhler (1887-1967). The theory is primarily concerned with

how we organise and make sense of the world around us, including sound as music.

Gestalts, the plural for Gestalt meaning 'whole form' in German, 'are the organised

structures that emerge from the physical stimuli in our environment' (Tan et al 2010,

p.77). Although the principles of the Gestalt Theory were originally applied to the

phenomena of visual stimuli as shown in Figure 4, they can equally be applied to our

perception of musical information such as melodies, which are heard as wholes made up of individual tones.

For example, the principle of proximity as shown in Figure 4, can be used to

explain why in most tonal music we perceive a group of notes closer together,

predominantly intervals of a third or smaller, as a melody line and not simply a series

or pattern of separate tones (Hodges and Sebald 2011). Leading perceptual and

cognitive psychologist Diana Deutsch (b. 1938), has also demonstrated that when the

principle of proximity is violated, for instance by displacing the notes of a familiar

melody across different octaves, the melody becomes less coherent and extremely difficult for

the listener to recognize. This is an interesting exercise when examining our

perception of a pattern of notes as a melody, as can be demonstrated by displacing the

Figure 4. Visual examples of Gestalt principles (Tan et al 2010, p.78)

notes of the famous Ode to Joy theme, from the Ninth Symphony by Ludwig van

Beethoven (1770-1827) shown in Figure 5, in different octaves on a piano as

illustrated in Figure 6 (Deutsch 1995).

Figure 5. Close proximity of the notes in Ode to Joy (Hodges and Sebald 2010).

Figure 6. Notes to Ode to Joy as shown in Figure 5, displaced in different octaves, making the melody harder
to recognize.
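Deutsch's manipulation can be sketched in a few lines: each note keeps its chroma (pitch class) but is shifted to a random octave. The MIDI note numbers below are an illustrative transcription of the theme's opening, not Deutsch's actual stimuli:

```python
import random

# Opening of the Ode to Joy theme as MIDI note numbers
# (E E F G G F E D C C D E E D D) -- an illustrative transcription.
ODE_TO_JOY = [64, 64, 65, 67, 67, 65, 64, 62, 60, 60, 62, 64, 64, 62, 62]

def scramble_octaves(notes, seed=0):
    """Shift each note by -1, 0 or +1 octave; its chroma never changes."""
    rng = random.Random(seed)
    return [note + 12 * rng.choice((-1, 0, 1)) for note in notes]

scrambled = scramble_octaves(ODE_TO_JOY)

# Every pitch class survives the scrambling, yet the melodic contour --
# and with it the Gestalt proximity between successive notes -- is destroyed.
assert [n % 12 for n in scrambled] == [n % 12 for n in ODE_TO_JOY]
```

The assertion makes the point of the demonstration explicit: nothing about the individual pitches' identities has changed, so the difficulty listeners report must come from the broken proximity relations between successive notes.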

The Gestalt principle of similarity is equally important in understanding why

certain musical phrases, such as themes, motifs and musical ideas that are repeatedly
used within a piece of music, are also more likely to be grouped together in order to

facilitate our perception of the complete work. This is the same for other aspects of

music such as timbre and rhythm. The principle of closure works in a similar way,

as when a melody or harmonic progression resolves to the tonic, or a

phrase ends with a perfect cadence from dominant to tonic (Hodges and Sebald

2011).

Good continuation or common direction is applied when music moving in the

same direction is grouped together. This principle together with the principle of

proximity is perfectly illustrated using an extract from the Sixth Symphony by Russian

composer Pyotr Ilyich Tchaikovsky (1840-1893) (see Figure 7). On close

examination the first and second violin parts as shown in A of Figure 7, are seemingly

moving in nonsensical lines. However, if the Gestalt principles of good continuation

and proximity are applied what the listener perceives is the smoother line as depicted

in B (Hodges and Sebald 2011).

Figure 7. Tchaikovsky's Symphony No.6, 4th movement. (A) First and second violin parts as notated.
(B) First and second violin parts as perceived (Hodges and Sebald 2011, p.136).

Whilst the Gestalt principles can be applied to the way in which we are able to

organise sound as music, they can also be used to a greater extent to explain the

theory of auditory streaming, which allows us to hear a line of melody amidst a

complex and intense piece of music.

2.3 Auditory Streaming

The environment can greatly influence our perception of sound. For example, if we

hear a sudden loud noise we will almost inevitably turn immediately and look

towards the source of the sound. And yet we also have the ability to consciously

search for specific targets within our environment, on which to actively focus our

attention. It should be noted however, that these targets are more than likely to be

schemata, based on learned behaviour and our individually acquired knowledge from

past experiences. According to Tan et al (2010) the Gestalt principle of proximity as

applied to musical pitch, is also central to the segregation and grouping of auditory

streams.

Auditory Streaming is a phenomenon traditionally associated with speech and

language, and referred to as 'The Cocktail Party Phenomenon' (Hodges and Sebald

2010, pp. 134-136); in other words, our ability to listen to and follow two or more

conversations at the same time. When Auditory Streaming is applied to music it

describes how a listener can isolate two or more incoming streams of sound and

perceptually follow them as independent musical lines. For example, the chorale

prelude Wachet auf, ruft uns die Stimme, BWV645 by Johann Sebastian Bach (1685-

1750) is just one of many musical works that illustrates how auditory streaming is

necessary in order to perceive two or more overlapping musical lines (see Figure 8).

In this example the Gestalt principles of proximity, similarity and good continuation

help the listener segregate the middle voice holding the chorale tune, from the upper

and lower voices.

Figure 8. An extract from Bach's Chorale Prelude Wachet auf, ruft uns die Stimme, BWV645 (Hodges
and Sebald 2011).

2.4 A Neural Basis for Pitch and Pattern Perception

During the early stages of the transduction process of external stimuli, the perceived

auditory signals disintegrate into isolated frequencies, which by some means are then

structured together into rich acoustic sequences that we can readily identify and

become familiar with. Significant advances in neuroscientific research involving the

cerebral and auditory cortex, both necessary for this process, have led to a greater

understanding of neuroanatomy, and the neuronal mechanisms fundamental to

auditory perception, in particular our perception of sound as music (Recanzone 2011).

2.4.1 Chroma-specific - The Auditory Phenomenon of Absolute Pitch

According to Tan et al (2010), one suggestion for a neural basis of pitch and pattern

perception is based upon the extent to which the brain possesses fixed categories of

chroma-specific pitches. Research suggests that the majority of us do not remember

specific chromas, an ability known as Absolute or Perfect Pitch. And yet in contrast

there is no research to date, to suggest that the brain has specific areas dedicated to

individual musical intervals either, referred to as Relative Pitch. So why is Absolute

Pitch so uncommon?
The idea that relative pitch gradually becomes dominant in human audition as

a result of exposure to a typical musical environment is growing fast,

and has received considerable support in recent years. For example, the repeated

exposure to popular, cultural songs, such as Happy Birthday, when sung by

many different people in different keys. In order to learn these culturally significant

melodies, in the absence of consistent absolute information, the brain's neural

mechanisms process relative information to recognise similarities between the

different versions. Neuroscience also suggests that the brains of people with Absolute

Pitch process pitch differently from those with Relative Pitch. Musicians with Absolute

Pitch exhibit an irregularity concerning the left and right hemispheres of the brain.

This irregularity, which prioritises the left hemisphere, is found in the planum

temporale, an area towards the rear of the temporal lobe associated with auditory

cognition. This finding also suggests that the way the brain prepares and guides our

musical pitch varies according to the extent to which a neural function is restricted to

either the left or right hemisphere. The idea that the left hemisphere is specifically

engaged for recognising finer details, as in Absolute Pitch, and the right hemisphere

involved in processing more general holistic data, is also supported by Peretz (1990)

and Limb (2006).

Limb (2006) examines the theory that the right hemisphere of the brain specialises in processing melody: a sequence of musical pitches that forms a musical phrase, and possibly one of the most fundamental and archetypal elements of music. As with the other dimensions of music, a melody has its own temporal structure and phrasing, but it is intrinsically the relationship between the pitch of one note and the next that gives each melody its specific signature sound. Limb notes that early scientific research focussed on identifying the neural regions of the brain directly involved in musical pitch perception, and was as such primarily based on lesion studies. Findings from some of these earlier studies suggested that musical stimuli are processed by the right hemisphere, and subsequent studies have indicated that tonal pitch perception is more likely attributable to the right hemisphere, and to the auditory cortex in particular.

Deutsch (1999) estimates the incidence of Absolute Pitch at around 1 in 10,000 of the global population. Limb (2006), however, makes the interesting point that research suggests a musical ability such as Absolute Pitch is directly associated with musical talent. Limb also argues that musical ability is shaped by our exposure to music during early childhood, and stresses the important part that environment and upbringing play in shaping our musical ability, including our perception of sound as music. This example, as described by Limb (2006), of how musical ability is influenced by a combination of genetic and environmental factors is central to the argument presented in the concluding chapter: that it is ultimately our human experience and learned behaviour that shapes our perception of sound as music. Interestingly, Deutsch (1999) found that individuals who underwent extensive musical training in later years did not necessarily acquire Absolute Pitch.

2.5. Sound as Event


Musical patterns are not, however, perceived in isolation from other musical dimensions, such as timbre and rhythm, nor is our perception of melody based solely on a single sequence or pattern of discrete pitches. Rather, both are the result of our perceptual and cognitive ability to integrate the many rich and complex layers of music coherently into a homogenous sound (Tan et al 2010). This process, which relies on specific neural mechanisms and responses within the brain, also mediates our ability to localise, structure, and identify individual sounds in complex acoustic environments, known as auditory-object perception (Bizley and Cohen 2013).

According to Bizley and Cohen (2013), auditory objects are fundamental to

our hearing and are the result of our ability to perceive, organise, isolate and group

regular spectral and temporal occurrences in our acoustic environment, and then

identify them by their source or the event in which they occurred. For example, in a

busy high street we might simultaneously hear a passing car, a dog bark, and a child

crying, but we would hear each of these as a distinct and discrete sound, related to its

source, temporal structure and the event in which the sound occurred.

According to Forrester (2007), however, we do not have to be physically present at the source or event in order to identify the nature of a sound. Throughout our lives, from the unborn foetus to old age, we are exposed to external sounds in our environment, and through our individual cultural discourse we gradually build up and expand a personal library of different sounds and the circumstances in which they occurred. Consequently, our perception of a sound, as an object or an event, can also be the result of an association with an existing sound representation in the cultural repertoire of sounds stored within our long-term memory.

Bizley and Cohen (2013) explain that all external acoustic stimuli are produced by actions or events, whether intentional, such as human speech, or arising from natural or man-made sources within our environment. Our ability to make sense of these sounds as auditory objects is highly dependent on their temporal structure, and is facilitated by neural responses elicited by complex physiological mechanisms. These neural responses do not, however, contribute to our perception of the external sound until the signal has passed through the cochlea and reached the auditory cortex. Furthermore, the various neural pathways leading from the cochlea to the auditory cortex are extremely complex and chaotic, and their significance in our perception of external stimuli is not yet fully understood (Tan et al 2010).

Research has demonstrated that listening to music quite literally lights up the human brain, and that music is among the most exciting and complex acoustic events we can experience (Pressnitzer et al 2011; Collins 2013). The full extent to which the numerous neural responses, structures, and associated cognitive states influence our perception of auditory objects, however, is still not fully understood. Lakatos et al (2013) explain that although research has produced substantive evidence that attention to auditory stimuli in a complex acoustic environment excites a neural response, the principal physiological mechanisms engaged are still not clear. Yet our ability as listeners, whilst listening to a full orchestral work, to identify a single melody line, appreciate the different sounds and timbres of the many instruments, and tune in to a recurring theme or motif, a capacity studied as Auditory Scene Analysis (ASA), is at the very forefront of research into auditory cognition and psychoacoustics.

Pressnitzer et al (2011) explain that the complexity of a musical acoustic scene is undoubtedly daunting. Picture 1, for example, was taken in 1910 at the première of Symphony No. 8 in E-flat major by Gustav Mahler (1860-1911), at Munich's New Music Festival Hall. The performance, which became known as 'The Symphony of a Thousand', is said to have employed over 850 singers and an immense orchestra of 171, with Mahler himself conducting (Pressnitzer et al 2011).

Picture 1. The première of Mahler's Symphony No. 8 in E-flat major, at Munich's New Music Festival Hall in 1910 (Pressnitzer et al 2011).

Whilst it is virtually impossible to gauge exactly how many sound sources were present during this epic performance, or to identify their individual temporal structures, an illustration of the resulting waveform from the first few minutes of the performance is shown in Figure 9. Pressnitzer et al (2011) explain that at any given moment in time, the only data available to the auditory system is the sound-pressure waveform arriving at the outer ear.

Figure 9. Spectral waveform from 'The Symphony of a Thousand' (Pressnitzer et al 2011).

However, this waveform may combine fluctuating vibrations from any number of unknown physical objects within the environment. The challenge for our inferred ASA is to approximate the most probable distal causes of the waveform. By passing this waveform through a model simulating the early stages of auditory processing (Shamma 1985), a cochleogram (see Figure 10) is produced, showing the acoustic data spread over a two-dimensional field of time and frequency.
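A crude stand-in for such a time-frequency picture can be computed with a short-time Fourier transform. The sketch below (Python with NumPy, an assumption of this illustration; it is not the Shamma (1985) cochlear model) spreads a one-dimensional waveform over time and frequency in the same spirit as the cochleogram.

```python
import numpy as np

def time_frequency(signal, win=256, hop=128):
    """Magnitudes of windowed FFT frames: a crude analogue of a cochleogram.
    A real cochleogram uses cochlear filter banks, but the underlying idea,
    spreading a one-dimensional waveform over time and frequency, is the same."""
    window = np.hanning(win)
    frames = [signal[i:i + win] * window
              for i in range(0, len(signal) - win + 1, hop)]
    return np.abs(np.fft.rfft(frames, axis=1))   # shape: (time, frequency)

fs = 8000
t = np.arange(fs) / fs                           # one second of audio
tone = np.sin(2 * np.pi * 440 * t)               # a lone 440 Hz "instrument"
spec = time_frequency(tone)
peak_hz = spec.mean(axis=0).argmax() * fs / 256  # frequency bin of peak energy
assert abs(peak_hz - 440) < fs / 256             # energy sits near 440 Hz
```

For a single pure tone the energy concentrates in one frequency channel over time; for a mixture like Mahler's orchestra, every source spreads across many channels at once, which is precisely the grouping problem described next.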

The challenge for the auditory system is to group together all the activity relating to a single source, and only that source, even though it may comprise a number of differing tones and spatial arrangements. The cochleogram's layout, which reflects the tonotopic organisation of the auditory system, reveals patterns that would otherwise remain hidden in the sound wave.

Figure 10. A cochleogram of the waveform shown above in Figure 9 (Pressnitzer et al 2011).

However, in relation to our inferred ASA, tonotopy gives rise to another challenge. The energy emitted by each individual sound source is spread across a number of different frequency channels, which in turn activate distinct groups of sensory neurons. In the world around us, humans are remarkably adept at isolating sounds in this way, and can easily follow a single voice in a crowd (see Cocktail Party Phenomena, Chapter 2). In the case of a complex musical environment, however, such as Mahler's 8th Symphony, our inferred ASA might be adept at isolating the melody line or timbre of a certain instrument, but is unable to pick out the specific sound source of each and every singer in the choir. The details of this transformation are beyond the scope of this dissertation, but are reviewed in some detail in Pickles (2008).
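The grouping problem just described, reassembling the frequency channels that belong to one source, can be caricatured with a toy harmonicity rule: components lying near integer multiples of a shared fundamental are assigned to the same source. This is a deliberate simplification assumed for illustration only; real auditory grouping also exploits common onset, location and modulation.

```python
# Toy harmonicity grouping: from a mixture of frequency components (Hz),
# collect those that are near-integer multiples of a candidate fundamental.
# Illustrative values only -- not a model of the auditory system.
def group_by_fundamental(freqs, f0, tol=0.03):
    return [f for f in freqs
            if round(f / f0) >= 1 and abs(f / f0 - round(f / f0)) < tol]

# Two interleaved harmonic series, as if two instruments sounded at once.
mixture = [200, 400, 600, 330, 660, 990, 800]

assert group_by_fundamental(mixture, 200) == [200, 400, 600, 800]  # source A
assert group_by_fundamental(mixture, 330) == [330, 660, 990]       # source B
```

Each "source" is recovered from the shared mixture, echoing how energy spread across many channels can still be attributed to one physical cause.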

2.5.1 An Ecological Approach

Another approach to sound as event, discussed in McAdams (1993), is an ecological one, originally described by the American psychologist James J. Gibson (1904-1979) in The Senses Considered as Perceptual Systems (Gibson 1966). The ecological theory holds that the physical nature of the object producing the sound, the event which set it vibrating, and the meaning the listener associates with it, are all perceived directly, without any intervening process. In other words, our perception of sound as event is not the result of analysing and organising discrete elements to form an association with an existing representation in long-term memory. Rather, it is the perceptual system itself that is directly attuned to those aspects of the environment that are of specific biological significance to us individually, or that have acquired an associated behavioural significance through our human experience.

Similarly, the renowned philosopher of music Leonard B. Meyer (1918-2007), a major contributor to research into the aesthetics of music, believed that music and life are both experienced through natural processes of 'growth and decay, activity and rest, and tensions and release' (Meyer 1956, p.261). According to Meyer, extensive research has shown that our experience and perception of musical stimuli is in fact congruent with how we perceive and experience other stimuli in our environment. Meyer also believed that any connotations elicited by musical stimuli are the result both of our understanding of the elements of the music itself and their spatial organisation by the auditory system, and of our knowledge of the objects, images, ideas and inherent qualities of the non-musical world around us (Meyer 1956). The suggestion that knowledge and behaviour are fundamental to our ultimate perception and experience of sound is discussed in more detail in the concluding chapter of this dissertation.

2.5.2 Bregman's Theory of Auditory Scene Analysis (ASA) applied to Music

The theory of ASA developed by Bregman (1990), when applied to musical scene analysis, is built around the traditional view that music is two-dimensional: the horizontal dimension representing time, and the vertical dimension representing pitch. Unlike in auditory perception generally, the choice of dimensions in music is not subjective, and can be found both in musical compositions and in scientific spectrograms like the one shown in Figure 10. Most listeners will claim to solve the dilemma of two dimensions by simply paying attention to one sound at a time, implying that the distinct elements of the sound can be isolated merely by focussing attention. However, we know that the human ear senses sound as a pattern of frequencies formed by changes in pressure at the eardrum. Furthermore, graphs of the waveforms detected by the ear in response to external stimuli, as shown in Figure 9, clearly demonstrate that there is nothing evident to imply that the sound is a combination, nor to suggest how to deconstruct the pattern into component frequencies in order to make sense of what we are hearing (Bregman 1990; Bregman 1993).

In the natural world, ASA is governed by principles that have evolved specifically to build and expand our perceptual representations of the distinct events and occurrences that produce sounds: the rustle of the wind through the trees, an aeroplane overhead, a car backfiring, or the sound of a mother's voice. Similarly, in music there are events which produce distinctive sounds, such as a specific string on a violin when bowed, a column of air passing through a trumpet, or the intense vibration of a gong when struck. However, as discussed above in relation to the choir in Mahler's performance of his 8th Symphony, it is not always the individual sounds in music that are intended to be heard, but rather the composite sound as a whole that is to be experienced. A composer may also wish for a single physical source or event, such as a violin alternating rapidly between two registers, to be heard as virtual polyphony, that is to say as two separate lines of melody. This suggests that the listener's perception of the music, in this instance, is being manipulated by the composer. In the following chapter we will look at different ways of listening, at other factors thought to manipulate the listener's perception, and at how the way we listen can influence how we hear and perceive sound as music.
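The virtual polyphony mentioned above can be sketched in a few lines. This is a toy caricature (illustrative note numbers and an assumed register boundary): it splits a rapidly alternating violin line into the two melodic streams a listener would tend to hear, whereas real auditory streaming also depends on tempo, timbre and loudness.

```python
# Toy stream segregation by pitch proximity: a rapidly alternating line
# (high-low-high-low ...) separates into the two "virtual" melodies a
# listener would perceive. Purely illustrative, not a perceptual model.
def segregate(notes, split):
    """Assign each MIDI note to a high or low stream around a boundary."""
    high = [n for n in notes if n >= split]
    low = [n for n in notes if n < split]
    return high, low

# A single physical source alternating between two registers.
line = [76, 60, 77, 62, 79, 64, 77, 62]
high, low = segregate(line, split=70)

assert high == [76, 77, 79, 77]   # upper "melody" heard as one stream
assert low == [60, 62, 64, 62]    # lower "melody" heard as another
```

One note sequence from one instrument thus yields two perceived lines, which is the effect the composer exploits.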

Chapter 3

How We Listen to Music

Pearce and Rohrmeier (2012) stress that the same wide range of cognitive functions and neural processes engaged by the brain in the perceptual and cognitive organisation of sound, as discussed in the previous chapters, is also engaged when we listen to music. This applies whether we are listening as a performer, a composer or a member of the audience.

In Chapter 1 we established that hearing is an important and integral part of how we continuously monitor and make sense of the world around us, but is fundamentally a subconscious process (Brownell 1997). In contrast, as Hallam (2010b) explains, listening is a deliberate action requiring conscious cognitive activity, and is fundamental to the development of our musical understanding and behaviour. Technology such as radio and television, and in particular recording devices, allows us to listen to music virtually anywhere and at any time: in the gym, at work, or whilst travelling. But this kind of listening is easily manipulated for commercial and political gain. Aware of this when it first began public service radio broadcasts in 1922, the BBC urged audiences not to become passive listeners by using music as mere background noise, for example whilst doing their housework, but to be selective and to choose their programmes carefully to suit their cultural and artistic taste. This was suitably highlighted in an illustration by Martin Aitchison entitled 'Good music unappreciated', which appeared in The History of Music by Geoffrey Brace in 1968 (see Picture 2) (Harper-Scott and Samson 2009). Hallam (2010b) stresses, however, that listening to music, even as a passive listener, can still help to develop refined listening skills, equal in many respects to those of a trained musician.

Picture 2. Harper-Scott and Samson 2009, p.49

Clarke et al (2010) also consider the notion of passive listening, making some interesting comparisons between what they refer to as 'active or focused' listening and 'passive or background' hearing. According to Clarke et al, active listening takes place, for example, at a live concert or opera house, where the audience concentrates on the sonority of the music. This form of listening has been facilitated further by the development of recording equipment, allowing music to be experienced away from the original performance. Examples of passive hearing include being subjected to music in a lively bar, or to the sound of a mobile phone's musical ringtone. Clarke et al also stress the distinction between listening for the structure of a piece of music and listening for its meaning.

The way we listen to music has fascinated psychologists and musicologists for decades. One paradigm holds that the way we listen to music is in fact completely different from the way we listen to everyday sounds in the world around us (Gaver 1993a; Gaver 1993b). Dibben (2001), however, challenges this, arguing that both musical listening and everyday listening involve attending both to the acoustic characteristics of a sound and to what the sound specifies, that is, its source. Notwithstanding these theories, which suggest that how we listen can greatly influence both our musical behaviour and our perception of sound as music, psychologists rarely address everyday listening, preferring to focus on musical listening.

Studies of auditory cognition and psychoacoustics have also largely been

guided by the desire to understand music and the sounds musical instruments produce.

Gaver (1993b) suggests one possible reason for this is that natural sounds emanating

from our environment produce a richer, more abundant acoustic, making the task of

studying how we perceive them, much more onerous to how we perceive musical

sounds. Gaver (1993a) suggests that when we listen to music we focus on the acoustic

characteristics such as pitch, timbre and loudness, but when we listen to everyday

sounds we focus on the source of the sound from within our immediate environment.

This is more commonly referred to as an ecological view of the phenomena of

listening.

3.1 Listening to the Environment: An Ecological Approach

Gaver (1993a; 1993b) and Clarke (2005) are both advocates of an ecological approach to listening, which rejects the more traditional psychological notion that we process external stimuli according to temporality and organise individual discrete streams of sound into a homogenous representation using our long-term memory. Instead, the ecological approach centres on the premise that we are self-tuning human organisms that resonate in response to environmental information, which as listeners we perceive as a continuous measurement of objects and events in time. The cognitivist paradigm views the ecological approach as a magical account, one that seemingly perpetuates an ostensibly mystical belief that perception is simply the result of a 'miraculous tuning of perceptual systems to the regularities of the environment' (Clarke 2005, p.25). Clarke argues that this assumption is based on a false parody of the ecological approach, and one that moreover overlooks the central function of perceptual learning: a result of the flexibility of both perception and the nervous system in the context of an evolving and determinate environment. In other words, our perceptual awareness becomes attuned to the environment through continuous exposure, as a result both of our natural evolution and of individual perceptual learning across a lifetime.

According to Gaver (1993a), notwithstanding the long history of research and studies of auditory cognition, how we hear the source of a sound, for example the noise of a car backfiring, or how we can sense from the sound alone whether someone is running up or down a staircase, is still largely not understood. Whilst acknowledging that the phenomena of listening as studied in traditional psychoacoustics can be applied to both everyday and musical listening, Gaver suggests that a new ecological approach will allow us to reassess the current ideas about audition that we consider important, and thereby enable us to address the attributes of everyday listening more directly. For example, an ecological approach could help us to characterise the fundamental attributes associated with the source of a sound, or to identify the acoustic cues that signal the event from which the sound emanates.

To differentiate between ways of listening, Gaver (1993a) argues that whilst listening to a string quartet we may initially use musical listening, focussing on pattern perception in order to make sense of the sounds that we hear; but we might then focus on the individual sounds of the instruments themselves, which would be classed as everyday listening. On the other hand, we may sometimes choose to listen to the world around us in the same way as we listen to music: just as we would listen to an orchestra, we may listen to the interplay and harmony of the hum of distant traffic interspersed with syncopated birdsong.

Whilst this may appear an unusual experience, hearing the world as music is one that many composers have tried to bring into the concert setting, for instance John Cage (1912-1992) and Olivier Messiaen (1908-1992). Cage is renowned for experimenting with environmental sounds, such as the use of a train in the work he staged in Bologna in 1978, in his attempt to afford the audience the experience of musical listening to non-musical sounds (Cage 1976). Similarly, Messiaen was fascinated by birdsong and incorporated it into many of his works, of which Oiseaux exotiques (1956) is a significant example.

Research has demonstrated that listening requires us to differentiate between the physical characteristics of a sound and our psychological interpretation of it. But the way we listen, whether to everyday sounds or to music, is arguably determined by our knowledge, learned behaviour and intentions (Hallam 2010b).

3.2 Differences between musicians and non-musicians

Humans are intuitively aware that repeated listening to the same piece of music shapes our perception of it, and leads to a greater knowledge and understanding of its structure and compositional features (Pollard-Gott 1983). Hallam (2010b) argues that whilst we tend to prefer music we are familiar with, over-familiarity can lead to boredom and dislike. Research has shown that the rate at which we become familiar is linked to our musical behaviour and to the perceived complexity of the composition, which differs greatly between musically trained and untrained listeners (Pitt 1994). Neuroscience has also shown that the structure of the brain, and our perception of fundamental musical dimensions such as pitch, varies significantly between musicians and non-musicians, and between trained and untrained listeners (Gaser and Schlaug 2003; Pitt 1994).

As Clarke (2005) explains, to the untrained listener, and especially to younger children without formal musical training, the sound of a typical triadic chord is perceived as a single entity, one sound. This is reasonable given that most chords played on a piano consist of closely related pitches and homogenous timbres sounded at the same dynamic, which help to produce a fusion between the disparate components of the chord (Bregman 1990). However, when listeners are made aware that the sound they perceive as homogenous can also be heard as a number of separate components, they are being directed to pay attention to a feature of the sound previously unnoticed, but always present. For the majority of us, broadening our musical behaviour in this simple but extremely effective way, by virtue of cultural discourse and exploring our environment, is also how our individual perceptual awareness is developed and reinforced.

As skilled performers, musicians acquire from an early age the complex auditory and motor skills required to translate musical symbols into sequences of sound. These can involve techniques such as intricate fingering, advanced improvisation, the memorisation of extremely long phrases, and sometimes a combination of all three. Furthermore, musicians typically practise extensively from childhood throughout their entire musical career, if not their lifetime.

Learning to play an instrument requires the simultaneous synchronisation of multimodal sensory and motor information with sensory feedback mechanisms, in order to assess and monitor performance and gauge progress. Notwithstanding a number of neurophysiological and behavioural studies, for example Amunts (1997) and Zatorre et al (1998), the precise neural correlates that facilitate musical ability are still not fully understood; nor have definitive associations between ability and specific areas of the brain been established. However, a number of functional imaging studies (Schlaug 2001) have highlighted differences between musicians and non-musicians when engaged in auditory, motor and sensory tasks (Gaser and Schlaug 2001).

Chapter 4

How We Perceive Sound as Music

As discussed in the foregoing chapters, our innate human audition, cognitive skills and attributes, together with neural and temporal processes, all help to facilitate our ability to hear, organise and make sense of sounds in the world around us, and in particular to hear sound as music, characterised by musical dimensions such as pitch, timbre and loudness (Huron 2001). We also established in Chapter 2 how the hearing mechanism and our neural responses help us to perceive musical patterns (Lora 1979) and sequences, based on the principles of Gestalt (Tan et al 2010). But if we are all born with the same innate cognitive ability to hear (Hallam 2010a) and organise sound (Bregman 1990; Lora 1979; Pressnitzer and Hupe 2006), why does our perception of what we hear when we listen to music vary so much from person to person? And furthermore, to what extent is our individual and often unique perception of sound as music based on universals, such as interval recognition (Cuddy and Lunny 1995), cultural knowledge (Blacking 1975; Meyer 1956), or our ability to recognise fundamental psychophysical cues within our environment that transcend all cultural boundaries (Balkwill and Thompson 1999)?

Blacking (1973) is one of a number of theorists who believe that our perception of music is determined solely by enculturation: our individual human experience. This theory is borne out by Meyer (1956), who explains that our perception of a piece of music, for example of its first note, cannot be influenced by music we have not yet heard, and that our knowledge and expectation therefore change as we hear each new note. In other words, the way we hear each note may be the same, but our ultimate perception of the music is determined by our experience of the world around us, our acquired knowledge and our learned musical behaviour, which is why, as we saw in Chapter 3, perception varies considerably between musician and non-musician, the trained and the untrained ear (Pitt 1994; Gaser and Schlaug 2003; Hyde et al 2009).

Gaver (1993b) gives a very convincing illustration of the difference between everyday and musical listening, by way of an imaginary psychology experiment in which the researcher asks you to listen to a sound and simply write down what you hear. The first recorded sound you hear is that of an aeroplane. However, Gaver explains that the experiment requires you to write down what you heard, not what your brain thinks you heard by association. In other words, what you actually heard was 'a quasi-harmonic tone lasting approximately three seconds with smooth variations in the fundamental frequency and the overall amplitude' (Gaver 1993b, p.3), which your subconscious interpreted as an aeroplane by matching the incoming stimulus against representations stored in your long-term auditory memory. This is an example of everyday listening, in which we experience sound as event, as opposed to musical listening, in which we experience the qualities of the sounds themselves, as discussed in Chapter 2 of this dissertation and in depth in Bregman (1990).
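Gaver's description of what was 'actually' heard can itself be synthesised. The sketch below (NumPy; every parameter value is an arbitrary illustration, not taken from Gaver) builds a roughly three-second quasi-harmonic tone with smooth variation in both fundamental frequency and overall amplitude.

```python
import numpy as np

fs = 8000
t = np.arange(3 * fs) / fs                       # ~three seconds, per Gaver
f0 = 110 + 10 * np.sin(2 * np.pi * 0.3 * t)      # smoothly drifting fundamental
amp = 0.5 + 0.4 * np.sin(2 * np.pi * 0.2 * t)    # smooth overall amplitude
phase = 2 * np.pi * np.cumsum(f0) / fs           # integrate frequency to phase

# Sum a few slightly detuned partials: "quasi-harmonic" rather than harmonic.
signal = amp * sum(np.sin(k * 1.002 * phase) / k for k in range(1, 5))

assert signal.shape == (3 * fs,) and np.isfinite(signal).all()
```

Described acoustically, this is all the stimulus contains; hearing it as 'an aeroplane' is the associative, everyday-listening step the experiment sets out to expose.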

4.1 Do You Hear What I Hear?

Do you hear what I hear? Taking into account the studies and research in the complex field of auditory perception and psychoacoustics discussed in this dissertation, the answer in all probability is a resounding no.

For instance, suppose you were in an old house on a dark, windy evening and you heard a noise. Would you perceive the noise as the sound of an intruder, or as the old timbers strained by the wind? Faced with such uncertainty, our perceptual system relies on the subconscious to make an association and identify the source of the sound, based on an auditory memory built from our individual acquired knowledge and experience to date (McAdams and Bigand 1993).

Crowder (1993) likewise argues that our ability to recognise a familiar tune, a natural sound such as a clap of thunder, or our mother tongue is not innate, but the result of retrieving information stored in our auditory memory, itself the product of our individual acquired knowledge and human experience.

As shown in Blacking (1973; 1975), whilst our initial perception of sound as music rests on the notes that our ears perceive, the number of structural interpretations of any single pattern or sequence of sound is immeasurable, as is our individual response to its structure. Both are influenced by our cultural background, our emotional experience, and the repertoire of associated sounds stored within our long-term memory (Forrester 2007).

Forrester (2007, p.16) convincingly suggests that 'anything can be music if the listener chooses to hear it in a particular way, and the opposite can also be true, nothing can be music unless it is heard as such'. This is suitably demonstrated by John Cage's experimental silent piece 4'33'', in which the pianist is instructed to approach the piano as if to play, and then sit in complete silence for exactly four minutes and thirty-three seconds. As a result, each listener experiences their own unique musical event, according to their individual spatial awareness, and determined by the acoustic quality of the sounds they hear for themselves as they wait for the piece to start. This again substantiates the argument that our perception of sound as music is determined by our individual experience, which in turn is based on our cultural discourse, acquired knowledge and learned musical behaviour.

4.2 CONCLUSION

We have seen major advances in recent years towards a better understanding of the neural correlates involved in musical perception. As Limb (2006) explains, music perception utilises neural substrates common to all forms of auditory processing, and engages a broad range of neural processes in the brain. Differences in musical ability, especially between musicians and non-musicians, continue to provide an endless number of variables with which to interpret patterns of brain activity, and ultimately to gain further insight into the relationship between the brain and music. According to Limb, future studies are likely to go beyond musical perception, addressing other areas of musical behaviour such as performance, composition and learning.

Hallam (2010a) also describes how recent advances in neuroscience have allowed us to gain a better understanding of how engaging in musical activities can influence other areas of our development. Whilst our knowledge of how the brain works is still in its infancy, we do know that the human brain contains approximately 100 billion neurons, each with around 1,000 connections to other neurons, and each with considerable processing capacity. When we learn, a process known as synaptogenesis takes place, which alters the number of synapses connecting neurons. As learning continues and particular activities or sounds are repeated, the relevant synapses and neurons fire repeatedly, indicating that an event is worth remembering and storing as a representation in memory.
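The strengthening-through-repeated-firing account in this paragraph is essentially the Hebbian idea ('cells that fire together wire together'). A deliberately minimal numerical caricature, not a biological model, and with an arbitrary learning rate, is:

```python
# Toy Hebbian update: a synaptic weight grows when pre- and post-synaptic
# units are repeatedly active together -- echoing the idea that repeated
# activity marks an event as "worth remembering". Illustrative only.
def hebbian(weight, pre, post, rate=0.1):
    return weight + rate * pre * post

w = 0.0
for _ in range(20):                   # repeated co-activation strengthens
    w = hebbian(w, pre=1.0, post=1.0)
assert abs(w - 2.0) < 1e-9            # the connection has grown

w_idle = 0.0
for _ in range(20):                   # no post-synaptic activity
    w_idle = hebbian(w_idle, pre=1.0, post=0.0)
assert w_idle == 0.0                  # the connection is unchanged
```

Only the repeatedly co-active pathway strengthens, which is the sense in which repetition makes a sound 'worth remembering'.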

According to Hodges and Sebald (2010), our genetic predispositions, together with our learning experiences, sculpt the brain from early childhood through to its adult configuration in a process called neural pruning, or reorganisation. Regular and extensive engagement with musical activities can induce cortical pruning, which can produce functional changes in how the brain processes information. Research has shown that when neural pruning occurs in early childhood, the resulting changes to the brain may become stable and permanently alter the way we process information later in life (Hallam 2010a; Hodges and Gruhn 2013). This would once again suggest that our individual perception of sound as music is determined by our learned behaviour and acquired knowledge.

Recent findings by Hyde et al. (2009) also show that musical training in early childhood leads to structural changes in the brain not normally found in typical brain development. Because no structural brain differences were found between the study groups before they began their musical training, they argue, these structural changes are induced by instrumental practice and are not the result of predetermined, innate predictors of musicality.

As McQueen and Varvarigou (2010) stress, there is no age limit for musical learning: it can happen at any time of life, and today much of it results from both informal and formal music making and education, accessible to all. This has certainly been the case in my own personal development, and I can now conclude that my own perception of sound as music has been greatly influenced by my recent educational studies and musical activities, which have broadened my musical behaviour and increased my musical knowledge and experience.

I also agree with Lewkowicz (2010) that our individual perception goes beyond the nature versus nurture dichotomy and is in fact the result of both endogenous and exogenous factors working in harmony. This substantiates my argument that whilst our innate auditory cognition facilitates our ability to hear the world around us, it is our learned musical behaviour and human experience from childhood that shape the brain into its final configuration, which in turn is used to analyse, synthesise and shape our individual and often unique perception of sound as music.

Bibliography

Amunts, K. (1997). 'Motor cortex and hand motor skills: Structural compliance in the human brain', Human Brain Mapping, 5, 206–215.

Bendor, D. (2011). 'Understanding how neural circuits measure pitch', The Journal of
Neuroscience, 31:9, 3141–3142.

Bizley, J.K., and Cohen, Y.E. (2013). 'The what, where and how of auditory-object perception', Nature Reviews Neuroscience, 14, 693–707.

Blacking, J. (1973). How Musical is Man? Seattle, WA: University of Washington Press.

Blacking, J. (1995). Music, Culture and Experience. London: University of Chicago Press.

Bregman, A.S. (1990). Auditory Scene Analysis: The Perceptual Organization of Sound. Cambridge, MA: MIT Press.

Bregman, A.S. (1993). 'Auditory scene analysis: hearing in complex environments', in S. McAdams and E. Bigand (eds.), Thinking in Sound: The Cognitive Psychology of Human Audition. Oxford: Oxford University Press.

Balkwill, L.-L., and Thompson, W.F. (1999). 'A Cross-Cultural Investigation of the Perception of Emotion in Music: Psychophysical and Cultural Cues', Music Perception, 17:1, 43–64.

Brownell, W.E. (1997). 'How the Ear Works: Nature's Solutions for Listening', Volta Review, 99:5, 9–28.

Cage, J. (1976). Silence: Lectures and writings by John Cage. Middletown, CT:
Wesleyan University Press.

Clarke, E. (2005). Ways of Listening: An Ecological Approach to the Perception of Musical Meaning. Oxford: Oxford University Press.

Clarke, E., Dibben, N., and Pitts, S. (2010). Music and Mind in Everyday Life.
Oxford: Oxford University Press.

Collins, A. (2013). 'Neuroscience meets music education: Exploring the implications of neural processing models on music education practice', International Journal of Music Education, 31:2, 217–231.

Cook, N. (1990). Music, Imagination, and Culture. Oxford: Clarendon Press.

Cross, I. (1999). 'AI and music perception', AISB Quarterly, 102, 12–25.

Crowder, R.G. (1993). 'Auditory memory', in S. McAdams and E. Bigand (eds.), Thinking in Sound: The Cognitive Psychology of Human Audition. Oxford: Oxford University Press.

Cuddy, L.L., and Lunny, C.A. (1995). 'Expectancies generated by melodic intervals: Perceptual judgements of continuity', Perception and Psychophysics, 57:4, 451–462.

Deutsch, D. (1995). Musical Illusions and Paradoxes. [CD]. La Jolla, CA: Philomel
Records.

Deutsch, D. (1999). The Psychology of Music. San Diego, CA: Academic Press.

Dibben, N. (2001). 'What Do We Hear, When We Hear Music? Music Perception and Musical Material', Musicae Scientiae, 5:2, 161–194.

Donaldson, J., and Duckert, L. (1991). 'Anatomy of the ear', in M. Paparella, D. Schumrick, J. Gluckman and W. Meyerhoff (eds.), Otolaryngology: Basic Sciences and Related Principles. Philadelphia, PA: Saunders.

Fastl, H., and Zwicker, E. (2006). Psychoacoustics: Facts and Models (Springer Series in Information Sciences). London: Springer-Verlag London Ltd.

Forrester, M.A. (2007). 'Auditory Perception and Sound as Event: Theorising Sound Imagery in Psychology', University of Kent Sound Journal [Online]. Available from: http://www.kent.ac.uk/arts/sound-journal/forrester001.html (Accessed 20 November 2013).

Gaser, C., and Schlaug, G. (2003). 'Brain Structures Differ between Musicians
and Non-Musicians', The Journal of Neuroscience, 23:27, 9240–9245.

Gaver, W.W. (1993a). 'What in the world do we hear? An ecological approach to auditory event perception', Ecological Psychology, 5:1, 1–29.

Gaver, W.W. (1993b). 'How do we hear in the world? Explorations in ecological acoustics', Ecological Psychology, 5:4, 285–313.

Gibson, J.J. (1966). The Senses Considered as Perceptual Systems. Oxford, UK: Houghton Mifflin.

Hallam, S. (2010a). 'The power of music: Its impact on the intellectual, social and
personal development of children and young people', International Journal of Music
Education, 28:3, 269–289.

Hallam, S. (2010b). 'Listening', in S. Hallam and A. Creech (eds.), Music Education in the 21st Century in the United Kingdom: Achievements, analysis and aspirations. London: Institute of Education, University of London, 53–68.

Hodges, D.A., and Sebald, D.C. (2011). Music in the Human Experience: An Introduction to Music Psychology. Oxon, UK: Routledge.

Hodges, D., and Gruhn, W. (2012). 'Implications of Neuroscience and brain research for music teaching and learning', in G.E. McPherson and G.W. Welch (eds.), The Oxford Handbook of Music Education, 2. Oxford: Oxford University Press, 205–223.

Huron, D. (2001). 'Tone and Voice: A Derivation of the Rules of Voice-leading from
Perceptual Principles', Music Perception, 19:1, 1–64.

Hyde, K., Lerch, J., Norton, A., Forgeard, M., Winner, E., Evans, A.C., and Schlaug, G. (2009). 'Musical training shapes structural brain development', The Journal of Neuroscience, 29:10, 3019–3025.

Kley, W. (1986). 'Alfonso Corti (1822–1876) - Discoverer of the sensory end organ of hearing in Würzburg', ORL J Otorhinolaryngol Relat Spec, 48:2, 61–67.

Lakatos, P., Musacchia, G., O'Connell, M.N., Falchier, A.Y., Javitt, D.C., and Schroeder, C.E. (2013). 'The Spectrotemporal Filter Mechanism of Auditory Selective Attention', Neuron, 77:4, 750–761.

Lewkowicz, D. (2010). 'Nature and nurture in perception', in E. Goldstein (ed.), Encyclopedia of Perception, 611–616.

Limb, C.J. (2006). 'Structural and Functional Neural Correlates of Music Perception', The Anatomical Record Part A, 288A, 435–446.

Lora, D. (1979). 'Musical Pattern Perception', College Music Symposium, 19:1, 166–
182.

McAdams, S. (1993). 'Recognition of sound sources and events', in S. McAdams and E. Bigand (eds.), Thinking in Sound: The Cognitive Psychology of Human Audition. Oxford: Oxford University Press.

McAdams, S., and Bigand, E. (eds.) (1993). Thinking in Sound: The Cognitive Psychology of Human Audition. Oxford: Oxford University Press.

McDermott, J.H., and Oxenham, A.J. (2008). 'Music perception, pitch, and the auditory system', Current Opinion in Neurobiology, 18:4, 452–463.

McQueen, H., and Varvarigou, M. (2010). 'Learning through life', in S. Hallam and A. Creech (eds.), Music Education in the 21st Century in the United Kingdom: Achievements, analysis and aspirations. London: Institute of Education, University of London, 159–175.

Meyer, L.B. (1956). Emotion and Meaning in Music. London: The University of Chicago Press, Ltd.

Meyer, L.B. (1957). 'Meaning in music and information theory', Journal of Aesthetics and Art Criticism, 15:4, 412–424.

Ouellette, F. (1973). Edgard Varèse: A Musical Biography. London: Calder and Boyars Ltd.

Pearce, M., and Rohrmeier, M. (2012). 'Music Cognition and the Cognitive Sciences',
Topics in Cognitive Science, 4, 468–484.

Peretz, I. (1990). 'Processing of local and global musical information by unilateral brain-damaged patients', Brain, 113:4, 1185–1205.

Pickles, J.O. (2008). An Introduction to the Physiology of Hearing (3rd edition). UK: Emerald Publishing Ltd.

Pitt, M.A. (1994). 'Perception of Pitch and Timbre by Musically Trained and Untrained Listeners', Journal of Experimental Psychology: Human Perception and Performance, 20:5, 976–986.

Pollard-Gott, L. (1983). 'Emergence of Thematic Concepts in Repeated Listening to Music', Cognitive Psychology, 15, 66–94.

Pressnitzer, D., and Hupé, J. (2006). 'Temporal Dynamics of Auditory and Visual Bistability Reveal Common Principles of Perceptual Organization', Current Biology, 16, 1351–1357.

Pressnitzer, D., Suied, C., and Shamma, S.A. (2011). 'Auditory Scene Analysis: The Sweet Music of Ambiguity', Frontiers in Human Neuroscience, 5:158.

Recanzone, G.H. (2011). 'Perception of auditory signals', The Year in Cognitive Neuroscience, 1224, 96–108.

Schlaug, G. (2001). 'The brain of musicians: A model for functional and structural adaptation', Ann NY Acad Sci, 930, 281–299.

Schneiderman, R. (2011). 'Can One Hear the Sound of a Theorem?', Notices of the
AMS, 58:7, 929–937.

Schwartz, D.A., and Purves, D. (2004). 'Pitch is determined by naturally occurring periodic sounds', Hearing Research, 194, 31–46. Available at: http://www.purveslab.net/publications/schwarrtz_purves_pitch.pdf (Accessed 26 November 2013).

Shamma, S.A. (1985). 'Speech processing in the auditory system II: Lateral inhibition and the central processing of speech evoked activity in the auditory nerve', The Journal of the Acoustical Society of America, 78:5, 1622–1632.

Stevens, S.S., and Warshofsky, F. (1965). Sound and Hearing. New York: Time, Inc.

Tan, S., Pfordresher, P., and Harré, R. (2010). Psychology of Music: From Sound to
Significance. East Sussex, UK: Psychology Press.

Warren, R.M. (2008). Auditory Perception: An Analysis and Synthesis. Cambridge: Cambridge University Press.

Zatorre, R.J., Perry, D.W., Beckett, C.A., Westbury, C.F., and Evans, A.C. (1998). 'Functional anatomy of musical processing in listeners with absolute pitch and relative pitch', Proc. Natl. Acad. Sci., 95, 3172–3177.

