





A brief guide to the technology behind
Computer Music

By Cakewalk Software
Addition by Et Cetera Distribution
CONTACTING
ET CETERA DISTRIBUTION
By Mail

Et Cetera Distribution
2 Bradwood Court
St. Crispin Way
Haslingden
LANCS
BB4 4PW

By Phone

From the UK
01706 228039

International
+44 1706 228039

By Fax

From the UK
01706 222989

International
+44 1706 222989

Internet
http://www.etcetera.co.uk

CONTENTS
Welcome to the World of DESKTOP MUSIC!
MIDI
History
What is MIDI
How Does it Work
What it Takes
MIDI Channels
MIDI Messages
Note On and Note Off Messages
Program Change Messages
Control Change Messages
System Messages
General MIDI
MIDI Hardware
Synthesizers
General Features of MIDI Hardware
Programmability
MIDI Samplers
Drum Machines
Guitar and Wind Controllers
MIDI Software
Notation Programs
Patch Editor/Librarians
Integrated Programs

Digital Audio
What is Digital Audio
Recording a Sound
Digital Audio Software
Sound Cards
Putting it All Together
Synchronization
Integrated Software
Summary
Glossary of MIDI and Digital Audio Terms
Appendix 1 General MIDI Specification
Appendix 2 Differences Between GM, GS and XG
Appendix 3 Selecting a PC
Appendix 4 More on Digital Audio
Appendix 5 Troubleshooting your Desktop Music PC

Welcome to the World of DESKTOP MUSIC!
Everywhere you turn, you're likely to hear music made with a computer. From the concert
hall to the local club, to radio, television and the movies, desktop music can be found all
around. Today, making music with a computer is easier and more exciting than ever, and the
capabilities you can have are equal in great part to those found in professional recording
studios. We've written this primer to help you get going. It looks at the two main music
applications found on the desktop today, MIDI and digital audio. Together, these two
applications account for the vast majority of sounds in the popular music world, and also
have far-reaching applications into many other areas of multimedia. Though it can't cover
every subject in great detail, we hope that the primer will give you a good basic
understanding of the topics, and will get you started in the right direction. We'll go through
the basics of MIDI first, then proceed to digital audio, then conclude with a discussion of
how you can combine the two. You'll also find an extensive glossary that covers all the terms
mentioned here and many more. There's a lot to cover, so let's get started.

MIDI
MIDI, or the Musical Instrument Digital Interface, is a means by which computers and
musical instruments can communicate. It's a language that allows you to give instructions to a
computer that it will then send to the synthesizer on your sound card, or to any other MIDI
devices that you may have available. MIDI is a great way to work with music and has very
powerful capabilities that will appeal to users of all levels. There are lots of unfamiliar terms
and concepts in the MIDI language, though, and it's easy to get frustrated if you don't have a
grasp of some basic ideas. The first section of this guide will help you understand what MIDI
is and teach you what it can do for you.

History
MIDI was born in the early 1980's when electronic instrument makers, primarily in the US
and Japan, recognized that their instruments must be able to talk to one another. After the
details were worked out, manufacturers soon began to include electronic circuitry in their
equipment that allowed them to understand the instructions MIDI used. Before long, nearly
every instrument maker in the world had adopted the standard, and though there have been
refinements and modifications to MIDI along the way, even the earliest MIDI instruments are
still capable enough to be used today. Since its adoption, MIDI has dramatically changed the
way music is created, performed and recorded.

What is MIDI
MIDI is a universally accepted standard for communicating information about a musical
performance by digital means. It encompasses both a hardware and software component, and
though it could be used for sending information about many other things, such as the control
of lighting in a theater, or even to control your coffee maker, it was developed to transmit
instructions about music. Like a music score, on which notes and other symbols are placed, a
MIDI transmission carries instructions that must be acted on by some device that can make
sound. While a clarinet or guitar player will interpret a written music score and produce the
sound required, it is most likely a synthesizer or drum machine that will react to MIDI
information. Fortunately for us, a complete set of these instructions can be captured and
stored by a computer, and several types of music software can be used to edit and alter them.

If the information is sent to several different MIDI devices, an entire electronic orchestra can
be at the musician's disposal. MIDI does not (except in rare cases) actually transmit sound
electronically; you couldn't connect a MIDI cable to a loudspeaker and expect to hear
anything (you'd probably damage both your speakers and your ears if you tried!). Instead, it is
the sound-producing capabilities of the synthesizer, whether it's on a sound card in your
computer or a stand-alone device, that will create the sound you hear.

How Does it Work


A MIDI transmission consists of a series of signals, called bits for binary digits, that pass
through a MIDI cable. These signals are electrical impulses, some strong, some weak, that
represent the 1s and 0s that make up the language of computers (any device that wants to
send or receive MIDI data must be equipped with a microprocessor, the ``brains'' of every
computer). When the impulses reach their destination, for example a synthesizer, the
operating system of the synthesizer interprets them as a series of instructions that usually
result in the production of a sound. This sound must be amplified, so the synthesizer will
typically be connected to an amplifier or mixer.
The bits in the MIDI transmission move at a fairly high rate of speed, 31,250 per second to be
exact, and are transmitted in a serial manner, meaning one after another. (A parallel
transmission contains a number of signals that pass at the same time). Not every bit represents
a different note or event, however. Bits are grouped into strings of 10 to create MIDI messages,
each of which conveys important information about some musical event (Fig. 1). (Each byte of a
MIDI message is actually eight bits long, though when it is transmitted, two extra bits, called
the start and stop bits, are added to the beginning and end of the byte, hence the 10-bit length
of most events.)

Fig 1. MIDI data is transmitted using a 10-bit packet that includes a start and stop bit.
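
If you'd like to see the arithmetic, the short Python sketch below (our own illustration, not
part of any MIDI product) works out how long a message takes to travel down the cable at
31,250 bits per second with 10 bits per byte.

MIDI_BAUD = 31_250          # bits per second on a MIDI cable
BITS_PER_BYTE_ON_WIRE = 10  # 8 data bits framed by a start bit and a stop bit

def transmission_time_ms(num_bytes):
    """Time needed to send num_bytes over MIDI, in milliseconds."""
    return num_bytes * BITS_PER_BYTE_ON_WIRE / MIDI_BAUD * 1000

# A complete Note On message is three bytes (status byte plus two data bytes),
# so it takes roughly a millisecond to transmit.
print(transmission_time_ms(3))   # 0.96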

Some MIDI messages detail specific aspects of a musical performance: what notes should be
heard; how loud they should be; what type of sound (trumpet, drum, flute) should play the
notes, etc.; while others are more general in nature. Together, MIDI messages represent an
entire language of musical actions, and can be used to convey all the details of a complete
symphony or a simple hymn.

What it Takes
In order to communicate in the language of MIDI, a device should be able to send and receive
MIDI information, though many common devices are created to do primarily one or the other.
A sound card in a computer, for example, must be given instructions that are generated by
some other source; it cannot create any MIDI messages on its own. Similarly, certain
electronic instruments known as tone or sound modules, are also only able to respond to
messages generated from the "outside." By contrast, a class of instruments called keyboard
controllers are intended for transmitting MIDI data only, and have no way to make sound.
Whatever their capabilities, all MIDI devices must contain a microprocessor, which is a
computer chip that deciphers and acts upon MIDI messages, as well as physical connections
called Ports, for sending and receiving data.

MIDI Channels
One of the great capabilities of MIDI is its ability to transmit messages to different electronic
musical instruments at the same time. Each instrument can distinguish which messages are
for it because the messages contain channel information, which acts like an address or
shipping label for the message. These MIDI channels are not physically separated, i.e., they
are not transmitted on separate strands of wire. Rather, the different channel numbers (1-16)
are contained in the beginning of the MIDI message, and determine whether an instrument or
device will respond to that message. In this way, messages can be directed to certain devices,
while other devices, which might also be receiving the information, will ignore them. Most
new instruments can be programmed to respond to any one or even all MIDI channels.
Because of this, the user has extensive control over how different instruments react to the
information that they receive.

Fig 2. Most MIDI devices can be set to receive on one or more MIDI channels.

There are certain classes of messages called system messages that don't use a channel, since
they are intended for all devices connected to the MIDI chain. Messages that deal with tuning
or timing information are in this category. There are also other cases where individual
messages do not need their own channel label, for example when all the notes of a melody are
to be played by a certain instrument on the same channel. In this case, a channel designation
can be set at the beginning of the melodic sequence and used for all messages in that series.
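
As a small illustration of how a channel travels with a message, the Python sketch below (an
assumption-laden toy, not code from any instrument) packs a channel number from 1 to 16 into
the low four bits of a channel message's first byte.

NOTE_ON = 0x90   # the high four bits, 1001, identify a Note On channel message

def status_byte(message_type, channel):
    """Combine a message type (high nibble) with a MIDI channel 1-16 (low nibble)."""
    if not 1 <= channel <= 16:
        raise ValueError("MIDI channels run from 1 to 16")
    return (message_type & 0xF0) | (channel - 1)   # channels count from 0 on the wire

print(hex(status_byte(NOTE_ON, 1)))    # 0x90 -> Note On, channel 1
print(hex(status_byte(NOTE_ON, 10)))   # 0x99 -> Note On, channel 10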

MIDI Messages
MIDI messages are the language of MIDI; they are the words MIDI uses in a transmission to
communicate the information that must pass from a source to a destination. There are many
types of MIDI messages, though they all fall into two categories: channel messages and
system messages. Channel messages are those that carry specific channel information, such
as those described above. These include messages such as what note an instrument should
play (called a Note Message), and Program Change messages, which tell the instrument what
sound it should make while playing the note. System messages, as described above, are either
intended for all the instruments currently connected to the transmitting device, or are meant
to convey information to a specific instrument that is general in nature and doesn't represent
specific details of a performance.
Most messages consist of at least two bytes. The first byte is called the status byte, which
tells the receiving device what type of message it is. Basically, it identifies the message and
prepares the device for a response. MIDI uses the numbers between 128 and 255 for this part
of the message. What follows is the actual data the device needs; these bytes are called data
bytes. They represent the details of the message; the values the instrument will use to perform
its task. MIDI uses the numbers 0 to 127 for data bytes. Some messages use only one data
byte, others need two, while some need none at all. We'll look at a few common messages to
see what type of information they contain.
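
To make the status/data distinction concrete, here is a tiny Python sketch of our own (not
part of the MIDI specification itself) that classifies the bytes of a message by that 0-127
versus 128-255 rule.

def is_status_byte(b):
    return b >= 128          # equivalently, the high bit (0x80) is set

def is_data_byte(b):
    return 0 <= b <= 127

# A three-byte Note On message: one status byte followed by two data bytes.
message = [0x90, 60, 100]
print([is_status_byte(b) for b in message])   # [True, False, False]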

Note On and Note Off Messages


Perhaps the most basic of all messages is the pair called Note On and Note Off. A Note On
message is transmitted when a key is pressed on a keyboard, and a Note Off is transmitted
when it is released. When a synthesizer receives a Note On message, it looks immediately for
additional information, specifically, a data byte that details what note it should play and
another that specifies how loud it should play it. MIDI has only 128 different numbers for
designating pitch and loudness (or velocity) levels, so immediately after the Note On message
is sent, a data byte representing a number between 0 and 127 will appear for the Note
Number, followed by another that specifies the velocity level for that note. The note will
continue to play until the Note Off message is received, and it too must contain note and
velocity numbers. Note and velocity details must be included with the Note Off message
because it is possible that a synthesizer will be playing several notes when the Note Off is
received. If it received the Note Off without a specific Note number, it wouldn't know which
note to stop playing. The Velocity number that appears with the Note Off is not quite as
important; in fact, some synthesizers simply ignore it. Nevertheless, it will be sent as part of
the data. The Note On / Note Off combination constitutes the most common pair of messages
in any MIDI transmission, though there are many other parts of the transmission that we need
to explore (Figure 3).

Fig 3. The MIDI message Note On is followed by two data bytes, as is the Note Off message.
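
Putting the last two sections together, the hedged Python sketch below builds the raw bytes of
a Note On / Note Off pair for middle C (note number 60); the velocity values here are our own
arbitrary choices.

def note_on(channel, note, velocity):
    return bytes([0x90 | (channel - 1), note, velocity])

def note_off(channel, note, velocity=64):
    # the velocity byte is sent even though many synthesizers ignore it
    return bytes([0x80 | (channel - 1), note, velocity])

print(note_on(1, 60, 100).hex())   # '903c64' -> start middle C at velocity 100
print(note_off(1, 60).hex())       # '803c40' -> stop middle C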

Program Change Messages
When a synthesizer is first turned on, it will load one of its sounds into its RAM (random
access memory) and prepare itself to receive note messages. These sounds are permanently
stored in the synthesizer's ROM (read only memory) and are, in essence, individual computer
programs that tell the device how to create the required sound. When the computer is directed
to load a new sound, it must change the program it is currently running so it will be ready to
play notes using the new tone. Hence, the MIDI message that tells the device what sound to
make is called a Program Change message. Program Changes are followed by a single data
byte.
MIDI devices use two different numbering schemes to catalog their programs, either 0-127 or
1-128, and it is important to know which scheme the different devices you will be using
employ. A recent standardization of this numbering scheme, called the General MIDI
specification, states that the numbers will run from 1-128, and also specifies which sounds
will have what numbers. We'll take a closer look at General MIDI at the end of this section.
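
As a sketch of how the two numbering schemes line up (our own illustration; check your
instrument's manual for which convention it uses), the data byte sent down the wire is always
0-127, even when the front panel or the manual counts programs from 1 to 128.

def program_change(channel, displayed_number, one_based=True):
    """Build a Program Change message from a program number as it is displayed."""
    data = displayed_number - 1 if one_based else displayed_number
    if not 0 <= data <= 127:
        raise ValueError("the Program Change data byte must be 0-127")
    return bytes([0xC0 | (channel - 1), data])

# General MIDI program 1 is Acoustic Grand Piano; it travels as data byte 0.
print(program_change(1, 1).hex())   # 'c000'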

Control Change Messages


Control Change messages are used to represent some change in the status of a physical
control on a device. These controls are the foot pedals, volume sliders, modulation wheels,
and similar peripherals found on most electronic instruments. Some control messages act like
simple on and off switches; for example, the sustain pedal on a synthesizer can only be down
or up, so a single on-or-off value is enough to specify which state the pedal is in. Other
controls are continuously changing and need to be represented by more
detailed data called continuous controller data. For example, if you move the pitch wheel on a
synthesizer very slowly from its resting position to one extreme up or down, MIDI transmits
data representing the wheel's position at numerous points along its path. In this case, the data
must be very high in resolution, so 14-bit (2-byte) messages are used. This provides a total of
16,384 values to track the movement of the wheel.
Controller data can be used for many different functions in MIDI, even multiple functions at
the same time. For this reason, the different controller data streams are numbered from 0 to
127. Some of these controller numbers have become standardized to control certain tasks, for
example, controller 10 (often abbreviated cc 10) is most often used for panning between left
and right speakers, while controller 7 (cc 7) is typically used for volume changes. Many
synthesizers allow the user to change the effect controller data will have. When this is
possible, any controller could theoretically be used to control any aspect of a sound that
changes over time.
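
The Python sketch below is a rough illustration (not lifted from any device's documentation) of
the two resolutions just described: an ordinary single-byte controller value, and a
high-resolution value assembled from two 7-bit data bytes, which is where the 16,384 steps
mentioned above come from.

def control_change(channel, controller, value):
    """e.g. controller 7 is commonly channel volume, controller 10 is pan."""
    return bytes([0xB0 | (channel - 1), controller, value])

def fourteen_bit(msb, lsb):
    """Combine two 7-bit data bytes into a single value from 0 to 16,383."""
    return (msb << 7) | lsb

print(control_change(1, 7, 100).hex())   # 'b00764' -> set channel volume to 100
print(fourteen_bit(127, 127))            # 16383, the largest 14-bit value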

System Messages
One final category of MIDI messages is called system messages. There are several types of
system messages, but they all share the characteristic of transmitting information without a
channel assignment. As a result, all instruments that receive messages of this type would act
upon them, though one particular type of system message, called system exclusive, is
intended for communicating only with a device or devices made by a specific manufacturer.
System exclusive is often used when a musician wants to transmit large amounts of data to a
specific synthesizer or receive data from the device. Because all major instrument makers
have an ID number (#7 for Kurzweil devices, #67 for Yamaha, etc.), a message can be
"addressed" to one device and all other receiving instruments will see it, but ignore it. For
example, all the instructions specifying how a synthesizer makes its sounds could be
``dumped'' from the device and stored on a computer. Users could then trade custom libraries
of sounds, or reload all the original factory settings if their equipment's memory were wiped
out. Moreover, a whole new setup of sounds could be sent by a computer just before actual
note data was transmitted, thereby getting the instrument properly configured before the
music starts.
Other system messages include timing messages, which provide information about the tempo
of the music; and Song Position messages, which indicate where a recorded MIDI sequence
should begin playback. These last messages are particularly useful with synthesizers that
contain built-in sequencing capabilities.
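
To give a feel for what a system exclusive transmission looks like on the wire, here is a
hedged Python sketch; the payload bytes are invented placeholders, and a real dump follows
whatever format the manufacturer documents.

SYSEX_START, SYSEX_END = 0xF0, 0xF7

def sysex(manufacturer_id, payload):
    if any(b > 127 for b in payload):
        raise ValueError("sysex payload bytes must be 0-127")
    return bytes([SYSEX_START, manufacturer_id] + list(payload) + [SYSEX_END])

# A message addressed to manufacturer ID 0x43 (67 decimal, Yamaha); every other
# instrument on the chain will see these bytes and simply ignore them.
print(sysex(0x43, [0x00, 0x01, 0x02]).hex())   # 'f043000102f7'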

General MIDI
Before General MIDI (GM) was popularized, there was no consistency in the way
manufacturers numbered the sounds in their instruments, so that on one device program #1
might be a piano, while on another, it might be a flute. Because MIDI data files (or
sequences) often contain program change instructions, i.e., the actual specifications for which
sound an instrument should use to perform each layer of the music, it was unlikely that music
created for one synthesizer would sound correct when performed by another. With the
adoption of General MIDI, files that use its numbering scheme are now ``portable,'' meaning
they will sound identical, or nearly so, when played by different instruments. This assumes,
of course, that the instruments conform to the GM specification (Appendix 1).

In addition to a standardized assignment of program change numbers, General MIDI includes
several other guidelines, the most important of which is the use of Channel 10 for drum
sounds. It also provides a Drum Map, which is the fixed assignment of certain drum sounds
to specific MIDI note numbers (Appendix 1). For example, sending middle C, MIDI note
#60, will trigger a high bongo sound on the receiving General MIDI instrument. A ``C'' one
octave below, note #48, will produce a Hi-Mid tom, and so on. This mapping scheme
provides yet another layer of standardization, thereby ensuring that MIDI sequences can be
transported among different studios and desktop systems around the world.
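
A sketch of the drum-map idea in Python, using only the two assignments named above (the
complete map appears in Appendix 1):

GM_DRUM_MAP = {
    60: "Hi Bongo",      # middle C
    48: "Hi-Mid Tom",    # the C one octave below middle C
}

def drum_for_note(note_number):
    return GM_DRUM_MAP.get(note_number, "(see the full map in Appendix 1)")

print(drum_for_note(60))   # Hi Bongo
print(drum_for_note(48))   # Hi-Mid Tom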

MIDI Hardware
Different MIDI devices have different capabilities and functions. We'll look closely at the
various options on a traditional synthesizer first, then explore some of the other options that
are found on different types of instruments.

Synthesizers
When you first look at a synthesizer, you are likely to see a piano-style keyboard, one or more
rows of buttons and perhaps a few "sliders" or "wheels" (Figure 4).

Fig 4. A MIDI synthesizer (with integrated keyboard controller).

Inside the synthesizer are the sound-producing components, the actual brains of the unit, that
respond to messages received when a key is pressed on the keyboard or when a MIDI
message is sent from some other source. The keyboard part of the unit is called a controller,
which is a term used for any MIDI device that can initiate an action. There are other types of
controllers, including those in the form of a guitar (guitar controllers), drum machines (drum
controllers), and even those that look and work like wind instruments (wind controllers). It's
possible to buy a controller that does not include the capability to make any sound, and it's
just as easy to buy the sound producing components alone, which are devices commonly
known as tone or sound modules. In essence, the devices we commonly refer to as
``synthesizers'' are actually tone modules with integrated keyboard controllers attached.
Keyboard synthesizers are by far the most common MIDI devices, although the tone modules
included with nearly all sound cards for the PC are also extremely common. Like any device
that wants to join into a MIDI conversation, synthesizers are equipped with the proper
connectors that allow MIDI information to pass in, and sometimes out. These connectors,
called MIDI ports, are usually grouped in threes: MIDI In, MIDI Out and MIDI Thru. Figure
5 below shows a standard arrangement of the Ports on the back of a synthesizer, and also
shows the end of a MIDI cable, which connects the sending and receiving devices. Unlike
single-ended audio plugs (guitar cords and stereo RCA plugs), MIDI cables and ports use a
5-pin DIN connection. The MIDI communication does not have to be two-way; for example, the
MIDI input of device one can be connected to the MIDI Out of device two, but not vice versa.
The MIDI Thru port is used to relay the information that is sent to a device on to yet another
unit without altering it in any way. By using this port, many MIDI instruments can be chained
together, allowing a single controller to transmit to numerous different sound-producing
devices simultaneously.

Fig 5. Three standard MIDI ports and a MIDI cable.

To connect a MIDI synthesizer to a computer, the computer must have a MIDI interface,
which typically contains the same three MIDI ports described above. Like the synthesizer, the
MIDI interface converts the electrical signals it receives to the proper format needed by the
computer. The MIDI interface might be a separate card that installs into a free PC expansion
card slot; it could be a stand-alone, external unit that attaches to the PC's parallel or serial
port; or it might be an integrated part of a sound card. Some sound cards use proprietary
connectors for their MIDI hookup and require an optional MIDI adapter for connections to
external MIDI units. On the Macintosh, the interface is almost always external, and typically
connects to either the modem or printer port.

General Features of MIDI Hardware


Keyboard and other MIDI controllers share many common features. Most have the ability to
detect how hard a key was pressed. This feature, called Velocity Sensitivity, is used to
determine a note's loudness, or amplitude. Like other controllers, a keyboard controller
typically works by constantly watching the position of every key on the keyboard. An optical
sensor is used to determine whether a key is up in its at-rest position, or down. Then,
whenever a key is pressed, the instrument knows exactly how long it takes for the key to go
down, and it assigns a value to that note by measuring the time it took to go from its starting
point to the bottom of its travel. This value is called velocity, meaning "speed," but
actually determines how loud the note will be played. An instrument that has the ability to
measure this speed is said to be velocity sensitive.
Synthesizers and tone modules have many other features, including the ability to play many
notes at once. This capability, called polyphony (for "many sounds"), usually ranges from
eight notes, up to a maximum of 32, or in rare cases, 64. (Musicians usually use the term
Voices when describing the polyphonic capabilities of an instrument, so "8-voice
polyphonic" means the device can play eight notes at once.) When a device receives a new
message after it has already reached its maximum, it must decide how to allocate its
resources. For example, it might choose to drop the oldest note it is playing, or maybe it
would drop the lowest or softest note. Some instruments will just ignore the new note that
puts it over the top. In a professional synthesizer, this allocation might be programmable by
the user, though in many cases it is fixed by the manufacturer.
It's important to keep in mind that certain sounds on a tone module might use up more than
one voice. For example, even a simple flute sound could require two notes (or voices) of the
available polyphony, while a complex, evolving sound, such as those often intended for use
as movie soundtrack backgrounds, might require four or more voices. Playing a four note
chord using a sound that requires 4 voices could, in theory, use the entire polyphonic
capability of a 16-voice synthesizer. Other sounds, such as drum sounds, typically use only a
single note of polyphony, and are not likely to be needed for playing chords!
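
One simple way to picture voice allocation is the little Python sketch below; it models only
the "drop the oldest note" policy mentioned above, and real instruments may behave quite
differently.

class VoiceAllocator:
    def __init__(self, max_voices=8):       # an "8-voice polyphonic" instrument
        self.max_voices = max_voices
        self.active = []                     # sounding notes, oldest first

    def note_on(self, note):
        if len(self.active) >= self.max_voices:
            stolen = self.active.pop(0)      # steal the oldest voice
            print("dropping note", stolen, "to make room")
        self.active.append(note)

    def note_off(self, note):
        if note in self.active:
            self.active.remove(note)

synth = VoiceAllocator(max_voices=2)
for n in (60, 64, 67):                       # the third note steals the first
    synth.note_on(n)
print(synth.active)                          # [64, 67]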

When a synthesizer can make more than one type of sound at the same time, it is called
multitimbral. This term comes from the French word timbre (pronounced ``tam-ber''), which
means tone or sound color. If a synthesizer can make the sound of a trumpet, flute, clarinet
and oboe simultaneously, it is clearly multitimbral. How many different timbres can be used
at once is a significant factor in determining the usefulness of a tone module for one's music;
for example if you plan to write your next symphony using a single synthesizer, you should
be sure it is at least 16-part multitimbral and has 24 or more voices of polyphony. For choral
music, four-part multitimbral and 8-voice polyphony might be adequate, but obviously the
more the merrier.
One final basic feature of a MIDI device is its ability to respond to instructions on several
different MIDI channels at once. This subject was mentioned earlier, but to review, MIDI can
keep all the different layers of a musical performance separate from one another by
transmitting each layer on its own channel, so an instrument should be able to handle the full
"bandwidth" of a MIDI transmission, which is 16 different channels. Most instruments allow
the user to set the Reception Mode of a MIDI device, which determines how it will respond
to the messages it receives. The most common (and useful!) Reception Mode is called OMNI
OFF / POLY, which tells the device to distinguish what channel messages are on (OMNI
OFF), and play back several notes at once if requested to do so (POLY, from polyphonic).
Many older synths were limited to other reception modes, which kept them from
distinguishing the different channels of a transmission. For example, if OMNI were ON, the
device might play all messages without regard for their channel status. In nearly all recent
devices, Reception Mode is selectable, though OMNI OFF/POLY is by far the most common
Mode in use today.
Most synthesizers have the ability to assign one sound to play over part of the keyboard, and
another sound to play over the rest. This is called keyboard splitting or zoning, and would
allow you, for example, to play a bass guitar sound with the left hand on the low notes, and a
piano sound with the right hand on the high notes (Figure 6). Synthesizers, by the way,
typically offer keyboards that range from as few as four octaves, or forty-eight notes, to full,
traditional piano lengths of just over seven octaves, or eighty-eight notes.

Fig 6. A MIDI keyboard split into two zones.
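
A keyboard split is easy to model; the sketch below is our own toy example, with the split
point and the two patch names chosen purely for illustration.

SPLIT_POINT = 60   # middle C

def patch_for_note(note_number):
    return "Bass Guitar" if note_number < SPLIT_POINT else "Piano"

print(patch_for_note(40))   # Bass Guitar -> low notes, left hand
print(patch_for_note(72))   # Piano       -> high notes, right hand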

Programmability
There is a wide range of programming options available on synthesizers today, but most have
capabilities that allow the user to design sounds with great precision. Normally, you can layer
different sounds, combining a flute with a cymbal for example, or merging the beginning
portion of a trumpet with the sustaining segment of a cello. Numerous filters are also
typically available, which, like the tone controls on a stereo system, let you raise or lower a
sound's treble or bass response. Another useful programming feature is an envelope
generator. Because natural sounds do not remain static throughout their duration (the piano,
for example, begins to fade away or decay immediately after a note is struck), these generators
allow the user to change the way a sound evolves over time. Normally, the characteristic that
changes most is the sound's amplitude (loudness), but envelopes might also be applied to the
sound's pitch or even timbre. The shape of the envelope is usually alterable, which allows the
user to determine how long it takes for the sound to move through each of its ``segments.'' In
the example below, the sound will move gradually to its peak level during the attack segment,
get a bit softer during the decay, maintain a steady level over the sustain segment, then slowly
fade during the release (Figure 7).

Fig 7. The four segments of an amplitude envelope.
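
If you want to see the four segments as numbers, the Python sketch below traces an amplitude
envelope through attack, decay, sustain and release; all the segment lengths and levels here
are made-up values, since every synthesizer lets you reshape them.

def adsr_level(t, attack=0.1, decay=0.2, sustain_level=0.7, sustain_time=0.5, release=0.4):
    """Amplitude (0.0-1.0) at time t seconds after the note starts."""
    if t < attack:                                    # rise to the peak
        return t / attack
    if t < attack + decay:                            # fall to the sustain level
        return 1.0 - (1.0 - sustain_level) * (t - attack) / decay
    if t < attack + decay + sustain_time:             # hold steady
        return sustain_level
    t_release = t - (attack + decay + sustain_time)   # fade away
    return max(0.0, sustain_level * (1.0 - t_release / release))

print([round(adsr_level(t), 2) for t in (0.05, 0.2, 0.5, 1.0)])   # [0.5, 0.85, 0.7, 0.35]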

MIDI Samplers
MIDI samplers are electronic devices that allow you to take recorded audio, turn it into a
MIDI instrument, manipulate it, and play it back using MIDI commands. In effect, they allow
the entire range of acoustic sounds to be employed in a
musical composition. Under the control of MIDI messages, dog barks, train whistles, car
horns and more can be integrated alongside violins and guitars, but samplers can be used for
a lot more than just sound effects. Because of their extensive capabilities, samplers are used
to create entire original compositions, using exacting reproductions of traditional instruments.
Composers can preview their orchestral works and arrangers can listen to elaborate horn
arrangements before committing the music to notation. In addition to these tasks, an entire
musical style has evolved that uses samplers to store short phrases from existing recordings
that are then used as the basis for entirely new musical compositions. While some of these
capabilities are possible using traditional synthesizers, samplers expand the musician's palette
of sounds enormously.
All samplers contain sample RAM that is used to hold digital recordings while the sampler
processes them and plays them back. The amount of RAM determines the total length of
recording time available to the unit. For example, if a sampler were to record sound using the
quality of a commercial CD, it would require just over 10 MEGS (10,000,000 bytes) of RAM
to hold just one minute of stereo or two minutes of monophonic sound. Many professional
standalone samplers contain hard disk drives for more permanent storage of recordings, while
some also include floppy drives for moving sounds into and out of the unit. Besides the
standard audio outputs used to record and play back, some samplers provide direct digital
connections so sound can be moved back and forth to a digital tape recorder (DAT) or
computer. Today, in addition to standalone samplers, there are PC sound cards with sampling
capabilities, such as the Turtle Beach Pinnacle and the MaxiSound Home Studio Pro.
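
The RAM figure quoted above is easy to verify; the short sketch below simply does the
arithmetic for CD-quality sound (16-bit samples at 44,100 samples per second).

SAMPLE_RATE = 44_100     # samples per second, CD quality
BYTES_PER_SAMPLE = 2     # 16-bit resolution

def sample_ram_bytes(seconds, channels=2):
    return SAMPLE_RATE * BYTES_PER_SAMPLE * channels * seconds

print(sample_ram_bytes(60))                # 10,584,000 bytes for one minute of stereo
print(sample_ram_bytes(120, channels=1))   # the same RAM holds two minutes of mono
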
Among the many features of most samplers, one particular favorite is looping. This function
allows the sampler to play a short segment of sound repeatedly. Using looping, small
recordings can be played back for long periods of time, saving RAM and storage resources.
When a sound loops, it merely plays through to the end, then restarts at the beginning or at
some other point while the key is being held down. Looping works particularly well with
string and wind sounds, but some sounds cannot be sustained; drum hits and other short
sounds with sharp attacks, for example, simply do not loop well.
Among the other techniques samplers provide are filtering; crossfading, in which one sound
fades out while another fades in; and pitch shifting, where the original pitch of a sampled
sound is raised or lowered. Pitch shifting is very useful when you need to change or transpose
the pitch of a sound, perhaps to change the key of your music. Unfortunately, after a certain
amount of shifting in either direction, the sound will no longer resemble the original. It is
very common for a sampler to use a technique known as multisampling, in which the original
sound is recorded at numerous different pitch levels, and each individual sample is assigned
to playback over a different range of the keyboard. In this way, no single sample has to be
shifted beyond a small amount.
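
The arithmetic behind that shifting limit is simple; the sketch below (our own illustration)
shows the playback-rate ratio for a shift of n semitones, which is why a large shift quickly
stops sounding like the original.

def playback_ratio(semitones):
    """How much faster (or slower) a sample must play to shift by n semitones."""
    return 2 ** (semitones / 12)

print(round(playback_ratio(2), 3))    # 1.122 -> a whole step up, barely noticeable
print(round(playback_ratio(12), 3))   # 2.0   -> a full octave up, clearly unnatural
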
Samplers provide numerous other manipulation techniques, some of which will be mentioned
in the section on digital audio. These include time compression/expansion, which is the
ability to stretch or shrink sounds without changing their pitch; amplitude modulation, a
technique used to change the sample's amplitude (loudness) at a variable rate; and playing
back a sound in reverse. In all, samplers offer versatile options to the musician for shaping
and crafting their music.

Drum Machines
One final MIDI device is the drum machine and a related instrument, the drum controller
(Figure 8). The drum machine, one of the most common of all MIDI peripherals, typically
contains buttons or ``pads'' for playing drum sounds ``live,'' and internal software to generate
or store MIDI data. The sounds in the drum machine are most often sampled drums, i.e.,
digital recordings of actual acoustic drums. Unlike a sampler, you can rarely add your own
sounds to such devices; instead you are limited to playback of the internal sounds, perhaps
with some minor alterations.

Fig 8. A drum machine contains recordings of acoustic drums and can be played by pressing its buttons or sending it MIDI commands.

While the buttons on a typical drum machine can be used to play the instrument in ``real-
time,'' you can also record any pattern of button presses right into the device. When
requested, the drum machine will then play back the patterns you've created. In this way, one
can create elaborate drum parts ``note by note,'' then play them back repeatedly and at any
tempo required. Drum machines also typically include preset patterns, providing very realistic
drum parts that musicians who don't play the instrument can use in their own productions.
Unfortunately, many of these patterns sound ``canned,'' and their overuse has created
somewhat of a backlash against this type of device. Creative drum programming by capable
musicians can, however, produce excellent results.

Guitar and Wind Controllers


While the vast majority of MIDI music emanates from keyboard controllers and synthesizers,
instrument makers have come to realize that many other instrumentalists would like to share
in the joy of MIDI. For this reason, various types of guitar and wind controllers have been
created to provide a familiar performance interface for players of these instruments. While
they typically produce no sound on their own, these instruments can be connected to tone
modules or samplers, which then generate sound under their control.
Shaped and played like traditional six-string guitars, MIDI guitars contain small sensors
that detect the player's finger position on the strings, as well as the amount of pressure
applied to the string by the pick. Most can also track movements of the string and convert this
bending into continuous controller data. Some guitar controllers even allow the user to assign
a different MIDI channel to every string, thereby offering the ability to play up to six different
sounds on the receiving device simultaneously. Wind controllers can easily detect which keys
have been closed by the player, but must make far more difficult measurements of the amount
of air pressure passing through the device's mouthpiece. Typically, a sensor in the mouthpiece
is used for such measurements, and over the years, the accuracy of wind controllers has
improved dramatically. Because a single MIDI note can be used to generate an entire chord
(if the receiving synthesizer is so programmed), musicians who have spent most of their lives
playing monophonic (single-note) instruments now have the ability to play elaborate chordal
textures.
One final controlling device is the pitch-to-MIDI converter. This somewhat uncommon
device is attached to a traditional acoustic wind instrument such as a saxophone or trumpet
and converts the acoustic tones the instrument generates into MIDI notes. The Pitch-to-MIDI
converter offers perhaps the best of both worlds, in that a musician can use his or her favorite
instrument to create a performance that combines ``natural'' and synthetic sounds.
Unfortunately, the conversion is not always accurate, and these devices still must undergo
some refinements before they will be completely reliable. Nevertheless, converters are
becoming more common, and offer musicians, including singers, tremendous expressive
possibilities in a MIDI performance.

MIDI Software
There are many categories of MIDI software available. Perhaps the most common is the
MIDI sequencer, which is a type of program that can record, edit and play back MIDI data.
Sequencers, which originally were often found as stand-alone hardware devices, have very
powerful capabilities to transform MIDI information, and today represent a very complex and
mature category of software. Sequencers share many basic features, and allow the user to put
the strength of a personal computer to the task of making music.
Like a multi-track tape recorder, sequencers most often arrange multiple layers of MIDI
information into tracks. Each track represents an independent melody or part of the music.
The number of tracks in a sequencer can range from as few as sixteen in an entry-level
program, to hundreds, or even thousands in others. Each track can be used to hold any type of
MIDI data, and there is no single standard for how this information should be arranged.
Rather, the best sequencers give the user a high degree of flexibility in organizing the various
types of information their music requires.
Figure 9 below shows the main screen of a popular Windows-based sequencer, Cakewalk Pro
Audio(TM). Along the left side of the figure you can see the various tracks; the first sixteen
tracks are shown here, but different screen resolutions would allow you to see more or fewer
at once on your own monitor. Each track is assigned to a specific MIDI channel, though you
can see that several of the tracks have the same setting. This indicates that the events on all of
these tracks will go to the same destination. Most sequencers allow you to put information for
several channels on the same track, though this could make editing the information somewhat
more difficult. The right half of the screen represents the actual data, which is organized into
segments called clips in Cakewalk.

Fig 9. The Track View of Cakewalk Pro Audio.

Sequencers typically provide different ways to view and edit your data, and it's important to
understand the function of each of a program's work areas. Usually, one will find a Piano
Roll view, where individual or small groups of notes can be altered; a Track Overview, where
entire measures or even whole tracks can be manipulated; a Notation or Staff view, where the
music is represented using standard music notation; and an Event View, which is a text-based
list of all the events in one or more tracks. The editing options that such programs provide are
numerous and vary greatly among programs, but typically, one can cut, copy and paste data,
as well as apply extensive modifications to the music, such as raising or lowering the pitch
and volume characteristics, and expanding or compressing the amount of time a section takes
to play back.
Some programs also provide features that can assist the user with the operation of his/her
MIDI hardware. It is not uncommon to find sequencers that will list all the different sounds in
your synthesizer, allowing you to work with specific names rather than the less familiar patch
numbers. Some will also import or export system exclusive (Sysx) data to a synthesizer,
meaning you can load an entire setup of sounds before the first note is played. While they
don't offer all the editing capabilities of full-blown patch editors (discussed later), these patch
librarian features are very useful, especially in settings where there are two or more MIDI
devices.
Overall, sequencers are the most common of all MIDI software programs, and provide
tremendous power that can be applied to the production of music.

Notation Programs
Another category of MIDI software is the notation, or transcription program (Figure 10).
Because standard notation remains the most common way to represent music, an entire
market has been established for programs that let musicians work ``the old fashioned way.''
Typically, these programs provide huge libraries of musical symbols that can be entered onto
the page to produce professional looking scores. Some even allow the user to create new
symbols. Sophisticated page layout features, often comparable to high-end desktop
publishing programs, are also included in the more advanced notation software, and all
programs of this type offer printing options.

Fig 10. A view of standard music notation.

Most programs allow ``point and click'' entry as well as real-time transcription from a MIDI
keyboard. With real-time entry, musicians can play their music directly into the program and
see it appear instantly on screen as notation. Once the notes are recorded, numerous editing
capabilities, such as the cut, copy and paste features of a word processor, are available. Other
editing functions needed by musicians, such as the ability to shift or "transpose" the music up
or down, are also commonly found.

Patch Editor/Librarians
Because of the complexity of many of today's synthesizers, an entire software niche has
developed to facilitate the control of such devices from a computer. Patch editors typically
display all of a synthesizer's programming controls on one or two computer screens, allowing
the user to ``see into'' the synthesizer and control it directly from the computer keyboard
(Figure 11). Rather than spend many minutes pushing buttons, trying to locate a particular
screen within the synthesizer's own display, the patch editor lays all the device's parameters
before the user, and allows him or her to make extensive changes with the sweep of the
mouse or press of a few keys. Changes made on the computer screen are typically sent
immediately to the device, making it possible to preview them before any permanent changes
are made.

Fig 11. Patch editors provide access to a synth's controls from a computer.

Stand-alone librarian programs, or those usually included with the patch editor, simply store
all the device's sounds and make them available for quick searching or sorting. Typically, a
librarian will request a ``dump'' from the device via Sysx, then show the user the sounds
currently available on the instrument. This listing can then be stored on a computer and
reloaded into the device if needed. Not only are the names of the patches stored, but also the
specifications as to how the sounds are created. In other words, if the internal memory of a
synthesizer were wiped out, the librarian could send a list of the original factory programs
back to the synthesizer and return it to its original status.
Librarians are also commonly employed when users owning the same equipment wish to
share programs they have created. Simply load the sounds into the librarian and save them on
a floppy disk, then transport them to another computer anywhere in the world.

Integrated Programs
An interesting trend in MIDI software today is the appearance of integrated programs that
combine many of the features of the programs listed above. Like their counterpart in the
business world, the ``desktop suite,'' these integrated programs offer professional sequencing,
notation, patch librarian, and in some cases, digital audio functions in an all-in-one
environment. This trend shows tremendous promise, and has far-reaching implications for the
user. It will be exciting to see how far it develops.

Digital Audio
One of the most exciting developments in desktop music in recent years is the ability to work
with digital audio on a home PC. Long the province of research institutions and recording
studios, digital audio editing software has become nearly commonplace on the desktop, and is
now among the most accessible and powerful types of computer software available.
Recording, editing, and playing digital audio on a home computer gives the user considerable
power to design and produce new sounds, and to edit and craft one's own music with great
precision. Digital audio can be a highly technical and elusive concept though, and we'll try to
make the terms and concepts perfectly clear.

What is Digital Audio


Digital audio is a numeric representation of sound; it is sound stored as numbers. In order to
understand what the numbers mean, we need to review some of the basic principles of
acoustics, the study of sound.
Sound is produced when some type of motion produced by a vibrating body disturbs
molecules in the air. This body, which might be a guitar string, human vocal cord or garbage
can, is set into motion because energy is applied to it. The guitar string is struck by a pick or
finger, while the garbage can is hit perhaps by a hammer, but the basic result is the same:
they both begin to vibrate. The rate and amount of vibration is critical to our perception of the
sound. If it is not fast enough or strong enough, we won't hear it. But if the vibration occurs at
least twenty times a second and the molecules in the air are moved enough (a more difficult
phenomenon to measure), then we will hear sound. To understand the process better, let's take
a closer look at a guitar string.
When the pick hits the string, the entire string moves back and forth at a certain rate of speed
(Figure 12). This speed is called the frequency of the vibration. Because a single back and
forth motion is called a cycle, we use a measure of frequency called cycles per second, or cps.
This measure is also known as hertz, abbreviated Hz. Like that of other vibrating bodies, the
frequency of the string is often very high, so it is useful to use the abbreviation kHz to measure
frequency in thousands of vibrations per second. A frequency of 2 kHz then, signifies a
frequency of 2,000 cycles per second, meaning the string goes through its back and forth
motion 2,000 times per second. The actual distance the string moves is called its
displacement, and is proportional to how hard we pluck it. The actual measurement used for
this distance is not particularly important for our purposes, but we will often refer to the
amplitude or strength of the vibration.

Fig 12. A plucked string in motion.

As the string moves, it displaces the molecules around it in a wave-like pattern, i.e., while the
string moves back and forth, the molecules also move back and forth. The movement of the
molecules is propagated in the air; individual molecules bump against molecules next to
them, which in turn bump their neighbors, etc., until the molecules next to our ears are set in
motion. At the end of the chain, these molecules move our eardrum in a pattern analogous to
the original string movement, and we hear the sound. This pattern of motion, which is an air
pressure wave, can be represented in many ways, for example as a mathematical formula, or
graphically as a waveform. Figure 13 below shows the movement of the string over time: the
segment marked "A" represents the string as it is pulled back by the pick; "B" shows it
moving back towards its resting point, "C" represents the string moving through the resting
point and onward to its outer limit; then "D" has it moving back towards the point of rest.
This pattern repeats continuously until the friction of the molecules in the air gradually
slows the string down to a stop. In order for us to hear the string tone, the pattern must repeat
at least twenty times per second. This threshold, 20 cps, is the lower limit of human hearing.
The fastest vibration we can hear is theoretically 20,000 cps, but in reality, it's probably closer
to 15,000 or 17,000 cycles.

Fig 13. The vibration pattern of a plucked string over time. Gradually, the motion will die out.

If this back-and-forth motion were the only phenomenon involved in creating a sound, then all
stringed instruments would probably sound much the same. We know this is not true, of
course, and alas, the laws of physics are not quite so simple. In fact, the string vibrates not
only at its entire length, but also at one-half its length, one-third, one-fourth, one-fifth, etc.
These additional vibrations occur at a rate faster than the original vibration, (known as the
fundamental frequency), but are usually weaker in strength. Our ear doesn't hear each
vibration individually however. If it if did, we would hear a multi-note chord every time a
single note were played. Rather, all these vibrations are added together to form a complex or
composite waveform that our ear perceives as a single tone (Figure 14).

Fig 14. The making of a complex waveform. Vibrations occurring at different frequencies are added together to form a complex tone.

This composite waveform still doesn't account for the uniqueness of the sound of different
instruments, as there is one more major factor in determining the quality of the tone we hear.
This is the resonator. The resonator in the case of the guitar is the big block of hollow wood
to which the string is attached, i.e., the guitar body. This has a major impact on the sound we
perceive when a guitar is played, as it actually enhances some of the vibrations produced by
the string and diminishes or attenuates others. The ultimate effect of all the vibrations
occurring simultaneously, being altered by the resonator, adds up to the sound we know as
guitar.

Recording a Sound
So what has all this got to do with digital audio? What is it we need to record from all of this
motion in the air? It is the strength of the composite pressure wave created by all the
vibrations that we must measure very accurately and very often. That is the basic principle
behind digital audio. When a microphone records a guitar playing, a small membrane in the
Mic (called the diaphragm) is set into motion in a pattern identical to the guitar wave's own
pattern. The diaphragm moves back and forth, creating an electrical current that is sent
through a cable. The voltages in the cable are also "alternating" in strength at a very rapid
rate: strong, weaker, weak, strong again. When the cable arrives at our measuring device,
called an analog-to-digital converter (ADC), the device measures how strong the signal is at
every instant and sends a numeric value to a storage device, probably the hard drive in your
computer. The ADC, along with its counterpart, the digital-to-analog converter (DAC)
that turns the numbers back into voltages, is typically found as a component of your
sound card, or as a stand-alone device.

There are several important aspects of this measuring process that we need to discuss. First is
the rate at which we choose to examine the signal coming into the converter. It is a known
fact of physics that we must measure or sample the signal at a rate at least twice as fast as the highest
frequency we wish to capture. Let's say we are trying to record a moderately high note on a
violin. Let's also assume that the fundamental frequency of this tone repeats 440 times per
second (the note would be an "A," of course), and that we want to capture all vibrations up to
five times the rate of the fundamental, or 2,200 cycles per second. To capture all the
components of this note and convert the resulting sound into numbers, we would have to
measure it 4,400 times per second.
But humans can hear tones that occur at rates well up into the tens of thousands of times per
second, so our system must be capable of much better than that! In theory, we might want to
capture an extremely high sound, for example one that actually contains a frequency
component of 20,000 cps. In that case, our measurements must occur 40,000 times per
second, which in fact, would allow us to capture every possible sound that any human might
be able to hear. Because of some complex laws that digital audio obeys however, we use a
rate of 44,100 measurements or "snapshots" of a sound per second in our professional
equipment. This sampling rate, abbreviated 44.1 kHz (44.1 kilohertz), is one aspect of what
we call CD-quality recording, as it is the same rate that commercial CDs use. Other common
sampling rates are 11 kHz, 22 kHz, and, for some professional equipment, 48 kHz.
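
The rule of thumb in that paragraph reduces to one multiplication; the sketch below simply
restates the violin example and the limit of human hearing in Python.

def minimum_sample_rate(highest_frequency_hz):
    """A signal must be sampled at least twice as fast as its highest frequency."""
    return 2 * highest_frequency_hz

print(minimum_sample_rate(5 * 440))    # 4400  -> the violin example above
print(minimum_sample_rate(20_000))     # 40000 -> in practice, 44,100 is used
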
The other important issue is how accurate our measuring system will be. Will we have 20
different values to select from for each measurement? How about 200 or 2,000? How
accurately do we need to represent the incredible variety of fluctuations in a pressure wave?
Think about the different types of timepieces you know about. If your digital watch shows
you minutes and seconds, that's adequate for most purposes. If you are doing scientific
measurements of time, then you might need more accuracy, perhaps minutes, seconds, tenths,
hundredths and even thousandths of seconds. Soundwaves actually encompass an infinite
range of strengths, but we must draw the line somewhere, or else we would need gigantic
hard drives just to store the information for a short amount of sound. The music industry has
settled on a system that provides 65,536 different values to assign to the amplitude (strength)
of a waveform at any given instant. In a certain sense, that number represents a compromise,
as we will definitely not capture every possible value that the amplitude can take. However,
our ears can live with that compromise, and in any event, using a more sophisticated
measuring system is simply not worth the extra cost in computing and storage resources.
Obviously you are wondering, "Why in the world did they choose 65,536?" The answer is
simply because it is 2^16, that is, 2 to the 16th power (sixteen 2s multiplied together).
This is the number of different values we can express in the binary numbering system if we use 16 bits,
or 16 places. Recall from your high school math that the binary numbering system uses only
two digits, 0 and 1, and that this is what computers use as well. A string of sixteen 1's in the
binary system produces the number 65,535 in decimal, and a string of 16 0's is, of course the
decimal number 0. So from 0 through 65,535 we have 65,536 different numbers that we can
express using 16 bits. Computers actually work in terms of 8-bit strings, which you will
remember are called bytes. Therefore, if we use numbers that are two bytes long to represent
every different value in our system, we have the total range described above. One byte, or a
string of 8 bits, would allow us to represent the numbers 0 through 255, and MIDI is quite
happy with that range, but there is so much more detail in the digital audio world that our
system must be far more sophisticated.
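
To make the binary arithmetic concrete, the short Python sketch below (purely illustrative;
the variable name is our own) shows where the figure 65,536 comes from and how one amplitude
measurement fits into two bytes:

    # Why 16 bits yields 65,536 different amplitude values.
    print(2 ** 16)        # 65536 possible values
    print(2 ** 16 - 1)    # 65535, the largest value: sixteen 1s in binary
    print(2 ** 8)         # 256 values if we used only one byte (8 bits)

    # A 16-bit amplitude value occupies exactly two bytes.
    sample_value = 65_535
    print(sample_value.to_bytes(2, byteorder="little"))   # b'\xff\xff'
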

If you've followed the discussion up until now, you should have a pretty good idea of what is
on a compact disc. It's a massive amount of numbers, each two bytes long, that represent the
fluctuating amplitude of the pressure wave in front of the microphone that made the
recording. No matter if the sound was an orchestra, a guitar or a car horn, the CD simply
contains measurements for the pattern of motion produced by that sound. We can use our
hard drives to record the information in the same form as that on a CD, or if we wish, we can
use a somewhat less accurate representation. For example, if we choose not to capture the
data as accurately as the CD, we might only use eight bits, or one byte, for each amplitude
value. Such a measuring system has only what we call 8-bit accuracy or resolution. This will
have a significant impact on the quality of our representation, but it may be adequate for the
purpose at hand. Or we might wish to look at the sound and take a measurement only eleven
or twenty-two thousand times a second, i.e., an 11 kHz or 22 kHz sampling rate,
realizing that we will miss some detail, in particular the high end (upper frequencies) in the
sound. In truth, such a rate may be good enough to represent certain types of sound; for example,
the frequencies produced by the human voice are much lower than those produced by a
cymbal, so we might be able to get the whole picture by looking at the voice at a lower rate
(for more details on digital audio see appendix 5). The decision regarding how accurate we
need to be will be determined by the material we are recording and the amount of storage
space we have available to hold the recording. These choices are usually made from within
our audio software, so perhaps it's time to turn our attention to the PC.
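
The trade-off between accuracy and storage space is easy to quantify. The back-of-the-envelope
Python sketch below (an illustration of the arithmetic, not a figure from any particular product)
multiplies the sampling rate by the bytes per sample and the number of channels:

    def bytes_per_minute(sampling_rate_hz, bits_per_sample, channels):
        # Uncompressed storage needed for 60 seconds of audio.
        return sampling_rate_hz * (bits_per_sample // 8) * channels * 60

    # CD quality: 44.1 kHz, 16-bit, stereo -- roughly 10 MB per minute.
    print(bytes_per_minute(44_100, 16, 2))    # 10584000 bytes

    # A voice recording at 22.05 kHz, 8-bit, mono is far smaller.
    print(bytes_per_minute(22_050, 8, 1))     # 1323000 bytes
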

Digital Audio Software


There are several common varieties of software used to manipulate digital audio data on a
computer. The most popular is wave editing software, which is often included as part of the
software packaged with sound cards. This type of software allows someone to work with a
graphic representation of sound, the waveform, and cut, copy and paste it with the ease of a
word processor (Figure 15). The software also typically includes a number of editing features
that allow additional processing of the material; this processing can be used to create special
effects, such as dogs barking backwards, or gunshots being stretched to one hundred times
their original length. Features of this type fall into the category of signal processing, or digital signal
processing (DSP) functions. Professional versions of waveform editors often cost several
hundred dollars, but offer the user tremendous flexibility in the type of manipulations they
can perform. By the way, on the IBM-compatible platform, digital audio files are typically
called Wave files and carry the extension, .WAV. On the Macintosh, the standard audio file
type is the AIFF file.
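
If you are curious about what is inside a Wave file, Python's standard wave module can report
exactly the properties discussed in this section. The sketch below assumes a file named
example.wav (a placeholder name) exists in the current directory:

    import wave

    # Inspect the sampling rate, resolution and channel count of a Wave file.
    with wave.open("example.wav", "rb") as wav:
        print("Channels:         ", wav.getnchannels())
        print("Resolution:       ", wav.getsampwidth() * 8, "bits")
        print("Sampling rate:    ", wav.getframerate(), "Hz")
        print("Length (seconds): ", wav.getnframes() / wav.getframerate())
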

Fig. 15 - A graphic waveform display.

Usually, wave editing software can accommodate no more than a single stereo file, though a
new category, called multi-track software, lets the user work with several stereo files at once.
After being manipulated and edited, these files are mixed together into a single composite
stereo file that is sent to the left and right channel outputs of a sound card. In many cases, the
multi-track software doesn't offer a full range of editing options; most often it is the signal
processing functions that are omitted, but the ability to mix many different layers of audio is
very appealing.
One other type of editing software is used with dedicated hard-disk recording systems. These
professional products are very sophisticated, and often very expensive. Their key advantage is
that they provide extensive editing capabilities, such as those needed to make commercial
audio recordings, and often include storage devices devoted to holding large amounts of high
quality audio. They also provide multiple tracks of digital audio, in some cases up to ten or
even twelve simultaneous tracks on a single PC, as well as multiple audio outputs. This
makes them well suited for the production of radio and television commercials, where a vocal
narration, sound effects and music soundtrack are often combined.

Sound Cards
Far less expensive than the dedicated hardware described above are the massively popular
sound cards found in nearly every PC today. Much of the success of these products can be
attributed to the fact that IBM-compatible computers never enjoyed the quality of sound
production that the Macintosh had from its inception. When card maker Creative Labs
reached the consumer with its industry standard Sound Blaster card, they found a huge
untapped market that is now quite saturated with products.
Sound cards typically serve several important functions. First, they contain a synthesizer that
uses either frequency modulation (FM) synthesis to produce sound, or that stores actual
recorded audio data in wavetables for use in playback. FM is a somewhat dated method of
synthesis that uses one or more wave(s), called the modulator, to alter the frequency and
amplitude of another, called the carrier. The range of sounds that can be produced is limited,
though often adequate for simple sound effects or other game sounds. While the FM-style
card has nearly disappeared from the market, most software manufacturers must include
support for it in their products because of the vast number of cards that are still installed in
computers.
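
For the curious, here is a minimal Python sketch of two-operator FM using the standard
textbook formula; the carrier and modulator frequencies and the modulation index are arbitrary
example values, and the samples are only printed, not sent to a sound card:

    import math

    SAMPLING_RATE = 44_100       # samples per second
    CARRIER_HZ = 440.0           # the wave we actually hear
    MODULATOR_HZ = 110.0         # the wave that bends the carrier's frequency
    MODULATION_INDEX = 2.0       # how strongly the modulator acts

    def fm_sample(n):
        # One sample of sin(2*pi*fc*t + I * sin(2*pi*fm*t)).
        t = n / SAMPLING_RATE
        return math.sin(2 * math.pi * CARRIER_HZ * t
                        + MODULATION_INDEX * math.sin(2 * math.pi * MODULATOR_HZ * t))

    # The first few samples of the resulting waveform.
    print([round(fm_sample(n), 3) for n in range(5)])
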
Nearly all newer cards, such as the Turtle Beach Pinnacle, Malibu and Montego, use the preferable
wavetable approach because it provides far more realistic sound. Wavetables are digital
recordings that exist in some type of compressed form in the card's ROM (read only
memory). In general, the size of the wavetable ROM determines the quality of the sound.
These sounds can never be erased, but can be altered in numerous ways as they play back. For
example, a trumpet sound could be reversed, or a piano could be layered with a snare drum.
Depending upon the programmability provided by the manufacturer, this type of card can be
quite flexible in the sounds it makes. Most wavetable cards, regardless of their manufacturer,
offer a General MIDI soundset, which makes them compatible with many popular
multimedia programs. Despite what their ads may claim, sound cards vary tremendously in
quality, even those that use the same playback method. Magazine reviews and roundups are a
good source of information for evaluating a card's characteristics.
Most cards also contain a MIDI interface for MIDI input and output, plus the digital to analog
(D/A) and analog to digital (A/D) converters described above. While all MIDI interfaces are
essentially created equal, there can be major differences among the converters on these cards.
Many cards claim ``CD Quality Sound,'' which simply means they can record and play back
audio at a sampling rate of 44.1 kHz using 16-bit resolution. Unfortunately, the personal
computer was not originally intended to be a musical instrument, and the high level of
electronic activity inside its case can cause interference problems with some cards. With
properly built cards, these problems can be avoided, and most users won't experience any
difficulties.

Putting it Altogether
MIDI and digital audio have coexisted in separate worlds until very recently. Now, using an
entirely new class of software, we have the potential to work with both types of data within a
single program. This new category, called simply integrated MIDI and digital audio
software, solves many of the most nagging problems desktop musicians have had for years.
The capabilities it offers greatly facilitate the integration of ``real world'' audio with the
``virtual'' world of MIDI tracks. Before we discuss this software, let's look at the way things
used to work. Here's how musicians combined audio and MIDI in the past.

Synchronization
For many years, in home and professional music studios around the world, musicians have
employed elaborate and somewhat complex means to join live audio with MIDI music.
Guitarists, vocalists, drummers and others have used different synchronization techniques to
mix their live playing with the music produced by their MIDI software. Typically, a musician
would record live audio onto a tape recorder, then use the tape recorder to send information
to the computer which told it when to start and stop playing. In this way, the music on the
tape and the sequenced music could be perfectly aligned.
The information sent by the tape recorder in this case is known as SMPTE time code, and is
actually an audio signal recorded (or ``striped'') on the tape. SMPTE (pronounced ``simp-tee'')
serves as a timing reference for both the tape and the computer running the MIDI software. In
essence, this code tells the software ``what time it is,'' i.e., where into the music it should be.
If a MIDI drum part must start exactly one minute after the music on the tape recorder begins,
then the sequencer will watch the time pass from the beginning of the tape (time 00:00), until
it reaches time 01:00, at which point it begins to play. Sequencers can jump instantly to any
time point that's required, so the sequencer will simply wait for its ``cue'' then start playing.
SMPTE, which stands for the Society of Motion Picture and Television Engineers, was
initially created by the NASA space agency for use with its tracking stations. It provided an
absolute timing reference that allowed the agency to keep track of when transmissions
occurred. Like a digital clock, SMPTE works on a 24 hour cycle, and the precision it
provides is considerable: a normal SMPTE time represents hours, minutes, seconds, and
``frames,'' (Figure 16). The ``frames'' designation is important to the television and movie
industry for tracking time in film and video productions. A frame in television occurs 30
times a second, while in film it represents an interval of 1/24th or 1/25th of a second, so
SMPTE can measure time quite accurately. Because most professional video equipment is
SMPTE-compatible, musicians creating audio for video productions can also use it to
synchronize their music with the various types of video equipment they commonly work
with. When scoring for films, it is an invaluable way for the composer to know exactly when
a sound effect or music cue must begin and end.
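
Since a SMPTE position is simply an elapsed-time count, converting between seconds and an
hours:minutes:seconds:frames display is a small calculation. The Python sketch below (our own
illustration, using the 30 frames-per-second television rate mentioned above) shows the idea:

    FRAMES_PER_SECOND = 30   # the television rate mentioned above

    def smpte_time(elapsed_seconds):
        # Format elapsed time as a SMPTE-style HH:MM:SS:FF string.
        total_frames = int(round(elapsed_seconds * FRAMES_PER_SECOND))
        frames = total_frames % FRAMES_PER_SECOND
        seconds = (total_frames // FRAMES_PER_SECOND) % 60
        minutes = (total_frames // (FRAMES_PER_SECOND * 60)) % 60
        hours = total_frames // (FRAMES_PER_SECOND * 3600)
        return f"{hours:02d}:{minutes:02d}:{seconds:02d}:{frames:02d}"

    # The drum part in the example above starts one minute into the tape.
    print(smpte_time(60))      # 00:01:00:00
    print(smpte_time(90.5))    # 00:01:30:15
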

Fig. 16 - An example of SMPTE time code, showing time in hours, minutes, seconds, and frames.

Integrated Software
Rather than deal with the intricacies of SMPTE, today's musician can work with integrated
software to combine audio and MIDI tracks with great precision. New programs like
Cakewalk Pro Audio represent digital audio data in the same form as MIDI data, and allow
the user to manipulate the two with ease. Once audio files are recorded onto disk, they can be
aligned for playback along with the MIDI information, and what's more, numerous tracks of
audio can be performed simultaneously. If synchronization with an external device is needed,
that device can still control the entire project. Thus, the best features of multi-track audio
software can now be found integrated with the advanced options of MIDI sequencers.
The number of audio tracks that can be mixed together in an integrated program, or in a
stand-alone audio editor for that matter, is very much a function of the computer hardware
being used for the task. In the IBM world, the processor (CPU) speed, access or ``seek'' time
of the hard drive, and available system RAM are among the key components to evaluate. In
the early years of desktop multimedia, software leader Microsoft produced a ``multimedia''
specification that described the minimal requirements for work of this type. That spec has
been modified to keep up with enhancements in today's computers, and has, as of this
writing, reached ``Level III'' status. This calls for a computer with a Pentium 75 MHz or
better processor, at least 8 MEGS of RAM, a 540 MEG hard drive, a quad-speed CD-ROM
player, a sound card that uses wavetable synthesis, and a video card that is MPEG 1 (a form
of compression) compliant. Keep in mind that any component of a system can slow the
process: a fast CPU with an inadequate hard drive can bring a system to its knees, for
example. It's important that all the pieces of the system are well balanced and in good
working order.

Here's a tip to keep in mind: one of the easiest and most effective tasks you can do to prepare
your system for recording or playing audio is defragmenting your hard drive. A fragmented
drive contains pieces of files spread over different physical locations, and makes the job of
streaming data to and from that disk very difficult. Use one of the cleanup programs, such as
defrag, which comes with your operating system, before making recordings. Also, if possible,
devote a separate drive partition to digital audio. When you first set up your computer, you
can create partitions easily using DOS's fdisk program, but later, you'll have to back up your
drive and reformat it.

Summary
We hope you've enjoyed this initial presentation of the ins and outs of desktop music and that
it will encourage you to experiment on your own. Much of today's software is very powerful,
though manufacturers have done a good job in making it easy to use, and you've got many
hours of pleasure and excitement to look forward to. Of course the more you can learn about
desktop music, the more you will get out of your equipment, so keep your eyes on the
numerous books and magazines devoted to the subject, and consider subscribing to some of
the multimedia newsgroups on the Internet. There's a whole world of music waiting for you,
right on your desktop.

Glossary of MIDI and Digital Audio Terms
ACTIVE SENSING - a method by which a MIDI device detects disconnection. A message
is sent to the receiver around three times per second, and if no message is received during this
period, the unit assumes the MIDI connection has been broken. It then begins a routine to
reestablish normal operation.
ADDITIVE SYNTHESIS - a synthesis method that builds complex waveforms by
combining sine waves whose frequencies and amplitudes are independently variable.
ADSR - Attack, Decay, Sustain and Release are the four stages of an envelope that describe
the shape of a sound over time. Attack represents the time the sound takes to rise from an
initial value of zero to its maximum level. Decay is the time for the initial falling off to the
sustain level. Sustain is the time during which it remains at this level. Release is the time it
takes to move from the sustain to its final level. Release typically begins when a note is let
up. In most sound generators, the time and the value reached are programmable.
AFTER TOUCH - a measurement of the force applied by a performer to the key on a
controller after it has been depressed. Either polyphonic, which measures the pressure on
each individual key, or monophonic, reflecting the total pressure on all keys.
AIFF - the standard file format for storing audio information on an Apple Macintosh
computer.
ALGORITHM - a set of instructions supplied to a computer for the purpose of solving a
problem.
ALL NOTES OFF - a three byte MIDI channel message that instructs the receiving device
to terminate all notes currently sounding.
ALIASING (FOLD-OVER) - ``false frequencies'' that are created when sampling a signal
containing frequencies greater than one-half the sampling rate.
AMPLIFIER - a device that increases the amplitude, power or current of a signal. The
resulting signal is a stronger reproduction of the input signal.
AMPLITUDE - the strength or magnitude of any changing quantity when compared to its
'at rest' or 'zero' value.
ANALOG - information which is continuously variable in nature.
ANALOG SYNTHESIS - a method of sound synthesis that relies on predefined waveforms
to create sounds that vary over time. The amplitude, frequency and harmonic content of these
waveforms can be manipulated to produce a vast number of different results.
ARPEGGIATE - to play the notes of a chord in succession rather than simultaneously.
ATTACK - the initial stage of an envelope. Refers to the time from the beginning of the
sound to its highest or maximum level.
BANK - a storage location in a sampler or synthesizer that typically holds a large number of
individual programs (sounds).
BINARY NUMBERS - a numbering system based on 2 in which 0 and 1 are the only
available digits.

BITS (BYTES) - a bit is a binary digit, the basic unit of information used by a computer to store
numbers. One bit equals a 'one' or a 'zero'. Usually 8 bits equal one byte; however, MIDI transmits
each byte as a 10-bit word that includes a start bit, the 8-bit data message, and a stop bit.
BUFFER - an area of RAM used to temporarily store data.
CENTRAL PROCESSING UNIT (CPU) - a silicon chip that performs calculations and
acts as the brain of a computer.
CHANNELS - one of 16 different data paths that are available to carry messages in MIDI.
CHANNEL MESSAGE - a type of MIDI message that carries specific channel information.
CHORUSING - a doubling effect commonly found on a synthesizer or sampler that makes a
single sound appear to sound like an entire ensemble. The initial signal is split and appears at
a slightly altered pitch from the original, or at a slightly later point in time. This time and
pitch level are often controllable by a low frequency oscillator (LFO).
CONTINUOUS CONTROLLER - a type of MIDI message that is generated by the
movement of a continuous control.
CONTROLLERS - various sliders, levers, knobs, or wheels typically found on a MIDI
controller. Used to send continuous (as opposed to discrete) data to control some aspect of a
sound.
DECIBEL - a decibel (or dB) is 1/10th of a bel, which is a relative measure of two sound levels.
DC (DIRECT CURRENT) - an electrical current that flows in one direction.
DECAY - one of the four basic stages of an envelope. Refers to the time the sound takes to
settle into its sustain level.
DEFAULT - the ``normal'' or ``startup'' state of a hardware device or software application.
DELAY - a common effect in a sampler or synthesizer that mimics the time difference
between the arrival of a direct sound and the first reflection to reach the listener's ears.
DIGITAL AUDIO - the numeric representation of sound. Typically used as the means for
storing sound information in a computer or sampler.
DIGITAL SYNTHESIS - the use of numbers to create sounds. Method most often used in
today's synthesizers for generating sounds, as compared to the analog method employed
previously.
DIN PLUG - a five-pin connector used by MIDI equipment.
DISTORTION - a process, often found desirable by guitar players, that alters a sound's
waveform.
DRUM MACHINE - an electronic device, usually controllable via MIDI commands, that
contains samples of acoustic drum sounds. Used to create percussion parts and patterns.
DSP - digital signal processing. Processes used to alter sound in its digital form.
DYNAMICS - the relative loudness or softness of a piece of music.
ECHO - the repetition of a sound delayed in time by at least 50 milliseconds after the
original. An effect often found in synthesizers and samplers.

ENVELOPE - changes in a sound over time, including alterations in a sound's amplitude,
frequency and timbre.
ENVELOPE GENERATOR - a device or process in a synthesizer or other sound generator
that creates a time varying signal used to control some aspect of the sound.
ERROR CORRECTION - a procedure found in digital audio systems that detects and
corrects inaccurate or missing bits in the data stream.
EQUALIZATION (EQ) - boosting or cutting various frequencies in the spectrum of a
sound.
FADE IN/OUT - a feature of most audio editing software that allows the user to apply a
gradual amplitude increase or decrease over some segment of the sound.
FADER - also known as a slider or attenuator, this control allows the user to perform a
gradual change to the amplitude of a signal. Commonly found as a feature of MIDI software
programs.
FILTER - a circuit which permits certain frequencies to pass easily while inhibiting or
preventing others. Typical filters include low pass, high pass, band pass, and band reject.
FLANGE - an effect applied to a sound wherein a delayed version of the sound is mixed
with the original.
FM SYNTHESIS - a synthesis method that involves the modulation of one signal (the carrier)
by another (the modulator).
FREQUENCY - the rate per second at which an oscillating body vibrates, usually measured
in Hertz (Hz). Humans can hear sounds whose frequencies are in the range 20 Hz to 20 kHz.
FUNDAMENTAL FREQUENCY - the predominant frequency in a complex waveform.
Typically provides the sound with its strongest pitch reference.
GRAPHIC EQUALIZER - a device type that applies a series of bandpass filters to a sound,
each of which works on a certain range of the spectrum. The frequencies that fall within the
range, typically one-third octave, can be boosted or cut.
HARMONIC - a sine wave component of a complex sound whose frequency is a whole
number multiple of the fundamental frequency.
HARMONIC SERIES - also known as the ``overtone'' series, this is the series of
frequencies in a sound that are whole number multiples of the fundamental.
HERTZ - a measurement used to represent the number of times per second a waveform
repeats its pattern of motion (cycle).
KEYBOARD SPLIT- a setup of a keyboard where different notes trigger different sounds.
Also known as zoning.
LCD - Liquid Crystal Display. A small screen found on electronic instruments that displays
data.
LFO - a low frequency oscillator that is used to alter a sound's frequency or amplitude.
LIBRARIAN - a category of MIDI software that is used to organize and store a MIDI
device's patch (program) data.

LOCAL ON/OFF - a three byte channel message that determines the status of the Local On
function of a MIDI device. LOCAL ON allows the instrument to produce sounds from
incoming MIDI data and its own keyboard. LOCAL OFF states that only external MIDI data
is responded to.
LOOP - to play a sequencer pattern or a portion of an audio sample repeatedly. The point to
which the program returns, whether the beginning or some other point, is usually definable by
the user.
METRONOME - a device or software function that produces a discrete pulse. Used to
synchronize music with a specific tempo.
MIDI - the Musical Instrument Digital Interface. An international standard for
communication between a musical instrument and a computer.
MIDI CLOCK - a system real time message that enables the synchronization of different
MIDI devices. The standard rate is 24 divisions per beat.
MIDI INTERFACE - a device that adds a MIDI In, Out and sometimes Thru port to a
desktop computer.
MIDI MERGE - used to combine MIDI data from various sources into a single source.
MIDI MESSAGE - the different packets of data that form a MIDI transmission.
MIDI PATCHER - a device that allows the routing of one or more MIDI signals to various
MIDI devices. Typically reconfigurable to allow for different routings of the data.
MIDI PORTS - the three connectors that pass MIDI data into (MIDI IN), out of (MIDI
OUT) and through (MIDI THRU) a MIDI device.
MIDI SAMPLER - an electronic device that can record, alter and playback digital audio data
under the control of a MIDI data stream.
MIDI TIME CODE (MTC) - a timing system used as a universal reference for all the
devices in a MIDI network. Represents the information contained in a SMPTE signal using
MIDI messages.
MIXER - a recording device that allows several different audio sources to be combined.
Provides independent control over each signal's loudness and stereo position.
MODULATION WHEEL - one of several common continuous controls on a MIDI device.
Often used to add a vibrato effect to a sound.
MONOPHONIC - the ability to play only one note at once. A characteristic of some older
synthesizers.
MULTITIMBRAL - having the ability to produce many different musical timbres (sounds)
at once.
MULTITRACK - in traditional recording technology, the ability to layer multiple different
audio signals at once. In MIDI software, the ability to layer numerous MIDI data streams.
NOTE ON COMMANDS - a channel voice message that indicates a note is to begin
sounding. Contains two additional data bytes: Note number and Note velocity.
NYQUIST FREQUENCY - the highest frequency that any given digital audio system can
capture. Defined as one half the sampling rate of that system.

OCTAVE - a frequency ratio of 2:1. A musical distance (interval) of 12 semitones.
OSCILLATOR - an electronic device capable of generating a recurring waveform, or a
digital process used by a synthesizer to generate the same.
OVERDUB - the ability to record one sound on top of another.
PATCH CORD - an audio cable used to connect the output of a device to an amplifier or
mixer.
PAN - to move a signal from the left to the right of a stereo field, or vice versa.
PARAMETERS - characteristic elements of a sound that are usually programmable in a
synthesizer or other MIDI device.
PARTIAL - a sine wave component of a complex sound.
PATCH EDITOR - a category of MIDI software used to control the sound characteristics of
a synthesizer from a computer.
PATCHES - also variously known as programs, timbres, or voices. The name used for the
sounds that can be generated by a MIDI device.
PERIOD - the time required for one cycle in a periodic waveform. Period is the inverse of
frequency.
PHASE - the relative position of a wave to some reference point.
PITCH - the perceived highness or lowness of a sound, determined chiefly by its fundamental frequency.
PITCH BEND - a MIDI controller that can vary the pitch of a sound.
POLYPHONIC - the ability to play many different notes at once.
POTENTIOMETER (POT) - a variable resistor used to alter voltage.
PRESETS - typically, the sounds permanently stored by the manufacturer in a sound
generating device.
PROGRAMS (SEE PATCHES)
PROGRAM CHANGE MESSAGE - a two byte MIDI message used to request that a
synthesizer change the currently loaded program.
PUNCH IN/OUT - the ability to start and stop a recording at some point other than the
beginning.
QUANTIZATION -rounding or truncating a value to the nearest reference value. In a
sequencer, used to adjust recorded material so it will be performed precisely on a selected
division of the beat. In digital audio, the range of numbers used for specifying amplitude
levels of a recorded signal. (16 bit quantization = 65,536 values; 8-bit = 256, etc.)
RAM - random access memory. The temporary storage area of a computer or sampler.
REAL TIME - a recording or realization of a sound processing procedure as it occurs. (see
Step Time).
RECEPTION MODE - one of four basic configurations used by a synthesizer that
determines how it will respond to incoming data.
ROM - read only memory. Permanent memory in a computer or MIDI device.

SAMPLER - an electronic device that can record, alter and playback digital audio data under
the control of a MIDI data stream.
SAMPLING - digitizing a waveform by measuring its amplitude fluctuations at some
precisely timed intervals. The accuracy of the measurements is a function of the bit
resolution.
SAMPLING RATE - the rate at which samples of a waveform are made. Must be at least twice the
highest frequency one wishes to capture. Commercial compact discs use a rate of 44,100
samples per second.
SEQUENCER - MIDI software or less commonly, a hardware device that can record, edit
and playback a sequence of MIDI data.
SINE WAVE - the most basic waveform, consisting of a single partial. Forms the basis of all
complex, periodic sounds.
SMPTE TIME CODE - a timing standard adopted by the Society of Motion Picture and
Television Engineers for controlling different audio and video devices. Allows a sequencer
and an external device such as a tape recorder to stay synchronized.
STEP TIME - entering notes one by one, as opposed to real time recording in a sequencer.
SONG POSITION POINTER (SPP) - a system-common message that specifies where in a
sequence a device should begin to play.
STANDARD MIDI FILE - a standardized file format used for exchanging MIDI sequence data
between programs.
STATUS BYTE - the first byte of a MIDI message that specifies what type of message it is.
SUSTAIN PEDAL - a pedal on a MIDI controller (or acoustic piano) that keeps all notes
sounding even after a key is released.
SYSTEM COMMON MESSAGES - MIDI messages used for various functions including
tuning an instrument and song selection.
SYSTEM EXCLUSIVE MESSAGE - MIDI message used to communicate with a device
made by a specific manufacturer.
SYSTEM REAL TIME MESSAGES - commands used to synchronize one MIDI device
with another.
TEMPO - the rate of speed at which a musical composition proceeds. Usually uses a quarter
note as the timing reference.
TIMBRE - the property of a sound that distinguishes it from all others. Tone color.
TREMOLO - a rapid alternation between two tones, usually a third apart. On a synthesizer, this
effect can usually be controlled by the modulation wheel or modulation amount.
VELOCITY - a measure of the speed with which a key on a controller is pressed. Used to
determine the volume characteristics of a note.
WAVEFORM - the graphical display of a sound pressure wave over time.
WAVETABLE - a storage location that contains data used to generate waveforms digitally.

Appendix 1 - Features and Capabilities of a General MIDI (GM) Device

Any device that bears the GM logo must adhere to these features:
• 24 voices of polyphony
• Respond to all 16 MIDI channels
• Each channel can access any number of voices
• Each channel can play a different timbre
• A full set of percussion instruments on channel 10
• All percussion instruments are mapped to specific MIDI note numbers
• A minimum of 128 presets available as MIDI program numbers
• All sounds available on all MIDI channels except channel 10
• Respond to note on velocity
• Middle C is always note number 60
• All GM devices respond to MIDI controllers 1-modulation, 7-volume, 10-pan, 11-
expression, 64-sustain pedal, 121-reset all controllers, 123-all notes off
• GM devices respond to all registered parameters 0-pitch bend sensitivity, 1-fine tuning, 2-
coarse tuning
• GM devices also respond to channel pressure and pitch bend (two semitone default)

General MIDI (GM) Program Change Map
1-8 PIANO 49-56 ENSEMBLE 97-104 SYNTH EFFECTS
Prog# - Instrument Prog# - Instrument Prog# - Instrument
1 Acoustic Grand 49 String Ensemble 1 97 FX 1 (rain)
2 Bright Acoustic 50 String Ensemble 2 98 FX 2 (soundtrack)
3 Electric Grand 51 SynthStrings 1 99 FX 3 (crystal)
4 Honky-Tonk 52 SynthStrings 2 100 FX 4 (atmosphere)
5 Electric Piano 1 53 Choir Aahs 101 FX 5 (brightness)
6 Electric Piano 2 54 Voice Oohs 102 FX 6 (goblins)
7 Harpsichord 55 Synth Voice 103 FX 7 (echoes)
8 Clav 56 Orchestra Hit 104 FX 8 (sci-fi)

9-16 CHROMATIC PERCUSSION 57-64 BRASS 105-112 ETHNIC


Prog# - Instrument Prog# - Instrument Prog# - Instrument
9 Celesta 57 Trumpet 105 Sitar
10 Glockenspiel 58 Trombone 106 Banjo
11 Music Box 59 Tuba 107 Shamisen
12 Vibraphone 60 Muted Trumpet 108 Koto
13 Marimba 61 French Horn 109 Kalimba
14 Xylophone 62 Brass Section 110 Bagpipe
15 Tubular Bells 63 SynthBrass 1 111 Fiddle
16 Dulcimer 64 SynthBrass 2 112 Shanai

17-24 ORGAN 65-72 REED 113-120 PERCUSSIVE


Prog# - Instrument Prog# - Instrument Prog# - Instrument
17 Drawbar Organ 65 Soprano Sax 113 Tinkle Bell
18 Percussive Organ 66 Alto Sax 114 Agogo
19 Rock Organ 67 Tenor Sax 115 Steel Drums
20 Church Organ 68 Baritone Sax 116 Woodblock
21 Reed Organ 69 Oboe 117 Taiko Drum
22 Accordion 70 English Horn 118 Melodic Tom
23 Harmonica 71 Bassoon 119 Synth Drum
24 Tango Accordion 72 Clarinet 120 Reverse Cymbal

25-32 GUITAR 73-80 PIPE 121-128 SOUND EFFECTS


Prog# - Instrument Prog# - Instrument Prog# - Instrument
25 Acoustic Guitar(nylon) 73 Piccolo 121 Guitar Fret Noise
26 Acoustic Guitar(steel) 74 Flute 122 Breath Noise
27 Electric Guitar(jazz) 75 Recorder 123 Seashore
28 Electric Guitar(clean) 76 Pan Flute 124 Bird Tweet
29 Electric Guitar(muted) 77 Blown Bottle 125 Telephone Ring
30 Overdriven Guitar 78 Shakuhachi 126 Helicopter
31 Distortion Guitar 79 Whistle 127 Applause
32 Guitar Harmonics 80 Ocarina 128 Gunshot

33-40 BASS 81-88 SYNTH LEAD


Prog# - Instrument Prog# - Instrument
33 Acoustic Bass 81 Lead 1 (square)
34 Electric Bass (finger) 82 Lead 2 (sawtooth)
35 Electric Bass (pick) 83 Lead 3 (calliope)
36 Fretless Bass 84 Lead 4 (chiff)
37 Slap Bass 1 85 Lead 5 (charang)
38 Slap Bass 2 86 Lead 6 (voice)
39 Synth Bass 1 87 Lead 7 (fifths)
40 Synth Bass 2 88 Lead 8 (bass+lead)

41-48 STRINGS 89-96 SYNTH PAD


Prog# - Instrument Prog# - Instrument
41 Violin 89 Pad 1 (new age)
42 Viola 90 Pad 2 (warm)
43 Cello 91 Pad 3 (polysynth)
44 Contrabass 92 Pad 4 (choir)
45 Tremolo Strings 93 Pad 5 (bowed)
46 Pizzicato Strings 94 Pad 6 (metallic)
47 Orchestral Strings 95 Pad 7 (halo)
48 Timpani 96 Pad 8 (sweep)

General MIDI (GM) Percussion Map
NOTE # NOTE NAME INSTRUMENT
35 B1 Acoustic Bass Drum
36 C2 Bass Drum 1
37 C#2 Side Stick
38 D2 Acoustic Snare
39 D#2 Hand Clap
40 E2 Electric Snare
41 F2 Low Floor Tom
42 F#2 Closed Hi-Hat
43 G2 High Floor Tom
44 G#2 Pedal Hi-Hat
45 A2 Low Tom
46 A#2 Open Hi-Hat
47 B2 Low-Mid Tom
48 C3 Hi-Mid Tom
49 C#3 Crash Cymbal 1
50 D3 High Tom
51 D#3 Ride Cymbal 1
52 E3 Chinese Cymbal
53 F3 Ride Bell
54 F#3 Tambourine
55 G3 Splash Cymbal
56 G#3 Cowbell
57 A3 Crash Cymbal 2
58 A#3 Vibraslap
59 B3 Ride Cymbal 2
60 C4 Hi Bongo
61 C#4 Low Bongo
62 D4 Mute Hi Conga
63 D#4 Open Hi Conga
64 E4 Low Conga
65 F4 High Timbale
66 F#4 Low Timbale
67 G4 High Agogo
68 G#4 Low Agogo
69 A4 Cabasa
70 A#4 Maracas
71 B4 Short Whistle
72 C5 Long Whistle
73 C#5 Short Guiro
74 D5 Long Guiro
75 D#5 Claves
76 E5 Hi Wood Block
77 F5 Low Wood Block
78 F#5 Mute Cuica
79 G5 Open Cuica
80 G#5 Mute Triangle
81 A5 Open Triangle

Appendix 2 - Differences Between GM, GS and XG
The differences between the Yamaha XG format and earlier ones such as General MIDI
(GM) and Roland’s General Synthesis (GS) are sometimes straightforward and sometimes
subtle. General MIDI defines a minimum set of requirements that an instrument must meet in
order to be called “GM-compatible.” (see Appendix 1) It is important to understand that both
XG and GS are supersets of GM; in other words, both formats meet all the requirements of
General MIDI and so are 100% GM-compatible—but both also expand on GM.
XG and GS each provide their own minimum set of requirements that must be subscribed to
for an instrument to be XG- or GS-compatible. Both formats also provide support for a
number of optional features that may be implemented in specific instruments. The Yamaha
MU80 tone module, for example, utilizes many of XG’s optional features, while the Roland
SC-88 tone module utilizes many of the GS optional features.

Number of Voices, Voice Organization and Voice Selection


GM: 128 Presets (corresponding to program change messages 0 - 127), organized in 16 groups of 8 Presets each.
No provision for the use of Bank Select messages (cc #0 and/or #32).
GS: Minimum requirement: 226 voices (“Tones”). Roland SC-88 provides 654 voices; SC-55 Mk II provides
354 voices. Bank Select MSB (cc #0) and, rarely, LSB (cc #32) is used to select banks of “Variation”
Tones, with program change messages used to select individual Tones. When Bank Select MSB = 0
(default setting), bank of “Capital” Tones (the GM Sound Set) is selected. When a GS-compatible
instrument receives a Bank Select message followed by a program change message that points to an empty
voice slot, the instrument plays silence.
XG: Minimum requirement: 520 voices. Yamaha MU80 provides 729 voices; MU50 provides 737 voices. Bank
Select MSB is used to select any of four bank types: Melody voices, SFX (Special Effects) voices, SFX kit
(the SFX sounds, mapped one to a key), or Rhythm kit (various drum and percussion sounds, mapped one to
a key). When Bank Select MSB = 0, the Bank Select LSB is then used to select any of 128 banks of voices,
each containing 128 Presets (accessed by standard MIDI program change messages). Program change
messages are also used to select different SFX voices, SFX kits or Rhythm Kits. When Bank Select MSB
and LSB are both = 0, the GM Sound Set is selected; when Bank Select MSB = 0 and LSB is not equal to 0,
banks of alternate “Variation” melody voices are selected. Unique sounds which are not direct variations on
the GM Sound Set are located in their own “SFX” bank(s), accessed by setting the Bank Select MSB to 40h.
When an XG instrument receives a Bank Select message followed by a program change message that points
to an empty Melody voice slot, the instrument substitutes the corresponding GM Sound Set voice, ensuring
that the voice will be heard with a sound that is at least similar to the one intended.
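
As a concrete illustration of the Bank Select plus Program Change mechanism described for GS and
XG above, here is a small Python sketch that builds the raw MIDI byte sequence; the function name
is our own, sending the bytes to a port is left out, and the bank and program numbers are arbitrary
examples rather than values from any particular instrument:

    def select_voice(channel, bank_msb, bank_lsb, program):
        # Raw MIDI bytes for Bank Select (cc #0 and cc #32) followed by a
        # Program Change, on the given channel (0-15).
        status_cc = 0xB0 | channel      # Control Change status byte
        status_pc = 0xC0 | channel      # Program Change status byte
        return bytes([status_cc, 0, bank_msb,     # cc #0  = Bank Select MSB
                      status_cc, 32, bank_lsb,    # cc #32 = Bank Select LSB
                      status_pc, program])        # program number 0-127

    # Example: bank MSB 0, LSB 0 (the GM Sound Set), program 0, channel 1.
    print(select_voice(0, 0, 0, 0).hex(" "))      # b0 00 00 b0 20 00 c0 00
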

Number of MIDI Channels


GM: 16, with each channel capable of playing a different instrument polyphonically.
GS: 16 “or more,” with no specific instructions as to how additional MIDI channels are to be implemented.
Roland SC-88 uses 32 MIDI channels and provides 32-way multitimbral capability; SC-55 Mk II uses 16
MIDI channels and provides 16-way multitimbral capability.
XG: 16 or 32, with specified system exclusive messages used to select the receive channel for each part (in the
case of XG instruments providing 32 MIDI channels, these are organized as channels A1 - A16 and B1 -
B16). XG instruments which support 32 MIDI channels are 32-way multitimbral. Yamaha MU80 uses 32
MIDI channels, and provides 32-way multitimbral capability; MU50 uses 16 MIDI channels and provides
16-way multitimbral capability.

Polyphony
GM: 24 notes, dynamically allocated.
GS: Minimum requirement: 24 notes, dynamically allocated. Optional support for additional polyphony. Roland
SC-88 has 64-note polyphony; SC-55 Mk II has 28-note polyphony.
XG: Minimum requirement: 32 notes, dynamically allocated. Optional support for additional polyphony. Yamaha
MU80 has 64-note polyphony; MU50 has 32-note polyphony.

Rhythm Channels
GM: GM specifies that MIDI channel 10 is to be used exclusively as a rhythm channel, and further designates a
single standard GM “Percussion Map,” in which note numbers 35 - 81 are assigned particular drum and
percussion sounds.
GS: Uses channel 10 for rhythm parts. Minimum requirement: 9 “drum sets.” These include one that provides
the standard GM Percussion map, as well as 7 “variation” sets (which use the same note numbers as the GM
Percussion Map but substitute alternate drum sounds), and a single “SFX Set” (which contains non-standard
percussion sounds). Roland SC-88 provides 24 drum sets, including two SFX Sets; SC-55 Mk II provides 10
drum sets, including one SFX Set. Some drum sets expand the range of the GM Percussion Map to include
additional note numbers. System exclusive messages are utilized for non-real-time designation of up to two
rhythm channels (including channel 10) which can optionally access a single alternate percussion map.
When a GS-compatible instrument receives a program change message on channel 10 pointing to a drum set
that doesn’t exist, no sound is heard.
XG: Normally uses channel 10 for rhythm parts (though channel 10 can optionally be designated to play melody
voices). XG-compatible instruments which support 32 MIDI channels normally use both channel 10 and
channel 26 (the tenth channel in the second set of 16) as rhythm channels. Any number of additional
channels can be designated for rhythm parts (in real time) by transmitting a Bank Select MSB value of 7Fh.
Minimum requirement: 11 “drum kits.” These include one that provides the standard GM Percussion map,
as well as 7 “variation” sets (which use the same note numbers as the GM Percussion Map but substitute
alternate drum sounds), and two “SFX” kits (which contain non-standard percussion sounds). Optional
support for additional SFX kits. Some drum kits expand the range of the GM Percussion Map to include
additional note numbers. When an XG-compatible instrument receives a program change message pointing
to a drum kit or SFX kit that doesn’t exist, it is ignored and the currently selected drum kit or SFX kit is
substituted, ensuring that sound is heard.

Control Change Messages


GM: GM-compatible instruments are required to respond to the following seven control change messages:
Modulation (cc #1), Volume (cc #7), Panpot (cc #10), Expression (cc #11), Sustain (cc #64), and RPNs
(Registered Parameter Numbers) (cc #100 [LSB] and cc #101 [MSB]). Modulation (cc #1) “will change the
nature of the sound in the most natural (expected) way, i.e. depth of LFO; change of timbre; add more tine
sound, etc.” Volume (cc #7) is to be used to set the overall volume of the channel prior to music
data playback as well as mixdown Fader-style movements, while Expression (cc #11) is to be used during
music data playback to attenuate the programmed MIDI volume, thus creating diminuendos and crescendos.
In the case of rhythm instruments, the balance between individual sounds is preset, and Volume and
Expression messages adjust the overall level of the instrument. Panpot (cc #10) is used to place the stereo
position of the sound between hard left (0) and hard right (127), with a value of 64 (40h) indicating center
position. GM-compatible instruments are not required to necessarily provide 128 steps of adjustment, but at
least three points (hard left/center/hard right) are necessary. Though recommended, it is not required that a
currently-sounding note be moved when a Panpot message is received; it is acceptable to apply the new pan
position starting with the next note. Some GM-compatible instruments therefore do not allow the pan
position to be changed while a note is sounding. It is not required that rhythm instruments respond to Panpot
messages since pan is preset for each individual sound. If a GM-compatible instrument does allow reception
of Panpot over the rhythm channel, the entire set of percussion sounds will be shifted left or right. The
Sustain message (cc #64) is the only pedal-related message whose reception is required by GM. In general,
only On and Off values are recognized by GM-compatible instruments for Sustain; for this reason, GM
specifies that Sustain data of 0 - 63 be considered Off and data of 64 - 127 be considered On (some GM
instruments may optionally accept continuous data for piano-type sounds, thus allowing half-damper and re-
damper effects).
GS: Minimum requirement: All seven GM cc messages, plus: Bank Select (cc #0 [MSB] and cc #32 [LSB]);
Portamento Time (cc #5); Data Entry (cc #6 [MSB] and cc #38 [LSB]); Portamento (cc #65); NRPN (Non-
Registered Parameter Numbers) (cc #98 [LSB] and #99 [MSB]). The Data Entry MSB and LSB (cc #6 and
cc #38) are used in conjunction with NRPNs (Non-Registered Parameter Numbers) (cc #98 and #99).
Optional support is provided for the following: Sostenuto (cc #66); Soft (cc #67); Portamento Control (cc
#84); External Effects Depth (cc #91); and Chorus Depth (cc #93). Optional support for half-damper
operation of Sustain (cc #64).
XG: Minimum requirement: All GM and GS required and optional cc messages, plus: Harmonic Content (cc
#71); Release Time (cc #72); Attack Time (cc #73); Brightness (cc #74); and Celeste (Detune) Depth (cc
#94).
The first four of these play a particularly important role since they allow continuously variable timbral
changes to be made easily—and in real time—to any XG voice. Since these are all adjustments that are
relative to the existing voice parameter settings, the end result will depend upon the original programming
of the voice. The default setting for each is a data value of 64 (the zeroed center value), which produces no
change. Harmonic Content (cc #71) modifies the resonance of the voice’s lowpass filter. Data values higher
than 64 cause the sound to become more nasal, while data values lower than 64 cause the sound to become
more open.
Brightness (cc #74) modifies the cutoff frequency of the voice’s lowpass filter. Data values higher than 64
enable higher frequencies to pass through (making the sound more brilliant), while data values lower than
64 cause increased filtering, making the sound warmer. The Attack Time (cc #73) and Release Time (cc
#72) messages allow adjustments to be made to the voice’s envelope. Attack Time describes how long it
takes an envelope to reach maximum level after a note is played, while Release Time is the opposite,
describing how long it takes an envelope to reach minimum level after a note is released. Data values higher
than 64 cause the sound to attack or release more slowly, while data values lower than 64 cause the sound to
attack or release more rapidly.
The Celeste (Detune) Depth message (cc #94) is used by XG-compatible instruments to set the Variation
effect send level. As with GS, External Effects Depth (cc #91) is used to set the amount of reverb send level
and Chorus Depth (cc #93) is used to set the amount of chorus send level. For more information, see the
“Effects” section below. For pedal-related controllers (Sustain, Portamento, Sostenuto, and Soft), data
values in the range 0 - 63 are considered “Off,” while data values in the range 64 - 127 are considered “On.”

RPNs (Registered Parameter Numbers)
Registered Parameter Numbers, or “RPNs” for short, are simply a standardized list of voice
parameters (for all MIDI instruments) that can be changed in real time using control change
messages. Currently, the MIDI standards committees have approved three RPNs: Pitch Bend
Sensitivity, Fine Tuning, and Coarse Tuning. To access these parameters, control change
#101 (carrying the RPN MSB) and #100 (carrying the RPN LSB) are used.
GM: GM-compatible instruments must be capable of receiving all three RPNs: Pitch Bend Sensitivity (RPN #0),
Fine Tuning (RPN #1), and Coarse Tuning (RPN #2).
GS: Minimum requirement: Same as GM. Data values are set using Data Entry (cc #6 [MSB] and cc #38
[LSB]). If a range of 128 values is sufficient, the MSB alone (cc #6) can be used. If greater resolution is
required, both the Data Entry MSB and LSB (cc #38) can be used.
XG: Minimum requirement: Same as GM and GS, but Pitch Bend Sensitivity is set in semitones only (the Data
Entry LSB is always ignored).
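
To show what an RPN transmission actually looks like on the wire, here is a brief Python sketch
building the control change sequence for Pitch Bend Sensitivity (RPN #0); the function name is
our own, and the two-semitone value is simply the GM default used as an example:

    def set_pitch_bend_sensitivity(channel, semitones):
        # cc #101 (RPN MSB), cc #100 (RPN LSB), then cc #6 (Data Entry MSB).
        status = 0xB0 | channel          # Control Change on this channel (0-15)
        return bytes([status, 101, 0,    # RPN MSB = 0
                      status, 100, 0,    # RPN LSB = 0 -> Pitch Bend Sensitivity
                      status, 6, semitones])   # Data Entry MSB = range in semitones

    # GM's default pitch bend range is +/- 2 semitones.
    print(set_pitch_bend_sensitivity(0, 2).hex(" "))   # b0 65 00 b0 64 00 b0 06 02
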

NRPNs (Non-Registered Parameter Numbers)


Non-Registered Parameter Numbers (NRPNs) are similar to RPNs except that they provide a
list of voice parameters unique to a particular instrument. This is an area of MIDI that is quite
open, since manufacturers are given the freedom to implement NRPNs as they like. Control
change #99 (carrying the NRPN MSB) and #98 (carrying the NRPN LSB) are used to access
manufacturer-specified NRPNs.
GM: GM makes no mention of the usage of NRPNs.
GS: Minimum requirement: None (the use of NRPNs is optional, though recommended). Optional support is
provided for the following 13 NRPNs: Vibrato Rate, Vibrato Depth, Vibrato Delay, Filter Cutoff, Filter
Resonance, Attack Time, Decay Time, Release Time, Drum Instrument Pitch, Drum Instrument Level,
Drum Instrument Pan, Drum Instrument Reverb Send, Drum Instrument Chorus Send. All NRPN values are
set with the Data Entry MSB (cc #6) only (the Data Entry LSB [cc #38] is ignored). When the drum
instrument Pan data value is 0, panning for that sound is random. Roland SC-88 responds to 14 NRPNs;
SC-55 Mk. II responds to 13 NRPNs.
XG: Minimum requirement: XG compatible instruments must utilize the following 19 NRPNs: Vibrato Rate,
Vibrato Depth, Vibrato Delay, Filter Cutoff Frequency, Filter Resonance, EG Attack Rate, EG Decay Rate,
EG Release Rate, Drum Filter Cutoff Frequency, Drum Filter Resonance, Drum EG Attack Rate, Drum EG
Decay Rate, Drum Instrument Pitch Coarse, Drum Instrument Pitch Fine, Drum Instrument Level, Drum
Instrument Pan, Drum Instrument Reverb Send Level, Drum Instrument Chorus Send Level, Drum
Instrument Variation Send Level.
All NRPN data changes are specified as being relative, with a Data Entry value of 64 (the zeroed center
value) causing no change to the sound, and values greater or less than 64 causing increased or decreased
change. As with GS, the Data Entry LSB (cc #38) is ignored; all NRPN values are set with the Data Entry
MSB (cc #6) only. As with GS, when the drum instrument Pan data value is 0, panning for that sound is
random. Yamaha MU80 responds to 19 NRPNs; MU50 responds to 19 NRPNs.

Pitch Bend and Aftertouch
GM: GM-compatible instruments must be capable of receiving Pitch Bend and Channel Pressure messages for all
melody voices, though rhythm instruments are not required to receive either. GM specifies the default Pitch
Bend Range as ±2 semitones with Pitch Sensitivity set by RPN, but the pitch shift curve is not defined.
Similarly, the effect of Channel Pressure is not defined. Receipt of Polyphonic Key Pressure is not required.
GS: Minimum requirement: GS compatible instruments must follow all GM guidelines as described above.
Optional support is provided for receiving Polyphonic Key Pressure and for defining the effect of Channel
Pressure with the use of system exclusive messages. GS does not define the pitch bend curve.
XG: Minimum requirement: XG compatible instruments must follow all GM guidelines as described above, and
also must be capable of receiving Polyphonic Key Pressure. The pitch shift curve is defined as linear by
cents, ensuring pitch bend compatibility between XG instruments. Pitch bend can be used to affect rhythm
channels as well as melody voices.

Effects
GM: GM provides no provision for the use of either onboard or external effects.
GS: Minimum requirement: None (reverb and chorus recommended but not required). Optional support for a
maximum of four internal effects: reverb, chorus, delay and EQ. If used, reverb send level is determined by
cc #91, chorus send level is determined by cc #93, and delay send level is determined by cc #94. Non-
Registered Parameter Numbers (NRPNs) can optionally be used to set reverb and chorus send levels for
individual sounds within drum instruments. System exclusive messages are used for non-real-time selection
from among preset reverb and chorus types and to customize effects settings. Roland SC-88 provides 3
onboard effects (reverb, chorus, and delay), plus a two-band equalizer, with 8 reverb types, 8 chorus types,
and 10 delay types; SC-55 Mk II provides 2 onboard effects (reverb and chorus), with 8 reverb types and 8
chorus types.
XG: Minimum requirement: Three onboard effects (reverb, chorus, and “Variation,” the latter of which must be
able to be used either in a standard send-return configuration or in a unity gain “insert” configuration, with a
system exclusive message used to set the desired condition), with 8 defined reverb effects types, 8 defined
chorus effects types, and 35 defined Variation effects types. Optional support for two additional effects:
distortion and graphic EQ. Reverb send level is determined by cc #91, chorus send level is determined by cc
#93, and Variation send level is determined by cc #94. Non-Registered Parameter Numbers (NRPNs) are
used to set reverb, chorus, and Variation send levels for individual sounds within drum instruments.
System exclusive messages are not only used to select preset effects types and customize effects settings but
are also used to specify effects routings (allowing for parallel or variable amounts of serial routing). A user-
defined real-time controller (such as a foot pedal or wheel) can be used to alter one effects parameter
(usually dry/wet mix) in the selected Variation effect. Yamaha MU80 provides 4 onboard effects (reverb,
chorus, Variation, distortion) plus a 5-band graphic equalizer, with 12 reverb types, 10 chorus types, 42
Variation types, and 3 distortion types (as well as 4 EQ Presets); MU50 provides 3 onboard effects (reverb,
chorus, Variation), with 11 reverb types, 11 chorus types, and 41 Variation types.

External Audio Input
The ability to input external audio signal into a MIDI tone generator and then control that
signal via MIDI messages is a relatively new phenomenon, largely made possible through the
increased availability of affordable analog-to-digital converter chips. This advanced feature
enables real-time participation in the MIDI music being generated and effectively forges a
bridge into the worlds of karaoke and multimedia.
GM: No provision for external audio input.
GS: No provision for external audio input. Roland SC-88 provides two channels of audio input, though these are
not under MIDI control.
XG: Provides optional support for one or more external audio inputs, called “A/D channels.” The digital signal
from these channels (derived from the onboard A/D converter) is processed and controlled in the same way
as the tone generator signals being produced by MIDI channels: overall level and pan position can be
controlled in real-time, as well as send levels to any or all internal effects. System exclusive messages are
used to set input gain, MIDI receive channel number, and on-off reception status for incoming volume (cc
#7), pan (cc #10), and expression (cc #11) messages and are also used to select from among various A/D
channel Presets, each of which call up complete settings complementary to the instrument type. For
example, an A/D preset for a mic. input might include reverb and compression effects, whereas one for
guitar might include chorusing, echo and distortion effects. Yamaha MU80 provides 2 MIDI-controlled
A/D channels.

Channel Mode Messages


The MIDI Specification designates control change numbers 120 - 127 for carrying what are
known as Channel Mode messages.
GM: GM-compatible instruments must be capable of receiving the following two Channel Mode messages: Reset
All Controllers (cc #121) and All Notes Off (cc #123). When a Reset All Controllers message is received,
the GM guidelines specify that data on all channels be reset as follows:
Pitch bend is centered
Channel pressure is zeroed
Modulation is zeroed
Expression is set to maximum
(data value of 127)
Sustain is set to Off
(data value of 0)
RPN is set to Null
GS: Minimum requirement: In addition to following the GM guidelines described above, GS-compatible
instruments are also capable of receiving the following Channel Mode messages: All Sounds Off (cc #120)
and Mono/Poly (cc #126, 127) GS instruments normally operate in MIDI Mode 3 but are changed to Mode
4 upon receipt of a Mono On message. Receipt of either Mono On or Poly On cause the same processing
operation as an All Sounds Off message. Because they cannot operate in Modes 1 or 2, receipt of Omni On
or Omni Off causes the same processing operation as an All Notes Off message and Omni remains off.
XG: Minimum requirement: XG compatible instruments respond to Channel Mode messages in the same way as
GS compatible instruments.

System Messages
MIDI System messages include messages that control the entire instrument and messages that
handle data unique to a manufacturer and model. The concept of “channel” does not apply to
System messages—they affect all voices. Many of these messages are used only for
synchronization and the only System messages that are applicable to tone generators are the
broad category of system exclusive messages (which set global functions such as operating
mode and deal with sound and effects
parameters) and the Active Sensing message (which prevents problems that could result from
broken connections). It is worth noting that the category of system exclusive messages
includes some general-purpose messages, known as Universal system exclusive. Universal
system exclusive messages are further divided into real-time and non-real-time messages.
GM: GM-compatible instruments must recognize the following two Universal Non-Real Time system exclusive
messages: Turn GM System On and Turn GM System Off. If a GM-compatible instrument has operational
modes that allow it to function other than as a GM instrument, the reception of the Turn GM System On
message must cause it to switch to GM mode and initialize itself. Even if the instrument functions only in
GM mode, reception of this message must cause reinitialization to the following states:
Program Change 00 (first program)
Modulation Depth 00
Volume 100
Pan 64 (center)
Expression 127 (maximum)
Sustain 00 (off)
RPN Fine Tune 64,00 (0)
RPN Coarse Tune 64,00 (0)
RPN Null
Pitch Bend 64 (center)
Channel Pressure (all channels) 0
The Turn GM System Off message is used to exit GM mode, but will be ignored if the instrument functions
only in GM mode. GM recommends but does not require that Active Sensing be implemented. Nearly all
GM instruments do include this feature.
GS: Minimum requirement: Reception of the Turn GM System On Universal Non-Real-time system exclusive
message, as well as the following additional system exclusive messages: GS Reset (which places the
instrument in GS operational mode), Master Volume (a Universal Real-time message), Receive Channel
(per part), Use For Rhythm Part (which changes a melody part to a rhythm part), and Scale Tuning (which
sets the tuning globally).
Optional support is provided for 16 system exclusive messages (which are recommended but not required)
for the alteration of voice and effects parameters, as well as global messages to set voice and channel
assignments, scale tuning and effects routings. Additional system exclusive messages may be utilized by
individual instruments.
The recognition of Active Sensing is recommended but not required. Roland SC-88 and SC-55 both
respond to Active Sensing.
XG: Minimum requirement: Reception of Active Sensing and the Turn GM System On Universal Non-Real-time
system exclusive message, as well as the following additional system exclusive messages: XG System On
(which places the instrument in XG operational mode, not only setting the instrument to a default state but
also enabling the reception of XG-specific NRPNs), Master Volume (a Universal Real-time message), and
Master Tuning (which provides a convenient way to tune all channels simultaneously).

In addition, XG defines a generic Parameter Change SysEx message, which can be used to alter almost
every XG parameter, including voice and effects data, as well as messages to set the effects routings,
optional A/D input(s), and optional master equalizer. The advantage of using one generic “template” such as
this is that the procedure for setting parameters is basically the same for all XG instruments.
XG also defines Parameter Request and Dump Request SysEx commands (requests for an instrument to
transmit data for one particular parameter or all internal data). Yamaha MU80 and MU50 both respond to
Active Sensing.
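The reset messages referred to in this section are short, fixed byte strings. The sketch below (Python, raw bytes, no MIDI library assumed) shows the commonly published sequences for Turn GM System On, GS Reset, XG System On and the Universal Real-time Master Volume message; treat them as illustrations and confirm the exact bytes against your tone generator's manual.

    GM_SYSTEM_ON = bytes([0xF0, 0x7E, 0x7F, 0x09, 0x01, 0xF7])
    GS_RESET     = bytes([0xF0, 0x41, 0x10, 0x42, 0x12,
                          0x40, 0x00, 0x7F, 0x00, 0x41, 0xF7])
    XG_SYSTEM_ON = bytes([0xF0, 0x43, 0x10, 0x4C, 0x00, 0x00, 0x7E, 0x00, 0xF7])

    def master_volume(value):
        # Universal Real-time Master Volume: a 14-bit value, LSB sent first.
        lsb, msb = value & 0x7F, (value >> 7) & 0x7F
        return bytes([0xF0, 0x7F, 0x7F, 0x04, 0x01, lsb, msb, 0xF7])

    # Full volume: F0 7F 7F 04 01 7F 7F F7
    print(' '.join('%02X' % b for b in master_volume(16383)))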

Appendix 3 - Selecting a PC
Before you decide to purchase any hardware or software, read the rest of the Desktop Music
Handbook, which should give you an idea of what software (Notation, MIDI sequencer,
Integrated MIDI and Audio sequencer or Multi Track Audio editor) you will need for your
music. Of course, you will need computer hardware that is powerful enough to run it all and
some means of getting music in the form of MIDI and/or digital audio into and out of your
PC.
This means that you should consider the following components as the minimum requirements
for your Desktop Music PC:
• A powerful PC (you can rarely have enough CPU cycles, RAM or hard disk space)
• A low noise (<-85dB) sound card, with a MIDI synthesiser and a MIDI interface, and/or MIDI
sampling capabilities,
and
• Notation software
and/or
• MIDI and audio sequencer
and/or
• Multi track audio editor
and
• Creativity and lots of perspiration (remember: any great piece of music contains 1 part
inspiration and 99 parts perspiration!)…
The list mentioned above provides you with the basics. Other items can be added based on
your specific requirements:
• A MIDI keyboard
• MIDI sampling facilities (either as a stand-alone hardware sampler or in the form of a PC sound
card such as the Turtle Beach Pinnacle)
• Effects plug-ins, ranging from the ‘bread and butter’ type (reverb, delay, compression etc.) to the
esoteric (vocoder, various types of filters etc.) to the downright strange (sonic decimators, 3D
expanders, specialised plug-ins etc.)
• Various MIDI tools that aid composition, sequencing and MIDI effects (harmonisers,
arpeggiators, MIDI echoes, analogue-style mini-sequencers etc.)
• Various sound editing tools and specialised editors that offer options that a standard sound editor
does not offer (e.g. sample loop editors, specialised voice processing packages etc.)
• External hardware; digital sound effects, mixers, MIDI keyboards etc.
• A CD-Recorder/ReWritable, a DAT recorder or another mastering medium
Before you start spending, it is a good idea to think about what kind of music you want to
make and which tools you might need. Spend your money wisely; it's frustrating to pay a lot
of money for a great-sounding software plug-in only to find out that your PC is not powerful
enough to run it…

PC Hardware Summary
It may seem strange to put the summary of the do's and don'ts of choosing your Desktop
Music PC at the beginning, but if you just want advice and don't want to go into all of the
technical details, just read this and leave the rest:
• Buy the highest spec PC that you can afford.
• Choose a PC with a genuine Intel Processor
• You’ll need at least 32MB of RAM (preferably 64MB).
• Your hard disk needs to be fast enough to deliver the audio data from the disk to your
system, so a low average access time (<10 ms) and lots of throughput (sustained transfer
rate >5 MB/sec; the higher this number, the more audio tracks) are essential.
• Use a separate hard disk for digital audio if you can.
• Make sure the sound card you choose supports true simultaneous record and playback.
Manufacturers of cards that genuinely do will always say so, as this is a very big selling
point; they will say something like "Enhanced Full Duplex", "Full Duplex at full
bandwidth" or "Simultaneous Record and Playback". If you're not sure, be specific and
ask "does this sound card support simultaneous record and playback at 16-bit
44.1/48 kHz with my software?" Cards from Turtle Beach, Digital Audio Labs, AdB and
Gadget Labs are all Enhanced Full Duplex at 16-bit/44.1kHz or higher.
• Don't use the sound card's internal amplifier if it has one
• Use good quality speakers (at least at the Yamaha YST M20 level)
• Use good quality cables to connect all of your sound equipment
• Make a note of all of your hardware settings (IRQs etc.), including your sound card's, and
keep it somewhere safe.
• Install a Virus Checker
• Install an Uninstaller Program
• If you're going to be making serious music on your PC, avoid using it as a games
machine; games often install all kinds of stuff that can ruin both the performance and the
set-up of your sound card and PC. If you want to play games, use a Playstation or N64 (my
opinion).
• Avoid installing programs, drivers etc. from magazine cover CDs that you don't
know anything about; if you must install something, make sure you know how to uninstall it.
• Use an external modem rather than an internal modem (internal modems are notorious for
the noise they generate and the resources they use)
• Turn off Windows Sounds
• Ensure that the MIDI adapter cable for your sound card is opto-isolated
• Back up your data on a regular basis (use something like a CDR/CDRW)
• Read the manuals.

PC Hardware: The Detail
When using PCs, you will soon discover that you can never have too much processing power,
RAM or hard disk space! In general you need to buy the highest spec PC that you can afford.
The spec of PCs doubles in power approximately every two years, and the latest versions of
software try to take advantage of the latest specs by adding lots of features that take more
power (real-time audio effects are a good example). Keeping this in mind, it's worth noting
the following details.

Choosing the Processor


When buying a Processor, decide on the fastest processor you can afford without sacrificing
RAM or a fast hard disk. Also, keep in mind that while non-Intel CPUs are generally
cheaper, they typically give a poorer performance when recording digital audio, since their
number crunching capabilities are sometimes limited. While they may run faster for general
applications, digital audio requires a lot of computation and they may actually run too slowly
for your needs. Also, some market-leading software packages simply do not run at all on some
non-Intel processors.

How much memory (RAM)?


Having enough RAM is critical for most sequencers these days. While it may say on the box
that the minimum requirement is 16MB of RAM, this does not always include RAM required
for the Operating System. You will need at least 32MB to get a reasonable performance,
especially when you record a lot of digital audio tracks alongside your MIDI tracks. As
always, more is better, and if your motherboard supports DIMM RAM as well as SIMM
RAM, get DIMMs: they’re much faster. If you install more than 32MB of memory, it will be
beneficial to increase the cache memory on the motherboard from the standard 256KB to
512KB. If you want to use more than 64MB it’s wise to put in 1024KB of cache memory.

Choosing a hard disk?


For digital audio recording, having a large, fast hard disk is essential. One minute of stereo
recording in 16-bit at 44.1 kHz uses 10.5MB of disk space. When you are recording multiple
tracks this quickly translates into needing a vast amount of disk space (a 5 minute song with 8
mono tracks of digital audio will require a minimum of 210MB for the song plus at least the
same again for temp work space). Your hard disk needs to be fast enough to deliver the audio
data from the disk to your system, so a low average access time (<10 ms) and lots of
throughput (Sustained transfer rate >5 MB/sec) are essential.
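As a rough check of these figures, here is a minimal sketch of the arithmetic (16-bit samples are two bytes each; "MB" here means one million bytes):

    SAMPLE_RATE = 44100
    BYTES_PER_SAMPLE = 2

    def megabytes(seconds, channels):
        return seconds * SAMPLE_RATE * BYTES_PER_SAMPLE * channels / 1e6

    print(round(megabytes(60, 2), 1))        # one stereo minute: about 10.6 MB
    print(round(megabytes(5 * 60, 1) * 8))   # 8 mono tracks x 5 minutes: about 212 MB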
First you must decide how many tracks of digital audio you require; the larger the number of
tracks, the higher the hard disk performance required.
The hard disk performance is the single most critical factor in determining the number of
digital audio tracks you will get on your Desktop Music PC. Because of this it is important
that you understand what physical disk performance numbers mean.
Sustained transfer rate: This is the most important performance measurement to evaluate
a disk that is going to be used for digital audio (and incidentally digital video). This is the
quantity of information that the disk can read sequentially per time unit, usually expressed as
MB/s (megabytes per second). Sustained means that the disk can deliver this performance ad
infinitum. Unfortunately most manufacturers do not provide this important number. Instead

they prefer to give the more persuasive maximum throughput or burst transfer rate, which is
the maximum amount of data per second that the disk can deliver, even if it cannot sustain
this rate. The max throughput of a disk is relevant for most applications, but not hard disk
recording, because hard disk recorders will exhaust a disk’s burst speed in the first second or
so of recording. The disk must be able to sustain the transfer rate or the recorder will stall.
Therefore it is critical when buying a hard disk for recording that you make sure the seller is
quoting the sustained transfer rate.
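To get a feel for how track count translates into the sustained rate a disk must deliver, here is a small sketch of the arithmetic for 16-bit/44.1kHz audio (playback only; recording adds the same again for each armed track):

    SAMPLE_RATE = 44100
    BYTES_PER_SAMPLE = 2

    def required_mb_per_sec(mono_tracks):
        return mono_tracks * SAMPLE_RATE * BYTES_PER_SAMPLE / 1e6

    for tracks in (8, 16, 32):
        print(tracks, round(required_mb_per_sec(tracks), 2))
    # 8 -> 0.71 MB/s, 16 -> 1.41 MB/s, 32 -> 2.82 MB/s; the >5 MB/s guideline
    # leaves headroom for seeks, effects and simultaneous recording.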
Access times: The “one stroke seek” is the time the head needs to move and settle on an
adjacent track. The “full stroke seek” is the time required to travel from the first to the last
track of the disk. The “average seek” is the time required for a 1/3 full stroke, and is the value
most often supplied by manufacturers. Sometimes “average access time” is specified; this
means the average seek time plus the average sector search time (which depends mainly on
rotational speed).
Internal transfer rate: This specifies the peak burst transfer rate from disk surfaces to
heads. It’s usually expressed as MB/s (megabytes per second). This is not the same as the
sustained transfer rate, although the higher the internal transfer rate the higher the sustained
transfer rate. This internal transfer includes not only the user data, but also the sector headers,
servo info, ECC data, etc. There are gaps between sectors and while the head passes over
them, no useful data is transferred. The internal transfer is also stopped when switching heads
and seeking. This is the reason internal transfer rate is not the same as sustained transfer rate,
but manufacturers usually give this number instead of the more meaningful sustained one.
Channel type: There are mainly two kinds of disk interfaces: IDE and SCSI.
The SCSI interface is better for digital audio, but SCSI disks are much more expensive than
IDE disks. You should not base your selection only on the channel type, but on all the disk
specifications, as some of the fastest IDE disks are faster than non-high-end SCSI disks.
SCSI is only faster if you select the newest and fastest disks available at the time, or if you
need the other advantages of the SCSI interface.
The major differences between IDE and SCSI are:
• A SCSI channel supports up to 8 devices on the narrow version, or 16 on the wide
version. The controller itself counts as a device. The IDE channel supports only 2
devices; most motherboards include 2 IDE channels for a total of 4 devices.
• In a SCSI channel, there can be concurrent operations for several devices. The system can
order a transfer for device 0 and then for device 1; each device requests the bus when they
have data available, sharing the total channel bandwidth. In an IDE channel, an operation
for a device must complete before beginning a new operation for another device in the same
channel, although different IDE channels can work concurrently.
Also there are often some incorrect assumptions about SCSI vs. IDE:
SCSI requires less CPU usage. This is totally wrong. This depends on the controller
technology, mainly the bus-mastering capability or the ability to use DMA channels, not the
channel type.
SCSI is faster. This is misleading: it depends on the disk performance, not the channel type.

Channel transfer rate: This is the maximum transfer rate through the communication
channel between the disk drive and the host adapter. It must be higher than the maximum
sustained transfer speed to avoid becoming the bottleneck.
With IDE (or ATA) disks, the fastest mode currently is DMA2 (33 MB/s Ultra DMA). With
SCSI, there are asynchronous modes (very slow), and synchronous ones. For synchronous
modes, the speeds are: Standard = 5 MB/s, Fast = 10 MB/s, Wide = 20 MB/s, Ultra = 20
MB/s, UltraWide = 40 MB/s. There is also a recently specified mode rated at 80 MB/s.
Any channel speed above the transfer speed of the disk won't improve the transfer rate for
that disk. It only helps to leave the channel unused for longer during the transfer, so it can be
used by other drives in the case of SCSI. It also helps to reduce the PCI bus bandwidth used,
leaving more time free for other peripherals such as the display. This is the reason why users
don’t always get all of the improvements they expect using DMA2 (Ultra DMA) versus
DMA1, or Ultra SCSI versus Ultra Wide SCSI.
Bus mastering: The massive throughput requirement of digital audio recording
uses a lot of processor bandwidth when the processor needs to transfer the data between the
controller and memory. To free up processor time and make it available for other tasks (like
audio mixing and processing), bus mastering allows the transfers to be made by peripherals.
There are two ways to achieve this:
• Bus mastering controllers: These request direct accesses to the bus and do the transfer,
signalling the completion to the CPU through an interrupt.
• In some PCs there is a peripheral called a DMA controller, which is a bus master device
specialising in transferring data between memory and other peripherals. The ISA and PCI buses
include signals to control DMA transfers.
Both methods are valid, but most modern PCI based controllers use bus mastering, the
preferred method for the PCI bus. ISA doesn’t support bus mastering properly, so the only
solution for ISA cards is DMA.
Bus mastering is not the same as the DMA mode of the IDE disks, but it is usually related
because most IDE controllers can only operate in bus master mode when using the IDE
devices in DMA mode.
To get the full advantages of bus mastering, you need the following:
• A bus master capable controller. Applicable to both IDE and SCSI.
• Bus mastering drivers for the controller.
• In case of IDE channel, DMA mode capable devices. Most modern devices support at
least DMA1 (16 MB/s), or better DMA2 (Ultra DMA 33 MB/s).
Install bus master drivers if your system supports them. The CPU usage typically goes down
from above 50% to below 5%, freeing up your PC for audio processing.
A note about bus mastering drivers: most manufacturers (Adaptec, Intel, Asus, etc.) provide
bus master drivers for their bus-master-capable controllers. Those drivers automatically run in
bus master mode, but sometimes, if a device doesn’t work in DMA mode, the device can’t be
used. Win95B (OSR2) automatically installs Microsoft’s bus master drivers; these are
reliable and give good performance, but they don’t work in bus master mode by default. To
enable bus master mode for each device, you must go to Control Panel | System | Devices |

Disks | Configuration, and check the DMA box. This option might not be visible if your
controller does not support bus mastering or if you installed specific drivers. For versions of
Win 95 prior to OSR2, do not install bus master drivers.
Another note: the DMA mode of IDE devices includes error checking in the transfers to
detect data corruption in the bus. Data corruption should never occur, but with some
misbehaving devices or bad quality and/or too long cables this can happen. The Microsoft
drivers do not check the CRC error reported by the controller; however, there is an upgrade
on Microsoft’s website. Users should be sure that the disk subsystem is not causing CRC
errors, because errors cause retries, which degrade the performance substantially. As a
general rule, use the shortest possible high quality cables, and reliable and proven devices
tested in DMA mode.
Disk partitioning:
• For the best performance, use either a dedicated disk for the audio files or the FIRST
partition of your fastest disk. Dedicating a disk or partition for audio files also helps to
achieve less fragmentation of the files.
• Always use FAT file system for the audio partitions. It’s faster than NTFS and others.
You can use FAT32 if you have Windows OSR2. FAT32 allows for very large disk sizes
- FAT16 only allows up to 2GB partitions - so you can create a large audio disk with only
one partition. The performance of FAT32 is about the same as with FAT16; however,
FAT32 will try to create smaller cluster sizes than you want. Disks don’t read one bit at a
time, they read whole clusters. For many applications, smaller clusters mean less wasted
space on your disk - but for audio, smaller clusters mean more discrete read/write
operations (see the sketch after this list). Translation: slowness. Therefore, when formatting a FAT32 device, always
use the format command with the /z:32 or /z:64 (the largest possible for your disk) to
create bigger cluster sizes. It helps to get a bit better performance and less file
fragmentation.
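Here is the sketch referred to above: a rough illustration of why larger clusters mean fewer discrete read operations for a continuous 16-bit stereo stream (the cluster sizes chosen are just examples):

    # One second of 16-bit stereo audio at 44.1kHz is 176,400 bytes.
    BYTES_PER_SECOND = 44100 * 2 * 2

    for cluster_bytes in (4096, 16384, 32768):
        reads_per_second = BYTES_PER_SECOND / cluster_bytes
        print(cluster_bytes, round(reads_per_second, 1))
    # 4KB clusters: ~43 reads per second of audio; 16KB: ~10.8; 32KB: ~5.4.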
Win 95 disk access optimisation: There are some things that can be fine-tuned to increase
performance:
Bus Mastering drivers:
Always install bus-mastering drivers if your system supports them. The CPU usage typically
goes down from above 50 % to below 5 %. Free power for audio processing!
File caching:
The minimum and maximum size of memory used to cache file accesses can be adjusted in
the SYSTEM.INI file, section [VCACHE]:
MinFileCache = n1
MaxFileCache = n2
n1 and n2 are the sizes in KB. By default, these entries do not exist and Windows adjusts the
size automatically. But file caching is not useful with audio files (or any other kind of very
large, streaming files). If running short of physical memory, set the MaxFileCache setting to
limit the maximum quantity of memory dedicated to file caching, resulting in more memory
available for other things. MinFileCache can be left undefined or set to 0.

As a general guide, use the following values:
RAM      MaxFileCache (KB)
16MB     1024-2048
32MB     2048-4096
64MB     4096-8192
64+MB    8192
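For example, on a machine with 32MB of RAM, and following the table above, the [vcache] section of SYSTEM.INI might read:

    [vcache]
    MinFileCache=0
    MaxFileCache=4096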

System type and Read Ahead optimisations:


These parameters can be adjusted in Control Panel | System | Performance | Files.
The type can be set to “server”, “desktop” or “mobile”, desktop being the default. Using
“server”, the system gives higher priority to disk I/O, which usually helps with disk-intensive
applications such as audio. The read ahead optimisation setting is at maximum (64 KB) by
default; the influence of this varies depending on the application, so leave it at maximum.
Some audio programs recommend or automatically set it to the minimum, but this usually
results in very little improvement for those programs, and significant degradation for
others.

Choosing a Sound Card


You will be using the sound card to record both your digital audio and MIDI and to play back
the music you create; you will want to hear it at its best and not as a crackly, tinny rendition
which is barely recognisable. To achieve this you must invest in a good quality sound card
and synthesiser.
Noise
While sound card manufacturers may claim very low noise from a sound card, noise figures quoted
are usually only achieved in ideal conditions and are rarely attained in real life. Because of
interference from other PC components (like the hard disk, PC clock and other cards in the vicinity
of the sound card), what you achieve in your Desktop Music PC set-up will usually be worse than
advertised. Some manufacturers do get much nearer to their stated figure than others; for instance
Turtle Beach, Gadget Labs, AdB and Digital Audio Labs all go to great lengths to minimise any
PC-generated noise and get as close to their stated signal-to-noise ratio as possible.
There are some measures you can take to minimise noise: place the sound card as far away from
other components as possible (especially video cards, internal modems and hard disk controllers) and
use high quality cables. However, whilst the AD/DA conversion takes place inside the PC itself there
is bound to be more interference than if it takes place externally. In general, low noise sound cards
will have AD/DA converters which sample at higher than 16-bit resolution even though the digital
audio files will be 16-bit. For example, the Turtle Beach Montego uses 18-bit converters, the
FIJI/Pinnacle use 20-bit converters and the AdB Multi!WAV uses 24-bit converters. In general, the
higher the number of sample bits, the lower the noise.
If you already own a sound card and you're not satisfied with the noise level, one remedy is to
include some silence in each recording, then use that period of silence as a noise mask when you run
a noise reduction program such as DCArt, DART or Cool Edit Pro. This can improve the recording's
S/N ratio by about 10 dB. However, if most of your sound sources are first-generation digital and you
are using a sound card with an S/PDIF digital interface to get the digital audio into the PC, there is no
problem: all you will be using your sound card's AD/DA converters for is monitoring.

About AD/DA conversion…
Another way to reduce noise is to use external AD/DA converters and a sound card with a digital I/O.
This can greatly reduce noise and improve the overall sound card performance. There are a number
of ways to achieve this. If you have a DAT, Minidisc or DCC recorder with digital I/O, it may be
possible to use its AD/DA converters (usually by putting the device in paused recording mode)
together with a sound card that has a digital I/O, usually S/PDIF (Pinnacle/FIJI with optional digital
I/O, AdB Multi!WAV Digital Pro and the Digital Audio Labs digital-only CardD). You can also use
dedicated external AD/DA converters or sound cards with external AD/DA converters (Wave/8.24
from Gadget Labs or the Wave Center with Tango from Frontier Design), but these tend to be more
expensive.
Enhanced Full duplex
The duplex ability of a sound card is of crucial importance to the musician. If a card is described as
full duplex it means that it can record and play back digital audio at the same time, but probably only
at 8-bit.
Most musicians will require full duplex at 16-bit/44.1kHz or better (simultaneous record and
playback); this allows the recording of a vocal (or guitar etc.) whilst playing back and listening to the
backing tracks.
Some sound cards state that they are full duplex but only at 8-bit; they will therefore sound very
noisy and will not usually meet the requirements of the discerning musician, but will probably be OK
for Internet telephony (the Sound Blaster 16/AWE family of products only support 16-bit record with
8-bit playback, hence sound very noisy on playback).
Sound cards that are truly simultaneous record and playback will always say so as this is a very big
selling point. They will say something like "Enhanced Full Duplex", or "Full Duplex at full
bandwidth" or "Simultaneous Record and Playback". If you're not sure ask, "does this sound
card support simultaneous record and playback at 16-bit 44.1/48 kHz with my software"?
Cards from Turtle Beach, Digital Audio Labs, AdB and Gadget Labs are all Enhanced Full Duplex at
16-bit/44.1kHz or higher.
Synthesiser
The quality of the synthesiser you use, whether it is built into your sound card, an add-on to your
sound card or an external unit, determines the quality of your MIDI playback. Whatever synthesiser
you choose, ensure at the very minimum that it is a wavetable synthesiser with at least a 2MB sample
set (preferably more). In general, the larger the wavetable sample set, the better the quality of the
instrument sound, although this can be very subjective and some manufacturers do a much better job
on certain instruments than others.
So what should you get?
It pays to think about what you want and what you need before shopping around. How many
inputs do you need, and what type (jacks, mini-jacks, SP/DIF, ADAT etc.)? How many
outputs? Do you need to feed the output back into a mixing desk, necessitating high-quality
AD/DA converters? Do you want games compatibility (for pro-audio, you don't!)? Do you
want effect processors on the card? How many tracks of audio do you need to record at once?
How many for playback? Do you want a card which takes the burden of shuffling audio data
to and from the hard disk, or can the PC's CPU handle it with power to spare? With all of
these questions answered you can start to look for a card.

Get a CD-Recorder or CD ReWritable
One of the best things that has happened to Desktop Music is the reduction in price of CD-Recorder
and CD ReWritable products and the corresponding drop in price of the media they use. For the
serious Desktop Music PC set-up, one of these devices is a must. It will not only allow you to use
CDs in your PC like any CD-ROM drive, but also provide an ideal backup device for keeping all of
those important music projects safe and sound, and it will also allow you to create your own audio
CDs for demo purposes.

Appendix 4 More on Digital Audio

Converting Sound into Numbers


In a digital recording system, sound is represented as a series of numbers, with each number
representing the voltage, or amplitude, of a sound wave at a particular moment in time. The
numbers are generated by an analogue-to-digital converter, or ADC, which converts the signal
from an analogue audio source (such as a guitar or a microphone) connected to its input into
numbers. The ADC reads the input signal several thousand times a second, and outputs a
number based on the input that is read. This number is called a sample. The number of samples
taken per second is called the sample rate.
On playback, the process happens in reverse: The series of numbers is played back through a
digital-to-analogue converter, or DAC, which converts the numbers back into an analogue
signal. This signal can then be sent to an amplifier and speakers for listening.
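As a minimal sketch of this process (pure Python, with an imaginary 440Hz test tone standing in for the analogue input), an ADC can be modelled as reading a continuous signal at the sample rate and rounding each reading to the nearest 16-bit value; the DAC step simply maps the integers back to voltages.

    import math

    SAMPLE_RATE = 44100
    FULL_SCALE = 32767            # largest positive 16-bit sample value

    def adc(frequency, seconds):
        samples = []
        for n in range(int(seconds * SAMPLE_RATE)):
            t = n / SAMPLE_RATE                       # time of this reading
            voltage = math.sin(2 * math.pi * frequency * t)
            samples.append(int(round(voltage * FULL_SCALE)))
        return samples

    print(adc(440.0, 0.001)[:8])   # the first few samples of an A440 tone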
In computers, binary numbers are used to store the values that make up the samples. Only two
characters, 1 and 0, are used. The value of a character depends on its place in the number, just
as in the familiar decimal system. Here are a few binary/decimal equivalents:
BINARY DECIMAL
000000000000000000000000 0
000000000000000000000001 1
000000000000000000000010 2
000000000000000000000100 4
000000000000000000001000 8
000000000000000000010000 16
000000000000000000100000 32
000000001111111111111111 65,535
111111111111111111111111 16,777,215
Figure 1. Binary numbers and their decimal equivalents
Each digit in the number is called a bit, so the numbers in Figure 1 are 24-bits long, and the
maximum value which can be represented is 16,777,215.

Sample Size
The more bits that are used to store the sampled value, the more closely it will represent the
source signal. In an 8-bit system, there are 256 possible combinations of zeroes and ones, so
256 different analogue voltages can be represented. A 16-bit system provides 65,536 possible
combinations, and a 24-bit system provides 16,777,216. A 24-bit signal is therefore capable of
providing far greater accuracy than a 16-bit signal, and a 16-bit signal is capable of providing
far greater accuracy than an 8-bit signal.

Figure 2 shows how this works. The more bits that are available, the more accurate the
representation of the analogue signal and the greater the dynamic range.
For example, the Pinnacle and Fiji sound cards' analogue inputs use 20-bit ADCs, which means that
the incoming signal can be represented by any of 1,048,576 possible values. The output DACs
are also 20-bit; again, 1,048,576 values are possible. The S/PDIF inputs and outputs support
signals with up to 24-bit resolution (16,777,216 possible values).
The number of bits available also determines the potential dynamic range. Moving a binary
number one space to the left multiplies the value by two (just as moving a decimal number one
space to the left multiplies the value by ten), so each additional bit doubles the maximum value
that may be represented. Each available bit provides 6dB of dynamic range. For example, a 16-
bit system can theoretically provide 96dB of dynamic range, a 20-bit system can theoretically
provide 120dB of dynamic range and a 24-bit system can theoretically provide 144dB of
dynamic range.
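A small sketch of this arithmetic, showing how the number of possible values and the theoretical dynamic range grow with the number of bits:

    import math

    for bits in (8, 16, 20, 24):
        levels = 2 ** bits
        dynamic_range_db = 20 * math.log10(levels)
        print(bits, levels, round(dynamic_range_db, 1))
    # 8 -> 256 values, 48.2dB; 16 -> 65,536, 96.3dB; 20 -> 1,048,576, 120.4dB;
    # 24 -> 16,777,216, 144.5dB.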
Sample Rate
The rate at which the numbers are generated by the ADC is equally important in determining
the quality of a digital recording. To get a high level of accuracy when sampling, the sample
rate must be greater than twice the frequency being sampled. The mathematical statement of
this is called the Nyquist Theorem. When dealing with full-bandwidth sound (20Hz−20kHz),
you should sample at greater than 40,000 times per second (twice 20kHz). Most modern sound
cards allow you to sample at rates up to 48,000 times per second.
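A minimal sketch of the Nyquist arithmetic, including what happens to a tone that lies above half the sample rate (it folds back, or aliases, to a lower frequency):

    def minimum_sample_rate(highest_frequency):
        return 2 * highest_frequency          # must be exceeded, not merely met

    def alias_frequency(tone, sample_rate):
        # valid for tones between half the sample rate and the sample rate
        return sample_rate - tone

    print(minimum_sample_rate(20000))         # 40000 - hence 44.1/48kHz cards
    print(alias_frequency(30000, 44100))      # a 30kHz tone reappears at 14100Hz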

If the sampling rate is less than twice the frequency you are trying to record, entire cycles of the
waveform will be missed, and the result will not resemble the proper waveform. When the
sample rate is too low, the resulting sound has diminished high frequency content.

Figure 3. Increased sample rates yield a more accurate reproduction of the source signal.
By the way, the circuits that generate the sample rate clock must be exceedingly accurate. Any
difference between the sample rate used for recording and the rate used at playback will change
the pitch of the recording, just as with an analogue tape playing at the wrong speed. Also, any
unsteadiness, or jitter, in the sample clock will distort the signal as it is being converted from or
to analogue form.
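As a small illustration of why the clocks matter, here is the arithmetic for the pitch shift caused by playing material back at a different rate from the one it was recorded at:

    import math

    def pitch_shift_semitones(record_rate, playback_rate):
        return 12 * math.log2(playback_rate / record_rate)

    print(round(pitch_shift_semitones(44100, 48000), 2))   # about +1.47 semitones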

Appendix 5 - Troubleshooting your Desktop Music PC
Here are answers to some of the most common questions about music software and hardware on the
PC. Checking this list may save you a lot of time and a phone call to technical support!

Question: Why can’t I hear anything on my synthesiser or keyboard (no MIDI output)?
Answer: If you aren’t getting any MIDI output, please run through this checklist:
1. Make sure you’ve connected your MIDI Adapter cable to your synthesiser or keyboard
correctly. MIDI In to MIDI Out and MIDI Out to MIDI In.
2. Make sure your synthesiser or keyboard is connected to an amplifier and speakers or
amplified speakers
3. Make sure you install a MIDI driver for your Sound Card or MIDI interface using Windows
Control Panel. Make sure you specify the correct configuration information—like IRQ and
base port address—in the driver’s setup dialog box.
4. Now the driver is available for Windows programs to use. Next, you need to tell your Music
Software to use it. Here are examples for the main sequencers
Cakewalk: Choose the MIDI Devices command on the Settings menu and make sure the
device (i.e. SB16 MIDI Out or similar for Sound Blaster 16, AWE and TBS Pro External
MIDI Out 1 for Fiji or Pinnacle) is selected (highlighted) in the MIDI Out list. Click on the
device name to be sure it is selected; if the name only has a dotted box around it but isn’t
drawn in reverse video, then it is not selected!

Cubase & VST: Open Setup MME in Cubase Program Group. In the section MME
Outputs, make sure the device (i.e. SB16 MIDI Out or similar for Sound Blaster 16, AWE
and TBS Pro External MIDI Out 1 for Fiji or Pinnacle) is active; if not, select the device
(highlight it) and then click Set active.

Logic: Logic sets all MIDI devices active by default.

Question: Why can’t I record anything from my keyboard (no MIDI input)?
Answer: If you aren’t getting any MIDI input, please run through this checklist:
1. Make sure you’ve connected your MIDI Adapter cable to your keyboard correctly. MIDI In to
MIDI Out and MIDI Out to MIDI In.
2. Make sure you install a MIDI driver for your Sound Card or MIDI interface using Windows
Control Panel. Make sure you specify the correct configuration information—like IRQ and
base port address—in the driver’s setup dialog box.
3. Now the driver is available for Windows programs to use. Next, you need to tell your Music
Software to use it. Here are examples for the main sequencers

Cakewalk: Choose the MIDI Devices command on the Settings menu and make sure the
device (i.e. SB16 MIDI In or similar for Sound Blaster 16, AWE and TBS Pro External
MIDI In for Fiji or Pinnacle) is selected (highlighted) in the MIDI In list. Click on the device
name to be sure it is selected; if the name only has a dotted box around it but isn’t drawn in
reverse video, then it is not selected!

Cubase & VST: Open Setup MME in Cubase Program Group. In the section MME
Inputs, make sure the device (i.e. SB16 MIDI In or similar for Sound Blaster 16, AWE and
TBS Pro External MIDI In for Fiji or Pinnacle) is active; if not, select the device (highlight it)
and then click Set active.

Logic: Logic sets all MIDI devices active by default.

Question: Why can't I hear anything playing or my playback is very poor whilst I'm recording into my
Music or Sound Software?
Answer: This can be one of several things.
1. Your software does not support simultaneous record and playback, so if you require this ability
you need to change your software (all Cakewalk products, Digital Orchestrator, Digital
Orchestrator Pro, Cubase and Cool Edit Pro support simultaneous record and playback).
2. Your sound card does not support simultaneous record and playback (check the manual), so if you
require this ability you need to change your sound card. Unfortunately there is a lot of confusion
about this feature, as there are a lot of cards claiming full duplex and passing this off as the same
as simultaneous record and playback; this is rarely the case. Full duplex is a term borrowed from
the world of communications, not music, and refers to the sound card's ability to work with
Internet telephony.
3. Your sound card is full duplex but only at 8-bit, so the playback whilst recording is at 8-bit and
hence sounds noisy and generally poor (the Sound Blaster 16/AWE family of products only
support 16-bit record with 8-bit playback, and hence have the 8-bit playback problem if the
software supports this mode), so if you require this ability you need to change your sound card.
Sound cards that are truly simultaneous record and playback will always say so, as this is a very big
selling point; they will say something like "Enhanced Full Duplex", "Full Duplex at full
bandwidth" or "Simultaneous Record and Playback". If you're not sure, ask directly "does this
sound card support simultaneous record and playback at 16-bit 44.1/48 kHz with my
software"?
Some examples of cards which are "Full Duplex at full bandwidth"; Turtle Beach Montego,
Malibu, FIJI, Pinnacle, AdB Multi!WAV, Gadget Labs Wave/4 & MaxiSound Home Studio Pro.

Question: Why won’t my Music or Sound Software install from my CD drive, or why am I getting a “Please
Insert Original Disk” error message?
Answer: If you have a CD-ROM drive which does not have a driver for Windows 95, it will significantly
degrade Windows 95 real-time performance and it can give you an error when trying to install and
run some true Windows 95/NT 32-bit software and some copy-protected software.

This may happen if there’s a real mode CD-ROM driver being installed in Autoexec.bat or
Config.sys. You can check for this problem by going to Control Panel | System | Performance and
checking File System and Virtual Memory. Both need to read “32 Bit” and not “Compatibility
Mode”.
Users need to reconfigure their system, eliminating the real mode drivers, or replace the real mode
driver with a true 32-bit version. In that case the only solution is to get a Windows 95 driver for
the CD drive from wherever you bought the drive or from the World Wide Web, or to get a new CD
drive that comes with a Windows 95 driver.

Note: “Real-mode drivers” is the term used to describe older, Windows 3.1-style CD-ROM
drivers. The newer drivers are usually called 32-bit Windows 95 drivers. “Compatibility Mode” may
be in use for any of the following reasons:
• An “unsafe” device driver, memory-resident program, or virus hooked the INT21h or INT13h
chain before Windows 95 loaded.
• The hard disk controller in your computer was not detected by Windows 95.
• The hard disk controller was removed from the current configuration in Device Manager.
• There is a resource conflict between the hard disk controller and another hardware device.
• The Windows 95 protected-mode driver is missing or damaged.
• The Windows 95 32-bit protected-mode disk drivers detected an unsupportable configuration or
incompatible hardware.

Question: Why do I get a “General Protection Fault”, or why does my printout come out wrong, when I try
to print a score from Sequencers or Notation software?
Answer: Contact the manufacturer of your printer and make sure you have the most recent version of their
driver for Windows. If not, obtain and install it.

Explanation: When most Sequencers or Notation software print, they use the printer driver that
you’ve installed in Windows. Many printer drivers have problems (“bugs”) which appear only
when certain applications use them, even when the application is using them correctly.
Unfortunately, Sequencers or Notation software’s intensive use of TrueType fonts may flush out a
problem with a printer driver which is not apparent when you’re using other programs. The printer
driver may crash; you can tell this because the error message identifies the printer driver as the
program that crashed, not your Sequencers or Notation software. The only solution, unfortunately, is
to obtain a fixed version of the printer driver. The good news is that many printer manufacturers
update their drivers frequently, and newer versions of many will work fine with Sequencers or
Notation software.

Hint: Most of the drivers that ship as standard with Windows work fine, so try using one of these;
for example, if you have a DeskJet 660C, use the standard DeskJet driver that comes with Windows. You
can still leave the newer driver installed for printing colour from other applications.

Question: I’m experiencing erratic hanging, stuttering, General Protection Faults and generally
unexplainable problems using my Cakewalk, Cubase or other Music or Sound
Software such as Cool Edit Pro, SAW, Personal Composer, Finale etc.
Answer: Frequently the problem isn’t actually with the software itself, but a hardware conflict within your
system. Almost everyone who has ever owned a PC has had to deal with these problems and they
can be difficult to resolve. But if you want things to work, you have to bite the bullet and fix them.

The Most Common Type of Conflicts:


There are three very common types of conflicts that can affect Music or Sound Software.

1. IRQ. (Interrupt Request)

2. Port Address. (Input/Output or I/O ports)

3. DMA. (Direct Memory Access)

You may be asking yourself, “What the heck is an IRQ, Port Address, or DMA?” Good question.
Basically, IRQs, Port Addresses, and DMAs are settings for devices connected to your computer.
These settings—if correct—enable the devices to work with your computer and avoid interfering
with each other.

Here’s a simple analogy that might help you understand the nature of a conflict. Think of an IRQ as a
street address, and think of your computer as Forrest Gump the mailman. Then, imagine that both
you and your next door neighbour share the same address: 1 Strawberry Lane. What’s going to
happen when good ol’ Forrest tries to deliver mail to 1 Strawberry Lane? He’s going to see the same
address on both mailboxes at which point his brain will start churning and churning so much that he
won’t know what to do—he’ll lock up—he’ll freeze—he’ll stand there unable to deliver the mail—
he’ll crash—he’ll “General Protection Fault.”

This is essentially what can happen when you have two pieces of hardware set to the same IRQ, Port
Address, or DMA. So the point is,

No two devices can share the same IRQ, Port Address, or DMA.

Solving Conflicts: The Safe Way
The Safe way is the method we recommend, and it requires that you first find out all the IRQ, Port
Address, and DMA settings for your hardware. This is a one-time procedure that you should
probably do anyway to prevent future conflicts. The more devices you have in your computer, the
more time consuming it becomes.

Note: There are several utilities that claim to detect conflicts—these are not wholly reliable. Also,
just because Windows 95 doesn’t report that there is a conflict, it doesn’t mean there isn’t one.

In the Control Panel. In Windows 95, choose Start | Control Panel | System | Device Manager |
Computer | View Resources. This will show the IRQ, Port Address, and DMA setting for many
devices.

Note: Windows 95 is not 100% reliable in detecting conflicts.

Special software that comes with the device. There are some devices that require special software to
manage IRQs and so forth. This might be called “Configuration” software for the device, or
“Settings/Setup” software.

Where to change your settings.

There are a few ways to change your settings, but the method is not the same for every device.

On the device itself. You physically move jumpers/dip switches.

In the Settings for the driver. If you’re using multimedia devices in Windows 95, choose Start |
Control Panel | Multimedia | Advanced. Once there, you can look in Audio Devices or MIDI
Devices and Instruments. To get to the driver’s settings, you select the driver, click on Properties,
and then choose Settings. In Windows 3.1 or 3.11 you go to Control Panel | Drivers, select the driver,
and click on Setup.

Note 1: Some devices only require that you change the settings for the driver, while other devices
require that you change the settings for the driver and on the device itself.

Note 2: Even though your card might be “Plug and Play,” there could still be a conflict as “Plug and
Play” is not 100% reliable.

Special software that comes with the device. There are some devices that require special software to
manage IRQs and so forth. This might be called “Configuration” software for the device, or
“Settings/Setup” software.

Some of the above.

All of the above.

The only way to know for sure how to change the settings is to consult your manual or the device
manufacturer.

Getting to the root of the problem.


A good way to find out the source of a conflict is to actually remove devices from the computer.
Here’s an example:

Pretend the devices in your computer are a MIDI interface and two soundcards: Let’s call them the
Cool soundcard and the Game soundcard. Say you suspect the Game soundcard is the one that is
causing the trouble. The best thing to do is to make your system as simple as possible and remove
both the Game soundcard and the MIDI interface.

Note: As well as physically removing the device, you must also remove the device’s driver!

The goal here is to see if the Cool soundcard will work by itself. If it does work by itself, then you
know that the conflict happens when you introduce the other cards into the system. Next, one at a
time, you add back the other cards. Try your system a few different ways:

Try the Cool soundcard and the Game soundcard. Does it work? If so, then you know there is not a
conflict between the two of them. If not, then you know they are conflicting and you will have to
adjust their settings.

Try the Cool soundcard and the MIDI interface. Does it work? If so, then you know there is not a
conflict between the two of them. If not, then you know they are conflicting and you will have to
adjust their settings.

You might even need to try the Game soundcard and the MIDI interface without the Cool soundcard
in your quest to find the conflicting devices.

So, the moral of the story is detecting conflicts is best done through a process of elimination. Divide
and conquer.

The Risky Way—Is There An Easy Way Out?


You now know the Safe way. The Safe way is the most thorough way to resolve conflicts and to
prevent future ones, but if you’re the type of person that likes to find an easy way out, you can take
your chances and try the Risky way.

The Risky way is very simple. Let’s use our previous example of the two soundcards and the MIDI
interface. If you thought that the Game soundcard was the problem, instead of removing all the cards
or taking an inventory of all the IRQs, Port Addresses, and DMAs in your system, you could simply
change some settings.

Let’s say you weren’t able to record MIDI using the Game soundcard, and since not being able to
record MIDI is indicative of an IRQ conflict, you could change the Game soundcard’s IRQ setting to
one that you thought was free. If you guess correctly and change it to an IRQ that is free then you’re
successful at the Risky way: You have nothing more to do. BUT...if you fail you can open yourself
up to potentially more nightmarish problems—like your computer not booting into Windows.
