
Phase: Timing Difference or Polarity?

By Randy Coppinger on 4/26/2012


What Do We Mean?
One of the most confusing topics in audio is Phase. I think part of what
makes it confusing is that people use it in reference to more than one issue.
What always seems to make things clear for me: figure out precisely which
aspect is the concern, then focus on the part that matters. It is such an issue
that I avoid using the word Phase to prevent confusion.

There are really two things people mean when they say Phase: Polarity or
Timing Difference. Hopefully differentiating these can help you decide if you
should push that Invert button or move a microphone when things sound
weird.

Polarity
You'll see a button on some mic preamps and other audio gear labeled
Phase, Phase Reverse, Phase Invert, etc. This is really Polarity.

Engineers will talk about XLR connections being pin 2 hot versus pin 3 hot.
This is also Polarity.

If you have two mic elements as close together as possible and combining
them drops the overall level significantly, someone may describe that as
having one mic out of Phase with the other, but really this is Polarity.

Sound is vibrating air. Molecules are pushed then pulled, then pushed, then
pulled and so on. Those air vibrations are the force that moves a
microphone element. When the air pushes against the element, there
should be a rise above zero in the waveform. When air pulls the element
there should be a drop below zero. This pushing or pulling is Polarity. If you
reverse the Polarity, a push against the mic element will cause a drop below
zero instead of a rise above. It will also cause a rise above zero for a pull of
the mic element instead of a drop below. Now some argue that you can hear
a difference on a single channel between Push = Rise vs. Push = Drop. I'm
not going to argue that one way or the other. The point is: nothing moved
earlier or later in time. We simply inverted the push/pull relationship
between air pressure and its electrical/sampled representation.

Let's examine an ideal case where Polarity is the issue: combine two identical
signals (in volume, spectrum and time) and the volume will double. If you
reverse the Polarity of only one of those signals and combine them, they will
perfectly cancel.
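
If it helps to see that in numbers, here is a minimal sketch in Python/NumPy (my own illustration, not from the original article) of the ideal case: summing two identical signals doubles the amplitude, while inverting the polarity of one copy before summing cancels completely.

```python
import numpy as np

sr = 48000                                  # sample rate in Hz
t = np.arange(sr) / sr                      # one second of time
sig = 0.5 * np.sin(2 * np.pi * 1000 * t)    # a 1 kHz tone standing in for any signal

same = sig + sig                  # identical copies combine: amplitude doubles (+6 dB)
inverted = sig + (-1.0 * sig)     # one copy polarity-inverted: perfect cancellation

print(np.max(np.abs(same)) / np.max(np.abs(sig)))   # -> 2.0 (about 6 dB louder)
print(np.max(np.abs(inverted)))                     # -> 0.0 (silence)
```

Nothing in that sketch moves earlier or later in time; only the sign of one copy changes.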

Timing Difference
People will warn you that when you use more than one mic on a source, you
should be careful not to create Phase problems. When the sound of the snare
arrives sooner at the snare spot mic than at the overheads, this is more
accurately described as a Timing Difference.

Digital signal processing can cause latency, which is sometimes described as
sounding phasey. You know: Latency, as in late, as in Timing Difference.
A Timing Difference may cause Comb Filtering, or it may not. So even if you've
figured out that you're dealing with a Timing Difference rather than Polarity,
you still need to investigate Comb Filtering. Most experienced sound
professionals know the sound of obvious Comb Filtering when they hear it.
Other times it may be subtle, making your audio sound less than desirable but
not necessarily phasey.
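
As a rough illustration of how a Timing Difference turns into Comb Filtering when signals are combined, here is a small Python/NumPy sketch of my own (not from the article): summing a broadband signal with a 1 ms delayed copy of itself puts notches at evenly spaced frequencies, which is where the "comb" gets its name.

```python
import numpy as np

sr = 48000                          # sample rate in Hz
delay_ms = 1.0                      # timing difference between the two "mics"
d = int(sr * delay_ms / 1000)       # delay in samples (48)

noise = np.random.randn(sr)         # broadband signal (white noise)
delayed = np.concatenate([np.zeros(d), noise[:-d]])
mix = noise + delayed               # combining the direct and delayed copies

# Notches fall at odd multiples of 1/(2 * delay): 500 Hz, 1500 Hz, 2500 Hz, ...
tau = delay_ms / 1000
notches = [(2 * k + 1) / (2 * tau) for k in range(5)]
print(notches)                      # [500.0, 1500.0, 2500.0, 3500.0, 4500.0]
```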

Conditions that Create Comb Filtering


When it comes to recording music with microphones, Comb Filtering occurs
under some pretty specific circumstances. If we want to reduce the
possibility of Comb Filtering, we need to know what causes it and what
doesn't, so that we can accurately focus on prevention.

Broadband
In my experience, most musical instruments are broadband. By contrast,
when you use a tone generator to create a 1 kHz sine wave, that is a single
frequency, not a broadband musical signal. Phase is the issue for a specific
frequency, like a 1 kHz tone. But Comb Filters occur for broadband signals.
Anything more complicated than a sine wave will be less an issue of phase
and more an issue of Comb Filtering, especially instruments that are highly
atonal (like a snare drum) or that vary in pitch over time (like the way a
plucked string changes pitch).

Duration
Comb Filtering is more noticeable when the broadband signal has significant
duration. White or pink noise are extreme examples of broadband duration
(they are steady state, continuing indefinitely) and will be the most likely to
produce audible Comb Filtering. The more percussive (shorter) something is,
the less likely you are to notice Comb Filtering. In extreme cases a transient
may be so short that it has decayed beyond audibility by the time the delayed
version ever begins.

Volume Difference
If a snare drum is picked up by microphone A and also by microphone B, and
there are a few milliseconds of delay between them, they will tend to Comb
Filter if you combine them. But if the volume of the snare differs between the
mics by 9 dB or more, the Comb Filtering will not tend to be audible. There
may be other sounds common to mics A and B, but what limits the audibility
of Comb Filtering for the snare is the volume difference of the snare in those
mics. If you can achieve snare separation of 9 dB or more, you can effectively
eliminate Comb Filtering for the snare.
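
As a back-of-the-envelope check on why the level difference matters (a sketch of my own, not from the article), the depth of the peaks and notches depends on how closely the two levels match; with the delayed copy 9 dB down, the worst dip is only a few dB.

```python
import math

def comb_ripple_db(level_diff_db):
    """Peak boost and notch cut (in dB) when a signal is summed with a
    delayed copy that is level_diff_db quieter."""
    g = 10 ** (-level_diff_db / 20)      # linear gain of the quieter, delayed copy
    peak = 20 * math.log10(1 + g)        # frequencies where the copies reinforce
    notch = 20 * math.log10(1 - g)       # frequencies where they cancel
    return peak, notch

print(comb_ripple_db(1))   # nearly equal levels: roughly +5.5 dB peaks, -19 dB notches
print(comb_ripple_db(9))   # 9 dB apart: roughly +2.6 dB peaks, -3.8 dB notches
```

The comb is still there at 9 dB of separation, but its ripple is small enough that the louder signal tends to mask it.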

Delay
Another limit is the amount of delay. Most Comb Filtering occurs when the
delay is under about 30 milliseconds (ms), depending on the three previous
factors. Once you get beyond 30 ms, the delays begin to be heard as
discrete echoes. Since sound takes roughly 1 ms to travel 1 foot, Comb Filters
will not tend to be an issue with mics that are 30 feet or more apart. Beyond
30 feet you might get some really wacky echoes instead, though.
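
To tie distance and delay together, here is the arithmetic behind that rule of thumb (my own sketch, with hypothetical numbers): estimate the timing difference from the path-length difference between the mics, then find where the lowest notch would fall.

```python
def delay_ms_from_feet(path_difference_ft, speed_ft_per_ms=1.13):
    """Approximate delay between two mics given the extra distance the sound
    travels to the farther one (sound covers roughly 1.13 ft per ms)."""
    return path_difference_ft / speed_ft_per_ms

def first_notch_hz(delay_ms):
    """Lowest comb-filter notch for a given delay: 1 / (2 * delay)."""
    return 1000 / (2 * delay_ms)

d = delay_ms_from_feet(3)          # e.g. overheads 3 ft farther from the snare than the spot mic
print(round(d, 2))                 # ~2.65 ms of timing difference
print(round(first_notch_hz(d)))    # lowest notch near 188 Hz, repeating upward from there
```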

Interaction
Directional microphones can help achieve a volume difference of 9 dB or
more. For example, a cardioid microphone is theoretically 6 dB lower at 90
degrees off axis. But not all mikes live up to the theory. The off-axis response
of a microphone is worth your attention when you want to prevent Comb
Filtering.
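
For reference, here is where that theoretical figure comes from (a sketch of the textbook cardioid pattern, not a measurement of any real mic): the ideal cardioid sensitivity is 0.5 * (1 + cos(angle)), which works out to 6 dB down at 90 degrees.

```python
import math

def cardioid_db(angle_deg):
    """Ideal cardioid sensitivity relative to on-axis, in dB."""
    g = 0.5 * (1 + math.cos(math.radians(angle_deg)))
    return 20 * math.log10(g) if g > 0 else float("-inf")

for a in (0, 90, 135, 180):
    print(a, round(cardioid_db(a), 1))   # 0 -> 0.0, 90 -> -6.0, 135 -> -16.7, 180 -> -inf
```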

If you had two mics relatively close to each other and you simply mixed one
9 dB or more lower than the other, that could also eliminate audible Comb
Filtering. Of course the opposite is possible: you didn't hear Comb Filters
during tracking, but once you mixed things and brought spaced mics within
9 dB of each other, it became audible. Add dynamic volume changes
(compression, volume automation) and you can get Comb Filtering sometimes
but not others. It can be a tricky thing to manage.

Now if your microphone channels never combine, then you won't ever get
Comb Filtering from those two different signals. Someone may hard pan mic
A left and mic B right thinking: they don't combine (different channels), so
they won't Comb Filter. But because there are so many different ways that
stereo can collapse to mono, it is a good idea to consider what your stereo
sounds like in mono, especially if timing differences could Comb Filter and
make your mix sound awful. This is why a spaced pair of drum overheads
should be checked in mono to make sure the timing differences don't cause
audible Comb Filtering when combined. If they do, I believe the most
appropriate solution is to reposition the mikes (distance and/or angle) for
greater separation of 9 dB or more. Or change the mikes (different kind
and/or different pattern) to increase separation. Or position the mikes
coincident to remove all timing difference.
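
One way to put that mono check into practice is to fold the pair down and compare it against either channel alone. Here is a minimal sketch in Python/NumPy (my own illustration, assuming two overhead tracks already loaded as arrays); deep, evenly spaced dips in the mono spectrum relative to the individual channels are the fingerprint of Comb Filtering.

```python
import numpy as np

def mono_fold_check(left, right):
    """Compare the spectrum of the mono fold-down against the louder of the
    two channels and return the worst dip (in dB) introduced by summing."""
    spec = lambda x: np.abs(np.fft.rfft(x * np.hanning(len(x)))) + 1e-12
    mono = 0.5 * (left + right)
    ref = np.maximum(spec(left), spec(right))
    dips_db = 20 * np.log10(spec(mono) / ref)
    return dips_db.min()      # strongly negative values hint at audible comb filtering

# Hypothetical example: the same "snare" arriving 1 ms (48 samples) later in the right overhead
sr = 48000
snare = np.random.randn(sr)
left = snare
right = np.concatenate([np.zeros(48), snare[:-48]])
print(round(mono_fold_check(left, right), 1))    # prints a large negative number
```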

Further Reading
See also "About Comb Filtering, Phase Shift and Polarity Reversal" from
Moulton Laboratories.

Most of what I know about Comb Filters I learned from F. Alton Everest's The
Master Handbook of Acoustics (Amazon). For the reader who wants to dig
further into Comb Filtering, both in theory and practice, Everest's writing is
the best I've found.

For an in-depth look at phase, check out this video tutorial from Eric Tarr.

RANDY COPPINGER

Randy Coppinger lives and works in Southern California. He likes to record with
microphones. He likes to drink coffee. On a good day, he gets to do both.

5 Comments

Barak Shpiez 2 years ago


Very good article. I would be more specific in the paragraph that begins with "Sound is
vibrating air." You go on to describe molecules being pushed and pulled, but it would be
more accurate to describe compressions and rarefactions of the pressure level in the
transmission medium (usually air). You make mention of this in the last sentence of that
paragraph, but it may help readers better understand how sound permeates acoustically.

I would also argue that for the problem of comb filtering while the term "timing difference"
is absolutely accurate, describing the same effect as "phase problems" isn't incorrect
either. When two different broadband signals are combined with a particular time
difference, the frequencies in signal A will be at a different part of their respective cycles
than the frequencies in signal B. There will appear to be a phase shift in certain
frequencies from signal A to signal B. The complexity arises when you consider that
different frequencies will have different apparent phase shifts, corresponding to the one
particular time shift, between the two summed signals. So while comb filtering is indeed
caused by a time delay, this can also be thought of as frequency-dependent phase shifts.
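
(A small sketch, not part of the original comment, just to put numbers on this point: a single fixed time shift maps to a different phase shift at every frequency.)

```python
# One fixed time shift produces a different apparent phase shift at each frequency.
tau = 0.001                            # a 1 ms timing difference between two signals
for f in (100, 500, 1000, 2000):
    phase_deg = 360 * f * tau          # apparent phase shift in degrees at frequency f
    print(f, round(phase_deg, 1))      # 100 Hz -> 36.0, 500 Hz -> 180.0, 1 kHz -> 360.0, 2 kHz -> 720.0
```

At 500 Hz that 1 ms shift is exactly half a cycle (180 degrees), which lines up with the lowest notch of the resulting comb.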

Randy Coppinger Barak Shpiez 2 years ago


Good points Barak. I'm not especially interested in sound as it travels through
water, or anything other than air for this article. I agree that compression and
rarefaction are important. I chose not to linger on propagation but your distinction
is worth noting.

I am not taking a position that the word "Phase" is incorrect. I am suggesting that
people often interpret that word as something it isn't, or as something mystical, or
as an advanced topic. And people confuse Phase with Polarity. Delay is a pretty
basic idea. People are not afraid of it nor intimidated by it. I prefer to describe
comb filtering in terms of timing because I think it simplifies and clarifies.

To get more technical (which I tried to avoid), a phase relationship rotates --
meaning it gets further and further out of phase then comes back in phase when
the delay value reaches a complete waveform. But you can't delay back into
phase with white noise. If we think of recorded signals on a continuum between a
pure sine wave and completely random white noise, most things we record tend
toward the random; most musical signals have significant variation every time the
waveform crosses zero. That means further rotating a complex signal perfectly
back into phase is highly unlikely. And from a pragmatic perspective pretty foolish.


Thomas Dulin 2 years ago


There is some good stuff here but overall I think this article contributes to the semantics
problem. The word "phase" is not a dirty word and does not necessarily imply problems.
Phase relationship is a quality of two given audio signals, measured in degrees. You're
right about the theory of addition and cancellation in regards to phase coherency, but I
wish you would have spoken more to the science of comb filtering and its relationship to phase
coherency at given frequencies instead of offering a blanket "9dB" rule without technical
explanation. I also wish the article stayed away from the word "volume."

Again, I applaud the effort of anyone who attempts to distinguish the language of pro audio
from consumer-speak but do try and use the terms in the correct way to prevent further
Internet confusion.

Randy Coppinger Thomas Dulin 2 years ago


Thomas, I'm not implying phase is inherently the cause of audio problems. I'm
simply suggesting that if we think of problems that are typically described as
phase in terms of timing or polarity it can help us demystify the theory and
practice.

Everest does such an elegant job explaining comb filters. I can't hold a candle to
his writing. I defer to his Master Handbook of Acoustics because it's the best.

Honestly, I can't remember if the 9 dB difference was from Everest or Alex U.
Case. I should look that up. But essentially when the difference gets to be 9 dB or
more between the two delayed signals, the comb filtering still exists but tends to be
masked by the louder signal. That's why I say the difference "effectively"
eliminates the comb filtering.

And if I've written something that is incorrect here, please advise how it can be
made correct. I encourage criticism because I want to get it right. I also like
criticism because I was on the debate team and find a well argued position a thing
of beauty. :-)

I'm curious about your objection to the word "volume." Please explain.

Rob Schlette 2 years ago


I don't know how I missed this article, but it's really well done. I love the suggestion to
avoid the confused semantics of 'phase' - often referred to as if it were some sort of
creeping ooze that attacks in the night. This is what audio education should look like -
