
ANNA UNIVERSITY NOV/DEC-2015

EC-6502 PRINCIPLES OF DIGITAL SIGNAL PROCESSING

1. DT signal:

Discrete time signal: A discrete time signal is defined only at discrete instants of time. The
independent variable has discrete values only, which are uniformly spaced. A discrete time signal
is often derived from the continuous time signal by sampling it at a uniform rate.

2. Time Invariant Systems:

A system is said to be time invariant if a time delay or advance of the input signal leads to
an identical shift in the output signal. This implies that a time invariant system responds
identically no matter when the input signal is applied.

3. Pass Band:

A filter is a frequency selective device which passes (amplifies) a particular range of frequencies
and attenuates the others. The pass band is the range of frequencies that the filter passes with
little or no attenuation.

4. Backward Difference:

One of the simplest methods of converting an analog filter to a digital filter is to approximate the
differential equation by an equivalent difference equation:

$$\left.\frac{dy(t)}{dt}\right|_{t=nT} \approx \frac{y(nT) - y(nT-T)}{T}$$
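As an illustration (not part of the original answer), the sketch below checks numerically, for an illustrative sine-wave signal, that the backward difference approaches the true derivative as T shrinks:

```python
import numpy as np

# Backward-difference approximation of dy/dt at t = nT, for an
# illustrative signal y(t) = sin(2*pi*t) sampled with period T.
T = 0.01
t = np.arange(0.0, 1.0, T)
y = np.sin(2 * np.pi * t)

dy_approx = (y[1:] - y[:-1]) / T             # (y(nT) - y(nT - T)) / T
dy_exact = 2 * np.pi * np.cos(2 * np.pi * t[1:])

print(np.max(np.abs(dy_approx - dy_exact)))  # error is O(T), here ~0.2
```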

5. Disadvantages of FIR:

A much higher filter order is required than for an IIR filter meeting the same magnitude
specifications, so more computation and memory are needed.
The delay through the filter is correspondingly large.

6. Window desirable characteristics:

The desirable characteristics of the window are:

The central lobe of the frequency response of the window should contain most of the energy
and should be narrow.
The highest side lobe level of the frequency response should be small.
The side lobes of the frequency response should decrease in energy rapidly as ω tends to π.

7. Truncation of data:

Truncation is the term for limiting the number of digits right of the decimal point, by discarding
the least significant ones.

8. Truncation error analyzed:

To truncate a number to 4 decimal digits, we keep only the 4 digits to the right of the decimal
point. For example, truncating numbers such as 5.6341..., 32.4381... and -6.3444... to 4 decimal
digits gives 5.6341, 32.4381 and -6.3444. Note that in some cases truncation yields the same
result as rounding, but truncation does not round the digits up or down; it merely cuts off at the
specified digit. The truncation error can be twice the maximum error in rounding.
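A small Python sketch (with illustrative values, not taken from the original question) contrasting truncation with rounding:

```python
import math

def truncate(x, digits=4):
    # Cut off everything beyond 'digits' decimal places, toward zero,
    # without rounding up or down.
    factor = 10 ** digits
    return math.trunc(x * factor) / factor

for v in [5.63412, 32.43817, -6.34449]:
    print(truncate(v), round(v, 4))
# truncation: 5.6341 32.4381 -6.3444  |  rounding: 5.6341 32.4382 -6.3445
# (the last two values round up/down, truncation just cuts off)
```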

9. Multirate Processing Application:

Sub band coding


Speech analysis

10. Sampling theorem:

In the field of digital signal processing, the sampling theorem is a fundamental bridge between
continuous-time signals (often called "analog signals") and discrete-time signals (often called
"digital signals"). It states that a band-limited signal containing no frequencies higher than Fmax
can be reconstructed exactly from its samples, provided the sampling rate Fs satisfies Fs ≥ 2Fmax.

11. (a) RADIX-2 DIT

Radix-2 decimation-in-time (DIT) FFT is the simplest and most common form of the Cooley-Tukey
algorithm, although highly optimized Cooley-Tukey implementations typically use other
forms of the algorithm as described below. Radix-2 DIT divides a DFT of size N into two
interleaved DFTs (hence the name "radix-2") of size N/2 with each recursive stage. The DFT is
defined by

$$X_k = \sum_{n=0}^{N-1} x_n\, e^{-2\pi i nk/N}$$

where k is an integer ranging from 0 to N-1.

The radix-2 DIT algorithm rearranges the DFT of the sequence $x_n$ into two parts: a sum over the
even-numbered indices n = 2m and a sum over the odd-numbered indices n = 2m + 1:

$$X_k = \sum_{m=0}^{N/2-1} x_{2m}\, e^{-2\pi i (2m)k/N} + \sum_{m=0}^{N/2-1} x_{2m+1}\, e^{-2\pi i (2m+1)k/N}$$

One can factor the common multiplier $e^{-2\pi i k/N}$ out of the second sum. It is then clear that
the two sums are the DFT of the even-indexed part $x_{2m}$ and the DFT of the odd-indexed
part $x_{2m+1}$ of the sequence $x_n$. Denoting the DFT of the even-indexed inputs $x_{2m}$ by $E_k$
and the DFT of the odd-indexed inputs $x_{2m+1}$ by $O_k$, we obtain:

$$X_k = E_k + e^{-2\pi i k/N} O_k$$

This result, expressing the DFT of length N recursively in terms of two DFTs of size N/2, is the
core of the radix-2 DIT fast Fourier transform. The algorithm gains its speed by re-using the
results of intermediate computations to compute multiple DFT outputs. Note that the final outputs
are obtained by a +/- combination of $E_k$ and $O_k\, e^{-2\pi i k/N}$, which is simply a size-2 DFT.

This process is an example of the general technique of divide-and-conquer algorithms; in many
traditional implementations, however, the explicit recursion is avoided, and instead one traverses
the computational tree in breadth-first fashion.

This process of splitting the 'time domain' sequence into even and odd samples is what gives the
algorithm its name, 'Decimation In Time'. As with the DIF algorithm, we have succeeded in
expressing an N-point transform as two (N/2)-point sub-transforms. The principal difference here is
that the order in which we do things has changed. In the DIF algorithm the time domain data is
'twiddled' before the two sub-transforms are performed. Here the two sub-transforms are
performed first, and the final result is obtained by 'twiddling' the resulting frequency domain data.
There is a slight problem here, because the two sub-transforms only give values for k = 0..N/2-1;
we also need values for k = N/2..N-1. But from the periodicity of the DFT we know that
$E_{k+N/2} = E_k$ and $O_{k+N/2} = O_k$, so

$$X_{k+N/2} = E_k - e^{-2\pi i k/N} O_k$$
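Concretely, a minimal recursive Python sketch of the DIT recursion described above (assuming the input length is a power of two; real implementations are iterative and in-place):

```python
import cmath

def fft_dit(x):
    # Radix-2 decimation-in-time: split into even/odd-indexed halves,
    # transform each, then combine with a size-2 butterfly.
    N = len(x)
    if N == 1:
        return list(x)
    E = fft_dit(x[0::2])   # DFT of even-indexed samples, E_k
    O = fft_dit(x[1::2])   # DFT of odd-indexed samples, O_k
    T = [cmath.exp(-2j * cmath.pi * k / N) * O[k] for k in range(N // 2)]
    return ([E[k] + T[k] for k in range(N // 2)] +    # X_k       = E_k + W^k O_k
            [E[k] - T[k] for k in range(N // 2)])     # X_{k+N/2} = E_k - W^k O_k
```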
RADIX-2 DIF

We defined the DFT as:

$$X(k) = \sum_{n=0}^{N-1} x(n)\, W_N^{nk}, \qquad W_N = e^{-2\pi i/N}$$

If N is even, the above sum can be split into 'top' (n = 0..N/2-1) and 'bottom' (n = N/2..N-1) halves
and re-arranged as follows:

$$X(k) = \sum_{n=0}^{N/2-1} \left[ x(n) + (-1)^k\, x(n + N/2) \right] W_N^{nk}$$
If we now consider the form of this result for even and odd valued k separately, we can see how
it expresses an N-point FFT in terms of two (N/2)-point FFTs:

$$X(2m) = \sum_{n=0}^{N/2-1} \left[ x(n) + x(n + N/2) \right] W_{N/2}^{nm}$$

$$X(2m+1) = \sum_{n=0}^{N/2-1} \left[ x(n) - x(n + N/2) \right] W_N^{n}\, W_{N/2}^{nm}$$

The process of dividing the frequency components into even and odd parts is what gives this
algorithm its name, 'Decimation In Frequency'. If N is a regular power of 2, we can apply this
method recursively until we get to the trivial 1-point transform. The factors $W_N^n$ are
conventionally referred to as 'twiddle factors'.
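For comparison, a matching Python sketch of the DIF recursion (twiddle first, transform second), under the same power-of-two assumption:

```python
import cmath

def fft_dif(x):
    # Radix-2 decimation-in-frequency: combine top/bottom halves first
    # ('twiddling'), then transform; even and odd output bins interleave.
    N = len(x)
    if N == 1:
        return list(x)
    a, b = x[:N // 2], x[N // 2:]
    t = [a[n] + b[n] for n in range(N // 2)]                                        # -> X(2m)
    u = [(a[n] - b[n]) * cmath.exp(-2j * cmath.pi * n / N) for n in range(N // 2)]  # -> X(2m+1)
    X = [0] * N
    X[0::2] = fft_dif(t)
    X[1::2] = fft_dif(u)
    return X
```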
11.(b) Find the output y(n) of a filter whose impulse response is h(n) = { 1, 1, 1 } and the
input signal x(n) = { 3, -1, 0, 1, 3, 2, 0, 1, 2, 1 } using (i) the overlap-save method and
(ii) the overlap-add method.

Overlap-Save Method

The input sequence can be divided into blocks of data as follows

X1(n) = {0, 0, 3, -1, 0}

X2(n) = {-1, 0, 1, 3, 2}

X3(n) = {3, 2, 0, 1, 2} and X4(n) = {1, 2, 1, 0, 0}

Given h (n) = {1, 1, 1}

Increase the length of h(n) to L+M-1 = 5 by appending two zeros

Therefore h (n) = {1, 1, 1, 0, 0}

The circular convolution of each block with h(n) gives

y1(n) = x1(n) ⊛ h(n) = {-1, 0, 3, 2, 2}

y2(n) = x2(n) ⊛ h(n) = {4, 1, 0, 4, 6}

y3(n) = x3(n) ⊛ h(n) = {6, 7, 5, 3, 3}

y4(n) = x4(n) ⊛ h(n) = {1, 3, 4, 3, 1}

Discard the first two values of each block output and concatenate the rest:

Y(n) = { 3, 2, 2, 0, 4, 6, 5, 3, 3, 4, 3, 1 }

Overlap- Add Method

Let the length of each data block be L = 3. Two zeros are added to bring the length to L+M-1 = 5.

X1(n) = {3, -1, 0, 0, 0}

X2(n) = {1, 3, 2, 0, 0}

X3(n) = {0, 1, 2, 0, 0}

X4(n) = {1, 0, 0, 0, 0}

The linear convolution of each block with h(n) gives

y1(n) = x1(n) * h(n) = {3, 2, 2, -1, 0}

y2(n) = x2(n) * h(n) = {1, 4, 6, 5, 2}

y3(n) = x3(n) * h(n) = {0, 1, 3, 3, 2}

y4(n) = x4(n) * h(n) = {1, 1, 1, 0, 0}

Overlap the last M-1 = 2 samples of each block output with the first two samples of the next
block output and add:

3  2  2 -1  0
         1  4  6  5  2
                  0  1  3  3  2
                           1  1  1  0  0

Y (n) = { 3, 2, 2, 0, 4, 6, 5, 3, 3, 4, 3, 1 }
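The block processing above can be checked with a short numpy sketch of the overlap-add method (block length L = 3 and the data from this question); the result matches the direct linear convolution:

```python
import numpy as np

def overlap_add(x, h, L):
    # Convolve each length-L block with h, then add the overlapping tails.
    M = len(h)
    y = np.zeros(len(x) + M - 1)
    for i in range(0, len(x), L):
        yb = np.convolve(x[i:i + L], h)   # block output, length <= L + M - 1
        y[i:i + len(yb)] += yb            # last M-1 samples overlap the next block
    return y

x = np.array([3.0, -1, 0, 1, 3, 2, 0, 1, 2, 1])
h = np.array([1.0, 1, 1])
print(overlap_add(x, h, L=3))    # [3 2 2 0 4 6 5 3 3 4 3 1]
print(np.convolve(x, h))         # same result
```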
12.(a) An analog filter has the system function H(s) = 1/((s+0.1)² + 9). Convert this filter
into a digital filter using the backward difference method.

Backward difference method: substitute s = (1 - z⁻¹)/T in H(s):

$$H(z) = \left. H(s)\right|_{s=\frac{1-z^{-1}}{T}} = \frac{1}{\left(\frac{1-z^{-1}}{T} + 0.1\right)^2 + 9}$$

Taking T = 1 s (no sampling period is specified in the question):

$$H(z) = \frac{1}{(1.1 - z^{-1})^2 + 9} = \frac{1}{10.21 - 2.2z^{-1} + z^{-2}}$$
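A quick numerical cross-check of the substitution (under the same T = 1 assumption): the digital response at z = e^{jω} should approach the analog response at s = jΩ for small frequencies:

```python
import numpy as np

T = 1.0
w = 0.01                      # a small digital frequency (rad/sample)
s = 1j * w / T                # corresponding analog frequency
z = np.exp(1j * w)

Ha = 1 / ((s + 0.1) ** 2 + 9)               # analog H(s)
Hz = 1 / (10.21 - 2.2 / z + 1 / z ** 2)     # digital H(z), T = 1
print(abs(Ha), abs(Hz))       # both ~ 1/9.01 = 0.111 near DC
```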


12 (b) BUTTERWORTH FILTERS

Low pass Butterworth filters are all-pole filters with monotonic frequency response in both pass
band and stop band, characterized by the magnitude-squared frequency response

$$|H_a(j\Omega)|^2 = \frac{1}{1 + (\Omega/\Omega_c)^{2N}}$$

where N is the order of the filter, $\Omega_c$ is the -3 dB frequency (cutoff frequency), $\Omega_p$ is the pass
band edge frequency and $1/(1+\epsilon^2)$ is the band edge value of $|H_a(j\Omega)|^2$. Since the product
$H_a(s)H_a(-s)$ evaluated at $s = j\Omega$ is simply equal to $|H_a(j\Omega)|^2$, it follows that

$$H_a(s)H_a(-s) = \frac{1}{1 + (s/j\Omega_c)^{2N}}$$

The poles of $H_a(s)H_a(-s)$ occur on a circle of radius $\Omega_c$ at equally spaced points. We find the
pole positions as the solutions of

$$1 + \left(\frac{s}{j\Omega_c}\right)^{2N} = 0$$

and hence the N poles in the left half of the s-plane are

$$s_k = \Omega_c\, e^{j\pi(2k+N+1)/2N}, \qquad k = 0, 1, \ldots, N-1$$

Note that there are no poles on the imaginary axis of the s-plane; for N odd there will be a pole
on the real axis, and for N even there are no poles on the real axis. Also note that the poles occur
in conjugate-symmetric pairs. Thus, to design a Butterworth low pass filter with attenuation $\delta_2$
at a specified frequency $\Omega_s$, first find N from

$$N \ge \frac{\log\left[(1/\delta_2^2 - 1)/\epsilon^2\right]}{2\log(\Omega_s/\Omega_p)}$$

where by definition $\delta_2^2 = 1/(1+\delta^2)$. Thus the Butterworth filter is completely characterized by
the parameters N, $\delta_2^2$, $\epsilon$ and the ratio $\Omega_s/\Omega_p$ (or $\Omega_c$). Then find the pole positions
$s_k$, k = 0, 1, 2, ..., (N-1). Finally the analog filter is given by

$$H_a(s) = \prod_{k=0}^{N-1} \frac{\Omega_c}{s - s_k}$$

Design an analog Butterworth filter that has -2dB pass band attenuation at a frequency of
20 rad/sec and at least -10dB stop band attenuation at 30 rad/sec.

Solution

αp = 2 dB; Ωp = 20 rad/sec

αs = 10 dB; Ωs = 30 rad/sec

$$N \ge \frac{\log\left[\left(10^{0.1\alpha_s}-1\right)/\left(10^{0.1\alpha_p}-1\right)\right]}{2\log\left(\Omega_s/\Omega_p\right)}$$

N ≥ 3.37

N = 4

For N = 4, the normalized (Ωc = 1) Butterworth transfer function is

$$H(s) = \frac{1}{\left(s^2 + 0.76537s + 1\right)\left(s^2 + 1.84776s + 1\right)}$$

The cutoff frequency is

$$\Omega_c = \frac{\Omega_p}{\left(10^{0.1\alpha_p}-1\right)^{1/2N}} = 21.3868 \text{ rad/sec}$$

Substituting s → s/21.3868 in H(s) gives the denormalized filter

$$H(s) = \frac{1}{\left(\left(\tfrac{s}{21.3868}\right)^2 + 0.76537\,\tfrac{s}{21.3868} + 1\right)\left(\left(\tfrac{s}{21.3868}\right)^2 + 1.84776\,\tfrac{s}{21.3868} + 1\right)}$$
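The order and cutoff arithmetic can be reproduced with a few lines of Python (variable names are illustrative):

```python
import math

ap, as_db = 2.0, 10.0         # pass band / stop band attenuation (dB)
wp, ws = 20.0, 30.0           # band edge frequencies (rad/sec)

N = math.log10((10 ** (0.1 * as_db) - 1) / (10 ** (0.1 * ap) - 1)) \
    / (2 * math.log10(ws / wp))
print(N)                      # 3.37...

N = math.ceil(N)              # round up -> 4
wc = wp / (10 ** (0.1 * ap) - 1) ** (1 / (2 * N))
print(N, wc)                  # 4, 21.386...
```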

13.(a) DESIGN OF IIR DIGITAL FILTERS

A digital filter is a linear shift-invariant discrete-time system that is realized using finite
precision arithmetic. The design of digital filters involves three basic steps:

The specification of the desired properties of the system.


The approximation of these specifications using a causal discrete-time system.
The realization of these specifications using finite precision arithmetic.

These three steps are independent; here we focus our attention on the second step. The desired
digital filter is to be used to filter a digital signal that is derived from an analog signal by means
of periodic sampling. The specifications for both analog and digital filters are often given in the
frequency domain, as for example in the design of low pass, high pass, band pass and band
elimination filters.

Given the sampling rate, it is straightforward to convert from frequency specifications on an
analog filter to frequency specifications on the corresponding digital filter, the analog
frequencies being in terms of Hertz and the digital frequencies in terms of radian frequency, or
angle around the unit circle, with the point z = -1 corresponding to half the sampling frequency.
The least confusing point of view toward digital filter design is to consider the filter as being
specified in terms of angle around the unit circle rather than in terms of analog frequencies.
A separate problem is that of determining an appropriate set of specifications on the digital filter.
In the case of a low pass filter, for example, the specifications often take the form of a tolerance
scheme, as shown in Fig. 5.1.

Many of the filters used in practice are specified by such a tolerance scheme, with no constraints
on the phase response other than those imposed by stability and causality requirements; i.e., the
poles of the system function must lie inside the unit circle. Given a set of specifications in the
form of Fig. 5.1, the next step is to find a discrete time linear system whose frequency response
falls within the prescribed tolerances. At this point the filter design problem becomes a problem
in approximation. In the case of infinite impulse response (IIR) filters, we must approximate the
desired frequency response by a rational function, while in the finite impulse response (FIR)
filters case we are concerned with polynomial approximation.

ADVANTAGES OF FIR:

They can easily be designed to be "linear phase" (and usually are).


They are simple to implement.
They are suited to multi-rate applications.
They have desirable numeric properties.
They can be implemented using fractional arithmetic.

14.(a) OVERFLOW LIMIT CYCLE OSCILLATION

When a stable IIR digital filter is excited by a finite input sequence, the output should eventually
decay to zero. However, the nonlinearities due to the finite precision arithmetic operations often
cause periodic oscillations to occur in the output. Such oscillations in recursive systems are
called zero input limit cycle oscillations.

Consider a first order IIR filter with the difference equation

y(n) = x(n) + a y(n-1)

Let us assume a = 1/2 and a data register length of 3 bits plus a sign bit. If the input is, say, the
impulse x(n) = 0.875 δ(n), the exact output would decay steadily toward zero, but with every
product rounded to 3 fractional bits the output reaches 0.125 and stays there instead of decaying
to zero. These oscillations are due to round-off errors in multiplication and to overflow in
addition.

Dead band:

Limit cycles occur as a result of the quantization effects in multiplication. The amplitudes of the
output during a limit cycle are confined to a range of values that is called the dead band of the
filter.

For the first order system described by the equation

y(n) = a y(n-1) + x(n)

the dead band is given by

$$|y(n-1)| \le \frac{2^{-b}/2}{1 - |a|}$$

where b is the number of fractional bits used for the products (quantization step q = 2^{-b}).
Overflow limit cycle oscillation:

In the fixed point addition of two binary numbers, overflow occurs when the sum exceeds the
finite word length of the register used to store the sum. Overflow in addition may lead to
oscillations in the output, which are referred to as overflow limit cycles.

Overflow oscillations can be eliminated if saturation arithmetic is performed. In saturation
arithmetic, when an overflow is sensed the output (sum) is set equal to the maximum allowable
value, and when an underflow is sensed the output (sum) is set equal to the minimum allowable
value.

14.(b) FIXED POINT REPRESENTATION:


In fixed point representation the position of the binary point is fixed. The bits to the right of the
binary point represent the fractional part of the number and those to the left represent the integer
part. For example, the binary number 01.1100 has the value 1.75 in decimal.

Since the number of digits is fixed, numbers that are too large or too small cannot be represented.
In fixed point representation there are three different formats for representing negative binary
numbers:

Signed magnitude form
1's complement form
2's complement form

Signed magnitude form:

In this representation, the most significant bit is set to 1 to represent the negative sign.

For example:
(1.001)2 = -0.125 in decimal
(0.001)2 = +0.125 in decimal

The decimal number -1.75 is represented as 11.110000 and +1.75 is represented as 01.110000

1's complement form:

In one's complement form, a positive number is represented as in the sign magnitude form. A
negative number is obtained by complementing all the bits of the corresponding positive number.

For example, the decimal number -0.875:
0.875 = (0.111000)2
-0.875 = complement of (0.111000)2 = (1.000111)2

2's complement form:

In this representation, positive numbers are represented as in the sign magnitude and one's
complement forms. A negative number is obtained by complementing all the bits of the positive
number and adding one to the least significant bit.

0.875          = 0.111000
1's complement = 1.000111
add 1 LSB      = 0.000001
2's complement = 1.001000, whose decimal value is -0.875
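A small helper (illustrative only; 1 sign bit plus 6 fractional bits, as in the example) that produces the 2's complement bit pattern of a fraction:

```python
def twos_complement_bits(x, frac_bits=6):
    # Scale the fraction to an integer, wrap negatives modulo 2^(frac_bits+1),
    # and show the sign bit followed by the fractional bits.
    total_bits = frac_bits + 1
    q = round(x * 2 ** frac_bits)
    if q < 0:
        q += 2 ** total_bits
    bits = format(q, '0{}b'.format(total_bits))
    return bits[0] + '.' + bits[1:]

print(twos_complement_bits(-0.875))   # 1.001000
print(twos_complement_bits(0.875))    # 0.111000
```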

Coefficient quantization error:

When an IIR filter is designed, the coefficients a1, a2, ... and b1, b2, ... should be evaluated
accurately. When these coefficients are quantized, by either rounding or truncation, a significant
difference can arise between the ideal and the practical filter response.

If rounding is used for quantization, then the quantization error e(n) = xq(n) - x(n) is bounded by
-q/2 ≤ e(n) ≤ q/2. In most cases we can assume that the analog to digital conversion error e(n)
has the following properties.

The error sequence e(n) is a sample sequence of a stationary random process.

The error sequence is uncorrelated with x(n) and other signals in the system.
The error is a white noise process with uniform amplitude probability distribution over the
range of the quantization error.

In the case of rounding, e(n) lies between -q/2 and q/2 with equal probability. The variance of
e(n) is given by

$$\sigma_e^2 = E[e^2(n)] - \left(E[e(n)]\right)^2$$

where $E[e^2(n)]$ is the average of $e^2(n)$ and $E[e(n)]$ is the mean value of e(n). For rounding
the error has zero mean, so $\sigma_e^2 = E[e^2(n)] = q^2/12$.
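The q²/12 figure is easy to confirm by simulation (a sketch with an arbitrary step size):

```python
import numpy as np

rng = np.random.default_rng(0)
q = 2.0 ** -8                          # quantization step (arbitrary choice)
x = rng.uniform(-1.0, 1.0, 100_000)
e = np.round(x / q) * q - x            # rounding error, confined to [-q/2, q/2]

print(e.var())      # ~1.27e-06, measured variance
print(q * q / 12)   # 1.27e-06, the theoretical value
```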

15.(a) SAMPLING RATE CONVERSION BY A FACTOR I/D

The aim of (digital) sample-rate conversion is to bring a digital audio signal from one sample
frequency to another. The distortion of the audio signal, introduced by the sample-rate converter,
should be as low as possible. The generation of output samples from the input samples may be
performed by the application of various methods. One solution is depicted in figure 5.1. The
input samples at a sampling rate Fi are up sampled by an integer factor N. The signal at NFi is
low-pass filtered and decimated by an integer factor M to obtain the output sample rate Fo. This
solution implies that the conversion factor Fo/Fi is equal to N/M which is a rational number. The
master clocks from which Fi and Fo are obtained have to be locked. For small values of N and M
the implementation of this method is not difficult; for large N and M the filter becomes
more complicated, as the required stop-band attenuation increases with increasing N and M.
When, for instance, a 16-bit digital audio signal at a sampling rate of 48 kHz must be converted to
a sampling rate of 44.1 kHz, this method leads to N = 147 and M = 160. In this example the
common frequency, which is also the operating frequency of the filter, becomes 7.056 MHz. Due
to the folding products which result from decimating by 160, the required stop-band attenuation
must be at least 120 dB.
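A simplified numpy sketch of this N/M scheme (zero-stuff by N, low-pass filter, keep every M-th sample). The filter here is a short illustrative windowed sinc; a real 48 kHz to 44.1 kHz converter with 120 dB stop-band attenuation needs a far longer filter:

```python
import numpy as np

def resample_rational(x, up, down, taps=201):
    # 1) insert up-1 zeros between samples, 2) low-pass at the tighter of
    #    the two Nyquist limits, 3) keep every 'down'-th sample.
    expanded = np.zeros(len(x) * up)
    expanded[::up] = x
    fc = 0.5 / max(up, down)                    # cutoff, cycles/sample
    n = np.arange(taps) - (taps - 1) / 2
    h = 2 * fc * np.sinc(2 * fc * n) * np.hamming(taps)
    y = np.convolve(expanded, h * up)           # gain 'up' restores the level
    return y[::down]

tone = np.sin(0.05 * np.arange(300))            # an illustrative input
y = resample_rational(tone, up=147, down=160)   # the 48 kHz -> 44.1 kHz ratio
```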

Most digital sample-rate converters work with up sampling, construct a continuous-time


equivalent from the samples coming out of the up sample filter and resample this signal with the
output sample frequency Fo. This situation is depicted in figure 5.2. For these solutions the
common multiple of the frequencies Fi and Fo is not needed. The number of input samples is
increased to NFi so that the shape of output signal looks more like a continuous-time waveform.
The reconstruction filter constructs a continuous-time signal which is fed to the output sampler.
The reconstruction of the time-signal may be performed by a first-order hold function or by
linear interpolation.

The operation principle of the new method of sample rate conversion is very simple. An input
sample is directly transferred to the output, while per unit of time, a certain amount of these
samples is omitted or repeated, depending on the difference in input and output sample
frequencies. The omission, acceptance or repetition of a sample is called validation. In order to
get the simplest hardware implementation, the choice has been made to use only the take-over
operation and the repetition operation in the current system solution. This means that the output
sampling frequency of the sample rate converter is always larger than the input sample
frequency. The process of repeating samples inevitably introduces errors.

The resulting output samples will have correct values, but as a result of the validation operation,
they are placed on the output time grid with a variable time delay with respect to the input time
grid. As a consequence, the output sequence should be viewed as the input sequence, having the
correct signal amplitude, which is sampled at wrong time moments. The effect is the same as
sampling the input signal by a jittered clock. As a result, it can be stated that the time error
mechanism introduced by the validation algorithm is time jitter.

If all input samples would be transferred to the output grid without the repetition or omission of
a certain amount of them, then the output signal would be just a delayed version of the input
signal, exhibiting the same shape. It is the repetition and omission (in the current system setup
only the repetition ) of input samples that give rise to a variation in time delay for each individual
output sample. This variation in individual time delays introduces phase errors. As a result of
this, the shape of the output signal will be distorted. The time errors introduced by the conversion
process can be reduced considerably by applying up sampling and down sampling techniques.
The input sample rate of the converter will be higher so that the conversion errors are smaller,
resulting in smaller time jitter.

These techniques do not suffice when we want to achieve the very high analog audio
performance required for professional applications. By using a sigma-delta modulator (noise
shaper) as control source for the conversion process, the time errors will be shaped to the higher
frequency region. As a result, the audio quality ( in the baseband) of the signal will be preserved,
provided that enough bandwidth is created by up sampling of the input signal. The high
frequency (out of base band) phase modulation terms can be filtered by a decimation filter or an
analog low-pass filter which is directly placed after the sample-rate converter. Figure 5.3 shows
the block diagram of the complete sample-rate converter.
As has already been mentioned, only the input-sample take-over operation will be employed here
in order to get the simplest hardware. This means that the input sample frequency of the
converter must always be smaller than the output sample frequency. With this restriction
imposed, it is assured that all input samples are used in the output sequence, none of them being
omitted. The extra output samples per unit of time are inserted in the output sequence by
repetition of the previous output sample.

15.(b) VARIOUS SOUND EFFECTS


The two principal human senses are vision and hearing. Correspondingly, much of DSP is related
to image and audio processing. People listen to both music and speech. DSP has made
revolutionary changes in both these areas.

Music sound processing

The path leading from the musician's microphone to the audiophile's speaker is remarkably long.
Digital data representation is important to prevent the degradation commonly associated with
analog storage and manipulation. This is very familiar to anyone who has compared the musical
quality of cassette tapes with compact disks. In a typical scenario, a musical piece is recorded in a
sound studio on multiple channels or tracks. In some cases, this even involves recording
individual instruments and singers separately. This is done to give the sound engineer greater
flexibility in creating the final product. The complex process of combining the individual tracks
into a final product is called mix down. DSP can provide several important functions during mix
down, including: filtering, signal addition and subtraction, signal editing, etc.
One of the most interesting DSP applications in music preparation is artificial
reverberation. If the individual channels are simply added together, the resulting piece sounds
frail and diluted, much as if the musicians were playing outdoors. This is because listeners are
greatly influenced by the echo or reverberation content of the music, which is usually minimized
in the sound studio. DSP allows artificial echoes and reverberation to be added during mix down
to simulate various ideal listening environments. Echoes with delays of a few hundred
milliseconds give the impression of cathedral-like locations.
Adding echoes with delays of 10-20 milliseconds gives the perception of more modest sized
listening rooms.
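A minimal echo effect along these lines (a single reflection; the delay and gain values are illustrative):

```python
import numpy as np

def add_echo(x, fs, delay_ms=250.0, gain=0.4):
    # y(n) = x(n) + gain * x(n - D): one reflection arriving D samples late.
    D = int(fs * delay_ms / 1000.0)
    y = np.concatenate([x, np.zeros(D)])
    y[D:] += gain * x
    return y

fs = 8000
x = np.sin(2 * np.pi * 440 * np.arange(fs) / fs)   # one second of a 440 Hz tone
y = add_echo(x, fs)                                # ~250 ms delay: cathedral-like
```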
Speech generation

Speech generation and recognition are used to communicate between humans and machines.
Rather than using your hands and eyes, you use your mouth and ears. This is very convenient
when your hands and eyes should be doing something else, such as: driving a car, performing
surgery, or (unfortunately) firing your weapons at the enemy. Two
approaches are used for computer generated speech: digital recording and vocal tract simulation.
In digital recording, the voice of a human speaker is digitized and stored, usually in a
compressed form.
The human vocal tract is an acoustic cavity with resonant frequencies determined by the size and
shape of the chambers. Sound originates in the vocal tract in one of two basic ways, called
voiced and fricative sounds. With voiced sounds, vocal cord vibration produces near periodic
pulses of air into the vocal cavities. In comparison, fricative sounds originate from the noisy air
turbulence at narrow constrictions, such as the teeth and lips. Vocal tract simulators operate by
generating digital signals that resemble these two types of excitation. The characteristics of the
resonant chamber are simulated by passing the excitation signal through a digital filter with
similar resonances. This approach was used in one of the very early DSP success stories, the
Speak & Spell, a widely sold electronic learning aid for children.

The automated recognition of human speech is immensely more difficult than speech generation.
Speech recognition is a classic example of things that the human brain does well, but digital
computers do poorly. Digital computers can store and recall vast amounts of data, perform
mathematical calculations at blazing speeds, and do repetitive tasks without becoming bored or
inefficient. Unfortunately, present day computers perform very poorly when faced with raw
sensory data. Teaching a computer to send you a monthly electric bill is easy. Teaching the same
computer to understand your voice is a major undertaking. Digital Signal Processing generally
approaches the problem of voice recognition in two steps: feature extraction followed by feature
matching. Each word in the incoming audio signal is isolated and then analyzed to identify the
type of excitation and resonant frequencies. These parameters are then compared with previous
examples of spoken words to identify the closest match. Often, these systems are limited to only
a few hundred words; can only accept speech with distinct pauses between words; and must be
retrained for each individual speaker. While this is adequate for many commercial applications,
these limitations are humbling when compared to the abilities of human hearing. There is a great
deal of work to be done in this area, with tremendous financial rewards for those that produce
successful commercial products.

Explain sub band coding of speech and audio signals using DSP.

In signal processing, Sub-band coding (SBC) is any form of transform coding that breaks a
signal into a number of different frequency bands and encodes each one independently. This
decomposition is often the first step in data compression for audio and video signals. The utility
of SBC is perhaps best illustrated with a specific example. When used for audio compression,
SBC exploits auditory masking in the human auditory system. Human ears are normally sensitive
to a wide range of frequencies, but when a sufficiently loud signal is present at one frequency,
the ear will not hear weaker signals at nearby frequencies. We say that the louder signal masks
the softer ones. The louder signal is called the masker, and the point at which masking occurs is
known as the masking threshold.
The basic idea of SBC is to enable a data reduction by discarding information about frequencies
which are masked. The result differs from the original signal, but if the discarded information is
chosen carefully, the difference will not be noticeable, or more importantly, objectionable.
The simplest way to digitally encode audio signals is pulse-code modulation (PCM), which is
used on audio CDs, DAT recordings, and so on. Digitization transforms continuous signals into
discrete ones by sampling a signal's amplitude at uniform intervals and rounding to the nearest
value representable with the available number of bits. This process is fundamentally inexact, and
involves two errors: discretization error, from sampling at intervals, and quantization error, from
rounding.
The more bits used to represent each sample, the finer the granularity in the digital representation,
and thus the smaller the error. Such quantization errors may be thought of as a type of noise,
because they are effectively the difference between the original source and its binary
representation. With PCM, the only way to mitigate the audible effects of these errors is to use
enough bits to ensure that the noise is low enough to be masked either by the signal itself or by
other sources of noise. A high quality signal is possible, but at the cost of a high bit rate (e.g.,
over 700 kbit/s for one channel of CD audio). In effect, many bits are wasted in encoding masked
portions of the signal because PCM makes no assumptions about how the human ear hears.
More clever ways of digitizing an audio signal can reduce that waste by exploiting known
characteristics of the auditory system. A classic method is nonlinear PCM, such as mu-
law encoding (named after a perceptual curve in auditory perception research). Small signals are
digitized with finer granularity than are large ones; the effect is to add noise that is proportional
to the signal strength. Sun's Au file format for sound is a popular example of mu-law encoding.
Using 8-bit mu-law encoding would cut the per-channel bit rate of CD audio down to about 350
kbit/s, or about half the standard rate. Because this simple method only minimally exploits
masking effects, it produces results that are often audibly poorer than the original.
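The mu-law compressor itself is a one-line formula; a sketch in numpy (μ = 255 is the standard North American value; the signal range is assumed normalized to [-1, 1]):

```python
import numpy as np

def mu_law_encode(x, mu=255.0):
    # Compress x in [-1, 1]: small amplitudes get finer granularity.
    return np.sign(x) * np.log1p(mu * np.abs(x)) / np.log1p(mu)

def mu_law_decode(y, mu=255.0):
    # Inverse expansion back to linear amplitude.
    return np.sign(y) * np.expm1(np.abs(y) * np.log1p(mu)) / mu

x = np.linspace(-1, 1, 5)
print(np.allclose(mu_law_decode(mu_law_encode(x)), x))   # True: lossless before quantizing
```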
Sub-band coding is used for example in G.722 codec. It uses sub-band adaptive differential pulse
code modulation (SB-ADPCM) within a bit rate of 64 kbit/s. In the SB-ADPCM technique used,
the frequency band is split into two sub-bands (higher and lower) and the signals in each sub-
band are encoded using ADPCM.
To enable higher quality compression, one may use sub band coding. First, a digital filter bank
divides the input signal spectrum into some number (e.g., 32) of sub bands. The psychoacoustic
model looks at the energy in each of these sub bands, as well as in the original signal, and
computes masking thresholds using psychoacoustic information. Each of the sub band samples is
quantized and encoded so as to keep the quantization noise below the dynamically computed
masking threshold. The final step is to format all these quantized samples into groups of data
called frames, to facilitate eventual playback by a decoder. Decoding is much easier than
encoding, since no psychoacoustic model is involved. The frames are unpacked, sub band
samples are decoded, and a frequency-time mapping reconstructs an output audio signal.
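As a toy illustration of this analysis/synthesis idea (two bands only, using the short Haar quadrature-mirror pair rather than the longer filters a real codec such as G.722 uses):

```python
import numpy as np

def analysis(x):
    # Split into a low band (pairwise sums) and a high band (pairwise
    # differences), each at half the input rate.
    lo = (x[0::2] + x[1::2]) / np.sqrt(2)
    hi = (x[0::2] - x[1::2]) / np.sqrt(2)
    return lo, hi

def synthesis(lo, hi):
    # Recombine the two half-rate bands into the full-rate signal.
    y = np.empty(2 * len(lo))
    y[0::2] = (lo + hi) / np.sqrt(2)
    y[1::2] = (lo - hi) / np.sqrt(2)
    return y

x = np.random.default_rng(1).normal(size=64)
lo, hi = analysis(x)                       # a real coder would now quantize each band
print(np.allclose(synthesis(lo, hi), x))   # True: perfect reconstruction
```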
