1. DT signal:
Discrete time signal: A discrete time signal is defined only at discrete instants of time. The
independent variable has discrete values only, which are uniformly spaced. A discrete time signal
is often derived from the continuous time signal by sampling it at a uniform rate.
2. Time invariant system:
A system is said to be time invariant if a time delay or advance of the input signal leads to an identical shift in the output signal. This implies that a time invariant system responds identically no matter when the input signal is applied.
3. Pass Band:
A filter is a frequency selective device which passes a particular range of frequencies and attenuates the remaining frequencies. The range of frequencies that the filter passes is called the pass band.
4. Backward Difference:
One of the simplest methods of converting an analog filter to a digital filter is to approximate the differential equation by an equivalent difference equation:
dy(t)/dt |_(t=nT) = (y(nT) - y(nT-T)) / T
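The backward-difference approximation can be checked numerically. In this sketch the sample function y(t) = t^2 and the sampling period T are illustrative assumptions, not values from the text:

```python
def backward_difference(y, n, T):
    """Approximate dy/dt at t = n*T using the backward difference (y(nT) - y(nT - T)) / T."""
    return (y(n * T) - y((n - 1) * T)) / T

y = lambda t: t * t       # sample function with known derivative dy/dt = 2t
T = 0.001                 # sampling period (an assumed value)
approx = backward_difference(y, 1000, T)   # derivative estimate at t = 1.0
print(approx)             # about 1.999, close to the true value 2.0
```

As T shrinks, the estimate approaches the true derivative; the error of the backward difference is proportional to T.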
5. Advantages of FIR:
Non recursive
Greater flexibility to control the shape of magnitude response
The desirable characteristics of the window are: the central lobe of the frequency response of the window should contain most of the energy and should be narrow; the highest side lobe level of the frequency response should be small; and the side lobes of the frequency response should decrease in energy rapidly as ω tends to π.
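These properties can be measured numerically. The sketch below uses a minimal pure-Python DFT (the window length and zero-padded grid size are arbitrary choices) to compare the peak side lobe level of a rectangular window with that of a Hamming window:

```python
import cmath, math

def peak_sidelobe_db(window, nfft=1024):
    """Peak side lobe level in dB (relative to the main lobe peak) of a window's spectrum."""
    n = len(window)
    # zero-padded DFT magnitude on a dense frequency grid (first half only)
    mag = [abs(sum(window[m] * cmath.exp(-2j * math.pi * m * k / nfft)
                   for m in range(n))) for k in range(nfft // 2)]
    k = 1
    while k < len(mag) - 1 and mag[k + 1] < mag[k]:   # walk down to the first null
        k += 1
    return 20 * math.log10(max(mag[k:]) / mag[0])

n = 64
rect = [1.0] * n
hamming = [0.54 - 0.46 * math.cos(2 * math.pi * i / (n - 1)) for i in range(n)]
print(peak_sidelobe_db(rect))      # about -13 dB
print(peak_sidelobe_db(hamming))   # about -43 dB
```

The rectangular window's highest side lobe sits near -13 dB, while the Hamming window pushes it below -40 dB, at the cost of a wider central lobe.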
Truncation is the term for limiting the number of digits to the right of the decimal point by discarding the least significant ones.
To truncate these numbers to 4 decimal digits, we only consider the 4 digits to the right of the
decimal point. The result would be: 5.6341 32.4381 -6.3444 Note that in some cases, truncating
would yield the same result as rounding, but truncation does not round up or round down the
digits; it merely cuts off at the specified digit. The truncation error can be twice the maximum
error in rounding.
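A minimal Python sketch of the distinction (the sample values are illustrative):

```python
import math

def truncate(x, digits):
    """Cut off all digits beyond `digits` decimal places, without rounding."""
    factor = 10 ** digits
    return math.trunc(x * factor) / factor

print(truncate(5.63419, 4))    # 5.6341 (rounding would give 5.6342)
print(truncate(-6.34449, 4))   # -6.3444 (truncation cuts toward zero)
```

The truncation error here can be as large as one unit in the last retained digit, twice the worst-case rounding error of half a unit.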
In the field of digital signal processing, the sampling theorem is a fundamental bridge between
continuous-time signals (often called "analog signals") and discrete-time signals (often called
"digital signals").
Radix-2 decimation-in-time (DIT) FFT is the simplest and most common form of the Cooley-Tukey algorithm, although highly optimized Cooley-Tukey implementations typically use other forms of the algorithm as described below. Radix-2 DIT divides a DFT of size N into two interleaved DFTs (hence the name "radix-2") of size N/2 with each recursive stage.
The Radix-2 DIT algorithm rearranges the DFT of the sequence x_n into two parts: a sum over the even-numbered indices n = 2m and a sum over the odd-numbered indices n = 2m + 1:
X_k = Σ_{m=0..N/2-1} x_{2m} e^(-2πi(2m)k/N) + Σ_{m=0..N/2-1} x_{2m+1} e^(-2πi(2m+1)k/N)
One can factor the common multiplier e^(-2πik/N) out of the second sum. It is then clear that the two sums are the DFT of the even-indexed part x_{2m} and the DFT of the odd-indexed part x_{2m+1} of the sequence x_n. Denoting the DFT of the even-indexed inputs x_{2m} by E_k and the DFT of the odd-indexed inputs x_{2m+1} by O_k, we obtain:
X_k = E_k + e^(-2πik/N) O_k
This result, expressing the DFT of length N recursively in terms of two DFTs of size N/2, is the core of the radix-2 DIT fast Fourier transform. The algorithm gains its speed by re-using the results of intermediate computations to compute multiple DFT outputs. Note that the final outputs are obtained by a ± combination of E_k and e^(-2πik/N) O_k, which is simply a size-2 DFT. This process is an example of the general technique of divide-and-conquer algorithms; in many traditional implementations, however, the explicit recursion is avoided, and instead one traverses the computational tree in breadth-first fashion.
This process of splitting the 'time domain' sequence into even and odd samples is what gives the algorithm its name, 'Decimation In Time'. As with the DIF algorithm, we have succeeded in expressing an N point transform as two (N/2) point sub-transforms. The principal difference here is that the order in which we do things has changed. In the DIF algorithm the time domain data was 'twiddled' before the two sub-transforms were performed. Here the two sub-transforms are performed first, and the final result is obtained by 'twiddling' the resulting frequency domain data. There is a slight problem here, because the two sub-transforms only give values for k = 0..N/2-1. We also need values for k = N/2..N-1. But from the periodicity of the DFT we know that E_{k+N/2} = E_k and O_{k+N/2} = O_k, so
X_{k+N/2} = E_k - e^(-2πik/N) O_k
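The recursion described above can be sketched directly. This is a minimal reference implementation, not an optimized FFT; it follows the E_k / O_k butterfly with the twiddle factor e^(-2πik/N):

```python
import cmath

def fft_dit(x):
    """Recursive radix-2 decimation-in-time FFT; len(x) must be a power of 2."""
    N = len(x)
    if N == 1:
        return list(x)
    E = fft_dit(x[0::2])              # E_k: DFT of the even-indexed samples
    O = fft_dit(x[1::2])              # O_k: DFT of the odd-indexed samples
    out = [0j] * N
    for k in range(N // 2):
        t = cmath.exp(-2j * cmath.pi * k / N) * O[k]   # twiddle factor times O_k
        out[k] = E[k] + t             # X_k       = E_k + e^(-2*pi*i*k/N) O_k
        out[k + N // 2] = E[k] - t    # X_{k+N/2} = E_k - e^(-2*pi*i*k/N) O_k
    return out
```

The two output lines are the size-2 DFT ('butterfly') mentioned above; halving the problem at each level gives the O(N log N) cost.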
RADIX-2 DIF
If N is even, the above sum can be split into 'top' (n = 0..N/2-1) and 'bottom' (n = N/2..N-1) halves and re-arranged as follows. If we now consider the form of this result for even and odd valued k separately, we can see how this result enables us to express an N point FFT in terms of two (N/2) point FFTs. The process of dividing the frequency components into even and odd parts is what gives this algorithm its name, 'Decimation In Frequency'. If N is a power of 2, we can apply this method recursively until we get to the trivial 1 point transform. The factors T_N are conventionally referred to as 'twiddle factors'.
11.(b) Find the output y(n) of a filter whose impulse response is h(n) = {1, 1, 1} and the input signal x(n) = {3, -1, 0, 1, 3, 2, 0, 1, 2, 1} using (i) the overlap-save method and (ii) the overlap-add method.
Overlap-Save Method
With h(n) = {1, 1, 1} (M = 3) and block length N = 5, each input block overlaps the previous one by M - 1 = 2 samples, and the first block is padded with two leading zeros:
X1(n) = {0, 0, 3, -1, 0}
X2(n) = {-1, 0, 1, 3, 2}
X3(n) = {3, 2, 0, 1, 2}
X4(n) = {1, 2, 1, 0, 0}
Circular convolution of each block with h(n) = {1, 1, 1, 0, 0} gives:
Y1(n) = {-1, 0, 3, 2, 2}
Y2(n) = {4, 1, 0, 4, 6}
Y3(n) = {6, 7, 5, 3, 3}
Y4(n) = {1, 3, 4, 3, 1}
Discarding the first M - 1 = 2 samples of each output block and concatenating the rest:
Y(n) = {3, 2, 2, 0, 4, 6, 5, 3, 3, 4, 3, 1}
Overlap-Add Method
Let the length of each data block be L = 3. Two zeros are appended to each block to bring its length to L + M - 1 = 5:
X1(n) = {3, -1, 0, 0, 0}
X2(n) = {1, 3, 2, 0, 0}
X3(n) = {0, 1, 2, 0, 0}
X4(n) = {1, 0, 0, 0, 0}
Convolving each block with h(n) gives:
Y1(n) = {3, 2, 2, -1, 0}
Y2(n) = {1, 4, 6, 5, 2}
Y3(n) = {0, 1, 3, 3, 2}
Y4(n) = {1, 1, 1, 0, 0}
Successive output blocks are overlapped by M - 1 = 2 samples, and the overlapping samples are added:
Y(n) = {3, 2, 2, 0, 4, 6, 5, 3, 3, 4, 3, 1}
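The overlap-add procedure can be checked against direct linear convolution. The sketch below uses the same x(n), h(n), and block length L = 3 as the worked example:

```python
def linear_convolve(x, h):
    """Direct linear convolution, used as the reference result."""
    y = [0] * (len(x) + len(h) - 1)
    for n, xn in enumerate(x):
        for m, hm in enumerate(h):
            y[n + m] += xn * hm
    return y

def overlap_add(x, h, L):
    """Block convolution: convolve each length-L block with h, add overlapping tails."""
    M = len(h)
    y = [0] * (len(x) + M - 1)
    for start in range(0, len(x), L):
        block = x[start:start + L]
        for n, v in enumerate(linear_convolve(block, h)):
            y[start + n] += v          # the last M-1 samples overlap the next block
    return y

x = [3, -1, 0, 1, 3, 2, 0, 1, 2, 1]
h = [1, 1, 1]
print(overlap_add(x, h, 3))   # [3, 2, 2, 0, 4, 6, 5, 3, 3, 4, 3, 1]
```

The block result matches the direct convolution, reproducing the Y(n) obtained above.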
12.(a) An analog filter has the system function H(s) = 1/((s + 0.1)^2 + 9). Convert this filter into a digital filter using the backward difference method.
Low pass Butterworth filters are all-pole filters with monotonic frequency response in both pass band and stop band, characterized by the magnitude-squared frequency response
|Ha(Ω)|^2 = 1 / (1 + (Ω/Ωc)^2N)
where N is the order of the filter, Ωc is the -3 dB frequency, i.e., the cutoff frequency, Ωp is the pass band edge frequency, and 1/(1 + ε^2) is the band edge value of |Ha(Ω)|^2. Since the product Ha(s)Ha(-s) evaluated at s = jΩ is simply equal to |Ha(Ω)|^2, it follows that
Ha(s)Ha(-s) = 1 / (1 + (s/jΩc)^2N)
The poles of Ha(s)Ha(-s) occur on a circle of radius Ωc at equally spaced points. From Eq. (5.29), we find the pole positions as the solutions of
(s/jΩc)^2N = -1
and hence, the N poles in the left half of the s-plane are
s_k = Ωc e^(jπ(2k + N + 1)/2N),  k = 0, 1, ..., N - 1
Note that there are no poles on the imaginary axis of the s-plane; for N odd there will be a pole on the real axis of the s-plane, while for N even there are no poles on the real axis. Also note that all complex poles occur in conjugate pairs. Thus the design methodology for a Butterworth low pass filter with αs attenuation at a specified frequency Ωs is: first find N and Ωc from the specifications.
Design an analog Butterworth filter that has -2 dB pass band attenuation at a frequency of 20 rad/sec and at least -10 dB stop band attenuation at 30 rad/sec.
Solution
αp = 2 dB; Ωp = 20 rad/sec
αs = 10 dB; Ωs = 30 rad/sec
N ≥ log10[(10^(0.1αs) - 1)/(10^(0.1αp) - 1)] / (2 log10(Ωs/Ωp)) = 3.37
Rounding up, N = 4. The fourth-order normalized Butterworth transfer function is
H(s) = 1 / [(s^2 + 0.7654s + 1)(s^2 + 1.8478s + 1)]
The cutoff frequency is
Ωc = Ωp / (10^(0.1αp) - 1)^(1/2N) = 21.3868 rad/sec
Substituting s → s/21.3868 in H(s) gives the required filter.
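The order and cutoff computation above can be reproduced numerically; the formulas below are the standard Butterworth design equations used in the solution:

```python
import math

def butterworth_order(ap_db, as_db, wp, ws):
    """Exact (non-integer) Butterworth order meeting the attenuation specs."""
    ratio = (10 ** (0.1 * as_db) - 1) / (10 ** (0.1 * ap_db) - 1)
    return math.log10(ratio) / (2 * math.log10(ws / wp))

n_exact = butterworth_order(2, 10, 20, 30)           # specs from the example above
N = math.ceil(n_exact)                               # round up to an integer order
wc = 20 / (10 ** (0.1 * 2) - 1) ** (1 / (2 * N))     # cutoff from the pass-band spec
print(round(n_exact, 2), N)                          # 3.37 4
print(wc)                                            # about 21.3868 rad/sec
```

This reproduces N = 4 and Ωc ≈ 21.3868 rad/sec from the worked solution.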
A digital filter is a linear shift-invariant discrete-time system that is realized using finite
precision arithmetic. The design of digital filters involves three basic steps: the specification of the desired properties of the system, the approximation of these specifications using a causal discrete-time system, and the realization of the system using finite precision arithmetic.
These three steps are independent; here we focus our attention on the second step. The desired
digital filter is to be used to filter a digital signal that is derived from an analog signal by means
of periodic sampling. The specifications for both analog and digital filters are often given in the
frequency domain, as for example in the design of low pass, high pass, band pass and band
elimination filters.
Given the sampling rate, it is straight forward to convert from frequency specifications on an
analog filter to frequency specifications on the corresponding digital filter, the analog
frequencies being in terms of Hertz and digital frequencies being in terms of radian frequency or
angle around the unit circle with the point Z=-1 corresponding to half the sampling frequency.
The least confusing point of view toward digital filter design is to consider the filter as being
specified in terms of angle around the unit circle rather than in terms of analog frequencies.
A separate problem is that of determining an appropriate set of specifications on the digital filter.
In the case of a low pass filter, for example, the specifications often take the form of a tolerance
scheme, as shown in Figure
Many of the filters used in practice are specified by such a tolerance scheme, with no constraints
on the phase response other than those imposed by stability and causality requirements; i.e., the
poles of the system function must lie inside the unit circle. Given a set of specifications in the
form of Fig. 5.1, the next step is to find a discrete time linear system whose frequency response
falls within the prescribed tolerances. At this point the filter design problem becomes a problem
in approximation. In the case of infinite impulse response (IIR) filters, we must approximate the
desired frequency response by a rational function, while in the finite impulse response (FIR)
filters case we are concerned with polynomial approximation.
LIMIT CYCLE OSCILLATIONS:
When a stable IIR digital filter is excited by a finite sequence that is constant, the output will decay to zero. However, the nonlinearities due to the finite precision arithmetic operations often cause periodic oscillations to occur in the output. Such oscillations in recursive systems are called zero input limit cycle oscillations.
Consider a first order IIR filter with the difference equation
y(n) = x(n) + a y(n-1)
Let us assume a = 1/2 and a data register length of 3 bits plus a sign bit. These oscillations are due to round-off errors in multiplication and to overflow in addition.
Dead band:
Limit cycles occur as a result of the quantization effects in multiplication. The amplitudes of the output during a limit cycle are confined to a range of values that is called the dead band of the filter.
For the first order system described by the equation
y(n) = a y(n-1) + x(n)
the dead band is given by
|y(n-1)| ≤ (q/2) / (1 - |a|)
where q is the quantization step.
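A short simulation illustrates the dead band. The coefficient a = 1/2 and the 3-bit-plus-sign register follow the text; the initial condition y(-1) = 0.875 is an assumed example value. With rounding after each multiplication, the zero-input response gets stuck instead of decaying to zero:

```python
import math

def quantize_round(v, q=0.125):
    """Round v to the nearest multiple of q (round half up), modeling a
    register with 3 fraction bits (q = 2^-3)."""
    return math.floor(v / q + 0.5) * q

def zero_input_response(a=0.5, y_prev=0.875, steps=10):
    """Iterate y(n) = Q[a * y(n-1)] with zero input, quantizing after the multiply."""
    out = []
    for _ in range(steps):
        y_prev = quantize_round(a * y_prev)
        out.append(y_prev)
    return out

print(zero_input_response())
# [0.5, 0.25, 0.125, 0.125, 0.125, 0.125, 0.125, 0.125, 0.125, 0.125]
```

The output never decays below 0.125, which is exactly the dead band (q/2)/(1 - |a|) = 0.0625/0.5 for this filter.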
In the fixed point addition of two binary numbers, overflow occurs when the sum exceeds the finite word length of the register used to store the sum. Overflow in addition may lead to oscillations in the output, which are referred to as overflow limit cycles.
Overflow oscillations can be eliminated if saturation arithmetic is performed. In saturation arithmetic, when an overflow is sensed, the output (sum) is set equal to the maximum allowable value, and when an underflow is sensed, the output (sum) is set equal to the minimum allowable value.
Therefore the A/D converter output is the sum of the input signal x(n) and the error signal e(n). If rounding is used for quantization, then the quantization error e(n) = xq(n) - x(n) is bounded by -q/2 ≤ e(n) ≤ q/2. In most cases, we can assume that the analog to digital conversion error e(n) has the following properties.
Since the number of digits is fixed, it is impossible to represent numbers that are too large or too small. In fixed point representation there are three different formats for representing a negative binary number:
Sign magnitude form
1's complement form
2's complement form
Sign magnitude form:
In this representation, the most significant bit is set to 1 to represent the negative sign.
For example:
(1.001)2 = -0.125 10
(0.001)2 = +0.125 10
The decimal number -1.75 is represented as 11.110000 and +1.75 is represented as 01.110000.
1's complement form:
In 1's complement form, a positive number is represented as in sign magnitude form. A negative number is obtained by complementing all the bits of the corresponding positive number.
For example, the decimal number -0.875:
0.875 = (0.111000)2
-0.875 = complement of (0.111000)2 = (1.000111)2
2's complement form:
In 2's complement representation, positive numbers are represented as in sign magnitude and 1's complement forms. A negative number is obtained by complementing all the bits of the positive number and adding one to the least significant bit:
0.875 = 0.111000
1's complement:  1.000111
add 1 LSB:      +0.000001
-0.875 = (1.001000)2
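The three formats can be generated programmatically. This sketch uses 6 fraction bits, matching the examples above, and reproduces the representations of -0.875:

```python
def negative_fixed_formats(value, frac_bits=6):
    """Bit strings (1 sign bit + frac_bits fraction bits) for a negative fraction
    in sign magnitude, 1's complement, and 2's complement form."""
    mag = round(abs(value) * (1 << frac_bits))            # integer magnitude
    sign_mag = '1.' + format(mag, f'0{frac_bits}b')
    ones = '1.' + format((1 << frac_bits) - 1 - mag, f'0{frac_bits}b')  # flip all bits
    twos = '1.' + format((1 << frac_bits) - mag, f'0{frac_bits}b')      # flip, then add 1 LSB
    return sign_mag, ones, twos

print(negative_fixed_formats(-0.875))
# ('1.111000', '1.000111', '1.001000')
```

The 2's complement string equals the 1's complement string plus one in the least significant bit, as described above.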
In the case of rounding, e(n) lies between -q/2 and q/2 with equal probability. The variance of e(n) is given by
σe^2 = E[e^2(n)] - (E[e(n)])^2
where E[e^2(n)] is the mean square value of e(n) and E[e(n)] is the mean value of e(n). For e(n) uniformly distributed over (-q/2, q/2), E[e(n)] = 0 and σe^2 = q^2/12.
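A quick Monte Carlo check confirms that rounding error behaves like uniform noise with variance q^2/12. The step size q and the sample count here are arbitrary choices for illustration:

```python
import random

q = 2 ** -8                        # quantization step (8 fraction bits, an assumed value)
random.seed(1)                     # reproducibility
errors = []
for _ in range(200000):
    x = random.uniform(-1.0, 1.0)  # input sample, amplitude much larger than q
    xq = round(x / q) * q          # rounding quantizer
    errors.append(xq - x)

mean = sum(errors) / len(errors)
var = sum(e * e for e in errors) / len(errors) - mean ** 2
print(var, q * q / 12)             # measured variance vs theoretical q^2/12
```

The measured variance lands within a fraction of a percent of q^2/12, supporting the uniform-noise model of quantization error.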
The aim of (digital) sample-rate conversion is to bring a digital audio signal from one sample
frequency to another. The distortion of the audio signal, introduced by the sample-rate converter,
should be as low as possible. The generation of output samples from the input samples may be
performed by the application of various methods. One solution is depicted in figure 5.1. The
input samples at a sampling rate Fi are up sampled by an integer factor N. The signal at NFi is
low-pass filtered and decimated by an integer factor M to obtain the output sample rate Fo. This
solution implies that the conversion factor Fo/Fi is equal to N/M which is a rational number. The
master clocks from which Fi and Fo are obtained have to be locked. The implementation of this method is not so difficult for small values of N and M. For large N and M the filter becomes more complicated, as the required stop-band attenuation increases with increasing N and M.
When for instance a 16-bit digital audio signal at a sampling rate of 48 kHz must be converted to
a sampling rate of 44.1 kHz, this method leads to N=147 and M=160. In this example the
common frequency, which is also the operating frequency of the filter, becomes 7.056MHz. Due
to the folding products which results from decimating by 160, the required stop-band attenuation
must at least be 120 dB.
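The 48 kHz to 44.1 kHz figures can be reproduced with a short calculation; Fraction reduces the rate ratio to lowest terms:

```python
from fractions import Fraction

def conversion_factors(f_in, f_out):
    """Smallest integer up-sampling factor N and decimation factor M with f_out/f_in = N/M."""
    r = Fraction(f_out, f_in)      # automatically reduced to lowest terms
    return r.numerator, r.denominator

N, M = conversion_factors(48000, 44100)
print(N, M)            # 147 160
print(48000 * N)       # 7056000 -> the common filter rate of 7.056 MHz
```

This matches the N = 147, M = 160 example in the text.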
The operation principle of the new method of sample rate conversion is very simple. An input
sample is directly transferred to the output, while per unit of time, a certain amount of these
samples is omitted or repeated, depending on the difference in input and output sample
frequencies. The omission, acceptance or repetition of a sample is called validation. In order to
get the simplest hardware implementation, the choice has been made to use only the take-over
operation and the repetition operation in the current system solution. This means that the output
sampling frequency of the sample rate converter is always larger than the input sample
frequency. The process of repeating samples inevitably introduces errors.
The resulting output samples will have correct values, but as a result of the validation operation,
they are placed on the output time grid with a variable time delay with respect to the input time
grid. As a consequence, the output sequence should be viewed as the input sequence, having the
correct signal amplitude, which is sampled at wrong time moments. The effect is the same as
sampling the input signal by a jittered clock. As a result, it can be stated that the time error
mechanism introduced by the validation algorithm is time jitter.
If all input samples would be transferred to the output grid without the repetition or omission of
a certain amount of them, then the output signal would be just a delayed version of the input
signal, exhibiting the same shape. It is the repetition and omission (in the current system setup
only the repetition ) of input samples that give rise to a variation in time delay for each individual
output sample. This variation in individual time delays introduces phase errors. As a result of
this, the shape of the output signal will be distorted. The time errors introduced by the conversion
process can be reduced considerably by applying up sampling and down sampling techniques.
The input sample rate of the converter will be higher so that the conversion errors are smaller,
resulting in smaller time jitter.
These techniques do not suffice when we want to achieve the very high analog audio
performance required for professional applications. By using a sigma-delta modulator (noise
shaper) as control source for the conversion process, the time errors will be shaped to the higher
frequency region. As a result, the audio quality ( in the baseband) of the signal will be preserved,
provided that enough bandwidth is created by up sampling of the input signal. The high
frequency (out of base band) phase modulation terms can be filtered by a decimation filter or an
analog low-pass filter which is directly placed after the sample-rate converter. Figure 5.3 shows
the block diagram of the complete sample-rate converter.
As has already been mentioned, only the input sample take over operation will be employed here
in order to get the simplest hardware. This means that the input sample frequency of the
converter must always be smaller than the output sample frequency. With this restriction
imposed, it is assured that all input samples are used in the output sequence, none of them being
omitted. The extra output samples per unit of time are inserted in the output sequence by
repetition of their previous output samples.
The automated recognition of human speech is immensely more difficult than speech generation.
Speech recognition is a classic example of things that the human brain does well, but digital
computers do poorly. Digital computers can store and recall vast amounts of data, perform
mathematical calculations at blazing speeds, and do repetitive tasks without becoming bored or
inefficient. Unfortunately, present day computers perform very poorly when faced with raw
sensory data. Teaching a computer to send you a monthly electric bill is easy. Teaching the same
computer to understand your voice is a major undertaking. Digital Signal Processing generally
approaches the problem of voice recognition in two steps: feature extraction followed by feature
matching. Each word in the incoming audio signal is isolated and then analyzed to identify the
type of excitation and resonant frequencies. These parameters are then compared with previous
examples of spoken words to identify the closest match. Often, these systems are limited to only
a few hundred words; can only accept speech with distinct pauses between words; and must be
retrained for each individual speaker. While this is adequate for many commercial applications,
these limitations are humbling when compared to the abilities of human hearing. There is a great
deal of work to be done in this area, with tremendous financial rewards for those who produce successful commercial products.
Explain sub band coding of speech and audio signals using DSP.
In signal processing, Sub-band coding (SBC) is any form of transform coding that breaks a
signal into a number of different frequency bands and encodes each one independently. This
decomposition is often the first step in data compression for audio and video signals. The utility
of SBC is perhaps best illustrated with a specific example. When used for audio compression,
SBC exploits auditory masking in the human auditory system. Human ears are normally sensitive
to a wide range of frequencies, but when a sufficiently loud signal is present at one frequency,
the ear will not hear weaker signals at nearby frequencies. We say that the louder signal masks
the softer ones. The louder signal is called the masker, and the point at which masking occurs is
known as the masking threshold.
The basic idea of SBC is to enable a data reduction by discarding information about frequencies
which are masked. The result differs from the original signal, but if the discarded information is
chosen carefully, the difference will not be noticeable, or more importantly, objectionable.
The simplest way to digitally encode audio signals is pulse-code modulation (PCM), which is
used on audio CDs, DAT recordings, and so on. Digitization transforms continuous signals into
discrete ones by sampling a signal's amplitude at uniform intervals and rounding to the nearest
value representable with the available number of bits. This process is fundamentally inexact, and
involves two errors: discretization error, from sampling at intervals, and quantization error, from
rounding.
The more bits used to represent each sample, the finer the granularity of the digital representation,
and thus the smaller the error. Such quantization errors may be thought of as a type of noise,
because they are effectively the difference between the original source and its binary
representation. With PCM, the only way to mitigate the audible effects of these errors is to use
enough bits to ensure that the noise is low enough to be masked either by the signal itself or by
other sources of noise. A high quality signal is possible, but at the cost of a high bit rate (e.g.,
over 700 kbit/s for one channel of CD audio). In effect, many bits are wasted in encoding masked
portions of the signal because PCM makes no assumptions about how the human ear hears.
More clever ways of digitizing an audio signal can reduce that waste by exploiting known
characteristics of the auditory system. A classic method is nonlinear PCM, such as mu-
law encoding (named after a perceptual curve in auditory perception research). Small signals are
digitized with finer granularity than are large ones; the effect is to add noise that is proportional
to the signal strength. Sun's Au file format for sound is a popular example of mu-law encoding.
Using 8-bit mu-law encoding would cut the per-channel bit rate of CD audio down to about 350
kbit/s, or about half the standard rate. Because this simple method only minimally exploits
masking effects, it produces results that are often audibly poorer than the original.
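A mu-law companding sketch makes the nonlinearity concrete. This uses the standard continuous mu = 255 curve rather than the segmented table lookup of real codecs:

```python
import math

def mu_law_encode(x, mu=255):
    """Continuous mu-law compression of x in [-1, 1] (mu = 255 is the standard value)."""
    return math.copysign(math.log1p(mu * abs(x)) / math.log1p(mu), x)

def mu_law_decode(y, mu=255):
    """Inverse of mu_law_encode."""
    return math.copysign(((1 + mu) ** abs(y) - 1) / mu, y)

# a small input occupies a disproportionately large share of the coded range,
# so quantizing the encoded value gives small signals finer granularity
print(round(mu_law_encode(0.1), 3))   # about 0.591
```

Encoding, quantizing the result to 8 bits, and decoding yields the signal-proportional noise described above.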
Sub-band coding is used for example in G.722 codec. It uses sub-band adaptive differential pulse
code modulation (SB-ADPCM) within a bit rate of 64 kbit/s. In the SB-ADPCM technique used,
the frequency band is split into two sub-bands (higher and lower) and the signals in each sub-
band are encoded using ADPCM.
To enable higher quality compression, one may use sub band coding. First, a digital filter bank
divides the input signal spectrum into some number (e.g., 32) of sub bands. The psychoacoustic
model looks at the energy in each of these sub bands, as well as in the original signal, and
computes masking thresholds using psychoacoustic information. Each of the sub band samples is
quantized and encoded so as to keep the quantization noise below the dynamically computed
masking threshold. The final step is to format all these quantized samples into groups of data
called frames, to facilitate eventual playback by a decoder. Decoding is much easier than
encoding, since no psychoacoustic model is involved. The frames are unpacked, sub band
samples are decoded, and a frequency-time mapping reconstructs an output audio signal.