
Theorem: A band-limited signal can be reconstructed if it is sampled at a rate at least twice the maximum frequency component in it.
Let m(t) be a continuous signal band limited to its highest frequency spectral component, W Hz.
Values of m(t) are determined at regular intervals separated by Ts seconds such that Ts ≤ 1/(2W), or fs ≥ 2W.
m(nTs), where n is an integer, is the instantaneous amplitude of m(t) at the sampling points t = nTs.
Note: The sampling rate must be rapid enough that at least two samples
are taken during the course of the period corresponding to the
highest-frequency spectral component.
i.e., the number of samples to be taken depends on the maximum
frequency contained in the signal to be sampled.
s(t) is a periodic train of pulses of unit amplitude and of period Ts, given by:
s(t) = Σ_n δ(t − nTs)   -----> Impulse function
where δ(t) = 1 at t = 0,
= 0, otherwise.
And δ(t − nTs) = 1 at t = nTs,
= 0 otherwise.
Train of Impulse function s(t) representation:
m(t) x s(t) = m_δ(t)
m(t) · Σ_n δ(t − nTs) = m_δ(t)
m_δ(t) = Σ_n m(t) · δ(t − nTs) = Σ_n m(nTs) · δ(t − nTs)   -----> Sampled signal
Find the Fourier transform of the sampled signal.
The F.T. of the train of impulses is given by:
S(f) = fs · Σ_n δ(f − n·fs)
(Dirac-delta train: the F.T. of a train of impulses of period T₀ is X(f) = f₀ · Σ_n δ(f − n·f₀).)
The F.T. of m_δ(t) is given by:
M_δ(f) = M(f) ∗ [fs · Σ_n δ(f − n·fs)]
M_δ(f) = fs · Σ_n M(f) ∗ δ(f − n·fs)
M(f) ∗ δ(f − n·fs) = M(f − n·fs)   (Property of convolution)
M_δ(f) = fs · Σ_n M(f − n·fs)
Thus, the Fourier transform of the sampled signal m_δ(t) is a sum of scaled
and shifted replicas of the Fourier transform of the original signal m(t).
M_δ(f) = … + fs·M(f + 2fs) + fs·M(f + fs) + fs·M(f) + fs·M(f − fs) + fs·M(f − 2fs) + …
Spectral representation of M(f) :
Spectral representation of M_δ(f):
M_δ(f) = fs·M(f) + fs · Σ_{n≠0} M(f − n·fs)
M(f) = (1/fs) · M_δ(f)   for −W ≤ f ≤ W
(equivalently, M(f) = (1/fs) · [M_δ(f) − fs · Σ_{n≠0} M(f − n·fs)]; the replicas do not overlap the band −W ≤ f ≤ W when fs ≥ 2W)
The F.T. of the discrete-time signal m_δ(t) in complex form is given by:
M_δ(f) = Σ_n m(nTs) · e^(−j2πf·nTs)   (Discrete Fourier transform in complex form)

M(f) = (1/2W) · Σ_n m(nTs) · e^(−j2πf·nTs) = (1/2W) · Σ_n m(n/2W) · e^(−jπnf/W) ;   −W ≤ f ≤ W,   Ts = 1/(2W)
Thus, if the sample values m(n/2W) of the signal m(t) are specified for all time,
then the F.T. M(f) of the original signal is determined by using the above equation.
Re-construction of signal from samples:
m(t) = IFT{M(f)}
= IFT{ (1/2W) · Σ_n m(n/2W) · e^(−jπnf/W) }
= ∫_{−W}^{W} (1/2W) · Σ_n m(n/2W) · e^(−jπnf/W) · e^(j2πft) df
= Σ_n m(n/2W) · (1/2W) · ∫_{−W}^{W} e^(j2πf(t − n/2W)) df
= Σ_n m(n/2W) · (1/2W) · [ e^(j2πf(t − n/2W)) / (j2π(t − n/2W)) ] from −W to W
= Σ_n m(n/2W) · [ e^(j2πW(t − n/2W)) − e^(−j2πW(t − n/2W)) ] / (j4πW(t − n/2W))
= Σ_n m(n/2W) · sin(2πWt − nπ) / (2πWt − nπ)
sinc x = sin(πx) / (πx)   -----> Sinc function
m(t) = Σ_n m(n/2W) · sinc(2Wt − n)
Interpolation formula for reconstructing original signal m(t) from
sequence of samples m(n/2W).
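As a quick numerical check (not part of the notes), the interpolation formula can be sketched in Python with NumPy; the bandwidth W = 10 Hz and the 7 Hz test tone are assumed values chosen only for illustration:

```python
import numpy as np

W = 10.0              # assumed message bandwidth in Hz (illustration only)
Ts = 1 / (2 * W)      # Nyquist-rate sampling interval, Ts = 1/2W

def m(t):
    # example band-limited message: a 7 Hz tone (below W)
    return np.cos(2 * np.pi * 7.0 * t)

# sample values m(n/2W), taken over a wide window so the sinc tails settle
n = np.arange(-500, 501)
samples = m(n * Ts)

def reconstruct(t):
    # interpolation formula: m(t) = sum_n m(n/2W) * sinc(2W*t - n)
    # np.sinc(x) = sin(pi*x)/(pi*x), matching the normalized sinc used above
    return np.sum(samples * np.sinc(2 * W * t - n))

t0 = 0.013            # an arbitrary instant between sampling points
print(abs(reconstruct(t0) - m(t0)) < 1e-2)   # True
```

With a finite number of samples the sum is only approximate, which is why the tolerance is loose; the reconstruction becomes exact as the number of terms grows.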
Graphical representation of Interpolation process:
Delayed version of sinc function
m(t) = m(0)·sinc(2Wt) + m(±Ts)·sinc(2W(t ∓ Ts)) + m(±2Ts)·sinc(2W(t ∓ 2Ts)) + …
SAMPLING AND RECONSTRUCTION:
The overall operation of sampling with subsequent reconstruction of a band limited signal m (t) looks
like this:
Note: Actual signals are never strictly band limited. For instance, an audio recording has a
finite duration and therefore cannot be band limited. This can be handled via pre-filtering.
1) fs=2W:
2) fs>2W:
3) fs<2W:
Aliasing:
The result of sampling the signal at a frequency lower than the Nyquist rate.
When the signal is converted back into a continuous-time signal, it will
exhibit a phenomenon called aliasing.
Aliasing is the presence of unwanted components in the reconstructed
signal. These components were not present when the original signal was
sampled.
Aliasing occurs because signal frequencies can overlap if the sampling
frequency is too low. Frequencies "fold" around half the sampling
frequency, referred to as the folding frequency.
With these samples passed to the reconstruction filter, the output is not the same as the
original input analog signal: the high-frequency parts of the spectrum are corrupted.

Aliasing is avoided by using an anti-aliasing filter, which limits the signal bandwidth to a maximum of fs/2
prior to sampling.
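The folding effect can be seen numerically. In this small sketch (the rates are assumed values), a 70 Hz tone sampled at fs = 100 Hz, whose folding frequency is 50 Hz, produces exactly the same samples as a 30 Hz tone:

```python
import numpy as np

fs = 100.0               # assumed sampling rate in Hz; folding frequency is fs/2 = 50 Hz
f_high = 70.0            # tone above the folding frequency
f_alias = fs - f_high    # it folds to 30 Hz

n = np.arange(32)
t = n / fs
samples_high = np.cos(2 * np.pi * f_high * t)
samples_alias = np.cos(2 * np.pi * f_alias * t)

# the 70 Hz tone and the 30 Hz tone produce identical samples,
# so the reconstruction filter cannot tell them apart
print(np.allclose(samples_high, samples_alias))   # True
```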
Sampling theorem for Bandpass signal:
The band-pass signal m(t), whose maximum bandwidth is 2W, can be
completely represented by, and recovered from, its samples if it is
sampled at a minimum rate of twice the bandwidth.
Thus if the bandwidth is 2W, the minimum sampling rate for a band-pass signal
should be 4W samples/second. The figure below shows the spectrum of a band-pass signal.
The spectrum is centered at frequency fc, and the bandwidth is 2W. Thus the
frequencies in the band-pass signal range from fc − W to fc + W, and the highest
frequency present is fc + W.
A band-pass signal is represented in terms of its in-phase and
quadrature components:
Let m(t) = m_I(t)·cos(2πf_c·t) − m_Q(t)·sin(2πf_c·t)
The in-phase and quadrature components are obtained by multiplying
m(t) by cos(2πf_c·t) and sin(2πf_c·t) respectively, and then suppressing the sum
frequencies by means of a low-pass filter. Thus the in-phase and quadrature
components contain only low-frequency components, limited between
−W and W.
Reconstruction formula is given as:
m(t) = Σ_n m(n/4W) · sinc(2Wt − n/2) · cos(2πf_c(t − n/4W))
Ts = 1/(4W)
Thus if 4W samples/second are taken, the band-pass signal of
bandwidth 2W can be completely recovered from its samples.
Hence for band-pass signals of bandwidth 2W, the minimum sampling rate is
4W samples/second.
Sampling Techniques:
In ideal sampling the samples are impulses of height equal to the modulating signal at the
sampling instant. Practical sampling differs from ideal sampling in the
following aspects:
The sampled pulses have finite duration and amplitude, rather than being
impulses.
Practical reconstruction filters are not ideal; a guard band is needed.
The signal to be sampled is not strictly band limited, which causes a problem in
selecting the sampling rate.
Ideal Sampling:
Switching sampler: if the closing time t of the switch approaches 0, the
output m_δ(t) gives only instantaneous values.
Since the width of the pulse approaches 0, instantaneous sampling
gives a train of impulses in the output.
m(t) x s(t) = m_δ(t)
m_δ(t) = m(t) · Σ_n δ(t − nTs) = Σ_n m(nTs) · δ(t − nTs)   -----> Sampled signal
M_δ(f) = fs · Σ_n M(f − n·fs)   --------> Spectrum of ideally sampled signal.
Flat-top Sampling:
The top of each sample remains constant and equal to the instantaneous value
of the message signal m(t) at the start of sampling.
The duration of each pulse is τ, and the sampling rate is fs = 1/Ts.
The starting edge of each pulse represents the value of the message signal m(t).
The width is determined by h(t) and the sampling instant by δ(t).
s(t) = m_δ(t) ∗ h(t)   ---> h(t) is a pulse of duration τ, and the amplitude is defined by m_δ(t)
s(t) = Σ_n m(nTs) · h(t − nTs)   (Mathematical equivalent of the flat-top pulse)
By the replication property: m(t) ∗ δ(t) = m(t)
Spectrum of the flat-top sampled signal:
S(f) = fs · Σ_n M(f − n·fs) · H(f)
Aperture Effect:
The spectrum of the rectangular pulse is H(f) = τ·sinc(fτ)·e^(−jπfτ), since A = 1.
The high-frequency roll-off of H(f) acts like a low-pass filter, and attenuates the upper portion of the message
spectrum.
Maintaining the same amplitude throughout the pulse duration results in amplitude distortion and delay.
As τ increases, the aperture effect increases.
Cut-off frequency of LPF is taken slightly higher than the maximum frequency of message signal.
And the transfer function of the equalizer is given by:
Natural Sampling:
The pulses have finite width τ.
s(t) = x(t) when c(t) = A
s(t) = 0 when c(t) = 0
PAM Generation:
There are two steps involved in the generation of PAM:
Taking instantaneous samples of the message signal m(t) at a sampling rate fs = 1/Ts, chosen
according to the sampling theorem.
Lengthening the duration of each sample so obtained to some constant value, T.
Other forms of Pulse Modulation:
PWM:

PWM and PPM Detection:
Multiplexing:
More than one user shares the channel with other users.
Some multiplexing techniques allow full simultaneity, with the information from each source carried
along the channel at the same time as the others. In other schemes the channel is shared, being
switched from user to user quickly enough that all necessary information is carried as fast as it is being
generated. To the receiver there is no significant time lag, although there are short times when the
channel is not directly connected to the user.
The choice of which multiplexing technique to use depends on many factors: the amount of data to be transmitted, the
number of users, the bandwidth available, power usage, and physical space.
Frequency Division Multiplexing: Total channel B.W is divided into smaller bands- each user is assigned
one band.
Time Division Multiplexing: Channel used only a single frequency band and the users share this, with each
user having access at a different time.
FDM:
The idea is to modulate the signals of various users onto carriers at different frequencies. These modulated signals can then
share the same path, occupying different parts of the electromagnetic spectrum.
The type of modulation can be AM, FM, or PM. For example, the AM broadcast band uses AM in an FDM scheme to allow multiple stations in
the 550-1600 kHz range, and the FM band allows for stations in 88-108 MHz.
The total bandwidth required is the sum of the individual bandwidths plus the guard bands that are required.
In FDM, a problem for one user can affect others.
Users can be added by adding a set of transmitter modulators and receiver demodulators.
The maximum number of users is determined by the amount of bandwidth each user is assigned and by the total bandwidth of the
channel.
Space vehicle example.
Used in telephone systems.
Time Division Multiplexing:
A single path and carrier frequency is used.
Each user is assigned a unique time slot for his or her signal.
A central switch or multiplexer goes from one user to another in a predictable sequence and
time. When the switch is at A, user A's signal is put into the link.
At the receiver, the demultiplexer reverses this process and sorts each signal received in time
sequence to the correct user.
The TDM output carries more information, approximately equal to the sum of the bandwidths of the individual users.
Used within computer systems.
PAM-TDM
Bit rate in TDM = number of bits/frame x number of frames/second.
Signaling rate in TDM (number of pulses transmitted/second) = 2 x pulses in one frame x highest
frequency of all inputs.
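The two rate formulas above can be checked with a small, hypothetical PAM-TDM system; the 4 channels, the 4 kHz input bandwidth, and the 8-bit encoding per sample are assumed numbers, not from the notes:

```python
# hypothetical PAM-TDM system: 4 inputs, each band limited to 4 kHz
num_channels = 4
highest_freq = 4000                      # Hz, highest frequency among all inputs
frames_per_second = 2 * highest_freq     # each input sampled at the Nyquist rate
pulses_per_frame = num_channels          # one pulse (sample) per channel per frame

# Signaling rate = 2 x pulses in one frame x highest input frequency
signaling_rate = 2 * pulses_per_frame * highest_freq
print(signaling_rate)                    # 32000 pulses/second

# Bit rate = bits/frame x frames/second (assuming 8 bits per sample)
bits_per_sample = 8
bit_rate = (pulses_per_frame * bits_per_sample) * frames_per_second
print(bit_rate)                          # 256000 bits/second
```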
Pulse Code Modulation:
Pulse code modulation (PCM) is produced by analog-to-digital conversion
process.
As in the case of other pulse modulation techniques, the rate at which
samples are taken and encoded must conform to the Nyquist sampling
rate.
The sampling rate must be greater than, or equal to, twice the highest
frequency in the analog signal: fs ≥ 2W.
Example to illustrate the pulse code modulation of an analog signal is
shown:

PCM Waveforms:
Non-return to Zero (NRZ) PCM
Return to zero (RZ) PCM
Phase encoded PCM
Multi-level binary PCM
NRZ PCM:
NRZ-L PCM:

NRZ-M PCM:
NRZ-S PCM:
Return-to-zero (RZ):
Unipolar-RZ:
Bipolar-RZ PCM:
RZ-AMI PCM:
Phase Encoded:
Bi-phase-level (bi-φ-L) or Manchester coding, bi-phase-mark (bi-φ-M),
and bi-phase-space (bi-φ-S).
Bi-phase-level (bi-φ-L):
Bi-phase-mark (bi-φ-M):
Bi-phase-space (bi-φ-S):
Multi-level Binary:
Di-code NRZ PCM waveforms:
Di-code RZ PCM waveforms:
Quantization:
Sampling results in a series of pulses of varying amplitude values ranging between two limits: a
min and a max.
The amplitude values are infinite between the two limits.
We need to map the infinite amplitude values onto a finite set of known values. This is achieved
by dividing the distance between min and max into L zones, each of height Δ:
Δ = (max − min)/L
Quantization Levels:
The midpoint of each zone is assigned a value from 0 to L−1 (resulting in L values).
Each sample falling in a zone is then approximated to the value of the midpoint.
Quantization Zones:
Assume we have a voltage signal with amplitudes Vmin = −20 V and Vmax = +20 V.
We want to use L = 8 quantization levels.
Zone width or step size: Δ = {20 − (−20)}/8 = 5
The 8 zones are: −20 to −15, −15 to −10, −10 to −5, −5 to 0, 0 to +5, +5 to +10, +10 to +15, +15 to +20.
The midpoints are: −17.5, −12.5, −7.5, −2.5, 2.5, 7.5, 12.5, 17.5.
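The worked example (L = 8 zones over −20 V to +20 V) can be captured in a small quantizer sketch; the function name and the clipping behaviour at the edges are my own choices, not from the notes:

```python
def quantize(x, vmin=-20.0, vmax=20.0, L=8):
    """Map x to the midpoint of its zone (uniform quantization)."""
    step = (vmax - vmin) / L                    # zone width: 5 V here
    x = min(max(x, vmin), vmax)                 # clip to the quantizer range
    zone = min(int((x - vmin) / step), L - 1)   # zone index 0 .. L-1
    return vmin + (zone + 0.5) * step           # zone midpoint

print(quantize(-19.0))   # -17.5
print(quantize(6.2))     # 7.5
print(quantize(20.0))    # 17.5 (the top edge falls into the last zone)
```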
Assigning Codes to Zones:-
Each zone is then assigned a binary code.
The number of bits required to encode the zones, or the number of bits per sample as it is
commonly referred to, is obtained as follows:
n_b = log₂L
Given our example, n_b = 3.
The 8 zone (or level) codes are therefore: 000, 001, 010, 011, 100, 101, 110, and 111
Assigning codes to zones:
000 will refer to zone -20 to -15
001 to zone -15 to -10, etc.
Quantization Error:
When a signal is quantized, we introduce an error - the coded signal is an approximation of the
actual amplitude value.
The difference between actual and coded value (midpoint) is referred to as the quantization
error.
The more zones, the smaller Δ, which results in smaller errors.
BUT, the more zones, the more bits required to encode the samples -> higher bit rate.
Maximum quantization error is ±Δ/2.
To reduce the error we need to increase the number of zones, i.e. use a smaller step size.
Quantization noise = Δ²/12
Signal to Quantization noise ratio:
SNR should be high, which can be achieved by decreasing the step size (more levels) --> higher bit rate --> more B.W.
Signaling rate (no. of bits per sec) = no. of samples/sec x no. of bits/sample
Transmission B.W is proportional to the signaling rate.
(S_i/N_q) in dB = 1.8 + 6n, where n is the number of bits per sample.
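A quick sketch of the last formula; `sqnr_db_exact` derives the same number from S/Nq = (3/2)·2^(2n), the full-load-sinusoid result that the 1.8 + 6n approximation comes from (the 8-bit example is an assumed value):

```python
import math

def sqnr_db(n_bits):
    # the formula above: S/Nq in dB ~= 1.8 + 6n for an n-bit quantizer
    return 1.8 + 6 * n_bits

def sqnr_db_exact(n_bits):
    # same number from first principles: S/Nq = (3/2) * 2**(2n)
    # for a full-load sinusoid and quantization noise power step**2 / 12
    return 10 * math.log10(1.5 * 2 ** (2 * n_bits))

print(sqnr_db(8))                     # ~49.8 dB for 8-bit PCM
print(round(sqnr_db_exact(8), 1))     # agrees to within rounding
```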
PCM transmitter and receiver:-
Non-Uniform Quantization:
Drawback of uniform quantization: the step size is fixed regardless of signal level, so weak signals suffer a poor SNR.
Varying the step size depending on the signal -----> non-uniform quantization.
For weak signals the step size decreases, so the quantization noise decreases and the SNR improves.
Achieved through companding.
Companding:-
Process of compressing and then expanding.
Higher amplitude analog signals are compressed prior to transmission and then expanded in
the receiver.
The signal is amplified at low levels and attenuated at high levels, and then a uniform quantizer is
used.
This is equivalent to a smaller step size at low levels and a larger step size at high levels.
Reverse process is done in the receiver.
μ-Law Companding for speech signals:
A-law Companding: a piecewise technique of compression.
The signal-to-noise ratio of companded PCM is given as:
S/N = 3L² / [ln(1 + μ)]²
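A minimal compressor/expander pair, assuming the standard μ = 255 and inputs normalized to |x| ≤ 1; the notes do not give the μ-law curve itself, so this sketch uses the usual definition y = ln(1 + μ|x|)/ln(1 + μ):

```python
import math

MU = 255.0   # standard mu for North American telephony (assumed here)

def compress(x):
    # mu-law compressor for |x| <= 1: y = ln(1 + mu*|x|) / ln(1 + mu), sign kept
    return math.copysign(math.log(1 + MU * abs(x)) / math.log(1 + MU), x)

def expand(y):
    # inverse expander, applied at the receiver
    return math.copysign(((1 + MU) ** abs(y) - 1) / MU, y)

x = 0.01                                       # a weak signal
print(compress(x) > 0.2)                       # True: boosted well above 0.01
print(abs(expand(compress(x)) - x) < 1e-12)    # True: expander undoes compressor
```

Quantizing the compressed value uniformly and expanding at the receiver gives the small-steps-at-low-levels behaviour described above.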
Delta Modulation:
Knowing the past behavior of a signal up to a certain point in time, it is
possible to make some inference about the future values.
This scheme sends only the difference between pulses: if the
pulse at time t_{n+1} is higher in amplitude value than the pulse
at time t_n, then a single bit, say a 1, is used to indicate the
positive value.
If the pulse is lower in value, resulting in a negative value, a
0 is used.
This scheme works well for small changes in signal values between
samples.
If changes in amplitude are large, this will result in large errors.
DM Transmitter:-
DM Receiver:
Advantages:
Disadvantages:
1) Slope Overload Distortion:
Occurs when the step size Δ is too small to follow portions of the
waveform that have a steep slope.
It can be reduced by increasing the step size.
2) Granular Noise:
Results from using a step size that is too large in parts of the waveform
having a small slope.
Granular noise can be reduced by decreasing the step size.
Even for an optimized step size, the performance of the DM encoder
may still be less satisfactory.
An alternative solution is to employ a variable step size that adapts itself
to the short-term characteristics of the source signal. That is the step size
is increased when the waveform has a steep slope and decreased when
the waveform has a relatively small slope.
This strategy is called adaptive DM (ADM).
ADM Transmitter:-
ADM Receiver:-
Discrete Messages:
Communication systems are limited in their
performance by the available signal power, background
noise, and the need to limit bandwidth. Information
theory gives an opportunity to know the performance
characteristics of an optimum or ideal system.
It quantifies information content in a message leading
to different source coding techniques for efficient
transmission of message.
The function of any communication system is to convey, from
transmitter to receiver, a sequence of messages which are
selected from a finite number of predetermined messages. Within
some specified time interval, one of these messages is
transmitted; during the next time interval, another message is
transmitted. While the messages are predetermined and known to the
receiver, the message selected for transmission during a
particular interval is not known in advance by the receiver. Thus the
receiver does not have the burden of extracting a signal from noise,
but need only perform the operation of identifying which of a
number of allowable messages was transmitted. (WHICH ONE?
rather than WHAT was the message?)
The receiver will answer this question by determining the
correlation of the received noisy signal with all of the possible
predetermined signals individually.
The receiver will then decide that the transmitted signal is the
predetermined signal with which the noisy received signal has the
greatest correlation.
Probability that the particular message has been selected will not
be the same for all messages.
The Concept of Amount of Information:-
Consider a communication system in which the allowable
messages are m1, m2, …, with probabilities of occurrence
p1, p2, …, where p1 + p2 + … = 1.
Let the transmitter select message m_k, of probability p_k, and
assume that the receiver correctly identifies the
message. Then the system has conveyed an amount of
information I_k, given by:
Information ∝ 1/(Probability of occurrence)
I_k = log₂(1/p_k)
Rule 1: The information I_k approaches 0 as P_k approaches 1.
Mathematically, I_k → 0 as P_k → 1.
e.g. "The sun rises in the east."
Rule 2: The information content I_k must be a non-negative
quantity; it may be zero.
Mathematically, I_k ≥ 0 for 0 ≤ P_k ≤ 1.
e.g. "The sun rises in the west."
Rule 3: The information content of a message having higher
probability is less than the information content of a
message having lower probability:
----------> Mathematically, I(m_k) > I(m_j) if p_k < p_j
Rule 4. Also we can state for the Sum of two messages that
the information content in the two combined messages
is same as the sum of information content of each
message provided the occurrence is mutually
independent.
E.g. -> There will be Sunny weather today.
-> There will be Cloudy weather Tomorrow
------> Mathematically,
I(m_k and m_j) = I(m_k · m_j) = I(m_k) + I(m_j)
Average information, Entropy:-
Suppose we have M different and independent messages
m1, m2, …, with probabilities of occurrence p1, p2, …
Suppose during a long period of transmission a sequence of L
messages has been generated. Then, if L is very large, we
may expect that the L-message sequence contained
p1·L messages of m1, p2·L messages of m2, etc.
The total information in such a sequence will be:
I_total = p1·L·log₂(1/p1) + p2·L·log₂(1/p2) + …
The average information per message interval, represented by the symbol H,
will then be:
H = I_total/L = p1·log₂(1/p1) + p2·log₂(1/p2) + … + p_M·log₂(1/p_M) = Σ_{k=1}^{M} p_k·log₂(1/p_k)   bits/symbol
This average information is also called entropy.
When there is only a single possible message (p_k = 1), the receipt
of that message conveys no information. When p_k → 0, I_k → ∞, but
lim_{p→0} p·log₂(1/p) = 0,
so the average information associated with an extremely
unlikely message, like that of an extremely likely message, is zero.
Consider two messages with probabilities p and (1 − p). The
average information per message is:
H = p·log₂(1/p) + (1 − p)·log₂(1/(1 − p))
H = 0 at p = 0 and at p = 1.
The maximum value of H can be found by setting dH/dp to zero; as
shown in the figure, the maximum occurs at p = 1/2, that is, when the two
messages are equally likely.
The corresponding H = (1/2)·log₂2 + (1/2)·log₂2 = log₂2 = 1 bit/message
i.e.
H_max = M·(1/M)·log₂M = log₂M = N bits/message
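The entropy definition and the limiting cases above can be checked with a few lines (the helper name `entropy` is my own):

```python
import math

def entropy(probs):
    # H = sum_k p_k * log2(1/p_k), skipping zero-probability messages
    return sum(p * math.log2(1 / p) for p in probs if p > 0)

print(entropy([1.0]))           # 0.0: a certain message conveys no information
print(entropy([0.5, 0.5]))      # 1.0: two equally likely messages, the maximum
print(entropy([0.25] * 4))      # 2.0: H_max = log2(M) for M = 4 equally likely
```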
Information Rate:
Information rate = total information / time taken.
If n messages are transmitted at r messages per second, the total
information is nH and the time taken is n/r.
Information rate, R = nH / (n/r) = rH bits/sec

Example: An analog signal is band limited to B Hz, sampled at the
Nyquist rate, and the samples are quantized into 4 levels. The
quantization levels Q1, Q2, Q3, Q4 (messages) are assumed
independent and occur with probabilities p1 = p4 = 1/8 and p2 = p3 = 3/8. Find
the information rate of the source.
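A worked sketch of this example, with an assumed concrete bandwidth B = 1000 Hz (the answer scales linearly with B: H ≈ 1.811 bits/message and R = 2B·H ≈ 3.62·B bits/sec):

```python
import math

B = 1000.0                  # assumed bandwidth in Hz, for a concrete number
r = 2 * B                   # Nyquist rate: messages (samples) per second

probs = [1/8, 3/8, 3/8, 1/8]                      # Q1..Q4
H = sum(p * math.log2(1 / p) for p in probs)      # bits per message
R = r * H                                         # information rate, bits/sec

print(round(H, 3))          # 1.811
print(round(R, 1))          # 3622.6, i.e. about 3.62*B bits/sec
```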
Shannon's Theorem, Channel Capacity:
The theorem is concerned with the rate of transmission of
information over a communication channel.
Shannon's theorem says it is possible to devise a means
whereby a communication system will transmit
information with an arbitrarily small probability of error, provided that
the information rate R is less than or equal to a rate C
called the channel capacity.
Theorem: Given a source of M equally likely messages, with M >> 1,
which is generating information at a rate R, and given a channel with
channel capacity C. Then if R ≤ C, there exists a coding technique such that the output of the
source may be transmitted over the channel with a probability of error in
the received message which may be made arbitrarily small.
Negative statement: Given a source of M equally likely messages,
with M >> 1, which is generating information at a rate R, then if
R > C, the probability of error is close to unity for every possible set
of M transmitted signals.
If the information rate R exceeds the specified value C, the error
probability will increase toward unity as M increases.
Capacity of a Gaussian channel:
A theorem which is complementary to Shannon's theorem, and
applies to a channel in which the noise is Gaussian, is known as
the Shannon-Hartley theorem.
Theorem: The channel capacity of a white, band-limited Gaussian channel is
C = B·log₂(1 + S/N)   bits/sec
where B is the channel bandwidth, S is the signal power,
and N is the total noise within the
channel bandwidth.
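A quick evaluation of the Shannon-Hartley formula, using assumed telephone-style numbers (3 kHz bandwidth, 30 dB SNR) purely as an illustration:

```python
import math

def channel_capacity(bandwidth_hz, snr_linear):
    # Shannon-Hartley: C = B * log2(1 + S/N) bits/sec
    return bandwidth_hz * math.log2(1 + snr_linear)

# assumed telephone-style channel: 3 kHz bandwidth, 30 dB SNR
B = 3000.0
snr = 10 ** (30 / 10)                    # 30 dB -> a linear S/N of 1000
print(round(channel_capacity(B, snr)))   # 29902 bits/sec
```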
