UNIT I
ANALOG COMMUNICATION
PART A
1. Define modulation.
Modulation is the process of changing some property of the carrier in accordance with the information signal.
2. Define demodulation.
Demodulation is the reverse process of modulation; it converts the modulated carrier back to the original information.
3. Define communication.
Communication is the process of transferring a message from one point to another.
4. Define amplitude modulation.
Amplitude modulation is the process of changing the amplitude of a relatively high-frequency carrier signal in accordance with the amplitude of the modulating signal.
5. Define frequency modulation.
Frequency modulation is the process of changing the frequency of a relatively high-frequency carrier signal in accordance with the amplitude of the modulating signal.
The modulation index is
m = Em/Ec
where Em is the peak change in the amplitude of the output waveform voltage and Ec is the peak amplitude of the unmodulated carrier voltage. Expressed as a percentage,
M = (Em/Ec) × 100
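The two formulas above can be sketched in Python (the variable names Em and Ec follow the definitions above; the helper names themselves are illustrative):

```python
def modulation_index(Em, Ec):
    """m = Em / Ec: peak envelope change over unmodulated carrier peak."""
    return Em / Ec

def percent_modulation(Em, Ec):
    """M = (Em / Ec) * 100."""
    return 100.0 * modulation_index(Em, Ec)

# Example: a 10 V carrier whose amplitude changes by 5 V at the peaks
print(modulation_index(5.0, 10.0))    # 0.5
print(percent_modulation(5.0, 10.0))  # 50.0
```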
In low-level modulation, modulation takes place prior to the output element of the final stage of the transmitter. In high-level modulation, modulation takes place in the final element of the final stage of the transmitter.
Selectivity is a measure of the ability of the receiver to accept a given band of
frequencies and reject all others.
Sensitivity is the minimum RF signal level that can be detected at the input to the receiver
and still produce a usable demodulated information signal. Receiver sensitivity is also called the
receiver threshold.
The difference between the actual local oscillator frequency and the desired frequency is
called tracking error. The tracking error is reduced by a technique called three-point tracking.
Tracking is the ability of the local oscillator in a receiver to oscillate either above or below
the selected radio-frequency carrier by an amount equal to the intermediate frequency throughout
the entire radio-frequency band.
An image frequency is any frequency other than the selected radio frequency that, if
allowed to enter a receiver and mix with the local oscillator, will produce a cross-product
frequency equal to the intermediate frequency.
The Image Frequency Rejection Ratio is a numerical measure of the ability of a preselector
to reject the image frequency.
The AGC keeps the output signal level constant irrespective of any increase or decrease in
the signal level at the input of the receiver.
M = KEm.
The deviation ratio is the ratio of the maximum frequency deviation to the maximum
modulating signal frequency, i.e., DR = Δf(max)/fm(max).
21. State Carson's general rule for determining the bandwidth of an angle-modulated wave.
BW = 2(Δf + fm(max))
where Δf is the maximum frequency deviation and fm(max) is the maximum modulating signal frequency.
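Carson's rule is easy to check numerically; a minimal sketch (the broadcast-FM figures of 75 kHz deviation and 15 kHz top audio frequency are assumed for the example):

```python
def carson_bandwidth(delta_f, fm_max):
    """Carson's rule: BW = 2 * (delta_f + fm_max), all in Hz."""
    return 2.0 * (delta_f + fm_max)

# Broadcast FM: 75 kHz peak deviation, 15 kHz maximum audio frequency
print(carson_bandwidth(75e3, 15e3))  # 180000.0 Hz, i.e. 180 kHz
```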
22. Differentiate between AM and FM.
(1) AM has only three frequency components (the carrier and the two sidebands); FM has an infinite number of sidebands as well as the carrier.
(2) AM: small signal-to-noise ratio. FM: large signal-to-noise ratio.
(3) In AM, noise cannot be reduced; in FM, noise can be reduced.
(4) AM: the reception area is large. FM: the reception area is small.
Pc = 9 kW, Pmod = 10.125 kW
Pmod = Pc(1 + ma²/2)
10.125 = 9(1 + ma²/2)
ma = 0.5
26. In an aerial the current (RMS) before modulation is 10 A. After modulation it rises to 11.6 A. Determine the percentage modulation. If the carrier power is 10 kW, what is the power after modulation?
27. A transmitter supplies 8 kW to the antenna when unmodulated. Determine the total power radiated when modulated to 30%.
Pc = 8 kW
Modulation index ma = 30/100 = 0.3
Pt = Pc(1 + ma²/2)
Total radiated power Pt = 8 × 10³ × (1 + 0.3²/2) = 8.36 kW
28. The RMS value of the antenna current before modulation is 10 A and after modulation is 12 A. Determine the modulation index, assuming no distortion.
I = Ic√(1 + ma²/2)
12 = 10√(1 + ma²/2)
ma = 0.938, i.e. 93.8%
29. A 1 MHz carrier is amplitude modulated by a 400 Hz modulating signal to a depth of 50%. The unmodulated carrier power is 1 kW. Calculate the total power of the modulated signal.
Pc = 1 kW, ma = 0.5
Pt = Pc(1 + ma²/2)
= 1 × 10³ × (1 + 0.5²/2)
Pt = 1.125 kW
The increase in power, 1.125 − 1 = 0.125 kW, is contained in the two sidebands.
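The worked problems above all rest on Pt = Pc(1 + ma²/2) and on the antenna-current relation I = Ic√(1 + ma²/2); a short sketch that re-checks problems 27 and 28 (function names are illustrative):

```python
import math

def total_power(Pc, ma):
    """Total AM power: Pt = Pc * (1 + ma**2 / 2)."""
    return Pc * (1 + ma**2 / 2)

def index_from_currents(Ic, It):
    """Invert It = Ic*sqrt(1 + ma**2/2) to recover the modulation index ma."""
    return math.sqrt(2 * ((It / Ic)**2 - 1))

# Problem 27: Pc = 8 kW modulated to 30 %
print(round(total_power(8e3, 0.3), 2))            # 8360.0 W = 8.36 kW
# Problem 28: antenna current rises from 10 A to 12 A
print(round(index_from_currents(10.0, 12.0), 3))  # 0.938
```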
32. What are the advantages of super heterodyne receiver over TRF?
The superheterodyne receiver converts all incoming RF frequencies to a fixed lower
frequency, called the intermediate frequency (IF). This IF is then amplified and detected to
recover the original signal.
The process of extracting modulating signal from a frequency modulated carrier is known
as frequency demodulation or detection. The electronic circuits that perform the demodulation
process are called the FM detectors.
36. Draw the waveform of an AM wave.
In an envelope detector, the output of the detector follows the envelope of the modulated
signal (i.e., the message signal).
40. Differentiate DSB-SC and SSB-SC.
(1) The DSB spectrum has two sidebands: USB and LSB, Both containing the complete
information of the baseband signal.
(2) A scheme in which only one sideband is transmitted is known as SSB transmission,
which requires only one-half of the bandwidth of DSB.
(i) The improvement in signal to noise ratio is 10 to 12 decibels at the receiver output
over DSB.
(ii) The bandwidth required is reduced to half . Thus twice the number of channels can
be accommodated in a given frequency.
(1) It has the great merit of allowing recovery of the baseband signal by an extremely
simple means.
(2) The receiver is made simple and less expensive for the detection of the AM wave.
Power in SSB-SC AM is
Pt = PSB = (ma²/4)Pc
where ma is the modulation index and Pc is the carrier power.
Angle modulation is the process by which the angle (frequency or phase) of the carrier
signal is changed in accordance with the instantaneous amplitude of the message signal.
FM has an advantage over AM since all natural internal and external noise consists of
electrical amplitude variations; the receiver cannot distinguish between amplitude variations that
represent noise and those that represent the desired signal. So AM reception is generally noisier
than FM reception.
The PM wave can be obtained from an FM modulator by differentiating the modulating
signal before applying it to the frequency modulator.
Conversely, FM is obtained from a phase modulator by integrating the modulating signal before applying it to the phase modulator.
(i) The S/N ratio can be increased without increasing the transmitted power.
(ii) The need for a large amount of modulating power is avoided, since the modulation
takes place at a low-power stage of the transmitter.
n number of sidebands
The approximate rule for the transmission bandwidth of an FM signal generated by a
single-tone modulating signal is
B.W. = 2(Δω + ωm)
= 2Δω(1 + 1/mf) rad/s
This empirical relation is known as Carson's rule.
57. Compare WBFM and NBFM.
WBFM: modulation index greater than 1; maximum frequency deviation 75 kHz; modulating frequency range 30 Hz to 15 kHz; bandwidth about 15 times that of NBFM; noise is better suppressed.
NBFM: modulation index less than 1; maximum frequency deviation 5 kHz; maximum modulating frequency 3 kHz; bandwidth = 2fm; less suppression of noise.
58. Give the mathematical relation for the bandwidth of a single-tone wideband FM.
BW = 2Δω(1 + 1/mf) rad/s
where mf is the modulation index and Δω is the frequency deviation.
By the heterodyne operation of a mixer and a local oscillator, the input signal and the local
oscillator frequency can be used to produce the difference frequency, which is called the
intermediate frequency.
Communication receivers are high-quality, short-wave, multipurpose, superheterodyne receivers
used to receive signals for communication rather than entertainment.
PART B
According to the definition, the amplitude of the carrier signal is changed after modulation:
VAM = Vc sin ωct + (maVc/2)[cos(ωc − ωm)t − cos(ωc + ωm)t] ---(8)
Equation (8) of an amplitude-modulated wave contains three terms. The first term on the
R.H.S. represents the carrier wave. The 2nd and 3rd terms are identical in form and are called the
lower sideband (LSB) and the upper sideband (USB).
[Figure 1: Frequency spectrum of AM — carrier Vc at ωc, LSB at ωc − ωm, USB at ωc + ωm; BW = 2ωm]
Figure 1 shows the frequency spectrum of AM. Two sideband terms lie on either side of the
carrier term, each separated from it by ωm. The frequency of the LSB is (ωc − ωm) and that of
the USB is (ωc + ωm). The bandwidth of AM can be determined from these sidebands: BW is
twice the frequency of the modulating signal.
The modulated wave contains three components: the carrier wave, the LSB and the USB.
Therefore the modulated wave contains more power than the carrier had before modulation took
place. Moreover, since the amplitude of the sidebands depends on the modulation index, it is
anticipated that the total power in the modulated wave depends on the modulation index.
Pt = Pc + PLSB + PUSB
Pt = V²carrier/R + V²LSB/R + V²USB/R
Pcarrier = V²carrier/R = (Vc/√2)²/R = Vc²/2R
Similarly, PLSB = PUSB = V²SB/R = (maVc/2√2)²/R = ma²Vc²/8R
Therefore
Pt = Vc²/2R + ma²Vc²/4R = (Vc²/2R)(1 + ma²/2)
We know that Pc = Vc²/2R, thus Pt = Pc(1 + ma²/2)
Pt/Pc = 1 + ma²/2
If ma = 1, i.e. for 100% modulation,
Pt/Pc = 1.5, or Pt = 1.5 Pc
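The derivation can be verified numerically; a minimal sketch that splits the total power into carrier and sidebands, using PSB = ma²Pc/4 for each sideband:

```python
def power_split(Pc, ma):
    """Return (carrier, LSB, USB) powers for modulation index ma."""
    Psb = ma**2 * Pc / 4      # each sideband carries ma^2 * Pc / 4
    return Pc, Psb, Psb

Pc = 1000.0                    # 1 kW carrier, 100 % modulation
carrier, lsb, usb = power_split(Pc, 1.0)
Pt = carrier + lsb + usb
print(Pt / Pc)                 # 1.5, i.e. Pt = 1.5 Pc as derived
```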
2. Write down the DSB-SC-AM equation and explain each term with the help of frequency
spectrum.
(i) Two important parameters of a communication system are the transmitted power and the
bandwidth; economy in both is highly desirable.
(ii) In the AM-with-carrier scheme there is wastage of transmitted power, so the carrier is
suppressed, because it does not contain any useful information. This scheme is called
double-sideband suppressed-carrier amplitude modulation (DSB-SC-AM). It contains only the
LSB and USB terms, so the transmission bandwidth is twice the frequency of the message signal.
Let the modulating signal be Vm(t) = Vm sin ωmt ---(1) and the carrier signal
Vc(t) = Vc sin ωct ---(2)
When the carrier and message signals are multiplied, the resultant signal is the DSB-SC-AM
signal:
V(t)DSB-SC = (VmVc/2)[cos(ωc − ωm)t − cos(ωc + ωm)t] ---(4)
The product of Vc(t) and Vm(t) produces the DSB-SC-AM signal; thus a product
modulator is required to generate DSB-SC signals. We know that
V(t)AM = Vc sin ωct + (maVc/2)[cos(ωc − ωm)t − cos(ωc + ωm)t] ---(5)
When equation (5) is compared with equation (4), the unmodulated carrier term Vc sin ωct is
missing and only the two sidebands are present.
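The product-modulator identity behind equation (4) can be checked numerically; the message and carrier frequencies below are arbitrary example values:

```python
import math

Vm, Vc = 2.0, 5.0
wm, wc = 2*math.pi*1e3, 2*math.pi*100e3   # 1 kHz message, 100 kHz carrier

for t in (0.0, 1.3e-5, 7.7e-4):
    product   = Vm*math.sin(wm*t) * Vc*math.sin(wc*t)
    sidebands = (Vm*Vc/2) * (math.cos((wc - wm)*t) - math.cos((wc + wm)*t))
    assert abs(product - sidebands) < 1e-9   # identical at every instant
print("product of message and carrier = LSB + USB terms only")
```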
3. Derive an expression for SSB-SC-AM.
In AM with carrier, both the transmitted power and the bandwidth are wasted. Hence the
DSB-SC-AM scheme was introduced, in which power is saved by suppressing the carrier
component, but the bandwidth remains the same (i.e., B.W. = 2ωm).
A further saving of power is possible by eliminating one sideband in addition to the
carrier component, because the USB and LSB are uniquely related by symmetry about the
carrier frequency, so either sideband is enough for transmitting as well as recovering the useful
message.
In addition, the transmission bandwidth can be cut in half if one sideband is suppressed
along with the carrier. This scheme is known as SSB-SC-AM. The block diagram of SSB-SC-AM
is shown in Fig.
The SSB-SC-AM signal can be obtained as follows.
In order to suppress one of the sidebands, the input signal fed to modulator 1 is 90° out
of phase with the signal fed to modulator 2.
We know that
sin A sin B + cos A cos B = cos(A − B)
Hence
V(t)SSB = (VmVc/2) cos(ωc − ωm)t ---(1)
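The phase-shift cancellation can likewise be checked numerically (example frequencies assumed); summing the two modulator paths leaves only the lower-sideband term cos(ωc − ωm)t:

```python
import math

wm, wc = 2*math.pi*500.0, 2*math.pi*10e3   # 500 Hz message, 10 kHz carrier

for t in (0.0, 2.1e-4, 9.0e-4):
    path1 = math.sin(wm*t) * math.sin(wc*t)   # in-phase message and carrier
    path2 = math.cos(wm*t) * math.cos(wc*t)   # both shifted by 90 degrees
    ssb   = math.cos((wc - wm)*t)             # single (lower) sideband
    assert abs((path1 + path2) - ssb) < 1e-9
print("sum of the two paths is a single sideband at wc - wm")
```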
(i) Consider a modulating signal of very large bandwidth having very low-frequency
components along with the rest of the signal.
(ii) These components give rise to sidebands very close to the carrier frequency, extending
right up to it.
(iii) The low video frequencies contain the most important information of the picture, and
any effort to completely suppress the LSB would result in phase distortion at these
frequencies.
(iv) Therefore a compromise has been made: part of the LSB is suppressed, and the
radiated signal consists of the full upper sideband together with the carrier and a vestige of
the (partially suppressed) LSB. This pattern of modulation is known as vestigial
sideband modulation.
(v) A VSB AM system is a compromise between DSB-SC-AM and SSB-SC-AM.
(vi) VSB signals are very easy to generate and, at the same time, their bandwidth is only
slightly greater than that of SSB-SC-AM but less than that of DSB-SC-AM.
(vii) VSB modulation is derived by filtering DSB-SC-AM or AM with carrier.
(viii) [Figure: VSB modulation system]
An important and essential requirement of VSB filter transfer function HVSB (f) is that it
must have odd symmetry about fc and relative amplitude response of 0.5 at fc.
Construction:
A square-law modulator adds the carrier and modulating signals and applies the sum to a
non-linear element to obtain AM with carrier; thus a square-law modulator has three features:
[Block diagram: modulating signal and carrier signal → summer (+) → non-linear element → filter → modulated signal]
The message signal and carrier signal applied at the input are superimposed on each other
and make the diode more forward biased during the positive half cycle of the input and less
forward biased during the negative half cycle of the message signal.
Thus the magnitude of the carrier component is greater during the positive half cycle of the
modulating voltage and smaller during the negative half cycle of the modulating signal, as
shown in figure.
1. Filter method:
a. The sideband filter helps to attenuate the carrier and provides a safety feature which is absent in
the other two systems.
c. The drawback is that it cannot be used directly at high radio frequencies; for high or low radio
frequencies it needs balanced mixers in conjunction with extremely stable crystal oscillators.
Advantages:
a. It can switch from one sideband to the other.
b. It has the ability to generate SSB at any frequency, thereby eliminating the need for
frequency conversion.
Disadvantages:
b. The system needs two balanced modulators which must both give exactly the same output, or
cancellation of the unwanted sideband will be incomplete.
Advantages:
1. It does not require a side band filter or any wide band audio phase shift network.
2. Correct output may be maintained without use of critical parts or adjustments.
3. Low-frequency signals can also be used.
4. Sidebands may be easily switched.
Disadvantages:
7. Derive an expression for a single tone FM signal and draw its frequency spectrum.
Let the carrier signal be Vc(t) = Vc sin θ.
During FM the frequency of the carrier signal is changed in accordance with the
instantaneous amplitude of the message signal.
Therefore the angular frequency of the carrier after modulation is
ωi = ωc + kVm(t)
= ωc + kVm cos ωmt ---[4]
To find the instantaneous phase angle of the modulated signal, integrate equation [4]:
θi = ∫ωi dt = ωct + (kVm/ωm) sin ωmt
v(t)FM = Vc sin θi = Vc sin(ωct + (kVm/ωm) sin ωmt) ---[5]
v(t)FM = Vc sin(ωct + mf sin ωmt) ---[6]
where mf = kVm/ωm = modulation index of FM
The maximum value of the angular frequency is ωmax = ωc + kVm.
The minimum value is ωmin = ωc − kVm.
The frequency deviation is ωd = kVm,
i.e. mf = ωd/ωm = kVm/ωm
[Figure: message signal, carrier signal and FM waveforms; phasor representation of FM]
[Figure: FM frequency spectrum — carrier amplitude Vc J0(mf), sideband pair amplitudes Vc J1(mf), Vc J2(mf), Vc J3(mf)]
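The sideband amplitudes Vc·Jn(mf) can be evaluated with a short power series for the Bessel function (the series form, rather than any particular library routine, is assumed here):

```python
import math

def bessel_jn(n, x, terms=25):
    """J_n(x) by its power series; plenty of terms for small x."""
    return sum((-1)**k / (math.factorial(k) * math.factorial(k + n))
               * (x / 2)**(2*k + n) for k in range(terms))

Vc, mf = 10.0, 2.0
for n in range(4):
    # carrier amplitude (n = 0) and n-th sideband pair amplitude
    print(f"Vc*J{n}({mf}) = {Vc * bessel_jn(n, mf):.3f}")
```

For mf = 2 this gives roughly 2.239, 5.767, 3.528 and 1.289 V, showing how power moves out of the carrier and into the sidebands as the index grows.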
Phase modulation is defined as the process of changing the phase of the carrier signal in
accordance with the instantaneous amplitude of the message signal.
Here θ is the phase angle of the carrier signal; it is changed in accordance with the amplitude
of the message signal Vm(t):
θ = kVm(t) = kVm cos ωmt
This is done by integrating the modulating signal before applying it to the phase modulator,
as shown in figure.
[Block diagram: message signal Vm(t) → integrator → phase modulator (carrier input) → FM signal]
After integration:
∫Vm(t) dt = ∫Vm cos ωmt dt = (Vm/ωm) sin ωmt
After phase modulation:
θ = k ∫Vm(t) dt = (kVm/ωm) sin ωmt
The PM wave can be obtained from an FM modulator by differentiating the modulating
signal before applying it to the frequency modulator circuit, shown in figure.
[Block diagram: message signal Vm(t) → differentiator → frequency modulator (carrier input) → PM signal]
We know that Vm(t) = Vm cos ωmt.
After differentiation:
(d/dt)Vm(t) = −ωmVm sin ωmt
After frequency modulation:
ωi = ωc + k (d/dt)Vm(t)
= ωc + k(−ωmVm sin ωmt)
ωi = ωc − kωmVm sin ωmt
This is the operation of a phase-modulated wave. The processes of integration and
differentiation are linear; therefore no new frequencies are generated.
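The integrator step (obtaining FM from a phase modulator) can be demonstrated with a simple numerical integration; the message amplitude and frequency are arbitrary example values:

```python
import math

Vm, wm = 1.0, 2*math.pi*50.0    # 1 V message at 50 Hz
dt, acc, t = 1e-6, 0.0, 0.0
for _ in range(20000):           # integrate Vm*cos(wm*t) over 20 ms
    acc += Vm * math.cos(wm*t) * dt
    t += dt

# the running integral should track (Vm/wm)*sin(wm*t)
expected = (Vm / wm) * math.sin(wm * t)
assert abs(acc - expected) < 1e-4
print("integrator output follows (Vm/wm)*sin(wm*t)")
```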
In this method, suitable frequency-multiplying circuits are used to obtain the desired band
of FM. This method is called the Armstrong method of FM wave generation.
(i) The block diagram of the Armstrong method of FM wave generation is shown in figure.
(ii) The chief advantage is that the carrier is not directly involved in producing the FM
signal but is injected.
(iii) It is possible to use a crystal oscillator to generate the carrier frequency.
(iv) The effect of the mixer is to change the centre frequency only, whereas the effect of the
frequency multiplier is to multiply centre frequency and frequency deviation equally.
[Phasor diagram of AM: unmodulated carrier Vc with USB and LSB phasors; resultant VAM(t)]
Figure shows the phasor diagram of AM. It will be noted that the resultant of the two
sideband frequency vectors is always in phase with the unmodulated carrier, so that there is
amplitude variation of the carrier but no phase variation.
6. Generation of FM through PM needs some phase shift between the modulated and
unmodulated carrier.
[Phasor diagram of PM: carrier VC with the resultant of the USB and LSB phasors in quadrature; resultant VPM(t)]
9. Since FM is the requirement, the modulating voltage will have to be equalized before it enters
the balanced modulator.
10. To limit the amplitude variation, a simple RL equalizer is used. The crystal oscillator is used
at 1 MHz.
11. The effect of frequency changing is essential in the Armstrong method, but frequency changing
does not affect the modulation index.
12. When the FM signal is mixed, the resulting output contains difference frequencies.
13. From points 11 and 12 we see that frequency multiplication affects the modulation index in
the same manner as the deviation, whereas in mixing the modulation index remains unaffected.
12. Compare AM and FM (or) advantages of FM.
(1) In the AM system there are only three frequency components, and hence the BW is finite.
(2) The FM system has an infinite number of sidebands in addition to a single carrier; hence its
BW is infinite.
(3) In FM, the sidebands at equal distances from fc have equal amplitudes.
(4) The amplitude of FM is independent of the modulation index,
(5) whereas in AM it is dependent on the modulation index.
(6) In AM, an increased modulation index increases the sideband power and therefore increases
the total transmitted power.
(7) In FM, the total transmitted power always remains constant, but an increase in the
modulation index increases the bandwidth of the system.
(8) In an FM system all the transmitted power is useful, whereas in AM most of the transmitted
power is taken by the carrier; hence power is wasted.
(9) Noise is much lower in FM; hence the signal-to-noise ratio is better.
Narrowband FM is FM for which the modulation index is small compared with one radian.
Let the message signal be Vm(t) = Vm cos ωmt and the carrier Vc sin θ, where
dθ/dt = ωc, the angular frequency of the carrier signal.
After frequency modulation, the frequency deviation is proportional to the amplitude of the
modulating voltage, so it can be written as
2πΔf = KVm
ωi = ωc + 2πΔf cos ωmt
θi = ∫ωi dt = ωct + (Δf/fm) sin ωmt
Vfm(t) = Vc sin θi
= Vc sin(ωct + (Δf/fm) sin ωmt)
= Vc sin(ωct + mf sin ωmt)
For narrowband FM, assume the modulation index is small compared with one radian. Then
cos(mf sin ωmt) ≈ 1
and sin(mf sin ωmt) ≈ mf sin ωmt (since sin θ ≈ θ for small θ)
Vfm(t) ≈ Vc sin ωct + Vc cos ωct (mf sin ωmt)
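The narrowband approximation can be checked against the exact FM expression; for a small index the two differ only at order mf² (the example amplitudes and frequencies are assumptions):

```python
import math

Vc, mf = 1.0, 0.05                       # small index: narrowband
wm, wc = 2*math.pi*1e3, 2*math.pi*50e3   # 1 kHz message, 50 kHz carrier

worst = 0.0
for i in range(1000):
    t = i * 1e-6
    exact  = Vc * math.sin(wc*t + mf*math.sin(wm*t))
    approx = Vc*math.sin(wc*t) + Vc*mf*math.sin(wm*t)*math.cos(wc*t)
    worst  = max(worst, abs(exact - approx))

assert worst < mf**2     # error is of order mf^2/2
print("narrowband approximation holds for small mf")
```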
This modulator involves splitting the carrier wave into two paths.
1. RF section
2. The mixer/converter section
3. IF section
4. The Audio Detector section.
5. The Audio Amplifier section.
The mixer/converter section includes a local oscillator stage and a mixer/converter stage.
The local oscillator can be any of the standard oscillator circuits, depending on the stability and
accuracy required. The most common intermediate frequency used in AM broadcast receivers is
455 kHz.
IF Section:
It consists of a series of IF amplifiers and band-pass filters and is often called the IF strip.
Most of the receiver's gain and selectivity is achieved in the IF section. The center frequency and
bandwidth are constant for all stations. The IF is always lower than the RF because it is easier and
less expensive to construct high-gain amplifiers at lower frequencies.
Detector Section:
The purpose of the detector section is to convert the IF signals back to the original source
information. The detector is generally called an audio detector, or the second detector, in a
broadcast-band receiver. It can be as simple as a single diode or as complex as a phase-locked
loop or balanced demodulator.
Audio Section:
The audio section comprises several cascaded audio amplifiers and one or more speakers.
The number of amplifiers used depends on the audio signal.
Receiver operation:
1. RF is converted to IF.
2. IF is converted to the source information. For example, RF frequencies for the commercial AM
broadcast band are between 535 kHz and 1605 kHz; the IF signals are frequencies between
450 kHz and 460 kHz.
Frequency Conversion:
The frequencies are down-converted rather than up-converted. The output of the mixer
consists of an infinite number of harmonic and cross-product frequencies, which include the sum
and difference frequencies between the RF carrier and the local oscillator frequencies. The local
oscillator frequency is always above or below the desired RF carrier by an amount equal to the IF
center frequency. Gang tuning means that the two adjustments are mechanically tied together so
that a single adjustment will change the center frequency of the preselector and at the same time
change the local oscillator frequency. When the local oscillator frequency is tuned above the RF it
is called high-side injection; when the local oscillator is tuned below the RF it is called low-side
injection.
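The high-side-injection arithmetic above amounts to two one-line formulas; a sketch using the 455 kHz IF quoted earlier (helper names are illustrative):

```python
def high_side_lo(f_rf, f_if):
    """High-side injection: local oscillator sits IF above the carrier."""
    return f_rf + f_if

def image_frequency(f_rf, f_if):
    """With high-side injection the image lies 2*IF above the wanted RF."""
    return f_rf + 2 * f_if

# Tuning 1000 kHz in the AM broadcast band with a 455 kHz IF
print(high_side_lo(1000e3, 455e3))     # 1455000.0 Hz
print(image_frequency(1000e3, 455e3))  # 1910000.0 Hz
```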
15. Explain about the Low level and high level transmitter.
[Block diagram of a low-level AM transmitter: modulating signal source → pre-amp → modulating signal driver → modulator → linear power amplifiers → antenna coupling network]
The RF carrier oscillator can be any oscillator circuit; crystal-controlled oscillators are the
most common. The buffer amplifier is a low-gain, high-input-impedance linear amplifier. Its
function is to isolate the oscillator from the high-power amplifiers. The buffer presents a
relatively constant load to the oscillator, which helps to reduce the occurrence and magnitude of
short-term frequency variations. Emitter followers or integrated-circuit op-amps are often used
as the buffer. The modulator can use emitter or collector modulation. The intermediate and final
power amplifiers are either class A or class B push-pull; linear operation is required with
low-level transmitters to maintain symmetry in the AM envelope. The antenna coupling network
matches the output impedance of the final power amplifier to the transmission line and antenna.
Low-level transmitters are used predominantly for low-power, low-capacity systems such as:
1) Wireless intercoms
2) Remote control units
3) Short range walkie-talkies
High-level Transmitters
[Block diagram of a high-level AM transmitter: modulating signal → power amplifier → modulator; carrier oscillator → buffer amplifier → carrier driver → carrier power amplifier → modulator → antenna coupling network]
The modulating signal is processed in the same manner as in the low-level transmitter,
except for the addition of a power amplifier. With a high-level transmitter the power of the
modulating signal must be higher than in low-level transmitters. This is because the carrier is at
full power at the point in the transmitter where modulation occurs, and it requires a
high-amplitude modulating signal to produce 100% modulation.
The RF carrier oscillator, its associated buffer and the carrier driver are also essentially the
same circuits used in low-level transmitters. With high-level transmitters the RF carrier undergoes
additional power amplification prior to the modulator stage, and the final power amplifier is also
the modulator. The modulator is generally a drain-, plate- or collector-modulated class C amplifier.
With high-level transmitters the modulator circuit has three primary functions.
An up-converter translates the low-frequency intelligence signals to radio-frequency signals
that can be efficiently radiated from an antenna and propagated through free space.
A varactor diode is used to deviate the frequency of a crystal oscillator. R1 and R2 develop a
dc voltage that reverse biases varactor diode VD1 and determines the rest frequency of the
oscillator. The external modulating signal voltage adds to and subtracts from the dc bias and thus
varies the frequency of oscillation. Positive alternations of the modulating signal increase the
reverse bias on VD1, which decreases its capacitance and increases the frequency of oscillation.
Negative alternations of the modulating signal decrease the frequency of oscillation. Varactor
diode FM modulators are popular because they are simple to use, are reliable, and have the
stability of a crystal oscillator. Because a crystal is used, the peak frequency deviation is limited to
relatively small values. They are used primarily for low-index applications such as two-way
mobile radio.
FM Reactance Modulator:
This circuit configuration is called a reactance modulator because the JFET looks like a
variable reactance load to the LC tank circuit. The modulating signal varies the reactance of Q1,
which causes a corresponding change in the resonant frequency of the oscillator tank circuit.
where gm is the transconductance of the JFET. The impedance between drain and ground is
Zd = V/id
Zd = (R − j/ωC)/(gmR) = (1/gm)(1 − j/ωCR)
If R ≪ 1/ωC, then
Zd ≈ −j/(ωCgmR) = 1/(j2πf gmRC)
so the JFET presents an equivalent capacitance Ceq = gmRC across the tank circuit.
Indirect Method:
The most convenient operating frequency for the crystal oscillator and phase modulator is
in the vicinity of 1 MHz. Since transmitting frequencies are normally much higher than this,
frequency multiplication must be used, and so multipliers are shown in the block diagram of
figure 5-15.
The block diagram of an Armstrong system is shown in figure. The system terminates at
the output of the combining network; the remaining blocks are included to show how wideband
FM might be obtained. The effect of mixing on an FM signal is to change the center frequency
only, whereas the effect of frequency multiplication is to multiply center frequency and deviation
equally.
The vector diagrams of figure illustrate the principles of operation of this modulation
system. Diagram 1 shows an amplitude-modulated signal. It will be noted that the resultant of the
two sideband frequency vectors is always in phase with the unmodulated carrier vector, so that
there is amplitude variation but no phase (or frequency) variation. Since it is phase change that is
needed here, some arrangement must be found which ensures that this resultant of the sideband
voltages is always out of phase (preferably by 90°) with the carrier vector. If an amplitude-
modulated voltage is added to an unmodulated voltage of the same frequency and the two are
kept 90° apart in phase, as shown by diagram 2, some form of phase modulation will be achieved.
Unfortunately, it will be a very complex and nonlinear form having no practical use; however, it
does seem like a step in the right direction. Note that the two frequencies must be identical
(suggesting one source for both) with a phase-shifting network in one of the channels.
Figure: Phase Modulation vector diagram
Diagram 3 shows the solution to the problem. The carrier of the amplitude-modulated
signal has been removed so that only the two sidebands are added to the unmodulated voltage.
This has been accomplished by the balanced modulator, and the addition takes place in the
combining network. The resultant of the two sideband voltages will always be in quadrature with
the carrier voltage. As the modulation increases, so will the phase deviation, and hence phase
modulation has been obtained. The resultant voltage coming from the combining network is
phase-modulated, but there is also a little amplitude modulation present. The AM is no problem
since it can be removed with an amplitude limiter.
The output of the amplitude limiter, if it is used, is phase modulation. Since frequency
modulation is the requirement, the modulating voltage will have to be equalized before it enters
the balanced modulator (remember that PM may be changed into FM by prior bass boosting of the
modulation). A simple RL equalizer is shown in Figure. In FM broadcasting, ωL = R at 30 Hz. As
frequency increases above that, the output of the equalizer will fall at a rate of 6 dB/octave,
satisfying the requirements.
Effects of frequency changing on an FM signal: The previous section has shown that frequency
changing of an FM signal is essential in the Armstrong system. For convenience it is very often
used with the reactance modulator also. Investigation will show that the modulation index is
multiplied by the same factor as the center frequency, whereas frequency translation (changing)
does not affect the modulation index.
Figure: RL equalizer
If a frequency-modulated signal with center frequency fc and deviation δ is fed to a
frequency doubler, the output signal will contain twice each input frequency. For the extreme
frequencies here, this gives 2fc − 2δ and 2fc + 2δ. The frequency deviation has quite clearly
doubled to 2δ, with the result that the modulation index has also doubled. In this fashion, both
center frequency and deviation may be increased by the same factor or, if frequency division is
used, reduced by the same factor.
When a frequency-modulated wave is mixed, the resulting output contains difference
frequencies (among others). The original signal might again be fc ± δ. When mixed with a
frequency f0, it will yield fc − f0 − δ and fc − f0 + δ as the two extreme frequencies in its output.
It is seen that the FM signal has been translated to a lower center frequency fc − f0, but the
maximum deviation has remained δ. It is thus possible to reduce (or increase, if desired) the
center frequency of an FM signal without affecting the maximum deviation.
Since the modulating frequency has obviously remained constant in the two cases treated,
the modulation index will be affected in the same manner as the deviation. It will thus be
multiplied together with the center frequency or unaffected by mixing. Also, it is possible to raise
the modulation index without affecting the center frequency by multiplying both by 9 and mixing
the result with a frequency eight times the original frequency. The difference will be equal to the
initial frequency, but the modulation index will have been multiplied ninefold.
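The points above (multiplication scales deviation and modulation index; mixing does not) can be captured in two tiny helpers (the helper names and example values are illustrative):

```python
def multiplied(fc, dev, n):
    """A frequency multiplier scales center frequency AND deviation by n."""
    return n * fc, n * dev

def mixed_down(fc, dev, f_lo):
    """A mixer translates the center frequency; the deviation is untouched."""
    return abs(fc - f_lo), dev

fc, dev = 1e6, 60.0                       # example starting values
fc2, dev2 = multiplied(fc, dev, 2)        # doubler: deviation doubles too
fc3, dev3 = mixed_down(fc2, dev2, 1.5e6)  # mixing: deviation unchanged
print(fc2, dev2)   # 2000000.0 120.0
print(fc3, dev3)   # 500000.0 120.0
```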
One of the characteristics of phase modulation is that the angle of phase deviation must be
proportional to the modulating voltage. A careful look at diagram 3 of figure shows that this is
not so in this case, although this fact was carefully glossed over in the initial description. It is the
tangent of the angle of phase deviation that is proportional to the amplitude of the modulating
voltage, not the angle itself. The difficulty is not impossible to resolve. It is a trigonometric axiom
that for small angles the tangent of an angle is equal to the angle itself, measured in radians. The
angle of phase deviation is kept small, and the problem is solved, but at a price. The phase
deviation is indeed tiny, corresponding to a maximum frequency deviation of about 60 Hz at a
frequency of 1 MHz. An amplitude limiter is no longer really necessary since the amount of
amplitude modulation is now insignificant.
To achieve sufficient deviation for broadcast purposes, both mixing and multiplication are
necessary, whereas for narrowband FM, multiplication may be sufficient by itself. In the latter
case, operating frequencies are in the vicinity of 180 MHz. Therefore, starting with an initial
fc = 1 MHz and δ = 60 Hz, it is possible to achieve a deviation of 10.8 kHz at 180 MHz, which is
more than adequate for FM mobile work.
The FM broadcasting station uses a higher maximum deviation with a lower center frequency, so that both mixing and multiplication must be used. For instance, if the starting conditions are as above and 75 kHz deviation is required at 100 MHz, f0 must be multiplied by 100/1 = 100 times, whereas δ must be increased 75,000/60 = 1250 times. The mixer and crystal oscillator in the middle of the multiplier chain are used to reconcile the two multiplying factors.
After being raised to about 6 MHz, the frequency-modulated carrier is mixed with the output of a crystal oscillator, whose frequency is such as to produce a difference of 6 MHz/12.5. The center frequency has been reduced, but the deviation is left unaffected. Both can now be multiplied by the same factor to give the desired center frequency and maximum deviation.
Until shortly before World War II, most radio receivers were of the TRF type, whose block diagram is shown in figure.
The TRF receiver is a simple, logical receiver. A person with just a little knowledge of communications would probably expect all radio receivers to have this form. The virtues of this type, which is now not used except as a fixed-frequency receiver in special applications, are its simplicity and high sensitivity. It must also be mentioned that when the TRF receiver was first introduced, it was a great improvement on the types used previously, mainly crystal, regenerative and superregenerative receivers.
Consider a tuned circuit required to have a bandwidth of 10 kHz at a frequency of 535 kHz. The Q of this circuit must be Q = f/Δf = 535/10 = 53.5. At the other end of the broadcast band, i.e., at 1640 kHz, the inductive reactance (and therefore the Q) of the coil should in theory have increased by a factor of 1640/535, to 164. In practice, however, various losses dependent on frequency will prevent so large an increase. Thus the Q at 1640 kHz is unlikely to be in excess of 120, giving a bandwidth of Δf = 1640/120 = 13.7 kHz and ensuring that the receiver will pick up adjacent stations as well as the one to which it is tuned. Consider again a TRF receiver required to tune to 36.5 MHz, the upper end of the shortwave band. If the Q required of the RF circuit is again calculated, still on this basis of a 10-kHz bandwidth, we have Q = 36,500/10 = 3650. It is obvious that such a Q is impossible to obtain with ordinary tuned circuits.
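The selectivity problem worked above reduces to a single ratio, which can be sketched numerically (Python here is illustrative; the frequencies and bandwidth are the ones from the example):

```python
# A minimal sketch of the TRF selectivity calculation: the Q a single
# tuned circuit needs for a 10 kHz bandwidth at each tuning frequency.
def required_q(center_hz: float, bandwidth_hz: float) -> float:
    """Q = f / delta-f for a single tuned circuit."""
    return center_hz / bandwidth_hz

bw = 10e3
print(required_q(535e3, bw))   # 53.5   (low end of broadcast band)
print(required_q(1640e3, bw))  # 164.0  (high end, theoretical)
print(required_q(36.5e6, bw))  # 3650.0 (shortwave: unattainable)

# With a practical Q of only 120 at 1640 kHz, the bandwidth widens:
print(1640e3 / 120)            # ~13.7 kHz
```

The jump from Q = 53.5 to Q = 3650 is why the TRF design fails at higher frequencies and the superheterodyne took over.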
UNIT II
DIGITAL COMMUNICATION
PART A
2. Define PCM.
The analog signal is sampled and converted to a fixed-length serial binary number for transmission. The binary number varies according to the amplitude of the analog signal.
The FET acts as a simple switch. When turned on, it provides a low-impedance path to deposit the analog sample voltage on capacitor C1. The time that Q1 is on is called the aperture or acquisition time.
If the analog signal is sampled for a short period of time and the sample voltage is held at a constant amplitude during the A/D conversion time, this is called flat-top sampling.
If the sample time is made longer and the analog-to-digital conversion takes place with a changing analog signal, this is called natural sampling.
The Nyquist sampling theorem establishes the minimum sampling rate (fs) that can be used for a given PCM system. For a sample to be reproduced accurately, each cycle of the analog input signal (fa) must be sampled at least twice. The minimum sampling rate is equal to twice the highest audio input frequency.
7. Define coding efficiency.
Coding efficiency is the ratio of the minimum number of bits required to achieve a certain dynamic range to the actual number of PCM bits used.

Coding efficiency = [minimum number of bits (including sign bit) / actual number of bits (including sign bit)] × 100
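The ratio above can be made concrete with a short calculation. This is a hedged sketch: the dynamic-range-to-bits relation used (2^n − 1 levels for n magnitude bits) is the standard PCM one, and the example numbers are illustrative, not from the text.

```python
import math

# Hedged sketch of the coding-efficiency ratio defined above.
def coding_efficiency(dynamic_range_db: float, actual_bits: int) -> float:
    # Minimum magnitude bits n must satisfy 2**n - 1 >= 10**(DR_dB / 20);
    # one extra bit is the sign bit.
    levels = 10 ** (dynamic_range_db / 20)
    min_bits = math.ceil(math.log2(levels + 1)) + 1  # + sign bit
    return min_bits / actual_bits * 100

# e.g. 40 dB of dynamic range needs 7 magnitude bits + 1 sign bit = 8 bits;
# carried in a 12-bit code the efficiency drops:
print(round(coding_efficiency(40, 12), 2))  # 66.67
print(coding_efficiency(40, 8))             # 100.0
```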
8. Define PSK.
Phase shift keying (PSK) is another form of angle-modulated, constant-amplitude digital modulation. PSK is similar to conventional phase modulation except that with PSK the input signal is a binary digital signal and a limited number of output phases are possible.
9. Define FSK.
FSK is a modulation process in which the frequency of the carrier is switched between either of the 2 possible values corresponding to 1 or 0, with fixed frequency levels set by the channel.
PSK:
1. Keying the phase of the carrier between either of the 2 possible values 0 and 1.
2. Fixed amplitude and fixed frequency are used, but the signal is 180 degrees out of phase for a transition between 1 and 0.
3. It is a special case of phase modulation.

FSK:
1. Keying the frequency of the carrier between either of the 2 possible values 0 and 1.
2. Amplitude is fixed, but different frequencies are used for 0 and 1.
3. It is a special case of frequency modulation.
12. Give the equation for the Shannon limit for information capacity.

I = B log2(1 + S/N)
or
I = 3.32 B log10(1 + S/N)

where I = information capacity (bps)
B = bandwidth (Hz)
S/N = signal-to-noise power ratio
If the sampling rate in any pulse modulation system exceeds twice the maximum signal
frequency, the original signal can be reconstructed in the receiver with minimal distortion.
It is a pulse modulation system in which the signal is sampled at regular intervals, and each sample is made proportional to the amplitude of the signal at the instant of sampling.
The amplitude and starting time of each pulse are fixed, but the width of each pulse is made proportional to the amplitude of the signal at the sampling instant.
The amplitude and width of the pulses are kept constant in the system, while the position of each pulse, in relation to the position of a recurrent reference pulse, is varied by each instantaneous sampled value of the modulating wave.
A pulse modulation technique in which a continuous signal is converted into a binary pulse pattern for transmission through low-quality channels. It is a technique used to sample voice waves and convert them into digital code. Delta modulation typically samples the wave 32,000 times/sec but generates only one bit per sample.
Pulse code modulation in which an analog signal is sampled, and the difference between the actual value of each sample and its predicted value, derived from the previous sample or samples, is quantized and converted, by encoding, to a digital signal.
ON-OFF keying is a binary form of amplitude modulation in which one of the states of the modulated wave is the absence of energy in the keying interval.
Phase shift keying is a form of phase modulation in which the modulating function shifts
the instantaneous phase of the modulated wave between predetermined discrete values.
21. Define MSK?
Minimum shift keying is a form of FSK in which the data bits are encoded alternately on quadrature components, with the Q component delayed by half the symbol period.
It is the simplest form of phase shift keying. It uses 2 phases which are separated by 180° and is called binary phase shift keying.
PART B
Pulse code modulation (PCM) is the only one of the digitally encoded pulse modulation techniques previously mentioned that is used in a digital transmission system. With PCM, the pulses are of fixed length and fixed amplitude. PCM is a binary system; a pulse or lack of a pulse within a prescribed time slot represents either a logic 1 or a logic 0 condition. With PWM, PPM, or PAM, a single pulse does not represent a single binary digit (bit).
Figure shows a simplified block diagram of a single-channel, simplex (one-way-only) PCM system. The bandpass filter limits the input analog signal to the standard voice-band frequency range, 300 Hz to 3000 Hz. The sample-and-hold circuit periodically samples the analog input and converts those samples to a multilevel PAM signal. The analog-to-digital converter (ADC) converts the PAM samples to a serial binary data stream for transmission. The transmission medium is a metallic wire or optical fiber.
At the receive end, the digital-to-analog converter (DAC) converts the serial binary data stream to a multilevel PAM signal. The hold circuit and low-pass filter convert the PAM signal back to its original analog form. An integrated circuit that performs the PCM encoding and decoding is called a codec (coder/decoder).
The purpose of the sample-and-hold circuit is to periodically sample the continually changing analog input signal and convert the samples to a series of constant-amplitude PAM levels. For the ADC to accurately convert a signal to a digital code, the signal must be relatively constant. If not, the input would change before the ADC could complete the conversion. Therefore, the ADC would continually be attempting to follow the analog changes and would never stabilize on any PCM code.
Figure shows the schematic diagram of a sample-and-hold circuit. The FET acts as a simple switch. When turned on, it provides a low-impedance path to deposit the analog sample voltage on capacitor C1. The time that Q1 is on is called the aperture or acquisition time. Essentially, C1 is the hold circuit. When Q1 is off, the capacitor does not have a complete path to discharge through and, therefore, stores the sampled voltage. The storage time of the capacitor is called the A/D conversion time because it is during this time that the ADC converts the sample voltage to a digital code. The acquisition time should be very short. This ensures that a minimum change occurs in the analog signal while it is being deposited across C1. If the input to the ADC is changing while it is performing the conversion, distortion results. This distortion is called aperture distortion. Thus, by having a short aperture time and keeping the input to the ADC relatively constant, the sample-and-hold circuit reduces aperture distortion. If the analog signal is sampled for a short period of time and the sample voltage is held at a constant amplitude during the A/D conversion time, this is called flat-top sampling.
If the sample time is made longer and the analog-to-digital conversion takes place with a changing analog signal, this is called natural sampling. Natural sampling introduces more aperture distortion than flat-top sampling and requires a faster A/D converter.
Figure shows the input analog signal, the sampling pulse, and the waveform developed across C1. It is important that the output impedance of voltage follower Z1 and the on resistance of Q1 be as small as possible. This ensures that the RC charging time constant of the capacitor is kept very short, allowing the capacitor to charge or discharge rapidly during the short acquisition time. The rapid drop in the capacitor voltage immediately following each sample pulse is due to the redistribution of the charge across C1. The interelectrode capacitance between the gate and drain of the FET is placed in series with C1 when the FET is off, thus acting as a capacitive voltage divider network. Also, note the gradual discharge across the capacitor, caused by the capacitor discharging through its own leakage resistance and the input impedance of voltage follower Z2. It is therefore important that the input impedance of Z2 and the leakage resistance of C1 be as high as possible. Essentially, voltage followers Z1 and Z2 isolate the sample-and-hold circuit (Q1 and C1) from the input and output circuitry.
Sampling Rate
The Nyquist sampling theorem establishes the minimum sampling rate (fs) that can be used for a given PCM system. For a sample to be reproduced accurately at the receiver, each cycle of the analog input signal (fa) must be sampled at least twice. Consequently, the minimum sampling rate is equal to twice the highest audio input frequency. If fs is less than two times fa, distortion will result. The distortion is called aliasing or foldover distortion. Mathematically, the minimum Nyquist sample rate is

fs ≥ 2fa

The input bandpass filter shown in Figure is called an antialiasing or antifoldover filter. Its upper cutoff frequency is chosen such that no frequency greater than one half of the sampling rate is allowed to enter the sample-and-hold circuit, thus eliminating the possibility of foldover distortion occurring.
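The foldover behavior described above can be sketched numerically: a tone above fs/2 reappears at a lower "alias" frequency after sampling. This is a minimal sketch with assumed example frequencies, not values from the text.

```python
# A small numeric sketch of aliasing: a tone above fs/2 folds down and
# becomes indistinguishable from a lower-frequency tone after sampling.
def alias_frequency(f_in: float, fs: float) -> float:
    """Apparent frequency of f_in after sampling at fs (folded into 0..fs/2)."""
    f = f_in % fs
    return min(f, fs - f)

fs = 8000.0                       # assumed sampling rate, Hz
print(alias_frequency(3000, fs))  # 3000.0 -> below fs/2, reproduced correctly
print(alias_frequency(5000, fs))  # 3000.0 -> above fs/2, folds down (aliases)
```

This is exactly what the antialiasing filter prevents: it removes input components above fs/2 before they can reach the sample-and-hold circuit.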
With PCM, the analog input signal is sampled, then converted to a serial binary code. The
binary code is transmitted to the receiver, where it is converted back to the original analog signal.
The binary codes used for PCM are n-bit codes, where n may be any positive integer greater than
1. The codes currently used for PCM are sign-magnitude codes, where the most significant bit
(MSB) is the sign bit and the remaining bits are used for magnitude. Table shows an n-bit PCM code where n equals 3. The most significant bit is used to represent the sign of the sample (logic 1 = positive and logic 0 = negative). The two remaining bits represent the magnitude. With 2 magnitude bits, there are four codes possible for positive numbers and four possible for negative numbers. Consequently, there is a total of eight possible codes (2^3 = 8).
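The 3-bit sign-magnitude code just described can be enumerated in a few lines. A minimal sketch, assuming the convention stated above (MSB = sign, 1 = positive, 0 = negative):

```python
# A sketch of the n-bit sign-magnitude PCM code described above:
# MSB is the sign (1 = positive, 0 = negative), remaining bits are magnitude.
def sign_magnitude_codes(n_bits: int) -> list[str]:
    mag_bits = n_bits - 1
    codes = []
    for sign in (1, 0):                      # positive codes, then negative
        for mag in range(2 ** mag_bits):
            codes.append(f"{sign}{mag:0{mag_bits}b}")
    return codes

codes = sign_magnitude_codes(3)
print(codes)       # ['100', '101', '110', '111', '000', '001', '010', '011']
print(len(codes))  # 8 possible codes (2**3 = 8)
```

Note that, as in any sign-magnitude code, there are two codes for zero magnitude (one positive, one negative).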
Frequency shift keying (FSK) is another relatively simple, low-performance form of digital modulation. Binary FSK is a form of constant-amplitude angle modulation similar to conventional frequency modulation except that the modulating signal is a binary pulse stream that varies between two discrete voltage levels rather than a continuously changing analog waveform. The general expression for a binary FSK signal is

v(t) = Vc cos[(ωc + vm(t) Δω/2) t]   (12-4)

where
v(t) = binary FSK waveform
Vc = peak unmodulated carrier amplitude
ωc = radian carrier frequency
vm(t) = binary digital modulating signal
Δω = change in radian output frequency
From Equation 12-4 it can be seen that with binary FSK the carrier amplitude Vc remains constant with modulation. However, the output carrier radian frequency shifts by an amount equal to ±Δω/2. The frequency shift (±Δω/2) is proportional to the amplitude and polarity of the binary input signal. For example, a binary 1 could be +1 volt and a binary 0 could be −1 volt, producing frequency shifts of +Δω/2 and −Δω/2, respectively. In addition, the rate at which the carrier frequency shifts is equal to the rate of change of the binary input signal vm(t) (that is, the input bit rate). Thus, the output carrier frequency deviates (shifts) between ωc + Δω/2 and ωc − Δω/2 at a rate equal to fm.
FSK Transmitter
With binary FSK, the center or carrier frequency is shifted (deviated) by the binary input data. Consequently, the output of a binary FSK modulator is a step function in the frequency domain. As the binary input signal changes from a logic 0 to a logic 1, and vice versa, the FSK output shifts between two frequencies: a mark or logic 1 frequency and a space or logic 0 frequency. With binary FSK, there is a change in the output frequency each time the logic condition of the binary input signal changes. Consequently, the output rate of change is equal to the input rate of change. In digital modulation, the rate of change at the input to the modulator is called the bit rate and has the units of bits per second (bps). The measure of the rate of change at the output of the modulator is called baud and is equal to the reciprocal of the time of one output signaling element. In essence, baud is the line speed in symbols per second. In binary FSK, the input and output rates of change are equal; therefore, the bit rate and baud rate are equal. A simple binary FSK transmitter is shown in Figure.
In a binary FSK modulator, Δf is the peak frequency deviation of the carrier and is equal to the difference between the rest frequency and either the mark or space frequency. The peak frequency deviation depends on the amplitude of the modulating signal. In a binary digital signal, all logic 1s have the same voltage and all logic 0s have the same voltage; consequently, the frequency deviation is constant and always at its maximum value.

The output of an FSK modulator is related to the binary input as shown in Figure, where a logic 0 corresponds to the space frequency fs, a logic 1 corresponds to the mark frequency fm, and fc is the carrier frequency. The required peak frequency deviation, Δf, is given as

Δf = |fm − fs| / 2 = 1/(4tb)   (Hz, minimum)   (12-5)

where tb is the time of one bit in seconds, and fm and fs are expressed as

fm = fc − Δf = fc − 1/(4tb)   (12-6a)
fs = fc + Δf = fc + 1/(4tb)   (12-6b)
From Figure it can be seen that FSK consists of two pulsed sinusoidal waves of frequencies fm and fs. Pulsed sinusoidal waves have frequency spectrums that are sin x/x functions. Consequently, we can represent the output spectrum for an FSK signal as shown in Figure. Assuming that the zero crossings contain the bulk of the energy, the bandwidth for FSK can be approximated as

BW = (fs + 1/tb) − (fm − 1/tb) = (fs − fm) + 2/tb   (12-7)
FSK Receiver
FSK demodulation is quite simple with a circuit such as the one shown in Figure. The FSK input signal is applied to the inputs of both bandpass filters (BPFs). The respective filter passes only the mark or only the space frequency on to its respective envelope detector. The envelope detectors, in turn, indicate the total power in each passband, and the comparator responds to the larger of the two powers. This type of FSK detection is referred to as noncoherent detection; there is no frequency involved in the demodulation process that is synchronized either in phase, frequency, or both with the incoming FSK signal.
Figure shows the block diagram for a coherent FSK receiver. The incoming FSK signal is
multiplied by a recovered carrier signal that has the exact same frequency and phase as the
transmitter reference. However, the two transmitted frequencies (the mark and space frequencies)
are not generally continuous; it is not practical to reproduce a local reference that is coherent with
both of them. Consequently, coherent FSK detection is seldom used.
The most common circuit used for demodulating binary FSK signals is the phase-locked loop (PLL), which is shown in block diagram form in Figure. A PLL-FSK demodulator works similarly to a PLL-FM demodulator. As the input to the PLL shifts between the mark and space frequencies, the dc error voltage at the output of the phase comparator follows the frequency shift. Because there are only two input frequencies (mark and space), there are also only two output error voltages. One represents a logic 1 and the other a logic 0; therefore, the output is a two-level (binary) representation of the FSK input. Generally, the natural frequency of the PLL is made equal to the center frequency of the FSK modulator. As a result, the changes in the dc error voltage follow the changes in the analog input frequency and are symmetrical around 0 V.
Binary FSK has a poorer error performance than PSK or QAM and, consequently, is seldom used for high-performance digital radio systems. Its use is restricted to low-performance, low-cost, asynchronous data modems that are used for data communication over analog voice-band telephone lines.
Minimum shift-keying FSK (MSK) is a form of continuous-phase frequency shift keying (CPFSK). Essentially, MSK is binary FSK except that the mark and space frequencies are synchronized with the input binary bit rate. Synchronous simply means that there is a precise time relationship between the two; it does not mean they are equal. With MSK, the mark and space frequencies are selected such that they are separated from the center frequency by an exact odd multiple of one half of the bit rate [fm and fs = n(fb/2), where n = any odd integer]. This ensures that there is a smooth phase transition in the analog output signal when it changes from a logic 1 to a logic 0, and vice versa. With conventional binary FSK, there can be an abrupt phase discontinuity in the analog output signal. When this occurs, the demodulator has trouble following the frequency shift; consequently, an error may occur.
Figure shows a continuous-phase MSK waveform. Notice that when the output frequency changes, it is a smooth, continuous transition. Consequently, there are no phase discontinuities. MSK has a better bit error performance than conventional binary FSK for a given signal-to-noise ratio. The disadvantage of MSK is that it requires synchronizing circuits and is, therefore, more expensive to implement.
Phase shift keying (PSK) is another form of angle-modulated, constant-amplitude digital modulation. PSK is similar to conventional phase modulation except that with PSK the input signal is a binary digital signal and a limited number of output phases are possible.
Binary Phase Shift Keying
With binary phase shift keying (BPSK), two output phases are possible for a single carrier frequency ("binary" meaning 2). One output phase represents a logic 1 and the other a logic 0. As the input digital signal changes state, the phase of the output carrier shifts between two angles that are 180° out of phase. Other names for BPSK are phase reversal keying (PRK) and biphase modulation. BPSK is a form of suppressed-carrier, square-wave modulation of a continuous wave (CW) signal.
BPSK transmitter: Figure shows a simplified block diagram of a BPSK modulator. The balanced modulator acts as a phase-reversing switch. Depending on the logic condition of the digital input, the carrier is transferred to the output either in phase or 180° out of phase with the reference carrier oscillator.
Figure shows the schematic diagram of a balanced ring modulator. The balanced modulator has two inputs: a carrier that is in phase with the reference oscillator, and the binary digital data. For the balanced modulator to operate properly, the digital input voltage must be much greater than the peak carrier voltage. This ensures that the digital input controls the on/off state of diodes D1-D4. If the binary input is a logic 1 (positive voltage), diodes D1 and D2 are forward biased and on, while diodes D3 and D4 are reverse biased and off (Figure).
With the polarities shown, the carrier voltage is developed across transformer T2 in phase with the carrier voltage across T1. Consequently, the output signal is in phase with the reference oscillator.
If the binary input is a logic 0 (negative voltage), diodes D1 and D2 are reverse biased and off, while diodes D3 and D4 are forward biased and on (Figure). As a result, the carrier voltage is developed across transformer T2 180° out of phase with the carrier voltage across T1. Consequently, the output signal is 180° out of phase with the reference oscillator. Figure shows the phasor diagram and constellation diagram for a BPSK modulator. A constellation diagram, which is sometimes called a signal state-space diagram, is similar to a phasor diagram except that the entire phasor is not drawn. In a constellation diagram, only the relative positions of the peaks of the phasors are shown.
output = (sin ωat)(sin ωct) = (1/2)cos(ωc − ωa)t − (1/2)cos(ωc + ωa)t
Figure shows the output phase versus time relationship for a BPSK waveform. The output spectrum from a BPSK modulator is simply a double-sideband, suppressed-carrier signal where the upper and lower side frequencies are separated from the carrier frequency by a value equal to one half of the bit rate. Consequently, the minimum bandwidth (fN) required to pass the worst-case BPSK output signal is equal to the input bit rate.
BPSK receiver: Figure shows the block diagram of a BPSK receiver. The input signal may be +sin ωct or −sin ωct. The coherent carrier recovery circuit detects and regenerates a carrier signal that is both frequency and phase coherent with the original transmit carrier. The balanced modulator is a product detector; the output is the product of the two inputs (the BPSK signal and the recovered carrier). The low-pass filter (LPF) separates the recovered binary data from the complex demodulated signal. Mathematically, the demodulation process is as follows.
For a BPSK input signal of +sin ωct (logic 1), the output of the balanced modulator is

output = (sin ωct)(sin ωct) = sin²ωct = 1/2 − (1/2)cos 2ωct

It can be seen that the output of the balanced modulator contains a positive voltage [+(1/2)V] and a cosine wave at twice the carrier frequency (2ωc). The LPF has a cutoff frequency much lower than 2ωc and thus blocks the second harmonic of the carrier and passes only the positive constant component. A positive voltage represents a demodulated logic 1.
For a BPSK input signal of −sin ωct (logic 0), the output of the balanced modulator is

output = (−sin ωct)(sin ωct) = −sin²ωct = −1/2 + (1/2)cos 2ωct

The output of the balanced modulator contains a negative voltage [−(1/2)V] and a cosine wave at twice the carrier frequency (2ωc). Again, the LPF blocks the second harmonic of the carrier and passes only the negative constant component. A negative voltage represents a demodulated logic 0.
M-ary encoding. M-ary is a term derived from the word binary. M is simply a digit that represents the number of conditions possible. The two digital modulation techniques discussed thus far (binary FSK and BPSK) are binary systems; there are only two possible output conditions. One represents a logic 1 and the other a logic 0; thus, they are M-ary systems where M = 2. With digital modulation, it is very often advantageous to encode at a level higher than binary. For example, a PSK system with four possible output phases is an M-ary system where M = 4. If there were eight possible output phases, M = 8, and so on. Mathematically,

N = log2 M   (12-11)

where N is the number of bits. M = 4 indicates that with 2 bits, four different output conditions are possible; for N = 3, M = 2³ or 8, and so on.
Quadrature phase shift keying (QPSK), or quaternary PSK as it is sometimes called, is another form of angle-modulated, constant-amplitude digital modulation. QPSK is an M-ary encoding technique where M = 4 (hence the name quaternary, meaning 4). With QPSK, four output phases are possible for a single carrier frequency. Because there are four different output phases, there must be four different input conditions. Because the digital input to a QPSK modulator is a binary (base 2) signal, to produce four different input conditions it takes more than a single input bit. With 2 bits, there are four possible conditions: 00, 01, 10, and 11. Therefore, with QPSK the binary input data are combined into groups of 2 bits called dibits. Each dibit code generates one of the four possible output phases. Therefore, for each 2-bit dibit clocked into the modulator, a single output change occurs. Consequently, the rate of change at the output (baud rate) is one half of the input bit rate.
QPSK transmitter. A block diagram of a QPSK modulator is shown in Figure. Two bits (a dibit) are clocked into the bit splitter. After both bits have been serially inputted, they are simultaneously parallel outputted. One bit is directed to the I channel and the other to the Q channel. The I bit modulates a carrier that is in phase with the reference oscillator (hence the name "I" for "in phase" channel), and the Q bit modulates a carrier that is 90° out of phase or in quadrature with the reference carrier (hence the name "Q" for "quadrature" channel).
It can be seen that once a dibit has been split into the I and Q channels, the operation is the same as in a BPSK modulator. Essentially, a QPSK modulator is two BPSK modulators combined in parallel. Again, for a logic 1 = +1 V and a logic 0 = −1 V, two phases are possible at the output of the I balanced modulator (+sin ωct and −sin ωct), and two phases are possible at the output of the Q balanced modulator (+cos ωct and −cos ωct). When the linear summer combines the two quadrature (90° out of phase) signals, there are four possible resultant phasors given by these expressions: +sin ωct + cos ωct, +sin ωct − cos ωct, −sin ωct + cos ωct, and −sin ωct − cos ωct.
For the remaining dibit codes (01, 10, and 11), the procedure is the same. The results are shown in Figure.
In Figure it can be seen that with QPSK each of the four possible output phasors has exactly the same amplitude. Therefore, the binary information must be encoded entirely in the phase of the output signal. This constant-amplitude characteristic is the most important characteristic of PSK that distinguishes it from QAM, which is explained later in this chapter. Also, from Figure it can be seen that the angular separation between any two adjacent phasors in QPSK is 90°. Therefore, a QPSK signal can undergo almost a +45° or −45° shift in phase during transmission and still retain the correct encoded information when demodulated at the receiver. Figure shows the output phase versus time relationship for a QPSK modulator.
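The dibit-to-phasor mapping above can be tabulated in a few lines. A minimal sketch: the sign convention (logic 1 → +1 V, logic 0 → −1 V) follows the text, while the particular dibit ordering is illustrative.

```python
import math

# A sketch of the QPSK mapping described above: each dibit selects one of
# the four equal-amplitude phasors  ±sin(wc t) ± cos(wc t).
def qpsk_phasor(i_bit: int, q_bit: int) -> tuple[float, float]:
    """Return the (I, Q) signs: logic 1 -> +1 V, logic 0 -> -1 V."""
    i = 1.0 if i_bit else -1.0
    q = 1.0 if q_bit else -1.0
    return i, q

for dibit in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    i, q = qpsk_phasor(*dibit)
    amplitude = math.hypot(i, q)                  # same for all four phasors
    phase_deg = math.degrees(math.atan2(q, i))    # ±45 or ±135 degrees
    print(dibit, round(amplitude, 3), round(phase_deg, 1))
```

All four phasors have amplitude √2 and sit 90° apart, which is the constant-amplitude, phase-only encoding the text emphasizes.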
Thus, output = [sin 2π(fb/4)t][sin 2πfct]
= (1/2)cos 2π(fc − fb/4)t − (1/2)cos 2π(fc + fb/4)t

The output frequency spectrum extends from fc + fb/4 to fc − fb/4, and the minimum bandwidth (fN) is

(fc + fb/4) − (fc − fb/4) = 2fb/4 = fb/2
It can be seen that for the same input bit rate, the minimum bandwidth required to pass the output of the QPSK modulator is equal to one half of that of the BPSK modulator in the Example. Also, the baud rate for the QPSK modulator is one half that of the BPSK modulator.
QPSK receiver. The block diagram of a QPSK receiver is shown in Figure. The power splitter directs the input QPSK signal to the I and Q product detectors and the carrier recovery circuit. The carrier recovery circuit reproduces the original transmit carrier oscillator signal. The recovered carrier must be frequency and phase coherent with the transmit reference carrier. The QPSK signal is demodulated in the I and Q product detectors, which generate the original I and Q data bits. The outputs of the product detectors are fed to the bit combining circuit, where they are converted from parallel I and Q data channels to a single binary output data stream.
The incoming QPSK signal may be any one of the four possible output phases shown in Figure. To illustrate the demodulation process, let the incoming QPSK signal be −sin ωct + cos ωct. Mathematically, the demodulation process is as follows.
The received QPSK signal (−sin ωct + cos ωct) is one of the inputs to the I product detector. The other input is the recovered carrier (sin ωct). The output of the I product detector is

I = (−sin ωct + cos ωct)(sin ωct)
= −(1/2)(1 − cos 2ωct) + (1/2)sin(ωc + ωc)t + (1/2)sin(ωc − ωc)t
= −1/2 + (1/2)cos 2ωct + (1/2)sin 2ωct + (1/2)sin 0
= −(1/2) V   (logic 0)
Again, the received QPSK signal (−sin ωct + cos ωct) is one of the inputs to the Q product detector. The other input is the recovered carrier shifted 90° in phase (cos ωct). The output of the Q product detector is

Q = (−sin ωct + cos ωct)(cos ωct)
= (1/2)(1 + cos 2ωct) − (1/2)sin(ωc + ωc)t − (1/2)sin(ωc − ωc)t
= 1/2 + (1/2)cos 2ωct − (1/2)sin 2ωct − (1/2)sin 0
       (filtered out)                    (equals 0)
= +(1/2) V   (logic 1)
In analog messages we can make a good guess about a sample value from knowledge of the past sample values. In other words, the sample values are not independent, and generally there is a great deal of redundancy in the Nyquist samples. Proper exploitation of this redundancy leads to encoding a signal with a smaller number of bits. Consider a simple scheme where, instead of transmitting the sample values, we transmit the difference between successive sample values. Thus, if m[k] is the kth sample, instead of transmitting m[k], we transmit the difference d[k] = m[k] − m[k−1]. At the receiver, knowing d[k] and the previous sample value m[k−1], we can reconstruct m[k]. Thus, from knowledge of the difference d[k], we can reconstruct m[k] iteratively at the receiver. Now, the difference between successive samples is generally much smaller than the sample values. Thus, the peak amplitude mp of the transmitted values is reduced considerably. Because the quantization interval Δv = 2mp/L, for a given L (or n) this reduces the quantization interval Δv, thus reducing the quantization noise, which is given by (Δv)²/12. This means that for a given n (or transmission bandwidth) we can increase the SNR, or for a given SNR we can reduce n (or transmission bandwidth).
We can improve upon this scheme by estimating (predicting) the value of the kth sample m[k] from a knowledge of the previous sample values. If this estimate is m̂[k], then we transmit the difference (prediction error) d[k] = m[k] − m̂[k]. At the receiver also, we determine the estimate m̂[k] from the previous sample values and then generate m[k] by adding the received d[k] to the estimate m̂[k]. Thus, we reconstruct the samples at the receiver iteratively. If our prediction is worth its salt, the predicted (estimated) value m̂[k] will be close to m[k], and the difference (prediction error) d[k] will be even smaller than the difference between successive samples. Consequently, this scheme, known as differential PCM (DPCM), is superior to that described in the previous paragraph, which is a special case of DPCM where the estimate of a sample value is taken as the previous sample value, that is, m̂[k] = m[k−1].
Before describing DPCM, we shall briefly discuss the approach to signal prediction
(estimation). To the uninitiated, future prediction seems a mysterious art fit only for psychics,
wizards, mediums, and the like, who can summon help from the spirit world. Electrical engineers
appear to be hopelessly outclassed in this pursuit. Not quite so! We can also summon the spirits of
Taylor, Maclaurin, Wiener, and the like to help us. Consider, for example, a signal m(t) which
has derivatives of all orders at t. Using the Taylor series for this signal, we can express m(t + Ts) as

m(t + Ts) = m(t) + Ts m'(t) + (Ts²/2!) m''(t) + (Ts³/3!) m'''(t) + ...   (1a)
          ≈ m(t) + Ts m'(t)   for small Ts   (1b)
Equation (1a) shows that from a knowledge of the signal and its derivatives at instant t, we can
predict a future signal value at t + Ts. In fact, even if we know just the first derivative, we can still
predict this value approximately, as shown in Equation (1b). Let us denote the kth sample of m(t) by
m[k], that is, m(kTs) = m[k], m(kTs - Ts) = m[k-1], and so on. Setting t = kTs in Equation (1b),
and recognizing that m'(kTs) ≈ [m(kTs) - m(kTs - Ts)]/Ts, we obtain

m[k+1] ≈ m[k] + Ts · [m[k] - m[k-1]]/Ts
       = 2m[k] - m[k-1]   (2)
This shows that we can find a crude prediction of the (k+1)th sample from the two
previous samples. The approximation in Equation (2) improves as we add more terms in the series on
the right-hand side. To determine the higher-order derivatives in the series, we require more
samples from the past. The larger the number of past samples we use, the better the prediction
will be. Thus, in general, we can express the prediction formula as

m̂[k] = a1 m[k-1] + a2 m[k-2] + ... + aN m[k-N]   (3)

This is the equation of an Nth-order predictor; larger N results in better prediction in
general. The output of this filter (predictor) is m̂[k], the predicted value of m[k]. The inputs are the
previous samples m[k-1], m[k-2], ..., m[k-N], although it is customary to say that the input is m[k]
and the output is m̂[k]. Observe that this equation reduces to m̂[k] = m[k-1] for the first-order
predictor. It follows from Equation (3) when we retain only the first term on the right-hand side. This
means that a1 = 1, and the first-order predictor is a simple time delay.
We have outlined here a very simple procedure for predictor design. In a more sophisticated
approach, where the minimum mean squared error criterion is used for best prediction, the
prediction coefficients ai in Equation (3) are determined from the statistical correlation between the
various samples. The predictor described in Equation (3) is called a linear predictor. It is
basically a transversal filter (a tapped delay line), where the tap gains are set equal to the prediction
coefficients, as shown in figure.
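The tapped-delay-line predictor of Equation (3) is easy to sketch in code. A minimal Python illustration follows; the coefficient and sample values used here are hypothetical, chosen only to show the first-order and the crude second-order predictor of Equation (2):

```python
def predict(past, a):
    """Nth-order linear predictor: m_hat[k] = a1*m[k-1] + ... + aN*m[k-N].

    past -- previous samples [m[k-1], m[k-2], ..., m[k-N]], most recent first
    a    -- predictor (tap) coefficients [a1, a2, ..., aN]
    """
    return sum(ai * mi for ai, mi in zip(a, past))

# First-order predictor (a1 = 1) is a simple delay: m_hat[k] = m[k-1]
print(predict([0.8], [1.0]))             # -> 0.8
# Crude second-order predictor from Equation (2): m_hat[k] = 2m[k-1] - m[k-2]
print(predict([0.8, 0.5], [2.0, -1.0]))
```

The second call evaluates 2(0.8) - 0.5, the straight-line extrapolation through the last two samples.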
Analysis of DPCM
As mentioned earlier, in DPCM we transmit not the present sample m[k], but d[k] (the
difference between m[k] and its predicted value m̂[k]). At the receiver, we generate m̂[k] from
the past sample values, to which the received d[k] is added to generate m[k]. There is, however,
one difficulty in this scheme. At the receiver, instead of the past samples m[k-1], m[k-2], ..., as well
as d[k], we have only their quantized versions mq[k-1], mq[k-2], .... Hence, we cannot determine
m̂[k]; we can only determine m̂q[k], the estimate of the quantized sample mq[k]. This will increase
the error in reconstruction. In such a case, a better strategy is to determine m̂q[k], the estimate of
mq[k] (instead of m[k]), at the transmitter also from the quantized samples mq[k-1], mq[k-2], ....
The difference d[k] = m[k] - m̂q[k] is now transmitted using PCM. At the receiver, we can generate
m̂q[k], and from the received d[k] we can reconstruct mq[k].
Figure shows a DPCM transmitter. We shall soon show that the predictor input is mq[k];
naturally, its output is m̂q[k], the predicted value of mq[k]. The difference

d[k] = m[k] - m̂q[k]

is quantized to yield

dq[k] = d[k] + q[k]

where q[k] is the quantization error. The predictor output m̂q[k] is fed back to its input so that the
predictor input mq[k] is

mq[k] = m̂q[k] + dq[k]
      = m[k] - d[k] + dq[k]
      = m[k] + q[k]

This shows that mq[k] is a quantized version of m[k]. The predictor input is indeed mq[k], as
assumed. The quantized signal dq[k] is now transmitted over the channel. The receiver shown in
figure is identical to the shaded portion of the transmitter. The inputs in both cases are also the
same, viz., dq[k]. Therefore, the predictor output must be m̂q[k] (the same as the predictor output
at the transmitter). Hence, the receiver output (which is the predictor input) is also the same, viz.,
mq[k] = m[k] + q[k], as found above. This shows that we are able to receive the desired signal
m[k] plus the quantization noise q[k]. This is the quantization noise associated with the difference
signal d[k], which is generally much smaller than m[k]. The received samples are decoded
and passed through a low-pass filter for D/A conversion.
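The feedback loop above (quantize the prediction error, feed the quantized sample back into the predictor) can be sketched in a few lines. This is an illustrative simulation, not the text's circuit: it assumes a first-order predictor m̂q[k] = mq[k-1] and a simple mid-rise uniform quantizer, and checks the key result mq[k] = m[k] + q[k], i.e. the reconstruction error never exceeds the quantization error of the difference signal:

```python
import math

def quantize(d, step):
    # Mid-rise uniform quantizer: dq = d + q with |q| <= step/2
    return step * (math.floor(d / step) + 0.5)

def dpcm(samples, step):
    """Minimal DPCM loop with a first-order predictor m_hat_q[k] = mq[k-1]."""
    mq_prev = 0.0               # predictor state, identical at transmitter and receiver
    dq_stream, mq_stream = [], []
    for m in samples:
        d = m - mq_prev         # prediction error d[k] = m[k] - m_hat_q[k]
        dq = quantize(d, step)  # dq[k] = d[k] + q[k], sent over the channel
        mq_prev = mq_prev + dq  # mq[k] = m_hat_q[k] + dq[k] = m[k] + q[k]
        dq_stream.append(dq)
        mq_stream.append(mq_prev)
    return dq_stream, mq_stream

samples = [0.0, 0.4, 0.9, 1.0, 0.7, 0.2]
dq, mq = dpcm(samples, step=0.1)
# Reconstruction error stays within half a quantization step of the original
assert all(abs(r - m) <= 0.05 + 1e-12 for r, m in zip(mq, samples))
```

Because the transmitter predicts from the quantized samples (exactly what the receiver has), the quantization errors do not accumulate from sample to sample.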
SNR Improvement
Figure: DPCM system (a) Transmitter (b) Receiver
To determine the improvement of DPCM over PCM, let mp and dp be the peak amplitudes
of m(t) and d(t), respectively. If we use the same value of L in both cases, the quantization step Δv
in DPCM is reduced by the factor dp/mp. Because the quantization noise power is (Δv)²/12, the
quantization noise in DPCM is reduced by the factor (mp/dp)², and the SNR increases by the same
factor. Moreover, the signal power is proportional to its peak value squared (assuming other
statistical properties invariant). Therefore, Gp (the SNR improvement due to prediction) is

Gp = Pm / Pd

where Pm and Pd are the powers of m(t) and d(t), respectively. In terms of decibels, this means
that the SNR increases by 10 log10 (Pm/Pd) dB. Therefore, the PCM SNR equation applies to DPCM
as well, with a value that is higher by 10 log10 (Pm/Pd) dB. In one example, a second-order predictor
processor for speech signals is analyzed; for this case, the SNR improvement is found to be 5.6 dB.
In practice, the SNR improvement may be as high as 25 dB in such cases as short-term voiced
speech spectra and in the spectra of low-activity images. Alternatively, for the same SNR, the bit
rate for DPCM can be lower than that for PCM by 3 to 4 bits per sample. Thus, telephone systems
using DPCM can often operate at 32 kbit/s or even 24 kbit/s.
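The prediction gain Gp = Pm/Pd is easy to estimate numerically. A sketch, using the simplest DPCM (the previous sample as the estimate) on a hypothetical slowly varying sinusoid:

```python
import math

def prediction_gain_db(m):
    """Gp = Pm/Pd in dB, with d[k] = m[k] - m[k-1] (previous sample as estimate)."""
    d = [m[k] - m[k - 1] for k in range(1, len(m))]  # difference signal
    Pm = sum(x * x for x in m) / len(m)              # power of m
    Pd = sum(x * x for x in d) / len(d)              # power of d
    return 10 * math.log10(Pm / Pd)

# A slowly varying signal: successive differences are tiny, so the gain is large
slow = [math.sin(2 * math.pi * k / 200) for k in range(1000)]
print(prediction_gain_db(slow))   # roughly 30 dB for this example
```

For a rapidly alternating signal the differences are larger than the samples and the "gain" becomes negative, which is why DPCM pays off only when adjacent samples are correlated.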
Delta Modulation:
Delta modulation is the special case of DPCM in which the first-order predictor is used, so that

mq[k] = mq[k-1] + dq[k]

Proceeding iteratively in this manner, and assuming the zero initial condition mq[0] = 0, yields

mq[k] = Σ dq[m]   (sum from m = 0 to k)

This shows that the receiver (demodulator) is just an accumulator (adder). If the output
dq[k] is represented by impulses, then the accumulator (receiver) may be realized by an integrator,
because its output is the sum of the strengths of the input impulses (the sum of the areas under the
impulses). We may also replace the feedback portion of the modulator (which is identical to the
demodulator) by an integrator. The demodulator output is mq[k], which, when passed through a
low-pass filter, yields the desired signal reconstructed from the quantized samples.
The analog signal m(t) is compared with the feedback signal (which serves as a predicted
signal) m̂q(t). The error signal d(t) = m(t) - m̂q(t) is applied to a comparator. If d(t) is positive, the
comparator output is a constant signal of amplitude E, and if d(t) is negative, the comparator
output is -E. Thus, the difference is a binary signal (L = 2), which is all that is needed to generate
1-bit DPCM. The comparator output is sampled by a sampler at a rate of fs samples per second, where fs is
typically much higher than the Nyquist rate. The sampler thus produces a train of narrow pulses
dq[k] (to simulate impulses), with a positive pulse when m(t) > m̂q(t) and a negative pulse when
m(t) < m̂q(t). Note that each sample is coded by a single binary pulse (1-bit DPCM), as required.
The pulse train dq[k] is the delta-modulated pulse train (figure). The modulated signal dq[k] is
amplified and integrated in the feedback path to generate m̂q(t) (figure), which tries to follow m(t).
(a) Delta modulator
To understand how this works, we note that each pulse in dq[k] at the input of the integrator
gives rise to a step function (positive or negative, depending on the pulse polarity) in m̂q(t). If, for
example, m(t) > m̂q(t), a positive pulse is generated in dq[k], which gives rise to a positive step in
m̂q(t) at the next sampling instant, as shown in figure. It can be seen that m̂q(t) is a kind of staircase
approximation of m(t). When m̂q(t) is passed through a low-pass filter, the coarseness of the
staircase in m̂q(t) is eliminated, and we get a smoother and better approximation to m(t). The
demodulator at the receiver consists of an amplifier-integrator (identical to that in the feedback
path of the modulator) followed by a low-pass filter (figure).
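The staircase behaviour just described can be reproduced with a few lines of code. A minimal sketch; the sampling rate, signal frequency and step size below are hypothetical, chosen so that no slope overload occurs:

```python
import math

def delta_modulate(m, sigma):
    """1-bit DPCM sketch: compare each sample with the staircase m_hat_q and
    step the staircase up or down by sigma (the step size)."""
    mq, bits, staircase = 0.0, [], []
    for sample in m:
        bit = 1 if sample > mq else 0      # comparator output (+E or -E)
        mq += sigma if bit else -sigma     # integrator builds the staircase
        bits.append(bit)
        staircase.append(mq)
    return bits, staircase

# Hypothetical numbers satisfying omega*A < sigma*fs (no slope overload)
fs, f, A, sigma = 8000, 100, 1.0, 0.1
m = [A * math.sin(2 * math.pi * f * k / fs) for k in range(400)]
bits, staircase = delta_modulate(m, sigma)
# Without overload the staircase tracks m(t) to within about one step
assert max(abs(s - x) for s, x in zip(staircase, m)) <= 2 * sigma
```

Raising f (or A) until the signal's slope exceeds sigma*fs makes the staircase lag behind the input, which is exactly the slope overload discussed next.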
Threshold and overloading effects can be clearly seen in figure. Variations in m(t) smaller
than the step value (the threshold of coding) are lost in DM. Moreover, if m(t) changes too fast, that
is, if m'(t) is too large, m̂q(t) cannot follow m(t), and overloading occurs. This is the so-called slope
overload, which gives rise to slope overload noise. This noise is one of the basic limiting
factors in the performance of DM. We should expect slope overload rather than amplitude
overload in DM, because DM basically carries the information about m'(t). The granular nature of
the output signal gives rise to granular noise, similar to quantization noise. The slope
overload noise can be reduced by increasing σ (the step size). This, unfortunately, increases the
granular noise. There is an optimum value of σ that yields the best compromise, giving the
minimum overall noise. This optimum value of σ depends on the sampling frequency fs and the
nature of the signal.
The slope overload occurs when m̂q(t) cannot follow m(t). During the sampling interval Ts,
m̂q(t) is capable of changing by σ, where σ is the height of the step. Hence, the maximum slope
that m̂q(t) can follow is σ/Ts, or σfs, where fs is the sampling frequency. Hence, no overload
occurs if

|m'(t)| < σ fs

For the sinusoid m(t) = A cos ωt,

|m'(t)|max = ωA < σ fs

Hence, the maximum amplitude Amax of this signal that can be tolerated without overload is given
by

Amax = σ fs / ω
The overload amplitude of the modulating signal is inversely proportional to the frequency ω. For
higher modulating frequencies, the overload occurs for smaller amplitudes. For voice signals,
which contain all frequency components up to (say) 4 kHz, calculating Amax by using
ω = 2π × 4000 in this equation will give an overly conservative value. It has been shown by de Jager
that Amax for voice signals can be calculated by using ωr ≈ 2π × 800 instead:

[Amax]voice ≈ σ fs / ωr

Thus, the maximum voice signal amplitude Amax that can be used without causing slope overload
in DM is the same as the maximum amplitude of a sinusoid of reference frequency fr (fr ≈
800 Hz) that can be used without causing slope overload in the same system.
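The relation Amax = σfs/ω is simple enough to check numerically. A sketch, with a hypothetical step size and sampling rate, verifying the 1/ω behaviour between the 800 Hz reference and a 4 kHz tone:

```python
import math

def amax_no_overload(sigma, fs, f):
    """Largest sinusoid amplitude at frequency f avoiding slope overload:
    Amax = sigma * fs / omega, with omega = 2*pi*f."""
    return sigma * fs / (2 * math.pi * f)

sigma, fs = 0.1, 64000       # hypothetical step size and sampling rate
a800 = amax_no_overload(sigma, fs, 800)
a4k = amax_no_overload(sigma, fs, 4000)
# Overload amplitude falls as 1/f: a 4 kHz tone tolerates 1/5 the amplitude
assert abs(a800 / a4k - 5.0) < 1e-9
```

This is why de Jager's 800 Hz reference, rather than the 4 kHz band edge, gives a realistic voice-signal limit: computing Amax at 4 kHz would understate the permissible amplitude by a factor of five.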
Fortunately, the voice spectrum (as well as the television video signal) also decays with
frequency and closely follows the overload characteristics (curve c, figure). For this reason, DM is
well suited for voice (and television) signals. Actually, the voice signal spectrum (curve b)
decreases as 1/ω up to 2000 Hz, and beyond this frequency it decreases as 1/ω². If we had used
double integration in the feedback circuit instead of single integration, Amax in the equation above
would be proportional to 1/ω². Hence, a better match between the voice spectrum and the overload
characteristics is achieved by using single integration up to 2000 Hz and double integration beyond
2000 Hz. Such a circuit, however, has a tendency toward instability, which can be reduced by using
some low-order prediction along with double integration. The double integrator can be built by
placing in cascade two low-pass RC integrators with time constants R1C1 = 1/200π and
R2C2 = 1/4000π, respectively. This results in single integration from 100 Hz to 2000 Hz and
double integration beyond 2000 Hz.
Pulse-width modulation has the disadvantage, when compared with pulse-position
modulation (PPM), which will be discussed next, that its pulses are of varying width and therefore
of varying power content. This means that the transmitter must be powerful enough to handle the
maximum-width pulses, although the average power transmitted is perhaps only half of the peak
power. PWM still works if synchronization between transmitter and receiver fails, whereas pulse-
position modulation does not.
Generation and demodulation of PWM: Pulse-width modulation may be generated by applying
trigger pulses (at the sampling rate) to control the starting time of pulses from a monostable
multivibrator, and feeding in the signal to be sampled to control the duration of these pulses. The
circuit diagram for such an arrangement is shown in figure.
The emitter-coupled monostable multivibrator of figure makes an excellent voltage-to-time
converter, since its gate width is dependent on the voltage to which the capacitor C is charged. If
this voltage is varied in accordance with a signal voltage, a series of rectangular pulses will be
obtained, with widths varying as required. Note that the circuit does the twin jobs of sampling
and converting the samples into PWM.
It will be recalled that the stable state for this type of multivibrator is with T1 OFF and T2
ON. The applied trigger pulse switches T1 ON, whereupon the voltage at C1 falls as T1 begins
to draw collector current; the voltage at B2 follows suit, and T2 is switched OFF by regenerative
action. As soon as this happens, however, C begins to charge up to the collector supply potential
through R. After a time determined by the supply voltage and the RC time constant of the
charging network, B2 becomes sufficiently positive to switch T2 ON. T1 is simultaneously switched
OFF by regenerative action and stays OFF until the arrival of the next trigger pulse.
Figure: Monostable multivibrator generating pulse-width modulation.
The voltage that the base of T2 must reach to allow T2 to turn ON is slightly more positive than the
voltage across the common emitter resistor Rk. This voltage depends on the current flowing
through the circuit, which at that time is the collector current of T1 (which is then ON). The
collector current depends on the base bias, which is governed by the instantaneous changes in the
applied signal voltage. The applied modulation voltage thus controls the voltage to which B2 must
rise to switch T2 ON. Since this voltage rise is linear, the modulation voltage is seen to control the
period of time during which T2 is OFF, that is, the pulse duration. It should be noted that this pulse
duration is very short compared with the period of even the highest signal frequencies, so that no
real distortion arises through changes in signal amplitude while T2 is OFF.
The demodulation of pulse-width modulation is quite a simple process; PWM is merely fed
to an integrating circuit from which a signal emerges whose amplitude at any time is proportional
to the pulse width at that time. This principle is also employed in the very efficient so-called class
D amplifiers. The integrating circuit most often used there is the loudspeaker itself.
The amplitude and width of the pulses are kept constant in this system, while the position of
each pulse, in relation to the position of a recurrent reference pulse, is varied by each instantaneous
sampled value of the modulating wave. This means that the transmitter must send synchronizing
pulses to operate timing circuits in the receiver. As mentioned in connection with PWM, pulse-
position modulation has the advantage of requiring constant transmitter power output, but the
disadvantage of depending on transmitter-receiver synchronization.
Generation and demodulation of PPM: Pulse-position modulation may be obtained very simply
from PWM, as shown in figure. Considering PWM and its generation again, it is seen that each
such pulse has a leading edge and a trailing edge (like any other pulse, of course). However, in
PWM the locations of the leading edges are fixed, whereas those of the trailing edges are not.
Their positions depend on the pulse width, which is determined by the signal amplitude at that
instant. Thus, it may be said that the trailing edges of PWM pulses are, in fact, position-
modulated. The method of obtaining PPM from PWM is thus accomplished by getting rid of
the leading edges and bodies of the PWM pulses. This is surprisingly easy to achieve.
Figure: a and b show, once again, PWM corresponding to a given signal. If the train of
pulses thus obtained is differentiated, then, as shown in figure, another pulse train results. This
has positive-going narrow pulses corresponding to the leading edges and negative-going pulses
corresponding to the trailing edges. If the position corresponding to the trailing edge of an
unmodulated pulse is counted as zero displacement, then the other trailing edges will arrive
earlier or later. (An unmodulated PWM pulse is one that is obtained when the instantaneous
signal value is zero.) These pulses are appropriately labeled in figure. The modulated pulses will
therefore have a time displacement other than zero; this time displacement is proportional to the
instantaneous value of the signal voltage. The differentiated pulses corresponding to the leading
edges are removed with a diode clipper or rectifier, and the remaining pulses, as shown in figure,
are position-modulated.
When PPM is demodulated in the receiver, it is first converted back into PWM. This is
done with a flip-flop, or bistable multivibrator. One input of the multivibrator receives trigger
pulses from a local generator which is synchronized by trigger pulses received from the
transmitter, and these triggers are used to switch OFF one of the stages of the flip-flop. The PPM
pulses are fed to the other base of the flip-flop and switch that stage ON (actually by switching the
other one OFF). The period of time during which this particular stage is OFF depends on the
time difference between the two triggers, so that the resulting pulse has a width that depends on
the time displacement of each individual PPM pulse. The resulting PWM pulse train is then
demodulated.
9. Explain about pulse amplitude modulation?
Pulse-amplitude modulation (PAM), the simplest form of pulse modulation, is illustrated in
figure. It forms an excellent introduction to pulse modulation in general. PAM is a pulse
modulation system in which the signal is sampled at regular intervals, and each sample is made
proportional to the amplitude of the signal at the instant of sampling. The pulses are then sent by
either wire or cable, or else are used to modulate a carrier. As shown in figure, the two types are
double-polarity PAM, which is self-explanatory, and single-polarity PAM, in which a fixed dc
level is added to the signal to ensure that the pulses are always positive. The ability to use
constant-amplitude pulses is a major advantage of pulse modulation, and since PAM does not
utilize constant-amplitude pulses, it is infrequently used. When it is used, the pulses
frequency-modulate the carrier.
Figure: Pulse-amplitude modulation. (a) Signal; (b) double-polarity PAM; (c) single-polarity
PAM.
It is very easy to generate and demodulate PAM. In a generator, the signal to be converted
to PAM is fed to one input of an AND gate. Pulses at the sampling frequency are applied to the
other input of the AND gate to open it during the wanted time intervals. The output of the gate
then consists of pulses at the sampling rate, equal in amplitude to the signal voltage at each instant.
The pulses are then passed through a pulse-shaping network, which gives them flat tops. As
mentioned above, frequency modulation is then employed, so that the system becomes PAM-FM.
In the receiver, the pulses are first recovered with a standard FM demodulator. They are then fed
to an ordinary diode detector, which is followed by a low-pass filter. If the cutoff frequency of this
filter is high enough to pass the highest signal frequency, but low enough to remove the sampling-
frequency ripple, an undistorted replica of the original signal is reproduced.
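The single-polarity scheme (a fixed dc level added so the gated samples are always positive) can be sketched as follows; the signal values, gating pattern and dc level here are hypothetical, and the AND gate is modelled simply as multiplication by a 0/1 pulse train:

```python
def pam_single_polarity(signal, gate, dc):
    """Single-polarity PAM sketch: add a dc level so samples stay positive,
    then gate the signal at the sampling instants (AND-gate behaviour)."""
    return [(s + dc) if g else 0.0 for s, g in zip(signal, gate)]

sig = [0.3, -0.2, 0.5, -0.4]     # bipolar message samples
gate = [1, 0, 1, 0]              # pulses at the sampling rate
out = pam_single_polarity(sig, gate, dc=1.0)
# Gated samples carry (signal + dc); the rest of the time the line is zero
assert all(abs(a - b) < 1e-9 for a, b in zip(out, [1.3, 0.0, 1.5, 0.0]))
```

With dc chosen larger than the most negative signal excursion, every transmitted pulse is positive, as the text requires for single-polarity PAM.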
Digital QAM
Like all modulation schemes, QAM conveys data by changing some aspect of a carrier signal
(usually a sinusoid) in response to a data signal. In the case of QAM, the amplitudes of two waves,
90 degrees out of phase with each other (in quadrature), are changed (modulated or keyed) to
represent the data signal. Amplitude-modulating two carriers in quadrature can be equivalently
viewed as both amplitude-modulating and phase-modulating a single carrier.
Phase modulation (analog PM) and phase-shift keying (digital PSK) can be regarded as special
cases of QAM, where the magnitude of the modulating signal is a constant, with only the phase
varying. This view can also be extended to frequency modulation (FM) and frequency-shift keying
(FSK), since these can be regarded as special cases of phase modulation.
Analog QAM
Analog QAM: measured PAL colour bar signal on a vector analyser screen.
When transmitting two signals by modulating them with QAM, the transmitted signal will be of
the form:

s(t) = I(t) cos(2πf0t) + Q(t) sin(2πf0t)

where I(t) and Q(t) are the modulating signals and f0 is the carrier frequency.
At the receiver, these two modulating signals can be demodulated using a coherent demodulator.
Such a receiver multiplies the received signal separately with both a cosine and a sine signal to
produce the received estimates of I(t) and Q(t), respectively. Because of the orthogonality property
of the carrier signals, it is possible to detect the modulating signals independently.
In the ideal case, I(t) is demodulated by multiplying the transmitted signal with a cosine signal.
Low-pass filtering the product removes the high-frequency terms (containing 4πf0t), leaving only
the I(t) term. This filtered signal is unaffected by Q(t), showing that the in-phase component can be
received independently of the quadrature component. Similarly, we may multiply s(t) by a sine
wave and then low-pass filter to extract Q(t).
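The coherent demodulation just described can be simulated directly: multiply s(t) by the cosine and sine carriers and average over an integer number of carrier cycles, a crude stand-in for the low-pass filter. All numeric values below are hypothetical, and constant I and Q are used for simplicity:

```python
import math

fs, f0, N = 100_000, 1000, 2000   # sample rate, carrier, sample count (20 cycles)
t = [k / fs for k in range(N)]
I0, Q0 = 0.7, -0.4                # constant message values for illustration

# s(t) = I(t)cos(2*pi*f0*t) + Q(t)sin(2*pi*f0*t)
s = [I0 * math.cos(2 * math.pi * f0 * x) + Q0 * math.sin(2 * math.pi * f0 * x)
     for x in t]

# Multiply by 2cos / 2sin and average: the 2*f0 terms integrate to zero
I_hat = sum(2 * v * math.cos(2 * math.pi * f0 * x) for v, x in zip(s, t)) / N
Q_hat = sum(2 * v * math.sin(2 * math.pi * f0 * x) for v, x in zip(s, t)) / N
assert abs(I_hat - I0) < 1e-3 and abs(Q_hat - Q0) < 1e-3
```

Replacing the cosine with cos(2πf0t + φ) for some small phase error φ leaks a sin(φ)-weighted copy of Q into the I estimate, which is exactly the crosstalk discussed next.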
The phase of the received signal is assumed to be known accurately at the receiver.
If the demodulating phase is even a little off, it results in crosstalk between the modulated signals.
This issue of carrier synchronization at the receiver must be handled somehow in QAM systems.
The coherent demodulator needs to be exactly in phase with the received signal, or otherwise the
modulated signals cannot be independently received. For example, analog television systems
transmit a burst of the colour subcarrier after each horizontal synchronization pulse
for phase reference.
Analog QAM is used in NTSC and PAL television systems, where the I- and Q-signals carry the
components of chroma (colour) information. "Compatible QAM" or C-QUAM is used in AM
stereo radio to carry the stereo difference information. In the frequency domain, S(f), MI(f) and
MQ(f) denote the Fourier transforms (frequency-domain representations) of s(t), I(t) and Q(t),
respectively.
Quantized QAM
As with many digital modulation schemes, the constellation diagram is a useful representation. In
QAM, the constellation points are usually arranged in a square grid with equal vertical and
horizontal spacing, although other configurations are possible (e.g. Cross-QAM). Since in digital
telecommunications the data are usually binary, the number of points in the grid is usually a
power of 2 (2, 4, 8, ...). Since QAM constellations are usually square, some of these orders are
rare; the most common forms are 16-QAM, 64-QAM, 128-QAM and 256-QAM. By moving to a
higher-order constellation, it is possible to transmit more bits per symbol. However, if the mean
energy of the constellation is to remain the same (by way of making a fair comparison), the points
must be closer together and are thus more susceptible to noise and other corruption; this results in
a higher bit error rate, and so higher-order QAM can deliver more data less reliably than
lower-order QAM, for constant mean constellation energy.
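The trade-off above (more bits per symbol versus closer points at fixed mean energy) can be verified numerically for square constellations; this sketch normalises each constellation to unit mean energy and compares minimum point spacings:

```python
import math

def square_qam(M):
    """Square M-QAM constellation (M a power of 4), unit mean energy."""
    side = int(math.isqrt(M))
    levels = [2 * i - (side - 1) for i in range(side)]  # e.g. -3,-1,1,3 for 16-QAM
    pts = [complex(a, b) for a in levels for b in levels]
    mean_e = sum(abs(p) ** 2 for p in pts) / M
    return [p / math.sqrt(mean_e) for p in pts]        # scale to unit mean energy

def min_distance(pts):
    """Smallest Euclidean distance between any two constellation points."""
    return min(abs(p - q) for i, p in enumerate(pts) for q in pts[i + 1:])

d16 = min_distance(square_qam(16))
d64 = min_distance(square_qam(64))
assert d64 < d16   # 64-QAM points sit closer together, hence less noise margin
```

At unit mean energy the minimum distances work out to 2/√10 for 16-QAM and 2/√42 for 64-QAM, which is the quantitative form of "more bits per symbol, but more susceptible to noise".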
Communication systems designed to achieve very high levels of spectral efficiency usually
employ very dense QAM constellations. One example is the ITU-T G.hn standard for networking
over existing home wiring (coaxial cable, phone lines and power lines), which employs
constellations up to 4096-QAM (12 bits/symbol).
Ideal structure
Transmitter
The following picture shows the ideal structure of a QAM transmitter, with a carrier frequency f0
and the frequency response of the transmitter's filter Ht:
First the flow of bits to be transmitted is split into two equal parts: this process generates two
independent signals to be transmitted. They are encoded separately, just as in an
amplitude-shift keying (ASK) modulator. Then one channel (the one "in phase") is multiplied by a
cosine, while the other channel (in "quadrature") is multiplied by a sine. This way there is a phase
difference of 90° between them. They are simply added to one another and sent through the real
channel. Here vc[n] and vs[n] are the voltages applied in response to the nth symbol to the cosine
and sine waves, respectively.
Receiver
In practice, there is an unknown phase delay between the transmitter and receiver that must be
compensated by synchronization of the receiver's local oscillator, i.e. the sine and cosine functions
in the above figure. In mobile applications, there will often be an offset in the relative frequency as
well, due to the possible presence of a Doppler shift proportional to the relative velocity of the
transmitter and receiver. Both the phase and frequency variations introduced by the channel must
be compensated by properly tuning the sine and cosine components, which requires a phase
reference, and is typically accomplished using a phase-locked loop (PLL).
Q(x) is related to the complementary Gaussian error function: it is the probability that x will be
under the tail of the Gaussian PDF towards positive infinity.
The error rates quoted here are those in additive white Gaussian noise (AWGN).
Where coordinates for constellation points are given in this article, note that they represent a
non-normalised constellation. That is, if a particular mean average energy were required (e.g. unit
average energy), the constellation would need to be linearly scaled.
Rectangular QAM
The first rectangular QAM constellation usually encountered is 16-QAM, the constellation
diagram for which is shown here. A Gray-coded bit assignment is also given. The reason that
16-QAM is usually the first is that a brief consideration reveals that 2-QAM and 4-QAM are in fact
binary phase-shift keying (BPSK) and quadrature phase-shift keying (QPSK), respectively. Also,
the error-rate performance of 8-QAM is close to that of 16-QAM (only about 0.5 dB better), but its
data rate is only three-quarters that of 16-QAM.
The bit-error rate depends on the bit-to-symbol mapping, but for a Gray-coded
assignment -- so that we can assume each symbol error causes only one bit error -- the bit-error
rate can be approximated from the symbol error rate.
Since the carriers are independent, the overall bit error rate is the same as the per-carrier error
rate, just as for BPSK and QPSK.
Odd-k QAM
Note that neither of these constellations is used in practice, as the non-rectangular version of
8-QAM is optimal.
Constellation diagram for rectangular 8-QAM. Alternative constellation diagram for rectangular
8-QAM.
Non-rectangular QAM
It is the nature of QAM that most orders of constellations can be constructed in many different
ways, and it is neither possible nor instructive to cover them all here. This article instead presents
two lower-order constellations.
Two diagrams of circular QAM constellations are shown, for 8-QAM and 16-QAM. The circular
8-QAM constellation is known to be the optimal 8-QAM constellation in the sense of requiring the
least mean power for a given minimum Euclidean distance. The 16-QAM constellation is
suboptimal, although the optimal one may be constructed along the same lines as the 8-QAM
constellation. The circular constellation highlights the relationship between QAM and PSK. Other
orders of constellation may be constructed along similar (or very different) lines. It is consequently
hard to establish expressions for the error rates of non-rectangular QAM, since they necessarily
depend on the constellation. Nevertheless, an obvious upper bound to the rate is related to the
minimum Euclidean distance of the constellation (the shortest straight-line distance between two
points).
In moving to a higher-order QAM constellation (higher data rate and mode) in hostile
RF/microwave QAM application environments, such as in broadcasting or telecommunications,
interference (via multipath) typically increases. Reduced noise immunity due to reduced
constellation separation makes it difficult to achieve theoretical performance thresholds. There are
several test parameter measurements which help determine an optimal QAM mode for a specific
operating environment. The following three are the most significant:
Carrier/interference ratio
Carrier-to-noise ratio
Threshold-to-noise ratio
Mathematical representation
where aI(t) and aQ(t) encode the even and odd information, respectively, with a sequence of square
pulses of duration 2T. Using a trigonometric identity, this can be rewritten in a form where the
phase and frequency modulation are more obvious.
Gaussian minimum-shift keying
In digital communication, Gaussian minimum-shift keying or GMSK is a continuous-phase
frequency-shift keying modulation scheme. It is similar to standard minimum-shift keying (MSK);
however, the digital data stream is first shaped with a Gaussian filter before being applied to a
frequency modulator. This has the advantage of reducing sideband power, which in turn reduces
out-of-band interference between signal carriers in adjacent frequency channels. However, the
Gaussian filter increases the modulation memory in the system and causes intersymbol
interference, making it more difficult to discriminate between different transmitted data values
and requiring more complex channel equalization algorithms, such as an adaptive equalizer at the
receiver. GMSK has high spectral efficiency, but it needs a higher power level than QPSK, for
instance, in order to transmit the same amount of data reliably.
GMSK is most notably used in the Global System for Mobile Communications (GSM).
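The Gaussian pulse-shaping filter that distinguishes GMSK from MSK can be sketched as follows. This assumes the standard Gaussian response h(t) ∝ exp(-2π²(BT)²t²/ln 2), with t measured in symbol periods and BT the bandwidth-time product (0.3 in GSM); tap length and sample rate are illustrative choices:

```python
import math

def gaussian_taps(bt, sps, span=4):
    """Taps of a Gaussian pulse-shaping filter, h(t) ~ exp(-2*pi^2*(BT)^2*t^2/ln 2),
    t in symbol periods; bt is the BT product, sps the samples per symbol.
    Taps are normalised to unit sum (unit dc gain)."""
    alpha = 2 * math.pi ** 2 * bt ** 2 / math.log(2)
    taps = [math.exp(-alpha * (k / sps) ** 2)
            for k in range(-span * sps, span * sps + 1)]
    s = sum(taps)
    return [x / s for x in taps]

narrow = gaussian_taps(0.3, 8)   # GSM-like BT = 0.3
wide = gaussian_taps(1.0, 8)     # larger BT: shorter pulse, less memory
assert abs(sum(narrow) - 1.0) < 1e-9
# Smaller BT spreads energy over more symbols (more modulation memory / ISI)
assert max(narrow) < max(wide)
```

The flatter, longer BT = 0.3 pulse is precisely the "modulation memory" the text mentions: each data bit's frequency pulse overlaps its neighbours, trading intersymbol interference for a more compact spectrum.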
Encoding
The simplest and most common form of ASK operates as a switch, using the presence of a carrier
wave to indicate a binary one and its absence to indicate a binary zero. This type of modulation is
called on-off keying, and is used at radio frequencies to transmit Morse code (referred to as
continuous wave operation).
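As a rough illustration of on-off keying, here is a minimal sketch (not from the source; the carrier frequency, sample rate and bit duration are arbitrary choices):

```python
import math

def ook_modulate(bits, fc=8.0, fs=100.0, tb=1.0):
    """On-off keying: transmit a carrier burst for a 1, silence for a 0."""
    n = int(fs * tb)  # samples per bit interval
    out = []
    for b in bits:
        for k in range(n):
            # carrier present only when the bit is 1
            out.append(b * math.cos(2 * math.pi * fc * k / fs))
    return out

signal = ook_modulate([1, 0, 1, 1])
```

Each 1 becomes one carrier burst and each 0 becomes a silent interval of the same length.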
Here is a diagram showing the ideal model for a transmission system using an ASK modulation:
It can be divided into three blocks. The first one represents the transmitter, the second one is a
linear model of the effects of the channel, the third one shows the structure of the receiver. The
following notation is used:
Different symbols are represented with different voltages. If the maximum allowed value for the
voltage is A, then all the possible values are in the range [-A, A] and they are given by:
vi = (2A/(L - 1)) i - A,  i = 0, 1, ..., L - 1
Considering the picture, the symbols v[n] are generated randomly by the source S, then the
impulse generator creates impulses with an area of v[n]. These impulses are sent to the filter ht to be
sent through the channel. In other words, for each symbol a different carrier wave is sent with the
relative amplitude.
Out of the transmitter, the signal s(t) can be expressed in the form:
In the receiver, after the filtering through hr (t) the signal is:
where * indicates the convolution between two signals. After the A/D conversion the signal z[k]
can be expressed in the form:
In this relationship, the second term represents the symbol to be extracted. The others are
unwanted: the first one is the effect of noise, and the third one is due to intersymbol interference.
If the filters are chosen so that g(t) will satisfy the Nyquist ISI criterion, then there will be no
intersymbol interference and the value of the sum will be zero, so:
Probability of error
The probability density function to make an error after a certain symbol has been sent can be
modelled by a Gaussian function; the mean value will be the relative sent value, and its variance
will be given by:
where N(f) is the spectral density of the noise within the band and Hr(f) is the continuous Fourier
transform of the impulse response of the filter hr(t).
where P(e | v0) is the conditional probability of making an error after a symbol v0 has been sent
and P(v0) is the probability of sending a symbol v0.
If we represent all the probability density functions on the same plot against the possible value of
the voltage to be transmitted, we get a picture like this (the particular case of L=4 is shown):
The probability of making an error after a single symbol has been sent is the area of the Gaussian
function falling under the other ones. It is shown in cyan for just one of them. If we call P+ the area
under one side of the Gaussian, the sum of all the areas will be 2LP+ - 2P+. The total probability
of making an error can be expressed in the form:
Pe = ((2L - 2)/L) P+
We have now to calculate the value of P+. In order to do that, we can move the origin of the
reference wherever we want: the area below the function will not change. We are in a situation
like the one shown in the following picture:
It does not matter which Gaussian function we are considering; the area we want to calculate will
be the same. The value we are looking for will be given by the following integral:
where erfc() is the complementary error function. Putting all these results together, the probability
of making an error is:
From this formula we can easily see that the probability of error decreases if the maximum
amplitude of the transmitted signal or the amplification of the system becomes greater; on the
other hand, it increases if the number of levels or the power of the noise becomes greater.
This relationship is valid when there is no intersymbol interference, i.e. g(t) is a Nyquist function.
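To make the relation between the error areas and the erfc() function concrete, here is a small illustrative sketch (not from the source; the decision threshold and noise level are arbitrary choices, and the symbol-error expression simply restates the counting of error areas above):

```python
import math, random

def tail_prob(threshold, sigma):
    """P+ = probability that zero-mean Gaussian noise exceeds `threshold`,
    computed with the complementary error function."""
    return 0.5 * math.erfc(threshold / (sigma * math.sqrt(2)))

def symbol_error_rate(L, threshold, sigma):
    """Total error probability for L equally spaced levels:
    Pe = ((2L - 2)/L) * P+  (inner levels err on two sides, outer on one)."""
    return (2 * L - 2) / L * tail_prob(threshold, sigma)

# Monte Carlo sanity check of the tail probability
random.seed(1)
sigma, thr = 1.0, 1.0
hits = sum(random.gauss(0, sigma) > thr for _ in range(200000)) / 200000
```

The simulated tail frequency should agree with the erfc() value to within statistical noise.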
UNIT III
SOURCE CODES, LINE CODES AND ERROR CONTROL
PART A
1. Define entropy?
In information theory, entropy is a measure of the uncertainty associated with a random
variable. The term by itself in this context usually refers to the Shannon entropy, which quantifies,
in the sense of an expected value, the information contained in a message, usually in units such as
bits.
A message source generates messages at the rate of r messages/second. The rate of
information R is defined as the average number of bits of information per second. If H is the
average number of bits of information per message, then
R = rH bits/sec
A source emits six messages with probabilities
P1 = 1/2, P2 = 1/4, P3 = 1/8, P4 = 1/16, P5 = 1/32, P6 = 1/32
Find the entropy of the system. Also find the rate of information if there are 16 outcomes/second.
Solution:
The entropy is
H = sum (k = 1 to 6) Pk log(1/Pk)
= (1/2) log 2 + (1/4) log 4 + (1/8) log 8 + (1/16) log 16 + (1/32) log 32 + (1/32) log 32
= 31/16 bits/message
The rate of information R is
R = rH = 16 x (31/16) = 31 bits/sec
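The worked entropy example above can be verified numerically; a minimal sketch:

```python
import math

def entropy(probs):
    """Shannon entropy H = sum of p * log2(1/p), in bits/message."""
    return sum(p * math.log2(1.0 / p) for p in probs)

P = [1/2, 1/4, 1/8, 1/16, 1/32, 1/32]
H = entropy(P)   # 31/16 = 1.9375 bits/message
R = 16 * H       # rate of information at r = 16 messages/sec
```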
C = max I(X;Y) = max[H(X) - H(X/Y)]
I(X;Y) is a measure of the average information per symbol transmitted in the system.
Efficiency = actual transinformation / maximum transinformation
= I(X;Y) / max I(X;Y) = I(X;Y)/C
Redundancy of the channel is defined as R = 1 - efficiency = (C - I(X;Y))/C
6. Define Coding?
7. Define NRZ?
NRZ: Symbol 1 is represented by transmitting a pulse of constant amplitude for the entire
duration of the bit interval, and symbol 0 is represented by no pulse, NRZ indicates that the
assigned amplitude level is maintained throughout the entire bit interval.
8. Define RZ : (URZ)
Symbol 1 is represented by a positive Pulse that return to zero before the end of the bit
interval and symbol 0 is represented by the absence of pulse.
9. Define AMI
In AMI (alternate mark inversion), positive and negative pulses are used alternately for
symbol 1, and no pulse is used for symbol 0. In either case the pulse returns to zero before the end
of the bit interval.
In block codes, each block of k message bits is encoded into a block of n bits (n > k), as shown
in figure. The check bits are derived from the message bits and are added to them. The n-bit
block of channel encoder output is called a codeword, and the codes in which the message bits
appear at the beginning of a codeword are called systematic codes.
PART B
1. Describe the various properties of channel capacity.
Channel Capacity:
The mutual information I (X,Y) indicates a measure of the average information per symbol
transmitted in the system. A suitable measure for efficiency of transmission of information may
be introduced by comparing the actual rate and the upper bound of the rate of information
transmission for a given channel. Shannon has introduced a significant concept of channel
capacity defined as the maximum of mutual information. Thus, the channel capacity C is given by
C = max I(X;Y)   ...(1)
The efficiency of transmission is
efficiency = actual transinformation / maximum transinformation
or
efficiency = I(X;Y) / max I(X;Y) = I(X;Y)/C   ...(2)
The redundancy of the channel is defined as
R = 1 - efficiency = (C - I(X;Y))/C   ...(3)
Noise free channel
I(X;Y) = H(X)
Symmetric Channel
A symmetric channel is defined as one for which (i) H(Y/xj) is independent of j, i.e., the
entropy corresponding to each row of [P(Y/X)] is the same, and (ii) the sum over j = 1 to m of
P(yk/xj) is independent of k.
It can be seen that a channel is symmetric if the rows and columns of the channel matrix D =
P(Y/X) are identical, except for permutations. If D is a square matrix, then for a symmetric
channel, the rows and columns are identical, except for permutations. The following examples
will make the concept of a symmetric channel clear:
(a) P(Y/X) = | 1/2  1/4  1/4 |
             | 1/4  1/2  1/4 |
             | 1/4  1/4  1/2 |
This is a symmetric channel, as the rows and columns are identical except for permutations:
each contains one 1/2 and two 1/4.
(b) P(Y/X) = | 1/3  1/6  1/3  1/6 |
             | 1/6  1/3  1/6  1/3 |
This is a symmetric channel, as each row contains two 1/3 and two 1/6, and each column
contains one 1/3 and one 1/6.
(c) P(Y/X) = | 1/3  1/6  1/6  1/3 |
             | 1/3  1/3  1/6  1/6 |
This is not a symmetric channel: although the rows are identical except for permutations, the
columns are not.
(d) P(Y/X) = | 0.4  0.6 |
             | 0.3  0.7 |
             | 0.6  0.4 |
             | 0.7  0.3 |
This is not a symmetric channel: although the columns are identical except for permutations,
the rows are not.
[Note that since the rows of the above matrices are complete probability schemes, the sum of each
row in each matrix is unity.]
I(X;Y) = H(Y) - H(Y/X)
= H(Y) - sum (j = 1 to m) H(Y/xj) p(xj)
= H(Y) - A sum (j = 1 to m) p(xj)
where A = H(Y/xj) is independent of j and hence is taken out of the summation sign. Also,
sum (j = 1 to m) p(xj) = 1,
so that I(X;Y) = H(Y) - A and
C = max I(X;Y) = max H(Y) - A = log n - A
The most important case of a symmetric channel is the Binary Symmetric Channel (BSC).
In this case m = n = 2, and the channel matrix is
D = P(Y/X) = | p      1 - p |  =  | p  q |
             | 1 - p  p     |     | q  p |
Example
For the BSC shown in figure (a), find the channel capacity for (i) p = 0.9; (ii) p = 0.6.
C = log n - A
= log 2 - H(Y/xj)
= log 2 + sum (k = 1 to 2) p(yk/xj) log p(yk/xj)
= log 2 + p log p + (1 - p) log(1 - p)
= 1 + p log p + q log q
= 1 - H(p) = 1 - H(q)
Figure: (a)
(i) For p = 0.9,
C = 1 + 0.9 log 0.9 + 0.1 log 0.1
= 0.531 bit/message
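The BSC capacity C = 1 - H(p) is easy to check numerically; a small sketch:

```python
import math

def binary_entropy(p):
    """H(p) = -p log2 p - (1 - p) log2 (1 - p)."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def bsc_capacity(p):
    """Capacity of a binary symmetric channel: C = 1 - H(p)."""
    return 1.0 - binary_entropy(p)
```

Note that C = 1 - H(p) = 1 - H(q): the capacity is the same for crossover probability p and 1 - p, and drops to zero at p = 0.5.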
Cascaded Channels
Sometimes channels are to be cascaded for some reasons. Let us consider the case of two
cascaded Binary Symmetric Channels as shown in figure. The analysis of these cascaded channels
is as follows:
Figure:
The message from x1 reaches z1 in two ways: x1 -> y1 -> z1 and x1 -> y2 -> z1. The respective
path probabilities are p.p and q.q. Hence,
p' = p^2 + q^2 = (p + q)^2 - 2pq = 1 - 2pq
Similarly, the message from x1 reaches z2 in two ways: x1 -> y1 -> z2 and x1 -> y2 -> z2. The
respective path probabilities are p.q and q.p. Hence,
q' = pq + pq = 2pq
P(Z/X) = | 1 - 2pq  2pq     |  =  | p'  q' |   ...(4.1)
         | 2pq      1 - 2pq |     | q'  p' |
Thus, the cascaded channel is equivalent to a single Binary Symmetric Channel with error
probability q' = 2pq. We know that the channel capacity of a BSC is given by
C = 1 - H(q)
For 0.5 > q > 0, 2pq is greater than q. Hence, the capacity of two cascaded BSCs is less than that
of a single BSC, as expected.
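The degradation caused by cascading can be illustrated with a short sketch (q = 0.1 is an arbitrary choice; `cascade_error` generalizes the 2pq result to any number of stages):

```python
import math

def binary_entropy(p):
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def cascade_error(q, stages=2):
    """Equivalent crossover probability of `stages` cascaded BSCs, each with
    crossover probability q. For two stages this reduces to 2pq."""
    p = 1.0 - q
    err = q
    for _ in range(stages - 1):
        # a bit ends up flipped if exactly one of the two events flips it
        err = err * p + (1 - err) * q
    return err

q = 0.1
q2 = cascade_error(q)        # 2*p*q = 0.18
c1 = 1 - binary_entropy(q)   # single BSC capacity
c2 = 1 - binary_entropy(q2)  # cascaded capacity, strictly smaller
```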
Binary Erasure Channel (BEC)
A Binary Erasure Channel (BEC) has two inputs (0, 1) and three outputs (0, y, 1) as shown in
figure. The BEC is also very important. Here, 0 and 1 are transmitted, and they are received as 0, y
and 1. The symbol y indicates that, due to noise, no deterministic decision can be made as to
whether the received symbol is a 0 or a 1. In other words, the symbol y indicates that the output is
erased. Hence the name Binary Erasure Channel. In practice, whenever the decision is in favour of
y, i.e., whenever a deterministic decision in favour of 0 or 1 is not possible, the receiver requests
the transmitter for re-transmission till the decision is taken either in favour of 0 or in favour of 1.
D = [P(Y/X)] = | p  q  0 |
               | 0  q  p |
Figure:
Now, since p(x1) = p(0) = a and p(x2) = p(1) = 1 - a,
H(X) = a log(1/a) + (1 - a) log(1/(1 - a))
The joint probability matrix P(X,Y) can be found by multiplying the rows of P(Y/X) by a and
(1 - a), respectively. Hence,
P(X,Y) = | ap  aq        0        |
         | 0   (1 - a)q  (1 - a)p |
p(y1) = ap,   p(y2) = aq + (1 - a)q = q,   p(y3) = (1 - a)p
The conditional probability matrix P(X/Y) can be found by dividing the columns of P(X,Y) by
p(y1), p(y2), and p(y3), respectively. Thus,
P(X/Y) = | ap/ap  aq/q        0                    |  =  | 1  a      0 |
         | 0      (1 - a)q/q  (1 - a)p/((1 - a)p)  |     | 0  1 - a  1 |
Now,
H(X/Y) = - sum (j = 1 to 2) sum (k = 1 to 3) p(xj, yk) log p(xj/yk) = (1 - p) H(X)
I(X;Y) = H(X) - H(X/Y)
= H(X) - (1 - p) H(X)
= p H(X)
and
C = max I(X;Y) = max[p H(X)] = p max H(X) = p, since max[H(X)] = 1
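The result C = p for the BEC can be checked by sweeping the input distribution; a small sketch (p = 0.8 is an arbitrary choice):

```python
import math

def bec_mutual_info(alpha, p):
    """I(X;Y) for a BEC with P(0) = alpha and non-erasure probability p.
    I = p * H(alpha), so the maximum over alpha (the capacity) equals p."""
    if alpha in (0.0, 1.0):
        return 0.0
    h = -alpha * math.log2(alpha) - (1 - alpha) * math.log2(1 - alpha)
    return p * h

p = 0.8
# search over input distributions: the maximum is at alpha = 0.5 and equals p
best = max(bec_mutual_info(a / 100, p) for a in range(101))
```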
Repetition of Signals
The transmitted signal is repeated at the channel input to increase the reliability of transmission,
as shown in figure. In this case, each symbol is sent twice: the acceptable output signals are
y1 = 00 and y2 = 11. The outputs y3 = 01 and y4 = 10 are discarded (erased) and a request for
re-transmission is made, as in the BEC.
Figure:
Therefore,
              y1    y2    y3   y4
P(Y/X) = x1 | p^2   q^2   pq   pq |
         x2 | q^2   p^2   pq   pq |
Hence, with p(x1) = p(x2) = 1/2,
p(y1) = p(y2) = (p^2 + q^2)/2
and
p(y3) = p(y4) = pq
Therefore,
H(Y) = p(y1) log(1/p(y1)) + p(y2) log(1/p(y2)) + p(y3) log(1/p(y3)) + p(y4) log(1/p(y4))
= (p^2 + q^2) log(2/(p^2 + q^2)) + 2pq log(1/pq)   ...(6.1)
H(Y/X) = p^2 log(1/p^2) + q^2 log(1/q^2) + pq log(1/pq) + pq log(1/pq)   ...(6.2)
I(X;Y) = H(Y) - H(Y/X)
= (p^2 + q^2) log(2/(p^2 + q^2)) + 2pq log(1/pq)
  - [p^2 log(1/p^2) + q^2 log(1/q^2) + 2pq log(1/pq)]
= (p^2 + q^2) log(2/(p^2 + q^2)) + p^2 log p^2 + q^2 log q^2
= (p^2 + q^2)[1 + log(1/(p^2 + q^2))] + p^2 log p^2 + q^2 log q^2
= (p^2 + q^2)[1 + (p^2/(p^2 + q^2)) log(1/(p^2 + q^2)) + (q^2/(p^2 + q^2)) log(1/(p^2 + q^2))
  + (p^2/(p^2 + q^2)) log p^2 + (q^2/(p^2 + q^2)) log q^2]
= (p^2 + q^2)[1 + (p^2/(p^2 + q^2)) log(p^2/(p^2 + q^2)) + (q^2/(p^2 + q^2)) log(q^2/(p^2 + q^2))]
= (p^2 + q^2)[1 - H(q^2/(p^2 + q^2))]   ...(6.3)
Thus, the channel is now equivalent to a BSC with error probability q' = q^2/(p^2 + q^2). Since
q' < q, the mutual information I(X;Y) is greater than the original value 1 - H(q) (i.e., the value
where there is no repetition of signals).
Binary Channel
Although it is easy to analyze a BSC, in practice we come across binary channels with
non-symmetric structures. A binary channel is shown in figure. The channel matrix is
Figure:
D = P(Y/X) = | P11  P12 |
             | P21  P22 |
To find the channel capacity of a binary channel, the auxiliary variables Q1, Q2 are defined by
[P][Q] = [H]
(1) No sequence of the employed binary numbers Ck can be obtained from any other by adding
more binary digits to the shorter sequence (prefix property); (2) the code is efficient, i.e., 1 and 0
appear independently and with almost equal probabilities.
The actual procedure of the Shannon-Fano coding is as follows:
Example:
[X] = [x1 x2 x3 x4 x5 x6 x7 x8]
[P] = [1/4 1/8 1/16 1/16 1/16 1/4 1/16 1/8]
Take M = 2.
Solution
L = sum (k = 1 to 8) pk nk
= (1/4)(2) + (1/8)(3) + (1/16)(4) + (1/16)(4) + (1/16)(4) + (1/4)(2) + (1/16)(4) + (1/8)(3)
or
L = 2.75 letters/message
H(X) = - sum (k = 1 to 8) pk log pk
= (1/4) log 4 + (1/8) log 8 + (1/16) log 16 + (1/16) log 16 + (1/16) log 16 + (1/4) log 4
  + (1/16) log 16 + (1/8) log 8
= 2.75 bits/message
HUFFMAN CODING
The Huffman coding method leads to the lowest possible value of L for a given M, resulting
in a maximum efficiency. Hence, it is also known as the minimum redundancy code, or
optimum code. The procedure is as follows:
(1) The N messages are arranged in order of non-increasing probability.
(2) The probabilities of the [N - K(M - 1)] least likely messages are combined, where K is the
highest integer that gives a positive value to the bracket, and the resulting [K(M - 1) + 1]
probabilities are re-arranged in a non-increasing manner. This step is called reduction. The
reduction procedure is repeated as often as necessary, by taking M terms every time, until there
remain M ordered probabilities. It may be noted that by combining [N - K(M - 1)], and not M,
terms in the first reduction, it is ensured that there will be exactly M terms in the last reduction.
(3) Encoding begins with the last reduction, which consists of exactly M ordered probabilities .
The first element of the encoding alphabet is assigned as the first digit in the codeword for all
source messages associated with the first probability of the last reduction. Similarly, the second
element of the encoding alphabet is assigned as the second digit in the codewords for all source
messages associated with the second probability of last reduction and so on.
The same procedure is repeated for the second from last reduction, to the first reduction, in that
order.
Example
As per step (2) of the procedure, the last two terms should be combined in the first reduction.
Code         Length
C1 = 0       1
C2 = 111     3
C3 = 101     3
C4 = 1101    4
C5 = 1100    4
C6 = 1001    4
C7 = 1000    4
L = sum (k = 1 to 7) pk nk
= (0.4 x 1) + (0.2 x 3) + (0.12 x 3) + (0.08 x 4) + (0.08 x 4) + (0.08 x 4) + (0.04 x 4)
= 2.48 letters/message
This is the same as the value obtained in the second answer of the example which gives the
maximum efficiency.
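The Huffman example can be reproduced with a priority queue. A sketch (binary case, M = 2), using the probability set implied by the average-length computation above:

```python
import heapq, itertools

def huffman_lengths(probs):
    """Binary Huffman coding (M = 2): return the codeword length of each symbol."""
    counter = itertools.count()          # tie-breaker for equal probabilities
    heap = [(p, next(counter), [i]) for i, p in enumerate(probs)]
    heapq.heapify(heap)
    lengths = [0] * len(probs)
    while len(heap) > 1:
        p1, _, s1 = heapq.heappop(heap)  # two least likely groups
        p2, _, s2 = heapq.heappop(heap)
        for i in s1 + s2:                # every merge adds one digit
            lengths[i] += 1
        heapq.heappush(heap, (p1 + p2, next(counter), s1 + s2))
    return lengths

probs = [0.4, 0.2, 0.12, 0.08, 0.08, 0.08, 0.04]
lengths = huffman_lengths(probs)
L_avg = sum(p * n for p, n in zip(probs, lengths))
```

Any valid Huffman tree for these probabilities achieves the same minimum average length, 2.48 letters/message.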
1. Codes should be separable or uniquely decodable. They should also be comma-free, i.e., no
synchronizing signal should be required to recognize the words. This restricts the selection of
codes in such a way that no shorter code can be a prefix of a longer code. Such codes are also
called instantaneous codes.
2. The average length should satisfy L >= sum pi log(1/pi) / log D = H(X)/log D.
3. The code efficiency is defined as
efficiency = H(X)/(L log D)
and the efficiency approaches 1 for the optimum (also called compact) codes.
In block codes (also known as arithmetic codes, or group codes), each block of k message
bits is encoded into a block of n bits (n > k), as shown in figure. The check bits are derived from
the message bits and are added to them. The n-bit block of a channel encoder output is called a
codeword, and codes (or coding schemes) in which the message bits appear at the beginning of a
codeword are called systematic codes.
The simplest possible block code is the one in which the number of check bits is one. These are
known as parity check codes. When the check bit is such that the total number of 1s in the
codeword is even, it is an even parity check code, and when the check bit is such that the total
number of 1s in the codeword is odd, it is an odd parity check code.
In this section certain important concepts, such as the weight of a code, Hamming distance,
etc., are introduced. The weight of a codeword is defined as the number of non-zero components
in it. For example, the weight of the codeword 1010 is 2.
The Hamming distance between two codewords is defined as the number of components in
which they differ.
For example, let
U = 1010
V = 0111
W = 1001
Then D(U,V) = 3
Similarly, D(U,W) = 2
and D(V,W) = 3
D(U,V) = sum (k = 1 to n) (ak + bk)   ...(1)
where
U = a1 a2 a3 ... an
V = b1 b2 b3 ... bn
(the a's and b's are binary digits 0 or 1)
The addition in (1) is modulo-2 addition, for which the rules are
0 + 0 = 0
0 + 1 = 1
1 + 0 = 1
1 + 1 = 0
Then, for U = 1010 and V = 0111, Eq. (1) gives
D(U,V) = (1 + 0) + (0 + 1) + (1 + 1) + (0 + 1) = 1 + 1 + 0 + 1 = 3
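The Hamming distance of Eq. (1) translates directly into code; a small check of the U, V, W example:

```python
def hamming_distance(u, v):
    """D(U,V): number of positions where the binary words differ.
    Each differing position contributes a 1 under modulo-2 addition."""
    assert len(u) == len(v)
    return sum(a != b for a, b in zip(u, v))

U, V, W = "1010", "0111", "1001"
```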
The minimum distance of a block code is defined as the smallest distance between any pair
of codewords in the code.
Now, let us consider a block code with a minimum distance of two. If a single error occurs, a
word will be erroneously received as a meaningless word, i.e., a word that does not exist in the
codebook. Thus, in such a set-up any single error can be detected, but it cannot be corrected. This
will be clear from the following example:
Let us consider a block code of two digits with a minimum distance of two. Two codebooks
are possible: 00, 11 and 01, 10. Let our codebook be 01, 10. Now, with a single error, 01 may
be received either as 00 or as 11. Let us suppose that it is received as 00. Since 00 is not in our
codebook, an error has been detected. But a decision cannot be taken as to whether 01 or 10 was
transmitted, as both are at equal distance from 00. Hence, the error cannot be corrected. If we
have a codebook of minimum distance three, a single error can be corrected, as the distance of
the erroneous word is 1 from only one codeword, and more than 1 from all other codewords. For
example, if 000, 111 is our codebook, and if 001 is received, a decision can be taken that 000 was
transmitted, since the distance between 000 and 001 is one, whereas the distance between 111 and
001 is two.
(i) (n - 1)/2 errors can be corrected if n is odd
(ii) (n - 2)/2 errors can be corrected and n/2 errors can be detected if n is even
It is interesting to find the number of maximum possible codewords of length n and a
minimum distance d. This is given by B(n,d), and Hamming obtained the following values:
B(n,1) = 2^n
B(n,2) = 2^(n-1)
B(n,3) = 2^m <= 2^n/(n + 1)
B(n,4) = 2^m <= 2^(n-1)/n
B(n,2k) = B(n - 1, 2k - 1)
B(n,2k+1) = 2^m <= 2^n / [1 + nC1 + nC2 + ... + nCk]
The equality in B(n,3) is valid when 2^n/(n + 1) is an integer. Such codes are referred to as
close-packed codes. These are obtained by selecting n = 2^k - 1, where k is a positive integer. For
example, k = 2, 3, 4 gives n = 3, 7, 15, resulting in the close-packed codes B(3,3), B(7,3), B(15,3)
respectively.
B(3,3) = 2^3/(3 + 1) = 2
B(7,3) = 2^7/(7 + 1) = 16
B(15,3) = 2^15/(15 + 1) = 2048
When n is not of the form 2^k - 1, the number of maximum possible codewords is found from
B(n,3) < 2^n/(n + 1). Thus, for n = 5,
B(5,3) = 2^m <= 2^5/(5 + 1)
or
2^m <= 5.33, giving m = 2
Hence, B(5,3) = 2^2 = 4
For n = 6, B(6,3) = 2^m <= 2^6/(6 + 1), or 2^m <= 9.14. Thus m = 3.
Hence, B(6,3) = 2^3 = 8
There are two steps in the encoding procedure for linear block codes: (1) The information
sequence is segmented into message blocks of k successive information bits. (2) Each message
block is transformed into a larger block of n bits by an encoder according to some pre-determined
set of rules. The n-k additional bits are generated from linear combinations of the message bits.
The encoding operations can be described with the help of matrices. Let a message block be a row
vector
D = [d1 d2 ... dk]
where each message bit can be a 0 or a 1. Thus, we have 2^k distinct message blocks. Each
message block is transformed into a codeword C of length n bits,
C = [C1 C2 ... Cn]
by the encoder, and there are 2^k distinct codewords. It may be noted that there is one unique
codeword for each distinct message block. This set of 2^k codewords, also known as code-vectors,
is called an (n,k) block code.
The rate efficiency of this code is k/n.
In a systematic linear block code the first k bits of the codeword are the message bits, i.e.,
Ci = di,  i = 1, 2, ..., k   ...(1)
The last n - k bits in the codeword are check bits generated from the k message bits according to
some predetermined rule:
C(k+1) = p11 d1 + p21 d2 + ... + pk,1 dk
C(k+2) = p12 d1 + p22 d2 + ... + pk,2 dk
.
.
.
Cn = p1,n-k d1 + p2,n-k d2 + ... + pk,n-k dk   ...(2)
The coefficients pi,j in equation (2) are 0s and 1s, so that the Ck are 0s and 1s. The additions in
equation (2) are modulo-2 additions. Equations (1) and (2) can be combined to give a matrix
equation
C = D G
where G is the k x n matrix on the RHS of the equation. It is called the generator matrix of the
code and is used in the encoding operation. It has the form
G = [Ik  P]
where Ik is the identity matrix of order k and P is an arbitrary k x (n - k) matrix. The matrix P
completely defines the (n,k) block code. The selection of a P matrix is an important step in the
design of an (n,k) block code, because the code generated by G then achieves certain desirable
properties, such as ease of implementation, ability to correct errors, high rate efficiency, etc.
We know that when a single error occurs, say in the ith bit of the codeword, the syndrome
of the received vector is equal to the ith row of HT. Hence, if the n rows of the n x (n - k) matrix
HT are chosen to be distinct, then the syndromes of all single errors will be distinct, and we can
correct the single errors. Once HT is chosen, the generator matrix G can be obtained by using the
equations.
Each row in HT has (n - k) entries. Each one of these entries could be a 0 or a 1. Hence, we
can have 2^(n-k) distinct rows of (n - k) entries, from which 2^(n-k) - 1 distinct rows of HT can be
selected. (It is to be kept in mind that the row with all 0s cannot be selected.) Since the matrix HT
has n rows, the condition for all of them to be distinct is
2^(n-k) - 1 >= n
or
(n - k) >= log2 (n + 1)
or
n >= k + log2 (n + 1)
Thus the minimum size n for the codeword can be determined. (Note that n has to be an integer.)
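The condition 2^(n-k) - 1 >= n can be solved for the minimum codeword size n by direct search; a minimal sketch:

```python
def min_codeword_length(k):
    """Smallest n satisfying 2**(n - k) - 1 >= n, i.e. enough distinct
    non-zero syndrome rows for single-error correction of k message bits."""
    n = k + 1
    while 2 ** (n - k) - 1 < n:
        n += 1
    return n
```

For k = 4 this gives n = 7, the familiar (7,4) Hamming code.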
Cyclic Codes
Cyclic codes form a subclass of linear block codes. They are important for two reasons:
First, encoding and syndrome calculations can be easily implemented by using simple shift
registers with feedback connections. Second, the mathematical structure of these codes is that it is
possible to design codes having useful error correcting properties.
V = (v0, v1, ..., v(n-1))   ...(1)
is a code-vector, then
V(1) = (v(n-1), v0, v1, ..., v(n-2))   ...(2)
which is obtained by shifting V cyclically one place to the right, is also a code-vector of C. From
the above definition it is clear that the code-vector V can be represented by the code polynomial
V(x) = v0 + v1 x + v2 x^2 + ... + v(n-1) x^(n-1)   ...(3)
The coefficients of the polynomial are 0s and 1s, and they belong to a binary field which satisfies
the following rules of addition and multiplication.
0 + 0 = 0    0 . 0 = 0
0 + 1 = 1    0 . 1 = 0
1 + 0 = 1    1 . 0 = 0
1 + 1 = 0    1 . 1 = 1
Now, we will state a theorem (without giving its proof) which is very useful for a cyclic code
generation.
Theorem
If g(x) is a polynomial of degree (n - k) and is a factor of x^n + 1, then g(x) generates an (n,k)
cyclic code in which the code polynomial V(x) for a data vector D = (d0, d1, d2, ..., d(k-1)) is
generated by
V(x) = D(x) g(x)   ...(4)
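The theorem V(x) = D(x)g(x) can be illustrated with binary polynomial multiplication. A sketch; the particular generator g(x) = 1 + x + x^3 (a factor of x^7 + 1, giving a (7,4) cyclic code) is our own choice of example, not from the source:

```python
def polymul_gf2(a, b):
    """Multiply two binary polynomials (lowest-degree coefficient first),
    with coefficient arithmetic modulo 2."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] ^= ai & bj  # modulo-2 accumulation
    return out

# (7,4) cyclic code with g(x) = 1 + x + x^3
g = [1, 1, 0, 1]
codeword = polymul_gf2([1, 0, 0, 0], g)  # D(x) = 1 gives V(x) = g(x)
```

Multiplying g(x) by h(x) = 1 + x + x^2 + x^4 reproduces x^7 + 1, confirming that g(x) is indeed a factor.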
5. Explain convolutional codes.
Convolutional Codes
Storage (memory) devices, such as flip-flops, connected in cascade form a shift register.
Each flip-flop is capable of storing one bit. A four-bit shift register is shown in figure (a). M1, M2,
M3 and M4 are memory devices. A stream of binary data is applied to M1 in MSB (Most
Significant Bit) first fashion. S1, S2, S3 and S4 are outputs taken from M1, M2, M3 and M4
respectively. M1 stores the most recent bit of the input data stream and indicates its state on the
output line S1. Therefore, the output S1 is the same as the MSB of the input data stream. After one
bit interval, the bit stored in M1 shifts one stage to the right, i.e., to M2. Thus, the output S2 of M2
is the same as S1, i.e., the input bit stream with a one bit interval delay. In this way, the input bit
stream appears at every output line with an increased delay.
It is assumed that, initially, the shift register is clear. The operation of the encoder is
explained for the input data stream of a four bit sequence
M= 1101
This is entered in the shift register from the MSB. Thus, at the first bit interval, S1 = 1, S2 = 0,
S3 = 0. Now, v1, v2 and v3 can be found from the equation. Thus
v1 = 1 + 0 = 1,  v2 = 0 + 0 = 0,  v3 = 1 + 0 = 1
In the same manner, outputs at other bit intervals can be found out. Since L = 4 and k = 3,
the register resets at the seventh (L + k = 4 + 3 = 7) bit interval. The output at each bit interval
consists of v bits (in this case v = 3). Thus, for each message, there are v(L + k) bits in the output
codeword. Notice that each message bit remains in the shift register for k bit intervals. Hence,
each input bit has an influence on the k groups of v bits; i.e., on vk output bits.
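The shift-register encoder described above can be sketched as follows. The actual tap connections of the encoder in the figure are not recoverable from the text, so the tap masks below are hypothetical; the sketch only illustrates the v(L + k) output structure:

```python
def conv_encode(message, taps, k=3):
    """Sketch of a convolutional encoder built on a k-stage shift register:
    at each bit interval, v output bits are formed as modulo-2 sums of the
    register stages selected by each tap mask (hypothetical connections)."""
    register = [0] * k
    out = []
    for bit in message + [0] * k:     # k flushing zeros reset the register
        register = [bit] + register[:-1]
        for mask in taps:
            s = 0
            for t, r in zip(mask, register):
                s ^= t & r            # modulo-2 sum of the tapped stages
            out.append(s)
    return out

# v = 3 output streams; these tap connections are illustrative only
taps = [[1, 0, 1], [1, 1, 1], [1, 1, 0]]
code = conv_encode([1, 1, 0, 1], taps)  # v*(L + k) = 3*(4 + 3) = 21 bits
```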
Table gives the coded output bit stream for all input data streams for the encoder shown in
figure. The MSB column of the input data stream is such that it is divided into two subsets (eight
0s and eight 1s), resulting in two subsets of the first code block of three bits in the coded output bit
stream (eight 000 and eight 101). Each of these two subsets of the MSB column is further divided
into two subsets (four 0s and four 1s) in the second MSB column, resulting in two subsets of the
second code block of three bits in the coded output bit stream (four 000, four 101 and four 110,
four 011). In the same way, each subset is further divided into two subsets, till there is only one
code block of three bits in each subset. Thus, it is possible to construct a code tree shown in figure,
from the table , if the input data stream is entered from the MSB in the convolution code encoder.
On the other hand, it is not possible to construct such code tree if the input data stream is entered
from the LSB , as successive division in two subsets is not possible if we start from the LSB
column. Hence, in the convolutional encoder the input data stream is entered from the MSB and
not from the LSB.
The code Tree: Figure: shows the code tree for the encoder of Figure. This is derived from Table .
The starting point on the code tree is at the extreme left and corresponds to the situation before
the arrival of the first message bit. The first message bit may be either a 0, or a 1. When an input
bit is 0, the upward path is taken, and when it is 1, the downward path is taken. The same rule is
followed at each junction or node. The path through the tree shown by the dashed line is for input
message 1101. The code for the input message 1101 can be found by reading the bits encountered
from the entrance to the exit of the tree along the dashed path. Thus, the desired code is 101 011 101
110 011 000, the same as in the table. Codes for other messages can be found with the help of the
appropriate path on the code tree. Note that any path through the tree corresponds to a possible
codeword.
Decoding in the presence of Noise: Exhaustive Search Method. In the absence of noise, the
codeword will be received as transmitted. Hence, it is easy to reconstruct the original message.
But due to noise, the word that is received is not the one transmitted. Decoding in the presence of
noise is done in the following manner (the procedure is explained for k = 3, L = 4, v = 3).
The first message bit has an effect on the first kv = 9 bits. From the code tree of figure, it is
clear that there are eight possible combinations of the first nine digits which are acceptable
codewords. All these combinations are compared with the first nine bits of the received word,
and the path corresponding to the combination giving the minimum discrepancy is accepted as
the correct path.
If the path goes upwards at the first node A, then the first message bit is taken as 0, and if
the path goes downwards, then the first message bit is taken as 1. Say the path is downwards (as
shown by A B in figure). Thus, it is concluded that the first message bit is 1. Now, we are at node
B. The second message bit will have an effect on the next nine bits, for which, again, there are
eight possible ways. Using the same procedure, the direction of the path at node B, and hence the
second message bit, is decided. In the same way, all the message bits are decided and the received
word is decoded.
It may be seen that the probability of error decreases exponentially with k. Hence k should
be made as large as possible. But, on the other hand, the decoding of each bit requires an
examination of the 2^k branch sections (in our case 2^k = 8 as k = 3) of the code tree. Hence, with
a large k, the decoding procedure becomes lengthy. Another method, known as sequential
decoding, is manageable even for a large k.
Sequential Decoding
The main advantage of sequential decoding is that it avoids the lengthy process of
examining every branch of the 2^k possible branches of the code tree while decoding a single
message bit. In this method, at the arrival of a v-bit code block, the decoder compares these bits
with the code blocks of the branches diverging from the starting node. The decoder follows the
branch whose code block gives fewer discrepancies with the received code block. The same
procedure is repeated at each node.
Figure 2 (a)
UNIT IV
PART A
Only one user transmits at any time, and that user can use the entire available bandwidth, so
that the instantaneous data rate is proportional to the available bandwidth.
Many users simultaneously transmit orthogonally coded spread spectrum signals that
occupy the same frequency band.
Multiple access is the ability of a large number of earth stations to simultaneously share a
Satellite for different services.
PART B
TDMA
Terminology: the time axis is divided into frames (a fixed duration of time, each containing a
number of time slots); each user gets a time slot in the next frame (if he still needs it).
Figure:
Different ground terminals have different time slots (the same slot in each frame).
They buffer data, compress it, and transmit it in a burst (the transmission rate is higher than the
data rate).
A terminal transmits a frame of data in a slot time.
Data is timed to arrive at the satellite in the proper time slot.
The satellite transponder receives it and retransmits it on the downlink.
There are M time slots per frame, each preassigned to some ground station (long term). Each
time slot has a preamble and then data.
The preamble contains sync, addressing and ECC sequences.
Variations:
Fixed assignemnnt great if source requirements are perdictable and you can keep all time
slots mostly filled (e.g. N TV channels)
Combined FDMA/TDMA
These can be the same N for ech frequency band or they can be a different N users;
One ouser can (does) have different time slots in different frequency bands (so he doesnt have to
transmit everything in T/N of time every T of time)
With more (MN) division of CR, better efficiency if a few users/sources are sporadic More flexible
Allow each user (permanently/long term) a frequency band; use guard bands
Heterodyne mix each user to his frequency band (modulate)
Simple 3-voice-channel example of heterodyning (only keeps lower sidebands)
Geostationary orbit: its orbit matches the earth's rotation, so the satellites appear stationary with
respect to earth locations. Pretty complete (non-shaded) coverage with only 3 (not 24) satellites.
FDMA
Telephone has used FDMA since the early 1900s
1 speaker = 3.1 kHz; sample at 8 kHz; each user gets 4 kHz for calculations
Group: mux 12 users (FDM) into a group
Supergroup: mux 5 groups (60 users)
Figure: supergroup of 5 groups (1 2 3 4 5)
Send these over cables: 240 kHz per supergroup (intercity uses)
Each group has a different destination (regionally) within the US
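The FDM hierarchy above is simple arithmetic: 12 voice channels of 4 kHz each form a group, and 5 groups form a supergroup, which matches the 240 kHz per supergroup quoted for intercity cables.

```python
# FDM hierarchy: voice channel -> group -> supergroup.
channel_bw_hz = 4_000                   # one voice channel slot
group_bw_hz = 12 * channel_bw_hz        # 12 users per group
supergroup_bw_hz = 5 * group_bw_hz      # 5 groups = 60 users

print(group_bw_hz, supergroup_bw_hz)    # → 48000 240000
```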
FDMA didn't need sophisticated timing of slots and the sync mess (as needed in TDMA)
- With present clocks and VLSI, TDMA is in
CDMA:
With IS-95, each mobile user within a given cell and mobile subscribers in adjacent cells
use the same radio frequency channels. In essence, frequency reuse is available in all cells. This
is made possible because IS-95 specifies a direct-sequence, spread-spectrum CDMA system
and does not follow the channelization principles of traditional cellular radio communications
systems. Rather than dividing the allocated frequency spectrum into narrow-bandwidth channels,
one for each user, information is transmitted (spread) over a very wide frequency spectrum, with
as many as 20 mobile subscriber units simultaneously using the same carrier frequency within the
same frequency band. Interference is incorporated into the system so that there is no hard limit to the
number of subscribers that CDMA can support. As more mobile subscribers are added to the
system, there is a graceful degradation of communications quality.
With CDMA, unlike other cellular telephone standards, subscriber data rates change in real time,
depending on the voice activity and the requirements of the network and other users of the network.
IS-95 also specifies a different modulation and spreading technique for the forward and reverse
channels. On the forward channel, the base station simultaneously transmits user data for all
current mobile units in that cell by using a different spreading sequence (code) for each user's
transmissions. A pilot code is transmitted with the user data at a higher power level, thus
allowing all mobile units to use coherent detection. On the reverse link, all mobile units respond
in an asynchronous manner (i.e., no time or duration limitations) with a constant signal level
controlled by the base station.
The speech coder used with IS-95 is the Qualcomm 9600 bps Code Excited Linear
Predictive (QCELP) coder. The vocoder converts an 8 kbps compressed data stream to a 9.6
kbps data stream. The vocoder's original design detects voice activity and automatically reduces
the data rate to 1200 bps during silent periods. Intermediate mobile user data rates of 2400 bps
and 4800 bps are also used for special purposes. In 1995, Qualcomm introduced a 14,400 bps
vocoder that transmits 13.4 kbps of compressed digital voice information.
CDMA reduces the importance of frequency planning within a given cellular market. The
AMPS U.S. cellular telephone system is allocated a 50 MHz frequency spectrum (25 MHz for
each direction of propagation), and each service provider (system A and system B) is assigned
half the available spectrum (12.5 MHz). AMPS common carriers must provide a 270 kHz guard
band (approximately nine AMPS channels) on either side of the CDMA frequency spectrum. To
facilitate a graceful transition from AMPS to CDMA, each IS-95 channel is allocated a 1.25 MHz
frequency spectrum for each one-way CDMA communications channel. This equates to 10% of
the total available frequency spectrum of each U.S. cellular telephone provider. CDMA channels
can coexist within the AMPS frequency spectrum by having a wireless operator clear a 1.25 MHz
band of frequencies to accommodate transmissions on the CDMA channel. A single CDMA radio
channel takes up the same bandwidth as approximately 42 of the 30 kHz AMPS voice channels.
However, because of the frequency reuse advantage of CDMA, CDMA offers approximately a 10-
to-1 channel advantage over standard analog AMPS and a 3-to-1 advantage over USDC
digital AMPS.
For reverse (uplink) operation, IS-95 specifies the 824 MHz to 849 MHz band, and for
forward (downlink) channels the 869 MHz to 894 MHz band. CDMA cellular systems also use a modified
frequency allocation plan in the 1900 MHz band. As with AMPS, the transmit and receive
carrier frequencies used by CDMA are separated by 45 MHz. Figure shows the frequency spacing
for two adjacent CDMA channels in the AMPS frequency band. As the figure shows, each CDMA
channel is 1.23 MHz wide with a 1.25 MHz frequency separation between adjacent carriers,
producing a 20 kHz guard band between CDMA channels. Guard bands are necessary to
ensure that the CDMA carriers do not interfere with one another. Figure shows the CDMA
channel locations within the AMPS frequency spectrum. The lowest CDMA carrier frequency in
the A band is at AMPS channel 283, and the lowest CDMA carrier frequency in the B band is at
AMPS channel 384. Because the band available between channels 667 and 716 is only 1.5 MHz in the A
band, A band operators have to acquire permission from B band carriers to use a CDMA carrier in
that portion of the frequency spectrum. When a CDMA carrier is being used next to a non-
CDMA carrier, the carrier spacing must be 1.77 MHz. There are as many as nine CDMA carriers
available for the A and B band operators. Operators in the 1900 MHz frequency band have 30 MHz
of bandwidth, where they can facilitate up to 11 CDMA channels.
With CDMA, many users can share common transmit and receive channels with a
transmission data rate of 9.6 kbps. Using several techniques, however, subscriber information is
spread by a factor of 128 to a channel chip rate of 1.2288 Mchips/s, and transmit and receive
channels use different spreading processes.
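The rate figures above are self-consistent: spreading 9.6 kbps subscriber data by a factor of 128 yields exactly the quoted 1.2288 Mchips/s channel chip rate.

```python
# IS-95 spreading arithmetic: 9.6 kbps x 128 = 1.2288 Mchips/s.
data_rate_bps = 9_600
spreading_factor = 128

chip_rate = data_rate_bps * spreading_factor
print(chip_rate)  # → 1228800 chips/s, i.e. 1.2288 Mchips/s
```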
In the uplink channel, subscriber data are encoded using a rate 1/2 convolutional code,
interleaved, and spread by one of 64 orthogonal spreading sequences using Walsh functions.
Orthogonality among all uplink cellular channel subscribers within a given cell is maintained
because all the cell signals are scrambled synchronously.
Downlink channels use a different spreading strategy, since each mobile unit's received
signal takes a different transmission path and, therefore, arrives at the base station at a different
time. Downlink channel data streams are first convolutionally encoded with a rate 1/3 convolutional
code. After interleaving, each block of six encoded symbols is mapped to one of the 64 available
orthogonal Walsh functions, ensuring 64-ary orthogonal signaling. An additional fourfold
spreading is performed by subscriber-specific and base-station-specific codes having periods
of 2^14 chips and 2^15 chips, respectively, increasing the transmission rate to 1.2288 Mchips/s.
Stringent requirements are enforced on the downlink channel's transmit power to avoid the near-
far problem caused by varied receive power levels.
Each mobile unit in a given cell is assigned a unique spreading sequence, which ensures
near-perfect separation among the signals from different subscriber units and allows
transmission differentiation between users. All signals in a particular cell are scrambled using a
pseudorandom sequence of length 2^15 chips. This reduces radio frequency interference between
mobiles in neighboring cells that may be using the same spreading sequence and provides the
desired wideband spectral characteristics, even though not all Walsh codes yield a wideband
power spectrum.
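The 64-ary orthogonal Walsh signaling mentioned above can be illustrated briefly: Walsh functions can be generated recursively as the rows of a Hadamard matrix, and any two distinct rows are orthogonal (their dot product is zero when entries are mapped to +1/-1). This is a sketch of the orthogonality property only, not of the IS-95 channelizer.

```python
# Generate 2^n Walsh functions of length 2^n as Hadamard matrix rows.
def hadamard(n):
    """Return the 2^n x 2^n Hadamard matrix as lists of +1/-1 entries."""
    h = [[1]]
    for _ in range(n):
        h = [row + row for row in h] + [row + [-x for x in row] for row in h]
    return h

w = hadamard(6)  # 64 Walsh functions of length 64, as used in IS-95

# Distinct rows are orthogonal: dot product is zero.
dot = sum(a * b for a, b in zip(w[3], w[17]))
print(len(w), len(w[0]), dot)  # → 64 64 0
```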
Two commonly used techniques for spreading the spectrum are frequency hopping and
direct sequencing. Both of these techniques are characteristic of transmission over a bandwidth
much wider than that normally used in narrowband FDMA / TDMA cellular telephone systems,
such as AMPS and USDC. For a more detailed description of frequency hopping and direct
sequencing, refer to chapter.
Frequency-hopping spread spectrum was first used by the military to ensure reliable
antijam and secure communications in a battlefield environment. The fundamental concept of
frequency hopping is to break a message into fixed-size blocks of data, with each block
transmitted in sequence, except on a different carrier frequency. With frequency hopping, a
pseudorandom code is used to generate a unique frequency-hopping sequence. The sequence in
which the frequencies are selected must be known by both the transmitter and the receiver prior to
the beginning of the transmission. The transmitter sends one block on a radio frequency carrier
and then switches (hops) to the next frequency in the sequence, and so on. After reception of a
block of data on one frequency, the receiver switches to the next frequency in the sequence. Each
transmitter in the system has a different hopping sequence to prevent one subscriber from
interfering with transmissions from other subscribers using the same radio channel frequency.
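The shared-sequence idea above can be sketched as follows: transmitter and receiver seed identical pseudorandom generators, so both derive the same hop sequence without ever transmitting it. The seed and channel count are illustrative, not from any standard.

```python
import random

# Both ends derive the same hop sequence from a shared secret seed.
def hop_sequence(seed, n_channels, n_hops):
    """Generate a pseudorandom carrier-index sequence from a shared seed."""
    rng = random.Random(seed)
    return [rng.randrange(n_channels) for _ in range(n_hops)]

tx_hops = hop_sequence(seed=42, n_channels=79, n_hops=8)
rx_hops = hop_sequence(seed=42, n_channels=79, n_hops=8)
print(tx_hops == rx_hops)  # → True: both ends agree on the sequence
```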
Direct sequence spread spectrum:
In direct-sequence systems, a high bit rate pseudorandom code is added to a low bit
rate information signal to generate a high bit rate pseudorandom signal, closely resembling
noise, that contains both the original data signal and the pseudorandom code. Again, before
successful transmission, the pseudorandom code must be known to both the transmitter and the
intended receiver. When a receiver detects a direct-sequence transmission, it simply subtracts
the pseudorandom signal from the composite received signal to extract the information data. In
CDMA cellular telephone systems, the total radio frequency bandwidth is divided into a few
broadband radio channels that have a much higher bandwidth than the digitized voice signal.
The digitized voice signal is added to the generated high bit rate signal and transmitted in such
a way that it occupies the entire broadband radio channel. Adding a high bit rate
pseudorandom signal to the voice information makes the signal more dominant and less
susceptible to interference, allowing lower-power transmission and, hence, a lower number of
transmitters and less expensive receivers.
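A minimal direct-sequence sketch, assuming the "addition" above is the usual modulo-2 (XOR) combining: the same pseudorandom chip stream spreads each data bit at the transmitter and despreads it at the receiver. The 8-chip PN code here is hypothetical, far shorter than a real system's.

```python
pn_code = [1, 0, 1, 1, 0, 0, 1, 0]   # hypothetical 8-chip PN code
data_bits = [1, 0, 1]

# Spread: each data bit is XORed with the whole chip sequence.
tx_chips = [bit ^ chip for bit in data_bits for chip in pn_code]

# Despread: XOR with the same known code, then majority-vote per bit period.
rx_bits = []
for i in range(len(data_bits)):
    block = tx_chips[i * len(pn_code):(i + 1) * len(pn_code)]
    votes = [c ^ chip for c, chip in zip(block, pn_code)]
    rx_bits.append(1 if sum(votes) > len(votes) // 2 else 0)

print(rx_bits == data_bits)  # → True
```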
Figure:
Space Division (SDMA): 2 different regions of earth use the same satellite. They use the same frequency band
(INTELSAT IVA)
Figure:
Polarization Division (PDMA): earth station and satellite antennas use different polarizations
- 2 stations in the same region simultaneously access the same satellite
It's all $
DAMA works best for a given CR capacity
You don't get a busy signal too often
There aren't enough earth stations to guarantee use of 1 transponder (36 MHz) all the time.
This is the easiest way to allow low-user-rate earth stations access and handle burst overload by
medium-use stations
These interconnect computers, terminals, printers, etc. within a building, university, company, set
of buildings, etc.
For long haul, the public phone network is used (economics)
Within a LAN, lay your own high-BW cables. BW is not as scarce as in the long-haul/satellite cases
(fewer users). For this reason, LANs use simpler access methods (very different from the ones
discussed for the phone and satellite cases with more users)
Discussed later p. 526-31, VG 4-87
Dynamic assignment gives a station access to the channel only when it requests it.
Works since actual demand rarely exceeds peak demand.
Add buffers of data and the technique handles bursty traffic also.
You of course get busy signals (queue delays).
This is what is used now (with TDMA and FDM)
Don't forget
Ideal world: 1 transponder and 1 carrier, 36 MHz BW supports 9000 users (4 kHz each)
As the number of carriers per transponder increases, fewer 4 kHz channels are allowed (due to the need to
reduce amplitude, and hence power, in each carrier to avoid crosstalk due to nonlinearities)
Why not always use 1 carrier per transponder (it supports more users)?
Answer: all earth stations cannot support/utilize 36 MHz all the time
All this BW is there, but it is not used all the time ($$$)
Consider the case of 12 users from country A to F (VG 4-78)
What if only 1 user? Then the other 11 slots are empty.
Problem is you can't reassign this communication resource (CR).
Thus, it isn't used ($$ and CR plus DSP/VLSI)
Each voice channel has its own carrier fc (for a given transponder)
There are 800 such voice channels; each has 45 kHz (this >> 4 kHz BW) to allow for nonlinearities
and channel separations. There is clearly room for improvement here (10x the needed BW)
Aside: 6 of these channels are used for channel management (DAMA) etc.
800 users are OK since you don't talk 60% of the time; switch the available power
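The channel counts above follow directly: a 36 MHz transponder divided into 45 kHz carrier slots gives 800 voice channels, 6 of which are reserved for channel management (DAMA), leaving 794 for traffic.

```python
# FDMA transponder channel count: 36 MHz / 45 kHz per carrier.
transponder_bw_hz = 36_000_000
carrier_spacing_hz = 45_000

total_channels = transponder_bw_hz // carrier_spacing_hz
print(total_channels, total_channels - 6)  # → 800 794
```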
Still frequency multiplex each user of one earth station to some fc channel (FDM)
But with TDMA, several users (24 or 30) per fc channel, each with a different time slot: 24 in the U.S.,
30 in Europe
Still up to 800 fc channels/transponder (FDM), each with 24 users (TDMA), and 12 transponders per
satellite
TDMA is better than increasing the number of frequency channels (each means more hardware)
even further. TDMA is OK if you can do sync and maintain accuracy of clocks, etc. (can now with
VLSI)
Figure (a):
TDMA (increase transmit rate to utilize channel BW)
Prior page was for 1 frame (one sample (byte) for each user)
Channel can support >> the 1.5 Mbps rate that the above needs.
Thus, they transmit the above data in less time (burst process)
They can then put other users (channels) into different time slots (TDM)
Figure:
Concept (user BW rate is << channel rate). Thus, let multiple users share time slots.
This means that we must burst transmit the above set of frame data at a higher rate in less time
(rate is the channel rate for that fc number)
The transmitting earth station has control. It tells the satellite which signal goes where.
The sync data from the earth source is sent to the receiver stations (they use this to get in sync)
All info from the ground stations goes to a switching matrix.
This decides how to connect different up and down links
LAN approach (BW not an issue now; few users since local)
Packet = set of b bits
Ethernet assumes each user can sense the state of the net before using it
i.e. any free slots and when and where they are (can I get in?)
Users send a 64-bit preamble (sync), destination, data type (ECC, etc. used), and finally data, then
cyclic linear block code parity (check field, 32 bits)
Header
Carrier Sense method (Ethernet): if no transition occurs in the transition search window, then no
signal (carrier) is present and the line is free (the user is done and the packet is over). Figure b
Token ring method: connect all users in a ring. Transmit a token around the ring (8 ones:
11111111 bit pattern). Each user (in the ring, in order) can access the LAN if desired. He changes
the last bit of the pattern (LAN busy), inserts his data onto the LAN, and then restores the token and
passes it to the next user in the ring. Figure a
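The token manipulation described above can be sketched as follows: a free token is the all-ones pattern, and a station seizes the ring by flipping the last bit before inserting its data.

```python
# Token ring access: the free token is 11111111; flipping the last bit
# marks the ring busy.
FREE_TOKEN = [1] * 8

def seize(token):
    """Mark the ring busy by changing the last bit of the token."""
    return token[:-1] + [0]

busy = seize(FREE_TOKEN)
print(busy, busy != FREE_TOKEN)  # → [1, 1, 1, 1, 1, 1, 1, 0] True
```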
UNIT V
PART A
A satellite orbiting the earth stays in position because the centripetal force on the satellite
balances the gravitational attraction of the earth.
Active satellite
The signal received by the satellite is retransmitted rather than being simply reflected; on
board are high-gain transmitting and receiving antennas and complex interconnecting circuits.
Passive satellite
The signal transmitted from the earth is reflected back to the earth without any
conversion on board.
Commercial satellites use the 4/6 GHz band: an uplink of 5.925 to 6.425 GHz and a down-
link of 3.7 to 4.2 GHz.
Government and military satellites in many countries use the 7/8 GHz band, with 7.9 to 8.4
GHz up and 7.25 to 7.75 GHz down.
Satellites are now being designed for the 12/14 GHz band, using 14 to 14.5 GHz up and either
11.7 to 12.2 GHz down or 10.95 to 11.2 GHz and 11.5 to 11.7 GHz down.
NA = n0 sin θa = √(n1^2 − n2^2)
The refractive index is the basic parameter that determines the optical characteristics of any
material; it is the ratio of the velocity of light in a vacuum to the velocity of light in the medium.
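The numerical aperture relation NA = √(n1² − n2²), and the corresponding acceptance angle, can be evaluated for illustrative core/cladding indices (the index values below are assumptions, typical of silica fiber, not from the text).

```python
import math

n1, n2 = 1.48, 1.46   # hypothetical core and cladding refractive indices

na = math.sqrt(n1**2 - n2**2)
theta_acc = math.degrees(math.asin(na))  # acceptance angle, fiber in air
print(f"NA = {na:.3f}, acceptance angle = {theta_acc:.1f} degrees")
```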
PART B
1. Explain the different types of optical fibers?
Propagation modes can be categorized as either multimode or single mode, and multimode
can be further subdivided into step index or graded index. Although there is a wide
variety of combinations of modes and indexes, there are only three practical types of optical fiber
configurations: single-mode step-index, multimode step-index, and multimode graded-index.
Figure: Single-mode step-index fibers: (a) air cladding; (b) glass cladding
A more practical type of single-mode step-index fiber is one that has a cladding other than
air, such as the cable shown in the figure. The refractive index of the cladding (n2) is slightly less than
that of the central core (n1) and is uniform throughout the cladding. This type of cable is
physically stronger than the air-clad fiber, but the critical angle is also much higher
(approximately 77°). This results in a small acceptance angle and a narrow source-to-fiber
aperture, making it much more difficult to couple light into the fiber from a light source.
Multimode Step-Index Optical Fiber:
Multimode Graded-Index Optical Fiber:
A multimode graded-index optical fiber is shown in the figure. Graded-index fibers are
characterized by a central core with a nonuniform refractive index. Thus, the cable's density is
maximum at the center and decreases gradually toward the outer edge. Light rays propagate
down this type of fiber through refraction rather than reflection. As a light ray propagates
diagonally across the core toward the center, it is continually intersecting a less dense to more
dense interface. Consequently, the light rays are constantly being refracted, which results in a
continuous bending of the light rays. Light enters the fiber at many different angles. As the light
rays propagate down the fiber, the rays traveling in the outermost area of the fiber travel a greater
distance than the rays traveling near the center. Because the refractive index decreases with
distance from the center, and the velocity is inversely proportional to the refractive index, the light
rays traveling farthest from the center propagate at a higher velocity. Consequently, they take
approximately the same amount of time to travel the length of the fiber.
1. Minimum dispersion: All rays propagating down the fiber take approximately the same path;
thus, they take approximately the same length of time to travel down the cable.
Consequently, a pulse of light entering the cable can be reproduced at the receiving end very
accurately.
2. Because of the high accuracy in reproducing transmitted pulses at the receive end, wider
bandwidths and higher information transmission rates (bps) are possible with single-mode step-
index fibers than with the other types of fibers.
1. Because the central core is very small, it is difficult to couple light into and out of this type of
fiber. The source-to-fiber aperture is the smallest of all the fiber types.
2. Again, because of the small central core, a highly directive light source, such as a laser, is
required to couple light into a single-mode step-index fiber.
3. Single mode step-index fibers are expensive and difficult to manufacture.
1. Light rays take many different paths down the fiber, which results in large differences in
propagation times. Because of this, rays traveling down this type of fiber have a tendency to
spread out. Consequently, a pulse of light propagating down a multimode step-index fiber is
distorted more than with the other types of fibers.
2. The bandwidths and information transfer rates possible with this type of cable are less
than those possible with the other types of fiber cables.
Optical Sources:
There are essentially only two types of practical light sources used to generate light for
optical fiber communications systems: LEDs and ILDs. Both devices are constructed from
semiconductor materials and have advantages and disadvantages. Standard LEDs have spectral
widths of 30 nm to 50 nm, while injection lasers have spectral widths of only 1 nm to 3 nm (1 nm
corresponds to a frequency of about 178 GHz). Therefore, a 1320-nm light source with a spectral
line width of 0.0056 nm has a frequency bandwidth of approximately 1 GHz. Line width is the
wavelength equivalent of bandwidth.
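The linewidth-to-bandwidth conversion quoted above follows from Δf = c·Δλ/λ²; the 0.0056 nm line at 1320 nm indeed works out to roughly 1 GHz.

```python
# Convert spectral linewidth to frequency bandwidth: delta_f = c * dl / l^2.
c = 3e8                 # speed of light, m/s
wavelength = 1320e-9    # m
linewidth = 0.0056e-9   # m

delta_f = c * linewidth / wavelength**2
print(f"{delta_f / 1e9:.2f} GHz")  # → 0.96 GHz, approximately 1 GHz
```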
Selection of one light-emitting device over the other is determined by system economic and
performance requirements. The higher cost of laser diodes is offset by higher performance; LEDs
typically have a lower cost and a correspondingly lower performance. However, LEDs are typically
more reliable.
LEDs
An LED is a p-n junction diode, usually made from a semiconductor material such as
aluminum gallium arsenide (AlGaAs) or gallium arsenide phosphide (GaAsP). LEDs emit light
by spontaneous emission: light is emitted as a result of the recombination of electrons and holes.
When forward biased, minority carriers are injected across the p-n junction. Once across
the junction, these minority carriers recombine with majority carriers and give up energy in the
form of light. This process is essentially the same as in a conventional semiconductor diode, except
that in LEDs certain semiconductor materials and dopants are chosen such that the process is
radiative; that is, a photon is produced. A photon is a quantum of electromagnetic wave energy.
Photons are particles that travel at the speed of light but at rest have no mass. In conventional
semiconductor diodes (germanium and silicon, for example), the process is primarily non-
radiative, and no photons are generated. The energy gap of the material used to construct an LED
determines the color of light it emits and whether the light emitted by it is visible to the human
eye.
To produce LEDs, semiconductors are formed from materials with atoms having either
three or five valence electrons (known as Group III and Group V atoms, respectively, because of
their location in the periodic table of elements). To produce light wavelengths in the 800-nm
range, LEDs are constructed from Group III atoms, such as gallium (Ga) and aluminum (Al), and
a Group V atom, such as arsenic (As). The junction formed is commonly abbreviated GaAlAs,
for gallium aluminum arsenide. For longer wavelengths, gallium is combined with the Group III
atom indium (In), and arsenic is combined with the Group V atom phosphorus (P), which forms
a gallium-indium-arsenide-phosphide (GaInAsP) junction. Table lists some of the common
semiconductor materials used in LED construction and their respective output wavelengths.
Homojunction LEDs
A p-n junction made from two different mixtures of the same types of atoms is called a
homojunction structure. The simplest LED structures are homojunction and epitaxially grown, or
they are single-diffused semiconductor devices, such as the two shown in the figure. Epitaxially
grown LEDs are generally constructed of silicon-doped gallium arsenide (figure). A
typical wavelength of light emitted from this construction is 940 nm, and a typical output power
is approximately 2 mW (3 dBm) at 100 mA of forward current. Light waves from homojunction
sources do not produce a very useful light for an optical fiber. Light is emitted in all directions
equally; therefore, only a small amount of the total light produced is coupled into the fiber. In
addition, the ratio of electricity converted to light is very low. Homojunction devices are often
called surface emitters.
Heterojunction LEDs:
Heterojunction LEDs are made from a p-type semiconductor material of one set of atoms and an
n-type semiconductor material from another set. Heterojunction devices are layered (usually
two layers) such that the concentration effect is enhanced. This produces a device that confines the
electron and hole carriers and the light to a much smaller area. The junction is generally
manufactured on a substrate backing material and then sandwiched between metal contacts that
are used to connect the device to a source of electricity.
With heterojunction devices, light is emitted from the edge of the material, and they are therefore
often called edge emitters. A planar heterojunction LED (figure) is quite similar to the epitaxially
grown LED except that the geometry is designed such that the forward current is concentrated in
a very small area of the active layer.
Heterojunction devices have the following advantages over homojunction devices:
The smaller emitting area makes it easier to couple its emitted light into a fiber.
The small effective area has a smaller capacitance, which allows the planar heterojunction LED to
be used at higher speeds.
Figure shows the typical electrical characteristics for a low-cost infrared light emitting
diode. Figure a shows the output power versus forward current. From the figure, it can be seen
that the output power varies linearly over a wide range of input current (0.5 mW [-3 dBm] at 20
mA to 3.4 mW [5.3 dBm] at 140 mA). Figure b shows output power versus temperature. It
can be seen that the output power varies inversely with temperature over a temperature range
of 40°C to 80°C. Figure shows relative output power with respect to output wavelength. For this
particular example, the maximum output power is achieved at an output wavelength of 825 nm.
Figure: Planar heterojunction LED
For the more practical applications, such as telecommunications, data rates in excess of 100
Mbps are required. For these applications, the etched-well LED was developed. Burrus and
Dawson of Bell Laboratories developed the etched-well LED. It is a surface-emitting LED and
is shown in the figure. The Burrus etched-well LED emits light in many directions. The etched well
helps concentrate the emitted light to a very small area. Also, domed lenses can be placed over the
emitting surface to direct the light into a smaller area. These devices are more efficient than the
standard surface emitters, and they allow more power to be coupled into the optical fiber, but they
are also more difficult and expensive to manufacture.
The edge-emitting LED, which was developed by RCA, is shown in the figure. These LEDs
emit a more directional light pattern than do the surface-emitting LEDs. The construction is
similar to the planar and Burrus diodes except that the emitting surface is a stripe rather than a
confined circular area. The light is emitted from an active stripe and forms an elliptical beam.
Surface-emitting LEDs are more commonly used than edge emitters because they emit more
light. However, the coupling losses with surface emitters are greater, and they have narrower
bandwidths.
The radiant light power emitted from an LED is a linear function of the forward current
passing through the device (figure). It can also be seen that the optical output power of an LED is,
in part, a function of the operating temperature.
ILD:
Lasers are constructed from many different materials, including gases, liquids, and solids,
although the type of laser used most often for fiber optic communications is the semiconductor
laser.
The ILD is similar to the LED. In fact, below a certain threshold current, an ILD acts
similarly to an LED. Above the threshold current, an ILD oscillates; lasing occurs. As current
passes through a forward-biased p-n junction diode, light is emitted by spontaneous emission
at a frequency determined by the energy gap of the semiconductor material. When a particular
current level is reached, the number of minority carriers and photons produced on either side of
the p-n junction reaches a level where they begin to collide with already excited carriers. This
causes an increase in the ionization energy level and makes the carriers unstable. When this
happens, typically a carrier recombines with an opposite-type carrier at an energy level that is
above its normal before-collision value. In the process, two photons are created; one is
stimulated by another. Essentially, a gain in the number of photons is realized. For this to
happen, a large forward current that can provide many carriers (holes and electrons) is required.
The construction of an ILD is similar to that of an LED except that the ends are highly
polished. The mirror-like ends trap the photons in the active region and, as they reflect back and
forth, stimulate free electrons to recombine with holes at a higher-than-normal energy level.
This process is called lasing.
The radiant output light power of a typical ILD is shown in the figure. It can be seen that very
little output power is realized until the threshold current is reached; then lasing occurs. After
lasing begins, the optical output power increases dramatically with small increases in drive
current. It can also be seen that the magnitude of the optical output power of the ILD is more
dependent on operating temperature than is that of the LED.
Figure shows the light radiation patterns typical of an LED and an ILD. Because light is
radiated out the end of an ILD in a narrow concentrated beam, it has a more direct radiation
pattern.
ILDs have several advantages over LEDs and some disadvantages. Advantages include the
following:
ILDs emit coherent (orderly) light, whereas LEDs emit incoherent (disorderly) light.
Therefore, ILDs have a more direct radiation pattern, making it easier to couple light emitted by the
ILD into an optical fiber cable. This reduces the coupling losses and allows smaller fibers to be
used.
Figure: LED and ILD radiation patterns
The radiant output power from an ILD is greater than that from an LED. A typical output
power for an ILD is 5 mW (7 dBm) and only 0.5 mW (-3 dBm) for LEDs. This allows ILDs to provide
a higher drive power and to be used for systems that operate over longer distances.
Light detectors:
There are two devices commonly used to detect light energy in fiber optic
communications receivers: PIN diodes and APDs.
PIN Diodes:
A PIN diode is a depletion-layer photodiode and is probably the most common device
used as a light detector in fiber optic communications systems. Figure shows the basic
construction of a PIN diode. A very lightly doped (almost pure or intrinsic) layer of n-type semi-
conductor material is sandwiched between the junctions of the two heavily doped n- and p-type
contact areas. Light enters the device through a very small window and falls on the carrier-void
intrinsic material. The intrinsic material is made thick enough so that most of the photons that
enter the device are absorbed by this layer. Essentially, the PIN photodiode operates just the
opposite of an LED. As the photons are absorbed, they add sufficient energy to generate
carriers in the depletion region and allow current to flow through the device.
Photoelectric effect:
Light entering through the window of a PIN diode is absorbed by the intrinsic material and
adds enough energy to cause electrons to move from the valence band into the conduction band.
The increase in the number of electrons that move into the conduction band is matched by an
increase in the number of holes in the valence band.
Figure: PIN photodiode construction
1 eV = 1.6 × 10^-19 J
Eg = (1.12 eV)(1.6 × 10^-19 J/eV) = 1.792 × 10^-19 J
and energy E = hf, so
f = E / h
For a silicon photodiode,
f = (1.792 × 10^-19 J) / (6.6256 × 10^-34 J/Hz) = 2.705 × 10^14 Hz
λ = c / f = (3 × 10^8 m/s) / (2.705 × 10^14 Hz) = 1109 nm/cycle
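The photon-energy arithmetic for the silicon photodiode can be reproduced in code: the 1.12 eV bandgap corresponds to a cutoff near 1109 nm, so silicon photodiodes respond only to shorter wavelengths.

```python
# Silicon photodiode cutoff from the 1.12 eV bandgap.
h = 6.6256e-34    # Planck's constant, J/Hz (value used in the text)
c = 3e8           # speed of light, m/s
eV = 1.6e-19      # joules per electron-volt

Eg = 1.12 * eV                  # bandgap energy in joules
f = Eg / h                      # minimum photon frequency
wavelength_nm = c / f * 1e9     # corresponding cutoff wavelength

print(f"f = {f:.3e} Hz, cutoff = {wavelength_nm:.0f} nm")
```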
APDs:
Figure shows the basic construction of an APD. An APD is a p-i-p-n structure. Light enters
the diode and is absorbed by the thin, heavily doped n layer. A high electric field intensity
developed across the i-p-n junction by reverse bias causes impact ionization to occur. During
impact ionization, a carrier can gain sufficient energy to ionize other bound electrons. These
ionized carriers, in turn, cause more ionizations to occur. The process continues as in an
avalanche and is, effectively, equivalent to an internal gain or carrier multiplication.
Consequently, APDs are more sensitive than PIN diodes and require less additional amplification.
The disadvantages of APDs are relatively long transit times and additional internally generated
noise due to the avalanche multiplication factor.
Lasers:
Laser is an acronym for light amplification by stimulated emission of radiation. Laser
technology deals with the concentration of light into a very small, powerful beam. The acronym
was chosen when technology shifted from microwaves to light waves. Basically, there are four
types of lasers: gas, liquid, solid, and semiconductor.
The first laser was developed by Theodore H. Maiman, a scientist who worked for Hughes
Aircraft Company in California. Maiman directed a beam of light into ruby crystals with a xenon
flashlamp and measured emitted radiation from the ruby. He discovered that when the emitted
radiation increased beyond threshold, it caused the emitted radiation to become extremely intense
and highly directional. Uranium lasers were developed in 1960 along with other rare earth
materials. Also in 1960, A. Javan of Bell Laboratories developed the helium-neon laser. Semiconductor
lasers (injection laser diodes) were manufactured in 1962 by General Electric, IBM, and Lincoln
Laboratories.
Laser Types:
Basically, there are four types of lasers: gas, liquid, solid, and semiconductor.
1. Gas lasers. Gas lasers use a mixture of helium and neon enclosed in a glass tube. A flow of
coherent (one frequency) light waves is emitted through the output coupler when an
electric current is discharged into the gas. The continuous light wave output is
monochromatic (one color).
2. Liquid lasers. Liquid lasers use organic dyes enclosed in a glass tube for an active medium.
Dye is circulated into the tube with a pump. A powerful pulse of light excites the organic
dye.
3. Solid lasers. Solid lasers use a solid, cylindrical crystal, such as ruby, for the active
medium. Each end of the ruby is polished and parallel. The ruby is excited by a tungsten
lamp tied to an ac power supply. The output from the laser is a continuous wave.
4. Semiconductor lasers. Semiconductor lasers are made from semiconductor p-n junctions
and are commonly called ILDs. The excitation mechanism is a dc power supply that
controls the amount of current to the active medium. The output light from an ILD is easily
modulated, making it very useful in many electronic communications applications.
Laser Characteristics:
All types of lasers have several common characteristics. They all use (1) an active material
to convert energy into laser light, (2) a pumping source to provide power or energy, (3) optics to
direct the beam through the active material to be amplified, (4) optics to direct the beam into a
narrow powerful cone of divergence, (5) a feedback mechanism to provide continuous operation,
and (6) an output coupler to transmit power out of the laser.
The radiation of a laser is extremely intense and directional. When focused into a fine hair
like beam, it can concentrate all its power into the narrow beam. If the beam of light were
allowed to diverge, it would lose most of its power.
Laser Construction:
Figure shows the construction of a basic laser. A power source is connected to a flashtube
that is coiled around a glass tube that holds the active medium. One end of the glass tube is a
polished mirror face for 100% internal reflection. The flashtube is energized by a trigger pulse and
produces a high level burst of light (similar to a flashbulb). The flash causes the chromium
atoms within the active crystalline structure to become excited. The process of pumping raises the
level of the chromium atoms from ground state to an excited energy state. The ions then decay,
falling to an intermediate energy level. When the population of ions in the intermediate level is
greater than the ground state, a population inversion occurs. The population inversion causes
laser action (lasing) to occur. After a period of time, the excited chromium atoms will fall to the
ground energy level. At this time, photons are emitted. A photon is a packet of radiant energy.
The emitted photons strike atoms, and two other photons are emitted (hence the term stimulated
emission). The frequency of the light determines the energy of the photons; higher
frequencies produce higher-energy photons.
Figure: Laser construction
4. Explain briefly the frequency division multiple access techniques used in satellite
communication?
Carrier frequencies and bandwidths for FDM/FM satellite systems using multiple channel
per carrier formats are generally assigned and remain fixed for a long period of time. This is
referred to as fixed assignment, multiple access (FDM/FM/FAMA). An alternate channel
allocation scheme is demand assignment, multiple access (DAMA). Demand assignment allows
all users continuous and equal access to the entire transponder bandwidth by assigning carrier
frequencies on a temporary basis using a statistical assignment process. The first FDMA demand
assignment system for satellites was developed by Comsat for use on the Intelsat series IVA and
V satellites.
Figure: FDMA, SPADE earth station transmitter
SPADE is an acronym for single channel per carrier PCM multiple access demand
assignment equipment. Figure shows the block diagram and IF frequency assignments,
respectively, for SPADE.
With SPADE, 800 PCM encoded voice band channels separately QPSK modulate an IF
carrier signal (hence the name single channel per carrier, SCPC). Each 4-kHz voice band
channel is sampled at an 8-kHz rate and converted to an eight-bit PCM code. This produces a 64-
kbps PCM code for each voice band channel. The PCM code from each voice band channel
QPSK modulates a different IF carrier frequency. With QPSK, the minimum required bandwidth
is equal to one-half the input bit rate. Consequently, the output of each QPSK modulator
requires a minimum bandwidth of 32 kHz. Each channel is allocated a 45-kHz bandwidth,
allowing for a 13-kHz guard band between pairs of frequency division multiplexed channels.
The IF carrier frequencies begin at 52.0225 MHz (low band channel 1) and increase in 45-kHz
steps to 87.9775 MHz (high band channel 400). The entire 36-MHz band (52 MHz to 88 MHz) is
divided in half, producing two 400-channel bands (a low band and a high band). For full
duplex operation, 400 45-kHz channels are used for one direction of transmission, and 400 are
used for the opposite direction. Also, channels 1, 2, and 400 from each band are left permanently
vacant. This reduces the number of usable full duplex voice band channels to 397. The 6-
GHz C band extends from 5.725 GHz to 6.425 GHz (700 MHz). This allows for approximately 19
36-MHz RF channels per system. Each RF channel has a capacity of 397 full duplex voice band
channels.
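The SPADE channel plan described above can be sketched numerically; the function name below is illustrative:

```python
# Sketch of the SPADE IF channel plan: 800 channels at 45-kHz spacing,
# starting at 52.0225 MHz (low-band channel 1).
CHANNEL_SPACING_HZ = 45e3
FIRST_CENTER_HZ = 52.0225e6   # low-band channel 1

def spade_center_freq(n):
    """Center frequency (Hz) of channel n (1..800 across both bands)."""
    return FIRST_CENTER_HZ + (n - 1) * CHANNEL_SPACING_HZ

print(spade_center_freq(1) / 1e6)      # 52.0225 (low-band channel 1)
print(spade_center_freq(800) / 1e6)    # 87.9775 (high-band channel 400)
print(800 * CHANNEL_SPACING_HZ / 1e6)  # 36.0 MHz total
```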
Each RF channel has a 160-kHz common signaling channel (CSC). The CSC is a time
division multiplexed transmission that is frequency division multiplexed into the IF spectrum
below the QPSK encoded voice band channels.
Figure: Carrier frequency assignment for the Intelsat single channel per carrier PCM
multiple access demand assignment equipment (SPADE)
Figure shows the TDM frame structure for the CSC. The total frame time is 50 ms, which is
subdivided into 50 1-ms epochs. Each earth station transmits on the CSC channel only during its
preassigned 1-ms time slot. The CSC signal is a 128-bit binary code. To transmit a 128-bit
code in 1 ms, a transmission rate of 128 kbps is required. The CSC code is used for establishing
and disconnecting voice band links between two earth station users when demand assignment
channel allocation is used.
The CSC channel occupies a 160-kHz bandwidth, which includes the 45 kHz for low band
channel 1. Consequently, the CSC channel extends from 51.885 MHz to 52.045 MHz. The 128-
kbps CSC binary code QPSK modulates a 51.965-MHz carrier. The minimum bandwidth
required for the CSC channel is 64 kHz; this results in a 48-kHz guard band on either side of the
CSC signal.
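A quick check of the CSC arithmetic above, using the values from the text:

```python
# CSC channel arithmetic: a 128-bit burst in a 1-ms slot gives a
# 128-kbps rate; QPSK needs half that bandwidth, leaving guard bands
# within the 160-kHz allocation.
csc_bits = 128          # bits per station burst
slot_ms = 1             # 1-ms time slot
rate_bps = csc_bits * 1000 // slot_ms   # 128 kbps signaling rate
csc_bw = 160_000                        # allocated CSC bandwidth, Hz
min_bw = rate_bps // 2                  # QPSK minimum: half the bit rate
guard = (csc_bw - min_bw) // 2          # guard band on either side

print(rate_bps, min_bw, guard)          # 128000 64000 48000
```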
With FDMA, each earth station may transmit simultaneously within the same 36 MHz RF
spectrum but on different voice band channels. Consequently, simultaneous transmissions of
voice band channels from all earth stations within the satellite network are interleaved in the
frequency domain in the satellite transponder. Transmissions of CSC signals are interleaved in the
time domain.
An obvious disadvantage of FDMA is that carriers from multiple earth stations may be
present in a satellite transponder at the same time. This results in cross modulation distortion
between the various earth station transmissions. This is alleviated somewhat by shutting off the
IF subcarriers on all unused 45 kHz voice band channels. Because balanced modulators are
used in the generation of QPSK, carrier suppression is inherent. This also reduces the power load
on a system and increases its capacity by reducing the idle channel power.
Time division multiple access (TDMA) is the predominant multiple access method used
today. It provides the most efficient method of transmitting digitally modulated carriers (PSK).
TDMA is a method of time division multiplexing digitally modulated carriers between
participating earth stations within a satellite network through a common satellite transponder.
With TDMA, each earth station transmits a short burst of a digitally modulated carrier during a
precise time slot (epoch) within a TDMA frame. Each station's burst is synchronized so that it
arrives at the satellite transponder at a different time. Consequently, only one earth station's
carrier is present in the transponder at any given time, thus avoiding a collision with another
station's carrier. The transponder is an RF-to-RF repeater that simply receives the earth station
transmissions, amplifies them, and then retransmits them in a downlink beam that is received by
all the participating earth stations. Each earth station receives the bursts from all other earth
stations and must select from them the traffic destined only for itself.
Figure shows a basic TDMA frame. Transmissions from all earth stations are synchronized
to a reference burst. Figure shows the reference burst as a separate transmission, but it may be the
preamble that precedes a reference station's transmission of data. Also, there may be more than
one synchronizing reference burst.
Figure: Basic time division multiple accessing (TDMA) frame
Each earth station synchronizes the transmission of its carrier to the occurrence of the UW
correlation spike. Each station waits a different length of time before it begins transmitting.
Consequently, no two stations will transmit the carrier at the same time. Note the guard time (GT)
between transmissions from successive stations. This is analogous to a guard band in a frequency
division multiplexed system. Each station precedes the transmission of data with a preamble.
The preamble is logically equivalent to the reference burst. Because each station's transmissions
must be received by all other earth stations, all stations must recover carrier and clocking
information prior to demodulating the data. If demand assignment is used, a common signaling
channel also must be included in the preamble.
CEPT primary multiplex frame:
Figure shows the block diagram and timing sequence, respectively, for the CEPT primary
multiplex frame. (CEPT is the Conference of European Postal and Telecommunications
Administrations; the CEPT sets many of the European telecommunications standards). This is a
commonly used TDMA frame format for digital satellite systems.
Essentially, TDMA is a store and forward system. Earth stations can transmit only
during their specified time slot, although the incoming voice band signals are continuous.
Consequently, it is necessary to sample and store the voice band signals prior to transmission.
The CEPT frame is made up of eight bit PCM encoded samples from 16 independent voice
band channels.
Each channel has a separate codec that samples the incoming voice signals at a 16-kHz
rate and converts those samples to eight-bit binary codes. This results in 128 kbps transmitted at
a 2.048-MHz rate from each voice channel codec. The sixteen 128-kbps transmissions are time
division multiplexed into a subframe that contains one eight-bit sample from each of the 16
channels (128 bits). It requires only 62.5 μs to accumulate the 128 bits (at the 2.048-Mbps
transmission rate). The CEPT multiplex format specifies a 2-ms frame time. Consequently, each
earth station can transmit only once every 2 ms and, therefore, must store the PCM encoded
samples. The 128 bits accumulated during the first sample of each voice band channel are stored
in a holding register while a second sample is taken from each channel and converted into
another 128-bit subframe. This 128-bit sequence is stored in the holding register behind the first
128 bits. The process continues for 32 subframes (32 × 62.5 μs = 2 ms). After 2 ms, 32 eight-bit
samples have been taken from each of the 16 voice band channels for a total of 4096 bits
(32 × 8 × 16 = 4096). At this time, the 4096 bits are transferred to an output shift register for
transmission. Because the total TDMA frame is 2 ms long and during this 2-ms period each of
the participating earth stations must transmit at different times, the individual transmissions from
each station must occur in a significantly shorter time period. In the CEPT frame, a transmission
rate of 120.832 Mbps is used. This rate is the 59th multiple of 2.048 Mbps. Consequently, the
actual transmission of the 4096 accumulated bits takes approximately 33.9 μs. At the earth
station receivers, the 4096 bits are stored in a holding register and shifted out at a 2.048-Mbps
rate. Because all the clock rates (500 Hz, 16 kHz, 128 kHz, 2.048 MHz, and 120.832 MHz) are
synchronized, the PCM codes are accumulated, stored, transmitted, received, and then decoded
in perfect synchronization. To the users, the voice transmission appears to be a continuous
process.
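The CEPT frame arithmetic above can be verified with a short sketch; all rates are the ones quoted in the text:

```python
# CEPT frame arithmetic: 16 voice channels sampled at 16 kHz with
# 8-bit codes, accumulated over a 2-ms frame and burst out at
# 120.832 Mbps (the 59th multiple of 2.048 Mbps).
sample_rate = 16e3        # samples/s per voice channel
bits_per_sample = 8
channels = 16
frame_time = 2e-3         # 2-ms TDMA frame
burst_rate = 120.832e6    # bps

per_channel_bps = sample_rate * bits_per_sample   # 128 kbps per codec
subframe_bits = channels * bits_per_sample        # 128 bits per subframe
subframes = round(frame_time * sample_rate)       # 32 subframes per frame
frame_bits = subframes * subframe_bits            # 4096 bits
burst_time_us = frame_bits / burst_rate * 1e6     # ~33.9 us burst

print(per_channel_bps, subframes, frame_bits, round(burst_time_us, 1))
```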
There are several advantages of TDMA over FDMA. The first, and probably the most
significant, is that with TDMA only the carrier from one earth station is present in the satellite
transponder at any given time, thus reducing intermodulation distortion. Second, with FDMA,
each earth station must be capable of transmitting and receiving on a multitude of carrier
frequencies to achieve multiple accessing capabilities. Third, TDMA is much better suited to the
transmission of digital information than FDMA. Digital signals are more naturally acclimated to
storage, rate conversions, and time domain processing than their analog counterparts.
The primary disadvantage of TDMA as compared with FDMA is that TDMA requires precise
synchronization. Each earth station's transmissions must occur during an exact time
slot. Also, bit and frame timing must be achieved and maintained with TDMA.
With FDMA, earth stations are limited to a specific bandwidth within a satellite channel or
system but have no restriction on when they can transmit. With TDMA, an earth station's
transmissions are restricted to a precise time slot, but there is no restriction on what frequency or
bandwidth it may use within a specified satellite system or channel allocation. With code division
multiple access (CDMA), there are no restrictions on time or bandwidth. Each earth station
transmitter may transmit whenever it wishes and can use any or all of the bandwidth allocated to a
particular satellite system or channel. Because there is no limitation on the bandwidth, CDMA is
sometimes referred to as spread spectrum multiple access: transmission can spread throughout
the entire allocated bandwidth. Transmissions are separated through envelope encryption/
decryption techniques. That is, each earth station's transmissions are encoded with a unique word
called a chip code. Each station has a unique chip code. To receive a particular earth station's
transmission, a receive station must know the chip code for that station.
Figure shows the block diagram of a CDMA encoder and decoder. In the encoder, the input
data (which may be PCM encoded voice band signals or raw digital data) is multiplied by a
unique chip code. The product code PSK modulates an IF carrier, which is up-converted to RF for
transmission. At the receiver, the RF is down-converted to IF. From the IF, a coherent PSK
carrier is recovered. Also, the chip code is acquired and used to synchronize the receive station's
code generator. Keep in mind, the receiving station knows the chip code but must generate a chip
code that is synchronous in time with the received code. The recovered synchronous chip code
multiplies the recovered PSK carrier and generates a PSK modulated signal that contains the
PSK carrier plus the chip code. This locally generated signal is compared in the correlator with
the received IF signal, which contains the chip code, the PSK carrier, and the data information.
The function of the correlator is to compare the two signals and recover the original data.
Essentially, the correlator subtracts the recovered PSK carrier + chip code from the received
PSK carrier + chip code + data. The resultant is the data.
The correlation is accomplished on the analog signals. Figure shows how the encoding and
decoding is accomplished. Figure shows the correlation of the correctly received chip code. A +1
indicates an in-phase carrier, and a -1 indicates an out-of-phase carrier. The chip code is
multiplied by the data (either +1 or -1). The product is either an in-phase code or one that is
180° out of phase with the chip code. In the receiver, the recovered synchronous chip code is
compared in the correlator with the received signaling elements. If the phases are the same, a +1
is produced; if they are 180° out of phase, a -1 is produced. It can be seen that if all the recovered
chips correlate favorably with the incoming chip code, the output of the correlator will be a +6
(which is the case when a logic 1 is received). If all the code chips correlate 180° out of phase, a -6
is generated (which is the case when a logic 0 is received). The bit decision circuit is simply a
threshold detector. Depending on whether a +6 or a -6 is generated, the threshold detector will
output a logic 1 or a logic 0, respectively.
As the name implies, the correlator looks for a correlation (similarity) between the
incoming coded signal and the recovered chip code. When a correlation occurs, the bit decision
circuit generates the corresponding logic condition.
Figure: Code division multiple access (CDMA) (a) encoder; (b) decoder
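The +6/-6 correlation described above can be sketched with a made-up 6-chip code; the function names are illustrative:

```python
# Chip-code correlation: a data bit of +1 or -1 multiplies each chip on
# transmit; the receiver sums chip-by-chip phase comparisons, producing
# +6 (logic 1) or -6 (logic 0) for a 6-chip code. The code itself is a
# made-up example, not a code from the text.
chip_code = [+1, -1, +1, +1, -1, -1]   # hypothetical 6-chip code

def encode(data_bit, code):
    """Transmit: multiply the data bit (+1 or -1) by each chip."""
    return [data_bit * c for c in code]

def correlate(received, code):
    """Receive: sum the chip-by-chip comparisons with the local code."""
    return sum(r * c for r, c in zip(received, code))

print(correlate(encode(+1, chip_code), chip_code))  # +6 -> logic 1
print(correlate(encode(-1, chip_code), chip_code))  # -6 -> logic 0
```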
With CDMA, all earth stations within the system may transmit on the same frequency at the
same time. Consequently, an earth station receiver may be receiving coded PSK signals
simultaneously from more than one transmitter. When this is the case, the job of the correlator
becomes considerably more difficult. The correlator must compare the recovered chip code with
the entire received spectrum and separate from it only the chip code from the desired earth station
transmitter. Consequently, the chip code from one earth station must not correlate with the chip
codes from any of the other earth stations.
Figure shows how such a coding scheme is achieved. If half the bits within a code were
made the same and half were made exactly the opposite, the resultant would be zero cross
correlation between chip codes. Such a code is called an orthogonal code. In the figure, it can be
seen that when the orthogonal code is compared with the original chip code, there is no
correlation (i.e., the sum of the comparison is zero). Consequently, the orthogonal code, although
received simultaneously with the desired chip code, has absolutely no effect on the correlation
process. For this example, the orthogonal code is received in exact time synchronization with the
desired chip code; this is not always the case. For systems that do not have time synchronous
transmission, codes must be developed where there is no correlation between one station's code
and any phase of another station's code.
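The zero cross correlation property can be sketched the same way; both 6-chip codes below are made-up examples in which half the chips agree and half disagree:

```python
# Orthogonal chip codes: half the chips agree (+1 contributions) and
# half disagree (-1 contributions), so the correlator sum is exactly
# zero. Both codes are hypothetical examples.
chip_code = [+1, -1, +1, +1, -1, -1]    # hypothetical desired code
orthogonal = [+1, +1, -1, +1, -1, +1]   # hypothetical orthogonal code

def cross_correlate(a, b):
    """Sum of chip-by-chip comparisons (+1 agree, -1 disagree)."""
    return sum(x * y for x, y in zip(a, b))

print(cross_correlate(chip_code, chip_code))   # +6: full self correlation
print(cross_correlate(orthogonal, chip_code))  # 0: no effect on the correlator
```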
The primary difference between spread spectrum PSK transmitters and other types of
PSK transmitters is the additional modulator where the code word is multiplied by the incoming
data. Because of the pseudorandom nature of the code word, it is often referred to as
pseudorandom noise (PRN).
Figure: CDMA code / data alignment: (a) Correct code; (b) Orthogonal code
The PRN must have a high autocorrelation property with itself and a low correlation
property with other transmitters' pseudorandom codes. The code word rate (Rcw) must exceed the
incoming data rate (Rd) by several orders of magnitude. In addition, the code rate must be
statistically independent of the data signal. When these two conditions are satisfied, the final
output signal spectrum will be increased (spread) by a factor called the processing gain.
Processing gain is expressed mathematically as
G = Rcw / Rd
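As a quick illustration of the formula, with assumed rates (not from the text):

```python
# Processing gain G = Rcw / Rd. The chip and data rates below are
# assumed for the example.
import math

r_cw = 10e6   # chip (code word) rate: 10 Mchips/s (assumed)
r_d = 10e3    # data rate: 10 kbps (assumed)

g = r_cw / r_d
print(g)                   # 1000.0
print(10 * math.log10(g))  # 30.0 dB
```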
A spread spectrum signal cannot be demodulated accurately if the receiver does not
possess a despreading circuit that matches the code word generator in the transmitter. Three of
the most popular techniques used to produce the spreading function are direct sequence,
frequency hopping, and a combination of direct sequence and frequency hopping called hybrid
direct sequence frequency hopping (hybrid DS/FH).
Direct sequence:
Direct sequence spread spectrum (DS-SS) is produced when a bipolar data modulated
signal is linearly multiplied by the spreading signal in a special balanced modulator called a
spreading correlator. The spreading code rate Rcw = 1/Tc, where Tc is the duration of a single
bipolar pulse (i.e., the chip). Chips are 100 to 1000 times shorter in duration than a single data
bit. As a result, the transmitted output frequency spectrum using spread spectrum is 100 to 1000
times wider than the bandwidth of the initial PSK data modulated signal. The block diagram for
a direct sequence spread spectrum system is shown in the figure. As the figure shows, the data
source directly modulates the carrier signal, which is then further modulated in the spreading
correlator by the spreading code word.
Figure: Simplified block diagram for a direct sequence spread spectrum transmitter
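Direct sequence spreading as described above can be sketched as follows; the chip sequence and spreading factor are made-up examples:

```python
# Direct sequence spreading: each bipolar data bit (+1 or -1) is
# multiplied by a fast chip sequence, so the transmitted symbol rate
# (and hence bandwidth) grows by the spreading factor. The 8-chip
# sequence is a made-up example, not a real m-sequence or Gold code.
code = [+1, -1, -1, +1, -1, +1, +1, -1]  # hypothetical chip sequence

def spread(bits):
    """Multiply each data bit by the whole chip sequence."""
    chips = []
    for b in bits:
        chips.extend(b * c for c in code)
    return chips

data = [+1, -1]
chips = spread(data)
print(len(chips))  # 16 chips for 2 bits: an 8x rate (bandwidth) expansion
```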
The spreading (chip) codes used in spread spectrum systems are either maximal length
sequence codes, sometimes called m-sequence codes, or Gold codes. Gold codes are
combinations of maximal length codes invented by Magnavox Corporation in 1967, especially
for multiple access CDMA applications. There is a relatively large set of Gold codes available
with minimal correlation between chip codes. For a reasonable number of satellite users, it is
impossible to achieve perfectly orthogonal codes. You can only design for a minimum cross
correlation among chips.
One of the advantages of CDMA is that the entire bandwidth of a satellite channel or
system may be used for each transmission from every earth station. For our example, the chip rate
was six times the original bit rate. Consequently, the actual transmission rate of information was
one-sixth of the PSK modulation rate, and the bandwidth required is six times that required to
simply transmit the original data as binary. Because of the coding inefficiency resulting from
transmitting chips for bits, the advantage of more bandwidth is partially offset and is, thus, less of
an advantage. Also, if the transmission of chips from the various earth stations must be
synchronized, precise timing is required for the system to work. Therefore, the disadvantage of
requiring time synchronization in TDMA systems is also present with CDMA. In short, CDMA is
not all that it is cracked up to be. The most significant advantage of CDMA is immunity to
interference (jamming), which makes CDMA ideally suited for military applications.
Frequency hopping spread spectrum:
With frequency hopping, the total available bandwidth is partitioned into smaller
frequency bands, and the total transmission time is subdivided into smaller time slots. The idea is
to transmit within a limited frequency band for only a short time, then switch to another
frequency band, and so on. This process continues indefinitely. The frequency hopping pattern
is determined by a binary spreading code. Each station uses a different code sequence. A typical
hopping pattern (frequency-time matrix) is shown in the figure.
With frequency hopping, each earth station within a CDMA network is assigned a different
frequency hopping pattern. Each transmitter switches (hops) from one frequency band to the
next according to its assigned pattern. With frequency hopping, each station uses the entire RF
spectrum but never occupies more than a small portion of that spectrum at any one time.
FSK is the modulation scheme most commonly used with frequency hopping. When it is a
given station's turn to transmit, it sends one of two frequencies (either mark or space) for the
particular band in which it is transmitting. The number of stations in a given frequency hopping
system is limited by the number of unique hopping patterns that can be generated.
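A hopping pattern driven by a per-station code can be sketched as follows; using an RNG seed per station is purely illustrative, since real systems derive the pattern from a binary spreading code:

```python
# Sketch of frequency hopping pattern assignment: the band is split
# into num_bands sub-bands, and each station hops among them in an
# order set by its own code (here, a hypothetical per-station seed).
import random

def hop_pattern(station_seed, num_bands, num_slots):
    """One station's hopping pattern: a band index for each time slot."""
    rng = random.Random(station_seed)
    return [rng.randrange(num_bands) for _ in range(num_slots)]

a = hop_pattern(1, num_bands=8, num_slots=10)
b = hop_pattern(2, num_bands=8, num_slots=10)
print(a)  # station 1's band index per slot
print(b)  # a different pattern for station 2
```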
Essentially, there are two methods used to interface terrestrial voice band channels with
satellite channels: digital noninterpolated interfaces (DNI) and digital speech interpolated
interfaces (DSI).
************