
Communication Theory

for EEE job preparation


By Tonmoy Sharif


UNIT I

ANALOG COMMUNICATION

PART A

1. Define modulation?

It is simply the process of changing some property of the carrier in accordance with the
information.

2. Define demodulation?

It is the reverse process of modulation and converts the modulated carrier back to the original
information.

3. Define Communication?

Communication means transferring the message from one point to another point.

4. Define Amplitude modulation?

It is the process of changing the amplitude of a relatively high frequency Carrier signal in
accordance with the amplitude of the modulating signal.

5. Define Frequency modulation?

It is the process of changing the frequency of a relatively high frequency Carrier Signal in
accordance with the amplitude of the modulating signal.

6. Define Phase modulation?


It is the process of changing the phase of a relatively high frequency Carrier signal in
accordance with the amplitude of the modulating signal.

7. Define coefficient of modulation.

The coefficient of modulation describes the amount of amplitude change present in an AM waveform:

m = Em / Ec

where Em is the peak change in the amplitude of the output waveform voltage and Ec is the peak amplitude of the unmodulated carrier voltage.

8. Define percent modulation.

It is simply the coefficient of modulation stated as a percentage.

M = (Em / Ec) × 100

9. What is difference between low-level modulator and high-level modulator?

If modulation takes place prior to the output element of the final stage of the transmitter, it is a low-level modulator.

If modulation takes place in the final element of the final stage of the transmitter, it is a high-level modulator.

10. Define Selectivity.

Selectivity is used to measure the ability of the receiver to accept a given band of
frequencies and reject all others.

11. Define Sensitivity.

Sensitivity is the minimum RF signal level that can be detected at the input to the receiver
and still produce a usable demodulated information signal. Receiver sensitivity is also called the
receiver threshold.

12. Define fidelity.

Fidelity is a measure of the ability of a communication system to produce, at the output of


the receiver an exact replica of the original source information.

13. What is tracking error?

The difference between the actual local oscillator frequency and the desired frequency is
called tracking error. The tracking error is reduced by a technique called three-point tracking.

14. What is Local Oscillator tracking?

Tracking is the ability of the local oscillator in a receiver to oscillate either above or below
the selected radio frequency carrier by an amount equal to the intermediate frequency throughout
the entire radio frequency band.

15. What is an image frequency?

An image frequency is any frequency other than the selected radio frequency that, if
allowed to enter a receiver and mix with the local oscillator, will produce a cross-product
frequency that is equal to the intermediate frequency.

16. What is IFRR?

The Image Frequency Rejection Ratio is a numerical measure of the ability of a preselector
to reject the image frequency.

17. What is meant by AGC?

The AGC keeps the output signal level constant irrespective of the increase or decrease in the
signal level at the input of the receiver.

18. Define heterodyning.

It is the process of mixing two frequencies in a non-linear device.

19. Define modulation index for FM and for PM.

The modulation index of FM is directly proportional to the peak modulating voltage and inversely proportional to the modulating signal frequency:

mf = Δf / fm = (maximum frequency deviation) / (modulating frequency)

The modulation index of a PM signal is directly proportional to the peak modulating voltage:

M = K·Em

20. Define deviation ratio.

The deviation ratio is the ratio of the maximum frequency deviation to the maximum modulating signal frequency, i.e.,

Deviation ratio (DR) = Δf(max) / fm(max)

21. State Carson's general rule for determining the bandwidth of an angle modulated wave.

Carson's rule for FM bandwidth is given as

Bw = 2 (Δf + fm(max))

where Δf is the maximum frequency deviation and fm(max) is the maximum modulating signal frequency.
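As a quick numerical check of Carson's rule, the short Python sketch below evaluates the bandwidth for an assumed 75 kHz deviation and 15 kHz modulating frequency (illustrative values typical of broadcast FM, not taken from the text).

```python
# Sketch: Carson's rule estimate of FM transmission bandwidth.
def carson_bandwidth(delta_f, fm_max):
    """Bw = 2 * (delta_f + fm_max)."""
    return 2.0 * (delta_f + fm_max)

delta_f = 75e3   # maximum frequency deviation in Hz (assumed value)
fm_max = 15e3    # maximum modulating frequency in Hz (assumed value)
print(carson_bandwidth(delta_f, fm_max))   # 180000.0 Hz, i.e. 180 kHz
```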

22. Differentiate between the high level modulation and low level modulation.

High Level Modulation:
(i) The carrier is modulated at the highest power level.
(ii) Transmitter power is high.
(iii) Efficiency is high.
(iv) One or more stages of power amplification of the modulating signal are required.

Low Level Modulation:
(i) The carrier is modulated at a lower power level.
(ii) Transmitter power is low.
(iii) Efficiency is low.
(iv) Power amplification of the modulating signal is not required.

23. Differentiate FM and AM.

AM:
(i) In AM, there are only three frequency components: the carrier and the two sidebands.
(ii) Small signal-to-noise ratio.
(iii) Noise cannot be reduced.
(iv) Reception area is large.

FM:
(i) FM has an infinite number of sidebands in addition to the carrier.
(ii) Large signal-to-noise ratio.
(iii) Noise can be reduced.
(iv) Reception area is small.

24. Differentiate between PM and FM.


PM: Phase modulation is a form of angle modulation in which the angle θ(t) is made to vary linearly with the baseband signal em(t).

FM: Frequency modulation is a form of angle modulation in which the instantaneous frequency fi(t) of the carrier is varied linearly with the baseband signal em(t).
25. A transmitter radiates 9 kw without modulation and 10.125 kw after modulation. Determine
depth of modulation?

Pc = 9 kW
Pmod = 10.125 kW
Pmod = Pc (1 + ma²/2)
10.125 = 9 (1 + ma²/2)
ma = 0.5

26. In an aerial, the antenna current (RMS) before modulation is 10 A. After modulation it rises
to 11.6 A. Determine the percentage modulation. If the carrier power is 10 kW, what is the
power after modulation?

Carrier current Ic = 10 A
Current after modulation Imod = 11.6 A
If the load is assumed to be 1 Ω, then
Imod² = Ic² (1 + ma²/2)
(11.6)² = 10² (1 + ma²/2)
1 + ma²/2 = 1.3456
ma = 0.8313 = 83.13%
Pmod = Pc (1 + ma²/2)
Pmod = 10 × 1.3456 = 13.456 kW

ma = 83.13%, Pmod = 13.456 kW.

27. A transmitter supplies 8 kW to the antenna when unmodulated. Determine the total power
radiated when modulated to 30%.

Pc = 8 kW
Modulation index ma = 30/100 = 0.3
Pt = Pc (1 + ma²/2)
Total radiated power Pt = 8 × 10³ (1 + 0.3²/2) = 8.36 kW

28. The RMS value of the antenna current before modulation is 10 A and after modulation is 12 A.
Determine the modulation index.

I = Io √(1 + ma²/2)

where I = current after modulation and Io = current before modulation.

12 = 10 √(1 + ma²/2)
ma = 0.938, or 93.8%.

29. A 1 MHz carrier is amplitude modulated by a 400 Hz modulating signal to a depth of 50%.
The unmodulated carrier power is 1 kW. Calculate the total power of the modulated signal.

Pc = 1 kW
ma = 0.5
Pt = Pc (1 + ma²/2)
   = 1 × 10³ (1 + 0.5²/2)
Pt = 1.125 kW

The increase in power, 1.125 − 1 = 0.125 kW, is contained in the two sidebands.

30. A sinusoidal carrier voltage of frequency 1 MHz and amplitude 100 V is modulated by a
sinusoidal voltage of frequency 5 kHz, producing 50% modulation. Calculate the frequency and
amplitude of the USB and LSB.

Frequency of USB = 1 MHz + 5 kHz = 1005 kHz
Frequency of LSB = 1 MHz − 5 kHz = 995 kHz
Amplitude of each sideband = ma·Vc / 2 = 25 V
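The worked problems 25 to 29 above all rest on the relations Pt = Pc(1 + ma²/2) and It = Ic√(1 + ma²/2). The Python sketch below is only a numerical cross-check of those answers; the helper function names are my own.

```python
import math

def total_power(Pc, ma):
    """Total AM power: Pt = Pc * (1 + ma**2 / 2)."""
    return Pc * (1 + ma**2 / 2)

def depth_from_powers(Pc, Pt):
    """Modulation index from powers: ma = sqrt(2 * (Pt/Pc - 1))."""
    return math.sqrt(2 * (Pt / Pc - 1))

def depth_from_currents(Ic, It):
    """Modulation index from antenna currents: ma = sqrt(2 * ((It/Ic)**2 - 1))."""
    return math.sqrt(2 * ((It / Ic) ** 2 - 1))

print(depth_from_powers(9e3, 10.125e3))   # problem 25: 0.5
print(depth_from_currents(10, 11.6))      # problem 26: ~0.8313 (83.13 %)
print(total_power(10e3, 0.8313))          # problem 26: ~13.456 kW
print(total_power(8e3, 0.3))              # problem 27: 8360.0 W = 8.36 kW
print(depth_from_currents(10, 12))        # problem 28: ~0.938
print(total_power(1e3, 0.5))              # problem 29: 1125.0 W = 1.125 kW
```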
31. What is communication?

Communication is the process of conveying or transferring messages from one point to another. Generally it can be classified into two types:

(i) Communication within the line of sight

(ii) Communication beyond the line of sight

32. What are the advantages of super heterodyne receiver over TRF?

(i) Improved selectivity in terms of adjacent channels
(ii) More uniform selectivity over the receiving band
(iii) Improved receiver stability
(iv) Higher gain per stage, because the IF amplifiers are operated at a lower frequency
(v) Uniform bandwidth because of the fixed intermediate frequency

33. What is super heterodyne receiver?

The superheterodyne receiver converts all incoming RF frequencies to a fixed lower
frequency, called the intermediate frequency (IF). This IF signal is then amplified and detected to
recover the original signal.

34. What is Pre-emphasis & de-emphasis?

The boosting of the higher modulating frequencies at the transmitter is called pre-emphasis, and attenuating them at the receiver is called de-emphasis.

35. What is the principle of FM detection?

The process of extracting modulating signal from a frequency modulated carrier is known
as frequency demodulation or detection. The electronic circuits that perform the demodulation
process are called the FM detectors.
36. Draw the waveform of an AM wave.

37. What are the degrees of modulation?

There are three types of degrees of modulation.


(i) Under modulation
(ii) Critical modulation
(iii) Over modulation.

38. Draw the spectrum of AM wave:

39. What is an envelope detector?

In an envelope detector, the output of the detector follows the envelope of the modulated
signal (i.e., the message signal).
40. Differentiate DSB-SC and SSB-SC.

(1) The DSB spectrum has two sidebands, the USB and LSB, both containing the complete
information of the baseband signal.
(2) A scheme in which only one sideband is transmitted is known as SSB transmission; it
requires only one-half of the bandwidth of DSB.

41. Write any two advantages of SSB system.

(i) The improvement in signal-to-noise ratio at the receiver output is 10 to 12 decibels over DSB.
(ii) The bandwidth required is reduced to half. Thus twice the number of channels can be
accommodated in a given frequency spectrum.

42. Define VSB modulation.

VSB modulation requires a transmission bandwidth intermediate between that required in
SSB-SC and DSB-SC systems; the saving is significant when the modulating signal has a very
large bandwidth, e.g., TV signals.

43. Mention any two advantages of AM.

(1) It has the great merit of allowing recovery of the baseband signal by extremely simple means.
(2) The receiver is made simple and less expensive for the detection of the AM wave.

44. Name any two methods of SSB generation.

(i) Filter Method


(ii) Phasing Method

45. What are the disadvantages of Envelope Detection?

It produces distortions. They are:

(i) Negative peak clipping


(ii) Diagonal clipping.
46. Give the power relations in AM with AM-SSB/SC.

The power in SSB-SC AM is

Pt = PSB = (ma²/4) Pc

where ma is the modulation index and Pc is the carrier power.

47. What do you mean by spectral analysis?

A branch of mathematics which is of inestimable value in the study of communication systems is spectral analysis. It concerns itself with the description of waveforms in the frequency domain.

48. Define Angle Modulation.

Angle modulation is the process by which the angle (frequency or phase) of the carrier
signal is changed in accordance with the instantaneous amplitude of the message signal.

49. How FM is superior to AM?

FM has this advantage over AM: since all natural internal and external noise consists of
electrical amplitude variations, the receiver cannot distinguish between amplitude variations that
represent noise and those that represent the desired signal. So AM reception is generally noisier
than FM reception.

50. Draw the spectrum of FM wave.

51. Define Modulation index for FM.


The modulation index of an FM system can be defined as the ratio of the maximum frequency deviation to the modulating frequency:

mf = ωd / ωm = K·Vm / ωm

52. How PM wave is obtained from FM wave?

The PM wave can be obtained from an FM wave by differentiating the modulating signal
before applying it to the frequency modulator.

53. How PM wave is converted to FM?

This is done by integrating the modulated signal before applying it to the phase modulator.

54. What are the advantages of FM?

(i) The S/N ratio can be increased without increasing the transmitted power.
(ii) The need for a large amount of modulating power is avoided, since the modulation
takes place at a low-power stage of the transmitter.

55. What is significant Bandwidth in FM?

The transmission bandwidth of an FM wave is defined as the separation between the frequencies beyond which none of the side frequencies is greater than 1% of the carrier amplitude obtained when the modulation is removed.

(i.e.) BW = 2n·fm

where n is the number of significant sidebands and fm is the modulating frequency.

56. What is Carsons Rule?

The approximate rule for the transmission bandwidth of an FM signal generated by a single-tone
modulating signal is

B.W. = 2(Δω + ωm) = 2Δω(1 + 1/mf) rad/s

This empirical relation is known as Carson's rule.
57. Compare WBFM and NBFM.

WBFM:
(i) Modulation index is greater than 1.
(ii) Frequency deviation = 75 kHz.
(iii) Modulating frequency ranges from 30 Hz to 15 kHz.
(iv) Bandwidth is about 15 times that of NBFM.
(v) Noise is more suppressed.

NBFM:
(i) Modulation index is less than 1.
(ii) Frequency deviation = 5 kHz.
(iii) Modulating frequency is about 3 kHz.
(iv) Bandwidth = 2fm.
(v) Less suppression of noise.

58. Give the mathematical relation for bandwidth of a single tone wideband FM.

BW = 2Δω(1 + 1/mf) rad/s

where mf is the modulation index and Δω is the frequency deviation.

59. Define deviation ratio.

The deviation ratio of FM is defined as the ratio of the peak frequency deviation, corresponding to the maximum possible amplitude of the modulating signal f(t), to the maximum frequency component present in f(t).

60. What are the different types of FM detector?

(i) Single-tuned discriminator
(ii) Balanced slope detector
(iii) Foster-Seeley discriminator
(iv) Ratio detector

61. What are the disadvantages of FM?

i. A much wider channel is required by FM.
ii. FM transmitting and receiving equipment tends to be more complex and hence more expensive.
iii. Reception is limited to the line of sight; hence the coverage area for FM is much smaller than
that for AM.

62. What is intermediate frequency (IF)?

By the heterodyne operation, using a mixer and a local oscillator, the input frequency and the
local oscillator frequency are combined to produce a difference frequency, which is called the
intermediate frequency.

63. Why de-emphasis circuit is used in FM receiver?

A de-emphasis circuit is always employed in FM receiver circuits to restore the relative
magnitudes of the different components of the AF signal, as in the original modulating signal.

64. List two advantages of communication receivers.

Communication receivers are high-quality, short-wave, multipurpose, superheterodyne receivers
used to receive signals for communication rather than entertainment.

PART B

1. Derive the AM wave equation and the power relations in an AM wave.

Let the modulating voltage


Vm(t)= Vm Sinmt -----(1)
Vc(t) = Vc Sinct -----(2)

According to the definition, the amplitude of the carrier signal is changed after modulation,

VAM = VC + Vm (t ) = Vc + Vm sin m t ------(3)


V
=Vc 1 + m sin mt (4)
Vc
VAM =Vc (1 + ma sin mt ) (5)

ma=Vm/Vc= modulation index

The instantaneous value of the modulated signal is

VAM(t) = VAM sin ωc t   ...(6)

Substituting the value of VAM into equation (6):

VAM(t) = Vc (1 + ma sin ωm t) sin ωc t
       = Vc sin ωc t + ma Vc sin ωm t · sin ωc t   ...(7)
VAM(t) = Vc sin ωc t + (ma Vc / 2)[cos(ωc − ωm)t − cos(ωc + ωm)t]   ...(8)

Equation (8) of an amplitude modulated wave contains three terms. The first term on the R.H.S. represents the carrier wave. The second and third terms are identical in form and are called the lower side band (LSB) and upper side band (USB).

[Figure 1: Frequency spectrum of AM, showing the carrier at ωc, the LSB at (ωc − ωm), the USB at (ωc + ωm), and BW = 2ωm.]

Figure 1 shows the frequency spectrum of AM. Two sideband terms lie on either side of the carrier term, each separated from it by ωm. The frequency of the LSB is (ωc − ωm) and that of the USB is (ωc + ωm). The bandwidth of AM can be determined from these sidebands; hence the BW is twice the frequency of the modulating signal.

Power relation in AM:

The modulated wave contains three components: the carrier wave, the LSB and the USB. Therefore the modulated wave contains more power than the carrier had before modulation took place. Moreover, since the amplitude of the sidebands depends on the modulation index, it is anticipated that the total power in the modulated wave also depends on the modulation index.

The total power in the modulated wave is

Pt = Pc + PLSB + PUSB
Pt = V²carrier/R + V²LSB/R + V²USB/R

where Vcarrier = RMS value of the carrier voltage, VLSB = VUSB = RMS value of the lower and upper sideband voltages, and R = resistance in which the power is dissipated.

Pcarrier = V²carrier/R = (Vc/√2)²/R = Vc²/(2R)

Similarly,

PLSB = PUSB = V²SB/R = (ma Vc / (2√2))²/R = ma² Vc²/(8R)

where Vc is the maximum amplitude of the carrier and VSB = ma Vc/2 is the maximum amplitude of each sideband.

Therefore

Pt = Vc²/(2R) + ma² Vc²/(4R) = (Vc²/2R)(1 + ma²/2)

We know that Pc = Vc²/(2R); thus

Pt = Pc (1 + ma²/2)
Pt/Pc = 1 + ma²/2

If ma = 1, i.e., for 100% modulation,

Pt/Pc = 1.5, or Pt = 1.5 Pc
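As a numerical illustration of the derivation above, the sketch below synthesizes a single-tone AM waveform and checks that its average power (into a 1 Ω load) matches Pt = Pc(1 + ma²/2). The carrier and modulating frequencies are arbitrary example values, not taken from the text.

```python
import numpy as np

Vc, ma = 1.0, 0.5              # carrier amplitude and modulation index (assumed)
fc, fm = 10e3, 1e3             # carrier and modulating frequencies in Hz (assumed)
fs = 1e6                       # sampling rate in Hz
t = np.arange(0, 0.1, 1 / fs)

# v_AM(t) = Vc (1 + ma sin wm t) sin wc t, as in equation (7)
v_am = Vc * (1 + ma * np.sin(2 * np.pi * fm * t)) * np.sin(2 * np.pi * fc * t)

P_measured = np.mean(v_am ** 2)       # average power into R = 1 ohm
Pc = Vc ** 2 / 2                      # carrier power Vc^2 / (2R)
P_theory = Pc * (1 + ma ** 2 / 2)     # Pt = Pc (1 + ma^2 / 2)

print(P_measured, P_theory)           # both approximately 0.5625
```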

2. Write down the DSB-SC-AM equation and explain each term with the help of frequency
spectrum.

Double sideband suppressed carrier AM:

(i) Two important parameters of a communication system are the transmitted power and the bandwidth; economy in both is highly desirable.
(ii) In the AM-with-carrier scheme there is wastage of transmitted power, so the carrier is suppressed, because it does not contain any useful information. This scheme is called double sideband suppressed carrier amplitude modulation (DSB-SC-AM). It contains only the LSB and USB terms, so the transmission bandwidth is twice the frequency of the message signal. Let the modulating signal be Vm(t) = Vm sin ωm t ...(1) and the carrier signal be Vc(t) = Vc sin ωc t ...(2).

Multiplying the carrier and the message signal gives the DSB-SC-AM signal:

V(t)DSB-SC = Vm(t) · Vc(t)   ...(3)

Therefore

V(t)DSB-SC = Vm sin ωm t · Vc sin ωc t
           = Vm Vc sin ωm t · sin ωc t
V(t)DSB-SC = (Vm Vc / 2)[cos(ωc − ωm)t − cos(ωc + ωm)t]   ...(4)

In this case the product of Vc(t) and Vm(t) produces the DSB-SC-AM signal; thus a product modulator is required to generate DSB-SC signals. We know that

V(t)AM = Vc sin ωc t + (ma Vc / 2)[cos(ωc − ωm)t − cos(ωc + ωm)t]   ...(5)

When equation (5) is compared with equation (4), the unmodulated carrier term Vc sin ωc t is missing and only the two sidebands are present. Hence equation (4) is called DSB-SC-AM.
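A product modulator is easy to imitate numerically: multiplying the message by the carrier, as in equation (3), yields a spectrum containing only the two sidebands and no carrier line. The sketch below (arbitrary example frequencies) shows this with an FFT.

```python
import numpy as np

fc, fm, fs = 10e3, 1e3, 100e3          # example frequencies in Hz (assumed)
Vm, Vc = 1.0, 1.0
t = np.arange(0, 0.1, 1 / fs)

# Product modulator: v_DSB-SC(t) = Vm sin(wm t) * Vc sin(wc t)
v_dsb = Vm * np.sin(2 * np.pi * fm * t) * Vc * np.sin(2 * np.pi * fc * t)

spectrum = np.abs(np.fft.rfft(v_dsb)) / len(t)
freqs = np.fft.rfftfreq(len(t), 1 / fs)

# The two largest spectral lines sit at fc - fm and fc + fm; nothing at fc.
peaks = sorted(float(f) for f in freqs[np.argsort(spectrum)[-2:]])
print(peaks)                            # [9000.0, 11000.0]
```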
3. Derive an expression for SSB-SC.

In AM with carrier, both the transmitted power and the bandwidth are wasted. Hence the DSB-SC-AM scheme has been introduced, in which power is saved by suppressing the carrier component, but the bandwidth remains the same (i.e., B.W. = 2fm).
A further saving of power is possible by eliminating one sideband in addition to the carrier component, because the USB and LSB are uniquely related by symmetry about the carrier frequency. So either sideband is enough for transmitting as well as recovering the useful message.
In addition, the transmission bandwidth can be cut in half if one sideband is suppressed along with the carrier. This scheme is known as SSB-SC-AM. The block diagram of SSB-SC-AM is shown in the figure.

The SSB-SC-AM signal can be obtained as follows.
In order to suppress one of the sidebands, the input signals fed to modulator 1 are 90° out of phase with the signals fed to modulator 2.

Let V1(t) = Vm sin(ωm t + 90°) · Vc sin(ωc t + 90°)
          = Vm cos ωm t · Vc cos ωc t
and V2(t) = Vm sin ωm t · Vc sin ωc t

Therefore

V(t)SSB = V1(t) + V2(t)
        = Vc Vm (sin ωm t · sin ωc t + cos ωm t · cos ωc t)

We know that sin A sin B + cos A cos B = cos(A − B). Hence

V(t)SSB = Vm Vc cos(ωc − ωm)t   ...(1)

We know that for DSB-SC-AM

VDSB(t) = (Vm Vc / 2)[cos(ωc − ωm)t − cos(ωc + ωm)t]   ...(2)

When comparing equations (1) and (2), one of the sidebands has been suppressed. Hence this scheme is known as SSB-SC-AM.

Frequency Spectrum of SSB-SC-AM

The frequency spectrum of SSB-SC-AM is shown in figure (b). It shows that only one sideband signal is present; the carrier and the other (upper) sideband are suppressed. Thus the required bandwidth reduces from 2fm to fm, i.e., the bandwidth requirement is reduced to half compared to the AM and DSB-SC signals.
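The phasing method described above can also be verified numerically: forming the two products Vm cos ωm t · Vc cos ωc t and Vm sin ωm t · Vc sin ωc t and adding them leaves only the lower sideband at fc − fm. The frequencies below are arbitrary example values.

```python
import numpy as np

fc, fm, fs = 10e3, 1e3, 100e3           # example frequencies in Hz (assumed)
Vm, Vc = 1.0, 1.0
t = np.arange(0, 0.1, 1 / fs)

v1 = Vm * np.cos(2 * np.pi * fm * t) * Vc * np.cos(2 * np.pi * fc * t)
v2 = Vm * np.sin(2 * np.pi * fm * t) * Vc * np.sin(2 * np.pi * fc * t)
v_ssb = v1 + v2                         # = Vm Vc cos((wc - wm) t), as in equation (1)

spectrum = np.abs(np.fft.rfft(v_ssb)) / len(t)
freqs = np.fft.rfftfreq(len(t), 1 / fs)
print(float(freqs[np.argmax(spectrum)]))   # 9000.0 -> only the lower sideband remains
```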

4. Explain VSB modulation.

(i) Let us consider a modulating signal of very large bandwidth having very low frequency components along with the rest of the signal.
(ii) These components give rise to sidebands very close to the carrier frequency.
(iii) The low video frequencies contain the most important information of the picture, and any effort to completely suppress the LSB would result in phase distortion at these frequencies.
(iv) Therefore a compromise has been made: only part of the LSB is suppressed, and the radiated signal consists of the full upper sideband together with the carrier and a vestige of the (partially suppressed) LSB. This pattern of modulation is known as vestigial sideband modulation.
(v) A VSB-AM system is a compromise between DSB-SC-AM and SSB-SC-AM.
(vi) VSB signals are fairly easy to generate and, at the same time, their bandwidth is only slightly greater than that of SSB-SC-AM but less than that of DSB-SC-AM.
(vii) VSB modulation is derived by filtering DSB-SC-AM or AM with carrier.
(viii) The VSB modulation system is shown below.

[Block diagram: Message signal and carrier signal → product modulator → DSB-SC-AM → VSB filter → VSB-AM.]

An important and essential requirement of VSB filter transfer function HVSB (f) is that it
must have odd symmetry about fc and relative amplitude response of 0.5 at fc.

5. Explain Generation Methods of AM.

(i) Square law modulator:

Construction:
A square law modulator requires the carrier and the modulating signal to be added to obtain AM with carrier; thus a square law modulator has three features:

1. A summer to add the carrier and the message.
2. A non-linear (active) element.
3. A band pass filter.

[Block diagram: Modulating signal and carrier signal → summer → non-linear element → band pass filter → modulated signal.]

The operation of the diode square law modulator is as follows.

The message signal and carrier signal applied at the input are superimposed on each other, making the diode more forward biased during the positive half cycle of the message signal and less forward biased during its negative half cycle.

Thus the magnitude of the carrier component is greater during the positive half cycle of the modulating voltage and smaller during the negative half cycle, as shown in the figure.

6. Comparison of three methods of generation of SSB.

1. Filter method:

a. The sideband filter helps to attenuate the carrier and provides a safety feature which is absent in the other two systems.

b. The bandwidth is flat and wide.

c. The drawback is that it cannot be used directly at high radio frequencies; for high or low radio frequencies it needs balanced mixers in conjunction with extremely stable crystal oscillators.

2. Phase shift method:

Advantages:
a. It can switch from one sideband to the other.

b. It has the ability to generate SSB at any frequency, thereby eliminating the need for frequency conversion.

Disadvantages:

a. It needs a critical phase shift network.

b. The system needs two balanced modulators which must both give exactly the same output, or cancellation of the unwanted sideband will be incomplete.

c. The circuit layout is quite critical.

3. Modified phase shift method:

Advantages:

1. It does not require a side band filter or any wide band audio phase shift network.
2. Correct output may be maintained without use of critical parts or adjustments.
3. Low frequency signals are also used.
4. side bands may be easily switched.

Disadvantages:

1. System is extremely complex.


2. D.C coupling is required to avoid the loss of signal components close to AF and it produces
whistles at the output.

7. Derive an expression for a single tone FM signal and draw its frequency spectrum.

Let the message signal be Vm(t) = Vm cos ωm t   ...[1]

and the carrier signal be Vc(t) = Vc sin(ωc t + θ)   ...[2]

φ = (ωc t + θ) = total instantaneous phase angle of the carrier   ...[3]

so Vc(t) = Vc sin φ.

During the process of FM the frequency of the carrier signal is changed in accordance with the instantaneous amplitude of the message signal. Therefore the angular frequency of the carrier after modulation is

ωi = ωc + k·Vm(t)
   = ωc + k·Vm cos ωm t   ...[4]

To find the instantaneous phase angle of the modulated signal, integrate equation [4]:

φ1 = ∫ ωi dt = ∫ (ωc + k·Vm cos ωm t) dt = ωc t + (k·Vm/ωm) sin ωm t

The instantaneous value of the modulated signal is then given by

v(t)FM = Vc sin φ1 = Vc sin(ωc t + (k·Vm/ωm) sin ωm t)   ...[5]
v(t)FM = Vc sin(ωc t + mf sin ωm t)   ...[6]

where mf = k·Vm/ωm is the modulation index of FM.

The maximum value of the angular frequency is ωmax = ωc + k·Vm.
The minimum value of the angular frequency is ωmin = ωc − k·Vm.
The frequency deviation is ωd = k·Vm.

The modulation index of an FM system can therefore be defined as

mf = ωd/ωm = k·Vm/ωm

Equation [6] represents the frequency modulated signal:

V(t)FM = Vc sin(ωc t + mf sin ωm t)

[Figure: Graphical representation of the FM wave (message, carrier and FM waveforms versus time) and the FM spectrum, with the carrier line of amplitude Vc·J0(mf) and sideband lines of amplitude Vc·J1(mf), Vc·J2(mf), Vc·J3(mf), ... given by Bessel functions.]
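The Bessel-function line amplitudes Vc·Jn(mf) indicated in the figure above can be reproduced numerically. The sketch below (illustrative parameters only; it assumes SciPy is available for the Bessel function) synthesizes the single-tone FM wave of equation [6] and compares the measured spectral lines at fc + n·fm with |Vc·Jn(mf)|.

```python
import numpy as np
from scipy.special import jv            # Bessel function of the first kind

Vc, mf = 1.0, 2.0                       # carrier amplitude and modulation index (assumed)
fc, fm, fs = 10e3, 1e3, 200e3           # example frequencies in Hz (assumed)
t = np.arange(0, 1.0, 1 / fs)           # 1 s of signal -> 1 Hz FFT resolution

v_fm = Vc * np.sin(2 * np.pi * fc * t + mf * np.sin(2 * np.pi * fm * t))

spec = np.abs(np.fft.rfft(v_fm)) * 2 / len(t)
freqs = np.fft.rfftfreq(len(t), 1 / fs)

for n in range(4):
    line = spec[np.argmin(np.abs(freqs - (fc + n * fm)))]
    print(n, round(float(line), 3), round(abs(Vc * jv(n, mf)), 3))
# Each measured line amplitude matches |Vc * Jn(mf)| for n = 0, 1, 2, 3.
```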

8. Derive an expression for PM wave.



Phase modulation is defined as the process by which changing the phase of carrier signal in
accordance with the instantaneous amplitude of the message signal.

Let the modulating signal be given by

Vm(t) = Vm cos ωm t

and the carrier signal by Vc(t) = Vc sin(ωc t + θ)

where θ is the phase angle of the carrier signal. It is changed in accordance with the amplitude of the message signal Vm(t):

θ = k·Vm(t) = k·Vm cos ωm t

After phase modulation the instantaneous voltage will be

Vpm(t) = Vc sin(ωc t + θ)
       = Vc sin(ωc t + k·Vm cos ωm t)
Vpm(t) = Vc sin(ωc t + mp cos ωm t)

where mp = k·Vm is the modulation index of phase modulation.

9. Explain how PM to FM obtained.


A frequency modulated wave can be obtained from a phase modulator. This is done by integrating the modulating signal before applying it to the phase modulator, as shown in the figure.

[Block diagram: Message signal Vm(t) → integrator → phase modulator (with carrier) → FM signal.]

Let Vm(t) = Vm cos ωm t.

After integration:

∫ Vm(t) dt = ∫ Vm cos ωm t dt = (Vm/ωm) sin ωm t

After phase modulation, the phase deviation is

θ = k ∫ Vm(t) dt = (k·Vm/ωm) sin ωm t

The instantaneous value of the modulated voltage is given by

Vfm(t) = Vc sin(ωc t + θ)
Vfm(t) = Vc sin(ωc t + (k·Vm/ωm) sin ωm t)
Vfm(t) = Vc sin(ωc t + mf sin ωm t)

where mf = Δf/fm = k·Vm/ωm.

This is the expression for an FM wave.

10. Explain how FM to PM obtained.

The PM wave can be obtained from FM by differentiating the modulating signal before applying it to the frequency modulator circuit, as shown in the figure.

[Block diagram: Message signal Vm(t) → differentiator → frequency modulator (with carrier) → PM signal.]
We know that Vm(t) = Vm cos ωm t.

After differentiation:

d/dt Vm(t) = −ωm Vm sin ωm t

After frequency modulation:

ωi = ωc + k · d/dt Vm(t)
   = ωc + k(−ωm Vm sin ωm t)
ωi = ωc − k ωm Vm sin ωm t

We know that the instantaneous phase angle of the frequency modulated signal is

θi = ∫ ωi dt = ∫ (ωc − k ωm Vm sin ωm t) dt
   = ωc t + (k ωm Vm/ωm) cos ωm t
θi = ωc t + k Vm cos ωm t

The instantaneous voltage after modulation is given by Vpm(t) = Vc sin θi:

Vpm(t) = Vc sin(ωc t + k Vm cos ωm t)
Vpm(t) = Vc sin(ωc t + mp cos ωm t)

This is the expression for a phase modulated wave. The processes of integration and differentiation are linear; therefore no new frequencies are generated.

11. Explain the indirect method of FM generation.

In this method, suitable frequency multiplying circuits are used to obtain the desired band
of FM. This method is called the Armstrong method of FM wave generation.

(i) The block diagram of Armstrong method of FM wave generation is shown in figure.
(ii) The chief advantage is that the carrier is not directly involved in producing the FM
signal, but it is injected.
(iii) It is possible to use a crystal oscillator to generate the carrier frequency.
(iv) The effect of the mixer is to change the centre frequency only, whereas the effect of the
frequency multiplier is to multiply the centre frequency and the frequency deviation equally.

[Figure: Phasor diagram of AM, showing the carrier Vc with the USB and LSB phasors.]

VAM(t) = Vc sin ωc t + (ma Vc/2)[cos(ωc − ωm)t − cos(ωc + ωm)t]

It will be noted that the resultant of the two sideband frequency vectors is always in phase with the unmodulated carrier, so that there is amplitude variation of the carrier but no phase variation.
6. Generation of FM through PM needs some phase shift between the modulated and
unmodulated carrier.
[Figure: Phasor diagram of PM, showing the sideband resultant in quadrature with the carrier Vc.]

7. To avoid this problem the carrier of the AM signal should be removed.

8. The output of the amplitude limiter is the phase modulated output.

9. Since FM is the requirement, the modulating voltage will have to be equalized before it enters
the balanced modulator.

10. To limit the amplitude variation, a simple RL equalizer is used. The crystal oscillator is used
at 1 MHz.

11. The effect of frequency changing is essential in the Armstrong method, but frequency changing
(mixing) does not affect the modulation index.

12. When the FM signal is mixed, the resulting output contains difference frequencies.

13. From points (11 and 12) we have seen that by frequency multiplication the modulation index is
affected in the same manner as the deviation, whereas in mixing the modulation index remains
unaffected.
12. Compare AM and FM (or) advantages of FM.

(1) In an AM system there are only three frequency components, and hence the BW is finite.
(2) An FM system has an infinite number of sidebands in addition to the single carrier; hence its
BW is theoretically infinite.
(3) In FM, sidebands at equal distances from fc have equal amplitudes.
(4) The amplitude of FM is independent of the modulation index.
(5) Whereas in AM it is dependent on the modulation index.
(6) In AM, an increased modulation index increases the sideband power and therefore increases
the total transmitted power.
(7) In FM, the total transmitted power always remains constant, but an increase in the
modulation index increases the bandwidth of the system.
(8) In an FM system all the transmitted power is useful, whereas in AM most of the transmitted
power is used by the carrier; hence power is wasted.
(9) Noise is much less in FM, hence there is an improvement in the S/N ratio.

There are two reasons for this:

(1) There is less noise at the frequencies where FM is used.

(2) FM receivers use amplitude limiters to remove the amplitude variations caused by noise.
(This feature does not exist in AM.)

In addition:

(1) Due to the frequency allocations by the CCIR there are guard bands between FM stations, so
that there is less adjacent channel interference than in AM.
(2) FM systems operate in the UHF and VHF ranges of frequencies, and at these frequencies the
space wave is used for propagation.

13. Explain the Generation of Narrow band FM.

Narrowband FM is FM for which the modulation index is small compared to one radian.
Let the message signal be represented as

Vm(t) = Vm cos ωm t   ...(1)

Let the carrier signal be given by

Vc(t) = Vc sin(ωc t + θ)   ...(2)

Differentiating the total phase (ωc t + θ) with respect to time gives ωc, the angular frequency of the carrier signal.

After frequency modulation:

ωi = ωc + K·Vm(t) = ωc + K·Vm cos ωm t

The frequency deviation is maximum when cos ωm t = ±1; hence ωi = ωc ± K·Vm.

The frequency deviation is proportional to the amplitude of the modulating voltage, hence it can be written as

2πΔf = K·Vm

ωi = ωc + 2πΔf cos ωm t

θi = ∫ ωi dt = ωc t + (Δf/fm) sin ωm t

Vfm(t) = Vc sin θi
       = Vc sin(ωc t + (Δf/fm) sin ωm t)
       = Vc sin(ωc t + mf sin ωm t)

For narrowband FM, assume the modulation index is small compared to one radian:

cos(mf sin ωm t) ≈ 1
and sin(mf sin ωm t) ≈ mf sin ωm t   (since sin θ ≈ θ when θ is small)

Vfm(t) ≈ Vc sin ωc t + Vc cos ωc t · (mf sin ωm t)

The equation defines the approximate form of narrow band FM.

This modulator involves splitting the carrier wave into two paths. One path is direct; the other path contains a 90° phase shift network and a product modulator, the combination of which generates a DSB-SC-AM signal. The difference between these two signals produces narrowband FM with some distortion. Ideally, an FM signal has a constant envelope.

14. Explain about the AM Super heterodyne Receiver.


The superheterodyne receiver is widely used in radio communication services because its gain, selectivity and sensitivity characteristics are superior to those of other configurations.

Heterodyne means to mix two frequencies together in a non-linear device, or to translate one frequency to another using non-linear mixing.

There are five sections in a superheterodyne receiver:

1. RF section
2. The mixer/converter section
3. IF section
4. The Audio Detector section.
5. The Audio Amplifier section.

RF Section: The RF section generally consists of a pre-selector and an amplifier. The primary
purpose of the pre-selector is to provide enough initial band limiting to prevent a specific
unwanted radio frequency, called the image frequency, from entering the receiver. It also reduces
the noise bandwidth of the receiver and provides the initial step toward reducing the overall
receiver bandwidth to the minimum bandwidth required to pass the information signal. The RF
amplifier determines the sensitivity and noise figure of the receiver.

Advantages of RF Amplifier in Receivers.

1. Greater gain, thus better sensitivity
2. Improved image frequency rejection
3. Better signal-to-noise ratio
4. Better selectivity
Mixer/ Converter Section:

It includes a radio frequency oscillator stage and a mixer/converter stage. The local
oscillator can be any of the oscillator circuits, depending on the stability and accuracy required;
its output is mixed with the RF signal to produce the IF. The most common intermediate frequency
used in AM broadcast receivers is 455 kHz.

IF Section:

It consists of a series of IF amplifiers and band pass filters and is often called the IF strip. Most
of the receiver's gain and selectivity is achieved in the IF section. The IF centre frequency and
bandwidth are constant for all stations. The IF is always lower than the RF because it is easier and
less expensive to construct high-gain amplifiers at lower frequencies.
Detector Section:

The purpose of the detector section is to convert the IF signals back to the original source
information. The detector is generally called an audio detector, or the second detector, in a
broadcast-band receiver. It can be as simple as a single diode or as complex as a phase-locked
loop or a balanced demodulator.

Audio Section:

The audio section comprises several cascaded audio amplifiers and one or more speakers. The
number of amplifiers used depends on the audio signal power required.

Receiver operation:

The received signals undergo two or more frequency translations.

1. RF is converted to IF.
2. IF is converted to the source information. For example, RF carriers for the commercial AM
broadcast band are frequencies between 535 kHz and 1605 kHz; the IF signals are frequencies
between 450 kHz and 460 kHz.

Frequency Conversion:

The frequencies are down-converted rather than up-converted. The output of the mixer
consists of an infinite number of harmonic and cross-product frequencies, which include the sum
and difference frequencies between the RF carrier and the local oscillator frequencies. The local
oscillator frequency is always above or below the desired RF carrier by an amount equal to the IF
centre frequency. Gang tuning means that the two adjustments are mechanically tied together so
that a single adjustment will change the centre frequency of the preselector and at the same time
change the local oscillator frequency. When the local oscillator frequency is tuned above the RF it
is called high-side injection; when the local oscillator is tuned below the RF it is called low-side
injection.
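The frequency relationships in the mixer stage can be summarized in a few lines. The 455 kHz IF and the broadcast-band example come from the text above; treating the image as fRF + 2·fIF is the standard result for high-side injection.

```python
def lo_frequency(f_rf, f_if=455e3, high_side=True):
    """Local oscillator frequency for a given RF carrier and IF."""
    return f_rf + f_if if high_side else f_rf - f_if

def image_frequency(f_rf, f_if=455e3):
    """Image frequency for high-side injection: f_image = f_rf + 2 * f_if."""
    return f_rf + 2 * f_if

f_rf = 1000e3                        # example AM broadcast carrier, 1000 kHz
f_lo = lo_frequency(f_rf)            # 1455 kHz
f_image = image_frequency(f_rf)      # 1910 kHz
print(f_lo - f_rf, f_image - f_lo)   # both differences equal the 455 kHz IF
```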

15. Explain about the Low level and high level transmitter.

[Block diagram of a low-level AM transmitter: RF carrier oscillator → buffer amplifier → carrier driver → modulator → linear power amplifiers → antenna coupling network; modulating signal source → pre-amplifier → modulating signal driver → modulator.]

For voice or music transmission, the source of the modulating signal is generally an acoustical transducer, such as a microphone, a magnetic tape, a CD, or a phonograph record. The pre-amplifier is typically a sensitive class A linear voltage amplifier with a high input impedance. The function of the pre-amplifier is to raise the amplitude of the source signal to a usable level while producing minimum non-linear distortion and adding as little thermal noise as possible. The driver for the modulating signal is also a linear amplifier that simply amplifies the information signal to a level adequate to drive the modulator. More than one driver amplifier may be required.

The RF carrier oscillator can be any of the oscillator circuits; crystal-controlled oscillators are the most common. The buffer amplifier is a low-gain, high-input-impedance linear amplifier. Its function is to isolate the oscillator from the high-power amplifiers. The buffer provides a relatively constant load to the oscillator, which helps to reduce the occurrence and magnitude of short-term frequency variations. Emitter followers or integrated-circuit op-amps are often used for the buffer. The modulator can use emitter or collector modulation. The intermediate and final power amplifiers are either class A or class B push-pull; this is required with low-level transmitters to maintain symmetry in the AM envelope. The antenna coupling network matches the output impedance of the final power amplifier to the transmission line and antenna.

Low level Transmitters are used predominantly for low power, low capacity system such as.

1) Wireless intercoms
2) Remote control units
3) Short range walkie-talkies
High-level Transmitters

[Block diagram of a high-level AM transmitter: modulating signal source → pre-amplifier → modulating signal driver → modulating signal power amplifier → modulator; RF carrier oscillator → buffer amplifier → carrier driver → carrier power amplifier → modulator (final power amplifier) → antenna coupling network.]

The modulating signal is processed in the same manner as in the low-level transmitter, except for the addition of a power amplifier. With a high-level transmitter the power of the modulating signal must be higher than in low-level transmitters. This is because the carrier is at full power at the point in the transmitter where modulation occurs, and a high-amplitude modulating signal is therefore required to produce 100% modulation.

The RF carrier oscillator, its associated buffer, and the carrier driver are also essentially the same circuits used in low-level transmitters. With high-level transmitters the RF carrier undergoes additional power amplification prior to the modulator stage, and the final power amplifier is also the modulator. The modulator is generally a drain-, plate- or collector-modulated class C amplifier.

With high-level transmitters the modulator circuit has three primary functions:

1. It provides the circuitry necessary for modulation to occur.
2. It is the final power amplifier.
3. It is a frequency up-converter.

An up-converter translates the low-frequency intelligence signals to radio-frequency signals
that can be efficiently radiated from an antenna and propagated through free space.

16. Explain about Frequency Modulators.

Varactor Diode Modulator:

Varactor diode is used to deviate the frequency of a crystal oscillator. R1 and R2 develop a
dc voltage that reverse biases varactor diode VD1 and determines the rest frequency of the
oscillator. The external modulating signal voltage adds to and subtracts from the dc-bias and thus
the frequency of oscillation. Positive alternations of the modulating signal increase the reverse
bias an VD1, which decrease its capacitance and increase the frequency of oscillation. Negative
alternations of the modulating signal decrease the frequency of oscillations . Varactor diode FM
modulators are popular because they are simple to use and reliable and have the stability of a
crystal oscillator. Because a crystal is used, the peak frequency deviation is limited to relatively
small valves. They are used primarily for low index applications such as two-way mobile radio.

A varactor diode is used to transform changes in the modulating signal amplitude into changes in frequency. The centre frequency of the oscillator is determined as follows:

fc = 1 / (2π√(LC)) Hz

With a modulating signal applied, the frequency is

f = 1 / (2π√(L(C + ΔC)))

where f is the new frequency of oscillation and ΔC is the change in varactor diode capacitance. The change in frequency is

Δf = fc − f
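The two expressions above are easy to evaluate. The component values in the sketch below are purely illustrative; they simply show how a small change ΔC in the varactor capacitance shifts the oscillation frequency away from the rest frequency.

```python
import math

def osc_freq(L, C):
    """Resonant frequency: f = 1 / (2 * pi * sqrt(L * C))."""
    return 1.0 / (2 * math.pi * math.sqrt(L * C))

L = 100e-6        # 100 uH tank inductance (assumed)
C = 100e-12       # 100 pF rest capacitance of the varactor (assumed)
dC = 2e-12        # 2 pF capacitance change produced by the modulating signal (assumed)

fc = osc_freq(L, C)            # centre (rest) frequency
f = osc_freq(L, C + dC)        # frequency with the modulating signal applied
print(round(fc), round(f), round(fc - f))   # the last value is the deviation delta_f
```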

FM Reactance Modulator:

This circuit configuration is called a reactance modulator because the JFET looks like a
variable reactance load to the LC tank circuit. The modulating signal varies the reactance of Q1,
which causes a corresponding change in the resonant frequency of the oscillator tank circuit.

R1, R2, R4 and RC provide the dc bias for Q1. RE is bypassed by Cc.

Assuming an ideal JFET (ig = 0):

vg = i·R, where i = v / (R − jXc) and Xc = 1/(ωC), so

vg = v·R / (R − jXc)

The JFET drain current is

id = gm·vg = gm·R·v / (R − jXc)

where gm is the transconductance of the JFET. The impedance between drain and ground is

zd = v / id = (R − jXc) / (gm·R) = (1/gm)(1 − jXc/R)

If R << Xc, then

zd ≈ −jXc / (gm·R) = 1 / (j·2πf·gm·R·C)

i.e., zd is equivalent to a variable capacitance Ceq = gm·R·C whose reactance is inversely proportional to R, C, gm and the frequency. When a modulating signal is applied to the bottom of R3, the gate-to-source voltage is varied accordingly, causing a proportional change in gm.
17. Explain briefly about the Armstrong method?

Indirect Method:

Because a crystal oscillator cannot be successfully frequency modulated, the direct


modulators have the disadvantage of being based on an LC oscillator which is not stable enough
for communications or broadcast purposes. In turn, this requires stabilization of the reactance
modulator with attendant circuit complexity. It is possible, however, to generate FM through
phase modulation, where a crystal oscillator can be used. Since this method is often used in
practice, it will now be described. It is called the Armstrong system after its inventor, and it
historically precedes the reactance modulator.
Figure: Block diagram of the Armstrong Frequency modulation system

The most convenient operating, frequency for the crystal oscillator and phase modulator is
in the vicinity of 1 MHz. Since transmitting frequencies are normally much higher than this,
frequency multiplication must be used, and so multipliers, are shown in the block diagram of
figure 5-15.

The block diagram of an Armstrong system is shown in the figure. The system terminates at
the output of the combining network; the remaining blocks are included to show how wideband
FM might be obtained. The effect of mixing on an FM signal is to change the centre frequency
only, whereas the effect of frequency multiplication is to multiply the centre frequency and the
deviation equally.

The vector diagrams of figure illustrate the principles of operation of this modulation
system. Diagram I shows an amplitude modulated signal. It will be noted that the resultant of the
two sideband frequency vectors is always in phase with the unmodulated carrier vector, so that
there is amplitude variation but no phase (or frequency) variation. Since it is phase change that is
needed here, some arrangement must be found which ensures that this resultant of the sideband
voltages is always out of phase (preferably by 90°) with the carrier vector. If an amplitude
modulated voltage is added to an unmodulated voltage of the same frequency and the two are
kept 90° apart in phase, as shown by diagram 2, some form of phase modulation will be achieved.
Unfortunately, it will be a very complex and nonlinear form having no practical use; however, it
does seem like a step in the right direction. Note that the two frequencies must be identical
(suggesting the one source for both) with a phase-shifting network in one of the channels.
Figure: Phase Modulation vector diagram

Diagram 3 shows the solution to the problem. The carrier of the amplitude modulated
signal has been removed so that only the two sidebands are added to the unmodulated voltage.
This has been accomplished by the balanced modulator, and the addition takes place in the
combining network. The resultant of the two sideband voltages will always be in quadrature with
the carrier voltage. As the modulation increases, so will the phase deviation, and hence phase
modulation has been obtained. The resultant voltage coming from the combining network is
phase-modulated, but there is also a little amplitude modulation present. The AM is no problem
since it can be removed with an amplitude limiter.

The output of the amplitude limiter, if it is used, is phase modulation. Since frequency
modulation is the requirement, the modulating voltage will have to be equalized before it enters
the balanced modulator (remember that PM may be changed into FM by prior bass boosting of the
modulation). A simple RL equalizer is shown in the figure. In FM broadcasting, ωL = R at 30 Hz.
As the frequency increases above that, the output of the equalizer falls at a rate of 6 dB/octave,
satisfying the requirements.

Effects of frequency changing on an FM signal: The previous section has shown that frequency
changing of an FM signal is essential in the Armstrong system. For convenience it is very often
used with the reactance modulator also. Investigation will show that the modulation index is
multiplied by the same factor as the centre frequency, whereas frequency translation (changing)
does not affect the modulation index.

Figure: RL equalizer
If the frequency-modulated signal fc ± δ is fed to a frequency doubler, the output signal will
contain twice each input frequency. For the extreme frequencies here, this will be 2fc − 2δ and
2fc + 2δ. The frequency deviation has quite clearly doubled to 2δ, with the result that the modulation
index has also doubled. In this fashion, both centre frequency and deviation may be increased by
the same factor or, if frequency division should be used, reduced by the same factor.

When a frequency modulated wave is mixed, the resulting output contains difference
frequencies (among others). The original signal might again be fc ± δ. When mixed with a
frequency f0, it will yield fc − f0 − δ and fc − f0 + δ as the two extreme frequencies in its output. It
is seen that the FM signal has been translated to a lower centre frequency fc − f0, but the maximum
deviation has remained ±δ. It is thus possible to reduce (or increase, if desired) the centre frequency
of an FM signal without affecting the maximum deviation.

Since the modulating frequency has obviously remained constant in the two cases treated,
the modulation index will be affected in the same manner as the deviation. It will thus be
multiplied together with the center frequency or unaffected by mixing. Also, it is possible to raise
the modulation index without affecting the center frequency by multiplying both by 9 and mixing
the result with a frequency eight times the original frequency. The difference will be equal to the
initial frequency, but the modulation index will have been multiplied ninefold.

Further consideration in the Armstrong System:

One of the characteristics of phase modulation is that the angle of phase deviation must be
proportional to the modulating voltage. A careful look at diagram 3 of the figure shows that this is
not so in this case, although this fact was carefully glossed over in the initial description. It is the
tangent of the angle of phase deviation that is proportional to the amplitude of the modulating
voltage, not the angle itself. The difficulty is not impossible to resolve. It is a trigonometric axiom
that for small angles the tangent of an angle is equal to the angle itself, measured in radians. The
angle of phase deviation is kept small, and the problem is solved, but at a price. The phase
deviation is indeed tiny, corresponding to a maximum frequency deviation of about 60 Hz at a
frequency of 1 MHz. An amplitude limiter is no longer really necessary since the amount of
amplitude modulation is now insignificant.

To achieve sufficient deviation for broadcast purposes, both mixing and multiplication are
necessary, whereas for narrowband FM, multiplication may be sufficient by itself. In the latter
case, operating frequencies are in the vicinity of 180 MHz. Therefore, starting with an initial fc = 1
MHz and δ = 60 Hz, it is possible to achieve a deviation of 10.8 kHz at 180 MHz, which is more than
adequate for FM mobile work.

The FM broadcasting station uses a higher maximum deviation with a lower centre
frequency, so that both mixing and multiplication must be used. For instance, if the starting
conditions are as above and 75 kHz deviation is required at 100 MHz, fc must be multiplied by
100/1 = 100 times, whereas δ must be increased 75,000/60 = 1250 times. The mixer and crystal
oscillator in the middle of the multiplier chain are used to reconcile the two multiplying factors.
After being raised to about 6 MHz, the frequency-modulated carrier is mixed with the output of a
crystal oscillator whose frequency is such as to produce a difference of 6 MHz/12.5. The centre
frequency has been reduced, but the deviation is left unaffected. Both can now be multiplied by
the same factor to give the desired centre frequency and maximum deviation.
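The multiplication and mixing arithmetic in the paragraph above can be checked directly. The sketch below reproduces the broadcast example: starting from fc = 1 MHz and δ = 60 Hz, the deviation must be multiplied 1250 times while the centre frequency must only rise 100 times, and the mixer absorbs the surplus factor.

```python
fc0, dev0 = 1e6, 60.0                # initial carrier and deviation (from the text)
fc_target, dev_target = 100e6, 75e3  # broadcast-FM requirement (from the text)

n_deviation = dev_target / dev0      # total multiplication needed for the deviation
n_carrier = fc_target / fc0          # net factor needed for the centre frequency
print(n_deviation, n_carrier)        # 1250.0 and 100.0

# Multiplication raises centre frequency and deviation together, while mixing
# shifts only the centre frequency, so the chain multiplies by 1250 in total
# and the mixer removes the surplus centre-frequency factor:
print(n_deviation / n_carrier)       # 12.5, matching the "6 MHz / 12.5" mixing step
```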

18. Explain the block diagram of TRF Receiver?

Tuned Radio- Frequency (TRF) Receiver

Until shortly before World War II, most radio receivers were of the TRF type, whose block
diagram is shown in the figure.

The TRF receiver is a simple, logical receiver. A person with just a little knowledge of
communications would probably expect all radio receivers to have this form. The virtues of this
type, which is now not used except as a fixed-frequency receiver in special applications, are its
simplicity and high sensitivity. It must also be mentioned that when the TRF receiver was first
introduced, it was a great improvement on the types used previously, mainly crystal,
regenerative and superregenerative receivers.

Figure: The TRF receiver


Two or perhaps three RF amplifiers, all tuning together, were employed to select and
amplify the incoming frequency and simultaneously to reject all others. After the signal was
amplified to a suitable level, it was demodulated (detected) and fed to the loudspeaker after being
passed through the appropriate audio amplifying stages. Such receivers were simple to design
and align at broadcast frequencies (535 to 1640 kHz), but they presented difficulties at higher
frequencies. This was mainly because of the instability associated with high gain being achieved
at one frequency by a multistage amplifier. If such an amplifier has a gain of 40,000, all that is
needed is 1/40,000 of the output of the last stage (positive feedback) to find itself back at the input
to the first stage, and oscillations will occur at the frequency at which the polarity of this spurious
feedback is positive. Such conditions are almost unavoidable at high frequencies and are certainly
not conducive to good receiver operation. In addition, the TRF receiver suffered from a variation
in bandwidth over the tuning range. It was unable to achieve sufficient selectivity at high
frequencies, partly as a result of the enforced use of single-tuned circuits. It was realized that
double-tuned amplifiers would naturally yield better selectivity, but such amplifiers had to be
tunable, and the difficulties of making several double-tuned amplifiers tune in unison were too
great.

Consider a tuned circuit required to have a bandwidth of 10 kHz at a frequency of 535 kHz.
The Q of this circuit must be Q = f/Δf = 535/10 = 53.5. At the other end of the broadcast band, i.e., at
1640 kHz, the inductive reactance (and therefore the Q) of the coil should in theory have
increased by a factor of 1640/535, to 164. In practice, however, various frequency-dependent
losses will prevent so large an increase. Thus the Q at 1640 kHz is unlikely to be in excess of
120, giving a bandwidth of Δf = 1640/120 = 13.7 kHz and ensuring that the receiver will pick up
adjacent stations as well as the one to which it is tuned. Consider again a TRF receiver required to
tune to 36.5 MHz, the upper end of the shortwave band. If the Q required of the RF circuit is
again calculated, still on the basis of a 10 kHz bandwidth, we have Q = 36,500/10 = 3650. It is
obvious that such a Q is impossible to obtain with ordinary tuned circuits.
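The Q figures quoted above follow directly from Q = f/Δf. The sketch below repeats the three calculations (535 kHz, 1640 kHz and 36.5 MHz) for the 10 kHz target bandwidth.

```python
def required_q(f_centre, bandwidth):
    """Q needed for a given centre frequency and bandwidth: Q = f / delta_f."""
    return f_centre / bandwidth

def resulting_bandwidth(f_centre, q):
    """Bandwidth obtained with a given Q: delta_f = f / Q."""
    return f_centre / q

print(required_q(535e3, 10e3))          # 53.5 at the bottom of the broadcast band
print(resulting_bandwidth(1640e3, 120)) # ~13.7 kHz when Q only reaches 120 at 1640 kHz
print(required_q(36.5e6, 10e3))         # 3650 at 36.5 MHz - impractical for a tuned circuit
```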

The problems of instability, insufficient adjacent-frequency rejection, and bandwidth
variation can all be solved by the use of a superheterodyne receiver, which introduces relatively
few problems of its own.

UNIT II

DIGITAL COMMUNICATION

PART A

1. List out the types of pulse modulation.

The types of pulse modulation of are

(i) Pulse width modulation (PWM)


(ii) Pulse position modulation (PPM)
(iii) Pulse amplitude modulation (PAM)
(iv) Pulse code modulation (PCM)

2. Define PCM.

The analog signal is sampled and converted to a fixed-length, serial binary number for
transmission. The binary number varies according to the amplitude of the analog signal.

3. Define aperture or acquisition time.

The FET acts as a simple switch. When turned on, it provides a low-impedance path to deposit
the analog sample voltage on capacitor C1. The time that Q1 is on is called the aperture or
acquisition time.

4. What is flat-top sampling?

Analog signal is sampled for a short period of time and the sample voltage is held at a
constant amplitude during the A/D conversion time, this is called flat top sampling.

5. What is Natural Sampling?

If the sample time is made longer and the analog-to-digital conversion takes place with a
changing analog signal, this is called natural sampling.

6. Define sampling rate.

The minimum sampling rate (fs) is the lowest rate that can be used for a given PCM system. For a
sample to be reproduced accurately, the analog signal (fa) must be sampled at least twice per cycle.
The minimum sampling rate is therefore equal to twice the highest audio input frequency.
7. Define coding efficiency.

Coding efficiency is the ratio of the minimum number of bits required to achieve a certain
dynamic range to the actual number of PCM bits used.

Coding efficiency = [minimum number of bits (including sign bit) / actual number of bits (including sign bit)] × 100

8. Define PSK.

Phase shift keying (PSK) is another form of angle-modulated, constant-amplitude digital
modulation. PSK is similar to conventional phase modulation except that with PSK the input
signal is a binary digital signal and a limited number of output phases are possible.
9. Define FSK.

FSK is a modulation process in which the frequency of the carrier is switched between either
of two possible values corresponding to 1 or 0, with fixed frequency levels set by the channel.

10. Differentiate PSK with FSK.

PSK:
(i) The phase of the carrier is keyed between two possible values corresponding to 0 and 1.
(ii) Fixed amplitude and fixed frequency are used, but the signal is 180 degrees out of phase for a transition between 1 and 0.
(iii) It is a special case of phase modulation.

FSK:
(i) The frequency of the carrier is keyed between two possible values corresponding to 0 and 1.
(ii) Amplitude is fixed, but different frequencies are used for 0 and 1.
(iii) It is a special case of frequency modulation.

11. What is the information capacity?

The information capacity of a communication system represents the number of


independent symbols that can be carried through the system in a given unit of time. The most
basic symbol is the binary digit; therefore, it is often convenient to express the information
capacity of a system in bits per second.

12. Give the equation for the Shannon limit for information capacity.

I = B log2(1 + S/N)
or, equivalently,
I = 3.32 B log10(1 + S/N)

where I = information capacity (bps), B = bandwidth (Hz), and S/N = signal-to-noise power ratio.
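As a worked example of the Shannon limit (the bandwidth and S/N below are illustrative values, not taken from the text), the sketch computes the information capacity both ways and shows that the log2 form and the 3.32·log10 approximation agree.

```python
import math

def shannon_capacity(bandwidth_hz, snr_linear):
    """Information capacity I = B * log2(1 + S/N), in bits per second."""
    return bandwidth_hz * math.log2(1 + snr_linear)

B = 3100.0      # example channel bandwidth in Hz (assumed)
snr = 1000.0    # signal-to-noise power ratio, i.e. 30 dB (assumed)

I_exact = shannon_capacity(B, snr)
I_approx = 3.32 * B * math.log10(1 + snr)
print(round(I_exact), round(I_approx))   # both roughly 30,900 bps
```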

13. What is Sampling Theorem?

If the sampling rate in any pulse modulation system exceeds twice the maximum signal
frequency, the original signal can be reconstructed in the receiver with minimal distortion.

14. Define Pulse amplitude modulation?

It is a pulse modulation system in which the signal is sampled at regular intervals ,and each
sample is made proportional to the amplitude of the signal at the instant of sampling.

15. Define Pulse width modulation?

A fixed amplitude and starting time of each pulse, but the width of each pulse is made
proportional to the amplitude of the signal at the instant.

16. Define pulse position modulation?

The amplitude and width of the pulses are kept constant in this system, while the position of
each pulse, in relation to the position of a recurrent reference pulse, is varied according to each
instantaneous sampled value of the modulating wave.

17. Define Delta modulation?

A pulse modulation technique in which a continuous signal is converted into a binary pulse
patterns, for transmission through low quality channels. A technique that is used to sample voice
waves and convert them into digital code. Delta modulation typically samples the wave 32,000
times/ sec, but generates only one bit per sample.

18. Define differential Pulse Code modulation (DPCM)

DPCM is pulse code modulation in which an analog signal is sampled and the difference between
the actual value of each sample and its predicted value, derived from the previous sample or
samples, is quantized and converted, by encoding, to a digital signal.

19. Define OOK.

ON-OFF keying is a binary form of amplitude modulation in which one of the states of the
modulated wave is the absence of energy in the keying interval.

20. Define PSK?

Phase shift keying is a form of phase modulation in which the modulating function shifts
the instantaneous phase of the modulated wave between predetermined discrete values.
21. Define FSK?

Frequency shift keying is a form of frequency modulation used especially in telegraph,
data, and facsimile transmission, in which the modulating wave shifts the output frequency
between predetermined values corresponding to the frequencies of correlated sources.

22. Define ASK?

Amplitude shift keying is a method of transmitting binary coded messages in which a
sinusoidal carrier is pulsed so that one of the binary states is represented by the presence of the
carrier while the other is represented by its absence.

23. Define QAM?

Quadrature amplitude modulation is both an analog and a digital modulation scheme. It
conveys two analog message signals, or two digital bit streams, by changing the amplitudes of
two carrier waves, using the amplitude shift keying (ASK) digital modulation scheme or the
amplitude modulation (AM) analog modulation scheme.

24. Define MSK?

Minimum shift keying is a form of continuous-phase FSK in which the data are encoded as
bits alternating between quadrature components, with the Q component delayed by half the
symbol period.

25. Define Gaussian minimum Shift keying?

It is a continuous-phase frequency shift keying modulation scheme. It is similar to standard
minimum shift keying (MSK); however, the digital data stream is first shaped with a Gaussian
filter before being applied to the frequency modulator.

26. Define BPSK?

It is the simplest form of phase shift keying. It uses two phases separated by 180°, and is
therefore called binary phase shift keying.
PART B

1. Explain about the pulse code modulation.

Pulse code modulation (PCM) is the only one of the digitally encoded pulse modulation
techniques previously mentioned that is used in a digital transmission system. With PCM, the
pulses are of fixed length and fixed amplitude. PCM is a binary system; a pulse or lack of a pulse
within a prescribed time slot represents either a logic 1 or a logic 0 condition. With PWM, PPM, or
PAM, a single pulse does not represent a single binary digit (bit).

Figure shows a simplified block diagram of a single-channel, simplex (one-way-only)
PCM system. The bandpass filter limits the input analog signal to the standard voice-band
frequency range, 300 Hz to 3000 Hz. The sample-and-hold circuit periodically samples the analog
input and converts those samples to a multilevel PAM signal. The analog-to-digital converter
(ADC) converts the PAM samples to a serial binary data stream for transmission. The transmission
medium is a metallic wire or optical fiber.

At the receive end, the digital-to-analog converter (DAC) converts the serial binary data
stream to a multilevel PAM signal. The hold circuit and low-pass filter convert the PAM signal
back to its original analog form. An integrated circuit that performs the PCM encoding and
decoding is called a codec (coder/decoder).

The purpose of the sample and hold circuit is to sample periodically the continually
changing analog input signal and convert the samples to a series of constant amplitude PAM
levels. For the ADC to accurately convert a signal to a digital code, the signal must be relatively
constant. If not, before the ADC can complete the conversion, the input would change. Therefore,
the ADC continually would be attempting to follow the analog changes and never stabilize on any
PCM code.

Figure shows the schematic diagram of a sample-and-hold circuit. The FET acts as a
simple switch. When turned on, it provides a low-impedance path to deposit the analog
sample voltage on capacitor C1. The time that Q1 is on is called the aperture or acquisition time.
Essentially, C1 is the hold circuit. When Q1 is off, the capacitor does not have a complete path to
discharge through and, therefore, stores the sampled voltage. The storage time of the capacitor is
called the A/D conversion time, because it is during this time that the ADC converts the sample
voltage to a digital code. The acquisition time should be very short; this assures that a minimum
change occurs in the analog signal while it is being deposited across C1. If the input to the ADC
changes while it is performing the conversion, distortion results. This distortion is called
aperture distortion. Thus, by having a short aperture time and keeping the input to the ADC
relatively constant, the sample-and-hold circuit reduces aperture distortion. If the analog signal
is sampled for a short period of time and the sample voltage is held at a constant amplitude
during the A/D conversion time, this is called flat-top sampling.

If the sample time is made longer and the analog-to-digital conversion takes place with a
changing analog signal, this is called natural sampling. Natural sampling introduces more
aperture distortion than flat-top sampling and requires a faster A/D converter.

Figure shows the input analog signal, the sampling pulse, and the waveform developed
across C1. It is important that the output impedance of voltage follower Z1 and the on resistance of
Q1 be as small as possible. This assures that the RC charging time constant of the capacitor is kept
very short, allowing the capacitor to charge or discharge rapidly during the short acquisition time.
The rapid drop in the capacitor voltage immediately following each sample pulse is due to the
redistribution of the charge across C1: the interelectrode capacitance between the gate and drain
of the FET is placed in series with C1 when the FET is off, thus acting as a capacitive voltage-divider
network. Also, note the gradual discharge across the capacitor during the conversion time; this is
caused by the capacitor discharging through its own leakage resistance and the input impedance of
voltage follower Z2. It is therefore important that the input impedance of Z2 and the leakage
resistance of C1 be as high as possible. Essentially, voltage followers Z1 and Z2 isolate the
sample-and-hold circuit (Q1 and C1) from the input and output circuitry.

Sampling Rate

The Nyquist sampling theorem establishes the minimum sampling rate (fs) that can be used
for a given PCM system. For a sample to be reproduced accurately at the receiver, each cycle of the
analog input signal (fa) must be sampled at least twice. Consequently, the minimum sampling rate
is equal to twice the highest audio input frequency. If fs is less than two times fa, distortion will
result. This distortion is called aliasing or foldover distortion. Mathematically, the minimum
Nyquist sample rate is

fs ≥ 2fa

where fs = minimum Nyquist sample rate (Hz)
fa = highest frequency to be sampled (Hz)

Essentially, a sample-and-hold circuit is an AM modulator. The switch is a nonlinear
device that has two inputs: the sampling pulse and the input analog signal. Consequently,
nonlinear mixing (heterodyning) occurs between these two signals. Figure shows the frequency-domain
representation of the output spectrum from a sample-and-hold circuit. The output
includes the two original inputs (the audio and the fundamental frequency of the sampling pulse),
their sum and difference frequencies (fs ± fa), all the harmonics of fs and fa (2fs, 2fa, 3fs, 3fa, etc.), and
their associated cross products (2fs ± fa, 3fs ± fa, etc.).

Because the sampling pulse is a repetitive waveform, it is made up of a series of
harmonically related sine waves. Each of these sine waves is amplitude modulated by the analog
signal and produces sum and difference frequencies symmetrical around each harmonic of fs. Each
sum and difference frequency generated is separated from its respective center frequency by fa.
As long as fs is at least twice fa, none of the side frequencies from one harmonic will spill into the
sidebands of another harmonic, and aliasing does not occur. Figure shows the results when an
analog input frequency greater than fs/2 modulates fs: the side frequencies from one harmonic fold
over into the sideband of another harmonic. The frequency that folds over is an alias of the input
signal (hence the names aliasing or foldover distortion). If an alias side frequency from the first
harmonic folds over into the audio spectrum, it cannot be removed through filtering or any other
technique.
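The fold-over effect can be checked numerically. A minimal sketch (the 8 kHz sampling rate and 5 kHz tone are assumed values): sampling a tone above fs/2 produces exactly the same sample sequence as a tone at the alias frequency fs − fa.

```python
import numpy as np

fs = 8000            # sampling rate (Hz)
fa = 5000            # analog input frequency above fs/2, so it will alias
t = np.arange(0, 0.01, 1 / fs)              # sample instants

samples = np.sin(2 * np.pi * fa * t)        # samples of the 5 kHz tone

alias = fs - fa                             # 3000 Hz falls inside the audio band
reference = np.sin(2 * np.pi * alias * t)

# The sampled 5 kHz tone is indistinguishable from a 3 kHz tone (apart from sign):
print(np.allclose(samples, -reference))     # True
```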

The input bandpass filter shown in the figure is called an antialiasing or antifoldover filter.
Its upper cutoff frequency is chosen such that no frequency greater than one-half of the
sampling rate is allowed to enter the sample-and-hold circuit, thus eliminating the possibility of
foldover distortion occurring.
With PCM, the analog input signal is sampled, then converted to a serial binary code. The
binary code is transmitted to the receiver, where it is converted back to the original analog signal.
The binary codes used for PCM are n-bit codes, where n may be any positive integer greater than
1. The codes currently used for PCM are sign-magnitude codes, where the most significant bit
(MSB) is the sign bit and the remaining bits are used for magnitude. Table shows an n-bit PCM
code where n equals 3. The most significant bit is used to represent the sign of the sample (logic
1 = positive and logic 0 = negative). The two remaining bits represent the magnitude. With 2
magnitude bits, there are four codes possible for positive numbers and four possible for negative
numbers. Consequently, there is a total of eight possible codes (2³ = 8).
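A minimal sketch of such a 3-bit sign-magnitude encoder. The uniform step size and ±1 V full-scale range are assumptions made only for illustration; the text's table defines the bit layout (MSB = sign, logic 1 = positive), not these particular voltages.

```python
def pcm_encode(sample, n_bits=3, v_max=1.0):
    """Encode one sample into an n-bit sign-magnitude PCM code string
    (MSB = sign bit, logic 1 = positive), using a simple uniform quantizer."""
    magnitude_bits = n_bits - 1
    levels = 2 ** magnitude_bits              # magnitude codes per polarity (4 here)
    step = v_max / levels
    sign = '1' if sample >= 0 else '0'
    code = min(int(abs(sample) / step), levels - 1)
    return sign + format(code, f'0{magnitude_bits}b')

for v in (+0.8, +0.3, -0.3, -0.9):
    print(f"{v:+.1f} V -> {pcm_encode(v)}")
# +0.8 V -> 111, +0.3 V -> 101, -0.3 V -> 001, -0.9 V -> 011
```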

2. Explain about frequency shift keying.

Frequency shift keying (FSK) is another relatively simple, low performance form of digital
modulation. Binary FSK is a form of constant amplitude angle modulation similar to
conventional frequency modulation except that the modulating signal is a binary pulse stream
that varies between two discrete voltage levels rather than a continuously changing analog
waveform. The general expression for a binary FSK signal is

v(t) = Vc cos{[ωc + (Δω/2) m(t)] t}   (12-4)

where
v(t) = binary FSK waveform
Vc = peak unmodulated carrier amplitude
ωc = radian carrier frequency
m(t) = binary digital modulating signal
Δω = change in radian output frequency

From Equation 12-4 it can be seen that with binary FSK the carrier amplitude Vc remains
constant with modulation. However, the output carrier radian frequency shifts by an amount
equal to Δω/2. The frequency shift (Δω/2) is proportional to the amplitude and polarity of the
binary input signal. For example, a binary 1 could be +1 volt and a binary 0 could be −1 volt,
producing frequency shifts of +Δω/2 and −Δω/2, respectively. In addition, the rate at which the
carrier frequency shifts is equal to the rate of change of the binary input signal m(t) (that is, the
input bit rate). Thus, the output carrier frequency deviates (shifts) between ωc + Δω/2 and
ωc − Δω/2 at a rate equal to fm.
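A minimal sketch of this behaviour, generated the way a VCO-based modulator would (all numeric values are assumptions for illustration only):

```python
import numpy as np

f_mark, f_space = 1300.0, 1100.0     # mark (logic 1) and space (logic 0) frequencies, Hz
fb = 100.0                           # input bit rate, bps
fs = 48_000                          # simulation sample rate, Hz

bits = np.array([1, 0, 1, 1, 0])
samples_per_bit = int(fs / fb)

# The instantaneous frequency follows the binary input; integrating frequency
# to phase keeps the constant-amplitude output phase-continuous, like a VCO.
inst_freq = np.repeat(np.where(bits == 1, f_mark, f_space), samples_per_bit)
phase = 2 * np.pi * np.cumsum(inst_freq) / fs
fsk_wave = np.cos(phase)             # constant-amplitude binary FSK output
```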

FSK Transmitter

With binary FSK, the center or carrier frequency is shifted (deviated) by the binary input
data. Consequently, the output of a binary FSK modulator is a step function in the time domain.
As the binary input signal changes from a logic 0 to a logic 1, and vice versa, the FSK output shifts
between two frequencies: a mark or logic 1 frequency and a space or logic 0 frequency. With binary
FSK, there is a change in the output frequency each time the logic condition of the binary input
signal changes. Consequently, the output rate of change is equal to the input rate of change. In
digital modulation, the rate of change at the input to the modulator is called the bit rate and has
the units of bits per second (bps). The measure of the rate of change at the output of the modulator
is called baud and is equal to the reciprocal of the time of one output signaling element. In essence,
baud is the line speed in symbols per second. In binary FSK, the input and output rates of change
are equal; therefore, the bit rate and baud rate are equal. A simple binary FSK transmitter is shown
in Figure.

Bandwidth Considerations of FSK

As with all electronic communication systems, bandwidth is one of the primary
considerations when designing a binary FSK transmitter. FSK is similar to conventional frequency
modulation and so can be described in a similar manner.

Figure shows a binary FSK modulator, which is very similar to a conventional FM
modulator and is very often a voltage-controlled oscillator (VCO). The fastest input rate of
change occurs when the binary input is a series of alternating 1s and 0s: namely, a square wave.
Consequently, if only the fundamental frequency of the input is considered, the highest
modulating frequency is equal to one-half of the input bit rate.

The rest frequency of the VCO is chosen such that it falls halfway between the mark and
space frequencies. A logic 1 condition at the input shifts the VCO from its rest frequency to the
mark frequency. Consequently, as the input binary signal changes from a logic 1 to a logic 0, and
vice versa, the VCO output frequency shifts or deviates back and forth between the mark and
space frequencies.

In a binary FSK modulator, Δf is the peak frequency deviation of the carrier and is equal to
the difference between the rest frequency and either the mark or the space frequency. The peak
frequency deviation depends on the amplitude of the modulating signal. In a binary digital signal,
all logic 1s have the same voltage and all logic 0s have the same voltage; consequently, the
frequency deviation is constant and always at its maximum value.

The output of an FSK modulator is related to the binary input as shown in figure, where a logic
0 corresponds to the space frequency fs, a logic 1 corresponds to the mark frequency fm, and fc is the
carrier frequency. The required peak frequency deviation Δf is given as

Δf = |fm − fs| / 2 = 1/(4tb)   (Hz, minimum)   (12-5)

where tb is the time of one bit in seconds, and fm and fs are expressed as

fm = fc − Δf = fc − 1/(4tb)   (12-6a)

fs = fc + Δf = fc + 1/(4tb)   (12-6b)

From figure it can be seen that FSK consists of two pulsed sinusoidal waves of frequencies fm
and fs. Pulsed sinusoidal waves have frequency spectra that are sin x/x functions. Consequently,
we can represent the output spectrum for an FSK signal as shown in Figure. Assuming that the
zero crossings contain the bulk of the energy, the bandwidth for FSK can be approximated as

BW ≈ |fs − fm| + 2/tb   (12-7)

FSK Receiver

FSK demodulation is quite simple with a circuit such as the one shown in Figure. The FSK
input signal is applied to the inputs of both bandpass filters (BPFs). The respective filter passes
only the mark or only the space frequency on to its respective envelope detector. The envelope
detectors, in turn, indicate the total power in each passband, and the comparator responds to the
larger of the two powers. This type of FSK detection is referred to as noncoherent detection; there
is no frequency involved in the demodulation process that is synchronized either in phase,
frequency, or both with the incoming FSK signal.
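A rough software analogue of this receiver, reusing the fsk_wave, bit rate, and mark/space frequencies assumed in the earlier modulator sketch. The per-bit tone correlations below stand in for the bandpass-filter/envelope-detector pair, and the comparison replaces the comparator; it is an illustration, not the circuit described in the text.

```python
import numpy as np

def fsk_demod_noncoherent(signal, fs, fb, f_mark, f_space):
    """For each bit interval, compare the signal energy near the mark and
    space frequencies and decide 1 or 0 (phase is ignored, i.e. noncoherent)."""
    n = int(fs / fb)                              # samples per bit
    t = np.arange(n) / fs
    bits = []
    for k in range(len(signal) // n):
        chunk = signal[k * n:(k + 1) * n]
        e_mark = abs(np.dot(chunk, np.exp(-2j * np.pi * f_mark * t)))
        e_space = abs(np.dot(chunk, np.exp(-2j * np.pi * f_space * t)))
        bits.append(1 if e_mark > e_space else 0)
    return np.array(bits)

# With fsk_wave from the earlier sketch:
# fsk_demod_noncoherent(fsk_wave, 48_000, 100.0, 1300.0, 1100.0)  -> [1 0 1 1 0]
```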

Figure shows the block diagram for a coherent FSK receiver. The incoming FSK signal is
multiplied by a recovered carrier signal that has the exact same frequency and phase as the
transmitter reference. However, the two transmitted frequencies (the mark and space frequencies)
are not generally continuous; it is not practical to reproduce a local reference that is coherent with
both of them. Consequently, coherent FSK detection is seldom used.

The most common circuit used for demodulating binary FSK signals is the phase-locked
loop (PLL), which is shown in block diagram form in Figure. A PLL-FSK demodulator works
similarly to a PLL-FM demodulator. As the input to the PLL shifts between the mark and space
frequencies, the dc error voltage at the output of the phase comparator follows the frequency shift.
Because there are only two input frequencies (mark and space), there are also only two
output error voltages. One represents a logic 1 and the other a logic 0; therefore, the output is a
two-level (binary) representation of the FSK input. Generally, the natural frequency of the PLL is
made equal to the center frequency of the FSK modulator. As a result, the changes in the dc error
voltage follow the changes in the analog input frequency and are symmetrical around 0 V.

Binary FSK has a poorer error performance than PSK or QAM and, consequently, is seldom
used for high-performance digital radio systems. Its use is restricted to low-performance,
low-cost, asynchronous data modems that are used for data communication over analog voice-band
telephone lines.

Minimum Shift-Keying FSK

Minimum shift-keying FSK (MSK) is a form of continuous-phase frequency shift keying
(CPFSK). Essentially, MSK is binary FSK except that the mark and space frequencies are
synchronized with the input binary bit rate. Synchronous simply means that there is a precise time
relationship between the two; it does not mean they are equal. With MSK, the mark and space
frequencies are selected such that they are separated from the center frequency by an exact odd
multiple of one-half of the bit rate [fm and fs = n(fb/2), where n = any odd integer]. This ensures
that there is a smooth phase transition in the analog output signal when it changes from a logic 1
to a logic 0, and vice versa. With conventional (non-continuous) binary FSK, there is an abrupt
phase discontinuity in the analog output signal when the input changes state; when this occurs,
the demodulator has trouble following the frequency shift, and consequently an error may occur.

Figure shows a continuous-phase MSK waveform. Notice that when the output frequency
changes, it is a smooth, continuous transition; consequently, there are no phase discontinuities.
MSK has a better bit error performance than conventional binary FSK for a given signal-to-noise
ratio. The disadvantage of MSK is that it requires synchronizing circuits and is, therefore, more
expensive to implement.

3. Explain about Phase Shift keying.

Phase Shift Keying

Phase shift keying (PSK) is another form of angle-modulated, constant-amplitude digital
modulation. PSK is similar to conventional phase modulation except that with PSK the input
signal is a binary digital signal and only a limited number of output phases are possible.
Binary Phase Shift Keying

With binary phase shift keying (BPSK), two output phases are possible for a single carrier
frequency (binary meaning 2). One output phase represents a logic 1 and the other a logic 0.
As the input digital signal changes state, the phase of the output carrier shifts between two angles
that are 180° out of phase. Other names for BPSK are phase reversal keying (PRK) and biphase
modulation. BPSK is a form of suppressed-carrier, square-wave modulation of a continuous wave
(CW) signal.

BPSK transmitter: Figure shows a simplified block diagram of a BPSK modulator. The balanced
modulator acts as a phase-reversing switch. Depending on the logic condition of the digital input,
the carrier is transferred to the output either in phase or 180° out of phase with the reference
carrier oscillator.

Figure shows the schematic diagram of a balanced ring modulator. The balanced modulator
has two inputs: a carrier that is in phase with the reference oscillator, and the binary digital data.
For the balanced modulator to operate properly, the digital input voltage must be much greater
than the peak carrier voltage. This ensures that the digital input controls the on/off state of diodes
D1 to D4. If the binary input is a logic 1 (positive voltage), diodes D1 and D2 are forward biased and
on, while diodes D3 and D4 are reverse biased and off (figure).

Figure: BPSK modulator block diagram (binary data in, balanced modulator, bandpass filter (BPF), analog PSK output; a reference carrier oscillator feeds the modulator).
With the polarities shown, the carrier voltage is developed across transformer T2 in phase
with the carrier voltage across T1. Consequently, the output signal is in phase with the reference
oscillator.

If the binary input is a logic 0 (negative voltage), diodes D1 and D2 are reverse biased and
off, while diodes D3 and D4 are forward biased and on (Figure). As a result, the carrier voltage is
developed across transformer T2 180° out of phase with the carrier voltage across T1.
Consequently, the output signal is 180° out of phase with the reference oscillator. Figure shows the
phasor diagram and constellation diagram for a BPSK modulator. A constellation diagram, which
is sometimes called a signal state-space diagram, is similar to a phasor diagram except that the
entire phasor is not drawn. In a constellation diagram, only the relative positions of the peaks of
the phasors are shown.

Bandwidth considerations of BPSK. A balanced modulator is a product modulator; the
output signal is the product of the two input signals. In a BPSK modulator, the carrier input signal
is multiplied by the binary data. If a logic 1 is +1 and a logic 0 is −1, the output is either
+1 sin ωct or −1 sin ωct; the first represents a signal that is in phase with the reference oscillator,
the latter a signal that is 180° out of phase with the reference oscillator. Each time the input logic
condition changes, the output phase changes. Consequently, for BPSK, the output rate of change
(baud) is equal to the input rate of change (bps), and the widest output bandwidth occurs when the
input binary data are an alternating 1/0 sequence. The fundamental frequency (fa) of an alternating
1/0 bit sequence is equal to one-half of the bit rate (fb/2). Mathematically, the output phase of a
BPSK modulator is
output = (sin ωat)(sin ωct)
       = ½ cos(ωc − ωa)t − ½ cos(ωc + ωa)t

where sin ωat is the fundamental frequency of the binary modulating signal and sin ωct is the
unmodulated carrier.

Consequently, the minimum double-sided Nyquist bandwidth (fN) is

(ωc + ωa) − (ωc − ωa) = 2ωa   (in hertz, 2fa)

and because fa = fb/2,

fN = 2(fb/2) = fb

Figure shows the output phase versus time relationship for a BPSK waveform. The
output spectrum from a BPSK modulator is simply a double-sideband, suppressed-carrier signal
where the upper and lower side frequencies are separated from the carrier frequency by a value
equal to one-half of the bit rate. Consequently, the minimum bandwidth (fN) required to pass the
worst-case BPSK output signal is equal to the input bit rate.

BPSK receiver: Figure shows the block diagram of a BPSK receiver. The input signal may be
+sin ωct or −sin ωct. The coherent carrier recovery circuit detects and regenerates a carrier signal
that is both frequency and phase coherent with the original transmit carrier. The balanced
modulator is a product detector; the output is the product of the two inputs (the BPSK signal and
the recovered carrier). The low-pass filter (LPF) separates the recovered binary data from the
complex demodulated signal. Mathematically, the demodulation process is as follows.

For a BPSK input signal of +sin ωct (logic 1), the output of the balanced modulator is

output = (sin ωct)(sin ωct) = sin²ωct   (12-9)

sin²ωct = ½(1 − cos 2ωct) = ½ − ½ cos 2ωct

The second term is filtered out, leaving

output = +½ V = logic 1

It can be seen that the output of the balanced modulator contains a positive voltage [+(1/2) V]
and a cosine wave at twice the carrier frequency (2ωc). The LPF has a cutoff frequency
much lower than 2ωc and thus blocks the second harmonic of the carrier and passes only the
positive constant component. A positive voltage represents a demodulated logic 1.
For a BPSK input signal of −sin ωct (logic 0), the output of the balanced modulator is

output = (−sin ωct)(sin ωct) = −sin²ωct   (12-10)

−sin²ωct = −½(1 − cos 2ωct) = −½ + ½ cos 2ωct

The second term is filtered out, leaving

output = −½ V = logic 0

The output of the balanced modulator contains a negative voltage [−(1/2) V] and a cosine
wave at twice the carrier frequency (2ωc). Again, the LPF blocks the second harmonic of the carrier
and passes only the negative constant component. A negative voltage represents a demodulated
logic 0.
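A minimal numerical sketch of this coherent detection (carrier, bit rate, and sample rate below are assumed values; the per-bit average stands in for the low-pass filter):

```python
import numpy as np

fc, fb, fs = 2000.0, 250.0, 32_000          # assumed carrier, bit rate, sample rate
bits = np.array([1, 0, 0, 1, 1])
n = int(fs / fb)                            # samples per bit

t = np.arange(bits.size * n) / fs
data = np.repeat(np.where(bits == 1, 1.0, -1.0), n)
bpsk = data * np.sin(2 * np.pi * fc * t)    # transmitted +sin(wc t) or -sin(wc t)

carrier = np.sin(2 * np.pi * fc * t)        # recovered coherent carrier
product = bpsk * carrier                    # = +/-1/2 -/+ 1/2 cos(2 wc t)

# Averaging over each bit interval removes the 2wc component (crude LPF)
detected = product.reshape(-1, n).mean(axis=1)
print(detected.round(2))                    # approximately +/-0.5
print((detected > 0).astype(int))           # recovered bits: [1 0 0 1 1]
```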

M-ary encoding. M-ary is a term derived from the word binary. M is simply a digit that
represents the number of conditions possible. The two digital modulation techniques discussed
thus far (binary FSK and BPSK) are binary systems; there are only two possible output conditions.
One represents a logic 1 and the other a logic 0; thus, they are M-ary systems where M = 2. With
digital modulation, it is very often advantageous to encode at a level higher than binary. For
example, a PSK system with four possible output phases is an M-ary system where M = 4. If there
were eight possible output phases, M = 8, and so on. Mathematically,

N = log2 M (12-11)

where N = number of bits
M = number of output conditions possible with N bits

For example, if 2 bits were allowed to enter a modulator before the output were allowed to
change,

2 = log₂ M and 2² = M; thus, M = 4

M = 4 indicates that with 2 bits, four different output conditions are possible. For N = 3,
M = 2³ = 8, and so on.

4. Explain about quadrature phase shift keying.

Quadrature Phase Shift Keying

Quadrature phase shift keying (QPSK), or quadrature PSK as it is sometimes called, is
another form of angle-modulated, constant-amplitude digital modulation. QPSK is an M-ary
encoding technique where M = 4 (hence the name quaternary, meaning 4). With QPSK, four
output phases are possible for a single carrier frequency. Because there are four different output
phases, there must be four different input conditions. Because the digital input to a QPSK
modulator is a binary (base 2) signal, to produce four different input conditions it takes more than
a single input bit. With 2 bits there are four possible conditions: 00, 01, 10, and 11. Therefore, with
QPSK the binary input data are combined into groups of 2 bits called dibits. Each dibit code
generates one of the four possible output phases. Therefore, for each 2-bit dibit clocked into the
modulator, a single output change occurs, and the rate of change at the output (baud rate) is
one-half of the input bit rate.

QPSK transmitter. A block diagram of a QPSK modulator is shown in Figure. Two bits (a dibit) are
clocked into the bit splitter. After both bits have been serially input, they are simultaneously
output in parallel. One bit is directed to the I channel and the other to the Q channel. The I bit
modulates a carrier that is in phase with the reference oscillator (hence the name I for in-phase
channel), and the Q bit modulates a carrier that is 90° out of phase, or in quadrature, with the
reference carrier (hence the name Q for quadrature channel).

It can be seen that once a dibit has been split into the I and Q channels, the operation is the
same as in a BPSK modulator. Essentially, a QPSK modulator is two BPSK modulators combined
in parallel. Again, for a logic 1 = +1 V and a logic 0 = −1 V, two phases are possible at the output of
the I balanced modulator (+sin ωct and −sin ωct), and two phases are possible at the output of the
Q balanced modulator (+cos ωct and −cos ωct). When the linear summer combines the two
quadrature (90° out of phase) signals, there are four possible resultant phasors given by these
expressions: +sin ωct + cos ωct, +sin ωct − cos ωct, −sin ωct + cos ωct, and −sin ωct − cos ωct.

For the remaining dibit codes (01, 10, and 11), the procedure is the same. The results are shown in
figure.

In figure it can be seen that with QPSK each of the four possible output phasors has exactly
the same amplitude. Therefore, the binary information must be encoded entirely in the phase of
the output signal. This constant-amplitude characteristic is the most important characteristic of
PSK that distinguishes it from QAM, which is explained later in this chapter. Also, from figure it
can be seen that the angular separation between any two adjacent phasors in QPSK is 90°.
Therefore, a QPSK signal can undergo almost a +45° or −45° shift in phase during transmission and
still retain the correct encoded information when demodulated at the receiver. Figure shows the
output phase versus time relationship for a QPSK modulator.

Binary input (Q I)    QPSK output phase
0 0                   −135°
0 1                   −45°
1 0                   +135°
1 1                   +45°
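A minimal sketch of the dibit-to-phase mapping in the table above, built both as a lookup and as the phasor sum of the I (±sin) and Q (±cos) balanced-modulator outputs; the function names are illustrative only:

```python
import numpy as np

PHASES = {(0, 0): -135, (0, 1): -45, (1, 0): +135, (1, 1): +45}   # (Q, I) -> degrees

def qpsk_phase_lookup(q_bit, i_bit):
    """Output phase for one dibit, straight from the table."""
    return PHASES[(q_bit, i_bit)]

def qpsk_phase_from_phasors(q_bit, i_bit):
    """Same mapping derived from the linear summer: i*sin(wc t) + q*cos(wc t)
    = R sin(wc t + phi), with phi measured relative to the reference sin(wc t)."""
    i = 1.0 if i_bit else -1.0      # logic 1 -> +1 V, logic 0 -> -1 V
    q = 1.0 if q_bit else -1.0
    return np.degrees(np.arctan2(q, i))

for dibit in PHASES:
    assert qpsk_phase_lookup(*dibit) == round(qpsk_phase_from_phasors(*dibit))
```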
Bandwidth considerations of QPSK. With QPSK, because the input data are divided into
two channels, the bit rate in either the I or the Q channel is equal to one-half of the input data rate
(fb/2). (Essentially, the bit splitter stretches the I and Q bits to twice their input bit length.)
Consequently, the highest fundamental frequency present at the data input to the I or the Q
balanced modulator is equal to one-fourth of the input data rate (one-half of fb/2 = fb/4). As a
result, the output of the I and Q balanced modulators requires a minimum double-sided Nyquist
bandwidth equal to one-half of the incoming bit rate (fN = twice fb/4 = fb/2). Thus, with QPSK a
bandwidth compression is realized (the minimum bandwidth is less than the incoming bit rate).
Also, because the QPSK output signal does not change phase until 2 bits (a dibit) have been
clocked into the bit splitter, the fastest output rate of change (baud) is also equal to one-half of the
input bit rate. As with BPSK, the minimum bandwidth and the baud are equal. This relationship is
shown in figure.

In figure it can be seen that the worst-case input condition to the I or Q balanced
modulator is an alternating 1/0 pattern, which occurs when the binary input data have a 1100
repetitive pattern. One cycle of the fastest binary transition (a 1/0 sequence) in the I or Q channel
takes the same time as four input data bits. Consequently, the highest fundamental frequency at
the input and the fastest rate of change at the output of the balanced modulators are equal to
one-fourth of the binary input bit rate.

The output of the balanced modulators can be expressed mathematically as

output = (sin ωat)(sin ωct)

where ωat = 2π(fb/4)t (the modulating signal phase) and ωct = 2πfct (the unmodulated carrier
phase).

Thus, output = [sin 2π(fb/4)t][sin 2πfct]
             = ½ cos 2π(fc − fb/4)t − ½ cos 2π(fc + fb/4)t

The output frequency spectrum extends from fc − fb/4 to fc + fb/4, and the minimum bandwidth
(fN) is

(fc + fb/4) − (fc − fb/4) = 2fb/4 = fb/2

It can be seen that for the same input bit rate the minimum bandwidth required to pass the
output of the QPSK modulator is equal to one-half of that of the BPSK modulator. Also, the
baud rate for the QPSK modulator is one-half that of the BPSK modulator.

QPSK receiver. The block diagram of a QPSK receiver is shown in Figure. The power splitter
directs the input QPSK signal to the I and Q product detectors and the carrier recovery circuit. The
carrier recovery circuit reproduces the original transmit carrier oscillator signal. The recovered
carrier must be frequency and phase coherent with the transmit reference carrier. The QPSK signal
is demodulated in the I and Q product detectors, which generate the original I and Q data bits.
The outputs of the product detectors are fed to the bit combining circuit, where they are converted
from parallel I and Q data channels to a single binary output data stream.

The incoming QPSK signal may be any one of the four possible output phases shown in
Figure. To illustrate the demodulation process, let the incoming QPSK signal be −sin ωct + cos ωct.
Mathematically, the demodulation process is as follows.

The received QPSK signal (−sin ωct + cos ωct) is one of the inputs to the I product detector. The
other input is the recovered carrier (sin ωct). The output of the I product detector is

I = (−sin ωct + cos ωct)(sin ωct)
  = (−sin ωct)(sin ωct) + (cos ωct)(sin ωct)
  = −sin²ωct + (cos ωct)(sin ωct)
  = −½(1 − cos 2ωct) + ½ sin(ωc + ωc)t + ½ sin(ωc − ωc)t
  = −½ + ½ cos 2ωct + ½ sin 2ωct + ½ sin 0

The 2ωc terms are filtered out and sin 0 = 0, leaving

I = −½ V (logic 0)
Again, the received QPSK signal (−sin ωct + cos ωct) is one of the inputs to the Q product
detector. The other input is the recovered carrier shifted 90° in phase (cos ωct). The output of the Q
product detector is

Q = (−sin ωct + cos ωct)(cos ωct)
  = cos²ωct − (sin ωct)(cos ωct)
  = ½(1 + cos 2ωct) − ½ sin(ωc + ωc)t − ½ sin(ωc − ωc)t
  = ½ + ½ cos 2ωct − ½ sin 2ωct − ½ sin 0

The 2ωc terms are filtered out and sin 0 = 0, leaving

Q = +½ V (logic 1)
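A small numerical check of this product detection (the carrier, sample rate, and symbol length are assumed values; per-symbol averaging stands in for the low-pass filters):

```python
import numpy as np

fc, fs, n = 2000.0, 32_000, 256          # assumed carrier, sample rate, samples per symbol
t = np.arange(n) / fs

received = -np.sin(2 * np.pi * fc * t) + np.cos(2 * np.pi * fc * t)   # the 10 dibit

i_out = np.mean(received * np.sin(2 * np.pi * fc * t))   # I product detector -> -1/2 V
q_out = np.mean(received * np.cos(2 * np.pi * fc * t))   # Q product detector -> +1/2 V

print(round(i_out, 3), round(q_out, 3))            # -0.5  0.5
print("dibit (Q, I) =", (int(q_out > 0), int(i_out > 0)))   # (1, 0)
```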

5. Explain briefly about the differential pulse code modulation?

In analog messages we can make a good guess about a sample value from a knowledge of
the past sample values. In other words, the sample values are not independent, and generally there is
a great deal of redundancy in the Nyquist samples. Proper exploitation of this redundancy leads
to encoding a signal with a smaller number of bits. Consider a simple scheme where, instead of
transmitting the sample values, we transmit the difference between successive sample values.
Thus, if m[k] is the kth sample, instead of transmitting m[k], we transmit the difference
d[k] = m[k] − m[k−1]. At the receiver, knowing d[k] and the previous sample value m[k−1], we can
reconstruct m[k]. Thus, from the knowledge of the difference d[k], we can reconstruct m[k]
iteratively at the receiver. Now, the difference between successive samples is generally much
smaller than the sample values. Thus, the peak amplitude mp of the transmitted values is reduced
considerably. Because the quantization interval Δv = 2mp/L for a given L (or n), this reduces the
quantization interval Δv, thus reducing the quantization noise, which is given by Δv²/12. This
means that for a given n (or transmission bandwidth) we can increase the SNR, or for a given
SNR we can reduce n (or transmission bandwidth).

We can improve upon this scheme by estimating (predicting) the value of the kth sample
m[k] from a knowledge of the previous sample values. If this estimate is m̂[k], then we transmit
the difference (prediction error) d[k] = m[k] − m̂[k]. At the receiver also, we determine the estimate
m̂[k] from the previous sample values and then generate m[k] by adding the received d[k] to the
estimate m̂[k]. Thus, we reconstruct the samples at the receiver iteratively. If our prediction is
worth its salt, the predicted (estimated) value m̂[k] will be close to m[k], and the difference
(prediction error) d[k] will be even smaller than the difference between successive samples.
Consequently, this scheme, known as differential PCM (DPCM), is superior to the one described
in the previous paragraph, which is a special case of DPCM where the estimate of a sample value
is taken as the previous sample value, that is, m̂[k] = m[k−1].

Spirits of Taylor, Maclaurin, and Wiener

Before describing DPCM, we shall briefly discuss the approach to signal prediction
(estimation). To the uninitiated, future prediction seems a mysterious business fit only for psychics,
wizards, mediums, and the like, who can summon help from the spirit world. Electrical engineers
appear to be hopelessly outclassed in this pursuit. Not quite so! We can also summon the spirits of
Taylor, Maclaurin, Wiener, and the like to help us. Consider, for example, a signal m(t) that
has derivatives of all orders at t. Using the Taylor series for this signal, we can express m(t + Ts) as

m(t + Ts) = m(t) + Ts m'(t) + (Ts²/2!) m''(t) + (Ts³/3!) m'''(t) + …   (1a)

          ≈ m(t) + Ts m'(t)   for small Ts   (1b)

Equation (1a) shows that from a knowledge of the signal and its derivatives at instant t, we can
predict a future signal value at t + Ts. In fact, even if we know just the first derivative, we can still
predict this value approximately, as shown in Equation (1b). Let us denote the kth sample of m(t) by
m[k], that is, m(kTs) = m[k], m(kTs − Ts) = m[k−1], and so on. Setting t = kTs in Equation (1b),
and recognizing that m'(kTs) ≈ [m(kTs) − m(kTs − Ts)]/Ts, we obtain

m[k+1] ≈ m[k] + Ts {[m[k] − m[k−1]]/Ts}
       = 2m[k] − m[k−1]

This shows that we can find a crude prediction of the (k+1)th sample from the two
previous samples. The approximation improves as we add more terms in the series on
the right-hand side. To determine the higher-order derivatives in the series, we require more
samples from the past. The larger the number of past samples we use, the better the
prediction will be. Thus, in general, we can express the prediction formula as

m[k] ≈ a₁m[k−1] + a₂m[k−2] + … + aN m[k−N]   (2)

The right-hand side is the predicted value m̂[k]:

m̂[k] = a₁m[k−1] + a₂m[k−2] + … + aN m[k−N]   (3)

This is the equation of an Nth-order predictor. A larger N would, in general, result in better
prediction. The output of this filter (predictor) is m̂[k], the predicted value of m[k]. The input
consists of the previous samples m[k−1], m[k−2], …, m[k−N], although it is customary to say that the
input is m[k] and the output is m̂[k]. Observe that this equation reduces to m̂[k] = m[k−1] for the
first-order predictor. It follows from Equation (2) when we retain only the first term on the
right-hand side. This means that a₁ = 1, and the first-order predictor is a simple time delay.

We have outlined here a very simple procedure for predictor design. In a more sophisticated
approach, where the minimum mean squared error criterion is used for best prediction, the
prediction coefficients aj in Equation (3) are determined from the statistical correlation between the
various samples. The predictor described in Equation (3) is called a linear predictor. It is
basically a transversal filter (a tapped delay line), where the tap gains are set equal to the prediction
coefficients, as shown in figure.
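A minimal sketch of such a linear predictor, using the crude second-order tap gains a₁ = 2, a₂ = −1 obtained from the Taylor-series argument above (the test signal is arbitrary):

```python
import numpy as np

def predict(past_samples, a):
    """Nth-order linear predictor (transversal filter):
    m_hat[k] = a1*m[k-1] + a2*m[k-2] + ... + aN*m[k-N].
    `past_samples` holds [m[k-1], m[k-2], ..., m[k-N]] and `a` the tap gains."""
    return float(np.dot(a, past_samples))

m = np.sin(2 * np.pi * 0.02 * np.arange(100))    # a slowly varying test signal
a = [2.0, -1.0]                                  # crude predictor: 2*m[k-1] - m[k-2]
k = 30
estimate = predict([m[k - 1], m[k - 2]], a)
print(round(estimate, 4), round(m[k], 4))        # the two values are close
```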

Analysis of DPCM

As mentioned earlier, in DPCM we transmit not the present sample m[k], but d[k] (the
difference between m[k] and its predicted value m̂[k]). At the receiver, we generate m̂[k] from
the past sample values, and the received d[k] is added to it to generate m[k]. There is, however,
one difficulty in this scheme. At the receiver, instead of the past samples m[k−1], m[k−2], …, as well
as d[k], we have their quantized versions mq[k−1], mq[k−2], …. Hence, we cannot determine m̂[k];
we can only determine m̂q[k], the estimate of the quantized sample mq[k]. This would increase the
error in reconstruction. In such a case, a better strategy is to determine m̂q[k], the estimate of mq[k]
(instead of m[k]), at the transmitter also, from the quantized samples mq[k−1], mq[k−2], …. The
difference d[k] = m[k] − m̂q[k] is now transmitted using PCM. At the receiver, we can generate
m̂q[k], and from the received d[k] we can reconstruct mq[k].

Figure: Transversal filter (tapped delay line) used as a linear predictor.

Figure shows a DPCM transmitter. We shall soon show that the predictor input is mq[k];
naturally, its output is m̂q[k], the predicted value of mq[k]. The difference

d[k] = m[k] − m̂q[k]

is quantized to yield

dq[k] = d[k] + q[k]

where q[k] is the quantization error. The predictor output m̂q[k] is fed back to its input so that the
predictor input mq[k] is

mq[k] = m̂q[k] + dq[k]
      = m[k] − d[k] + dq[k]
      = m[k] + q[k]

This shows that mq[k] is a quantized version of m[k]. The predictor input is indeed mq[k], as
assumed. The quantized signal dq[k] is now transmitted over the channel. The receiver shown in
figure is identical to the shaded portion of the transmitter. The inputs in both cases are also the
same, namely dq[k]; therefore, the predictor output must be m̂q[k] (the same as the predictor output
at the transmitter). Hence, the receiver output (which is the predictor input) is also the same,
namely mq[k] = m[k] + q[k], as found in the equation above. This shows that we are able to receive
the desired signal m[k] plus the quantization noise q[k]. This is the quantization noise associated
with the difference signal d[k], which is generally much smaller than m[k]. The received samples
are decoded and passed through a low-pass filter for D/A conversion.
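A toy sketch of this transmit/receive loop with a first-order predictor (previous reconstructed sample) and a uniform quantizer; the step size and test signal are assumptions for illustration:

```python
import numpy as np

def dpcm_transmit(m, step):
    """Quantize the prediction error d[k] = m[k] - mq[k-1]; the predictor is fed
    the reconstructed (quantized) samples, exactly as in the block diagram."""
    mq_prev = 0.0
    dq = []
    for sample in m:
        d = sample - mq_prev                    # prediction error d[k]
        q = step * np.round(d / step)           # quantized difference dq[k]
        dq.append(q)
        mq_prev += q                            # reconstructed sample mq[k]
    return np.array(dq)

def dpcm_receive(dq):
    """Receiver: the same predictor/accumulator driven by the received dq[k]."""
    return np.cumsum(dq)

m = np.sin(2 * np.pi * 0.01 * np.arange(200))   # test message samples
reconstructed = dpcm_receive(dpcm_transmit(m, step=0.05))
print(float(np.max(np.abs(reconstructed - m)))) # error stays within about step/2
```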

Figure: DPCM system. (a) Transmitter; (b) Receiver.

SNR Improvement
To determine the improvement of DPCM over PCM, let mp and dp be the peak amplitudes
of m(t) and d(t), respectively. If we use the same value of L in both cases, the quantization step
Δv in DPCM is reduced by the factor dp/mp. Because the quantization noise power is (Δv)²/12, the
quantization noise in DPCM is reduced by the factor (mp/dp)², and the SNR increases by the same
factor. Moreover, the signal power is proportional to its peak value squared (assuming other
statistical properties invariant). Therefore, Gp (the SNR improvement due to prediction) is

Gp = Pm / Pd

where Pm and Pd are the powers of m(t) and d(t), respectively. In terms of dB units, this means
that the SNR increases by 10 log₁₀(Pm/Pd) dB; the SNR expression for PCM therefore applies to
DPCM as well, with a constant term that is higher by 10 log₁₀(Pm/Pd) dB. In one example, a
second-order predictor for speech signals is analyzed; for this case, the SNR improvement is
found to be 5.6 dB. In practice, the SNR improvement may be as high as 25 dB in such cases as
short-term voiced speech spectra and the spectra of low-activity images. Alternatively, for the
same SNR, the bit rate for DPCM could be lower than that for PCM by 3 to 4 bits per sample.
Thus, telephone systems using DPCM can often operate at 32 kbit/s or even 24 kbit/s.

6. Explain briefly about the delta modulation

Delta Modulation:

The sample correlation used in DPCM is further exploited in delta modulation (DM) by
oversampling (typically four times the Nyquist rate) the baseband signal. This increases the
correlation between adjacent samples, which results in a small prediction error that can be
encoded using only one bit (L = 2). Thus, DM is basically a 1-bit DPCM, that is, DPCM that uses
only two levels (L = 2) for quantization of m[k] − m̂q[k]. In comparison to PCM (and DPCM), it is a
very simple and inexpensive method of A/D conversion. A 1-bit code word in DM makes word
framing unnecessary at the transmitter and the receiver. This strategy allows us to use fewer bits
per sample for encoding a baseband signal.

Figure: Delta modulation is a special case of DPCM.

In DM, we use a first-order predictor, which, as seen earlier, is just a time delay of Ts (the
sampling interval). Thus, the DM transmitter (modulator) and receiver (demodulator) are
identical to those of DPCM in figure, with a time delay for the predictor, as shown in figure.
From this figure, we obtain

mq[k] = mq[k−1] + dq[k]

Hence,

mq[k−1] = mq[k−2] + dq[k−1]

Substituting this into the previous equation yields

mq[k] = mq[k−2] + dq[k] + dq[k−1]

Proceeding iteratively in this manner, and assuming the zero initial condition mq[0] = 0, yields

mq[k] = Σ (from m = 0 to k) dq[m]

This shows that the receiver (demodulator) is just an accumulator (adder). If the output
dq[k] is represented by impulses, then the accumulator (receiver) may be realized by an integrator,
because its output is the sum of the strengths of the input impulses (the sum of the areas under the
impulses). We may also replace the feedback portion of the modulator (which is identical to the
demodulator) by an integrator. The demodulator output is mq[k], which, when passed through a
low-pass filter, yields the desired signal reconstructed from the quantized samples.
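A minimal sketch of this 1-bit DPCM loop (step size, oversampling factor, and test tone are assumed values):

```python
import numpy as np

def delta_modulate(m, delta):
    """Transmit +delta or -delta depending on the sign of m[k] - mq[k-1];
    the feedback accumulator builds the staircase approximation mq."""
    mq = 0.0
    dq = []
    for sample in m:
        pulse = delta if sample > mq else -delta
        dq.append(pulse)
        mq += pulse
    return np.array(dq)

fs = 1600                               # heavily oversampled relative to the 5 Hz tone
t = np.arange(0, 1, 1 / fs)
m = 0.8 * np.sin(2 * np.pi * 5 * t)     # slowly varying test signal

dq = delta_modulate(m, delta=0.05)
staircase = np.cumsum(dq)               # receiver: accumulator / integrator
print(float(np.max(np.abs(staircase - m))))   # small error when no slope overload
```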

Figure shows a practical implementation of the delta modulator and demodulator. As
discussed earlier, the first-order predictor is replaced by a low-cost integrator circuit (such as an
RC integrator). The modulator (figure) consists of a comparator and a sampler in the direct path
and an integrator-amplifier in the feedback path. Let us see how the delta modulator works.

The analog signal m(t) is compared with the feedback signal (which serves as a predicted
signal) m̂q(t). The error signal d(t) = m(t) − m̂q(t) is applied to a comparator. If d(t) is positive, the
comparator output is a constant signal of amplitude E, and if d(t) is negative, the comparator
output is −E. Thus, the difference is a binary signal (L = 2), which is what is needed to generate a
1-bit DPCM. The comparator output is sampled by a sampler at a rate of fs samples per second,
where fs is typically much higher than the Nyquist rate. The sampler thus produces a train of
narrow pulses dq[k] (to simulate impulses), with a positive pulse when m(t) > m̂q(t) and a negative
pulse when m(t) < m̂q(t). Note that each sample is coded by a single binary pulse (1-bit DPCM), as
required. The pulse train dq[k] is the delta-modulated pulse train (figure). The modulated signal
dq[k] is amplified and integrated in the feedback path to generate m̂q(t) (figure), which tries to
follow m(t).
Figure: Delta modulation. (a) Delta modulator; (b) Delta demodulator.

To understand how this works, note that each pulse in dq[k] at the input of the integrator
gives rise to a step function (positive or negative, depending on the pulse polarity) in m̂q(t). If, for
example, m(t) > m̂q(t), a positive pulse is generated in dq[k], which gives rise to a positive step in
m̂q(t) at that instant, as shown in figure. It can be seen that m̂q(t) is a kind of staircase
approximation of m(t). When m̂q(t) is passed through a low-pass filter, the coarseness of the
staircase in m̂q(t) is eliminated, and we get a smoother and better approximation to m(t). The
demodulator at the receiver consists of an amplifier-integrator (identical to that in the feedback
path of the modulator) followed by a low-pass filter (figure).

DM Transmits the Derivative of m(t)

In PCM, the analog signal samples are quantized in L levels, and this information is
transmitted by n pulses per sample (n = log₂ L). A little reflection shows that in DM, the modulated
signal carries information not about the signal samples but about the difference between
successive samples. If the difference is positive or negative, a positive or a negative pulse
(respectively) is generated in the modulated signal dq[k]. Basically, therefore, DM carries the
information about the difference between successive samples, and this difference is transmitted by
a 1-bit code word.

Threshold of coding and Overloading

Threshold and overloading effects can be clearly seen in figure. Variations in m(t) smaller
than the step value (threshold of coding) are lost in DM. Moreover, if m(t) changes too fast, that
is, if m'(t) is too high, m̂q(t) cannot follow m(t), and overloading occurs. This is the so-called slope
overload, which gives rise to slope overload noise. This noise is one of the basic limiting
factors in the performance of DM. We should expect slope overload rather than amplitude
overload in DM, because DM basically carries the information about m'(t). The granular nature of
the output signal gives rise to granular noise, similar to quantization noise. The slope
overload noise can be reduced by increasing δ (the step size). This unfortunately increases the
granular noise. There is an optimum value of δ, which yields the best compromise giving the
minimum overall noise. This optimum value of δ depends on the sampling frequency fs and the
nature of the signal.

The slope overload occurs when m̂q(t) cannot follow m(t). During the sampling interval Ts,
m̂q(t) is capable of changing by δ, where δ is the height of the step. Hence, the maximum slope
that m̂q(t) can follow is δ/Ts, or δfs, where fs is the sampling frequency. Hence, no overload
occurs if

|m'(t)| < δfs

Consider the case of tone modulation (meaning a sinusoidal message):

m(t) = A cos ωt

The condition for no overload is

|m'(t)|max = ωA < δfs

Hence, the maximum amplitude Amax of this signal that can be tolerated without overload is given
by

Amax = δfs / ω

The overload amplitude of the modulating signal is inversely proportional to the frequency ω. For
higher modulating frequencies, the overload occurs for smaller amplitudes. For voice signals,
which contain all frequency components up to (say) 4 kHz, calculating Amax by using
ω = 2π × 4000 in the equation above will give an overly conservative value. It has been shown by
de Jager that Amax for voice signals can be calculated by using ωr ≈ 2π × 800 in the equation:

[Amax]voice ≈ δfs / ωr

Thus, the maximum voice signal amplitude Amax that can be used without causing slope overload
in DM is the same as the maximum amplitude of a sinusoidal signal of reference frequency fr (fr ≈
800 Hz) that can be used without causing slope overload in the same system.
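A small numerical illustration of these limits (the step size and DM sampling rate below are assumed, not from the text):

```python
import math

delta = 0.1          # assumed step size (volts)
fs = 64_000          # assumed DM sampling rate (Hz)

for f in (800, 4_000):                            # de Jager reference tone vs. 4 kHz
    A_max = delta * fs / (2 * math.pi * f)        # A_max = delta * fs / omega
    print(f"f = {f} Hz : A_max = {A_max:.3f} V")
# The 800 Hz figure is the reference-frequency value used for voice signals;
# using 4 kHz instead gives the overly conservative (smaller) limit.
```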

Fortunately, the voice spectrum (as well as the television video signal) also decays with
frequency and closely follows the overload characteristics (curve c, figure). For this reason, DM is
well suited for voice (and television) signals. Actually, the voice signal spectrum (curve b)
decreases as 1/ω up to 2000 Hz, and beyond this frequency it decreases as 1/ω². If we had used
double integration in the feedback circuit instead of single integration, Amax in the equation above
would be proportional to 1/ω². Hence, a better match between the voice spectrum and the overload
characteristics is achieved by using single integration up to 2000 Hz and double integration beyond
2000 Hz. Double integration alone, however, has a tendency to instability, which can be reduced by
using some lower-order prediction along with double integration. The double integrator can be
built by placing in cascade two low-pass RC integrators with time constants R1C1 = 1/(200π) and
R2C2 = 1/(4000π), respectively. This results in single integration from 100 Hz to 2000 Hz and
double integration beyond 2000 Hz.

7. Explain Pulse Width modulation?

Pulse width Modulation:

Introduction: The pulse-width modulation form of PTM is also often called PDM (pulse-duration
modulation) and, less often, PLM (pulse-length modulation). In this system, as shown in figure,
we have a fixed amplitude and starting time for each pulse, but the width of each pulse is made
proportional to the amplitude of the signal at that instant. In the figure, there may be a sequence of
signal sample amplitudes of 0.9, 0.5, 0 and −0.4 V. These can be represented by pulse widths of 1.9,
1.5, 1.0 and 0.6 µs, respectively. The width corresponding to zero amplitude was chosen in this
system to be 1.0 µs, and it has been assumed that the signal amplitude at this point will vary
between the limits of +1 V (width = 2 µs) and −1 V (width = 0 µs). Zero amplitude is thus the
average signal level, and the average pulse width of 1 µs has been made to correspond to it. In this
context, a negative pulse width is not possible. It would make the pulse end before it began, as it
were, and thus throw out the timing in the receiver. If the pulses in a practical system have a
recurrence rate of 8000 pulses per second, the time between the commencements of adjoining
pulses is 10⁶/8000 = 125 µs. This is adequate not only to accommodate the varying widths but also
to permit time-division multiplexing.

Figure: Pulse-width modulation. (a) Signal; (b) PWM (width variations exaggerated).
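The amplitude-to-width mapping quoted above can be written out directly (the 1 µs-per-volt slope follows from the stated end points):

```python
def pwm_width_us(amplitude_v):
    """Linear mapping: 0 V -> 1.0 us, +1 V -> 2.0 us, -1 V -> 0.0 us,
    i.e. width = 1.0 us + (1.0 us/V) * amplitude."""
    return 1.0 + 1.0 * amplitude_v

for v in (0.9, 0.5, 0.0, -0.4):
    print(f"{v:+.1f} V -> {pwm_width_us(v):.1f} us")   # 1.9, 1.5, 1.0, 0.6 us
```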

Pulse-width modulation has the disadvantage, when compared with pulse-position
modulation (PPM), which will be discussed next, that its pulses are of varying width and therefore
of varying power content. This means that the transmitter must be powerful enough to handle the
maximum-width pulses, although the average power transmitted is perhaps only half of the peak
power. PWM still works if synchronization between transmitter and receiver fails, whereas
pulse-position modulation does not.

Generation and demodulation of PWM: Pulse-width modulation may be generated by applying
trigger pulses (at the sampling rate) to control the starting time of pulses from a monostable
multivibrator, and feeding in the signal to be sampled to control the duration of these pulses. The
circuit diagram for such an arrangement is shown in figure.

Figure: Monostable multivibrator generating pulse-width modulation.

The emitter-coupled monostable multivibrator of Figure makes an excellent voltage-to-time
converter, since its gate width is dependent on the voltage to which the capacitor C is charged. If
this voltage is varied in accordance with a signal voltage, a series of rectangular pulses will be
obtained, with widths varying as required. Note that the circuit does the twin jobs of sampling
and converting the samples into PWM.

It will be recalled that the stable state for this type of multivibrator is with T1 OFF and T2
ON. The applied trigger pulse switches T1 ON, whereupon the voltage at C1 falls as T1 now begins
to draw collector current; the voltage at B2 follows suit, and T2 is switched OFF by regenerative
action. As soon as this happens, however, C begins to charge up to the collector supply potential
through R. After a time determined by the supply voltage and the RC time constant of the
charging network, B2 becomes sufficiently positive to switch T2 ON. T1 is simultaneously switched
OFF by regenerative action and stays OFF until the arrival of the next trigger pulse. The voltage
that the base of T2 must reach to allow T2 to turn on is slightly more positive than the voltage
across the common emitter resistor Rk. This voltage depends on the current flowing through the
circuit, which at the time is the collector current of T1 (which is then ON). The collector current
depends on the base bias, which is governed by the instantaneous changes in the applied signal
voltage. The applied modulation voltage thus controls the voltage to which B2 must rise to switch
T2 ON. Since this voltage rise is linear, the modulation voltage is seen to control the period of time
during which T2 is OFF, that is, the pulse duration. It should be noted that this pulse duration is
very short compared with even the highest signal frequencies, so that no real distortion arises
through changes in signal amplitude while T2 is OFF.

The demodulation of pulse-width modulation is quite a simple process; PWM is merely fed
to an integrating circuit from which a signal emerges whose amplitude at any time is proportional
to the pulse width at that time. This principle is also employed in the very efficient so-called class
D amplifiers. The integrating circuit most often used there is the loudspeaker itself.

8. Explain about Pulse Position modulation?

Pulse-Position Modulation (PPM)

The amplitude and width of the pulses are kept constant in this system, while the position of
each pulse, in relation to the position of a recurrent reference pulse, is varied by each instantaneous
sampled value of the modulating wave. This means that the transmitter must send synchronizing
pulses to operate timing circuits in the receiver. As mentioned in connection with PWM,
pulse-position modulation has the advantage of requiring constant transmitter power output, but the
disadvantage of depending on transmitter-receiver synchronization.

Generation and demodulation of PPM: Pulse-position modulation may be obtained very simply
from PWM, as shown in figure. Considering PWM and its generation again, it is seen that each
such pulse has a leading edge and a trailing edge (like any other pulse, of course). However, in
PWM the locations of the leading edges are fixed, whereas those of the trailing edges are not.
Their positions depend on the pulse width, which is determined by the signal amplitude at that
instant. Thus, it may be said that the trailing edges of PWM pulses are, in fact, position-modulated.
The method of obtaining PPM from PWM is thus accomplished by getting rid of
the leading edges and bodies of the PWM pulses. This is surprisingly easy to achieve.

Figure (a) and (b) show, once again, PWM corresponding to a given signal. If the train of
pulses thus obtained is differentiated, then, as shown in figure, another pulse train results. This
has positive-going narrow pulses corresponding to the leading edges and negative-going pulses
corresponding to the trailing edges. If the position corresponding to the trailing edge of an
unmodulated pulse is counted as zero displacement, then the other trailing edges will arrive
earlier or later. (An unmodulated PWM pulse is one that is obtained when the instantaneous
signal value is zero.) These pulses are appropriately labeled in figure. The modulated trailing edges
will therefore have a time displacement other than zero; this time displacement is proportional to
the instantaneous value of the signal voltage. The differentiated pulses corresponding to the
leading edges are removed with a diode clipper or rectifier, and the remaining pulses, as shown in
figure, are position-modulated.

Figure: Generation of pulse-position modulation. (a) Signal; (b) PWM; (c) differentiated; (d) clipped (PPM).

When PPM is demodulated in the receiver, it is again first converted into PWM. This is
done with a flip-flop or bistable multivibrator. One input of the multivibrator receives trigger
pulses from a local generator which is synchronized by trigger pulses received from the
transmitter, and these triggers are used to switch OFF one of the stages of the flip-flop. The PPM
pulses are fed to the other base of the flip-flop and switch that stage ON (actually by switching the
other one OFF). The period of time during which this particular stage is OFF depends on the
time difference between the two triggers, so that the resulting pulse has a width that depends on
the time displacement of each individual PPM pulse. The resulting PWM pulse train is then
demodulated.
9. Explain about pulse amplitude modulation?

Pulse-Amplitude Modulation (PAM)

Pulse-amplitude modulation, the simplest form of pulse modulation, is illustrated in figure.
It forms an excellent introduction to pulse modulation in general. PAM is a pulse modulation
system in which the signal is sampled at regular intervals, and each sample is made proportional
to the amplitude of the signal at the instant of sampling. The pulses are then sent by either wire
or cable, or else are used to modulate a carrier. As shown in the figure, the two types are
double-polarity PAM, which is self-explanatory, and single-polarity PAM, in which a fixed dc
level is added to the signal to ensure that the pulses are always positive. The ability to use
constant-amplitude pulses is a major advantage of pulse modulation, and since PAM does not
utilize constant-amplitude pulses, it is infrequently used. When it is used, the pulses
frequency-modulate the carrier.

Figure: Pulse-amplitude modulation. (a) Signal; (b) double-polarity PAM; (c) single-polarity PAM.

It is very easy to generate and demodulate PAM. In a generator, the signal to be converted
to PAM is fed to one input of an AND gate. Pulses at the sampling frequency are applied to the
other input of the AND gate to open it during the wanted time intervals. The output of the gate
then consists of pulses at the sampling rate, equal in amplitude to the signal voltage at each instant.
The pulses are then passed through a pulse-shaping network, which gives them flat tops. As
mentioned above, frequency modulation is then employed, so that the system becomes PAM-FM.
In the receiver, the pulses are first recovered with a standard FM demodulator. They are then fed
to an ordinary diode detector, which is followed by a low-pass filter. If the cutoff frequency of this
filter is high enough to pass the highest signal frequency, but low enough to remove the sampling-frequency
ripple, an undistorted replica of the original signal is reproduced.
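A short sketch of the sample-and-hold behaviour described above (the tone frequency, sampling rate and the dc offset for the single-polarity case are assumptions chosen only for illustration):

import numpy as np

fs_sig = 100_000                     # dense time grid for the analog signal (Hz)
fm = 1_000                           # message tone (Hz), assumed
f_samp = 10_000                      # PAM sampling rate (Hz), assumed
t = np.arange(0, 0.002, 1 / fs_sig)

signal = np.sin(2 * np.pi * fm * t)

# Double-polarity PAM: flat-top samples held for one sampling interval each
hold = fs_sig // f_samp
pam_double = np.repeat(signal[::hold], hold)[: len(t)]

# Single-polarity PAM: add a fixed dc level so every pulse is positive
dc = 1.5
pam_single = pam_double + dc

# Sanity check: the held values equal the signal at the sampling instants
assert np.allclose(pam_double[::hold], signal[::hold])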

10. Explain briefly about the QAM?

Quadrature amplitude modulation (QAM) is both an analog and a digital modulation scheme. It
conveys two analog message signals, or two digital bit streams, by changing (modulating) the
amplitudes of two carrier waves, using the amplitude-shift keying (ASK) digital modulation
scheme or the amplitude modulation (AM) analog modulation scheme. These two waves, usually
sinusoids, are out of phase with each other by 90 degrees and are thus called quadrature carriers
or quadrature components, hence the name of the scheme. The modulated waves are summed, and
the resulting waveform is a combination of both phase-shift keying (PSK) and amplitude-shift
keying (ASK), or, in the analog case, of phase modulation (PM) and amplitude modulation. In the
digital QAM case, a finite number of at least two phases and at least two amplitudes are used. PSK
modulators are often designed using the QAM principle, but are not considered QAM since the
amplitude of the modulated carrier signal is constant. QAM is used extensively as a modulation
scheme for digital telecommunication systems.

Digital QAM

Like all modulation schemes, QAM conveys data by changing some aspect of a carrier signal, or
the carrier wave (usually a sinusoid), in response to a data signal. In the case of QAM, the
amplitudes of two waves, 90 degrees out of phase with each other (in quadrature), are changed
(modulated or keyed) to represent the data signal. Amplitude modulating two carriers in quadrature
can be equivalently viewed as both amplitude modulating and phase modulating a single carrier.

Phase modulation (analog PM) and phase-shift keying (digital PSK) can be regarded as a special
case of QAM, where the magnitude of the modulating signal is a constant, with only the phase
varying. This can also be extended to frequency modulation (FM) and frequency-shift keying
(FSK), for these can be regarded as a special case of phase modulation.
Analog QAM

Analog QAM: measured PAL colour bar signal on a vector analyser screen.

When transmitting two signals by modulating them with QAM, the transmitted signal will be of
the form

s(t) = I(t) cos(2π f0 t) + Q(t) sin(2π f0 t)

where I(t) and Q(t) are the modulating signals and f0 is the carrier frequency.

At the receiver, these two modulating signals can be demodulated using a coherent demodulator.
Such a receiver multiplies the received signal separately with both a cosine and sine signal to
produce the received estimates of I(t) and Q(t) respectively. Because of the orthogonality property
of the carrier signals, it is possible to detect the modulating signals independently.

In the ideal case, I(t) is demodulated by multiplying the transmitted signal with a cosine signal:

r_i(t) = s(t) cos(2π f0 t)
       = I(t) cos^2(2π f0 t) + Q(t) sin(2π f0 t) cos(2π f0 t)

Using standard trigonometric identities, we can write this as

r_i(t) = (1/2) I(t) [1 + cos(4π f0 t)] + (1/2) Q(t) sin(4π f0 t)

Low-pass filtering r_i(t) removes the high-frequency terms (containing 4π f0 t), leaving only the I(t)
term. This filtered signal is unaffected by Q(t), showing that the in-phase component can be
received independently of the quadrature component. Similarly, we may multiply s(t) by a sine
wave and then low-pass filter to extract Q(t).
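The orthogonality argument above can be checked numerically. In the sketch below the carrier frequency, the two message tones and the crude windowed-sinc low-pass filter are all illustrative assumptions, not values taken from the text.

import numpy as np

fs, f0 = 100_000, 10_000                 # sample rate and carrier (Hz), assumed
t = np.arange(0, 0.01, 1 / fs)

I = np.cos(2 * np.pi * 300 * t)          # two independent low-frequency messages
Q = np.sin(2 * np.pi * 500 * t)

s = I * np.cos(2 * np.pi * f0 * t) + Q * np.sin(2 * np.pi * f0 * t)

def lowpass(x, taps=801, fc=2_000):
    # Crude FIR low-pass (windowed sinc) used in place of an ideal filter
    n = np.arange(taps) - (taps - 1) / 2
    h = np.sinc(2 * fc / fs * n) * np.hamming(taps)
    h /= h.sum()
    return np.convolve(x, h, mode="same")

I_hat = 2 * lowpass(s * np.cos(2 * np.pi * f0 * t))   # factor 2 undoes the 1/2 above
Q_hat = 2 * lowpass(s * np.sin(2 * np.pi * f0 * t))

# Away from the filter edges the estimates track I(t) and Q(t)
print(np.max(np.abs(I_hat[1000:-1000] - I[1000:-1000])))
print(np.max(np.abs(Q_hat[1000:-1000] - Q[1000:-1000])))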

The phase of the received signal is assumed to be known accurately at the receiver.

If the demodulating phase is even a little off, it results in crosstalk between the modulated signals.
This issue of carrier synchronization at the receiver must be handled somehow in QAM systems.
The coherent demodulator needs to be exactly in phase with the received signal, or otherwise the
modulated signals cannot be independently received. For example, analog television systems
transmit a burst of the transmitting colour subcarrier after each horizontal synchronization pulse
for reference.

Analog QAM is used in NTSC and PAL television systems, where the I- and Q-signals carry the
components of chroma (colour) information. "Compatible QAM" or C-QUAM is used in AM
stereo radio to carry the stereo difference information.

Fourier analysis of QAM

In the frequency domain, QAM has a similar spectral pattern to DSB-SC modulation. Using the
properties of the Fourier transform, the transmitted spectrum S(f) can be expressed in terms of
MI(f) and MQ(f), where S(f), MI(f) and MQ(f) are the Fourier transforms (frequency-domain
representations) of s(t), I(t) and Q(t), respectively.

Quantized QAM

As with many digital modulation schemes, the constellation diagram is a useful representation. In
QAM, the constellation points are usually arranged in a square grid with equal vertical and
horizontal spacing, although other configurations are possible (e.g. Cross-QAM). Since in digital
telecommunications the data are usually binary, the number of points in the grid is usually a
power of 2 (2, 4, 8, ...). Since QAM is usually square, some of these are rare; the most common
forms are 16-QAM, 64-QAM, 128-QAM and 256-QAM. By moving to a higher-order constellation,
it is possible to transmit more bits per symbol. However, if the mean energy of the constellation is
to remain the same (by way of making a fair comparison), the points must be closer together and
are thus more susceptible to noise and other corruption; this results in a higher bit error rate, and
so higher-order QAM can deliver more data less reliably than lower-order QAM, for constant
mean constellation energy.
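The energy trade-off just described can be made concrete with a small sketch that builds square M-QAM grids, scales each to unit mean energy and compares the resulting minimum distances between points; the construction is a generic square grid, not any particular standard's bit mapping.

import numpy as np
from itertools import product

def square_qam(M):
    # Square M-QAM constellation (M a power of 4), scaled to unit mean energy
    m = int(np.sqrt(M))
    levels = np.arange(-(m - 1), m, 2)            # e.g. [-3, -1, 1, 3] for 16-QAM
    points = np.array([complex(i, q) for i, q in product(levels, levels)])
    return points / np.sqrt(np.mean(np.abs(points) ** 2))

for M in (4, 16, 64, 256):
    c = square_qam(M)
    dmin = min(abs(a - b) for i, a in enumerate(c) for b in c[i + 1:])
    print(f"{M:>3}-QAM: {int(np.log2(M))} bits/symbol, "
          f"min distance at unit mean energy = {dmin:.3f}")

The printed minimum distance shrinks as M grows, which is exactly the reduced noise margin discussed above.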

If data rates beyond those offered by 8-PSK are required, it is more usual to move to QAM, since it
achieves a greater distance between adjacent points in the I-Q plane by distributing the points
more evenly. The complicating factor is that the points are no longer all the same amplitude, and
so the demodulator must now correctly detect both phase and amplitude, rather than just phase.

64-QAM and 256-QAM are often used in digital cable television and cable modem applications. In
the US, 64-QAM and 256-QAM are the mandated modulation schemes for digital cable (see QAM
tuner) as standardised by the SCTE in the standard ANSI/SCTE 07 2000. Note that many
marketing people will refer to these as QAM-64 and QAM-256. In the UK, 16-QAM and 64-QAM
are currently used for digital terrestrial television (Freeview and Top Up TV), and 256-QAM is
planned for Freeview-HD.

Communication systems designed to achieve very high levels of spectral efficiency usually
employ very dense QAM constellations. One example is the ITU-T G.hn standard for networking
over existing home wiring (coaxial cable, phone lines and power lines), which employs
constellations up to 4096-QAM (12 bits/symbol).

Ideal structure

Transmitter

The following picture shows the ideal structure of a QAM transmitter, with a carrier frequency f0
and the frequency response of the transmitter's filter Ht:

First the flow of bits to be transmitted is split into two equal parts: this process generates two
independent signals to be transmitted. They are encoded separately, just as they would be in an
amplitude-shift keying (ASK) modulator. Then one channel (the one "in phase") is multiplied by a
cosine, while the other channel (in "quadrature") is multiplied by a sine. This way there is a phase
difference of 90 degrees between them. They are simply added one to the other and sent through
the real channel.

The sent signal can be expressed in the form:

where vc[n] and vs[n] are the voltages applied in response to the nth symbol to the cosine and sine
waves respectively.

Receiver

The receiver simply performs the inverse process of the transmitter. Its ideal structure is shown in
the picture below, with Hr the receive filter's frequency response:

Multiplying by a cosine (or a sine) and applying a low-pass filter, it is possible to extract the
component in phase (or in quadrature). Then there is only an ASK demodulator, and the two flows
of data are merged back.

In practice, there is an unknown phase delay between the transmitter and receiver that must be
compensated by synchronization of the receiver's local oscillator, i.e. the sine and cosine functions in
the above figure. In mobile applications, there will often be an offset in the relative frequency as
well, due to the possible presence of a Doppler shift proportional to the relative velocity of the
transmitter and receiver. Both the phase and frequency variations introduced by the channel must
be compensated by properly tuning the sine and cosine components, which requires a phase
reference, and is typically accomplished using a phase-locked loop (PLL).

In any application, the low-pass filter will be within hr(t); here it was shown separately just to be clearer.

Quantized QAM performance

The following definitions are needed in determining error rates:

M = number of symbols in the modulation constellation
Eb = energy per bit
Es = energy per symbol = kEb, with k bits per symbol
N0 = noise power spectral density (W/Hz)
Pb = probability of bit error
Pbc = probability of bit error per carrier
Ps = probability of symbol error
Psc = probability of symbol error per carrier

Q(x) is related to the complementary Gaussian error function by Q(x) = (1/2) erfc(x / sqrt(2)), which
is the probability that x will be under the tail of the Gaussian PDF towards positive infinity.

The error rates quoted here are those in additive white Gaussian noise (AWGN).

Where coordinates for constellation points are given in this article, note that they represent a
non-normalised constellation. That is, if a particular mean average energy were required (e.g. unit
average energy), the constellation would need to be linearly scaled.

Rectangular QAM

Constellation diagram for rectangular 16-QAM.

Rectangular QAM constellations are, in general, sub-optimal in the sense that they do not
maximally space the constellation points for a given energy. However, they have the considerable
advantage that they may be easily transmitted as two pulse amplitude modulation (PAM) signals
on quadrature carriers, and can be easily demodulated. The non-square constellations, dealt with
below, achieve marginally better bit-error rate (BER) but are harder to modulate and demodulate.

The first rectangular QAM constellation usually encountered is 16-QAM, the constellation
diagram for which is shown here. A Gray-coded bit assignment is also given. The reason that
16-QAM is usually the first is that a brief consideration reveals that 2-QAM and 4-QAM are in fact
binary phase-shift keying (BPSK) and quadrature phase-shift keying (QPSK), respectively. Also,
the error-rate performance of 8-QAM is close to that of 16-QAM (only about 0.5 dB better), but its
data rate is only three-quarters that of 16-QAM.

Expressions for the symbol-error rate of rectangular QAM are not hard to derive, but they yield
rather unpleasant expressions. For an even number of bits per symbol, k, exact expressions are
available; they are most easily expressed in a per-carrier sense.

The bit-error rate depends on the bit-to-symbol mapping, but for a Gray-coded assignment, so that
we can assume each symbol error causes only one bit error, the bit-error rate is approximately
Pb ≈ Ps/k.

Since the carriers are independent, the overall bit error rate is the same as the per-carrier error
rate, just like BPSK and QPSK.

Odd-k QAM

For odd k, such as 8-QAM (k = 3), it is harder to obtain symbol-error rates, but a tight upper
bound exists.

Two rectangular 8-QAM constellations are shown below without bit assignments. These both have
the same minimum distance between symbol points, and thus the same symbol-error rate (to a
first approximation).

The exact bit-error rate, Pb, will depend on the bit assignment.

Note that neither of these constellations is used in practice, as the non-rectangular version of
8-QAM is optimal.

Constellation diagram for rectangular 8-QAM. Alternative constellation diagram for rectangular 8-QAM.


Non-rectangular QAM

Constellation diagram for circular 8-QAM. Constellation diagram for circular 16-QAM.

It is the nature of QAM that most orders of constellations can be constructed in many different
ways and it is neither possible nor instructive to cover them all here. This article instead presents
two, lower-order constellations.

Two diagrams of circular QAM constellations are shown, for 8-QAM and 16-QAM. The circular
8-QAM constellation is known to be the optimal 8-QAM constellation in the sense of requiring the
least mean power for a given minimum Euclidean distance. The 16-QAM constellation is
suboptimal, although the optimal one may be constructed along the same lines as the 8-QAM
constellation. The circular constellation highlights the relationship between QAM and PSK. Other
orders of constellation may be constructed along similar (or very different) lines. It is consequently
hard to establish expressions for the error rates of non-rectangular QAM, since they necessarily
depend on the constellation. Nevertheless, an obvious upper bound to the rate is related to the
minimum Euclidean distance of the constellation (the shortest straight-line distance between two
points).

Again, the bit-error rate will depend on the assignment of bits to symbols.

Although, in general, there is a non-rectangular constellation that is optimal for a particular M,
such constellations are not often used, since rectangular QAMs are much easier to modulate and
demodulate.

Interference and noise

In moving to a higher-order QAM constellation (higher data rate and mode) in hostile
RF/microwave QAM application environments, such as in broadcasting or telecommunications,
interference (via multipath) typically increases. Reduced noise immunity due to the smaller
constellation separation makes it difficult to achieve theoretical performance thresholds. There are
several test parameter measurements which help determine an optimal QAM mode for a specific
operating environment. The following three are the most significant:[1]

Carrier-to-interference ratio
Carrier-to-noise ratio
Threshold-to-noise ratio

11. Explain briefly about the msk and Gmsk?

In digital modulation, minimum-shift keying (MSK) is a type of continuous-phase frequency-shift
keying that was developed in the late 1950s and 1960s. Similar to OQPSK,[1] MSK is encoded
with bits alternating between quadrature components, with the Q component delayed by half the
symbol period. However, instead of square pulses as OQPSK uses, MSK encodes each bit as a half
sinusoid. This results in a constant-modulus signal, which reduces problems caused by non-linear
distortion. In addition to being viewed as related to OQPSK, MSK can also be viewed as a
continuous-phase frequency-shift keyed (CPFSK) signal with a frequency separation of one-half
the bit rate.


Mathematical representation

The resulting signal is represented by the formula

s(t) = aI(t) cos(πt/2T) cos(2π fc t) - aQ(t) sin(πt/2T) sin(2π fc t)

where aI(t) and aQ(t) encode the even and odd information respectively with a sequence of square
pulses of duration 2T. Using trigonometric identities, this can be rewritten in a form where the
phase and frequency modulation are more obvious:

s(t) = cos[2π fc t + bk(t) πt/2T + φk]

where bk(t) is +1 when aI(t) = aQ(t) and -1 if they are of opposite signs, and φk is 0 if aI(t) is 1, and
π otherwise. Therefore, the signal is modulated in frequency and phase, and the phase changes
continuously and linearly.

Gaussian minimum-shift keying

In digital communication, Gaussian minimum-shift keying or GMSK is a continuous-phase
frequency-shift keying modulation scheme. It is similar to standard minimum-shift keying (MSK);
however, the digital data stream is first shaped with a Gaussian filter before being applied to a
frequency modulator. This has the advantage of reducing sideband power, which in turn reduces
out-of-band interference between signal carriers in adjacent frequency channels. However, the
Gaussian filter increases the modulation memory in the system and causes intersymbol
interference, making it more difficult to discriminate between different transmitted data values
and requiring more complex channel equalization algorithms such as an adaptive equalizer at the
receiver. GMSK has high spectral efficiency, but it needs a higher power level than QPSK, for
instance, in order to transmit reliably the same amount of data.

GMSK is most notably used in the Global System for Mobile Communications (GSM).
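A minimal baseband sketch of the Gaussian pre-filtering step described above. The Gaussian impulse response follows the usual textbook form with BT = 0.3 (as in GSM); the bit rate, oversampling factor and the modulation index of 0.5 are assumptions made for the illustration and are not taken from the text.

import numpy as np

sps = 16                       # samples per bit (oversampling), assumed
T = 1.0                        # bit duration (normalised)
BT = 0.3                       # Gaussian bandwidth-time product, as in GSM
B = BT / T
dt = T / sps
h_mod = 0.5                    # modulation index of the MSK family

# Gaussian low-pass impulse response, truncated to +/- 2 bit periods and
# normalised to unit dc gain for the discrete convolution
tt = np.arange(-2 * T, 2 * T + dt / 2, dt)
g = np.sqrt(2 * np.pi / np.log(2)) * B * np.exp(-2 * (np.pi * B * tt) ** 2 / np.log(2))
g /= g.sum()

bits = np.random.randint(0, 2, 64)
nrz = np.repeat(2 * bits - 1, sps).astype(float)   # rectangular +/-1 pulses
shaped = np.convolve(nrz, g, mode="same")          # Gaussian-shaped frequency pulses

# Integrate the shaped stream to obtain the phase, then form the I/Q baseband
phase = np.pi * h_mod * np.cumsum(shaped) * dt / T
baseband = np.exp(1j * phase)

print(np.allclose(np.abs(baseband), 1.0))          # constant envelope, as with MSK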

Encoding

The simplest and most common form of ASK operates as a switch, using the presence of a carrier
wave to indicate a binary one and its absence to indicate a binary zero. This type of modulation is
called on-off keying (OOK), and is used at radio frequencies to transmit Morse code (referred to as
continuous wave operation).
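A minimal on-off keying sketch of this switching behaviour (carrier frequency, bit rate and the example bit pattern are assumed for illustration):

import numpy as np

fs, fc, Rb = 100_000, 5_000, 1_000           # sample rate, carrier, bit rate (Hz), assumed
bits = np.array([1, 0, 1, 1, 0, 0, 1, 0])
sps = fs // Rb

gate = np.repeat(bits, sps)                  # 1 -> carrier present, 0 -> carrier absent
t = np.arange(len(gate)) / fs
ook = gate * np.cos(2 * np.pi * fc * t)

# Non-coherent detection: rectify, average over each bit period, threshold
energy = np.abs(ook).reshape(len(bits), sps).mean(axis=1)
decided = (energy > energy.max() / 2).astype(int)
print(np.array_equal(decided, bits))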

More sophisticated encoding schemes have been developed which represent data in groups using
additional amplitude levels. For instance, a four-level encoding scheme can represent two bits
with each shift in amplitude; an eight-level scheme can represent three bits; and so on. These
forms of amplitude-shift keying require a high signal-to-noise ratio for their recovery, as by their
nature much of the signal is transmitted at reduced power.

Here is a diagram showing the ideal model for a transmission system using an ASK modulation:

It can be divided into three blocks. The first one represents the transmitter, the second one is a
linear model of the effects of the channel, the third one shows the structure of the receiver. The
following notation is used:

ht(t) is the carrier signal for the transmission
hc(t) is the impulse response of the channel
n(t) is the noise introduced by the channel
hr(t) is the filter at the receiver
L is the number of levels that are used for transmission
Ts is the time between the generation of two symbols

Different symbols are represented with different voltages. If the maximum allowed value for the
voltage is A, then all the possible values are in the range [-A, A] and they are given by

vi = A (2i/(L - 1) - 1),  i = 0, 1, ..., L - 1

so that the difference between one voltage and the next is 2A/(L - 1).

Considering the picture, the symbols v[n] are generated randomly by the source S, then the
impulse generator creates impulses with an area of v[n]. These impulses are sent to the filter ht to be
sent through the channel. In other words, for each symbol a different carrier wave is sent with the
relative amplitude.

Out of the transmitter, the signal s(t) can be expressed in the form:

In the receiver, after the filtering through hr (t) the signal is:

where we use the notation:

nr(t) = n(t) * hr(t)

g(t) = ht(t) * hc(t) * hr(t)

where * indicates the convolution between two signals. After the A/D conversion the signal z[k]
can be expressed in the form:

In this relationship, the second term represents the symbol to be extracted. The others are
unwanted: the first one is the effect of noise, and the third one is due to intersymbol interference.
If the filters are chosen so that g(t) satisfies the Nyquist ISI criterion, then there will be no
intersymbol interference and the value of the sum will be zero, so:

z[k] = nr[k] + v[k]g[0]

the transmission will be affected only by noise.

Probability of error

The probability density function to make an error after a certain symbol has been sent can be
modelled by a Gaussian function; the mean value will be the relative sent value, and its variance
will be given by:

where N(f) is the spectral density of the noise within the band and Hr(f) is the continuous Fourier
transform of the impulse response of the filter hr(t).

The probability of making an error is given by a weighted sum over the symbols,

where P(e | vi) is the conditional probability of making an error given that the symbol vi has been
sent, and P(vi) is the probability of sending the symbol vi.

If the probability of sending any symbol is the same, then the total error probability is simply the
average of the conditional error probabilities.

If we represent all the probability density functions on the same plot against the possible value of
the voltage to be transmitted, we get a picture like this (the particular case of L=4 is shown):

The probability of making an error after a single symbol has been sent is the area of the Gaussian
function falling under the other ones. It is shown in cyan for just one of them. If we call P+ the area
under one side of the Gaussian, the sum of all the areas will be 2LP+ - 2P+. The total probability
of making an error can then be expressed in the form

Pe = 2 (1 - 1/L) P+

We now have to calculate the value of P+. In order to do that, we can move the origin of the
reference wherever we want: the area below the function will not change. We are in a situation
like the one shown in the following picture:

It does not matter which Gaussian function we are considering; the area we want to calculate will
be the same. The value we are looking for is given by an integral over the Gaussian tail, which can
be written in terms of erfc(), the complementary error function. Putting all these results together
gives the final expression for the error probability. From this formula we can easily understand
that the probability of making an error decreases if the maximum amplitude of the transmitted
signal or the amplification of the system becomes greater; on the other hand, it increases if the
number of levels or the power of noise becomes greater.

This relationship is valid when there is no intersymbol interference, i.e. g(t) is a Nyquist function.

UNIT III
SOURCE CODES, LINE CODES AND ERROR CONTROL

PART A

1. Define entropy?

In information theory, entropy is a measure of the uncertainty associated with a random
variable. The term by itself in this context usually refers to the Shannon entropy, which quantifies,
in the sense of an expected value, the information contained in a message, usually in units such as
bits.
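A small sketch of this expected-value definition, computing H = Σ Pk log2(1/Pk) for an arbitrary distribution (the probabilities used are just an example):

import math

def entropy(probabilities):
    # Shannon entropy in bits: sum of p * log2(1/p) over non-zero p
    return sum(p * math.log2(1 / p) for p in probabilities if p > 0)

print(entropy([0.5, 0.25, 0.125, 0.125]))   # 1.75 bits/message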

2. What is the rate of information?

If a message source generates messages at the rate of r messages/second, the rate of
information R is defined as the average number of bits of information per second. Now H is the
average number of bits of information per message, so

R = rH bits/sec

3. An event has six possible outcomes with the probabilities

P1 = 1/2, P2 = 1/4, P3 = 1/8, P4 = 1/16, P5 = 1/32, P6 = 1/32

Find the entropy of the system. Also find the rate of information if there are 16 outcomes/second.

Solution:

The entropy is

H = Σ (k = 1 to 6) Pk log(1/Pk)
  = (1/2) log 2 + (1/4) log 4 + (1/8) log 8 + (1/16) log 16 + (1/32) log 32 + (1/32) log 32
  = 31/16 bits/message

Now r = 16 outcomes/sec.

The rate of information R is

R = rH = 16 x (31/16) = 31 bits/sec
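A two-line numerical check of this result (Python, logarithms to base 2):

import math

P = [1/2, 1/4, 1/8, 1/16, 1/32, 1/32]
H = sum(p * math.log2(1 / p) for p in P)   # entropy in bits/message
r = 16                                     # outcomes per second
print(H)        # 1.9375 = 31/16 bits/message
print(r * H)    # 31.0 bits/sec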

4. Define Channel Capacity

It is defined as the maximum of mutual information:

C = max I(X, Y) = max [H(X) - H(X/Y)]

where I(X, Y) is a measure of the average information per symbol transmitted in the system.

5. Define transmission efficiency

Transmission efficiency or channel efficiency is defined as

η = (actual transinformation) / (maximum transinformation)
  = I(X; Y) / max I(X, Y) = I(X; Y) / C

Redundancy of the channel is defined as R = 1 - η = [C - I(X; Y)] / C

6. Define Coding?

Coding is done to improve the efficiency of the communication system.

7. Define NRZ (UNRZ)

NRZ: Symbol 1 is represented by transmitting a pulse of constant amplitude for the entire
duration of the bit interval, and symbol 0 is represented by no pulse, NRZ indicates that the
assigned amplitude level is maintained throughout the entire bit interval.
8. Define RZ : (URZ)

Symbol 1 is represented by a positive Pulse that return to zero before the end of the bit
interval and symbol 0 is represented by the absence of pulse.

9. Define AMI

Positive and negative pulses are used alternately for symbol 1, and no pulse is used for
symbol 0. In either case the pulse returns to zero before the end of the bit interval.

10. Define Split Phase (Manchester)


Symbol 1 is represented by a positive pulse followed by a negative pulse, with both pulses being of
equal amplitude and half-bit duration; for symbol 0, the polarities of these pulses are reversed.

11. Define Block Codes.

In block codes, each block of k message bits is encoded into a block of n bits (n > k), as shown
in figure. The check bits are derived from the message bits and are added to them. The n-bit
block of channel encoder output is called a codeword, and the codes in which the message bits
appear at the beginning of a codeword are called systematic codes.

Figure: Block encoder; the message (k bits) enters the channel block encoder and each output
code block consists of the k message bits followed by r = n - k check bits.

PART B
1. Describe the various properties of channel capacity?

Channel Capacity:

The mutual information I (X,Y) indicates a measure of the average information per symbol
transmitted in the system. A suitable measure for efficiency of transmission of information may
be introduced by comparing the actual rate and the upper bound of the rate of information
transmission for a given channel. Shannon has introduced a significant concept of channel
capacity defined as the maximum of mutual information. Thus, the channel capacity C is given by

C = max I(X, Y) = max [H(X) - H(X/Y)]   ...(1)

The transmission efficiency or channel efficiency is defined as

η = (actual transinformation) / (maximum transinformation)

or η = I(X; Y) / max I(X, Y) = I(X; Y) / C   ...(2)

The redundancy of the channel is defined as R = 1 - η = [C - I(X, Y)] / C   ...(3)
Noise free channel

For a noise-free channel, the mutual information reduces to

I(X; Y) = H(X)

Hence, the channel capacity in this case is

C = max H(X) = log M bits/message   ...(1a)

where M is the total number of messages.

Symmetric Channel

A symmetric channel is defined as one for which (i) H(Y/xj) is independent of j, i.e., the entropy
corresponding to each row of [P(Y/X)] is the same, and (ii) Σ (j = 1 to m) P(yk/xj) is independent of
k, i.e., the sum of each of the columns of [P(Y/X)] is the same.

It can be seen that a channel is symmetric if the rows and the columns of the channel matrix
D = P(Y/X) are each identical except for permutations. If D is a square matrix, then for a
symmetric channel, the rows and columns are identical, except for permutations. The following
examples will make the concept of a symmetric channel clear:

(a) P(Y/X) =
    | 1/2  1/4  1/4 |
    | 1/4  1/2  1/4 |
    | 1/4  1/4  1/2 |
This is a symmetric channel, as the rows and columns are identical except for permutations;
each contains one 1/2 and two 1/4.

(b) P(Y/X) =
    | 1/2  1/4  1/4 |
    | 1/4  1/2  1/4 |
    | 1/4  1/4  1/2 |
This is a symmetric channel, as the rows and columns are identical except for permutations;
each contains one 1/2 and two 1/4.

(c) P(Y/X) =
    | 1/3  1/6  1/6  1/3 |
    | 1/6  1/3  1/3  1/6 |
This is a symmetric channel, as each row contains two 1/3 and two 1/6, and each column
contains one 1/3 and one 1/6.

(d) P(Y/X) =
    | 1/3  1/6  1/6  1/3 |
    | 1/3  1/3  1/6  1/6 |
This is not a symmetric channel, as although the rows are identical except for permutations,
the columns are not.

(e) P(Y/X) =
    | 0.4  0.6 |
    | 0.3  0.7 |
    | 0.6  0.4 |
    | 0.7  0.3 |
This is not a symmetric channel, as although the columns are identical except for permutations,
the rows are not.

[Note that since the rows of the above matrices are complete probability schemes, the sum of each
row in each matrix is unity.]

For a symmetric channel,

I(X, Y) = H(Y) - H(Y/X)
        = H(Y) - Σ (j = 1 to m) H(Y/xj) p(xj)
        = H(Y) - A Σ (j = 1 to m) p(xj)

where A = H(Y/xj) is independent of j and hence is taken outside the summation sign.

Also,

Σ (j = 1 to m) p(xj) = 1

Hence, I(X, Y) = H(Y) - A   ...(2.1)

The channel capacity of symmetric channels is

C = max I(X, Y)
  = max [H(Y) - A]
  = max [H(Y)] - A

or

C = log n - A

where n is the total number of receiver symbols, since max H(Y) = log n.

Binary Symmetric Channel (BSC)

The most important case of a symmetric channel is the Binary Symmetric Channel (BSC).
In this case m = n = 2, and the channel matrix is

D = P(Y/X) = | p      1 - p |  =  | p  q |
             | 1 - p  p     |     | q  p |

The BSC can be represented graphically as shown in the figure.

Example

For the BSC shown in figure (a), find the channel capacity for (i) p = 0.9; (ii) p = 0.6.

C = log n - A
  = log 2 - H(Y/xj)
  = log 2 + Σ (k = 1 to 2) p(yk/xj) log p(yk/xj)
  = log 2 + p log p + (1 - p) log (1 - p)
  = 1 + p log p + q log q
  = 1 - H(p)
  = 1 - H(q)

Figure: (a)

(i) For p = 0.9,
C = 1 + 0.9 log 0.9 + 0.1 log 0.1
  = 0.531 bit/message

(ii) For p = 0.6,
C = 1 + 0.6 log 0.6 + 0.4 log 0.4
  = 0.029 bit/message
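Both numerical answers can be checked in a few lines (logarithms to base 2 throughout):

import math

def bsc_capacity(p):
    # C = 1 - H(p): p and q = 1 - p are the two transition probabilities,
    # and by symmetry C = 1 - H(p) = 1 - H(q)
    q = 1 - p
    return 1 + p * math.log2(p) + q * math.log2(q)

print(round(bsc_capacity(0.9), 3))   # 0.531 bit/message
print(round(bsc_capacity(0.6), 3))   # 0.029 bit/message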

Cascaded Channels

Sometimes channels are to be cascaded for some reason. Let us consider the case of two
cascaded Binary Symmetric Channels as shown in figure. The analysis of these cascaded channels
is as follows:

Figure:

The message from x1 reaches z1 in two ways: x1 - y1 - z1 and x1 - y2 - z1. The respective path
probabilities are p.p and q.q.

Hence,

p' = p^2 + q^2 = (p + q)^2 - 2pq = 1 - 2pq

Similarly, the message from x1 reaches z2 in two ways: x1 - y1 - z2 and x1 - y2 - z2. The respective
path probabilities are p.q and q.p.

Hence,

q' = pq + pq = 2pq

[As a check, it can be seen that p' + q' = (1 - 2pq) + (2pq) = 1.]

Thus, the channel matrix of the cascaded channel is

P(Z/X) = | 1 - 2pq   2pq     |  =  | p'  q' |   ...(4.1)
         | 2pq       1 - 2pq |     | q'  p' |

Thus, the cascaded channel is equivalent to a single Binary Symmetric Channel with error
probability q' = 2pq. We know that the channel capacity of a BSC is given by

C = 1 - H(q)

Hence, the channel capacity of the cascaded channel is

C' = 1 - H(q') = 1 - H(2pq)   ...(4.2)

For 0.5 > q > 0, 2pq is greater than q. Hence, the capacity of the channel formed by two cascaded
BSCs is less than that of a single BSC, as expected.
Binary Erasure Channel (BEC)

A Binary Erasure Channel (BEC) has two inputs (0, 1) and three outputs (0, y, 1), as shown in
figure. The BEC is also very important. Here, 0 and 1 are transmitted, and they are received as 0, y
and 1. The symbol y indicates that, due to noise, no deterministic decision can be made as to
whether the received symbol is a 0 or a 1; in other words, the symbol y indicates that the output is
erased. Hence the name Binary Erasure Channel. In practice, whenever the decision is in favour of
y, i.e., whenever a deterministic decision in favour of 0 or 1 is not possible, the receiver requests
the transmitter for re-transmission till the decision is taken either in favour of 0 or in favour of 1.

For BEC, the channel matrix is

D = [P(Y/X)] = | p  q  0 |
               | 0  q  p |

Figure:

Let us assume that p(0) = α and p(1) = 1 - α at the transmitter. Hence,

H(X) = α log(1/α) + (1 - α) log(1/(1 - α))

Now, since p(x1) = p(0) = α and p(x2) = p(1) = (1 - α), the joint probability matrix P(X, Y) can be found
by multiplying the rows of P(Y/X) by α and (1 - α), respectively.

Hence,

P(X, Y) = | αp   αq         0         |
          | 0    (1 - α)q   (1 - α)p  |

The summation of the columns gives

p(y1) = αp,  p(y2) = αq + (1 - α)q = q,  p(y3) = (1 - α)p

The conditional probability matrix P(X/Y) can be found by dividing the columns of P(X, Y) by
p(y1), p(y2) and p(y3), respectively. Thus,

P(X/Y) = | αp/αp   αq/q         0                   |  =  | 1   α       0 |
         | 0       (1 - α)q/q   (1 - α)p/(1 - α)p   |     | 0   1 - α   1 |

Now,

H(X/Y) = - Σ (j = 1 to 2) Σ (k = 1 to 3) p(xj, yk) log p(xj/yk)
       = - [αp log 1 + αq log α + (1 - α)q log(1 - α) + (1 - α)p log 1]
       = q H(X)
       = (1 - p) H(X)   since q = 1 - p

In this case, the mutual information and channel capacity are

I(X, Y) = H(X) - H(X/Y)
        = H(X) - (1 - p) H(X)
        = p H(X)

and

C = max I(X, Y)
  = max [p H(X)]
  = p max H(X)
  = p   since max [H(X)] = 1

Repetition of Signals

The concept of BEC is used in the following way:

The transmitted signal is repeated at the channel input to increase the channel efficiency, as
shown in figure. In this case, the acceptable output signals are y1 = 00 and y2 = 11. The outputs y3 =
01 and y4 = 10 are discarded (erased) and a request for re-transmission is made, as in the BEC.

The channel matrix is


              y1     y2     y3    y4
P(Y/X) = x1 | p^2    q^2    pq    pq |
         x2 | q^2    p^2    pq    pq |


Figure:

Let us assume p(x1) = p(x2) = 0.5

Therefore,

              y1       y2       y3      y4
P(X, Y) = x1 | p^2/2    q^2/2    pq/2    pq/2 |
          x2 | q^2/2    p^2/2    pq/2    pq/2 |

Hence,

p(y1) = p(y2) = (p^2 + q^2)/2
and
p(y3) = p(y4) = pq

Therefore,

H(Y) = p(y1) log[1/p(y1)] + p(y2) log[1/p(y2)] + p(y3) log[1/p(y3)] + p(y4) log[1/p(y4)]
     = (p^2 + q^2) log[2/(p^2 + q^2)] + 2pq log[1/(pq)]   ...(6.1)

H(Y/X) = p^2 log(1/p^2) + q^2 log(1/q^2) + pq log[1/(pq)] + pq log[1/(pq)]   ...(6.2)

I(X; Y) = H(Y) - H(Y/X)
        = (p^2 + q^2) log[2/(p^2 + q^2)] + 2pq log[1/(pq)]
          - p^2 log(1/p^2) - q^2 log(1/q^2) - 2pq log[1/(pq)]
        = (p^2 + q^2) log[2/(p^2 + q^2)] + p^2 log p^2 + q^2 log q^2
        = (p^2 + q^2) [1 + log(1/(p^2 + q^2))] + p^2 log p^2 + q^2 log q^2
        = (p^2 + q^2) [1 + (p^2/(p^2 + q^2)) log(p^2/(p^2 + q^2))
                         + (q^2/(p^2 + q^2)) log(q^2/(p^2 + q^2))]
        = (p^2 + q^2) [1 - H(q^2/(p^2 + q^2))]   ...(6.3)

Thus, the channel is now equivalent to a BSC with error probability q' = q^2/(p^2 + q^2). Since
q' < q, the mutual information I(X, Y) is greater than the original value (i.e., where there is no
repetition of signals), 1 - H(q).

Binary Channel

Although it is easy to analyze a BSC, in practice we come across binary channels with
non-symmetric structures. A binary channel is shown in figure. The channel matrix is

Figure:

D = P(Y/X) = | P11  P12 |
             | P21  P22 |

To find the channel capacity of a binary channel, the auxiliary variables Q1, Q2 are defined by

[P][Q] = [H]

or

| P11  P12 | | Q1 |  =  | P11 log P11 + P12 log P12 |   ...(7.1)
| P21  P22 | | Q2 |     | P21 log P21 + P22 log P22 |

The channel capacity is then given by

C = log(2^Q1 + 2^Q2)   ...(7.2)

2. Describe the various source coding methods?

Shannon Fano Coding

This method of coding is directed towards constructing reasonably efficient separable


binary codes. Let [X] be the ensemble of the messages to be transmitted, and [P] be their
corresponding probabilities. The sequence Ck of binary numbers of the length nk associated to
each message xk should fulfil the following conditions.

(1) No sequence of employed binary numbers Ck can be obtained from another by adding
more binary digits to the shorter sequence (prefix property).
(2) The transmission of the encoded message is reasonably efficient; i.e., 1 and 0 appear
independently and with almost equal probabilities.
The actual procedure of the Shannon-Fano coding is as follows:

The messages are first written in the order of non-increasing probabilities. The message set
is then partitioned into two most nearly equi-probable subsets [X1] and [X2]. A 0 is assigned to each
message contained in one subset, and a 1 to each message contained in the other subset. The same
procedure is repeated for the subsets of [X1] and [X2]; i.e., [X1] will be partitioned into two subsets
[X11] and [X12], and [X2] into [X21] and [X22]. The codewords in [X11] will start with 00, those in
[X12] will start with 01, those in [X21] will start with 10, and those in [X22] will start with 11. The
procedure is continued until each subset contains only one message. Note that each digit (0 or 1)
in each partitioning of the probability space appears with more or less equal probability and is
independent of the previous or subsequent partitioning. Hence, P(0) and P(1) are also more or
less equal.

Example:

Apply the Shannon-Fano coding procedure to the following message ensemble:

[X] = [x1   x2   x3    x4    x5    x6   x7    x8]
[P] = [1/4  1/8  1/16  1/16  1/16  1/4  1/16  1/8]

Take M = 2.

Solution:

L = Σ (k = 1 to 8) pk nk
  = (1/4)(2) + (1/8)(3) + (1/16)(4) + (1/16)(4) + (1/16)(4) + (1/4)(2) + (1/16)(4) + (1/8)(3)
  = 2.75 letters/message

H(X) = - Σ (k = 1 to 8) pk log pk
     = (1/4) log 4 + (1/8) log 8 + (1/16) log 16 + (1/16) log 16 + (1/16) log 16
       + (1/4) log 4 + (1/16) log 16 + (1/8) log 8
     = 2.75 bits/message

log M = log 2 = 1 bit/letter

η = H(X) / (L log M) = 2.75 / (2.75 x 1) = 100%
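The figures of this example can be verified numerically, using the code lengths 2, 3, 4, 4, 4, 2, 4 and 3 implied by the partitioning:

import math

P = [1/4, 1/8, 1/16, 1/16, 1/16, 1/4, 1/16, 1/8]
n = [2, 3, 4, 4, 4, 2, 4, 3]           # Shannon-Fano code lengths for these messages

L = sum(p * nk for p, nk in zip(P, n))
H = sum(p * math.log2(1 / p) for p in P)
print(L, H, H / L)                      # 2.75, 2.75, efficiency = 1.0 (100%)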

HUFFMAN CODING

The Huffman coding method leads to the lowest possible value of L for a given M,
resulting in a maximum efficiency η. Hence, it is also known as the minimum-redundancy code, or
optimum code. The procedure is as follows:
(1) The N messages are arranged in order of non-increasing probability.
(2) The probabilities of the [N - K(M - 1)] least likely messages are combined, where K is the highest
integer that gives a positive value to the bracket, and the resulting [K(M - 1) + 1] probabilities are
re-arranged in a non-increasing manner. This step is called reduction. The reduction procedure is
repeated as often as necessary, by taking M terms every time, until there remain M ordered
probabilities. It may be noted that by combining [N - K(M - 1)], and not M, terms in the first
reduction, it is ensured that there will be exactly M terms in the last reduction.
(3) Encoding begins with the last reduction, which consists of exactly M ordered probabilities .
The first element of the encoding alphabet is assigned as the first digit in the codeword for all
source messages associated with the first probability of the last reduction. Similarly, the second
element of the encoding alphabet is assigned as the second digit in the codewords for all source
messages associated with the second probability of last reduction and so on.

The same procedure is repeated for the second from last reduction, to the first reduction, in that
order.

Example

Solve by the Huffman method.


Solution:

As per step (2) of procedure, last two terms should be combined in the first reduction.

Explanation of the construction of the codeword C4 for the message X4: The dashed line shows the
path for deciding the codeword C4 for the message X4. The 1 encountered in the fifth reduction
becomes the first digit from the left in C4. The 1 encountered in the fourth reduction becomes the
second digit from the left in C4. The 0 encountered in the third reduction becomes the third digit
from the left in C4. No digit is encountered in the second reduction. The 1 encountered in the first
reduction becomes the fourth digit from the left. Thus, the codeword C4 is C4 = 1101. In the same
way, the other codewords can be formed.

Codeword     Length

C1 = 0         1
C2 = 111       3
C3 = 101       3
C4 = 1101      4
C5 = 1100      4
C6 = 1001      4
C7 = 1000      4

L = Σ (k = 1 to 7) pk nk
  = (0.4 x 1) + (0.2 x 3) + (0.12 x 3) + (0.08 x 4) + (0.08 x 4) + (0.08 x 4) + (0.04 x 4)
  = 2.48 letters/message

This is the same as the one obtained in the second answer of example which gives the maximum
efficiency.
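A compact binary Huffman sketch that reproduces the average length of 2.48 letters/message. The probability set (0.4, 0.2, 0.12, 0.08, 0.08, 0.08, 0.04) is the one implied by the figures above; the individual codewords produced may differ from the table, since Huffman codes are not unique, but the optimal lengths and the value of L are the same.

import heapq

def huffman_lengths(probs):
    # Optimal binary codeword lengths via Huffman's algorithm.
    # Heap items: (probability, unique id, list of leaf indices in this subtree)
    heap = [(p, i, [i]) for i, p in enumerate(probs)]
    heapq.heapify(heap)
    lengths = [0] * len(probs)
    uid = len(probs)
    while len(heap) > 1:
        p1, _, leaves1 = heapq.heappop(heap)
        p2, _, leaves2 = heapq.heappop(heap)
        for leaf in leaves1 + leaves2:     # every merge adds one bit to these leaves
            lengths[leaf] += 1
        heapq.heappush(heap, (p1 + p2, uid, leaves1 + leaves2))
        uid += 1
    return lengths

P = [0.4, 0.2, 0.12, 0.08, 0.08, 0.08, 0.04]
n = huffman_lengths(P)
print(n)                                   # e.g. [1, 3, 3, 4, 4, 4, 4]
print(sum(p * k for p, k in zip(P, n)))    # 2.48 letters/message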

3. Describe about the Noiseless coding in source coding?

In practical communication systems, it is necessary to transform the m-ary source alphabet
(x1 ... xm) into a convenient form, say binary or D-ary digits, to match the channel. Since the rate of
transmission of information in a noisy channel is maximum if the source probabilities p(xi) are all
equal, i.e., H(X) = log m (this will be shown in a later section), it is desirable that the transformed
code alphabets (C1, C2, ..., CD) have equal probability of occurrence; hence the technique is also
known as entropy coding.

To avoid ambiguity in deciphering the codes, as well as to minimize the time of transmission of
the message, the codes should have the following properties:

1. The codes should be separable or uniquely decodable. They should also be comma-free, i.e., no
synchronizing signal should be required to recognize the words. This restricts the selection of
codes in such a way that no shorter codeword can be a prefix of a longer codeword. Such codes are
also called instantaneous codes.

2. The average length of the codewords,

L = Σ pi Li

where Li is the length of the ith codeword, should be minimum, and L should approach the value
H(X)/log D, with the condition L ≥ H(X)/log D. (This will be proved later.) Since Σ pi Li is to be
minimum, the codewords with larger Li should have smaller pi; i.e., Li ∝ log(1/pi).
3. The code efficiency is defined as

η = H(X) / (L log D)

and η → 1 for the optimum (also called compact) codes.

Further, the redundancy of the codes is defined as T = 1 - η.

4. Explain about the block codes?

In block codes (also known as arithmetic codes, or group codes), each block of k message
bits is encoded into a block of n bits (n > k), as shown in figure. The check bits are derived from the
message bits and are added to them. The n-bit block of a channel encoder output is called a codeword
and codes (or coding schemes) in which the message bits appear at the beginning of a codeword,
are called systematic codes.

Parity Check Codes

The simplest possible block code is obtained when the number of check bits is one. These are
known as parity check codes. When the check bit is such that the total number of 1s in the
codeword is even, it is an even parity check code, and when the check bit is such that the total
number of 1s in the codeword is odd, it is an odd parity check code.

The following example explains the parity check code:

Code for even parity:

Message    Check bit
010011     1
101110     0

Code for odd parity:

Message    Check bit
010011     0
101110     1
If a single error occurs in a received message, it can be immediately detected, although the
position of the erroneous bit cannot be determined. Thus, with this code, though a single error
can be detected, it cannot be corrected.
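The check bits in the table above can be generated and verified with a couple of lines:

def parity_bit(message, even=True):
    # Check bit that makes the total number of 1s even (or odd)
    ones = message.count("1")
    return str(ones % 2 if even else (ones + 1) % 2)

for msg in ("010011", "101110"):
    print(msg, "even:", parity_bit(msg), "odd:", parity_bit(msg, even=False))
# A single flipped bit changes the parity, so the error is detected,
# but its position (and hence the correction) remains unknown.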

Study of Binary Code Space

In this section certain important concepts, such as the weight of a code, Hamming distance,
etc., are introduced. The weight of a codeword is defined as the number of non-zero components
in it. For example,

Code word Weight


010110 3
101000 2
000000 0

The hamming distance between two code words is defined as the number of components in
which they differ.

For example,

Let U= 1010
V = 0111
W = 1001

Then, D(U,V) = distance between U and V =3

Similarly,

D(U,W) =2
And D(V,W) = 3

Mathematically, the Hamming distance can be defined as

D(U, V) = Σ (k = 1 to n) (αk ⊕ βk)   ...(1)

where

U = α1 α2 α3 ... αn
V = β1 β2 β3 ... βn

(the αs and βs are binary digits 0 or 1).

The notation ⊕ means modulo-2 addition, for which the rules are

0 ⊕ 0 = 0
0 ⊕ 1 = 1
1 ⊕ 0 = 1
1 ⊕ 1 = 0

Then, for U = 1010 and V = 0111, Eq. (1) gives

D(U, V) = (1 ⊕ 0) + (0 ⊕ 1) + (1 ⊕ 1) + (0 ⊕ 1) = [1 + 1 + 0 + 1] = 3
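The weight and distance definitions, together with the worked example, translate directly into code:

def weight(word):
    # Number of non-zero components in a codeword
    return word.count("1")

def hamming_distance(u, v):
    # Number of positions in which two equal-length words differ
    return sum(a != b for a, b in zip(u, v))

U, V, W = "1010", "0111", "1001"
print(weight("010110"), weight("101000"), weight("000000"))                    # 3, 2, 0
print(hamming_distance(U, V), hamming_distance(U, W), hamming_distance(V, W))  # 3, 2, 3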

The minimum distance of a block code is defined as the smallest distance between any pair
of codewords in the code.

Now, let us consider a block code with a minimum distance of two. If a single error occurs, a
word will be erroneously received as a meaningless word, i.e., a word that does not exist in the
codebook. Thus, in such a set-up any single error can be detected, but it cannot be corrected. This
will be clear from the following example:

Let us consider a block code of two digits with a minimum distance of two. Two codebooks
are possible: 00, 11 and 01, 10. Let our codebook be 01, 10. Now, with a single error, 01 may
be received either as 00 or as 11. Let us suppose that it is received as 00. Since 00 is not in our
codebook, an error has been detected. But a decision cannot be taken as to whether 01 or 10 was
transmitted, as both are at equal distance from 00. Hence, the error cannot be corrected. If we
have a codebook of minimum distance three, a single error can be corrected, as the distance of
the erroneous word is 1 from only one codeword, and more than 1 from all other codewords. For
example, if 000, 111 is our codebook, and if 001 is received, a decision can be taken that 000 was
transmitted, since the distance between 000 and 001 is one, whereas the distance between 111 and
001 is two.

By extending above ideas, the following data are given by Hamming.


Minimum distance Description of coding
1 Error cannot be detected
2 Single error detection
3 Single error correction
4 Single error correction plus double error detection
5 Double error correction
6 Double error correction plus triple error detection

In general, if n is the minimum distance of a block code, then

(i) (n - 1)/2 errors can be corrected if n is odd;
(ii) (n - 2)/2 errors can be corrected and n/2 errors can be detected if n is even.

It is interesting to find the maximum possible number of codewords of length n and minimum
distance d. This is given by B(n, d), and Hamming obtained the following values:

B(n, 1) = 2^n
B(n, 2) = 2^(n-1)
B(n, 3) = 2^m ≤ 2^n / (n + 1)
B(n, 4) = 2^m ≤ 2^(n-1) / n
B(n, 2k) = B(n - 1, 2k - 1)
B(n, 2k + 1) = 2^m ≤ 2^n / (1 + nC1 + nC2 + ... + nCk)

The equality in B(n, 3) is valid when 2^n / (n + 1) is an integer. Such codes are referred to as
close-packed codes. These are obtained by selecting n = 2^k - 1, where k is a positive integer. For
example, k = 2, 3, 4 gives n = 3, 7, 15, resulting in the close-packed codes B(3, 3), B(7, 3), B(15, 3)
respectively.

B(3, 3) = 2^3 / (3 + 1) = 2
B(7, 3) = 2^7 / (7 + 1) = 16
B(15, 3) = 2^15 / (15 + 1) = 2048

When n ≠ 2^k - 1, the maximum possible number of codewords is found from B(n, 3) < 2^n / (n + 1).
Thus, for n = 5,

B(5, 3) = 2^m ≤ 2^5 / (5 + 1), or 2^m ≤ 5.33, giving m = 2.
Hence, B(5, 3) = 2^2 = 4.

For n = 6, B(6, 3) = 2^m ≤ 2^6 / (6 + 1), or 2^m ≤ 9.14; thus m = 3.
Hence, B(6, 3) = 2^3 = 8.

The codebooks B(5, 3) and B(6, 3) may be found as

B(5, 3)          B(6, 3)
00000            000000    100110
01101            010101    110011
10110            111000    011110
11011            101101    001011

Linear Block Codes

If each of the 2^k codewords of a systematic code can be expressed as a linear combination of k
linearly independent code vectors, the code is called a linear block code, or systematic linear block
code.

There are two steps in the encoding procedure for linear block codes: (1) The information
sequence is segmented into message blocks of k successive information bits. (2) Each message
block is transformed into a larger block of n bits by an encoder according to some pre-determined
set of rules. The n-k additional bits are generated from linear combinations of the message bits.
The encoding operations can be described with the help of matrices. Let a message block be a row
vector

D = [d1 d2 ... dk]

where each message bit can be a 0 or a 1. Thus, we have 2^k distinct message blocks. Each message
block is transformed into a codeword C of length n bits,

C = [C1 C2 ... Cn]

by the encoder, and there are 2^k distinct codewords. It may be noted that there is one unique
codeword for each distinct message block. This set of 2^k codewords, also known as code vectors,
is called an (n, k) block code.

The rate efficiency of this code is k/n.

In a systematic linear block code, the first k bits of the codeword are the message bits, i.e.

Ci = di,   i = 1, 2, ..., k   ...(1)

The last n - k bits in the codeword are check bits generated from the k message bits according to
some predetermined rule:

C(k+1) = p11 d1 ⊕ p21 d2 ⊕ ... ⊕ pk,1 dk
C(k+2) = p12 d1 ⊕ p22 d2 ⊕ ... ⊕ pk,2 dk
  .
  .                                                          ...(2)
  .
Cn     = p1,n-k d1 ⊕ p2,n-k d2 ⊕ ... ⊕ pk,n-k dk

The coefficients pi,j in equation (2) are 0s and 1s, so that the Ck are 0s and 1s. The additions in
equation (2) are modulo-2 additions. Equations (1) and (2) can be combined to give a matrix equation.

                                | 1 0 0 ... 0 : p11    p12    ... p1,n-k |
                                | 0 1 0 ... 0 : p21    p22    ... p2,n-k |
[C1 C2 ... Cn] = [d1 d2 ... dk] | 0 0 1 ... 0 : p31    p32    ... p3,n-k |   ...(3)
                                | .           : .                        |
                                | 0 0 0 ... 1 : pk,1   pk,2   ... pk,n-k | (k x n)

or C = DG   ...(4)

where G is the k x n matrix on the RHS of equation (3). It is called the generator matrix of the code
and is used in the encoding operation. It has the form

G = [Ik : P] (k x n)   ...(4a)

where Ik is the identity matrix of order k and P is an arbitrary k x (n - k) matrix. The matrix P
completely defines the (n, k) block code. The selection of a P matrix is an important step in the
design of an (n, k) block code, because the code generated by G then achieves certain desirable
properties such as ease of implementation, ability to correct errors, high rate efficiency, etc.
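A minimal sketch of the encoding operation C = DG over GF(2). The text leaves the P matrix arbitrary, so the (7, 4) P used below is just one illustrative choice; it is not claimed to be any particular standard code.

import numpy as np

k, n = 4, 7
# Example k x (n-k) matrix P; the identity block in G makes the code systematic
P = np.array([[1, 1, 0],
              [0, 1, 1],
              [1, 1, 1],
              [1, 0, 1]])
G = np.hstack([np.eye(k, dtype=int), P])        # G = [Ik : P], k x n

def encode(message_bits):
    # Systematic encoding: codeword = D G, all arithmetic modulo 2
    D = np.array(message_bits)
    return (D @ G) % 2

print(encode([1, 0, 1, 1]))    # first k bits are the message, last n-k are check bits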

Hamming's Single Error Correcting Code

We know that when a single error occurs, say in the ith bit of the codeword, the syndrome
of the received vector is equal to the ith row of H^T. Hence, if the n rows of the n x (n - k) matrix
H^T are chosen to be distinct, then the syndromes of all single errors will be distinct, and we can
correct single errors. Once H^T is chosen, the generator matrix G can be obtained by using the
earlier equations.

Each row in H^T has (n - k) entries. Each one of these entries could be a 0 or a 1. Hence, we
can have 2^(n-k) distinct rows of (n - k) entries, from which 2^(n-k) - 1 distinct rows of H^T can be
selected. (It is to be kept in mind that the row with all 0s cannot be selected.) Since the matrix H^T
has n rows, the condition for all of them to be distinct is

2^(n-k) - 1 ≥ n
or
(n - k) ≥ log2(n + 1)
or
n ≥ k + log2(n + 1)

Thus the minimum size n for the codeword can be determined. (Note that n has to be an integer.)

Cyclic Codes

Cyclic codes form a subclass of linear block codes. They are important for two reasons.
First, encoding and syndrome calculations can be easily implemented by using simple shift
registers with feedback connections. Second, the mathematical structure of these codes is such that
it is possible to design codes having useful error-correcting properties.

An (n, k) linear block code C is called a cyclic code if it satisfies the following property: if an
n-tuple

V = (v0, v1, ..., v(n-1))   ...(1)

is a code vector of C, then the n-tuple

V^(1) = (v(n-1), v0, v1, ..., v(n-2)),

which is obtained by shifting V cyclically one place to the right, is also a code vector of C. From
the above definition it is clear that

V^(i) = (v(n-i), v(n-i+1), ..., v(n-1), v0, v1, ..., v(n-i-1))   ...(2)

is also a code vector of C.

An example of a cyclic code:

It can be seen that the code 1011, 1101, 1110, 0111 is obtained by cyclic shifts of the n-tuple
1011 (n = 4). The code obtained by rearranging the four words is also a cyclic code; thus 1011,
1110, 1101, 0111 is also a cyclic code. (All 0s or all 1s can be words of any cyclic code, as all shifts
result in the same word.)

The codeword can be represented by a code polynomial as

V(x) = v0 + v1 x + v2 x^2 + ... + v(n-1) x^(n-1)   ...(3)

The coefficients of the polynomial are 0s and 1s, and they belong to a binary field which satisfies
the following rules of addition and multiplication:

0 + 0 = 0    0 . 0 = 0
0 + 1 = 1    0 . 1 = 0
1 + 0 = 1    1 . 0 = 0
1 + 1 = 0    1 . 1 = 1

(It can be seen that + here means the modulo-2 addition previously denoted by ⊕.)

Also, x^2 = x . x; x^3 = (x^2) . x = x . x . x; etc.

Now, we will state a theorem (without giving its proof) which is very useful for cyclic code
generation.

Theorem

If g(x) is a polynomial of degree (n - k) and is a factor of x^n + 1, then g(x) generates an (n, k)
cyclic code in which the code polynomial V(x) for a data vector D = (d0, d1, d2, ..., d(k-1)) is
generated by

V(x) = D(x) g(x)   ...(4)
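The theorem translates into a few lines of polynomial arithmetic over GF(2). The generator g(x) = 1 + x + x^3 used below is a factor of x^7 + 1 and therefore generates a (7, 4) cyclic code; it is chosen only as an illustration.

import numpy as np

def gf2_polymul(a, b):
    # Multiply two GF(2) polynomials given as coefficient lists (lowest degree first)
    out = np.zeros(len(a) + len(b) - 1, dtype=int)
    for i, ai in enumerate(a):
        if ai:
            out[i:i + len(b)] ^= np.array(b)
    return out

g = [1, 1, 0, 1]              # g(x) = 1 + x + x^3, a factor of x^7 + 1
d = [1, 0, 1, 1]              # data polynomial D(x) = 1 + x^2 + x^3  (k = 4)

v = gf2_polymul(d, g)         # code polynomial V(x) = D(x) g(x), degree < n = 7
print(v)                      # coefficients v0 ... v6 of the (7, 4) cyclic codeword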
5. Explain about the Convolutional codes?

Convolutional Codes

As already mentioned, in convolutional codes the message bit stream is encoded in a
continuous fashion, rather than piecemeal as in block codes. Convolutional codes are easily
generated with a shift register, as shown in figure.
Figure: (a) A four stage Shift Register

Storage (memory) devices, such as flip-flops, connected in cascade form a shift register.
Each flip-flop is capable of storing one bit. A four-bit shift register is shown in figure (a). M1, M2,
M3 and M4 are memory devices. A stream of binary data is applied to M1 in MSB (Most Significant
Bit) first fashion. S1, S2, S3 and S4 are outputs taken from M1, M2, M3 and M4 respectively. M1
stores the most recent bit of the input data stream and indicates its state on the output line S1.
Therefore, the output S1 is the same as the MSB of the input data stream. After one bit interval, the
bit stored in M1 shifts one stage to the right, i.e., to M2. Thus, the output S2 of M2 is the same as S1,
i.e., the input bit stream with a one-bit-interval delay. In this way, the input bit stream appears at
every output line with an increasing delay.

Figure: Successive States of Stages for the input Train 11001.


The operation of the shift register of the figure is explained in the figure. Here, it is assumed that
initially the shift register is clear, i.e., all memories are storing zeroes. A five-bit input data stream
is applied to the shift register, and the figure traces the path of the data stream through the register.
The input data stream is 11001, but for convenience of decoding, the train is represented against a
reversed time scale, i.e., the MSB, which is the bit on the extreme left of the input data stream, enters
the register first. With each succeeding bit, the contents of each memory device are shifted into
the next device. At the ninth bit interval, the register returns to its clear state after allowing the
input data stream to pass through it.
Encoder for Convolutional code

An encoder for a convolutional code is shown in figure. In this case,

k = number of shift-register stages = 3
v = number of modulo-2 adders = number of bits in the code block = 3
L = length of the input data stream = 4

The outputs v1, v2 and v3 of the adders are

Figure: Encoder for Convolutional code

It is assumed that, initially, the shift register is clear. The operation of the encoder is
explained for an input data stream of the four-bit sequence

M = 1101

This is entered into the shift register from the MSB. Thus, at the first bit interval, S1 = 1, S2 = 0,
S3 = 0. Now v1, v2 and v3 can be found from the equations above. Thus

v1 = 1 ⊕ 0 = 1,  v2 = 0 ⊕ 0 = 0,  v3 = 1 ⊕ 0 = 1

Hence, the output at the first bit interval is 101. Similarly, at the second bit interval, S1 = 1, S2 = 1,
S3 = 0. Thus v1 = 1 ⊕ 1 = 0, v2 = 1 ⊕ 0 = 1, v3 = 1 ⊕ 0 = 1. Hence, the output at the second bit
interval is 011.

In the same manner, the outputs at the other bit intervals can be found. Since L = 4 and k = 3,
the register resets at the seventh (L + k = 4 + 3 = 7) bit interval. The output at each bit interval
consists of v bits (in this case v = 3). Thus, for each message, there are v(L + k) bits in the output
codeword. Notice that each message bit remains in the shift register for k bit intervals. Hence,
each input bit has an influence on k groups of v bits, i.e., on vk output bits.

The table gives the coded output bit stream for all input data streams for the encoder shown in
figure. The MSB column of the input data stream is divided into two subsets (eight 0s and eight
1s), resulting in two subsets of the first code block of three bits in the coded output bit stream
(eight 000 and eight 101). Each of these two subsets of the MSB column is further divided into two
subsets (four 0s and four 1s) in the second MSB column, resulting in two subsets of the second
code block of three bits in the coded output bit stream (four 000, four 101 and four 110, four 011).
In the same way, each subset is further divided into two subsets, till there is only one code block
of three bits in each subset. Thus, it is possible to construct the code tree shown in figure from the
table, if the input data stream is entered from the MSB in the convolutional code encoder. On the
other hand, it is not possible to construct such a code tree if the input data stream is entered from
the LSB, as successive division into two subsets is not possible if we start from the LSB column.
Hence, in the convolutional encoder the input data stream is entered from the MSB and
not from the LSB.

Decoding a Convolutional Code

The Code Tree: The figure shows the code tree for the encoder of the earlier figure. It is derived from the table.
The starting point on the code tree is at the extreme left and corresponds to the situation before
the arrival of the first message bit. The first message bit may be either a 0 or a 1. When an input
bit is 0, the upward path is taken, and when it is 1, the downward path is taken. The same rule is
followed at each junction or node. The path through the tree shown by the dashed line is for the input
message 1101. The code for the input message 1101 can be found by reading the bits encountered
from the entrance to the exit of the tree along the dashed path. Thus, the desired code is 101 011 101
110 011 000, the same as in the table. Codes for other messages can be found with the help of the
appropriate path on the code tree.

Convolutional Code for the Encoder of the Figure:

Note that any path through the tree passes through only as many nodes (L) as there are bits in the
input message. A node corresponds to a point where alternate paths are possible, depending on
whether the next message bit is 1 or 0.

Decoding in the Presence of Noise: Exhaustive Search Method. In the absence of noise, the
codeword is received exactly as transmitted; hence, it is easy to reconstruct the original message.
But due to noise, the word that is received may not be the one transmitted. Decoding in the presence of
noise is done in the following manner (the procedure is explained for k = 3, L = 4, v = 3).

The first message bit has an effect on the first kv = 9 bits. From the code tree of the figure, it is
clear that there are eight possible combinations of the first nine digits which are acceptable
codewords. All these combinations are compared with the first nine bits of the received word,
and the path corresponding to the combination giving the minimum discrepancy is accepted as the
correct path.

If the path goes upwards at the first node A, then the first message bit is taken as 0, and if
the path goes downwards, then the first message bit is taken as 1. Say the path is downwards (as
shown by A-B in the figure). Then it is concluded that the first message bit is 1. Now we are at node
B. The second message bit will have an effect on the next nine bits, for which, again, there are eight
possible ways. Using the same procedure, the direction of the path at node B, and hence the
second message bit, is decided. In the same way, all the message bits are decided and the received
word is decoded.

It may be seen that the probability of error decreases exponentially with k. Hence k should
be made as large as possible. But, on the other hand, the decoding of each bit requires an
examination of the 2^k branch sections (in our case 2^k = 8 as k = 3) of the code tree. Hence, with a large
k, the decoding procedure becomes lengthy. Another method, known as sequential decoding, is
manageable even for a large k.

Sequential Decoding

The main advantage of sequential decoding is that it avoids the lengthy process of
examining every one of the 2^k possible branches of the code tree while decoding a single
message bit. In this method, on the arrival of a v-bit code block, the decoder compares these bits
with the code blocks of the branches diverging from the starting node. The decoder follows the
branch whose code block gives fewer discrepancies with the received code block. The same
procedure is repeated at each node.

Figure: Code Tree for the Encoder for figure 1


Figure 2(a) illustrates how a decoder decides that it has taken a wrong turn. Let P(e) be the
probability that a received bit is in error. Then the expected total number of errors after l nodes is
d(l) = v·l·P(e), where v is the number of bits in a code block and l is the number of nodes traversed.
If d(l) is plotted against l, the ideal plot is the straight line d(l) = v·l·P(e), and the correct-path curve
oscillates about this line within reasonable limits. If the decoder takes a wrong turn at any node, the
total number of errors increases rapidly after that node. When it crosses the discard level, the
decoder judges that it has made an error, retraces to the previous node and takes the
alternate turn. If it is still on an incorrect path, it will again retrace the path and follow the same
procedure. After a few retraces, the decoder finally follows the correct path. In the figure, (2) is the
incorrect path, (3) and (4) are retraced paths, and (1) is the correct path. Thus, the sequential
decoder operates mostly on short code blocks and reverts to a trial-and-error search over long code
blocks only when it judges that an error has been made. The end result is that sequential
decoding may generally be accomplished with much less computation than the exhaustive search
method.
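The first step of the exhaustive search described earlier, comparing the first kv = 9 received bits with the eight acceptable 9-bit prefixes and keeping the branch of minimum discrepancy, can be sketched as follows (Python). The candidate prefixes would be read off the code tree; the values shown here are placeholders for illustration only.

def hamming_distance(a, b):
    # number of positions in which two equal-length bit strings differ
    return sum(x != y for x, y in zip(a, b))

def best_first_branch(received9, candidate_prefixes):
    """candidate_prefixes maps each possible first message bit (0 or 1) to the
    four 9-bit tree prefixes reachable through it; return the bit whose best
    prefix has the minimum discrepancy with the first nine received bits."""
    return min(
        candidate_prefixes,
        key=lambda bit: min(hamming_distance(received9, p) for p in candidate_prefixes[bit]),
    )

# Placeholder prefixes for illustration only (a real decoder derives them from the tree)
candidates = {0: ["000000000", "000101011", "000110101", "000011110"],
              1: ["101011101", "101110011", "101101101", "101000110"]}
print(best_first_branch("101011111", candidates))   # -> 1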

Figure 2 (a)

UNIT IV

PART A

1. Define Frequency division multiple Access?


All users share the Satellite at the same time, but each transmits in its own unique
frequency band.

2. Define Time division multiple Access?

Only one user transmits at any time, and that user can use the entire available bandwidth, so
the instantaneous data rate is proportional to the available bandwidth.

3. Define code Division multiple Access?

Many users simultaneously transmit orthogonally coded spread spectrum signals that
occupy the same frequency band.

4. What is meant by multiple Access?

Multiple access is the ability of a large number of earth stations to simultaneously share a
Satellite for different services.

5. Mention the different types of multiple access?

There are three multiple access techniques:

Frequency division Multiple Access (FDMA)


Time division multiple Access (TDMA)
Code division multiple Access (CDMA)

PART B

1. Explain the Time division multiple Access

TDMA
Terminology: frames and time slots (a frame is a fixed block of time containing a number of time slots); each user gets a time
slot in the next frame (if he still needs it).

Figure:

Different ground terminals have different time slots (the same slot in each frame).
They buffer data, compress it, and transmit it in a burst (the transmission rate is higher than the data rate).
Each transmits a frame of data in a slot time.
Data is timed to arrive at the satellite in the proper time slot.
The satellite transponder receives it and retransmits it on the downlink to wherever it is destined.

Fixed Assignment TDMA

M time slots per frame, each preassigned (long term) to some ground station. Each time slot has a
preamble and then data.
The preamble contains sync, addressing and ECC sequences.

Variations:

Fixed assignment is great if source requirements are predictable and you can keep all time
slots mostly filled (e.g. N TV channels).

Improvements are needed if traffic is sporadic or bursty.

Slots can be dynamically allocated depending on who has data to send and how backed up the data
from each source is.

The terminology used is: packet-switched systems, concentrators, statistical multiplexers.

Combined FDMA/TDMA

Break the bandwidth W into M bands (each with the same bandwidth W/M). Break the frame T into N slots (each of the same duration T/N). Within one
frequency band, you have N users.

These can be the same N users for each frequency band, or they can be a different N users.
One user can (and does) have different time slots in different frequency bands (so he does not have to
transmit everything in T/N of time every T of time).

With a finer (MN) division of the CR you get better efficiency if a few users/sources are sporadic, and more flexibility.
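As a quick numeric illustration of the grid this creates (the numbers below are arbitrary example values, not taken from the text): with M frequency bands and N time slots per frame, the communication resource is divided into M·N elementary channels, each with bandwidth W/M available for T/N seconds of every frame. A minimal sketch in Python:

# Hypothetical partition of a communication resource for combined FDMA/TDMA
W_hz, T_s = 36e6, 2e-3      # assumed transponder bandwidth and frame length
M, N = 12, 30               # assumed number of frequency bands and time slots

channels = M * N                         # elementary channels in the grid
band_hz = W_hz / M                       # bandwidth of each FDMA band
slot_s = T_s / N                         # duration of each TDMA slot
print(channels, band_hz, slot_s)         # 360 channels of 3.0 MHz for ~66.7 us each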

2. Explain the Frequency division multiple Access


FDMA (early 1900s) - Frequency Division Multiple Access

Allow each user (permanently/long term) a frequency band; use guard bands.
Heterodyne-mix each user to his frequency band (modulate).
Simple 3-voice-channel example of heterodyning (keep only the lower sidebands).

TV, radio, all use this concept


Figure:

Satellite FDMA System

Geostationary orbit - its orbit matches the earth's rotation, so the satellites appear stationary with
respect to earth locations. Pretty complete coverage (non-shaded) with only 3 (not 24) satellites.

Concept - you transmit to the satellite (uplink); the satellite just amplifies it, since there is lots of
loss (the transponder is a repeater, non-regenerative, i.e. no processing to restore the data),
frequency-shifts it, and retransmits it to earth (downlink).
C band is most common (low atmospheric attenuation): fc = 6 GHz uplink and 4 GHz
downlink (newer systems use the Ku band, 14/12 GHz).

Each satellite is allowed 0.5 GHz of bandwidth.
Each has 12 transponders of 36 MHz BW each, covering 500 MHz.
They use FDM/FM/FDMA in multidestination mode.
Different destinations are served by phased-array beam steering and by frequencies.

FDMA
Telephone has used FDMA since the early 1900s.
1 speaker = 3.1 kHz; sample at 8 kHz; each user gets 4 kHz for calculations.
Group - mux 12 users (FDM) into a group.
Supergroup - mux 5 groups (60 users): 1 2 3 4 5

Figure:

Send these over cables - 240 kHz per supergroup (inter-city use).
Each group has a different destination (regionally) within the US.

Each supergroup can be put onto a different fc when sending, i.e. transmit one carrier with 60 FDM
signals on it, or transmit lots of carriers.

We mastered inter-city trunk lines and then undersea cables, but the BW was low (until optical was done).

FDMA did not need the sophisticated slot timing and sync mess needed in TDMA.
- With present clocks and VLSI, TDMA is in.

Satellite Version (phone version)

Combine multiple users (4 kHz each) onto one FDM carrier.
Use a number of carriers (from one or from different ground stations).
Different groups can correspond to destinations in different countries.
See VG (p. 509) (VG 4-78).
Consider 1 supergroup (5 groups), 12 users per group (bottom left).
Different frequency regions (B to F) correspond to different countries (or regions of the world).
Multiplex all groups onto one carrier (fA), FDM, and transmit.
The satellite transmits different portions (groups) of the data to the different country receivers.
The dashed line is one country's group of signals.
Each supergroup comes from 1 country (1 earth station).

Figure: Preassigned multidestination FDM/FM carriers. (Reprinted with permission from J.G.
Puente and A.M. Werth, Demand Assigned Service for the INTELSAT Global Network, IEEE,
Jan. 1971. © 1971 IEEE.)

TDMA vs. FDMA

A given communication resource (CR) has so much time t and frequency f.

Divide it up by allocating frequency bands (FDMA) or time slots (TDMA).
Each frequency band belongs to one earth station or one city, etc.

3. Explain the Code Division Multiple Access

CDMA:

With IS-95, each mobile user within a given cell and mobile subscribers in adjacent cells
use the same radio frequency channels. In essence, frequency reuse is available in all cells. This
is made possible because IS-95 specifies a direct-sequence spread-spectrum CDMA system
and does not follow the channelization principles of traditional cellular radio communications
systems. Rather than dividing the allocated frequency spectrum into narrow-bandwidth channels,
one for each user, information is transmitted (spread) over a very wide frequency spectrum, with
as many as 20 mobile subscriber units simultaneously using the same carrier frequency within the
same frequency band. Interference is incorporated into the system so that there is no limit to the
number of subscribers that CDMA can support. As more mobile subscribers are added to the
system, there is a graceful degradation of communications quality.

With CDMA, unlike other cellular telephone standards, subscriber data rates change in real time,
depending on the voice activity and the requirements of the network and other users of the network.
IS-95 also specifies different modulation and spreading techniques for the forward and reverse
channels. On the forward channel, the base station simultaneously transmits user data for all
current mobile units in that cell by using a different spreading sequence (code) for each user's
transmissions. A pilot code is transmitted with the user data at a higher power level, thus
allowing all mobile units to use coherent detection. On the reverse link, all mobile units respond
in an asynchronous manner (i.e., no time or duration limitations) with a constant signal level
controlled by the base station.

The speech coder used with IS-95 is the Qualcomm 9600-bps Code Excited Linear
Predictive (QCELP) coder. The vocoder converts an 8-kbps compressed data stream to a 9.6-kbps
data stream. The vocoder's original design detects voice activity and automatically reduces
the data rate to 1200 bps during silent periods. Intermediate mobile user data rates of 2400 bps
and 4800 bps are also used for special purposes. In 1995, Qualcomm introduced a 14,400-bps
vocoder that transmits 13.4 kbps of compressed digital voice information.

CDMA frequency and channel allocations:

CDMA reduces the importance of frequency planning within a given cellular market. The
AMPS U.S. cellular telephone system is allocated a 50-MHz frequency spectrum (25 MHz for
each direction of propagation), and each service provider (system A and system B) is assigned
half the available spectrum (12.5 MHz). AMPS common carriers must provide a 270-kHz guard
band (approximately nine AMPS channels) on either side of the CDMA frequency spectrum. To
facilitate a graceful transition from AMPS to CDMA, each IS-95 channel is allocated a 1.25-MHz
frequency spectrum for each one-way CDMA communications channel. This equates to 10% of
the total available frequency spectrum of each U.S. cellular telephone provider. CDMA channels
can coexist within the AMPS frequency spectrum by having a wireless operator clear a 1.25-MHz
band of frequencies to accommodate transmissions on the CDMA channel. A single CDMA radio
channel takes up the same bandwidth as approximately 42 of the 30-kHz AMPS voice channels.
However, because of the frequency reuse advantage of CDMA, CDMA offers approximately a 10-
to-1 channel advantage over standard analog AMPS and a 3-to-1 advantage over USDC
digital AMPS.

For reverse (mobile-to-base) operation, IS-95 specifies the 824-MHz to 849-MHz band, and for
forward (base-to-mobile) channels the 869-MHz to 894-MHz band. CDMA cellular systems also use a modified
frequency allocation plan in the 1900-MHz band. As with AMPS, the transmit and receive
carrier frequencies used by CDMA are separated by 45 MHz. The figure shows the frequency spacing
for two adjacent CDMA channels in the AMPS frequency band. As the figure shows, each CDMA
channel is 1.23 MHz wide with a 1.25-MHz frequency separation between adjacent carriers,
producing a 20-kHz guard band between CDMA channels. Guard bands are necessary to
ensure that the CDMA carriers do not interfere with one another. The figure also shows the CDMA
channel locations within the AMPS frequency spectrum. The lowest CDMA carrier frequency in
the A band is at AMPS channel 283, and the lowest CDMA carrier frequency in the B band is at
AMPS channel 384. Because the band available between channels 667 and 716 is only 1.5 MHz in the A
band, A-band operators have to acquire permission from B-band carriers to use a CDMA carrier in
that portion of the frequency spectrum. When a CDMA carrier is being used next to a non-CDMA
carrier, the carrier spacing must be 1.77 MHz. There are as many as nine CDMA carriers
available for the A- and B-band operators. Operators in the 1900-MHz frequency band have 30 MHz
of bandwidth, where they can facilitate up to 11 CDMA channels.

With CDMA, many users can share common transmit and receive channels with a
transmission data rate of 9.6 kbps. Using several techniques, subscriber information is
spread by a factor of 128 to a channel chip rate of 1.2288 Mchips/s, and the transmit and receive
channels use different spreading processes.
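The chip rate follows directly from the spreading factor; a one-line check in Python:

data_rate_bps = 9600            # subscriber data rate given above
spreading_factor = 128          # spreading factor given above
chip_rate = data_rate_bps * spreading_factor
print(chip_rate)                # 1228800 chips/s = 1.2288 Mchips/s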

Figure: (a) CDMA channel bandwidth, guard band, and frequency separation; (b) CDMA
channel location within the AMPS frequency spectrum

In the uplink channel, subscriber data are encoded using a convolutional code,
interleaved, and spread by one of 64 orthogonal spreading sequences (Walsh functions).
Orthogonality among all uplink cellular channel subscribers within a given cell is maintained
because all the cell signals are scrambled synchronously.
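Walsh functions can be generated as the rows of a Hadamard matrix, and their pairwise orthogonality is what keeps the spread signals separable. The sketch below (Python; an illustration of the property, not the IS-95 procedure itself) builds the 64 Walsh sequences and checks that two distinct rows are orthogonal, i.e. their element-wise product sums to zero in ±1 form.

def hadamard(n):
    # Sylvester construction: H(2m) = [[H, H], [H, -H]], with entries +1/-1
    h = [[1]]
    while len(h) < n:
        h = [row + row for row in h] + [row + [-x for x in row] for row in h]
    return h

walsh = hadamard(64)                       # 64 Walsh sequences of length 64
dot = sum(a * b for a, b in zip(walsh[3], walsh[17]))
print(len(walsh), dot)                     # 64 sequences; dot product of distinct rows is 0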

Downlink channels use a different spreading strategy, since each mobile unit's received
signal takes a different transmission path and, therefore, arrives at the base station at a different
time. Downlink channel data streams are first convolutionally encoded with a rate 1/3 convolutional
code. After interleaving, each block of six encoded symbols is mapped to one of the 64 available
orthogonal Walsh functions, ensuring 64-ary orthogonal signaling. An additional fourfold
spreading is performed by subscriber-specific and base-station-specific codes having periods
of 2^14 chips and 2^15 chips, respectively, increasing the transmission rate to 1.2288 Mchips/s.
Stringent requirements are enforced on the downlink channel's transmit power to avoid the near-far
problem caused by varied receive power levels.

Each mobile unit in a given cell is assigned a unique spreading sequence, which ensures
near-perfect separation among the signals from different subscriber units and allows
transmission differentiation between users. All signals in a particular cell are scrambled using a
pseudorandom sequence of length 2^15 chips. This reduces radio frequency interference between
mobiles in neighboring cells that may be using the same spreading sequence and provides the
desired wideband spectral characteristics, even though not all Walsh codes yield a wideband
power spectrum.

Two commonly used techniques for spreading the spectrum are frequency hopping and
direct sequencing. Both of these techniques are characteristic of transmission over a bandwidth
much wider than that normally used in narrowband FDMA / TDMA cellular telephone systems,
such as AMPS and USDC. For a more detailed description of frequency hopping and direct
sequencing, refer to chapter.

Frequency hopping spread spectrum:

Frequency-hopping spread spectrum was first used by the military to ensure reliable
antijam and secure communications in a battlefield environment. The fundamental concept of
frequency hopping is to break a message into fixed-size blocks of data, with each block
transmitted in sequence except on a different carrier frequency. With frequency hopping, a
pseudorandom code is used to generate a unique frequency-hopping sequence. The sequence in
which the frequencies are selected must be known by both the transmitter and the receiver prior to
the beginning of the transmission. The transmitter sends one block on a radio frequency carrier
and then switches (hops) to the next frequency in the sequence, and so on. After reception of a
block of data on one frequency, the receiver switches to the next frequency in the sequence. Each
transmitter in the system has a different hopping sequence to prevent one subscriber from
interfering with transmissions from other subscribers using the same radio channel frequency.
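A minimal sketch of this idea (Python): the transmitter and receiver seed the same pseudorandom generator, so both derive the identical hop sequence over a set of carrier frequencies. The carrier list and seed are arbitrary values chosen for the example.

import random

def hop_sequence(seed, carriers_mhz, n_blocks):
    # both ends seed the same PRNG, so they derive the same carrier per block
    rng = random.Random(seed)
    return [rng.choice(carriers_mhz) for _ in range(n_blocks)]

carriers = [902.2, 903.4, 904.6, 905.8, 907.0]    # example carrier set (MHz)
tx_hops = hop_sequence(seed=41, carriers_mhz=carriers, n_blocks=8)
rx_hops = hop_sequence(seed=41, carriers_mhz=carriers, n_blocks=8)
print(tx_hops == rx_hops)                          # True: both follow the same sequence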
Direct sequence spread spectrum:

In direct-sequence systems, a high-bit-rate pseudorandom code is added to a low-bit-rate
information signal to generate a high-bit-rate pseudorandom signal, closely resembling
noise, that contains both the original data signal and the pseudorandom code. Again, before
successful transmission, the pseudorandom code must be known to both the transmitter and the
intended receiver. When a receiver detects a direct-sequence transmission, it simply subtracts
the pseudorandom signal from the composite received signal to extract the information data. In
CDMA cellular telephone systems, the total radio frequency bandwidth is divided into a few
broadband radio channels that have a much higher bandwidth than the digitized voice signal.
The digitized voice signal is added to the generated high-bit-rate signal and transmitted in such
a way that it occupies the entire broadband radio channel. Adding a high-bit-rate
pseudorandom signal to the voice information makes the signal more dominant and less
susceptible to interference, allowing lower-power transmission and, hence, a lower number of
transmitters and less expensive receivers.
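Treating the "adding" of the pseudorandom code as modulo-2 addition (XOR), spreading and despreading can be sketched as below (Python). The 8-chip PN sequence and the data bits are made-up values for illustration, not an IS-95 code.

PN = [1, 0, 1, 1, 0, 1, 0, 0]          # assumed 8-chip pseudorandom spreading code

def spread(bits, pn=PN):
    # each data bit is XORed with every chip of the PN code
    return [b ^ c for b in bits for c in pn]

def despread(chips, pn=PN):
    # XOR the same PN code back in, then majority-vote each chip group
    out = []
    for i in range(0, len(chips), len(pn)):
        group = [chips[i + j] ^ pn[j] for j in range(len(pn))]
        out.append(1 if sum(group) > len(pn) // 2 else 0)
    return out

tx = spread([1, 0, 1])
print(despread(tx))                    # [1, 0, 1] recovered, tolerating a few chip errors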

4. Explain the space division multiple access

CDMA (Code Division Multiple Access)

Figure:

Hybrid TDMA & FDMA advantages:

A given user group (signal N) knows its code - PRIVACY.
EFFICIENCY - the BW used during any time slot is near the maximum.
MAJOR ADVANTAGE - no precise time coordination is needed among the users, as orthogonal
codes are used (orthogonality of the codes is not affected by small variations).

Space Division (SDMA) - 2 different regions of the earth use the same satellite. They use the same
frequency band (INTELSAT IVA).

Figure:

Polarization Division (PDMA) - earth station and satellite antennas use different polarizations.
- 2 stations in the same region simultaneously access the same satellite.

(INTELSAT V uses both SD and PD)


Figure:

How do you decide which channel you have (for an international call)?

It is dynamically assigned (DAMA):

You ask for a slot; every station is told which slots are available every 50 ms; you (the earth
station) pick one slot; if no other user picked it, you get it, else you try again until you get one.
You have 1 ms every 50 ms to request and try to get a phone line.
The CSE (common signal channel) is one of the channels; it carries all of this "what's available"
channel data, with BPSK on it.

It's all $.
DAMA works best for a given CR capacity.
You don't get a busy signal too often.
There aren't enough earth stations to guarantee use of one transponder (36 MHz) all the time.
This is the easiest way to allow low-user-rate earth stations access and to handle burst overload by
medium-use stations.

Thus, keep most people happy (few busy signals) rather than all the people happy all the time with
the companies not utilizing their resources fully 90% of the time.
TV satellites:
A completely different game; here the BW and data rate are constant all the time.
Here fixed allocations are OK. The issue is that the satellite transmits lots of power.
A growing issue is encryption (pay for HBO, etc).

LAN (local area networks)

These interconnect computers, terminals, printers, etc. within a building, university, company, set
of buildings, etc.
For long haul, the public phone network is used (economics).
Within a LAN, you lay your own high-BW cables. BW is not as scarce as in the long-haul/satellite cases
(fewer users). For this reason, LANs use simpler access methods (very different from the ones
discussed for the phone and satellite cases with more users).
Discussed later, p. 526-31, VG 4-87.

FDMA vs. TDMA advantages/disadvantages

(1) FDMA causes intermodulation (IM) of the data through nonlinearities; TDMA doesn't.
To avoid IM, you reduce the power available (and thus the number of users).
(2) FDMA has simpler sync and timing. However, the amount of equipment (amps, etc.) grows
with the number of frequency users. The increase in information killed FDMA.
(3) TDMA equipment is more sophisticated; there is no big growth in equipment as the number of
users increases. Thus TDMA wins on $ (given advanced DSP, VLSI, clocks, etc).
(4) Multi-beam operation (each user has to be able to communicate with all other users) is more convenient
with TDMA (make connections sequentially and only for the duration of some time slot) than with
FDMA (you need a filter and amp, etc. for each frequency).

DAMA (Demand Assignment Multiple Access) - used on INTELSAT

Dynamic assignment gives a station access to a channel only when it requests it.
It works since actual demand rarely exceeds peak demand.
Add buffers of data and the technique handles bursty traffic also.
You of course get busy signals (queue delays).
This is what is used now (with TDMA and FDM).

Don't forget:

Compression is still an issue; the BW is given, but compression decides the number of bps.
Coding and ECC are still needed, all within the symbols.
Don't lose track of these issues at the present global level.

FDMA transponder issues (linearity and power), e.g. telephone 4 kHz:
Each transponder has 36 MHz BW.
The travelling wave amplifier (TWA) is nonlinear - if you have a carrier, the TWA can generate harmonics of
it. If there are multiple carriers within the 36 MHz, the nonlinearities of the amp can give cross talk between
the carriers' data.

In an ideal world, 1 transponder and 1 carrier with 36 MHz BW supports 900 users (4 kHz each).
As the number of carriers per transponder increases, fewer 4 kHz channels are allowed (due to the need to
reduce the amplitude, and hence the power, in each carrier to avoid cross talk from the nonlinearities).

Number of carriers    Carrier          Number of 4 kHz        Number of 4 kHz
per transponder       bandwidth        channels per carrier   channels per transponder
1                     36 MHz           900                    900
4                     3 at 10 MHz      132                    456
                      1 at 5 MHz       60
7                     5 MHz            60                     420
14                    2.5 MHz          24                     336

Why not always use 1 carrier per transponder (it supports more users)?

Answer: all earth stations cannot support/utilize 36 MHz all the time.
All this BW is there, but it is not used all the time ($$$).
Consider the case of 12 users from country A to F (VG 4-78).
What if there is only 1 user? Then the other 11 slots are empty.
The problem is you can't reassign this communication resource (CR).
Thus, it isn't used ($$ and CR plus DSP/VLSI).

So what is used? (telephone/satellite)

Each voice channel has its own carrier fc (for a given transponder).
There are 800 such voice channels; each has 45 kHz (this is >> the 4 kHz BW) to allow for nonlinearities
and channel separations. There is clearly room for improvement here (about 10x the needed BW).
Aside - 6 of these channels are used for channel management (DAMA) etc.
800 users are OK since you don't talk 60% of the time (which relaxes the rules in the table above); switch
the available power to the active carriers.

So how is TDMA done in telephone/satellites?

Still frequency-multiplex each user of one earth station to some fc channel (FDM).
But with TDMA, several users (24 or 30) share each fc channel, each with a different time slot - 24 in the U.S.,
30 in Europe.

Still up to 800 fc channels per transponder (FDM), each with 24 users (TDMA), and 12 transponders per
satellite.
TDMA is better than increasing the number of frequency channels (each means more hardware)
even further. TDMA is OK if you can do the sync and maintain the accuracy of the clocks, etc. (you can now,
with VLSI).

TDMA speech frame - 24 users (channels) per fc in the U.S. standard, 30 users (channels) per fc in the
European standard.
Terminology - 1 frame transmits one byte for each of the N = 24 or 30 users plus a frame alignment (sync)
bit.

1 frame = 125 µs (= 1 / 8 kHz sample rate)

Calculations: voice of 4 kHz BW, sampled at 8 kHz, 8 bits per sample (companded; roughly equivalent
to 13 bits of linear resolution).
24 users (channels) x 8 bits each (one sample of each speech channel) + 1 frame alignment bit = 193 bits.
Transmit this in 1 frame (125 µs), i.e. about 1.5 Mbps (plus extra bits for sync and other information and
uses).
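The bit-rate arithmetic in this paragraph, checked in code (Python):

channels, bits_per_sample, framing_bits = 24, 8, 1
frame_bits = channels * bits_per_sample + framing_bits    # 193 bits per frame
frame_time_s = 125e-6                                     # one frame per 8 kHz sample period
bit_rate = frame_bits / frame_time_s
print(frame_bits, bit_rate)                               # 193 bits, 1.544e6 bps (~1.5 Mbps)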

Figure: (a)

TDMA (increase the transmit rate to utilize the channel BW)

The prior page was for 1 frame (one sample, i.e. one byte, for each user).
The channel can support a rate >> the 1.5 Mbps that the above needs.
Thus, the above data is transmitted in less time (a burst process).
Other users (channels) can then be put into different time slots (TDM).

Figure:

Concept - the user BW/rate is << the channel rate. Thus, let multiple users share time slots.
This means that we must burst-transmit the above set of frame data at a higher rate in less time
(the rate is the channel rate for that fc).

e.g. transmit at 120 Mbps (using QPSK, = 60 Msymbols/s).
Thus, the above frame set is sent in 34 µs (about a 60:1 compression in time, i.e. an increase in BW).
Slightly different compression and time slots in the U.S./Europe, since there are 24/30 users per frame.
Use a set of 16 frames (samples) to standardize.

Traffic slots - different users go in different slots (A, B in the above figure).
Traffic slots are selected by DAMA (different users demand time slots A, B, etc).
Buffers - data in is at the signal rate (e.g. 4 kHz) and data out is at a higher rate.
You load one buffer (left switch) at a low data rate and read it out at a high data rate (right switch).
Data is read out into the TDMA system at 60 times the input rate.

The process is reversed at the receiver (right figure).

This is burst compression for TDMA transmission and expansion upon reception.

Multi-beam satellites - these exist.

They aim several beams at N. and S. America, Europe, Africa.
Other satellites aim beams elsewhere (Asia).
Satellites also aim beams at each other.

How do they decide which channel (t or f, or both) goes where?

The transmitting earth station has control. It tells the satellite which signal goes where.
The sync data from the earth source is sent to the receiving stations (they use this to get in sync).
All information from the ground stations goes to a switching matrix.
This decides how to connect the different up and down links.

LAN approach (BW is not the scarce resource now; few users since it is local)
Packet - a set of b bits.
Ethernet assumes each user can sense the state of the net before using it, i.e. whether there are any
free slots and when and where they are (can I get in?).
Users send a 64-bit preamble (sync), destination, data type (ECC etc. used), and finally the data, then
linear block code parity (a 32-bit check field).

Field:   Preamble   Destination address   Source address   Type field   Data field   Check field
Bits:    64         48                    48               16           80           32

(Header: preamble, destination address, source address, type field)

Carrier Sense method (Ethernet) - if no transition occurs in the transition search window, then no
signal (carrier) is present and the line is free (the previous user is done and the packet is over). Figure b.

Token ring method - connect all users in a ring. Transmit a token around the ring (an 8-ones
11111111 bit pattern). Each user (in order around the ring) can access the LAN if desired. He changes
the last bit of the pattern (LAN busy), inserts his data onto the LAN, and then restores the token and
passes it to the next user in the ring. Figure a.

UNIT V

SATELLITE, OPTICAL FIBER, POWER LINE & SCADA

PART A

1. Define satellite orbits?

A satellite orbiting the earth stays in position because the centrifugal force on the satellite
balances the gravitational attraction of the earth.

2. Mention the types of satellite orbits?

Inclined Elliptical orbit


Polar circular orbit
Geostationary orbit

3. Mention the types of satellite?

Active satellite
Passive Satellite

4. Define an Active satellite?

The signal received by the satellite is amplified and retransmitted rather than being simply reflected back.
An active satellite carries, on board, highly directional transmitting and receiving antennas and complex
interconnecting circuits.

5. Define passive Satellite?

The signal transmitted from the earth is simply returned to the earth without any conversion being
performed on board the satellite.

6. What are frequencies used for link establishment?

Commercial satellites use the 4/6 GHz band, with an uplink of 5.925-6.425 GHz and a
downlink of 3.7-4.2 GHz.
Government and military satellites in many countries use the 7/8 GHz band, with 7.9-8.4
GHz up and 7.25-7.75 GHz down.
Satellites are now being designed for the 12/14 GHz band, using 14-14.5 GHz up and either
11.7-12.2 GHz down or 10.95-11.2 GHz and 11.5-11.7 GHz down.

7. What are the multiple access techniques?

Time division multiple Access


Frequency division multiple Access
Code division multiple Access

8. Mention the types of fiber?

Single mode step Index Fiber


Multi mode Step Index Fiber
Multi mode Graded Index Fiber

9. Define Numerical Aperture?

The numerical aperture is the maximum light-gathering capacity of the fiber:

NA = n0 sin(θa) = sqrt(n1² - n2²)

where n0 is the refractive index of the medium outside the fiber, θa is the acceptance angle, and n1 and
n2 are the refractive indexes of the core and cladding.
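For example, with assumed indexes n1 = 1.50 for the core and n2 = 1.46 for the cladding (typical glass values, not taken from this text), the numerical aperture and acceptance angle work out as follows (Python):

import math

n1, n2 = 1.50, 1.46                      # assumed core and cladding refractive indexes
na = math.sqrt(n1**2 - n2**2)            # numerical aperture
theta_a = math.degrees(math.asin(na))    # acceptance angle in air (n0 = 1)
print(round(na, 3), round(theta_a, 1))   # ~0.344 and ~20.1 degrees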

10. Define refractive index.

The refractive index is the basic parameter that determines the optical characteristics of a material.
It is the ratio of the velocity of light in a vacuum to the velocity of light in the material:

n = (speed of light in vacuum) / (speed of light in material)

PART B
1. Explain the different types of optical fibers?

Propagation modes can be categorized as either multimode or single mode, and then
multimode can be further subdivided into step index or graded index. Although there are a wide
variety of combinations of mode and indexes, there are only three practical types of optical fiber
configurations: single-mode step-index, multimode step index, and multimode graded index.

Single-Mode Step-Index Optical Fiber


Single-mode step-index fibers are the dominant fibers used in today's telecommunications and data
networking industries. A single-mode step-index fiber has a central core that is significantly
smaller in diameter than that of any of the multimode cables. In fact, the diameter is sufficiently small
that there is essentially only one path that light may take as it propagates down the cable. This type
of fiber is shown in the figure. In the simplest form of single-mode step-index fiber, the outside
cladding is simply air. The refractive index of the glass core (n1) is approximately 1.5, and the
refractive index of the air cladding (n2) is 1. The large difference in the refractive indexes results in
a small critical angle (approximately 42°) at the glass/air interface. Consequently, a single-mode
step-index fiber has a wide external acceptance angle, which makes it relatively easy to couple
light into the cable from an external source. However, this type of fiber is very weak and difficult
to splice or terminate.

Figure: Single-mode step-index fibers: (a) air cladding; (b) glass cladding

A more practical type of single-mode step-index fiber is one that has a cladding other than
air, such as the cable shown in the figure. The refractive index of the cladding (n2) is slightly less than
that of the central core (n1) and is uniform throughout the cladding. This type of cable is
physically stronger than the air-clad fiber, but the critical angle is also much higher
(approximately 77°). This results in a small acceptance angle and a narrow source-to-fiber
aperture, making it much more difficult to couple light into the fiber from a light source.

With both types of single-mode step-index fiber, light is propagated down the fiber through
reflection. Light rays that enter the fiber either propagate straight down the core or, perhaps, are
reflected only a few times. Consequently, all light rays follow approximately the same path
down the cable and take approximately the same amount of time to travel the length of the cable.
This is one overwhelming advantage of single-mode step-index fibers, as explained in more detail
in a later section of this chapter.

Multimode Step-Index Optical Fiber:

A multimode step-index optical fiber is shown in the figure. Multimode step-index fibers are
similar to the single-mode step-index fibers except that the center core is much larger in the
multimode configuration. This type of fiber has a larger light-to-fiber aperture and, consequently,
allows more external light to enter the cable. The light rays that strike the core/cladding interface
at an angle greater than the critical angle (ray A) are propagated down the core in a zigzag
fashion, continuously reflecting off the interface boundary. Light rays that strike the core/cladding
interface at an angle less than the critical angle (ray B) enter the cladding and are lost. It
can be seen that there are many paths that a light ray may follow as it propagates down the fiber.
As a result, all light rays do not follow the same path and, consequently, do not take the same
amount of time to travel the length of the cable.

Figure: Multimode step-index fiber

Figure: Multimode graded-index fiber

Multimode Graded-Index Optical Fiber

A multimode graded-index optical fiber is shown in the figure. Graded-index fibers are
characterized by a central core with a nonuniform refractive index. Thus, the cable's density is
maximum at the center and decreases gradually toward the outer edge. Light rays propagate
down this type of fiber through refraction rather than reflection. As a light ray propagates
diagonally across the core toward the center, it is continually intersecting a less dense to more
dense interface. Consequently, the light rays are constantly being refracted, which results in a
continuous bending of the light rays. Light enters the fiber at many different angles. As the light
rays propagate down the fiber, the rays traveling in the outermost area of the fiber travel a greater
distance than the rays traveling near the center. Because the refractive index decreases with
distance from the center, and the velocity is inversely proportional to the refractive index, the light
rays traveling farthest from the center propagate at a higher velocity. Consequently, all the rays take
approximately the same amount of time to travel the length of the fiber.

Optical Fiber Comparison:

Single-mode step-index fiber. Advantages include the following:

1. Minimum dispersion: All rays propagating down the fiber take approximately the same path;
thus, they take approximately the same length of time to travel down the cable.
Consequently, a pulse of light entering the cable can be reproduced at the receiving end very
accurately.
2. Because of the high accuracy in reproducing transmitted pulses at the receive end, wider
bandwidths and higher information transmission rates (bps) are possible with single-mode step-
index fibers than with the other types of fibers.

Disadvantages include the following:

1. Because the central core is very small, it is difficult to couple light into and out of this type of
fiber. The source-to-fiber aperture is the smallest of all the fiber types.
2. Again, because of the small central core, a highly directive light source, such as a laser, is
required to couple light into a single-mode step-index fiber.
3. Single-mode step-index fibers are expensive and difficult to manufacture.

Multimode step-index fiber. Advantages include the following:

1. Multimode step-index fibers are relatively inexpensive and simple to manufacture.
2. It is easier to couple light into and out of multimode step-index fibers because they have a
relatively large source-to-fiber aperture.

Disadvantages include the following:

1. Light rays take many different paths down the fiber, which results in large differences in
propagation times. Because of this, rays traveling down this type of fiber have a tendency to
spread out. Consequently, a pulse of light propagating down a multimode step-index fiber is
distorted more than with the other types of fibers.
2. The bandwidths and information transfer rates possible with this type of cable are less
than those possible with the other types of fiber cables.

Multimode graded-index fiber: Essentially, there are no outstanding advantages or
disadvantages of this type of fiber. Multimode graded-index fibers are easier to couple light into
and out of than single-mode step-index fibers but more difficult than multimode step-index
fibers. Distortion due to multiple propagation paths is greater than in single-mode step-index
fibers but less than in multimode step-index fibers. The multimode graded-index fiber is considered
an intermediate fiber compared to the other fiber types.

2. Explain the fiber sources used in optical communication?

Optical Sources:

There are essentially only two types of practical light sources used to generate light for
optical fiber communications systems: LEDs and ILDs. Both devices are constructed from
semiconductor materials and have their own advantages and disadvantages. Standard LEDs have spectral
widths of 30 nm to 50 nm, while injection lasers have spectral widths of only 1 nm to 3 nm (1 nm
corresponds to a frequency of about 178 GHz). Therefore, a 1320-nm light source with a spectral
line width of 0.0056 nm has a frequency bandwidth of approximately 1 GHz. Line width is the
wavelength equivalent of bandwidth.
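The conversion from spectral line width to frequency bandwidth uses Δf ≈ c·Δλ/λ², which reproduces both figures quoted above (Python). The 178 GHz figure is evaluated here near 1300 nm, which is my assumption, since the text does not state the wavelength for it.

c = 3e8                                   # speed of light, m/s

def linewidth_to_bandwidth(lambda_m, dlambda_m):
    # df ~= c * d(lambda) / lambda^2
    return c * dlambda_m / lambda_m**2

print(linewidth_to_bandwidth(1300e-9, 1e-9))        # ~1.78e11 Hz (about 178 GHz per nm)
print(linewidth_to_bandwidth(1320e-9, 0.0056e-9))   # ~9.6e8 Hz (about 1 GHz)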
Selection of one light-emitting device over the other is determined by system economic and
performance requirements. The higher cost of laser diodes is offset by higher performance; LEDs
typically have a lower cost and a correspondingly lower performance. However, LEDs are typically
more reliable.

LEDs

An LED is a p-n junction diode, usually made from a semiconductor material such as
aluminum gallium arsenide (AlGaAs) or gallium arsenide phosphide (GaAsP). LEDs emit light
by spontaneous emission: light is emitted as a result of the recombination of electrons and holes.

When forward biased, minority carriers are injected across the p-n junction. Once across
the junction, these minority carriers recombine with majority carriers and give up energy in the
form of light. This process is essentially the same as in a conventional semiconductor diode, except
that in LEDs certain semiconductor materials and dopants are chosen such that the process is
radiative; that is, a photon is produced. A photon is a quantum of electromagnetic wave energy.
Photons are particles that travel at the speed of light but at rest have no mass. In conventional
semiconductor diodes (germanium and silicon, for example), the process is primarily
nonradiative, and no photons are generated. The energy gap of the material used to construct an LED
determines the color of light it emits and whether the light emitted by it is visible to the human
eye.

To produce LEDs, semiconductors are formed from materials with atoms having either
three or five valence electrons (known as Group III and Group V atoms, respectively, because of
their location in the periodic table of elements). To produce light wavelengths in the 800-nm
range, LEDs are constructed from Group III atoms, such as gallium (Ga) and aluminum (Al), and
a Group V atom, such as arsenic (As). The junction formed is commonly abbreviated GaAlAs,
for gallium aluminum arsenide. For longer wavelengths, gallium is combined with the Group III
atom indium (In), and arsenic is combined with the Group V atom phosphorus (P), which forms
a gallium-indium-arsenide-phosphide (GaInAsP) junction. The table lists some of the common
semiconductor materials used in LED construction and their respective output wavelengths.

Material      Wavelength (nm)

AlGaInP       630-680
GaInP         670
GaAlAs        620-895
GaAs          904
InGaAs        980
InGaAsP       1100-1650
InGaAsSb      1700-4400

Figure: Homojunction LED structures: (a) silicon-doped gallium arsenide; (b) planar diffused

Homojunction LEDs

A p-n junction made from two different mixtures of the same types of atoms is called a
homojunction structure. The simplest LED structures are homojunction, epitaxially grown, or
single-diffused semiconductor devices, such as the two shown in the figure. Epitaxially
grown LEDs are generally constructed of silicon-doped gallium arsenide (figure a). A
typical wavelength of light emitted from this construction is 940 nm, and a typical output power
is approximately 2 mW (3 dBm) at 100 mA of forward current. Light waves from homojunction
sources do not produce a very useful light for an optical fiber. Light is emitted in all directions
equally; therefore, only a small amount of the total light produced is coupled into the fiber. In
addition, the ratio of electricity converted to light is very low. Homojunction devices are often
called surface emitters.

Planar diffused homojunction LEDs (figure b) output approximately 500 µW at a
wavelength of 900 nm. The primary disadvantage of homojunction LEDs is the nondirectionality
of their light emission, which makes them a poor choice as a light source for optical fiber systems.

Heterojunction LEDs:

Heterojunction LEDs are made from a p-type semiconductor material of one set of atoms and an
n-type semiconductor material from another set. Heterojunction devices are layered (usually
two layers) such that the concentration effect is enhanced. This produces a device that confines the
electron and hole carriers and the light to a much smaller area. The junction is generally
manufactured on a substrate backing material and then sandwiched between metal contacts that
are used to connect the device to a source of electricity.

With heterojunction devices, light is emitted from the edge of the material; such devices are therefore
often called edge emitters. A planar heterojunction LED (figure) is quite similar to the epitaxially
grown LED except that the geometry is designed such that the forward current is concentrated in
a very small area of the active layer.

Heterojunction devices have the following advantages over homojunction devices:

The increase in current density generates a more brilliant light spot.

The smaller emitting area makes it easier to couple the emitted light into a fiber.

The smaller effective area has a smaller capacitance, which allows the planar heterojunction LED to
be used at higher speeds.

The figure shows the typical electrical characteristics for a low-cost infrared light-emitting
diode. Figure (a) shows the output power versus forward current. From the figure, it can be seen
that the output power varies linearly over a wide range of input current (0.5 mW [-3 dBm] at 20
mA to 3.4 mW [5.3 dBm] at 140 mA). Figure (b) shows output power versus temperature. It
can be seen that the output power varies inversely with temperature over a temperature range
of 40°C to 80°C. Figure (c) shows relative output power with respect to output wavelength. For this
particular example, the maximum output power is achieved at an output wavelength of 825 nm.

Figure: Planar heterojunction LED

Burrus Etched-Well Surface-Emitting LED

For more practical applications, such as telecommunications, data rates in excess of 100
Mbps are required. For these applications, the etched-well LED, developed by Burrus and Dawson
of Bell Laboratories, was introduced. It is a surface-emitting LED and is shown in the figure. The
Burrus etched-well LED emits light in many directions. The etched well helps concentrate the
emitted light into a very small area. Also, domed lenses can be placed over the emitting surface to
direct the light into a smaller area. These devices are more efficient than the standard surface
emitters, and they allow more power to be coupled into the optical fiber, but they are also more
difficult and expensive to manufacture.

Edge Emitting LED

The edge-emitting LED, which was developed by RCA, is shown in the figure. These LEDs
emit a more directional light pattern than do the surface-emitting LEDs. The construction is
similar to the planar and Burrus diodes except that the emitting surface is a stripe rather than a
confined circular area. The light is emitted from an active stripe and forms an elliptical beam.
Surface-emitting LEDs are more commonly used than edge emitters because they emit more
light. However, the coupling losses with surface emitters are greater, and they have narrower
bandwidths.

The radiant light power emitted from an LED is a linear function of the forward current
passing through the device (figure). It can also be seen that the optical output power of an LED is,
in part, a function of the operating temperature.

ILD:

Lasers are constructed from many different materials, including gases, liquids, and solids,
although the type of laser used most often for fiber optic communications is the semiconductor
laser.
The ILD is similar to the LED. In fact, below a certain threshold current, an ILD acts
similarly to an LED. Above the threshold current, an ILD oscillates; lasing occurs. As current
passes through a forward-biased p-n junction diode, light is emitted by spontaneous emission
at a frequency determined by the energy gap of the semiconductor material. When a particular
current level is reached, the number of minority carriers and photons produced on either side of
the p-n junction reaches a level where they begin to collide with already excited carriers. This
causes an increase in the ionization energy level and makes the carriers unstable. When this
happens, the carrier typically recombines with an opposite type of carrier at an energy level that is
above its normal before-collision value.

(a) Output power versus forward current

(b) Output power versus temperature;


(c) Output power versus output wavelength

Figure: Typical LED electrical characteristics

Figure: Burrus etched well surface emitting LED

Figure: Edge emitting LED


Figure: Output power versus forward current and operating temperature for an LED

In the process, two photons are created; one is stimulated by another. Essentially, a gain in the
number of photons is realized. For this to happen, a large forward current that can provide many
carriers (holes and electrons) is required.

The construction of an ILD is similar to that of an LED except that the ends are highly
polished. The mirror-like ends trap the photons in the active region and, as they reflect back and
forth, stimulate free electrons to recombine with holes at a higher-than-normal energy level.
This process is called lasing.

Figure: Injection laser diode construction


Figure: Output power versus forward current and temperature for an ILD

The radiant output light power of a typical ILD is shown in the figure. It can be seen that very
little output power is realized until the threshold current is reached; then lasing occurs. After
lasing begins, the optical output power increases dramatically, with small increases in drive
current. It can also be seen that the magnitude of the optical output power of the ILD is more
dependent on operating temperature than is that of the LED.

The figure shows the light radiation patterns typical of an LED and an ILD. Because light is
radiated out of the end of an ILD in a narrow concentrated beam, it has a more direct radiation
pattern.

ILDs have several advantages over LEDs and some disadvantages. Advantages include the
following:

ILDs emit coherent (orderly) light, whereas LEDs emit incoherent (disorderly) light.
Therefore, ILDs have a more direct radiation pattern, making it easier to couple light emitted by the
ILD into an optical fiber cable. This reduces the coupling losses and allows smaller fibers to be
used.

Figure: LED and ILD radiation patterns

The radiant output power from an ILD is greater than that from an LED. A typical output
power for an ILD is 5 mW (7 dBm) and only 0.5 mW (-3 dBm) for LEDs. This allows ILDs to provide
a higher drive power and to be used for systems that operate over longer distances.

ILDs can be used at higher bit rates than LEDs.

ILDs generate monochromatic light, which reduces chromatic, or wavelength, dispersion.

Disadvantages include the following:

ILDs are typically 10 times more expensive than LEDs.


Because ILDs operate at higher powers, they typically have a much shorter lifetime than LEDs.

ILDs are more temperature dependent than LEDs.

3. Explain the fiber detectors used in optical communication?

Light detectors:

There are two devices commonly used to detect light energy in fiber optic
communications receivers: PIN diodes and APDs.

PIN Diodes:

A PIN diode is a depletion-layer photodiode and is probably the most common device
used as a light detector in fiber optic communications systems. The figure shows the basic
construction of a PIN diode. A very lightly doped (almost pure, or intrinsic) layer of n-type
semiconductor material is sandwiched between the junctions of two heavily doped n- and p-type
contact areas. Light enters the device through a very small window and falls on the carrier-void
intrinsic material. The intrinsic material is made thick enough so that most of the photons that
enter the device are absorbed by this layer. Essentially, the PIN photodiode operates just the
opposite of an LED. Most of the photons are absorbed and add sufficient energy to generate
carriers in the depletion region and allow current to flow through the device.

Photoelectric effect:

Light entering through the window of a PIN diode is absorbed by the intrinsic material and
adds enough energy to cause electrons to move from the valence band into the conduction band.
The increase in the number of electrons that move into the conduction band is matched by an
increase in the number of holes in the valence band.
Figure: PIN photodiode construction

To cause current to flow in a photodiode, light of sufficient energy must be absorbed to
give the valence electrons enough energy to jump the energy gap. The energy gap for silicon is 1.12 eV
(electron volts). Mathematically, the operation is as follows:

For silicon, the energy gap (Eg) equals 1.12 eV, and

1 eV = 1.6 x 10^-19 J

Thus, the energy gap for silicon is

Eg = (1.12 eV)(1.6 x 10^-19 J/eV) = 1.792 x 10^-19 J

and energy E = hf

where h = Planck's constant = 6.6256 x 10^-34 J/Hz
      f = frequency (hertz)

Rearranging and solving for f yields

f = E / h

For a silicon photodiode,

f = (1.792 x 10^-19 J) / (6.6256 x 10^-34 J/Hz) = 2.705 x 10^14 Hz

Converting to wavelength yields

λ = c / f = (3 x 10^8 m/s) / (2.705 x 10^14 Hz) = 1109 nm/cycle
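The same calculation in code, reusable for other semiconductor materials by changing the band-gap value (Python):

h = 6.6256e-34           # Planck's constant, J/Hz
c = 3e8                  # speed of light, m/s
eV = 1.6e-19             # joules per electron volt

def cutoff_wavelength_nm(eg_ev):
    # longest wavelength a photodiode with band gap eg_ev can detect
    e_joules = eg_ev * eV
    f_hz = e_joules / h
    return c / f_hz * 1e9

print(round(cutoff_wavelength_nm(1.12)))   # ~1109 nm for silicon, as above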
APDs:

The figure shows the basic construction of an APD (avalanche photodiode). An APD is a p-i-p-n
structure. Light enters the diode and is absorbed by the thin, heavily doped n-layer. A high electric
field intensity developed across the i-p-n junction by reverse bias causes impact ionization to occur.
During impact ionization, a carrier can gain sufficient energy to ionize other bound electrons. These
ionized carriers, in turn, cause more ionizations to occur. The process continues as in an
avalanche and is, effectively, equivalent to an internal gain, or carrier multiplication.
Consequently, APDs are more sensitive than PIN diodes and require less additional amplification.
The disadvantages of APDs are relatively long transit times and additional internally generated
noise due to the avalanche multiplication factor.

Figure: Avalanche photodiode construction

Figure: Spectral response curve

Characteristics of light detectors:

The most important characteristics of light detectors are the following:

1. Responsivity. A measure of the conversion efficiency of a photodetector. It is the ratio of
the output current of a photodiode to the input optical power and has the unit of amperes
per watt. Responsivity is generally given for a particular wavelength or frequency.
2. Dark current. The leakage current that flows through a photodiode with no light input.
Thermally generated carriers in the diode cause dark current.
3. Transit time. The time it takes a light-induced carrier to travel across the depletion region
of a semiconductor. This parameter determines the maximum bit rate possible with a
particular photodiode.
4. Spectral response. The range of wavelength values to which a given photodiode will respond.
Generally, relative spectral response is graphed as a function of wavelength or frequency,
as shown in the figure.
5. Light sensitivity. The minimum optical power a light detector can receive and still produce a
usable electrical output signal. Light sensitivity is generally given for a particular
wavelength in either dBm or dB.

Lasers:

Laser is an acronym for light amplification by stimulated emission of radiation. Laser
technology deals with the concentration of light into a very small, powerful beam. The acronym
was chosen when technology shifted from microwaves to light waves. Basically, there are four
types of lasers: gas, liquid, solid, and semiconductor.
The first laser was developed by Theodore H. Maiman, a scientist who worked for Hughes
Aircraft Company in California. Maiman directed a beam of light into ruby crystals with a xenon
flashlamp and measured the emitted radiation from the ruby. He discovered that when the emitted
radiation increased beyond threshold, it became extremely intense and highly directional. Uranium
lasers were developed in 1960, along with lasers using other rare-earth materials. Also in 1960,
A. Javan of Bell Laboratories developed the helium-neon laser. Semiconductor lasers (injection
laser diodes) were manufactured in 1962 by General Electric, IBM, and Lincoln Laboratories.

Laser Types:

Basically, there are four of lasers: gas, liquid, solid, and semiconductor.

1. Gas lasers. Gas lasers use a mixture of helium and neon enclosed in a glass tube. A flow of
coherent (one frequency) light waves is emitted through the output coupler when an
electric current is discharged into the gas. The continuous light wave output is
monochromatic (one color).
2. Liquid lasers. Liquid lasers use organic dyes enclosed in a glass tube for an active medium.
Dye is circulated into the tube with a pump. A powerful pulse of light excites the organic
dye.
3. Solid lasers. Solid lasers use a solid, cylindrical crystal, such as ruby, for the active
medium. Each end of the ruby is polished and parallel. The ruby is excited by a tungsten
lamp tied to an ac power supply. The output from the laser is a continuous wave.
4. Semiconductor lasers. Semiconductor lasers are made from semiconductor p-n junctions
and are commonly called injection laser diodes (ILDs). The excitation mechanism is a dc power supply that
controls the amount of current to the active medium. The output light from an ILD is easily
modulated, making it very useful in many electronic communications applications.

Laser Characteristics:

All types of lasers have several common characteristics. They all use (1) an active material
to convert energy into laser light, (2) a pumping source to provide power or energy, (3) optics to
direct the beam through the active material to be amplified, (4) optics to direct the beam into a
narrow, powerful cone of divergence, (5) a feedback mechanism to provide continuous operation,
and (6) an output coupler to transmit power out of the laser.

The radiation of a laser is extremely intense and directional. When focused into a fine hair
like beam, it can concentrate all its power into the narrow beam. If the beam of light were
allowed to diverge, it would lose most of its power.

Laser Construction:

Figure shows the construction of a basic laser. A power source is connected to a flashtube
that is coiled around a glass tube that holds the active medium. One end of the glass tube is a
polished mirror face for 100% internal reflection. The flashtube is energized by a trigger pulse and
produces a high level burst of light (similar to a flashbulb). The flash causes the chromium
atoms within the active crystalline structure to become excited. The process of pumping raises the
level of the chromium atoms from ground state to an excited energy state. The ions then decay,
falling to an intermediate energy level. When the population of ions in the intermediate level is
greater than the ground state, a population inversion occurs. The population inversion causes
laser action (lasing) to occur. After a period of time, the excited chromium atoms will fall to the
ground energy level. At this time, photons are emitted. A photon is a packet of radiant energy.
The emitted photons strike atoms and two other photons are emitted (hence the term stimulated
emission). The frequency of the energy determines the strength (energy) of the photons; higher
frequencies produce higher-energy photons.
Figure: Laser construction

4. Explain briefly the frequency division multiple access technique used in satellite
communication?

Frequency Division Multiple Access:

Frequency division multiple access (FDMA) is a method of multiple accessing where a
given RF bandwidth is divided into smaller frequency bands called subdivisions. Each
subdivision has its own IF carrier frequency. A control mechanism is used to ensure that two or
more earth stations do not transmit in the same subdivision at the same time. Essentially, the
control mechanism designates a receive station for each of the subdivisions. In demand
assignment systems, the control mechanism is also used to establish or terminate the voice band
links between the source and destination earth stations. Consequently, any of the subdivisions
may be used by any of the participating earth stations at any given time. If each subdivision
carries only one 4 kHz voice band channel, this is referred to as single channel per carrier
(SCPC). When several voice band channels are frequency division multiplexed together to
form a composite baseband signal comprised of groups, supergroups, or even mastergroups, a
wider subdivision is assigned. This is referred to as multiple channel per carrier (MCPC).

Carrier frequencies and bandwidths for FDM/FM satellite systems using multiple channel
per carrier formats are generally assigned and remain fixed for a long period of time. This is
referred to as fixed assignment multiple access (FDM/FM/FAMA). An alternate channel
allocation scheme is demand assignment multiple access (DAMA). Demand assignment allows
all users continuous and equal access to the entire transponder bandwidth by assigning carrier
frequencies on a temporary basis using a statistical assignment process. The first FDMA demand
assignment system for satellites was developed by Comsat for use on the Intelsat series IVA
and V satellites.
Figure: FDMA, SPADE earth station transmitter

SPADE DAMA satellite system:

SPADE is an acronym for single channel per carrier PCM multiple access demand
assignment equipment. Figures show the block diagram and IF frequency assignments,
respectively, for SPADE.
With SPADE, 800 PCM encoded voice band channels separately QPSK modulate an IF
carrier signal (hence the name single channel per carrier, SCPC). Each 4 kHz voice band
channel is sampled at an 8 kHz rate and converted to an eight bit PCM code. This produces a 64
kbps PCM code for each voice band channel. The PCM code from each voice band channel
QPSK modulates a different IF carrier frequency. With QPSK, the minimum required bandwidth
is equal to one half the input bit rate. Consequently, the output of each QPSK modulator
requires a minimum bandwidth of 32 kHz. Each channel is allocated a 45 kHz bandwidth,
allowing for a 13 kHz guard band between pairs of frequency division multiplexed channels.
The IF carrier frequencies begin at 52.0225 MHz (low band channel 1) and increase in 45 kHz
steps to 87.9775 MHz (high band channel 400). The entire 36 MHz band (52 MHz to 88 MHz) is
divided in half, producing two 400 channel bands (a low band and a high band). For full
duplex operation, 400 45 kHz channels are used for one direction of transmission, and 400 are
used for the opposite direction. Also, channels 1, 2, and 400 from each band are left permanently
vacant. This reduces the number of usable full duplex voice band channels to 397. The 6
GHz C band extends from 5.725 GHz to 6.425 GHz (700 MHz). This allows for approximately
nineteen 36 MHz RF channels per system. Each RF channel has a capacity of 397 full duplex
voice band channels.
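
The channel plan above can be checked numerically. The sketch below assumes exactly the 52.0225 MHz starting carrier, 45 kHz spacing, and 800 channels quoted in the text; it is a verification aid, not part of the SPADE specification.

    # Sketch: SPADE IF carrier frequencies from the plan described above (assumed exact values).
    f_start = 52.0225e6      # Hz, low band channel 1
    spacing = 45e3           # Hz, channel spacing
    n_channels = 800         # 400 low band + 400 high band

    carriers = [f_start + n * spacing for n in range(n_channels)]
    print(f"channel 1   : {carriers[0]/1e6:.4f} MHz")    # 52.0225 MHz
    print(f"channel 800 : {carriers[-1]/1e6:.4f} MHz")   # 87.9775 MHz

    # QPSK needs roughly half the 64 kbps bit rate, so each channel occupies about 32 kHz,
    # leaving 45 kHz - 32 kHz = 13 kHz of guard spectrum per allocated channel.
    print(f"guard per channel: {(spacing - 32e3)/1e3:.0f} kHz")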

Each RF channel has a 160 kHz common signaling channel (CSC). The CSC is a time
division multiplexed transmission that is frequency division multiplexed into the IF spectrum
below the QPSK encoded voice band channels.

Figure: Carrier frequency assignment for the Intelsat single channel per carrier PCM
multiple access demand assignment equipment (SPADE)

Figure: FDMA, SPADE common signaling channel (CSC)

Figure shows the TDM frame structure for the CSC. The total frame time is 50 ms, which is
subdivided into 50 1 ms epochs. Each earth station transmits on the CSC channel only during its
preassigned 1 ms time slot. The CSC signal is a 128 bit binary code. To transmit a 128 bit
code in 1 ms, a transmission rate of 128 kbps is required. The CSC code is used for establishing
and disconnecting voice band links between two earth station users when demand assignment
channel allocation is used.
The CSC channel occupies a 160 kHz bandwidth, which includes the 45 kHz for low band
channel 1. Consequently, the CSC channel extends from 51.885 MHz to 52.045 MHz. The 128
kbps CSC binary code QPSK modulates a 51.965 MHz carrier. The minimum bandwidth
required for the CSC channel is 64 kHz, which results in a 48 kHz guard band on either side of the
CSC signal.
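
A quick check of the CSC numbers quoted above (128 bits per 1 ms slot, 160 kHz allocation, QPSK needing half the bit rate) can be scripted as follows; the snippet only restates the arithmetic in the text.

    # Sketch: CSC arithmetic from the description above.
    bits_per_burst = 128
    slot_time = 1e-3                       # s, each station's 1 ms epoch
    print(bits_per_burst / slot_time)      # 128000.0 bps -> 128 kbps

    csc_bandwidth = 160e3                  # Hz, total CSC allocation
    qpsk_min_bw = 128e3 / 2                # Hz, half the 128 kbps bit rate = 64 kHz
    guard = (csc_bandwidth - qpsk_min_bw) / 2
    print(guard)                           # 48000.0 Hz on either side of the CSC signal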

With FDMA, each earth station may transmit simultaneously within the same 36 MHz RF
spectrum but on different voice band channels. Consequently, simultaneous transmissions of
voice band channels from all earth stations within the satellite network are interleaved in the
frequency domain in the satellite transponder. Transmissions of CSC signals are interleaved in the
time domain.

An obvious disadvantage of FDMA is that carriers from multiple earth stations may be
present in a satellite transponder at the same time. This results in cross modulation distortion
between the various earth station transmissions. This is alleviated somewhat by shutting off the
IF subcarriers on all unused 45 kHz voice band channels. Because balanced modulators are
used in the generation of QPSK, carrier suppression is inherent. This also reduces the power load
on a system and increases its capacity by reducing the idle channel power.

5. Explain the TDMA techniques used in satellite communication?

Time Division Multiple Access:

Time division multiple access (TDMA) is the predominant multiple access method used
today. It provides the most efficient method of transmitting digitally modulated carriers (PSK).
TDMA is a method of time division multiplexing digitally modulated carriers between
participating earth stations within a satellite network through a common satellite transponder.
With TDMA, each earth station transmits a short burst of a digitally modulated carrier during a
precise time slot (epoch) within a TDMA frame. Each station's burst is synchronized so that it
arrives at the satellite transponder at a different time. Consequently, only one earth station's
carrier is present in the transponder at any given time, thus avoiding a collision with another
station's carrier. The transponder is an RF to RF repeater that simply receives the earth station
transmissions, amplifies them, and then retransmits them in a downlink beam that is received by
all the participating earth stations. Each earth station receives the bursts from all other earth
stations and must select from them the traffic destined only for itself.

Figure shows a basic TDMA frame. Transmissions from all earth stations are synchronized
to a reference burst. Figure shows the reference burst as a separate transmission, but it may be the
preamble that precedes a reference station's transmission of data. Also, there may be more than
one synchronizing reference burst.
Figure: Basic time division multiple accessing (TDMA) frame

Figure: Unique word correlator


The reference burst contains a carrier recovery sequence (CRS), from which all receiving
stations recover a frequency and phase coherent carrier for PSK demodulation. Also included in
the reference burst is a binary sequence for bit timing recovery (BTR, i.e., clock recovery). At the
end of each reference burst, a unique word (UW) is transmitted. The UW sequence is used to
establish a precise time reference that each of the earth stations uses to synchronize the
transmission of its burst. The UW is typically a string of 20 successive binary 1s terminated with a
binary 0. Each earth station receiver demodulates and integrates the UW sequence. Figure shows
the result of the integration process. The integrator and threshold detector are designed so that
the threshold voltage is reached precisely when the last bit of the UW sequence is integrated. This
generates a correlation spike at the output of the threshold detector at the exact time the UW
sequence ends.
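
A minimal sketch of the unique word idea is shown below. It is not the Intelsat hardware implementation; it simply maps bits to ±1 and slides a correlator over the received bit stream, declaring a spike only when the full 20-ones-plus-zero pattern is matched.

    # Minimal sketch of unique word (UW) detection by correlation (illustrative only).
    UW = [1]*20 + [0]                          # unique word: twenty 1s terminated with a 0
    uw_ref = [1 if b else -1 for b in UW]      # reference pattern mapped to +1/-1

    def uw_correlate(bits, threshold=len(UW)):
        spikes = []
        for i in range(len(bits) - len(uw_ref) + 1):
            window = [1 if b else -1 for b in bits[i:i + len(uw_ref)]]
            corr = sum(r * w for r, w in zip(uw_ref, window))
            if corr >= threshold:              # full agreement -> correlation spike
                spikes.append(i + len(uw_ref) - 1)   # index of the last UW bit
        return spikes

    frame = [0, 1, 0] + UW + [1, 0, 1, 1]      # a few preamble bits, the UW, then traffic
    print(uw_correlate(frame))                 # [23] -> spike marks where the UW ends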

Figure: TDMA, CEPT primary multiplex frame transmitter

Each earth station synchronizes the transmission of its carrier to the occurrence of the UW
correlation spike. Each station waits a different length of time before it begins transmitting.
Consequently, no two stations will transmit the carrier at the same time. Note the guard time (GT)
between transmissions from successive stations. This is analogous to a guard band in a frequency
division multiplexed system. Each station precedes the transmission of data with a preamble.
The preamble is logically equivalent to the reference burst. Because each station's transmissions
must be received by all other earth stations, all stations must recover carrier and clocking
information prior to demodulating the data. If demand assignment is used, a common signaling
channel also must be included in the preamble.
CEPT primary multiplex frame:

Figures show the block diagram and timing sequence, respectively, for the CEPT primary
multiplex frame. (CEPT is the Conference of European Postal and Telecommunications
Administrations; the CEPT sets many of the European telecommunications standards.) This is a
commonly used TDMA frame format for digital satellite systems.

Essentially, TDMA is a store and forward system. Earth stations can transmit only
during their specified time slot, although the incoming voice band signals are continuous.
Consequently, it is necessary to sample and store the voice band signals prior to transmission.
The CEPT frame is made up of eight bit PCM encoded samples from 16 independent voice
band channels.

Figure: TDMA, CEPT primary multiplex frame

Each channel has a separate codec that samples the incoming voice signals at a 16 kHz
rate and converts those samples to eight bit binary codes. This results in 128 kbps from each
voice channel codec, clocked out at the 2.048 MHz multiplex rate. The 16 128 kbps transmissions are time
division multiplexed into a subframe that contains one eight bit sample from each of the 16
channels (128 bits). It requires only 62.5 μs to accumulate the 128 bits (at the 2.048 Mbps transmission
rate). The CEPT multiplex format specifies a 2 ms frame time. Consequently, each earth station
can transmit only once every 2 ms and, therefore, must store the PCM encoded samples. The
128 bits accumulated during the first sample of each voice band channel are stored in a holding
register while a second sample is taken from each channel and converted into another 128 bit
subframe. This 128 bit sequence is stored in the holding register behind the first 128 bits. The
process continues for 32 subframes (32 × 62.5 μs = 2 ms). After 2 ms, 32 eight bit samples have
been taken from each of the 16 voice band channels for a total of 4096 bits (32 × 8 × 16 = 4096). At
this time, the 4096 bits are transferred to an output shift register for transmission. Because the
total TDMA frame is 2 ms long and during this 2 ms period each of the participating earth
stations must transmit at different times, the individual transmissions from each station must
occur in a significantly shorter time period. In the CEPT frame, a transmission rate of 120.832
Mbps is used. This rate is the 59th multiple of 2.048 Mbps. Consequently, the actual transmission
of the 4096 accumulated bits takes approximately 33.9 μs. At the earth station receivers,
the 4096 bits are stored in a holding register and shifted out at a 2.048 Mbps rate. Because all the
clock rates (500 Hz, 16 kHz, 128 kHz, 2.048 MHz, and 120.832 MHz) are synchronized, the PCM
codes are accumulated, stored, transmitted, received, and then decoded in perfect
synchronization. To the users, the voice transmission appears to be a continuous process.
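
The frame arithmetic above (62.5 μs subframes, 4096 bits per 2 ms frame, about a 33.9 μs burst at 120.832 Mbps) can be reproduced with the short sketch below; all values come from the description in the text.

    # Sketch: CEPT primary multiplex frame arithmetic from the description above.
    channels   = 16
    sample_hz  = 16e3          # samples per second per channel (as stated above)
    bits_samp  = 8
    frame_time = 2e-3          # s, CEPT TDMA frame period

    bits_per_channel_rate = sample_hz * bits_samp            # 128 kbps per codec
    aggregate_rate = channels * bits_per_channel_rate        # 2.048 Mbps
    subframe_time = (channels * bits_samp) / aggregate_rate  # 62.5 us for 128 bits
    bits_per_frame = aggregate_rate * frame_time             # 4096 bits stored per 2 ms

    burst_rate = 59 * aggregate_rate                         # 120.832 Mbps transmit rate
    burst_time = bits_per_frame / burst_rate                 # ~33.9 us burst

    print(bits_per_channel_rate, aggregate_rate)   # 128000.0  2048000.0
    print(subframe_time*1e6, bits_per_frame)       # 62.5      4096.0
    print(burst_rate/1e6, burst_time*1e6)          # 120.832   ~33.9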

There are several advantages of TDMA over FDMA. The first, and probably the most
significant, is that with TDMA only the carrier from one earth station is present in the satellite
transponder at any given time, thus reducing intermodulation distortion. Second, with FDMA,
each earth station must be capable of transmitting and receiving on a multitude of carrier
frequencies to achieve multiple accessing capabilities. Third, TDMA is much better suited to the
transmission of digital information than FDMA. Digital signals are more naturally acclimated to
storage, rate conversions, and time domain processing than their analog counterparts.

The primary disadvantage of TDMA as compared with FDMA is that TDMA requires precise
synchronization. Each earth station's transmissions must occur during an exact time
slot. Also, bit and frame timing must be achieved and maintained with TDMA.

Code Division Multiple Access:

With FDMA, earth stations are limited to a specific bandwidth within a satellite channel or
system but have no restriction on when they can transmit. With TDMA, an earth station's
transmissions are restricted to a precise time slot, but there is no restriction on what frequency or
bandwidth it may use within a specified satellite system or channel allocation. With code division
multiple access (CDMA), there are no restrictions on time or bandwidth. Each earth station
transmitter may transmit whenever it wishes and can use any or all of the bandwidth allocated to a
particular satellite system or channel. Because there is no limitation on the bandwidth, CDMA is
sometimes referred to as spread spectrum multiple access: transmissions can spread throughout
the entire allocated bandwidth. Transmissions are separated through envelope encryption /
decryption techniques. That is, each earth station's transmissions are encoded with a unique word
called a chip code. Each station has a unique chip code. To receive a particular earth station's
transmission, a receive station must know the chip code for that station.
Figure shows the block diagram of a CDMA encoder and decoder. In the encoder, the input
data (which may be PCM encoded voice band signals or raw digital data) is multiplied by a
unique chip code. The product code PSK modulates an IF carrier, which is up converted to RF for
transmission. At the receiver, the RF is down converted to IF. From the IF, a coherent PSK
carrier is recovered. Also, the chip code is acquired and used to synchronize the receive station's
code generator. Keep in mind, the receiving station knows the chip code but must generate a chip
code that is synchronous in time with the received code. The recovered synchronous chip code
multiplies the recovered PSK carrier and generates a PSK modulated signal that contains the
PSK carrier plus the chip code. This locally regenerated signal is compared in the correlator with
the received IF signal, which contains the chip code, the PSK carrier, and the data information.
The function of the correlator is to compare the two signals and recover the original data. Essentially, the
correlator subtracts the recovered PSK carrier + chip code from the received PSK carrier + chip
code + data. The resultant is the data.

The correlation is accomplished on the analog signals. Figure shows how the encoding and
decoding is accomplished. Figure shows the correlation of the correctly received chip code. A +1
indicates an in-phase carrier, and a -1 indicates an out-of-phase carrier. The chip code is
multiplied by the data (either +1 or -1). The product is either an in-phase code or one that is
180° out of phase with the chip code. In the receiver, the recovered synchronous chip code is
compared in the correlator with the received signaling elements. If the phases are the same, a +1
is produced; if they are 180° out of phase, a -1 is produced. It can be seen that if all the recovered
chips correlate favorably with the incoming chip code, the output of the correlator will be a +6
(which is the case when a logic 1 is received). If all the code chips correlate 180° out of phase, a -6
is generated (which is the case when a logic 0 is received). The bit decision circuit is simply a
threshold detector. Depending on whether a +6 or a -6 is generated, the threshold detector will
output a logic 1 or a logic 0, respectively.

As the name implies, the correlator looks for a correlation (similarity) between the
incoming coded signal and the recovered chip code. When a correlation occurs, the bit decision
circuit generates the corresponding logic condition.
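
A minimal numerical sketch of this correlation and threshold decision is given below. The 6-chip code is an assumed example (any ±1 sequence of length 6 would do); +1 stands for an in-phase carrier element and -1 for an out-of-phase element.

    # Minimal sketch of the CDMA correlator described above (illustrative 6-chip code).
    chip_code = [+1, -1, +1, +1, -1, -1]        # assumed chip code for one earth station

    def encode(data_bit, code=chip_code):
        d = +1 if data_bit else -1              # logic 1 -> +1, logic 0 -> -1
        return [d * c for c in code]            # transmitted chip sequence

    def correlate(received, code=chip_code):
        return sum(r * c for r, c in zip(received, code))

    def decide(received, code=chip_code):
        return 1 if correlate(received, code) > 0 else 0   # simple threshold detector

    print(correlate(encode(1)))                 # +6 -> logic 1
    print(correlate(encode(0)))                 # -6 -> logic 0
    print(decide(encode(1)), decide(encode(0))) # 1 0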
Figure: Code division multiple access (CDMA) (a) encoder; (b) decoder
With CDMA, all earth stations within the system may transmit on the same frequency at the
same time. Consequently, an earth station receiver may be receiving coded PSK signals
simultaneously from more than one transmitter. When this is the case, the job of the correlator
becomes considerably more difficult. The correlator must compare the recovered chip code with
the entire received spectrum and separate from it only the chip code from the desired earth station
transmitter. Consequently, the chip code from one earth station must not correlate with the chip
codes from any of the other earth stations.

Figure shows how such a coding scheme is achieved. If half the bits within a code were
made the same and half were made exactly the opposite, the resultant would be zero cross
correlation between the chip codes. Such a code is called an orthogonal code. In the figure, it can be seen
that when the orthogonal code is compared with the original chip code, there is no correlation (i.e.,
the sum of the comparison is zero). Consequently, the orthogonal code, although received
simultaneously with the desired chip code, has absolutely no effect on the correlation process.
For this example, the orthogonal code is received in exact time synchronization with the desired
chip code; this is not always the case. For systems that do not have time synchronous
transmission, codes must be developed where there is no correlation between one station's code
and any phase of another station's code.
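
The zero cross-correlation property can be seen with a few lines of code. The "orthogonal" code below is an assumed example that agrees with the desired code in exactly half of its chip positions.

    # Sketch: an orthogonal chip code produces zero cross-correlation with the desired code,
    # so it does not disturb the correlator output.
    desired    = [+1, -1, +1, +1, -1, -1]       # assumed 6-chip code (same as above)
    orthogonal = [+1, -1, -1, -1, +1, -1]       # agrees in 3 positions, differs in 3

    cross = sum(a * b for a, b in zip(desired, orthogonal))
    print(cross)                                # 0 -> no correlation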

The primary difference between spread spectrum PSK transmitters and other types of
PSK transmitters is the additional modulator where the code word is multiplied by the incoming
data. Because of the pseudorandom nature of the code word, it is often referred to as
pseudorandom noise (PRN).
Figure: CDMA code / data alignment: (a) Correct code; (b) Orthogonal code
The PRN must have a high autocorrelation property with itself and a low correlation
property with other transmitters' pseudorandom codes. The code word rate (Rcw) must exceed the
incoming data rate (Rd) by several orders of magnitude. In addition, the code rate must be
statistically independent of the data signal. When these two conditions are satisfied, the final
output signal spectrum will be increased (spread) by a factor called the processing gain.
Processing gain is expressed mathematically as
Processing gain is expressed mathematically as

G = Rcw / Rd

where G is the processing gain and Rcw >> Rd
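
For example, with an assumed 10 Mchips/s spreading code and a 10 kbps data stream, the processing gain works out to 1000, or 30 dB; the sketch below simply evaluates the formula.

    # Sketch: processing gain G = Rcw / Rd for an assumed chip rate and data rate.
    import math

    Rcw = 10e6      # chips per second (assumed spreading code rate)
    Rd  = 10e3      # bits per second (assumed data rate)

    G = Rcw / Rd
    print(G, 10 * math.log10(G))    # 1000.0 -> 30 dB of processing gain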

A spread spectrum signal cannot be demodulated accurately if the receiver does not
possess a despreading circuit that matches the code word generator in the transmitter. Three of
the most popular techniques used to produce the spreading function are direct sequence,
frequency hopping, and a combination of direct sequence and frequency hopping called hybrid
direct sequence frequency hopping (hybrid DS / FH).

Direct sequence:

Direct sequence spread spectrum (DS SS) is produced when a bipolar data modulated
signal is linearly multiplied by the spreading signal in a special balanced modulator called a
spreading correlator. The spreading code rate Rcw = 1 / Tc, where Tc is the duration of a single
bipolar pulse (i.e., the chip). Chips are 100 to 1000 times shorter in duration than a
single data bit. As a result, the transmitted output frequency spectrum using spread spectrum is
100 to 1000 times wider than the bandwidth of the initial PSK data modulated signal. The block
diagram for a direct sequence spread spectrum system is shown in figure. As the figure
shows, the data source directly modulates the carrier signal, which is then further modulated in
the spreading correlator by the spreading code word.
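
A baseband sketch of the spreading and despreading operation is shown below. The 8-chip code and the data pattern are assumed values chosen only to illustrate the multiply-then-correlate mechanism described above.

    # Minimal direct sequence spreading/despreading sketch (baseband, +1/-1 signalling).
    chips_per_bit = 8
    chip_code = [+1, -1, -1, +1, -1, +1, +1, -1]     # one assumed code period per data bit

    def spread(data_bits):
        out = []
        for b in data_bits:
            d = +1 if b else -1
            out.extend(d * c for c in chip_code)      # multiply each bit by the chip code
        return out

    def despread(chip_stream):
        bits = []
        for i in range(0, len(chip_stream), chips_per_bit):
            block = chip_stream[i:i + chips_per_bit]
            corr = sum(r * c for r, c in zip(block, chip_code))
            bits.append(1 if corr > 0 else 0)         # integrate and threshold per bit
        return bits

    tx = spread([1, 0, 1, 1])
    print(despread(tx))        # [1, 0, 1, 1]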

Figure: Simplified block diagram for a direct sequence spread spectrum transmitter

The spreading (chip) codes used in spread spectrum systems are either maximal length
sequence codes, sometimes called m-sequence codes, or Gold codes. Gold codes are
combinations of maximal length codes invented by Magnavox Corporation in 1967, especially
for multiple access CDMA applications. There is a relatively large set of Gold codes available
with minimal correlation between chip codes. For a reasonable number of satellite users, it is
impossible to achieve perfectly orthogonal codes. You can only design for a minimum cross
correlation among chip codes.

One of the advantages of CDMA is that the entire bandwidth of a satellite channel or
system may be used for each transmission from every earth station. For our example, the chip rate
was six times the original bit rate. Consequently, the actual transmission rate of information was
one sixth of the PSK modulation rate, and the bandwidth required is six times that required to
simply transmit the original data as binary. Because of the coding inefficiency resulting from
transmitting chips for bits, the advantage of more bandwidth is partially offset and is, thus, less of
an advantage. Also, if the transmission of chips from the various earth stations must be
synchronized, precise timing is required for the system to work. Therefore, the disadvantage of
requiring time synchronization in TDMA systems is also present with CDMA. In short, CDMA is
not all that it is cracked up to be. The most significant advantage of CDMA is immunity to
interference (jamming), which makes CDMA ideally suited for military applications.
Frequency hopping spread spectrum:

Frequency hopping is a form of CDMA where a digital code is used to continually change
the frequency of the carrier. The carrier is first modulated by the data message and then up
converted using a frequency synthesized local oscillator whose output frequency is determined
by an n-bit pseudorandom noise code produced in a code generator. A simplified frequency
hopping spread spectrum transmitter is shown in figure.

With frequency hopping, the total available bandwidth is partitioned into smaller
frequency bands, and the total transmission time is subdivided into smaller time slots. The idea is
to transmit within a limited frequency band for only a short time, then switch to another
frequency band, and so on. This process continues indefinitely. The frequency hopping pattern
is determined by a binary spreading code. Each station uses a different code sequence. A typical
hopping pattern (frequency time matrix) is shown in figure.

Figure: Simplified block diagram of a frequency hopping spread spectrum transmitter

Figure: Frequency time hopping matrix

With frequency hopping, each earth station within a CDMA network is assigned a different
frequency hopping pattern. Each transmitter switches (hops) from one frequency band to the
next according to its assigned pattern. With frequency hopping, each station uses the entire RF
spectrum but never occupies more than a small portion of that spectrum at any one time.
FSK is the modulation scheme most commonly used with frequency hopping. When it is a
given station's turn to transmit, it sends one of two frequencies (either mark or space) for the
particular band in which it is transmitting. The number of stations in a given frequency hopping
system is limited by the number of unique hopping patterns that can be generated.
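
The sketch below illustrates how an n-bit pseudorandom code word can select the next hop frequency. The band start, band width, and the use of Python's random module in place of a real PN generator are all assumptions for illustration.

    # Sketch: deriving a hop pattern from an n-bit pseudorandom code (hypothetical values).
    # Each n-bit code word selects one of 2**n frequency bands for the next dwell time.
    import random

    n_bits = 4                                   # 16 possible frequency bands
    base_freq = 5.925e9                          # Hz, assumed lower edge of the allocated band
    band_width = 1e6                             # Hz, assumed width of each hop band

    rng = random.Random(42)                      # stands in for the station's PN generator
    def next_hop():
        code = rng.getrandbits(n_bits)           # n-bit pseudorandom code word
        return base_freq + code * band_width     # lower edge of the selected band

    print([f"{next_hop()/1e9:.3f} GHz" for _ in range(5)])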

Essentially, there are two methods used to interface terrestrial voice band channels with
satellite channels: digital non-interpolated interfaces (DNI) and digital speech interpolated
interfaces (DSI).

************
