
NEHRU ARTS AND SCIENCE COLLEGE

DEPARTMENT OF ELECTRONICS & COMMUNICATION SYSTEMS


PRINCIPLES OF COMMUNICATION SYSTEM
E-LEARNING

Unit I
(PART-A)

1) Ultrasonic testing is based on time-varying deformations or vibrations in materials.
2) Many different patterns of vibrational motion exist at the atomic level.
3) Sound waves can propagate in four principal modes that are based on the way the particles oscillate.
4) In longitudinal waves, the oscillations occur in the longitudinal direction, that is, the direction of wave propagation.
5) Shear waves require an acoustically solid material for effective propagation.

(PART-B)

6) Describe EM waves?
Wave Propagation
Ultrasonic testing is based on time-varying deformations or vibrations in materials, a field
generally referred to as acoustics. All material substances are composed of atoms, which
may be forced into vibrational motion about their equilibrium positions. Many different
patterns of vibrational motion exist at the atomic level; however, most are irrelevant to
acoustics and ultrasonic testing. Acoustics is focused on particles that contain many atoms
and that move in unison to produce a mechanical wave. When a material is not stressed in
tension or compression beyond its elastic limit, its individual particles perform elastic
oscillations. When the particles of a medium are displaced from their equilibrium
positions, internal (electrostatic) restoring forces arise. It is these elastic restoring forces
between particles, combined with the inertia of the particles, that lead to the oscillatory
motions of the medium.
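The oscillation described here can be illustrated with a minimal numerical sketch: a single particle of mass m subject to a linear elastic restoring force F = -k*x oscillates about its equilibrium position. All parameter values below are illustrative assumptions, not taken from the text.

```python
# Minimal sketch: a particle with an elastic restoring force (F = -k*x)
# and inertia (mass m) oscillates about its equilibrium position.
k = 4.0            # restoring-force (spring) constant, N/m (assumed)
m = 1.0            # particle mass, kg (assumed)
x, v = 0.01, 0.0   # initial displacement (m) and velocity (m/s)
dt = 0.01          # time step, s

for step in range(300):
    a = -k / m * x   # acceleration from the elastic restoring force
    v += a * dt      # semi-implicit (Euler-Cromer) update keeps the
    x += v * dt      # oscillation amplitude numerically stable
    if step % 50 == 0:
        print(f"t={step*dt:5.2f} s  x={x:+.4f} m")
```

Running this shows the displacement swinging back and forth about zero, which is exactly the elastic oscillation the paragraph describes.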

7) Write a short note on free space propagation?


Sound waves can propagate in four principal modes that are based on the way the
particles oscillate. Sound can propagate as longitudinal waves, shear waves, surface
waves, and in thin materials as plate waves. Longitudinal and shear waves are the two
modes of propagation most widely used in ultrasonic testing. The particle movement
responsible for the propagation of longitudinal and shear waves is illustrated below.
8) Write notes on LUF?
The lowest useful frequency (LUF), as statistically calculated, is the lowest frequency at
which the field intensity at the receiving antenna is sufficient to provide the required
signal-to-noise ratio (SNR) on 90 percent of the undisturbed days of the month. Unlike the
MUF and FOT, the LUF depends not only on the amount of absorption on the particular
path but also on link parameters such as transmitter power output, antenna efficiency,
receiver sensitivity, required SNR for the service required, transmission mode, existence
of multipath, and the noise levels at the receiving station. As the frequency is lowered, the
amount of absorption of the signal by the D-layer (and frequently the E-region) increases.
This absorption is proportional to the inverse square of the frequency, as sketched below.
Eventually, as the frequency is lowered further, the signal becomes completely absorbed
by the ionosphere. This is the LUF. The LUF changes in direct correlation with the
movement of the Sun over the radio path, and peaks at noon at the midpoint of the path.
The window of usable frequencies, therefore, lies in the frequency range between the
MUF and the LUF.
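A minimal sketch of the inverse-square absorption behaviour just described; the 10 dB reference loss at 10 MHz is an invented calibration point, not a value from the text.

```python
# Illustration of the inverse-square frequency dependence of D-layer
# absorption: halving the frequency quadruples the absorption.
ref_freq_mhz = 10.0
ref_loss_db = 10.0   # hypothetical absorption at the reference frequency

for f in (4.0, 7.0, 10.0, 14.0, 21.0, 28.0):
    loss = ref_loss_db * (ref_freq_mhz / f) ** 2
    print(f"{f:5.1f} MHz -> relative D-layer absorption ~ {loss:5.1f} dB")
```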
When an ionospherically propagated wave returns to Earth, it can be reflected upward
from the ground, travel again to the ionosphere, and again be refracted to Earth. This
process, multihop propagation, can be repeated several times under ideal conditions,
leading to very long distance communications. Propagation between any two points on
the Earth's surface is usually by the shortest direct route, which is a great-circle path
between the two points. A great circle is an imaginary line drawn around the Earth,
formed by a plane passing through the center of the Earth. The diameter of a great circle
is equal to the diameter of the Earth (12,755 km or 7,926 mi at the equator). The
circumference of the Earth -- and the length of any global great-circle path -- is about
40,000 km or 24,900 mi. Due to ionospheric absorption and ground-reflection losses,
multiple-hop propagation usually yields lower signal levels and more distorted
modulation than single-hop signals. There are exceptions, however, and under ideal
conditions, communications using long-path signals may be possible. The long path is
the other (long) route around the great-circle path; a sketch of both path lengths follows
this answer. The same signal can be propagated over a given path via several different
numbers of hops. The success of multihop propagation usually depends on the type of
transmitter modulation and type of signal. For interrupted continuous wave (ICW) (also
known as CW or Morse code) there is no real problem. On single-sideband (SSB), the
signal is somewhat distorted, due primarily to time spreading. For radio-teletypewriter
(RTTY) or data service using frequency-shift keying (FSK), the distortion may be
sufficient to degrade the signal to the point that it cannot be used. Every point-to-point
circuit has its own mode structure as a function of time of day, which means that every
HF propagation problem has a unique solution.
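As a worked illustration of short-path versus long-path great-circle distances, the sketch below uses the standard haversine formula with a mean Earth radius of 6371 km; the endpoint coordinates are arbitrary examples, not taken from the text.

```python
import math

# Great-circle (short-path) distance between two points on the Earth's
# surface, via the haversine formula.
def great_circle_km(lat1, lon1, lat2, lon2, radius_km=6371.0):
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * radius_km * math.asin(math.sqrt(a))

short_path = great_circle_km(51.5, -0.1, -33.9, 151.2)  # e.g. London -> Sydney
long_path = 2 * math.pi * 6371.0 - short_path           # the other way around
print(f"short path ~ {short_path:.0f} km, long path ~ {long_path:.0f} km")
```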
9) Write a short note on surface wave propagation?
In communication systems where antennas are used to transfer information, the
environment between (and around) the transmitter and receiver has a major influence
on the quality of the transferred signal. Buildings are the main source of attenuation,
but vegetation elements such as trees and large bushes can also have some reducing
effect on the propagated radio signal.
In the case of attenuation by trees and bushes, the incident electromagnetic field mainly
interacts with the leaves and the branches. The trunk does of course also have some
influence on the attenuation, but since the volume occupied by the trunk is much smaller
than the total volume of a tree, these effects can be considered negligible. In the case of
wave propagation between antennas that are located at height, i.e. on rooftops, it will in
principle only be the upper part of the tree crown that affects the attenuation.
Since one of the fundamental assumptions, in this thesis, is communication between
fixed antennas at height, the attenuation effects from the trunks will thus be neglected in
the vegetation models.
10) Describe sky wave propagation?
The attenuation due to vegetation is also very sensitive to the wavelength. Since the
interaction between the tree and the electromagnetic field is mainly due to leaves and
branches, the size and shape of these are important. For low frequencies, when the
wavelength is much larger than the scattering body, leaves and branches have only a
small interaction with the electromagnetic field, which means that surface irregularities
have no or minor influence on the attenuation. The incident field has approximately the
same magnitude over the whole body, so the body experiences the incident field as
uniform. Since the vegetation element is exposed to an electric field, an internal electric
field is induced. This gives rise to secondary radiation, and since the wavelength is much
larger than the scattering body, the emitted radiation is spread out and forms a radiation
pattern close to that of a dipole antenna. When the wavelength is decreased, the losses
increase due to a larger interaction between the incident field and the vegetation
elements. This proceeds until the wavelength approaches the same size as the scattering
body and thus enters the resonance region. Here the absorption and scattering values
fluctuate strongly, and the attenuation becomes irregular and very frequency dependent.
The size and shape of the body are the main reasons why this happens. The incident
electric field induces an internal electric field that takes different values at different parts
of the scattering body (these values are of course time dependent), since the wavelength
is no longer much larger than the size of the body. These different parts work as
scatterers and thus emit secondary radiation. The radiation from the different emitters
interferes, so that specific directions predominate and radiation lobes are formed. When
the frequency is increased further, the effects of the resonance gradually decay, which
leads to a more predictable behavior. The attenuation of the leaves and branches
increases with increasing frequency. When the wavelength is much less than the
scattering body, no resonance effects occur and the attenuation is purely exponential.
The number of scatterers in the scattering body will of course increase, which leads to
an increase in the number of radiation lobes. For very high frequencies the width of the
maximum lobes is small, and the lobes thus form radiation beams. This means that the
intensity in the lobes whose directions correspond to the beam directions is much higher
and differs by many orders of magnitude compared to the other lobes. The fundamental
principles behind the interaction between the incident field and the scattering elements
are very complicated and will therefore not be discussed here. It should be mentioned,
though, that some factors that contribute to the losses are that the incident field changes
the permanent dipole moment in the liquid and induces currents in the medium. The
induced currents can be created due to the charges in the saline water that the organic
components contain.

(PART-C)
11) Explain the space wave propagation?
We have so far discussed the interaction in general between the incident electromagnetic
field and the vegetation elements at different frequencies. From the discussion we find
that three types of interaction exist, for which approximations can be made. In the case
of low frequencies we are dealing with Rayleigh scattering (long-wave approximations),
and in the case of high frequencies, physical optics or geometric optics (short-wave
approximations) are considered. In the resonance region there is no simple way to make
any approximations, which makes the electromagnetic problems difficult to solve. If the
electric properties of the scattering body can be considered weak, Born or Rytov
approximations can be used to simplify the calculations. In this case the internal field
inside the scattering body is approximated by the incident field, which makes it possible
to treat cases where resonance occurs. In the common microwave propagation models
that are used today, assumptions of small or large wavelength in comparison to the
scatterers are often made; thus Rayleigh scattering or physical optics is considered. But
when the wavelength of the transmitted field approaches the size of the leaves and
branches, resonance effects occur, and these models generate incorrect results.
The purpose of this work is to study the vegetation attenuation and scattering at 3.1 GHz
and 5.8 GHz. Since the wavelengths of the transmitted fields are about the same size as
the leaves and branches (λ ≈ 9.7 cm and λ ≈ 5.2 cm, respectively), resonance effects
occur; a small classification sketch follows below. Since the common models cannot be
used, the wave propagation through the canopy must be analyzed in detail, which leads
to an improved model for the attenuation. The attenuation model is based on the total
cross section of a leaf and a branch. A computer program, based on the T-matrix theory,
performs the computations of the total cross section. The results from the simulations of
the improved attenuation model are finally compared with measurements that have been
made on a large test beech.
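A small classification sketch of the three regimes named above (Rayleigh, resonance, short-wave optics), based on the size parameter ka = 2πa/λ. The ka thresholds and the assumed leaf radius are rough rules of thumb and assumptions, not values from the source.

```python
import math

C = 299_792_458.0  # speed of light, m/s

# Rough regime classification from the size parameter ka = 2*pi*a/lambda,
# using the common rules of thumb ka << 1 (Rayleigh) and ka > 10 (optics).
def regime(freq_hz, a_m):
    lam = C / freq_hz
    ka = 2 * math.pi * a_m / lam
    if ka < 0.1:
        label = "Rayleigh (long-wave approximation)"
    elif ka > 10:
        label = "physical/geometric optics (short-wave approximation)"
    else:
        label = "resonance region (e.g. T-matrix needed)"
    return lam, ka, label

for f in (3.1e9, 5.8e9):
    lam, ka, label = regime(f, a_m=0.03)  # ~6 cm leaf => a ~ 3 cm (assumed)
    print(f"{f/1e9:.1f} GHz: lambda = {lam*100:.1f} cm, ka = {ka:.1f} -> {label}")
```

Both frequencies land in the resonance region for leaf-sized scatterers, which is why the common Rayleigh/optics models cannot be used here.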

2 Basic relationships
This section gives a brief introduction to the theory of microwave propagation.
3.1 Leaf model
Effective dielectric properties are modeled by dielectric mixing theory. In the case of
vegetation elements, the components are liquid water with a high permittivity, organic
material with moderate to low permittivity, and air with unit permittivity. For such
highly contrasting permittivities and large volume fractions, physical mixing theory has,
so far, failed. In an attempt to overcome this problem, Ulaby and El-Rayes [6] assumed
linear, i.e. empirical, relationships between the permittivity and the volume fractions of
the different components. Dielectric measurements by Ulaby and El-Rayes indicate that
the dielectric properties of vegetation can be modelled by representing vegetation as a
mixture of saline water, bound water and dry vegetation. They derived a semi-empirical
formula [6] from measurements at frequencies between 1 and 20 GHz on corn leaves
with relatively high dry matter contents. The extrapolation of the formula to higher
frequencies and lower dry matter contents leads to incorrect values, as was shown by
Mätzler and Sume [2]. From the data used in [6], and their own data at frequencies up to
94 GHz, they developed an improved semi-empirical formula to calculate the dielectric
constant of leaves, in which high and low dry matter contents were included. Mätzler
combined the data of Ulaby and El-Rayes [6], El-Rayes and Ulaby [9] and of Mätzler
and Sume [2] and derived a new dielectric formula [1]

ε_leaf = 0.522 (1 − 1.32 m_d) ε_sw + 0.51 + 3.84 m_d

which is valid over the frequency range from 1 to 100 GHz. The formula is applicable to
fresh leaves with m_d values in the range 0.1 ≤ m_d ≤ 0.5. Here ε_sw is the dielectric
permittivity of saline water according to the Debye model, and m_d is the dry-matter
fraction of leaves, given by

m_d = (dry mass) / (fresh mass)
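A minimal sketch of the quoted leaf formula. The saline-water permittivity used below is a made-up placeholder value; in the source it would come from the Debye model of Eq. (4.2).

```python
# Maetzler leaf formula quoted above:
# eps_leaf = 0.522*(1 - 1.32*m_d)*eps_sw + 0.51 + 3.84*m_d, 0.1 <= m_d <= 0.5
def eps_leaf(m_d, eps_sw):
    if not 0.1 <= m_d <= 0.5:
        raise ValueError("formula is stated for 0.1 <= m_d <= 0.5")
    return 0.522 * (1 - 1.32 * m_d) * eps_sw + 0.51 + 3.84 * m_d

eps_sw_assumed = 40 - 40j   # placeholder saline-water permittivity (assumed)
for m_d in (0.1, 0.3, 0.5):
    print(f"m_d = {m_d}: eps_leaf ~ {eps_leaf(m_d, eps_sw_assumed):.1f}")
```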

3.2 Canopy opacity model

Wegmüller, Mätzler and Njoku [4] used the radiative transfer model, described by Kerr
and Njoku, as a reference point for studying vegetation attenuation and emission. The
transfer model is a model for spaceborne observations of semi-arid land surfaces, and it
is based on the concept of temperature instead of the concept of electric and magnetic
fields. This means that instead of analyzing how the magnitude of the electric and
magnetic field is distributed among the different components, one analyzes how the
energy is distributed in terms of temperature. Every component of the system -- the land
surface, air, leaves, branches etc. -- is then characterized by its temperature.

(PART-C)
6) Write notes on propagation model?
Radio propagation
The usable frequency range for radio waves extends from the highest frequencies of
sound, about 20 kHz, to above 30,000 MHz. The frequency band from 3 to 30 MHz is
designated the high frequency (HF) band. Most of the newer HF radios can operate over
a larger range of 1.6 to 30 MHz, or higher. Most long-haul communications in this band,
however, generally take place between 4 and 18 MHz. Depending on ionospheric
conditions and the time of day, the upper frequency range of about 18 to 30 MHz may
also be available. The HF band, of all of the frequency bands, is by far the most sensitive
to ionospheric effects. HF radio waves, in fact, experience some form of almost every
known propagation mode. The sun influences all radio communication beyond
groundwave or line-of-sight ranges. Conditions vary with such obvious sun-related
cycles as time of day and season of the year. Since these conditions differ for appreciable
changes in latitude and longitude, and everything is constantly changing as the Earth
rotates, almost every communications circuit has unique features with respect to the band
of frequencies that are useful and the quality of signals in portions of that band.
The two basic modes of radio wave propagation at HF are ground wave and
skywave. Figure A1-1 illustrates these two modes.

Ground wave and skywave


A1-1.1 Ground waves
A ground wave, as the name implies, travels along the surface of the earth, thus
enabling short-range communications. Ground waves are those portions of the radiowave
radiation directly affected by the surface of the Earth. The principal components
are
1) an Earth-guided surface wave,
2) a direct wave,
3) a ground-reflected wave,
4) a space wave, and sometimes
5) a tropospheric-reflected/refracted wave.
Ground-wave communication is more straightforward than skywave, and it is generally
assumed that the ground wave is merely an attenuated, delayed, but otherwise undistorted
version of the transmitted signal. The received strength of transmitted radio signals in the
ground-wave mode depends on such factors as transmitter power, receiver sensitivity,
ground conductivity and terrain roughness, antenna characteristics
(such as height, polarization, directivity and gain), the radio frequency, and the type of
path traveled. For a given complement of equipment, the range may extend out to as far
as 400 km (250 mi) over a conductive, all-sea-water path; over arid, rocky,
nonconductive terrain, however, the range may drop to less than 30 km (20 mi), even with
the same equipment. Ground-wave propagation is almost always vertically polarized.
The surface wave is that component of the ground wave that is affected primarily by
the conductivity and dielectric constant of the Earth and is able to follow the curvature of
the Earth. When both transmitting and receiving antennas are on, or close to, the ground,
the direct and ground-reflected components of the wave tend to cancel out, and the
resulting field intensity at the receiving antenna is principally that of the surface wave.
The surface-wave component is not confined to the Earth's surface, however, but extends
up to considerable heights, diminishing in field strength with increased height. Because
part of its energy is absorbed by the ground, the electric intensity of the surface wave is

attenuated at a much greater rate than inversely as the distance. This attenuation depends
on the relative conductivity of the surface over which the wave travels. The best type of
surface for surface-wave transmission is sea water. The electrical properties of the
underlying terrain that determine the attenuation of the surface-wave field intensity vary
little from time to time, and therefore, this type of transmission has relatively stable
characteristics. The surface-wave component generally is transmitted as a vertically
polarized wave, and it remains vertically polarized at appreciable distances from the
antenna. This polarization is chosen because the Earth has a short-circuiting effect on
the electric intensity of a horizontally polarized wave but offers resistance to this
component of the vertical wave.
Absorption of the radio wave increases with frequency and limits useful surface-wave
propagation to the lower HF range. At frequencies below about 5 MHz, the surface wave
is favored because the ground behaves as a conductor for the electromagnetic energy.
Above 10 MHz, however, the ground behaves as a dielectric. In the region below 10
MHz, conductivity of the surface is a primary factor in attenuation of the surface wave.
As frequencies approach 30 MHz, losses suffered by the surface wave become excessive,
and ground-wave communication is possible only by means of direct waves.

Direct waves, also known as line-of-sight (LOS) waves, follow a direct path through
the troposphere from the transmitting antenna to the receiving antenna. Propagation can
extend to somewhat beyond the visible horizon due to normal refraction in the
atmosphere causing the path to be somewhat bent or refracted. Because the electric field
intensity of a direct wave varies inversely with the distance of transmission, the wave
becomes weaker as distance increases, much like the light beam from a lantern or
headlight. The direct wave is not affected by the ground or by the tropospheric air over
the path but the transmitting and receiving antennas must be able to see each other for
communications to take place, making antenna height a very critical factor in determining
range. Almost all of the communications systems above 30 MHz use the direct (LOS)
mode. This includes the commercial broadcast FM stations, VHF, UHF, microwave,
cellular telephone systems, and satellite systems.
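To illustrate why antenna height is the critical factor for direct-wave range, here is a hedged sketch using the standard 4/3-earth radio-horizon rule of thumb, d ≈ 4.12·√h (d in km, h in m). This approximation is a standard engineering result, not a formula given in the text above.

```python
import math

# Approximate LOS range under standard (4/3-earth) refraction:
# each antenna's radio horizon is about 4.12 * sqrt(height_in_metres) km.
def los_range_km(h_tx_m, h_rx_m):
    return 4.12 * (math.sqrt(h_tx_m) + math.sqrt(h_rx_m))

for h in (10, 30, 100):
    print(f"tx = {h:3d} m, rx = 10 m -> LOS range ~ {los_range_km(h, 10):.0f} km")
```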
Space waves constitute the combination of all signal types which may reach a receiver

when both the transmitting and the receiving antennas are within LOS. In addition to the
direct signal, space waves include all of any earth-reflected signals of significance and,
under specific conditions, would include undesirable strong secondary ionospheric modes
as well. Space waves will support a relatively high signal bandwidth, as compared to
ionospheric modes.
Ground-reflected waves result from a portion of the propagated wave being reflected
from the surface of the earth at some point between the transmitting and receiving
antenna. This causes a phase change in the transmitted signal and can result in a
reduction or an enhancement of the combined received signal, depending on the time of
arrival of the reflected signal relative to the other components.
Tropospheric-reflected/refracted waves are generated when abrupt differences in
atmospheric density and refractive index exist between large air masses. This type of
refraction, associated with weather fronts, is not normally significant at HF.
Skywaves
Skywaves are those main portions of the total radiation leaving the antenna at
angles above the horizon. The term skywave describes the method of propagation by
which signals originating from one terminal arrive at a second terminal by refraction
from
the ionosphere. The refracting (bending) qualities of the ionosphere enable global-range
communications by bouncing the signals back to Earth and keeping them from being
beamed into outer space. This is one of the primary characteristics of long-haul HF
communication -- its dependence upon ionospheric refraction. Depending on frequency,
time of day, and atmospheric conditions, a signal can bounce several times before
reaching a receiver which may be thousands of kilometers away. Ionospheric skywave
returns, however, in addition to experiencing a much greater variability of attenuation and
delay, also suffer from fading, frequency (doppler) shifting and spreading, time
dispersion and delay distortion.
Nearly all medium- and long-distance (beyond the range of ground wave)
communications in the HF band are by means of skywaves. After leaving the transmitting
antenna, the skywave travels from the antenna at an angle that would send it out into
space if its path were not bent enough to bring it back to Earth. The radio wave path is
essentially a straight line as it travels through the neutral atmospheric region below the
ionosphere. As the radio wave travels outward from the Earth, the ionized particles in the
ionosphere bend (refract) the radio waves. Figure A1-2 is an idealized depiction of the
refraction process. Depending on the frequency and ionospheric ionization conditions,
the continual refraction of the radio waves results in a curved propagation path. This
curved path can eventually exit the ionosphere downward towards Earth so that the radio
waves return to the Earth at a point hundreds or thousands of kilometers from where they
first entered the ionosphere. In many cases, the radio waves reenter the ionosphere by
bouncing from the Earth, and are again refracted back at a further distance. This is
known as multihop and, under the right conditions, will give global reach. On a single
HF link, many single-hop and multihop propagation paths are frequently possible.

11) Briefly explain tropospheric scatter propagation?

It is important to note that the relative dielectric constants of the leaves and branches are
frequency dependent [1]. In the analysis, constant values for the permittivities of the
leaves and the branches have been assumed, because the permittivities of the leaves and
the branches do not change much between 800 MHz and 2000 MHz.

3.4 Microwave transmissivity of a forest canopy

Microwave measurements of the transmissivities and opacities of the crown of a beech
(Fagus sylvatica L.) have been carried out by Mätzler. The technique used for the
measurements corresponds to the one explained in section 3.2. To avoid any prejudice
about the type of microwave propagation model, Mätzler limited the physical
interpretation to obvious facts and to consistency tests of the multivariate dataset. The
main instruments used in the study were the five microwave radiometers of the PAMIR
system.
The transmitted power was recorded during a whole year. In this way it has been
possible to get an idea of how much the attenuation is affected by the leaves alone, since
measurements were made both for a canopy containing leaves and branches and for a
canopy without leaves. The microwave radiation at 4.9 GHz, 10.4 GHz, 21 GHz, 35 GHz
and 94 GHz was measured about once every week between August 1987 and August
1988. During the measurements the radiometer was placed to measure the
transmissivity in a vertical direction through the beech. It thus measures the brightness
temperature T_b1 of the downwelling radiation from the beech. This temperature can be
expressed by

T_b1 = t T_b2 + r T_b0 + (1 − r − t) T_1    (3.23)

where t is the transmissivity and r the reflectivity of the vegetation layer. Here T_1 is the
physical tree temperature and T_b2 is the sky brightness temperature. The upwelling
brightness temperature T_b0 from the ground is given by

T_b0 = e_0 T_0 + (1 − e_0) T_b1    (3.24)

where e_0 is the emissivity of the ground surface and T_0 is the ground temperature.
Eq. (3.23) and Eq. (3.24) are the basic equations for the experiments, and they can be
used to get an expression for the transmissivity of the tree crown. After some algebra we
find

t = (T_1 + r δT − T_b1) / (T_1 − T_b2)    (3.25)

where δT = T_b0 − T_1. Since the emissivity of the grass-covered ground below the beech
is near 0.95 over the entire frequency range, T_b0 approaches T_0. This, and the fact that
the reflectivity of the beech is close to 0.1, leads to the following estimate:

r δT ≈ 0.1 (T_0 − T_1)

Since T_0 and T_1 are always very similar (differences were typically within 2 °C), we
can neglect r δT in Eq. (3.25) and write

t = (T_1 − T_b1) / (T_1 − T_b2)    (3.26)

In order to compute t we need values of the physical tree temperature T_1, of the
brightness temperature T_b1 measured below the tree, and of the sky brightness
temperature T_b2. In the beech experiment T_b2 was measured at zenith angles of 50° and
60°, and T_b1 (the downwelling radiation of the beech) was measured at two linear
polarizations (v and h), in the vertical direction, and through the center of the crown at
30° off zenith, opposite the direction of the sky measurements. The tree temperature T_1
was measured with an infrared radiometer and compared with air and grass temperatures.
We define the effective opacity of the vegetation layer

τ = −ln(t)    (3.27)

in accordance with the Lambert-Beer law.
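A numerical sketch of Eqs. (3.26) and (3.27) with invented example temperatures, just to show how the transmissivity t and the effective opacity τ are obtained from the three measured quantities.

```python
import math

# Eq. (3.26): t = (T1 - Tb1) / (T1 - Tb2); Eq. (3.27): tau = -ln(t).
# All temperature values below are invented for illustration only.
T1 = 290.0    # physical tree temperature, K (assumed)
Tb1 = 150.0   # brightness temperature measured below the crown, K (assumed)
Tb2 = 30.0    # sky brightness temperature, K (assumed)

t = (T1 - Tb1) / (T1 - Tb2)
tau = -math.log(t)          # Lambert-Beer effective opacity
print(f"transmissivity t = {t:.3f}, effective opacity tau = {tau:.3f}")
```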


4) Tree modeling

In this section we analyze and model the dielectric properties of leaves and branches. We
also analyze the structure of the crown of a tree. Despite the stochastic nature of this
subject, it is still possible to draw some conclusions on the orientation and distribution of
the leaves and branches. Since we have made our attenuation measurements on a Fagus
sylvatica Pendula (beech), the analysis is based on this tree. It is easy to adjust the results
to another tree type, since only a few parameters are related to the structure of the tree.

4.1 Dielectric model of Leaves


Leaves consist of a heterogeneous cell structure. Since the frequencies used have
wavelengths corresponding to values around 0.5 to 1 dm, the incident field is not able to
resolve the cell structure. Thus the material of the leaves resembles a homogeneous
material with effective medium properties. As mentioned before, the effective dielectric
properties are modeled by dielectric mixing theory. In dielectric mixing theory the
volume fractions of the different parts of the object are multiplied by the corresponding
permittivities to obtain the effective permittivity of the object. If the object with volume
V consists of three components with volumes V_1, V_2 and V_3, where the respective
components have the permittivities ε_1, ε_2 and ε_3, we get

ε = (1/V){V_1 ε_1 + V_2 ε_2 + V_3 ε_3} = v_1 ε_1 + v_2 ε_2 + v_3 ε_3    (4.1)

where V_i = v_i V and v_1 + v_2 + v_3 = 1. For vegetation, the components are liquid
water with a high permittivity, organic material with moderate to low permittivity, and
air with unit permittivity. All attempts so far to use physical mixing theory to create a
formula for the effective permittivity of a leaf have failed. The reason is probably the
large differences in volume fractions and permittivities between the different
components; a leaf can consist of up to 90 percent water (or even more), which probably
causes nonlinear effects.
To create a valid formula for the permittivity of a leaf we have to use another technique.
Since the saline water of the leaf causes the largest contribution to the disturbance of the
incident electromagnetic field, a model of the water content can serve as a basis. This
model should thereafter be adjusted to the experimental values from leaves at different
frequencies and at different dry-matter fractions, in order to compensate for the effects
that the organic matter and air have on the permittivity.
A model that describes the dielectric properties of saline water is the Debye model

ε_sw = ε_∞ + (ε_s − ε_∞) / (1 − iωτ) + i σ / (ω ε_0)    (4.2)

Here ε_∞ is the value of the dielectric function at high frequencies, ε_s is the
corresponding value at ω = 0, and τ is the relaxation time.¹

¹ This is valid for all sorts of vegetation elements such as branches, herbs, trunks etc.
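A minimal sketch of the Debye model of Eq. (4.2), with rough assumed parameter values for saline water at room temperature; the source's actual parameter values are not reproduced here.

```python
import math

EPS0 = 8.854e-12  # vacuum permittivity, F/m

# Eq. (4.2): eps_sw = eps_inf + (eps_s - eps_inf)/(1 - i*omega*tau)
#                     + i*sigma/(omega*eps0)
# eps_s, eps_inf, tau, sigma below are rough assumed values.
def eps_saline(freq_hz, eps_s=75.0, eps_inf=4.9, tau=9e-12, sigma=1.0):
    omega = 2 * math.pi * freq_hz
    return (eps_inf + (eps_s - eps_inf) / (1 - 1j * omega * tau)
            + 1j * sigma / (omega * EPS0))

for f in (3.1e9, 5.8e9):
    print(f"{f/1e9:.1f} GHz: eps_sw ~ {eps_saline(f):.1f}")
```

In practice the value returned here would feed directly into the leaf formula ε_leaf of section 3.1.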
12) Write notes on ionosphere models?
The ionosphere is a region of electrically charged gases and particles in the earth's
atmosphere, which extends upward from approximately 50 km to 600 km (30 to 375
miles) above the earth's surface. See Figure A1-3. During daylight hours, the lower
boundary of the ionosphere is normally about 65 to 75 km above the earth's surface, but
can be as low as about 50 km. At night, the absence of direct solar radiation causes the
lower regions to weaken and this lower boundary to rise.

The ionosphere is made up of several ionized regions, which play a most


important part in the propagation of radio waves. These regions have an influence on
radio waves mainly because of the presence of free electrons, which are arranged in
generally horizontal layers.
Ionization is the process of creating free electrically charged particles (ions and
free electrons) in the atmosphere, thus establishing the ionosphere. The Sun is the
primary engine of ionization. The earth's atmosphere is composed of many different
gases. Because the sun emits radiation in a broad spectrum of wavelengths, different
wavelengths ionize the various atmospheric gas molecules at different altitudes. This
results in the development of a number of ionized layers. Extreme ultraviolet (EUV)
radiation from the sun is a primary force in the ionization process. The various types of
gas molecules in the upper atmosphere have different susceptibilities to ionization,
primarily based on the wavelengths of the ionizing radiation. The short-wavelength solar
radiation, including EUV, is sufficiently intense during daylight hours to alter the
electronic structure of the various gas molecules above altitudes of about 65 km. In
general, the interactions between ions, free electrons, and background neutral molecules
in the ionosphere involve chemical, electrodynamic, and kinetic forces. The existence of
charged particles in the ionosphere allows electrical forces to affect the motions of the
atmospheric gas.
The intensity of solar radiation and therefore ionization varies periodically
allowing prediction of solar radiation intensity based on time of day and the season.
Ionization is higher during spring and summer because the hours of daylight are longer
and conversely lower during the fall and winter because the hours of daylight are shorter.
This ionized ionospheric structure also varies widely over the Earth's surface, since the
strength of the sun's radiation varies considerably with geographic latitude, time of day,
season, sunspot activity, and whether or not the ionosphere is disturbed. The intensity of
the solar radiation tends to track solar activity, especially the sunspot activity. In
addition to ionizing a portion of the neutral gas, solar radiation also breaks down some of
the neutral molecules, thereby changing the composition of the upper atmospheric gas.
Although the principal source of ionization in the ionosphere is electromagnetic radiation
from the sun, there are other important sources of ionization, such as solar particles and

galactic cosmic rays. The ionization rate at various altitudes depends upon the intensity
of the solar radiation and the ionization efficiency of the neutral atmospheric gases.
Collisions in the atmosphere, however, usually result in the recombination of electrons
and positive ions, and the reattachment of electrons to neutral gas atoms and molecules,
thus decreasing the overall ionization density.
For the purpose of propagation prediction and ionospheric studies, it is frequently
useful to separate the environment (especially the ionosphere) into two states, benign and
disturbed. The benign ionosphere state is that which is undisturbed by solar flares, large
geomagnetic storms, and known manmade (including nuclear) events. Even then, there is
still a significant variability, partly due to the effects of such phenomena as traveling
ionospheric disturbances (TIDs), sudden ionospheric disturbances (SIDs), sporadic-E,
and
spread-F, as examples. The disturbed ionosphere is a state that includes the effects of
several disturbing influences which occur quite naturally. Solar flares, geomagnetic
storms, and nuclear detonations will cause significant ionospheric changes. Disturbances
may also be produced by the release of certain chemicals into the ionosphere. The
magnitudes of the introduced effects vary widely. Certain regions of the ionosphere, such
as the auroral zone and the equatorial region (in certain categories), are always in the
disturbed state.
Ionospheric layering
Within the ionosphere, there are four layers of varying ionization that have
notable effects on communications. As has been noted, solar radiation (EUV, UV, and
X-rays) and, to a lesser extent cosmic rays, act on ionospheric gases and cause ionization.
Since these ionization sources vary both in energy level and wavelength (frequency), they
penetrate to different depths of the atmosphere and cause different ionization effects. The
natural grouping of energy levels results in distinct layers being formed at different
altitudes.
At altitudes below about 80 km, winds and weather patterns cause a turbulent
mixing of the atmospheric gases present at these lower levels. This turbulent mixing
diminishes as altitude increases and as the stratification (or layering) of the constituent
gases becomes more pronounced. The density of ionized gases and particles increases
with altitude to a maximum value, then decreases or remains constant up to the next
layer. The higher layers of the ionosphere tend to be more densely ionized and contain

the smaller particles, while the lower layers, which are somewhat protected by the higher
ones, contain the larger particles and experience less ionization. The different
ionospheric gases each have different ionizing wavelengths, recombination times, and
collision cross sections, as well as several other characteristics. All of this results in the
creation of the ionized atmospheric layers. The boundaries between the various
ionospheric layers are not distinct, because of constant motion within the layers and the
changeability of the ionizing forces.
The ionospheric layers that most influence HF communications are the D, E, F1,
and F2 layers, and, when present, the sporadic-E layer. Of these, the D-layer acts as a
large rf sponge that absorbs signals passing through it. Depending on frequency and time
of day, the remaining four ionized layers are useful (necessary!) to the communicator and
HF communications.
Due to the ionization effects of the solar zenith angle (height of the Sun in the
sky), the altitudes of the various layers and their relative electron densities at any time
depend on the latitude. For mid-latitudes, the following are typical layer (region)
altitudes and extent:
D-region -- 70 to 90 km (a bottom level of 50 km is not too unusual)
E-region -- 90 to 140 km
Sporadic-E region -- typically 105 to 110 km
F-region -- from about 140 km to as high as 1000 km
F1-region -- 140 to over 200 km (during daylight only)
F2-region -- 200 to about 500 km
The hourly, daily, seasonal, and solar cycle variations in solar activity cause the altitudes
of these layers to undergo continual shifting and further substratification.
D-layer
The D-layer, which normally extends from 70 to 90 km above the Earth, is
strongest during daylight hours with its ionization being directly proportional to how high
the sun is in the sky. This layer often extends down to about 50 km. The electron
concentration and the corresponding ionization density is quite small at the lowest levels,
but increases rapidly with altitude. The D-region electron density has a maximum value

shortly after local solar noon and a very small value at night because it is ionized only
during the day. The D-layer is the lowest region affecting HF radio waves. There is a
pronounced seasonal variation in D-region electron densities with a maximum in
summer. The relatively high density of the neutral atmosphere in the D-region causes the
electron collision frequency to be correspondingly high. The main influence of the D
region on HF systems is absorption. In fact, this region is responsible for most of the
absorption encountered by HF signals which use the skywave mode. Because absorption
is inversely proportional to the square of the frequency, wave energy in the lower end of the HF band is
almost completely absorbed by this layer during daylight hours. The rise and fall of the
D-layer, and the corresponding amount of radio wave absorption, is the primary
determinant of the lowest usable frequency (LUF) over a given path. Due to the greater
penetration ability of higher radio frequencies, the D-layer has a smaller effect on
frequencies above about 10 MHz. At lower frequencies, however, absorption by the
D-layer is significant. Absorption losses of the higher-frequency waves depend on the
D-region ionization density, the extent of the region, the incident angle, the radio frequency,
and the number of hops, among other factors. (For every hop, the rf wave traverses the D
region twice, once on the way up, and once on the way down.)
E-layer
The lowest region of the ionosphere useful for returning radio signals to the Earth
is the E-layer. Its altitude ranges from about 90 km to about 130 km and includes both
the normal and the sporadic-E layers. The average altitude of the layer's central region is
at about 110 km. At this height, the atmosphere is dense enough so that ions and
electrons set free by solar radiation do not have to travel far before they meet and
recombine to form neutral particles. It is also dense enough to allow rapid de-ionization
as solar energy ceases to reach it. Ionization of this layer begins near sunrise, reaches
maximum ionization at noon, and ceases shortly after sundown. The layer can maintain
its ability to bend radio waves only in the presence of sunlight. At night, only a small
residual level of ionization remains in the E-region. The normal E-layer is important for
daytime HF propagation at distances of up to about 2000 km. Irregular cloud-like layers
of ionization often occur in the region of normal E-layer appearance and are known as
sporadic-E (ES). These areas are highly ionized and are sometimes capable of supporting

the propagation of sky waves at the upper end of the HF band and into the lower VHF
band.
Sporadic E
In addition to the relatively regular ionospheric layers (D, E, and F), layers of
enhanced ionization often appear in the E-region (ES) and the lower parts of the
F-regions (sporadic F). The significant irregular reflective layer, from the point of view
of HF propagation, is the ES-layer, since it occurs in the same altitude region as the
regular E-layer.
Despite what their name implies, these layers are quite common. A theory is that
ES occurs as a result of ionization from high altitude wind shear in the presence of the
magnetic field of the Earth, rather than from ionization by solar and cosmic radiation.
Another theory is that ES-layers are thin patches of long-lived ions (primarily metallic)
that are believed to be rubbed off from meteors as they pass through the atmosphere, and
then are formed into thin layers by the action of tidal wind systems. Layers of sodium
ions produced by similar mechanisms commonly appear in the 90-km altitude range.
Because the recombination rates of metallic ions are extremely low in the ionosphere,
these thin layers can persist for many hours before being neutralized by recombination
and dispersed by diffusion, and are most commonly observed at night when the
background densities are low. Areas of ES generally last only a few hours, and move
about rapidly under the influence of high-altitude wind patterns. Different forms of ES,
having different characteristics and production mechanisms, are found in the auroral
zones and, at an altitude of about 105 km, in the low and middle equatorial latitudes.
They share the common characteristics that they are all E-layer phenomena, their
occurrence is not predictable, and they all have an effect on HF radio communications.
When ES occurs, it produces a marked effect on the geometry of radio propagation paths
which normally involve the higher layers. Their peak densities can sometimes exceed
that of the higher altitude F-region. When this occurs, these layers can reflect incident
HF waves at much lower altitudes and prevent reflections from the F-layer, thereby
greatly reducing the expected range of transmission. Although ES is difficult to predict, it
can be used to advantage when its presence is known. It has been found that close to the

equator, ES occurs primarily during the day and shows little seasonal variation. By
contrast, in the auroral zone, ES is most prevalent during the night but also shows little
seasonal variation. In middle latitudes however, ES occurrence is subject to both seasonal
and diurnal variations and is more prevalent in local summer than in winter and during
the day rather than at night.
F-layer
The F-layer is the highest and most heavily ionized of the ionized regions, and
usually ranges in altitude from about 140 km to about 500 km. At these altitudes, the air
is thin enough that the ions and electrons recombine very slowly, thus allowing the layer
to retain its ionized properties even after sunset. The F-layer is the most important one
for long-distance HF propagation. If sporadic ionospheric disturbances are ignored, the
height and density of this region varies in a predictable manner diurnally, seasonally, and
with the 11-year sunspot cycle. Under normal conditions it exists 24 hours a day. The
F-layers ionize very rapidly at sunrise and reach peak electron density early in the
afternoon at the middle of the propagation path. The ionization decays very slowly after
sunset and reaches its minimum value just before sunrise. At night, the layer has a single
density peak and is called the F-layer. During the day, the absorption of solar energy
results in
the formation of two distinct density peaks. The lower peak, the F1-layer, ranges in
height from about 130 km to about 300 km and seldom is predominant in supporting HF
radio propagation. Occasionally, this layer is the reflecting region for HF transmission,
but in general, obliquely-incident waves that penetrate the E-region also penetrate the F1-layer and are reflected by the F2-layer. The F1-layer, however, does introduce additional
absorption of the radio waves. After sunset, the F1-layer quickly decays and is replaced
by a broadened F2-layer, which is known simply as the F-layer. The F2-layer, the higher
and more important of the two layers, ranges in height from about 200 km to about 500
km. This F2-layer reaches maximum ionization at noon and remains charged at night,
gradually decreasing to a minimum just before sunrise. In addition to being the layer with
the maximum electron density, the F2-layer is also strongly influenced by solar winds,
diffusion, magnetospheric events, and other dynamic effects and exhibits considerable
variability. Ionization does not completely depend on the solar zenith angle because with

such low molecular collision rates, the region can store received solar energy for many
hours. In the daytime, the F2-layer is generally about 80 km thick, centered on about 300
km altitude. At night the F1-layer merges with the F2-layer, resulting in a combined
F-layer with a width of about 150 km, also centered on about 300 km altitude. Due to the
Earth/ionospheric geometry, the maximum range of a single hop off of the F2-region is
about 4000 km (2500 miles). The absence of the F1-layer, the sharp reduction in
absorption of the E-region, and absence of the D-layer cause night-time field intensities
and noise to be generally higher than during daylight. Near the equator, there are
significant latitudinal gradients in the F-region ionization. In the polar regions (high
latitudes), there is a region of strongly depressed electron density in the F-layer. These
can have important effects upon long-distance radio wave propagation.
13) Explain the wavelength virtual height?
If a receiving antenna is used to measure the power transmitted through the canopy, P_t,
at a distance r from the transmitter, the properties of the receiving antenna have to be
considered. The incident waves are received over an area that is not the same as the
physical area of the receiving antenna. It is therefore convenient to define a quantity
called the effective area (also called the effective aperture or receiving cross section).
The effective area A_e(θ,φ) of a receiving antenna is the ratio of the average power
delivered to a matched load to the time-average power density (time-average Poynting
vector) of the incident electromagnetic wave at the antenna. We write

P_L = A_e S    (5.18)

where P_L is the maximum average power transferred to the load (under matched
conditions) with the receiving antenna properly oriented with respect to the polarization
of the incident wave. It can be proved that the ratio of the directive gain and the effective
area of an antenna is a universal constant and follows the relation

G(θ,φ) = (4π / λ²) A_e(θ,φ)    (5.19)
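A small sketch combining Eqs. (5.18) and (5.19): given an assumed directive gain and incident power density, compute A_e and the received power. The input values are illustrative assumptions, not measurements from the source.

```python
import math

C = 299_792_458.0  # speed of light, m/s

# Eq. (5.19) rearranged: Ae = G * lambda**2 / (4*pi); Eq. (5.18): P_L = Ae * S.
def effective_area(gain_linear, freq_hz):
    lam = C / freq_hz
    return gain_linear * lam ** 2 / (4 * math.pi)

G = 10.0   # assumed directive gain of the receiving antenna (10 dBi)
S = 1e-6   # assumed incident power density, W/m^2
for f in (3.1e9, 5.8e9):
    Ae = effective_area(G, f)
    print(f"{f/1e9:.1f} GHz: Ae = {Ae*1e4:.1f} cm^2, P_L = {Ae*S:.2e} W")
```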

13) Explain the skip distance?

The far-field amplitude in the case of scattering from a thin disc: the analysis of finding
the far-field amplitude in the case of scattering from a finite-length cylinder is very
similar to that for a thin disc. The electromagnetic boundary conditions, requiring the
continuity of the tangential field components across the interface, are used together with
a quasi-static technique.
2.2 Resonance region
The theory for the electromagnetic interaction for bodies in the resonance region is
discussed in appendix C. The T-matrix method is used to derive an expression for the
total cross section in the far zone. From Eq. (C.41) we get

σ_t = (4π / k²) Im{ E_0* Σ_{τ,σ,m,l} i^(−l−2+τ) f_{τσml} A_{τσml}(k̂_i) }    (5.56)

where f_{τσml} are the expansion coefficients of the scattered field and A_{τσml} are the
spherical vector surface functions. Since the surrounding medium is air, the wave
constant becomes k = k_0. To find the expansion coefficients for the scattered field,

f_{τσml} = Σ_{l'=1}^{∞} Σ_{m'=0}^{l'} Σ_{σ'=e,o} Σ_{τ'=1}^{2} T_{τσml,τ'σ'm'l'} a_{τ'σ'm'l'}    (5.57)

is used. Here T is the T-matrix and a_{τ'σ'm'l'} are the expansion coefficients of the
incident field. To generate the different elements of the T-matrix, a surface integral has
to be calculated. In this model the leaves are modeled as dielectric oblate spheroids and
the branches as cylinders of finite length. These geometries make it hard to do any
simplifications (especially in the case of the oblate spheroid), and thus the surface
integrals must be calculated numerically.

5.2.3 Short wave approximation


When the frequency increases and the wavelength becomes small in comparison to the
size of the scattering object, i.e. ka >10, the field will be much more sensitive to surface
irregularities. If we can assume that the radius of curvature of the surface can be
considered as much larger than the wavelength, i.e. each small portion of the surface
14) Explain the ionospheric abnormalities?
In solids, sound waves can propagate in four principal modes that are based on the way
the particles oscillate. Sound can propagate as longitudinal waves, shear waves, surface

waves, and in thin materials as plate waves. Longitudinal and shear waves are the two
modes of propagation most widely used in ultrasonic testing. The particle movement
responsible for the propagation of longitudinal and shear waves is illustrated below.

In longitudinal waves, the oscillations occur in the longitudinal direction or the direction
of wave propagation. Since compressional and dilational forces are active in these waves,
they are also called pressure or compressional waves. They are also sometimes called
density waves because their particle density fluctuates as they move. Compression waves
can be generated in liquids, as well as solids because the energy travels through the
atomic structure by a series of compression and expansion (rarefaction) movements.

In the transverse or shear wave, the particles oscillate at a right angle or transverse to the
direction of propagation. Shear waves require an acoustically solid material for effective
propagation, and therefore, are not effectively propagated in materials such as liquids or
gases. Shear waves are relatively weak when compared to longitudinal waves. In fact,
shear waves are usually generated in materials using some of the energy from
longitudinal waves.

15) Explain the duct propagation?

The expectation value of the total cross section of an oblate spheroid can now be
calculated. The results are given in Figures 7.9-12, and they are based on the results
presented in Figures 7.5-8. The n value corresponds to the exponent of the probability
function: when the value of the exponent is increased, the probability of finding the leaf
in a vertical orientation is increased.
The aim has been to find a way to improve the model of vegetation attenuation. The
same concepts as in the case of rain attenuation have been used, and thus the problem of
vegetation attenuation has been reduced to finding the two quantities N and ⟨σ_t⟩: the
number of scattering bodies per unit volume and the expectation value of the total cross
section. To calculate the N values, a test tree has been chosen in which the number of
leaves and branches per unit volume has been counted on average. The total cross
section for the leaves and branches has been calculated with a computer program based
on the T-matrix method in the resonance region. We have, however, not been able to use
the real symmetries because of restrictions in the computer program. Convergence for
the oblate spheroids (the leaves) was achieved for 2b = 2.86 cm at 3.1 GHz and
2b = 1.90 cm at 5.8 GHz. The oblate spheroid has the dimension 2a along the symmetry
axis (z-axis) and 2b across the equatorial plane (the x-y plane), where the center of the
spheroid is placed at the origin of the coordinate system. The correct value should be
2b = 6.30 cm. For the cylinders (the branches), convergence was achieved for 2a = 3 cm
at 3.1 GHz and 2a = 2 cm at 5.8 GHz. Here 2a is the length and 2b is the diameter of the
cylinder. Since we could not use the real values, we instead used the values that gave
convergence.
From these values we calculated the attenuation of the tree, and we used them to
estimate the real values for the attenuation of the tree crown. To do that, we assumed
that the difference between the sizes of the leaves reflected the difference in attenuation.
We therefore increased the attenuation values at 3.1 GHz by a factor of two and the
attenuation values at 5.8 GHz by a factor of three. But it turned out that the results were
still too low compared to the measurements. The calculated values were 0.7 (0.3) dB/m
at 3.1 GHz and 0.8 (0.3) dB/m at 5.8 GHz (the standard deviation is given in
parentheses). The measured values were 1.3 (0.4) dB/m at 3.1 GHz and 1.4 (0.5) dB/m at
5.8 GHz. The predicted values are thus too low, although the intervals overlap, so the
deviations from the correct values are small. To decrease the uncertainties, more
measurements have to be done. This means that further work is needed, but the modeling
approach can be used; the relation underlying the model is sketched below.
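A sketch of the relation underlying the model: with N scattering bodies per unit volume, each with expected total cross section σ_t, the power decays as exp(−N·σ_t·r), giving a specific attenuation of 10·log10(e)·N·σ_t dB/m. The N and σ_t values below are invented round numbers, not the thesis values, though they happen to give a figure of the same order as those quoted above.

```python
import math

# Specific attenuation of a cloud of scatterers:
# power ~ exp(-N * sigma_t * r)  =>  alpha = 10*log10(e) * N * sigma_t  [dB/m]
N = 400.0        # scatterers (leaves) per cubic metre (assumed)
sigma_t = 4e-4   # expected total cross section per scatterer, m^2 (assumed)

alpha_db_per_m = 10 * math.log10(math.e) * N * sigma_t
print(f"specific attenuation ~ {alpha_db_per_m:.2f} dB/m")
```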
8.1 Future work
More measurements have to be done on the same test beech in order to increase the
accuracy of the mean value and the standard deviation of the attenuation. The inventory
of the test beech must be done with greater accuracy, since the values of the standard
deviation are much too high. The total cross section should be calculated for different
sizes of the branches and leaves; in that way a better estimation can be performed. If
possible, the computer program based on the T-matrix method must be improved in
order to be able to handle oblate spheroids and cylinders with extreme symmetries, or
alternatively another method to calculate the total cross section must be found. Since the
results of the vegetation attenuation will be used in a prediction tool, it is necessary to
investigate the attenuation from other trees so that a mean value of the attenuation can be
estimated. This prediction tool is used to investigate wave propagation in general in
residential environments. It is therefore important to investigate the attenuation of many
different types of trees. It is also important to investigate the frequency of occurrence of
different tree types in cities. If this factor can be determined, a model for every single
tree type can be constructed and used together with this factor as a statistical weight, to
get a better estimation of the vegetation attenuation in general. Of course, measurements
must be made on all different types of trees in order to verify the validity of the
theoretical model.

(PART-A)-ANTENNAS

1) An antenna (or aerial) is a transducer designed to transmit or receive electromagnetic waves.
2) An antenna is an arrangement of conductors that generates a radiating electromagnetic field.
3) The origin of the word antenna, relative to wireless apparatus, is attributed to Guglielmo Marconi.
4) Antennas have practical uses for the transmission and reception of radio frequency signals.
5) Define an antenna.
An antenna is a transition device, or a transducer, between a guided wave and a
free-space wave, or vice versa. An antenna is also said to be an impedance-transforming device.
6) The directionality of the array is due to the spatial relationships and the electrical feed
relationships between individual antennas.
7) What is meant by radiation pattern?
Radiation pattern is the relative distribution of radiated power as a function of direction in space. It is a graph which shows the variation in actual field strength of the EM wave at all points which are at equal distance from the antenna. The energy radiated in a particular direction by an antenna is measured in terms of field strength E (volts/m).
8) Define Radiation intensity?
The power radiated from an antenna per unit solid angle is called the radiation intensity U
(watts per steradian or per square degree). The radiation intensity is independent of
distance.
9) Define beam efficiency?
The total beam area (ΩA) consists of the main beam area (ΩM) plus the minor lobe area (Ωm). Thus ΩA = ΩM + Ωm.
The ratio of the main beam area to the total beam area is called the beam efficiency:
Beam efficiency εM = ΩM / ΩA.
10) Define directivity?
The directivity of an antenna is equal to the ratio of the maximum power density P(θ,φ)max to its average value over a sphere, as observed in the far field of the antenna.
D = P(θ,φ)max / P(θ,φ)av (directivity from pattern)
D = 4π / ΩA (directivity from beam area ΩA)
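A minimal Python sketch of these two relations; the beam-area values are illustrative numbers, not values from the text.

import math

# Directivity from beam area: D = 4*pi / Omega_A, with Omega_A in steradians.
omega_M = 0.10                       # assumed main-beam area, sr
omega_m = 0.02                       # assumed minor-lobe area, sr
omega_A = omega_M + omega_m          # total beam area
D = 4 * math.pi / omega_A            # directivity (dimensionless)
beam_efficiency = omega_M / omega_A  # epsilon_M = Omega_M / Omega_A
print(f"D = {D:.1f} ({10 * math.log10(D):.1f} dBi), "
      f"beam efficiency = {beam_efficiency:.2f}")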
11) What are the different types of aperture?
i) Effective aperture
ii) Scattering aperture
iii) Loss aperture
iv) Collecting aperture
v) Physical aperture
12) Define the different types of aperture.
Effective aperture (Ae):
The area over which power is extracted from the incident wave and delivered to the load is called the effective aperture.
Scattering aperture (As):
It is the ratio of the re-radiated power to the power density of the incident wave.
Loss aperture (Al):
It is the area of the antenna which dissipates power as heat.
Collecting aperture (Ac):
It is the sum of the above three apertures.
Physical aperture (Ap):
This aperture is a measure of the physical size of the antenna.
13) Define aperture efficiency?
The ratio of the effective aperture to the physical aperture is the aperture efficiency, i.e.
Aperture efficiency ηap = Ae / Ap (dimensionless).
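The sketch below computes the aperture efficiency for illustrative numbers. The relation Ae = G·λ²/(4π) used to obtain the effective aperture from gain is standard antenna theory, stated here as an assumption since the text does not derive it.

import math

# Aperture efficiency eta_ap = Ae / Ap, with the effective aperture
# obtained from gain via Ae = G * lambda^2 / (4*pi) (standard relation,
# assumed here). All numbers are illustrative.
c = 3.0e8                          # speed of light, m/s
f = 5.8e9                          # frequency, Hz
lam = c / f                        # wavelength, m
G = 10 ** (30 / 10)                # 30 dBi gain as a linear ratio
Ae = G * lam**2 / (4 * math.pi)    # effective aperture, m^2
Ap = 0.35                          # assumed physical aperture, m^2
print(f"Ae = {Ae:.3f} m^2, eta_ap = {Ae / Ap:.2f}")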
14) What is meant by effective height?
The effective height h of an antenna is a parameter related to the aperture. It may be defined as the ratio of the induced voltage to the incident field, i.e.
h = V / E.
15) What are the field zones?
The fields around an antenna may be divided into two principal regions:
i. Near field zone (Fresnel zone)
ii. Far field zone (Fraunhofer zone)
16) What is meant by polarization?
The polarization of a radio wave can be defined by the direction in which the electric vector E is aligned during the passage of at least one full cycle. Polarization can also be defined as the physical orientation of the radiated electromagnetic waves in space.
There are three types of polarization: linear polarization, circular polarization and elliptical polarization.
17) What is meant by front-to-back ratio?
It is defined as the ratio of the power radiated in the desired direction to the power radiated in the opposite direction, i.e.
FBR = power radiated in the desired direction / power radiated in the opposite direction.
18) Define antenna efficiency.
The efficiency of an antenna is defined as the ratio
of power radiated to the total input power supplied to the
antenna.
Antenna efficiency = Power radiated / Total input power
19) What is radiation resistance?
An antenna is a radiating device in which power is radiated into space in the form of electromagnetic waves. If W is the radiated power and I is the antenna current, then
W = I²Rr
Rr = W / I²
where Rr is a fictitious resistance called the radiation resistance.
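The definition translates directly into a one-line computation; the power and current values below are illustrative, not taken from the text.

# Radiation resistance Rr = W / I^2, with W the radiated power and
# I the RMS antenna current. Illustrative numbers.
W = 100.0    # radiated power, watts
I = 1.17     # RMS antenna current, amperes
Rr = W / I**2
print(f"Rr = {Rr:.1f} ohms")   # ~73 ohms, typical of a half-wave dipole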
20) Define gain.
Gain is the ratio of the maximum radiation intensity in a given direction to the maximum radiation intensity from a reference antenna produced in the same direction with the same input power, i.e.
Gain (G) = (maximum radiation intensity from the test antenna) / (maximum radiation intensity from the reference antenna with the same input power)

(PART-B)

1)

Describe the antenna?


An antenna (or aerial) is a transducer designed to transmit or receive

electromagnetic waves. In other words, antennas convert electromagnetic waves into


electrical currents and vice versa. Antennas are used in systems such as radio and
television broadcasting, point-to-point radio communication, wireless LAN, radar, and
space exploration. Antennas usually work in air or outer space but can also be operated
under water or even through soil and rock at certain frequencies for short distances.
Physically, an antenna is an arrangement of conductors that generate a radiating
electromagnetic field in response to an applied alternating voltage and the associated
alternating electric current or can be placed in an electromagnetic field so that the field
will induce an alternating current in the antenna and a voltage between its terminals.
Some antenna devices (parabolic antenna, horn antenna) just adapt the free space to another type of antenna.

2)

Write a short note on electromagnetic radiation?

The origin of the word antenna relative to wireless apparatus is attributed to Guglielmo Marconi. In 1895, while testing early radio apparatus in the Swiss Alps at Salvan, Switzerland in the Mont Blanc region, Marconi experimented with early wireless
equipment. A 2.5 meter long pole, along which was carried a wire, was used as a
radiating and receiving aerial element. In Italian a tent pole is known as l'antenna
centrale, and the pole with a wire alongside it used as an aerial was simply called
l'antenna. Until then wireless radiating transmitting and receiving elements were known
simply as aerials or terminals. Marconi's use of the word antenna (Italian for pole) would
become a popular term for what today is uniformly known as the antenna.

3)

Describe the elementary doublet?

Antennas have practical uses for the transmission and reception of radio frequency
signals (radio, TV, etc.). In air, those signals travel very quickly and with a very low
transmission loss. The signals are absorbed when moving through more conducting
materials, such as concrete walls, rock, etc. When encountering an interface, the waves
are partially reflected and partially transmitted through.
A common antenna is a vertical rod a quarter of a wavelength long. Such antennas are
simple in construction, usually inexpensive, and both radiate in and receive from all
horizontal directions (omnidirectional). One limitation of this antenna is that it does not
radiate or receive in the direction in which the rod points. This region is called the
antenna blind cone or null.

4)

Describe the current and voltage distribution?

An electromagnetic wave refractor is a structure which is shaped or positioned to delay or


accelerate transmitted electromagnetic waves, passing through such structure, an amount
which varies over the wave front. The refractor alters the direction of propagation of the
waves emitted from the structure with respect to the waves impinging on the structure. It
can alternatively bring the wave to a focus or alter the wave front in other ways, such as
to convert a spherical wave front to a planar wave front (or vice-versa). The velocity of
the waves radiated have a component which is in the same direction (director) or in the
opposite direction (reflector) as that of the velocity of the impinging wave.

A director is a parasitic element, usually a metallic conductive structure, which re-radiates into free space impinging electromagnetic radiation coming from or going to the
active antenna, the velocity of the re-radiated wave having a component in the direction
of the velocity of the impinging wave. The director modifies the radiation pattern of the

active antenna but there is no direct electrical connection between the active antenna and
this parasitic element.
10) What are resonant antennas?

The "resonant frequency" and "electrical resonance" are related to the electrical length of an antenna. The electrical length is usually the physical length of the wire divided by its
velocity factor (the ratio of the speed of wave propagation in the wire to c0, the speed of
light in a vacuum). Typically an antenna is tuned for a specific frequency, and is effective
for a range of frequencies that are usually centered on that resonant frequency. However,
other properties of an antenna change with frequency, in particular the radiation pattern
and impedance, so the antenna's resonant frequency may merely be close to the center
frequency of these other more important properties.

Antennas can be made resonant on harmonic frequencies with lengths that are fractions
of the target wavelength. Some antenna designs have multiple resonant frequencies, and
some are relatively effective over a very broad range of frequencies. The most commonly
known type of wide band aerial is the logarithmic or log periodic, but its gain is usually
much lower than that of a specific or narrower band aerial.

(PART-C)

1)

Briefly explain the electromagnetic radiations?

An antenna (or aerial) is a transducer designed to transmit or receive electromagnetic


waves. In other words, antennas convert electromagnetic waves into electrical currents
and vice versa. Antennas are used in systems such as radio and television broadcasting,
point-to-point radio communication, wireless LAN, radar, and space exploration.
Antennas usually work in air or outer space, but can also be operated under water or even
through soil and rock at certain frequencies for short distances.
Physically, an antenna is an arrangement of conductors that generate a radiating
electromagnetic field in response to an applied alternating voltage and the associated
alternating electric current, or can be placed in an electromagnetic field so that the field
will induce an alternating current in the antenna and a voltage between its terminals.
Some antenna devices (parabolic antenna, Horn Antenna) just adapt the free space to
another type of antenna.
Thomas Edison used antennas by 1885. Edison patented his system in U.S. Patent
465,971. Antennas were also used in 1888 by Heinrich Hertz (1857-1894) to prove the
existence of electromagnetic waves predicted by the theory of James Clerk Maxwell.
Hertz placed the emitter dipole in the focal point of a parabolic reflector. He published
his work and installation drawings in Annalen der Physik und Chemie (vol. 36, 1889).

Terminology
The words antenna (plural: antennas[1]) and aerial are used interchangeably; but usually a
rigid metallic structure is termed an antenna and a wire format is called an aerial. In the
United Kingdom and other British English speaking areas the term aerial is more
common, even for rigid types. The noun aerial is occasionally written with a diaeresis mark, aërial, in recognition of the original spelling of the adjective aërial from which the noun is derived.
The origin of the word antenna relative to wireless apparatus is attributed to Guglielmo
Marconi. In 1895, while testing early radio apparatus in the Swiss Alps at Salvan,
Switzerland in the Mont Blanc region, Marconi experimented with early wireless
equipment. A 2.5 meter long pole, along which was carried a wire, was used as a
radiating and receiving aerial element. In Italian a tent pole is known as l'antenna
centrale, and the pole with a wire alongside it used as an aerial was simply called
l'antenna. Until then wireless radiating transmitting and receiving elements were known
simply as aerials or terminals. Marconi's use of the word antenna (Italian for pole) would
become a popular term for what today is uniformly known as the antenna.[2]
A Hertzian antenna is a set of terminals that does not require the presence of a ground for
its operation (versus a Tesla antenna, which is grounded).[3] A loaded antenna is an active
antenna having an elongated portion of appreciable electrical length and having
additional inductance or capacitance directly in series or shunt with the elongated portion
so as to modify the standing wave pattern existing along the portion or to change the
effective electrical length of the portion. An antenna grounding structure is a structure for
establishing a reference potential level for operating the active antenna. It can be any
structure closely associated with (or acting as) the ground which is connected to the
terminal of the signal receiver or source opposing the active antenna terminal (i.e., the
signal receiver or source is interposed between the active antenna and this structure).
Overview

Antennas have practical uses for the transmission and reception of radio frequency
signals (radio, TV, etc.). In air, those signals travel very quickly and with a very low
transmission loss. The signals are absorbed when moving through more conducting
materials, such as concrete walls, rock, etc. When encountering an interface, the waves
are partially reflected and partially transmitted through.
A common antenna is a vertical rod a quarter of a wavelength long. Such antennas are
simple in construction, usually inexpensive, and both radiate in and receive from all
horizontal directions (omnidirectional). One limitation of this antenna is that it does not
radiate or receive in the direction in which the rod points. This region is called the
antenna blind cone or null.
There are two fundamental types of antenna directional patterns, which, with reference to
a specific three dimensional (usually horizontal or vertical) plane are either:
1.

Omni-directional (radiates equally in all directions), such as a vertical rod

or
2.

Directional (radiates more in one direction than in the other).

In colloquial usage "omni-directional" usually refers to all horizontal directions with


reception above and below the antenna being reduced in favor of better reception (and
thus range) near the horizon. A "directional" antenna usually refers to one focusing a
narrow beam in a single specific direction such as a telescope or satellite dish, or, at least,
focusing in a sector such as a 120° horizontal fan pattern in the case of a panel antenna at
a Cell site.
All antennas radiate some energy in all directions in free space but careful construction
results in substantial transmission of energy in a preferred direction and negligible energy
radiated in other directions. By adding additional elements (such as rods, loops or plates)
and carefully arranging their length, spacing, and orientation, an antenna with desired
directional properties can be created.

An antenna array is two or more simple antennas combined to produce a specific


directional radiation pattern. In common usage an array is composed of active elements,
such as a linear array of parallel dipoles fed as a "broadside array". A slightly different
feed method could cause this same array of dipoles to radiate as an "end-fire array".
Antenna arrays may be built up from any basic antenna type, such as dipoles, loops or
slots.
The directionality of the array is due to the spatial relationships and the electrical feed
relationships between individual antennas. Usually all of the elements are active
(electrically fed) as in the log-periodic dipole array which offers modest gain and broad
bandwidth and is traditionally used for television reception. Alternatively, a superficially
similar dipole array, the Yagi-Uda Antenna (often abbreviated to "Yagi"), has only one
active dipole element in a chain of parasitic dipole elements, and a very different
performance with high gain over a narrow bandwidth.
An active element is electrically connected to the antenna terminals leading to the
receiver or transmitter, as opposed to a parasitic element that modifies the antenna pattern
without being connected directly. The active element(s) couple energy between the
electromagnetic wave and the antenna terminals, thus any functioning antenna has at least
one active element.
An antenna lead-in is the medium, for example, a transmission line or feed line for
conveying the signal energy between the signal source or receiver and the antenna. The
antenna feed refers to the components between the antenna and an amplifier.
An antenna counterpoise is a structure of conductive material most closely associated
with ground that may be insulated from or capacitively coupled to the natural ground. It
aids in the function of the natural ground, particularly where variations (or limitations) of
the characteristics of the natural ground interfere with its proper function. Such structures
are usually connected to the terminal of a receiver or source opposite to the antenna
terminal.

An antenna component is a portion of the antenna performing a distinct function and


limited for use in an antenna, as for example, a reflector, director, or active antenna.
Parasitic elements have no direct electrical connection to the antenna terminals, yet they
modify the antenna pattern. The parasitic elements are immersed in the electromagnetic
waves and fields around the active elements, and the parasitic currents induced in them
interact with the original waves and fields. A careful arrangement of parasitic elements,
such as rods or coils, can improve the radiation pattern of the active element(s). Directors
and reflectors are common parasitic elements.
An electromagnetic wave refractor is a structure which is shaped or positioned to delay or
accelerate transmitted electromagnetic waves, passing through such structure, an amount
which varies over the wave front. The refractor alters the direction of propagation of the
waves emitted from the structure with respect to the waves impinging on the structure. It
can alternatively bring the wave to a focus or alter the wave front in other ways, such as
to convert a spherical wave front to a planar wave front (or vice-versa). The velocity of
the waves radiated have a component which is in the same direction (director) or in the
opposite direction (reflector) as that of the velocity of the impinging wave.
A director is a parasitic element, usually a metallic conductive structure, which re-radiates into free space impinging electromagnetic radiation coming from or going to the
active antenna, the velocity of the re-radiated wave having a component in the direction
of the velocity of the impinging wave. The director modifies the radiation pattern of the
active antenna but there is no direct electrical connection between the active antenna and
this parasitic element.
A reflector is a parasitic element, usually a metallic conductive structure (e.g., screen, rod
or plate), which re-radiates back into free space impinging electromagnetic radiation
coming from or going to the active antenna. The velocity of the returned wave has a component in a direction opposite to the direction of the velocity of the impinging wave.
The reflector modifies the radiation of the active antenna. There is no direct electrical
connection between the active antenna and this parasitic element.

An antenna coupling network is a passive network (which may be any combination of a


resistive, inductive or capacitive circuit(s)) for transmitting the signal energy between the
active antenna and a source (or receiver) of such signal energy.
Typically, antennas are designed to operate in a relatively narrow frequency range. The
design criteria for receiving and transmitting antennas differ slightly, but generally an
antenna can receive and transmit equally well. This property is called reciprocity.
Parameters
Main article: Antenna measurement
There are several critical parameters affecting an antenna's performance that can be
adjusted during the design process. These are resonant frequency, impedance, gain,
aperture or radiation pattern, polarization, efficiency and bandwidth. Transmit antennas
may also have a maximum power rating, and receive antennas differ in their noise
rejection properties. All of these parameters can be measured through various means.
Resonant frequency
The "resonant frequency" and "electrical resonance" is related to the electrical length of
an antenna. The electrical length is usually the physical length of the wire divided by its
velocity factor (the ratio of the speed of wave propagation in the wire to c0, the speed of
light in a vacuum). Typically an antenna is tuned for a specific frequency, and is effective
for a range of frequencies that are usually centered on that resonant frequency. However,
other properties of an antenna change with frequency, in particular the radiation pattern
and impedance, so the antenna's resonant frequency may merely be close to the center
frequency of these other more important properties.
Antennas can be made resonant on harmonic frequencies with lengths that are fractions
of the target wavelength. Some antenna designs have multiple resonant frequencies, and
some are relatively effective over a very broad range of frequencies. The most commonly

known type of wide band aerial is the logarithmic or log periodic, but its gain is usually
much lower than that of a specific or narrower band aerial.
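As a rough sketch of the electrical-length idea, the following Python snippet computes the physical length of a resonant half-wave dipole from the design frequency and an assumed velocity factor; both numbers are illustrative.

# Electrical length = physical length / velocity factor, so a
# half-wave resonant wire has physical length 0.5 * lambda * vf.
c0 = 3.0e8       # speed of light in vacuum, m/s
f = 100e6        # design frequency, Hz (illustrative)
vf = 0.95        # velocity factor of a thin wire (assumed)
wavelength = c0 / f
length = 0.5 * wavelength * vf
print(f"half-wave dipole at {f / 1e6:.0f} MHz: about {length:.2f} m")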
Gain
Main article: Antenna gain
Gain as a parameter measures the directionality of a given antenna. An antenna with a
low gain emits radiation with about the same power in all directions, whereas a high-gain
antenna will preferentially radiate in particular directions. Specifically, the Gain,
Directive gain or Power gain of an antenna is defined as the ratio of the intensity (power
per unit surface) radiated by the antenna in a given direction at an arbitrary distance
divided by the intensity radiated at the same distance by a hypothetical isotropic antenna.
The gain of an antenna is a passive phenomenon - power is not added by the antenna, but
simply redistributed to provide more radiated power in a certain direction than would be
transmitted by an isotropic antenna. If an antenna has a greater than one gain in some
directions, it must have a less than one gain in other directions since energy is conserved
by the antenna. An antenna designer must take into account the application for the
antenna when determining the gain. High-gain antennas have the advantage of longer
range and better signal quality, but must be aimed carefully in a particular direction. Low-gain antennas have shorter range, but the orientation of the antenna is inconsequential. For example, a dish antenna on a spacecraft is a high-gain device that must be pointed at the planet to be effective, whereas a typical Wi-Fi antenna in a laptop computer is low-gain, and as long as the base station is within range, the antenna can be in any orientation in space. It makes sense to improve horizontal range at the expense of
reception above or below the antenna. Thus most antennas labelled "omnidirectional"
really have some gain.[4]
Sometimes, the half-wave dipole is taken as a reference instead of the isotropic radiator. The gain is then given in dBd (decibels over dipole). Since a lossless half-wave dipole has a gain of 2.15 dBi, G(dBd) = G(dBi) - 2.15.
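A trivial Python helper for this conversion, using the 2.15 dB standard gain of a lossless half-wave dipole over an isotropic radiator:

# Convert antenna gain between dBi (isotropic reference) and
# dBd (half-wave dipole reference): G_dBd = G_dBi - 2.15.
DIPOLE_GAIN_DBI = 2.15

def dbi_to_dbd(g_dbi):
    return g_dbi - DIPOLE_GAIN_DBI

def dbd_to_dbi(g_dbd):
    return g_dbd + DIPOLE_GAIN_DBI

print(dbi_to_dbd(8.15))   # 6.0 dBd
print(dbd_to_dbi(0.0))    # 2.15 dBi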

2)

Explain the antenna gain and effective radiated power?

Monopole and earth return


In a common configuration, called monopole, one of the terminals of the rectifier is
connected to earth ground. The other terminal, at a potential high above, or below,
ground, is connected to a transmission line. The earthed terminal may or may not be
connected to the corresponding connection at the inverting station by means of a second
conductor.
If no metallic conductor is installed, current flows in the earth between the earth
electrodes at the two stations. Therefore it is a type of single wire earth return. The issues
surrounding earth-return current include:
- Electrochemical corrosion of long buried metal objects such as pipelines.
- Underwater earth-return electrodes in seawater may produce chlorine or otherwise affect water chemistry.
- An unbalanced current path may result in a net magnetic field, which can affect magnetic navigational compasses for ships passing over an underwater cable.
These effects can be eliminated with installation of a metallic return conductor between
the two ends of the monopolar transmission line. Since one terminal of the converters is
connected to earth, the return conductor need not be insulated for the full transmission
voltage which makes it less costly than the high-voltage conductor. Use of a metallic
return conductor is decided based on economic, technical and environmental factors.[15]
Modern monopolar systems for pure overhead lines carry typically 1,500 MW.[16] If
underground or underwater cables are used the typical value is 600 MW.
Most monopolar systems are designed for future bipolar expansion. Transmission line
towers may be designed to carry two conductors, even if only one is used initially for the
monopole transmission system. The second conductor is either unused, used as electrode
line or connected in parallel with the other (as in the case of the Baltic Cable).

Bipolar

Bipolar system pylons of the Baltic-Cable-HVDC in Sweden


In bipolar transmission a pair of conductors is used, each at a high potential with respect
to ground, in opposite polarity. Since these conductors must be insulated for the full
voltage, transmission line cost is higher than a monopole with a return conductor.
However, there are a number of advantages to bipolar transmission which can make it an attractive option:
- Under normal load, negligible earth-current flows, as in the case of monopolar transmission with a metallic earth-return. This reduces earth return loss and environmental effects.
- When a fault develops in a line, with earth return electrodes installed at each end of the line, approximately half the rated power can continue to flow using the earth as a return path, operating in monopolar mode.
- Since for a given total power rating each conductor of a bipolar line carries only half the current of monopolar lines, the cost of the second conductor is reduced compared to a monopolar line of the same rating.
- In very adverse terrain, the second conductor may be carried on an independent set of transmission towers, so that some power may continue to be transmitted even if one line is damaged.

A bipolar system may also be installed with a metallic earth return conductor.
Bipolar systems may carry as much as 3,200 MW at voltages of +/-600 kV. Submarine
cable installations initially commissioned as a monopole may be upgraded with
additional cables and operated as a bipole.
A back-to-back station (or B2B for short) is a plant in which both static inverters and
rectifiers are in the same area, usually in the same building. The length of the direct
current line is kept as short as possible. HVDC back-to-back stations are used for:
- coupling of electricity mains of different frequency (as in Japan);
- coupling two networks of the same nominal frequency but no fixed phase relationship (as until 1995/96 in Etzenricht, Dürnrohr and Vienna);
- different frequency and phase number (for example, as a replacement for traction current converter plants).


The DC voltage in the intermediate circuit can be selected freely at HVDC back-to-back
stations because of the short conductor length. The DC voltage is as low as possible, in
order to build a small valve hall and to avoid series connections of valves. For this reason
at HVDC back-to-back stations valves with the highest available current rating are used.
Systems with transmission lines
The most common configuration of an HVDC link is two inverter/rectifier stations
connected by an overhead powerline. This is also a configuration commonly used in
connecting unsynchronised grids, in long-haul power transmission, and in undersea
cables.
Multi-terminal HVDC links, connecting more than two points, are rare. The configuration
of multiple terminals can be series, parallel, or hybrid (a mixture of series and parallel).
Parallel configuration tends to be used for large capacity stations, and series for lower
capacity stations. An example is the 2,000 MW Quebec - New England Transmission
system opened in 1992, which is currently the largest multi-terminal HVDC system in the
world.[17]

Tripole: current-modulating control


A scheme patented in 2004 (current modulation of direct current transmission lines) is intended for conversion of existing AC transmission lines to
HVDC. Two of the three circuit conductors are operated as a bipole. The third conductor
is used as a parallel monopole, equipped with reversing valves (or parallel valves
connected in reverse polarity). The parallel monopole periodically relieves current from
one pole or the other, switching polarity over a span of several minutes. The bipole
conductors would be loaded to either 1.37 or 0.37 of their thermal limit, with the parallel
monopole always carrying +/- 1 times its thermal limit current. The combined RMS
heating effect is as if each of the conductors is always carrying 1.0 of its rated current.
This allows heavier currents to be carried by the bipole conductors, and full use of the
installed third conductor for energy transmission. High currents can be circulated through
the line conductors even when load demand is low, for removal of ice.
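The RMS-heating claim can be checked numerically; a quick sketch, assuming the two loading states (1.37 and 0.37 of the thermal limit) alternate with equal duty cycle:

import math

# A conductor alternating between 1.37x and 0.37x of rated current,
# spending equal time in each state, has the same I^2*R heating as
# carrying 1.0x continuously.
i_high, i_low = 1.37, 0.37
rms = math.sqrt((i_high**2 + i_low**2) / 2)
print(f"combined RMS loading = {rms:.3f}")   # ~1.003, i.e. about 1.0x rated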
Combined with the higher average power possible with a DC transmission line for the
same line-to-ground voltage, a tripole conversion of an existing AC line could allow up to
80% more power to be transferred using the same transmission right-of-way, towers, and
conductors. Some AC lines cannot be loaded to their thermal limit due to system stability,
reliability, and reactive power concerns, which would not exist with an HVDC link.
The system would operate without earth-return current. Since a single failure of a pole
converter or a conductor results in only a small loss of capacity and no earth-return
current, reliability of this scheme would be high, with no time required for switching.
As of 2005, no tri-pole conversions are in operation, although a transmission line in India
has been converted to bipole HVDC.
Corona discharge
Corona discharge is the creation of ions in a fluid (such as air) by the presence of a strong
electric field. Electrons are torn from neutral air, and either the positive ions or else the
electrons are attracted to the conductor, while the charged particles drift. This effect can

cause considerable power loss, create audible and radio-frequency interference, generate
toxic compounds such as oxides of nitrogen and ozone, and bring forth arcing.
Both AC and DC transmission lines can generate coronas, in the former case in the form
of oscillating particles, in the latter a constant wind. Due to the space charge formed
around the conductors, an HVDC system may have about half the loss per unit length of a
high voltage AC system carrying the same amount of power. With monopolar
transmission the choice of polarity of the energised conductor leads to a degree of control
over the corona discharge. In particular, the polarity of the ions emitted can be controlled, which may have an environmental impact on particulate condensation (particles of different polarities have a different mean free path). Negative coronas generate
considerably more ozone than positive coronas, and generate it further downwind of the
power line, creating the potential for health effects. The use of a positive voltage will
reduce the ozone impacts of monopole HVDC power lines.
Applications
Overview
The controllability of current-flow through HVDC rectifiers and inverters, their
application in connecting unsynchronized networks, and their applications in efficient
submarine cables mean that HVDC cables are often used at national boundaries for the
exchange of power. Offshore windfarms also require undersea cables, and their turbines
are unsynchronized. In very long-distance connections between just two points, for
example around the remote communities of Siberia, Canada, and the Scandinavian North,
the decreased line-costs of HVDC also makes it the usual choice. Other applications have
been noted throughout this article.
AC network interconnections
AC transmission lines can only interconnect synchronized AC networks that oscillate at
the same frequency and in phase. Many areas that wish to share power have
unsynchronized networks. The power grids of the UK, Northern Europe and continental

Europe are not united into a single synchronized network. Japan has 50 Hz and 60 Hz
networks. Continental North America, while operating at 60 Hz throughout, is divided
into regions which are unsynchronised: East, West, Texas, Quebec, and Alaska. Brazil
and Paraguay, which share the enormous Itaipu hydroelectric plant, operate on 60 Hz and
50 Hz respectively. However, HVDC systems make it possible to interconnect
unsynchronized AC networks, and also add the possibility of controlling AC voltage and
reactive power flow.
A generator connected to a long AC transmission line may become unstable and fall out
of synchronization with a distant AC power system. An HVDC transmission link may
make it economically feasible to use remote generation sites. Wind farms located offshore may use HVDC systems to collect power from multiple unsynchronized generators
for transmission to the shore by an underwater cable.
In general, however, an HVDC power line will interconnect two AC regions of the
power-distribution grid. Machinery to convert between AC and DC power adds a
considerable cost in power transmission. The conversion from AC to DC is known as
rectification, and from DC to AC as inversion. Above a certain break-even distance (about 50 km for submarine cables, and perhaps 600-800 km for overhead cables), the lower cost of the HVDC electrical conductors outweighs the cost of the electronics.
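The break-even idea can be sketched as a simple cost model. The fixed converter cost and per-km line costs below are invented placeholders for illustration only, not real figures.

# Break-even distance sketch: HVDC pays a fixed converter-station
# cost but has a cheaper line per km than AC. Placeholder numbers.
converter_cost = 400.0    # fixed cost of the DC conversion stations (assumed)
ac_line_per_km = 1.0      # AC line cost per km (assumed)
dc_line_per_km = 0.5      # DC line cost per km (assumed)

# Break-even where converter_cost + dc*d == ac*d
break_even = converter_cost / (ac_line_per_km - dc_line_per_km)
print(f"break-even distance: {break_even:.0f} km")   # 800 km with these numbers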
The conversion electronics also present an opportunity to effectively manage the power
grid by means of controlling the magnitude and direction of power flow. An additional
advantage of the existence of HVDC links, therefore, is potential increased stability in the
transmission grid.
Renewable electricity superhighways
A number of studies have highlighted the potential benefits of very wide area super grids
based on HVDC since they can mitigate the effects of intermittency by averaging and
smoothing the outputs of large numbers of geographically dispersed wind farms or solar
farms.[18] Czisch's study concludes that a grid covering the fringes of Europe could bring
100% renewable power (70% wind, 30% biomass) at close to today's prices. There has

been debate over the technical feasibility of this proposal [19] and the political risks
involved in energy transmission across a large number of international borders.[20][21]
The construction of such green power superhighways is advocated in a white paper that
was released by the American Wind Energy Association and the Solar Energy Industries
Association.[22]
In January, the European Commission proposed €300 million to subsidize the development of HVDC links between Ireland, Britain, the Netherlands, Germany, Denmark, and Sweden, as part of a wider €1.2 billion package supporting links to offshore wind farms and cross-border interconnectors throughout Europe. Meanwhile the
recently founded Union of the Mediterranean has embraced a Mediterranean Solar Plan
to import large amounts of concentrating solar power into Europe from North Africa and
the Middle East.[23]
Smaller scale use

The development of insulated gate bipolar transistors (IGBT) and gate turn-off thyristors (GTO) has made smaller HVDC systems economical. These may be installed in existing AC grids for their role in stabilizing power flow without the additional short-circuit current that would be produced by an additional AC transmission line. The manufacturer ABB calls this concept "HVDC Light" and the manufacturer Siemens calls a similar concept "HVDC PLUS" (Power Link Universal System). They have extended the use of HVDC down to blocks as small as a few tens of megawatts and lines as short as a few score kilometres of overhead line. Both concepts are based on voltage-sourced converter (VSC) technology.
12) Describe antenna bandwidth, beamwidth and polarization?

Radiation pattern

The radiation pattern of an antenna is the geometric pattern of the relative field strengths
of the field emitted by the antenna. For the ideal isotropic antenna, this would be a
sphere. For a typical dipole, this would be a toroid. The radiation pattern of an antenna is
typically represented by a three dimensional graph, or polar plots of the horizontal and
vertical cross sections. The graph should show sidelobes and backlobes, where the antenna's gain is at a minimum or maximum.
See Antenna measurement: Radiation pattern or Radiation pattern for more information.
Impedance
As an electro-magnetic wave travels through the different parts of the antenna system
(radio, feed line, antenna, free space) it may encounter differences in impedance (E/H,
V/I, etc). At each interface, depending on the impedance match, some fraction of the
wave's energy will reflect back to the source[5], forming a standing wave in the feed line.
The ratio of maximum power to minimum power in the wave can be measured and is
called the standing wave ratio (SWR). A SWR of 1:1 is ideal. A SWR of 1.5:1 is
considered to be marginally acceptable in low power applications where power loss is
more critical, although an SWR as high as 6:1 may still be usable with the right
equipment. Minimizing impedance differences at each interface (impedance matching)
will reduce SWR and maximize power transfer through each part of the antenna system.
Complex impedance of an antenna is related to the electrical length of the antenna at the
wavelength in use. The impedance of an antenna can be matched to the feed line and
radio by adjusting the impedance of the feed line, using the feed line as an impedance
transformer. More commonly, the impedance is adjusted at the load (see below) with an
antenna tuner, a balun, a matching transformer, matching networks composed of
inductors and capacitors, or matching sections such as the gamma match.
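A small Python sketch of the reflection arithmetic behind SWR. The relations Γ = (ZL − Z0)/(ZL + Z0) and SWR = (1 + |Γ|)/(1 − |Γ|) are standard transmission-line theory, used here as background the text assumes rather than derives.

# Reflection coefficient and standing wave ratio at an impedance interface:
#   gamma = (Z_load - Z_line) / (Z_load + Z_line)
#   SWR   = (1 + |gamma|) / (1 - |gamma|)
Z_line = 50.0    # characteristic impedance of the feed line, ohms
Z_load = 73.0    # approximate half-wave dipole feed-point resistance, ohms

gamma = (Z_load - Z_line) / (Z_load + Z_line)
swr = (1 + abs(gamma)) / (1 - abs(gamma))
print(f"|gamma| = {abs(gamma):.3f}, SWR = {swr:.2f}:1")   # ~1.46:1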
Efficiency
Efficiency is the ratio of power actually radiated to the power put into the antenna
terminals. A dummy load may have an SWR of 1:1 but an efficiency of 0, as it absorbs all

power and radiates heat but not RF energy, showing that SWR alone is not an effective
measure of an antenna's efficiency. Radiation in an antenna is caused by radiation
resistance which can only be measured as part of total resistance including loss
resistance. Loss resistance usually results in heat generation rather than radiation, and
reduces efficiency. Mathematically, efficiency is calculated as radiation resistance divided
by total resistance.
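The last sentence translates directly into code; a minimal sketch with illustrative resistance values:

# Antenna efficiency = radiation resistance / total resistance,
# where total resistance = radiation resistance + loss resistance.
R_rad = 73.0     # radiation resistance, ohms (illustrative)
R_loss = 5.0     # loss resistance, ohms (illustrative)
efficiency = R_rad / (R_rad + R_loss)
print(f"efficiency = {efficiency:.1%}")   # ~93.6%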
Bandwidth
The bandwidth of an antenna is the range of frequencies over which it is effective,
usually centered on the resonant frequency. The bandwidth of an antenna may be
increased by several techniques, including using thicker wires, replacing wires with
cages to simulate a thicker wire, tapering antenna components (like in a feed horn), and
combining multiple antennas into a single assembly and allowing the natural impedance
to select the correct antenna. Small antennas are usually preferred for convenience, but
there is a fundamental limit relating bandwidth, size and efficiency.
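The fundamental limit alluded to here is commonly expressed through the Chu limit on the radiation Q of an electrically small antenna; the text does not name it, so the formula below is an assumption about which limit is meant.

import math

# Chu limit (one standard formulation): an antenna fitting inside a
# sphere of radius a has Q >= 1/(ka)^3 + 1/(ka), with k = 2*pi/lambda.
# Fractional bandwidth is roughly 1/Q for a matched, lossless antenna.
f = 100e6                        # frequency, Hz (illustrative)
a = 0.10                         # enclosing-sphere radius, m (illustrative)
k = 2 * math.pi * f / 3.0e8      # wavenumber, rad/m
ka = k * a
Q_min = 1 / ka**3 + 1 / ka
print(f"ka = {ka:.3f}, minimum Q = {Q_min:.1f}, "
      f"max fractional bandwidth ~ {1 / Q_min:.2%}")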
Polarization
The polarization of an antenna is the orientation of the electric field (E-plane) of the
radio wave with respect to the Earth's surface and is determined by the physical structure
of the antenna and by its orientation. It has nothing in common with antenna
directionality terms: "horizontal", "vertical" and "circular". Thus, a simple straight wire
antenna will have one polarization when mounted vertically, and a different polarization
when mounted horizontally. "Electromagnetic wave polarization filters" are structures
which can be employed to act directly on the electromagnetic wave to filter out wave
energy of an undesired polarization and to pass wave energy of a desired polarization.
Reflections generally affect polarization. For radio waves the most important reflector is
the ionosphere - signals which reflect from it will have their polarization changed
unpredictably. For signals which are reflected by the ionosphere, polarization cannot be
relied upon. For line-of-sight communications for which polarization can be relied upon,
it can make a large difference in signal quality to have the transmitter and receiver using

the same polarization; many tens of dB difference are commonly seen and this is more
than enough to make the difference between reasonable communication and a broken
link.
Polarization is largely predictable from antenna construction but, especially in directional
antennas, the polarization of side lobes can be quite different from that of the main
propagation lobe. For radio antennas, polarization corresponds to the orientation of the
radiating element in an antenna. A vertical omnidirectional WiFi antenna will have
vertical polarization (the most common type). An exception is a class of elongated
waveguide antennas in which vertically placed antennas are horizontally polarized. Many
commercial antennas are marked as to the polarization of their emitted signals.
Polarization is the sum of the E-plane orientations over time projected onto an imaginary
plane perpendicular to the direction of motion of the radio wave. In the most general
case, polarization is elliptical (the projection is oblong), meaning that the antenna varies
over time in the polarization of the radio waves it is emitting. Two special cases are linear
polarization (the ellipse collapses into a line) and circular polarization (in which the
ellipse varies maximally). In linear polarization the antenna compels the electric field of
the emitted radio wave to a particular orientation. Depending on the orientation of the
antenna mounting, the usual linear cases are horizontal and vertical polarization. In
circular polarization, the antenna continuously varies the electric field of the radio wave
through all possible values of its orientation with regard to the Earth's surface. Circular
polarizations, like elliptical ones, are classified as right-hand polarized or left-hand
polarized using a "thumb in the direction of the propagation" rule. Optical researchers use
the same rule of thumb, but pointing it in the direction of the emitter, not in the direction
of propagation, and so are opposite to radio engineers' use.
In practice, regardless of confusing terminology, it is important that linearly polarized
antennas be matched, lest the received signal strength be greatly reduced. So horizontal
should be used with horizontal and vertical with vertical. Intermediate matchings will
lose some signal strength, but not as much as a complete mismatch. Transmitters
mounted on vehicles with large motional freedom commonly use circularly polarized

antennas so that there will never be a complete mismatch with signals from other sources.
In the case of radar, this is often reflections from rain drops.
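The many-tens-of-dB penalty for mismatched linear polarizations can be illustrated with the standard polarization loss factor PLF = cos²ψ, where ψ is the angle between the two linear polarizations; this textbook formula is not derived in the text and is used here as an assumption.

import math

# Polarization loss factor for two linearly polarized antennas
# misaligned by angle psi: PLF = cos(psi)^2. 0 dB at 0 degrees;
# the loss grows without bound as psi approaches 90 degrees
# (real systems bottom out on cross-polarization leakage).
for psi_deg in (0, 30, 45, 60, 89):
    plf = math.cos(math.radians(psi_deg)) ** 2
    print(f"{psi_deg:3d} deg: {10 * math.log10(plf):6.1f} dB")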
12) Explain antenna transmission and reception?
Transmission and reception
All of the antenna parameters are expressed in terms of a transmission antenna, but are
identically applicable to a receiving antenna, due to reciprocity. Impedance, however, is
not applied in an obvious way; for impedance, the impedance at the load (where the
power is consumed) is most critical. For a transmitting antenna, this is the antenna itself.
For a receiving antenna, this is at the (radio) receiver rather than at the antenna. Tuning is
done by adjusting the length of an electrically long linear antenna to alter the electrical
resonance of the antenna.
Antenna tuning is done by adjusting an inductance or capacitance combined with the
active antenna (but distinct and separate from the active antenna). The inductance or
capacitance provides the reactance which combines with the inherent reactance of the
active antenna to establish a resonance in a circuit including the active antenna. The established resonance is at a frequency other than the natural electrical resonant frequency of the active antenna. Adjustment of the inductance or capacitance changes this resonance.
Antennas used for transmission have a maximum power rating, beyond which heating,
arcing or sparking may occur in the components, which may cause them to be damaged
or destroyed. Raising this maximum power rating usually requires larger and heavier
components, which may require larger and heavier supporting structures. This is a
concern only for transmitting antennas, as the power received by an antenna rarely
exceeds the microwatt range.
Antennas designed specifically for reception might be optimized for noise rejection
capabilities. An antenna shield is a conductive or low reluctance structure (such as a wire,
plate or grid) which is adapted to be placed in the vicinity of an antenna to reduce, as by

dissipation through a resistance or by conduction to ground, undesired electromagnetic


radiation, or electric or magnetic fields, which are directed toward the active antenna
from an external source or which emanate from the active antenna. Other methods to
optimize for noise rejection can be done by selecting a narrow bandwidth so that noise
from other frequencies is rejected, or selecting a specific radiation pattern to reject noise
from a specific direction, or by selecting a polarization different from the noise
polarization, or by selecting an antenna that favors either the electric or magnetic field.
For instance, an antenna to be used for reception of low frequencies (below about ten
megahertz) will be subject to both man-made noise from motors and other machinery,
and from natural sources such as lightning. Successfully rejecting these forms of noise is
an important antenna feature. A small coil of wire with many turns is more able to reject
such noise than a vertical antenna. However, the vertical will radiate much more
effectively on transmit, where extraneous signals are not a concern.
Basic antenna models

TV aerial antenna
There are many variations of antennas. Below are a few basic models. More can be found
in Category:Radio frequency antenna types.

- The isotropic radiator is a purely theoretical antenna that radiates equally in all directions. It is considered to be a point in space with no dimensions and no mass. This antenna cannot physically exist, but is useful as a theoretical model for comparison with all other antennas. Most antennas' gains are measured with reference to an isotropic radiator, and are rated in dBi (decibels with respect to an isotropic radiator).
- The dipole antenna is simply two wires pointed in opposite directions arranged either horizontally or vertically, with one end of each wire connected to the radio and the other end hanging free in space. Since this is the simplest practical antenna, it is also used as a reference model for other antennas; gain with respect to a dipole is labeled as dBd. Generally, the dipole is considered to be omnidirectional in the plane perpendicular to the axis of the antenna, but it has deep nulls in the directions of the axis. Variations of the dipole include the folded dipole, the half wave antenna, the ground plane antenna, the whip, and the J-pole.
- The Yagi-Uda antenna is a directional variation of the dipole with parasitic elements added which are functionally similar to adding a reflector and lenses (directors) to focus a filament light bulb.
- The random wire antenna is simply a very long (at least one quarter wavelength) wire with one end connected to the radio and the other in free space, arranged in any way most convenient for the space available. Folding will reduce effectiveness and make theoretical analysis extremely difficult. (The added length helps more than the folding typically hurts.) Typically, a random wire antenna will also require an antenna tuner, as it might have a random impedance that varies nonlinearly with frequency.
- The Horn is used where high gain is needed, the wavelength is short (microwave) and space is not an issue. Horns can be narrow band or wide band, depending on their shape. A horn can be built for any frequency, but horns for lower frequencies are typically impractical. Horns are also frequently used as reference antennas.
- The Patch antenna consists mainly of a square conductor mounted over a groundplane. Another example of a planar antenna is the Tapered Slot Antenna (TSA), such as the Vivaldi antenna.

13) Briefly explain the antenna radiated power?

The basic structure of matter involves charged particles bound together in many
different ways. When electromagnetic radiation is incident on matter, it causes the
charged particles to oscillate and gain energy. The ultimate fate of this energy depends on
the situation. It could be immediately re-radiated and appear as scattered, reflected, or
transmitted radiation. It may also get dissipated into other microscopic motions within the
matter, coming to thermal equilibrium and manifesting itself as thermal energy in the
material. With a few exceptions such as fluorescence, harmonic generation,
photochemical reactions and the photovoltaic effect, absorbed electromagnetic radiation
simply deposits its energy by heating the material. This happens both for infrared and
non-infrared radiation. Intense radio waves can thermally burn living tissue and can cook
food. In addition to infrared lasers, sufficiently intense visible and ultraviolet lasers can
also easily set paper afire. Ionizing electromagnetic radiation can create high-speed
electrons in a material and break chemical bonds, but after these electrons collide many
times with other atoms in the material, most of the energy is eventually downgraded to thermal energy; this whole process happens in a tiny fraction of a second. The idea that infrared radiation is a form of heat while other electromagnetic radiation is not is a widespread misconception in physics. Any electromagnetic radiation can heat a material when it is absorbed.
The inverse or time-reversed process of absorption is responsible for thermal radiation.
Much of the thermal energy in matter consists of random motion of charged particles, and
this energy can be radiated away from the matter. The resulting radiation may
subsequently be absorbed by another piece of matter, with the deposited energy heating
the material. Radiation is an important mechanism of heat transfer.
The electromagnetic radiation in an opaque cavity at thermal equilibrium is effectively a
form of thermal energy, having maximum radiation entropy. The thermodynamic
potentials of electromagnetic radiation can be well-defined as for matter. Thermal
radiation in a cavity has energy density (see Planck's law) of

u = (4σ/c) T⁴

where σ is the Stefan-Boltzmann constant, c the speed of light in vacuum and T the absolute temperature. Differentiating the above with respect to temperature, we may say that the electromagnetic radiation field has an effective volumetric heat capacity given by

c_v = du/dT = (16σ/c) T³
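A quick numerical check of these two expressions in Python; σ is the Stefan-Boltzmann constant.

# Energy density and effective volumetric heat capacity of cavity
# (blackbody) radiation: u = (4*sigma/c)*T^4, c_v = (16*sigma/c)*T^3.
sigma = 5.670e-8    # Stefan-Boltzmann constant, W m^-2 K^-4
c = 2.998e8         # speed of light, m/s
T = 300.0           # temperature, K

u = 4 * sigma / c * T**4     # J/m^3
cv = 16 * sigma / c * T**3   # J/(m^3 K)
print(f"u = {u:.2e} J/m^3, c_v = {cv:.2e} J/(m^3 K)")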

Electromagnetic spectrum
Main article: Electromagnetic spectrum

Electromagnetic spectrum with light highlighted

Legend:
Gamma rays
HX = Hard X-rays
SX = Soft X-rays
EUV = Extreme ultraviolet
NUV = Near ultraviolet
Visible light
NIR = Near infrared
MIR = Moderate infrared
FIR = Far infrared
Radio waves:
EHF = Extremely high frequency (microwaves)
SHF = Super high frequency (microwaves)
UHF = Ultra high frequency (microwaves)
VHF = Very high frequency
HF = High frequency
MF = Medium frequency
LF = Low frequency
VLF = Very low frequency
VF = Voice frequency
ELF = Extremely low frequency


Generally, EM radiation is classified by wavelength into electrical energy, radio,
microwave, infrared, the visible region we perceive as light, ultraviolet, X-rays and
gamma rays.
The behavior of EM radiation depends on its wavelength. Higher frequencies have
shorter wavelengths, and lower frequencies have longer wavelengths. When EM radiation
interacts with single atoms and molecules, its behavior depends on the amount of energy
per quantum it carries. Spectroscopy can detect a much wider region of the EM spectrum
than the visible range of 400 nm to 700 nm. A common laboratory spectroscope can
detect wavelengths from 2 nm to 2500 nm. Detailed information about the physical
properties of objects, gases, or even stars can be obtained from this type of device. It is
widely used in astrophysics. For example, hydrogen atoms emit radio waves of
wavelength 21.12 cm.
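Converting that wavelength to frequency with f = c/λ, for instance in Python:

# Frequency of the hydrogen line quoted above: f = c / wavelength.
c = 2.998e8           # speed of light, m/s
wavelength = 0.2112   # 21.12 cm in metres
f = c / wavelength
print(f"f = {f / 1e9:.3f} GHz")   # ~1.42 GHz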
Light
Main article: Light
EM radiation with a wavelength between approximately 400 nm and 700 nm is detected
by the human eye and perceived as visible light. Other wavelengths, especially nearby
infrared (longer than 700 nm) and ultraviolet (shorter than 400 nm) are also sometimes
referred to as light, especially when visibility to humans is not relevant.
If radiation having a frequency in the visible region of the EM spectrum reflects off of an
object, say, a bowl of fruit, and then strikes our eyes, this results in our visual perception
of the scene. Our brain's visual system processes the multitude of reflected frequencies

into different shades and hues, and through this not-entirely-understood psychophysical
phenomenon, most people perceive a bowl of fruit.
At most wavelengths, however, the information carried by electromagnetic radiation is
not directly detected by human senses. Natural sources produce EM radiation across the
spectrum, and our technology can also manipulate a broad range of wavelengths. Optical
fiber transmits light which, although not suitable for direct viewing, can carry data that
can be translated into sound or an image. The coding used in such data is similar to that
used with radio waves.
Radio waves
Main article: Radio waves
Radio waves can be made to carry information by varying a combination of the
amplitude, frequency and phase of the wave within a frequency band.
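As a tiny illustration of one of these variations, amplitude modulation, here is a minimal Python sketch; the carrier and message frequencies and the modulation index are arbitrary choices.

import math

# Amplitude modulation: the message varies the carrier's amplitude.
#   s(t) = (1 + m*cos(2*pi*fm*t)) * cos(2*pi*fc*t)
fc = 10_000.0   # carrier frequency, Hz (arbitrary)
fm = 500.0      # message frequency, Hz (arbitrary)
m = 0.5         # modulation index (arbitrary)
fs = 48_000.0   # sample rate, Hz

samples = [
    (1 + m * math.cos(2 * math.pi * fm * n / fs))
    * math.cos(2 * math.pi * fc * n / fs)
    for n in range(10)
]
print(samples)   # first ten samples of the AM waveform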
When EM radiation impinges upon a conductor, it couples to the conductor, travels along
it, and induces an electric current on the surface of that conductor by exciting the
electrons of the conducting material. This effect (the skin effect) is used in antennas. EM
radiation may also cause certain molecules to absorb energy and thus to heat up; this is
exploited in microwave ovens.

14)Explain the voltage and current distribution?

Long distance HVDC lines carrying hydropower from Canada's Nelson river to this
station where it is converted to AC for use in Winnipeg's local grid
A high-voltage, direct current (HVDC) electric power transmission system uses direct
current for the bulk transmission of electrical power, in contrast with the more common
alternating current systems. For long-distance distribution, HVDC systems are less
expensive and suffer lower electrical losses. For shorter distances, the higher cost of DC
conversion equipment compared to an AC system may be warranted where other benefits
of direct current links are useful.
The modern form of HVDC transmission uses technology developed extensively in the
1930s in Sweden at ASEA. Early commercial installations included one in the Soviet
Union in 1951 between Moscow and Kashira, and a 10-20 MW system in Gotland,
Sweden in 1954.[1] The longest HVDC link in the world is currently the Inga-Shaba
1,700 km (1,100 mi) 600 MW link connecting the Inga Dam to the Shaba copper mine, in
the Democratic Republic of Congo.

HVDC interconnections in western Europe - red are existing links, green are under
construction, and blue are proposed. Many of these transfer power from renewable
sources such as hydro and wind. For names, see also the annotated version.

High voltage transmission


High voltage is used for transmission to reduce the energy lost in the resistance of the
wires. For a given quantity of power transmitted, higher voltage reduces the transmission
power loss. Power in a circuit is proportional to the current, but the power lost as heat in
the wires is proportional to the square of the current. However, power is also proportional
to voltage, so for a given power level, higher voltage can be traded off for lower current.
Thus, the higher the voltage, the lower the power loss. Power loss can also be reduced by
reducing resistance, commonly achieved by increasing the diameter of the conductor; but
larger conductors are heavier and more expensive.
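The square-law loss argument can be made concrete in a few lines of Python; the line resistance and power figures are illustrative only.

# Transmitting the same power at a higher voltage lowers the current
# (I = P/V) and hence the resistive loss (P_loss = I^2 * R).
P = 100e6    # transmitted power, W (illustrative)
R = 10.0     # total line resistance, ohms (illustrative)

for V in (100e3, 400e3):    # compare two transmission voltages
    I = P / V               # line current, A
    loss = I**2 * R         # ohmic loss, W
    print(f"{V / 1e3:.0f} kV: I = {I:.0f} A, "
          f"loss = {loss / 1e6:.2f} MW ({loss / P:.1%})")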

High voltages cannot be easily used in lighting and motors, and so transmission-level
voltage must be reduced to values compatible with end-use equipment. The transformer,
which only works with alternating current, is an efficient way to change voltages. The
competition between the DC of Thomas Edison and the AC of Nikola Tesla and George
Westinghouse was known as the War of Currents, with AC emerging victorious. Practical
manipulation of DC voltages only became possible with the development of high power
electronic devices such as mercury arc valves and later semiconductor devices, such as
thyristors, insulated-gate bipolar transistors (IGBTs), high-power-capable MOSFETs
(power metal-oxide-semiconductor field-effect transistors) and gate turn-off thyristors
(GTOs).
History of HVDC transmission

HVDC in 1971: this 150 kV mercury arc valve converted AC hydropower voltage for
transmission to distant cities from Manitoba Hydro generators.
The first long-distance transmission of electric power was demonstrated using direct
current in 1882 at the Miesbach-Munich Power Transmission, but only 2.5 kW was
transmitted. An early method of high-voltage DC transmission was developed by the
Swiss engineer René Thury[2] and his method was put into practice by 1889 in Italy by the
Acquedotto de Ferrari-Galliera company. This system used series-connected motor-generator sets to increase voltage. Each set was insulated from ground and driven by
insulated shafts from a prime mover. The line was operated in constant current mode,
with up to 5,000 volts on each machine, some machines having double commutators to
reduce the voltage on each commutator. This system transmitted 630 kW at 14 kV DC
over a distance of 120 km.[3][4] The Moutiers-Lyon system transmitted 8,600 kW of
hydroelectric power a distance of 124 miles, including 6 miles of underground cable. The
system used eight series-connected generators with dual commutators for a total voltage
of 150,000 volts between the poles, and ran from about 1906 until 1936. Fifteen Thury
systems were in operation by 1913.[5] Other Thury systems operating at up to 100 kV DC
operated up to the 1930s, but the rotating machinery required high maintenance and had
high energy loss. Various other electromechanical devices were tested during the first half
of the 20th century with little commercial success.[6]
One conversion technique attempted for conversion of direct current from a high
transmission voltage to lower utilization voltage was to charge series-connected batteries,
then connect the batteries in parallel to serve distribution loads. [7] While at least two
commercial installations were tried around the turn of the 20th century, the technique was
not generally useful owing to the limited capacity of batteries, difficulties in switching
between series and parallel connections, and the inherent energy inefficiency of a battery
charge/discharge cycle.
The grid controlled mercury arc valve became available for power transmission during
the period 1920 to 1940. Starting in 1932, General Electric tested mercury-vapor valves
and a 12 kV DC transmission line, which also served to convert 40 Hz generation to serve
60 Hz loads, at Mechanicville, New York. In 1941, a 60 MW, ±200 kV, 115 km buried
cable link was designed for the city of Berlin using mercury arc valves (Elbe-Project), but
owing to the collapse of the German government in 1945 the project was never
completed. The nominal justification for the project was that, during wartime, a buried
cable would be less conspicuous as a bombing target. The equipment was moved to the
Soviet Union and was put into service there.

Introduction of the fully static mercury arc valve to commercial service in 1954 marked
the beginning of the modern era of HVDC transmission. An HVDC connection was
constructed by ASEA between the mainland of Sweden and the island of Gotland. Mercury
arc valves were common in systems designed up to 1975, but since then, HVDC systems
use only solid-state devices. From 1975 to 2000, line-commutated converters (LCC)
using thyristor valves were relied on. According to experts such as Vijay Sood, the next
25 years may well be dominated by force-commutated converters, beginning with
capacitor-commutated converters (CCC) followed by self-commutating converters, which
have largely supplanted LCC use. Since the use of semiconductor commutators, hundreds of
HVDC sea cables have been laid and have worked with high reliability, usually better than
96% of the time.
Advantages of HVDC over AC transmission
The advantage of HVDC is the ability to transmit large amounts of power over long
distances with lower capital costs and with lower losses than AC. Depending on voltage
level and construction details, losses are quoted as about 3% per 1,000 km. High-voltage
direct current transmission allows efficient use of energy sources remote from load
centers.
In a number of applications HVDC is more effective than AC transmission. Examples
include:

- Undersea cables, where high capacitance causes additional AC losses (e.g., the 250 km Baltic Cable between Sweden and Germany).
- Endpoint-to-endpoint long-haul bulk power transmission without intermediate 'taps', for example, in remote areas.
- Increasing the capacity of an existing power grid in situations where additional wires are difficult or expensive to install.
- Power transmission and stabilization between unsynchronised AC distribution systems.
- Connecting a remote generating plant to the distribution grid, for example the Nelson River Bipole.
- Stabilizing a predominantly AC power-grid without increasing prospective short circuit current.
- Reducing line cost: HVDC needs fewer conductors as there is no need to support multiple phases. Also, thinner conductors can be used since HVDC does not suffer from the skin effect.
- Facilitating power transmission between different countries that use AC at differing voltages and/or frequencies.
- Synchronizing AC produced by renewable energy sources.

Long undersea cables have a high capacitance. While this has minimal effect for DC
transmission, the current required to charge and discharge the capacitance of the cable
causes additional I²R power losses when the cable is carrying AC. In addition, AC power
is lost to dielectric losses.
HVDC can carry more power per conductor, because for a given power rating the
constant voltage in a DC line is lower than the peak voltage in an AC line. In AC power,
the root mean square (RMS) voltage measurement is considered the standard, but RMS is
only about 71% of the peak voltage. The peak voltage of AC determines the actual
insulation thickness and conductor spacing. Because DC operates at a constant maximum
voltage, existing transmission line corridors with equally sized conductors and insulation
can carry about 29% more power into an area of high power consumption than with AC,
which can lower costs.
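The peak-versus-RMS argument can be checked with a few lines of Python (the voltage is an assumed example; only the ratios matter):

import math

v_peak = 100e3                  # insulation is sized for this peak (assumed)
v_rms = v_peak / math.sqrt(2)   # RMS of a sine wave is ~71% of its peak

print(f"RMS is {100 * v_rms / v_peak:.0f} % of peak")                       # ~71 %
print(f"Headroom unused by AC: {100 * (v_peak - v_rms) / v_peak:.0f} %")    # ~29 %, the figure quoted above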
Because HVDC allows power transmission between unsynchronised AC distribution
systems, it can help increase system stability, by preventing cascading failures from
propagating from one part of a wider power transmission grid to another. Changes in load
that would cause portions of an AC network to become unsynchronized and separate
would not similarly affect a DC link, and the power flow through the DC link would tend
to stabilize the AC network. The magnitude and direction of power flow through a DC
link can be directly commanded, and changed as needed to support the AC networks at

either end of the DC link. This has caused many power system operators to contemplate
wider use of HVDC technology for its stability benefits alone.
Disadvantages
The disadvantages of HVDC are in conversion, switching and control. Further, operating
an HVDC scheme requires keeping many spare parts, which may be used exclusively in
one system, as HVDC systems are less standardized than AC systems and the technology
used changes fast.
The required static inverters are expensive and have limited overload capacity. At smaller
transmission distances the losses in the static inverters may be bigger than in an AC
transmission line. The cost of the inverters may not be offset by reductions in line
construction cost and lower line loss. With two exceptions, all former mercury rectifiers
worldwide have been dismantled or replaced by thyristor units.
In contrast to AC systems, realizing multiterminal systems is complex, as is expanding
existing schemes to multiterminal systems. Controlling power flow in a multiterminal DC
system requires good communication between all the terminals; power flow must be
actively regulated by the control system instead of by the inherent properties of the
transmission line. High voltage DC circuit breakers are difficult to build because some
mechanism must be included in the circuit breaker to force current to zero, otherwise
arcing and contact wear would be too great to allow reliable switching. Multi-terminal
lines are rare. One is in operation at the Hydro-Québec - New England transmission from
Radisson to Sandy Pond. Another example is the Sardinia-mainland Italy link, which was
modified in 1989 to also provide power to the island of Corsica.
Costs of high voltage DC transmission
Normally manufacturers such as AREVA, Siemens and ABB do not state specific cost
information of a particular project since this is a commercial matter between the
manufacturer and the client.

Costs vary widely depending on the specifics of the project such as power rating, circuit
length, overhead vs. underwater route, land costs, and AC network improvements
required at either terminal. A detailed evaluation of DC vs. AC cost may be required
where there is no clear technical advantage to DC alone and only economics drives the
selection.
However, some practitioners have given out information that can be reasonably well
relied upon:
For an 8 GW 40 km link laid under the English Channel, the following are approximate
primary equipment costs for a 2000 MW 500 kV bipolar conventional HVDC link
(excluding way-leaving, on-shore reinforcement works, consenting, engineering, insurance,
etc.):

- Converter stations: ~£110M
- Subsea cable + installation: ~£1M/km

So for an 8 GW capacity between England and France in four links, little is left over from
£750M for the installed works. Add another £200-300M for the other works, depending
on additional onshore works required.
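A rough check of that arithmetic (a sketch using only the approximate figures quoted above):

links = 4                 # four 2000 MW bipolar links for 8 GW
converter_cost = 110.0    # ~£110M of converter stations per link
cable_cost_per_km = 1.0   # ~£1M per km of installed subsea cable
length_km = 40

total = links * (converter_cost + cable_cost_per_km * length_km)
print(f"Primary equipment: ~£{total:.0f}M of the ~£750M budget")   # ~£600M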
Rectifying and inverting

Two of three thyristor valve stacks used for long distance transmission of power from
Manitoba Hydro dams
Early static systems used mercury arc rectifiers, which were unreliable. Two HVDC
systems using mercury arc rectifiers are still in service (As of 2008). The thyristor valve
was first used in HVDC systems in the 1960s. The thyristor is a solid-state semiconductor
device similar to the diode, but with an extra control terminal that is used to switch the
device on at a particular instant during the AC cycle. The insulated-gate bipolar transistor
(IGBT) is now also used and offers simpler control and reduced valve cost.
Because the voltages in HVDC systems, up to 800 kV in some cases, exceed the
breakdown voltages of the semiconductor devices, HVDC converters are built using large
numbers of semiconductors in series.
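As a back-of-the-envelope sketch (the per-device blocking voltage and margin below are assumptions, not figures from the text), the series count can be estimated as:

import math

system_kv = 800.0   # voltage of a large HVDC scheme, as quoted above
device_kv = 8.5     # blocking voltage of one thyristor level (assumed)
margin = 1.5        # redundancy / derating factor (assumed)

levels = math.ceil(system_kv / device_kv * margin)
print(f"Series thyristor levels needed: ~{levels}")   # on the order of 100+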
The low-voltage control circuits used to switch the thyristors on and off need to be
isolated from the high voltages present on the transmission lines. This is usually done
optically. In a hybrid control system, the low-voltage control electronics sends light
pulses along optical fibres to the high-side control electronics. Another system, called
direct light triggering, dispenses with the high-side electronics, instead using light pulses
from the control electronics to switch light-triggered thyristors (LTTs).

A complete switching element is commonly referred to as a valve, irrespective of its
construction.
15) Describe the antenna effect?
Antennas are typically used in an environment where other objects are present that
may have an effect on their performance. Height above ground has a very significant
effect on the radiation pattern of some antenna types.
At frequencies used in antennas, the ground behaves mainly as a dielectric. The
conductivity of ground at these frequencies is negligible. When an electromagnetic wave
arrives at the surface of an object, two waves are created: one enters the dielectric and the
other is reflected. If the object is a conductor, the transmitted wave is negligible and the
reflected wave has almost the same amplitude as the incident one. When the object is a
dielectric, the fraction reflected depends (among other things) on the angle of incidence.
When the angle of incidence is small (that is, the wave arrives almost perpendicularly)
most of the energy traverses the surface and very little is reflected. When the angle of
incidence is near 90° (grazing incidence) almost all the wave is reflected.
Most of the electromagnetic waves emitted by an antenna to the ground below the
antenna at moderate (say < 60°) angles of incidence enter the earth and are absorbed
(lost). But waves emitted to the ground at grazing angles, far from the antenna, are almost
totally reflected. At grazing angles, the ground behaves as a mirror. The quality of the
reflection depends on the nature of the surface: when the irregularities of the surface are
smaller than the wavelength, reflection is good.
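The polarization and angle dependence of the ground reflection can be sketched with the standard flat-earth Fresnel reflection coefficients for a lossless dielectric ground (the relative permittivity below is an assumed typical value):

import numpy as np

er = 15.0   # relative permittivity of average ground (assumed)
# 'grazing' is the angle above the ground, i.e. 90 deg minus the
# angle of incidence used in the text.
grazing = np.radians([0.0, 10.0, 30.0, 60.0, 90.0])

s = np.sin(grazing)
c = np.sqrt(er - np.cos(grazing) ** 2)
rho_h = (s - c) / (s + c)             # horizontal (perpendicular) polarization
rho_v = (er * s - c) / (er * s + c)   # vertical (parallel) polarization

for g, rh, rv in zip(np.degrees(grazing), rho_h, rho_v):
    print(f"grazing {g:4.0f} deg: |rho_h| = {abs(rh):.2f}, |rho_v| = {abs(rv):.2f}")

At zero grazing angle both magnitudes go to 1 (total reflection, the mirror regime described above), while near normal incidence only a fraction of the wave is reflected.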

The wave reflected by earth can be considered as emitted by the image antenna
This means that the receiver "sees" the real antenna and, under the ground, the image of
the antenna reflected by the ground. If the ground has irregularities, the image will appear
fuzzy.
If the receiver is placed at some height above the ground, waves reflected by the ground
will travel a slightly longer distance to arrive at the receiver than direct waves. The
distance will be the same only if the receiver is close to the ground. In the drawing at
right, the angle is drawn far bigger than in reality. The distance between the antenna and
its image is d = 2h, twice the height of the center of the antenna.
The situation is a bit more complex because the reflection of electromagnetic waves
depends on the polarization of the incident wave. As the refractive index of the ground is
bigger than the refractive index of air (very nearly 1), the direction of the component of
the electric field parallel to the ground inverts at the reflection. This is equivalent to a
phase shift of π radians (180°). The vertical component of the electric field reflects
without changing direction. This sign inversion of the parallel component and the
non-inversion of the perpendicular component would also happen if the ground were a
good electrical conductor.

The vertical component of the current reflects without changing sign; the horizontal component reverses sign at reflection.


This means that a receiving antenna "sees" the image antenna with the current in the same
direction if the antenna is vertical or with the current inverted if the antenna is horizontal.
For a vertically polarized emitting antenna, the far electric field of the electromagnetic
wave produced by the direct ray plus the reflected ray is:

E_v ∝ cos((k·d/2)·sin θ)

The sign inversion for the parallel field case just changes the cosine to a sine:

E_h ∝ sin((k·d/2)·sin θ)

where k = 2π/λ is the wavenumber, θ is the elevation angle, and d is the distance between
the antenna and its image (twice the height of the center of the antenna).
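A numerical sketch of these two patterns (height and wavelength are assumed example values):

import numpy as np

wavelength = 1.0
h = 2.0 * wavelength    # height of the antenna center (assumed)
d = 2.0 * h             # distance between antenna and image
k = 2.0 * np.pi / wavelength
theta = np.radians(np.arange(0, 91, 15))   # elevation above the horizon

vert = np.abs(np.cos(k * d * np.sin(theta) / 2.0))    # maximum at theta = 0
horiz = np.abs(np.sin(k * d * np.sin(theta) / 2.0))   # zero at theta = 0

for t, v, hz in zip(np.degrees(theta), vert, horiz):
    print(f"elevation {t:3.0f} deg: vertical {v:.2f}, horizontal {hz:.2f}")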

Radiation patterns of antennas and their images reflected by the ground. At left the
polarization is vertical, and there is always a maximum for θ = 0. If the polarization is
horizontal, as at right, there is always a zero for θ = 0.
For emitting and receiving antenna situated near the ground (in a building or on a mast)
far from each other, distances traveled by direct and reflected rays are nearly the same.
There is no induced phase shift. If the emission is polarized vertically the two fields
(direct and reflected) add and there is a maximum of received signal. If the emission is
polarized horizontally, the two signals subtract and the received signal is a minimum. This
is depicted in the image at right. In the case of vertical polarization, there is always a
maximum at earth level (left pattern). For horizontal polarization, there is always a
minimum at earth level. Note that in these drawings the ground is considered as a perfect
mirror, even for low angles of incidence. In these drawings the distance between the
antenna and its image is just a few wavelengths. For greater distances, the number of
lobes increases.
Note that the situation is different and more complex if reflections in the ionosphere
occur. This happens over very long distances (thousands of kilometers). There is not a
direct ray but several reflected rays that add with different phase shifts.
This is the reason why almost all public address radio emissions have vertical
polarization. As public users are near ground, horizontal polarized emissions would be
poorly received. Observe household and automobile radio receivers. They all have

vertical antennas or horizontal ferrite antennas for vertically polarized emissions. In cases
where the receiving antenna must work in any position, as in mobile phones, the emitters
and receivers in base stations use circularly polarized electromagnetic waves.
Classical (analog) television emissions are an exception. They are almost always
horizontally polarized, because the presence of buildings makes it unlikely that a good
emitter antenna image will appear. However, these same buildings reflect the
electromagnetic waves and can create ghost images. Using horizontal polarization,
reflections are attenuated because of the low reflection of electromagnetic waves whose
magnetic field is parallel to the dielectric surface near Brewster's angle. Vertically
polarized analog television has been used in some rural areas. In digital terrestrial
television reflections are less annoying because of the type of modulation.
Mutual impedance and interaction between antennas

Mutual impedance between parallel dipoles not staggered. Curves Re and Im are the
resistive and reactive parts of the impedance.
Current circulating in any antenna induces currents in all others. One can postulate a
mutual impedance between two antennas that has the same significance as the mutual
inductance in ordinary coupled inductors. The mutual impedance between two antennas
is defined as:

Z_21 = V_2 / I_1

where I_1 is the current flowing in antenna 1 and V_2 is the voltage that would have to be
applied to antenna 2 (with antenna 1 removed) to produce the current in antenna 2 that
was produced by antenna 1.
From this definition, the currents and voltages applied in a set of coupled antennas are:

V_1 = Z_11·I_1 + Z_12·I_2 + ... + Z_1n·I_n
V_2 = Z_21·I_1 + Z_22·I_2 + ... + Z_2n·I_n
...
V_n = Z_n1·I_1 + Z_n2·I_2 + ... + Z_nn·I_n

where:

V_i is the voltage applied to antenna i
Z_ii is the impedance of antenna i
Z_ij is the mutual impedance between antennas i and j

Note that, as is the case for mutual inductances, Z_ij = Z_ji.

If some of the elements are not fed (there is a short circuit instead of a feeder cable), as is
the case in television antennas (Yagi-Uda antennas), the corresponding V_i are zero. Those
elements are called parasitic elements. Parasitic elements are unpowered elements that
either reflect or absorb and reradiate RF energy.
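These relations are easy to exercise numerically. The sketch below (with assumed, illustrative impedance values) solves V = Z·I for a driven dipole next to a shorted parasitic element:

import numpy as np

# Self and mutual impedances in ohms (assumed example values).
Z = np.array([[73 + 42j, 40 - 28j],
              [40 - 28j, 73 + 42j]])   # symmetric: Z12 = Z21
V = np.array([1.0, 0.0])               # element 2 is parasitic: V2 = 0

I = np.linalg.solve(Z, V)
print("I1 =", I[0])
print("I2 =", I[1], "(current induced in the parasitic element)")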
In some geometrical settings, the mutual impedance between antennas can be zero. This
is the case for crossed dipoles used in circular polarization antennas.
Antenna gallery

Antennas and antenna arrays (image captions):
A multi-band rotary directional antenna for amateur radio use.
Rooftop TV antenna: actually three Yagi antennas; the longest elements are for the low band, while the medium and short elements are for the high and UHF bands.
A Yagi-Uda beam antenna.
A terrestrial microwave antenna array.
Examples of US 136-174 MHz base station antennas.
Low cost LF time signal receiver antenna (left) and receiver (right).
Rotatable log-periodic array for VHF and UHF.
Shortwave antenna in Delano, California.

Antennas and supporting structures (image captions):
A building rooftop supporting numerous dish and sectored mobile telecommunications antennas (Doncaster, Victoria, Australia).
Telecommunications tower with broadcasting antennas in Palmerston, Northern Territory.
A three-sector telephone site in Mexico City.
A telephone site concealed as a palm tree.

Diagrams as part of a system (image captions):
Antennas may be connected through a multiplexing arrangement in some applications, like this trunked two-way radio example.
Antenna network for an emergency medical services base station.
Smart antenna.


16) Explain electromagnetic radiation?


Electromagnetic radiation
Electromagnetic radiation (sometimes abbreviated EMR and often simply called light)
is a ubiquitous phenomenon that takes the form of self-propagating waves in a vacuum or
in matter. It consists of electric and magnetic field components which oscillate in phase
perpendicular to each other and perpendicular to the direction of energy propagation.
Electromagnetic radiation is classified into several types according to the frequency of its
wave; these types include (in order of increasing frequency and decreasing wavelength):
radio waves, microwaves, terahertz radiation, infrared radiation, visible light, ultraviolet
radiation, X-rays and gamma rays. A small and somewhat variable window of frequencies
is sensed by the eyes of various organisms; this is what we call the visible spectrum, or
light.
EM radiation carries energy and momentum that may be imparted to matter with which it
interacts.

Theory

Shows three electromagnetic modes (blue, green and red) with a distance scale in
micrometres along the x-axis.
Electromagnetic waves were first postulated by James Clerk Maxwell and subsequently
confirmed by Heinrich Hertz. Maxwell derived a wave form of the electric and magnetic
equations, revealing the wave-like nature of electric and magnetic fields, and their
symmetry. Because the speed of EM waves predicted by the wave equation coincided
with the measured speed of light, Maxwell concluded that light itself is an EM wave.
According to Maxwell's equations, a time-varying electric field generates a magnetic
field and vice versa. Therefore, as an oscillating electric field generates an oscillating
magnetic field, the magnetic field in turn generates an oscillating electric field, and so on.
These oscillating fields together form an electromagnetic wave.
A quantum theory of the interaction between electromagnetic radiation and matter such as
electrons is described by the theory of quantum electrodynamics.
Properties

Electromagnetic waves can be imagined as a self-propagating transverse oscillating wave
of electric and magnetic fields. This diagram shows a plane linearly polarized wave
propagating from right to left. The electric field is in a vertical plane, the magnetic field
in a horizontal plane.

The physics of electromagnetic radiation is electrodynamics, a subfield of
electromagnetism. Electric and magnetic fields obey the properties of superposition so
that a field due to any particular particle or time-varying electric or magnetic field will
contribute to the fields present in the same space due to other causes: as they are vector
fields, all magnetic and electric field vectors add together according to vector addition.
For instance, a travelling EM wave incident on an atomic structure induces oscillation in
the atoms of that structure, thereby causing them to emit their own EM waves, emissions
which alter the impinging wave through interference. These properties cause various
phenomena including refraction and diffraction.
Since light is an oscillation it is not affected by travelling through static electric or
magnetic fields in a linear medium such as a vacuum. However, in nonlinear media such
as some crystals, interactions can occur between light and static electric and magnetic
fields; these interactions include the Faraday effect and the Kerr effect.
In refraction, a wave crossing from one medium to another of different density alters its
speed and direction upon entering the new medium. The ratio of the refractive indices of
the media determines the degree of refraction, and is summarized by Snell's law. Light
disperses into a visible spectrum as light is shone through a prism because of the
wavelength dependent refractive index of the prism material (Dispersion).
EM radiation exhibits both wave properties and particle properties at the same time (see
wave-particle duality). Both wave and particle characteristics have been confirmed in a
large number of experiments. Wave characteristics are more apparent when EM radiation
is measured over relatively large timescales and over large distances while particle
characteristics are more evident when measuring small timescales and distances. For
example, when electromagnetic radiation is absorbed by matter, particle-like properties

will be more obvious when the average number of photons in a cube of the relevant
wavelength is much smaller than 1. Upon absorption the quantum nature of the light
leads to clearly non-uniform deposition of energy.
There are experiments in which the wave and particle natures of electromagnetic waves
appear in the same experiment, such as the diffraction of a single photon. When a single
photon is sent through two slits, it passes through both of them interfering with itself, as
waves do, yet is detected by a photomultiplier or other sensitive detector only once.
Similar self-interference is observed when a single photon is sent into a Michelson
interferometer or other interferometers.
Wave model

White light being separated into its components.


An important aspect of the nature of light is frequency. The frequency of a wave is its
rate of oscillation and is measured in hertz, the SI unit of frequency, where one hertz is
equal to one oscillation per second. Light usually has a spectrum of frequencies which
sum together to form the resultant wave. Different frequencies undergo different angles
of refraction.
A wave consists of successive troughs and crests, and the distance between two adjacent
crests or troughs is called the wavelength. Waves of the electromagnetic spectrum vary in
size, from very long radio waves the size of buildings to very short gamma rays smaller
than atomic nuclei. Frequency is inversely proportional to wavelength, according to the
equation:

v = f·λ

where v is the speed of the wave (c in a vacuum, or less in other media), f is the frequency
and λ is the wavelength. As waves cross boundaries between different media, their speeds
change but their frequencies remain constant.
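A one-line check of this relation in Python (using the vacuum speed of light):

c = 299_792_458.0   # speed of light in a vacuum, m/s

def wavelength_m(frequency_hz, speed=c):
    return speed / frequency_hz    # lambda = v / f

print(wavelength_m(100e6))    # 100 MHz FM broadcast -> ~3 m
print(wavelength_m(1.42e9))   # hydrogen line -> ~0.21 m, the 21 cm line mentioned below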
Interference is the superposition of two or more waves resulting in a new wave pattern. If
the fields have components in the same direction, they constructively interfere, while
opposite directions cause destructive interference.
The energy in electromagnetic waves is sometimes called radiant energy.
Particle model
Because energy of an EM wave is quantized, in the particle model of EM radiation, a
wave consists of discrete packets of energy, or quanta, called photons. The frequency of
the wave is proportional to the magnitude of the particle's energy. Moreover, because
photons are emitted and absorbed by charged particles, they act as transporters of energy.
The energy per photon can be calculated from the Planck-Einstein equation:[1]

E = h·f

where E is the energy, h is Planck's constant, and f is frequency. This photon-energy
expression is a particular case of the energy levels of the more general electromagnetic
oscillator whose average energy, which is used to obtain Planck's radiation law, can be
shown to differ sharply from that predicted by the equipartition principle at low
temperature, thereby establishing a failure of equipartition due to quantum effects at low
temperature.[2]
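A short sketch of the Planck-Einstein relation for a few representative frequencies:

h = 6.626e-34   # Planck's constant, J*s

for name, f in [("FM radio, 100 MHz", 100e6),
                ("green light, 545 THz", 545e12),
                ("X-ray, 1e18 Hz", 1e18)]:
    print(f"{name}: E = h*f = {h * f:.3e} J")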
As a photon is absorbed by an atom, it excites an electron, elevating it to a higher energy
level. If the energy is great enough, so that the electron jumps to a high enough energy
level, it may escape the positive pull of the nucleus and be liberated from the atom in a
process called photoionisation. Conversely, an electron that descends to a lower energy
level in an atom emits a photon of light equal to the energy difference. Since the energy

levels of electrons in atoms are discrete, each element emits and absorbs its own
characteristic frequencies.
Together, these effects explain the absorption spectra of light. The dark bands in the
spectrum are due to the atoms in the intervening medium absorbing different frequencies
of the light. The composition of the medium through which the light travels determines
the nature of the absorption spectrum. For instance, dark bands in the light emitted by a
distant star are due to the atoms in the star's atmosphere. These bands correspond to the
allowed energy levels in the atoms. A similar phenomenon occurs for emission. As the
electrons descend to lower energy levels, a spectrum is emitted that represents the jumps
between the energy levels of the electrons. This is manifested in the emission spectrum of
nebulae. Today, scientists use this phenomenon to observe what elements a certain star is
composed of. It is also used in the determination of the distance of a star, using the red
shift.
Speed of propagation
Main article: Speed of light
Any electric charge which accelerates, or any changing magnetic field, produces
electromagnetic radiation. Electromagnetic information about the charge travels at the
speed of light. Accurate treatment thus incorporates a concept known as retarded time (as
opposed to advanced time, which is unphysical in light of causality), which adds to the
expressions for the electrodynamic electric field and magnetic field. These extra terms are
responsible for electromagnetic radiation. When any wire (or other conducting object
such as an antenna) conducts alternating current, electromagnetic radiation is propagated
at the same frequency as the electric current. At the quantum level, electromagnetic
radiation is produced when the wavepacket of a charged particle oscillates or otherwise
accelerates. Charged particles in a stationary state do not move, but a superposition of
such states may result in oscillation, which is responsible for the phenomenon of radiative
transition between quantum states of a charged particle.

Depending on the circumstances, electromagnetic radiation may behave as a wave or as
particles. As a wave, it is characterized by a velocity (the speed of light), wavelength, and
frequency. When considered as particles, they are known as photons, and each has an
energy related to the frequency of the wave given by Planck's relation E = hν, where E is
the energy of the photon, h = 6.626 × 10^-34 J·s is Planck's constant, and ν is the frequency
of the wave.
One rule is always obeyed regardless of the circumstances: EM radiation in a vacuum
always travels at the speed of light, relative to the observer, regardless of the observer's
velocity. (This observation led to Albert Einstein's development of the theory of special
relativity.)
In a medium (other than vacuum), velocity factor or refractive index are considered,
depending on frequency and application. Both of these are ratios of the speed in a
medium to speed in a vacuum.
Thermal radiation and electromagnetic radiation as a form of heat
Main article: Thermal radiation
The basic structure of matter involves charged particles bound together in many different
ways. When electromagnetic radiation is incident on matter, it causes the charged
particles to oscillate and gain energy. The ultimate fate of this energy depends on the
situation. It could be immediately re-radiated and appear as scattered, reflected, or
transmitted radiation. It may also get dissipated into other microscopic motions within the
matter, coming to thermal equilibrium and manifesting itself as thermal energy in the
material. With a few exceptions such as fluorescence, harmonic generation,
photochemical reactions and the photovoltaic effect, absorbed electromagnetic radiation
simply deposits its energy by heating the material. This happens both for infrared and
non-infrared radiation. Intense radio waves can thermally burn living tissue and can cook
food. In addition to infrared lasers, sufficiently intense visible and ultraviolet lasers can
also easily set paper afire. Ionizing electromagnetic radiation can create high-speed
electrons in a material and break chemical bonds, but after these electrons collide many

times with other atoms in the material, eventually most of the energy gets downgraded to
thermal energy; this whole process happens in a tiny fraction of a second. That infrared
radiation is a form of heat and other electromagnetic radiation is not, is a widespread
misconception in physics. Any electromagnetic radiation can heat a material when it is
absorbed.
The inverse or time-reversed process of absorption is responsible for thermal radiation.
Much of the thermal energy in matter consists of random motion of charged particles, and
this energy can be radiated away from the matter. The resulting radiation may
subsequently be absorbed by another piece of matter, with the deposited energy heating
the material. Radiation is an important mechanism of heat transfer.
The electromagnetic radiation in an opaque cavity at thermal equilibrium is effectively a
form of thermal energy, having maximum radiation entropy. The thermodynamic
potentials of electromagnetic radiation can be well-defined as for matter. Thermal
radiation in a cavity has energy density (see Planck's Law) of

u = 4σT⁴ / c

Differentiating the above with respect to temperature, we may say that the
electromagnetic radiation field has an effective volumetric heat capacity given by

du/dT = 16σT³ / c

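Plugging numbers into these two formulas (a sketch using standard constants; the temperature is an assumed example):

sigma = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
c = 2.998e8        # speed of light, m/s

def energy_density(T):
    return 4.0 * sigma * T ** 4 / c    # u = 4*sigma*T^4 / c

def heat_capacity(T):
    return 16.0 * sigma * T ** 3 / c   # du/dT = 16*sigma*T^3 / c

T = 300.0   # room temperature, K (assumed)
print(f"u({T:.0f} K) = {energy_density(T):.3e} J/m^3")
print(f"du/dT({T:.0f} K) = {heat_capacity(T):.3e} J/(m^3 K)")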
Electromagnetic spectrum
Main article: Electromagnetic spectrum

Electromagnetic spectrum with light highlighted

Legend:

γ = Gamma rays
HX = Hard X-rays
SX = Soft X-rays
EUV = Extreme ultraviolet
NUV = Near ultraviolet
Visible light
NIR = Near infrared
MIR = Moderate infrared
FIR = Far infrared
Radio waves:
EHF = Extremely high frequency (microwaves)
SHF = Super high frequency (microwaves)
UHF = Ultra high frequency (microwaves)
VHF = Very high frequency
HF = High frequency
MF = Medium frequency
LF = Low frequency
VLF = Very low frequency
VF = Voice frequency
ELF = Extremely low frequency


Generally, EM radiation is classified by wavelength into electrical energy, radio,
microwave, infrared, the visible region we perceive as light, ultraviolet, X-rays and
gamma rays.
The behavior of EM radiation depends on its wavelength. Higher frequencies have
shorter wavelengths, and lower frequencies have longer wavelengths. When EM radiation
interacts with single atoms and molecules, its behavior depends on the amount of energy
per quantum it carries. Spectroscopy can detect a much wider region of the EM spectrum
than the visible range of 400 nm to 700 nm. A common laboratory spectroscope can
detect wavelengths from 2 nm to 2500 nm. Detailed information about the physical
properties of objects, gases, or even stars can be obtained from this type of device. It is
widely used in astrophysics. For example, hydrogen atoms emit radio waves of
wavelength 21.12 cm.

Light
Main article: Light
EM radiation with a wavelength between approximately 400 nm and 700 nm is detected
by the human eye and perceived as visible light. Other wavelengths, especially nearby
infrared (longer than 700 nm) and ultraviolet (shorter than 400 nm) are also sometimes
referred to as light, especially when visibility to humans is not relevant.
If radiation having a frequency in the visible region of the EM spectrum reflects off of an
object, say, a bowl of fruit, and then strikes our eyes, this results in our visual perception
of the scene. Our brain's visual system processes the multitude of reflected frequencies
into different shades and hues, and through this not-entirely-understood psychophysical
phenomenon, most people perceive a bowl of fruit.
At most wavelengths, however, the information carried by electromagnetic radiation is
not directly detected by human senses. Natural sources produce EM radiation across the
spectrum, and our technology can also manipulate a broad range of wavelengths. Optical
fiber transmits light which, although not suitable for direct viewing, can carry data that
can be translated into sound or an image. The coding used in such data is similar to that
used with radio waves.
Radio waves
Main article: Radio waves
Radio waves can be made to carry information by varying a combination of the
amplitude, frequency and phase of the wave within a frequency band.
When EM radiation impinges upon a conductor, it couples to the conductor, travels along
it, and induces an electric current on the surface of that conductor by exciting the
electrons of the conducting material. This effect (the skin effect) is used in antennas. EM
radiation may also cause certain molecules to absorb energy and thus to heat up; this is
exploited in microwave ovens.

Derivation
Electromagnetic waves as a general phenomenon were predicted by the classical laws of
electricity and magnetism, known as Maxwell's equations. If you inspect Maxwell's
equations without sources (charges or currents) then you will find that, along with the
possibility of nothing happening, the theory will also admit nontrivial solutions of
changing electric and magnetic fields. Beginning with Maxwell's equations for free
space:

∇ · E = 0   (1)
∇ × E = -∂B/∂t   (2)
∇ · B = 0   (3)
∇ × B = μ0ε0 ∂E/∂t   (4)

where ∇ is a vector differential operator (see Del).

One solution, E = B = 0, is trivial.
To see the more interesting one, we utilize vector identities, which work for any vector, as
follows:

∇ × (∇ × A) = ∇(∇ · A) - ∇²A   (5)

To see how we can use this, take the curl of equation (2):

∇ × (∇ × E) = ∇ × (-∂B/∂t)

Evaluating the left hand side:

∇ × (∇ × E) = ∇(∇ · E) - ∇²E = -∇²E   (6)

where we simplified the above by using equation (1). Evaluating the right hand side:

∇ × (-∂B/∂t) = -∂(∇ × B)/∂t = -μ0ε0 ∂²E/∂t²   (7)

Equations (6) and (7) are equal, so this results in a vector-valued differential equation for
the electric field, namely

∇²E = μ0ε0 ∂²E/∂t²

Applying a similar pattern results in a similar differential equation for the magnetic field:

∇²B = μ0ε0 ∂²B/∂t²

These differential equations are equivalent to the wave equation:

∇²f = (1/c0²) ∂²f/∂t²

where c0 is the speed of the wave in free space and f describes a displacement.
Or more simply:

□f = 0

where □ is the d'Alembertian:

□ = ∇² - (1/c0²) ∂²/∂t²

Notice that in the case of the electric and magnetic fields, the speed is:

c0 = 1/√(μ0ε0)

which, as it turns out, is the speed of light in free space. Maxwell's equations have
unified the permittivity of free space ε0, the permeability of free space μ0, and the speed
of light itself, c0. Before this derivation it was not known that there was such a strong
relationship between light and electricity and magnetism.
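This unification is easy to verify numerically:

import math

eps0 = 8.8541878128e-12   # permittivity of free space, F/m
mu0 = 4e-7 * math.pi      # permeability of free space, H/m

c0 = 1.0 / math.sqrt(mu0 * eps0)
print(f"c0 = {c0:.0f} m/s")   # ~2.998e8 m/s, the speed of light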
But these are only two equations and we started with four, so there is still more
information pertaining to these waves hidden within Maxwell's equations. Let's consider
a generic vector wave for the electric field:

E = E0 f(k · x - c0t)

Here E0 is the constant amplitude, f is any second differentiable function, k is a unit
vector in the direction of propagation, and x is a position vector. We observe that
f(k · x - c0t) is a generic solution to the wave equation. In other words

∇²f(k · x - c0t) = (1/c0²) ∂²f(k · x - c0t)/∂t²

for a generic wave traveling in the k direction.

This form will satisfy the wave equation, but will it satisfy all of Maxwell's equations,
and with what corresponding magnetic field?

The first of Maxwell's equations implies that the electric field is orthogonal to the
direction the wave propagates:

k · E0 = 0

The second of Maxwell's equations yields the magnetic field:

B = (1/c0) k × E

The remaining equations will be satisfied by this choice of E and B.

Not only are the electric and magnetic field waves traveling at the speed of light, but they
have a special restricted orientation and proportional magnitudes, E0 = c0B0, which can be
seen immediately from the Poynting vector. The electric field, magnetic field, and
direction of wave propagation are all orthogonal, and the wave propagates in the same
direction as E × B.

From the viewpoint of an electromagnetic wave traveling forward, the electric field might
be oscillating up and down, while the magnetic field oscillates right and left; but this
picture can be rotated with the electric field oscillating right and left and the magnetic
field oscillating down and up. This is a different solution that is traveling in the same
direction. This arbitrariness in the orientation with respect to propagation direction is
known as polarization.

UNIT-III (MODULATION TECHNIQUES)

PART-A
1)In telecommunication, a communications system is a collection of individual
communications networks.
2) A communications subsystem is a functional unit or operational assembly that is
smaller than the larger assembly under consideration.
3) A communication satellite's communication subsystem contains transponders and
receives signals from the antenna subsystem.
4) A radio communication system is composed of several communications subsystems
that give exterior communications capabilities.
5) Power line communications systems operate by impressing a modulated carrier signal
on the wiring system.

(PART-B)
7) What is communication?
In telecommunication, a communications system is a collection of individual
communications networks, transmission systems, relay stations, tributary stations, and
data terminal equipment (DTE) usually capable of interconnection and interoperation to
form an integrated whole. The components of a communications system serve a common
purpose, are technically compatible, use common procedures, respond to controls, and
operate in unison. Telecommunications is a method of communication (e.g., for sports
broadcasting, mass media, journalism, etc.).
8) What is transmitter?

In radio electronics and broadcasting, a transmitter usually has a power supply, an


oscillator, a modulator, and amplifiers for audio frequency (AF) and radio frequency
(RF). The modulator is the device which piggybacks (or modulates) the signal
information onto the carrier frequency, which is then broadcast. Sometimes a device (for
example, a cell phone) contains both a transmitter and a radio receiver, with the
combined unit referred to as a transceiver. In amateur radio, a transmitter can be a
separate piece of electronic gear or a subset of a transceiver, and often referred to using
an abbreviated form; "XMTR". [1] In most parts of the world, use of transmitters is strictly
controlled by laws since the potential for dangerous interference (for example to
emergency communications) is considerable. In consumer electronics, a common device
is a Personal FM transmitter, a very low power transmitter generally designed to take a
simple audio source like an iPod, CD player, etc. and transmit it a few feet to a standard
FM radio receiver. Most personal FM transmitters in the USA fall under Part 15 of the
FCC regulations to avoid any user licensing requirements.

9) How is a transmitter used?


In industrial process control, a "transmitter" is any device which converts
measurements from a sensor into a signal to be received, usually sent via wires, by some
display or control device located a distance away. Typically in process control
applications the "transmitter" will output an analog 4-20 mA current loop or digital
protocol to represent a measured variable within a range. For example, a pressure
transmitter might use 4 mA as a representation for 50 psig of pressure and 20 mA as 1000
psig of pressure and any value in between proportionately ranged between 50 and 1000
psig. (A 0-4 mA signal indicates a system error.) Older-technology transmitters used
pneumatic pressure, typically ranging between 3 and 15 psig (20 to 100 kPa), to represent
a process variable.
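The 4-20 mA scaling described above is a simple linear map; a minimal sketch (function name and fault handling are illustrative choices, not a standard API):

def current_to_psig(i_ma, lo=50.0, hi=1000.0):
    """Map 4-20 mA linearly onto the lo..hi range (psig)."""
    if i_ma < 4.0:
        raise ValueError("0-4 mA indicates a system error")
    return lo + (i_ma - 4.0) * (hi - lo) / 16.0

print(current_to_psig(4.0))    # 50.0 psig
print(current_to_psig(12.0))   # 525.0 psig (mid-scale)
print(current_to_psig(20.0))   # 1000.0 psig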
10) Write a short note on the channel?

In the early days of radio engineering, radio frequency energy was generated using
arc converters or mechanical alternators such as the Alexanderson alternator (of which a
rare example survives at the SAQ transmitter in Grimeton, Sweden). In the 1920s electronic
transmitters, based on vacuum tubes, began to be used.
In broadcasting, and telecommunication, the part which contains the oscillator,
modulator, and sometimes audio processor, is called the exciter. Confusingly, the high-power amplifier which the exciter then feeds into is often called the "transmitter" by
broadcast engineers. The final output is given as transmitter power output (TPO),
although this is not what most stations are rated by.

(PART-C)
11)Explain about communication?
Communications system

In telecommunication, a communications system is a collection of individual


communications networks, transmission systems, relay stations, tributary stations, and
data terminal equipment (DTE) usually capable of interconnection and interoperation to
form an integrated whole. The components of a communications system serve a common
purpose, are technically compatible, use common procedures, respond to controls, and
operate in unison. Telecommunications is a method of communication (e.g., for sports
broadcasting, mass media, journalism, etc.).
A communications subsystem is a functional unit or operational assembly that is smaller
than the larger assembly under consideration. Examples of communications subsystems
in the Defense Communications System (DCS) are (a) a satellite link with one Earth
terminal in CONUS and one in Europe, (b) the interconnect facilities at each Earth

terminal of the satellite link, and (c) an optical fiber cable with its driver and receiver in
either of the interconnect facilities. Communication subsystem (b) basically consists of a
receiver, frequency translator and a transmitter. It also contains transponders, and a
communication satellite's communication system receives signals from the antenna
subsystem.

Examples
An optical communication system is any form of telecommunication that uses light as the
transmission medium. Optical communications consists of a transmitter, which encodes a
message into an optical signal, a channel, which carries the signal to its destination, and a
receiver, which reproduces the message from the received optical signal. Fiber-optic
communication systems transmit information from one place to another by sending light
through an optical fiber. The light forms an electromagnetic carrier wave that is
modulated to carry information. First developed in the 1970s, fiber-optic communication
systems have revolutionized the telecommunications industry and played a major role in
the advent of the Information Age. Because of its advantages over electrical transmission,
the use of optical fiber has largely replaced copper wire communications in core networks
in the developed world.
A radio communication system is composed of several communications subsystems that
give exterior communications capabilities.[1][2][3] A radio communication system
comprises a transmitting conductor[4] in which electrical oscillations[5][6][7] or currents are
produced and which is arranged to cause such currents or oscillations to be propagated
through the free space medium from one point to another remote therefrom and a
receiving conductor[4] at such distant point adapted to be excited by the oscillations or
currents propagated from the transmitter.[8][9][10][11]
Power line communications systems operate by impressing a modulated carrier signal on
the wiring system. Different types of powerline communications use different frequency
bands, depending on the signal transmission characteristics of the power wiring used.

Since the power wiring system was originally intended for transmission of AC power, the
power wire circuits have only a limited ability to carry higher frequencies. The
propagation problem is a limiting factor for each type of power line communications.
A duplex communication system is a system composed of two connected parties or
devices which can communicate with one another in both directions. The term duplex is
not used when describing communication between more than two parties or devices.
Duplex systems are employed in nearly all communications networks, either to allow for
a communication "two-way street" between two connected parties or to provide a
"reverse path" for the monitoring and remote adjustment of equipment in the field.
A tactical communications system is a communications system that (a) is used within, or
in direct support of, tactical forces, (b) is designed to meet the requirements of changing
tactical situations and varying environmental conditions, (c) provides securable
communications, such as voice, data, and video, among mobile users to facilitate
command and control within, and in support of, tactical forces, and (d) usually requires
extremely short installation times, usually on the order of hours, in order to meet the
requirements of frequent relocation.

12) Discuss about the transmitter?


Generally in communication and information processing, a transmitter
is any object (source) which sends information to an observer (receiver). When used in
this more general sense, vocal cords may also be considered an example of a transmitter.
In radio electronics and broadcasting, a transmitter usually has a power supply, an
oscillator, a modulator, and amplifiers for audio frequency (AF) and radio frequency
(RF). The modulator is the device which piggybacks (or modulates) the signal
information onto the carrier frequency, which is then broadcast. Sometimes a device (for
example, a cell phone) contains both a transmitter and a radio receiver, with the
combined unit referred to as a transceiver. In amateur radio, a transmitter can be a
separate piece of electronic gear or a subset of a transceiver, and often referred to using
an abbreviated form; "XMTR". [1] In most parts of the world, use of transmitters is strictly

controlled by laws since the potential for dangerous interference (for example to
emergency communications) is considerable. In consumer electronics, a common device
is a Personal FM transmitter, a very low power transmitter generally designed to take a
simple audio source like an iPod, CD player, etc. and transmit it a few feet to a standard
FM radio receiver. Most personal FM transmitters in the USA fall under Part 15 of the
FCC regulations to avoid any user licensing requirements.
In industrial process control, a "transmitter" is any device which converts
measurements from a sensor into a signal to be received, usually sent via wires, by
some display or control device located a distance away. Typically in process control
applications the "transmitter" will output an analog 4-20 mA current loop or digital
protocol to represent a measured variable within a range. For example, a pressure
transmitter might use 4 mA as a representation for 50 psig of pressure and 20 mA as
1000 psig of pressure and any value in between proportionately ranged between 50
and 1000 psig. (A 0-4 mA signal indicates a system error.) Older technology
transmitters used pneumatic pressure typically ranged between 3 to 15 psig (20 to
100).
13)Explain the need for modulation?

NEED FOR MODULATION BANDWIDTH REQUIREMENT


Just as important as the planning of the construction and location of the transmitter is
how its output fits in with existing transmissions. Two transmitters cannot broadcast on
the same frequency in the same area as this would cause co-channel interference. For a
good example of how the channel planners have dovetailed different transmitters' outputs
see Crystal Palace UHF TV channel allocations. This reference also provides a good
example of a grouped transmitter, in this case an A group. That is, all of its output is
within the bottom third of the UK UHF television broadcast band. The other two groups
(B and C/D) utilise the middle and top third of the band, see graph. By replicating this
grouping across the country (using different groups for adjacent transmitters), co-channel

interference can be minimised, and in addition, those in marginal reception areas can use
more efficient grouped receiving antennas. Unfortunately, in the UK, this carefully
planned system has had to be compromised with the advent of digital broadcasting which
(during the changeover period at least) requires yet more channel space, and
consequently the additional digital broadcast channels cannot always be fitted within the
transmitter's existing group. Thus many UK transmitters have become "wideband" with
the consequent need for replacement of receiving antennas (see external links). Once the
Digital Switch Over (DSO) occurs, the plan is that most transmitters will revert to their
original groups. Further complication arises when adjacent transmitters have to
transmit on the same frequency and under these circumstances the broadcast radiation
patterns are attenuated in the relevant direction(s). A good example of this is in the United
Kingdom, where the Waltham transmitting station broadcasts at high power on the same
frequencies as the Sandy Heath transmitting station's high power transmissions, with the
two being only 50 miles apart. Thus Waltham's antenna array[1] does not broadcast these
two channels in the direction of Sandy Heath and vice versa.
Where a particular service needs to have wide coverage, this is usually achieved by using
multiple transmitters at different locations. Usually, these transmitters will operate at
different frequencies to avoid interference where coverage overlaps. Examples include
national broadcasting networks and cellular networks. In the latter, frequency switching is
automatically done by the receiver as necessary, in the former, manual retuning is more
common (though the Radio Data System is an example of automatic frequency switching
in broadcast networks). Another system for extending coverage using multiple
transmitters is quasi-synchronous transmission, but this is rarely used nowadays.
Main and relay (repeater) transmitters
Transmitting stations are usually either classified as main stations or relay stations (also
known as repeaters or translators).

Main stations are defined as those that generate their own modulated output signal from a
baseband (unmodulated) input. Usually main stations operate at high power and cover
large areas.
Relay stations (translators) take an already modulated input signal, usually by direct
reception of a parent station off the air, and simply rebroadcast it on another frequency.
Usually relay stations operate at medium or low power, and are used to fill in pockets of
poor reception within, or at the fringe of, the service area of a parent main station.
Note that a main station may also take its input signal directly off-air from another
station, however this signal would be fully demodulated to baseband first, processed, and
then remodulated for transmission.

14) Explain about power relations in AM wave?

Some cities in Europe, like Mühlacker, Ismaning, Langenberg, Kalundborg,
Hörby and Allouis became famous as sites of powerful transmitters. For example,
Goliath transmitter was a VLF transmitter of the German Navy during World War II
located near Kalbe an der Milde in Saxony-Anhalt, Germany. Some transmitting towers
like the radio tower Berlin or the TV tower Stuttgart have become landmarks of cities.
Many transmitting plants have very high radio towers that are masterpieces of
engineering.
Having the tallest building in the world, the nation, the state/province/prefecture, city,
etc., has often been considered something to brag about. Often, builders of high-rise
buildings have used transmitter antennas to lay claim to having the tallest building. A
historic example was the "tallest building" feud between the Chrysler Building and the
Empire State Building in New York, New York.
Some towers have an observation deck accessible to tourists. An example is the
Ostankino Tower in Moscow, which was completed in 1967 on the 50th anniversary of
the October Revolution to demonstrate the technical abilities of the Soviet Union. Because
very tall radio towers of any construction type are prominent landmarks requiring careful
planning and construction, and because high-power transmitters, especially in the long- and
medium-wave ranges, can be received over long distances, such facilities were often
mentioned in propaganda. Other examples were the Deutschlandsender Herzberg/Elster
and the Warsaw Radio Mast.
14)What are the advantages of FM over AM?
FM has the following advantages over AM:
i) The amplitude of an FM wave is constant and independent of the depth of modulation,
so the transmitter power remains constant in FM whereas it varies in AM.
ii) Since the amplitude of FM is constant, noise interference is minimal in FM. Any noise
superimposed on the amplitude can be removed with amplitude limiters, whereas it is
difficult to remove amplitude variations due to noise in AM.
iii) The depth of modulation is limited in AM, but in FM the depth of modulation can be
increased to any value by increasing the deviation. This does not cause any distortion in
the FM signal.
iv) Since guard bands are provided in FM, there is less possibility of adjacent-channel
interference.
v) Since space waves are used for FM, the radius of propagation is limited to line of
sight. Hence it is possible to operate several independent transmitters on the same
frequency with minimal interference.
vi) Since FM uses the VHF and UHF ranges, noise interference is minimal compared to
AM, which uses the MF and HF ranges.
15) Briefly explain amplitude modulation theory?
Modulation is a technique used for encoding information onto an RF channel. Typically the
process of modulation combines an information signal with a carrier signal to create a
new composite signal that can be transmitted over a wireless link. In theory a message
signal could be sent directly into space by simply powering an antenna with
the message signal. However, message signals typically are not high enough in frequency
to make direct propagation an efficient transmission technique. In order to
efficiently transmit data, the lower frequency data must be modulated onto a higher
frequency wave. The high frequency wave acts as a carrier that transmits the data through
space to the receiver, where the composite wave is demodulated and the data is recovered.
There are a few general types of modulation: Frequency Modulation (FM), Phase
Modulation (PM), and Amplitude Modulation (AM). Frequency modulation encodes data
by shifting the frequency, phase modulation by shifting the phase, and amplitude
modulation by controlling the envelope of the carrier wave. AM is usually the
simplest to implement and is thus the scheme we chose for our modulator.
In an AM radio system a high frequency sinusoidal wave is amplitude modulated
by a lower frequency message signal.

This can be expressed by

Vam(t) = [A + Vm(t)] cos(2πfct)

where cos(2πfct) is the carrier and Vm(t) is the modulating signal. In our
application fc = 915 MHz and Vm(t) is the audio signal. We can take Vm(t) = Vm cos(2πfmt),
where fm is the highest frequency component in the message signal. For transmitting audio,
fm = 20 kHz. The constant A is chosen such that Vam(t) never becomes negative. Thus
Vam(t) can be rewritten as

Vam(t) = A[1 + m cos(2πfmt)] cos(2πfct).


In this expression "m" is known as the modulation index. After performing the
multiplication of the modulated signal the spectral output can be determined.

Vam(t) = A[(Cos(2fct) + mCos(2fmt) Cos(2fct))]


Vam(t) = A[(Cos(2fct) + m/2[Cos(2(fc-fm)t) + Cos(2(fc+fm)t))]]
From the multiplication it is evident that the resulting spectrum consists of the center
frequency fc and two side band frequencies (fc - fm) and (fc + fm).
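This expansion is easy to check numerically. The following is a minimal Python sketch
(with illustrative scaled-down frequencies, not the 915 MHz used in this project) that
generates a tone-modulated AM signal and confirms the three spectral lines at fc and
fc ± fm:

import numpy as np

fs = 1_000_000                  # sample rate (Hz), assumed for illustration
fc, fm = 100_000, 5_000         # illustrative carrier and message frequencies (Hz)
A, m = 1.0, 0.5                 # carrier amplitude and modulation index
t = np.arange(0, 0.01, 1 / fs)  # 10 ms of signal

vam = A * (1 + m * np.cos(2 * np.pi * fm * t)) * np.cos(2 * np.pi * fc * t)

spec = np.abs(np.fft.rfft(vam)) / len(t)    # single-sided magnitude / N
freqs = np.fft.rfftfreq(len(t), 1 / fs)
for f in (fc - fm, fc, fc + fm):            # the three predicted lines
    k = np.argmin(np.abs(freqs - f))
    print(f"{f / 1e3:6.1f} kHz -> {spec[k]:.3f}")   # A/2 = 0.5 carrier, A*m/4 = 0.125 sidebands

The printed sideband-to-carrier ratio comes out to m/2, as the expansion predicts.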

What has occurred is that the low frequency message signal has been translated to a much
higher frequency range for greater transmission efficiency. Either of the side bands can be
used to recover the message signal at the demodulator. One simply needs to filter out the
unwanted side band before sending the signal to the demodulation stage. This modulation
scheme is typically implemented in circuitry by a component called a Double-Side-Band
mixer. The mixer physically multiplies the carrier wave, driven by an oscillator, with the
message signal to produce the AM signal.

Diode Double Balanced Mixer


(Written by Chad Schrader)
Our group implemented the modulator by using a diode double-balanced mixer
design. We decided to use this design because it seemed like it would be the most
straightforward. The mixer basically consists of two balanced to unbalanced transformers
(baluns) and a Schottky diode quad. The Schottky quad basically is a rectifier that
multiplies the local oscillator input (915 MHz) and the audio input (20 kHz). We
implemented the Schottky quad by putting four Schottky diodes in a ring formation. The
two baluns are used to match the circuit to the external circuits that are connected to the
mixer. We made our own baluns by buying some toroidal ferrite cores and #32 wire and
wrapping the cores according to a schematic for a 1:1 balanced to unbalanced transformer
found in the ARRL Handbook. The cores were wound using three wires twisted together.
There were 7 winds on the toroid in total. This circuit was constructed on a protoboard,
taking care to try to keep the leads between components as short as possible. SMA
connectors were used for interfacing with the other circuits in the transmitter.
Envelope Detector
The demodulator was implemented using an envelope detector circuit. This
consists of a diode and a low pass filter circuit. The low pass filter circuit is simply a
capacitor and a resistor in parallel. The values for the resistor and the capacitor were
calculated using the following equation:

RC = (fc · fm)^(-1/2)

where fc is the local oscillator frequency (915 MHz) and fm is the frequency of
the audio signal (20 kHz). Once we found a value for RC (2.34 × 10^-7 s), we assumed a
value of 10 kΩ for R. This gave us a value of 23 pF for C. In our implementation of the
circuit, we used a 24 pF capacitor because that was the closest standard value. The
circuit was constructed on a protoboard, again trying to keep the leads between
components as short as possible. We also used SMA connectors for the input and output
of this circuit.
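As a cross-check of the component-value calculation above, here is a short Python sketch
using the same geometric-mean relation RC = 1/√(fc·fm), the choice consistent with the
RC of about 2.34 × 10^-7 s quoted above; the 10 kΩ resistor is the assumed starting value:

import math

fc = 915e6   # local oscillator / carrier frequency (Hz)
fm = 20e3    # highest audio frequency (Hz)

RC = (fc * fm) ** -0.5        # geometric mean keeps 1/fc << RC << 1/fm
R = 10e3                      # assumed resistor value (ohms)
C = RC / R                    # resulting capacitor value (farads)
print(f"RC = {RC:.2e} s, C = {C * 1e12:.0f} pF")   # about 23 pF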
Demodulator and Envelope Detector
(Written by Farial Mahbub)
Amplitude Modulation (AM) refers to the method of encoding information onto an
electromagnetic carrier by varying its amplitude in accordance with the analogue signal to
be transmitted. There are two essential methods that are used to demodulate AM signals,
and in this portion of the report we will discuss both. The figure below represents the
circuit used as the demodulator in this project.

The first method of demodulation is using the envelope detector. The envelope
detector is essentially made up of a rectifier and a low pass filter (see figure below). In
this project a diode was used as the rectifier to pass current in one direction only. In order
to calculate the value of the RC time constant to be used, the following equation is used:
2πfc >> 1/(RC) > 2πfm

Vr = Vp (1 - e^(-1/(fc·RC)))

where fc and fm are the carrier frequency and the modulating frequency respectively. The
inverse of the time constant is kept significantly smaller than the carrier frequency to keep
the ripple minimal. The second equation defines the peak-to-peak value of the ripple Vr of
the rectified signal, where Vp is the peak value of the incoming signal and fc is the
frequency of the signal.

The second method for demodulation, which we did not choose to implement, is the
product detector. This circuit multiplies the incoming signal by the output of a local
oscillator at the same frequency and phase as the carrier signal. After filtering the
product, only the original audio signal remains (this works for AM as well as
single-sideband modulation, SSB).
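Although we did not build it, the product detector is easy to demonstrate in simulation.
The sketch below (Python with numpy/scipy; frequencies scaled down for clarity, and the
filter design is an assumed choice) multiplies an AM signal by a synchronous local
oscillator and low-pass filters the product to recover the message:

import numpy as np
from scipy.signal import butter, filtfilt

fs, fc, fm = 1_000_000, 100_000, 2_000      # assumed sample rate and frequencies (Hz)
t = np.arange(0, 0.01, 1 / fs)
am = (1 + 0.5 * np.cos(2 * np.pi * fm * t)) * np.cos(2 * np.pi * fc * t)

mixed = am * np.cos(2 * np.pi * fc * t)     # same frequency and phase as the carrier
b, a = butter(4, 10_000 / (fs / 2))         # low-pass well below 2*fc
audio = filtfilt(b, a, mixed)               # recovered message tone plus a DC term
print(f"recovered tone swing: {audio.max() - audio.min():.2f}")   # about 0.50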
The output of the above described circuits can be seen graphically in the figure below.
The Signal 1 is the modulated signal that is applied to the Detector. The diode present in
the circuit demodulates the AM signal by allowing its carrier to multiply with its
sidebands. The diode passes current in only one direction and its output voltage is
proportional to the square of its input voltage. Thus, if an input voltage that varies
according to the modulation envelope is used, the information present in the sidebands
would be successfully recovered. Once the signal is rectified (after it passes through the
diode), it resembles Signal 2. The next component in the circuit is the low-pass filter (the
resistor and capacitor in parallel) and this filters out the RF and turns it into Signal 3. The
coupling capacitor in the circuit is present to eliminate the DC component in the received
signal thus centering the information signal around the zero axis as in Signal 4.

Measurements, Testing, and Calculations of the Amplitude Modulator


(Written By Omar Castillo)
A plot of the power S in one sideband (dBc) against modulation index m for DSB-LC is
shown below.

m      S (dBc)
0.1    -26.02
1      -6.02
2      0.0006
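These tabulated values follow from the standard tone-modulation result that each sideband
carries (m/2)² of the carrier power, i.e. S = 20·log10(m/2) dBc; a quick check in Python:

import math

for m in (0.1, 1.0, 2.0):
    s_dbc = 20 * math.log10(m / 2)          # one-sideband power relative to carrier
    print(f"m = {m:4.1f} -> S = {s_dbc:7.2f} dBc")   # -26.02, -6.02, 0.00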

Testing the Sidebands


We arbitrarily set the carrier signal to 30kHz to make the sidebands as close to equal as
we can make them. With the carrier signal set at 30kHz, a 3kHz 10Vpp sinusoidal signal
was sent through the input port where audio would be inputted and the sidebands were
measured.
fc = 30 kHz
Sideband      Power
fc + 3 kHz    45.55 dBm
fc - 3 kHz    45.16 dBm

Double Sideband Suppressed Carrier (DSB-SC) Modulation.


With the carrier signal set at 30 kHz, we input a 3 kHz, 10 Vp-p sinusoidal signal with no
DC offset applied. Scope and spectrum analyzer measurements were taken at the output.

f(kHz)   P(dBm)
27       -22.71
33       -23
86.2     -53.3
92.6     -56.9

Figure: Spectrum of DSB-SC signal (power in dBm vs. frequency in kHz, 0-120 kHz span).

The measured spectrum of the DSB-SC is reasonably close to the theoretical model.
We then varied the input signal and tested the output. The input modulating signal
amplitude was varied and both the input and output voltage were measured.
Vin,pp(V)   Vo,pp(V)
0           0
2           0.8
4           1.5
6           2.3
8           3
10          4

The output amplitude is proportional to the modulating (input) signal amplitude, with the
constant of proportionality set by the carrier amplitude.

Keeping the input amplitude constant at 10Vpp, we varied the frequency of the
modulating input signal from 4kHz to 10kHz in increments of 2kHz and measured
corresponding amplitudes.
f(kHz)   Vo,pp(V)
4        9.5
6        9.5
8        9.5
10       9.5

From the data it appears that the frequency of the modulating signal does not affect the
output amplitude.
Double Sideband Large Carrier (DSB-LC) Modulation
We now apply a DC offset to get a DSB-LC signal. Adding a DC offset to the input gives a
signal of the form [A + f(t)], which is the basic form of DSB-LC, i.e. [A + f(t)] cos(ωct).
We continue to use an input modulating signal of 3kHz, 2Vp-p sinusoidal with a +2V DC
offset. Measurements of the output were taken on both the oscilloscope and spectrum
analyzer. Data of the spectrum was taken twice, once with a span from 0-120kHz and
another with a span of 10-50kHz.
0-120 kHz span:
f(kHz)   P(dBm)
27       -36.4
30       -24.6
33       -36.8
86.2     -67.07
89.2     -56.3
92.2     -70.08

10-50 kHz span:
f(kHz)   P(dBm)
26.6     -36.64
29.6     -24.4
32.7     -36.9

Figure: Spectrum of DSB-LC, actual results (power in dBm vs. frequency in kHz, 0-120 kHz span).

Figure: Spectrum of DSB-LC, theoretical model (power in dBm vs. frequency in kHz, 10-50 kHz span).

Comparing the spectrum of the DSB-LC with the theoretical model, they are almost the
same.
Looking at the modulated signal at the output, we obtained values for Emax and Emin
and computed the modulation index:
Emax = 2.2 Vpp
Emin = 0.8 Vpp
m = 0.468
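This follows from the usual oscilloscope formula m = (Emax - Emin)/(Emax + Emin); a
one-line check in Python:

emax, emin = 2.2, 0.8                  # Vpp readings from the scope
m = (emax - emin) / (emax + emin)      # envelope (trapezoid) method
print(f"m = {m:.3f}")                  # about 0.47, matching the value above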
The input amplitude at the carrier input was varied from 0-10 Vpp in 1 V increments;
measurements of Emax and Emin were taken from the oscilloscope, and the power of the
carrier and two sidebands was taken from the spectrum analyzer. Values of m and S were
calculated from these voltage and power readings.

Pl = left sideband power, Pc = carrier power, Pr = right sideband power

Vi,pp(V)  Emax(V)  Emin(V)  Pl(dBm)  Pc(dBm)  Pr(dBm)  m         S(dBc)
0         1.6      1.5      -56.3    -24.3    -56.6    0.032258  -32.15
2         2.2      1.9      -37.09   -24.4    -37.4    0.073171  -12.845
4         1.9      0.1      -31.1    -24.8    -31.4    0.9       -6.45
6         3.6      -0.8     -27.1    -24.9    -27.5    1.571429  -2.4
8         4.2      -1.6     -24.6    -25.1    -25.13   2.230769  0.235
10        4.9      -2.3     -22.7    -25.3    -23      2.769231  2.45

Plotting the modulation index vs. the input amplitude, the data shows that there is a linear
relationship. This makes sense since the input amplitude has a direct relationship with
Emax and the values of the m were calculated from the Emax and Emin values.
Looking at the graph for sideband power vs. modulation index, it looks close to
theoretical models.
Using a 3kHz, 2Vpp sinusoidal signal as our input, the DC offset was varied from -6 to
+6V in 1V increments. Measurements were taken from the oscilloscope of Emax and
Emin and the modulation indices were calculated.

DC Offset (V)  Emax(V)  Emin(V)  m
-6             5.2      4        0.130435
-5             4.5      3.2      0.168831
-4             3.8      2.4      0.225806
-3             3        1.7      0.276596
-2             2.3      0.9      0.4375
-1             1.6      0.1      0.882353
0              0.8      0        1
1              1.8      0.1      0.894737
2              2.3      1        0.393939
3              3        1.7      0.276596
4              3.8      2.5      0.206349
5              4.5      3.3      0.153846
6              5.1      3.9      0.133333

The relationship between the modulation index and the DC offset appears to be
exponential; that is, m = e^(-k|x|), where x is the DC offset and k is a constant.
Over-Modulation with DSB-LC:
In the testing section, the modulation index m is merely defined as a parameter that
determines the amount of modulation. However, we have to ask what degree of
modulation is required to establish a desirable AM communication link.
The answer is to maintain m < 1.0 (100%).
This is important to ensure successful retrieval of the original transmitted information at
the receiver end. Note that in the demodulation process (the reverse of modulation) the
message signal is simply traced out from the envelope of the modulated signal. As a quick
recap, the amplitude of the modulated signal varies in proportion to the amplitude of the
information signal.
Thus, once m > 1.0 (100%), envelope distortion will occur and the waveform is said to be
overmodulated. Under these circumstances the envelope of s(t) is no longer proportional
to sm(t), hence the desired message signal is distorted.
Here Ac is the DC component of the carrier amplitude, s(t) is the AM signal, and sm(t) is
the modulating signal.
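A minimal numerical illustration of this point (Python, with an assumed 5 kHz tone
message): for m ≤ 1 the envelope |1 + m·cos(2πfmt)| reproduces the message, while for
m > 1 it folds at zero and no longer does:

import numpy as np

t = np.linspace(0, 1e-3, 10_000)
fm = 5_000                                   # assumed tone frequency (Hz)
for m in (0.5, 1.5):
    envelope = np.abs(1 + m * np.cos(2 * np.pi * fm * t))
    # For m <= 1 the envelope minimum is 1 - m >= 0 and the shape is the
    # message itself; for m > 1 the envelope touches zero and folds (distortion).
    print(f"m = {m}: envelope minimum = {envelope.min():.3f}")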

(Source: http://foe.mmu.edu.my/course/etm3046/notes/AM(-DSSC)ETM2042.doc; Chapter 3,
Amplitude Modulation, Communications I - ETM2042)

3. Problem with Demodulation (using an envelope detector):

A rectifier with a filter capacitor = the peak rectifier.

We can avoid negative peak clipping by choosing a small value of RC. However, to
minimize ripple we want to make RC as large as possible. In practice we should therefore
choose a value satisfying 1/fc << RC << 1/fm to minimize the signal distortions caused by
these effects. This is clearly only possible if the modulation frequency is much lower than
the carrier frequency. Envelope detectors only work satisfactorily when we ensure this
inequality is true.

If RC is too close to the inverse of the carrier frequency, there is excessive ripple but no
negative peak clipping.

If RC is too close to the inverse of the modulation frequency, there is less ripple but
significant negative peak clipping.
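This trade-off can be seen in a simple behavioral simulation (a Python sketch with
assumed, scaled-down frequencies; an ideal diode followed by exponential RC decay
stands in for the detector, so it is an illustration rather than a circuit-accurate model):

import numpy as np

fs, fc, fm, m = 5_000_000, 100_000, 2_000, 0.8   # assumed values
t = np.arange(0, 2e-3, 1 / fs)
am = (1 + m * np.cos(2 * np.pi * fm * t)) * np.cos(2 * np.pi * fc * t)
ideal = 1 + m * np.cos(2 * np.pi * fm * t)       # the envelope we want to recover

for rc in (2 / fc, 0.5 / fm):                    # too small vs. too large
    v, out = 0.0, np.empty_like(am)
    for i, x in enumerate(am):
        v = x if x > v else v * (1 - 1 / (rc * fs))   # ideal diode + RC decay
        out[i] = v
    # Small RC: large deviation comes from ripple between carrier peaks.
    # Large RC: large deviation comes from failure to follow the falling envelope.
    print(f"RC = {rc:.1e} s: worst-case envelope error = {np.max(np.abs(out - ideal)):.2f}")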

WRITE NOTES ON AM?

FREQUENCY SPECTRUM OF AM WAVE


AM has the advantage of simplicity, but it is not the most efficient mode to use both in
terms of the amount of spectrum it takes up and the usage of the power. For this reason, it
is rarely used for communications purposes. Its only major communications use is for
VHF aircraft communications. However, it is still widely used on the long, medium, and
short wave bands for broadcasting because its simplicity enables the cost of radio
receivers to be kept to a minimum.

To find out why it is inefficient, it is necessary to look at a little theory behind the
operation of AM. When a radio-frequency signal is modulated by an audio signal, the
envelope will vary. The level of modulation can be increased to a level where the
envelope falls to zero and then rises to twice the unmodulated level. Any increase above
this will cause distortion because the envelope cannot fall below zero. As this is the
maximum amount of modulation possible, it is called 100 per cent modulation (Figure 3-5).

Figure 3-5. Fully modulated signal.


Even with 100 per cent modulation, the utilization of power is very poor. When the
carrier is modulated, sidebands appear at either side of the carrier in its frequency
spectrum. Each sideband contains the information about the audio modulation. To look at
how the signal is made up and the relative powers, take the simplified case where a 1-kHz
tone is modulating the carrier. In this case, two signals will be found: 1 kHz either
side of the main carrier, as shown in Figure 3-6. When the carrier is fully modulated (i.e.
100 per cent), the amplitude of the modulation is equal to half that of the main carrier;
that is, the sum of the powers of the sidebands is equal to half that of the carrier. This
means that each sideband has just a quarter of the carrier power. In other words, for a
transmitter with a 100-watt carrier, the total sideband power will be 50 W and each
individual sideband will be 25 W. During the
modulation process the carrier power
remains constant. It is only needed as a reference during the demodulation process. This
means that the sideband power is the useful section of the signal, and this corresponds to
(50/150) × 100 per cent, or only 33 per cent of the total power transmitted.
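The arithmetic behind these figures, as a small Python check:

Pc = 100.0                     # carrier power (W)
m = 1.0                        # 100 per cent modulation
P_sb_each = Pc * (m / 2) ** 2  # 25 W per sideband
P_sb_total = 2 * P_sb_each     # 50 W in both sidebands
P_total = Pc + P_sb_total      # 150 W transmitted
print(f"useful fraction = {P_sb_total / P_total:.0%}")   # -> 33%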

Figure 3-6. Spectrum of a signal modulated with a 1-kHz tone.


Not only is AM wasteful in terms of power; it is also not very efficient in its use of
spectrum. If the 1-kHz tone is replaced by a typical audio signal made up of a variety of
sounds with different frequencies, then each frequency will be present in each sideband
(Figure 3-7). Accordingly, the sidebands spread out either side of the carrier as shown and

the total bandwidth used is equal to twice the top frequency that is transmitted. In the
crowded conditions found on many of the short wave bands today this is a waste of
space, and other modes of transmission that take up less space are often used.

Figure 3-7. Spectrum of a signal modulated with speech or music.


To overcome the disadvantages of AM, a derivative known as single sideband (SSB) is
often used. By removing or reducing the carrier and removing one sideband, the
bandwidth can be halved and the efficiency improved. The carrier can be introduced by
the receiver for demodulation.
Neither AM in its basic form nor SSB is used for mobile phone applications, although in
some applications AM combined with phase modulation is used.
Modulation index
It is often necessary to define the level of modulation that is applied to a signal. A factor
or index known as the modulation index is used for this. When expressed as a percentage,
it is the same as the depth of modulation. In other words, it can be expressed as:

modulation index m = (amplitude of the modulating signal) / (amplitude of the carrier)
The value of the modulation index must not be allowed to exceed 1 (i.e. 100 per cent in
terms of the depth of modulation), otherwise the envelope becomes distorted and the
signal will spread out either side of the wanted channel, causing interference to other
users.


The term electromagnetic spectrum refers to all forms of energy transmitted by means of


waves traveling at the speed of light. Visible light is a form of electromagnetic radiation,
but the term also applies to cosmic rays, X rays, ultraviolet radiation, infrared radiation,
radio waves, radar, and microwaves. These forms of electromagnetic radiation make up
the electromagnetic spectrum much as the various colors of light make up the visible
spectrum (the rainbow).
Wavelength and frequency
Any wave, including an electromagnetic wave, can be described by two properties: its
wavelength and frequency. The wavelength of a wave is the distance between two
successive identical parts of the wave, as between two wave peaks or crests. The Greek
letter lambda (λ) is often used to represent wavelength. Wavelength is measured in
various units, depending on the kind of wave being discussed. For visible light, for
example, wavelength is often expressed in nanometers (billionths of a meter); for radio
waves, wavelengths are usually expressed in centimeters or meters.
Frequency is the rate at which waves pass a given point. The frequency of an X-ray
beam, for example, might be expressed as 10^18 hertz. The term hertz (abbreviation: Hz) is
a measure of the number of waves that pass a given point per second of time. If you could
watch the X-ray beam from some given position, you would see
1,000,000,000,000,000,000 (that is, 10^18) wave crests pass you every second.

For every electromagnetic wave, the product of the wavelength and frequency equals a
constant, the speed of light (c). In other words, λf = c. This equation shows that
wavelength and frequency have a reciprocal relationship to each other. As one increases,
the other must decrease. Gamma rays, for example, have very small wavelengths and
very large frequencies. Radio waves, by contrast, have large wavelengths and very small
frequencies.
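A quick numerical illustration of λf = c (Python, using the broadcast bands mentioned
later in this section):

c = 3e8   # speed of light (m/s)
for name, f in (("AM broadcast, 1 MHz", 1e6), ("FM broadcast, 100 MHz", 1e8)):
    print(f"{name}: wavelength = {c / f:.0f} m")   # 300 m and 3 m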
AM TRANSMITTER BLOCK DIAGRAM
As shown in the accompanying figure, the whole range of the electromagnetic spectrum
can be divided up into various regions based on wavelength and frequency.
Electromagnetic radiation with very short wavelengths and high frequencies fall into the
cosmic ray/gamma ray/ultraviolet radiation region. At the other end of the spectrum are
the long wavelength, low frequency forms of radiation: radio, radar, and microwaves. In
the middle of the range is visible light.
Properties of waves in different regions of the spectrum are commonly described by
different notation. Visible radiation is usually described by its wavelength, while X rays
are described by their energy. All of these schemes are equivalent, however; they are just
different ways of describing the same properties.
Words to Know
Electromagnetic radiation: Radiation that travels through a vacuum with the speed of
light and that has properties of both an electric and magnetic wave.
Frequency: The number of waves that pass a given point in a given period of time.
Hertz: The unit of frequency; a measure of the number of waves that pass a given point
per second of time.
Wavelength: The distance between two successive peaks or crests in a wave.

The boundaries between types of electromagnetic radiation are rather loose. Thus, a wave
with a frequency of 8 × 10^14 hertz could be described as a form of very deep violet visible
light or as a form of ultraviolet radiation.
Applications
The various forms of electromagnetic radiation are used everywhere in the world around
us. Radio waves are familiar to us because of their use in communications. The standard
AM radio band includes radiation in the 540 to 1650 kilohertz (thousands of hertz) range.
The FM band includes the 88 to 108 megahertz (millions of hertz) range. This region also
includes shortwave radio transmissions and television broadcasts.
Microwaves are probably most familiar to people because of microwave ovens. In a
microwave oven, food is heated when microwaves excite water molecules contained
within foods (and the molecules' motion produces heat). In astronomy, emission of
radiation at a wavelength of 8 inches (21 centimeters) has been used to identify neutral
hydrogen throughout the galaxy. Radar is also included in this region.
The infrared region of the spectrum is best known to us because of the fact that heat is a
form of infrared radiation. But the visible wavelength range is the range of frequencies
with which we are most familiar. These are the wavelengths to which the human eye is
sensitive and which most easily pass through Earth's atmosphere. This region is further
broken down into the familiar colors of the rainbow, also known as the visible spectrum.
The ultraviolet range lies at wavelengths just short of the visible range. Most of the
ultraviolet radiation reaching Earth in sunlight is absorbed in the upper atmosphere.
Ozone, a form of oxygen, has the ability to trap ultraviolet radiation and prevent it from
reaching Earth. This fact is important since ultraviolet radiation can cause a number of
problems for both plants and animals. The depletion of the ozone layer during the 1970s
and 1980s was a matter of some concern to scientists because of the increase in
dangerous ultraviolet radiation reaching Earth.

We are most familiar with X rays because of their uses in medicine. X-radiation can pass
through soft tissue in the body, allowing doctors to examine bones and teeth from the
outside. Since X rays do not penetrate Earth's atmosphere, astronomers must place X-ray
telescopes in space.
Gamma rays are the most energetic of all electromagnetic radiation, and we have little
experience with them in everyday life. They are produced by nuclear processes: during
radioactive decay (in which an element gives off energy by the disintegration of its
nucleus) or in nuclear reactions in stars or in space.
UNIT-IV(SINGLE SIDEBAND MODULATION)
(PART-A)
1) A carrier that has been modulated by voice or music is accompanied by two identical
sidebands, each carrying the same intelligence.
2) A single sideband modulator provides a means of translating low-frequency baseband
signals directly to radio frequency.
3) The level of one of the RF paths is adjusted to achieve amplitude balance.
4) Apply audio and an IF sine wave into a balanced modulator; we want to mix the audio
and IF to produce an audio-modulated IF signal.
5) One sideband lies just above and one just below a carrier frequency.

(PART-B)

6) What is sideband modulation?

SINGLE-SIDEBAND TRANSMITTER: You should remember the


properties of modulation envelopes from your study of NEETS, Module
12, Modulation Principles. A carrier that has been modulated by voice or
music is accompanied by two identical sidebands, each carrying the same
intelligence. In amplitude-modulated (AM) transmitters, the carrier and
both sidebands are transmitted. In a single-sideband transmitter (ssb), only
one of the sidebands, the upper or the lower, is transmitted while the
remaining sideband and the carrier are suppressed. SUPPRESSION is the
elimination of the undesired portions of the signal. Figure 2-7 is the block
diagram of a single-sideband transmitter. You can see the audio amplifier
increases the amplitude of the incoming signal to a level adequate to
operate the ssb generator.
7) How is sideband modulation used?

The ssb generator (modulator) combines its audio input and its carrier
input to produce the two sidebands. The two sidebands are then fed to a
filter that selects the desired sideband and suppresses the other one. By
eliminating the carrier and one of the sidebands, intelligence is transmitted
at a savings in power and frequency bandwidth. In most cases ssb
generators operate at very low frequencies when compared with the
normally transmitted frequencies. For that reason, we must convert (or
translate) the filter output to the desired frequency. This is the purpose of
the mixer stage. A second output is obtained from the frequency generator
and fed to a frequency multiplier to obtain a higher carrier frequency for
the mixer stage. The output from the mixer is fed to a linear power
amplifier to build up the level of the signal for transmission. Suppressed
Carrier In ssb the carrier is suppressed (or eliminated) at the transmitter,
and the sideband frequencies produced by the carrier are reduced to a
minimum. You will probably find this reduction (or elimination) is the
most difficult aspect in the understanding of ssb. In a single-sideband
suppressed carrier, no carrier is present in the transmitted signal.

8) What is SSB generation?

It is true that simultaneous FM and AM modulation can suppress the amplitude of one
sideband and increase the amplitude of the other if the modulation phasing is right, but
the resulting signal is not the same as a normal SSB signal.

If the modulation phasing is such that each time the carrier frequency deviates higher due
to FM the carrier amplitude increases due to AM, the upper sideband will become
stronger. Likewise, with that phasing, each time the carrier frequency deviates lower the
carrier amplitude will decrease due to AM, which decreases the strength of the lower
sideband. If the modulation phasing is reversed by reversing the audio input polarity to
either the FM or AM modulator, the lower sideband will become stronger and the upper
sideband will become weaker.

Neither of those FM/AM modulation phase relationships produces the type of signal
normally referred to as an SSB signal, but they both suppress the amplitude of one
sideband.

If the modulation phasing is changed to make the FM and AM modulation phase
difference 90 degrees, the amplitudes of the upper and lower sidebands will be equal, and
the carrier amplitude will be higher when the carrier frequency passes through center
frequency in one direction and lower when it passes through center frequency in the
opposite direction. If the FM/AM modulation phasing is changed to -90 degrees, the
amplitudes of the upper and lower sidebands will be equal, but the frequency deviation
directions for higher and lower amplitudes will be opposite compared to those obtained
with a 90-degree modulation phase difference.

9) What is a pilot carrier?


The range of the electromagnetic spectrum located either above (the upper sideband) or
below (the lower sideband) the frequency of a sinusoidal carrier signal c(t). The
sidebands are produced by modulating the carrier signal in amplitude, frequency, or
phase in accordance with a modulating signal m(t) to produce the modulated signal s(t).
The resulting distribution of power in the sidebands of the modulated signal depends on
the modulating signal and the particular form of modulation employed.
10) What is an independent sideband?

Sideband: any frequency component of a modulated carrier wave other than the
frequency of the carrier wave itself, i.e., any frequency added to the carrier as a result
of modulation; sidebands carry the actual information while the carrier contributes
none at all. Those frequency components that are higher than the carrier frequency are
known as upper sidebands; those lower are called lower sidebands. The upper and
lower sidebands contain equivalent information; thus only one needs to be transmitted.
Such single-sideband signals are very efficient in their use of the frequency spectrum
when compared to standard amplitude modulation (AM) signals. See radio.

(PART-C)
11) Explain sideband modulation?

SINGLE-SIDEBAND TRANSMITTER: You should remember
the properties of modulation envelopes from your study of NEETS,
Module 12, Modulation Principles. A carrier that has been modulated by
voice or music is accompanied by two identical sidebands, each carrying
the same intelligence. In amplitude-modulated (AM) transmitters, the
carrier and both sidebands are transmitted. In a single-sideband transmitter
(ssb), only one of the sidebands, the upper or the lower, is transmitted
while the remaining sideband and the carrier are suppressed.
SUPPRESSION is the elimination of the undesired portions of the signal.
Figure 2-7 is the block diagram of a single-sideband transmitter. You can
see the audio amplifier increases the amplitude of the incoming signal to a
level adequate to operate the ssb generator. Usually the audio amplifier is
just a voltage amplifier. (Figure 2-7. SSB transmitter block diagram.)
The ssb generator (modulator) combines its audio input and its carrier
input to produce the two sidebands. The two sidebands are then fed to a
filter that selects the desired sideband and suppresses the other one. By
eliminating the carrier and one of the sidebands, intelligence is transmitted
at a savings in power and frequency bandwidth. In most cases ssb
generators operate at very low frequencies when compared with the
normally transmitted frequencies. For that reason, we must convert (or
translate) the filter output to the desired frequency. This is the purpose of
the mixer stage. A second output is obtained from the frequency generator
and fed to a frequency multiplier to obtain a higher carrier frequency for
the mixer stage. The output from the mixer is fed to a linear power
amplifier to build up the level of the signal for transmission. Suppressed
Carrier In ssb the carrier is suppressed (or eliminated) at the transmitter,
and the sideband frequencies produced by the carrier are reduced to a
minimum. You will probably find this reduction (or elimination) is the
most difficult aspect in the understanding of ssb. In a single-sideband
suppressed carrier, no carrier is present in the transmitted signal.
12) Describe the balanced modulator?

A single sideband modulator provides a means of translating low-frequency baseband
signals directly to radio frequency in a single stage. Such modulators, providing a
suppressed carrier and one or two of the sidebands, facilitate the transmission of
intelligence with significantly increased gain over AM transmission. Control signals are
continuously generated to keep the local oscillator breakthrough and image sidebands
down to an insignificantly low level. This is achieved by monitoring the amplitude of the
RF output of the single sideband modulator and comparing this with the baseband
signals. By adjusting the d.c. offsets at the baseband inputs to the balanced modulators,
carrier breakthrough is cancelled. By adjusting the relative phases of the baseband
signals, deviations from the 90-degree split are compensated. By changing the amplitude
of one of the baseband signals, the level of one of the RF paths is adjusted to achieve
amplitude balance.
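The quadrature (90-degree) structure described here is the classic phasing method of SSB
generation. The following Python sketch is an idealized illustration of that method, not
the circuit above: the message and its 90-degree-shifted (Hilbert-transformed) copy drive
two multipliers in quadrature, and their combination cancels one sideband. All
frequencies are assumed, scaled-down values.

import numpy as np
from scipy.signal import hilbert

fs, fc, fm = 100_000, 10_000, 1_000
t = np.arange(0, 0.05, 1 / fs)
msg = np.cos(2 * np.pi * fm * t)
msg_q = np.imag(hilbert(msg))              # message shifted by 90 degrees

# Quadrature combination: keeps the upper sideband, cancels the lower one
ssb = msg * np.cos(2 * np.pi * fc * t) - msg_q * np.sin(2 * np.pi * fc * t)

spec = np.abs(np.fft.rfft(ssb)) / len(t)
freqs = np.fft.rfftfreq(len(t), 1 / fs)
for f in (fc - fm, fc + fm):               # lower sideband is suppressed
    k = np.argmin(np.abs(freqs - f))
    print(f"{f} Hz: {spec[k]:.4f}")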

13) Explain the pilot carrier and independent sideband?

Sideband: any frequency component of a modulated carrier wave other than the frequency
of the carrier wave itself, i.e., any frequency added to the carrier as a result of
modulation; sidebands carry the actual information while the carrier contributes none at
all. Those frequency components that are higher than the carrier frequency are known as
upper sidebands; those lower are called lower sidebands. The upper and lower sidebands
contain equivalent information; thus only one needs to be transmitted. Such
single-sideband signals are very efficient in their use of the frequency spectrum when
compared to standard amplitude modulation (AM) signals. See radio.

Either of the two bands of frequencies, one just above and one just below a carrier
frequency, that result from modulation of a carrier wave.

The range of the electromagnetic spectrum located either above (the upper sideband) or
below (the lower sideband) the frequency of a sinusoidal carrier signal c(t). The
sidebands are produced by modulating the carrier signal in amplitude, frequency, or phase

in accordance with a modulating signal m(t) to produce the modulated signal s(t). The
resulting distribution of power in the sidebands of the modulated signal depends on the
modulating signal and the particular form of modulation employed. See also Amplitude
modulation; Frequency modulation; Modulation; Phase modulation.
In radio communications, a signal that results from amplitude modulating a carrier
frequency. The upper sideband is the carrier plus modulation, and the lower sideband is
the carrier minus modulation, which are mirror images of each other. See single sideband.



Figure: The power of an AM signal plotted against frequency. Key: fc is the carrier
frequency, fm is the maximum modulation frequency.


In radio communications, a sideband is a band of frequencies higher than or lower than
the carrier frequency, containing power as a result of the modulation process. The
sidebands consist of all the Fourier components of the modulated signal except the
carrier. All forms of modulation produce sidebands.
Amplitude modulation of a carrier wave normally results in two mirror-image sidebands.
The signal components above the carrier frequency constitute the upper sideband (USB)
and those below the carrier frequency constitute the lower sideband (LSB). In
conventional AM transmission, the carrier and both sidebands are present, sometimes
called double sideband amplitude modulation (DSB-AM).
In some forms of AM the carrier may be removed, producing double sideband with
suppressed carrier (DSB-SC). An example is the stereophonic difference (L-R)
information transmitted in FM stereo broadcasting on a 38 kHz subcarrier. The receiver
locally regenerates the subcarrier by doubling a special 19 kHz pilot tone, but in other
DSB-SC systems the carrier may be regenerated directly from the sidebands by a Costas
loop or squaring loop. This is common in digital transmission systems such as BPSK
where the signal is continually present.

Sidebands are evident in this spectrogram of an AM broadcast (carrier highlighted in red).


If part of one sideband and all of the other remain, it is called vestigial sideband, used
mostly with television broadcasting, which would otherwise take up an unacceptable
amount of bandwidth. Transmission in which only one sideband is transmitted is called
single-sideband transmission or SSB. SSB is the predominant voice mode on shortwave
radio other than shortwave broadcasting. Since the sidebands are mirror images, which
sideband is used is a matter of convention. In amateur radio, LSB is traditionally used
below 10 MHz and USB is used above 10 MHz.
In SSB, the carrier is suppressed, significantly reducing the electrical power (by up to 12
dB) without affecting the information in the sideband. This makes for more efficient use
of transmitter power and RF bandwidth, but a beat frequency oscillator must be used at
the receiver to reconstitute the carrier. Another way to look at an SSB receiver is as an
RF-to-audio frequency transposer: in USB mode, the dial frequency is subtracted from
each radio frequency component to produce a corresponding audio component, while in
LSB mode each incoming radio frequency component is subtracted from the dial
frequency.
Sidebands can also interfere with adjacent channels. The part of the sideband that would
overlap the neighboring channel must be suppressed by filters, before or after modulation
(often both). In Broadcast band frequency modulation (FM), subcarriers above 75 kHz
are limited to a small percentage of modulation and are prohibited above 99 kHz
altogether to protect the 75 kHz normal deviation and 100 kHz channel boundaries.
Amateur radio and public service FM transmitters generally utilize 5 kHz deviation.

See also:
Single-sideband modulation, for more technical information about sideband modulation.
Sideband computing, a distributed computing method using a channel separate from the
main communication channel.
Out-of-band communications, which involve a separate channel other than the main
communication channel.
Side lobe.

13) Briefly explain vestigial sideband transmission?

A telephone transmission system providing multiple modulated
carrier communication channels between a single central station and plural remote
stations on a single transmission medium which exhibits phase nonlinearities at certain
frequencies, comprising:
plural transmitters at said central and remote stations, each generating on said
transmission medium double side band AM modulated communication signals at
different carrier frequencies; and
plural receivers at said central and remote stations, each tuned to one of said different
carrier frequencies on said transmission medium, at least one of said plural receivers,
which receiver is tuned to receive the double side band AM modulated communication
signal from one of said plural transmitters, and which receiver is tuned to one of said
certain frequencies which exhibit phase nonlinearities, attenuating at least a portion of
one of said double side bands more than the corresponding portion of the other of said
side bands to eliminate side band phase cancellation.
2. A transmission system, as defined in claim 1, wherein said transmission medium
exhibits nonlinear phase characteristics at plural separated frequencies, and wherein
plural of said receivers, which attenuate one of said double side bands more than the other

of said double side bands, are utilized to receive different carrier frequencies at said
plural separated frequencies.
3. A transmission system, as defined in claim 1, wherein at least one of said plural
receivers receives full double side band AM modulated communication signals.
4. A transmission system, as defined in claim 1, wherein said one of said plural receivers
provides substantially double side band reception at modulation frequencies below a first
predetermined frequency, and substantially single side band reception at modulation
frequencies above a second predetermined frequency.
5. A transmission system, as defined in claim 1, wherein said one of said plural receivers
attenuates the received carrier frequency by approximately 3.5 dB.
6. A transmission system, as defined in claim 1, wherein said one of said plural receivers
additionally comprises:
filter means having an attenuation versus frequency slope characteristic at the received
carrier frequency for reducing distortion caused by frequency drift of the carrier.
7. A transmission system, as defined in claim 1, wherein said one of said plural receivers
includes a band pass filter, the pass band of which extends on both sides of the received
carrier frequency, the poles on one side of the pass band having a relatively lower Q than
the poles on the other side of the pass band.
8. A transmission system, as defined in claim 1, wherein said one of said plural receivers
includes a band pass filter providing a pass band which extends above and below the
received carrier frequency by a predetermined frequency amount, and a notch filter, the
notch of which is frequency positioned adjacent one edge of said band pass filter.
9. A method of carrier multiplexing multiple telephone communication channels between
a single central station and plural remote stations on a single communication medium
exhibiting phase nonlinearities at a certain frequency, comprising:
transmitting said multiple channels from said central and remote stations on said
communication medium as double side band AM modulated carrier signals having
carriers at different frequencies; and

avoiding communication medium induced distortion at said certain frequency by


receiving at least some modulation frequencies on said communication medium of at
least one of said multiple double side band AM modulated channels at one of said central
or remote stations as a single side band AM modulation signal.

14) Explain pulse amplitude modulation?

Pulse-amplitude modulation (PAM) is a form of signal modulation where the message
information is encoded in the amplitude of a series of signal pulses.
Example: A two-bit modulator (PAM-4) will take two bits at a time and will map the
signal amplitude to one of four possible levels, for example -3 volts, -1 volt, 1 volt, and
3 volts.
Demodulation is performed by detecting the amplitude level of the carrier at every
symbol period.
Pulse-amplitude modulation is widely used in baseband transmission of digital data, with
non-baseband applications having been largely superseded by pulse-code modulation
and, more recently, by pulse-position modulation.
In particular, all telephone modems faster than 300 bit/s use quadrature amplitude
modulation (QAM). (QAM uses a two-dimensional constellation).
It should be noted, however, that some versions of the widely popular
Ethernet communication standard are a good example of PAM usage. In
particular, the Fast Ethernet 100BASE-T2 medium, running at 100Mb/s,
utilizes 5 level PAM modulation (PAM-5) running at 25 megapulses/sec
over two wire pairs. A special technique is used to reduce inter-symbol
interference between the unshielded pairs. Later, the gigabit Ethernet
1000BASE-T medium raised the bar to use 4 pairs of wire running each at

125 megapulses/sec to achieve 1000Mb/s data rates, still utilizing PAM-5


for each pair.
The IEEE 802.3an standard defines the wire-level modulation for 10GBASE-T as a
Tomlinson-Harashima Precoded (THP) version of pulse-amplitude modulation with 16
discrete levels (PAM-16), encoded in a two-dimensional checkerboard pattern known as
DSQ128. Several proposals were considered for wire-level modulation, including PAM
with 12 discrete levels (PAM-12), 10 levels (PAM-10), or 8 levels (PAM-8), both with
and without Tomlinson-Harashima Precoding (THP).
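A minimal sketch of the PAM-4 idea from the example above (Python; the particular
bit-to-level assignment is an assumption for illustration, not mandated by any standard):

# Map each pair of bits to one of four amplitude levels (volts)
levels = {(0, 0): -3, (0, 1): -1, (1, 1): 1, (1, 0): 3}

bits = [1, 0, 0, 1, 1, 1, 0, 0]
symbols = [levels[(bits[i], bits[i + 1])] for i in range(0, len(bits), 2)]
print(symbols)   # -> [3, -1, 1, -3]

Demodulation is then the inverse lookup: decide which of the four levels each received
amplitude is closest to, and emit the corresponding two bits.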
15) Explain pulse position modulation?

Pulse-position modulation is a form of signal modulation in which M message bits are
encoded by transmitting a single pulse in one of 2^M possible time-shifts. This is
repeated every T seconds, such that the transmitted bit rate is M/T bits per second. It is
primarily useful for optical communications systems, where there tends to be little or no
multipath interference.
Synchronization
One of the key difficulties of implementing this technique is that the receiver must be
properly synchronized to align the local clock with the beginning of each symbol.
Therefore, it is often implemented differentially as differential pulse-position
modulation, whereby each pulse position is encoded relative to the previous one, such that
the receiver must only measure the difference in the arrival time of successive pulses. It is
possible to limit the propagation of errors to adjacent symbols, so that an error in
measuring the differential delay of one pulse will affect only two symbols, instead of
affecting all successive measurements.
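As a concrete illustration of the basic (non-differential) scheme, this Python sketch
encodes M = 2 bits per symbol as a single pulse in one of 2^M = 4 time slots:

M = 2
slots = 2 ** M   # number of possible pulse positions per symbol

def ppm_encode(bit_pairs):
    # Each symbol is a list of slots containing a single pulse in the chosen slot
    out = []
    for b1, b0 in bit_pairs:
        pos = (b1 << 1) | b0
        out.append([1 if s == pos else 0 for s in range(slots)])
    return out

print(ppm_encode([(0, 1), (1, 1)]))   # pulse in slot 1, then in slot 3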
Sensitivity to Multipath Interference

Aside from the issues regarding receiver synchronization, the key disadvantage of PPM is
that it is inherently sensitive to multipath interference that arises in channels with
frequency-selective fading, whereby the receiver's signal contains one or more echoes of
each transmitted pulse. Since the information is encoded in the time of arrival (either
differentially, or relative to a common clock), the presence of one or more echoes can
make it extremely difficult, if not impossible, to accurately determine the correct pulse
position corresponding to the transmitted pulse.
Non-coherent Detection
One of the principal advantages of Pulse Position Modulation is that it is an M-ary
modulation technique that can be implemented non-coherently, such that the receiver
does not need to use a Phase-locked loop (PLL) to track the phase of the carrier. This
makes it a suitable candidate for optical communications systems, where coherent phase
modulation and detection are difficult and extremely expensive. The only other common
M-ary non-coherent modulation technique is M-ary Frequency Shift Keying, which is the
frequency-domain dual to PPM.
PPM vs. M-FSK
PPM and M-FSK systems with the same bandwidth, average power, and transmission
rate of M/T bits per second have identical performance in an AWGN (Additive White
Gaussian Noise) channel. However, their performance differs greatly when comparing
frequency-selective and frequency-flat fading channels. Whereas frequency-selective
fading produces echoes that are highly disruptive for any of the M time-shifts used to
encode PPM data, it selectively disrupts only some of the M possible frequency-shifts
used to encode data for M-FSK. Conversely, frequency-flat fading is more disruptive for
M-FSK than PPM, as all M of the possible frequency-shifts are impaired by fading, while
the short duration of the PPM pulse means that only a few of the M time-shifts are
heavily impaired by fading.
Optical communications systems (even wireless ones) tend to have weak multipath
distortions, and PPM is a viable modulation scheme in many such applications.

Applications for RF Communications

PPM Implementation

Figure: How PPM is used to control servos in RC applications.

Narrowband RF (Radio Frequency) channels with low power and long wavelengths (i.e.,
low frequency) are affected primarily by flat fading, and PPM is better suited than
M-FSK to be used in these scenarios. One common application with these channel
characteristics, first used in the early 1960s, is the radio control of model aircraft, boats
and cars. PPM is employed in these systems, with the position of each pulse representing
the angular position of an analogue control on the transmitter, or possible states of a
binary switch. The number of pulses per frame gives the number of controllable channels
available. The advantage of using PPM for this type of application is that the electronics
required to decode the signal are extremely simple, which leads to small, light-weight
receiver/decoder units. (Model aircraft require parts that are as lightweight as possible).
Servos made for model radio control include some of the electronics required to convert
the pulse to the motor position - the receiver is merely required to demultiplex the
separate channels and feed the pulses to each servo.
More sophisticated R/C systems are now often based on pulse-code modulation, which is
more complex but offers greater flexibility and reliability.
Pulse position modulation is also used for communication to the ISO 15693 contactless
Smart card as well as the HF implementation of the EPC Class 1 protocol for RFID tags.

(UNIT-V)
(PART-A)

1) A radio receiver is an electronic circuit that receives its input from an antenna.
2) Electronic filters separate a wanted radio signal from all other signals.
3) In consumer electronics, the terms radio and radio receiver are often used specifically
for receivers.
4) Simple crystal radio receivers operate using the power received from radio waves.
5) Specialized-use receivers, such as telemetry receivers, allow the remote measurement
and reporting of information.


(PART-B)
6) Write short notes on the receiver?
write a short notes on receiver?

A radio receiver is an electronic circuit that receives its input from an


antenna, uses electronic filters to separate a wanted radio signal from all other
signals picked up by this antenna, amplifies it to a level suitable for further
processing, and finally converts through demodulation and decoding the
signal into a form usable for the consumer, such as sound, pictures, digital
data, measurement values, navigational positions.
7) What is the use of this receiver?
In consumer electronics, the terms radio and radio receiver are often used
specifically for receivers designed for the sound signals transmitted by radio
broadcasting services historically the first mass-market radio application.
8) What is a semiconductor?

Further developments in semiconductor technology led to the introduction of


the integrated circuit in the late 1950s.[5] This enabled radio receiver

technology to move forward even further. Integrated circuits enabled high


performance circuits to be built for less cost, and significant amounts of space
could be saved.
As a result of these developments new techniques could be introduced. One of
these was the frequency synthesizer that was used to generate the local
oscillator signal for the receiver. By using a synthesizer it was possible to
generate a very accurate and stable local oscillator signal. Also the ability of
synthesizers to be controlled by microprocessors meant that many new
facilities could be introduced apart from the significant performance
improvements offered by synthesizers.
9) Describe the receiver design?

The advantage to this method is that most of the radio's signal path has to be
sensitive to only a narrow range of frequencies. Only the front end (the part
before the frequency converter stage) needs to be sensitive to a wide
frequency range. For example, the front end might need to be sensitive to
1-30 MHz, while the rest of the radio might need to be sensitive only to
455 kHz, a typical IF. Only one or two tuned stages need to be adjusted to
track over the tuning range of the receiver; all the intermediate-frequency
stages operate at a fixed frequency which need not be adjusted.
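A worked example of this frequency plan (Python; high-side injection with a 455 kHz IF
is assumed, so the local oscillator sits above the wanted signal):

f_if = 455e3   # intermediate frequency (Hz)
for f_rf in (1e6, 10e6, 30e6):
    f_lo = f_rf + f_if          # local oscillator for high-side injection
    f_image = f_rf + 2 * f_if   # image frequency the front end must reject
    print(f"RF {f_rf / 1e6:5.2f} MHz -> LO {f_lo / 1e6:6.3f} MHz, "
          f"image {f_image / 1e6:6.3f} MHz")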
10) What is the advantage of a receiver?


A radio receiver is an electronic circuit that receives its input from an

antenna, uses electronic filters to separate a wanted radio signal from all other
signals picked up by this antenna, amplifies it to a level suitable for further
processing, and finally converts through demodulation and decoding the
signal into a form usable for the consumer, such as sound, pictures, digital
data, measurement values, navigational positions.
(PART-C)

11) Explain about the receiver?

Receiver (radio)
A radio receiver is an electronic circuit that receives its input from an antenna, uses
electronic filters to separate a wanted radio signal from all other signals picked up by this
antenna, amplifies it to a level suitable for further processing, and finally converts
through demodulation and decoding the signal into a form usable for the consumer, such
as sound, pictures, digital data, measurement values, navigational positions, etc.[1]

Figure: Old-fashioned radio receiver, a wireless Truetone model from about 1940.


In consumer electronics, the terms radio and radio receiver are often used specifically for
receivers designed for the sound signals transmitted by radio broadcasting services
historically the first mass-market radio application.
Types of radio receivers
Various types of radio receivers may include:

Consumer audio and high fidelity audio receivers and AV receivers, used by home stereo
listeners and audio and home theatre system enthusiasts.

Communications receivers, used as a component of a radio communication link,
characterized by high stability and reliability of performance.

Simple crystal radio receivers (also known as crystal sets), which operate using the power
received from radio waves.

Satellite television receivers, used to receive television programming from
communication satellites in geosynchronous orbit.

Specialized-use receivers such as telemetry receivers, which allow the remote
measurement and reporting of information.

Measuring receivers (also: measurement receivers), calibrated laboratory-grade devices
that are used to measure the signal strength of broadcasting stations and the
electromagnetic interference radiation emitted by electrical products, as well as to
calibrate RF attenuators and signal generators.

Scanners, specialized receivers that can automatically scan two or more discrete
frequencies, stopping when they find a signal on one of them and then continuing to scan
other frequencies when the initial transmission ceases. They are mainly used for
monitoring VHF and UHF radio systems.
Consumer audio receivers
In the context of home audio systems, the term "receiver" often refers to a combination of
a tuner, a preamplifier, and a power amplifier all on the same chassis. Audiophiles will
refer to such a device as an integrated receiver, while a single chassis that implements only one of the three component functions is called a discrete component. Some audio purists still prefer three discrete units (tuner, preamplifier, and power amplifier), but the integrated receiver has, for some years, been the mainstream choice for music listening.
The first integrated stereo receiver was made by the Harman Kardon company, and came onto the market in 1958. It had undistinguished performance, but it represented a breakthrough to the "all in one" concept of a receiver, and rapidly improving designs gradually made the receiver the mainstay of the marketplace. Many radio receivers also include a loudspeaker.
Today AV receivers are a common component in a high-fidelity or home-theatre system.
The receiver is generally the nerve centre of a sophisticated home-theatre system, providing selectable inputs for a number of different audio components (turntables, compact-disc players and recorders, and tape decks) and video components (video-cassette recorders, DVD players and recorders, video-game systems, and televisions). With the decline of vinyl discs, modern receivers tend to omit inputs for turntables, which have separate requirements of their own. All other common audio/visual components can use any of the identical line-level inputs on the receiver for playback, regardless of how they are marked (the "name" on each input is mostly for the convenience of the user). For instance, a second CD player can be plugged into an "Aux" input and will work the same as it would in the "CD" input jacks.
Some receivers can also provide signal processors to give a more realistic illusion of
listening in a concert hall. Digital audio S/PDIF and USB connections are also common
today. The home theater receiver, in the vocabulary of consumer electronics, comprises
both the 'radio receiver' and other functions, such as control, sound processing, and power
amplification. The standalone radio receiver is usually known in consumer electronics as
a tuner.
Some modern integrated receivers can send audio out to seven loudspeakers and an
additional channel for a subwoofer and often include connections for headphones.
Receivers vary greatly in price, and support stereophonic or surround sound. A high-quality receiver for dedicated audio-only listening (two-channel stereo) can be relatively inexpensive; excellent ones can be purchased for $300 US or less. Because modern receivers are purely electronic devices with no moving parts (unlike electromechanical devices such as turntables and cassette decks), they tend to offer many years of trouble-free service. In recent years, the home theater in a box has become common; it often integrates a surround-capable receiver with a DVD player. The user simply connects it to a television, perhaps other components, and a set of loudspeakers.
Portable radios
Portable radios include simple transistor radios that are typically monaural and receive the AM, FM, and/or short wave broadcast bands. FM, and often AM, radios are
sometimes included as a feature of portable DVD/CD, MP3 CD, and USB key players, as
well as cassette player/recorders.
AM/FM stereo car radios can be a separate dashboard-mounted component or a feature of in-car entertainment systems.
A boombox (or boom-box), also sometimes known as a ghettoblaster, a jambox, or (in parts of Europe) a "radio-cassette", is a name given to larger portable stereo systems capable of playing radio stations and recorded music, often at high volume.
Self-powered portable radios, such as clockwork radios, are used in developing nations or as part of an emergency-preparedness kit.[2]
Early development
While James Clerk Maxwell was the first person to predict that electromagnetic waves existed, in 1887 a German named Heinrich Hertz demonstrated these new waves by using spark gap equipment to transmit and receive radio or "Hertzian" waves, as they were first called. The experiments were not followed up by Hertz. The practical applications of wireless communication and remote control technology were implemented by Nikola Tesla.
The world's first radio receiver (thunderstorm register) was designed by Alexander
Stepanovich Popov, and it was first seen at the All-Russia exhibition in 1896. He was the
first to demonstrate the practical application of electromagnetic (radio) waves,[3] although
he did not care to apply for a patent for his invention.

A device called a coherer became the basis for receiving radio signals. The first person to
use the device to detect radio waves was a Frenchman named Edouard Branly, and Oliver Lodge popularised it when he gave a lecture in 1894 in honour of Hertz. Lodge also made
improvements to the coherer. Guglielmo Marconi believed that these new waves could be
used to communicate over great distances and made significant improvements to both
radio receiving and transmitting apparatus. In 1895 Marconi demonstrated the first viable
radio system, leading to transatlantic radio communication in December 1901.
John Ambrose Fleming's development of an early thermionic valve to help detect radio waves was based upon a discovery of Thomas Edison's (the "Edison effect", observed in an essentially modified early light bulb). Fleming called it his "oscillation valve" because it acted in the same way as a water valve, allowing flow in only one direction. While Fleming's valve was a great stride forward, it would take some years before thermionic, or vacuum tube, technology was fully adopted.
Around this time work on other types of detectors started to be undertaken and it resulted
in what was later known as the cat's whisker. It consisted of a crystal of a material such as
galena with a small springy piece of wire brought up against it. The detector was constructed so that the wire contact could be moved to different points on the crystal to find the spot that gave the best rectification and detection of the signal. They were
never very reliable as the "whisker" needed to be moved periodically to enable it to detect
the signal properly.[4]
Valves (Tubes)
An American named Lee de Forest, a competitor to Marconi, set about developing receiver technology that did not infringe any patents to which Marconi had access. He
took out a number of patents in the period between 1905 and 1907 covering a variety of
developments that culminated in the form of the triode valve in which there was a third
electrode called a grid. He called this an audion tube. One of the first areas in which
valves were used was in the manufacture of telephone repeaters, and although the performance was poor, they gave significant improvement in long distance telephone receiving circuits.
With the discovery that triode valves could amplify signals it was soon noticed that they
would also oscillate, a fact that was exploited in generating signals. Once the triode was
established as an amplifier it made a tremendous difference to radio receiver performance
as it allowed the incoming signals to be amplified. One way that proved very successful
was introduced in 1913 and involved the use of positive feedback in the form of a
regenerative detector. This gave significant improvements in the levels of gain that could
be achieved, greatly increasing selectivity, enabling this type of receiver to outperform all
other types of the era. With the outbreak of the First World War, there was a great impetus
to develop radio receiving technology further. An American named Irving Langmuir
helped introduce a new generation of totally air-evacuated "hard" valves. H. J. Round
undertook some work on this and in 1916 he produced a number of valves with the grid
connection taken out of the top of the envelope away from the anode connection.[4]
Autodyne and superheterodyne
Although by the 1920s the tuned radio frequency receiver (TRF) represented a major improvement in performance over what had been available before, it still fell short of the needs of some of the new applications. To enable receiver technology to meet the needs placed
upon it a number of new ideas started to surface. One of these was a new form of direct
conversion receiver. Here an internal or local oscillator was used to beat with the
incoming signal to produce an audible signal that could be amplified by an audio
amplifier.
H. J. Round developed a receiver he called an autodyne, in which the same valve was used as both a mixer and an oscillator. While the set used fewer valves, it was difficult to optimise the circuit for both the mixer and oscillator functions.
The next leap forward in receiver technology was a new type of receiver known as the
superheterodyne, or supersonic heterodyne receiver. A Frenchman named Lucien Levy
was investigating ways in which receiver selectivity could be improved, and in doing this he devised a system whereby the signals were converted down to a lower frequency
where the filter bandwidths could be made narrower. A further advantage was that the
gain of valves was considerably greater at the lower frequencies used after the frequency
conversion, and there were fewer problems with the circuits bursting into oscillation.
The idea for developing a receiver with a fixed intermediate frequency amplifier and
filter is credited to Edwin Armstrong. Working for the American Expeditionary Force in
Europe in 1918, Armstrong thought that if the incoming signals were mixed with a
variable frequency oscillator, a low-frequency fixed-tuned amplifier could be used.
Armstrong's original receiver consisted of a total of eight valves. Several tuned circuits
could be cascaded to improve selectivity, and being on a fixed frequency they did not all
need to be changed in line with one another. The filters could be preset and left correctly
tuned. Armstrong was not the only person working on the idea of a superhet. Alexander
Meissner in Germany took out a patent for the idea six months before Armstrong, but as
Meissner did not prove the idea in practice and did not build a superhet radio, the idea is
credited to Armstrong.
The need for the increased performance of the superhet receiver was first felt in America,
and by the late 1920s most sets were superhets. In Europe, however, the number of broadcast stations did not start to rise as rapidly until later. Even so, by the mid-1930s virtually all receiving sets in Europe were also using the superhet principle. In 1926 the tetrode valve was introduced, enabling further improvements in performance.[4]
War and postwar developments

In 1939 the outbreak of war gave a new impetus to receiver development. During this
time a number of classic communications receivers were designed. Some like the
National HRO are still sought by enthusiasts today and although they are relatively large
by today's standards, they can still give a good account of themselves under current
crowded band conditions. In the late 1940s the transistor was invented. Initially the devices were not widely used because of their expense, and because valves were being made smaller and performed better. However, by the early 1960s portable transistor broadcast receivers (transistor radios) were hitting the marketplace. These radios were
ideal for broadcast reception on the long and medium wave bands. They were much
smaller than their valve equivalents, they were portable and could be powered from
batteries. Although some valve portable receivers were available, batteries for these were
expensive and did not last for long. The power requirements for transistor radios were
very much less, resulting in batteries lasting for much longer and being considerably
cheaper.[4]
Semiconductors
Further developments in semiconductor technology led to the introduction of the
integrated circuit in the late 1950s.[5] This enabled radio receiver technology to move
forward even further. Integrated circuits enabled high-performance circuits to be built for
less cost, and significant amounts of space could be saved.
As a result of these developments new techniques could be introduced. One of these was the frequency synthesizer, used to generate the local oscillator signal for the receiver. By using a synthesizer it was possible to generate a very accurate and stable local oscillator signal. The ability of synthesizers to be controlled by microprocessors also meant that many new facilities could be introduced, in addition to the significant performance improvements the synthesizers themselves offered.[4]
Digital technologies
Main article: Digital radio

Receiver technology is still moving forward. Digital signal processing, in which many of the functions performed by an analog intermediate-frequency stage are performed digitally by converting the signal to a digital stream that is manipulated mathematically, is now widespread. The new digital audio broadcasting standard being introduced can only be used when the receiver can manipulate the signal digitally.
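As a minimal illustration of this digital approach (a rough sketch, not any particular receiver's architecture), the sampled signal can be multiplied by a numerically generated complex oscillator and low-pass filtered, the software counterpart of the mixer and IF filter:

import numpy as np

fs = 192_000                         # sample rate, Hz
t = np.arange(0, 0.05, 1 / fs)
f_carrier = 45_000
signal = np.cos(2 * np.pi * f_carrier * t)   # stand-in for the digitized input

lo = np.exp(-2j * np.pi * f_carrier * t)     # digital local oscillator
baseband = signal * lo                       # complex mix: carrier moves to 0 Hz

kernel = np.ones(64) / 64                    # crude moving-average low-pass
baseband = np.convolve(baseband, kernel, mode="same")
print(round(abs(baseband[len(baseband) // 2]), 2))   # ~0.5: carrier now at DC

A practical software-defined receiver would add decimation and properly designed filters; the point here is only that the mixing arithmetic itself becomes ordinary multiplication on a stream of samples.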
While today's radios are miracles of modern technology, filled with low power high
performance integrated circuits crammed into the smallest spaces, the basic principle of
the radio is usually the superhet, the same idea which was developed by Edwin
Armstrong back in 1918.[4]

12) Briefly explain the superheterodyne receiver?

SUPERHETERODYNE RECEIVER
A five-tube superhet receiver made in Japan in about 1955.

In electronics, the superheterodyne receiver (also known as the supersonic heterodyne receiver, or by the abbreviated form superhet) is a receiver which uses the principle of frequency mixing, or heterodyning, to convert the received signal to a lower (sometimes higher) "intermediate" frequency, which can be more conveniently processed than the original carrier frequency. Virtually all modern radio and TV receivers use the superheterodyne principle.

A two-section variable capacitor, as used in a superhet receiver.


The word heterodyne is derived from the Greek roots hetero-, "different", and -dyne, "power". The original heterodyne technique was pioneered by the Canadian inventor-engineer Reginald Fessenden but was not pursued far because local oscillators were not very stable at the time.[1]
Later, the superheterodyne (superhet) principle was conceived in 1918 by Edwin
Armstrong during World War I, as a means of overcoming the deficiencies of early
vacuum triodes used as high-frequency amplifiers in radio direction finding (RDF)
equipment. Unlike simple radio communication, which only needs to make transmitted
signals audible, RDF requires actual measurements of received signal strength, which
necessitates linear amplification of the actual carrier wave.
In a triode RF amplifier, if both the plate and grid are connected to resonant circuits tuned
to the same frequency, stray capacitive coupling between the grid and the plate will cause
the amplifier to go into oscillation if the stage gain is much more than unity. In early designs, dozens (in some cases over 100) of low-gain triode stages had to be connected in
cascade to make workable equipment, which drew enormous amounts of power in
operation and required a team of maintenance engineers. The strategic value was so high,
however, that the British Admiralty felt the high cost was justified.
Armstrong had realized that if RDF could be operated at a higher frequency, it would
allow detection of enemy shipping much more effectively, but at the time no practical "short wave" amplifier (defined then as any frequency above 500 kHz) existed, due to the limitations of the triodes of the day.
A "heterodyne" refers a beat or "difference" frequency produced when two or more radio
frequency carrier waves are fed to a detector. The term was originally coined by
Canadian Engineer Reginald Fessenden describing his proposed method of making
Morse Code transmissions from an Alexanderson alternator type transmitter audible.
With the Spark gap transmitters then in wide use, the Morse Code signal consisted of
short bursts of a heavily modulated carrier wave which could be clearly heard as a series
of short chirps or buzzes in the receiver's headphones.
The signal from an Alexanderson Alternator on the other hand, did not have any such
inherent modulation and Morse Code from one of those would only be heard as a series
of clicks or thumps. Fessenden's idea was to run two Alexanderson alternators, one producing a carrier frequency 3 kHz higher than the other. In the receiver's detector the two carriers would beat together to produce a 3 kHz tone, and so in the headphones the Morse signals would be heard as a series of 3 kHz beeps. For this he coined the term "heterodyne", meaning "generated by a difference" (in frequency).
Later, when vacuum triodes became available, the same result could be achieved more
conveniently by incorporating a "local oscillator" in the receiver, which became known as
a "Beat Frequency Oscillator" or BFO. As the BFO frequency was varied, the pitch of the
heterodyne could be heard to vary with it. If the frequencies were too far apart the
heterodyne became ultrasonic and hence no longer audible.

It had been noticed some time before that if a regenerative receiver was allowed to go
into oscillation, other receivers nearby would suddenly start picking up stations on frequencies different from those on which the stations were actually transmitting. Armstrong
(and others) eventually deduced that this was caused by a "supersonic heterodyne"
between the station's carrier frequency and the oscillator frequency. Thus, for example, if
a station was transmitting on 300 kHz and the oscillating receiver was set to 400 kHz, the
station would be heard not only at the original 300 kHz, but also at 100 kHz and 700 kHz.
Armstrong realized that this was a potential solution to the "short wave" amplification problem, since the beat frequency still retained its original modulation but was on a lower carrier frequency. To monitor a frequency of 1500 kHz, for example, he could set an oscillator to, say, 1560 kHz, which would produce a heterodyne of 60 kHz, a frequency that could then be much more conveniently amplified by the triodes of the day. He termed this the "intermediate frequency", often abbreviated to "IF".
In December 1919, Major E. H. Armstrong gave publicity to an indirect method of obtaining short-wave amplification, called the Super-Heterodyne. The idea is to reduce the incoming frequency, which may be, say, 1,500,000 cycles (200 meters), to some suitable super-audible frequency which can be amplified efficiently, then passing this current through a radio frequency amplifier and finally rectifying and carrying on to one or two stages of audio frequency amplification. (page 11 of December 1922 QST magazine)

Early superheterodyne receivers actually used IFs as low as 20 kHz, often based around the self-resonance of iron-cored transformers. This made them extremely susceptible to image frequency interference, but at the time the main objective was sensitivity rather than selectivity. Using this technique, a small number of triodes could be made to do work that formerly required dozens or even hundreds.
1920s commercial IF transformers actually look very similar to 1920s audio interstage coupling transformers, and were wired up in an almost identical manner. By the mid-1930s superhets were using much higher intermediate frequencies (typically around 440-470 kHz), using tuned coils very similar in construction to the aerial and oscillator coils. However, the term "Intermediate Frequency Transformer", or "IFT", still persists to this day.
Modern receivers typically use a mixture of ceramic filters and/or SAW resonators as well as traditional tuned-inductor IF transformers.
Armstrong was able to put his ideas into practice quite quickly, and the technique was
rapidly adopted by the military. However, it was less popular when commercial radio
broadcasting began in the 1920s. There were many factors involved, but the main issues were the need for an extra tube for the oscillator, the generally higher cost of the receiver, and the level of technical skill required to operate it. For early domestic radios, tuned RF ("TRF") receivers, such as the Neutrodyne, were much more popular because they were cheaper, easier for a non-technical owner to use, and less costly to operate. Armstrong
eventually sold his superheterodyne patent to Westinghouse, who then sold it to RCA, the
latter monopolizing the market for superheterodyne receivers until 1930.[2]
By the 1930s, improvements in vacuum tube technology rapidly eroded the TRF
receiver's cost advantages, and the explosion in the number of broadcasting stations
created a demand for cheaper, higher-performance receivers.
First, the development of practical indirectly-heated-cathode tubes allowed the mixer and oscillator functions to be combined in a single pentode tube, in the so-called autodyne mixer. This was rapidly followed by the introduction of low-cost multi-element tubes specifically designed for superheterodyne operation. These allowed the use of much higher intermediate frequencies (typically around 440-470 kHz), which eliminated the problem of image frequency interference. By the mid-1930s the TRF technique was obsolete for commercial receiver production.
The superheterodyne principle was eventually taken up for virtually all commercial radio
and TV designs.

13) Explain the receiver faults?
The basic elements of a single-conversion superhet receiver are a local oscillator and a mixer followed by a fixed-tuned filter and IF amplifier; these essential elements are common to all superhet circuits. Cost-optimized designs may use one active device for both local oscillator and mixer; this is sometimes called a "converter" stage. One such example is the pentagrid converter.

The advantage to this method is that most of the radio's signal path has to be sensitive to
only a narrow range of frequencies. Only the front end (the part before the frequency
converter stage) needs to be sensitive to a wide frequency range. For example, the front
end might need to be sensitive to 130 MHz, while the rest of the radio might need to be
sensitive only to 455 kHz, a typical IF. Only one or two tuned stages need to be adjusted
to track over the tuning range of the receiver; all the intermediate-frequency stages
operate at a fixed frequency which need not be adjusted.
To overcome obstacles such as image response, multiple IF stages are used, and in some cases multiple stages with two IFs of different values. For example, the front end might be sensitive to 1-30 MHz, the first half of the radio to 5 MHz, and the last half to 50 kHz. Two frequency converters would be used, and the radio would be a "double conversion superheterodyne"; a common example is a television receiver, where the audio information is obtained from a second stage of intermediate frequency conversion.
Occasionally special-purpose receivers will use an intermediate frequency much higher
than the signal, in order to obtain very high image rejection.
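A minimal sketch of such a double-conversion frequency plan (values taken from the example above; high-side injection assumed at both mixers, and the helper name is ours):

F_IF1 = 5e6    # first intermediate frequency
F_IF2 = 50e3   # second intermediate frequency

def frequency_plan_hz(f_rf_hz):
    lo1 = f_rf_hz + F_IF1            # first LO tracks the tuning
    lo2 = F_IF1 + F_IF2              # second LO is fixed
    return lo1, lo2

print(frequency_plan_hz(14e6))       # (19000000.0, 5050000.0) for a 14 MHz signal

Mixing 14 MHz with 19 MHz gives the 5 MHz first IF, and mixing 5 MHz with 5.05 MHz gives the 50 kHz second IF.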
Superheterodyne receivers have superior characteristics to simpler receiver types in
frequency stability and selectivity. They offer much better stability than tuned radio frequency receivers (TRF) because a tuneable oscillator is more easily stabilized than a
tuneable amplifier, especially with modern frequency synthesizer technology. IF filters
can give much narrower passbands at the same Q factor than an equivalent RF filter. A
fixed IF also allows the use of a crystal filter when exceptionally high selectivity is
necessary. Regenerative and super-regenerative receivers offer better sensitivity than a
TRF receiver, but suffer from stability and selectivity problems.
In the case of modern television receivers, no other technique was able to produce the
precise bandpass characteristic needed for vestigial sideband reception, first used with the
original NTSC system introduced in 1941. This originally involved a complex collection
of tuneable inductors which needed careful adjustment, but since the early 1980s these
have been replaced with precision electromechanical surface acoustic wave (SAW)
filters. Fabricated by precision laser milling techniques, SAW filters are much cheaper to
produce, can be made to extremely close tolerances, and are extremely stable in
operation.
Microprocessor technology allows the superheterodyne receiver design to be replaced by a software-defined radio architecture, in which the IF processing after the initial IF filter is implemented in software. This technique is already in use in certain designs, such as very low cost FM radios incorporated into mobile phones, where the necessary microprocessor is already present in the system.
Radio transmitters may also use a mixer stage to produce an output frequency, working
more or less as the reverse of a superheterodyne receiver.
Drawbacks
Drawbacks to the superheterodyne receiver include interference from signal frequencies close to the intermediate frequency. To prevent this, IF frequencies are generally controlled by regulatory authorities, and this is the reason most receivers use common IFs. Examples are 455 kHz for AM radio, 10.7 MHz for FM, and 38.9 MHz (Europe) or 45 MHz (US) for television.

(For AM radio, a variety of IFs have been used, but most of the Western world settled on 455 kHz, in large part because of the almost universal transition to Japanese-made ceramic resonators which used the US standard of 455 kHz. In more recent digitally tuned receivers, this was changed to 450 kHz, as this figure simplifies the design of the synthesizer circuitry.)
Additionally, in urban environments with many strong signals, the signals from multiple
transmitters may combine in the mixer stage to interfere with the desired signal.
14) Explain about receiver applications?
High-side and low-side injection
The amount that a signal is down-shifted by the local oscillator depends on whether its frequency f is higher or lower than f_LO, because its new frequency is |f − f_LO| in either case. There are therefore potentially two input signals that could both shift to the same f_IF: one at f = f_LO + f_IF and another at f = f_LO − f_IF. One or the other of those signals, called the image frequency, has to be filtered out prior to the mixer to avoid aliasing. When the upper one is filtered out, it is called high-side injection, because f_LO is above the frequency of the received signal; the other case is called low-side injection. High-side injection also reverses the order of a signal's frequency components. Whether or not that actually changes the signal depends on whether it has spectral symmetry. The reversal can be undone later in the receiver, if necessary.
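A minimal sketch of the two candidate inputs (illustrative values; the function name is ours): for a given f_LO and f_IF, both f_LO + f_IF and f_LO − f_IF mix down to the same IF, so the unwanted one must be removed ahead of the mixer:

def responding_inputs_hz(f_lo_hz, f_if_hz):
    """Both input frequencies for which |f - f_lo| equals f_if."""
    return f_lo_hz + f_if_hz, f_lo_hz - f_if_hz

f_lo_hz, f_if_hz = 1055e3, 455e3     # e.g. an AM set tuned to 600 kHz, high-side LO
print(responding_inputs_hz(f_lo_hz, f_if_hz))  # (1510000.0, 600000.0)

With high-side injection the lower frequency (600 kHz) is the wanted station and the upper one (1510 kHz) is the image.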
Image frequency (f_image)
One major disadvantage to the superheterodyne receiver is the problem of image
frequency. In heterodyne receivers, an image frequency is an undesired input frequency
equal to the station frequency plus twice the intermediate frequency. The image
frequency results in two stations being received at the same time, thus producing
interference. Image frequencies can be eliminated by sufficient attenuation on the
incoming signal by the RF amplifier filter of the superheterodyne receiver.
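As a worked example (values chosen for illustration): an AM receiver tuned to 1000 kHz with a 455 kHz IF and high-side injection places its local oscillator at 1455 kHz, so the image falls at 1000 + 2 × 455 = 1910 kHz; any signal on 1910 kHz that reaches the mixer is likewise converted to 455 kHz and heard on top of the wanted station.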

Early autodyne receivers typically used IFs of only 150 kHz or so, as it was difficult to maintain reliable oscillation if higher frequencies were used. As a consequence, most autodyne receivers needed quite elaborate antenna tuning networks, often involving double-tuned coils, to avoid image interference. Later superhets used tubes specially designed for oscillator/mixer use, which were able to work reliably with much higher IFs, reducing the problem of image interference and so allowing simpler and cheaper aerial tuning circuitry.
Local oscillator radiation
It is difficult to keep stray radiation from the local oscillator below the level that a nearby
receiver can detect. This means that there can be mutual interference in the operation of
two or more superheterodyne receivers in close proximity. In espionage, oscillator
radiation gives a means to detect a covert receiver and its operating frequency.
Further information: Electromagnetic compatibility
Local oscillator sideband noise
Local oscillators typically generate a single frequency signal that has negligible
amplitude modulation but some random phase modulation. Either of these impurities
spreads some of the signal's energy into sideband frequencies. That causes a
corresponding widening of the receiver's frequency response, which would defeat the aim of making a very narrow bandwidth receiver, such as one intended to receive low-rate digital signals.
Care needs to be taken to minimise oscillator phase noise, usually by ensuring that the
oscillator never enters a non-linear mode.

15) Explain about receiver frequency?


FREQUENCY: For a given crystal cut, lower-frequency crystals exhibit superior stability and, for a given frequency, the higher-overtone crystals will usually provide the best stability. A simple rule of thumb is "the more quartz the better", down to about 5 MHz, below which frequency dividers are usually the best choice. High-frequency oscillators may include phase-locked loops or frequency multipliers to take advantage of a low-frequency crystal's stability. Multiplied oscillators are preferred above 120 MHz when stability is a key issue.
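The following short Python sketch simply encodes the rules of thumb above (the thresholds come from the text; the function name is ours and hypothetical):

def oscillator_approach(f_out_mhz):
    if f_out_mhz < 5:
        return "divide down from a higher-frequency crystal"
    if f_out_mhz > 120:
        return "multiply up from a low-frequency crystal (PLL or multiplier)"
    return "direct crystal oscillator (fundamental or overtone)"

for f in (1, 10, 150):
    print(f, "MHz:", oscillator_approach(f))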
AGING: New, high quality ovenized quartz crystals typically exhibit small, positive
frequency drift with time unrelated to external influences. A significant drop in this
"aging" rate occurs after the first few weeks of operation at the operating temperature.
Ultimate aging rates below 0.1 PPB per day are achieved by the highest quality crystals
and 1 PPB per day rates are commonplace. Significant negative aging (dropping frequency) indicates a bad crystal, probably a leaking package.
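To put these aging rates in absolute terms, here is a minimal sketch (assuming a roughly constant rate after the initial settling period; the function is illustrative):

def aging_offset_hz(f0_hz, rate_ppb_per_day, days):
    """Frequency offset accumulated at a constant aging rate."""
    return f0_hz * rate_ppb_per_day * 1e-9 * days

# A 10 MHz ovenized oscillator aging at 1 PPB/day drifts about 3.65 Hz in a year:
print(aging_offset_hz(10e6, 1.0, 365))   # ~3.65 Hz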

A typical aging curve for a new ovenized oscillator.


TEMPERATURE: The primary effect of temperature variations is to change the
oscillator's frequency. Oven oscillators offer the best temperature stability and largely
avoid many of the problems associated with activity dips. Activity dips are drops in
crystal Q which can appear in narrow temperature windows causing sudden frequency
shifts and amplitude variations. Temperature stability below 0.1 PPB can be achieved but
the aging rate often dominates the frequency error budget after only a few days. The
specification should state whether the stability specification is peak-to-peak over the
entire range or whether it is relative to room temperature. Variation from room
temperature is a popular method of specification since the oscillator is usually tuned at
room temperature. Non-oven XOs and TCXOs may drift slowly to a new frequency after
the ambient temperature changes since the internal thermal time constants can be fairly
long.

RETRACE: When power is removed from an oscillator, then re-applied several hours
later, the frequency will stabilize at a slightly different value. This "retrace" error is
usually specified for a twenty-four hour off-time followed by a warm-up time sufficient
to allow complete thermal equilibrium. Retrace errors often diminish after warming as
though the crystal walks back down its aging curve when cold and then exponentially
approaches the previous drift curve when activated. Oscillators stored at extremely cold
temperatures for extended periods of time may exhibit a frequency vs. time curve much
like the initial "green" aging curve of a new crystal. In addition to the crystal related
effects described above, mechanical shifts can also occur due to the thermal stresses from
heating and cooling the oven structure. A common retrace error source is the mechanical
device used to adjust the oscillator's frequency. Precision, multi-turn variable capacitors
exhibit good retrace but a good practice is to turn the screw back slightly after setting to
relieve any stress. Most Wenzel oscillators use special precision potentiometers which
exhibit an unusually low amount of retrace and hysteresis.
