Unit I
(PART-A)
1) … materials.
2) …
3) Sound waves can propagate in four principal modes that are based on the …
4) … wave propagation.
5) … propagation.
(PART-B)
6) Describe EM waves?
Wave Propagation
Ultrasonic testing is based on time-varying deformations or vibrations in materials, which
is generally referred to as acoustics. All material substances are composed of atoms,
which may be forced into vibrational motion about their equilibrium positions. Many
different patterns of vibrational motion exist at the atomic level; however, most are
irrelevant to acoustics and ultrasonic testing. Acoustics is focused on particles that
contain many atoms that move in unison to produce a mechanical wave. When a material
is not stressed in tension or compression beyond its elastic limit, its individual particles
perform elastic oscillations. When the particles of a medium are displaced from their
equilibrium positions, internal (electrostatic) restoring forces arise. It is these elastic
restoring forces between particles, combined with the inertia of the particles, that lead to
the oscillatory motions of the medium.
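The restoring-force-plus-inertia picture above can be sketched numerically. The following mass-spring simulation is purely illustrative (all parameter values are made up for the example, not taken from the text): a particle displaced from equilibrium under a linear elastic force F = -k*x oscillates about its rest position.

```python
import math

def simulate(k=4.0, m=1.0, x0=1.0, dt=0.001, steps=1000):
    """Semi-implicit Euler integration of m*x'' = -k*x."""
    x, v = x0, 0.0
    for _ in range(steps):
        v += (-k / m) * x * dt  # acceleration from the elastic restoring force
        x += v * dt             # inertia carries the particle past equilibrium
    return x

# After one full period T = 2*pi*sqrt(m/k), the particle is back near x0 = 1.
T = 2 * math.pi * math.sqrt(1.0 / 4.0)
x_end = simulate(steps=int(T / 0.001))
```

A whole lattice of such coupled oscillators, each driving its neighbours, is the microscopic picture behind the mechanical waves used in ultrasonic testing.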
upward from the ground, travel again to the ionosphere, and again be refracted to Earth.
This process, multihop propagation, can be repeated several times under ideal conditions,
leading to very long distance communications. Propagation between any two points on
the Earth's surface is usually by the shortest direct route, which is a great-circle path
between the two points. A great circle is an imaginary line drawn around the Earth,
formed by a plane passing through the center of the Earth. The diameter of a great circle
is equal to the diameter of the Earth (12,755 km or 7,926 mi at the equator). The
circumference of the Earth -- and the length of any global great-circle path -- is about
40,000 km or 24,900 mi. Due to ionospheric absorption and ground-reflection losses,
multiple-hop propagation usually yields lower signal levels and more distorted
modulation than single-hop signals. There are exceptions, however, and under ideal
conditions, communications using long path signals may be possible. The long path is
the other (long) route around the great-circle path. The same signal can be propagated
over a given path via several different numbers of hops. The success of multihop
propagation usually depends on the type of transmitter modulation and type of signal. For
interrupted continuous wave (ICW) (also known as CW or Morse code) there is no real
problem. On single-sideband (SSB), the signal is somewhat distorted due primarily to
time spreading. For radio-teletypewriter (RTTY) or data service using frequency-shift
keying (FSK), the distortion may be sufficient to degrade the signal to the point that it
cannot be used. Every different point-to-point circuit will have its own mode structure as
a function of time of day. This means that every HF propagation problem has a totally
unique solution.
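Since the short path and long path described above lie on the same great circle, their lengths must sum to the Earth's circumference. A minimal sketch using the haversine formula (the two endpoints are arbitrary example cities, not from the text):

```python
import math

EARTH_RADIUS_KM = 6371.0  # mean Earth radius

def great_circle_km(lat1, lon1, lat2, lon2):
    """Short-path great-circle distance via the haversine formula."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * EARTH_RADIUS_KM * math.asin(math.sqrt(a))

short_path = great_circle_km(40.7, -74.0, 51.5, -0.1)  # e.g. New York to London
long_path = 2 * math.pi * EARTH_RADIUS_KM - short_path  # the other way around
```

The long-path figure is what a signal propagated the "wrong" way around the great circle must cover, which is why it needs more hops and usually arrives weaker.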
9) Write a short note on surface wave propagation?
In communication systems where antennas are used to transfer information, the
environment between (and around) the transmitter and receiver has a major influence
on the quality of the transferred signal. Buildings are the main source of attenuation,
but vegetation elements such as trees and large bushes can also have some reducing
effect on the propagated radio signal.
In the case of attenuation by trees and bushes, the incident electromagnetic field
mainly interacts with the leaves and the branches. The trunk does of course also
have some influence on the attenuation, but since the volume occupied by the trunk is
much smaller than the total volume of a tree, these effects can be considered
negligible. In the case of wave propagation between antennas that are located at
height, i.e. on rooftops, it will in principle only be the upper part of the tree crown
that affects the attenuation.
Since one of the fundamental assumptions in this thesis is communication between
fixed antennas at height, the attenuation effects from the trunks will thus be
neglected in the vegetation models.
10) Describe sky wave propagation?
The attenuation due to vegetation is also very sensitive to the wavelength. Since the
interaction between the tree and the electromagnetic field is mainly due to leaves and
branches, the size and shape of these are important. For low frequencies, when the
wavelength is much larger than the scattering body, leaves and branches interact
only weakly with the electromagnetic field, which means that surface irregularities
have no or minor influence on the attenuation. The incident field has approximately
the same magnitude over the whole body, so the body experiences the incident field
as uniform. Since the vegetation element is exposed to an electric field, an internal
electric field is induced. This gives rise to secondary radiation, and since the
wavelength is much larger than the scattering body, the emitted radiation is spread
out and forms a radiation pattern close to that of a dipole antenna. When the
wavelength is decreased, the losses increase due to a larger interaction between the
incident field and the vegetation elements. This proceeds until the wavelength
approaches the same size as the scattering body and thus enters the resonance
region. Here the absorption and scattering values fluctuate strongly, and the
attenuation becomes irregular and very frequency dependent. The size and shape of
the body are the main reason why this happens. The incident electric field induces an
internal electric field that takes different values at different parts of the scattering
body (these values are of course time dependent), since the wavelength is no longer
much larger than the size of the body. These different parts act as scatterers and
thus emit secondary radiation. The radiation from the different emitters interferes,
so specific directions predominate and radiation lobes are formed. When the
frequency is increased further, the effects of the resonance gradually decay, leading
to more predictable behavior. The attenuation of the leaves and branches increases
with increasing frequency. When the wavelength is much less than the scattering
body, no resonance effects occur and the attenuation is purely exponential. The
number of scatterers in the scattering body will of course increase, which increases
the number of radiation lobes. For very high frequencies the maximum lobes are
narrow and thus form radiation beams. This means that the intensity in the lobes
whose direction corresponds to the beam directions is much higher, differing by
many orders of magnitude from the other lobes. The fundamental principles behind
the interaction between the incident field and the scattering elements are very
complicated and will therefore not be discussed here. It should be mentioned,
though, that some factors contributing to the losses are that the incident field
changes the permanent dipole moment in the liquid and induces currents in the
medium. The induced currents can be created by the charges in the saline water
that the organic components contain.
(PART-C)
11) Explain space wave propagation?
We have so far discussed, in general, the interaction between the incident
electromagnetic field and the vegetation elements at different frequencies. From the
discussion we find that three interaction regimes exist, for which approximations
can be made. In the case of low frequencies we are dealing with Rayleigh scattering
(long-wave approximations), and in the case of high frequencies, physical optics or
geometric optics (short-wave approximations) are considered. In the resonance region
there is no simple way to make any approximations, so the electromagnetic
problems are difficult to solve. If the electric properties of the scattering body can
be considered weak, Born or Rytov approximations can be used to simplify the
calculations. In this case the internal field inside the scattering body is approximated
by the incident field, which makes it possible to treat cases where resonance occurs.
In the common microwave propagation models used today, the wavelength is often
assumed to be either small or large in comparison to the scatterers; thus Rayleigh
scattering or physical optics is considered. But when the wavelength of the
transmitted field approaches the size of the leaves and branches, resonance effects
occur, and these models generate incorrect results.
The purpose of this work is to study the vegetation attenuation and scattering at 3.1
GHz and 5.8 GHz. Since the wavelengths of the transmitted fields are about the same
size as the leaves and branches (λ = 9.7 cm and λ = 5.2 cm), resonance effects occur.
Since the common models cannot be used, the wave propagation through the canopy
must be analyzed in detail, which leads to an improved model for the attenuation. The
attenuation model is based on the total cross section of a leaf and a branch. A
computer program, based on the T-matrix theory, performs the computations of the
total cross section. The results from the simulations of the improved attenuation
model will finally be compared with measurements that have been made on a large
test beech.
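As a quick sanity check of the wavelengths quoted above, λ = c/f reproduces the 9.7 cm and 5.2 cm figures for 3.1 GHz and 5.8 GHz:

```python
c = 299_792_458.0  # speed of light in vacuum, m/s

def wavelength_cm(f_ghz):
    """Free-space wavelength lambda = c / f, returned in centimetres."""
    return c / (f_ghz * 1e9) * 100

lam_31 = wavelength_cm(3.1)  # ~9.7 cm
lam_58 = wavelength_cm(5.8)  # ~5.2 cm
```

Both values are indeed of the same order as typical leaf and branch dimensions, which is why the resonance region cannot be avoided at these frequencies.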
Wave Propagation through Vegetation at 3.1 GHz and 5.8 GHz
2 Basic relationships
This section gives a brief introduction to the theory of microwave propagation.
3.1 Leaf model
Effective dielectric properties are modeled by dielectric mixing theory. In the case of
vegetation elements, the components are liquid water with a high permittivity, organic
material with moderate to low permittivity, and air with unit permittivity. For such
highly contrasting permittivities and large volume fractions, physical mixing theory
has so far failed. In an attempt to overcome this problem, Ulaby and El-Rayes [6]
assumed linear, i.e. empirical, relationships between the permittivity and the volume
fractions of the different components. Dielectric measurements by Ulaby and
El-Rayes indicate that the dielectric properties of vegetation can be modelled by
representing vegetation as a mixture of saline water, bound water and dry vegetation.
They derived a semi-empirical formula [6] from measurements at frequencies
between 1 and 20 GHz on corn leaves with relatively high dry matter contents.
Extrapolation of the formula to higher frequencies and lower dry matter contents
leads to incorrect values, as shown by Mätzler and Sume [2]. From the data used in
[6], and their own data at frequencies up to 94 GHz, they developed an improved
semi-empirical formula to calculate the dielectric constant of leaves; high and low
dry matter contents were included. Mätzler combined the data of Ulaby and El-Rayes
[6], El-Rayes and Ulaby [9] and of Mätzler and Sume [2] and derived a new dielectric
formula [1]

ε_leaf = 0.522 (1 − 1.32 m_d) ε_sw + 0.51 + 3.84 m_d

which is valid over the frequency range from 1 to 100 GHz. The formula is applicable
to fresh leaves with m_d values in the range 0.1 ≤ m_d ≤ 0.5. Here ε_sw is the
dielectric permittivity of saline water according to the Debye model, and m_d is the
dry-matter fraction of leaves, given by

m_d = dry mass / fresh mass
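Mätzler's leaf formula is straightforward to evaluate. A minimal sketch follows; the ε_sw value passed in is a hypothetical placeholder (the Debye parameters for saline water are not given at this point in the text), so the output is illustrative only:

```python
def leaf_permittivity(md, eps_sw):
    """Matzler's semi-empirical leaf formula (1-100 GHz, fresh leaves):
       eps_leaf = 0.522*(1 - 1.32*md)*eps_sw + 0.51 + 3.84*md
    md is the dry-matter fraction; eps_sw is the complex permittivity
    of saline water from a Debye model."""
    if not 0.1 <= md <= 0.5:
        raise ValueError("formula is fitted only for 0.1 <= md <= 0.5")
    return 0.522 * (1 - 1.32 * md) * eps_sw + 0.51 + 3.84 * md

# placeholder eps_sw value, for illustration only
eps = leaf_permittivity(md=0.3, eps_sw=30 - 15j)
```

Note that only the saline-water term carries an imaginary (lossy) part; the dry-matter terms shift the real part alone.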
Wegmüller, Mätzler and Njoku [4] used the radiative transfer model, described by Kerr
and Njoku, as a reference point for studying vegetation attenuation and emission.
The transfer model is a model for spaceborne observations of semi-arid land surfaces,
and it is based on the concept of temperature instead of the concept of electric and
magnetic fields. This means that instead of analyzing how the magnitude of the
electric and magnetic fields is distributed among the different components, one
analyzes how the energy is distributed in terms of temperature for every component
of the system: the land surface, air, leaves, branches, etc.
(PART-C)
6) Write notes on propagation models?
Radio propagation
The usable frequency range for radio waves extends from the highest frequencies
of sound, about 20 kHz, to above 30,000 MHz. The frequency band from 3 to 30 MHz is
designated as the high frequency (HF) band. Most of the newer HF radios can operate in
a larger range of 1.6 to 30 MHz, or higher. Most long-haul communications in this band,
however, generally take place between 4 and 18 MHz. Depending on ionospheric
conditions and the time of day, the upper frequency range of about 18 to 30 MHz may
also be available. The HF band, of all of the frequency bands, is by far the most sensitive
to ionospheric effects. HF radio waves, in fact, experience some form of almost every
known propagation mode. The sun influences all radio communication beyond
ground wave or line-of-sight ranges. Conditions vary with such obvious sun-related
cycles as time of day and season of the year. Since these conditions differ for
appreciable changes in latitude and longitude, and everything is constantly changing
as the Earth rotates, almost every communications circuit has unique features with
respect to the band of frequencies that are useful and the quality of signals in
portions of that band.
The two basic modes of radio wave propagation at HF are ground wave and
skywave. Figure A1-1 illustrates these two modes.
Surface waves travel along the surface of the Earth and are attenuated at a much
greater rate than inversely as the distance. This attenuation depends
on the relative conductivity of the surface over which the wave travels. The best type of
surface for surface-wave transmission is sea water. The electrical properties of the
underlying terrain that determine the attenuation of the surface-wave field intensity vary
little from time to time, and therefore, this type of transmission has relatively stable
characteristics. The surface-wave component generally is transmitted as a vertically
polarized wave, and it remains vertically polarized at appreciable distances from the
antenna. This polarization is chosen because the Earth has a short-circuiting effect on
the electric intensity of a horizontally polarized wave but offers resistance to this
component of the vertical wave.
Absorption of the radio wave increases with frequency and limits useful surface-wave
propagation to the lower HF range. At frequencies below about 5 MHz, the surface wave
is favored because the ground behaves as a conductor for the electromagnetic energy.
Above 10 MHz, however, the ground behaves as a dielectric. In the region below 10
MHz, conductivity of the surface is a primary factor in attenuation of the surface wave.
As frequencies approach 30 MHz, losses suffered by the surface wave become excessive.
Direct waves, also known as line-of-sight (LOS) waves, follow a direct path through
the troposphere from the transmitting antenna to the receiving antenna. Propagation can
extend to somewhat beyond the visible horizon due to normal refraction in the
atmosphere causing the path to be somewhat bent or refracted. Because the electric field
intensity of a direct wave varies inversely with the distance of transmission, the wave
becomes weaker as distance increases, much like the light beam from a lantern or
headlight. The direct wave is not affected by the ground or by the tropospheric air over
the path but the transmitting and receiving antennas must be able to see each other for
communications to take place, making antenna height a very critical factor in determining
range. Almost all of the communications systems above 30 MHz use the direct (LOS)
mode. This includes the commercial broadcast FM stations, VHF, UHF, microwave,
cellular telephone systems, and satellite systems.
Space waves constitute the combination of all signal types which may reach a receiver
when both the transmitting and the receiving antennas are within LOS. In addition to the
direct signal, space waves include all of any earth-reflected signals of significance and,
under specific conditions, would include undesirable strong secondary ionospheric modes
as well. Space waves will support a relatively high signal bandwidth, as compared to
ionospheric modes.
Ground-reflected waves result from a portion of the propagated wave being reflected
from the surface of the earth at some point between the transmitting and receiving
antenna. This causes a phase change in the transmitted signal and can result in a
reduction or an enhancement of the combined received signal, depending on the time of
arrival of the reflected signal relative to the other components.
Tropospheric-reflected/refracted waves are generated when abrupt differences in
atmospheric density and refractive index exist between large air masses. This type of
refraction, associated with weather fronts, is not normally significant at HF.
Skywaves
Skywaves are those main portions of the total radiation leaving the antenna at
angles above the horizon. The term skywave describes the method of propagation by
which signals originating from one terminal arrive at a second terminal by refraction
from the ionosphere. The refracting (bending) qualities of the ionosphere enable
global-range communications by bouncing the signals back to Earth and keeping
them from being beamed into outer space. This is one of the primary characteristics
of long-haul HF communication -- its dependence upon ionospheric refraction.
Depending on frequency, time of day, and atmospheric conditions, a signal can
bounce several times before reaching a receiver which may be thousands of
kilometers away. Ionospheric skywave returns, however, in addition to experiencing
a much greater variability of attenuation and delay, also suffer from fading,
frequency (doppler) shifting and spreading, time dispersion, and delay distortion.
Nearly all medium- and long-distance (beyond the range of ground wave)
communication uses the skywave mode.
transmissivities and opacities of the crown of a beech (Fagus sylvatica L.). The
technique used for the measurements corresponds to the one explained in section 3.2.
To avoid any prejudice on the type of microwave propagation model, Mätzler limits
the physical interpretation to obvious facts and to consistency tests of the
multivariate dataset. The main instruments used in the study are the five microwave
radiometers of the PAMIR system.
The transmitted power was recorded during a whole year. In this way it has been
possible to get an idea of how much the attenuation is affected by the leaves alone,
since measurements were made both for a canopy containing leaves and branches
and for a canopy without leaves. The microwave radiation at 4.9 GHz, 10.4 GHz, 21
GHz, 35 GHz and 94 GHz was measured about once every week between August 1987
and August 1988. During the measurements the radiometer was placed to measure the
transmissivity in a vertical direction through the beech. Thus it measures the
brightness temperature T_b1 of the downwelling radiation from the beech. This
temperature can be expressed by

T_b1 = t T_b2 + r T_b0 + (1 − r − t) T_1    (3.23)

where t is the transmissivity and r the reflectivity of the vegetation layer. Here T_1 is
the physical tree temperature and T_b2 is the sky brightness temperature. The
upwelling brightness temperature T_b0 from the ground is given by

T_b0 = e_0 T_0 + (1 − e_0) T_b1    (3.24)

where e_0 is the emissivity of the ground surface and T_0 is the ground temperature.
Eq. (3.23) and Eq. (3.24) are the basic equations for the experiments, and they can be
used to get an expression for the transmissivity of the tree crown. After some algebra
we find

t = (T_1 + r δT − T_b1) / (T_1 − T_b2)    (3.25)

where δT = T_b0 − T_1. Since the emissivity of the grass-covered ground below the
beech is near 0.95 over the entire frequency range, T_b0 approaches T_0. This, and
the fact that the reflectivity of the beech is close to 0.1, lead to the following estimate:

r δT ≈ 0.1 (T_0 − T_1)

Since T_0 and T_1 are always very similar (differences were typically within 2 °C),
we can neglect r δT in Eq. (3.25) and write

t = (T_1 − T_b1) / (T_1 − T_b2)    (3.26)

In order to compute t we need values of the physical tree temperature T_1, of the
brightness temperature T_b1 measured below the tree, and of the sky brightness
temperature T_b2. In the beech experiment, T_b2 was measured at zenith angles of
50° and 60°, and T_b1 (the downwelling radiation of the beech) was measured at two
linear polarizations, (v) and (h), in the vertical direction, and through the center of
the crown at 30° off zenith, opposite the direction of the sky measurements. The tree
temperature T_1 was measured with an infrared radiometer and compared with air
and grass temperatures. We define the effective opacity of the vegetation layer as

τ = −ln(t)    (3.27)
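Equations (3.26) and (3.27) can be sketched directly. The temperatures below are hypothetical round numbers in kelvin, not values from the beech experiment:

```python
import math

def crown_transmissivity(T1, Tb1, Tb2):
    """Eq. (3.26): t = (T1 - Tb1) / (T1 - Tb2), with r*deltaT neglected."""
    return (T1 - Tb1) / (T1 - Tb2)

# hypothetical tree, below-tree, and sky brightness temperatures (K)
T1, Tb1, Tb2 = 290.0, 230.0, 50.0
t = crown_transmissivity(T1, Tb1, Tb2)  # (290-230)/(290-50) = 0.25
tau = -math.log(t)                      # Eq. (3.27): effective opacity
```

A cold sky (low T_b2) relative to the warm tree is what makes the measurement sensitive: the closer T_b1 is to T_1, the more the crown blocks and emits, and the larger the opacity τ.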
In this section we analyze and model the dielectric properties of leaves and branches.
We also analyze the structure of the crown of a tree. Despite the stochastic nature of
this subject, it is still possible to draw some conclusions on the orientation and
distribution of the leaves and branches. Since we have made our attenuation
measurements on a Fagus sylvatica Pendula (beech), the analysis is based on this tree.
It is easy to adjust the results to another tree type, since only a few parameters are
related to the structure of the tree.
In physical mixing theory, each volume fraction is weighted by the corresponding
permittivity to obtain the effective permittivity of the object. If the object with the
volume V consists of three components with the volumes V_1, V_2 and V_3, where
the respective component has the permittivity ε_1, ε_2 and ε_3, we get

ε_eff = v_1 ε_1 + v_2 ε_2 + v_3 ε_3

where V_i = v_i V and v_1 + v_2 + v_3 = 1. In the case of a leaf, the components are
saline water with a high permittivity, organic material with moderate to low
permittivity, and air with unit permittivity. All attempts so far to use physical mixing
theory to create a formula for the effective permittivity of a leaf have failed. The
reason is probably the large differences in volume fractions and permittivities
between the different components: a leaf can consist of up to 90 percent water (or
even more), which probably causes nonlinear effects.
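The linear volume-weighted mixing rule discussed above can be sketched as below. The component permittivities are hypothetical placeholders, and the point of the passage is precisely that this simple rule fails for leaves, so treat the result as the naive baseline only:

```python
def linear_mix(fractions, permittivities):
    """Naive linear dielectric mixing: eps_eff = sum(v_i * eps_i),
    with the volume fractions v_i summing to 1."""
    assert abs(sum(fractions) - 1.0) < 1e-9, "volume fractions must sum to 1"
    return sum(v * e for v, e in zip(fractions, permittivities))

# saline water, organic matter, air (placeholder complex permittivities)
eps_eff = linear_mix([0.6, 0.3, 0.1], [30 - 15j, 4 - 0.5j, 1 + 0j])
```

With such strongly contrasting components the real mixture behaves nonlinearly, which is why the semi-empirical leaf formula is fitted to measurements instead.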
To create a valid formula for the permittivity of a leaf we have to use another
technique. Since the saline water of the leaf makes the largest contribution to the
disturbance of the incident electromagnetic field, a model of the water content can
serve as a basis. This model should thereafter be adjusted to the experimental values
from leaves at different frequencies and at different dry matter fractions, in order to
compensate for the effects that the organic matter and air have on the permittivity.
A model that describes the dielectric properties of saline water is the Debye model

ε_sw = ε_∞ + (ε_s − ε_∞) / (1 − iωτ) + iσ / (ω ε_0)    (4.2)

Here ε_∞ is the value of the dielectric function at high frequencies, ε_s is the
corresponding value at ω = 0, τ is the relaxation time, σ is the ionic conductivity,
and ε_0 is the vacuum permittivity. The values of the different parameters are …¹

¹ This is valid for all sorts of vegetation elements such as branches, herbs, trunks, etc.
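A sketch of the Debye expression (4.2) as reconstructed above. Since the text breaks off before giving the parameter values (which depend on temperature and salinity), the numbers passed in below are placeholders for illustration only:

```python
import math

EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m

def debye_saline(f_hz, eps_s, eps_inf, tau, sigma):
    """Debye model with an ionic-conductivity term, Eq. (4.2):
       eps_sw = eps_inf + (eps_s - eps_inf)/(1 - i*omega*tau) + i*sigma/(omega*eps0)"""
    omega = 2 * math.pi * f_hz
    return (eps_inf
            + (eps_s - eps_inf) / (1 - 1j * omega * tau)
            + 1j * sigma / (omega * EPS0))

# placeholder parameters: static/high-frequency permittivity, relaxation
# time (s), ionic conductivity (S/m) -- NOT measured saline-water values
eps_sw = debye_saline(3.1e9, eps_s=75.0, eps_inf=4.9, tau=9e-12, sigma=1.0)
```

The relaxation term dominates the loss near 1/(2πτ), while the conductivity term grows toward lower frequencies; ε_sw from such a model is what feeds the leaf formula above.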
Write notes on ionosphere models?
The ionosphere is a region of electrically charged gases and particles in the Earth's
atmosphere, which extends upward from approximately 50 km to 600 km (30 to 375
miles) above the Earth's surface. See Figure A1-3. During daylight hours, the lower
boundary of the ionosphere is normally about 65 to 75 km above the Earth's surface,
but can be as low as about 50 km. At night, in the absence of direct solar radiation,
the residual ionization is maintained largely by galactic cosmic rays. The ionization
rate at various altitudes depends upon the intensity of the solar radiation and the
ionization efficiency of the neutral atmospheric gases. Collisions in the atmosphere,
however, usually result in the recombination of electrons and positive ions, and the
reattachment of electrons to neutral gas atoms and molecules, thus decreasing the
overall ionization density.
For the purpose of propagation prediction and ionospheric studies, it is frequently
useful to separate the environment (especially the ionosphere) into two states, benign and
disturbed. The benign ionosphere state is that which is undisturbed by solar flares, large
geomagnetic storms, and known manmade (including nuclear) events. Even then, there is
still a significant variability, partly due to the effects of such phenomena as traveling
ionospheric disturbances (TIDs), sudden ionospheric disturbances (SIDs), sporadic-E,
and
spread-F, as examples. The disturbed ionosphere is a state that includes the effects of
several disturbing influences which occur quite naturally. Solar flares, geomagnetic
storms, and nuclear detonations will cause significant ionospheric changes. Disturbances
may also be produced by the release of certain chemicals into the ionosphere. The
magnitudes of the introduced effects vary widely. Certain regions of the ionosphere, such
as the auroral zone and the equatorial region (in certain categories), are always in the
disturbed state.
Ionospheric layering
Within the ionosphere, there are four layers of varying ionization that have
notable effects on communications. As has been noted, solar radiation (EUV, UV, and
X-rays) and, to a lesser extent cosmic rays, act on ionospheric gases and cause ionization.
Since these ionization sources vary both in energy level and wavelength (frequency), they
penetrate to different depths of the atmosphere and cause different ionization effects. The
natural grouping of energy levels results in distinct layers being formed at different
altitudes.
At altitudes below about 80 km, winds and weather patterns cause a turbulent
mixing of the atmospheric gases present at these lower levels. This turbulent mixing
diminishes as altitude increases and as the stratification (or layering) of the constituent
gases becomes more pronounced. The density of ionized gases and particles increases
with altitude to a maximum value, then decreases or remains constant up to the next
layer. The higher layers of the ionosphere tend to be more densely ionized and contain
the smaller particles, while the lower layers, which are somewhat protected by the higher
ones, contain the larger particles and experience less ionization. The different
ionospheric gases each have different ionizing wavelengths, recombination times, and
collision cross sections, as well as several other characteristics. All of this results in the
creation of the ionized atmospheric layers. The boundaries between the various
ionospheric layers are not distinct, because of constant motion within the layers and the
changeability of the ionizing forces.
The ionospheric layers that most influence HF communications are the D, E, F1,
and F2 layers, and, when present, the sporadic-E layer. Of these, the D-layer acts as a
large rf sponge that absorbs signals passing through it. Depending on frequency and time
of day, the remaining four ionized layers are useful (necessary!) to the communicator and
HF communications.
Due to the ionization effects of the solar zenith angle (height of the Sun in the
sky), the altitudes of the various layers and their relative electron densities at any time
depend on the latitude. For mid-latitudes, the following are typical layer (region)
altitudes and extent:
D-region -- 70 to 90 km (a bottom level of 50 km is not too unusual)
E-region -- 90 to 140 km
Sporadic-E region -- typically 105 to 110 km
F-region -- from about 140 km to as high as 1000 km
F1-region -- 140 to over 200 km (during daylight only)
F2-region -- 200 to about 500 km
The hourly, daily, seasonal, and solar cycle variations in solar activity cause the altitudes
of these layers to undergo continual shifting and further substratification.
D-layer
The D-layer, which normally extends from 70 to 90 km above the Earth, is
strongest during daylight hours with its ionization being directly proportional to how high
the sun is in the sky. This layer often extends down to about 50 km. The electron
concentration and the corresponding ionization density is quite small at the lowest levels,
but increases rapidly with altitude. The D-region electron density has a maximum value
shortly after local solar noon and a very small value at night because it is ionized only
during the day. The D-layer is the lowest region affecting HF radio waves. There is a
pronounced seasonal variation in D-region electron densities with a maximum in
summer. The relatively high density of the neutral atmosphere in the D-region causes the
electron collision frequency to be correspondingly high. The main influence of the D
region on HF systems is absorption. In fact, this region is responsible for most of the
absorption encountered by HF signals which use the skywave mode. Because absorption
is inversely proportional to frequency, wave energy in the lower end of the HF band is
almost completely absorbed by this layer during daylight hours. The rise and fall of the
D-layer, and the corresponding amount of radio wave absorption, is the primary
determinant of the lowest usable frequency (LUF) over a given path. Due to the greater
penetration ability of higher radio frequencies, the D-layer has a smaller effect on
frequencies above about 10 MHz. At lower frequencies, however, absorption by the
D-layer is significant. Absorption losses of the higher-frequency waves depend on the D
region ionization density, the extent of the region, the incident angle, the radio frequency,
and the number of hops, among other factors. (For every hop, the rf wave traverses the D
region twice, once on the way up, and once on the way down.)
E-layer
The lowest region of the ionosphere useful for returning radio signals to the Earth
is the E-layer. Its altitude ranges from about 90 km to about 130 km and includes both
the normal and the sporadic-E layers. The average altitude of the layer's central region is
at about 110 km. At this height, the atmosphere is dense enough so that ions and
electrons set free by solar radiation do not have to travel far before they meet and
recombine to form neutral particles. It is also dense enough to allow rapid de-ionization
as solar energy ceases to reach it. Ionization of this layer begins near sunrise, reaches
maximum ionization at noon, and ceases shortly after sundown. The layer can maintain
its ability to bend radio waves only in the presence of sunlight. At night, only a small
residual level of ionization remains in the E-region. The normal E-layer is important for
daytime HF propagation at distances of up to about 2000 km. Irregular cloud-like layers
of ionization often occur in the region of normal E-layer appearance and are known as
sporadic-E (ES). These areas are highly ionized and are sometimes capable of supporting
the propagation of sky waves at the upper end of the HF band and into the lower VHF
band.
Sporadic E
In addition to the relatively regular ionospheric layers (D, E, and F), layers of
enhanced ionization often appear in the E-region (ES) and the lower parts of the
F-regions (sporadic F). The significant irregular reflective layer, from the point of
view of HF propagation, is the ES-layer, since it occurs in the same altitude region
as the regular E-layer.
Despite what their name implies, these layers are quite common. A theory is that
ES occurs as a result of ionization from high altitude wind shear in the presence of the
magnetic field of the Earth, rather than from ionization by solar and cosmic radiation.
Another theory is that ES-layers are thin patches of long-lived ions (primarily metallic)
that are believed to be rubbed off from meteors as they pass through the atmosphere, and
then are formed into thin layers by the action of tidal wind systems. Layers of sodium
ions produced by similar mechanisms commonly appear in the 90-km altitude range.
Because the recombination rates of metallic ions are extremely low in the ionosphere,
these thin layers can persist for many hours before being neutralized by recombination
and dispersed by diffusion and are most commonly observed at night when the
background densities are low. Areas of ES generally last only a few hours, and move
about rapidly under the influence of high altitude wind patterns. Different forms of ES,
having different characteristics and production mechanisms, are found in the auroral
zones and, at an attitude of about 105 km, in the low and middle equatorial latitudes.
They share the common characteristics that they are all E-layer phenomena, their
occurrence is not predictable, and they all have an effect on HF radio communications.
When ES occurs, it produces a marked effect on the geometry of radio propagation paths
which normally involve the higher layers. Their peak densities can sometimes exceed
that of the higher altitude F-region. When this occurs, these layers can reflect incident
HF waves at much lower altitudes and prevent reflections from the F-layer, thereby
greatly reducing the expected range of transmission. Although ES is difficult to predict, it
can be used to advantage when its presence is known. It has been found that close to the
equator, ES occurs primarily during the day and shows little seasonal variation. By
contrast, in the auroral zone, ES is most prevalent during the night but also shows little
seasonal variation. In middle latitudes however, ES occurrence is subject to both seasonal
and diurnal variations and is more prevalent in local summer than in winter and during
the day rather than at night.
F-layer
The F-layer is the highest and most heavily ionized of the ionized regions, and
usually ranges in altitude from about 140 km to about 500 km. At these altitudes, the air
is thin enough that the ions and electrons recombine very slowly, thus allowing the layer
to retain its ionized properties even after sunset. The F-layer is the most important one
for long-distance HF propagation. If sporadic ionospheric disturbances are ignored, the
height and density of this region varies in a predictable manner diurnally, seasonally, and
with the 11-year sunspot cycle. Under normal conditions it exists 24 hours a day. The
F-layers ionize very rapidly at sunrise and reach peak electron density early in the
afternoon at the middle of the propagation path. The ionization decays very slowly after
sunset and reaches its minimum value just before sunrise. At night, the layer has a single
density peak and is called the F-layer. During the day, the absorption of solar energy
results in
the formation of two distinct density peaks. The lower peak, the F1- layer, ranges in
height from about 130 km to about 300 km and seldom is predominant in supporting HF
radio propagation. Occasionally, this layer is the reflecting region for HF transmission,
but in general, obliquely-incident waves that penetrate the E-region also penetrate the
F1-layer and are reflected by the F2-layer. The F1-layer, however, does introduce additional
absorption of the radio waves. After sunset, the F1-layer quickly decays and is replaced
by a broadened F2-layer, which is known simply as the F-layer. The F2-layer, the higher
and more important of the two layers, ranges in height from about 200 km to about 500
km. This F2-layer reaches maximum ionization at noon and remains charged at night,
gradually decreasing to a minimum just before sunrise. In addition to being the layer with
the maximum electron density, the F2-layer is also strongly influenced by solar winds,
diffusion, magnetospheric events, and other dynamic effects and exhibits considerable
variability. Ionization does not completely depend on the solar zenith angle because with
such low molecular collision rates, the region can store received solar energy for many
hours. In the daytime, the F2-layer is generally about 80 km thick, centered on about 300
km altitude. At night the F1-layer merges with the F2-layer, resulting in a combined
F-layer with a width of about 150 km, also centered on about 300 km altitude. Due to the
Earth/ionospheric geometry, the maximum range of a single hop off of the F2-region is
about 4000 km (2500 miles). The absence of the F1-layer, the sharp reduction in
absorption of the E-region, and absence of the D-layer cause night-time field intensities
and noise to be generally higher than during daylight. Near the equator, there are
significant latitudinal gradients in the F-region ionization. In the polar regions (high
latitudes), there is a region of strongly depressed electron density in the F-layer. These
can have important effects upon long-distance radio wave propagation.
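The ~4000 km figure for a single F2 hop quoted above follows from simple geometry: the longest hop occurs when the ray leaves the transmitter tangent to the Earth's surface and reflects at the layer height. A small sketch of that calculation (the layer heights are the nominal values from the text; the tangent-ray assumption is ours):

```python
import math

def max_single_hop_km(layer_height_km, earth_radius_km=6371.0):
    """Maximum ground range of one ionospheric hop, assuming the ray
    leaves tangent to the Earth's surface and reflects at layer height."""
    R, h = earth_radius_km, layer_height_km
    half_hop = R * math.acos(R / (R + h))  # ground arc to the tangent point
    return 2 * half_hop

print(f"F2-layer (300 km): {max_single_hop_km(300):.0f} km")  # ~3800 km
print(f"E-layer  (110 km): {max_single_hop_km(110):.0f} km")  # ~2300 km
```

The ~3800 km result for a 300 km reflection height is consistent with the quoted maximum of about 4000 km; the E-layer value likewise matches the ~2000 km daytime E-layer range mentioned earlier.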
13) Explain the wavelength and virtual height?
If a receiving antenna is used to measure the power, Pt, transmitted through the canopy
at a distance r from the transmitter, the properties of the receiving antenna have to be
considered. The incident waves are received over an area that is not the same as the
physical area of the receiving antenna. It is therefore convenient to define a quantity
called the effective area. The effective area, Ae(θ,φ), of a receiving antenna is the ratio
of the average power delivered to a matched load to the time-average power density
(time-average Poynting vector) of the incident electromagnetic wave at the antenna. We
write
PL = Ae S (5.18)
where PL is the maximum average power transferred to the load (under matched
conditions) with the receiving antenna properly oriented with respect to the polarization
of the incident wave. It can be proved that the ratio of the directive gain and the
effective area of an antenna is a universal constant and follows the relation
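The universal relation in question is the standard one, Ae = λ²G/4π. A minimal sketch of it together with equation (5.18); the 3.1 GHz example frequency is taken from the measurement campaign discussed below:

```python
import math

C0 = 3e8  # speed of light, m/s

def effective_area(gain_linear, freq_hz):
    """Ae = lambda^2 * G / (4*pi): the universal gain/effective-area ratio."""
    lam = C0 / freq_hz
    return gain_linear * lam**2 / (4 * math.pi)

def received_power(p_density, gain_linear, freq_hz):
    """PL = Ae * S, equation (5.18), for a matched, properly oriented antenna."""
    return effective_area(gain_linear, freq_hz) * p_density

# Isotropic receiver (G = 1) at 3.1 GHz:
print(f"Ae = {effective_area(1.0, 3.1e9) * 1e4:.2f} cm^2")
```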
waves, and in thin materials as plate waves. Longitudinal and shear waves are the two
modes of propagation most widely used in ultrasonic testing. The particle movement
responsible for the propagation of longitudinal and shear waves is illustrated below.
In longitudinal waves, the oscillations occur in the longitudinal direction or the direction
of wave propagation. Since compressional and dilational forces are active in these waves,
they are also called pressure or compressional waves. They are also sometimes called
density waves because their particle density fluctuates as they move. Compression waves
can be generated in liquids, as well as solids because the energy travels through the
atomic structure by a series of compression and expansion (rarefaction) movements.
In the transverse or shear wave, the particles oscillate at a right angle or transverse to the
direction of propagation. Shear waves require an acoustically solid material for effective
propagation, and therefore, are not effectively propagated in materials such as liquids or
gasses. Shear waves are relatively weak when compared to longitudinal waves. In fact,
shear waves are usually generated in materials using some of the energy from
longitudinal waves.
From these values we calculated the attenuation of the tree. We used these values to
estimate the real values for the attenuation of the tree crown. To do that we assumed that
the difference between the sizes of the leaves reflected the difference in attenuation. We
therefore increased the attenuation values at 3.1 GHz by a factor of two and the
attenuation values at 5.8 GHz by a factor of three. But it turned out that the results were
still too low compared to the measurements. The calculated values were 0.7 (0.3) dB/m at
3.1 GHz and 0.8 (0.3) dB/m at 5.8 GHz (the standard deviation is given inside the
parentheses). The measured values were 1.3 (0.4) dB/m at 3.1 GHz and 1.4 (0.5) dB/m at
5.8 GHz. The predicted values are thus too low. If we compare the results, however, we
find that the intervals overlap, so the deviations from the correct values are small. To
decrease the uncertainties, more measurements have to be done. This means that further
work is needed, but the modeling approach can be used.
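The overlap claim can be checked directly from the numbers quoted above. A small sketch; comparing one-standard-deviation intervals is our own choice of criterion:

```python
def intervals_overlap(mean_a, sd_a, mean_b, sd_b):
    """True if the one-sigma intervals [mean-sd, mean+sd] of two estimates overlap."""
    return (mean_a - sd_a) <= (mean_b + sd_b) and (mean_b - sd_b) <= (mean_a + sd_a)

# (predicted, sd) vs (measured, sd) attenuation in dB/m, from the text:
for freq, pred, sd_p, meas, sd_m in [("3.1 GHz", 0.7, 0.3, 1.3, 0.4),
                                     ("5.8 GHz", 0.8, 0.3, 1.4, 0.5)]:
    print(freq, "overlap:", intervals_overlap(pred, sd_p, meas, sd_m))  # both True
```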
8.1 Future work
More measurements have to be done on the same test beech in order to increase the
accuracy of the mean value and the standard deviation of the attenuation. The inventory
of the test beech must be done with a greater accuracy since the values of the standard
deviation are much too high. The total cross section should be calculated for different
sizes of the branches and leaves. In that way a better estimation can be performed. If it is
possible the computer program based on the T-matrix method must be improved in order
to be able to calculate oblate spheroids and cylinders with extreme symmetries or
alternatively find another method to calculate the total cross section. Since the results of
the vegetation attenuation will be used in a prediction tool it is necessary to investigate
the attenuation from other trees so that a mean value of the attenuation can be estimated.
This prediction tool is used to investigate wave propagation in general at residential
environments. It is therefore important to investigate the attenuation of many different
types of trees. It is also important to investigate the frequency of tree types in cities. If
this factor can be determined a model for every single tree type can be constructed and
used together with this factor as a statistical weight to get a better estimation of the
vegetation attenuation in general. Of course measurements must be made on all different
types of trees in order to verify the validity of the theoretical model.
(PART-A)-ANTENNAS
1)
electromagnetic waves.
2)
electromagnetic field.
3)
to Guglielmo Marconi.
4)
Antennas have practical uses for the transmission and reception of radio
frequency signals.
5) Define an antenna.
An antenna is a transition device, or transducer, between a guided wave and a free-space
wave, or vice versa. An antenna is also said to be an impedance-transforming device.
6) The directionality of the array is due to the spatial relationships and the electrical
feed relationships between individual antennas.
7) What is meant by radiation pattern?
Radiation pattern is the relative distribution of radiated power as a function of direction
in space. It is a graph which shows the variation in actual field strength of the EM wave
at all points which are at equal distance from the antenna. The energy radiated in a
particular direction by an antenna is measured in terms of field strength, E (volts/m).
8) Define Radiation intensity?
The power radiated from an antenna per unit solid angle is called the radiation intensity U
(watts per steradian or per square degree). The radiation intensity is independent of
distance.
9) Define beam efficiency?
The total beam area (ΩA) consists of the main beam area (ΩM) plus the minor lobe
area (Ωm). Thus ΩA = ΩM + Ωm.
The ratio of the main beam area to the total beam area is called beam efficiency.
Beam efficiency εM = ΩM / ΩA.
10) Define directivity?
The directivity of an antenna is equal to the ratio of the maximum power density
P(θ,φ)max to its average value over a sphere, as observed in the far field of the antenna.
D = P(θ,φ)max / P(θ,φ)av  (directivity from pattern)
D = 4π / ΩA  (directivity from beam area ΩA)
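The beam-area formula can be checked numerically; the 0.01 sr pencil beam below is an illustrative value, not a figure from the text:

```python
import math

def directivity_from_beam_area(omega_a_sr):
    """D = 4*pi / Omega_A, with the beam area Omega_A in steradians."""
    return 4 * math.pi / omega_a_sr

# Isotropic antenna: the beam fills the whole sphere (4*pi sr), so D = 1.
print(directivity_from_beam_area(4 * math.pi))  # 1.0
# A 0.01 sr pencil beam concentrates the power about 1257-fold.
print(round(directivity_from_beam_area(0.01)))  # 1257
```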
11) What are the different types of aperture?
i) Effective aperture
ii) Scattering aperture
iii) Loss aperture
iv) Collecting aperture
v) Physical aperture
12) Define the different types of aperture.
Effective aperture (Ae):
It is the area over which power is extracted from the incident wave and delivered to the
load.
17) What is meant by front-to-back ratio?
It is defined as the ratio of the power radiated in the desired direction to the power
radiated in the opposite direction, i.e. FBR = power radiated in desired direction /
power radiated in opposite direction.
18) Define antenna efficiency?
The efficiency of an antenna is defined as the ratio
of power radiated to the total input power supplied to the
antenna.
Antenna efficiency = Power radiated / Total input power
19) What is radiation resistance?
The antenna is a radiating device in which power is radiated into space in the form of
electromagnetic waves. If W is the radiated power and I the antenna current, then
W = I²Rr
Rr = W / I²
where Rr is a fictitious resistance called the radiation resistance.
20) Define gain.
The ratio of maximum radiation intensity in a given direction to the maximum radiation
intensity from a reference antenna produced in the same direction with the same input
power, i.e.
Gain (G) = maximum radiation intensity from test antenna / maximum radiation
intensity from the reference antenna with the same input power
(PART-B)
1)
2)
The origin of the word antenna relative to wireless apparatus is attributed to Guglielmo
Marconi. In 1895, while testing early radio apparatus in the Swiss Alps at Salvan,
Switzerland, in the Mont Blanc region, Marconi experimented with early wireless
equipment. A 2.5 meter long pole, along which was carried a wire, was used as a
radiating and receiving aerial element. In Italian a tent pole is known as l'antenna
centrale, and the pole with a wire alongside it used as an aerial was simply called
l'antenna. Until then wireless radiating transmitting and receiving elements were known
simply as aerials or terminals. Marconi's use of the word antenna (Italian for pole) would
become a popular term for what today is uniformly known as the antenna.
3)
Antennas have practical uses for the transmission and reception of radio frequency
signals (radio, TV, etc.). In air, those signals travel very quickly and with a very low
transmission loss. The signals are absorbed when moving through more conducting
materials, such as concrete walls, rock, etc. When encountering an interface, the waves
are partially reflected and partially transmitted through.
A common antenna is a vertical rod a quarter of a wavelength long. Such antennas are
simple in construction, usually inexpensive, and both radiate in and receive from all
horizontal directions (omnidirectional). One limitation of this antenna is that it does not
radiate or receive in the direction in which the rod points. This region is called the
antenna blind cone or null.
4)
A director is a parasitic element, usually a metallic conductive structure, which
re-radiates into free space impinging electromagnetic radiation coming from or going to the
active antenna, the velocity of the re-radiated wave having a component in the direction
of the velocity of the impinging wave. The director modifies the radiation pattern of the
active antenna but there is no direct electrical connection between the active antenna and
this parasitic element.
10) What is a resonant antenna?
The "resonant frequency" and "electrical resonance" are related to the electrical length of an
antenna. The electrical length is usually the physical length of the wire divided by its
velocity factor (the ratio of the speed of wave propagation in the wire to c0, the speed of
light in a vacuum). Typically an antenna is tuned for a specific frequency, and is effective
for a range of frequencies that are usually centered on that resonant frequency. However,
other properties of an antenna change with frequency, in particular the radiation pattern
and impedance, so the antenna's resonant frequency may merely be close to the center
frequency of these other more important properties.
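As a worked example of electrical length, the physical length of a resonant half-wave dipole can be sketched as below; the 0.95 velocity factor is an assumed typical value for thin wire in air, not a figure from the text:

```python
def half_wave_dipole_length(freq_hz, velocity_factor=0.95):
    """Physical length of a resonant half-wave dipole: vf * c0 / (2 f)."""
    c0 = 299_792_458.0  # speed of light in vacuum, m/s
    return velocity_factor * c0 / (2 * freq_hz)

# A dipole for 100 MHz (FM broadcast band) is about 1.42 m long.
print(f"{half_wave_dipole_length(100e6):.3f} m")  # 1.424 m
```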
Antennas can be made resonant on harmonic frequencies with lengths that are fractions
of the target wavelength. Some antenna designs have multiple resonant frequencies, and
some are relatively effective over a very broad range of frequencies. The most commonly
known type of wide band aerial is the logarithmic or log periodic, but its gain is usually
much lower than that of a specific or narrower band aerial.
(PART-C)
1)
Terminology
The words antenna (plural: antennas[1]) and aerial are used interchangeably; but usually a
rigid metallic structure is termed an antenna and a wire format is called an aerial. In the
United Kingdom and other British English speaking areas the term aerial is more
common, even for rigid types. The noun aerial is occasionally written with a diaeresis
mark, aërial, in recognition of the original spelling of the adjective aërial from
which the noun is derived.
The origin of the word antenna relative to wireless apparatus is attributed to Guglielmo
Marconi. In 1895, while testing early radio apparatus in the Swiss Alps at Salvan,
Switzerland in the Mont Blanc region, Marconi experimented with early wireless
equipment. A 2.5 meter long pole, along which was carried a wire, was used as a
radiating and receiving aerial element. In Italian a tent pole is known as l'antenna
centrale, and the pole with a wire alongside it used as an aerial was simply called
l'antenna. Until then wireless radiating transmitting and receiving elements were known
simply as aerials or terminals. Marconi's use of the word antenna (Italian for pole) would
become a popular term for what today is uniformly known as the antenna.[2]
A Hertzian antenna is a set of terminals that does not require the presence of a ground for
its operation (versus a Tesla antenna which is grounded. [3]) A loaded antenna is an active
antenna having an elongated portion of appreciable electrical length and having
additional inductance or capacitance directly in series or shunt with the elongated portion
so as to modify the standing wave pattern existing along the portion or to change the
effective electrical length of the portion. An antenna grounding structure is a structure for
establishing a reference potential level for operating the active antenna. It can be any
structure closely associated with (or acting as) the ground which is connected to the
terminal of the signal receiver or source opposing the active antenna terminal (i.e., the
signal receiver or source is interposed between the active antenna and this structure).
Overview
Antennas have practical uses for the transmission and reception of radio frequency
signals (radio, TV, etc.). In air, those signals travel very quickly and with a very low
transmission loss. The signals are absorbed when moving through more conducting
materials, such as concrete walls, rock, etc. When encountering an interface, the waves
are partially reflected and partially transmitted through.
A common antenna is a vertical rod a quarter of a wavelength long. Such antennas are
simple in construction, usually inexpensive, and both radiate in and receive from all
horizontal directions (omnidirectional). One limitation of this antenna is that it does not
radiate or receive in the direction in which the rod points. This region is called the
antenna blind cone or null.
There are two fundamental types of antenna directional patterns, which, with reference to
a specific three dimensional (usually horizontal or vertical) plane, are either:
1. Omni-directional (radiates equally in all directions in that plane),
or
2. Directional (radiates more in one direction than in the others).
The most commonly known type of wide band aerial is the logarithmic or log periodic,
but its gain is usually much lower than that of a specific or narrower band aerial.
Gain
Main article: Antenna gain
Gain as a parameter measures the directionality of a given antenna. An antenna with a
low gain emits radiation with about the same power in all directions, whereas a high-gain
antenna will preferentially radiate in particular directions. Specifically, the Gain,
Directive gain or Power gain of an antenna is defined as the ratio of the intensity (power
per unit surface) radiated by the antenna in a given direction at an arbitrary distance
divided by the intensity radiated at the same distance by a hypothetical isotropic antenna.
The gain of an antenna is a passive phenomenon - power is not added by the antenna, but
simply redistributed to provide more radiated power in a certain direction than would be
transmitted by an isotropic antenna. If an antenna has a gain greater than one in some
directions, it must have a gain less than one in other directions, since energy is conserved
by the antenna. An antenna designer must take into account the application for the
antenna when determining the gain. High-gain antennas have the advantage of longer
range and better signal quality, but must be aimed carefully in a particular direction.
Low-gain antennas have shorter range, but the orientation of the antenna is
inconsequential. For example, a dish antenna on a spacecraft is a high-gain device that
must be pointed at the planet to be effective, whereas a typical Wi-Fi antenna in a laptop
computer is low-gain, and as long as the base station is within range, the antenna can be
in any orientation in space. It makes sense to improve horizontal range at the expense of
reception above or below the antenna. Thus most antennas labelled "omnidirectional"
really have some gain.[4]
Sometimes, the half-wave dipole is taken as a reference instead of the isotropic radiator.
The gain is then given in dBd (decibels over dipole):
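Since a half-wave dipole itself has a gain of 2.15 dBi, converting between the isotropic and dipole references is a fixed offset. A minimal sketch:

```python
def dbi_to_dbd(gain_dbi):
    """dBd = dBi - 2.15: a half-wave dipole has 2.15 dB gain over isotropic."""
    return gain_dbi - 2.15

def dbd_to_dbi(gain_dbd):
    return gain_dbd + 2.15

print(dbi_to_dbd(2.15))  # 0.0 -> the dipole itself is the 0 dBd reference
```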
2)
An unbalanced current path may result in a net magnetic field, which can
Bipolar
each end of the line, approximately half the rated power can continue to flow
using the earth as a return path, operating in monopolar mode.
Since for a given total power rating each conductor of a bipolar line
carries only half the current of monopolar lines, the cost of the second conductor
is reduced compared to a monopolar line of the same rating.
Bipolar systems may carry as much as 3,200 MW at voltages of +/-600 kV. Submarine
cable installations initially commissioned as a monopole may be upgraded with
additional cables and operated as a bipole.
A back-to-back station (or B2B for short) is a plant in which both static inverters and
rectifiers are in the same area, usually in the same building. The length of the direct
current line is kept as short as possible. HVDC back-to-back stations are used for
coupling two networks of the same nominal frequency but no fixed phase relationship.
cause considerable power loss, create audible and radio-frequency interference, generate
toxic compounds such as oxides of nitrogen and ozone, and bring forth arcing.
Both AC and DC transmission lines can generate coronas, in the former case in the form
of oscillating particles, in the latter a constant wind. Due to the space charge formed
around the conductors, an HVDC system may have about half the loss per unit length of a
high voltage AC system carrying the same amount of power. With monopolar
transmission the choice of polarity of the energised conductor leads to a degree of control
over the corona discharge. In particular, the polarity of the ions emitted can be controlled,
which may have an environmental impact on particulate condensation. (particles of
different polarities have a different mean-free path.) Negative coronas generate
considerably more ozone than positive coronas, and generate it further downwind of the
power line, creating the potential for health effects. The use of a positive voltage will
reduce the ozone impacts of monopole HVDC power lines.
Applications
Overview
The controllability of current-flow through HVDC rectifiers and inverters, their
application in connecting unsynchronized networks, and their applications in efficient
submarine cables mean that HVDC cables are often used at national boundaries for the
exchange of power. Offshore windfarms also require undersea cables, and their turbines
are unsynchronized. In very long-distance connections between just two points, for
example around the remote communities of Siberia, Canada, and the Scandinavian North,
the decreased line-costs of HVDC also makes it the usual choice. Other applications have
been noted throughout this article.
AC network interconnections
AC transmission lines can only interconnect synchronized AC networks that oscillate at
the same frequency and in phase. Many areas that wish to share power have
unsynchronized networks. The power grids of the UK, Northern Europe and continental
Europe are not united into a single synchronized network. Japan has 50 Hz and 60 Hz
networks. Continental North America, while operating at 60 Hz throughout, is divided
into regions which are unsynchronised: East, West, Texas, Quebec, and Alaska. Brazil
and Paraguay, which share the enormous Itaipu hydroelectric plant, operate on 60 Hz and
50 Hz respectively. However, HVDC systems make it possible to interconnect
unsynchronized AC networks, and also add the possibility of controlling AC voltage and
reactive power flow.
A generator connected to a long AC transmission line may become unstable and fall out
of synchronization with a distant AC power system. An HVDC transmission link may
make it economically feasible to use remote generation sites. Wind farms located offshore may use HVDC systems to collect power from multiple unsynchronized generators
for transmission to the shore by an underwater cable.
In general, however, an HVDC power line will interconnect two AC regions of the
power-distribution grid. Machinery to convert between AC and DC power adds a
considerable cost in power transmission. The conversion from AC to DC is known as
rectification, and from DC to AC as inversion. Above a certain break-even distance
(about 50 km for submarine cables, and perhaps 600–800 km for overhead cables), the
lower cost of the HVDC electrical conductors outweighs the cost of the electronics.
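The break-even argument can be sketched as two linear cost curves: HVDC pays a large fixed converter-station cost but a lower per-kilometre conductor cost. The cost figures below are made-up illustrative numbers chosen only to land near the ~700 km overhead-line break-even distance mentioned above; they are not real project costs:

```python
def transmission_cost(distance_km, fixed_cost, cost_per_km):
    """Total cost of a line: terminal equipment plus conductors."""
    return fixed_cost + cost_per_km * distance_km

def break_even_distance(dc_fixed, dc_per_km, ac_fixed, ac_per_km):
    """Distance beyond which DC is cheaper: solve equal-cost for d."""
    return (dc_fixed - ac_fixed) / (ac_per_km - dc_per_km)

# Hypothetical units: DC converters cost 350, DC line 0.5/km, AC line 1.0/km.
d = break_even_distance(dc_fixed=350.0, dc_per_km=0.5,
                        ac_fixed=0.0, ac_per_km=1.0)
print(f"break-even at {d:.0f} km")  # 700 km
```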
The conversion electronics also present an opportunity to effectively manage the power
grid by means of controlling the magnitude and direction of power flow. An additional
advantage of the existence of HVDC links, therefore, is potential increased stability in the
transmission grid.
Renewable electricity superhighways
A number of studies have highlighted the potential benefits of very wide area super grids
based on HVDC since they can mitigate the effects of intermittency by averaging and
smoothing the outputs of large numbers of geographically dispersed wind farms or solar
farms.[18] Czisch's study concludes that a grid covering the fringes of Europe could bring
100% renewable power (70% wind, 30% biomass) at close to today's prices. There has
been debate over the technical feasibility of this proposal [19] and the political risks
involved in energy transmission across a large number of international borders.[20][21]
The construction of such green power superhighways is advocated in a white paper that
was released by the American Wind Energy Association and the Solar Energy Industries
Association[22]
In January, the European Commission proposed €300 million to subsidize the
development of HVDC links between Ireland, Britain, the Netherlands, Germany,
Denmark, and Sweden, as part of a wider €1.2 billion package supporting links to
offshore wind farms and cross-border interconnectors throughout Europe. Meanwhile, the
recently founded Union of the Mediterranean has embraced a Mediterranean Solar Plan
to import large amounts of concentrating solar power into Europe from North Africa and
the Middle East.[23]
Smaller scale use
turn-off thyristors (GTO) has made smaller HVDC systems economical. These
may be installed in existing AC grids for their role in stabilizing power flow
without the additional short-circuit current that would be produced by an
additional AC transmission line. The manufacturer ABB calls this concept "HVDC
Light" and the manufacturer Siemens calls a similar concept "HVDC PLUS" (Power
Link Universal System). They have extended the use of HVDC down to blocks as
small as a few tens of megawatts and lines as short as a few score kilometres of
overhead line. The difference lies in the use of Voltage-Sourced Converter (VSC)
technology.
12) Describe the antenna bandwidth, beam width and polarization?
Radiation pattern
The radiation pattern of an antenna is the geometric pattern of the relative field strengths
of the field emitted by the antenna. For the ideal isotropic antenna, this would be a
sphere. For a typical dipole, this would be a toroid. The radiation pattern of an antenna is
typically represented by a three dimensional graph, or polar plots of the horizontal and
vertical cross sections. The graph should show sidelobes and backlobes, where the
antenna's gain is at a minimum or maximum.
See Antenna measurement: Radiation pattern or Radiation pattern for more information.
Impedance
As an electro-magnetic wave travels through the different parts of the antenna system
(radio, feed line, antenna, free space) it may encounter differences in impedance (E/H,
V/I, etc). At each interface, depending on the impedance match, some fraction of the
wave's energy will reflect back to the source[5], forming a standing wave in the feed line.
The ratio of maximum power to minimum power in the wave can be measured and is
called the standing wave ratio (SWR). A SWR of 1:1 is ideal. A SWR of 1.5:1 is
considered to be marginally acceptable in low power applications where power loss is
more critical, although an SWR as high as 6:1 may still be usable with the right
equipment. Minimizing impedance differences at each interface (impedance matching)
will reduce SWR and maximize power transfer through each part of the antenna system.
Complex impedance of an antenna is related to the electrical length of the antenna at the
wavelength in use. The impedance of an antenna can be matched to the feed line and
radio by adjusting the impedance of the feed line, using the feed line as an impedance
transformer. More commonly, the impedance is adjusted at the load (see below) with an
antenna tuner, a balun, a matching transformer, matching networks composed of
inductors and capacitors, or matching sections such as the gamma match.
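The SWR values quoted above follow directly from the reflection coefficient at an impedance interface. A minimal sketch, assuming a 50 Ω line:

```python
def swr_from_impedance(z_load, z0=50.0):
    """SWR from load and line impedance (either may be complex)."""
    gamma = abs((z_load - z0) / (z_load + z0))  # reflection coefficient magnitude
    return (1 + gamma) / (1 - gamma)

print(round(swr_from_impedance(50.0), 2))   # 1.0 -> the ideal 1:1 match
print(round(swr_from_impedance(75.0), 2))   # 1.5 -> "marginally acceptable"
print(round(swr_from_impedance(300.0), 2))  # 6.0 -> usable only with the right equipment
```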
Efficiency
Efficiency is the ratio of power actually radiated to the power put into the antenna
terminals. A dummy load may have an SWR of 1:1 but an efficiency of 0, as it absorbs all
power and radiates heat but not RF energy, showing that SWR alone is not an effective
measure of an antenna's efficiency. Radiation in an antenna is caused by radiation
resistance which can only be measured as part of total resistance including loss
resistance. Loss resistance usually results in heat generation rather than radiation, and
reduces efficiency. Mathematically, efficiency is calculated as radiation resistance divided
by total resistance.
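The last sentence is directly computable; the resistance values below are illustrative, not from the text:

```python
def antenna_efficiency(r_radiation, r_loss):
    """Efficiency = radiation resistance / total resistance."""
    return r_radiation / (r_radiation + r_loss)

# An electrically short antenna with 2 ohm radiation resistance and
# 8 ohm loss resistance radiates only 20% of the power fed to it.
print(f"{antenna_efficiency(2.0, 8.0):.0%}")  # 20%
```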
Bandwidth
The bandwidth of an antenna is the range of frequencies over which it is effective,
usually centered on the resonant frequency. The bandwidth of an antenna may be
increased by several techniques, including using thicker wires, replacing wires with
cages to simulate a thicker wire, tapering antenna components (like in a feed horn), and
combining multiple antennas into a single assembly and allowing the natural impedance
to select the correct antenna. Small antennas are usually preferred for convenience, but
there is a fundamental limit relating bandwidth, size and efficiency.
Polarization
The polarization of an antenna is the orientation of the electric field (E-plane) of the
radio wave with respect to the Earth's surface and is determined by the physical structure
of the antenna and by its orientation. It has nothing in common with antenna
directionality terms: "horizontal", "vertical" and "circular". Thus, a simple straight wire
antenna will have one polarization when mounted vertically, and a different polarization
when mounted horizontally. "Electromagnetic wave polarization filters" are structures
which can be employed to act directly on the electromagnetic wave to filter out wave
energy of an undesired polarization and to pass wave energy of a desired polarization.
Reflections generally affect polarization. For radio waves the most important reflector is
the ionosphere - signals which reflect from it will have their polarization changed
unpredictably. For signals which are reflected by the ionosphere, polarization cannot be
relied upon. For line-of-sight communications for which polarization can be relied upon,
it can make a large difference in signal quality to have the transmitter and receiver using
the same polarization; many tens of dB difference are commonly seen and this is more
than enough to make the difference between reasonable communication and a broken
link.
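For two linearly polarized antennas, the mismatch loss factor is cos²θ for a misalignment angle θ, which is where the "many tens of dB" for near-orthogonal antennas comes from. A sketch:

```python
import math

def polarization_loss_db(angle_deg):
    """Mismatch loss between two linear polarizations offset by angle_deg.

    Received power scales as cos^2(theta), i.e. -20*log10(cos theta) dB."""
    c = abs(math.cos(math.radians(angle_deg)))
    if c < 1e-12:
        return float("inf")  # fully cross-polarized: no coupling in the ideal model
    return -20 * math.log10(c)

for angle in (0, 45, 60, 89):
    print(angle, "deg ->", round(polarization_loss_db(angle), 1), "dB")
```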
Polarization is largely predictable from antenna construction but, especially in directional
antennas, the polarization of side lobes can be quite different from that of the main
propagation lobe. For radio antennas, polarization corresponds to the orientation of the
radiating element in an antenna. A vertical omnidirectional WiFi antenna will have
vertical polarization (the most common type). An exception is a class of elongated
waveguide antennas in which vertically placed antennas are horizontally polarized. Many
commercial antennas are marked as to the polarization of their emitted signals.
Polarization is the sum of the E-plane orientations over time projected onto an imaginary
plane perpendicular to the direction of motion of the radio wave. In the most general
case, polarization is elliptical (the projection is oblong), meaning that the antenna varies
over time in the polarization of the radio waves it is emitting. Two special cases are linear
polarization (the ellipse collapses into a line) and circular polarization (in which the
ellipse varies maximally). In linear polarization the antenna compels the electric field of
the emitted radio wave to a particular orientation. Depending on the orientation of the
antenna mounting, the usual linear cases are horizontal and vertical polarization. In
circular polarization, the antenna continuously varies the electric field of the radio wave
through all possible values of its orientation with regard to the Earth's surface. Circular
polarizations, like elliptical ones, are classified as right-hand polarized or left-hand
polarized using a "thumb in the direction of the propagation" rule. Optical researchers
use the same rule of thumb, but point the thumb in the direction of the emitter rather than
the direction of propagation, so their convention is the opposite of the radio engineers'.
In practice, regardless of confusing terminology, it is important that linearly polarized
antennas be matched, lest the received signal strength be greatly reduced. So horizontal
should be used with horizontal and vertical with vertical. Intermediate matchings will
lose some signal strength, but not as much as a complete mismatch. Transmitters
mounted on vehicles with large motional freedom commonly use circularly polarized
antennas so that there will never be a complete mismatch with signals from other sources.
In the case of radar, this is often reflections from rain drops.
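The penalty for a linear polarization mismatch follows the familiar cos-squared law: the
polarization loss factor is PLF = cos²(θ) for a misalignment angle θ. A minimal sketch
(the function name is illustrative, not from the text):

```python
import math

def polarization_loss_db(angle_deg):
    """Polarization loss between two linear polarizations misaligned
    by angle_deg, in dB: PLF = cos^2(theta)."""
    plf = math.cos(math.radians(angle_deg)) ** 2
    if plf < 1e-30:          # numerically zero: complete mismatch
        return float("-inf")
    return 10 * math.log10(plf)

print(polarization_loss_db(0))   # matched antennas: 0.0 dB
print(polarization_loss_db(45))  # ~ -3 dB
print(polarization_loss_db(90))  # horizontal vs vertical: total null
```

This is why a 90 degree mismatch between linearly polarized antennas can break a link
outright, while intermediate angles cost only a few dB.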
12) Explain antenna transmission and reception.
Transmission and reception
All of the antenna parameters are expressed in terms of a transmission antenna, but are
identically applicable to a receiving antenna, due to reciprocity. Impedance, however, is
not applied in an obvious way; for impedance, the impedance at the load (where the
power is consumed) is most critical. For a transmitting antenna, this is the antenna itself.
For a receiving antenna, this is at the (radio) receiver rather than at the antenna. Tuning is
done by adjusting the length of an electrically long linear antenna to alter the electrical
resonance of the antenna.
Antenna tuning is done by adjusting an inductance or capacitance combined with the
active antenna (but distinct and separate from the active antenna). The inductance or
capacitance provides the reactance which combines with the inherent reactance of the
active antenna to establish a resonance in a circuit including the active antenna. The
established resonance is at a frequency other than the natural electrical resonant
frequency of the active antenna; adjusting the inductance or capacitance changes this
resonance.
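The tuning arithmetic above is just series resonance. An electrically short antenna looks
capacitive, and a series loading inductor cancels that reactance when X_L = X_C, i.e.
L = 1 / ((2πf)² C). A minimal sketch with illustrative (assumed) component values:

```python
import math

def loading_inductance(freq_hz, antenna_capacitance_f):
    """Series inductance that resonates a capacitive (electrically
    short) antenna at freq_hz: X_L = X_C  =>  L = 1/((2*pi*f)^2 * C)."""
    return 1.0 / ((2 * math.pi * freq_hz) ** 2 * antenna_capacitance_f)

# Assumed example: a short whip that looks like 25 pF, brought to
# resonance at 3.5 MHz, needs roughly 83 microhenries.
print(loading_inductance(3.5e6, 25e-12) * 1e6, "uH")
```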
Antennas used for transmission have a maximum power rating, beyond which heating,
arcing or sparking may occur in the components, which may cause them to be damaged
or destroyed. Raising this maximum power rating usually requires larger and heavier
components, which may require larger and heavier supporting structures. This is a
concern only for transmitting antennas, as the power received by an antenna rarely
exceeds the microwatt range.
Antennas designed specifically for reception might be optimized for noise rejection
capabilities. An antenna shield is a conductive or low-reluctance structure (such as a
wire, plate or grid) placed in the vicinity of an antenna to reduce the pickup of undesired
signals.
There are many variations of antennas. Below are a few basic models. More can be found
in Category:Radio frequency antenna types.
Antenna gain is often quoted with reference to an isotropic radiator, and is rated in dBi
(decibels with respect to an isotropic radiator).
The dipole antenna consists of two straight conductors arranged either horizontally or
vertically, with one end of each wire connected to the radio and the other end hanging
free in space. Since this is the simplest practical antenna, it is also used as a reference
model for other antennas; gain with respect to a dipole is labeled as dBd. Generally, the
dipole is considered to be omnidirectional in the plane perpendicular to the axis of the
antenna, but it has deep nulls in the directions of the axis. Variations of the dipole include
the folded dipole, the half wave antenna, the ground plane antenna, the whip, and the J-pole.
The Yagi-Uda antenna is a directional variation of the dipole with parasitic elements
added, which function similarly to adding a reflector and lenses (directors) to focus a
filament light bulb.
The random wire antenna is simply a very long (at least one quarter
wavelength) wire with one end connected to the radio and the other in free space,
arranged in any way most convenient for the space available. Folding will reduce
effectiveness and make theoretical analysis extremely difficult. (The added length
helps more than the folding typically hurts.) Typically, a random wire antenna will
also require an antenna tuner, as it might have a random impedance that varies
nonlinearly with frequency.
The Horn is used where high gain is needed, the wavelength is short
(microwave) and space is not an issue. Horns can be narrow band or wide band,
depending on their shape. A horn can be built for any frequency, but horns for
lower frequencies are typically impractical. Horns are also frequently used as
reference antennas.
The basic structure of matter involves charged particles bound together in many
different ways. When electromagnetic radiation is incident on matter, it causes the
charged particles to oscillate and gain energy. The ultimate fate of this energy depends on
the situation. It could be immediately re-radiated and appear as scattered, reflected, or
transmitted radiation. It may also get dissipated into other microscopic motions within the
matter, coming to thermal equilibrium and manifesting itself as thermal energy in the
material. With a few exceptions such as fluorescence, harmonic generation,
photochemical reactions and the photovoltaic effect, absorbed electromagnetic radiation
simply deposits its energy by heating the material. This happens both for infrared and
non-infrared radiation. Intense radio waves can thermally burn living tissue and can cook
food. In addition to infrared lasers, sufficiently intense visible and ultraviolet lasers can
also easily set paper afire. Ionizing electromagnetic radiation can create high-speed
electrons in a material and break chemical bonds, but after these electrons collide many
times with other atoms in the material eventually most of the energy gets downgraded to
thermal energy, this whole process happening in a tiny fraction of a second. That infrared
radiation is a form of heat and other electromagnetic radiation is not, is a widespread
misconception in physics. Any electromagnetic radiation can heat a material when it is
absorbed.
The inverse or time-reversed process of absorption is responsible for thermal radiation.
Much of the thermal energy in matter consists of random motion of charged particles, and
this energy can be radiated away from the matter. The resulting radiation may
subsequently be absorbed by another piece of matter, with the deposited energy heating
the material. Radiation is an important mechanism of heat transfer.
The electromagnetic radiation in an opaque cavity at thermal equilibrium is effectively a
form of thermal energy, having maximum radiation entropy. The thermodynamic
potentials of electromagnetic radiation can be well-defined as for matter. Thermal
radiation in a cavity has energy density (see Planck's law) of
u = aT^4, where a = 4σ/c is the radiation constant.
Differentiating the above with respect to temperature, we may say that the
electromagnetic radiation field has an effective volumetric heat capacity given by
c_v = du/dT = 4aT^3.
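The energy density of cavity radiation, u = aT^4 with a = 4σ/c, and its temperature
derivative, the effective volumetric heat capacity 4aT^3, can be evaluated numerically.
A minimal sketch (σ is the Stefan-Boltzmann constant):

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4
C = 299_792_458.0       # speed of light, m/s
A = 4 * SIGMA / C       # radiation constant, ~7.566e-16 J m^-3 K^-4

def energy_density(T):
    """u = a*T^4, energy density of cavity radiation in J/m^3."""
    return A * T ** 4

def volumetric_heat_capacity(T):
    """c_v = du/dT = 4*a*T^3, in J/(m^3 K)."""
    return 4 * A * T ** 3

# At room temperature the radiation energy density is tiny:
print(energy_density(300.0))            # ~6.13e-06 J/m^3
print(volumetric_heat_capacity(300.0))  # ~8.17e-08 J/(m^3 K)
```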
Electromagnetic spectrum
Main article: Electromagnetic spectrum
Legend:
γ = Gamma rays
HX = Hard X-rays
SX = Soft X-rays
EUV = Extreme ultraviolet
NUV = Near ultraviolet
Visible light
NIR = Near infrared
MIR = Moderate infrared
FIR = Far infrared
Radio waves:
EHF = Extremely high frequency (microwaves)
SHF = Super high frequency (microwaves)
UHF = Ultrahigh frequency (microwaves)
VHF = Very high frequency
HF = High frequency
MF = Medium frequency
LF = Low frequency
VLF = Very low frequency
VF = Voice frequency
The eye and brain decompose the received light into different shades and hues, and
through this not-entirely-understood psychophysical phenomenon, most people perceive
a bowl of fruit.
At most wavelengths, however, the information carried by electromagnetic radiation is
not directly detected by human senses. Natural sources produce EM radiation across the
spectrum, and our technology can also manipulate a broad range of wavelengths. Optical
fiber transmits light which, although not suitable for direct viewing, can carry data that
can be translated into sound or an image. The coding used in such data is similar to that
used with radio waves.
Radio waves
Main article: Radio waves
Radio waves can be made to carry information by varying a combination of the
amplitude, frequency and phase of the wave within a frequency band.
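Amplitude modulation is the simplest of these: the message signal varies the envelope of
a fixed-frequency carrier. A minimal sketch (the sample rate and frequencies are
illustrative, not from the text):

```python
import math

def am_sample(t, fc, fm, m=0.5):
    """One sample of an amplitude-modulated carrier:
    s(t) = (1 + m*cos(2*pi*fm*t)) * cos(2*pi*fc*t),
    with carrier fc, message tone fm and modulation index m."""
    return (1 + m * math.cos(2 * math.pi * fm * t)) * math.cos(2 * math.pi * fc * t)

# 10 kHz carrier, 1 kHz tone, sampled at 48 kHz for 1 ms.
# The envelope swings between 1-m and 1+m around the carrier amplitude.
samples = [am_sample(n / 48_000, fc=10_000, fm=1_000) for n in range(48)]
print(max(samples))  # 1.5 at t=0, where envelope and carrier peak together
```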
When EM radiation impinges upon a conductor, it couples to the conductor, travels along
it, and induces an electric current on the surface of that conductor by exciting the
electrons of the conducting material. This effect (the skin effect) is used in antennas. EM
radiation may also cause certain molecules to absorb energy and thus to heat up; this is
exploited in microwave ovens.
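The skin effect mentioned above has a characteristic depth, δ = sqrt(ρ / (π f μ)), below
which little current flows. A minimal sketch using copper's resistivity (an assumed
illustrative value):

```python
import math

MU0 = 4 * math.pi * 1e-7  # permeability of free space, H/m

def skin_depth(freq_hz, resistivity_ohm_m, mu_r=1.0):
    """Skin depth: delta = sqrt(rho / (pi * f * mu_r * mu0)), metres."""
    return math.sqrt(resistivity_ohm_m / (math.pi * freq_hz * mu_r * MU0))

# Copper (rho ~ 1.68e-8 ohm*m): at 1 MHz the current is confined to a
# surface layer only ~65 micrometres thick.
print(skin_depth(1e6, 1.68e-8) * 1e6, "um")
```

This is why antenna currents flow on the conductor surface, and why thin plating or
tubing works as well as solid rod at radio frequencies.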
Long distance HVDC lines carrying hydropower from Canada's Nelson river to this
station where it is converted to AC for use in Winnipeg's local grid
A high-voltage, direct current (HVDC) electric power transmission system uses direct
current for the bulk transmission of electrical power, in contrast with the more common
alternating current systems. For long-distance distribution, HVDC systems are less
expensive and suffer lower electrical losses. For shorter distances, the higher cost of DC
conversion equipment compared to an AC system may be warranted where other benefits
of direct current links are useful.
The modern form of HVDC transmission uses technology developed extensively in the
1930s in Sweden at ASEA. Early commercial installations included one in the Soviet
Union in 1951 between Moscow and Kashira, and a 10-20 MW system in Gotland,
Sweden in 1954.[1] The longest HVDC link in the world is currently the Inga-Shaba
1,700 km (1,100 mi) 600 MW link connecting the Inga Dam to the Shaba copper mine, in
the Democratic Republic of Congo.
HVDC interconnections in western Europe - red are existing links, green are under
construction, and blue are proposed. Many of these transfer power from renewable
sources such as hydro and wind. For names, see also the annotated version.
High voltages cannot be easily used in lighting and motors, and so transmission-level
voltage must be reduced to values compatible with end-use equipment. The transformer,
which only works with alternating current, is an efficient way to change voltages. The
competition between the DC of Thomas Edison and the AC of Nikola Tesla and George
Westinghouse was known as the War of Currents, with AC emerging victorious. Practical
manipulation of DC voltages only became possible with the development of high power
electronic devices such as mercury arc valves and later semiconductor devices, such as
thyristors, insulated-gate bipolar transistors (IGBTs), high power capable MOSFETs
(power metal-oxide-semiconductor field-effect transistors) and gate turn-off thyristors
(GTOs).
History of HVDC transmission
HVDC in 1971: this 150 kV mercury arc valve converted AC hydropower voltage for
transmission to distant cities from Manitoba Hydro generators.
The first long-distance transmission of electric power was demonstrated using direct
current in 1882 at the Miesbach-Munich Power Transmission, but only 2.5 kW was
transmitted. An early method of high-voltage DC transmission was developed by the
Swiss engineer Rene Thury[2] and his method was put into practice by 1889 in Italy by the
Acquedotto de Ferrari-Galliera company. This system used series-connected motor-
generator sets to increase voltage. Each set was insulated from ground and driven by
insulated shafts from a prime mover. The line was operated in constant current mode,
with up to 5,000 volts on each machine, some machines having double commutators to
reduce the voltage on each commutator. This system transmitted 630 kW at 14 kV DC
over a distance of 120 km.[3][4] The Moutiers-Lyon system transmitted 8,600 kW of
hydroelectric power a distance of 124 miles, including 6 miles of underground cable. The
system used eight series-connected generators with dual commutators for a total voltage
of 150,000 volts between the poles, and ran from about 1906 until 1936. Fifteen Thury
systems were in operation by 1913.[5] Other Thury systems operated up to the 1930s, but
the rotating machinery required high maintenance and had high energy loss. Various
other electromechanical devices were tested during the first half of the 20th century with
little commercial success.[6]
One conversion technique attempted for conversion of direct current from a high
transmission voltage to lower utilization voltage was to charge series-connected batteries,
then connect the batteries in parallel to serve distribution loads. [7] While at least two
commercial installations were tried around the turn of the 20th century, the technique was
not generally useful owing to the limited capacity of batteries, difficulties in switching
between series and parallel connections, and the inherent energy inefficiency of a battery
charge/discharge cycle.
The grid controlled mercury arc valve became available for power transmission during
the period 1920 to 1940. Starting in 1932, General Electric tested mercury-vapor valves
and a 12 kV DC transmission line, which also served to convert 40 Hz generation to serve
60 Hz loads, at Mechanicville, New York. In 1941, a 60 MW, ±200 kV, 115 km buried
cable link was designed for the city of Berlin using mercury arc valves (Elbe-Project),
but owing to the collapse of the German government in 1945 the project was never
completed. The nominal justification for the project was that, during wartime, a buried
cable would be less conspicuous as a bombing target. The equipment was moved to the
Soviet Union and was put into service there.
Introduction of the fully-static mercury arc valve to commercial service in 1954 marked
the beginning of the modern era of HVDC transmission. A HVDC-connection was
constructed by ASEA between the mainland of Sweden and the island Gotland. Mercury
arc valves were common in systems designed up to 1975, but since then, HVDC systems
use only solid-state devices. From 1975 to 2000, line-commutated converters (LCC)
using thyristor valves were relied on. According to experts such as Vijay Sood, the next
25 years may well be dominated by force-commutated converters, beginning with
capacitor-commutated converters (CCC) followed by self-commutating converters,
which have largely supplanted LCC use. Since the introduction of semiconductor
commutators, hundreds of HVDC sea cables have been laid and have operated with high
reliability, usually better than 96% of the time.
Advantages of HVDC over AC transmission
The advantage of HVDC is the ability to transmit large amounts of power over long
distances with lower capital costs and with lower losses than AC. Depending on voltage
level and construction details, losses are quoted as about 3% per 1,000 km. High-voltage
direct current transmission allows efficient use of energy sources remote from load
centers.
In a number of applications HVDC is more effective than AC transmission. Examples
include:
- Endpoint-to-endpoint long-haul bulk power transmission without intermediate
distribution systems
- Stabilizing a predominantly AC power-grid, without increasing prospective
short-circuit current
- Reducing line cost, since HVDC needs fewer conductors as there is no need to
support multiple phases. Also, thinner conductors can be used since HVDC does
not suffer from the skin effect
Long undersea cables have a high capacitance. While this has minimal effect for DC
transmission, the current required to charge and discharge the capacitance of the cable
causes additional I2R power losses when the cable is carrying AC. In addition, AC power
is lost to dielectric losses.
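The charging current that the cable capacitance draws under AC is simply I = 2πfCV,
and it vanishes for DC once the cable is charged. A minimal sketch with illustrative
(assumed) cable figures:

```python
import math

def charging_current(freq_hz, cap_per_km_f, length_km, volts_rms):
    """AC charging current drawn by cable capacitance: I = 2*pi*f*C*V.
    For DC (freq_hz = 0) this current is zero once the cable is charged."""
    c_total = cap_per_km_f * length_km
    return 2 * math.pi * freq_hz * c_total * volts_rms

# Assumed figures: 0.2 uF/km, 100 km undersea cable, 220 kV at 50 Hz.
# The capacitance alone draws ~1.4 kA, wasted as I^2*R heating.
print(charging_current(50, 0.2e-6, 100, 220e3), "A")
```

On a long enough AC cable this current can consume the entire conductor rating, which
is why long undersea links are almost always DC.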
HVDC can carry more power per conductor because, for a given power rating, the
constant voltage in a DC line is lower than the peak voltage in an AC line. In AC power,
the root mean square (RMS) voltage measurement is considered the standard, but RMS is
only about 71% of the peak voltage. The peak voltage of AC determines the actual
insulation thickness and conductor spacing. Because DC operates at a constant maximum
voltage, existing transmission line corridors with equally sized conductors and insulation
can carry about 41% more power per conductor into an area of high power consumption
than with AC, which can lower costs.
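The peak/RMS arithmetic above can be checked directly: for a sinusoid, the peak is
sqrt(2) times the RMS value, so a DC line run at the insulation-limited peak voltage
carries sqrt(2) times the power per conductor at equal current.

```python
import math

V_RMS = 1.0
V_PEAK = V_RMS * math.sqrt(2)  # sinusoidal AC: peak = sqrt(2) * RMS

# Insulation and spacing are sized for the peak voltage, so at equal
# current a DC conductor at V_PEAK delivers sqrt(2) ~ 1.41 times the
# power of the AC conductor; equivalently, AC delivers only ~71%.
print(V_RMS / V_PEAK)  # ~0.707 ("RMS is about 71% of peak")
print(V_PEAK / V_RMS)  # ~1.414
```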
Because HVDC allows power transmission between unsynchronised AC distribution
systems, it can help increase system stability, by preventing cascading failures from
propagating from one part of a wider power transmission grid to another. Changes in load
that would cause portions of an AC network to become unsynchronized and separate
would not similarly affect a DC link, and the power flow through the DC link would tend
to stabilize the AC network. The magnitude and direction of power flow through a DC
link can be directly commanded, and changed as needed to support the AC networks at
either end of the DC link. This has caused many power system operators to contemplate
wider use of HVDC technology for its stability benefits alone.
Disadvantages
The disadvantages of HVDC are in conversion, switching and control. Further operating
an HVDC scheme requires keeping many spare parts, which may be used exclusively in
one system as HVDC systems are less standardized than AC systems and the used
technology changes fast.
The required static inverters are expensive and have limited overload capacity. At smaller
transmission distances the losses in the static inverters may be bigger than in an AC
transmission line. The cost of the inverters may not be offset by reductions in line
construction cost and lower line loss. With two exceptions, all former mercury rectifiers
worldwide have been dismantled or replaced by thyristor units.
In contrast to AC systems, realizing multiterminal systems is complex, as is expanding
existing schemes to multiterminal systems. Controlling power flow in a multiterminal DC
system requires good communication between all the terminals; power flow must be
actively regulated by the control system instead of by the inherent properties of the
transmission line. High voltage DC circuit breakers are difficult to build because some
mechanism must be included in the circuit breaker to force current to zero, otherwise
arcing and contact wear would be too great to allow reliable switching. Multi-terminal
lines are rare. One is in operation on the Hydro-Québec - New England transmission
from Radisson to Sandy Pond. Another example is the Sardinia-mainland Italy link,
which was modified in 1989 to also provide power to the island of Corsica.
Costs of high voltage DC transmission
Normally manufacturers such as AREVA, Siemens and ABB do not state specific cost
information of a particular project since this is a commercial matter between the
manufacturer and the client.
Costs vary widely depending on the specifics of the project such as power rating, circuit
length, overhead vs. underwater route, land costs, and AC network improvements
required at either terminal. A detailed evaluation of DC vs. AC cost may be required
where there is no clear technical advantage to DC alone and only economics drives the
selection.
However some practitioners have given out some information that can be reasonably well
relied upon:
For an 8 GW 40 km link laid under the English Channel, the following are approximate
primary equipment costs for a 2000 MW 500 kV bipolar conventional HVDC link
(excluding way-leaving, on-shore reinforcement works, consenting, engineering,
insurance, etc.). So for an 8 GW capacity between England and France in four links,
little is left over from £750M for the installed works. Add another £200-300M for the
other works depending on additional onshore works required.
Rectifying and inverting
Two of three thyristor valve stacks used for long distance transmission of power from
Manitoba Hydro dams
Early static systems used mercury arc rectifiers, which were unreliable. Two HVDC
systems using mercury arc rectifiers were still in service as of 2008. The thyristor valve
was first used in HVDC systems in the 1960s. The thyristor is a solid-state semiconductor
device similar to the diode, but with an extra control terminal that is used to switch the
device on at a particular instant during the AC cycle. The insulated-gate bipolar transistor
(IGBT) is now also used and offers simpler control and reduced valve cost.
Because the voltages in HVDC systems, up to 800 kV in some cases, exceed the
breakdown voltages of the semiconductor devices, HVDC converters are built using large
numbers of semiconductors in series.
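The series-string sizing is straightforward: enough devices must be stacked that the
string withstands the system voltage with margin for redundancy and unequal voltage
sharing. A minimal sketch (the device rating and 50% margin are assumptions, not from
the text):

```python
import math

def devices_in_series(system_kv, device_kv, margin=1.5):
    """Rough count of series-connected devices so the string withstands
    system_kv with a safety margin (assumed 50% here, covering
    redundancy and unequal voltage sharing across the stack)."""
    return math.ceil(system_kv * margin / device_kv)

# An 800 kV pole built from (assumed) 8.5 kV thyristors:
print(devices_in_series(800, 8.5))  # 142 devices per valve string
```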
The low-voltage control circuits used to switch the thyristors on and off need to be
isolated from the high voltages present on the transmission lines. This is usually done
optically. In a hybrid control system, the low-voltage control electronics sends light
pulses along optical fibres to the high-side control electronics. Another system, called
direct light triggering, dispenses with the high-side electronics, instead using light pulses
from the control electronics to switch light-triggered thyristors (LTTs).
The wave reflected by earth can be considered as emitted by the image antenna
This means that the receptor "sees" the real antenna and, under the ground, the image of
the antenna reflected by the ground. If the ground has irregularities, the image will appear
fuzzy.
If the receiver is placed at some height above the ground, waves reflected by ground will
travel a little longer distance to arrive to the receiver than direct waves. The distance will
be the same only if the receiver is close to ground.
In the drawing at right, we have drawn the angle far bigger than in reality. The distance
between the antenna and its image is twice the height of the antenna above the ground.
The situation is a bit more complex because the reflection of electromagnetic waves
depends on the polarization of the incident wave. As the refractive index of the ground is
greater than the refractive index of the air (about 1), the direction of the component of
the electric field parallel to the ground inverts on reflection. This is equivalent to a phase
shift of π radians, or 180°. The vertical component of the electric field reflects without
changing direction. This sign inversion of the parallel component and the non-inversion
of the perpendicular component would also happen if the ground were a good electrical
conductor.
The sign inversion for the parallel-field case just changes a cosine to a sine in the
interference term, where the relevant distance is that between the antenna and its image
(twice the height of the antenna above the ground).
Radiation patterns of antennas and their images reflected by the ground: when the
polarization is vertical there is always a maximum at ground level; when the polarization
is horizontal there is always a zero at ground level.
For emitting and receiving antenna situated near the ground (in a building or on a mast)
far from each other, distances traveled by direct and reflected rays are nearly the same.
There is no induced phase shift. If the emission is polarized vertically, the two fields
(direct and reflected) add and there is a maximum of received signal. If the emission is
polarized horizontally, the two signals subtract and the received signal is minimal. This
is depicted in the image at right. In the case of vertical polarization, there is always a
maximum at earth level (left pattern). For horizontal polarization, there is always a
minimum at earth level. Note that in these drawings the ground is considered as a perfect
mirror, even for low angles of incidence. In these drawings the distance between the
antenna and its image is just a few wavelengths. For greater distances, the number of
lobes increases.
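The lobe structure described above comes from the extra distance the reflected ray
travels. In the flat-earth (two-ray) approximation, valid when the separation is much
larger than the antenna heights, that path difference is about 2*h_tx*h_rx/d. A minimal
sketch (the mast heights and wavelength are illustrative assumptions):

```python
def two_ray_path_difference(h_tx, h_rx, distance):
    """Extra distance travelled by the ground-reflected ray versus the
    direct ray, flat-earth approximation (distance >> heights):
    delta ~ 2 * h_tx * h_rx / distance."""
    return 2 * h_tx * h_rx / distance

def reflected_phase_deg(h_tx, h_rx, distance, wavelength):
    # Path-difference phase plus the 180 degree shift that a
    # horizontally polarized wave picks up on reflection.
    delta = two_ray_path_difference(h_tx, h_rx, distance)
    return (360 * delta / wavelength + 180) % 360

# Two 10 m masts 1 km apart at 2 m wavelength (150 MHz):
print(two_ray_path_difference(10, 10, 1000))   # 0.2 m extra path
print(reflected_phase_deg(10, 10, 1000, 2.0))  # ~216 degrees
```

As the separation or the heights change, this phase sweeps through multiples of 360
degrees, producing the alternating lobes and nulls in the patterns described above.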
Note that the situation is different and more complex if reflections in the ionosphere
occur. This happens over very long distances (thousands of kilometers). There is not a
direct ray but several reflected rays that add with different phase shifts.
This is the reason why almost all public address radio emissions have vertical
polarization. As public users are near ground, horizontal polarized emissions would be
poorly received. Observe household and automobile radio receivers. They all have
vertical antennas or horizontal ferrite antennas for vertical polarized emissions. In cases
where the receiving antenna must work in any position, as in mobile phones, the emitter
and receivers in base stations use circular polarized electromagnetic waves.
Classical (analog) television emissions are an exception. They are almost always
horizontally polarized, because the presence of buildings makes it unlikely that a good
emitter antenna image will appear. However, these same buildings reflect the
electromagnetic waves and can create ghost images. Using horizontal polarization,
reflections are attenuated because of the low reflection of electromagnetic waves whose
magnetic field is parallel to the dielectric surface near the Brewster's angle. Vertically
polarized analog television has been used in some rural areas. In digital terrestrial
television reflections are less annoying because of the type of modulation.
Mutual impedance and interaction between antennas
Mutual impedance between parallel dipoles not staggered. Curves Re and Im are the
resistive and reactive parts of the impedance.
Current circulating in any antenna induces currents in all others. One can postulate a
mutual impedance Z21 between two antennas that has the same significance as the
mutual inductance in ordinary coupled inductors. The mutual impedance between two
antennas is defined as:
Z21 = V2 / I1
where I1 is the current flowing in antenna 1 and V2 is the voltage that would have to be
applied to antenna 2 (with antenna 1 removed) to produce the current in antenna 2 that
was produced by antenna 1.
From this definition, the currents and voltages applied in a set of coupled antennas are
related by V1 = Z11 I1 + Z12 I2 and V2 = Z21 I1 + Z22 I2, where Z11 and Z22 are the
self-impedances of the two antennas and Z12 = Z21 is the mutual impedance between
them.
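For two coupled antennas, the defining relations V1 = Z11·I1 + Z12·I2 and
V2 = Z21·I1 + Z22·I2 form a 2x2 linear system that can be inverted for the currents. A
minimal sketch with illustrative impedance values (real parts only; the 73 ohm self
impedance is the familiar half-wave dipole figure, the 40 ohm mutual term is an assumed
close-spacing value):

```python
def coupled_currents(z11, z12, z21, z22, v1, v2):
    """Solve V = Z * I for two coupled antennas by inverting the
    2x2 impedance matrix (Cramer's rule)."""
    det = z11 * z22 - z12 * z21
    i1 = (v1 * z22 - v2 * z12) / det
    i2 = (z11 * v2 - z21 * v1) / det
    return i1, i2

# One driven dipole (1 V), one undriven parasitic element (0 V):
i1, i2 = coupled_currents(73.0, 40.0, 40.0, 73.0, v1=1.0, v2=0.0)
print(i1, i2)  # the parasitic element carries an induced current
```

The nonzero i2 in an element to which no voltage is applied is exactly the effect
exploited by the reflectors and directors of a Yagi-Uda antenna.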
Example antennas:
- A rooftop TV antenna, actually three Yagi antennas: the longest elements serve the low
band, the medium elements the high band, and the short elements the UHF band.
- A multi-band rotary directional antenna for amateur radio use.
- A Yagi-Uda beam antenna.
- A terrestrial microwave radio antenna.
- Examples of US 136-174 MHz base station antennas.
- A low-cost LF time signal receiver.
- A rotatable log-periodic shortwave antenna array for VHF and UHF, in Delano,
California.
- A telecommunications tower in Palmerston, Northern Territory, carrying broadcasting,
radio communications and three-sector telephone network base station antennas,
disguised as a palm tree.
Notes:
1. The term "antennas" has been standard since about 1950 (or earlier), when a
cornerstone textbook in this field, Antennas, was published by John D. Kraus of the Ohio
State University. Besides the title, Dr. Kraus noted this in a footnote on the first page of
his book. Insects may have "antennae," but this form is not used in technical contexts.
2. "Experiments in the Swiss Alps", Fred Gardiol & Yves Fournier, Microwave Journal,
February 2006, pp. 124-136.
3. Nikola Tesla said during the development of radio that "One of the
4. http://networkbits.net/wireless-printing/wireless-network-antenna-guide/. Retrieved
on 2008-04-08.
Electromagnetic radiation (sometimes abbreviated EMR and often simply called light)
is a ubiquitous phenomenon that takes the form of self-propagating waves in a vacuum or
in matter. It consists of electric and magnetic field components which oscillate in phase
perpendicular to each other and perpendicular to the direction of energy propagation.
Electromagnetic radiation is classified into several types according to the frequency of its
wave; these types include (in order of increasing frequency and decreasing wavelength):
radio waves, microwaves, terahertz radiation, infrared radiation, visible light, ultraviolet
radiation, X-rays and gamma rays. A small and somewhat variable window of frequencies
is sensed by the eyes of various organisms; this is what we call the visible spectrum, or
light.
EM radiation carries energy and momentum that may be imparted to matter with which it
interacts.
Theory
Shows three electromagnetic modes (blue, green and red) with a distance scale in
micrometres along the x-axis.
Electromagnetic waves were first postulated by James Clerk Maxwell and subsequently
confirmed by Heinrich Hertz. Maxwell derived a wave form of the electric and magnetic
equations, revealing the wave-like nature of electric and magnetic fields, and their
symmetry. Because the speed of EM waves predicted by the wave equation coincided
with the measured speed of light, Maxwell concluded that light itself is an EM wave.
According to Maxwell's equations, a time-varying electric field generates a magnetic
field and vice versa. Therefore, as an oscillating electric field generates an oscillating
magnetic field, the magnetic field in turn generates an oscillating electric field, and so on.
These oscillating fields together form an electromagnetic wave.
A quantum theory of the interaction between electromagnetic radiation and matter such as
electrons is described by the theory of quantum electrodynamics.
Properties
The particle nature of light is more obvious when the average number of photons in the
cube of the relevant wavelength is much smaller than 1. Upon absorption the quantum
nature of the light leads to clearly non-uniform deposition of energy.
There are experiments in which the wave and particle natures of electromagnetic waves
appear in the same experiment, such as the diffraction of a single photon. When a single
photon is sent through two slits, it passes through both of them interfering with itself, as
waves do, yet is detected by a photomultiplier or other sensitive detector only once.
Similar self-interference is observed when a single photon is sent into a Michelson
interferometer or other interferometers.
Wave model
The speed, frequency and wavelength of a wave are related by v = f λ,
where v is the speed of the wave (c in a vacuum, or less in other media), f is the
frequency and λ is the wavelength. As waves cross boundaries between different media,
their speeds change but their frequencies remain constant.
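The relation v = f λ, with frequency held constant across a boundary, is easy to evaluate.
A minimal sketch (the glass wave speed of 2e8 m/s is an assumed illustrative value):

```python
C = 299_792_458.0  # speed of light in a vacuum, m/s

def wavelength(freq_hz, speed=C):
    """lambda = v / f; for media other than vacuum, pass the slower speed."""
    return speed / freq_hz

# Green light (~600 THz) crossing from vacuum into glass (assumed
# speed ~2e8 m/s): the frequency is unchanged, the wavelength shortens.
print(wavelength(600e12))       # ~5.0e-7 m (500 nm) in vacuum
print(wavelength(600e12, 2e8))  # ~3.3e-7 m in the glass
```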
Interference is the superposition of two or more waves resulting in a new wave pattern. If
the fields have components in the same direction, they constructively interfere, while
opposite directions cause destructive interference.
The energy in electromagnetic waves is sometimes called radiant energy.
Particle model
Because energy of an EM wave is quantized, in the particle model of EM radiation, a
wave consists of discrete packets of energy, or quanta, called photons. The frequency of
the wave is proportional to the magnitude of the particle's energy. Moreover, because
photons are emitted and absorbed by charged particles, they act as transporters of energy.
The energy per photon can be calculated from the Planck-Einstein equation:[1]
E = h f
where h is Planck's constant and f is the frequency of the wave. Because the energy
levels of electrons in atoms are discrete, each element emits and absorbs its own
characteristic frequencies.
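The Planck-Einstein relation E = h f gives the energy of a single photon directly from its
frequency. A minimal sketch:

```python
H = 6.62607015e-34        # Planck constant, J*s
EV = 1.602176634e-19      # one electronvolt, J

def photon_energy_j(freq_hz):
    """Planck-Einstein relation: E = h * f, energy per photon in joules."""
    return H * freq_hz

# A green-light photon (~5.5e14 Hz) carries about 3.6e-19 J (~2.3 eV),
# enough to drive the atomic transitions behind line spectra.
E = photon_energy_j(5.5e14)
print(E, E / EV)
```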
Together, these effects explain the absorption spectra of light. The dark bands in the
spectrum are due to the atoms in the intervening medium absorbing different frequencies
of the light. The composition of the medium through which the light travels determines
the nature of the absorption spectrum. For instance, dark bands in the light emitted by a
distant star are due to the atoms in the star's atmosphere. These bands correspond to the
allowed energy levels in the atoms. A similar phenomenon occurs for emission. As the
electrons descend to lower energy levels, a spectrum is emitted that represents the jumps
between the energy levels of the electrons. This is manifested in the emission spectrum of
nebulae. Today, scientists use this phenomenon to observe what elements a certain star is
composed of. It is also used in the determination of the distance of a star, using the red
shift.
Speed of propagation
Main article: Speed of light
Any electric charge which accelerates, or any changing magnetic field, produces
electromagnetic radiation. Electromagnetic information about the charge travels at the
speed of light. Accurate treatment thus incorporates a concept known as retarded time (as
opposed to advanced time, which is unphysical in light of causality), which adds to the
expressions for the electrodynamic electric field and magnetic field. These extra terms are
responsible for electromagnetic radiation. When any wire (or other conducting object
such as an antenna) conducts alternating current, electromagnetic radiation is propagated
at the same frequency as the electric current. At the quantum level, electromagnetic
radiation is produced when the wavepacket of a charged particle oscillates or otherwise
accelerates. Charged particles in a stationary state do not move, but a superposition of
such states may result in oscillation, which is responsible for the phenomenon of radiative
transition between quantum states of a charged particle.
After repeated interactions with other atoms in the material, most of the energy is eventually downgraded to thermal energy; this whole process happens in a tiny fraction of a second. The notion that infrared radiation is a form of heat while other electromagnetic radiation is not is a widespread misconception in physics. Any electromagnetic radiation can heat a material when it is absorbed.
The inverse or time-reversed process of absorption is responsible for thermal radiation.
Much of the thermal energy in matter consists of random motion of charged particles, and
this energy can be radiated away from the matter. The resulting radiation may
subsequently be absorbed by another piece of matter, with the deposited energy heating
the material. Radiation is an important mechanism of heat transfer.
The electromagnetic radiation in an opaque cavity at thermal equilibrium is effectively a
form of thermal energy, having maximum radiation entropy. The thermodynamic
potentials of electromagnetic radiation can be well-defined as for matter. Thermal
radiation in a cavity has energy density (see Planck's Law) of
u = (4σ/c) T⁴
where σ is the Stefan–Boltzmann constant and T the absolute temperature. Differentiating the above with respect to temperature, we may say that the electromagnetic radiation field has an effective volumetric heat capacity given by
∂u/∂T = (16σ/c) T³
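A short sketch of these two relations, assuming the standard radiation constant a = 4σ/c; the function names are illustrative:

```python
# Energy density of cavity (blackbody) radiation, u = a*T^4, and its
# effective volumetric heat capacity, du/dT = 4*a*T^3.
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4
C = 299_792_458.0       # speed of light, m/s
A = 4 * SIGMA / C       # radiation constant a, J m^-3 K^-4

def energy_density(T):
    """Radiation energy density (J/m^3) in a cavity at temperature T (K)."""
    return A * T**4

def heat_capacity(T):
    """Effective volumetric heat capacity (J m^-3 K^-1) of the field."""
    return 4 * A * T**3
```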
Electromagnetic spectrum
Main article: Electromagnetic spectrum
Legend:
γ = Gamma rays
HX = Hard X-rays
SX = Soft X-rays
EUV = Extreme ultraviolet
NUV = Near ultraviolet
Visible light
NIR = Near infrared
MIR = Moderate infrared
FIR = Far infrared
Radio waves:
EHF = Extremely high frequency (Microwaves)
SHF = Super high frequency (Microwaves)
UHF = Ultrahigh frequency (Microwaves)
VHF = Very high frequency
HF = High frequency
MF = Medium frequency
LF = Low frequency
VLF = Very low frequency
VF = Voice frequency
Light
Main article: Light
EM radiation with a wavelength between approximately 400 nm and 700 nm is detected
by the human eye and perceived as visible light. Other wavelengths, especially near infrared (longer than 700 nm) and ultraviolet (shorter than 400 nm), are also sometimes referred to as light, especially when visibility to humans is not relevant.
If radiation having a frequency in the visible region of the EM spectrum reflects off of an
object, say, a bowl of fruit, and then strikes our eyes, this results in our visual perception
of the scene. Our brain's visual system processes the multitude of reflected frequencies
into different shades and hues, and through this not-entirely-understood psychophysical
phenomenon, most people perceive a bowl of fruit.
At most wavelengths, however, the information carried by electromagnetic radiation is
not directly detected by human senses. Natural sources produce EM radiation across the
spectrum, and our technology can also manipulate a broad range of wavelengths. Optical
fiber transmits light which, although not suitable for direct viewing, can carry data that
can be translated into sound or an image. The coding used in such data is similar to that
used with radio waves.
Radio waves
Main article: Radio waves
Radio waves can be made to carry information by varying a combination of the
amplitude, frequency and phase of the wave within a frequency band.
When EM radiation impinges upon a conductor, it couples to the conductor, travels along
it, and induces an electric current on the surface of that conductor by exciting the
electrons of the conducting material. This effect (the skin effect) is used in antennas. EM
radiation may also cause certain molecules to absorb energy and thus to heat up; this is
exploited in microwave ovens.
Derivation
Electromagnetic waves as a general phenomenon were predicted by the classical laws of
electricity and magnetism, known as Maxwell's equations. If you inspect Maxwell's
equations without sources (charges or currents) then you will find that, along with the
possibility of nothing happening, the theory will also admit nontrivial solutions of
changing electric and magnetic fields. Beginning with Maxwell's equations for free
space:
where
is a vector differential operator (see Del).
One solution,
,
is trivial.
To see the more interesting one, we utilize vector identities, which work for any vector, as
follows:
To see how we can use this take the curl of equation (2):
Equations (6) and (7) are equal, so this results in a vector-valued differential equation for
the electric field, namely
Applying a similar pattern results in similar differential equation for the magnetic field:
.
These differential equations are equivalent to the wave equation:
where
c0 is the speed of the wave in free space and
f describes a displacement
Or more simply:
where
is d'Alembertian:
Notice that in the case of the electric and magnetic fields, the speed is:
This, as it turns out, is the speed of light in free space. Maxwell's equations have unified the permittivity of free space ε₀, the permeability of free space μ₀, and the speed of light itself, c₀. Before this derivation it was not known that there was such a strong relationship between light and electricity and magnetism.
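The relation c₀ = 1/√(μ₀ε₀) can be checked numerically; the CODATA constant values below are standard, and the script itself is only an illustration:

```python
import math

# Check that 1/sqrt(mu0 * eps0) reproduces the speed of light in free space.
MU0 = 1.25663706212e-6   # permeability of free space, H/m
EPS0 = 8.8541878128e-12  # permittivity of free space, F/m

c0 = 1.0 / math.sqrt(MU0 * EPS0)
print(c0)  # ~2.998e8 m/s
```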
But these are only two equations and we started with four, so there is still more
information pertaining to these waves hidden within Maxwell's equations. Let's consider
a generic vector wave for the electric field.
E(r, t) = E₀ f(k̂ · r − c₀t)

Here E₀ is a constant amplitude, f is any second-differentiable function, and k̂ is a unit vector for a generic wave traveling in the k̂ direction.

This form will satisfy the wave equation, but will it satisfy all of Maxwell's equations, and with what corresponding magnetic field?

The first of Maxwell's equations (∇ · E = 0) implies that the electric field is orthogonal to the direction the wave propagates.

The second of Maxwell's equations yields the magnetic field:

B(r, t) = (1/c₀) k̂ × E(r, t)

The remaining equations will be satisfied by this choice of E and B.
Not only are the electric and magnetic field waves traveling at the speed of light, but they
have a special restricted orientation and proportional magnitudes, E0 = c0B0, which can be
seen immediately from the Poynting vector. The electric field, magnetic field, and direction of wave propagation are all orthogonal, and the wave propagates in the same direction as E × B.
From the viewpoint of an electromagnetic wave traveling forward, the electric field might
be oscillating up and down, while the magnetic field oscillates right and left; but this
picture can be rotated with the electric field oscillating right and left and the magnetic
field oscillating down and up. This is a different solution that is traveling in the same
direction. This arbitrariness in the orientation with respect to propagation direction is
known as polarization.
UNIT-III(MODULATION TECHNIQUES)
PART-A
1) In telecommunication, a communications system is a collection of individual communications networks.
2) A communications subsystem is a functional unit or operational assembly that is smaller than the larger assembly under consideration.
3) It also contains transponders, and the communication satellite system receives signals from the antenna subsystem.
(PART-B)
7) What is communication?
In telecommunication, a communications system is a collection of individual
communications networks, transmission systems, relay stations, tributary stations, and
data terminal equipment (DTE) usually capable of interconnection and interoperation to
form an integrated whole. The components of a communications system serve a common
purpose, are technically compatible, use common procedures, respond to controls, and
operate in unison. Telecommunications is a method of communication (e.g., for sports
broadcasting, mass media, journalism, etc.).
8) What is transmitter?
In the early days of radio engineering, radio frequency energy was generated using spark gaps, arcs, and mechanical alternators such as the Alexanderson alternator (of which a rare example survives at the SAQ transmitter in Grimeton, Sweden). In the 1920s electronic transmitters, based on vacuum tubes, began to be used.
In broadcasting and telecommunication, the part which contains the oscillator, modulator, and sometimes audio processor is called the exciter. Confusingly, the high-power amplifier which the exciter then feeds is often called the "transmitter" by broadcast engineers. The final output is given as transmitter power output (TPO), although this is not what most stations are rated by.
(PART-C)
11) Explain about communication?
Communications system
terminal of the satellite link, and (c) an optical fiber cable with its driver and receiver in
either of the interconnect facilities. Communication subsystem (b) basically consists of a
receiver, a frequency translator and a transmitter. It also contains transponders, and the communication satellite system receives signals from the antenna subsystem.
Examples
An optical communication system is any form of telecommunication that uses light as the
transmission medium. Optical communications consists of a transmitter, which encodes a
message into an optical signal, a channel, which carries the signal to its destination, and a
receiver, which reproduces the message from the received optical signal. Fiber-optic
communication systems transmit information from one place to another by sending light
through an optical fiber. The light forms an electromagnetic carrier wave that is
modulated to carry information. First developed in the 1970s, fiber-optic communication
systems have revolutionized the telecommunications industry and played a major role in
the advent of the Information Age. Because of its advantages over electrical transmission,
the use of optical fiber has largely replaced copper wire communications in core networks
in the developed world.
A radio communication system is composed of several communications subsystems that
give exterior communications capabilities.[1][2][3] A radio communication system
comprises a transmitting conductor[4] in which electrical oscillations[5][6][7] or currents are
produced and which is arranged to cause such currents or oscillations to be propagated
through the free space medium from one point to another remote therefrom and a
receiving conductor[4] at such distant point adapted to be excited by the oscillations or
currents propagated from the transmitter.[8][9][10][11]
Power line communications systems operate by impressing a modulated carrier signal on
the wiring system. Different types of powerline communications use different frequency
bands, depending on the signal transmission characteristics of the power wiring used.
Since the power wiring system was originally intended for transmission of AC power, the
power wire circuits have only a limited ability to carry higher frequencies. The
propagation problem is a limiting factor for each type of power line communications.
A duplex communication system is a system composed of two connected parties or
devices which can communicate with one another in both directions. The term duplex is
not used when describing communication between more than two parties or devices.
Duplex systems are employed in nearly all communications networks, either to allow for
a communication "two-way street" between two connected parties or to provide a
"reverse path" for the monitoring and remote adjustment of equipment in the field.
A tactical communications system is a communications system that (a) is used within, or
in direct support of, tactical forces, (b) is designed to meet the requirements of changing
tactical situations and varying environmental conditions, (c) provides securable
communications, such as voice, data, and video, among mobile users to facilitate
command and control within, and in support of, tactical forces, and (d) usually requires
extremely short installation times, usually on the order of hours, in order to meet the
requirements of frequent relocation.
3) A transmitter is any object (source) which sends information to an observer (receiver). When used in
this more general sense, vocal cords may also be considered an example of a transmitter.
In radio electronics and broadcasting, a transmitter usually has a power supply, an
oscillator, a modulator, and amplifiers for audio frequency (AF) and radio frequency
(RF). The modulator is the device which piggybacks (or modulates) the signal
information onto the carrier frequency, which is then broadcast. Sometimes a device (for
example, a cell phone) contains both a transmitter and a radio receiver, with the
combined unit referred to as a transceiver. In amateur radio, a transmitter can be a
separate piece of electronic gear or a subset of a transceiver, and is often referred to by the abbreviated form "XMTR".[1] In most parts of the world, use of transmitters is strictly
controlled by laws since the potential for dangerous interference (for example to
emergency communications) is considerable. In consumer electronics, a common device
is a Personal FM transmitter, a very low power transmitter generally designed to take a
simple audio source like an iPod, CD player, etc. and transmit it a few feet to a standard
FM radio receiver. Most personal FM transmitters in the USA fall under Part 15 of the
FCC regulations to avoid any user licensing requirements.
In industrial process control, a "transmitter" is any device which converts
measurements from a sensor into a signal to be received, usually sent via wires, by
some display or control device located a distance away. Typically in process control
applications the "transmitter" will output an analog 4-20 mA current loop or digital
protocol to represent a measured variable within a range. For example, a pressure
transmitter might use 4 mA as a representation for 50 psig of pressure and 20 mA as
1000 psig of pressure and any value in between proportionately ranged between 50
and 1000 psig. (A 0-4 mA signal indicates a system error.) Older-technology transmitters used pneumatic pressure signals, typically ranging from 3 to 15 psig (20 to 100 kPa).
13) Explain the need for modulation?
interference can be minimised, and in addition, those in marginal reception areas can use
more efficient grouped receiving antennas. Unfortunately, in the UK, this carefully
planned system has had to be compromised with the advent of digital broadcasting which
(during the changeover period at least) requires yet more channel space, and
consequently the additional digital broadcast channels cannot always be fitted within the
transmitter's existing group. Thus many UK transmitters have become "wideband" with
the consequent need for replacement of receiving antennas (see external links). Once the
Digital Switch Over (DSO) occurs, the plan is that most transmitters will revert to their original groups. Further complication arises when adjacent transmitters have to
transmit on the same frequency and under these circumstances the broadcast radiation
patterns are attenuated in the relevant direction(s). A good example of this is in the United
Kingdom, where the Waltham transmitting station broadcasts at high power on the same
frequencies as the Sandy Heath transmitting station's high power transmissions, with the
two being only 50 miles apart. Thus Waltham's antenna array[1] does not broadcast these
two channels in the direction of Sandy Heath and vice versa.
Where a particular service needs to have wide coverage, this is usually achieved by using
multiple transmitters at different locations. Usually, these transmitters will operate at
different frequencies to avoid interference where coverage overlaps. Examples include
national broadcasting networks and cellular networks. In the latter, frequency switching is
automatically done by the receiver as necessary, in the former, manual retuning is more
common (though the Radio Data System is an example of automatic frequency switching
in broadcast networks). Another system for extending coverage using multiple
transmitters is quasi-synchronous transmission, but this is rarely used nowadays.
Main and relay (repeater) transmitters
Transmitting stations are usually either classified as main stations or relay stations (also
known as repeaters or translators).
Main stations are defined as those that generate their own modulated output signal from a
baseband (unmodulated) input. Usually main stations operate at high power and cover
large areas.
Relay stations (translators) take an already modulated input signal, usually by direct
reception of a parent station off the air, and simply rebroadcast it on another frequency.
Usually relay stations operate at medium or low power, and are used to fill in pockets of
poor reception within, or at the fringe of, the service area of a parent main station.
Note that a main station may also take its input signal directly off-air from another
station, however this signal would be fully demodulated to baseband first, processed, and
then remodulated for transmission.
14) Explain about power relations in AM wave?
planning and construction, and high-power transmitters especially in the long- and
medium-wave ranges can be received over long distances, such facilities were often
mentioned in propaganda. Other examples were the Deutschlandsender Herzberg/Elster
and the Warsaw Radio Mast.
14) What are the advantages of FM over AM?
FM has the following advantages over AM:
i) The amplitude of an FM signal is constant and independent of the depth of modulation, so transmitter power remains constant in FM, whereas it varies in AM.
ii) Since the amplitude of FM is constant, noise interference is minimal in FM. Any noise superimposed on the amplitude can be removed with amplitude limiters, whereas it is difficult to remove amplitude variations due to noise in AM.
iii) The depth of modulation is limited in AM, but in FM it can be increased to any value by increasing the deviation, without causing any distortion in the FM signal.
iv) Since guard bands are provided in FM, there is less possibility of adjacent-channel interference.
v) Since space waves are used for FM, the radius of propagation is limited to line of sight. Hence it is possible to operate several independent transmitters on the same frequency with minimal interference.
The AM signal has the form
Vam(t) = [A + Vm(t)] cos(2πfct)
where cos(2πfct) is the carrier and Vm(t) is the modulating signal. In our application fc = 915 MHz and Vm(t) is the audio signal. We can take Vm(t) = Vm cos(2πfmt), where fm is the highest frequency component in the message signal. For transmitting audio, fm = 20 kHz. The constant A is chosen such that Vam(t) never becomes negative. Thus Vam(t) can be modified to
Vam(t) = A cos(2πfct) + (Vm/2) cos(2π(fc + fm)t) + (Vm/2) cos(2π(fc − fm)t)
What has occurred is that the low frequency message signal has been translated to a much
higher frequency range for greater transmission efficiency. Either of the side bands can be
used to recover the message signal at the demodulator. One simply needs to filter out the
unwanted side band before sending the signal to the demodulation stage. This modulation
scheme is typically implemented in circuitry by a component called a Double-Side-Band
mixer. The mixer physically multiplies the carrier wave, driven by an oscillator, with the
message signal to produce the AM signal.
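The multiplication the mixer performs can be sketched in a few lines. The frequencies below are scaled-down illustrative values (not the 915 MHz carrier of the project), chosen only so the waveform is easy to sample:

```python
import math

# Sketch of Vam(t) = [A + Vm(t)] * cos(2*pi*fc*t). The fc/fm values are
# illustrative assumptions, not the actual 915 MHz / 20 kHz figures above.
fs, fc, fm = 1_000_000, 10_000, 1_000  # sample rate, carrier, message (Hz)
A, Vm = 2.0, 1.0                        # A > Vm keeps the envelope non-negative

def am_sample(t):
    message = Vm * math.cos(2 * math.pi * fm * t)
    return (A + message) * math.cos(2 * math.pi * fc * t)

samples = [am_sample(n / fs) for n in range(5000)]  # 5 ms of signal
print(max(samples))  # peaks at A + Vm = 3.0
```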
The RC time constant must satisfy 2πfc >> 1/(RC) > 2πfm, where fm is the frequency of the audio signal (20 kHz). Once we found a value for RC (≈ 2.34 × 10⁻⁸ s), we assumed a value of 1000 Ω for R. This gave us a value of 23 pF for C. In our implementation of the
circuit, we used a 24 pF capacitor because that was the closest standard value. The
circuit was constructed on a protoboard, again trying to keep the leads between
components as short as possible. We also used SMA connectors for the input and output
of this circuit.
Demodulator and Envelope Detector
(Written by Farial Mahbub)
Amplitude Modulation (AM) refers to the method of adjusting an electromagnetic carrier wave by varying its amplitude in accordance with the analogue signal to be transmitted. There are two essential methods that are used to demodulate AM signals, and
in this portion of the report we will discuss both. The figure below represents the circuit
used as the Demodulator in this project.
The first method of demodulation is using the envelope detector. The envelope
detector is essentially made up of a rectifier and a low pass filter (see figure below). In
this project a diode was used as the rectifier to pass current in one direction only. In order
to calculate the value of the RC time constant to be used, the following equation is used:
2πfc >> 1/(RC) > 2πfm
Vr = Vp (1 − e^(−1/(fc·RC)))
where fc and fm are the carrier frequency and the modulating frequency respectively. The reason the inverse of the time constant is significantly smaller than the carrier frequency is to keep the ripple created minimal. The second equation shown above defines the peak-to-peak value of the ripple, Vr, of the rectified signal, where Vp is the peak value of the incoming signal and fc is the frequency of the signal.
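A quick sketch of the ripple estimate above, using the 1 kΩ / 24 pF component values from the text; Vp = 1 V is an assumed peak value for illustration:

```python
import math

# Ripple of the rectified signal: Vr = Vp * (1 - exp(-1/(fc*RC))).
def ripple(vp, fc, rc):
    """Peak-to-peak ripple voltage of an envelope detector output."""
    return vp * (1.0 - math.exp(-1.0 / (fc * rc)))

rc = 1_000 * 24e-12  # R * C = 2.4e-8 s
print(ripple(1.0, 915e6, rc))  # a few percent of the peak -> small ripple
```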
The second method for demodulation that we did not choose to implement is the
product detector. This circuit essentially multiplies the incoming signal by the signal of a
local oscillator which is at the same frequency and phase as the carrier signal. After
filtering the product, only the original audio signal remains (works for AM as well as
Single Side Band Modulation, SSBM).
The output of the above described circuits can be seen graphically in the figure below.
The Signal 1 is the modulated signal that is applied to the Detector. The diode present in
the circuit demodulates the AM signal by allowing its carrier to multiply with its
sidebands. The diode passes current in only one direction and its output voltage is
proportional to the square of its input voltage. Thus, if an input voltage that varies
according to the modulation envelope is used, the information present in the sidebands
would be successfully recovered. Once the signal is rectified (after it passes through the
diode), it resembles Signal 2. The next component in the circuit is the low-pass filter (the
resistor and capacitor in parallel) and this filters out the RF and turns it into Signal 3. The
coupling capacitor in the circuit is present to eliminate the DC component in the received
signal thus centering the information signal around the zero axis as in Signal 4.
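The rectifier-plus-low-pass behaviour described above can be sketched with an ideal-diode model. All frequencies and the RC value here are scaled-down assumptions chosen for easy simulation, not the project's actual circuit:

```python
import math

# Ideal-diode envelope detector: the capacitor charges to each carrier peak
# and discharges through R between peaks, tracing out the envelope.
fs, fc, fm = 1_000_000, 10_000, 100  # sample rate, carrier, message (Hz)
rc = 1e-3  # chosen between 1/wc (~1.6e-5 s) and 1/wm (~1.6e-3 s)

def am(t, a=2.0, vm=1.0):
    return (a + vm * math.cos(2 * math.pi * fm * t)) * math.cos(2 * math.pi * fc * t)

out, v = [], 0.0
dt = 1.0 / fs
for n in range(20_000):  # 20 ms of signal
    x = am(n * dt)
    if x > v:
        v = x                    # diode conducts: capacitor charges to the peak
    else:
        v *= math.exp(-dt / rc)  # diode off: capacitor discharges through R
    out.append(v)

# The output tracks the envelope, which swings between 1.0 and 3.0.
print(min(out[5000:]), max(out[5000:]))
```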
Measured sideband levels:
fc + 3 kHz: 45.55 dBm
fc - 3 kHz: 45.16 dBm
S (dBc): -26.02, -6.02, 0.0006

f (kHz)    27       33     86.2    92.6
P (dBm)    -22.71   -23    -53.3   -56.9

[Plot: measured DSB-SC spectrum, power (dBm) vs. frequency (kHz), 0-120 kHz]
The measured spectrum of the DSB-SC is comparably close to the theoretical model.
We now varied the input signal and tested the output. The input modulating signal
amplitude was varied and both the input and output voltage were measured.
Vin,pp (V)   0    2     4     6     8    10
Vo,pp (V)    0    0.8   1.5   2.3   3    4
The output amplitude is proportional to the modulating (input) signal amplitude, scaled by the amplitude of the carrier.
Keeping the input amplitude constant at 10 Vpp, we varied the frequency of the modulating input signal from 4 kHz to 10 kHz in increments of 2 kHz and measured the corresponding amplitudes.
f (kHz)      4     6     8     10
Vo,pp (V)    9.5   9.5   9.5   9.5
From the data it appears that the frequency of the modulating signal does not affect the
output amplitude.
Double Sideband Large Carrier (DSB-LC) Modulation
We now apply a DC offset to get a DSB-LC signal. On adding a DC offset to the input we get an equation of the form [A + f(t)], which forms the basic form of DSB-LC, i.e. [A + f(t)] cos(ωct).
We continue to use an input modulating signal of 3kHz, 2Vp-p sinusoidal with a +2V DC
offset. Measurements of the output were taken on both the oscilloscope and spectrum
analyzer. Data of the spectrum was taken twice, once with a span from 0-120kHz and
another with a span of 10-50kHz.
Span 0-120 kHz:
f (kHz)    27      30      33      86.2     89.2    92.2
P (dBm)    -36.4   -24.6   -36.8   -67.07   -56.3   -70.08

Span 10-50 kHz:
f (kHz)    26.6     29.6    32.7
P (dBm)    -36.64   -24.4   -36.9
[Plot: DSB-LC output spectrum, power (dBm) vs. frequency (kHz), span 0-120 kHz]
[Plot: DSB-LC output spectrum, power (dBm) vs. frequency (kHz), span 10-50 kHz]
Comparing the spectrum of the DSB-LC with the theoretical model, they are almost the
same.
Looking at the modulated signal at the output, we obtained values for Emax and Emin
and computed the modulation index.
Emax = 2.2 Vpp
Emin = 0.8 Vpp
m = 0.468
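The index quoted above follows from the standard oscilloscope formula m = (Emax − Emin)/(Emax + Emin); this short sketch just re-checks the arithmetic:

```python
# Modulation index from oscilloscope envelope readings.
def mod_index(emax, emin):
    return (emax - emin) / (emax + emin)

print(round(mod_index(2.2, 0.8), 3))  # 0.467, matching the ~0.468 above
```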
The input amplitude at the carrier input was varied from 0-10Vpp in 1V increments and
measurements of the max and min energies were taken from the oscilloscope and the
power of the carrier and two sidebands were taken from the spectrum analyzer. Values of
m and S were calculated from the energies and power values.
Pl (dBm)    -56.3      -37.09     -31.1    -27.1      -24.6      -22.7
Pc (dBm)    -24.3      -24.4      -24.8    -24.9      -25.1      -25.3
Pr (dBm)    -56.6      -37.4      -31.4    -27.5      -25.13     -23
m           0.032258   0.073171   0.9      1.571429   2.230769   2.769231
S (dBc)     -32.15     -12.845    -6.45    -2.4       0.235      2.45
Plotting the modulation index vs. the input amplitude, the data shows that there is a linear
relationship. This makes sense since the input amplitude has a direct relationship with
Emax and the values of the m were calculated from the Emax and Emin values.
Looking at the graph for sideband power vs. modulation index, it looks close to
theoretical models.
Using a 3kHz, 2Vpp sinusoidal signal as our input, the DC offset was varied from -6 to
+6V in 1V increments. Measurements were taken from the oscilloscope of Emax and
Emin and the modulation indices were calculated.
DC Offset (V)   Emax (V)   Emin (V)   m
-6              5.2        4          0.130435
-5              4.5        3.2        0.168831
-4              3.8        2.4        0.225806
-3              3          1.7        0.276596
-2              2.3        0.9        0.4375
-1              1.6        0.1        0.882353
0               0.8        0          1
1               1.8        0.1        0.894737
2               2.3        1          0.393939
3               3          1.7        0.276596
4               3.8        2.5        0.206349
5               4.5        3.3        0.153846
6               5.1        3.9        0.133333
The relationship between the modulation index and the DC offset appears to be exponential, that is, m = e^(−k|x|), where x is the DC offset and k is a constant.
Over-Modulation with DSB-LC:
In the Testing section, the modulation index m is merely defined as a parameter which determines the amount of modulation. However, we have to ask: what degree of modulation is required to establish a desirable AM communication link?
The answer is to maintain m < 1.0 (100%).
This is important as to ensure successful retrieval of the original transmitted information
at the receiver end. Note that by performing the demodulation process (reverse of
modulation) the message signal is simply being traced out from the envelope of the
modulated signal. To have a quick recap, amplitude of the modulated signal varies in
proportion to the amplitude of the information signal.
Thus, once m > 1.0 (100%), envelope distortion will occur and the waveform is said to be overmodulated. Under these circumstances Ac is not large enough, so s(t) is no longer proportional to sm(t), hence distortion of the desired message signal. Here Ac is the DC component of the amplitude of the carrier signal, s(t) is the AM signal, and sm(t) is the modulating signal.
(Source: http://foe.mmu.edu.my/course/etm3046/notes/AM(-DSSC)ETM2042.doc, Chapter 3: Amplitude Modulation, Communications I - ETM2042)
Envelope detectors only work satisfactorily when we ensure this inequality is true. If RC is too close to the inverse of the modulation frequency, there is less ripple but significant negative-peak clipping.
To find out why it is inefficient, it is necessary to look at a little theory behind the
operation of AM. When a radio-frequency signal is modulated by an audio signal, the
envelope will vary. The level of modulation can be increased to a level where the
envelope falls to zero and then rises to twice the unmodulated level. Any increase above
this will cause distortion because the envelope cannot fall below zero. As this is the
maximum amount of modulation possible, it is called 100 per cent modulation (Figure 35).
the total bandwidth used is equal to twice the top frequency that is transmitted. In the
crowded conditions found on many of the short wave bands today this is a waste of
space, and other modes of transmission that take up less space are often used.
The value of the modulation index must not be allowed to exceed 1 (i.e. 100 per cent in
terms of the depth of modulation), otherwise the envelope becomes distorted and the
signal will spread out either side of the wanted channel, causing interference to other
users.
If you could watch the X-ray beam from some given position, you would see 1,000,000,000,000,000,000 (that is, 10¹⁸) wave crests pass you every second.
For every electromagnetic wave, the product of the wavelength and frequency equals a constant, the speed of light (c). In other words, λf = c. This equation shows that
wavelength and frequency have a reciprocal relationship to each other. As one increases,
the other must decrease. Gamma rays, for example, have very small wavelengths and
very large frequencies. Radio waves, by contrast, have large wavelengths and very small
frequencies.
As shown in the accompanying figure, the whole range of the electromagnetic spectrum
can be divided up into various regions based on wavelength and frequency.
Electromagnetic radiation with very short wavelengths and high frequencies fall into the
cosmic ray/gamma ray/ultraviolet radiation region. At the other end of the spectrum are
the long wavelength, low frequency forms of radiation: radio, radar, and microwaves. In
the middle of the range is visible light.
Properties of waves in different regions of the spectrum are commonly described by
different notation. Visible radiation is usually described by its wavelength, while X rays
are described by their energy. All of these schemes are equivalent, however; they are just
different ways of describing the same properties.
Words to Know
Electromagnetic radiation: Radiation that travels through a vacuum with the speed of
light and that has properties of both an electric and magnetic wave.
Frequency: The number of waves that pass a given point in a given period of time.
Hertz: The unit of frequency; a measure of the number of waves that pass a given point
per second of time.
Wavelength: The distance between two successive peaks or crests in a wave.
The boundaries between types of electromagnetic radiation are rather loose. Thus, a wave
with a frequency of 8 × 10¹⁴ hertz could be described as a form of very deep violet visible
light or as a form of ultraviolet radiation.
Applications
The various forms of electromagnetic radiation are used everywhere in the world around
us. Radio waves are familiar to us because of their use in communications. The standard
AM radio band includes radiation in the 540 to 1650 kilohertz (thousands of hertz) range.
The FM band includes the 88 to 108 megahertz (millions of hertz) range. This region also
includes shortwave radio transmissions and television broadcasts.
Microwaves are probably most familiar to people because of microwave ovens. In a
microwave oven, food is heated when microwaves excite water molecules contained
within foods (and the molecules' motion produces heat). In astronomy, emission of
radiation at a wavelength of 8 inches (21 centimeters) has been used to identify neutral
hydrogen throughout the galaxy. Radar is also included in this region.
The infrared region of the spectrum is best known to us because of the fact that heat is a
form of infrared radiation. But the visible wavelength range is the range of frequencies
with which we are most familiar. These are the wavelengths to which the human eye is
sensitive and which most easily pass through Earth's atmosphere. This region is further
broken down into the familiar colors of the rainbow, also known as the visible spectrum.
The ultraviolet range lies at wavelengths just short of the visible range. Most of the
ultraviolet radiation reaching Earth in sunlight is absorbed in the upper atmosphere.
Ozone, a form of oxygen, has the ability to trap ultraviolet radiation and prevent it from
reaching Earth. This fact is important since ultraviolet radiation can cause a number of
problems for both plants and animals. The depletion of the ozone layer during the 1970s
and 1980s was a matter of some concern to scientists because of the increase in
dangerous ultraviolet radiation reaching Earth.
We are most familiar with X rays because of their uses in medicine. X-radiation can pass
through soft tissue in the body, allowing doctors to examine bones and teeth from the
outside. Since X rays do not penetrate Earth's atmosphere, astronomers must place X-ray
telescopes in space.
Gamma rays are the most energetic of all electromagnetic radiation, and we have little
experience with them in everyday life. They are produced by nuclear processes: during
radioactive decay (in which an element gives off energy by the disintegration of its
nucleus) or in nuclear reactions in stars or in space.
UNIT-IV (SINGLE SIDEBAND MODULATION)
(PART-A)
1)
intelligence.
2)
frequency.
3)
balance.
4)
not only want to mix the audio and IF to produce an audio modulated IF
signal.
5)
(PART-B)
6)
The ssb generator (modulator) combines its audio input and its carrier
input to produce the two sidebands. The two sidebands are then fed to a
filter that selects the desired sideband and suppresses the other one. By
eliminating the carrier and one of the sidebands, intelligence is transmitted
at a savings in power and frequency bandwidth. In most cases ssb
generators operate at very low frequencies when compared with the
normally transmitted frequencies. For that reason, we must convert (or
translate) the filter output to the desired frequency. This is the purpose of
the mixer stage. A second output is obtained from the frequency generator
and fed to a frequency multiplier to obtain a higher carrier frequency for
the mixer stage. The output from the mixer is fed to a linear power
amplifier to build up the level of the signal for transmission. Suppressed
Carrier In ssb the carrier is suppressed (or eliminated) at the transmitter,
and the sideband frequencies produced by the carrier are reduced to a
minimum. You will probably find this reduction (or elimination) is the
most difficult aspect in the understanding of ssb. In a single-sideband
suppressed carrier, no carrier is present in the transmitted signal.
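The arithmetic behind the balanced modulator and sideband filter can be sketched numerically; the 1 kHz audio tone and 100 kHz carrier below are hypothetical values chosen for illustration, and the function names are illustrative:

```python
import math

F_AUDIO = 1_000.0      # Hz, modulating tone (hypothetical)
F_CARRIER = 100_000.0  # Hz, carrier (hypothetical)

def dsb_sc(t: float) -> float:
    """Balanced-modulator output: audio times carrier, carrier suppressed."""
    return math.cos(2 * math.pi * F_AUDIO * t) * math.cos(2 * math.pi * F_CARRIER * t)

def two_sidebands(t: float) -> float:
    """The identical signal written as lower sideband + upper sideband."""
    lower = 0.5 * math.cos(2 * math.pi * (F_CARRIER - F_AUDIO) * t)
    upper = 0.5 * math.cos(2 * math.pi * (F_CARRIER + F_AUDIO) * t)
    return lower + upper

def upper_sideband(t: float) -> float:
    """What remains after an ideal filter suppresses the lower sideband."""
    return 0.5 * math.cos(2 * math.pi * (F_CARRIER + F_AUDIO) * t)
```

The first two functions agree at every instant, which is the trigonometric fact that makes the filter method work: the modulator output genuinely contains only the two sideband frequencies, so a filter selecting one of them loses no intelligence.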
8)
SSB signal.
If the modulation phasing is such that each time the carrier frequency
deviates higher due to FM the carrier amplitude increases due to AM the
upper sideband will become stronger. Likewise with that phasing, each
time the carrier frequency deviates lower the carrier amplitude will
decrease due to AM which decreases the strength of the lower sideband.
If the modulation phasing is reversed by reversing the audio input polarity
to either the FM or AM modulator, the lower sideband will become stronger
and the upper sideband will become weaker.
one sideband.
9)
(PART-C)
11)
suppressed.
13)
sideband, any frequency component of a modulated carrier wave other than the frequency
of the carrier wave itself, i.e., any frequency added to the carrier as a result of
modulation; sidebands carry the actual information while the carrier contributes none at
all. Those frequency components that are higher than the carrier frequency are known as
upper sidebands; those lower are called lower sidebands. The upper and lower sidebands
contain equivalent information; thus only one needs to be transmitted. Such single-sideband
signals are very efficient in their use of the frequency spectrum when compared
to standard amplitude modulation (AM) signals. See radio.
Either of the two bands of frequencies, one just above and one just below a carrier
frequency, that result from modulation of a carrier wave.
The range of the electromagnetic spectrum located either above (the upper sideband) or
below (the lower sideband) the frequency of a sinusoidal carrier signal c(t). The
sidebands are produced by modulating the carrier signal in amplitude, frequency, or phase
in accordance with a modulating signal m(t) to produce the modulated signal s(t). The
resulting distribution of power in the sidebands of the modulated signal depends on the
modulating signal and the particular form of modulation employed. See also Amplitude
modulation; Frequency modulation; Modulation; Phase modulation.
In radio communications, a signal that results from amplitude modulating a carrier
frequency. The upper sideband is the carrier plus modulation, and the lower sideband is
the carrier minus modulation, which are mirror images of each other. See single sideband.
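The claim that an AM signal is exactly a carrier plus two mirror-image sidebands can be verified numerically; the carrier frequency, tone frequency, and modulation index below are hypothetical values chosen for illustration:

```python
import math

M = 0.5            # modulation index (hypothetical)
F_C = 1_000_000.0  # carrier, Hz (hypothetical)
F_M = 5_000.0      # modulating tone, Hz (hypothetical)

def am_signal(t: float) -> float:
    """Tone-modulated AM: (1 + M cos(2*pi*F_M*t)) * cos(2*pi*F_C*t)."""
    return (1 + M * math.cos(2 * math.pi * F_M * t)) * math.cos(2 * math.pi * F_C * t)

def carrier_plus_sidebands(t: float) -> float:
    """The same signal as carrier + lower sideband + upper sideband."""
    return (math.cos(2 * math.pi * F_C * t)
            + (M / 2) * math.cos(2 * math.pi * (F_C - F_M) * t)
            + (M / 2) * math.cos(2 * math.pi * (F_C + F_M) * t))
```

Each sideband has amplitude M/2 relative to the carrier, while the carrier term is the same no matter what the modulating signal is, which is why the sidebands carry all the information and the carrier contributes none.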
[Figure: The power of an AM signal plotted against frequency.]
of said double side bands, are utilized to receive different carrier frequencies at said
plural separated frequencies.
3. A transmission system, as defined in claim 1, wherein at least one of said plural
receivers receives full double side band AM modulated communication signals.
4. A transmission system, as defined in claim 1, wherein said one of said plural receivers
provides substantially double side band reception at modulation frequencies below a first
predetermined frequency, and substantially single side band reception at modulation
frequencies above a second predetermined frequency.
5. A transmission system, as defined in claim 1, wherein said one of said plural receivers
attenuates the received carrier frequency by approximately 3.5 dB.
6. A transmission system, as defined in claim 1, wherein said one of said plural receivers
additionally comprises:
filter means having an attenuation versus frequency slope characteristic at the received
carrier frequency for reducing distortion caused by frequency drift of the carrier.
7. A transmission system, as defined in claim 1, wherein said one of said plural receivers
includes a band pass filter, the pass band of which extends on both sides of the received
carrier frequency, the poles on one side of the pass band having a relatively lower Q than
the poles on the other side of the pass band.
8. A transmission system, as defined in claim 1, wherein said one of said plural receivers
includes a band pass filter providing a pass band which extends above and below the
received carrier frequency by a predetermined frequency amount, and a notch filter, the
notch of which is frequency positioned adjacent one edge of said band pass filter.
9. A method of carrier multiplexing multiple telephone communication channels between
a single central station and plural remote stations on a single communication medium
exhibiting phase nonlinearities at a certain frequency, comprising:
transmitting said multiple channels from said central and remote stations on said
communication medium as double side band AM modulated carrier signals having
carriers at different frequencies; and
14)
Aside from the issues regarding receiver synchronization, the key disadvantage of PPM is
that it is inherently sensitive to multipath interference that arises in channels with
frequency-selective fading, whereby the receiver's signal contains one or more echoes of
each transmitted pulse. Since the information is encoded in the time of arrival (either
differentially, or relative to a common clock), the presence of one or more echoes can
make it extremely difficult, if not impossible, to accurately determine the correct pulse
position corresponding to the transmitted pulse.
Non-coherent Detection
One of the principal advantages of Pulse Position Modulation is that it is an M-ary
modulation technique that can be implemented non-coherently, such that the receiver
does not need to use a Phase-locked loop (PLL) to track the phase of the carrier. This
makes it a suitable candidate for optical communications systems, where coherent phase
modulation and detection are difficult and extremely expensive. The only other common
M-ary non-coherent modulation technique is M-ary Frequency Shift Keying, which is the
frequency-domain dual to PPM.
PPM vs. M-FSK
PPM and M-FSK systems with the same bandwidth, average power, and transmission
rate of M/T bits per second have identical performance in an AWGN (Additive White
Gaussian Noise) channel. However, their performance differs greatly when comparing
frequency-selective and frequency-flat fading channels. Whereas frequency-selective
fading produces echoes that are highly disruptive for any of the M time-shifts used to
encode PPM data, it selectively disrupts only some of the M possible frequency-shifts
used to encode data for M-FSK. Conversely, frequency-flat fading is more disruptive for
M-FSK than PPM, as all M of the possible frequency-shifts are impaired by fading, while
the short duration of the PPM pulse means that only a few of the M time-shifts are
heavily impaired by fading.
Optical communications systems (even wireless ones) tend to have weak multipath
distortions, and PPM is a viable modulation scheme in many such applications.
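A toy model of M-ary PPM makes the time-of-arrival encoding concrete; the frame of M = 4 slots below is a hypothetical choice (giving log2(4) = 2 bits per pulse), and the function names are illustrative:

```python
M_SLOTS = 4  # hypothetical number of time slots per frame

def ppm_encode(symbols):
    """Each symbol (0..M-1) becomes a frame of M slots containing one pulse."""
    frames = []
    for s in symbols:
        frame = [0] * M_SLOTS
        frame[s] = 1  # the pulse position encodes the symbol value
        frames.append(frame)
    return frames

def ppm_decode(frames):
    """Recover each symbol as the position (time of arrival) of its pulse."""
    return [frame.index(1) for frame in frames]
```

In this model a multipath echo would deposit energy into additional slots of a frame, which is precisely why frequency-selective fading makes it hard to decide which slot holds the transmitted pulse.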
PPM Implementation
(UNIT-V)
(PART-A)
1)
an antenna.
2)
signals.
3)
What is a semiconductor?
The advantage to this method is that most of the radio's signal path has to be
sensitive to only a narrow range of frequencies. Only the front end (the part
before the frequency converter stage) needs to be sensitive to a wide
frequency range. For example, the front end might need to be sensitive to 1
30 MHz, while the rest of the radio might need to be sensitive only to
455 kHz, a typical IF. Only one or two tuned stages need to be adjusted to
track over the tuning range of the receiver; all the intermediate-frequency
stages operate at a fixed frequency which need not be adjusted.
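The fixed-IF idea reduces to simple arithmetic; the 455 kHz IF and the AM broadcast band limits match the figures in the text, while high-side injection and the helper names are assumptions made for the sketch:

```python
IF_KHZ = 455.0  # intermediate frequency from the text

def local_oscillator(station_khz: float) -> float:
    """High-side injection (assumed): the LO runs one IF above the station."""
    return station_khz + IF_KHZ

def mixer_difference(station_khz: float) -> float:
    """Difference frequency out of the mixer for the tuned station."""
    return local_oscillator(station_khz) - station_khz

# Every AM broadcast station from 540 to 1650 kHz lands on the same 455 kHz
# IF, so all later stages can be fixed-tuned to that one frequency.
```

Because the local oscillator tracks the tuning, the difference frequency is constant across the whole band, which is why only the front end must handle a wide frequency range.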
10)
antenna, uses electronic filters to separate a wanted radio signal from all other
signals picked up by this antenna, amplifies it to a level suitable for further
processing, and finally converts through demodulation and decoding the
signal into a form usable for the consumer, such as sound, pictures, digital
data, measurement values, navigational positions.
(PART-C)
11)
Receiver (radio)
A radio receiver is an electronic circuit that receives its input from an antenna, uses
electronic filters to separate a wanted radio signal from all other signals picked up by this
antenna, amplifies it to a level suitable for further processing, and finally converts
through demodulation and decoding the signal into a form usable for the consumer, such
as sound, pictures, digital data, measurement values, navigational positions, etc.[1]
Consumer audio and high fidelity audio receivers and AV receivers used
by home stereo listeners and audio and home theatre system enthusiasts.
Communications receivers, used as a component of radio
Simple crystal radio receivers (also known as a crystal set) which operate
Scanners are specialized receivers that can automatically scan two or more
discrete frequencies, stopping when they find a signal on one of them and then
continuing to scan other frequencies when the initial transmission ceases. They
are mainly used for monitoring VHF and UHF radio systems.
Consumer audio receivers
In the context of home audio systems, the term "receiver" often refers to a combination of
a tuner, a preamplifier, and a power amplifier all on the same chassis. Audiophiles will
refer to such a device as an integrated receiver, while a single chassis that implements
only one of the three component functions is called a discrete component. Some audio
purists still prefer three discrete units - tuner, preamplifier and power amplifier - but the
integrated receiver has, for some years, been the mainstream choice for music listening.
The first integrated stereo receiver was made by the Harman Kardon company, and came
onto the market in 1958. It had undistinguished performance, but it represented a
breakthrough to the "all in one" concept of a receiver, and rapidly improving designs
gradually made the receiver the mainstay of the marketplace. Many radio receivers also
include a loudspeaker.
Today AV receivers are a common component in a high-fidelity or home-theatre system.
The receiver is generally the nerve centre of a sophisticated home-theatre system
providing selectable inputs for a number of different audio components like turntables,
compact-disc players and recorders, and tape decks (like video-cassette recorders) and
video components (DVD players and recorders, video-game systems, and televisions).
With the decline of vinyl discs, modern receivers tend to omit inputs for turntables, which
have separate requirements of their own. All other common audio/visual components can
use any of the identical line-level inputs on the receiver for playback, regardless of how
they are marked (the "name" on each input is mostly for the convenience of the user). For
instance, a second CD player can be plugged into an "Aux" input, and will work the same
as it will in the "CD" input jacks.
Some receivers can also provide signal processors to give a more realistic illusion of
listening in a concert hall. Digital audio S/PDIF and USB connections are also common
today. The home theater receiver, in the vocabulary of consumer electronics, comprises
both the 'radio receiver' and other functions, such as control, sound processing, and power
amplification. The standalone radio receiver is usually known in consumer electronics as
a tuner.
Some modern integrated receivers can send audio out to seven loudspeakers and an
additional channel for a subwoofer and often include connections for headphones.
Receivers vary greatly in price, and support stereophonic or surround sound. A high-quality
receiver for dedicated audio-only listening (two channel stereo) can be relatively
inexpensive; excellent ones can be purchased for $300 US or less. Because modern
receivers are purely electronic devices with no moving parts unlike electromechanical
devices like turntables and cassette decks, they tend to offer many years of trouble-free
service. In recent years, the home theater in a box has become common, which often
integrates a surround-capable receiver with a DVD player. The user simply connects it to
a television, perhaps other components, and a set of loudspeakers.
Portable radios
Portable radios include simple transistor radios that are typically monoaural and receive
the AM, FM, and/or short wave broadcast bands. FM, and often AM, radios are
sometimes included as a feature of portable DVD/CD, MP3 CD, and USB key players, as
well as cassette player/recorders.
AM/FM stereo car radios can be a separate dashboard mounted component or a feature of
in car entertainment systems.
A Boombox (or Boom-box), also sometimes known as a Ghettoblaster, a Jambox, or
(in parts of Europe) as a "radio-cassette", is a name given to larger portable stereo
systems capable of playing radio stations and recorded music, often at a high level of
volume.
Self-powered portable radios, such as clockwork radios are used in developing nations or
as part of an emergency preparedness kit.[2]
Early development
While James Clerk Maxwell was the first person to predict theoretically that
electromagnetic waves existed, in 1887 the German physicist Heinrich Hertz demonstrated
these new waves by using spark gap equipment to transmit and receive radio or "Hertzian
waves", as they were first called. Hertz did not follow up on these experiments. The
practical applications of the
wireless communication and remote control technology were implemented by Nikola
Tesla.
The world's first radio receiver (thunderstorm register) was designed by Alexander
Stepanovich Popov, and it was first seen at the All-Russia exhibition in 1896. He was the
first to demonstrate the practical application of electromagnetic (radio) waves,[3] although
he did not care to apply for a patent for his invention.
A device called a coherer became the basis for receiving radio signals. The first person to
use the device to detect radio waves was a Frenchman named Edouard Branly, and Oliver
Lodge popularised it when he gave a lecture in 1898 in honour of Hertz. Lodge also made
improvements to the coherer. Guglielmo Marconi believed that these new waves could be
used to communicate over great distances and made significant improvements to both
radio receiving and transmitting apparatus. In 1895 Marconi demonstrated the first viable
radio system, leading to transatlantic radio communication in December 1901.
John Ambrose Fleming's development of an early thermionic valve to help detect radio
waves was based upon a discovery of Thomas Edison's (called "The Edison effect",
which essentially modified an early light bulb). Fleming called it his "oscillation valve"
because it acted in the same way as a water valve in only allowing flow in one direction.
While Fleming's valve was a great stride forward it would take some years before
thermionic, or vacuum tube technology was fully adopted.
Around this time work on other types of detectors started to be undertaken and it resulted
in what was later known as the cat's whisker. It consisted of a crystal of a material such as
galena with a small springy piece of wire brought up against it. The detector was
constructed so that the wire contact could be moved to different points on the crystal, and
thereby obtain the best point for rectifying the signal and the best detection. They were
never very reliable as the "whisker" needed to be moved periodically to enable it to detect
the signal properly.[4]
Valves (Tubes)
An American named Lee de Forest, a competitor to Marconi, set about to develop
receiver technology that did not infringe any patents to which Marconi had access. He
took out a number of patents in the period between 1905 and 1907 covering a variety of
developments that culminated in the form of the triode valve in which there was a third
electrode called a grid. He called this an audion tube. One of the first areas in which
valves were used was in the manufacture of telephone repeaters, and although the
performance was poor, they gave significant improvement in long distance telephone
receiving circuits.
With the discovery that triode valves could amplify signals it was soon noticed that they
would also oscillate, a fact that was exploited in generating signals. Once the triode was
established as an amplifier it made a tremendous difference to radio receiver performance
as it allowed the incoming signals to be amplified. One way that proved very successful
was introduced in 1913 and involved the use of positive feedback in the form of a
regenerative detector. This gave significant improvements in the levels of gain that could
be achieved, greatly increasing selectivity, enabling this type of receiver to outperform all
other types of the era. With the outbreak of the First World War, there was a great impetus
to develop radio receiving technology further. An American named Irving Langmuir
helped introduce a new generation of totally air-evacuated "hard" valves. H. J. Round
undertook some work on this and in 1916 he produced a number of valves with the grid
connection taken out of the top of the envelope away from the anode connection.[4]
Autodyne and superheterodyne
By the 1920s, the tuned radio frequency receiver (TRF) represented a major improvement
in performance over what had been available before, but it still fell short of the needs for
some of the new applications. To enable receiver technology to meet the needs placed
upon it a number of new ideas started to surface. One of these was a new form of direct
conversion receiver. Here an internal or local oscillator was used to beat with the
incoming signal to produce an audible signal that could be amplified by an audio
amplifier.
H. J. Round developed a receiver he called an autodyne in which the same valve was
used as a mixer and an oscillator. While the set used fewer valves, it was difficult to
optimise the circuit for both the mixer and oscillator functions.
The next leap forward in receiver technology was a new type of receiver known as the
superheterodyne, or supersonic heterodyne receiver. A Frenchman named Lucien Levy
was investigating ways in which receiver selectivity could be improved and in doing this
he devised a system whereby the signals were converted down to a lower frequency
where the filter bandwidths could be made narrower. A further advantage was that the
gain of valves was considerably greater at the lower frequencies used after the frequency
conversion, and there were fewer problems with the circuits bursting into oscillation.
The idea for developing a receiver with a fixed intermediate frequency amplifier and
filter is credited to Edwin Armstrong. Working for the American Expeditionary Force in
Europe in 1918, Armstrong thought that if the incoming signals were mixed with a
variable frequency oscillator, a low frequency fixed-tuned amplifier could be used.
Armstrong's original receiver consisted of a total of eight valves. Several tuned circuits
could be cascaded to improve selectivity, and being on a fixed frequency they did not all
need to be changed in line with one another. The filters could be preset and left correctly
tuned. Armstrong was not the only person working on the idea of a superhet. Alexander
Meissner in Germany took out a patent for the idea six months before Armstrong, but as
Meissner did not prove the idea in practice and did not build a superhet radio, the idea is
credited to Armstrong.
The need for the increased performance of the superhet receiver was first felt in America,
and by the late 1920s most sets were superhets. However in Europe the number of
broadcast stations did not start to rise as rapidly until later. Even so by the mid 1930s
virtually all receiving sets in Europe as well were using the superhet principle. In 1926
the tetrode valve was introduced, and enabled further improvements in performance.[4]
War and postwar developments
In 1939 the outbreak of war gave a new impetus to receiver development. During this
time a number of classic communications receivers were designed. Some like the
National HRO are still sought by enthusiasts today and although they are relatively large
by today's standards, they can still give a good account of themselves under current
crowded band conditions. In the late 1940s the transistor was discovered. Initially the
devices were not widely used because of their expense, and the fact that valves were
being made smaller, and performed better. However by the early 1960s portable transistor
broadcast receivers (transistor radios) were hitting the market place. These radios were
ideal for broadcast reception on the long and medium wave bands. They were much
smaller than their valve equivalents, they were portable and could be powered from
batteries. Although some valve portable receivers were available, batteries for these were
expensive and did not last for long. The power requirements for transistor radios were
very much less, resulting in batteries lasting for much longer and being considerably
cheaper.[4]
Semiconductors
Further developments in semiconductor technology led to the introduction of the
integrated circuit in the late 1950s.[5] This enabled radio receiver technology to move
forward even further. Integrated circuits enabled high performance circuits to be built for
less cost, and significant amounts of space could be saved.
As a result of these developments new techniques could be introduced. One of these was
the frequency synthesizer that was used to generate the local oscillator signal for the
receiver. By using a synthesizer it was possible to generate a very accurate and stable
local oscillator signal. Also the ability of synthesizers to be controlled by microprocessors
meant that many new facilities could be introduced apart from the significant
performance improvements offered by synthesizers.[4]
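The core relation of an integer-N synthesizer is f_out = N × f_ref; the 25 kHz reference below is a hypothetical channel spacing, and the function names are illustrative:

```python
F_REF_HZ = 25_000.0  # reference / channel spacing, Hz (hypothetical)

def divider_for(f_out_hz: float) -> int:
    """Programmable divide ratio N that makes the loop lock at f_out."""
    return round(f_out_hz / F_REF_HZ)

def synthesized(f_out_hz: float) -> float:
    """The frequency the synthesizer actually produces: N * f_ref."""
    return divider_for(f_out_hz) * F_REF_HZ

# Asking for a 100.7 MHz local oscillator gives N = 4028, and the output
# inherits the accuracy and stability of the reference.
```

Because N is just a number a microprocessor can write into a register, the same hardware can be steered to any channel, which is what made synthesizers such a natural match for processor control.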
Digital technologies
Main article: Digital radio
Receiver technology is still moving forward. Digital signal processing where many of the
functions performed by an analog intermediate frequency stage can be performed
digitally by converting the signal to a digital stream that is manipulated mathematically is
now widespread. The new digital audio broadcasting standard being introduced can only
be used when the receiver can manipulate the signal digitally.
While today's radios are miracles of modern technology, filled with low power high
performance integrated circuits crammed into the smallest spaces, the basic principle of
the radio is usually the superhet, the same idea which was developed by Edwin
Armstrong back in 1918.[4]
12)
designs, dozens (in some cases over 100) low-gain triode stages had to be connected in
cascade to make workable equipment, which drew enormous amounts of power in
operation and required a team of maintenance engineers. The strategic value was so high,
however, that the British Admiralty felt the high cost was justified.
Armstrong had realized that if RDF could be operated at a higher frequency, it would
allow detection of enemy shipping much more effectively, but at the time, no practical
"short wave" amplifier existed (defined then as any frequency above 500 kHz), due to the
limitations of triodes of the day.
A "heterodyne" refers to a beat or "difference" frequency produced when two or more radio
frequency carrier waves are fed to a detector. The term was originally coined by
the Canadian engineer Reginald Fessenden, describing his proposed method of making
Morse Code transmissions from an Alexanderson alternator type transmitter audible.
With the Spark gap transmitters then in wide use, the Morse Code signal consisted of
short bursts of a heavily modulated carrier wave which could be clearly heard as a series
of short chirps or buzzes in the receiver's headphones.
The signal from an Alexanderson Alternator on the other hand, did not have any such
inherent modulation and Morse Code from one of those would only be heard as a series
of clicks or thumps. Fessenden's idea was to run two Alexanderson Alternators, one
producing a carrier frequency 3 kHz higher than the other. In the receiver's detector the
two carriers would beat together to produce a 3 kHz tone, and so in the headphones the
Morse signals would then be heard as a series of 3 kHz beeps. For this he coined the term
"heterodyne", meaning "generated by a difference" (in frequency).
Later, when vacuum triodes became available, the same result could be achieved more
conveniently by incorporating a "local oscillator" in the receiver, which became known as
a "Beat Frequency Oscillator" or BFO. As the BFO frequency was varied, the pitch of the
heterodyne could be heard to vary with it. If the frequencies were too far apart the
heterodyne became ultrasonic and hence no longer audible.
It had been noticed some time before that if a regenerative receiver was allowed to go
into oscillation, other receivers nearby would suddenly start picking up stations on
frequencies different from those that the stations were actually transmitted on. Armstrong
(and others) eventually deduced that this was caused by a "supersonic heterodyne"
between the station's carrier frequency and the oscillator frequency. Thus, for example, if
a station was transmitting on 300 kHz and the oscillating receiver was set to 400 kHz, the
station would be heard not only at the original 300 kHz, but also at 100 kHz and 700 kHz.
Armstrong realized that this was a potential solution to the "short wave" amplification
problem, since the beat frequency still retained its original modulation, but on a lower
carrier frequency. To monitor a frequency of 1500 kHz, for example, he could set up an
oscillator at, say, 1560 kHz, which would produce a heterodyne of 60 kHz, a frequency
that could then be much more conveniently amplified by the triodes of the day. He termed
this the "Intermediate Frequency", often abbreviated to "IF".
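The sum-and-difference behaviour of the mixer reduces to simple arithmetic; `mixer_products` below is an illustrative helper, not a standard API, and the frequencies are the ones quoted in the text:

```python
def mixer_products(f1_khz: float, f2_khz: float):
    """Return (difference, sum) frequencies out of an ideal mixer, in kHz."""
    return (abs(f1_khz - f2_khz), f1_khz + f2_khz)

# Armstrong's observation: a 300 kHz station against a 400 kHz oscillator
# is also heard at 100 kHz and 700 kHz ...
low_khz, high_khz = mixer_products(300.0, 400.0)

# ... and a 1500 kHz signal against a 1560 kHz oscillator yields a 60 kHz
# intermediate frequency that 1910s triodes could amplify.
if_khz, _ = mixer_products(1500.0, 1560.0)
```

The difference product is the useful one for a superheterodyne: it carries the original modulation but sits at a frequency low enough for the amplifiers of the day.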
In December, 1919, Major E. H. Armstrong gave publicity to an indirect method of
obtaining short-wave amplification, called the Super-Heterodyne. The idea is to reduce
the incoming frequency which may be, say 1,500,000 cycles (200 meters), to some
suitable super-audible frequency which can be amplified efficiently, then passing this
current through a radio frequency amplifier and finally rectifying and carrying on to one
or two stages of audio frequency amplification. (page 11 of December 1922 QST
magazine)
Early Superheterodyne receivers actually used IFs as low as 20 kHz, often based around
the self-resonance of iron-cored transformers. This made them extremely susceptible to
image frequency interference, but at the time, the main objective was sensitivity rather
than selectivity. Using this technique, a small number of triodes could be made to do work
that formerly required dozens or even hundreds.
1920s commercial IF transformers actually look very similar to 1920s audio interstage
coupling transformers, and were wired up in an almost identical manner. By the
mid-1930s superhets were using much higher intermediate frequencies (typically around
440-470 kHz), using tuned coils very similar in construction to the aerial and oscillator coils.
However the term "Intermediate Frequency Transformer" or "IFT" still persists to this
day.
Modern receivers typically use a mixture of ceramic filters and/or SAW resonators as
well as traditional tuned-inductor IF transformers.
Armstrong was able to put his ideas into practice quite quickly, and the technique was
rapidly adopted by the military. However, it was less popular when commercial radio
broadcasting began in the 1920s. There were many factors involved, but the main issues
were the need for an extra tube for the oscillator, the generally higher cost of the receiver,
and the level of technical skill required to operate it. For early domestic radios, tuned
RF receivers ("TRF"), also called Neutrodynes, were much more popular because they were
cheaper, easier for a non-technical owner to use, and less costly to operate. Armstrong
eventually sold his superheterodyne patent to Westinghouse, who then sold it to RCA, the
latter monopolizing the market for superheterodyne receivers until 1930.[2]
By the 1930s, improvements in vacuum tube technology rapidly eroded the TRF
receiver's cost advantages, and the explosion in the number of broadcasting stations
created a demand for cheaper, higher-performance receivers.
First, the development of practical indirectly-heated-cathode tubes allowed the mixer and
oscillator functions to be combined in a single Pentode tube, in the so-called Autodyne
mixer. This was rapidly followed by the introduction of low-cost multi-element tubes
specifically designed for superheterodyne operation. These allowed the use of much
higher Intermediate Frequencies (typically around 440-470 kHz) which eliminated the
problem of image frequency interference. By the mid-30s, for commercial receiver
production the TRF technique was obsolete.
The superheterodyne principle was eventually taken up for virtually all commercial radio
and TV designs.
13)
The essential elements of a single conversion superhet receiver, common to all superhet
circuits, are a local oscillator and a mixer, followed by a fixed-tuned filter
and IF amplifier. Cost-optimized designs may use one
active device for both local oscillator and mixer; this is sometimes called a "converter"
stage. One such example is the pentagrid converter.
The advantage of this method is that most of the radio's signal path has to be sensitive to
only a narrow range of frequencies. Only the front end (the part before the frequency
converter stage) needs to be sensitive to a wide frequency range. For example, the front
end might need to be sensitive to 1-30 MHz, while the rest of the radio might need to be
sensitive only to 455 kHz, a typical IF. Only one or two tuned stages need to be adjusted
to track over the tuning range of the receiver; all the intermediate-frequency stages
operate at a fixed frequency which need not be adjusted.
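As a sketch of this arrangement, the snippet below (our own illustration, using the 455 kHz IF mentioned above) shows how only the local oscillator setting changes as the receiver tunes, while every station is translated to the same fixed IF:

```python
# Minimal sketch of superhet tuning (illustrative values, not from a real set):
# the IF stages stay fixed at 455 kHz and only the local oscillator moves.
IF = 455e3  # fixed intermediate frequency, Hz

def lo_frequency(station_hz, high_side=True):
    """LO setting that translates a given station down to the fixed IF."""
    return station_hz + IF if high_side else station_hz - IF

# Tuning across the AM broadcast band: the front end spans ~540-1700 kHz,
# but after mixing every station lands on the same 455 kHz IF.
for station in (540e3, 1000e3, 1700e3):
    lo = lo_frequency(station)
    print(f"station {station/1e3:.0f} kHz -> LO {lo/1e3:.0f} kHz "
          f"-> IF {abs(station - lo)/1e3:.0f} kHz")
```

Because the IF never changes, the selective filtering and most of the gain can be built once, at 455 kHz, rather than made tunable.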
To overcome obstacles such as image response, multiple IF stages are used, and in some
cases multiple stages with two IFs of different values. For example, the front end might be
sensitive to 1-30 MHz, the first half of the radio to 5 MHz, and the last half to 50 kHz.
Two frequency converters would be used, and the radio would be a "Double Conversion
Superheterodyne"; a common example is a television receiver where the audio
information is obtained from a second stage of intermediate frequency conversion.
Occasionally special-purpose receivers will use an intermediate frequency much higher
than the signal, in order to obtain very high image rejection.
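The double-conversion plan described above can be sketched numerically; the function name and the choice of high-side injection at both mixers are our own assumptions, while the frequency values come from the text:

```python
# Illustrative double-conversion frequency plan: front end 1-30 MHz,
# first IF 5 MHz, second IF 50 kHz.
FIRST_IF = 5e6    # high first IF pushes the image 10 MHz away from the signal
SECOND_IF = 50e3  # low second IF provides the narrow final selectivity

def double_convert(signal_hz):
    """Return (LO1, first IF, LO2, second IF) for one received signal."""
    lo1 = signal_hz + FIRST_IF
    if1 = abs(lo1 - signal_hz)   # every station lands on the 5 MHz first IF
    lo2 = if1 + SECOND_IF
    if2 = abs(lo2 - if1)         # and then on the 50 kHz second IF
    return lo1, if1, lo2, if2

print(double_convert(20e6))  # a 20 MHz signal: LO1 at 25 MHz, then 5 MHz, 50 kHz
```

The high first IF makes image rejection easy at the front end, while the low second IF makes very narrow filtering easy later on, which is the point of splitting the conversion in two.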
Superheterodyne receivers have superior characteristics to simpler receiver types in
frequency stability and selectivity. They offer much better stability than Tuned radio
frequency receivers (TRF) because a tuneable oscillator is more easily stabilized than a
tuneable amplifier, especially with modern frequency synthesizer technology. IF filters
can give much narrower passbands at the same Q factor than an equivalent RF filter. A
fixed IF also allows the use of a crystal filter when exceptionally high selectivity is
necessary. Regenerative and super-regenerative receivers offer better sensitivity than a
TRF receiver, but suffer from stability and selectivity problems.
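The selectivity claim can be made concrete with the usual single-tuned-circuit approximation (our own illustration; Q = 100 is an assumed, representative value): the -3 dB bandwidth is roughly f0 / Q, so the same attainable Q gives a far narrower passband at a low fixed IF than at RF.

```python
# Bandwidth of a tuned circuit, approximated as center frequency / Q.
def bandwidth_hz(center_hz, q):
    return center_hz / q

Q = 100  # illustrative, assumed equal for both filters
print(bandwidth_hz(455e3, Q))  # 4550.0 Hz at a 455 kHz IF
print(bandwidth_hz(20e6, Q))   # 200000.0 Hz for the same Q at a 20 MHz RF
```

This is why an IF filter "can give much narrower passbands at the same Q factor than an equivalent RF filter."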
In the case of modern television receivers, no other technique was able to produce the
precise bandpass characteristic needed for vestigial sideband reception, first used with the
original NTSC system introduced in 1941. This originally involved a complex collection
of tuneable inductors which needed careful adjustment, but since the early 1980s these
have been replaced with precision electromechanical surface acoustic wave (SAW)
filters. Fabricated by precision laser milling techniques, SAW filters are much cheaper to
produce, can be made to extremely close tolerances, and are extremely stable in
operation.
Microprocessor technology allows replacing the superheterodyne receiver design by a
software defined radio architecture, where the IF processing after the initial IF filter is
implemented in software. This technique is already in use in certain designs, such as very
low cost FM radios incorporated into mobile phones where the necessary microprocessor
is already present in the system.
Radio transmitters may also use a mixer stage to produce an output frequency, working
more or less as the reverse of a superheterodyne receiver.
Drawbacks
Drawbacks to the superheterodyne receiver include interference from signal frequencies
close to the intermediate frequency. To prevent this, IF frequencies are generally
controlled by regulatory authorities, and this is the reason most receivers use common
IFs. Examples are 455 kHz for AM radio, 10.7 MHz for FM, and 38.9 MHz (Europe) or
45 MHz (US) for television.
(For AM radio, a variety of IFs have been used, but most of the Western world settled on
455 kHz, in large part because of the almost universal transition to Japanese-made
ceramic resonators which used the US standard of 455 kHz. In more recent digitally tuned
receivers, this was changed to 450 kHz, as this figure simplifies the design of the
synthesizer circuitry.)
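One way to see why 450 kHz simplifies the synthesizer (this framing is our own, not from the text): AM stations sit on a 9 kHz (Europe) or 10 kHz (Americas) channel raster, and a PLL synthesizer is simplest when the LO (station frequency plus IF) stays on that raster, which happens exactly when the IF itself is a multiple of the channel step.

```python
# Check whether an IF keeps the LO on the broadcast channel raster.
def if_on_raster(if_hz, step_hz):
    return if_hz % step_hz == 0

for if_hz in (455e3, 450e3):
    print(if_hz, if_on_raster(if_hz, 9e3), if_on_raster(if_hz, 10e3))
# 455 kHz fits neither raster; 450 kHz divides evenly into both.
```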
Additionally, in urban environments with many strong signals, the signals from multiple
transmitters may combine in the mixer stage to interfere with the desired signal.
14) Explain receiver applications?
High-side and low-side injection
The amount that a signal is down-shifted by the local oscillator depends on whether its
frequency f is higher or lower than fLO. That is because its new frequency is |f - fLO| in
either case. Therefore, there are potentially two signals that could both shift to the same
fIF: one at f = fLO + fIF and another at f = fLO - fIF. One or the other of those signals, called
the image frequency, has to be filtered out prior to the mixer to avoid aliasing. When the
upper one is filtered out, it is called high-side injection, because fLO is above the
frequency of the received signal. The other case is called low-side injection. High-side
injection also reverses the order of a signal's frequency components. Whether or not that
actually changes the signal depends on whether it has spectral symmetry or not. The
reversal can be undone later in the receiver, if necessary.
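The relation f_IF = |f - f_LO| can be checked with illustrative AM-band numbers (our own example: a 1000 kHz station, a 455 kHz IF, and high-side injection):

```python
# Both RF inputs that a mixer translates to the same IF.
def responding_inputs(f_lo, f_if):
    return f_lo - f_if, f_lo + f_if

f_lo, f_if = 1455e3, 455e3  # high-side injection for a 1000 kHz station
wanted, image = responding_inputs(f_lo, f_if)
print(wanted / 1e3, image / 1e3)  # 1000.0 and 1910.0 (kHz)

# The image sits at the wanted frequency plus twice the IF, 910 kHz away,
# and must be removed by the front-end filter before the mixer.
assert image == wanted + 2 * f_if
```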
Image Frequency (fimage)
One major disadvantage to the superheterodyne receiver is the problem of image
frequency. In heterodyne receivers, an image frequency is an undesired input frequency
equal to the station frequency plus twice the intermediate frequency. The image
frequency results in two stations being received at the same time, thus producing
interference. Image frequencies can be eliminated by sufficient attenuation on the
incoming signal by the RF amplifier filter of the superheterodyne receiver.
Early Autodyne receivers typically used IFs of only 150 kHz or so, as it was difficult to
maintain reliable oscillation if higher frequencies were used. As a consequence, most
Autodyne receivers needed quite elaborate antenna tuning networks, often involving
double-tuned coils, to avoid image interference. Later superhets used tubes especially
designed for oscillator/mixer use, which were able to work reliably with much higher IFs,
reducing the problem of image interference and so allowing simpler and cheaper aerial
tuning circuitry.
Local oscillator radiation
It is difficult to keep stray radiation from the local oscillator below the level that a nearby
receiver can detect. This means that there can be mutual interference in the operation of
two or more superheterodyne receivers in close proximity. In espionage, oscillator
radiation gives a means to detect a covert receiver and its operating frequency.
Further information: Electromagnetic compatibility
Local oscillator sideband noise
Local oscillators typically generate a single frequency signal that has negligible
amplitude modulation but some random phase modulation. Either of these impurities
spreads some of the signal's energy into sideband frequencies. That causes a
corresponding widening of the receiver's frequency response, which would defeat the aim
of making a very narrow-bandwidth receiver, for example one intended to receive low-rate digital signals.
Care needs to be taken to minimise oscillator phase noise, usually by ensuring that the
oscillator never enters a non-linear mode.
MHz below which frequency dividers are usually the best choice. High frequency
oscillators may include phase-locked loops or frequency multipliers to take advantage of
a low frequency crystal's stability. Multiplied oscillators are preferred above 120 MHz
when stability is a key issue.
AGING: New, high quality ovenized quartz crystals typically exhibit small, positive
frequency drift with time unrelated to external influences. A significant drop in this
"aging" rate occurs after the first few weeks of operation at the operating temperature.
Ultimate aging rates below 0.1 PPB per day are achieved by the highest quality crystals
and 1 PPB per day rates are commonplace. Significant negative aging (dropping
frequency) indicates a bad crystal - probably a leaking package.
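To get a feel for the aging rates quoted above, the arithmetic below accumulates them over a year; the 10 MHz nominal frequency and the assumption of a constant daily rate are our simplifications (real crystals improve on this as they age):

```python
# Cumulative frequency drift from a constant aging rate.
NOMINAL_HZ = 10e6  # assumed nominal oscillator frequency

def yearly_drift_hz(ppb_per_day, days=365, f0=NOMINAL_HZ):
    return f0 * ppb_per_day * 1e-9 * days

print(yearly_drift_hz(1.0))  # commonplace 1 PPB/day crystal: ~3.65 Hz/year
print(yearly_drift_hz(0.1))  # best 0.1 PPB/day crystals: ~0.37 Hz/year
```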
RETRACE: When power is removed from an oscillator, then re-applied several hours
later, the frequency will stabilize at a slightly different value. This "retrace" error is
usually specified for a twenty-four hour off-time followed by a warm-up time sufficient
to allow complete thermal equilibrium. Retrace errors often diminish after warming as
though the crystal walks back down its aging curve when cold and then exponentially
approaches the previous drift curve when activated. Oscillators stored at extremely cold
temperatures for extended periods of time may exhibit a frequency vs. time curve much
like the initial "green" aging curve of a new crystal. In addition to the crystal related
effects described above, mechanical shifts can also occur due to the thermal stresses from
heating and cooling the oven structure. A common retrace error source is the mechanical
device used to adjust the oscillator's frequency. Precision, multi-turn variable capacitors
exhibit good retrace but a good practice is to turn the screw back slightly after setting to
relieve any stress. Most Wenzel oscillators use special precision potentiometers which
exhibit an unusually low amount of retrace and hysteresis.