
1. Introduction to telecom

DEFINITION: telecommunications

Telecommunications, also called telecommunication, is the exchange of information over significant distances by electronic means. A complete, single telecommunications circuit consists of two stations, each equipped with a transmitter and a receiver. The transmitter and receiver at any station may be combined into a single device called a transceiver. The medium of signal transmission can be electrical wire or cable (also known as "copper"), optical fiber, or electromagnetic fields. The free-space transmission and reception of data by means of electromagnetic fields is called wireless.

The simplest form of telecommunications takes place between two stations. However, it is common for multiple transmitting and receiving stations to exchange data among themselves. Such an arrangement is called a telecommunications network. The Internet is the largest example. On a smaller scale, examples include:

- Corporate and academic wide-area networks (WANs)
- Telephone networks
- Police and fire communications systems
- Taxicab dispatch networks
- Groups of amateur radio operators

Data is conveyed in a telecommunications circuit by means of an electrical signal called the carrier or carrier wave. In order for a carrier to convey information, some form of modulation is required. The mode of modulation can be broadly categorized as either analog or digital. In analog modulation, some aspect of the carrier is varied in a continuous fashion. The oldest form of analog modulation is amplitude modulation (AM), still used in radio broadcasting at some frequencies. Digital modulation actually predates analog modulation; the earliest form was Morse code. During the 1900s, dozens of new forms of modulation were developed and deployed, particularly during the so-called "digital revolution," when the use of computers among ordinary citizens became widespread.

In some contexts, a broadcast network, consisting of a single transmitting station and multiple receive-only stations, is considered a form of telecommunications. Radio and television broadcasting are the most common examples. Telecommunications and broadcasting worldwide are overseen by the International Telecommunication Union (ITU), an agency of the United Nations (UN) with headquarters in Geneva, Switzerland. Most countries have their own agencies that enforce telecommunications

regulations formulated by their governments. In the United States, that agency is the Federal Communications Commission (FCC).

2. History of telecom

A Brief History of Telecommunications



Telecommunication is a term derived from Greek, meaning communication at a distance through signals of varied nature sent from a transmitter to a receiver. In order to achieve effective communication, the choice of a proper means of transport for the signal has played (and still plays) a fundamental role.

In ancient times, the most common ways of producing a signal were light (fires) and sound (drums and horns). However, those kinds of communications were insecure and certainly left room for improvement, as they permitted neither message encryption nor fast transmission of information on a large scale.

The true jump in quality came with the advent of electricity. Electromagnetic energy can transport information extremely fast (ideally, at the speed of light) and at a cost and reliability that previously had no equal. We may therefore say that the starting point of all modern telecommunications was the invention of the electric cell by Alessandro Volta (1800). Shortly thereafter, the first experiments on more advanced communication systems began. In 1809, Samuel Thomas von Sömmerring proposed a telegraphic system composed of a battery, 35 wires (one for each letter and number), and a group of gold sensors submerged in a water tank: when a signal passed through one of those wires, the electrical current would split water molecules, and small oxygen bubbles would be visible near that sensor. Many other experiments were soon to follow: Wheatstone, Weber, and Carl Friedrich Gauss tried to develop Sömmerring's idea into a product that could be mass-distributed, but their efforts were without success.

For the next step we would have to wait until 1843, the year in which Samuel Morse proposed a way to assign each letter and number to a ternary code (dot, dash, and space). This scheme turned out to be extremely convenient and far more affordable than Sömmerring's idea, especially in terms of reduced circuitry (a separate wire for each symbol was no longer needed). Meanwhile, technology had become advanced enough to convert those signals into audible (or sometimes graphic) form. The combination of these two factors quickly determined the success of Morse's code, which is still in use today. The system was further developed and improved in the following years by Hughes, Baudot, and Gray (1879), who devised other possible codes (Gray's code still has applications today in the ICT industry and in barcode technology).

However, the telegraph could still be operated only by trained personnel and in certain buildings such as offices, so it remained available to a limited number of people. Research of the time therefore took another direction and aimed at producing a machine that could transmit sounds rather than just signals. The first big step in this direction was the invention, in 1850, of transducers that could transform an acoustic signal into an electric one and vice versa (the microphone and the receiver) with acceptable information loss. Seven years later, Antonio Meucci and Alexander Graham Bell independently managed to build prototypes of an early telephone ("sound at a distance") machine. Since Meucci didn't have the money to patent his invention (the cost was $250 at the time), Bell managed to register it first.
Both with telegraphs and telephones, the need for a distributed and reliable communication network soon became evident. Routing issues were first solved by means of human operators and circuit switching: the PSTN (Public Switched Telephone Network) was born. However, this system didn't guarantee the privacy and secrecy of conversations, and efforts were made toward the development of automatic circuit switching. In 1891, Almon Strowger patented an electromechanical device known simply as the selector, which was driven by the electrical signals from the calling telephone and performed selection based on geographical prefixes. Many other innovations were soon to come:

- In 1895, Guglielmo Marconi invented the wireless telegraph (radio);
- In 1920, valve amplifiers made their first appearance;
- In 1923, the television was invented;
- In 1947, the invention of the transistor gave birth to the field of electronics;
- In 1958, the first integrated circuit was built;
- In 1971, the first microprocessor was invented.

With this last step, electronics became more than ever a fundamental part of the telecommunications world, at first in transmission and soon also in circuit switching. Moreover, the invention of ENIAC (Electronic Numerical Integrator and Computer) in 1946 started the era of informatics. Informatics and telecommunications inevitably began to interact, as was to be expected: the first made fast data processing possible, while the second allowed the data to be sent to a distant location. The development of microelectronics and informatics radically revolutionized both the techniques used in telecommunication networks and the performance requirements placed on them.

Starting in 1938, an innovative technology called PCM (Pulse Code Modulation) grew more and more popular. This technology achieved the digital transmission of a voice signal by digitally encoding and decoding it; however, PCM was first used on a large scale only in 1962 in the United States (the so-called T1).

During the mid-1960s, Paul Baran, a RAND Corporation employee working on communication problems concerning the US Air Force, first gave birth to the concept of the packet-switched network, as opposed to the conventional circuit-switched network. According to this model, there should be no hierarchy among the nodes of a network; rather, each node should be connected to many others and be able to decide (and, if need be, modify) the packet routing. Each packet is a block of data consisting of two main parts: a header containing routing information and a body containing the actual data. In this context, Vinton Cerf, Bob Kahn, and others developed, starting in the 1970s, the TCP/IP protocol suite, which made communication between heterogeneous computers and machines possible through a series of physical and logical layers. Packet switching and TCP/IP were later adopted by the military project ARPANET.
The rest of the story is widely known: in 1983, ARPANET became available to universities and research centers, leading to networks such as NSFNET (National Science Foundation Network), which finally gave birth to the Internet. In recent years, the importance of the Internet has been constantly growing. The high flexibility of the TCP/IP suite and the ISO/OSI protocols provides a strong foundation on which communication among devices of different kinds (be it a laptop or a cell phone, an iPod or a GPS navigator) has finally been made simple and easy to achieve.

3. Communication model

A Communication Model
On the one hand, you'd think we all understand what communication is well enough to talk about it meaningfully, since we all do it from a very young age. On the other hand, personal experience shows that people get very easily confused about the communication that occurs in the real world. Most people can't really answer "What happens when you request a web page from a web server?" If you can't even meaningfully answer questions about how things work or what happens, how can you expect to understand the ethics of such actions? And a complementary question: if one must have a post-grad degree in computer science to understand what is going on, how can we expect to hold anybody to whatever ethics may putatively exist?

Since nobody can adequately describe what's going on, the debates almost inevitably degenerate into a flurry of metaphors trying to convince you that whatever is being argued about is the same as one of the existing domains, and should be treated the same way. The metaphors are all inadequate, though, because as shown earlier, there is a wide variety of activities that do not fit into the old models at all. As a result, the metaphor-based debates also tend to turn into arguments about whether something is more like television or more like a newspaper. There are sufficient differences between things like using a search engine and reading a newspaper to render any metaphor moot, so the answer to the question of which metaphor is appropriate is almost always "Neither."

Before we can meaningfully discuss ethics, we need to establish what we mean by communication, and create a model we can use to understand and discuss various situations. We'll find that simply the act of clarifying the issues will immediately produce some useful results, before we even try to apply the model to anything, as so frequently happens when fuzzy conceptions are replaced by clear ones.
After we're done, we will not need to resort to metaphors to think and talk about communication ethics; we will deal with the ethical issues directly. As a final bonus, the resultant model is simple enough that anybody can use it without a PhD in computer network topology.

The Model

For this model, I take my cue from the Internet and the computer revolution itself, because it is a superset of almost everything else. Telecommunications engineers and others who deal with the technical aspects of communication use a common model of communication that has six components: a sender, an encoder, a medium, a decoder, a receiver, and a message, as in Figure 10.

Figure 10: Standard Communication Model

The parts of this model are as follows:


Sender: The sender is what or who is trying to send a message to the receiver.

Encoder: In the general case, it is not possible to directly insert the message onto the communications medium. For instance, when you speak on the telephone, it is not possible to actually transmit sound (vibrations in matter) across the wire for any distance. In your phone is a microphone, which converts the sound into electrical impulses, which can be transmitted by wires. Those electrical impulses are then manipulated by the electronics in the phone so they match up with what the telephone system expects.

Message: Since this is a communication engineer's model, the message is the actual encoded message that is transmitted by the medium.

Medium: The medium is what the message is transmitted on. The phone system, Internet, and many other electronic systems use wires. Television and radio can use electromagnetic radiation. Even bongo drums can be used as a medium (http://eagle.auc.ca/~dreid/overview.html).

Decoder: The decoder takes the encoded message and converts it to a form the receiver understands, since, for example, a human user of the phone system does not understand electrical impulses directly.

Receiver: The receiver is the target of the message.

As a technical model this is fairly powerful and useful for thinking about networks and such. However, since the model is designed by technical people for technical purposes, it turns out to be excessively complicated for our purpose of modeling communication for ethics. We can collapse the encoder and decoder into the medium, because we never care about the details of the encoder or decoder in particular; it is sufficient for our purposes to consider changes to the encoder or decoder to be essentially the same as changes to the medium. That leaves us with four basic components, as in Figure 11.

Figure 11: Simplified Communication Model

The base unit of this model can be called a connection. If there is an identifiable sender, receiver, and medium, they define a connection along which a message can flow. When the sender sends a message, the medium transmits it, and the receiver receives the message.

Note that until the message is sent and received, the medium may not literally exist; for instance, your phone right now theoretically connects to every other phone on the public network in the world. However, until you dial a number or receive a call, none of the connections are "real."

A connection is always unidirectional in this model. If communication flows in both directions, that should be represented as two connections, one for each direction. To send a message across the connection, a connection is initiated by a sender, and the receiver must desire to receive it (excepting sound-based messages, which, due to a weakness in our physical design, can be forced upon a receiver). Either can occur independently; a receiver may be willing to receive a message, but the sender may not send it until they are compensated to their satisfaction. A sender may wish to send a message, but no receiver may be interested in receiving it.

For a given message from a sender to a receiver, the "medium" is everything the message traverses, no matter what that is. If the phone system offloads to an Internet connection to transmit the message part of the way, and the Internet connection is then converted back to voice on the other end, the entire voice path is the medium. It may sometimes be useful to determine exactly where something occurred, but except for determining who is "to blame" for something, all that really matters are the characteristics of the medium as a whole.
Example

Let me show you an example of this model applied to one of the most common Internet operations, a search engine query. Let's call the search engine S (for Search engine) and the person querying the engine P (for Person). Let's assume P is already on the search engine's home page and is about to push "submit search".

1. P (as sender) opens a connection to S (as receiver) via the Internet (the medium). P sends the search request (the message).
2. S, which exists for the sole purpose of searching the Internet in response to such requests, accepts the connection, receives the request, and begins processing it. In the past, the search engine has read a lot of web pages. It puts together the results and creates a new connection to P, who is now the receiver, using the Internet. It sends back the results.

Technical people will note at this point that the same "network connection" is used, as TCP is both send and receive, so no new "network connection" is ever created. This is true on a technical level, but from this model's point of view, there is a new "connection"; what constitutes a "connection" does not always match the obvious technical behaviors. On most search engine pages with most browsers, you'll also repeat this step for each graphic on the results page. In this case, the person P is the sender for the first connection, the company running the search engine S is the receiver for the first connection, and the medium is everything in between, starting at P's computer and going all the way to the search engine itself.

This model does not just apply to the Internet and computer-based communication. It applies to all communication. When you buy a newspaper, the newspaper is the medium, and the sender is the publisher. When you watch television, the television is the medium, and the broadcasting station is the sender. When you talk to somebody, the air is the medium and the speech is the message. This is a very general and powerful model for thinking about all forms of communication.
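The connection abstraction, as applied in the search engine example, can be sketched in a few lines of code. This is only an illustrative sketch; the class and field names are invented here, not drawn from any standard library:

```python
from dataclasses import dataclass

# A connection in the simplified model: an identifiable sender, medium,
# and receiver, carrying messages in one direction only.
# All names below are invented for illustration.

@dataclass(frozen=True)
class Connection:
    sender: str
    medium: str
    receiver: str

    def send(self, message: str) -> dict:
        """Carry one message from sender to receiver across the medium."""
        return {"from": self.sender, "via": self.medium,
                "to": self.receiver, "body": message}

# The search engine example: two-way communication is two connections.
query = Connection(sender="P", medium="Internet", receiver="S")
reply = Connection(sender="S", medium="Internet", receiver="P")

print(query.send("search request"))
print(reply.send("search results"))
```

Note that the reply is a second Connection with the roles reversed, mirroring the rule that a connection is always unidirectional.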

4. EM spectrum

The Electromagnetic Spectrum and Bandwidth


This section talks about bandwidth and about where the various transmission media lie within the electromagnetic spectrum.

The Electromagnetic Spectrum


When electrons move, they create electromagnetic waves that can propagate through free space. This phenomenon was first predicted by James Clerk Maxwell in 1865, and it was first produced and observed by Heinrich Hertz in 1887. All modern communication depends on manipulating and controlling signals within the electromagnetic spectrum. The electromagnetic spectrum ranges from extremely low-frequency radio waves of 30Hz, with wavelengths of nearly the earth's diameter, to high-frequency cosmic rays of more than 10 million trillion Hz, with wavelengths smaller than the nucleus of an atom. The electromagnetic spectrum is depicted as a logarithmic progression: the scale increases by multiples of 10, so the higher regions encompass a greater span of frequencies than do the lower regions.

Although the electromagnetic spectrum represents an enormous range of frequencies, not all the frequencies are suitable for purposes of human communications. At the very low end of the spectrum are signals traveling at 30Hz (that is, at 30 cycles per second). One of the benefits of a very low frequency is that it can travel much farther than a high frequency before it loses power (that is, attenuates). So a 30Hz signal provides the benefit of being able to travel halfway around the world before it requires some form of amplification. For example, one defense agency uses 30Hz to communicate with its submarines by using telemetry (a message that says "We're still here. We're still here" is sent, and the subs know that if they don't get that message, they had better see what's going on).

At the high end of the electromagnetic spectrum, signals travel over a band of 10 million trillion Hz (that is, 10^22 Hz). This end of the spectrum has phenomenal bandwidth, but it has its own set of problems. The wave forms are so minuscule that they're highly distorted by any type of interference, particularly environmental interference such as precipitation. Furthermore, higher-frequency wave forms such as x-rays, gamma rays, and cosmic rays are harmful to human physiology and therefore aren't available for us to use for communication at this point.

Infrasound and the Animal World

The universe is full of infrasound, the frequencies below the range of human hearing. Earthquakes, wind, thunder, volcanoes, and ocean storms (massive movements of earth, air, fire, and water) generate infrasound. In the past, very-low-frequency sound was not thought to play much of a role in animals' lives. However, we now know that sound at the lowest frequencies of elephant rumbles (14Hz to 35Hz) has remarkable properties. It is little affected by passage through forests and grasslands, and male and female elephants use it to find one another for reproduction. It seems that elephants communicate with one another by using calls that are too low-pitched for human beings to hear, and because of the properties of the infrasound range, these communications can take place over very long distances. Intense infrasonic calls have also been recorded from finback whales.

Because of the problems with very low and very high frequencies, we primarily use the middle of the electromagnetic spectrum for communication: the radio, microwave, infrared, and visible light portions of the spectrum. We do this by modulating the amplitudes, the frequencies, and the phases of the electromagnetic waves. Bandwidth is a measure of the difference between the lowest and highest frequencies being carried. Each of these communications bands offers differing amounts of bandwidth, based on the range of frequencies it covers. The higher up in the spectrum you go, the greater the range of frequencies involved.

Figure 2.6 shows the electromagnetic spectrum and where some of the various transmission media operate. Along the right-hand side is the terminology that the International Telecommunication Union (ITU) applies to the various bands: extremely low, very low, low, medium, high, very high (VHF), ultrahigh (UHF), superhigh (SHF), extremely high (EHF), and tremendously high frequencies (THF) are all various forms of radio bands. Then we move into the light range, with infrared and visible light. You can see just by the placement of the various transmission media that not all are prepared to face the high-bandwidth future that demanding advanced applications (such as streaming media, e-learning, networked interactive games, interactive TV, telemedicine, metacomputing, and Web agents) will require.

Figure 2.6 The electromagnetic spectrum

The radio, microwave, infrared, and visible light portions of the spectrum can all be used for transmitting information by modulating various measurements related to electromagnetic waves (see Figure 2.7):

Figure 2.7 An electromagnetic wave

Frequency: The number of oscillations per second of an electromagnetic wave is called its frequency.

Hertz: Frequency is measured in Hertz (Hz), in honor of Heinrich Hertz.

Wavelength: The wavelength is the distance between two consecutive maxima or minima of the wave form.

Amplitude: Amplitude is a measure of the height of the wave, which indicates the strength of the signal.

Phase: Phase refers to the angle of the wave form at any given moment.

Bandwidth: The range of frequencies (that is, the difference between the lowest and highest frequencies carried) that make up a signal is called bandwidth.

You can manipulate frequency, amplitude, and phase in order to distinguish between a one and a zero.
Hence, you can represent digital information over the electromagnetic spectrum. One way to manipulate frequency is by sending ones at a high frequency and zeros at a low frequency. Devices that do this are called frequency-modulated devices. You can also modulate amplitude by sending ones at a high amplitude or voltage and zeros at a low amplitude. A complementary receiving device could then determine whether a one or a zero is being sent. As yet another example, because the phase of the wave form refers to shifting where the signal begins, you could have ones begin at 90 degrees and zeros begin at 270 degrees. The receiving device could discriminate between these two bit states (zero versus one) based on the phase of the wave as compared to a reference wave.

Twisted-pair, which was the original foundation of the telecommunications network, has a maximum usable bandwidth of about 1MHz. Coax, on the other hand, has greater capacity, offering a total of 1GHz of frequency spectrum. The radio range, particularly microwave, is the workhorse of the radio spectrum; it gives us 100GHz to operate with. In comparison, fiber optics operates over a band of more than 200THz (terahertz). So, as we see increasingly more bandwidth-hungry applications, we'll need to use fiber optics to carry the amount of traffic those applications generate. Twisted-pair will see little use with the future application set.

Figure 2.8 plots various telecommunications devices on the electromagnetic spectrum.

Figure 2.8 Telecommunications devices and the electromagnetic spectrum
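The three techniques just described (keying by frequency, amplitude, or phase) can be sketched as a toy simulation. The sample counts, amplitude levels, and frequencies below are arbitrary choices for illustration, not taken from any real modem standard:

```python
import math

def modulate(bits, scheme, samples_per_bit=8):
    """Generate carrier samples for a bit string under a toy keying scheme."""
    out = []
    for bit in bits:
        for t in range(samples_per_bit):
            angle = 2 * math.pi * t / samples_per_bit
            if scheme == "frequency":      # ones at a high frequency, zeros low
                f = 2 if bit == "1" else 1
                out.append(math.sin(f * angle))
            elif scheme == "amplitude":    # ones at a high amplitude, zeros low
                a = 1.0 if bit == "1" else 0.25
                out.append(a * math.sin(angle))
            elif scheme == "phase":        # ones begin at 90 degrees, zeros at 270
                shift = math.pi / 2 if bit == "1" else 3 * math.pi / 2
                out.append(math.sin(angle + shift))
    return out

wave = modulate("10", "phase")
print(len(wave))   # 16 samples: two bits at eight samples each
```

A receiver would recover the bits by comparing each symbol's frequency, peak amplitude, or starting phase against a reference, as the text describes.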

Bandwidth
As mentioned earlier, bandwidth is the range of frequencies that make up a signal. There are three major classes of bandwidth that we refer to in telecommunications networks: narrowband, wideband, and broadband.

Narrowband
Narrowband means that you can accommodate up to 64Kbps, which is also known as the DS-0 (Digital Signal level 0) channel. This is the fundamental increment on which digital networks were built. Initially, this metric of 64Kbps was derived based on our understanding of what it would take to carry voice in a digital manner through the network. If we combine these 64Kbps channels together, we can achieve wideband transmission rates.

Wideband
Wideband is defined as n × 64Kbps, up to approximately 45Mbps. A range of services are provisioned to support wideband capabilities, including T-carrier, E-carrier, and J-carrier services. These are the services on which the first generation of digital hierarchy was built. T-1 offers 1.544Mbps, and because the T-carrier system is a North American standard, T-1 is used in the United States. It is also used in some overseas territories, such as South Korea and Hong Kong. E-1, which provides a total of 2.048Mbps, is specified by the ITU. It is the international standard used throughout Europe, Africa, most of Asia-Pacific, the Middle East, and Latin America. J-carrier is the Japanese standard, and J-1 offers 1.544Mbps.

Not every office or application requires the total capacity of T-1, E-1, or J-1, so you can subscribe to fractional services, which means you subscribe to bundles of channels that offer less than the full rate. Fractional services are normally provided in bundles of 4, so you can subscribe to 4 channels, 8 channels, 12 channels, and so on. Fractional services are also referred to as n × 56Kbps/64Kbps in the T-carrier system and n × 64Kbps under E-carrier. High-bandwidth facilities include T-3, E-3, and J-3. T-3 offers 45Mbps, E-3 offers 34Mbps, and J-3 supports 32Mbps. (T-, E-, and J-carrier services are discussed in more detail in Chapter 5.)
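The carrier rates quoted above can be cross-checked against their channel structure. The sketch below assumes the standard framing figures (T-1 adds 8Kbps of framing overhead to its 24 channels; E-1 carries 32 timeslots, including signaling and framing):

```python
DS0 = 64_000  # bits per second, the narrowband building block

# T-1: 24 DS-0 channels plus 8 Kbps of framing overhead.
t1_bps = 24 * DS0 + 8_000
# E-1: 32 x 64 Kbps timeslots (30 for voice, 2 for signaling/framing).
e1_bps = 32 * DS0

print(t1_bps, e1_bps)   # 1544000 2048000

# Fractional T-1 services, sold in bundles of 4 channels:
fractional_bps = [n * DS0 for n in (4, 8, 12)]
print(fractional_bps)   # [256000, 512000, 768000]
```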

Broadband
The future hierarchy, of course, rests on broadband capacities, and broadband can be defined in different ways, depending on what part of the industry you're talking about. Technically speaking, the ITU has defined broadband as anything over 2Mbps. But this definition was created in the 1970s, when 2Mbps seemed like a remarkable capacity.

The Impact of Fiber Optics on Bandwidth

So far this chapter has used a lot of bits-per-second measurements. It can be difficult to grasp what these measurements really mean, so here's a real-world example. Today, fiber optics very easily accommodates 10Gbps (that is, 10 billion bits per second). But what does that really mean? At 10Gbps you'd be able to transmit all 32 volumes of the Encyclopedia Britannica in 1/10 second, the blink of an eye. That is an incredible speed. Not many people have a computer capable of capturing 10Gbps.

Keep in mind that underlying all the various changes in telecommunications technologies and infrastructures, a larger shift is also occurring: the shift from the electronic to the optical, or photonic, era. To extract and make use of the inherent capacity that fiber optics affords, we will need an entirely new generation of devices that are optical at heart. Otherwise, we'll need to stop a signal, convert it back into an electrical form to process it through the network node, and then convert it back into optics to pass it along, and this will not allow us to exercise the high data rates that we're beginning to envision. Given today's environment, for wireline facilities, it may be more appropriate to think of broadband as starting where the optical network infrastructure starts. Synchronous Digital Hierarchy (SDH) and Synchronous Optical Network (SONET) are part of the second generation of digital hierarchy, which is based on fiber optics as the physical infrastructure. (SDH and SONET are discussed in detail in Chapter 5.)
The starting rate (that is, the lowest data rate supported) on SDH/SONET is roughly 51Mbps. So, for the wireline technologies (those used in the core or backbone network), 51Mbps is considered the starting point for broadband. In the wireless realm, though, if we could get 2Mbps to a handheld today, we'd be extremely happy and would be willing to call it broadband. So, remember that the definition of broadband really depends on the situation. But we can pretty easily say that broadband is always a multichannel facility that affords higher capacities than the traditional voice channel, and in the local loop, 2Mbps is a major improvement.
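As a rough sanity check on the fiber example above, here is what a tenth of a second at 10Gbps amounts to (the encyclopedia comparison is the chapter's illustration; the byte figure below is just the raw arithmetic):

```python
# How much data moves in "the blink of an eye" at 10 Gbps?
rate_bps = 10 * 10**9        # 10 Gbps
blink_seconds = 0.1          # the 1/10 second quoted above

bits = rate_bps * blink_seconds
megabytes = bits / 8 / 10**6
print(megabytes)   # 125.0 MB -- on the order of a 32-volume encyclopedia's text
```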

Electromagnetic Spectrum


The electromagnetic spectrum consists of all the frequencies at which electromagnetic waves can occur, ordered from zero to infinity. Radio waves, visible light, and x-rays are examples of electromagnetic waves at different frequencies. Every part of the electromagnetic spectrum is exploited for some form of scientific or military activity; the entire spectrum is also key to science and industry. Forensic scientists often use ultraviolet light technologies to search for latent fingerprints and to examine articles of clothing. Infrared and near-infrared light technology is used by forensic scientists to record images on specialized film and in spectroscopy, a tool that determines the chemical structure of a molecule (such as DNA) without damaging the molecule.

Electromagnetic waves have been known since the mid-nineteenth century, when their behavior was first described by the equations of Scottish physicist James Clerk Maxwell (1831-1879). Electromagnetic waves, according to Maxwell's equations, are generated whenever an electrical charge (e.g., an electron) is accelerated, that is, changes its direction of motion, its speed, or both. An electromagnetic wave is so named because it consists of an electric and a magnetic field propagating together through space. As the electric field varies with time, it renews the magnetic field; as the magnetic field varies, it renews the electric field. The two components of the wave, which always point at right angles both to each other and to their direction of motion, are thus mutually sustaining, and form a wave which moves forward through empty space indefinitely.

The rate at which energy is periodically exchanged between the electric and magnetic components of a given electromagnetic wave is the frequency, ν, of that wave, and has units of cycles per second, or Hertz (Hz); the linear distance between the wave's peaks is termed its wavelength, λ, and has units of length (e.g., feet or meters).
The speed at which a wave travels is the product of its wavelength and its frequency, v = λf; in the case of electromagnetic waves, Maxwell's equations require that this velocity equal the speed of light, c (186,000 miles per second [300,000 km/sec]). Since the velocity of all electromagnetic waves is fixed, the wavelength of an electromagnetic wave always determines its frequency, or vice versa, by the relationship c = λf. The higher the frequency (i.e., the shorter the wavelength) of an electromagnetic wave, the higher in the spectrum it is said to be. Since a wave cannot have a frequency less than zero, the spectrum is bounded by zero at its lower end. In theory, it has no upper limit.

All atoms and molecules at temperatures above absolute zero radiate electromagnetic waves at specific frequencies that are determined by the details of their internal structure. In quantum physics, this radiation must often be described as consisting of particles called photons rather than as waves; however, this article restricts itself to the classical (continuous-wave) treatment of electromagnetic radiation, which is adequate for most technological purposes. Not only do atoms and molecules radiate electromagnetic waves at certain frequencies, they can also absorb them at the same frequencies. All material objects, therefore, are continuously absorbing and radiating electromagnetic waves of various frequencies, thus exchanging energy with other objects, near and far. This makes it possible to observe objects at a distance by detecting the electromagnetic waves that they radiate or reflect, or to affect them in various ways by beaming electromagnetic waves at them. These facts make the manipulation of electromagnetic waves at various frequencies (i.e., from various parts of the electromagnetic spectrum) fundamental to many fields of technology and science, including radio communication, radar, infrared sensing, visible-light imaging, lasers, x rays, astronomy, and more.

The spectrum has been divided by physicists into a number of frequency ranges, or bands, denoted by convenient names.
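The relationship c = λf can be sketched in a few lines of Python. This is an illustrative sketch; the speed-of-light constant and the 100 MHz example are standard values, not taken from this article.

```python
# Sketch of the relationship c = lambda * f described above.
C = 3.0e8  # approximate speed of light in vacuum, m/s (300,000 km/s)

def wavelength_from_frequency(f_hz: float) -> float:
    """Return the wavelength (m) of an electromagnetic wave of frequency f_hz."""
    return C / f_hz

def frequency_from_wavelength(lam_m: float) -> float:
    """Return the frequency (Hz) of an electromagnetic wave of wavelength lam_m."""
    return C / lam_m

# A broadcast FM carrier near 100 MHz (1e8 Hz) has a wavelength of about 3 m:
print(wavelength_from_frequency(1e8))  # 3.0
```

Because c is fixed, either quantity fully determines the other, which is why the spectrum can be described interchangeably by frequency or by wavelength.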
The points at which these bands begin and end do not correspond to shifts in the physics of electromagnetic radiation; rather, they reflect the importance of different frequency ranges for human purposes.

Radio waves are typically produced by time-varying electrical currents in relatively large objects (i.e., at least centimeters across). This category of electromagnetic waves extends from the lowest-frequency, longest-wavelength electromagnetic waves up into the gigahertz (GHz; billions of cycles per second) range. The radio frequency spectrum is divided into more than 450 non-overlapping frequency bands. These bands are exploited by different users and technologies: for example, AM radio is broadcast using frequencies on the order of 10^6 Hz, while television signals are transmitted using frequencies on the order of 10^8 Hz (about a hundred times higher). In general, higher-frequency signals can always be used to transmit lower-frequency information, but not the reverse; thus, a voice signal with a maximum frequency content of 20 kHz (kilohertz; thousands of hertz) can, if desired, be transmitted on a carrier in the GHz range, but it is impossible to transmit a television signal over a broadcast FM station. Radio waves termed microwaves are used for high-speed communications links, heating food, radar, and electromagnetic weapons, that is, devices designed to irritate or injure people or to disable enemy devices. The microwave frequencies used for communications and radar are subdivided still further into frequency bands with special designations, such as "X band." Microwave radiation from the Big Bang, the cosmic explosion in which the Universe originated, pervades all of space.

Electromagnetic waves from approximately 10^12 to 5 × 10^14 Hz are termed infrared radiation. The word infrared means "below red," and is assigned to these waves because their frequencies are just below those of red light, the lowest-frequency light visible to human beings. Infrared radiation is typically produced by molecular vibrations and rotations (i.e., heat) and causes or accelerates such motions in the molecules of objects that absorb it; it is therefore perceived by the body as increased warmth of skin exposed to it. Since all objects above absolute zero emit infrared radiation, electronic devices sensitive to infrared can form images even in the absence of visible light. Because of their ability to "see" at night, imaging devices that electronically create visible images from infrared light are important in security systems, on the battlefield, and in observations of the Earth from space for both scientific and military purposes.

Visible light consists of electromagnetic waves with frequencies in the 4.3 × 10^14 to 7.5 × 10^14 Hz range. Waves in this narrow band are typically produced by rearrangements (orbital shifts) of the outer electrons of atoms. Most of the energy in the sunlight that reaches the Earth's surface consists of electromagnetic waves in this narrow frequency range; our eyes have therefore evolved to be sensitive to this band of the electromagnetic spectrum. Photovoltaic cells (electronic devices that turn incident electromagnetic radiation into electricity) are also designed to work primarily in this band, and for the same reason.
Because half the Earth is liberally illuminated by visible light at all times, this band of the spectrum, though narrow (less than an octave), is essential to thousands of applications, including all forms of natural vision and many forms of machine vision.

Ultraviolet light consists of electromagnetic waves with frequencies in the 7.5 × 10^14 to 10^16 Hz range. It is typically produced by rearrangements of the outer and intermediate electrons of atoms. Ultraviolet light is invisible, but can cause chemical changes in many substances: for living things, the consequences of these chemical changes can include skin burns, blindness, or cancer. Ultraviolet light can also cause some substances to give off visible light (fluoresce), a property useful for mineral detection, art-forgery detection, and other applications. Various industrial processes employ ultraviolet light, including photolithography, in which patterned chemical changes are produced rapidly over an entire film or surface by projecting patterned ultraviolet light onto it. Most ultraviolet light from the Sun is absorbed by a thin layer of ozone (O3) in the stratosphere, making the Earth's surface much more hospitable to life than it would be otherwise; some chemicals produced by human industry (e.g., chlorofluorocarbons) destroy ozone, threatening this protective layer.

Electromagnetic waves with frequencies from about 10^16 to 10^19 Hz are termed x rays. X rays are typically produced by rearrangements of electrons in the innermost orbitals of atoms. When absorbed, x rays are capable of ejecting electrons entirely from atoms and thus ionizing them (i.e.,

causing them to have a net positive electric charge). Ionization is destructive to living tissue because ions may abandon their original molecular bonds and form new ones, altering the structure of a DNA molecule or some other aspect of cell chemistry. However, x rays are useful in medical diagnosis and in security systems (e.g., airline luggage scanners) because they can pass entirely through many solid objects; both traditional contrast images of internal structure (often termed "x rays" for short) and modern computerized axial tomography images, which give much more information, depend on the penetrating power of x rays. X rays are produced in large quantities by nuclear explosions (as are electromagnetic waves at all other frequencies above the radio band), and have been proposed for use in a space-based ballistic-missile defense system.

All electromagnetic waves above about 10^19 Hz are termed gamma rays (γ rays); they are typically produced by rearrangements of particles in atomic nuclei. A nuclear explosion produces large quantities of gamma radiation, which is both directly and indirectly destructive of life. By interacting with the Earth's magnetic field, gamma rays from a high-altitude nuclear explosion can cause an intense pulse of radio waves termed an electromagnetic pulse (EMP). EMP may be powerful enough to burn out unprotected electronics on the ground over a wide area.

Radio waves present a unique regulatory problem, because only one broadcaster can function at a particular frequency in a given area. (Signals from overlapping same-frequency broadcasts would be received simultaneously by antennas, interfering with each other.) Throughout the world, therefore, governments regulate the radio portion of the electromagnetic spectrum, a process termed spectrum allocation. In the United States, since the passage of the Communications Act of 1934, the radio spectrum has been deemed a public resource.
Individual private broadcasters are given licenses allowing them to use specific portions of this resource, that is, specific sub-bands of the radio spectrum. The United States Commerce Department's National Telecommunications and Information Administration (NTIA) and the Federal Communications Commission (FCC) oversee the spectrum allocation process, which is subject to intense lobbying by various telecommunications stakeholders. In summary, the manipulation of every region of the electromagnetic spectrum is of urgent technological interest, but most work is being done in the radio through visible portions of the spectrum (below 7.5 × 10^14 Hz), where communications, radar, and imaging can be accomplished.

What is spectrum? The word spectrum refers to a collection of electromagnetic radiation of various wavelengths. Spectrum, or the airwaves, comprises the radio frequencies on which all communication signals travel. In India, radio frequencies are used for many types of services, such as space communication, mobile communication, broadcasting, radio navigation, mobile satellite services, aeronautical satellite services, and defence communication. Radio frequency is a natural resource, but unlike other resources it is not depleted when used; it is, however, wasted if not used efficiently.
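The band boundaries described above can be collected into a small classifier. This is an illustrative sketch using the approximate frequencies quoted in the text; real band definitions overlap and vary by convention.

```python
# Rough classifier for the spectrum bands described above. Band edges follow
# the approximate figures in the text, not an authoritative allocation table.
BANDS = [
    (1e12,   "radio"),        # radio waves and microwaves, below ~10^12 Hz
    (4.3e14, "infrared"),     # ~10^12 to ~4.3 x 10^14 Hz
    (7.5e14, "visible"),      # 4.3 x 10^14 to 7.5 x 10^14 Hz
    (1e16,   "ultraviolet"),  # 7.5 x 10^14 to 10^16 Hz
    (1e19,   "x ray"),        # 10^16 to 10^19 Hz
]

def band(frequency_hz: float) -> str:
    """Return the named spectrum band containing the given frequency."""
    for upper_edge, name in BANDS:
        if frequency_hz < upper_edge:
            return name
    return "gamma ray"  # everything above ~10^19 Hz

print(band(1e8))   # radio (a broadcast FM carrier)
print(band(5e14))  # visible
```

Because the spectrum is bounded below by zero but unbounded above, the classifier needs an explicit catch-all only at the top end.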

6. Noise analysis
By Risto Kaivola and Timo Avikainen, November 1st, 2001 (Blowers / Fans / Filters; November 2001 issue)

Modern environmental requirements demand low noise levels for telecommunication cabinet products, especially when located in offices or urban areas. The spaces available are limited, and the power and electronics packaging densities are increasing. Fans are commonly used to enhance the cooling of electronics by improving heat transfer through higher air velocities. In this paper, noise measurements of air-cooled cabinets are analyzed, and possibilities for reducing the noise are discussed.


Base Station Noise

Ordinarily, axial fans are used at the first stage to assist free convection, or the cabinet layout is designed from the beginning for forced convection. Axial fans generate a greater volume of flow but less pressure. Thus, in compact, aggressive designs with high backpressure, mixed-flow or radial fans can be less noisy.

Figure 1. Noise power levels of 127-mm axial fans from three manufacturers (A, B, and C) at the same backpressure and airflow.

In axial fans, the correct operating range is an important factor. When the pressure load exceeds the capacity of the fan, noise increases significantly while airflow decreases. Axial fans with equal interface dimensions and the same electrical power can produce very different noise levels. Figure 1 illustrates the noise levels of 127-mm axial fans from three manufacturers.

Fans A5 and B5 each have 5 blades; fan C7 has 7 blades. The curves were measured at the same airflow and backpressure.

The fan noise is proportional to the total noise induced by the cabinet. Choosing a quiet yet powerful fan is essential to the total noise when the cabinet layout is fixed. Placing the fan too close to any obstacle can increase the noise by several dB. An axial fan's inlet side is especially sensitive, but in some cases a radial fan will produce less noise with a finger guard (a thin wire web that protects people from injury by the rotating blades).

Fan noise has two main components: the blade passing frequency (and its harmonics) and flow noise. The spectrum of the cabinet product is not the same shape as that of the fan alone; see Figures 2 and 3. Blade passing frequencies (BPF) and harmonics appear as sharp peaks; the flow noise is approximately 10 dB lower. Even so, if the BPF is totally eliminated from the overall noise level, the total value decreases only by approximately 3 dB.
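The roughly 3 dB figure follows from how incoherent sound levels add. A minimal sketch, assuming the tonal (BPF) component and the broadband flow component carry approximately equal total power (the 60 dB values are illustrative, not measured in the article):

```python
import math

def combine_db(levels_db):
    """Sum incoherent sound power levels given in dB."""
    total_power = sum(10 ** (level / 10) for level in levels_db)
    return 10 * math.log10(total_power)

bpf_level = 60.0   # dB, tonal component (BPF and harmonics), assumed value
flow_level = 60.0  # dB, broadband flow noise, assumed value
total = combine_db([bpf_level, flow_level])

# Two equal incoherent sources combine to 3 dB above either one alone,
# so removing one of them gains only about 3 dB:
print(round(total - flow_level, 1))  # 3.0
```

This is why attacking the BPF peaks alone yields limited benefit: the broadband flow noise remains and dominates the residual level.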

Figure 2. Base station spectrum (a third-generation cabinet prototype with six B5-type fans as the main noise source).

Figure 3. Single fan spectrum (one B5 fan with the same backpressure and rotating speed as in Figure 2).

Airflow also makes noise in a cabinet without a fan. In Figure 4, a spectrum was measured using relatively silent airflow through the structure. The shapes of the spectra in Figures 2, 3, and 4 give evidence that the blade passing frequency and the flow noise can be treated and attenuated separately. Both the airflow spectra and the BPF peaks are similar in Figures 2 and 3. The BPF is easily calculated from the number of blades and the rotating speed.
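That calculation can be sketched as follows (the 3000 rpm figure is illustrative; the article does not state the fans' rotating speed):

```python
# Blade passing frequency from blade count and rotating speed, as noted above.
def blade_passing_frequency_hz(blades: int, rpm: float) -> float:
    """Fundamental BPF in Hz; harmonics fall at integer multiples of it."""
    return blades * rpm / 60.0

# A 5-blade fan (like A5 or B5) at an assumed 3000 rpm:
print(blade_passing_frequency_hz(5, 3000))  # 250.0 Hz

# The 7-blade C7 at the same speed has a higher fundamental:
print(blade_passing_frequency_hz(7, 3000))  # 350.0 Hz
```

Knowing the BPF in advance makes it easy to identify the sharp peaks in a measured spectrum and separate them from the broadband flow noise.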

Figure 4. Airflow noise spectrum in the prototype cabinet mentioned above.

The rank order of the fans does not depend only on the backpressure. In different cabinets, the rank order is subject to change; see Table 1, which compares weighted noise power results for Cabinet A (a prototype, as in Figure 1), Cabinet B (a mock-up system for fan ranking), and a pressure box according to the ISO 10320 standard. Because of the complicated nature and interactions of the BPF and especially the flow noise, determining reliable rankings may require very accurate mock-up geometry, or may only be possible with a prototype or a production cabinet.

Table 1. Noise power levels, dB(A) (various environments; same rotating speed and backpressure)

Fan          A5      B5      C7
Cabinet A    74.6    71.6    70.8
Cabinet B    68.8    67.7    67.2
ISO 10320    74.6    68.1    72.6


Noise Reduction Means

Noise can be decreased by reactive, passive, and active means. Both reactive and passive means require space. A three-dimensional, universal active solution is too complicated, requiring tens of microphones and loudspeakers and heavy signal processing.

The cabinet's mechanical cavities can act as resonant chambers. A resonant chamber has relatively small openings compared to its other dimensions. Because of its reactive nature, a resonant chamber can act both as a silencer and as a noise amplifier. In principle, reactive acoustical components divide energy spatially or across the frequency range. In practice, some cavities act like a musical instrument's resonant chamber.

Different versions of cabinet models are possible, including various components; this means that the cabinets are individual. The cavities and structures generally act as resonators rather than as attenuators. We have verified this by applying vibration excitation to the cabinet: the peaks in the acoustical spectra are very sharp, meaning that the energy is not spread over a large frequency range but instead causes strong vibration amplitudes. Metal structures do not have significant vibration losses; in the cabinets they usually act as mechanical resonators, while the cavities act as acoustical resonators.

When the space between the ceiling and the roof was filled with absorbent material, the noise level decreased by 0.2 dB, and when the spaces between the inner and outer side panels were filled, the noise level decreased by 0.2 to 0.5 dB per cavity. This experiment was made to verify the hypothesis that the cavities act as resonators. The mechanical vibration does not affect the noise in the same way, because damping the same cavity panels with a heavy carpet had no influence on the noise.

Passive means (absorbent material that converts noise energy to heat) are normally added afterwards. If airflow ducts or radial fan housings have free space, significant attenuation is possible. However, if the attenuation affects cooling, the added absorbent may require more fan power, and the total outcome can be negative: both noise and heat may increase. In these cases, finding the optimum solution requires further investigation.
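The warning about added fan power can be made quantitative with the classic empirical fan laws. These are general fan-engineering rules of thumb, not results from this article: for a given fan, airflow scales roughly linearly with rotating speed, while the sound power level rises roughly as 50·log10 of the speed ratio.

```python
import math

# Hedged sketch of the empirical fan-law estimate for noise vs. speed.
# The exponent (50 * log10) is a widely quoted rule of thumb; actual fans vary.
def noise_increase_db(rpm_old: float, rpm_new: float) -> float:
    """Estimated change in sound power level when fan speed changes."""
    return 50.0 * math.log10(rpm_new / rpm_old)

# Driving a fan 20% faster to overcome added backpressure (assumed speeds)
# costs roughly 4 dB, which can wipe out the gain from the absorbent:
print(round(noise_increase_db(3000, 3600), 1))  # 4.0
```

This illustrates why an absorbent that restricts airflow can be a net loss: a few tenths of a dB of attenuation are easily outweighed by the extra fan speed needed to keep the electronics cool.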


Fan Noise Theories

Fan noise has been a topic of active research for years. In earlier times, ventilation in buildings was of great interest. Sub- and supersonic fans are a great noise problem in airplanes. However, studies in those areas are not very relevant to the air-cooling noise of telecommunication cabinets, because the air speeds are of a different order.

Neise [1] divides fan noise into several components; see Figure 5. The discrete components comprise the BPF and its harmonics, and the broadband component is related to the flow noise, i.e., the steady horizontal line in the spectrum.

Figure 5. Fan noise origins according to Neise (1988).

Gutin noise can be neglected; this is the noise the blades make as their distance from the listening point varies. This noise should vary when the listening point is moved, and it should be zero on the axial line of the fan. Other phenomena are much noisier. The next component, non-uniform stationary flow, is the effect of the blades in a non-uniform flow field. This noise is present in telecommunication cabinet use. However, the next component, non-uniform unsteady flow, is more dominant, because the flow field in electronic cabinet use varies so much. Our measurements in the prototype gave pressure fluctuations 2 to 3 times greater than the mean pressure; these measurements underestimate the fluctuations because of the relatively slow pressure measurement and the relatively large distance from the blades.

Secondary flows are a problem; see, for example, Longhouse [2]. These should be investigated, perhaps using smaller gaps between the fan blade tip and the venturi. The tip and venturi geometry is essential, and modifications should be made together with a manufacturer. Modeling of the flow field may also yield results here; see Fukano et al. [3] and Möhring et al. [4].

Vortex shedding is related to tip and blade geometry. Some tests have been reported in which modifications were made to the tip, to a cord in the middle of the blade, or to a saw-tooth shape at the trailing edges. The turbulent boundary layer might be the main reason for the broadband flow noise in telecommunication cabinet use. If the units to be cooled are located in turbulent flow too close behind the fan, the interaction can strengthen the turbulence and thus significantly increase the noise.

When using the silent air system described earlier, the flow to the cabinet was introduced through a tube, and the flow was laminar. When a cardboard cross-shaped structure, made with the same dimensions as the fan motor support structure, was added to generate turbulence at the tube outlet, the noise level increased by 3 dB. Our interpretation of this result is that the turbulence caused by the cross (or the fan) does not have adequate time or distance to become laminar again before the flow hits the units. Even a very small excitation vortex at an axial fan inlet may significantly increase the noise level; in some cases with radial fans, it may do the opposite.

Fan noise prediction formulae are empirical; see Cudina [5]. For analytical solutions, the problem is too complicated. Most of the research on fan noise has attempted to investigate details or the effect of a single factor. In practice, this can lead to unexpected behavior. For example, a grill near the inlet reduces the noise of a radial fan but increases the noise of an axial fan. A grill or a finger guard causes small disturbances to the inflow, but it is hard to predict how the fan will react.


Noise Standards and Regulations

Maximum noise levels specified in acoustic noise standards for electronic devices in general, and telecommunication equipment in particular, depend on the final location of the device. Noise limits are naturally most stringent for units used in office buildings or residences. Typically, the acoustic noise level has been specified according to the relevant international noise standards (the EU-wide ETS 300-753, Bellcore in the USA [6], or national noise standards [7]). A new EU noise directive has been published and may influence future ETS 300-753 updates.


Conclusion

The main noise source in telecommunication cabinets is the cooling fan, and the fan noise interacts with the cabinet structure and cavities in a complicated way. If the fan noise can be decreased, the total noise will also decrease. If noise is considered part of the design from the very beginning, and enough space is reserved for passive noise attenuation, the required noise level can be achieved.
