
PAMANTASAN NG LUNGSOD NG PASIG

Alkalde Jose St. Kapasigan, Pasig City


College of Engineering

ASSIGNMENT in ETECH

Submitted by:

Aporro, Rose Ann


ECE- 2A

Submitted to:
Prof.

August 17, 2015

What is a Crossover Network?


A crossover splits the audio signal into frequency ranges so that each speaker receives only the range it can reproduce. This keeps us from exceeding the capabilities of the speaker, reduces distortion, and avoids speaker damage.
Types of Crossover Filters
There are three types of crossovers: Low-Pass, High-Pass, and Band-Pass.
Low-Pass Filter: A low-pass filter blocks (attenuates) high frequency signals above the cutoff
frequency and passes low-frequency signals. It is sometimes called a high-cut filter, or treble cut filter.
Low-pass filters are generally used on subwoofers to block high frequency signals that they cannot
reproduce.
High-Pass Filter: A high-pass filter blocks (attenuates) the low frequency signals below the cutoff
frequency, but passes high frequency signals. It is sometimes called a low-cut filter.
Band-Pass Filter: A band-pass filter blocks (attenuates) frequency signals outside of a certain range
and passes frequency signals within that range. These filters can be created by combining a low-pass
filter with a high-pass filter into a single filter. The bandwidth of the filter is simply the difference
between the upper and lower cutoff frequencies.
In physics, attenuation is the gradual loss in intensity of a signal or flux of any kind as it passes through a medium.
Filter Characteristics
Crossovers do not block undesired frequencies completely. They cut frequencies progressively. You
need two specifications for a crossover:

The crossover frequency, or the point at which the filter begins to work.
The slope, or how quickly the filter blocks unwanted frequencies. A crossover slope describes how effective a crossover is in blocking frequencies. It is usually expressed in dB/octave.

Assume we have a two-way speaker with one woofer and one tweeter. The crossover frequency is
simply the point at which the crossover begins to separate the high frequencies from the bass
frequencies. For most two-way speakers, this frequency is set around 1 to 2kHz.
Let's assume the low-pass crossover frequency in the two-way speaker is set at 1.5 kHz. This does not mean that all the frequencies higher than 1.5 kHz go to the tweeter and everything lower goes to the woofer. As you can see in the following diagram, the high frequencies are reduced very gently.

Source: TheSpeakerCompany.com (Out of Business)

This is most probably a crossover network that has a slope of 6 dB/octave. This simply means that a frequency that is twice the cutoff frequency (i.e., 3 kHz) is reduced by 6 dB compared to a signal at 1.5 kHz. This is an example of a first-order filter.


There are many different types of filter circuits, with different responses to changing frequency. In all cases, at the cutoff frequency the filter attenuates the input power by half, or -3 dB. You should remember that when you want to increase the sound level by 3 dB, you have to double the amplifier's power.
A second-order filter (12 dB/octave) attenuates higher frequencies more steeply. This means that a frequency twice as large as the cutoff frequency (i.e., 3 kHz) is reduced by 12 dB.
A third-order filter has an 18 dB/octave slope and a fourth-order filter has a slope of 24 dB/octave. We rarely see slopes steeper than 24 dB/octave in loudspeaker crossovers.
Low-order passive crossovers are not very expensive. Higher-order (i.e., 24 dB/octave) crossovers can get more expensive.
The following diagram demonstrates a third-order filter (18 dB/octave). Compare it to the first diagram
to see how much faster this filter reduces or attenuates high frequencies.

Source: TheSpeakerCompany.com (Out of Business)


Just as our low-pass filter keeps high frequencies away from the woofer, a high-pass filter keeps low frequencies away from the tweeter. When we add a high-pass filter to a low-pass filter, the result looks like the following diagram:

Source: TheSpeakerCompany.com (Out of Business)

Symmetrical vs. Asymmetrical

If the slopes of both the low-pass filter and the high-pass filter are identical, we usually refer to the crossover as symmetrical. If, however, a crossover's low-pass filter has a slope of 6 dB/octave and the high-pass slope is 12 dB/octave, we call it an asymmetrical crossover.
Two-Way and Three-Way
A two-way crossover splits the full-range signal into two parts: lower and higher frequencies (low-pass and high-pass). A three-way crossover splits the full-range signal into three parts: low, middle, and high frequencies (low-pass, band-pass, and high-pass). Four- or five-way crossovers are increasingly rare nowadays.
A 2-way speaker has two independent components that are attached to a 2-way crossover network: a
separate tweeter and a separate mid-range/woofer.
A 3-way speaker has three independent components that are attached to a 3-way crossover network: a tweeter, a mid-range, and a woofer.
In most speaker drivers, the voice coil is a single coiled wire wrapped around a cylinder called a former; it produces the changing magnetic field when alternating current from the amplifier flows through it.
A dual voice coil driver is one in which two separate coils of wire are wound together around the same
former and terminated independently.
The main advantage of a dual voice coil speaker is wiring flexibility. The coils can be wired in series or parallel to increase or decrease impedance, to deal with amplifiers that may not be able to handle different loads. Some believe that there is no need for this anymore, as most modern amplifiers can deal with 2-ohm, 4-ohm, and 8-ohm loads without any problems.
Active and Passive Crossovers
There are two broad classifications of crossovers, based on where the crossover is placed in the signal path.
Passive crossovers are the most common. These crossovers are not powered and are usually placed within a loudspeaker's enclosure to direct an amplified signal to the appropriate drivers. The following diagram should make this clear.
Active crossovers need power and have a cable to be plugged into the wall. They split the full-range signal before it gets to an amplifier and send each band to the appropriate amplifier to drive the speakers. These types of crossovers are sometimes called electronic crossovers. Active crossovers can be implemented digitally using a DSP chip or a microprocessor. These are the type you find in powered subwoofers and inside A/V receivers.
A Guide to Crossover Networks & Crossover Settings - Blu-ray Forum. (n.d.). Retrieved from
http://forum.blu-ray.com/showthread.php?t=77011

PHONOGRAPH
Edison cylinder phonograph, circa 1899
Thomas Edison with his second phonograph, photographed by Mathew Brady in Washington, April 1878
Close-up of the mechanism of an Edison Amberola, manufactured circa 1915
A late 20th-century turntable and record

The phonograph is a device invented in 1877 for the mechanical recording and reproduction of sound. In its later forms it is also called a gramophone (as a trademark since 1887, as a generic name since c. 1900).
The sound vibration waveforms are recorded as corresponding physical deviations of a spiral groove
engraved, etched or impressed into the surface of a rotating cylinder or disc. To recreate the sound,
the surface is similarly rotated while a playback stylus traces the groove and is therefore vibrated by
it, very faintly reproducing the recorded sound. In early acoustic phonographs, the stylus vibrated
a diaphragm which produced sound waves which were coupled to the open air through a flaring horn,
or directly to the listener's ears through stethoscope-type earphones. In later electric phonographs (also known as record players (since the 1940s) or, most recently, turntables[1]), the motions of the stylus are converted into an analogous electrical signal by a transducer called a pickup or cartridge, electronically amplified, then converted back into sound by a loudspeaker.
The phonograph was invented in 1877 by Thomas Edison.[2][3][4][5] While other inventors had produced
devices that could record sounds, Edison's phonograph was the first to be able to reproduce the
recorded sound. His phonograph originally recorded sound onto a tinfoil sheet phonograph cylinder,
and could both record and reproduce sounds. Alexander Graham Bell's Volta Laboratory made
several improvements in the 1880s, including the use of wax-coated cardboard cylinders, and a
cutting stylus that moved from side to side in a "zig zag" pattern across the record.
In the 1890s, Emile Berliner initiated the transition from phonograph cylinders to flat discs with a spiral
groove running from the periphery to near the center. Other improvements were made throughout the
years, including modifications to the turntable and its drive system, the stylus or needle, and the sound
and equalization systems.
The disc phonograph record was the dominant audio recording format throughout most of the 20th
century. From the mid-1980s, phonograph use declined sharply because of the rise of the compact

disc and other digital recording formats. While no longer mass-market items, modest numbers of
phonographs and phonograph records continue to be produced in the second decade of the 21st
century.
MAGNETIC TAPE
Magnetic tape is a medium for magnetic recording, made of a thin magnetizable coating on a long, narrow
strip of plastic film. It was developed in Germany, based on magnetic wire recording. Devices that
record and play back audio and video using magnetic tape are tape recorders and video tape
recorders. A device that stores computer data on magnetic tape is a tape drive (tape unit, streamer).
Magnetic tape revolutionized broadcast and recording. When all radio was live, it allowed
programming to be recorded. At a time when gramophone records were recorded in one take, it
allowed recordings to be made in multiple parts, which were then mixed and edited with tolerable loss
in quality. It was a key technology in early computer development, allowing unparalleled amounts of data to be mechanically created, stored for long periods, and rapidly accessed.
Nowadays other technologies can perform the functions of magnetic tape. In many cases these
technologies are replacing tape. Despite this, innovation in the technology continues and companies
like Sony and IBM continue to produce new magnetic tape drives.
Over the years, magnetic tape can suffer from a type of deterioration called sticky-shed syndrome. Caused by absorption of moisture into the binder of the tape, it can render the tape unusable.
COMPACT CASSETTE
Magnetic tape was invented for recording sound by Fritz
Pfleumer in 1928 in Germany, based on the invention of magnetic
wire recording by Oberlin Smith in 1888 and Valdemar Poulsen in
1898. Pfleumer's invention used a ferric oxide (Fe2O3) powder
coating on a long strip of paper. This invention was further developed by the German electronics
company AEG, which manufactured the recording machines and BASF, which manufactured the
tape. In 1933, working for AEG, Eduard Schuller developed the ring-shaped tape head. Previous
head designs were needle-shaped and tended to shred the tape. An important discovery made in this
period was the technique of AC biasing, which improved the fidelity of the recorded audio signal by
increasing the effective linearity of the recording medium.
Due to the escalating political tensions, and the outbreak of World War II, these developments
were largely kept secret. Although the Allies knew from their monitoring of Nazi radio broadcasts that
the Germans had some new form of recording technology, the nature was not discovered until the
Allies acquired captured German recording equipment as they invaded Europe in the closing of the
war. It was only after the war that Americans, particularly Jack Mullin, John Herbert Orr, and Richard
H. Ranger, were able to bring this technology out of Germany and develop it into commercially viable
formats.

A wide variety of recorders and formats have developed since, most significantly reel-to-reel and the Compact Cassette.

BETAMAX
Betamax (also called Beta, and referred to as such in the logo) is a consumer-level
analog videocassette magnetic tape recording format developed by Sony, released in Japan on May
10, 1975. The cassettes contain 0.5-inch (12.7 mm) wide videotape in a design similar to the earlier, professional 0.75-inch (19 mm) wide U-matic format. The format is virtually obsolete, having lost the videotape format war.
Like the rival videotape format VHS (introduced in Japan by JVC in October 1976 and in the United States by RCA in August 1977), Betamax had no guard band and used azimuth recording to reduce crosstalk. According to Sony's own history webpages, the name came from a double meaning: beta being the Japanese word used to describe the way signals were recorded onto the tape, and from the fact that when the tape ran through the transport, it looked like the Greek letter beta (β). The suffix -max, from the word "maximum", was added to suggest greatness. In 1977,
Sony came out with the first long play Betamax VCR, the SL-8200. This VCR had two recording
speeds: normal, and the newer half speed. This provided two hours recording time on the L-500 Beta
videocassette. The SL-8200 was to compete against the VHS VCRs that had 2 or 4 hours of
recording time.
This mighty machine sparked a revolution in our use of media. It's a Sony Betamax video cassette recorder from 1979. This monster weighs about 36 pounds. The engineer in me finds it fascinating: there is nothing digital; it's a truly analog machine -- all moving pieces and parts.
Early adopters of the Betamax used it to record television shows -- a revolutionary concept at the time -- because prior to the Betamax you had to watch a show when it was broadcast. It threatened the entertainment industry so much that in 1979 they argued that recording television shows at home infringed on their copyright. It all came to a head in a Supreme Court case -- Sony Corporation of America versus Universal City Studios -- where five justices allowed home recording. Although Sony won this court battle, it ultimately lost out to a machine that used this size tape. This is a VHS recorder made by Sony's great rival, JVC.
Both machines solved the same problem: How to store information compactly on a tape.
Here's the brilliant innovation used by both machines. The machine grabs the tape and drags it forward as this silver drum starts to spin rapidly. The drum has two electromagnets (called heads) arranged on opposite sides of the drum that read the magnetic information on the tape. That rotating head allowed for a compact recorder: in many previous recorders the magnetic heads didn't move, only the tape. Because there was a limit to how fast the tape could move, it took a lot of tape -- about a seven-inch reel to record an hour, which means that a movie would need two 7-inch reels inside a cassette. So, the rotating heads dramatically reduced the amount of tape needed, shrinking it to where it could be easily held in a cassette.

So, if the machines are so similar, why did Betamax lose to JVC? Many thought the Betamax machine would win: it had the better image quality, and the Betamax is decidedly better built. Compare ejecting a tape on the Betamax to the VHS. First, watch the Betamax. Note how smooth it is. And then watch the VHS. That's abrupt and will wear out the mechanism. Yet, to my engineer's eye the VHS was the better solution.
First, the VHS was lighter than the Betamax: 29 and a half pounds compared to 36 pounds for this Betamax machine. That's a huge difference for a mass-manufactured object. It impacts everything from material costs to assembly time to shipping costs. So, at the low end of the market the VHS machines were cheaper than Sony's Betamax.
Second, the earliest Betamax tapes played for only one hour, while VHS played for two hours -- enough time
for a movie. The ultimate killer, though, was the rental market.
http://www.engineerguy.com/failure/betamax.htm
VHS
The Video Home System (VHS) is a standard for consumer-level use of analog recording on videotape cassettes. It was developed by Victor Company of Japan (JVC) in the 1970s.
The 1950s began the era when magnetic tape video
recording became a major contributor to the television
industry, via the first commercialized video tape
recorders (VTRs). At that time, the devices were used
only in expensive professional environments such
as television studios and medical imaging (fluoroscopy). But it was in the 1970s that
videotape entered home use, creating the home video industry and changing the economics of
the television and movie businesses. Like many other technological innovations, each of several
companies made an attempt to produce a television recording standard that the majority of the world
would embrace. At the peak of it all, the home video industry was caught up in a series of videotape
format wars. Two of the formats, VHS and Betamax, received the most media exposure. VHS would
eventually win the war, and therefore succeed as the dominant home video format, lasting throughout
the tape format period.
In later years, optical disc formats began to offer better quality than video tape. The earliest of these
formats, LaserDisc, was not widely adopted, but the subsequent DVD format eventually did achieve
mass acceptance and replaced VHS as the preferred method of distribution after 2000.

COMPACT DISC
Source: Wikipedia

Compact disc (CD) is a digital optical disc data storage format. The
format was originally developed to store and play only sound recordings
but was later adapted for storage of data (CD-ROM). Several other
formats were further derived from these, including write-once audio
and data storage (CD-R), rewritable media (CD-RW), Video Compact
Disc (VCD), Super Video Compact Disc (SVCD), Photo CD,
PictureCD, CD-i, and Enhanced Music CD. Audio CDs and audio CD
players have been commercially available since October 1982.
Standard CDs have a diameter of 120 millimetres (4.7 in) and can hold up to about 80 minutes of uncompressed audio or about 700 MiB of data. The Mini CD has various diameters ranging from 60 to 80 millimetres (2.4 to 3.1 in); they are sometimes used for CD singles, storing up to 24 minutes of audio, or for delivering device drivers.
At the time of the technology's introduction in 1982, a CD had greater capacity than a personal
computer hard drive. By the 2010s hard drives commonly had capacities exceeding those of CDs by
a factor of several thousand.
In 2004, worldwide sales of audio CDs, CD-ROMs and CD-Rs reached about 30 billion discs. By
2007, 200 billion CDs had been sold worldwide. [1] CDs are increasingly being replaced by other forms
of digital storage and distribution, with the result that audio CD sales rates in the U.S. have dropped
about 50% from their peak; however, they remain one of the primary distribution methods for the
music industry.[2]
HISTORY
American inventor James T. Russell has been credited with inventing the first system to record digital information on an optically transparent foil that is lit from behind by a high-power halogen lamp. Russell's patent application was first filed in 1966, and he was granted a patent in 1970.
Following litigation, Sony and Philips licensed Russell's patents (then held by a Canadian company,
Optical Recording Corp.) in the 1980s.
The Compact Disc is an evolution of LaserDisc technology, where a focused laser beam is used that
enables the high information density required for high-quality digital audio signals. Prototypes were
developed by Philips and Sony independently in the late 1970s. [8] In 1979, Sony and Philips set up a
joint task force of engineers to design a new digital audio disc. After a year of experimentation and
discussion, the Red Book CD-DA standard was published in 1980.

VCD
VCD (also called video CD, video compact disc or "disc") is a compact disc format based on CD-ROM XA that is specifically designed to hold MPEG-1 video data and to include interactive capabilities. VCD has a resolution similar to that of VHS, which is far short of the resolution of DVD.
Each VCD disc holds 72-74 minutes of video and has a data transfer rate of 1.44 Mbps. VCDs can be played on a VCD player connected to a television set (in the same way that video cassettes can on a VCR) or computer, on a CD-i player, on some CD-ROM drives, and on some DVD players.
VCD was introduced in 1993 by JVC, Philips, Sony and Matsushita and is described in detail in the White Book specifications. Video data is demanding in terms of storage capacity; it requires
approximately 5 MB of storage per second of video, which would translate to about two minutes of
video on a 680 MB CD. In order to store video information on a CD in a practical fashion, the data
must be compressed for storage and then decompressed for replay in real time. MPEG-1
compresses data at ratios of up to 200:1. MPEG is an international standard, and can be used by any
manufacturer to create hardware for use with MPEG video. MPEG video can also be recorded on any
CD. VCD formatting removes unnecessary information from MPEG-1 data, and adds specialized video authoring capabilities through inclusion of a CD-i (CD-Interactive) runtime application.
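The storage arithmetic quoted above can be checked with a short sketch (Python, illustrative only). The 5 MB-per-second and 680 MB figures come from the text; the 30:1 compression ratio is an assumed example, chosen well under the 200:1 maximum mentioned.

uncompressed_mb_per_s = 5.0   # approx. 5 MB of storage per second of raw video (from the text)
disc_capacity_mb = 680.0      # CD capacity used in the text

raw_seconds = disc_capacity_mb / uncompressed_mb_per_s
print(f"uncompressed: about {raw_seconds / 60:.1f} minutes per disc")   # roughly 2 minutes

# With MPEG-1 compression at, say, 30:1 (well under the 200:1 maximum quoted above),
# the same disc holds roughly an hour or more of video.
compression_ratio = 30.0
print(f"at 30:1 compression: about {raw_seconds * compression_ratio / 3600:.1f} hours per disc")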
VCD variations include: VCD 2.0, which was introduced in 1995 and adds hi-resolution stills, fast-forward, and rewind functions to the original specifications; VCD-ROM, which was introduced in 1997 and enables the creation of hybrid VCD/CD-ROM discs; VCD-Internet, which was introduced in 1997 and is a standardized means of linking video and Internet data; and SuperVCD, which uses either high bit rate MPEG-1 or variable bit rate MPEG-2 for use with CD-R drives instead of DVD drives.
http://searchstorage.techtarget.com/definition/VCD
DVD
( "digital versatile disc"[4][5] or "digital video disc"[6]) is a digital optical disc storage format,
invented and developed by Philips, Sony, Toshiba, and Panasonic in 1995. The medium can store
any kind of digital data, and is widely used for software and other computer files, and for video
programs watched using DVD players. DVDs offer higher storage capacity than discs while having
the same dimensions.
Pre-recorded DVDs are mass-produced using molding machines that physically stamp data onto the
DVD. Such discs are known as DVD-ROM, because data can only be read and not written or erased.
Blank recordable DVD discs (DVD-R and DVD+R) can be recorded once using a DVD recorder and
then function as a DVD-ROM. Rewritable DVDs (DVD-RW, DVD+RW, and DVD-RAM) can be
recorded and erased many times.
DVDs are used in DVD-Video consumer digital video format and in DVD-Audio consumer digital
audio format, as well as for authoring DVD discs written in a special AVCHD format to hold high
definition material (often in conjunction with AVCHD format camcorders). DVDs containing other types
of information may be referred to as DVD data discs. https://en.wikipedia.org/wiki/DVD

LASERDISC
Source: Wikipedia
LaserDisc (LD) is a home video format and the first commercial optical disc storage medium, initially licensed, sold, and marketed as MCA DiscoVision (also known as simply "DiscoVision") in North America in 1978.
Although the format was capable of offering higher-quality video and audio than its consumer rivals, the VHS and Betamax videocassette systems, LaserDisc never managed to gain widespread use in North America, largely due to high costs for the players and video titles themselves and the inability to record TV programming.[1] It also remained a largely obscure format in Europe and Australia. By contrast, the format was much more popular in Japan and
in the more affluent regions of Southeast Asia, such as Hong Kong, Singapore, and Malaysia, being
the prevalent rental video medium in Hong Kong during the 1990s. [2] Its superior video and audio
quality did make it a somewhat popular choice among videophiles and film enthusiasts during its
lifespan.[3]
The technologies and concepts behind LaserDisc are the foundation for later optical disc formats,
including Compact Disc, DVD, andBlu-ray Disc.
HISTORY
Optical video recording technology, using a transparent disc, [4] was invented by David Paul
Gregg and James Russell in 1958 (and patented in 1961 and 1990). [5][6] The Gregg patents were
purchased by MCA in 1968. By 1969, Philips had developed a videodisc in reflective mode, which
has advantages over the transparent mode. MCA and Philips then decided to combine their efforts
and first publicly demonstrated the video disc in 1972.
LaserDisc was first available on the market, in Atlanta, on December 15, 1978,[7] two years after the
introduction of the VHS VCR, and four years before the introduction of the CD(which is based on
laser disc technology). Initially licensed, sold, and marketed as MCA DiscoVision (also known as
simply "DiscoVision") in North America in 1978, the technology was previously referred to internally
as Optical Videodisc System, Reflective Optical Videodisc, Laser Optical Videodisc, and Disco-Vision (with a dash), with the first players referring to the format as "Video Long Play".

BLU-RAY DISC
Blu-ray (not Blue-ray), also known as Blu-ray Disc (BD), is the name of a new optical disc
format jointly developed by the Blu-ray Disc Association (BDA), a group of the world's leading
consumer electronics, personal computer and media manufacturers (including Apple, Dell, Hitachi,
HP, JVC, LG, Mitsubishi, Panasonic, Pioneer, Philips, Samsung, Sharp, Sony, TDK and Thomson).
The format was developed to enable recording, rewriting and playback of high-definition video (HD),

as well as storing large amounts of data. The format offers more than five times the storage capacity
of traditional DVDs and can hold up to 25GB on a single-layer disc and 50GB on a dual-layer disc.
This extra capacity, combined with the use of advanced video and audio codecs, will offer consumers an unprecedented HD experience.
While current optical disc technologies such as DVD, DVD-R, DVD-RW, and DVD-RAM rely on a red laser to read and write data, the new format uses a blue-violet laser instead, hence the name Blu-ray. Despite the different type of lasers used, Blu-ray products can easily be made backwards compatible with CDs and DVDs through the use of a BD/DVD/CD compatible optical pickup unit. The benefit of using a blue-violet laser (405 nm) is that it has a shorter wavelength than a red laser (650 nm), which makes it possible to focus the laser spot with even greater precision. This allows data to be packed more tightly and stored in less space, so it's possible to fit more data on the disc even though it's the same size as a CD/DVD. This, together with the change of numerical aperture to 0.85, is what enables Blu-ray Discs to hold 25GB/50GB. Recent development by Pioneer has pushed the storage capacity to 500GB on a single disc by using 20 layers.
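The capacity gain can be estimated from the laser figures above using the usual diffraction-limited spot-size approximation, d = wavelength / (2 x NA). In the sketch below (Python, illustrative only), the Blu-ray wavelength and numerical aperture come from the text, while the DVD numerical aperture of 0.60 is a commonly quoted value assumed here for comparison.

def spot_size_nm(wavelength_nm, numerical_aperture):
    # Diffraction-limited spot size estimate: d = wavelength / (2 * NA).
    return wavelength_nm / (2.0 * numerical_aperture)

dvd_spot = spot_size_nm(650.0, 0.60)   # red laser; NA of 0.60 assumed for DVD
bd_spot = spot_size_nm(405.0, 0.85)    # blue-violet laser and NA 0.85 from the text

print(f"DVD spot size:     about {dvd_spot:.0f} nm")
print(f"Blu-ray spot size: about {bd_spot:.0f} nm")
# Areal density scales roughly with 1 / (spot size squared):
print(f"density gain:      about {(dvd_spot / bd_spot) ** 2:.1f}x")

The resulting density ratio of roughly five is consistent with the "more than five times the storage capacity" figure quoted above.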
Blu-ray is currently supported by about 200 of the world's leading consumer electronics, personal computer, recording media, video game and music companies. The format also has support from all Hollywood studios and countless smaller studios as a successor to today's DVD format. Many studios have also announced that they will begin releasing new feature films on Blu-ray Disc day-and-date with DVD, as well as a continuous slate of catalog titles every month.
http://www.blu-ray.com/info/
CD-ROM
Compact Disc-Read Only Memory: a CD-ROM (shown right) is an optical disc which contains audio or software data whose memory is read-only. A CD-ROM drive or optical drive is the device used to read them. CD-ROM drives have speeds ranging from 1x all the way up to 72x, meaning a 72x drive reads the CD roughly 72 times faster than the 1x version. As you would imagine, these drives are capable of playing audio CDs and reading data CDs. Below is a picture of the front and back of a standard CD-ROM drive.

INTERFACES
Below are the different interfaces that allow a CD-ROM and other disc drives to connect to a computer.
IDE/ATA - One of the most commonly used interfaces for disc drives.
Panasonic - Older proprietary interface.
Parallel - Interface used with old external CD-ROM drives.
PCMCIA (PC Card) - Interface sometimes used to connect external disc drives to laptop computers.
SATA - Replacing IDE as the new standard to connect disc drives.
SCSI - Another common interface used with disk and disc drives.
USB - Interface most commonly used to connect external disc drives.
CD-ROM transfer speeds.
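The transfer-speed table from the original page is not reproduced here, but the arithmetic behind the speed ratings is simple: an Nx drive transfers data N times faster than the 1x base rate. The sketch below (Python, illustrative only) assumes the standard audio-CD base rate of 150 KB/s, which is not stated in the text above.

BASE_RATE_KB_PER_S = 150  # 1x rate of an audio CD (assumed standard figure)

for speed in (1, 2, 8, 24, 52, 72):
    rate = speed * BASE_RATE_KB_PER_S
    print(f"{speed:>2}x drive: {rate:>6} KB/s (about {rate / 1024:.1f} MB/s)")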

FLOPPY DISK
A floppy disk, also called a diskette, is a disk storage medium composed of a disk of thin and
flexible magnetic storage medium, sealed in a rectangular plastic carrier lined with fabric that
removes dust particles. Floppy disks are read and written by a floppy disk drive (FDD).
Floppy disks, initially as 8-inch (200 mm) media and later in 5¼-inch (133 mm) and 3½-inch (90 mm) sizes, were a ubiquitous form of data storage and exchange from the mid-1970s well into the 2000s. By 2010, computer motherboards were rarely manufactured with floppy drive support; 3½-inch floppy disks can be used with an external USB floppy disk drive, but USB drives for 5¼-inch, 8-inch and non-standard diskettes are rare or non-existent, and those formats must usually be handled by old equipment.
While floppy disk drives still have some limited uses, especially with legacy industrial computer
equipment, they have been superseded by data storage methods with much greater capacity, such
as USB flash drives, portable external hard disk drives, optical discs, memory cards and computer
networks.
HISTORY

8-inch disk drive with diskette (3½-inch disk for comparison)


3½-inch, high-density diskettes affixed with adhesive labels

The earliest floppy disks, developed in the late 1960s, were 8 inches (200 mm) in diameter; they became commercially available in 1971. These disks and associated drives were produced and improved upon by IBM and other companies such as Memorex, Shugart Associates, and Burroughs Corporation. The phrase "floppy disk" appeared in print as early as 1970, and although in 1973 IBM announced its first media as the "Type 1 Diskette", the industry continued to use the terms "floppy disk" or "floppy".

SIZES
8-inch floppy disk
The first floppy disk was 8 inches in diameter, was protected by a flexible plastic jacket and was a read-only device used by IBM as a way of loading microcode. Read/write floppy disks and their drives became available in 1972, but it was IBM's 1973 introduction of the 3740 data entry system that began the establishment of floppy disks, called by IBM the "Diskette 1", as an industry standard for information interchange. Early microcomputers used for engineering, business, or word processing often used one or more 8-inch disk drives for removable storage; the CP/M operating system was developed for microcomputers with 8-inch drives.

The family of 8-inch disks and drives increased over time and later versions could store up to 1.2 MB; many microcomputer applications did not need that much capacity on one disk, so a smaller size disk with lower-cost media and drives was feasible. The 5¼-inch drive succeeded the 8-inch size in many applications, and developed to about the same storage capacity as the original 8-inch size, using higher-density media and recording techniques.
5¼-inch floppy disk
Uncovered 5¼-inch disk mechanism with disk inserted. The edge of the
disk with the opening for the medium was inserted first, then the lever
was turned to close the mechanism and engage the drive motor and
heads.
The head gap of an 80-track high-density (1.2 MB in the MFM format) 5¼-inch drive (a.k.a. Mini diskette, Mini disk, or Minifloppy) is smaller than that of a 40-track double-density (360 KB) drive but can format, read and write 40-track disks well, provided the controller supports double stepping or has a switch to do such a process. A blank 40-track disk formatted and written on an 80-track drive can be taken to its native drive without problems, and a disk formatted on a 40-track drive can be used on an 80-track drive. Disks written on a 40-track drive and then updated on an 80-track drive become unreadable on any 40-track drives due to track width incompatibility.
3½-inch floppy disk
Internal parts of a 3½-inch floppy disk.
1) A hole that indicates a high-capacity disk.
2) The hub that engages with the drive motor.
3) A shutter that protects the surface when removed from the drive.
4) The plastic housing.
5) A polyester sheet reducing friction against the disk media as it rotates within
the housing.
6) The magnetic coated plastic disk.
7) A schematic representation of one sector of data on the disk; the tracks and
sectors are not visible on actual disks.
8) The write protection tab (unlabeled) is upper left.
A 3½-inch floppy disk drive.
In the early 1980s, a number of manufacturers introduced smaller floppy drives and media in various formats. A consortium of 21 companies eventually settled on a 3½-inch floppy disk (actually 90 mm wide), a.k.a. Micro diskette, Micro disk, or Micro floppy, similar to a Sony design but improved to support both single-sided and double-sided media, with formatted capacities generally of 360 KB and 720 KB respectively. Single-sided drives shipped in 1983, and double-sided in 1984. What became the most common format, the double-sided, high-density (HD) 1.44 MB disk drive, shipped in 1986.
https://en.wikipedia.org/wiki/Floppy_disk
AN INTRODUCTION TO THE AMPLIFIER TUTORIAL
Not all amplifiers are the same and are therefore classified according to their circuit
configurations and methods of operation. In Electronics, small signal amplifiers are commonly used devices, as they have the ability to amplify a relatively small input signal, for example from a sensor such as a photo-device, into a much larger output signal to drive a relay, lamp or loudspeaker.
There are many forms of electronic circuits classed as amplifiers, from Operational Amplifiers
and Small Signal Amplifiers up to Large Signal and Power Amplifiers. The classification of an amplifier depends upon the size of the signal, large or small, its physical configuration, and how it processes the input signal, that is, the relationship between the input signal and the current flowing in the load.
The type or classification of an amplifier is given in the following table.

Classification of Amplifiers

Type of Signal    Type of Configuration    Classification        Frequency of Operation
Small Signal      Common Emitter           Class A Amplifier     Direct Current (DC)
Large Signal      Common Base              Class B Amplifier     Audio Frequencies (AF)
                  Common Collector         Class AB Amplifier    Radio Frequencies (RF)
                                           Class C Amplifier     VHF, UHF and SHF Frequencies

Amplifiers can be thought of as a simple box or block containing the amplifying device, such as a Transistor, Field Effect Transistor or Op-amp, which has two input terminals and two output terminals (ground being common), with the output signal being much greater than that of the input signal as it has been amplified.
Generally, an ideal signal amplifier has three main properties: Input Resistance (Rin), Output Resistance (Rout) and, of course, amplification, known commonly as Gain (A). No matter how complicated an amplifier circuit is, a general amplifier model can still be used to show the relationship of these three properties.
IDEAL AMPLIFIER MODEL
The difference between the input and output signals is known as the Gain of the amplifier and is basically a measure of how much an amplifier amplifies the input signal. For example, if we have an input signal of 1 volt and an output of 50 volts, then the gain of the amplifier would be 50. In other words, the input signal has been increased by a factor of 50. This increase is called Gain.
Amplifier gain is simply the ratio of the output divided by the input. Gain has no units as it is a ratio, but in Electronics it is commonly given the symbol A, for Amplification. The gain of an amplifier is simply calculated as the output signal divided by the input signal.
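Put as a small calculation (Python, illustrative only), using the 1 V in / 50 V out example from the text; the decibel form uses the standard 20 * log10 rule for voltage ratios.

import math

def voltage_gain(v_out, v_in):
    # Gain as a plain ratio: output divided by input (dimensionless).
    return v_out / v_in

def voltage_gain_db(v_out, v_in):
    # Same gain in decibels: 20 * log10 for voltage ratios
    # (10 * log10 would be used for power ratios).
    return 20.0 * math.log10(v_out / v_in)

print(voltage_gain(50.0, 1.0))               # 50.0
print(round(voltage_gain_db(50.0, 1.0), 1))  # about 34.0 dB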
AMPLIFIER CLASSES
The classification of an amplifier as either a voltage or a power amplifier is made by comparing
the characteristics of the input and output signals by measuring the amount of time in relation to the
input signal that the current flows in the output circuit. We saw in the Common Emitter transistor
tutorial that for the transistor to operate within its Active Region some form of Base Biasing was

required. This small Base Bias voltage added to the input signal allowed the transistor to reproduce
the full input waveform at its output with no loss of signal.
However, by altering the position of this Base bias voltage, it is possible to operate an amplifier
in an amplification mode other than that for full waveform reproduction. With the introduction to the
amplifier of a Base bias voltage, different operating ranges and modes of operation can be obtained
which are categorized according to their classification. These various modes of operation are better
known as Amplifier Class.
Audio power amplifiers are classified alphabetically according to their circuit configurations and mode of operation. Amplifiers are designated by different classes of operation such as class A, class B, class C, class AB, etc. These different amplifier classes range from a near-linear output with low efficiency to a non-linear output with high efficiency.
No one class of operation is better or worse than any other class with the type of operation being
determined by the use of the amplifying circuit. There are typical maximum efficiencies for the various
types or class of amplifier, with the most commonly used being:

Class A Amplifier has low efficiency of less than 40% but good signal reproduction and linearity.
Class B Amplifier is twice as efficient as class A amplifiers, with a maximum theoretical efficiency of about 70%, because the amplifying device only conducts (and uses power) for half of the input signal.
Class AB Amplifier has an efficiency rating between that of Class A and Class B but poorer signal reproduction than class A amplifiers.
Class C Amplifier is the most inefficient amplifier class, as only a very small portion of the input signal is amplified; therefore the output signal bears very little resemblance to the input signal. Class C amplifiers have the worst signal reproduction.

CLASS A AMPLIFIER OPERATION


Class A Amplifier operation is where the entire input signal waveform is faithfully reproduced at the amplifier's output, as the transistor is perfectly biased within its active region, thereby never reaching either of its cut-off or saturation regions. This then results in the AC input signal being perfectly centred between the amplifier's upper and lower signal limits, as shown below.
Class A Output Waveform
In this configuration, the Class A amplifier uses the same transistor for both halves of the output
waveform and due to its biasing arrangement the output transistor always has current flowing through
it, even if there is no input signal. In other words the output transistors never turn OFF. This results
in the class A type of operation being very inefficient as its conversion of the DC supply power to the
AC signal power delivered to the load is usually very low.

Generally, the output transistor of a Class A amplifier gets very hot even when there is no input signal present, so some form of heat sinking is required. The direct current flowing through the output transistor (Ic) when there is no output signal will be equal to the current flowing through the load. A Class A amplifier is therefore very inefficient, as most of the DC power is converted to heat.
CLASS B AMPLIFIER OPERATION
Unlike the Class A amplifier mode of operation above, which uses a single transistor for its output power stage, the Class B Amplifier uses two complementary transistors (either an NPN and a PNP, or an NMOS and a PMOS) for each half of the output waveform.
One transistor conducts for one-half of the signal waveform while the other conducts for the other or
opposite half of the signal waveform. This means that each transistor spends half of its time in the
active region and half its time in the cut-off region thereby amplifying only 50% of the input signal.
Class B operation has no direct DC bias voltage like the class A amplifier; instead the transistor only conducts when the input signal is greater than the base-emitter voltage, which for silicon devices is about 0.7 V. Therefore, at zero input there is zero output. This then results in only half the input signal being presented at the amplifier's output, giving a greater amount of amplifier efficiency, as shown below.
Class B Output Waveform
In a class B amplifier, no DC voltage is used to bias the transistors, so for the output transistors to start to conduct for each half of the waveform, both positive and negative, they need the base-emitter voltage Vbe to be greater than the approximately 0.7 V required for a bipolar transistor to start conducting.
The part of the output waveform which falls below this 0.7 V window will therefore not be reproduced accurately, resulting in a distorted area of the output waveform as one transistor turns OFF waiting for the other to turn back ON. The result is that there is a small part of the output waveform at the zero-voltage crossover point which will be distorted. This type of distortion is called Crossover Distortion and is looked at later on in this section.
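A simple numerical sketch (Python, illustrative only) makes the origin of crossover distortion visible: with no bias, neither output transistor conducts until the drive exceeds roughly 0.7 V, so a dead band appears around the zero crossing. The transfer function used here is a deliberate idealization, not a model of any particular circuit.

import math

VBE_ON = 0.7  # volts needed before a silicon transistor starts to conduct

def class_b_output(v_in):
    # Idealized, unbiased class B stage: a dead band of +/- VBE_ON around zero.
    if v_in > VBE_ON:
        return v_in - VBE_ON    # NPN half conducts
    if v_in < -VBE_ON:
        return v_in + VBE_ON    # PNP half conducts
    return 0.0                  # neither device is on: crossover region

# One cycle of a 2 V peak sine wave through the stage:
for step in range(9):
    v_in = 2.0 * math.sin(2.0 * math.pi * step / 8.0)
    print(f"in {v_in:+.2f} V  ->  out {class_b_output(v_in):+.2f} V")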
CLASS AB AMPLIFIER OPERATION
The Class AB Amplifier is a compromise between the Class A and the Class B configurations
above. While Class AB operation still uses two complementary transistors in its output stage, a very small biasing voltage is applied to the base of each transistor to bias it close to the cut-off region when no input signal is present.
An input signal will cause the transistor to operate as normal in its Active region thereby eliminating
any crossover distortion which is present in class B configurations. A small Collector current will flow
when there is no input signal but it is much less than that for the Class A amplifier configuration. This
means then that the transistor will be ON for more than half a cycle of the waveform. This type of

amplifier configuration improves both the efficiency and linearity of the amplifier circuit compared to a
pure Class A configuration.
CLASS AB OUTPUT WAVEFORM
The class of operation for an amplifier is very important, and is based on the amount of transistor bias required for operation as well as the amplitude required for the input signal. Amplifier classification takes into account the portion of the input signal over which the transistor conducts, as well as determining both the efficiency and the amount of power that the switching transistor consumes and dissipates in the form of wasted heat. We can then make a comparison between the most common types of amplifier classifications in the following table.

POWER AMPLIFIER CLASSES

Class A - Conduction angle: 360°. Position of the Q-point: centre point of the load line. Overall efficiency: poor, 25 to 30%. Signal distortion: none if correctly biased.
Class B - Conduction angle: 180°. Position of the Q-point: exactly on the X-axis. Overall efficiency: better, 70 to 80%. Signal distortion: at the X-axis crossover point.
Class C - Conduction angle: less than 90°. Position of the Q-point: below the X-axis. Overall efficiency: higher than 80%. Signal distortion: large amounts.
Class AB - Conduction angle: 180° to 360°. Position of the Q-point: in between the X-axis and the centre of the load line. Overall efficiency: better than A but less than B, 50 to 70%. Signal distortion: small amounts.

Badly designed amplifiers, especially the Class A types, may also require larger power transistors, more expensive heat sinks, cooling fans, or even an increase in the size of the power supply required to deliver the extra power required by the amplifier. Power converted into heat, whether from transistors, resistors or any other component for that matter, makes any electronic circuit inefficient and will result in the premature failure of the device.
So why use a Class A amplifier if its efficiency is less than 40%, compared to a Class B amplifier with a higher efficiency rating of over 70%? Basically, a Class A amplifier gives a much more linear output, meaning that it has linearity over a larger frequency response, even if it does consume large amounts of DC power.
BASIC BJT AMPLIFIER CONFIGURATIONS

There are plenty of texts around on basic electronics, so this is a very brief look at the three
basic ways in which a bipolar junction transistor (BJT) can be used. In each case, one terminal is
common to both the input and output signal. All the circuits shown here are without bias circuits and
power supplies for clarity.
Common Emitter Configuration
Here the emitter terminal is common to both the input and output signal. The arrangement is the same for a PNP transistor. Used in this way the transistor has the advantages of a medium input impedance, medium output impedance, high voltage gain and high current gain.

Common Base Configuration

Here the base is the common terminal. Used frequently for RF applications, this stage has the following properties: low input impedance, high output impedance, unity (or less) current gain and high voltage gain.
Common Collector Configuration
This last configuration is also more commonly known as the emitter follower. This is because the input signal applied at the base is "followed" quite closely at the emitter, with a voltage gain close to unity. The properties are a high input impedance, a very low output impedance, a unity (or less) voltage gain and a high current gain. This circuit is also used extensively as a "buffer", converting impedances or feeding or driving long cables or low-impedance loads.
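Before the comparison chart, here is a rough numerical sketch (Python, illustrative only) of the three configurations using the usual small-signal approximations (re of about 25 mV / Ic, input resistance of about beta * re, and so on). All component values and the bias current are arbitrary example figures, not taken from the text.

BETA = 100                     # assumed transistor current gain
IC_MA = 1.0                    # assumed collector bias current, mA
RE_INTRINSIC = 25.0 / IC_MA    # re ~ 25 mV / Ic, in ohms
RC = 4700.0                    # assumed collector load resistor, ohms
RE_LOAD = 1000.0               # assumed emitter load resistor, ohms

# Common emitter: inverting, high voltage gain, medium input resistance
ce_gain = -RC / RE_INTRINSIC
ce_rin = BETA * RE_INTRINSIC

# Common base: non-inverting, similar voltage gain, very low input resistance
cb_gain = RC / RE_INTRINSIC
cb_rin = RE_INTRINSIC

# Common collector (emitter follower): voltage gain close to unity, high input resistance
cc_gain = RE_LOAD / (RE_LOAD + RE_INTRINSIC)
cc_rin = BETA * (RE_INTRINSIC + RE_LOAD)

print(f"common emitter:   Av ~ {ce_gain:.0f},   Rin ~ {ce_rin:.0f} ohms")
print(f"common base:      Av ~ {cb_gain:.0f},    Rin ~ {cb_rin:.0f} ohms")
print(f"common collector: Av ~ {cc_gain:.2f},  Rin ~ {cc_rin / 1000:.0f}k ohms")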

Transistor Configuration Comparison Chart
(see Sedra & Smith and "Detailed Analysis" below)

AMPLIFIER TYPE                     COMMON BASE   COMMON EMITTER   COMMON EMITTER       COMMON COLLECTOR
                                                                  (Emitter Resistor)   (Emitter Follower)
INPUT/OUTPUT PHASE RELATIONSHIP    0°            180°             180°                 0°
VOLTAGE GAIN                       HIGH          MEDIUM           MEDIUM               LOW
CURRENT GAIN                       LOW           MEDIUM           MEDIUM               HIGH
POWER GAIN                         LOW           HIGH             HIGH                 MEDIUM
INPUT RESISTANCE                   LOW           MEDIUM           MEDIUM               HIGH
OUTPUT RESISTANCE                  HIGH          MEDIUM           MEDIUM               LOW

THE SIMPLEST AMPLIFIER CIRCUIT DIAGRAM

Parts list

PART      VALUE      DESCRIPTION
C1        100 µF     POLARIZED CAPACITOR
C2        100 µF     POLARIZED CAPACITOR
C3        100 µF     POLARIZED CAPACITOR
C4        100 µF     POLARIZED CAPACITOR
C5        100 µF     POLARIZED CAPACITOR
C6        470 µF     POLARIZED CAPACITOR
C7        100 µF     POLARIZED CAPACITOR
C8        470 µF     POLARIZED CAPACITOR
C9        0.22 µF    NON-POLARIZED CAPACITOR
C10       0.22 µF    NON-POLARIZED CAPACITOR
C11       0.15 µF    NON-POLARIZED CAPACITOR
C12       0.15 µF    NON-POLARIZED CAPACITOR
TEA2025   TEA2025B   Amplifier chip
SPKR1                4-8 Ohm speaker
SPKR2                4-8 Ohm speaker
R1+R2     10K        DUAL Potentiometer

http://www.build-electronic-circuits.com/amplifier-circuit-diagram/
TRANSISTORS
Transistors can be regarded as a type of switch, as can many electronic components. They are
used in a variety of circuits and you will find that it is rare that a circuit built in a school Technology
Department does not contain at least one transistor. They are central to electronics and there are two
main types; NPN and PNP. Most circuits tend to use NPN. There are hundreds of transistors which
work at different voltages but all of them fall into these two categories.
Transistors are manufactured in different shapes but they have three leads (legs).
The BASE, which is the lead responsible for activating the transistor.
The COLLECTOR, which is the positive lead.
The EMITTER, which is the negative lead.
The diagram below shows the symbol of an NPN transistor. They are not always set out as shown in the diagrams to the left and right, although the tab on the type shown to the left is usually next to the emitter.
http://www.technologystudent.com/elec1/transis1.htm
Quasi-Symmetry
The case of quasi-symmetry can be handled by noting that the corresponding hypothesis can be
expressed in terms of odds-ratios. We consider a square two-dimensional contingency table
with I rows as well as I columns. For such a table Agresti (1990, p. 355[1]) gives the following conditions for quasi-symmetry to hold, stated in terms of odds-ratios formed with the last row and column (for all i and j):

(π_ij π_II) / (π_iI π_Ij) = (π_ji π_II) / (π_jI π_Ii)

Note that π_II appears both on the left and on the right hand side of the equation above. Hence, we can omit this term. We obtain

π_ij / (π_iI π_Ij) = π_ji / (π_jI π_Ii)
We can use this expression to derive restrictions of the form

π_ij π_jI π_Ii = π_ji π_iI π_Ij

if quasi-symmetry holds. In general, we have to derive as many restrictions in the π's as there are degrees of freedom. The quasi-symmetry model has (I-1)(I-2)/2 degrees of freedom. Since the above expression has to hold for all i and j, it contains redundancy. As can be seen from the following arguments, there are only as many independent restrictions as there are degrees of freedom for the quasi-symmetry model.
Consider the case where i = j. It can easily be checked that in this case equality holds regardless of the values of the π's. Hence, from the I² restrictions we have already identified I restrictions that are superfluous. Now assume that either i = I or j = I, but i ≠ j. We assume without loss of generality that j = I. Then we have

π_iI π_II π_Ii = π_Ii π_iI π_II

from which we can readily see that this expression is trivially always true. This means that an additional 2*(I-1) of the conditions are superfluous. Finally, we note that the restrictions in terms of odds-ratios for (i,j) = (k,l) and (j,i) = (l,k) are equivalent. Hence, we have another (I-1)(I-2)/2 superfluous restrictions. Calculating the number of superfluous restrictions and subtracting this number from I² results in

I² - I - 2(I-1) - (I-1)(I-2)/2 = (I-1)(I-2)/2

which is exactly equal to the number of degrees of freedom of the quasi-symmetry model. Hence, we can deduce that just the restrictions where i, j < I and i < j hold are non-redundant. For a 3×3-table there is only one condition that needs to be tested for quasi-symmetry. For a 4×4-table there are three conditions that need to be tested. We already know which these conditions are.

For quasi-symmetry we therefore have to check for a 4×4-table whether

π_12 π_24 π_41 = π_21 π_14 π_42
π_13 π_34 π_41 = π_31 π_14 π_43
π_23 π_34 π_42 = π_32 π_24 π_43

are simultaneously true. Equivalently, we can take the logarithm of both sides of each equation and move all terms to the left hand side. This yields

log π_12 - log π_14 - log π_21 + log π_24 + log π_41 - log π_42 = 0
log π_13 - log π_14 - log π_31 + log π_34 + log π_41 - log π_43 = 0
log π_23 - log π_24 - log π_32 + log π_34 + log π_42 - log π_43 = 0

Hence, the required response functions have been found. We now demonstrate the fitting of the quasi-symmetry model to a 4×4-table. The data are given in Agresti (1990, p. 357[1]) and were obtained from a
sample of 55,981 residences sampled by the U. S. Bureau of the Census, see Table 1. The four
categories are the four regions, Northeast, Midwest, South, and West of the USA and interest lies in

analyzing the migration that took place between 1980, when the first observation was made, and
1985, the year when the sample was revisited.
Fitting the model of quasi-symmetry by maximum-likelihood yields a deviance value of
based on df = 3. Now, let us fit the model with the GSK approach. The only problem is implementing
the three response functions in a SAS job file. Here is how the model can be specified and fitted in
SAS.
/* Quasi-symmetry test via the GSK approach: the three response functions
   derived above are applied to the log cell probabilities of the 4x4 table. */
proc catmod data=sasuser.agr10_2;
weight n;   /* cell counts */
response
0 1 0 -1 -1 0 0 1 0 0 0 0 1 -1 0 0,
0 0 1 -1 0 0 0 0 -1 0 0 1 1 0 -1 0,
0 0 0 0 0 0 1 -1 0 -1 0 1 0 1 -1 0
log;
model res80*res85 = /noint;   /* no effects: the residual tests whether all functions are zero */
run;
The data are assumed to be contained in a file named agr10_2 in the sasuser library. The data file
contains three variables: res80 and res85, the indicator variables for geographical region in 1980 and
1985 respectively where the index for res85 changes fastest, and n which contains the numbers of
residences that fall in the cells of the contingency table. Most important is the response-statement.
First, we can see that it contains the log-keyword at the end. This means that the logarithm of the
vector of estimated probabilities p is used for calculations. The previous three lines define three linear
functions in the log cell probabilities according to the three single response functions given above. This can be seen by expressing the response functions as A log(p), where A is the matrix given in the response-statement of the SAS commands, and calculating the product.

The model-statement merely tells the SAS system to test whether all the three response functions are
simultaneously equal to zero. These SAS-commands yield the following slightly abbreviated output:
CATMOD PROCEDURE
Response: RES80*RES85
Response Levels (R)= 16
Weight Variable: N
Populations (S)= 1
Data Set: AGR10_2
Total Frequency (N)= 55981
Frequency Missing: 0
Observations (Obs)= 16
Response Functions
Sample          1          2          3
----------------------------------------
1         0.00206    0.02042    0.23055

ANALYSIS-OF-VARIANCE TABLE
Source          DF    Chi-Square    Prob
----------------------------------------
RESIDUAL         3          2.98    0.3947

The test statistic yields a value of 2.98 based on 3 degrees of freedom. This is almost exactly the result obtained from the maximum-likelihood analysis given above. In both cases we come to the same substantive conclusion that the migration of US residences between 1980 and 1985 can be nicely explained by a quasi-symmetry model.
For symmetry and quasi-symmetry models log-linear models and the procedure just outlined in this
section are closely related to each other. This relation stems from the fact that testing the hypothesis is done by calculating linear functions of the logarithm of the cell probabilities, that is, functions of the form A log(π). Therefore, the following paragraphs apply also to the diagonals-parameter
symmetry model of Goodman discussed in a later section of this paper.
The connection between fitting log-linear models and the GSK approach can be seen by using the
design-matrix approach to log-linear models (Evers & Namboodiri, 1979[9]). Let X be the design-matrix for fitting a log-linear model. Let μ = log(m), where the m_ij are expected frequencies in a two-dimensional contingency table. Using matrix notation, the log-linear model can then be written as μ = Xλ, where X is called the design-matrix.
For explanations of how design-matrices for symmetry and quasi-symmetry models are set up, see von Eye and Spiel (1996)[26]. Finding a log-linear model that fits the data well is done by adding or deleting parameters to the model equation and adding or deleting the corresponding column-vectors of the design-matrix X, or, in a more abstract sense, enlarging or diminishing the column space of X. If X has q rows and p columns, model selection of log-linear models can be seen as the problem of selecting a p-dimensional subspace of q-dimensional space.
Positing that C(X) yields a good model fit is equivalent to positing that the orthogonal complement of C(X) adds nothing to the model. Since the GSK approach tests whether A log(π) = 0, this is equivalent to examining whether the orthogonal complement of C(X) can be ignored. Hence, while log-linear modeling focuses on specifying C(X), the GSK approach focuses on finding its orthogonal complement.
More specifically, let X be the design-matrix for a quasi-symmetry model and let A be a matrix such that AX = 0; then the rows of A span the orthogonal complement of C(X). In the example for the quasi-symmetry model from above, A was given by the three response functions that were assumed to be simultaneously equal to zero if the quasi-symmetry model holds; see the matrix given in the response-statement of the last SAS command file.
http://www.dgps.de/fachgruppen/methoden/mpr-online/issue5/art4/node4.html
TRANSFORMER PUSH-PULL STAGES
This amplifier uses two equal transistors which are operated in common emitter configuration. Each transistor amplifies one half wave of the signal, and the two half waves are combined into an a.c. signal at the secondary of the transformer.
Furthermore, the transformer will transform the high output resistance of the common emitter stage to the required low value.
The transistors have to be controlled in anti-phase. This is either achieved by a coupling transformer
with centre tapping or by a phase splitter transistor stage.
Fig. 3.3.1: Principle of a transformer push-pull amplifier with coupling transformer.
Transformer push-pull amplifiers were only built in the early days of the transistor. Because of their poor performance (distortion, limited frequency response) and their high cost and weight (transformers), they are not used in professional audio equipment. Therefore they deserve no further consideration here.

COMPLEMENTARY PUSH-PULL STAGES


Complementary transistor amplifiers consist of an NPN and a PNP transistor. Both transistors are operated in common collector configuration.
A class B complementary amplifier has the following advantages:

Simple construction of the circuit
Both transistors can be controlled by the same signal
No output transformer required
Low output resistance due to the common-collector configuration

Fig. 3.4.1: Principle of the complementary amplifier. Transistor T1 amplifies the positive half-wave while transistor T2 amplifies the negative half-wave of the signal.
This basic circuit has the disadvantage that the working point is set by UBE = 0, which is below cut-off. This is a class C working point, and the amplifier will therefore produce considerable crossover distortion.
To achieve a class B or class AB working point the transistors have to be biased. Different methods can be used for biasing.

Fig. 3.4.2: Different methods of biasing a complementary amplifier: a: diode biasing, b: resistor biasing.
Diode biasing:
The voltage drop across the diodes puts the UBE of the transistors near the cut-off point, thus providing class B operation. The dynamic resistance of the conducting diodes is low and attenuates the signal only slightly.
Resistor biasing:
The voltage divider consisting of R1, R2 and R3 is adjusted so that the voltage across R2 equals UBE1 + UBE2. R2 allows a class B or class AB working point to be set. R2 must be bypassed by a capacitor so that the same signal is applied to both transistors.
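As a rough illustration of this dimensioning, the divider current can be chosen well above the expected base current and R2 then sized so that about two base-emitter voltages drop across it. All component values below are assumptions for illustration, not values from the original figure.

    # Hedged sizing sketch for the R1/R2/R3 bias divider (assumed values).
    V_supply = 12.0    # supply voltage in volts (assumption)
    V_be = 0.65        # base-emitter voltage per silicon transistor
    I_div = 2e-3       # chosen divider current, well above the base current

    V_r2 = 2 * V_be                          # ~1.3 V must drop across R2
    R2 = V_r2 / I_div                        # approx. 650 ohms
    R1_plus_R3 = (V_supply - V_r2) / I_div   # remaining drop shared by R1 and R3
    R1 = R3 = R1_plus_R3 / 2                 # symmetric split around the output
    print(f"R2 ≈ {R2:.0f} Ω, R1 = R3 ≈ {R1:.0f} Ω")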
Critical parts of this simple complementary push-pull stage are the biasing resistors RB1 and RB2. These resistors have to provide sufficient base current for the power transistors when the maximum collector current flows. The voltage between base and collector will then be very low (1 V to 3 V). This gives low values for RB, which result in a high average current and high power dissipation in RB. Therefore such a simple complementary push-pull stage can only supply output currents of up to a few 100 mA. If higher currents are required, several stages have to be cascaded.
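A rough order-of-magnitude check of this limitation can be made as follows; all numbers are assumptions for illustration and do not come from the original text.

    # Hedged estimate of the bias resistor RB and its standing dissipation
    # in the simple complementary stage (assumed example values).
    V_supply = 12.0    # supply voltage in volts (assumption)
    I_c_max = 0.5      # desired peak collector current in amperes (assumption)
    beta = 50          # current gain of the power transistor (assumption)
    V_bc_min = 2.0     # voltage left across RB at full drive, see text (1-3 V)

    I_b_max = I_c_max / beta          # base current needed at the peak: 10 mA
    RB = V_bc_min / I_b_max           # approx. 200 ohms to supply that current
    # At rest roughly half the supply stands across RB, so the quiescent
    # current and dissipation in each bias resistor are considerable:
    I_standing = (V_supply / 2) / RB
    P_RB = (V_supply / 2) ** 2 / RB
    print(f"RB ≈ {RB:.0f} Ω, standing current ≈ {I_standing*1000:.0f} mA, "
          f"dissipation ≈ {P_RB*1000:.0f} mW")

Pushing the peak current much higher would force RB down further and make the standing losses impractical, which is why driver stages are added.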

Transistor stages driving the final stage are called driver stages or driver amplifiers.
A simple and very convenient arrangement is achieved if the biasing diodes are replaced by the base-emitter paths of transistors. These transistors form a driver stage, as they provide additional amplification of the input current.

Fig. 3.4.3: Complementary push-pull stage with driver transistors in the biasing path.
Another method to achieve higher current gain is to use Darlington transistors at the output.
Fig. 3.4.4: Complementary push-pull stage with Darlington transistors at the output.

Push-pull amplifiers are ideally used with a symmetrical or dual supply. In this case the load (loudspeaker) can be connected directly between output and ground.
Where only one supply voltage is available (battery-operated equipment) the d.c. output voltage will be at half of the supply voltage. A coupling capacitor will then be required to prevent d.c. current from flowing to the load. The capacitor forms a high-pass filter with the load and the output resistance of the amplifier. It must be calculated to produce negligible voltage drop at the lowest frequency. Typically Co will have values between several 100 µF and several 1000 µF.
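The required value of Co follows from placing the corner frequency of this high-pass well below the lowest signal frequency. A small sketch, with assumed load and frequency values, is:

    # Hedged sizing sketch for the output coupling capacitor Co.
    # Co and the load form a high-pass; its corner frequency
    # f_c = 1 / (2*pi*R*Co) should sit well below the lowest audio frequency.
    import math

    R_load = 8.0       # loudspeaker impedance in ohms (assumption)
    R_out = 0.5        # amplifier output resistance in ohms (assumption)
    f_low = 40.0       # lowest frequency to reproduce, in Hz (assumption)

    f_c = f_low / 4    # place the corner well below f_low for negligible droop
    Co = 1 / (2 * math.pi * (R_load + R_out) * f_c)
    print(f"Co ≈ {Co*1e6:.0f} µF")   # roughly 1900 µF, i.e. several 1000 µF

The result is consistent with the "several 100 µF to several 1000 µF" range quoted above.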

Fig. 3.4.5: Push-pull stages with symmetrical supply can have the load connected directly between the transistors and ground. If only one supply voltage is available, a coupling capacitor is required at the output.

In practice these basic circuits are found with a variety of modifications to adapt the circuit to special requirements and to improve the performance.

QUASI-COMPLEMENTARY PUSH-PULL STAGES


Complementary push-pull amplifiers require both an NPN and a PNP power transistor. In order to achieve symmetric amplification of both half-waves, the transistors should have similar characteristics. Especially in the early days of transistors this was difficult to achieve: PNP silicon power transistors with reasonable current gain were difficult to produce. Therefore power amplifiers were required which worked with NPN power transistors only.
The basic output stage looks like this:

Fig. 3.5.1: Basic principle of a push-pull output stage with two NPN transistors.
This configuration produces the following problems:

The two transistors require control signals of different polarity
The upper transistor is in common-collector and the lower transistor is in common-emitter configuration
The upper transistor has a gain of 1, the lower transistor a gain of more than 1
The two transistors have different output impedances

These problems have to be overcome by suitable driver stages:

The transistor T1 is driven by an NPN transistor in Darlington configuration. The whole configuration works in common-collector mode and thus has a gain of one.
The transistor T2 is driven by a PNP transistor operating in common-collector mode between the collector and the base of T2. There is full NFB (negative feedback) from the collector of T2 to the emitter of T1, so that the whole configuration has a gain of 1.
Fig. 3.5.2: Quasi-complementary push-pull stage. The complementary transistors T3 and T4 allow the two sides of the push-pull stage to be driven with the same control signal.

The circuit shows that at the input of the amplifier we actually see a complementary push-pull stage, and the circuit exhibits the same characteristics as a complementary stage. Therefore it is called a quasi-complementary stage.

http://www9.dwworld.de/rtc/infotheque/semiconamps/semiconductor_amps3.html

PUSH-PULL OUTPUT
A push-pull output is a type of electronic circuit that uses a pair of active devices that alternately supply current to, or absorb current from, a connected load. Push-pull outputs are present in TTL and CMOS digital logic circuits and in some types of amplifiers, and are usually realized as a complementary pair of transistors, one dissipating or sinking current from the load to ground or a negative power supply, and the other supplying or sourcing current to the load from a positive power supply.
A push-pull amplifier is more efficient than a single-ended "class-A" amplifier. The output power that can be achieved is higher than the continuous dissipation rating of either transistor or tube used alone and increases the power available for a given supply voltage. Symmetrical construction of the two sides of the amplifier means that even-order harmonics are cancelled, which can reduce distortion.[1]
DC current is cancelled in the output, allowing a smaller output transformer to be used than in a single-ended amplifier. However, the push-pull amplifier requires a phase-splitting component that adds complexity and cost to the system; use of center-tapped transformers for input and output is a common technique but adds weight and restricts performance. If the two parts of the amplifier do not have identical characteristics, distortion can be introduced as the two halves of the input waveform are amplified unequally. Crossover distortion can be created near the zero point of each cycle as one device is cut off and the other device enters its active region.
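To make the crossover-distortion mechanism concrete, a brief numerical sketch (a simplified model, not taken from the cited article) treats each unbiased output device as conducting only once its drive exceeds a base-emitter threshold, which carves a dead zone out of the waveform around every zero crossing:

    # Hedged illustration of crossover distortion in an unbiased (class B/C)
    # push-pull stage: neither device conducts while |input| < V_be,
    # so the output is flat around every zero crossing.
    import numpy as np

    V_be = 0.65                        # conduction threshold per device (assumption)
    t = np.linspace(0, 1e-3, 1000)     # 1 ms of a 1 kHz sine
    v_in = 2.0 * np.sin(2 * np.pi * 1000 * t)

    # Each half only passes the part of its half-wave above the threshold.
    v_out = np.sign(v_in) * np.maximum(np.abs(v_in) - V_be, 0.0)

    dead_fraction = np.mean(v_out == 0.0)
    print(f"fraction of the cycle with no output: {dead_fraction:.0%}")

Biasing the devices toward class AB shrinks this dead zone, which is exactly what the diode and resistor biasing schemes described earlier accomplish.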
Symmetrical push-pull
Each half of the output pair "mirrors" the other, in that an NPN (or N-channel FET) device in one half will be matched by a PNP (or P-channel FET) in the other. This type of arrangement tends to give lower distortion than quasi-symmetric stages because even harmonics are cancelled more effectively with greater symmetry.
Quasi-symmetrical push-pull
In the past, when good-quality PNP complements for high-power NPN silicon transistors were limited, a workaround was to use identical NPN output devices, but fed from complementary PNP and NPN driver circuits in such a way that the combination was close to being symmetrical (but never as good as having symmetry throughout). Distortion due to mismatched gain on each half of the cycle could be a significant problem.

Super-symmetric output stages


Employing some duplication in the driver circuit to allow symmetrical drive circuits can improve matching further, although driver asymmetry is only a small fraction of the distortion-generating process. Using a bridge-tied load arrangement allows a much greater degree of matching between positive and negative halves, compensating for the inevitable small differences between NPN and PNP devices.
https://en.wikipedia.org/wiki/Push%E2%80%93pull_output
PACKAGE CONFIGURATIONS
Two of the most common package configurations are the SOT-23 and SOT-323 (SC-70). Other package configurations are SOT-66, SOT-89, SOT-143, SOT-223 and TSOT-23, which is a thinner version of the SOT-23. These packages are used for transistors, comparators, diodes and other simple components.
There are several examples of the various types in the tables below. The dimensions given are for orientation only; for exact information (and footprint data) it is necessary to consult the appropriate outline drawing.
