
PRODUCT PROFILE AND HISTORY:

GRAPHICS PROCESSING UNIT:

PROFILE AND HISTORY:

Today, a GPU is one of the most crucial hardware components of computer architecture. Initially, the purpose of a video card was to take a stream of binary data from the central processor and render images to the display. But modern graphics processing units are engaged in the most complex calculations, such as big data research, machine learning and AI.

Over several decades, the GPU evolved from single-core, fixed-function hardware used solely for graphics to a set of programmable parallel cores. Let’s have a look at some milestones in the fascinating history of the GPU.

ARCADE BOARDS AND DISPLAY ADAPTERS (1951–1995)

As early as 1951, MIT built the Whirlwind, a flight simulator for the Navy. Although it may be considered the first 3D graphics system, the basis of today’s GPUs was formed in the mid-70s with so-called video shifters and video address generators. They carried information from the central processor to the display. Specialized graphics chips were widely used in arcade system boards. In 1976, RCA built the “Pixie” video chip, which was able to output a video signal at 62×128 resolution. The graphics hardware of the Namco Galaxian arcade system supported RGB color, multi-colored sprites and tilemap backgrounds as early as 1979.
In 1981, IBM started shipping monochrome and color display adapters (MDA/CGA) in its PCs. While not a modern GPU yet, this was a dedicated computer component designed for one purpose: to display video. At first, that meant 80 columns by 25 lines of text characters or symbols. The iSBX 275 Video Graphics Controller Multimodule Board, released by Intel in 1983, was the next revolutionary device. It was able to display eight colors at a resolution of 256×256, or monochrome at 512×512.

MONOCHROME DISPLAY ADAPTER (IBM, 1981)

In 1985, three Hong Kong immigrants in Canada formed Array Technology Inc., soon renamed ATI Technologies. This company would lead the market for years with its Wonder line of graphics boards and chips.

S3 Graphics introduced the S3 86C911, named after the Porsche 911, in 1991. The name was meant to signal a leap in performance. This card spawned a crowd of imitators: by 1995, all major graphics card makers had added 2D acceleration support to their chips. Throughout the 1990s, the level of integration of video cards was significantly improved with additional application programming interfaces (APIs).

Overall, the early 1990s was a time when many graphics hardware companies were founded, then acquired or driven out of business. Among the winners founded during this period was NVIDIA. By the end of 1997, the company had nearly 25 percent of the graphics market.

3D REVOLUTION (1995–2006)
The history of modern GPUs starts in 1995 with the introduction of the first 3D add-in cards, followed by the adoption of 32-bit operating systems and affordable personal computers. Previously, the industry had focused on 2D and non-PC architectures, and graphics cards were mostly known by alphanumeric names and huge price tags.

3dfx’s Voodoo graphics card, launched in late 1996, took over about 85% of the market. Cards that could only render 2D became obsolete very fast. The Voodoo1 steered clear of 2D graphics entirely; users had to run it together with a separate 2D card. But it was still a godsend for gamers. The company’s next product, Voodoo2 (1998), had three onboard chips and was one of the first video cards to support the parallel operation of two cards within a single computer.

With the progress of manufacturing technology, video, 2D GUI acceleration and 3D functionality were all integrated into one chip. Rendition’s Verite chipsets were among the first to do this well. 3D accelerator cards were not just rasterizers any more.

Finally, the “world’s first GPU” came in 1999! This is how Nvidia promoted its
GeForce 256. Nvidia defined the term graphics processing unit as “a single-
chip processor with integrated transform, lighting, triangle setup/clipping, and
rendering engines that is capable of processing a minimum of 10 million
polygons per second.”

The rivalry between ATI and Nvidia was the highlight of the early 2000s. Over this time, the two companies went head to head and delivered graphics cards with features that are now commonplace, such as specular shading, volumetric explosions, waves, refraction, shadow volumes, vertex blending, bump mapping and elevation mapping.
GENERAL PURPOSE GPUS (2006 — PRESENT DAY)

The era of general purpose GPUs began in 2007. Both Nvidia and ATI (since acquired by AMD) had been packing their graphics cards with ever more capabilities.

However, the two companies took different tracks toward general-purpose computing on GPUs (GPGPU). In 2007, Nvidia released its CUDA development environment, the earliest widely adopted programming model for GPU computing. Two years later, OpenCL became widely supported. This framework allows the development of code for both GPUs and CPUs with an emphasis on portability. Thus, GPUs became more generalized computing devices.
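
To make the data-parallel model behind CUDA and OpenCL concrete, here is a minimal, purely illustrative Python sketch (not real CUDA or OpenCL code): a "kernel" function is applied independently to every element of an array, which is exactly the kind of work a GPU spreads across thousands of cores. The function and variable names are our own.

# Illustrative sketch of the data-parallel "kernel" idea behind GPGPU frameworks.
# On a real GPU each index would be handled by its own hardware thread;
# here we simply loop, to show the programming model rather than the speed.

def saxpy_kernel(i, a, x, y, out):
    """One 'thread' of work: operate on a single array index."""
    out[i] = a * x[i] + y[i]

def launch(kernel, n, *args):
    """Stand-in for a kernel launch: run the kernel once per index."""
    for i in range(n):              # a GPU would run these in parallel
        kernel(i, *args)

n = 8
a = 2.0
x = [float(i) for i in range(n)]
y = [1.0] * n
out = [0.0] * n

launch(saxpy_kernel, n, a, x, y, out)
print(out)                          # [1.0, 3.0, 5.0, ..., 15.0]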

In 2010, Nvidia collaborated with Audi, using Tegra GPUs to power the cars’ dashboards and enhance their navigation and entertainment systems. These advancements in in-car graphics pushed self-driving technology forward.

NVIDIA TITAN V, 2017

Pascal is the newest generation of graphics cards by Nvidia, released in 2016. Its 16 nm manufacturing process improves upon previous microarchitectures. AMD released the Polaris 11 and Polaris 10 GPUs, featuring a 14 nm process, which delivered a robust increase in performance per watt. However, the energy consumption of modern GPUs has increased as well.
Today, graphics processing units are not only for graphics. They have found their way into fields as diverse as machine learning, oil exploration, scientific image processing, statistics, linear algebra, 3D reconstruction, medical research and even stock option pricing. GPU technology keeps adding programmability and parallelism to a core architecture that is ever-evolving towards a general purpose, CPU-like core.

PROCESSOR:

PROFILE AND HISTORY:

Processors are probably the single most interesting piece of hardware in your computer. They have a rich history, dating all the way back to 1971 with the first commercially available microprocessor, the Intel 4004. As you can imagine, and have no doubt seen yourself, technology has improved by leaps and bounds since then.

We’re going to show you a history of the processor, starting with the Intel 8086. It was the processor line IBM chose for the first PC, and it has a neat history from then on out.

CPUs have gone through many changes in the years since Intel came out with the first one. IBM chose Intel’s 8088 processor as the brains of the first PC. This choice by IBM is what made Intel the perceived leader of the CPU market, and Intel remains the perceived leader of microprocessor development. While newer contenders have developed their own technologies for their own processors, Intel continues to be more than a viable source of new technology in this market, with the ever-growing AMD nipping at its heels.
The first four generations of Intel processors took on the “8” as the series name, which is why the technical types refer to this family of chips as the 8088, 8086, and 80186. This goes right on up to the 80486, or simply the 486. The following chips are considered the dinosaurs of the computer world. PCs based on these processors are the kind that usually sit around in the garage or warehouse collecting dust. They are not of much use anymore, but us geeks don’t like throwing them out because they still work. You know who you are.

 Intel 8086 (1978):
This chip was skipped over for the original PC, but was used in a few later computers that didn’t amount to much. It was a true 16-bit processor and talked with its cards via a 16-wire data connection. The chip contained 29,000 transistors and 20 address lines that gave it the ability to address up to 1 MB of RAM (see the quick calculation after this list). What is interesting is that the designers of the time never suspected anyone would ever need more than 1 MB of RAM. The chip was available in 5, 6, 8, and 10 MHz versions.
 Intel 8088 (1979):
The 8088 is, for all practical purposes, identical to the 8086. The main difference is that it talks to the outside world over an 8-bit data bus instead of the 8086’s 16-bit bus. This chip was the one that was chosen for the first IBM PC, and like the 8086, it is able to work with the 8087 math coprocessor chip.
 NEC V20 and V30 (1981):
Clones of the 8088 and 8086. They are supposed to be
about 30% faster than the Intel ones, though.
 Intel 80186 (1980):
The 186 was a popular chip. Many versions were developed over its history. Buyers could choose from CHMOS or HMOS, 8-bit or 16-bit versions, depending on what they needed. A CHMOS chip could run at twice the clock speed and at one fourth the power of the HMOS chip. In 1990, Intel came out with the Enhanced 186 family. They all shared a common 1-micron core design and ran at about 25 MHz at 3 volts.
The 80186 contained a high level of integration, with the system controller, interrupt controller, DMA controller and timing circuitry right on the CPU. Despite this, the 186 never found itself in a personal computer.
 Intel 80286 (1982):
A 16-bit, 134,000-transistor processor capable of addressing up to 16 MB of RAM. In addition to the increased physical memory support, this chip is able to work with virtual memory, thereby allowing for much more expandability. The 286 was the first “real” processor. It introduced the concept of protected mode: the ability to multitask, having different programs run separately but at the same time. This ability was not taken advantage of by DOS, but future operating systems, such as Windows, could play with this new feature. One of the drawbacks of this ability, though, was that while the chip could switch from real mode to protected mode (real mode was intended to make it backwards compatible with the 8088), it could not switch back to real mode without a warm reboot. This chip was used by IBM in its Advanced Technology PC/AT and was used in a lot of IBM compatibles. It ran at 8, 10, and 12.5 MHz, but later editions of the chip ran as high as 20 MHz. While these chips are considered paperweights today, they were rather revolutionary for the time period.
 Intel 386 (1985 – 1990):
The 386 signified a major increase in technology from Intel.
The 386 was a 32-bit processor, meaning its data
throughput was immediately twice that of the 286.
Containing 275,000 transistors, the 80386DX processor
came in 16, 20, 25, and 33 MHz versions. The 32-bit
address bus allowed the chip to work with a full 4 GB of RAM and a staggering
64 TB of virtual memory. In addition, the 386 was the first chip to use
instruction pipelining, which allows the processor to start working on the next
instruction before the previous one is complete. While the chip could run in
both real and protected mode (like the 286), it could also run in virtual real
mode, allowing several real mode sessions to be run at a time. A multi-tasking
operating system such as Windows was necessary to do this, though. In 1988,
Intel released the 386SX, which was basically a low-fat version of the 386. It used a 16-bit data bus rather than the 32-bit one, and it was slower, but it used less power, which enabled Intel to promote the chip into desktops and even portables. In 1990, Intel released the 80386SL, which was basically an 855,000-transistor version of the 386SX processor, with ISA compatibility and power management circuitry.
386 chips were designed to be user friendly. All chips in the family were pin-for-pin compatible and binary compatible with the previous generations, meaning that users didn’t have to get new software to use them. Also, the 386 offered power-friendly features such as low voltage requirements and System Management Mode (SMM), which could power down various components to save power. Overall, this chip was a big step for chip development. It set the standard that many later chips would follow, and it offered a simple design which developers could easily design for.
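
The memory limits quoted in the list above follow directly from the width of the address bus: n address lines can select 2^n distinct byte addresses. A quick back-of-the-envelope check in Python (our own illustration, not part of the original text):

# Addressable memory = 2**(number of address lines), in bytes.
def addressable_bytes(address_lines):
    return 2 ** address_lines

print(addressable_bytes(20) // 2**20, "MB")   # 8086/8088: 20 address lines -> 1 MB
print(addressable_bytes(24) // 2**20, "MB")   # 80286: 24-bit addresses -> 16 MB
print(addressable_bytes(32) // 2**30, "GB")   # 80386DX: 32-bit addresses -> 4 GB
print(addressable_bytes(46) // 2**40, "TB")   # 386 virtual memory: 46-bit virtual addresses -> 64 TB
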
INTEL 486 (1989 – 1994)
The 80486DX was released in 1989. It was a 32-bit
processor containing 1.2 million transistors. It had the same
memory capacity as the 386 (both were 32-bit) but offered
twice the speed at 26.9 million instructions per second
(MIPS) at 33 MHz. There are some improvements here,
though, beyond just speed. The 486 was the first to have an integrated floating
point unit (FPU) to replace the normally separate math coprocessor (not all
flavors of the 486 had this, though). It also contained an integrated 8 KB on-die
cache. This increases speed by using instruction pipelining to predict the next instructions and storing them in the cache. Then, when the processor needs that data, it pulls it from the cache rather than paying the overhead of accessing external memory. Also, the 486 came in 5-volt and 3-volt versions, allowing flexibility for desktops and laptops.

The 486 chip was the first processor from Intel that was designed to be
upgradeable. Previous processors were not designed this way, so when the
processor became obsolete, the entire motherboard needed to be replaced. With
the 486, the same CPU socket could accommodate several different flavors of
the 486. Initial 486 offerings were designed to be able to be upgraded using
“OverDrive” technology. This means you can insert a chip with a faster internal
clock into the existing system. Not all 486 systems could use OverDrive, since
it takes a certain type of motherboard to support it.

The first member of the 486 family was the i486DX, but in 1991 they released
the 486SX and 486DX/50. Both chips were basically the same, except that the
486SX version had the math coprocessor disabled (yes, it was there, just turned
off). The 486SX was, of course, slower than its DX cousin, but the resulting
reduced cost and power lent itself to faster sales and movement into the laptop
market. The 486DX/50 was simply a 50MHz version of the original 486. The
DX could not support future OverDrives while the SX processor could.

In 1992, Intel released the next wave of 486s making use of OverDrive technology. The first models were the i486DX2/50 and i486DX2/66. The extra “2” in the names indicates that the processor’s normal clock speed is effectively doubled using OverDrive, so the 486DX2/50 is a 25 MHz chip doubled to 50 MHz. The slower base speed allowed the chip to work with existing motherboard designs, while internally the chip operated at the increased speed, thereby increasing performance.
Also in 1992, Intel put out the 486SL. It was virtually identical to vintage 486 processors, but it contained 1.4 million transistors. The extra innards were used by its internal power management circuitry, optimizing it for mobile use. From there, Intel released various 486 flavors, mixing SLs with SXs and DXs at a variety of clock speeds. By 1994, Intel was rounding out its continued development of the 486 family with the DX4 OverDrive processors. While you might think these were 4X clock quadruplers, they were actually 3X triplers, allowing a 33 MHz processor to operate internally at 100 MHz.
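
The clock-multiplier naming above is just multiplication of the motherboard (base) clock by the OverDrive multiplier; here is a small sketch of the arithmetic, using the figures quoted in this section:

# Effective internal clock = motherboard (base) clock x OverDrive multiplier.
def overdrive_clock(base_mhz, multiplier):
    return base_mhz * multiplier

print(overdrive_clock(25, 2))    # i486DX2/50: 25 MHz doubled to 50 MHz
print(overdrive_clock(33, 2))    # i486DX2/66: 33 MHz doubled to 66 MHz
print(overdrive_clock(33, 3))    # DX4: actually a 3X tripler, roughly 100 MHz internally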

COMPUTER MOTHERBOARD:

PROFILE AND HISTORY:

A motherboard is the most vital component of a computer, connecting almost all its peripherals. Without it, the computer would just be an empty tin box. Here we trace the motherboard’s tracks as it advanced through the decades.

A motherboard is a complex printed circuit board (PCB) that forms the central part of many electronic systems, particularly computers. It is alternately known as the mainboard, system board, or logic board (on Apple computers). It is a platform that offers the electrical connections through which the other components of a computer communicate, and it also houses the central processing unit (CPU), generally referred to as the brain of the computer.

Motherboards are also present in mobile phones, clocks, stopwatches, and so on. They include many essential components of a computer, such as the microprocessor, main memory, and the microprocessor’s supporting chipset, which provides an interface between the CPU and other external components. This vital piece of technology revolutionized the way computer systems were later designed; the earlier versions weren’t as efficient or dependable.
Today’s motherboards contain more or less the following parts:

 Expansion card slots.
 A CPU, which in many computers comes soldered directly to the motherboard.
 Logic and connectors that support input devices.
 Power connectors that draw electricity from the computer’s power supply to run the expansion cards, memory, CPU, and chipset.
 An integrated sound card.
 Slots or sockets that allow one or more microprocessors to be installed.
 A clock generator that provides the system clock signal used to synchronize the various components.
 Non-volatile memory chips that contain the BIOS or firmware of the system.
 Integrated graphics support with 2D and 3D capabilities.
 USB controllers that can support about 12 USB ports.

Before the invention of microprocessors, computers were built as mainframes, with components connected by a backplane that had countless slots for connecting wires. In old designs, wires were needed to connect card connector pins, but they soon became a thing of the past with the invention of PCBs. The CPU, memory and other peripherals came to be housed on this printed circuit board.

During the late 1980s and 1990s, it was found that integrating an increasing number of peripheral functions onto the PCB was more economical. Hence, single integrated circuits (ICs) capable of supporting low-speed peripherals like serial ports, mice, and keyboards were included on the motherboard. By the late 1990s, motherboards had become multifaceted platforms with integrated audio, video, storage and networking functions. Higher-end functions for 3D gaming and graphics cards were also included later.

Micronics, Mylex, AMI, DTK, Orchid Technology, and Elitegroup were among the early pioneers in the field of motherboard manufacturing, before companies like Apple and IBM took over. They offered top-grade, sophisticated devices that included upgraded features and superior performance over the prevailing ones.

Timeline of Various Computer Components


1967: The first floppy disk is created by IBM.
1970: Intel announces the first commercially available dynamic random-access memory (DRAM) chip, the 1103.
1971: Intel releases the first microprocessor, the 4004.
1972: The invention of the compact disc.
1974: The 8080 microprocessor is released by Intel.
1976: Apple Computer is founded by Steve Wozniak and Steve Jobs and introduces the Apple I, a device built around a single motherboard to which a keyboard and display are attached.
1977: The first commercial network, ARCNET, is developed, and the Apple II takes the market by storm as the first personal computer to integrate colored graphics.
1980: Paul Allen and Bill Gates are hired by IBM to create DOS. In the same year Microsoft licenses UNIX and starts to develop a PC version called XENIX.
1987: Elitegroup Computer Systems Co. Ltd. is established in Taiwan and becomes the largest supplier of motherboards in the world.
1989: AsusTek, one of Taiwan’s top companies, starts manufacturing graphics cards.
1993: First International Computer Inc. becomes the largest motherboard manufacturer in the world.
1997: Intel Corp. moves to extend its microprocessor dominance by manufacturing motherboards as well.
2000: ATI Technologies Inc. announces new graphics card technology, an advancement in computer graphics.
2007: AsusTek becomes the world’s largest maker of computer motherboards.

Since the motherboard’s inception, technology has grown by leaps and bounds to accommodate the needs of modern users with faster, lighter, and higher-end capabilities. The drive to reinvent and advance further has been made possible by the working geniuses behind each technological introduction. With every passing year we see more innovations break into the market, with manufacturers battling it out to come out on top. In time, we may well witness another revolutionary change in the face of this technology within our lifetime.

STORAGE DEVICES:

PROFILE AND HISTORY:

Punch cards were the first effort at data storage in a machine language. They were used to communicate information to equipment before computers were developed. The punched holes originally represented a sequence of instructions for pieces of equipment such as textile looms and player pianos. The holes acted as on/off switches. Basile Bouchon developed the punch card as a control for looms in 1725.
In 1837, a little over 100 years later, Charles Babbage proposed the Analytical Engine, a primitive mechanical calculator that used punch cards for instructions and responses. Herman Hollerith later developed this idea further, having the holes represent not just a sequence of instructions but stored data that the machine could read.

He developed a punch card data processing system for the 1890 U.S. Census,
and then started the Tabulating Machine Company in 1896. By 1950, punch
cards had become an integral part of American industry and government.
The warning, “Do not fold, spindle, or mutilate,” originated from punch cards.
Punch cards were still being used quite regularly until the mid-1980s. (Punch
cards continue to be used in recording the results of standardized tests and
voting ballots.)

In the 1960s, magnetic storage gradually replaced punch cards as the primary means of data storage. Magnetic tape was first patented in 1928 by Fritz Pfleumer. (Cassette tapes were often used for storage on homemade “personal computers” in the 1970s and 80s.) In 1965, Mohawk Data Sciences offered a magnetic tape encoder, described as a punch card replacement. By 1990, the combination of affordable personal computers and magnetic disk storage had made punch cards nearly obsolete.

In the past, the terms “Data Storage” and “memory” were often used
interchangeably. However, at present, Data Storage is an umbrella phrase that
includes memory. Data Storage is often considered long term, while memory is
frequently described as short term.
VACUUM TUBES FOR RANDOM ACCESS MEMORY

In 1948, Professor Frederic Williams and colleagues developed one of the first forms of random access memory (RAM) for storing frequently used programming instructions, in turn increasing the overall speed of the computer. Williams used an array of cathode-ray tubes (a form of vacuum tube) to act as on/off switches and digitally store 1,024 bits of information.

Data in RAM (sometimes called volatile memory) is temporary: when a computer loses power, the data is lost, and often frustratingly irretrievable. ROM (Read Only Memory), on the other hand, is permanently written and remains available after a computer has lost power.

MAGNETIC CORE, TWISTOR & BUBBLE MEMORY

In the late 1940s, magnetic core memory was developed and patented, and over the next ten years it became the primary way early computers wrote, read, and stored data. The system used a grid of current-carrying wires (address and sense wires), with doughnut-shaped magnets (called ferrite cores) circling each point where the wires intersected. Address lines polarized a ferrite core’s magnetic field one way or the other, creating a switch that represents a zero or a one (on/off). The arrangement of address and sense wires threading through the ferrite cores allows each core to store one bit of data. The bits are grouped into units called words, which form a single memory address when accessed together.
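
As a rough sketch of the idea, a core plane can be pictured as a grid of one-bit cells selected by an X and a Y address wire, with several planes stacked so that each (x, y) address yields one word. The grid size and word width below are arbitrary illustration values, and the destructive read-and-rewrite cycle of real core memory is omitted.

# Toy model of core memory: each (x, y) address selects one core per plane,
# and the stack of planes holds one word of data per address.
class CorePlaneStack:
    def __init__(self, width, height, bits_per_word):
        self.planes = [[[0] * width for _ in range(height)]
                       for _ in range(bits_per_word)]

    def write_word(self, x, y, word_bits):
        for plane, bit in zip(self.planes, word_bits):
            plane[y][x] = bit            # magnetize each core one way or the other

    def read_word(self, x, y):
        return [plane[y][x] for plane in self.planes]

memory = CorePlaneStack(width=4, height=4, bits_per_word=8)
memory.write_word(2, 3, [1, 0, 1, 1, 0, 0, 1, 0])
print(memory.read_word(2, 3))            # [1, 0, 1, 1, 0, 0, 1, 0]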

In 1953, MIT purchased the patent and developed the first computer to use this technology, the Whirlwind. Magnetic core memories, being faster and more efficient than punch cards, became popular very quickly. However, manufacturing them was difficult and time consuming: it involved delicate work, with workers (typically women with steady hands) using microscopes to tediously thread thin wires through very small holes.

Twistor magnetic memory was invented in 1957 by Andrew Bobeck. It creates computer memory using very fine magnetic wires interwoven with current-carrying wires. It is similar to core memory, but the wrapped magnetic wires replace the circular magnets, and each intersection in the network represents one bit of data. The magnetic wires were specifically designed to allow magnetization only along specific sections of their length, so only designated areas of the Twistor would be magnetized and capable of changing polarization (on/off).

Bell Labs promoted the Twistor technology, describing it as superior to magnetic core memory. The system weighed less, required less current, and was predicted to be much cheaper to produce. The Twistor memory concept led Bobeck to develop another short-lived magnetic memory technology in the 1980s, known as bubble memory. Bubble memory is a thin magnetic film that uses small magnetized areas which look like bubbles.

SEMICONDUCTOR MEMORY

In the late 1960s, the newly formed Intel Corporation began selling a semiconductor chip with 2,000 bits of memory. A semiconductor memory chip stores data in a small circuit referred to as a memory cell. Memory cells are made up of miniaturized transistors and/or miniaturized capacitors, which act as on/off switches.

A semiconductor can conduct electricity under specific conditions, making it an excellent medium for controlling electricity. Its conductivity varies depending on the current or voltage applied to a control electrode. A semiconductor device offers a superior alternative to vacuum tubes, delivering hundreds of times more processing power. A single microprocessor chip can replace thousands of vacuum tubes and requires significantly less electricity.

MAGNETIC DISK STORAGE

Magnetic drums were the first incarnation of magnetic disk storage. Gustav Tauschek, an Austrian inventor, developed the magnetic drum in 1932. The drum had a read/write head for each track, staggered around its circumference. With no head movement to wait for, access time is quite short, being based on one revolution of the drum. If multiple heads are used, data can be transferred quickly, helping to compensate for the lack of RAM in these systems.

IBM is primarily responsible for driving the early evolution of magnetic disk storage. The company invented both the floppy disk drive and the hard disk drive, and its staff are credited with many of the improvements supporting these products. IBM developed and manufactured disk storage devices from 1956 to 2003, when it sold its hard disk business to Hitachi.

IBM switched its focus to 8-inch floppy disks from 1969 until the mid-1980s. A
floppy disk is an easily removed (and easily installed) portable storage device. It
is made of magnetic film encased in a flexible plastic, and is inexpensive to
manufacture. IBM developed the 8-inch floppy specifically for the System/370
mainframe. On the downside, a floppy disk is very easy to damage.

In 1976, Alan Shugart improved on IBM’s floppy disk by developing a smaller version of it, because IBM’s 8-inch floppy disk was too big for a standard desktop computer. The new 5.25-inch floppy disk was cheaper to manufacture and could store 110 kilobytes of data. These disks became extremely popular and were used on most personal computers.

The 3.5-inch floppy disk (introduced in 1982) gradually became more popular than the 5.25-inch floppy disk. The 3.5-inch version came with a significant advantage: a rigid cover protecting the magnetic film inside. However, both formats remained quite popular until the mid-1990s. (Over time, several size variations were introduced, but with very little marketing success.)

OPTICAL DISCS

In the 1960s, an inventor named James T. Russell thought about, and worked on, the idea of using light as a mechanism to record, and then replay, music. No one took his invention of the optical disc seriously until 1975, when Sony paid Russell millions of dollars to finish his project. This investment led to his completing the project in 1980, in turn leading to CDs (Compact Discs), DVDs (Digital Versatile Discs) and Blu-ray. (The word “disk” is used for magnetic recordings, while “disc” is used for optical recordings. IBM, which had no optical formats, preferred the “k” spelling, but in 1979 Sony and a Dutch company named Philips chose the “c” spelling in developing and trademarking the compact disc.)

MAGNETO-OPTICAL DISCS

The magneto-optical disc, a hybrid storage medium, was introduced in 1990. This disc format uses both magnetic and optical technologies for storing and retrieving digital data. The discs normally come in 3.5-inch and 5.25-inch sizes. The system reads sections of the disc with different magnetic alignments: laser light reflected from the different polarizations varies, per the Kerr effect, providing an on/off, one-bit data storage system.
When the disc is prepared for writing, each section of the disc is heated with a strong laser and then cooled under the influence of a magnetic field. This magnetizes the storage areas in one direction, “off.” The writing process reverses the polarization of specific areas, turning them “on” for the storage of data.

FLASH DRIVES

Flash drives appeared on the market late in the year 2000. A flash drive plugs into a computer through its built-in USB plug, making it a small, easily removable, very portable storage device. Unlike a traditional hard drive or an optical drive, it has no moving parts, instead combining chips and transistors for maximum functionality. Generally, a flash drive’s storage capacity ranges from 8 to 64 GB. (Other sizes are available but can be difficult to find.)

A flash drive can be rewritten nearly a limitless number of times and is unaffected by electromagnetic interference (making it ideal for moving through airport security). Because of this, flash drives have entirely replaced floppy disks for portable storage. With their large storage capacity and low cost, flash drives are now on the verge of replacing CDs and DVDs.

Flash drives are sometimes called pen drives, USB drives, thumb drives, or jump drives. Solid state drives (SSDs) are sometimes referred to as flash drives, but they are larger and clumsier to transport.

SOLID STATE DRIVES (SSD)

Variations of solid state drives have been used since the 1950s. An SSD is a nonvolatile storage device that basically does everything a hard drive does. It stores data on interlinked flash memory chips. The memory chips can either be part of the system’s motherboard or sit in a separate enclosure designed and wired to plug in like a laptop or desktop hard drive. The flash memory chips are different from those used in USB thumb drives, making them faster and more reliable. As a result, an SSD is more expensive than a USB thumb drive of the same capacity.

SSDs “can” be portable, but will not fit in your pocket.

DATA SILOS

Data Silos are a data storage system of sorts. A Data Silo stores data for a business, or a department of the business, that is incompatible with the rest of its systems but is deemed important enough to save for later translation. For many businesses, this amounted to a huge amount of information. Data Silos eventually became useful as a source of information for Big Data and came to be used deliberately for that purpose. Then came Data Lakes.

DATA LAKES

Data Lakes were formed specifically to store and process Big Data, with multiple organizations pooling huge amounts of information into a single Data Lake. A Data Lake stores data in its original format and is typically processed with a NoSQL database (a Data Warehouse, by contrast, uses a hierarchical database). NoSQL handles the data in all its various forms and allows for the processing of raw data. Most of this information can be accessed by users via the internet.

CLOUD DATA STORAGE

The Internet made the Cloud available as a service. Improvements within the Internet, such as the continuously falling cost of storage capacity and improved bandwidth, have made it more economical for individuals and businesses to use the Cloud for data storage. The Cloud offers an essentially infinite amount of data storage to its users, providing near-infinite scalability and access to data from anywhere, at any time. It is often used to back up information initially stored on site, making it available should a company’s own system suffer a failure. Cloud security is a significant concern among users, and service providers have built security measures, such as encryption and authentication, into the services they provide.

RANDOM ACCESS MEMORY:

PROFILE AND HISTORY:

Memory is at the core of logic. Be it human or machine, we can’t process anything unless we have a place to store data, and that’s why memory has always been one of the core components in computer design. When we talk about memory, most of us assume that we are referring to RAM, but that’s not how things actually started off.

Early computers had a completely different concept of memory from the one we use today. Most people who have studied computer science will know that they employed an electrical device called the vacuum tube, something similar to what we had in CRT monitors and televisions. Then came the era of the transistor, created by Bell Labs.

The transistor became the core component of modern-day memory, which started off with simple latches: a circuit configuration of transistors that can store 1 bit of data. Latches evolved into flip-flops, which could be packed together to form the registers used in most static memory cells today. Another approach tied a transistor to a capacitor, which allowed smaller and more compact dynamic memory.
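
As a rough illustration of how a latch holds one bit, here is a tiny Python model of a gated D latch; it mimics the behaviour the transistor circuit implements, and is a conceptual sketch only (real SRAM cells are analog circuits of cross-coupled transistors, not software objects):

# Conceptual model of a gated D latch: while 'enable' is high it follows the
# data input; when 'enable' drops, it keeps (stores) the last value it saw.
class DLatch:
    def __init__(self):
        self.q = 0                  # the stored bit

    def update(self, data, enable):
        if enable:
            self.q = data           # transparent: follow the input
        return self.q               # otherwise: hold the previous bit

bit = DLatch()
print(bit.update(1, enable=1))      # 1 -> latched
print(bit.update(0, enable=0))      # still 1: the cell "remembers"
print(bit.update(0, enable=1))      # 0 -> overwritten
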
BASIC TYPES OF MEMORY: SRAM AND DRAM
Memory can easily be classified into two major categories: static RAM and dynamic RAM. Like I said above, static RAM uses a special arrangement of transistors to make a flip-flop, a type of memory cell. One memory cell can store 1 bit of data. Most modern SRAM cells are made of six CMOS transistors, and they are the fastest type of memory on planet Earth.

In contrast, dynamic RAM pairs one transistor with a capacitor to create an ultra-compact memory cell. On the flip side, the capacitor needs to be refreshed periodically to keep its charge, which introduces latency into memory access, something we refer to as memory timings.
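
A toy model of why DRAM needs refreshing: the capacitor’s charge leaks away, so unless the cell is rewritten within its refresh interval, a stored 1 eventually reads back as a 0. The leak rate and threshold below are made-up numbers, purely for illustration.

# Toy DRAM cell: charge leaks over time and must be refreshed periodically.
class DramCell:
    LEAK_PER_TICK = 0.2             # made-up leak rate
    THRESHOLD = 0.5                 # charge above this reads back as a 1

    def __init__(self):
        self.charge = 0.0

    def write(self, bit):
        self.charge = 1.0 if bit else 0.0

    def tick(self):                 # time passes, charge leaks away
        self.charge = max(0.0, self.charge - self.LEAK_PER_TICK)

    def read(self):
        return 1 if self.charge > self.THRESHOLD else 0

    def refresh(self):              # read the bit and rewrite it at full strength
        self.write(self.read())

cell = DramCell()
cell.write(1)
cell.tick(); cell.tick()
cell.refresh()                      # refreshed in time: the bit survives
print(cell.read())                  # 1
cell.tick(); cell.tick(); cell.tick()
print(cell.read())                  # 0 -- refreshed too late, the bit has leaked away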

While DRAM has an obvious size advantage over SRAM, its speed can’t come close to that of static memory cells (which don’t need to be refreshed and are always available). That’s why the fastest memory is always made out of SRAM cells, like the registers in the CPU and the caches used in numerous devices. But thanks to its much higher space requirements, SRAM is expensive and can’t be used as primary memory.

DRAM, on the other hand, is quite dense, and is therefore employed in most places that don’t require instantaneous access but do require large capacities, like the main memory in a computer.

ASYNCHRONOUS AND SYNCHRONOUS RAM


RAM can also be classified by functionality. Everyone knows that electronic devices work on switching voltages or pulses, which we call the system clock (the rate of which we call the frequency or clock speed).
Synchronous RAM can only send or receive data when a clock pulse enters or leaves the system. I’ll explain this in more detail later on. Asynchronous RAM can be accessed at any time during a clock cycle, which presents an obvious advantage over synchronous RAM.

SINGLE DATA RATE SDRAM


SDR SDRAM is virtually obsolete now as far as the computer industry is concerned. It was one of the first memory architectures to support synchronous operation and was simply known as SDRAM at the time. Single data rate means that it can transfer one machine word (16 bits on the x86 architecture) of data during one clock cycle. It was widely used in computer systems of the 90s, up to the Intel Pentium III era.

Common SDR memory standards included PC-100 and PC-133, which ran at clock speeds of 100 MHz and 133 MHz respectively.

DOUBLE DATA RATE SDRAM


Also known as DDR memory, it was the direct successor to the single data rate SDRAM architecture. DDR improved upon the SDR design by transferring twice the data during one clock cycle: one word of data on the rising edge and one word on the falling edge of the clock pulse. This provided a significant increase in performance over the traditional architecture. DDR memory was primarily used with the Intel Pentium 4 and AMD Athlon architectures.

For marketing purposes, DDR memory has always been promoted at speeds twice the actual clock value. For example, common memory standards for DDR included DDR-200, DDR-266, DDR-333 and DDR-400, which actually had respective clock speeds of 100 MHz, 133 MHz, 166 MHz and 200 MHz.
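
To put rough numbers on the SDR-to-DDR jump, peak transfer rate is simply the bus clock times the transfers per clock times the bus width. The sketch below assumes a 64-bit (8-byte) module, as on typical PC DIMMs; the figures are theoretical peaks, not real-world throughput.

# Peak bandwidth (MB/s) = clock (MHz) x transfers per clock x bus width (bytes)
def peak_bandwidth_mb_s(clock_mhz, transfers_per_clock, bus_bytes=8):
    return clock_mhz * transfers_per_clock * bus_bytes

print(peak_bandwidth_mb_s(133, 1))   # SDR PC-133: about 1,064 MB/s
print(peak_bandwidth_mb_s(200, 2))   # DDR-400:    about 3,200 MB/s ("PC-3200")
print(peak_bandwidth_mb_s(400, 2))   # DDR2-800:   about 6,400 MB/s ("PC2-6400")
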
DDR2 SDRAM
The DDR standard gained a huge following and was subsequently improved to address high-performance memory needs. Improvements were made in memory bandwidth, clock rates, and voltages, resulting in notable gains in overall system performance. DDR2 was the standard for most chipsets running the Pentium 4 Prescott and later processors, including the Intel Core and AMD Athlon 64 lines.

Common memory standards for DDR2 were DDR2-400, DDR2-533, DDR2-667, DDR2-800 and DDR2-1066. All of these modules operate at half the frequency implied by the name, just as with DDR.

DDR3 SDRAM
The DDR3 specifications were finalized in 2007, primarily increasing the achievable clock rates while reducing voltages. Unfortunately, however, the latencies also increased significantly, so there were only 2-5% performance gains in real-world applications compared to DDR2 (on architectures that support both standards). Still, DDR3 is the logical next step, because the latest AMD and Intel platforms (790/AM3 and X58/P55) only support DDR3 memory.

Common memory standards for DDR3 today include DDR3-1066, DDR3-1333, DDR3-1600, DDR3-1800 and DDR3-2000.
