Over several decades, the GPU evolved from single-core, fixed-function hardware used solely for graphics into a set of programmable parallel cores. Let’s have a look at some milestones in the fascinating history of the GPU.
S3 Graphics introduced the S3 86C911, named after the Porsche 911, in 1991; the name was meant to signal the card’s performance. This card spawned a crowd of imitators: by 1995, all major makers of graphics cards had added 2D acceleration support to their chips. Throughout the 1990s, the level of integration of video cards improved significantly with the addition of application programming interfaces (APIs).
Overall, the early 1990s was a time when many graphics hardware companies were founded, then acquired or forced out of business. Among the
winners founded during this time was NVIDIA. By the end of 1997, this
company had nearly 25 percent of the graphics market.
3D REVOLUTION (1995–2006)
The history of modern GPUs starts in 1995 with the
introduction of the first 3D add-in cards, and later the adoption of 32-bit
operating systems and affordable personal computers. Previously, the industry
was focused on 2D and non-PC architecture, and graphics cards were mostly
known by alphanumeric names and huge price tags.
3dfx’s Voodoo graphics card, launched in late 1996, took over about 85% of
the market. Cards that could only render 2D became obsolete very fast. The
Voodoo1 steered clear of 2D graphics entirely; users had to run it together with
a separate 2D card. Even so, it was a godsend for gamers. The company’s next product, the Voodoo2 (1998), had three onboard chips and was one of the first video cards to support running two cards in parallel within a single computer.
Finally, the “world’s first GPU” came in 1999! This is how Nvidia promoted its
GeForce 256. Nvidia defined the term graphics processing unit as “a single-
chip processor with integrated transform, lighting, triangle setup/clipping, and
rendering engines that is capable of processing a minimum of 10 million
polygons per second.”
The rivalry between ATI and Nvidia was the highlight of the early 2000s. Over
this time, the two companies went head to head and delivered graphics cards
with features that are now commonplace, such as specular shading, volumetric explosions, waves, refraction, shadow volumes, vertex blending, bump mapping, and elevation mapping.
GENERAL-PURPOSE GPUS (2006–PRESENT DAY)
However, the two companies took different tracks toward general-purpose computing on GPUs (GPGPU). In 2007, Nvidia released its CUDA development environment,
the earliest widely adopted programming model for GPU computing. Two years
later, OpenCL became widely supported. This framework allows for the
development of code for both GPUs and CPUs with an emphasis on portability.
Thus, GPUs became more generalized computing devices.
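To give a flavor of what GPGPU code looks like, here is a minimal, illustrative CUDA sketch that adds two arrays on the GPU. The kernel name (vadd), the array size, and the launch configuration are arbitrary example choices, not details drawn from any product described above.

    #include <cstdio>
    #include <cuda_runtime.h>

    // Kernel: each GPU thread adds one pair of elements.
    __global__ void vadd(const float *a, const float *b, float *c, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) c[i] = a[i] + b[i];
    }

    int main() {
        const int n = 1 << 20;            // 1,048,576 elements (arbitrary)
        const size_t bytes = n * sizeof(float);

        float *a, *b, *c;
        cudaMallocManaged(&a, bytes);     // unified memory keeps the sketch short
        cudaMallocManaged(&b, bytes);
        cudaMallocManaged(&c, bytes);
        for (int i = 0; i < n; i++) { a[i] = 1.0f; b[i] = 2.0f; }

        int threads = 256;
        int blocks = (n + threads - 1) / threads;
        vadd<<<blocks, threads>>>(a, b, c, n);  // launch the kernel on the GPU
        cudaDeviceSynchronize();                // wait for the GPU to finish

        printf("c[0] = %.1f\n", c[0]);          // expect 3.0
        cudaFree(a); cudaFree(b); cudaFree(c);
        return 0;
    }

OpenCL expresses the same idea with kernels written in a C dialect plus a host API, and it targets CPUs as well as GPUs, which is the portability emphasis mentioned above.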
In 2010, Nvidia collaborated with Audi, using Tegra GPUs to power the cars’ dashboards and enhance their navigation and entertainment systems. These advances in in-vehicle graphics helped push self-driving technology forward.
PROCESSOR:
We’re going to show you a history of the processor, starting with the Intel 8086 family. A variant of it, the 8088, was the processor IBM chose for the first PC, and the family has had a remarkable history from then on.
CPUs have gone through many changes in the years since Intel came out with the first one. IBM chose Intel’s 8088 processor as the brains of the first PC, and that choice is what made Intel the perceived leader of the CPU market, a position it still holds in microprocessor development. While newer contenders have developed their own processor technologies, Intel remains more than a viable source of new technology in this market, with the ever-growing AMD nipping at its heels.
The first four generations of Intel processors shared the “80x86” naming scheme, which is why technical types refer to this family of chips as the 8088, 8086, 80186, and so on, right up to the 80486, or simply the 486. These chips are considered the dinosaurs of the computer world. PCs based on these processors are the kind that usually sit around in the garage or warehouse collecting dust. They are not of much use anymore, but we geeks don’t like throwing them out because they still work. You know who you are.
The 486 chip was the first processor from Intel that was designed to be
upgradeable. Previous processors were not designed this way, so when the
processor became obsolete, the entire motherboard needed to be replaced. With
the 486, the same CPU socket could accommodate several different flavors of
the 486. Initial 486 offerings supported upgrades via “OverDrive” technology, meaning you could insert a chip with a faster internal clock into the existing system. Not all 486 systems could use OverDrive, since it took a certain type of motherboard to support it.
The first member of the 486 family was the i486DX, but in 1991 Intel released the 486SX and the 486DX/50. The 486SX was basically the same chip as the DX, except that its math coprocessor was disabled (yes, it was there, just turned off). The 486SX was, of course, slower than its DX cousin, but the reduced cost and power consumption lent itself to faster sales and movement into the laptop market. The 486DX/50 was simply a 50MHz version of the original 486; it could not support future OverDrives, while the SX processor could.
In 1992, Intel released the next wave of 486s making use of OverDrive technology. The first models were the i486DX2/50 and i486DX2/66. The “2” in the names indicates that the processor’s normal clock speed is effectively doubled internally, so the 486DX2/50 is a 25MHz chip doubled to 50MHz. The slower base speed allowed the chip to work with existing motherboard designs while operating internally at the increased speed, thereby increasing performance.
Also in 1992, Intel put out the 486SL. It was virtually identical to vintage 486
processors, but it contained 1.4 million transistors. The extra innards were used
by its internal power management circuitry, optimizing it for mobile use. From
there, Intel released various 486 flavors, mixing SLs with SXs and DXs at a variety of clock speeds. By 1994, it was rounding out its development of the 486 family with the DX4 OverDrive processors. While you
might think these were 4X clock quadruplers, they were actually 3X triplers,
allowing a 33 MHz processor to operate internally at 100 MHz.
COMPUTER MOTHERBOARD:
Motherboards are present not only in computers but also in mobile phones, clocks, stopwatches, and other devices. They include many of a computer’s essential components, such as the microprocessor, main memory, and the microprocessor’s supporting chipset, which provides an interface between the CPU and other external components. This vital piece of technology revolutionized the way computer systems were later designed; the earlier versions weren’t as efficient or dependable. Today’s motherboards contain, more or less, the following parts:
During the late 1980s and 1990s, it became more economical to move an increasing number of peripheral functions onto the PCB. Hence, single integrated circuits (ICs) capable of supporting low-speed peripherals such as serial ports, mice, and keyboards were included on the motherboard. By the late 1990s, motherboards had become multifaceted platforms with integrated audio, video, storage, and networking functions. Support for 3D gaming and graphics cards on higher-end systems was added later.
Micronics, Mylex, AMI, DTK, Orchid Technology, and Elitegroup were among the early pioneers of motherboard manufacturing, a field that companies like Apple and IBM soon came to dominate. They offered top-grade, sophisticated boards with upgraded features and superior performance over the prevailing ones.
Since the motherboard’s inception, the technology has grown in leaps and bounds to meet modern needs with faster, lighter, higher-end capabilities. That continual reinvention has been made possible by the working geniuses behind each technological introduction. With every passing year, more innovations reach the market, each competing to outdo the last, and we will likely witness still more revolutionary changes in technology within this lifetime.
STORAGE DEVICES:
Herman Hollerith developed a punch card data processing system for the 1890 U.S. Census, and then started the Tabulating Machine Company in 1896. By 1950, punch cards had become an integral part of American industry and government.
The warning, “Do not fold, spindle, or mutilate,” originated from punch cards.
Punch cards were still being used quite regularly until the mid-1980s. (Punch
cards continue to be used in recording the results of standardized tests and
voting ballots.)
In the 1960s, “magnetic storage” gradually replaced punch cards as the primary
means for data storage. Magnetic tape was first patented in 1928 by Fritz Pfleumer. (Cassette tapes were often used for homemade “personal computers” in the 1970s and ’80s.) In 1965, Mohawk Data Sciences offered a magnetic tape
encoder, described as a punch card replacement. By 1990, the combination of
affordable personal computers and “magnetic disk storage” had made punch
cards nearly obsolete.
In the past, the terms “Data Storage” and “memory” were often used
interchangeably. However, at present, Data Storage is an umbrella phrase that
includes memory. Data Storage is often considered long term, while memory is
frequently described as short term.
VACUUM TUBES FOR RANDOM ACCESS MEMORY
In the late 1940s, magnetic core memory was developed and patented, and over the next ten years it became the primary way early computers wrote, read, and stored data. The system used a grid of current-carrying wires (address and sense wires), with doughnut-shaped magnets (called ferrite cores) circling the points where the wires intersected. Address lines polarized a ferrite core’s magnetic field one way or the other, creating a switch that represents a zero or a one (on/off). The arrangement of address and sense wires feeding through the ferrite cores allows each core to store one bit of data (on/off). Bits are then grouped into units, called words, which are accessed together at a single memory address.
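To make that bit-and-word arrangement concrete, here is a small, purely illustrative sketch; the 16-bit word size and the core states are made-up example values, not a description of any specific core-memory machine.

    #include <cstdio>
    #include <cstdint>

    int main() {
        // Illustrative only: 16 cores at the same address, each holding one bit.
        int cores[16] = {1,0,1,1, 0,0,1,0, 1,1,0,0, 0,1,0,1};  // made-up pattern

        // Reading the address gathers all 16 bits into a single 16-bit word.
        uint16_t word = 0;
        for (int i = 0; i < 16; i++)
            word |= (uint16_t)(cores[i] & 1) << i;

        printf("word at this address = 0x%04X\n", (unsigned)word);
        return 0;
    }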
In 1953, MIT purchased the patent, and developed the first computer to use this
technology, called the Whirlwind. Magnetic core memories, being faster and
more efficient than punch cards, became popular very quickly. However,
manufacturing them was difficult and time-consuming. It involved delicate work, with women with steady hands working under microscopes to tediously thread thin wires through very small holes.
SEMICONDUCTOR MEMORY
Magnetic drums were the first incarnation of magnetic disk storage. Gustav Tauschek, an Austrian inventor, developed the magnetic drum in 1932. The drum had a fixed read/write head for each track, staggered around its circumference. With no head movement to control, access time is quite short, being based on one revolution of the drum. If multiple heads are used, data can be transferred quickly, helping to compensate for the lack of RAM in these systems.
IBM is primarily responsible for driving the early evolution of magnetic disk
storage. It invented both the floppy disk drive and the hard disk drive, and its staff are credited with many of the improvements supporting those products.
IBM developed and manufactured disk storage devices from 1956 until 2003, when it sold its hard disk business to Hitachi.
IBM switched its focus to 8-inch floppy disks from 1969 until the mid-1980s. A
floppy disk is an easily removed (and easily installed) portable storage device. It
is made of magnetic film encased in a flexible plastic, and is inexpensive to
manufacture. IBM developed the 8-inch floppy specifically for the System/370
mainframe. On the downside, a floppy disk is very easy to damage.
The 3.5-inch floppy disk (introduced in 1982) gradually became more popular
than the 5.25-inch floppy disk. The 3.5 version came with a significant
advantage. It had a rigid cover protecting the magnetic film inside. However,
both formats remained quite popular until the mid-1990s. (Over time, several
size variations were introduced, but with very little marketing success.)
OPTICAL DISCS
In the 1960s, an inventor named James T. Russell thought about, and worked on, the idea of using light as a mechanism to record and then replay music. No one took his invention of the optical disc seriously until 1975, when Sony paid Russell millions of dollars to finish his project. This investment led to his completing the project in 1980, in turn leading to CDs (Compact Discs), DVDs (Digital Versatile Discs), and Blu-ray. (The word “disk” is used for magnetic recordings, while “disc” is used for optical recordings. IBM, which had no optical formats, preferred the “k” spelling, but in 1979 Sony and a Dutch company named Philips preferred the “c” spelling in developing and trademarking the compact disc.)
MAGNETO-OPTICAL DISCS
FLASH DRIVES
Flash drives appeared on the market late in the year 2000. A flash drive plugs
into computers with a built-in USB plug, making it a small, easily removable,
very portable storage device. Unlike a traditional hard drive, or an optical drive,
it has no moving parts, but instead combines chips and transistors for maximum
functionality. Generally, a flash drive’s storage capacity ranges from 8 to 64 GB.
(Other sizes are available, but can be difficult to find.)
Flash drives are sometimes called pen drives, USB drives, thumb drives, or
jump drives. Solid State Drives (SSDs) are sometimes referred to as flash drives, but they are larger and clumsier to transport.
Variations of Solid State Drives have been used since the 1950s. An SSD is a
nonvolatile storage device that basically does everything a hard drive will do. It
stores data on interlinked flash memory chips. The memory chips can either be part of the system’s motherboard or a separate unit designed to plug into a laptop or desktop in place of a hard drive. The flash memory chips are different from those used in USB thumb drives, making them faster and more reliable.
As a result, an SSD is more expensive than a USB thumb drive of the same
capacity.
DATA SILOS
Data Silos are a data storage system of sorts. A Data Silo stores data for a business, or a department of the business, in a form incompatible with its other systems but deemed important enough to save for later translation. For many businesses, this amounted to a huge amount of information. Data Silos eventually
became useful as a source of information for Big Data and came to be used
deliberately for that purpose. Then came Data Lakes.
DATA LAKES
Data Lakes were formed specifically to store and process Big Data, with
multiple organizations pooling huge amounts of information into a single Data
Lake. A Data Lake stores data in its original format and is typically processed
by a NoSQL database (a Data Warehouse uses a hierarchical database). NoSQL
processes the data in all its various forms and allows for the processing of raw data. Most of this information can be accessed by its users via the internet.
The Internet made the Cloud available as a service. Improvements within the
Internet, such as continuously lowering the cost of storage capacity and
improved bandwidth, have made it more economical for individuals and
businesses to use the Cloud for data storage. The Cloud offers its users an essentially infinite amount of data storage, with near-infinite scalability and access to data from anywhere, at any time. It is often used to back up information initially stored on site, making that information available should the company’s own system suffer a failure. Cloud security is a significant concern
among users, and service providers have built security systems, such as
encryption and authentication, into the services they provide.
COMPUTER MEMORY:
Early computers had a completely different concept of memory from the one we use today. Most people who have studied computer science would know that early machines employed an electrical device called the vacuum tube, something similar to what we have in CRT monitors and televisions. Then came the era of the transistor, invented at Bell Labs.
The transistor became the core component of modern-day memory, which started off with simple latches, a circuit configuration of transistors that can store one bit of data. Latches evolved into flip-flops, which could be packed together to form registers and are used in most static memory cells today. Another approach paired a transistor with a capacitor, which allowed smaller and more compact dynamic memory.
BASIC TYPES OF MEMORY: SRAM AND DRAM
Memory can easily be classified into two major categories: static RAM (SRAM) and dynamic RAM (DRAM). As mentioned above, static RAM uses a special arrangement of transistors to make a flip-flop, a type of memory cell that stores one bit of data. Most modern SRAM cells are made of six CMOS transistors and are the fastest type of memory on the planet.
While DRAM has an obvious density advantage over SRAM, its speed can’t get close to that of static memory cells, which don’t need to be refreshed and are always available. That’s why the fastest memory, such as the registers in a CPU and the caches used in numerous devices, is always made of SRAM cells. But because of its much larger space requirements, SRAM is expensive and can’t be used as main memory.
DRAM, on the other hand, is quite dense, and is therefore employed in most places that need large capacity rather than instantaneous access, such as a computer’s main memory.
Common SDR memory standards included PC-100 and PC-133, which ran at clock speeds of 100MHz and 133MHz respectively.
For marketing purposes, DDR memory has always been promoted at speeds twice the actual clock value, since DDR transfers data on both the rising and falling edges of the clock. For example, common DDR standards included DDR-200, DDR-266, DDR-333, and DDR-400, which actually had respective clock speeds of 100MHz, 133MHz, 166MHz, and 200MHz.
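As a quick illustration of that arithmetic, the short sketch below works through DDR-400, assuming a standard 64-bit (8-byte) module width as the example figure.

    #include <cstdio>

    int main() {
        // Example figures: DDR-400 on an assumed 64-bit (8-byte wide) module.
        // DDR transfers data on both clock edges, so the effective rate is
        // twice the actual bus clock.
        const double busClockMHz     = 200.0; // actual clock of DDR-400
        const double transfersPerClk = 2.0;   // rising edge + falling edge
        const double busWidthBytes   = 8.0;   // 64-bit data bus

        double effectiveMTps = busClockMHz * transfersPerClk; // 400 MT/s
        double peakMBps      = effectiveMTps * busWidthBytes; // 3200 MB/s (PC-3200)

        printf("DDR-400: %.0f MT/s effective, %.0f MB/s peak\n",
               effectiveMTps, peakMBps);
        return 0;
    }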
DDR2 SDRAM
The DDR standard gained a huge following and was subsequently
improved to address high-performance memory needs. Revisions to memory bandwidth, clock rates, and voltages resulted in notable gains in overall system performance. DDR2 became the standard for most chipsets supporting the Pentium 4 Prescott and later processors, including Intel Core and the AMD Athlon 64.
DDR3 SDRAM
The DDR3 specifications were finalized in 2007, and primarily increased the
clock rates possible while reducing the voltages. Unfortunately, however, latencies also increased significantly, so there were only 2-5% performance gains in real-world applications compared to DDR2 (on architectures that support both standards). Still, DDR3 is the logical next step, because the latest AMD and Intel platforms (790/AM3 and X58/P55) support only DDR3 memory.