

Algorithms and Data Structures
Part 1: Introduction (Wikipedia Book 2014)

By Wikipedians

Editors: Reiner Creutzburg, Jenny Knackmuß

PDF generated using the open source mwlib toolkit. See http://code.pediapress.com/ for more information.
PDF generated at: Sun, 22 Dec 2013 11:59:19 UTC
Contents

Articles
Computer
Informatics (academic field)
Programming language
Algorithm
Deterministic algorithm
Data structure
List (abstract data type)
Array data structure
FIFO
Queue (abstract data type)
LIFO
Stack (abstract data type)
Computer program

References
Article Sources and Contributors
Image Sources, Licenses and Contributors

Article Licenses
License

Computer

A computer is a general purpose device that can be programmed to carry out a set of arithmetic or logical
operations. Since a sequence of operations can be readily changed, the computer can solve more than one kind of
problem.
Conventionally, a computer consists of at least one processing element, typically a central processing unit (CPU), and
some form of memory. The processing element carries out arithmetic and logic operations, and a sequencing and
control unit can change the order of operations in response to stored information. Peripheral devices allow
information to be retrieved from an external source, and the result of operations to be saved and retrieved.
In World War II, mechanical analog computers were used for specialized military applications. During this time the
first electronic digital computers were developed. Originally they were the size of a large room, consuming as much
power as several hundred modern personal computers (PCs).[1]
Modern computers based on integrated circuits are millions to billions of times more capable than the early
machines, and occupy a fraction of the space.[2] Simple computers are small enough to fit into mobile devices, and
mobile computers can be powered by small batteries. Personal computers in their various forms are icons of the
Information Age and are what most people think of as “computers.” However, the embedded computers found in
many devices from MP3 players to fighter aircraft and from toys to industrial robots are the most numerous.

History of computing

Etymology
The first recorded use of the word "computer" was in 1613 in a book called "The
yong mans gleanings" by English writer Richard Braithwait: "I haue read the
truest computer of Times, and the best Arithmetician that euer breathed, and he
reduceth thy dayes into a short number." It referred to a person who carried out
calculations, or computations, and the word continued with the same meaning
until the middle of the 20th century. From the end of the 19th century the word
began to take on its more familiar meaning, a machine that carries out
computations.

Mechanical aids to computing


[Figure: The Jacquard loom, on display at the Museum of Science and Industry in Manchester, England, was one of the first programmable devices.]

The history of the modern computer begins with two separate technologies, automated calculation and programmability. However, no single device can be identified as the earliest computer, partly because of the inconsistent application of that term.[3] A few precursors are nonetheless worth mentioning. Some mechanical aids to computing were very successful and survived for centuries until the advent of the electronic calculator: the Sumerian abacus, designed around 2500 BC,[4] a descendant of which won a speed competition against a contemporary desk calculating machine in Japan in 1946,[5] and the slide rules, invented in the 1620s, which were carried on five Apollo space missions, including to the moon;[6] arguably the astrolabe and the Antikythera mechanism, an ancient astronomical analog computer built by the Greeks around 80 BC, belong here as well. The Greek mathematician Hero of Alexandria (c. 10–70 AD) built a mechanical theater which performed a play lasting 10 minutes and was operated by a complex system of ropes and drums that might be considered a means of deciding which parts of the mechanism performed which actions and when. This is the essence of programmability.

Mechanical calculators and programmable looms


Blaise Pascal invented the mechanical calculator in 1642,[9] known as Pascal's
calculator. It was the first machine to better human performance of arithmetical
computations[10] and would turn out to be the only functional mechanical
calculator in the 17th century.[11] Two hundred years later, in 1851, Thomas de
Colmar released, after thirty years of development, his simplified arithmometer;
it became the first machine to be commercialized because it was strong enough
and reliable enough to be used daily in an office environment. The mechanical
calculator was at the root of the development of computers in two separate ways.
Initially, it was in trying to develop more powerful and more flexible
calculators[12] that the computer was first theorized by Charles Babbage[13][14]
and then developed.[15] Secondly, development of a low-cost electronic
calculator, successor to the mechanical calculator, resulted in the development by
Intel of the first commercially available microprocessor integrated circuit.
In 1801, Joseph Marie Jacquard made an improvement to the textile loom by introducing a series of punched paper cards as a template which allowed his loom to weave intricate patterns automatically. The resulting Jacquard loom was an important step in the development of computers because the use of punched cards to define woven patterns can be viewed as an early, albeit limited, form of programmability.

[Figure: "The Most Famous Image in the Early History of Computing".[7] This portrait of Jacquard was woven in silk on a Jacquard loom and required 24,000 punched cards to create (1839). It was only produced to order. Charles Babbage started exhibiting this portrait in 1840 to explain how his analytical engine would work.[8]]

First use of punched paper cards in computing

It was the fusion of automatic calculation with programmability that produced the first recognizable computers. In 1837, Charles Babbage, "the actual father of the computer",[16] was the first to conceptualize and design a fully programmable mechanical calculator,[17] his analytical engine.[18] Babbage started in 1834. Initially he planned to program his analytical engine with drums similar to those used in Vaucanson's automata, which by design were limited in size, but he soon replaced them with Jacquard's card readers, one for data and one for the program.

"The introduction of punched cards into the new engine was important not only
as a more convenient form of control than the drums, or because programs could now be of unlimited extent, and
could be stored and repeated without the danger of introducing errors in setting the machine by hand; it was
important also because it served to crystallize Babbage's feeling that he had invented something really new,
something much more than a sophisticated calculating machine."[19]

Now it is obvious that no finite machine can include infinity...It is impossible to construct machinery
occupying unlimited space; but it is possible to construct finite machinery, and to use it through unlimited
time. It is this substitution of the infinity of time for the infinity of space which I have made use of, to limit the
size of the engine and yet to retain its unlimited power.

—Charles Babbage, Passages from the Life of a Philosopher, Chapter VIII: On the Analytical Engine
After this breakthrough, he redesigned his difference engine (No. 2, still not programmable) incorporating his new
ideas. Allan Bromley came to the Science Museum in London starting in 1979 to study Babbage's engines and
determined that difference engine No. 2 was the only engine that had a complete enough set of drawings to be built,
and he convinced the museum to do it. This engine, finished in 1991, proved without doubt the validity of Charles
Babbage's work.[20] Except for a pause between 1848 and 1857, Babbage would spend the rest of his life simplifying
each part of his engine: "Gradually he developed plans for Engines of great logical power and elegant simplicity
(although the term 'simple' is used here in a purely relative sense)."[21]
Between 1842 and 1843, Ada Lovelace, an analyst of Charles Babbage's
analytical engine, translated an article by Italian military engineer Luigi
Menabrea on the engine, which she supplemented with an elaborate set of notes
of her own. These notes contained what is considered the first computer program
– that is, an algorithm encoded for processing by a machine. She also stated: “We
may say most aptly, that the Analytical Engine weaves algebraical patterns just
as the Jacquard-loom weaves flowers and leaves.”; furthermore she developed a
vision on the capability of computers to go beyond mere calculating or
number-crunching[22] claiming that: should “...the fundamental relations of
pitched sounds in the science of harmony and of musical composition...” be
susceptible “...of adaptations to the action of the operating notation and
mechanism of the engine...” it “...might compose elaborate and scientific pieces
of music of any degree of complexity or extent".

[Figure: Ada Lovelace, considered to be the first computer programmer]
In the late 1880s, Herman Hollerith invented the recording of data on a
machine-readable medium. Earlier uses of machine-readable media had been for
control, not data. “After some initial trials with paper tape, he settled on punched cards...” To process these punched
cards he invented the tabulator, and the keypunch machines. These three inventions were the foundation of the
modern information processing industry. Large-scale automated data processing of punched cards was performed for
the 1890 United States Census by Hollerith's company, which later became the core of IBM. By the end of the 19th
century a number of ideas and technologies that would later prove useful in the realization of practical computers
had begun to appear: Boolean algebra, the vacuum tube (thermionic valve), punched cards and tape, and the
teleprinter.

Babbage's dream comes true


In 1888, Henry Babbage, Charles Babbage's son, completed a simplified version of the analytical engine's computing
unit (the mill). He gave a successful demonstration of its use in 1906, calculating and printing the first 40 multiples
of pi with a precision of 29 decimal places.[23] This machine was given to the Science Museum in South Kensington
in 1910. He also gave a demonstration piece of one of his father's engines to Harvard University which convinced
Howard Aiken, 50 years later, to incorporate the architecture of the analytical engine in what would become the
ASCC/Mark I built by IBM.[24]
Leonardo Torres y Quevedo built two analytical machines to prove that all of the functions of Babbage's analytical
engine could be replaced with electromechanical devices. The first one, built in 1914, had a little electromechanical
memory and the second one, built in 1920 to celebrate the one hundredth anniversary of the invention of the
arithmometer, received its commands and printed its results on a typewriter.[25] Torres y Quevedo published
functional schematics of all of these functions: addition, multiplication, division ... and even a decimal comparator,
in his "Essais sur l'automatique" in 1915.

Some inventors like Percy Ludgate, Vannevar Bush and Louis Couffignal[26] tried to improve on the analytical
engine but didn't succeed at building a machine.
Howard Aiken wanted to build a giant calculator and was looking for a sponsor to build it. He first presented his
design to the Monroe Calculator Company and then to Harvard University, both without success. Carmello Lanza, a
technician in Harvard's physics laboratory who had heard Aiken's presentation "...couldn't see why in the world I
(Howard Aiken) wanted to do anything like this in the Physics laboratory, because we already had such a machine
and nobody used it... Lanza led him up into the attic... There, sure enough... were the wheels that Aiken later put on
display in the lobby of the Computer Laboratory. With them was a letter from Henry Prevost Babbage describing
these wheels as part of his father's proposed calculating engine. This was the first time Aiken ever heard of Babbage
he said, and it was this experience that led him to look up Babbage in the library and to come across his
autobiography" which gave a description of his analytical engine.
Aiken first contacted IBM in November 1937,[27] presenting a machine which, by then, had an architecture based on
Babbage's analytical engine. This was the first development of a programmable calculator that would succeed and
that would end up being used for many years to come: the ASCC/Mark I.
Zuse first heard of Aiken and IBM's work from the German Secret Service.[28] He considered his Z3 to be a Babbage
type machine.[29]

First general-purpose computers


During the first half of the 20th century, many scientific computing
needs were met by increasingly sophisticated analog computers, which
used a direct mechanical or electrical model of the problem as a basis
for computation. However, these were not programmable and generally
lacked the versatility and accuracy of modern digital computers.
Alan Turing is widely regarded as the father of modern computer
science. In 1936, Turing provided an influential formalization of the
concept of the algorithm and computation with the Turing machine,
providing a blueprint for the electronic digital computer. Of his role in the creation of the modern computer, Time magazine, in naming Turing one of the 100 most influential people of the 20th century, states: "The fact remains that everyone who taps at a keyboard, opening a spreadsheet or a word-processing program, is working on an incarnation of a Turing machine."

[Figure: The Zuse Z3, 1941, considered the world's first working programmable, fully automatic computing machine]

The first really functional computer was the Z1, originally created by
Germany's Konrad Zuse in his parents' living room from 1936 to 1938,
and it is considered to be the first electro-mechanical binary
programmable (modern) computer.
George Stibitz is internationally recognized as a father of the modern
digital computer. While working at Bell Labs in November 1937,
Stibitz invented and built a relay-based calculator he dubbed the
“Model K” (for “kitchen table,” on which he had assembled it), which
was the first to use binary circuits to perform an arithmetic operation. Later models added greater sophistication, including complex arithmetic and programmability.

[Figure: The ENIAC, which became operational in 1946, is considered to be the first general-purpose electronic computer. Programmers Betty Jean Jennings (left) and Fran Bilas (right) are depicted here operating the ENIAC's main control panel.]

The Atanasoff–Berry Computer (ABC) was the world's first electronic digital
computer, albeit not programmable. Atanasoff is considered to be one of the
fathers of the computer. Conceived in 1937 by Iowa State College physics
professor John Atanasoff, and built with the assistance of graduate student
Clifford Berry, the machine was not programmable, being designed only to solve
systems of linear equations. The computer did employ parallel computation. A
1973 court ruling in a patent dispute found that the patent for the 1946 ENIAC
computer derived from the Atanasoff–Berry Computer.

The first program-controlled computer was invented by Konrad Zuse, who built
the Z3, an electromechanical computing machine, in 1941. The first
programmable electronic computer was the Colossus, built in 1943 by Tommy Flowers.

[Figure: EDSAC was one of the first computers to implement the stored-program (von Neumann) architecture.]
Key steps towards modern computers

A succession of steadily more powerful and flexible computing devices were constructed in the 1930s and 1940s,
gradually adding the key features that are seen in modern computers. The use of digital electronics (largely invented
by Claude Shannon in 1937) and more flexible programmability were vitally important steps, but defining one point
along this road as "the first digital electronic computer" is difficult (Shannon 1940). Notable achievements include:
• Konrad Zuse's electromechanical “Z machines.” The Z3 (1941) was the first working machine featuring binary
arithmetic, including floating point arithmetic and a measure of programmability. In 1998 the Z3 was proved to
be Turing complete, therefore being the world's first operational computer. Thus, Zuse is often regarded as the
inventor of the computer.[30][31][32][33]
• The non-programmable Atanasoff–Berry Computer (commenced in 1937, completed in 1941) which used
vacuum tube based computation, binary numbers, and regenerative capacitor memory. The use of regenerative
memory allowed it to be much more compact than its peers (being approximately the size of a large desk or
workbench), since intermediate results could be stored and then fed back into the same set of computation
elements.
• The secret British Colossus computers (1943),[34] which had limited programmability but demonstrated that a
device using thousands of tubes could be reasonably reliable and electronically re-programmable. It was used for
breaking German wartime codes.
• The Harvard Mark I (1944), a large-scale electromechanical computer with limited programmability.
• The U.S. Army's Ballistic Research Laboratory ENIAC (1946), which used decimal arithmetic and is sometimes
called the first general purpose electronic computer (since Konrad Zuse's Z3 of 1941 used electromagnets instead
of electronics). Initially, however, ENIAC had an architecture which required rewiring a plugboard to change its
programming.
• The Ferranti Mark 1 was the world's first commercially available general-purpose computer.

Stored-program architecture
Several developers of ENIAC, recognizing its flaws, came up with a far more flexible and elegant design, which
came to be known as the “stored-program architecture” or von Neumann architecture. This design was first formally
described by John von Neumann in the paper First Draft of a Report on the EDVAC, distributed in 1945. A number
of projects to develop computers based on the stored-program architecture commenced around this time, the first of
which was completed in 1948 at the University of Manchester in England, the Manchester Small-Scale Experimental
Machine (SSEM or “Baby”). The Electronic Delay Storage Automatic Calculator (EDSAC), completed a year after
the SSEM at Cambridge University, was the first practical, non-experimental implementation of the stored-program
design and was put to use immediately for research work at the university. Shortly thereafter, the machine originally
described by von Neumann's paper—EDVAC—was completed but did not see full-time use for an additional two
years.
Nearly all modern computers implement some form of the stored-program architecture, making it the single trait by
which the word “computer” is now defined. While the technologies used in computers have changed dramatically
since the first electronic, general-purpose computers of the 1940s, most still use the von Neumann architecture.
Beginning in the 1950s, Soviet scientists Sergei Sobolev and Nikolay
Brusentsov conducted research on ternary computers, devices that
operated on a base three numbering system of -1, 0, and 1 rather than
the conventional binary numbering system upon which most computers
are based. They designed the Setun, a functional ternary computer, at
Moscow State University. The device was put into limited production
in the Soviet Union, but it was supplanted by the more common binary
architecture.

Semiconductors and microprocessors

[Figure: Die of an Intel 80486DX2 microprocessor (actual size: 12×6.75 mm) in its packaging]

Computers using vacuum tubes as their electronic elements were in use
throughout the 1950s, but by the 1960s they had been largely replaced by transistor-based machines, which were
smaller, faster, cheaper to produce, required less power, and were more reliable. The first transistorized computer
was demonstrated at the University of Manchester in 1953. In the 1970s, integrated circuit technology and the
subsequent creation of microprocessors, such as the Intel 4004, further decreased size and cost and further increased
speed and reliability of computers. By the late 1970s, many products such as video recorders contained dedicated
computers called microcontrollers, and they started to appear as a replacement to mechanical controls in domestic
appliances such as washing machines. The 1980s witnessed home computers and the now ubiquitous personal
computer. With the evolution of the Internet, personal computers are becoming as common as the television and the
telephone in the household.[citation needed]

Modern smartphones are fully programmable computers in their own right, and as of 2009 may well be the most
common form of such computers in existence.[citation needed]

Programs
The defining feature of modern computers which distinguishes them from all other machines is that they can be
programmed. That is to say that some type of instructions (the program) can be given to the computer, and it will
process them. Modern computers based on the von Neumann architecture often have machine code in the form of an
imperative programming language.
In practical terms, a computer program may be just a few instructions or extend to many millions of instructions, as
do the programs for word processors and web browsers for example. A typical modern computer can execute billions
of instructions per second (gigaflops) and rarely makes a mistake over many years of operation. Large computer
programs consisting of several million instructions may take teams of programmers years to write, and due to the
complexity of the task almost certainly contain errors.

Stored program architecture


This section applies to most common RAM machine-based computers.
In most cases, computer instructions are simple: add one number to
another, move some data from one location to another, send a message
to some external device, etc. These instructions are read from the
computer's memory and are generally carried out (executed) in the
order they were given. However, there are usually specialized
instructions to tell the computer to jump ahead or backwards to some
other place in the program and to carry on executing from there. These
are called "jump" instructions (or branches). Furthermore, jump instructions may be made to happen conditionally so that different sequences of instructions may be used depending on the result of some previous calculation or some external event. Many computers directly support subroutines by providing a type of jump that "remembers" the location it jumped from and another instruction to return to the instruction following that jump instruction.

[Figure: Replica of the Small-Scale Experimental Machine (SSEM), the world's first stored-program computer, at the Museum of Science and Industry in Manchester, England]

Program execution might be likened to reading a book. While a person will normally read each word and line in
sequence, they may at times jump back to an earlier place in the text or skip sections that are not of interest.
Similarly, a computer may sometimes go back and repeat the instructions in some section of the program over and
over again until some internal condition is met. This is called the flow of control within the program and it is what
allows the computer to perform tasks repeatedly without human intervention.
Comparatively, a person using a pocket calculator can perform a basic arithmetic operation such as adding two
numbers with just a few button presses. But to add together all of the numbers from 1 to 1,000 would take thousands
of button presses and a lot of time, with a near certainty of making a mistake. On the other hand, a computer may be
programmed to do this with just a few simple instructions. For example:

      mov #0, sum     ; set sum to 0
      mov #1, num     ; set num to 1
loop: add num, sum    ; add num to sum
      add #1, num     ; add 1 to num
      cmp num, #1000  ; compare num to 1000
      ble loop        ; if num <= 1000, go back to 'loop'
      halt            ; end of program. stop running

Once told to run this program, the computer will perform the repetitive addition task without further human
intervention. It will almost never make a mistake and a modern PC can complete the task in about a millionth of a
second.[35]
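
For comparison, the same computation can be expressed in a high-level language. The short C program below is an illustrative sketch only (the variable names simply mirror the assembly example above); a compiler would translate it into machine instructions much like those shown:

#include <stdio.h>

int main(void) {
    int sum = 0;                          /* set sum to 0 */
    for (int num = 1; num <= 1000; num++) {
        sum = sum + num;                  /* add num to sum */
    }
    printf("%d\n", sum);                  /* prints 500500 */
    return 0;
}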

Bugs
Errors in computer programs are called “bugs.” They may be benign
and not affect the usefulness of the program, or have only subtle
effects. But in some cases, they may cause the program or the entire
system to “hang,” becoming unresponsive to input such as mouse clicks
or keystrokes, to completely fail, or to crash. Otherwise benign bugs
may sometimes be harnessed for malicious intent by an unscrupulous
user writing an exploit, code designed to take advantage of a bug and
disrupt a computer's proper execution. Bugs are usually not the fault of
the computer. Since computers merely execute the instructions they are
given, bugs are nearly always the result of programmer error or an oversight made in the program's design.[36]

[Figure: The actual first computer bug, a moth found trapped on a relay of the Harvard Mark II computer]
Admiral Grace Hopper, an American computer scientist and developer
of the first compiler, is credited for having first used the term “bugs” in computing after a dead moth was found
shorting a relay in the Harvard Mark II computer in September 1947.

Machine code
In most computers, individual instructions are stored as machine code with each instruction being given a unique
number (its operation code or opcode for short). The command to add two numbers together would have one opcode;
the command to multiply them would have a different opcode, and so on. The simplest computers are able to
perform any of a handful of different instructions; the more complex computers have several hundred to choose
from, each with a unique numerical code. Since the computer's memory is able to store numbers, it can also store the
instruction codes. This leads to the important fact that entire programs (which are just lists of these instructions) can
be represented as lists of numbers and can themselves be manipulated inside the computer in the same way as
numeric data. The fundamental concept of storing programs in the computer's memory alongside the data they
operate on is the crux of the von Neumann, or stored program, architecture. In some cases, a computer might store
some or all of its program in memory that is kept separate from the data it operates on. This is called the Harvard
architecture after the Harvard Mark I computer. Modern von Neumann computers display some traits of the Harvard
architecture in their designs, such as in CPU caches.
While it is possible to write computer programs as long lists of numbers (machine language) and while this
technique was used with many early computers,[37] it is extremely tedious and potentially error-prone to do so in
practice, especially for complicated programs. Instead, each basic instruction can be given a short name that is
indicative of its function and easy to remember – a mnemonic such as ADD, SUB, MULT or JUMP. These
mnemonics are collectively known as a computer's assembly language. Converting programs written in assembly
language into something the computer can actually understand (machine language) is usually done by a computer
program called an assembler.
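
To make concrete the idea that a program is just a list of numbers, the following C sketch defines a toy instruction set and prints a mnemonic for each opcode, the way a simple disassembler would. The opcodes and mnemonics are invented for this example and belong to no real CPU:

#include <stdio.h>

/* Toy opcodes, invented purely for illustration. */
enum { OP_HALT = 0, OP_ADD = 1, OP_SUB = 2, OP_MUL = 3, OP_JUMP = 4 };

int main(void) {
    /* A "program" is nothing more than a list of numbers held in memory. */
    unsigned char program[] = { OP_ADD, OP_MUL, OP_JUMP, OP_HALT };
    const char *mnemonics[] = { "HALT", "ADD", "SUB", "MUL", "JUMP" };

    /* Walk the list and print the mnemonic corresponding to each opcode. */
    for (unsigned i = 0; i < sizeof program; i++)
        printf("%02X  %s\n", program[i], mnemonics[program[i]]);
    return 0;
}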

Programming language
Programming languages provide various ways of specifying programs
for computers to run. Unlike natural languages, programming
languages are designed to permit no ambiguity and to be concise. They
are purely written languages and are often difficult to read aloud. They
are generally either translated into machine code by a compiler or an assembler before being run, or translated directly at run time by an interpreter. Sometimes programs are executed by a hybrid method of the two techniques.

[Figure: A 1970s punched card containing one line from a FORTRAN program. The card reads: "Z(1) = Y + W(1)" and is labeled "PROJ039" for identification purposes.]

Low-level languages

Machine languages and the assembly languages that represent them (collectively termed low-level programming
languages) tend to be unique to a particular type of computer. For instance, an ARM architecture computer (such as
may be found in a PDA or a hand-held videogame) cannot understand the machine language of an Intel Pentium or
the AMD Athlon 64 computer that might be in a PC.[38]

Higher-level languages
Though considerably easier than in machine language, writing long programs in assembly language is often difficult
and is also error prone. Therefore, most practical programs are written in more abstract high-level programming
languages that are able to express the needs of the programmer more conveniently (and thereby help reduce
programmer error). High level languages are usually “compiled” into machine language (or sometimes into assembly
language and then into machine language) using another computer program called a compiler.[39] High level
languages are less related to the workings of the target computer than assembly language, and more related to the
language and structure of the problem(s) to be solved by the final program. It is therefore often possible to use
different compilers to translate the same high level language program into the machine language of many different
types of computer. This is part of the means by which software like video games may be made available for different
computer architectures such as personal computers and various video game consoles.

Program design
Program design of small programs is relatively simple and involves the analysis of the problem, collection of inputs,
using the programming constructs within languages, devising or using established procedures and algorithms,
providing data for output devices and solutions to the problem as applicable. As problems become larger and more
complex, features such as subprograms, modules, formal documentation, and new paradigms such as object-oriented
programming are encountered. Large programs involving thousands of lines of code and more require formal
software methodologies. The task of developing large software systems presents a significant intellectual challenge.
Producing software with an acceptably high reliability within a predictable schedule and budget has historically been
difficult; the academic and professional discipline of software engineering concentrates specifically on this
challenge.

Components
A general purpose computer has four main components: the arithmetic
logic unit (ALU), the control unit, the memory, and the input and
output devices (collectively termed I/O). These parts are
interconnected by buses, often made of groups of wires.
Inside each of these parts are thousands to trillions of small electrical
circuits which can be turned off or on by means of an electronic
switch. Each circuit represents a bit (binary digit) of information so
that when the circuit is on it represents a “1”, and when off it represents
a "0" (in positive logic representation). The circuits are arranged in logic gates so that one or more of the circuits may control the state of one or more of the other circuits.

[Figure: Video demonstrating the standard components of a "slimline" computer]

The control unit, ALU, registers, and basic I/O (and often other hardware closely linked with these) are collectively
known as a central processing unit (CPU). Early CPUs were composed of many separate components but since the
mid-1970s CPUs have typically been constructed on a single integrated circuit called a microprocessor.

Control unit
The control unit (often called a control system or central controller)
manages the computer's various components; it reads and interprets
(decodes) the program instructions, transforming them into a series of
control signals which activate other parts of the computer.[40] Control
systems in advanced computers may change the order of some
instructions so as to improve performance.

[Figure: Diagram showing how a particular MIPS architecture instruction would be decoded by the control system]
A key component common to all CPUs is the program counter, a
special memory cell (a register) that keeps track of which location in
memory the next instruction is to be read from.[41]
The control system's function is as follows—note that this is a simplified description, and some of these steps may be
performed concurrently or in a different order depending on the type of CPU:
1. Read the code for the next instruction from the cell indicated by the program counter.
2. Decode the numerical code for the instruction into a set of commands or signals for each of the other systems.
3. Increment the program counter so it points to the next instruction.
4. Read whatever data the instruction requires from cells in memory (or perhaps from an input device). The location
of this required data is typically stored within the instruction code.
5. Provide the necessary data to an ALU or register.
6. If the instruction requires an ALU or specialized hardware to complete, instruct the hardware to perform the
requested operation.
7. Write the result from the ALU back to a memory location or to a register or perhaps an output device.
8. Jump back to step (1).
Since the program counter is (conceptually) just another set of memory cells, it can be changed by calculations done
in the ALU. Adding 100 to the program counter would cause the next instruction to be read from a place 100
locations further down the program. Instructions that modify the program counter are often known as “jumps” and
allow for loops (instructions that are repeated by the computer) and often conditional instruction execution (both
examples of control flow).
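
The cycle above can also be sketched in software. The toy simulator below is purely illustrative: its opcodes, two-cell instruction format, and memory layout are invented for the example and correspond to no real CPU, but program and data share one memory, as in the von Neumann design:

#include <stdio.h>

/* Invented opcodes for a toy accumulator machine. */
enum { HALT = 0, LOAD = 1, ADD = 2, JUMP = 3 };

int main(void) {
    /* Each instruction occupies two cells: an opcode and an operand (an address). */
    int memory[16] = {
        LOAD, 10,     /* cell 0: acc = memory[10] */
        ADD,  11,     /* cell 2: acc += memory[11] */
        HALT, 0,      /* cell 4: stop */
        0, 0, 0, 0,
        7, 35         /* cells 10, 11: data */
    };
    int pc = 0;       /* program counter */
    int acc = 0;      /* accumulator register */

    for (;;) {
        int opcode  = memory[pc];      /* 1. fetch the instruction's opcode  */
        int operand = memory[pc + 1];  /*    ... and its operand field       */
        pc += 2;                       /* 3. increment the program counter   */
        switch (opcode) {              /* 2. decode; 4-7. execute            */
            case LOAD: acc  = memory[operand]; break;
            case ADD:  acc += memory[operand]; break;
            case JUMP: pc   = operand;         break;  /* a "jump" rewrites pc */
            case HALT: printf("acc = %d\n", acc); return 0;  /* prints acc = 42 */
        }
    }
}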

The sequence of operations that the control unit goes through to process an instruction is in itself like a short
computer program, and indeed, in some more complex CPU designs, there is another yet smaller computer called a
microsequencer, which runs a microcode program that causes all of these events to happen.

Arithmetic logic unit (ALU)


The ALU is capable of performing two classes of operations: arithmetic and logic.
The set of arithmetic operations that a particular ALU supports may be limited to addition and subtraction, or might
include multiplication, division, trigonometry functions such as sine, cosine, etc., and square roots. Some can only
operate on whole numbers (integers) whilst others use floating point to represent real numbers, albeit with limited
precision. However, any computer that is capable of performing just the simplest operations can be programmed to
break down the more complex operations into simple steps that it can perform. Therefore, any computer can be
programmed to perform any arithmetic operation—although it will take more time to do so if its ALU does not
directly support the operation. An ALU may also compare numbers and return boolean truth values (true or false)
depending on whether one is equal to, greater than or less than the other (“is 64 greater than 65?”).
Logic operations involve Boolean logic: AND, OR, XOR and NOT. These can be useful for creating complicated
conditional statements and processing boolean logic.
Superscalar computers may contain multiple ALUs, allowing them to process several instructions simultaneously.
Graphics processors and computers with SIMD and MIMD features often contain ALUs that can perform arithmetic
on vectors and matrices.
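
As a software illustration of the two classes of ALU operations, plus a comparison that yields a Boolean truth value, the small C sketch below mimics what an ALU computes; it is not a model of any particular ALU's circuitry:

#include <stdbool.h>
#include <stdio.h>

/* Illustrative ALU-style operations on integers. */
int  alu_add(int a, int b) { return a + b; }   /* arithmetic: addition */
int  alu_and(int a, int b) { return a & b; }   /* logic: AND */
int  alu_xor(int a, int b) { return a ^ b; }   /* logic: XOR */
bool alu_gt (int a, int b) { return a > b; }   /* comparison: greater than */

int main(void) {
    printf("%d\n", alu_add(2, 3));                      /* 5 */
    printf("%d\n", alu_and(0xC, 0xA));                  /* 8 */
    printf("%s\n", alu_gt(64, 65) ? "true" : "false");  /* "is 64 greater than 65?" -> false */
    return 0;
}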

Memory
A computer's memory can be viewed as a list of cells into which
numbers can be placed or read. Each cell has a numbered “address” and
can store a single number. The computer can be instructed to “put the
number 123 into the cell numbered 1357” or to “add the number that is
in cell 1357 to the number that is in cell 2468 and put the answer into
cell 1595.” The information stored in memory may represent
practically anything. Letters, numbers, even computer instructions can
be placed into memory with equal ease. Since the CPU does not
differentiate between different types of information, it is the software's
responsibility to give significance to what the memory sees as nothing but a series of numbers.

[Figure: Magnetic core memory was the computer memory of choice throughout the 1960s, until it was replaced by semiconductor memory.]

In almost all modern computers, each memory cell is set up to store


binary numbers in groups of eight bits (called a byte). Each byte is able to represent 256 different numbers (2^8 =
256); either from 0 to 255 or −128 to +127. To store larger numbers, several consecutive bytes may be used
(typically, two, four or eight). When negative numbers are required, they are usually stored in two's complement
notation. Other arrangements are possible, but are usually not seen outside of specialized applications or historical
contexts. A computer can store any kind of information in memory if it can be represented numerically. Modern
computers have billions or even trillions of bytes of memory.
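
These figures can be checked with a few lines of code. The C sketch below is illustrative only; it assumes an 8-bit byte and a 4-byte int, and the order of the printed bytes depends on the machine (endianness):

#include <stdio.h>
#include <string.h>

int main(void) {
    /* One byte distinguishes 2^8 = 256 values: 0..255 unsigned, -128..+127 signed. */
    unsigned char ub = 255;
    signed char   sb = -1;    /* two's complement: the same bit pattern as 0xFF */
    printf("%u %d %02X\n", (unsigned)ub, (int)sb, (unsigned)(unsigned char)sb);

    /* Larger numbers are stored across several consecutive bytes (four here). */
    int value = 123456;
    unsigned char bytes[sizeof value];
    memcpy(bytes, &value, sizeof value);
    for (size_t i = 0; i < sizeof value; i++)
        printf("%02X ", (unsigned)bytes[i]);   /* byte order varies by machine */
    printf("\n");
    return 0;
}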

The CPU contains a special set of memory cells called registers that can be read and written to much more rapidly
than the main memory area. There are typically between two and one hundred registers depending on the type of
CPU. Registers are used for the most frequently needed data items to avoid having to access main memory every
time data is needed. As data is constantly being worked on, reducing the need to access main memory (which is
often slow compared to the ALU and control units) greatly increases the computer's speed.

Computer main memory comes in two principal varieties: random-access memory or RAM and read-only memory or
ROM. RAM can be read and written to anytime the CPU commands it, but ROM is preloaded with data and software
that never changes, therefore the CPU can only read from it. ROM is typically used to store the computer's initial
start-up instructions. In general, the contents of RAM are erased when the power to the computer is turned off, but
ROM retains its data indefinitely. In a PC, the ROM contains a specialized program called the BIOS that orchestrates
loading the computer's operating system from the hard disk drive into RAM whenever the computer is turned on or
reset. In embedded computers, which frequently do not have disk drives, all of the required software may be stored
in ROM. Software stored in ROM is often called firmware, because it is notionally more like hardware than
software. Flash memory blurs the distinction between ROM and RAM, as it retains its data when turned off but is
also rewritable. It is typically much slower than conventional ROM and RAM however, so its use is restricted to
applications where high speed is unnecessary.[42]
In more sophisticated computers there may be one or more RAM cache memories, which are slower than registers
but faster than main memory. Generally computers with this sort of cache are designed to move frequently needed
data into the cache automatically, often without the need for any intervention on the programmer's part.

Input/output (I/O)
I/O is the means by which a computer exchanges information with the
outside world. Devices that provide input or output to the computer are
called peripherals. On a typical personal computer, peripherals include
input devices like the keyboard and mouse, and output devices such as
the display and printer. Hard disk drives, floppy disk drives and optical
disc drives serve as both input and output devices. Computer
networking is another form of I/O.

I/O devices are often complex computers in their own right, with their
own CPU and memory. A graphics processing unit might contain fifty or more tiny computers that perform the calculations necessary to display 3D graphics.[citation needed] Modern desktop computers contain many smaller computers that assist the main CPU in performing I/O.

[Figure: Hard disk drives are common storage devices used with computers.]

Multitasking
While a computer may be viewed as running one gigantic program stored in its main memory, in some systems it is
necessary to give the appearance of running several programs simultaneously. This is achieved by multitasking i.e.
having the computer switch rapidly between running each program in turn.
One means by which this is done is with a special signal called an interrupt, which can periodically cause the
computer to stop executing instructions where it was and do something else instead. By remembering where it was
executing prior to the interrupt, the computer can return to that task later. If several programs are running “at the
same time,” then the interrupt generator might be causing several hundred interrupts per second, causing a program
switch each time. Since modern computers typically execute instructions several orders of magnitude faster than
human perception, it may appear that many programs are running at the same time even though only one is ever
executing in any given instant. This method of multitasking is sometimes termed “time-sharing” since each program
is allocated a “slice” of time in turn.
Before the era of cheap computers, the principal use for multitasking was to allow many people to share the same
computer.
Seemingly, multitasking would cause a computer that is switching between several programs to run more slowly, in
direct proportion to the number of programs it is running, but most programs spend much of their time waiting for
slow input/output devices to complete their tasks. If a program is waiting for the user to click on the mouse or press a
key on the keyboard, then it will not take a “time slice” until the event it is waiting for has occurred. This frees up
time for other programs to execute so that many programs may be run simultaneously without unacceptable speed
loss.
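
Real multitasking is driven by hardware interrupts, but the time-slicing idea itself can be shown with a deliberately simplified, cooperative sketch in C; the task functions and the "scheduler" loop are invented for illustration only:

#include <stdio.h>

/* Each task is a function that runs for one "slice" and then returns. */
typedef void (*task_fn)(int slice);

static void task_a(int slice) { printf("task A, slice %d\n", slice); }
static void task_b(int slice) { printf("task B, slice %d\n", slice); }

int main(void) {
    task_fn tasks[] = { task_a, task_b };
    const int num_tasks = sizeof tasks / sizeof tasks[0];

    /* Each pass of the outer loop hands every task one slice in turn;
       switching quickly enough makes them appear to run simultaneously. */
    for (int slice = 0; slice < 3; slice++)
        for (int t = 0; t < num_tasks; t++)
            tasks[t](slice);
    return 0;
}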

Multiprocessing
Some computers are designed to distribute their work across several
CPUs in a multiprocessing configuration, a technique once employed
only in large and powerful machines such as supercomputers,
mainframe computers and servers. Multiprocessor and multi-core
(multiple CPUs on a single integrated circuit) personal and laptop
computers are now widely available, and are being increasingly used in
lower-end markets as a result.

Supercomputers in particular often have highly unique architectures


that differ significantly from the basic stored-program architecture and from general purpose computers.[43] They often feature thousands of CPUs, customized high-speed interconnects, and specialized computing hardware. Such designs tend to be useful only for specialized tasks due to the large scale of program organization required to successfully utilize most of the available resources at once. Supercomputers usually see usage in large-scale simulation, graphics rendering, and cryptography applications, as well as with other so-called "embarrassingly parallel" tasks.

[Figure: Cray designed many supercomputers that used multiprocessing heavily.]

Networking and the Internet


Computers have been used to coordinate information between multiple
locations since the 1950s. The U.S. military's SAGE system was the
first large-scale example of such a system, which led to a number of
special-purpose commercial systems such as Sabre.
In the 1970s, computer engineers at research institutions throughout
the United States began to link their computers together using
telecommunications technology. The effort was funded by ARPA (now
DARPA), and the computer network that resulted was called the
ARPANET. The technologies that made the Arpanet possible spread
and evolved.

[Figure: Visualization of a portion of the routes on the Internet]

In time, the network spread beyond academic and military institutions and became known as the Internet. The emergence of networking
involved a redefinition of the nature and boundaries of the computer.
Computer operating systems and applications were modified to include
the ability to define and access the resources of other computers on the network, such as peripheral devices, stored
information, and the like, as extensions of the resources of an individual computer. Initially these facilities were
available primarily to people working in high-tech environments, but in the 1990s the spread of applications like
e-mail and the World Wide Web, combined with the development of cheap, fast networking technologies like
Ethernet and ADSL saw computer networking become almost ubiquitous. In fact, the number of computers that are
networked is growing phenomenally. A very large proportion of personal computers regularly connect to the Internet
to communicate and receive information. “Wireless” networking, often utilizing mobile phone networks, has meant
networking is becoming increasingly ubiquitous even in mobile computing environments.

Computer architecture paradigms


There are many types of computer architectures:
• Quantum computer vs Chemical computer
• Scalar processor vs Vector processor
• Non-Uniform Memory Access (NUMA) computers
• Register machine vs Stack machine
• Harvard architecture vs von Neumann architecture
• Cellular architecture
Of all these abstract machines, a quantum computer holds the most promise for revolutionizing computing.[44]
Logic gates are a common abstraction which can apply to most of the above digital or analog paradigms.
The ability to store and execute lists of instructions called programs makes computers extremely versatile,
distinguishing them from calculators. The Church–Turing thesis is a mathematical statement of this versatility: any
computer with a minimum capability (being Turing-complete) is, in principle, capable of performing the same tasks
that any other computer can perform. Therefore any type of computer (netbook, supercomputer, cellular automaton,
etc.) is able to perform the same computational tasks, given enough time and storage capacity.

Misconceptions
A computer does not need to be electronic, nor even have a processor,
nor RAM, nor even a hard disk. While popular usage of the word
“computer” is synonymous with a personal electronic computer, the
modern[45] definition of a computer is literally “A device that
computes, especially a programmable [usually] electronic machine that
performs high-speed mathematical or logical operations or that
assembles, stores, correlates, or otherwise processes information.” Any
device which processes information qualifies as a computer, especially
if the processing is purposeful.
[Figure: Women as computers in NACA High Speed Flight Station "Computer Room"]

Required technology
Historically, computers evolved from mechanical computers and eventually from vacuum tubes to transistors.
However, conceptually computational systems as flexible as a personal computer can be built out of almost anything.
For example, a computer can be made out of billiard balls (billiard ball computer); an often quoted example.[citation needed]
More realistically, modern computers are made out of transistors made of photolithographed semiconductors.
There is active research to make computers out of many promising new types of technology, such as optical
computers, DNA computers, neural computers, and quantum computers. Most computers are universal, and are able
to calculate any computable function, and are limited only by their memory capacity and operating speed. However
different designs of computers can give very different performance for particular problems; for example quantum
computers can potentially break some modern encryption algorithms (by quantum factoring) very quickly.

Capabilities of computers (In general)


1.) Ability to perform certain logical and mathematical functions.
2.) Ability to store data and/or information.
3.) Ability to retrieve data and/or information.
4.) Ability to search data and/or information.
5.) Ability to compare data and/or information.
6.) Ability to sort data and/or information.
7.) Ability to control errors.
8.) Ability to check itself.
9.) Ability to perform a set of tasks with speed and accuracy.
10.) Ability to do a set of tasks repetitively.
11.) Ability to provide new time dimensions.
12.) Excellent substitute for writing instrument and paper.

Limitations of computers (In general)


1.) Dependence on prepared set of instructions.
2.) Inability to derive meanings from objects.
3.) Inability to generate data on its own.
4.) Inability to generate information on its own.
5.) Cannot correct wrong instructions.
6.) Dependence on electricity.
7.) Dependence on human interventions.
8.) Inability to decide on its own.
9.) Not maintenance-free.
10.) Limited to the processing speed of its interconnected peripherals.
11.) Limited to the available amount of storage on primary data storage devices.
12.) Limited to the available amount of storage on secondary data storage devices.
13.) Not a long-term investment.

Further topics
• Glossary of computers

Artificial intelligence
A computer will solve problems in exactly the way it is programmed to, without regard to efficiency, alternative
solutions, possible shortcuts, or possible errors in the code. Computer programs that learn and adapt are part of the
emerging field of artificial intelligence and machine learning.

Hardware
The term hardware covers all of those parts of a computer that are tangible objects. Circuits, displays, power
supplies, cables, keyboards, printers and mice are all hardware.

History of computing hardware

First generation (mechanical/electromechanical)
• Calculators: Pascal's calculator, Arithmometer, Difference engine, Quevedo's analytical machines
• Programmable devices: Jacquard loom, Analytical engine, IBM ASCC/Harvard Mark I, Harvard Mark II, IBM SSEC, Z3

Second generation (vacuum tubes)
• Calculators: Atanasoff–Berry Computer, IBM 604, UNIVAC 60, UNIVAC 120
• Programmable devices: Colossus, ENIAC, Manchester Small-Scale Experimental Machine, EDSAC, Manchester Mark 1, Ferranti Pegasus, Ferranti Mercury, CSIRAC, EDVAC, UNIVAC I, IBM 701, IBM 702, IBM 650, Z22

Third generation (discrete transistors and SSI, MSI, LSI integrated circuits)
• Mainframes: IBM 7090, IBM 7080, IBM System/360, BUNCH
• Minicomputer: PDP-8, PDP-11, IBM System/32, IBM System/36

Fourth generation (VLSI integrated circuits)
• Minicomputer: VAX, IBM System i
• 4-bit microcomputer: Intel 4004, Intel 4040
• 8-bit microcomputer: Intel 8008, Intel 8080, Motorola 6800, Motorola 6809, MOS Technology 6502, Zilog Z80
• 16-bit microcomputer: Intel 8088, Zilog Z8000, WDC 65816/65802
• 32-bit microcomputer: Intel 80386, Pentium, Motorola 68000, ARM
• 64-bit microcomputer:[46] Alpha, MIPS, PA-RISC, PowerPC, SPARC, x86-64, ARMv8-A
• Embedded computer: Intel 8048, Intel 8051
• Personal computer: Desktop computer, Home computer, Laptop computer, Personal digital assistant (PDA), Portable computer, Tablet PC, Wearable computer

Theoretical/experimental
• Quantum computer, Chemical computer, DNA computing, Optical computer, Spintronics based computer

Other hardware topics

Peripheral device (input/output)
• Input: Mouse, keyboard, joystick, image scanner, webcam, graphics tablet, microphone
• Output: Monitor, printer, loudspeaker
• Both: Floppy disk drive, hard disk drive, optical disc drive, teleprinter

Computer busses
• Short range: RS-232, SCSI, PCI, USB
• Long range (computer networking): Ethernet, ATM, FDDI

Software
Software refers to parts of the computer which do not have a material form, such as programs, data, protocols, etc.
When software is stored in hardware that cannot easily be modified (such as BIOS ROM in an IBM PC compatible),
it is sometimes called “firmware.”

Operating system
• Unix and BSD: UNIX System V, IBM AIX, HP-UX, Solaris (SunOS), IRIX, List of BSD operating systems
• GNU/Linux: List of Linux distributions, Comparison of Linux distributions
• Microsoft Windows: Windows 95, Windows 98, Windows NT, Windows 2000, Windows Me, Windows XP, Windows Vista, Windows 7, Windows 8
• DOS: 86-DOS (QDOS), IBM PC DOS, MS-DOS, DR-DOS, FreeDOS
• Mac OS: Mac OS classic, Mac OS X
• Embedded and real-time: List of embedded operating systems
• Experimental: Amoeba, Oberon/Bluebottle, Plan 9 from Bell Labs

Library
• Multimedia: DirectX, OpenGL, OpenAL
• Programming library: C standard library, Standard Template Library

Data
• Protocol: TCP/IP, Kermit, FTP, HTTP, SMTP
• File format: HTML, XML, JPEG, MPEG, PNG

User interface
• Graphical user interface (WIMP): Microsoft Windows, GNOME, KDE, QNX Photon, CDE, GEM, Aqua
• Text-based user interface: Command-line interface, Text user interface

Application
• Office suite: Word processing, Desktop publishing, Presentation program, Database management system, Scheduling & Time management, Spreadsheet, Accounting software
• Internet Access: Browser, E-mail client, Web server, Mail transfer agent, Instant messaging
• Design and manufacturing: Computer-aided design, Computer-aided manufacturing, Plant management, Robotic manufacturing, Supply chain management
• Graphics: Raster graphics editor, Vector graphics editor, 3D modeler, Animation editor, 3D computer graphics, Video editing, Image processing
• Audio: Digital audio editor, Audio playback, Mixing, Audio synthesis, Computer music
• Software engineering: Compiler, Assembler, Interpreter, Debugger, Text editor, Integrated development environment, Software performance analysis, Revision control, Software configuration management
• Educational: Edutainment, Educational game, Serious game, Flight simulator
• Games: Strategy, Arcade, Puzzle, Simulation, First-person shooter, Platform, Massively multiplayer, Interactive fiction
• Misc: Artificial intelligence, Antivirus software, Malware scanner, Installer/Package management systems, File manager

Languages
There are thousands of different programming languages—some intended to be general purpose, others useful only
for highly specialized applications.

Programming languages
• Lists of programming languages: Timeline of programming languages, List of programming languages by category, Generational list of programming languages, List of programming languages, Non-English-based programming languages
• Commonly used assembly languages: ARM, MIPS, x86
• Commonly used high-level programming languages: Ada, BASIC, C, C++, C#, COBOL, Fortran, Java, Lisp, Pascal, Object Pascal
• Commonly used scripting languages: Bourne script, JavaScript, Python, Ruby, PHP, Perl

Professions and organizations


As the use of computers has spread throughout society, there are an increasing number of careers involving
computers.

Computer-related professions
• Hardware-related: Electrical engineering, Electronic engineering, Computer engineering, Telecommunications engineering, Optical engineering, Nanoengineering
• Software-related: Computer science, Computer engineering, Desktop publishing, Human–computer interaction, Information technology, Information systems, Computational science, Software engineering, Video game industry, Web design

The need for computers to work well together and to be able to exchange information has spawned the need for
many standards organizations, clubs and societies of both a formal and informal nature.

Organizations
• Standards groups: ANSI, IEC, IEEE, IETF, ISO, W3C
• Professional societies: ACM, AIS, IET, IFIP, BCS
• Free/open source software groups: Free Software Foundation, Mozilla Foundation, Apache Software Foundation

Degradation
Rasberry crazy ants have been known to consume the insides of electrical wiring in computers, preferring DC to AC current. This behavior is not well understood by scientists.

Notes
[1] In 1946, ENIAC required an estimated 174 kW. By comparison, a modern laptop computer may use around 30 W; nearly six thousand times
less.
[2] Early computers such as Colossus and ENIAC were able to process between 5 and 100 operations per second. A modern “commodity”
microprocessor (as of 2007) can process billions of operations per second, and many of these operations are more complicated and useful than
early computer operations.
[3] Bernard Cohen, p. 297, 2000 : "Historians of technology and computer scientists interested in history have adopted a number of qualifications
that define a computer. As a result, the question of whether Mark I was or was not a computer depends not on a general consensus but rather
on the particular definition that is adopted. Often, some primary defining characteristics of a computer are that it must (1) be electronic, (2) be
digital (rather than analog), (3) be programmed, (4) be able to perform the four elementary operations (addition, subtraction, multiplication,
and division) and -often- extract roots or obtain information from built-in tables, and (5) incorporate the principle of the stored program. A
machine does not generally qualify as a computer unless it has some further properties, for example the ability to perform certain specified
operations automatically in a controlled and predetermined sequence. For some historians and computer scientists, a machine must also have
been actually constructed and then become fully operational."
[4] From 2700 to 2300 BC, Georges Ifrah, p. 11
[5] Edmund Berkeley
[6] According to advertising on Pickett's N600 slide rule boxes.
[7] From cave paintings to the internet (http://www.historyofinformation.com/expanded.php?id=2245) HistoryofScience.com
[8] See James Essinger, p.3-4 (2004), also see: Anthony Hyman, ed., Science and Reform: Selected Works of Charles Babbage (Cambridge,
England: Cambridge University Press, 1989), page 298. It is in the collection of the Science Museum in London, England. (Delve (2007),
page 99.)
[9] Dorr E. Felt
[10] "The arithmetical machine produces effects which approach nearer to thought than all the actions of animals. But it does nothing which
would enable us to attribute will to it, as to the animals.", Pascal, Pensées Bartleby.com, Great Books online, Blaise Pascal, Thoughts (http://www.bartleby.com/48/1/6.html)
[11] See the paragraph Pascal's calculator#Competing designs
[12] Babbage's Difference engine in 1823 and his Analytical engine in the mid-1830s
[13] “It is reasonable to inquire, therefore, whether it is possible to devise a machine which will do for mathematical computation what the
automatic lathe has done for engineering. The first suggestion that such a machine could be made came more than a hundred years ago from
the mathematician Charles Babbage. Babbage's ideas have only been properly appreciated in the last ten years, but we now realize that he
understood clearly all the fundamental principles which are embodied in modern digital computers” Faster than thought, edited by B. V.
Bowden, 1953, Pitman Publishing Corporation
[14] “...Among this extraordinary galaxy of talent Charles Babbage appears to be one of the most remarkable of all. Most of his life he spent in an
entirely unsuccessful attempt to make a machine which was regarded by his contemporaries as utterly preposterous, and his efforts were
regarded as futile, time-consuming and absurd. In the last decade or so we have learnt how his ideas can be embodied in a modern digital
computer. He understood more about the logic of these machines than anyone else in the world had learned until after the end of the last war.”
Foreword to Irascible Genius, Charles Babbage inventor p. 15 (1964)
[15] In the proposal that Aiken gave IBM in 1937 while requesting funding for the Harvard Mark I we can read: “Few calculating machines
have been designed strictly for application to scientific investigations, the notable exceptions being those of Charles Babbage and others who
followed him ... After abandoning the difference engine, Babbage devoted his energy to the design and construction of an analytical engine of
far higher powers than the difference engine ... Since the time of Babbage, the development of calculating machinery has continued at an
increasing rate.” Howard Aiken, Proposed automatic calculating machine, reprinted in: The Origins of Digital Computers, Selected Papers,
Edited by Brian Randell, 1973, ISBN 3-540-06169-X
[16] Konrad Zuse p. 33 (1993)
[17] "It is reasonable to inquire, therefore, whether it is possible to devise a machine which will do for mathematical computation what the
automatic lathe has done for engineering. The first suggestion that such a machine could be made came more than a hundred years ago from
the mathematician Charles Babbage. Babbage's ideas have only been properly appreciated in the last ten years, but we now realize that he
understood clearly all the fundamental principles which are embodied in modern digital computers" B. V. Bowden, 1953, pp.6,7
[18] The analytical engine should not be confused with Babbage's difference engine which was a non-programmable mechanical calculator.
[19] Bruce Collier, 1970
[20] Doron D. Swade Scientific American, February 1993: "During several visits to London beginning in 1979, Allan G. Bromley of the
University of Sydney in Australia examined Babbage's drawings and notebooks in the Science Museum Library and became convinced that
Difference Engine No. 2 could be built and would work...In 1985, shortly after my appointment as curator of computing, Bromley appeared at
the science museum carrying a two-page proposal to do just that. He suggested that the museum attempt to complete the machine by 1991, the
bicentenary of Babbage's birth."
[21] Anthony Hyman p. 167 (1985)
[22] Fuegi and Francis, 2003, pp. 19, 25.
[23] Robert Ligonnière p. 109 (1987)

[24] Bernard Cohen, pp. 61–62 (2000):


[25] Brian Randell 1982
[26] Louis Couffignal p. VII (1933). Maurice d'Ocagne mentions that Louis Couffignal is building a simpler version of the analytical engine in
the preface.
[27] Bernard Cohen, p. 53 (2000)
[28] Bernard Cohen pp. 299-300 (2000): "When Zuse learned that I was gathering materials for a book on Aiken, he told me that he had first
come across Aiken and Mark I in an indirect manner, through the daughter of his bookkeeper. She was working for the German Geheimdienst
(Secret Service) and ... knew enough about Zuse's machine to recognize that the material filed in a certain drawer related to a device that
seemed somewhat like Zuse's ... Zuse, of course, could not go to the Secret Service and ask for the document since that would give away the
illegal source of his information. Zuse was well connected, however, and was able to send two of his assistants to the Secret Services ...
requesting any information that might be in the files concerning a device or machine in any way similar to Zuse's ... There they found a
newspaper clipping ...containing a picture of Mark I and a brief description about Aiken and the new machine."
[29] Konrad Zuse p.50, (1993): "...the logical development from the Babbage machine - or my Z3 - to stored-program computers..."
[30] RTD Net (http://www.rtd-net.de/Zuse.html): "From various sides Konrad Zuse was awarded with the title "Inventor of the computer"."
[31] GermanWay (http://www.german-way.com/famous-konrad-zuse.html): "... German inventor of the computer"
[32] Monsters & Critics (http://www.monstersandcritics.com/tech/features/article_1566782.php/Z-like-Zuse-German-inventor-of-the-computer): "he (Zuse) built the world's first computer in Berlin"
[33] About.com (http://inventors.about.com/library/weekly/aa050298.htm): "Konrad Zuse earned the semiofficial title of "inventor of the modern computer""
[34] B. Jack Copeland, ed., Colossus: The Secrets of Bletchley Park's Codebreaking Computers, Oxford University Press, 2006
[35] This program was written similarly to those for the PDP-11 minicomputer and shows some typical things a computer can do. All of the text after the semicolons consists of comments for the benefit of human readers. These have no significance to the computer and are ignored.
[36] It is not universally true that bugs are solely due to programmer oversight. Computer hardware may fail or may itself have a fundamental
problem that produces unexpected results in certain situations. For instance, the Pentium FDIV bug caused some Intel microprocessors in the
early 1990s to produce inaccurate results for certain floating point division operations. This was caused by a flaw in the microprocessor design
and resulted in a partial recall of the affected devices.
[37] Even some later computers were commonly programmed directly in machine code. Some minicomputers like the DEC PDP-8 could be
programmed directly from a panel of switches. However, this method was usually used only as part of the booting process. Most modern
computers boot entirely automatically by reading a boot program from some non-volatile memory.
[38] However, there is sometimes some form of machine language compatibility between different computers. An x86-64 compatible
microprocessor like the AMD Athlon 64 is able to run most of the same programs that an Intel Core 2 microprocessor can, as well as programs
designed for earlier microprocessors like the Intel Pentiums and Intel 80486. This contrasts with very early commercial computers, which
were often one-of-a-kind and totally incompatible with other computers.
[39] High level languages are also often interpreted rather than compiled. Interpreted languages are translated into machine code on the fly, while
running, by another program called an interpreter.
[40] The control unit's role in interpreting instructions has varied somewhat in the past. Although the control unit is solely responsible for
instruction interpretation in most modern computers, this is not always the case. Many computers include some instructions that may only be
partially interpreted by the control system and partially interpreted by another device. This is especially the case with specialized computing
hardware that may be partially self-contained. For example, EDVAC, one of the earliest stored-program computers, used a central control unit
that only interpreted four instructions. All of the arithmetic-related instructions were passed on to its arithmetic unit and further decoded there.
[41] Instructions often occupy more than one memory address, therefore the program counter usually increases by the number of memory
locations required to store one instruction.
[42] Flash memory also may only be rewritten a limited number of times before wearing out, making it less useful for heavy random access
usage.
[43] However, it is also very common to construct supercomputers out of many pieces of cheap commodity hardware; usually individual
computers connected by networks. These so-called computer clusters can often provide supercomputer performance at a much lower cost than
customized designs. While custom architectures are still used for most of the most powerful supercomputers, there has been a proliferation of
cluster computers in recent years.
[44] "Computer architecture: fundamentals and principles of computer design" (http:/ / books. google. com/ books?id=ZWaUurOwMPQC&
q=quantum+ computers& dq=insufficient+ address+ computer+ architecture& source=gbs_word_cloud_r& cad=3#v=snippet& q=quantum
computers& f=false) by Joseph D. Dumas 2006. page 340.
[45] According to the Shorter Oxford English Dictionary (6th ed, 2007), the word computer dates back to the mid 17th century, when it referred
to “A person who makes calculations; specifically a person employed for this in an observatory etc.”
[46] Most major 64-bit instruction set architectures are extensions of earlier designs. All of the architectures listed in this table, except for Alpha,
existed in 32-bit forms before their 64-bit incarnations were introduced.

References
• Fuegi, J. and Francis, J. "Lovelace & Babbage and the creation of the 1843 'notes'". IEEE Annals of the History of
Computing 25 No. 4 (October–December 2003): Digital Object Identifier (http://dx.doi.org/10.1109/MAHC.2003.1253887)
• a Kempf, Karl (1961). Historical Monograph: Electronic Computers Within the Ordnance Corps (http://
ed-thelen.org/comp-hist/U-S-Ord-61.html). Aberdeen Proving Ground (United States Army).
• a Phillips, Tony (2000). "The Antikythera Mechanism I" (http://www.math.sunysb.edu/~tony/whatsnew/
column/antikytheraI-0400/kyth1.html). American Mathematical Society. Retrieved 5 April 2006.
• a Shannon, Claude Elwood (1940). A symbolic analysis of relay and switching circuits (http://hdl.handle.net/
1721.1/11173). Massachusetts Institute of Technology.
• Digital Equipment Corporation (1972). PDP-11/40 Processor Handbook (http://bitsavers.vt100.net/dec/www.
computer.museum.uq.edu.au_mirror/D-09-30_PDP11-40_Processor_Handbook.pdf) (PDF). Maynard, MA:
Digital Equipment Corporation.
• Verma, G.; Mielke, N. (1988). Reliability performance of ETOX based flash memories. IEEE International
Reliability Physics Symposium.
• Doron D. Swade (February 1993). Redeeming Charles Babbage's Mechanical Computer. Scientific American.
p. 89.
• Meuer, Hans; Strohmaier, Erich; Simon, Horst; Dongarra, Jack (13 November 2006). "Architectures Share Over
Time" (http://www.top500.org/lists/2006/11/overtime/Architectures). TOP500. Retrieved 27 November
2006.
• Lavington, Simon (1998). A History of Manchester Computers (2 ed.). Swindon: The British Computer Society.
ISBN 978-0-902505-01-8.
• Stokes, Jon (2007). Inside the Machine: An Illustrated Introduction to Microprocessors and Computer
Architecture. San Francisco: No Starch Press. ISBN 978-1-59327-104-6.
• Zuse, Konrad (1993). The Computer – My Life. Berlin: Springer-Verlag. ISBN 0-387-56453-5.
• Felt, Dorr E. (1916). Mechanical arithmetic, or The history of the counting machine (http://www.archive.org/
details/mechanicalarithm00feltrich). Chicago: Washington Institute.
• Ifrah, Georges (2001). The Universal History of Computing: From the Abacus to the Quantum Computer. New
York: John Wiley & Sons. ISBN 0-471-39671-0.
• Berkeley, Edmund (1949). Giant Brains, or Machines That Think. John Wiley & Sons.
• Cohen, Bernard (2000). Howard Aiken, Portrait of a computer pioneer. Cambridge, Massachusetts: The MIT
Press. ISBN 978-0-2625317-9-5.
• Ligonnière, Robert (1987). Préhistoire et Histoire des ordinateurs. Paris: Robert Laffont.
ISBN 9-782221-052617.
• Couffignal, Louis (1933). Les machines à calculer ; leurs principes, leur évolution. Paris: Gauthier-Villars.
• Essinger, James (2004). Jacquard's Web, How a hand loom led to the birth of the information age. Oxford
University Press. ISBN 0-19-280577-0.
• Hyman, Anthony (1985). Charles Babbage: Pioneer of the Computer. Princeton University Press.
ISBN 978-0-6910237-7-9.
• Bowden, B. V. (1953). Faster than thought. New York, Toronto, London: Pitman publishing corporation.
• Moseley, Maboth (1964). Irascible Genius, Charles Babbage, inventor. London: Hutchinson.
• Collier, Bruce (1970). The little engine that could've: The calculating machines of Charles Babbage (http://
robroy.dyndns.info/collier/index.html). Garland Publishing Inc. ISBN 0-8240-0043-9.
• Randell, Brian (1982). "From Analytical Engine to Electronic Digital Computer: The Contributions of Ludgate,
Torres, and Bush" (http://www.cs.ncl.ac.uk/publications/articles/papers/398.pdf). Retrieved 29 October
2013.

External links
• A Brief History of Computing (http://www.life.com/image/first/in-gallery/48681/click-a-brief-history-of-computing#index/0) – slideshow by Life magazine

Informatics (academic field)


Information science
General aspects
• Information access · Information architecture
• Information management
• Information retrieval
• Information seeking · Information society
• Knowledge organization · Ontology
• Philosophy of information
• Science, technology and society
Related fields and sub-fields
• Bibliometrics · Categorization
• Censorship · Classification
• Computer data storage · Cultural studies
• Data modeling · Informatics
• Information technology
• Intellectual freedom
• Intellectual property · Memory
• Library and information science
• Preservation · Privacy

Informatics is, in its most general sense, the science of information. As an academic field it involves the practice of
information processing, and the engineering of information systems. It studies the structure, algorithms, behaviour,
and interactions of natural and artificial systems which store, process, access, and communicate information. The
field considers the interaction between humans and information systems alongside the construction of computer
interfaces. It also develops its own conceptual and theoretical foundations and utilizes foundations developed in
other fields. As such, the field of informatics has great breadth and encompasses many individual specialisations
including the more particular discipline of computing science. Since the advent of computers, individuals and
organizations increasingly process information digitally. This has led to the study of informatics with computational,
mathematical, biological, cognitive and social aspects, including study of the social impact of information
technologies. Importantly however, informatics as an academic field is not explicitly dependent upon technological
aspects of information, while computer science and information technology are.

Etymology
In 1957 the German computer scientist Karl Steinbuch coined the word Informatik by publishing a paper called
Informatik: Automatische Informationsverarbeitung ("Informatics: Automatic Information Processing").[2] The
English term Informatics is sometimes understood as meaning the same as computer science. The German word
Informatik is usually translated to English as computer science.
The French term informatique was coined in 1962 by Philippe Dreyfus[3] together with various
translations—informatics (English), also proposed independently and simultaneously by Walter F. Bauer and
associates who co-founded Informatics Inc., and informatica (Italian, Spanish, Romanian, Portuguese, Dutch),
referring to the application of computers to store and process information.
The term was coined as a combination of "information" and "automatic" to describe the science of automating
information interactions. The morphology—informat-ion + -ics—uses "the accepted form for names of sciences, as
conics, linguistics, optics, or matters of practice, as economics, politics, tactics",[4] and so, linguistically, the meaning
extends easily to encompass both the science of information and the practice of information processing.
A practitioner of informatics may be called an informatician or an informaticist.

History
This new term was adopted across Western Europe, and, except in English, developed a meaning roughly translated
by the English ‘computer science’, or ‘computing science’. Mikhailov et al. advocated the Russian term informatika
(1966), and the English informatics (1967), as names for the theory of scientific information, and argued for a
broader meaning, including study of the use of information technology in various communities (for example,
scientific) and of the interaction of technology and human organizational structures.
Informatics is the discipline of science which investigates the structure and properties (not specific content) of
scientific information, as well as the regularities of scientific information activity, its theory, history,
methodology and organization.[5]
Usage has since modified this definition in three ways. First, the restriction to scientific information is removed, as in
business informatics or legal informatics. Second, since most information is now digitally stored, computation is
now central to informatics. Third, the representation, processing and communication of information are added as
objects of investigation, since they have been recognized as fundamental to any scientific account of information.
Taking information as the central focus of study distinguishes informatics from computer science. Informatics
includes the study of biological and social mechanisms of information processing whereas computer science focuses
on the digital computation. Similarly, in the study of representation and communication, informatics is indifferent to
the substrate that carries information. For example, it encompasses the study of communication using gesture, speech
and language, as well as digital communications and networking.
In the English-speaking world the term informatics was first widely used in the compound, ‘medical informatics’,
taken to include "the cognitive, information processing, and communication tasks of medical practice, education, and
research, including information science and the technology to support these tasks".[6] Many such compounds are now
in use; they can be viewed as different areas of applied informatics.
Informatics encompasses the study of systems that represent, process, and communicate information. However, the
theory of computation in the specific discipline of theoretical computer science, which evolved from Alan Turing,
studies the notion of a complex system regardless of whether or not information actually exists. Since both fields
process information, there is some disagreement among scientists as to field hierarchy; for example Arizona State
University attempted to adopt a broader definition of informatics to even encompass cognitive science at the launch
of its School of Computing and Informatics [7] in September 2006. The confusion arises since information can be
easily stored on a computer and hence informatics could be considered the parent of computer science. However, the
original notion of a computer was the name given to the action of computation regardless of the existence of
information or the existence of a Von Neumann architecture. Humans are examples of computational systems and
not information systems. Many fields such as quantum computing theory are studied in theoretical computer science
but not related to informatics.
In 1989, the first International Olympiad in Informatics (IOI) was held in Bulgaria. The olympiad involves two five-hour days of intense competition. Four students are selected from each participating country to attend and compete for Gold, Silver, and Bronze medals. The 2008 IOI was held in Cairo, Egypt.
The first example of a degree-level qualification in Informatics occurred in 1982, when Plymouth Polytechnic (now the University of Plymouth) offered a four-year BSc (Honours) degree in Computing and Informatics, with an initial intake of only 35 students. The course still runs today,[8] making it the longest available qualification in the subject.
A broad interpretation of informatics, as "the study of the structure, algorithms, behaviour, and interactions of natural
and artificial computational systems," was introduced by the University of Edinburgh in 1994 when it formed the
grouping that is now its School of Informatics. This meaning is now (2006) increasingly used in the United
Kingdom.[9]
The 2008 Research Assessment Exercise, of the UK Funding Councils, includes a new, Computer Science and
Informatics, unit of assessment (UoA),[10] whose scope is described as follows:
The UoA includes the study of methods for acquiring, storing, processing, communicating and reasoning
about information, and the role of interactivity in natural and artificial systems, through the implementation,
organisation and use of computer hardware, software and other resources. The subjects are characterised by
the rigorous application of analysis, experimentation and design.
At the Indiana University School of Informatics (Bloomington [11], Indianapolis [12] and Southeast [13]), informatics
is defined as "the art, science and human dimensions of information technology" and "the study, application, and
social consequences of technology." It is also defined in Informatics 101, Introduction to Informatics as "the
application of information technology to the arts, sciences, and professions." These definitions are widely accepted
in the United States, and differ from British usage in omitting the study of natural computation.
At the University of California, Irvine Department of Informatics [14], informatics is defined as "the interdisciplinary
study of the design, application, use and impact of information technology. The discipline of informatics is based on
the recognition that the design of this technology is not solely a technical matter, but must focus on the relationship
between the technology and its use in real-world settings. That is, informatics designs solutions in context, and takes
into account the social, cultural and organizational settings in which computing and information technology will be
used."
At the University of Michigan, Ann Arbor Informatics interdisciplinary major [15], informatics is defined as "the
study of information and the ways information is used by and affects human beings and social systems. The major
involves coursework from the College of Literature, Science and the Arts, where the Informatics major is housed, as
well as the School of Information and the College of Engineering. Key to this growing field is that it applies both
technological and social perspectives to the study of information. Michigan's interdisciplinary approach to teaching
Informatics gives you a solid grounding in contemporary computer programming, mathematics, and statistics,
combined with study of the ethical and social science aspects of complex information systems. Experts in the field
help design new information technology tools for specific scientific, business, and cultural needs." Michigan offers
four curricular tracks within the informatics degree to provide students with increased expertise. These four track
topics include:
• Internet Informatics: An applied track in which students experiment with technologies behind Internet-based
information systems and acquire skills to map problems to deployable Internet-based solutions. This track will
replace Computational Informatics in Fall 2013.
• Data Mining & Information Analysis: Integrates the collection, analysis, and visualization of complex data and its
critical role in research, business, and government to provide students with practical skills and a theoretical basis
for approaching challenging data analysis problems.


• Life Science Informatics: Examines artificial information systems, which has helped scientists make great
progress in identifying core components of organisms and ecosystems.
• Social Computing: Advances in computing have created opportunities for studying patterns of social interaction
and developing systems that act as introducers, recommenders, coordinators, and record-keepers. Students, in this
track, craft, evaluate, and refine social software computer applications for engaging technology in unique social
contexts. This track will be phased out in Fall 2013 in favor of the new bachelor of science in information. This
will be the first undergraduate degree offered by the School of Information since its founding in 1996. The School
of Information already contains a Master's program, Doctorate program, and a professional master's program in
conjunction with the School of Public Health. The BS in Information at the University of Michigan will be the
first curriculum program of its kind in the United States, with the first graduating class to emerge in 2015.
Students will be able to apply for this unique degree in 2013 for the 2014 Fall semester; the new degree will be a
stem off of the most popular Social Computing track in the current Informatics interdisciplinary major in LSA.
Applications will be open to upper-classmen, juniors and seniors, along with a variety of information classes
available for first and second year students to gauge interest and value in the specific sector of study. The degree
was approved by the University on June 11, 2012. Along with a new degree in the School of Information, there
has also been the first and only chapter of an Informatics Professional Fraternity, Kappa Theta Pi, chartered in
Fall 2012.
One of the most significant areas of applied informatics is that of organizational informatics. Organisational
informatics is fundamentally interested in the application of information, information systems and ICT within
organisations of various forms including private sector, public sector and voluntary sector organisations.[16][17] As
such, organisational informatics can be seen to be sub-category of social informatics and a super-category of
business informatics.

Contributing disciplines
• Artificial Intelligence
• Bioinformatics
• Biomimetics
• Cognitive Science
• Computer science
• Communication studies
• Complex systems
• Didactics of informatics (Didactics of computer science)
• Information science
• Information theory
• Information technology
• Mathematics
• Robotics

Notes
[2] Karl Steinbuch Eulogy – Bernard Widrow, Reiner Hartenstein, Robert Hecht-Nielsen (http://citeseer.ist.psu.edu/cache/papers/cs2/334/http:zSzzSzhelios.informatik.uni-kl.dezSzeuology.pdf/unknown.pdf)
[3] Dreyfus, Philippe. L'informatique. Gestion, Paris, June 1962, pp. 240–41
[4] Oxford English Dictionary 1989
[5] Mikhailov, A.I., Chernyl, A.I., and Gilyarevskii, R.S. (1966) "Informatika – novoe nazvanie teorii naučnoj informacii." Naučno tehničeskaja
informacija, 12, pp. 35–39.
[6] Greenes, R.A. and Shortliffe, E.H. (1990) "Medical Informatics: An emerging discipline with academic and institutional perspectives."
Journal of the American Medical Association, 263(8) pp. 1114–20.
[7] http://sci.asu.edu/index.php
[8] BSc(Hons) Computing Informatics – University of Plymouth Link (http://www.plymouth.ac.uk/courses/undergraduate/1926/BSc+(Hons)+Computing+Informatics/)
[9] For example, at University of Reading (http://www.henley.reading.ac.uk/IRC/), Sussex (http://www.sussex.ac.uk/informatics/), City University (http://www.soi.city.ac.uk/), Ulster (http://www.infc.ulst.ac.uk/), Bradford (http://www.inf.brad.ac.uk/home/index.php), Manchester (http://www.informatics.manchester.ac.uk/) and Newcastle (http://www.ncl.ac.uk/iri/)
[10] UoA 23 Computer Science and Informatics, Panel working methods (http://www.rae.ac.uk/pubs/2006/01/docs/f23.pdf)
[11] http://www.soic.indiana.edu
[12] http://informatics.iupui.edu
[13] http://www.informatics.ius.edu/
[14] http://www.ics.uci.edu/informatics/
[15] http://informatics.umich.edu/
[16] Beynon-Davies P. (2002). Information Systems: an introduction to informatics in Organisations. Palgrave, Basingstoke, UK. ISBN
0-333-96390-3
[17] Beynon-Davies P. (2009). Business Information Systems. Palgrave, Basingstoke, UK. ISBN 978-0-230-20368-6

External links
• informatics (http://www.inf.ed.ac.uk/publications/online/0139.pdf): entry from International Encyclopedia
of Information and Library Science
• Software History Center (http://www.softwarehistory.org/history/Bauer1.html): First usage of informatics in
the US
• What is Informatics? (http://informatics.iupui.edu/about/what-is-informatics) : Indiana University
• Q&A about informatics (http://www.ics.uci.edu/informatics/qa/)
• Prior Art Database (http://www.priorartdatabase.com/IPCOM/000129939/): Informatics: An Early Software
Company
• Informatics Europe (http://www.informatics-europe.org)
• The Council of European Professional Informatics Societies (CEPIS) (http://www.cepis.org)
• An Informatics Education: What and who is it for? (http://informatics.nku.edu/about/whatis.php) – Northern Kentucky University

Programming language
A programming language is a formal
language designed to communicate
instructions to a machine, particularly
a computer. Programming languages
can be used to create programs that
control the behavior of a machine
and/or to express algorithms precisely.

The earliest programming languages preceded the invention of the computer, and were used to direct the behavior of machines such as Jacquard looms and player pianos.[1] Thousands of different programming languages have been created, mainly in the computer field, and many more are created every year. Many programming languages require computation to be specified in an imperative form (i.e., as a sequence of operations to perform), while other languages use other forms of program specification such as the declarative form (i.e., the desired result is specified, not how to achieve it).

An example of source code written in the Java programming language, which will print the message "Hello World!" to the standard output when it is compiled and then run by the Java Virtual Machine.

The description of a programming language is usually split into the two components of syntax (form) and semantics
(meaning). Some languages are defined by a specification document (for example, the C programming language is
specified by an ISO Standard), while other languages, such as Perl 5 and earlier, have a dominant implementation
that is used as a reference.

Definitions
A programming language is a notation for writing programs, which are specifications of a computation or algorithm.
Some, but not all, authors restrict the term "programming language" to those languages that can express all possible
algorithms.[2] Traits often considered important for what constitutes a programming language include:
• Function and target: A computer programming language is a language used to write computer programs, which
involve a computer performing some kind of computation[3] or algorithm and possibly control external devices
such as printers, disk drives, robots, and so on. For example PostScript programs are frequently created by
another program to control a computer printer or display. More generally, a programming language may describe
computation on some, possibly abstract, machine. It is generally accepted that a complete specification for a
programming language includes a description, possibly idealized, of a machine or processor for that language.[4]
In most practical contexts, a programming language involves a computer; consequently, programming languages
are usually defined and studied this way. Programming languages differ from natural languages in that natural
languages are only used for interaction between people, while programming languages also allow humans to
communicate instructions to machines.
• Abstractions: Programming languages usually contain abstractions for defining and manipulating data structures
or controlling the flow of execution. The practical necessity that a programming language support adequate
abstractions is expressed by the abstraction principle;[5] this principle is sometimes formulated as
recommendation to the programmer to make proper use of such abstractions.
• Expressive power: The theory of computation classifies languages by the computations they are capable of
expressing. All Turing complete languages can implement the same set of algorithms. ANSI/ISO SQL-92 and
Charity are examples of languages that are not Turing complete, yet often called programming languages.[6]

Markup languages like XML, HTML or troff, which define structured data, are not usually considered programming
languages.[7] Programming languages may, however, share the syntax with markup languages if a computational
semantics is defined. XSLT, for example, is a Turing complete XML dialect. Moreover, LaTeX, which is mostly
used for structuring documents, also contains a Turing complete subset.[8]
The term computer language is sometimes used interchangeably with programming language.[9] However, the usage
of both terms varies among authors, including the exact scope of each. One usage describes programming languages
as a subset of computer languages.[10] In this vein, languages used in computing that have a different goal than
expressing computer programs are generically designated computer languages. For instance, markup languages are
sometimes referred to as computer languages to emphasize that they are not meant to be used for programming.[11]
Another usage regards programming languages as theoretical constructs for programming abstract machines, and
computer languages as the subset thereof that runs on physical computers, which have finite hardware resources.[12]
John C. Reynolds emphasizes that formal specification languages are just as much programming languages as are the
languages intended for execution. He also argues that textual and even graphical input formats that affect the
behavior of a computer are programming languages, despite the fact they are commonly not Turing-complete, and
remarks that ignorance of programming language concepts is the reason for many flaws in input formats.[13]

Elements
All programming languages have some primitive building blocks for the description of data and the processes or
transformations applied to them (like the addition of two numbers or the selection of an item from a collection).
These primitives are defined by syntactic and semantic rules which describe their structure and meaning
respectively.

Syntax
A programming language's surface
form is known as its syntax. Most
programming languages are purely
textual; they use sequences of text
including words, numbers, and
punctuation, much like written natural
languages. On the other hand, there are
some programming languages which
are more graphical in nature, using
visual relationships between symbols
to specify a program.

The syntax of a language describes the possible combinations of symbols that form a syntactically correct program. The meaning given to a combination of symbols is handled by semantics (either formal or hard-coded in a reference implementation). Since most languages are textual, this article discusses textual syntax.

Parse tree of Python code with inset tokenization

Syntax highlighting is often used to aid programmers in recognizing elements of source code. The language above is Python.

Programming language syntax is usually defined using a combination of regular expressions (for lexical structure) and Backus–Naur Form (for grammatical structure). Below is a simple grammar, based on Lisp:

expression ::= atom | list
atom ::= number | symbol
number ::= [+-]?['0'-'9']+
symbol ::= ['A'-'Z''a'-'z'].*
list ::= '(' expression* ')'

This grammar specifies the following:


• an expression is either an atom or a list;
• an atom is either a number or a symbol;
• a number is an unbroken sequence of one or more decimal digits, optionally preceded by a plus or minus sign;
• a symbol is a letter followed by zero or more of any characters (excluding whitespace); and
• a list is a matched pair of parentheses, with zero or more expressions inside it.
The following are examples of well-formed token sequences in this grammar: 12345, () and (a b c232
(1)).
Not all syntactically correct programs are semantically correct. Many syntactically correct programs are nonetheless
ill-formed, per the language's rules; and may (depending on the language specification and the soundness of the
implementation) result in an error on translation or execution. In some cases, such programs may exhibit undefined
behavior. Even when a program is well-defined within a language, it may still have a meaning that is not intended by
the person who wrote it.
Using natural language as an example, it may not be possible to assign a meaning to a grammatically correct
sentence or the sentence may be false:
• "Colorless green ideas sleep furiously." is grammatically well-formed but has no generally accepted meaning.
• "John is a married bachelor." is grammatically well-formed but expresses a meaning that cannot be true.
The following C language fragment is syntactically correct, but performs operations that are not semantically defined
(the operation *p >> 4 has no meaning for a value having a complex type and p->im is not defined because the
value of p is the null pointer):

complex *p = NULL;
complex abs_p = sqrt(*p >> 4 + p->im);

If the type declaration on the first line were omitted, the program would trigger an error on compilation, as the
variable "p" would not be defined. But the program would still be syntactically correct, since type declarations
provide only semantic information.
The grammar needed to specify a programming language can be classified by its position in the Chomsky hierarchy.
The syntax of most programming languages can be specified using a Type-2 grammar, i.e., they are context-free
grammars.[14] Some languages, including Perl and Lisp, contain constructs that allow execution during the parsing
phase. Languages that have constructs that allow the programmer to alter the behavior of the parser make syntax
analysis an undecidable problem, and generally blur the distinction between parsing and execution.[15] In contrast to
Lisp's macro system and Perl's BEGIN blocks, which may contain general computations, C macros are merely string
replacements, and do not require code execution.[16]

Semantics
The term Semantics refers to the meaning of languages, as opposed to their form (syntax).

Static semantics
The static semantics defines restrictions on the structure of valid texts that are hard or impossible to express in
standard syntactic formalisms. For compiled languages, static semantics essentially include those semantic rules that
can be checked at compile time. Examples include checking that every identifier is declared before it is used (in
languages that require such declarations) or that the labels on the arms of a case statement are distinct.[17] Many
important restrictions of this type, like checking that identifiers are used in the appropriate context (e.g. not adding
an integer to a function name), or that subroutine calls have the appropriate number and type of arguments, can be
enforced by defining them as rules in a logic called a type system. Other forms of static analyses like data flow
analysis may also be part of static semantics. Newer programming languages like Java and C# have definite
assignment analysis, a form of data flow analysis, as part of their static semantics.
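As a rough illustration (a minimal sketch, not taken from any specification, with an illustrative class name and assuming the usual main method so that args is a String[] parameter), the following Java program is rejected by definite assignment analysis because the variable may be read on a path on which it was never assigned:

public class DefiniteAssignmentExample {        // illustrative name only
    public static void main(String[] args) {
        int x;                                  // declared but not yet assigned
        if (args.length > 0) {
            x = args.length;                    // assigned only on this branch
        }
        System.out.println(x);                  // compile-time error: variable x might not have been initialized
    }
}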

Dynamic semantics
Once data has been specified, the machine must be instructed to perform operations on the data. For example, the
semantics may define the strategy by which expressions are evaluated to values, or the manner in which control
structures conditionally execute statements. The dynamic semantics (also known as execution semantics) of a
language defines how and when the various constructs of a language should produce a program behavior. There are
many ways of defining execution semantics. Natural language is often used to specify the execution semantics of
languages commonly used in practice. A significant amount of academic research went into formal semantics of
programming languages, which allow execution semantics to be specified in a formal manner. Results from this field
of research have seen limited application to programming language design and implementation outside academia.

Type system
A type system defines how a programming language classifies values and expressions into types, how it can
manipulate those types and how they interact. The goal of a type system is to verify and usually enforce a certain
level of correctness in programs written in that language by detecting certain incorrect operations. Any decidable
type system involves a trade-off: while it rejects many incorrect programs, it can also prohibit some correct, albeit
unusual programs. In order to bypass this downside, a number of languages have type loopholes, usually unchecked
casts that may be used by the programmer to explicitly allow a normally disallowed operation between different
types. In most typed languages, the type system is used only to type check programs, but a number of languages,
usually functional ones, infer types, relieving the programmer from the need to write type annotations. The formal
design and study of type systems is known as type theory.

Typed versus untyped languages


A language is typed if the specification of every operation defines types of data to which the operation is applicable,
with the implication that it is not applicable to other types. For example, the data represented by "this text
between the quotes" is a string. In most programming languages, dividing a number by a string has no
meaning; most modern programming languages will therefore reject any program attempting to perform such an
operation. In some languages the meaningless operation will be detected when the program is compiled ("static" type
checking), and rejected by the compiler; while in others, it will be detected when the program is run ("dynamic" type
checking), resulting in a run-time exception.
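For instance, in a statically checked language such as Java the meaningless operation is reported before the program ever runs; the fragment below is a small sketch with illustrative variable names only:

int count = 10;
String label = "items";
int result = count / label;   // rejected at compile time: dividing a number by a string has no meaning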
A special case of typed languages are the single-type languages. These are often scripting or markup languages, such
as REXX or SGML, and have only one data type—most commonly character strings which are used for both
symbolic and numeric data.
In contrast, an untyped language, such as most assembly languages, allows any operation to be performed on any
data, which are generally considered to be sequences of bits of various lengths. High-level languages which are
untyped include BCPL and some varieties of Forth.
In practice, while few languages are considered typed from the point of view of type theory (verifying or rejecting
all operations), most modern languages offer a degree of typing. Many production languages provide means to
bypass or subvert the type system, trading type-safety for finer control over the program's execution (see casting).

Static versus dynamic typing


In static typing, all expressions have their types determined prior to when the program is executed, typically at
compile-time. For example, 1 and (2+2) are integer expressions; they cannot be passed to a function that expects a
string, or stored in a variable that is defined to hold dates.
Statically typed languages can be either manifestly typed or type-inferred. In the first case, the programmer must
explicitly write types at certain textual positions (for example, at variable declarations). In the second case, the
compiler infers the types of expressions and declarations based on context. Most mainstream statically typed
languages, such as C++, C# and Java, are manifestly typed. Complete type inference has traditionally been
associated with less mainstream languages, such as Haskell and ML. However, many manifestly typed languages
support partial type inference; for example, Java and C# both infer types in certain limited cases.[18]
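As a small sketch of such limited inference (assuming Java 7 or later and the standard java.util collection classes), Java's "diamond" operator lets the compiler infer the type arguments of a generic instance creation from the declared type on the left-hand side:

import java.util.ArrayList;
import java.util.List;

List<String> names = new ArrayList<>();    // the type argument String is inferred by the compiler
names.add("Ada");                          // subsequent uses are checked against the inferred type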
Dynamic typing, also called latent typing, determines the type-safety of operations at run time; in other words, types
are associated with run-time values rather than textual expressions. As with type-inferred languages, dynamically
typed languages do not require the programmer to write explicit type annotations on expressions. Among other
things, this may permit a single variable to refer to values of different types at different points in the program
execution. However, type errors cannot be automatically detected until a piece of code is actually executed,
potentially making debugging more difficult. Lisp, Perl, Python, JavaScript, and Ruby are dynamically typed.

Weak and strong typing


Weak typing allows a value of one type to be treated as another, for example treating a string as a number. This can
occasionally be useful, but it can also allow some kinds of program faults to go undetected at compile time and even
at run time.
Strong typing prevents the above. An attempt to perform an operation on the wrong type of value raises an error.
Strongly typed languages are often termed type-safe or safe.
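A brief Java sketch (a fragment only, with illustrative names) shows both sides of this: the mistaken assignment is rejected outright, while the explicit cast "loophole" is still checked, so the wrong type raises an error at run time rather than being silently accepted:

Object boxed = "forty-two";        // the object actually holds a String
// Integer n = boxed;              // rejected at compile time: incompatible types
Integer n = (Integer) boxed;       // compiles, but throws ClassCastException when executed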
An alternative definition for "weakly typed" refers to languages, such as Perl and JavaScript, which permit a large
number of implicit type conversions. In JavaScript, for example, the expression 2 * x implicitly converts x to a
number, and this conversion succeeds even if x is null, undefined, an Array, or a string of letters. Such
implicit conversions are often useful, but they can mask programming errors. Strong and static are now generally
considered orthogonal concepts, but usage in the literature differs. Some use the term strongly typed to mean
strongly, statically typed, or, even more confusingly, to mean simply statically typed. Thus C has been called both
strongly typed and weakly, statically typed.
It may seem odd to some professional programmers that C could be "weakly, statically typed". However, notice that
the use of the generic pointer, the void* pointer, does allow for casting of pointers to other pointers without needing
to do an explicit cast. This is extremely similar to somehow casting an array of bytes to any kind of datatype in C
without using an explicit cast, such as (int) or (char).
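Even a language that is usually described as strongly typed performs a few implicit conversions of the kind discussed above. In Java, for instance, the + operator silently converts its other operand to a string when one operand is a string, which is convenient but can mask an arithmetic mistake; the following fragment is a small sketch:

String label = "total: " + 2 + 3;   // evaluated left to right as string concatenation
System.out.println(label);          // prints "total: 23", not "total: 5"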

Standard library and run-time system


Most programming languages have an associated core library (sometimes known as the 'standard library', especially
if it is included as part of the published language standard), which is conventionally made available by all
implementations of the language. Core libraries typically include definitions for commonly used algorithms, data
structures, and mechanisms for input and output.
A language's core library is often treated as part of the language by its users, although the designers may have treated
it as a separate entity. Many language specifications define a core that must be made available in all
implementations, and in the case of standardized languages this core library may be required. The line between a
language and its core library therefore differs from language to language. Indeed, some languages are designed so
that the meanings of certain syntactic constructs cannot even be described without referring to the core library. For
example, in Java, a string literal is defined as an instance of the java.lang.String class; similarly, in
Smalltalk, an anonymous function expression (a "block") constructs an instance of the library's BlockContext
class. Conversely, Scheme contains multiple coherent subsets that suffice to construct the rest of the language as
library macros, and so the language designers do not even bother to say which portions of the language must be
implemented as language constructs, and which must be implemented as parts of a library.
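For instance, the Java fragment below (a minimal sketch) depends on the core library in exactly the way just described: the string literal denotes an instance of java.lang.String, and the length method it calls is defined by that core-library class rather than by the language's syntax:

String greeting = "Hello";       // the literal is an instance of java.lang.String
int n = greeting.length();       // length() comes from the core library, not from the grammar of the language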

Design and implementation


Programming languages share properties with natural languages related to their purpose as vehicles for
communication, having a syntactic form separate from its semantics, and showing language families of related
languages branching one from another.[19] But as artificial constructs, they also differ in fundamental ways from
languages that have evolved through usage. A significant difference is that a programming language can be fully
described and studied in its entirety, since it has a precise and finite definition. By contrast, natural languages have
changing meanings given by their users in different communities. While constructed languages are also artificial
languages designed from the ground up with a specific purpose, they lack the precise and complete semantic
definition that a programming language has.
Many programming languages have been designed from scratch, altered to meet new needs, and combined with
other languages. Many have eventually fallen into disuse. Although there have been attempts to design one
"universal" programming language that serves all purposes, all of them have failed to be generally accepted as filling
this role.[20] The need for diverse programming languages arises from the diversity of contexts in which languages
are used:
• Programs range from tiny scripts written by individual hobbyists to huge systems written by hundreds of
programmers.
• Programmers range in expertise from novices who need simplicity above all else, to experts who may be
comfortable with considerable complexity.
• Programs must balance speed, size, and simplicity on systems ranging from microcontrollers to supercomputers.
• Programs may be written once and not change for generations, or they may undergo continual modification.
• Finally, programmers may simply differ in their tastes: they may be accustomed to discussing problems and
expressing them in a particular language.

One common trend in the development of programming languages has been to add more ability to solve problems
using a higher level of abstraction. The earliest programming languages were tied very closely to the underlying
hardware of the computer. As new programming languages have developed, features have been added that let
programmers express ideas that are more remote from simple translation into underlying hardware instructions.
Because programmers are less tied to the complexity of the computer, their programs can do more computing with
less effort from the programmer. This lets them write more functionality per time unit.[21]
Natural language processors have been proposed as a way to eliminate the need for a specialized language for
programming. However, this goal remains distant and its benefits are open to debate. Edsger W. Dijkstra took the
position that the use of a formal language is essential to prevent the introduction of meaningless constructs, and
dismissed natural language programming as "foolish".[22] Alan Perlis was similarly dismissive of the idea. Hybrid
approaches have been taken in Structured English and SQL.
A language's designers and users must construct a number of artifacts that govern and enable the practice of
programming. The most important of these artifacts are the language specification and implementation.

Specification
The specification of a programming language is intended to provide a definition that the language users and the
implementors can use to determine whether the behavior of a program is correct, given its source code.
A programming language specification can take several forms, including the following:
• An explicit definition of the syntax, static semantics, and execution semantics of the language. While syntax is
commonly specified using a formal grammar, semantic definitions may be written in natural language (e.g., as in
the C language), or a formal semantics (e.g., as in Standard ML and Scheme specifications).
• A description of the behavior of a translator for the language (e.g., the C++ and Fortran specifications). The
syntax and semantics of the language have to be inferred from this description, which may be written in natural or
a formal language.
• A reference or model implementation, sometimes written in the language being specified (e.g., Prolog or ANSI
REXX[23]). The syntax and semantics of the language are explicit in the behavior of the reference
implementation.

Implementation
An implementation of a programming language provides a way to execute that program on one or more
configurations of hardware and software. There are, broadly, two approaches to programming language
implementation: compilation and interpretation. It is generally possible to implement a language using either
technique.
The output of a compiler may be executed by hardware or a program called an interpreter. In some implementations
that make use of the interpreter approach there is no distinct boundary between compiling and interpreting. For
instance, some implementations of BASIC compile and then execute the source a line at a time.
Programs that are executed directly on the hardware usually run several orders of magnitude faster than those that are
interpreted in software.[citation needed]
One technique for improving the performance of interpreted programs is just-in-time compilation. Here the virtual
machine, just before execution, translates the blocks of bytecode which are going to be used to machine code, for
direct execution on the hardware.
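To make the distinction concrete, the following sketch runs the same toy instruction list both ways in Python: once by walking the instructions directly (interpretation) and once by translating them into host code that is then executed directly (a crude stand-in for compilation). The instruction names (PUSH, ADD, MUL) and the two helper functions are illustrative assumptions and do not correspond to any real language implementation.

def interpret(program):
    """Execute the instruction list directly, one instruction at a time."""
    stack = []
    for op, *args in program:
        if op == "PUSH":
            stack.append(args[0])
        elif op == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "MUL":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
    return stack.pop()

def compile_to_python(program):
    """Translate the instruction list once into a Python expression, then let the
    host run the translated code directly (standing in for native code)."""
    exprs = []
    for op, *args in program:
        if op == "PUSH":
            exprs.append(str(args[0]))
        elif op == "ADD":
            b, a = exprs.pop(), exprs.pop()
            exprs.append("(%s + %s)" % (a, b))
        elif op == "MUL":
            b, a = exprs.pop(), exprs.pop()
            exprs.append("(%s * %s)" % (a, b))
    return eval("lambda: " + exprs.pop())

program = [("PUSH", 2), ("PUSH", 3), ("ADD",), ("PUSH", 4), ("MUL",)]  # (2 + 3) * 4
print(interpret(program))             # 20, decided instruction by instruction at run time
compiled = compile_to_python(program)
print(compiled())                     # 20, translation done once, before execution

Real implementations differ enormously in detail, but the division of labour is the one described above: a compiler translates once ahead of execution, while an interpreter decides what to do at every step.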

Usage
Thousands of different programming languages have been created, mainly in the computing field. Programming
languages differ from most other forms of human expression in that they require a greater degree of precision and
completeness.
When using a natural language to communicate with other people, human authors and speakers can be ambiguous
and make small errors, and still expect their intent to be understood. However, figuratively speaking, computers "do
exactly what they are told to do", and cannot "understand" what code the programmer intended to write. The
combination of the language definition, a program, and the program's inputs must fully specify the external behavior
that occurs when the program is executed, within the domain of control of that program. On the other hand, ideas
about an algorithm can be communicated to humans without the precision required for execution by using
pseudocode, which interleaves natural language with code written in a programming language.
A programming language provides a structured mechanism for defining pieces of data, and the operations or
transformations that may be carried out automatically on that data. A programmer uses the abstractions present in the
language to represent the concepts involved in a computation. These concepts are represented as a collection of the
simplest elements available (called primitives). Programming is the process by which programmers combine these
primitives to compose new programs, or adapt existing ones to new uses or a changing environment.
Programs for a computer might be executed in a batch process without human interaction, or a user might type
commands in an interactive session of an interpreter. In this case the "commands" are simply programs, whose
execution is chained together. When a language is used to give commands to a software application (such as a shell)
it is called a scripting language.[citation needed]

Measuring language usage


It is difficult to determine which programming languages are most widely used, and what usage means varies by
context. One language may occupy the greater number of programmer hours, a different one have more lines of
code, and a third utilize the most CPU time. Some languages are very popular for particular kinds of applications.
For example, COBOL is still strong in the corporate data center, often on large mainframes; Fortran in scientific and
engineering applications; and C in embedded applications and operating systems. Other languages are regularly used
to write many different kinds of applications.
Various methods of measuring language popularity, each subject to a different bias over what is measured, have been
proposed:
• counting the number of job advertisements that mention the language
• the number of books sold that teach or describe the language
• estimates of the number of existing lines of code written in the language—which may underestimate languages
not often found in public searches[24]
• counts of language references (i.e., to the name of the language) found using a web search engine.
Combining and averaging information from various internet sites, langpop.com claims that in 2008 the 10 most cited
programming languages are (in alphabetical order): C, C++, C#, Java, JavaScript, Perl, PHP, Python, Ruby, and
SQL.

Taxonomies
There is no overarching classification scheme for programming languages. A given programming language does not
usually have a single ancestor language. Languages commonly arise by combining the elements of several
predecessor languages with new ideas in circulation at the time. Ideas that originate in one language will diffuse
throughout a family of related languages, and then leap suddenly across familial gaps to appear in an entirely
different family.
The task is further complicated by the fact that languages can be classified along multiple axes. For example, Java is
both an object-oriented language (because it encourages object-oriented organization) and a concurrent language
(because it contains built-in constructs for running multiple threads in parallel). Python is an object-oriented
scripting language.
In broad strokes, programming languages divide into programming paradigms and a classification by intended
domain of use, with general-purpose programming languages distinguished from domain-specific programming
languages. Traditionally, programming languages have been regarded as describing computation in terms of
imperative sentences, i.e. issuing commands. These are generally called imperative programming languages. A great
deal of research in programming languages has been aimed at blurring the distinction between a program as a set of
instructions and a program as an assertion about the desired answer, which is the main feature of declarative
programming.[25] More refined paradigms include procedural programming, object-oriented programming,
functional programming, and logic programming; some languages are hybrids of paradigms or multi-paradigmatic.
An assembly language is not so much a paradigm as a direct model of an underlying machine architecture. By
purpose, programming languages might be considered general purpose, system programming languages, scripting
languages, domain-specific languages, or concurrent/distributed languages (or a combination of these). Some general
purpose languages were designed largely with educational goals.
A programming language may also be classified by factors unrelated to programming paradigm. For instance, most
programming languages use English language keywords, while a minority do not. Other languages may be classified
as being deliberately esoteric or not.

History

Early developments
The first programming languages predate the modern
computer. The 19th century saw the invention of
"programmable" looms and player piano scrolls, both of
which implemented examples of domain-specific
languages. By the beginning of the twentieth century,
punch cards encoded data and directed mechanical
processing. In the 1930s and 1940s, the formalisms of
Alonzo Church's lambda calculus and Alan Turing's
Turing machines provided mathematical abstractions for
expressing algorithms; the lambda calculus remains
influential in language design.[26]

In the 1940s, the first electrically powered digital computers were created. Grace Hopper, one of the first programmers of the Harvard Mark I computer, developed the first compiler, around 1952, for a computer programming language.
Notwithstanding, the idea of programming language existed earlier; the first high-level programming language to be designed for a computer was Plankalkül, developed for the German Z3 by Konrad Zuse between 1943 and 1945. However, it was not implemented until 1998 and 2000.[27]
[Figure: A selection of textbooks that teach programming, in languages both popular and obscure. These are only a few of the thousands of programming languages and dialects that have been designed in history.]

Programmers of early 1950s computers, notably UNIVAC I and IBM 701, used machine language programs, that is,
the first generation language (1GL). 1GL programming was quickly superseded by similarly machine-specific, but
mnemonic, second generation languages (2GL) known as assembly languages or "assembler". Later in the 1950s,
assembly language programming, which had evolved to include the use of macro instructions, was followed by the
development of "third generation" programming languages (3GL), such as FORTRAN, LISP, and COBOL.[28] 3GLs
are more abstract and are "portable", or at least implemented similarly on computers that do not support the same
native machine code. Updated versions of all of these 3GLs are still in general use, and each has strongly influenced
the development of later languages. At the end of the 1950s, the language formalized as ALGOL 60 was introduced,
and most later programming languages are, in many respects, descendants of Algol. The format and use of the early
programming languages was heavily influenced by the constraints of the interface.[29]

Refinement
The period from the 1960s to the late 1970s brought the development of the major language paradigms now in use,
though many aspects were refinements of ideas in the very first Third-generation programming languages:
• APL introduced array programming and influenced functional programming.[30]
• PL/I (NPL) was designed in the early 1960s to incorporate the best ideas from FORTRAN and COBOL.
• In the 1960s, Simula was the first language designed to support object-oriented programming; in the mid-1970s,
Smalltalk followed with the first "purely" object-oriented language.
• C was developed between 1969 and 1973 as a system programming language, and remains popular.[31]
• Prolog, designed in 1972, was the first logic programming language.
• In 1978, ML built a polymorphic type system on top of Lisp, pioneering statically typed functional programming
languages.
Each of these languages spawned an entire family of descendants, and most modern languages count at least one of
them in their ancestry.
The 1960s and 1970s also saw considerable debate over the merits of structured programming, and whether
programming languages should be designed to support it. Edsger Dijkstra, in a famous 1968 letter published in the
Communications of the ACM, argued that GOTO statements should be eliminated from all "higher level"
programming languages.
The 1960s and 1970s also saw expansion of techniques that reduced the footprint of a program as well as improved
productivity of the programmer and user. The card deck for an early 4GL was a lot smaller for the same functionality
expressed in a 3GL deck.

Consolidation and growth


The 1980s were years of relative consolidation. C++ combined object-oriented and systems programming. The
United States government standardized Ada, a systems programming language derived from Pascal and intended for
use by defense contractors. In Japan and elsewhere, vast sums were spent investigating so-called "fifth generation"
languages that incorporated logic programming constructs.[32] The functional languages community moved to
standardize ML and Lisp. Rather than inventing new paradigms, all of these movements elaborated upon the ideas
invented in the previous decade.
One important trend in language design for programming large-scale systems during the 1980s was an increased
focus on the use of modules, or large-scale organizational units of code. Modula-2, Ada, and ML all developed
notable module systems in the 1980s, although other languages, such as PL/I, already had extensive support for
modular programming. Module systems were often wedded to generic programming constructs.
The rapid growth of the Internet in the mid-1990s created opportunities for new languages. Perl, originally a Unix
scripting tool first released in 1987, became common in dynamic websites. Java came to be used for server-side
programming, and bytecode virtual machines became popular again in commercial settings with their promise of
"Write once, run anywhere" (UCSD Pascal had been popular for a time in the early 1980s). These developments
were not fundamentally novel, rather they were refinements to existing languages and paradigms, and largely based
on the C family of programming languages.
Programming language evolution continues, in both industry and research. Current directions include security and
reliability verification, new kinds of modularity (mixins, delegates, aspects), and database integration such as
Microsoft's LINQ.
The 4GLs are examples of languages which are domain-specific, such as SQL, which manipulates and returns sets of
data rather than the scalar values which are canonical to most programming languages. Perl, for example, with its
'here document' can hold multiple 4GL programs, as well as multiple JavaScript programs, in part of its own perl
code and use variable interpolation in the 'here document' to support multi-language programming.[33]

References
[1] Ettinger, James (2004) Jacquard's Web, Oxford University Press
[2] In mathematical terms, this means the programming language is Turing-complete
[3] The scope of SIGPLAN is the theory, design, implementation, description, and application of computer programming languages - languages that permit the specification of a variety of different computations, thereby providing the user with significant control (immediate or delayed) over the computer's operation.
[4] R. Narasimahan, Programming Languages and Computers: A Unified Metatheory, pp. 189--247 in Franz Alt, Morris Rubinoff (eds.)
Advances in computers, Volume 8, Academic Press, 1994, ISBN 012012108, p.193 : "a complete specification of a programming language
must, by definition, include a specification of a processor--idealized, if you will--for that language." [the source cites many references to
support this statement]
[5] David A. Schmidt, The structure of typed programming languages, MIT Press, 1994, ISBN 0-262-19349-3, p. 32
[6] Charity is a categorical programming language ... All Charity computations terminate.
[7] XML in 10 points (http:/ / www. w3. org/ XML/ 1999/ XML-in-10-points. html) W3C, 1999, XML is not a programming language.
[8] http:/ / tobi. oetiker. ch/ lshort/ lshort. pdf
[9] Robert A. Edmunds, The Prentice-Hall standard glossary of computer terminology, Prentice-Hall, 1985, p. 91
[10] Pascal Lando, Anne Lapujade, Gilles Kassel, and Frédéric Fürst, Towards a General Ontology of Computer Programs (http:/ / www.
loa-cnr. it/ ICSOFT2007_final. pdf), ICSOFT 2007 (http:/ / dblp. uni-trier. de/ db/ conf/ icsoft/ icsoft2007-1. html), pp. 163-170
[11] S.K. Bajpai, Introduction To Computers And C Programming, New Age International, 2007, ISBN 81-224-1379-X, p. 346
[12] R. Narasimahan, Programming Languages and Computers: A Unified Metatheory, pp. 189--247 in Franz Alt, Morris Rubinoff (eds.)
Advances in computers, Volume 8, Academic Press, 1994, ISBN 012012108, p.215: "[...] the model [...] for computer languages differs from
that [...] for programming languages in only two respects. In a computer language, there are only finitely many names--or registers--which can
assume only finitely many values--or states--and these states are not further distinguished in terms of any other attributes. [author's footnote:]
This may sound like a truism but its implications are far reaching. For example, it would imply that any model for programming languages, by
fixing certain of its parameters or features, should be reducible in a natural way to a model for computer languages."
[13] John C. Reynolds, Some thoughts on teaching programming and programming languages, SIGPLAN Notices, Volume 43, Issue 11,
November 2008, p.109
[14] Section 2.2: Pushdown Automata, pp.101–114.
[15] Jeffrey Kegler, " Perl and Undecidability (http:/ / www. jeffreykegler. com/ Home/ perl-and-undecidability)", The Perl Review. Papers 2 and
3 prove, using respectively Rice's theorem and direct reduction to the halting problem, that the parsing of Perl programs is in general
undecidable.
[16] Marty Hall, 1995, Lecture Notes: Macros (http:/ / www. apl. jhu. edu/ ~hall/ Lisp-Notes/ Macros. html), PostScript version (http:/ / www.
apl. jhu. edu/ ~hall/ Lisp-Notes/ Macros. ps)
[17] Michael Lee Scott, Programming language pragmatics, Edition 2, Morgan Kaufmann, 2006, ISBN 0-12-633951-1, p. 18–19
[18] Specifically, instantiations of generic types are inferred for certain expression forms. Type inference in Generic Java—the research language
that provided the basis for Java 1.5's bounded parametric polymorphism extensions—is discussed in two informal manuscripts from the Types
mailing list: Generic Java type inference is unsound (http:/ / www. seas. upenn. edu/ ~sweirich/ types/ archive/ 1999-2003/ msg00849. html)
(Alan Jeffrey, 17 December 2001) and Sound Generic Java type inference (http:/ / www. seas. upenn. edu/ ~sweirich/ types/ archive/
1999-2003/ msg00921. html) (Martin Odersky, 15 January 2002). C#'s type system is similar to Java's, and uses a similar partial type
inference scheme.
[19] Steven R. Fischer, A history of language, Reaktion Books, 2003, ISBN 1-86189-080-X, p. 205
[20] IBM in first publishing PL/I, for example, rather ambitiously titled its manual The universal programming language PL/I (IBM Library;
1966). The title reflected IBM's goals for unlimited subsetting capability: PL/I is designed in such a way that one can isolate subsets from it
satisfying the requirements of particular applications. (). Ada and UNCOL had similar early goals.
[21] Frederick P. Brooks, Jr.: The Mythical Man-Month, Addison-Wesley, 1982, pp. 93-94
[22] Dijkstra, Edsger W. On the foolishness of "natural language programming." (http:/ / www. cs. utexas. edu/ users/ EWD/ transcriptions/
EWD06xx/ EWD667. html) EWD667.
[23] ANSI — Programming Language Rexx, X3-274.1996
[24] Bieman, J.M.; Murdock, V., Finding code on the World Wide Web: a preliminary investigation, Proceedings First IEEE International
Workshop on Source Code Analysis and Manipulation, 2001
[25] Carl A. Gunter, Semantics of Programming Languages: Structures and Techniques, MIT Press, 1992, ISBN 0-262-57095-5, p. 1
[26] Benjamin C. Pierce writes:

"... the lambda calculus has seen widespread use in the specification of programming language features, in
language design and implementation, and in the study of type systems."
[27] Rojas, Raúl, et al. (2000). "Plankalkül: The First High-Level Programming Language and its Implementation". Institut für Informatik, Freie
Universität Berlin, Technical Report B-3/2000. (full text) (http:/ / www. zib. de/ zuse/ Inhalt/ Programme/ Plankalkuel/ Plankalkuel-Report/
Plankalkuel-Report. htm)
[28] Linda Null, Julia Lobur, The essentials of computer organization and architecture, Edition 2, Jones & Bartlett Publishers, 2006, ISBN
0-7637-3769-0, p. 435
[29] Frank da Cruz. IBM Punch Cards (http:/ / www. columbia. edu/ acis/ history/ cards. html) Columbia University Computing History (http:/ /
www. columbia. edu/ acis/ history/ index. html).
[30] Richard L. Wexelblat: History of Programming Languages, Academic Press, 1981, chapter XIV.
[31] This comparison analyzes trends in number of projects hosted by a popular community programming repository. During most years of the comparison, C leads by a considerable margin; in 2006, Java overtakes C, but the combination of C/C++ still leads considerably.
[32] Tetsuro Fujise, Takashi Chikayama, Kazuaki Rokusawa, Akihiko Nakase (December 1994). "KLIC: A Portable Implementation of KL1"
Proc. of FGCS '94, ICOT Tokyo, December 1994. http:/ / www. icot. or. jp/ ARCHIVE/ HomePage-E. html KLIC is a portable
implementation of a concurrent logic programming language KL1.
[33] Wall, Programming Perl ISBN 0-596-00027-8 p. 66

Further reading
• Abelson, Harold; Sussman, Gerald Jay (1996). Structure and Interpretation of Computer Programs (http://
mitpress.mit.edu/sicp/full-text/book/book-Z-H-4.html) (2nd ed.). MIT Press.
• Raphael Finkel: Advanced Programming Language Design (http://www.nondot.org/sabre/Mirrored/
AdvProgLangDesign/), Addison Wesley 1995.
• Daniel P. Friedman, Mitchell Wand, Christopher T. Haynes: Essentials of Programming Languages, The MIT
Press 2001.
• Maurizio Gabbrielli and Simone Martini: "Programming Languages: Principles and Paradigms", Springer, 2010.
• David Gelernter, Suresh Jagannathan: Programming Linguistics, The MIT Press 1990.
• Ellis Horowitz (ed.): Programming Languages, a Grand Tour (3rd ed.), 1987.
• Ellis Horowitz: Fundamentals of Programming Languages, 1989.
• Shriram Krishnamurthi: Programming Languages: Application and Interpretation, online publication (http://
www.cs.brown.edu/~sk/Publications/Books/ProgLangs/).
• Bruce J. MacLennan: Principles of Programming Languages: Design, Evaluation, and Implementation, Oxford
University Press 1999.
• John C. Mitchell: Concepts in Programming Languages, Cambridge University Press 2002.
• Benjamin C. Pierce: Types and Programming Languages, The MIT Press 2002.
• Terrence W. Pratt and Marvin V. Zelkowitz: Programming Languages: Design and Implementation (4th ed.),
Prentice Hall 2000.
• Peter H. Salus. Handbook of Programming Languages (4 vols.). Macmillan 1998.
• Ravi Sethi: Programming Languages: Concepts and Constructs, 2nd ed., Addison-Wesley 1996.
• Michael L. Scott: Programming Language Pragmatics, Morgan Kaufmann Publishers 2005.
• Robert W. Sebesta: Concepts of Programming Languages, 9th ed., Addison Wesley 2009.
• Franklyn Turbak and David Gifford with Mark Sheldon: Design Concepts in Programming Languages, The MIT
Press 2009.
• Peter Van Roy and Seif Haridi. Concepts, Techniques, and Models of Computer Programming, The MIT Press
2004.
• David A. Watt. Programming Language Concepts and Paradigms. Prentice Hall 1990.
• David A. Watt and Muffy Thomas. Programming Language Syntax and Semantics. Prentice Hall 1991.
• David A. Watt. Programming Language Processors. Prentice Hall 1993.
• David A. Watt. Programming Language Design Concepts. John Wiley & Sons 2004.

External links
• 99 Bottles of Beer (http://www.99-bottles-of-beer.net/) A collection of implementations in many languages.
• Computer Programming Languages (http://www.dmoz.org/Computers/Programming/Languages/) on the
Open Directory Project

Algorithm
In mathematics and computer science, an algorithm (/ˈælɡərɪðəm/ AL-gə-ri-dhəm) is a step-by-step procedure for calculations. Algorithms
are used for calculation, data processing, and automated reasoning.
An algorithm is an effective method expressed as a finite list[1] of
well-defined instructions[2] for calculating a function.[3] Starting from an
initial state and initial input (perhaps empty),[4] the instructions describe a
computation that, when executed, proceeds through a finite [5] number of
well-defined successive states, eventually producing "output"[6] and
terminating at a final ending state. The transition from one state to the next
is not necessarily deterministic; some algorithms, known as randomized
algorithms, incorporate random input.[7]

Though al-Khwārizmī's algorism referred to the rules of performing arithmetic using Hindu-Arabic numerals and the systematic solution of
linear and quadratic equations, a partial formalization of what would
become the modern algorithm began with attempts to solve the
Entscheidungsproblem (the "decision problem") posed by David Hilbert in
1928. Subsequent formalizations were framed as attempts to define
"effective calculability"[8] or "effective method";[9] those formalizations
included the Gödel–Herbrand–Kleene recursive functions of 1930, 1934
and 1935, Alonzo Church's lambda calculus of 1936, Emil Post's
"Formulation 1" of 1936, and Alan Turing's Turing machines of 1936–7
and 1939. Giving a formal definition of algorithms, corresponding to the
intuitive notion, remains a challenging problem.
[Figure: Flow chart of an algorithm (Euclid's algorithm) for calculating the greatest common divisor (g.c.d.) of two numbers a and b in locations named A and B. The algorithm proceeds by successive subtractions in two loops: IF the test B ≥ A yields "yes" (or true) (more accurately, the number b in location B is greater than or equal to the number a in location A) THEN the algorithm specifies B ← B − A (meaning the number b − a replaces the old b). Similarly, IF A > B, THEN A ← A − B. The process terminates when (the contents of) B is 0, yielding the g.c.d. in A. (Algorithm derived from Scott 2009:13; symbols and drawing style from Tausworthe 1977.)]

Informal definition
While there is no generally accepted formal definition of "algorithm," an informal definition could be "a set of rules that precisely defines a sequence of operations",[10] which would include all computer programs, including programs that do not perform numeric calculations. For some people, a program is only an algorithm if it stops eventually.[11] For others, a program is only an algorithm if it performs a number of calculation steps.
A prototypical example of an algorithm is Euclid's algorithm to determine the maximum common divisor of two integers; an example (there are others) is described by the flow chart above and as an example in a later section.
Boolos & Jeffrey (1974, 1999) offer an informal meaning of the word in the following quotation:

No human being can write fast enough, or long enough, or small enough† ( †"smaller and smaller
without limit ...you'd be trying to write on molecules, on atoms, on electrons") to list all members of an
enumerably infinite set by writing out their names, one after another, in some notation. But humans can
do something equally useful, in the case of certain enumerably infinite sets: They can give explicit
instructions for determining the nth member of the set, for arbitrary finite n. Such instructions are to
be given quite explicitly, in a form in which they could be followed by a computing machine, or by a
human who is capable of carrying out only very elementary operations on symbols.[12]
The term "enumerably infinite" means "countable using integers perhaps extending to infinity." Thus, Boolos and
Jeffrey are saying that an algorithm implies instructions for a process that "creates" output integers from an arbitrary
"input" integer or integers that, in theory, can be chosen from 0 to infinity. Thus an algorithm can be an algebraic
equation such as y = m + n—two arbitrary "input variables" m and n that produce an output y. But various authors'
attempts to define the notion indicate that the word implies much more than this, something on the order of (for the
addition example):
Precise instructions (in language understood by "the computer")[13] for a fast, efficient, "good"[14] process that
specifies the "moves" of "the computer" (machine or human, equipped with the necessary internally contained
information and capabilities)[15] to find, decode, and then process arbitrary input integers/symbols m and n,
symbols + and = ... and "effectively"[16] produce, in a "reasonable" time,[17] output-integer y at a specified
place and in a specified format.
The concept of algorithm is also used to define the notion of decidability. That notion is central for explaining how
formal systems come into being starting from a small set of axioms and rules. In logic, the time that an algorithm
requires to complete cannot be measured, as it is not apparently related with our customary physical dimension.
From such uncertainties, that characterize ongoing work, stems the unavailability of a definition of algorithm that
suits both concrete (in some sense) and abstract usage of the term.

Formalization
Algorithms are essential to the way computers process data. Many computer programs contain algorithms that detail
the specific instructions a computer should perform (in a specific order) to carry out a specified task, such as
calculating employees' paychecks or printing students' report cards. Thus, an algorithm can be considered to be any
sequence of operations that can be simulated by a Turing-complete system. Authors who assert this thesis include
Minsky (1967), Savage (1987) and Gurevich (2000):
Minsky: "But we will also maintain, with Turing . . . that any procedure which could "naturally" be
called effective, can in fact be realized by a (simple) machine. Although this may seem extreme, the
arguments . . . in its favor are hard to refute".[18]
Gurevich: "...Turing's informal argument in favor of his thesis justifies a stronger thesis: every algorithm
can be simulated by a Turing machine ... according to Savage [1987], an algorithm is a computational
process defined by a Turing machine".[19]
Typically, when an algorithm is associated with processing information, data is read from an input source, written to
an output device, and/or stored for further processing. Stored data is regarded as part of the internal state of the entity
performing the algorithm. In practice, the state is stored in one or more data structures.
For some such computational process, the algorithm must be rigorously defined: specified in the way it applies in all
possible circumstances that could arise. That is, any conditional steps must be systematically dealt with,
case-by-case; the criteria for each case must be clear (and computable).
Because an algorithm is a precise list of precise steps, the order of computation is always critical to the functioning
of the algorithm. Instructions are usually assumed to be listed explicitly, and are described as starting "from the top"
and going "down to the bottom", an idea that is described more formally by flow of control.

So far, this discussion of the formalization of an algorithm has assumed the premises of imperative programming.
This is the most common conception, and it attempts to describe a task in discrete, "mechanical" means. Unique to
this conception of formalized algorithms is the assignment operation, setting the value of a variable. It derives from
the intuition of "memory" as a scratchpad. There is an example below of such an assignment.
For some alternate conceptions of what constitutes an algorithm see functional programming and logic
programming.

Expressing algorithms
Algorithms can be expressed in many kinds of notation, including natural languages, pseudocode, flowcharts,
drakon-charts, programming languages or control tables (processed by interpreters). Natural language expressions of
algorithms tend to be verbose and ambiguous, and are rarely used for complex or technical algorithms. Pseudocode,
flowcharts, drakon-charts and control tables are structured ways to express algorithms that avoid many of the
ambiguities common in natural language statements. Programming languages are primarily intended for expressing
algorithms in a form that can be executed by a computer, but are often used as a way to define or document
algorithms.
There is a wide variety of representations possible and one can express a given Turing machine program as a
sequence of machine tables (see more at finite state machine, state transition table and control table), as flowcharts
and drakon-charts (see more at state diagram), or as a form of rudimentary machine code or assembly code called
"sets of quadruples" (see more at Turing machine).
Representations of algorithms can be classed into three accepted levels of Turing machine description:[20]
• 1 High-level description:
"...prose to describe an algorithm, ignoring the implementation details. At this level we do not need to
mention how the machine manages its tape or head."
• 2 Implementation description:
"...prose used to define the way the Turing machine uses its head and the way that it stores data on its
tape. At this level we do not give details of states or transition function."
• 3 Formal description:
Most detailed, "lowest level", gives the Turing machine's "state table".
For an example of the simple algorithm "Add m+n" described in all three levels see Algorithm examples.

Implementation
Most algorithms are intended to be implemented as computer programs. However, algorithms are also implemented
by other means, such as in a biological neural network (for example, the human brain implementing arithmetic or an
insect looking for food), in an electrical circuit, or in a mechanical device.

Computer algorithms
In computer systems, an algorithm is basically an instance of logic
written in software by software developers to be effective for the
intended "target" computer(s) for the target machines to produce output
from given input (perhaps null).
"Elegant" (compact) programs, "good" (fast) programs : The
notion of "simplicity and elegance" appears informally in Knuth and
precisely in Chaitin:
Knuth: ". . .we want good algorithms in some loosely defined
aesthetic sense. One criterion . . . is the length of time taken to
perform the algorithm . . .. Other criteria are adaptability of the
algorithm to computers, its simplicity and elegance, etc"[21]
Chaitin: " . . . a program is 'elegant,' by which I mean that it's the
smallest possible program for producing the output that it
does"[22]
Chaitin prefaces his definition with: "I'll show you can't prove that a
program is 'elegant'"—such a proof would solve the Halting problem
(ibid).
Algorithm versus function computable by an algorithm: For a given function multiple algorithms may exist. This is true, even without expanding the instruction set available to the programmer. Rogers observes that "It is . . . important to distinguish between the notion of algorithm, i.e. procedure and the notion of function computable by algorithm, i.e. mapping yielded by procedure. The same function may have several different algorithms".[23]
[Figure: Flowchart examples of the canonical Böhm-Jacopini structures: the SEQUENCE (rectangles descending the page), the WHILE-DO and the IF-THEN-ELSE. The three structures are made of the primitive conditional GOTO (IF test=true THEN GOTO step xxx) (a diamond), the unconditional GOTO (rectangle), various assignment operators (rectangle), and HALT (rectangle). Nesting of these structures inside assignment-blocks results in complex diagrams (cf Tausworthe 1977:100,114).]
Unfortunately there may be a tradeoff between goodness (speed) and elegance (compactness)—an elegant program may take more steps to complete a computation than one less elegant. An example that uses Euclid's algorithm appears below.
Computers (and computors), models of computation: A computer (or human "computor"[24]) is a restricted type
of machine, a "discrete deterministic mechanical device"[25] that blindly follows its instructions.[26] Melzak's and
Lambek's primitive models[27] reduced this notion to four elements: (i) discrete, distinguishable locations, (ii)
discrete, indistinguishable counters[28] (iii) an agent, and (iv) a list of instructions that are effective relative to the
capability of the agent.[29]
Minsky describes a more congenial variation of Lambek's "abacus" model in his "Very Simple Bases for
Computability".[30] Minsky's machine proceeds sequentially through its five (or six depending on how one counts)
instructions unless either a conditional IF–THEN GOTO or an unconditional GOTO changes program flow out of
sequence. Besides HALT, Minsky's machine includes three assignment (replacement, substitution)[31] operations:
ZERO (e.g. the contents of location replaced by 0: L ← 0), SUCCESSOR (e.g. L ← L+1), and DECREMENT (e.g.
L ← L − 1).[32] Rarely must a programmer write "code" with such a limited instruction set. But Minsky shows (as do
Melzak and Lambek) that his machine is Turing complete with only four general types of instructions: conditional
GOTO, unconditional GOTO, assignment/replacement/substitution, and HALT.[33]
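A counter machine of this kind is small enough to simulate directly. The following Python sketch is one assumed encoding of the instruction types named above; the instruction spellings, the jump-if-zero form of the conditional GOTO, the register names and the example program are all illustrative choices, not Minsky's own notation.

def run_counter_machine(program, registers):
    """Run numbered instructions until HALT.  Supported forms (assumed here):
       ("ZERO", r)        r <- 0
       ("INC",  r)        r <- r + 1               (SUCCESSOR)
       ("DEC",  r)        r <- r - 1               (DECREMENT)
       ("JZ",   r, step)  IF r = 0 THEN GOTO step  (conditional GOTO)
       ("GOTO", step)     unconditional GOTO
       ("HALT",)
    """
    pc = 0                       # proceed sequentially through the instructions...
    while True:
        instr = program[pc]
        op = instr[0]
        if op == "HALT":
            return registers
        elif op == "ZERO":
            registers[instr[1]] = 0
            pc += 1
        elif op == "INC":
            registers[instr[1]] += 1
            pc += 1
        elif op == "DEC":
            registers[instr[1]] -= 1
            pc += 1
        elif op == "JZ":         # ...unless a jump changes the program flow
            pc = instr[2] if registers[instr[1]] == 0 else pc + 1
        elif op == "GOTO":
            pc = instr[1]

# Add the contents of B into A using only decrement, increment and a conditional jump.
add = [
    ("JZ", "B", 4),   # 0: if B = 0 we are done
    ("DEC", "B"),     # 1: B <- B - 1
    ("INC", "A"),     # 2: A <- A + 1
    ("GOTO", 0),      # 3: repeat the test
    ("HALT",),        # 4
]
print(run_counter_machine(add, {"A": 3, "B": 4}))   # {'A': 7, 'B': 0}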
Simulation of an algorithm: computer (computor) language: Knuth advises the reader that "the best way to learn
an algorithm is to try it . . . immediately take pen and paper and work through an example".[34] But what about a
simulation or execution of the real thing? The programmer must translate the algorithm into a language that the
simulator/computer/computor can effectively execute. Stone gives an example of this: when computing the roots of a
quadratic equation the computor must know how to take a square root. If they don't then for the algorithm to be
effective it must provide a set of rules for extracting a square root.[35]
This means that the programmer must know a "language" that is effective relative to the target computing agent
(computer/computor).
But what model should be used for the simulation? Van Emde Boas observes "even if we base complexity theory on
abstract instead of concrete machines, arbitrariness of the choice of a model remains. It is at this point that the notion
of simulation enters". When speed is being measured, the instruction set matters. For example, the subprogram in
Euclid's algorithm to compute the remainder would execute much faster if the programmer had a "modulus"
(division) instruction available rather than just subtraction (or worse: just Minsky's "decrement").
Structured programming, canonical structures: Per the Church-Turing thesis any algorithm can be computed by a
model known to be Turing complete, and per Minsky's demonstrations Turing completeness requires only four
instruction types—conditional GOTO, unconditional GOTO, assignment, HALT. Kemeny and Kurtz observe that
while "undisciplined" use of unconditional GOTOs and conditional IF-THEN GOTOs can result in "spaghetti code"
a programmer can write structured programs using these instructions; on the other hand "it is also possible, and not
too hard, to write badly structured programs in a structured language".[36] Tausworthe augments the three
Böhm-Jacopini canonical structures:[37] SEQUENCE, IF-THEN-ELSE, and WHILE-DO, with two more:
DO-WHILE and CASE.[38] An additional benefit of a structured program is that it lends itself to proofs of
correctness using mathematical induction.[39]
Canonical flowchart symbols[40]: The graphical aid called a flowchart offers a way to describe and document an
algorithm (and a computer program of one). Like program flow of a Minsky machine, a flowchart always starts at
the top of a page and proceeds down. Its primary symbols are only 4: the directed arrow showing program flow, the
rectangle (SEQUENCE, GOTO), the diamond (IF-THEN-ELSE), and the dot (OR-tie). The Böhm-Jacopini
canonical structures are made of these primitive shapes. Sub-structures can "nest" in rectangles but only if a single
exit occurs from the superstructure. The symbols and their use to build the canonical structures are shown in the
diagram.

Examples

Algorithm example
One of the simplest algorithms is to find the largest number in an
(unsorted) list of numbers. The solution necessarily requires looking at
every number in the list, but only once at each. From this follows a
simple algorithm, which can be stated in a high-level description in English prose, as:
High-level description:
1. Assume the first item is largest.
2. Look at each of the remaining items in the list and if it is larger than the largest item so far, make a note of it.
3. The last noted item is the largest in the list when the process is complete.
[Figure: An animation of the quicksort algorithm sorting an array of randomized values. The red bars mark the pivot element; at the start of the animation, the element farthest to the right-hand side is chosen as the pivot.]
(Quasi-)formal description: Written in prose but much closer to the high-level language of a computer program, the following is the more formal coding of the algorithm in pseudocode or pidgin code:

Algorithm LargestNumber
Input: A non-empty list of numbers L.
Output: The largest number in the list L.

largest ← L0
for each item in the list (Length(L)≥1), do
if the item > largest, then
largest ← the item
return largest

• "←" is a shorthand for "changes to". For instance, "largest ← item" means that the value of largest changes to the value of item.
• "return" terminates the algorithm and outputs the value that follows.
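For comparison, a minimal rendering of the same pseudocode in a real programming language might look like the following Python sketch; the function name largest_number is an illustrative choice, not part of the algorithm itself.

def largest_number(L):
    """Return the largest number in a non-empty list L, following the pseudocode above."""
    largest = L[0]              # largest <- L0
    for item in L:              # for each item in the list
        if item > largest:      # if the item > largest, then
            largest = item      #     largest <- the item
    return largest              # return largest

print(largest_number([3, 9, 2, 48, 4, 17]))   # 48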

Euclid’s algorithm
Euclid’s algorithm appears as Proposition II in Book
VII ("Elementary Number Theory") of his Elements.[41]
Euclid poses the problem: "Given two numbers not
prime to one another, to find their greatest common
measure". He defines "A number [to be] a multitude
composed of units": a counting number, a positive
integer not including 0. And to "measure" is to place a
shorter measuring length s successively (q times) along
longer length l until the remaining portion r is less than
the shorter length s.[42] In modern words, remainder r =
l − q*s, q being the quotient, or remainder r is the
"modulus", the integer-fractional part left over after the
division.[43]

[Figure: The example-diagram of Euclid's algorithm from T.L. Heath 1908, with more detail added. Euclid does not go beyond a third measuring and gives no numerical examples. Nicomachus gives the example of 49 and 21: "I subtract the less from the greater; 28 is left; then again I subtract from this the same 21 (for this is possible); 7 is left; I subtract this from 21, 14 is left; from which I again subtract 7 (for this is possible); 7 is left, but 7 cannot be subtracted from 7." Heath comments that, "The last phrase is curious, but the meaning of it is obvious enough, as also the meaning of the phrase about ending 'at one and the same number'." (Heath 1908:300).]
For Euclid’s method to succeed, the starting lengths must satisfy two requirements: (i) the lengths must not be 0, AND (ii) the subtraction must be “proper”, a test must guarantee that the smaller of the two numbers is subtracted from the larger (alternately, the two can be equal so their subtraction yields 0).
Euclid's original proof adds a third: the two lengths are not prime to one another. Euclid stipulated this so that he could construct a reductio ad absurdum proof that the two numbers' common measure is in fact the greatest.[44] While Nicomachus' algorithm is the same as Euclid's, when the numbers are prime to one another it yields the number "1" for their common measure. So to be precise the following is really Nicomachus' algorithm.

Computer language for Euclid's algorithm

Only a few instruction types are required to execute Euclid's algorithm—some logical tests (conditional GOTO), unconditional GOTO, assignment (replacement), and subtraction.
• A location is symbolized by upper case letter(s), e.g. S, A, etc.
• The varying quantity (number) in a location is written in lower case letter(s) and (usually) associated with the location's name. For example, location L at the start might contain the number l = 3009.
[Figure: A graphical expression of Euclid's algorithm using the example of 1599 and 650: 1599 = 650*2 + 299; 650 = 299*2 + 52; 299 = 52*5 + 39; 52 = 39*1 + 13; 39 = 13*3 + 0.]

An inelegant program for Euclid's algorithm

The following algorithm is framed as Knuth's 4-step version of Euclid's and Nicomachus', but rather than using division to find the remainder it uses successive subtractions of the shorter length s from the remaining length r until r is less than s. The high-level description, shown in boldface, is adapted from Knuth 1973:2–4:
[Figure: "Inelegant" is a translation of Knuth's version of the algorithm with a subtraction-based remainder-loop replacing his use of division (or a "modulus" instruction). Derived from Knuth 1973:2–4. Depending on the two numbers "Inelegant" may compute the g.c.d. in fewer steps than "Elegant".]
INPUT:

1 [Into two locations L and S put the numbers l and s that represent the two lengths]:
INPUT L, S
2 [Initialize R: make the remaining length r equal to the starting/initial/input length l]:
R ← L

E0: [Ensure r ≥ s.]

3 [Ensure the smaller of the two numbers is in S and the larger in R]:
IF R > S THEN
the contents of L is the larger number so skip over the exchange-steps 4, 5 and 6:
GOTO step 6
ELSE
swap the contents of R and S.
4 L ← R (this first step is redundant, but is useful for later discussion).
5 R ← S
6 S ← L

E1: [Find remainder]: Until the remaining length r in R is less than the shorter length s in S, repeatedly subtract the
measuring number s in S from the remaining length r in R.

7 IF S > R THEN
done measuring so
GOTO 10
ELSE
measure again,
8 R ← R − S
9 [Remainder-loop]:
GOTO 7.

E2: [Is the remainder 0?]: EITHER (i) the last measure was exact and the remainder in R is 0 program can halt, OR
(ii) the algorithm must continue: the last measure left a remainder in R less than measuring number in S.

10 IF R = 0 THEN
done so
GOTO step 15
ELSE
CONTINUE TO step 11,

E3: [Interchange s and r]: The nut of Euclid's algorithm. Use remainder r to measure what was previously smaller
number s:; L serves as a temporary location.

11 L ← R
12 R ← S
13 S ← L
14 [Repeat the measuring process]:
GOTO 7

OUTPUT:

15 [Done. S contains the greatest common divisor]:


PRINT S

DONE:

16 HALT, END, STOP.

An elegant program for Euclid's algorithm


The following version of Euclid's algorithm requires only 6 core instructions to do what 13 are required to do by
"Inelegant"; worse, "Inelegant" requires more types of instructions. The flowchart of "Elegant" can be found at the
top of this article. In the (unstructured) Basic language the steps are numbered, and the instruction LET [] = [] is the
assignment instruction symbolized by ←.

5 REM Euclid's algorithm for greatest common divisor


6 PRINT "Type two integers greater than 0"
10 INPUT A,B
20 IF B=0 THEN GOTO 80
30 IF A > B THEN GOTO 60
40 LET B=B-A
50 GOTO 20
60 LET A=A-B
70 GOTO 20
80 PRINT A
90 END

How "Elegant" works: In place of an outer "Euclid loop", "Elegant" shifts back and forth between two "co-loops",
an A > B loop that computes A ← A − B, and a B ≤ A loop that computes B ← B − A. This works because, when at
last the minuend M is less than or equal to the subtrahend S ( Difference = Minuend − Subtrahend), the minuend can
become s (the new measuring length) and the subtrahend can become the new r (the length to be measured); in other
words the "sense" of the subtraction reverses.
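A transliteration of this structure into Python may make the two "co-loops" easier to follow. The function name is an assumption and the code is a sketch of the same subtraction-only scheme, not the original BASIC program.

def gcd_by_subtraction(a, b):
    """Greatest common divisor by repeated subtraction, mirroring "Elegant"."""
    while b != 0:               # the B=0? test of line 20
        if a > b:
            a = a - b           # the A > B co-loop (line 60)
        else:
            b = b - a           # the B <= A co-loop (line 40)
    return a

print(gcd_by_subtraction(40902, 24140))   # 34, Knuth's suggested test pair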

Testing the Euclid algorithms


Does an algorithm do what its author wants it to do? A few test cases usually suffice to confirm core functionality.
One source uses 3009 and 884. Knuth suggested 40902, 24140. Another interesting case is the two relatively prime
numbers 14157 and 5950.
But exceptional cases must be identified and tested. Will "Inelegant" perform properly when R > S, S > R, R = S?
Ditto for "Elegant": B > A, A > B, A = B? (Yes to all). What happens when one number is zero, both numbers are
zero? ("Inelegant" computes forever in all cases; "Elegant" computes forever when A = 0.) What happens if negative
numbers are entered? Fractional numbers? If the input numbers, i.e. the domain of the function computed by the
algorithm/program, is to include only positive integers including zero, then the failures at zero indicate that the
algorithm (and the program that instantiates it) is a partial function rather than a total function. A notable failure due
to exceptions is the Ariane V rocket failure.
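Using the Python transliteration from the previous section, the test pairs mentioned above can be checked mechanically. The expected values (17, 34 and 1) follow from factoring the pairs, and the comment on the exceptional case mirrors the behaviour just described.

# A handful of checks along the lines suggested above.
assert gcd_by_subtraction(3009, 884) == 17
assert gcd_by_subtraction(40902, 24140) == 34
assert gcd_by_subtraction(14157, 5950) == 1      # relatively prime, so the g.c.d. is 1
assert gcd_by_subtraction(884, 3009) == 17       # argument order must not matter
# Exceptional cases: gcd_by_subtraction(0, 5) never terminates, just as the text notes
# for "Elegant" when A = 0, so the function computes only a partial function.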
Proof of program correctness by use of mathematical induction: Knuth demonstrates the application of
mathematical induction to an "extended" version of Euclid's algorithm, and he proposes "a general method
applicable to proving the validity of any algorithm".[45] Tausworthe proposes that a measure of the complexity of a
program be the length of its correctness proof.[46]

Measuring and improving the Euclid algorithms


Elegance (compactness) versus goodness (speed) : With only 6 core instructions, "Elegant" is the clear winner
compared to "Inelegant" at 13 instructions. However, "Inelegant" is faster (it arrives at HALT in fewer steps).
Algorithm analysis[47] indicates why this is the case: "Elegant" does two conditional tests in every subtraction loop,
whereas "Inelegant" only does one. As the algorithm (usually) requires many loop-throughs, on average much time
is wasted doing a "B = 0?" test that is needed only after the remainder is computed.
Can the algorithms be improved?: Once the programmer judges a program "fit" and "effective"—that is, it
computes the function intended by its author—then the question becomes, can it be improved?
The compactness of "Inelegant" can be improved by the elimination of 5 steps. But Chaitin proved that compacting
an algorithm cannot be automated by a generalized algorithm;[48] rather, it can only be done heuristically, i.e. by
exhaustive search (examples to be found at Busy beaver), trial and error, cleverness, insight, application of inductive
reasoning, etc. Observe that steps 4, 5 and 6 are repeated in steps 11, 12 and 13. Comparison with "Elegant" provides
a hint that these steps together with steps 2 and 3 can be eliminated. This reduces the number of core instructions
from 13 to 8, which makes it "more elegant" than "Elegant" at 9 steps.
The speed of "Elegant" can be improved by moving the B=0? test outside of the two subtraction loops. This change
calls for the addition of 3 instructions (B=0?, A=0?, GOTO). Now "Elegant" computes the example-numbers faster;
whether for any given A, B and R, S this is always the case would require a detailed analysis.
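One rough way to make such an analysis concrete is simply to count loop iterations, comparing a subtraction-only g.c.d. with one that uses a remainder ("modulus") operation. The Python functions below are illustrative sketches for that purpose, not measurements of the two programs above.

def count_subtraction_steps(a, b):
    """Iterations used by the subtraction-only scheme."""
    steps = 0
    while b != 0:
        steps += 1
        if a > b:
            a -= b
        else:
            b -= a
    return steps

def count_modulus_steps(a, b):
    """Iterations used when a remainder instruction is available."""
    steps = 0
    while b != 0:
        steps += 1
        a, b = b, a % b
    return steps

print(count_subtraction_steps(40902, 24140))   # roughly twenty iterations
print(count_modulus_steps(40902, 24140))       # fewer than ten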

Algorithmic analysis
It is frequently important to know how much of a particular resource (such as time or storage) is theoretically
required for a given algorithm. Methods have been developed for the analysis of algorithms to obtain such
quantitative answers (estimates); for example, the algorithm above that finds the largest number in a list has a time requirement of O(n), using the
big O notation with n as the length of the list. At all times the algorithm only needs to remember two values: the
largest number found so far, and its current position in the input list. Therefore it is said to have a space requirement
of O(1), if the space required to store the input numbers is not counted, or O(n) if it is counted.
Different algorithms may complete the same task with a different set of instructions in less or more time, space, or
'effort' than others. For example, a binary search algorithm usually outperforms a brute force sequential search when
used for table lookups on sorted lists.
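The contrast can be made concrete with two small Python sketches; the function names, the example table and the counts noted in the comments are illustrative.

def sequential_search(sorted_list, target):
    """Brute-force scan: O(n) comparisons in the worst case."""
    for index, value in enumerate(sorted_list):
        if value == target:
            return index
    return -1

def binary_search(sorted_list, target):
    """Repeatedly halve the search interval: O(log n) comparisons."""
    low, high = 0, len(sorted_list) - 1
    while low <= high:
        mid = (low + high) // 2
        if sorted_list[mid] == target:
            return mid
        elif sorted_list[mid] < target:
            low = mid + 1
        else:
            high = mid - 1
    return -1

table = list(range(0, 1000, 2))        # a sorted table of 500 even numbers
print(sequential_search(table, 998))   # 499, found after scanning the whole table
print(binary_search(table, 998))       # 499, found after nine halvings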

Formal versus empirical


The analysis and study of algorithms is a discipline of computer science, and is often practiced abstractly without the
use of a specific programming language or implementation. In this sense, algorithm analysis resembles other
mathematical disciplines in that it focuses on the underlying properties of the algorithm and not on the specifics of
any particular implementation. Usually pseudocode is used for analysis as it is the simplest and most general
representation. However, ultimately, most algorithms are usually implemented on particular hardware / software
platforms and their algorithmic efficiency is eventually put to the test using real code. For the solution of a "one off"
problem, the efficiency of a particular algorithm may not have significant consequences (unless n is extremely large)
but for algorithms designed for fast interactive, commercial or long life scientific usage it may be critical. Scaling
from small n to large n frequently exposes inefficient algorithms that are otherwise benign.
Empirical testing is useful because it may uncover unexpected interactions that affect performance. Benchmarks may
be used to compare before/after potential improvements to an algorithm after program optimization.

FFT speedup
To illustrate the potential improvements possible even in some extremely "well established" algorithms, a recent
significant innovation, relating to FFT algorithms (used very heavily in the field of image processing), may have
decreased processing times by a factor as high as 10,000. The impact of this speedup enables, for example, portable
computing devices (as well as other devices) to consume less power.[49]

Classification
There are various ways to classify algorithms, each with its own merits.

By implementation
One way to classify algorithms is by implementation means.
• Recursion or iteration: A recursive algorithm is one that invokes (makes reference to) itself repeatedly until a
certain condition matches, which is a method common to functional programming. Iterative algorithms use
repetitive constructs like loops and sometimes additional data structures like stacks to solve the given problems.
Some problems are naturally suited for one implementation or the other. For example, towers of Hanoi is well
understood in terms of a recursive implementation. Every recursive version has an equivalent (but possibly more or
less complex) iterative version, and vice versa (see the short recursive and iterative sketch after this list).
• Logical: An algorithm may be viewed as controlled logical deduction. This notion may be expressed as:
Algorithm = logic + control.[50] The logic component expresses the axioms that may be used in the computation
and the control component determines the way in which deduction is applied to the axioms. This is the basis for
the logic programming paradigm. In pure logic programming languages the control component is fixed and
algorithms are specified by supplying only the logic component. The appeal of this approach is the elegant
semantics: a change in the axioms has a well-defined change in the algorithm.
• Serial or parallel or distributed: Algorithms are usually discussed with the assumption that computers execute
one instruction of an algorithm at a time. Those computers are sometimes called serial computers. An algorithm
designed for such an environment is called a serial algorithm, as opposed to parallel algorithms or distributed
algorithms. Parallel algorithms take advantage of computer architectures where several processors can work on a
problem at the same time, whereas distributed algorithms utilize multiple machines connected with a network.
Parallel or distributed algorithms divide the problem into more symmetrical or asymmetrical subproblems and
collect the results back together. The resource consumption in such algorithms is not only processor cycles on
each processor but also the communication overhead between the processors. Some sorting algorithms can be
parallelized efficiently, but their communication overhead is expensive. Iterative algorithms are generally
parallelizable. Some problems have no parallel algorithms, and are called inherently serial problems.
• Deterministic or non-deterministic: Deterministic algorithms solve the problem with exact decision at every
step of the algorithm whereas non-deterministic algorithms solve problems via guessing although typical guesses
are made more accurate through the use of heuristics.
• Exact or approximate: While many algorithms reach an exact solution, approximation algorithms seek an
approximation that is close to the true solution. Approximation may use either a deterministic or a random
strategy. Such algorithms have practical value for many hard problems.
• Quantum algorithm run on a realistic model of quantum computation. The term is usually used for those
algorithms which seem inherently quantum, or use some essential feature of quantum computation such as
quantum superposition or quantum entanglement.
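As promised in the recursion/iteration item above, the following Python sketch computes the same function both ways. Factorial is used purely as a stand-in example; the equivalence claim in the text is general.

def factorial_recursive(n):
    """The function invokes itself until the base case n = 0 is reached."""
    if n == 0:
        return 1
    return n * factorial_recursive(n - 1)

def factorial_iterative(n):
    """An equivalent iterative version using an explicit loop and an accumulator."""
    result = 1
    for k in range(1, n + 1):
        result *= k
    return result

print(factorial_recursive(10), factorial_iterative(10))   # 3628800 3628800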

By design paradigm
Another way of classifying algorithms is by their design methodology or paradigm. There is a certain number of
paradigms, each different from the other. Furthermore, each of these categories includes many different types of
algorithms. Some commonly found paradigms include:
• Brute-force or exhaustive search. This is the naive method of trying every possible solution to see which is best.
• Divide and conquer. A divide and conquer algorithm repeatedly reduces an instance of a problem to one or more
smaller instances of the same problem (usually recursively) until the instances are small enough to solve easily.
One such example of divide and conquer is merge sorting. Sorting can be done on each segment of data after
dividing data into segments and sorting of entire data can be obtained in the conquer phase by merging the
segments. A simpler variant of divide and conquer is called a decrease and conquer algorithm, that solves an
identical subproblem and uses the solution of this subproblem to solve the bigger problem. Divide and conquer
divides the problem into multiple subproblems and so the conquer stage is more complex than decrease and
conquer algorithms. An example of decrease and conquer algorithm is the binary search algorithm.
• Dynamic programming. When a problem shows optimal substructure, meaning the optimal solution to a
problem can be constructed from optimal solutions to subproblems, and overlapping subproblems, meaning the
same subproblems are used to solve many different problem instances, a quicker approach called dynamic
programming avoids recomputing solutions that have already been computed. For example, in the Floyd–Warshall
algorithm, the shortest path to a goal from a vertex in a weighted graph can be found by using the shortest path to
the goal from all adjacent vertices. Dynamic programming and memoization go together. The main difference
between dynamic programming and divide and conquer is that subproblems are more or less independent in
divide and conquer, whereas subproblems overlap in dynamic programming. The difference between dynamic
programming and straightforward recursion is in caching or memoization of recursive calls. When subproblems
are independent and there is no repetition, memoization does not help; hence dynamic programming is not a
solution for all complex problems. By using memoization or maintaining a table of subproblems already solved,
dynamic programming reduces the exponential nature of many problems to polynomial complexity.
• The greedy method. A greedy algorithm is similar to a dynamic programming algorithm, but the difference is
that solutions to the subproblems do not have to be known at each stage; instead a "greedy" choice can be made of
what looks best for the moment. The greedy method extends the solution with the best possible decision (not all
feasible decisions) at an algorithmic stage based on the current local optimum and the best decision (not all
possible decisions) made in a previous stage. It is not exhaustive and does not give an exact answer to many
problems; but when it works, it is often the fastest method. Well-known greedy algorithms include Huffman
coding and the minimum spanning tree algorithms of Kruskal, Prim and Sollin.
• Linear programming. When solving a problem using linear programming, specific inequalities involving the
inputs are found and then an attempt is made to maximize (or minimize) some linear function of the inputs. Many
problems (such as the maximum flow for directed graphs) can be stated in a linear programming way, and then be
solved by a 'generic' algorithm such as the simplex algorithm. A more complex variant of linear programming is
called integer programming, where the solution space is restricted to the integers.
• Reduction. This technique involves solving a difficult problem by transforming it into a better known problem
for which we have (hopefully) asymptotically optimal algorithms. The goal is to find a reducing algorithm whose
complexity is not dominated by the resulting reduced algorithm's. For example, one selection algorithm for
finding the median in an unsorted list involves first sorting the list (the expensive portion) and then pulling out the
middle element in the sorted list (the cheap portion). This technique is also known as transform and conquer.
• Search and enumeration. Many problems (such as playing chess) can be modeled as problems on graphs. A
graph exploration algorithm specifies rules for moving around a graph and is useful for such problems. This
category also includes search algorithms, branch and bound enumeration and backtracking.
1. Randomized algorithms are those that make some choices randomly (or pseudo-randomly); for some problems, it
can in fact be proven that the fastest solutions must involve some randomness. There are two large classes of such
algorithms:
1. Monte Carlo algorithms return a correct answer with high probability (e.g., RP is the subclass of these that run
in polynomial time).
2. Las Vegas algorithms always return the correct answer, but their running time is only probabilistically bounded,
e.g. ZPP.
2. In optimization problems, heuristic algorithms do not try to find an optimal solution, but rather an approximate
solution, in situations where the time or resources needed to find a perfect solution are impractical. Examples
include local search, tabu search, and simulated annealing, a class of heuristic probabilistic algorithms that
vary the solution of a problem by a random amount. The name "simulated annealing" alludes to the metallurgical
term for the heating and cooling of metal to achieve freedom from defects. The purpose of the random
variance is to find close to globally optimal solutions rather than simply locally optimal ones, the idea being that
the random element decreases as the algorithm settles down to a solution. Approximation algorithms are those
heuristic algorithms that additionally provide some bounds on the error. Genetic algorithms attempt to find
solutions to problems by mimicking biological evolutionary processes, with a cycle of random mutations yielding
successive generations of "solutions"; thus, they emulate reproduction and "survival of the fittest". In genetic
programming, this approach is extended to algorithms, by regarding the algorithm itself as a "solution" to a
problem.
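To make the decrease-and-conquer idea concrete, here is a minimal sketch of binary search over a sorted list, written in Python; the function name and interface are illustrative assumptions rather than anything taken from the text.

# Python
def binary_search(sorted_items, target):
    """Return an index of target in sorted_items, or None if it is absent.
    Decrease and conquer: each step discards half of the remaining range,
    so only one smaller subproblem is left to solve."""
    low, high = 0, len(sorted_items) - 1
    while low <= high:
        mid = (low + high) // 2
        if sorted_items[mid] == target:
            return mid
        elif sorted_items[mid] < target:
            low = mid + 1    # the target can only lie in the upper half
        else:
            high = mid - 1   # the target can only lie in the lower half
    return None

# Example: binary_search([1, 3, 5, 7, 11], 7) returns 3.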
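Likewise, a minimal sketch of memoization, the caching technique on which dynamic programming relies; the Fibonacci function is a standard textbook illustration chosen here for brevity, not an example discussed in the text.

# Python
from functools import lru_cache

@lru_cache(maxsize=None)          # remember every result already computed
def fib(n):
    """Naive recursion recomputes the same overlapping subproblems an
    exponential number of times; with the cache each subproblem is solved
    exactly once, giving linear running time."""
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

# Example: fib(40) returns 102334155 immediately; the uncached recursion
# would make hundreds of millions of calls to reach the same answer.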

By field of study
Every field of science has its own problems and needs efficient algorithms. Related problems in one field are often
studied together. Some example classes are search algorithms, sorting algorithms, merge algorithms, numerical
algorithms, graph algorithms, string algorithms, computational geometric algorithms, combinatorial algorithms,
medical algorithms, machine learning, cryptography, data compression algorithms and parsing techniques.
Fields tend to overlap with each other, and algorithm advances in one field may improve those of other, sometimes
completely unrelated, fields. For example, dynamic programming was invented for optimization of resource
consumption in industry, but is now used in solving a broad range of problems in many fields.

By complexity
Algorithms can be classified by the amount of time they need to complete compared to their input size. There is a
wide variety: some algorithms complete in linear time relative to input size, some do so in an exponential amount of
time or even worse, and some never halt. Additionally, some problems may have multiple algorithms of differing
complexity, while other problems might have no algorithms or no known efficient algorithms. There are also
mappings from some problems to other problems. Owing to this, it is often considered more suitable to classify the
problems themselves, rather than the algorithms, into equivalence classes based on the complexity of the best
possible algorithms for them.
Burgin (2005, p. 24) uses a generalized definition of algorithms that relaxes the common requirement that the output
of the algorithm that computes a function must be determined after a finite number of steps. He defines a
super-recursive class of algorithms as "a class of algorithms in which it is possible to compute functions not
computable by any Turing machine" (Burgin 2005, p. 107). This is closely related to the study of methods of
hypercomputation.

Continuous algorithms
The adjective "continuous" when applied to the word "algorithm" can mean:
1. An algorithm operating on data that represents continuous quantities, even though this data is represented by
discrete approximations—such algorithms are studied in numerical analysis; or
2. An algorithm in the form of a differential equation that operates continuously on the data, running on an analog
computer.

Legal issues
See also: Software patents for a general overview of the patentability of software, including
computer-implemented algorithms.
Algorithms, by themselves, are not usually patentable. In the United States, a claim consisting solely of simple
manipulations of abstract concepts, numbers, or signals does not constitute "processes" (USPTO 2006), and hence
algorithms are not patentable (as in Gottschalk v. Benson). However, practical applications of algorithms are
sometimes patentable. For example, in Diamond v. Diehr, the application of a simple feedback algorithm to aid in
the curing of synthetic rubber was deemed patentable. The patenting of software is highly controversial, and there
are highly criticized patents involving algorithms, especially data compression algorithms, such as Unisys' LZW
patent.
Additionally, some cryptographic algorithms have export restrictions (see export of cryptography).

Etymology
The word "Algorithm", or "Algorism" in some other writing versions, comes from the name al-Khwārizmī,
pronounced in classical Arabic as Al-Khwarithmi. Al-Khwārizmī (Persian: ‫ﺧﻮﺍﺭﺯﻣﻲ‬, c. 780-850) was a Persian
mathematician, astronomer, geographer and a scholar in the House of Wisdom in Baghdad, whose name means "the
native of Khwarezm", a city that was part of the Greater Iran during his era and now is in modern day Uzbekistan.[51]
About 825, he wrote a treatise in the Arabic language, which was translated into Latin in the 12th century under the
title Algoritmi de numero Indorum. This title means "Algoritmi on the numbers of the Indians", where "Algoritmi"
was the translator's Latinization of Al-Khwarizmi's name. Al-Khwarizmi was the most widely read mathematician in
Europe in the late Middle Ages, primarily through his other book, the Algebra.[52] In late medieval Latin,
algorismus, the corruption of his name, simply meant the "decimal number system" that is still the meaning of
modern English algorism. In 17th-century French the word's form, but not its meaning, changed to algorithme.
English adopted the French very soon afterwards, but it wasn't until the late 19th century that "Algorithm" took on
the meaning that it has in modern English.[53]
An alternative etymology claims origin from the term algebra in its late medieval sense of "Arabic arithmetic" and
arithmos, the Greek word for number (thus literally meaning "Arabic numbers" or "Arabic calculation"). The
algorithms of Al-Khwarizmi's works are not meant in their modern sense but as a type of repetitive calculation; his
fundamental work, known as the Algebra, was originally titled "The Compendious Book on Calculation by
Completion and Balancing" and describes types of repetitive calculation and quadratic equations. In that sense,
algorithms were known in Europe long before Al-Khwarizmi. The oldest algorithm known today is the Euclidean
algorithm (see also Extended Euclidean algorithm). Before the coining of the term algorithm, the Greeks called
such procedures anthyphairesis, literally meaning anti-subtraction or reciprocal subtraction (further reading in [54]).
Algorithms were known to the Greeks centuries before[55] Euclid. Instead of the word algebra the Greeks used
the term arithmetica (ἀριθμητική), e.g. in the works of Diophantus, the so-called "father of Algebra" (see also the
articles Diophantine equation and Eudoxos).

History: Development of the notion of "algorithm"

Origin
The word algorithm comes from the name of the 9th century Persian mathematician Abu Abdullah Muhammad ibn
Musa Al-Khwarizmi, whose work built upon that of the 7th-century Indian mathematician Brahmagupta. The word
algorism originally referred only to the rules of performing arithmetic using Hindu-Arabic numerals but evolved via
European Latin translation of Al-Khwarizmi's name into algorithm by the 18th century. The use of the word evolved
to include all definite procedures for solving problems or performing tasks.

Discrete and distinguishable symbols


Tally-marks: To keep track of their flocks, their sacks of grain and their money the ancients used tallying:
accumulating stones or marks scratched on sticks, or making discrete symbols in clay. Through the Babylonian and
Egyptian use of marks and symbols, eventually Roman numerals and the abacus evolved (Dilson, p. 16–41). Tally
marks appear prominently in unary numeral system arithmetic used in Turing machine and Post–Turing machine
computations.

Manipulation of symbols as "place holders" for numbers: algebra


The work of the ancient Greek geometers (Euclidean algorithm), the Indian mathematician Brahmagupta, and the
Persian mathematician Al-Khwarizmi (from whose name the terms "algorism" and "algorithm" are derived), and
Western European mathematicians culminated in Leibniz's notion of the calculus ratiocinator (ca 1680):
A good century and a half ahead of his time, Leibniz proposed an algebra of logic, an algebra that would
specify the rules for manipulating logical concepts in the manner that ordinary algebra specifies the rules
for manipulating numbers.[56]

Mechanical contrivances with discrete states


The clock: Bolter credits the invention of the weight-driven clock as "The key invention [of Europe in the Middle
Ages]", in particular the verge escapement[57] that provides us with the tick and tock of a mechanical clock. "The
accurate automatic machine"[58] led immediately to "mechanical automata" beginning in the 13th century and finally
to "computational machines"—the difference engine and analytical engines of Charles Babbage and Countess Ada
Lovelace, mid-19th century.[59] Lovelace is credited with the first creation of an algorithm intended for processing
on a computer - Babbage's analytical engine, the first device considered a real Turing-complete computer instead of
just a calculator - and is sometimes called "history's first programmer" as a result, though a full implementation of
Babbage's second device would not be realized until decades after her lifetime.
Logical machines 1870—Stanley Jevons' "logical abacus" and "logical machine": The technical problem was to
reduce Boolean equations when presented in a form similar to what are now known as Karnaugh maps. Jevons
(1880) describes first a simple "abacus" of "slips of wood furnished with pins, contrived so that any part or class of
the [logical] combinations can be picked out mechanically . . . More recently however I have reduced the system to a
completely mechanical form, and have thus embodied the whole of the indirect process of inference in what may be
called a Logical Machine" His machine came equipped with "certain moveable wooden rods" and "at the foot are 21
keys like those of a piano [etc] . . .". With this machine he could analyze a "syllogism or any other simple logical
argument".[60]
This machine he displayed in 1870 before the Fellows of the Royal Society.[61] Another logician John Venn,
however, in his 1881 Symbolic Logic, turned a jaundiced eye to this effort: "I have no high estimate myself of the
interest or importance of what are sometimes called logical machines ... it does not seem to me that any contrivances
at present known or likely to be discovered really deserve the name of logical machines"; see more at Algorithm
characterizations. But not to be outdone he too presented "a plan somewhat analogous, I apprehend, to Prof. Jevon's
abacus ... [And] [a]gain, corresponding to Prof. Jevons's logical machine, the following contrivance may be
described. I prefer to call it merely a logical-diagram machine ... but I suppose that it could do very completely all
that can be rationally expected of any logical machine".[62]
Jacquard loom, Hollerith punch cards, telegraphy and telephony—the electromechanical relay: Bell and
Newell (1971) indicate that the Jacquard loom (1801), precursor to Hollerith cards (punch cards, 1887), and
"telephone switching technologies" were the roots of a tree leading to the development of the first computers.[63] By
the mid-19th century the telegraph, the precursor of the telephone, was in use throughout the world, its discrete and
distinguishable encoding of letters as "dots and dashes" a common sound. By the late 19th century the ticker tape (ca
1870s) was in use, as was the use of Hollerith cards in the 1890 U.S. census. Then came the teleprinter (ca. 1910)
with its punched-paper use of Baudot code on tape.
Telephone-switching networks of electromechanical relays (invented 1835) were behind the work of George Stibitz
(1937), the inventor of the digital adding device. As he worked in Bell Laboratories, he observed the "burdensome"
use of mechanical calculators with gears. "He went home one evening in 1937 intending to test his idea... When the
tinkering was over, Stibitz had constructed a binary adding device".[64]
Davis (2000) observes the particular importance of the electromechanical relay (with its two "binary states" open and
closed):
"It was only with the development, beginning in the 1930s, of electromechanical calculators using electrical
relays, that machines were built having the scope Babbage had envisioned."[65]

Mathematics during the 19th century up to the mid-20th century


Symbols and rules: In rapid succession the mathematics of George Boole (1847, 1854), Gottlob Frege (1879), and
Giuseppe Peano (1888–1889) reduced arithmetic to a sequence of symbols manipulated by rules. Peano's The
principles of arithmetic, presented by a new method (1888) was "the first attempt at an axiomatization of
mathematics in a symbolic language".[66]
But Heijenoort gives Frege (1879) this kudos: Frege's is "perhaps the most important single work ever written in
logic. ... in which we see a " 'formula language', that is a lingua characterica, a language written with special
symbols, "for pure thought", that is, free from rhetorical embellishments ... constructed from specific symbols that
are manipulated according to definite rules".[67] The work of Frege was further simplified and amplified by Alfred
North Whitehead and Bertrand Russell in their Principia Mathematica (1910–1913).
The paradoxes: At the same time a number of disturbing paradoxes appeared in the literature, in particular the
Burali-Forti paradox (1897), the Russell paradox (1902–03), and the Richard Paradox.[68] The resultant
considerations led to Kurt Gödel's paper (1931)—he specifically cites the paradox of the liar—that completely
reduces rules of recursion to numbers.
Effective calculability: In an effort to solve the Entscheidungsproblem defined precisely by Hilbert in 1928,
mathematicians first set about to define what was meant by an "effective method" or "effective calculation" or
"effective calculability" (i.e., a calculation that would succeed). In rapid succession the following appeared: Alonzo
Church, Stephen Kleene and J.B. Rosser's λ-calculus,[69] a finely honed definition of "general recursion" from the
work of Gödel acting on suggestions of Jacques Herbrand (cf. Gödel's Princeton lectures of 1934) and subsequent
simplifications by Kleene;[70] Church's proof[71] that the Entscheidungsproblem was unsolvable; Emil Post's
definition of effective calculability as a worker mindlessly following a list of instructions to move left or right
through a sequence of rooms and while there either mark or erase a paper or observe the paper and make a yes-no
decision about the next instruction;[72] Alan Turing's proof that the Entscheidungsproblem was unsolvable by use
of his "a- [automatic-] machine",[73] in effect almost identical to Post's "formulation"; J. Barkley Rosser's definition
of "effective method" in terms of "a machine";[74] S. C. Kleene's proposal of a precursor to the "Church thesis" that
he called "Thesis I";[75] and, a few years later, Kleene's renaming of his Thesis as "Church's Thesis"[76] and his
proposal of "Turing's Thesis".[77]

Emil Post (1936) and Alan Turing (1936–37, 1939)


Here is a remarkable coincidence of two men not knowing each other but describing a process of men-as-computers
working on computations—and they yield virtually identical definitions.
Emil Post (1936) described the actions of a "computer" (human being) as follows:
"...two concepts are involved: that of a symbol space in which the work leading from problem to answer is to
be carried out, and a fixed unalterable set of directions.
His symbol space would be
"a two way infinite sequence of spaces or boxes... The problem solver or worker is to move and work in this
symbol space, being capable of being in, and operating in but one box at a time.... a box is to admit of but two
possible conditions, i.e., being empty or unmarked, and having a single mark in it, say a vertical stroke.
"One box is to be singled out and called the starting point. ...a specific problem is to be given in symbolic form
by a finite number of boxes [i.e., INPUT] being marked with a stroke. Likewise the answer [i.e., OUTPUT] is
to be given in symbolic form by such a configuration of marked boxes....


"A set of directions applicable to a general problem sets up a deterministic process when applied to each
specific problem. This process terminates only when it comes to the direction of type (C ) [i.e., STOP]".[78]
See more at Post–Turing machine
Alan Turing's work[79] preceded that of Stibitz (1937); it is
unknown whether Stibitz knew of the work of Turing. Turing's
biographer believed that Turing's use of a typewriter-like model
derived from a youthful interest: "Alan had dreamt of inventing
typewriters as a boy; Mrs. Turing had a typewriter; and he could
well have begun by asking himself what was meant by calling a
typewriter 'mechanical'".[80] Given the prevalence of Morse code
and telegraphy, ticker tape machines, and teletypewriters we might
conjecture that all were influences.

[Image: Alan Turing's statue at Bletchley Park.]
Turing—his model of computation is now called a Turing machine—begins, as did Post, with an analysis of a human
computer that he whittles down to a simple set of basic motions and "states of mind". But he continues a step further
and creates a machine as a model of computation of numbers.[81]
"Computing is normally done by writing certain symbols on paper. We may suppose this paper is divided into
squares like a child's arithmetic book....I assume then that the computation is carried out on one-dimensional
paper, i.e., on a tape divided into squares. I shall also suppose that the number of symbols which may be
printed is finite....
"The behaviour of the computer at any moment is determined by the symbols which he is observing, and his
"state of mind" at that moment. We may suppose that there is a bound B to the number of symbols or squares
which the computer can observe at one moment. If he wishes to observe more, he must use successive
observations. We will also suppose that the number of states of mind which need be taken into account is
finite...
"Let us imagine that the operations performed by the computer to be split up into 'simple operations' which are
so elementary that it is not easy to imagine them further divided."[82]
Turing's reduction yields the following:
"The simple operations must therefore include:
"(a) Changes of the symbol on one of the observed squares
"(b) Changes of one of the squares observed to another square within L squares of one of the previously
observed squares.
"It may be that some of these change necessarily invoke a change of state of mind. The most general single operation
must therefore be taken to be one of the following:
"(A) A possible change (a) of symbol together with a possible change of state of mind.
"(B) A possible change (b) of observed squares, together with a possible change of state of mind"
"We may now construct a machine to do the work of this computer."
A few years later, Turing expanded his analysis (thesis, definition) with this forceful expression of it:
"A function is said to be "effectively calculable" if its values can be found by some purely mechanical process.
Though it is fairly easy to get an intuitive grasp of this idea, it is nevertheless desirable to have some more
definite, mathematical expressible definition . . . [he discusses the history of the definition pretty much as
presented above with respect to Gödel, Herbrand, Kleene, Church, Turing and Post] . . . We may take this
statement literally, understanding by a purely mechanical process one which could be carried out by a
machine. It is possible to give a mathematical description, in a certain normal form, of the structures of these
machines. The development of these ideas leads to the author's definition of a computable function, and to an
identification of computability † with effective calculability . . . .
"† We shall use the expression "computable function" to mean a function calculable by a machine, and
we let "effectively calculable" refer to the intuitive idea without particular identification with any one of
these definitions".[83]

J. B. Rosser (1939) and S. C. Kleene (1943)


J. Barkley Rosser defined an 'effective [mathematical] method' in the following manner (boldface added):
"'Effective method' is used here in the rather special sense of a method each step of which is precisely
determined and which is certain to produce the answer in a finite number of steps. With this special meaning,
three different precise definitions have been given to date. [his footnote #5; see discussion immediately
below]. The simplest of these to state (due to Post and Turing) says essentially that an effective method of
solving certain sets of problems exists if one can build a machine which will then solve any problem of
the set with no human intervention beyond inserting the question and (later) reading the answer. All
three definitions are equivalent, so it doesn't matter which one is used. Moreover, the fact that all three are
equivalent is a very strong argument for the correctness of any one." (Rosser 1939:225–6)
Rosser's footnote #5 references the work of (1) Church and Kleene and their definition of λ-definability, in particular
Church's use of it in his An Unsolvable Problem of Elementary Number Theory (1936); (2) Herbrand and Gödel and
their use of recursion in particular Gödel's use in his famous paper On Formally Undecidable Propositions of
Principia Mathematica and Related Systems I (1931); and (3) Post (1936) and Turing (1936–7) in their
mechanism-models of computation.
Stephen C. Kleene defined his now-famous "Thesis I", later known as the Church–Turing thesis. But he did this in
the following context (boldface in original):
"12. Algorithmic theories... In setting up a complete algorithmic theory, what we do is to describe a
procedure, performable for each set of values of the independent variables, which procedure necessarily
terminates and in such manner that from the outcome we can read a definite answer, "yes" or "no," to the
question, "is the predicate value true?"" (Kleene 1943:273)

History after 1950


A number of efforts have been directed toward further refinement of the definition of "algorithm", and activity is
on-going because of issues surrounding, in particular, foundations of mathematics (especially the Church–Turing
thesis) and philosophy of mind (especially arguments around artificial intelligence). For more, see Algorithm
characterizations.

Notes
[1] "Any classical mathematical algorithm, for example, can be described in a finite number of English words" (Rogers 1987:2).
[2] Well defined with respect to the agent that executes the algorithm: "There is a computing agent, usually human, which can react to the
instructions and carry out the computations" (Rogers 1987:2).
[3] "an algorithm is a procedure for computing a function (with respect to some chosen notation for integers) ... this limitation (to numerical
functions) results in no loss of generality", (Rogers 1987:1).
[4] "An algorithm has zero or more inputs, i.e., quantities which are given to it initially before the algorithm begins" (Knuth 1973:5).
[5] "A procedure which has all the characteristics of an algorithm except that it possibly lacks finiteness may be called a 'computational method'"
(Knuth 1973:5).
[6] "An algorithm has one or more outputs, i.e. quantities which have a specified relation to the inputs" (Knuth 1973:5).
[7] Whether or not a process with random interior processes (not including the input) is an algorithm is debatable. Rogers opines that: "a
computation is carried out in a discrete stepwise fashion, without use of continuous methods or analogue devices . . . carried forward
deterministically, without resort to random methods or devices, e.g., dice" Rogers 1987:2.
[8] Kleene 1943 in Davis 1965:274


[9] Rosser 1939 in Davis 1965:225
[10] Stone 1973:4
[11] Stone simply requires that "it must terminate in a finite number of steps" (Stone 1973:7–8).
[12] Boolos and Jeffrey 1974,1999:19
[13] cf Stone 1972:5
[14] Knuth 1973:7 states: "In practice we not only want algorithms, we want good algorithms ... one criterion of goodness is the length of time
taken to perform the algorithm ... other criteria are the adaptability of the algorithm to computers, its simplicity and elegance, etc."
[15] cf Stone 1973:6
[16] Stone 1973:7–8 states that there must be, "...a procedure that a robot [i.e., computer] can follow in order to determine precisely how to obey
the instruction." Stone adds finiteness of the process, and definiteness (having no ambiguity in the instructions) to this definition.
[17] Knuth, loc. cit
[18] Minsky 1967:105
[19] Gurevich 2000:1, 3
[20] Sipser 2006:157
[21] Knuth 1973:7
[22] Chaitin 2005:32
[23] Rogers 1987:1–2
[24] In his essay "Calculations by Man and Machine: Conceptual Analysis" Sieg 2002:390 credits this distinction to Robin Gandy, cf Wilfried
Sieg, et al., 2002 Reflections on the foundations of mathematics: Essays in honor of Solomon Feferman, Association for Symbolic Logic, A. K.
Peters Ltd, Natick, MA.
[25] cf Gandy 1980:126, Robin Gandy Church's Thesis and Principles for Mechanisms appearing on pp. 123–148 in J. Barwise et al. 1980 The
Kleene Symposium, North-Holland Publishing Company.
[26] A "robot": "A computer is a robot that performs any task that can be described as a sequence of instructions." cf Stone 1972:3
[27] Lambek’s "abacus" is a "countably infinite number of locations (holes, wires etc.) together with an unlimited supply of counters (pebbles,
beads, etc). The locations are distinguishable, the counters are not". The holes have unlimited capacity, and standing by is an agent who
understands and is able to carry out the list of instructions" (Lambek 1961:295). Lambek references Melzak who defines his Q-machine as "an
indefinitely large number of locations . . . an indefinitely large supply of counters distributed among these locations, a program, and an
operator whose sole purpose is to carry out the program" (Melzak 1961:283). B-B-J (loc. cit.) add the stipulation that the holes are "capable of
holding any number of stones" (p. 46). Both Melzak and Lambek appear in The Canadian Mathematical Bulletin, vol. 4, no. 3, September
1961.
[28] If no confusion results, the word "counters" can be dropped, and a location can be said to contain a single "number".
[29] "We say that an instruction is effective if there is a procedure that the robot can follow in order to determine precisely how to obey the
instruction." (Stone 1972:6)
[30] cf Minsky 1967: Chapter 11 "Computer models" and Chapter 14 "Very Simple Bases for Computability" pp. 255–281 in particular
[31] cf Knuth 1973:3.
[32] But always preceded by IF–THEN to avoid improper subtraction.
[33] However, a few different assignment instructions (e.g. DECREMENT, INCREMENT and ZERO/CLEAR/EMPTY for a Minsky machine)
are also required for Turing-completeness; their exact specification is somewhat up to the designer. The unconditional GOTO is a
convenience; it can be constructed by initializing a dedicated location to zero e.g. the instruction " Z ← 0 "; thereafter the instruction IF Z=0
THEN GOTO xxx is unconditional.
[34] Knuth 1973:4
[35] Stone 1972:5. Methods for extracting roots are not trivial: see Methods of computing square roots.
[36] John G. Kemeny and Thomas E. Kurtz 1985 Back to Basic: The History, Corruption, and Future of the Language, Addison-Wesley
Publishing Company, Inc. Reading, MA, ISBN 0-201-13433-0.
[37] Tausworthe 1977:101
[38] Tausworthe 1977:142
[39] Knuth 1973 section 1.2.1, expanded by Tausworthe 1977 at pages 100ff and Chapter 9.1
[40] cf Tausworthe 1977
[41] Heath 1908:300; Hawking’s Dover 2005 edition derives from Heath.
[42] " 'Let CD, measuring BF, leave FA less than itself.' This is a neat abbreviation for saying, measure along BA successive lengths equal to CD
until a point F is reached such that the length FA remaining is less than CD; in other words, let BF be the largest exact multiple of CD
contained in BA" (Heath 1908:297
[43] For modern treatments using division in the algorithm see Hardy and Wright 1979:180, Knuth 1973:2 (Volume 1), plus more discussion of
Euclid's algorithm in Knuth 1969:293-297 (Volume 2).
[44] Euclid covers this question in his Proposition 1.
[45] Knuth 1973:13–18. He credits "the formulation of algorithm-proving in terms of assertions and induction" to R. W. Floyd, Peter Naur, C. A.
R. Hoare, H. H. Goldstine and J. von Neumann. Tausworth 1977 borrows Knuth's Euclid example and extends Knuth's method in section 9.1
Formal Proofs (pages 288–298).
[46] Tausworthe 1977:294


[47] cf Knuth 1973:7 (Vol. I), and his more-detailed analyses on pp. 1969:294-313 (Vol II).
[48] Breakdown occurs when an algorithm tries to compact itself. Success would solve the Halting problem.
[49] Haitham Hassanieh, Piotr Indyk, Dina Katabi, and Eric Price, ACM-SIAM Symposium On Discrete Algorithms (SODA)
(http://siam.omnibooksonline.com/2012SODA/data/papers/500.pdf), Kyoto, January 2012. See also the sFFT Web Page
(http://groups.csail.mit.edu/netmit/sFFT/).
[50] Kowalski 1979
[51] Toomer 1990.
[52] Foremost mathematical texts in history (http://www-history.mcs.st-and.ac.uk/Extras/Boyer_Foremost_Text.html), according to Carl B.
Boyer.
[53] Etymology of algorithm at Dictionary.Reference.com (http://dictionary.reference.com/browse/algorithm)
[54] http://livetoad.org/Courses/Documents/bb63/Notes/continued_fractions.pdf
[55] Becker O (1933). "Eudoxus-Studien I. Eine voreuklidische Proportionslehre und ihre Spuren bei Aristoteles und Euklid". Quellen und
Studien zur Geschichte der Mathematik B 2: 311–333.
[56] Davis 2000:18
[57] Bolter 1984:24
[58] Bolter 1984:26
[59] Bolter 1984:33–34, 204–206.
[60] All quotes from W. Stanley Jevons 1880 Elementary Lessons in Logic: Deductive and Inductive, Macmillan and Co., London and New
York. Republished as a googlebook; cf Jevons 1880:199–201. Louis Couturat 1914 the Algebra of Logic, The Open Court Publishing
Company, Chicago and London. Republished as a googlebook; cf Couturat 1914:75–76 gives a few more details; interestingly he compares
this to a typewriter as well as a piano. Jevons states that the account is to be found at Jan . 20, 1870 The Proceedings of the Royal Society.
[61] Jevons 1880:199–200
[62] All quotes from John Venn 1881 Symbolic Logic, Macmillan and Co., London. Republished as a googlebook. cf Venn 1881:120–125. The
interested reader can find a deeper explanation in those pages.
[63] Bell and Newell diagram 1971:39, cf. Davis 2000
[64] Melina Hill, Valley News Correspondent, A Tinkerer Gets a Place in History, Valley News West Lebanon NH, Thursday March 31, 1983,
page 13.
[65] Davis 2000:14
[66] van Heijenoort 1967:81ff
[67] van Heijenoort's commentary on Frege's Begriffsschrift, a formula language, modeled upon that of arithmetic, for pure thought in van
Heijenoort 1967:1
[68] Dixon 1906, cf. Kleene 1952:36–40
[69] cf. footnote in Alonzo Church 1936a in Davis 1965:90 and 1936b in Davis 1965:110
[70] Kleene 1935–6 in Davis 1965:237ff, Kleene 1943 in Davis 1965:255ff
[71] Church 1936 in Davis 1965:88ff
[72] cf. "Formulation I", Post 1936 in Davis 1965:289–290
[73] Turing 1936–7 in Davis 1965:116ff
[74] Rosser 1939 in Davis 1965:226
[75] Kleene 1943 in Davis 1965:273–274
[76] Kleene 1952:300, 317
[77] Kleene 1952:376
[78] Turing 1936–7 in Davis 1965:289–290
[79] Turing 1936 in Davis 1965, Turing 1939 in Davis 1965:160
[80] Hodges, p. 96
[81] Turing 1936–7:116
[82] Turing 1936–7 in Davis 1965:136
[83] Turing 1939 in Davis 1965:160

References
• Axt, P. (1959) On a Subrecursive Hierarchy and Primitive Recursive Degrees, Transactions of the American
Mathematical Society 92, pp. 85–105
• Bell, C. Gordon and Newell, Allen (1971), Computer Structures: Readings and Examples, McGraw-Hill Book
Company, New York. ISBN 0-07-004357-4.
• Blass, Andreas; Gurevich, Yuri (2003). "Algorithms: A Quest for Absolute Definitions" (http://research.
microsoft.com/~gurevich/Opera/164.pdf). Bulletin of European Association for Theoretical Computer Science
81. Includes an excellent bibliography of 56 references.
• Boolos, George; Jeffrey, Richard (1974, 1999). Computability and Logic (4th ed.). Cambridge University Press,
London. ISBN 0-521-20402-X. : cf. Chapter 3 Turing machines where they discuss "certain enumerable sets not
effectively (mechanically) enumerable".
• Burgin, Mark (2004). Super-Recursive Algorithms. Springer. ISBN 978-0-387-95569-8.
• Campagnolo, M.L., Moore, C., and Costa, J.F. (2000) An analog characterization of the subrecursive functions. In
Proc. of the 4th Conference on Real Numbers and Computers, Odense University, pp. 91–109
• Church, Alonzo (1936a). "An Unsolvable Problem of Elementary Number Theory". The American Journal of
Mathematics 58 (2): 345–363. doi: 10.2307/2371045 (http://dx.doi.org/10.2307/2371045). JSTOR  2371045
(http://www.jstor.org/stable/2371045). Reprinted in The Undecidable, p. 89ff. The first expression of
"Church's Thesis". See in particular page 100 (The Undecidable) where he defines the notion of "effective
calculability" in terms of "an algorithm", and he uses the word "terminates", etc.
• Church, Alonzo (1936b). "A Note on the Entscheidungsproblem". The Journal of Symbolic Logic 1 (1): 40–41.
doi: 10.2307/2269326 (http://dx.doi.org/10.2307/2269326). JSTOR  2269326 (http://www.jstor.org/stable/
2269326). Church, Alonzo (1936). "Correction to a Note on the Entscheidungsproblem". The Journal of Symbolic
Logic 1 (3): 101–102. doi: 10.2307/2269030 (http://dx.doi.org/10.2307/2269030). JSTOR  2269030 (http://
www.jstor.org/stable/2269030). Reprinted in The Undecidable, p. 110ff. Church shows that the
Entscheidungsproblem is unsolvable in about 3 pages of text and 3 pages of footnotes.
• Daffa', Ali Abdullah al- (1977). The Muslim contribution to mathematics. London: Croom Helm.
ISBN 0-85664-464-1.
• Davis, Martin (1965). The Undecidable: Basic Papers On Undecidable Propositions, Unsolvable Problems and
Computable Functions. New York: Raven Press. ISBN 0-486-43228-9. Davis gives commentary before each
article. Papers of Gödel, Alonzo Church, Turing, Rosser, Kleene, and Emil Post are included; those cited in the
article are listed here by author's name.
• Davis, Martin (2000). Engines of Logic: Mathematicians and the Origin of the Computer. New York: W. W.
Norton. ISBN 0-393-32229-7. Davis offers concise biographies of Leibniz, Boole, Frege, Cantor, Hilbert, Gödel
and Turing with von Neumann as the show-stealing villain. Very brief bios of Joseph-Marie Jacquard, Babbage,
Ada Lovelace, Claude Shannon, Howard Aiken, etc.
• Paul E. Black, algorithm (http://www.nist.gov/dads/HTML/algorithm.html) at the NIST Dictionary of
Algorithms and Data Structures.
• Dennett, Daniel (1995). Darwin's Dangerous Idea. New York: Touchstone/Simon & Schuster.
ISBN 0-684-80290-2.
• Yuri Gurevich, Sequential Abstract State Machines Capture Sequential Algorithms (http://research.microsoft.
com/~gurevich/Opera/141.pdf), ACM Transactions on Computational Logic, Vol 1, no 1 (July 2000), pages
77–111. Includes bibliography of 33 sources.
• Kleene, Stephen C. (1936). "General Recursive Functions of Natural Numbers" (http://gdz.sub.uni-goettingen.
de/index.php?id=11&PPN=GDZPPN002278499&L=1). Mathematische Annalen 112 (5): 727–742. doi:
10.1007/BF01565439 (http://dx.doi.org/10.1007/BF01565439). Presented to the American Mathematical
Society, September 1935. Reprinted in The Undecidable, p. 237ff. Kleene's definition of "general recursion"
(known now as mu-recursion) was used by Church in his 1935 paper An Unsolvable Problem of Elementary
Number Theory that proved the "decision problem" to be "undecidable" (i.e., a negative result).
• Kleene, Stephen C. (1943). "Recursive Predicates and Quantifiers". American Mathematical Society Transactions
54 (1): 41–73. doi: 10.2307/1990131 (http://dx.doi.org/10.2307/1990131). JSTOR  1990131 (http://www.
jstor.org/stable/1990131). Reprinted in The Undecidable, p. 255ff. Kleene refined his definition of "general
recursion" and proceeded in his chapter "12. Algorithmic theories" to posit "Thesis I" (p. 274); he would later
repeat this thesis (in Kleene 1952:300) and name it "Church's Thesis"(Kleene 1952:317) (i.e., the Church thesis).
• Kleene, Stephen C. (First Edition 1952). Introduction to Metamathematics (Tenth Edition 1991 ed.).
North-Holland Publishing Company. ISBN 0-7204-2103-9. Excellent—accessible, readable—reference source
for mathematical "foundations".
• Knuth, Donald (1997). Fundamental Algorithms, Third Edition. Reading, Massachusetts: Addison–Wesley.
ISBN 0-201-89683-4.
• Knuth, Donald (1969). Volume 2/Seminumerical Algorithms, The Art of Computer Programming First Edition.
Reading, Massachusetts: Addison–Wesley.
• Kosovsky, N. K. Elements of Mathematical Logic and its Application to the theory of Subrecursive Algorithms,
LSU Publ., Leningrad, 1981
• Kowalski, Robert (1979). "Algorithm=Logic+Control". Communications of the ACM 22 (7): 424–436. doi:
10.1145/359131.359136 (http://dx.doi.org/10.1145/359131.359136).
• A. A. Markov (1954) Theory of algorithms. [Translated by Jacques J. Schorr-Kon and PST staff] Imprint
Moscow, Academy of Sciences of the USSR, 1954 [i.e., Jerusalem, Israel Program for Scientific Translations,
1961; available from the Office of Technical Services, U.S. Dept. of Commerce, Washington] Description 444
p. 28 cm. Added t.p. in Russian Translation of Works of the Mathematical Institute, Academy of Sciences of the
USSR, v. 42. Original title: Teoriya algorifmov. [QA248.M2943 Dartmouth College library. U.S. Dept. of
Commerce, Office of Technical Services, number OTS 60-51085.]
• Minsky, Marvin (1967). Computation: Finite and Infinite Machines (First ed.). Prentice-Hall, Englewood Cliffs,
NJ. ISBN 0-13-165449-7. Minsky expands his "...idea of an algorithm—an effective procedure..." in chapter 5.1
Computability, Effective Procedures and Algorithms. Infinite machines."
• Post, Emil (1936). "Finite Combinatory Processes, Formulation I". The Journal of Symbolic Logic 1 (3): 103–105.
doi: 10.2307/2269031 (http://dx.doi.org/10.2307/2269031). JSTOR  2269031 (http://www.jstor.org/stable/
2269031). Reprinted in The Undecidable, p. 289ff. Post defines a simple algorithmic-like process of a man
writing marks or erasing marks and going from box to box and eventually halting, as he follows a list of simple
instructions. This is cited by Kleene as one source of his "Thesis I", the so-called Church–Turing thesis.
• Rogers, Jr, Hartley (1987). Theory of Recursive Functions and Effective Computability. The MIT Press.
ISBN 0-262-68052-1 (pbk.).
• Rosser, J.B. (1939). "An Informal Exposition of Proofs of Godel's Theorem and Church's Theorem". Journal of
Symbolic Logic 4. Reprinted in The Undecidable, p. 223ff. Herein is Rosser's famous definition of "effective
method": "...a method each step of which is precisely predetermined and which is certain to produce the answer in
a finite number of steps... a machine which will then solve any problem of the set with no human intervention
beyond inserting the question and (later) reading the answer" (p. 225–226, The Undecidable)
• Scott, Michael L. (2009). Programming Language Pragmatics (3rd ed.). Morgan Kaufmann Publishers/Elsevier.
ISBN 978-0-12-374514-9.
• Sipser, Michael (2006). Introduction to the Theory of Computation. PWS Publishing Company.
ISBN 0-534-94728-X.
• Stone, Harold S. (1972). Introduction to Computer Organization and Data Structures (1972 ed.). McGraw-Hill,
New York. ISBN 0-07-061726-0. Cf. in particular the first chapter titled: Algorithms, Turing Machines, and
Programs. His succinct informal definition: "...any sequence of instructions that can be obeyed by a robot, is
called an algorithm" (p. 4).
• Tausworthe, Robert C (1977). Standardized Development of Computer Software Part 1 Methods. Englewood
Cliffs NJ: Prentice-Hall, Inc. ISBN 0-13-842195-1.
• Turing, Alan M. (1936–7). "On Computable Numbers, With An Application to the Entscheidungsproblem".
Proceedings of the London Mathematical Society, Series 2 42: 230–265. doi: 10.1112/plms/s2-42.1.230 (http://
dx.doi.org/10.1112/plms/s2-42.1.230). Corrections, ibid, vol. 43 (1937) pp. 544–546. Reprinted in The
Undecidable, p. 116ff. Turing's famous paper completed as a Master's dissertation while at King's College
Cambridge UK.
• Turing, Alan M. (1939). "Systems of Logic Based on Ordinals". Proceedings of the London Mathematical Society
45: 161–228. doi: 10.1112/plms/s2-45.1.161 (http://dx.doi.org/10.1112/plms/s2-45.1.161). Reprinted in The
Undecidable, p. 155ff. Turing's paper that defined "the oracle" was his PhD thesis while at Princeton USA.
• United States Patent and Trademark Office (2006), 2106.02 Mathematical Algorithms: 2100 Patentability
(http://www.uspto.gov/web/offices/pac/mpep/documents/2100_2106_02.htm), Manual of Patent
Examining Procedure (MPEP). Latest revision August 2006

Secondary references
• Bolter, David J. (1984). Turing's Man: Western Culture in the Computer Age (1984 ed.). The University of North
Carolina Press, Chapel Hill NC. ISBN 0-8078-1564-0., ISBN 0-8078-4108-0 pbk.
• Dilson, Jesse (2007). The Abacus ((1968,1994) ed.). St. Martin's Press, NY. ISBN 0-312-10409-X., ISBN
0-312-10409-X (pbk.)
• van Heijenoort, Jean (2001). From Frege to Gödel, A Source Book in Mathematical Logic, 1879–1931 ((1967)
ed.). Harvard University Press, Cambridge, MA. ISBN 0-674-32449-8., 3rd edition 1976[?], ISBN 0-674-32449-8
(pbk.)
• Hodges, Andrew (1983). Alan Turing: The Enigma ((1983) ed.). Simon and Schuster, New York.
ISBN 0-671-49207-1., ISBN 0-671-49207-1. Cf. Chapter "The Spirit of Truth" for a history leading to, and a
discussion of, his proof.

Further reading
• Jean Luc Chabert (1999). A History of Algorithms: From the Pebble to the Microchip. Springer Verlag.
ISBN 978-3-540-63369-3.
• Harel, David (2004). Algorithmics: The Spirit of Computing. Addison-Wesley. ISBN 978-0-321-11784-7.
• Knuth, Donald E. (2000). Selected Papers on Analysis of Algorithms (http://www-cs-faculty.stanford.edu/
~uno/aa.html). Stanford, California: Center for the Study of Language and Information.
• Knuth, Donald E. (2010). Selected Papers on Design of Algorithms (http://www-cs-faculty.stanford.edu/~uno/
da.html). Stanford, California: Center for the Study of Language and Information.
• Berlinski, David (2001). The Advent of the Algorithm: The 300-Year Journey from an Idea to the Computer.
Harvest Books. ISBN 978-0-15-601391-8.
• Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest and Clifford Stein (2009). Introduction To
Algorithms, Third Edition. MIT Press. ISBN 978-0262033848.

External links
• Hazewinkel, Michiel, ed. (2001), "Algorithm" (http://www.encyclopediaofmath.org/index.php?title=p/
a011780), Encyclopedia of Mathematics, Springer, ISBN 978-1-55608-010-4
• Algorithms (http://www.dmoz.org/Computers/Algorithms//) on the Open Directory Project
• Weisstein, Eric W., " Algorithm (http://mathworld.wolfram.com/Algorithm.html)", MathWorld.
• Dictionary of Algorithms and Data Structures (http://www.nist.gov/dads/)—National Institute of Standards
and Technology
• Algorithms and Data Structures by Dr Nikolai Bezroukov (http://www.softpanorama.org/Algorithms/index.
shtml)
Algorithm repositories
• The Stony Brook Algorithm Repository (http://www.cs.sunysb.edu/~algorith/)—State University of New
York at Stony Brook
• Netlib Repository (http://www.netlib.org/)—University of Tennessee and Oak Ridge National Laboratory
• Collected Algorithms of the ACM (http://calgo.acm.org/)—Association for Computing Machinery
• The Stanford GraphBase (http://www-cs-staff.stanford.edu/~knuth/sgb.html)—Stanford University
• Combinatorica (http://www.combinatorica.com/)—University of Iowa and State University of New York at
Stony Brook
• Library of Efficient Datastructures and Algorithms (LEDA) (http://www.algorithmic-solutions.com/
)—previously from Max-Planck-Institut für Informatik
• Archive of Interesting Code (http://www.keithschwarz.com/interesting/)
• A semantic wiki to collect, categorize and relate all algorithms and data structures (http://allmyalgorithms.org)
Lecture notes
• Algorithms Course Materials (http://compgeom.cs.uiuc.edu/~jeffe//teaching/algorithms/). Jeff Erickson.
University of Illinois.
Community
• Algorithms (https://plus.google.com/communities/101392274103811461838) on Google+

Deterministic algorithm
In computer science, a deterministic algorithm is an algorithm which, given a particular input, will always produce
the same output, with the underlying machine always passing through the same sequence of states. Deterministic
algorithms are by far the most studied and familiar kind of algorithm, as well as one of the most practical, since they
can be run on real machines efficiently.
Formally, a deterministic algorithm computes a mathematical function; a function has a unique value for any given
input, and the algorithm is a process that produces this particular value as output.

Formal definition
Deterministic algorithms can be defined in terms of a state machine: a state describes what a machine is doing at a
particular instant in time. State machines pass in a discrete manner from one state to another. Just after we enter the
input, the machine is in its initial state or start state. If the machine is deterministic, this means that from this point
onwards, its current state determines what its next state will be; its course through the set of states is predetermined.
Note that a machine can be deterministic and still never stop or finish, and therefore fail to deliver a result.
Examples of particular abstract machines which are deterministic include the deterministic Turing machine and
deterministic finite automaton.
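As a minimal sketch of this definition, the deterministic finite automaton below always passes through the same sequence of states for a given input; the particular automaton, which accepts binary strings containing an even number of 1s, is a hypothetical example chosen only for illustration.

# Python
# Transition table of a deterministic finite automaton: for every
# (state, symbol) pair there is exactly one next state, so the course of
# the computation is fixed once the input is fixed.
TRANSITIONS = {
    ("even", "0"): "even", ("even", "1"): "odd",
    ("odd", "0"): "odd",   ("odd", "1"): "even",
}

def accepts(word):
    """Return True if word contains an even number of '1' characters."""
    state = "even"                            # start state
    for symbol in word:
        state = TRANSITIONS[(state, symbol)]  # next state uniquely determined
    return state == "even"                    # accepting state

# Running accepts("1101") repeatedly always yields the same answer (False)
# and always visits the same sequence of states.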

What makes algorithms non-deterministic?


A variety of factors can cause an algorithm to behave in a way which is not deterministic, or non-deterministic:
• If it uses external state other than the input, such as user input, a global variable, a hardware timer value, a
random value, or stored disk data.
• If it operates in a way that is timing-sensitive, for example if it has multiple processors writing to the same data at
the same time. In this case, the precise order in which each processor writes its data will affect the result.
• If a hardware error causes its state to change in an unexpected way.
Although real programs are rarely purely deterministic, it is easier for humans as well as other programs to reason
about programs that are. For this reason, most programming languages and especially functional programming
languages make an effort to prevent the above events from happening except under controlled conditions.
The prevalence of multi-core processors has resulted in a surge of interest in determinism in parallel programming,
and the challenges of non-determinism have been well documented. A number of tools have been proposed[1] to
help deal with deadlocks and race conditions.

Problems with deterministic algorithms


Unfortunately, for some problems deterministic algorithms are also hard to find. For example, there are simple and
efficient probabilistic algorithms that determine whether a given number is prime and have a very small chance of
being wrong. These have been known since the 1970s (see for example Fermat primality test); the known
deterministic algorithms remain considerably slower in practice.
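A minimal sketch of such a probabilistic primality test, in the spirit of the Fermat test mentioned above; the number of trials is an arbitrary choice for the example, and the test is illustrative rather than production-ready (Carmichael numbers are a known blind spot).

# Python
import random

def fermat_probably_prime(n, trials=20):
    """Randomized primality test: if n is prime, then a**(n-1) % n == 1 for
    every a not divisible by n (Fermat's little theorem). A composite n
    usually fails the test for some random base a, so the chance of a wrong
    "probably prime" answer shrinks as the number of trials grows."""
    if n < 4:
        return n in (2, 3)
    for _ in range(trials):
        a = random.randrange(2, n - 1)
        if pow(a, n - 1, n) != 1:
            return False              # a witnesses that n is composite
    return True                       # probably prime

# Example: fermat_probably_prime(97) returns True (97 is prime), while
# fermat_probably_prime(91) is almost always False (91 = 7 * 13).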
As another example, NP-complete problems, which include many of the most important practical problems, can be
solved quickly using a machine called a nondeterministic Turing machine, but efficient practical algorithms have
never been found for any of them. At best, we can currently only find approximate solutions or solutions in special
cases.
Another major problem with deterministic algorithms is that sometimes, we don't want the results to be predictable.
For example, if you are playing an on-line game of blackjack that shuffles its deck using a pseudorandom number
generator, a clever gambler might guess precisely the numbers the generator will choose and so determine the entire
contents of the deck ahead of time, allowing him to cheat; for example, the Software Security Group at Reliable
Software Technologies was able to do this for an implementation of Texas Hold 'em Poker that is distributed by ASF
Software, Inc, allowing them to consistently predict the outcome of hands ahead of time.[2] Similar problems arise in
cryptography, where private keys are often generated using such a generator. This sort of problem is generally
avoided using a cryptographically secure pseudo-random number generator.
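As an illustrative sketch of that last point, the fragment below contrasts a shuffle driven by a seeded pseudorandom generator, whose order can be reproduced by anyone who learns the seed, with one driven by the operating system's cryptographically secure source through Python's secrets module; the deck representation and the seed value are assumptions made for the example.

# Python
import random
import secrets

deck = list(range(52))                # a hypothetical 52-card deck

# Predictable: the whole order follows deterministically from the seed,
# so a gambler who recovers the seed knows the entire deck in advance.
seeded = random.Random(20131222)
predictable_deck = deck[:]
seeded.shuffle(predictable_deck)

# Harder to predict: SystemRandom draws from the operating system's
# cryptographically secure entropy source instead of a fixed seed.
secure_deck = deck[:]
secrets.SystemRandom().shuffle(secure_deck)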

Failure / Success in algorithms

Exceptions
Exception throwing is a common mechanism for signalling failure due to unexpected or undesired states.

Failure as a return value


In order to avoid unhandled exceptions, which may result in non-termination, the "total functional programming"
approach is to wrap the result of a partial function in an option type.
• the option type in ML and the Maybe type in Haskell

(* Standard ML *)
datatype 'a option = NONE | SOME of 'a

(* OCaml *)
type 'a option = None | Some of 'a

-- Haskell
data Maybe a = Nothing | Just a

• the Either type in Haskell, which also includes the failure reason.

data Either errorType resultType = Right resultType | Left errorType

Failure in Monads, the Left zero property


As monads model sequential composition, the left zero property (z >> s = z) in a monad means that once a failure
value z appears on the left, the right side of the sequence will not be evaluated.

-- Left zero in the Maybe monad


Nothing >> k = Nothing
Nothing >>= f = Nothing

-- Left zero in the Either monad


Left err >> k = Left err
Left err >>= f = Left err

Determinism categories in languages

Mercury
This logic-functional programming language establishes different determinism categories for predicate modes, as
explained in the references.[3][4]

Haskell
Haskell provides several mechanisms:
non-determinism or notion of Fail
• the Maybe and Either types include the notion of success in the result.
• the fail method of the class Monad, may be used to signal fail as exception.
• the Maybe monad and MaybeT monad transformer provide for failed computations (stop the computation
sequence and return Nothing)[5]
determinism/non-determinism with multiple solutions
You may retrieve all possible outcomes of a multiple-result computation by wrapping its result type in a
MonadPlus monad (its method mzero makes an outcome fail and mplus collects the successful results).[6]

ML family and derived languages


As seen in Standard ML, OCaml and Scala
• The option type includes the notion of success.

Java
• The null reference value may represent an unsuccessful (out-of-domain) result.

References
[1] Parallel Studio
[2] Gary McGraw and John Viega. Make your software behave: Playing the numbers: How to cheat in online gambling.
http://www.ibm.com/developerworks/library/s-playing/#h4
[3] Determinism categories in the Mercury programming language (http://www.mercury.csse.unimelb.edu.au/information/doc-release/
mercury_ref/Determinism-categories.html#Determinism-categories)
[4] Mercury predicate modes (http://www.mercury.csse.unimelb.edu.au/information/doc-release/mercury_ref/
Predicate-and-function-mode-declarations.html#Predicate-and-function-mode-declarations)
[5] Representing failure using the Maybe monad (http://www.haskell.org/haskellwiki/Monad#Common_monads)
[6] The class MonadPlus (http://www.haskell.org/haskellwiki/MonadPlus)

Data structure
In computer science, a data structure is
a particular way of storing and
organizing data in a computer so that it
can be used efficiently.[1][2]
Different kinds of data structures are
suited to different kinds of applications,
and some are highly specialized to
specific tasks. For example, B-trees are
particularly well-suited for
implementation of databases, while
compiler implementations usually use
hash tables to look up identifiers.

[Image: A hash table]
Data structures provide a means to manage large amounts of data efficiently, such as large databases and internet
indexing services. Usually, efficient data structures are a key to designing efficient algorithms. Some formal design
methods and programming languages emphasize data structures, rather than algorithms, as the key organizing
factor in software design. Storing and retrieving can be carried out on data stored in both main memory and in
secondary memory.

Overview
• An array stores a number of elements in a specific order. They are accessed using an integer to specify which
element is required (although the elements may be of almost any type). Arrays may be fixed-length or
expandable.
• Records (also called tuples or structs) are among the simplest data structures. A record is a value that contains
other values, typically in fixed number and sequence and typically indexed by names. The elements of records are
usually called fields or members.
• A hash table (also called a dictionary or map) is a more flexible variation on a record, in which name-value
pairs can be added and deleted freely.
• A union type specifies which of a number of permitted primitive types may be stored in its instances, e.g. "float
or long integer". Contrast with a record, which could be defined to contain a float and an integer; whereas, in a
union, there is only one value at a time.
• A tagged union (also called a variant, variant record, discriminated union, or disjoint union) contains an
additional field indicating its current type, for enhanced type safety.
• A set is an abstract data structure that can store specific values, without any particular order, and with no repeated
values. Values themselves are not retrieved from sets, rather one tests a value for membership to obtain a boolean
"in" or "not in".
• Graphs and trees are linked abstract data structures composed of nodes. Each node contains a value and also one
or more pointers to other nodes. Graphs can be used to represent networks, while trees are generally used for
sorting and searching, having their nodes arranged in some relative order based on their values.
• An object contains data fields, like a record, and also contains program code fragments for accessing or
modifying those fields. Data structures not containing code, like those above, are called plain old data structures.
Many others are possible, but they tend to be further variations and compounds of the above.

Basic principles
Data structures are generally based on the ability of a computer to fetch and store data at any place in its memory,
specified by an address—a bit string that can be itself stored in memory and manipulated by the program. Thus the
record and array data structures are based on computing the addresses of data items with arithmetic operations; while
the linked data structures are based on storing addresses of data items within the structure itself. Many data structures
use both principles, sometimes combined in non-trivial ways (as in XOR linking).
The implementation of a data structure usually requires writing a set of procedures that create and manipulate
instances of that structure. The efficiency of a data structure cannot be analyzed separately from those operations.
This observation motivates the theoretical concept of an abstract data type, a data structure that is defined indirectly
by the operations that may be performed on it, and the mathematical properties of those operations (including their
space and time cost).
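
The contrast between the two principles can be illustrated with a short C fragment (an illustrative sketch, not taken from any particular source): an array element is reached by address arithmetic from a base address, while a linked node stores the address of its successor inside the structure itself.

#include <stdio.h>

/* A linked node stores the address of the next item within the structure itself. */
struct node {
    int value;
    struct node *next;              /* stored address of the successor */
};

int main(void)
{
    int a[4] = {10, 20, 30, 40};

    /* Array: the address of a[i] is computed as base + i * sizeof(int). */
    printf("a[2] = %d at address %p\n", a[2], (void *)&a[2]);

    /* Linked structure: each element is reached by following stored pointers. */
    struct node n2 = {30, NULL};
    struct node n1 = {20, &n2};
    struct node n0 = {10, &n1};
    printf("two links after n0: %d\n", n0.next->next->value);

    return 0;
}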

Language support
Most assembly languages and some low-level languages, such as BCPL (Basic Combined Programming Language),
lack support for data structures. Many high-level programming languages and some higher-level assembly
languages, such as MASM, on the other hand, have special syntax or other built-in support for certain data
structures, such as vectors (one-dimensional arrays) in the C language or multi-dimensional arrays in Pascal.
Most programming languages feature some sort of library mechanism that allows data structure implementations to
be reused by different programs. Modern languages usually come with standard libraries that implement the most
common data structures. Examples are the C++ Standard Template Library, the Java Collections Framework, and
Microsoft's .NET Framework.
Modern languages also generally support modular programming, the separation between the interface of a library
module and its implementation. Some provide opaque data types that allow clients to hide implementation details.
Object-oriented programming languages, such as C++, Java and Smalltalk may use classes for this purpose.
Many known data structures have concurrent versions that allow multiple computing threads to access the data
structure simultaneously.

References
[1] Paul E. Black (ed.), entry for data structure in Dictionary of Algorithms and Data Structures. U.S. National Institute of Standards and Technology, 15 December 2004. Online version (http://www.itl.nist.gov/div897/sqg/dads/HTML/datastructur.html), accessed May 21, 2009.
[2] Entry data structure in the Encyclopædia Britannica (2009). Online entry (http://www.britannica.com/EBchecked/topic/152190/data-structure), accessed May 21, 2009.

Further reading
• Peter Brass, Advanced Data Structures, Cambridge University Press, 2008.
• Donald Knuth, The Art of Computer Programming, vol. 1. Addison-Wesley, 3rd edition, 1997.
• Dinesh Mehta and Sartaj Sahni Handbook of Data Structures and Applications, Chapman and Hall/CRC Press,
2007.
• Niklaus Wirth, Algorithms and Data Structures, Prentice Hall, 1985.
• Diane Zak, Introduction to Programming with C++, Cengage Learning Asia Pte Ltd, 2011.

External links
• UC Berkeley video course on data structures (http://academicearth.org/courses/data-structures)
• Descriptions (http://nist.gov/dads/) from the Dictionary of Algorithms and Data Structures
• Data structures course (http://www.cs.auckland.ac.nz/software/AlgAnim/ds_ToC.html)
• An Examination of Data Structures from .NET perspective (http://msdn.microsoft.com/en-us/library/aa289148(VS.71).aspx)
• Schaffer, C. Data Structures and Algorithm Analysis (http://people.cs.vt.edu/~shaffer/Book/C++3e20110915.pdf)

List (abstract data type)


In computer science, a list or sequence is an abstract data type that implements a finite ordered collection of values,
where the same value may occur more than once. An instance of a list is a computer representation of the
mathematical concept of a finite sequence; the (potentially) infinite analog of a list is a stream. Lists are a basic
example of containers, as they contain other values. Each instance of a value in the list is usually called an item,
entry, or element of the list; if the same value occurs multiple times, each occurrence is considered a distinct item.
Lists are distinguished from arrays in that lists only allow sequential access, while arrays allow random access.
The name list is also used for several concrete data structures that can be used to implement abstract lists,
especially linked lists.

[Figure: A singly linked list structure, implementing a list with 3 integer elements.]

The so-called static list structures allow only inspection and enumeration of the values. A mutable or dynamic list
may allow items to be inserted, replaced, or deleted during the list's existence.
Many programming languages provide support for list data types, and have special syntax and semantics for lists
and list operations. A list can often be constructed by writing the items in sequence, separated by commas,
semicolons, or spaces, within a pair of delimiters such as parentheses '()', brackets, '[]', braces '{}', or angle brackets
'<>'. Some languages may allow list types to be indexed or sliced like array types, in which case the data type is
more accurately described as an array. In object-oriented programming languages, lists are usually provided as
instances of subclasses of a generic "list" class, and traversed via separate iterators. List data types are often
implemented using array data structures or linked lists of some sort, but other data structures may be more
appropriate for some applications. In some contexts, such as in Lisp programming, the term list may refer
specifically to a linked list rather than an array.
In type theory and functional programming, abstract lists are usually defined inductively by four operations: nil,
which yields the empty list; cons, which adds an item at the beginning of a list; head, which returns the first
element of a list; and tail, which returns a list minus its first element. Formally, Peano's natural numbers can be
defined as abstract lists with elements of unit type.

Operations
Implementation of the list data structure may provide some of the following operations (a minimal C sketch of these operations follows the list):
• a constructor for creating an empty list;
• an operation for testing whether or not a list is empty;
• an operation for prepending an entity to a list
• an operation for appending an entity to a list
• an operation for determining the first component (or the "head") of a list
• an operation for referring to the list consisting of all the components of a list except for its first (this is called the
"tail" of the list.)

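A minimal C sketch of these operations, assuming a singly linked representation with integer elements (the names nil, cons, head, tail and is_empty are illustrative, not a standard API):

#include <stdlib.h>

typedef struct cell {
    int value;
    struct cell *next;
} cell;                       /* a list is a pointer to its first cell; NULL represents the empty list */

cell *nil(void)               { return NULL; }        /* constructor for the empty list */
int   is_empty(const cell *l) { return l == NULL; }   /* test whether a list is empty */

/* Prepend an entity to a list, returning the new list. */
cell *cons(int value, cell *l)
{
    cell *c = malloc(sizeof *c);
    c->value = value;
    c->next = l;
    return c;
}

int   head(const cell *l) { return l->value; }        /* first component (undefined for the empty list) */
cell *tail(cell *l)       { return l->next;  }        /* the list without its first component */

Appending can be built on top of these primitives by walking to the last cell, but it then takes time proportional to the length of the list.
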
Characteristics
Lists have the following properties:
• The size of the list, that is, the number of elements it contains.
• Equality of lists:
• In mathematics, sometimes equality of lists is defined simply in terms of object identity: two lists are equal if
and only if they are the same object.
• In modern programming languages, equality of lists is normally defined in terms of structural equality of the
corresponding entries, except that if the lists are typed, then the list types may also be relevant.
• Lists may be typed. This implies that the entries in a list must have types that are compatible with the list's type.
It is common that lists are typed when they are implemented using arrays.
• Each element in the list has an index. The first element commonly has index 0 or 1 (or some other predefined
integer). Subsequent elements have indices that are 1 higher than the previous element. The last element has index
<initial index> + <size> − 1.
• It is possible to retrieve the element at a particular index.
• It is possible to traverse the list in the order of increasing index.
• It is possible to change the element at a particular index to a different value, without affecting any other
elements.
• It is possible to insert an element at a particular index. The indices of the elements after it are increased by 1.
• It is possible to remove an element at a particular index. The indices of the elements after it are decreased by 1.

Implementations
Lists are typically implemented either as linked lists (either singly or doubly linked) or as arrays, usually variable
length or dynamic arrays.
The standard way of implementing lists, originating with the programming language Lisp, is to have each element of
the list contain both its value and a pointer indicating the location of the next element in the list. This results in either
a linked list or a tree, depending on whether the list has nested sublists. Some older Lisp implementations (such as
the Lisp implementation of the Symbolics 3600) also supported "compressed lists" (using CDR coding) which had a
special internal representation (invisible to the user). Lists can be manipulated using iteration or recursion. The
former is often preferred in imperative programming languages, while the latter is the norm in functional languages.
Lists can be implemented as self-balancing binary search trees holding index-value pairs, providing access to any
element in time logarithmic in the list's size (for example, with all values residing in the fringe and internal
nodes storing the right-most child's index to guide the search). As long as the list does not change much, this
provides the illusion of random access, and swap, prefix and append operations are possible in logarithmic time as
well.

Programming language support


Some languages do not offer a list data structure, but offer the use of associative arrays or some kind of table to
emulate lists. For example, Lua provides tables. Although Lua stores lists that have numerical indices as arrays
internally, they still appear as hash tables.
In Lisp, lists are the fundamental data type and can represent both program code and data. In most dialects, the list of
the first three prime numbers could be written as (list 2 3 5). In several dialects of Lisp, including Scheme, a
list is a collection of pairs, consisting of a value and a pointer to the next pair (or null value), making a singly linked
list.

Applications
As the name implies, lists can be used to store a list of records. The items in a list can be sorted for the purpose of
fast search (binary search).
Because lists are easier to realize in computing than sets, a finite set in the mathematical sense can be realized
as a list with additional restrictions: duplicate elements are disallowed and order is irrelevant. Keeping the list
sorted speeds up determining whether a given item is already in the set, but maintaining that order makes adding a
new entry slower. In efficient implementations, however, sets are implemented using self-balancing binary search
trees or hash tables rather than a list.

Abstract definition
The abstract list type L with elements of some type E (a monomorphic list) is defined by the following functions:
nil: () → L
cons: E × L → L
first: L → E
rest: L → L
with the axioms
first (cons (e, l)) = e
rest (cons (e, l)) = l
for any element e and any list l. It is implicit that
cons (e, l) ≠ l
cons (e, l) ≠ e
cons (e1, l1) = cons (e2, l2) if e1 = e2 and l1 = l2
Note that first (nil ()) and rest (nil ()) are not defined.
These axioms are equivalent to those of the abstract stack data type.
In type theory, the above definition is more simply regarded as an inductive type defined in terms of constructors: nil
and cons. In algebraic terms, this can be represented as the transformation 1 + E × L → L. first and rest are then
obtained by pattern matching on the cons constructor and separately handling the nil case.

The list monad
The list type forms a monad with the following functions (using E* rather than L to represent monomorphic lists with
elements of type E):

return: E → E*, with return (e) = cons (e, nil ()), the one-element list ⟨e⟩
bind: E* × (E → F*) → F*, with bind (⟨e1, …, en⟩, f) = append (f (e1), append (f (e2), … append (f (en), nil ()) …))

where append is defined as:

append (⟨e1, …, en⟩, ⟨f1, …, fm⟩) = ⟨e1, …, en, f1, …, fm⟩

Alternatively, the monad may be defined in terms of operations return, fmap and join, with:

fmap (g, ⟨e1, …, en⟩) = ⟨g (e1), …, g (en)⟩
join (⟨l1, …, ln⟩) = append (l1, append (l2, … append (ln, nil ()) …))

Note that fmap, join, append and bind are well-defined, since they're applied to progressively deeper arguments at
each recursive call.
The list type is an additive monad, with nil as the monadic zero and append as monadic sum.
Lists form a monoid under the append operation. The identity element of the monoid is the empty list, nil. In fact,
this is the free monoid over the set of list elements.

Array data structure


In computer science, an array data structure or simply an array is a data structure consisting of a collection of
elements (values or variables), each identified by at least one array index or key. An array is stored so that the
position of each element can be computed from its index tuple by a mathematical formula.
For example, an array of 10 integer variables, with indices 0 through 9, may be stored as 10 words at memory
addresses 2000, 2004, 2008, … 2036, so that the element with index i has the address 2000 + 4 × i.[1]
Because the mathematical concept of a matrix can be represented as a two-dimensional grid, two-dimensional arrays
are also sometimes called matrices. In some cases the term "vector" is used in computing to refer to an array,
although tuples rather than vectors are more correctly the mathematical equivalent. Arrays are often used to
implement tables, especially lookup tables; the word table is sometimes used as a synonym of array.
Arrays are among the oldest and most important data structures, and are used by almost every program. They are
also used to implement many other data structures, such as lists and strings. They effectively exploit the addressing
logic of computers. In most modern computers and many external storage devices, the memory is a one-dimensional
array of words, whose indices are their addresses. Processors, especially vector processors, are often optimized for
array operations.
Arrays are useful mostly because the element indices can be computed at run time. Among other things, this feature
allows a single iterative statement to process arbitrarily many elements of an array. For that reason, the elements of
an array data structure are required to have the same size and should use the same data representation. The set of
valid index tuples and the addresses of the elements (and hence the element addressing formula) are usually,[2] but
not always, fixed while the array is in use.
The term array is often used to mean array data type, a kind of data type provided by most high-level programming
languages that consists of a collection of values or variables that can be selected by one or more indices computed at
run-time. Array types are often implemented by array structures; however, in some languages they may be
implemented by hash tables, linked lists, search trees, or other data structures.
The term is also used, especially in the description of algorithms, to mean associative array or "abstract array", a
theoretical computer science model (an abstract data type or ADT) intended to capture the essential properties of
arrays.

History
The first digital computers used machine-language programming to set up and access array structures for data tables,
vector and matrix computations, and for many other purposes. Von Neumann wrote the first array-sorting program
(merge sort) in 1945, during the building of the first stored-program computer.[3]p. 159 Array indexing was originally
done by self-modifying code, and later using index registers and indirect addressing. Some mainframes designed in
the 1960s, such as the Burroughs B5000 and its successors, used memory segmentation to perform index-bounds
checking in hardware.
Assembly languages generally have no special support for arrays, other than what the machine itself provides. The
earliest high-level programming languages, including FORTRAN (1957), COBOL (1960), and ALGOL 60 (1960),
had support for multi-dimensional arrays, and so has C (1972). In C++ (1983), class templates exist for
multi-dimensional arrays whose dimension is fixed at runtime as well as for runtime-flexible arrays.

Applications
Arrays are used to implement mathematical vectors and matrices, as well as other kinds of rectangular tables. Many
databases, small and large, consist of (or include) one-dimensional arrays whose elements are records.
Arrays are used to implement other data structures, such as heaps, hash tables, deques, queues, stacks, strings, and
VLists.
One or more large arrays are sometimes used to emulate in-program dynamic memory allocation, particularly
memory pool allocation. Historically, this has sometimes been the only way to allocate "dynamic memory" portably.
Arrays can be used to determine partial or complete control flow in programs, as a compact alternative to (otherwise
repetitive) multiple IF statements. They are known in this context as control tables and are used in conjunction with
a purpose built interpreter whose control flow is altered according to values contained in the array. The array may
contain subroutine pointers (or relative subroutine numbers that can be acted upon by SWITCH statements) that
direct the path of the execution.

Array element identifier and addressing formulas


When data objects are stored in an array, individual objects are selected by an index that is usually a non-negative
scalar integer. Indices are also called subscripts. An index maps the array value to a stored object.
There are three ways in which the elements of an array can be indexed:
• 0 (zero-based indexing): The first element of the array is indexed by subscript of 0.
• 1 (one-based indexing): The first element of the array is indexed by subscript of 1.
• n (n-based indexing): The base index of an array can be freely chosen. Usually programming languages allowing
n-based indexing also allow negative index values and other scalar data types like enumerations, or characters
may be used as an array index.
Arrays can have multiple dimensions, thus it is not uncommon to access an array using multiple indices. For
example a two-dimensional array A with three rows and four columns might provide access to the element at the
2nd row and 4th column by the expression A[1, 3] (in a row major language) or A[3, 1] (in a column major
language) in the case of a zero-based indexing system. Thus two indices are used for a two-dimensional array, three
for a three-dimensional array, and n for an n-dimensional array.
The number of indices needed to specify an element is called the dimension, dimensionality, or rank of the array.
In standard arrays, each index is restricted to a certain range of consecutive integers (or consecutive values of some
enumerated type), and the address of an element is computed by a "linear" formula on the indices.

One-dimensional arrays
A one-dimensional array (or single dimension array) is a type of linear array. Accessing its elements involves a
single subscript which can either represent a row or column index.
As an example consider the C declaration int anArrayName[10];
Syntax : datatype anArrayname[sizeofArray];
In the given example the array can contain 10 elements of any value available to the int type. In C, the array
element indices are 0-9 inclusive in this case. For example, the expressions anArrayName[0] and
anArrayName[9] are the first and last elements respectively.
For a vector with linear addressing, the element with index i is located at the address B + c · i, where B is a fixed
base address and c a fixed constant, sometimes called the address increment or stride.
If the valid element indices begin at 0, the constant B is simply the address of the first element of the array. For this
reason, the C programming language specifies that array indices always begin at 0; and many programmers will call
that element "zeroth" rather than "first".


However, one can choose the index of the first element by an appropriate choice of the base address B. For example,
if the array has five elements, indexed 1 through 5, and the base address B is replaced by B + 30c, then the indices of
those same elements will be 31 to 35. If the numbering does not start at 0, the constant B may not be the address of
any element.

Multidimensional arrays
For a two-dimensional array, the element with indices i,j would have address B + c · i + d · j, where the coefficients c
and d are the row and column address increments, respectively.
More generally, in a k-dimensional array, the address of an element with indices i1, i2, …, ik is
B + c1 · i1 + c2 · i2 + … + ck · ik.
For example: int a[3][2];
This means that array a has 3 rows and 2 columns, and the array is of integer type. It can hold 6 elements, stored
linearly, starting with the first row and then continuing with the second and third rows. The above array will be
stored as a11, a12, a21, a22, a31, a32.
This formula requires only k multiplications and k additions, for any array that can fit in memory. Moreover, if any
coefficient is a fixed power of 2, the multiplication can be replaced by bit shifting.
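
The addressing formula can be checked directly in C. The following sketch (illustrative only) recomputes the address of a[i][j] from the base address and the row and column increments, and compares it with the address the compiler uses:

#include <stdio.h>

int main(void)
{
    int a[3][2];                     /* 3 rows, 2 columns, stored row by row */
    char *B = (char *)&a[0][0];      /* base address */
    size_t c1 = 2 * sizeof(int);     /* row increment: one row holds 2 ints */
    size_t c2 = sizeof(int);         /* column increment */

    for (int i = 0; i < 3; i++)
        for (int j = 0; j < 2; j++)
            printf("a[%d][%d]: formula %p, compiler %p\n",
                   i, j, (void *)(B + c1 * i + c2 * j), (void *)&a[i][j]);
    return 0;
}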
The coefficients ck must be chosen so that every valid index tuple maps to the address of a distinct element.
If the minimum legal value for every index is 0, then B is the address of the element whose indices are all zero. As in
the one-dimensional case, the element indices may be changed by changing the base address B. Thus, if a
two-dimensional array has rows and columns indexed from 1 to 10 and 1 to 20, respectively, then replacing B by B +
c1 − 3c2 will cause them to be renumbered from 0 through 9 and 4 through 23, respectively. Taking advantage of
this feature, some languages (like FORTRAN 77) specify that array indices begin at 1, as in mathematical tradition;
while other languages (like Fortran 90, Pascal and Algol) let the user choose the minimum value for each index.

Dope vectors
The addressing formula is completely defined by the dimension d, the base address B, and the increments c1, c2, …,
ck. It is often useful to pack these parameters into a record called the array's descriptor or stride vector or dope
vector. The size of each element, and the minimum and maximum values allowed for each index may also be
included in the dope vector. The dope vector is a complete handle for the array, and is a convenient way to pass
arrays as arguments to procedures. Many useful array slicing operations (such as selecting a sub-array, swapping
indices, or reversing the direction of the indices) can be performed very efficiently by manipulating the dope vector.
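
In C, a dope vector might be represented roughly as follows; this is a sketch with illustrative field names, and real implementations differ in detail:

#include <stddef.h>

#define MAX_RANK 8                 /* assumed upper limit on the number of dimensions */

struct dope_vector {
    void   *base;                  /* address B of the element with all-zero indices */
    size_t  element_size;          /* size of one element, in bytes */
    int     rank;                  /* number of dimensions k */
    long    lower[MAX_RANK];       /* minimum legal value of each index */
    long    upper[MAX_RANK];       /* maximum legal value of each index */
    long    stride[MAX_RANK];      /* address increments c1 ... ck, in bytes */
};

/* Address of the element with the given indices, using the linear formula. */
static char *element_address(const struct dope_vector *dv, const long idx[])
{
    char *addr = dv->base;
    for (int d = 0; d < dv->rank; d++)
        addr += dv->stride[d] * idx[d];
    return addr;
}

Slicing operations such as swapping or reversing indices then amount to permuting or negating entries of stride and adjusting base, without touching the elements themselves.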

Compact layouts
Often the coefficients are chosen so that the elements occupy a contiguous area of memory. However, that is not
necessary. Even if arrays are always created with contiguous elements, some array slicing operations may create
non-contiguous sub-arrays from them.
There are two systematic compact layouts for a two-dimensional array. For example, consider the 3 × 3 matrix

1 2 3
4 5 6
7 8 9

In the row-major order layout (adopted by C for statically declared arrays), the elements in each row are stored in
consecutive positions and all of the elements of a row have a lower address than any of the elements of a consecutive
row:

1 2 3 4 5 6 7 8 9

In column-major order (traditionally used by Fortran), the elements in each column are consecutive in memory and
all of the elements of a column have a lower address than any of the elements of a consecutive column:

1 4 7 2 5 8 3 6 9

For arrays with three or more indices, "row major order" puts in consecutive positions any two elements whose index
tuples differ only by one in the last index. "Column major order" is analogous with respect to the first index.
In systems which use processor cache or virtual memory, scanning an array is much faster if successive elements are
stored in consecutive positions in memory, rather than sparsely scattered. Many algorithms that use
multidimensional arrays will scan them in a predictable order. A programmer (or a sophisticated compiler) may use
this information to choose between row- or column-major layout for each array. For example, when computing the
product A·B of two matrices, it would be best to have A stored in row-major order, and B in column-major order.

Array resizing
Static arrays have a size that is fixed when they are created and consequently do not allow elements to be inserted or
removed. However, by allocating a new array and copying the contents of the old array to it, it is possible to
effectively implement a dynamic version of an array; see dynamic array. If this operation is done infrequently,
insertions at the end of the array require only amortized constant time.
Some array data structures do not reallocate storage, but do store a count of the number of elements of the array in
use, called the count or size. This effectively makes the array a dynamic array with a fixed maximum size or
capacity; Pascal strings are examples of this.
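
A sketch of this doubling strategy in C (illustrative; error handling is kept to a minimum):

#include <stdlib.h>

struct dyn_array {
    int    *items;
    size_t  size;      /* number of elements currently in use */
    size_t  capacity;  /* number of elements the current allocation can hold */
};

/* Append one element, doubling the capacity when the allocation is full. */
void append(struct dyn_array *a, int value)
{
    if (a->size == a->capacity) {
        a->capacity = a->capacity ? 2 * a->capacity : 1;
        a->items = realloc(a->items, a->capacity * sizeof *a->items);
        if (a->items == NULL)
            abort();
    }
    a->items[a->size++] = value;
}

Because each reallocation copies no more than the number of elements appended so far, the total copying work stays proportional to the final size, which is what makes insertions at the end amortized constant time.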

Non-linear formulas
More complicated (non-linear) formulas are occasionally used. For a compact two-dimensional triangular array, for
instance, the addressing formula is a polynomial of degree 2.

Efficiency
Both store and select take (deterministic worst case) constant time. Arrays take linear (O(n)) space in the number of
elements n that they hold.
In an array with element size k and on a machine with a cache line size of B bytes, iterating through an array of n
elements requires the minimum of ceiling(nk/B) cache misses, because its elements occupy contiguous memory
locations. This is roughly a factor of B/k better than the number of cache misses needed to access n elements at
random memory locations. As a consequence, sequential iteration over an array is noticeably faster in practice than
iteration over many other data structures, a property called locality of reference (this does not mean, however, that
lookup via a perfect or trivial hash within the same local array cannot be even faster, and achievable in constant
time). Libraries provide low-level optimized facilities for copying ranges of memory (such as memcpy) which can be
used to move contiguous blocks of array elements significantly faster than can be achieved through individual
element access. The speedup of such optimized routines varies by array element size, architecture, and
implementation.
Memory-wise, arrays are compact data structures with no per-element overhead. There may be a per-array overhead,
e.g. to store index bounds, but this is language-dependent. It can also happen that elements stored in an array require
less memory than the same elements stored in individual variables, because several array elements can be stored in a
single word; such arrays are often called packed arrays. An extreme (but commonly used) case is the bit array, where
every bit represents a single element. A single octet can thus hold up to 256 different combinations of up to 8
different conditions, in the most compact form.


Array accesses with statically predictable access patterns are a major source of data parallelism.

Efficiency comparison with other data structures

Operation                    Linked list                          Array   Dynamic array    Balanced tree   Random access list
Indexing                     Θ(n)                                 Θ(1)    Θ(1)             Θ(log n)        Θ(log n)
Insert/delete at beginning   Θ(1)                                 N/A     Θ(n)             Θ(log n)        Θ(1)
Insert/delete at end         Θ(1) if last element is known;       N/A     Θ(1) amortized   Θ(log n)        Θ(log n) updating
                             Θ(n) if last element is unknown
Insert/delete in middle      search time + Θ(1) [4][5][6]         N/A     Θ(n)             Θ(log n)        Θ(log n) updating
Wasted space (average)       Θ(n)                                 0       Θ(n)             Θ(n)             Θ(n)

Growable arrays are similar to arrays but add the ability to insert and delete elements; adding and deleting at the end
is particularly efficient. However, they reserve linear (Θ(n)) additional storage, whereas arrays do not reserve
additional storage.
Associative arrays provide a mechanism for array-like functionality without huge storage overheads when the index
values are sparse. For example, an array that contains values only at indexes 1 and 2 billion may benefit from using
such a structure. Specialized associative arrays with integer keys include Patricia tries, Judy arrays, and van Emde
Boas trees.
Balanced trees require O(log n) time for indexed access, but also permit inserting or deleting elements in O(log n)
time,[7] whereas growable arrays require linear (Θ(n)) time to insert or delete elements at an arbitrary position.
Linked lists allow constant time removal and insertion in the middle but take linear time for indexed access. Their
memory use is typically worse than arrays, but is still linear.
An Iliffe vector is an alternative to a multidimensional array structure. It uses a
one-dimensional array of references to arrays of one dimension less. For two
dimensions, in particular, this alternative structure would be a vector of pointers to
vectors, one for each row. Thus an element in row i and column j of an array A
would be accessed by double indexing (A[i][j] in typical notation). This alternative
structure allows ragged or jagged arrays, where each row may have a different size
— or, in general, where the valid range of each index depends on the values of all
preceding indices. It also saves one multiplication (by the column address increment) replacing it by a bit shift (to
index the vector of row pointers) and one extra memory access (fetching the row address), which may be worthwhile
in some architectures.

Meaning of dimension
The dimension of an array is the number of indices needed to select an element. Thus, if the array is seen as a
function on a set of possible index combinations, it is the dimension of the space of which its domain is a discrete
subset. Thus a one-dimensional array is a list of data, a two-dimensional array a rectangle of data, a
three-dimensional array a block of data, etc.
This should not be confused with the dimension of the set of all matrices with a given domain, that is, the number of
elements in the array. For example, an array with 5 rows and 4 columns is two-dimensional, but such matrices form a
20-dimensional space. Similarly, a three-dimensional vector can be represented by a one-dimensional array of size
three.

References
[1] David R. Richardson (2002), The Book on Data Structures. iUniverse, 112 pages. ISBN 0-595-24039-9, ISBN 978-0-595-24039-5.
[2] T. Veldhuizen. Arrays in Blitz++. In Proc. of the 2nd Int. Conf. on Scientific Computing in Object-Oriented Parallel Environments (ISCOPE), LNCS 1505, pages 223-220. Springer, 1998.
[3] Donald Knuth, The Art of Computer Programming, vol. 3. Addison-Wesley.
[4] Gerald Kruse. CS 240 Lecture Notes (http://www.juniata.edu/faculty/kruse/cs240/syllabus.htm): Linked Lists Plus: Complexity Trade-offs (http://www.juniata.edu/faculty/kruse/cs240/linkedlist2.htm). Juniata College. Spring 2008.
[5] Day 1 Keynote - Bjarne Stroustrup: C++11 Style (http://channel9.msdn.com/Events/GoingNative/GoingNative-2012/Keynote-Bjarne-Stroustrup-Cpp11-Style) at GoingNative 2012 on channel9.msdn.com, from minute 45 or foil 44.
[6] Number crunching: Why you should never, ever, EVER use linked-list in your code again (http://kjellkod.wordpress.com/2012/02/25/why-you-should-never-ever-ever-use-linked-list-in-your-code-again/) at kjellkod.wordpress.com.
[7] Counted B-Tree (http://www.chiark.greenend.org.uk/~sgtatham/algorithms/cbtree.html)

FIFO
FIFO is an acronym for First In, First Out, a method for organizing and manipulating a data buffer, or data stack,
where the oldest entry, or 'bottom' of the stack, is processed first. It is analogous to processing a queue with
first-come, first-served (FCFS) behaviour: the people leave the queue in the order in which they arrive.
FCFS is also the jargon term for the FIFO operating system scheduling algorithm, which gives every process CPU
time in the order in which it is demanded.
FIFO's opposite is LIFO, Last-In-First-Out, where the youngest entry or 'top of the stack' is processed first.
A priority queue is neither FIFO nor LIFO but may adopt similar behaviour temporarily or by default.
Queueing theory encompasses these methods for processing data structures, as well as interactions between
strict-FIFO queues.

Computer science

Data structure
A typical FIFO queue data structure in C++ might look like the following (value_type is assumed to be a typedef for the element type stored in the queue):

#include <cstddef>          /* for NULL */

typedef int value_type;     /* element type held in the queue (int chosen as an example) */

struct fifo_node
{
    struct fifo_node *next; /* pointer to the next (younger) entry */
    value_type value;
};

class fifo
{
public:
    fifo() : front(NULL), back(NULL) {}

    /* Remove the oldest node and return it (NULL if the queue is empty).
       The caller is responsible for deleting the returned node. */
    fifo_node *dequeue(void)
    {
        fifo_node *tmp = front;
        if (front != NULL)
            front = front->next;
        if (front == NULL)      /* the queue became empty */
            back = NULL;
        return tmp;
    }

    /* Add a new value at the back of the queue. */
    void enqueue(value_type value)
    {
        fifo_node *tempNode = new fifo_node;
        tempNode->value = value;
        tempNode->next = NULL;
        if (front == NULL)      /* queue was empty */
        {
            front = tempNode;
            back = tempNode;
        }
        else
        {
            back->next = tempNode;
            back = tempNode;
        }
    }

private:
    fifo_node *front;           /* oldest entry, removed first */
    fifo_node *back;            /* youngest entry, added last */
};

(For information on the abstract data structure, see Queue. For details of a common implementation, see Circular
buffer.)
Popular Unix systems include a sys/queue.h C/C++ header file that provides macros usable by applications that need
to create FIFO queues.

Head or tail first


Controversy over the terms "head" and "tail" exists in reference to FIFO queues. To many people, items should enter
a queue at the tail, remain in the queue until they reach the head and leave the queue from there. This point of view is
justified by analogy with queues of people waiting for some kind of service and parallels the use of "front" and
"back" in the above example. Other people believe that objects enter a queue at the head and leave at the tail, in the
manner of food passing through a snake. Queues written in that way appear in places that might be considered
authoritative, such as the GNU/Linux operating system.

Pipes
In computing environments that support the pipes and filters model for interprocess communication, a FIFO is
another name for a named pipe.

Disk scheduling
Disk controllers can use the FIFO as a disk scheduling algorithm to determine the order to service disk I/O requests.

Communications and networking


Communications bridges, switches and routers used in computer networks use FIFOs to hold data packets en route to
their next destination. Typically at least one FIFO structure is used per network connection. Some devices feature
multiple FIFOs for simultaneously and independently queuing different types of information.

Electronics
FIFOs are commonly used in electronic circuits for buffering and flow control between hardware and software.
In its hardware form, a FIFO primarily consists of a set of read and write pointers, storage and control logic. Storage
may be SRAM, flip-flops, latches or any other suitable form of storage. For FIFOs of non-trivial size, a dual-port
SRAM is usually used, where one port is dedicated to writing and the other to reading.
A synchronous FIFO is a FIFO where the same clock is used for both reading and writing. An asynchronous FIFO
uses different clocks for reading and writing. Asynchronous FIFOs introduce metastability issues. A common
implementation of an asynchronous FIFO uses a Gray code (or any unit distance code) for the read and write
pointers to ensure reliable flag generation. One further note concerning flag generation is that one must necessarily
use pointer arithmetic to generate flags for asynchronous FIFO implementations. Conversely, one may use either a
"leaky bucket" approach or pointer arithmetic to generate flags in synchronous FIFO implementations.
Examples of FIFO status flags include: full, empty, almost full, almost empty, etc.
The first known FIFO implemented in electronics was done by Peter Alfke in 1969 at Fairchild Semiconductors.
Peter Alfke was later a Director at Xilinx.

FIFO full/empty
A hardware FIFO is used for synchronization purposes. It is often implemented as a circular queue, and thus has two
pointers:
1. Read Pointer/Read Address Register
2. Write Pointer/Write Address Register
Read and write addresses are initially both at the first memory location and the FIFO queue is Empty.
FIFO Empty
When the read address register reaches the write address register, the FIFO triggers the Empty signal.
FIFO FULL
When the write address register reaches the read address register, the FIFO triggers the FULL signal.
In both cases, the read and write addresses end up being equal. To distinguish between the two situations, a simple
and robust solution is to add one extra bit for each read and write address which is inverted each time the address
wraps. With this set up, the conditions are:
FIFO Empty
When the read address register equals the write address register, the FIFO is empty.
FIFO FULL
When the read address LSBs equal the write address LSBs and the extra MSBs are different, the FIFO is full.
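
This scheme can be sketched in C for a FIFO whose depth is a power of two (illustrative only; DEPTH is an assumed example value, and a real hardware FIFO would be described in a hardware description language rather than in C):

#define DEPTH 16                         /* number of entries; assumed to be a power of two */

static unsigned read_ptr  = 0;           /* DEPTH address values plus one extra wrap bit */
static unsigned write_ptr = 0;

void advance_read(void)  { read_ptr  = (read_ptr  + 1) % (2 * DEPTH); }
void advance_write(void) { write_ptr = (write_ptr + 1) % (2 * DEPTH); }

int fifo_empty(void)
{
    return read_ptr == write_ptr;                        /* addresses and wrap bits both equal */
}

int fifo_full(void)
{
    return (read_ptr % DEPTH) == (write_ptr % DEPTH)     /* address LSBs equal ...        */
        && read_ptr != write_ptr;                        /* ... but the wrap bits differ  */
}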

Notes and references


• Cummings et al., Simulation and Synthesis Techniques for Asynchronous FIFO Design with Asynchronous Pointer Comparisons, SNUG San Jose 2002 (http://www.sunburst-design.com/papers/CummingsSNUG2002SJ_FIFO2.pdf)
• Ronen Perry & Tal Zarsky, Queues in Law (http://ssrn.com/abstract=2147333), Iowa Law Review (August 10, 2012)

Queue (abstract data type)


In computer science, a queue (/ˈkjuː/ KEW) is a particular kind of abstract data type or collection in which the
entities in the collection are kept in order and the principal (or only) operations on the collection are the
addition of entities to the rear terminal position, known as enqueue, and removal of entities from the front
terminal position, known as dequeue. This makes the queue a First-In-First-Out (FIFO) data structure. In a FIFO data
structure, the first element added to the queue will be the first one to be removed. This is equivalent to the
requirement that once a new element is added, all elements that were added before have to be removed before the new
element can be removed. Often a peek or front operation is also implemented, returning the value of the front
element without dequeuing it. A queue is an example of a linear data structure, or more abstractly a sequential
collection.

[Figure: Representation of a Queue with FIFO (First In First Out) property]

Queues provide services in computer science, transport, and operations research where various entities such as data,
objects, persons, or events are stored and held to be processed later. In these contexts, the queue performs the
function of a buffer.
Queues are common in computer programs, where they are implemented as data structures coupled with access
routines, as an abstract data structure or in object-oriented languages as classes. Common implementations are
circular buffers and linked lists.

Queue implementation
Theoretically, one characteristic of a queue is that it does not have a specific capacity. Regardless of how many
elements are already contained, a new element can always be added. It can also be empty, at which point removing
an element will be impossible until a new element has been added again.
Fixed length arrays are limited in capacity, but it is not true that items need to be copied towards the head of the
queue. The simple trick of turning the array into a closed circle and letting the head and tail drift around
endlessly in that circle makes it unnecessary to ever move items stored in the array. If n is the size of the array,
then computing indices modulo n will turn the array into a circle. This is still the conceptually simplest way to
construct a queue in a high-level language, although it does slow things down a little, because the array indices
must be compared to zero and to the array size, which is comparable to the time taken to check whether an array
index is out of bounds (a check some languages perform anyway). Even so, it is the method of choice for a
quick-and-dirty implementation, or for any high-level language that does not have pointer syntax; a sketch of such a
circular-array queue follows the list of implementations below. The array size must be declared ahead of time, but
some implementations simply double the declared array size when overflow occurs. Most modern languages with objects
or pointers can implement, or come with libraries for, dynamic lists. Such data structures may have no specified
capacity limit beyond memory constraints. Queue overflow results from trying to add an element to a full queue, and
queue underflow happens when trying to remove an element from an empty queue.
A bounded queue is a queue limited to a fixed number of items.
There are several efficient implementations of FIFO queues. An efficient implementation is one that can perform the
operations—enqueuing and dequeuing—in O(1) time.
• Linked list
• A doubly linked list has O(1) insertion and deletion at both ends, so is a natural choice for queues.
• A regular singly linked list only has efficient insertion and deletion at one end. However, a small
modification—keeping a pointer to the last node in addition to the first one—will enable it to implement an
efficient queue.
• A deque implemented using a modified dynamic array
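
A sketch of the circular-array approach described above, in C (QSIZE and the function names are illustrative; errors are reported through a return code rather than by resizing):

#include <stddef.h>

#define QSIZE 8                        /* assumed fixed capacity */

struct queue {
    int items[QSIZE];
    size_t head;                       /* index of the front element */
    size_t count;                      /* number of elements currently stored */
};                                     /* initialise with: struct queue q = {0}; */

/* Enqueue at the rear; returns 0 on success, -1 on queue overflow. */
int enqueue(struct queue *q, int value)
{
    if (q->count == QSIZE)
        return -1;
    q->items[(q->head + q->count) % QSIZE] = value;   /* indices wrap around the circle */
    q->count++;
    return 0;
}

/* Dequeue from the front; returns 0 on success, -1 on queue underflow. */
int dequeue(struct queue *q, int *value)
{
    if (q->count == 0)
        return -1;
    *value = q->items[q->head];
    q->head = (q->head + 1) % QSIZE;
    q->count--;
    return 0;
}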

Queues and programming languages


Queues may be implemented as a separate data type, or may be considered a special case of a double-ended queue
(deque) and not implemented separately. For example, Perl and Ruby allow pushing and popping an array from both
ends, so one can use push and shift functions to enqueue and dequeue a list (or, in reverse, one can use unshift and
pop), although in some cases these operations are not efficient.
C++'s Standard Template Library provides a "queue" templated class which is restricted to only push/pop
operations. Since J2SE5.0, Java's library contains a Queue [1] interface that specifies queue operations;
implementing classes include LinkedList [2] and (since J2SE 1.6) ArrayDeque [3]. PHP has an SplQueue [4]
class and third party libraries like beanstalk'd and Gearman.

References
General
• Donald Knuth. The Art of Computer Programming, Volume 1: Fundamental Algorithms, Third Edition.
Addison-Wesley, 1997. ISBN 0-201-89683-4. Section 2.2.1: Stacks, Queues, and Deques, pp. 238–243.
• Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein. Introduction to Algorithms,
Second Edition. MIT Press and McGraw-Hill, 2001. ISBN 0-262-03293-7. Section 10.1: Stacks and queues,
pp. 200–204.
• William Ford, William Topp. Data Structures with C++ and STL, Second Edition. Prentice Hall, 2002. ISBN
0-13-085850-1. Chapter 8: Queues and Priority Queues, pp. 386–390.
• Adam Drozdek. Data Structures and Algorithms in C++, Third Edition. Thomson Course Technology, 2005.
ISBN 0-534-49182-0. Chapter 4: Stacks and Queues, pp. 137–169.
Citations
[1] http://download.oracle.com/javase/7/docs/api/java/util/Queue.html
[2] http://download.oracle.com/javase/7/docs/api/java/util/LinkedList.html
[3] http://download.oracle.com/javase/7/docs/api/java/util/ArrayDeque.html
[4] http://www.php.net/manual/en/class.splqueue.php

External links
• Queues with algo and 'c' programe (http://scanftree.com/Data_Structure/Queues)
• STL Quick Reference (http://www.halpernwightsoftware.com/stdlib-scratch/quickref.html#containers14)
• VBScript implementation of stack, queue, deque, and Red-Black Tree (http://www.ludvikjerabek.com/
downloads.html)
• Paul E. Black, Bounded queue (http://www.nist.gov/dads/HTML/boundedqueue.html) at the NIST Dictionary of Algorithms and Data Structures.

LIFO
LIFO may refer to:

Queues
• FIFO and LIFO accounting
• LIFO (computing)
• LIFO (education) a layoff policy

Other
• LIFO (magazine), a magazine published in Greece

Stack (abstract data type)


In computer science, a stack is a particular kind of abstract data type or collection in which the principal (or
only) operations on the collection are the addition of an entity to the collection, known as push, and removal of an
entity, known as pop. The relation between the push and pop operations is such that the stack is a
Last-In-First-Out (LIFO) data structure. In a LIFO data structure, the last element added to the structure must be
the first one to be removed. This is equivalent to the requirement that, considered as a linear data structure, or
more abstractly a sequential collection, the push and pop operations occur only at one end of the structure,
referred to as the top of the stack. Often a peek or top operation is also implemented, returning the value of the
top element without removing it.

[Figure: Simple representation of a stack]

A stack may be implemented to have a bounded capacity. If the stack is full and does not contain enough space to
accept an entity to be pushed, the stack is then considered to be in an overflow state. The pop operation removes an
item from the top of the stack. A pop either reveals previously concealed items or results in an empty stack, but, if
the stack is empty, it goes into underflow state, which means no items are present in stack to be removed.
A stack is a restricted data structure, because only a small number of operations are performed on it. The nature of
the pop and push operations also means that stack elements have a natural order. Elements are removed from the
stack in the reverse order to the order of their addition. Therefore, the lower elements are those that have been on the
stack the longest.[1]

History
The stack was first proposed in 1946, in the computer design of Alan M. Turing (who used the terms "bury" and
"unbury") as a means of calling and returning from subroutines. The Germans Klaus
Samelson and Friedrich L. Bauer of Technical University Munich proposed the idea in 1955 and filed a patent in
1957. The same concept was developed, independently, by the Australian Charles Leonard Hamblin in the first half
of 1957.[2]

Abstract definition
A stack is a basic computer science data structure and can be defined in an abstract, implementation-free manner, or
it can be defined generally as a linear list of items in which all additions and deletions are restricted to one
end, the top.
This is a VDM (Vienna Development Method) description of a stack:[3]
Function signatures:

init: -> Stack
push: N x Stack -> Stack
top: Stack -> (N U ERROR)
pop: Stack -> Stack
isempty: Stack -> Boolean

(where N indicates an element (natural numbers in this case), and U indicates set union)
Semantics:

top(init()) = ERROR
top(push(i,s)) = i
pop(init()) = init()
pop(push(i, s)) = s
isempty(init()) = true
isempty(push(i, s)) = false

Inessential operations
In many implementations, a stack has more operations than "push" and "pop". An example is "top of stack", or
"peek", which observes the top-most element without removing it from the stack.[4] Since this can be done with a
"pop" and a "push" with the same data, it is not essential. An underflow condition can occur in the "stack top"
operation if the stack is empty, the same as "pop". Often implementations have a function which just returns if the
stack is empty.

Software stacks

Implementation
In most high level languages, a stack can be easily implemented either through an array or a linked list. What
identifies the data structure as a stack in either case is not the implementation but the interface: the user is only
allowed to pop or push items onto the array or linked list, with few other helper operations. The following will
demonstrate both implementations, using C.

Array
The array implementation aims to create an array where the first element (usually at the zero-offset) is the bottom.
That is, array[0] is the first element pushed onto the stack and the last element popped off. The program must
keep track of the size, or the length of the stack. The stack itself can therefore be effectively implemented as a
two-element structure in C:

#include <stdio.h>      /* for fputs() and stderr */
#include <stdlib.h>     /* for abort() and size_t */
#define STACKSIZE 100   /* maximum number of items; an example value */

typedef struct {
    size_t size;        /* number of items currently on the stack */
    int items[STACKSIZE];
} STACK;

The push() operation is used both to initialize the stack, and to store values to it. It is responsible for inserting
(copying) the value into the ps->items[] array and for incrementing the element counter (ps->size). In a
responsible C implementation, it is also necessary to check whether the array is already full to prevent an overrun.

/* Push a value onto the stack, checking for overflow. */
void push(STACK *ps, int x)
{
    if (ps->size == STACKSIZE) {
        fputs("Error: stack overflow\n", stderr);
        abort();
    } else
        ps->items[ps->size++] = x;
}

The pop() operation is responsible for removing a value from the stack, and decrementing the value of
ps->size. A responsible C implementation will also need to check that the array is not already empty.

/* Pop the top value off the stack, checking for underflow. */
int pop(STACK *ps)
{
    if (ps->size == 0) {
        fputs("Error: stack underflow\n", stderr);
        abort();
    } else
        return ps->items[--ps->size];
}

If we use a dynamic array, then we can implement a stack that can grow or shrink as much as needed. The size of the
stack is simply the size of the dynamic array. A dynamic array is a very efficient implementation of a stack, since
adding items to or removing items from the end of a dynamic array is amortized O(1) time.
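
A short usage example for the array implementation above (assuming it is compiled together with the declarations shown; the expected output is given in the comments):

int main(void)
{
    STACK s = { 0 };             /* an empty stack: size == 0 */

    push(&s, 1);
    push(&s, 2);
    push(&s, 3);

    printf("%d\n", pop(&s));     /* prints 3: the last value pushed is the first popped */
    printf("%d\n", pop(&s));     /* prints 2 */
    printf("%d\n", pop(&s));     /* prints 1 */
    return 0;
}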

Linked list
The linked-list implementation is equally simple and straightforward. In fact, a simple singly linked list is sufficient
to implement a stack—it only requires that the head node or element can be removed, or popped, and a node can
only be inserted by becoming the new head node.
Unlike the array implementation, our structure typedef corresponds not to the entire stack structure, but to a single
node:

typedef struct stack {
    int data;               /* the value stored in this node */
    struct stack *next;     /* pointer to the node below */
} STACK;

Such a node is identical to a typical singly linked list node, at least to those that are implemented in C.
The push() operation both initializes an empty stack, and adds a new node to a non-empty one. It works by
receiving a data value to push onto the stack, along with a target stack, creating a new node by allocating memory for
it, and then inserting it into a linked list as the new head:

void push(STACK **head, int value)
{
    STACK *node = malloc(sizeof(STACK));              /* create a new node */

    if (node == NULL) {
        fputs("Error: no space available for node\n", stderr);
        abort();
    } else {                                          /* initialize node */
        node->data = value;
        node->next = empty(*head) ? NULL : *head;     /* insert as the new head, if any */
        *head = node;
    }
}

A pop() operation removes the head from the linked list and reassigns the head pointer to the former second node. It
checks whether the list is empty before popping from it:

int pop(STACK **head)
{
    if (empty(*head)) {                               /* stack is empty */
        fputs("Error: stack underflow\n", stderr);
        abort();
    } else {                                          /* pop a node */
        STACK *top = *head;
        int value = top->data;
        *head = top->next;
        free(top);
        return value;
    }
}
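
The push() and pop() functions above rely on an empty() helper that is not shown. A minimal definition, assuming the stack is empty exactly when its head pointer is NULL, could be:

/* Non-zero if the stack contains no nodes. */
int empty(const STACK *head)
{
    return head == NULL;
}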

Stacks and programming languages


Some languages, like LISP and Python, do not call for stack implementations, since push and pop functions are
available for any list. All Forth-like languages (such as Adobe PostScript) are also designed around language-defined
stacks that are directly visible to and manipulated by the programmer. Examples from Common Lisp:

(setf list (list 'a 'b 'c))
;; ⇒ (A B C)
(pop list)
;; ⇒ A
list
;; ⇒ (B C)
(push 'new list)
;; ⇒ (NEW B C)

C++'s Standard Template Library provides a "stack" templated class which is restricted to only push/pop
operations. Java's library contains a Stack [5] class that is a specialization of Vector [6]; this could be
considered a design flaw, since the inherited get() method from Vector [6] ignores the LIFO constraint of the
Stack [5]. PHP has an SplStack [7] class.

Hardware stacks
A common use of stacks at the architecture level is as a means of allocating and accessing memory.

Basic architecture of a stack


A typical stack is an area of computer
memory with a fixed origin and a variable
size. Initially the size of the stack is zero.
A stack pointer, usually in the form of a
hardware register, points to the most
recently referenced location on the stack;
when the stack has a size of zero, the stack
pointer points to the origin of the stack.

The two operations applicable to all stacks are:
• a push operation, in which a data item
is placed at the location pointed to by
the stack pointer, and the address in the
stack pointer is adjusted by the size of
the data item;
• a pop or pull operation: a data item at
the current location pointed to by the
stack pointer is removed, and the stack
pointer is adjusted by the size of the
data item.
[Figure: A typical stack, storing local data and call information for nested procedure calls (not necessarily nested
procedures!). This stack grows downward from its origin. The stack pointer points to the current topmost datum on
the stack. A push operation decrements the pointer and copies the data to the stack; a pop operation copies data
from the stack and then increments the pointer. Each procedure called in the program stores procedure return
information (in yellow) and local data (in other colors) by pushing them onto the stack. This type of stack
implementation is extremely common, but it is vulnerable to buffer overflow attacks (see the text).]

There are many variations on the basic principle of stack operations. Every stack has a fixed location in memory at
which it begins. As data items are added to the stack, the stack pointer is displaced to indicate the current extent
of the stack, which expands away from the origin. Stack pointers may point to the origin of a stack or to a limited
range of addresses
either above or below the origin (depending on the direction in which the stack grows); however, the stack pointer
cannot cross the origin of the stack. In other words, if the origin of the stack is at address 1000 and the stack grows
downwards (towards addresses 999, 998, and so on), the stack pointer must never be incremented beyond 1000 (to
1001, 1002, etc.). If a pop operation on the stack causes the stack pointer to move past the origin of the stack, a stack
underflow occurs. If a push operation causes the stack pointer to increment or decrement beyond the maximum
extent of the stack, a stack overflow occurs.

Some environments that rely heavily on stacks may provide additional operations, for example:
• Duplicate: the top item is popped, and then pushed again (twice), so that an additional copy of the former top item
is now on top, with the original below it.
• Peek: the topmost item is inspected (or returned), but the stack pointer is not changed, and the stack size does not
change (meaning that the item remains on the stack). This is also called top operation in many articles.
• Swap or exchange: the two topmost items on the stack exchange places.
• Rotate (or Roll): the n topmost items are moved on the stack in a rotating fashion. For example, if n=3, items 1, 2,
and 3 on the stack are moved to positions 2, 3, and 1 on the stack, respectively. Many variants of this operation
are possible, with the most common being called left rotate and right rotate.
Stacks are often visualized growing from the bottom up (like real-world stacks). They may also be visualized
growing from left to right, so that "topmost" becomes "rightmost", or even growing from top to bottom. The
important feature is that the top of the stack is in a fixed position. The image to the right is an example of a top to
bottom growth visualization: the top (28) is the stack 'bottom', since the stack 'top' is where items are pushed or
popped from. Sometimes stacks are also visualized metaphorically, such as coin holders or Pez dispensers.
A right rotate will move the first element to the third position, the second to the first and the third to the second.
Here are two equivalent visualizations of this process:

apple banana
banana ===right rotate==> cucumber
cucumber apple

cucumber apple
banana ===left rotate==> cucumber
apple banana

A stack is usually represented in computers by a block of memory cells, with the "bottom" at a fixed location, and
the stack pointer holding the address of the current "top" cell in the stack. The top and bottom terminology are used
irrespective of whether the stack actually grows towards lower memory addresses or towards higher memory
addresses.
Pushing an item on to the stack adjusts the stack pointer by the size of the item (either decrementing or incrementing,
depending on the direction in which the stack grows in memory), pointing it to the next cell, and copies the new top
item to the stack area. Depending again on the exact implementation, at the end of a push operation, the stack pointer
may point to the next unused location in the stack, or it may point to the topmost item in the stack. If the stack points
to the current topmost item, the stack pointer will be updated before a new item is pushed onto the stack; if it points
to the next available location in the stack, it will be updated after the new item is pushed onto the stack.
Popping the stack is simply the inverse of pushing. The topmost item in the stack is removed and the stack pointer is
updated, in the opposite order of that used in the push operation.
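To make this memory-block representation concrete, the following is a minimal sketch in C of an array-based stack whose "top" index plays the role of the stack pointer. The type and function names (Stack, push, pop, peek) and the fixed capacity are illustrative choices for this sketch, not part of any particular library.

#include <stdio.h>
#include <stdlib.h>

#define STACK_CAPACITY 100          /* maximum extent of the stack (illustrative) */

typedef struct {
    int data[STACK_CAPACITY];
    int top;                        /* index of the topmost item; -1 when empty */
} Stack;

void stack_init(Stack *s) { s->top = -1; }

int stack_is_empty(const Stack *s) { return s->top == -1; }

void push(Stack *s, int value)
{
    if (s->top == STACK_CAPACITY - 1) {       /* stack overflow */
        fprintf(stderr, "stack overflow\n");
        exit(EXIT_FAILURE);
    }
    s->data[++s->top] = value;                /* adjust the pointer, then store the item */
}

int pop(Stack *s)
{
    if (stack_is_empty(s)) {                  /* stack underflow */
        fprintf(stderr, "stack underflow\n");
        exit(EXIT_FAILURE);
    }
    return s->data[s->top--];                 /* read the item, then adjust the pointer */
}

int peek(const Stack *s) { return s->data[s->top]; }   /* top item, stack unchanged */

int main(void)
{
    Stack s;
    stack_init(&s);
    push(&s, 1);
    push(&s, 2);
    push(&s, 3);
    printf("top = %d\n", peek(&s));           /* prints: top = 3 */
    while (!stack_is_empty(&s))
        printf("%d ", pop(&s));               /* prints: 3 2 1 (LIFO order) */
    printf("\n");
    return 0;
}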

Hardware support

Stack in main memory


Most CPUs have registers that can be used as stack pointers. Processor families like the x86, Z80, 6502, and many
others have special instructions that implicitly use a dedicated (hardware) stack pointer to conserve opcode space.
Some processors, like the PDP-11 and the 68000, also have special addressing modes for implementation of stacks,
typically with a semi-dedicated stack pointer as well (such as A7 in the 68000). However, in most processors, several
different registers may be used as additional stack pointers as needed (whether updated via addressing modes or via
add/sub instructions).

Stack in registers or dedicated memory


The x87 floating point architecture is an example of a set of registers organised as a stack where direct access to
individual registers (relative the current top) is also possible. As with stack-based machines in general, having the
top-of-stack as an implicit argument allows for a small machine code footprint with a good usage of bus bandwidth
and code caches, but it also prevents some types of optimizations possible on processors permitting random access to
the register file for all (two or three) operands. A stack structure also makes superscalar implementations with
register renaming (for speculative execution) somewhat more complex to implement, although it is still feasible, as
exemplified by modern x87 implementations.
Sun SPARC, AMD Am29000, and Intel i960 are all examples of architectures using register windows within a
register-stack as another strategy to avoid the use of slow main memory for function arguments and return values.
There are also a number of small microprocessors that implement a stack directly in hardware, and some
microcontrollers have a fixed-depth stack that is not directly accessible. Examples are the PIC microcontrollers, the
Computer Cowboys MuP21, the Harris RTX line, and the Novix NC4016. Many stack-based microprocessors were
used to implement the programming language Forth at the microcode level. Stacks were also used as a basis of a
number of mainframes and mini computers. Such machines were called stack machines, the most famous being the
Burroughs B5000.

Applications
Stacks have numerous applications. We see stacks in everyday life, from the books in our library, to the blank sheets
of paper in our printer tray. All of them follow the Last In First Out (LIFO) logic, that is when we add a book to a
pile of books, we add it to the top of the pile, whereas when we remove a book from the pile, we generally remove it
from the top of the pile.
Given below are a few applications of stacks in the world of computers:

Converting a decimal number into a binary number


The logic for transforming a decimal number into a binary number is as follows:
1. Read a number
2. Iteration (while the number is greater than zero)
   1. Find out the remainder after dividing the number by 2
   2. Print the remainder
3. End the iteration

(Figure: Decimal to binary conversion of 23.)

However, there is a problem with this logic. Suppose the number whose binary form we want to find is 23. Using this logic, we get the result as 11101, instead of getting 10111.
To solve this problem, we use a stack. We make use of the LIFO property of the stack. Initially we push the binary digit formed onto the stack, instead of printing it directly. After the entire number has been converted into binary form, we pop one digit at a time from the stack and print it. Therefore we get the decimal number converted into its proper binary form.
Algorithm:

function outputInBinary(Integer n)
Stack s = new Stack
while n > 0 do
Integer bit = n modulo 2
s.push(bit)
if s is full then
return error
end if
n = floor(n / 2)
end while
while s is not empty do
output(s.pop())
end while
end function
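A direct C rendering of this pseudocode is sketched below. It uses the same idea of pushing the remainders and popping them to print, with a small fixed-size array standing in for the stack; the function name and the 32-bit bound are illustrative assumptions of this sketch.

#include <stdio.h>

/* Print the binary representation of n by pushing the remainders onto a
   stack and popping them afterwards, so the most significant bit, pushed
   last, is printed first. */
void outputInBinary(unsigned int n)
{
    int stack[32];                  /* enough for a 32-bit unsigned integer */
    int top = -1;

    if (n == 0) {                   /* special case: zero has the single digit 0 */
        printf("0");
        return;
    }
    while (n > 0) {
        stack[++top] = n % 2;       /* push the remainder */
        n /= 2;
    }
    while (top >= 0)
        printf("%d", stack[top--]); /* pop one digit at a time and print it */
}

int main(void)
{
    outputInBinary(23);             /* prints 10111 */
    printf("\n");
    return 0;
}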

Towers of Hanoi
One of the most interesting applications of stacks can be found in solving a puzzle called the Tower of Hanoi. According to an old Brahmin story, the existence of the universe is calculated in terms of the time taken by a number of monks, who are working all the time, to move 64 disks from one pole to another. But there are some rules about how this should be done, which are:
1. move only one disk at a time.
2. for temporary storage, a third pole may be used.
3. a disk of larger diameter may not be placed on a disk of smaller diameter.

(Figure: Towers of Hanoi.)

For the algorithm of this puzzle see Tower of Hanoi.
Assume that A is the first tower, B is the second tower, and C is the third tower.
(Figures: Tower of Hanoi; Towers of Hanoi example, steps 1-8.)
Output: (when there are 3 disks)


Let 1 be the smallest disk, 2 be the disk of medium size and 3 be the largest disk.

Move disk From peg To peg

1 A C

2 A B

1 C B

3 A C

1 B A

2 B C

1 A C

The C++ code for this solution can be implemented in two ways:

First implementation (using stacks implicitly by recursion)


#include <stdio.h>

// Move n disks from tower a to tower b, using tower c as auxiliary storage.
void TowersofHanoi(int n, int a, int b, int c)
{
    if (n > 0)
    {
        TowersofHanoi(n-1, a, c, b);   // recursion: move the top n-1 disks out of the way
        printf("> Move top disk from tower %d to tower %d.\n", a, b);
        TowersofHanoi(n-1, c, b, a);   // recursion: move the n-1 disks onto tower b
    }
}

Second implementation (using stacks explicitly)


// Global variable, tower[1:3] are three towers
arrayStack<int> tower[4];

void moveAndShow(int n, int a, int b, int c)
{
    // Move the top n disks from tower a to tower b, showing states.
    // Use tower c for intermediate storage.
    if (n > 0)
    {
        moveAndShow(n-1, a, c, b);   // recursion: move the top n-1 disks from a to c
        int d = tower[a].top();      // move one disk from the top of tower a ...
        tower[a].pop();
        tower[b].push(d);            // ... to the top of tower b
        showState();                 // show the state of the 3 towers
        moveAndShow(n-1, c, b, a);   // recursion: move the n-1 disks from c to b
    }
}

void TowerofHanoi(int n)
{
    // Preprocessor for moveAndShow.
    for (int d = n; d > 0; d--)      // initialize
        tower[1].push(d);            // add disk d to tower 1
    moveAndShow(n, 1, 3, 2);         /* move n disks from tower 1 to tower 3,
                                        using tower 2 as the intermediate tower */
}

However, the complexity of the above implementations is O(2^n), so it is obvious that the problem can only be solved for small values of n (generally n <= 30).
In the case of the monks, the number of turns taken to transfer 64 disks, by following the above rules, will be 18,446,744,073,709,551,615; which will surely take a lot of time!
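That figure follows from the standard recurrence for the minimum number of moves T(n) needed for n disks:

    T(1) = 1,    T(n) = 2 T(n-1) + 1,    which gives    T(n) = 2^n - 1,

so for 64 disks T(64) = 2^64 - 1 = 18,446,744,073,709,551,615 moves.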

Expression evaluation and syntax parsing


Calculators employing reverse Polish notation use a stack structure to hold values. Expressions can be represented in
prefix, postfix or infix notations and conversion from one form to another may be accomplished using a stack. Many
compilers use a stack for parsing the syntax of expressions, program blocks etc. before translating into low level
code. Most programming languages are context-free languages, allowing them to be parsed with stack based
machines.

Evaluation of an infix expression that is fully parenthesized


Input: (((2 * 5) - (1 * 2)) / (11 - 9))
Output: 4
Analysis: Five types of input characters
1. Opening bracket
2. Numbers
3. Operators
4. Closing bracket
5. New line character
Data structure requirement: A character stack
Algorithm
1. Read one input character

2. Actions at end of each input

Opening brackets (2.1) Push into stack and then Go to step (1)

Number (2.2) Push into stack and then Go to step (1)

Operator (2.3) Push into stack and then Go to step (1)

Closing brackets (2.4) Pop from character stack

(2.4.1) if it is opening bracket, then discard it, Go to step (1)

(2.4.2) Pop is used four times

The first popped element is assigned to op2

The second popped element is assigned to op

The third popped element is assigned to op1



The fourth popped element is the remaining opening bracket, which can be discarded

Evaluate op1 op op2

Convert the result into character and

push into the stack

Go to step (2.4)

New line character (2.5) Pop from stack and print the answer

STOP

Result: The evaluation of the fully parenthesized infix expression is printed as follows:
Input String: (((2 * 5) - (1 * 2)) / (11 - 9))

Input Symbol Stack (from bottom to top) Operation

( (

( ((

( (((

2 (((2

* (((2*

5 (((2*5

) ( ( 10 2 * 5 = 10 and push

- ( ( 10 -

( ( ( 10 - (

1 ( ( 10 - ( 1

* ( ( 10 - ( 1 *

2 ( ( 10 - ( 1 * 2

) ( ( 10 - 2 1 * 2 = 2 & Push

) (8 10 - 2 = 8 & Push

/ (8/

( (8/(

11 ( 8 / ( 11

- ( 8 / ( 11 -

9 ( 8 / ( 11 - 9

) (8/2 11 - 9 = 2 & Push

) 4 8 / 2 = 4 & Push

New line Empty Pop & Print



Evaluation of infix expression which is not fully parenthesized


Input: (2 * 5 - 1 * 2) / (11 - 9)
Output: 4
Analysis: There are five types of input characters, which are:
1. Opening parentheses
2. Numbers
3. Operators
4. Closing parentheses
5. New line character (\n)
We do not know what to do if an operator is read as an input character. By implementing the priority rule for
operators, we have a solution to this problem.
The priority rule: when an operator is read, we perform a comparative priority check before pushing it. If the stack top contains an operator of priority higher than or equal to the priority of the input operator, we pop that operator and apply it to the two topmost numbers on the integer stack. We keep performing the priority check until the top of the stack either contains an operator of lower priority or does not contain an operator at all; only then is the input operator pushed.
Data Structure Requirement for this problem: a character stack and an integer stack
Algorithm:
1. Read an input character

2. Actions that will be performed at the end of each input

Opening parentheses (2.1) Push it into character stack and then Go to step (1)

Number (2.2) Push into integer stack, Go to step (1)

Operator (2.3) Do the comparative priority check

(2.3.1) if the character stack's top contains an operator with equal

or higher priority, then pop it into op

Pop a number from integer stack into op2

Pop another number from integer stack into op1

Calculate op1 op op2 and push the result into the integer

stack

Closing parentheses (2.4) Pop from the character stack

(2.4.1) if it is an opening parentheses, then discard it and Go to

step (1)

(2.4.2) To op, assign the popped element

Pop a number from integer stack and assign it op2

Pop another number from integer stack and assign it

to op1

Calculate op1 op op2 and push the result into the integer

stack

Convert into character and push into stack

Go to the step (2.4)

New line character (2.5) Print the result after popping from the stack

STOP

Result: The evaluation of an infix expression that is not fully parenthesized is printed as follows:
Input String: (2 * 5 - 1 * 2) / (11 - 9)

Input Symbol Character Stack (from bottom to top) Integer Stack (from bottom to top) Operation performed

( (

2 ( 2

* (* Push as * has higher priority

5 (* 25

- (* Since '-' has less priority, we do 2 * 5 = 10

(- 10 We push 10 and then push '-'

1 (- 10 1

* (-* 10 1 Push * as it has higher priority

2 (-* 10 1 2

) (- 10 2 Perform 1 * 2 = 2 and push it

( 8 Pop - and 10 - 2 = 8 and push, Pop (

/ / 8

( /( 8

11 /( 8 11

- /(- 8 11

9 /(- 8 11 9

) / 82 Perform 11 - 9 = 2 and push it

New line 4 Perform 8 / 2 = 4 and push it

4 Print the output, which is 4
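The following is a hedged C sketch of this priority-based evaluation. It keeps an operator stack and an integer value stack (rather than converting intermediate results back into characters, as the tables above do), and the operator set, priority values and names are illustrative assumptions of the sketch.

#include <stdio.h>
#include <ctype.h>

#define MAX 100

static char ops[MAX];               /* character stack for operators */
static int  nops  = 0;
static int  vals[MAX];              /* integer stack for operands and results */
static int  nvals = 0;

static int priority(char op)
{
    if (op == '*' || op == '/') return 2;
    if (op == '+' || op == '-') return 1;
    return 0;                       /* '(' gets the lowest priority */
}

static void apply_top_operator(void)
{
    int  op2 = vals[--nvals];       /* pop op2 */
    int  op1 = vals[--nvals];       /* pop op1 */
    char op  = ops[--nops];         /* pop the operator */
    switch (op) {
    case '+': vals[nvals++] = op1 + op2; break;
    case '-': vals[nvals++] = op1 - op2; break;
    case '*': vals[nvals++] = op1 * op2; break;
    case '/': vals[nvals++] = op1 / op2; break;
    }
}

int evaluate_infix(const char *s)
{
    for (; *s != '\0'; s++) {
        if (isspace((unsigned char)*s)) {
            continue;
        } else if (isdigit((unsigned char)*s)) {
            int n = 0;
            while (isdigit((unsigned char)*s))
                n = 10 * n + (*s++ - '0');
            s--;                    /* step back over the first non-digit */
            vals[nvals++] = n;      /* push the number */
        } else if (*s == '(') {
            ops[nops++] = '(';
        } else if (*s == ')') {
            while (ops[nops - 1] != '(')
                apply_top_operator();   /* apply until the matching '(' */
            nops--;                     /* discard the '(' */
        } else {                        /* an operator: the priority rule */
            while (nops > 0 && priority(ops[nops - 1]) >= priority(*s))
                apply_top_operator();
            ops[nops++] = *s;
        }
    }
    while (nops > 0)                /* apply whatever operators remain */
        apply_top_operator();
    return vals[--nvals];
}

int main(void)
{
    printf("%d\n", evaluate_infix("(2 * 5 - 1 * 2) / (11 - 9)"));   /* prints 4 */
    return 0;
}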

Evaluation of prefix expression


Input: / - * 2 5 * 1 2 - 11 9
Output: 4
Analysis: There are three types of input characters:
1. Numbers
2. Operators
3. New line character (\n)
Data structure requirement: a character stack and an integer stack
Algorithm:
1. Read one character input at a time and keep pushing it into the character stack until the new

line character is reached

2. Perform pop from the character stack. If the stack is empty, go to step (3)

Number (2.1) Push in to the integer stack and then go to step (1)

Operator (2.2) Assign the operator to op

Pop a number from integer stack and assign it to op1

Pop another number from integer stack

and assign it to op2

Calculate op1 op op2 and push the output into the integer

stack. Go to step (2)

3. Pop the result from the integer stack and display the result

Result: the evaluation of prefix expression is printed as follows:


Input String: / - * 2 5 * 1 2 - 11 9

Input Symbol Character Stack (from bottom to top) Integer Stack (from bottom to top) Operation performed

/ /

- /-

* /-*

2 /-*2

5 /-*25

* /-*25*

1 /-*25*1

2 /-*25*12

- /-*25*12-

11 / - * 2 5 * 1 2 - 11

9 / - * 2 5 * 1 2 - 11 9

\n / - * 2 5 * 1 2 - 11 9

/-*25*12- 9 11

/-*25*12 2 11 - 9 = 2

/-*25*1 22

/-*25* 221

/-*25 22 1*2=2

/-*2 225

/-* 2252

/- 2 2 10 5 * 2 = 10

/ 28 10 - 2 = 8

Stack is empty 4 8/2=4

Stack is empty Print 4

Evaluation of postfix expression


The calculation 1 + 2 * 4 + 3 can be written down like this in postfix notation, with the advantage that no precedence rules or parentheses are needed:

1 2 4 * + 3 +

The expression is evaluated from the left to right using a stack:


1. when encountering an operand: push it
2. when encountering an operator: pop two operands, evaluate the result and push it.
Like the following way (the Stack is displayed after Operation has taken place):

Input Operation Stack (after op)

1 Push operand 1

2 Push operand 2, 1

4 Push operand 4, 2, 1

* Multiply 8, 1

+ Add 9

3 Push operand 3, 9

+ Add 12

The final result, 12, lies on the top of the stack at the end of the calculation.
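A minimal sketch of this postfix evaluation in C is given below; it assumes single-digit operands separated by spaces, and the function name and fixed stack size are illustrative.

#include <stdio.h>
#include <ctype.h>

/* Evaluate a postfix expression whose operands are single decimal digits,
   e.g. "1 2 4 * + 3 +". The name and the fixed stack size are illustrative. */
int evaluate_postfix(const char *s)
{
    int stack[100];
    int top = -1;

    for (; *s != '\0'; s++) {
        if (isdigit((unsigned char)*s)) {
            stack[++top] = *s - '0';        /* operand: push it */
        } else if (*s == '+' || *s == '-' || *s == '*' || *s == '/') {
            int b = stack[top--];           /* pop two operands ... */
            int a = stack[top--];
            int r;
            switch (*s) {                   /* ... evaluate the result ... */
            case '+': r = a + b; break;
            case '-': r = a - b; break;
            case '*': r = a * b; break;
            default:  r = a / b; break;
            }
            stack[++top] = r;               /* ... and push it back */
        }
        /* spaces and any other characters are skipped */
    }
    return stack[top];                      /* the final result lies on top */
}

int main(void)
{
    printf("%d\n", evaluate_postfix("1 2 4 * + 3 +"));   /* prints 12 */
    return 0;
}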
Example in C

#include <stdio.h>

/* Interactive demonstration of push and pop on an array-based stack:
   any number entered is pushed; entering -1 pops and prints the top. */
int main()
{
    int a[100], i;              /* a[] holds the stack, i is the stack pointer */
    printf("To pop enter -1\n");
    for (i = 0;;)
    {
        printf("Push ");
        if (scanf("%d", &a[i]) != 1)
            break;              /* stop on end of input */
        if (a[i] == -1)
        {
            if (i == 0)
                printf("Underflow\n");
            else
                printf("pop = %d\n", a[--i]);
        }
        else if (i == 99)
        {
            printf("Overflow\n");   /* the array is full, do not advance */
        }
        else
        {
            i++;
        }
    }
    return 0;
}

Evaluation of postfix expression (Pascal)


This is an implementation in Pascal, using marked sequential file as data archives.

{
programmer : clx321
file : stack.pas
unit : Pstack.tpu
}
program TestStack;
{this program uses ADT of Stack, I will assume that the unit of ADT of
Stack has already existed}

uses
PStack; {ADT of STACK}

{dictionary}
const
mark = '.';

var
data : stack;
f : text;
cc : char;
ccInt, cc1, cc2 : integer;

{functions}
function IsOperand (cc : char) : boolean; {JUST Prototype}
{return TRUE if cc is operand}
function ChrToInt (cc : char) : integer; {JUST Prototype}
{change char to integer}
function Operator (cc1, cc2 : integer) : integer; {JUST Prototype}
{operate two operands}

{algorithms}
begin
assign (f, 'postfix.txt'); {the file name here is illustrative}
reset (f);
read (f, cc); {first elmt}
if (cc = mark) then
begin
writeln ('empty archives !');
end
else
begin
repeat
if (IsOperand (cc)) then
begin
ccInt := ChrToInt (cc);
push (ccInt, data);


end
else
begin
pop (cc1, data);
pop (cc2, data);
push (Operator (cc2, cc1), data);
end;
read (f, cc); {next elmt}
until (cc = mark);
end;
close (f);
end.

Conversion of an Infix expression that is fully parenthesized into a Postfix expression


Input: (((8 + 1) - (7 - 4)) / (11 - 9))
Output: 8 1 + 7 4 - - 11 9 - /
Analysis: There are five types of input characters which are:

* Opening parentheses
* Numbers
* Operators
* Closing parentheses
* New line character (\n)

Requirement: A character stack


Algorithm:
1. Read a character input
2. Actions to be performed at end of each input
Opening parentheses (2.1) Push into stack and then Go to step (1)
Number (2.2) Print and then Go to step (1)
Operator (2.3) Push into stack and then Go to step (1)
Closing parentheses (2.4) Pop it from the stack
(2.4.1) If it is an operator, print it, Go to step (2.4)
(2.4.2) If the popped element is an opening parentheses,
discard it and go to step (1)
New line character (2.5) STOP
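A small C sketch of this conversion algorithm follows; it assumes the input expression is fully parenthesized, as stated above, and the function name and stack size are illustrative.

#include <stdio.h>
#include <ctype.h>

/* Convert a fully parenthesized infix expression to postfix, following the
   algorithm above. Names and the fixed stack size are illustrative. */
void infixToPostfix(const char *s)
{
    char stack[100];
    int top = -1;

    for (; *s != '\0' && *s != '\n'; s++) {
        if (*s == ' ') {
            continue;
        } else if (*s == '(') {
            stack[++top] = '(';             /* (2.1) push the opening parenthesis */
        } else if (isdigit((unsigned char)*s)) {
            while (isdigit((unsigned char)*s))
                putchar(*s++);              /* (2.2) print the (possibly multi-digit) number */
            putchar(' ');
            s--;
        } else if (*s == ')') {
            while (stack[top] != '(') {
                putchar(stack[top--]);      /* (2.4.1) print the operators ... */
                putchar(' ');
            }
            top--;                          /* (2.4.2) ... and discard the '(' */
        } else {
            stack[++top] = *s;              /* (2.3) push the operator */
        }
    }
    printf("\n");                           /* (2.5) new line character: stop */
}

int main(void)
{
    infixToPostfix("(((8 + 1) - (7 - 4)) / (11 - 9))");
    /* prints: 8 1 + 7 4 - - 11 9 - / */
    return 0;
}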

Therefore, the final output after conversion of an infix expression to a postfix expression is as follows:

Input Operation Stack (after Output on


op) monitor

( (2.1) Push operand into stack (

( (2.1) Push operand into stack ((

( (2.1) Push operand into stack (((

8 (2.2) Print it 8

+ (2.3) Push operator into stack (((+ 8

1 (2.2) Print it 81

) (2.4) Pop from the stack: Since popped element is '+' print it ((( 81+

(2.4) Pop from the stack: Since popped element is '(' we ignore it and read next (( 81+
character

- (2.3) Push operator into stack ((-

( (2.1) Push operand into stack ((-(

7 (2.2) Print it 81+7

- (2.3) Push the operator in the stack ((-(-

4 (2.2) Print it 81+74

) (2.4) Pop from the stack: Since popped element is '-' print it ((-( 81+74-

(2.4) Pop from the stack: Since popped element is '(' we ignore it and read next ((-
character

) (2.4) Pop from the stack: Since popped element is '-' print it (( 81+74--

(2.4) Pop from the stack: Since popped element is '(' we ignore it and read next (
character

/ (2.3) Push the operand into the stack (/

( (2.1) Push into the stack (/(

11 (2.2) Print it 8 1 + 7 4 - - 11

- (2.3) Push the operand into the stack (/(-

9 (2.2) Print it 8 1 + 7 4 - - 11 9

) (2.4) Pop from the stack: Since popped element is '-' print it (/( 8 1 + 7 4 - - 11 9 -

(2.4) Pop from the stack: Since popped element is '(' we ignore it and read next (/
character

) (2.4) Pop from the stack: Since popped element is '/' print it ( 8 1 + 7 4 - - 11 9 - /

(2.4) Pop from the stack: Since popped element is '(' we ignore it and read the next character; the stack is now empty

New line character    (2.5) STOP

Rearranging railroad cars

Problem Description
This is one useful application of stacks. Consider that a freight train has n railroad cars, each to be left at a different station. They're numbered 1 through n and the freight train visits these stations in the order n through 1. Obviously, the railroad cars are labeled by their destination. To facilitate removal of the cars from the train, we must rearrange them in ascending order of their number (i.e. 1 through n). When cars are in this order, they can be detached at each station. We rearrange cars at a shunting yard that has an input track, an output track and k holding tracks between the input and output tracks.

Solution Strategy
To rearrange cars, we examine the cars on the input from front to back. If the car being examined is next one in the
output arrangement, we move it directly to output track. If not, we move it to the holding track & leave it there until
it's time to place it to the output track. The holding tracks operate in a LIFO manner as the cars enter & leave these
tracks from top. When rearranging cars only following moves are permitted:
• A car may be moved from front (i.e. right end) of the input track to the top of one of the holding tracks or to the
left end of the output track.
• A car may be moved from the top of holding track to left end of the output track.
The figure shows a shunting yard with k = 3 holding tracks H1, H2 & H3, and n = 9. The n cars of the freight train begin in the input track & are to end up in the output track in order 1 through n from right to left. The cars initially are in the order 5,8,1,7,4,2,9,6,3 from back to front. Later the cars are rearranged in the desired order.

A Three Track Example

• Consider the input arrangement from the figure; here we note that car 3 is at the front, so it can't be output yet, as it is to be preceded by cars 1 & 2. So car 3 is detached & moved to holding track H1.
• The next car, 6, can't be output either, and it is moved to holding track H2, because we have to output car 3 before car 6 and this will not be possible if we move car 6 to holding track H1.
• Now it's obvious that we move car 9 to H3.
The requirement for rearranging cars on any holding track is that the cars should be arranged in ascending order from top to bottom.
• So car 2 is now moved to holding track H1 so that it satisfies the previous statement. If we move car 2 to H2 or H3, then we have no place to move cars 4, 5, 7, 8. The least restriction on future car placement arises when the new car λ is moved to the holding track that has a car at its top with the smallest label Ψ such that λ < Ψ. We may call this an assignment rule to decide whether a particular car belongs to a specific holding track.
• When car 4 is considered, there are three places to move the car: H1, H2, H3. The tops of these tracks are 2, 6, 9. So, using the above-mentioned assignment rule, we move car 4 to H2.
• Car 7 is moved to H3.
• The next car, 1, has the least label, so it is moved to the output track.
• Now it is time for cars 2 & 3 to be output; they come from H1 (in short, all the cars from H1 are appended to car 1 on the output track).
Car 4 is then moved to the output track. No other cars can be moved to the output track at this time.
• The next car, 8, is moved to holding track H1.
• Car 5 is output from the input track. Car 6 is moved to the output track from H2, then 7 from H3, 8 from H1 & 9 from H3.

(Figure: Railroad cars example.)

Backtracking
Another important application of stacks is backtracking. Consider a simple example of finding the correct path in a
maze. There are a series of points, from the starting point to the destination. We start from one point. To reach the
final destination, there are several paths. Suppose we choose a random path. After following a certain path, we
realise that the path we have chosen is wrong. So we need to find a way by which we can return to the beginning of
that path. This can be done with the use of stacks. With the help of stacks, we remember the point where we have
reached. This is done by pushing that point into the stack. In case we end up on the wrong path, we can pop the last
point from the stack and thus return to the last point and continue our quest to find the right path. This is called
backtracking.
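As a concrete sketch of this idea, the following C program searches a small hard-coded maze using an explicit stack of positions: a position is pushed whenever we advance, and popped when we reach a dead end. The maze itself, the names and the stack size are illustrative assumptions of the sketch.

#include <stdio.h>

#define ROWS 4
#define COLS 5

/* 0 = open cell, 1 = wall; start at (0,0), destination at (3,4). Illustrative maze. */
static const int maze[ROWS][COLS] = {
    {0, 0, 1, 0, 0},
    {1, 0, 1, 0, 1},
    {1, 0, 0, 0, 1},
    {1, 1, 1, 0, 0},
};

typedef struct { int r, c; } Pos;

int main(void)
{
    Pos stack[ROWS * COLS];              /* the backtracking stack of positions */
    int top = -1;
    int visited[ROWS][COLS] = {{0}};

    stack[++top] = (Pos){0, 0};          /* push the starting point */
    visited[0][0] = 1;

    while (top >= 0) {
        Pos p = stack[top];              /* the current position is the top of the stack */
        if (p.r == ROWS - 1 && p.c == COLS - 1) {
            printf("reached the destination; path length %d\n", top + 1);
            return 0;
        }
        const int dr[4] = {1, -1, 0, 0}; /* down, up, right, left */
        const int dc[4] = {0, 0, 1, -1};
        int moved = 0;
        for (int k = 0; k < 4 && !moved; k++) {
            int nr = p.r + dr[k], nc = p.c + dc[k];
            if (nr >= 0 && nr < ROWS && nc >= 0 && nc < COLS &&
                maze[nr][nc] == 0 && !visited[nr][nc]) {
                visited[nr][nc] = 1;
                stack[++top] = (Pos){nr, nc};   /* advance: push the new position */
                moved = 1;
            }
        }
        if (!moved)
            top--;                       /* dead end: pop and backtrack */
    }
    printf("no path exists\n");
    return 0;
}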

Quicksort
Sorting means arranging the list of elements in a particular order. In case of numbers, it could be in ascending order,
or in the case of letters, alphabetic order.
Quicksort is an algorithm of the divide and conquer type. In this method, to sort a set of numbers, we reduce it to
two smaller sets, and then sort these smaller sets.
This can be explained with the help of the following example:
Suppose A is a list of the following numbers:

In the reduction step, we find the final position of one of the numbers. In this case, let us assume that we have to find
the final position of 48, which is the first number in the list.
To accomplish this, we adopt the following method. Begin with the last number, and move from right to left.
Compare each number with 48. If the number is smaller than 48, we stop at that number and swap it with 48.
In our case, the number is 24. Hence, we swap 24 and 48.

The numbers 96 and 72 to the right of 48, are greater than 48. Now beginning with 24, scan the numbers in the
opposite direction, that is from left to right. Compare every number with 48 until you find a number that is greater
than 48.
In this case, it is 60. Therefore we swap 48 and 60.

Note that the numbers 12, 24 and 36 to the left of 48 are all smaller than 48. Now, start scanning numbers from 60,
in the right to left direction. As soon as you find lesser number, swap it with 48.
In this case, it is 44. Swap it with 48. The final result is:

Now, beginning with 44, scan the list from left to right, until you find a number greater than 48.
Such a number is 84. Swap it with 48. The final result is:

Now, beginning with 84, traverse the list from right to left, until you reach a number lesser than 48. We do not find
such a number before reaching 48. This means that all the numbers in the list have been scanned and compared with
48. Also, we notice that all numbers less than 48 are to the left of it, and all numbers greater than 48, are to its right.
The final partitions look as follows:

Therefore, 48 has been placed in its proper position and now our task is reduced to sorting the two partitions. This
above step of creating partitions can be repeated with every partition containing 2 or more elements. As we can
process only a single partition at a time, we should be able to keep track of the other partitions, for future processing.
This is done by using two stacks called LOWERBOUND and UPPERBOUND, to temporarily store these partitions.
The addresses of the first and last elements of the partitions are pushed into the LOWERBOUND and
UPPERBOUND stacks respectively. Now, the above reduction step is applied to the partitions only after its
boundary values are popped from the stack.
We can understand this from the following example:
Take the above list A with 12 elements. The algorithm starts by pushing the boundary values of A, that is 1 and 12
into the LOWERBOUND and UPPERBOUND stacks respectively. Therefore the stacks look as follows:

LOWERBOUND: 1 UPPERBOUND: 12

To perform the reduction step, the values of the stack top are popped from the stack. Therefore, both the stacks
become empty.

LOWERBOUND: {empty} UPPERBOUND: {empty}

Now, the reduction step causes 48 to be fixed to the 5th position and creates two partitions, one from position 1 to 4
and the other from position 6 to 12. Hence, the values 1 and 6 are pushed into the LOWERBOUND stack and 4 and

12 are pushed into the UPPERBOUND stack.

LOWERBOUND: 1, 6 UPPERBOUND: 4, 12

For applying the reduction step again, the values at the stack top are popped. Therefore, the values 6 and 12 are
popped. Therefore the stacks look like:

LOWERBOUND: 1 UPPERBOUND: 4

The reduction step is now applied to the second partition, that is from the 6th to 12th element.

After the reduction step, 98 is fixed in the 11th position. So, the second partition has only one element. Therefore, we
push the upper and lower boundary values of the first partition onto the stack. So, the stacks are as follows:

LOWERBOUND: 1, 6 UPPERBOUND: 4, 10

The processing proceeds in the following way and ends when the stacks do not contain any upper and lower bounds
of the partition to be processed, and the list gets sorted.
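A compact C sketch of this scheme is given below. Instead of two named stacks it keeps a single explicit stack of (lower, upper) index pairs, which plays the same role as LOWERBOUND and UPPERBOUND; the partitioning details, the names and the sample data are illustrative and differ slightly from the walk-through above.

#include <stdio.h>

/* Partition a[lo..hi] around its first element and return the element's
   final position. The scanning scheme differs in detail from the
   walk-through above but serves the same purpose. */
static int partition(int a[], int lo, int hi)
{
    int pivot = a[lo];
    int i = lo, j = hi;
    while (i < j) {
        while (i < j && a[j] >= pivot) j--;   /* scan from right to left */
        a[i] = a[j];
        while (i < j && a[i] <= pivot) i++;   /* scan from left to right */
        a[j] = a[i];
    }
    a[i] = pivot;
    return i;
}

/* Iterative quicksort: pending partitions are kept on an explicit stack of
   (lower, upper) index pairs instead of the call stack. */
void quicksort(int a[], int n)
{
    int lower[64], upper[64];                 /* bound stacks (ample for this example) */
    int top = -1;

    lower[++top] = 0;
    upper[top]   = n - 1;                     /* push the bounds of the whole list */
    while (top >= 0) {
        int lo = lower[top], hi = upper[top]; /* pop one partition ... */
        top--;
        if (lo >= hi)
            continue;                         /* ... skip it if it has fewer than 2 elements */
        int p = partition(a, lo, hi);
        lower[++top] = lo;     upper[top] = p - 1;   /* push the left partition  */
        lower[++top] = p + 1;  upper[top] = hi;      /* push the right partition */
    }
}

int main(void)
{
    int a[] = {9, 4, 7, 1, 8, 3, 6, 2, 5};    /* illustrative data */
    int n = (int)(sizeof a / sizeof a[0]);
    quicksort(a, n);
    for (int i = 0; i < n; i++)
        printf("%d ", a[i]);                  /* prints 1 2 3 4 5 6 7 8 9 */
    printf("\n");
    return 0;
}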

The Stock Span Problem


In the stock span problem, we will solve a financial problem with the help of stacks.
Suppose, for a stock, we have a series of n daily price quotes. The span of the stock's price on a given day is defined as the maximum number of consecutive days just before the given day for which the price of the stock on each of those days is less than or equal to its price on the given day.

(Figure: The Stockspan Problem.)

Let Price(i) = price of the stock on day "i".
Then, Span(i) = Max{k : k >= 0 and Price(j) <= Price(i) for j = i-k, ..., i}
Thus, if Price(i-1) > Price(i), then Span(i) = 0.

An algorithm which has Quadratic Time Complexity


Input: An array P with n elements
Output: An array S of n elements such that S[i] is the largest integer k such that k <= i + 1 and P[j] <= P[i] for j = i -
k + 1,.....,i
Algorithm:

1. Initialize an array P which contains the daily prices of the stocks


2. Initialize an array S which will store the span of the stock
3. for i = 0 to i = n - 1
3.1 Initialize k to zero
3.2 Done with a false condition
3.3 repeat
3.3.1 if ( P[i - k] <= P[i] ) then
Increment k by 1
3.3.2 else
Done with true condition
3.4 Till (k > i) or done with processing
Assign value of k to S[i] to get the span of the stock
4. Return array S

Now, analyzing this algorithm for running time, we observe:


• We initialize the array S at the beginning; this takes O(n) time.
• The repeat loop is nested within the for loop. The for loop, whose counter is i, is executed n times. The statements that are in the for loop but not in the repeat loop are executed n times. Therefore these statements and the incrementing and condition testing of i take O(n) time.
• In iteration i of the outer for loop, the body of the inner repeat loop is executed at most i + 1 times. In the worst case, element S[i] is greater than all the previous elements. So, testing the if condition, the statement after that, as well as testing the until condition, will be performed i + 1 times during iteration i of the outer for loop. Hence, the total time taken by the inner loop is O(n(n + 1)/2), which is O(n^2).
• We return the array S at the end. This is a constant time operation, hence it takes O(1) time.
The running time of all these steps is calculated by adding the time taken by all these four steps. The first two terms are O(n) while the last term is O(n^2). Therefore the total running time of the algorithm is O(n^2).

An algorithm which has Linear Time Complexity


In order to calculate the span more efficiently, we see that the span on a particular day can be easily calculated if we
know the closest day before i, such that the price of the stocks on that day was higher than the price of the stocks on
the present day. If there exists such a day, we can represent it by h(i) and initialize h(i) to be -1. This is basically the
same algorithm as the one used for efficient construction of Cartesian tree.
Therefore the span of a particular day is given by the formula, s = i - h(i).
To implement this logic, we use a stack as an abstract data type to store the days i, h(i), h(h(i)) and so on. When we
go from day i-1 to i, we pop the days when the price of the stock was less than or equal to p(i) and then push the
value of day i back into the stack.
Here, we assume that the stack is implemented by operations that take O(1) that is constant time. The algorithm is as
follows:

Input: An array P with n elements and an empty stack N


Output: An array S of n elements such that S[i] is the largest integer k such that k <= i + 1 and P[j] <= P[i] for j = i - k + 1, ..., i
Algorithm:

1. Initialize an array P which contains the daily prices of the stocks


2. Initialize an array S which will store the span of the stock
3. for i = 0 to i = n - 1
3.1 Initialize k to zero
3.2 Done with a false condition
3.3 while not (Stack N is empty or done with processing)
3.3.1 if ( P[i] >= P[N.top()] ) then
Pop a value from stack N
3.3.2 else
Done with true condition
3.4 if Stack N is empty
3.4.1 Initialize h to -1
3.5 else
3.5.1 Assign the value at the top of stack N to h
3.6 Put the value of i - h in S[i]
3.7 Push the value of i in N
4. Return array S

Now, analyzing this algorithm for running time, we observe:


• We initialize the array S at the beginning and return it at the end. Initializing it takes O(n) time; returning it is a constant time operation.
• The while loop is nested within the for loop. The for loop, whose counter is i, is executed n times. The statements that are in the for loop but not in the while loop are executed n times. Therefore these statements and the incrementing and condition testing of i take O(n) time.
• Now, observe the inner while loop during iteration i of the for loop. The statement done with a true condition is executed at most once, since it causes an exit from the loop. Let us say that t(i) is the number of times the statement Pop a value from stack N is executed. So it becomes clear that while not (Stack N is empty or done with processing) is tested at most t(i) + 1 times.
• Adding the running time of all the operations in the while loop, we get Σ_{i=0}^{n-1} (t(i) + 1).
• An element once popped from the stack N is never pushed back into it. Therefore, Σ_{i=0}^{n-1} t(i) <= n, since each day's index is pushed exactly once.
So, the running time of all the statements in the while loop is O(n).
The running time of all the steps in the algorithm is calculated by adding the time taken by all these steps. The run time of each step is O(n). Hence the running time complexity of this algorithm is O(n).
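A hedged C sketch of this linear-time algorithm follows; the stack holds the indices of days i, h(i), h(h(i)), and so on, and the price data and names are illustrative.

#include <stdio.h>

#define N_DAYS 7

/* Compute the span S[i] of each day i in O(n) total time. The stack holds
   the indices of days whose prices have not yet been exceeded; here a span
   counts the day itself, so S[i] = i - h(i). */
void stockSpan(const int P[], int S[], int n)
{
    int stack[N_DAYS];
    int top = -1;

    for (int i = 0; i < n; i++) {
        while (top >= 0 && P[stack[top]] <= P[i])
            top--;                              /* pop days dominated by today's price */
        int h = (top < 0) ? -1 : stack[top];    /* closest earlier day with a higher price */
        S[i] = i - h;
        stack[++top] = i;                       /* push today's index */
    }
}

int main(void)
{
    int P[N_DAYS] = {100, 80, 60, 70, 60, 75, 85};   /* illustrative prices */
    int S[N_DAYS];
    stockSpan(P, S, N_DAYS);
    for (int i = 0; i < N_DAYS; i++)
        printf("day %d: price %d, span %d\n", i, P[i], S[i]);
    return 0;
}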

Runtime memory management


A number of programming languages are stack-oriented, meaning they define most basic operations (adding two
numbers, printing a character) as taking their arguments from the stack, and placing any return values back on the
stack. For example, PostScript has a return stack and an operand stack, and also has a graphics state stack and a
dictionary stack.
Forth uses two stacks, one for argument passing and one for subroutine return addresses. The use of a return stack is
extremely commonplace, but the somewhat unusual use of an argument stack for a human-readable programming
language is the reason Forth is referred to as a stack-based language.
Many virtual machines are also stack-oriented, including the p-code machine and the Java Virtual Machine.
Almost all calling conventions – computer runtime memory environments – use a special stack (the "call stack") to
hold information about procedure/function calling and nesting in order to switch to the context of the called function
and restore to the caller function when the calling finishes. The functions follow a runtime protocol between caller
and callee to save arguments and return value on the stack. Stacks are an important way of supporting nested or
recursive function calls. This type of stack is used implicitly by the compiler to support CALL and RETURN
statements (or their equivalents) and is not manipulated directly by the programmer.
Some programming languages use the stack to store data that is local to a procedure. Space for local data items is
allocated from the stack when the procedure is entered, and is deallocated when the procedure exits. The C
programming language is typically implemented in this way. Using the same stack for both data and procedure calls
has important security implications (see below) of which a programmer must be aware in order to avoid introducing
serious security bugs into a program.

Security
Some computing environments use stacks in ways that may make them vulnerable to security breaches and attacks.
Programmers working in such environments must take special care to avoid the pitfalls of these implementations.
For example, some programming languages use a common stack to store both data local to a called procedure and
the linking information that allows the procedure to return to its caller. This means that the program moves data into
and out of the same stack that contains critical return addresses for the procedure calls. If data is moved to the wrong
location on the stack, or an oversized data item is moved to a stack location that is not large enough to contain it,
return information for procedure calls may be corrupted, causing the program to fail.
Malicious parties may attempt a stack smashing attack that takes advantage of this type of implementation by
providing oversized data input to a program that does not check the length of input. Such a program may copy the
data in its entirety to a location on the stack, and in so doing it may change the return addresses for procedures that
have called it. An attacker can experiment to find a specific type of data that can be provided to such a program such
that the return address of the current procedure is reset to point to an area within the stack itself (and within the data
provided by the attacker), which in turn contains instructions that carry out unauthorized operations.
This type of attack is a variation on the buffer overflow attack and is an extremely frequent source of security
breaches in software, mainly because some of the most popular compilers use a shared stack for both data and
procedure calls, and do not verify the length of data items. Frequently programmers do not write code to verify the
size of data items, either, and when an oversized or undersized data item is copied to the stack, a security breach may
occur.

Programming tasks
There are many programming tasks which require application of a stack. The following tasks can be solved and
evaluated online:
• SPOJ [8] tasks: Transform the Expression [9]
• Codility training [10] tasks: Brackets [11], Fish [12], Stone-wall [13]

References
[1] http://www.cprogramming.com/tutorial/computersciencetheory/stack.html cprogramming.com
[2] C. L. Hamblin, "An Addressless Coding Scheme based on Mathematical Notation", N.S.W University of Technology, May 1957 (typescript)
[3] Jones: "Systematic Software Development Using VDM"
[4] Horowitz, Ellis: "Fundamentals of Data Structures in Pascal", page 67. Computer Science Press, 1984
[5] http://download.oracle.com/javase/7/docs/api/java/util/Stack.html
[6] http://download.oracle.com/javase/7/docs/api/java/util/Vector.html
[7] http://www.php.net/manual/en/class.splstack.php
[8] http://www.spoj.com
[9] http://www.spoj.com/problems/ONP/
[10] http://codility.com/train
[11] http://codility.com/demo/take-sample-test/brackets
[12] http://codility.com/demo/take-sample-test/fish
[13] http://codility.com/demo/take-sample-test/sigma2012

• Stack implementation on goodsoft.org.ua (http://goodsoft.org.ua/en/data_struct/stack.html)

Further reading
• Donald Knuth. The Art of Computer Programming, Volume 1: Fundamental Algorithms, Third
Edition.Addison-Wesley, 1997. ISBN 0-201-89683-4. Section 2.2.1: Stacks, Queues, and Deques, pp. 238–243.
• Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein. Introduction to Algorithms,
Second Edition. MIT Press and McGraw-Hill, 2001. ISBN 0-262-03293-7. Section 10.1: Stacks and queues,
pp. 200–204.

External links
• Stacks and its Applications (http://scanftree.com/Data_Structure/Application-of-stack)
• Stack Machines - the new wave (http://www.ece.cmu.edu/~koopman/stack_computers/index.html)
• Bounding stack depth (http://www.cs.utah.edu/~regehr/stacktool)
• Libsafe - Protecting Critical Elements of Stacks (http://research.avayalabs.com/project/libsafe/)
• VBScript implementation of stack, queue, deque, and Red-Black Tree (http://www.ludvikjerabek.com/downloads.html)
• Stack Size Analysis for Interrupt-driven Programs (http://www.cs.ucla.edu/~palsberg/paper/sas03.pdf) (322
KB)
• Paul E. Black, Bounded stack (http://www.nist.gov/dads/HTML/boundedstack.html) at the NIST Dictionary
of Algorithms and Data Structures.

Computer program
A computer program, or just a program, is a sequence of
instructions, written to perform a specified task with a computer. A
computer requires programs to function, typically executing the
program's instructions in a central processor. The program has an
executable form that the computer can use directly to execute the
instructions. The same program in its human-readable source code
form, from which executable programs are derived (e.g., compiled),
enables a programmer to study and develop its algorithms. A collection
of computer programs and related data is referred to as the software.

Computer source code is typically written by computer programmers.


Source code is written in a programming language that usually follows one of two main paradigms: imperative or declarative programming.

(Figure: A computer program written in an object-oriented style.)

Source code may be converted into an executable file (sometimes called an executable program or a binary) by a compiler and later executed by a central processing unit. Alternatively, computer programs may be executed with the aid of an interpreter, or may be embedded directly into hardware.

Computer programs may be categorized along functional lines: system software and application software. Two or more computer programs may run simultaneously on one computer from the perspective of the user, a process known as multitasking.

Programming

#include <stdio.h>
int main(void) {
printf("Hello world!\n");
return 0;
}
Source code of a Hello World program written in the C programming language

Computer programming is the iterative process of writing or editing source code. Editing source code involves
testing, analyzing, refining, and sometimes coordinating with other programmers on a jointly developed program. A
person who practices this skill is referred to as a computer programmer, software developer, and sometimes coder.
The sometimes lengthy process of computer programming is usually referred to as software development. The term
software engineering is becoming popular as the process is seen as an engineering discipline.

Paradigms
Computer programs can be categorized by the programming language paradigm used to produce them. Two of the
main paradigms are imperative and declarative.
Programs written using an imperative language specify an algorithm using declarations, expressions, and statements.
A declaration couples a variable name to a datatype. For example: var x: integer; . An expression yields a
value. For example: 2 + 2 yields 4. Finally, a statement might assign an expression to a variable or use the value
of a variable to alter the program's control flow. For example: x := 2 + 2; if x = 4 then
do_something();. One criticism of imperative languages is the side effect of an assignment statement on a class
of variables called non-local variables.

Programs written using a declarative language specify the properties that have to be met by the output. They do not
specify details expressed in terms of the control flow of the executing machine but of the mathematical relations
between the declared objects and their properties. Two broad categories of declarative languages are functional
languages and logical languages. The principle behind functional languages (like Haskell) is to not allow side
effects, which makes it easier to reason about programs like mathematical functions. The principle behind logical
languages (like Prolog) is to define the problem to be solved — the goal — and leave the detailed solution to the
Prolog system itself. The goal is defined by providing a list of subgoals. Then each subgoal is defined by further
providing a list of its subgoals, etc. If a path of subgoals fails to find a solution, then that subgoal is backtracked and
another path is systematically attempted.
The form in which a program is created may be textual or visual. In a visual language program, elements are
graphically manipulated rather than textually specified.

Compiling or interpreting
A computer program in the form of a human-readable, computer programming language is called source code.
Source code may be converted into an executable image by a compiler or executed immediately with the aid of an
interpreter.
Either compiled or interpreted programs might be executed in a batch process without human interaction, but
interpreted programs allow a user to type commands in an interactive session. In this case the programs are the
separate commands, whose execution occurs sequentially, and thus together. When a language is used to give
commands to a software application (such as a shell) it is called a scripting language.
Compilers are used to translate source code from a programming language into either object code or machine code.
Object code needs further processing to become machine code, and machine code is the central processing unit's
native code, ready for execution. Compiled computer programs are commonly referred to as executables, binary
images, or simply as binaries — a reference to the binary file format used to store the executable code.
Interpreted computer programs — in a batch or interactive session — are either decoded and then immediately
executed or are decoded into some efficient intermediate representation for future execution. BASIC, Perl, and
Python are examples of immediately executed computer programs. Alternatively, Java computer programs are
compiled ahead of time and stored as a machine independent code called bytecode. Bytecode is then executed on
request by an interpreter called a virtual machine.
The main disadvantage of interpreters is that computer programs run slower than when compiled. Interpreting code
is slower than running the compiled version because the interpreter must decode each statement each time it is
loaded and then perform the desired action. However, software development may be faster using an interpreter
because testing is immediate when the compiling step is omitted. Another disadvantage of interpreters is that at least
one must be present on the computer during computer program execution. By contrast, compiled computer programs
need no compiler present during execution.
No properties of a programming language require it to be exclusively compiled or exclusively interpreted. The
categorization usually reflects the most popular method of language execution. For example, BASIC is thought of as
an interpreted language and C a compiled language, despite the existence of BASIC compilers and C interpreters.
Some systems use just-in-time compilation (JIT) whereby sections of the source are compiled 'on the fly' and stored
for subsequent executions.

Self-modifying programs
A computer program in execution is normally treated as being different from the data the program operates on.
However, in some cases this distinction is blurred when a computer program modifies itself. The modified computer
program is subsequently executed as part of the same program. Self-modifying code is possible for programs written
in machine code, assembly language, Lisp, C, COBOL, PL/1, Prolog and JavaScript (the eval feature) among others.

Execution and storage


Typically, computer programs are stored in non-volatile memory until requested either directly or indirectly to be
executed by the computer user. Upon such a request, the program is loaded into random access memory, by a
computer program called an operating system, where it can be accessed directly by the central processor. The central
processor then executes ("runs") the program, instruction by instruction, until termination. A program in execution is
called a process. Termination is either by normal self-termination or by error — software or hardware error.

Embedded programs
Some computer programs are embedded into hardware. A
stored-program computer requires an initial computer
program stored in its read-only memory to boot. The boot
process is to identify and initialize all aspects of the system,
from processor registers to device controllers to memory
contents. Following the initialization process, this initial
computer program loads the operating system and sets the
program counter to begin normal operations. Independent of
the host computer, a hardware device might have embedded
firmware to control its operation. Firmware is used when
the computer program is rarely or never expected to change,
or when the program must not be lost when the power is off.

(Figure: The microcontroller on the right of this USB flash drive is controlled with embedded firmware.)

Manual programming
Computer programs historically were manually input to the
central processor via switches. An instruction was
represented by a configuration of on/off settings. After
setting the configuration, an execute button was pressed.
This process was then repeated. Computer programs also
historically were manually input via paper tape or punched
cards. After the medium was loaded, the starting address
was set via switches and the execute button pressed.

(Figure: Switches for manual input on a Data General Nova 3.)

Automatic program generation

Generative programming is a style of computer programming that creates source code through generic classes, prototypes, templates, aspects, and code generators to
improve programmer productivity. Source code is generated with programming tools such as a template processor or
an integrated development environment. The simplest form of source code generator is a macro processor, such as
the C preprocessor, which replaces patterns in source code according to relatively simple rules.

Software engines output source code or markup code that simultaneously become the input to another computer
process. Application servers are software engines that deliver applications to client computers. For example, a Wiki
is an application server that lets users build dynamic content assembled from articles. Wikis generate HTML, CSS,
Java, and JavaScript which are then interpreted by a web browser.

Simultaneous execution
Many operating systems support multitasking which enables many computer programs to appear to run
simultaneously on one computer. Operating systems may run multiple programs through process scheduling — a
software mechanism to switch the CPU among processes often so users can interact with each program while it runs.
Within hardware, modern day multiprocessor computers or computers with multicore processors may run multiple
programs.
One computer program can calculate simultaneously more than one operation using threads or separate processes.
Multithreading processors are optimized to execute multiple threads efficiently.

Functional categories
Computer programs may be categorized along functional lines. The main functional categories are system software
and application software. System software includes the operating system which couples computer hardware with
application software. The purpose of the operating system is to provide an environment in which application
software executes in a convenient and efficient manner. In addition to the operating system, system software
includes utility programs that help manage and tune the computer. If a computer program is not system software then
it is application software. Application software includes middleware, which couples the system software with the
user interface. Application software also includes utility programs that help users solve application problems, like the
need for sorting.
Sometimes development environments for software development are seen as a functional category of their own, especially in the context of human-computer interaction and programming language design.
Development environments gather system software (such as compilers and system's batch processing scripting
languages) and application software (such as IDEs) for the specific purpose of helping programmers create new
programs.

References

Further reading
• Knuth, Donald E. (1997). The Art of Computer Programming, Volume 1, 3rd Edition. Boston: Addison-Wesley.
ISBN 0-201-89683-4.
• Knuth, Donald E. (1997). The Art of Computer Programming, Volume 2, 3rd Edition. Boston: Addison-Wesley.
ISBN 0-201-89684-2.
• Knuth, Donald E. (1997). The Art of Computer Programming, Volume 3, 3rd Edition. Boston: Addison-Wesley.
ISBN 0-201-89685-0.

External links
• Definition of "Program" (http://www.webopedia.com/TERM/P/program.html) at Webopedia
• Definition of "Computer Program" (http://dictionary.reference.com/browse/computer program) at
dictionary.com

Article Sources and Contributors


Michael Hardy, Michael Jones jnr, Michael K. Edwards, Michael Zimmermann, Michael93555, Michaelas10, Mightyman67, Mike Rosoft, Mike in Aus, MikeSy, Mikeblas, Mini-Geek,
Mirelespm, Misza13, Mitch Ames, Mjpieters, Mkr10001, Mogism, Mojo Hand, Moppet65535, Mortense, Mpradeep, Mr x2, Mr. Billion, Mr. Lefty, Mr. Maupin, MrBoo, MrFish, Mrtomaas,
Mudux01, Muffin34, Mukulnryn, Murray Langton, Mushroom, Mxn, NHRHS2010, NSLE, Nachoman-au, Nafsadh, Nageh, Nagytibi, Najoj, Nakon, Nameneko, Nancy, Nanosilver, Nanshu,
Nascar1996, Ndp2005goh, Necrowarrio0, Neelix, Neil12, Nelson50, Nervi modest, Netoholic, NewEnglandYankee, Newnew123, Newone, Niall2, Nick, NickDanger42, NigelR, Nigelj, Nihiltres,
Nike8, Nikola Smolenski, Nilmerg, Ninjagecko, Niteowlneils, No Guru, NoSeptember, Node ue, Noldoaran, Nono64, Norm, NrDg, Nsmith 84, Nucleusboy, Nurg, Nyttend, OOJaxxOo, Obeso24,
Oda Mari, Off!, Ohconfucius, Ohnoitsjamie, Ohokohok, Old nic, Oleg Alexandrov, Oli Filth, OliD, Olorin28, Omicronpersei8, OregonD00d, Orioane, Osric, Ost316, Otets, OtherPerson,
OverlordQ, OwenX, Owned3, Oxymoron83, P. S. F. Freitas, P09ol,, PFHLai, PJM, PaePae, Pagingmrherman, Pagrashtak, Palosirkka, Panser Born, Paolo.dL, Papadopa, Papppfaffe, Paradoctor,
Pass a Method, Patrick, Paul August, Pax:Vobiscum, Pegasus1138, Perl87, Peruvianllama, Peterdjones, Petre Buzdugan, Pgk, Phaedriel, Phalacee, Phgao, Philip Trueman, PhilipO, Pholy, Physis,
Piccor, Picus viridis, PierreAbbat, Pigsonthewing, Piotrus, Plastikspork, Pmaguire, Pmjjj, Pnm, Poindexter Propellerhead, Poli, Poor Yorick, Pooresd, Pope16, Postdlf, Powo, Pradkart,
Praemonitus, Prodego, Psy guy, Public Menace, PuzzletChung, Pwner2, Python eggs, Qirex, Qst, Quackor, Quadell, Quarma, Quentin mcalmott, Quiksilviana, Quispiam, Qwerasd1, Qxz, R. S.
Shaw, RMuffin, RN1970, RTC, RW Marloe, RaCha'ar, Rabidbuzz, Rac7hel, Radagast83, RadioBroadcast, RadioKirk, Radius, Ragib, RandomP, Rangoon11, RapidR, Rasmus Faber, Raul654,
Raven4x4x, Rd232, Rdsmith4, Reatlas, RedWolf, Redthoreau, Reedy, Reisio, Reject, Revolución, RexNL, Rfl, Rgreenday1011, Rhododendrites, Rhynchosaur, Riana, Rich Farmbrough,
Richard001, Richdude24, Rick Sidwell, Rico402, Rieger, Rigadoun, Rilak, Rishi225, Rivertorch, Rjsc, RkOrton, Rlinfinity, Rnt20, Robert Brockway, Robert K S, Robert Merkel, RobertG,
Robertvan1, Rockhall, Rocky34, Rodrigo braz, RogueMountie, Romanm, Rory096, Roscoe x, Rovibroni, Rowan Moore, RoyBoy, Royote, Rsduhamel, Rubicon, Rudjek, Ruhrjung, Rwthplb,
Rwwww, RxS, Ryuch, Ryulong, S14jduma, SCEhardt, SCJohnson77, SG, SJP, ST47, Salsb, Saltiem, Sam Hocevar, Sam Li, Sam130132, Samuel, Sander123, Sango123, Saudade7, Saulsinaloa,
Saurabh jain999, Sc147, Sceptre, SchreyP, Schzmo, Science History, Scott McNay, Scott Paeth, Sdornan, Sean Whitton, SebastianHelm, Secretlondon, Seewolf, Seibei, Selesti, Seraphim, Serlin,
Shadikka, Shadow Android, Shadow1, Shalom Yechiel, Shanel, Shanes, ShaunES, Shauom, Shevonsilva, Shoshonna, Sickbrah, Siddhant, Sietse Snel, Sigma 7, Silentx, SimonP, Singsmasta,
Singularity, Sir Nicholas de Mimsy-Porpington, SirVulture, Siroxo, SivaKumar, Sjakkalle, Skidude9950, Skraz, Slakr, Slark, Sligocki, Slohar, Smack, Smart Nomad, Smash, Smilliga, Smokizzy,
Snozzer, Socalaaron, Soir, SomeStranger, Someone else, Sonny1day, Spaceboy492, Spangineer, Spartan-James, Speaker4000, Spearhead, Specs112, Spelling Corrector, Spencer195, Spiritg1rl95,
Splash, Spliffy, Sportzplyr9090, SpuriousQ, Squishy, Srice13, Srikeit, Ssd175, Ssilvers, Steelergolf11, Stemonitis, Stephen Compall, Stephenb, SteveBaker, Stevenj, Stevo1000, Stijn Vermeeren,
Stormie, Stormscape, Strait, StuffOfInterest, Subodhdamle, Sun Creator, SuperDude115, SuperHamster, Supertouch, Svetoslavv, Swatjester, Symane, Syvanen, T-rex, T0ny, TERdON, THB,
TJDay, TXAggie, Taajikhan, Tabby, Tangotango, Tapir Terrific, Targaryen, Tarret, Tary123, Tasc, Tastemyhouse, Tawker, Taxman, Tdvance, TeaDrinker, Techie007, Technion, TedColes,
TedE, Tedzdog, Terence, Test2008, Test2010, Tetsuo, Tevildo, TexasAndroid, Thadius856, The Rambling Man, The Stoneman, The informator, The rekcaH, The silent assasin, TheGWO,
TheGeneralUser, TheYmode, Thenewguy34, Thepielord, Theresa knott, Theroadislong, This user has left wikipedia, Thomas Larsen, Thrissel, ThrustVectoring, Thue, Thunderbrand, Thw1309,
TigerShark, Tigerhawkvok, Tillmo, Tim1988, TimTIm, Timhowardriley, Timir Saxa, TimmyTimson, Timwi, Titoxd, Tlenyard, Tobby72, Tobias Bergemann, Tom harrison, Tom5760,
TomTheHand, Tomgally, Tomi T Ahonen, TonyClarke, Torc2, Tosayit, ToxicPlatypus, Tpbradbury, Traroth, TravelinSista, Tregoweth, Trenchcoat99, Trinitymix, Trobert, Trodaikid1983,
Trovatore, Trusilver, Truthflux, Ttz642, Tulip19, Tír na nÓg 1982, U, U.Steele, Uartseieu, Ugen64, Ugur Basak, UkPaolo, UncleDouggie, Urod, Useight, Utcursch, Vald, Vanished User 0001,
Vanished user fois8fhow3iqf9hsrlgkjw4tus, Vary, Vegaswikian, Velella, Vesailius, Viriditas, Virtual Traveler, Vovkav, Vranak, Vulcanstar6, WAS 4.250, Waggers, Wanderingcat,
Wanderingstan, Wavelength, Wayfarer, Wayiran, Wayward, Wbm1058, Wernher, West Brom 4ever, Whaa?, Where, WhiteDragon, Whosasking, Widefox, Wiki alf, Wikianon, Wikibase,
Wikipelli, Will Beback Auto, William M. Connolley, Wingo, Winhunter, Wknight94, Wolfman, Wolfmankurd, Woohookitty, Woome, Wranglers 04, Wrcovington, Wscdfightyuim,
Wtshymanski, Wyverald, X201, X570, Xaffect, Xaosflux, Xavier Combelle, Xelgen, Xevi, Xezbeth, XmDXtReMeK, Xpclient, Xrarey, XxXrah-chompXxX, Xyzzyplugh, Y2kcrazyjoker4,
Yamaguchi先生, Yamamoto Ichiro, Yaronf, Yashtulsyan, Ybbor, YellowMonkey, Yelyos, Yensin, Yngvarr, Yoganate79, Yonatan, Yosri, Yulius, Yworo, Z3, ZachPruckowski, Zackmorris,
Zanimum, Zanorath, Zanuga, Zarvok, Zastil, Zebbie, Zeldafreak104, Zemooo, Zman2000, Zoicon5, Zondor, Zzuuzz, Zzyzx11, Ævar Arnfjörð Bjarmason, Александър, Дарко Максимовић, 康非字典, 2094 anonymous edits

Informatics (academic field)  Source: https://en.wikipedia.org/w/index.php?oldid=584601250  Contributors: 16@r, A-giau, ASNelson, ATDC Raigeki, Abcreviewer, Aeternus, Ahouseholder,
Alan Au, Alex.g, Amomam, AoV2, Ardarley, Atharhaq, Averros11, Badsoupday, Beclaw44, Benjamin Mako Hill, Bijuro, Biruitorul, Brandizzi, Buridan, CEPIS Secretariat, CanadianLinuxUser,
Cbdorsett, Ceyockey, Codycann, Cradel, Csstsrg, Cybercobra, D.h, Darigan, Davis.devries, Dbulwink, Dekimasu, Dirkstanley, Ditsonis, Djh94001, Eenu, Epbr123, Esmith8, FelipeVargasRigo,
Fmenczer, Funandtrvl, Gadget850, Giftlite, Giriprakash123, Glendac, Goldenrowley, Gsmgm, Hamish.MacEwan, Hldsc, Hmains, Howeworth, Iancarter, Informatician1011111, Informatwr,
Ivucica, Ixfd64, J Casanova, JIP, JMSwtlk, Jastman, Jdegreef, Jfroelich, Jkbioinfo, Joerg Kurt Wegner, JordoCo, Jpbowen, JpurvisUM, Julcia, Jvargh, Kalamkaar, Kmakice, LA2, Lchadick, LjL,
Lperez2029, MCrawford, Mario Žamić, Masgatotkaca, Mattram, Maurice Carbonaro, Max rspct, Mboverload, Mdd, Merlissimo, MetaNest, Michael Fourman, Michael Hardy, Mindmatrix,
Mmortal03, Mora.klein, Mxipp, Naf312, Nervexmachina, Nowimnthing, Oli Filth, Pawyilee, Pegship, Pinar, Postonm, Ps ttf, Qswitch426, Quebec99, Qwertyus, R'n'B, Raptur, RazorICE, RedZiz,
Rjwilmsi, Roberto Cruz, Ruud Koot, Sarterus, Shana Lei, SidP, Spacepotato, Spencer, Star Mississippi, Stephber, Systemetsys, Tedder, The Thing That Should Not Be, Theoprakt, Thermochap,
ThomHImself, TobyDZ, Tohd8BohaithuGh1, Tosayit, TruthinQuest, Vanwhistler, Velho, Verbal, Versageek, Veterinarian, Vlad, Vpovilaitis, WDavis1911, Walshga, WaydeRob, WheezePuppet,
Xb9q, Zouxiaohui, Zzuuzz, ‫ﺗﺮﺟﻤﺎﻥ‬05, 石, 152 anonymous edits

Programming language  Source: https://en.wikipedia.org/w/index.php?oldid=585188885  Contributors: -Barry-, 10.7, 151.203.224.xxx, 16@r, 198.97.55.xxx, 199.196.144.xxx, 1exec1,
203.37.81.xxx, 212.188.19.xxx, 2988, 96.186, A. Parrot, A.amitkumar, A520, AJim, Abednigo, Abeliavsky, Abram.carolan, Acacix, Acaciz, AccurateOne, Addicted2Sanity, Ahoerstemeier,
Ahy1, Akadruid, Alansohn, Alex, AlexPlank, Alhade, Alhoori, Alksub, Allan McInnes, Alliswellthen, Altenmann, Amire80, Amitkumargarg88, Ancheta Wis, Andonic, Andre Engels, Andres,
Andylmurphy, Angela, Angusmclellan, Antonielly, Ap, Apwestern, ArmadilloFromHell, AstroNomer, Asukite, Autocratique, Avono, B4hand, Behnam, Beland, Ben Ben, Ben Standeven,
Benjaminct, Bevo, Bgwhite, Bh3u4m, BigSmoke, Bill122, BioPupil, BirgitteSB, Blaisorblade, Blanchardb, Bobblewik, Bobbygammill, Bobo192, Bonaovox, Booyabazooka, Borislav, Brandon,
Brentdax, Brianjd, Brick Thrower, Brion VIBBER, Bubba73, Burkedavis, CSProfBill, Calltech, Can't sleep, clown will eat me, CanisRufus, Capricorn42, Captain Conundrum, Captain-n00dle,
CarlHewitt, Carmichael, Carrot Lord, Catgut, Cedar101, Centrx, Charlesriver, Charlie Huggard, Chillum, Chinabuffalo, Chun-hian, Cireshoe, Ckatz, Closedmouth, Cmdodanli, Cmichael,
Cobaltbluetony, ColdFeet, Compfreak7, Conor123777, Conversion script, Cp15, Cybercobra, DBigXray, DMacks, DVD R W, Damian.rouson, Damieng, Dan128, Danakil, Danielmask, Danim,
Dave Bell, David.Monniaux, DavidHOzAu, Davidfstr, Davidpdx, Dcoetzee, DeadEyeArrow, DenisMoskowitz, DennisDaniels, DerHexer, Derek Ross, Derek farn, Diannaa, Diego Moya,
Dl2000, Dolfrog, Dominator09, Don't Copy That Floppy, Donhalcon, Doradus, DouglasGreen, DragonLord, Dreftymac, Dsimic, Dtaylor1984, Duke Ganote, Dysprosia, EJF, ESkog, EagleOne,
Edward301, Eivind F Øyangen, ElAmericano, Elembis, EncMstr, EngineerScotty, Epbr123, Esap, Evercat, Everyking, Ewlyahoocom, Ezrakilty, Fantom, Faradayplank, Fayt82, Fieldday-sunday,
Finlay McWalter, Fl, Foobah, Forderud, Four Dog Night, Fplay, Fraggle81, François Robere, Fredrik, Friedo, Fubar Obfusco, Funandtrvl, FvdP, Gaius Cornelius, Galoubet, Gazpacho, Gbruin,
Georg Peter, Giftlite, Ginsuloft, Giorgios (usurped), Gioto, Goodgerster, Gploc, Green caterpillar, GregAsche, Grin, Grouphardev, Gurch, Gutza, Gwicke, Hadal, Hairy Dude, Hammer1980, Hans
Adler, HarisM, Harmil, Hayabusa future, Headbomb, HeikoEvermann, HenryLi, Hfastedge, HopeChrist, Hoziron, Hut 8.5, Hyad, INkubusse, IanOsgood, Icey, Ideogram, Ilario, Imran, Inaaaa,
Indon, Infinoid, Iridescent, It Is Me Here, Iuliatoyo, Iwantitalllllllll, Ixfd64, J.delanoy, J991, JMSwtlk, JPINFV, JaK81600, JanSuchy, Jarble, Jason5ayers, Jaxad0127, Jaxl, Jeffrey Mall, Jeltz,
Jeronimo, Jerryobject, Jesant13, Jguy, Jim1138, Jitse Niesen, Jj137, Johann Wolfgang, John lindgren, John254, JohnLai, JohnWittle, Jonesey95, Jonik, Jorend, Jossi, Joyous!, Jpbowen, Jpk,
Jprg1966, Jschnur, JulesH, Juliancolton, Jusjih, Jwissick, K.lee, K12308025, KHaskell, KSmrq, KTC, Karingo, Karthikndr, Katieh5584, Kbdank71, Kbh3rd, Kedearian, Ketiltrout, Khalid
Mahmood, Kickstart70, Kiminatheguardian, Kimse, Kinema, Klasbricks, KnowledgeOfSelf, Knyf, Komarov om, Kooginup, Koyaanis Qatsi, Kragen, Krauss, Krawi, Kris Schnee, Krischik,
Kuciwalker, Kungfuadam, Kwertii, KymFarnik, L Gottschalk, L33tminion, LC, Lagalag, Leibniz, Liao, Lightmouse, Ligulem, LindsayH, LinguistAtLarge, Logarkh, LordCo Centre, Loriendrew,
Lradrama, Lucian1900, Lucky7-phool, Lulu of the Lotus-Eaters, Luna Santin, Lupo, MER-C, MK8, Mac c, Macaldo, Macrakis, Magnus Manske, Mahanga, Majilis, Malcolm Farmer, Malleus
Fatuorum, Mangojuice, Manpreett, Marcos, Mark Renier, MarsRover, MartinHarper, MartyMcGowan, Marudubshinki, Materialscientist, Matthew Woodcraft, Mattisse, Mav, Maxis ftw, McSly,
Mccready, Mean as custard, MearsMan, MegaHasher, Mellum, Mendaliv, Merbabu, Merphant, Mesoderm, Michael Hardy, Michael Zimmermann, Midinastasurazz, Mike Rosoft, Mild Bill
Hiccup, Minesweeper, MisterCharlie, Miym, Mkdw, Monz, Mpils, Mpradeep, Mrjeff, Ms2ger, Mschel, Muro de Aguas, Murray Langton, MusikAnimal, Mwaisberg, Mxn, Mìthrandir, N5iln,
Naderi 8189, Nameneko, Nanshu, Napi, Natalie Erin, Natkeeran, NawlinWiki, Nbarth, Nbrothers, Necklace, NewEnglandYankee, NewbieDoo, Nick125, Nikai, Nima1024, Ningauble, Nixdorf,
Noctibus, Noformation, Noisy, Noldoaran, Noosentaal, NotQuiteEXPComplete, Nottsadol, Novasource, Ntalamai, Nuggetboy, Nutsnbolts222, Oblivious, Ohms law, Ohnoitsjamie, Oldadamml,
Oleg Alexandrov, Olmerta, Omphaloscope, OrgasGirl, Orphan Wiki, Papercutbiology, Paul August, PaulFord, Pcap, Peter, Peter Flass, Peterdjones, Pharaoh of the Wizards, Phil Sandifer,
PhilKnight, Philg88, Photonique, Phyzome, Pieguy48, Piet Delport, PlayStation 69, Poor Yorick, Pooryorick, Positron, Prolog, Ptk, Pumpie, Pwv1, Quagmire, Quiddity, Quota, Quuxplusone,
Qwyrxian, RainerBlome, Raise exception, Ranafon, RayAYang, RedWolf, Reddi, Reelrt, Reinis, RenamedUser2, Revived, RexNL, Rezonansowy, Rich Farmbrough, Rjstott, Rjwilmsi, Rlee0001,
Robbe, Robert A West, Robert Skyhawk, Robo Cop, Roland2, Romanm, Ronhjones, Roux, Royboycrashfan, Rrburke, Rursus, Rushyo, Russell Joseph McCann, Ruud Koot, S.Örvarr.S, Saccade,
Sam Korn, Science History, Seanhalle, Seaphoto, SeeAnd, Sekelsenmat, Sgbirch, Shadowjams, Shane A. Bender, Shanes, ShelfSkewed, SimonP, Simplyanil, Sjakkalle, Skytreader, Slaad, Slakr,
Slashem, SmartBee, Snickel11, Sonicology, SparsityProblem, Specs112, Speed Air Man, SpeedyGonsales, Speuler, SpuriousQ, Steel1943, Stephen B Streater, Stephenb, Stevertigo, SubSeven,
Suffusion of Yellow, Suruena, Swatiri, Swirsky, Switchercat, Systemetsys, TakuyaMurata, Tarret, Taxman, Teammm, Techman224, Tedickey, Template namespace initialisation script,
Tentinator, Teval, Tewy, Tgeairn, Tgr, The Thing That Should Not Be, TheTechFan, Thetimperson, Thniels, Thomasuniko, Thv, Tiddly Tom, Tide rolls, Tim Starling, Timhowardriley, Tizio,
Tobias Bergemann, TomT0m, Tomatensaft, Tony1, TonyClarke, Torc2, Toussaint, Trusilver, TuukkaH, Tysto, Ubiq, Ulric1313, Ultra two, Undeference, Useight, Usman&muzammal, Vadmium,
Vahid83, Vaibhavkanwal, Vald, VampWillow, VictorAnyakin, Victorgrigas, Vivin, Vkhaitan, Vriullop, Vsion, WAS 4.250, Waterfles, Wavelength, Wiki alf, Wiki13, Wiki4Blog, WikiTome,
Wikibob, Wikibofh, Wikisedia, Wimt, Windharp, Wlievens, Wmahan, Woohookitty, Ww, Xaosflux, Xavier Combelle, Yana209, Yath, Yk Yk Yk, Yoric, Zaheen, Zarniwoot, Zawersh,
ZeWrestler, Zero1328, Zoicon5, Zondor, ²¹², Σ, ‫ﺋﺎﺭﺍﺱ ﻧﻮﺭﯼ‬, 986 anonymous edits

Algorithm  Source: https://en.wikipedia.org/w/index.php?oldid=586944864  Contributors: "alyosha", 12.35.86.xxx, 128.214.48.xxx, 151.26.10.xxx, 161.55.112.xxx, 204.248.56.xxx,
24.205.7.xxx, 747fzx, 84user, 98dblachr, A bit iffy, APH, APerson, Aarandir, Abovechief, Abrech, Acroterion, Adam Marx Squared, Adamarthurryan, Adambiswanger1, Addshore, Aekamir,
Agasta, Agent phoenex, Ahy1, Alcalazar, Ale2006, Alemua, Alex43223, Alexandre Bouthors, Alexius08, Algogeek, Algorithmguru, Algoritmy, Allan McInnes, Altaïr, Amberdhn, Andonic,
Andre Engels, Andreas Kaufmann, Andrejj, Andres, Andrewman327, Anomalocaris, Anrnusna, Antandrus, Anthony, Anthony Appleyard, Antiqueight, Anwar saadat, Apofisu, Arvindn,
Athaenara, AtticusX, AxelBoldt, Azurgi, B4hand, Bact, Bapi mahanta, Bart133, Basketboy63, Bb vb, BeavisSanchez, Belmira11, Benn Adam, Bethnim, Bgwhite, Bigchip, Bill4341, BillC,
Billcarr178, Billymac00, Blackguy1212, Blackrock01, Blankfaze, Bloorain, Bob1312, Bobblewik, Boing! said Zebedee, Bonadea, Bongwarrior, BorgQueen, Boud, Brendonshay, Brenont,
BriEnBest, Brion VIBBER, Brutannica, Bryan Derksen, Bth, Bucephalus, CBM, CRGreathouse, Caltas, Cameltrader, CarloMartinelli, CarlosMenendez, Cascade07, Cbdorsett, Cedar101, Cedars,
Chadernook, Chamal N, Charles Matthews, CharlesGillingham, Charvex, Chasingsol, Chatfecter, Chewings72, Chinju, Chris 73, Chris Roy, Chris the speller, ChrisGualtieri, Cic, Citneman,
Ckatz, Clarince63, Closedmouth, Cmdieck, Colonel Warden, Conversion script, Cornflake pirate, Corti, CountingPine, Cplakidas, Crazysane, Cremepuff222, Crimsonraptor, Curps, Cybercobra,
Cyberjoac, Cyrusace, DASSAF, DAndC, DCDuring, DagosNavy, Dan.inPractice, Danakil, Danger, Dastoger Bashar, Daven200520, David Eppstein, David Gerard, Dbabbitt, Dcoetzee,
DeadEyeArrow, Deadcracker, Deeptrivia, Deflective, Delta Tango, Den fjättrade ankan, Denisarona, Deor, Depakote, DerHexer, Derek farn, DevastatorIIC, Dgrant, Diannaa, Diego Moya,
Dinsha 89, Discospinster, Djbrainboy, Dkwebsub, Dmcq, Donner60, DopefishJustin, DouglasCalvert, Dreftymac, Drilnoth, Drpaule, Drrevu, DslIWG,UF, Duncharris, Dwheeler, Dylan Lake,
Dysprosia, EconoPhysicist, Ed Poor, Ed g2s, Editorinchief1234, Eequor, Efflux, EhsanKhaki, El C, ElectricRay, Electron9, ElfMage, Ellegantfish, Eloquence, Emadfarrokhih, Epbr123, Eric
Wester, Eric.ito, Erik9, Essjay, Eubulides, Everything counts, Evil saltine, EyeSerene, Fabullus, Falcon Kirtaran, FallingGravity, Fantom, Farosdaughter, Farshadrbn, Fastfission, Fastilysock,
Favonian, Fernkes, Fetchcomms, FiP, FlyHigh, Fragglet, Frecklefoot, Fredrik, Friginator, Frikle, Furkaocean, G2dk7g, GB fan, GOV, GRAHAMUK, Gabbe, Gaius Cornelius, Galoubet,
Galzigler, Gandalf61, Gary King, Geniac, Geo g guy, Geometry guy, George100, GeorgeAhad, Ghimboueils, Gianfranco, Giantscoach55, Giftlite, Gilgamesh, Giminy, Ginsuloft, Gioto, Glass
Sword, Gnowor, Gogo Dodo, GoingBatty, Goochelaar, Goodnightmush, Googl, GraemeL, Graham87, Greensburger, Gregbard, Groupthink, Grubber, Gscshoyru, Gubbubu, Gurch,
Guruduttmallapur, Guy Peters, Guywhite, H3l1x, HMSSolent, Hadal, Hairy Dude, Hamid88, Hannes Eder, Hannes Hirzel, Harryboyles, Harvester, Headbomb, HenryLi, HereToHelp, Heron,
Hexii, Hfastedge, Hiraku.n, Hmains, Hobbobobo, Hu12, Hurmata, Hvn0413, IGeMiNix, Iames, Ian Pitchford, Imfa11ingup, Inkling, InterruptorJones, Intgr, Iridescent, Isheden, Isis, Isofox,
Ixfd64, J.delanoy, JForget, JIP, JSimmonz, Jacomo, Jacoplane, Jagged 85, January, Jarble, Jaredwf, JediMaster362, Jeff Edmonds, Jeronimo, Jersey Devil, Jerzy, Jidan, Jim1138, Jiri 1984,
JoanneB, Jochen Burghardt, Johan1298, Johantheghost, John of Reading, JohnBlackburne, Johneasley, Johnsap, Jojit fb, Jonik, Jonpro, Joosyfoo, Jorvik, Josh Triplett, Jpbowen, Jtvisona,
JuPitEer, Jundi78, Jusdafax, Jóna Þórunn, K3fka, KHamsun, Kabton14, Kanags, Kanjy, Kanzure, Kazvorpal, Keilana, Kenbei, Kevin Baas, Kh0061, Khakbaz, Khazar2, Kku, Kl4m, Klausness,
Klemen Kocjancic, Klugkerl, Kntg, Kozuch, Kragen, Krellis, Kushalbiswas777, Kwamikagami, LC, LCS check, Lambiam, LancerSix, Larry R. Holmgren, Ldo, Ldonna, Leszek Jańczuk,
Levineps, Lexor, Lhademmor, Lightmouse, LilHelpa, Lilwik, Ling.Nut, Lissajous, Logan, Loggerjack, Lrsjohnson, Lucyin, Lugia2453, Lumidek, Lumos3, Lupin, Luís Felipe Braga, Lycurgus,
MARVEL, MJ94, MSPbitmesra, Macrakis, Magioladitis, MagnaMopus, Mahali syarifuddin, Makewater, Makewrite, Maldoddi, Malleus Fatuorum, Mange01, Mani1, ManiF, Manik762007,
Manojkumarcm, Marek69, Mark Dingemanse, Markaci, Markh56, Markluffel, Martarius, Martinkunev, Marysunshine, Materialscientist, MathMartin, Mathviolintennis, Matma Rex, Matt Crypto,
MattOates, Maurice Carbonaro, Mav, Maxamegalon2000, McDutchie, Meowist, Mesimoney, Metalhead94, Mfc, Mhakcm, Mhcs.907, Michael Hardy, Michael Slone, Michael Snow, MickWest,
Miguel, Mikael Häggström, Mike Rosoft, Mikeblas, Mindmatrix, Mission2ridews, Miym, Mlpkr, Moe Epsilon, Mogism, Mpeisenbr, MrOllie, Mttcmbs, Multipundit, MusicNewz, MustangFan,
Mutinus, Mxn, Nanshu, Napmor, Nasaralla, Neutral current, Nick Number, Nihonjoe, Nikai, Nikhileditor, Nikola Smolenski, Nil Einne, Nmnogueira, Nodira777, Noisy, Nthep, Nummer29,
Obradovic Goran, Od Mishehu, Odin of Trondheim, Ohnoitsjamie, Omnipaedista, Onorem, OrgasGirl, Orion11M87, Ortolan88, Oskar Sigvardsson, Oxinabox, Oxymoron83, Ozziev, PAK Man,
PMDrive1061, Paddu, PaePae, Paolo.dL, Pascal.Tesson, Pasky, Paul August, Paul Foxworthy, Paxinum, Pb30, Pcap, Pde, Penumbra2000, Persian Poet Gal, Pgr94, PhageRules1, Philip Trueman,
Philipp Wetzlar, Phobosrocks, Pinethicket, Pit, Plowboylifestyle, Policron, Poor Yorick, Populus, Possum, PradeepArya1109, Preetykondyal, Proffesershean, Quendus, Quintote, Quota,
Qwertyus, R. S. Shaw, RA0808, Raayen, RainbowOfLight, Randomblue, Raul654, Rdsmith4, Reconsider the static, Rednas1234, Rejka, Rettetast, RexNL, Rgoodermote, Rholton, Riana, Rich
Farmbrough, Rizzardi, Rjwilmsi, Robbiemorrison, Robert s denton, RobertG, RobinK, Rpwikiman, Rror, Rrsoni, RussBlau, Ruud Koot, Ryguasu, SJP, SNmirza, Salix alba, Salleman,
SamShearman, SarekOfVulcan, Satassi, Satellizer, Savidan, Scarian, Seanwal111111, Seb, Sesse, Sfan00 IMG, Shadowjams, Shamalyguy, ShelfSkewed, Shipmaster, Silly rabbit, SilverStar,
Silvrous, Simeondahl, Sitharama.iyengar1, Skylo Frost, SlackerMom, Sni56996, Snow Blizzard, Snowolf, Snoyes, Soler97, Some jerk on the Internet, Sonjaaa, Sophus Bie, Sopoforic, Soroosh60,
Spankman, Speck-Made, Spellcast, Spiff, Splang, Sridharinfinity, Staszek Lem, Stephan Leclercq, Storkk, Sulaymaan114, Sundar, SusikMkr, Susurrus, Swerdnaneb, Swfung8, Systemetsys,
TakuyaMurata, Tarquin, Tatelyle, Taw, Tempodivalse, Thane, The Firewall, The Fish, The High Fin Sperm Whale, The Nut, The Thing That Should Not Be, The ansible, TheGWO,
TheNewPhobia, Thecarbanwheel, Theodore7, Tiddly Tom, Tide rolls, Tijfo098, Tim Marklew, Timc, Timhowardriley, Timir2, Timrollpickering, Tizio, Tlesher, Tlork Thunderhead, Tobby72,
Tobias Bergemann, Tolly4bolly, Toncek, Tony1, Torchwoodwho, Tpbradbury, Trevor MacInnis, Treyt021, TuukkaH, UberScienceNerd, Ud1406, Ugog Nizdast, Uri-Levy, User A1, V31a,
Vasileios Zografos, Vikreykja, Vildricianus, Vincent Lextrait, Wa3frp, Wael Ellithy, Wainkelly, Waltnmi, Waqaee, Wavelength, Wayiran, Waynefan23, Webclient101, Weetoddid, Werieth,
Wexcan, Who, Whosyourjudas, WhyDoIKeepForgetting, Widr, WikHead, Wiki-uk, Willking1979, WillowW, Winston365, Wjejskenewr, Woohookitty, Wvbailey, Xact, Xashaiar, Yamamoto
Ichiro, Yintan, Yohannesb, Ysindbad, Yworo, Zfr, Zocky, Zondor, Zoney, Zundark, ZxxZxxZ, Владимир Паронджанов, 1218 anonymous edits

Deterministic algorithm  Source: https://en.wikipedia.org/w/index.php?oldid=585095985  Contributors: ANONYMOUS COWARD0xC0DE, Abdull, Airplaneman, AmirOnWiki, Charles
Matthews, Creidieki, Dcoetzee, DouglasHeld, Edward, Griba2010, Jafet, Jamelan, Jcarroll, Jdforrester, Kooky, Malcohol, ParallelWolverine, Pol098, Quentar, Spiritia, Suruena, Sweavo,
Todofixthis, Yuval madar, Zogromalvus, Zuidervled, 12 anonymous edits

Data structure  Source: https://en.wikipedia.org/w/index.php?oldid=586032832  Contributors: -- April, 195.149.37.xxx, 24.108.14.xxx, Abd, Abhishek.kumar.ak, Adrianwn, Ahoerstemeier,
Ahy1, Aks1521, Alansohn, Alexius08, Alhoori, Allan McInnes, Altenmann, Anderson, Andre Engels, Andreas Kaufmann, Antonielly, Ap, Apoctyliptic, Arjayay, Arvindn, Babbage, Banaticus,
Bereajan, Bharatshettybarkur, BioPupil, Bluemoose, BurntSky, Bushytails, CRGreathouse, Caiaffa, Caltas, Carlette, Chandraguptamaurya, Chris Lundberg, Closedmouth, Cncmaster, Coldfire82,
Conversion script, Corti, Cpl Syx, Craig Stuntz, DAndC, DCDuring, DRAGON BOOSTER, DancingPhilosopher, Danim, David Eppstein, DavidCary, Dcoetzee, Demicx, Derbeth, Digisus,
Dmoss, Dougher, DragonLord, Easyas12c, EconoPhysicist, EdEColbert, Edaelon, EncMstr, Er Komandante, Esap, Eurooppa, Eve Hall, Excirial, Falcon8765, FinalMinuet, Forderud, Forgot to
put name, Fraggle81, Fragglet, Frap, Fresheneesz, GPhilip, Galzigler, Garyzx, Gauravxpress, GeorgeBills, Ghyll, Giftlite, Gilliam, Glenn, Gmharhar, Googl, GreatWhiteNortherner, HMSSolent,
Haeynzen, Hairy Dude, Haiviet, Ham Pastrami, Helix84, Hernan mvs, Hypersonic12, I am One of Many, IGeMiNix, Iridescent, JLaTondre, Jacob grace, Jerryobject, Jiang, Jim1138,
Jimmytharpe, Jirka6, Jncraton, Jorge Stolfi, Jorgenev, Justin W Smith, Karl E. V. Palmen, Kh31311, Khukri, Kingpin13, Kingturtle, Kjetil r, Koavf, LC, Lancekt, Lanov, Laurențiu Dascălu, Liao,
Ligulem, Liridon, Lithui, Loadmaster, Lotje, MTA, Mahanga, Mandarax, Marcin Suwalczan, Mark Renier, MasterRadius, Materialscientist, Mdd, MertyWiki, Methcub, Michael Hardy,
Mindmatrix, Minesweeper, Mipadi, MisterSheik, MithrandirAgain, Miym, Morel, Mr Stephen, MrOllie, Mrjeff, Mushroom, Nanshu, Nick Levine, Nikola Smolenski, Nnp, Noah Salzman,
Noldoaran, Nskillen, Nyq, Obradovic Goran, Ohnoitsjamie, Oicumayberight, Orzechowskid, PaePae, Pale blue dot, Panchobook, Pascal.Tesson, Paushali, Peterdjones, Pgallert, Pgan002, Piet
Delport, Populus, Prari, Publichealthguru, Pur3r4ngelw, Qwyrxian, Ramkumaran7, Raveendra Lakpriya, Reedy, Requestion, Rettetast, RexNL, ReyBrujo, Rhwawn, Richfaber, Ripper234,
Rocketrod1960, Rodhullandemu, Rrwright, Ruud Koot, Ryan Roos, Sallupandit, Sanjay742, Seth Ilys, Sethwoodworth, Sgord512, Shadowjams, Shanes, Sharcho, Siroxo, SoniyaR, Soumyasch,
Spellsinger180, Spitfire8520, SpyMagician, SteelPangolin, Strife911, Sundar sando, Tablizer, TakuyaMurata, Tanvir Ahmmed, Tas50, Tbhotch, Teles, Thadius856, The Thing That Should Not
Be, Thecheesykid, Thinktdub, Thompsonb24, Thunderboltz, Tide rolls, Tobias Bergemann, Tom 99, Tony1, Traroth, TreveX, TuukkaH, Uriah123, User A1, UserGoogol, Varma rockzz,
Vicarious, Vineetzone, Vipinhari, Viriditas, Vishnu0919, Vortexrealm, Walk&check, Wbm1058, Widefox, Wikilolo, Wmbolle, Wrp103, Wwmbes, XJaM, Yamla, Yashykt, Yoric, Доктор
прагматик, ‫ﺳﻌﯽ‬, ‫ﻣﺎﻧﻲ‬, 509 anonymous edits

List (abstract data type)  Source: https://en.wikipedia.org/w/index.php?oldid=571037469  Contributors: Adrianwn, Alfredo ougaowen, Alihaq717, Altenmann, Andreas Kaufmann, Andrew
Eisenberg, Angela, Bomazi, BradBeattie, Brick Thrower, Calexico, Chevan, Chowbok, Chris the speller, Christian List, Classicalecon, Cmdrjameson, Crater Creator, Cybercobra, Daniel
Brockman, Delirium, Denispir, Dgreen34, Dijxtra, Dismantle101, Docu, Drag, Eao, Ed Cormany, Elaz85, Elf, Elwikipedista, EugeneZelenko, Falk Lieder, Fredrik, Gaius Cornelius, Glenn,
HQCentral, Ham Pastrami, Hyacinth, Jan Hidders, Jareha, Jeff3000, Jeffrey Mall, Jimmisbl, Jorge Stolfi, Joseghr, Josh Parris, Joswig, Ketiltrout, Liao, ManN, Mav, Mic, Michael Hardy,
Mickeymousechen, Mike.nicholaides, Mindmatrix, Mipadi, Nbarth, Neilc, Noldoaran, P0nc, Paddy3118, Palmard, Patrick, Paul G, Paul foord, Pcap, Peak, Poor Yorick, Prumpf, Puckly, R. S.
Shaw, Rp, Ruud Koot, Salix alba, Samuelsen, Spoon!, Stormie, TShilo12, TakuyaMurata, The Thing That Should Not Be, Tokek, VictorAnyakin, WODUP, Wavelength, Wbm1058, WillNess,
Wmahan, Wnissen, Wwwwolf, XJaM, ZeroOne, 92 anonymous edits

Array data structure  Source: https://en.wikipedia.org/w/index.php?oldid=586603034  Contributors: 111008066it, 16@r, 209.157.137.xxx, A'bad group, AbstractClass, Ahy1, Alfio, Alksentrs,
Alksub, Andre Engels, Andreas Kaufmann, Anonymous Dissident, Anwar saadat, Apers0n, Army1987, Atanamir, Awbell, B4hand, Bargomm, Beej71, Beetstra, Beland, Beliavsky,
BenFrantzDale, Berland, Betacommand, Bill37212, Blue520, Borgx, Brick Thrower, Btx40, Caltas, Cameltrader, Cgs, Chetan chopade, Chris glenne, ChrisGualtieri, Christian75, Conversion
script, Corti, Courcelles, Cybercobra, DAGwyn, Danakil, Darkspots, David Eppstein, DavidCary, Dcoetzee, Derek farn, Dmason, Don4of4, Dreftymac, Dysprosia, ESkog, EconoPhysicist, Ed
Poor, Engelec, Fabartus, Footballfan190, Forderud, Fraggle81, Fredrik, Funandtrvl, Func, Fvw, G worroll, Garde, Gaydudes, George100, Gerbrant, Giftlite, Graham87, Graue, Grika, GwydionM,
Heavyrain2408, Henrry513414, Hide&Reason, Highegg, Icairns, Ieee andy, Immortal Wowbagger, Intgr, Ipsign, J.delanoy, JLaTondre, JaK81600, Jackollie, Jandalhandler, Jeff3000, Jfmantis,
Jh51681, Jimbryho, Jkl, Jleedev, Jlmerrill, Jogloran, John, Johnuniq, Jonathan Grynspan, Jorge Stolfi, Josh Cherry, JulesH, Julesd, Kaldosh, Karol Langner, Kbdank71, Kbrose, Ketiltrout,
Kimchi.sg, Krischik, Kukini, LAX, Lardarse, Laurențiu Dascălu, Liempt, Ligulem, Ling.Nut, Lockeownzj00, Lowellian, Macrakis, Magioladitis, Mark Arsten, Massysett, Masterdriverz, Mattb90,
Mcaruso, Mdd, Merlinsorca, Mfaheem007, Mfb52, Michael Hardy, Mike Van Emmerik, Mikeblas, Mikhail Ryazanov, Mindmatrix, MisterSheik, Mr Adequate, Mrstonky, Muzadded, Mwtoews,
Narayanese, Neelix, Nicvaroce, Nixdorf, Norm, Oxymoron83, Patrick, PhiLho, Piet Delport, Poor Yorick, Princeatapi, Pseudomonas, Qutezuce, Quuxplusone, R000t, RTC, Rbj, Redacteur,
ReyBrujo, Rgrig, Rich Farmbrough, Rilak, Roger Wellington-Oguri, Rossami, Ruud Koot, SPTWriter, Sagaciousuk, Sewing, Sharkface217, Simeon, Simoneau, SiobhanHansa, Skittleys, Slakr,
Slogsweep, Smremde, Spoon!, Squidonius, Ssd, Stephenb, Strangelv, Supertouch, Suruena, Svick, TakuyaMurata, Tamfang, Tauwasser, Thadius856, The Anome, The Thing That Should Not Be,
The Utahraptor, Themania, Thingg, Timneu22, Travelbird, Trevyn, Trojo, Tsja, TylerWilliamRoss, User A1, Vanished user 1234567890, Visor, Waywardhorizons, Wbm1058, Wernher, Widr,
Wws, Yamamoto Ichiro, ZeroOne, Zzedar, 334 anonymous edits

FIFO  Source: https://en.wikipedia.org/w/index.php?oldid=585765159  Contributors: Adam Blinkinsop, Ahoerstemeier, AlphaAqua, Amitalon, Andres, Antaeus Feldspar, AtomicSource, Axem
Titanium, Bearian, Bp2u, Calliopejen1, ChanceTheGardener, Chasingsol, Chowbok, Chromancer, Clawed, Compprof9000, Conversion script, Coremayo, Craig t moore, Cronian, Cybercobra,
D6, David Eppstein, Dcoetzee, Dicklyon, Dkasak, Dominick, DropDeadGorgias, Fabartus, Fetofs, Francois Trazzi, Frap, Fæ, GB fan, Graham87, Gushi, HamburgerRadio, Heron, Jack
Greenmaven, JackLumber, Jim62sch, Josh3736, Kaarthikstars, Kenneth614, Kevin B12, Khanayoub, Killiondude, Knutux, Kvng, L33th4x0rguy, LSB, Llloic, Manassehkatz, Mandarax,
Materialscientist, Matt.fidler, Melcombe, Mike1024, MiroslavPragl, Nairobiny, Nbarth, Neier, Neilc, Nuno Tavares, Oleg Alexandrov, Omegatron, Patrick, Piet Delport, Pomte, Puckly,
Quark1005, Quilz, R!SC, R. S. Shaw, RJHall, Renata3, Rhobite, Rich Farmbrough, Rjanag, Rjwilmsi, Rklawton, Rlandmann, Rob Hooft, Royalguard11, Ruud Koot, Savannah Kaylee, Shai-kun,
SimonArlott, Singaraja, SirIsaacBrock, Smalljim, Stephenb, Sunray, TenPoundHammer, Tetraedycal, Thattommyguy, TheParanoidOne, Thumperward, TimBentley, TuukkaH, UkPaolo,
Unixguy, Wayiran, Wereon, Wolkykim, Wtmitchell, Xezbeth, Yonidebest, Yvesk, Zeeyanwiki, ZeroOne, Лев Дубовой, 石, 116 anonymous edits

Queue (abstract data type)  Source: https://en.wikipedia.org/w/index.php?oldid=587025354  Contributors: 16@r, Ahoerstemeier, Akerans, Almkglor, Andre Engels, Andreas Kaufmann,
Arsenic99, Atiflz, Banditlord, BenFrantzDale, BlckKnght, Bobo2000, Brain, Bruce1ee, Caerwine, Caesura, Calliopejen1, Carlosguitar, Cdills, Chairboy, Chelseafan528, Chris the speller,
Christian75, Ckatz, Clehner, Conan, Contactbanish, Conversion script, Corti, Cybercobra, Dabear, DavidLevinson, Dcoetzee, DePiep, Deflective, Detonadorado, Discospinster, Dmitrysobolev,
Edward, Egerhart, Emperorbma, Ewlyahoocom, Fredrik, Fswangke, Furrykef, Garfieldnate, Gbduende, Ggia, Giftlite, Glenn, Graham87, GrahamDavies, Gralfca, Gunslinger47, Ham Pastrami,
Hariva, Helix84, Hires an editor, Honza Záruba, Howcheng, Indil, Iprathik, Ixfd64, J. M., JC Chu, Jesin, Jguk, JohJak2, John lindgren, Joseph.w.s, JosephBarillari, Jrtayloriv, Jusjih, Keilana,
Kenyon, Kflorence, Kletos, Ksulli10, Kushalbiswas777, Kwamikagami, LapoLuchini, Liao, Loupeter, Lperez2029, M2MM4M, MahlerFive, Mark Renier, Marry314113, Massysett,
Materialscientist, MattGiuca, Maw, Maxwellterry, Mc6809e, Mecanismo, Mehrenberg, Metasquares, Michael Hardy, Mike1024, MikeDunlavey, Miklcct, Mindmatrix, Mlpkr, MrOllie, Nanshu,
Nbarth, Nemo Kartikeyan, Noldoaran, Nutster, Nwbeeson, Oli Filth, OliverTwisted, Olivier Teuliere, Patrick, Peng, PhilipR, PhuksyWiki, Pissant, PrometheeFeu, PseudoSudo, Qwertyus,
Rachel1, Rahulghose, Rasmus Faber, Rdsmith4, Redhanker, Ruby.red.roses, Ruud Koot, Sanjay742, SensuiShinobu1234, Sharcho, SimenH, SiobhanHansa, SoSaysChappy, Some jerk on the
Internet, Sorancio, Spoon!, SpuriousQ, Stassats, Stephenb, Thadius856, Thesuperslacker, Tlefebvre, Tobias Bergemann, TobiasPersson, Tranzenic, Traroth, Tsemii, Uruiamme, VTBassMatt,
Vanmaple, Vegpuff, W3bbo, Wikibarista, Wikilolo, Woohookitty, Wouter.oet, Wrp103, X96lee15, Zachlipton, Zanaferx, Zoney, Zotel, Ztothefifth, Zvar, ‫ﻣﺎﻧﻲ‬, 251 anonymous edits

LIFO  Source: https://en.wikipedia.org/w/index.php?oldid=548809929  Contributors: Andyhowlett, Calliopejen1, FFEFD5, Fui in terra aliena, Funandtrvl, Jim.henderson, JzG, Lapost, Nbarth,
Viking59, WereSpielChequers, Widefox, Zippanova, 14 anonymous edits

Stack (abstract data type)  Source: https://en.wikipedia.org/w/index.php?oldid=585052246  Contributors: 144.132.75.xxx, 1exec1, 202.144.44.xxx, 2mcm, Aaron Rotenberg, Aavviof,
Abhidesai, Adam majewski, Adam78, Agateller, Ahluka, Aillema, Aitias, Al Lemos, Alxeedo, Andre Engels, Andreas Kaufmann, Andrejj, Angusmclellan, Arch dude, Arkahot, Arvindn,
BenFrantzDale, Bentonjimmy, BiT, BlizzmasterPilch, Bobo192, Boivie, Bookmaker, Borgx, Bsmntbombdood, Caerwine, Calliopejen1, CanadianLinuxUser, CanisRufus, Cedar101, Ceriak,
Chengshuotian, Cheusov, Chris the speller, ChrisGualtieri, Christian List, Cjhoyle, Clx321, Cmccormick8, Colin meier, Conversion script, CoolKoon, Corti, CosineKitty, Ctxppc, Cybercobra,
David Eppstein, David.Federman, Davidhorman, Dcoetzee, Dhardik007, Dillesca, Dinoen, Dreamkxd, ENeville, Edgar181, ElNuevoEinstein, F15x28, Faysol037, Fernandopabon, Finlay
McWalter, Flaqueleto, Fragment, Fredrik, FrontLine, Funandtrvl, Funky Monkey, Gauravxpress, Gecg, Gggh, Ghettoblaster, Giftlite, Gonzen, Graham87, Grue, Guy Peters, Gwernol,
Hackwrench, Ham Pastrami, Hariva, Headbomb, Hftf, Hgfernan, Hook43113, Hqb, IITManojit, IanOsgood, Ianboggs, Icarot, Incnis Mrsi, Individual X, IntrigueBlue, Ionutzmovie, Iridescent,
Ixfd64, Jacektomas, Jake Nelson, James Foster, Jarble, Jeff G., JensMueller, Jesse Viviano, Jfmantis, Jheiv, Johnuniq, Jprg1966, Jyotiswaroopr123321, Jzalae, Karl-Henner, Kbdank71, Klower,
KnightRider, Kushalbiswas777, L Kensington, Liao, Loadmaster, LobStoR, Luciform, Maashatra11, Macrakis, Maeganm, Magioladitis, Mahlon, Mahue, Manassehkatz, Mandarax, Marc
Mongenet, Mark Renier, MartinHarper, Materialscientist, MattGiuca, Maxim Razin, Maximaximax, Mbessey, Mdd, MegaHasher, Melizg, Mentifisto, Michael Hardy, Michael Slone, Mindmatrix,
Mipadi, Mlpkr, Modster, Mogism, Mohinib27, Mr. Stradivarius, Murray Langton, Musiphil, Myasuda, Nakarumaka, Nbarth, Netkinetic, Nipunbayas, NoirNoir, Noldoaran, Notheruser,
Nova2358, Nutster, Obradovic Goran, OlEnglish, Oli Filth, Patrick, Paul Kube, PeterJeremy, Physicistjedi, Pion, Poccil, Pomte, Postrach, PranavAmbhore, Proxyma, Quantran202, R'n'B, R. S.
Shaw, RA0808, RDBrown, RTC, Rajavenu.iitm, Raviemani, Reikon, RevRagnarok, ReyBrujo, Robbe, Robert impey, Robin400, Rpv.imcc, Rustamabd, Ruud Koot, Rwxrwxrwx, Salocin-yel,
Sanjay742, Seaphoto, Seth Manapio, Shuipzv3, SimenH, SiobhanHansa, Slgrandson, Spieren, Spoon!, SpyMagician, Stan Shebs, StanBally, Stephenb, Stevenbird, Strcat, Stw, TakuyaMurata,
Tapkeerrambo007, Tasc, Tbhotch, The Anome, Thine Antique Pen, Thumperward, Tiberius6996, Tobias Bergemann, Tom.Reding, Tranzenic, Traroth, Tsf, TuukkaH, Ultranaut, Unara,
VTBassMatt, VampWillow, Vanished user 1029384756, Vanished user 9i39j3, Vasiliy Faronov, Vishal G.Dhavale., Vystrix Nexoth, Whosasking, Widr, Wikidan829, Wikilolo, WiseWoman,
Wlievens, Xcvista, Xdenizen, Yammesicka, Zakblade2000, Zchenyu, Ztothefifth, 399 anonymous edits

Computer program  Source: https://en.wikipedia.org/w/index.php?oldid=585477271  Contributors: 10metreh, 16@r, ABF, AKGhetto, AVRS, Abdullais4u, Adrianwn, AdultSwim,
Ahoerstemeier, Airada, Alansohn, Aldaron, Aldie, Ale jrb, AlefZet, Aleksd, Alisha0512, AlistairMcMillan, Allan McInnes, Anaxial, Ancheta Wis, Andre Engels, Andrejj, Andres, Angrytoast,
Animum, Anja98, Ans-mo, Antandrus, Arthena, Artichoker, Ash211, Atlant, Auric, Barkingdoc, Barts1a, Bcastell, Behnam8419, Bfinn, Bhadani, Blainster, Bob1960evens, Boothy443,
Born2cycle, Bornhj, Bryan Derksen, Callanecc, Can't sleep, clown will eat me, CanadianLinuxUser, Cap'n Refsmmat, CardinalDan, Carrot Lord, Cartiman, Children.of.the.Kron, Chriswiki,
Chun-hian, CommonsDelinker, Conversion script, CopperMurdoch, CurranH, Cybercobra, DARTH SIDIOUS 2, DMacks, DVdm, Danakil, Dannyruthe, David Kernow, Deliv3ranc3, Derek farn,
DexDor, Dicklyon, Diego Moya, Diptytiw, Discospinster, DonToto, Donner60, Duncharris, ERobson, ESkog, Edward, Ehheh, ElKevbo, Elassint, EncMstr, Enchanter, Epbr123, FF2010,
Fabartus, Filemon, Firemaker117, FleetCommand, Flyer22, Frencheigh, Frietjes, Frosted14, Funandtrvl, Furrykef, Gaga654, Gaius Cornelius, Ghyll, Giftlite, Greenrd, Grstain, Grunt, Guppy,
Gurchzilla, Guyjohnston, HappyDog, Hoo man, HotQuantum3000, Hu12, I dream of horses, I'll suck anything, Inaaaa, Incnis Mrsi, Ipsign, IslandHopper973, Islescape, Iulianu, JGNPILOT,
Jaanus.kalde, JackLumber, Jackmatty, Jackol, JamesMoose, Jerome Kelly, Jerryobject, Jesant13, John Fader, John5678777, JohnWittle, JohnnyRush10, Jonnyapple, Josh Parris, Josve05a,
Jotomicron, Jpbowen, Jpgordon, Jusjih, K.Nevelsteen, K.lee, KANURI SRINIVAS, Karlzt, Kbh3rd, Kdakin, Keilana, KellyCoinGuy, Kenny sh, Khalid hassani, Kongr43gpen, Kusunose,
Kwekubo, Larry_Sanger, Laurens-af, Lev, Lfdder, Liberty Miller, Liempt, Lightmouse, Ligulem, Longhair, LuchoX, Lucky7654321, Lulu of the Lotus-Eaters, Luna Santin, M, MAG1, Mac,
Madhero88, Maestro magico, Magister Mathematicae, Mani1, Manop, Martijn Hoekstra, MartinRe, Martynas Patasius, Marudubshinki, Matty4123, Maximaximax, Mayur, McGeddon, Mercer
island student, Mermaid from the Baltic Sea, Metrax, Miguelfms, Mike Rosoft, Mike Van Emmerik, Mikrosam Akademija 2, Mild Bill Hiccup, Mindmatrix, Mlpkr, MmisNarifAlhoceimi,
Mohamedsayad, Mortenoesterlundjoergensen, Murray Langton, Nanshu, Nickokillah, Nikai, Nixdorf, Noctibus, Noosentaal, NovaSTL, Ohnoitsjamie, Oicumayberight, Oliver Pereira, Onopearls,
Orange Suede Sofa, OrgasGirl, Palnu, Paulkramer, Pearle, PetterBudt, Pharaoh of the Wizards, Philip Trueman, Poor Yorick, Power User, Proofreader77, Quota, Quuxplusone, R. S. Shaw, R.
fiend, Racerx11, Radarjw, Radon210, Raise exception, Raven in Orbit, Rdsmith4, RedWolf, Rich Farmbrough, Rjwilmsi, Robert123R, Roybristow, Rusty Cashman, Ruud Koot, S.Örvarr.S, Sadi
Carnot, Sae1962, Sannse, Saros136, Satellizer, Sean.hoyland, Sebbb-m, Sfahey, Shanes, SigmaEpsilon, Silver hr, SimonD, Sir Anon, Sir Nicholas de Mimsy-Porpington, Sjö, Skizzik,
SlackerMom, Slady, Slashem, Slowking Man, Smiller933, SqlPac, Stephenb, Stevertigo, Storm Rider, Subdolous, Suisui, Symphony Girl, TBloemink, TakuyaMurata, Template namespace
initialisation script, Tgeairn, Tharkowitz, The Anome, The Thing That Should Not Be, Thecheesykid, Thegreenflashlight, Thingg, Thumperward, TiagoTiago, Tide rolls, Timhowardriley,
Timshelton114, Tobias Bergemann, Tobiasjwt, TomasBat, Tommy2010, TonyClarke, Tpbradbury, Troels Arvin, True Genius, Ukexpat, UrbanBard, VIKIPEDIA IS AN ANUS!, Vcelloho,
WJetChao, Welsh, Wereon, Wernher, Wesley, WhatamIdoing, Wiki alf, WikiDan61, Wikijens, Wikiloop, Wolfkeeper, XJaM, Xn4, Xp54321, Yidisheryid, Yintan, Ykhwong, Yonaa, Zipircik,
ZonkBB6, Zundark, Zzuuzz, 462 anonymous edits

Image Sources, Licenses and Contributors


File:Acer Aspire 8920 Gemstone by Georgy.JPG  Source: https://en.wikipedia.org/w/index.php?title=File:Acer_Aspire_8920_Gemstone_by_Georgy.JPG  License: Creative Commons
Attribution-Sharealike 3.0,2.5,2.0,1.0  Contributors: Georgy90
File:Columbia Supercomputer - NASA Advanced Supercomputing Facility.jpg  Source:
https://en.wikipedia.org/w/index.php?title=File:Columbia_Supercomputer_-_NASA_Advanced_Supercomputing_Facility.jpg  License: Public Domain  Contributors: Trower, NASA
File:Intertec Superbrain.jpg  Source: https://en.wikipedia.org/w/index.php?title=File:Intertec_Superbrain.jpg  License: Creative Commons Attribution-Sharealike 2.0  Contributors:
Brighterorange, 1 anonymous edits
File:2010-01-26-technikkrempel-by-RalfR-05.jpg  Source: https://en.wikipedia.org/w/index.php?title=File:2010-01-26-technikkrempel-by-RalfR-05.jpg  License: GNU Free Documentation
License  Contributors: Ralf Roletschek (talk) - Fahrradtechnik auf fahrradmonteur.de
File:Thinking Machines Connection Machine CM-5 Frostburg 2.jpg  Source:
https://en.wikipedia.org/w/index.php?title=File:Thinking_Machines_Connection_Machine_CM-5_Frostburg_2.jpg  License: Creative Commons Attribution-Sharealike 2.5  Contributors: Mark
Pellegrini
File:G5 supplying Wikipedia via Gigabit at the Lange Nacht der Wissenschaften 2006 in Dresden.JPG  Source:
https://en.wikipedia.org/w/index.php?title=File:G5_supplying_Wikipedia_via_Gigabit_at_the_Lange_Nacht_der_Wissenschaften_2006_in_Dresden.JPG  License: Creative Commons Attribution
2.5  Contributors: Conrad Nutschan
File:DM IBM S360.jpg  Source: https://en.wikipedia.org/w/index.php?title=File:DM_IBM_S360.jpg  License: Creative Commons Attribution 2.5  Contributors: Ben Franske
File:Acorn BBC Master Series Microcomputer.jpg  Source: https://en.wikipedia.org/w/index.php?title=File:Acorn_BBC_Master_Series_Microcomputer.jpg  License: Creative Commons
Attribution-Sharealike 2.0  Contributors: MarkusHagenlocher, Mono, Ubcule
File:Dell PowerEdge Servers.jpg  Source: https://en.wikipedia.org/w/index.php?title=File:Dell_PowerEdge_Servers.jpg  License: Public Domain  Contributors: Dsv
File:Jacquard.loom.full.view.jpg  Source: https://en.wikipedia.org/w/index.php?title=File:Jacquard.loom.full.view.jpg  License: Public Domain  Contributors: User Ghw on en.wikipedia
File:Jacquard Joseph Marie woven silk.jpg  Source: https://en.wikipedia.org/w/index.php?title=File:Jacquard_Joseph_Marie_woven_silk.jpg  License: Public Domain  Contributors:
Astrochemist, Ecummenic, Ezrdr, Kilom691, Leyo, Mdd, Racconish, WikipediaMaster, 1 anonymous edits
File:Ada Lovelace portrait.jpg  Source: https://en.wikipedia.org/w/index.php?title=File:Ada_Lovelace_portrait.jpg  License: Public Domain  Contributors: Jean-Frédéric, Jkadavoor, Kaldari,
Mywood, Pine, SirHenryNorris, Tokorokoko, 1 anonymous edits
File:Z3 Deutsches Museum.JPG  Source: https://en.wikipedia.org/w/index.php?title=File:Z3_Deutsches_Museum.JPG  License: GNU Free Documentation License  Contributors: User:Teslaton
File:Two women operating ENIAC.gif  Source: https://en.wikipedia.org/w/index.php?title=File:Two_women_operating_ENIAC.gif  License: Public Domain  Contributors: United States Army
File:EDSAC (10).jpg  Source: https://en.wikipedia.org/w/index.php?title=File:EDSAC_(10).jpg  License: Creative Commons Attribution 2.0  Contributors: Copyright Computer Laboratory,
University of Cambridge. Reproduced by permission.
File:80486dx2-large.jpg  Source: https://en.wikipedia.org/w/index.php?title=File:80486dx2-large.jpg  License: GNU Free Documentation License  Contributors: A23cd-s, Adambro, Admrboltz,
Artnnerisa, CarolSpears, Denniss, Greudin, Julia W, Kozuch, Martin Kozák, Mattbuck, Rjd0060, Rocket000, 12 anonymous edits
File:SSEM Manchester museum.jpg  Source: https://en.wikipedia.org/w/index.php?title=File:SSEM_Manchester_museum.jpg  License: Creative Commons Attribution-Sharealike 3.0
 Contributors: Parrot of Doom
File:H96566k.jpg  Source: https://en.wikipedia.org/w/index.php?title=File:H96566k.jpg  License: Public Domain  Contributors: Courtesy of the Naval Surface Warfare Center, Dahlgren, VA.,
1988.
File:FortranCardPROJ039.agr.jpg  Source: https://en.wikipedia.org/w/index.php?title=File:FortranCardPROJ039.agr.jpg  License: Creative Commons Attribution-Sharealike 2.5  Contributors:
Arnold Reinhold
File:Computer Components.webm  Source: https://en.wikipedia.org/w/index.php?title=File:Computer_Components.webm  License: Creative Commons Attribution-Sharealike 3.0
 Contributors: User:Tlenyard
File:Mips32 addi.svg  Source: https://en.wikipedia.org/w/index.php?title=File:Mips32_addi.svg  License: Creative Commons Attribution-ShareAlike 3.0 Unported  Contributors:
en:User:Booyabazooka
File:Magnetic core.jpg  Source: https://en.wikipedia.org/w/index.php?title=File:Magnetic_core.jpg  License: Creative Commons Attribution 2.5  Contributors: Apalsola, Fayenatic london,
Gribozavr, Uberpenguin
File:HDDspin.JPG  Source: https://en.wikipedia.org/w/index.php?title=File:HDDspin.JPG  License: Creative Commons Attribution-Sharealike 2.0  Contributors: Alpha six from Germany
File:Cray 2 Arts et Metiers dsc03940.jpg  Source: https://en.wikipedia.org/w/index.php?title=File:Cray_2_Arts_et_Metiers_dsc03940.jpg  License: Creative Commons Attribution-Sharealike
2.0  Contributors: User:David.Monniaux
File:Internet map 1024.jpg  Source: https://en.wikipedia.org/w/index.php?title=File:Internet_map_1024.jpg  License: Creative Commons Attribution 2.5  Contributors: Barrett Lyon The Opte
Project
File:Human computers - Dryden.jpg  Source: https://en.wikipedia.org/w/index.php?title=File:Human_computers_-_Dryden.jpg  License: Public Domain  Contributors: NACA (NASA)
File:Messagebox info.svg  Source: https://en.wikipedia.org/w/index.php?title=File:Messagebox_info.svg  License: Public Domain  Contributors: Amada44
File:Classes_and_Methods.png  Source: https://en.wikipedia.org/w/index.php?title=File:Classes_and_Methods.png  License: Creative Commons Attribution-Sharealike 3.0  Contributors:
Bobbygammill
Image:Python add5 parse.png  Source: https://en.wikipedia.org/w/index.php?title=File:Python_add5_parse.png  License: Public Domain  Contributors: User:Lulu of the Lotus-Eaters
Image:Python add5 syntax.svg  Source: https://en.wikipedia.org/w/index.php?title=File:Python_add5_syntax.svg  License: Copyrighted free use  Contributors: Xander89
File:Bangalore India Tech books for sale IMG 5261.jpg  Source: https://en.wikipedia.org/w/index.php?title=File:Bangalore_India_Tech_books_for_sale_IMG_5261.jpg  License: Creative
Commons Attribution-Sharealike 3.0  Contributors: User:Victorgrigas
Image:Euclid flowchart 1.png  Source: https://en.wikipedia.org/w/index.php?title=File:Euclid_flowchart_1.png  License: Creative Commons Attribution 3.0  Contributors: Wvbailey
file:speakerlink-new.svg  Source: https://en.wikipedia.org/w/index.php?title=File:Speakerlink-new.svg  License: Creative Commons Zero  Contributors: User:Kelvinsong
File:Euclid's algorithm structured blocks 1.png  Source: https://en.wikipedia.org/w/index.php?title=File:Euclid's_algorithm_structured_blocks_1.png  License: Creative Commons Attribution
3.0  Contributors: Wvbailey
File:Sorting quicksort anim.gif  Source: https://en.wikipedia.org/w/index.php?title=File:Sorting_quicksort_anim.gif  License: Creative Commons Attribution-ShareAlike 3.0 Unported
 Contributors: Wikipedia:en:User:RolandH
File:Euclid's algorithm Book VII Proposition 2 2.png  Source: https://en.wikipedia.org/w/index.php?title=File:Euclid's_algorithm_Book_VII_Proposition_2_2.png  License: Creative
Commons Attribution 3.0  Contributors: Wvbailey
File:Euclids-algorithm-example-1599-650.gif  Source: https://en.wikipedia.org/w/index.php?title=File:Euclids-algorithm-example-1599-650.gif  License: Creative Commons
Attribution-Sharealike 3.0  Contributors: Swfung8
File:Euclid's algorithm Inelegant program 1.png  Source: https://en.wikipedia.org/w/index.php?title=File:Euclid's_algorithm_Inelegant_program_1.png  License: Creative Commons
Attribution 3.0  Contributors: Wvbailey
File:Alan Turing.jpg  Source: https://en.wikipedia.org/w/index.php?title=File:Alan_Turing.jpg  License: Creative Commons Attribution 2.0  Contributors: Jon Callas from San Jose, USA
Image:Hash table 3 1 1 0 1 0 0 SP.svg  Source: https://en.wikipedia.org/w/index.php?title=File:Hash_table_3_1_1_0_1_0_0_SP.svg  License: Creative Commons Attribution-Sharealike 3.0
 Contributors: Jorge Stolfi
Image:Singly linked list.png  Source: https://en.wikipedia.org/w/index.php?title=File:Singly_linked_list.png  License: Public Domain  Contributors: User:Dcoetzee
Image:Array of array storage.svg  Source: https://en.wikipedia.org/w/index.php?title=File:Array_of_array_storage.svg  License: Public Domain  Contributors: User:Dcoetzee
Image:Data Queue.svg  Source: https://en.wikipedia.org/w/index.php?title=File:Data_Queue.svg  License: Creative Commons Attribution-Sharealike 3.0  Contributors: Foroa, Hazmat2,
Martynas Patasius, Ocaroline, Vegpuff
Image:Data stack.svg  Source: https://en.wikipedia.org/w/index.php?title=File:Data_stack.svg  License: Public Domain  Contributors: User:Boivie
Image:ProgramCallStack2.png  Source: https://en.wikipedia.org/w/index.php?title=File:ProgramCallStack2.png  License: Public Domain  Contributors: Agateller, DragonLord, LobStoR,
Obradovic Goran
File:Decimaltobinary.pdf  Source: https://en.wikipedia.org/w/index.php?title=File:Decimaltobinary.pdf  License: Creative Commons Attribution-Sharealike 3.0  Contributors: User:Nipunbayas
Image:Tower of Hanoi.jpeg  Source: https://en.wikipedia.org/w/index.php?title=File:Tower_of_Hanoi.jpeg  License: GNU Free Documentation License  Contributors: Ies, Ævar Arnfjörð
Bjarmason
File:Towerofhanoi.jpg  Source: https://en.wikipedia.org/w/index.php?title=File:Towerofhanoi.jpg  License: Creative Commons Attribution-Sharealike 3.0  Contributors: User:PranavAmbhore
file:Towersofhanoi1.pdf  Source: https://en.wikipedia.org/w/index.php?title=File:Towersofhanoi1.pdf  License: Creative Commons Attribution-Sharealike 3.0  Contributors: User:Nipunbayas
file:Towersofhanoi2.pdf  Source: https://en.wikipedia.org/w/index.php?title=File:Towersofhanoi2.pdf  License: Creative Commons Attribution-Sharealike 3.0  Contributors: User:Nipunbayas
file:Towersofhanoi3.pdf  Source: https://en.wikipedia.org/w/index.php?title=File:Towersofhanoi3.pdf  License: Creative Commons Attribution-Sharealike 3.0  Contributors: User:Nipunbayas
file:Towersofhanoi4.pdf  Source: https://en.wikipedia.org/w/index.php?title=File:Towersofhanoi4.pdf  License: Creative Commons Attribution-Sharealike 3.0  Contributors: User:Nipunbayas
File:Railroadcars2.png  Source: https://en.wikipedia.org/w/index.php?title=File:Railroadcars2.png  License: Creative Commons Attribution-Sharealike 3.0  Contributors: User:PranavAmbhore
File:Quicksort1.pdf  Source: https://en.wikipedia.org/w/index.php?title=File:Quicksort1.pdf  License: Creative Commons Attribution-Sharealike 3.0  Contributors: User:Nipunbayas
File:Quicksort2.pdf  Source: https://en.wikipedia.org/w/index.php?title=File:Quicksort2.pdf  License: Creative Commons Attribution-Sharealike 3.0  Contributors: User:Nipunbayas
File:Quicksort3.pdf  Source: https://en.wikipedia.org/w/index.php?title=File:Quicksort3.pdf  License: Creative Commons Attribution-Sharealike 3.0  Contributors: User:Nipunbayas
File:Quicksort4.pdf  Source: https://en.wikipedia.org/w/index.php?title=File:Quicksort4.pdf  License: Creative Commons Attribution-Sharealike 3.0  Contributors: User:Nipunbayas
File:Quicksort5.pdf  Source: https://en.wikipedia.org/w/index.php?title=File:Quicksort5.pdf  License: Creative Commons Attribution-Sharealike 3.0  Contributors: User:Nipunbayas
File:Quicksort6.pdf  Source: https://en.wikipedia.org/w/index.php?title=File:Quicksort6.pdf  License: Creative Commons Attribution-Sharealike 3.0  Contributors: User:Nipunbayas
File:Quicksort7.pdf  Source: https://en.wikipedia.org/w/index.php?title=File:Quicksort7.pdf  License: Creative Commons Attribution-Sharealike 3.0  Contributors: User:Nipunbayas
File:Stockspan.pdf  Source: https://en.wikipedia.org/w/index.php?title=File:Stockspan.pdf  License: Creative Commons Attribution-Sharealike 3.0  Contributors: User:Nipunbayas
File:Object-Oriented-Programming-Methods-And-Classes-with-Inheritance.png  Source:
https://en.wikipedia.org/w/index.php?title=File:Object-Oriented-Programming-Methods-And-Classes-with-Inheritance.png  License: Creative Commons Attribution-Sharealike 3.0  Contributors:
Carrot Lord
File:USB flash drive.JPG  Source: https://en.wikipedia.org/w/index.php?title=File:USB_flash_drive.JPG  License: GNU Free Documentation License  Contributors: User:Nrbelex
File:Dg-nova3.jpg  Source: https://en.wikipedia.org/w/index.php?title=File:Dg-nova3.jpg  License: Copyrighted free use  Contributors: User Qu1j0t3 on en.wikipedia

License
Creative Commons Attribution-Share Alike 3.0
//creativecommons.org/licenses/by-sa/3.0/
