
Embedded system

From Wikipedia, the free encyclopedia

Picture of the internals of an ADSL modem/router, a modern example of an embedded system. Labelled parts
include a microprocessor (4), RAM (6), and flash memory (7).

An embedded system is a computer system with a dedicated function within a larger mechanical or
electrical system, often with real-time computing constraints.[1][2] It is embedded as part of a complete
device often including hardware and mechanical parts. Embedded systems control many devices in
common use today.[3] Ninety-eight percent of all microprocessors are manufactured as components
of embedded systems.[4]
Examples of properties of typical embedded computers when compared with general-purpose
counterparts are low power consumption, small size, rugged operating ranges, and low per-unit cost.
This comes at the price of limited processing resources, which make them significantly more difficult
to program and to interact with. However, by building intelligence mechanisms on top of the
hardware, taking advantage of possible existing sensors and the existence of a network of
embedded units, one can both optimally manage available resources at the unit and network levels
as well as provide augmented functions, well beyond those available.[5] For example, intelligent
techniques can be designed to manage power consumption of embedded systems.[6]
Modern embedded systems are often based on microcontrollers (i.e. CPUs with integrated memory
or peripheral interfaces),[7] but ordinary microprocessors (using external chips for memory and
peripheral interface circuits) are also common, especially in more-complex systems. In either case,
the processor(s) used may be types ranging from general purpose to those specialised in a certain
class of computations, or even custom designed for the application at hand. A common standard
class of dedicated processors is the digital signal processor (DSP).
Since the embedded system is dedicated to specific tasks, design engineers can optimize it to
reduce the size and cost of the product and increase the reliability and performance. Some
embedded systems are mass-produced, benefiting from economies of scale.
Embedded systems range from portable devices such as digital watches and MP3 players, to large
stationary installations like traffic lights and factory controllers, and to complex systems like hybrid
vehicles, MRI machines, and avionics. Complexity varies from low, with a single microcontroller chip, to very
high with multiple units, peripherals and networks mounted inside a large chassis or enclosure.




3.1 User interface

3.2 Processors in embedded systems

3.2.1 Ready made computer boards

3.2.2 ASIC and FPGA solutions






4.3 High vs. low volume

5 Embedded software architectures

5.1 Simple control loop

5.2 Interrupt-controlled system

5.3 Cooperative multitasking

5.4 Preemptive multitasking or multi-threading

5.5 Microkernels and exokernels

5.6 Monolithic kernels

5.7 Additional software components

6 See also

9 Further reading

10 External links

One of the very first recognizably modern embedded systems was the Apollo Guidance Computer,
developed by Charles Stark Draper at the MIT Instrumentation Laboratory. At the project's inception,
the Apollo guidance computer was considered the riskiest item in the Apollo project as it employed
the then newly developed monolithic integrated circuits to reduce the size and weight. An early
mass-produced embedded system was the Autonetics D-17 guidance computer for the Minuteman
missile, released in 1961. When the Minuteman II went into production in 1966, the D-17 was
replaced with a new computer that was the first high-volume use of integrated circuits.
Since these early applications in the 1960s, embedded systems have come down in price and there
has been a dramatic rise in processing power and functionality. An early microprocessor, for
example the Intel 4004, was designed for calculators and other small systems but still required
external memory and support chips. In 1978 the National Electrical Manufacturers Association
released a "standard" for programmable microcontrollers, including almost any computer-based
controllers, such as single board computers, numerical, and event-based controllers.
As the cost of microprocessors and microcontrollers fell, it became feasible to replace expensive
knob-based analog components such as potentiometers and variable capacitors with up/down
buttons or knobs read out by a microprocessor, even in consumer products. By the early 1980s,
memory, input and output system components had been integrated into the same chip as the
processor forming a microcontroller. Microcontrollers find applications where a general-purpose
computer would be too costly.
A comparatively low-cost microcontroller may be programmed to fulfill the same role as a large
number of separate components. Although in this context an embedded system is usually more
complex than a traditional solution, most of the complexity is contained within the microcontroller
itself. Very few additional components may be needed and most of the design effort is in the
software. Software prototyping and testing can be quicker compared with the design and construction of
a new circuit not using an embedded processor.

Kurukshetra War
From Wikipedia, the free encyclopedia

A c. 1700 watercolour from Mewar depicts the Pandava and Kaurava armies arrayed against each other.

Date: unknown, but the war lasted 18 days
Location: Kurukshetra, modern-day Haryana, India
Result: Victory for the Pandavas and their allies; fall of the Kauravas
Territorial and political changes:
Dhritarashtra abdicated the throne of Hastinapura and Yudhishthira succeeded him.
Yuyutsu was appointed as Yudhishthira's subordinate king in Indraprastha.
Various successions took place due to the deaths of many kings and rulers in the
war: Anga, Chedi, Gandhar, Kalinga, Kosala, Madra, Magadh, Matsya, Panchal, Sindhu, Virata.
The center of power in the Gangetic basin shifted from the Kurus to the Panchalas.
Reunification of the Kuru states of Hastinapura and Indraprastha under the Pandavas.
Return of Panchala lands held by Drona to the original Panchala state.
Truce and status quo ante bellum in

Belligerents:
Territory-less Pandavas of the Kurus, with the support of the mighty Panchala tribe and their allies
Kauravas (Kuru tribe), with capital at Hastinapura, and their allies

Commanders and leaders:
Pandavas: Dhrishtadyumna (days 1–18)
Kauravas: Bhishma (days 1–10), Drona (days 11–15), Karna (days 16–17), Shalya (day 18), Ashwatthama (night raid)

Strength:
Pandavas: 7 Akshauhinis: 153,090 chariots and chariot-riders, 153,090 elephants and elephant-riders, 459,270 horses and horse-riders, 765,450 infantry (total 1,530,900 soldiers)
Kauravas: 11 Akshauhinis: 240,570 chariots and chariot-riders, 240,570 elephants and elephant-riders, 721,710 horses and horse-riders, 1,202,850 infantry (total 2,405,700 soldiers)

Casualties and losses:
Pandavas: almost total (1,530,900); only 8 known survivors, including the five Pandavas
Kauravas: almost total (2,405,700); only 4 known survivors, including Sage Kripa and Vrishakethu (son of

The Kurukshetra War, also called the Mahabharata War, is a war described in the Indian
epic Mahabharata. The conflict arose from a dynastic succession struggle between two groups of
cousins, the Kauravas and Pandavas, for the throne of Hastinapura in an Indian kingdom
called Kuru. It involved a number of ancient kingdoms participating as allies of the rival groups.
The battle is described as having occurred in Kurukshetra, in the modern state
of Haryana. Despite referring only to these eighteen days, the war narrative forms more than a
quarter of the book, suggesting its relative importance within the epic, which overall spans decades
of the warring families. The narrative describes individual battles and deaths of various heroes of
both sides, military formations, war diplomacy, meetings and discussions among the characters, and
the weapons used. The chapters (parvas) dealing with the war (from chapter six to ten) are
considered amongst the oldest in the entire Mahabharata.
The historicity of the war remains subject to scholarly discussions.[1] Attempts have been made to
assign a historical date to the Kurukshetra War. Popular tradition holds that the war marks the
transition to Kaliyuga and thus dates it to 3102 BCE.[citation needed]


2 Historicity and dating

4 Mahabharata account of the war

4.2 Krishna's Peace Mission

4.3 War Preparations

4.3.1 Pandava Army

4.3.2 Kaurava Army

4.3.3 Neutral parties

4.3.4 Army Divisions & Weaponry

4.3.5 Military Formations

4.3.6 Rules of Engagement

4.4 Course of war

4.4.1 Before the Battle

4.4.2 The Bhagavad Gita

4.4.3 Day 1

4.4.4 Day 2

4.4.5 Day 3

4.4.6 Day 4

4.4.7 Days 5–9

4.4.8 Day 10

4.4.9 Day 11

4.4.10 Day 12

4.4.11 Day 13

4.4.12 Day 14

4.4.13 Day 15

4.4.14 Day 16

4.4.15 Day 17

4.4.16 Day 18

8 External links

Main article: Mahābhārata

India during the time of Mahabharata

Mahabharata, one of the most important Hindu epics, is an account of the life and deeds of several
generations of a ruling dynasty called the Kuru clan. Central to the epic is an account of a war that
took place between two rival families belonging to this clan. Kurukshetra (literally "field of the
Kurus"), was the battleground on which this war, known as the Kurukshetra War, was fought.
Kurukshetra was also known as "Dharmakshetra" (the "field of Dharma"), or field of righteousness.
Mahabharata tells that this site was chosen for the war because a sin committed on this land was
forgiven on account of the sanctity of this land.[citation needed]
The Kuru territories were divided into two and were ruled by Dhritarashtra (with his capital
at Hastinapura) and Yudhishthira of the Pandavas (with his capital at Indraprastha). The immediate
dispute between the Kauravas (sons of Dhritarashtra) and the Pandavas arose from a game of dice,
which Duryodhana won by deceit, forcing their Pandava cousins to transfer their entire territories to
the Kauravas (to Hastinapura) and to "go into exile" for thirteen years. The dispute escalated into a
full-scale war when Duryodhana, driven by jealousy, refused to restore the Pandavas' territories
after the exile as earlier agreed, objecting that they had been discovered while in exile
and that no return of their kingdom had been agreed upon.

Historicity and dating[edit]

The position of the Kuru and Panchala kingdoms in Iron Age Vedic India

Geography of the Rigveda, with river names; the extent of the Swat and Cemetery H cultures are indicated

Swaraj Prakash Gupta and K.S. Ramachandran state that the

divergence of views regarding the Mahabharata war is due to the absence of reliable history of the
ancient period. This is also true of the historical period, where also there is no unanimity of opinion
on innumerable issues. Dr Mirashi accepts that there has been interpolation in the Mahabharata and
observes that, 'Originally it (Mahabharata) was a small poem of 8,800 verses and was known by the
name Jaya (victory)'.

From Wikipedia, the free encyclopedia


The band (with a fan in front in the green shirt)

Background information

Origin: Tijuana, Baja California, Mexico

Genres: Pop rock

Years active

Labels: Sony BMG

Members

Mo Pérez
Leo Ramírez
Damián Dávila

Past members

Nacho (2001–2004)
Alex Ortega (2004–2012)
Jesse Huizar (2012–2015)

Delux is a pop rock band formed in Tijuana, Mexico, in the summer of 2000. The group is influenced
by 1980s new wave music and the SoCal punk scene. Their 2005 debut self-titled record spawned
three music videos which included an MTV VMA Nomination for Best New Artist. In 2007 they
released Entre La Guerra y El Amor under a major label (Sony BMG), which allowed them to reach a

more diverse and larger audience. In 2010, Delux released a bilingual album, which propelled them
to be named one of the most influential Hispanic rock bands of the Internet era. They appeared
on Alternative Press's 100 bands to watch in 2007.

1 Band members



4 External links

Band members[edit]

Mauricio Pérez – Guitar, vocals

Leonardo Ramírez – Guitar, vocals

Damián Dávila – Drums


Delux (2004)

Entre la Guerra y el Amor (2007)

$ (2010)


"Más de lo que te Imaginas"

"Chat Noir" (video featuring live concert clips)

"Apague el Sol"

"Entre la Guerra y el Amor"


"Quien Comparte tu Silencio"

"Get the Money" (animated video)

"Hey Lover"

"Infatuacion" (filmed live at the House of Blues, Sunset Strip, U.S.A.)

From Wikipedia, the free encyclopedia

For other uses, see Rapture (disambiguation).


Jan Luyken's three-part illustration of the rapture described in Matthew 24:40 ("one in the bed", "one at the mill", "one on the field"), from the 1795 Bowyer Bible.


In Christian eschatology the rapture refers to the belief that either before, or simultaneously with,
the Second Coming of Jesus Christ to Earth, believers who have died will be raised and believers
who are still alive and remain shall be caught up together with them (the resurrected dead believers)
in the clouds to meet the Lord in the air.[1][2] The concept has its basis in various interpretations of
the biblical book of First Thessalonians and how it relates to interpretations of various other biblical
passages, such as those from Second Thessalonians, Gospel of Matthew, First Corinthians and
the Book of Revelation.[3]
The exact meaning, timing and impact of the event are disputed among Christians,[4] and the term is
used in at least two senses. In the pre-tribulation view, a group of people will be left behind on earth
after another group literally leaves "to meet the Lord in the air." This is now the most common use of
the term, especially among fundamentalist Christians in the United States.[5] The other, older use of
the term "Rapture" is simply as a synonym for mystical union with God, or our final sharing in God's
heavenly life generally, without a belief that a group of people is left behind on earth for an
extended Tribulation period after the events of 1 Thessalonians 4:17.[6]
With respect to the rapture, Catholics believe that this event of their gathering with Christ in Heaven
will take place, though they do not generally use the word "rapture" to refer to it; they hold that it
will happen at the second coming of Christ.[7]
There are many views among Christians regarding the timing of Christ's return (including whether it
will occur in one event or two), and various views regarding the destination of the aerial gathering
described in 1 Thessalonians 4. Denominations such as Roman Catholics,[8] Orthodox Christians,
Lutherans, and Reformed Christians[10] believe in a rapture only in the sense of a gathering with
Christ in Heaven after a general final resurrection, when Christ returns in his Second Coming. They
do not believe that a group of people is left behind on earth for an extended Tribulation period after
the events of 1 Thessalonians 4:17.[11]
Many authors maintain that the pre-tribulation Rapture doctrine originated in the eighteenth century,
with the Puritan preachers Increase and Cotton Mather, and was then popularized in the 1830s
by John Darby.[12][13] Others, including Grant Jeffrey, maintain that an earlier document called
Ephraem or Pseudo-Ephraem already supported a pre-tribulation rapture.[14]
Pre-tribulation rapture theology was popularized extensively in the 1830s by John Nelson Darby and
the Plymouth Brethren,[15]and further popularized in the United States in the early 20th century by the
wide circulation of the Scofield Reference Bible.[16]




2 English Bible translations

3 Doctrinal history

4.1 One event or two

5.2 Pre-tribulational Premillennialism

5.3 Mid-tribulational Premillennialism

5.4 Prewrath Premillennialism

5.5 Partial Premillennialism

5.6 Post-tribulational Premillennialism

5.7 Post-Millennialism and Amillennialism

6.1 Failed predictions

8 See also

10 External links

"Rapture" is derived from Middle French rapture, via the Medieval
Latin raptura ("seizure,kidnapping"), which derives from the Latin raptus ("a carrying off").[17]

The Koine Greek of 1 Thessalonians 4:17 uses the verb form ἁρπαγησόμεθα (harpagēsometha),
which means "we shall be caught up" or "taken away", with the connotation that this is a sudden
event. The dictionary form of this Greek verb is harpazō (ἁρπάζω).[18] This use is also seen in such
texts as Acts 8:39, 2 Corinthians 12:2-4 and Revelation 12:5.

The Latin Vulgate translates the Greek as rapiemur,[19] from the verb rapio meaning
"to catch up" or "take away".[20]

English Bible translations[edit]

English versions of the Bible have expressed the concept of rapiemur in various ways:

The Wycliffe Bible (1395), translated from the Latin Vulgate, uses "rushed".[21]

The Tyndale New Testament (1525), the Bishop's Bible (1568), the Geneva Bible (1587) and
the King James Version (1611) use "caught up".[22]

The on-line NET Bible (1995-2005) translates the Greek of 1 Thessalonians 4:17[23] using the
phrase "suddenly caught up", with this footnote: "Or 'snatched up.' The Greek verb implies
that the action is quick or forceful, so the translation supplied the adverb 'suddenly' to make this implicit
notion clear."

Doctrinal history[edit]
In 1590, Francisco Ribera, a Catholic Jesuit, taught "futurism", the idea that most of Revelation concerns
the future and therefore is not about the Catholic Church. He also taught that the rapture would
happen 45 days before the end of a 3.5-year tribulation.
The concept of the rapture, in connection with premillennialism, was expressed by the 17th-century
American Puritans Increase and Cotton Mather. They held to the idea that believers would
be caught up in the air, followed by judgments on earth, and then the millennium.[24][25] Other
17th-century expressions of the rapture are found in the works of: Robert Maton, Nathaniel Homes, John
Browne, Thomas Vincent, Henry Danvers, and William Sherwin.[26] The term rapture was used by
Philip Doddridge[27] and John Gill[28] in their New Testament commentaries, with the idea that believers
would be caught up prior to judgment on earth and Jesus' second coming.
There exists at least one 18th-century and two 19th-century pre-tribulation references: in an essay
published in 1788 in Philadelphia by the Baptist Morgan Edwards which articulated the concept of a
pre-tribulation rapture,[29] in the writings of Catholic priest Manuel Lacunza in 1812,[30] and by John
Nelson Darby in 1827.[31] Manuel Lacunza (1731–1801), a Jesuit priest (under the pseudonym Juan
Josafat Ben Ezra), wrote an apocalyptic work entitled La venida del Mesías en gloria y
majestad (The Coming of the Messiah in Glory and Majesty). The book appeared first in 1811, 10
years after his death. In 1827, it was translated into English by the Scottish minister Edward Irving.
[citation needed]

Dr. Samuel Prideaux Tregelles (1813-1875), a prominent English theologian and biblical scholar,
wrote a pamphlet in 1866 tracing the concept of the rapture through the works of John Darby back
to Edward Irving.[32]

From Wikipedia, the free encyclopedia

This article is about blood clotting. For other uses, see Coagulation (disambiguation).
Coagulation (also known as clotting) is the process by which blood changes from a liquid to a gel,
forming a blood clot. It potentially results in hemostasis, the cessation of blood loss from a damaged
vessel, followed by repair. The mechanism of coagulation involves activation, adhesion, and
aggregation of platelets along with deposition and maturation of fibrin. Disorders of coagulation are
disease states which can result in bleeding (hemorrhage or bruising) or obstructive clotting
(thrombosis).[1]

Coagulation is highly conserved throughout biology; in all mammals, coagulation involves both a
cellular (platelet) and a protein (coagulation factor) component.[2] The system in humans has been
the most extensively researched and is the best understood. [3]
Coagulation begins almost instantly after an injury to the blood vessel has damaged
the endothelium lining the vessel. Leaking of blood through the endothelium initiates two processes:
changes in platelets, and the exposure of subendothelial tissue factor to plasma Factor VII, which
ultimately leads to fibrin formation. Platelets immediately form a plug at the site of injury; this is
called primary hemostasis. Secondary hemostasis occurs simultaneously: Additional coagulation
factors or clotting factors beyond Factor VII (listed below) respond in a complex cascade to
form fibrin strands, which strengthen the platelet plug.[4]

Blood coagulation pathways in vivo showing the central role played by thrombin



1.1 Platelet activation

1.2 The coagulation cascade

1.2.1 Tissue factor pathway (extrinsic)

1.2.2 Contact activation pathway (intrinsic)

1.2.3 Final common pathway

1.6 Role in immune system

3 Role in disease

3.1 Platelet disorders

3.2 Disease and clinical significance of thrombosis

5 Coagulation factors

6.1 Initial discoveries

6.2 Coagulation factors

7 Other species

9 Further reading

10 External links

10.1 3D structures

Platelet activation[edit]
When the endothelium is damaged, the normally isolated, underlying collagen is exposed to
circulating platelets, which bind directly to collagen with collagen-specific glycoprotein Ia/IIa surface
receptors. This adhesion is strengthened further by von Willebrand factor (vWF), which is released
from the endothelium and from platelets; vWF forms additional links between the
platelets' glycoprotein Ib/IX/V and the collagen fibrils. This localization of platelets to the extracellular
matrix promotes collagen interaction with platelet glycoprotein VI. Binding of collagen to glycoprotein

VI triggers a signaling cascade that results in activation of platelet integrins. Activated integrins
mediate tight binding of platelets to the extracellular matrix. This process adheres platelets to the site
of injury.[5]
Activated platelets will release the contents of stored granules into the blood plasma. The granules
include ADP, serotonin, platelet-activating factor (PAF), vWF, platelet factor 4, and thromboxane
A2 (TXA2), which, in turn, activate additional platelets. The granules' contents activate a Gq-linked
protein receptor cascade, resulting in increased calcium concentration in the platelets' cytosol. The
calcium activates protein kinase C, which, in turn, activates phospholipase A2 (PLA2). PLA2 then
modifies the integrin membrane glycoprotein IIb/IIIa, increasing its affinity to bind fibrinogen. The
activated platelets change shape from spherical to stellate, and the fibrinogen cross-links
with glycoprotein IIb/IIIa aid in aggregation of adjacent platelets (completing primary hemostasis).[6]

The coagulation cascade[edit]

The classical blood coagulation pathway[7]

Modern coagulation pathway. Hand-drawn composite from similar drawings presented by Professor Dzung Le,
MD, PhD, at UCSD Clinical Chemistry conferences on 14 and 21 October 2014. Original schema from
Introduction to Hematology by Samuel I. Rapaport, 2nd edition; Lippincott: 1987. Dr Le added the factor XI
portion based on a paper from about the year 2000. Dr Le's similar drawings presented the development of this
cascade over 6 frames, like a comic.

The coagulation cascade of secondary hemostasis has two initial pathways which lead
to fibrin formation. These are the contact activation pathway (also known as the intrinsic pathway),
and the tissue factor pathway (also known as the extrinsic pathway) which both lead to the same

fundamental reactions that produce fibrin. It was previously thought that the two pathways of
coagulation cascade were of equal importance, but it is now known that the primary pathway for the
initiation of blood coagulation is the tissue factor (extrinsic) pathway. The pathways are a series of
reactions, in which a zymogen (inactive enzyme precursor) of a serine protease and
its glycoprotein co-factor are activated to become active components that then catalyze the next
reaction in the cascade, ultimately resulting in cross-linked fibrin. Coagulation factors are generally
indicated by Roman numerals, with a lowercase a appended to indicate an active form.[7]
The coagulation factors are generally serine proteases (enzymes), which act by cleaving
downstream proteins. There are some exceptions. For example, FVIII and FV are glycoproteins, and
Factor XIII is a transglutaminase.[7] The coagulation factors circulate as inactive zymogens. The
coagulation cascade is therefore classically divided into three pathways. The tissue
factor and contact activation pathways both activate the "final common pathway" of factor X,
thrombin and fibrin.[8]
Tissue factor pathway (extrinsic)[edit]

Solar irradiance
From Wikipedia, the free encyclopedia

"Insolation" redirects here. It is not to be confused with Thermal insulation.

Solar irradiance is the power per unit area received from the Sun in the form of electromagnetic radiation in
the wavelength range of the measuring instrument. Irradiance may be measured in space or at the Earth's
surface after atmospheric absorption and scattering. It is measured perpendicular to the incoming sunlight.
Total solar irradiance (TSI) is a measure of the solar power over all wavelengths per unit area incident on
the Earth's upper atmosphere. The solar constant is a conventional measure of mean TSI at a distance of
one astronomical unit (AU). Irradiance is a function of distance from the Sun, the solar cycle, and cross-cycle
changes.[2] Irradiance on Earth is also measured perpendicular to the incoming sunlight. Insolation is the power
received on Earth per unit area on a horizontal surface.[3] It depends on the height of the Sun above the horizon.

Annual mean insolation at the top of Earth's atmosphere (TOA) and at the planet's surface




3 Absorption and reflection

4 Projection effect

6.1 Solar potential maps

6.2 Top of the atmosphere

7.1 Total irradiance

7.2 Ultraviolet irradiance

7.3 Milankovitch cycles

8.2 Intertemporal calibration

8.3 Persistent inconsistencies

8.4 TSI Radiometer Facility

8.5 2011 reassessment

8.6 2014 reassessment

9.2 Solar power

9.3 Climate research

9.4 Space travel

9.5 Civil engineering

10 See also

11.1 General references

12 External links

The solar irradiance integrated over time is called solar irradiation, solar exposure or insolation. However,
insolation is often used interchangeably with irradiance in practice.

The SI unit of irradiance is the watt per square metre (W/m2).
An alternative unit of measure is the langley (1 thermochemical calorie per square centimetre, or 41,840 J/m2)
per unit time.
The solar energy industry uses watt-hours per square metre (Wh/m2), i.e. energy divided by the recording
time; 1 kW/m2 sustained over a day equals 24 kWh/(m2 day).
Irradiance can also be expressed in Suns, where one Sun equals 1000 W/m2 at the point of arrival.[4]
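These unit relationships can be sketched in a few lines; the constant and function names below are illustrative, not from any standard library:

```python
# Illustrative unit conversions for solar irradiance, using the factors quoted above.

J_PER_THERMOCHEM_CAL = 4.184                      # joules per thermochemical calorie
LANGLEY_J_PER_M2 = J_PER_THERMOCHEM_CAL * 10_000  # 1 cal/cm^2 expressed in J/m^2

def kw_m2_to_kwh_m2_day(power_kw_m2):
    """Constant power in kW/m^2 sustained for 24 h, as daily energy in kWh/m^2."""
    return power_kw_m2 * 24.0

def irradiance_in_suns(watts_per_m2):
    """Express an irradiance as a multiple of one 'Sun' (1000 W/m^2)."""
    return watts_per_m2 / 1000.0

print(LANGLEY_J_PER_M2)            # ~41840 J/m^2, matching the langley definition
print(kw_m2_to_kwh_m2_day(1.0))    # 24.0 kWh/(m^2 day)
print(irradiance_in_suns(1367.0))  # ~1.37 Suns above the atmosphere
```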

Absorption and reflection[edit]

Part of the radiation reaching an object is absorbed and the remainder reflected. Usually the absorbed radiation
is converted to thermal energy, increasing the object's temperature. Manmade or natural systems, however, can
convert part of the absorbed radiation into another form such as electricity or chemical bonds, as in the case
of photovoltaic cells or plants. The proportion of reflected radiation is the object's reflectivity or albedo.
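The absorbed/reflected split described above can be sketched as follows; the function name and the albedo value of 0.8 (roughly that of fresh snow) are illustrative assumptions:

```python
def split_radiation(incident_w_m2, albedo):
    """Split incident radiation into (absorbed, reflected) in W/m^2.
    The albedo is the fraction of incident radiation the object reflects."""
    reflected = incident_w_m2 * albedo
    absorbed = incident_w_m2 - reflected
    return absorbed, reflected

# A highly reflective surface absorbs only a small share of the incident power.
absorbed, reflected = split_radiation(1000.0, 0.8)
print(absorbed, reflected)  # 200.0 800.0
```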

Projection effect[edit]

One sunbeam one mile wide shines on the ground at a 90° angle, and another at a 30° angle. The oblique sunbeam
distributes its light energy over twice as much area.

Insolation onto a surface is largest when the surface directly faces (is normal to) the Sun. As the angle between
the surface and the Sun moves from normal, the insolation is reduced in proportion to the cosine of the angle;
see Effect of sun angle on climate.
In the figure, the angle shown is between the ground and the sunbeam rather than between the vertical
direction and the sunbeam; hence the sine rather than the cosine is appropriate. A sunbeam one mile (1.6 km)
wide arrives from directly overhead, and another at a 30° angle to the horizontal. The sine of a 30° angle
is 1/2, whereas the sine of a 90° angle is 1. Therefore, the angled sunbeam spreads the light over twice the area.
Consequently, half as much light falls on each square mile.
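The sine relationship in the figure can be checked with a short sketch (the function name is illustrative):

```python
import math

def relative_insolation(elevation_deg):
    """Insolation on a horizontal surface relative to an overhead sun.
    elevation_deg is the angle between the ground and the sunbeam, so the
    reduction factor is sin(elevation), as in the figure above."""
    return math.sin(math.radians(elevation_deg))

print(relative_insolation(90.0))  # 1.0: sun directly overhead
print(relative_insolation(30.0))  # ~0.5: light spread over twice the area
```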
This 'projection effect' is the main reason why Earth's polar regions are much colder than equatorial regions.
On an annual average the poles receive less insolation than does the equator, because the poles are always
angled more away from the sun than the tropics. At a lower angle the light must travel through more
atmosphere. This attenuates it (by absorption and scattering) further reducing insolation at the surface.


Solar potential global horizontal irradiation

Direct insolation is measured at a given location with a surface element perpendicular to the Sun. It excludes
diffuse insolation (radiation that is scattered or reflected by atmospheric components). Direct insolation is
equal to the irradiance above the atmosphere minus the atmospheric losses due to absorption and scattering.
While the irradiance above the atmosphere varies with time of year (because the distance to the sun varies),
losses depend on time of day (length of light's path through the atmosphere depending on the solar elevation
angle), cloud cover, moisture content and other contents. (See values for direct and total insolation further
down.) Insolation affects plant metabolism and animal behavior.[5]
Diffuse insolation is the contribution of light scattered by the atmosphere to total insolation.


A pyranometer, a component of a temporary remote meteorological station, measures insolation on Skagit
Bay, Washington.

Average annual solar radiation arriving at the top of the Earth's atmosphere is roughly 1366 W/m2.[6][7] The
radiation is distributed across the electromagnetic spectrum. About half is infrared light.[8] The Sun's rays
are attenuated as they pass through the atmosphere, leaving a maximum normal surface irradiance of
approximately 1000 W/m2 at sea level on a clear day. When 1367 W/m2 is arriving above the atmosphere (as
when the Earth is one astronomical unit from the Sun), direct sun is about 1050 W/m2, and global radiation on a
horizontal surface at ground level is about 1120 W/m2.[9] The latter figure includes radiation scattered or
reemitted by the atmosphere and surroundings. The actual figure varies with the Sun's angle and atmospheric
circumstances. Ignoring clouds, the daily average insolation for the Earth is approximately 6 kWh/m2 = 21.6 MJ/m2.
The output of, for example, a photovoltaic panel, partly depends on the angle of the sun relative to the panel.
One Sun is a unit of power flux, not a standard value for actual insolation. Sometimes this unit is referred to as
a Sol, not to be confused with a sol, meaning one solar day.[10]
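A minimal sketch tying together the approximate figures quoted above (the variable and function names are illustrative):

```python
TOA_W_M2 = 1367.0     # arriving above the atmosphere (figure quoted above)
DIRECT_W_M2 = 1050.0  # direct sun at ground level on a clear day
GLOBAL_W_M2 = 1120.0  # global radiation on a horizontal surface

def kwh_to_mj(kwh):
    """1 kWh = 3.6 MJ (1000 W sustained for 3600 s)."""
    return kwh * 3.6

print(round(DIRECT_W_M2 / TOA_W_M2, 2))  # 0.77: fraction surviving as direct beam
print(round(kwh_to_mj(6.0), 1))          # 21.6: daily average in MJ/m^2
```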

Solar potential maps[edit]

North America

South America


From Wikipedia, the free encyclopedia

Not to be confused with Hooking up (disambiguation).

For other uses, see Hooking (disambiguation).
In computer programming, the term hooking covers a range of techniques used to alter or augment
the behavior of an operating system, of applications, or of other software components by
intercepting function calls or messages or events passed between software components. Code that
handles such intercepted function calls, events or messages is called a "hook".
Hooking is used for many purposes, including debugging and extending functionality. Examples
might include intercepting keyboard or mouse event messages before they reach an application, or
intercepting operating system calls in order to monitor behavior or modify the function of an
application or other component. It is also widely used in benchmarking programs, for example frame
rate measuring in 3D games, where the output and input are done through hooking.
Hooking can also be used by malicious code. For example, rootkits, pieces of software that try to
make themselves invisible by faking the output of API calls that would otherwise reveal their
existence, often use hooking techniques. A wallhack is another example of malicious behavior that
can stem from hooking techniques. It is done by intercepting function calls in a computer game and
altering what is shown to the player to allow them to gain an unfair advantage over other players.



Typically hooks are inserted while software is already running, but hooking is a tactic that can also
be employed prior to the application being started. Both these techniques are described in greater
detail below.

Physical modification

Hooking can also be achieved by physically modifying an executable or library before an application
runs, using reverse engineering techniques. This is typically used to intercept function calls
to either monitor or replace them entirely.
For example, by using a disassembler, the entry point of a function within a module can be found. It
can then be altered to instead dynamically load some other library module and then have it execute
desired methods within that loaded library. If applicable, another related approach by which hooking
can be achieved is by altering the import table of an executable. This table can be modified to load
any additional library modules as well as changing what external code is invoked when a function is
called by the application.
An alternate method for achieving function hooking is by intercepting function calls through
a wrapper library. When creating a wrapper, you make your own version of a library that an
application loads, with all the same functionality of the original library that it will replace. That is, all
the functions that are accessible are essentially the same between the original and the replacement.
This wrapper library can be designed to call any of the functionality from the original library, or
replace it with an entirely new set of logic.

Runtime modification
Operating systems and software may provide the means to easily insert event hooks at runtime,
provided that the process inserting the hook is granted sufficient permission to do so.
Microsoft Windows, for example, allows you to insert hooks that can be used to process or modify
system events and application events for dialogs, scrollbars, and menus, as well as other items. It
also allows a hook to insert, remove, process or modify keyboard and mouse events. Linux provides
another example, where hooks can be used in a similar manner to process network events within
the kernel through Netfilter.
When such functionality is not provided, a special form of hooking intercepts the library
function calls made by a process. Function hooking is implemented by changing the very first few
code instructions of the target function to jump to injected code. Alternatively, on systems using
the shared library concept, the interrupt vector table or the import descriptor table can be modified in
memory. Essentially, these tactics employ the same ideas as those of physical modification, but
instead alter instructions and structures located in the memory of a process once it is already running.

Sample code
Virtual Method Table hooking
Whenever a class defines a virtual function (or method), most compilers add a hidden member
variable to the class which points to a virtual method table (VMT or Vtable). This VMT is basically an
array of pointers to (virtual) functions. At runtime these pointers will be set to point to the right
function, because at compile time, it is not yet known if the base function is to be called or a derived
one implemented by a class that inherits from the base class. The code below shows an example of
a typical VMT hook in Microsoft Windows.
#include <cstdio>
#include <tchar.h>
#include <windows.h>

class VirtualTable { // example class
public:
	virtual void VirtualFunction01( void );
};

void VirtualTable::VirtualFunction01( void )
{
	printf( "VirtualFunction01 called\n" );
}

typedef void ( __thiscall* VirtualFunction01_t )( VirtualTable* thisptr );
VirtualFunction01_t g_org_VirtualFunction01;

// our detour function; __fastcall with an unused edx parameter emulates
// the __thiscall convention in a free function (32-bit MSVC)
void __fastcall hk_VirtualFunction01( VirtualTable* thisptr, int edx )
{
	printf( "Custom function called\n" );
	// call the original function
	g_org_VirtualFunction01( thisptr );
}

int _tmain( int argc, _TCHAR* argv[] )
{
	DWORD oldProtection;
	VirtualTable* myTable = new VirtualTable();
	void** base = *(void***)myTable;   // first hidden member: pointer to the VMT

	// make the VMT entry writable
	VirtualProtect( &base[0], sizeof( void* ), PAGE_EXECUTE_READWRITE, &oldProtection );
	// save the original function
	g_org_VirtualFunction01 = (VirtualFunction01_t)base[0];
	// overwrite the VMT entry with a pointer to our detour
	base[0] = (void*)&hk_VirtualFunction01;
	VirtualProtect( &base[0], sizeof( void* ), oldProtection, &oldProtection );

	// call the virtual function (now hooked) from our class instance
	myTable->VirtualFunction01();

	delete myTable;
	return 0;
}

Keyboard layout
From Wikipedia, the free encyclopedia

A keyboard layout is any specific mechanical, visual, or functional arrangement of the keys,
legends, or key-meaning associations (respectively) of a computer, typewriter, or
other typographic keyboard.
Mechanical layout
The placements and keys of a keyboard.
Visual layout
The arrangement of the legends (labels, markings, engravings) that appear on the keys of a keyboard.
Functional layout
The arrangement of the key-meaning associations, determined in software, of all the keys of a keyboard.
Most computer keyboards are designed to send scancodes to the operating system,
rather than directly sending characters. From there, the series of scancodes is
converted into a character stream by keyboard layout software. This allows a physical
keyboard to be dynamically mapped to any number of layouts without switching
hardware components merely by changing the software that interprets the keystrokes.
It is usually possible for an advanced user to change keyboard operation, and third-party software is available to modify or extend keyboard functionality.


Key types

A typical computer keyboard comprises sections with different types of keys.

A computer keyboard comprises alphanumeric or character keys for typing, modifier
keys for altering the functions of other keys, navigation keys for moving the text
cursor on the screen, function keys and system command keys such
as Esc and Break for special actions, and often a numeric keypad to facilitate calculations.
There is some variation between different keyboard models in the mechanical layout,
i.e., how many keys there are and how they are positioned on the keyboard. However,
differences between national layouts are mostly due to different selections and
placements of symbols on the character keys.

Character keys
The core section of a keyboard comprises character keys, which can be used to
type letters and other characters. Typically, there are three rows of keys for typing letters
and punctuation, an upper row for typing digits and special symbols, and the Space
bar on the bottom row. The positioning of the character keys is similar to the keyboard of
a typewriter.

Modifier keys

Main article: Modifier key

The MIT "space-cadet keyboard", an early keyboard with a large number of modifier keys. It
was equipped with four keys for bucky bits ("control", "meta", "hyper", and "super") and three
shift keys, called "shift", "top", and "front".

Besides the character keys, a keyboard incorporates special keys that do nothing by
themselves but modify the functions of other keys. For example, the Shift key can be
used to alter the output of character keys, whereas the Ctrl (control) and Alt (alternate)
keys trigger special operations when used in concert with other keys.
Typically, a modifier key is held down while another key is struck. To facilitate this,
modifier keys usually come in pairs, one functionally identical key for each hand, so
holding a modifier key with one hand leaves the other hand free to strike another key.
An alphanumeric key labeled with only a single letter (usually the capital form) can
generally be struck to type either a lower case or capital letter, the latter requiring the
simultaneous holding of the Shift key. The Shift key is also used to type the upper of
two symbols engraved on a given key, the lower being typed without using the modifier key.
The English alphanumeric keyboard has a dedicated key for each of the letters A–Z,
along with keys for punctuation and other symbols. In many other languages there are
additional letters (often with diacritics) or symbols, which also need to be available on
the keyboard. To make room for additional symbols, keyboards often have what is
effectively a secondary shift key, labeled AltGr (which typically takes the place of the
right-hand Alt key). It can be used to type an extra symbol in addition to the two
otherwise available with an alphanumeric key, and using it simultaneously with the
Shift key may even give access to a fourth symbol. On the visual layout, these third-level
and fourth-level symbols may appear on the right half of the key top, or they may be unmarked.
Instead of the Alt and AltGr keys, Apple Keyboards have Cmd (command)
and Option keys. The Option key is used much like the AltGr, and the Cmd key like
the Ctrl on IBM PCs, to access menu options and shortcuts. The main use of
the Ctrl key on Macs is to produce a secondary mouse click, and to provide support for
programs running in X11 (a Unix environment included with OS X as an install option)
or MS Windows. There is also a Fn key on modern Mac keyboards, which is used for
switching between use of the F1, F2, etc. keys either as function keys or for other
functions like media control, accessing dashboard widgets, controlling the volume, or
handling Exposé. The Fn key can also be found on many IBM PC laptops, where it serves a
similar purpose.
The keyboards of many Unix workstations (and also home computers such as the Amiga) placed
the Ctrl key to the left of the letter A, and the Caps Lock key in the bottom left. This
layout is often preferred by programmers as it makes the Ctrl key easier to reach. This
position of the Ctrl key is also used on the XO laptop, which does not have a Caps Lock.
The UNIX keyboard layout also differs in the placement of the ESC key, which is to the
left of 1.
Some early keyboards experimented with using large numbers of modifier keys. The
most extreme example of such a keyboard, the so-called "Space-cadet keyboard" found
on MIT LISP machines, had no fewer than seven modifier keys: four control
keys, Ctrl, Meta, Hyper, and Super, along with three shift keys, Shift, Top, and Front.
This allowed the user to type over 8000 possible characters by playing suitable "chords"
with many modifier keys pressed simultaneously.
Dead keys

Engineering
From Wikipedia, the free encyclopedia

For other uses, see Engineering (disambiguation).

The steam engine, a major driver in the Industrial Revolution, underscores the importance of engineering in
modern history. This beam engine is on display in the Technical University of Madrid.

Engineering is the application of mathematics and scientific, economic, social, and
practical knowledge in order to invent, innovate, design, build, maintain, research, and
improve structures, machines, tools, systems, components, materials, processes and organizations.
The discipline of engineering is extremely broad, and encompasses a range of more
specialized fields of engineering, each with a more specific emphasis on particular areas of applied
science, technology and types of application.
The term engineering is derived from the Latin ingenium, meaning "cleverness", and ingeniare,
meaning "to contrive, devise".




The American Engineers' Council for Professional Development (ECPD, the predecessor of ABET)
has defined "engineering" as:
The creative application of scientific principles to design or develop structures, machines, apparatus,
or manufacturing processes, or works utilizing them singly or in combination; or to construct or
operate the same with full cognizance of their design; or to forecast their behavior under specific
operating conditions; all as respects an intended function, economics of operation or safety to life
and property.[2][3]

History
Main article: History of engineering

Relief map of the Citadel of Lille, designed in 1668 by Vauban, the foremost military engineer of his age.

Engineering has existed since ancient times as humans devised fundamental inventions such as the
wedge, lever, wheel, and pulley. Each of these inventions is essentially consistent with the modern
definition of engineering.
The term engineering is derived from the word engineer, which itself dates back to 1390, when
an engine'er (literally, one who operates an engine) originally referred to "a constructor of military
engines."[4] In this context, now obsolete, an "engine" referred to a military machine, i.e., a
mechanical contraption used in war (for example, a catapult). Notable examples of the obsolete
usage which have survived to the present day are military engineering corps, e.g., the U.S. Army
Corps of Engineers.
The word "engine" itself is of even older origin, ultimately deriving from the Latin ingenium (c. 1250),
meaning "innate quality, especially mental power, hence a clever invention." [5]
Later, as the design of civilian structures such as bridges and buildings matured as a technical
discipline, the term civil engineering[3]entered the lexicon as a way to distinguish between those
specializing in the construction of such non-military projects and those involved in the older discipline
of military engineering.

Ancient era

The Ancient Romans built aqueducts to bring a steady supply of clean fresh water to cities and towns in the
empire.
The Pharos of Alexandria, the pyramids in Egypt, the Hanging Gardens of Babylon,
the Acropolis and the Parthenon in Greece, the Roman aqueducts, Via Appia and
the Colosseum, Teotihuacán and the cities and pyramids of the Mayan, Inca and Aztec Empires,
the Great Wall of China, the Brihadeeswarar Temple of Thanjavur and Indian Temples, among many
others, stand as a testament to the ingenuity and skill of the ancient civil and military engineers.

The earliest civil engineer known by name is Imhotep.[3] As one of the officials of
the Pharaoh, Djoser, he probably designed and supervised the construction of the Pyramid of
Djoser (the Step Pyramid) at Saqqara in Egypt around 2630–2611 BC.[6]
Ancient Greece developed machines in both civilian and military domains. The Antikythera
mechanism, the first known mechanical computer,[7][8] and the
mechanical inventions of Archimedes are examples of early mechanical engineering. Some of
Archimedes' inventions as well as the Antikythera mechanism required sophisticated knowledge
of differential gearing or epicyclic gearing, two key principles in machine theory that helped design
the gear trains of the Industrial Revolution, and are still widely used today in diverse fields such
as robotics and automotive engineering.[9]
Chinese, Greek, Roman and Hungarian armies employed complex military machines and inventions
such as artillery, which was developed by the Greeks around the 4th century BC,[10] the trireme,
the ballista and the catapult. In the Middle Ages, the trebuchet was developed.

Renaissance era
The first steam engine was built in 1698 by Thomas Savery.[11] The development of this device gave
rise to the Industrial Revolution in the coming decades, allowing for the beginnings of mass production.
With the rise of engineering as a profession in the 18th century, the term became more narrowly
applied to fields in which mathematics and science were applied to these ends. Similarly, in addition
to military and civil engineering, the fields then known as the mechanic arts became incorporated into engineering.

Modern era

The International Space Station represents a modern engineering challenge from many disciplines.

The inventions of Thomas Newcomen and the Scottish engineer James Watt gave rise to
modern mechanical engineering. The development of specialized machines and machine
tools during the industrial revolution led to the rapid growth of mechanical engineering both in its
birthplace Britain and abroad.[3]

Structural engineers investigating NASA's Mars-bound spacecraft, the Phoenix Mars Lander

John Smeaton was the first self-proclaimed civil engineer, and is often regarded as the "father"
of civil engineering. He was an English civil engineer responsible for the design
of bridges, canals, harbours and lighthouses. He was also a capable mechanical engineer and an
eminent physicist. Smeaton designed the third Eddystone Lighthouse (1755–59) where he
pioneered the use of 'hydraulic lime' (a form of mortar which will set under water) and developed a
technique involving dovetailed blocks of granite in the building of the lighthouse. His lighthouse
remained in use until 1877 and was dismantled and partially rebuilt at Plymouth Hoe where it is
known as Smeaton's Tower. He is important in the history, rediscovery of, and development of
modern cement, because he identified the compositional requirements needed to obtain
"hydraulicity" in lime; work which led ultimately to the invention of Portland cement.
The United States census of 1850 listed the occupation of "engineer" for the first time with a count of
2,000.[12] There were fewer than 50 engineering graduates in the U.S. before 1865. In 1870 there
were a dozen U.S. mechanical engineering graduates, with that number increasing to 43 per year in
1875. In 1890 there were 6,000 engineers in civil, mining, mechanical and electrical. [13]
There was no chair of applied mechanism and applied mechanics established at Cambridge until
1875, and no chair of engineering at Oxford until 1907. Germany established technical universities earlier.
The foundations of electrical engineering in the 1800s included the experiments of Alessandro
Volta, Michael Faraday, Georg Ohm and others and the invention of the electric telegraph in 1816
and the electric motor in 1872. The theoretical work of James Maxwell (see: Maxwell's equations)
and Heinrich Hertz in the late 19th century gave rise to the field of electronics. The later inventions of
the vacuum tube and the transistor further accelerated the development of electronics to such an
extent that electrical and electronics engineers currently outnumber their colleagues of any other
engineering specialty.[3] Chemical engineering developed in the late nineteenth century.[3] Industrial
scale manufacturing demanded new materials and new processes and by 1880 the need for large
scale production of chemicals was such that a new industry was created, dedicated to the
development and large scale manufacturing of chemicals in new industrial plants. [3] The role of the
chemical engineer was the design of these chemical plants and processes. [3]

The Falkirk Wheel in Scotland

Aeronautical engineering deals with aircraft design processes while aerospace engineering is a
more modern term that expands the reach of the discipline by including spacecraft design. Its origins
can be traced back to the aviation pioneers around the start of the 20th century although the work
of Sir George Cayley has recently been dated as being from the last decade of the 18th century.
Early knowledge of aeronautical engineering was largely empirical with some concepts and skills
imported from other branches of engineering.[15]

The first PhD in engineering (technically, applied science and engineering) awarded in the United
States went to Josiah Willard Gibbs at Yale University in 1863; it was also the second PhD awarded
in science in the U.S.[16]
Only a decade after the successful flights by the Wright brothers, there was extensive development
of aeronautical engineering through development of military aircraft that were used in World War I.
Meanwhile, research to provide fundamental background science continued by combining theoretical
physics with experiments.
In 1990, with the rise of computer technology, the first search engine was built by computer
engineer Alan Emtage.

Main branches of engineering
Main article: List of engineering branches
For a topical guide to this subject, see Outline of engineering Branches of engineering.

The design of a modern auditorium involves many branches of engineering,

including acoustics, architecture and civil engineering.

Hoover Dam

Engineering is a broad discipline which is often broken down into several sub-disciplines. These
disciplines concern themselves with differing areas of engineering work. Although initially an
engineer will usually be trained in a specific discipline, throughout an engineer's career the engineer
may become multi-disciplined, having worked in several of the outlined areas. Engineering is often
characterized as having four main branches:[17][18][19]

Chemical engineering – The application of physics, chemistry, biology, and engineering
principles in order to carry out chemical processes on a commercial scale, such as petroleum
refining, microfabrication, fermentation, and biomolecule production.

Civil engineering – The design and construction of public and private works, such
as infrastructure (airports, roads, railways, water supply and treatment etc.), bridges, dams, and buildings.

Electrical engineering – The design, study and manufacture of various electrical and
electronic systems, such as electrical
circuits, generators, motors, electromagnetic/electromechanical devices, electronic
devices, electronic circuits, optical fibers, optoelectronic
devices, computer systems, telecommunications, instrumentation, controls, and electronics.

Mechanical engineering – The design and manufacture of physical or mechanical systems,
such as power and energy systems, aerospace/aircraft products, weapon
systems, transportation products, engines, compressors, powertrains, kinematic chains, vacuum
technology, vibration isolation equipment, manufacturing, and mechatronics.

Beyond these four, a number of other branches are recognized. Historically, naval
engineering and mining engineering were major branches. Other engineering fields sometimes
included as major branches[citation needed] are manufacturing engineering, acoustical
engineering, corrosion engineering, Instrumentation and
control, aerospace, automotive, computer, electronic, petroleum, systems, audio, software, architectural,
agricultural, biosystems, biomedical,[20] geological, textile, industrial, materials,[21] and
nuclear[22] engineering. These and other branches of engineering are represented in the 36 licensed member institutions of the UK Engineering Council.

Chemistry
From Wikipedia, the free encyclopedia

For other uses, see Chemistry (disambiguation).

"Chemical science" redirects here. For the Royal Society of Chemistry journal, see Chemical
Science (journal).

Solutions of substances in reagent bottles, including ammonium hydroxide and nitric acid, illuminated in
different colors








Chemistry is a branch of physical science that studies the composition, structure, properties and
change of matter.[1][2] Chemistry includes topics such as the properties of individual atoms, how atoms
form chemical bonds to create chemical compounds, the interactions of substances
through intermolecular forces that give matter its general properties, and the interactions between
substances through chemical reactions to form different substances.
Chemistry is sometimes called the central science because it bridges other natural sciences,
including physics, geology and biology.[3][4] For the differences between chemistry and physics
see comparison of chemistry and physics.[5]
Scholars disagree about the etymology of the word chemistry. The history of chemistry can be traced
to alchemy, which had been practiced for several millennia in various parts of the world.





Etymology
The word chemistry comes from alchemy, which referred to an earlier set of practices that
encompassed elements of chemistry, metallurgy, philosophy, astrology, astronomy, mysticism and
medicine. It is often seen as linked to the quest to turn lead or another common starting material into
gold,[6] though in ancient times the study encompassed many of the questions of modern chemistry
being defined as the study of the composition of waters, movement, growth, embodying,
disembodying, drawing the spirits from bodies and bonding the spirits within bodies by the early 4th
century Greek-Egyptian alchemist Zosimos.[7] An alchemist was called a 'chemist' in popular speech,
and later the suffix "-ry" was added to this to describe the art of the chemist as "chemistry".
The modern word alchemy in turn is derived from the Arabic word al-kīmīā (الكيمياء). In origin, the term
is borrowed from the Greek χημία or χημεία.[8][9] This may have Egyptian origins since al-kīmīā is
derived from the Greek χημία, which is in turn derived from the word Chemi or Kimi, which is the
ancient name of Egypt in Egyptian.[8] Alternately, al-kīmīā may derive from χημεία, meaning "cast together".

In retrospect, the definition of chemistry has changed over time, as new discoveries and theories
add to the functionality of the science. The term "chymistry", in the view of noted scientist Robert
Boyle in 1661, meant the subject of the material principles of mixed bodies. [11] In 1663 the
chemist Christopher Glaser described "chymistry" as a scientific art, by which one learns to dissolve
bodies, and draw from them the different substances on their composition, and how to unite them
again, and exalt them to a higher perfection.[12]
The 1730 definition of the word "chemistry", as used by Georg Ernst Stahl, meant the art of resolving
mixed, compound, or aggregate bodies into their principles; and of composing such bodies from
those principles.[13] In 1837, Jean-Baptiste Dumas considered the word "chemistry" to refer to the
science concerned with the laws and effects of molecular forces.[14] This definition further evolved
until, in 1947, it came to mean the science of substances: their structure, their properties, and the
reactions that change them into other substances - a characterization accepted by Linus Pauling.
More recently, in 1998, Professor Raymond Chang broadened the definition of "chemistry" to
mean the study of matter and the changes it undergoes.[16]

History
Main article: History of chemistry
See also: Alchemy and Timeline of chemistry

Democritus' atomist philosophy was later adopted by Epicurus (341–270 BCE).

Early civilizations, such as the Egyptians,[17] Babylonians, and Indians[18] amassed practical knowledge
concerning the arts of metallurgy, pottery and dyes, but didn't develop a systematic theory.
A basic chemical hypothesis first emerged in Classical Greece with the theory of four elements as
propounded definitively by Aristotle stating that fire, air, earth and water were the fundamental
elements from which everything is formed as a combination. Greek atomism dates back to 440 BC,
arising in works by philosophers such as Democritus and Epicurus. In 50 BC,
the Roman philosopher Lucretius expanded upon the theory in his book De rerum natura (On The
Nature of Things).[19][20] Unlike modern concepts of science, Greek atomism was purely philosophical
in nature, with little concern for empirical observations and no concern for chemical experiments. [21]
In the Hellenistic world the art of alchemy first proliferated, mingling magic and occultism into the
study of natural substances with the ultimate goal of transmuting elements into gold and discovering
the elixir of eternal life.[22] Work, particularly the development of distillation, continued in the
early Byzantine period with the most famous practitioner being the 4th century Greek-Egyptian Zosimos of Panopolis.[23] Alchemy continued to be developed and practised throughout
the Arab world after the Muslim conquests,[24] and from there, and from the Byzantine remnants,
diffused into medieval and Renaissance Europe through Latin translations. Some influential
Muslim chemists, Abū al-Rayhān al-Bīrūnī,[26] Avicenna[27] and Al-Kindi refuted the theories of
alchemy, particularly the theory of the transmutation of metals; and al-Tusi described a version of
the conservation of mass, noting that a body of matter is able to change but is not able to disappear.

Chemistry as science

Jābir ibn Hayyān (Geber), a Persian alchemist whose experimental research laid the foundations of chemistry.

The development of the modern scientific method was slow and arduous, but an early scientific
method for chemistry began emerging among early Muslim chemists, beginning with the 9th century
Persian or Arabian chemist Jābir ibn Hayyān (known as "Geber" in Europe), who is sometimes
referred to as "the father of chemistry".[29][30][31][32] He introduced a systematic
and experimental approach to scientific research based in the laboratory, in contrast to the ancient
Greek and Egyptian alchemists whose works were largely allegorical and often unintelligible.[33] Under
the influence of the new empirical methods propounded by Sir Francis Bacon and others, a group of
chemists at Oxford, Robert Boyle, Robert Hooke and John Mayow began to reshape the old
alchemical traditions into a scientific discipline. Boyle in particular is regarded as the founding father
of chemistry due to his most important work, the classic chemistry text The Sceptical Chymist, where
the differentiation is made between the claims of alchemy and the empirical scientific discoveries of
the new chemistry.[34] He formulated Boyle's law, rejected the classical "four elements" and proposed
a mechanistic alternative of atoms and chemical reactions that could be subject to rigorous experiment.

Antoine-Laurent de Lavoisier is considered the "Father of Modern Chemistry". [36]

The theory of phlogiston (a substance at the root of all combustion) was propounded by the
German Georg Ernst Stahl in the early 18th century and was only overturned by the end of the
century by the French chemist Antoine Lavoisier, the chemical analogue of Newton in physics, who
did more than any other to establish the new science on a proper theoretical footing, by elucidating the
principle of conservation of mass and developing a new system of chemical nomenclature used to
this day.[37]
Before his work, though, many important discoveries had been made, specifically relating to the
nature of 'air' which was discovered to be composed of many different gases. The Scottish
chemist Joseph Black (the first experimental chemist) and the Dutchman J. B. van
Helmont discovered carbon dioxide, or what Black called 'fixed air' in 1754; Henry
Cavendish discovered hydrogen and elucidated its properties and Joseph Priestley and,
independently, Carl Wilhelm Scheele isolated pure oxygen.


Biology deals with the study of many living organisms. Top: E. coli bacteria and gazelle; bottom: Goliath beetle and tree fern.

Biology is a natural science concerned with the study of life and living organisms, including their
structure, function, growth, evolution, distribution, identification and taxonomy.[1] Modern biology is a
vast and eclectic field, composed of many branches and subdisciplines. However, despite the broad
scope of biology, there are certain general and unifying concepts within it that govern all study and
research, consolidating it into a single, coherent field. In general, biology recognizes the cell as the
basic unit of life, genes as the basic unit of heredity, and evolution as the engine that propels the
synthesis and creation of new species. It is also understood today that all organisms survive by
consuming and transforming energy and by regulating their internal environment to maintain a stable
and vital condition known as homeostasis.
Sub-disciplines of biology are defined by the scale at which organisms are studied, the kinds of
organisms studied, and the methods used to study them: biochemistry examines the rudimentary
chemistry of life; molecular biology studies the complex interactions among
biological molecules; botany studies the biology of plants; cellular biology examines the basic
building-block of all life, the cell; physiology examines the physical and chemical functions
of tissues, organs, and organ systems of an organism; evolutionary biology examines the processes
that produced the diversity of life; and ecology examines how organisms interact in
their environment.[2]



Main article: History of biology

A Diagram of a fly from Robert Hooke's innovative Micrographia, 1665

Ernst Haeckel's Tree of Life (1879)

The term biology is derived from the Greek word βίος, bios, "life" and the suffix -λογία, -logia, "study
of."[3][4] The Latin-language form of the term first appeared in 1736 when the Swedish scientist Carl
Linnaeus (Carl von Linné) used biologi in his Bibliotheca botanica. It was used again in 1766 in a
work entitled Philosophiae naturalis sive physicae: tomus III, continens geologian, biologian,
phytologian generalis, by Michael Christoph Hanov, a disciple of Christian Wolff. The first German
use, Biologie, was in a 1771 translation of Linnaeus' work. In 1797, Theodor Georg August Roose
used the term in the preface of a book, Grundzüge der Lehre von der Lebenskraft. Karl Friedrich
Burdach used the term in 1800 in a more restricted sense of the study of human beings from a
morphological, physiological and psychological perspective (Propädeutik zum Studium der
gesammten Heilkunst). The term came into its modern usage with the six-volume treatise Biologie,
oder Philosophie der lebenden Natur (1802–22) by Gottfried Reinhold Treviranus, who announced:[5]
The objects of our research will be the different forms and manifestations of life, the
conditions and laws under which these phenomena occur, and the causes through which
they have been effected. The science that concerns itself with these objects we will indicate
by the name biology [Biologie] or the doctrine of life [Lebenslehre].
Although modern biology is a relatively recent development, sciences related to and included
within it have been studied since ancient times. Natural philosophy was studied as early as the
ancient civilizations of Mesopotamia, Egypt, the Indian subcontinent, and China. However, the
origins of modern biology and its approach to the study of nature are most often traced back
to ancient Greece.[6][7] While the formal study of medicine dates back to Hippocrates (ca. 460 BC –
ca. 370 BC), it was Aristotle (384 BC – 322 BC) who contributed most extensively to the
development of biology. Especially important are his History of Animals and other works where
he showed naturalist leanings, and later more empirical works that focused on biological
causation and the diversity of life. Aristotle's successor at the Lyceum, Theophrastus, wrote a
series of books on botany that survived as the most important contribution of antiquity to the
plant sciences, even into the Middle Ages.[8]














Biology began to quickly develop and grow with Anton van Leeuwenhoek's dramatic
improvement of the microscope. It was then that scholars
discovered spermatozoa, bacteria, infusoria and the diversity of microscopic life. Investigations
by Jan Swammerdam led to new interest in entomology and helped to develop the basic
techniques of microscopic dissection and staining.[10]
Advances in microscopy also had a profound impact on biological thinking. In the early 19th
century, a number of biologists pointed to the central importance of the cell. Then, in
1838, Schleiden and Schwann began promoting the now universal ideas that (1) the basic unit
of organisms is the cell and (2) that individual cells have all the characteristics of life, although
they opposed the idea that (3) all cells come from the division of other cells. Thanks to the work
of Robert Remak and Rudolf Virchow, however, by the 1860s most biologists accepted all three
tenets of what came to be known as cell theory.[11][12]
Meanwhile, taxonomy and classification became the focus of natural historians. Carl
Linnaeus published a basic taxonomy for the natural world in 1735 (variations of which have
been in use ever since), and in the 1750s introduced scientific names for all his species.
Georges-Louis Leclerc, Comte de Buffon, treated species as artificial categories and living
forms as malleable, even suggesting the possibility of common descent. Though he was
opposed to evolution, Buffon is a key figure in the history of evolutionary thought; his work
influenced the evolutionary theories of both Lamarck and Darwin.[14]



Hindu Sadhu with a goatee and moustache.










A beard is the collection of hair that grows on the chin and cheeks of humans and some non-human
animals. In humans, usually only pubescent or adult males are able to grow beards. From an
evolutionary viewpoint, the beard is part of the broader category of androgenic hair. It is a vestigial
trait from a time when humans had hair on their face and entire body like the hair on gorillas. The
evolutionary loss of hair is pronounced in some populations such as indigenous Americans and
some east Asian populations, who have less facial hair, whereas Caucasians and the Ainu have
more facial hair. Women with hirsutism, a hormonal condition of excessive hairiness, may develop a
beard.
Throughout the course of history, societal attitudes toward male beards have varied widely
depending on factors such as prevailing cultural-religious traditions and the current
era's fashion trends. Some religions (such as Sikhism) have considered a full beard to be absolutely
essential for all males able to grow one, and mandate it as part of their official dogma. Other
cultures, even while not officially mandating it, view a beard as central to a man's virility,
exemplifying such virtues as wisdom, strength, sexual prowess and high social status. However, in
cultures where facial hair is uncommon (or currently out of fashion), beards may be associated with
poor hygiene or a "savage", uncivilized, or even dangerous demeanor.



The beard develops during puberty. Beard growth is linked to stimulation of hair follicles in the area
by dihydrotestosterone, which continues to affect beard growth after puberty. Various hormones
stimulate hair follicles from different areas. Dihydrotestosterone, for example, may also promote
short-term pogonotrophy (i.e., the cultivation of facial hair). In one anecdotal account, a scientist who chose to
remain anonymous had to spend periods of several weeks on a remote island in comparative
isolation. He noticed that his beard growth diminished, but the day before he was due to leave the
island it increased again, to reach unusually high rates during the first day or two on the mainland.
He studied the effect and concluded that the stimulus for increased beard growth was related to the
resumption of sexual activity.[1] However, at that time professional pogonologists such as R.M.
Hardisty reacted vigorously and almost dismissively.[2]
Beard growth rate is also genetic.[3]

Biologists characterize beards as a secondary sexual characteristic because they are unique to one
sex, yet do not play a direct role in reproduction. Charles Darwin first suggested
possible evolutionary explanation of beards in his work The Descent of Man, which hypothesized
that the process of sexual selection may have led to beards.[4] Modern biologists have reaffirmed the
role of sexual selection in the evolution of beards, concluding that there is evidence that a majority of
females find men with beards more attractive than men without beards.[5][6][7]

Charles Darwin

Evolutionary psychology explanations for the existence of beards include signalling sexual maturity
and signalling dominance by increasing the perceived size of the jaw; clean-shaven faces are rated as
less dominant than bearded ones.[8] Some scholars assert that it is not yet established whether the sexual
selection leading to beards is rooted in attractiveness (inter-sexual selection) or dominance (intra-sexual selection).[9] A beard can be explained as an indicator of a male's overall condition.[10] The rate
of facial hairiness appears to influence male attractiveness.[11][12] The presence of a beard makes the
male vulnerable in fights, which is costly, so biologists have speculated that there must be other
evolutionary benefits that outweigh that drawback.[13] Excess testosterone evidenced by the beard
may indicate mild immunosuppression, which may support spermatogenesis.[14][15]


Different types of paper: carton, tissue paper


"Paper" in Traditional Chinese (紙, top) and Simplified Chinese (纸, bottom) characters

Paper is a thin material produced by pressing together moist fibres of cellulose pulp derived
from wood, rags or grasses, and drying them into flexible sheets. It is a versatile material with many
uses, including writing, printing, packaging, cleaning, and a number of industrial and construction processes.
The pulp papermaking process is said to have been developed in China during the early 2nd century
AD, possibly as early as the year 105 AD,[1] by the Han court eunuch Cai Lun, although the earliest
archaeological fragments of paper derive from the 2nd century BC in China. [2] The modern pulp and
paper industry is global, with China leading its production and the United States right behind it.



Main article: History of paper

Hemp wrapping paper, China, circa 100 BC.

The oldest known archaeological fragments of the immediate precursor to modern paper date to the
2nd century BC in China. The pulp papermaking process is ascribed to Cai Lun, a 2nd-century
AD Han court eunuch.[2] With paper as an effective substitute for silk in many applications, China
could export silk in greater quantity, contributing to a Golden Age.
Its knowledge and uses spread from China through the Middle East to medieval Europe in the 13th
century, where the first water-powered paper mills were built.[3] Because of paper's introduction to the
West through the city of Baghdad, it was first called bagdatikos.[4] In the 19th century, industrial
manufacture greatly lowered its cost, enabling mass exchange of information and contributing to
significant cultural shifts. In 1844, the Canadian inventor Charles Fenerty and the German F. G.
Keller independently developed processes for pulping wood fibres. [5]

Early sources of fibre

Ancient Sanskrit on hemp-based paper. Hemp fibre was commonly used in the production of paper from 200
BCE to the late 1800s.

See also: wood pulp and deinking

Before the industrialisation of paper production, the most common fibre source was recycled
fibres from used textiles, called rags. The rags were from hemp, linen and cotton.[6] A process for
removing printing inks from recycled paper was invented by the German jurist Justus Claproth in 1774.
Today this method is called deinking. It was not until the introduction of wood pulp in 1843 that
paper production ceased to depend on recycled materials from ragpickers.[6]

Further information: Papyrus
The word "paper" is etymologically derived from the Latin papyrus, which comes from
the Greek πάπυρος (papuros), the word for the Cyperus papyrus plant.[7][8] Papyrus is a thick, paper-like material produced from the pith of the Cyperus papyrus plant, which was used in ancient
Egypt and other Mediterranean cultures for writing before the introduction of paper into the Middle
East and Europe.[9] Although the word paper is etymologically derived from papyrus, the two are
produced very differently and the development of the first is distinct from the development of the
second. Papyrus is a lamination of natural plant fibres, while paper is manufactured from fibres
whose properties have been changed by maceration.[2]

Main article: Papermaking

Chemical pulping
Main articles: kraft process, sulfite process, and soda pulping
To make pulp from wood, a chemical pulping process separates lignin from cellulose fibres. This is
accomplished by dissolving lignin in a cooking liquor, so that it may be washed from the cellulose;
this preserves the length of the cellulose fibres. Paper made from chemical pulps is also known
as wood-free paper (not to be confused with tree-free paper); this is because it does not contain
lignin, which deteriorates over time. The pulp can also be bleached to produce white paper, but this
consumes 5% of the fibres; chemical pulping processes are not used to make paper made from
cotton, which is already 90% cellulose.

The microscopic structure of paper: Micrograph of paper autofluorescing under ultraviolet illumination. The
individual fibres in this sample are around 10 µm in diameter.

There are three main chemical pulping processes: the sulfite process dates back to the 1840s and
was the dominant method before the Second World War. The kraft process, invented in the
1870s and first used in the 1890s, is now the most commonly practised method; one of its
advantages is that the chemical reaction with lignin produces heat, which can be used to run a
generator. Most pulping operations using the kraft process are net contributors to the electricity grid
or use the electricity to run an adjacent paper mill. Another advantage is that this process recovers
and reuses all inorganic chemical reagents. Soda pulping is a specialty process used to
pulp straws, bagasse and hardwoods with high silicate content.

Mechanical pulping
There are two major mechanical pulps: thermomechanical pulp (TMP) and groundwood pulp
(GW). In the TMP process, wood is chipped and then fed into steam heated refiners, where the chips
are squeezed and converted to fibres between two steel discs. In the groundwood process,
debarked logs are fed into grinders where they are pressed against rotating stones to be made into
fibres. Mechanical pulping does not remove the lignin, so the yield is very high, >95%; however, it
causes the paper thus produced to turn yellow and become brittle over time. Mechanical pulps have
rather short fibres, thus producing weak paper. Although large amounts of electrical energy are
required to produce mechanical pulp, it costs less than the chemical kind.

De-inked pulp
Paper recycling processes can use either chemically or mechanically produced pulp; by mixing it
with water and applying mechanical action, the hydrogen bonds in the paper can be broken and the
fibres separated again. Most recycled paper contains a proportion of virgin fibre for the sake of
quality; generally speaking, de-inked pulp is of the same quality as or lower than the collected paper it
was made from.
There are three main classifications of recycled fibre:

Mill broke or internal mill waste: this incorporates any substandard or grade-change paper
made within the paper mill itself, which then goes back into the manufacturing system to be re-pulped back into paper. Such out-of-specification paper is not sold and is therefore often not
classified as genuine reclaimed recycled fibre; however, most paper mills have been reusing
their own waste fibre for many years, long before recycling became popular.

Preconsumer waste: this is offcut and processing waste, such as guillotine trims and
envelope blank waste; it is generated outside the paper mill and could potentially go to landfill,
and is a genuine recycled fibre source; it includes de-inked preconsumer waste (recycled material that
has been printed but did not reach its intended end use, such as waste from printers and unsold
publications).

Postconsumer waste: this is fibre from paper that has been used for its intended end use
and includes office waste, magazine papers and newsprint. As the vast majority of this material
has been printed, either digitally or by more conventional means such as lithography or
rotogravure, it will either be recycled as printed paper or go through a de-inking process first.

Recycled papers can be made from 100% recycled materials or blended with virgin pulp, although
they are generally neither as strong nor as bright as papers made from the latter.

Besides the fibres, pulps may contain fillers such as chalk or china clay, which improve its
characteristics for printing or writing. Additives for sizing purposes may be mixed with it and/or
applied to the paper web later in the manufacturing process; the purpose of such sizing is to
establish the correct level of surface absorbency to suit ink or paint.

Producing paper
Main articles: Paper machine and papermaking
The pulp is fed to a paper machine where it is formed as a paper web and the water is removed from
it by pressing and drying.
Pressing the sheet removes the water by force; once the water is forced from the sheet, a special
kind of felt, which is not to be confused with the traditional one, is used to collect the water; whereas
when making paper by hand, a blotter sheet is used instead.
Drying involves using air and/or heat to remove water from the paper sheets; in the earliest days of
paper making this was done by hanging the sheets like laundry; in more modern times various forms
of heated drying mechanisms are used. On the paper machine the most common is the steam
heated can dryer. These can reach temperatures above 200 °F (93 °C) and are used in long
sequences of more than 40 cans, where the heat produced can easily dry the paper to less
than 6% moisture.


Scanning electron micrograph of Mycobacterium tuberculosis, a bacterium that causes tuberculosis.

A disease is a particular abnormal condition, a disorder of a structure or function, that affects part or
all of an organism. The study of disease is called pathology, which includes the causal study
of etiology. Disease is often construed as a medical condition associated with
specific symptoms and signs.[1] It may be caused by external factors such as pathogens, or it may be
caused by internal dysfunctions particularly of the immune system such as an immunodeficiency, or
a hypersensitivity including allergies and autoimmunity.
In humans, disease is often used more broadly to refer to any condition that
causes pain, dysfunction, distress, social problems, or death to the person afflicted, or similar
problems for those in contact with the person. In this broader sense, it sometimes
includes injuries, disabilities, disorders, syndromes, infections, isolated symptoms,
deviant behaviors, and atypical variations of structure and function, while in other contexts and for
other purposes these may be considered distinguishable categories. Diseases can affect people not
only physically, but also emotionally, as contracting and living with a disease can alter the affected
person's perspective on life.
Death due to disease is called death by natural causes. There are four main types of
disease: infectious diseases, deficiency diseases, genetic diseases (both hereditary and non-hereditary), and physiological diseases. Diseases can also be classified as communicable and non-communicable. The deadliest diseases in humans are coronary artery disease (blood flow
obstruction), followed by cerebrovascular disease and lower respiratory infections.[2]




In many cases, terms such as disease, disorder, morbidity and illness are used interchangeably.
There are situations, however, when specific terms are considered preferable.
The term disease broadly refers to any condition that impairs the normal functioning of the
body. For this reason, diseases are associated with dysfunctioning of the body's
normal homeostatic processes.[4] The term disease has both a count sense (a disease, two
diseases, many diseases) and a noncount sense (not much disease, less disease, a lot of
disease). Commonly, the term is used to refer specifically to infectious diseases, which are
clinically evident diseases that result from the presence of pathogenic microbial agents,
including viruses, bacteria, fungi, protozoa, multicellular organisms, and aberrant proteins
known as prions. An infection that does not and will not produce clinically evident impairment
of normal functioning, such as the presence of the normal bacteria and yeasts in the gut, or
of a passenger virus, is not considered a disease. By contrast, an infection that is
asymptomatic during its incubation period, but expected to produce symptoms later, is
usually considered a disease. Non-infectious diseases are all other diseases, including most
forms of cancer[citation needed], heart disease, and genetic disease.
Acquired disease
disease that began at some point during one's lifetime, as opposed to disease that was
already present at birth, which is congenital disease. "Acquired" sounds like it could mean
"caught via contagion", but it simply means acquired sometime after birth. It also sounds like
it could imply secondary disease, but acquired disease can be primary disease.
Acute disease
disease of a short-term nature (acute); the term sometimes also connotes a fulminant nature
Chronic disease

disease that is a long-term issue (chronic)

Congenital disease
disease that is present at birth. It is often genetic and can be inherited. It can also be the
result of a vertically transmitted infection from the mother such as HIV/AIDS.
Genetic disease
disease that is caused by genetic mutation. It is often inherited, but some mutations are
random and de novo.
Hereditary or inherited disease
a type of genetic disease caused by mutation that is hereditary (and can run in families)
Idiopathic disease
disease whose cause is unknown. As medical science has advanced, many diseases whose
causes were formerly complete mysteries have been somewhat explained (for example,
when it was realized that autoimmunity is the cause of some forms of diabetes mellitus type
1, even if we do not yet understand every molecular detail involved) or even extensively
explained (for example, when it was realized that gastric ulcers are often associated
with Helicobacter pylori infection).
Incurable disease
disease that cannot be cured
Primary disease
disease that came about as a root cause of illness, as opposed to secondary disease, which
is a sequela of another disease
Secondary disease
disease that is a sequela or complication of some other disease or underlying cause (root
cause). Bacterial infections can be either primary (healthy but then bacteria arrived) or
secondary to a viral infection or burn, which predisposed by creating an open wound or
weakened immunity (bacteria would not have gotten established otherwise).
Terminal disease
disease with death as an inevitable result
Illness is generally used as a synonym for disease.[5] However, this term is occasionally used
to refer specifically to the patient's personal experience of his or her disease.[6][7] In this model,
it is possible for a person to have a disease without being ill (to have an objectively
definable, but asymptomatic, medical condition, such as a subclinical infection), and to
be ill without being diseased (such as when a person perceives a normal experience as a
medical condition, or medicalizes a non-disease situation in his or her life, for example, a
person who feels unwell as a result of embarrassment, and who interprets those feelings as
sickness rather than normal emotions). Symptoms of illness are often not directly the result
of infection, but a collection of evolved responses (sickness behavior by the body) that
helps clear infection. Such aspects of illness can include lethargy, depression, loss of
appetite, sleepiness, hyperalgesia, and inability to concentrate.[8][9][10]