
Age hardening (Precipitation Hardening)

Precipitation Hardening Stainless Steels – Alloys, Properties, Fabrication Processes.

Background

Precipitation hardening stainless steels are chromium- and nickel-containing steels that provide an
optimum combination of the properties of martensitic and austenitic grades. Like martensitic
grades, they can be heat treated to high strength, while also offering corrosion resistance
comparable to that of austenitic stainless steels.

The high tensile strengths of precipitation hardening stainless steels come from a heat treatment
process that leads to precipitation hardening of a martensitic or austenitic matrix. Hardening is
achieved through the addition of one or more of the elements copper, aluminium, titanium,
niobium and molybdenum. The best-known precipitation hardening steel is 17-4 PH.

The name reflects the additions of 17% chromium and 4% nickel. The alloy also contains 4% copper
and 0.3% niobium, and is also known as stainless steel grade 630. The advantage of
precipitation hardening steels is that they can be supplied in a “solution treated” condition, which
is readily machinable. After machining or another fabrication method, a single, low-temperature
heat treatment can be applied to increase the strength of the steel. This is known as ageing or
age hardening. As it is carried out at low temperature, the component undergoes no distortion.

Characterisation
Precipitation hardening steels are characterised into one of three groups based on their final
microstructures after heat treatment. The three types are: martensitic (e.g. 17-4 PH), semi-
austenitic (e.g. 17-7 PH) and austenitic (e.g. A-286).

Martensitic Alloys
Martensitic precipitation hardening stainless steels have a predominantly austenitic structure at
annealing temperatures of around 1040 to 1065°C. Upon cooling to room temperature, they
undergo a transformation that changes the austenite to martensite.

Semi-austenitic Alloys
Unlike martensitic precipitation hardening steels, annealed semi-austenitic precipitation
hardening steels are soft enough to be cold worked. Semi-austenitic steels retain their austenitic
structure at room temperature but will form martensite at very low temperatures.
Austenitic Alloys
Austenitic precipitation hardening steels retain their austenitic structure after annealing and
hardening by ageing. At the annealing temperature of 1095 to 1120°C the precipitation hardening
phase is soluble. It remains in solution during rapid cooling. When reheated to 650 to 760°C,
precipitation occurs. This increases the hardness and strength of the material. Hardness remains
lower than that for martensitic or semi-austenitic precipitation hardening steels. Austenitic alloys
remain nonmagnetic.

Properties
Strength
Yield strengths for precipitation hardening stainless steels range from 515 to 1415 MPa. Tensile
strengths range from 860 to 1520 MPa. Elongations are 1 to 25%. Cold working before ageing
can be used to achieve even higher strengths.

Heat Treatment
The key to the properties of precipitation hardening stainless steels lies in heat treatment. After
solution treatment or annealing of precipitation hardening stainless steels, a single low
temperature “age hardening” stage is employed to achieve the required properties. As this
treatment is carried out at a low temperature, no distortion occurs and there is only superficial
discolouration. During the hardening process a slight decrease in size takes place.

This shrinking is approximately 0.05% for condition H900 and 0.10% for H1150. Typical
mechanical properties achieved for 17-4 PH after solution treating and age hardening are given
in the following table. Condition designations are given by the age hardening temperature in °F.
Table 1. Mechanical property ranges after solution treating and age hardening

Cond.   Hardening temp. and time   Hardness (Rockwell C)   Tensile Strength (MPa)
A       Annealed                   36                      1100
H900    482°C, 1 hour              44                      1310
H925    496°C, 4 hours             42                      1170-1320
H1025   552°C, 4 hours             38                      1070-1220
H1075   580°C, 4 hours             36                      1000-1150
H1100   593°C, 4 hours             35                      970-1120
H1150   621°C, 4 hours             33                      930-1080
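The condition designations above encode the ageing temperature in °F, and the shrinkage figures quoted earlier set the machining allowance. The short Python sketch below is purely illustrative (the part length and the helper functions are assumptions, not from the source) and shows how both might be applied in practice.

```python
# Illustrative sketch only (not from the source): relating a 17-4 PH condition
# designation to its ageing temperature and estimating the machining allowance
# implied by the quoted shrinkage (~0.05% for H900, ~0.10% for H1150).

def condition_temp_c(condition: str) -> float:
    """Condition designations give the ageing temperature in deg F, e.g. 'H900'."""
    temp_f = float(condition.lstrip("Hh"))
    return (temp_f - 32.0) * 5.0 / 9.0

def shrinkage_mm(length_mm: float, shrinkage_pct: float) -> float:
    """Expected contraction of a machined dimension during age hardening."""
    return length_mm * shrinkage_pct / 100.0

if __name__ == "__main__":
    # H900: 900 deg F is about 482 deg C, matching Table 1.
    print(f"H900 ageing temperature: {condition_temp_c('H900'):.0f} C")
    # A 250 mm machined length aged to H900 (~0.05% shrinkage, assumed here)
    # would be expected to contract by roughly 0.13 mm.
    print(f"Shrinkage of a 250 mm length at H900: {shrinkage_mm(250.0, 0.05):.3f} mm")
```
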
Typical Chemical Composition
Table 2. Typical chemical composition for stainless steel alloy 17-4PH
17-4 PH
C 0.07%
Mn 1.00%
Si 1.00%
P 0.04%
S 0.03%
Cr 17.0%
Ni 4.0%
Cu 4.0%
Nb+Ta 0.30%

Typical Mechanical Properties


Table 3. Typical mechanical properties for stainless steel alloy 17-4PH
Grade 17-4 PH              Annealed   Cond. 900   Cond. 1150
Tensile Strength (MPa)     1100       1310        930
Proof Stress 0.2% (MPa)    1000       1170        724
Elongation A5 (%)          15         10          16

Typical Physical Properties


Table 4. Typical physical properties for stainless steel alloy 17-4PH
Property                  Value
Density                   7.75 g/cm³
Melting Point (°C)        -
Modulus of Elasticity     196 GPa
Electrical Resistivity    0.080 x 10⁻⁶ Ω.m
Thermal Conductivity      18.4 W/m.K at 100°C
Thermal Expansion         10.8 x 10⁻⁶ /K at 100°C
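As a quick illustration of how the figures in Table 4 might be used, the hedged sketch below estimates the free thermal expansion of a bar and its DC resistance from the tabulated coefficient of thermal expansion and resistivity. The bar dimensions and temperature rise are assumed values chosen only for the example.

```python
import math

# Illustrative use of the typical 17-4 PH values from Table 4. The bar
# dimensions and the 80 K temperature rise are assumed for the example only.

RESISTIVITY_OHM_M = 0.080e-6        # electrical resistivity (ohm.m)
THERMAL_EXPANSION_PER_K = 10.8e-6   # mean coefficient of thermal expansion (/K)

def length_change_mm(length_mm: float, delta_t_k: float) -> float:
    """Free thermal expansion: dL = alpha * L * dT."""
    return THERMAL_EXPANSION_PER_K * length_mm * delta_t_k

def bar_resistance_ohm(length_m: float, diameter_m: float) -> float:
    """DC resistance of a round bar: R = rho * L / A."""
    area_m2 = math.pi * (diameter_m / 2.0) ** 2
    return RESISTIVITY_OHM_M * length_m / area_m2

if __name__ == "__main__":
    # A 500 mm bar heated by 80 K grows by about 0.43 mm.
    print(f"Expansion of a 500 mm bar over 80 K: {length_change_mm(500.0, 80.0):.2f} mm")
    # A 1 m long, 10 mm diameter bar has a resistance of roughly 1 milliohm.
    print(f"Resistance of a 1 m x 10 mm bar: {bar_resistance_ohm(1.0, 0.010) * 1e3:.2f} mOhm")
```
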

Alloy Designations
Stainless steel 17-4 PH also corresponds to a number of other standard designations and
specifications.
Table 5. Alternate designations for stainless steel alloy 17-4PH

Euronorm   UNS      BS   En   Grade
1.4542     S17400   -    -    630

Corrosion Resistance
Precipitation hardening stainless steels have moderate to good corrosion resistance in a range of
environments. They offer a better combination of strength and corrosion resistance than the
heat-treatable 400 series martensitic alloys. Corrosion resistance is similar to
that of grade 304 stainless steel. In warm chloride environments, 17-4 PH is susceptible to
pitting and crevice corrosion.

When aged at 550°C or higher, 17-4 PH is highly resistant to stress corrosion cracking; resistance
improves with higher ageing temperatures. Corrosion resistance is low in the solution treated
(annealed) condition, so the steel should not be put into service before heat treatment.

Heat Resistance
17-4 PH has good oxidation resistance. To avoid a reduction in mechanical properties, it
should not be used above its precipitation hardening temperature. Prolonged exposure to 370-
480°C should be avoided if ambient-temperature toughness is critical.

Fabrication
Fabrication of all stainless steels should be carried out only with tools dedicated to stainless steel;
otherwise, tooling and work surfaces must be thoroughly cleaned before use. These precautions
are necessary to avoid cross-contamination of the stainless steel by easily corroded metals that may
discolour the surface of the fabricated product.

Cold Working
Cold forming such as rolling, bending and hydroforming can be performed on 17-4PH but only
in the fully annealed condition. After cold working, stress corrosion resistance is improved by re-
ageing at the precipitation hardening temperature.

Hot Working
Hot working of 17-4 PH should be performed at 950-1200°C. After hot working, full heat
treatment is required. This involves annealing and cooling to room temperature or lower. Then
the component needs to be precipitation hardened to achieve the required mechanical properties.

Machinability
In the annealed condition, 17-4 PH has good machinability, similar to that of 304 stainless steel.
After hardening heat treatment, machining is difficult but possible. Carbide or high speed steel
tools are normally used with standard lubrication. When strict tolerance limits are required, the
dimensional changes due to heat treatment must be taken into account.
Welding
Precipitation hardening steels can be readily welded using procedures similar to those used for
the 300 series of stainless steels. Grade 17-4 PH can be successfully welded without preheating.
Heat treating after welding can be used to give the weld metal the same properties as for the
parent metal. The recommended grade of filler rods for welding 17-4 PH is 17-7 PH.
Applications
Due to the high strength of precipitation hardening stainless steels, most applications are in
aerospace and other high-technology industries. Applications include:
• Gears
• Valves and other engine components
• High strength shafts
• Turbine blades
• Moulding dies
• Nuclear waste casks

Supplied Forms

17-4 PH is typically supplied by Aalco in the following forms:
• Round bar
• Hexagonal bar
• Billet
Source: Aalco
ALLOYS -- NOTES

ALLOYS
An alloy is a homogeneous mixture of two or more elements, at least one of which is a metal,
and where the resulting material has metallic properties. The resulting metallic substance usually
has different properties (sometimes substantially different) from those of its components.
Contents
• 1 Properties
• 2 Classification
• 3 Terminology
• 4 See also

Properties:
Alloys are usually prepared to improve on the properties of their components. For instance, steel
is stronger than iron, its primary component. The physical properties of an alloy, such as density,
reactivity and electrical and thermal conductivity may not differ greatly from the alloy's
elements, but engineering properties, such as tensile strength, shear strength and Young's
modulus, can be substantially different from those of the constituent materials. This is sometimes
due to the differing sizes of the atoms in the alloy—larger atoms exert a compressive force on
neighboring atoms, and smaller atoms exert a tensile force on their neighbors. This helps the
alloy resist deformation, unlike a pure metal where the atoms move more freely. Unlike pure
metals, most alloys do not have a single melting point. Instead, they have a melting range in
which the material is a mixture of solid and liquid phases. The temperature at which melting
begins is called the solidus, and that at which melting is complete is called the liquidus.
However, for most pairs of elements, there is a particular ratio which has a single melting point;
this is called the eutectic mixture.

Classification
Alloys can be classified by the number of their constituents. An alloy with two components is
called a binary alloy; one with three is a ternary alloy, and so forth. Alloys can be further
classified as either substitution alloys or interstitial alloys, depending on their method of
formation. In substitution alloys, the atoms of the components are approximately the same size
and the various atoms are simply substituted for one another in the crystal structure. An example
of a (binary) substitution alloy is brass, made up of copper and zinc. Interstitial alloys occur
when the atoms of one component are substantially smaller than the other and the smaller atoms
fit into the spaces (interstices) between the larger atoms.
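As a rough illustration of the size argument above, the sketch below classifies a binary pair as substitutional or interstitial from the atomic radii. The ~15% difference threshold is the commonly quoted Hume-Rothery rule of thumb and the ~59% radius-ratio limit is a standard textbook guideline; neither figure comes from this text, and the radii used are approximate.

```python
# A minimal sketch of the atomic-size reasoning above. The ~15% size-difference
# threshold (Hume-Rothery rule of thumb for substitutional solubility) and the
# ~59% radius-ratio limit for interstitial sites are textbook guidelines, not
# figures taken from this text; the radii below are approximate metallic radii.

def likely_alloy_type(r_solvent_pm: float, r_solute_pm: float) -> str:
    """Rough classification of a binary alloy from atomic radii in picometres."""
    size_difference = abs(r_solvent_pm - r_solute_pm) / r_solvent_pm
    radius_ratio = r_solute_pm / r_solvent_pm
    if size_difference <= 0.15:
        return "substitutional (similar-sized atoms swap lattice positions)"
    if radius_ratio <= 0.59:
        return "interstitial (small atoms sit in the gaps between large atoms)"
    return "extensive solubility unlikely"

if __name__ == "__main__":
    # Brass: Cu (~128 pm) and Zn (~134 pm) -> substitutional, as stated above.
    print("Cu-Zn:", likely_alloy_type(128.0, 134.0))
    # Carbon steel: Fe (~126 pm) and C (~70 pm) -> interstitial.
    print("Fe-C:", likely_alloy_type(126.0, 70.0))
```
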

Terminology
In practice, some alloys are used so predominantly with respect to their base metals that the
name of the primary constituent is also used as the name of the alloy. For example, 14 karat gold
is an alloy of gold with other elements. Similarly, the silver used in jewelry and the aluminium
used as a structural building material are also alloys. The term "alloy" is sometimes used in
everyday speech as a synonym for a particular alloy. For example, automobile wheels made of
"aluminium alloy" are commonly referred to as simply "alloy wheels". This usage is imprecise,
since steels and most other metals in practical use are also alloys.

See also

• List of alloys
• Intermetallics
• Heat treatment

Retrieved from "http://en.wikipedia.org/wiki/Alloy"

List of alloys
This is a list of common alloys, grouped by base metal in order of increasing atomic number.
Within these headings they are in no particular order. Some of the main alloying elements are
optionally listed after the alloy names.
Contents
• 1 Alloys of magnesium
• 2 Alloys of aluminium
• 3 Alloys of potassium
• 4 Alloys of iron
• 5 Alloys of cobalt
• 6 Alloys of nickel
• 7 Alloys of copper
• 8 Alloys of zinc
• 9 Alloys of gallium
• 10 Alloys of zirconium
• 11 Alloys of silver
• 12 Alloys of indium
• 13 Alloys of tin
• 14 Rare earth alloys
• 15 Alloys of gold
• 16 Alloys of mercury
• 17 Alloys of lead
• 18 Alloys of bismuth
• 19 Alloys of uranium

Alloys of magnesium
• Magnox (aluminium)
• T-Mg-Al-Zn (Bergman phase) is a complex metallic alloy
• Elektron

Alloys of aluminium
Main article: Aluminium alloys

• Al-Li (lithium)
• Duralumin (copper)
• Nambe (aluminium plus seven other undisclosed metals)
• Silumin (silicon)
• AA-8000: used for building wire in the U.S. per the National Electrical Code
• Magnalium (5% magnesium), used in airplane bodies, ladders, etc.
• Aluminium also forms complex metallic alloys, like β-Al-Mg, ξ'-Al-Pd-Mn, T-Al3Mn
• Alnico - alloy of aluminum, nickel, and cobalt used in magnets

Alloys of potassium
• NaK (sodium)

Alloys of iron
See also: Category:Ferrous alloys

• Steel (carbon) (category:steels)


o Stainless steel (chromium, nickel)
 AL-6XN
 Alloy 20
 Celestrium
 Marine grade stainless
 Martensitic stainless steel
 Surgical stainless steel (chromium, molybdenum, nickel)
o Silicon steel (silicon)
o Tool steel (tungsten or manganese)
o Bulat steel
o Chromoly (chromium, molybdenum)
o Crucible steel
o Damascus steel
o HSLA steel
o High speed steel
o Maraging steel
o Reynolds 531
o Wootz steel
• Iron
o Anthracite iron (carbon)
o Cast iron (carbon)
o Pig iron (carbon)
o Wrought iron (carbon)
• Fernico (nickel, cobalt)
• Elinvar (nickel, chromium)
• Invar (nickel)
• Kovar (cobalt)
• Spiegeleisen (manganese, carbon, silicon)
• Ferroalloys (category:Ferroalloys)
o Ferroboron
o Ferrochrome
o Ferromagnesium
o Ferromanganese
o Ferromolybdenum
o Ferronickel
o Ferrophosphorus
o Ferrotitanium
o Ferrovanadium
o Ferrosilicon

Alloys of cobalt
• Megallium
• Stellite (chromium, tungsten, carbon)
o Talonite
• Alnico
• Vitallium

Alloys of nickel
• German silver (copper, zinc)
• Chromel (chromium)
• Hastelloy (molybdenum, chromium, sometimes tungsten)
• Inconel (chromium, iron)
• Monel metal (copper, nickel, iron, manganese)
• Nichrome (chromium, iron, nickel)
• Nicrosil (chromium, silicon, magnesium)
• Nisil (silicon)
• Nitinol (titanium, shape memory alloy)
• Cupronickel (bronze, copper)
• Soft magnetic alloys
o Mu-metal (iron)

Alloys of copper
Main article: Copper alloys

• Beryllium copper (beryllium)


• Billon (silver)
• Brass (zinc)
o Calamine brass (zinc)
o Chinese silver (zinc)
o Dutch metal (zinc)
o Gilding metal (zinc)
o Muntz metal (zinc)
o Pinchbeck (zinc)
o Prince's metal (zinc)
o Tombac (zinc)
• Bronze (tin, aluminium or any other element)
o Aluminium bronze (aluminium)
o Bell metal (tin)
o Florentine bronze (aluminium or tin)
o Guanín
o Gunmetal (tin, zinc)
o Glucydur
o Phosphor bronze (tin and phosphorus)
o Ormolu (Gilt Bronze) (zinc)
o Speculum metal (tin)
• Constantan (nickel)
• Corinthian brass (gold, silver)
• Cunife (nickel, iron)
• Cupronickel (nickel)
• Cymbal alloys (Bell metal) (tin)
• Devarda's alloy (aluminium, zinc)
• Hepatizon (gold, silver)
• Heusler alloy (manganese, tin)
• Manganin (manganese, nickel)
• Molybdochalkos (lead)
• Nickel silver (nickel)
• Nordic gold (aluminium, zinc, tin)
• Shakudo (gold)
• Tumbaga (gold)

Alloys of zinc
• Zamak (aluminium, magnesium, copper)

Alloys of gallium
• Galinstan

Alloys of zirconium
• Zircaloy

Alloys of silver
• Sterling silver (copper)
• Britannia silver (copper)
• Goloid (copper, gold)

Alloys of indium
• Field's metal (bismuth, tin)

Alloys of tin
• Britannium (copper, antimony)[1]
• Pewter (lead, copper)
• Solder (lead, antimony)

Rare earth alloys


• Mischmetal (various rare earths)

Alloys of gold
• Corinthian brass (copper)
• Electrum (silver, copper)
• Tumbaga (copper)
• Rose gold (copper)
• White gold

Alloys of mercury
• Amalgam

Alloys of lead
• Molybdochalkos (copper)
• Solder (tin)
• Terne (tin)
• Type metal (tin, antimony)

Alloys of bismuth
• Wood's metal (lead, tin, cadmium)
• Rose metal (lead, tin)

Alloys of uranium
• Staballoy (depleted uranium with other metals, usually titanium or molybdenum)
Bauschinger effect
The Bauschinger effect refers to a property of materials where the material's stress-strain
characteristics change as a result of the microscopic stress distribution within the material. For
example, tensile cold working raises the tensile yield strength at the expense of the compressive
yield strength.

The Bauschinger effect is named after the German engineer Johann Bauschinger (de:Johann
Bauschinger).

While more tensile cold working increases the tensile yield strength, the local initial compressive
yield strength after tensile cold working is actually reduced. The greater the tensile cold working,
the lower the compressive yield strength.
The Bauschinger effect is normally associated with conditions where the yield strength of a
metal decreases when the direction of strain is changed. It is a general phenomenon found in
most polycrystalline metals.

The basic mechanism for the Bauschinger effect is related to the dislocation structure in the cold-
worked metal. As deformation occurs, the dislocations will accumulate at barriers and produce
dislocation pileups and tangles.

Based on the cold work structure, two types of mechanisms are generally used to explain the
Bauschinger effect.

First, local back stresses may be present in the material, which assist the movement of
dislocations in the reverse direction. Thus, the dislocations can move easily in the reverse
direction and the yield strength of the metal is lower. The pile-up of dislocations at grain
boundaries and Orowan loops around strong precipitates are two main sources of these back
stresses.

Second, when the strain direction is reversed, dislocations of the opposite sign can be produced
from the same source that produced the slip-causing dislocations in the initial direction.
Dislocations with opposite signs can attract and annihilate each other. Since strain hardening is
related to an increased dislocation density, reducing the number of dislocations reduces strength.
The net result is that the yield strength for strain in the opposite direction is less than it would be
if the strain had continued in the initial direction.
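A common way to capture this behaviour in simple constitutive models is linear kinematic hardening, in which forward plastic flow builds a back stress that shifts the elastic range. The sketch below is only an illustration of that idea with arbitrary numbers; it is not Bauschinger's own analysis or a calibrated material model.

```python
# Simplified 1D linear kinematic hardening sketch of the Bauschinger effect:
# forward (tensile) plastic flow builds a back stress that shifts the elastic
# range, so the compressive yield stress seen on load reversal is reduced.
# All numbers are arbitrary illustration values, not measured data.

SIGMA_Y = 300.0   # initial yield strength, MPa
H_KIN = 2000.0    # kinematic hardening modulus, MPa

def yield_stresses_after_prestrain(plastic_prestrain: float):
    """Return (tensile, compressive) yield stresses after a tensile prestrain."""
    back_stress = H_KIN * plastic_prestrain       # centre of the shifted elastic range
    tensile_yield = SIGMA_Y + back_stress         # raised forward yield
    compressive_yield = -SIGMA_Y + back_stress    # reverse yield, closer to zero
    return tensile_yield, compressive_yield

if __name__ == "__main__":
    for eps_p in (0.0, 0.01, 0.02, 0.05):
        tens, comp = yield_stresses_after_prestrain(eps_p)
        print(f"plastic prestrain {eps_p:.2f}: tensile yield {tens:6.0f} MPa, "
              f"compressive yield {comp:7.0f} MPa")
    # The more tensile cold work, the smaller the magnitude of the compressive
    # yield stress, which is the trend described in the notes above.
```
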
BRITTLE FRACTURE
What is brittle fracture?

Basically, brittle fracture is a rapid run of cracks through a stressed material. The cracks usually
travel so fast that you can't tell when the material is about to break. In other words, there is very
little plastic deformation before failure occurs. In most cases, this is the worst type of fracture
because you can't repair visible damage in a part or structure before it breaks. In brittle fracture,
the cracks run close to perpendicular to the applied stress.

This perpendicular fracture leaves a relatively flat surface at the break. Besides having a nearly
flat fracture surface, brittle materials usually contain a pattern on their fracture surfaces. Some
brittle materials have lines and ridges beginning at the origin of the crack and spreading out
across the crack surface.

Other materials, like some steels, have back-to-back V-shaped markings pointing to the origin of
the crack. These V-shaped markings are called chevrons. Very hard or fine-grained materials
have no special pattern on their fracture surface, and amorphous materials like ceramic glass
have shiny smooth fracture surfaces.

Chevron Fracture Surface (Callister p. 185)


Radiating Ridge Fracture Surface (Callister pg. 186, copyright by John Wiley & Sons, inc.)
Types of Brittle Fracture
The first type of fracture is transgranular. In transgranular fracture, the fracture travels
through the grain of the material. The fracture changes direction from grain to grain due
to the different lattice orientation of atoms in each grain. In other words, when the crack
reaches a new grain, it may have to find a new path or plane of atoms to travel on because
it is easier to change direction for the crack than it is to rip through. Cracks choose the
path of least resistance. You can tell when a crack has changed in direction through the
material, because you get a slightly bumpy crack surface.

The second type of fracture is intergranular fracture. Intergranular fracture is the crack
traveling along the grain boundaries, and not through the actual grains. Intergranular
fracture usually occurs when the phase in the grain boundary is weak and brittle (e.g.
cementite in iron's grain boundaries). Think of a metal as one big 3-D puzzle.
Transgranular fracture cuts through the puzzle pieces, and intergranular fracture travels
along the puzzle pieces' pre-cut edges.

Ductile to Brittle Fracture Transition

In fracture, there are many shades of gray. Brittle fracture and ductile fracture are fairly general
terms describing the two opposite extremes of the fracture spectrum. I will explain the factors
that make a material lean toward one type of fracture as opposed to the other type of fracture.

The first and foremost factor is temperature. Basically, at higher temperatures the yield
strength is lowered and the fracture is more ductile in nature. On the opposite end, at
lower temperatures the yield strength is greater and the fracture is more brittle in nature.
This relationship with temperature has to do with atom vibrations.
As temperature increases, the atoms in the material vibrate with greater frequency and
amplitude. This increased vibration allows the atoms under stress to slip to new places in
the material ( i.e. break bonds and form new ones with other atoms in the material). This
slippage of atoms is seen on the outside of the material as plastic deformation, a common
feature of ductile fracture.
When temperature decreases however, the exact opposite is true. Atom vibration
decreases, and the atoms do not want to slip to new locations in the material. So when the
stress on the material becomes high enough, the atoms just break their bonds and do not
form new ones. This decrease in slippage causes little plastic deformation before fracture.
Thus, we have a brittle type fracture.
At moderate temperatures (with respect to the material) the material exhibits
characteristics of both types of fracture. In conclusion, temperature determines the
amount of brittle or ductile fracture that can occur in a material.
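Impact-energy data across the ductile-to-brittle transition are often summarised with a sigmoidal (tanh) curve between a lower (brittle) shelf and an upper (ductile) shelf, and a transition temperature is read off near the mid-point. The sketch below illustrates that fitting idea with invented numbers; it is not data from these notes.

```python
import math

# Generic illustration of the ductile-to-brittle transition: a tanh curve is a
# common way to summarise absorbed impact energy versus temperature, with the
# transition temperature taken at the mid-point energy. All parameter values
# here are invented for the example and are not data from these notes.

def impact_energy_j(temp_c: float, lower_shelf=5.0, upper_shelf=150.0,
                    t_transition_c=-20.0, width_c=25.0) -> float:
    """Sigmoidal model of absorbed impact energy (J) versus temperature."""
    mid = 0.5 * (upper_shelf + lower_shelf)
    half_range = 0.5 * (upper_shelf - lower_shelf)
    return mid + half_range * math.tanh((temp_c - t_transition_c) / width_c)

if __name__ == "__main__":
    # Below the transition the energy sits on the low (brittle) shelf,
    # above it on the high (ductile) shelf.
    for t in (-100, -50, -20, 0, 25, 100):
        print(f"{t:5d} C -> {impact_energy_j(t):6.1f} J")
```
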

Another factor that determines the amount of brittle or ductile fracture that occurs in a material is
dislocation density. The higher the dislocation density, the more brittle the fracture will be in the
material. The idea behind this theory is that plastic deformation comes from the movement of
dislocations.

As dislocations increase in a material due to stresses above the material's yield point, it becomes
increasingly difficult for the dislocations to move because they pile into each other. So a material
that already has a high dislocation density can only deform so much before it fractures in a
brittle manner.

The last factor is grain size. As grains get smaller in a material, the fracture becomes more
brittle. This phenomenon is due to the fact that in smaller grains, dislocations have less space to
move before they hit a grain boundary. When dislocations cannot move very far before fracture,
plastic deformation decreases. Thus, the material's fracture is more brittle. In closing, I
would like to say that these are just the basics of brittle fracture. There are whole books written
on just brittle fracture. So, if this section interested you at all, go look the books up at your local
university library. Good luck with your studies, MSE 2034/44 students.
CERAMICS - NOTES

Ceramics
The word ceramic is derived from the Greek word κεραµικός (keramikos). The term
covers inorganic non-metallic materials whose formation is due to the action of heat. Up
until the 1950s or so, the most important of these were the traditional clays, made into
pottery, bricks, tiles and are like, along with cements and glass. Clay based ceramics
are described in the article on pottery.

A composite material of ceramic and metal is known as cermet. The word ceramic can
be an adjective, and can also be used as a noun to refer to a ceramic material, or a
product of ceramic manufacture. Ceramics is a singular noun referring to the art of
making things out of ceramic materials. The technology of manufacturing and usage of
ceramic materials is part of the field of ceramic engineering.

Many ceramic materials are hard, porous and brittle. The study and development of
ceramics includes methods to mitigate problems associated with these characteristics,
and to accentuate the strengths of the materials as well as to investigate novel
applications.

The American Society for Testing and Materials (ASTM) defines a ceramic article as “an
article having a glazed or unglazed body of crystalline or partly crystalline structure, or
of glass, which body is produced from essentially inorganic, non-metallic substances
and either is formed from a molten mass which solidifies on cooling, or is formed and
simultaneously or subsequently matured by the action of the heat.”[1]

Types of ceramic materials


For convenience ceramic products are usually divided into four sectors, and these are
shown below with some examples:

• Structural, including bricks, pipes, floor and roof tiles


• Refractories, such as kiln linings, gas fire radiants, steel and glass making
crucibles
• Whitewares, including tableware, wall tiles, decorative art objects and sanitary
ware
• Technical, also known as Engineering, Advanced, Special and, in Japan, Fine
Ceramics. Such items include tiles used in the Space Shuttle program, gas
burner nozzles, ballistic protection, nuclear fuel uranium oxide pellets, bio-
medical implants, jet engine turbine blades, and missile nose cones. Frequently
the raw materials do not include clays.

Examples of structural ceramics


• Construction bricks.
• Floor and roof tiles.
• Sewage pipes

Examples of whiteware ceramics


• Bone china
• Earthenware, which is often made from clay, quartz and feldspar.
• Porcelain, which is often made from kaolin
• Stoneware

Classification of technical ceramics


Technical ceramics can also be classified into three distinct material categories:

• Oxides: Alumina, zirconia


• Non-oxides: Carbides, borides, nitrides, silicides
• Composites: Particulate reinforced, combinations of oxides and non-oxides.

Each one of these classes can develop unique material properties.

Examples of technical ceramics

• Barium titanate (often mixed with strontium titanate) displays ferroelectricity, meaning
that its mechanical, electrical, and thermal responses are coupled to one another and
also history-dependent. It is widely used in electromechanical transducers, ceramic
capacitors, and data storage elements. Grain boundary conditions can create PTC
effects in heating elements.
• Bismuth strontium calcium copper oxide, a high-temperature superconductor
• Boron carbide (B4C), which is used in ceramic plates in some personnel,
helicopter and tank armor.
• Boron nitride is structurally isoelectronic to carbon and takes on similar physical
forms: a graphite-like one used as a lubricant, and a diamond-like one used as
an abrasive.
• Ferrite (Fe3O4), which is ferrimagnetic and is used in the magnetic cores of
electrical transformers and magnetic core memory.
• Lead zirconate titanate is another ferroelectric material.
• Magnesium diboride (MgB2), which is an unconventional superconductor.
• Silicon carbide (SiC), which is used as a susceptor in microwave furnaces, a
commonly used abrasive, and as a refractory material.
• Silicon nitride (Si3N4), which is used as an abrasive powder.
• Steatite is used as an electrical insulator.
• Uranium oxide (UO2), used as fuel in nuclear reactors.
• Yttrium barium copper oxide (YBa2Cu3O7-x), another high temperature
superconductor.
• Zinc oxide (ZnO), which is a semiconductor, and used in the construction of
varistors.
• Zirconium dioxide (zirconia), which in pure form undergoes many phase changes
between room temperature and practical sintering temperatures, can be
chemically "stabilized" in several different forms. Its high oxygen ion conductivity
recommends it for use in fuel cells. In another variant, metastable structures can
impart transformation toughening for mechanical applications; most ceramic knife
blades are made of this material.

Properties of ceramics

Mechanical properties

Ceramic materials are usually ionic or covalently-bonded materials, and can be crystalline
or amorphous. A material held together by either type of bond will tend to
fracture before any plastic deformation takes place, which results in poor toughness in
these materials. Additionally, because these materials tend to be porous, the pores and
other microscopic imperfections act as stress concentrators, decreasing the toughness
further, and reducing the tensile strength. These combine to give catastrophic failures,
as opposed to the normally much more gentle failure modes of metals.

These materials do show plastic deformation. However, due to the rigid structure of the
crystalline materials, there are very few available slip systems for dislocations to move,
and so they deform very slowly. With the non-crystalline (glassy) materials, viscous flow
is the dominant source of plastic deformation, and is also very slow. It is therefore
neglected in many applications of ceramic materials.

Electrical properties

Semiconductors

There are a number of ceramics that are semiconductors. Most of these are transition
metal oxides that are II-VI semiconductors, such as zinc oxide.
While there is talk of making blue LEDs from zinc oxide, ceramicists are most interested
in the electrical properties that show grain boundary effects.
One of the most widely used of these is the varistor. These are devices that exhibit the
property that resistance drops sharply at a certain threshold voltage. Once the voltage
across the device reaches the threshold, there is a breakdown of the electrical structure
in the vicinity of the grain boundaries, which results in its electrical resistance dropping
from several megohms down to a few hundred ohms. The major advantage of these is
that they can dissipate a lot of energy, and they self reset — after the voltage across the
device drops below the threshold, its resistance returns to being high.

This makes them ideal for surge-protection applications. As there is control over the
threshold voltage and energy tolerance, they find use in all sorts of applications. The
best demonstration of their ability can be found in electrical substations, where they are
employed to protect the infrastructure from lightning strikes. They have rapid response,
are low maintenance, and do not appreciably degrade from use, making them virtually
ideal devices for this application.

Semiconducting ceramics are also employed as gas sensors. When various gases are
passed over a polycrystalline ceramic, its electrical resistance changes. With tuning to
the possible gas mixtures, very inexpensive devices can be produced.
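A varistor's behaviour is usually summarised by an empirical power law, I = k·V^α, with a large nonlinearity exponent α. The sketch below uses invented constants chosen so the effective resistance falls from megohms to a few hundred ohms over a narrow voltage range, mirroring the description above; it is not a model of any particular device.

```python
# Rough sketch of the varistor behaviour described above. A power law
# I = k * V**alpha with a large exponent is the usual empirical description;
# the constants here are invented so that the effective resistance collapses
# from roughly 10 Mohm near 200 V to a few hundred ohms near 300 V.

K_DEVICE = 1.5e-67   # invented device constant
ALPHA = 27.0         # invented nonlinearity exponent

def varistor_current_a(voltage_v: float) -> float:
    """Empirical power-law current through the illustrative varistor."""
    return K_DEVICE * voltage_v ** ALPHA

def effective_resistance_ohm(voltage_v: float) -> float:
    return voltage_v / varistor_current_a(voltage_v)

if __name__ == "__main__":
    for v in (100.0, 200.0, 250.0, 300.0, 400.0):
        print(f"{v:5.0f} V: I = {varistor_current_a(v):9.3e} A, "
              f"R_eff = {effective_resistance_ohm(v):9.3e} ohm")
    # R_eff drops by many orders of magnitude over a narrow voltage range,
    # which is what makes these devices useful for surge protection.
```
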

Superconductivity

Under some conditions, such as extremely low temperature, some ceramics exhibit
superconductivity. The exact reason for this is not known, but there are two major
families of superconducting ceramics.

Ferroelectricity and supersets

Piezoelectricity, a link between electrical and mechanical response, is exhibited by a
large number of ceramic materials, including the quartz used to measure time in
watches and other electronics. Such devices use both properties of piezoelectrics, using
electricity to produce a mechanical motion (powering the device) and then using this
mechanical motion to produce electricity (generating a signal). The unit of time
measured is the natural interval required for electricity to be converted into mechanical
energy and back again.

The piezoelectric effect is generally stronger in materials that also exhibit pyroelectricity,
and all pyroelectric materials are also piezoelectric. These materials can be used to
interconvert between thermal, mechanical, and/or electrical energy; for instance, after
synthesis in a furnace, a pyroelectric crystal allowed to cool under no applied stress
generally builds up a static charge of thousands of volts. Such materials are used in
motion sensors, where the tiny rise in temperature from a warm body entering the room
is enough to produce a measurable voltage in the crystal.

In turn, pyroelectricity is seen most strongly in materials which also display the
ferroelectric effect, in which a stable electric dipole can be oriented or reversed by
applying an electrostatic field. Pyroelectricity is also a necessary consequence of
ferroelectricity. This can be used to store information in ferroelectric capacitors,
elements of ferroelectric RAM.

The most common such materials are lead zirconate titanate and barium titanate. Aside
from the uses mentioned above, their strong piezoelectric response is exploited in the
design of high-frequency loudspeakers, transducers for sonar, and actuators for atomic
force and scanning tunneling microscopes.

Positive thermal coefficient

Increases in temperature can cause grain boundaries to suddenly become insulating in
some semiconducting ceramic materials, mostly mixtures of heavy metal titanates. The
critical transition temperature can be adjusted over a wide range by variations in
chemistry. In such materials, current will pass through the material until joule heating
brings it to the transition temperature, at which point the circuit will be broken and
current flow will cease. Such ceramics are used as self-controlled heating elements in,
for example, the rear-window defrost circuits of automobiles.

At the transition temperature, the material's dielectric response becomes theoretically
infinite. While a lack of temperature control would rule out any practical use of the
material near its critical temperature, the dielectric effect remains exceptionally strong
even at much higher temperatures. Titanates with critical temperatures far below room
temperature have become synonymous with "ceramic" in the context of ceramic
capacitors for just this reason.
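The self-regulating behaviour described above can be illustrated with a toy lumped-parameter simulation: below the transition temperature the element draws significant power, above it the resistance jumps and heating collapses, so the temperature hovers near the transition. Every value in the sketch below is invented for illustration.

```python
# Toy lumped-parameter simulation of the self-regulating PTC heating element
# described above: resistance jumps sharply once the element passes its
# transition temperature, so Joule heating throttles itself back and the
# temperature hovers near the transition. Every number here is invented.

def ptc_resistance_ohm(temp_c: float, r_cold=10.0, r_hot=10_000.0,
                       t_transition_c=120.0) -> float:
    """Idealised PTC characteristic: low resistance below the transition, high above."""
    return r_cold if temp_c < t_transition_c else r_hot

def simulate_element(voltage_v=24.0, ambient_c=20.0, heat_capacity_j_per_k=50.0,
                     loss_w_per_k=0.5, dt_s=1.0, steps=600) -> float:
    temp_c = ambient_c
    for _ in range(steps):
        power_in_w = voltage_v ** 2 / ptc_resistance_ohm(temp_c)   # Joule heating
        power_out_w = loss_w_per_k * (temp_c - ambient_c)          # losses to surroundings
        temp_c += (power_in_w - power_out_w) * dt_s / heat_capacity_j_per_k
    return temp_c

if __name__ == "__main__":
    # The element climbs to the 120 C transition temperature and then hovers
    # there, which is the self-regulating behaviour described in the text.
    print(f"Settled temperature: {simulate_element():.1f} C")
```
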
Classification of ceramics
Non-crystalline ceramics: Non-crystalline ceramics, being glasses, tend to be formed
from melts. The glass is shaped when either fully molten, by casting, or when in a state
of toffee-like viscosity, by methods such as blowing to a mold. If later heat-treatments
cause this class to become partly crystalline, the resulting material is known as a glass-
ceramic.

Crystalline ceramics: Crystalline ceramic materials are not amenable to a great range
of processing. Methods for dealing with them tend to fall into one of two categories -
either make the ceramic in the desired shape, by reaction in situ, or by "forming"
powders into the desired shape, and then sintering to form a solid body. Ceramic
forming techniques include shaping by hand (sometimes including a rotation process
called "throwing"), slip casting, tape casting (used for making very thin ceramic
capacitors, etc.), injection molding, dry pressing, and other variations. (See also
Ceramic forming techniques. Details of these processes are described in the two books
listed below.) A few methods use a hybrid between the two approaches.

In situ manufacturing

The most common use of this method is in the production of cement and concrete.
Here, the dehydrated powders are mixed with water. This starts hydration reactions,
which result in long, interlocking crystals forming around the aggregates. Over time,
these result in a solid ceramic.
The biggest problem with this method is that most reactions are so fast that good mixing
is not possible, which tends to prevent large-scale construction. However, small-scale
systems can be made by deposition techniques, where the various materials are
introduced above a substrate, and react and form the ceramic on the substrate. This
borrows techniques from the semiconductor industry, such as chemical vapour
deposition, and is very useful for coatings.
These tend to produce very dense ceramics, but do so slowly.

Sintering-based methods

The principle of sintering-based methods is simple. Once a roughly held together
object (called a "green body") is made, it is baked in a kiln, where diffusion processes
cause the green body to shrink. The pores in the object close up, resulting in a denser,
stronger product.

The firing is done at a temperature below the melting point of the ceramic. There is
virtually always some porosity left, but the real advantage of this method is that the
green body can be produced in any way imaginable, and still be sintered. This makes it
a very versatile route.

There are thousands of possible refinements of this process. Some of the most
common involve pressing the green body to give the densification a head start and
reduce the sintering time needed. Sometimes organic binders such as polyvinyl alcohol
are added to hold the green body together; these burn out during the firing (at 200–
350°C).
Sometimes organic lubricants are added during pressing to increase densification, and it
is not uncommon to combine the two, adding both binders and lubricants to a powder
before pressing. The formulation of these organic chemical additives is an art in itself,
and is particularly important in the manufacture of high-performance ceramics such as
those used by the billions for electronics, in capacitors, inductors, sensors, etc.

The specialized formulations most commonly used in electronics are detailed in "Tape
Casting" by R. E. Mistler et al. (American Ceramic Society, Westerville, Ohio, 2000). A
comprehensive book on the subject, for mechanical as well as electronics applications,
is "Organic Additives and Ceramic Processing" by D. J. Shanefield (Kluwer Publishers,
Boston, 1996).

A slurry can be used in place of a powder; it is cast into the desired shape, dried and
then sintered. Indeed, traditional pottery is made with this type of method, using a plastic
mixture worked with the hands.

If a mixture of different materials is used together in a ceramic, the sintering
temperature is sometimes above the melting point of one minor component - a liquid
phase sintering. This results in shorter sintering times compared to solid-state sintering.

Other applications of ceramics


• Ceramics are used in the manufacture of knives. The blade of the ceramic knife
will stay sharp for much longer than that of a steel knife, although it is more brittle
and can be snapped by dropping it on a hard surface.

• Ceramics such as alumina and boron carbide have been used in ballistic
armored vests to repel large-caliber rifle fire. Such plates are known commonly
as small-arms protective inserts (SAPI). Similar material is used to protect
cockpits of some military airplanes, because of the low weight of the material.

• Ceramic balls can be used to replace steel in ball bearings. Their higher
hardness means that they are much less susceptible to wear and can often more
than triple lifetimes. They also deform less under load meaning they have less
contact with the bearing retainer walls and can roll faster. In very high speed
applications, heat from friction during rolling can cause problems for metal
bearings; problems which are reduced by the use of ceramics. Ceramics are also
more chemically resistant and can be used in wet environments where steel
bearings would rust. The major drawback to using ceramics is a significantly
higher cost. In many cases their electrically insulating properties may also be
valuable in bearings.

• In the early 1980s, Toyota researched production of an adiabatic ceramic engine
which could run at a temperature of over 6000 °F (3300 °C). Ceramic engines do
not require a cooling system and hence allow a major weight reduction and
therefore greater fuel efficiency. Fuel efficiency of the engine is also higher at
high temperature, as shown by Carnot's theorem (a short numerical sketch
follows this list). In a conventional metallic engine, much of the energy released
from the fuel must be dissipated as waste heat in order to prevent a meltdown of
the metallic parts. Despite all of these desirable properties, such engines are not
in production because manufacturing ceramic parts to the requisite precision and
durability is difficult. Imperfections in the ceramic lead to cracks, which can cause
potentially dangerous equipment failure. Such engines are possible in laboratory
settings, but mass production is unfeasible with current technology.

• Work is being done in developing ceramic parts for gas turbine engines.
Currently, even blades made of advanced metal alloys used in the engines' hot
section require cooling and careful limiting of operating temperatures. Turbine
engines made with ceramics could operate more efficiently, giving aircraft greater
range and payload for a set amount of fuel.

• Recently, there have been advances in ceramics which include bio-ceramics,
such as dental implants and synthetic bones. Hydroxyapatite, the natural mineral
component of bone, has been made synthetically from a number of biological
and chemical sources and can be formed into ceramic materials. Orthopedic
implants made from these materials bond readily to bone and other tissues in the
body without rejection or inflammatory reactions. Because of this, they are of
great interest for gene delivery and tissue engineering scaffolds. Most
hydroxyapatite ceramics are very porous and lack mechanical strength, and are
used to coat metal orthopedic devices to aid in forming a bond to bone, or as
bone fillers. They are also used as fillers for orthopedic plastic screws to aid in
reducing inflammation and increasing absorption of these plastic materials. Work
is being done to make strong, fully dense nanocrystalline hydroxyapatite ceramic
materials for orthopedic weight-bearing devices, replacing foreign metal and
plastic orthopedic materials with a synthetic natural bone mineral. Ultimately,
these ceramic materials may be used as bone replacements or, with the
incorporation of protein collagens, as synthetic bones.
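
The Carnot's theorem point in the ceramic-engine item above can be illustrated numerically. The sketch below is only a back-of-the-envelope illustration: the hot-side temperatures are round figures chosen for contrast, not engine specifications from this text.

def carnot_limit(t_hot_c, t_cold_c=25.0):
    """Ideal Carnot efficiency limit, with reservoir temperatures given in Celsius."""
    t_hot_k = t_hot_c + 273.15
    t_cold_k = t_cold_c + 273.15
    return 1.0 - t_cold_k / t_hot_k

# Cooled metallic engine vs. uncooled ceramic engine (illustrative peak temperatures).
for t_hot in (900.0, 3300.0):
    print(f"Hot side {t_hot:6.0f} C -> Carnot limit {carnot_limit(t_hot):.0%}")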

Ceramics
A ceramic has traditionally been defined as “an inorganic, nonmetallic solid that is prepared
from powdered materials, is fabricated into products through the application of heat, and displays
such characteristic properties as hardness, strength, low electrical conductivity, and brittleness."
The word ceramic comes from the Greek word "keramikos", which means "pottery." Ceramics are
typically crystalline in nature and are compounds formed between metallic and nonmetallic
elements, such as aluminum and oxygen (alumina, Al2O3), calcium and oxygen (calcia, CaO),
and silicon and nitrogen (silicon nitride, Si3N4).

Depending on their method of formation, ceramics can be dense or lightweight. Typically, they
will demonstrate excellent strength and hardness properties; however, they are often brittle in
nature. Ceramics can also be formed to serve as electrically conductive materials or insulators.
Some ceramics, like superconductors, also display magnetic properties. They are also more
resistant to high temperatures and harsh environments than metals and polymers. Because of this
wide range of properties, ceramic materials are used in a multitude of applications.

The broad categories or segments that make up the ceramic industry can be classified as:

• Structural clay products (brick, sewer pipe, roofing and wall tile, flue linings, etc.)
• Whitewares (dinnerware, floor and wall tile, electrical porcelain, etc.)
• Refractories (brick and monolithic products used in metal, glass, cements, ceramics,
energy conversion, petroleum, and chemicals industries)
• Glasses (flat glass (windows), container glass (bottles), pressed and blown glass
(dinnerware), glass fibers (home insulation), and advanced/specialty glass (optical
fibers))
• Abrasives (natural (garnet, diamond, etc.) and synthetic (silicon carbide, diamond, fused
alumina, etc.) abrasives are used for grinding, cutting, polishing, lapping, or pressure
blasting of materials)
• Cements (for roads, bridges, buildings, dams, etc.)
• Advanced ceramics
o Structural (wear parts, bioceramics, cutting tools, and engine components)
o Electrical (capacitors, insulators, substrates, integrated circuit packages,
piezoelectrics, magnets and superconductors)
o Coatings (engine components, cutting tools, and industrial wear parts)
o Chemical and environmental (filters, membranes, catalysts, and catalyst supports)
The atoms in ceramic materials are held together by a chemical bond which will be discussed a
bit later. Briefly though, the two most common chemical bonds for ceramic materials are
covalent and ionic. Covalent and ionic bonds are much stronger than metallic bonds and,
generally speaking, this is why ceramics are brittle and metals are ductile.

Ceramic Structures
As discussed in the introduction, ceramics and related materials cover a wide range of objects.
Ceramics are a little more complex than metallic structures, which is why metals were covered
first. A ceramic has traditionally been defined as “an inorganic, nonmetallic solid that is prepared
from powdered materials and is fabricated into products through the application of heat.” Most
ceramics are made up of two or more elements. This is called a compound. For example, alumina
(Al2O3) is a compound made up of aluminum atoms and oxygen atoms. The two most common
chemical bonds for ceramic materials are covalent and ionic. The bonding of atoms together is
much stronger in covalent and ionic bonding than in metallic. This is why ceramics generally
have the following properties: high hardness, high compressive strength, and chemical inertness.
This strong bonding also accounts for the less attractive properties of ceramics, such as low
ductility and low tensile strength. The absence of free electrons is responsible for making most
ceramics poor conductors of electricity and heat. However, it should be noted that the crystal
structures of ceramics are many and varied and this results in a very wide range of properties.
For example, while ceramics are perceived as electrical and thermal insulators, ceramic oxide
(initially based on Y-Ba-Cu-O) is the basis for high temperature superconductivity. Diamond and
silicon carbide have a higher thermal conductivity than aluminum or copper. Control of the
microstructure can overcome inherent stiffness to allow the production of ceramic springs, and
ceramic composites have been produced with a fracture toughness about half that of steel.
Also, the atomic structures are often of low symmetry, which gives some ceramics interesting
electromechanical properties like piezoelectricity, used in sensors and transducers. The
structure of most ceramics varies from relatively simple to very complex. The microstructure can
be entirely glassy (glasses only); entirely crystalline; or a combination of crystalline and glassy.
In the latter case, the glassy phase usually surrounds small crystals, bonding them together. The
main compositional classes of engineering ceramics are the oxides, nitrides and carbides.
COLD WORK AND HOT WORK

Cold working refers to plastic deformation that occurs usually, but not necessarily, at
room temperature.

Hot working refers to plastic deformation carried out above the recrystallization
temperature.

Warm working, as the name implies, is carried out at intermediate temperatures. It is a
compromise between cold and hot working.

The temperature ranges for these three categories of plastic deformation are given in the
next table in terms of a ratio, where T is the working temperature and Tm is the melting
point of the metal, both on the absolute scale. Although it is a dimensionless quantity,
this ratio is known as the homologous temperature.
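
The table referred to above is not reproduced here, so the sketch below assumes the commonly quoted textbook boundaries of roughly T/Tm < 0.3 for cold working, 0.3 to 0.6 for warm working and above about 0.6 for hot working; those thresholds, the function name and the steel temperatures are assumptions for illustration, not values taken from this text.

def working_regime(t_work_k, t_melt_k):
    """Classify plastic deformation by the homologous temperature T/Tm (kelvin)."""
    ratio = t_work_k / t_melt_k
    if ratio < 0.3:
        regime = "cold working"
    elif ratio < 0.6:
        regime = "warm working"
    else:
        regime = "hot working"
    return f"{regime} (T/Tm = {ratio:.2f})"

# Steel (Tm ~ 1810 K) worked at three temperatures:
print(working_regime(293.0, 1810.0))   # room temperature -> cold working
print(working_regime(950.0, 1810.0))   # ~680 C -> warm working
print(working_regime(1450.0, 1810.0))  # ~1180 C -> hot working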

Definition:

As stated before, cold working refers to plastic deformation that occurs usually, but not
necessarily, at room temperature.

For example: Deforming lead at room temperature is a hot working process because the
recrystallization temperature of lead is about room temperature.

Cold and hot are relative terms.

Plastic deformation is a deformation in which the material does not return to its original
shape; this is the opposite of an elastic deformation.

Effects of Cold Working:

The behavior and workability of the metals depend largely on whether deformation
takes place below or above the recrystallization temperature.

Deformation using cold working results in:


· Higher stiffness and strength, but

· Reduced malleability and ductility of the metal.

· Anisotropy.

-----------------------------------------------------------------------------------------------------

Hot Working

Definition

Hot working is the deformation that is carried out above the recrystallization
temperature.

In these circumstances, annealing takes place while the metal is worked rather than
being a separate process. The metal can therefore be worked without it becoming work
hardened. Hot working is usually carried out with the metal at a temperature of about
0.6 of its melting point.

Effects of hot working

· At high temperature, scaling and oxidation occur. Scaling and oxidation produce an
undesirable surface finish. Most ferrous metals need to be cold worked after hot
working in order to improve the surface finish.

· The amount of force needed to perform hot working is less than that for cold working.

· The mechanical properties of the material are essentially unchanged by hot working,
since no work hardening is retained.

· The metal usually experiences a decrease in yield strength when hot worked.
Therefore, it is possible to hot work the metal without causing any fracture.

Quenching is the sudden immersion of a heated metal into cold water or oil. It is used to
make the metal very hard. To reverse the effects of quenching, tempering is used, which
is the reheating of the metal for a period of time.
Methods used for cold and hot working include rolling and forging.

The advantages of hot working are

• Lower working forces to produce a given shape, which means the machines
involved don't have to be as strong, which means they can be built more cheaply;
• The possibility of producing a very dramatic shape change in a single working
step, without causing large amounts of internal stress, cracks or cold working;
• Sometimes hot working can be combined with a casting process so that metal is
cast and then immediately hot worked. This saves money because we don't have
to pay for the energy to reheat the metal.
• Hot working tends to break up large crystals in the metal and can produce a
favourable alignment of elongated crystals (see DeGarmo Fig. 17-4 below).
• Hot working can remove some kinds of defects that occur in cast metals. It can
close gas pockets (bubbles) or voids in a cast billet; and it may also break up
non-metallic slag which can sometimes get caught in the melt (inclusions).

The main problems, however, are

• If the recrystallisation temperature of the worked metal is high (e.g. if we are
talking about steel), specialised methods are needed to protect the machines that
work the metal. The working processes are also dangerous to human operators
and very unpleasant to work near (see picture below for some idea why).
• The surface finish of hot worked steel tends to be pretty crude because (a) the
dies or rollers wear quite rapidly; (b) there is a lot of dimensional change as the
worked object cools; and (c) there is the constant annoying problem of scale
formation on the surface of the hot steel.

Of course smart people have found ways to minimise the problems or work around
them - more below - and as a result hot working is a very common and useful process.
We just have to be aware of its limits and follow the hot working operations with other
types of manufacturing process that can fix the problems that occur.

Cold working

As explained above, when we work a metal below the recrystallisation temperature,
there is an accumulation of a kind of material damage at the atomic level, through the
pile-up of dislocations. However this is not necessarily a bad thing. Many useful
engineering objects are deliberately cold-worked as part of the manufacturing process
to achieve improved properties. One common example is fencing wire. It is cold-drawn
in the final stages, before being galvanised (plated with zinc) and coiled ready for sale.
The cold working stages increase the yield stress of the wire, meaning we can pull harder on the
wire before it deforms plastically (stretches). That's helpful when you are stringing a
fence. However the cold working does not increase the ultimate strength of the
material. So in a sense, cold working uses up some of the safety margin of the material.
If a very strongly cold worked material is overloaded, it could well just break like a brittle
material with no warning. So we try to design cold working as a compromise. A little bit
can be good: too much could be dangerous.

The advantages of cold working are

• A better surface finish may be achieved;


• Dimensional accuracy can be excellent because the work is not hot so it doesn't
shrink on cooling; also the low temperatures mean the tools such as dies and
rollers can last a long time without wearing out.
• Usually there is no problem with oxidative effects such as scale formation. In fact,
cold rolling (for example) can make such scale come off the surface of a
previously hot-worked object.
• Controlled amounts of cold work may be introduced.
• As with hot working, the grain structure of the material is made to follow the
deformation direction, which can be good for the strength of the final product.
• Strength and hardness are increased, although at the expense of ductility.
• OH & S problems related to working near hot metal are eliminated.

However

• There is a limit to how much cold work can be done on a given piece of metal.
See the discussion above about accumulation of damage in the form of piled up
dislocations. There are ways to get around this problem, see below.
• Higher forces are required to produce a given deformation, which means we
need heavily built, strong forming machines (= $$$).

A neat trick: cold work then normalise

Cold working has many advantages and is very much the more common type of metal
forming. However if a large overall deformation is desired, how can we do it using only
cold working? The answer is: do some cold work, then put the object through a heat-
treatment cycle to relieve the atomic-scale damage caused by the cold work. This is
called annealing or normalising the metal. It is done by heating the metal object above
the recrystallisation temperature, waiting a few minutes, then allowing it to cool. Of
course we have to pay for the energy to do the heating.
This type of cold-work/anneal/cold-work/anneal sequence is used by plumbers who
shape copper tube on a building site. When a piece of tube has to be bent sharply, it is
done in easy stages with a proper annealing between each stage (usually done using a
hand-held gas flame). This ensures that the metal won't crack during the bending
operations.
Think about working with a sheet of lead on a nice warm day in the Australian sun. The
lead will likely be above its recrystallisation temperature, with no special heating
required. This can actually be very useful. It means you can shape your sheet of lead
for hours - bend it back and forth, hammer it out, whatever - and it will probably accept
all the deformation without cracking. This is one of the reasons lead sheet was so popular
in ancient times as a roofing/guttering material (for those who could afford it). Any
strange shape needed could be hammered out of a sheet or even a lump, right on site,
and with no special furnaces or other technology.

Composite material

Composite materials (or composites for short) are engineered materials made from
two or more constituent materials with significantly different physical or chemical
properties and which remain separate and distinct on a macroscopic level within the
finished structure.

Background

The most primitive composite materials comprised straw and mud in the form of bricks
for building construction; the Biblical book of Exodus speaks of the Israelites being
oppressed by Pharaoh, by being forced to make bricks without straw. The ancient brick-
making process can still be seen on Egyptian tomb paintings in the Metropolitan
Museum of Art[1]. The most advanced examples perform routinely on spacecraft in
demanding environments. The most visible applications pave our roadways in the form
of either steel and aggregate reinforced portland cement or asphalt concrete. Those
composites closest to our personal hygiene form our shower stalls and bath tubs made
of fiberglass. Solid surface, imitation granite and cultured marble sinks and counter tops
are widely used to enhance our living experiences.

There are two categories of constituent materials: matrix and reinforcement. At least
one portion of each type is required. The matrix material surrounds and supports the
reinforcement materials by maintaining their relative positions. The reinforcements
impart their special mechanical and physical properties to enhance the matrix
properties.

A synergism produces material properties unavailable from the individual constituent
materials, while the wide variety of matrix and strengthening materials allows the
designer of the product or structure to choose an optimum combination. Engineered
composite materials must be formed to shape. The matrix material can be introduced to
the reinforcement before or after the reinforcement material is placed into the mold
cavity or onto the mold surface.

The matrix material experiences a melding event, after which the part shape is
essentially set. Depending upon the nature of the matrix material, this melding event
can occur in various ways such as chemical polymerization or solidification from the
melted state.

A variety of molding methods can be used according to the end-item design
requirements. The principal factors impacting the methodology are the natures of the
chosen matrix and reinforcement materials. Another important factor is the gross
quantity of material to be produced. Large quantities can be used to justify high capital
expenditures for rapid and automated manufacturing technology. Small production
quantities are accommodated with lower capital expenditures but higher labor and
tooling costs at a correspondingly slower rate.

Most commercially produced composites use a polymer matrix material often called a
resin solution. There are many different polymers available depending upon the starting
raw ingredients. There are several broad categories, each with numerous variations.
The most common are known as polyester, vinyl ester, epoxy, phenolic, polyimide,
polyamide, polypropylene, PEEK, and others. The reinforcement materials are often
fibers but also commonly ground minerals.

Molding methods

In general, the reinforcing and matrix materials are combined, compacted and
processed to undergo a melding event. After the melding event, the part shape is
essentially set, although it can deform under certain process conditions. For a
thermoset polymeric matrix material, the melding event is a curing reaction that is
initiated by the application of additional heat or chemical reactivity such as an organic
peroxide. For a thermoplastic polymeric matrix material, the melding event is a
solidification from the melted state. For a metal matrix material such as titanium foil, the
melding event is a fusing at high pressure and a temperature near the melt point.

For many molding methods, it is convenient to refer to one mold piece as a "lower" mold
and another mold piece as an "upper" mold. Lower and upper refer to the different faces
of the molded panel, not the mold's configuration in space. In this convention, there is
always a lower mold, and sometimes an upper mold. Part construction begins by
applying materials to the lower mold. Lower mold and upper mold are more generalized
descriptors than more common and specific terms such as male side, female side, a-
side, b-side, tool side, bowl, hat, mandrel, etc. Continuous manufacturing processes use
a different nomenclature.
The molded product is often referred to as a panel. For certain geometries and material
combinations, it can be referred to as a casting. For certain continuous processes, it can
be referred to as a profile.

Open molding
A process using a rigid, one-sided mold which shapes only one surface of the panel.
The opposite surface is determined by the amount of material placed upon the lower
mold. Reinforcement materials can be placed manually or robotically. They include
continuous fiber forms fashioned into textile constructions and chopped fiber. The matrix
is generally a resin, and can be applied with a pressure roller, a spray device or
manually. This process is generally done at ambient temperature and atmospheric
pressure. Two variations of open molding are Hand Layup and Spray-up.

Vacuum bag molding

A process using a two-sided mold set that shapes both surfaces of the panel. On the
lower side is a rigid mold and on the upper side is a flexible membrane. The flexible
membrane can be a reusable silicone material or an extruded polymer film such as
nylon. Reinforcement materials can be placed on the lower mold manually or robotically,
generally as continuous fiber forms fashioned into textile constructions. The matrix is
generally a resin. The fiber form may be pre-impregnated with the resin in the form of
prepreg fabrics or unidirectional tapes. Otherwise, liquid matrix material is introduced to
dry fiber forms prior to applying the flexible film. Then, vacuum is applied to the mold
cavity. This process can be performed at either ambient or elevated temperature with
ambient atmospheric pressure acting upon the vacuum bag.

Autoclave molding

A process using a two-sided mold set that forms both surfaces of the panel. On the
lower side is a rigid mold and on the upper side is a flexible membrane made from
silicone or an extruded polymer film such as nylon. Reinforcement materials can be
placed manually or robotically. They include continuous fiber forms fashioned into textile
constructions. Most often, they are pre-impregnated with the resin in the form of prepreg
fabrics or unidirectional tapes. In some instances, a resin film is placed upon the lower
mold and dry reinforcement is placed above. The upper mold is installed and vacuum is
applied to the mold cavity. Then, the assembly is placed into an autoclave pressure
vessel. This process is generally performed at both elevated pressure and elevated
temperature. The use of elevated pressure facilitates a high fiber volume fraction and
low void content for maximum structural efficiency.
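
The fiber volume fraction mentioned above can be estimated for a cured prepreg ply from the fiber areal weight, the fiber density and the cured ply thickness. The relation below is simple geometry; the prepreg numbers are illustrative assumptions rather than data from this text.

def fiber_volume_fraction(areal_weight_g_m2, fiber_density_g_cm3, ply_thickness_mm):
    """Vf = (solid-fiber equivalent thickness) / (cured ply thickness)."""
    # Convert fiber areal weight to an equivalent solid-fiber thickness in mm.
    fiber_thickness_mm = areal_weight_g_m2 / (fiber_density_g_cm3 * 1.0e6) * 1000.0
    return fiber_thickness_mm / ply_thickness_mm

# Hypothetical carbon/epoxy prepreg: 150 g/m2 of fiber, fiber density 1.78 g/cc,
# cured ply thickness 0.145 mm -> roughly 58% fiber by volume.
print(f"Fiber volume fraction ~ {fiber_volume_fraction(150.0, 1.78, 0.145):.0%}")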
Resin transfer molding

A process using a two-sided mold set that forms both surfaces of the panel. The lower
side is a rigid mold. The upper side can be a rigid or flexible mold. Flexible molds can
be made from composite materials, silicone or extruded polymer films such as nylon.
The two sides fit together to produce a mold cavity. The distinguishing feature of resin
transfer molding is that the reinforcement materials are placed into this cavity and the
mold set is closed prior to the introduction of matrix material. Resin transfer molding
includes numerous varieties which differ in the mechanics of how the resin is introduced
to the reinforcement in the mold cavity. These variations include everything from
vacuum infusion to vacuum assisted resin transfer molding. This process can be
performed at either ambient or elevated temperature.

Other

Other types of molding include press molding, transfer molding, pultrusion molding,
filament winding, casting, centrifugal casting and continuous casting.

Tooling

Some types of tooling materials used in the manufacturing of composite structures
include invar, steel, aluminum, reinforced silicone rubber, nickel, and carbon fiber.
Selection of the tooling material is typically based on, but not limited to, the coefficient of
thermal expansion, expected number of cycles, end item tolerance, desired or required
surface condition, method of cure, glass transition temperature of the material being
molded, molding method, matrix, cost and a variety of other considerations.

Mechanics of composite materials

The physical properties of composite materials are generally not isotropic in nature, but
rather are typically orthotropic. For instance, the stiffness of a composite panel will often
depend upon the directional orientation of the applied forces and/or moments. Panel
stiffness is also dependent on the design of the panel: for instance, the fiber
reinforcement and matrix used, the method of panel build, thermoset versus
thermoplastic, the type of weave, and the orientation of the fiber axis relative to the primary force.

In contrast, isotropic materials (for example, aluminium or steel), in standard wrought
forms, typically have the same stiffness regardless of the directional orientation of the
applied forces and/or moments.

The relationship between forces/moments and strains/curvatures for an isotropic
material can be described with the following material properties: Young's modulus, the
shear modulus and Poisson's ratio, in relatively simple mathematical relationships.
For a fully anisotropic material, the stiffness is a fourth-order tensor (usually written as
a 6×6 matrix in Voigt notation) and can require up to 21 independent material property
constants. For the special case of orthotropy, there are three different material property
constants for each of Young's modulus, shear modulus and Poisson's ratio, for a total of
nine material property constants to describe the relationship between forces/moments
and strains/curvatures.
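
For reference, one common sign convention writes the orthotropic normal-strain relations (a standard elasticity result, quoted here rather than taken from this text) as:

\varepsilon_1 = \frac{\sigma_1}{E_1} - \nu_{21}\frac{\sigma_2}{E_2} - \nu_{31}\frac{\sigma_3}{E_3},
\qquad
\gamma_{12} = \frac{\tau_{12}}{G_{12}},
\qquad
\frac{\nu_{12}}{E_1} = \frac{\nu_{21}}{E_2}

with the analogous expressions for directions 2 and 3; the nine independent constants are then E1, E2, E3, G12, G13, G23 and three Poisson's ratios.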

Categories of fiber reinforced composite materials

Fiber reinforced composite materials can be divided into two main categories normally
referred to as short fiber reinforced materials and continuous fiber reinforced materials.
Continuous reinforced materials will often constitute a layered or laminated structure.
The woven and continuous fiber styles are typically available in a variety of forms, being
pre-impregnated with the given matrix (resin), dry, uni-directional tapes of various
widths, plain weave, harness satins, braided, and stitched.

The short and long fibers are typically employed in compression molding and sheet
molding operations. These come in the form of flakes, chips, and random mat (which
can also be made from a continuous fiber laid in random fashion until the desired
thickness of the ply / laminate is achieved).

Failure of Composites

Shocks, impact, loadings or repeated cyclic stresses can cause the laminate to
separate at the interface between two layers, a condition known as delamination.
Individual fibers can separate from the matrix, e.g. by fiber pull-out.

Examples of composite materials

Fiber Reinforced Polymers or FRPs include wood (cellulose fibers in a lignin and
hemicellulose matrix), carbon-fiber reinforced plastic or CFRP, and glass-fiber
reinforced plastic or GFRP (also GRP). If classified by matrix, there are thermoplastic
composites, short fiber thermoplastics, long fiber thermoplastics and long fiber
reinforced thermoplastics. There are numerous thermoset composites, but advanced
systems usually incorporate aramid fibre and carbon fibre in an epoxy resin matrix.
Composites can also utilise metal fibres reinforcing other metals, as in Metal matrix
composites or MMC. Ceramic matrix composites include Bone (hydroxyapatite
reinforced with collagen fibers), Cermet (ceramic and metal) and Concrete. Organic
matrix/ceramic aggregate composites include Asphalt concrete, Mastic asphalt, Mastic
roller hybrid, Dental composite, Syntactic foam and Mother of Pearl. Chobham armour
is a special composite used in military applications.

Additionally, thermoplastic composite materials can be formulated with specific metal
powders, resulting in materials with a density range from 2 g/cc to 11 g/cc (the same density
as lead). These materials can be used in place of traditional materials such as
aluminum, stainless steel, brass, bronze, copper, lead, and even tungsten in weighting,
balancing, vibration dampening, and radiation shielding applications. High density
composites are an economically viable option when certain materials are deemed
hazardous and are banned (such as lead) or when secondary operations costs (such as
machining, finishing, or coating) are a factor.
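
A simple rule-of-mixtures estimate shows how metal-powder loading moves a thermoplastic into the 2-11 g/cc range quoted above. The sketch assumes a fully dense (void-free) mixture, and the tungsten and nylon densities are illustrative handbook values, not figures from this text.

def composite_density(vf_filler, filler_density, matrix_density):
    """Rule of mixtures for density: rho_c = Vf*rho_f + (1 - Vf)*rho_m (no porosity)."""
    return vf_filler * filler_density + (1.0 - vf_filler) * matrix_density

for vf in (0.10, 0.30, 0.55):
    rho = composite_density(vf, filler_density=19.3, matrix_density=1.14)  # tungsten powder in nylon
    print(f"{vf:.0%} tungsten by volume -> about {rho:.1f} g/cc")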
Engineered wood includes a wide variety of different products such as Plywood,
Oriented strand board, Wood plastic composite (recycled wood fiber in polyethylene
matrix), Pykrete (sawdust in ice matrix), Plastic-impregnated or laminated paper or
textiles, Arborite, Formica (plastic) and Micarta.

Typical Products
Composite materials have gained popularity (despite their generally high cost) in high-
performance products such as aerospace components (tails, wings, fuselages,
propellers), boat and scull hulls, and racing car bodies. More mundane uses include
fishing rods and storage tanks.
CORROSION
Corrosion is deterioration of essential properties in a material due to reactions with its
surroundings.

In the most common use of the word, this means the loss of electrons from metals reacting
with water and oxygen. Weakening of iron due to oxidation of the iron atoms is a well-
known example of electrochemical corrosion. This is commonly known as rust. This
type of damage usually affects metallic materials, and typically produces oxide(s) and/or
salt(s) of the original metal. Corrosion also includes the dissolution of ceramic materials
and can refer to discoloration and weakening of polymers by the sun's ultraviolet light.

Most structural alloys corrode merely from exposure to moisture in the air, but the
process can be strongly affected by exposure to certain substances (see below).
Corrosion can be concentrated locally to form a pit or crack, or it can extend across a
wide area to produce general deterioration. While some efforts to reduce corrosion
merely redirect the damage into less visible, less predictable forms, controlled corrosion
treatments such as passivation and chromate-conversion will increase a material's
corrosion resistance.

Corrosion in nonmetals

Most ceramic materials are almost entirely immune to corrosion. The strong ionic and/or
covalent bonds that hold them together leave very little free chemical energy in the
structure; they can be thought of as already corroded. When corrosion does occur, it is
almost always a simple dissolution of the material or chemical reaction, rather than an
electrochemical process. A common example of corrosion protection in ceramics is the
lime added to soda-lime glass to reduce its solubility in water; though it is not nearly as
soluble as pure sodium silicate, normal glass does form sub-microscopic flaws when
exposed to moisture. Because glass is brittle, such flaws cause a dramatic reduction in the
strength of a glass object during its first few hours at room temperature.

The degradation of polymeric materials is due to a wide array of complex and often
poorly-understood physiochemical processes. These are strikingly different from the
other processes discussed here, and so the term "corrosion" is only applied to them in a
loose sense of the word. Because of their large molecular weight, very little entropy can
be gained by mixing a given mass of polymer with another substance, making them
generally quite difficult to dissolve. While dissolution is a problem in some polymer
applications, it is relatively simple to design against. A more common and related
problem is swelling, where small molecules infiltrate the structure, reducing strength and
stiffness and causing a volume change. Conversely, many polymers (notably flexible
vinyl) are intentionally swelled with plasticizers, which can be leached out of the
structure, causing brittleness or other undesirable changes. The most common form of
degradation, however, is a decrease in polymer chain length. Mechanisms which break
polymer chains are familiar to biologists because of their effect on DNA: ionizing
radiation (most commonly ultraviolet light), free radicals, and oxidizers such as oxygen,
ozone, and chlorine. Additives can slow these processes very effectively, and can be as
simple as a UV-absorbing pigment (i.e., titanium dioxide or carbon black). Plastic
shopping bags often do not include these additives so that they break down more easily
as litter.
The remainder of this article is about electrochemical corrosion.

Electrochemical theory
Main article: Electrochemistry

One way to understand the structure of metals on the basis of particles is to imagine an
array of positively-charged ions sitting in a negatively-charged "gas" of free electrons.
Coulombic attraction holds these oppositely-charged particles together, but the
positively-charged ions are attracted to negatively charged particles outside the metal
as well, such as the negative ions (anions) in an electrolyte. For a given ion at the
surface of a metal, there is a certain amount of energy to be gained or lost by dissolving
into the electrolyte or becoming a part of the metal, which reflects an atom-scale tug-of-
war between the electron gas and dissolved anions. The quantity of energy then
strongly depends on a host of variables, including the types of ions in a solution and
their concentrations, and the number of electrons present at the metal's surface. In turn,
corrosion processes cause electrochemical changes, meaning that they strongly affect
all of these variables. The overall interaction between electrons and ions tends to
produce a state of local thermodynamic equilibrium that can often be described using
basic chemistry and a knowledge of initial conditions.
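
The dependence of that energy balance on the kinds and concentrations of dissolved ions is usually quantified with the Nernst equation, a standard electrochemistry result quoted here for reference:

E = E^{\circ} - \frac{RT}{nF}\ln Q

where E° is the standard electrode potential of the half-reaction, n the number of electrons transferred, F the Faraday constant, R the gas constant, T the absolute temperature and Q the reaction quotient of the dissolved species.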

Galvanic series

Main article: Galvanic series

In a given environment (one standard medium is aerated, room-temperature seawater),
one metal will be either more noble or more active than the next, based on how strongly
its ions are bound to the surface. Two metals in electrical contact share the same
electron gas, so that the tug-of-war at each surface is translated into a competition for
free electrons between the two materials. The noble metal will tend to take electrons
from the active one, while the electrolyte hosts a flow of ions in the same direction. The
resulting mass flow or electrical current can be measured to establish a hierarchy of
materials in the medium of interest. This hierarchy is called a galvanic series, and can
be very useful in predicting and understanding corrosion.
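
A crude illustration of building such a hierarchy is to sort metals by their standard electrode potentials. The values below are approximate standard potentials versus the standard hydrogen electrode; a true galvanic series is measured in a specific medium (such as seawater) and its ordering can differ, so treat this as an illustrative sketch only.

# Approximate standard electrode potentials in volts vs. SHE (illustrative values).
STANDARD_POTENTIALS_V = {
    "magnesium": -2.37,
    "aluminium": -1.66,
    "zinc": -0.76,
    "iron": -0.44,
    "nickel": -0.25,
    "copper": 0.34,
    "silver": 0.80,
    "gold": 1.50,
}

# Most active (anodic) metals first, most noble (cathodic) last.
for metal, potential in sorted(STANDARD_POTENTIALS_V.items(), key=lambda item: item[1]):
    print(f"{metal:10s} {potential:+.2f} V")

# In a couple, the more active (lower-potential) metal tends to become the corroding anode:
# zinc sacrificially protects iron, while iron corrodes preferentially when coupled to copper.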

Resistance to corrosion
Some metals are more intrinsically resistant to corrosion than others, either due to the
fundamental nature of the electrochemical processes involved or due to the details of
how reaction products form. For some examples, see galvanic series. If a more
susceptible material is used, many techniques can be applied during an item's
manufacture and use to protect its materials from damage.

Intrinsic chemistry

Gold nuggets do not corrode, even on a geological time scale.

The materials most resistant to corrosion are those for which corrosion is
thermodynamically unfavorable. Any corrosion products of gold or platinum tend to
decompose spontaneously into pure metal, which is why these elements can be found
in metallic form on Earth, and is a large part of their intrinsic value. More common
"base" metals can only be protected by more temporary means.
Some metals have naturally slow reaction kinetics, even though their corrosion is
thermodynamically favorable. These include such metals as zinc, magnesium, and
cadmium. While corrosion of these metals is continuous and ongoing, it happens at an
acceptably slow rate. An extreme example is graphite, which releases large amounts of
energy upon oxidation, but has such slow kinetics that it is effectively immune to
electrochemical corrosion under normal conditions.

Passivation

Main article: Passivation


Given the right conditions, a thin film of corrosion products can form on a metal's
surface spontaneously, acting as a barrier to further oxidation. When this layer stops
growing at less than a micrometre thick under the conditions that a material will be used
in, the phenomenon is known as passivation (rust, for example, usually grows to be
much thicker, and so is not considered passivation, because this mixed oxidized layer is
not protective). While this effect is in some sense a property of the material, it serves as
an indirect kinetic barrier: the reaction is often quite rapid unless and until an
impermeable layer forms. Passivation in air and water at moderate pH is seen in such
materials as aluminium, stainless steel, titanium, and silicon.

The conditions required for passivation are specific to the material. The effect of pH is
recorded using Pourbaix diagrams, but many other factors are influential. Some
conditions that inhibit passivation include: high pH for aluminum, low pH or the presence
of chloride ions for stainless steel, high temperature for titanium (in which case the
oxide dissolves into the metal, rather than the electrolyte) and fluoride ions for silicon.
On the other hand, sometimes unusual conditions can bring on passivation in materials
that are normally unprotected, as the alkaline environment of concrete does for steel
rebar. Exposure to a liquid metal such as mercury or hot solder can often circumvent
passivation mechanisms.

Surface treatments

Applied coatings

Main article: Galvanization

Plating, painting, and the application of enamel are the most common anti-corrosion
treatments. They work by providing a barrier of corrosion-resistant material between the
damaging environment and the (often cheaper, tougher, and/or easier-to-process)
structural material. Aside from cosmetic and manufacturing issues, there are tradeoffs in
mechanical flexibility versus resistance to abrasion and high temperature. Platings
usually fail only in small sections, and if the plating is more noble than the substrate (for
example, chromium on steel), a galvanic couple will cause any exposed area to corrode
much more rapidly than an unplated surface would. For this reason, it is often wise to
plate with a more active metal such as zinc or cadmium.

Reactive coatings

If the environment is controlled (especially in recirculating systems), corrosion
inhibitors can often be added to it. These form an electrically
insulating and/or chemically impermeable coating on exposed metal surfaces, to
suppress electrochemical reactions. Such methods obviously make the system less
sensitive to scratches or defects in the coating, since extra inhibitors can be made
available wherever metal becomes exposed. Chemicals that inhibit corrosion include
some of the salts in hard water (Roman water systems are famous for their mineral
deposits), chromates, phosphates, and a wide range of specially-designed chemicals
that resemble surfactants (i.e. long-chain organic molecules with ionic end groups).

Anodized climbing equipment, such as figure-8 descenders, is available in a wide range
of colors.

Anodization

Main article: Anodising

Aluminium alloys often undergo a surface treatment. Electrochemical conditions in the
bath are carefully adjusted so that uniform pores several nanometers wide appear in the
metal's oxide film. These pores allow the oxide to grow much thicker than passivating
conditions would allow. At the end of the treatment, the pores are allowed to seal,
forming a harder-than-usual surface layer. If this coating is scratched, normal
passivation processes take over to protect the damaged area.
Cathodic protection

Main article: Cathodic protection

Cathodic protection (CP) is a technique to control the corrosion of a metal surface by
making that surface the cathode of an electrochemical cell.
It is a method used to protect metal structures from corrosion. Cathodic protection
systems are most commonly used to protect steel water and fuel pipelines and tanks,
steel pier piles, ships, and offshore oil platforms.
For effective CP, the potential of the steel surface is polarized (pushed) more negative
until the metal surface has a uniform potential. With a uniform potential, the driving force
for the corrosion reaction is halted. For galvanic CP systems, the anode material
corrodes under the influence of the steel, and eventually it must be replaced. The
polarization is caused by the current flow from the anode to the cathode, driven by the
difference in electrochemical potential between the anode and the cathode.

For larger structures, galvanic anodes cannot economically deliver enough current to
provide complete protection. Impressed Current Cathodic Protection (ICCP) systems
use anodes connected to a DC power source (a cathodic protection rectifier). Anodes
for ICCP systems are tubular and solid rod shapes of various specialized materials.
These include high silicon cast iron, graphite, mixed metal oxide or platinum coated
titanium or niobium coated rod and wires.
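
As a first-pass illustration of sizing such a system, the total protection current is the protected area multiplied by a design current density. The current density, area and bare-steel fraction below are assumed illustrative values; real designs follow recognised standards and account for coating breakdown, temperature and burial conditions.

def protection_current_amps(protected_area_m2, design_current_density_mA_m2, bare_fraction=1.0):
    """Total CP current demand = area * design current density * exposed (bare) fraction."""
    return protected_area_m2 * design_current_density_mA_m2 * bare_fraction / 1000.0

# Hypothetical 500 m2 of bare steel piling with an assumed 100 mA/m2 design current density.
print(f"Required current ~ {protection_current_amps(500.0, 100.0):.0f} A")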

Corrosion in passivated materials


Passivation is extremely useful in alleviating corrosion damage, but care must be taken
not to trust it too thoroughly. Even a high-quality alloy will corrode if its ability to form a
passivating film is hindered. Because the resulting modes of corrosion are more exotic
and their immediate results are less visible than rust and other bulk corrosion, they often
escape notice and cause problems among those who are not familiar with them.

Pitting corrosion

Main article: Pitting corrosion

Certain conditions, such as low concentrations of oxygen or high concentrations of
species such as chloride which compete as anions, can interfere with a given alloy's
ability to re-form a passivating film. In the worst case, almost all of the surface will
remain protected, but tiny local fluctuations will degrade the oxide film in a few critical
points. Corrosion at these points will be greatly amplified, and can cause corrosion pits
of several types, depending upon conditions. While the corrosion pits only nucleate
under fairly extreme circumstances, they can continue to grow even when conditions
return to normal, since the interior of a pit is naturally deprived of oxygen. In extreme
cases, the sharp tips of extremely long and narrow corrosion pits can cause stress
concentration to the point that otherwise tough alloys can shatter, or a thin film pierced
by an invisibly small hole can hide a thumb-sized pit from view. These problems are especially
dangerous because they are difficult to detect before a part or structure fails. Pitting
remains among the most common and damaging forms of corrosion in passivated
alloys, but it can be prevented by control of the alloy's environment, which often
includes ensuring that the material is exposed to oxygen uniformly (i.e., eliminating
crevices).

Weld decay and knifeline attack

Main article: Intergranular corrosion

Stainless steel can pose special corrosion challenges, since its passivating behavior
relies on the presence of a minor alloying component (Chromium, typically only 18%).
Due to the elevated temperatures of welding or during improper heat treatment,
chromium carbides can form in the grain boundaries of stainless alloys. This chemical
reaction robs the material of chromium in the zone near the grain boundary, making
those areas much less resistant to corrosion. This creates a galvanic couple with the
well-protected alloy nearby, which leads to weld decay (corrosion of the grain
boundaries near welds) in highly corrosive environments. Special alloys, either with low
carbon content or with added carbon "getters" such as titanium and niobium (in types
321 and 347, respectively), can prevent this effect, but the latter require special heat
treatment after welding to prevent the similar phenomenon of knifeline attack. As its
name implies, this is limited to a small zone, often only a few micrometres across, which
causes it to proceed more rapidly. This zone is very near the weld, making it even less
noticeable1.

Galvanic corrosion
Main article: Galvanic corrosion

Galvanic corrosion occurs when two different metals electrically contact each other and
are immersed in an electrolyte. In order for galvanic corrosion to occur, an electrically
conductive path and an ionically conductive path are necessary. This effects a galvanic
couple where the more active metal corrodes at an accelerated rate and the more noble
metal corrodes at a retarded rate. When immersed, neither metal would normally
corrode as quickly without the electrically conductive connection (usually via a wire or
direct contact). Galvanic corrosion is often utilised in sacrificial anodes. What type of
metal(s) to use is readily determined by following the galvanic series. For example, zinc
is often used as a sacrificial anode for steel structures, such as pipelines or docked
naval ships. Galvanic corrosion is of major interest to the marine industry and also
anywhere water can contact pipes or metal structures.
Factors such as relative size of anode (smaller is generally less desirable), types of
metal, and operating conditions (temperature, humidity, salinity, &c.) will affect galvanic
corrosion. The surface area ratio of the anode and cathode will directly affect the
corrosion rates of the materials.
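
The area-ratio effect mentioned above can be sketched with a common simplifying assumption: if the couple's total current is set mainly by the cathode, the anodic current density (and hence the anode's corrosion rate) scales with the cathode-to-anode area ratio. The numbers below are purely illustrative.

def anode_current_density(cathode_current_density, cathode_area, anode_area):
    """i_anode = i_cathode * (A_cathode / A_anode), in the same units as i_cathode."""
    return cathode_current_density * cathode_area / anode_area

# A small steel fastener (anode) in a large copper plate (cathode) vs. the reverse case.
print(anode_current_density(0.01, cathode_area=1.0, anode_area=0.001))  # 10.0 -> severe attack
print(anode_current_density(0.01, cathode_area=0.001, anode_area=1.0))  # 1e-05 -> mild attack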

Microbial corrosion
Main article: Microbial corrosion

Microbial corrosion, or bacterial corrosion, is corrosion caused or promoted by
microorganisms, usually chemoautotrophs. It can apply to both metals and non-metallic
materials, in both the presence and absence of oxygen. Sulfate-reducing bacteria are
common in the absence of oxygen; they produce hydrogen sulfide, causing sulfide stress
cracking. In the presence of oxygen, some bacteria directly oxidize iron to iron oxides and
hydroxides, other bacteria oxidize sulfur and produce sulfuric acid causing biogenic
sulfide corrosion. Concentration cells can form in the deposits of corrosion products,
causing and enhancing galvanic corrosion.

High temperature corrosion


High temperature corrosion is chemical deterioration of a material (typically a metal)
under very high temperature conditions. This non-galvanic form of corrosion can occur
when a metal is subject to a high temperature atmosphere containing oxygen, sulphur
or other compounds capable of oxidising (or assisting the oxidation of) the material
concerned. For example, materials used in aerospace, power generation and even in
car engines have to resist sustained periods at high temperature in which they may be
exposed to an atmosphere containing potentially highly corrosive products of
combustion.
The products of high temperature corrosion can potentially be turned to the advantage
of the engineer. The formation of oxides on stainless steels, for example, can provide a
protective layer preventing further atmospheric attack, allowing for a material to be used
for sustained periods at both room and high temperature in hostile conditions. Such high
temperature corrosion products in the form of compacted oxide layer glazes have also
been shown to prevent or reduce wear during high temperature sliding contact of
metallic (or metallic and ceramic) surfaces.
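
Protective scale growth of the kind described above is commonly modelled with the parabolic rate law, a standard result for diffusion-controlled oxidation quoted here for reference:

x^{2} = k_{p}\,t \quad\Longrightarrow\quad x = \sqrt{k_{p}\,t}

where x is the oxide thickness, t the exposure time and k_p a temperature-dependent parabolic rate constant (usually Arrhenius in temperature); the thickening scale slows its own further growth.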

Economic impact
The US Federal Highway Administration released a study, entitled Corrosion Costs and
Preventive Strategies in the United States, in 2002 on the direct costs associated with
metallic corrosion in nearly every U.S. industry sector. The study showed that for 1998
the total annual estimated direct cost of corrosion in the U.S. was approximately $276
billion (approximately 3.1% of the US gross domestic product). FHWA Report
Number:FHWA-RD-01-156. The NACE International website has a summary slideshow
of the report findings. Jones1 writes that electrochemical corrosion causes between $8
billion and $128 billion in economic damage per year in the United States alone,
degrading structures, machines, and containers.

CREEP

Creep is the term used to describe the tendency of a material to move or to deform
permanently to relieve stresses.

Material deformation occurs as a result of long term exposure to levels of stress that are
below the yield or ultimate strength of the material.

Creep is more severe in materials that are subjected to heat for long periods and near
their melting point. Creep is often observed in glasses, and creep increases with
increasing temperature. The rate of this damage is a function of the
material properties, exposure time, exposure temperature and the applied load (stress).
Depending on the magnitude of the applied stress and its duration, the deformation may
become so large that a component can no longer perform its function — for example
creep of a turbine blade will cause the blade to contact the casing, resulting in the
failure of the blade. Creep is usually a concern to engineers and metallurgists when
evaluating components that operate under high stresses or temperatures. Creep is not
necessarily a failure mode, but is instead a damage mechanism. Moderate creep in
concrete is sometimes welcomed because it relieves tensile stresses that may
otherwise have led to cracking.

Overview

Rather than failing suddenly with a fracture, the material permanently strains over a
longer period of time until it finally fails. Creep does not happen upon sudden loading
but the accumulation of creep strain in longer times causes failure of the material. This
makes creep deformation a "time-dependent" deformation of the material.

The temperature at which creep deformation becomes significant varies widely between
materials: tungsten requires temperatures in the thousands of degrees before creep occurs,
while ice in the Antarctic ice cap creeps at temperatures below freezing. As a general
guideline, creep becomes significant above roughly 30-40% of the absolute melting
temperature for metals and 40-50% for ceramics. This behaviour is important not only in
systems that must endure high temperatures, such as nuclear power plants, jet engines and
heat exchangers, but also in the design of everyday objects: lead pipes are no longer used
partly because lead creeps at room temperature, and metal paper clips are stronger than
plastic ones because plastics also creep at room temperature. It is also a consideration in
the design of magnesium alloy engines. Since the relevant temperature is relative to the
melting point, creep can be seen at comparatively low temperatures in some alloys. Plastics
and low-melting-temperature metals, including many solders, creep at room temperature, as
can be seen markedly in older lead hot-water pipes. Planetary ice is often at a high
temperature relative to its melting point, and creeps. Virtually any material will creep as it
approaches its melting temperature. Glass windows are often erroneously cited as an
example of this phenomenon: measurable creep would only occur at temperatures above the
glass transition temperature (around 900°F / 500°C).
An example of an application involving creep deformation is the design of tungsten
lightbulb filaments. Sagging of the filament coil between its supports increases with time
due to creep deformation caused by the weight of the filament itself. If too much
deformation occurs, the adjacent turns of the coil touch one another, causing an
electrical short and local overheating, which quickly leads to failure of the filament. The
coil geometry and supports are therefore designed to limit the stresses caused by the
weight of the filament, and a special tungsten alloy with small amounts of oxygen
trapped in the grain boundaries is used to slow the rate of Coble creep.
Steam piping within fossil-fuel fired power plants carrying superheated vapour operates at
high temperature (1050°F / 565°C) and high pressure (often at 3500 psig / 24.1 MPa or
greater). In a jet engine, temperatures may reach 1000°C, which may initiate creep
deformation in a weak zone. For these reasons, understanding the creep deformation
behaviour of engineering materials is crucial for public and operational safety.

Stages of creep
Initially, the strain rate slows with increasing strain. This is known as primary creep. The
strain rate eventually reaches a minimum and becomes near-constant. This is known as
secondary or steady-state creep. It is this regime that is most well understood. The
"creep strain rate" is typically the rate in this secondary stage. The stress dependence
of this rate depends on the creep mechanism. In tertiary creep, the strain-rate
exponentially increases with strain.
Mechanisms of creep
The mechanism of creep depends on temperature and stress. The various methods are:

• Thermally activated glide - e.g. via cross-slip


• Climb assisted glide - here the climb is an enabling mechanism, allowing
dislocations to get around obstacles
• Climb - here the strain is actually accomplished by climb
• Grain boundary diffusion
• Bulk diffusion

The most common mechanism is climb-assisted glide.

General creep equation

dε/dt = (C σ^m / d^b) exp(−Q / kT)

where C is a constant dependent on the material and the particular creep mechanism,
m and b are exponents dependent on the creep mechanism, Q is the activation energy
of the creep mechanism, σ is the applied stress, d is the grain size of the material, k is
Boltzmann's constant, and T is the absolute temperature.
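As a brief illustration of how the terms in this equation combine, the following Python sketch evaluates the creep rate for purely hypothetical values of C, m, b, Q, the stress, the grain size and the temperature (they are not data for any real alloy):

import math

def creep_rate(C, m, b, Q, sigma, d, T, k=1.380649e-23):
    # Steady-state creep rate: d(eps)/dt = (C * sigma**m / d**b) * exp(-Q / (k*T))
    return (C * sigma**m / d**b) * math.exp(-Q / (k * T))

# Purely illustrative values: stress in Pa, grain size in m, T in K, Q in joules per atom
rate = creep_rate(C=1e-32, m=5, b=0, Q=4.0e-19, sigma=100e6, d=50e-6, T=900.0)
print(rate)  # strain rate in 1/s for these made-up inputs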

Dislocation creep

At high stresses (relative to the shear modulus), creep is controlled by the movement of
dislocations. When a stress is applied to a material, plastic deformation occurs due to
the movement of dislocations in the slip plane. Materials contain a variety of defects, for
example solute atoms, that act as obstacles to dislocation motion. Creep arises from
this because of the phenomenon of dislocation climb. At high temperatures vacancies in
the crystal can diffuse to the location of a dislocation and cause the dislocation to move
to an adjacent slip plane. By climbing to adjacent slip planes dislocations can get
around obstacles to their motion, allowing further deformation to occur. Because it takes
time for vacancies to diffuse to the location of a dislocation this results in time
dependent strain, or creep.
For dislocation creep Q = Qself diffusion, m = 4-6, and b=0. Therefore dislocation creep
has a strong dependence on the applied stress and no grain size dependence.
Some alloys exhibit a very large stress exponent (n > 10), and this has typically been
explained by introducing a "threshold stress," σth, below which creep cannot be
measured. The modified power law equation then becomes:

dε/dt = A (σ − σth)^n exp(−Q / kT)

where A, Q and n can all be explained by conventional mechanisms (so n is typically 4 to 6).

Nabarro-Herring Creep
Nabarro-Herring creep is a form of diffusion-controlled creep. In N-H creep atoms
diffuse through the lattice, causing grains to elongate along the stress axis. For Nabarro-
Herring creep the constant C is related to the diffusion coefficient of atoms through the lattice, Q =
Qself diffusion, m = 1, and b = 2. Therefore N-H creep has a weak stress dependence and
a moderate grain size dependence, with the creep rate decreasing as grain size is
increased.
Nabarro-Herring creep is found to be strongly temperature dependent. For lattice
diffusion of atoms to occur in a material, neighboring lattice sites or interstitial sites in
the crystal structure must be free. A given atom must also overcome the energy barrier
to move from its current site (it lies in an energetically favorable potential well) to the
nearby vacant site (another potential well). The general form of the diffusion equation is
D = D0 exp(−Ea / kT), where D0 depends on both the attempted jump frequency and the
number of nearest-neighbour sites and on the probability of those sites being vacant. Thus
there is a double dependence upon temperature. At higher temperatures the diffusivity
increases due to the direct temperature dependence of the equation, the increase in
vacancies through Schottky defect formation, and an increase in the average energy of
atoms in the material. Nabarro-Herring creep dominates at very high temperatures relative
to a material's melting temperature.
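The Arrhenius-type temperature dependence described above can be made concrete with a short Python sketch; the pre-exponential factor D0 and the activation energy Ea used here are placeholders, not measured values for any particular material:

import math

def diffusivity(D0, Ea, T, k=8.617e-5):
    # Lattice diffusivity D = D0 * exp(-Ea / (k*T)); Ea in eV, k in eV/K, T in K
    return D0 * math.exp(-Ea / (k * T))

# Placeholder pre-exponential factor (m^2/s) and activation energy (eV)
for T in (800.0, 1000.0, 1200.0):
    print(T, diffusivity(D0=1e-5, Ea=2.0, T=T))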

Coble Creep
Main article: Coble creep

Coble creep is a second form of diffusion controlled creep. In Coble creep the atoms
diffuse along grain boundaries to elongate the grains along the stress axis. This causes
Coble creep to have a stronger grain size dependence than N-H creep. For Coble creep
the constant C is related to the diffusion coefficient of atoms along the grain boundary, Q = Qgrain
boundary diffusion, m = 1, and b = 3. Because Qgrain boundary diffusion < Qself diffusion,
Coble creep occurs at lower temperatures than N-H creep. Coble creep is still
temperature dependent, as the temperature increases so does the grain boundary
diffusion. However, since the number of nearest neighbors is effectively limited along
the interface of the grains, and thermal generation of vacancies along the boundaries is
less prevalent, the temperature dependence is not as strong as in Nabarro-Herring
creep. It also exhibits the same linear dependence on stress as N-H creep.

Creep of Polymers

Creep can occur in polymers and metals which are considered viscoelastic materials.
When a polymeric material is subjected to an abrupt force, the response can be
modeled using the Kelvin-Voigt Model. In this model, the material is represented by a
Hookean spring and a Newtonian dashpot in parallel. The creep strain is given by:

ε(t) = σ C0 + σ C ∫ f(τ) [1 − exp(−t/τ)] dτ

where:

• σ = applied stress
• C0 = instantaneous creep compliance
• C = creep compliance coefficient
• τ = retardation time
• f(τ) = distribution of retardation times
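A minimal numerical sketch of this creep-strain expression is given below, assuming (for illustration only) a small discrete set of retardation times and weights in place of a continuous distribution f(τ); the compliances and stress are hypothetical values:

import math

def creep_strain(t, sigma, C0, C, taus, weights):
    # eps(t) = sigma*C0 + sigma*C * sum_i w_i*(1 - exp(-t/tau_i)),
    # a discrete stand-in for the integral over the retardation-time distribution f(tau)
    retarded = sum(w * (1.0 - math.exp(-t / tau)) for tau, w in zip(taus, weights))
    return sigma * C0 + sigma * C * retarded

# Hypothetical compliances (1/Pa), retardation times (s) and weights (summing to 1)
taus, weights = [1.0, 10.0, 100.0], [0.2, 0.3, 0.5]
for t in (0.1, 1.0, 10.0, 100.0, 1000.0):
    print(t, creep_strain(t, sigma=1e6, C0=1e-9, C=5e-9, taus=taus, weights=weights))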

Applied stress (a) and induced strain (b) as functions of time over an extended period
for a viscoelastic material.
Applied stress (a) and induced strain (b) as functions of time over a short period for a
viscoelastic material.

When subjected to a step constant stress, viscoelastic materials experience a time-


dependent increase in strain. This phenomenon is known as viscoelastic creep.
At a time t0, a viscoelastic material is loaded with a constant stress that is maintained
for a sufficiently long time period. The material responds to the stress with a strain that
increases until the material ultimately fails. When the stress is maintained for a shorter
time period, the material undergoes an initial strain until a time t1, after which the strain
immediately decreases (discontinuity) then gradually decreases at times t > t1 to a
residual strain.
Viscoelastic creep data can be presented in one of two ways. Total strain can be plotted
as a function of time for a given temperature or temperatures. Below a critical value of
applied stress, a material may exhibit linear viscoelasticity. Above this critical stress, the
creep rate grows disproportionately faster. The second way of graphically presenting
viscoelastic creep in a material is by plotting the creep modulus (constant applied stress
divided by total strain at a particular time) as a function of time.[1] Below its critical
stress, the viscoelastic creep modulus is independent of stress applied. A family of
curves describing strain versus time response to various applied stress may be
represented by a single viscoelastic creep modulus versus time curve if the applied
stresses are below the material's critical stress value.
Additionally, the molecular weight of the polymer of interest is known to affect its creep
behavior. The effect of increasing molecular weight tends to promote secondary
bonding between polymer chains and thus make the polymer more creep resistant.
Similarly, aromatic polymers are even more creep resistant due to the added stiffness
from the rings. Both molecular weight and aromatic rings add to polymers' thermal
stability, increasing the creep resistance of a polymer. (Meyers and Chawla, 1999, 573)
Both polymers and metals can creep.[2] Polymers experience significant creep at all
temperatures above about −200°C; however, there are three main differences between
polymeric and metallic creep. Metallic creep:[2]

• is not linearly viscoelastic
• is not recoverable
• is only significant at high temperatures
Deformation
In engineering mechanics, deformation is a change in shape due to an applied force.
This can be a result of tensile (pulling) forces, compressive (pushing) forces, shear,
bending or torsion (twisting). Deformation is often described in terms of strain.
In the figure it can be seen that the compressive loading (indicated by the arrow) has
caused deformation in the cylinder so that the original shape (dashed lines) has
changed (deformed) into one with bulging sides. The sides bulge because the material,
although strong enough to not crack or otherwise fail, is not strong enough to support
the load without change, thus the material is forced out laterally. Deformation may be
temporary, as a spring returns to its original length when tension is removed, or
permanent as when an object is irreversibly bent or broken.

The concept of a rigid body can be applied if the deformation is negligible.

Diagram of a stress-strain curve, showing the relationship between stress (force
applied) and strain (deformation) of a ductile metal.
Types of deformation

Depending on the type of material, size and geometry of the object, and the forces
applied, various types of deformation may result.

Elastic deformation

This type of deformation is reversible. Once the forces are no longer applied, the object
returns to its original shape. As the name implies, elastic (rubber) has a rather large
elastic deformation range. Soft thermoplastics and metals have moderate elastic
deformation ranges while ceramics, crystals, and hard thermosetting plastics undergo
almost no elastic deformation.

Metal fatigue
A phenomenon only discovered in modern times is metal fatigue, which occurs primarily
in ductile metals. It was originally thought that a material deformed only within the
elastic range returned completely to its original state once the forces were removed.
However, faults are introduced at the molecular level with each deformation. After many
deformations, cracks will begin to appear, followed soon after by a fracture, with no
apparent plastic deformation in between. Depending on the material, shape, and how
close to the elastic limit it is deformed, failure may require thousands, millions, billions,
or trillions of deformations.
Metal fatigue has been a major cause of aircraft failure, such as the De Havilland
Comet, especially before the process was well understood. There are two ways to
determine when a part is in danger of metal fatigue: either predict when failure will occur
due to the material/force/shape/iteration combination and replace the vulnerable
materials before this occurs, or perform inspections to detect microscopic cracks
and replace the part once they appear. Selection of materials which are not likely to
suffer from metal fatigue during the life of the product is the best solution, but not always
possible. Avoiding shapes with sharp corners limits metal fatigue by reducing force
concentrations, but does not eliminate it.

Plastic deformation

This type of deformation is not reversible. However, an object in the plastic deformation
range will first have undergone elastic deformation, which is reversible, so the object will
return part way to its original shape. Soft thermoplastics have a rather large plastic
deformation range, as do ductile metals such as copper, silver, and gold. Steel does,
too, but not cast iron. Hard thermosetting plastics, rubber, crystals, and ceramics have
minimal plastic deformation ranges. Perhaps the material with the largest plastic
deformation range is wet chewing gum, which can be stretched dozens of times its
original length.

Fracture

This type of deformation is also not reversible. A break occurs after the material has
reached the end of the elastic, and then plastic, deformation ranges. At this point forces
accumulate until they are sufficient to cause a fracture. All materials will eventually
fracture, if sufficient forces are applied.

Misconceptions
A popular misconception is that all materials that bend are "weak" and all those which
don't are "strong". In reality, many materials which undergo large elastic and plastic
deformations, such as steel, are able to absorb stresses which would cause brittle
materials, such as glass, with minimal elastic and plastic deformation ranges, to break.
There is even a parable to describe this observation (paraphrased below):"The mighty
oak stands strong and firm before the wind, while the willow yields to the slightest
breeze. However, in the strongest storm, the oak will break while the willow will bend,
and thus survive. So, in the end, which is the stronger of the two?"
Dislocation Theory

Dislocations and Strengthening Mechanisms



1. Introduction The key idea of the chapter is that plastic deformation is due to the
motion of a large number of dislocations. The motion is called slip. Thus, the
strength (resistance to deformation) can be improved by putting obstacles to slip.
2. Basic Concepts Dislocations can be edge dislocations, screw dislocations, or
combinations of the two (Ch. 4.4). Their motion (slip) occurs by sequential
bond breaking and bond reforming (Fig. 7.1). The number of dislocations per unit
volume is the dislocation density; in a plane they are measured per unit area.
3. Characteristics of Dislocations There is strain around a dislocation, which
influences how it interacts with other dislocations, impurities, etc. There is
compression near the extra half-plane (higher atomic density) and tension following
the dislocation line (Fig. 7.4)
4. Dislocations interact among themselves (Fig. 7.5). When they are in the same
plane, they repel if they have the same sign and annihilate if they have opposite
signs (leaving behind a perfect crystal). In general, when dislocations are close
and their strain fields add to a larger value, they repel, because being close
increases the potential energy (it takes energy to strain a region of the material).
The number of dislocations increases dramatically during plastic
deformation. Dislocations spawn from existing dislocations, and from defects,
grain boundaries and surface irregularities.
5. Slip Systems In single crystals there are preferred planes where dislocations
move (slip planes). Within these planes they do not move in arbitrary directions, but
in preferred crystallographic directions (slip directions). The set of slip planes and
directions constitutes the slip systems.
The slip planes are those of highest packing density. How do we explain this?
Since the distance between atoms is shorter than the average, the distance
perpendicular to the plane has to be longer than average. Being relatively far
apart, the atoms can move more easily with respect to the atoms of the
adjacent plane. (We did not discuss direction and plane nomenclature for
slip systems.)
BCC and FCC crystals have more slip systems, that is, more ways for
dislocations to propagate. Thus, those crystals are more ductile than HCP
crystals (HCP crystals are more brittle).
6. Slip in Single Crystals

A tensile stress σ will have components in any plane that is not perpendicular to the stress.
These components are resolved shear stresses. Their magnitude depends on orientation (see
Fig. 7.7).
τR = σ cos φ cos λ
If the shear stress reaches the critical resolved shear stress τCRSS, slip (plastic deformation)
can start. The stress needed is:
σy = τCRSS / (cos φ cos λ)max
i.e. yielding starts on the slip system for which cos φ cos λ is a maximum. The minimum stress needed for yielding
occurs when φ = λ = 45 degrees: σy = 2τCRSS. Thus, slip will occur first on slip planes
oriented close to this angle with respect to the applied stress (Figs. 7.8 and 7.9).
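As a worked illustration of these two relations (the applied stress and the critical resolved shear stress below are arbitrary, purely hypothetical numbers), a short Python sketch:

import math

def resolved_shear_stress(sigma, phi_deg, lam_deg):
    # tau_R = sigma * cos(phi) * cos(lambda) for a given slip-system orientation
    phi, lam = math.radians(phi_deg), math.radians(lam_deg)
    return sigma * math.cos(phi) * math.cos(lam)

tau_crss = 50.0   # hypothetical critical resolved shear stress, MPa
sigma = 150.0     # applied tensile stress, MPa

print(resolved_shear_stress(sigma, 45.0, 45.0))  # most favourable orientation: tau_R = sigma/2 = 75 MPa
print(2.0 * tau_crss)                            # minimum yield stress sigma_y = 2*tau_CRSS = 100 MPa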

6. Plastic Deformation of Polycrystalline Materials Slip directions vary from


crystal to crystal. When plastic deformation occurs in a grain, it will be
constrained by its neighbors which may be less favorably oriented. As a
result, polycrystalline metals are stronger than single crystals (the exception is
the perfect single crystal, as in whiskers.)
7. Deformation by Twinning This topic is not included.

Mechanisms of Strengthening in Metals

General principles. Ability to deform plastically depends on ability of


dislocations to move. Strengthening consists in hindering dislocation motion.
We discuss the methods of grain-size reduction, solid-solution alloying and
strain hardening. These are for single-phase metals. We discuss others when
treating alloys. Ordinarily, strengthening reduces ductility.

8. Strengthening by Grain Size Reduction This is based on the fact that it is


difficult for a dislocation to pass into another grain, especially if the adjacent
grain is highly misoriented. Atomic disorder at the boundary causes a discontinuity in slip
planes. For high-angle grain boundaries, stress at end of slip plane may
trigger new dislocations in adjacent grains. Small angle grain boundaries are
not effective in blocking dislocations. The finer the grains, the larger the area
of grain boundaries that impedes dislocation motion. Grain-size reduction
usually improves toughness as well. Usually, the yield strength varies with
grain size d according to:

σy = σ0 + ky / d^(1/2)

Grain size can be controlled by the rate of solidification and by plastic


deformation.
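A minimal sketch of the Hall-Petch relation above; σ0 and ky are material constants, and the values used here are placeholders chosen only to show that finer grains give a higher yield strength:

def hall_petch(sigma0, ky, d):
    # sigma_y = sigma_0 + k_y / d**0.5, with grain size d in metres
    return sigma0 + ky / d**0.5

# Placeholder constants (sigma0 in MPa, ky in MPa*m^0.5)
for d in (100e-6, 25e-6, 5e-6):
    print(d, hall_petch(70.0, 0.74, d))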

9. Solid-Solution Strengthening Adding another element that goes into


interstitial or substitutional positions in a solution increases strength. The
impurity atoms cause lattice strain (Figs. 7.17 and 7.18) which can "anchor"
dislocations. This occurs when the strain caused by the alloying element
compensates that of the dislocation, thus achieving a state of low potential
energy. It costs strain energy for the dislocation to move away from this state
(which acts like a potential well), and at low temperatures there is little thermal
energy available to do so, which is why slip is hindered.
Pure metals are almost always softer than their alloys.
10. Strain Hardening Ductile metals become stronger when they are deformed
plastically at temperatures well below the melting point (cold working). (This is
different from hot working, which is the shaping of materials at high temperatures,
where large deformation is possible.) Strain hardening (work hardening) is
the reason for the elastic recovery discussed in Ch. 6.8.
The reason for strain hardening is that the dislocation density increases with
plastic deformation (cold work) due to multiplication. The average distance
between dislocations then decreases and dislocations start blocking each
other's motion.
The measure of strain hardening is the percent cold work (%CW), given by
the relative reduction of the original cross-sectional area A0 to the final value Ad:

%CW = 100 (A0 − Ad) / A0
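For example, the percent cold work for a bar reduced in cross-sectional area follows directly from the definition above (the areas used here are illustrative numbers only):

def percent_cold_work(A0, Ad):
    # %CW = 100 * (A0 - Ad) / A0, the relative reduction of the cross-sectional area
    return 100.0 * (A0 - Ad) / A0

print(percent_cold_work(A0=200.0, Ad=150.0))  # 25.0 %CW for a bar reduced from 200 mm^2 to 150 mm^2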

11. Recovery, recrystallization and Grain Growth


12. Plastic deformation causes 1) change in grain size, 2) strain hardening, 3)
increase in the dislocation density. Restoration to the state before cold-work
is done by heating through two processes: recovery and recrystallization.
These may be followed by grain growth.
13. Recovery Heating → increased diffusion → enhanced dislocation motion →
relieves internal strain energy and reduces the number of dislocations. The
electrical and thermal conductivity are restored to the values existing before
cold working.
14. Recrystallization Strained grains of cold-worked metal are replaced, upon
heating, by more regularly-spaced grains. This occurs through short-range
diffusion enabled by the high temperature. Since recrystallization occurs by
diffusion, the important parameters are both temperature and time.
The material becomes softer, weaker, but more ductile (Fig. 7.22).
The recrystallization temperature is that at which the process is complete in one
hour. It is typically 1/3 to 1/2 of the absolute melting temperature, and it falls as the
%CW is increased. Below a "critical deformation", recrystallization does not occur.
15. Grain Growth

The growth of grain size with temperature can occur in all polycrystalline materials. It
occurs by migration of atoms at grain boundaries by diffusion, thus grain growth is faster
at higher temperatures. The "driving force" is the reduction of energy, which is
proportional to the total area. Big grains grow at the expense of the small ones.

Ductile Fracture
Ductile Fracture One of the most important concepts in the field of
Materials Science and Engineering is fracture. In its simplest form, fracture can be described as a
single body being separated into pieces by an imposed stress. For engineering materials there are
only two possible modes of fracture, ductile and brittle. In general, the main difference between
brittle and ductile fracture can be attributed to the amount of plastic deformation that the material
undergoes before fracture occurs. Ductile materials demonstrate large amounts of plastic
deformation while brittle materials show little or no plastic deformation before fracture. Figure 1
(below), a tensile stress-strain curve, represents the degree of plastic deformation exhibited by
both brittle and ductile materials before fracture.

Crack initiation and propagation


are essential to fracture. The manner through which the crack propagates through the material
gives great insight into the mode of fracture. In ductile materials (ductile fracture), the crack
moves slowly and is accompanied by a large amount of plastic deformation. The crack will
usually not extend unless an increased stress is applied. On the other hand, in dealing with brittle
fracture, cracks spread very rapidly with little or no plastic deformation. The cracks that
propagate in a brittle material will continue to grow and increase in magnitude once they are
initiated. Another important mannerism of crack propagation is the way in which the advancing
crack travels through the material. A crack that passes through the grains within the material is
undergoing transgranular fracture. However, a crack that propagates along the grain boundaries
is termed an intergranular fracture. Figure 2 (below) shows a scanning electron fractograph of
ductile cast iron, examining a transgranular fracture surface.
On both macroscopic and
microscopic levels, ductile fracture surfaces have distinct features. Macroscopically, ductile
fracture surfaces have larger necking regions and an overall rougher appearance than a brittle
fracture surface. Figure 3 (below) shows the macroscopic differences between two ductile
specimens (a, b) and the brittle specimen (c).

On the microscopic level, ductile fracture surfaces also appear rough and irregular. The
surface consists of many microvoids and dimples. Figure 4 (below left) and Figure 5
(below right) demonstrate the microscopic qualities of ductile fracture surfaces.
The failure of many ductile materials can be attributed to cup and cone fracture. This form of
ductile fracture occurs in stages that initiate after necking begins. First, small microvoids form in
the interior of the material. Next, deformation continues and the microvoids enlarge to form a
crack. The crack continues to grow and it spreads laterally towards the edges of the specimen.
Finally, crack propagation is rapid along a surface that makes about a 45 degree angle with the
tensile stress axis. The new fracture surface has a very irregular appearance. The final shearing
of the specimen produces a cup type shape on one fracture surface and a cone shape on the
adjacent connecting fracture surface, hence the name, cup and cone fracture. Figure 6 (below)
shows cup and cone, and brittle fracture in aluminum.

The Charpy and Izod tests measure the impact energy of a specimen by striking it with a
weighted pendulum hammer and recording the energy absorbed. A primary use
of the Charpy and Izod tests is to determine if a material experiences brittle to ductile transition
with decreasing temperature. Brittle to ductile transition is directly related to the temperature
dependency of the impact energy absorbed. Also an examination of the failure surface can prove
very beneficial. If a section of the failure surface seems to demonstrate the visual properties of
both brittle and ductile fracture, then brittle to ductile transition is evident at that temperature
range. It is very important to remember that with most specimens, there is a fairly wide band of
temperatures that support brittle to ductile transition. Therefore, for many specimens it is nearly
impossible to predict any one temperature as the transition temperature. In figure 7 (below), a
graph is given that determines brittle to ductile transition through an impact test for a 1018 hot-
rolled steel.
In most design situations a material that
demonstrates ductile fracture is usually preferred for several reasons. First and foremost, brittle
fracture occurs very rapidly and catastrophically without any warning. Ductile materials
plastically deform, thereby slowing the process of fracture and giving ample time for the
problem to be corrected. Second, because of the plastic deformation, more strain energy is
needed to cause ductile fracture. Next, ductile materials are considered "forgiving"
materials: because of their toughness, a mistake in the use or design of a ductile
component will probably not cause it to fail. Also, the properties of a ductile material can
be enhanced through the use of one of the strengthening mechanisms. Strain hardening is a
good example: as the ductile material is deformed, its strength and hardness
increase because of the generation of more and more dislocations. Therefore, in engineering
applications, especially those that have safety concerns involved, ductile materials are the
obvious choice. Safety and dependability are the main concerns in material design, but in order
to attain these goals there has to be a thorough understanding of fracture, both brittle and ductile.
Understanding fracture and failure of materials will lead the materials engineer to develop safer
and more dependable materials and products.
------------------------------------------------------------------------------------------------
Griffith theory

The Griffith equation describes the relationship between applied nominal stress and crack length at fracture, i.e.
when it becomes energetically favourable for a crack to grow. Griffith was concerned with the energetics of fracture,
and considered the energy changes associated with incremental crack extension.
For a loaded brittle body undergoing incremental crack extension, the only contributors to energy changes are the
energy of the new fracture surfaces (two surfaces per crack tip) and the change in potential energy in the body. The
surface energy term (S) represents energy absorbed in crack growth, while some of the stored strain energy (U) is
released as the crack extends (due to unloading of regions adjacent to the new fracture surfaces). Surface energy has
a constant value per unit area (or unit length for a unit thickness of body) and is therefore a linear function of crack
length, while the stored strain energy released in crack growth is a function of (crack length)^2, and is hence
parabolic. These changes are indicated in the figure below:

The next step in the development of Griffith's argument was consideration of the rates of energy change with crack
extension, because the critical condition corresponds to the maximum point in the total energy curve, i.e. dW/da = 0,
where a = a*. For crack lengths greater than this value (under a given applied stress), the body is going to a lower
energy state, which is favourable, and hence fast fracture occurs. dW/da = 0 occurs when dS/da = dU/da. The sketch
below shows these energy rates, or differentials with respect to a.
R is the resistance to crack growth (= dS/da) and G is the strain energy release rate (= dU/da). When fracture occurs,
R = G and we can define Gcrit as the critical value of strain energy release, and equate this to R. Hence Gcrit
represents the fracture toughness of the material. In plane stress the Griffith equation is:

σf = √( E Gcrit / (π a) )

where, to get the fracture stress in MPa (the standard SI engineering unit), the critical strain energy release rate is in
N/m, E is in N/m^2, and a is in m. This provides an answer in N/m^2 (Pa), which needs to be divided by 10^6 to get the
standard engineering unit of MPa. In plane strain:

σf = √( E Gcrit / (π a (1 − ν^2)) )

where ν is Poisson's ratio.
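A brief numerical sketch of the plane-stress and plane-strain forms above; the modulus, critical strain energy release rate and crack size below are illustrative values only, not data for a specific material:

import math

def griffith_stress(E, G_crit, a, nu=None):
    # Plane-stress form: sigma_f = sqrt(E*Gcrit / (pi*a));
    # pass Poisson's ratio nu for the plane-strain form with a (1 - nu**2) factor
    denom = math.pi * a if nu is None else math.pi * a * (1.0 - nu**2)
    return math.sqrt(E * G_crit / denom)

# Illustrative values only: E = 70 GPa, Gcrit = 10 N/m, crack size a = 1 mm
sigma_f = griffith_stress(E=70e9, G_crit=10.0, a=1e-3)
print(sigma_f, sigma_f / 1e6)  # result in Pa, then converted to MPa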
Elastic and Viscoelastic properties of Metals

Elasticity (physics)
Elasticity is a branch of physics which studies the properties of elastic materials. A
material is said to be elastic if it deforms under stress (e.g., external forces), but then
returns to its original shape when the stress is removed. The amount of deformation is
called the strain.

Modeling elasticity
The elastic regime is characterized by a linear relationship between stress and strain,
denoted linear elasticity. This idea was first stated by Robert Hooke in 1676 as an
anagram, then in 1678 in Latin, as Ut tensio, sic vis, which means:

“ As the extension, so the force. ”

This linear relationship is called Hooke's law. The classic model of linear elasticity is the
perfect spring. Although the general proportionality constant between stress and strain
in three dimensions is a 4th order tensor, when considering simple situations of higher
symmetry such as a rod in one dimensional loading, the relationship may often be
reduced to applications of Hooke's law.
Because most materials are only elastic under relatively small deformations, several
assumptions are used to linearize the theory. Most importantly, higher order terms are
generally discarded based on the small deformation assumption. In certain special
cases, such as when considering a rubbery material, these assumptions may not be
permissible. However, in general, elasticity refers to the linearized theory of the
continuum stresses and strains.

Transitions to inelasticity
Above a certain stress known as the elastic limit or the yield strength of an elastic
material, the relationship between stress and strain becomes nonlinear. Beyond this
limit, the solid may deform irreversibly, exhibiting plasticity. A stress-strain curve is one
tool for visualizing this transition.
Furthermore, not only solids exhibit elasticity. Some non-Newtonian fluids, such as
viscoelastic fluids, will also exhibit elasticity in certain conditions. In response to a small,
rapidly applied and removed strain, these fluids may deform and then return to their
original shape. Under larger strains, or strains applied for longer periods of time, these
fluids may start to flow, exhibiting viscosity.

See also
• Stiffness
• Elastic modulus
• Linear elasticity
• 3-D elasticity
• Pseudoelasticity

-----------------------------------------------------------------------------------------------------

Viscoelasticity
---------------------------------------------------------------
Viscoelasticity, also known as anelasticity, describes materials that exhibit both
viscous and elastic characteristics when undergoing deformation. Viscous
materials, like honey, resist shear flow and strain linearly with time when a stress is
applied. Elastic materials strain instantaneously when stretched and just as quickly
return to their original state once the stress is removed. Viscoelastic materials have
elements of both of these properties and, as such, exhibit time dependent strain.
Whereas elasticity is usually the result of bond stretching along crystallographic planes
in an ordered solid, viscoelasticity is the result of the diffusion of atoms or molecules
inside of an amorphous material [1].

Background
In the nineteenth century, physicists such as Maxwell, Boltzmann, and Kelvin
researched and experimented with creep and recovery of glasses, metals, and rubbers
[2]. Viscoelasticity was further examined in the late twentieth century when synthetic
polymers were engineered and used in a variety of applications [2]. Viscoelasticity
calculations depend heavily on the viscosity variable, η. The inverse of η is also known
as fluidity, φ. The value of either can be derived as a function of temperature or given
as a fixed value (i.e. for a dashpot) [1].

Different types of stress responses (σ) to a change in strain rate (dε/dt)

Depending on the change of strain rate versus stress inside a material the viscosity can
be categorized as having a linear, non-linear, or plastic response. When a material
exhibits a linear response it is categorized as a Newtonian material [1]. In this case the
stress is linearly proportional to the strain rate. If the material exhibits a non-linear
response to the strain rate, it is categorized as a Non-Newtonian fluid. There is also an
interesting case where the viscosity decreases over time even though the shear/strain rate remains
constant. A material which exhibits this type of behavior is known as thixotropic [1]. In
addition, when the stress is independent of the strain rate, the material exhibits plastic
deformation [1]. Many viscoelastic materials exhibit rubber-like behavior explained by
the thermodynamic theory of polymer elasticity.
Some examples of viscoelastic materials include amorphous polymers, semicrystalline
polymers, biopolymers, and metals at very high temperatures. Cracking occurs when
the strain is applied quickly and outside of the elastic limit.
A viscoelastic material has the following properties:

• hysteresis is seen in the stress-strain curve.


• stress relaxation occurs: step constant strain causes decreasing stress
• creep occurs: step constant stress causes increasing strain

Elastic behavior versus viscoelastic behavior


Stress-Strain Curves for a purely elastic material (a) and a viscoelastic material (b). The
red area is a hysteresis loop and shows the amount of energy lost (as heat) in a loading

and unloading cycle. It is equal to ∮ σ dε, where σ is stress and ε is strain. [1]

Unlike purely elastic substances, a viscoelastic substance has an elastic component


and a viscous component. The viscosity of a viscoelastic substance gives the substance
a strain rate that depends on time [1]. Purely elastic materials do not dissipate energy
(heat) when a load is applied, then removed[1]. However, a viscoelastic substance
loses energy when a load is applied, then removed. Hysteresis is observed in the
stress-strain curve, with the area of the loop being equal to the energy lost during the
loading cycle[1]. Since viscosity is the resistance to thermally activated plastic
deformation, a viscous material will lose energy through a loading cycle. Plastic
deformation results in lost energy, which is uncharacteristic of a purely elastic material's
reaction to a loading cycle[1].
Specifically, viscoelasticity is a molecular rearrangement. When a stress is applied to a
viscoelastic material such as a polymer, parts of the long polymer chain change
position. This movement or rearrangement is called creep. Polymers remain a solid
material even when these parts of their chains are rearranging in order to accompany
the stress, and as this occurs, it creates a back stress in the material. When the back
stress is the same magnitude as the applied stress, the material no longer creeps.
When the original stress is taken away, the accumulated back stresses will cause the
polymer to return to its original form. The material creeps, which gives the prefix visco-,
and the material fully recovers, which gives the suffix -elasticity[2].

Types of viscoelasticity
Linear viscoelasticity is when the function is separable in both creep response and
load. All linear viscoelastic models can be represented by a Volterra equation
connecting stress and strain:

ε(t) = σ(t) / Einst,creep + ∫0^t K(t − t′) (dσ(t′)/dt′) dt′

or

σ(t) = ε(t) Einst,relax + ∫0^t F(t − t′) (dε(t′)/dt′) dt′

where

• t is time
• σ(t) is stress
• ε(t) is strain
• Einst,creep and Einst,relax are instantaneous elastic moduli for creep and
relaxation
• K(t) is the creep function
• F(t) is the relaxation function

Linear viscoelasticity is usually applicable only for small deformations.


Nonlinear viscoelasticity is when the function is not separable. It usually happens
when the deformations are large or when the material changes its properties under
deformation.

Dynamic modulus
Main article: Dynamic modulus

Viscoelasticity is studied using dynamic mechanical analysis, in which a small
oscillatory strain is applied and the resulting stress is measured.

• Purely elastic materials have stress and strain in phase, so that the response of
one caused by the other is immediate.
• In purely viscous materials, strain lags stress by a 90 degree phase lag.
• Viscoelastic materials exhibit behavior somewhere in the middle of these two
types of material, exhibiting some lag in strain.

The complex dynamic modulus G can be used to represent the relation between the
oscillating stress and strain: G = G′ + iG″, where i = √(−1), G′ is the storage modulus
and G″ is the loss modulus:

G′ = (σ0 / ε0) cos δ
G″ = (σ0 / ε0) sin δ

where σ0 and ε0 are the amplitudes of stress and strain and δ is the phase shift between them.
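A short sketch of these storage- and loss-modulus expressions; the stress and strain amplitudes and the 30-degree phase lag are hypothetical values chosen only for illustration:

import math

def dynamic_moduli(sigma0, eps0, delta_deg):
    # G' = (sigma0/eps0)*cos(delta), G'' = (sigma0/eps0)*sin(delta)
    delta = math.radians(delta_deg)
    ratio = sigma0 / eps0
    return ratio * math.cos(delta), ratio * math.sin(delta)

# Illustrative stress/strain amplitudes and a 30-degree phase lag
G_storage, G_loss = dynamic_moduli(sigma0=2.0e6, eps0=0.01, delta_deg=30.0)
print(G_storage, G_loss, G_loss / G_storage)  # the last value is tan(delta), the loss tangent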

Constitutive models of linear viscoelasticity


Viscoelastic materials, such as amorphous polymers, semicrystalline polymers, and
biopolymers, can be modeled in order to determine their stress or strain interactions as
well as their temporal dependencies. These models, which include the Maxwell model,
the Kelvin-Voigt model, and the Standard Linear Solid Model, are used to predict a
material's response under different loading conditions. Viscoelastic behavior is
composed of elastic and viscous components modeled as linear combinations of
springs and dashpots, respectively. Each model differs in the arrangement of these
elements, and all of these viscoelastic models can be equivalently modeled as electrical
circuits; the spring, which stores energy, plays the role of a capacitor, while the dashpot,
which dissipates energy, plays the role of a resistor. The elastic components, as previously
mentioned, can be modeled as springs of elastic constant E, given the formula:

σ = E ε

where σ is the stress, E is the elastic modulus of the material, and ε is the strain that occurs under
the given stress, similar to Hooke's Law.
The viscous components can be modeled as dashpots such that the stress-strain rate
relationship can be given as:

σ = η (dε/dt)

where σ is the stress, η is the viscosity of the material, and dε/dt is the time derivative of strain.
The relationship between stress and strain can be simplified for specific stress rates. For
high stress states/short time periods, the time derivative components of the stress-strain
relationship dominate. A dashpot resists changes in length, and in a high stress state it can
be approximated as a rigid rod. Since a rigid rod cannot be stretched past its original
length, no strain is added to the system [3].
Conversely, for low stress states/longer time periods, the time derivative components
are negligible and the dashpot can be effectively removed from the system - an "open"
circuit. As a result, only the spring connected in parallel to the dashpot will contribute to
the total strain in the system [3].

Maxwell model

Main article: Maxwell material

Maxwell model

The Maxwell model can be represented by a purely viscous damper and a purely elastic
spring connected in series, as shown in the diagram. The model can be represented by
the following equation:

dεtotal/dt = dεD/dt + dεS/dt = σ/η + (1/E)(dσ/dt)

The model represents a liquid (able to have irreversible deformations) with some additional reversible (elastic)
deformations. If put under a constant strain, the stresses gradually relax. When a
material is put under a constant stress, the strain has two components as per the
Maxwell Model. First, an elastic component occurs instantaneously, corresponding to
the spring, and relaxes immediately upon release of the stress. The second is a viscous
component that grows with time as long as the stress is applied. The Maxwell model
predicts that stress decays exponentially with time, which is accurate for most polymers.
It is important to note limitations of such a model, as it is unable to predict creep in
materials based on a simple dashpot and spring connected in series. The Maxwell
model for creep or constant-stress conditions postulates that strain will increase linearly
with time. However, polymers for the most part show the strain rate to be decreasing
with time[2].
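A minimal sketch of the two responses described above (exponential stress relaxation under constant strain, and instantaneous-plus-linear creep under constant stress), assuming purely hypothetical values for the spring modulus E and the dashpot viscosity η:

import math

E, eta = 1.0e9, 1.0e11   # hypothetical spring modulus (Pa) and dashpot viscosity (Pa*s)
tau = eta / E            # relaxation time, s

def maxwell_relaxation(t, eps0):
    # Stress under a constant strain eps0: sigma(t) = E*eps0*exp(-t/tau), an exponential decay
    return E * eps0 * math.exp(-t / tau)

def maxwell_creep(t, sigma0):
    # Strain under a constant stress sigma0: instantaneous sigma0/E plus viscous flow sigma0*t/eta
    return sigma0 / E + sigma0 * t / eta

for t in (0.0, tau, 5.0 * tau):
    print(t, maxwell_relaxation(t, eps0=0.01), maxwell_creep(t, sigma0=1.0e6))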

Kelvin-Voigt model

Main article: Kelvin-Voigt material

Schematic representation of Kelvin-Voigt model.

The Kelvin-Voigt model, also known as the Voigt model, consists of a Newtonian
damper and a Hookean elastic spring connected in parallel, as shown in the picture. It is
used to explain the creep behaviour of polymers.
The constitutive relation is expressed as a linear first-order differential equation:

σ(t) = E ε(t) + η (dε(t)/dt)

This model represents a solid undergoing
reversible, viscoelastic strain. Upon application of a constant stress, the material
deforms at a decreasing rate, asymptotically approaching the steady-state strain. When
the stress is released, the material gradually relaxes to its undeformed state. At
constant stress (creep), the Model is quite realistic as it predicts strain to tend to σ/E as
time continues to infinity. Similar to the Maxwell model, the Kelvin-Voigt Model also has
limitations. The model is extremely good with modelling creep in materials, but with
regards to relaxation the model is much less accurate.
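The creep response described above can be sketched numerically as follows; E and η are again hypothetical values, and the strain can be seen to approach the steady-state value σ0/E:

import math

E, eta = 1.0e9, 1.0e11   # hypothetical modulus (Pa) and viscosity (Pa*s)
tau = eta / E            # retardation time, s

def kelvin_voigt_creep(t, sigma0):
    # Strain under a constant stress: eps(t) = (sigma0/E)*(1 - exp(-t/tau)), tending to sigma0/E
    return (sigma0 / E) * (1.0 - math.exp(-t / tau))

for t in (0.0, tau, 5.0 * tau, 20.0 * tau):
    print(t, kelvin_voigt_creep(t, sigma0=1.0e6))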
Standard Linear Solid Model

Main article: Standard Linear Solid Model

Schematic representation of the Standard Linear Solid model.

The Standard Linear Solid Model effectively combines the Maxwell Model and a
Hookean spring in parallel. A viscous material is modeled as a spring and a dashpot in
series with each other, both of which are in parallel with a lone spring. For this model,
the governing constitutive relation is:

dε/dt = [ (E2/η)(σ − E1 ε) + dσ/dt ] / (E1 + E2)

where E1 is the modulus of the lone spring, E2 the modulus of the spring in the Maxwell arm,
and η the viscosity of the dashpot. Under a constant
stress, the modeled material will instantaneously deform to some strain, which is the
elastic portion of the strain, and after that it will continue to deform and asymptotically
approach a steady-state strain. This last portion is the viscous part of the strain.
Although the Standard Linear Solid Model is more accurate than the Maxwell and
Kelvin-Voigt models in predicting material responses, mathematically it returns
inaccurate results for strain under specific loading conditions and is rather difficult to
calculate.

Generalized Maxwell Model

Main article: Generalized Maxwell Model


Schematic of Maxwell-Weichert Model

The Generalized Maxwell model, also known as the Maxwell-Weichert model (after James
Clerk Maxwell and Dieter Weichert), is the most general form of the models described
above. It takes into account that relaxation does not occur at a single time, but at a
distribution of times. Due to molecular segments of different lengths with shorter ones
contributing less than longer ones, there is a varying time distribution. The Weichert
model shows this by having as many spring-dashpot Maxwell elements as are
necessary to accurately represent the distribution. The Figure on the right represents a
possible Weichert model [4].

Effect of temperature on viscoelastic behavior


The secondary bonds of a polymer constantly break and reform due to thermal motion.
Application of a stress favors some conformations over others, so the molecules of the
polymer will gradually "flow" into the favored conformations over time. Because thermal
motion is one factor contributing to the deformation of polymers, viscoelastic properties
change with increasing or decreasing temperature. In most cases, the creep modulus,
defined as the ratio of applied stress to the time-dependent strain, decreases with
increasing temperature. Generally speaking, an increase in temperature correlates to a
logarithmic decrease in the time required to impart equal strain under a constant stress.
In other words, it takes less energy to stretch a viscoelastic material an equal distance
at a higher temperature than it does at a lower temperature.

Viscoelastic creep
Main article: Creep (deformation)
Applied stress (a) and induced strain (b) as functions of time for a viscoelastic material.

When subjected to a step constant stress, viscoelastic materials experience a time-


dependent increase in strain. This phenomenon is known as viscoelastic creep.
At a time t0, a viscoelastic material is loaded with a constant stress that is maintained
for a sufficiently long time period. The material responds to the stress with a strain that
increases until the material ultimately fails. When the stress is maintained for a shorter
time period, the material undergoes an initial strain until a time t1, after which the strain
immediately decreases (discontinuity) then gradually decreases at times t > t1 to a
residual strain.
Viscoelastic creep data can be presented by plotting the creep modulus (constant
applied stress divided by total strain at a particular time) as a function of time [5]. Below
its critical stress, the viscoelastic creep modulus is independent of stress applied. A
family of curves describing strain versus time response to various applied stress may be
represented by a single viscoelastic creep modulus versus time curve if the applied
stresses are below the material's critical stress value.
Viscoelastic creep is important when considering long-term structural design. Given
loading and temperature conditions, designers can choose materials that best suit
component lifetimes.

Measuring viscoelasticity
Though there are many instruments that test the mechanical and viscoelastic response
of materials, broadband viscoelastic spectroscopy (BVS) and resonant ultrasound
spectroscopy (RUS) are more commonly used to test viscoelastic behavior because
they can be used above and below ambient temperatures and are more specific to
testing viscoelasticity. These two instruments employ a damping mechanism at various
frequencies and time ranges with no appeal to time-temperature superposition [6].
Using BVS and RUS to study the mechanical properties of materials is important to
understanding how a material exhibiting viscoelasticity will perform [6].
Fatigue (material)
In materials science, fatigue is the progressive and localised structural damage that
occurs when a material is subjected to cyclic or fluctuating loading, eventually leading to
structural failure. The maximum stress values are often significantly less than the
ultimate tensile strength, and may be below the yield stress of the material.

Characteristics of fatigue failures

Fracture of an Aluminium Crank Arm. Dark area: slow crack growth. Bright area: sudden
fracture.

• The process starts with a microscopic crack, called the initiation site, which then
widens with each subsequent movement, a phenomenon analysed in the topic of
fracture mechanics.
• Failure is essentially probabilistic. The number of cycles required for failure
varies between homogeneous material samples. Analysis demands the
techniques of survival analysis.
• The greater the applied stress, the shorter the life.
• Fatigue life scatter tends to increase as stress decreases.
• Damage is cumulative. Materials do not recover when rested.
• Fatigue life is influenced by a variety of factors, such as temperature and surface
finish, in complicated ways.
• Some materials (e.g., some steel and titanium alloys) exhibit an endurance limit
or fatigue limit, a limit below which repeated stress does not induce failure,
theoretically, for an infinite number of cycles of load. Generally speaking, a steel
or titanium component being cycled at stresses below their endurance limit will
fail from some other mode before it fails from fatigue. Most other non-ferrous
metals (e.g., aluminium, copper and some titanium alloys) exhibit no such limit
and even small stresses will eventually cause failure.
• In recent years, researchers (see for example the work of Bathias, Murakami,
and Stanzl-Tschegg) have determined that many materials fail below their
endurance limits due to internal defects. This area of investigation is termed very
high or ultra high cycle fatigue.
• As a means to gauge fatigue characteristics of non-ferrous and other alloys that
do not exhibit an endurance limit, a fatigue strength is frequently determined;
this is typically the stress level at which a component will survive 10^8 loading
cycles.
• Composite materials can also suffer fatigue, especially in components which
have been under-designed for the loads they have to resist. The appearance of
fatigue cracks is, however, quite different from that shown by metals, with multiple
delaminations tending to occur within the parts. Discovery of such defects is thus
difficult using nondestructive testing.
• Plastics and rubbers behave like metals, usually with a small number of distinct
brittle cracks at surface stress raisers.

High-cycle fatigue
Historically, most attention has focused on situations that require more than 10^4 cycles
to failure, where stress is low and deformation is primarily elastic.

The S-N curve

In high-cycle fatigue situations, materials performance is commonly characterised by an


S-N curve, also known as a Wöhler curve. This is a graph of the magnitude of a cyclical
stress (S) against the logarithmic scale of cycles to failure (N).
S-N curves are derived from tests on samples of the material to be characterised (often
called coupons) where a regular sinusoidal stress is applied by a testing machine which
also counts the number of cycles to failure. This process is sometimes known as
coupon testing. Each coupon test generates a point on the plot though in some cases
there is a runout where the time to failure exceeds that available for the test (see
censoring). Analysis of fatigue data requires techniques from statistics, especially
survival analysis and linear regression.

Probabilistic nature of fatigue

As coupons sampled from a homogeneous frame will manifest variation in their number
of cycles to failure, the S-N curve should more properly be an S-N-P curve capturing the
probability of failure after a given number of cycles of a certain stress. Probability
distributions that are common in data analysis and in design against fatigue include the
lognormal distribution, extreme value distribution and Weibull distribution.
Complex loadings

Spectrum loading

In practice, a mechanical part is exposed to a complex, often random, sequence of


loads, large and small. In order to assess the safe life of such a part:

1. Reduce the complex loading to a series of simple cyclic loadings using a


technique such as rainflow analysis;
2. Create a histogram of cyclic stress from the rainflow analysis;
3. For each stress level, calculate the degree of cumulative damage incurred from
the S-N curve; and
4. Combine the individual contributions using an algorithm such as Miner's rule.

Miner's rule

In 1945, M. A. Miner popularised a rule that had first been proposed by A. Palmgren in
1924. The rule, variously called Miner's rule or the Palmgren-Miner linear damage
hypothesis, states that where there are k different stress magnitudes in a spectrum, Si
(1 ≤ i ≤ k), each contributing ni(Si) cycles, then if Ni(Si) is the number of cycles to failure
of a constant stress reversal Si, failure occurs when:

Σ (i = 1 to k) ni / Ni = C

C is experimentally found to be between 0.7 and 2.2. Usually for design purposes, C is
assumed to be 1. This can be thought of as assessing what proportion of life is consumed
by stress reversals at each magnitude and then forming a linear combination of their aggregate.
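For illustration, the damage sum can be evaluated directly from a load spectrum; the applied cycle counts and S-N lives below are hypothetical numbers, not data for a real component:

def miner_damage(cycles, cycles_to_failure):
    # Palmgren-Miner damage sum: D = sum(n_i / N_i); failure is predicted when D reaches C (about 1)
    return sum(n / N for n, N in zip(cycles, cycles_to_failure))

# Hypothetical load spectrum: applied cycles n_i and the corresponding S-N lives N_i
n = [1.0e4, 5.0e4, 2.0e5]
N = [5.0e4, 5.0e5, 5.0e6]
D = miner_damage(n, N)
print(D)  # 0.34 here, i.e. roughly a third of the allowable damage (taking C = 1) is consumed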
Though Miner's rule is a useful approximation in many circumstances, it has two major
limitations:

1. It fails to recognise the probabilistic nature of fatigue and there is no simple way
to relate life predicted by the rule with the characteristics of a probability
distribution.
2. There is sometimes an effect in the order in which the reversals occur. In some
circumstances, cycles of high stress followed by low stress cause more damage
than would be predicted by the rule.

Paris' Relationship

Paris derived a relationship for the stage II crack growth rate with cycles N, in terms of the
cyclical component ∆K of the Stress Intensity Factor K:

da/dN = C (∆K)^m

where 2a is the crack length, C is a material constant, and m is typically in the range 3 to 5.
This relationship was later modified (by Forman, 1967 [1]) to make better allowance for
the mean stress, by introducing a factor depending on (1 − R), where R = min. stress / max.
stress, in the denominator.
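A short sketch of the Paris relation; the constants C and m and the ∆K values below are illustrative placeholders, and the units must be kept consistent (e.g. da/dN in m/cycle with ∆K in MPa·√m):

def paris_growth_rate(C, m, delta_K):
    # Paris relation for stage II growth: da/dN = C * delta_K**m
    return C * delta_K**m

# Illustrative constants only
for dK in (5.0, 10.0, 20.0):
    print(dK, paris_growth_rate(C=1e-11, m=3.0, delta_K=dK))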

Low-cycle fatigue
Where the stress is high enough for plastic deformation to occur, the account in terms of
stress is less useful and the strain in the material offers a simpler description. Low-cycle
fatigue is usually characterised by the Coffin-Manson relation (published independently by L. F. Coffin in 1954 and S. S. Manson in 1953):

∆εp/2 = εf' (2N)^c,

where:

• ∆εp /2 is the plastic strain amplitude;


• εf' is an empirical constant known as the fatigue ductility coefficient, the failure
strain for a single reversal;
• 2N is the number of reversals to failure (N cycles);
• c is an empirical constant known as the fatigue ductility exponent, commonly
ranging from -0.5 to -0.7 for metals.
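A direct consequence of the relation is that the reversals to failure can be recovered from an imposed plastic strain amplitude. The sketch below (in Python) does this with hypothetical values of the fatigue ductility coefficient and exponent; the numbers are invented for illustration.

# Hypothetical Coffin-Manson constants for a ductile metal
eps_f = 0.6          # fatigue ductility coefficient (failure strain for one reversal)
c = -0.6             # fatigue ductility exponent
plastic_amp = 0.01   # imposed plastic strain amplitude, d_eps_p / 2

# d_eps_p/2 = eps_f' * (2N)**c   =>   2N = (plastic_amp / eps_f')**(1/c)
reversals_to_failure = (plastic_amp / eps_f) ** (1.0 / c)
print(f"2N = {reversals_to_failure:.0f} reversals (about {reversals_to_failure / 2:.0f} cycles)")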

Fatigue and fracture mechanics


The account above is purely phenomenological and, though it allows life prediction and
design assurance, it does not enable life improvement or design optimisation. For the
latter purposes, an exposition of the causes and processes of fatigue is necessary.
Such an explanation is given by fracture mechanics in four stages.

1. Crack nucleation;
2. Stage I crack-growth;
3. Stage II crack-growth; and
4. Ultimate ductile failure.

Factors that affect fatigue-life


• Magnitude of stress, including stress concentrations caused by part geometry.
• Quality of the surface. Surface roughness, scratches, etc. cause stress concentrations or provide crack nucleation sites which can lower fatigue life, depending on how the stress is applied. On the other hand, surface stress can be intentionally manipulated to increase fatigue life. For example, shot peening is widely used to put the surface in a state of compressive stress, which inhibits surface crack formation and thus improves fatigue life. Such techniques for producing surface stress are often referred to generically as peening, whatever the mechanism used to produce the stress. Other more recently introduced surface treatments, such as laser peening and ultrasonic impact treatment, can also produce this surface compressive stress and can increase the fatigue life of the component. This improvement is normally observed only for high-cycle fatigue; little improvement is obtained in the low-cycle fatigue regime.
• Material type. Certain materials, such as steel, will never fail due to fatigue if the stresses remain below a certain level (the fatigue limit). Other materials, such as aluminium, will eventually fail due to fatigue regardless of the stresses the material sees.
• Surface defect geometry and location. The size, shape, and location of surface defects such as scratches, gouges, and dents can have a significant impact on fatigue life.
• Significantly uneven cooling, leading to a heterogeneous distribution of material properties such as hardness and ductility and, in the case of alloys, structural composition. Uneven cooling of castings, for example, can produce high levels of tensile residual stress, which will encourage crack growth.
• Size, frequency, and location of internal defects. Casting defects such as gas porosity and shrinkage voids, for example, can significantly impact fatigue life.
• In metals where strain-rate sensitivity is observed (ferrous metals, copper, titanium, etc.), strain rate also affects fatigue life in low-cycle fatigue situations.
• For non-isotropic materials, the direction of the applied stress can affect fatigue life.
• Grain size. For most metals, fine-grained parts exhibit a longer fatigue life than coarse-grained parts.
• Environmental conditions and exposure time can cause erosion, corrosion, or gas-phase embrittlement, all of which affect fatigue life. Corrosion fatigue is a problem encountered in many aggressive environments.
• The operating temperature range to which the part is exposed affects fatigue life.
Design against fatigue
Dependable design against fatigue-failure requires thorough education and supervised
experience in structural engineering, mechanical engineering, or materials science.
There are three principal approaches to life assurance for mechanical parts that display
increasing degrees of sophistication:

1. Design to keep stress below threshold of fatigue limit (infinite lifetime concept);
2. Design (conservatively) for a fixed life after which the user is instructed to replace
the part with a new one (a so-called lifed part, finite lifetime concept, or "safe-life"
design practice);
3. Instruct the user to inspect the part periodically for cracks and to replace the part
once a crack exceeds a critical length. This approach usually uses the
technologies of nondestructive testing and requires an accurate prediction of the
rate of crack-growth between inspections. This is often referred to as damage
tolerant design or "retirement-for-cause".

Stopping fatigue

Fatigue cracks that have begun to propagate can sometimes be stopped by drilling
holes, called drill stops, in the path of the fatigue crack.[4] However, it is not
recommended because a hole represents a stress concentration factor of about 2.
There is thus the possibility of a new crack starting in the side of the hole. It is always
far better to replace the cracked part entirely. Several disasters have been caused by
botched repairs to cracked structures, such as JAL 123.

Material change

Changes in the materials used in parts can also improve fatigue life. For example, parts can be made from metals with better fatigue ratings. Complete replacement and redesign of parts can also reduce, if not eliminate, fatigue problems. Thus helicopter rotor blades and propellers in metal are being replaced by composite equivalents. They are not only
lighter, but also much more resistant to fatigue. They are more expensive, but the extra
cost is amply repaid by their greater integrity, since loss of a rotor blade usually leads to
total loss of the aircraft. A similar argument has been made for replacement of metal
fuselages, wings and tails of aircraft.

Infamous fatigue failures


Versailles train crash

On May 8, 1842, one of the trains carrying revellers on their return from Versailles to
Paris, having witnessed the celebrations of the birthday of Louis Philippe, derailed and
caught fire. Though the resulting conflagration mutilated the dead beyond recognition or
enumeration, it is estimated that 53 perished and around 40 were seriously injured.
The derailment had been the result of a broken locomotive axle. Rankine's investigation
of broken axles in Britain highlighted the importance of stress concentration, and the
mechanism of crack growth with repeated loading.

De Havilland Comet

Metal fatigue came strongly to the notice of aircraft engineers in 1954 after three de
Havilland Comet passenger jets had broken up in mid-air and crashed within a single
year. Investigators from the Royal Aircraft Establishment at Farnborough in England told
a public enquiry that the sharp corners around the plane's window openings (actually
the forward ADF antenna window in the roof) acted as initiation sites for cracks. The
skin of the aircraft was also too thin, and cracks from manufacturing stresses were
present at the corners. All aircraft windows were immediately redesigned with rounded
corners.
Glass Transition

The Glass Transition

Semi-crystalline solids have both amorphous and crystalline regions. Depending on the temperature, the amorphous regions can be in either the glassy or the rubbery state. The temperature at which the transition in the amorphous regions between the glassy and rubbery state occurs is called the glass transition temperature.

The Glass Transition


The glass transition is a property of only the amorphous portion of a semi-crystalline solid. The crystalline portion remains crystalline during the glass transition.
At a low temperature the amorphous regions of a polymer are in the glassy state. In this state the molecules are frozen in place. They may be able to vibrate slightly, but they do not have any segmental motion, in which portions of the molecule wiggle around.

In the glassy state, the motion of the red molecule in the schematic diagram at the right would NOT occur. When the amorphous regions of a polymer are in the glassy state, it generally will be hard, rigid, and brittle.

If the polymer is heated it eventually will reach its glass transition temperature. At this
temperature portions of the molecules can start to wiggle around as is illustrated by the red
molecule in the diagram above. The polymer now is in its rubbery state. The rubbery state lends
softness and flexibility to a polymer.

You may have experienced the glass transition of chewing gum. At body temperature the gum is
soft and pliable, which is characteristic of an amorphous solid in the rubbery state. If you put a
cold drink in your mouth or hold an ice cube on the gum, it becomes hard and rigid. The glass transition temperature of the gum is somewhere between 0 °C and 37 °C.

Comparison with Melting


The glass transition is NOT the same as melting.
Glass Transition
• Property of the amorphous region
• Below Tg: disordered amorphous solid with immobile molecules
• Above Tg: disordered amorphous solid in which portions of molecules can wiggle around
• A second-order transition (see below)

Melting
• Property of the crystalline region
• Below Tm: ordered crystalline solid
• Above Tm: disordered melt
• A first-order transition (see below)

Thermodynamic transitions are classified as being first- or second-order. In a first-order transition there is a transfer of heat between system and surroundings and the system undergoes an abrupt volume change.

In a second-order transition, there is no transfer of heat, but the heat capacity does change. The volume changes to accommodate the increased motion of the wiggling chains, but it does not change discontinuously. Illustrative plots of specific volume vs. temperature are shown at the right for amorphous and crystalline polymers.

Glass Transition Temperature

When an amorphous polymer is heated, the temperature at which it changes from a glass to the rubbery form is called the glass transition temperature, Tg. A given polymer sample does not have a unique value of Tg because the glass phase is not at equilibrium. The measured value of Tg will depend on the molecular weight of the polymer, on its thermal history and age, on the measurement method, and on the rate of heating or cooling. Approximate glass transition temperatures of a few polymers are shown below.

Polymer                                  Tg (°C)
Polyethylene (LDPE) -125
Polypropylene (atactic) -20
Poly(vinyl acetate) (PVAc) 28
Poly(ethyleneterephthalate) (PET) 69
Poly(vinyl alcohol) (PVA) 85
Poly(vinyl chloride) (PVC) 81
Polypropylene (isotactic) 100
Polystyrene 100
Poly(methylmethacrylate) (atactic) 105
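As a small illustration of how such a table might be used, the sketch below (in Python) stores the values above in a dictionary and reports whether the amorphous regions of a polymer would be glassy or rubbery at a given temperature. The abbreviated names are simply labels chosen for the example.

tg_celsius = {
    "LDPE": -125, "atactic PP": -20, "PVAc": 28, "PET": 69,
    "PVA": 85, "PVC": 81, "isotactic PP": 100, "PS": 100, "atactic PMMA": 105,
}

def state_at(polymer, temperature_c):
    """Glassy below Tg, rubbery above; applies to the amorphous regions only."""
    return "rubbery" if temperature_c > tg_celsius[polymer] else "glassy"

print("PVC at 25 C:", state_at("PVC", 25))     # glassy at room temperature
print("LDPE at 25 C:", state_at("LDPE", 25))   # rubbery at room temperature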

================================================
grain growth - recrystallisation - recovery
Recovery, Recrystallisation and Grain Growth

Grain growth refers to the increase in size of grains (crystallites) in a material at high
temperature. This occurs when recovery and recrystallisation are complete and further
reduction in the internal energy can only be achieved by reducing the total area of grain
boundary. The term is commonly used in metallurgy but is also used in reference to
ceramics and minerals.

Importance of grain growth

Most materials exhibit the Hall-Petch effect at room-temperature and so display a higher
yield stress when the grain size is reduced. At high temperatures the opposite is true
since the open, disordered nature of grain boundaries means that vacancies can diffuse
more rapidly down boundaries leading to more rapid Coble creep. Since boundaries are
regions of high energy they make excellent sites for the nucleation of precipitates and
other second phases, e.g. Mg-Si-Cu phases in some aluminium alloys or martensite platelets in steel. Depending on the second phase in question this may have positive or
negative effects.

Rules of grain growth

Grain growth has long been studied primarily by the examination of sectioned, polished
and etched samples under the optical microscope. Although such methods enabled the
collection of a great deal of empirical evidence, particularly with regard to factors such as
temperature or composition, the lack of crystallographic information limited the
development of an understanding of the fundamental physics. Nevertheless, the
following became well-established features of grain growth:

1. Grain growth occurs by the movement of grain boundaries and not by coalescence (i.e. not in the way that water droplets merge).
2. Boundary movement is discontinuous and the direction of motion may change suddenly.
3. One grain may grow into another grain whilst being consumed from the other side.
4. The rate of consumption often increases when the grain is nearly consumed.
5. A curved boundary typically migrates towards its centre of curvature.
6. When grain boundaries in a single phase meet at angles other than 120 degrees, the grain included by the more acute angle will be consumed so that the angles approach 120 degrees.

Normal vs Abnormal

Fig 1. The distinction between continuous (normal) grain growth, where all grains grow at roughly the same rate, and discontinuous (abnormal) grain growth, where one grain grows at a much greater rate than its neighbours.

In common with recovery and recrystallisation, growth phenomena can be separated into
continuous and discontinuous mechanisms. In the former the microstructure evolves
from state A to B (in this case the grains get larger) in a uniform manner. In the latter,
the changes occur heterogeneously and specific transformed and untransformed
regions may be identified. Discontinuous grain growth is characterised by a subset of
grains growing at a high rate and at the expense of their neighours and tends to result in
a microstructure dominated by a few very large grains. In order for this to occur the
subset of grains must possess some advantage over their competitors such as a high
grain boundary energy, locally high grain boundary mobility, favourable texture or lower
local second-phase particle density.

Driving force
The boundary between one grain and its neighbour is a defect in the crystal structure
and so it is associated with a certain amount of energy. As a result there is a
thermodynamic driving force for the total area of boundary to be reduced. If the grain
size increases, accompanied by a reduction in the actual number of grains, then the
total area of boundary will be reduced.

In comparison to phase transformations the energy available to drive grain growth is


very low and so it tends to occur at much slower rates and is easily slowed by particles
or solute atoms.

Ideal grain growth

Ideal grain growth is a special case of normal grain growth where boundary motion is driven only by the reduction of
the total amount of grain boundary surface energy. Additional contributions to the
driving force by e.g. elastic strains or temperature gradients are neglected. If it holds
that the rate of growth is proportional to the driving force and that the driving force is
proportional to the total amount of grain boundary energy, then it can be shown that the time t required to reach a given grain size is approximated by

d^2 − d_0^2 = k t,

where d_0 is the initial grain size, d is the final grain size, and k is a temperature-dependent constant given by an exponential law:

k = k_0 exp(−Q / RT),

where k_0 is a constant, R is the gas constant, T is the absolute temperature and Q is the activation energy for boundary mobility. Theoretically, the activation energy for boundary mobility should equal that for self-diffusion, but this is often found not to be the case.

In general these equations are found to hold for ultra-high purity materials but rapidly fail
when even tiny concentrations of solute are introduced.
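A short sketch (in Python) of how these relations might be evaluated follows. The parabolic law d^2 − d_0^2 = k·t and the Arrhenius form of k are taken from the equations above; every numerical value (k_0, Q, temperature and the grain sizes) is invented purely for illustration.

import math

R_GAS = 8.314            # gas constant, J/(mol*K)

def growth_time(d0, d, k0, Q, T):
    """Time in seconds to coarsen from grain size d0 to d (both in metres),
    using d**2 - d0**2 = k*t with k = k0 * exp(-Q / (R*T))."""
    k = k0 * math.exp(-Q / (R_GAS * T))
    return (d ** 2 - d0 ** 2) / k

# Invented parameters, for illustration only
t = growth_time(d0=10e-6, d=50e-6, k0=1.0e-4, Q=200e3, T=1000.0)
print(f"t = {t:.3g} s  (about {t / 3600:.0f} h)")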

Factors hindering growth


If there are additional factors preventing boundary movement, such as Zener pinning by
particles, then the grain size may be restricted to a much lower value than might
otherwise be expected. This is an important industrial mechanism in preventing the
softening of materials at high temperature.
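The commonly quoted Zener estimate of this limiting grain size, d_max ≈ 4r/(3·Fv) for a volume fraction Fv of particles of radius r, is not stated in the text above, but a brief sketch of it (in Python) illustrates the pinning effect; the particle size and volume fraction used are invented.

def zener_limit(r, f_v):
    """Commonly quoted Zener estimate of the limiting grain size for a
    dispersion of particles of radius r and volume fraction f_v."""
    return 4.0 * r / (3.0 * f_v)

# e.g. 50 nm particles at 1 vol% (illustrative numbers)
print(f"Limiting grain size ~ {zener_limit(50e-9, 0.01) * 1e6:.1f} micrometres")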

See also
• Recrystallization (metallurgical)
• Recovery

--------------------------------------------------------------------

Recrystallization (metallurgy)

Recrystallization is a process by which deformed grains are replaced by a new set of


undeformed grains that nucleate and grow until the original grains have been entirely
consumed. Recrystallization is usually accompanied by a reduction in the strength and
hardness of a material and a simultaneous increase in the ductility. Thus, the process
may be introduced as a deliberate step in metals processing or may be an undesirable by-product of another processing step. The most important industrial uses are the
softening of metals previously hardened by cold work, which have lost their ductility, and
the control of the grain structure in the final product.

Definition
Three EBSD maps of the stored energy in an Al-Mg-Mn alloy after exposure to
increasing recrystallization temperature. The volume fraction of recrystallized grains
(light) increases with temperature for a given time.

A precise definition of recrystallization is difficult as the process is strongly related to


several other processes, most notably recovery and grain growth. In some cases it is difficult to precisely define the point at which one process ends and another begins.
Doherty et al (1998) defined recrystallization as:

"... the formation of a new grain structure in a deformed material by the formation
and migration of high angle grain boundaries driven by the stored energy of
deformation. High angle boundaries are those with greater than a 10-15°
misorientation"

Thus the process can be differentiated from recovery (where high angle grain
boundaries do not migrate) and grain growth (where the driving force is only due to the
reduction in boundary area). Recrystallization may occur during or after deformation
(during cooling or a subsequent heat treatment, for example). The former is termed
dynamic while the latter is termed static. In addition, recrystallization may occur in a
discontinuous manner, where distinct new grains form and grow, or a continuous
manner, where the microstructure gradually evolves into a recrystallised microstructure.
The different mechanisms by which recrystallization and recovery occur are complex
and in many cases remain controversial. The following description is primarily
applicable to static discontinuous recrystallization, which is the most classical variety
and probably the most understood. Additional mechanisms include (geometric) dynamic
recrystallization and strain induced boundary migration.

Laws of Recrystallization
There are several, largely empirical laws of recrystallization:

• Thermally activated. The rate of the microscopic mechanisms controlling the


nucleation and growth of recrystallised grains depend on the annealing
temperature. Arrhenius type equations indicate an exponential relationship.
• Critical temperature. Following from the previous rule it is found that
recrystallization requires a minimum temperature for the necessary atomic
mechanisms to occur. This recrystallization temperature decreases with
annealing time.
• Critical deformation. The prior deformation applied to the material must be
adequate to provide nuclei and sufficient stored energy to drive their growth.
• Deformation affects the critical temperature. Increasing the magnitude of prior
deformation, or reducing the deformation temperature, will increase the stored
energy and the number of potential nuclei. As a result the recrystallization
temperature will decrease with increasing deformation.
• Initial grain size affects the critical temperature. Grain boundaries are good sites
for nuclei to form. Since an increase in grain size results in fewer boundaries this
results in a decrease in the nucleation rate and hence an increase in the recrystallization temperature.
• Deformation affects the final grain size. Increasing the deformation, or reducing
the deformation temperature, increases the rate of nucleation faster than it
increases the rate of growth. As a result the final grain size is reduced by
increased deformation.

Driving force

During plastic deformation the work performed is the integral of the stress σ and the plastic strain increment dε. Although the majority of this work is converted to heat, some fraction (~1-5%) is retained in the material as defects - particularly dislocations. The
rearrangement or elimination of these dislocations will reduce the internal energy of the
system and so there is a thermodynamic driving force for such processes. At moderate
to high temperatures, particularly in materials with a high stacking fault energy such as
aluminium and nickel, recovery occurs readily and free dislocations will readily
rearrange themselves into subgrains surrounded by low-angle grain boundaries. The driving force is the difference in energy between the deformed and recrystallised state, ΔE, which can be estimated from the dislocation density or from the subgrain size and boundary energy (Doherty, 2005), approximately

ΔE ≈ ρ G b^2 / 2   or   ΔE ≈ 3γ / ds,

where ρ is the dislocation density, G is the shear modulus, b is the Burgers vector of the dislocations, γ is the sub-grain boundary energy and ds is the subgrain size.
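As a rough numerical illustration of the size of this driving force, the sketch below (in Python) evaluates the dislocation estimate ΔE ≈ ρ·G·b^2/2 with representative, assumed values for a heavily cold-worked aluminium alloy; the numbers are illustrative rather than measured.

# Representative (assumed) values for a heavily cold-worked aluminium alloy
rho = 5.0e14        # dislocation density, m^-2
G = 26.0e9          # shear modulus, Pa
b = 0.286e-9        # Burgers vector, m

delta_E = 0.5 * rho * G * b ** 2      # stored energy per unit volume, J/m^3
print(f"Stored energy ~ {delta_E:.2e} J/m^3")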

Nucleation

Historically it was assumed that the nucleation rate of new recrystallized grains would
be determined by the thermal fluctuation model successfully used for solidification and
precipitation phenomena. In this theory it is assumed that as a result of the natural
movement of atoms (which increases with temperature) small nuclei would
spontaneously arise in the matrix. The formation of these nuclei would be associated
with an energy requirement due to the formation of a new interface and an energy
liberation due to the formation of a new volume of lower-energy material. If a nucleus were larger than some critical radius then it would be thermodynamically stable and could start to grow.

The main problem with this theory is that the stored energy due to dislocations is very low (of the order of 0.1-1 MJ m^-3) while the energy of a grain boundary is quite high (~0.5 J m^-2). Calculations based on these values found that the observed nucleation rate was greater than the calculated one by some impossibly large factor (~10^50).
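The scale of the problem can be seen by evaluating the classical expressions r* = 2γ/ΔE and ΔG* = 16πγ^3/(3ΔE^2) with the boundary energy quoted above and a driving force at the upper end of the quoted range; the annealing temperature in the sketch below (in Python) is an assumed value.

import math

gamma = 0.5        # grain boundary energy, J/m^2 (figure quoted above)
dE = 1.0e6         # stored (driving) energy, J/m^3 (upper end of the range quoted above)
kT = 1.381e-23 * 900.0        # thermal energy at an assumed 900 K anneal, J

r_star = 2.0 * gamma / dE                                 # classical critical radius, m
dG_star = 16.0 * math.pi * gamma ** 3 / (3.0 * dE ** 2)   # classical energy barrier, J

print(f"r* ~ {r_star:.1e} m, barrier/kT ~ {dG_star / kT:.1e}")
# The barrier is ~10^8 times the thermal energy, so thermally activated (fluctuation)
# nucleation of a fresh grain is effectively impossible.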
As a result the alternate theory proposed by Cahn in 1949 is now universally accepted.
The recrystallised grains do not nucleate in the classical fashion but rather grow from
pre-existing sub-grains and cells. The 'incubation time' is then a period of recovery where sub-grains with low-angle boundaries (<1-2°) begin to accumulate dislocations and become increasingly misoriented with respect to their neighbours. The increase in
misorientation increases the mobility of the boundary and so the rate of growth of the
sub-grain increases. If one sub-grain in a local area happens to have an advantage over
its neighbours (such as locally high dislocation densities, a greater size or favourable
orientation) then this sub-grain will be able to grow more rapidly than its competitors. As
it grows its boundary becomes increasingly misoriented with respect to the surrounding
material until it can be recognised as an entirely new strain-free grain.

Kinetics

Variation of recrystallized volume fraction with time

Recrystallization kinetics are commonly observed to follow the profile shown. There is
an initial 'nucleation period' t0 where the nuclei form, and then begin to grow at a
constant rate consuming the deformed matrix. Although the process does not strictly
follow classical nucleation theory it is often found that such mathematical descriptions
provide at least a close approximation. For an array of spherical grains the mean radius R at a time t is (Humphreys and Hatherly 2004)

R = G (t − t0),

where t0 is the nucleation time and G is the growth rate dR/dt. If nuclei form at a constant rate N per unit volume per unit time and the grains are assumed to be spherical, then in the early stages the volume fraction is approximately

f = (π/3) N G^3 t^4.

This expression is valid in the early stages of recrystallization, when f << 1 and the growing grains are not impinging on each other. Once the grains come into contact the rate of growth slows and is related to the fraction of untransformed material (1 − f) by the Johnson-Mehl equation:

f = 1 − exp( −(π/3) N G^3 t^4 ).

While this equation provides a better description of the process it still assumes that the
grains are spherical, the nucleation and growth rates are constant, the nuclei are
randomly distributed and the nucleation time t0 is small. In practice few of these are
actually valid and alternate models need to be used.
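A brief sketch (in Python) of the Johnson-Mehl form quoted above is given below; the nucleation and growth rates are invented solely to show the characteristic sigmoidal behaviour of the recrystallized fraction.

import math

N_dot = 1.0e12    # nucleation rate, nuclei per m^3 per s (invented)
G = 1.0e-8        # growth rate dR/dt, m/s (invented)

def fraction(t):
    """Johnson-Mehl recrystallized fraction for constant nucleation and growth rates."""
    return 1.0 - math.exp(-(math.pi / 3.0) * N_dot * G ** 3 * t ** 4)

for t in (100, 300, 1000, 3000):
    print(f"t = {t:5d} s   f = {fraction(t):.3f}")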

It is generally acknowledged that any useful model must not only account for the initial
condition of the material but also the constantly changing relationship between the
growing grains, the deformed matrix and any second phases or other microstructural
factors. The situation is further complicated in dynamic systems where deformation and
recrystallization occur simultaneously. As a result it has generally proven impossible to
produce an accurate predictive model for industrial processes without resorting to
extensive empirical testing. Since this may require the use of industrial equipment that
has not actually been built there are clear difficulties with this approach.

Factors influencing the rate

The annealing temperature has a dramatic influence on the rate of recrystallization


which is reflected in the above equations. However, for a given temperature there are
several additional factors that will influence the rate.

The rate of recrystallization is heavily influenced by the amount of deformation and, to a


lesser extent, the manner in which it is applied. Heavily deformed materials will
recrystallize more rapidly than those deformed to a lesser extent. Indeed, below a
certain deformation recrystallization may never occur. Deformation at higher
temperatures will allow concurrent recovery and so such materials will recrystallize
more slowly than those deformed at room temperature e.g. contrast hot and cold rolling.
In certain cases deformation may be unusually homogeneous or occur only on specific
crystallographic planes.

The absence of orientation gradients and other heterogeneities may prevent the
formation of viable nuclei. Experiments in the 1970s found that molybdenum deformed to a true strain of 0.3 recrystallized most rapidly when deformed in tension, and at decreasing rates for wire drawing, rolling and compression (Barto & Ebert 1971).
The orientation of a grain and how the orientation changes during deformation influence
the accumulation of stored energy and hence the rate of recrystallization. The mobility
of the grain boundaries is influenced by their orientation and so some crystallographic
textures will result in faster growth than others.

Solute atoms, both deliberate additions and impurities, have a profound influence on the
recrystallization kinetics. Even minor concentrations may have a substantial influence
e.g. 0.004% Fe increases the recrystallization temperature by around 100°C
(Humphreys and Hatherly 2004). It is currently unknown whether this effect is primarily
due to the retardation of nucleation or the reduction in the mobility of grain boundaries
i.e. growth.

Influence of second phases

Many alloys of industrial significance have some volume fraction of second phase
particles, either as a result of impurities or from deliberate alloying additions. Depending
on their size and distribution such particles may act to either encourage or retard
recrystallization.

Small particles
The effect of a distribution of small particles on the grain size in a recrystallized sample.

Recrystallization is prevented or significantly slowed by a dispersion of small, closely


spaced particles due to Zener pinning on both low- and high-angle grain boundaries.
This pressure directly opposes the driving force arising from the dislocation density and
will influence both the nucleation and growth kinetics. The effect can be rationalised with
respect to the particle dispersion level Fv/r where Fv is the volume fraction of the
second phase and r is the radius. At low Fv/r the grain size is determined by the number
of nuclei, and so initially may be very small.

However, the grains are unstable with respect to grain growth and so will grow during annealing until the particles exert sufficient pinning pressure to halt them. At moderate
Fv/r the grain size is still determined by the number of nuclei but now the grains are
stable with respect to normal growth (while abnormal growth is still possible). At high
Fv/r the unrecrystallised deformed structure is stable and recrystallization is
suppressed.

Large particles

The deformation fields around large (>1 µm) non-deformable particles are characterised by high dislocation densities and large orientation gradients and so are ideal sites for
the development of recrystallization nuclei. This phenomenon, called particle stimulated
nucleation (PSN), is notable as it provides one of the few ways to control
recrystallization by controlling the particle distribution.

The effect of particle size and volume fraction on the recrystallized grain size (left) and
the PSN regime (right).

The size and misorientation of the deformed zone is related to the particle size, and so there is a minimum particle size required to initiate nucleation. Increasing the extent of deformation will reduce the minimum particle size, leading to a PSN regime in size-deformation space. If the efficiency of PSN is 1 (i.e. each particle stimulates one nucleus) then the final grain size will be simply determined by the number of particles.
Occasionally the efficiency can be greater than one if multiple nuclei form at each
particle but this is uncommon. The efficiency will be less than one if the particles are
close to the critical size and large fractions of small particles will actually prevent
recrystallization rather than initiating it (see above).
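If the PSN efficiency really were one nucleus per particle, the final grain size would follow directly from the particle number density. The sketch below (in Python) makes that estimate assuming spherical particles; the volume fraction and particle radius are invented for illustration.

import math

def psn_grain_size(f_v, r):
    """Final grain size if every particle of radius r (volume fraction f_v)
    nucleates exactly one grain: d ~ (1 / N_v)**(1/3), with N_v = 3 f_v / (4 pi r^3)."""
    n_v = 3.0 * f_v / (4.0 * math.pi * r ** 3)
    return n_v ** (-1.0 / 3.0)

# e.g. 1 vol% of 2 micrometre particles (illustrative numbers)
print(f"Grain size ~ {psn_grain_size(0.01, 2e-6) * 1e6:.1f} micrometres")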

Bimodal particle distributions

The recrystallization behaviour of materials containing a wide distribution of particle


sizes can be difficult to predict. This is compounded in alloys where the particles are thermally unstable and may grow or dissolve with time. The situation is simpler in bimodal alloys, which have two distinct particle populations. An example is Al-Si alloys, where it has been shown that even in the presence of very large (>5 µm) particles the
recrystallization behaviour is dominated by the small particles (Chan & Humphreys
1984). In such cases the resulting microstructure tends to resemble one from an alloy
with only small particles.

----------------------------

Recovery (metallurgy)

Recovery is a process by which deformed grains can reduce their stored energy by the
removal or rearrangement of defects in their crystal structure. These defects, primarily
dislocations, are introduced by plastic deformation of the material and act to increase
the yield strength of a material. Since recovery reduces the dislocation density, the process is normally accompanied by a reduction in a material's strength and a simultaneous increase in the ductility. As a result recovery may be considered beneficial
or detrimental depending on the circumstances. Recovery is related to the similar
process of recrystallisation and grain growth. Recovery competes with recrystallisation,
as both are driven by the stored energy, but is also thought to be a necessary
prerequisite for the nucleation of recrystallised grains.

Definition

The physical processes that fall under the designations of recovery, recrystallisation and
grain growth are often difficult to distinguish in a precise manner. Doherty et al (1998)
stated:

"The authors have agreed that ... recovery can be defined as all annealing
processes occurring in deformed materials that occur without the migration of a
high-angle grain boundary"

Thus the process can be differentiated from recrystallisation and grain growth as both
feature extensive movement of high-angle grain boundaries.
If recovery occurs during deformation (a situation that is common in high-temperature
processing) then it is referred to as 'dynamic' while recovery that occurs after
processing is termed 'static'. The principal difference is that during dynamic recovery stored energy continues to be introduced even as it is decreased by the recovery process - resulting in a form of dynamic equilibrium.
Process

Fig 1. The annihilation and reorganisation of an array of edge dislocations in a crystal


lattice

Deformed structure

A heavily deformed metal contains a huge number of dislocations predominantly caught


up in 'tangles' or 'forests'. Dislocation motion is relatively difficult in a metal with a low
stacking fault energy and so the dislocation distribution after deformation is largely
random. In contrast, metals with moderate to high stacking fault energy, e.g. aluminum,
tend to form a cellular structure where the cell walls consist of rough tangles of
dislocations. The interiors of the cells have a correspondingly reduced dislocation
density.

Annihilation

Each dislocation is associated with a strain field which contributes some small but finite amount to the material's stored energy. When the temperature is increased - typically above 0.3 of the absolute melting point - dislocations become mobile and are able to glide, cross-slip and climb. If two dislocations of opposite sign meet then they effectively cancel out and their contribution to the stored energy is removed. When annihilation is complete, only the excess dislocations of one sign will remain.

Rearrangement

After annihilation any remaining dislocations can align themselves into ordered arrays
where their individual contribution to the stored energy is reduced by the overlapping of
their strain fields. The simplest case is that of an array of edge dislocations of identical Burgers vector. This idealised case can be produced by bending a single crystal that
will deform on a single slip system (the original experiment performed by Cahn in 1949).
The edge dislocations will rearrange themselves into tilt boundaries, a simple example
of a low-angle grain boundary. Grain boundary theory predicts that an increase in
boundary misorientation will increase the energy of the boundary but decrease the
energy per dislocation. Thus, there is a driving force to produce fewer, more highly
misoriented boundaries. The situation in highly deformed, polycrystalline materials is
naturally more complex. Many dislocations of different Burgers vectors can interact to form complex 2-D networks.
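For a symmetric low-angle tilt boundary, the misorientation is related to the dislocation spacing by the standard small-angle relation θ ≈ b/h, where h is the spacing of the edge dislocations in the wall. The sketch below (in Python) evaluates it for assumed, aluminium-like values.

import math

b = 0.286e-9        # Burgers vector, m (aluminium-like, assumed)
spacing = 50e-9     # spacing h between edge dislocations in the wall, m (assumed)

theta = b / spacing                     # misorientation in radians, small-angle limit
print(f"Tilt boundary misorientation ~ {math.degrees(theta):.2f} degrees")
# A fraction of a degree, consistent with the low-angle subgrain boundaries described above.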

Development of substructure

As mentioned above, the deformed structure is often a 3-D cellular structure with walls
consisting of dislocation tangles. As recovery proceeds these cell walls will undergo a
transition towards a genuine subgrain structure. This occurs through a gradual
elimination of extraneous dislocations and the rearrangement of the remaining
dislocations into low-angle grain boundaries.

Sub-grain formation is followed by subgrain coarsening where the average size


increases while the number of subgrains decreases. This reduces the total area of grain
boundary and hence the stored energy in the material. Subgrain coarsening shares many features with grain growth.

If the sub-structure can be approximated to an array of spherical subgrains of radius R


and boundary energy γs; the stored energy is uniform; and the force on the boundary is evenly distributed, then the driving pressure P scales as γs/R (P = α γs/R, with α a geometric constant of order unity). Since γs is dependent on the boundary misorientation of the surrounding subgrains, the driving pressure generally does not remain constant throughout coarsening.

----------------------------
Glossary
Amorphous - having no ordered arrangement. Polymers are amorphous when their chains are
tangled up in any old way. Polymers are not amorphous when their chains are lined up in ordered
crystals. (see: crystal)

Anion - an atom or molecule which has a negative electrical charge. (see: ion)

Cation - an atom or molecule which has a positive electrical charge. (see: ion)

Complex - two or more molecules which are associated together by some type of interaction of
electrons, other than a covalent bond. (see: covalent bond)

Copolymer - A polymer made from more than one kind of monomer. (see: monomer)

Covalent bond - a joining of two atoms when the two share a pair of electrons.

Crosslinking - crosslinking is when individual polymer chains are linked together by covalent
bonds to form one giant molecule. (see: elastomer, thermoset)

Crystal - a mass of molecules arranged in a neat and orderly fashion. In polymer crystal the
chains are lined up neatly like new pencils in a package. They are also bound together tightly by
secondary interactions. (see: secondary interactions)

Elastomer - rubber. Hot shot scientists say a rubber or elastomer is any material that can be
stretched many times its original length without breaking, and will snap back to its original size
when it is released.

Electrolyte - a molecule that separates into a cation and an anion when it is dissolved in a solvent, usually water. For example, salt (NaCl) separates into Na+ and Cl- in water.

(see: anion, cation)

Elongation - how long a sample is stretched when it is pulled. Elongation is usually expressed as
the length after stretching divided by the original length.

Emulsion - a mixture in which two immiscible substances, like oil and water, stay mixed
together thanks to a third substance called an emulsifier. The emulsifier is usually something like
a soap, whose molecules have a water-soluble end and an organic-soluble end. The soap
molecules form little balls called micelles, in which the water-soluble ends point out into the
water, and the organic-soluble ends point into the inside of the ball. The oil is stabilized in the
water by hiding in the center of the micelle. Thus the water and oil stay mixed.

A micelle with the water-soluble ends of the soap molecule on the outside, and the organic-
soluble ends pointing inward, stabilizing a big brown organic particle on the inside.

Entropy - disorder. Entropy is a measure of the disorder of a system.

First order transition - a thermal transition that involves both a latent heat and a change in the
heat capacity of the material. (see: heat capacity, latent heat, second order transition, thermal
transition)

Free radical - an atom or molecule which has at least one electron which is not paired with
another electron.

Gel - a crosslinked polymer which has absorbed a large amount of solvent. Crosslinked polymers
usually swell a good deal when they absorb solvents. (see: crosslinking)

Gem diol - a diol in which both hydroxy groups are on the same carbon. Gem diols are unstable.
Why are they called gem diols? It's short for geminal, which means "twins". It's related to the
word gemini.

Glass transition temperature - the temperature at which a polymer changes from hard and
brittle to soft and pliable.

Heat capacity - the amount of heat it takes to raise the temperature of one gram of a material
one degree Celsius.

Hydrodynamic volume - the volume of a polymer coil when it is in solution. This can vary for a
polymer depending on how well it interacts with the solvent, and the polymer's molecular
weight.
Hydrogen bond - a very strong attraction between a hydrogen atom which is attached to an
electronegative atom, and an electronegative atom which is usually on another molecule. For
example, the hydrogen atoms on one water molecule are very strongly attracted to the oxygen
atoms on another water molecule.

Ion - an atom or molecule which has a positive or a negative electrical charge.

Latent heat - the heat given off or absorbed when a material melts or freezes, or boils or
condenses. For example, when ice is heated, once the temperature reaches 0 °C, its temperature
won't increase until all the ice is melted. The ice has to absorb heat in order to melt. But even
though it's absorbing heat, its temperature stays the same until all the ice has melted. The heat
required to melt the ice is called the latent heat. The water will give off the same amount of
latent heat when you freeze it.

Le Chatelier's principle - this principle states that if a system is placed under stress, it will act
so as to relieve the stress. Applied to chemical reactions, it means that if product or byproduct is
removed from the system, the equilibrium will be upset, and the reaction will produce more
product to make up for the loss. In polymerizations, this trick is used to make polymerization
reactions reach high conversion.

Ligand - an atom or group of atoms which is associated with a metal atom in a complex.
Ligands may be neutral or they may be ions. (see: complex, ion)

Living polymerization - a polymerization reaction in which there is no termination, and the


polymer chains continue to grow as long as there are monomer molecules to add to the growing
chain.

Matrix - in a fiber reinforced composite, the matrix is the material in which the fiber is
embedded, the material that the fiber reinforces. It comes from a Latin word which means
"mother", interestingly enough.

Modulus - the ability of a sample of a material to resist deformation. Modulus is usually


expressed as the ratio of stress exerted on the sample to the amount of deformation. For example, tensile modulus is the ratio of the stress applied to the elongation which results from the stress. (see: elongation, stress)

Monomer - a small molecule which may react chemically to link together with other molecules
of the same type to form a large molecule called a polymer.
Olefin Metathesis - a reaction between two molecules, both containing carbon-carbon double bonds. In olefin metathesis, the double-bond carbon atoms change partners to create two new molecules, both containing carbon-carbon double bonds.

Oligomer - a polymer whose molecular weight is too low to really be considered a polymer.
Oligomers have molecular weights in the hundreds, but polymers have molecular weights in the
thousands or higher.

Plasticizer - a small molecule that's added to polymer to lower its glass transition temperature.
(see: glass transition temperature).

Random coil - the shape of a polymer molecule when it is in solution and is all tangled up in itself, instead of being stretched out in a line. The random coil only forms when the
intermolecular forces between the polymer and the solvent are equal to the forces between the
solvent molecules themselves and the forces between polymer chain segments.

Ring-opening polymerization - a polymerization in which cyclic monomer is converted into a


polymer which does not contain rings. The monomer rings are opened up and stretched out in the polymer chain.
Secondary interaction - interaction between two atoms or molecules other than a covalent bond.
Secondary interactions include hydrogen bonding, ionic interaction, and dispersion forces. (see:
hydrogen bond)

Second order transition - a thermal transition that involves a change in heat capacity, but does
not have a latent heat. The glass transition is a second order transition. (see: first order transition,
glass transition temperature, heat capacity, latent heat, thermal transition)

Soap - a molecule in which one end is polar and water-soluble and the other end is non-polar and organic-soluble, such as sodium lauryl sulfate.

These form micelles in water, little balls in which the polar ends of the molecules point out into
the water, and the non-polar ends point inward, away from the water. Water insoluble dirt can
hide inside the micelle, so soapy water washes away dirt that plain water can't. (see emulsion)

Strain - the amount of deformation a sample undergoes when one puts it under stress. Strain can
be elongation, bending, compression, or any other type of deformation. (see: elongation, stress)

Strength - the amount of stress an object can receive before it breaks. (see: stress)

Stress - the amount of force exerted on an object, divided by the cross-sectional area of the
object. The cross-sectional area is the area of a cross-section of the object, in a plane
perpendicular to the direction of the force. Stress is usually expressed in units of force divided by
area, such as N/cm2.
Termination - in a chain growth polymerization, the reaction that causes the growing chain to
stop growing. Termination reactions are reactions in which none of the products may react to
make a polymer grow.

Thermoplastic - a material that can be molded and shaped when it's heated.

Thermal transition - a change that takes place in a material when you heat it or cool it.
Examples of thermal transitions include melting, crystallization, or the glass transition. (see:
glass transition temperature)

Thermoset - a hard and stiff crosslinked material that does not soften or become moldable when
heated. (Compare thermosets with thermoplastics, which do become moldable when heated.)
Also, thermosets are different from crosslinked elastomers. Thermosets are stiff and don't stretch
the way that elastomers do. (see: elastomer, thermoplastic)

Toughness - a measure of the ability of a sample to absorb mechanical energy without breaking,
usually defined as the area underneath a stress-strain curve. (see: stress, strain)

Transesterification - a reaction between an ester and an alcohol in which the -O-R group of the ester and the -O-R' group of the alcohol trade places.



Magnetic states

diamagnetism – superdiamagnetism – paramagnetism – superparamagnetism – ferromagnetism – antiferromagnetism – ferrimagnetism – metamagnetism – spin glass

Diamagnetism

Diamagnetism is a weak repulsion from a magnetic field. It is a form of magnetism that is only exhibited
by a substance in the presence of an externally applied magnetic field. It results from
changes in the orbital motion of electrons. Applying a magnetic field creates a magnetic
force on a moving electron in the form of F = Qv × B. This force changes the centripetal
force on the electron, causing it to either speed up or slow down in its orbital motion.
This changed electron speed modifies the magnetic moment of the orbital in a direction
opposing the external field.

Consider two electron orbitals; one rotating clockwise and the other counterclockwise.
An external magnetic field into the page will make the centripetal force on an electron
rotating clockwise increase, which increases its moment out of the page. That same
field would make the centripetal force on an electron rotating counterclockwise
decrease, decreasing its moment into the page. Both changes oppose the external
magnetic field into the page. However, the induced magnetic moment is very small in most everyday materials.

All materials show a diamagnetic response in an applied magnetic field; however for
materials which show some other form of magnetism (such as ferromagnetism or
paramagnetism), the diamagnetism is completely overpowered. Substances which only,
or mostly, display diamagnetic behaviour are termed diamagnetic materials, or
diamagnets. Materials that are said to be diamagnetic are those which are usually
considered by non-physicists as "non magnetic", and include water, DNA, most organic
compounds such as petroleum and some plastics, and many metals such as mercury,
gold and bismuth.

Diamagnetic materials have a relative magnetic permeability that is less than 1, thus a
magnetic susceptibility which is less than 0, and are therefore repelled by magnetic
fields. However, since diamagnetism is such a weak property its effects are not
observable in everyday life. For example, the volume magnetic susceptibility of diamagnets such as water is χv = −9.05×10^−6. The most strongly diamagnetic material is bismuth, with χv = −166×10^−6, although pyrolytic graphite may have a susceptibility of χv = −400×10^−6 in one plane. Nevertheless, these values are orders of magnitude smaller than the magnetism exhibited by paramagnets and ferromagnets. Superconductors may be considered to be perfect diamagnets (χv = −1), since they expel all magnetic field from their interior due to the Meissner effect.

Diamagnetic levitation

A live frog levitates inside a 32 mm diameter vertical bore of a Bitter solenoid in a


magnetic field of about 16 teslas at the Nijmegen High Field Magnet Laboratory. Movie

A particularly fascinating phenomenon involving diamagnets is that they may be


levitated in stable equilibrium in a magnetic field, with no power consumption.
Earnshaw's theorem seems to preclude the possibility of static magnetic levitation. However, Earnshaw's theorem only applies to objects with permanent magnetic moments m, such as ferromagnets, whose magnetic energy is given by m·B. Ferromagnets are attracted to field maxima, which do not exist in free space. Diamagnetism is an induced form of magnetism, so the magnetic moment is proportional to the applied field B. This means that the magnetic energy of diamagnets is proportional to B^2, the square of the field intensity. Diamagnets are instead attracted to field minima, and a minimum in B^2 can exist in free space (in free space ∇^2(B^2) ≥ 0, so local minima of the field intensity are possible even though local maxima are not).
A thin slice of pyrolytic graphite, which is an unusually strong diamagnetic material, can
be stably floated in a magnetic field, such as that from rare earth permanent magnets.
This can be done with all components at room temperature, making a visually effective
demonstration of diamagnetism.
The Radboud University Nijmegen has conducted experiments where they have
successfully levitated water and a live frog, amongst other things.[1]
Recent experiments with studying the growth of protein crystals has led to a technique
that utilizes powerful magnets to allow growth in ways that counteract Earth's gravity. [2]
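The field requirement for this kind of levitation can be estimated from the standard force balance (|χ|/μ0)·B·(dB/dz) = ρ·g. The sketch below (in Python) evaluates it for water, using the susceptibility quoted earlier in this section; it reproduces the order of magnitude of field-gradient product that motivates the ~16 T Bitter magnets mentioned above.

import math

mu0 = 4.0 * math.pi * 1.0e-7      # vacuum permeability, T*m/A
chi = 9.05e-6                     # |volume susceptibility| of water (value quoted above)
rho = 1000.0                      # density of water, kg/m^3
g = 9.81                          # gravitational acceleration, m/s^2

# Levitation requires the magnetic force density to balance gravity:
# (chi / mu0) * B * dB/dz = rho * g
required = mu0 * rho * g / chi    # required value of B * dB/dz, in T^2/m
print(f"B * dB/dz needed ~ {required:.0f} T^2/m")
# Of order 1.4e3 T^2/m, which is only reached near the bore of very strong magnets.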

A simple homemade device may be constructed out of Bismuth plates and a few
permanent magnets that will levitate a permanent magnet. This does not disprove
Earnshaw's theorem. The reason it works has to do with the atomic structure of the magnets and the bismuth plates interacting such that the gradual weakening of the magnetic moments of the magnetic material's atoms is in fact the energy being expended. The energy of the magnetic field can be thought of as kinetic energy stored by the material's electron configuration.

Superdiamagnetism
Superdiamagnetism (or perfect diamagnetism) is a phenomenon occurring in certain materials at low temperatures, characterised by the complete absence of magnetic permeability (i.e. a volume magnetic susceptibility χv = −1) and the exclusion of the interior magnetic field. Superdiamagnetism is a feature of superconductivity. It was identified in 1933 by Walter Meissner and Robert Ochsenfeld (the Meissner effect).

Superdiamagnetism established that the onset of superconductivity in a material is a phase transition. Superconducting magnetic levitation is due to superdiamagnetism, which repels a permanent magnet, and flux pinning, which prevents the magnet floating away.
Diagram of the Meissner effect. Magnetic field lines, represented as arrows, are
excluded from a superconductor when it is below its critical temperature.

Theory

Fritz London and Heinz London developed the theory that the exclusion of magnetic flux
is brought about by electrical "screening currents" that flow at the surface of the
superconducting metal and which generate a magnetic field that exactly cancels the
externally applied field inside the superconductor. These screening currents are
generated whenever a superconducting metal is brought inside a magnetic field. This
can be understood by the fact that a superconductor has zero electrical resistance, so
that "eddy currents", induced by the motion of the metal inside a magnetic field, will not
decay. Fritz London, at the Royal Society in 1935, stated that the thermodynamic state would be described by a single wave function.

"Screening currents" also appear in a situation wherein an initially normal, conducting


metal is placed inside a magnetic field. As soon as the metal is cooled below the
appropriate transition temperature, it becomes superconducting. This expulsion of
magnetic field upon the cooling of the metal cannot be explained any longer by merely
assuming zero resistance and is called the Meissner effect. It shows that the
superconducting state does not depend on the history of preparation, only upon the
present values of temperature, pressure and magnetic field, and therefore is a true
thermodynamic state.

See also
• Superfluidity
• Superconductivity
• Timeline of low-temperature technology

------------------------------------------------------

Paramagnetism
Paramagnetism is a form of magnetism which occurs only in the presence of an
externally applied magnetic field. Paramagnetic materials are attracted to magnetic
fields, hence have a relative magnetic permeability greater than one (or, equivalently, a
positive magnetic susceptibility). However, unlike ferromagnets which are also attracted
to magnetic fields, paramagnets do not retain any magnetization in the absence of an
externally applied magnetic field.

Introduction

Constituent atoms or molecules of paramagnetic materials have permanent magnetic


moments (dipoles), even in the absence of an applied field. This generally occurs due to
the presence of unpaired electrons in the atomic/molecular electron orbitals. In pure
paramagnetism, the dipoles do not interact with one another and are randomly oriented
in the absence of an external field due to thermal agitation, resulting in zero net
magnetic moment. When a magnetic field is applied, the dipoles will tend to align with
the applied field, resulting in a net magnetic moment in the direction of the applied field.
In the classical description, this alignment can be understood to occur due to a torque
being provided on the magnetic moments by an applied field, which tries to align the
dipoles parallel to the applied field. However, the truer origins of the alignment can only
be understood via the quantum-mechanical properties of spin and angular momentum.

If there is sufficient energy exchange between neighbouring dipoles they will interact,
and may spontaneously align or anti-align and form magnetic domains, resulting in
ferromagnetism (permanent magnets) or antiferromagnetism, respectively.
Paramagnetic behaviour can also be observed in ferromagnetic materials that are
above their Curie temperature, and in antiferromagnets above their Néel temperature.

In general, paramagnetic effects are quite small: the magnetic susceptibility is of the order of 10^−3 to 10^−5 for most paramagnets, but may be as high as 10^−1 for synthetic paramagnets such as ferrofluids.

Curie's law

For low levels of magnetisation, the magnetisation of paramagnets is approximated by Curie's law:

M = C · B / T,

where
• M is the resulting magnetisation,
• B is the magnetic flux density of the applied field, measured in teslas,
• T is the absolute temperature, measured in kelvins,
• C is a material-specific Curie constant.

This law indicates that the susceptibility of paramagnetic materials is inversely proportional to their temperature. However, Curie's law is only valid under conditions of low magnetisation, since it does not consider the saturation of magnetisation that occurs
low magnetisation, since it does not consider the saturation of magnetisation that occurs
when the atomic dipoles are all aligned in parallel (after everything is aligned, increasing
the external field will not increase the total magnetisation since there can be no further
alignment).
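A trivial sketch of Curie's law (in Python) is given below; the Curie constant and applied field are invented values chosen only to show the 1/T dependence of the magnetisation.

C = 1.0e-2      # material-specific Curie constant (invented value, SI units)
B = 1.0         # applied magnetic flux density, tesla

for T in (77.0, 300.0, 600.0):
    M = C * B / T                 # Curie's law: magnetisation falls as 1/T
    print(f"T = {T:5.0f} K   M = {M:.2e} (arbitrary units)")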

Paramagnetic materials

Elements

Elements can be paramagnetic if they have unpaired electrons.


The following are some examples of paramagnetic elements:

• Aluminium Al [13] (metal) — Al is the preferred paramagnetic material in theoretical designs of lunar mass driver applications using regolith as an ore.
• Barium Ba [56] (metal)
• Calcium Ca [20] (metal) [Ar]4s2 — diamagnetic
• Oxygen. O [8] (non-metal)
• Platinum Pt [78] (metal)
• Sodium Na [11] (metal)
• Strontium Sr [38] (metal)
• Uranium U [92] (metal)
• Magnesium Mg [12] (metal) 1s2 2s2 2p6 3s2 — diamagnetic
• Technetium Tc [43] (artificial)
• Dysprosium Dy [66] (metal) — ferromagnetic

Compounds

Many salts of the d and f transitional metal group show paramagnetic behaviour.
Examples are:

• Copper sulphate
• Dysprosium oxide
• Ferric chloride
• Ferric oxide
• Holmium oxide
• Manganese chloride

Some simple molecules contain unpaired electrons and are thus paramagnetic. The
most common is the diatomic oxygen molecule.

See also
• Pierre Curie
• Ferromagnetism

----------------------------------------------

Ferromagnetism
Ferromagnetism is the "normal" form of magnetism which most people are familiar
with, as exhibited in horseshoe magnets and refrigerator magnets, for instance. It is
responsible for most of the magnetic behavior encountered in everyday life. The
attraction between a magnet and ferromagnetic material is "the quality of magnetism
first apparent to the ancient world, and to us today," according to a classic text on
ferromagnetism.[1]

Ferromagnetism is defined as the phenomenon by which materials, such as iron, in an
external magnetic field become magnetized and remain magnetized for a period after
the material is no longer in the field.

All permanent magnets are either ferromagnetic or ferrimagnetic, as are the metals that
are noticeably attracted to them.

Historically, the term ferromagnet was used for any material that could exhibit
spontaneous magnetization: a net magnetic moment in the absence of an external
magnetic field. This general definition is still in common use.

More recently, however, different classes of spontaneous magnetisation have been
identified when there is more than one magnetic ion per primitive cell of the material,
leading to a stricter definition of "ferromagnetism" that is often used to distinguish it from
ferrimagnetism.

In particular, a material is "ferromagnetic" in this narrower sense only if all of its magnetic
ions add a positive contribution to the net magnetization. If some of the magnetic ions
subtract from the net magnetization (if they are partially anti-aligned), then the material
is "ferrimagnetic". If the ions anti-align completely so as to have zero net magnetization,
despite the magnetic ordering, then it is an antiferromagnet. All of these alignment
effects only occur at temperatures below a certain critical temperature, called the Curie
temperature (for ferromagnets and ferrimagnets) or the Néel temperature (for
antiferromagnets).

Ferromagnetic materials

There are a number of crystalline materials that exhibit ferromagnetism (or
ferrimagnetism). The table below lists a representative selection of them,
along with their Curie temperatures, the temperature above which they cease to exhibit
spontaneous magnetization (see below).

Ferromagnetic metal alloys whose constituents are not themselves ferromagnetic in
their pure forms are called Heusler alloys, named after Fritz Heusler.
One can also make amorphous (non-crystalline) ferromagnetic metallic alloys by very
rapid quenching (cooling) of a liquid alloy.

These have the advantage that their properties are nearly isotropic (not aligned along a
crystal axis); this results in low coercivity, low hysteresis loss, high permeability, and
high electrical resistivity. A typical such material is a transition metal-metalloid alloy,
made from about 80% transition metal (usually Fe, Co, or Ni) and a metalloid
component (B, C, Si, P, or Al) that lowers the melting point.

Physical origin
The property of ferromagnetism is due to the direct influence of two effects from
quantum mechanics: spin and the Pauli exclusion principle.
The spin of an electron, combined with its orbital angular momentum, results in a
magnetic dipole moment and creates a magnetic field. (The classical analogue of
quantum-mechanical spin is a spinning ball of charge, but the quantum version has
distinct differences, such as the fact that it has discrete up/down states that are not
described by a vector; similarly for "orbital" motion, whose classical analogue is a
current loop.) In many materials (specifically, those with a filled electron shell), however,
the total dipole moment of all the electrons is zero (i.e., the spins are in up/down pairs).
Only atoms with partially filled shells (i.e., unpaired spins) can experience a net
magnetic moment in the absence of an external field. A ferromagnetic material has
many such electrons, and if they are aligned they create a measurable macroscopic
field.

These permanent dipoles (often called simply "spins" even though they also generally
include orbital angular momentum) tend to align in parallel to an external magnetic field,
an effect called paramagnetism. (A related but much weaker effect is diamagnetism,
due to the orbital motion induced by an external field, resulting in a dipole moment
opposite to the applied field.) Ferromagnetism involves an additional phenomenon,
however: the dipoles tend to align spontaneously, without any applied field. This is a
purely quantum-mechanical effect.

According to classical electromagnetism, two nearby magnetic dipoles will tend to align
in opposite directions (which would create an antiferromagnetic material). In a
ferromagnet, however, they tend to align in the same direction because of the Pauli
principle: two electrons with the same spin cannot also have the same "position", which
effectively reduces the energy of their electrostatic interaction compared to electrons
with opposite spin. (Mathematically, this is expressed more precisely in terms of the
spin-statistics theorem: because electrons are fermions with half-integer spin, their
wave functions are antisymmetric under interchange of particle positions. This can be
seen in, for example, the Hartree-Fock approximation to lead to a reduction in the
electrostatic potential energy.) This difference in energy is called the exchange energy.
At long distances (after many thousands of ions), the exchange energy advantage is
overtaken by the classical tendency of dipoles to anti-align. This is why, in an
equilibrated (non-magnetized) ferromagnetic material, the dipoles in the whole material
are not aligned. Rather, they organize into magnetic domains (also known as Weiss
domains) that are aligned (magnetized) at short range, but at long range adjacent
domains are anti-aligned. The transition between two domains, where the magnetization
flips, is called a domain wall (i.e., a Bloch/Néel wall, depending upon whether the
magnetization rotates parallel/perpendicular to the domain interface) and is a gradual
transition on the atomic scale (covering a distance of about 300 ions for iron).

Thus, an ordinary piece of iron generally has little or no net magnetic moment.
However, if it is placed in a strong enough external magnetic field, the domains will re-
orient in parallel with that field, and will remain re-oriented when the field is turned off,
thus creating a "permanent" magnet. This magnetization as a function of the external
field is described by a hysteresis curve. Although this state of aligned domains is not a
minimal-energy configuration, it is extremely stable and has been observed to persist for
millions of years in seafloor magnetite aligned by the Earth's magnetic field (whose
poles can thereby be seen to flip at long intervals). The net magnetization can be
destroyed by heating and then cooling (annealing) the material without an external field,
however.

As the temperature increases, thermal oscillation, or entropy, competes with the
ferromagnetic tendency for dipoles to align. When the temperature rises beyond a
certain point, called the Curie temperature, there is a second-order phase transition
and the system can no longer maintain a spontaneous magnetization, although it still
responds paramagnetically to an external field. Below that temperature, there is a
spontaneous symmetry breaking and random domains form (in the absence of an
external field). The Curie temperature itself is a critical point, where the magnetic
susceptibility is theoretically infinite and, although there is no net magnetization,
domain-like spin correlations fluctuate at all lengthscales.
The study of ferromagnetic phase transitions, especially via the simplified Ising spin
model, had an important impact on the development of statistical physics. There, it was
first clearly shown that mean field theory approaches failed to predict the correct
behavior at the critical point (which was found to fall under a universality class that
includes many other systems, such as liquid-gas transitions), and had to be replaced by
renormalization group theory.
--------------------------------

A selection of crystalline ferromagnetic (* = ferrimagnetic) materials, along with their
Curie temperatures in kelvins (K). (Kittel, p. 449.)

Material        Curie temp. (K)
Co              1388
Fe              1043
FeOFe2O3*       858
NiOFe2O3*       858
CuOFe2O3*       728
MgOFe2O3*       713
MnBi            630
Ni              627
MnSb            587
MnOFe2O3*       573
Y3Fe5O12*       560
CrO2            386
MnAs            318
Gd              292
Dy              88
EuO             69

Unusual ferromagnetism

In 2004, it was reported that a certain allotrope of carbon, carbon nanofoam, exhibited
ferromagnetism. The effect dissipates after a few hours at room temperature, but lasts
longer at low temperatures. The material is also a semiconductor. It is thought that other
similarly-formed materials, such as isoelectronic compounds of boron and nitrogen, may
also be ferromagnetic. The alloy ZnZr2 is also ferromagnetic below 28.5 K.

---------------------------------------------

Ferrimagnetism
In physics, a ferrimagnetic material is one in which the magnetic moments of the atoms
on different sublattices are opposed, as in antiferromagnetism; however, in
ferrimagnetic materials, the opposing moments are unequal and a spontaneous
magnetization remains. This happens when the sublattices consist of different materials
or ions (such as Fe2+ and Fe3+).

Ferrimagnetic materials are like ferromagnets in that they hold a spontaneous
magnetization below the Curie temperature, and show no magnetic order (are
paramagnetic) above this temperature. However, there is sometimes a temperature
below the Curie temperature at which the two sublattices have equal moments, resulting
in a net magnetic moment of zero; this is called the magnetization compensation point.

This compensation point is observed easily in garnets and rare earth - transition metal
alloys (RE-TM). Furthermore, ferrimagnets may also exhibit an angular momentum
compensation point at which the angular momentum of the magnetic sublattices is
compensated. This compensation point is a crucial point for achieving high speed
magnetization reversal in magnetic memory devices.
Ferrimagnetism is exhibited by ferrites and magnetic garnets. The oldest-known
magnetic substance, magnetite (iron(II,III) oxide; Fe3O4), is a ferrimagnet; it was
originally classified as a ferromagnet before Néel's discovery of ferrimagnetism and
antiferromagnetism.

Some ferrimagnetic materials are YIG (yttrium iron garnet) and ferrites composed of iron
oxides and other elements such as aluminum, cobalt, nickel, manganese and zinc.

Properties
Ferrimagnetic materials have high resistivity and anisotropic properties. The
anisotropy is actually induced by an external applied field. When this applied field aligns
with the magnetic dipoles, it produces a net magnetic dipole moment and causes the
magnetic dipoles to precess at a frequency controlled by the applied field, called the
Larmor or precession frequency. A microwave signal circularly polarised in the same
direction as this precession interacts strongly with the dipole moments; when it is
polarised in the opposite direction the interaction is very low. When the interaction is
strong the microwave signal can pass through the material. This directional property is
used in the construction of microwave devices like isolators, circulators and gyrators.
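As a rough illustration of the precession frequency mentioned above, the sketch below assumes the free-electron spin value of about 28 GHz per tesla for the gyromagnetic ratio; real ferrites deviate from this figure, so treat the result as order-of-magnitude only:

```python
# Rough estimate of the Larmor (precession) frequency of the dipoles in a ferrite.
# Assumes the free-electron spin value gamma/(2*pi) of roughly 28 GHz per tesla,
# which is an approximation; real materials differ.

GAMMA_OVER_2PI = 28.0e9  # Hz per tesla (free electron spin, approximate)

def larmor_frequency(B_tesla):
    return GAMMA_OVER_2PI * B_tesla

# A 0.1 T bias field puts the precession in the low-gigahertz (microwave) range,
# which is why ferrites turn up in isolators and circulators.
print(f"{larmor_frequency(0.1) / 1e9:.1f} GHz")
```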

--------------------------------------
Magnetism
In physics, magnetism is one of the phenomena by which materials exert attractive or
repulsive forces on other materials.

Some well known materials that exhibit easily detectable magnetic properties (called
magnets) are nickel, iron and their alloys;

however, all materials are influenced to a greater or lesser degree by the presence of a
magnetic field.

Magnetism also has other manifestations in physics, particularly as one of the two
components of electromagnetic waves such as light.

Magnetic lines of force of a bar magnet shown by iron filings on paper

Physics of magnetism

Magnetism, electricity, and special relativity

Main article: Electromagnetism

As a consequence of Einstein's theory of special relativity, electricity and magnetism are
understood to be fundamentally interlinked. Both magnetism without electricity, and
electricity without magnetism, are inconsistent with special relativity, due to such effects
as length contraction, time dilation, and the fact that the magnetic force is velocity-
dependent. However, when both electricity and magnetism are taken into account the
resulting theory (electromagnetism) is fully consistent with special relativity[4][5]. In
particular, a phenomenon that appears purely electric to one observer may be purely
magnetic to another, or more generally the relative contributions of electricity and
magnetism are dependent on the frame of reference. Thus, special relativity "mixes"
electricity and magnetism into a single, inseparable phenomenon called
electromagnetism (analogously to how special relativity "mixes" space and time into
spacetime).

Magnetic fields and forces

Main article: magnetic field

The phenomenon of magnetism is "mediated" by the magnetic field -- i.e., an electric
current or magnetic dipole creates a magnetic field, and that field, in turn, imparts
magnetic forces on other particles that are in the fields. To an excellent approximation
(but ignoring some quantum effects---see quantum electrodynamics), Maxwell's
equations (which simplify to the Biot-Savart law in the case of steady currents) describe
the origin and behavior of the fields that govern these forces. Therefore magnetism is
seen whenever electrically charged particles are in motion---for example, from
movement of electrons in an electric current, or in certain cases from the orbital motion
of electrons around an atom's nucleus. They also arise from "intrinsic" magnetic dipoles
arising from quantum effects, i.e. from quantum-mechanical spin. The same situations
which create magnetic fields (charge moving in a current or in an atom, and intrinsic
magnetic dipoles) are also the situations in which a magnetic field has an effect,
creating a force. Following is the formula for moving charge; for the forces on an
intrinsic dipole, see magnetic dipole. When a charged particle moves through a
magnetic field B, it feels a force F given by the cross product:

F = q v × B

where q is the electric charge of the particle, v is the velocity vector of the particle, and
B is the magnetic field. Because this is a cross product, the force is perpendicular to
both the motion of the particle and the magnetic field. It follows that the magnetic force
does no work on the particle; it may change the direction of the particle's movement, but
it cannot cause it to speed up or slow down. The magnitude of the force is

F = q v B sin θ

where θ is the angle between the v and B vectors. One tool for determining the direction
of the velocity vector of a moving charge, the magnetic field, and the force exerted is
labeling the index finger "V", the middle finger "B", and the thumb "F" with your right
hand. When making a gun-like configuration (with the middle finger crossing under the
index finger), the fingers represent the velocity vector, magnetic field vector, and force
vector, respectively. See also right hand rule.
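A minimal numerical sketch of this force law, with arbitrary values for the charge, velocity and field, confirms that F = q v × B is perpendicular to the velocity and therefore does no work:

```python
# Sketch of the magnetic force on a moving charge, F = q v x B, using the cross
# product. Values are arbitrary and only show that F is perpendicular to both
# v and B, so the force does no work on the particle.

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return sum(x*y for x, y in zip(a, b))

q = 1.602e-19            # charge of a proton, coulombs
v = (1.0e5, 0.0, 0.0)    # velocity, m/s
B = (0.0, 0.0, 0.5)      # magnetic flux density, teslas

F = tuple(q*c for c in cross(v, B))
print("F =", F, "N")
print("F . v =", dot(F, v))   # zero: the force does no work
```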

Magnetic dipoles

Main article: magnetic dipole

A very common type of magnetic field seen in nature is a dipole field, having a "South pole"
and a "North pole"; terms dating back to the use of magnets as compasses, interacting
with the Earth's magnetic field to indicate North and South on the globe. Since opposite
ends of magnets are attracted, the 'north' magnetic pole of the earth must be
magnetically 'south'. A magnetic field contains energy, and physical systems stabilize
into the configuration with the lowest energy. Therefore, when placed in a magnetic
field, a magnetic dipole tends to align itself in opposed polarity to that field, thereby
canceling the net field strength as much as possible and lowering the energy stored in
that field to a minimum. For instance, two identical bar magnets placed side-to-side
normally line up North to South, resulting in a much smaller net magnetic field, and
resist any attempts to reorient them to point in the same direction. The energy required
to reorient them in that configuration is then stored in the resulting magnetic field, which
is double the strength of the field of each individual magnet. (This is, of course, why a
magnet used as a compass interacts with the Earth's magnetic field to indicate North
and South). An alternative, equivalent formulation, which is often easier to apply but
perhaps offers less insight, is that a magnetic dipole in a magnetic field experiences a
torque and a force which can be expressed in terms of the field and the strength of the
dipole (i.e., its magnetic dipole moment). For these equations, see magnetic dipole.

Atomic magnetic dipoles

The physical cause of the magnetism of objects, as distinct from electrical currents, is
the atomic magnetic dipole. Magnetic dipoles, or magnetic moments, result on the
atomic scale from the two kinds of movement of electrons. The first is the orbital motion
of the electron around the nucleus; this motion can be considered as a current loop,
resulting in an orbital dipole magnetic moment. The second, much stronger, source of
electronic magnetic moment is due to a quantum mechanical property called the spin
dipole magnetic moment (although current quantum mechanical theory states that
electrons neither physically spin, nor orbit the nucleus).
Dipole moment of a bar magnet.

The overall magnetic moment of the atom is the net sum of all of the magnetic moments
of the individual electrons. Because of the tendency of magnetic dipoles to oppose each
other to reduce the net energy, in an atom the opposing magnetic moments of some
pairs of electrons cancel each other, both in orbital motion and in spin magnetic
moments. Thus, in the case of an atom with a completely filled electron shell or
subshell, the magnetic moments normally completely cancel each other out and only
atoms with partially-filled electron shells have a magnetic moment, whose strength
depends on the number of unpaired electrons. The differences in configuration of the
electrons in various elements thus determine the nature and magnitude of the atomic
magnetic moments, which in turn determine the differing magnetic properties of various
materials. Several forms of magnetic behavior have been observed in different
materials, including:

• Diamagnetism
• Paramagnetism
o Molecular magnet
• Ferromagnetism
o Antiferromagnetism
o Ferrimagnetism
o Metamagnetism
• Spin glass
• Superparamagnetism

Magnetic monopoles

Main article: Magnetic monopole

Since a bar magnet gets its ferromagnetism from microscopic electrons distributed
evenly throughout the bar, when a bar magnet is cut in half, each of the resulting pieces
is a smaller bar magnet. Even though a magnet is said to have a north pole and a south
pole, these two poles cannot be separated from each other.
A monopole — if such a thing exists — would be a new and fundamentally different kind
of magnetic object. It would act as an isolated north pole, not attached to a south pole,
or vice versa. Monopoles would carry "magnetic charge" analogous to electric charge.
Despite systematic searches since 1931, as of 2006, they have never been observed,
and could very well not exist.[6]

Nevertheless, some theoretical physics models predict the existence of these magnetic
monopoles. Paul Dirac observed in 1931 that, because electricity and magnetism show
a certain symmetry, just as quantum theory predicts that individual positive or negative
electric charges can be observed without the opposing charge, isolated South or North
magnetic poles should be observable. Using quantum theory Dirac showed that if
magnetic monopoles exist, then one could explain the quantization of electric charge---
that is, why the observed elementary particles carry charges that are multiples of the
charge of the electron.
Certain grand unified theories predict the existence of monopoles which, unlike
elementary particles, are solitons (localized energy packets).

When these models were used to estimate the number of monopoles created in the big bang,
the initial results contradicted cosmological observations---the monopoles would have
been so plentiful and massive that they would have long since halted the expansion of
the universe. However, the idea of inflation (for which this problem served as a partial
motivation) was successful in solving this problem, creating models in which monopoles
existed but were rare enough to be consistent with current observations.[7]

Types of magnets

Electromagnets

Main article: Electromagnet

An electromagnet is a magnet made from electrical wire wound around a magnetic
material, such as iron. This form of magnet is useful in cases where a magnet must be
switched on or off; for instance, large cranes to lift junked automobiles.
For the case of electric current moving through a wire, the resulting field is directed
according to the "right hand rule." If the right hand is used as a model, and the thumb of
the right hand points along the wire from positive towards the negative side
("conventional current", the reverse of the direction of actual movement of electrons),
then the magnetic field will wrap around the wire in the direction indicated by the fingers
of the right hand. As can be seen geometrically, if a loop or helix of wire is formed such
that the current is traveling in a circle, then all of the field lines in the center of the loop
are directed in the same direction, resulting in a magnetic dipole whose strength
depends on the current around the loop, or the current in the helix multiplied by the
number of turns of wire. In the case of such a loop, if the fingers of the right hand are
directed in the direction of conventional current flow (i.e., positive to negative, the
opposite direction to the actual flow of electrons), the thumb will point in the direction
corresponding to the North pole of the dipole.
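The dependence on current and turns described above can be written as a dipole moment m = N × I × A for a loop or helix; the sketch below uses invented coil dimensions purely for illustration:

```python
# Sketch of the dipole moment of a wire loop or helix, m = N * I * A, following
# the statement above that the strength scales with the current and the number
# of turns. The coil dimensions are made up for illustration.

import math

def dipole_moment(turns, current_amps, radius_m):
    area = math.pi * radius_m**2
    return turns * current_amps * area   # A m^2

print(dipole_moment(turns=1,   current_amps=2.0, radius_m=0.01))   # single loop
print(dipole_moment(turns=100, current_amps=2.0, radius_m=0.01))   # 100-turn helix
```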

Permanent and temporary magnets

Main article: Magnet

A permanent magnet retains its magnetism without an external magnetic field whereas
a temporary magnet is only magnetic while within another magnetic field. Inducing
magnetism in steel results in a permanent magnet but iron loses its magnetism when
the inducing field is withdrawn. A temporary magnet such as iron is thus a good material
for electromagnets. Magnets are made by stroking with another magnet, tapping while
fixed in a magnetic field or placing inside a solenoid coil supplied with a direct current. A
permanent magnet may be de-magnetised by subjecting it to heating or sharp blows or
placing it inside a solenoid supplied with a reducing alternating current.

Units of electromagnetism

SI units related to magnetism

SI electromagnetism units

Symbol     | Name of quantity                                  | Unit name              | Unit symbol | Derivation / base units
I          | Magnitude of current                              | ampere (SI base unit)  | A           | A = W/V = C/s
q          | Electric charge, quantity of electricity          | coulomb                | C           | A·s
V          | Potential difference, electromotive force         | volt                   | V           | J/C = kg·m2·s−3·A−1
R, Z, X    | Resistance, impedance, reactance                  | ohm                    | Ω           | V/A = kg·m2·s−3·A−2
ρ          | Resistivity                                       | ohm metre              | Ω·m         | kg·m3·s−3·A−2
P          | Power, electrical                                 | watt                   | W           | V·A = kg·m2·s−3
C          | Capacitance                                       | farad                  | F           | C/V = kg−1·m−2·A2·s4
           | Elastance                                         | reciprocal farad       | F−1         | V/C = kg·m2·A−2·s−4
ε          | Permittivity                                      | farad per metre        | F/m         | kg−1·m−3·A2·s4
χe         | Electric susceptibility                           | (dimensionless)        | -           | -
G, Y, B    | Conductance, admittance, susceptance              | siemens                | S           | Ω−1 = kg−1·m−2·s3·A2
σ          | Conductivity                                      | siemens per metre      | S/m         | kg−1·m−3·s3·A2
B          | Magnetic flux density, magnetic induction         | tesla                  | T           | Wb/m2 = kg·s−2·A−1 = N·A−1·m−1
Φm         | Magnetic flux                                     | weber                  | Wb          | V·s = kg·m2·s−2·A−1
H          | Magnetic field strength, magnetic field intensity | ampere per metre       | A/m         | A·m−1
           | Reluctance                                        | ampere-turn per weber  | A/Wb        | kg−1·m−2·s2·A2
L          | Inductance                                        | henry                  | H           | Wb/A = V·s/A = kg·m2·s−2·A−2
µ          | Permeability                                      | henry per metre        | H/m         | kg·m·s−2·A−2
χm         | Magnetic susceptibility                           | (dimensionless)        | -           | -
Π and Π*   | Electric and magnetic Hertzian vector potentials  | n/a                    | n/a         | n/a

----------------------------------------------------------------------------

Diamagnetism
Diamagnetism is a weak repulsion from a magnetic field. It is a form of magnetism that is
only exhibited by a substance in the presence of an externally applied magnetic field. It
results from changes in the orbital motion of electrons. Applying a magnetic field creates
a magnetic force on a moving electron in the form of F = Qv × B. This force changes the
centripetal force on the electron, causing it to either speed up or slow down in its orbital
motion. This changed electron speed modifies the magnetic moment of the orbital in a
direction opposing the external field.

Consider two electron orbitals; one rotating clockwise and the other counterclockwise.
An external magnetic field into the page will make the centripetal force on an electron
rotating clockwise increase, which increases its moment out of the page. That same
field would make the centripetal force on an electron rotating counterclockwise
decrease, decreasing its moment into the page. Both changes oppose the external
magnetic field into the page. However, the induced magnetic moment is very small in
most everyday materials.

All materials show a diamagnetic response in an applied magnetic field; however for
materials which show some other form of magnetism (such as ferromagnetism or
paramagnetism), the diamagnetism is completely overpowered. Substances which only,
or mostly, display diamagnetic behaviour are termed diamagnetic materials, or
diamagnets. Materials that are said to be diamagnetic are those which are usually
considered by non-physicists as "non magnetic", and include water, DNA, most organic
compounds such as petroleum and some plastics, and many metals such as mercury,
gold and bismuth.

Diamagnetic materials have a relative magnetic permeability that is less than 1, thus a
magnetic susceptibility which is less than 0, and are therefore repelled by magnetic
fields. However, since diamagnetism is such a weak property its effects are not
observable in every-day life. For example, the magnetic susceptibility of diamagnets
such as water is χv = −9.05×10−6. The most strongly diamagnetic material is bismuth,
χv = −166×10−6, although pyrolytic graphite may have a susceptibility of χv =
−400×10−6 in one plane. Nevertheless these values are orders of magnitude smaller
than the magnetism exhibited by paramagnets and ferromagnets. Superconductors may
be considered to be perfect diamagnets (χv = −1), since they expel all field from their
interior due to the Meissner effect.
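The quoted susceptibilities can be turned into relative permeabilities using µr = 1 + χ (the same relation appears later in this document as χ = µr - 1); the following sketch simply tabulates the values above:

```python
# The quoted susceptibilities translate into relative permeabilities only a
# whisker below 1, via mu_r = 1 + chi. A superconductor, with chi = -1, has
# mu_r = 0 (a perfect diamagnet).

susceptibilities = {
    "water":              -9.05e-6,
    "bismuth":            -166e-6,
    "pyrolytic graphite": -400e-6,   # in one plane
    "superconductor":     -1.0,      # perfect diamagnet (Meissner effect)
}

for material, chi in susceptibilities.items():
    print(f"{material:20s} mu_r = {1 + chi:.6f}")
```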

Diamagnetic levitation

A particularly fascinating phenomenon involving diamagnets is that they may be
levitated in stable equilibrium in a magnetic field, with no power consumption.
Earnshaw's theorem seems to preclude the possibility of static magnetic levitation.
However, Earnshaw's theorem only applies to objects with permanent moments m, such
as ferromagnets, whose magnetic energy is given by m·B. Ferromagnets are attracted
to field maxima, which do not exist in free space. Diamagnetism is an induced form of
magnetism, thus the magnetic moment is proportional to the applied field B. This means
that the magnetic energy of diamagnets is proportional to B2, the intensity of the
magnetic field. Diamagnets are also attracted to field minima, and there can be a
minimum in B2 in free space (in fact ∇2B2 ≥ 0 in free space, so B2 can have local minima there but not maxima).

A thin slice of pyrolytic graphite, which is an unusually strong diamagnetic material, can
be stably floated in a magnetic field, such as that from rare earth permanent magnets.
This can be done with all components at room temperature, making a visually effective
demonstration of diamagnetism.

The Radboud University Nijmegen has conducted experiments where they have
successfully levitated water and a live frog, amongst other things.[1]
Recent experiments studying the growth of protein crystals have led to a technique
that utilizes powerful magnets to allow growth in ways that counteract Earth's gravity. [2]

Magnetic states

diamagnetism – superdiamagnetism – paramagnetism – superparamagnetism –
ferromagnetism – antiferromagnetism – ferrimagnetism – metamagnetism – spin glass

A simple homemade device may be constructed out of bismuth plates and
a few permanent magnets that will levitate a permanent magnet. This does
not disprove Earnshaw's theorem. The reason it works has to do with the
atomic structure of the magnets and the bismuth plates interacting such
that the gradual weakening of the magnetic moments of the magnetic
material's atoms is in fact the energy being expended. The energy of the
magnetic field can be thought of as kinetic energy stored by the material's
electron configuration.

-----------------------------------------------------------------------------
magnetization or B-H curve

Magnetization Curves

Any discussion of the magnetic properties of a material is likely to include the type of
graph known as a magnetization or B-H curve. Various methods are used to produce B-
H curves, including one which you can easily replicate. Figure MPA shows how the B-H
curve varies according to the type of material within the field.

The 'curves' here are all straight lines and have magnetic field strength as the horizontal
axis and the magnetic flux density as the vertical axis. Negative values of H aren't
shown but the graphs are symmetrical about the vertical axis.
Fig. MPA a) is the curve in the absence of any material: a vacuum. The gradient of the
curve is 4π×10-7, which corresponds to the fundamental physical constant µ0. More on
this later. Of greater interest is to see how placing a specimen of some material in the
field affects this gradient. Manufacturers of a particular grade of ferrite material usually
provide this curve because the shape reveals how the core material in any component
made from it will respond to changes in applied field.

Diamagnetic and paramagnetic materials

Imagine a hydrogen atom in which a nucleus with a single stationary and positively
charged proton is orbited by a negatively charged electron. Can we view that electron in
orbit as a sort of current loop? The answer is yes, and you might then think that
hydrogen would have a strong magnetic moment. In fact ordinary hydrogen gas is only
very weakly magnetic. Recall that each hydrogen atom is not isolated but is bonded to
one other to form a molecule, giving the formula H2 - because that has a lower chemical
energy (for H by a whopping 218 kJ mol-1) than two isolated atoms.

It is not a coincidence that in these molecules the angular momentum of one electron is
opposite in direction to that of its neighbour, leaving the molecule as a whole with little
by way of magnetic moment. This behaviour is typical of many substances which are
then said to lack a permanent magnetic moment.

When a molecule is subjected to a magnetic field those electrons in orbit planes at a
right angle to the field will change their momentum (very slightly). This is predicted by
Faraday's Law which tells us that as the field is increased there will be an induced E-
field which the electrons (being charged particles) will experience as a force. This
means that the individual magnetic moments no longer cancel completely and the
molecule then acquires an induced magnetic moment. This behaviour, whereby the
induced moment is opposite to the applied field, is present in all materials and is called
diamagnetism. Hydrogen, ammonia, bismuth, copper, graphite and other diamagnetic
substances, are repelled by a nearby magnet (although the effect is extremely feeble).

Think of it as a manifestation of Lenz's law. Diamagnetic materials are those whose
atoms have only paired electrons.

In other molecules, however, such as oxygen, where there are unpaired electrons, the
cancellation of magnetic moments belonging to the electrons is incomplete. An O2
molecule has a net or permanent magnetic moment even in the absence of an
externally applied field. If an external magnetic field is applied then the electron orbits
are still altered in the same manner as the diamagnets but the permanent moment is
usually a more powerful influence.

The 'poles' of the molecule tend to line up parallel with the field and reinforce it. Such
molecules, with permanent magnetic moments are called paramagnetic.
Although paramagnetic substances like oxygen, tin, aluminium and copper sulphate are
attracted to a magnet the effect is almost as feeble as diamagnetism. The reason is that
the permanent moments are continually knocked out of alignment with the field by
thermal vibration, at room temperatures anyway (liquid oxygen at -183 °C can be pulled
about by a strong magnet).
Particular materials where the magnetic moment of each atom can be made to favour
one direction are said to be magnetizable.
The extent to which this happens is called the magnetization. Fig. MPA b) above is the
magnetization curve for diamagnetic materials. In diamagnetic substances the flux
grows slightly more slowly with the field than it does in a vacuum. The decrease in
gradient is greatly exaggerated in the figure - in practice the drop is usually less than
one part in 6,000.

Fig. MPA c) is the curve for paramagnetic materials. Flux growth in this case is again
linear (at moderate values of H) but slightly faster than in a vacuum. Again, the increase
for most substances is very slight.

Although neither diamagnetic nor paramagnetic materials are technologically important
(geophysical surveying is one exception), they are much studied by physicists, and the
terminology of magnetics is enriched thereby.

Ferromagnetic materials

The most important class of magnetic materials is the ferromagnets: iron, nickel, cobalt
and manganese, or their compounds (and a few more exotic ones as well). The
magnetization curve looks very different to that of a diamagnetic or paramagnetic
material. We might note in passing that although pure manganese is not ferromagnetic
the name of that element shares a common root with magnetism: the Greek mágnes
lithos - "stone from Magnesia" (now Manisa in Turkey).

Figure MPB above shows a typical curve for iron. It's important to realize that the
magnetization curves for ferromagnetic materials are all strongly dependent upon purity,
heat treatment and other factors. However, two features of this curve are immediately
apparent: it really is curved rather than straight (as with non-ferromagnets) and also that
the vertical scale is now in teslas (rather than milliteslas as with Figure MPA).

Figure MPB is a normal magnetization curve because it starts from an unmagnetized
sample and shows how the flux density increases as the field strength is increased. You
can identify four distinct regions in most such curves.

These can be explained in terms of changes to the material's magnetic 'domains':

1. Close to the origin a slow rise due to 'reversible growth'.
2. A longer, fairly straight, stretch representing 'irreversible growth'.
3. A slower rise representing 'rotation'.
4. An almost flat region corresponding to paramagnetic behaviour and then µ0 - the
   core can't handle any more flux growth and has saturated.

At an atomic level ferromagnetism is explained by a tendency for neighbouring atomic
magnetic moments to become locked in parallel with their neighbours. This is only
possible at temperatures below the Curie point, above which thermal disordering causes
a sharp drop in permeability and degeneration into paramagnetism. Ferromagnetism is
distinguished from paramagnetism by more than just permeability because it also has
the important properties of remanence and coercivity.

Ferrimagnetic materials

Almost every item of electronic equipment produced today contains some ferrimagnetic
material: loudspeakers, motors, deflection yokes, interference suppressors, antenna
rods, proximity sensors, recording heads, transformers and inductors are frequently
based on ferrites.
The market is vast.
What properties make ferrimagnets so ubiquitous? They possess permeability to rival
most ferromagnets but their eddy current losses are far lower because of the material's
greater electrical resistivity. Also it is practicable (if not straightforward) to fabricate
different shapes by pressing or extruding - both low cost techniques.

What is the composition of ferrimagnetic materials? They are, in general, oxides of iron
combined with one or more of the transition metals such as manganese, nickel or zinc,
e.g. MnFe2O4. Permanent ferrimagnets often include barium. The raw material is
turned into a powder which is then fired in a kiln or sintered to produce a dark gray,
hard, brittle ceramic material having a cubic crystalline structure.

At an atomic level the magnetic properties depend upon interaction between the
electrons associated with the metal ions. Neighbouring atomic magnetic moments
become locked in anti-parallel with their neighbours (which contrasts with the
ferromagnets). However, the magnetic moments in one direction are weaker than the
moments in the opposite direction leading to an overall magnetic moment.

Saturation

Saturation is a limitation occurring in inductors having a ferromagnetic or ferrimagnetic
core. Initially, as current is increased the flux increases in proportion to it (see figure
MPB). At some point, however, further increases in current lead to progressively smaller
increases in flux. Eventually, the core can make no further contribution to flux growth
and any increase thereafter is limited to that provided by µ0 - perhaps three orders of
magnitude smaller. Iron saturates at about 1.6 T while ferrites will normally saturate
between about 200 mT and 500 mT.

It is usually essential to avoid reaching saturation since it is accompanied by a drop in
inductance. In many circuits the rate at which current in the coil increases is inversely
proportional to inductance (ΔI = V × t / L). Any drop in inductance therefore causes the
current to rise faster, increasing the field strength, and so the core is driven even further
into saturation.

Core manufacturers normally specify the saturation flux density for the particular
material used. You can also measure saturation using a simple circuit.
There are two methods by which you can calculate flux if you know the number of turns
and either -

1. The current, the length of the magnetic path and the B-H characteristics of the
material.
2. The voltage waveform on a winding and the cross sectional area of the core -
see Faraday's Law.

Although saturation is mostly a risk in high power circuits it is still a possibility in 'small
signal' applications having many turns on an ungapped core and a DC bias (such as the
collector current of a transistor).
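As a hedged sketch of the second flux-calculation method listed above (the voltage waveform on a winding and the core's cross-sectional area), the example below assumes a constant voltage held for a fixed time, so that the peak flux density is V × t / (N × Ae); the core figures are invented, not taken from any data sheet:

```python
# Estimating peak flux density from the voltage applied to a winding, via
# Faraday's law. Assumes a constant voltage V held for time t_on (e.g. one
# half-cycle of a square wave), so B_peak = V * t_on / (N * Ae). All figures
# are illustrative only.

def peak_flux_density(volts, t_on_seconds, turns, core_area_m2):
    return volts * t_on_seconds / (turns * core_area_m2)

B_sat = 0.35                      # assumed ferrite saturation, teslas (200-500 mT range)
B_pk = peak_flux_density(volts=12.0, t_on_seconds=20e-6,
                         turns=40, core_area_m2=50e-6)
print(f"Peak flux density: {B_pk*1e3:.0f} mT")
print("Saturation risk!" if B_pk > B_sat else "Within the material's limit.")
```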

If you find that saturation is likely then you might -

• Run the inductor at a lower current
• Use a larger core
• Alter the number of turns
• Use a core with a lower permeability
• Use a core with an air gap

or some combination thereof - but you'll need to re-calculate the design in any case.

Materials classification

Table MPJ categorizes (in a simplified fashion) each class assigned to a material
according to its behaviour in a field.

Table MPJ: Materials classified by their magnetic properties.

Class               χ dependent on B?
Diamagnetic         No
Paramagnetic        No
Ferromagnetic       Yes
Antiferromagnetic   Yes
Ferrimagnetic       Yes


------------------------------
Permeability:

Permeability in the SI

Quantity name     permeability (alias absolute permeability)
Quantity symbol   µ
Unit name         henrys per metre
Unit symbols      H m-1
Base units        kg m s-2 A-2

Duality with the Electric World

Quantity        Unit
Permeability    henrys per metre
Permittivity    farads per metre


Although, as suggested above, magnetic permeability is related in physical terms most
closely to electric permittivity, it is probably easier to think of permeability as
representing 'conductivity for magnetic flux'; just as those materials with high electrical
conductivity let electric current through easily so materials with high permeabilities allow
magnetic flux through more easily than others.

Materials with high permeabilities include iron and the other ferromagnetic materials.
Most plastics, wood, non-ferrous metals, air and other fluids have permeabilities very
much lower: approximately µ0.
Just as electrical conductivity is defined as the ratio of the current density to the electric
field strength, so the magnetic permeability, µ, of a particular material is defined as the
ratio of flux density to magnetic field strength -

µ=B/H

Equation MPD

This information is most easily obtained from the magnetization curve. Figure MPC
shows the permeability (in black) derived from the magnetization curve (in colour) using
equation MPD. Note carefully that permeability so defined is not the same as the slope
of a tangent to the B-H curve except at the peak (around 80 A m-1 in this case). The
latter is called differential permeability, µ′ = dB/dH.
In ferromagnetic materials the hysteresis phenomenon means that if the field strength is
increasing then the flux density is less than when the field strength is decreasing. This
means that the permeability must also be lower during 'charge up' than it is during
'relaxation', even for the same value of H. In the extreme case of a permanent magnet
the permeability within it will be negative. There is an analogy here with electric cells,
since they may be said to have 'negative resistance'.

If you use a core with a high value of permeability then fewer turns will be required to
produce a coil with a given value of inductance. You can understand why by
remembering that inductance is the ratio of flux to current. For a given core B is
proportional to flux and H is proportional to the current so that inductance is also
proportional to µ: the ratio of B to H.

Unlike electrical conductivity, permeability is often a highly non-linear quantity. Most coil
design formulæ, however, pretend that µ is a linear quantity. If you were working at a
peak value of H of 100 A m-1, for example, then you might take an average value for µ
of about 0.006 H m-1. This is all very approximate, but you must accept inaccuracy if
you insist on treating a non-linear quantity as though it was actually linear.
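The following sketch applies equation MPD to a short table of B-H points; the sample data are invented rather than measured, and only illustrate why any single 'average' value of µ is an approximation:

```python
# Sketch of equation MPD, mu = B / H, applied to tabulated B-H data.
# The sample points are invented, not a real material's curve; they simply
# show how the absolute permeability varies along the curve.

H = [10,   20,   50,   100,  200,  500]    # field strength, A/m
B = [0.05, 0.15, 0.50, 0.90, 1.20, 1.40]   # flux density, teslas

for h, b in zip(H, B):
    print(f"H = {h:4d} A/m   B = {b:.2f} T   mu = B/H = {b/h:.4f} H/m")
```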

This form of permeability, where µ is written without a subscript, is known in SI parlance
as absolute permeability. It is seldom quoted in engineering texts. Instead a variant is
used called relative permeability, described next.

µ = µ0 × µr

--------------------------------------------

Relative permeability

Quantity name     relative permeability
Quantity symbol   µr
Unit symbols      dimensionless

Relative permeability is a very frequently used parameter. It is a variation upon 'straight'
or absolute permeability, µ, but is more useful to you because it makes clearer how the
presence of a particular material affects the relationship between flux density and field
strength. The term 'relative' arises because this permeability is defined in relation to the
permeability of a vacuum, µ0

µr = µ / µ0 Equation MPE

For example, if you use a material for which µr = 3 then you know that the flux density
will be three times as great as it would be if we just applied the same field strength to a
vacuum. This is simply a more user friendly way of saying that µ = 3.77×10-6 H m-1.
Note that because µr is a dimensionless ratio, there are no units associated with it.

Many authors simply say "permeability" and leave you to infer that they mean relative
permeability. In the CGS system of units these are one and the same thing really. If a
figure greater than 1.0 is quoted then you can be almost certain it is µr.

Approximate maximum permeabilities

Material              µ / (H m-1)   µr        Application
Ferrite U 60          1.00E-05      8         UHF chokes
Ferrite M33           9.42E-04      750       Resonant circuit RM cores
Nickel (99% pure)     7.54E-04      600       -
Ferrite N41           3.77E-03      3000      Power circuits
Iron (99.8% pure)     6.28E-03      5000      -
Ferrite T38           1.26E-02      10000     Broadband transformers
Silicon GO steel      5.03E-02      40000     Dynamos, mains transformers
Supermalloy           1.26          1000000   Recording heads

Note that, unlike µ0, µr is not constant and changes with flux density. Also, if the
temperature is increased from, say, 20 to 80 centigrade then a typical ferrite can suffer
a 25% drop in permeability. This is a big problem in high-Q tuned circuits.

Another factor, with steel cores especially, is the microstructure, in particular grain
orientation. Silicon steel sheet is often made by cold rolling to orient the grains along the
laminations (rather than allowing them to lie randomly) giving increased µ. We call such
material anisotropic.

Before you pull any value of µ from a data sheet ask yourself if it is appropriate for your
material under the actual conditions under which you use it. Finally, if you do not know
the permeability of your core then build a simple circuit to measure it.

Variant forms of permeability and related quantities


Initial permeability

Quantity name     initial permeability
Quantity symbol   µi
Unit symbols      dimensionless *


Initial permeability describes the relative permeability of a material at low values of B
(below 0.1T). The maximum value for µ in a material is frequently a factor of between 2
and 5 or more above its initial value.

Low flux has the advantage that every ferrite can be measured at that density without
risk of saturation. This consistency means that comparison between different ferrites is
easy. Also, if you measure the inductance with a normal component bridge then you are
doing so with respect to the initial permeability.
* Although initial permeability is usually relative to µ0, you may see µi as an absolute
permeability.

Effective permeability

Quantity name     effective permeability
Quantity symbol   µe
Unit symbols      dimensionless *

Effective permeability is seen in some data sheets for cores which have air gaps. Coil
calculations are easier because you can simply ignore the gap by pretending that you
are using a material whose permeability is lower than the material you actually have.
* Effective permeability is usually relative to µ0.
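One commonly used approximation for a gapped core (an assumption here, not a formula taken from this text or any particular data sheet) is µe = µr / (1 + µr g / le), where g is the gap length and le the magnetic path length; a small sketch shows how strongly even a short gap pulls the effective permeability down:

```python
# Commonly quoted approximation for the effective permeability of a gapped core
# (an assumption, not from any specific data sheet):
#     mu_e = mu_r / (1 + mu_r * g / l_e)
# where g is the gap length and l_e the magnetic path length.

def effective_permeability(mu_r, gap_m, path_length_m):
    return mu_r / (1 + mu_r * gap_m / path_length_m)

print(effective_permeability(mu_r=3000, gap_m=0.0,    path_length_m=0.05))  # ungapped
print(effective_permeability(mu_r=3000, gap_m=0.5e-3, path_length_m=0.05))  # 0.5 mm gap
```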

Permeability of a vacuum in the SI

Quantity name     permeability of a vacuum (alias permeability of free space, magnetic space constant, magnetic constant)
Quantity symbol   µ0
Unit name         henrys per metre
Unit symbols      H m-1
Base units        kg m s-2 A-2

The permeability of a vacuum has a finite value - about 1.257×10-6 H m-1 - and in the
SI system (unlike the cgs system) is denoted by the symbol µ0. Note that this value is
constant with field strength and temperature. Contrast this with the situation in
ferromagnetic materials where µ is strongly dependent upon both. Also, for practical
purposes, most non-ferromagnetic substances (such as wood, plastic, glass, bone,
copper, aluminium, air and water) have a permeability almost equal to µ0; that is, their
relative permeability is 1.0.
In Fig. MPZ you see, in cross section, two long, straight conductors spaced one metre
apart in a vacuum. Both carry one ampere. The field strength due to the current in
conductor A at a distance of one metre may be found, using Ampère's Law -

H = I / d = 1 / (2π) A m-1

Equation MPI

where I is the current in conductor A and d is the path length around the circular field
line. We know, from the definition of the ampere, that the force on conductor C is
2×10-7 newtons per metre of its length. However, flux density, B, is also defined in terms of
the force F, in newtons, exerted on a conductor of unit length and carrying unit current -

B = F / I = 2×10-7 / 1 tesla

Equation MPO

Since we now know both B and H at a distance of 1 metre from A we calculate the
permeability of a vacuum as -

µ0 = B / H = 2×10-7 / (1 / (2π)) = 4π×10-7 H m-1

Equation MPF
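A quick numerical check of equations MPI, MPO and MPF reproduces the value of µ0:

```python
# Numerical check of equations MPI, MPO and MPF above: two parallel conductors
# one metre apart, each carrying one ampere.

import math

I = 1.0                      # amperes
d = 2 * math.pi * 1.0        # circular path length at 1 m radius, metres
H = I / d                    # equation MPI, A/m
B = 2e-7 / 1.0               # equation MPO: force per metre per ampere, teslas
mu_0 = B / H                 # equation MPF

print(mu_0)                  # 1.2566...e-06
print(4 * math.pi * 1e-7)    # identical: 4*pi*10^-7 H/m
```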

Susceptibility

Susceptibility (magnetic) in the SI

Quantity name     susceptibility (alias bulk susceptibility or volumetric susceptibility)
Quantity symbol   χ, χv
Unit symbols      dimensionless

Duality with the Electric World

Quantity                  Unit   Formula
magnetic susceptibility   1      χmg = µr - 1
electric susceptibility   1      χel = εr - 1

Although susceptibility is not directly important to the designer of wound components it
is used in most textbooks which explain the theory of magnetism. When you work with
non-ferromagnetic substances the permeability is so close to µ0 that characterizing
them by µ is inconvenient. Instead use the magnetic susceptibility, χ - via the
permeability

χ = µr - 1

Equation MPS

In paramagnetic and diamagnetic materials the susceptibility is given by

χ=M/H

Equation MPH

Susceptibilities of some other substances are given in table MPS where the
paramagnetic substances have positive susceptibilities and the diamagnetic substances
have negative susceptibilities. The susceptibility of a vacuum is then zero. Susceptibility
is a strong contender for the title of 'most confusing quantity in all science'. There are
five reasons for this:

1. The counterpart to permeability in electrostatics has a distinct name: permittivity.
Unfortunately, the counterpart to susceptibility in electrostatics has the same
name. However, the electrostatic susceptibility should be given the symbol χe.
Susceptance has nothing to do with susceptibility.
2. There are variant forms of susceptibility, the main two of which are listed below.
Authors do not always explicitly state which variant is being used and, worse still,
there is incomplete agreement about the names and symbols of each variant.
The symbol χm is somewhat overloaded: magnetic susceptibility, mass
susceptibility, or molar susceptibility? Take your pick.
3. Most reference books, and many instruments, still present susceptibility figures in
CGS units. Often, the units are not made explicit and you are left to deduce them
from the context or the values themselves. The procedures for converting to SI
are not obvious.
4. Some quite prestigious publications incorrectly abbreviate the units to 'per gram'
or 'per kilogram'. :-(
5. Measurement of susceptibility is notoriously difficult. The slightest whiff of
contamination by iron in the sample will send the experimental results off into the
twilight. Published figures frequently show differences of 5%; and 50% is not
rare.

Table MPS: Magnetic susceptibilities

Material     χv / 10-5
Aluminium    +2.2
Ammonia      -1.06
Bismuth      -16.7
Copper       -0.92
Hydrogen     -0.00022
Oxygen       +0.19
Silicon      -0.37
Water        -0.90

The international symbol for susceptibility of the ordinary ('volumetric') kind is simply χ
without any subscript. ISO suggests χm to distinguish magnetic susceptibility from
electric susceptibility but this may risk confusion with mass or molar susceptibility. Some
writers have used χv to indicate volumetric susceptibility. Although electromagnetism is
already up to its ears in subscript soup, this seems a good solution.

Table MPN: Variant forms of susceptibility

Name                                                   Equation
bulk susceptibility                                    M / H
mass susceptibility                                    χv / ρ
molar susceptibility (or molar mass susceptibility)    χv × Wa / ρ


where ρ is the density of the substance in kg m-3 and Wa is the molar mass in kg mol-1.

To appreciate the difference for each variant think of it as being a separate way to get
the total magnetic moment for a magnetic field strength of one amp per metre. With bulk
susceptibility you start from a known volume, with mass susceptibility you start from a
known mass and from molar susceptibility you start from a known number of moles.

Depending upon your application one form will be more convenient than another.
Physicists like molar susceptibility because their calculations derive from atomic
properties. Geologists like mass susceptibility because they know the weight of their
sample.

The definition of susceptibility given here accords with the Sommerfeld SI variant. In
the Kennelly variant χ has a different definition.

Mass susceptibility

Magnetic susceptibility by mass in the SI

Quantity name     magnetic mass susceptibility (alias specific susceptibility)
Quantity symbol   χρ
Unit symbols      m3 kg-1

Magnetic mass susceptibility is simply

χρ = χv / ρ m3 kg-1 Equation MPT

where χv is the ordinary ('volumetric') susceptibility and ρ is the density of the material in
kg per cubic metre. Unfortunately some tables of mass susceptibility, even in
prestigious publications, abbreviate the units to 'susceptibility per gram' or 'susceptibility
per kilogram' which is, at best, a source of confusion. Take care to distinguish χρ from
χM or molar susceptibility; that is a different quantity.
So, if you know the mass of your material sample you need only multiply by χρ to find its
magnetic moment when the field strength is one amp per metre.
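A small sketch of equation MPT for aluminium, using the volumetric susceptibility from table MPS and an assumed density of about 2700 kg m-3:

```python
# Sketch of equation MPT: converting volumetric susceptibility to mass
# susceptibility. The aluminium density used here is an assumed round figure.

chi_v = 2.2e-5          # volumetric susceptibility of aluminium (table MPS)
rho = 2700.0            # density of aluminium, kg/m^3 (assumed)

chi_rho = chi_v / rho   # equation MPT, m^3/kg
print(f"chi_rho = {chi_rho:.2e} m^3/kg")   # about +0.81e-8, close to table MPM's +0.82
```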

Table MPM: Magnetic mass susceptibilities

Material     χρ / (10-8 m3 kg-1)
Aluminium    +0.82
Ammonia      -1.38
Bismuth      -1.70
Copper       -0.107
Hydrogen     -2.49
Oxygen       +133.6

This definition of susceptibility accords with the Sommerfeld SI variant. In the
Kennelly variant χ has a different definition.

--------------

Terminology for intrinsic fields within materials -

Magnetic moment

Magnetic moment in the SI

Quantity name     magnetic moment (alias magnetic dipole moment or electromagnetic moment)
Quantity symbol   m
Unit name         ampere metre squared
Unit symbols      A m2

The concept of magnetic moment is the starting point when discussing the behaviour of
magnetic materials within a field. If you place a bar magnet in a field then it will
experience a torque or moment tending to align its axis in the direction of the field. A
compass needle behaves the same. This torque increases with the strength of the poles
and their distance apart. So the value of magnetic moment tells you, in effect, 'how big a
magnet' you have.

It is also well known that a current carrying loop in a field also experiences a torque
(electric motors rely on this effect). Here the torque, τ, increases with the current, i, and
the area of the loop, A. θ is the angle made between the axis of the loop normal to its
plane and the field direction.

τ = B × i × A × sinθ Equation MPU


The unit of τ is the newton metre. This puzzling quantity appears to have the
dimensions of force times distance ... which is energy. Hmmm.
The quantity i × A is defined as the magnetic moment, m. This gives

τ = B × m × sinθ Equation MPL
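A brief numerical sketch of equations MPU and MPL, with an invented loop and field, gives a feel for the sizes involved:

```python
# Sketch of equations MPU and MPL: the torque on a small current loop in a
# field. The loop dimensions and field are invented for illustration.

import math

B = 0.5                          # flux density, teslas
i = 2.0                          # current, amperes
A = math.pi * 0.01**2            # loop area, m^2 (1 cm radius)
theta = math.radians(90)         # loop axis perpendicular to the field

m = i * A                        # magnetic moment, A m^2
tau = B * m * math.sin(theta)    # equation MPL (equivalent to MPU)
print(f"m = {m:.3e} A m^2, torque = {tau:.3e} N m")
```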

Particular materials where the magnetic moment of each atom can be made to favour
one direction are said to be magnetizable. The extent to which this happens is called
the magnetization. Magnetic moment is a vector quantity which has both direction and
magnitude. This is important because although the atoms in most materials may have
magnetic moments they are not easily brought into alignment in one direction, so the
moments cancel each other, leading to weak magnetization.
The Earth has a magnetic moment of 8×1022 A m2. A single electron has a magnetic
moment due to its orbit around the nucleus which is a multiple of 9.27×10-24 A m2
(known as the Bohr magneton, µB).

We have, then, two ways of looking at the basis of magnetism: one is the idea of a pair
of opposing poles and the other is current circulation. Each viewpoint has some
advantages over the other; and this gave science a hard time deciding which to prefer.
The reason this is worth mentioning is that different definitions arose for several
quantities in magnetics. Both models, however, function more as convenient
mathematical abstractions rather than literal descriptions of the physical origins of
magnetism.

Magnetization

Magnetization in the SI

Quantity name     magnetization
Quantity symbol   M
Unit name         ampere per metre
Unit symbols      A m-1

Magnetic fields are caused by the movement of charge, normally electrons. This
movement may take place in a wire carrying current. The wire then develops a
surrounding magnetic field which is given the symbol, H.
In a bar magnet you may not think that there need be any current but the magnetic field
here is also due to moving charge: the electrons circling around the nuclei of the iron
atoms or simply spinning about their own axis. Atoms like this are said to possess a
magnetic moment. The average field strength due to these moments at any particular
point is called magnetization and given the symbol M.
In most materials the moments are oriented almost at random - which leads to weak
magnetization and 'non-magnetic' properties. In iron the moments readily align
themselves along an applied field so inducing a large value of M and the familiar
characteristics in the presence of a field.
Magnetization is defined -

M = m / V A m-1

Equation MPV

where m is the vector sum of the magnetic moments of all the atoms in a given
volume V (in m³) of the material. We can then say -

B = µ0 ( H + M) teslas

Sommerfeld field equation

This equation is of theoretical importance because it highlights a closeness between H
and M. The H field is related to 'free currents': for example those flowing from a battery
along a piece of wire. M, on the other hand, is related to the 'bound' ('Ampèrian')
currents of electron orbitals within magnetized materials.
In practice, with ferromagnetic materials, M tends to be a very complex function of H -
including values of H in the past. As a designer of wound components you therefore
pretend instead that B = µH ... and hope for the best!

Magnetization occurs not just in materials having permanent magnetic moments but
also in any magnetizable material placed in a field that can induce a magnetic moment
in its constituent atoms. In the special case of paramagnetic and diamagnetic materials
this magnetization is given by

M = χ H   A m⁻¹   Equation MPZ

where χ is the magnetic susceptibility of the material.
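A short numerical sketch (Python) ties Equation MPZ to the Sommerfeld field equation above. The susceptibility used is an approximate literature figure for aluminium and the applied field is arbitrary; neither value comes from this text.

import math

MU0 = 4e-7 * math.pi      # permeability of free space, H/m

H = 1000.0                # applied field, A/m (arbitrary example)
chi = 2.2e-5              # approximate volume susceptibility of aluminium (paramagnetic)

M = chi * H               # Equation MPZ
B = MU0 * (H + M)         # Sommerfeld field equation

print(f"M = {M:.3e} A/m, B = {B:.6e} T")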

Intensity of magnetization

Intensity of magnetization in the SI

Quantity name: intensity of magnetization (alias: magnetic polarization)
Quantity symbol: I
Unit name: tesla
Unit symbol: T

Intensity of magnetization functions in the Kennelly variant of the SI as an alternative to
the Sommerfeld variant for magnetization, M.

I = µ0 M teslas Equation MPX

We can then say -


B = µ0 H + I teslas Kennelly field equation

Note the units of I: teslas, not amperes per metre as in the Sommerfeld magnetization. So
don't confuse intensity of magnetization with magnetization.
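The difference is purely one of units and bookkeeping, as the following minimal sketch (Python, with arbitrary example values) illustrates: converting M to I with Equation MPX and then applying the Kennelly field equation reproduces the Sommerfeld result exactly.

import math

MU0 = 4e-7 * math.pi     # H/m

H = 1000.0               # A/m, arbitrary example
M = 50.0                 # A/m, arbitrary example magnetization

I = MU0 * M                        # Equation MPX, in teslas
B_kennelly = MU0 * H + I           # Kennelly field equation
B_sommerfeld = MU0 * (H + M)       # Sommerfeld field equation

assert math.isclose(B_kennelly, B_sommerfeld)
print(f"I = {I:.3e} T, B = {B_kennelly:.3e} T")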

Magnetic polarization

Magnetic polarization in the SI

Quantity name: magnetic polarization (alias: intensity of magnetization)
Quantity symbol: J
Unit name: tesla
Unit symbol: T

Magnetic polarization is a synonym for intensity of magnetization in the Kennelly variant
of the SI.

-------------------------------------
Mechanical properties

Tensile strength
Tensile strength, σUTS or SU, measures the stress required to pull something such as a
rope, a wire, or a structural beam to the point where it breaks.

Explanation
The tensile strength of a material is the maximum amount of tensile stress that it can be
subjected to before failure. The definition of failure can vary according to material type
and design methodology. This is an important concept in engineering, especially in the
fields of material science, mechanical engineering and structural engineering.
There are three typical definitions of tensile strength:

• Yield strength: The stress at which material strain changes from elastic
deformation to plastic deformation, causing it to deform permanently.

• Ultimate strength: The maximum stress a material can withstand.

• Breaking strength: The stress coordinate on the stress-strain curve at the point of
rupture.

Concept
The various definitions of tensile strength are shown in the following stress-strain graph
for low-carbon steel:

[Figure: Stress vs. strain curve typical of structural steel. Legend: 1. Ultimate strength; 2. Yield strength; 3. Rupture; 4. Strain hardening region; 5. Necking region.]

Metals including steel have a linear stress-strain relationship up to the yield point, as
shown in the figure. In some steels the stress falls after the yield point. This is due to the
interaction of carbon atoms and dislocations in the stressed steel. Cold worked and
alloy steels do not show this effect. For most metals the yield point is not sharply defined.
Below the yield strength all deformation is recoverable, and the material will return to its
initial shape when the load is removed. For stresses above the yield point the
deformation is not recoverable, and the material will not return to its initial shape. This
unrecoverable deformation is known as plastic deformation. For many applications
plastic deformation is unacceptable, and the yield strength is used as the design
limitation.
After the yield point, steel and many other ductile metals will undergo a period of strain
hardening, in which the stress increases again with increasing strain up to the ultimate
strength. If the material is unloaded at this point, the stress-strain curve will be parallel
to that portion of the curve between the origin and the yield point. If it is re-loaded it will
follow the unloading curve back up to the stress at which it was unloaded, which has
become the new yield strength.
As a ductile metal is loaded towards its ultimate strength it begins to "neck": the cross-
sectional area of the specimen decreases locally due to plastic flow. When necking becomes
substantial, it causes a reversal of the engineering stress-strain curve, where
decreasing stress correlates to increasing strain because of geometric effects. This is
because the engineering stress and engineering strain are calculated assuming the
original cross-sectional area before necking. If the graph is plotted in terms of true
stress and true strain the curve will always slope upwards and never reverse, as true
stress is corrected for the decrease in cross-sectional area. Necking is not observed for
materials loaded in compression. The peak stress on the engineering stress-strain
curve is known as the ultimate tensile strength. After a period of necking, the material will
rupture and the stored elastic energy is released as noise and heat. The stress on the
material at the time of rupture is known as the breaking stress.
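The relationship between the engineering and true curves can be made concrete with the standard conversion formulas, which are valid only while deformation is still uniform (before necking). This is a sketch with made-up numbers, not data from any figure in these notes.

import math

def true_stress_strain(eng_stress, eng_strain):
    """Convert engineering stress/strain to true stress/strain.
    Only valid before necking, while deformation is uniform."""
    true_strain = math.log(1.0 + eng_strain)
    true_stress = eng_stress * (1.0 + eng_strain)
    return true_stress, true_strain

# Example: 400 MPa engineering stress at 10% engineering strain
print(true_stress_strain(400.0, 0.10))    # -> (440.0 MPa, ~0.0953)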
Many ductile metals do not have a well defined yield point. For these, the yield strength is
typically defined by the "0.2% offset strain" method. The yield strength at 0.2% offset is
determined by finding the intersection of the stress-strain curve with a line that is parallel
to the initial slope of the curve and intersects the strain axis (abscissa) at 0.002, i.e.
0.2% strain. A stress-strain curve typical of aluminum, along with the 0.2% offset line, is
shown in the figure below.

[Figure: Stress vs. strain curve typical of aluminum. Legend: 1. Ultimate strength; 2. Yield strength; 3. Proportional limit stress; 4. Rupture; 5. Offset strain (typically 0.002).]
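The offset construction just described is easy to automate. The sketch below (Python) uses a small made-up data set; a real determination would use the full digitized curve from a tensile test and would normally interpolate rather than take the nearest data point.

strain = [0.000, 0.001, 0.002, 0.003, 0.004, 0.006, 0.010, 0.020]
stress = [0.0,   70.0,  140.0, 200.0, 240.0, 270.0, 290.0, 300.0]   # MPa

# Young's modulus from the initial, elastic part of the curve
E = (stress[1] - stress[0]) / (strain[1] - strain[0])     # MPa

def offset_yield(strain, stress, E, offset=0.002):
    """Return the stress at the first data point where the measured curve
    falls below the offset line sigma = E * (strain - offset)."""
    for eps, sig in zip(strain, stress):
        if eps > offset and sig <= E * (eps - offset):
            return sig
    return None   # the curve never crossed the offset line in this data range

print(f"E ~ {E / 1000:.0f} GPa, 0.2% offset yield ~ {offset_yield(strain, stress, E)} MPa")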

Brittle materials such as concrete and carbon fiber do not have a yield point and do not
strain-harden, which means that the ultimate strength and breaking strength are the
same. Typical brittle materials do not show any plastic deformation but fail while the
deformation is still elastic. One of the characteristics of a brittle failure is that the two broken
parts can be reassembled to produce the same shape as the original component. A typical
stress-strain curve for a brittle material is linear, and testing of several identical specimens
will result in different failure stresses because failure is governed by flaws in the material.
The curve shown in the figure below would be typical of a brittle polymer tested at very slow
strain rates at a temperature below its glass transition temperature. Some engineering
ceramics show a small amount of ductile behaviour at stresses just below that causing
failure, but the initial part of the curve is linear.

[Figure: Stress vs. strain curve typical of a brittle material. Legend: 1. Ultimate strength; 2. Rupture.]

Tensile strength is measured in units of force per unit area. In the SI system, the units
are newtons per square metre (N/m²) or pascals (Pa), with prefixes as appropriate. The
usual non-SI units are pounds-force per square inch (lbf/in² or psi). Engineers in North
America often use ksi, which is a thousand psi.
The breaking strength of a rope is specified in units of force, such as newtons, without
specifying the cross-sectional area of the rope. This is often loosely called tensile
strength, but this is not a strictly correct use of the term.
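A small worked example (Python) shows the difference between a force rating and a tensile strength, and the usual unit conversions; the rope force and cross-section below are invented for illustration.

force_N = 12_000.0      # breaking force of a hypothetical rope, newtons (a force, not a stress)
area_m2 = 50e-6         # cross-sectional area, m^2

stress_Pa = force_N / area_m2      # N/m^2 = Pa
stress_MPa = stress_Pa / 1e6

PSI_PER_MPA = 145.038              # 1 MPa is about 145.038 psi
stress_psi = stress_MPa * PSI_PER_MPA

print(f"{stress_MPa:.0f} MPa = {stress_psi:.0f} psi = {stress_psi / 1000:.2f} ksi")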
In brittle materials such as rock, concrete, cast iron, or soil, tensile strength is negligible
compared to the compressive strength and it is assumed zero for many engineering
applications. Glass fibers have a tensile strength stronger than steel[1], but bulk glass
usually does not. This is due to the Stress Intensity Factor associated with defects in the
material. As the size of the sample gets larger, the size of defects also grows. In
general, the tensile strength of a rope is always less than the tensile strength of its
individual fibers.
Tensile strength can be defined for liquids as well as solids. For example, when a tree
draws water from its roots to its upper leaves by transpiration, the column of water is
pulled upwards from the top by capillary action, and this force is transmitted down the
column by its tensile strength. Air pressure from below also plays a small part in a tree's
ability to draw up water, but this alone would only be sufficient to push the column of
water to a height of about ten metres, and trees can grow much higher than that. (See
also cavitation, which can be thought of as the consequence of water being "pulled too
hard".)

Typical tensile strengths


Typical tensile strengths of some materials:


• Note: Multiwalled carbon nanotubes have the highest tensile strength of any
material yet measured, with labs producing them at a tensile strength of 63 GPa,
still well below their theoretical limit of 300 GPa. However as of 2004, no
macroscopic object constructed of carbon nanotubes has had a tensile strength
remotely approaching this figure, or substantially exceeding that of high-strength
materials like Kevlar.

• Note: many of the values depend on the manufacturing process and purity/composition.

Material                                          Yield strength (MPa)   Ultimate strength (MPa)   Density (g/cm³)
Structural steel ASTM A36 steel                   250                    400                       7.8
Steel, API 5L X65 (Fikret Mert Veral)             448                    531                       7.8
Steel, high strength alloy ASTM A514              690                    760                       7.8
Steel, prestressing strands                       1650                   1860                      7.8
Steel wire                                        -                      -                         7.8
Steel, piano wire                                 -                      c. 2000                   7.8
High density polyethylene (HDPE)                  26-33                  37                        0.95
Polypropylene                                     12-43                  19.7-80                   0.91
Stainless steel AISI 302, cold-rolled             520                    860                       -
Cast iron 4.5% C, ASTM A-48                       276 (??)               200                       -
Titanium alloy (6% Al, 4% V)                      830                    900                       4.51
Aluminum alloy 2014-T6                            400                    455                       2.7
Copper 99.9% Cu                                   70                     220                       8.92
Cupronickel 10% Ni, 1.6% Fe, 1% Mn, balance Cu    130                    350                       8.94
Brass                                             approx. 180+           250                       -
Tungsten                                          -                      1510                      19.25
Glass                                             -                      50 (in compression)       2.53
Marble                                            N/A                    15                        -
Concrete                                          N/A                    3                         -
Carbon fiber                                      N/A                    5650                      1.75
Spider silk                                       1150 (??)              1200                      -
Silkworm silk                                     -                      500                       -
Aramid (Kevlar or Twaron)                         -                      3620                      1.44
UHMWPE                                            23                     46                        0.97
UHMWPE fibers[1][2] (Dyneema or Spectra)          -                      2300-3500                 0.97
Vectran                                           -                      2850-3340                 -
Pine wood (parallel to grain)                     -                      40                        -
Bone (limb)                                       -                      130                       -
Nylon, type 6/6                                   45                     75                        -
Rubber                                            -                      15                        -
Boron                                             N/A                    3100                      2.46
Silicon, monocrystalline (m-Si)                   N/A                    7000                      2.33
Silicon carbide (SiC)                             N/A                    3440                      -
Sapphire (Al2O3)                                  N/A                    1900                      3.9-4.1
Carbon nanotube (see note above)                  N/A                    62000                     1.34

(Dashes indicate values not given in the source table.)

Element (annealed state)    Young's modulus (GPa)   Proof or yield stress (MPa)   Ultimate strength (MPa)
Aluminium                   70                      15-20                         40-50
Copper                      130                     33                            210
Gold                        79                      -                             100
Iron                        211                     80-100                        350
Lead                        16                      -                             12
Nickel                      170                     14-35                         140-195
Silicon                     107                     -                             5000-9000
Silver                      83                      -                             170
Tantalum                    186                     180                           200
Tin                         47                      9-14                          15-200
Titanium                    120                     100-225                       240-370
Tungsten                    411                     550                           550-620
Zinc (wrought)              105                     -                             110-200

(Source: A.M. Howatson, P.G. Lund and J.D. Todd, "Engineering Tables and Data", p. 41)

Sources
• A.M. Howatson, P.G. Lund and J.D. Todd, "Engineering Tables and Data"
• Giancoli, Douglas. Physics for Scientists & Engineers Third Edition. Upper
Saddle River: Prentice Hall, 2000.
• Köhler, T. and F. Vollrath. 1995. Thread biomechanics in the two orb-weaving
spiders Araneus diadematus (Araneae, Araneidae) and Uloboris walckenaerius
(Araneae, Uloboridae). Journal of Experimental Zoology 271:1-17.
• Edwards, Bradley C. "The Space Elevator: A Brief Overview". http://www.liftport.com/files/521Edwards.pdf
• T. Follett, "Life without metals"
• Min-Feng Yu et al. (2000), "Strength and Breaking Mechanism of Multiwalled Carbon Nanotubes Under Tensile Load", Science 287, 637-640

See also
• Tension (mechanics)
• Toughness
• Deformation
• Tensile structure
• Universal Testing Machine
• Specific Strength
Metallic / Ionic bond
Metallic bonding is the bonding between atoms within metals. It involves the delocalized
sharing of free electrons among a lattice of metal atoms. Thus, metallic bonds may be compared
to molten salts.

Metallic bonding is the electrostatic attraction between the metal atoms or ions and the
delocalized electrons, also called conduction electrons. This is why atoms or layers can slide
past each other, resulting in the characteristic properties of malleability and ductility.

Metal atoms typically have few electrons in their valence shell compared with the number
needed to fill it. These electrons are easily lost by the atoms, becoming delocalized and
forming a sea of electrons surrounding a giant lattice of positive ions.

The electrons and the positive ions in the metal have a strong attractive force between them,
so considerable energy is required to overcome it. Therefore metals often have high
melting and boiling points. The principle is similar to that of ionic bonds.

Metallic bonding is non-polar, because for pure elemental metals and even for alloys there is no
(or a very small) electronegativity difference among the atoms participating in the bonding
interaction, and the electrons involved in that interaction are delocalized across the crystalline
structure of the metal.

The metallic bond accounts for many physical characteristics of metals, such as strength,
malleability, ductility, conduction of heat and electricity, and lustre.
Because the electrons move independently of the positive ions in a sea of negative
charge, metals have high electrical conductivity: the free electrons carry charge through the
lattice, producing a current. Heat conduction works on the same principle - the free
electrons can transfer energy at a faster rate than in other substances, such as covalently
bonded materials, whose electrons are fixed in position.

A few non-metals also conduct electricity: graphite (because, like metals, it has free
electrons), and molten or aqueous ionic compounds, which have free-moving ions. [1]
[2] [3]

Metal atoms have at least one valence electron which they do not share with neighboring atoms,
nor do they lose electrons to form ions. Instead the outer energy levels of the metal atoms
overlap; in this respect metallic bonds are similar to covalent bonds. [4]
See also
• Chemical bond
• Covalent bond
• Ionic bond
• Coordinate covalent bond

---------------------------------

Ionic bond
An ionic bond (or electrovalent bond) is a type of chemical bond based on electrostatic forces
between two oppositely-charged ions. In ionic bond formation, a metal, which has a low
electronegativity, donates one or more electrons to form a positive ion or cation. In ordinary
table salt (NaCl), the bonds between the sodium and chloride ions are ionic bonds. Often ionic
bonds form between metals and non-metals. The non-metal atom has an electron configuration
just short of a noble gas structure; non-metals have high electronegativity, and so readily gain
electrons to form negative ions or anions. The two or more ions are then attracted to each other
by electrostatic forces.

Ionic bonding occurs only if the overall energy change for the reaction is favourable – when the
bonded atoms have a lower energy than the free ones. The larger the resulting energy change,
the stronger the bond.

Pure ionic bonding is not known to exist. All ionic bonds have a degree of covalent bonding or
metallic bonding. The larger the difference in electronegativity between two atoms the more
ionic the bond. Ionic compounds conduct electricity when molten or in solution. They generally
have a high melting point and tend to be soluble in water.

Polarization effects

Ions in crystal lattices of purely ionic compounds are spherical; however, if the positive ion is
small and/or highly charged, it will distort the electron cloud of the negative ion. This
polarization of the negative ion leads to a build-up of extra charge density between the two
nuclei, i.e., to partial covalency. Larger negative ions are more easily polarized, but the effect is
usually only important when positive ions with charges of 3+ (e.g., Al3+) are involved (e.g., pure
AlCl3 is a covalent molecule). However, 2+ ions (Be2+) or even 1+ (Li+) show some polarizing
power because their sizes are so small (e.g., LiI is ionic but has some covalent bonding present).

Ionic structure
Ionic compounds in the solid state form a continuous ionic lattice structure in an ionic crystal.
The simplest form of ionic crystal is simple cubic, as if all the atoms were placed at the corners
of a cube; this unit cell has a weight equal to that of one of the atoms involved. When all
the ions are approximately the same size, they can form a different structure called face-
centered cubic (where the unit cell weight is 4 times the atomic weight), but, when the ions are
different sizes, the structure is often body-centered cubic (2 times the weight). In ionic lattices
the coordination number refers to the number of oppositely charged ions immediately
surrounding a given ion.
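As an illustration of how the number of formula units per unit cell enters such calculations, the density of a cubic ionic crystal can be estimated from its lattice parameter. The formula and the NaCl figures below are standard textbook values, included only as an example and not taken from these notes.

AVOGADRO = 6.022e23     # formula units per mole

def crystal_density(units_per_cell, molar_mass_g, a_cm):
    """Density in g/cm^3 of a cubic crystal, from the number of formula units
    per unit cell, the molar mass (g/mol) and the lattice parameter a (cm)."""
    return units_per_cell * molar_mass_g / (AVOGADRO * a_cm ** 3)

# NaCl (rock-salt structure): 4 formula units per cell, a ~ 5.64 angstrom
print(f"{crystal_density(4, 58.44, 5.64e-8):.2f} g/cm^3")   # ~2.16 (measured ~2.17)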

Ionic versus covalent bonds

In an ionic bond, the atoms are bound by attraction of opposite ions, whereas, in a covalent bond,
atoms are bound by sharing electrons. In covalent bonding, the molecular geometry around each
atom is determined by VSEPR rules, whereas, in ionic materials, the geometry follows maximum
packing rules.

Electrical conductivity

Main article: Electrolyte


Ionic substances in solution conduct electricity because the ions are free to move and carry the
electrical charge from the anode to the cathode. Ionic substances also conduct electricity when
molten, because the ions themselves become mobile and carry the charge; the electrons remain
bound to their ions rather than flowing freely through the melt.

Substances in ionic form

Common Cations (Stock system name, formula, historic name)    Common Anions (formal name, formula, alternative name)

Simple Cations                                                Simple Anions
Aluminum Al3+ Arsenide As3−
Barium Ba2+ Azide N3−
Beryllium Be2+ Bromide Br−
Caesium Cs+ Chloride Cl−
Calcium Ca2+ Fluoride F−
Chromium(II) Cr2+ Chromous Hydride H−
Chromium(III) Cr3+ Chromic Iodide I−
Chromium(VI) Cr6+ Chromyl Nitride N3−
Cobalt(II) Co2+ Cobaltous Oxide O2−
Cobalt(III) Co3+ Cobaltic Phosphide P3−
Copper(I) Cu+ Cuprous Sulphide S2−
Copper(II) Cu2+ Cupric Peroxide O22−
Copper(III) Cu3+ Oxoanions
Gallium Ga3+ Arsenate AsO43−
Helium He2+ (Alpha particle) Arsenite AsO33−
Hydrogen H+ (Proton) Borate BO33−
Iron(II) Fe2+ Ferrous Bromate BrO3−
Iron(III) Fe3+ Ferric Hypobromite BrO−
Lead(II) Pb2+ Plumbous Carbonate CO32−
Lead(IV) Pb4+ Plumbic Hydrogen Carbonate HCO3− Bicarbonate
Lithium Li+ Chlorate ClO3−
Magnesium Mg2+ Perchlorate ClO4−
Manganese(II) Mn2+ Manganous Chlorite ClO2−
Manganese(III) Mn3+ Manganic Hypochlorite ClO−
Manganese(IV) Mn4+ Manganyl Chromate CrO42−
Manganese(VII) Mn7+ Dichromate Cr2O72−
Mercury(II) Hg2+ Mercuric Iodate IO3−
Nickel(II) Ni2+ Nickelous Nitrate NO3−
Nickel(III) Ni3+ Nickelic Nitrite NO2−
Potassium K+ Phosphate PO43−
Silver Ag+ Hydrogen Phosphate HPO42−
Sodium Na+ Dihydrogen Phosphate H2PO4−
Strontium Sr2+ Permanganate MnO4−
Tin(II) Sn2+ Stannous Phosphite PO33−
Tin(IV) Sn4+ Stannic Sulphate SO42−
Zinc Zn2+ Thiosulphate S2O32−
Polyatomic Cations Hydrogen Sulphate HSO4− Bisulphate
Ammonium NH4+ Sulphite SO32−
Hydronium H3O+ Hydrogen Sulphite HSO3− Bisulphite
Nitronium NO2+ Anions from Organic Acids
Mercury(I) Hg22+ Mercurous Acetate C2H3O2−
Formate HCO2−
Oxalate C2O42−
Hydrogen Oxalate HC2O4− Bioxalate
Other Anions
Hydrogen Sulphide HS− Bisulphide
Telluride Te2−
Amide NH2−
Cyanate OCN−
Thiocyanate SCN−
Cyanide CN−

----------------------------------------------------------------------------------------

Covalent bond

Covalent bonding is a form of chemical bonding that is characterized by the sharing of pairs of
electrons between atoms, or between atoms and other covalent bonds. In short, the stable
balance of attractive and repulsive forces that forms between atoms when they share electrons
is known as covalent bonding.
Covalent bonding includes many kinds of interactions, including σ-bonding, π-bonding, metal-
metal bonding, agostic interactions, and three-center two-electron bonds.[1][2] The term
covalent bond dates from 1939.[3] The prefix co- means jointly, associated in action, partnered
to a lesser degree, etc.; thus a "co-valent bond", essentially, means that the atoms share
"valence", such as is discussed in valence bond theory.

In the molecule H2, the hydrogen atoms share the two electrons via covalent bonding. Covalency
is greatest between atoms of similar electronegativities. Thus, covalent bonding does not
necessarily require the two atoms be of the same elements, only that they be of comparable
electronegativity. Because covalent bonding entails sharing of electrons, it is necessarily
delocalized. Furthermore, in contrast to electrostatic interactions ("ionic bonds"), the strength of
a covalent bond depends on the angular relation between atoms in polyatomic molecules.

[Figure: Schemes depicting covalent (left) and polar covalent (right) bonding in a diatomic molecule; the arrows represent electrons provided by the participating atoms.]
History
The term "covalence" in regards to bonding was first used in 1919 by Irving Langmuir in a
Journal of American Chemical Society article entitled The Arrangement of Electrons in Atoms
and Molecules:[4]
(p.926)… we shall denote by the term covalence the number of pairs of electrons which a ”

given atom shares with its neighbors.
The idea of covalent bonding can be traced several years prior to 1919 to Gilbert N. Lewis, who
in 1916 described the sharing of electron pairs between atoms. He introduced the so-called Lewis
notation, electron dot notation, or Lewis dot structure, in which valence electrons (those in
the outer shell) are represented as dots around the atomic symbols. Pairs of electrons located
between atoms represent covalent bonds. Multiple pairs represent multiple bonds, such as double
and triple bonds. Some examples of Electron Dot Notation are shown in the following figure. An
alternative form, in which bond-forming electron pairs are represented as solid lines, is shown
alongside.
Early concepts in covalent bonding arose from this kind of image of the molecule of methane.
Covalent bonding is implied in the Lewis structure that indicates sharing of electrons between
atoms.
While the idea of shared electron pairs provides an effective qualitative picture of covalent
bonding, quantum mechanics is needed to understand the nature of these bonds and predict the
structures and properties of simple molecules. Walter Heitler and Fritz London are credited with
the first successful quantum mechanical explanation of a chemical bond, specifically that of
molecular hydrogen, in 1927. Their work was based on the valence bond model, which assumes
that a chemical bond is formed when there is good overlap between the atomic orbitals of
participating atoms. These atomic orbitals are known to have specific angular relationships
between each other, and thus the valence bond model can successfully predict the bond angles
observed in simple molecules.

Bond order
Bond order is a number that indicates the number of pairs of electrons shared between atoms
forming a covalent bond. Strictly, the term applies to diatomic molecules, but it is also used to
describe bonds within polyatomic compounds.

1. The most common type of covalent bond is the single bond, the sharing of only one pair
of electrons between two atoms. It usually consists of one sigma bond. All bonds with
more than one shared pair are called multiple bonds.
2. Sharing two pairs is called a double bond. An example is in ethylene (between the
carbon atoms). It usually consists of one sigma bond and one pi bond.
3. Sharing three pairs is called a triple bond. An example is in hydrogen cyanide (between
C and N). It usually consists of one sigma bond and two pi-bonds.
4. Quadruple bonds are found in the transition metals. Molybdenum and rhenium are the
elements most commonly observed with this bonding configuration. An example of a
quadruple bond is also found in Di-tungsten tetra(hpp).
5. Quintuple bonds have been found to exist in certain dichromium compounds.
6. The only known molecules with true sextuple bonds (order 6) are diatomic Mo2 and W2,
in the gaseous phase at very low temperatures. Although diatomic Cr2 and U2 have
formal structures with twelve-electron bonds, their effective bond orders (derived from
quantum chemistry calculations) are less than 5. There is strong evidence to believe that
no two elements in the periodic table can form a bond with greater order than 6.[5]

Most bonding, of course, is not localized, so the following classification, while powerful and
pervasive, is of limited validity. Three-center bonds do not conform readily to the above
conventions.

Resonance

Many bonding situations can be described with more than one valid Lewis Dot Structure (for
example, ozone, O3). In an LDS diagram of O3, the center atom will have a single bond with one
atom and a double bond with the other. The LDS diagram cannot tell us which atom has the
double bond; the first and second adjoining atoms have equal chances of having the double bond.
These two possible structures are called resonance structures. In reality, the structure of ozone is
a resonance hybrid between its two possible resonance structures. Instead of having one double
bond and one single bond, there are actually two 1.5 bonds with approximately three electrons in
each at all times.

A special resonance case is exhibited in aromatic rings of atoms (for example, benzene).
Aromatic rings are composed of atoms arranged in a circle (held together by covalent bonds) that
may alternate between single and double bonds according to their LDS. In actuality, the electrons
tend to be delocalized and evenly spaced within the ring. Electron sharing in aromatic
structures is often represented with a ring inside the circle of atoms.

Current theory

Today the valence bond model has been supplanted by the molecular orbital model. In this
model, as atoms are brought together, the atomic orbitals interact to form molecular orbitals,
which are linear sums and differences of the atomic orbitals. These molecular orbitals are a cross
between the original atomic orbitals and generally extend between the two bonding atoms.

Using quantum mechanics it is possible to calculate the electronic structure, energy levels, bond
angles, bond distances, dipole moments, and electromagnetic spectra of simple molecules with a
high degree of accuracy. Bond distances and angles can be calculated as accurately as they can
be measured (distances to a few pm and bond angles to a few degrees). For small molecules,
calculations are sufficiently accurate to be useful for determining thermodynamic heats of
formation and kinetic activation energy barriers.
Metals & Alloys

Types of Materials

Metals:

Metals are elements that generally have good electrical and thermal conductivity. Many
metals have high strength, high stiffness, and have good ductility. Some metals, such as
iron, cobalt and nickel are magnetic. At extremely low temperatures, some metals and
intermetallic compounds become superconductors.

Pure metals:

Pure metals are elements which come from a particular area of the periodic table.
Examples of pure metals include copper in electrical wires and aluminum in cooking foil
and beverage cans.

Metal Alloys:

Metal Alloys contain more than one metallic element. Their properties can be changed by
changing the elements present in the alloy. Examples of metal alloys include stainless steel
which is an alloy of iron, nickel, and chromium; and gold jewelry which usually contains an
alloy of gold and nickel. The most important properties of metals include density, fracture
toughness, strength and plastic deformation.

The atomic bonding of metals also affects their properties. In metals, the outer valence
electrons are shared among all atoms, and are free to travel everywhere. Since electrons
conduct heat and electricity, metals make good cooking pans and electrical wires.

Many metals and alloys have high densities and are used in applications which require a
high mass-to-volume ratio. Some metal alloys, such as those based on Aluminum, have low
densities and are used in aerospace applications for fuel economy. Many metal alloys also
have high fracture toughness, which means they can withstand impact and are durable.

Polymers

A polymer has a repeating structure, usually based on a carbon backbone. The repeating
structure results in large chainlike molecules. Polymers are useful because they are
lightweight, are corrosion resistant, are easy to process at low temperatures, and are
generally inexpensive. Some important characteristics of polymers include their size (or
molecular weight), softening and melting points, crystallinity, and structure.
The mechanical properties of polymers generally include low strength and high toughness.
Their strength is often improved using reinforced composite structures. One of the distinct
properties of polymers is that they are poor conductors of electricity and heat, which
makes them good insulators.

Ceramics

A ceramic is often broadly defined as any inorganic nonmetallic material. Examples of such
materials can be anything from NaCl (table salt) to clay (a complex silicate). Some of the
useful properties of ceramics and glasses include high melting temperature, low density,
high strength, stiffness, hardness, wear resistance, and corrosion resistance.

Many ceramics are good electrical and thermal insulators. Some ceramics have special
properties: some ceramics are magnetic materials; some are piezoelectric materials; and a
few special ceramics are superconductors at very low temperatures. Ceramics and glasses
have one major drawback: they are brittle.

Glasses

A glass is an inorganic nonmetallic material that does not have a crystalline structure. Such
materials are said to be amorphous. Examples of glasses range from the soda-lime silicate
glass in soda bottles to the extremely high purity silica glass in optical fibers.

Composites

Composites are formed from two or more types of materials. Examples include
polymer/ceramic and metal/ceramic composites. Composites are used because overall
properties of the composites are superior to those of the individual components.

For example: polymer/ceramic composites have a greater modulus than the polymer
component, but aren't as brittle as ceramics.
-----------------------------------------------------------------------------------------------

Metal
In chemistry, a metal is an element that readily loses electrons to form positive ions (cations)
and has metallic bonds between metal atoms.

Metals form ionic bonds with non-metals. They are sometimes described as a lattice of positive
ions surrounded by a cloud of delocalized electrons. The metals are one of the three groups of
elements as distinguished by their ionization and bonding properties, along with the metalloids
and nonmetals. On the periodic table, a diagonal line drawn from boron (B) to polonium (Po)
separates the metals from the nonmetals.
Most elements on this line are metalloids, sometimes called semi-metals; elements to the lower
left are metals; elements to the upper right are nonmetals. An alternative definition of metals is
that they have overlapping conduction bands and valence bands in their electronic structure. This
definition opens up the category for metallic polymers and other organic metals, which have
been made by researchers and employed in high-tech devices. These synthetic materials often
have the characteristic silvery-grey reflectiveness (luster) of elemental metals. The traditional
definition focuses on the bulk properties of metals. They tend to be lustrous, ductile, malleable,
and good conductors of electricity, while nonmetals are generally brittle (if solid), lack luster,
and are insulators.

Chemical properties
Most metals are chemically reactive, reacting with oxygen in the air to form oxides over
varying timescales (for example iron rusts over years and potassium burns in seconds). The
alkali metals react quickest followed by the alkaline earth metals, found in the leftmost two
groups of the periodic table. The transition metals take much longer to oxidize (such as iron,
copper, zinc, nickel). Others, like palladium, platinum and gold, do not react with the atmosphere
at all.

Some metals form a barrier layer of oxide on their surface which cannot be penetrated by further
oxygen molecules and thus retain their shiny appearance and good conductivity for many
decades (like aluminium, some steels, and titanium). The oxides of metals are basic (as opposed
to those of nonmetals, which are acidic), although this may be considered a rule of thumb, rather
than a fact. Painting or anodising metals are good ways to prevent their corrosion. However, a
more reactive metal in the electrochemical series must be chosen for coating, especially when
chipping of the coating is expected. Water and the two metals form an electrochemical cell, and
if the coating is less reactive than the metal being protected, the coating actually promotes corrosion.

Physical properties

Traditionally, metals have certain characteristic physical properties: they are usually shiny (they
have metallic luster), have a high density, are ductile and malleable, usually have a high melting
point, are usually hard, are usually a solid at room temperature and conduct electricity, heat and
sound well. However, this is mainly because the low density, soft, low melting point metals (the
alkali and alkaline earth metals) happen to be reactive, and we rarely encounter them in their
elemental, metallic form.

The electrical and thermal conductivity of metals originate from the fact that in the metallic bond
the outer electrons of the metal atoms form a gas of nearly free electrons, moving as an electron
gas in a background of positive charge formed by the ion cores.

Good mathematical predictions for electrical conductivity, as well as the electrons' contribution
to the heat capacity and heat conductivity of metals can be calculated from the free electron
model, which does not take the detailed structure of the ion lattice into account. When
considering the exact band structure and binding energy of a metal, it is necessary to take into
account the positive potential caused by the specific arrangement of the ion cores - which is
periodic in crystals.

The most important consequence of the periodic potential is the formation of a small band gap at
the boundary of the Brillouin zone. Mathematically, the potential of the ion cores is treated in the
nearly-free electron model.
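The free electron picture can be turned into a back-of-the-envelope estimate using the standard Drude expression for conductivity, sigma = n e² tau / m. The copper numbers below are commonly quoted approximate values, not figures from this text.

E_CHARGE = 1.602e-19       # electron charge, C
M_ELECTRON = 9.109e-31     # electron mass, kg

n_cu = 8.5e28              # free electron density of copper, m^-3 (approximate)
tau_cu = 2.5e-14           # mean time between collisions, s (approximate)

sigma = n_cu * E_CHARGE ** 2 * tau_cu / M_ELECTRON
print(f"sigma ~ {sigma:.2e} S/m")   # about 6e7 S/m, close to the measured value for copper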

Alloys
An alloy is a mixture of two or more elements in solid solution in which the major component is
a metal.

Most pure metals are either too soft, brittle or chemically reactive for practical use. Combining
different ratios of metals as alloys modifies the properties of pure metals to produce desirable
characteristics.

The aim of making alloys is generally to make the metal less brittle, harder, more resistant to
corrosion, or to give it a more desirable color and luster. Examples of alloys are steel (iron and carbon), brass
(copper and zinc), bronze (copper and tin), and duralumin (aluminium and copper). Alloys
specially designed for highly demanding applications, such as jet engines, may contain more
than ten elements.

Categories

Base metal

In chemistry, the term 'base metal' is used informally to refer to a metal that oxidizes or corrodes
relatively easily, and reacts variably with dilute hydrochloric acid (HCl) to form hydrogen.
Examples include iron, nickel, lead and zinc. Copper is considered a base metal as it oxidizes
relatively easily, although it does not react with HCl. It is commonly used in opposition to noble
metal. In alchemy, a base metal was a common and inexpensive metal, as opposed to precious
metals, mainly gold and silver. A longtime goal of the alchemists was the transmutation of base
metals into precious metals. In numismatics, coins used to derive their value primarily from the
precious metal content. Most modern currencies are fiat currency, allowing the coins to be made
of base metal.

Ferrous metal

The term "ferrous" is derived from the latin word meaning "containing iron". This can include
pure iron, such as wrought iron, or an alloy such as steel. Ferrous metals are often magnetic, but
not exclusively. Noble metal Noble metals are metals that are resistant to corrosion or
oxidation, unlike most base metals. They tend to be precious metals, often due to perceived
rarity. Examples include tantalum, platinum, and rhodium.

Precious metal

A precious metal is a rare metallic chemical element of high economic value. Chemically, the
precious metals are less reactive than most elements, have high luster and high electrical
conductivity. Historically, precious metals were important as currency, but are now regarded
mainly as investment and industrial commodities. Gold, silver, platinum and palladium each
have an ISO 4217 currency code.

The best-known precious metals are gold and silver. While both have industrial uses, they are
better known for their uses in art, jewelry, and coinage. Other precious metals include the
Platinum group metals: ruthenium, rhodium, palladium, osmium, iridium, and platinum, of
which platinum is the most widely traded. Plutonium and uranium could also be considered
precious metals.

The demand for precious metals is driven not only by their practical use, but also by their role as
investments and a store of value. Palladium was, as of summer 2006, valued at a little under half
the price of gold, and platinum at around twice that of gold. Silver is substantially less expensive
than these metals, but is often traditionally considered a precious metal for its role in coinage and
jewelry.

Extraction

Metals are often extracted from the Earth by means of mining, resulting in ores that are relatively
rich sources of the requisite elements. Ore is located by prospecting techniques, followed by the
exploration and examination of deposits.

Mineral sources are generally divided into surface mines, which are mined by excavation using
heavy equipment, and subsurface mines. Once the ore is mined, the metals must be extracted,
usually by chemical or electrolytic reduction. Pyrometallurgy uses high temperatures to convert
ore into raw metals, while hydrometallurgy employs aqueous chemistry for the same purpose.
The methods used depend on the metal and its contaminants.

Metallurgy

Metallurgy is a domain of materials science that studies the physical and chemical behavior of
metallic elements, their intermetallic compounds, and their mixtures, which are called alloys.

Applications

Some metals and metal alloys possess high structural strength per unit mass, making them useful
materials for carrying large loads or resisting impact damage. Metal alloys can be engineered to
have high resistance to shear, torque and deformation. However the same metal can also be
vulnerable to fatigue damage through repeated use, or from sudden stress failure when a load
capacity is exceeded.

The strength and resilience of metals has led to their frequent use in high-rise building and bridge
construction, as well as most vehicles, many appliances, tools, pipes, non-illuminated signs and
railroad tracks. Metals are good conductors, making them valuable in electrical appliances and
for carrying an electric current over a distance with little energy lost. Electrical power grids rely
on metal cables to distribute electricity.

Home electrical systems, for the most part, are wired with copper wire for its good conducting
properties. The thermal conductivity of metal is useful for containers to heat materials over a
flame. Metal is also used for heat sinks to protect sensitive equipment from overheating. The
high reflectivity of some metals is important in the construction of mirrors, including precision
astronomical instruments.

This last property can also make metallic jewelry aesthetically appealing. Some metals have
specialized uses; Radioactive metals such as Uranium and Plutonium are used in nuclear power
plants to produce energy via nuclear fission. Mercury is a liquid at room temperature and is used
in switches to complete a circuit when it flows over the switch contacts. Shape memory alloy is
used for applications such as pipes, fasteners and vascular stents.

------------------------------------------------------------------------------------------
POLYMERS - NOTES
Polymers

A polymer (from Greek: πολυ, polu, "many"; and µέρος, meros, "part") is a substance
composed of molecules with large molecular mass composed of repeating structural
units, or monomers, connected by covalent chemical bonds. Well known examples of
polymers include plastics and DNA.


Overview
While the term polymer in popular usage suggests "plastic", polymers comprise a large
class of natural and synthetic materials with a variety of properties and purposes.
Natural polymer materials such as shellac and amber have been in use for centuries.
Paper is manufactured from cellulose, a naturally occurring polysaccharide found in
plants. Biopolymers such as proteins and nucleic acids play important roles in biological
processes.

Historical development
The term polymer was coined in 1833 by Jöns Jakob Berzelius. Around the same time
Henri Braconnot did pioneering work in derivative cellulose compounds, perhaps the
earliest important work in polymer science. The development of vulcanization later in
the nineteenth century improved the durability of the natural polymer rubber, signifying
the first popularized semi-synthetic polymer. The first wholly synthetic polymer, Bakelite,
was introduced in 1909. Despite significant advances in synthesis and characterization
of polymers, a proper understanding of polymer molecular structure did not come until
the 1920s. Before that, scientists believed that polymers were clusters of small
molecules (called colloids), without definite molecular weights, held together by an
unknown force, a concept known as association theory. In 1922, Hermann Staudinger
proposed that polymers consisted of long chains of atoms held together by covalent
bonds, an idea which did not gain wide acceptance for over a decade, and for which
Staudinger was ultimately awarded the Nobel Prize. In the intervening century, synthetic
polymer materials such as Nylon, polyethylene, Teflon, and silicone have formed the
basis for a burgeoning polymer industry. Synthetic polymers today find application in
nearly every industry and area of life. Polymers are widely used as adhesives and
lubricants, as well as structural components for products ranging from children's toys to
aircraft. Polymers such as poly(methyl methacrylate) find application as photoresist
materials used in semiconductor manufacturing and low-k dielectrics for use in high-
performance microprocessors. Future applications include flexible polymer-based
substrates for electronic displays and improved time-released and targeted drug
delivery.

Polymer science
Main article: Polymer science

Most polymer research may be categorized as polymer science, a sub-discipline of materials
science which includes researchers in chemistry (especially organic chemistry), physics, and
engineering. Polymer science may be roughly divided into two subdisciplines:

• Polymer chemistry or macromolecular chemistry, concerned with the chemical synthesis
and chemical properties of polymers.
• Polymer physics, concerned with the bulk properties of polymer materials and
engineering applications.

The field of polymer science is generally concerned with synthetic polymers, such as
plastics, or chemical treatment and modification of natural polymers. The study of
biological polymers, their structure, function, and method of synthesis is generally the
purview of biology, biochemistry, and biophysics. These disciplines share some of the
terminology familiar to polymer science, especially when describing the synthesis of
biopolymers such as DNA or polysaccharides. However, usage differences persist, such
as the practice of using the term macromolecule to describe large non-polymer
molecules and complexes of multiple molecular components, such as hemoglobin.
Substances with distinct biological function are rarely described in the terminology of
polymer science. For example, a protein is rarely referred to as a copolymer.

Polymer synthesis
Polymers are synthesized by three primary methods: organic synthesis in a laboratory
or factory, biological synthesis in living cells and organisms, or by chemical modification
of naturally occurring polymers.
Organic synthesis

Main article: Polymerization

In 1907, Leo Baekeland created the first completely synthetic polymer, Bakelite, by reacting
phenol and formaldehyde at precisely controlled temperature and pressure. Subsequent work
by Wallace Carothers in the 1920s demonstrated that polymers could be synthesized rationally
from their constituent monomers. The intervening years have shown significant developments
in rational polymer synthesis. Most commercially important polymers today are entirely
synthetic and produced in high volume using appropriately scaled organic synthetic techniques.
Laboratory synthetic methods are generally divided into two categories, condensation
polymerization and addition polymerization. However, some newer methods such as plasma
polymerization do not fit neatly into either category. Synthetic polymerization reactions may be
carried out with or without a catalyst. Efforts towards the rational synthesis of biopolymers via
laboratory synthetic methods, especially the artificial synthesis of proteins, are an area of
intense research.

Biological synthesis

Natural polymers and biopolymers formed in living cells may be synthesized by
enzyme-mediated processes, such as the formation of DNA catalyzed by DNA
polymerase. The synthesis of proteins involves multiple enzyme-mediated processes to
transcribe genetic information from the DNA and subsequently translate that information
to synthesize the specified protein. The protein may be modified further following
translation in order to provide appropriate structure and function.

Modification of natural polymers

Many commercially important polymers are synthesized by chemical modification of
naturally occurring polymers. Prominent examples include the reaction of nitric acid and
cellulose to form nitrocellulose and the formation of vulcanized rubber by heating natural
rubber in the presence of sulfur.

Polymer Structure and Properties


Types of polymer 'properties' can be broadly divided into several categories based upon
scale. At the nano-micro scale are properties that directly describe the chain itself.
These can be thought of as polymer structure. At an intermediate mesoscopic level are
properties that describe the morphology of the polymer matrix in space. At the
macroscopic level are properties that describe the bulk behavior of the polymer.
Structure

The structural properties of a polymer relate to the physical arrangement of monomers
along the backbone of the chain. Structure has a strong influence on the other
properties of a polymer. For example, a linear chain polymer may be soluble or
insoluble in water depending on whether it is composed of polar monomers (such as
ethylene oxide) or nonpolar monomers (such as styrene). On the other hand, two
samples of natural rubber may exhibit different durability even though their molecules
comprise the same monomers. Polymer scientists have developed terminology to
precisely describe both the nature of the monomers and their relative arrangement:

Monomer identity

The identity of the monomers comprising the polymer is generally the first and most
important attribute of a polymer. Polymer nomenclature is generally based upon the type
of monomers comprising the polymer. Polymers that contain only a single type of monomer
are known as homopolymers, while polymers containing a mixture of monomers are known
as copolymers. Poly(styrene), for example, is composed only of styrene monomers and is
therefore classified as a homopolymer. Ethylene-vinyl acetate, on the other hand, contains
more than one variety of monomer and is thus a copolymer. Some biological polymers are
composed of a variety of different but structurally related monomers, such as
polynucleotides composed of nucleotide subunits. A polymer molecule containing ionizable
subunits is known as a polyelectrolyte. An ionomer is a subclass of polyelectrolyte with a
low fraction of ionizable subunits.

Chain linearity

The simplest form of polymer molecule is a straight chain or linear polymer, composed of
a single main chain. The flexibility of an unbranched chain polymer is characterized by its
persistence length. A branched polymer molecule is composed of a main chain with one or
more substituent side chains or branches. Special types of branched polymers include star
polymers, comb polymers, and brush polymers. If the polymer contains a side chain that
has a different composition or configuration than the main chain, the polymer is called a
graft or grafted polymer. A cross-link suggests a branch point from which four or more
distinct chains emanate. A polymer molecule with a high degree of crosslinking is referred
to as a polymer network.[1] Sufficiently high crosslink concentrations may lead to the
formation of an 'infinite network', also known as a 'gel', in which the network of chains is
of unlimited extent - essentially all chains have linked into one molecule.[2]

Chain size

Polymer bulk properties may be strongly dependent on the size of the polymer chain. Like
any molecule, a polymer molecule's size may be described in terms of molecular weight or
mass. In polymers, however, the molecular mass may be expressed in terms of degree of
polymerization, essentially the number of monomer units which comprise the polymer. For
synthetic polymers, the molecular weight is expressed statistically to describe the
distribution of molecular weights in the sample. This is because almost all industrial
processes produce a distribution of polymer chain sizes. Examples of such statistics
include the number average molecular weight and the weight average molecular weight.
The ratio of these two values is the polydispersity index, commonly used to express the
"width" of the molecular weight distribution. The space occupied by a polymer molecule is
generally expressed in terms of radius of gyration or excluded volume.
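The number-average and weight-average molecular weights and the polydispersity index just mentioned are simple to compute from a chain-size distribution. The sketch below (Python) uses an invented distribution purely for illustration.

# counts[i] chains with molar mass masses[i] (g/mol) - invented numbers
masses = [10_000, 20_000, 50_000, 100_000]
counts = [200, 500, 250, 50]

total_chains = sum(counts)
total_mass = sum(n * M for n, M in zip(counts, masses))

Mn = total_mass / total_chains                                      # number average molecular weight
Mw = sum(n * M * M for n, M in zip(counts, masses)) / total_mass    # weight average molecular weight
PDI = Mw / Mn                                                       # polydispersity index

print(f"Mn = {Mn:.0f}, Mw = {Mw:.0f}, PDI = {PDI:.2f}")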
Monomer arrangement in copolymers

Monomers within a copolymer may be organized along the backbone in a variety of ways.

• Alternating copolymers possess regularly alternating monomer residues


• Periodic copolymers have monomer residue types arranged in a repeating
sequence
• Random copolymers have a random sequence of monomer residue types
• Statistical copolymers have monomer residues arranged according to a known
statistical rule
• Block copolymers have two or more homopolymer subunits linked by covalent
bonds. Block copolymers with two or three distinct blocks are called diblock
copolymers and triblock copolymers, respectively.

Tacticity in polymers with chiral centers

Main article: Tacticity

This property describes the relative stereochemistry of chiral centers in neighboring
structural units within a macromolecule. There are three types: isotactic, atactic, and
syndiotactic.

Morphological Properties

Crystallinity

When applied to polymers, the term crystalline has a somewhat ambiguous usage. In some
cases, the term crystalline finds identical usage to that used in conventional crystallography.
For example, the structure of a crystalline protein or polynucleotide, such as a sample
prepared for x-ray crystallography, may be defined in terms of a conventional unit cell
composed of one or more polymer molecules with cell dimensions of hundreds of angstroms
or more. A synthetic polymer may be described as crystalline if it contains regions of
three-dimensional ordering on atomic (rather than macromolecular) length scales, usually
arising from intramolecular folding and/or stacking of adjacent chains. Synthetic polymers
may consist of both crystalline and amorphous regions; the degree of crystallinity may be
expressed in terms of a weight fraction or volume fraction of crystalline material. Few
synthetic polymers are entirely crystalline.[3]
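One common way of estimating the degree of crystallinity of a semi-crystalline polymer is from its measured density, given the densities of the purely amorphous and purely crystalline forms. This is a standard approximation, sketched here (Python) with typical literature values for polyethylene rather than data from these notes.

def crystallinity_mass_fraction(rho, rho_amorphous, rho_crystal):
    """Mass fraction of crystalline material estimated from sample density."""
    return (rho_crystal * (rho - rho_amorphous)) / (rho * (rho_crystal - rho_amorphous))

# Polyethylene: amorphous ~0.855 g/cm^3, fully crystalline ~1.000 g/cm^3
print(f"{crystallinity_mass_fraction(0.945, 0.855, 1.000):.0%}")   # roughly 66% crystalline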
Bulk Properties

The bulk properties of a polymer are those most often of end-use interest. These are
the properties that dictate how the polymer actually behaves on a macroscopic scale.
Tensile strength

The tensile strength of a material quantifies how much stress the material will endure
before failing. This is very important in applications that rely upon a polymer's physical
strength or durability. For example, a rubber band with a higher tensile strength will hold
a greater weight before snapping. In general, tensile strength increases with polymer
chain length.

Young's Modulus of elasticity

This parameter quantifies the elasticity of the polymer. It is defined, for small strains, as
the ratio of rate of change of stress to strain. Like tensile strength, this is highly relevant
in polymer applications involving the physical properties of polymers, such as rubber
bands.

Transport Properties

Transport properties such as diffusivity relate to how rapidly molecules move through the
polymer matrix. These are very important in many applications of polymers for films and
membranes.

Pure component phase behavior

Melting point: The term "melting point" when applied to polymers suggests not a
solid-liquid phase transition but a transition from a crystalline or semi-crystalline phase to
a solid amorphous phase. Though abbreviated as simply "Tm", the property in question is
more properly called the "crystalline melting temperature". Among synthetic polymers,
crystalline melting is only discussed with regard to thermoplastics, as thermosetting
polymers decompose at high temperatures rather than melt.

Boiling point: The boiling point of a polymer is never defined, because polymers decompose
before reaching their theoretical boiling temperatures.

Glass transition temperature (Tg): A parameter of particular interest in synthetic polymer
manufacturing is the glass transition temperature (Tg), which describes the temperature at
which amorphous polymers undergo a second order phase transition from a rubbery, viscous
amorphous solid to a brittle, glassy amorphous solid. The glass transition temperature may
be engineered by altering the degree of branching or cross-linking in the polymer or by the
addition of plasticizer.[4]
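One widely used rule of thumb for how a plasticizer (or a second component in general) shifts Tg is the Fox equation. It is an approximation, and the parameter values below are rough, assumed figures for PVC and a generic low-Tg plasticizer, not data from these notes.

def fox_tg(w1, tg1_K, tg2_K):
    """Fox equation: 1/Tg = w1/Tg1 + w2/Tg2, with weight fractions and temperatures in kelvin."""
    w2 = 1.0 - w1
    return 1.0 / (w1 / tg1_K + w2 / tg2_K)

# Example: PVC (Tg ~ 354 K) blended with 20 wt% of a plasticizer (Tg ~ 190 K, assumed)
print(f"Estimated blend Tg ~ {fox_tg(0.80, 354.0, 190.0):.0f} K")   # ~302 K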
Polymer solution behavior: In general, polymeric mixtures are far less miscible than mixtures of small-molecule materials. This effect results from the fact that the driving force for mixing is usually entropic, not energetic. In other words, miscible materials usually form a solution not because their interaction with each other is more favorable than their self-interaction, but because of an increase in entropy, and hence a decrease in free energy, associated with increasing the amount of volume available to each component. This increase in entropy scales with the number of particles (or moles) being mixed. Since polymeric molecules are much larger and hence generally have much higher specific volumes than small molecules, the number of molecules involved in a polymeric mixture is far smaller than the number in a small-molecule mixture of equal volume. The energetics of mixing, on the other hand, are comparable on a per-volume basis for polymeric and small-molecule mixtures. This tends to increase the free energy of mixing for polymer solutions and thus makes solvation less favorable. Thus, concentrated solutions of polymers are far rarer than those of small molecules. In dilute solution, the properties of the polymer are characterized by the interaction between the solvent and the polymer. In a good solvent, the polymer appears swollen and occupies a large volume. In this scenario, intermolecular forces between the solvent and monomer subunits dominate over intramolecular interactions. In a bad or poor solvent, intramolecular forces dominate and the chain contracts. In the theta solvent, or the state of the polymer solution where the value of the second virial coefficient becomes 0, the intermolecular polymer-solvent repulsion exactly balances the intramolecular monomer-monomer attraction; under the theta condition (also called the Flory condition) the polymer behaves like an ideal random coil.
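
The three solvent regimes described above are often summarized by how the coil size scales with chain length. The sketch below uses the classical Flory scaling exponents (3/5 for a good solvent, 1/2 at the theta condition, 1/3 for a poor solvent); these exponents and the chain length are standard illustrative values, not taken from the text.

    # Minimal sketch: coil size R ~ N**nu in units of the segment length,
    # using the classical Flory exponents (an assumption for illustration).
    flory_exponents = {
        "good solvent (swollen coil)": 3.0 / 5.0,
        "theta solvent (ideal random coil)": 1.0 / 2.0,
        "poor solvent (collapsed globule)": 1.0 / 3.0,
    }

    N = 10_000  # hypothetical number of monomer segments in the chain
    for condition, nu in flory_exponents.items():
        size = N ** nu  # coil size in units of the segment length
        print(f"{condition:35s} R ~ N^{nu:.2f} = {size:7.1f} segment lengths")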

Polymer Structure/Property relationships

Polymer bulk properties are strongly dependent upon their structure and mesoscopic
behavior. A number of qualitative relationships between structure and properties are known.

Chain length: Increasing chain length tends to decrease chain mobility, increase strength and toughness, and increase the glass transition temperature (Tg). This is a result of the increase in chain interactions, such as van der Waals attractions and entanglements, that comes with increased chain length. These interactions tend to fix the individual chains more strongly in position and to resist deformation and matrix breakup, both at higher stresses and higher temperatures.

Branching: Branching of polymer chains also affects the bulk properties of polymers. Long-chain branches may increase polymer strength, toughness, and Tg due to an increase in the number of entanglements per chain. Random-length and atactic short chains, on the other hand, may reduce polymer strength due to disruption of organization. Short side chains may likewise reduce crystallinity due to disruption of the crystal structure. Reduced crystallinity may also be associated with increased transparency, because there are fewer small crystalline regions to scatter light. A good example of this effect is the range of physical attributes of polyethylene. High-density polyethylene (HDPE) has a very low degree of branching, is quite stiff, and is used in applications such as milk jugs. Low-density polyethylene (LDPE), on the other hand, has significant numbers of short branches, is quite flexible, and is used in applications such as plastic films. The branching index of the polymer is a parameter that characterizes the effect of long-chain branches on the size of a branched macromolecule in solution.

Chemical cross-linking: Cross-linking tends to increase Tg and to increase strength and toughness. Cross-linking consists of the formation of chemical bonds between chains. Among other applications, this process is used to strengthen rubbers in a process known as vulcanization, which is based on cross-linking by sulfur. Car tires, for example, are highly cross-linked in order to reduce the leaking of air out of the tire and to improve the tire's durability. Eraser rubber, on the other hand, is not cross-linked, which allows the rubber to flake off and prevents damage to the paper.

Inclusion of plasticizers: Inclusion of plasticizers tends to lower Tg and increase polymer flexibility. Plasticizers are generally small molecules that are chemically similar to the polymer and create gaps between polymer chains for greater mobility and reduced interchain interactions. A good example of the action of plasticizers is related to polyvinyl chloride, or PVC. uPVC, or unplasticized polyvinyl chloride, is used for things such as pipes; a pipe contains no plasticizers because it needs to remain strong and heat resistant. Plasticized PVC is used in clothing for its flexible quality. Plasticizers are also put in some types of cling film to make the polymer more flexible. (A rough numerical sketch of this effect follows at the end of this list.)

Degree of crystallinity: Increasing the degree of crystallinity tends to make a polymer more rigid. It can also lead to greater brittleness. Polymers with a degree of crystallinity approaching zero or one will tend to be transparent, while polymers with intermediate degrees of crystallinity will tend to be opaque due to light scattering by crystalline / glassy regions.
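
One common empirical rule for the plasticizer effect mentioned above is the Fox equation, 1/Tg(blend) = w1/Tg1 + w2/Tg2, with weight fractions w and temperatures in kelvin. The equation is a standard approximation and the numbers below are illustrative assumptions; neither is taken from the text.

    # Minimal sketch (assumption: the Fox equation holds; all values are
    # illustrative): how adding a low-Tg plasticizer lowers the blend Tg.
    def fox_tg(w_polymer, tg_polymer_K, tg_plasticizer_K):
        """Blend Tg (K) from weight fractions via the Fox equation."""
        w_plasticizer = 1.0 - w_polymer
        return 1.0 / (w_polymer / tg_polymer_K + w_plasticizer / tg_plasticizer_K)

    tg_polymer_K = 353.0      # ~80 C, a typical literature value for rigid PVC
    tg_plasticizer_K = 188.0  # hypothetical low-Tg plasticizer

    for w in (1.0, 0.9, 0.7, 0.5):
        tg_C = fox_tg(w, tg_polymer_K, tg_plasticizer_K) - 273.15
        print(f"polymer weight fraction {w:.1f}: Tg ~ {tg_C:5.1f} C")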

Standardized polymer nomenclature


There are multiple conventions for naming polymer substances. Many commonly used
polymers, such as those found in consumer products, are referred to by a common or
trivial name. The trivial name is assigned based on historical precedent or popular
usage rather than a standardized naming convention. Both the American Chemical
Society[5] and IUPAC[6] have proposed standardized naming conventions; the ACS
and IUPAC conventions are similar but not identical.[7] Examples of the difference
between the various naming conventions are given in the table below:

Common Name: Poly(ethylene oxide), or PEO
    ACS Name:   poly(oxyethylene)
    IUPAC Name: poly(oxyethylene)

Common Name: Poly(ethylene terephthalate), or PET
    ACS Name:   poly(oxy-1,2-ethanediyloxycarbonyl-1,4-phenylenecarbonyl)
    IUPAC Name: poly(oxyethyleneoxyterephthaloyl)

Common Name: Nylon
    ACS Name:   poly[imino(1-oxo-1,6-hexanediyl)]
    IUPAC Name: poly[imino(1-oxohexane-1,6-diyl)]

In both standardized conventions, the polymer names are intended to reflect the monomer(s) from which they are synthesized rather than the precise nature of the repeating subunit. For example, the polymer synthesized from the simple alkene ethene is called polyethylene, retaining the -ene suffix even though the double bond is removed during the polymerization process.

Chemical properties of polymers


The attractive forces between polymer chains play a large part in determining a
polymer's properties. Because polymer chains are so long, these interchain forces are
amplified far beyond the attractions between conventional molecules. Different side
groups on the polymer can lend the polymer to ionic bonding or hydrogen bonding
between its own chains. These stronger forces typically result in higher tensile strength
and melting points. The intermolecular forces in polymers can be affected by dipoles in
the monomer units. Polymers containing amide or carbonyl groups can form hydrogen
bonds between adjacent chains; the partially positively charged hydrogen atoms in N-H
groups of one chain are strongly attracted to the partially negatively charged oxygen
atoms in C=O groups on another. These strong hydrogen bonds, for example, result in
the high tensile strength and melting point of polymers containing urethane or urea
linkages. Polyesters have dipole-dipole bonding between the oxygen atoms in C=O
groups and the hydrogen atoms in H-C groups. Dipole bonding is not as strong as
hydrogen bonding, so a polyester's melting point and strength are lower than Kevlar's,
but polyesters have greater flexibility. Ethene, however, has no permanent dipole. The
attractive forces between polyethylene chains arise from weak van der Waals forces.
Molecules can be thought of as being surrounded by a cloud of negative electrons. As
two polymer chains approach, their electron clouds repel one another. This has the
effect of lowering the electron density on one side of a polymer chain, creating a slight
positive dipole on this side. This charge is enough to actually attract the second polymer
chain. Van der Waals forces are quite weak, however, so polyethene can have a lower
melting temperature compared to other polymers.

Polymer characterization
The characterization of a polymer requires several parameters which need to be
specified. This is because a polymer actually consists of a statistical distribution of
chains of varying lengths, and each chain consists of monomer residues which affect its
properties. A variety of lab techniques are used to determine the properties of polymers.
Techniques such as wide angle X-ray scattering, small angle X-ray scattering, and small
angle neutron scattering are used to determine the crystalline structure of polymers. Gel
permeation chromatography is used to determine the number average molecular
weight, weight average molecular weight, and polydispersity. FTIR, Raman and NMR
can be used to determine composition. Thermal properties such as the glass transition
temperature and melting point can be determined by differential scanning calorimetry
and dynamic mechanical analysis. Pyrolysis followed by analysis of the fragments is
one more technique for determining the possible structure of the polymer.
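
Gel permeation chromatography results are usually reported as the number-average and weight-average molecular weights mentioned above; the sketch below shows how those averages and the polydispersity are defined, using an entirely hypothetical distribution of chain masses.

    # Minimal sketch (hypothetical data): number-average (Mn) and weight-average
    # (Mw) molecular weights and the polydispersity index (PDI = Mw / Mn).
    # Each entry is (number of chains N_i, molar mass M_i in g/mol).
    distribution = [(1000, 10_000), (3000, 50_000), (500, 200_000)]

    total_chains = sum(n for n, m in distribution)
    total_mass = sum(n * m for n, m in distribution)

    Mn = total_mass / total_chains                             # sum(N*M) / sum(N)
    Mw = sum(n * m * m for n, m in distribution) / total_mass  # sum(N*M^2) / sum(N*M)

    print(f"Mn = {Mn:,.0f} g/mol, Mw = {Mw:,.0f} g/mol, PDI = {Mw / Mn:.2f}")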

Polymer degradation
Polymer degradation is a change in the properties - tensile strength, colour, shape, etc -
of a polymer or polymer based product under the influence of one or more
environmental factors such as heat, light or chemicals. It is often due to the hydrolysis of
the bonds connecting the polymer chain, which in turn leads to a decrease in the
molecular mass of the polymer. These changes may be undesirable, such as changes
during use, or desirable, as in biodegradation or deliberately lowering the molecular
mass of a polymer. Such changes occur primarily because of the effect of these factors
on the chemical composition of the polymer. The degradation of polymers to form smaller molecules may proceed by random scission or specific scission. The degradation of polyethylene occurs by random scission - that is, by a random breakage of the linkages (bonds) that hold the atoms of the polymer together. When heated above 450 °C it degrades to form a mixture of hydrocarbons. Other polymers - like poly(alpha-methylstyrene) - undergo 'specific' chain scission, with breakage occurring only at the chain ends; they literally unzip, or depolymerize, to become the constituent monomer. In a finished product such a change is to be prevented or delayed. However, the degradation process can be useful from the viewpoints of understanding the structure of a polymer or of recycling/reusing the polymer waste to prevent or reduce environmental
pollution. Polylactic acid and Polyglycolic acid, for example, are two polymers that are
useful for their ability to degrade under aqueous conditions. A copolymer of these
polymers is used for biomedical applications such as hydrolysable stitches that degrade
over time after they are applied to a wound. These materials can also be used for
plastics that will degrade over time after they are used and will therefore not remain as
litter.

Industry
Today there are primarily six commodity polymers in use, namely polyethylene,
polypropylene, polyvinyl chloride, polyethylene terephthalate, polystyrene and
polycarbonate. These make up nearly 98% of all polymers and plastics encountered in
daily life. Each of these polymers has its own characteristic modes of degradation and
resistances to heat, light and chemicals.

Cracking of polymers
Cracking is the process by which a polymer is divided into its subcomponents or
monomers. The resulting subcomponents are less viscous than the original polymer.

Polymer Chemistry

Classification of Polymers

The most common way of classifying polymers is to separate them into three groups: thermoplastics, thermosets, and elastomers.[5] The thermoplastics can be divided into two types: those that are crystalline and those that are amorphous.

Thermoplastics
Molecules in a thermoplastic are held together by relatively weak intermolecular forces, so that the material softens when exposed to heat and then returns to its original condition when cooled.[16] Thermoplastic polymers can be repeatedly softened by heating and then solidified by cooling, a process similar to the repeated melting and cooling of metals. Most linear and slightly branched polymers are thermoplastic. All the major thermoplastics are produced by chain polymerization.[17] Thermoplastics have a wide range of applications because they can be formed and reformed into so many shapes. Some examples are food packaging, insulation, automobile bumpers, and credit cards.

Thermosets
A thermosetting plastic, or thermoset, solidifies or "sets" irreversibly when heated. Thermosets cannot be reshaped by heating. Thermosets usually are three-dimensional networked polymers in which there is a high degree of cross-linking between polymer chains. The cross-linking restricts the motion of the chains and leads to a rigid material.

Figure: a simulated skeletal structure of a network polymer with a high cross-link density.

Thermosets are strong and durable. They primarily are used in automobiles and construction. They also are used to make toys, varnishes, boat hulls, and glues.[16]

Elastomers

Elastomers are rubbery polymers that can be stretched easily to several times their
unstretched length and which rapidly return to their original dimensions when the applied
stress is released.

Elastomers are cross-linked, but have a low cross-link density. The polymer chains still have
some freedom to move, but are prevented from permanently moving relative to each other by the
cross-links.

To stretch, the polymer chains must not be part of a rigid solid - either a glass or a crystal. An
elastomer must be above its glass transition temperature, Tg, and have a low degree of
crystallinity.

Rubber bands and other elastics are made of elastomers.

-------------------------------------------------
Properties of Magnetics
How Transformers, Chokes and Inductors Work, and Properties of Magnetics
The magnetic properties of a material are characterized by its hysteresis loop, which is a graph of flux density versus magnetization force, as shown below:

Figure: Hysteresis Loop

When an electric current flows through a conductor (copper wire), it generates a magnetic field. The magnetic field is strongest at the conductor surface and weakens as the distance from the conductor surface increases. The magnetic field is perpendicular to the direction of current flow, and its direction is given by the right-hand rule shown below.

When the conductor or wire is wound around a magnetic material (ferrite, iron, steel, MPP, sendust, High Flux, etc.) and current flows through the conductor, a flux is induced in the magnetic material. This flux is induced by the magnetic field generated by the current-carrying conductor. The magnetic field influences the magnetic material's domains and causes them to align in a certain direction.
The application of this magnetic field to the magnetic material is called the magnetization force. Magnetization force is measured in oersteds or in A/m (amperes per meter).

The symbol for magnetization force is "H".

Applying this magnetic field from the current-carrying conductor causes magnetic flux to form inside the magnetic material. The intensity of this flux is called the flux density; flux density is therefore defined as the flux per unit area.
Flux density is measured in gauss or tesla. 1 tesla is 10,000 gauss, so 1 mT is 10 gauss.
The symbol for flux density is "B".
Thus, the hysteresis loop is often called the BH curve. An understanding of the BH curve is extremely important in the design of transformers, chokes, coils and inductors.
For a square-wave application, as in an SMPS, the flux density B is given as:
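One commonly used form of this relation, assuming a square-wave voltage V (volts) at switching frequency f (Hz) applied across N turns on a core of cross-sectional area Ae (cm^2), is:

    B = (V x 10^8) / (4 x f x N x Ae)    [B in gauss]

This assumed form is consistent with the note that follows: B falls as the number of turns, the switching frequency, or the core area increases.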

Note that B is a function of voltage (the input voltage if calculated from the primary windings, and the output voltage if calculated from the secondary side). The flux will reduce if you increase the number of turns, increase the switching frequency, or increase the size of the core (increase the area).
The magnetization force or H is given as:
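One commonly used form of this relation, assuming N turns carrying a current I (amperes) around a core with magnetic path length le, is:

    H = (N x I) / le               [H in A/m, with le in meters]
    H = (0.4 x pi x N x I) / le    [H in oersteds, with le in cm]

This assumed form matches the note below: H is proportional to the winding current.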

Note that H is a function of input current. As the current swings from positive to negative the flux changes as well,
tracing the curve.
The permeability of a magnetic material is the ability of the material to increase the flux intensity or flux density
within the material when an electric current flows through a conductor wrapped around the magnetic materials
providing the magnetization force.
The higher the permeability, the higher the flux density from a given magnetization force.
If you look at the BH loop again, you will note that the permeability is actually the slope of the BH curve: the steeper the curve, the higher the permeability, as shown below.
As the magnetization force increases (that is, as the current through the conductor is increased), a point is reached where the magnetic material or core will saturate (see point "S" above on the curves). When that happens, any further increase in H will not increase the flux. More importantly, the permeability goes to zero, since the slope is now flat. In this situation the magnetic material or core will fail to work as a transformer, choke, or inductor.

So, in a choke or inductor design it is very important not to drive the core into saturation by increasing the current (AC or DC). Usually it is the DC current that saturates the core, since it is a constant current and biases the core at a certain flux level.

In a transformer design, you must make sure that the maximum AC current swings from positive to negative is well
below the saturation point.
Another way to get saturation is by increasing the flux density which is normally achieved by increasing the voltage (
see equation above).

From the BH curve, you can see that when the permeability is high ( slope is steep), the cores will go into saturation
faster. Conversely, when the permeability is low, the cores saturate at a much higher flux density.

Power ferrite cores normally have a permeability of about 2000, and they saturate faster than iron powder or MPP cores, whose permeability is around 125.
The typical saturation flux density of power ferrite material is under 4000 gauss (400 mT), whereas the saturation flux density of MPP material is 7,000 gauss; High Flux is 15,000 gauss and iron powder is 10,000 gauss.
A transformer is an energy transfer device, so you want minimum losses when you transfer energy from the primary side to the secondary side. This is why ferrite cores are used.
In a choke or inductor design, the application is energy storage, and there is always a DC current flowing through, so you want to use an iron powder, MPP, sendust or High Flux core. The saturation flux of these materials is also a lot higher, so a higher DC current can flow through.
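
As a rough illustration of the saturation check described above, the sketch below applies the square-wave flux-density relation assumed earlier to a hypothetical transformer design; all the component values are made up for illustration, and only the 4000 gauss ferrite limit comes from the text.

    # Minimal sketch (hypothetical design values): check that the peak flux
    # density stays below the core's saturation limit, using the square-wave
    # form B = V * 1e8 / (4 * f * N * Ae) assumed earlier (B in gauss, Ae in cm^2).
    V_in = 48.0         # volts applied across the primary
    f_sw = 100_000.0    # switching frequency, Hz
    N_pri = 20          # primary turns
    Ae_cm2 = 0.60       # core cross-sectional area, cm^2
    B_sat_gauss = 4000  # typical power-ferrite saturation limit quoted above

    B_peak = V_in * 1e8 / (4.0 * f_sw * N_pri * Ae_cm2)
    status = "OK" if B_peak < B_sat_gauss else "saturates"
    print(f"peak flux density ~ {B_peak:.0f} gauss ({status} vs {B_sat_gauss} gauss limit)")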

Core Losses

There are always energy losses in transformers and chokes. These energy losses will generate heat and cause
thermal problems. The losses in a transformer, chokes or inductors are from the following sources:

1. Hysteresis loss, from the sweeping of the flux from positive to negative; the area enclosed by the loop is the loss. Hysteresis loss is due to the material's intrinsic properties: the energy used to align and re-align the magnetic domains. You can lower this loss by using a better (more expensive) material such as TDK PC44, for example.
2. Eddy current loss, from circulating currents within the magnetic material caused by voltage differences induced by the changing flux inside the core itself. This loss is highly dependent upon the thickness of the walls of the core. The higher the switching frequency, the higher this eddy current loss will be.
3. Copper or winding loss. This is also dependent on the wire size, switching frequency, etc. Skin effect and proximity effect will contribute to this loss.


---------------------------------------------------------------------------------
SEMICONDUCTORS - NOTES

Semiconductor

A semiconductor is a solid whose electrical conductivity is in between that of a


conductor and that of an insulator, and can be controlled over a wide range, either
permanently or dynamically.[1] Semiconductors are tremendously important in
technology. Semiconductor devices, electronic components made of semiconductor
materials, are essential in modern electrical devices. Examples range from computers
to cellular phones to digital audio players. Silicon is used to create most semiconductors
commercially, but dozens of other materials are used as well.

Contents

• 1 Overview
• 2 Band structure
o 2.1 Energy–momentum dispersion
• 3 Carrier generation and recombination
• 4 Doping
o 4.1 Dopants
o 4.2 Carrier concentration
o 4.3 Effect on band structure
• 5 Preparation of semiconductor materials
• 6 See also
• 7 References
• 8 External links

Overview
Semiconductors are very similar to insulators. The two categories of solids differ
primarily in that insulators have larger band gaps — energies that electrons must
acquire to be free to move from atom to atom. In semiconductors at room temperature,
just as in insulators, very few electrons gain enough thermal energy to leap the band
gap from the valence band to the conduction band, which is necessary for electrons to
be available for electric current conduction. For this reason, pure semiconductors and
insulators, in the absence of applied electric fields, have roughly similar resistance. The
smaller bandgaps of semiconductors, however, allow for other means besides
temperature to control their electrical properties.
Semiconductors' intrinsic electrical properties are often permanently modified by
introducing impurities by a process known as doping. Usually, it is sufficient to
approximate that each impurity atom adds one electron or one "hole" (a concept to be
discussed later) that may flow freely. Upon the addition of a sufficiently large proportion
of impurity dopants, semiconductors will conduct electricity nearly as well as metals.
Depending on the kind of impurity, a doped region of semiconductor can have more
electrons or holes, and is named N-type or P-type semiconductor material, respectively.
Junctions between regions of N- and P-type semiconductors create electric fields, which cause electrons and holes to move away from them, and this effect is critical to semiconductor device operation. Also, a difference in the density of impurities produces a small electric field in the region, which is used to accelerate non-equilibrium electrons or holes.
In addition to permanent modification through doping, the resistance of semiconductors
is normally modified dynamically by applying electric fields. The ability to control
resistance/conductivity in regions of semiconductor material dynamically through the
application of electric fields is the feature that makes semiconductors useful. It has led
to the development of a broad range of semiconductor devices, like transistors and
diodes. Semiconductor devices that have dynamically controllable conductivity, such as
transistors, are the building blocks of integrated circuits devices like the microprocessor.
These "active" semiconductor devices (transistors) are combined with passive
components implemented from semiconductor material such as capacitors and
resistors, to produce complete electronic circuits.
In most semiconductors, when electrons lose enough energy to fall from the conduction
band to the valence band (the energy levels above and below the band gap), they often
emit light, a quantum of energy in the visible electromagnetic spectrum. This
photoemission process underlies the light-emitting diode (LED) and the semiconductor
laser, both of which are very important commercially. Conversely, semiconductor
absorption of light in photodetectors excites electrons to move from the valence band to the higher-energy conduction band, thus allowing light to be detected and its intensity measured. This is useful for fiber optic communications, and it provides the basis for obtaining energy from solar cells.
Semiconductors may be elemental materials such as silicon and germanium, or
compound semiconductors such as gallium arsenide and indium phosphide, or alloys
such as silicon germanium or aluminium gallium arsenide.
Band structure

Figure: band structure of a semiconductor, showing a full valence band and an empty conduction band.

For more details on this topic, see Electronic band structure.

Like other solids, the electrons in semiconductors can have energies only within certain bands (i.e. ranges of energy levels) between the energy of the ground state, corresponding to electrons tightly bound to the atomic nuclei of the material, and the free electron energy, which is the energy required for an electron to escape entirely from the material. The energy bands each correspond to a large number of discrete quantum states of the electrons, and most of the states with low energy (closer to the nucleus) are full, up to a particular band called the valence band. Semiconductors and insulators are distinguished from metals because in semiconductor materials the valence band is very nearly full under usual operating conditions, so that only a few electrons are available in the conduction band.
The ease with which electrons in a semiconductor can be excited from the valence band
to the conduction band depends on the band gap between the bands, and it is the size
of this energy bandgap that serves as an arbitrary dividing line (roughly 4 eV) between
semiconductors and insulators.
The electrons must move between states to conduct electric current, and so due to the
Pauli exclusion principle full bands do not contribute to the electrical conductivity.
However, as the temperature of a semiconductor rises above absolute zero, the range of energy values of the electrons in a given band is increased, and some electrons are likely to be found in energy states of the conduction band, which is the band
immediately above the valence band. The current-carrying electrons in the conduction
band are known as "free electrons", although they are often simply called "electrons" if
context allows this usage to be clear.
Electrons excited to the conduction band also leave behind electron holes, or
unoccupied states in the valence band. Both the conduction band electrons and the
valence band holes contribute to electrical conductivity. The holes themselves don't
actually move, but a neighbouring electron can move to fill the hole, leaving a hole at
the place it has just come from, and in this way the holes appear to move, and the holes
behave as if they were actual positively charged particles.
A covalent bond between neighboring atoms in the solid is roughly ten times stronger than the binding of a single electron to the atom, so freeing the electron does not destroy the crystal structure.
The notion of holes, which was introduced for semiconductors, can also be applied to
metals, where the Fermi level lies within the conduction band. With most metals the Hall
effect reveals electrons to be the charge carriers, but some metals have a mostly filled
conduction band, and the Hall effect reveals positive charge carriers, which are not the
ion-cores, but holes. Contrast this to some conductors like solutions of salts, or plasma.
In the case of a metal, only a small amount of energy is needed for the electrons to find
other unoccupied states to move into, and hence for current to flow. Sometimes even in
this case it may be said that a hole was left behind, to explain why the electron does not
fall back to lower energies: It cannot find a hole. In the end in both materials electron-
phonon scattering and defects are the dominant causes for resistance.
Figure: the Fermi-Dirac distribution. States with energy ε below the Fermi energy (here µ) have a higher probability n of being occupied, and those above are less likely to be occupied; smearing of the distribution increases with temperature.

The energy distribution of the electrons determines which of the states are filled and
which are empty. This distribution is described by Fermi-Dirac statistics. The distribution
is characterized by the temperature of the electrons, and the Fermi energy or Fermi
level. Under absolute zero conditions the Fermi energy can be thought of as the energy
up to which available electron states are occupied. At higher temperatures, the Fermi
energy is the energy at which the probability of a state being occupied has fallen to 0.5.
The dependence of the electron energy distribution on temperature also explains why
the conductivity of a semiconductor has a strong temperature dependency, as a
semiconductor operating at lower temperatures will have fewer available free electrons
and holes able to do the work.

Energy–momentum dispersion

In the preceding description an important fact is ignored for the sake of simplicity: the
dispersion of the energy. The reason that the energies of the states are broadened into
a band is that the energy depends on the value of the wave vector, or k-vector, of the
electron. The k-vector, in quantum mechanics, is the representation of the momentum
of a particle.
The dispersion relationship determines the effective mass, m*, of electrons or holes in the semiconductor, according to the formula noted below. The effective mass is important as it affects many of the electrical properties of the semiconductor, such as the electron or hole mobility, which in turn influences the diffusivity of the charge carriers and the electrical conductivity of the semiconductor.
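A standard form of this relation, given here for reference, defines the effective mass through the curvature of the energy band:

    m* = hbar^2 / (d^2 E / d k^2)

so a sharply curved band corresponds to a light carrier and a nearly flat band to a heavy one.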
Typically the effective mass of electrons and holes are different. This affects the relative
performance of p-channel and n-channel IGFETs, for example (Muller & Kamins
1986:427).
The top of the valence band and the bottom of the conduction band might not occur at
that same value of k. Materials with this situation, such as silicon and germanium, are
known as indirect bandgap materials. Materials in which the band extrema are aligned
in k, for example gallium arsenide, are called direct bandgap semiconductors. Direct
gap semiconductors are particularly important in optoelectronics because they are much
more efficient as light emitters than indirect gap materials.

Carrier generation and recombination


For more details on this topic, see Carrier generation and recombination.

When ionizing radiation strikes a semiconductor, it may excite an electron out of its
energy level and consequently leave a hole. This process is known as electron–hole
pair generation. Electron-hole pairs are constantly generated from thermal energy as
well, in the absence of any external energy source.
Electron-hole pairs are also apt to recombine. Conservation of energy demands that
these recombination events, in which an electron loses an amount of energy larger than
the band gap, be accompanied by the emission of thermal energy (in the form of
phonons) or radiation (in the form of photons).
In the steady state, the generation and recombination of electron–hole pairs are in
equipoise. The number of electron-hole pairs in the steady state at a given temperature
is determined by quantum statistical mechanics. The precise quantum mechanical
mechanisms of generation and recombination are governed by conservation of energy
and conservation of momentum.
As the probability that electrons and holes meet is proportional to the product of their concentrations, that product is nearly constant in the steady state at a given temperature, provided that there is no significant electric field (which might "flush" carriers of both types, or move them in from neighbouring regions that contain more of them) and no externally driven pair generation. The product is a function of temperature, as the probability of getting enough thermal energy to produce a pair increases with temperature; it is approximately proportional to exp(-Eg/kT), where Eg is the band gap, k is Boltzmann's constant and T is the absolute temperature.
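As a rough worked number (taking the silicon band gap of about 1.12 eV quoted later in these notes, and kT ≈ 0.0259 eV at 300 K): exp(-1.12 / 0.0259) ≈ 2 x 10^-19, which is why the equilibrium electron-hole population in silicon at room temperature is such a tiny fraction of the available states.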
The probability of meeting is increased by carrier traps – impurities or dislocations which
can trap an electron or hole and hold it until a pair is completed. Such carrier traps are
sometimes purposely added to reduce the time needed to reach the steady state.

Doping
For more details on this topic, see Doping (semiconductor).

The property of semiconductors that makes them most useful for constructing electronic
devices is that their conductivity may easily be modified by introducing impurities into
their crystal lattice. The process of adding controlled impurities to a semiconductor is
known as doping. The amount of impurity, or dopant, added to an intrinsic (pure)
semiconductor varies its level of conductivity. Doped semiconductors are often referred
to as extrinsic.

Dopants

The materials chosen as suitable dopants depend on the atomic properties of both the
dopant and the material to be doped. In general, dopants that produce the desired
controlled changes are classified as either electron acceptors or donors. A donor atom
that activates (that is, becomes incorporated into the crystal lattice) donates weakly-
bound valence electrons to the material, creating excess negative charge carriers.
These weakly-bound electrons can move about in the crystal lattice relatively freely and
can facilitate conduction in the presence of an electric field. (The donor atoms introduce
some states under, but very close to the conduction band edge. Electrons at these
states can be easily excited to conduction band, becoming free electrons, at room
temperature.) Conversely, an activated acceptor produces a hole. Semiconductors
doped with donor impurities are called n-type, while those doped with acceptor
impurities are known as p-type. The n and p type designations indicate which charge
carrier acts as the material's majority carrier. The opposite carrier is called the minority
carrier, which exists due to thermal excitation at a much lower concentration compared
to the majority carrier.
For example, the pure semiconductor silicon has four valence electrons. In silicon, the
most common dopants are IUPAC group 13 (commonly known as group III) and group
15 (commonly known as group V) elements. Group 13 elements all contain three
valence electrons, causing them to function as acceptors when used to dope silicon.
Group 15 elements have five valence electrons, which allows them to act as a donor.
Therefore, a silicon crystal doped with boron creates a p-type semiconductor whereas
one doped with phosphorus results in an n-type material.
Carrier concentration

The concentration of dopant introduced to an intrinsic semiconductor determines its carrier concentration and indirectly affects many of its electrical properties. The most important factor that doping directly affects is the material's carrier concentration. In an intrinsic semiconductor under thermal equilibrium, the concentrations of electrons and holes are equal. That is, n = p = ni, where n is the concentration of conducting electrons, p is the electron hole concentration, and ni is the material's intrinsic carrier concentration. The intrinsic carrier concentration varies between materials and is dependent on temperature. Silicon's ni, for example, is roughly 1×10^10 cm^-3 at 300 kelvin (room temperature).
In general, an increase in doping concentration affords an increase in conductivity due
to the higher concentration of carriers available for conduction. Degenerately (very
highly) doped semiconductors have conductivity levels comparable to metals and are
often used in modern integrated circuits as a replacement for metal. Often superscript plus and minus symbols are used to denote relative doping concentration in semiconductors. For example, n+ denotes an n-type semiconductor with a high, often degenerate, doping concentration. Similarly, p− would indicate a very lightly doped p-type material. It is useful to note that even degenerate levels of doping imply low concentrations of impurities with respect to the base semiconductor. In crystalline intrinsic silicon, there are approximately 5×10^22 atoms/cm³. Doping concentration for silicon semiconductors may range anywhere from 10^13 cm^-3 to 10^18 cm^-3. Doping concentration above about 10^18 cm^-3 is considered degenerate at room temperature. Degenerately doped silicon contains a proportion of impurity to silicon in the order of parts per thousand. This proportion may be reduced to parts per billion in very lightly doped silicon. Typical concentration values fall somewhere in this range and are tailored to produce the desired properties in the device that the semiconductor is intended for.
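
A minimal numerical sketch of these relations, assuming complete dopant ionization and the nearly constant electron-hole product (np = ni^2) discussed earlier; the donor concentration chosen is hypothetical, while the silicon ni value is the one quoted above.

    # Minimal sketch (assumes full ionization and np = ni**2 at equilibrium):
    # majority and minority carrier concentrations for n-type silicon at 300 K.
    ni = 1.0e10   # intrinsic carrier concentration of Si at 300 K, cm^-3 (from the text)
    Nd = 1.0e16   # hypothetical donor doping concentration, cm^-3

    n = Nd          # majority electrons ~ donor concentration (since Nd >> ni)
    p = ni**2 / n   # minority holes from the mass-action law np = ni^2

    print(f"n = {n:.1e} cm^-3 (majority), p = {p:.1e} cm^-3 (minority)")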

Effect on band structure

Figure: band diagram of a p+n junction. The band bending is a result of the positioning of the Fermi levels in the p+ and n sides.
Doping a semiconductor crystal introduces allowed energy states within the band gap
but very close to the energy band that corresponds with the dopant type. In other words,
donor impurities create states near the conduction band while acceptors create states
near the valence band. The gap between these energy states and the nearest energy
band is usually referred to as dopant-site bonding energy or EB and is relatively small.
For example, the EB for boron in silicon bulk is 0.045 eV, compared with silicon's band
gap of about 1.12 eV. Because EB is so small, it takes little energy to ionize the dopant
atoms and create free carriers in the conduction or valence bands. Usually the thermal
energy available at room temperature is sufficient to ionize most of the dopant.
Dopants also have the important effect of shifting the material's Fermi level towards the
energy band that corresponds with the dopant with the greatest concentration. Since the
Fermi level must remain constant in a system in thermodynamic equilibrium, stacking
layers of materials with different properties leads to many useful electrical properties.
For example, the p-n junction's properties are due to the energy band bending that
happens as a result of lining up the Fermi levels in contacting regions of p-type and n-
type material.
This effect is shown in a band diagram. The band diagram typically indicates the
variation in the valence band and conduction band edges versus some spatial
dimension, often denoted x. The Fermi energy is also usually indicated in the diagram.
Sometimes the intrinsic Fermi energy, Ei, which is the Fermi level in the absence of
doping, is shown. These diagrams are useful in explaining the operation of many kinds
of semiconductor devices.

Preparation of semiconductor materials


Semiconductors with predictable, reliable electronic properties are necessary for mass
production. The level of chemical purity needed is extremely high because the presence
of impurities even in very small proportions can have large effects on the properties of
the material. A high degree of crystalline perfection is also required, since faults in
crystal structure (such as dislocations, twins, and stacking faults) interfere with the
semiconducting properties of the material. Crystalline faults are a major cause of
defective semiconductor devices. The larger the crystal, the more difficult it is to achieve
the necessary perfection. Current mass production processes use crystal ingots
between four and twelve inches (300 mm) in diameter which are grown as cylinders and
sliced into wafers.
Because of the required level of chemical purity and the perfection of the crystal
structure which are needed to make semiconductor devices, special methods have
been developed to produce the initial semiconductor material. A technique for achieving
high purity includes growing the crystal using the Czochralski process. An additional
step that can be used to further increase purity is known as zone refining. In zone
refining, part of a solid crystal is melted. The impurities tend to concentrate in the melted
region, while the desired material recrystallizes, leaving the solid material more pure and
with fewer crystalline faults.
In manufacturing semiconductor devices involving heterojunctions between different
semiconductor materials, the lattice constant, which is the length of the repeating
element of the crystal structure, is important for determining the compatibility of
materials.

See also

Electronics Portal

• Effective mass
• Electrical engineering
• Electron mobility
• Exciton
• Semiconductor device fabrication
• List of semiconductor materials
• Semiconductor industry
• Quantum tunneling
• Semiconductor device
• Solid state chemistry
• Spintronics
• Wide bandgap semiconductors

References
1. ^ International Union of Pure and Applied Chemistry. "semiconductor".
Compendium of Chemical Terminology Internet edition.

• Muller, Richard S.; Kamins, Theodore I. (1986). Device Electronics for Integrated Circuits, 2nd ed. New York: Wiley. ISBN 0-471-88758-7.
• Sze, Simon M. (1981). Physics of Semiconductor Devices (2nd ed.). John Wiley
and Sons (WIE). ISBN 0-471-05661-8.
• Turley, Jim (2002). The Essential Guide to Semiconductors. Prentice Hall PTR.
ISBN 0-13-046404-X.
• Yu, Peter Y.; Cardona, Manuel (2004). Fundamentals of Semiconductors :
Physics and Materials Properties. Springer. ISBN 3-540-41323-5.

External links
• iLocus Report on the Communications Chip Market
• Howstuffworks' semiconductor page
• Semiconductor Concepts at Hyperphysics
• Semiconductor OneSource Hall of Fame, Glossary
• Principles of Semiconductor Devices by Bart Van Zeghbroeck, University of
Colorado. An online textbook
• US Navy Electrical Engineering Training Series
• Institute of Physics "Semiconductor Science and Technology" Journal
• NSM-Archive Physical Properties of Semiconductors
• SiliconFarEast.com General and manufacturing information


--------------------------------------------------------------------------------

List of semiconductor materials

Semiconductor materials are insulators at absolute zero temperature that conduct


electricity in a limited way at room temperature (see also Semiconductor). The defining
property of a semiconductor material is that it can be doped with impurities that alter its
electronic properties in a controllable way.
Because of their application in devices like transistors (and therefore computers) and
lasers, the search for new semiconductor materials and the improvement of existing
materials is an important field of study in materials science.
The most commonly used semiconductor materials are crystalline inorganic solids.
These materials can be classified according to the periodic table groups from which
their constituent atoms come.
The group III nitrides have high tolerance to ionizing radiation, making them suitable for
radiation-hardened electronics.

List of semiconductor materials


• Group IV elemental semiconductors
o Diamond (C)
o Silicon (Si)
o Germanium (Ge)

• Group IV compound semiconductors


o Silicon carbide (SiC)
o Silicon germanide (SiGe)

• III-V semiconductors
o Aluminium antimonide (AlSb)
o Aluminium arsenide (AlAs)
o Aluminium nitride (AlN)
o Aluminium phosphide (AlP)
o Boron nitride (BN)
o Boron phosphide (BP)
o Boron arsenide (BAs)
o Gallium antimonide (GaSb)
o Gallium arsenide (GaAs)
o Gallium nitride (GaN)
o Gallium phosphide (GaP)
o Indium antimonide (InSb)
o Indium arsenide (InAs)
o Indium nitride (InN)
o Indium phosphide (InP)

• III-V ternary semiconductor alloys


o Aluminium gallium arsenide (AlGaAs, AlxGa1-xAs)
o Indium gallium arsenide (InGaAs, InxGa1-xAs)
o Aluminium indium arsenide (AlInAs)
o Aluminium indium antimonide (AlInSb)
o Gallium arsenide nitride (GaAsN)
o Gallium arsenide phosphide (GaAsP)
o Aluminium gallium nitride (AlGaN)
o Aluminium gallium phosphide (AlGaP)
o Indium gallium nitride (InGaN)
o Indium arsenide antimonide (InAsSb)
o Indium gallium antimonide (InGaSb)
• III-V quaternary semiconductor alloys
o Aluminium gallium indium phosphide (AlGaInP, also InAlGaP, InGaAlP,
AlInGaP)
o Aluminium gallium arsenide phosphide (AlGaAsP)
o Indium gallium arsenide phosphide (InGaAsP)
o Aluminium indium arsenide phosphide (AlInAsP)
o Aluminium gallium arsenide nitride (AlGaAsN)
o Indium gallium arsenide nitride (InGaAsN)
o Indium aluminium arsenide nitride (InAlAsN)

• III-V quinary semiconductor alloys


o Gallium indium nitride arsenide antimonide (GaInNAsSb)

• II-VI semiconductors
o Cadmium selenide (CdSe)
o Cadmium sulfide (CdS)
o Cadmium telluride (CdTe)
o Zinc oxide (ZnO)
o Zinc selenide (ZnSe)
o Zinc sulfide (ZnS)
o Zinc telluride (ZnTe)

• II-VI ternary alloy semiconductors


o Cadmium zinc telluride (CdZnTe, CZT)
o Mercury cadmium telluride (HgCdTe)
o Mercury zinc telluride (HgZnTe)
o Mercury zinc selenide (HgZnSe)

• I-VII semiconductors
o Cuprous chloride (CuCl)

• IV-VI semiconductors
o Lead selenide (PbSe)
o Lead sulfide (PbS)
o Lead telluride (PbTe)
o Tin sulfide (SnS)
o Tin telluride (SnTe)

• IV-VI ternary semiconductors


o lead tin telluride (PbSnTe)
o Thallium tin telluride (Tl2SnTe5)
o Thallium germanium telluride (Tl2GeTe5)

• V-VI semiconductors
o Bismuth telluride (Bi2Te3)
• II-V semiconductors
o Cadmium phosphide (Cd3P2)
o Cadmium arsenide (Cd3As2)
o Cadmium antimonide (Cd3Sb2)
o Zinc phosphide (Zn3P2)
o Zinc arsenide (Zn3As2)
o Zinc antimonide (Zn3Sb2)

• Layered semiconductors
o Lead(II) iodide (PbI2)
o Molybdenum disulfide (MoS2)
o Gallium Selenide (GaSe)
o Tin sulfide (SnS)
o Bismuth Sulfide (Bi2S3)

• Others
o Copper indium gallium selenide (CIGS)
o Platinum silicide (PtSi)
o Bismuth(III) iodide (BiI3)
o Mercury(II) iodide (HgI2)
o Thallium(I) bromide (TlBr)

• Miscellaneous oxides
o Titanium dioxide: anatase (TiO2)
o Copper(I) oxide (Cu2O)
o Copper(II) oxide (CuO)
o Uranium dioxide (UO2)
o Uranium trioxide (UO3)

• Organic semiconductors

• Magnetic semiconductors
SLIP - TWINNING

Plastic Deformation
When a material is stressed below its elastic limit, the resulting deformation or strain is
temporary. Removal of stress results in a gradual return of the object to its original dimensions.
When a material is stressed beyond its elastic limit, plastic or permanent deformation takes place,
and it will not return to its original shape by the application of force alone. The ability of a metal
to undergo plastic deformation is probably its most outstanding characteristic in comparison with
other materials. All shaping operations such as stamping, pressing, spinning, rolling, forging,
drawing, and extruding involve plastic deformation of metals. Various machining operations
such as milling, turning, sawing, and punching also involve plastic deformation. Plastic
deformation may take place by: slip, twinning, or a combination of slip and twinning.

Deformation by slip: If a single crystal of a metal is stressed in tension beyond its elastic limit, it elongates slightly, a step appears on the surface indicating relative displacement of one part of the crystal with respect to the rest, and the elongation stops. Increasing the load will cause another step. It is as if neighboring thin sections of the crystal had slipped past one another like cards sliding in a deck. Each successive elongation requires a higher stress and results in the appearance of another step, which is actually the intersection of a slip plane with the surface of the crystal. Progressive increase of the load eventually causes the material to fracture.
Slip occurs in directions in which the atoms are most closely packed, since this requires the
least amount of energy.

Figure 1. The effect of slip on the lattice structure.


Figure 1 shows that when the plastic deformation is due to slip, the atoms move a whole
interatomic space (moving from one corner to another corner of the unit cell). This means
that the overall lattice structure remains the same. Slip is observed as thin lines under the microscope, and these lines can be removed by polishing.

Figure 2. Slip appears as thin lines under the microscope.

Figure 3. Slip lines in copper.


Deformation by Twinning: When mechanical deformation is created by twinning, the
lattice structure changes. The atoms move only a fraction of an interatomic space and this
leads to a rearrangement of the lattice structure. Twinning is observed as wide bands
under the microscope. These wide bands cannot be removed by polishing.
Two kinds of twins are of interest to metallurgists:
1. Deformation or mechanical twins, most prevalent in close-packed hexagonal metals (magnesium, zinc, and iron with a large amount of ferrite).
2. Annealing twins, most prevalent in F.C.C. (face-centered cubic) metals (aluminum, copper, brass, and iron with austenite). These metals have been previously worked and heat treated. The twins are formed because of a change in the normal growth mechanism.
Figure 4. The effect of twinning on the lattice structure.

Figure 5. Twin bands


Figure 6. Twin bands in zinc.
Slip vs. Twinning:
                         Slip                                 Twinning

Atomic movement          Atoms move a whole number of         Atoms move a fractional
                         atomic spacings.                     atomic spacing.

Microscopic appearance   Thin lines.                          Wide bands or broad lines.

Lattice orientation      No change in lattice orientation.    Lattice orientation changes.
                         The steps are only visible on the    Surface polishing will not
                         surface of the crystal and can be    destroy the evidence of
                         removed by polishing; after          twinning.
                         polishing there is no evidence
                         of slip.
Specific heat capacity
Specific heat capacity, also known simply as specific heat, is the measure of the heat energy
required to increase the temperature of a unit quantity of a substance by a certain temperature
interval.

The term originated primarily through the work of the Scottish physician Joseph Black, who conducted various heat measurements and used the phrase "capacity for heat."[1] More heat
energy is required to increase the temperature of a substance with high specific heat capacity
than one with low specific heat capacity. For instance, eight times the heat energy is required to
increase the temperature of an ingot of magnesium as is required for a lead ingot of the same
mass.

The specific heat of virtually any substance can be measured, including chemical elements,
compounds, alloys, solutions, and composites.
The symbols for specific heat capacity are either C or c depending on how the quantity of a
substance is measured (see Symbols and standards below for usage rules).

In the measurement of physical properties, the term “specific” means the measure is a bulk
property (an intensive property), wherein the quantity of substance must be specified. For
example, the heat energy required to raise water’s temperature one kelvin (equal to one degree
Celsius) is approximately 4.2 joules per gram — the gram being the specified quantity.
Scientifically, this measure would be expressed as c = 4.2 J g–1 K–1.

Basic metrics of specific heat capacity

Unit quantity When measuring specific heat capacity in science and engineering, the unit
quantity of a substance is often in terms of mass: either the gram or kilogram, both of which are
SI units. Especially in chemistry, though, the unit quantity of specific heat capacity may also be
the mole, which is a certain number of molecules or atoms.
When the unit quantity is the mole, the term molar heat capacity may also be used to more
explicitly describe the measure.

Heat energy

The unit of measure for heat energy is usually the SI unit joule. The calorie however, is still often
used in chemistry.

Temperature interval
The temperature interval in science, engineering, and chemistry is usually one kelvin or degree
Celsius (both of which have the same magnitude).
Other units

In the U.S., other units of measure for specific heat capacity are typically used in disciplines such
as construction and civil engineering. There, the mass quantity is often the pound-mass, the unit
of heat energy is the British thermal unit, and the temperature interval is the degree Fahrenheit.

Basic Equations

• The equation relating heat energy to specific heat capacity, where the unit quantity is in
terms of mass is:

Q = m c ∆T where Q is the heat energy put into or taken out of the substance, m is the mass of
the substance, c is the specific heat capacity, and ∆T is the temperature differential.

• Where the unit quantity is in terms of moles, the equation relating heat energy to specific
heat capacity (also known as molar heat capacity) is

Q = n C ∆T where Q is the heat energy put into or taken out of the substance, n is the number of
moles, C is the specific heat capacity, and ∆T is the temperature differential.
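
As a quick worked example of the mass-based equation above, the sketch below heats a hypothetical quantity of water using the roughly 4.2 J g^-1 K^-1 value for liquid water quoted in this section; the mass and temperature rise are made-up illustrative numbers.

    # Minimal sketch: heat energy from Q = m * c * dT.
    m_grams = 250.0   # hypothetical mass of water, g
    c_water = 4.2     # specific heat capacity of liquid water, J g^-1 K^-1 (from the text)
    dT_kelvin = 60.0  # hypothetical temperature rise, K

    Q_joules = m_grams * c_water * dT_kelvin
    print(f"Q = {Q_joules:.0f} J (about {Q_joules / 1000:.0f} kJ)")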

Factors that affect specific heat capacity

Degrees of freedom: Molecules are quite different from the monatomic gases like helium and
argon. With monatomic gases, heat energy comprises only translational motions.

Translational motions are ordinary, whole-body movements in 3D space whereby particles move about and exchange energy in collisions, like rubber balls in a vigorously shaken container.

These simple movements in the three X, Y, and Z axis dimensions of space mean that monatomic atoms have three translational degrees of freedom. Molecules, however, have various internal vibrational and rotational degrees of freedom because they are complex objects; they are a population of atoms that can move about within the molecule in different ways. Heat energy is stored in these internal motions.

For instance, nitrogen, which is a diatomic molecule, has five active degrees of freedom: the
three comprising translational motion plus two rotational degrees of freedom internally. Not
surprisingly, nitrogen has five-thirds the constant-volume molar heat capacity of the monatomic gases.[2] See Thermodynamic temperature for more information on translational motions, kinetic (heat) energy, and their relationship to temperature.
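As a worked check of that five-thirds ratio, using the classical equipartition estimate of R/2 per active degree of freedom (R ≈ 8.314 J mol^-1 K^-1): a monatomic gas has Cv = (3/2)R ≈ 12.5 J mol^-1 K^-1, nitrogen has Cv ≈ (5/2)R ≈ 20.8 J mol^-1 K^-1, and (5/2)/(3/2) = 5/3.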
• Molar mass: When the specific heat capacity, c, of a material is measured (lowercase c
means the unit quantity is in terms of mass), different values arise because different
substances have different molar masses (essentially, the weight of the individual atoms or
molecules). Heat energy arises, in part, due to the number of atoms or molecules that are
vibrating. If a substance has a lighter molar mass, then each gram of it has more atoms or
molecules available to store heat energy. This is why hydrogen—the lightest substance
there is—has such a high specific heat capacity on a gram basis; one gram of it contains a
relatively great many molecules. If specific heat capacity is measured on a molar basis
(uppercase C), the differences between substances are less pronounced and hydrogen's
molar heat capacity is quite unremarkable. Conversely, for molecular-based substances
(which also absorb heat into their internal degrees of freedom), massive, complex
molecules with high atomic count — like gasoline — can store a great deal of energy per
mole and yet, be quite unremarkable on a mass basis. Since the bulk density of a solid
chemical element is strongly related to its molar mass, generally speaking, there is a
strong, inverse correlation between a solid’s density and its cp (constant-pressure specific
heat capacity on a mass basis). A large ingot of a low-density solid tends to absorb more heat energy than a small, dense ingot of the same mass, because the former usually has proportionally more atoms. Thus, generally speaking, there is a close correlation between
the size of a solid chemical element and its total heat capacity (see Volumetric heat
capacity). There are however, many departures from the general trend. For instance,
arsenic, which is only 14.5% less dense than antimony, has nearly 59% more specific
heat capacity on a mass basis. In other words; even though an ingot of arsenic is only
about 17% larger than an antimony one of the same mass, it absorbs about 59% more
heat energy for a given temperature rise.

• Hydrogen bonds: Hydrogen-containing polar molecules like ethanol, ammonia, and


water have powerful, intermolecular hydrogen bonds when in their liquid phase. These
bonds provide yet another place where kinetic (heat) energy is stored.

Symbols and standards


When mass is the unit quantity, the symbol for specific heat capacity is lowercase c. When the
mole is the unit quantity, the symbol is uppercase C. Alternatively—especially in chemistry as
opposed to engineering—the uppercase version for specific heat, C, may be used in combination
with a suffix representing enthalpy (symbol: either H or h); specifically, when the mole is the
unit quantity, the enthalpy suffix is uppercase H and when mass is the unit quantity, the suffix is
lowercase h.

The modern SI units for measuring specific heat capacity are either the joule per gram-kelvin (J
g–1 K–1) or the joule per mole-kelvin (J mol–1 K–1). The various SI prefixes can create
variations of these units (such as kJ kg–1 K–1 and kJ mol–1 K–1). Symbols for alternative units
are as follows: pounds-mass (symbol: lb) for quantity, calories (symbol: cal) and British thermal
units (symbol: BTU) for energy, and degree Fahrenheit (symbol: °F) for the increment of
temperature.
There are two distinctly different experimental conditions under which specific heat capacity is measured, and these are denoted with a subscripted suffix modifying the symbols C or c. The specific heat of a substance is typically measured under constant pressure (symbols: Cp or cp). However, fluids (gases and liquids) are typically also measured at constant volume (symbols: Cv or cv). Measurements under constant pressure produce greater values than those at constant volume because expansion work must be performed in the former. This difference is particularly great in gases, where values under constant pressure are typically 30% to 66.7% greater than those at constant volume.
Thus, the symbols for specific heat capacity are as follows:
                         Under constant pressure    At constant volume
Unit quantity = mole     Cp or CpH                  Cv or CvH
Unit quantity = mass     cp or Cph                  cv or Cvh

The specific heat capacities of substances comprising molecules (distinct from the monatomic
gases) are not fixed constants and vary somewhat depending on temperature. Accordingly, the
temperature at which the measurement is made is usually also specified. Examples of two
common ways to cite the specific heat of a substance are as follows:
Water (liquid): cp = 4.1855 J g–1 K–1 (15 °C), and…
Water (liquid): CvH = 74.539 J mol–1 K–1 (25 °C)
The pressure at which specific heat capacity is measured is especially important for gases and
liquids. The standard pressure was once virtually always “one standard atmosphere” which is
defined as the sea level–equivalent value of precisely 101.325 kPa (760 Torr). In the case of
water, 101.325 kPa is still typically used due to water’s unique role in temperature and physical
standards. However, in 1982, the International Union of Pure and Applied Chemistry (IUPAC)
recommended that for the purposes of specifying the physical properties of substances, “the
standard pressure” should be defined as precisely 100 kPa (≈750.062 Torr).[3] Besides being a
round number, this had a very practical effect: relatively few people live and work at precisely
sea level; 100 kPa equates to the mean pressure at an altitude of about 112 meters (which is
closer to the 194-meter worldwide median altitude of human habitation). Accordingly, the pressure at which specific heat capacity is measured should be specified, since one cannot simply assume its value. An example of how pressure is specified is as follows:
Water (gas): CvH = 28.03 J mol–1 K–1 (100 °C, 101.325 kPa)
Note in the above specification that the experimental condition is at constant volume. Still, the
pressure within this fixed volume is controlled and specified.

Heat capacity
Heat capacity (symbol: C) — as distinct from specific heat capacity — is the measure of the
heat energy required to increase the temperature of an object by a certain temperature interval.
Heat capacity is an extensive property because its value is proportional to the amount of material
in the object; for example, a bathtub of water has a greater heat capacity than a cup of water.
Heat capacity is usually expressed in units of J K–1 (or J/K), subject to the caveats and
exceptions detailed in both Basic metrics of specific heat capacity and Symbols and standards,
above. For instance, one could write that the gasoline in a 55-gallon drum has an average heat
capacity of 347 kJ/K.
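As a rough check on the 55-gallon figure quoted above, an object's heat capacity is simply its mass times the material's specific heat capacity. A minimal Python sketch, assuming a gasoline density of roughly 0.75 kg/L (an assumed round figure) and the 2.22 J g−1 K−1 value listed in the table below:

    # Heat capacity of an object is mass times specific heat: C = m * c.
    GALLONS = 55
    LITERS_PER_GALLON = 3.785      # US gallon
    DENSITY_KG_PER_L = 0.75        # assumed typical gasoline density
    CP_J_PER_G_K = 2.22            # specific heat of gasoline (see table below)

    mass_g = GALLONS * LITERS_PER_GALLON * DENSITY_KG_PER_L * 1000.0
    heat_capacity_J_per_K = mass_g * CP_J_PER_G_K

    print(f"mass ~ {mass_g / 1000:.0f} kg")
    print(f"heat capacity ~ {heat_capacity_J_per_K / 1000:.0f} kJ/K")  # roughly 347 kJ/K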
The uncertainty of an object’s measured quantity is rarely better than one percent and this places
an upper limit on the accuracy and precision of most stated values of heat capacity. Accordingly,
it is usually unnecessary as a practical matter, to specify the defined state at which the
measurement was made; e.g. “(25 °C, 100 kPa).” In most cases, it is assumed that the substance’s
specific heat capacity is a published value and the object’s quantity is subject to such a sizable
relative uncertainty that it renders this detail moot. An exception would be when an object has an
accurately known or precisely defined quantity; e.g. “The heat capacity of the International
Prototype Kilogram is 133 J/K (25 °C).” Another exception would be when the defined state
varies significantly from standard conditions.

Table of specific heat capacities


Substance  Phase  cp (J g−1 K−1)  Cp (J mol−1 K−1)  Cv (J mol−1 K−1)
All measurements are at 25 °C unless otherwise noted.
Air (Sea level, dry, 0 °C) gas 1.0035 29.07
Air (typical room conditions [A]) gas 1.012 29.19
Aluminium solid 0.897 24.2
Ammonia liquid 4.700 80.08
Antimony solid 0.207 25.2
Argon gas 0.5203 20.7862 12.4717
Arsenic solid 0.328 24.6
Beryllium solid 1.82 16.4
Copper solid 0.385 24.47
Diamond solid 0.5091 6.115
Ethanol liquid 2.44 112
Gasoline liquid 2.22 228
Gold solid 0.1291 25.42
Graphite solid 0.710 8.53
Helium gas 5.1932 20.7862 12.4717
Hydrogen gas 14.30 28.82
Iron solid 0.450 25.1
Lead solid 0.127 26.4
Lithium solid 3.58 24.8
Magnesium solid 1.02 24.9
Mercury liquid 0.1395 27.98
Nitrogen gas 1.040 29.12 20.8
Neon gas 1.0301 20.7862 12.4717
Oxygen gas 0.918 29.38
Paraffin wax solid 2.5 900
Silica (fused) solid 0.703 42.2
Uranium solid 0.116 27.7
Water gas (100 °C) 2.080 37.47 28.03
Water liquid (25 °C) 4.1813 75.327 74.53
Water (ice) solid (0 °C) 2.114 38.09

[A] Assuming an altitude of 194 meters above mean sea level (the worldwide median altitude of human habitation), an indoor temperature of 23 °C, a dewpoint of 9 °C (40.85% relative humidity), and 760 mmHg sea level-corrected barometric pressure (molar water vapor content = 1.16%).

Specific heat capacity of building materials


Usually of interest to builders and solar designers

Substance       Phase  cp (J g−1 K−1)
Asphalt         solid  0.92
Brick           solid  0.84
Concrete        solid  0.88
Glass, silica   solid  0.84
Glass, crown    solid  0.67
Glass, flint    solid  0.503
Glass, pyrex    solid  0.753
Granite         solid  0.79
Gypsum          solid  1.09
Marble, mica    solid  0.88
Sand            solid  0.835
Soil            solid  0.8
Wood            solid  0.42

Derivations of heat capacity and specific heat capacity


Definition of heat capacity

Heat capacity is mathematically defined as the ratio of a small amount of energy δQ added to the body to the corresponding small increase in its temperature dT:

    C = δQ / dT

For thermodynamic systems with more than one physical dimension, the above definition does not give a single, unique quantity unless a
particular infinitesimal path through the system’s phase space has been defined (this means that
one needs to know at all times where all parts of the system are, how much mass they have, and
how fast they are moving). This information is used to account for different ways that heat can
be stored as kinetic energy (energy of motion) and potential energy (energy stored in force
fields), as an object expands or contracts. For all real systems, the path through these changes must be explicitly defined, since the value of heat capacity depends on which path from one temperature to another is chosen. Of particular usefulness in this context are the values of heat
capacity for constant volume, CV, and constant pressure, CP. These will be defined below.

Heat capacity of compressible bodies

The state of a simple compressible body with fixed mass is described by two thermodynamic parameters such as temperature T and pressure p. Therefore, as mentioned above, one may distinguish between heat capacity at constant volume, CV, and heat capacity at constant pressure, Cp:

    CV = (δQ/dT) at constant volume,    Cp = (δQ/dT) at constant pressure

where δQ is the infinitesimal amount of heat added and dT is the subsequent rise in temperature.

The increment of internal energy is the heat added plus the work added:

    dU = δQ − P dV

So the heat capacity at constant volume is

    CV = (∂U/∂T)V

The enthalpy is defined by H = U + PV. The increment of enthalpy is

    dH = dU + P dV + V dP

which, after replacing dU with the equation above and cancelling the PdV terms, reduces to

    dH = δQ + V dP

So the heat capacity at constant pressure is

    Cp = (∂H/∂T)P

Note that this last “definition” is a bit circular, since the concept of “enthalpy” itself was invented to be a measure of heat absorbed or produced at constant pressure (the conditions in which chemists usually work). As such, enthalpy merely accounts for the extra heat which is produced or absorbed by pressure-volume work at constant pressure. Thus, it is not surprising that constant-pressure heat capacities may be defined in terms of enthalpy, since “enthalpy” was defined in the first place to make this so.

Specific heat capacity


The specific heat capacity of a material is

    c = C / m

which, in the absence of phase transitions, is equivalent to

    c = C / m = C / (ρV)

where C is the heat capacity of a body made of the material in question, m is the mass of the body, V is the volume of the body, and ρ = m/V is the density of the material.

For gases, and also for other materials under high pressures, there is a need to distinguish between different boundary conditions for the processes under consideration (since values differ significantly between different conditions). Typical processes for which a heat capacity may be defined include isobaric (constant pressure, dp = 0) or isochoric (constant volume, dV = 0) processes. The corresponding specific heat capacities are expressed as

    cp = Cp / m,    cV = CV / m

A parameter related to c is the volumetric heat capacity, C/V = ρc. In engineering practice, cV for solids or liquids often signifies a volumetric heat capacity rather than a constant-volume one. In such cases, the mass-specific heat capacity (specific heat) is often explicitly written with the subscript m, as cm. Of course, from the above relationships, for solids one writes:

    cm = C / m = (C/V) / ρ

Dimensionless heat capacity

The dimensionless heat capacity of a material is

    C* = C / (nR) = C / (Nk)

where C is the heat capacity of a body made of the material in question (J·K−1), n is the amount of matter in the body (mol), R is the gas constant (J·K−1·mol−1), N is the number of molecules in the body (dimensionless), and k is Boltzmann’s constant (J·K−1·molecule−1); nR = Nk. Again, SI units are shown for example.

Theoretical models

Gas phase

The specific heat of a gas is best conceptualized in terms of the degrees of
freedom of an individual molecule. The different degrees of freedom correspond to the different
ways in which the molecule may store energy. The molecule may store energy in its translational motion according to the familiar formula

    E_translational = (1/2) m (vx² + vy² + vz²)

where m is the mass of the molecule and [vx, vy, vz] is the velocity of the center of mass of the molecule. Each direction of motion constitutes a degree of freedom, so that there are three translational degrees of freedom.

In addition, a molecule may have rotational motion. The kinetic energy of rotational motion is generally expressed as

    E_rotational = (1/2) (I1 ω1² + I2 ω2² + I3 ω3²)

where I is the moment of inertia tensor of the molecule and [ω1, ω2, ω3] is the angular velocity pseudovector (in a coordinate system aligned with the principal axes of the molecule). In general, then, there will be three additional degrees of freedom corresponding to the rotational motion of the molecule. (For linear molecules, one of the inertia tensor terms vanishes and there are only two rotational degrees of freedom.) The degrees of freedom corresponding to translations and rotations are called the “rigid” degrees of freedom, since they do not involve any deformation of the molecule.

The motions of the atoms in a molecule which are not part of its gross translational motion or
rotation may be classified as vibrational motions. It can be shown that if there are n atoms in the
molecule, there will be as many as 3n − 3 − nr vibrational degrees of freedom, where nr is the
number of rotational degrees of freedom. The actual number may be less due to various
symmetries.
If the molecule could be entirely described using classical mechanics, then we could use the
theorem of equipartition of energy to predict that each degree of freedom would have an average
energy in the amount of (1/2)kT where k is Boltzmann’s constant and T is the temperature.

Our calculation of the heat content would be straightforward. Each molecule would be holding,
on average, an energy of (f/2)kT where f is the total number of degrees of freedom in the
molecule. The total internal energy of the gas would be (f/2)NkT where N is the total number of
molecules. The heat capacity (at constant volume) would then be a constant (f/2)Nk , the specific
heat capacity would be (f/2)k and the dimensionless heat capacity would be just f/2.
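A short Python sketch of this classical counting, turning an assumed number of active degrees of freedom f into the predicted molar and dimensionless constant-volume heat capacities:

    R = 8.314  # gas constant, J/(mol*K)

    def classical_molar_cv(f):
        """Equipartition prediction: each active degree of freedom stores (1/2)kT per
        molecule, so the molar heat capacity at constant volume is (f/2)R."""
        return f / 2 * R

    # 3 translational DOF (monatomic); +2 rotational (rigid diatomic);
    # +2 more for one vibrational mode (kinetic + potential) in a fully classical diatomic
    for label, f in [("monatomic", 3), ("rigid diatomic", 5), ("vibrating diatomic", 7)]:
        print(f"{label:18s} f = {f}:  Cv,m = {classical_molar_cv(f):5.2f} J/(mol*K)   C* = {f/2}")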

The various degrees of freedom cannot generally be considered to obey classical mechanics.
Classically, the energy residing in each degree of freedom is assumed to be continuous - it can
take on any positive value, depending on the temperature. In reality, the amount of energy that
may reside in a particular degree of freedom is quantized: It may only be increased and
decreased in finite amounts.

A good estimate of the size of this minimum amount is the energy of the first excited state of that
degree of freedom above its ground state. For example, the first vibrational state of the HCl
molecule has an energy of about 5.74 × 10–20 joule. If this amount of energy were deposited in a
classical degree of freedom, it would correspond to a temperature of about 4156 K.

If the temperature of the substance is so low that the equipartition energy of (1/2)kT is much
smaller than this excitation energy, then there will be little or no energy in this degree of
freedom. This degree of freedom is then said to be “frozen out". As mentioned above, the
temperature corresponding to the first excited vibrational state of HCl is about 4156 K. For
temperatures well below this value, the vibrational degrees of freedom of the HCl molecule will
be frozen out. They will contain little energy and will not contribute to the heat content of the
HCl gas.

It can be seen that for each degree of freedom there is a critical temperature at which the degree
of freedom “unfreezes” and begins to accept energy in a classical way. In the case of
translational degrees of freedom, this temperature is that temperature at which the thermal
wavelength of the molecules is roughly equal to the size of the container. For a container of
macroscopic size (e.g. 10 cm) this temperature is extremely small and has no significance, since
the gas will certainly liquify or freeze before this low temperature is reached.

For any real gas we may consider translational degrees of freedom to always be classical and
contain an average energy of (3/2)kT per molecule.
The rotational degrees of freedom are the next to “unfreeze". In a diatomic gas, for example, the
critical temperature for this transition is usually a few tens of kelvins. Finally, the vibrational
degrees of freedom are generally the last to unfreeze. As an example, for diatomic gases, the
critical temperature for the vibrational motion is usually a few thousands of kelvins.

It should be noted that it has been assumed that atoms have no rotational or internal degrees of
freedom. This is in fact untrue. For example, atomic electrons can exist in excited states and even
the atomic nucleus can have excited states as well. Each of these internal degrees of freedom is assumed to be frozen out due to its relatively high excitation energy. Nevertheless, at sufficiently high temperatures, these degrees of freedom cannot be ignored.

Monatomic gas

In the case of a monatomic gas such as helium under constant volume, if it is assumed that no electronic or nuclear quantum excitations occur, each atom in the gas has only 3 degrees of freedom, all of a translational type. No energy dependence is associated with the degrees of freedom which define the position of the atoms; the degrees of freedom corresponding to the momenta of the atoms, however, are quadratic and thus contribute to the heat capacity. There are N atoms, each of which has 3 components of momentum, which leads to 3N total degrees of freedom.

This gives:

    CV = (3/2) N kB = (3/2) n R,    CV,m = CV / n = (3/2) R ≈ 12.47 J K−1 mol−1

where CV is the heat capacity at constant volume of the gas, CV,m is the molar heat capacity at constant volume of the gas, N is the total number of atoms present in the container, n is the number of moles of atoms present in the container (n is the ratio of N and Avogadro’s number), and R is the ideal gas constant (≈8.314 J K−1 mol−1). R is equal to the product of Boltzmann’s constant kB and Avogadro’s number.

The following table shows experimental molar constant-volume heat capacity measurements taken for each noble monatomic gas (at 1 atm and 25 °C):
Monatomic gas CV, m (J K−1 mol−1) CV, m/R
He 12.5 1.50
Ne 12.5 1.50
Ar 12.5 1.50
Kr 12.5 1.50
Xe 12.5 1.50
It is apparent from the table that the experimental heat capacities of the monatomic noble gases agree with this simple application of statistical mechanics to a very high degree.
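A quick numeric comparison of the (3/2)R prediction with the tabulated value:

    R = 8.314                 # J/(mol*K)
    predicted = 1.5 * R       # (3/2)R from three translational degrees of freedom
    measured = 12.5           # J/(mol*K), tabulated value for He, Ne, Ar, Kr and Xe

    print(f"predicted Cv,m = {predicted:.2f} J/(mol*K)")
    print(f"measured  Cv,m = {measured:.2f} J/(mol*K)  ({measured / R:.2f} R)")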
Diatomic gas

In the somewhat more complex case of an ideal gas of diatomic molecules, the presence of internal degrees of freedom is apparent. In addition to the three translational degrees of freedom, there are rotational and vibrational degrees of freedom. In general, the number of degrees of freedom, f, in a molecule with na atoms is 3na:

    f = 3na

Mathematically,
there are a total of three rotational degrees of freedom, one corresponding to rotation about each
of the axes of three dimensional space. However, in practice we shall only consider the existence
of two degrees of rotational freedom for linear molecules. This approximation is valid because
the moment of inertia about the internuclear axis is vanishingly small with respect to the other
moments of inertia in the molecule (this is due to the extremely small radii of the atomic nuclei,
compared to the distance between them in a molecule). Quantum mechanically, it can be shown
that the interval between successive rotational energy eigenstates is inversely proportional to the
moment of inertia about that axis. Because the moment of inertia about the internuclear axis is
vanishingly small relative to the other two rotational axes, the energy spacing can be considered
so high that no excitations of the rotational state can possibly occur unless the temperature is
extremely high.

We can easily calculate the expected number of vibrational degrees of freedom (or vibrational modes). There are three degrees of translational freedom and two degrees of rotational freedom, therefore

    f_vibrational = 3na − 3 − 2 = 6 − 5 = 1

Each rotational and translational degree of freedom will contribute R/2 to the total molar heat capacity of the gas. Each vibrational mode, however, will contribute R, because for each vibrational mode there is both a potential and a kinetic energy component, and each of these contributes R/2 to the total molar heat capacity of the gas. Therefore, we expect that a diatomic molecule would have a molar constant-volume heat capacity of

    CV,m = (3/2)R + R + R = (7/2)R = 29.1 J K−1 mol−1

where the terms originate from the translational, rotational, and vibrational degrees of freedom, respectively.
The following is a table of some molar constant-volume heat capacities of various diatomic gases:
Diatomic gas CV, m (J K−1 mol−1) CV, m / R
H2 20.18 2.427
CO 20.2 2.43
N2 19.9 2.39
Cl2 24.1 2.90
Br2 32.0 3.84

From the above table, clearly there is a problem with the above theory. All of the diatomics
examined have heat capacities that are lower than those predicted by the Equipartition Theorem,
except Br2. However, as the atoms composing the molecules become heavier, the heat capacities
move closer to their expected values. One of the reasons for this phenomenon is the quantization
of vibrational, and to a lesser extent, rotational states. In fact, if it is assumed that the molecules
remain in their lowest energy vibrational state because the inter-level energy spacings are large,
the predicted molar constant-volume heat capacity for a diatomic molecule becomes

    CV,m = (3/2)R + R = (5/2)R = 20.8 J K−1 mol−1

which is a fairly close approximation of the
heat capacities of the lighter molecules in the above table. If the quantum harmonic oscillator
approximation is made, it turns out that the quantum vibrational energy level spacings are
actually inversely proportional to the square root of the reduced mass of the atoms composing
the diatomic molecule. Therefore, in the case of the heavier diatomic molecules, the quantum
vibrational energy level spacings become finer, which allows more excitations into higher
vibrational levels at a fixed temperature.
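The same comparison can be made for the diatomic values above, contrasting the vibration-frozen prediction (5/2)R with the fully classical prediction (7/2)R (the Cv values are those quoted in the table):

    R = 8.314
    table_cv = {"H2": 20.18, "CO": 20.2, "N2": 19.9, "Cl2": 24.1, "Br2": 32.0}  # J/(mol*K)

    frozen = 2.5 * R   # (5/2)R: vibration frozen out
    full   = 3.5 * R   # (7/2)R: fully classical, vibration active

    for gas, cv in table_cv.items():
        print(f"{gas:3s}: Cv,m = {cv:5.2f} J/(mol*K) = {cv / R:.2f} R  "
              f"[frozen {frozen / R:.1f} R, full classical {full / R:.1f} R]")

The light molecules cluster near 2.4 R, while Br2 sits between the two limits, as discussed above.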

Solid phase

[Figure: the dimensionless heat capacity divided by three, as a function of temperature (horizontal axis: temperature divided by the Debye temperature), as predicted by the Debye model and by Einstein's earlier model. As expected, the dimensionless heat capacity is zero at absolute zero and rises to a value of three as the temperature becomes much larger than the Debye temperature. The red line corresponds to the classical limit of the Dulong-Petit law.]
For matter in a crystalline solid phase, the Dulong-Petit law, which was discovered empirically, states that the molar heat capacity assumes the value 3R (i.e., a dimensionless heat capacity of 3). Indeed, for solid metallic chemical elements at room temperature, dimensionless heat capacities range from about 2.8 to 3.4 (beryllium being a notable exception at 2.0).

The theoretical maximum heat capacity for larger and larger multi-atomic gases at higher
temperatures, also approaches the Dulong-Petit limit of 3R, so long as this is calculated per mole
of atoms, not molecules. The reason is that gases with very large molecules, in theory have
almost the same high-temperature heat capacity as solids, lacking only the (small) heat capacity
contribution that comes from potential energy that cannot be stored between separate molecules
in a gas.

The Dulong-Petit “limit” results from the equipartition theorem, and as such is only valid in the
classical limit of a microstate continuum, which is a high temperature limit. For light and non-
metallic elements, as well as most of the common molecular solids based on carbon compounds
at standard ambient temperature, quantum effects may also play an important role, as they do in
multi-atomic gases. These effects usually combine to give heat capacities lower than 3 R per
mole of atoms in the solid, although heat capacities calculated per mole of molecules in
molecular solids may be more than 3 R. For example, the heat capacity of water ice at the
melting point is about 4.6 R per mole of molecules, but only 1.5 R per mole of atoms. The lower
number results from the “freezing out” of possible vibration modes for light atoms at suitably
low temperatures, just as in many gases.

These effects are seen in solids more often than liquids: for example the heat capacity of liquid
water is again close to the theoretical 3 R per mole of atoms of the Dulong-Petit theoretical
maximum.
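A short sketch checking the tabulated molar heat capacities against the Dulong-Petit value of 3R per mole of atoms (water ice is counted as three atoms per molecule; the Cp values from the table earlier are used as a stand-in for the heat capacity):

    R = 8.314
    # (molar heat capacity from the table earlier in J/(mol*K), atoms per formula unit)
    solids = {
        "copper":          (24.47, 1),
        "aluminium":       (24.2, 1),
        "lead":            (26.4, 1),
        "beryllium":       (16.4, 1),
        "water ice (0 C)": (38.09, 3),
    }

    for name, (c_molar, atoms_per_unit) in solids.items():
        per_mole_of_atoms = c_molar / atoms_per_unit
        print(f"{name:16s} {per_mole_of_atoms:5.1f} J/(mol atoms * K) = "
              f"{per_mole_of_atoms / R:.2f} R   (Dulong-Petit limit: 3 R)")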
For a more modern and precise analysis of the heat capacities of solids, especially at low temperatures, it is useful to use the idea of phonons. See Debye model.

Heat capacity at absolute zero

From the definition of entropy, T dS = δQ, we can calculate the absolute entropy by integrating from zero temperature to the final temperature Tf:

    S(Tf) = ∫ from T = 0 to Tf of C(T) dT / T

The heat capacity must be zero at zero temperature in order for the above integral not to yield an infinite absolute entropy, thus
violating the third law of thermodynamics. One of the strengths of the Debye model is that
(unlike the preceding Einstein model) it predicts an approach of heat capacity toward zero as
zero temperature is approached, and also predicts the proper mathematical form of this approach.
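A small numerical sketch of that argument, comparing a constant heat capacity (which makes the entropy integral diverge at the lower limit) with a Debye-like C ∝ T³ (which keeps it finite). The T³ form and the prefactor a are illustrative assumptions, not fitted values:

    N = 200_000
    T_min, T_max = 1e-6, 1.0      # integrate from just above 0 K up to Tf = 1 (arbitrary units)
    dT = (T_max - T_min) / N
    a = 1.0                       # arbitrary prefactor, illustration only

    # S(Tf) = integral of C(T)/T dT, approximated by a simple Riemann sum
    S_constant = sum(a / (T_min + i * dT) * dT for i in range(N))        # C = a     -> diverges
    S_debye    = sum(a * (T_min + i * dT) ** 2 * dT for i in range(N))   # C = a*T^3 -> about a/3

    print(f"C = const : S ~ {S_constant:.1f}  (grows without bound as the lower limit -> 0)")
    print(f"C ~ T^3   : S ~ {S_debye:.4f} (finite, about a/3)")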

See also
• Heat
• Heat capacity ratio
• Heat equation
• Heat transfer coefficient
• Latent heat
• Specific melting heat
• Specific heat of vaporization
• Temperature
• Thermodynamic (absolute) temperature
• Volumetric heat capacity
Stress-Strain Curves

Stress
The tensile stress on a material is defined as the force per unit area as the material is stretched. The cross-sectional area may change if the material deforms as it is stretched, so the area used in the calculation is the original undeformed cross-sectional area Ao.

    stress: σ = F / Ao

The units of stress are the same as those of pressure. We will use pascals, Pa, as the units for the stress. In the polymer literature, stress is often expressed in terms of psi (pounds per square inch).
1 MPa = 145 psi

Strain
The strain is a measure of the change in length of the sample. The strain is commonly expressed in one of two ways.

    elongation: ε = (L − Lo) / Lo

    extension ratio: λ = L / Lo

The strain is a unitless number.
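A minimal Python sketch combining the stress and strain definitions above for a hypothetical round tensile specimen (all dimensions and the applied force are made-up illustration values):

    import math

    # Hypothetical round tensile specimen; all numbers are illustration values only.
    d0 = 0.010      # original diameter, m (10 mm)
    L0 = 0.050      # original gauge length, m (50 mm)
    F  = 15_000.0   # applied tensile force, N
    L  = 0.0525     # gauge length measured under load, m

    A0 = math.pi * d0 ** 2 / 4            # original, undeformed cross-sectional area, m^2
    stress = F / A0                       # engineering stress, Pa
    elongation = (L - L0) / L0            # strain expressed as elongation (unitless)
    extension_ratio = L / L0              # strain expressed as extension ratio

    print(f"stress = {stress / 1e6:.0f} MPa  (~{stress / 1e6 * 145:.0f} psi)")
    print(f"elongation = {elongation:.3f}, extension ratio = {extension_ratio:.3f}")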


Stress-Strain Curves
A tensile stress-strain curve is a plot of stress on the y-axis vs. strain on the x-axis; here strain is expressed as elongation. Stress-strain curves are measured with an instrument designed for tensile testing.

We see that as the strain (length) of the material increases, a larger amount of stress (force) is required. As the elongation is increased, the sample eventually breaks.
---------------------------------------------------------------------------------------------------------------------
---------------------------------------------------------------------------------------------------------------------
-------------------------------------------------------

The relationship between the stress and strain that a material displays is known as a Stress-Strain curve.

It is unique for each material and is found by recording the amount of deformation (strain) at distinct intervals of tensile or compressive loading. These curves reveal many of the properties of a material (including data to establish the Modulus of Elasticity, E).

What does a comparison of the curves for mild steel, cast iron and concrete illustrate about their
respective properties? It can be seen that the concrete curve is almost a straight line. There is an
abrupt end to the curve. This, and the fact that it is a very steep line, indicate that it is a brittle
material. The curve for cast iron has a slight curve to it.

It is also a brittle material. Both of these materials will fail with little warning once their limits
are surpassed. Notice that the curve for mild steel seems to have a long gently curving "tail".
This indicates a behavior that is distinctly different than either concrete or cast iron. The graph
shows that after a certain point mild steel will continue to strain (in the case of tension, to
stretch) as the stress (the loading) remains more or less constant. The steel will actually stretch
like taffy. This is a material property which indicates a high ductility. There are a number of
significant points on a stress-strain curve that help one understand and predict the way every
building material will behave.

An example plot of a test on two grades of steel is illustrated above. If one begins at the origin and follows the graph, a number of points are indicated. Point A is known as the proportional limit. Up to this point the relationship between stress and strain is exactly proportional.

The number which describes the relationship between the two is the Modulus of Elasticity. Beyond point A the curve is no longer a straight line. Up to this point, any steel specimen that is loaded and unloaded would return to its original length. This is known as elastic behavior. Point B is the point after which any continued stress results in permanent, or inelastic, deformation. Thus, point B is known as the elastic limit.
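Within the proportional region just described, E is simply stress divided by strain. A small sketch using a made-up, roughly steel-like reading taken below the proportional limit:

    # Young's modulus E is the slope of the linear (proportional) part of the curve.
    # The stress/strain pair below is a made-up reading taken below the proportional limit.
    stress = 200e6    # Pa (200 MPa)
    strain = 0.001    # unitless elongation measured at that stress

    E = stress / strain
    print(f"Modulus of Elasticity E = {E / 1e9:.0f} GPa")  # ~200 GPa, a steel-like value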

Since the stress resistance of the material decreases after the peak of the curve, this point is also known as the yield point.

The line between points C and D indicates the behavior of the steel specimen if it experienced
continued loading to stress indicated as point C. Notice that the dashed line is parallel to the
elastic zone of the curve (between the origin and point A). When the specimen is unloaded the
magnitude of the inelastic deformation would be determined (in this case 0.0725 inches /inch). If
the same specimen was to be loaded again, the stress-strain plot would climb back up the line
from D to C and continue along the initial curve. Point E indicates the location of the value of the
ultimate stress.

Note that this is quite different from the yield stress. The yield stress and ultimate stress are the
two values that are most often used to determine the allowable loads for building materials and
should never be confused. A material is considered to have completely failed once it reaches the
ultimate stress. The point of rupture, or the actual tearing of the material, does not occur until
point F. It is interesting to note the curve that indicates the actual stress experienced by the
specimen.
This curve is different from the apparent stress since the cross sectional area is actually
decreasing. There is quite a bit to be learned from both the study of the ideal and actual behavior
of all building materials. Changes in that body of knowledge have had large impacts on the way
in which building structures are designed.

The earliest methods of design limited the stresses that a structure would be "allowed" to
experience. Thus, the method of design was known as the Allowable Stress Method. Recognition
of the additional strength potential of most materials resulted in the Ultimate Stress Method of
design. Contemporary thought centers on the limitation of the various service conditions of the
structure at hand. This is known as the Limit States Design method. In the end, it is the author's opinion that the actual method of design is less important than the legal bodies would like us to believe. Human factors in the construction process SHOULD prevent a good designer from pushing too hard against the envelope of safety.

---------------------------------------------------------------------------------------------------------------------
---------

The Stress-Strain Curve


The stress-strain curve characterizes the behavior of the material tested. It is most often plotted
using engineering stress and strain measures, because the reference length and cross-sectional
area are easily measured.

Stress-strain curves generated from tensile test results help engineers gain insight into the
constitutive relationship between stress and strain for a particular material. The constitutive
relationship can be thought of as providing an answer to the following question: Given a strain
history for a specimen, what is the state of stress? As we shall see, even for the simplest of
materials, this relationship can be very complicated.

In addition to providing quantitative information that is useful for the constitutive relationship,
the stress-strain curve can also be used to qualitatively describe and classify the material. Typical
regions that can be observed in a stress-strain curve are:

1. Elastic region
2. Yielding
3. Strain Hardening
4. Necking and Failure

A stress-strain curve with each region identified is shown below in Figure 5. The curve has been sketched using the assumption that the strain in the specimen is monotonically increasing, with no unloading. It should be emphasized that a lot of variation from what is shown is possible with real materials, that each of the above regions will not always be so clearly delineated, that the extent of each region in stress-strain space is material dependent, and that not all materials exhibit all of the above regions. We describe each of the regions in more detail in the following sections.

Figure 5: Various regions and points on the stress-strain curve.

---------------------------------------------------------------------------------------------------------------------
------

--------------------------------------------------------------------------------------------------------------------
Surface Hardening
Surface hardening, a process which includes a wide variety of techniques, is used to improve the wear resistance of parts without affecting the softer, tough interior of the part. This combination of a hard surface and resistance to breakage upon impact is useful in parts such as a cam or ring gear that must have a very hard surface to resist wear, along with a tough interior to resist the impact that occurs during operation.

Further, the surface hardening of steels has an advantage over through hardening because less
expensive low-carbon and medium-carbon steels can be surface hardened without the problems
of distortion and cracking associated with the through hardening of thick sections. There are two
distinctly different approaches to the various methods for surface hardening (Table 1): methods
that involve an intentional buildup or addition of a new layer and methods that involve surface
and subsurface modification without any intentional buildup or increase in part dimensions.
Table 1. Engineering methods for surface hardening of steels.

Layer additions:
• Hardfacing: fusion hardfacing, thermal spray coatings
• Coatings: electrochemical plating, chemical vapor deposition (electroless plating), thin films (physical vapor deposition, sputtering, ion plating), ion mixing

Substrate treatment:
• Diffusion methods: carburizing, nitriding, carbonitriding, nitrocarburizing, boriding, titanium-carbon diffusion, Toyota diffusion process
• Selective hardening methods: flame hardening, induction hardening, laser hardening, electron beam hardening, ion implantation, selective carburizing and nitriding, use of arc lamps

The first group of surface hardening methods includes the use of thin films, coatings, or weld
overlays (hard-facings). Films, coatings, and overlays generally become less cost effective as
production quantities increase, especially when the entire surface of work pieces must be
hardened. The fatigue performance of films, coatings, and overlays may also be a limiting factor,
depending on the bond strength between the substrate and the added layer. Fusion-welded
overlays have strong bonds, but the primary surface-hardened steels used in wear applications
with fatigue loads include heavy case-hardened steels and flame or induction-hardened steels.
Nonetheless, coatings and overlays can be effective in some applications. For tool steels, for
example, TiN and Al2O3 coatings are effective not only because of their hardness but also
because their chemical inertness reduces wear and the welding of chips to the tool. Overlays can
be effective when the selective hardening of large areas is required.
The second group of methods on surface hardening is further divided into diffusion methods and
selective hardening methods. Diffusion methods modify the chemical composition of the surface
with hardening species such as carbon, nitrogen, or boron. Diffusion methods allow effective
hardening of the entire surface of a part and are generally used when a large number of parts are
to be surface hardened. In contrast, selective surface hardening methods allow localized
hardening. Selective hardening generally involves transformation hardening (from heating and
quenching), but some selective hardening methods (selective nitriding, ion implantation and ion
beam mixing) are based solely on compositional modification.

As previously mentioned, surface hardening by diffusion involves the chemical modification of a surface. The basic process used is thermochemical, because some heat is needed to enhance the diffusion of hardening species into the surface and subsurface regions of a part. The depth of diffusion exhibits a time-temperature dependence such that

    Case depth ≈ K √(Time)

where the diffusivity constant, K, depends on temperature, the chemical composition of the steel, and the concentration gradient of a given hardening species.
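The square-root law above implies, for example, that quadrupling the treatment time only doubles the case depth. A minimal sketch, with K treated as a purely illustrative constant (in practice it must be determined for a given steel, hardening species, and temperature):

    import math

    K = 0.3  # mm per sqrt(hour); purely illustrative constant, not a real process value

    for hours in (1, 4, 9, 16):
        case_depth_mm = K * math.sqrt(hours)   # Case depth ~ K * sqrt(time)
        print(f"t = {hours:2d} h  ->  case depth ~ {case_depth_mm:.2f} mm")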

In terms of temperature, the diffusivity constant increases exponentially as a function of absolute temperature. Concentration gradients depend on the surface kinetics and reactions of a particular process. Methods of hardening by diffusion include several variations of hardening species (such as carbon, nitrogen, or boron) and of the process method used to handle and transport the hardening species to the surface of the part. Process methods for exposure involve the handling of hardening species in forms such as gas, liquid, or ions.

These process variations naturally produce differences in typical case depth and hardness (Table
2). Factors influencing the suitability of a particular diffusion method include the type of steel
(Table 3). It is also important to distinguish between total case depth and effective case depth.
The effective case depth is typically about two-thirds to three-fourths the total case depth. The
required effective depth must be specified so that the heat treatment can process the parts for the
correct time at the proper temperature.

Table 2. Typical characteristics of diffusion treatments
(Process | process temperature, °C | nature of case | typical case depth | case hardness, HRC | typical base metals)

Carburizing
  Pack: 815-1090 | diffused carbon | 125 µm-1.5 mm | 50-63* | low-carbon steels, low-carbon alloy steels
  Gas: 815-980 | diffused carbon | 75 µm-1.5 mm | 50-63* | low-carbon steels, low-carbon alloy steels
  Liquid: 815-980 | diffused carbon and possibly nitrogen | 50 µm-1.5 mm | 50-65* | low-carbon steels, low-carbon alloy steels
  Vacuum: 815-1090 | diffused carbon | 75 µm-1.5 mm | 50-63* | low-carbon steels, low-carbon alloy steels

Nitriding
  Gas: 480-590 | diffused nitrogen, nitrogen compounds | 12 µm-0.75 mm | 50-70 | alloy steels, nitriding steels, stainless steels
  Salt: 510-565 | diffused nitrogen, nitrogen compounds | 2.5 µm-0.75 mm | 50-70 | most ferrous metals, including cast irons
  Ion: 340-565 | diffused nitrogen, nitrogen compounds | 75 µm-0.75 mm | 50-70 | alloy steels, nitriding steels, stainless steels

Carbonitriding
  Gas: 760-870 | diffused carbon and nitrogen | 75 µm-0.75 mm | 50-65* | low-carbon steels, low-carbon alloy steels, stainless steels
  Liquid (cyaniding): 760-870 | diffused carbon and nitrogen | 2.5-125 µm | 50-65* | low-carbon steels
  Ferritic nitrocarburizing: 565-675 | diffused carbon and nitrogen | 2.5-25 µm | 40-60* | low-carbon steels

Other
  Aluminizing (pack): 870-980 | diffused aluminum | 25 µm-1 mm | <20 | low-carbon steels
  Siliconizing by chemical vapor deposition: 925-1040 | diffused silicon | 25 µm-1 mm | 30-50 | low-carbon steels
  Chromizing by chemical vapor deposition: 980-1090 | diffused chromium | 25-50 µm | low-carbon steel <30, high-carbon steel 50-60 | high- and low-carbon steels
  Titanium carbide: 900-1010 | diffused carbon and titanium, TiC compound | 2.5-12.5 µm | >70* | alloy steels, tool steels
  Boriding: 400-1150 | diffused boron, boron compounds | 12.5-50 µm | 40 to >70 | alloy steels, tool steels, cobalt and nickel alloys
* Requires quench from austenitizing temperature.

Table 3. Types of steels used for various diffusion processes

Diffusion substrates:
  Low-carbon steels: carburizing, cyaniding, ferritic nitrocarburizing, carbonitriding
  Alloy steels: nitriding, ion nitriding, titanium carbide
  Tool steels: gas nitriding, salt nitriding, ion nitriding, boriding
  Stainless steels: gas nitriding, ion nitriding, ferritic nitrocarburizing, titanium carbide
Time-Temperature-Transformation (TTT) Diagram
T (Time) T(Temperature) T(Transformation) diagram is a plot of temperature versus the
logarithm of time for a steel alloy of definite composition. It is used to determine when
transformations begin and end for an isothermal (constant temperature) heat treatment of a
previously austenitized alloy. When austenite is cooled slowly to a temperature below LCT
(Lower Critical Temperature), the structure that is formed is Pearlite. As the cooling rate
increases, the pearlite transformation temperature gets lower. The microstructure of the material
is significantly altered as the cooling rate increases. By heating and cooling a series of samples, the history of the austenite transformation may be recorded. The TTT diagram indicates when a specific transformation starts and ends, and it also shows what percentage of the austenite has transformed at a particular temperature.

Cooling rates, in order of increasing severity, are achieved by quenching from elevated
temperatures as follows: furnace cooling, air cooling, oil quenching, liquid salts, water
quenching, and brine. If these cooling curves are superimposed on the TTT diagram, the end
product structure and the time required to complete the transformation may be found.

In Figure 1 the area on the left of the transformation curve represents the austenite region.
Austenite is stable at temperatures above LCT but unstable below LCT. The left curve indicates the start of a transformation and the right curve represents the finish of a transformation. The area
between the two curves indicates the transformation of austenite to different types of crystal
structures. (Austenite to pearlite, austenite to martensite, austenite to bainite transformation.)
Figure 1. TTT Diagram
Figure 2 represents the upper half of the TTT diagram. As indicated in Figure 2, when
austenite is cooled to temperatures below LCT, it transforms to other crystal structures
due to its unstable nature. A specific cooling rate may be chosen so that the transformation
of austenite can be 50%, 100%, etc. If the cooling rate is very slow, as in an annealing process, the cooling curve passes through the entire transformation area and the end product of this cooling process becomes 100% Pearlite. In other words, when slow
cooling is applied, all the Austenite will transform to Pearlite. If the cooling curve passes
through the middle of the transformation area, the end product is 50 % Austenite and 50
% Pearlite, which means that at certain cooling rates we can retain part of the Austenite,
without transforming it into Pearlite.

Figure 2. Upper half of TTT Diagram(Austenite-Pearlite Transformation Area)


Figure 3 indicates the types of transformation that can be found at higher cooling rates. If a
cooling rate is very high, the cooling curve will remain on the left hand side of the
Transformation Start curve. In this case all Austenite will transform to Martensite. If there
is no interruption in cooling the end product will be martensite.

Figure 3. Lower half of TTT Diagram (Austenite-Martensite and Bainite Transformation


Areas)
In Figure 4 the cooling rates A and B indicate two rapid cooling processes. In this case curve A will cause higher distortion and higher internal stresses than cooling rate B.
The end product of both cooling rates will be martensite. Cooling rate B is also known as
the Critical Cooling Rate, which is represented by a cooling curve that is tangent to the
nose of the TTT diagram. Critical Cooling Rate is defined as the lowest cooling rate which
produces 100% Martensite while minimizing the internal stresses and distortions.

Figure 4. Rapid Quench


In Figure 5, a rapid quenching process is interrupted (horizontal line represents the
interruption) by immersing the material in a molten salt bath and soaking at a constant
temperature followed by another cooling process that passes through Bainite region of TTT
diagram. The end product is Bainite, which is not as hard as Martensite. As a result of cooling rate D, more dimensional stability, less distortion, and lower internal stresses are obtained.

Figure 5. Interrupted Quench


In Figure 6 cooling curve C represents a slow cooling process, such as furnace cooling. An
example for this type of cooling is annealing process where all the Austenite is allowed to
transform to Pearlite as a result of slow cooling.

Figure 6. Slow cooling process (Annealing)


Sometimes the cooling curve may pass through the middle of the Austenite-Pearlite
transformation zone. In Figure 7, cooling curve E indicates a cooling rate which is not high
enough to produce 100% martensite. This can be observed easily by looking at the TTT
diagram. Since the cooling curve E is not tangent to the nose of the transformation
diagram, austenite is transformed to 50% Pearlite (curve E is tangent to 50% curve). Since
curve E leaves the transformation diagram at the Martensite zone, the remaining 50 % of
the Austenite will be transformed to Martensite.

Figure 7. Cooling rate that permits both pearlite and martensite formation.

Figure 8. TTT Diagram and microstructures obtained by different types of cooling rates
-----------------------------
Figure 9. Austenite
Figure 10. Pearlite
Figure 11. Martensite
Figure 12. Bainite
Thermal conductivity

In physics, thermal conductivity, k, is the intensive property of a material that indicates its ability to conduct heat. It is used primarily in Fourier's law for heat conduction.

It is defined as the quantity of heat, Q, transmitted in time t through a thickness L, in a direction normal to a surface of area A, due to a temperature difference ΔT, under steady-state conditions and when the heat transfer is dependent only on the temperature gradient:

    thermal conductivity = heat flow rate × distance / (area × temperature difference),  i.e.  k = (Q/t) × L / (A × ΔT)

Alternatively, it can be thought of as a flux of heat (energy per unit area per unit time) divided by a temperature gradient (temperature difference per unit length).
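A minimal sketch of that defining relation, rearranged to give the steady-state heat flow through a plane slab; the k value for glass is taken from the list below, and the slab dimensions are illustrative:

    # Steady-state conduction through a plane slab: Q/t = k * A * dT / L
    k  = 1.1     # W/(m*K), glass (from the list below)
    A  = 1.5     # m^2, slab area (illustrative)
    L  = 0.004   # m, slab thickness (4 mm, illustrative)
    dT = 15.0    # K, temperature difference across the slab

    heat_flow_W = k * A * dT / L
    print(f"heat flow ~ {heat_flow_W:.0f} W through the slab")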

Examples
In metals, thermal conductivity approximately tracks electrical conductivity, as freely
moving valence electrons transfer not only electric current but also heat energy.
However, the general correlation between electrical and thermal conductance does not
hold for other materials, due to the increased importance of phonon carriers for heat in
non-metals. As shown in the table below, highly electrically conductive silver is less
thermally conductive than diamond, which is an electrical insulator.

Thermal conductivity depends on many properties of a material, notably its structure and temperature. For instance, pure crystalline substances exhibit very different thermal
conductivities along different crystal axes, due to differences in phonon coupling along a
given crystal axis. Sapphire is a notable example of variable thermal conductivity based
on orientation and temperature: the CRC Handbook reports a thermal conductivity of 2.6 W/m·K perpendicular to the c-axis at 373 K, but 6000 W/m·K at 36 degrees from the c-axis at 35 K.
Air and other gases are generally good insulators, in the absence of convection.
Therefore, many insulating materials function simply by having a large number of gas-
filled pockets which prevent large-scale convection. Examples of these include
expanded and extruded polystyrene (popularly referred to as "styrofoam") and silica
aerogel. Natural, biological insulators such as fur and feathers achieve similar effects by
dramatically inhibiting convection of air or water near an animal's skin.
Thermal conductivity is important in building insulation and related fields. However,
materials used in such trades are rarely subjected to chemical purity standards. Several
construction materials' k values are listed below. These should be considered
approximate due to the uncertainties related to material definitions.
The following table is meant as a small sample of data to illustrate the thermal
conductivity of various types of substances. For more complete listings of measured k-
values, see the references.

List of thermal conductivities


Main article: List of thermal conductivities

This is a list of approximate values of thermal conductivity, k, for some common materials. Please consult the list of thermal conductivities for more accurate values, references and detailed information.
Material k, [k] = W/(m*K)

Air 0.025

Alcohol or oil 0.15

Aluminium 237

Copper 401

Gold 318

Lead 35.3
Silver 429

Cork 0.05

Diamond 900 – 2320

Glass 1.1

Rubber 0.16

Sandstone 2.4

Soil 0.15

Stainless steel 15

Thermal grease 0.7 – 3

Wood 0.04 – 0.4

Measurement
For good conductors of heat, Searle's bar method can be used [1]. For poor conductors
of heat, Lees' disc method can be used [2]. An alternative traditional method using real
thermometers is described at [3]. A brief review of new methods measuring thermal conductivity, thermal diffusivity, and specific heat within a single measurement is available at [4]. A thermal conductance tester, one of the instruments of gemology,
determines if gems are genuine diamonds using diamond's uniquely high thermal
conductivity.
Related terms
The reciprocal of thermal conductivity is thermal resistivity, measured in kelvin-metres
per watt (K·m·W−1).
When dealing with a known amount of material, its thermal conductance and the
reciprocal property, thermal resistance, can be described. Unfortunately there are
differing definitions for these terms.

First definition (general)

For general scientific use, thermal conductance is the quantity of heat that passes in
unit time through a plate of particular area and thickness when its opposite faces differ
in temperature by one degree. For a plate of thermal conductivity k, area A and
thickness L this is kA/L, measured in W·K−1. This matches the relationship between
electrical conductivity (A·m−1·V−1) and electrical conductance (A·V−1).

There is also a measure known as heat transfer coefficient: the quantity of heat that
passes in unit time through unit area of a plate of particular thickness when its opposite
faces differ in temperature by one degree. The reciprocal is thermal insulance. In
summary:

• thermal conductance = kA/L, measured in W·K−1


o thermal resistance = L/kA, measured in K·W−1
• heat transfer coefficient = k/L, measured in W·K−1·m−2
o thermal insulance = L/k, measured in K·m2·W−1.

The heat transfer coefficient is also known as thermal admittance.

Second definition (buildings)

When dealing with buildings, thermal resistance or R-value means what is described
above as thermal insulance, and thermal conductance means the reciprocal. For
materials in series, these thermal resistances (unlike conductances) can simply be
added to give a thermal resistance for the whole.
A third term, thermal transmittance, incorporates the thermal conductance of a structure
along with heat transfer due to convection and radiation. It is measured in the same
units as thermal conductance and is sometimes known as the composite thermal
conductance. The term U-value is another synonym.
In summary, for a plate of thermal conductivity k (the k value [1]), area A and thickness
L:

• thermal conductance = k/L, measured in W·K−1·m−2;


• thermal resistance (R value) = L/k, measured in K·m2·W−1;
• thermal transmittance (U value) = 1/(Σ(L/k)) + convection + radiation, measured
in W·K−1·m−2.
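A short sketch of this building-style bookkeeping: per-layer R-values (L/k) are summed for layers in series, and the conduction-only U-value is the reciprocal of the total. Surface convection and radiation terms are ignored here, and the layer thicknesses and k values are illustrative, drawn from the conductivity list earlier in this text:

    # Per-layer R-values (L/k) add in series; the conduction-only U-value is 1 / total R.
    layers = [
        # (name, thickness L in m, thermal conductivity k in W/(m*K))
        ("wood panelling", 0.012, 0.15),
        ("cork board",     0.050, 0.05),
        ("sandstone",      0.150, 2.4),
    ]

    r_total = sum(L / k for _, L, k in layers)   # K*m^2/W
    u_value = 1.0 / r_total                      # W/(m^2*K), ignoring convection and radiation

    for name, L, k in layers:
        print(f"{name:15s} R = {L / k:5.3f} K*m^2/W")
    print(f"total R = {r_total:.3f} K*m^2/W,  U (conduction only) = {u_value:.2f} W/(m^2*K)")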

Origins
The thermal conductivity of a system is determined by how atoms comprising the
system interact. There are no simple, correct expressions for thermal conductivity.
There are two different approaches for calculating the thermal conductivity of a system.
The first approach employs the Green-Kubo relations. Although this expression is exact*, calculating the thermal conductivity of a dense fluid or solid using this relation requires the use of molecular dynamics computer simulation.

• The term exact is applied to mean that the equations are solvable.

The second approach is based upon the relaxation time approach. Due to the
anharmonicity within the crystal potential, the phonons in the system are known to
scatter. There are three main mechanisms for scattering:

• Boundary scattering, a phonon hitting the boundary of a system;


• Mass defect scattering, a phonon hitting an impurity within the system and
scattering;
• Phonon-phonon scattering, a phonon breaking into two lower energy phonons or
a phonon colliding with another phonon and merging into one higher energy
phonon.
Thermal expansion

In physics, thermal expansion is the tendency of matter to increase in volume or pressure when heated. For liquids and solids, the amount of expansion will normally vary depending on the material's coefficient of thermal expansion. For gases, the change in volume or pressure is related to the container that the gas is in, and it can be easily estimated by the ideal gas law. When things expand, tensile forces are created. When things contract, compressive forces are created.

To accurately calculate the thermal expansion of a substance, a more advanced equation of state must be used. Such an equation can calculate thermal expansion along with many other state functions.
For solid materials with a significant length, like rods or cables, an estimate of the amount of thermal expansion can be described by the ratio of strain

    ε_thermal = (L_final − L_initial) / L_initial

where L_initial is the initial length before the change of temperature and L_final is the final length recorded after the change of temperature.

For most solids, thermal expansion relates directly with temperature. Thus, the change in either the strain or temperature can be estimated by

    ε_thermal = α ΔT

where α is the coefficient of thermal expansion in inverse kelvins and ΔT = T_final − T_initial is the difference of the temperature between the two recorded strains, measured in degrees Celsius or kelvins.

A number of materials contract on heating within certain temperature ranges; we usually speak of negative thermal expansion, rather than thermal contraction, in such cases. For example, the coefficient of thermal expansion of liquid water is negative below 4 °C, where water has its maximum density; it is zero at 4 °C and positive above 4 °C.
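A minimal sketch of the linear-expansion estimate above for a steel-like rod; the coefficient 12 × 10−6 K−1 is a typical textbook figure used here purely for illustration:

    alpha = 12e-6   # 1/K, assumed typical linear expansion coefficient for a steel-like solid
    L0 = 25.0       # m, initial length of the rod or rail
    dT = 35.0       # K, temperature rise

    thermal_strain = alpha * dT
    dL = thermal_strain * L0

    print(f"thermal strain = {thermal_strain:.2e}")
    print(f"elongation = {dL * 1000:.1f} mm over a {L0:.0f} m length")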
In materials engineering, the three primary types of materials have well-defined rates of expansion. Polymers expand as much as 10 times more than metals, which expand more than ceramics. Thermal expansion generally decreases with increasing bond energy. See PVT relation.
In general, liquids expand more than solids, and gases expand more than liquids. This is due to
the relative amount of energy contained in the molecules in each state. When things expand, they
take up more space as they are moving around more vigorously, not because the molecules
themselves are growing in size.
Heat-induced expansion has to be taken into account in many structures, such as railways and bridges, which may buckle without the use of expansion joints. Similar techniques are applied in buildings, water pipes, and road construction.
This phenomenon can be beneficial as well, and is used in techniques like shrink-fitting.
Thermometers are an example of thermal expansion. The liquid in them is heated and it expands
in the only direction it can, along the tube.

---------------------------------------------------------------------------------------------------------------------
------------------------------------------------------------------------------------------------------

Material properties (thermodynamics)


The thermodynamic properties of materials are intensive thermodynamic parameters which are
specific to a given material. Each is directly related to a second order differential of a
thermodynamic potential. Examples for a simple 1-component system are:

• Compressibility (or its inverse, the bulk modulus)
  • Isothermal compressibility: βT = −(1/V) (∂V/∂P)T
  • Adiabatic compressibility: βS = −(1/V) (∂V/∂P)S
• Specific heat (note: the extensive analog is the heat capacity)
  • Specific heat at constant pressure: cP = (T/N) (∂S/∂T)P
  • Specific heat at constant volume: cV = (T/N) (∂S/∂T)V
• Coefficient of thermal expansion: α = (1/V) (∂V/∂T)P

where P is pressure, V is volume, T is temperature, S is entropy, and N is the number of particles.
For a single component system, only three second derivatives are needed in order to derive all
others, and so only three material properties are needed to derive all others. For a single
component system, the "standard" three parameters are the isothermal compressibility βT, the
specific heat at constant pressure cP, and the coefficient of thermal expansion α.

For example, the following equations are true:

    cP = −(T/N) (∂²G/∂T²)P
    βT = −(1/V) (∂²G/∂P²)T
    α = (1/V) ∂²G/∂P∂T

The three "standard" properties are in fact the three possible second derivatives of the Gibbs free energy with respect to temperature and pressure.

------------------------------------------------------------------------------------------------
Thermal insulation materials
The term thermal insulation can refer to materials used to reduce the rate of heat
transfer, or the methods and processes used to reduce heat transfer.
Heat is transferred from one material to another by conduction, convection and/or
radiation. Insulators minimize the transfer of heat energy. In home insulation, the R-
value is an indication of how well a material insulates. The major types of insulation are
associated with the major types of heat transfer:

• Reflectors reduce radiative heat transfer.


• Foams, fibrous materials or spaces reduce conductive heat transfer by reducing
physical contact between objects.
• Foams, fibrous materials or evacuated spaces reduce convective heat transfer
by stopping or retarding the movement of fluids (liquids or gases) around the
insulated object.

Combinations of some of these methods are often used, for example the combination of
reflective surfaces and vacuum in a vacuum flask.
Understanding heat transfer is important when planning how to insulate an object or a
person from heat or cold, for example with correct choice of insulated clothing, or laying
insulating materials beneath in-floor heat cables or pipes in order to direct as much heat
as possible upwards into the floor surface and reduce heat loss to the ground
underneath.
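
To make the role of the R-value concrete, conductive heat loss through a layered wall can be estimated as Q = A·ΔT/R_total, with the layers' R-values adding in series. The sketch below uses assumed, illustrative R-values (SI units, m²·K/W), not data from this text:

# Steady-state conduction through layers in series: Q = A * dT / R_total,
# where R_total is the sum of the layers' R-values (m^2*K/W).
# The layer values below are assumptions for illustration only.
wall_layers = {
    "outer brick":       0.08,
    "mineral wool batt": 2.50,
    "plasterboard":      0.06,
    "inside air film":   0.12,
}

def heat_loss(area_m2: float, delta_T: float, r_values) -> float:
    """Return conductive heat flow in watts through layers in series."""
    r_total = sum(r_values)
    return area_m2 * delta_T / r_total

q = heat_loss(area_m2=15.0, delta_T=20.0, r_values=wall_layers.values())
print(f"R_total = {sum(wall_layers.values()):.2f} m2K/W, heat loss = {q:.0f} W")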

Materials used for thermal insulation


Many different materials can be used as insulators. Many organic insulators are made
from petrochemicals and recycled plastic. Many inorganic insulators are made from
recycled materials such as glass and furnace slag.

Trapped air insulators

Most insulators in common use rely on the principle of trapping air to reduce convective
and conductive heat transfer, but not radiative. These insulators can be fibrous (e.g.
down feathers and asbestos), cellular (e.g. cork or plastic foam), or granular (e.g.
sintered refractory materials).
The quality of such an insulator depends on:
• The degree to which air flow is eliminated (large cells of trapped air will have
internal convection currents)
• The amount of solid material surrounding the air (large percentages of air are
better, as this reduces thermal bridging within the insulator)
• The degree to which the properties of the insulator are appropriate to its use:
o Stability at the temperatures encountered (e.g. refractory materials used in
kilns)
o Mechanical properties (e.g. softness and flexibility for clothes, hardness
and toughness for steam pipe insulation)
o Service lifetime (due to thermal breakdown, water resistance or resistance
to microbial decomposition)

Solid insulators

Any material with low thermal conductivity can be used to reduce conductive heat
transfer. Astronomical telescope lenses are held in place by solid fiberglass supports to
prevent heat variations from slightly warping the lens. A ceramic block or tile will keep
a kitchen counter from being damaged by a hot pot.
For a list of good and bad insulators, see list of thermal conductivities.

Choice of insulation
Often, one mode of heat transfer predominates, leading to a specific choice of
insulation.
Some materials are good insulators against only one of the heat-transfer mechanisms,
but poor insulators against another. For example, metals are good radiative insulators,
but poor conductive insulators, so their use as thermal reflective insulators in buildings
is limited to situations where they can be installed in contact with air and not with solid
material, such as on metal roofs, in attics (as a radiant barrier) or in cavity walls when
trapped air (as air pockets, bubbles or foam) is next to the layer of metal. When physical
contact is made with the layer of metal, the desired thermal resistance is lost and the
opposite effect results: the metal then acts as a thermal conductor rather than an
insulator.

Effect of humidity

Damp materials may lose most of their insulative properties. The choice of insulation
often depends on the means used to manage humidity (water vapor) on one side or the
other of the thermal insulator. Clothing and building insulation depend greatly on this to
function as expected.

Heat bridging

Comparatively more heat flows through a path of least resistance than through insulated
paths. This is known as a thermal bridge, heat leak, or short-circuiting. Insulation around
a bridge is of little help in preventing heat loss or gain due to thermal bridging; the
bridging has to be rebuilt with smaller or more insulative materials. A common example
of this is an insulated wall which has a layer of rigid insulating material between the
studs and the finish layer. When a thermal bridge is desired, it can be a heat source,
heat sink or a heat pipe.
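
The impact of a thermal bridge can be estimated by treating the bridged and well-insulated areas as parallel heat paths, whose U-values (U = 1/R) add in proportion to their area fractions. The sketch below uses assumed R-values and area fractions purely for illustration:

# Parallel heat paths: total U-value is the area-weighted sum of each path's U = 1/R.
# Values below are assumed for illustration (not from this text).
def effective_u(paths):
    """paths: iterable of (area_fraction, R_value). Returns area-weighted U (W/m2K)."""
    return sum(frac / r for frac, r in paths)

insulated_only = [(1.00, 3.0)]                 # uniform R = 3.0 m2K/W
with_bridge    = [(0.90, 3.0), (0.10, 0.3)]    # 10% of area bridged at R = 0.3

u1, u2 = effective_u(insulated_only), effective_u(with_bridge)
print(f"U without bridge: {u1:.2f} W/m2K, with 10% bridging: {u2:.2f} W/m2K "
      f"({u2 / u1:.1f}x more heat flow)")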

Optimum insulation thickness

Industry standards are often "rules of thumb", developed over many years, that balance
many conflicting goals: what people will pay for, manufacturing cost, local climate,
traditional building practices, and varying standards of comfort. Heat-transfer analysis
can be performed in large industrial applications, but in household situations
(appliances and building insulation), airtightness is key to reducing heat transfer due
to air leakage (forced or natural convection). Once airtightness is achieved, it has often
been sufficient to choose the thickness of the insulative layer based on rules of thumb.
Diminishing returns are achieved with each successive doubling of the insulative layer.
It can be shown that for some systems, there is a minimum insulation thickness required
for an improvement to be realized.[1]
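
The diminishing returns mentioned above are easy to see for pure one-dimensional conduction, where Q = k·A·ΔT/t: each doubling of the thickness t halves the remaining heat flow, so each successive doubling saves less in absolute terms. A minimal sketch with assumed values for k, A and ΔT:

# Diminishing returns: Q = k * A * dT / t for a slab of thickness t.
# Assumed illustrative values: k = 0.04 W/mK (fibrous insulation), A = 10 m^2, dT = 20 K.
k, A, dT = 0.04, 10.0, 20.0

def q_slab(thickness_m: float) -> float:
    return k * A * dT / thickness_m

t = 0.025  # start at 25 mm
prev_q = q_slab(t)
for _ in range(4):
    t *= 2
    q = q_slab(t)
    print(f"t = {t*1000:5.0f} mm: Q = {q:6.1f} W (saved {prev_q - q:5.1f} W by this doubling)")
    prev_q = q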

Personal insulation
Clothing is chosen to maintain the temperature of the human body.
To offset high ambient heat, clothing must enable sweat to evaporate (cooling by
evaporation). In hot conditions and during physical exertion, the billowing of fabric
during movement creates air currents that increase evaporation and cooling. A layer of
fabric insulates slightly and keeps skin temperatures cooler than they would otherwise be.
To combat cold, removing moisture from the skin remains essential, while several layers may be
necessary to match one's internal heat production to the heat lost to wind, ambient
temperature, and radiation of heat into space. Insulation against conduction of heat into
solid materials is also crucial, particularly for footwear.

Building insulation
Main article: Building insulation
(Image caption: common insulation applications in an apartment building in Mississauga, Ontario, Canada.)

Maintaining acceptable temperatures in buildings (by heating and cooling) uses a large
proportion of total energy consumption worldwide. When well insulated,
a building:

• is energy-efficient, thus saving the owner money.


• provides more uniform temperatures throughout the space. There is less
temperature gradient both vertically (between ankle height and head height) and
horizontally from exterior walls, ceilings and windows to the interior walls, thus
producing a more comfortable occupant environment when outside temperatures
are extremely cold or hot.
• has minimal recurring expense. Unlike heating and cooling equipment, insulation
is permanent and does not require maintenance, upkeep, or adjustment.

Many forms of thermal insulations also absorb noise and vibration, both coming from
the outside and from other rooms inside the house, thus producing a more comfortable
occupant environment.
See also weatherization and thermal mass; both describe important methods of saving
energy and creating comfort.

Industrial insulation
In industry, energy has to be expended to raise, lower, or maintain the temperature of
objects or process fluids. If these are not insulated, this increases the heat energy
requirements of a process, and therefore the cost and environmental impact.

Insulation in space travel


Spacecraft have very demanding insulation requirements. Light weight is essential, as
extra mass on a vehicle to be launched into Earth orbit or beyond is extremely
expensive. In space, there is no atmosphere to attenuate the sun's
radiated energy, so the surface of objects in space heats up very quickly. In space, heat
cannot be given off by convective heat transfer, nor conducted to another object. Multi-
layer insulation, the gold foil often seen covering satellites and space probes, is used to
control thermal radiation, as are specialty paints.
Launch and re-entry place severe mechanical stresses on spacecraft, so the strength of
an insulator is critically important (as seen by the failure of insulating foam on the Space
Shuttle Columbia). Re-entry through the atmosphere generates very high temperatures,
requiring insulators with excellent thermal properties, for example the reinforced carbon-
carbon composite nose cone and silica fiber tiles of the Space Shuttle.
Thermoplastics
Thermoplastics and thermoplastic materials soften when heated and harden when cooled.
They can withstand many heating and cooling cycles and are often suitable for recycling.

Most thermoplastics consist of polymers, long chains of molecules that contain smaller,
repeating units called monomers. Typically, the monomers are joined to one another by covalent
bonds to form the polymer chain. Addition polymers are thermoplastic materials in which a
rearrangement of bonds joins monomers together without the loss of atoms or molecules.
Condensation polymers are formed by a reaction in which a molecule, usually water, is lost
during bond formation.

Some thermoplastics and thermoplastic materials contain fillers such as powders or fibers
to provide improved strength and/or stiffness. Fibers can be either chopped or wound, and
commonly include glass, fiberglass, or cloth. Some products contain solid lubricant fillers such
as graphite or molybdenum disulfide. Others contain aramid fibers, metal powders, or inorganic
fillers with ceramics and silicates.

There are many material grades and types of thermoplastics and thermoplastic materials.
Examples include monomers, intermediates, binders, base polymers, elastomers, and rubber
materials. Composite materials consist of a matrix and a dispersed, fibrous or continuous second
phase. Semi-finished or shaped stock forms include bars, sheets, film, profiles, and hollow or
angled materials. Spheres, shims, and rectangular or hexagonal products are also available.
Fabricated or finished shapes or parts are formed through a variety of molding, casting,
extrusion, pultrusion, machining, welding, and grinding processes. Resins, liquids, gels, and
powders are common raw materials. Electrical, electronic and optical-grade materials are also
available.

Thermoplastics and thermoplastic materials are based upon a variety of chemical systems.
Examples include acrylics and polyacrylates; butyl, polybutene and polyisobutylene; polymers
such as liquid crystal polymer (LCP) and polyolefin; ethylene copolymers such as ethylene
acrylic acid (EAA); fluoropolymers such as polytetrafluoroethylene (PTFE) and polyvinylidene
fluoride (PVDF); and vinyl and polyvinyl chloride (PVC).

Common thermoplastics and thermoplastic materials include ionomers, ketones such as
polyetheretherketone (PEEK), polyamides and polycarbonates, polyester and polyether block
amide (PBA), and polyphenylene oxide (PPO) and polyphenylene sulphide (PPS). Styrene-
isoprene-styrene (SIS) and styrene-butadiene-styrene (SBS) copolymers are used in pressure
sensitive adhesive (PSA) applications. Styrene butadiene rubber (SBR) has good resistance to
petroleum hydrocarbons and fuels. Styrene acrylonitrile copolymers include styrene acrylonitrile
(SAN), acrylic styrene acrylonitrile (ASA) and acrylonitrile ethylene styrene (AES).

Selecting thermoplastics and thermoplastic materials requires an analysis of physical,
mechanical, thermal, electrical, optical and processing specifications. Physical specifications
include overall thickness, overall width or outer diameter (OD), overall length, and inner
diameter (ID). Mechanical properties include tensile strength or break, tensile modulus, and
elongation.

Thermal specifications include use temperature, deflection temperature, thermal conductivity,
and coefficient of thermal expansion (CTE). Electrical resistivity, dielectric strength, and
dielectric constant (relative permittivity) are important electrical properties. Index of refraction
and transmission are optical specifications. Processing properties for thermoplastics and
thermoplastic materials include viscosity, melt flow index (MFI) and water absorption.

Thermoplastics and thermoplastic materials provide a variety of features. Products that are
designed for electrical and electronics applications often provide protection against electrostatic
discharge (ESD), electromagnetic interference (EMI), or radio frequency interference (RFI).
Materials that are electrically conductive, resistive, insulating, or suitable for high voltage
applications are commonly available. Flame retardant materials reduce the spread of flames or
resist ignition when exposed to high temperatures.

Thermal compounds form a thermally conductive layer on a substrate, either between
components or within a finished electronic product. Some thermoplastics and thermoplastic
materials contain water-based or waterborne resins. Others contain solvent-based resins that use
a volatile organic compound (VOC) to thin or alter viscosity.

--------------------------------------------------------------------------------------

A thermoplastic is a material that is plastic or deformable, melts to a liquid when heated and
freezes to a brittle, glassy state when cooled sufficiently.

Most thermoplastics are high molecular weight polymers whose chains associate through weak
van der Waals forces (polyethylene); stronger dipole-dipole interactions and hydrogen bonding
(nylon); or even stacking of aromatic rings (polystyrene). Thermoplastic polymers differ from
thermosetting polymers (Bakelite; vulcanized rubber) as they can, unlike thermosetting
polymers, be remelted and remoulded. Many thermoplastic materials are addition polymers; e.g.,
vinyl chain-growth polymers such as polyethylene and polypropylene.

Temperature dependence
Thermoplastics are elastic and flexible above a glass transition temperature Tg, specific for each
one — the midpoint of a temperature range in contrast to the sharp freezing point of a pure
crystalline substance like water. Below a second, higher melting temperature, Tm, also the
midpoint of a range, most thermoplastics have crystalline regions alternating with amorphous
regions in which the chains approximate random coils. The amorphous regions contribute
elasticity and the crystalline regions contribute strength and rigidity, as is also the case for non-
thermoplastic fibrous proteins such as silk. (Elasticity does not mean they are particularly
stretchy; e.g., nylon rope and fishing line.) Above Tm all crystalline structure disappears and the
chains become randomly interdispersed. As the temperature increases above Tm, viscosity
gradually decreases without any distinct phase change.

Thermoplastics can go through melting/freezing cycles repeatedly and the fact that they can be
reshaped upon reheating gives them their name. This quality makes thermoplastics recyclable.
The processes required for recycling vary with the thermoplastic. The plastics used for pop bottles
are a common example of thermoplastics that can be and are widely recycled. Animal horn,
made of the protein α-keratin, softens on heating, is somewhat reshapable, and may be regarded
as a natural, quasi-thermoplastic material.

Some thermoplastics normally do not crystallize: they are termed "amorphous" plastics and are
useful at temperatures below the Tg. They are frequently used in applications where clarity is
important. Some typical examples of amorphous thermoplastics are PMMA, PS and PC.
Generally, amorphous thermoplastics are less chemically resistant and can be subject to stress
cracking. Other thermoplastics will crystallize to a certain extent and are called "semi-crystalline" for
this reason. Typical semi-crystalline thermoplastics are PE, PP, PBT and PET. The speed and
extent to which crystallization can occur depends in part on the flexibility of the polymer chain.
Semi-crystalline thermoplastics are more resistant to solvents and other chemicals.
If the crystallites are larger than the wavelength of light, the thermoplastic is hazy or opaque.
Semi-crystalline thermoplastics become less brittle above Tg. If a plastic with otherwise
desirable properties has too high a Tg, it can often be lowered by adding a low-molecular-weight
plasticizer to the melt before forming (Plastics extrusion; molding) and cooling.

A similar result can sometimes be achieved by adding non-reactive side chains to the monomers
before polymerization. Both methods make the polymer chains stand off a bit from one another.
Before the introduction of plasticizers, plastic automobile parts often cracked in cold winter
weather. Another method of lowering Tg (or raising Tm) is to incorporate the original plastic into
a copolymer, as with graft copolymers of polystyrene, or into a composite material. Lowering Tg
is not the only way to reduce brittleness. Drawing (and similar processes that stretch or orient the
molecules) or increasing the length of the polymer chains also decrease brittleness.

Although modestly vulcanized natural and synthetic rubbers are stretchy, they are elastomeric
thermosets, not thermoplastics. Each has its own Tg, and will crack and shatter when cold
enough so that the crosslinked polymer chains can no longer move relative to one another. But
they have no Tm and will decompose at high temperatures rather than melt. Recently,
thermoplastic elastomers have become available.

List of thermoplastics
• Acrylonitrile butadiene styrene (ABS)
• Acrylic
• Celluloid
• Cellulose acetate
• Ethylene-Vinyl Acetate (EVA)
• Ethylene vinyl alcohol (EVAL)
• Fluoroplastics (PTFEs, including FEP, PFA, CTFE, ECTFE, ETFE)
• Ionomers
• Kydex, a trademarked acrylic/PVC alloy
• Liquid Crystal Polymer (LCP)
• Polyacetal (POM or Acetal)
• Polyacrylates (Acrylic)
• Polyacrylonitrile (PAN or Acrylonitrile)
• Polyamide (PA or Nylon)
• Polyamide-imide (PAI)
• Polyaryletherketone (PAEK or Ketone)
• Polybutadiene (PBD)
• Polybutylene (PB)
• Polybutylene terephthalate (PBT)
• Polyethylene terephthalate (PET)
• Polycyclohexylene dimethylene terephthalate (PCT)
• Polycarbonate (PC)
• Polyhydroxyalkanoates (PHAs)
• Polyketone (PK)
• Polyester
• Polyethylene (PE)
• Polyetheretherketone (PEEK)
• Polyetherimide (PEI)
• Polyethersulfone (PES)- see Polysulfone
• Polyethylenechlorinates (PEC)
• Polyimide (PI)
• Polylactic acid (PLA)
• Polymethylpentene (PMP)
• Polyphenylene oxide (PPO)
• Polyphenylene sulfide (PPS)
• Polyphthalamide (PPA)
• Polypropylene (PP)
• Polystyrene (PS)
• Polysulfone (PSU)
• Polyvinyl chloride (PVC)
• Polyvinylidene chloride (PVDC)
• Spectralon
Yield strength
Yield strength, or the yield point, is defined in engineering and materials science as the stress at
which a material begins to plastically deform. Prior to the yield point the material will deform
elastically and will return to its original shape when the applied stress is removed. Once the yield
point is passed some fraction of the deformation will be permanent and non-reversible.
Knowledge of the yield point is vital when designing a component since it generally represents
an upper limit to the load that can be applied. It is also important for the control of many
materials production techniques such as forging, rolling, or pressing.

In structural engineering, yield is the permanent plastic deformation of a structural member
under stress. This is a soft failure mode which does not normally cause catastrophic failure
unless it accelerates buckling.
In the 3D space of the principal stresses (σ1, σ2, σ3), the yield points for all possible stress
states together form a yield surface.

Definition

It is often difficult to precisely define yield due to the wide variety of stress-strain behaviors
exhibited by real materials. In addition there are several possible ways to define the yield point in
a given material:

• The point at which dislocations first begin to move. Given that dislocations begin to
move at very low stresses, and the difficulty in detecting such movement, this definition
is rarely used.
• Elastic Limit - The lowest stress at which permanent deformation can be measured. This
requires a complex interactive load-unload procedure and is critically dependent on the
accuracy of the equipment and the skill of the operator.
• Proportional Limit - The point at which the stress-strain curve becomes non-linear. In
most metallic materials the elastic limit and proportional limit are essentially the same.
• Offset Yield Point (proof stress) - Due to the lack of a clear border between the elastic
and plastic regions in many materials, the yield point is often defined as the stress at
some arbitrary plastic strain (typically 0.2% [1]). This is determined by the intersection of
a line offset from the linear region by the required strain. In some materials there is
essentially no linear region and so a certain value of plastic strain is defined instead.
Although somewhat arbitrary, this method allows for a consistent comparison of
materials and is the most common (a numerical sketch of the procedure follows this list).
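
As noted in the last item above, the offset method can be applied numerically: fit the elastic slope, shift that line by 0.2% strain, and find where it crosses the stress-strain curve. The sketch below applies this to a synthetic Ramberg-Osgood-style curve; the constants E, K and n are invented for illustration and do not describe any particular material:

import numpy as np

# 0.2% offset (proof stress) method on a synthetic stress-strain curve.
# The material model below (E, K, n) is invented purely for illustration.
E, K, n = 200e9, 600e6, 0.1          # Pa, Pa, strain-hardening exponent

def stress_from_strain(total_strain):
    """Synthetic monotonic curve: invert strain = sigma/E + (sigma/K)**(1/n) by interpolation."""
    sigmas = np.linspace(1e6, 800e6, 20000)
    strains = sigmas / E + (sigmas / K) ** (1.0 / n)
    return np.interp(total_strain, strains, sigmas)

strain = np.linspace(0, 0.02, 2000)
stress = stress_from_strain(strain)

offset = 0.002                        # 0.2% plastic strain
offset_line = E * (strain - offset)   # elastic line shifted right by the offset
idx = np.argmax(stress <= offset_line)   # first crossing of curve and offset line
print(f"0.2% offset yield strength ~ {stress[idx] / 1e6:.0f} MPa")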

Yield criterion
A yield criterion, often expressed as a yield surface, is a hypothesis concerning the limit of
elasticity under any combination of stresses. There are two interpretations of yield criterion: one
is purely mathematical, taking a statistical approach, while other models attempt to provide a
justification based on established physical principles. Since stress and strain are tensor quantities
they can be described on the basis of three principal directions; in the case of stress these are
denoted by σ1, σ2 and σ3.
The following represent the most common yield criteria as applied to an isotropic material
(uniform properties in all directions). Other equations have been proposed or are used in
specialist situations.

Maximum Principal Stress Theory - Yield occurs when the largest principal stress exceeds the
uniaxial tensile yield strength, i.e. when σ1 ≥ σy. Although this criterion allows for a quick and
easy comparison with experimental data it is rarely suitable for design purposes.

Maximum Principal Strain Theory - Yield occurs when the maximum principal strain reaches
the strain corresponding to the yield point during a simple tensile test. In terms of the principal
stresses this is determined by the equation: σ1 − ν(σ2 + σ3) ≥ σy, where ν is Poisson's ratio.

Maximum Shear Stress Theory - Also known as the Tresca criterion, after the French scientist
Henri Tresca. This assumes that yield occurs when the shear stress exceeds the shear yield
strength τy: (σ1 − σ3)/2 ≥ τy = σy/2, where σ1 and σ3 are the largest and smallest principal stresses.

Total Strain Energy Theory - This theory assumes that the stored energy associated with elastic
deformation at the point of yield is independent of the specific stress tensor. Thus yield occurs
when the strain energy per unit volume is greater than the strain energy at the elastic limit in
simple tension. For a 3-dimensional stress state this is given by:
σ1² + σ2² + σ3² − 2ν(σ1σ2 + σ2σ3 + σ3σ1) ≥ σy².

Distortion Energy Theory - This theory proposes that the total strain energy can be separated
into two components: the volumetric (hydrostatic) strain energy and the shape (distortion or
shear) strain energy. It is proposed that yield occurs when the distortion component exceeds that
at the yield point for a simple tensile test. This is generally referred to as the von Mises criterion
and is expressed as: (σ1 − σ2)² + (σ2 − σ3)² + (σ3 − σ1)² ≥ 2σy². Based on a different theoretical
underpinning this expression is also referred to as octahedral shear stress theory.
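
For a given set of principal stresses, the Tresca and von Mises criteria reduce to simple formulas that can be evaluated directly. The sketch below checks both against an arbitrary, illustrative stress state and yield strength:

# Evaluate Tresca and von Mises yield criteria for a principal stress state (Pa).
# The stress state and yield strength below are arbitrary illustrative values.
def tresca_equivalent(s1, s2, s3):
    """Maximum shear stress criterion: yield when max|si - sj| reaches sigma_y."""
    return max(abs(s1 - s2), abs(s2 - s3), abs(s3 - s1))

def von_mises_equivalent(s1, s2, s3):
    """Distortion energy criterion: yield when this equivalent stress reaches sigma_y."""
    return (0.5 * ((s1 - s2) ** 2 + (s2 - s3) ** 2 + (s3 - s1) ** 2)) ** 0.5

sigma_y = 250e6
s1, s2, s3 = 180e6, 60e6, -40e6

for name, s_eq in [("Tresca", tresca_equivalent(s1, s2, s3)),
                   ("von Mises", von_mises_equivalent(s1, s2, s3))]:
    status = "yields" if s_eq >= sigma_y else "remains elastic"
    print(f"{name:9s}: equivalent stress {s_eq / 1e6:.0f} MPa -> {status}")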

Factors influencing yield stress


The stress at which yield occurs is dependent on both the rate of deformation (strain rate) and,
more significantly, the temperature at which the deformation occurs. Early work by Alder and
Philips in 1954 found that the relationship between yield stress and strain rate (at constant
temperature) was best described by a power law of the form σy = C(ε̇)^m, where C
is a constant and m is the strain rate sensitivity. The latter generally increases with temperature,
and materials in which m reaches a value greater than ~0.5 tend to exhibit superplastic behaviour.
Later, more complex equations were proposed that simultaneously dealt with both temperature
and strain rate: σy = (1/α)·sinh⁻¹[(Z/A)^(1/n)], where α, A and n are constants and Z is the
temperature-compensated strain rate, often described by the Zener-Hollomon
parameter: Z = ε̇·exp(QHW/(R·T)), where QHW is the activation energy for hot deformation,
R is the gas constant and T is the absolute temperature.
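
A short numerical illustration of these relationships is given below; the activation energy and the constants A, α and n are assumed, order-of-magnitude values chosen only to show how Z and the flow stress respond to temperature, not measured material data:

import math

# Zener-Hollomon parameter and hyperbolic-sine flow stress.
# All constants below are assumed, order-of-magnitude values for illustration only.
R = 8.314            # J/(mol*K), gas constant
Q_HW = 300e3         # J/mol, activation energy for hot deformation (assumed)
A, alpha, n = 1e12, 1.2e-8, 5.0   # material constants (assumed); alpha in 1/Pa

def zener_hollomon(strain_rate: float, T: float) -> float:
    """Temperature-compensated strain rate Z = eps_dot * exp(Q_HW / (R*T))."""
    return strain_rate * math.exp(Q_HW / (R * T))

def flow_stress(strain_rate: float, T: float) -> float:
    """sigma = (1/alpha) * asinh((Z/A)**(1/n))."""
    Z = zener_hollomon(strain_rate, T)
    return math.asinh((Z / A) ** (1.0 / n)) / alpha

for T in (1100.0, 1300.0):            # absolute temperatures in kelvin
    print(f"T = {T:.0f} K: Z = {zener_hollomon(1.0, T):.2e}, "
          f"flow stress ~ {flow_stress(1.0, T) / 1e6:.0f} MPa")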

Implications for structural engineering


Yielded structures have a lower and less constant modulus of elasticity, so deflections increase
and buckling strength decreases, and both become more difficult to predict. When load is
removed, the structure will remain permanently bent, and may have residual pre-stress. If
buckling is avoided, structures have a tendency to adopt a more efficient shape that will be better
able to sustain (or avoid) the loads that bent it. Because of this, highly engineered structures rely
on yielding as a graceful failure mode which allows fail-safe operation. In aerospace
engineering, for example, no safety factor is needed when comparing limit loads (the highest
loads expected during normal operation) to yield criteria. Safety factors are only required when
comparing limit loads to ultimate failure criteria (buckling or rupture). In other words, a plane
which undergoes extraordinary loading beyond its operational envelope may bend a wing
slightly, but this is considered to be a fail-safe failure mode which will not prevent it from
making an emergency landing.
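
In practice this check is simple arithmetic: the stress at limit load is compared directly against yield, while the ultimate check applies a factor (the value 1.5 commonly quoted in aerospace is used here only as an assumption). A minimal sketch with invented numbers:

# Compare working stresses to yield and ultimate allowables.
# The stresses and factors below are invented for illustration only.
def margin_of_safety(allowable: float, applied: float, factor: float = 1.0) -> float:
    """MS = allowable / (factor * applied) - 1; MS >= 0 means the check passes."""
    return allowable / (factor * applied) - 1.0

yield_strength, ultimate_strength = 350e6, 480e6   # Pa (illustrative)
stress_at_limit_load = 300e6                       # Pa (illustrative)

ms_yield = margin_of_safety(yield_strength, stress_at_limit_load)        # no factor vs yield
ms_ult = margin_of_safety(ultimate_strength, stress_at_limit_load, 1.5)  # assumed 1.5 factor vs ultimate
print(f"Yield check MS = {ms_yield:+.2f}, ultimate check MS = {ms_ult:+.2f}")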
