
Size and Electromagnetic radiation

Properties in macro and nanoscale


We have always assumed that these properties are constant for a given substance (gold
always acts the same no matter how much of it you have), and in our macroscale
experience they have been. Even though we measure these properties for large numbers
of particles, we assume that the results hold for a group of particles of any size.
Using new tools that allow us to see small groups of molecules whose size is in the
nanoscale, we find that such groups often exhibit different properties and behaviors
than larger particles of the same substance!
Meaningless properties:
Some physical properties of substances, for example, don’t necessarily make sense at
the nanoscale.
How would you define the boiling temperature of a substance that has only 50 molecules?
Boiling temperature is based on the average kinetic energy of the molecules needed for
the vapor pressure to equal the atmospheric pressure. The vapor pressure results from
the average force per unit area exerted by the fast moving particles in the vapor bubbles
in the water. When you only have 50 molecules of water, it is highly unlikely that a
bubble would form so it doesn’t make sense to talk about vapor pressure.
Why do these properties change at the nanoscale?

When we look at nanosized particles of substances, there are four main things that
change from macroscale objects.
•First, due to the small mass of the particles, gravitational forces are negligible. Instead
electromagnetic forces are dominant in determining the behavior of atoms and
molecules.
•Second, at nanoscale sizes, we need to use quantum mechanical descriptions of
particle motion and energy transfer instead of the classical mechanical descriptions.
•Third, nanosized particles have a very large surface area to volume ratio.
•Fourth, at this size, the influences of random molecular motion play a much greater
role than they do at the macroscale.
How does the dominance of electromagnetic forces make a difference?

There are four basic forces known in nature: gravity, electromagnetism, the nuclear
force (strong and weak).
(i) The gravitational force is the force of attraction between the masses of two objects.
This force is directly proportional to the masses of the two objects and inversely
proportional to the square of the distance between the objects. Because the mass of
nanoscale objects is so small, the force of gravity has very little effect on the
attraction between objects of this size.
F = G·m1·m2 / r²
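A quick numerical sketch makes the point concrete; the particle sizes, separation, and single elementary charges below are assumed, illustrative values, not numbers from the text:

```python
import math

G = 6.674e-11         # gravitational constant, N m^2 kg^-2
K = 8.988e9           # Coulomb constant, N m^2 C^-2
E_CHARGE = 1.602e-19  # elementary charge, C

# Hypothetical example: two 10 nm gold spheres, 20 nm apart centre-to-centre,
# each carrying a single elementary charge.
radius = 5e-9                    # m
rho_gold = 19300                 # density of gold, kg m^-3
mass = rho_gold * (4 / 3) * math.pi * radius**3
r = 20e-9                        # separation, m

f_gravity = G * mass * mass / r**2       # Newton's law of gravitation
f_coulomb = K * E_CHARGE * E_CHARGE / r**2  # Coulomb's law, same 1/r^2 form

print(f"gravity: {f_gravity:.2e} N, electrostatic: {f_coulomb:.2e} N")
print(f"electrostatic / gravitational ratio: {f_coulomb / f_gravity:.2e}")
```

Even a single elementary charge produces a force more than twenty orders of magnitude larger than the mutual gravity of the particles, which is why electromagnetic forces dominate nanoscale behavior.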

(ii) Electromagnetic forces are forces of attraction and repulsion between objects
based on their charge and magnetic properties. These forces increase with the
charge or the magnetism of each object and decrease as the distance between the
objects grows. They are not affected by the masses of the objects, so they can be
very strong even between nanosized particles. Examples of these types of
interactions include chemical bonding and intermolecular forces.

(iii) The other two forces, the strong nuclear force and the weak nuclear force, are
interactions between the particles that compose the nucleus. These forces are
only significant at extremely short distances and therefore become negligible in
the nanoscale range. The nuclear (strong) force is responsible for keeping the
components of atoms together thus is dominant on the sub-atomic scale. The
weak force is associated with radioactivity (i.e., beta decay) and other nuclear
reactions.
The four basic forces in nature, and the scales at which these forces are
influential.

Scale (size range)                        Influential forces
Cosmic scale (10^7 m and bigger)          Gravitational; Electromagnetic*
Macroscale (10^-2 m to 10^6 m)            Gravitational; Electromagnetic**
Microscale (10^-3 m to 10^-7 m)           Gravitational; Electromagnetic
Nanoscale (10^-8 m to 10^-9 m)            Electromagnetic
Sub-atomic scale (10^-10 m and smaller)   Weak nuclear; Strong nuclear

∗In places like the sun, where matter is ionized and in rapid motion, electromagnetic forces
are dominant.
∗∗On a human scale, where matter is neither ionized nor moving rapidly, electromagnetism,
though important, is not dominant.


Physical properties change: Melting point
•Melting Point (Microscopic Definition)
Temperature at which the atoms, ions, or molecules in a substance have enough
energy to overcome the intermolecular forces that hold them in a “fixed”
position in the solid.
Surface atoms require less energy to move because they are in contact with fewer
atoms of the substance.
At the macroscale:
•Almost all the atoms are in the interior of the object (in the sketch, a surface
atom touches only 3 neighbors while an interior atom touches 7).
•Changing the object's size has a very small effect on the percentage of atoms
on the surface.
•The melting point does not depend on size.

At the nanoscale:
•The atoms are split between the interior and the surface of the object.
•Changing the object's size has a big effect on the percentage of atoms on the
surface.
•The melting point is lower for smaller particles.
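The claim that size matters only at the nanoscale can be sketched numerically. The one-atom-thick shell model and the 0.15 nm atomic radius below are illustrative assumptions, not values from the text:

```python
def surface_fraction(particle_radius_nm, atom_radius_nm=0.15):
    """Fraction of atoms lying in a one-atom-thick surface shell of a
    spherical particle: a crude geometric estimate, not a lattice count.
    The 0.15 nm default atomic radius is an illustrative assumption."""
    shell = 2 * atom_radius_nm          # shell thickness = one atomic diameter
    if particle_radius_nm <= shell:
        return 1.0                      # particle is essentially all surface
    inner = (particle_radius_nm - shell) / particle_radius_nm
    return 1.0 - inner**3               # 1 - (inner volume / total volume)

# From a 1 nm particle up to a 1 mm grain:
for radius_nm in (1, 5, 100, 1e6):
    frac = surface_fraction(radius_nm)
    print(f"R = {radius_nm:>9} nm -> {frac:.2%} of atoms at the surface")
```

The estimate gives roughly two thirds of the atoms on the surface at R = 1 nm, a few tenths of a percent at R = 100 nm, and an utterly negligible fraction at macroscopic sizes, matching the comparison above.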
Melting point of gold particles
The melting point decreases dramatically as the particle size gets below 5 nm
Physical properties change: electrical, magnetic
•At sizes less than 10 nm, gold loses its metallic properties and is no longer able to
conduct electricity.

•The electrical conductivity of CNTs is six orders of magnitude higher than that of copper.

•For magnetic materials such as Fe, Co, Ni, Fe3O4, etc., magnetic properties are
size dependent
Physical properties change: mechanical

Carbon in the form of graphite (like pencil lead) is soft and malleable; at the
nanoscale carbon can be 100 times stronger than steel and is six times lighter.
How does a quantum mechanical model make a
difference?
Classical mechanical models explain phenomena well at the macroscale, but at
very small (atomic) scales quantum mechanics must be used. For everyday
objects, which are much larger than atoms and much slower than the speed of
light, classical models do an excellent job. However, at the nanoscale there are
many phenomena that cannot be explained by classical mechanics. The following
are among the most important things that quantum mechanical models can
describe (but classical models cannot):
• Discreteness of energy
• The wave-particle duality of light and matter
• Quantum tunneling
• Uncertainty of measurement

What is quantum mechanics?


Quantum mechanics is the study of matter and radiation at an atomic level
Discreteness of Energy
If you look at the spectrum of light emitted by energetic atoms (such as the
orange-yellow light from sodium vapor street lights, or the blue-white light from
mercury vapor lamps) you will notice that it is composed of individual lines of
different colors. These lines represent the discrete energy levels of the electrons
in those excited atoms. When an electron in a high energy state jumps down to a
lower one, the atom emits a photon of light which corresponds to the exact
energy difference of those two levels (conservation of energy). The bigger the
energy difference, the more energetic the photon will be, and the closer its color
will be to the violet end of the spectrum. If electrons were not restricted to
discrete energy levels, the spectrum from an excited atom would be a continuous
spread of colors from red to violet with no individual lines.
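The discrete-level picture can be checked numerically for hydrogen's visible (Balmer) lines; the constants are standard reference values and the script is only an illustration of the level-difference bookkeeping:

```python
# Photon energies for hydrogen's Balmer series (n -> 2), illustrating that an
# excited atom emits only discrete lines, not a continuum.
RYDBERG_EV = 13.606   # hydrogen ground-state binding energy, eV
HC_EV_NM = 1239.84    # h*c expressed in eV*nm

for n in (3, 4, 5):
    # Exact energy difference between levels n and 2 (conservation of energy):
    energy = RYDBERG_EV * (1 / 2**2 - 1 / n**2)   # eV
    wavelength = HC_EV_NM / energy                # nm
    print(f"n={n} -> 2: E = {energy:.3f} eV, lambda = {wavelength:.0f} nm")
```

The run reproduces the familiar discrete lines (about 656 nm red, 486 nm blue-green, 434 nm violet): bigger energy differences give photons closer to the violet end, exactly as described above.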
The wave-particle duality of light and matter
In 1690 Christiaan Huygens theorized that light was composed of waves, while in
1704 Isaac Newton explained that light was made of tiny particles. Experiments
supported each of their theories. However, neither a completely-particle theory nor
a completely-wave theory could explain all of the phenomena associated with light!
So scientists began to think of light as both a particle and a wave.

How can something be both a particle and a wave at the same time? For one thing,
it is incorrect to think of light as a stream of particles moving up and down in a
wavelike manner. Actually, light and matter exist as particles; what behaves like a
wave is the probability of where that particle will be. The reason light sometimes
appears to act as a wave is because we are noticing the accumulation of many of the
light particles distributed over the probabilities of where each particle could be.

For most light phenomena––such as reflection, interference, and polarization––the
wave model of light explains things quite well. However, the wave model cannot
explain the photoelectric effect: when light shines on a metal surface, the surface
emits electrons, and whether electrons are emitted depends on the light's frequency
rather than its intensity.
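A short sketch of the particle-model bookkeeping for the photoelectric effect, E_k = hf − φ. The 2.3 eV work function is an assumed, sodium-like value chosen for illustration:

```python
# Photoelectric effect in the photon model: each photon delivers energy h*f,
# and an electron is ejected only if that exceeds the work function phi.
H_EV_S = 4.1357e-15   # Planck constant, eV*s
C = 2.998e8           # speed of light, m/s
work_function = 2.3   # eV (assumed, roughly sodium)

for wavelength_nm in (700, 500, 300):
    photon_energy = H_EV_S * C / (wavelength_nm * 1e-9)   # eV per photon
    kinetic = photon_energy - work_function               # max electron energy
    if kinetic > 0:
        print(f"{wavelength_nm} nm: electron ejected, E_k = {kinetic:.2f} eV")
    else:
        print(f"{wavelength_nm} nm: no emission "
              f"(photon energy {photon_energy:.2f} eV < phi)")
```

Below the threshold wavelength no electrons come out no matter how intense the light is, which is the observation the wave model cannot reproduce.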
Quantum tunneling
This is one of the most interesting phenomena to arise from quantum mechanics;
without it computer chips would not exist, and a ‘personal’ computer would
probably take up an entire room. As stated above, a wave determines the
probability of where a particle will be. When that probability wave encounters an
energy barrier most of the wave will be reflected back, but a small portion of it will
‘leak’ into the barrier. If the barrier is small enough, the wave that leaked through
will continue on the other side of it. Even though the particle doesn't have enough
energy to get over the barrier, there is still a small probability that it can ‘tunnel’
through it!
Example
Let's say you are throwing a rubber ball against a wall. You know you don't have enough energy to throw it
through the wall, so you always expect it to bounce back. Quantum mechanics, however, says that there is a
small probability that the ball could go right through the wall (without damaging the wall) and continue its
flight on the other side! With something as large as a rubber ball, though, that probability is so small that
you could throw the ball for billions of years and never see it go through the wall. But with something as
tiny as an electron, tunneling is an everyday occurrence.
On the flip side of tunneling, when a particle encounters a drop in energy there is a small probability that it
will be reflected. In other words, if you were rolling a marble off a flat level table, there is a small chance
that when the marble reached the edge it would bounce back instead of dropping to the floor! Again, for
something as large as a marble you'll probably never see something like that happen, but for photons (the
massless particles of light) it is a very real occurrence.
Consider a particle with energy E in the
inner region of a one-dimensional potential
well V(x). (A potential well is a potential that
has a lower value in a certain region of space
than in the neighboring regions.) In classical
mechanics, if E < V (the maximum height of
the potential barrier), the particle remains in
the well forever; if E > V , the particle
escapes. In quantum mechanics, the situation
is not so simple. The particle can escape even
if its energy E is below the height of the
barrier V , although the probability of
escape is small unless E is close to V . In that
case, the particle may tunnel through the
potential barrier and emerge with the same
energy E.
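The exponential sensitivity of tunneling to barrier width can be sketched with the standard rectangular-barrier estimate; the 5 eV electron and 10 eV, sub-nanometre barrier are illustrative assumptions:

```python
import math

HBAR = 1.0546e-34   # reduced Planck constant, J*s
M_E = 9.109e-31     # electron mass, kg
EV = 1.602e-19      # joules per electron-volt

def transmission(energy_ev, barrier_ev, width_nm):
    """Approximate tunneling probability through a rectangular barrier for
    E < V: T ~ 16(E/V)(1 - E/V) * exp(-2*kappa*L), the standard estimate
    for a thick barrier."""
    e, v = energy_ev * EV, barrier_ev * EV
    kappa = math.sqrt(2 * M_E * (v - e)) / HBAR   # decay constant inside barrier
    prefactor = 16 * (e / v) * (1 - e / v)
    return prefactor * math.exp(-2 * kappa * width_nm * 1e-9)

# A 5 eV electron meeting a 10 eV barrier: thin barriers leak noticeably,
# slightly thicker ones almost not at all.
for width in (0.2, 0.5, 1.0):
    print(f"width = {width} nm: T = {transmission(5, 10, width):.2e}")
```

Widening the barrier from 0.2 nm to 1 nm cuts the tunneling probability by roughly eight orders of magnitude, which is why tunneling is an everyday occurrence for electrons but invisible for rubber balls.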
Uncertainty of measurement
People are familiar with measuring things in the macroscopic world around them.
Someone pulls out a tape measure and determines the length of a table. At the atomic
scale of quantum mechanics, however, measurement becomes a very delicate process.
Let's say you want to find out where an electron is and where it is going. How would you
do it? Get a super high-powered magnifier and look for it? The very act of looking
depends upon light, which is made of photons, and these photons could have enough
momentum that once they hit the electron, they would change the electron’s course! So
by looking at (trying to measure) the electron, you change where it is. Werner
Heisenberg was the first to realize that certain pairs of measurements have an intrinsic
uncertainty associated with them. In other words, there is a limit to how exact a
measurement can be. This is usually not an issue at the macroscale, but it can be very
important when dealing with small distances and high velocities at the nanoscale and
smaller. For example, to know an electron’s position, you need to “freeze” it in a small
space. In doing so, however, you get poor velocity data (since you had to make the
velocity zero). If you are interested in knowing the exact velocity, you must let it move,
but this gives you poor position data.
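The position-velocity trade-off can be made quantitative with Heisenberg's relation Δx·Δp ≥ ħ/2; the 0.1 nm confinement length below is an illustrative assumption (roughly the size of an atom):

```python
# Heisenberg's relation: dx * dp >= hbar / 2. Confining an electron to a
# 0.1 nm region forces a large minimum velocity uncertainty.
HBAR = 1.0546e-34   # reduced Planck constant, J*s
M_E = 9.109e-31     # electron mass, kg

dx = 1e-10                    # position uncertainty, m (assumed)
dp_min = HBAR / (2 * dx)      # minimum momentum uncertainty, kg*m/s
dv_min = dp_min / M_E         # corresponding velocity uncertainty, m/s
print(f"electron: dv >= {dv_min:.2e} m/s")

# The same confinement applied to a 1 g marble gives dv of order 1e-22 m/s:
# utterly unobservable, which is why the macroscale looks classical.
print(f"marble:   dv >= {dp_min / 1e-3:.1e} m/s")
```

For the electron the minimum velocity uncertainty is hundreds of kilometres per second, so "freezing" it in a small space genuinely destroys the velocity information, as described above.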
Synthesis of nanoparticles
“Top-down” approach
Top-down construction involves working like a sculptor and chiseling
nanomaterials out of a bulk material. Microchip manufacturing is the
most common example of this top-down approach to producing
nanomaterials.

“Bottom-UP” approach
The bottom-up approach to nanomanufacturing uses self-assembly to arrange
nanoparticles into ordered structures spontaneously. Self-assembly is
accomplished by coaxing the molecules to their lowest energy state and letting
them reorient naturally. Carbon nanotubes are examples of nanomaterials that
are manufactured using the bottom-up approach.
Would you make toothpicks out of a
tree trunk or wouldn’t it be better to
start from smaller pieces?
Starting from “big things”
has meant producing things with the precision that “we were able to
achieve”, but at the same time producing lots of waste or pollution and
consuming a lot of energy. As we got better at technology, precision
improved and waste/pollution diminished, but the approach was still the
same.
Starting from “small things”
means absolute precision (down to a single atom!), complete control of
processes (no waste?), and the use of less energy (with less CO2, less
greenhouse effect, …).
The top-down approach often uses the traditional workshop or microfabrication
methods where externally-controlled tools are used to cut, mill and shape materials
into the desired shape and order.
Bottom-up approaches, in contrast, use the chemical properties of single molecules
to cause single-molecule components to automatically arrange themselves into
some useful conformation. These approaches utilize the concepts of molecular self-
assembly.
The trick with bottom-up manufacturing is understanding the chemical and
physical properties of nanoparticles and manipulating them to self-assemble.
Neither the top-down nor the bottom-up approach is superior at the moment;
each has its advantages and disadvantages. However, the bottom-up approach
is said by some to have the potential to be more cost-effective in the future.
Size and shape dependent melting temperature
of metallic nanoparticles
To account for the particle shape difference, a shape factor α is defined by the
following equation:

α = S′ / S

S = surface area of the spherical nanoparticle = 4πR² (R is its radius). S′ is the
surface area of the nanoparticle of any shape whose volume is the same as that of
the spherical nanoparticle, so that

S′ = α·4πR²
The particle surface area per surface atom is πr² (r is the atomic radius). The
number of surface atoms N is the ratio of the particle surface area to πr², which
simplifies to

N = 4α·R²/r²
Shape factors for different particles

Particle shape           Shape factor (α)
Spherical                1
Regular tetrahedral      1.49
Regular hexahedral       1.24
Regular octahedral       1.18
Disk-like                >1.15
Regular quadrangular     >1.24
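The tabulated shape factors can be reproduced directly from the definition α = S′/S using standard polyhedron surface-area and volume formulas; this sketch checks three of the entries:

```python
import math

def shape_factor(surface_from_side, volume_from_side):
    """Shape factor alpha = S'/S for a polyhedron with the same volume as a
    sphere of radius R = 1: solve volume(a) = (4/3)*pi for the side length a,
    then divide the polyhedron's surface area by the sphere's 4*pi."""
    sphere_volume = (4 / 3) * math.pi
    side = (sphere_volume / volume_from_side(1.0)) ** (1 / 3)
    return surface_from_side(side) / (4 * math.pi)

# Standard formulas: cube S=6a^2, V=a^3; regular tetrahedron S=sqrt(3)a^2,
# V=a^3/(6 sqrt(2)); regular octahedron S=2 sqrt(3)a^2, V=(sqrt(2)/3)a^3.
cube = shape_factor(lambda a: 6 * a**2, lambda a: a**3)
tetra = shape_factor(lambda a: math.sqrt(3) * a**2,
                     lambda a: a**3 / (6 * math.sqrt(2)))
octa = shape_factor(lambda a: 2 * math.sqrt(3) * a**2,
                    lambda a: math.sqrt(2) / 3 * a**3)

print(f"cube {cube:.2f}, tetrahedron {tetra:.2f}, octahedron {octa:.2f}")
```

The computed values (1.24, 1.49, 1.18) match the hexahedral, tetrahedral, and octahedral entries in the table.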
Different crystal of gold nanoparticles

Au nanoparticles with different shapes: (a) cube; (b) octahedron; (c) cuboctahedron; (d)
truncated octahedron; (e) sphere.
BE model for polyhedral shape

N_i^face = number of face atoms in the i-th face
E_i^face = cohesive energy per face atom in the i-th face
N_j^edge = number of edge atoms in the j-th edge
E_j^edge = cohesive energy per edge atom in the j-th edge
E_k^corn = cohesive energy of the corner atom k

The total number of exterior atoms N can be summed as

N = Σ_i N_i^face + Σ_j N_j^edge + Σ_k 1
Different Miller indices in cubic crystal

Input parameters of GBE model for Au nanoparticles
Mechanical properties
Concepts of Stress and Strain
Force divided by area is called stress. In tension and compression tests, the relevant area is that
perpendicular to the force. In shear or torsion tests, the area is perpendicular to the axis of
rotation.
σ = F/A0 tensile or compressive stress
τ = F/A0 shear stress
A tensile or compressive stress produces a change in dimensions: an elongation ∆L.
To enable comparison between specimens of different lengths, the elongation is
also normalized, this time to the original length L. This is called strain, ε.
ε = ∆L/L
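A minimal worked example of these definitions; all specimen numbers are assumed for illustration:

```python
import math

# Hypothetical tensile test specimen (all values assumed):
force = 5000.0      # applied tensile force, N
diameter = 0.010    # specimen diameter, m
length_0 = 0.050    # original gauge length, m
delta_l = 0.0001    # measured elongation, m

area_0 = math.pi * (diameter / 2) ** 2   # cross-section perpendicular to force
stress = force / area_0                  # sigma = F / A0, Pa
strain = delta_l / length_0              # epsilon = dL / L, dimensionless

print(f"stress = {stress / 1e6:.1f} MPa, strain = {strain:.4f}")
# In the elastic region, stress/strain gives the apparent Young's modulus:
print(f"E = {stress / strain / 1e9:.0f} GPa")
```

Normalizing force by area and elongation by length is what lets specimens of different sizes be compared on the same stress-strain curve.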
Elastic deformation: when the stress is removed, the material returns to the
dimensions it had before the load was applied. The deformation is reversible,
non-permanent.
Plastic deformation: when the stress is removed, the material does not return to
its previous dimensions; there is a permanent, irreversible deformation.
Yield point: if the stress is too large, the strain deviates from being proportional
to the stress. The point at which this happens is the yield point, because there the
material yields, deforming permanently.
Mechanical properties
Nanostructured materials reportedly exhibit unique microstructures and enhanced
mechanical performance. Metallurgists have known for many years that grain refinement
often leads to an improvement in properties of metals and alloys.
Grain: the individual single crystals in a polycrystalline material are called grains.
Aside from the electronics industry, most practical engineering materials are
polycrystalline rather than single crystals. Grain boundaries (GBs) are the surface
imperfections which separate adjacent grains of different orientations in a
polycrystalline material.
Nano Crystalline Materials
Nanostructured materials are traditionally
defined as a group of materials with grain sizes
between 1 and 100 nanometers.
The well-known Hall-Petch equation relates the yield stress σy to the average
grain size d:

σy = σ0 + k/√d

where σ0 is the friction stress and k is a constant.
Pictures of GBs in Si3N4, SiC
Reducing d from 10 µm to 10 nm would increase the strength by a factor of about 30
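That factor can be checked directly from the d^(−1/2) dependence of the strengthening term:

```python
import math

def hall_petch(sigma_0, k, grain_size):
    """Hall-Petch yield stress: sigma_y = sigma_0 + k / sqrt(d)."""
    return sigma_0 + k / math.sqrt(grain_size)

# With sigma_0 negligible, the strengthening term alone scales as d**-0.5:
# shrinking d from 10 um to 10 nm multiplies it by sqrt(1000) ~ 31.6,
# consistent with the "factor of about 30" quoted above.
ratio = math.sqrt(10e-6 / 10e-9)
print(f"strengthening ratio: {ratio:.1f}")
```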
Hall-Petch Plot for Electrodeposited
Nanocrystalline Ni
•Grain boundaries are barriers to slip.
•Barrier strength increases with
misorientation.
•Smaller grain size: more barriers to slip.
Dislocation pile-up and how it affects the strength of the material:
A material with a larger grain size can accommodate more dislocations (a
dislocation is a crystallographic defect, or irregularity, within a crystal
structure) piling up at a grain boundary, leading to a bigger driving force
for dislocations to move from one grain to another. Thus less force is needed
to move a dislocation out of a larger grain than out of a smaller one, so
materials with smaller grains exhibit higher yield stress.
Negative Hall-Petch slopes for nc-Pd and nc-Cu made by inert gas condensation
and compaction
Reason for deviation from the Hall-Petch equation:
The Hall-Petch relation was developed using the concept of dislocation pile-ups in
individual grains. In very fine-grained materials, e.g. nanocrystalline materials,
pile-ups cannot form when the grain size is less than a critical value dc; below dc,
weakening mechanisms (e.g. viscous-type flow) operate and lead to a decrease in
strength with decreasing grain size.

Critical grain size:

dc = 3Gb / (π(1 − ν)H)

G = shear modulus
b = Burgers vector
ν = Poisson's ratio
H = hardness
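A sketch evaluating dc with illustrative, roughly copper-like input values (assumed, not taken from the text):

```python
import math

def critical_grain_size(shear_modulus, burgers_vector, poisson_ratio, hardness):
    """d_c = 3*G*b / (pi * (1 - nu) * H): below this grain size dislocation
    pile-ups can no longer form."""
    return (3 * shear_modulus * burgers_vector
            / (math.pi * (1 - poisson_ratio) * hardness))

# Illustrative, copper-like inputs (assumed):
G = 48e9       # shear modulus, Pa
b = 0.256e-9   # Burgers vector, m
nu = 0.34      # Poisson's ratio
H = 1.0e9      # hardness, Pa

dc = critical_grain_size(G, b, nu, H)
print(f"d_c ~ {dc * 1e9:.0f} nm")
```

With these inputs dc comes out in the tens of nanometres, which is consistent with the grain-size range where negative Hall-Petch slopes are reported.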
Grain growth
• Grain growth occurs in polycrystalline materials to decrease the interfacial energy and
hence the total energy of the system.
• The nanocrystalline materials have a highly disordered, large interfacial component
(and are therefore in a high-energy state), so the driving force for grain growth is high.
• However, contrary to the expectations, experimental observations suggested that grain
growth in nanocrystalline materials, prepared by almost any method, is very small (and
almost negligible) up to a reasonably high temperature.
• The inherent stability of the nanocrystalline grains is due to:
Narrow grain size distribution, equiaxed grain morphology, low-energy grain boundary
structures, relatively flat boundary configurations, and porosity present in the samples.

• The kinetics of normal grain growth under isothermal annealing conditions can be
represented by

d² – d0² = Kt

where d is the grain size at time t, d0 is the mean initial grain size (at t = 0), and
K is a constant. The equation is obeyed at temperatures close to the melting point.
Assuming d >> d0, this reduces to the empirical form d = K′t^(1/n), where K′ is another
constant and n is the grain growth exponent, with values ranging between 2 and 3. The
activation energy for grain growth (Q) can be calculated from

K′ = K′0 exp(−Q/RT)

where K′0 is a pre-exponential constant and R is the gas constant.
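Measuring K′ at two temperatures lets you solve the Arrhenius form for Q; the rate constants and temperatures below are hypothetical, illustrative numbers:

```python
import math

R_GAS = 8.314  # gas constant, J mol^-1 K^-1

def activation_energy(k1, t1, k2, t2):
    """Solve K' = K'_0 * exp(-Q/(R*T)) at two temperatures for Q:
    ln(k1/k2) = (Q/R) * (1/t2 - 1/t1)."""
    return R_GAS * math.log(k1 / k2) / (1 / t2 - 1 / t1)

# Hypothetical grain-growth rate constants at two annealing temperatures:
q = activation_energy(k1=2.0e-16, t1=800.0, k2=5.0e-18, t2=700.0)
print(f"Q ~ {q / 1000:.0f} kJ/mol")
```

Dividing out K′0 this way is the standard two-point Arrhenius analysis; in practice an Arrhenius plot over several temperatures gives a more reliable Q.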
Coble Creep equation
Creep: The tendency of a ‘solid’ material to slowly move or
deform permanently to relieve stresses.
Coble Creep occurs through the diffusion of atoms in a
material along the grain boundaries, which produces a net
flow of material and a sliding of the grain boundaries.
The strain rate in a material experiencing Coble creep is given by

ε̇ = dε/dt = AσΩδDb / (d³kbT)

A = constant; σ = applied stress; Ω = atomic volume; δ = effective grain boundary
width; Db = grain boundary diffusion coefficient; d = grain size; kb = Boltzmann
constant; T = absolute temperature.
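The 1/d³ grain-size dependence is the key practical point; a short sketch makes the scaling concrete:

```python
def coble_rate_ratio(d_coarse, d_fine):
    """Coble creep strain rate scales as 1/d^3, all other factors equal,
    so the rate ratio between two grain sizes is (d_coarse/d_fine)^3."""
    return (d_coarse / d_fine) ** 3

# Refining grains from 1 um to 100 nm speeds Coble creep up a thousand-fold,
# which is why diffusional creep matters so much in nanocrystalline metals.
print(f"rate ratio: {coble_rate_ratio(1e-6, 100e-9):.0f}x")
```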

Enhanced diffusivity in nanocrystalline materials can have a significant effect on
mechanical properties such as creep and superplasticity, on the ability to dope
nanocrystalline materials efficiently with impurities at lower temperatures, and on
the synthesis of alloy phases in immiscible metals at temperatures much lower than
those usually required in other systems.
Superplasticity
Plastic deformation at the atomic level corresponds to the breaking/reforming of
bonds
Superplasticity is the capability of crystalline materials to undergo extensive
tensile deformation (200% or more) without necking or fracture
Nanocrystalline materials are often extremely hard and brittle, but several examples of
substantial ductility under mechanical load have been reported. Some nanocrystalline
alloys even exhibit superplasticity at relatively high strain rates and low temperatures.
Grain boundary (GB) sliding has been considered to be the principal microscopic
mechanism present during high temperature deformation
Characteristic of superplastic materials
 Small grain size: the small grain size reflects the key role of grain boundary diffusion
in the deformation process.
 Equiaxed grains: since grain boundary sliding is an important mechanism in
superplasticity (the greater the sliding, the greater the superplasticity), it is necessary
to establish a large shear stress across the grain boundary.
 High-energy grain boundaries: grain boundary diffusion and sliding are fastest along
high-energy boundaries. In nanocrystalline materials, diffusion studies suggest that the
grain boundary energies are higher than in more conventional polycrystalline materials.
 Presence of a second phase: at temperatures where grain boundary diffusion and
sliding are significant, grain growth is generally rapid. Additions of second-phase particles
are necessary to restrict grain growth. Grain growth in nanocrystalline materials has been
a serious problem.
Ductility
The decrease of ductility with decreasing average grain size into the nanometer range is
an inherent property of NC materials, because of the onset of deformation instability
(necking) at early stages of deformation.
In coarse-grained materials, necking can be delayed by work hardening, and higher
ductility is achieved.
What is required is to improve the ductility of NC materials, without loss of their
predominant high strength, by adjusting the microstructure of NC and nanostructured
(NS) materials.
Stress-strain curves for three fracture behaviors:
A. Very ductile: soft metals (e.g. Pb, Au) at room temperature; other metals, polymers,
and glasses at high temperature.
B. Moderately ductile fracture: typical for ductile metals.
C. Brittle fracture: cold metals, ceramics.
Composites
Al–carbon nanotube composite

Polymer nanocomposites:
Benefits:
• Chemical resistance
• Surface appearance
• Electrical conductivity
• Mechanical property improvements, e.g. strength, modulus, and dimensional stability
• Thermal stability, heat distortion temperature
• Optical clarity in comparison to conventionally filled polymers
• Flame retardancy and reduced smoke emissions
• Easy processing and recycling
Nanoclay composites

Surface treatment with cationic surfactants converts the hydrophilic clay surface into
a hydrophobic one, followed by polymerization.
