
Assignment: Describe the following devices with their images, applications, and how to use them.

Atomic Layer Deposition

ALD Applications
Atomic Layer Deposition (ALD) has the potential to optimize product design
across a wide array of applications, from making silicon chips run faster,
to increasing the efficiency of solar panels, to improving the safety of
medical implants.
The thin films, strong adhesion, and reproducible results of ALD
make it ideal for many research labs and large-batch manufacturing
environments.
The ability to produce thin films to exacting standards makes ALD a
compelling solution for depositing high-quality films on challenging
substrates, such as heterostructures, nanotubes, and organic
semiconductors.
Many technologists and researchers are replacing older deposition
techniques such as evaporation, sputtering, and chemical vapor deposition
with ALD to take advantage of its unique ability to produce conformal
coating in and around 3D objects in a highly consistent manner.

 High-k dielectrics
 Hydrophobic coating
 Passivation layer
 High aspect ratio diffusion barriers for Cu interconnects
 Conformal coatings for microfluidics applications
 Fuel cells, e.g. single metal coating for catalyst layers
Features
 Less than 1Å uniformity
 13" anodized Al chamber
 Minimal volume for fast cycle time and throughput
 Up to 8" substrate
 Heated chamber walls

 400°C substrate heater


 Onboard precursor glovebox
 Up to seven 50cc precursor cylinders
 300 l/sec maglev turbomolecular pumping package
 5×10−7 torr base pressure
 Fast pulse gas delivery valves
 Large area filter to capture unreacted precursors
 High aspect ratio structure coating
 Fully automated PC based, recipe driven
 LabVIEW user interface
 Computer controlled safety interlocks

Atomic Layer Deposition Uses


The NLD-4000 is a stand-alone, PC-controlled ALD system which is fully
automated and safety-interlocked, with the capability to deposit oxides and
nitrides (e.g. AlN, GaN, TaN, TiN, Al2O3, ZrO2, La2O3, HfO2) for
semiconductor, photovoltaic and MEMS applications. It has a 13"
aluminum reaction chamber with heated walls and a pneumatically lifted
top for easy chamber access. The system features an onboard glovebox
which can accommodate an array of up to seven heated or cooled 50cc
cylinders for precursors and reactants, incorporating fast-pulse delivery
valves for pulsed gas input. Unreacted precursors can be captured with a
heated filter on the chamber exhaust port. Recipes, temperature setpoints,
gas flows, pump-down and vent cycles, and the flushing of delivery lines
are all controlled automatically via LabVIEW software. Options include
automatic load/unload (without changing system footprint), Planar ICP
source with remote plasma for Plasma Enhanced ALD (Planar ICP
geometry maintains a small reaction chamber volume for faster cycle
times), and turbo-molecular pump for lower base pressures.

ALD, originally termed atomic layer epitaxy, was pioneered and patented by
T. Suntola and coworkers for growing ZnS (Suntola and Antson, 1977). The
initial motivation for the development of ALD came from thin-film
electroluminescent displays. After extensive research advances in the
past 40 years, ALD, which allows deposition at the atomic or
molecular level, can be used for a variety of thin films such as
metal oxides and nitrides, polymers, and inorganic–organic hybrid
materials with control at the atomic or molecular level (George,
2010). This versatile technique has been adopted by the
microelectronics industry but is also used to produce plasmonics
materials, medical devices, and biomaterials (Im et al., 2012;
Knez et al., 2007; Skoog et al., 2013).
ALD is conducted in cycles. In one cycle, alternating chemical
reactions of two precursors, “a” and “b,” occur with purging with an
inert gas in between. Precursor “a” is first saturated and chemisorbed
on the surface of the substrate. After excessive precursor “a” in the
gas phase is removed by purging with the inert gas, the precursor gas
“b” then reacts with the chemisorbed precursor “a” on the substrate,
producing a layer of the desired materials. The excess precursor “b”
and by-products are removed by a second purging and the growth
cycle is repeated until the desired film thickness is obtained. As an
example, the growth cycle to deposit a TiO2 thin film employing
gaseous precursors TiCl4 and H2O is presented in Fig. 1.3. The
sequential and saturating surface reactions ensure self-limitation of
the film growth. Thus, the film thickness depends only on the number
of reaction cycles, thereby enabling precise and simple control of the
thickness. The small deposition rate in ALD is also desirable in some
applications (Ritala and Niinisto, 2009).
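The pulse/purge sequence and cycle-counted thickness control described above can be sketched as a short loop. The growth-per-cycle value and function name are illustrative, not taken from any real tool's software:

```python
def ald_cycles(target_pm, gpc_pm):
    """Cycles needed to reach a target thickness (picometers), given an
    assumed growth-per-cycle (GPC) in picometers. Integer units avoid
    floating-point drift in the comparison."""
    cycles = 0
    thickness = 0
    while thickness < target_pm:
        # one ALD cycle, as in Fig. 1.3:
        #   1) pulse precursor "a": saturating chemisorption on the surface
        #   2) purge with inert gas: remove excess "a" from the gas phase
        #   3) pulse precursor "b": react with the chemisorbed "a"
        #   4) purge with inert gas: remove excess "b" and by-products
        cycles += 1
        thickness += gpc_pm  # self-limiting: fixed growth per cycle
    return cycles

# a 10 nm (10000 pm) film at an assumed 100 pm/cycle takes 100 cycles
print(ald_cycles(10000, 100))
```

Because each cycle adds the same increment, thickness control reduces to counting cycles, which is the precise and simple control the text refers to.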

Chemical vapor deposition applications


Chemical vapour deposition (CVD) is a technique that relies on the
formation of a gaseous species containing the coating element within a
coating retort or chamber. Alternatively, the gaseous species may be
generated external to the coating retort and introduced via a delivery
system. These gaseous species (eg, chromous chloride) are then allowed
to come into contact with the surfaces that require coating. The retort is
held at a high temperature, normally in excess of 800°C. The application of
this thermal energy and the presence of a reducing atmosphere results in
the decomposition of the molecules containing the coating element which
are subsequently deposited onto the surface of the substrate.
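As a rough illustration of why the retort is held above 800°C, the precursor decomposition rate can be modeled with an Arrhenius expression; the activation energy and prefactor below are hypothetical round numbers, not measured values for chromous chloride:

```python
import math

R = 8.314   # gas constant, J/(mol*K)
EA = 150e3  # assumed activation energy for precursor decomposition, J/mol
A = 1.0e10  # assumed pre-exponential factor, 1/s

def decomposition_rate(temp_c):
    """Arrhenius rate k = A * exp(-Ea / (R*T)) at a Celsius temperature."""
    t_k = temp_c + 273.15
    return A * math.exp(-EA / (R * t_k))

# moving the retort from 500 C to 800 C speeds decomposition by a few
# hundred times, which is why high retort temperatures are typical
ratio = decomposition_rate(800) / decomposition_rate(500)
```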

Using the CVD method, a wide variety of coatings may be formed, ranging
from soft, ductile coatings to those with hard, ceramic-like properties.
Coating thicknesses can vary from a few microns to over 200 µm, with
hardnesses in the range 150–3000 HV (0.1 kg). Coatings formed by the
CVD method are currently being used to combat the severe attrition of
components used in a variety of industrial situations where corrosion,
oxidation or wear is experienced.

The methods commonly used to apply CVD coatings will be discussed and
their advantages and limitations examined. Several case studies will be
highlighted, where CVD coatings have been used to solve specific
industrial problems.

Chemical vapor deposition Types

Hot-wall thermal CVD (batch operation type)

Plasma assisted CVD


CVD is practiced in a variety of formats. These processes generally differ in
the means by which chemical reactions are initiated.

 Classified by operating conditions:


 Atmospheric pressure CVD (APCVD) – CVD at atmospheric
pressure.
 Low-pressure CVD (LPCVD) – CVD at sub-atmospheric
pressures.[1] Reduced pressures tend to reduce unwanted gas-phase
reactions and improve film uniformity across the wafer.
 Ultrahigh vacuum CVD (UHVCVD) – CVD at very low pressure,
typically below 10−6 Pa (~10−8 torr). Note that in other fields, a lower
division between high and ultra-high vacuum is common, often
10−7 Pa.
Most modern CVD is either LPCVD or UHVCVD.

 Classified by physical characteristics of vapor:


 Aerosol assisted CVD (AACVD) – CVD in which the precursors are
transported to the substrate by means of a liquid/gas aerosol, which
can be generated ultrasonically. This technique is suitable for use
with non-volatile precursors.
 Direct liquid injection CVD (DLICVD) – CVD in which the precursors
are in liquid form (liquid or solid dissolved in a convenient solvent).
Liquid solutions are injected in a vaporization chamber towards
injectors (typically car injectors). The precursor vapors are then
transported to the substrate as in classical CVD. This technique is
suitable for use on liquid or solid precursors. High growth rates can
be reached using this technique.
 Classified by type of substrate heating:
 Hot wall CVD – CVD in which the chamber is heated by an external
power source and the substrate is heated by radiation from the
heated chamber walls.
 Cold wall CVD – CVD in which only the substrate is directly heated
either by induction or by passing current through the substrate itself
or a heater in contact with the substrate. The chamber walls are at
room temperature.
 Plasma methods (see also Plasma processing):
 Microwave plasma-assisted CVD (MPCVD)
 Plasma-Enhanced CVD (PECVD) – CVD that utilizes plasma to
enhance chemical reaction rates of the precursors.[2] PECVD
processing allows deposition at lower temperatures, which is often
critical in the manufacture of semiconductors. The lower
temperatures also allow for the deposition of organic coatings, such
as plasma polymers, that have been used for nanoparticle surface
functionalization.[3]
 Remote plasma-enhanced CVD (RPECVD) – Similar to PECVD
except that the wafer substrate is not directly in the plasma discharge
region. Removing the wafer from the plasma region allows
processing temperatures down to room temperature.
 Atomic-layer CVD (ALCVD) – Deposits successive layers of different
substances to produce layered, crystalline films. See Atomic layer
epitaxy.
 Combustion Chemical Vapor Deposition (CCVD) – Combustion
Chemical Vapor Deposition or flame pyrolysis is an open-atmosphere,
flame-based technique for depositing high-quality thin films and
nanomaterials.
 Hot filament CVD (HFCVD) – also known as catalytic CVD (Cat-CVD) or
more commonly, initiated CVD (iCVD), this process uses a hot filament
to chemically decompose the source gases.[4] The filament temperature
and substrate temperature thus are independently controlled, allowing
colder temperatures for better absorption rates at the substrate and
higher temperatures necessary for decomposition of precursors to free
radicals at the filament.[5]
 Hybrid Physical-Chemical Vapor Deposition (HPCVD) – This process
involves both chemical decomposition of precursor gas
and vaporization of a solid source.
 Metalorganic chemical vapor deposition (MOCVD) – This CVD process
is based on metalorganic precursors.
 Rapid thermal CVD (RTCVD) – This CVD process uses heating lamps
or other methods to rapidly heat the wafer substrate. Heating only the
substrate rather than the gas or chamber walls helps reduce unwanted
gas-phase reactions that can lead to particle formation.
 Vapor-phase epitaxy (VPE)
 Photo-initiated CVD (PICVD) – This process uses UV light to stimulate
chemical reactions. It is similar to plasma processing, given that
plasmas are strong emitters of UV radiation. Under certain conditions,
PICVD can be operated at or near atmospheric pressure.[6]
 Laser chemical vapor deposition (LCVD) – This CVD process uses
lasers to heat spots or lines on a substrate in semiconductor
applications. In MEMS and in fiber production, the lasers are used to
rapidly break down the precursor gas (process temperatures can exceed
2000 °C) to build up a solid structure in much the same way as
laser-sintering-based 3-D printers build up solids from powders.

Chemical vapor deposition Uses
CVD is commonly used to deposit conformal films and augment substrate
surfaces in ways that more traditional surface modification techniques are
not capable of. CVD is extremely useful in the process of atomic layer
deposition for depositing extremely thin layers of material. A variety of
applications for such films exist. Gallium arsenide is used in some integrated
circuits (ICs) and photovoltaic devices. Amorphous polysilicon is used in
photovoltaic devices. Certain carbides and nitrides confer
wear-resistance.[7] Polymerization by CVD, perhaps the most versatile of all
applications, allows for super-thin coatings which possess some very
desirable qualities, such as lubricity, hydrophobicity and weather-resistance,
to name a few.[8] CVD of metal-organic frameworks, a class of crystalline
nanoporous materials, has recently been demonstrated.[9] Applications for
these films are anticipated in gas sensing and low-k dielectrics. CVD
techniques are advantageous for membrane coatings as well, such as
those in desalination or water treatment, as these coatings can be
sufficiently uniform (conformal) and thin that they do not clog membrane
pores.
X-ray Photoelectron Spectroscopy Applications

In recent years, X-ray photoelectron spectroscopy (XPS) has emerged
from wood-adhesive bonding surface and interface research and analysis,
and has become an important analytical tool in the wood-adhesive
research area. To deepen the understanding of XPS technology and
expand its application in the field of adhesives, this paper describes the
basic working principles and functions of XPS, integrates the results with
examples from adhesive research, and thus illustrates the application
status of XPS in adhesive research, especially in surface and interface
studies. XPS will help improve the bonding strength of adhesives and the
quality of bonded materials.

X-ray photoelectron spectroscopy Theory


X-ray photoelectron spectroscopy (XPS) is a surface-sensitive quantitative
spectroscopic technique that measures the elemental composition at the
parts-per-thousand range, and the empirical formula, chemical state and
electronic state of the elements that exist within a material. Put more
simply, XPS is a useful measurement technique because it not only shows
which elements are within a film but also which other elements they are
bonded to. This means that if you have a metal oxide and you want to know
whether the metal is in a +1 or +2 state, XPS will allow you to find that
ratio. However, at most, the instrument will only probe about 20 nm into a
sample.

XPS spectra are obtained by irradiating a material with a beam of X-


rays while simultaneously measuring the kinetic energy and number
of electrons that escape from the top 0 to 10 nm of the material being
analyzed. XPS requires high vacuum (P ~ 10−8 millibar) or ultra-high
vacuum (UHV; P < 10−9 millibar) conditions, although a current area of
development is ambient-pressure XPS, in which samples are analyzed at
pressures of a few tens of millibar.
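The kinetic energies measured this way are converted to binding energies with the photoelectric equation, BE = hν − KE − φ, where φ is the spectrometer work function. A minimal sketch, using the standard Al Kα photon energy of 1486.6 eV and an assumed work function:

```python
AL_KALPHA_EV = 1486.6  # standard Al K-alpha photon energy, eV

def binding_energy(photon_ev, kinetic_ev, work_function_ev=4.5):
    """BE = h*nu - KE - phi; the work function here is an assumed value."""
    return photon_ev - kinetic_ev - work_function_ev

# a photoelectron detected at 1197.5 eV kinetic energy maps to a binding
# energy near 284.6 eV, i.e. the nominal C 1s region
be = binding_energy(AL_KALPHA_EV, 1197.5)
```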

XPS can be used to analyze the surface chemistry of a material in its as-
received state, or after some treatment, for example: fracturing, cutting or
scraping in air or UHV to expose the bulk chemistry, ion beam etching to
clean off some or all of the surface contamination (with mild ion etching) or
to intentionally expose deeper layers of the sample (with more extensive
ion etching) in depth-profiling XPS, exposure to heat to study the changes
due to heating, exposure to reactive gases or solutions, exposure to ion
beam implant, exposure to ultraviolet light.
XPS is also known as ESCA (electron spectroscopy for chemical analysis),
an abbreviation introduced by Kai Siegbahn's research group to emphasize
the chemical (rather than merely elemental) information that the technique
provides.

In principle XPS detects all elements. In practice, using typical laboratory-


scale X-ray sources, XPS detects all elements with an atomic number (Z)
of 3 (lithium) and above. It cannot easily detect hydrogen (Z = 1)
or helium (Z = 2).

Detection limits for most of the elements (on a modern instrument) are in
the parts per thousand range. Detection limits of parts per million (ppm) are
possible, but require special conditions: concentration at top surface or very
long collection time (overnight).

XPS is routinely used to analyze inorganic compounds, metal


alloys, semiconductors, polymers, elements, catalysts, glasses, ceramics, p
aints, papers, inks, woods, plant parts, make-up, teeth, bones, medical
implants, bio-materials, viscous oils, glues, ion-modified materials and
many others. XPS is less routinely used to analyze the hydrated forms of
some of the above materials by freezing the samples in their hydrated state
in an ultra-pure environment, and allowing or causing multilayers of ice to
sublime away prior to analysis. Such hydrated XPS analysis allows
hydrated sample structures, which may be different from vacuum-
dehydrated sample structures, to be studied.

Uses and capabilities

XPS is routinely used to determine:

 What elements are present, and in what quantities, within the top
1–12 nm of the sample surface
 What contamination, if any, exists on the surface or in the bulk of the
sample
 The empirical formula of a material that is free of excessive surface
contamination
 The chemical state of one or more of the elements in the sample, which
also gives information on the local bonding of atoms
 The binding energy of one or more electronic states
 The thickness of one or more thin layers (1–8 nm) of different materials
within the top 12 nm of the surface
 The density of electronic states
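Quantification from such spectra typically divides each element's peak area by a relative sensitivity factor (RSF) and normalizes to atomic percent. The peak areas and RSFs below are invented for illustration:

```python
def atomic_percent(peaks):
    """peaks: element -> (peak_area, rsf). Returns element -> atomic %."""
    corrected = {el: area / rsf for el, (area, rsf) in peaks.items()}
    total = sum(corrected.values())
    return {el: 100.0 * v / total for el, v in corrected.items()}

# hypothetical survey-scan peak areas for a titanium oxide surface
comp = atomic_percent({"Ti": (3000.0, 2.0), "O": (2000.0, 0.66)})
```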

Capabilities of advanced systems

 Measure uniformity of elemental composition across the top surface
(line profiling or mapping)
 Measure uniformity of elemental composition as a function of depth by
ion beam etching (depth profiling)
 Measure uniformity of elemental composition as a function of depth by
tilting the sample (angle-resolved XPS)

Chemical states and chemical shift

The ability to produce chemical state information (as distinguished from


merely elemental information) from the topmost few nm of any surface
makes XPS a unique and valuable tool for understanding the chemistry of
any surface, either as received, or after physical or chemical treatment(s).
In this context, "chemical state" refers to the local bonding environment of a
species in question. The local bonding environment of a species in
question is affected by its formal oxidation state, the identity of its nearest-
neighbor atom, its bonding hybridization to that nearest-neighbor atom, and
in some cases even the bonding hybridization between the atom in
question and the next-nearest-neighbor atom. Thus, while the nominal
binding energy of the C1s electron is 284.6 eV (some also use 285.0 eV as
the nominal value for the binding energy of carbon), subtle but reproducible
shifts in the actual binding energy, the so-called chemical shift, provide the
chemical state information referred to here.

Chemical-state analysis is widely used for the element carbon. Chemical-


state analysis of the surface of carbon-containing polymers readily reveals
the presence or absence of the chemical states of carbon, listed in
approximate order of increasing binding energy, as: carbide (-C2−), silicone
(-Si-CH3), methylene/methyl/hydrocarbon (-CH2-CH2-, CH3-CH2-, and -
CH=CH-), amine (-CH2-NH2), alcohol (-C-OH), ketone (-C=O), organic
ester (-COOR), carbonate (-CO32−), monofluoro-hydrocarbon (-CFH-CH2-),
difluoro-hydrocarbon (-CF2-CH2-), and trifluorocarbon (-CH2-CF3), to name
but a few examples.

Chemical state analysis of the surface of a silicon wafer readily reveals chemical
shifts due to the presence or absence of the chemical states of silicon in its
different formal oxidation states, such as: n-doped silicon and p-doped silicon
(metallic silicon in figure above), silicon suboxide (Si2O), silicon monoxide (SiO),
Si2O3, and silicon dioxide (SiO2). An example of this is seen in the figure above:
High-resolution spectrum of an oxidized silicon wafer in the energy range of the Si
2p signal.

X-ray diffraction applications


X-ray powder diffraction is most widely used for the identification of unknown
crystalline materials (e.g. minerals, inorganic compounds). Determination of
unknown solids is critical to studies in geology, environmental science, material
science, engineering and biology.
Other applications include:

 characterization of crystalline materials

 identification of fine-grained minerals such as clays and mixed layer clays


that are difficult to determine optically

 determination of unit cell dimensions

 measurement of sample purity



With specialized techniques, XRD can be used to:

 determine crystal structures using Rietveld refinement

 determine modal amounts of minerals (quantitative analysis)

 characterize thin films samples by:

o determining lattice mismatch between film and substrate and
inferring stress and strain

o determining dislocation density and quality of the film by rocking curve


measurements

o measuring superlattices in multilayered epitaxial structures

o determining the thickness, roughness and density of the film using


glancing incidence X-ray reflectivity measurements

 make textural measurements, such as the orientation of grains, in a


polycrystalline sample
X-ray Powder Diffraction Theory (XRD)

Max von Laue, in 1912, discovered that crystalline substances act as


three-dimensional diffraction gratings for X-ray wavelengths similar
to the spacing of planes in a crystal lattice. X-ray diffraction is now
a common technique for the study of crystal structures and atomic
spacing.
X-ray diffraction is based on constructive interference of
monochromatic X-rays and a crystalline sample. These X-rays are
generated by a cathode ray tube, filtered to produce monochromatic
radiation, collimated to concentrate, and directed toward the
sample. The interaction of the incident rays with the sample
produces constructive interference (and a diffracted ray) when
conditions satisfy Bragg's Law (nλ=2d sin θ). This law relates the
wavelength of electromagnetic radiation to the diffraction angle and
the lattice spacing in a crystalline sample. These diffracted X-rays
are then detected, processed and counted. By scanning the sample
through a range of 2θ angles, all possible diffraction directions of
the lattice should be attained due to the random orientation of the
powdered material. Conversion of the diffraction peaks to d-
spacings allows identification of the mineral because each mineral
has a set of unique d-spacings. Typically, this is achieved by
comparison of d-spacings with standard reference patterns.
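The conversion from measured peak positions to d-spacings follows directly from Bragg's law with n = 1; a small sketch using the CuKα wavelength of 1.5418 Å quoted in the instrumentation section:

```python
import math

CU_KALPHA_A = 1.5418  # weighted CuK-alpha wavelength, Angstrom

def d_spacing(two_theta_deg, wavelength_a=CU_KALPHA_A):
    """Solve n*lambda = 2*d*sin(theta) for d, with n = 1 and the
    measured peak position given as 2-theta in degrees."""
    theta = math.radians(two_theta_deg / 2.0)
    return wavelength_a / (2.0 * math.sin(theta))

# e.g. quartz's strong peak near 2-theta = 26.6 deg gives d of about
# 3.35 A, which is then matched against standard reference patterns
d = d_spacing(26.6)
```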

All diffraction methods are based on generation of X-rays in an X-


ray tube. These X-rays are directed at the sample, and the diffracted
rays are collected. A key component of all diffraction is the angle
between the incident and diffracted rays. Powder and single crystal
diffraction vary in instrumentation beyond this.

X-ray Powder Diffraction (XRD) Instrumentation


- How Does It Work?
X-ray diffractometers consist of three basic elements: an X-ray
tube, a sample holder, and an X-ray detector.

Bruker's X-ray Diffraction D8-Discover instrument.


X-rays are generated in a cathode ray tube by heating a filament to
produce electrons, accelerating the electrons toward a target by
applying a voltage, and bombarding the target material with
electrons. When electrons have sufficient energy to dislodge inner
shell electrons of the target material, characteristic X-ray spectra
are produced. These spectra consist of several components, the
most common being Kα and Kβ. Kα consists, in part, of Kα1 and Kα2.
Kα1 has a slightly shorter wavelength and twice the intensity of Kα2.
The specific wavelengths are characteristic of the target material
(Cu, Fe, Mo, Cr). Filtering, by foils or crystal monochromators, is
required to produce the monochromatic X-rays needed for diffraction.
Kα1 and Kα2 are sufficiently close in wavelength that a weighted
average of the two is used. Copper is the most common target
material for single-crystal diffraction, with CuKα radiation =
1.5418 Å. These X-rays are collimated and directed onto the sample.
As the sample and detector are rotated, the intensity of the reflected
X-rays is recorded. When the geometry of the incident X-rays
impinging the sample satisfies the Bragg Equation, constructive
interference occurs and a peak in intensity occurs. A detector
records and processes this X-ray signal and converts the signal to a
count rate which is then output to a device such as a printer or
computer monitor.
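The weighted average mentioned above follows from the 2:1 intensity ratio of Kα1 to Kα2; with the standard Cu component wavelengths it reproduces the 1.5418 Å value quoted:

```python
CU_KA1 = 1.54056  # Cu K-alpha1 wavelength, Angstrom (twice the intensity)
CU_KA2 = 1.54439  # Cu K-alpha2 wavelength, Angstrom

# intensity-weighted average with weights 2:1
cu_kalpha = (2.0 * CU_KA1 + 1.0 * CU_KA2) / 3.0
```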
The geometry of an X-ray diffractometer is such that the sample
rotates in the path of the collimated X-ray beam at an angle θ while
the X-ray detector is mounted on an arm to collect the diffracted X-
rays and rotates at an angle of 2θ. The instrument used to maintain
the angle and rotate the sample is termed a goniometer. For typical
powder patterns, data is collected at 2θ from ~5° to 70°, angles that
are preset in the X-ray scan.

Atomic absorption spectroscopy applications


The elemental composition of Chanca piedra (stone breaker leaf),
alternatively called Phyllanthus amarus (Euphorbiaceae), was determined
using atomic absorption spectroscopy (AAS). Chanca piedra, a tropical
medicinal plant, is reportedly useful in blocking kidney stone formation and
has anti-hepatitis B activity. The acid-digested samples were subjected to
AAS analysis. The atomic absorption spectrophotometer (PG-990) is a fully
automated instrument used in the flame configuration option, controlled by
a personal computer, with Microsoft Windows as the operating system.
Results showed that manganese (Mn) (228.9 ± 3.3
mg/kg), zinc (Zn) (41.7 ± 0.9 mg/kg), iron (Fe) (459.3 ± 3.3 mg/kg), calcium
(Ca) (521.9 ± 1.8 mg/kg) and magnesium (Mg) (397.4 ± 1.3 mg/kg) were
present in the plant. These high concentrations of calcium, iron and
magnesium in this plant should be useful for electrolyte balance,
enhancement of growth, bone and teeth formation, and activation of
enzyme reactions. It should also have implications for drug development in
this part of the world for the treatment of some kidney and other
related medical problems. The result equally shows the suitability
of Chanca piedra leaves for consumption and confirms its folkloric
applications in traditional medications.

Theory and instrumentation

The Atomic Absorption Spectroscopy technique


determines the concentrations of chemical elements present in
a given sample by measuring the absorbed radiation of the
chemical element of interest by reading the spectra produced
when the sample is excited. The three main techniques for
AAS; flame, graphite and hydride all have their own advantages
and disadvantages depending on analytical problems. The five
essential components of an atomic absorption
spectrophotometer, namely, the light source, the burner
assembly, optics, detector and signal processing are designed
such that each component produces minimum disruption to the
overall system, and many design features are installed to keep
the signal-to-noise ratio as high as possible. In this work, the line
source PG-990 Atomic Absorption Spectrophotometer, LS AAS
located at the Central Science Research Laboratory of the
Bowen University, Iwo was used in the flame configuration
mode for the elemental analysis. It is a fully automated
instrument for flame and/or graphite furnace analysis developed
by PG Instruments Ltd. This instrument incorporates two
background correction systems, the deuterium lamp method
and the self-reversal method.

Atomic absorption spectrometer analysis

Air was allowed to mix with acetylene gas from the gas cylinder in good
proportion to ignite the burner of the Spectrophotometer. Immediately after
ignition, the spectrophotometer was calibrated using blank (distilled water)
and standard solution (as supplied with the spectrophotometer). The
aerosol of the sample was then aspirated through the nebulizer into the
flame for analysis. Analysis was done for each element of interest at their
specific wavelength using the hollow cathode lamp of the element
under investigation. Finally, the result is displayed on the computer read-
out.
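The quantification step behind this procedure is a linear calibration: absorbance is assumed proportional to concentration (Beer-Lambert law), a line is fitted to the blank and standards, and the sample concentration is read from it. The absorbance values and concentrations below are invented for illustration:

```python
def fit_line(xs, ys):
    """Ordinary least-squares slope and intercept for y = m*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    m = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum(
        (x - mx) ** 2 for x in xs
    )
    return m, my - m * mx

# calibration: concentration of standards (mg/L) vs. measured absorbance
conc = [0.0, 1.0, 2.0, 4.0]          # blank plus three standards
absb = [0.002, 0.101, 0.198, 0.401]  # hypothetical readings
m, b = fit_line(conc, absb)

def concentration(absorbance):
    """Invert the calibration line for an unknown sample."""
    return (absorbance - b) / m

# a sample reading 0.25 absorbance corresponds to roughly 2.5 mg/L
sample_conc = concentration(0.25)
```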

Discussions and Conclusion


The determination of the elemental composition of Chanca piedra (stone
breaker leaf) using AAS technique showed that calcium had the highest
concentration of 521.9 ± 1.8 mg/kg, followed by iron with 459.3 ± 3.3
mg/kg, then, magnesium with 397.4 ± 1.3 mg/kg, and manganese followed
with 228.9 ± 3.3 mg/kg and lastly zinc with 41.7 ± 0.9 mg/kg. These high
concentrations of calcium, iron and magnesium in this plant should be
useful for electrolyte balance, enhancement of growth, bone and teeth
formation; activation of enzyme reactions and for drug development in the
pharmaceutical industry for the treatment of some kidney and other related
medical problems in this part of the world. The concentrations of Ca, Fe
and Mg could be attributed to the environment where the plant was
collected and the activities taking place there. The concentrations of Ca
and Mg might enhance the medicinal activity of the leaves of Chanca
piedra since both are minerals beneficial to the human body. Magnesium is
essential for life and is a co-factor in numerous enzymes involved in
phosphate transfer, muscle contractility and neuronal transmission.
Deficiency of Mg can result in tetany and lead to calcium deficiency [8]. Iron
is a constituent of haemoglobin, myoglobin, and a number of enzymes and,
therefore, is an essential nutrient for humans [9]. An association between
haemoglobin concentration and work capacity is the most clearly identified
functional consequence of iron deficiency [10]. Iron deficiency also has
been associated with decreased immune function as measured by changes
in several components of the immune system during iron deficiency. In
children, iron deficiency has been associated with apathy, short attention
span, irritability, and reduced ability to learn [11]. Manganese is a major
component of the mitochondrial antioxidant enzyme manganese
superoxide dismutase. Signs of deficiency include poor reproductive
performance, growth retardation, congenital malformations in the offspring,
abnormal formation of bone and cartilage, and impaired glucose tolerance
[12]. Zinc, a constituent of enzymes involved in most major metabolic
pathways, is an essential element for plants, animals, and humans [13].
The signs and symptoms of dietary zinc deficiency in humans include loss
of appetite, growth retardation, skin changes, and immunological
abnormalities. Pronounced zinc deficiency in men resulting in
hypogonadism and dwarfism has been found in the Middle East [14]. In
human patients with low plasma zinc levels, accelerated rates of wound
healing have been observed as a result of increased zinc intake,
suggesting that the zinc requirement of these subjects was not fully met by
their diets [15]. Adejumo and Ajayi [16], reported the mineral element
composition in Phyllanthus amarus as follows: calcium (2700.00 ± 400.00
mg/100g), iron (50.27 ± 1.47 mg/100g), magnesium (1766.67 ± 66.70
mg/100g), manganese (38.18 ± 2.07 mg/100g) and zinc (110.33 ± 2.01
mg/100g). Comparing these results with ours, we observe that high
concentrations of calcium and magnesium were also obtained in our study.
However, the variations in the concentration values for the other elements
may be due to the site and time of sample collection, environmental
influences, soil types and composition, as well as activities taking place
there [17]. In the 2009 study by Adejumo and Ajayi, the Phyllanthus
amarus samples were collected within the locality of Olabisi Onabanjo
University Teaching Hospital, Sagamu, Nigeria in August 2004, while
current sample was collected from Bowen University campus, Iwo.

Comparing the results obtained in Table 1 with the Recommended Dietary
Allowance (RDA) values per day, shown in Table 2, it is observed that
consumption of a few grams of this leaf is well within tolerable limits of the
RDA values. This result therefore shows the suitability of Chanca
piedra leaves for consumption and confirms its folkloric applications in
traditional medications, aside from having implications for drug development.

Proton Nuclear Magnetic Resonance Spectroscopy (H-NMR) Applications

NMR, or nuclear magnetic resonance spectroscopy, is a technique used to
determine a compound's unique structure. It identifies the carbon-hydrogen
framework of an organic compound. Using this method together with other
instrumental methods, including infrared and mass spectrometry, scientists are
able to determine the entire structure of a molecule. In this discussion, we will
focus on H-NMR, or proton magnetic resonance. Even though there are other
forms such as C-NMR and N-NMR, hydrogen (H-NMR) was the first and is the
most common nucleus used in nuclear magnetic resonance spectroscopy.
Proton nuclear magnetic resonance Theory
Proton nuclear magnetic resonance (proton NMR, hydrogen-1 NMR,
or 1H NMR) is the application of nuclear magnetic resonance in NMR
spectroscopy with respect to hydrogen-1 nuclei within the molecules of a
substance, in order to determine the structure of its molecules.[1] In
samples where natural hydrogen (H) is used, practically all of the hydrogen
consists of the isotope 1H (hydrogen-1; i.e. having a proton for a nucleus).
Simple NMR spectra are recorded in solution, and solvent protons must not
be allowed to interfere. Deuterated (deuterium = 2H, often symbolized as D)
solvents, especially for use in NMR, are preferred, e.g. deuterated water,
D2O; deuterated acetone, (CD3)2CO; deuterated methanol,
CD3OD; deuterated dimethyl sulfoxide, (CD3)2SO; and deuterated
chloroform, CDCl3. However, a solvent without hydrogen, such as carbon
tetrachloride, CCl4, or carbon disulfide, CS2, may also be used.
Historically, deuterated solvents were supplied with a small amount
(typically 0.1%) of tetramethylsilane (TMS) as an internal standard for
calibrating the chemical shifts of each analyte proton. TMS is
a tetrahedral molecule, with all protons being chemically equivalent, giving
one single signal, used to define a chemical shift of 0 ppm.[2] It is volatile,
making sample recovery easy as well. Modern spectrometers are able to
reference spectra based on the residual proton in the solvent (e.g. the
CHCl3, 0.01% in 99.99% CDCl3). Deuterated solvents are now commonly
supplied without TMS.
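The chemical shift scale that TMS anchors at 0 ppm is defined as a field-independent ratio: a signal's frequency offset from the reference (in Hz) divided by the spectrometer's operating frequency (in MHz). A minimal Python sketch of this calculation (the 2178 Hz / 300 MHz example numbers are illustrative assumptions, not values from the text):

```python
# Chemical shift in ppm: frequency offset from the TMS reference (Hz)
# divided by the spectrometer operating frequency (MHz).
def chemical_shift_ppm(offset_hz, spectrometer_mhz):
    """delta (ppm) = offset from reference (Hz) / operating frequency (MHz)."""
    return offset_hz / spectrometer_mhz

# A proton resonating 2178 Hz downfield of TMS on a 300 MHz instrument:
delta = chemical_shift_ppm(2178, 300)
print(f"{delta:.2f} ppm")  # prints 7.26 ppm
```

Because the shift is a ratio, the same proton gives the same ppm value on a 300 MHz or a 600 MHz spectrometer, which is why ppm is the universal reporting unit.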
Deuterated solvents permit the use of deuterium frequency-field lock (also
known as deuterium lock or field lock) to offset the effect of the natural drift
of the NMR's magnetic field. In order to provide deuterium lock, the
NMR constantly monitors the deuterium signal resonance frequency from
the solvent and makes changes to the magnetic field to keep the resonance
frequency constant.[3] Additionally, the deuterium signal may be used to
accurately define 0 ppm, as the resonant frequency of the lock solvent and
the difference between the lock solvent and 0 ppm (TMS) are well known.
How does it work?
The atomic nucleus is a spinning charged particle, and it generates a magnetic
field. Without an external applied magnetic field, the nuclear spins are random
and spin in random directions. But, when an external magnetic field is present,
the nuclei align themselves either with or against the field of the external magnet.
[Figure: nuclear spins without and with an external applied magnetic field;
see Bruice, Figures 14.1 and 14.2, and Thinkbook pp. 54-55.]

The α-spin state describes protons that align with the external magnetic field;
they are in the lower energy state. The β-spin state describes protons that align
against the external magnetic field; they are in the higher energy state.

ΔE is the energy difference between the α and β spin states, and it depends on
the applied magnetic field: the greater the strength of the applied field, the
larger the energy difference between the two spin states. When radiation that
has the same energy as ΔE is applied to the sample, the spin flips from the α to
the β state. Then the nuclei undergo relaxation, returning to their original
state; in this process they emit electromagnetic signals whose frequencies also
depend on ΔE. The H-NMR spectrometer reads these signals and plots them on a
graph of signal frequency versus intensity. Resonance is when the nuclei flip
back and forth between the α and β spin states due to the radiation applied to
them. To summarize, an NMR signal is observed when the radiation supplied
matches ΔE, and the energy required to cause a spin flip depends on the
magnetic environment experienced by the nucleus.
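The relationship between the applied field, ΔE, and the resonance frequency can be checked numerically. The Python sketch below uses standard physical constants and the 1H Larmor relation ν = γB₀/2π; the 7.05 T example field (a typical "300 MHz" magnet) and the room-temperature Boltzmann population ratio are illustrative assumptions, not values from the text:

```python
import math

h = 6.62607015e-34      # Planck constant, J s
kB = 1.380649e-23       # Boltzmann constant, J/K
gamma_H = 267.522e6     # 1H gyromagnetic ratio, rad s^-1 T^-1

def larmor_frequency_hz(B0_tesla):
    """Resonance (Larmor) frequency of a 1H nucleus in field B0: nu = gamma*B0/(2*pi)."""
    return gamma_H * B0_tesla / (2 * math.pi)

def spin_population_ratio(B0_tesla, temp_k=298.0):
    """Boltzmann ratio N(beta)/N(alpha), with delta_E = h*nu the alpha/beta gap."""
    delta_E = h * larmor_frequency_hz(B0_tesla)
    return math.exp(-delta_E / (kB * temp_k))

B0 = 7.05  # tesla; an assumed field for a "300 MHz" instrument
print(f"Larmor frequency: {larmor_frequency_hz(B0) / 1e6:.1f} MHz")
print(f"N(beta)/N(alpha): {spin_population_ratio(B0):.6f}")
```

The population ratio comes out extremely close to 1 (only a few spins per hundred thousand in excess in the lower state), which is why NMR is an inherently insensitive technique and benefits from stronger magnets.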

Gas chromatography–mass spectrometry (GC-MS)


Applications

Gas chromatography–mass spectrometry (GC-MS) is an analytical method
that combines the features of gas chromatography and mass spectrometry
to identify different substances within a test sample.[1] Applications of
GC-MS include drug detection, fire investigation, environmental analysis,
explosives investigation, and identification of unknown samples, including
material samples obtained from planet Mars during probe missions as early
as the 1970s. GC-MS can also be used in airport security to detect
substances in luggage or on human beings. Additionally, it can identify
trace elements in materials that were previously thought to have
disintegrated beyond identification. Like liquid chromatography–mass
spectrometry, it allows analysis and detection even of tiny amounts of a
substance.
GC-MS has been regarded as a "gold standard" for forensic substance
identification because it is used to perform a 100% specific test, which
positively identifies the presence of a particular substance. A nonspecific
test merely indicates that any of several in a category of substances is
present. Although a nonspecific test could statistically suggest the identity
of the substance, this could lead to false positive identification.

Gas chromatography–mass spectrometry Theory

The GC-MS is composed of two major building blocks: the gas
chromatograph and the mass spectrometer. The gas chromatograph
utilizes a capillary column whose separation performance depends on the
column's dimensions (length, diameter, film thickness) as well as the phase
properties (e.g. 5% phenyl polysiloxane). The difference in the chemical
properties between different molecules in a mixture, and their relative
affinity for the stationary phase of the column, will promote separation of
the molecules as the sample travels the length of the column. The
molecules are retained by the column and then elute (come off) from the
column at different times (called the retention time), and this allows the
mass spectrometer downstream to capture, ionize, accelerate, deflect, and
detect the ionized molecules separately. The mass spectrometer does this
by breaking each molecule into ionized fragments and detecting these
fragments using their mass-to-charge ratio.

GC-MS schematic
These two components, used together, allow a much finer degree of
substance identification than either unit used separately. It is not possible
to make an accurate identification of a particular molecule by gas
chromatography or mass spectrometry alone. The mass spectrometry
process normally requires a very pure sample while gas chromatography
using a traditional detector (e.g. Flame ionization detector) cannot
differentiate between multiple molecules that happen to take the same
amount of time to travel through the column (i.e. have the same retention
time), which results in two or more molecules that co-elute. Sometimes two
different molecules can also have a similar pattern of ionized fragments in a
mass spectrometer (mass spectrum). Combining the two processes
reduces the possibility of error, as it is extremely unlikely that two different
molecules will behave in the same way in both a gas chromatograph and a
mass spectrometer. Therefore, when an identifying mass spectrum
appears at a characteristic retention time in a GC-MS analysis, it typically
increases certainty that the analyte of interest is in the sample.
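The combined identification logic described above — require both a matching retention time and a matching mass spectrum — can be sketched in a few lines of Python. This is an illustrative toy, not production GC-MS software: the library entries, retention times, and fragment intensities are invented, and real instruments use curated spectral libraries with more elaborate scoring, although a cosine (dot-product) comparison of spectra is a common core idea:

```python
import math

def cosine_match(spectrum_a, spectrum_b):
    """Cosine similarity of two mass spectra given as {m/z: intensity} dicts."""
    mzs = set(spectrum_a) | set(spectrum_b)
    dot = sum(spectrum_a.get(m, 0.0) * spectrum_b.get(m, 0.0) for m in mzs)
    norm_a = math.sqrt(sum(v * v for v in spectrum_a.values()))
    norm_b = math.sqrt(sum(v * v for v in spectrum_b.values()))
    return dot / (norm_a * norm_b)

def identify(unknown, library, rt, rt_tolerance=0.1):
    """Require BOTH a matching retention time and the best spectral match."""
    candidates = {name: entry for name, entry in library.items()
                  if abs(entry["rt"] - rt) <= rt_tolerance}
    if not candidates:
        return None
    return max(candidates,
               key=lambda n: cosine_match(unknown, candidates[n]["spectrum"]))

# Toy library (illustrative values, not real reference spectra):
library = {
    "toluene":      {"rt": 4.52, "spectrum": {91: 100, 92: 60, 65: 12}},
    "ethylbenzene": {"rt": 6.10, "spectrum": {91: 100, 106: 30, 51: 8}},
}
unknown = {91: 100, 92: 58, 65: 14}
print(identify(unknown, library, rt=4.55))  # prints toluene
```

Filtering by retention time first and then scoring the spectrum mirrors why the combined technique is so specific: a false positive would need to fail both independent checks at once.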

Gas chromatography–mass spectrometry uses

GC/MS is a technique that can be used to separate volatile organic
compounds (VOCs) and pesticides. Portable GC units can be used to detect
pollutants in the air, and they are currently used for vapor intrusion
investigations. However, other uses of GC or MS, combined with other
separation and analytical techniques, have been developed for
radionuclides, explosive compounds such as Royal Demolition Explosive
(RDX) and Trinitrotoluene (TNT), and metals. Some of these are described
below.

A type of spectrometry can also be used to continuously monitor
incinerator emissions, in place of a standard method that collects samples
from a gas stream for laboratory analysis. That standard method has a
relatively long turnaround time, and it cannot indicate in real time that a
catastrophic release has occurred or that there is a system failure. With
real-time, continuous monitoring, all releases are monitored, and if there is
a system breakdown, the system can be turned off and/or the nearby
community can be notified.

Description
The Gas Chromatography/Mass Spectrometry (GC/MS) instrument
separates chemical mixtures (the GC component) and identifies the
components at a molecular level (the MS component). It is one of the
most accurate tools for analyzing environmental samples. The GC works
on the principle that a mixture will separate into individual substances
when heated. The heated gases are carried through a column with an
inert gas (such as helium). As the separated substances emerge from the
column opening, they flow into the MS. Mass spectrometry identifies
compounds by the mass of the analyte molecule. A "library" of known
mass spectra, covering several thousand compounds, is stored on a
computer. Mass spectrometry is considered the only definitive analytical
detector.
High-performance liquid chromatography

Applications
High-performance liquid chromatography (HPLC; formerly referred to
as high-pressure liquid chromatography) is a technique in analytical
chemistry used to separate, identify, and quantify each component in a
mixture. It relies on pumps to pass a pressurized liquid solvent containing
the sample mixture through a column filled with a solid adsorbent material.
Each component in the sample interacts slightly differently with the
adsorbent material, causing different flow rates for the different
components and leading to the separation of the components as they flow
out of the column.
HPLC has been used for manufacturing (e.g., during the production
process of pharmaceutical and biological products), legal (e.g., detecting
performance enhancement drugs in urine), research (e.g., separating the
components of a complex biological sample, or of similar synthetic
chemicals from each other), and medical (e.g., detecting vitamin D levels in
blood serum) purposes.[1]
Chromatography can be described as a mass transfer process
involving adsorption. HPLC relies on pumps to pass a pressurized liquid
and a sample mixture through a column filled with adsorbent, leading to the
separation of the sample components. The active component of the
column, the adsorbent, is typically a granular material made of solid
particles (e.g., silica, polymers, etc.), 2–50 μm in size. The components of
the sample mixture are separated from each other due to their different
degrees of interaction with the adsorbent particles. The pressurized liquid is
typically a mixture of solvents (e.g., water, acetonitrile and/or methanol)
and is referred to as a "mobile phase". Its composition
and temperature play a major role in the separation process by influencing
the interactions taking place between sample components and adsorbent.
These interactions are physical in nature, such as hydrophobic (dispersive),
dipole–dipole and ionic, most often a combination.
HPLC is distinguished from traditional ("low pressure") liquid
chromatography because operational pressures are significantly higher
(50–350 bar), while ordinary liquid chromatography typically relies on the
force of gravity to pass the mobile phase through the column. Due to the
small sample amount separated in analytical HPLC, typical column
dimensions are 2.1–4.6 mm diameter, and 30–250 mm length. Also HPLC
columns are made with smaller adsorbent particles (2–50 μm in average
particle size). This gives HPLC superior resolving power (the ability to
distinguish between compounds) when separating mixtures, which makes it
a popular chromatographic technique.
The schematic of an HPLC instrument typically includes a degasser,
sampler, pumps, and a detector. The sampler brings the sample mixture
into the mobile phase stream which carries it into the column. The pumps
deliver the desired flow and composition of the mobile phase through the
column. The detector generates a signal proportional to the amount of
sample component emerging from the column, hence allowing
for quantitative analysis of the sample components. A
digital microprocessor and user software control the HPLC instrument and
provide data analysis. Some models of mechanical pumps in an HPLC
instrument can mix multiple solvents together in ratios changing in time,
generating a composition gradient in the mobile phase. Various detectors
are in common use, such as UV/Vis, photodiode array (PDA) or based
on mass spectrometry. Most HPLC instruments also have a column oven
that allows for adjusting the temperature at which the separation is
performed.
Theoretical

HPLC separations have theoretical parameters and equations to describe
the separation of components into signal peaks when detected by
instrumentation such as a UV detector or a mass spectrometer. The
parameters are largely derived from two sets of chromatographic theory:
plate theory (as part of partition chromatography) and the rate theory of
chromatography (the Van Deemter equation). They can be put into practice
through analysis of HPLC chromatograms, although rate theory is
considered the more accurate theory.
They are analogous to the calculation of the retention factor for a paper
chromatography separation, but describe how well HPLC separates a
mixture into two or more components that are detected as peaks (bands)
on a chromatogram. The HPLC parameters are: the efficiency factor (N),
the retention factor (kappa prime), and the separation factor (alpha).
Together, these factors are variables in a resolution equation, which describes
how well two components' peaks are separated or overlap each other.
These parameters are mostly only used for describing HPLC reversed
phase and HPLC normal phase separations, since those separations tend
to be more subtle than other HPLC modes (e.g., ion exchange and size
exclusion).
Void volume is the amount of space in a column that is occupied by
solvent. It is the space within the column that is outside of the column's
internal packing material. Void volume is measured on a chromatogram as
the first component peak detected, which is usually the solvent that was
present in the sample mixture; ideally the sample solvent flows through the
column without interacting with the column, but is still detectable as distinct
from the HPLC solvent. The void volume is used as a correction factor.
The efficiency factor (N) practically measures how sharp component peaks
on the chromatogram are, as the ratio of a component peak's retention
time to the width of the peak at its widest point (at the baseline). Peaks
that are tall, sharp, and relatively narrow indicate that the separation
method efficiently removed the component from the mixture, i.e., high
efficiency. Efficiency is very dependent upon the HPLC column and the
HPLC method used. The efficiency factor is synonymous with plate number
and the 'number of theoretical plates'.
The retention factor (kappa prime) measures how long a component of the
mixture is retained by the column, based on the retention time of its
peak in a chromatogram (since HPLC chromatograms are a function of
time). Each chromatogram peak will have its own retention factor
(e.g., kappa1 for the retention factor of the first peak). This factor may be
corrected for by the void volume of the column.
Separation factor (alpha) is a relative comparison on how well two
neighboring components of the mixture were separated (i.e., two
neighboring bands on a chromatogram). This factor is defined in terms of a
ratio of the retention factors of a pair of neighboring chromatogram peaks,
and may also be corrected for by the void volume of the column. The
greater the separation factor value is over 1.0, the better the separation,
until about 2.0 beyond which an HPLC method is probably not needed for
separation. Resolution equations relate the three factors such that high
efficiency and separation factors improve the resolution of component
peaks in an HPLC separation.
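The factors described qualitatively above have standard textbook formulas, sketched below in Python; the numerical example values (void time, retention times, peak widths) are illustrative assumptions, not measurements from the text. The retention factor k' = (tR − t0)/t0 applies the void-time correction, the plate number N = 16(tR/W)² uses the baseline peak width, alpha is the ratio of neighboring retention factors, and resolution Rs = 2(tR2 − tR1)/(W1 + W2):

```python
def retention_factor(t_r, t_0):
    """k': retention time corrected by the void time t0, k' = (tR - t0)/t0."""
    return (t_r - t_0) / t_0

def plate_number(t_r, w_base):
    """Efficiency N = 16 * (tR / W)^2, with W the baseline peak width (same units as tR)."""
    return 16.0 * (t_r / w_base) ** 2

def separation_factor(k1, k2):
    """alpha: ratio of retention factors of two neighboring peaks (k2 >= k1)."""
    return k2 / k1

def resolution(t_r1, t_r2, w1, w2):
    """Rs = 2*(tR2 - tR1)/(W1 + W2); Rs > 1.5 is usually called baseline resolution."""
    return 2.0 * (t_r2 - t_r1) / (w1 + w2)

t0 = 1.0            # void time, min (first unretained peak)
t1, w1 = 5.0, 0.4   # peak 1 retention time and baseline width, min
t2, w2 = 6.0, 0.45  # peak 2
k1, k2 = retention_factor(t1, t0), retention_factor(t2, t0)
print(f"k'1={k1:.2f}  k'2={k2:.2f}  alpha={separation_factor(k1, k2):.2f}")
print(f"N (peak 1) = {plate_number(t1, w1):.0f}")
print(f"Rs = {resolution(t1, t2, w1, w2):.2f}")
```

With these assumed numbers the pair is well separated (alpha = 1.25, Rs > 1.5), consistent with the rule of thumb in the text that alpha values modestly above 1.0 already indicate a workable separation.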

Operation
The sample mixture to be separated and analyzed is introduced, in a
discrete small volume (typically microliters), into the stream of mobile
phase percolating through the column. The components of the sample
move through the column at different velocities, which are a function of
specific physical interactions with the adsorbent (also called stationary
phase). The velocity of each component depends on its chemical nature,
on the nature of the stationary phase (column) and on the composition of
the mobile phase. The time at which a specific analyte elutes (emerges
from the column) is called its retention time. The retention time measured
under particular conditions is an identifying characteristic of a given
analyte.
Many different types of columns are available, filled with adsorbents
varying in particle size, and in the nature of their surface ("surface
chemistry"). The use of smaller particle size packing materials requires the
use of higher operational pressure ("backpressure") and typically improves
chromatographic resolution (i.e., the degree of separation between
consecutive analytes emerging from the column). Sorbent particles may be
hydrophobic or polar in nature.
Common mobile phases used include any miscible combination
of water with various organic solvents (the most common
are acetonitrile and methanol). Some HPLC techniques use water-free
mobile phases (see Normal-phase chromatography below). The aqueous
component of the mobile phase may contain acids (such as formic,
phosphoric or trifluoroacetic acid) or salts to assist in the separation of the
sample components. The composition of the mobile phase may be kept
constant ("isocratic elution mode") or varied ("gradient elution mode")
during the chromatographic analysis. Isocratic elution is typically effective
in the separation of sample components that are very different in their
affinity for the stationary phase. In gradient elution the composition of the
mobile phase is varied typically from low to high eluting strength. The
eluting strength of the mobile phase is reflected by analyte retention times
with high eluting strength producing fast elution (i.e., short retention times). A
typical gradient profile in reversed phase chromatography might start at 5%
acetonitrile (in water or aqueous buffer) and progress linearly to 95%
acetonitrile over 5–25 minutes. Periods of constant mobile phase
composition may be part of any gradient profile. For example, the mobile
phase composition may be kept constant at 5% acetonitrile for 1–3 min,
followed by a linear change up to 95% acetonitrile.
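The example gradient profile above (a constant 5% acetonitrile hold followed by a linear ramp to 95%) can be written as a simple piecewise function. A Python sketch, where the 2-minute hold and 20-minute ramp are assumed values within the 5–25 minute range the text mentions:

```python
def acetonitrile_fraction(t, hold=2.0, ramp_end=22.0, start=5.0, end=95.0):
    """%% acetonitrile at time t (min): hold at `start`, then ramp linearly to `end`."""
    if t <= hold:
        return start            # initial isocratic hold
    if t >= ramp_end:
        return end              # gradient finished; stay at final composition
    # linear interpolation between hold and ramp_end
    return start + (end - start) * (t - hold) / (ramp_end - hold)

for t in (0, 2, 12, 22, 25):
    print(f"t={t:>2} min -> {acetonitrile_fraction(t):.0f}% acetonitrile")
```

The flat segments correspond to isocratic periods and the sloped segment is the gradient proper; a real method would program this table directly into the pump controller.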
A rotary fraction collector collecting HPLC output. The system is being used
to isolate a fraction containing Complex I from E. coli plasma membranes.
About 50 litres of bacteria were needed to isolate this amount.[2]
The chosen composition of the mobile phase (also called eluent) depends
on the intensity of interactions between various sample components
("analytes") and stationary phase (e.g., hydrophobic interactions in
reversed-phase HPLC). Depending on their affinity for the stationary and
mobile phases analytes partition between the two during the separation
process taking place in the column. This partitioning process is similar to
that which occurs during a liquid–liquid extraction but is continuous, not
step-wise. In this example, using a water/acetonitrile gradient, more
hydrophobic components will elute (come off the column) late, once the
mobile phase gets more concentrated in acetonitrile (i.e., in a mobile phase
of higher eluting strength).
The choice of mobile phase components, additives (such as salts or acids)
and gradient conditions depends on the nature of the column and sample
components. Often a series of trial runs is performed with the sample in
order to find the HPLC method which gives adequate separation.

Scanning Electron Microscope (SEM)

Applications

Surface structures

Samples for SEM typically need to be dry and conductive. Therefore, biological
specimens are fixed, dehydrated, and coated with a thin layer of metal. Small
specimens are dehydrated chemically, via an ascending concentration series of
alcohol and hexamethyldisilazane (HMDS), but larger specimens can be dried
using a critical point drying (CPD) instrument. In either case, water within the
specimen is substituted with a solvent of lower surface tension that helps
keep the specimen structure intact. Our new SEM (Zeiss Sigma HD|VP) has a
maximum resolution of about 1 nm.
However, non-conductive specimens can also be imaged with appropriate
instrumentation and detection (so-called variable pressure mode). Utilizing the
environmental mode in our old SEM (Philips XL30 ESEM-TMP), even moist
samples can be analyzed.
Surface 3D

In addition to conventional secondary electron imaging, a 3D reconstruction of
the sample surface can be computed from data acquired with a four-quadrant
backscatter electron detector. At best, surface roughness and volumetric
analyses are possible.

Elemental analysis

As with TEM, elemental composition can be analyzed at the same time as
SEM imaging. However, SEM is especially useful for analyzing elemental
distribution (i.e. elemental mapping) in the sample. Keep in mind that the
elemental information is typically collected from a layer about 1 micron deep.

Scanning-transmission mode

TEM grids can be inserted and analyzed with SEM, too! If low magnification is
sufficient and you are more familiar with SEM than with TEM, you may use SEM
for transmission imaging.

Correlative microscopy (CLEM)

Together with Dr. Kirsi Rilla (Institute of Biomedicine), we have developed
protocols to study live cells first with confocal microscopy and later the very
same cells with SEM.

HOW SEMS WORK


A Scanning Electron Microscope (SEM) uses a focused beam of electrons to
render high-resolution, three-dimensional images. These images provide
information on:

 topography
 morphology
 composition

A schematic representation of an SEM is shown in Figure 1. Electrons are
generated at the top of the column by the electron source. They are then
accelerated down the column, which is under vacuum; this helps prevent any
atoms and molecules present in the column from interacting with the electron
beam and ensures good-quality imaging.
Electromagnetic lenses are used to control the path of the electrons. The condenser
defines the size of the electron beam (which defines the resolution), while the
objective lens’ main role is the focusing of the beam onto the sample. Scanning
coils are used to raster the beam onto the sample. In many cases, apertures are
combined with the lenses in order to control the size of the beam.
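The reason electron lenses can focus such a fine probe at all is the very short de Broglie wavelength of accelerated electrons. As an illustrative side calculation (standard physical constants; the relativistic correction is the usual textbook form, and the listed voltages are typical SEM values assumed for the example), this Python sketch computes the electron wavelength at several accelerating voltages:

```python
import math

h = 6.62607015e-34   # Planck constant, J s
m0 = 9.1093837e-31   # electron rest mass, kg
e = 1.602176634e-19  # elementary charge, C
c = 2.99792458e8     # speed of light, m/s

def electron_wavelength_pm(voltage):
    """Relativistically corrected de Broglie wavelength (in pm) at accelerating voltage V:
    lambda = h / sqrt(2*m0*e*V * (1 + e*V / (2*m0*c^2)))."""
    ev = e * voltage
    lam = h / math.sqrt(2 * m0 * ev * (1 + ev / (2 * m0 * c * c)))
    return lam * 1e12

for kv in (1, 5, 15, 30):
    print(f"{kv:>2} kV -> {electron_wavelength_pm(kv * 1e3):.1f} pm")
```

At 30 kV the wavelength is roughly 7 pm, orders of magnitude shorter than visible light, so practical SEM resolution is limited by lens aberrations and probe size rather than by wavelength.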

Different types of electrons are emitted from samples upon interacting with the
electron beam. A BackScattered Electron (BSE) detector is placed above the
sample to help detect backscattered electrons. Images show contrast information
between areas with different chemical compositions as heavier elements (high
atomic number) will appear brighter. A Secondary Electron (SE) detector is placed
at the side of the electron chamber, at an angle, in order to increase the efficiency
of detecting secondary electrons which can provide more detailed surface
information.
Scanning electron microscope (SEM) Uses

The signals used by a scanning electron microscope to produce an image


result from interactions of the electron beam with atoms at various depths
within the sample. Various types of signals are produced
including secondary electrons (SE), reflected or back-scattered
electrons (BSE), characteristic X-rays and light (cathodoluminescence)
(CL), absorbed current (specimen current), and transmitted electrons.
Secondary electron detectors are standard equipment in all SEMs, but it is
rare for a single machine to have detectors for all other possible signals.
In secondary electron imaging (SEI), the secondary electrons are emitted
from very close to the specimen surface. Consequently, SEI can produce
very high-resolution images of a sample surface, revealing details less than
1 nm in size. Back-scattered electrons (BSE) are beam electrons that are
reflected from the sample by elastic scattering. They emerge from deeper
locations within the specimen and, consequently, the resolution of BSE
images is lower than that of SE images. However, BSE are often used in analytical
SEM, along with the spectra made from the characteristic X-rays, because
the intensity of the BSE signal is strongly related to the atomic number (Z)
of the specimen. BSE images can provide information about the
distribution, but not the identity, of different elements in the sample. In
samples predominantly composed of light elements, such as biological
specimens, BSE imaging can image colloidal gold immuno-labels of 5 or
10 nm diameter, which would otherwise be difficult or impossible to detect
in secondary electron images.[13] Characteristic X-rays are emitted when
the electron beam removes an inner shell electron from the sample,
causing a higher-energy electron to fill the shell and release energy. The
energy or wavelength of these characteristic X-rays can be measured
by Energy-dispersive X-ray spectroscopy or Wavelength-dispersive X-ray
spectroscopy and used to identify and measure the abundance of elements
in the sample and map their distribution.
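The link between characteristic X-ray energy and atomic number that these spectroscopies exploit is captured approximately by Moseley's law. The sketch below uses the common Kα approximation E ≈ 10.2 eV × (Z − 1)²; this is an illustrative rule of thumb, accurate to within a few percent for mid-weight elements, not the exact tabulated line energies used by real EDS software:

```python
def k_alpha_kev(z):
    """Approximate K-alpha X-ray energy (keV) via Moseley's law: E ~ 10.2 eV * (Z - 1)^2."""
    return 10.2e-3 * (z - 1) ** 2

# Approximate K-alpha lines for a few common elements:
for name, z in (("Al", 13), ("Fe", 26), ("Cu", 29)):
    print(f"{name} (Z={z}): ~{k_alpha_kev(z):.2f} keV")
```

Because each element's characteristic lines sit at distinct energies (about 1.5 keV for Al versus about 8 keV for Cu), measuring the X-ray energy lets the spectrometer identify which elements are present.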
Due to the very narrow electron beam, SEM micrographs have a
large depth of field yielding a characteristic three-dimensional appearance
useful for understanding the surface structure of a sample.[14] This is
exemplified by the micrograph of pollen shown above. A wide range of
magnifications is possible, from about 10 times (about equivalent to that of
a powerful hand-lens) to more than 500,000 times, about 250 times the
magnification limit of the best light microscopes.
Field emission scanning electron microscope

Applications of FESEM

 The GeminiSEM imaging facilities are the ideal choice for maximum
sample flexibility, high-performance, high-resolution imaging, and
excellent compositional materials analysis. This advanced facility offers a
wide range of options, application-specific modules, and workflows, and
serves various applications for top-surface imaging and elemental analysis
of nanopowders, nanofilms, and nanofibers. These cover fields such as
mineralogy, ceramics, polymers, metallurgy, electronic devices, chemistry,
physics, and the life sciences. In nanoscience research, the GeminiSEM 500
Nano-twin lens can be used to image beam-sensitive materials and detailed
nanoscale structures at low beam energy. Its efficient detection allows
operating at low currents for minimum beam damage, and excellent
materials contrast can be obtained. This equipment has been used
successfully to characterize carbon nanostructures, engineered and
self-organized nanosystems, and nanocomposite materials [6]. In
metallurgy studies, the Gemini complete detection system can be used
to characterize inclusions at ultra-high resolution and discriminate
between different phases with unparalleled contrast, whereas in
electronic devices and semiconductors, the GeminiSEM 500 enables rapid,
reliable, and damage-free characterization of nanoscale defects and
sensitive resist structures at low beam energies. For life sciences
applications, the GeminiSEM 500 can provide images of subcellular
structure and tissue mapping. In polymeric materials, images of nanofibers
such as kenaf and high-density polyethylene can be obtained at high
resolution. In MTEC, BTI, we have tried to use FESEM to characterize the
minerals monazite and xenotime to support the thorium flagship research.
We used EDS elemental mapping to obtain the distribution of the elements
present. Furthermore, we used WDS mapping to obtain the distribution of
low-concentration elements in the materials, and quantitative analysis to
accurately measure the content of the elements that cannot be detected
by EDS.
Field emission scanning electron microscope Theory

The first true scanning electron microscope (SEM) was described and developed in 1942 by
Zworykin, who showed that secondary electrons (SE) provided topographic contrast by
biasing the collector positively relative to the specimen. He reached a resolution of 50 nm
when using an electron multiplier tube as a pre-amplifier of the SE emission current [1].
Since then, many improvements were made until the first commercial SEM, called the
'Stereoscan', was produced in 1965 by Cambridge Scientific Instruments (Mark I) [2]. This
instrument showed great SE detection using the Everhart-Thornley detector
(ETD), devised by Everhart and Thornley in 1960. The ETD collects
electrons with a positively biased grid and comprises a scintillator to
convert the electrons to light and a light-pipe to transfer the light directly
to a photomultiplier tube (PMT) [1]. The SEMs that we are using today are
not very different from this one. The early SEM used a heated tungsten
hairpin or filament cathode as the electron source, which is known as a
thermionic emitter (Fig. 1a). The development of lanthanum hexaboride
(LaB6) cathodes in 1975, which replaced tungsten, became a major
improvement in instrument performance [3] and can still be found on
many instruments today. A thermionic emitter emits a high current with a
beam size of 4-8 nm. Although it is inexpensive and reliable, the beams
produced have low brightness, and evaporation of the filament causes
so-called thermal drift, which limits the optical performance, especially
at high resolution. The classic SEM also accustomed users to operating
at high beam voltage, i.e. 15-30 kV, whether necessary or not. This has
led to the assumption that SEM is incapable of producing high-resolution
images for many heat-intolerant samples such as biological and polymer
materials.

Figure 1: Field emitter gun; the electron source in a field emission
scanning electron microscope.

The only electron source designed for high-resolution imaging and
suitable for various kinds of materials is field emission, which uses a field
emitter gun (FEG) to emit electrons. An SEM that uses a FEG as the
emitter is called a field emission scanning electron microscope (FESEM),
whereby the emitter type is used as part of its name to distinguish it from
the classic SEM. The FEG is made of a tungsten wire with a tip diameter of
about 100 nm (Fig. 1b), preferably of a single-crystal type, oriented such
that the 310 plane is perpendicular to the electron optical axis. The gun
tip is placed near an anode held at +2-6 kV; the sharpness of the point
produces fields of ~10^10 V/m near its surface (Fig. 1c). This causes
electrons to tunnel through the barrier into the vacuum as a much more
focused beam (~2 nm). The current emitted seldom reaches more than
5-10 μA, but its brightness is far higher than that of a thermionic
emitter [4]. The first reliable FESEM was developed in 1968 by Prof. Crewe
at Argonne National Laboratory [5]. FESEM is based on a technology for
high-resolution imaging and different contrasting methods, aiming for a
comprehensive characterization of specimens. FESEM can be used in a wide
range of applications, including imaging surface-sensitive and
non-conductive samples without the need for pre-treatment. FESEM also
comes with various attachments for elemental analysis.
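The ~10^10 V/m field quoted for the FEG tip can be sanity-checked with the standard sharp-tip estimate E ≈ V/(k·r), where r is the tip radius and k ≈ 5 is a geometric field-reduction factor; the value of k and the example voltage are assumed textbook values, not figures from the text. A Python sketch:

```python
def tip_field_v_per_m(voltage, tip_radius_m, k=5.0):
    """Field at a sharp emitter tip, E ~ V / (k * r); k ~ 5 is an assumed geometric factor."""
    return voltage / (k * tip_radius_m)

# A 100 nm wire diameter gives roughly a 50 nm tip radius; anode at +4 kV:
E = tip_field_v_per_m(4e3, 50e-9)
print(f"E ~ {E:.1e} V/m")
```

The result lands on the order of 10^10 V/m, consistent with the field strength the text cites as sufficient for electrons to tunnel into the vacuum.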

Field emission scanning electron microscope Uses

These technical specifications describe FESEM with model GeminiSEM 500


developed using Gemini technology by Carl Zeiss (Fig. 2). This SEM draws on more than 20 years of experience in imaging technology. The GeminiSEM 500 comprises a high-resolution field emission microscope, an airlock chamber for specimen changing, a plasma cleaner for sample and chamber cleaning, and an in situ cleaning apparatus. The model offers three main advantages: efficient detection, excellent resolution, and ease of use.

Figure 2: The GeminiSEM 500 by Carl Zeiss

Electron optical system

One of the main components of a FESEM is the electron beam column, which houses the electron optics. The column accommodates the FEG and the lenses, and is designed to produce the smallest possible probe size for high-resolution imaging. The novel optical design of the GeminiSEM rests on three innovations: the FEG mode, novel optics, and beam booster technology (Fig. 3) [6]. In Gemini technology, a special gun mode was developed to reduce the energy spread of the primary beam and thereby the effect of chromatic aberration. Most FEGs have an energy spread as low as 0.35 eV, compared with 1.5 eV for a thermionic emitter [7]. The probe current ranges from 3 pA to 100 nA.
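The benefit of the smaller energy spread can be sketched with the standard chromatic-disc estimate d_c ≈ Cc · α · (ΔE/E). The Cc value, aperture semi-angle, and beam energy below are illustrative assumptions, not GeminiSEM specifications; only the 0.35 eV and 1.5 eV spreads come from the text:

```python
# Chromatic aberration disc diameter: d_c ~ Cc * alpha * (dE / E).
# Illustrative (assumed) optics: Cc = 1 mm, alpha = 10 mrad, E = 1 keV.
Cc = 1e-3       # chromatic aberration coefficient, metres (assumed)
alpha = 10e-3   # beam convergence semi-angle, radians (assumed)
E = 1000.0      # beam energy, eV (assumed)

for name, dE in [("FEG", 0.35), ("thermionic", 1.5)]:
    d_c = Cc * alpha * (dE / E)     # blur disc diameter, metres
    print(f"{name}: chromatic disc ~ {d_c * 1e9:.1f} nm")
```

Whatever the exact optics, the blur scales linearly with ΔE, so the FEG's roughly four-times-smaller energy spread directly shrinks the chromatic disc by the same factor.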
The probe size is another important criterion for high-resolution imaging. Electron optics are used to demagnify the electron source into the smallest possible probe. The demagnification is achieved with a series of probe-forming lenses comprising the condenser and objective lenses [7].

Figure 3: The electron optic column design for the GeminiSEM 500 [6]

Aberrations are lens imperfections, due to spherical or chromatic effects, which limit the ability to focus the electron beam on the surface and thus blur the image. In spherical aberration, rays travelling far from the optical axis are focused more strongly than those close to the axis, whereas in chromatic aberration electrons with slightly different wavelengths are focused more or less strongly. To reduce spherical aberration, an objective lens aperture is used to limit the angle of the outer rays through the lens, while a beam with a narrow energy distribution is used to limit chromatic aberration. The Gemini lens design, the Nano Twin Gemini lens, is a compound magnetic-electrostatic objective lens (Fig. 3) [6] that has been further optimized in its geometry and in the distribution of its electrostatic and magnetic fields. These lenses can provide a resolution of 1.2 nm at 500 V, 1.1 nm at 1 kV, and 0.6 nm at 15 kV. Moreover, the Gemini beam booster technology guarantees imaging with small probe sizes and high signal-to-noise ratios down to ultra-low accelerating voltages (Fig. 3) [6]; small-probe imaging is possible even below 1 kV. The chromatic and spherical aberrations (Cc and Cs) therefore decrease significantly with decreasing beam energy [8]. Furthermore, sensitivity to external stray fields is minimized by keeping the beam at high voltage throughout the column until its final deceleration [6].

Low voltage imaging

High resolution imaging at low
voltage is needed to avoid beam damage and to balance the secondary electron (SE) yield against the beam current for charge neutrality. Low voltage imaging also improves image contrast; for example, reducing the operating voltage from 10-20 kV to perhaps 1.5 kV can solve the problem of low topographic contrast on a bulk specimen. Because the electron range is proportional to E^(5/3), the approximately spherical interaction volume scales as E^5, so dropping the beam voltage by this amount shrinks it by about 10^6 ((20/1.5)^5 ≈ 4 × 10^5) [4]. Previous studies showed that, on reducing the beam energy from 20 kV to 1.5 kV, the smallest features visible in the final image were the size of the probe [9]. Nanoscale details can thus be resolved with high-contrast images at low beam voltages, and the advantage of the Gemini system is that it achieves high resolution at these low voltages. The acceleration voltage of the GeminiSEM 500 ranges from 0.02 to 30 kV. By combining a low probe current with the smallest possible probe size at low acceleration voltage, the GeminiSEM 500 provides magnifications from 20× to 2,000,000×.

Detection system
Various signals are generated when the incident electrons strike the specimen: mainly low-energy secondary electrons (SE) (energies < 50 eV), higher-energy backscattered electrons (BSE) (energies > 50 eV), and characteristic X-rays (Fig. 4). These signals are collected by detectors to form an image or to analyze the sample's surface, and their correct detection gives detailed information about the sample. For high-resolution images, sufficient signal from the surface is vital. The GeminiSEM system comes with significantly improved detection efficiency: its detection concept collects SE and BSE in parallel. The so-called "in-lens" detectors are arranged on the optical axis, which reduces the need for realignment and thus minimizes time-to-image. The higher signal yield shortens acquisition times, which is especially useful for the low-current imaging used to avoid sample damage. The detectors offered are the standard in-lens SE detector, a high-efficiency Everhart-Thornley SE detector (ETD), and an angular-selective BSE detector.

Figure 4: Signals produced when electrons bombard the specimen's surface during SEM imaging

The in-lens SE detector detects SE signals directly from the sample's surface (Fig. 5a). It is an annular detector mounted on the electron beam axis in the objective lens for ultra-high SE detection: SEs are attracted toward the detector by the electrical field in the column and deflected onto the detector plane by the objective lens. The high-efficiency Everhart-Thornley SE detector (ETD) collects electrons over a wide range of angles relative to the primary beam, with high quantum efficiency and without adding substantial noise. The high-efficiency angular BSE detector detects signals from high-angle backscattered electrons (HABE) (Fig. 5b). BSE are electrons scattered backward from the specimen with higher energy than SE (> 50 eV); they are deflected to a different angle from SE (about 15° relative to the primary beam) and undergo single or multiple scattering depending on the deflection angle. In the Gemini design, BSE are detected by the in-lens EsB (energy selective backscatter) detector, mounted on the optical axis in the column. This detector has an energy filtering grid that can be set from 0 to 1500 eV, enabling the separation of SE and BSE. Energy filtering is important for obtaining pure HABE, which is useful for topographic contrast and compositional imaging (Fig. 5b); the EsB detector works well even at low voltages and shows excellent sensitivity. The system also includes a detector for low-angle BSE (LABE), which have lower energy. LABE do not enter the column but land on the objective lens, so they are detected by a dedicated low-angle BSE detector, the AsB4 (angular-selective BSE) detector. LABE signals give information about the compositional and crystallographic contrast of materials, which can be used in 3D surface modelling.

Figure 5: The arrangement of detectors in the GeminiSEM 500 column; (a) SE detector and (b) BSE detector

All detectors are usually optically coupled to a photomultiplier and collector, allowing faster scan speeds at the same signal-to-noise ratio as a standard ETD. The Gemini objective lens, with its novel design, not only acts as an imaging lens but also enhances detection of the SE and BSE emitted by the sample; the electron trajectories along the detection path are further improved by this design. As a result, the in-lens SE signal is up to 20 times higher than in classic SEM designs under low voltage imaging conditions, enabling imaging at very low voltages and fast scan speeds for high-speed sample investigation.

Elemental analysis

Besides surface imaging, FESEM can be used
for compositional analysis to determine the elements present in a sample. For this, two different types of spectrometer are attached to the FESEM: an energy dispersive spectrometer (EDS) and a wavelength dispersive spectrometer (WDS) (Fig. 6). Elements are identified by detecting the characteristic X-rays emitted from the specimen when it is bombarded with electrons. Each element produces characteristic X-rays with a specific energy and wavelength. EDS identifies the X-rays by their energy, while WDS separates them by their wavelength. The central component of an EDS system is a semiconductor solid-state detector, whereas the main components of a WDS system are analyzing crystals and a detector; these different components give EDS and WDS distinct operating principles.

Figure 6: The arrangement of the elemental analysis attachments, EDS and WDS, on the GeminiSEM 500

In EDS, each X-ray photon that hits the detector produces a very small current by knocking electrons out of the semiconductor. Ejecting each electron from a silicon shell consumes a specific energy, so by measuring the amount of current produced by each X-ray photon, the original energy of the X-ray can be calculated and the element identified. In WDS, on the other hand, characteristic X-rays that hit the analyzing crystal are diffracted into the detector. Whether an X-ray photon is diffracted depends on its wavelength, the orientation of the crystal, and the crystal's lattice spacing, so only X-rays of a given wavelength enter the detector at any one time. A WDS spectrometer typically has between two and five analyzing crystals, each with a different lattice spacing, because each type of crystal can diffract only a given range of wavelengths; to measure X-rays of another wavelength, the crystal and detector are moved to a new position.

The most significant difference between WDS and EDS systems is their energy resolution. The Mn Kα X-ray line on an EDS system is typically 135-150 eV wide, whereas on a WDS system the same line is only about 10 eV wide. WDS thus has roughly 10 times better energy resolution than EDS, so the overlap between peaks of similar energies is much smaller. However, since a given WD spectrometer can measure only one X-ray wavelength at a time, it requires a longer analysis time than EDS. Another disadvantage of EDS is its beryllium-window detector, which gives a detection limit in the range of 1000-3000 ppm and cannot determine the lightest elements (below the atomic number of Na). WDS, with a detection limit in the range of 30-300 ppm, performs much better for light-element analysis. Both EDS and WDS spectra are presented as histograms of the number of X-rays measured at each energy. For normal surface imaging that requires quick phase identification of major elements, EDS is typically used.
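The practical effect of the resolution difference can be sketched numerically: with roughly Gaussian peaks, two X-ray lines closer together than the spectrometer's FWHM merge into one peak. The 60 eV separation below is an illustrative assumption; the ~140 eV and ~10 eV widths are the typical values quoted above:

```python
# Can two X-ray lines 60 eV apart be distinguished?
# Crude criterion: resolved if the separation exceeds the detector FWHM.
separation = 60.0            # eV between two hypothetical lines (assumed)

for name, fwhm in [("EDS", 140.0), ("WDS", 10.0)]:
    resolved = separation > fwhm
    print(f"{name}: FWHM {fwhm:.0f} eV -> "
          f"{'resolved' if resolved else 'overlapping'}")
```

On the EDS side the two lines fall inside one 140 eV-wide peak, while the 10 eV WDS resolution separates them cleanly, which is exactly the peak-overlap advantage described above.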
TEM Applications
The main application of a transmission electron microscope is to provide
high magnification images of the internal structure of a sample. Being able
to obtain an internal image of a sample opens new possibilities for what
sort of information can be gathered from it.
A TEM operator can investigate the crystalline structure of an object, see
the stress or internal fractures of a sample, or even view contamination
within a sample through the use of diffraction patterns, to name just a few
kinds of studies.
Transmission electron microscopy

Transmission Electron Microscope Uses
Transmission electron microscopy (TEM, an abbreviation which can
also stand for the instrument, a transmission electron microscope) is
a microscopy technique in which a beam of electrons is transmitted through
a specimen to form an image. The specimen is most often an ultrathin
section less than 100 nm thick or a suspension on a grid. An image is
formed from the interaction of the electrons with the sample as the beam is
transmitted through the specimen. The image is then magnified
and focused onto an imaging device, such as a fluorescent screen, a layer
of photographic film, or a sensor such as a scintillator attached to a charge-
coupled device.
Transmission electron microscopes are capable of imaging at a
significantly higher resolution than light microscopes, owing to the
smaller de Broglie wavelength of electrons. This enables the instrument to
capture fine detail—even as small as a single column of atoms, which is
thousands of times smaller than a resolvable object seen in a light
microscope. Transmission electron microscopy is a major analytical
method in the physical, chemical and biological sciences. TEMs find
application in cancer research, virology, and materials science as well
as pollution, nanotechnology and semiconductor research.
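The resolution advantage from the electrons' small de Broglie wavelength can be made concrete with the standard relativistic formula λ = h / √(2 m₀ e V (1 + eV / (2 m₀ c²))). The 100 kV accelerating voltage below is simply a common illustrative choice, not a property of any particular instrument:

```python
import math

# Relativistic de Broglie wavelength of an electron accelerated
# through a potential V (illustrative value: 100 kV).
h = 6.626e-34    # Planck constant, J*s
m0 = 9.109e-31   # electron rest mass, kg
e = 1.602e-19    # elementary charge, C
c = 2.998e8      # speed of light, m/s

V = 100e3        # accelerating voltage, volts (illustrative)
p = math.sqrt(2 * m0 * e * V * (1 + e * V / (2 * m0 * c**2)))
lam = h / p
print(f"lambda at {V/1e3:.0f} kV: {lam*1e12:.2f} pm")  # ~3.7 pm
```

At about 3.7 pm this wavelength is some 100,000 times shorter than that of visible light and well below atomic spacings, which is why a TEM can resolve individual atomic columns.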
TEM instruments boast an enormous array of operating modes including
conventional imaging, scanning TEM imaging (STEM), diffraction,
spectroscopy, and combinations of these. Even within conventional
imaging, there are many fundamentally different ways that contrast is
produced, called "image contrast mechanisms." Contrast can arise from
position-to-position differences in the thickness or density ("mass-thickness
contrast"), atomic number ("Z contrast," referring to the common
abbreviation Z for atomic number), crystal structure or orientation
("crystallographic contrast" or "diffraction contrast"), the slight quantum-
mechanical phase shifts that individual atoms produce in electrons that
pass through them ("phase contrast"), the energy lost by electrons on
passing through the sample ("spectrum imaging") and more. Each
mechanism tells the user a different kind of information, depending not only
on the contrast mechanism but on how the microscope is used—the
settings of lenses, apertures, and detectors. What this means is that a TEM
is capable of returning an extraordinary variety of nanometer- and atomic-
resolution information, in ideal cases revealing not only where all the atoms
are but what kinds of atoms they are and how they are bonded to each
other. For this reason TEM is regarded as an essential tool for nanoscience
in both biological and materials fields.

Components
The electron source of the TEM is at the top, where the lensing system (4,7
and 8) focuses the beam on the specimen and then projects it onto the
viewing screen (10). The beam control is on the right (13 and 14)
A TEM is composed of several components, which include a vacuum
system in which the electrons travel, an electron emission source for
generation of the electron stream, a series of electromagnetic lenses, as
well as electrostatic plates. The latter two allow the operator to guide and
manipulate the beam as required. Also required is a device to allow the
insertion into, motion within, and removal of specimens from the beam
path. Imaging devices are subsequently used to create an image from the
electrons that exit the system.
Total Suspended Solids (TSS)

Applications
Total suspended solids (TSS) is the dry weight of the undissolved, suspended particles in a water sample that can be trapped by a filter, measured using a filtration apparatus. It is
a water quality parameter used to assess the quality of a specimen of
any type of water or water body, ocean water for example,
or wastewater after treatment in a wastewater treatment plant. It is
listed as a conventional pollutant in the U.S. Clean Water Act.[1] Total
dissolved solids is another parameter acquired through a separate
analysis which is also used to determine water quality based on the
total substances that are fully dissolved within the water, rather than
undissolved suspended particles.
Theory of Total Suspended Solids (TSS)
TSS of a water or wastewater sample is determined by pouring a carefully
measured volume of water (typically one litre; but less if the particulate
density is high, or as much as two or three litres for very clean water)
through a pre-weighed filter of a specified pore size, then weighing the filter
again after the drying process that removes all water on the filter. Filters for
TSS measurements are typically composed of glass fibres.[2] The gain in
weight is a dry weight measure of the particulates present in the water
sample expressed in units derived or calculated from the volume of water
filtered (typically milligrams per litre or mg/L).
If the water contains an appreciable amount of dissolved substances (as
certainly would be the case when measuring TSS in seawater), these will
add to the weight of the filter as it is dried. Therefore it is necessary to
"wash" the filter and sample with deionized water after filtering the sample
and before drying the filter. Failure to add this step is a fairly common
mistake made by inexperienced laboratory technicians working with sea
water samples, and will completely invalidate the results as the weight of
salts left on the filter during drying can easily exceed that of the suspended
particulate matter.
Although turbidity purports to measure approximately the same water
quality property as TSS, the latter is more useful because it provides an
actual weight of the particulate material present in the sample. In water
quality monitoring situations, a series of more labor-intensive TSS
measurements will be paired with relatively quick and easy turbidity
measurements to develop a site-specific correlation. Once satisfactorily
established, the correlation can be used to estimate TSS from more
frequently made turbidity measurements, saving time and effort. Because
turbidity readings are somewhat dependent on particle size, shape, and
color, this approach requires calculating a correlation equation for each
location. Further, situations or conditions that tend to suspend larger
particles through water motion (e.g., increase in a stream current or wave
action) can produce higher values of TSS not necessarily accompanied by
a corresponding increase in turbidity. This is because particles above a
certain size (essentially anything larger than silt) are not measured by a
bench turbidity meter (they settle out before the reading is taken), but
contribute substantially to the TSS value.
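The site-specific correlation described above is usually just a least-squares line fitted to paired turbidity and TSS measurements, which is then used to estimate TSS from turbidity alone. The (turbidity, TSS) pairs below are fabricated for illustration, not real monitoring data:

```python
# Fit TSS = a * turbidity + b by ordinary least squares.
# The (turbidity NTU, TSS mg/L) pairs are made-up illustrative data.
pairs = [(5, 12), (10, 22), (20, 41), (40, 80), (60, 118)]

n = len(pairs)
sx = sum(x for x, _ in pairs)
sy = sum(y for _, y in pairs)
sxx = sum(x * x for x, _ in pairs)
sxy = sum(x * y for x, y in pairs)

a = (n * sxy - sx * sy) / (n * sxx - sx * sx)   # slope
b = (sy - a * sx) / n                           # intercept

print(f"TSS ~ {a:.2f} * turbidity + {b:.2f}")
# Estimate TSS from a new, quick turbidity reading:
print(f"turbidity 30 NTU -> TSS ~ {a * 30 + b:.0f} mg/L")
```

Because the fit depends on local particle size, shape, and color, a separate equation must be calculated for each monitoring location, as noted above.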
Total suspended solids Uses

Although TSS appears to be a straightforward measure of particulate weight, obtained by separating particles from a water sample using a filter, it
suffers as a defined quantity from the fact that particles occur in nature in
essentially a continuum of sizes. At the lower end, TSS relies on a cut-off
established by properties of the filter being used. At the upper end, the cut-
off should be the exclusion of all particulates too large to be "suspended" in
water. However, this is not a fixed particle size but is dependent upon the
energetics of the situation at the time of sampling: moving water suspends
larger particles than does still water. Usually it is the case that the
additional suspended material caused by the movement of the water is of
interest.
These problems in no way invalidate the use of TSS; consistency in
method and technique can overcome short-comings in most cases. But
comparisons between studies may require a careful review of the
methodologies used to establish that the studies are in fact measuring the
same thing.
TSS in mg/L can be calculated as:

TSS (mg/L) = (dry weight of filter + residue − dry weight of filter alone, in grams) / (volume of sample in mL) × 1,000,000
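The formula can be applied directly; the filter masses and sample volume below are made-up illustrative numbers:

```python
# TSS (mg/L) = (filter + residue mass - filter mass) in grams
#              / sample volume in mL * 1,000,000
def tss_mg_per_l(filter_g: float, filter_plus_residue_g: float,
                 sample_ml: float) -> float:
    """Dry-weight TSS from pre- and post-filtration filter masses."""
    return (filter_plus_residue_g - filter_g) / sample_ml * 1_000_000

# Illustrative example: the filter gains 15.0 mg after filtering 250 mL.
print(round(tss_mg_per_l(1.2500, 1.2650, 250), 2))  # -> 60.0 (mg/L)
```

The factor of 1,000,000 converts grams per millilitre to milligrams per litre (×1000 for g → mg and ×1000 for mL → L).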